Euro Security Watch with Mathew J. Schwartz


AI Surveillance Tech Promises Safety, But at What Cost?

Security, Privacy, Data Protection and Liability Questions Remain Unanswered
Photo: Kevan (via Flickr/CC)

Surveillance technologies can be used to enhance our collective security. But without careful rules and regulations, they can also erode our liberties.


Hence there's rising cause for concern, as a new study finds that governments are rapidly adopting artificial intelligence technology that promises to deliver greater surveillance capabilities (see: Adoption of AI Surveillance Technology Surges).

The impetus for the new "Global Expansion of AI Surveillance" report from the Carnegie Endowment for International Peace is to promote a discussion of such technology, including where its use is warranted, the risks it poses and what laws and oversight mechanisms we need to protect personal privacy.

"As these technologies become more embedded in governance and politics, the window for change will narrow," says report author Steven Feldstein, an associate professor of public affairs at Boise State University who formerly served as a deputy assistant secretary in the Democracy, Human Rights and Labor Bureau at the U.S. Department of State.

His research found that at least 75 countries are already using AI technologies for surveillance purposes, including 64 using facial recognition systems, 56 using safe city platforms and 52 relying on predictive policing.

What is AI Surveillance Technology?

Feldstein says that many technologies have become robust and complementary enough to drive rapid advances in surveillance, including "the maturation of machine learning and the onset of deep learning; cloud computing and online data gathering; a new generation of advanced microchips and computer hardware; improved performance of complex algorithms; and market-driven incentives for new uses of AI technology."

While Chinese vendors - including Huawei and ZTE - are big players, so too are vendors from Japan, the U.S. and elsewhere. For example, many vendors - including Affectiva, Amazon, Google, IBM, Kairos, Microsoft, NEC, OpenCV and others - are already selling facial recognition capabilities, together with the ability to easily search big data repositories using them.

Source: Carnegie Endowment for International Peace
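
To illustrate how low the barrier to entry has become, here is a minimal sketch using the open source OpenCV library - one of the names above - to detect faces in a single frame. The input filename is hypothetical, and commercial systems rely on far more capable deep learning pipelines; this shows only the most basic building block.

import cv2  # open source computer vision library

# Load the frontal-face Haar cascade that ships with OpenCV.
cascade_path = cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
face_cascade = cv2.CascadeClassifier(cascade_path)
frame = cv2.imread("street_frame.jpg")          # hypothetical CCTV-style frame
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)  # the detector works on grayscale
# Each detection is an (x, y, width, height) rectangle.
faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
for (x, y, w, h) in faces:
    cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
print(f"Detected {len(faces)} face(s)")
cv2.imwrite("street_frame_annotated.jpg", frame)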

Not all of these approaches necessarily work, or work as advertised, and they continue to be refined. But with so many technology firms taking a "move fast and break things" approach, there's significant risk of people's rights getting trampled in the name of techno-progress (see: Amazon Rekognition Stokes Surveillance State Fears).

Governments of all stripes are testing these technologies. "China is exporting surveillance tech to liberal democracies as much as it is targeting authoritarian markets," Feldstein says. "Likewise, companies based in liberal democracies - for example, Germany, France, Israel, Japan, South Korea, the U.K., the United States - are actively selling sophisticated equipment to unsavory regimes."

Rigorous Debate Required

Given the technology's potential impact on society, Alan Woodward, a computer science professor at the University of Surrey, tells me that the time for robust public debate is now.

"I personally think there should be a much greater public discussion about the use of AI in a variety of fields: law enforcement, medicine, transportation and many others," he says. "Each has safety implications and that at the very least should cause scrutiny."

Beyond safety, other questions include security, privacy, data protection and liability. If something goes wrong with AI surveillance technology - for example, a biometric face-scanning system based on CCTV footage "matches" the wrong identity to a suspected robber - who should legally be held responsible?
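
The misidentification risk is easiest to see in how such systems typically decide a "match": a face is reduced to a numeric embedding and compared against a watchlist, and anything scoring above a similarity threshold is flagged. A toy sketch - the embeddings, names and threshold are all invented - shows how a loosely set threshold can turn a passer-by into a "hit":

import numpy as np

def cosine_similarity(a, b):
    # Similarity between two face embeddings; 1.0 means identical direction.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Hypothetical watchlist of identity -> embedding.
watchlist = {
    "suspect_A": np.array([0.9, 0.1, 0.3]),
    "suspect_B": np.array([0.2, 0.8, 0.5]),
}
# Embedding extracted from CCTV footage of an uninvolved passer-by.
probe = np.array([0.8, 0.3, 0.4])
THRESHOLD = 0.90  # a loose threshold: more "hits," but also more false accusations
for name, gallery_vec in watchlist.items():
    score = cosine_similarity(probe, gallery_vec)
    verdict = "MATCH" if score >= THRESHOLD else "no match"
    print(f"{name}: similarity={score:.3f} -> {verdict}")

Here the passer-by clears the threshold against suspect_A (a similarity of roughly 0.97) despite being a different person - precisely the false-positive scenario the liability question turns on.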

But Woodward says it's "no surprise" that governments are rushing to adopt AI - including facial recognition and predictive policing - given the potential impact on law enforcement. "I suspect it's been seen as a force multiplier in times of budget restraints, but it hasn't necessarily been thought through," he says.

For example, there's no mandatory retention or expiration period for biometric data in the U.K. Contrast that with Britain's Investigatory Powers Act 2016, which requires ISPs and mobile phone services to retain every subscriber's internet browsing, voice call, email, text, internet gaming and mobile phone usage records for 12 months. After 12 months, the expectation is that they will delete the data. Getting rid of it has an immediate upside: it can no longer be accidentally exposed or stolen.
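
Mechanically, enforcing such a retention window is trivial, which makes its absence for biometric data all the more striking. A hypothetical sketch of a 12-month purge - the record fields and in-memory list are invented, and a real system would apply the same rule in its datastore:

from datetime import datetime, timedelta, timezone

RETENTION = timedelta(days=365)  # roughly the 12-month window described above
records = [
    {"subject_id": "u1", "captured_at": datetime(2018, 1, 5, tzinfo=timezone.utc)},
    {"subject_id": "u2", "captured_at": datetime(2019, 6, 20, tzinfo=timezone.utc)},
]

def purge_expired(records, now=None):
    # Keep only records captured within the retention window; drop the rest.
    now = now or datetime.now(timezone.utc)
    return [r for r in records if now - r["captured_at"] <= RETENTION]

kept = purge_expired(records)
print(f"Kept {len(kept)} of {len(records)} records")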

Retention is a key concern for other reasons too, because it's unclear how data gathered today might be used against individuals in the future. "If data is kept for long enough, you may find it being used against you in ways you never imagined today: Regimes, laws, social norms all change significantly over time, and what you may have no problem with data being used for today may be very different in a few years' time," Woodward says.

Predictive Policing - Pros and Cons

But it's not all doom and gloom, if handled correctly. "Where AI is proving useful is predicting where police forces should deploy. It monitors various sources - e.g. social media - and can help head off trouble at the pass," Woodward says. "My concern here is that it's a short step from there to 'pre-crime.' We need to understand the limits of AI and always have a human in the loop."
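
At its simplest, this kind of deployment forecasting amounts to ranking locations by their recent incident history - which is also where the risk creeps in, because the model only ever reflects where incidents were recorded before. A toy sketch with invented data:

from collections import Counter

# (grid_cell, incident_type) reports from recent weeks - invented data.
incidents = [
    ("cell_12", "burglary"), ("cell_12", "assault"), ("cell_07", "theft"),
    ("cell_12", "theft"), ("cell_03", "burglary"), ("cell_07", "theft"),
]
counts = Counter(cell for cell, _ in incidents)
# The top-ranked cells become suggested patrol areas. A human should review the
# output: patrolling a cell more heavily also generates more recorded incidents
# there (a feedback loop), not necessarily more underlying crime.
for cell, n in counts.most_common(2):
    print(f"Deploy to {cell}: {n} recent incidents")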

Pre-crime is a concept coined by science fiction writer Philip K. Dick in his short story "The Minority Report," referring to a fictional police agency that arrests people before they can commit a crime.

How Might Surveillance AI Fail?

Cesar Jimenez and Fran Gomez of Devo demonstrate low-cost defenses against automated facial-recognition technologies at Black Hat Europe 2018 in London.

Another cause for concern is that criminals might game AI surveillance systems. "We do not fully understand how some of these models might be attacked," Woodward says.

Fingerprints or DNA collected at a crime scene, for example, can later be re-checked, and technicians can be cross-examined during a court case to demonstrate how they reached their conclusions. Not so with AI.

"If an AI system identifies you as being the perpetrator of a crime, how do you know how it came to that conclusion?" Woodward asks. "Worse still, we are beginning to understand how some AI models can be perturbed to cause misidentification: We've all seen the T-shirts that confuse AI systems so they can't even identify you as human. How do we know that there may not be some way of attacking a model to make it appear that you were somewhere you weren't?" (See: Visual Journal: Black Hat Europe 2018.)

"We need to have a much better understanding of AI models, and there has to be the ability for human, manual verification, before we rely upon anything they provide being used as evidence," he says.



About the Author

Mathew J. Schwartz

Executive Editor, DataBreachToday & Europe, ISMG

Schwartz is an award-winning journalist with two decades of experience in magazines, newspapers and electronic media. He has covered the information security and privacy sector throughout his career. Before joining Information Security Media Group in 2014, where he now serves as executive editor of DataBreachToday and for European news coverage, Schwartz was the information security beat reporter for InformationWeek and a frequent contributor to DarkReading, among other publications. He lives in Scotland.



