UK AI National Institute Urges 'Red Lines' For Generative AI

Alan Turing Institute Calls for 'Shift in Mindset' to Tackle National Security Risk
The Alan Turing Institute called for measures including red lines, traceability and transparency for generative AI systems. (Image: Shutterstock)

The U.K. national institute for artificial intelligence urged the government to establish red lines against the use of generative AI in scenarios in which the technology could take an irreversible action without direct human oversight.

The Alan Turing Institute in a report published late Friday night said generative AI tools are currently "too unreliable and error-prone to be trusted in the highest stakes contexts within national security."

It also warned against complacency in human supervision, saying that users' propensity to place too much trust in large language models could make them reluctant to challenge AI-generated output.

The institute, founded in 2015 with government funding, called for "a shift in mindset to account for all the unintentional or incidental ways in which generative AI can pose national security risks." It isn't the first organization to flag excessive autonomy as a danger of AI. The OWASP Foundation lists "excessive agency" as one of the top 10 concerns about large language model applications.

The Conservative U.K. government has sought to place the country at the forefront of responsible AI development, hosting a two-day international summit in November that drew U.S. Vice President Kamala Harris and OpenAI CEO Sam Altman. At the same time, a key U.K. political figure told Parliament on Wednesday that the government is in no rush to regulate AI. "We are not saying that we will never regulate AI, rather, but the point is: We don't want to rush and get it wrong and stymie innovation," said Secretary of State for Science, Innovation and Technology Michelle Donelan (see: UK in No Rush to Legislate AI, Technology Secretary Says).

The report singled out autonomous agents as a specific application of generative AI that warrants close oversight in a national security context. Autonomous agents build on LLMs by interacting with their environment and taking actions with little human intervention.
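In broad strokes, such an agent is a loop in which an LLM proposes the next step, software tools carry it out and the results are fed back to the model. The sketch below illustrates only that pattern; the function names, tools and goal are hypothetical placeholders, not the report's or any vendor's actual implementation.

```python
# Minimal sketch of an LLM-driven autonomous agent loop.
# llm_propose_action, run_tool and the example goal are hypothetical stand-ins.

from dataclasses import dataclass

@dataclass
class AgentStep:
    thought: str       # the model's stated reasoning
    action: str        # the action the model chose
    observation: str   # what the environment returned

def llm_propose_action(goal: str, history: list) -> tuple[str, str]:
    """Stand-in for a call to a large language model; returns (thought, action)."""
    if not history:
        return "Gather open-source material first", "search:" + goal
    return "Enough material gathered", "finish:draft a preliminary assessment"

def run_tool(action: str) -> str:
    """Stand-in for tool execution (search, database lookup, etc.)."""
    name, _, arg = action.partition(":")
    return f"[{name}] results for '{arg}'"

def run_agent(goal: str, max_steps: int = 5) -> list[AgentStep]:
    history: list[AgentStep] = []
    for _ in range(max_steps):
        thought, action = llm_propose_action(goal, history)
        if action.startswith("finish"):
            history.append(AgentStep(thought, action, "task complete"))
            break
        # The agent acts on its environment and keeps going with no human in the loop.
        history.append(AgentStep(thought, action, run_tool(action)))
    return history

if __name__ == "__main__":
    for step in run_agent("open-source reporting on a shipping network"):
        print(step)
```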

The technology has the potential to accelerate national security analysis, for example by rapidly processing vast amounts of open-source data, providing preliminary risk assessments and generating hypotheses for human analysts to pursue, the report said. But critics told the report's authors that the technology falls short of human-level reasoning and cannot reproduce the innate understanding of risk that humans use to avoid failure.

Among the mitigations the report suggested is recording the actions and decisions taken by autonomous agents: "The agent architecture must not obscure or undermine any potential aspects of explainability originating from the LLM." It also suggested attaching warnings to "every stage" of generative AI output and documenting what an agent-based system would do in a worst-case scenario.
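As a rough illustration of that mitigation, recording each step an agent takes could be as simple as appending a structured entry to an audit trail, with a warning attached to every record. The log format, file name and fields below are assumptions made for illustration, not something the report prescribes.

```python
# Minimal sketch of logging every agent action and decision for later review.
# The JSON Lines format, file name and fields are illustrative assumptions.

import json
import time
from pathlib import Path

AUDIT_LOG = Path("agent_audit_log.jsonl")  # append-only record of agent behavior

def record_decision(agent_id: str, thought: str, action: str, observation: str) -> None:
    """Append one agent step, preserving the LLM's stated reasoning for explainability."""
    entry = {
        "timestamp": time.time(),
        "agent_id": agent_id,
        "thought": thought,
        "action": action,
        "observation": observation,
        "warning": "AI-generated output; requires human review",  # warning at every stage
    }
    with AUDIT_LOG.open("a", encoding="utf-8") as fh:
        fh.write(json.dumps(entry) + "\n")
```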

The institute also suggested the government may implement stringent restrictions in areas in which "perfect trust" is required, such as nuclear command and control and possibly less-existential areas such as policing and criminal justice. "Safety brakes for AI systems that control the operation of critical infrastructure could be similar to the braking systems that engineers have long built into other technologies," the report says (see: US Lawmakers Warned That AI Needs a 'Safety Brake').

Some experts interviewed for the report said they were confident that operational staff in national security and defense are aware of AI's limitations, but others were skeptical about whether senior officials share their underlings' caution. "Individuals charged with achieving workforce savings and finding 'an edge' may see some of the risks and prerequisites around data governance and surrounding AI infrastructure as constraints."

Outside of clear-cut situations, the case for red lines diminishes, the report said. Limits should adjust with capabilities, provided that measures such as the ability to trace the influence of AI and a "multi-layered and socio-technical evaluation" of the technology are in place.

The report also flagged malicious use of generative AI as a safety concern while noting that the technology mostly augments existing societal risks such as disinformation, fraud and child sexual abuse material.

Still, bad actors are unconstrained by the need for accuracy and transparency that drives governmental use. Should a large language model underperform in its generation of deepfakes or in writing malware, "the cost of failure to the attacker remains low."

To tackle the issue of AI-generated content, the report recommends that the government support watermarking features resistant to tampering, such as adding a signal to AI-generated content at the compute stage through the underlying hardware. "This would require significant commitments from GPU manufacturers such as NVIDIA alongside international government coordination, however it would ensure that models are automatically watermarked." The challenges to doing so are "formidable," the report says.
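The hardware-level scheme the report describes sits below anything application code can show, but the underlying idea of binding a verifiable provenance signal to generated content can be sketched in software. The keyed-signature approach, key and field names below are illustrative assumptions, not the report's proposal.

```python
# Illustrative software-level provenance tag for AI-generated content.
# The report envisions hardware-level watermarking; this sketch only shows the
# general idea of a verifiable, tamper-resistant provenance signal.

import hashlib
import hmac

PROVENANCE_KEY = b"demo-key-held-by-the-model-operator"  # hypothetical key

def watermark(content: str, model_id: str) -> dict:
    """Attach a keyed signature so downstream consumers can verify AI provenance."""
    tag = hmac.new(PROVENANCE_KEY, f"{model_id}:{content}".encode(), hashlib.sha256).hexdigest()
    return {"content": content, "model_id": model_id, "provenance_tag": tag}

def verify(record: dict) -> bool:
    """Recompute the signature and compare it in constant time."""
    expected = hmac.new(PROVENANCE_KEY,
                        f"{record['model_id']}:{record['content']}".encode(),
                        hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, record["provenance_tag"])
```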

On the regulatory front, the researchers say updating existing regulation would be more practical than introducing AI-specific rules, given the slow pace of legislative efforts, which could take several years to finalize.


About the Author

Akshaya Asokan

Senior Correspondent, ISMG

Asokan is a U.K.-based senior correspondent for Information Security Media Group's global news desk. She previously worked with IDG and other publications, reporting on developments in technology, minority rights and education.



