UK Intelligence Agency Warns of Mounting AI Cyberthreat
British Lawmakers Call on Government to Boost Protections From AI Scams

Generative artificial intelligence-enabled ransomware and nation-state hacks in the United Kingdom will "almost certainly" surge after this year, the National Cyber Security Centre warned. British lawmakers, meanwhile, called on the government to roll out measures to prevent AI scams.
In a report assessing the cyber risk posed by artificial intelligence, the NCSC concluded that developments in generative AI and large language models will "certainly increase the volume" and "heighten the impact" of hacks, although more sophisticated malicious applications of the technology "are highly likely to be restricted to threat actors with access to quality training data, significant expertise (in both AI and cyber), and resources." Such advanced uses are unlikely to appear before 2025, the report says.
"Ransomware continues to be a national security threat," said James Babbage, director general for threats at the National Crime Agency. "These threats are likely to increase in the coming years due to advancements in AI."
Threat actors will use AI to enhance social engineering and evade security controls, and hackers may use the technology to exploit zero-days faster and to exfiltrate large swaths of data more quickly, the agencies said.
The technology will lower the barrier to entry into cybercrime, allowing even novice actors to craft offensive capabilities that can later be sold on underground forums, they predicted. A notable development in this area is the availability of offensive generative AI-as-a-service tools on those forums.
Threat researchers previously uncovered WormGPT and FraudGPT - AI chatbots designed to create malicious code or phishing emails - for sale on Telegram and other dark web forums for $200 to $1,700 (see: Criminals Are Flocking to a Malicious Generative AI Tool).
The warnings from the agencies come as British lawmakers fear that scammers could use AI voice cloning and deepfakes to target vulnerable citizens and commit crimes causing losses worth millions of pounds.
At a U.K. parliamentary hearing on Monday, lawmakers called on the government to roll out adequate measures to protect consumers from AI scams. Such measures could include labeling for AI applications such as chatbots.
"In the same way as antivirus software warns of computer users of malware risks, that could become a commonplace system that allows the public to be alerted to AI risks," said Conservative Member of Parliament Dean Russell.
Responding to those concerns, Saqib Bhatti, parliamentary undersecretary for science and technology, said the recently passed Online Safety Act will play a crucial role in identifying online fraud, as it imposes additional duties on social media companies to put mitigation measures in place to prevent the promotion of AI-enabled fraud.
The law also aims to tackle AI-generated deepfakes (see: UK's Ofcom Prepares to Enforce Online Safety Bill).
The government is working with industry to remove the vulnerabilities that fraudsters exploit, with intelligence agencies to shut down fraudulent infrastructure, and with law enforcement "to identify and bring the most harmful offenders to justice," Bhatti said.
In guidance on securing AI systems released in November, the NCSC and 22 global cyber agencies recommended auditing external APIs for flaws, preventing AI systems from loading untrusted models, and limiting the transfer of data to external sources to curb potential cyber misuse of AI (see: US, UK Cyber Agencies Spearhead Global AI Security Guidance).
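The model-provenance control in that guidance lends itself to a concrete illustration. The sketch below is a minimal, hypothetical Python example of refusing to load a model file whose digest is not on a pre-vetted allowlist; it is not code from the NCSC guidance, and the file name vetted_model.bin and the helper names are assumptions made for demonstration.

```python
import hashlib
from pathlib import Path


def sha256_of(path: Path) -> str:
    """Return the hex SHA-256 digest of a file, read in chunks."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()


def load_model_if_trusted(path: Path, allowlist: set[str]) -> bytes:
    """Refuse to read a model file whose digest is not on the allowlist."""
    fingerprint = sha256_of(path)
    if fingerprint not in allowlist:
        raise PermissionError(
            f"Untrusted model {path.name} (sha256={fingerprint}); refusing to load."
        )
    # Only raw bytes are returned here; deserializing them should itself
    # avoid formats that execute arbitrary code, such as Python pickle.
    return path.read_bytes()


if __name__ == "__main__":
    # Hypothetical demo: write a stand-in "model" file, vet it, then load it.
    # In practice the allowlist would be produced when a model is vetted and
    # distributed out of band, e.g. in signed configuration.
    model_path = Path("vetted_model.bin")
    model_path.write_bytes(b"\x00" * 64)  # placeholder weights
    allowlist = {sha256_of(model_path)}
    weights = load_model_if_trusted(model_path, allowlist)
    print(f"Loaded {len(weights)} bytes of vetted model data.")
```

Pinning exact digests rather than trusting file names or download sources means a swapped or tampered model fails closed, and the same pattern extends to datasets and plugins an AI system pulls from external sources.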