APT Hacks and AI-Altered Leaks Pose Biggest Election Threats
Microsoft and Social Media Firms Warn UK Lawmakers About Election Security Threats

Election security threats are real, and attacks will come from sophisticated nation-state threat actors who will hack victims, leak sensitive information and pair those leaks with AI-generated deepfakes in disinformation campaigns across Western nations, social media companies told the U.K. government.
Responding in writing to a U.K. parliamentary committee's inquiry into threats to election security, social media companies said artificial intelligence-enabled disinformation poses the most common risk to elections in the U.K., U.S., Europe and other parts of the world. Nearly 4 billion people are expected to cast votes in elections in 2024 (see: Erosion of Trust Most Concerning Threat to UK Elections).
Threat actors are likely to combine AI-generated synthetic media with legitimate content, such as adding an AI-generated voice track to a genuine video or attaching spoofed media logos to fabricated news stories, to run disinformation campaigns, Microsoft told the Joint Committee on the National Security Strategy.
"Sophisticated actors influencing and interfering in elections likely will employ a combination of targeted hacking operations with strategically timed leaks to drive media coverage to elevate their preferred candidates," Microsoft said.
The company said it recently dismantled Russian and Chinese nation-state attempts to use OpenAI's chatbot tools for malicious purposes (see: OpenAI and Microsoft Terminate State-Backed Hacker Accounts).
While AI can "significantly augment" the capabilities of "information operations actors," mainly in terms of increasing scale, Google and YouTube said they have seen only limited activity from threat actors deploying artificial intelligence.
Still, Google said it is working on watermarking, fingerprinting and signed-metadata techniques to detect malicious uses of AI.
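Google did not detail how its provenance tooling works. As a rough, hypothetical sketch of the signed-metadata idea, the Python below binds a publisher's manifest to a hash of the media file so that any alteration after signing fails verification. Real provenance schemes such as C2PA use asymmetric signatures and certificate chains; this stdlib-only illustration substitutes a shared-key HMAC for brevity, and every name in it is an assumption, not Google's actual API.

```python
# Hypothetical sketch of signed-metadata provenance checking. Real systems
# (e.g., C2PA) use asymmetric signatures and certificate chains; an HMAC
# with a shared key stands in here so the example needs only the stdlib.
import hashlib
import hmac
import json

SIGNING_KEY = b"publisher-secret-key"  # illustrative only


def sign_media(media_bytes: bytes, metadata: dict) -> dict:
    """Attach a signed manifest binding the metadata to the media's hash."""
    manifest = {"content_sha256": hashlib.sha256(media_bytes).hexdigest(), **metadata}
    payload = json.dumps(manifest, sort_keys=True).encode()
    manifest["signature"] = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return manifest


def verify_media(media_bytes: bytes, manifest: dict) -> bool:
    """Return True only if the signature holds and the media matches the manifest."""
    claimed = dict(manifest)
    signature = claimed.pop("signature", "")
    payload = json.dumps(claimed, sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(signature, expected):
        return False  # manifest tampered with or not signed by this key
    return hashlib.sha256(media_bytes).hexdigest() == claimed["content_sha256"]


if __name__ == "__main__":
    video = b"...original video bytes..."
    manifest = sign_media(video, {"publisher": "example-news.org", "ai_generated": False})
    print(verify_media(video, manifest))            # True: media untouched
    print(verify_media(video + b"edit", manifest))  # False: altered after signing
```

Watermarking and fingerprinting take a different tack, embedding or deriving signals from the content itself rather than attaching an external signed manifest, which is why platforms tend to pursue the techniques in combination.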
A spokesperson for X, formerly known as Twitter, said its media policy restricts the sharing of synthetically manipulated media and that it is working toward more accurate labeling of misleading content. The company is also rolling out measures to assess AI model risks and identify networks spreading AI-enabled disinformation.
Despite the companies' measures to curb AI threats, committee Chair Margaret Beckett expressed concern about the platforms' lack of unified effort to address AI-related risks, which she said could expose users to a "dizzying variety of information."
"Much of the written evidence that was submitted shows - with few and notable exceptions - an uncoordinated, siloed approach to the many potential threats and harms facing U.K. and global democracy," Beckett said.
Inadequate moderation of AI-generated content also risks creating "echo chambers" that limit users' access to varied content and information on these platforms, she added.
Researchers at George Washington University raised similar concerns, advising social media companies to coordinate their efforts against disinformation campaigns rather than trying individually to contain or filter them, in order to reduce the scale of such operations (see: AI Disinformation Likely a Daily Threat This Election Year).
The committee has not set a date for a hearing on election security.
The committee's efforts come amid mounting concern over risks to U.K. election security. British Deputy Prime Minister Oliver Dowden in March publicly blamed China-backed APT31 for an attack against the Inter-Parliamentary Alliance on China, an international pressure group of lawmakers dedicated to countering Beijing (see: UK Discloses Chinese Espionage Activities).
To beef up cyber defense, the U.K. government recently launched a system for alerting political parties and candidates to cyberthreats (see: UK NCSC Launches New Hacking Alert System for Politicians).