Experts Urge Congress to Combat Deepfake Technology Threats
Digitally Manipulated Media Already Poses National Security and Privacy Concerns

Legal experts and technologists urged the U.S. Congress to restrict the use of deepfake technologies and to provide new protections for women and minority communities targeted by digitally manipulated media, warning that the deceptive content is already undermining national security, personal privacy and public trust.
Deepfake detection systems can help prevent disinformation campaigns and social engineering attacks, and can flag impersonation attempts used to gain unauthorized access. But research shows that those systems, which are designed to identify and mitigate manipulated or synthetic media, suffer from high error rates when the deepfake content targets women and people of color.
"While technologies are being developed to detect deepfakes, initial studies have demonstrated that many of these systems are more accurate in detecting deepfakes featuring whites than people of color," Spencer Overton, a George Washington University School of Law professor, testified Wednesday to the House Oversight Subcommittee on Cybersecurity, Information Technology, and Government Innovation. Overton said detection systems "are not trained on datasets that include a sufficiently robust number of images of people of color," resulting in a gap in their effectiveness and accuracy when combating deepfake threats targeting individuals from diverse racial backgrounds.
"Congress should come together, understand these emerging challenges and really take action to protect all Americans and our democracy from the harms" of deepfake technologies, he added.
In September, the Cybersecurity and Infrastructure Security Agency urged organizations to improve their verification capabilities and deepfake detection techniques, warning in a joint advisory with the FBI and NSA that threats from synthetic media "have exponentially increased" and present "a growing challenge for users of modern technology and communications," including national critical infrastructure owners and operators (see: US Federal Agencies Urge Firms to Prepare for Deepfakes).
Sam Gregory, executive director of the human rights nonprofit Witness, told lawmakers that women disproportionately face threats from deepfake technologies that produce nonconsensual sexual content and pornography. Celebrities and politicians have also had their likenesses used in altered digital content, including on the eve of Chicago's mayoral election in February, when a deepfake of then-candidate Paul Vallas posted to X featured fabricated incendiary remarks and controversial policy positions. The video garnered thousands of views before it was removed from the social media platform.
"Existing harms are exacerbated by deepfake technologies," Gregory said, urging the subcommittee "to support responsible detection and provenance approaches that protect privacy and free expression" and "consider targeted legislation on known harms."
Earlier this year, lawmakers introduced the Preventing Deepfakes of Intimate Images Act, which would prohibit the nonconsensual disclosure of digitally altered intimate images. Rep. Gerry Connolly, D-Va., said the bill "creates additional legal courses of action for those who are affected" and helps prevent the "harmful proliferation of deepfake pornography."
"It's not just deepfake videos we have to worry about," he added, saying that the technology can be used to conduct malicious cyber activities with altered audio and telecommunications operations.
"Government and the private sector must collaboratively highlight the dangers and consequences of deepfakes and teach how to combat this disinformation and abuse," Connolly said.