
Tech Execs, Political Leaders Call for Deepfake Regulation

Open Letter Calls for Holding the Whole Deepfake Supply Chain Accountable
Artificial intelligence experts have called for deepfake regulation. (Image: Shutterstock)

Nearly 1,000 artificial intelligence and technology experts globally have called for regulation around deepfakes to mitigate risks including fraud and political disinformation that could cause "mass confusion."


In an open letter, the signatories detailed recommendations for how lawmakers can regulate deepfakes, including by fully criminalizing deepfake child sexual abuse material and penalizing anyone who knowingly creates or facilitates the spread of harmful deepfakes. The experts also recommended that software developers and distributors take measures to prevent their products from creating harmful deepfakes, and that they be held liable if those safeguards are too easy to circumvent.

"The whole deepfake supply chain should be held accountable, just as they are for malware and child pornography," the letter said.

Experts from academia, entertainment, politics and the technology industry backed the letter, including Yoshua Bengio, a deep learning scientist and winner of the A.M. Turing Award, often called the Nobel Prize of computing. Two former presidents of Estonia, researchers at Google DeepMind and a researcher from OpenAI also signed the letter.

"Today, deepfakes often involve sexual imagery, fraud, or political disinformation. Since AI is progressing rapidly and making deepfakes much easier to create, safeguards are needed," the letter says.

The European Union is working on criminalizing AI-generated images and deepfakes that depict child pornography, and England's law enforcement is already cracking down on such cases.

The letter comes on the heels of a 400% spike in deepfake content over the past four years. The United States is taking steps to curb deepfakes: the Federal Trade Commission recently proposed additional authority to pursue individuals behind deepfake impersonations, and the Department of Homeland Security is recruiting AI experts.

Technology giants recently pledged to address election misinformation spread via deepfakes, and Meta and the Misinformation Combat Alliance launched a helpline in India to combat AI-generated misinformation as the world's largest democracy prepares to go to the polls.

The letter is not the first to call for regulation of AI risks. The Center for AI Safety, which signed the letter, stated last year that AI posed an existential threat to society, a statement backed by OpenAI chief Sam Altman and senior staffers from Google, Anthropic and Skype. A separate letter from the Future of Life Institute urged AI companies to pause development of their systems and instead focus on mitigating risks.


About the Author

Rashmi Ramesh


Assistant Editor, Global News Desk, ISMG

Ramesh has seven years of experience writing and editing stories on finance, enterprise and consumer technology, and diversity and inclusion. She previously worked at TechCircle, formerly owned by News Corp, as well as the business daily The Economic Times and The New Indian Express.



