
US Law Enforcement Cracks Down on AI-Led Child Abuse Content

Police, Prosecutors Say CSAM Generated by AI Is the Same as Traditional CSAM

U.S. law enforcement is cracking down on people who use artificial intelligence to generate child sexual abuse material, asserting that there is no difference between material made by a computer and material depicting real-life abuse.


Federal prosecutors announced Monday the arrest of an Army soldier for allegedly using AI to morph images of real children into CSAM. Seth Herrera allegedly generated, stored and distributed "tens of thousands" of such images and videos.

Herrera's case is at least the fourth in a string of federal prosecutions involving child abuse deepfakes. Law enforcement in May charged a Wisconsin man with allegedly using AI to generate fake child sexual abuse material, likely the first federal charge of its kind. Two subsequent cases - one charging a North Carolina child psychiatrist and another a Pennsylvania CSAM recidivist - also involve defendants who allegedly used AI to morph children's faces onto sexually explicit scenes or to digitally undress them.

"Put simply, CSAM generated by AI is still CSAM, and those who sexually exploit children, through whatever technological means, will be held accountable," U.S. Attorney S. Lane Tucker for the District of Alaska said.

"Criminals considering the use of AI to perpetuate their crimes should stop and think twice - because the Department of Justice is prosecuting AI-enabled criminal conduct to the fullest extent of the law and will seek increased sentences wherever warranted," Deputy Attorney General Lisa Monaco said earlier this year.

Over a three-year period, Herrera allegedly took pictures of children and infants he knew in private settings and used AI tools to enhance the photos and create realistic-looking pornographic content. If convicted, he faces a maximum penalty of 20 years in prison for one count each of transporting, receiving and possessing child sexual abuse imagery.

Police seized three smartphones during a search at the time of Herrera's arrest. Apart from generating his own content, he also received, stored and trafficked such material on the messaging apps Telegram, Potato Chat, Enigma and Nandbox.

Prosecutors have highlighted the technology used by suspects. The Wisconsin man charged in May allegedly used the text-to-image model Stable Diffusion to create "thousands of realistic images of prepubescent minors," which he distributed on Instagram and Telegram. Court documents said that Steven Anderegg, 42, used "extremely specific and explicit prompts" to create the images, along with third-party add-ons that "specialized in producing genitalia."

Stability AI, which develops Stable Diffusion, said at the time that it bans the creation of CSAM, assists law enforcement investigations into "illegal or malicious" uses and has removed explicit material from its training data, reducing the "ability for bad actors to generate obscene content." But the tool's open-source license allows anyone to download and use it with little oversight, and safety features, such as a filter meant to block explicit images, are easily bypassed by adding a few lines of code. Then-CEO Emad Mostaque reportedly said that "ultimately, it's people's responsibility as to whether they are ethical, moral and legal in how they operate this technology."

Image generators Dall-E and Midjourney are not open source. Both ban sexual content and can record and track all generated images. OpenAI, which makes Dall-E and ChatGPT, reportedly hired human reviewers to enforce its CSAM ban and removed explicit content from its image training data.

Tech giants including Google, Meta, OpenAI, Microsoft and Amazon pledged in April to review their AI training data for CSAM and committed to "stress-testing" models to ensure that no CSAM can be created on their platforms. Stability AI also agreed to implement the new set of principles.


About the Author

Rashmi Ramesh

Assistant Editor, Global News Desk, ISMG

Ramesh has seven years of experience writing and editing stories on finance, enterprise and consumer technology, and diversity and inclusion. She has previously worked at formerly News Corp-owned TechCircle, business daily The Economic Times and The New Indian Express.



