US DOJ Developing Guidelines for AI Use in Law Enforcement

Justice Department Aiming to Emphasize Privacy and Security in AI Deployment
A top U.S. Department of Justice official said the department's AI compliance plan will be released "as soon as possible." (Image: Shutterstock)

The U.S. Department of Justice is drafting guidelines for law enforcement agencies nationwide on the use of generative artificial intelligence and facial recognition tools, a senior official said Wednesday.

The department plans to issue recommendations for law enforcement agencies on the use of emerging technologies to enhance public safety, including best practices and privacy safeguards, said Michelle Ramsden, senior counsel in the DOJ Office of Privacy and Civil Liberties.

Justice recently finalized an AI compliance plan and "has initiated consultations with external experts on AI governance" to ensure the responsible deployment of AI technologies and address potential risks related to privacy, ethics and security, Ramsden said during a presentation at the Department of Agriculture's cybersecurity summit, where she discussed efforts to advance federal AI use. Ramsden added that the compliance plan will be published on the department's website "as soon as possible."

The department appointed its first-ever chief AI officer in February to help study internal and external use cases while developing regulatory frameworks that ensure the responsible and ethical use of artificial intelligence across various sectors. Attorney General Merrick Garland said at the time that DOJ "must keep pace with rapidly evolving scientific and technological developments in order to fulfill our mission to uphold the rule of law, keep our country safe, and protect civil rights."

Ramsden said the DOJ's recently established emerging technology board is also taking a key role in spearheading the new guidelines and recommendations for law enforcement. The board was established in 2023 to advise Justice leaders on the ethical and lawful use of AI within the agency and to ensure department-wide coordination on the use of emerging tech (see: DOJ to Launch Emerging Tech Board, Ensure Ethical Use of AI).

Deputy Attorney General Lisa Monaco said at the time that the board would be responsible for sharing best practices related to AI use and for establishing principles governing the deployment of facial recognition and other technologies. The department has already integrated facial recognition technologies into its own high-profile use cases, including the ongoing investigation into the Jan. 6, 2021, insurrection.

The White House unveiled a wide-ranging executive order on AI in October 2023 that directed the Justice Department to coordinate with federal civil rights offices in developing best practices for investigating and prosecuting civil rights violations related to AI (see: White House Issues Sweeping Executive Order to Secure AI). The executive order tasked Justice with tackling algorithmic discrimination through training, technical assistance and enhanced collaboration with stakeholders.


About the Author

Chris Riotta

Managing Editor, GovInfoSecurity

Riotta is a journalist based in Washington, D.C. He earned his master's degree from the Columbia University Graduate School of Journalism, where he served as 2021 class president. His reporting has appeared in NBC News, Nextgov/FCW, Newsweek Magazine, The Independent and more.
