NTIA Pushes for Independent Audits of AI Systems

Accountability Needed to Unleash Full Potential of AI, Says NTIA Administrator
The Biden administration says accountability is a prerequisite for unlocking artificial intelligence benefits. (Image: Shutterstock)

The Biden administration is calling for mandatory audits of high-risk artificial intelligence systems and greater clarity on where liability for applications gone wrong should lie in the AI supply chain.

The recommendations come from a National Telecommunications and Information Administration report published Wednesday morning calling for accountability conceived "as a chain of inputs linked to consequences."

"To achieve real accountability and harness all of AI's benefits, the United States - and the world - needs new and more widely available accountability tools and information, an ecosystem of independent AI system evaluation, and consequences for those who fail to deliver on commitments or manage risks properly," the 77-page report states.

"We need accountability to unleash the full potential of AI," NTIA Administrator Alan Davidson said in a statement, adding that the agency's recommendations "will empower businesses, regulators and the public to hold AI developers and deployers accountable for AI risks."

The recommendations align with President Joe Biden's October executive order on AI, which invoked the Defense Production Act to require developers of high-risk AI models to notify the government when they're training such tools (see: White House Issues Sweeping Executive Order to Secure AI).

NTIA officials told reporters Tuesday afternoon that the private sector should welcome independent audits of certain high-risk systems since they would boost public and marketplace confidence.

The report also calls for possible pre-release certification of claims for AI models used in high-risk sectors such as healthcare and finance, as well as periodic recertification to confirm the model still performs as its makers claimed.

The NTIA said the federal government should collaborate with the private sector to improve standard information disclosures through new concepts such as "AI nutrition labels," which would present standardized information similar to food labels mandated by the Food and Drug Administration. The report calls on federal agencies to develop recommendations on how best to apply existing liability rules and standards to AI systems.

Courts and lawmakers may ultimately define where liability lies for harms stemming from AI models in the supply chain leading from developers to users, but the agency proposes a meeting of legal experts and stakeholders to hammer out how existing regulation applies. The third-party evaluation method of accountability that the NTIA endorses may well hinge on exposure and protection from liability, the report says.

Additional recommendations include tasking the federal government with strengthening its capacity to address cross-sectoral risks associated with AI tools, developing a national registry of high-risk AI deployments, and creating a national database for reporting AI-related adverse incidents.


About the Author

Chris Riotta

Managing Editor, GovInfoSecurity

Riotta is a journalist based in Washington, D.C. He earned his master's degree from the Columbia University Graduate School of Journalism, where he served as 2021 class president. His reporting has appeared in NBC News, Nextgov/FCW, Newsweek Magazine, The Independent and more.