
California Executive Order Hopes to Ensure 'Trustworthy AI'

Governor Sets 2-Year Deadline for Policies on AI Use, Risks at Public Agencies
California Gov. Gavin Newsom speaks at a press conference on April 26, 2022. (Image: Shutterstock)

California Gov. Gavin Newsom on Wednesday signed an executive order to study the development, use and risks of artificial intelligence and develop a process to deploy "trustworthy AI" in the state government.


Executive Order N-12-23, which calls for a staggered implementation over the next two years, is needed because "we’re only scratching the surface of understanding what generative AI is capable of," Newsom said.

The unprecedented speed of innovation and deployment of the technology makes it necessary for the state to put guardrails in place to defend against risks or malicious uses of AI, including cyberattacks and disinformation, the document says.

The executive order calls for state agencies and departments to submit a report detailing the use cases and risks of generative AI tools within the next 60 days. The risks must include those "stemming from bad actors and insufficiently guarded governmental systems, unintended or emergent effects, and potential risks toward democratic and legal processes, public health and safety, and the economy," the order says.

The order also directs the California Cybersecurity Integration Center and the California State Threat Assessment Center to analyze potential threats of generative AI to California's critical energy infrastructure by March 2024 and to recommend safeguards against those potential risks.

The Government Operations Agency, the California Department of General Services, the California Department of Technology, and the California Cybersecurity Integration Center must issue guidelines for the adoption and use of generative AI by the public sector by January 2024. They must also evaluate the impact of these tools on communities and governments and make recommendations for addressing those impacts by July 2024, with adoption of new guidelines set to follow by 2025. The guidelines must be built on the White House's Blueprint for an AI Bill of Rights and the National Institute of Standards and Technology's AI Risk Management Framework, and they must address safety, algorithmic discrimination and data privacy.

The order directs the California Department of Technology to establish infrastructure to carry out generative AI pilot projects by March 2024 and to set up sandboxes for testing those projects so that state agencies can begin to consider their implementation by July 2024.

The order also looks to engage legislative partners and key stakeholders to develop policy recommendations for the responsible use of AI and to evaluate the evolving technology's impact on an ongoing basis.

The executive order seeks to partner with the University of California, Berkeley and Stanford University to assess the impact of generative AI on California and to brainstorm ways to "advance its leadership in this industry." It looks to host a joint summit in 2024 to "engage in meaningful discussions about the impacts of generative AI on California and its workforce."


About the Author

Rashmi Ramesh


Assistant Editor, Global News Desk, ISMG

Ramesh has seven years of experience writing and editing stories on finance, enterprise and consumer technology, and diversity and inclusion. She has previously worked at formerly News Corp-owned TechCircle, business daily The Economic Times and The New Indian Express.




