
US CISA Urges Security by Design for AI

Part of Agency's Campaign to Align Design, Development With Security From the Start

The U.S. federal government is advocating for artificial intelligence developers to embrace security as a core requirement, warning that machine learning code is particularly difficult and expensive to fix after deployment. The Cybersecurity and Infrastructure Security Agency, in a Friday blog post, urged that AI be secure by design, part of CISA's ongoing campaign to promote aligning design and development programs with security from the start (see: CISA, Others Unveil Guide for Secure Software Manufacturing).

"Discussions of artificial intelligence often swirl with mysticism regarding how an AI system functions. The reality is far more simple: AI is a type of software system. And like any software system, AI must be secure by design," the agency said.


Security experts across the world have for years been pushing companies to develop software and products with security baked in rather than added as an afterthought. The era of treating security as an externality whose costs are borne by consumers should be replaced by a new commitment to security, including through a shift in liability to software developers, CISA Director Jen Easterly said in a February speech.

CISA's Friday blog post doesn't discuss legislative proposals, but it does highlight previous research that draws attention to machine learning's tightly coupled nature. Changing any one input can change everything, according to a 2014 paper by Google researchers that called machine learning's unresolved difficulties "the high-interest credit card of technical debt."

The blog post acknowledged that providing security by design for AI could differ from providing it for other types of software. It contains a list of basic, sector-agnostic security practices that "still apply to AI software." Implementing the guidelines, even if not specific to AI, is especially important because threat actors have exploited AI systems by using known vulnerabilities of non-AI software elements, CISA said.

Existing community-expected security practices and policies should apply across AI software design, development, deployment and testing, as well as data management, system integration, and vulnerability and incident management, the agency said.

Systems that process AI model file formats should protect against untrusted code execution attempts and also use memory-safe languages, CISA said. The AI engineering community must also implement vulnerability identifiers such as CVEs, capture a software bill of materials for AI models and their dependencies, and follow fundamental privacy principles by default.
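To make the untrusted code execution risk concrete, here is a minimal illustrative sketch that is not drawn from CISA's post: many AI model files are serialized with Python's pickle, which can run arbitrary code the moment a file is loaded. The allowlisting unpickler below is adapted from the pattern described in the Python standard library documentation; the payload class is hypothetical.

```python
import io
import os
import pickle

# Why pickle-based model files are risky: an object's __reduce__ hook
# runs arbitrary code during deserialization, before any model weights
# are even inspected.
class MaliciousPayload:
    def __reduce__(self):
        return (os.system, ("echo 'code ran during model load'",))

untrusted_model_bytes = pickle.dumps(MaliciousPayload())
# A plain pickle.loads(untrusted_model_bytes) here would execute the
# embedded shell command immediately.

# One mitigation, following the restricted-unpickler pattern from the
# Python docs: refuse to construct anything outside a small, explicitly
# approved set of types.
class RestrictedUnpickler(pickle.Unpickler):
    ALLOWED = {("builtins", "dict"), ("builtins", "list")}

    def find_class(self, module, name):
        if (module, name) not in self.ALLOWED:
            raise pickle.UnpicklingError(f"blocked: {module}.{name}")
        return super().find_class(module, name)

try:
    RestrictedUnpickler(io.BytesIO(untrusted_model_bytes)).load()
except pickle.UnpicklingError as err:
    print(err)  # the payload's callable is blocked; nothing executes
```

Data-only model formats such as safetensors sidestep this class of attack entirely, since they store raw tensor bytes and metadata rather than executable objects.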


About the Author

Rashmi Ramesh

Assistant Editor, Global News Desk, ISMG

Ramesh has seven years of experience writing and editing stories on finance, enterprise and consumer technology, and diversity and inclusion. She has previously worked at formerly News Corp-owned TechCircle, business daily The Economic Times and The New Indian Express.



