
AI Heightens Cyber Risk for Legacy Weapon Systems

'Blind Faith' Architectures Pervade Military Arsenal
Jason Matheny, president and CEO, Rand Corp.; Josh Lospinoso, co-founder and CEO, Shift5; Shyam Sankar, CTO, Palantir (Image: U.S. Congress/Information Security Media Group)

The U.S. weapons arsenal developed without a zero trust architecture is at growing risk from cyberattacks, lawmakers heard today in a panel dedicated to how artificial intelligence can simultaneously help and hurt efforts to protect warfighters from digital attacks.


Weapon systems contain dozens, even hundreds, of special-purpose computers that perform digital functions, from the control surfaces on an aircraft to the data radios on submarines. A decade ago, the Defense Science Board concluded that nearly every conceivable component of the military is networked.

Little has changed since the Government Accountability Office in 2018 concluded the Pentagon is poorly prepared to handle mounting evidence of pervasive vulnerabilities in already-built weapons systems, said Shyam Sankar, chief technology officer of Palantir, in testimony Wednesday before a Senate Armed Services subcommittee. "Unlike modern IT systems built with zero trust architectures, these weapons systems were built with blind faith architectures," Sankar said.

Defense officials have made progress in building cybersecurity into new weapons systems, but a lack of funding and direction has resulted in "disconcertingly little progress" for legacy systems, he said.

Foreign adversaries will use AI to speed up development of offensive cyber weapons, the panel heard, but Sankar said AI could also be used to detect anomalies and prevent intrusions into legacy systems. One complication is that the vast majority of data generated by legacy weapons systems evaporates into the ether, and extracting even the simplest data streams off those systems is a struggle, he said.
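The anomaly detection Sankar describes can be illustrated with a minimal sketch: flag readings in a captured telemetry stream that deviate sharply from the recent baseline. The detector, window size, threshold and data below are all invented for illustration, not drawn from any actual weapon-system interface.

```python
# Hypothetical sketch: flag telemetry readings that deviate sharply from
# the recent baseline. All values and thresholds here are invented.
from collections import deque
from statistics import mean, stdev

def detect_anomalies(stream, window=5, threshold=3.0):
    """Return indices of readings lying more than `threshold` standard
    deviations from the mean of the preceding `window` readings."""
    history = deque(maxlen=window)
    anomalies = []
    for i, value in enumerate(stream):
        if len(history) == window:
            mu, sigma = mean(history), stdev(history)
            if sigma > 0 and abs(value - mu) / sigma > threshold:
                anomalies.append(i)
        history.append(value)
    return anomalies

# A steady bus-voltage reading with one injected spike at index 6.
telemetry = [28.0, 28.1, 27.9, 28.0, 28.2, 28.1, 35.0, 28.0]
print(detect_anomalies(telemetry))  # -> [6]
```

A production detector would be far more sophisticated, but the sketch shows why capturing the raw data stream matters: without it, there is nothing to compare against a baseline.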

"If we're going to win in a military conflict, the DOD needs to own the data that its weapon systems are generating in a combat environment," Sankar said. "We really need to pay attention to that."

Artificial intelligence's nascent centrality to offensive weapons development means the United States should take bold steps to ensure that adversaries are unable to develop their own models, said Rand Corp. CEO Jason Matheny.

"These AI models right now are very brittle," Matheny said. "We need to be thinking about ways that we can slow down progress elsewhere by doing things like adversarial attacks, data poisoning and model inversion. Let's use the tricks that we're seeing used against us and make sure that we understand the state of the art."

Data poisoning - in which adversaries alter the data used to train AI models in order to distort the resulting algorithms - is already a risk for the United States, said Shift5 co-founder and CEO Josh Lospinoso. "These are real problems," he said. "We need to think clearly about shoring up those security vulnerabilities in our AI algorithms before we deploy these broadly and have to clean the mess up afterwards."
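Lospinoso's concern can be made concrete with a toy example. The sketch below uses an invented nearest-centroid classifier and made-up data; it shows how an adversary who injects mislabeled training points can drag a model's decision boundary and flip its predictions.

```python
# Hypothetical illustration of label-flipping data poisoning against a
# toy one-feature nearest-centroid classifier. Data is invented.

def centroid(points):
    return sum(points) / len(points)

def train(samples):
    """samples: list of (feature, label) pairs, labels 0 or 1.
    Returns the per-class centroids used for classification."""
    by_label = {0: [], 1: []}
    for x, y in samples:
        by_label[y].append(x)
    return centroid(by_label[0]), centroid(by_label[1])

def predict(model, x):
    c0, c1 = model
    return 0 if abs(x - c0) <= abs(x - c1) else 1

# Clean training set: class 0 clusters near 1.0, class 1 near 9.0.
clean = [(0.8, 0), (1.0, 0), (1.2, 0), (8.8, 1), (9.0, 1), (9.2, 1)]

# Poisoned copy: injected points mislabeled as class 1 sit near the
# class-0 cluster, dragging the class-1 centroid toward it.
poisoned = clean + [(1.5, 1), (1.6, 1), (1.7, 1), (1.8, 1)]

probe = 3.0  # a point that clearly belongs with class 0
print(predict(train(clean), probe))     # -> 0
print(predict(train(poisoned), probe))  # -> 1 (poisoning flipped it)
```

Real attacks target far larger models, but the mechanism is the same: corrupt the training data, and the resulting algorithm misbehaves on inputs the attacker cares about.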

Matheny recommended several measures that could reduce the impact if sophisticated AI systems fall into the wrong hands. He urged enacting strong export controls on leading-edge AI chips while licensing benign uses of chips that can be remotely throttled. Companies should be required to record the development and distribution of large computing clusters and trained AI models, he said.

Authorities should build a licensing regime or governance system around AI models that records the amount of compute used and the reliance on open-source components, to prevent misuse by bad actors, Matheny said. The intelligence community should also expand its collection efforts on foreign adversaries' AI investments, capabilities, materials and talent.

"We're going to need a regulatory approach that allows the government to say, 'Tools above a certain size with a certain level of capability can't be freely shared around the world, including to our competitors,'" Matheny said. "We need to have certain guarantees of security before they're deployed."

Finally, Matheny said, he would like to see the U.S. government pursue AI security "moonshots," including microelectronic controls embedded in AI chips to thwart the development of large AI models without security safeguards. In addition, Matheny hopes lawmakers develop a generalizable approach to evaluate the security and safety of AI systems before they're deployed.

"One of the threats I see is that the very technology that we develop in the United States for benign use can be stolen and misused by others," Matheny said. "We need to prevent that."

About the Author

Michael Novinson
Managing Editor, Business, ISMG

Novinson is responsible for covering the vendor and technology landscape. Prior to joining ISMG, he spent four and a half years covering all the major cybersecurity vendors at CRN, with a focus on their programs and offerings for IT service providers. He was recognized for his breaking news coverage of the August 2019 coordinated ransomware attack against local governments in Texas as well as for his continued reporting around the SolarWinds hack in late 2020 and early 2021.
