AI Heightens Cyber Risk for Legacy Weapon Systems
'Blind Faith' Architectures Pervade Military Arsenal

The U.S. weapons arsenal, developed without a zero trust architecture, is at growing risk from cyberattacks, lawmakers heard today in a panel dedicated to how artificial intelligence can simultaneously help and hurt efforts to protect warfighters from digital attacks.
Weapon systems contain dozens, even hundreds, of special-purpose computers that perform digital functions ranging from managing the control surfaces of an aircraft to running data radios on submarines. The Defense Science Board said a decade ago that nearly every conceivable component of the military is networked.
Little has changed since the Government Accountability Office in 2018 concluded the Pentagon is poorly prepared to handle mounting evidence of pervasive vulnerabilities in already-built weapons systems, said Shyam Sankar, chief technology officer of Palantir, in testimony Wednesday before a Senate Armed Services subcommittee. "Unlike modern IT systems built with zero trust architectures, these weapons systems were built with blind faith architectures," Sankar said.
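Sankar's contrast can be illustrated in miniature: a zero trust design authenticates every message on a platform's internal data bus, while a "blind faith" design accepts whatever arrives. The sketch below is purely illustrative, assuming a simplified HMAC scheme with a pre-shared key and a made-up message format; it does not describe any fielded weapon system.

```python
# Illustrative contrast between implicit-trust and authenticated bus messaging.
# The key handling and message format here are hypothetical simplifications.
import hmac
import hashlib
import os

SHARED_KEY = os.urandom(32)  # in practice, provisioned per component


def send_blind_faith(payload: bytes) -> bytes:
    # Legacy pattern: any node on the bus is trusted implicitly.
    return payload


def send_zero_trust(payload: bytes) -> bytes:
    # Each message carries a MAC so receivers can verify origin and integrity.
    tag = hmac.new(SHARED_KEY, payload, hashlib.sha256).digest()
    return tag + payload


def receive_zero_trust(frame: bytes) -> bytes:
    tag, payload = frame[:32], frame[32:]
    expected = hmac.new(SHARED_KEY, payload, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):
        raise ValueError("rejecting unauthenticated bus message")
    return payload


frame = send_zero_trust(b"SET_RADIO_FREQ 243.0")
print(receive_zero_trust(frame))
```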
Defense officials have made progress in building cybersecurity into new weapons systems, but a lack of funding and direction has resulted in "disconcertingly little progress" for legacy systems, he said.
Foreign adversaries will use AI to speed up development of offensive cyber weapons, the panel heard, but Sankar said AI could also be used to detect anomalies and prevent intrusions into legacy systems. One complication is that the vast majority of data generated by legacy weapons systems evaporates into the ether, and extracting even the simplest data streams off weapons systems is a struggle, he said.
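As a rough illustration of the anomaly detection role Sankar described, the hypothetical sketch below trains an off-the-shelf isolation forest on extracted bus telemetry and flags traffic windows that deviate from the baseline. The feature names, values and model choice are assumptions made for illustration, not details from the testimony.

```python
# Minimal sketch of anomaly detection on captured bus telemetry, assuming the
# data stream has already been extracted from the platform. Features and
# thresholds are illustrative only.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Hypothetical per-second features from a legacy serial bus capture:
# [message rate, number of distinct talkers, mean payload length]
baseline = np.column_stack([
    rng.normal(120, 5, 1000),   # typical message rate
    rng.normal(8, 1, 1000),     # typical number of talkers
    rng.normal(32, 2, 1000),    # typical payload size
])

detector = IsolationForest(contamination=0.01, random_state=0).fit(baseline)

# A window with an unexpected burst of traffic from new talkers
suspect = np.array([[310.0, 14.0, 61.0]])
print(detector.predict(suspect))  # -1 flags the window as anomalous
```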
"If we're going to win in a military conflict, the DOD needs to own the data that its weapon systems are generating in a combat environment," Sankar said. "We really need to pay attention to that."
Artificial intelligence's nascent centrality to offensive weapons development means the United States should take bold steps to ensure that adversaries are unable to develop their own models, said Rand Corp. CEO Jason Matheny.
"These AI models right now are very brittle," Matheny said. "We need to be thinking about ways that we can slow down progress elsewhere by doing things like adversarial attacks, data poisoning and model inversion. Let's use the tricks that we're seeing used against us and make sure that we understand the state of the art."
Data poisoning - in which adversaries alter the data used to train AI models in order to distort the resulting algorithms - is already a risk for the United States, said Shift5 co-founder and CEO Josh Lospinoso. "These are real problems," he said. "We need to think clearly about shoring up those security vulnerabilities in our AI algorithms before we deploy these broadly and have to clean the mess up afterwards."
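A toy example makes the mechanism concrete: if an adversary flips a fraction of the labels in a training set before the model is fit, the resulting classifier measurably degrades. The snippet below uses synthetic data and an off-the-shelf logistic regression purely for illustration, not as a statement about any fielded system.

```python
# Toy illustration of label-flipping data poisoning: corrupting a slice of the
# training labels degrades the trained model's accuracy on clean test data.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.5, random_state=0)

clean = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)

# Adversary flips 30% of the training labels before the model is trained.
rng = np.random.default_rng(0)
poison_idx = rng.choice(len(y_tr), size=int(0.3 * len(y_tr)), replace=False)
y_poisoned = y_tr.copy()
y_poisoned[poison_idx] = 1 - y_poisoned[poison_idx]

poisoned = LogisticRegression(max_iter=1000).fit(X_tr, y_poisoned)

print("clean accuracy:   ", clean.score(X_te, y_te))
print("poisoned accuracy:", poisoned.score(X_te, y_te))
```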
Matheny recommended several measures that could reduce the impact if sophisticated AI systems fall into the wrong hands. He urged enacting strong export controls on leading-edge AI chips while licensing benign uses of chips that can be remotely throttled. Companies should have to record the development and distribution of large computing clusters or trained AI models, he said.
Authorities should build a licensing regime or governance system around AI models that records the amount of compute being used and the reliance on open-source components, to prevent misuse by bad actors, Matheny said. The intelligence community should also expand its collection efforts on what foreign adversaries are doing when it comes to AI investments, capabilities, materials and talent.
"We're going to need a regulatory approach that allows the government to say, 'Tools above a certain size with a certain level of capability can't be freely shared around the world, including to our competitors,'" Matheny said. "We need to have certain guarantees of security before they're deployed."
Finally, Matheny said, he would like to see the U.S. government pursue AI security "moonshots," including microelectronic controls embedded in AI chips to thwart the development of large AI models without security safeguards. In addition, Matheny hopes lawmakers develop a generalizable approach to evaluate the security and safety of AI systems before they're deployed.
"One of the threats I see is that the very technology that we develop in the United States for benign use can be stolen and misused by others," Matheny said. "We need to prevent that."