NTIA Gives Nod to Unrestricted Open AI Model Access
Government Must Prioritize Risk Evaluation of Dual-Use AI Models

The United States government gave a cautious blessing to unrestricted access to open artificial intelligence foundation models, warning that users should be prepared to actively monitor risks.
The National Telecommunications and Information Administration in a Tuesday report said open-weight models can make generative AI accessible to small companies, researchers, non-profits and individual developers. It recommended that there be no restrictions on access to the open models - at least until proof emerges that restrictions are warranted.
Open-weight AI models are essentially ready-to-use molds for developers to build applications on. Unlike open-source models, their code is not fully transparent. "Openness of the largest and most powerful AI systems will affect competition, innovation and risks in these revolutionary tools," said NTIA Administrator Alan Davidson.

"At the time of this report, current evidence is not sufficient to definitively determine either that restrictions on such open-weight models are warranted, or that restrictions will never be appropriate in the future," the report says.
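For a concrete sense of what "open weights" means in practice, here is a minimal sketch assuming the Hugging Face transformers library; the model ID below is a hypothetical placeholder, not a model named in the report. A developer downloads the released parameters and builds on them directly, with no gatekeeper in the loop:

```python
# Minimal sketch: loading an open-weight model's released parameters.
# Assumes the Hugging Face "transformers" library is installed; the
# model ID "example-org/open-weight-7b" is a hypothetical placeholder.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "example-org/open-weight-7b"  # hypothetical open-weight model
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Once the weights are local, a developer can fine-tune, inspect or
# build on them: the "ready-to-use mold" described above.
prompt = "Open-weight models let small teams"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

This direct access is what makes the models attractive to small companies and researchers, and it is also why restrictions, once weights are published, are difficult to enforce.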
Other federal agencies have been vocal about the need for open models. The Federal Trade Commission supports their use. Agency Chair Lina Khan recently said open models allow small players to bring their ideas to market, and that such models could decentralize control and promote healthy competition.
Still, model abuse could pose risks to national security, privacy and civil rights, the NTIA report says. Foundation models can be exploited to spread disinformation, create deepfakes and automate cyberattacks - highlighting their potential to serve as what the government calls "dual-use" technology. Bad actors can also manipulate foundation models to amplify biases in their training data, leading to unfair outcomes in areas where fairness is critical, such as hiring, law enforcement and lending. Such manipulation can also cause model owners to lose control over model behavior and can result in the mishandling of personal data, leading to privacy breaches.
Nation-state actors could use the technology to develop advanced weapons, such as autonomous drones or cyberwarfare tools - a practice reportedly already underway in the Russia-Ukraine conflict in the form of "killer algorithms" for target selection and "warbot" armies.
In the report, the NTIA advises the government to create a program aimed at gathering and assessing evidence on the risks and benefits of open AI models. It recommends research into the safety aspects of different AI models, support for risk mitigation studies and the development of "risk-specific" indicators to determine when policy changes may be needed.
Evidence collection could include encouraging the industry to set up standards, audits, disclosures and transparency for dual-use foundation models. The process could also include conducting and supporting research on the safety, security and future capabilities of these models.
Evaluating the evidence could include developing benchmarks and definitions for monitoring and potential action in cases of escalating risk, including access restrictions on models and other methods of risk mitigation.
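The report does not prescribe a technical format for these "risk-specific" indicators. As an illustration only, such a benchmark might pair a measurable score with a threshold that triggers policy review. A minimal sketch, with entirely hypothetical indicator names, scores and thresholds:

```python
# Illustrative sketch only: the NTIA report does not define these
# indicators. Names, scores and thresholds here are hypothetical.
from dataclasses import dataclass

@dataclass
class RiskIndicator:
    name: str        # what the benchmark measures
    score: float     # latest measured value, normalized to 0.0-1.0
    threshold: float # level at which policy review would be triggered

    def escalated(self) -> bool:
        return self.score >= self.threshold

indicators = [
    RiskIndicator("deepfake_generation_quality", score=0.41, threshold=0.70),
    RiskIndicator("cyberattack_automation", score=0.73, threshold=0.60),
]

for ind in indicators:
    status = "REVIEW TRIGGERED" if ind.escalated() else "within bounds"
    print(f"{ind.name}: {ind.score:.2f} / {ind.threshold:.2f} -> {status}")
```

The point of such a structure is the one the report makes in policy terms: restrictions would be tied to observed evidence crossing a defined line, rather than imposed preemptively.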
The report comes at a time when AI regulation in the U.S. comprises guidelines rather than stringent rules, in contrast with Europe's recently approved AI Act. The Biden administration has extracted promises of secure and trustworthy development from Silicon Valley heavyweights. An October executive order calls for developers of generative AI foundation models that could pose a "serious risk" to national security, national economic security or national public health to notify the government when they're training the model. Developers must also share the results of all red-team safety tests with the government.