
New Bill Would Secure Government Contractors' Use of AI

Co-Sponsor of Bipartisan Proposal Calls Bill 'Common-Sense Legislation'
Sen. Gary Peters, D-Mich., who introduced the AI bill Thursday (Photo: Gerald R. Ford School of Public Policy, University of Michigan via Flickr)

Two Senate leaders on Thursday introduced legislation that would form a working group charged with monitoring the security of AI data obtained by federal contractors. The lawmakers say the body would also ensure that this data is handled in a way that protects national security and respects Americans' privacy rights.


Sens. Gary Peters, D-Mich., chairman of the Homeland Security and Governmental Affairs Committee, and Rob Portman, R-Ohio, the committee's ranking member, introduced the legislation. In a statement, the senators say the bill will help secure and protect information handled by federal contractors, such as biometric data gathered through facial recognition scans.

The bill would require the director of the Office of Management and Budget to establish and consult with the Artificial Intelligence Hygiene Working Group, which is charged with ensuring appropriate storage and use of AI-collected data.

Peters says in the statement introducing the bill, "While artificial intelligence applications have the potential to strengthen our national security, we must ensure data collected by this technology is secure, used appropriately, and does not compromise the privacy and rights of Americans.

"This bipartisan bill will help ensure that federal contractors are using artificial intelligence properly and for the benefit of the country - and that the information collected through these technologies is not misused."

The Senate Homeland Security and Governmental Affairs Committee said in its statement Thursday that if used improperly, AI technology "could harm Americans" or "compromise national security." In some instances, committee members say, AI technology companies that have contracted with federal law enforcement agencies "have misused data and information."

Portman says of the proposed legislation, "The bipartisan GOOD AI Act helps strengthen the accountability and security of federal AI systems and I urge my colleagues to join us in supporting this common-sense legislation."

Perry Carpenter, a board member for the National Cyber Security Alliance, tells Information Security Media Group, "Regulation takes time to pass, but when it does, it can be effective at setting a bar. Proposed bills like this send a clear message: 'We are watching you. We are concerned. Regulate yourselves or you will be regulated. It's only a matter of time.'"

Objective: Secure and Safeguard

If passed, the Government Ownership and Oversight of Data in Artificial Intelligence, or GOOD AI, Act would require the OMB to spearhead this working group - ultimately composed of subject matter experts from across the federal government.

Peters and Portman say the group would be tasked with developing and implementing solutions that ensure government contracts for AI services require data and systems to be secure, safeguard Americans' civil rights and liberties, and give the federal government ownership of collected information. Third parties, then, would not be able to publicly post, sell or misuse the data "in a way that compromises privacy rights."

However, James A. Lewis, a cybersecurity researcher at the Center for Strategic and International Studies, suggests that this focus may be misplaced, telling ISMG: "Americans spend way too much time worrying about risk and not enough time thinking how to create opportunity. I want to see AI used to drag federal services into the late 20th century. What we really need is a real privacy bill and more action on mandatory requirements for cybersecurity."

Senate Homeland Security and Governmental Affairs Committee ranking member Rob Portman, R-Ohio (Photo: Gage Skidmore via Flickr)

Other Efforts to Secure AI

Peters and Portman also introduced the Artificial Intelligence Training for the Acquisition Workforce Act in July, saying it would help federal employees responsible for purchasing AI-based technologies better understand the associated risks. The bill has since passed out of committee.

In a statement in July, Peters and Portman said that the legislation would help ensure the U.S. maintains a global leadership role in emerging technologies - while foreign competitors like China prioritize AI investments.

The bill encourages the OMB director to work with scholars and experts from the public and private sectors to create requisite training.

An AI provision authored by Peters also passed as part of the year-end government funding bill in late 2020. The "appropriate use" provision offers resources and guidance to federal agencies to ensure that the government's use of AI is "effective, ethical and accountable," Peters, then the committee's ranking member, said at the time.

AI technologies remain a topic of concern at the White House. Earlier this month, officials in the Biden administration said they were exploring an AI "bill of rights" that would govern facial recognition and other applications.

The news came from the White House's Office of Science and Technology Policy, which posted a request for information in the Federal Register this month that may ultimately yield a "bill of rights," OSTP Director Eric Lander and OSTP Deputy Director for Science and Society Alondra Nelson confirmed in a Wired op-ed.

Potential Vulnerabilities

Outlining the importance of AI protections, Carpenter, who is the chief evangelist and strategy officer for the security firm KnowBe4, says, "Imagine [automated systems] leaking [biometric] data because of a security vulnerability. Now imagine someone - a cybercriminal gang, nation-state, rogue insider, or an autonomous computer program - now has the ability to 'replay' these biometrics into various systems that require identification, authentication and authorization.

"When you have nation-states, espionage, or high-stakes cybercrime involved, [a 'replay' attack] becomes a probability," Carpenter adds.

Last month, the Department of Commerce announced the establishment of an AI advisory committee set to counsel President Biden and federal agencies on issues ranging from privacy and data security to global competition (see: Department of Commerce Establishes AI Advisory Committee).

The Commerce Department is working with the National Artificial Intelligence Initiative Office within the White House Office of Science and Technology Policy on the committee formation. Secretary of Commerce Gina Raimondo said at the time that the group sought "top-level candidates" to advise on AI advancements.


About the Author

Dan Gunderman

Former News Desk Staff Writer

As a staff writer on the news desk at Information Security Media Group, Gunderman covered governmental and geopolitical cybersecurity developments from across the globe. Previously, he was the editor of Cyber Security Hub, or CSHub.com, covering enterprise security news and strategy for CISOs, CIOs and other top decision-makers. Before that, he was a reporter for the New York Daily News, where he covered breaking news, politics, technology and more. Gunderman has also written and edited for news publications including NorthJersey.com, Patch.com and CheatSheet.com.




