Experts: Federal Privacy Law Needed to Curb AI Data Misuse
New Bill Would Create Data Minimization Measures, Express Permission Requirements

The continued lack of a comprehensive federal privacy law in the United States is allowing major technology and artificial intelligence firms to surveil and hyper-target the American public, experts testified Thursday.
At least 140 countries around the world have some form of a national privacy law, according to Udbhav Tiwari, director of global product policy for Mozilla, who told the Senate Commerce Committee the current patchwork of state laws leads to regulatory inconsistencies and economic disadvantages for U.S. businesses.
Congress has tried - and failed - to pass a federal data privacy bill for years, even decades. Other jurisdictions, including the European Union and Canada, have established safeguards and protective measures to prevent leading tech companies - such as Microsoft, Google and Meta - from harvesting user data to train large language models (see: Europe Reaches Deal on AI Act, Marking a Regulatory First).
"Now is the moment when passing such a law actually matters - before the trajectory has been set," Amba Kak, co-executive director of the AI Now Institute, told lawmakers. "Data privacy regulation is AI regulation, and provides many of the tools that we need to protect the public from harm."
Under a strong data privacy act, Kak said, companies operating in the U.S. would be required to determine whether the utility of new AI components outweighs the potential for harm. She said Microsoft was recently forced to backtrack on its rollout of a new AI feature dubbed Recall, which automatically captures and retrieves screenshots, due to concerns that hackers could steal sensitive information (see: Microsoft's Recall Stokes Security and Privacy Concerns).
AI "fuels an insatiable demand for consumer data" while allowing companies and governments "to derive intimate details about people from widely available information," said Ryan Calo, professor and co-director of the University of Washington Tech Policy Lab.
"American society can no longer afford to sacrifice consumer privacy on the altar of innovation," he said.
Witnesses recommended a range of specific measures that could be included in a national privacy bill, including transparency mandates and data minimization rules that Kak said would put "reason in place of recklessness." The U.S. currently lacks any federal law preventing companies from using data without notifying users to train large language models, while the EU's General Data Protection Regulation and other countries' data protection laws, such as Canada's Personal Information Protection and Electronic Documents Act and Brazil's General Data Protection Law, require explicit user consent.
"We've come so close, so many times, and yet we've never made it just quite across the finish line," Sen. Jerry Moran, R-Kan., said of congressional attempts to pass comprehensive federal data privacy legislation. "The problems and challenges with our lack of success continue to mount."
Sen. Maria Cantwell, D-Wash., who chairs the Senate Commerce Committee, unveiled the American Privacy Rights Act earlier this year alongside U.S. Rep. Cathy McMorris Rodgers, R-Wash. The bill includes data minimization policies that restrict what companies can collect, keep and use about users. It also allows for stricter protections for sensitive data by requiring express consent for third-party transfers and requires companies to allow users to access, correct, delete and export their data.
The bill includes significant discrimination protections while allowing users to opt out of a company's use of algorithms to make decisions about credit, housing and employment opportunities, among other civil rights measures. Cantwell said Thursday the legislation would prevent private data from being bought or sold without approval and ensure companies "implement these laws and help stop this kind of interference."
The panelists acknowledged that the U.S. has outpaced other countries in developing AI technologies thanks in part to underregulation, but they warned that the lack of stringent data privacy laws could lead to significant privacy violations and diminished public trust.
Morgan Reed, president of the application industry group The App Association, said the U.S. economy could fare better on the global stage "if Congress were to enact a strong, preemptive federal privacy framework that bolsters trust in cutting-edge AI tools while curbing mismanagement of personal data."
"The privacy risks AI poses are outgrowths of existing privacy issues," Reed said. "With AI, these same challenges emerge, but on a larger scale and with greater intensity on the foreseeability factor."