Irish Data Protection Commission Probes Google's AI Model
Inquiry Launched to Determine the Company's Compliance With GDPR

The Irish data regulator launched an investigation to determine Google's compliance with a European privacy law when it was developing its PaLM 2 artificial intelligence model.
The Data Protection Commission on Thursday said it is probing whether Google assessed privacy risks ahead of developing the Pathways Language Model.
Google launched the multilingual generative AI model last year. The model has reasoning and coding capabilities and is embedded in 25 Google products.
Under the General Data Protection Regulation, companies developing technology with potentially "high risk to the rights and freedoms of natural persons" are required to conduct a data protection impact assessment. When the companies identify risks, they are obliged to deploy adequate privacy measures, such as obtaining user consent to process data.
Because many artificial intelligence companies rely primarily on scraped data to train their AI models, regulators worry that AI developers are violating Europeans' privacy.
The Irish investigation is part of a broader effort by European data regulators to identify potential privacy violations by AI developers, the DPC said Thursday. Social media giant Meta in June postponed the launch of its AI systems trained with data taken from European Instagram and Facebook users, weeks after a rights group lodged a complaint against the company with 11 European data regulators (see: Meta's AI Model Training Comes Under European Scrutiny).
The Irish regulator has also clashed with X, formerly Twitter, over its use of tweets to train the company's Grok chatbot (see: Breach Roundup: Irish DPC Ends Case Against GrokAI).
Google did not immediately respond to a request for comment.