EU AI Act Will Be an 'Enabler for Trust,' Lawmaker Says
Trilogue Talks in Final Stages, Says European Parliament Member

European lawmakers behind an artificial intelligence regulation that's close to finalization predicted Thursday the law will set global standards.
Speaking at the European Commission's fourth AI Alliance Assembly, Carme Artigas, Spain's secretary of state for digitalization and artificial intelligence, praised the regulation for its "balanced," risk-based approach.
"This is one technology that can continue to evolve without human agency, but if we regulate too much then we could kill innovation. And if we regulate without control, then we might not be able to protect citizens on time. So, we have struck the right balance, which also acts as an enabler for trust and innovations," she said.
The European Parliament in June approved the regulation, framed as a way to mitigate AI's potential negative effects on society. The AI Act is now in its final stages, which include negotiations between Parliament members and the Council of the European Union, the body representing member state governments. Backers say the proposal is set to come into force in early 2026 (see: EU Will Stand Up Office to Enforce AI Act, Says EU Lawmaker).
Italian AI Act co-rapporteur Brando Benifei said the "risk pyramid" within the draft text is a model that can be replicated across the world. The act assigns different levels of risk to certain applications and bans AI-based social scoring and real-time biometric surveillance in public places.
"We want AI to develop in Europe, and this is why we want to build a trustworthy ecosystem," Benifei said.
Some critics said the proposal defers too much to industry, giving AI developers the leeway to assess their systems as low-risk and allowing them to skip security checks put in place for high-risk AI systems.
Dragoş Tudorache, a Romanian representative and co-rapporteur of the AI Act, said those concerns had been addressed in the trilogue process, and Parliament and the Council have accepted a position that will not "over- or under-regulate" the technology while ensuring that the "compliance burden" on companies is reduced.
"There is a huge market that is going to open up as a result of us setting these rules, setting the standards, and creating the space for trustworthy AI," Tudorache said, addressing concerns that the legislation could stymie innovation. "This will benefit startups who are looking into opportunities for creating tools for trustworthy AI," he added.
Tudorache said lawmakers in the trilogue are now mulling the rules on foundation models, which were added to the legislation following the emergence of OpenAI's large language model chatbot, ChatGPT.
Speaking at the event, EU Internal Market Commissioner Thierry Breton announced a new AI Start-Up Initiative and testing center that will give AI startups access to supercomputers to train their models faster and to test them before market launch.
European startups will be able to access sector-specific and language data using the newly passed Data Act and Data Governance Act to train their models, Breton added.