
EU's New AI Office Is Set to Begin Operating in June

The Office Will Oversee the Implementation of the European Union's AI Act
The European AI Office will begin operating in June 2024. (Image: Shutterstock)

The European AI Office, which is tasked with overseeing implementation of the AI Act, the first-ever binding regulation on artificial intelligence, is set to begin operating next month.


The AI Office will be headed by Lucilla Sioli, who previously served as the artificial intelligence and digital industry chief within the European Commission's Directorate-General for Communications Networks, Content and Technology.

The AI Act prohibits certain artificial intelligence applications, such as emotion recognition in workplaces and schools, social scoring, and the scraping of CCTV footage to create facial recognition databases. Violations can cost companies up to 35 million euros or 7% of annual global turnover, whichever is higher (see: EU Parliament Approves the Artificial Intelligence Act).

The office will be supported by a staff of 140 technology and policy specialists, lawyers and economists, who will oversee activities such as model risk evaluation and investigate possible violations.

The office, which is set to open on June 16, will soon propose rules on general-purpose AI and will release guidelines on AI systems and prohibitions that are set to kick in toward the end of this year.

The European Union's efforts come amid mounting concerns over artificial intelligence. Many governments, including the United States and the United Kingdom, rely on voluntary codes of practice for guardrails over the burgeoning industry.

These include the Bletchley Declaration, signed by companies including Microsoft and ChatGPT maker OpenAI, in which the signatories agreed to adhere to a set of safety practices such as evaluating models before releasing their AI algorithms to the market. The same companies signed a similar voluntary commitment at the AI Seoul Summit in South Korea, with measures that include developing thresholds to prevent misuse of AI systems and reviewing AI governance frameworks.

Much like the two voluntary declarations, the AI Office will focus on developing trustworthy AI by conducting real-world testing of models and providing access to AI sandboxes, among other measures.

Critics say a key limitation of model evaluation is that algorithms are tested in silos rather than in the context of their wider application, which limits the relevance of recorded outcomes.

Although the AI Act requires general-purpose AI and other high-risk AI systems to undergo compliance measures such as model evaluations, systemic risk assessment and mitigation, and security incident reporting, AI privacy groups say loopholes in the law allow AI companies to categorize themselves as "low-risk" and thereby bypass safety assessments.

"Current evaluation practices are better suited to the interests of companies than publics or regulators," researchers from the Ada Lovelace Institute said. "Within major tech companies, commercial incentives lead them to prioritize evaluations of performance and of safety issues posing reputational risks (rather than safety issues that might have a more significant societal impact)."

About the Author

Akshaya Asokan


Senior Correspondent, ISMG

Asokan is a U.K.-based senior correspondent for Information Security Media Group's global news desk. She previously worked with IDG and other publications, reporting on developments in technology, minority rights and education.

