Published on 2 July 2024 by Claire Filliatre

European Commission establishes AI Office to strengthen EU leadership in trustworthy Artificial Intelligence

On May 29, 2024, the European Commission unveiled the Artificial Intelligence Office aimed at “enabling the future development, deployment and use of AI in a way that fosters societal and economic benefits and innovation, while mitigating risks”.

This new Office will play a key role in implementing the European Artificial Intelligence Act, strengthening the development and use of safe and trustworthy artificial intelligence, and positioning the European Union as a leader in international discussions.


In April 2021, the European Commission (the “Commission”) proposed a new regulatory framework and a coordinated plan with Member States for excellence and trust in Artificial Intelligence (“AI”). The proposal for a Regulation laying down harmonized rules on Artificial Intelligence (known as the “AI Act”) was approved by the European Parliament in March 2024.

At the same time, in January 2024, the Commission launched a package of measures to support European startups and small and medium-sized enterprises (“SMEs”) in the development of trustworthy AI that respects the European Union’s values and rules. These measures aim to facilitate and enhance cooperation on AI across the European Union to boost its competitiveness.

It is in this context that the creation of the AI Office was announced on May 29, 2024.

The AI Act, a unique framework in the world

The overall objective of the AI Act is to protect fundamental rights, democracy, the rule of law and environmental sustainability from high-risk AI, while boosting innovation and establishing Europe as a leader in the field.

It creates obligations with respect to AI, based on its potential risks and level of impact. For this purpose, four risk levels have been identified: minimal risk, limited risk, high risk, and unacceptable risk. It also introduces dedicated rules for general-purpose AI models (“GPAI”).

As such, the new rules ban some AI applications that threaten citizens’ rights, including biometric categorization systems based on sensitive characteristics (race, sexual orientation, political or religious opinions, etc.) and untargeted scraping of facial images from the internet or CCTV footage to create facial recognition databases.

Emotion recognition in the workplace and schools, social scoring, predictive policing (when it is based solely on profiling persons or assessing their characteristics), and AI that manipulates human behavior or exploits people’s vulnerabilities are also forbidden.

Generative AI systems, like ChatGPT, are not classified as high-risk, but must comply with transparency requirements and EU copyright law:

  • Disclosing that the content was generated by AI;
  • Designing the model to prevent it from generating illegal content;
  • Publishing summaries of copyrighted data used for training.

High-impact general-purpose AI models that might pose systemic risk, such as the advanced GPT-4 model, must undergo thorough evaluations, and any serious incidents must be reported to the Commission.

Content that is either generated or modified with the help of AI – images, audio or video files (for example deepfakes) – needs to be clearly labelled as AI-generated so that users are aware when they come across such content.

The AI Act should be published in the Official Journal of the European Union in June-July 2024.

The structure of the new AI Office

The AI Office is part of the Directorate‑General for Communications Networks, Content and Technology (DG Connect) of the Commission, and will ultimately employ more than 140 staff members, including technology specialists, administrative assistants, lawyers, policy specialists, and economists.

It consists of five Units:

  • Regulation and Compliance Unit. This Unit will coordinate the regulatory approach to facilitate the uniform application and enforcement of the AI Act across the Union, working closely with Member States. It will also contribute to investigations and the handling of sanctions;
  • AI Safety Unit. This Unit focuses on the identification of systemic risks of general-purpose models, possible mitigation measures as well as evaluation and testing approaches;
  • Excellence in AI and Robotics Unit. This Unit supports and funds research and development of AI. It coordinates the GenAI4EU initiative[1], stimulating the development of models and their integration into innovative applications;
  • AI for Societal Good Unit. This Unit designs and implements the AI Office’s engagement in AI for societal good, in particular by promoting AI systems at the service of society, such as weather modeling or cancer diagnostics;
  • AI Innovation and Policy Coordination Unit. The Unit oversees the implementation of the EU AI strategy, monitors trends and investment, stimulates the uptake of AI and the establishment of AI factories[2], and fosters an innovative ecosystem by supporting regulatory “sandboxes”[3] and real-world testing.

The AI Office works under the guidance of a Lead Scientific Adviser and an Adviser for International Affairs.

Tasks of the AI Office

Supporting the implementation and enforcement of the AI Act

The AI Office will ensure the coherent implementation of the AI Act across the Member States.

It will coordinate the creation of a governance system, including by preparing the establishment of advisory bodies at the EU level and monitoring the setting up of relevant national authorities. It will also monitor, supervise, and enforce the AI Act requirements on general-purpose AI (GPAI) models and systems within the European Union.

Concretely speaking, this includes developing tools, methodologies, and benchmarks for evaluating the capabilities and reach of GPAI models, and analyzing emerging, unforeseen systemic risks stemming from their development and deployment.

The AI Office will investigate potential incidents of infringement or non-compliance and, as the case may be, demand corrective actions. It will provide coordination support for joint investigations conducted by one or more competent authorities.

In cooperation with AI developers, the scientific community and other stakeholders, the AI Office will coordinate the drawing up of state-of-the-art codes of practice with specific, measurable objectives, including key performance indicators, and monitor and evaluate the implementation and effectiveness of these codes.

In this context, the AI Office will prepare guidance and guidelines, as well as implementing and delegated acts, to support the effective implementation of the AI Act and monitor compliance.

Strengthening the development and use of trustworthy AI

The AI Office, in collaboration with relevant public and private stakeholders and the startup community, will contribute to this objective by advancing actions and policies to reap the societal and economic benefits of AI across the EU.

In particular, it will provide technical support, advice and tools enabling ready-access to AI sandboxes, real-world testing and other European support structures for AI uptake, such as the European digital innovation hubs, and the AI factories.

The AI Act indeed provides that Member States must establish at least one AI regulatory sandbox, either independently or by joining other Member States’ sandboxes, within two years of the Act’s entry into force.

The overall objective is to foster innovation, particularly for small and medium-sized enterprises, by facilitating the training, testing, and validation of AI systems, under the supervision of the competent national authorities, before they are placed on the market or put into service.

Such supervision is designed to provide legal clarity and improve regulatory expertise. The AI Office must be notified by competent national authorities of any suspension in sandbox testing due to significant risks. These authorities must also submit publicly available annual reports to the AI Office detailing sandbox progress, incidents, and recommendations.

In addition, the AI Office will develop and maintain a single information platform providing easy-to-use information for all operators across the European Union, organize appropriate communication campaigns to raise awareness about the obligations arising from the AI Act, and evaluate and promote the convergence of best practices in public procurement procedures in relation to AI systems.

Fostering international cooperation

The AI Office will ensure a strategic, coherent and effective European approach on AI at the international level.

To achieve this, it will:

  • Promote the European Union’s approach to trustworthy AI, including collaboration with similar institutions worldwide;
  • Foster international cooperation and governance on AI, with the aim of contributing to a global approach to AI;
  • Support the development and implementation of international agreements on AI.

Necessary cooperation with institutions, experts and stakeholders

To ensure well-informed decision-making, the AI Office will collaborate with Member States and the wider expert community through dedicated fora and expert groups.

At the EU level, the AI Office will work closely with the European Artificial Intelligence Board, composed of representatives of the Member States.

The Scientific Panel of independent experts will ensure a strong link with the scientific community and further expertise will be gathered in an Advisory Forum, representing a balanced selection of stakeholders, including industry, startups and SMEs, academia, think tanks and civil society.

The AI Office may also partner up with individual experts and organizations, and create fora for cooperation of providers of AI models and systems.

[1] The GenAI4EU initiative aims at promoting the adoption of generative AI in 14 strategic sectors (healthcare, biotechnology, mobility, robotics, etc.) and the public sector.

[2] AI factories are dynamic ecosystems that foster innovation, collaboration, and development in the field of AI. They bring together the necessary ingredients – computing power, data, and talent – to create cutting-edge generative AI models.

[3] Regulatory sandboxes are controlled environments where companies can test their products and services while engaging with relevant regulators.