Will the technical discussions reach an end before the EU elections?
The final round of interinstitutional negotiations drew intense scrutiny over the past few days. Negotiators have been celebrating what they describe as a “historic deal” that will make Europe “the best place in the world to develop artificial intelligence”, in the words of Commissioner Thierry Breton.
However, the marathon 36-hour bargaining session does not put an end to discussions. No fewer than 10 technical meetings have been scheduled to translate the political deal into proper legal terms by February. As we have seen in many recent legislative files, the devil can hide in the details: a single word or comma can put sensitive issues back on the table and derail discussions. For now, France, Germany, Italy, Hungary and Poland have stressed that they remain cautious as long as a consolidated version of the text is not available, whereas Finland, Sweden and Slovakia have already expressed doubts about the agreement in relation to law enforcement powers and national security prerogatives.
Once finalized, the text will need the formal nod of both Parliament and Council to become EU law. If approved before the June 2024 European elections, the new rules would apply as early as 2026.
The political deal can be briefly summarized as follows; feel free to reach out should you need further details.
1. Guardrails for general purpose artificial intelligence systems (GPAI) and foundation models
Please note that the terminology and definitions set out below should be refined and clarified during the technical discussions.
The AI Act is likely to include provisions concerning what has been dubbed “general-purpose AI” (GPAI) systems, including AI models such as ChatGPT or Google Bard. GPAI models would be broadly defined as any model trained on a large amount of data using self-supervision at scale and capable of performing a wide range of tasks. Models used for research, development and prototyping activities before release on the market would, however, remain outside the scope of the AI Act.
The final version of the text would feature a mix of horizontal requirements for all providers of such AI systems, including a transparency obligation regarding the datasets used to train algorithms and a duty to provide information about the model’s use and functioning. On top of this first layer of requirements, the deal would impose stricter obligations on AI models with a high (or systemic) impact on society. These would include the assessment and mitigation of systemic risks and the mandatory conduct of adversarial testing to detect errors or malfunctions.
France and Germany’s efforts to protect their AI champions seem to have paid off, since the political deal would include a tailor-made exemption for non-systemic open-source models. However, given that French President Emmanuel Macron subsequently criticised the deal, France is likely to try to modify these provisions.
2. A bargaining balance between Council and Parliament on the list of prohibited applications and exemptions for law enforcement authorities
The Council managed to obtain some key concessions from Parliament on the use of biometric identification by law enforcement authorities, which was the main bone of contention between the two parties. In exchange for the introduction of several exemptions for the use of such technology, the European Parliament seems to have obtained an extension of the list of prohibited AI practices to include biometric systems categorising people according to sensitive data, such as sexual orientation or religious beliefs.
Of note, the AI Act would feature a definition of “biometric data” that differs from the one provided by the General Data Protection Regulation (GDPR). The AI Act definition would have a broader scope, as the identification of individuals would no longer be a criterion. Any data related to the physical, physiological, or behavioural characteristics of a natural person would therefore qualify as biometric data.
3. Clear obligations for high-risk AI systems
Discussions on the regulation of high-risk AI systems – the bulk of the AI Act – ran quite smoothly. The Council and Parliament clarified the criteria used to classify a system as “high-risk” and added a possibility for providers to self-assess whether or not their systems fall into this category. However, this derogation would apply to a limited number of use cases, such as systems that are only meant to improve the results of tasks performed by a natural person.
The idea of introducing a fundamental rights impact assessment, initially proposed by the Parliament, also made its way through the discussions. However, its scope is more limited, as it would only apply to public sector bodies and private actors providing essential public services.
4. A revised governance architecture
The final deal would provide for a three-layered governance architecture featuring: a new EU supervisory authority (the European AI Office) responsible for enforcing the provisions relating to high-impact general-purpose AI models; national bodies coordinating with each other within a new AI Board; and a scientific committee.
Of note, France proposes that the European AI Office act as a “trusted third party” in relation to the disclosure of training data. It could thus be entrusted with the power to inform concerned rights holders as to whether their content has been used to train AI systems.
5. A gradual enforcement
The provisions are expected to apply gradually: the prohibitions six months after the regulation’s entry into force, the provisions on general-purpose AI governance after twelve months, and all other provisions after twenty-four months.