The Critical Role of Pseudonymization in the Age of Artificial Intelligence: Moving Beyond Binary Approaches

Pseudonymization plays a central role in data protection in the age of artificial intelligence (AI) and data-driven innovation. While emerging technologies offer unprecedented opportunities across many sectors, they also present significant challenges for confidentiality and the protection of individual rights. Privacy Enhancing Technologies (PETs), of which pseudonymization is an integral part, are essential for balancing the exploitation of data with respect for fundamental rights.

The legal debate on pseudonymization traditionally focuses on two legal approaches: the relative approach, which considers the status of data in relation to the actual capacity for re-identification by a specific party; and the absolute approach, which is based on the abstract possibility of re-identification, regardless of who holds the data. European jurisdictions have not definitively resolved this debate but place emphasis on the practical likelihood of re-identification. Some data protection authorities are nevertheless adopting a strict stance, favoring an “all or nothing” approach: either the data is irreversibly anonymized, in which case the General Data Protection Regulation (GDPR) does not apply (“nothing”); or there is an abstract possibility of re-identification, in which case the data is considered personal and the GDPR fully applies, meaning pseudonymization does not offer any “exemption” to its controller (“all”).

While acknowledging that data protection authorities are right to emphasize the distinction between pseudonymization and anonymization, this study demonstrates that pseudonymization is widely recognized at the European level as an effective technique for mitigating the risks associated with the processing of personal data. The GDPR assigns particular importance to pseudonymization, mentioning it several times as a key risk-reduction measure. Other recent European Union (EU) legislation, along with the EU Agency for Cybersecurity (ENISA), underscores its crucial role as an essential security measure. National case law further recognizes its value in minimizing risks to individuals’ rights.

Pseudonymization is also proposed as a practical solution for facilitating the secure sharing of data within European common data spaces, particularly in the healthcare sector, and for the responsible development of AI. It preserves the utility of data while offering, under certain conditions, robust protection, thereby helping to reconcile technological innovation with respect for individual rights. 

It is therefore crucial to move beyond the binary debate between “relative” and “absolute” approaches and recognize pseudonymization as an essential component of a comprehensive data protection strategy. By adopting a balanced, pragmatic, risk-based approach, it is possible to promote responsible use of data that respects the rights of individuals while fostering innovation. Enhanced collaboration between regulators, data controllers and stakeholders is essential for developing clear and effective guidelines, ensuring data protection is fit for the age of artificial intelligence.

The study is available below in its full version in French.