This is a three-part series in which we'll do a deep dive into every aspect of the new European Data Protection Board (EDPB) Pseudonymization guidance. This first article examines how data privacy best practices have evolved into clearly defined regulatory guidelines with ever-increasing specificity due to landmark European Court rulings and how those rulings already impact databases and sharing protocols. The second article will look very closely at the EDPB guidelines themselves and their influence on privacy-enhancing technologies (PETs). Finally, we'll look at the specific implications for organizations that use pseudonymized data and how they can follow this guidance on a practical level.
------------
For those who use, share, analyze, or simply store personal data, maintaining people's privacy while also leveraging the data in ways the organization needs is an ongoing balancing act. Throw in the labyrinth of international laws, regulations, rules, and guidance, and suddenly, analytics feels like a liability minefield.
Solving this complex issue is Blind Insight's raison d'être: it's why we exist. We believe organizations should be able to leverage their data while maintaining compliance without headaches or big, expensive privacy and engineering teams. We're huge believers in Privacy Enhancing Technologies (PETs) and even bigger champions of anonymization and pseudonymization, which transform personal data so individuals are not directly identifiable.
So when the European Data Protection Board issued new pseudonymization guidelines, you bet we took notice. That's why we're publishing this three-part series on EDPB Pseudonymization guidance.
European data protection law has increasingly emphasized pseudonymization as an approved way to protect privacy while still using data.
From a legal and regulatory standpoint, pseudonymization and anonymization have historically been treated as distinct concepts, yet the boundary between them has remained unclear. Until recently, precise guidelines have been lacking, and two EU General Court rulings have actually brought this distinction into question rather than fully clarifying it.
One impactful case involved Deloitte, a Big Four accounting firm; another landmark ruling involved Mr. Breyer, a private citizen who sued the Federal Republic of Germany regarding how federal websites stored his personal data.
The Breyer case established that dynamic IP addresses are personal data if an online media services provider has legal means to identify the person behind them. The Deloitte case established that disclosing pseudonymized data does not count as disclosing personal data if the recipient cannot identify who the pseudonymized information refers to. In other words, for that recipient, it's considered anonymized.
The latest guidelines issued by EDPB, based mainly on these two rulings, have the potential to significantly impact privacy-enhancing technologies and advanced and emerging technologies like searchable encryption.
European data protection has long distinguished between anonymous data (data from which no individual can be identified) and personal data (data relating to an identifiable individual).
Under the 1995 Data Protection Directive (95/46/EC, the precursor to the GDPR), basic privacy, security, and data guidelines were established. European member states, however, were given leeway to develop their own specific laws and ways to implement the directive.
The Data Protection Directive primarily focused on anonymization, truly stripping data of identifiers, since, at the time, “personal data” was defined very broadly as virtually any information associated with an individual. Because these were still fairly early days in terms of internet usage, and certain privacy-enhancing (and invading) technologies had not yet been developed, the directive did not explicitly define or deal with pseudonymization.
Still, the trillion-dollar question of how to maintain a person's privacy while leveraging valuable data for necessary and, at times, life-saving functions and processes remains.
It wasn't until the early 2000s that privacy regulators began to explore ways to de-identify data without fully anonymizing it. Early techniques used to achieve this included coding or hashing identifiers. This reduced privacy risk while preserving data utility.
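As an illustration of that early approach, here is a minimal sketch using a keyed hash (HMAC) rather than a plain hash; the key value and record fields are hypothetical, and in practice the key would have to be stored separately from the pseudonymized output:

```python
import hmac
import hashlib

# Hypothetical secret key; this "additional information" must live
# separately from the pseudonymized dataset (e.g., in a key vault).
SECRET_KEY = b"example-key-kept-elsewhere"

def pseudonymize(identifier: str) -> str:
    """Replace a direct identifier with a keyed hash (HMAC-SHA256).

    Unlike a plain hash, a keyed hash cannot be reversed by brute-forcing
    common values (emails, names) without access to the key itself.
    """
    return hmac.new(SECRET_KEY, identifier.encode("utf-8"), hashlib.sha256).hexdigest()

# The same input always maps to the same pseudonym, so records stay
# linkable for analytics even though the identifier is removed.
record = {"email": "alice@example.com", "purchase_total": 42.50}
safe_record = {**record, "email": pseudonymize(record["email"])}
```

Because the mapping is deterministic, analysts can still join and count records per pseudonym, which is exactly the "reduced risk, preserved utility" trade-off described above.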
But it wasn't until over a decade later, in 2014, that the Article 29 Working Party (WP29), an independent privacy and data advisory group and precursor to the EDPB, clarified that pseudonymization is not the same as anonymization.
Rather, the WP29 said pseudonymization “merely reduces the linkability” of data to a person but does not irreversibly break that link. As such, since the data can still be re-associated with a person using additional information and sometimes legal means, pseudonymous data is still personal data.
However, that clarification wasn't officially codified into enforceable law until 2018.
It was then that the GDPR formally defined pseudonymization in Article 4(5) as processing personal data in such a manner that it can no longer be attributed to a specific data subject without additional information, provided that this additional information (such as the “key” linking codes to identities) is kept separately and secured.
Now that pseudonymization was clearly defined, Recital 26 of the GDPR further clarified that pseudonymized data is still "personal data" whenever additional means exist that could be used to re-link the data to an individual.
However, it's important to note that Recital 26 does offer guidance as to when pseudonymization becomes anonymous information.
If any entity can reasonably re-identify an individual, whether the original data owner or another party, then under the Breyer ruling and strict GDPR interpretation, the data is still considered pseudonymized and thus “personal.” By contrast, the Deloitte decision introduced a more recipient-focused approach: if the specific data recipient truly does not have access to the re-identification keys or the legal/technical means to obtain them, the data may be treated as effectively anonymous for that recipient. The newly issued guidelines aim to reconcile these perspectives, clarifying whether data remains “personal” simply because some other entity could eventually unlock its identifiers.
Since pseudonymized data can still significantly mitigate privacy risks, the GDPR encouraged its use as a safeguard. Pseudonymized data is mentioned in provisions on data minimization, purpose limitation, security, and accountability.
For example, Article 6(4) notes pseudonymization as a factor for assessing how data should be classified when it is used for purposes beyond the reason it was originally collected – guidance that has huge implications for analytics. Article 32 even lists pseudonymization as an appropriate security measure.
Both the Breyer and Deloitte rulings are worth closer examination as they give the regulatory articles enforceability and add nuance to the GDPR's definitions.
The Deloitte ruling offered a pivotal interpretation: pseudonymized data “will not be personal data” under EU law when transferred to a party that cannot link it to individuals. This underscored a key component of the earlier Breyer ruling, which emphasized that only realistic, plausible means of re-identification would keep data classified as personal information.
The Deloitte decision marked a turning point in how strictly to interpret pseudonymized data. It provides more certainty for businesses using pseudonymization, suggesting that sharing data in coded form can place it outside GDPR when the recipient truly lacks re-identification means.
Increased Clarity, But Not Without Lingering Questions
However, the Deloitte ruling also introduced some legal tension. The EDPS has appealed the ruling, arguing that it’s wrong to consider only the recipient’s viewpoint because doing so undermines the principle that personal data should remain protected regardless of who holds the re-identification keys.
The EDPB’s draft guidelines (2025) echo both Deloitte and Breyer, essentially stating that if someone somewhere can re-identify, the data is not fully anonymous.
Still, within all this legal and regulatory grey area, one thing remains clear: the Breyer and Deloitte decisions serve to nudge companies toward strong pseudonymization and segregation of data to maintain compliance.
Until there’s harmony between court rulings and regulatory guidance, it's safest to design systems that assume GDPR applies to pseudonymized data. However, real-world considerations often hinge on why the data is being used (saving lives or detecting fraud may warrant different treatment than commercial analytics). Always consult your legal team to decide whether the Deloitte rationale and previous rulings can strengthen your defense-in-depth argument. After all, in the event of a breach, properly encrypted data alone typically isn’t enough for an attacker to re-identify individuals.
In practice, companies should be cautious until the appeal is resolved or the EDPB updates its stance. The safest approach is to assume pseudonymized data is still personal data, requiring GDPR compliance. This is especially true for the organizations that hold the encryption key.
Deloitte also has implications for cross-border data strategies. The ruling implies that when EU entities pseudonymize personal data and then share it with non-EU partners or cloud processors who have no access to the identifying keys, the data might be considered de facto anonymous for those recipients. This, in turn, affects whether certain data transfers are seen as transfers of “personal data” at all. If not, data transfer restrictions could potentially be eased in some scenarios.
For example, a U.S. service provider receiving only coded data might argue that it’s not processing personal data since there's no reasonable way for it to decrypt the information.
Both cases offer valuable guidance on how to architect global databases under GDPR. The Breyer ruling urges companies to assess identifiability from all angles. If data fragments can be combined through likely means (even across entities), you must treat the data as personal and protect it accordingly.
The Deloitte decision also suggests a strategy for distributed data storage: keep the “additional information” like mapping tables or decryption keys in a separate jurisdiction or system from the pseudonymized data.
For instance, a bank could store personal identifiers within the EU but transfer only tokenized or encrypted records to databases in other regions. If done properly, the data outside the EU would not allow identification by itself. This setup could be seen as aligning with the General Court’s approach, making the foreign-held dataset effectively non-personal from that foreign entity’s perspective.
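As a hedged sketch of that separation (with hypothetical names, and standing in for what would really be a hardened, EU-resident key store rather than an in-memory dictionary), the pattern might look like:

```python
import secrets

class TokenVault:
    """Hypothetical vault standing in for an EU-resident key store.

    Maps random tokens back to real identifiers; only whoever holds
    the vault can re-link tokens to individuals.
    """
    def __init__(self):
        self._token_to_identity = {}

    def tokenize(self, identifier: str) -> str:
        # A random token carries no information about the identifier,
        # unlike a hash, so it cannot be brute-forced at all.
        token = secrets.token_hex(16)
        self._token_to_identity[token] = identifier
        return token

    def re_identify(self, token: str) -> str:
        return self._token_to_identity[token]

vault = TokenVault()  # stays within the EU
# Only the tokenized record leaves the jurisdiction; without access to
# the vault, the foreign recipient has no reasonable means of re-linking.
exportable = {"customer": vault.tokenize("alice@example.com"), "balance": 1200}
```

The design choice here is random tokens over keyed hashes: a random token reveals nothing about the underlying identifier even if the recipient guesses candidate values, at the cost of having to keep the mapping table available for any legitimate re-identification.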
However, organizations must ensure that re-identification is truly not “reasonably likely.” That means robust technical controls, including no secret backdoors to recombine data, and legal barriers, such as contracts prohibiting attempts to re-identify data subjects. Ideally, the laws in the recipient country would also prevent any unauthorized re-linking.
Ultimately, pseudonymization has crystallized into a privacy-protecting method now formally recognized by governing bodies for its ability to protect personal data while still allowing for technological advancements and innovations in analytics.
In our next article in this series, we'll closely examine the 2025 EDPB guidelines themselves and what they mean on a practical level for organizations.
Blind Insight is a new, developer-friendly tool that makes it easy for organizations to build privacy-preserving applications that leverage searchable encryption. Check out the free Beta to see the power of SE for yourself.