
Omnibus, GDPR, AI Act: what does the leak reveal?

Leïla Sayssa
November 14, 2025 · 11 minutes read time

For several days now, a leak has been making waves in Brussels: a draft of the future “Digital Omnibus” regulation, which the European Commission is due to present officially on 19 November 2025.

The project’s stated goal is to simplify and “harmonize” the European digital framework (GDPR, AI Act, ePrivacy, Data Act, etc.): remove overlaps, clarify obligations, and reduce the burden on certain companies, in particular SMEs.

Initially, the Commission planned to carry out a “Digital Fitness Check” in 2026, gather solid evidence, and then propose a targeted and thoroughly prepared revision of the GDPR and other digital texts.

The current approach is quite different: an accelerated procedure, in which EU services have only five working days to review a draft text. This clearly moves away from an evidence-based approach.

In this article, we therefore break down what the leaked Omnibus draft actually contains, its potential impact on the GDPR and the AI Act, and what DPOs and lawyers should start anticipating, without giving in to panic.

Important: this is only a draft. The official text may be different and will still need to be discussed and amended by the European Parliament and the Council. In other words, nothing is set in stone.


1. What is the “Digital Omnibus” project?

The “Digital Omnibus” is a horizontal draft regulation aimed at revising, in a single text, several major European digital laws:

  • GDPR

  • ePrivacy Directive

  • Data Act

  • Certain cybersecurity and sectoral data rules

  • And a separate text specifically targeting the AI Act and its application timeline.

The Commission’s argument: too many texts, too much overlap, too much administrative burden, especially for SMEs. The answer: a “cleanup” of the digital acquis through “targeted adjustments.”

Problem: the leaked draft does more than clarify. It touches on structural concepts of the GDPR (definition of personal data, sensitive data, automated decision-making, access to terminals, etc.).


2. What the leak would change for data protection

2.1. A more “subjective” definition of personal data

Today, personal data means any information relating to an identified or identifiable person, taking into account the means reasonably available to any actor that may process those data.

The Omnibus project would introduce a much more subjective approach:

Information would no longer be personal data for an entity that does not itself have the reasonable means to identify the person, even if a third party can do so.

This interpretation was recently adopted by the CJEU in EDPS v SRB, which broke with the long-assumed “absolute” approach to the notion of personal data.

To learn more about this ruling, see our article here.

Possible consequences:

  • Pseudonymous identifiers, cookies, marketing IDs, or logs could be classified as “non-personal” for certain actors.

  • Part of the ecosystem could fall outside the scope of the GDPR… even though the CJEU has, for 20 years, taken a broad interpretation of the notion of personal data.

For DPOs, this creates a real fragmentation risk: the same dataset could be subject to the GDPR in the hands of company A, but not in the hands of company B.


2.2. Sensitive data: protection limited to what is “directly revealed”

Another major shift concerns “special categories of data” (Article 9 GDPR).

The Omnibus project:

  • Narrows the definition of “health data” to data that directly reveal health status.

  • Applies the same logic to other sensitive categories (racial or ethnic origin, political opinions, religious beliefs, sexual orientation, etc.): only data that directly reveal them would be protected under Article 9.

Anything that can be inferred (profiling, inferences, correlations) would fall under a much less protective regime. This is exactly the opposite of the CJEU’s consistent position and of the modernized Convention 108.

In addition, the text adds new grounds for processing sensitive data, in particular for:

  • The development of AI systems

  • And, in some cases, their “operation,” on the basis of legitimate interest, with “appropriate measures.”

In other words, processing highly sensitive data in AI models could become much easier as soon as innovation and a few generic “safeguards” are invoked.


2.3. Data subject rights: erosion by a thousand cuts

Critics call it “death by a thousand cuts”: not necessarily isolated revolutions, but a gradual weakening of several safeguards.

Among the notable points in the draft:

  • Simplifying the record of processing activities for organizations with fewer than 750 employees,

  • Allowing data subject requests to be rejected where they are excessive or made for purposes other than data protection (e.g. preparing a stronger litigation file),

  • Article 13 GDPR (information)

    • Possible exemption from information duties where processing takes place within a “clear and delimited” relationship, if it can reasonably be assumed that the person already knows the purposes, legal basis and identity of the controller.

    • Limits: this would not apply in the case of transfers to other recipients, transfers outside the EU, or high-risk processing.

  • Automated individual decisions (Article 22)

    • The substance would remain broadly similar, but the wording would shift from a “prohibited unless…” logic to an “allowed unless…” logic. A symbolic but important shift in future interpretation.
  • Personal data breach notifications (Article 33)

    • Only breaches resulting in a high risk to individuals would have to be notified to the authority.

    • Deadline extended from 72 hours to 96 hours.

  • Lists of processing operations requiring a DPIA

    • These would be established directly by the EDPB, rather than being drawn up by each national authority (the CNIL, etc.) and then submitted to the EDPB.

Each time, the logic is the same: fewer notifications, less paperwork… but also less visibility for individuals and less local oversight.


2.4. Terminals and cookies: ePrivacy partially merged into the GDPR

The Omnibus project partially merges the ePrivacy rules into the GDPR through new Articles 88a, 88b and 88c on:

  • Access to data stored on terminals (computer, smartphone, connected devices).

  • The possibility of expressing consent via automated signals (a “Do Not Track” / Privacy Signal type of mechanism).

Concretely:

  • Some access to terminals would “always” be possible without consent (routing a communication, providing a requested service, measuring audience, ensuring security).

  • For other uses (ads, tracking, AI training…), Article 6 GDPR legal bases would apply, including legitimate interest in some cases, whereas today the ePrivacy Directive effectively requires consent for most cookies.

  • Browsers would have to implement a standardized consent/objection signal, machine-readable by websites; controllers would have to respect it for at least 6 months.

  • Media outlets could benefit from exceptions given their dependence on advertising revenue.
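The draft does not specify a technical format for this standardized signal, but the existing Global Privacy Control (GPC) proposal, in which browsers send a `Sec-GPC: 1` request header, gives an idea of how it could work. The sketch below is purely illustrative: the header name is borrowed from GPC, and the list of “always allowed” purposes is an assumption based on the leak, not a defined standard.

```python
# Illustrative sketch only: the Omnibus draft does not define a wire format.
# We borrow the shape of the Global Privacy Control (GPC) proposal, where
# browsers send a `Sec-GPC: 1` request header to signal an objection.

def honors_privacy_signal(headers: dict) -> bool:
    """Return True if the browser sent a machine-readable objection signal."""
    # HTTP header names are case-insensitive, so normalize before lookup.
    normalized = {k.lower(): v.strip() for k, v in headers.items()}
    return normalized.get("sec-gpc") == "1"

def allowed_purposes(headers: dict) -> list[str]:
    """Purposes a site could pursue on this request (hypothetical mapping)."""
    # Accesses the draft would "always" allow without consent (per the leak).
    purposes = ["routing", "requested-service", "audience-measurement", "security"]
    if not honors_privacy_signal(headers):
        # Other uses (ads, tracking) would rest on an Article 6 legal basis,
        # but a received objection signal would have to be respected.
        purposes.append("advertising")
    return purposes
```

A controller would then be expected to honor the signaled choice for at least six months rather than re-prompting the user on every visit.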


3. And what about the AI Act?

The separate “Digital Omnibus on AI” proposal does not rewrite the AI Act, but it adjusts it along two main lines:

  1. Relieve certain procedural obligations

    • Possible exemption from registration in the database of high-risk AI systems when they are used only for narrow or procedural tasks.
  2. Relax the implementation timeline

    • Introduction of a transitional grace period during which authorities could not impose sanctions before August 2027.

    • Obligations to label AI-generated content (deepfakes, disinformation) would also benefit from transitional periods.

In addition, the project provides for a second specific AI Act text in the digital package, more focused on interpretation and implementation than on a deep rewriting of the substantive rules.

👉 In short: the AI Act is not being “stopped,” but its trajectory is being softened and friction reduced for certain use cases, in a context where the aim is clearly to give Europe’s AI players some breathing room.


4. Is the project compatible with the Charter and CJEU case law?

From the perspective of data protection law practice, several issues arise:

4.1. Notion of personal data

  • The CJEU has consistently adopted a broad interpretation (Breyer, Nowak, etc.), considering not only the means of a given actor, but also those of a reasonable third party.

  • The shift toward a “purely subjective” approach seems difficult to reconcile with Article 8 of the Charter of Fundamental Rights and modernized Convention 108.

4.2. Sensitive data and inferences

  • Limiting protection to data that “directly reveal” a health condition, sexual orientation, political opinion, etc., would leave the entire field of inferences without enhanced protection — even though these are precisely what most often drive discrimination.

4.3. AI training & legitimate interest

  • Making the training and operation of AI systems a kind of standalone, preferred basis grounded in legitimate interest runs counter to the GDPR’s principle of technological neutrality: the same rules apply to the same processing, regardless of the technology used.

Several NGOs (noyb, EDRi, Amnesty, etc.), as well as around a hundred civil society organizations, have already called on the Commission to go back to the drawing board, speaking of a “GDPR stripped of its substance.”


5. What should DPOs and lawyers do now?

Spoiler: nothing changes legally today. The GDPR and the AI Act remain applicable in their current versions. But this is the moment to anticipate intelligently.

5.1. Do not “loosen” compliance based on a leak

  • As long as the Digital Omnibus is not adopted, any strategy anticipating a relaxation (for example, allowing more tracking or more AI uses on sensitive data) is legally risky.

  • On the contrary, it is better to document your current constraints: if the framework really does loosen, you will be ready to reassess your risks and legal bases.

5.2. Map your AI uses now

Use this moment to:

  • Clearly identify:

    • The AI systems used (internal / SaaS / API),

    • The processing involved (training, fine-tuning, inference),

    • The types of data (including sensitive or inferred data).

  • Check your current legal bases (consent, performance of contract, legitimate interest) and your DPIAs.

  • Anticipate AI Act obligations (risk classification, data governance obligations, documentation, logs).
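As a rough illustration of such a mapping, the sketch below models one register entry and flags the most obvious gaps. All field names and review rules here are our own illustrative choices, not requirements drawn from the GDPR, the AI Act, or any particular tool.

```python
from dataclasses import dataclass

# Hypothetical, minimal shape for an internal AI-use register entry.
# Field names are illustrative; adapt them to your own register.

@dataclass
class AIUseRecord:
    system: str                  # the AI system (internal tool, SaaS, API)
    deployment: str              # "internal", "saas" or "api"
    processing: list[str]        # "training", "fine-tuning", "inference"
    data_categories: list[str]   # include sensitive and inferred data
    legal_basis: str             # "consent", "contract", "legitimate-interest"
    dpia_done: bool = False
    high_risk_ai: bool = False   # provisional AI Act risk classification

def needs_review(rec: AIUseRecord) -> list[str]:
    """Flag compliance gaps worth closing before the framework shifts."""
    gaps = []
    if not rec.dpia_done and ("sensitive" in rec.data_categories or rec.high_risk_ai):
        gaps.append("missing DPIA")
    if rec.legal_basis == "legitimate-interest" and "training" in rec.processing:
        gaps.append("document legitimate-interest balancing test")
    return gaps
```

Keeping entries in this kind of structured form makes it cheap to re-run the review logic if, for example, the definition of personal data or the legitimate-interest regime changes.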

5.3. Closely follow discussions on:

  • The redefinition of personal data (huge impact on the qualification of your datasets, logs, IDs).

  • The new regime for terminals & cookies (machine-readable consent signal, potential “do not track” models).

  • The room for maneuver offered for training AI models on personal data.

The idea is not to rebuild your entire compliance posture every time a rumor appears, but to include Omnibus in your structured monitoring.


6. How a platform like Dastra can help you stay “Omnibus-proof”

If this project goes all the way through (even partially), complexity will not disappear — it will just change shape. Concretely, a governance platform like Dastra can help you to:

  • Centralize your “AI” processing operations in the register, with specific tags (training, inference, high risk / non-high risk).

  • Link your processing operations to legal bases and the data used (including pseudonymized, inferred, sensitive data), so you can quickly reassess the impact of a change in definition.

  • Track DPIAs and, in the future, AI Act requirements (documentation, mitigation measures, evidence).

  • Record your compliance decisions (for example: why you consider a given dataset personal or not, a given legal basis applicable or not).

  • Automate rights management even in a context where processing becomes more complex (especially if AI use becomes widespread).


Key takeaways

  • The Omnibus project is not yet positive law, but it clearly signals a political will: to ease the GDPR and adjust the AI Act to support AI.

  • The proposed changes affect core pillars (definition of personal data, sensitive data, ePrivacy, automated decisions).

  • They are attracting strong criticism and will have to pass several legislative stages; the text will likely evolve.

  • For DPOs, the right reflex is not to ease off, but to document, map AI uses, and prepare to adapt compliance without sacrificing individuals’ rights.

