
Simpler, safer, stricter where it counts: inside the EU's AI Omnibus Deal

Leïla Sayssa
May 7, 2026 · 10 minutes read time


Today, Thursday 7 May 2026, after one failed attempt, the European Union reached a landmark deal that will reshape how artificial intelligence is regulated across the continent. After weeks of intense negotiations, the European Parliament and the Council of the EU have struck a political agreement to significantly simplify and streamline the EU's AI rulebook, in what has become known as the "Digital Omnibus on AI".

Under the Omnibus proposal, companies would have until the end of 2027 to comply with the rules applicable to high-risk AI systems, while providers of AI-enabled machinery would be expressly exempted from certain obligations under this framework. The proposal also introduces a new ban targeting AI systems that generate non-consensual sexually explicit content.

The amendments will now enter the formal approval process, with final adoption expected by August.

The European Commission, which first proposed this package just five months ago in November 2025, welcomed the deal warmly. As Henna Virkkunen, the EU's Executive Vice-President for Tech Sovereignty, Security and Democracy, put it:

"Our businesses and citizens want two things from AI rules. They want to be able to innovate and feel safe. Today's agreement does both. With simpler and innovation-friendly rules, we make it easier to innovate without lowering the bar on safety."

If you've been following the EU AI Act story, this is a big moment. If you haven't, don't worry. Here's everything you need to know.

First, a quick recap: what is the EU AI Act?

The AI Act (Regulation (EU) 2024/1689) is a piece of legislation designed to regulate and promote the development and commercialization of artificial intelligence systems within the European Union.

Proposed by the European Commission in April 2021, the AI Act was published in the Official Journal on 12 July 2024 and entered into force on 1 August 2024, after three years of negotiations.

This initiative aims to foster the development of responsible AI, ensuring fundamental rights, safety, and ethical principles while encouraging and strengthening AI investment and innovation throughout the EU.

The Act was ambitious, groundbreaking, and, as it turned out, a bit too heavy on complexity for many businesses to digest.


So what is the "Omnibus," and why does it exist?

As part of its broader effort to streamline the EU’s digital regulatory framework, the European Commission introduced two proposals under the “Digital Omnibus” initiative in November 2025: one addressing data and cybersecurity legislation, and another focused on the AI Act.

The stated objective of the Omnibus project is to simplify and 'harmonise' the European digital framework (GDPR, AI Act, ePrivacy, Data Act, etc.): eliminating overlaps, clarifying obligations, and reducing the burden on certain businesses, particularly SMEs.

The "Digital Omnibus on AI" is essentially an amendment to the original AI Act.


What the deal actually changes

1. More time for high-risk AI compliance

The most significant change is a timeline extension for companies building or deploying high-risk AI systems.

Under the original Act, obligations for high-risk AI were set to kick in on 2 August 2026. Under the new deal:

  • High-risk AI systems under Annex III (AI systems in sensitive areas like biometrics, critical infrastructure, education, employment, law enforcement, and border management) now have until 2 December 2027 to comply.
  • High-risk AI systems under Annex I (AI systems embedded in products covered by EU safety legislation like medical devices or machinery) get even more time, until 2 August 2028.

Why the delay? The co-legislators acknowledged that the technical standards and guidance documents that companies need to actually implement the rules aren't fully ready yet. This sequencing prevents businesses from being penalised for failing to meet standards that don't yet exist.


2. A complete ban on "nudification" apps, or "AI nudifiers"

The deal introduces a full EU-wide ban on AI systems whose primary purpose is to generate non-consensual intimate images, commonly known as "deepfake nudifiers" or "nudification apps." These tools use AI to digitally undress photos of real, identifiable people without their consent.

The ban covers:

  • Apps that generate images of people in sexually explicit scenarios without consent
  • AI tools that create child sexual abuse material (CSAM)

Companies currently offering such products have until 2 December 2026 to comply, meaning these tools must be taken off the market entirely.

Legislators added this at the trilogue stage of the Omnibus, which is a strong signal that the EU is willing to use AI regulation not just to manage business risk, but to protect individuals from harm, particularly women and children who are disproportionately targeted by this type of abuse.


3. AI watermarking: delayed, but still coming

One of the AI Act's transparency tools requires that AI-generated content (images, audio, video) be labelled or "watermarked" so people know it wasn't made by a human.

Under the Omnibus deal, companies now have until 2 December 2026 (instead of 2 August 2026) to comply.


4. Simpler rules & clearer governance for businesses


The agreement introduces a suite of business-friendly changes:

Extended SME protections. Certain regulatory privileges that were previously available only to small and medium-sized enterprises (SMEs) are now extended to small mid-cap companies, i.e. slightly larger businesses that still lack the compliance resources of major corporations. For Europe's fast-growing AI startup ecosystem, this is meaningful relief.

Resolving the overlap with product safety law. One of the thorniest issues in AI Act implementation has been how it interacts with existing EU product safety legislation; it is the reason previous negotiations reached a dead end. The Omnibus explicitly clarifies this relationship, eliminating duplicative requirements. Companies building AI into industrial products no longer face the prospect of complying with two overlapping regulatory regimes.

Therefore, under the Omnibus proposal, AI-powered machinery regulated by the EU Machinery Regulation would be excluded from the AI Act’s dedicated high-risk obligations and would only need to comply with the requirements established under the relevant sectoral framework.

Stronger AI Office powers. The Commission's AI Office, the body responsible for overseeing the most powerful AI systems, will see its enforcement powers strengthened. This is particularly significant for oversight of general-purpose AI models (like large language models) and AI systems embedded in very large online platforms and search engines, which fall under some of the most complex provisions of the Act.

Wider access to regulatory sandboxes. The agreement expands access to regulatory sandboxes — controlled environments where companies can test AI systems in real-world conditions with regulatory oversight and legal certainty. Notably, the deal includes provision for an EU-level sandbox, giving innovators the option to test at European scale, not just nationally.

Modifications in a nutshell

| Obligation | Status following the Omnibus | Practical impact |
|---|---|---|
| Prohibited AI practices (Art. 5) | Applicable since 2 February 2025. The Omnibus also introduces a new prohibition covering AI systems used to generate non-consensual sexual content and CSAM. | These rules already apply and leave no transition period. Organizations should immediately review their AI use cases to identify and stop any non-compliant practices. |
| AI literacy (Art. 4) | Applicable since 2 February 2025. Providers and deployers must ensure an adequate level of AI literacy among staff. | Organizations should already have awareness and training measures in place and be able to demonstrate them through documented programmes and internal governance. |
| AI-generated content labelling (Art. 50) | Compliance deadline postponed from 2 August 2026 to 2 December 2026, a four-month extension of the original timeline. | Organizations now have additional time to implement transparency and labelling mechanisms for AI-generated content, particularly in customer-facing environments. |
| High-risk AI systems (Annex III) | Application postponed from 2 August 2026 to 2 December 2027. | While the deadline has been extended, organizations should already start identifying and classifying potential high-risk AI systems to prepare for future compliance obligations. |
| High-risk AI systems embedded in regulated products (Annex I) | Application postponed from 2 August 2027 to 2 August 2028. | Mainly impacts AI integrated into regulated products such as medical devices, industrial equipment, or machinery, providing additional time for sector-specific compliance alignment. |

What happens next?

Today's agreement is provisional: a political deal, but not yet law. Both the European Parliament and the Council must formally vote to adopt the text. This is expected to take place in June or July 2026.

Once they do, the amendments will be published in the Official Journal of the European Union and enter into force just three days later. This is likely to happen around the end of July.

The race is on: the original high-risk AI rules were due to start applying on 2 August 2026, and the formal adoption must happen before that date.

In parallel, the European Commission published a separate Digital Omnibus package on 19 November 2025 proposing amendments to the GDPR and the ePrivacy Directive. However, these proposals have not yet reached political agreement at EU level.


Why this matters now

The extension of certain deadlines under the Omnibus should not be interpreted as an invitation to pause AI governance efforts until 2027. While the timeline for some obligations has been adjusted, the AI Act is already in force and organizations remain expected to prepare for compliance now.

The companies that will be in the strongest and most defensible position by the time the new deadlines apply are those that use this additional time strategically: identifying their AI use cases, mapping data flows, assessing risks, and building the documentation and audit trails required by the regulation.

Moreover, the AI Act’s extraterritorial scope remains unchanged. Any organization serving EU clients, operating within the EU, or providing AI-enabled services to EU entities may still fall within the scope of the regulation.


Sources: EU Council Press Release · European Parliament

This note is published for informational purposes only. It does not constitute legal advice. Dastra makes no warranty as to the accuracy or completeness of this analysis.

