[{"data":1,"prerenderedAt":-1},["ShallowReactive",2],{"$fdsLqtdcYOVmlvZd1hIyfOJniYn9o5itm8ArrmJcl3OM":3,"white_papers":59},{"tableOfContents":4,"markDownContent":5,"htmlContent":6,"metaTitle":7,"metaDescription":8,"wordCount":9,"readTime":10,"title":11,"nbDownloads":12,"excerpt":13,"lang":14,"url":15,"intro":16,"featured":4,"state":17,"author":18,"authorId":19,"datePublication":23,"dateCreation":24,"dateUpdate":25,"mainCategory":26,"categories":42,"metaDatas":48,"imageUrl":49,"imageThumbUrls":50,"id":58},false,"*May 7, 2026*\n\nToday, on Thursday 7 May 2026, and after one failed attempt, the European Union reached a landmark deal that will reshape how artificial intelligence is regulated across the continent. After weeks of intense negotiations, the European Parliament and the Council of the EU have struck a political agreement to significantly simplify and streamline the EU's AI rulebook, in what has become known as the [**\"Digital Omnibus on AI\"**.](https://digital-strategy.ec.europa.eu/en/library/digital-omnibus-ai-regulation-proposal)\n\nUnder the Omnibus proposal, companies would have until the end of 2027 to comply with the rules applicable to high-risk AI systems, while providers of AI-enabled machinery would be expressly exempted from certain obligations under this framework. The proposal also introduces a new ban targeting AI systems that enable sexually explicit content.\n\n**The amendments will now enter the formal approval process, with final adoption expected by August.**\n\nThe European Commission, which first proposed this package just five months ago in November 2025, welcomed the deal warmly. As Henna Virkkunen, the EU's Executive Vice-President for Tech Sovereignty, Security and Democracy, put it:\n\n> *\"Our businesses and citizens want two things from AI rules. They want to be able to innovate and feel safe. Today's agreement does both. 
With simpler and innovation-friendly rules, we make it easier to innovate without lowering the bar on safety.\"*\n\nIf you've been following the EU AI Act story, this is a big moment. If you haven't, don't worry. Here's everything you need to know.\n\n## First, a quick recap: what is the EU AI Act?\n\nThe [**AI Act** (Regulation (EU) 2024/1689)](https://eur-lex.europa.eu/eli/reg/2024/1689/oj/eng) is legislation designed to regulate and promote the development and commercialization of artificial intelligence systems within the European Union.\n\nProposed by the European Commission in April 2021, the AI Act was published in the Official Journal on 12 July 2024 and entered into force on 1 August 2024, after three years of negotiations.\n\nThis initiative aims to **foster the development of responsible AI, ensuring fundamental rights, safety, and ethical principles while encouraging and strengthening AI investment and innovation throughout the EU.**\n\nThe Act was ambitious, groundbreaking, and, as it turned out, a bit too heavy on complexity for many businesses to digest.\n\n{% button href=\"https://www.dastra.eu/en/blog/ai-act-key-points-of-the-regulation-at-a-glance/59538\" text=\"To better understand the Regulation, click here\" target=\"\\_blank\" role=\"button\" class=\"btn btn-primary\" %}\n\n---\n\n## So what is the \"Omnibus,\" and why does it exist?\n\nAs part of its broader effort to streamline the EU’s digital regulatory framework, the European Commission introduced two proposals under the “Digital Omnibus” initiative in November 2025: one addressing data and cybersecurity legislation, and another focused on the AI Act.\n\nThe stated objective of the Omnibus project is to simplify and 'harmonise' the European digital framework (GDPR, AI Act, ePrivacy, Data Act, etc.): eliminating overlaps, clarifying obligations, and reducing the burden on certain businesses, particularly SMEs.\n\nThe [\"Digital Omnibus on AI\"](https://digital-strategy.ec.europa.eu/en/library/digital-omnibus-ai-regulation-proposal) is 
essentially an amendment to the original AI Act.\n\n{% button href=\"https://www.dastra.eu/en/blog/omnibus-gdpr-ai-act-what-does-the-leak-reveal/60026\" text=\"For further information on the Digital Omnibus, click here\" target=\"\\_blank\" role=\"button\" class=\"btn btn-primary\" %}\n\n---\n\n## What the deal actually changes\n\n### 1. More time for high-risk AI Compliance\n\nThe most significant change is a timeline extension for companies building or deploying **high-risk AI systems**.\n\nUnder the original Act, obligations for high-risk AI were set to kick in on 2 August 2026. Under the new deal:\n\n- High-risk AI systems under Annex III (AI systems in sensitive areas like **biometrics, critical infrastructure, education, employment, law enforcement, and border management)** now have until **2 December 2027** to comply.\n- High-risk AI systems under Annex I (AI systems embedded in products covered by EU safety legislation like medical devices or machinery) get even more time, until **2 August 2028**.\n\nWhy the delay? The co-legislators acknowledged that the technical standards and guidance documents that companies need to actually *implement* the rules aren't fully ready yet. This sequencing prevents businesses from being penalised for failing to meet standards that don't yet exist.\n\n{% button href=\"https://www.dastra.eu/en/blog/ai-act-the-complete-guide-to-the-official-resources-for-compliance/59996\" text=\"Click here for a mapping of official resources\" target=\"\\_blank\" role=\"button\" class=\"btn btn-primary\" %}\n\n---\n\n### 2. 
A complete ban on \"nudification\" apps or \"AI nudifiers\"\n\nThe deal introduces a **full EU-wide ban on AI systems whose primary purpose is to generate non-consensual intimate images**, commonly known as \"deepfake nudifiers\" or \"nudification apps.\" These tools use AI to digitally undress photos of real, identifiable people without their consent.\n\nThe ban covers:\n\n- Apps that generate images of people in sexually explicit scenarios without consent\n- AI tools that create child sexual abuse material (CSAM)\n\nCompanies currently offering such products have until **2 December 2026** to comply, meaning these tools must be taken off the market entirely.\n\nLegislators added this at the trilogue stage of the Omnibus, which is a strong signal that the EU is willing to use AI regulation not just to manage business risk, but to **protect individuals from harm**, particularly women and children who are disproportionately targeted by this type of abuse.\n\n---\n\n### 3. AI watermarking: delayed, but still coming\n\nOne of the AI Act's transparency tools requires that AI-generated content (images, audio, video) be labelled or \"watermarked\" so people know it wasn't made by a human.\n\nUnder the Omnibus deal, companies now have until **2 December 2026** (instead of August) to comply.\n\n---\n\n### 4. Simpler rules & clearer governance for businesses\n\nThe agreement introduces a suite of business-friendly changes:\n\n**Extended SME protections.** Certain regulatory privileges that were previously available only to small and medium-sized enterprises (SMEs) are now extended to **small mid-cap companies**, which are slightly larger businesses that still lack the compliance resources of major corporations. For Europe's fast-growing AI startup ecosystem, this is meaningful relief.\n\n**Resolving the overlap with product safety law.** One of the thorniest issues in AI Act implementation has been how it interacts with existing EU product safety legislation. 
This is the reason previous negotiations came to a dead end. The Omnibus explicitly clarifies this relationship, eliminating duplicative requirements. Companies building AI into industrial products no longer face the prospect of complying with two overlapping regulatory regimes.\n\n> Therefore, under the Omnibus proposal, AI-powered machinery regulated by the [EU Machinery Regulation](https://single-market-economy.ec.europa.eu/sectors/mechanical-engineering/machinery_en) would be excluded from the AI Act’s dedicated high-risk obligations and would only need to comply with the requirements established under the relevant sectoral framework.\n\n{% button href=\"https://www.dastra.eu/en/blog/which-laws-apply-alongside-the-eu-ai-act/60019\" text=\"To better understand which laws apply alongside the AI Act, click here\" target=\"\\_blank\" role=\"button\" class=\"btn btn-primary\" %}\n\n**Stronger AI Office powers.** The Commission's AI Office, the body responsible for overseeing the most powerful AI systems, will see its enforcement powers strengthened. This is particularly significant for oversight of **general-purpose AI models** (like large language models) and AI systems embedded in **very large online platforms and search engines**, which fall under some of the most complex provisions of the Act.\n\n**Wider access to regulatory sandboxes.** The agreement expands access to regulatory sandboxes — controlled environments where companies can test AI systems in real-world conditions with regulatory oversight and legal certainty. Notably, the deal includes provision for an **EU-level sandbox**, giving innovators the option to test at European scale, not just nationally.\n\n## Modifications in a nutshell\n\n| Obligation | Status following the Omnibus | Practical impact |\n| --- | --- | --- |\n| **Prohibited AI practices (Art. 5)** | Applicable since 2 February 2025. 
The Omnibus also introduces a new prohibition covering AI systems used to generate non-consensual sexual content and CSAM. | These rules already apply and leave no transition period. \u003Cbr>\u003Cbr>Organizations should immediately review their AI use cases to identify and prohibit any non-compliant practices. |\n| **AI literacy (Art. 4)** | Applicable since 2 February 2025. Providers and deployers must ensure an adequate level of AI literacy among staff. | Organizations should already have awareness and training measures in place and be able to demonstrate them through documented programmes and internal governance. |\n| **AI-generated content labelling (Art. 50)** | Compliance deadline postponed to 2 December 2026, with an additional three-month transition period compared to the original August timeline. | Organizations now have additional time to implement transparency and labelling mechanisms for AI-generated content, particularly in customer-facing environments. |\n| **High-risk AI systems (Annex III)** | Application postponed from 2 August 2026 to 2 December 2027. | While the deadline has been extended, organizations should already start identifying and classifying potential high-risk AI systems to prepare for future compliance obligations. |\n| **High-risk AI systems embedded in regulated products (Annex I)** | Application postponed from 2 August 2027 to 2 August 2028. | Mainly impacts AI integrated into regulated products such as medical devices, industrial equipment, or machinery, providing additional time for sector-specific compliance alignment. |\n\n---\n\n## What happens next?\n\nToday's agreement is **provisional**: a political deal, but not yet law. Both the European Parliament and the Council must formally vote to adopt the text. This is expected to take place somewhere **between June and July.**\n\nOnce they do, the amendments will be published in the **Official Journal of the European Union** and enter into force just **three days later**. 
This is likely to happen around the end of July.\n\nThe race is on: the original high-risk AI rules were due to start applying on 2 August 2026, and the formal adoption must happen before that date.\n\n> In parallel, the European Commission published a separate Digital Omnibus package on 19 November 2025 proposing amendments to the GDPR and the ePrivacy Directive. However, these proposals have not yet reached political agreement at EU level.\n\n---\n\n## Why this matters now\n\nThe extension of certain deadlines under the Omnibus should **not be interpreted as an invitation to pause AI governance efforts until 2027**. While the timeline for some obligations has been adjusted, the AI Act is already in force and organizations remain expected to prepare for compliance now.\n\nThe **companies that will be in the strongest and most defensible position** by the time the new deadlines apply **are those that use this additional time strategically:** identifying their AI use cases, mapping data flows, assessing risks, and building the documentation and audit trails required by the regulation.\n\nMoreover, the AI Act’s extraterritorial scope remains unchanged. Any organization serving EU clients, operating within the EU, or providing AI-enabled services to EU entities may still fall within the scope of the regulation.\n\n{% button href=\"https://www.dastra.eu/en/white-papers/ai-act-deploy-an-ai-management-system-aims-in-6-steps/59953\" text=\"Start your compliance with the AI Act with these easy steps! \" target=\"\\_blank\" role=\"button\" class=\"btn btn-primary\" %}\n\n---\n\n*Sources: [EU Council Press Release](https://www.consilium.europa.eu/en/press/press-releases/2026/05/07/artificial-intelligence-council-and-parliament-agree-to-simplify-and-streamline-rules/) · [European Parliament](https://www.europarl.europa.eu/news/en/press-room/20260427IPR42011/ai-act-deal-on-simplification-measures-ban-on-nudifier-apps)*\n\n*This note is published for informational purposes only. 
It does not constitute legal advice. Dastra makes no warranty as to the accuracy or completeness of this analysis.*","\u003Cp>\u003Cem>May 7, 2026\u003C/em>\u003C/p>\n\u003Cp>Today, Thursday 7 May 2026, after one failed attempt, the European Union reached a landmark deal that will reshape how artificial intelligence is regulated across the continent. After weeks of intense negotiations, the European Parliament and the Council of the EU have struck a political agreement to significantly simplify and streamline the EU's AI rulebook, in what has become known as the \u003Ca href=\"https://digital-strategy.ec.europa.eu/en/library/digital-omnibus-ai-regulation-proposal\" rel=\"nofollow\">\u003Cstrong>\"Digital Omnibus on AI\"\u003C/strong>\u003C/a>.\u003C/p>\n\u003Cp>Under the Omnibus proposal, companies would have until the end of 2027 to comply with the rules applicable to high-risk AI systems, while providers of AI-enabled machinery would be expressly exempted from certain obligations under this framework. The proposal also introduces a new ban targeting AI systems that generate non-consensual sexually explicit content.\u003C/p>\n\u003Cp>\u003Cstrong>The amendments will now enter the formal approval process, with final adoption expected by August.\u003C/strong>\u003C/p>\n\u003Cp>The European Commission, which first proposed this package just five months ago in November 2025, welcomed the deal warmly. As Henna Virkkunen, the EU's Executive Vice-President for Tech Sovereignty, Security and Democracy, put it:\u003C/p>\n\u003Cblockquote>\n\u003Cp>\u003Cem>\"Our businesses and citizens want two things from AI rules. They want to be able to innovate and feel safe. Today's agreement does both. With simpler and innovation-friendly rules, we make it easier to innovate without lowering the bar on safety.\"\u003C/em>\u003C/p>\n\u003C/blockquote>\n\u003Cp>If you've been following the EU AI Act story, this is a big moment. If you haven't, don't worry. 
Here's everything you need to know.\u003C/p>\n\u003Ch2 id=\"first-a-quick-recap-what-is-the-eu-ai-act\">First, a quick recap: what is the EU AI Act?\u003C/h2>\n\u003Cp>The \u003Ca href=\"https://eur-lex.europa.eu/eli/reg/2024/1689/oj/eng\" rel=\"nofollow\">\u003Cstrong>AI Act\u003C/strong> (Regulation (EU) 2024/1689)\u003C/a> is legislation designed to regulate and promote the development and commercialization of artificial intelligence systems within the European Union.\u003C/p>\n\u003Cp>Proposed by the European Commission in April 2021, the AI Act was published in the Official Journal on 12 July 2024 and entered into force on 1 August 2024, after three years of negotiations.\u003C/p>\n\u003Cp>This initiative aims to \u003Cstrong>foster the development of responsible AI, ensuring fundamental rights, safety, and ethical principles while encouraging and strengthening AI investment and innovation throughout the EU.\u003C/strong>\u003C/p>\n\u003Cp>The Act was ambitious, groundbreaking, and, as it turned out, a bit too heavy on complexity for many businesses to digest.\u003C/p>\n\u003Cdiv class=\"content-btn-container\">\u003Ca href=\"https://www.dastra.eu/en/blog/ai-act-key-points-of-the-regulation-at-a-glance/59538\" target=\"_blank\" role=\"button\" class=\"btn btn-primary\">To better understand the Regulation, click here\u003C/a>\u003C/div>\n\u003Chr />\n\u003Ch2 id=\"so-what-is-the-omnibus-and-why-does-it-exist\">So what is the \"Omnibus,\" and why does it exist?\u003C/h2>\n\u003Cp>As part of its broader effort to streamline the EU’s digital regulatory framework, the European Commission introduced two proposals under the “Digital Omnibus” initiative in November 2025: one addressing data and cybersecurity legislation, and another focused on the AI Act.\u003C/p>\n\u003Cp>The stated objective of the Omnibus project is to simplify and 'harmonise' the European digital framework (GDPR, AI Act, ePrivacy, Data Act, etc.): eliminating overlaps, clarifying obligations, and reducing the burden on certain businesses, particularly 
SMEs.\u003C/p>\n\u003Cp>The \u003Ca href=\"https://digital-strategy.ec.europa.eu/en/library/digital-omnibus-ai-regulation-proposal\" rel=\"nofollow\">\"Digital Omnibus on AI\"\u003C/a> is essentially an amendment to the original AI Act.\u003C/p>\n\u003Cdiv class=\"content-btn-container\">\u003Ca href=\"https://www.dastra.eu/en/blog/omnibus-gdpr-ai-act-what-does-the-leak-reveal/60026\" target=\"_blank\" role=\"button\" class=\"btn btn-primary\">For further information on the Digital Omnibus, click here\u003C/a>\u003C/div>\n\u003Chr />\n\u003Ch2 id=\"what-the-deal-actually-changes\">What the deal actually changes\u003C/h2>\n\u003Ch3 id=\"more-time-for-high-risk-ai-compliance\">1. More time for high-risk AI Compliance\u003C/h3>\n\u003Cp>The most significant change is a timeline extension for companies building or deploying \u003Cstrong>high-risk AI systems\u003C/strong>.\u003C/p>\n\u003Cp>Under the original Act, obligations for high-risk AI were set to kick in on 2 August 2026. Under the new deal:\u003C/p>\n\u003Cul>\n\u003Cli>High-risk AI systems under Annex III (AI systems in sensitive areas like \u003Cstrong>biometrics, critical infrastructure, education, employment, law enforcement, and border management)\u003C/strong> now have until \u003Cstrong>2 December 2027\u003C/strong> to comply.\u003C/li>\n\u003Cli>High-risk AI systems under Annex I (AI systems embedded in products covered by EU safety legislation like medical devices or machinery) get even more time, until \u003Cstrong>2 August 2028\u003C/strong>.\u003C/li>\n\u003C/ul>\n\u003Cp>Why the delay? The co-legislators acknowledged that the technical standards and guidance documents that companies need to actually \u003Cem>implement\u003C/em> the rules aren't fully ready yet. 
This sequencing prevents businesses from being penalised for failing to meet standards that don't yet exist.\u003C/p>\n\u003Cdiv class=\"content-btn-container\">\u003Ca href=\"https://www.dastra.eu/en/blog/ai-act-the-complete-guide-to-the-official-resources-for-compliance/59996\" target=\"_blank\" role=\"button\" class=\"btn btn-primary\">Click here for a mapping of official resources\u003C/a>\u003C/div>\n\u003Chr />\n\u003Ch3 id=\"a-complete-ban-on-nudification-apps-or-ai-nudifiers\">2. A complete ban on \"nudification\" apps or \"AI nudifiers\"\u003C/h3>\n\u003Cp>The deal introduces a \u003Cstrong>full EU-wide ban on AI systems whose primary purpose is to generate non-consensual intimate images\u003C/strong>, commonly known as \"deepfake nudifiers\" or \"nudification apps.\" These tools use AI to digitally undress photos of real, identifiable people without their consent.\u003C/p>\n\u003Cp>The ban covers:\u003C/p>\n\u003Cul>\n\u003Cli>Apps that generate images of people in sexually explicit scenarios without consent\u003C/li>\n\u003Cli>AI tools that create child sexual abuse material (CSAM)\u003C/li>\n\u003C/ul>\n\u003Cp>Companies currently offering such products have until \u003Cstrong>2 December 2026\u003C/strong> to comply, meaning these tools must be taken off the market entirely.\u003C/p>\n\u003Cp>Legislators added this at the trilogue stage of the Omnibus, which is a strong signal that the EU is willing to use AI regulation not just to manage business risk, but to \u003Cstrong>protect individuals from harm\u003C/strong>, particularly women and children who are disproportionately targeted by this type of abuse.\u003C/p>\n\u003Chr />\n\u003Ch3 id=\"ai-watermarking-delayed-but-still-coming\">3. AI watermarking: delayed, but still coming\u003C/h3>\n\u003Cp>One of the AI Act's transparency tools requires that AI-generated content (images, audio, video) be labelled or \"watermarked\" so people know it wasn't made by a human. 
\u003C/p>\n\u003Cp>Under the Omnibus deal, companies now have until \u003Cstrong>2 December 2026\u003C/strong> (instead of August) to comply.\u003C/p>\n\u003Chr />\n\u003Ch3 id=\"simpler-rules-clearer-governance-for-businesses\">4. Simpler rules &amp; clearer governance for businesses\u003C/h3>\n\u003Cp>The agreement introduces a suite of business-friendly changes:\u003C/p>\n\u003Cp>\u003Cstrong>Extended SME protections.\u003C/strong> Certain regulatory privileges that were previously available only to small and medium-sized enterprises (SMEs) are now extended to \u003Cstrong>small mid-cap companies\u003C/strong>, which are slightly larger businesses that still lack the compliance resources of major corporations. For Europe's fast-growing AI startup ecosystem, this is meaningful relief.\u003C/p>\n\u003Cp>\u003Cstrong>Resolving the overlap with product safety law.\u003C/strong> One of the thorniest issues in AI Act implementation has been how it interacts with existing EU product safety legislation. This is the reason previous negotiations came to a dead end. The Omnibus explicitly clarifies this relationship, eliminating duplicative requirements. 
Companies building AI into industrial products no longer face the prospect of complying with two overlapping regulatory regimes.\u003C/p>\n\u003Cblockquote>\n\u003Cp>Therefore, under the Omnibus proposal, AI-powered machinery regulated by the \u003Ca href=\"https://single-market-economy.ec.europa.eu/sectors/mechanical-engineering/machinery_en\" rel=\"nofollow\">EU Machinery Regulation\u003C/a> would be excluded from the AI Act’s dedicated high-risk obligations and would only need to comply with the requirements established under the relevant sectoral framework.\u003C/p>\n\u003C/blockquote>\n\u003Cdiv class=\"content-btn-container\">\u003Ca href=\"https://www.dastra.eu/en/blog/which-laws-apply-alongside-the-eu-ai-act/60019\" target=\"_blank\" role=\"button\" class=\"btn btn-primary\">To better understand which laws apply alongside the AI Act, click here\u003C/a>\u003C/div>\n\u003Cp>\u003Cstrong>Stronger AI Office powers.\u003C/strong> The Commission's AI Office, the body responsible for overseeing the most powerful AI systems, will see its enforcement powers strengthened. This is particularly significant for oversight of \u003Cstrong>general-purpose AI models\u003C/strong> (like large language models) and AI systems embedded in \u003Cstrong>very large online platforms and search engines\u003C/strong>, which fall under some of the most complex provisions of the Act.\u003C/p>\n\u003Cp>\u003Cstrong>Wider access to regulatory sandboxes.\u003C/strong> The agreement expands access to regulatory sandboxes — controlled environments where companies can test AI systems in real-world conditions with regulatory oversight and legal certainty. 
Notably, the deal includes provision for an \u003Cstrong>EU-level sandbox\u003C/strong>, giving innovators the option to test at European scale, not just nationally.\u003C/p>\n\u003Ch2 id=\"modifications-in-a-nutshell\">Modifications in a nutshell\u003C/h2>\n\u003Ctable>\n\u003Cthead>\n\u003Ctr>\n\u003Cth>Obligation\u003C/th>\n\u003Cth>Status following the Omnibus\u003C/th>\n\u003Cth>Practical impact\u003C/th>\n\u003C/tr>\n\u003C/thead>\n\u003Ctbody>\n\u003Ctr>\n\u003Ctd>\u003Cstrong>Prohibited AI practices (Art. 5)\u003C/strong>\u003C/td>\n\u003Ctd>Applicable since 2 February 2025. The Omnibus also introduces a new prohibition covering AI systems used to generate non-consensual sexual content and CSAM.\u003C/td>\n\u003Ctd>These rules already apply and leave no transition period. \u003Cbr>\u003Cbr>Organizations should immediately review their AI use cases to identify and prohibit any non-compliant practices.\u003C/td>\n\u003C/tr>\n\u003Ctr>\n\u003Ctd>\u003Cstrong>AI literacy (Art. 4)\u003C/strong>\u003C/td>\n\u003Ctd>Applicable since 2 February 2025. Providers and deployers must ensure an adequate level of AI literacy among staff.\u003C/td>\n\u003Ctd>Organizations should already have awareness and training measures in place and be able to demonstrate them through documented programmes and internal governance.\u003C/td>\n\u003C/tr>\n\u003Ctr>\n\u003Ctd>\u003Cstrong>AI-generated content labelling (Art. 
50)\u003C/strong>\u003C/td>\n\u003Ctd>Compliance deadline postponed to 2 December 2026, with an additional three-month transition period compared to the original August timeline.\u003C/td>\n\u003Ctd>Organizations now have additional time to implement transparency and labelling mechanisms for AI-generated content, particularly in customer-facing environments.\u003C/td>\n\u003C/tr>\n\u003Ctr>\n\u003Ctd>\u003Cstrong>High-risk AI systems (Annex III)\u003C/strong>\u003C/td>\n\u003Ctd>Application postponed from 2 August 2026 to 2 December 2027.\u003C/td>\n\u003Ctd>While the deadline has been extended, organizations should already start identifying and classifying potential high-risk AI systems to prepare for future compliance obligations.\u003C/td>\n\u003C/tr>\n\u003Ctr>\n\u003Ctd>\u003Cstrong>High-risk AI systems embedded in regulated products (Annex I)\u003C/strong>\u003C/td>\n\u003Ctd>Application postponed from 2 August 2027 to 2 August 2028.\u003C/td>\n\u003Ctd>Mainly impacts AI integrated into regulated products such as medical devices, industrial equipment, or machinery, providing additional time for sector-specific compliance alignment.\u003C/td>\n\u003C/tr>\n\u003C/tbody>\n\u003C/table>\n\u003Chr />\n\u003Ch2 id=\"what-happens-next\">What happens next?\u003C/h2>\n\u003Cp>Today's agreement is \u003Cstrong>provisional\u003C/strong>: a political deal, but not yet law. Both the European Parliament and the Council must formally vote to adopt the text. This is expected to take place somewhere \u003Cstrong>between June and July.\u003C/strong>\u003C/p>\n\u003Cp>Once they do, the amendments will be published in the \u003Cstrong>Official Journal of the European Union\u003C/strong> and enter into force just \u003Cstrong>three days later\u003C/strong>. 
This is likely to happen around the end of July.\u003C/p>\n\u003Cp>The race is on: the original high-risk AI rules were due to start applying on 2 August 2026, and the formal adoption must happen before that date.\u003C/p>\n\u003Cblockquote>\n\u003Cp>In parallel, the European Commission published a separate Digital Omnibus package on 19 November 2025 proposing amendments to the GDPR and the ePrivacy Directive. However, these proposals have not yet reached political agreement at EU level.\u003C/p>\n\u003C/blockquote>\n\u003Chr />\n\u003Ch2 id=\"why-this-matters-now\">Why this matters now\u003C/h2>\n\u003Cp>The extension of certain deadlines under the Omnibus should \u003Cstrong>not be interpreted as an invitation to pause AI governance efforts until 2027\u003C/strong>. While the timeline for some obligations has been adjusted, the AI Act is already in force and organizations remain expected to prepare for compliance now.\u003C/p>\n\u003Cp>The \u003Cstrong>companies that will be in the strongest and most defensible position\u003C/strong> by the time the new deadlines apply \u003Cstrong>are those that use this additional time strategically:\u003C/strong> identifying their AI use cases, mapping data flows, assessing risks, and building the documentation and audit trails required by the regulation.\u003C/p>\n\u003Cp>Moreover, the AI Act’s extraterritorial scope remains unchanged. Any organization serving EU clients, operating within the EU, or providing AI-enabled services to EU entities may still fall within the scope of the regulation.\u003C/p>\n\u003Cdiv class=\"content-btn-container\">\u003Ca href=\"https://www.dastra.eu/en/white-papers/ai-act-deploy-an-ai-management-system-aims-in-6-steps/59953\" target=\"_blank\" role=\"button\" class=\"btn btn-primary\">Start your compliance with the AI Act with these easy steps! 
\u003C/a>\u003C/div>\n\u003Chr />\n\u003Cp>\u003Cem>Sources: \u003Ca href=\"https://www.consilium.europa.eu/en/press/press-releases/2026/05/07/artificial-intelligence-council-and-parliament-agree-to-simplify-and-streamline-rules/\" rel=\"nofollow\">EU Council Press Release\u003C/a> · \u003Ca href=\"https://www.europarl.europa.eu/news/en/press-room/20260427IPR42011/ai-act-deal-on-simplification-measures-ban-on-nudifier-apps\" rel=\"nofollow\">European Parliament\u003C/a>\u003C/em>\u003C/p>\n\u003Cp>\u003Cem>This note is published for informational purposes only. It does not constitute legal advice. Dastra makes no warranty as to the accuracy or completeness of this analysis.\u003C/em>\u003C/p>\n","New Omnibus Agreement: How the EU AI Act changes ","The EU just agreed to simplify the AI Act. Learn what the Digital Omnibus changes for businesses, citizens, and high-risk AI compliance deadlines in 2026.",1827,10,"Simpler, safer, stricter where it counts: inside the EU's AI Omnibus Deal",0,null,"en","simpler-safer-stricter-where-it-counts-inside-the-eu-ai-omnibus-deal","EU rewrites AI rules in 2026. 
Here's what you need to know","Published",{"id":19,"displayName":20,"avatarUrl":21,"bio":13,"blogUrl":13,"color":13,"userId":19,"creationDate":22},20352,"Leïla Sayssa","https://static.dastra.eu/tenant-3/avatar/20352/TDYeY3C8Rz1lLE/dpo-avatar-h01-150.png","2025-03-03T11:08:22","2026-05-07T15:38:00","2026-05-07T15:38:37.8262223","2026-05-11T14:53:32.3255669",{"id":27,"name":28,"description":29,"url":30,"color":31,"parentId":13,"count":13,"imageUrl":13,"parent":13,"order":12,"translations":32},2,"Blog","A list of curated articles provided by the community","blog","#28449a",[33,36,39],{"lang":34,"name":28,"description":35},"fr","Une liste d'articles rédigés par la communauté",{"lang":37,"name":28,"description":38},"es","Una lista de artículos escritos por la comunidad",{"lang":40,"name":28,"description":41},"de","Eine Liste von Artikeln, die von der Community verfasst wurden",[43],{"id":27,"name":28,"description":29,"url":30,"color":31,"parentId":13,"count":13,"imageUrl":13,"parent":13,"order":12,"translations":44},[45,46,47],{"lang":34,"name":28,"description":35},{"lang":37,"name":28,"description":38},{"lang":40,"name":28,"description":41},[],"https://static.dastra.eu/content/4a56d2ec-1ceb-4fc1-af76-6863e015f2d2/visuel-article-5-original.jpg",[51,52,53,54,55,56,57],"https://static.dastra.eu/content/4a56d2ec-1ceb-4fc1-af76-6863e015f2d2/visuel-article-5-1000.webp","https://static.dastra.eu/content/4a56d2ec-1ceb-4fc1-af76-6863e015f2d2/visuel-article-5.webp","https://static.dastra.eu/content/4a56d2ec-1ceb-4fc1-af76-6863e015f2d2/visuel-article-5-1500.webp","https://static.dastra.eu/content/4a56d2ec-1ceb-4fc1-af76-6863e015f2d2/visuel-article-5-800.webp","https://static.dastra.eu/content/4a56d2ec-1ceb-4fc1-af76-6863e015f2d2/visuel-article-5-600.webp","https://static.dastra.eu/content/4a56d2ec-1ceb-4fc1-af76-6863e015f2d2/visuel-article-5-300.webp","https://static.dastra.eu/content/4a56d2ec-1ceb-4fc1-af76-6863e015f2d2/visuel-article-5-100.webp",60025,{"items":60,"total":100,"si
ze":101,"page":101},[61],{"title":62,"nbDownloads":63,"excerpt":13,"lang":14,"url":64,"intro":65,"featured":4,"state":17,"author":66,"authorId":19,"datePublication":67,"dateCreation":68,"dateUpdate":69,"mainCategory":70,"categories":77,"metaDatas":85,"imageUrl":90,"imageThumbUrls":91,"id":99},"Your Checklist to Multi-State Privacy Impact Assessments ",7,"your-checklist-to-multi-state-privacy-impact-assessment-compliance","Master multi-state Privacy Impact Assessments by downloading this checklist.",{"id":19,"displayName":20,"avatarUrl":21,"bio":13,"blogUrl":13,"color":13,"userId":19,"creationDate":22},"2026-02-23T10:07:00","2026-02-23T10:07:01.6114712","2026-02-24T15:38:38.0037058",{"id":71,"name":72,"description":13,"url":73,"color":74,"parentId":13,"count":13,"imageUrl":13,"parent":13,"order":75,"translations":76},70,"Livre blanc","white-papers","#1795d3",3,[],[78,83],{"id":27,"name":28,"description":29,"url":30,"color":31,"parentId":13,"count":13,"imageUrl":13,"parent":13,"order":12,"translations":79},[80,81,82],{"lang":34,"name":28,"description":35},{"lang":37,"name":28,"description":38},{"lang":40,"name":28,"description":41},{"id":71,"name":72,"description":13,"url":73,"color":74,"parentId":13,"count":13,"imageUrl":13,"parent":13,"order":75,"translations":84},[],[86],{"typeMetaDataId":87,"value":88,"id":89},4,"https://static.dastra.eu/backofficefilescontainer/6c9c6770-09f5-44d2-ac35-466a87c40426/US PIA Cross State Checklist Best 
Practices.pdf",117305,"https://static.dastra.eu/content/a321130b-375a-4a3f-b9d5-e9d9afea648e/visuel-article-18-original.jpg",[92,93,94,95,96,97,98],"https://static.dastra.eu/content/a321130b-375a-4a3f-b9d5-e9d9afea648e/visuel-article-18-1000.webp","https://static.dastra.eu/content/a321130b-375a-4a3f-b9d5-e9d9afea648e/visuel-article-18.webp","https://static.dastra.eu/content/a321130b-375a-4a3f-b9d5-e9d9afea648e/visuel-article-18-1500.webp","https://static.dastra.eu/content/a321130b-375a-4a3f-b9d5-e9d9afea648e/visuel-article-18-800.webp","https://static.dastra.eu/content/a321130b-375a-4a3f-b9d5-e9d9afea648e/visuel-article-18-600.webp","https://static.dastra.eu/content/a321130b-375a-4a3f-b9d5-e9d9afea648e/visuel-article-18-300.webp","https://static.dastra.eu/content/a321130b-375a-4a3f-b9d5-e9d9afea648e/visuel-article-18-100.webp",59886,12,1]