[{"data":1,"prerenderedAt":-1},["ShallowReactive",2],{"$fJ_tUbG9HTe6cJ3Su0Rt5QcL-V5mrY1uwAirbzlAY-Ss":3,"white_papers":59},{"tableOfContents":4,"markDownContent":5,"htmlContent":6,"metaTitle":7,"metaDescription":8,"wordCount":9,"readTime":10,"title":11,"nbDownloads":12,"excerpt":13,"lang":14,"url":15,"intro":16,"featured":4,"state":17,"author":18,"authorId":19,"datePublication":23,"dateCreation":24,"dateUpdate":25,"mainCategory":26,"categories":42,"metaDatas":48,"imageUrl":49,"imageThumbUrls":50,"id":58},false,"When the EU AI Act entered into force in August 2024, it did not arrive in an empty regulatory space. It arrived in one of the **most densely regulated environments** & it was designed to fit within it, not replace it.\n\nA close reading of the Act reveals references to more than 30 existing legal instruments. Each of these frameworks continues to apply in full alongside the AI Act.\n\nThe Regulation intersects with other EU law in five distinct ways: (1) it operates *without prejudice* to existing law (complementarity); (2) it defers conformity assessment to sector-specific bodies; (3) it assigns enforcement to existing supervisors in specific sectors; (4) it requires compliance with procedural rules of other frameworks; and (5) it directly amends certain acts.\n\nGetting this wrong (treating the AI Act as self-contained, or assuming compliance with a sector-specific framework is sufficient) is the **most common and consequential misreading of the AI Act.**\n\nFor compliance professionals, in-house counsel, and policymakers, **navigating this landscape requires more than familiarity with the AI Act in isolation.** It requires a clear map of which frameworks apply when, how they interact & where they reinforce each other.\n\nThat is precisely what we have built.\n\n---\n\n### 1. Data Protection Law\n\nThe AI Act explicitly states that it is \"without prejudice to existing Union law, in particular on data protection.\" The three pillars of EU data protection law all remain fully applicable alongside it:\n\n**GDPR — Regulation (EU) 2016/679** applies whenever an AI system processes personal data, which is the overwhelming majority of cases. The AI Act does not replace GDPR obligations; it adds to them. Data subjects retain all GDPR rights (access, erasure, objection) in full. A provider of a high-risk AI system must simultaneously satisfy AI Act conformity requirements *and* conduct a GDPR Data Protection Impact Assessment where required. The two frameworks are complementary.\n\n**Law Enforcement Directive — Directive (EU) 2016/680** is specifically relevant to AI systems used by police and criminal justice authorities. The AI Act's strict rules on real-time remote biometric identification in public spaces must be read together with this Directive, which governs the lawfulness of processing personal data for law enforcement purposes. Authorization under the AI Act does not substitute for the legal basis required under the Directive.\n\n**EU Institutions Data Protection — Regulation (EU) 2018/1725** applies when Union institutions, bodies, offices and agencies (such as Europol, Frontex or eu-LISA) deploy or develop AI systems. 
It is the institutional equivalent of the GDPR and applies in parallel to the AI Act obligations on those entities.

**ePrivacy Directive — 2002/58/EC** (referenced alongside the above) continues to govern the confidentiality of electronic communications and the conditions for storing data on terminal equipment, including where AI systems process such data.

![](https://static.dastra.eu/richtext/93333322-b9d4-4476-826b-a6fdd4c8c253/notebooklm-mind-map-5-original.png)

---

### 2. Employment & Consumer Protection

**Employment and Workers' Rights (Directive 2002/14/EC)**

The AI Act states that it does not affect existing obligations under Directive 2002/14/EC for employers to *inform or consult* workers or their representatives when deciding to deploy AI systems. If a works council or employee representatives already have consultation rights under Directive 2002/14/EC that are triggered by the introduction of new technology, the AI Act does not override or displace those rights.

Where Directive 2002/14/EC's conditions for triggering information and consultation are *not* met (e.g. because the employer is below the threshold of 50 employees, or the specific decision does not qualify), the AI Act creates its own standalone information right: workers must still be told about the planned deployment of a high-risk AI system in the workplace.

In practice, a deployer who is also an employer has a *layered* obligation: consult workers under Directive 2002/14/EC where it applies, and in any event inform them under the AI Act.

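
To make that layering concrete, here is a minimal sketch of the decision logic in Python. It is an illustration only, not legal logic: the function name `worker_information_duties` and its boolean inputs are hypothetical simplifications, and whether Directive 2002/14/EC is actually triggered in a given case is a legal assessment in its own right.

```python
def worker_information_duties(
    deploys_high_risk_ai_at_work: bool,
    directive_2002_14_applies: bool,
) -> set[str]:
    """Illustrative sketch of the layered duty described above (not legal advice).

    Assumption: the inputs are simplified flags; in reality, whether Directive
    2002/14/EC applies (workforce size, nature of the decision) is a legal question.
    """
    duties: set[str] = set()
    if not deploys_high_risk_ai_at_work:
        return duties  # the workplace information right discussed here concerns high-risk systems

    # The AI Act's standalone duty to inform workers applies in any event.
    duties.add("inform workers and their representatives (AI Act)")

    # Where Directive 2002/14/EC is triggered, consultation rights apply on top.
    if directive_2002_14_applies:
        duties.add("inform and consult workers' representatives (Directive 2002/14/EC)")

    return duties


if __name__ == "__main__":
    # Example: a 30-person employer rolling out a high-risk AI system at work.
    print(worker_information_duties(True, directive_2002_14_applies=False))
```

Either way, the worker-facing information duty under the AI Act is not switched off merely because Directive 2002/14/EC does not apply.
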
**Consumer Protection**

##### 1. Directive 2005/29/EC — Unfair Commercial Practices Directive: *explicitly complementary*

The AI Act prohibits AI systems that use subliminal or deceptive techniques, or that exploit vulnerabilities (age, disability, poverty), to materially distort behaviour in a harmful way. The Act explicitly states that these prohibitions are "complementary to the provisions contained in Directive 2005/29/EC." In practical terms: unfair commercial practices causing financial or economic harm to consumers are already prohibited under Directive 2005/29/EC *regardless of the technology used*, AI or otherwise. The AI Act adds a layer on top that specifically targets AI-enabled manipulation, without displacing what Directive 2005/29/EC already covers.

The key consequence: a business using a manipulative AI system to push consumers into purchases could face enforcement under *both* Directive 2005/29/EC (via national consumer protection authorities) *and* the AI Act (via market surveillance authorities). The AI Act explicitly says its prohibitions "should not affect" what Directive 2005/29/EC already forbids.

##### 2. Directive (EU) 2020/1828 — Representative Actions Directive: *directly amended by the AI Act*

This is one of the directives formally **amended** by the AI Act (it appears in the title of the Regulation alongside the sectoral acts). The amendment adds AI Act violations to the list of Union laws for which qualified consumer organisations can bring representative actions on behalf of a group of consumers. This is practically important: where an AI system harms a large number of consumers (e.g. through discriminatory credit scoring, manipulative commercial profiling, or unlawful biometric processing), a consumer association can take collective legal action rather than each individual having to sue separately.

![](https://static.dastra.eu/richtext/b67bbff7-e568-4892-92c4-4998f8dac61c/notebooklm-mind-map-10-original.png)

---

### 3. Market Surveillance & the New Legislative Framework

**Market Surveillance Regulation — Regulation (EU) 2019/1020** is the foundational framework on which the AI Act is deliberately built. The AI Act's rules on conformity assessment, CE marking, notified bodies, market surveillance authorities, and product enforcement are all modelled on and consistent with this Regulation and the wider New Legislative Framework (NLF). Member States' market surveillance authorities exercise their powers under the AI Act in accordance with Regulation (EU) 2019/1020. This means the same procedural tools (product withdrawal, recall, fines, border controls) used for physical products apply to high-risk AI systems.

**Decision No 768/2008/EC** (common framework for the marketing of products) and **Regulation (EC) No 765/2008** (accreditation requirements) are also part of this NLF backbone and are incorporated by reference for conformity assessment procedures.

---

### 4. Sector-Specific Product Legislation (Annex I) — *Dual compliance, shared conformity assessment*

This is perhaps the most practically significant category. The AI Act's Annex I lists existing EU harmonisation legislation. AI systems that are safety components of products already regulated under these acts are automatically classified as **high-risk** under the AI Act. Crucially, the AI Act does not displace the sectoral rules; it adds a layer. A single product may need to comply with *both* its sector-specific legislation *and* the AI Act. The conformity assessment may be integrated: where an existing sectoral act already requires a third-party conformity assessment by a notified body, that body's assessment under the AI Act can be folded into the same procedure. (A simplified sketch of this classification logic follows the list below.)

![](https://static.dastra.eu/richtext/52111188-1456-4d67-a0d8-83e104f5468c/notebooklm-mind-map-8-original.png)

The sectors and applicable legislation are:

**Medical & Health**

- **Medical Devices Regulation — Regulation (EU) 2017/745**: AI systems that are medical devices or safety components thereof are high-risk under the AI Act. The MDR notified body framework can be used for AI Act conformity assessment.
- **IVD Regulation — Regulation (EU) 2017/746**: The same logic applies to in vitro diagnostic devices.

**Transport & Vehicles**

- **Motor Vehicles — Regulation (EU) 2019/2144**: Type-approval requirements for motor vehicles. AI systems in cars (e.g. advanced driver assistance, autonomous driving functions) must meet AI Act requirements; the AI Act amends this Regulation to require the Commission to take account of AI requirements in delegated acts.
- **Vehicle Type Approval — Regulation (EU) 2018/858**: The overarching framework for motor vehicle type approval is similarly amended.
- **Agricultural Vehicles — Regulations (EU) No 167/2013 and No 168/2013**: Cover tractors and two- and three-wheelers; AI systems in these vehicles are regulated under Annex I.
- **Civil Aviation Security — Regulation (EC) No 300/2008**: AI systems used in civil aviation security screening (e.g. threat detection in luggage) are captured.
- **Aviation Safety — Regulation (EU) 2018/1139** (EASA Regulation): Establishes EASA and common rules for civil aviation. AI systems in aircraft or aviation infrastructure fall under both EASA rules and the AI Act; the AI Act amends this Regulation.
- **Rail Interoperability — Directive (EU) 2016/797**: AI systems as safety components in rail infrastructure or rolling stock must comply. The AI Act amends this Directive.
- **Marine Equipment — Directive 2014/90/EU**: AI systems in marine equipment fall under this regime. Also amended by the AI Act.

**Machinery & Industrial Equipment**

- **Machinery Directive — Directive 2006/42/EC**: AI systems embedded in machinery as safety components are high-risk. The "Blue Guide" principle applies: both the Machinery Directive and the AI Act may apply simultaneously to the same product.
- **Lifts — Directive 2014/33/EU**: Safety components in lifts.
- **ATEX (Explosive Atmospheres) — Directive 2014/34/EU**: Equipment intended for use in explosive atmospheres.
- **Pressure Equipment — Directive 2014/68/EU**: Pressure vessels with AI safety components.
- **Radio Equipment — Directive 2014/53/EU**: AI in radio equipment falls under both frameworks (this Directive is also referenced in the amended EASA Regulation).
- **PPE — Regulation (EU) 2016/425**: Personal protective equipment.
- **Gas Appliances — Regulation (EU) 2016/426**: Appliances burning gaseous fuels.
- **Non-automatic Weighing Instruments — Directive 2014/31/EU** and **Measuring Instruments — Directive 2014/32/EU**: Measuring equipment with AI components.

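
As a rough illustration of how this classification and layering works, here is a minimal Python sketch under simplifying assumptions. The `AiSystemContext` dataclass, its boolean fields and the returned labels are hypothetical and do not reproduce the Act's legal tests; whether something is a "safety component" or requires third-party assessment is a legal determination.

```python
from dataclasses import dataclass


@dataclass
class AiSystemContext:
    """Hypothetical, simplified inputs; real classification is a legal assessment."""
    is_safety_component: bool              # the AI system is a safety component of a product (or is the product)
    covered_by_annex_i_act: bool           # the product falls under an Annex I harmonisation act
    sectoral_act_requires_notified_body: bool


def applicable_frameworks(ctx: AiSystemContext) -> list[str]:
    """Sketch of the Annex I dual-compliance logic described above."""
    frameworks: list[str] = []
    if ctx.covered_by_annex_i_act:
        frameworks.append("sector-specific Annex I legislation (e.g. MDR, EASA rules, Machinery Directive)")
    if ctx.is_safety_component and ctx.covered_by_annex_i_act:
        # Automatically high-risk: AI Act requirements apply on top of the sectoral act.
        frameworks.append("AI Act high-risk requirements")
        if ctx.sectoral_act_requires_notified_body:
            # The AI Act assessment can be folded into the existing third-party procedure.
            frameworks.append("integrated conformity assessment via the sectoral notified body")
    return frameworks


if __name__ == "__main__":
    # Example: AI software acting as a safety component of a medical device.
    print(applicable_frameworks(AiSystemContext(True, True, True)))
```

The only point of the sketch is the layering: where the AI Act applies, the sectoral Annex I act never drops away.
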
---

### 5. Financial Services

The AI Act takes a distinctive approach to the financial sector: rather than creating new supervisors, it designates the existing financial regulators as the competent authorities for AI Act market surveillance within their domains. This means:

**Banking — CRD/CRR framework (Directive 2013/36/EU and Regulation (EU) No 575/2013)**: National banking supervisors responsible for credit institutions under CRD IV are designated as the AI Act's competent authorities for those institutions. They must report relevant findings to the ECB under the Single Supervisory Mechanism.

**Single Supervisory Mechanism — Council Regulation (EU) No 1024/2013**: National authorities participating in the SSM must notify the ECB of any AI Act market surveillance findings that may be of prudential interest. This creates a direct information bridge between AI oversight and banking prudential supervision.

**Insurance — Solvency II (Directive 2009/138/EC)**: Insurance and reinsurance undertakings' AI compliance is supervised by the Solvency II competent authorities.

**Insurance Distribution — Directive (EU) 2016/97**: Insurance intermediaries fall under the same logic.

**Consumer Credit / Mortgages — Directives 2008/48/EC and 2014/17/EU**: Part of the cluster of financial services directives whose supervisors take on AI Act competence.

The key principle is that financial institutions subject to internal governance requirements under Union financial services law can use those existing governance frameworks to demonstrate compliance with the AI Act's obligations on providers and deployers of high-risk AI systems, ensuring consistency and avoiding duplication.

---

### 6. Migration, Asylum & Border Control

AI systems used in migration, asylum, and border management are classified as high-risk and must comply *not only* with the AI Act but also with the specific procedural rules of the relevant Union law.
The AI Act is explicit that it must never be used to circumvent international obligations such as the UN Refugee Convention.

**Visa Code — Regulation (EC) No 810/2009**: AI systems used in visa processing or border management must comply with the procedural requirements of this Code.

**Asylum Procedures Directive — Directive 2013/32/EU**: AI tools used in asylum determination processes must respect the procedural guarantees of this Directive, including the right to an effective remedy.

**Large-Scale IT Systems (Annex X)**: The AI Act lists the large-scale IT systems in the Area of Freedom, Security and Justice to which it applies, including:

- **SIS (Schengen Information System)** — Regulations (EU) 2018/1860 and 2018/1861
- **VIS (Visa Information System)** — referenced via the amending Regulation (EU) 2021/1134
- **Eurodac** — Regulation (EU) 2024/1358: biometric data comparison for asylum seekers; AI systems operating within Eurodac are subject to the AI Act
- **EES (Entry/Exit System)** — Regulation (EU) 2017/2226
- **ETIAS** — Regulation (EU) 2018/1240: European Travel Information and Authorisation System

These systems frequently involve automated decision-making and biometric processing, placing them squarely in the AI Act's high-risk category, with an obligation to comply with both the AI Act's requirements *and* the procedural and fundamental rights safeguards built into the governing regulations of each system.

![](https://static.dastra.eu/richtext/efc92c2c-1b48-4189-8147-5a8b658d8aa8/notebooklm-mind-map-7-original.png)

---

### 7. Digital Services & Cybersecurity

**Digital Services Act — Regulation (EU) 2022/2065**: The AI Act explicitly states that it **shall not affect** the DSA's liability rules for providers of intermediary services (hosting, platforms, search engines). If an AI system is embedded in an intermediary service, the DSA's liability shield provisions remain intact and the AI Act does not override them.

![](https://static.dastra.eu/richtext/455370d4-7699-43b3-97ef-e566f90d6cf3/notebooklm-mind-map-6-original.png)

**Cybersecurity Act — Regulation (EU) 2019/881 (ENISA)**: The AI Act establishes a cooperation duty: the Commission must work with ENISA on cybersecurity issues related to AI systems. ENISA's expertise and the EU cybersecurity certification framework inform the AI Act's approach to cybersecurity requirements for high-risk AI systems (which must be designed to withstand attacks). Where AI systems undergo EU cybersecurity certification, this may be taken into account in conformity assessment.

**Critical Entities Resilience Directive — Directive (EU) 2022/2557**: The definition of "critical infrastructure" in Article 2(4) of this Directive is directly imported into the AI Act. A serious disruption to critical infrastructure (as defined there) that creates an imminent threat to life is one of the trigger conditions justifying real-time remote biometric identification by law enforcement in public spaces, one of the very limited exceptions to the general prohibition.

---

### 8. Whistleblower Protection

**Whistleblower Directive — Directive (EU) 2019/1937**: The AI Act expressly states that this Directive **applies** to the reporting of infringements of the AI Act. Any person who reports a breach of the AI Act to competent authorities or via internal channels has the full protection afforded to whistleblowers under this Directive, including protection against retaliation.

---

### 9. Product Liability

**Council Directive 85/374/EEC** (Product Liability Directive): The AI Act is clear that all rights and remedies for damages under this Directive remain "unaffected and fully applicable." Victims of harm caused by a defective AI system can still pursue product liability claims. Compliance with the AI Act does not provide a liability shield under the Product Liability Directive.

---

### Summary

The AI Act is designed as a **horizontal, cross-sectoral regulation** that *complements rather than replaces* existing law. Its underlying logic can be summarised in three principles:

(1) where sectoral legislation already exists (medical devices, aviation, machinery, etc.), the AI Act adds AI-specific requirements on top and uses the existing conformity assessment infrastructure where possible;

(2) where fundamental rights frameworks exist (GDPR, Law Enforcement Directive, asylum law), the AI Act operates alongside them and does not derogate from them; and

(3) where sector-specific supervisors already exist (financial regulators, aviation authorities), they become the AI Act's enforcement arm for their domains, avoiding supervisory fragmentation.