[{"data":1,"prerenderedAt":-1},["ShallowReactive",2],{"$fQRvLcC12MzLJj4Hcwwm-JJIobUUrmyClk3oeObDH2Zw":3,"white_papers":59},{"tableOfContents":4,"markDownContent":5,"htmlContent":6,"metaTitle":7,"metaDescription":8,"wordCount":9,"readTime":10,"title":7,"nbDownloads":11,"excerpt":12,"lang":13,"url":14,"intro":15,"featured":4,"state":16,"author":17,"authorId":18,"datePublication":23,"dateCreation":24,"dateUpdate":25,"mainCategory":26,"categories":42,"metaDatas":48,"imageUrl":49,"imageThumbUrls":50,"id":58},false,"With the adoption of [**Regulation (EU) 2024/1689, known as the AI Act**](https://eur-lex.europa.eu/legal-content/EN/TXT/PDF/?uri=OJ:L_202401689), the European Union has taken a historic step in regulating artificial intelligence by establishing **the first comprehensive legal framework** aimed at governing the design, placing on the market, and use of AI systems. This text, which entered into force in August 2024, is based on an approach **grounded in the level of risk posed by AI applications to individuals’ health, safety, and fundamental rights**.\n\n## A distinction between different risk levels\n\nThe **AI Act** therefore distinguishes [several categories of AI systems](https://digital-strategy.ec.europa.eu/en/policies/regulatory-framework-ai?), each giving rise to specific legal obligations.\n\n- At the top of this hierarchy are systems presenting **unacceptable risk**: such uses, including social scoring, behavioral manipulation, or certain real-time facial recognition devices, **are strictly prohibited** on the European Union market in order to prevent serious violations of fundamental rights.\n- Next come **high-risk systems**, which are not prohibited but **must comply with a set of enhanced requirements** before being placed on the market or deployed. These obligations include, in particular, a rigorous risk assessment, data quality and bias management procedures, human oversight mechanisms, as well as detailed and traceable documentation of the system’s functioning. This category covers a variety of uses in sensitive sectors such as health, education, employment, essential services, and even justice and public order.\n- At an intermediate level, the AI Act identifies systems with **limited risk**, for which the obligations focus **essentially on transparency** toward users: for example, ensuring that users are informed that they are interacting with an AI system or that the content has been generated by artificial intelligence.\n\nThere is no category representing **minimal or no risk** in the AI Act. 
> Across these categories, the challenge for organizations is not only to identify the risk level of an AI system in the abstract, but also to **identify, classify, and document each use case in concrete terms**, in order to determine precisely the applicable legal regime and the resulting obligations.
>
> This requirement involves putting in place a **structured methodology for mapping AI use cases**, making it possible to move from a theoretical reading of the AI Act to an operational and controlled implementation in practice.

## Mapping AI systems with DASTRA

[Mapping](https://doc.dastra.eu/features/cartography) enables companies and organizations to visualize all deployed AI systems, identify applicable legal obligations, and prioritize compliance actions.

**The approach is based on three main pillars: identification, classification, and documentation.**

> For an AI use-case mapping exercise to be truly operational, it must go beyond a simple descriptive list and become a **structured repository connected to the reality of the systems**, the data, and the regulatory obligations. **DASTRA** meets this need by offering [features](https://doc.dastra.eu/features/systemes-dia) that make it possible to document, classify, and monitor AI systems within their organizational and regulatory context.

### 1. Create an initial register of AI systems

The first step is to identify **all AI systems used or developed**, taking into account not only internal software but also third-party solutions and SaaS offerings. This inventory must include **systems in production, in testing, at the proof-of-concept (POC) stage, or in deployment**, in order to avoid the blind spots frequently observed during audits.

Each system should be described along several dimensions, such as its purpose (e.g. fraud detection), the technology used, the beneficiary, and the sensitivity of the data processed.

With **DASTRA**, each AI system can be represented by a dedicated record, which serves as the **central entry point of the mapping**.

![](https://static.dastra.eu/richtext/e12b0b7c-1e5b-46ec-97fb-f2b3ea3829a3/image-original.png)

This record makes it possible in particular to:

- **associate the system with technical assets** already identified in the data mapping (applications, APIs, infrastructures);
- **link the datasets used or generated**, facilitating impact analysis in light of the GDPR and the AI Act;
- **identify key stakeholders** (business owner, controller, processor, vendor);
- **qualify the system's status** (internal/external, being deployed, discontinued), ensuring a comprehensive and up-to-date mapping.

### 2. Classify AI systems according to their risk level

The second step is to **determine the category of each AI system in use**, in order to adopt a differentiated approach based on risk level.

With **DASTRA**, it is possible to **link each system to its risk category** directly in its record. The tool also makes it possible to **document the criteria that led to the classification** and to assess the **added value** of each system, in order to facilitate decision-making.
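As an illustration of what "linking each system to its risk category and documenting the criteria" can look like in structured form, here is a minimal, hypothetical sketch. The field names are assumptions of ours and do not reflect DASTRA's actual data model, and the risk tier must of course be assigned according to the legal tests of the Regulation, not a code heuristic.

```python
from dataclasses import dataclass, field

@dataclass
class AiSystemRecord:
    """Hypothetical register entry for one AI system (illustrative only)."""
    name: str
    purpose: str                   # e.g. "fraud detection"
    technology: str                # e.g. "supervised ML model", "LLM API"
    beneficiary: str               # who the system serves
    status: str                    # "production" | "testing" | "poc" | "deployment"
    linked_assets: list[str] = field(default_factory=list)    # apps, APIs, infra
    linked_datasets: list[str] = field(default_factory=list)  # data used or generated
    stakeholders: dict[str, str] = field(default_factory=dict)  # role -> party
    processes_sensitive_data: bool = False
    risk_tier: str | None = None   # set during classification (step 2)
    classification_rationale: str = ""  # criteria that led to the classification

# Example entry: one system, classified with its rationale kept alongside.
register = [
    AiSystemRecord(
        name="Fraud scoring engine",
        purpose="fraud detection on payment flows",
        technology="supervised ML model",
        beneficiary="risk department",
        status="production",
        linked_datasets=["transactions_2024"],
        stakeholders={"business owner": "Head of Risk", "vendor": "internal"},
        processes_sensitive_data=True,
        risk_tier="high",
        classification_rationale="scoring that affects individuals' access to services",
    ),
]
```

The design point is simply that the classification and its documented rationale live on the same record as the technical links, so that the reasoning behind a system's category can be traced during an audit.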
![](https://static.dastra.eu/richtext/230f960b-b179-4dd5-8893-f92de2cf7f98/image-original.png)

### 3. Document obligations and ensure monitoring

For each identified and classified AI system, it is necessary to **document the associated obligations and requirements**, such as applicable regulatory obligations or internal operational processes.

With **DASTRA**, this documentation is carried out directly within the **AI system record**. It is recommended to include a transparency notice (or information notice) in order to **prepare and centralize the information intended for end users**. This helps ensure that the system's purpose, the data used, and the rights of data subjects are properly communicated.

### 4. Analyze interdependencies and impacts of AI systems

A mapping exercise must also **identify interactions between systems, dependencies on data, and the potential impact on fundamental rights**. For example, a recommendation system using personal health data could, depending on its purpose, shift from limited risk to high risk, requiring enhanced monitoring.

With **DASTRA**, this analysis is made easier by several features. In particular, it is possible to visualize **the mapping** and **see the links between systems, assets, and datasets**, providing an overview of critical flows and dependencies. **Interactions with datasets and AI models** are explicitly connected, making it possible to quickly identify systems whose purpose or sensitive data may change the risk level.
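To make this dependency reasoning concrete, here is a small self-contained sketch; the link graph and the sensitivity rule below are simplifying assumptions of ours, not DASTRA functionality.

```python
# Hypothetical link graph between AI systems and the datasets they consume.
# In a mapping tool these links are maintained as records; plain dicts stand in here.
system_datasets: dict[str, list[str]] = {
    "recommendation engine": ["browsing_history", "health_profile"],
    "spam filter": ["inbound_email"],
}

# Datasets flagged as sensitive, e.g. special-category data under the GDPR.
sensitive_datasets: set[str] = {"health_profile"}

def systems_needing_reassessment(links: dict[str, list[str]],
                                 sensitive: set[str]) -> list[str]:
    """Flag systems touching sensitive data, whose risk tier may need review."""
    return [system for system, datasets in links.items()
            if sensitive.intersection(datasets)]

print(systems_needing_reassessment(system_datasets, sensitive_datasets))
# -> ['recommendation engine']
```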
{% button href="https://www.dastra.eu/en/contacts" text="Talk to a Dastra expert" target="_blank" role="button" class="btn btn-primary" %}