
Map your AI systems in compliance with the AI Act

Maëva Vidal
April 15, 2026 · 6 minutes read time

With the adoption of Regulation (EU) 2024/1689, known as the AI Act, the European Union has taken a historic step in regulating artificial intelligence by establishing the first comprehensive legal framework aimed at governing the design, placing on the market, and use of AI systems. This text, which entered into force in August 2024, is based on an approach grounded in the level of risk posed by AI applications to individuals’ health, safety, and fundamental rights.

A distinction between different risk levels

The AI Act therefore distinguishes several categories of AI systems, each giving rise to specific legal obligations.

  • At the top of this hierarchy are systems presenting unacceptable risk: such uses, including social scoring, behavioral manipulation, or certain real-time facial recognition devices, are strictly prohibited on the European Union market in order to prevent serious violations of fundamental rights.
  • Next come high-risk systems, which are not prohibited but must comply with a set of enhanced requirements before being placed on the market or deployed. These obligations include, in particular, a rigorous risk assessment, data quality and bias management procedures, human oversight mechanisms, as well as detailed and traceable documentation of the system’s functioning. This category covers a variety of uses in sensitive sectors such as health, education, employment, essential services, and even justice and public order.
  • At an intermediate level, the AI Act identifies systems with limited risk, for which the obligations focus essentially on transparency toward users: for example, ensuring that users are informed that they are interacting with an AI system or that the content has been generated by artificial intelligence.

Finally, systems presenting minimal or no risk form the residual category. The vast majority of systems (such as spam filters or video games integrating AI) fall into it and are subject to no specific obligations, although voluntary best practices are encouraged.
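This four-tier logic can be summarized as a small lookup. The tier names follow the regulation, but the example use cases and the helper function below are purely illustrative, not an exhaustive legal mapping.

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "prohibited on the EU market"
    HIGH = "enhanced requirements before placing on the market"
    LIMITED = "transparency obligations toward users"
    MINIMAL = "no specific obligations; voluntary best practices encouraged"

# Illustrative examples drawn from the article, not a legal classification.
EXAMPLE_USE_CASES = {
    "social scoring": RiskTier.UNACCEPTABLE,
    "recruitment screening": RiskTier.HIGH,
    "customer service chatbot": RiskTier.LIMITED,
    "spam filter": RiskTier.MINIMAL,
}

def obligations_for(use_case: str) -> str:
    """Return the obligation summary for a known example use case."""
    return EXAMPLE_USE_CASES[use_case].value
```

In practice the tier is never derived from a name alone; it depends on the system's purpose, sector, and data, which is exactly why each use case must be assessed individually.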

Whatever the category, the challenge for organizations is not only to determine an AI system's risk level in the abstract, but to identify, classify, and document each use case concretely, in order to determine precisely the applicable legal regime and the resulting obligations.

This requirement involves putting in place a structured methodology for mapping AI use cases, making it possible to move from a theoretical reading of the AI Act to an operational and controlled implementation in practice.

Mapping AI systems with DASTRA

Mapping enables companies and organizations to visualize all deployed AI systems, identify applicable legal obligations, and prioritize compliance actions.

The approach is based on three main pillars: identification, classification, and documentation.

For an AI use-case mapping exercise to be truly operational, it must go beyond a simple descriptive list and become a structured repository, connected to the reality of the systems, the data, and the regulatory obligations. DASTRA meets this need precisely by offering features that make it possible to document, classify, and monitor AI systems within their organizational and regulatory context.

1. Create an initial register of AI systems

The first step is to identify all AI systems used or developed, taking into account not only internal software, but also third-party solutions or SaaS offerings. This inventory must include systems in production, in testing, in proof of concept (POC) stage, or in deployment, in order to avoid the blind spots frequently observed during audits.

Each system should be described according to several dimensions, such as the system’s purpose (e.g. fraud detection), the technology used, the beneficiary, and the sensitivity of the data processed.

With DASTRA, each AI system can be represented by a dedicated record, which serves as the central entry point of the mapping.

This record makes it possible in particular to:

  • associate the system with technical assets already identified in the data mapping (applications, APIs, infrastructures);

  • link the datasets used or generated, facilitating impact analysis in light of the GDPR and the AI Act;

  • identify key stakeholders (business owner, controller, processor, vendor);

  • and qualify the system’s status (internal/external, being deployed, discontinued), ensuring a comprehensive and up-to-date mapping.
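A minimal register entry covering the dimensions above could look like the following sketch. The field names and example values are assumptions chosen for illustration; they do not reflect DASTRA's actual data model or API.

```python
from dataclasses import dataclass, field

@dataclass
class AISystemRecord:
    """One entry in an AI-system register (illustrative fields only)."""
    name: str
    purpose: str                    # e.g. "fraud detection"
    technology: str                 # e.g. "gradient-boosted trees"
    beneficiary: str                # who the system serves
    data_sensitivity: str           # e.g. "personal", "health", "none"
    status: str = "in production"   # production / testing / POC / deployment
    linked_assets: list = field(default_factory=list)    # apps, APIs, infrastructure
    linked_datasets: list = field(default_factory=list)  # data used or generated
    stakeholders: dict = field(default_factory=dict)     # role -> responsible party

register = [
    AISystemRecord(
        name="payment-screening",
        purpose="fraud detection",
        technology="gradient-boosted trees",
        beneficiary="finance team",
        data_sensitivity="personal",
        linked_assets=["payments-api"],
        linked_datasets=["transactions-2025"],
        stakeholders={"business owner": "finance", "vendor": "internal"},
    ),
]
```

Keeping systems in testing or POC stage in the same register, with an explicit `status` field, is what closes the audit blind spots mentioned above.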

2. Classify AI systems according to the risk level

The second step is to determine the category of each AI system, in order to adopt an approach differentiated by risk level.

With DASTRA, it is possible to link each system to its risk category directly in its record. The tool also makes it possible to document the criteria that led to the classification, as well as to assess the added value of each system, in order to facilitate decision-making.
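As a rough illustration of classifying a system while documenting the criteria behind the decision, the function below attaches both a category and its rationale to a system record. The decision rules are deliberately simplified assumptions and are not legal advice.

```python
def classify(record: dict) -> dict:
    """Attach a risk category and the criteria that led to it.

    The rules are a simplified illustration of the AI Act's logic,
    not a substitute for a legal assessment.
    """
    criteria = []
    if record.get("prohibited_practice"):
        category = "unacceptable"
        criteria.append("falls under a prohibited practice (e.g. social scoring)")
    elif record.get("sector") in {"health", "education", "employment", "justice"}:
        category = "high"
        criteria.append(f"deployed in sensitive sector: {record['sector']}")
    elif record.get("interacts_with_users"):
        category = "limited"
        criteria.append("users interact directly with the system")
    else:
        category = "minimal"
        criteria.append("no risk trigger identified")
    return {**record, "risk_category": category, "classification_criteria": criteria}

hiring_tool = classify({"name": "cv-screening", "sector": "employment"})
# hiring_tool["risk_category"] == "high"
```

Storing the criteria alongside the category, as here, is what makes the classification auditable later.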

3. Document obligations and ensure monitoring

For each identified and classified AI system, it is necessary to document the associated obligations and requirements, such as applicable regulatory obligations or internal operational processes.

With DASTRA, this documentation is carried out directly within the AI system record. It is recommended to include a transparency notice (or information notice) in order to prepare and centralize the information intended for end users. This helps ensure that the system’s purpose, the data used, and the rights of data subjects are properly communicated.

4. Analyze interdependencies and impacts of AI systems

A mapping exercise must also identify interactions between systems, dependency on data, and the potential impact on fundamental rights. For example, a recommendation system using personal health data could, depending on its purpose, shift from limited risk to high risk, requiring enhanced monitoring.

With DASTRA, this analysis is made easier by several features. In particular, it is possible to visualize the mapping and see the links between systems, assets, and datasets, providing an overview of critical flows and dependencies. Interactions with datasets and AI models are explicitly connected, making it possible to quickly identify systems whose purpose or sensitive data may change the risk level.
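The dataset-driven part of this impact analysis can be sketched as a simple dependency check: flag every system that reads at least one sensitive dataset, since sensitive data can shift a system to a higher risk category. The system names and sensitivity list below are invented for the example.

```python
# Toy dependency graph: which systems read which datasets.
SYSTEM_DATASETS = {
    "recommendation-engine": ["browsing-history", "health-survey"],
    "spam-filter": ["inbound-mail"],
}
SENSITIVE_DATASETS = {"health-survey"}

def systems_needing_review(system_datasets: dict, sensitive: set) -> list:
    """Return systems whose linked datasets include sensitive data,
    i.e. candidates for a risk-level reassessment."""
    return sorted(
        name
        for name, datasets in system_datasets.items()
        if sensitive.intersection(datasets)
    )
```

Because the links between systems and datasets are explicit, a newly flagged dataset immediately surfaces every system whose classification should be revisited.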

