
Systemic risk

Leïla Sayssa
24 July 2025·2 minutes read time

The notion of "Systemic risk" is defined by Article 3(65) of the AI Act.

These risks designate a form of danger that general-purpose AI models can pose to society, including potential harm to fundamental rights or loss of control over the model.

Under Article 51 of the AI Act, a model can be classified as such if it meets one of two conditions:

  • it has high-impact capabilities, namely capabilities that "match or exceed those recorded in the most advanced models" (Article 3(64) AI Act). These high-impact capabilities must have a significant impact on the Union market due to their reach.

    This is presumed when the model’s training compute exceeds 10^25 FLOPs (Article 51(2) AI Act), which can be estimated even before the training run is complete (a rough estimation sketch follows this list).

    • This threshold is not set in stone, as the Commission can adjust it to account for technological advancements.
  • it is designated as such ex officio by the Commission, or following alerts from the scientific panel about its high-impact capabilities.
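Because the 10^25 FLOP presumption depends only on a model's scale, it lends itself to a back-of-the-envelope check before training finishes. The sketch below uses the common 6 × parameters × training tokens approximation of training compute; both this heuristic and the example model figures are illustrative assumptions, not a method prescribed by the AI Act.

```python
# Rough check of estimated training compute against the AI Act's
# 10^25 FLOP presumption threshold (Article 51(2)).
# The 6 * N * D approximation (FLOPs ~ 6 x parameters x training tokens)
# is a widely used heuristic, not a method prescribed by the AI Act;
# the model figures below are purely hypothetical.

THRESHOLD_FLOPS: float = 1e25  # Article 51(2) presumption threshold


def estimate_training_flops(n_parameters: float, n_tokens: float) -> float:
    """Approximate total training compute with the 6*N*D rule of thumb."""
    return 6.0 * n_parameters * n_tokens


# Hypothetical model: 200 billion parameters trained on 10 trillion tokens.
flops = estimate_training_flops(n_parameters=200e9, n_tokens=10e12)
print(f"Estimated training compute: {flops:.2e} FLOPs")
if flops >= THRESHOLD_FLOPS:
    print("Presumed to have high-impact capabilities (Article 51(2)).")
else:
    print("Below the presumption threshold.")
```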

Large language models are typically considered GPAI models with systemic risk because of their high-impact capabilities and large training compute.

A list of models presenting systemic risk will be published and regularly updated, ensuring enhanced oversight of their deployment.
