Following the confirmation that no pause will be introduced in the implementation timeline of the AI Act, the long-awaited Code of Practice on General-Purpose AI (GPAI) has now been published.
The European Commission released the code, establishing a key reference framework for how stakeholders across the AI value chain, from large-scale model developers to start-ups and SMEs, can begin aligning with the forthcoming GPAI-related obligations under the AI Act.
Given their broad applicability and technical architecture, GPAI models, trained on vast datasets using large-scale self-supervised learning, underpin a significant proportion of AI systems deployed across the EU. Their versatility allows them to be integrated into a wide range of downstream applications, irrespective of how the model is placed on the market.
While the Code has met with some resistance — including calls for simplification from figures such as the Danish Minister for Digital Government — it has also received strong institutional backing. The co-Chairs of the European Parliament’s Working Group on the Implementation and Enforcement of the AI Act have emphasized the importance of adoption, stating:
“We now call on all GPAI providers to sign and implement the Code in full. Participation signals a willingness to act in good faith and demonstrates alignment with European values.”
Key highlights
The Code is a voluntary instrument designed to support GPAI model providers with current or planned operations in the EU in demonstrating compliance with Articles 53 and 55 of the AI Act.
It reflects over a year of collaborative work, incorporating insights from 1,000+ stakeholders, 1,600 written submissions, and over 40 expert workshops. It stands as a landmark example of inclusive, community-led regulatory co-creation in the digital era. To learn more about how the Code of Practice was drafted and its timeline, click here.
While participation in the Code is not mandatory, the AI Act itself imposes binding horizontal obligations on all GPAI providers under Article 53(1) — particularly concerning transparency requirements and copyright compliance. In addition, providers of GPAI models that pose a systemic risk must meet stricter safety and security requirements under Article 55(1).
A practical compliance guide, not a new legal obligation
First thing to know: the Code of Practice is not synonymous with legal obligations, let alone additional obligations beyond those of the AI Act.
The Code of Practice does not create new legal obligations beyond what is already required under the AI Act. Rather, it serves as a practical implementation guide, illustrating how GPAI providers can align with existing obligations on transparency, copyright, and safety and security.
Recognizing the rapid pace of AI development, the Code will undergo periodic review at least every two years. The AI Office is expected to propose a streamlined update mechanism to ensure the Code remains aligned with state-of-the-art practices and emerging risks.
The Code is divided into three chapters: Transparency, Copyright, and Safety & Security. So when we say "the Code of Practice", we are actually referring to the compilation of those three chapters, each tackling a different subject.
Transparency: The Code outlines clear expectations for disclosing how GPAI models are trained, evaluated, and operate. It includes a standardized Model Documentation Form to guide consistent and comprehensive information sharing.
Copyright: This section focuses on safeguarding intellectual property rights — specifically how training data is sourced and how providers can align with EU copyright law, ensuring respect for IP holders throughout the model lifecycle.
These first two pillars apply to all GPAI model providers.
Safety and Security: Targeted at advanced GPAI models with systemic risks, this section defines state-of-the-art practices to mitigate potential harms, reduce misuse, and maintain public trust in AI. It offers technical and organizational measures aligned with emerging standards.
Importantly, the Code includes tailored guidance for start-ups and SMEs, offering simplified compliance pathways and proportional key performance indicators (KPIs). This ensures that smaller players are not disproportionately burdened, in line with Recital 109 of the AI Act.
🚀 What’s next for the Code of Practice on GPAI?
The AI Office is now inviting providers of GPAI models to voluntarily sign the Code of Practice.
Public listing of signatories will take place on 1 August 2025, ahead of the formal entry into application of GPAI-specific obligations under the AI Act on 2 August 2025.
The Code of Practice remains subject to assessment by Member States and the European Commission, which may ultimately approve it following an adequacy assessment by the AI Office and the AI Board.
In parallel, the Commission will publish complementary Guidelines in July, aimed at:
Clarifying the scope of obligations for GPAI providers,
Defining what constitutes a General-Purpose AI model,
Distinguishing GPAI models that pose systemic risks,
Identifying who qualifies as a GPAI provider, particularly in cases involving fine-tuning or modification of existing models.
Voluntary, but highly incentivized
While adoption of the Code is not mandatory, it provides a presumptive pathway to demonstrating compliance with Articles 53 and 55 of the AI Act. Signing the Code can therefore significantly reduce the administrative burden on GPAI providers by offering an aligned, standardized approach.
Providers who choose not to sign must demonstrate alternative means of compliance — likely subject to greater evidentiary requirements and regulatory scrutiny, as indicated by the Commission. This effectively makes the Code the de facto baseline for demonstrating responsible and safe behavior in the EU AI market, and potentially internationally.
Enforcement & oversight will be key
As with any self-regulatory instrument, the Code’s strength will lie in its enforcement. The AI Office will need sufficient resources and expertise to evaluate the varied compliance mechanisms adopted by GPAI providers — particularly in an outcome-based regulatory model.
Now that the text has been finalized, the AI Office must focus on operationalizing the Code, ensuring that commitments are translated into concrete, enforceable practices.
As the aforementioned co-Chairs put it: “We will continue to advocate for rigorous monitoring and constructive engagement, ensuring that this code does not become a ‘paper tiger’.”
Breakdown of the three chapters
The principle of proportionality applies throughout the Code. Compliance obligations are tailored to reflect the size, capacity, and market presence of the GPAI provider. This ensures that startups and SMEs face flexible, proportionate expectations rather than disproportionate burdens.
Concerning Transparency
With respect to transparency, the GPAI Code of Practice operationalizes providers' obligations under Article 53(1)(a) and (b) of the AI Act, along with Annexes XI and XII, by introducing a standardized Model Documentation Form. This form enables GPAI providers to compile and maintain the required technical and compliance information in a consistent and structured manner.
- At a minimum, signatories must draw up comprehensive technical documentation that includes details of the model’s training and testing processes, the evaluation results, and relevant methodological information.
- Additionally, they must prepare documentation intended for downstream providers who may integrate the GPAI model into their own AI systems, thereby equipping them to understand the model's capabilities and limitations and to fulfill their own compliance duties.
While major GPAI providers may already publish model cards, their format and content often vary by sector or use case. The Model Documentation Form seeks to:
- Standardize these disclosures by consolidating them in a single, uniform format;
- Clarify the content required;
- Identify the intended recipients, such as the AI Office, national competent authorities, and downstream developers.
Required disclosures include:
- Contact information to disclose publicly via website or other appropriate means;
- Description of the training, testing, and validation methodologies;
- The types of data used and how they were collected; how the data was curated and which methodologies were used to ensure data quality and integrity;
- Measures implemented for bias detection;
- Where applicable, the rights obtained for third-party data.
Importantly, providers must keep this documentation up to date and retain previous versions for at least ten years after the model has been placed on the market.
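To make these disclosure and retention requirements concrete, here is a minimal sketch of how a provider might structure a documentation record internally. The field names and the `update_documentation` helper are our own illustration and assume nothing about the official form's actual schema.

```python
from dataclasses import dataclass
from datetime import date

# Hypothetical sketch of a Model Documentation Form record.
# Field names are illustrative, not the official form's schema.
@dataclass
class ModelDocumentation:
    model_name: str
    version: str
    release_date: date
    public_contact: str                # contact info disclosed via website or similar
    training_methodology: str          # training, testing, and validation approach
    evaluation_results: str            # summary of evaluation outcomes
    data_types_and_sources: list[str]  # types of data used and how collected
    data_curation_methods: str         # curation and quality-assurance methodology
    bias_detection_measures: str       # measures implemented for bias detection
    third_party_data_rights: str | None = None  # rights obtained, where applicable

# Superseded versions must stay retrievable for at least ten years after the
# model is placed on the market, so prior records are archived, not overwritten.
RETENTION_YEARS = 10
archive: dict[tuple[str, str], ModelDocumentation] = {}

def update_documentation(doc: ModelDocumentation) -> None:
    """Store the new version while keeping earlier versions retrievable."""
    archive[(doc.model_name, doc.version)] = doc
```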
The Code further clarifies that, in the case of fine-tuning or other modifications to an existing GPAI model, transparency obligations should apply proportionally, focusing solely on the changes introduced by the provider.
The Code also establishes the AI Office as the central point of contact for national competent authorities, meaning that all official information requests must be submitted through the AI Office. Such requests must clearly indicate their legal basis and purpose and be strictly limited to what is necessary for the authority’s tasks.
Concerning Copyright
Internal Copyright Policy:
All GPAI providers must adopt and implement an internal copyright compliance policy aligned with EU copyright law. This policy should:
Define clear internal procedures governing the handling of copyrighted content.
Designate responsible personnel for oversight and implementation.
Be accompanied by a public summary to enhance transparency and stakeholder trust.
Lawful use: key restrictions and safeguards
Providers may only reproduce and extract lawfully accessible, copyright-protected content, and must not circumvent any effective technological protection measures. Key obligations include:
Ban on the use of pirated content: Providers are prohibited from sourcing content from known copyright-infringing ("piracy") websites. A dynamic list of such websites will be maintained and published by EU authorities.
Web crawling safeguards: Crawling tools must be designed to access only lawfully accessible content. This includes:
Respecting paywalls, access restrictions, and technical protection measures.
Adhering to machine-readable opt-outs (e.g., robots.txt) or similar protocols (see the sketch after this list).
Integrating objection-detection tools that identify and respond to copyright holders’ signals.
Due diligence for third-party datasets: Providers must verify that any external datasets used for model training are compliant with EU copyright rules.
Transparency and redress mechanisms: Providers must:
Disclose crawler specifications and objection-detection mechanisms.
Maintain a contact point for rights holders to request model-related information, and submit complaints or copyright concerns.
Avoid penalizing content indexed by search engines when legitimate objections are raised.
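As a purely illustrative example of the opt-out and blocklist safeguards above, the sketch below checks a URL against a placeholder blocklist and the site's robots.txt using Python's standard library. The blocklist contents and user-agent string are assumptions; the EU-maintained list of infringing sites does not exist in this form.

```python
from urllib.robotparser import RobotFileParser
from urllib.parse import urljoin, urlparse

# Placeholder for the dynamic list of infringing sites to be published by EU authorities.
BLOCKLIST = {"example-piracy-site.invalid"}

def may_crawl(url: str, user_agent: str = "MyGPAICrawler") -> bool:
    """Check a URL against the piracy blocklist and the site's robots.txt opt-out."""
    host = urlparse(url).netloc
    if host in BLOCKLIST:
        return False  # ban on sourcing from known copyright-infringing sites
    robots = RobotFileParser()
    robots.set_url(urljoin(url, "/robots.txt"))
    try:
        robots.read()
    except OSError:
        return False  # fail closed if robots.txt cannot be retrieved
    return robots.can_fetch(user_agent, url)

# Usage: only fetch pages the protocol permits for this crawler.
print(may_crawl("https://example.com/articles/1"))
```

A real crawler would also need to honor paywalls and technical protection measures, which robots.txt alone does not capture.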
Downstream System Obligations
For GPAI models integrated into other AI systems, whether by the provider or a third party, the Code requires:
Implementation of measures to prevent copyright-infringing outputs.
Inclusion of prohibitions on infringing uses within Acceptable Use Policies.
These obligations apply regardless of the downstream user's identity, ensuring responsible use throughout the AI supply chain.
Disclosure of Training Content Summaries
To enable rights holders to identify potential use of their works, providers must publish summaries of training content that are:
Sufficiently detailed to allow assessment and possible action (e.g., filing objections or requesting delisting).
Publicly accessible as part of the provider’s overall transparency commitments.
Concerning Safety and Security
The Safety and Security chapter of the GPAI Code of Practice applies specifically to general-purpose AI models that may pose systemic risks, as defined under Article 51 of the AI Act.
Under the AI Act, systemic risk is associated with models that exhibit high-impact capabilities—meaning capabilities that are comparable to or exceed those of the most advanced GPAI models and that have a significant impact on the EU market. The Act presumes that models trained using more than 10^25 floating-point operations (FLOPs) fall within this category.
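For a rough sense of scale, training compute is often approximated as about 6 FLOPs per parameter per training token. This heuristic and the example figures below are illustrative assumptions, not a method prescribed by the AI Act or the Code.

```python
# Back-of-the-envelope estimate of training compute using the common
# ~6 FLOPs per parameter per training token approximation (an assumption,
# not a calculation method mandated by the AI Act).
SYSTEMIC_RISK_THRESHOLD = 1e25  # Article 51 presumption: more than 10^25 FLOPs

def estimated_training_flops(parameters: float, training_tokens: float) -> float:
    return 6 * parameters * training_tokens

# Hypothetical model: 400 billion parameters trained on 10 trillion tokens.
flops = estimated_training_flops(4e11, 1e13)
print(f"{flops:.1e} FLOPs")             # 2.4e+25
print(flops > SYSTEMIC_RISK_THRESHOLD)  # True -> presumed to pose systemic risk
```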
Providers of such models are required to implement a comprehensive Safety and Security Framework to comply with their obligations under Article 55 of the AI Act. This includes conducting thorough risk assessments, performing regular evaluations, enabling post-market monitoring, and reporting incidents in a timely manner. Providers are also expected to enable external evaluations.
- Providers must conduct model evaluations, including adversarial testing, to detect and mitigate systemic risks. These evaluations should assess potential sources of systemic risk and be documented appropriately.
- If a provider identifies a serious incident or vulnerability, it must be reported without undue delay to the AI Office and the relevant national authorities, and documented along with any corrective measures taken.
- Providers are also required to ensure that their systems are protected by an adequate level of cybersecurity safeguards.
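Purely as an illustration of the documentation side of incident reporting, a provider's internal record might capture the elements listed above. Every field name below is invented for the sketch and is not taken from the Code or the AI Act.

```python
from dataclasses import dataclass
from datetime import datetime

# Illustrative record of a serious-incident report; field names are invented.
@dataclass
class SeriousIncidentReport:
    model_name: str
    detected_at: datetime
    description: str          # what happened or which vulnerability was found
    corrective_measures: str  # documented fixes or mitigations applied
    reported_to: tuple[str, ...] = ("AI Office", "national competent authority")

report = SeriousIncidentReport(
    model_name="gpai-model-x",
    detected_at=datetime.now(),
    description="Jailbreak enabling a disallowed output category",
    corrective_measures="Patched safety filter; re-ran adversarial evaluation",
)
```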
Signatories must go further by actively identifying, quantifying, and managing the systemic risks their GPAI models may pose. This involves:
- Estimating when a model may exceed applicable risk thresholds,
- Defining what constitutes an acceptable level of systemic risk,
- Maintaining clear mitigation strategies,
- And transparently documenting existing security controls.
Once risks are identified, providers are expected to evaluate whether they remain within internal risk tolerance limits. If not, mitigation measures must be applied until the risk is brought to an acceptable level.
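That evaluate-then-mitigate cycle can be pictured as a simple control loop. The risk scores, tolerance value, and mitigation step below are all hypothetical stand-ins for a provider's actual evaluation and mitigation processes.

```python
# Hypothetical illustration of the evaluate-then-mitigate cycle. The risk
# scores, tolerance, and mitigation effect are invented for this sketch.
ACCEPTABLE_RISK = 0.2  # internal risk tolerance, defined by the provider
MAX_ITERATIONS = 5     # avoid looping forever if mitigation stalls

def assess_risk(model_id: str) -> float:
    """Stand-in for model evaluations such as adversarial testing."""
    return 0.35  # pretend the initial assessment exceeds tolerance

def apply_mitigation(model_id: str, risk: float) -> float:
    """Stand-in for a documented mitigation measure, assumed to reduce risk."""
    return risk * 0.6

risk = assess_risk("gpai-model-x")
for _ in range(MAX_ITERATIONS):
    if risk <= ACCEPTABLE_RISK:
        break  # risk within tolerance: proceed, but keep monitoring
    risk = apply_mitigation("gpai-model-x", risk)
else:
    raise RuntimeError("Risk could not be reduced to an acceptable level")

print(f"final risk: {risk:.2f}")
```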
The Code also sets expectations for incident tracking and reporting, reinforcing Article 55(1)(c) by promoting a feedback loop that enhances safety, accountability, and regulatory oversight. It stresses that safety obligations are not static and must evolve as technology advances.
Consequently, the Code advocates for flexible, forward-looking risk governance strategies capable of adapting to emerging threats and model capabilities.
More documentation and less built-in responsibility?
At present, however, the prevailing approach still treats risk management as a process that runs in parallel to AI development—rather than being fully embedded into the model design lifecycle. While tools such as Model Reports and post-market monitoring help meet compliance standards, they can lead to overly bureaucratic workflows without necessarily improving technical or ethical outcomes.
The true shift envisioned by the Code is one that integrates risk awareness directly into the core of AI system development. Risk analysis should be conducted before selecting training data, before designing model architectures, and before deployment decisions are made.
Embedding this mindset throughout the development lifecycle—rather than relying solely on post hoc compliance—will be critical to building safer, more reliable, and more trustworthy general-purpose AI systems.
Want to learn more about how Dastra can help you comply with the AI Act? Click here.