[{"data":1,"prerenderedAt":-1},["ShallowReactive",2],{"$fIfnikmWF5PiMuv5ZGO7tj_Jt206BWS3qSpvkGcOG6YI":3},{"tableOfContents":4,"markDownContent":5,"htmlContent":6,"metaTitle":7,"metaDescription":8,"wordCount":9,"readTime":10,"title":7,"nbDownloads":11,"excerpt":12,"lang":13,"url":14,"intro":15,"featured":4,"state":16,"author":17,"authorId":18,"datePublication":22,"dateCreation":23,"dateUpdate":24,"mainCategory":25,"categories":41,"metaDatas":68,"imageUrl":69,"imageThumbUrls":70,"id":78},false,"Following the confirmation that no pause will be introduced in the implementation timeline of the AI Act, the long-awaited [**Code of Practice on General-Purpose AI (GPAI)**](https://digital-strategy.ec.europa.eu/en/policies/contents-code-gpai) has now been published.\r\n\r\nThe **European Commission** released the code, establishing a key reference framework for how stakeholders across the AI value chain, from large-scale model developers to start-ups and SMEs, can begin aligning with the forthcoming **GPAI-related obligations** under the AI Act.\r\n\r\nGiven their broad applicability and technical architecture, **GPAI models, trained on vast datasets using large-scale self-supervised learning, underpin a significant proportion of AI systems deployed across the EU.** Their versatility allows them to be integrated into a wide range of downstream applications, irrespective of how the model is placed on the market.\r\n\r\n## The Code is published, now what? 
\r\n\r\nThe Code of Practice remains **subject to assessment** by **Member States and the European Commission**, which may ultimately approve it through an **adequacy decision**, following an assessment by the **AI Office** and the **AI Board**.\r\n\r\n> **Update**: In an [opinion released on 1 August 2025](https://digital-strategy.ec.europa.eu/en/library/commission-opinion-assessment-general-purpose-ai-code-practice), the Commission deems that the Code of Practice *\"adequately covers the obligations provided for in Articles 53 and 55 of the AI Act and meets the aims according to Article 56 of the AI Act\".* **This means the Code is now an adequate voluntary tool that organisations can use to demonstrate their compliance with the AI Act.**\r\n\r\n**Public listing of signatories** will take place on **1 August 2025**, ahead of the formal **entry into application of GPAI-specific obligations under the AI Act on 2 August 2025**.\r\n\r\n> The list is now shared on the [Commission's website](https://digital-strategy.ec.europa.eu/en/policies/contents-code-gpai#ecl-inpage-Signatories-of-the-AI-Pact). **Current signatories** include Amazon, Anthropic, Google, Mistral AI, and OpenAI.\r\n\r\nIn parallel, the Commission is set to publish **complementary Guidelines in July**, aimed at:\r\n\r\n- Clarifying the **scope of obligations** for GPAI providers,\r\n\r\n- Defining what constitutes a **General-Purpose AI model**,\r\n\r\n- Distinguishing GPAI models that pose **systemic risks**,\r\n\r\n- Identifying who qualifies as a **GPAI provider**, particularly in cases involving fine-tuning or modification of existing models.\r\n\r\n> Update: The Commission released the [guidelines to help general-purpose AI (GPAI) providers](https://digital-strategy.ec.europa.eu/en/library/guidelines-scope-obligations-providers-general-purpose-ai-models-under-ai-act) comply with the AI Act, particularly their obligations taking effect on August 2, 2025.\r\n>\r\n> Read [our take on the Guidelines 
here](https://www.dastra.eu/en/guide/building-a-gpai-you-might-be-the-provider/59448).\r\n\r\n## **Key highlights**\r\n\r\nThe Code is a **voluntary instrument** designed to support GPAI model providers with current or planned operations in the EU in demonstrating **compliance with Articles 53 and 55 of the AI Act.**\r\n\r\nIt reflects over a year of collaborative work, incorporating insights from **1,000+ stakeholders**, **1,600 written submissions**, and over **40 expert workshops**. It stands as a landmark example of inclusive, community-led regulatory co-creation in the digital era. To learn more about how the Code of Practice was drafted and its timeline, click [here](https://digital-strategy.ec.europa.eu/en/policies/ai-code-practice).\r\n\r\nWhile participation in the Code is not mandatory, the **AI Act itself imposes binding horizontal obligations** on all GPAI providers under **Article 53(1)** — particularly concerning **transparency requirements** and **copyright compliance**. In addition, providers of GPAI models that pose a **systemic risk** must meet **stricter safety and security requirements** under **Article 55(1)**.\r\n\r\n**A practical compliance guide, not a new legal obligation**\r\n\r\nFirst thing to know: the Code of Practice is not synonymous with legal obligations, let alone with obligations additional to those of the AI Act.\r\n\r\n**The Code of Practice does not create new legal obligations** beyond what is already required under the AI Act. Rather, it serves as a **practical implementation guide**, illustrating how GPAI providers can align with existing obligations on transparency, copyright, and safety and security.\r\n\r\nRecognizing the **rapid pace of AI development**, the Code will undergo **periodic review at least every two years**. The AI Office is expected to propose **a streamlined update mechanism** to ensure the Code remains aligned with **state-of-the-art practices** and emerging risks.\r\n\r\nThe Code is divided into three chapters: Transparency, Copyright, and Safety & Security. 
So when we say \"the code of practice\", it is actually the compilation of those three chapters, each tackling a different subject.\r\n\r\n1. **Transparency:** The Code outlines clear expectations for disclosing how GPAI models are trained, evaluated, and operate. It includes a standardized **Model Documentation Form** to guide consistent and comprehensive information sharing.\r\n\r\n2. **Copyright:** This section focuses on safeguarding intellectual property rights — specifically how training data is sourced and how providers can align with EU copyright law, ensuring respect for IP holders throughout the model lifecycle.\r\n\r\n***These first two pillars apply to all GPAI model providers.***\r\n\r\n3. **Safety and Security:** Targeted at ***advanced GPAI models with systemic risks***, this section defines **state-of-the-art practices** to mitigate potential harms, reduce misuse, and maintain public trust in AI. It offers technical and organizational measures aligned with emerging standards.\r\n\r\nImportantly, the Code **includes tailored guidance for start-ups and SMEs**, offering **simplified compliance pathways** and **proportional key performance indicators (KPIs)**. This ensures that smaller players are not disproportionately burdened, in line with **Recital 109** of the AI Act.\r\n\r\n### Voluntary, but highly incentivized\r\n\r\nWhile adoption of the Code is **not mandatory**, it provides a **presumptive pathway to demonstrating compliance** with Articles 53 and 55 of the AI Act. Signing the Code can therefore significantly **reduce the administrative burden** on GPAI providers by offering an aligned, standardized approach.\r\n\r\nProviders who choose **not to sign** must demonstrate **alternative means of compliance** — likely subject to **greater evidentiary requirements and regulatory scrutiny**, as indicated by the Commission. 
This effectively makes the Code the **de facto baseline** for demonstrating responsible and safe behavior in the EU AI market, and potentially **internationally**.\r\n\r\n### Enforcement & oversight will be key\r\n\r\nAs with any self-regulatory instrument, **the Code’s strength will lie in its enforcement**. The AI Office will need sufficient **resources and expertise** to evaluate the varied compliance mechanisms adopted by GPAI providers — particularly in an outcome-based regulatory model.\r\n\r\nNow that the text has been finalized, the AI Office must focus on **operationalizing the Code**, ensuring that commitments are translated into **concrete, enforceable practices**.\r\n\r\n## **Breakdown of the three chapters**\r\n\r\nThe **principle of proportionality** applies throughout the Code. Compliance obligations are tailored to reflect the **size, capacity, and market presence** of the GPAI provider. This ensures that **startups and SMEs** face **flexible, proportionate expectations** rather than disproportionate burdens.\r\n\r\n### **1. 
Concerning Transparency**\r\n\r\nWith respect to transparency, the GPAI Code of Practice operationalizes providers' obligations under Article 53(1)(a) and (b) of the AI Act, along with Annexes XI and XII, by introducing a **standardized [Model Documentation Form](https://ec.europa.eu/newsroom/dae/redirection/document/118118)**. This form enables GPAI providers to **compile and maintain the required technical and compliance information** in a consistent and structured manner.\r\n\r\n- At a minimum, **signatories must draw up comprehensive technical documentation that includes details of the model’s training and testing processes, the evaluation results, and relevant methodological information**.\r\n- Additionally, they must **prepare documentation intended for downstream providers who may integrate the GPAI model into their own AI systems**, thereby equipping them to understand the model's capabilities and limitations and to fulfill their own compliance duties.\r\n\r\nWhile major GPAI providers may already publish **model cards**, their format and content often vary by sector or use case. 
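\r\n\r\nFor illustration, a model card published today often opens with metadata along these lines (a hypothetical, Hugging Face-style sketch; the exact fields and their depth vary from one publisher to the next, which is precisely the inconsistency the Form is meant to iron out):\r\n\r\n```yaml\r\n# Hypothetical model card metadata; field names and coverage differ by provider\r\nlicense: apache-2.0\r\nlanguage:\r\n  - en\r\npipeline_tag: text-generation\r\ndatasets:\r\n  - example-web-corpus   # placeholder dataset name\r\n```\r\n\r\n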
The **Model Documentation Form** seeks to:\r\n\r\n- **Standardize** these disclosures by consolidating them in a single, uniform format;\r\n- **Clarify** the content required;\r\n- **Identify** the intended recipients, such as the AI Office, national competent authorities, and downstream developers.\r\n\r\nRequired disclosures include:\r\n\r\n- Contact information to disclose publicly via website or other appropriate means;\r\n- Description of the training, testing, and validation methodologies;\r\n- The types of data used and how they were collected; how the data was curated and which methodologies were used to ensure data quality and integrity;\r\n- Measures implemented for bias detection;\r\n- And, where applicable, the rights obtained for third-party data.\r\n\r\nImportantly, **providers must keep this documentation up to date and retain previous versions for at least ten years after the model has been placed on the market.**\r\n\r\nThe Code further clarifies that, in the case of **fine-tuning or other modifications to an existing GPAI model**, transparency obligations should apply **proportionally**, focusing solely on the changes introduced by the provider.\r\n\r\nThe Code also establishes the **AI Office as the central point of contact for national competent authorities**, meaning that all official information requests must be submitted through the AI Office. Such requests must clearly indicate their legal basis and purpose and be strictly limited to what is necessary for the authority’s tasks.\r\n\r\n### **2. Concerning Copyright**\r\n\r\n#### Internal Copyright Policy:\r\n\r\nAll GPAI providers must adopt and implement an **internal copyright compliance policy** aligned with **EU copyright law**. 
This policy should:\r\n\r\n- Define **clear internal procedures** governing the handling of copyrighted content.\r\n\r\n- Designate responsible personnel for **oversight and implementation**.\r\n\r\n- Be accompanied by a **public summary** to enhance **transparency** and stakeholder trust.\r\n\r\n#### Lawful use: key restrictions and safeguards\r\n\r\nProviders may **only reproduce and extract lawfully accessible, copyright-protected content**, and must **not circumvent** any effective technological protection measures. Key obligations include:\r\n\r\n- **Ban on the use of pirated content**: Providers are prohibited from sourcing content from known **copyright-infringing (\"piracy\") websites**. A **dynamic list** of such websites will be maintained and published by EU authorities.\r\n\r\n- **Web crawling safeguards**: Crawling tools must be designed to access **only lawfully accessible content**. This includes:\r\n\r\n  - Respecting **paywalls**, access restrictions, and technical protection measures.\r\n\r\n  - Adhering to **machine-readable opt-outs** (e.g., `robots.txt`), or similar protocols.\r\n\r\n  - Integrating **objection detection tools** that identify and respond to copyright holders’ signals.\r\n\r\n- **Due diligence for third-party datasets**: Providers must verify that any external datasets used for model training are **compliant with EU copyright rules**.\r\n\r\n- **Transparency and redress mechanisms**: Providers must:\r\n\r\n  - Disclose **crawler specifications** and **objection-detection mechanisms**.\r\n\r\n  - Maintain a **contact point** for rights holders to request model-related information, and submit **complaints** or copyright concerns.\r\n\r\n  - Avoid penalizing content indexed by **search engines** when legitimate objections are raised.\r\n\r\n#### Downstream system obligations\r\n\r\nFor GPAI models integrated into other AI systems, whether by the provider or a third party, the Code requires:\r\n\r\n- Implementation of **measures to 
prevent copyright-infringing outputs**.\r\n\r\n- Inclusion of **prohibitions on infringing uses** within **Acceptable Use Policies**.\r\n\r\nThese obligations apply **regardless of the downstream user's identity**, ensuring responsible use throughout the AI supply chain.\r\n\r\n#### Disclosure of training content summaries\r\n\r\nTo enable **rights holders to identify potential use of their works**, providers must publish **summaries of training content** that are:\r\n\r\n- **Sufficiently detailed** to allow assessment and possible action (e.g., filing objections or requesting delisting).\r\n\r\n- Publicly accessible as part of the provider’s overall transparency commitments.\r\n\r\n### **3. Concerning safety and security**\r\n\r\nThe Safety and Security chapter of the GPAI Code of Practice applies specifically to general-purpose AI models that may pose **systemic risks**, as defined under [Article 51 of the AI Act.](https://artificialintelligenceact.eu/article/51/)\r\n\r\nUnder the AI Act, systemic risk is associated with models that exhibit **high-impact capabilities**—meaning capabilities that are comparable to or exceed those of the most advanced GPAI models and that have a **significant impact on the EU market**. 
The Act presumes that models trained using more than **10^25 floating-point operations (FLOPs)** fall within this category.\r\n\r\nProviders of such models are required to implement a **comprehensive Safety and Security Framework** to comply with their obligations under [Article 55 of the AI Act](https://artificialintelligenceact.eu/article/55/). This includes conducting thorough **risk assessments, performing regular evaluations, enabling post-market monitoring, and reporting incidents in a timely manner.** Providers are also expected to enable **external evaluations**.\r\n\r\n- Providers must conduct **model evaluations, including adversarial testing, to detect and mitigate systemic risks.** These evaluations should assess potential sources of systemic risk and be documented appropriately.\r\n- If a provider identifies a serious incident or vulnerability, it must be **reported** without undue delay to the AI Office and the relevant national authorities, and **documented** along with any **corrective measures** taken.\r\n- Providers are also required to ensure that their systems are protected by **an adequate level of cybersecurity safeguards.**\r\n\r\nSignatories must go further by actively **identifying, quantifying, and managing systemic risk pertaining to their GPAI models**. This involves:\r\n\r\n- Estimating when a model may **exceed applicable risk thresholds**,\r\n- Defining what constitutes an **acceptable level of systemic risk**,\r\n- Maintaining clear **mitigation strategies**,\r\n- And transparently documenting existing **security controls**.\r\n\r\nOnce risks are identified, providers are expected to evaluate **whether they remain within internal risk tolerance limits. 
If not, mitigation measures must be applied until the risk is brought to an acceptable level.**\r\n\r\nThe Code also sets expectations for **incident tracking and reporting**, reinforcing Article 55(1)(c) by promoting a feedback loop that enhances safety, accountability, and regulatory oversight. It stresses that safety obligations are not static and must evolve as technology advances. Consequently, the Code advocates for **flexible, forward-looking risk governance strategies** capable of adapting to emerging threats and model capabilities.\r\n\r\n### More documentation and less built-in responsibility?\r\n\r\nAt present, however, the prevailing approach still treats risk management as a process that runs in parallel to AI development—rather than being fully embedded into the model design lifecycle. While tools such as Model Reports and post-market monitoring help meet compliance standards, they can lead to **overly bureaucratic workflows without necessarily improving technical or ethical outcomes.**\r\n\r\nThe true shift envisioned by the Code is one that integrates **risk awareness directly into the core of AI system development**. Risk analysis should be conducted **before selecting training data**, **before designing model architectures**, and **before deployment decisions are made**.\r\n\r\nEmbedding this mindset throughout the development lifecycle—rather than relying solely on post hoc compliance—will be critical to building **safer, more reliable, and more trustworthy** general-purpose AI systems.\r\n\r\n> Want to learn more about how Dastra can help you comply with the AI Act? 
Click [here](https://www.dastra.eu/en/contacts/demo).","\u003Cp>Following the confirmation that no pause will be introduced in the implementation timeline of the AI Act, the long-awaited \u003Ca href=\"https://digital-strategy.ec.europa.eu/en/policies/contents-code-gpai\" rel=\"nofollow\">\u003Cstrong>Code of Practice on General-Purpose AI (GPAI)\u003C/strong>\u003C/a> has now been published.\u003C/p>\r\n\u003Cp>The \u003Cstrong>European Commission\u003C/strong> released the code, establishing a key reference framework for how stakeholders across the AI value chain, from large-scale model developers to start-ups and SMEs, can begin aligning with the forthcoming \u003Cstrong>GPAI-related obligations\u003C/strong> under the AI Act.\u003C/p>\r\n\u003Cp>Given their broad applicability and technical architecture, \u003Cstrong>GPAI models, trained on vast datasets using large-scale self-supervised learning, underpin a significant proportion of AI systems deployed across the EU.\u003C/strong> Their versatility allows them to be integrated into a wide range of downstream applications, irrespective of how the model is placed on the market.\u003C/p>\r\n\u003Ch2 id=\"the-code-is-published-now-what\">The Code is published, now what?\u003C/h2>\r\n\u003Cp>The Code of Practice remains \u003Cstrong>subject to assessment\u003C/strong> by \u003Cstrong>Member States and the European Commission\u003C/strong>, which may ultimately approve it through an \u003Cstrong>adequacy decision\u003C/strong> (by the \u003Cstrong>AI Office\u003C/strong> and the \u003Cstrong>AI Board\u003C/strong>).\u003C/p>\r\n\u003Cblockquote>\r\n\u003Cp>\u003Cstrong>Update\u003C/strong>: In an opinion\u003Ca href=\"https://digital-strategy.ec.europa.eu/en/library/commission-opinion-assessment-general-purpose-ai-code-practice\" rel=\"nofollow\"> released on August 1st 2025\u003C/a>, the Commission deems that the Code of Practice \u003Cem>\"adequately covers the obligations provided for in Articles 53 and 55 of the 
AI Act and meets the aims according to Article 56 of the AI Act\".\u003C/em> \u003Cstrong>This means the Code is now an adequate voluntary tool that organisations can use to demonstrate their compliance with the AI Act.\u003C/strong>\u003C/p>\r\n\u003C/blockquote>\r\n\u003Cp>\u003Cstrong>Public listing of signatories\u003C/strong> will take place on \u003Cstrong>1 August 2025\u003C/strong>, ahead of the formal \u003Cstrong>entry into application of GPAI-specific obligations under the AI Act on 2 August 2025\u003C/strong>.\u003C/p>\r\n\u003Cblockquote>\r\n\u003Cp>The list is now shared on the \u003Ca href=\"https://digital-strategy.ec.europa.eu/en/policies/contents-code-gpai#ecl-inpage-Signatories-of-the-AI-Pact\" rel=\"nofollow\">Commission's website\u003C/a>. \u003Cstrong>Current signatories\u003C/strong> include Amazon, Anthropic, Google, Mistral AI, and OpenAI.\u003C/p>\r\n\u003C/blockquote>\r\n\u003Cp>In parallel, the Commission is set to publish \u003Cstrong>complementary Guidelines in July\u003C/strong>, aimed at:\u003C/p>\r\n\u003Cul>\r\n\u003Cli>\u003Cp>Clarifying the \u003Cstrong>scope of obligations\u003C/strong> for GPAI providers,\u003C/p>\r\n\u003C/li>\r\n\u003Cli>\u003Cp>Defining what constitutes a \u003Cstrong>General-Purpose AI model\u003C/strong>,\u003C/p>\r\n\u003C/li>\r\n\u003Cli>\u003Cp>Distinguishing GPAI models that pose \u003Cstrong>systemic risks\u003C/strong>,\u003C/p>\r\n\u003C/li>\r\n\u003Cli>\u003Cp>Identifying who qualifies as a \u003Cstrong>GPAI provider\u003C/strong>, particularly in cases involving fine-tuning or modification of existing models.\u003C/p>\r\n\u003C/li>\r\n\u003C/ul>\r\n\u003Cblockquote>\r\n\u003Cp>Update: The Commission released the \u003Ca href=\"https://digital-strategy.ec.europa.eu/en/library/guidelines-scope-obligations-providers-general-purpose-ai-models-under-ai-act\" rel=\"nofollow\">guidelines to help general-purpose AI (GPAI) providers\u003C/a> comply with the AI Act, particularly their obligations taking 
effect on August 2, 2025.\u003C/p>\r\n\u003Cp>Read\u003Ca href=\"https://www.dastra.eu/en/guide/building-a-gpai-you-might-be-the-provider/59448\"> our take on the Guidelines here\u003C/a>.\u003C/p>\r\n\u003C/blockquote>\r\n\u003Ch2 id=\"key-highlights\">\u003Cstrong>Key highlights\u003C/strong>\u003C/h2>\r\n\u003Cp>The Code is a \u003Cstrong>voluntary instrument\u003C/strong> designed to support GPAI model providers with current or planned operations in the EU in demonstrating \u003Cstrong>compliance with Articles 53 and 55 of the AI Act.\u003C/strong>\u003C/p>\r\n\u003Cp>It reflects over a year of collaborative work, incorporating insights from \u003Cstrong>1,000+ stakeholders\u003C/strong>, \u003Cstrong>1,600 written submissions\u003C/strong>, and over \u003Cstrong>40 expert workshops\u003C/strong>. It stands as a landmark example of inclusive, community-led regulatory co-creation in the digital era. To know more on how the code of practice was drafted &amp; its timeline, click \u003Ca href=\"https://digital-strategy.ec.europa.eu/en/policies/ai-code-practice\" rel=\"nofollow\">here\u003C/a>.\u003C/p>\r\n\u003Cp>While participation in the Code is not mandatory, the \u003Cstrong>AI Act itself imposes binding horizontal obligations\u003C/strong> on all GPAI providers under \u003Cstrong>Article 53(1)\u003C/strong> — particularly concerning \u003Cstrong>transparency requirements\u003C/strong> and \u003Cstrong>copyright compliance\u003C/strong>. 
In addition, providers of GPAI models that pose a \u003Cstrong>systemic risk\u003C/strong> must meet \u003Cstrong>stricter safety and security requirements\u003C/strong> under \u003Cstrong>Article 55(1)\u003C/strong>.\u003C/p>\r\n\u003Cp>\u003Cstrong>A practical compliance guide, not a new legal obligation\u003C/strong>\u003C/p>\r\n\u003Cp>First thing to know: the Code of Practice is not synonymous with legal obligations, let alone with obligations additional to those of the AI Act.\u003C/p>\r\n\u003Cp>\u003Cstrong>The Code of Practice does not create new legal obligations\u003C/strong> beyond what is already required under the AI Act. Rather, it serves as a \u003Cstrong>practical implementation guide\u003C/strong>, illustrating how GPAI providers can align with existing obligations on transparency, copyright, and safety and security.\u003C/p>\r\n\u003Cp>Recognizing the \u003Cstrong>rapid pace of AI development\u003C/strong>, the Code will undergo \u003Cstrong>periodic review at least every two years\u003C/strong>. The AI Office is expected to propose \u003Cstrong>a streamlined update mechanism\u003C/strong> to ensure the Code remains aligned with \u003Cstrong>state-of-the-art practices\u003C/strong> and emerging risks.\u003C/p>\r\n\u003Cp>The Code is divided into three chapters: Transparency, Copyright, and Safety &amp; Security. So when we say \"the code of practice\", it is actually the compilation of those three chapters, each tackling a different subject.\u003C/p>\r\n\u003Col>\r\n\u003Cli>\u003Cp>\u003Cstrong>Transparency:\u003C/strong> The Code outlines clear expectations for disclosing how GPAI models are trained, evaluated, and operate. 
It includes a standardized \u003Cstrong>Model Documentation Form\u003C/strong> to guide consistent and comprehensive information sharing.\u003C/p>\r\n\u003C/li>\r\n\u003Cli>\u003Cp>\u003Cstrong>Copyright:\u003C/strong> This section focuses on safeguarding intellectual property rights — specifically how training data is sourced and how providers can align with EU copyright law, ensuring respect for IP holders throughout the model lifecycle.\u003C/p>\r\n\u003C/li>\r\n\u003C/ol>\r\n\u003Cp>\u003Cem>\u003Cstrong>These first two pillars apply to all GPAI model providers.\u003C/strong>\u003C/em>\u003C/p>\r\n\u003Col start=\"3\">\r\n\u003Cli>\u003Cstrong>Safety and Security:\u003C/strong> Targeted at \u003Cem>\u003Cstrong>advanced GPAI models with systemic risks\u003C/strong>\u003C/em>, this section defines \u003Cstrong>state-of-the-art practices\u003C/strong> to mitigate potential harms, reduce misuse, and maintain public trust in AI. It offers technical and organizational measures aligned with emerging standards.\u003C/li>\r\n\u003C/ol>\r\n\u003Cp>Importantly, the Code \u003Cstrong>includes tailored guidance for start-ups and SMEs\u003C/strong>, offering \u003Cstrong>simplified compliance pathways\u003C/strong> and \u003Cstrong>proportional key performance indicators (KPIs)\u003C/strong>. This ensures that smaller players are not disproportionately burdened, in line with \u003Cstrong>Recital 109\u003C/strong> of the AI Act.\u003C/p>\r\n\u003Ch3 id=\"voluntary-but-highly-incentivized\">Voluntary, but highly incentivized\u003C/h3>\r\n\u003Cp>While adoption of the Code is \u003Cstrong>not mandatory\u003C/strong>, it provides a \u003Cstrong>presumptive pathway to demonstrating compliance\u003C/strong> with Articles 53 and 55 of the AI Act. 
Signing the Code can therefore significantly \u003Cstrong>reduce the administrative burden\u003C/strong> on GPAI providers by offering an aligned, standardized approach.\u003C/p>\r\n\u003Cp>Providers who choose \u003Cstrong>not to sign\u003C/strong> must demonstrate \u003Cstrong>alternative means of compliance\u003C/strong> — likely subject to \u003Cstrong>greater evidentiary requirements and regulatory scrutiny\u003C/strong>, as indicated by the Commission. This effectively makes the Code the \u003Cstrong>de facto baseline\u003C/strong> for demonstrating responsible and safe behavior in the EU AI market, and potentially \u003Cstrong>internationally\u003C/strong>.\u003C/p>\r\n\u003Ch3 id=\"enforcement-oversight-will-be-key\">Enforcement &amp; oversight will be key\u003C/h3>\r\n\u003Cp>As with any self-regulatory instrument, \u003Cstrong>the Code’s strength will lie in its enforcement\u003C/strong>. The AI Office will need sufficient \u003Cstrong>resources and expertise\u003C/strong> to evaluate the varied compliance mechanisms adopted by GPAI providers — particularly in an outcome-based regulatory model.\u003C/p>\r\n\u003Cp>Now that the text has been finalized, the AI Office must focus on \u003Cstrong>operationalizing the Code\u003C/strong>, ensuring that commitments are translated into \u003Cstrong>concrete, enforceable practices\u003C/strong>.\u003C/p>\r\n\u003Ch2 id=\"breakdown-of-the-three-chapters\">\u003Cstrong>Breakdown of the three chapters\u003C/strong>\u003C/h2>\r\n\u003Cp>The \u003Cstrong>principle of proportionality\u003C/strong> applies throughout the Code. Compliance obligations are tailored to reflect the \u003Cstrong>size, capacity, and market presence\u003C/strong> of the GPAI provider. This ensures that \u003Cstrong>startups and SMEs\u003C/strong> face \u003Cstrong>flexible, proportionate expectations\u003C/strong> rather than disproportionate burdens.\u003C/p>\r\n\u003Ch3 id=\"concerning-transparency\">\u003Cstrong>1. 
Concerning Transparency\u003C/strong>\u003C/h3>\r\n\u003Cp>With respect to transparency, the GPAI Code of Practice operationalizes providers' obligations under Article 53(1)(a) and (b) of the AI Act, along with Annexes XI and XII, by introducing a \u003Cstrong>standardized \u003Ca href=\"https://ec.europa.eu/newsroom/dae/redirection/document/118118\" rel=\"nofollow\">Model Documentation Form\u003C/a>\u003C/strong>. This form enables GPAI providers to \u003Cstrong>compile and maintain the required technical and compliance information\u003C/strong> in a consistent and structured manner.\u003C/p>\r\n\u003Cul>\r\n\u003Cli>At a minimum, \u003Cstrong>signatories must draw up comprehensive technical documentation that includes details of the model’s training and testing processes, the evaluation results, and relevant methodological information\u003C/strong>.\u003C/li>\r\n\u003Cli>Additionally, they must \u003Cstrong>prepare documentation intended for downstream providers who may integrate the GPAI model into their own AI systems\u003C/strong>, thereby equipping them to understand the model's capabilities and limitations and to fulfill their own compliance duties.\u003C/li>\r\n\u003C/ul>\r\n\u003Cp>While major GPAI providers may already publish \u003Cstrong>model cards\u003C/strong>, their format and content often vary by sector or use case. 
The \u003Cstrong>Model Documentation Form\u003C/strong> seeks to:\u003C/p>\r\n\u003Cul>\r\n\u003Cli>\u003Cstrong>Standardize\u003C/strong> these disclosures by consolidating them in a single, uniform format;\u003C/li>\r\n\u003Cli>\u003Cstrong>Clarify\u003C/strong> the content required;\u003C/li>\r\n\u003Cli>\u003Cstrong>Identify\u003C/strong> the intended recipients, such as the AI Office, national competent authorities, and downstream developers.\u003C/li>\r\n\u003C/ul>\r\n\u003Cp>Required disclosures include:\u003C/p>\r\n\u003Cul>\r\n\u003Cli>Contact information to disclose publicly via website or other appropriate means;\u003C/li>\r\n\u003Cli>Description of the training, testing, and validation methodologies;\u003C/li>\r\n\u003Cli>The types of data used and how they were collected; how the data was curated and which methodologies were used to ensure data quality and integrity;\u003C/li>\r\n\u003Cli>Measures implemented for bias detection;\u003C/li>\r\n\u003Cli>And, where applicable, the rights obtained for third-party data.\u003C/li>\r\n\u003C/ul>\r\n\u003Cp>Importantly, \u003Cstrong>providers must keep this documentation up to date and retain previous versions for at least ten years after the model has been placed on the market.\u003C/strong>\u003C/p>\r\n\u003Cp>The Code further clarifies that, in the case of \u003Cstrong>fine-tuning or other modifications to an existing GPAI model\u003C/strong>, transparency obligations should apply \u003Cstrong>proportionally\u003C/strong>, focusing solely on the changes introduced by the provider.\u003C/p>\r\n\u003Cp>The Code also establishes the \u003Cstrong>AI Office as the central point of contact for national competent authorities\u003C/strong>, meaning that all official information requests must be submitted through the AI Office. Such requests must clearly indicate their legal basis and purpose and be strictly limited to what is necessary for the authority’s tasks.\u003C/p>\r\n\u003Ch3 id=\"concerning-copyright\">\u003Cstrong>2. 
Concerning Copyright\u003C/strong>\u003C/h3>\r\n\u003Ch4 id=\"internal-copyright-policy\">Internal Copyright Policy:\u003C/h4>\r\n\u003Cp>All GPAI providers must adopt and implement an \u003Cstrong>internal copyright compliance policy\u003C/strong> aligned with \u003Cstrong>EU copyright law\u003C/strong>. This policy should:\u003C/p>\r\n\u003Cul>\r\n\u003Cli>\u003Cp>Define \u003Cstrong>clear internal procedures\u003C/strong> governing the handling of copyrighted content.\u003C/p>\r\n\u003C/li>\r\n\u003Cli>\u003Cp>Designate responsible personnel for \u003Cstrong>oversight and implementation\u003C/strong>.\u003C/p>\r\n\u003C/li>\r\n\u003Cli>\u003Cp>Be accompanied by a \u003Cstrong>public summary\u003C/strong> to enhance \u003Cstrong>transparency\u003C/strong> and stakeholder trust.\u003C/p>\r\n\u003C/li>\r\n\u003C/ul>\r\n\u003Ch4 id=\"lawful-use-key-restrictions-and-safeguards\">Lawful use: key restrictions and safeguards\u003C/h4>\r\n\u003Cp>Providers may \u003Cstrong>only reproduce and extract lawfully accessible, copyright-protected content\u003C/strong>, and must \u003Cstrong>not circumvent\u003C/strong> any effective technological protection measures. Key obligations include:\u003C/p>\r\n\u003Cul>\r\n\u003Cli>\u003Cp>\u003Cstrong>Ban on the use of pirated content\u003C/strong>: Providers are prohibited from sourcing content from known \u003Cstrong>copyright-infringing (\"piracy\") websites\u003C/strong>. A \u003Cstrong>dynamic list\u003C/strong> of such websites will be maintained and published by EU authorities.\u003C/p>\r\n\u003C/li>\r\n\u003Cli>\u003Cp>\u003Cstrong>Web crawling safeguards\u003C/strong>: Crawling tools must be designed to access \u003Cstrong>only lawfully accessible content\u003C/strong>. 
This includes:\u003C/p>\r\n\u003Cul>\r\n\u003Cli>\u003Cp>Respecting \u003Cstrong>paywalls\u003C/strong>, access restrictions, and technical protection measures.\u003C/p>\r\n\u003C/li>\r\n\u003Cli>\u003Cp>Adhering to \u003Cstrong>machine-readable opt-outs\u003C/strong> (e.g., \u003Ccode>robots.txt\u003C/code>) or similar protocols.\u003C/p>\r\n\u003C/li>\r\n\u003Cli>\u003Cp>Integrating \u003Cstrong>objection detection tools\u003C/strong> that identify and respond to copyright holders’ signals.\u003C/p>\r\n\u003C/li>\r\n\u003C/ul>\r\n\u003C/li>\r\n\u003Cli>\u003Cp>\u003Cstrong>Due diligence for third-party datasets\u003C/strong>: Providers must verify that any external datasets used for model training are \u003Cstrong>compliant with EU copyright rules\u003C/strong>.\u003C/p>\r\n\u003C/li>\r\n\u003Cli>\u003Cp>\u003Cstrong>Transparency and redress mechanisms\u003C/strong>: Providers must:\u003C/p>\r\n\u003Cul>\r\n\u003Cli>\u003Cp>Disclose \u003Cstrong>crawler specifications\u003C/strong> and \u003Cstrong>objection-detection mechanisms\u003C/strong>.\u003C/p>\r\n\u003C/li>\r\n\u003Cli>\u003Cp>Maintain a \u003Cstrong>contact point\u003C/strong> for rights holders to request model-related information and submit \u003Cstrong>complaints\u003C/strong> or copyright concerns.\u003C/p>\r\n\u003C/li>\r\n\u003Cli>\u003Cp>Avoid penalizing content indexed by \u003Cstrong>search engines\u003C/strong> when legitimate objections are raised.\u003C/p>\r\n\u003C/li>\r\n\u003C/ul>\r\n\u003C/li>\r\n\u003C/ul>\r\n\u003Ch4 id=\"downstream-system-obligations\">Downstream system obligations\u003C/h4>\r\n\u003Cp>For GPAI models integrated into other AI systems, whether by the provider or a third party, the Code requires:\u003C/p>\r\n\u003Cul>\r\n\u003Cli>\u003Cp>Implementation of \u003Cstrong>measures to prevent copyright-infringing outputs\u003C/strong>.\u003C/p>\r\n\u003C/li>\r\n\u003Cli>\u003Cp>Inclusion of \u003Cstrong>prohibitions on infringing uses\u003C/strong> within 
\u003Cstrong>Acceptable Use Policies\u003C/strong>.\u003C/p>\r\n\u003C/li>\r\n\u003C/ul>\r\n\u003Cp>These obligations apply \u003Cstrong>regardless of the downstream user's identity\u003C/strong>, ensuring responsible use throughout the AI supply chain.\u003C/p>\r\n\u003Ch4 id=\"disclosure-of-training-content-summaries\">Disclosure of training content summaries\u003C/h4>\r\n\u003Cp>To enable \u003Cstrong>rights holders to identify potential use of their works\u003C/strong>, providers must publish \u003Cstrong>summaries of training content\u003C/strong> that are:\u003C/p>\r\n\u003Cul>\r\n\u003Cli>\u003Cp>\u003Cstrong>Sufficiently detailed\u003C/strong> to allow assessment and possible action (e.g., filing objections or requesting delisting).\u003C/p>\r\n\u003C/li>\r\n\u003Cli>\u003Cp>Publicly accessible as part of the provider’s overall transparency commitments.\u003C/p>\r\n\u003C/li>\r\n\u003C/ul>\r\n\u003Ch3 id=\"concerning-safety-and-security\">\u003Cstrong>3. Concerning safety and security\u003C/strong>\u003C/h3>\r\n\u003Cp>The Safety and Security chapter of the GPAI Code of Practice applies specifically to general-purpose AI models that may pose \u003Cstrong>systemic risks\u003C/strong>, as defined under \u003Ca href=\"https://artificialintelligenceact.eu/article/51/\" rel=\"nofollow\">Article 51 of the AI Act.\u003C/a>\u003C/p>\r\n\u003Cp>Under the AI Act, systemic risk is associated with models that exhibit \u003Cstrong>high-impact capabilities\u003C/strong>—meaning capabilities that are comparable to or exceed those of the most advanced GPAI models and that have a \u003Cstrong>significant impact on the EU market\u003C/strong>. 
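For a sense of scale, training compute is commonly estimated with the rule of thumb FLOPs ≈ 6 × parameters × training tokens. The sketch below checks such an estimate against the Act's compute-based presumption; the model size and token count are purely hypothetical figures, not real model data:

```python
# Rough training-compute estimate: FLOPs ~ 6 * parameters * training tokens.
# All model figures below are illustrative assumptions.
SYSTEMIC_RISK_THRESHOLD = 1e25  # AI Act presumption for systemic risk


def training_flops(n_params: float, n_tokens: float) -> float:
    """Common rule-of-thumb estimate of total training compute."""
    return 6 * n_params * n_tokens


# A hypothetical 100B-parameter model trained on 20T tokens:
flops = training_flops(100e9, 20e12)
print(f"{flops:.2e}")  # prints 1.20e+25
print(flops > SYSTEMIC_RISK_THRESHOLD)  # prints True
```

Under these (assumed) figures the model would fall within the presumption; a provider near the threshold would need to track cumulative training compute carefully.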
The Act presumes that models trained using more than \u003Cstrong>10^25 floating-point operations (FLOPs)\u003C/strong> fall within this category.\u003C/p>\r\n\u003Cp>Providers of such models are required to implement a \u003Cstrong>comprehensive Safety and Security Framework to comply with their obligations as provided under \u003Ca href=\"https://artificialintelligenceact.eu/article/55/\" rel=\"nofollow\">Article 55 of the AI Act\u003C/a>.\u003C/strong> This includes conducting thorough \u003Cstrong>risk assessments, performing regular evaluations, enabling post-market monitoring, and reporting incidents in a timely manner.\u003C/strong> Providers are also expected to enable \u003Cstrong>external evaluations\u003C/strong>.\u003C/p>\r\n\u003Cul>\r\n\u003Cli>Providers must conduct \u003Cstrong>model evaluations\u003C/strong>, including \u003Cstrong>adversarial testing\u003C/strong>, to detect and mitigate systemic risks. These evaluations should assess potential sources of systemic risk and be documented appropriately.\u003C/li>\r\n\u003Cli>If a provider identifies a serious incident or vulnerability, it must be \u003Cstrong>reported\u003C/strong> without undue delay to the AI Office and the relevant national authorities, and \u003Cstrong>documented\u003C/strong> along with any \u003Cstrong>corrective measures\u003C/strong> taken.\u003C/li>\r\n\u003Cli>Providers are also required to ensure that their systems are protected by \u003Cstrong>an adequate level of cybersecurity safeguards.\u003C/strong>\u003C/li>\r\n\u003C/ul>\r\n\u003Cp>Signatories must go further by actively \u003Cstrong>identifying, quantifying, and managing the systemic risks their GPAI models may pose\u003C/strong>. 
This involves:\u003C/p>\r\n\u003Cul>\r\n\u003Cli>Estimating when a model may \u003Cstrong>exceed applicable risk thresholds\u003C/strong>;\u003C/li>\r\n\u003Cli>Defining what constitutes an \u003Cstrong>acceptable level of systemic risk\u003C/strong>;\u003C/li>\r\n\u003Cli>Maintaining clear \u003Cstrong>mitigation strategies\u003C/strong>;\u003C/li>\r\n\u003Cli>And transparently documenting existing \u003Cstrong>security controls\u003C/strong>.\u003C/li>\r\n\u003C/ul>\r\n\u003Cp>Once risks are identified, providers are expected to evaluate \u003Cstrong>whether they remain within internal risk tolerance limits\u003C/strong>. If not, \u003Cstrong>mitigation measures must be applied until the risk is brought to an acceptable level\u003C/strong>.\u003C/p>\r\n\u003Cp>The Code also sets expectations for \u003Cstrong>incident tracking and reporting\u003C/strong>, reinforcing Article 55(1)(c) by promoting a feedback loop that enhances safety, accountability, and regulatory oversight. It stresses that safety obligations are not static and must evolve as technology advances.\u003C/p>\r\n\u003Cp>Consequently, the Code advocates for \u003Cstrong>flexible, forward-looking risk governance strategies\u003C/strong> capable of adapting to emerging threats and model capabilities.\u003C/p>\r\n\u003Ch3 id=\"more-documentation-and-less-built-in-responsibility\">More documentation and less built-in responsibility?\u003C/h3>\r\n\u003Cp>At present, however, the prevailing approach still treats risk management as a process that runs in parallel to AI development—rather than being fully embedded into the model design lifecycle. 
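The evaluate-then-mitigate cycle the Code describes (assess risk, apply mitigations, re-evaluate until within tolerance) can be sketched as a simple control loop. All function names, the risk metric, and the thresholds here are hypothetical illustrations, not anything prescribed by the Code:

```python
from typing import Callable


def manage_systemic_risk(estimate_risk: Callable[[], float],
                         apply_mitigation: Callable[[], None],
                         tolerance: float,
                         max_rounds: int = 10) -> float:
    """Re-evaluate and mitigate until estimated risk is within tolerance.

    Raises if risk cannot be brought to an acceptable level, mirroring the
    expectation that such a model should not be placed (or remain) on the
    market as-is.
    """
    risk = estimate_risk()
    for _ in range(max_rounds):
        if risk <= tolerance:
            return risk
        apply_mitigation()      # e.g. data filtering, fine-tuning, access limits
        risk = estimate_risk()  # post-mitigation re-evaluation
    raise RuntimeError("systemic risk still above internal tolerance")


# Toy usage: each mitigation round halves a hypothetical risk score.
state = {"risk": 0.8}
result = manage_systemic_risk(
    estimate_risk=lambda: state["risk"],
    apply_mitigation=lambda: state.update(risk=state["risk"] / 2),
    tolerance=0.1,
)
print(result)  # 0.1
```

The point of the loop structure is that re-evaluation follows every mitigation, so a provider documents not just the measures taken but the evidence that they worked.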
While tools such as Model Reports and post-market monitoring help meet compliance standards, they can lead to \u003Cstrong>overly bureaucratic workflows without necessarily improving technical or ethical outcomes.\u003C/strong>\u003C/p>\r\n\u003Cp>The true shift envisioned by the Code is one that integrates \u003Cstrong>risk awareness directly into the core of AI system development\u003C/strong>. Risk analysis should be conducted \u003Cstrong>before selecting training data\u003C/strong>, \u003Cstrong>before designing model architectures\u003C/strong>, and \u003Cstrong>before deployment decisions are made\u003C/strong>.\u003C/p>\r\n\u003Cp>Embedding this mindset throughout the development lifecycle—rather than relying solely on post hoc compliance—will be critical to building \u003Cstrong>safer, more reliable, and more trustworthy\u003C/strong> general-purpose AI systems.\u003C/p>\r\n\u003Cblockquote>\r\n\u003Cp>Want to learn more about how Dastra can help you comply with the AI Act? Click \u003Ca href=\"https://www.dastra.eu/en/contacts/demo\">here\u003C/a>.\u003C/p>\r\n\u003C/blockquote>\r\n","General-Purpose AI Code of Practice: what you need to know ","The long-awaited Code of practice on General-Purpose AI (GPAI) is here. Click here for the key insights! ",2328,13,0,null,"en","general-purpose-ai-code-of-practice-what-you-need-to-know","A few days after finding out there will be no pause in the AI Act, the long-awaited Code of practice on General-Purpose AI (GPAI) is here. Click here for the key insights! 
","Published",{"id":18,"displayName":19,"avatarUrl":20,"bio":12,"blogUrl":12,"color":12,"userId":18,"creationDate":21},20352,"Leïla Sayssa","https://static.dastra.eu/tenant-3/avatar/20352/TDYeY3C8Rz1lLE/dpo-avatar-h01-150.png","2025-03-03T11:08:22","2025-07-18T08:58:00","2025-07-17T08:58:07.2214346","2025-08-19T15:07:05.9913906",{"id":26,"name":27,"description":28,"url":29,"color":30,"parentId":12,"count":12,"imageUrl":12,"parent":12,"order":11,"translations":31},2,"Blog","A list of curated articles provided by the community","blog","#28449a",[32,35,38],{"lang":33,"name":27,"description":34},"fr","Une liste d'articles rédigés par la communauté",{"lang":36,"name":27,"description":37},"es","Una lista de artículos escritos por la comunidad",{"lang":39,"name":27,"description":40},"de","Eine Liste von Artikeln, die von der Community verfasst wurden",[42,47],{"id":26,"name":27,"description":28,"url":29,"color":30,"parentId":12,"count":12,"imageUrl":12,"parent":12,"order":11,"translations":43},[44,45,46],{"lang":33,"name":27,"description":34},{"lang":36,"name":27,"description":37},{"lang":39,"name":27,"description":40},{"id":48,"name":49,"description":50,"url":51,"color":52,"parentId":26,"count":12,"imageUrl":12,"parent":53,"order":58,"translations":59},69,"Expertise","Gain insights from our experts on GDPR compliance, data protection, and privacy challenges. In-depth articles, professional analysis, and real-world best practices.","indepth","#000000",{"id":26,"name":27,"description":28,"url":29,"color":30,"parentId":12,"count":12,"imageUrl":12,"parent":12,"order":11,"translations":54},[55,56,57],{"lang":33,"name":27,"description":34},{"lang":36,"name":27,"description":37},{"lang":39,"name":27,"description":40},5,[60,62,65],{"lang":33,"name":49,"description":61},"Bénéficiez des conseils de nos experts sur la conformité RGPD, la protection des données et les enjeux privacy. 
Articles de fond, analyses et retours d’expérience métier.",{"lang":39,"name":63,"description":64},"Fachwissen","Entdecken Sie die Artikel unserer DSGVO-Experten",{"lang":36,"name":66,"description":67},"Experiencia","Descubre los artículos de nuestros expertos en Privacy",[],"https://static.dastra.eu/content/6701fcb1-6422-4b98-ae1f-3599859d8b2b/visuel-article-24-original.jpg",[71,72,73,74,75,76,77],"https://static.dastra.eu/content/6701fcb1-6422-4b98-ae1f-3599859d8b2b/visuel-article-24-1000.webp","https://static.dastra.eu/content/6701fcb1-6422-4b98-ae1f-3599859d8b2b/visuel-article-24.webp","https://static.dastra.eu/content/6701fcb1-6422-4b98-ae1f-3599859d8b2b/visuel-article-24-1500.webp","https://static.dastra.eu/content/6701fcb1-6422-4b98-ae1f-3599859d8b2b/visuel-article-24-800.webp","https://static.dastra.eu/content/6701fcb1-6422-4b98-ae1f-3599859d8b2b/visuel-article-24-600.webp","https://static.dastra.eu/content/6701fcb1-6422-4b98-ae1f-3599859d8b2b/visuel-article-24-300.webp","https://static.dastra.eu/content/6701fcb1-6422-4b98-ae1f-3599859d8b2b/visuel-article-24-100.webp",59438]