[{"data":1,"prerenderedAt":-1},["ShallowReactive",2],{"$fskgWNCjFlKkF68BGkrYjEIXp3WiGZ6UiLvS-fq1PV44":3},{"tableOfContents":4,"markDownContent":5,"htmlContent":6,"metaTitle":7,"metaDescription":8,"wordCount":9,"readTime":10,"title":11,"nbDownloads":12,"excerpt":13,"lang":14,"url":15,"intro":16,"featured":4,"state":17,"author":18,"authorId":19,"datePublication":23,"dateCreation":24,"dateUpdate":25,"mainCategory":26,"categories":42,"metaDatas":69,"imageUrl":70,"imageThumbUrls":71,"id":79},false,"The European Commission released draft [guidelines to help general-purpose AI (GPAI) ](https://digital-strategy.ec.europa.eu/en/library/guidelines-scope-obligations-providers-general-purpose-ai-models-under-ai-act)providers comply with the AI Act, particularly their obligations taking effect on August 2, 2025.Even if not binding, they reflect the Commission's interpretation of the AI Act, and [build on the GPAI Code of Practice, recently released](https://www.dastra.eu/en/article/general-purpose-ai-code-of-practice-what-you-need-to-know/59438).\r\n\r\nIs your model a General-purpose AI model? And are you the Provider? 
Find out here.\r\n\r\n## When is a model considered a General-purpose AI model?\r\n\r\nA General-purpose AI (GPAI) model is defined by [Article 3(63) of the AI Act.](https://artificialintelligenceact.eu/article/3/)\r\n\r\nHowever, this definition does not provide a specific set of criteria or conditions that providers can check, which is understandable given the inherent nature of these models, which are **trained on vast datasets using large-scale self-supervised learning and can perform a wide variety of tasks.**\r\n\r\nThe Commission gives a concrete approach: it is the **amount of computational resources** used to train the model, measured in FLOP, **and the modalities** of the model that determine whether a model is a GPAI or not.\r\n\r\n> A model is likely to be considered a GPAI if:\r\n>\r\n> - **The training compute of the model is greater than 10^23 FLOP.** As explained in the Guidelines, the \"*amount of compute used to train a model is typically proportional to the number obtained by multiplying the number of its parameters with the number of its training examples*\".\r\n>   - For more details on the training compute of GPAI models, refer to the annex of the Guidelines.\r\n> - **It is capable of performing a wide range of distinct tasks**, such as generating language (text or audio), text-to-image, or text-to-video. **The model's training on a broad range of natural language data, and its ability to use language to communicate, store knowledge, and reason, are indicators of its significant generality.**\r\n>\r\n> If the first threshold is met but the model cannot perform a wide range of distinct tasks, then it is not a GPAI. 
**For example, if it uses 10^24 FLOP but can only transcribe speech to text, it is not a GPAI because it performs only a narrow set of tasks.**\r\n>\r\n> If the model **is general enough in its capabilities without meeting the threshold, it is still a GPAI.**\r\n\r\nWhile this single threshold seems a convenient criterion to apply, **it is not set in stone,** as the European Commission indicates that it continues to investigate other criteria.\r\n\r\n## If GPAI, when is it with systemic risk?\r\n\r\nGPAI models that present [**systemic risks (Article 3(65))**](https://www.dastra.eu/en/guide/gpai-with-systemic-risk/59461), including potential harm to fundamental rights or loss of model control, are subject to more stringent obligations under Articles 52 and 55 of the AI Act.\r\n\r\n> A model can be classified as such under one of two conditions:\r\n>\r\n> - It has **high-impact capabilities**, namely capabilities that *\"**match or exceed those recorded in the most advanced models**\"* (Article 3(64) AI Act). 
Those high-impact capabilities **should have a significant impact on the Union market due to their reach.** This is **presumed** when the model’s **training compute exceeds 10^25 FLOP** (Article 51(2) AI Act)**, which can be estimated even before the pre-training run**.\r\n>\r\n>   - This is not set in stone, as the Commission can adjust the threshold to account for technological advancements.\r\n>\r\n> - It is designated as such **ex officio** by the Commission, or based on alerts about its high-impact capabilities from the scientific panel.\r\n>\r\n>   - The provider can contest the classification by submitting technical justification, such as model architecture, parameter count, training techniques, and dataset details.\r\n>   - The provider is subject to obligations only if the Commission decides to reject the arguments and confirms that the model does indeed present systemic risk.\r\n\r\nThis triggers enhanced compliance duties such as:\r\n\r\n- **Mandatory notification** to the EU AI Office: providers must notify the Commission **within two weeks** once their model meets the criteria or it becomes known that it will. 
Failure to do so can result in penalties of up to **€15 million or 3% of global turnover**, whichever is higher (Article 101 AI Act).\r\n\r\n- **Continuous risk monitoring and mitigation by \"taking appropriate measures along the entire model's lifecycle\" (Recital 114 AI Act).** The Commission considers the lifecycle to start at the pre-training run.\r\n\r\n## Ways you could be considered a GPAI provider\r\n\r\nUnder the AI Act, a [**provider**](https://www.dastra.eu/en/guide/provider-ai/58858) is a person or body that develops a GPAI model or has it developed, and **places it on the market** under its own name or trademark, irrespective of payment.\r\n\r\nThe definition is intentionally broad and reflects the various ways a GPAI model can enter the Union market.\r\n\r\n- **“Placing on the market”** refers to the first time a GPAI model is made available in the EU, through any means, be it **an API, a downloadable library, physical media, a cloud computing service, or a copy deployed on a customer's own infrastructure.** It includes both commercial and non-commercial distribution.\r\n- Importantly, **an entity based outside the EU can still qualify as a provider** if it introduces a GPAI model into the EU market, either directly or through intermediaries.\r\n- The **rules concerning GPAI models**, with or without systemic risk, **apply even when the models are integrated into or form part of an [AI system](https://www.dastra.eu/en/guide/ai-system/59056)** (Recital 97 AI Act), **in addition to those for AI systems.**\r\n\r\nHere are the ways you could be considered a GPAI provider:\r\n\r\n\u003Ctable style=\"min-width: 75px\">\r\n\u003Ccolgroup>\u003Ccol style=\"min-width: 25px\">\u003Ccol style=\"min-width: 25px\">\u003Ccol style=\"min-width: 25px\">\u003C/colgroup>\u003Ctbody>\u003Ctr>\u003Ctd colspan=\"1\" rowspan=\"1\">\u003Cp>By building or asking someone to build a model for you\u003C/p>\u003C/td>\u003Ctd colspan=\"1\" rowspan=\"1\">\u003Cp>If your company develops a model, 
or commissions another entity to develop it on your behalf, you are the provider.\u003C/p>\u003C/td>\u003Ctd colspan=\"1\" rowspan=\"1\">\u003Cp>This applies whether development is internal or outsourced.\u003C/p>\u003C/td>\u003C/tr>\u003Ctr>\u003Ctd colspan=\"1\" rowspan=\"1\">\u003Cp>If you make significant modifications to an existing model\u003C/p>\u003C/td>\u003Ctd colspan=\"1\" rowspan=\"1\">\u003Cp>If the compute used for your fine-tuning or modification exceeds \u003Cstrong>one-third of the compute\u003C/strong> used to train the original model, you are likely to become the new provider.\u003C/p>\u003C/td>\u003Ctd colspan=\"1\" rowspan=\"1\">\u003Cp>In contrast, \u003Cstrong>small-scale adaptations\u003C/strong>, such as domain-specific tuning or retrieval-augmented generation (RAG), likely do \u003Cstrong>not\u003C/strong> make you a provider; they would still be classified as downstream use.\u003C/p>\u003C/td>\u003C/tr>\u003Ctr>\u003Ctd colspan=\"1\" rowspan=\"1\">\u003Cp>If you release a model under a free &amp; open-source license under certain conditions\u003C/p>\u003C/td>\u003Ctd colspan=\"1\" rowspan=\"1\">\u003Cp>Open-source developers are \u003Cstrong>not automatically exempt\u003C/strong> from provider obligations. 
If you release a GPAI model under a free and open-source license, you may still be required to comply with all obligations pertaining to GPAI providers, \u003Cstrong>unless\u003C/strong> the following conditions are met:\u003C/p>\u003Col>\u003Cli>\u003Cp>The \u003Cstrong>model weights and relevant information\u003C/strong> (such as architecture and usage) are \u003Cstrong>publicly available\u003C/strong>,\u003C/p>\u003C/li>\u003Cli>\u003Cp>The model is \u003Cstrong>freely accessible and users are able to freely access, use, modify, and redistribute the model\u003C/strong> with \u003Cstrong>no monetary compensation\u003C/strong> required,\u003C/p>\u003C/li>\u003Cli>\u003Cp>The model \u003Cstrong>does not pose systemic risk\u003C/strong>.\u003C/p>\u003C/li>\u003C/ol>\u003Cp>You may be exempt from certain technical requirements, such as providing detailed documentation on training and testing to both the AI Office and to downstream providers integrating the GPAI into their own systems.\u003C/p>\u003C/td>\u003Ctd colspan=\"1\" rowspan=\"1\">\u003Cp>Nevertheless, \u003Cstrong>all providers\u003C/strong>, including open-source developers, must adopt a \u003Cstrong>copyright policy\u003C/strong> and publish a \u003Cstrong>summary of their training data sources\u003C/strong> to ensure transparency and compliance.\u003C/p>\u003C/td>\u003C/tr>\u003C/tbody>\r\n\u003C/table>\r\n\r\n## **Important enforcement dates**\r\n\r\n- **August 2, 2025**: GPAI-related obligations under the AI Act take effect.\r\n\r\n- **August 2, 2026**: Enforcement powers, including fines, begin.\r\n\r\n- **August 2, 2027**: End of the two-year transitional period granted to providers of GPAI models already on the market prior to the applicability date.\r\n\r\nAlthough penalties cannot be enforced until 2026, the Commission expects **early cooperation and voluntary compliance** from providers starting in 2025.\r\n\r\nThe AI Office is committed to helping providers take the necessary steps toward compliance and 
encourages them to reach out proactively to ensure they are on the right path.\r\n\r\n## **Transitional provisions**\r\n\r\nIf **you’ve been developing a GPAI model prior to the Act’s entry into force** and compliance with the AI Act is challenging, the AI Office will provide the necessary support to help you comply by August 2027. Notably, retraining or unlearning is not required **if it imposes a disproportionate burden,** provided this is transparently disclosed and justified in the copyright policy and training data summary.\r\n\r\nIf you are releasing your **first-ever GPAI model**, the Commission may take your ***challenging position*** into account and grant certain procedural flexibilities.\r\n\r\n## **Code of practice & guidelines: your strategic advantage**\r\n\r\n- Adhering to the **general-purpose AI code of practice** carries a major benefit: under Article 53(4), it creates a **presumption of compliance** with the AI Act, streamlining your path to legal conformity.\r\n\r\n- The draft **guidelines** help clarify where you stand in terms of obligations. 
While the Code gives you a framework, the guidelines help you understand how to apply it to your specific situation.\r\n\r\nGet started with the Code of practice [right here](https://www.dastra.eu/en/article/general-purpose-ai-code-of-practice-what-you-need-to-know/59438).","\u003Cp>The European Commission released draft \u003Ca href=\"https://digital-strategy.ec.europa.eu/en/library/guidelines-scope-obligations-providers-general-purpose-ai-models-under-ai-act\" rel=\"nofollow\">guidelines to help general-purpose AI (GPAI)\u003C/a> providers comply with the AI Act, particularly their obligations taking effect on August 2, 2025. Although not binding, they reflect the Commission's interpretation of the AI Act and \u003Ca href=\"https://www.dastra.eu/en/article/general-purpose-ai-code-of-practice-what-you-need-to-know/59438\">build on the recently released GPAI Code of Practice\u003C/a>.\u003C/p>\n\u003Cp>Is your model a General-purpose AI model? And are you the Provider? Find out here.\u003C/p>\n\u003Ch2 id=\"when-is-a-model-considered-a-general-purpose-ai-model\">When is a model considered a General-purpose AI model?\u003C/h2>\n\u003Cp>A General-purpose AI (GPAI) model is defined by \u003Ca href=\"https://artificialintelligenceact.eu/article/3/\" rel=\"nofollow\">Article 3(63) of the AI Act.\u003C/a>\u003C/p>\n\u003Cp>However, this definition does not provide a specific set of criteria or conditions that providers can check, which is understandable given the inherent nature of these models, which are \u003Cstrong>trained on vast datasets using large-scale self-supervised learning and can perform a wide variety of tasks.\u003C/strong>\u003C/p>\n\u003Cp>The Commission gives a concrete approach: it is the \u003Cstrong>amount of computational resources\u003C/strong> used to train the model, measured in FLOP, \u003Cstrong>and the modalities\u003C/strong> of the model that determine whether a model is a GPAI or not.\u003C/p>\n\u003Cblockquote>\n\u003Cp>A model is likely 
to be considered a GPAI if:\u003C/p>\n\u003Cul>\n\u003Cli>\u003Cstrong>The training compute of the model is greater than 10^23 FLOP.\u003C/strong> As explained in the Guidelines, the \"\u003Cem>amount of compute used to train a model is typically proportional to the number obtained by multiplying the number of its parameters with the number of its training examples\u003C/em>\".\n\u003Cul>\n\u003Cli>For more details on the training compute of GPAI models, refer to the annex of the Guidelines.\u003C/li>\n\u003C/ul>\n\u003C/li>\n\u003Cli>\u003Cstrong>It is capable of performing a wide range of distinct tasks\u003C/strong>, such as generating language (text or audio), text-to-image, or text-to-video. \u003Cstrong>The model's training on a broad range of natural language data, and its ability to use language to communicate, store knowledge, and reason, are indicators of its significant generality.\u003C/strong>\u003C/li>\n\u003C/ul>\n\u003Cp>If the first threshold is met but the model cannot perform a wide range of distinct tasks, then it is not a GPAI. 
\u003Cstrong>For example, if it uses 10^24 FLOP but can only transcribe speech to text, it is not a GPAI because it performs only a narrow set of tasks.\u003C/strong>\u003C/p>\n\u003Cp>If the model \u003Cstrong>is general enough in its capabilities without meeting the threshold, it is still a GPAI.\u003C/strong>\u003C/p>\n\u003C/blockquote>\n\u003Cp>While this single threshold seems a convenient criterion to apply, \u003Cstrong>it is not set in stone,\u003C/strong> as the European Commission indicates that it continues to investigate other criteria.\u003C/p>\n\u003Ch2 id=\"if-gpai-when-is-it-with-systemic-risk\">If GPAI, when is it with systemic risk?\u003C/h2>\n\u003Cp>GPAI models that present \u003Ca href=\"https://www.dastra.eu/en/guide/gpai-with-systemic-risk/59461\">\u003Cstrong>systemic risks (Article 3(65))\u003C/strong>\u003C/a>, including potential harm to fundamental rights or loss of model control, are subject to more stringent obligations under Articles 52 and 55 of the AI Act.\u003C/p>\n\u003Cblockquote>\n\u003Cp>A model can be classified as such under one of two conditions:\u003C/p>\n\u003Cul>\n\u003Cli>\u003Cp>It has \u003Cstrong>high-impact capabilities\u003C/strong>, namely capabilities that \u003Cem>\"\u003Cstrong>match or exceed those recorded in the most advanced models\u003C/strong>\"\u003C/em> (Article 3(64) AI Act). 
Those high-impact capabilities \u003Cstrong>should have a significant impact on the Union market due to their reach.\u003C/strong> This is \u003Cstrong>presumed\u003C/strong> when the model’s \u003Cstrong>training compute exceeds 10^25 FLOP\u003C/strong> (Article 51(2) AI Act)\u003Cstrong>, which can be estimated even before the pre-training run\u003C/strong>.\u003C/p>\n\u003Cul>\n\u003Cli>This is not set in stone, as the Commission can adjust the threshold to account for technological advancements.\u003C/li>\n\u003C/ul>\n\u003C/li>\n\u003Cli>\u003Cp>It is designated as such \u003Cstrong>ex officio\u003C/strong> by the Commission, or based on alerts about its high-impact capabilities from the scientific panel.\u003C/p>\n\u003Cul>\n\u003Cli>The provider can contest the classification by submitting technical justification, such as model architecture, parameter count, training techniques, and dataset details.\u003C/li>\n\u003Cli>The provider is subject to obligations only if the Commission decides to reject the arguments and confirms that the model does indeed present systemic risk.\u003C/li>\n\u003C/ul>\n\u003C/li>\n\u003C/ul>\n\u003C/blockquote>\n\u003Cp>This triggers enhanced compliance duties such as:\u003C/p>\n\u003Cul>\n\u003Cli>\u003Cp>\u003Cstrong>Mandatory notification\u003C/strong> to the EU AI Office: providers must notify the Commission \u003Cstrong>within two weeks\u003C/strong> once their model meets the criteria or it becomes known that it will. 
Failure to do so can result in penalties of up to \u003Cstrong>€15 million or 3% of global turnover\u003C/strong>, whichever is higher (Article 101 AI Act).\u003C/p>\n\u003C/li>\n\u003Cli>\u003Cp>\u003Cstrong>Continuous risk monitoring and mitigation by \"taking appropriate measures along the entire model's lifecycle\" (Recital 114 AI Act).\u003C/strong> The Commission considers the lifecycle to start at the pre-training run.\u003C/p>\n\u003C/li>\n\u003C/ul>\n\u003Ch2 id=\"ways-you-could-be-considered-a-gpai-provider\">Ways you could be considered a GPAI provider\u003C/h2>\n\u003Cp>Under the AI Act, a \u003Ca href=\"https://www.dastra.eu/en/guide/provider-ai/58858\">\u003Cstrong>provider\u003C/strong>\u003C/a> is a person or body that develops a GPAI model or has it developed, and \u003Cstrong>places it on the market\u003C/strong> under its own name or trademark, irrespective of payment.\u003C/p>\n\u003Cp>The definition is intentionally broad and reflects the various ways a GPAI model can enter the Union market.\u003C/p>\n\u003Cul>\n\u003Cli>\u003Cstrong>“Placing on the market”\u003C/strong> refers to the first time a GPAI model is made available in the EU, through any means, be it \u003Cstrong>an API, a downloadable library, physical media, a cloud computing service, or a copy deployed on a customer's own infrastructure.\u003C/strong> It includes both commercial and non-commercial distribution.\u003C/li>\n\u003Cli>Importantly, \u003Cstrong>an entity based outside the EU can still qualify as a provider\u003C/strong> if it introduces a GPAI model into the EU market, either directly or through intermediaries.\u003C/li>\n\u003Cli>The \u003Cstrong>rules concerning GPAI models\u003C/strong>, with or without systemic risk, \u003Cstrong>apply even when the models are integrated into or form part of an \u003Ca href=\"https://www.dastra.eu/en/guide/ai-system/59056\">AI system\u003C/a>\u003C/strong> (Recital 97 AI Act), \u003Cstrong>in addition to those for AI 
systems.\u003C/strong>\u003C/li>\n\u003C/ul>\n\u003Cp>Here are the ways you could be considered a GPAI provider:\u003C/p>\n\u003Ctable style=\"min-width: 75px\">\n\u003Ccolgroup>\u003Ccol style=\"min-width: 25px\">\u003Ccol style=\"min-width: 25px\">\u003Ccol style=\"min-width: 25px\">\u003C/colgroup>\u003Ctbody>\u003Ctr>\u003Ctd colspan=\"1\" rowspan=\"1\">\u003Cp>By building or asking someone to build a model for you\u003C/p>\u003C/td>\u003Ctd colspan=\"1\" rowspan=\"1\">\u003Cp>If your company develops a model, or commissions another entity to develop it on your behalf, you are the provider.\u003C/p>\u003C/td>\u003Ctd colspan=\"1\" rowspan=\"1\">\u003Cp>This applies whether development is internal or outsourced.\u003C/p>\u003C/td>\u003C/tr>\u003Ctr>\u003Ctd colspan=\"1\" rowspan=\"1\">\u003Cp>If you make significant modifications to an existing model\u003C/p>\u003C/td>\u003Ctd colspan=\"1\" rowspan=\"1\">\u003Cp>If the compute used for your fine-tuning or modification exceeds \u003Cstrong>one-third of the compute\u003C/strong> used to train the original model, you are likely to become the new provider.\u003C/p>\u003C/td>\u003Ctd colspan=\"1\" rowspan=\"1\">\u003Cp>In contrast, \u003Cstrong>small-scale adaptations\u003C/strong>, such as domain-specific tuning or retrieval-augmented generation (RAG), likely do \u003Cstrong>not\u003C/strong> make you a provider; they would still be classified as downstream use.\u003C/p>\u003C/td>\u003C/tr>\u003Ctr>\u003Ctd colspan=\"1\" rowspan=\"1\">\u003Cp>If you release a model under a free &amp; open-source license under certain conditions\u003C/p>\u003C/td>\u003Ctd colspan=\"1\" rowspan=\"1\">\u003Cp>Open-source developers are \u003Cstrong>not automatically exempt\u003C/strong> from provider obligations. 
If you release a GPAI model under a free and open-source license, you may still be required to comply with all obligations pertaining to GPAI providers, \u003Cstrong>unless\u003C/strong> the following conditions are met:\u003C/p>\u003Col>\u003Cli>\u003Cp>The \u003Cstrong>model weights and relevant information\u003C/strong> (such as architecture and usage) are \u003Cstrong>publicly available\u003C/strong>,\u003C/p>\u003C/li>\u003Cli>\u003Cp>The model is \u003Cstrong>freely accessible and users are able to freely access, use, modify, and redistribute the model\u003C/strong> with \u003Cstrong>no monetary compensation\u003C/strong> required,\u003C/p>\u003C/li>\u003Cli>\u003Cp>The model \u003Cstrong>does not pose systemic risk\u003C/strong>.\u003C/p>\u003C/li>\u003C/ol>\u003Cp>You may be exempt from certain technical requirements, such as providing detailed documentation on training and testing to both the AI Office and to downstream providers integrating the GPAI into their own systems.\u003C/p>\u003C/td>\u003Ctd colspan=\"1\" rowspan=\"1\">\u003Cp>Nevertheless, \u003Cstrong>all providers\u003C/strong>, including open-source developers, must adopt a \u003Cstrong>copyright policy\u003C/strong> and publish a \u003Cstrong>summary of their training data sources\u003C/strong> to ensure transparency and compliance.\u003C/p>\u003C/td>\u003C/tr>\u003C/tbody>\n\u003C/table>\n\u003Ch2 id=\"important-enforcement-dates\">\u003Cstrong>Important enforcement dates\u003C/strong>\u003C/h2>\n\u003Cul>\n\u003Cli>\u003Cp>\u003Cstrong>August 2, 2025\u003C/strong>: GPAI-related obligations under the AI Act take effect.\u003C/p>\n\u003C/li>\n\u003Cli>\u003Cp>\u003Cstrong>August 2, 2026\u003C/strong>: Enforcement powers, including fines, begin.\u003C/p>\n\u003C/li>\n\u003Cli>\u003Cp>\u003Cstrong>August 2, 2027\u003C/strong>: End of the two-year transitional period granted to providers of GPAI models already on the market prior to the applicability date.\u003C/p>\n\u003C/li>\n\u003C/ul>\n\u003Cp>Although 
penalties cannot be enforced until 2026, the Commission expects \u003Cstrong>early cooperation and voluntary compliance\u003C/strong> from providers starting in 2025.\u003C/p>\n\u003Cp>The AI Office is committed to helping providers take the necessary steps toward compliance and encourages them to reach out proactively to ensure they are on the right path.\u003C/p>\n\u003Ch2 id=\"transitional-provisions\">\u003Cstrong>Transitional provisions\u003C/strong>\u003C/h2>\n\u003Cp>If \u003Cstrong>you’ve been developing a GPAI model prior to the Act’s entry into force\u003C/strong> and compliance with the AI Act is challenging, the AI Office will provide the necessary support to help you comply by August 2027. Notably, retraining or unlearning is not required \u003Cstrong>if it imposes a disproportionate burden,\u003C/strong> provided this is transparently disclosed and justified in the copyright policy and training data summary.\u003C/p>\n\u003Cp>If you are releasing your \u003Cstrong>first-ever GPAI model\u003C/strong>, the Commission may take your \u003Cem>\u003Cstrong>challenging position\u003C/strong>\u003C/em> into account and grant certain procedural flexibilities.\u003C/p>\n\u003Ch2 id=\"code-of-practice-guidelines-your-strategic-advantage\">\u003Cstrong>Code of practice &amp; guidelines: your strategic advantage\u003C/strong>\u003C/h2>\n\u003Cul>\n\u003Cli>\u003Cp>Adhering to the \u003Cstrong>general-purpose AI code of practice\u003C/strong> carries a major benefit: under Article 53(4), it creates a \u003Cstrong>presumption of compliance\u003C/strong> with the AI Act, streamlining your path to legal conformity.\u003C/p>\n\u003C/li>\n\u003Cli>\u003Cp>The draft \u003Cstrong>guidelines\u003C/strong> help clarify where you stand in terms of obligations. 
While the Code gives you a framework, the guidelines help you understand how to apply it to your specific situation.\u003C/p>\n\u003C/li>\n\u003C/ul>\n\u003Cp>Get started with the Code of practice \u003Ca href=\"https://www.dastra.eu/en/article/general-purpose-ai-code-of-practice-what-you-need-to-know/59438\">right here\u003C/a>.\u003C/p>\n","Building a GPAI? You might be the provider","Is your model a General-purpose AI model? And are you the Provider? Find out here.",1601,9,"General-Purpose AI: what the Commission says",0,null,"en","building-a-gpai-you-might-be-the-provider","Is your model a General-purpose AI model? Are you the provider? Find out here.","Published",{"id":19,"displayName":20,"avatarUrl":21,"bio":13,"blogUrl":13,"color":13,"userId":19,"creationDate":22},20352,"Leïla Sayssa","https://static.dastra.eu/tenant-3/avatar/20352/TDYeY3C8Rz1lLE/dpo-avatar-h01-150.png","2025-03-03T11:08:22","2025-07-21T08:37:00","2025-07-21T08:37:16.830805","2026-04-20T12:07:25.8812622",{"id":27,"name":28,"description":29,"url":30,"color":31,"parentId":13,"count":13,"imageUrl":13,"parent":13,"order":12,"translations":32},2,"Blog","A list of curated articles provided by the community","blog","#28449a",[33,36,39],{"lang":34,"name":28,"description":35},"fr","Une liste d'articles rédigés par la communauté",{"lang":37,"name":28,"description":38},"es","Una lista de artículos escritos por la comunidad",{"lang":40,"name":28,"description":41},"de","Eine Liste von Artikeln, die von der Community verfasst wurden",[43,48],{"id":27,"name":28,"description":29,"url":30,"color":31,"parentId":13,"count":13,"imageUrl":13,"parent":13,"order":12,"translations":44},[45,46,47],{"lang":34,"name":28,"description":35},{"lang":37,"name":28,"description":38},{"lang":40,"name":28,"description":41},{"id":49,"name":50,"description":51,"url":52,"color":53,"parentId":27,"count":13,"imageUrl":13,"parent":54,"order":59,"translations":60},69,"Expertise","Gain insights from our experts on GDPR compliance, 
data protection, and privacy challenges. In-depth articles, professional analysis, and real-world best practices.","indepth","#000000",{"id":27,"name":28,"description":29,"url":30,"color":31,"parentId":13,"count":13,"imageUrl":13,"parent":13,"order":12,"translations":55},[56,57,58],{"lang":34,"name":28,"description":35},{"lang":37,"name":28,"description":38},{"lang":40,"name":28,"description":41},5,[61,63,66],{"lang":34,"name":50,"description":62},"Bénéficiez des conseils de nos experts sur la conformité RGPD, la protection des données et les enjeux privacy. Articles de fond, analyses et retours d’expérience métier.",{"lang":40,"name":64,"description":65},"Fachwissen","Entdecken Sie die Artikel unserer DSGVO-Experten",{"lang":37,"name":67,"description":68},"Experiencia","Descubre los artículos de nuestros expertos en Privacy",[],"https://static.dastra.eu/content/8d19d67c-3568-42c2-8a3c-ee60d7b582c7/visuel-article-24-original.jpg",[72,73,74,75,76,77,78],"https://static.dastra.eu/content/8d19d67c-3568-42c2-8a3c-ee60d7b582c7/visuel-article-24-1000.webp","https://static.dastra.eu/content/8d19d67c-3568-42c2-8a3c-ee60d7b582c7/visuel-article-24.webp","https://static.dastra.eu/content/8d19d67c-3568-42c2-8a3c-ee60d7b582c7/visuel-article-24-1500.webp","https://static.dastra.eu/content/8d19d67c-3568-42c2-8a3c-ee60d7b582c7/visuel-article-24-800.webp","https://static.dastra.eu/content/8d19d67c-3568-42c2-8a3c-ee60d7b582c7/visuel-article-24-600.webp","https://static.dastra.eu/content/8d19d67c-3568-42c2-8a3c-ee60d7b582c7/visuel-article-24-300.webp","https://static.dastra.eu/content/8d19d67c-3568-42c2-8a3c-ee60d7b582c7/visuel-article-24-100.webp",59448]