[{"data":1,"prerenderedAt":-1},["ShallowReactive",2],{"$fE9ECgMcCsyoLlTMr21Ss6zXukaUFVSMOODcoygahdwE":3},{"tableOfContents":4,"markDownContent":5,"htmlContent":6,"metaTitle":7,"metaDescription":8,"wordCount":9,"readTime":10,"title":7,"nbDownloads":11,"excerpt":12,"lang":13,"url":14,"intro":8,"featured":4,"state":15,"author":16,"authorId":17,"datePublication":21,"dateCreation":22,"dateUpdate":23,"mainCategory":24,"categories":40,"metaDatas":88,"imageUrl":89,"imageThumbUrls":90,"id":98},false,"As AI systems become embedded across business operations, the question isn’t whether they require governance, but **how** to implement it effectively. This is all the more pressing now that we know there will be [no delay in the EU AI Act’s enforcement](https://www.dastra.eu/en/article/no-pause-for-the-eu-ai-act/59400). In the meantime, you should already be deploying a proper AI governance framework. Many organisations already rely on strong compliance foundations like the GDPR. However, AI introduces new layers of complexity and risk that require tailored governance, sharper accountability, and more rigorous oversight. There's simply no universal template.\r\n\r\nYou could structure your AI governance following the EU AI Act's timeline. Or you could treat the EU AI Act as the most stringent text, much as the GDPR became the de facto international standard. Either way, the challenge is to **move beyond general principles and design operational governance frameworks — not just because “the AI Act says so,”** but because it’s essential for the sustainable & trusted use of AI.\r\n\r\nProper governance allows you to identify & **navigate risks** stemming from the use and development of AI systems. It is also a **reputational and competitive advantage** to be transparent & responsible when using AI. 
It can also help to **lower compliance costs.**\r\n\r\nWe recently hosted an **AI Governance workshop** at [the CPDP.AI event in Brussels](https://www.cpdpconferences.org/) with around 30 participants, mostly DPOs, privacy leads, and compliance professionals. The **results are shared throughout this article.**\r\n\r\nLet's go through the most important AI Governance points.\r\n\r\n> ![](https://static.dastra.eu/richtext/4bbf79fc-0640-43b7-ba5a-7e3f796b7352/image-original.png)We kicked things off by asking them to rate their own organisation’s AI compliance. The outcome? Most see themselves at the early stages of readiness for the EU AI Act. Many are just starting to bridge the gap between high-level obligations and daily operations.\r\n\r\n## **Make it fit: Tailor governance to your organisation**\r\n\r\nEffective AI governance isn’t about replicating a rigid standard. The key is to ensure your governance is **proportionate, practical, and firmly anchored in how your business actually works:**\r\n\r\n- Consider how to develop internal templates for AI-specific reviews that align with your existing vendor assessment processes.\r\n- Define how your organisation will operationalise AI ethical principles just as you did under the GDPR: adopt the same mindset as privacy or security by design, and integrate fairness, explainability, and robustness directly into the AI development lifecycle.\r\n- Map your AI systems, whether developed in-house or sourced externally, by leveraging the same approach you used for your ROPA.\r\n- Assess risks through your established risk evaluation steps, but with AI goggles.\r\n- Clarify ownership, just as you already have for your GDPR stakeholders, who know their roles.\r\n\r\n## **Integrate, don't isolate**\r\n\r\n**AI governance shouldn’t live in its own silo.**\r\n\r\nIt needs to reflect your organisation’s **size, industry, risk appetite, and way of operating**. Smaller companies might not have dedicated AI teams, and that’s fine. 
There is no obligation to create dedicated functions.\r\n\r\nMany organisations weave AI oversight into their risk or ethics committees, or establish dedicated AI boards that work closely with privacy, audit, and security teams. For example, you might have the CTO handle technical development, the Chief Privacy Officer focus on data compliance, and the Chief Risk Officer oversee AI-specific risks.\r\n\r\nIn any case, AI can’t be managed in isolation. It needs **multi-department scrutiny and alignment with corporate values,** enabling effective coordination between departments and ensuring that diverse risks are properly understood and managed.\r\n\r\nDrawing up a RACI matrix can be an effective way to determine who is responsible for what.\r\n\r\n> ![](https://static.dastra.eu/richtext/c7a54c4c-ff00-4992-885f-beefb1cee825/image-original.png)During our discussions, we saw how diverse these approaches can be. In some companies, AI governance emerged from IT or data teams. In others, it started under privacy teams who had early visibility on regulatory risk. Decision points, such as who sets thresholds for bias tolerance or who greenlights high-risk models, are still evolving. Many still lack clear processes to consistently involve all relevant teams.\r\n\r\nEither way, there is no one-size-fits-all solution. The essential part is cross-functional engagement, and there are different operating models depending on size & sector.\r\n\r\n> ![](https://static.dastra.eu/richtext/53d57e30-c992-4a5f-8161-012986119229/image-original.png)We are convinced that the **Data Protection Officer (DPO) should be in charge of AI Compliance.** In the vast majority of cases, artificial intelligence involves the processing of personal data. 
It is therefore logical that new data-related issues are fully integrated into the **broader scope** of the DPO's responsibilities.\r\n\r\n## Develop an AI strategy that goes beyond mere compliance\r\n\r\nEffective governance is defined by how decisions are made, enforced, and supported across the company. Ask yourself:\r\n\r\n- Is AI governance in your company treated as a mere compliance task, or as a critical capability?\r\n\r\n- When ethics and commercial pressures collide, which takes priority?\r\n\r\nAI governance must be more than static policies. It’s a living framework, **shaped daily by choices that reflect your company’s values,** whether it’s a CEO funding training despite budget concerns, a legal counsel questioning a risky but profitable use case, or engineers delaying a launch to address bias.\r\n\r\n**Establish a managerial-level AI strategy:** a clear AI strategy is foundational for sound governance. It signals that AI isn’t treated as an isolated technical experiment or scattered set of projects, but as a strategic capability managed with purpose and oversight.\r\n\r\nAt the leadership level, and in order to avoid fragmented, inconsistent AI initiatives across the business, an AI strategy should define:\r\n\r\n- The overarching objectives for AI use,\r\n\r\n- The rules and parameters for deployment, clarifying where AI should or shouldn’t be applied,\r\n\r\n- And the financial and human resources allocated to make it work.\r\n\r\n**Guide operational decisions from the top down:** once set at the managerial level, this strategic framework becomes the reference for all tactical and operational decisions — from project evaluations to risk assessments, post-deployment monitoring, and assigning responsibilities. 
It ensures that data scientists, product teams, compliance officers, and partners align with the organization’s values, legal duties, and risk appetite.\r\n\r\n**Integrate governance directly into strategic planning:** Governance shouldn’t be an afterthought. It must be embedded into how AI projects are budgeted, prioritized, and measured, aligning with broader risk management and quality goals.\r\n\r\n## Reducing risk exposure through governance\r\n\r\nDepending on the stage of the AI lifecycle and the role of each stakeholder, different risks may arise. **Governance plays a crucial role in identifying, assessing, and implementing measures to address these risks.** Those risks can pertain to matters such as:\r\n\r\n- Regulatory (sanctions), reputational (loss of clients);\r\n- Cyber attacks, bias or hallucinations, discrimination;\r\n- Loss or leak of confidential data, intellectual property;\r\n\r\nIt’s important not to look at each risk in isolation but to **consider them collectively, always keeping the purpose of the AI use case in mind.**\r\n\r\n- For example, an AI system used by HR to evaluate candidates poses very different risks than an AI tool that simply monitors daily public news.\r\n\r\nOnce the risks are identified, **appropriate mitigation measures should be put in place** — such as anonymising personal data, strengthening contractual clauses with providers and users, or scheduling regular audits.\r\n\r\nIf these measures prove insufficient to adequately reduce the risks, or if they fail to protect the rights and freedoms of individuals, it may be necessary to adopt **even stricter safeguards**. In some cases, this could ultimately mean deciding **not to pursue a particular AI use case at all**.\r\n\r\n## Mapping & connecting to data governance\r\n\r\n![](https://static.dastra.eu/richtext/1a6af214-578b-49a7-9b34-7e19bf3c51a0/image-original.png)**Ensure AI systems are governed with full visibility into their data foundations.** Link your data management, quality, and accountability practices to AI oversight.\r\n\r\nKnowing the **origin and method of data collection is essential**, not only for legal compliance, but also for building trust & making AI systems more accountable. **Holding unnecessary data increases the risk of breaches or misuse in AI systems.**\r\n\r\nTo keep your compliance up-to-date, **strong data quality workflows & controls are essential**, such as a questionnaire to identify and assess AI tools in light of the AI Act.\r\n\r\nCreating a **record of AI systems** is an essential step here & will give you the necessary visibility.\r\n\r\n> Some organisations only have a loose list of tools. Others run internal surveys or change management tickets to spot new AI uses. A few incorporate these updates into their ROPA (GDPR record) and conduct yearly or on-demand reviews. Others hired someone to analyze all the use cases, contacting every department & listing what they found.\r\n\r\nWhichever way you do it, **you need to start mapping your AI systems** somewhere, somehow. However, success depends on supporting workflows: standardizing the information collected, partially automating the collection process where possible & regularly updating the information to keep the AI mapping relevant.\r\n\r\n> **We simplified it for you over [here.](https://www.dastra.eu/en/product-features/ai-governance)**\r\n\r\n## From vague promises to real policies\r\n\r\nEstablishing policies is essential to define your organisation's position on AI. 
It helps control usage and reduce the risk of incidents.\r\n\r\n- Set up clear internal rules for developing, procuring, testing, deploying and monitoring AI.\r\n- Define how ethical principles (like fairness, accountability, transparency) are put into practice.\r\n- Provide tools for recording the technical documentation of AI systems and the transparency documents provided by vendors.\r\n- Provide a tool for assessing the AI maturity of service providers or for scoring tenders.\r\n- Review the privacy policy, T&Cs and compliance documentation (as far as access and negotiation allow) to cover aspects such as intellectual property and cybersecurity.\r\n- Draft model contractual clauses to address AI-related risks.\r\n\r\n**Avoid vague promises in your policies** like \"we will prevent hallucinations or bias\" and replace them with concrete requirements such as \"undergo testing of X kind, within X days, and deploy one week after confirmation\".\r\n\r\n**Think of policies like software: they need regular updates to stay relevant.**\r\n\r\n## Beyond the AI Act's literacy obligation\r\n\r\nWhile the AI Act (Article 4) references AI literacy, there are no direct fines tied to it. But regulators could **well consider a lack of literacy as an aggravating factor** when assessing violations once the law takes broader effect this August, much as a lack of due diligence in dealing with bias can be.\r\n\r\nThe [living repository of the EU AI Office](https://digital-strategy.ec.europa.eu/en/library/living-repository-foster-learning-and-exchange-ai-literacy) gives examples to support the implementation of literacy (but does not automatically grant presumption of compliance).\r\n\r\n**AI literacy goes far beyond checking boxes.** It’s about building an organisation that truly understands why it uses AI and how to manage the risks and benefits. 
Regularly training teams and ensuring they use AI tools responsibly is key & cannot be a one-shot event.\r\n\r\nWhy? Take shadow AI: it's an early signal of a gap between the speed of AI innovation & organisational governance. In the real world, this can take the shape of an internal security incident, like the one Samsung faced when engineers leaked proprietary code by sharing it with ChatGPT.\r\n\r\n> #### **![](https://static.dastra.eu/richtext/060cbbd8-4bfe-4418-98f9-a3cf3ba8e782/image-original.png)![](https://static.dastra.eu/richtext/5aa818dd-850b-4c6f-bb6a-6017bc5a87b3/image-original.png)Our workshop made it clear: role-specific AI training should be a top priority among literacy initiatives, ranging from data breach simulations to tiered permissions & more. From interns to executives, AI literacy has to become part of the culture.**\r\n\r\n## **The time for cautious observation is over**\r\n\r\n#### It’s time to embed AI Governance into your DNA, just like we learned to do with the GDPR.\r\n\r\nWant to explore how to move from abstract obligations to concrete processes? **Let’s talk [here](https://www.dastra.eu/en/contacts).**\r\n\r\nMeanwhile, check out our **AI features over [here](https://www.dastra.eu/en/product-features/ai-governance).**\r\n\r\nThe bottom line: **governance is alive & well. At least it is with Dastra's help!**","\u003Cp>As AI systems become embedded across business operations, the question isn’t whether they require governance, but \u003Cstrong>how\u003C/strong> to implement it effectively. This is all the more pressing now that we know there will be \u003Ca href=\"https://www.dastra.eu/en/article/no-pause-for-the-eu-ai-act/59400\">no delay in the EU AI Act’s enforcement\u003C/a>. In the meantime, you should already be deploying a proper AI governance framework. Many organisations already rely on strong compliance foundations like the GDPR. 
However, AI introduces new layers of complexity and risk that require tailored governance, sharper accountability, and more rigorous oversight. There's simply no universal template.\u003C/p>\n\u003Cp>You could structure your AI governance following the EU AI Act's timeline. Or you could treat the EU AI Act as the most stringent text, much as the GDPR became the de facto international standard. Either way, the challenge is to \u003Cstrong>move beyond general principles and design operational governance frameworks — not just because “the AI Act says so,”\u003C/strong> but because it’s essential for the sustainable &amp; trusted use of AI.\u003C/p>\n\u003Cp>Proper governance allows you to identify &amp; \u003Cstrong>navigate risks\u003C/strong> stemming from the use and development of AI systems. It is also a \u003Cstrong>reputational and competitive advantage\u003C/strong> to be transparent &amp; responsible when using AI. It can also help to \u003Cstrong>lower compliance costs.\u003C/strong>\u003C/p>\n\u003Cp>We recently hosted an \u003Cstrong>AI Governance workshop\u003C/strong> at \u003Ca href=\"https://www.cpdpconferences.org/\" rel=\"nofollow\">the CPDP.AI event in Brussels\u003C/a> with around 30 participants, mostly DPOs, privacy leads, and compliance professionals. The \u003Cstrong>results are shared throughout this article.\u003C/strong>\u003C/p>\n\u003Cp>Let's go through the most important AI Governance points.\u003C/p>\n\u003Cblockquote>\n\u003Cp>\u003Cimg loading=\"lazy\"  src=\"https://static.dastra.eu/richtext/4bbf79fc-0640-43b7-ba5a-7e3f796b7352/image-original.png\" alt=\"\" />We kicked things off by asking them to rate their own organisation’s AI compliance. The outcome? Most see themselves at the early stages of readiness for the EU AI Act. 
Many are just starting to bridge the gap between high-level obligations and daily operations.\u003C/p>\n\u003C/blockquote>\n\u003Ch2 id=\"make-it-fit-tailor-governance-to-your-organisation\">\u003Cstrong>Make it fit: Tailor governance to your organisation\u003C/strong>\u003C/h2>\n\u003Cp>Effective AI governance isn’t about replicating a rigid standard. The key is to ensure your governance is \u003Cstrong>proportionate, practical, and firmly anchored in how your business actually works:\u003C/strong>\u003C/p>\n\u003Cul>\n\u003Cli>Consider how to develop internal templates for AI-specific reviews that align with your existing vendor assessment processes.\u003C/li>\n\u003Cli>Define how your organisation will operationalise AI ethical principles just as you did under the GDPR: adopt the same mindset as privacy or security by design, and integrate fairness, explainability, and robustness directly into the AI development lifecycle.\u003C/li>\n\u003Cli>Map your AI systems, whether developed in-house or sourced externally, by leveraging the same approach you used for your ROPA.\u003C/li>\n\u003Cli>Assess risks through your established risk evaluation steps, but with AI goggles.\u003C/li>\n\u003Cli>Clarify ownership, just as you already have for your GDPR stakeholders, who know their roles.\u003C/li>\n\u003C/ul>\n\u003Ch2 id=\"integrate-dont-isolate\">\u003Cstrong>Integrate, don't isolate\u003C/strong>\u003C/h2>\n\u003Cp>\u003Cstrong>AI governance shouldn’t live in its own silo.\u003C/strong>\u003C/p>\n\u003Cp>It needs to reflect your organisation’s \u003Cstrong>size, industry, risk appetite, and way of operating\u003C/strong>. Smaller companies might not have dedicated AI teams, and that’s fine. There is no obligation to create dedicated functions.\u003C/p>\n\u003Cp>Many organisations weave AI oversight into their risk or ethics committees, or establish dedicated AI boards that work closely with privacy, audit, and security teams. 
For example, you might have the CTO handle technical development, the Chief Privacy Officer focus on data compliance, and the Chief Risk Officer oversee AI-specific risks.\u003C/p>\n\u003Cp>In any case, AI can’t be managed in isolation. It needs \u003Cstrong>multi-department scrutiny and alignment with corporate values,\u003C/strong> enabling effective coordination between departments and ensuring that diverse risks are properly understood and managed.\u003C/p>\n\u003Cp>Drawing up a RACI matrix can be an effective way to determine who is responsible for what.\u003C/p>\n\u003Cblockquote>\n\u003Cp>\u003Cimg loading=\"lazy\"  src=\"https://static.dastra.eu/richtext/c7a54c4c-ff00-4992-885f-beefb1cee825/image-original.png\" alt=\"\" />During our discussions, we saw how diverse these approaches can be. In some companies, AI governance emerged from IT or data teams. In others, it started under privacy teams who had early visibility on regulatory risk. Decision points, such as who sets thresholds for bias tolerance or who greenlights high-risk models, are still evolving. Many still lack clear processes to consistently involve all relevant teams.\u003C/p>\n\u003C/blockquote>\n\u003Cp>Either way, there is no one-size-fits-all solution. The essential part is cross-functional engagement, and there are different operating models depending on size &amp; sector.\u003C/p>\n\u003Cblockquote>\n\u003Cp>\u003Cimg loading=\"lazy\"  src=\"https://static.dastra.eu/richtext/53d57e30-c992-4a5f-8161-012986119229/image-original.png\" alt=\"\" />We are convinced that the \u003Cstrong>Data Protection Officer (DPO) should be in charge of AI Compliance.\u003C/strong> In the vast majority of cases, artificial intelligence involves the processing of personal data. 
It is therefore logical that new data-related issues are fully integrated into the \u003Cstrong>broader scope\u003C/strong> of the DPO's responsibilities.\u003C/p>\n\u003C/blockquote>\n\u003Ch2 id=\"develop-an-ai-strategy-that-goes-beyond-mere-compliance\">Develop an AI strategy that goes beyond mere compliance\u003C/h2>\n\u003Cp>Effective governance is defined by how decisions are made, enforced, and supported across the company. Ask yourself:\u003C/p>\n\u003Cul>\n\u003Cli>\u003Cp>Is AI governance in your company treated as a mere compliance task, or as a critical capability?\u003C/p>\n\u003C/li>\n\u003Cli>\u003Cp>When ethics and commercial pressures collide, which takes priority?\u003C/p>\n\u003C/li>\n\u003C/ul>\n\u003Cp>AI governance must be more than static policies. It’s a living framework, \u003Cstrong>shaped daily by choices that reflect your company’s values,\u003C/strong> whether it’s a CEO funding training despite budget concerns, a legal counsel questioning a risky but profitable use case, or engineers delaying a launch to address bias.\u003C/p>\n\u003Cp>\u003Cstrong>Establish a managerial-level AI strategy:\u003C/strong> a clear AI strategy is foundational for sound governance. 
It signals that AI isn’t treated as an isolated technical experiment or scattered set of projects, but as a strategic capability managed with purpose and oversight.\u003C/p>\n\u003Cp>At the leadership level, and in order to avoid fragmented, inconsistent AI initiatives across the business, an AI strategy should define:\u003C/p>\n\u003Cul>\n\u003Cli>\u003Cp>The overarching objectives for AI use,\u003C/p>\n\u003C/li>\n\u003Cli>\u003Cp>The rules and parameters for deployment, clarifying where AI should or shouldn’t be applied,\u003C/p>\n\u003C/li>\n\u003Cli>\u003Cp>And the financial and human resources allocated to make it work.\u003C/p>\n\u003C/li>\n\u003C/ul>\n\u003Cp>\u003Cstrong>Guide operational decisions from the top down:\u003C/strong> once set at the managerial level, this strategic framework becomes the reference for all tactical and operational decisions — from project evaluations to risk assessments, post-deployment monitoring, and assigning responsibilities. It ensures that data scientists, product teams, compliance officers, and partners align with the organization’s values, legal duties, and risk appetite.\u003C/p>\n\u003Cp>\u003Cstrong>Integrate governance directly into strategic planning:\u003C/strong> Governance shouldn’t be an afterthought. It must be embedded into how AI projects are budgeted, prioritized, and measured, aligning with broader risk management and quality goals.\u003C/p>\n\u003Ch2 id=\"reducing-risk-exposure-through-governance\">Reducing risk exposure through governance\u003C/h2>\n\u003Cp>Depending on the stage of the AI lifecycle and the role of each stakeholder, different risks may arise. 
\u003Cstrong>Governance plays a crucial role in identifying, assessing, and implementing measures to address these risks.\u003C/strong> Those risks can pertain to matters such as:\u003C/p>\n\u003Cul>\n\u003Cli>Regulatory (sanctions), reputational (loss of clients);\u003C/li>\n\u003Cli>Cyber attacks, bias or hallucinations, discrimination;\u003C/li>\n\u003Cli>Loss or leak of confidential data, intellectual property;\u003C/li>\n\u003C/ul>\n\u003Cp>It’s important not to look at each risk in isolation but to \u003Cstrong>consider them collectively, always keeping the purpose of the AI use case in mind.\u003C/strong>\u003C/p>\n\u003Cul>\n\u003Cli>For example, an AI system used by HR to evaluate candidates poses very different risks than an AI tool that simply monitors daily public news.\u003C/li>\n\u003C/ul>\n\u003Cp>Once the risks are identified, \u003Cstrong>appropriate mitigation measures should be put in place\u003C/strong> — such as anonymising personal data, strengthening contractual clauses with providers and users, or scheduling regular audits.\u003C/p>\n\u003Cp>If these measures prove insufficient to adequately reduce the risks, or if they fail to protect the rights and freedoms of individuals, it may be necessary to adopt \u003Cstrong>even stricter safeguards\u003C/strong>. In some cases, this could ultimately mean deciding \u003Cstrong>not to pursue a particular AI use case at all\u003C/strong>.\u003C/p>\n\u003Ch2 id=\"mapping-connecting-to-data-governance\">Mapping &amp; connecting to data governance\u003C/h2>\n\u003Cp>\u003Cimg loading=\"lazy\"  src=\"https://static.dastra.eu/richtext/1a6af214-578b-49a7-9b34-7e19bf3c51a0/image-original.png\" alt=\"\" />\u003Cstrong>Ensure AI systems are governed with full visibility into their data foundations.\u003C/strong> Link your data management, quality, and accountability practices to AI oversight.\u003C/p>\n\u003Cp>Knowing the \u003Cstrong>origin and method of data collection is essential\u003C/strong>, not only for legal compliance, but also for building trust &amp; making AI systems more accountable. \u003Cstrong>Holding unnecessary data increases the risk of breaches or misuse in AI systems.\u003C/strong>\u003C/p>\n\u003Cp>To keep your compliance up-to-date, \u003Cstrong>strong data quality workflows &amp; controls are essential\u003C/strong>, such as a questionnaire to identify and assess AI tools in light of the AI Act.\u003C/p>\n\u003Cp>Creating a \u003Cstrong>record of AI systems\u003C/strong> is an essential step here &amp; will give you the necessary visibility.\u003C/p>\n\u003Cblockquote>\n\u003Cp>Some organisations only have a loose list of tools. Others run internal surveys or change management tickets to spot new AI uses. A few incorporate these updates into their ROPA (GDPR record) and conduct yearly or on-demand reviews. Others hired someone to analyze all the use cases, contacting every department &amp; listing what they found.\u003C/p>\n\u003C/blockquote>\n\u003Cp>Whichever way you do it, \u003Cstrong>you need to start mapping your AI systems\u003C/strong> somewhere, somehow. 
However, success depends on supporting workflows: standardizing the information collected, partially automating the collection process where possible &amp; regularly updating the information to keep the AI mapping relevant.\u003C/p>\n\u003Cblockquote>\n\u003Cp>\u003Cstrong>We simplified it for you over \u003Ca href=\"https://www.dastra.eu/en/product-features/ai-governance\">here.\u003C/a>\u003C/strong>\u003C/p>\n\u003C/blockquote>\n\u003Ch2 id=\"from-vague-promises-to-real-policies\">From vague promises to real policies\u003C/h2>\n\u003Cp>Establishing policies is essential to define your organisation's position on AI. It helps control usage and reduce the risk of incidents.\u003C/p>\n\u003Cul>\n\u003Cli>Set up clear internal rules for developing, procuring, testing, deploying and monitoring AI.\u003C/li>\n\u003Cli>Define how ethical principles (like fairness, accountability, transparency) are put into practice.\u003C/li>\n\u003Cli>Provide tools for recording the technical documentation of AI systems and the transparency documents provided by vendors.\u003C/li>\n\u003Cli>Provide a tool for assessing the AI maturity of service providers or for scoring tenders.\u003C/li>\n\u003Cli>Review the privacy policy, T&amp;Cs and compliance documentation (as far as access and negotiation allow) to cover aspects such as intellectual property and cybersecurity.\u003C/li>\n\u003Cli>Draft model contractual clauses to address AI-related risks.\u003C/li>\n\u003C/ul>\n\u003Cp>\u003Cstrong>Avoid vague promises in your policies\u003C/strong> like \"we will prevent hallucinations or bias\" and replace them with concrete requirements such as \"undergo testing of X kind, within X days, and deploy one week after confirmation\".\u003C/p>\n\u003Cp>\u003Cstrong>Think of policies like software: they need regular updates to stay relevant.\u003C/strong>\u003C/p>\n\u003Ch2 id=\"beyond-the-ai-acts-literacy-obligation\">Beyond the AI Act's 
literacy obligation\u003C/h2>\n\u003Cp>While the AI Act (Article 4) references AI literacy, there are no direct fines tied to it. But regulators could \u003Cstrong>well consider a lack of literacy as an aggravating factor\u003C/strong> when assessing violations once the law takes broader effect this August, much as a lack of due diligence in dealing with bias can be.\u003C/p>\n\u003Cp>The \u003Ca href=\"https://digital-strategy.ec.europa.eu/en/library/living-repository-foster-learning-and-exchange-ai-literacy\" rel=\"nofollow\">living repository of the EU AI Office\u003C/a> gives examples to support the implementation of literacy (but does not automatically grant presumption of compliance).\u003C/p>\n\u003Cp>\u003Cstrong>AI literacy goes far beyond checking boxes.\u003C/strong> It’s about building an organisation that truly understands why it uses AI and how to manage the risks and benefits. Regularly training teams and ensuring they use AI tools responsibly is key &amp; cannot be a one-shot event.\u003C/p>\n\u003Cp>Why? Take shadow AI: it's an early signal of a gap between the speed of AI innovation &amp; organisational governance. 
In the real world, this can take the shape of an internal security incident, like the one Samsung faced when engineers leaked proprietary code by sharing it with ChatGPT.\u003C/p>\n\u003Cblockquote>\n\u003Ch4 id=\"our-workshop-made-it-clear-role-specific-ai-training-should-be-a-top-priority-amongst-literacy-initiatives-ranging-from-data-breaches-simulations-creating-tiered-permissions-more.from-interns-to-executives-ai-literacy-has-to-become-part-of-the-culture\">\u003Cstrong>\u003Cimg loading=\"lazy\"  src=\"https://static.dastra.eu/richtext/060cbbd8-4bfe-4418-98f9-a3cf3ba8e782/image-original.png\" alt=\"\" />\u003Cimg loading=\"lazy\"  src=\"https://static.dastra.eu/richtext/5aa818dd-850b-4c6f-bb6a-6017bc5a87b3/image-original.png\" alt=\"\" />Our workshop made it clear: role-specific AI training should be a top priority among literacy initiatives, ranging from data breach simulations to tiered permissions &amp; more. From interns to executives, AI literacy has to become part of the culture.\u003C/strong>\u003C/h4>\n\u003C/blockquote>\n\u003Ch2 id=\"the-time-for-cautious-observation-is-over\">\u003Cstrong>The time for cautious observation is over\u003C/strong>\u003C/h2>\n\u003Ch4 id=\"its-time-to-embed-ai-governance-into-your-dna-just-like-we-learned-to-do-with-the-gdpr\">It’s time to embed AI Governance into your DNA, just like we learned to do with the GDPR.\u003C/h4>\n\u003Cp>Want to explore how to move from abstract obligations to concrete processes? \u003Cstrong>Let’s talk \u003Ca href=\"https://www.dastra.eu/en/contacts\">here\u003C/a>.\u003C/strong>\u003C/p>\n\u003Cp>Meanwhile, check out our \u003Cstrong>AI features over \u003Ca href=\"https://www.dastra.eu/en/product-features/ai-governance\">here\u003C/a>.\u003C/strong>\u003C/p>\n\u003Cp>The bottom line: \u003Cstrong>governance is alive &amp; well. 
At least it is with Dastra's help!\u003C/strong>\u003C/p>\n","How to get started with AI Governance ","As AI systems become embedded across business operations, the question isn’t whether they require governance, but how to implement it effectively. ",1991,11,0,null,"en","how-to-get-started-with-ai-governance","Published",{"id":17,"displayName":18,"avatarUrl":19,"bio":12,"blogUrl":12,"color":12,"userId":17,"creationDate":20},20352,"Leïla Sayssa","https://static.dastra.eu/tenant-3/avatar/20352/TDYeY3C8Rz1lLE/dpo-avatar-h01-150.png","2025-03-03T11:08:22","2025-07-10T10:00:00","2025-05-19T11:47:46.4558764","2026-04-20T12:07:41.1880462",{"id":25,"name":26,"description":27,"url":28,"color":29,"parentId":12,"count":12,"imageUrl":12,"parent":12,"order":11,"translations":30},2,"Blog","A list of curated articles provided by the community","blog","#28449a",[31,34,37],{"lang":32,"name":26,"description":33},"fr","Une liste d'articles rédigés par la communauté",{"lang":35,"name":26,"description":36},"es","Una lista de artículos escritos por la comunidad",{"lang":38,"name":26,"description":39},"de","Eine Liste von Artikeln, die von der Community verfasst wurden",[41,46,67],{"id":25,"name":26,"description":27,"url":28,"color":29,"parentId":12,"count":12,"imageUrl":12,"parent":12,"order":11,"translations":42},[43,44,45],{"lang":32,"name":26,"description":33},{"lang":35,"name":26,"description":36},{"lang":38,"name":26,"description":39},{"id":47,"name":48,"description":49,"url":50,"color":51,"parentId":25,"count":12,"imageUrl":12,"parent":52,"order":11,"translations":57},9,"News","Stay up to date with the latest news from data protection authorities: decisions, fines, guidelines, and regulatory trends in GDPR and 
privacy.","news","#1676ca",{"id":25,"name":26,"description":27,"url":28,"color":29,"parentId":12,"count":12,"imageUrl":12,"parent":12,"order":11,"translations":53},[54,55,56],{"lang":32,"name":26,"description":33},{"lang":35,"name":26,"description":36},{"lang":38,"name":26,"description":39},[58,61,64],{"lang":32,"name":59,"description":60},"Actualités","Suivez les dernières actualités des autorités de protection des données (CNIL, EDPS, etc.) : décisions, sanctions, lignes directrices et tendances réglementaires en matière de RGPD et de privacy.",{"lang":35,"name":62,"description":63},"Actualidad","Todos los artículos relativos a las autoridades de protección de datos",{"lang":38,"name":65,"description":66},"Nachrichten","Alle Artikel mit Bezug zu Datenschutzbehörden",{"id":68,"name":69,"description":70,"url":71,"color":72,"parentId":25,"count":12,"imageUrl":12,"parent":73,"order":78,"translations":79},69,"Expertise","Gain insights from our experts on GDPR compliance, data protection, and privacy challenges. In-depth articles, professional analysis, and real-world best practices.","indepth","#000000",{"id":25,"name":26,"description":27,"url":28,"color":29,"parentId":12,"count":12,"imageUrl":12,"parent":12,"order":11,"translations":74},[75,76,77],{"lang":32,"name":26,"description":33},{"lang":35,"name":26,"description":36},{"lang":38,"name":26,"description":39},5,[80,82,85],{"lang":32,"name":69,"description":81},"Bénéficiez des conseils de nos experts sur la conformité RGPD, la protection des données et les enjeux privacy. 
Articles de fond, analyses et retours d’expérience métier.",{"lang":38,"name":83,"description":84},"Fachwissen","Entdecken Sie die Artikel unserer DSGVO-Experten",{"lang":35,"name":86,"description":87},"Experiencia","Descubre los artículos de nuestros expertos en Privacy",[],"https://static.dastra.eu/content/7094dc83-f689-441a-8529-a5113855b1ea/visuel-article-22-original.jpg",[91,92,93,94,95,96,97],"https://static.dastra.eu/content/7094dc83-f689-441a-8529-a5113855b1ea/visuel-article-22-1000.webp","https://static.dastra.eu/content/7094dc83-f689-441a-8529-a5113855b1ea/visuel-article-22.webp","https://static.dastra.eu/content/7094dc83-f689-441a-8529-a5113855b1ea/visuel-article-22-1500.webp","https://static.dastra.eu/content/7094dc83-f689-441a-8529-a5113855b1ea/visuel-article-22-800.webp","https://static.dastra.eu/content/7094dc83-f689-441a-8529-a5113855b1ea/visuel-article-22-600.webp","https://static.dastra.eu/content/7094dc83-f689-441a-8529-a5113855b1ea/visuel-article-22-300.webp","https://static.dastra.eu/content/7094dc83-f689-441a-8529-a5113855b1ea/visuel-article-22-100.webp",59299]