[{"data":1,"prerenderedAt":-1},["ShallowReactive",2],{"article_59717":3},{"tableOfContents":4,"markDownContent":5,"htmlContent":6,"metaTitle":7,"metaDescription":8,"wordCount":9,"readTime":10,"title":11,"nbDownloads":12,"excerpt":13,"lang":14,"url":15,"intro":16,"featured":4,"state":17,"author":18,"authorId":19,"datePublication":23,"dateCreation":24,"dateUpdate":25,"mainCategory":26,"categories":42,"metaDatas":48,"imageUrl":49,"imageThumbUrls":50,"id":58},false,"## Why your organization needs an AI charter right now\r\n\r\nArtificial intelligence is being integrated into all professions, often without a formal framework: AI assistants, automation, content generation tools, business co-pilots... But behind these efficiency gains lie **legal, ethical, and compliance risks**: data breaches, algorithmic bias, loss of control over decisions, or failure to comply with GDPR and the **EU AI Act**.\r\n\r\nThe **AI usage charter** thus becomes an **essential governance tool**. It establishes rules and best practices to **secure, structure, and hold accountable the use of AI**, in compliance with **ethical, regulatory, and compliance requirements**.\r\n\r\n**Objective:** to provide a compass for ethical innovation, protect the company, reassure teams, and anticipate the obligations of the AI Act.\r\n\r\n## 1. Preparatory phase: laying the foundations\r\n\r\n### a) Who and how to start the approach\r\n\r\n- Identify the sponsor, governance (“who carries the charter”), involve internal stakeholders (IT, legal, HR, departments).\r\n\r\n- Conduct a state of play: what AI tools are already in use, what data is involved, what “shadow” practices exist.\r\n\r\n> Before any drafting, it is essential to **identify the AIs used** within the company, including those integrated without official validation (the “shadow IT” or “shadow AI”). 
This involves an analysis of **data flows, generative tools, automation, and connectors**.\r\n>\r\n> **Objective:** to achieve a **clear vision of actual usage and associated risks**.\r\n\r\n### b) Key questions to ask\r\n\r\n- What AI tools are currently employed (chatbots, generative AI, co-pilots, etc.)?\r\n\r\n- What data is being processed? Often: sensitive data, business secrets, personal data, etc.\r\n\r\n- In which use cases do we want to allow AI and in which cases do we want to prohibit it?\r\n\r\n- How does usage align with the organization's values and objectives?\r\n\r\n> The drafting of an AI charter is a **collective approach**. Its effectiveness relies on **the involvement of stakeholders**: management, IT, legal, HR, departments, DPO, and sometimes external partners.\r\n>\r\n> This co-construction allows for **defining the values, objectives, and red lines** of the organization.\r\n\r\n### c) Mapping risks & obligations\r\n\r\nEach use of AI must be analyzed from several angles:\r\n\r\n- **GDPR and data security**,\r\n\r\n- **ethical risks and algorithmic bias**,\r\n\r\n- **business and decision-making impact**,\r\n\r\n- **regulatory compliance (AI Act)**.\r\n\r\nThe company must then **establish a prioritized risk matrix** for each tool or usage.\r\n\r\n> **Objective:** to factually assess the risk associated with each AI use case and identify the necessary mitigation measures.\r\n\r\n---\r\n\r\n## 2. 
Structuring the charter: essential elements to include\r\n\r\nHere are the essential sections identified in best practices:\r\n\r\n#### a) Preamble\r\n\r\n- Objective of the charter, tailored to the context of AI usage within the organization.\r\n\r\n#### b) Scope\r\n\r\n- Who the charter applies to (employees, contractors, interns...); which systems and use cases are covered.\r\n\r\n#### c) Definitions\r\n\r\n- Generative AI, sensitive data, prompt, authorized user, etc.\r\n\r\n#### d) Objectives and scope\r\n\r\n- State that the charter aims to frame usage and complement existing policies (GDPR, security, ethics).\r\n\r\n#### e) Guiding principles\r\n\r\n- Primacy of the human, transparency, explainability, respect for fundamental rights, fairness, security, accountability, responsible innovation, traceability.\r\n\r\n#### f) Authorized/prohibited uses\r\n\r\n- Define concrete examples: “authorized: non-confidential email draft”, “prohibited: customer data in a prompt”.\r\n\r\n#### g) Control and protection measures\r\n\r\n- Blocking unapproved tools, access restrictions, audits, monitoring.\r\n\r\n#### h) Training and support\r\n\r\n- Awareness, hands-on training, prompt workshops, etc.\r\n\r\n#### i) Sanctions\r\n\r\n- Disciplinary measures in case of non-compliance.\r\n\r\n#### j) Governance, updates, enforceability\r\n\r\n- Appendix to internal regulations, AI committee, periodic review.\r\n\r\n#### k) Employee commitment\r\n\r\n- Signature, individual commitment, liability clause.\r\n\r\n---\r\n\r\n## 3. Writing the charter: good practices\r\n\r\n- Use clear language, accessible to all (avoid legal jargon).\r\n\r\n- Illustrate with concrete examples (internal use cases) so that every employee can visualize them.\r\n\r\n- Tailor the document to the organization's context (size, sector, type of data).\r\n\r\n- Think in terms of an operational format, not just a theoretical one. 
Provide a short document, appendices, FAQs, pictograms.\r\n\r\n- Involve end users in the review to ensure understanding.\r\n\r\n---\r\n\r\n## 4. Implementation and monitoring: turning the charter into action\r\n\r\n- Internal communication: dissemination, training, e-learning module.\r\n\r\n- Integration: attach to employment contracts, internal regulations.\r\n\r\n- Monitoring and control: audit of AI usage, tracking indicators (e.g., % of approved AI tools, incidents).\r\n\r\n- Updating: regularly review the charter to incorporate new tools, new regulations (AI Act).\r\n\r\n- Governance: set up an AI committee, compliance officer, liaison with the DPO.\r\n\r\n- Awareness sessions for teams, internal FAQs to address practical questions.\r\n\r\n> A useful charter is never static. Before its publication, it must be **reviewed and validated** by internal stakeholders, then **adjusted** according to their feedback.\r\n>\r\n> **Objective:** to transform the charter into a **collective reflex** and not just a compliance document.\r\n\r\n---\r\n\r\n## 5. Common mistakes to avoid\r\n\r\n- Copying and pasting a generic charter without internal analysis. Not adapting the content to its context (sector, size, type of data) → poor understanding.\r\n\r\n- Not mapping uses or tools before drafting → risk of omission or inadequacy.\r\n\r\n- Neglecting team training and failing to raise awareness among users: then the charter remains a dead letter.\r\n\r\n- Writing a text that is too legalistic, incomprehensible to users.\r\n\r\n- Writing the charter as a mere “compliance document” without business ownership.\r\n\r\n- Forgetting to assess biases and transparency of the AIs used.\r\n\r\n- Not providing sanctions or monitoring → lack of credibility.\r\n\r\n---\r\n\r\n## Conclusion\r\n\r\nThe charter is a **strategic tool** to support innovation, limit risks, and build trust. It is a structured approach combining internal reflection, tailored drafting, training, and governance. 
At Dastra, we support companies in this process: from auditing AI uses to drafting the charter template, to training teams and establishing operational follow-up with our AI systems register and more.","\u003Ch2 id=\"why-your-organization-needs-an-ai-charter-right-now\">Why your organization needs an AI charter right now\u003C/h2>\r\n\u003Cp>Artificial intelligence is being integrated into all professions, often without a formal framework: AI assistants, automation, content generation tools, business co-pilots...\u003Cbr />\r\nBut behind these efficiency gains lie \u003Cstrong>legal, ethical, and compliance risks\u003C/strong>: data breaches, algorithmic bias, loss of control over decisions, or failure to comply with GDPR and the \u003Cstrong>EU AI Act\u003C/strong>.\u003C/p>\r\n\u003Cp>The \u003Cstrong>AI usage charter\u003C/strong> thus becomes an \u003Cstrong>essential governance tool\u003C/strong>.\u003Cbr />\r\nIt establishes rules and best practices to \u003Cstrong>secure, structure, and hold accountable the use of AI\u003C/strong>, in compliance with \u003Cstrong>ethical, regulatory, and compliance requirements\u003C/strong>.\u003C/p>\r\n\u003Cp>\u003Cstrong>Objective:\u003C/strong> to provide a compass for ethical innovation, protect the company, reassure teams, and anticipate the obligations of the AI Act.\u003C/p>\r\n\u003Ch2 id=\"preparatory-phase-laying-the-foundations\">1. 
Preparatory phase: laying the foundations\u003C/h2>\r\n\u003Ch3 id=\"a-who-and-how-to-start-the-approach\">a) Who and how to start the approach\u003C/h3>\r\n\u003Cul>\r\n\u003Cli>\u003Cp>Identify the sponsor, governance (“who carries the charter”), involve internal stakeholders (IT, legal, HR, departments).\u003C/p>\r\n\u003C/li>\r\n\u003Cli>\u003Cp>Conduct a state of play: what AI tools are already in use, what data is involved, what “shadow” practices exist.\u003C/p>\r\n\u003C/li>\r\n\u003C/ul>\r\n\u003Cblockquote>\r\n\u003Cp>Before any drafting, it is essential to \u003Cstrong>identify the AIs used\u003C/strong> within the company, including those integrated without official validation (the “shadow IT” or “shadow AI”). This involves an analysis of \u003Cstrong>data flows, generative tools, automation, and connectors\u003C/strong>.\u003Cbr />\r\n\u003Cbr />\r\n\u003Cstrong>Objective:\u003C/strong> to achieve a \u003Cstrong>clear vision of actual usage and associated risks\u003C/strong>.\u003C/p>\r\n\u003C/blockquote>\r\n\u003Ch3 id=\"b-key-questions-to-ask\">b) Key questions to ask\u003C/h3>\r\n\u003Cul>\r\n\u003Cli>\u003Cp>What AI tools are currently employed (chatbots, generative AI, co-pilots, etc.)?\u003C/p>\r\n\u003C/li>\r\n\u003Cli>\u003Cp>What data is being processed? Often: sensitive data, business secrets, personal data, etc.\u003C/p>\r\n\u003C/li>\r\n\u003Cli>\u003Cp>In which use cases do we want to allow AI and in which cases do we want to prohibit it?\u003C/p>\r\n\u003C/li>\r\n\u003Cli>\u003Cp>How does usage align with the organization's values and objectives?\u003C/p>\r\n\u003C/li>\r\n\u003C/ul>\r\n\u003Cblockquote>\r\n\u003Cp>The drafting of an AI charter is a \u003Cstrong>collective approach\u003C/strong>. 
Its effectiveness relies on \u003Cstrong>the involvement of stakeholders\u003C/strong>: management, IT, legal, HR, departments, DPO, and sometimes external partners.\u003Cbr />\r\n\u003Cbr />\r\nThis co-construction allows for \u003Cstrong>defining the values, objectives, and red lines\u003C/strong> of the organization.\u003C/p>\r\n\u003C/blockquote>\r\n\u003Ch3 id=\"c-mapping-risks-obligations\">c) Mapping risks &amp; obligations\u003C/h3>\r\n\u003Cp>Each use of AI must be analyzed from several angles:\u003C/p>\r\n\u003Cul>\r\n\u003Cli>\u003Cp>\u003Cstrong>GDPR and data security\u003C/strong>,\u003C/p>\r\n\u003C/li>\r\n\u003Cli>\u003Cp>\u003Cstrong>ethical risks and algorithmic bias\u003C/strong>,\u003C/p>\r\n\u003C/li>\r\n\u003Cli>\u003Cp>\u003Cstrong>business and decision-making impact\u003C/strong>,\u003C/p>\r\n\u003C/li>\r\n\u003Cli>\u003Cp>\u003Cstrong>regulatory compliance (AI Act)\u003C/strong>.\u003C/p>\r\n\u003C/li>\r\n\u003C/ul>\r\n\u003Cp>The company must then \u003Cstrong>establish a prioritized risk matrix\u003C/strong> for each tool or usage.\u003C/p>\r\n\u003Cblockquote>\r\n\u003Cp>\u003Cstrong>Objective:\u003C/strong> to factually assess the risk associated with each AI use case and identify the necessary mitigation measures.\u003C/p>\r\n\u003C/blockquote>\r\n\u003Chr />\r\n\u003Ch2 id=\"structuring-the-charter-essential-elements-to-include\">2. 
Structuring the charter: essential elements to include\u003C/h2>\r\n\u003Cp>Here are the essential sections identified in best practices:\u003C/p>\r\n\u003Ch4 id=\"a-preamble\">a) Preamble\u003C/h4>\r\n\u003Cul>\r\n\u003Cli>Objective of the charter, tailored to the context of AI usage within the organization.\u003C/li>\r\n\u003C/ul>\r\n\u003Ch4 id=\"b-scope\">b) Scope\u003C/h4>\r\n\u003Cul>\r\n\u003Cli>Who the charter applies to (employees, contractors, interns...); which systems and use cases are covered.\u003C/li>\r\n\u003C/ul>\r\n\u003Ch4 id=\"c-definitions\">c) Definitions\u003C/h4>\r\n\u003Cul>\r\n\u003Cli>Generative AI, sensitive data, prompt, authorized user, etc.\u003C/li>\r\n\u003C/ul>\r\n\u003Ch4 id=\"d-objectives-and-scope\">d) Objectives and scope\u003C/h4>\r\n\u003Cul>\r\n\u003Cli>State that the charter aims to frame usage and complement existing policies (GDPR, security, ethics).\u003C/li>\r\n\u003C/ul>\r\n\u003Ch4 id=\"e-guiding-principles\">e) Guiding principles\u003C/h4>\r\n\u003Cul>\r\n\u003Cli>Primacy of the human, transparency, explainability, respect for fundamental rights, fairness, security, accountability, responsible innovation, traceability.\u003C/li>\r\n\u003C/ul>\r\n\u003Ch4 id=\"f-authorizedprohibited-uses\">f) Authorized/prohibited uses\u003C/h4>\r\n\u003Cul>\r\n\u003Cli>Define concrete examples: “authorized: non-confidential email draft”, “prohibited: customer data in a prompt”.\u003C/li>\r\n\u003C/ul>\r\n\u003Ch4 id=\"g-control-and-protection-measures\">g) Control and protection measures\u003C/h4>\r\n\u003Cul>\r\n\u003Cli>Blocking unapproved tools, access restrictions, audits, monitoring.\u003C/li>\r\n\u003C/ul>\r\n\u003Ch4 id=\"h-training-and-support\">h) Training and support\u003C/h4>\r\n\u003Cul>\r\n\u003Cli>Awareness, hands-on training, prompt workshops, etc.\u003C/li>\r\n\u003C/ul>\r\n\u003Ch4 id=\"i-sanctions\">i) Sanctions\u003C/h4>\r\n\u003Cul>\r\n\u003Cli>Disciplinary measures in case of 
non-compliance.\u003C/li>\r\n\u003C/ul>\r\n\u003Ch4 id=\"j-governance-updates-enforceability\">j) Governance, updates, enforceability\u003C/h4>\r\n\u003Cul>\r\n\u003Cli>Appendix to internal regulations, AI committee, periodic review.\u003C/li>\r\n\u003C/ul>\r\n\u003Ch4 id=\"k-employee-commitment\">k) Employee commitment\u003C/h4>\r\n\u003Cul>\r\n\u003Cli>Signature, individual commitment, liability clause.\u003C/li>\r\n\u003C/ul>\r\n\u003Chr />\r\n\u003Ch2 id=\"writing-the-charter-good-practices\">3. Writing the charter: good practices\u003C/h2>\r\n\u003Cul>\r\n\u003Cli>\u003Cp>Use clear language, accessible to all (avoid legal jargon).\u003C/p>\r\n\u003C/li>\r\n\u003Cli>\u003Cp>Illustrate with concrete examples (internal use cases) so that every employee can visualize them.\u003C/p>\r\n\u003C/li>\r\n\u003Cli>\u003Cp>Tailor the document to the organization's context (size, sector, type of data).\u003C/p>\r\n\u003C/li>\r\n\u003Cli>\u003Cp>Think in terms of an operational format, not just a theoretical one. Provide a short document, appendices, FAQs, pictograms.\u003C/p>\r\n\u003C/li>\r\n\u003Cli>\u003Cp>Involve end users in the review to ensure understanding.\u003C/p>\r\n\u003C/li>\r\n\u003C/ul>\r\n\u003Chr />\r\n\u003Ch2 id=\"implementation-and-monitoring-turning-the-charter-into-action\">4. 
Implementation and monitoring: turning the charter into action\u003C/h2>\r\n\u003Cul>\r\n\u003Cli>\u003Cp>Internal communication: dissemination, training, e-learning module.\u003C/p>\r\n\u003C/li>\r\n\u003Cli>\u003Cp>Integration: attach to employment contracts, internal regulations.\u003C/p>\r\n\u003C/li>\r\n\u003Cli>\u003Cp>Monitoring and control: audit of AI usage, tracking indicators (e.g., % of approved AI tools, incidents).\u003C/p>\r\n\u003C/li>\r\n\u003Cli>\u003Cp>Updating: regularly review the charter to incorporate new tools, new regulations (AI Act).\u003C/p>\r\n\u003C/li>\r\n\u003Cli>\u003Cp>Governance: set up an AI committee, compliance officer, liaison with the DPO.\u003C/p>\r\n\u003C/li>\r\n\u003Cli>\u003Cp>Awareness sessions for teams, internal FAQs to address practical questions.\u003C/p>\r\n\u003C/li>\r\n\u003C/ul>\r\n\u003Cblockquote>\r\n\u003Cp>A useful charter is never static. Before its publication, it must be \u003Cstrong>reviewed and validated\u003C/strong> by internal stakeholders, then \u003Cstrong>adjusted\u003C/strong> according to their feedback.\u003Cbr />\r\n\u003Cbr />\r\n\u003Cstrong>Objective:\u003C/strong> to transform the charter into a \u003Cstrong>collective reflex\u003C/strong> and not just a compliance document.\u003C/p>\r\n\u003C/blockquote>\r\n\u003Chr />\r\n\u003Ch2 id=\"common-mistakes-to-avoid\">5. Common mistakes to avoid\u003C/h2>\r\n\u003Cul>\r\n\u003Cli>\u003Cp>Copying and pasting a generic charter without internal analysis. 
Not adapting the content to its context (sector, size, type of data) → poor understanding.\u003C/p>\r\n\u003C/li>\r\n\u003Cli>\u003Cp>Not mapping uses or tools before drafting → risk of omission or inadequacy.\u003C/p>\r\n\u003C/li>\r\n\u003Cli>\u003Cp>Neglecting team training and failing to raise awareness among users: then the charter remains a dead letter.\u003C/p>\r\n\u003C/li>\r\n\u003Cli>\u003Cp>Writing a text that is too legalistic, incomprehensible to users.\u003C/p>\r\n\u003C/li>\r\n\u003Cli>\u003Cp>Writing the charter as a mere “compliance document” without business ownership.\u003C/p>\r\n\u003C/li>\r\n\u003Cli>\u003Cp>Forgetting to assess biases and transparency of the AIs used.\u003C/p>\r\n\u003C/li>\r\n\u003Cli>\u003Cp>Not providing sanctions or monitoring → lack of credibility.\u003C/p>\r\n\u003C/li>\r\n\u003C/ul>\r\n\u003Chr />\r\n\u003Ch2 id=\"conclusion\">Conclusion\u003C/h2>\r\n\u003Cp>The charter is a \u003Cstrong>strategic tool\u003C/strong> to support innovation, limit risks, and build trust. It is a structured approach combining internal reflection, tailored drafting, training, and governance. \u003Cbr />\r\n\u003Cbr />\r\nAt Dastra, we support companies in this process: from auditing AI uses to drafting the charter template, to training teams and establishing operational follow-up with our AI systems register and more.\u003C/p>\r\n","AI Usage Charter: Practical Guide 2025","Create an AI charter compliant with GDPR & AI Act: usage, risks, co-construction, drafting, deployment. The complete guide for businesses.",919,5,"How to create an AI usage charter in the workplace",0,null,"en","how-to-create-an-ai-usage-charter-in-the-workplace","Draft a clear AI usage charter compliant with GDPR and the AI Act. 
Discover the key steps: mapping of uses, risk assessment, co-construction, writing, and deployment.","Published",{"id":19,"displayName":20,"avatarUrl":21,"bio":13,"blogUrl":13,"color":13,"userId":19,"creationDate":22},20352,"Leïla Sayssa","https://static.dastra.eu/tenant-3/avatar/20352/TDYeY3C8Rz1lLE/dpo-avatar-h01-150.png","2025-03-03T11:08:22","2025-11-06T14:28:00","2025-11-06T14:28:02.2779945","2025-11-06T15:22:02.5688675",{"id":27,"name":28,"description":29,"url":30,"color":31,"parentId":13,"count":13,"imageUrl":13,"parent":13,"order":12,"translations":32},2,"Blog","A list of curated articles provided by the community","article","#28449a",[33,36,39],{"lang":34,"name":28,"description":35},"fr","Une liste d'articles rédigés par la communauté",{"lang":37,"name":28,"description":38},"es","Una lista de artículos escritos por la comunidad",{"lang":40,"name":28,"description":41},"de","Eine Liste von Artikeln, die von der Community verfasst wurden",[43],{"id":27,"name":28,"description":29,"url":30,"color":31,"parentId":13,"count":13,"imageUrl":13,"parent":13,"order":12,"translations":44},[45,46,47],{"lang":34,"name":28,"description":35},{"lang":37,"name":28,"description":38},{"lang":40,"name":28,"description":41},[],"https://static.dastra.eu/content/3fb14bab-5865-4875-a3c7-f2728ba2e58e/visuel-article-5-original.jpg",[51,52,53,54,55,56,57],"https://static.dastra.eu/content/3fb14bab-5865-4875-a3c7-f2728ba2e58e/visuel-article-5-1000.webp","https://static.dastra.eu/content/3fb14bab-5865-4875-a3c7-f2728ba2e58e/visuel-article-5.webp","https://static.dastra.eu/content/3fb14bab-5865-4875-a3c7-f2728ba2e58e/visuel-article-5-1500.webp","https://static.dastra.eu/content/3fb14bab-5865-4875-a3c7-f2728ba2e58e/visuel-article-5-800.webp","https://static.dastra.eu/content/3fb14bab-5865-4875-a3c7-f2728ba2e58e/visuel-article-5-600.webp","https://static.dastra.eu/content/3fb14bab-5865-4875-a3c7-f2728ba2e58e/visuel-article-5-300.webp","https://static.dastra.eu/content/3fb14bab-5865-4875-a3c7
-f2728ba2e58e/visuel-article-5-100.webp",59717]