[{"data":1,"prerenderedAt":-1},["ShallowReactive",2],{"$fWMcm-GEZzoZ0P-7i_-0Y0CjdwuOWUbvA_xbNXosRTR8":3},{"tableOfContents":4,"markDownContent":5,"htmlContent":6,"metaTitle":7,"metaDescription":8,"wordCount":9,"readTime":10,"title":7,"nbDownloads":11,"excerpt":12,"lang":13,"url":14,"intro":8,"featured":4,"state":15,"author":16,"authorId":17,"datePublication":21,"dateCreation":22,"dateUpdate":23,"mainCategory":24,"categories":40,"metaDatas":46,"imageUrl":47,"imageThumbUrls":48,"id":56},false,"## What is the NIST AI RMF ?\n\nThe National Institute of Standards and Technology (NIST) has developed a comprehensive [**Artificial Intelligence Risk Management Framework (AI RMF 1.0)**](https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.100-1.pdf) to assist organisations in the **responsible design, development, and deployment** of AI technologies.\n\nThis voluntary guide distinguishes AI-specific challenges (such as model opacity and data drift) from traditional software risks, emphasizing that **trustworthiness** is a multi-faceted concept involving safety, fairness, and transparency.\n\nThe document is architected around a **four-function Core** consisting of **Govern, Map, Measure, and Manage**, which provides a structured methodology for identifying and mitigating potential harms to individuals and society. Ultimately, the framework functions as a **living document** intended to foster a culture of risk awareness while promoting **innovation and public trust** in an evolving technological landscape.\n\n> The AI RMF Core is organized into four high-level functions: **GOVERN**, **MAP**, **MEASURE**, and **MANAGE**. These functions are designed to help organisations operationalise the management of AI risks throughout the system's lifecycle.\n>\n> The **NIST AI RMF Playbook** provides a set of recommended actions designed to support the implementation of the outcomes established in the AI RMF. 
Organisations may choose to adopt those that are relevant to their particular context.
>
> [Read the playbook here.](https://airc.nist.gov/airmf-resources/playbook/)

## What are the main attributes of the AI RMF?

The NIST AI Risk Management Framework (AI RMF) was developed around ten key attributes designed to guide its creation and ensure its effectiveness across diverse sectors (see Appendix D of the AI RMF). These attributes specify that the AI RMF strives to:

- **Be risk-based, resource-efficient, pro-innovation, and voluntary:** It focuses on managing risks without being overly burdensome or stifling innovation.
- **Be consensus-driven and transparent:** It is developed and updated through an open process in which all stakeholders have the opportunity to contribute.
- **Use clear and plain language:** The framework is designed to be understandable to a broad audience, including non-professionals and senior executives, while remaining technically deep enough for practitioners.
- **Provide a common language and understanding:** It offers a shared taxonomy, terminology, and definitions for managing AI risks.
- **Be easily usable and adaptable:** It is intended to be intuitive and to fit well within an organisation's existing broader risk management strategies.
- **Be universally applicable:** The framework is designed to be useful across a wide range of perspectives, sectors, and technology domains.
- **Be outcome-focused and non-prescriptive:** Rather than providing one-size-fits-all requirements, it offers a catalogue of desired outcomes and approaches.
- **Foster awareness of existing standards:** It takes advantage of existing best practices and methodologies while highlighting where additional resources are needed.
- **Be law- and regulation-agnostic:** It supports an organisation's ability to operate under various domestic and international legal or regulatory regimes.
- **Be a living document:** The AI RMF is intended to be regularly updated as
technology, understanding, and stakeholder experiences evolve.

## What are the four core functions of the AI RMF?

### 1. GOVERN

The **GOVERN** function is a **cross-cutting** function that informs and is infused throughout the other three. It focuses on:

- **Cultivating a risk management culture:** It establishes an organisational environment where risk is anticipated and managed proactively.
- **Establishing policies and accountability:** It outlines the processes, legal and regulatory requirements, and organisational schemes needed to manage risks, including defining clear roles and responsibilities.
- **Workforce diversity and training:** It prioritises diversity, equity, and inclusion in the risk management process and ensures personnel are trained to perform their duties.

### 2. MAP

The **MAP** function is used to **establish the context** needed to frame risks related to an AI system. Key activities include:

- **Identifying intended purposes and settings:** Understanding the specific goals, beneficial uses, and the environments where the AI will be deployed.
- **Categorising the AI system:** Defining the specific tasks (e.g., generative models, classifiers) and identifying the system's knowledge limits.
- **Characterising impacts:** Identifying the likelihood and magnitude of potential harms to individuals, groups, society, and the environment.
- **Checking assumptions:** This function allows organisations to verify whether their initial assumptions about the AI's use cases remain valid.

### 3. MEASURE

The **MEASURE** function employs **quantitative and qualitative tools** to analyse, assess, and monitor identified AI risks.
This includes:

- **Evaluating trustworthy characteristics:** Testing the system for validity, reliability, safety, security, fairness, and privacy enhancement.
- **Rigorous testing (TEVV):** Implementing test, evaluation, verification, and validation processes, including comparisons against performance benchmarks.
- **Tracking risks over time:** Establishing mechanisms to monitor existing, unanticipated, and emergent risks while the system is in production.

### 4. MANAGE

The **MANAGE** function involves **allocating resources** to the risks that have been mapped and measured. It focuses on:

- **Risk treatment:** Prioritising and acting upon risks based on their projected impact. Response options include mitigating, transferring, avoiding, or accepting the risk.
- **Maximising benefits and minimising harm:** Implementing strategies to sustain the value of the AI system while reducing the likelihood of failures.
- **Incident response and recovery:** Creating plans to respond to and recover from incidents, including mechanisms to deactivate or disengage systems that perform inconsistently with their intended use.

## What are the seven characteristics of a trustworthy AI system?

The NIST AI Risk Management Framework identifies **seven key characteristics** that contribute to the trustworthiness of an AI system. These characteristics are socio-technical attributes, meaning they are influenced by both technical design and the social context in which the system is used.

The seven characteristics are:

1. **Valid and reliable:** **Validity** is the confirmation that the system's requirements for its specific intended use are fulfilled, while **reliability** is its ability to perform without failure under given conditions over time. This characteristic includes accuracy (how close results are to true values) and robustness (the ability to maintain performance under varied circumstances).
2. **Safe:** AI systems should not, under defined conditions, lead to a state that endangers **human life, health, property, or the environment**. Safety is improved through responsible design, clear information for users, and the ability to intervene in or shut down a system that deviates from expected functionality.
3. **Secure and resilient:** **Resilience** is the ability of a system to withstand unexpected adverse events or changes in its environment. **Security** encompasses resilience but also includes protocols to protect against and recover from attacks, such as data poisoning or unauthorised access.
4. **Accountable and transparent:** **Transparency** involves making information about an AI system and its outputs available to those interacting with it. **Accountability** relates to the responsibility for the system's outcomes and depends upon transparency to be effective.
5. **Explainable and interpretable:** **Explainability** refers to describing the internal mechanisms of how an AI system works, while **interpretability** refers to the meaning and context of the system's output. Together they help users understand "how" and "why" a specific decision or recommendation was made.
6. **Privacy-enhanced:** This characteristic relates to safeguarding **human autonomy, identity, and dignity**. It involves following norms such as anonymity and confidentiality, and utilising privacy-enhancing technologies (PETs) to prevent the unauthorised identification of individuals.
7. **Fair, with harmful bias managed:** Fairness involves addressing concerns for **equality and equity**.
This requires managing various forms of bias, including systemic bias (in datasets or organisational norms), computational bias (statistical errors), and human-cognitive bias (how individuals perceive information).

> For a system to be truly trustworthy, these characteristics must be **balanced based on the specific context of use**, as they often involve trade-offs (for example, a more private system might lose some predictive accuracy).

## How can an organisation implement an AI RMF Profile?

An organisation can implement an AI RMF Profile by tailoring the Framework's functions, categories, and subcategories to a **specific setting or application** based on its unique requirements, risk tolerance, and available resources.

The implementation process generally involves the following steps and considerations:

### 1. Identify the type of profile needed

Organisations can develop different types of profiles depending on their goals:

- **Use-case profiles:** These are designed for specific applications, such as an AI RMF profile for hiring or fair housing.
- **Temporal profiles:** These help track progress over time. A **current profile** describes the organisation's existing AI risk management activities, while a **target profile** outlines the desired outcomes needed to meet specific risk management goals.
- **Cross-sectoral profiles:** These cover risks for models or business processes used across multiple sectors, such as the acquisition of large language models or cloud-based services.

### 2. Conduct a gap analysis

By comparing a **current profile** against a **target profile**, an organisation can identify specific gaps in its risk management objectives. This comparison helps the organisation understand which categories or subcategories of the AI RMF Core need more attention or resources.
### 3. Develop and prioritise an action plan

Once gaps are identified, the organisation can:

- **Create action plans** to address those gaps and fulfil the outcomes of specific subcategories.
- **Prioritise mitigation efforts** based on the organisation's specific needs and established risk management processes.
- **Gauge resource needs**, such as staffing and funding, to achieve its target risk management goals in a cost-effective manner.

### 4. Maintain flexibility

The AI RMF does not prescribe specific templates for these profiles. This gives organisations the **flexibility** to implement the framework in a way that best aligns with their internal goals, legal or regulatory requirements, and industry best practices. Profiles also allow organisations to compare their risk management approaches with those of other entities.
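The gap-analysis and prioritisation steps described above can be sketched in code. This is only a minimal illustration, not a prescribed format: the AI RMF deliberately does not define profile templates, and the subcategory identifiers, maturity scale (0–3), and scores below are hypothetical examples chosen for the sketch.

```python
# Illustrative sketch only: the AI RMF does not prescribe a profile format.
# Subcategory IDs and 0-3 maturity scores are hypothetical examples.

# A profile is modelled here as a mapping of AI RMF subcategory -> maturity level.
current_profile = {
    "GOVERN 1.1": 2,    # policies partially documented
    "MAP 1.1": 1,       # intended purposes only partially characterised
    "MEASURE 2.11": 0,  # fairness not yet evaluated
    "MANAGE 4.1": 1,    # post-deployment monitoring is ad hoc
}

target_profile = {
    "GOVERN 1.1": 3,
    "MAP 1.1": 3,
    "MEASURE 2.11": 2,
    "MANAGE 4.1": 3,
}

def gap_analysis(current, target):
    """Return subcategories where current maturity falls short of the target,
    sorted by the size of the gap (largest first)."""
    gaps = {
        sub: target[sub] - current.get(sub, 0)
        for sub in target
        if current.get(sub, 0) < target[sub]
    }
    return sorted(gaps.items(), key=lambda kv: kv[1], reverse=True)

for subcategory, gap in gap_analysis(current_profile, target_profile):
    print(f"{subcategory}: gap of {gap} maturity level(s)")
```

The sorted output gives the organisation a first-cut prioritisation of where to direct action plans and resources; in practice, prioritisation would also weigh likelihood and magnitude of harm, not just the maturity gap.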
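The quantitative side of the MEASURE function described earlier can also be illustrated. The sketch below computes a demographic parity difference, one common statistical fairness measure, and flags it against a risk-tolerance threshold. The outcome data, group labels, and threshold are all invented for the example; real TEVV processes would use much richer measures and context-specific tolerances.

```python
# Illustrative sketch only: one of many possible quantitative fairness measures.
# The outcome data, group labels, and threshold are hypothetical.

def selection_rate(outcomes):
    """Fraction of favourable outcomes in a group."""
    return sum(outcomes) / len(outcomes)

def demographic_parity_difference(outcomes_by_group):
    """Largest difference in selection rates between any two groups.
    0.0 means all groups receive favourable outcomes at the same rate."""
    rates = [selection_rate(o) for o in outcomes_by_group.values()]
    return max(rates) - min(rates)

# 1 = favourable decision (e.g. application approved), 0 = unfavourable
outcomes = {
    "group_a": [1, 1, 0, 1, 1, 0, 1, 1],  # selection rate 0.75
    "group_b": [1, 0, 0, 1, 0, 0, 1, 0],  # selection rate 0.375
}

THRESHOLD = 0.2  # hypothetical organisational risk tolerance
dpd = demographic_parity_difference(outcomes)
print(f"Demographic parity difference: {dpd:.3f}")
if dpd > THRESHOLD:
    print("Exceeds tolerance: flag for treatment under the MANAGE function")
```

A measurement like this feeds the MANAGE function: a result outside the organisation's tolerance becomes a mapped and measured risk to mitigate, transfer, avoid, or accept.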