[{"data":1,"prerenderedAt":-1},["ShallowReactive",2],{"$fNoJxxlNxLG-USOo0LOcjO92EITgxqilO91yz7eK_Amw":3},{"tableOfContents":4,"markDownContent":5,"htmlContent":6,"metaTitle":7,"metaDescription":8,"wordCount":9,"readTime":10,"title":11,"nbDownloads":12,"excerpt":13,"lang":14,"url":15,"intro":16,"featured":4,"state":17,"author":18,"authorId":19,"datePublication":23,"dateCreation":24,"dateUpdate":25,"mainCategory":26,"categories":42,"metaDatas":48,"imageUrl":49,"imageThumbUrls":50,"id":58},false,"The recent rise of artificial intelligence is no longer limited to systems able to analyze or generate information. A new generation of tools, described as **“agentic artificial intelligence”**, goes a step further: these systems are capable of **planning, making decisions and executing actions autonomously**, by interacting with various services, databases and digital environments.\n\n> *For example, in the context of a business trip, an agent can detect this trip in a calendar and proactively initiate bookings by interacting with third‑party services.*\n\nThe agent does more than respond to a request: it can anticipate situations, detect changes in its environment and **initiate actions itself** to achieve the objective assigned to it.\n\nThis technological evolution opens significant opportunities for automating many organizational processes, including those involving **processing of personal data**.\n\nHowever, this capacity for autonomous action profoundly changes the nature of risks to data protection. 
Unlike traditional AI systems, agents can **simultaneously access multiple sources of information, retain persistent memories and perform automated actions**, which complicates the traceability of processing activities and control over personal data flows.\n\nThe Spanish Data Protection Agency (AEPD) published, in February 2026, a [guide](https://www.aepd.es/en/guides/agentic-artificial-intelligence.pdf) dedicated to agentic AI, stressing that the integration of these systems into organizations should be seen not as the adoption of a mere technological tool but as a transformation of data processing workflows requiring strengthened governance.\n\n## Agentic artificial intelligence and personal data protection\n\nAgentic AI directly impacts how personal information is collected, used and monitored:\n\n- **Access to unstructured data**: agents can autonomously access emails, meeting minutes, internal documents or customer databases to enrich their context and make more relevant decisions. This level of access **introduces a significant risk of violating the data minimization principle** (Art. 5(1) GDPR).\n\n> *For example, in a system of five agents tasked with finding hotels, it would be technically possible for the AI to consult irrelevant information (such as internal customer preferences or unrelated exchanges) simply to improve its context.*\n>\n> This situation **makes it difficult to demonstrate to a supervisory authority** that only the data strictly necessary were used.\n\n- **Automated decisions**: increased autonomy can lead to decisions, without human intervention, **that have legal or similarly significant effects on individuals** (Art. 22 GDPR). 
The risk lies in the difficulty of controlling and demonstrating **that these decisions do not have an adverse or discriminatory impact**, especially when multiple agents collaborate and interact with numerous data sources.\n\n> *For example, in an automated recruitment process, an agent could evaluate applications and automatically reject certain profiles based on criteria analyzed across different internal systems (CVs, tests, interview notes) and external sources (social networks, public recommendations), thereby disadvantaging some candidates.*\n\n- **Confidentiality and agent initiative**: agent autonomy can generate specific risks to data confidentiality. Agents’ ability to act proactively, without constant human supervision, makes it **difficult to anticipate, control and trace data exchanges**. This exposes organizations to confidentiality breaches.\n\n> *For example, an agent might deem that a third‑party service offers the ideal tool to process information and decide to automatically transfer internal company files to unknown external servers via unaudited APIs.*\n\n- **Agent memory:** there is a risk of unintended retention and reuse of data. AI agents have multiple memories:\n\n  1. Management memory: logs of the agent’s activity and actions.\n  2. 
Working memory: semantic (information updates), episodic (event archive) and procedural (rules for executing tasks).\n\n  This memory architecture creates a **specific risk in terms of personal data protection**.\n\n> *For example, if an agent receives a mission involving health data and its global memory retains that information, the agent may later reuse that data for a completely different task, without consent or any purpose related to the initial mission.*\n\nTo address the specific risks introduced by agentic AI, the guide sets out several recommendations: integrate these systems into information governance, anticipate biases and errors, limit data access, structure metadata, and compartmentalize agents’ memories.\n\nThese requirements call for **tools able to structure, document and manage the processing of personal data**.\n\n## How can a governance tool like DASTRA help address these challenges?\n\nSeveral features of DASTRA can help respond to the issues raised:\n\n- **Integration into data governance**: AI agents can be integrated into the **record of processing activities**, allowing documentation of their purposes, categories of data used, information sources and recipients. 
This mapping improves visibility over data flows generated by these systems.\n\n- **Documentation and traceability of risks**: Data protection impact assessments (DPIAs) can be used to **identify and document risks specific to agentic AI**, such as decision‑making autonomy, persistent memory or interactions with third‑party services.\n\n- **Access and data flow management**: By documenting roles, responsibilities and categories of data access, governance can **formalize access policies for information processed by agents** and identify sensitive points in data flows.\n\n- **Data and metadata cataloging**: A structured data mapping makes it easier to identify the **sources used by agents**, the types of data processed and their sensitivity, which is essential when these systems interact with multiple information repositories.\n\n- **Implementing protective measures**: Measures such as pseudonymization, purpose limitation or data compartmentalization can be **documented, monitored and audited**.\n\n{% button href=\"https://www.dastra.eu/en/contacts/demo\" text=\"Talk to a Dastra expert\" target=\"\\_blank\" role=\"button\" class=\"btn btn-primary\" %}","\u003Cp>The recent rise of artificial intelligence is no longer limited to systems able to analyze or generate information. 
A new generation of tools, described as \u003Cstrong>“agentic artificial intelligence”\u003C/strong>, goes a step further: these systems are capable of \u003Cstrong>planning, making decisions and executing actions autonomously\u003C/strong>, by interacting with various services, databases and digital environments.\u003C/p>\n\u003Cblockquote>\n\u003Cp>\u003Cem>For example, in the context of a business trip, an agent can detect this trip in a calendar and proactively initiate bookings by interacting with third‑party services.\u003C/em>\u003C/p>\n\u003C/blockquote>\n\u003Cp>The agent does more than respond to a request: it can anticipate situations, detect changes in its environment and \u003Cstrong>initiate actions itself\u003C/strong> to achieve the objective assigned to it.\u003C/p>\n\u003Cp>This technological evolution opens significant opportunities for automating many organizational processes, including those involving \u003Cstrong>processing of personal data\u003C/strong>.\u003C/p>\n\u003Cp>However, this capacity for autonomous action profoundly changes the nature of risks to data protection. 
Unlike traditional AI systems, agents can \u003Cstrong>simultaneously access multiple sources of information, retain persistent memories and perform automated actions\u003C/strong>, which complicates the traceability of processing activities and control over personal data flows.\u003C/p>\n\u003Cp>The Spanish Data Protection Agency (AEPD) published, in February 2026, a \u003Ca href=\"https://www.aepd.es/en/guides/agentic-artificial-intelligence.pdf\" rel=\"nofollow\">guide\u003C/a> dedicated to agentic AI, stressing that the integration of these systems into organizations should be seen not as the adoption of a mere technological tool but as a transformation of data processing workflows requiring strengthened governance.\u003C/p>\n\u003Ch2 id=\"agentic-artificial-intelligence-and-personal-data-protection\">Agentic artificial intelligence and personal data protection\u003C/h2>\n\u003Cp>Agentic AI directly impacts how personal information is collected, used and monitored:\u003C/p>\n\u003Cul>\n\u003Cli>\u003Cstrong>Access to unstructured data\u003C/strong>: agents can autonomously access emails, meeting minutes, internal documents or customer databases to enrich their context and make more relevant decisions. This level of access \u003Cstrong>introduces a significant risk of violating the data minimization principle\u003C/strong> (Art. 
5(1) GDPR).\u003C/li>\n\u003C/ul>\n\u003Cblockquote>\n\u003Cp>\u003Cem>For example, in a system of five agents tasked with finding hotels, it would be technically possible for the AI to consult irrelevant information (such as internal customer preferences or unrelated exchanges) simply to improve its context.\u003C/em>\u003C/p>\n\u003Cp>This situation \u003Cstrong>makes it difficult to demonstrate to a supervisory authority\u003C/strong> that only the data strictly necessary were used.\u003C/p>\n\u003C/blockquote>\n\u003Cul>\n\u003Cli>\u003Cstrong>Automated decisions\u003C/strong>: increased autonomy can lead to decisions, without human intervention, \u003Cstrong>that have legal or similarly significant effects on individuals\u003C/strong> (Art. 22 GDPR). The risk lies in the difficulty of controlling and demonstrating \u003Cstrong>that these decisions do not have an adverse or discriminatory impact\u003C/strong>, especially when multiple agents collaborate and interact with numerous data sources.\u003C/li>\n\u003C/ul>\n\u003Cblockquote>\n\u003Cp>\u003Cem>For example, in an automated recruitment process, an agent could evaluate applications and automatically reject certain profiles based on criteria analyzed across different internal systems (CVs, tests, interview notes) and external sources (social networks, public recommendations), thereby disadvantaging some candidates.\u003C/em>\u003C/p>\n\u003C/blockquote>\n\u003Cul>\n\u003Cli>\u003Cstrong>Confidentiality and agent initiative\u003C/strong>: agent autonomy can generate specific risks to data confidentiality. Agents’ ability to act proactively, without constant human supervision, makes it \u003Cstrong>difficult to anticipate, control and trace data exchanges\u003C/strong>. 
This exposes organizations to confidentiality breaches.\u003C/li>\n\u003C/ul>\n\u003Cblockquote>\n\u003Cp>\u003Cem>For example, an agent might deem that a third‑party service offers the ideal tool to process information and decide to automatically transfer internal company files to unknown external servers via unaudited APIs.\u003C/em>\u003C/p>\n\u003C/blockquote>\n\u003Cul>\n\u003Cli>\u003Cp>\u003Cstrong>Agent memory:\u003C/strong> there is a risk of unintended retention and reuse of data. AI agents have multiple memories:\u003C/p>\n\u003Col>\n\u003Cli>Management memory: logs of the agent’s activity and actions.\u003C/li>\n\u003Cli>Working memory: semantic (information updates), episodic (event archive) and procedural (rules for executing tasks).\u003C/li>\n\u003C/ol>\n\u003Cp>This memory architecture creates a \u003Cstrong>specific risk in terms of personal data protection\u003C/strong>.\u003C/p>\n\u003C/li>\n\u003C/ul>\n\u003Cblockquote>\n\u003Cp>\u003Cem>For example, if an agent receives a mission involving health data and its global memory retains that information, the agent may later reuse that data for a completely different task, without consent or any purpose related to the initial mission.\u003C/em>\u003C/p>\n\u003C/blockquote>\n\u003Cp>To address the specific risks introduced by agentic AI, the guide sets out several recommendations: integrate these systems into information governance, anticipate biases and errors, limit data access, structure metadata, and compartmentalize agents’ memories.\u003C/p>\n\u003Cp>These requirements call for \u003Cstrong>tools able to structure, document and manage the processing of personal data\u003C/strong>.\u003C/p>\n\u003Ch2 id=\"how-can-a-governance-tool-like-dastra-help-address-these-challenges\">How can a governance tool like DASTRA help address these challenges?\u003C/h2>\n\u003Cp>Several features of DASTRA can help respond to the issues raised:\u003C/p>\n\u003Cul>\n\u003Cli>\u003Cp>\u003Cstrong>Integration into data 
governance\u003C/strong>: AI agents can be integrated into the \u003Cstrong>record of processing activities\u003C/strong>, allowing documentation of their purposes, categories of data used, information sources and recipients. This mapping improves visibility over data flows generated by these systems.\u003C/p>\n\u003C/li>\n\u003Cli>\u003Cp>\u003Cstrong>Documentation and traceability of risks\u003C/strong>: Data protection impact assessments (DPIAs) can be used to \u003Cstrong>identify and document risks specific to agentic AI\u003C/strong>, such as decision‑making autonomy, persistent memory or interactions with third‑party services.\u003C/p>\n\u003C/li>\n\u003Cli>\u003Cp>\u003Cstrong>Access and data flow management\u003C/strong>: By documenting roles, responsibilities and categories of data access, governance can \u003Cstrong>formalize access policies for information processed by agents\u003C/strong> and identify sensitive points in data flows.\u003C/p>\n\u003C/li>\n\u003Cli>\u003Cp>\u003Cstrong>Data and metadata cataloging\u003C/strong>: A structured data mapping makes it easier to identify the \u003Cstrong>sources used by agents\u003C/strong>, the types of data processed and their sensitivity, which is essential when these systems interact with multiple information repositories.\u003C/p>\n\u003C/li>\n\u003Cli>\u003Cp>\u003Cstrong>Implementing protective measures\u003C/strong>: Measures such as pseudonymization, purpose limitation or data compartmentalization can be \u003Cstrong>documented, monitored and audited\u003C/strong>.\u003C/p>\n\u003C/li>\n\u003C/ul>\n\u003Cdiv class=\"content-btn-container\">\u003Ca href=\"https://www.dastra.eu/en/contacts/demo\" target=\"_blank\" role=\"button\" class=\"btn btn-primary\">Talk to a Dastra expert\u003C/a>\u003C/div>\n","Agentic AI and personal data: what GDPR risks?","Agentic AI is transforming data processing. 
Discover the GDPR risks related to autonomy, memory, and automated decision-making.",870,5,"AI agents in data processing: what challenges for compliance and governance?",0,null,"en","ai-agents-in-data-processing-what-challenges-for-compliance-and-governance","Agentic AI is transforming data processing. Discover the GDPR risks linked to autonomy, memory, and automated decisions.","Published",{"id":19,"displayName":20,"avatarUrl":21,"bio":13,"blogUrl":13,"color":13,"userId":19,"creationDate":22},2986,"Maëva Vidal","https://static.dastra.eu/tenant-3/avatar/2986/maeva-min-min-min-150.png","2022-09-05T13:22:36","2026-04-01T08:34:00","2026-04-01T08:34:05.6107541","2026-04-01T08:44:40.9679017",{"id":27,"name":28,"description":29,"url":30,"color":31,"parentId":13,"count":13,"imageUrl":13,"parent":13,"order":12,"translations":32},2,"Blog","A list of curated articles provided by the community","blog","#28449a",[33,36,39],{"lang":34,"name":28,"description":35},"fr","Une liste d'articles rédigés par la communauté",{"lang":37,"name":28,"description":38},"es","Una lista de artículos escritos por la comunidad",{"lang":40,"name":28,"description":41},"de","Eine Liste von Artikeln, die von der Community verfasst 
wurden",[43],{"id":27,"name":28,"description":29,"url":30,"color":31,"parentId":13,"count":13,"imageUrl":13,"parent":13,"order":12,"translations":44},[45,46,47],{"lang":34,"name":28,"description":35},{"lang":37,"name":28,"description":38},{"lang":40,"name":28,"description":41},[],"https://static.dastra.eu/content/e5fda074-d518-4b03-931e-e77a5f47b45a/privacy-by-design-300-original.webp",[51,52,53,54,55,56,57],"https://static.dastra.eu/content/e5fda074-d518-4b03-931e-e77a5f47b45a/privacy-by-design-300-1000.webp","https://static.dastra.eu/content/e5fda074-d518-4b03-931e-e77a5f47b45a/privacy-by-design-300.webp","https://static.dastra.eu/content/e5fda074-d518-4b03-931e-e77a5f47b45a/privacy-by-design-300-1500.webp","https://static.dastra.eu/content/e5fda074-d518-4b03-931e-e77a5f47b45a/privacy-by-design-300-800.webp","https://static.dastra.eu/content/e5fda074-d518-4b03-931e-e77a5f47b45a/privacy-by-design-300-600.webp","https://static.dastra.eu/content/e5fda074-d518-4b03-931e-e77a5f47b45a/privacy-by-design-300-300.webp","https://static.dastra.eu/content/e5fda074-d518-4b03-931e-e77a5f47b45a/privacy-by-design-300-100.webp",59952]