[{"data":1,"prerenderedAt":-1},["ShallowReactive",2],{"$f6lZirdYF1lrwZ5kRugWVc5AL6vKjEXtkzvLj3bBTIIg":3},{"tableOfContents":4,"markDownContent":5,"htmlContent":6,"metaTitle":7,"metaDescription":8,"wordCount":9,"readTime":10,"title":7,"nbDownloads":11,"excerpt":12,"lang":13,"url":14,"intro":8,"featured":4,"state":15,"author":16,"authorId":17,"datePublication":21,"dateCreation":22,"dateUpdate":23,"mainCategory":24,"categories":40,"metaDatas":46,"imageUrl":47,"imageThumbUrls":48,"id":56},true,"Tired of general newsletters that skim over your real concerns? **Dastra Insights** offers legal and regulatory monitoring **specifically designed for DPOs, lawyers, and privacy professionals**.\r\n\r\nEach month, we go beyond a simple recap: we select about ten decisions, news items, or positions **that have a concrete impact on your missions and organizations**.\r\n\r\n🎯 **Targeted, useful monitoring grounded in the day-to-day realities of data protection and AI.**\r\n\r\nHere is our selection for **February 2026:**\r\n\r\n## Risks related to AI‑generated images: the Belgian Data Protection Authority signs a joint statement\r\n\r\nOn 23 February 2026, 61 data protection authorities worldwide, including the [Belgian Data Protection Authority (Autorité de protection des données, APD)](https://www.autoriteprotectiondonnees.be/citoyen/actualites/2026/02/23/risques-lies-aux-images-generees-par-l-ia-l-apd-signe-une-declaration-commune), signed and published a [joint statement](https://www.autoriteprotectiondonnees.be/publications/joint-statement-on-ai-generated-imagery-and-the-protection-of-privacy.pdf) warning of the **privacy and fundamental rights risks posed by images and videos generated by artificial intelligence**.\r\n\r\nThis initiative was carried out under the auspices of the **Global Privacy Assembly (GPA)** and coordinated by the **International Enforcement Cooperation Working Group (IEWG)**.\r\n\r\n### What are the main concerns?\r\n\r\nThe signatory authorities express serious 
concerns about AI systems capable of producing **ultra‑realistic images and videos depicting identifiable persons**, often without their **knowledge or consent**, notably:\r\n\r\n- content that may infringe on the **privacy** or **dignity** of the persons depicted;\r\n- the creation of **intimate or defamatory deepfakes** that could harm reputations or be exploited for malicious purposes;\r\n- **harms specific to children** and other vulnerable groups.\r\n\r\n### Principles and recommendations\r\n\r\nThe statement does more than sound the alarm: it sets out **fundamental principles** that organizations developing or using image‑generation systems should respect:\r\n\r\n- **Implement robust measures** to prevent abusive use or non‑consensual dissemination of personal data;\r\n- **Ensure meaningful transparency** about systems’ capabilities and limitations, as well as permitted uses;\r\n- **Provide effective mechanisms** allowing data subjects to request prompt removal of harmful content involving their data;\r\n- **Mitigate risks specific to children**, including enhanced protections and information tailored to young people, their parents and educators.\r\n\r\n### Why this matters\r\n\r\nThe APD and its counterparts recall that generative AI technologies offer significant opportunities, **but can also infringe fundamental rights** (such as the right to privacy or human dignity) if deployed without appropriate safeguards. 
They therefore call on developers, vendors, platforms and users to **cooperate with authorities** **to ensure that technological innovation does not come at the expense of people’s freedoms and rights**.\r\n\r\n## Documentation: CNIL publishes the 2026 update of the Tables Informatique et Libertés\r\n\r\n[CNIL](https://www.cnil.fr/fr/tables-informatique-et-libertes-2026) has published the [**2026 edition of its *Tables Informatique et Libertés***](https://www.cnil.fr/sites/default/files/2026-03/tables_il.pdf), a **key doctrinal resource** compiling and structuring the essentials of **case law and decisional practice on the protection of personal data**, at both national and European levels.\r\n\r\nThe *Tables Informatique et Libertés* are an **organized corpus of thematic summaries** of major decisions from:\r\n\r\n- French courts (e.g. the Conseil d’État or the Cour de cassation);\r\n- European courts (notably the Court of Justice of the European Union);\r\n- CNIL itself (decisions, corrective measures, doctrinal positions);\r\n- the European Data Protection Board (EDPB).\r\n\r\nPresented according to a **detailed thematic classification** (principles, legal bases, data subject rights, data security, international transfers, sanctions, etc.), they provide a **coherent overall view of the doctrine applicable to the GDPR and the French Data Protection Act (*Informatique et Libertés*)**.\r\n\r\nCNIL emphasizes that these Tables have a dual purpose:\r\n\r\n- **Internal**: to ensure a consistent understanding and application of doctrine among CNIL staff, given the ever-growing number of legal questions arising from the application of the GDPR.\r\n- **External**: to make more accessible to **professionals, academics and legal practitioners** the doctrinal positions and legal points that are not always published in individual decisions.\r\n\r\nThis document will be updated regularly to reflect ongoing developments in data protection practice, notably with regard to new technologies 
and case law.\r\n\r\n## Call for testers of a **GDPR auditing tool for AI models**\r\n\r\nThe **French National Cybersecurity Agency (ANSSI)**, in partnership with **CNIL**, PEReN and the IPoP project of the Cybersecurity PEPR led by Inria, has launched a [**call for expressions of interest (AMI)**](https://cyber.gouv.fr/actualites/ami-outil-audit-rgpd-ia/) to select stakeholders willing to test a new **privacy auditing tool for artificial intelligence models**. This initiative is part of the **PANAME** project (*Privacy Auditing of AI Models*).\r\n\r\nThe main objective of the project is to develop a **software library to technically audit the privacy of AI models**, in particular against **information‑extraction tests** that can reveal personal data from training datasets. This aims to help organizations **assess the GDPR compliance of their models**, especially when they use or are trained on sensitive data.\r\n\r\nThis call for expressions of interest is open from **26 February to 28 March 2026** to all public or private entities established in the European Union: **companies, startups, research laboratories, public administrations or any other organization using or developing AI models**. 
Selected applicants will participate in a practical testing phase to validate and enrich the tool’s functionalities.\r\n\r\n## United Kingdom: Information Commissioner’s Office (ICO) fines Reddit £14.47m for failures in protecting children’s data\r\n\r\nOn 24 February 2026, the [**Information Commissioner’s Office (ICO)**](https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2026/02/reddit-issued-with-1447m-fine-for-children-s-privacy-failures/), the UK’s independent data protection authority, announced a sanction of **£14.47 million** (approximately **€17 million**) against **Reddit, Inc.**, for unlawfully using **personal data of children under 13** and failing to implement **adequate age‑verification mechanisms**.\r\n\r\nAccording to the ICO:\r\n\r\n- Reddit **did not implement a robust age‑verification mechanism**, relying until July 2025 on a simple **self‑declaration by users**, which was easily circumvented.\r\n- The company **did not carry out a Data Protection Impact Assessment (DPIA)** before January 2025 to assess and mitigate risks related to the processing of children’s data, as required by UK law and the GDPR.\r\n- The ICO concluded that **many children under 13 were present on the platform** without a lawful basis for processing their data, which could have **exposed them to inappropriate or potentially harmful content**.\r\n\r\nThis sanction is part of the ICO’s strengthened enforcement of the *Children’s Code* (Age Appropriate Design Code) and UK data protection legislation, notably by imposing enhanced obligations on platforms likely to be used by minors.\r\n\r\n## Croatia: real estate agency **fined €100,000** for GDPR breaches\r\n\r\nOn 19 February 2026, the **Croatian Personal Data Protection Agency (Agencija za zaštitu osobnih podataka – AZOP)** imposed an administrative fine of **€100,000** on a real estate agency acting as controller for multiple GDPR violations.\r\n\r\n### Grounds for the sanction\r\n\r\nAZOP found that the agency 
breached several fundamental GDPR obligations, notably:\r\n\r\n- **Violation of the storage limitation principle**: the agency retained personal data of 11,887 clients well beyond the time necessary for the processing purpose.\r\n- **Lack of a legal basis for certain data**: copies of sensitive documents (for example 898 ID cards, 6 passports, 3 copies of bank cards, one health insurance card, one minor’s ID card, etc.) were stored in the archives without a clear legal justification.\r\n- **Insufficient technical and organizational measures**: the agency had not ensured adequate supervision and training of staff accessing the data, contrary to the requirements of Article 32 GDPR.\r\n\r\nDuring the inspection, AZOP noted that many brokerage contracts for property purchase or rental, concluded between **2010 and 2019**, **were still archived with attached data**, even though these documents were no longer necessary for the original processing purpose.\r\n\r\n### Implications of this decision\r\n\r\nThis ruling highlights several key points:\r\n\r\n- The **data minimisation and storage limitation principle** requires that data be kept *only for as long as necessary* for the original purpose (Art. 5 GDPR).\r\n- The **lawfulness of processing** requires a clear legal basis for each type of data stored, particularly for sensitive documents such as copies of identity documents or bank cards (Art. 5 GDPR).\r\n- **Organisational and technical security measures** (staff training, access monitoring, clear processing rules) must be proportionate to the risk (Art. 32 GDPR).\r\n\r\n## Poland: DPD Polska fined **11 million PLN (\\~€2.5 million)** for subcontracting and security failures\r\n\r\nThe **Polish Office for Personal Data Protection (UODO – Urząd Ochrony Danych Osobowych)** imposed **fines totalling 11,000,000 PLN** on **DPD Polska Sp. 
z o.o.** following several serious breaches concerning the processing of customers’ personal data.\r\n\r\nUODO found that DPD Polska:\r\n\r\n1. **Had not concluded data processing agreements (DPAs) with its external subprocessors** (notably carriers with access to shipping labels containing personal data), in breach of Article 28(3) GDPR (contractual obligations with processors).\r\n2. **Did not ensure that employees processed personal data only pursuant to appropriate authorisations** (Art. 32(4) GDPR). The company’s internal system generating pseudo‑authorisations lacked essential elements such as name and signature, undermining traceability and the lawfulness of processing.\r\n\r\nThe 11 million PLN (\~€2.5 million) sanction reflects the seriousness of the breaches: the company had not properly secured the **contractual relationship with its subprocessors**, nor regulated **internal processing by employees**, exposing customers’ data to uncontrolled risks.\r\n\r\n## European Union: CJEU annuls the General Court’s order in WhatsApp Ireland v. EDPB\r\n\r\nOn 10 February 2026, the [Court of Justice of the European Union (CJEU)](https://eur-lex.europa.eu/legal-content/FR/TXT/PDF/?uri=CELEX:62023CJ0097) annulled an order of the General Court of the European Union that had declared inadmissible the action brought by WhatsApp Ireland Ltd challenging a binding decision of the European Data Protection Board adopted in the course of proceedings initiated by the Irish Data Protection Authority.\r\n\r\n### The Irish authority’s decision\r\n\r\nFollowing the entry into force of the GDPR, the Irish Data Protection Commission (DPC) received several complaints concerning the transparency of WhatsApp’s processing, in particular regarding possible data sharing with other entities of the Facebook group (now Meta).\r\n\r\nIn its final decision, the DPC found that WhatsApp had breached:\r\n\r\n- the transparency principle (Article 5(1)(a) GDPR);\r\n- the information obligations set out in Articles 12 to 14 GDPR.\r\n\r\nPursuant to Article 58(2) GDPR, the authority imposed **four administrative fines**, totalling **€225 million**.\r\n\r\n### The General Court’s inadmissibility ruling\r\n\r\nSeveral European supervisory authorities raised objections to the Irish draft decision, and the European Data Protection Board (EDPB) was consulted.\r\n\r\nThe EDPB adopted a **binding decision** under Article 65 GDPR, requiring the DPC to incorporate certain analyses and to revise specific elements, notably concerning the qualification of infringements and sanctions. 
**The final Irish decision was therefore adopted taking the EDPB’s position into account.**\r\n\r\nClaiming that this binding decision directly affected its legal situation, WhatsApp brought an action for annulment before the General Court, challenging the legality of the EDPB’s decision.\r\n\r\nThe General Court had, however, declared the action **inadmissible**, finding that WhatsApp was not **directly concerned** by the contested EDPB decision, which was formally addressed to the Irish authority.\r\n\r\n### The CJEU’s annulment\r\n\r\nThe Court found, in essence, that the General Court’s analysis of the lack of direct concern was incorrect: **the EDPB decision produced binding legal effects capable of directly affecting WhatsApp’s legal situation**, notably as regards the content of the final decision and the amount of the fines.\r\n\r\nBy annulling the General Court’s order of inadmissibility, the Court of Justice of the European Union **clears the way for a substantive examination of the action brought by WhatsApp Ireland Ltd**.","\u003Cp>Tired of general newsletters that skim over your real concerns? 
\u003Cstrong>Dastra Insights\u003C/strong> offers legal and regulatory monitoring \u003Cstrong>specifically designed for DPOs, lawyers, and privacy professionals\u003C/strong>.\u003C/p>\r\n\u003Cp>Each month, we go beyond a simple recap: we select about ten decisions, news items, or positions \u003Cstrong>that have a concrete impact on your missions and organizations\u003C/strong>.\u003C/p>\r\n\u003Cp>🎯 \u003Cstrong>Targeted, useful monitoring grounded in the day-to-day realities of data protection and AI.\u003C/strong>\u003C/p>\r\n\u003Cp>Here is our selection for \u003Cstrong>February 2026:\u003C/strong>\u003C/p>\r\n\u003Ch2 id=\"risks-related-to-aigenerated-images-the-data-protection-authority-signs-a-joint-statement\">Risks related to AI‑generated images: the Belgian Data Protection Authority signs a joint statement\u003C/h2>\r\n\u003Cp>On 23 February 2026, 61 data protection authorities worldwide, including the \u003Ca href=\"https://www.autoriteprotectiondonnees.be/citoyen/actualites/2026/02/23/risques-lies-aux-images-generees-par-l-ia-l-apd-signe-une-declaration-commune\" rel=\"nofollow\">Belgian Data Protection Authority (Autorité de protection des données, APD)\u003C/a>, signed and published a \u003Ca href=\"https://www.autoriteprotectiondonnees.be/publications/joint-statement-on-ai-generated-imagery-and-the-protection-of-privacy.pdf\" rel=\"nofollow\">joint statement\u003C/a> warning of the \u003Cstrong>privacy and fundamental rights risks posed by images and videos generated by artificial intelligence\u003C/strong>.\u003C/p>\r\n\u003Cp>This initiative was carried out under the auspices of the \u003Cstrong>Global Privacy Assembly (GPA)\u003C/strong> and coordinated by the \u003Cstrong>International Enforcement Cooperation Working Group (IEWG)\u003C/strong>.\u003C/p>\r\n\u003Ch3 id=\"what-are-the-main-concerns\">What are the main concerns?\u003C/h3>\r\n\u003Cp>The signatory authorities express serious concerns about AI systems capable of producing 
\u003Cstrong>ultra‑realistic images and videos depicting identifiable persons\u003C/strong>, often without their \u003Cstrong>knowledge or consent\u003C/strong>, notably:\u003C/p>\r\n\u003Cul>\r\n\u003Cli>content that may infringe on the \u003Cstrong>privacy\u003C/strong> or \u003Cstrong>dignity\u003C/strong> of the persons depicted;\u003C/li>\r\n\u003Cli>the creation of \u003Cstrong>intimate or defamatory deepfakes\u003C/strong> that could harm reputations or be exploited for malicious purposes;\u003C/li>\r\n\u003Cli>\u003Cstrong>harms specific to children\u003C/strong> and other vulnerable groups.\u003C/li>\r\n\u003C/ul>\r\n\u003Ch3 id=\"principles-and-recommendations\">Principles and recommendations\u003C/h3>\r\n\u003Cp>The statement does more than sound the alarm: it sets out \u003Cstrong>fundamental principles\u003C/strong> that organizations developing or using image‑generation systems should respect:\u003C/p>\r\n\u003Cul>\r\n\u003Cli>\u003Cstrong>Implement robust measures\u003C/strong> to prevent abusive use or non‑consensual dissemination of personal data;\u003C/li>\r\n\u003Cli>\u003Cstrong>Ensure meaningful transparency\u003C/strong> about systems’ capabilities and limitations, as well as permitted uses;\u003C/li>\r\n\u003Cli>\u003Cstrong>Provide effective mechanisms\u003C/strong> allowing data subjects to request prompt removal of harmful content involving their data;\u003C/li>\r\n\u003Cli>\u003Cstrong>Mitigate risks specific to children\u003C/strong>, including enhanced protections and information tailored to young people, their parents and educators.\u003C/li>\r\n\u003C/ul>\r\n\u003Ch3 id=\"why-this-matters\">Why this matters\u003C/h3>\r\n\u003Cp>The APD and its counterparts recall that generative AI technologies offer significant opportunities, \u003Cstrong>but can also infringe fundamental rights\u003C/strong> (such as the right to privacy or human dignity) if deployed without appropriate safeguards. 
They therefore call on developers, vendors, platforms and users to \u003Cstrong>cooperate with authorities\u003C/strong> \u003Cstrong>to ensure that technological innovation does not come at the expense of people’s freedoms and rights\u003C/strong>.\u003C/p>\r\n\u003Ch2 id=\"documentation-cnil-publishes-the-2026-update-of-the-tables-informatique-et-libertes\">Documentation: CNIL publishes the 2026 update of the Tables Informatique et Libertés\u003C/h2>\r\n\u003Cp>\u003Ca href=\"https://www.cnil.fr/fr/tables-informatique-et-libertes-2026\" rel=\"nofollow\">CNIL\u003C/a> has published the \u003Ca href=\"https://www.cnil.fr/sites/default/files/2026-03/tables_il.pdf\" rel=\"nofollow\">\u003Cstrong>2026 edition of its \u003Cem>Tables Informatique et Libertés\u003C/em>\u003C/strong>\u003C/a>, a \u003Cstrong>key doctrinal resource\u003C/strong> compiling and structuring the essentials of \u003Cstrong>case law and decisional practice on the protection of personal data\u003C/strong>, at both national and European levels.\u003C/p>\r\n\u003Cp>The \u003Cem>Tables Informatique et Libertés\u003C/em> are an \u003Cstrong>organized corpus of thematic summaries\u003C/strong> of major decisions from:\u003C/p>\r\n\u003Cul>\r\n\u003Cli>French courts (e.g. 
the Conseil d’État or the Cour de cassation);\u003C/li>\r\n\u003Cli>European courts (notably the Court of Justice of the European Union);\u003C/li>\r\n\u003Cli>CNIL itself (decisions, corrective measures, doctrinal positions);\u003C/li>\r\n\u003Cli>the European Data Protection Board (EDPB).\u003C/li>\r\n\u003C/ul>\r\n\u003Cp>Presented according to a \u003Cstrong>detailed thematic classification\u003C/strong> (principles, legal bases, data subject rights, data security, international transfers, sanctions, etc.), they provide a \u003Cstrong>coherent overall view of the doctrine applicable to the GDPR and the French Data Protection Act (\u003Cem>Informatique et Libertés\u003C/em>)\u003C/strong>.\u003C/p>\r\n\u003Cp>CNIL emphasizes that these Tables have a dual purpose:\u003C/p>\r\n\u003Cul>\r\n\u003Cli>\u003Cstrong>Internal\u003C/strong>: to ensure a consistent understanding and application of doctrine among CNIL staff, given the ever-growing number of legal questions arising from the application of the GDPR.\u003C/li>\r\n\u003Cli>\u003Cstrong>External\u003C/strong>: to make more accessible to \u003Cstrong>professionals, academics and legal practitioners\u003C/strong> the doctrinal positions and legal points that are not always published in individual decisions.\u003C/li>\r\n\u003C/ul>\r\n\u003Cp>This document will be updated regularly to reflect ongoing developments in data protection practice, notably with regard to new technologies and case law.\u003C/p>\r\n\u003Ch2 id=\"call-for-testers-of-a-gdpr-auditing-tool-for-ai-models\">Call for testers of a \u003Cstrong>GDPR auditing tool for AI models\u003C/strong>\u003C/h2>\r\n\u003Cp>The \u003Cstrong>French National Cybersecurity Agency (ANSSI)\u003C/strong>, in partnership with \u003Cstrong>CNIL\u003C/strong>, PEReN and the IPoP project of the Cybersecurity PEPR led by Inria, has launched a \u003Ca href=\"https://cyber.gouv.fr/actualites/ami-outil-audit-rgpd-ia/\" rel=\"nofollow\">\u003Cstrong>call for expressions of interest 
(AMI)\u003C/strong>\u003C/a> to select stakeholders willing to test a new \u003Cstrong>privacy auditing tool for artificial intelligence models\u003C/strong>. This initiative is part of the \u003Cstrong>PANAME\u003C/strong> project (\u003Cem>Privacy Auditing of AI Models\u003C/em>).\u003C/p>\r\n\u003Cp>The main objective of the project is to develop a \u003Cstrong>software library to technically audit the privacy of AI models\u003C/strong>, in particular against \u003Cstrong>information‑extraction tests\u003C/strong> that can reveal personal data from training datasets. This aims to help organizations \u003Cstrong>assess the GDPR compliance of their models\u003C/strong>, especially when they use or are trained on sensitive data.\u003C/p>\r\n\u003Cp>This call for expressions of interest is open from \u003Cstrong>26 February to 28 March 2026\u003C/strong> to all public or private entities established in the European Union: \u003Cstrong>companies, startups, research laboratories, public administrations or any other organization using or developing AI models\u003C/strong>. 
Selected applicants will participate in a practical testing phase to validate and enrich the tool’s functionalities.\u003C/p>\r\n\u003Ch2 id=\"united-kingdom-information-commissioners-office-ico-fines-reddit-14.47m-for-failures-in-protecting-childrens-data\">United Kingdom: Information Commissioner’s Office (ICO) fines Reddit £14.47m for failures in protecting children’s data\u003C/h2>\r\n\u003Cp>On 24 February 2026, the \u003Ca href=\"https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2026/02/reddit-issued-with-1447m-fine-for-children-s-privacy-failures/\" rel=\"nofollow\">\u003Cstrong>Information Commissioner’s Office (ICO)\u003C/strong>\u003C/a>, the UK’s independent data protection authority, announced a sanction of \u003Cstrong>£14.47 million\u003C/strong> (approximately \u003Cstrong>€17 million\u003C/strong>) against \u003Cstrong>Reddit, Inc.\u003C/strong>, for unlawfully using \u003Cstrong>personal data of children under 13\u003C/strong> and failing to implement \u003Cstrong>adequate age‑verification mechanisms\u003C/strong>.\u003C/p>\r\n\u003Cp>According to the ICO:\u003C/p>\r\n\u003Cul>\r\n\u003Cli>Reddit \u003Cstrong>did not implement a robust age‑verification mechanism\u003C/strong>, relying until July 2025 on a simple \u003Cstrong>self‑declaration by users\u003C/strong>, which was easily circumvented.\u003C/li>\r\n\u003Cli>The company \u003Cstrong>did not carry out a Data Protection Impact Assessment (DPIA)\u003C/strong> before January 2025 to assess and mitigate risks related to the processing of children’s data, as required by UK law and the GDPR.\u003C/li>\r\n\u003Cli>The ICO concluded that \u003Cstrong>many children under 13 were present on the platform\u003C/strong> without a lawful basis for processing their data, which could have \u003Cstrong>exposed them to inappropriate or potentially harmful content\u003C/strong>.\u003C/li>\r\n\u003C/ul>\r\n\u003Cp>This sanction is part of the ICO’s strengthened enforcement of the \u003Cem>Children’s 
Code\u003C/em> (Age Appropriate Design Code) and UK data protection legislation, notably by imposing enhanced obligations on platforms likely to be used by minors.\u003C/p>\r\n\u003Ch2 id=\"croatia-real-estate-agency-fined-100000-for-gdpr-breaches\">Croatia: real estate agency \u003Cstrong>fined €100,000\u003C/strong> for GDPR breaches\u003C/h2>\r\n\u003Cp>On 19 February 2026, the \u003Cstrong>Croatian Personal Data Protection Agency (Agencija za zaštitu osobnih podataka – AZOP)\u003C/strong> imposed an administrative fine of \u003Cstrong>€100,000\u003C/strong> on a real estate agency acting as controller for multiple GDPR violations.\u003C/p>\r\n\u003Ch3 id=\"grounds-for-the-sanction\">Grounds for the sanction\u003C/h3>\r\n\u003Cp>AZOP found that the agency breached several fundamental GDPR obligations, notably:\u003C/p>\r\n\u003Cul>\r\n\u003Cli>\u003Cstrong>Violation of the storage limitation principle\u003C/strong>: the agency retained personal data of 11,887 clients well beyond the time necessary for the processing purpose.\u003C/li>\r\n\u003Cli>\u003Cstrong>Lack of a legal basis for certain data\u003C/strong>: copies of sensitive documents (for example 898 ID cards, 6 passports, 3 copies of bank cards, one health insurance card, one minor’s ID card, etc.) 
were stored in the archives without a clear legal justification.\u003C/li>\r\n\u003Cli>\u003Cstrong>Insufficient technical and organizational measures\u003C/strong>: the agency had not ensured adequate supervision and training of staff accessing the data, contrary to the requirements of Article 32 GDPR.\u003C/li>\r\n\u003C/ul>\r\n\u003Cp>During the inspection, AZOP noted that many brokerage contracts for property purchase or rental, concluded between \u003Cstrong>2010 and 2019\u003C/strong>, \u003Cstrong>were still archived with attached data\u003C/strong>, even though these documents were no longer necessary for the original processing purpose.\u003C/p>\r\n\u003Ch3 id=\"implications-of-this-decision\">Implications of this decision\u003C/h3>\r\n\u003Cp>This ruling highlights several key points:\u003C/p>\r\n\u003Cul>\r\n\u003Cli>The \u003Cstrong>data minimisation and storage limitation principle\u003C/strong> requires that data be kept \u003Cem>only for as long as necessary\u003C/em> for the original purpose (Art. 5 GDPR).\u003C/li>\r\n\u003Cli>The \u003Cstrong>lawfulness of processing\u003C/strong> requires a clear legal basis for each type of data stored, particularly for sensitive documents such as copies of identity documents or bank cards (Art. 5 GDPR).\u003C/li>\r\n\u003Cli>\u003Cstrong>Organisational and technical security measures\u003C/strong> (staff training, access monitoring, clear processing rules) must be proportionate to the risk (Art. 32 GDPR).\u003C/li>\r\n\u003C/ul>\r\n\u003Ch2 id=\"poland-dpd-polska-fined-11-million-pln-2.5-million-for-subcontracting-and-security-failures\">Poland: DPD Polska fined \u003Cstrong>11 million PLN (~€2.5 million)\u003C/strong> for subcontracting and security failures\u003C/h2>\r\n\u003Cp>The \u003Cstrong>Polish Office for Personal Data Protection (UODO – Urząd Ochrony Danych Osobowych)\u003C/strong> imposed \u003Cstrong>fines totalling 11,000,000 PLN\u003C/strong> on \u003Cstrong>DPD Polska Sp. 
z o.o.\u003C/strong> following several serious breaches concerning the processing of customers’ personal data.\u003C/p>\r\n\u003Cp>UODO found that DPD Polska:\u003C/p>\r\n\u003Col>\r\n\u003Cli>\u003Cstrong>Had not concluded data processing agreements (DPAs) with its external subprocessors\u003C/strong> (notably carriers with access to shipping labels containing personal data), in breach of Article 28(3) GDPR (contractual obligations with processors).\u003C/li>\r\n\u003Cli>\u003Cstrong>Did not ensure that employees processed personal data only pursuant to appropriate authorisations\u003C/strong> (Art. 32(4) GDPR). The company’s internal system generating pseudo‑authorisations lacked essential elements such as name and signature, undermining traceability and the lawfulness of processing.\u003C/li>\r\n\u003C/ol>\r\n\u003Cp>The 11 million PLN (~€2.5 million) sanction reflects the seriousness of the breaches: the company had not properly secured the \u003Cstrong>contractual relationship with its subprocessors\u003C/strong>, nor regulated \u003Cstrong>internal processing by employees\u003C/strong>, exposing customers’ data to uncontrolled risks.\u003C/p>\r\n\u003Ch2 id=\"european-union-cjeu-annuls-the-general-courts-order-in-ireland-v.whatsapp\">European Union: CJEU annuls the General Court’s order in WhatsApp Ireland v. EDPB\u003C/h2>\r\n\u003Cp>On 10 February 2026, the \u003Ca href=\"https://eur-lex.europa.eu/legal-content/FR/TXT/PDF/?uri=CELEX:62023CJ0097\" rel=\"nofollow\">Court of Justice of the European Union (CJEU)\u003C/a> annulled an order of the General Court of the European Union that had declared inadmissible the action brought by WhatsApp Ireland Ltd challenging a binding decision of the European Data Protection Board adopted in the course of proceedings initiated by the Irish Data Protection Authority.\u003C/p>\r\n\u003Ch3 id=\"the-irish-authoritys-decision\">The Irish authority’s decision\u003C/h3>\r\n\u003Cp>Following the entry into force of the GDPR, the Irish Data Protection Commission (DPC) received several complaints concerning the transparency of WhatsApp’s processing, in particular regarding possible data sharing with other entities of the Facebook group (now Meta).\u003C/p>\r\n\u003Cp>In its final decision, the DPC found that WhatsApp had breached:\u003C/p>\r\n\u003Cul>\r\n\u003Cli>the transparency principle (Article 5(1)(a) GDPR);\u003C/li>\r\n\u003Cli>the information obligations set out in Articles 12 to 14 GDPR.\u003C/li>\r\n\u003C/ul>\r\n\u003Cp>Pursuant to Article 58(2) GDPR, the authority imposed \u003Cstrong>four administrative fines\u003C/strong>, totalling \u003Cstrong>€225 million\u003C/strong>.\u003C/p>\r\n\u003Ch3 id=\"the-general-courts-inadmissibility-ruling\">The General Court’s inadmissibility ruling\u003C/h3>\r\n\u003Cp>Several European supervisory authorities raised objections to the Irish draft decision, and the European Data Protection Board (EDPB) was consulted.\u003C/p>\r\n\u003Cp>The EDPB adopted a \u003Cstrong>binding decision\u003C/strong> under Article 65 GDPR, requiring the DPC to incorporate certain analyses and to revise specific elements, notably concerning the qualification of infringements and sanctions. 
\u003Cstrong>The final Irish decision was therefore adopted taking the EDPB’s position into account.\u003C/strong>\u003C/p>\r\n\u003Cp>Claiming that this binding decision directly affected its legal situation, WhatsApp brought an action for annulment before the General Court, challenging the legality of the EDPB’s decision.\u003C/p>\r\n\u003Cp>The General Court had, however, declared the action \u003Cstrong>inadmissible\u003C/strong>, finding that WhatsApp was not \u003Cstrong>directly concerned\u003C/strong> by the contested EDPB decision, which was formally addressed to the Irish authority.\u003C/p>\r\n\u003Ch3 id=\"the-cjeus-annulment\">The CJEU’s annulment\u003C/h3>\r\n\u003Cp>The Court found, in essence, that the General Court’s analysis of the lack of direct concern was incorrect: \u003Cstrong>the EDPB decision produced binding legal effects capable of directly affecting WhatsApp’s legal situation\u003C/strong>, notably as regards the content of the final decision and the amount of the fines.\u003C/p>\r\n\u003Cp>By annulling the General Court’s order of inadmissibility, the Court of Justice of the European Union \u003Cstrong>clears the way for a substantive examination of the action brought by WhatsApp Ireland Ltd\u003C/strong>.\u003C/p>\r\n","Dastra Insights: what happened in February?","Privacy & AI insights from the Dastra hub: actionable updates for pros who work daily in the field.",1882,10,0,null,"en","dastra-insights-what-happened-in-february","Published",{"id":17,"displayName":18,"avatarUrl":19,"bio":12,"blogUrl":12,"color":12,"userId":17,"creationDate":20},2986,"Maëva Vidal","https://static.dastra.eu/tenant-3/avatar/2986/maeva-min-min-min-150.png","2022-09-05T13:22:36","2026-03-03T13:10:00","2026-03-03T13:10:34.7971238","2026-03-03T13:20:54.8592595",{"id":25,"name":26,"description":27,"url":28,"color":29,"parentId":12,"count":12,"imageUrl":12,"parent":12,"order":11,"translations":30},2,"Blog","A list of curated articles provided by the 
community","blog","#28449a",[31,34,37],{"lang":32,"name":26,"description":33},"fr","Une liste d'articles rédigés par la communauté",{"lang":35,"name":26,"description":36},"es","Una lista de artículos escritos por la comunidad",{"lang":38,"name":26,"description":39},"de","Eine Liste von Artikeln, die von der Community verfasst wurden",[41],{"id":25,"name":26,"description":27,"url":28,"color":29,"parentId":12,"count":12,"imageUrl":12,"parent":12,"order":11,"translations":42},[43,44,45],{"lang":32,"name":26,"description":33},{"lang":35,"name":26,"description":36},{"lang":38,"name":26,"description":39},[],"https://static.dastra.eu/content/fd59c3f6-db86-4a70-b579-ad1d95fb2fc4/dastra-insights-original.png",[49,50,51,52,53,54,55],"https://static.dastra.eu/content/fd59c3f6-db86-4a70-b579-ad1d95fb2fc4/dastra-insights-1000.webp","https://static.dastra.eu/content/fd59c3f6-db86-4a70-b579-ad1d95fb2fc4/dastra-insights.webp","https://static.dastra.eu/content/fd59c3f6-db86-4a70-b579-ad1d95fb2fc4/dastra-insights-1500.webp","https://static.dastra.eu/content/fd59c3f6-db86-4a70-b579-ad1d95fb2fc4/dastra-insights-800.webp","https://static.dastra.eu/content/fd59c3f6-db86-4a70-b579-ad1d95fb2fc4/dastra-insights-600.webp","https://static.dastra.eu/content/fd59c3f6-db86-4a70-b579-ad1d95fb2fc4/dastra-insights-300.webp","https://static.dastra.eu/content/fd59c3f6-db86-4a70-b579-ad1d95fb2fc4/dastra-insights-100.webp",59902]