
DastraNews: what happened in Privacy & AI in October?
Leïla Sayssa
29 October 2025 · 11 minutes read time

Tired of general newsletters that skim over your real concerns? DastraNews offers legal and regulatory monitoring designed specifically for DPOs, lawyers, and privacy professionals.

Each month, we go beyond a simple recap: we select around ten decisions, news items, or positions that have a concrete impact on your missions and organizations.

🎯 Targeted, useful monitoring grounded in the real-world realities of data protection and AI.

Here is our selection for October 2025:

EDPB’s 2026 coordinated enforcement topic: transparency obligations

On 14 October 2025, the European Data Protection Board (EDPB) announced that its next Coordinated Enforcement Framework (the fifth) will focus on transparency and information obligations under Articles 12, 13, and 14 of the GDPR.

In each coordinated action, the EDPB selects a common priority topic for national Data Protection Authorities (DPAs) to investigate.

In 2026, supervisory authorities across Europe will jointly assess how controllers and processors comply with their duty to inform individuals when their data is processed.

Why it matters:

  • Transparency is a core principle of the GDPR: without clear information, individuals cannot effectively exercise their rights.

  • The outcomes of these national investigations are then consolidated and analysed to provide a deeper, EU-wide understanding of the issue, enabling targeted follow-up and enforcement at both national and European levels.

  • The initiative is expected to start in 2026, giving organizations some time (but not much) to prepare.

🔗 For more information, click here.

EDPB adopts opinions recommending UK adequacy extension

During its latest plenary, the European Data Protection Board (EDPB) adopted two opinions on the European Commission’s draft decisions to extend the validity of the UK adequacy decisions, under both the General Data Protection Regulation (GDPR) and the Law Enforcement Directive (LED), until December 2031.

This would allow EU organisations and authorities to continue transferring personal data to the UK without additional safeguards.

Overall, the EDPB notes that most UK legal updates aim to clarify or facilitate compliance, but it flags areas requiring closer monitoring by the European Commission:

  • Onward transfers: The UK’s new adequacy test (Data Use and Access Act 2025) lacks references to crucial safeguards concerning government access, individual redress, and independent supervision.
  • Encryption concerns: Technical Capability Notices (TCNs) allowing circumvention of encryption could create systemic vulnerabilities.
  • ICO restructuring: The new Information Commission model should be monitored for independence and enforcement capacity, though its transparency policy is welcomed.

🎯The UK remains an adequate destination for EU data transfers until 2031, but only under strict, ongoing EU monitoring. Good news for organizations transferring personal data between the EU and the UK, but vigilance remains key.

🔗 For more information, click here.

Public consultation on the draft joint guidelines covering the interplay between the Digital Markets Act (DMA) & the GDPR

On 9 October 2025, the European Commission and the European Data Protection Board (EDPB) launched a public consultation inviting comments on draft joint guidelines clarifying how the DMA and GDPR interact.

The DMA targets large digital platforms (gatekeepers) and imposes obligations that often trigger GDPR processing. These guidelines aim to align both regulatory regimes.

These guidelines are designed to help “gatekeepers” under the DMA understand and meet their GDPR-compliance obligations, especially where the DMA mandates data processing operations, such as combining user data, portability, or distribution of third-party apps.

The consultation closes on 4 December 2025, with the final guidelines expected to be adopted in 2026.

🔗 Read the draft guidelines here.

🔗 For more information, click here.

Experian (US credit and data trader) fined €2.7 million for GDPR breaches

The Dutch data protection authority (AP) has imposed a fine of €2.7 million on Experian Nederland B.V., a US credit and data trader, for violations of the GDPR.

Key findings:

  • Experian collected and processed extensive personal and sensitive data (including from energy, telecoms and public registers) to build credit reports, often without sufficient transparency to the individuals concerned. The data included negative payment behavior, outstanding debts, and bankruptcy information used to generate credit assessments supplied to service providers and sellers.

  • Complaints from consumers who faced higher deposits or were denied services prompted the AP investigation.

  • The company failed to properly inform data subjects of its processing operations (violations of Articles 12 & 14) and relied on the legal basis of “legitimate interests” without demonstrating the necessity of the processing or that its interests outweighed the rights of the individuals concerned.

  • Experian has ceased its consumer credit rating operations in the Netherlands as of January 2025 and committed to deleting the related database by the end of 2025.

🔗 For more information, click here.

The EU AI Act enters its operational phase: European Commission launches official information platform

On 8 October 2025, the European Commission launched the AI Act Single Information Platform, marking the start of the EU AI Act’s operational phase. This one-stop platform is designed to help both public and private organisations understand and implement the regulation effectively.

🔍 Why it matters

Since its partial entry into force on 1 August 2024, the AI Act has been reshaping how AI is developed, deployed, and governed across the EU. Yet many organisations still struggle with one key question: where to start?

The new platform aims to provide clarity and practical tools:

  • AI Act Explorer: an interactive interface to browse the regulation and annexes;

  • Compliance Checker: a self-assessment tool to identify applicable obligations;

  • National Resources Hub: to access local initiatives and competent authorities;

  • Service Desk: direct expert support from the European AI Office.

It also centralises the official FAQs and guidance managed by the AI Act Service Desk, offering the first unified reference to distinguish between immediate and future obligations and to encourage experimentation in compliant environments.

🔗 Access the platform: AI Act Single Information Platform

🔗 Access the FAQ: List of FAQs

Italy’s new AI law: first of its kind in the EU

On 23 September 2025, Italy’s Parliament approved Law No. 132/2025 (initially Bill 1146-B) regulating artificial intelligence, which entered into force on 10 October 2025.

Italy thereby becomes the first EU Member State to adopt a comprehensive national AI law. A central aspect is its explicit alignment with the EU AI Act. The government is expected to adopt decrees aimed at harmonising national law with the EU regulation within twelve months.

🔍 What it introduces

  • Reinforces a human-centred approach: AI systems must respect fundamental rights, transparency, security, data protection, non-discrimination, gender equality and sustainability.

  • The law introduces criminal sanctions: anyone who disseminates AI-generated or manipulated content (e.g., deepfakes) that causes unjust harm can face 1 to 5 years in prison.

  • Incorporates data protection provisions: for example, children under 14 require parental consent for AI system use; minors 14+ may give their own consent (under conditions).

  • The legislation also strengthens rules on copyright and data-training practices: only works generated with “genuine human intellectual effort” are eligible for protection; mass data scraping or text & data mining (TDM) is limited to non-copyrighted content or authorised scientific uses.

  • New governance & supervisory authorities: Agency for Digital Italy (AgID) and National Cybersecurity Authority (ACN) are designated key authorities under the EU AI Act.

🔗 Read the law here.

OECD policy paper: Mapping relevant data collection mechanisms for AI training

Published 3 October 2025, the OECD’s latest policy paper examines the various mechanisms used to collect data for training AI systems, and proposes a taxonomy to support policy discussions on privacy, data governance and responsible AI development.

“When developing AI systems, practitioners often focus on model building, while sometimes underestimating the importance of analysing the diverse data collection mechanisms. However, the diversity of mechanisms used for data collection deserves closer attention.”

🔍 Key take-aways

  • AI model quality depends not just on model architecture but on the origin, diversity and governance of training data.

  • Data-collection mechanisms are categorised into two broad sources:

    1. Direct collection from individuals and organisations: e.g., data provided by users, observed during interaction with digital services, or voluntary data donations.

    2. Collection from third-parties: e.g., commercial data licensing, open-data initiatives, and large-scale web scraping.

  • Each mechanism has distinct implications for privacy, IP rights, transparency, traceability of datasets and the ability of individuals to exercise rights.

  • The paper highlights emerging roles for privacy-enhancing technologies (PETs) and synthetic data as tools to mitigate data governance and privacy risks.

🔗 Read the paper here.

California regulates “AI Chatbots” after a series of teen suicides

On 13 October 2025, California became the first U.S. state to adopt legislation regulating AI chatbots, following several tragic cases of teen suicides involving emotional attachments formed with these programs. With this move, Governor Gavin Newsom is directly challenging the White House, which has so far resisted imposing national AI regulations.

The new law requires:

  • Age verification for chatbot users;

  • Regular warning messages reminding users they are interacting with a machine (every three hours for minors);

  • Suicide-prevention protocols integrated into conversational AI systems.

One of the key texts, Bill SB243, specifically targets chatbots designed to act as companions or confidants. It follows lawsuits filed in 2024 against the platform Character.AI, after the suicide of a 14-year-old who had developed a virtual relationship with a chatbot allegedly reinforcing his suicidal thoughts.

The CNIL explains how to oppose the reuse of your personal data for training conversational AI agents

The CNIL has published guidance showing how individuals can object to the reuse of their personal data in the training of AI chatbots and conversational agents.

Key points:

  • The guidance covers major platforms (e.g., Meta AI, Google Gemini) and explains how users can adjust account settings or submit a formal opposition request.

  • Disabling “activity” settings or submitting a right-to-object form may lead to loss of conversation history or other side effects; the CNIL emphasises that users should be aware of these trade-offs.

  • The CNIL explicitly notes that it is not yet taking a position on whether the relevant processing fully complies with the General Data Protection Regulation (GDPR); rather, it provides practical steps for users.

🔗 For more information, click here.

Reddit files second lawsuit over large-scale scraping of its content

On 22 October 2025, Reddit filed a lawsuit before the U.S. Federal Court in New York against Perplexity and three data-scraping companies (Oxylabs, AWMProxy, and SerpApi). The case concerns the automated extraction (“scraping”) of massive volumes of Reddit data using specialized software.

According to the complaint, these companies bypassed access restrictions and technical safeguards to harvest Reddit content, including through Google search result pages, in order to train artificial intelligence models.

Reddit alleges:

  • Violation of its Terms of Service,

  • Infringement of copyright on user-generated content,

  • Unlawful circumvention of technical protection measures.

This marks Reddit’s second lawsuit of this kind, signaling a broader legal strategy to assert the commercial value of its data against AI companies relying on large-scale web scraping.

🔗 For more information, click here.
