Tired of general newsletters that skim over your real concerns? DastraNews offers legal and regulatory monitoring designed specifically for DPOs, lawyers, and privacy professionals.
Each month, we go beyond a simple recap: we select about ten decisions, news items, or positions that have a concrete impact on your missions and organizations.
🎯 Targeted, useful monitoring grounded in the day-to-day realities of data protection and AI.
Here is our selection for June 2025:
Back to basics: the CNIL revisits the qualification of actors' roles in the processing chain
Before launching any personal data processing operation, it is essential to clarify the roles of the parties involved. In light of the EDPB's 2020 guidelines, the CNIL recalls the contours of the concepts of data controller, processor, and joint controllers, illustrated with concrete examples.
Thus, developers using the same federated learning protocol to train an AI system from data for which they are each initially responsible can become joint controllers, insofar as they jointly determine the purpose (training the AI system) and the means (which data to use, which protocol to choose).
The CNIL emphasizes that, beyond contracts, it is essential to define precisely who does what, state each party's level of responsibility (which may vary), and document the reasoning behind the chosen qualification.
Finally, although the contractual qualification sets obligations (for example, maintaining the processing register), it does not bind the CNIL, which can always requalify roles based on factual elements.
📌 Good to know: the CNIL also offers sectoral examples to help stakeholders better understand their responsibilities.
👩🚀 Dastra's extra tip: https://www.dastra.eu/fr/article/comment-verifier-le-respect-du-rgpd-par-ses-sous-traitants/58946
The CNIL publishes its recommendations on legitimate interest in the development of AI systems
Following the opinion adopted by the EDPB in December 2024, the CNIL reminds us in this new note that legitimate interest is a possible legal basis for the development of AI systems, provided the data controller documents and examines the compliance of the processing with the following three conditions, applied to the case of AI systems:
- Lawful, specific, and existing interest: for example, the development of an AI system to facilitate public access to certain information is generally considered a legitimate interest. However, the interest ceases to be legitimate if it cannot be legally deployed (e.g., prohibited by the AI Act) or if there is no link between the AI system and the organization's activity/mission.
- Necessary for the purpose of the processing (i.e., no less intrusive alternative exists), in keeping with the principle of data minimization.
- Does not override the fundamental rights and freedoms of the data subject: the more significant the expected benefits of the processing (for the data controller, but also for end users or the public interest), the more likely it is that the controller's legitimate interest prevails.
However, these benefits must be balanced against the potentially negative impacts on data subjects (notably the risk of a loss of data confidentiality or of the spread of false information).
These impacts should be assessed considering elements such as the reasonable expectations of individuals or additional measures in place to mitigate them (e.g., anonymization or a prior right to object).
The note illustrates these points with concrete examples of cases where legitimate interest can or cannot be invoked.
If these conditions are not met, another legal basis is required, typically consent, unless a specific legal provision applies.
👩🚀 Dastra's extra tip: note that a commercial interest can, under certain conditions, constitute a legitimate interest (CJEU, Tennisbond ruling of October 4, 2024).
Focus note from the CNIL on legitimate interest and web scraping
In line with the note described above, the French authority has published a focus note on web scraping, a practice that generally relies on legitimate interest, and on the measures the data controller must implement.
These include defining precise collection criteria in advance; excluding certain categories of unnecessary data, whether via filters or by honoring websites that oppose the scraping of their content; deleting irrelevant data collected despite these measures; and limiting collection to freely accessible data that individuals are aware of having made public.
Moreover, it is essential to take into account the reasonable expectations of the individuals concerned, which vary depending on the public nature of the data, the type of site and its restrictions, the type of publication, etc.
It is therefore important to keep individuals informed of the collection (e.g., via an updated list of sites subject to web scraping) and of their rights (notably a prior right to object to the collection).
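To make two of these safeguards concrete, here is a minimal Python sketch under illustrative assumptions: it honors sites that object to scraping (expressed here via robots.txt, one common way to signal that opposition) and filters out a data category defined as unnecessary in advance. The `MyScraper` user agent and the email filter are hypothetical examples, not tooling prescribed by the CNIL.

```python
import re
from urllib.parse import urlparse
from urllib.robotparser import RobotFileParser

def allowed_to_scrape(url: str, user_agent: str = "MyScraper") -> bool:
    """Check the site's robots.txt before collecting anything (makes a network call)."""
    parts = urlparse(url)
    robots = RobotFileParser(f"{parts.scheme}://{parts.netloc}/robots.txt")
    robots.read()
    return robots.can_fetch(user_agent, url)

# A category of data defined as unnecessary in advance (here: email addresses).
EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")

def minimize(text: str) -> str:
    """Filter the excluded category out before anything is stored."""
    return EMAIL.sub("[removed]", text)

if allowed_to_scrape("https://example.com/page"):
    raw = "Contact alice@example.com for details."  # stand-in for scraped content
    print(minimize(raw))  # -> Contact [removed] for details.
```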
Tracking pixels: the CNIL launches a public consultation on its draft recommendation
The CNIL has just opened a consultation on its draft recommendation regarding the use of tracking pixels embedded in emails, a technique that allows the sender to know whether a message has been opened. In light of the increasing number of complaints, the CNIL clarifies that:
- The use of these pixels must comply with the GDPR.
- In principle, their use for marketing purposes (measuring campaign performance, personalization, profile creation, etc.) requires prior consent from the recipient.
- Exceptions exist for strictly technical purposes such as authentication, security, or the anonymous measurement of the overall open rate of solicited emails.
The draft emphasizes the necessity of informed consent, distinct for each purpose, as well as the possibility of withdrawing consent at any time (for example, via a link in the email footer).
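For readers less familiar with the technique, here is a minimal, illustrative sketch of how a tracking pixel works, using only the Python standard library. The endpoint and per-recipient token are hypothetical assumptions; under the CNIL's draft, this marketing use would require prior consent.

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

# A 1x1 transparent GIF, served when the email client loads the image.
TRANSPARENT_GIF = (
    b"GIF89a\x01\x00\x01\x00\x80\x00\x00"      # header + 1x1 logical screen
    b"\x00\x00\x00\xff\xff\xff"                # 2-color palette
    b"!\xf9\x04\x01\x00\x00\x00\x00"           # transparency extension
    b",\x00\x00\x00\x00\x01\x00\x01\x00\x00"   # image descriptor
    b"\x02\x02D\x01\x00;"                      # pixel data + trailer
)

class PixelHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # The query string carries a per-recipient token (e.g. /open.gif?id=42):
        # receiving this request tells the sender that message 42 was opened.
        print(f"open event: {self.path}")
        self.send_response(200)
        self.send_header("Content-Type", "image/gif")
        self.send_header("Content-Length", str(len(TRANSPARENT_GIF)))
        self.end_headers()
        self.wfile.write(TRANSPARENT_GIF)

# The sender embeds a tag like this in the email body:
#   <img src="https://tracker.example/open.gif?id=42" width="1" height="1">
if __name__ == "__main__":
    HTTPServer(("localhost", 8000), PixelHandler).serve_forever()
```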
📝 The public consultation is open until July 24, 2025.
This comes after the Norwegian authority sanctioned six websites on June 10 for non-compliant use of tracking pixels: these sites transmitted information about their visitors to third parties without an adequate legal basis and without meeting user information requirements.
Project PANAME: auditing AI models for compliance verification
In an opinion adopted in December 2024, the EDPB reminds us that the GDPR frequently applies to AI models trained on personal data, mainly due to their memorization capabilities.
However, tools to conduct these audits are still limited. The scientific literature is abundant but poorly suited to the industrial context, open-source techniques require significant development, and there is not yet a standard to formalize privacy tests.
To address these challenges, the CNIL, PEReN, ANSSI, and the IPoP project are launching PANAME, a collaborative project aimed at developing a largely open-source software library to standardize and simplify these audits. The tool will be tested with administrations and industry players to ensure it matches concrete needs and GDPR requirements.
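As an illustration of the kind of "privacy test" such a library could standardize, here is a hedged sketch of one of the simplest memorization checks from the literature, a loss-threshold membership inference measurement. The function and the synthetic losses are assumptions for the example, not PANAME's actual API.

```python
import numpy as np

def membership_inference_auc(losses_members: np.ndarray,
                             losses_nonmembers: np.ndarray) -> float:
    """AUC of the attack "predict member if loss is low".

    0.5 means no memorization signal; values near 1.0 mean the model
    behaves measurably differently on its own training data.
    """
    # Score every (member, non-member) pair: the attack wins when the
    # member's loss is strictly lower than the non-member's.
    wins = (losses_members[:, None] < losses_nonmembers[None, :]).mean()
    ties = (losses_members[:, None] == losses_nonmembers[None, :]).mean()
    return wins + 0.5 * ties

# Toy usage with synthetic losses (a real audit would compute them by
# evaluating the model on training samples and held-out samples):
rng = np.random.default_rng(0)
members = rng.normal(0.8, 0.3, 1000)      # lower loss on training data
nonmembers = rng.normal(1.2, 0.3, 1000)
print(f"attack AUC: {membership_inference_auc(members, nonmembers):.2f}")
```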
Professional emails: the French Supreme Court recognizes their status as personal data
In a ruling on June 18, 2025, the French Supreme Court (the Court of Cassation) made an important clarification regarding the status of professional emails exchanged in the course of work and the right of access of employees to these messages.
The case involved a managing partner dismissed for misconduct after complaints about sexist and sexually suggestive remarks. Contesting his dismissal, he demanded that his employer provide all his professional emails along with the associated metadata to prepare his defense. The employer refused, arguing that these elements, received in the course of his duties, were not personal data.
The Court of Cassation rejected this position, upholding the Court of Appeal's decision: it held that an employee's professional emails, whether sent or received, are indeed personal data within the meaning of Article 4 of the GDPR. Consequently, when the employee exercises their right of access under Article 15 of the GDPR, the employer must provide all emails and associated metadata.
The Court did, however, clarify that this communication may be restricted where it would infringe on the rights and freedoms of other individuals. In the case at hand, the employee may seek compensation for the harm resulting from the failure to comply with his right of access.
The AEPD and the EDPS highlight federated learning as a path to more privacy-friendly AI
The Spanish Data Protection Authority (AEPD) and the European Data Protection Supervisor (EDPS) have published a joint report examining the strategic importance of federated learning for developing AI models that comply with data protection principles.
This training mode processes data locally on each device or site, without transferring it to a central server. Only parameters or training results are shared, which limits the exposure of personal data. Federated learning thus supports principles such as data minimization and purpose limitation, and enhances stakeholder accountability by facilitating auditability.
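To make the mechanism concrete, here is a minimal sketch of federated averaging (FedAvg) with NumPy: each client trains on data that never leaves its own function, and the server only ever sees parameters. The linear model and the synthetic datasets are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(42)
true_w = np.array([2.0, -1.0])

def local_update(w, X, y, lr=0.1, epochs=5):
    """One client's training round: the raw data (X, y) never leaves this call."""
    w = w.copy()
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)  # gradient of mean squared error
        w -= lr * grad
    return w  # only the updated parameters are shared

# Each client holds its own local dataset.
clients = []
for _ in range(3):
    X = rng.normal(size=(100, 2))
    y = X @ true_w + rng.normal(scale=0.1, size=100)
    clients.append((X, y))

w_global = np.zeros(2)
for _ in range(10):
    # The server broadcasts w_global; clients return parameters only.
    local_ws = [local_update(w_global, X, y) for X, y in clients]
    w_global = np.mean(local_ws, axis=0)  # aggregation by simple averaging

print("learned:", w_global.round(2), "true:", true_w)
```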
The report identifies major use cases, notably:
- In the health sector, where data is particularly sensitive.
- For voice assistants or autonomous vehicles.
It highlights several challenges:
- Ensuring data security and quality throughout the ecosystem.
- Avoiding biases and not assuming that models or parameters are anonymous without thorough analysis.
- Prioritizing a "privacy by design" approach to mitigate risks and enhance trust among stakeholders.
Federated learning is thus seen as a dual lever, promoting both data protection and the growth of the digital economy by enabling collaboration on strategic data without sharing it directly.
United Kingdom: a new data protection law
The Data (Use and Access) Act 2025 (DUAA) received royal assent on June 19. The Information Commissioner's Office (ICO) has published an article highlighting the expected benefits for British organizations.
The DUAA aims to energize the economy, modernize public services, and simplify the lives of Britons. Its main provisions cover the sharing of health data between institutions (e.g., between hospitals), data retention during judicial investigations, and online identity verification, accompanied by the creation of a trust mark for service providers.
To foster innovation:
- Scientific research: the DUAA clarifies when personal data may be used for scientific research, including commercial research, and allows broad consent covering an area of study.
- Exemption from individual notification: permits the reuse of personal data for research without sending a new privacy notice where that would require disproportionate effort, provided individuals' rights are protected and this information is published on the organization's website.
- Automated decisions: opens the possibility of relying on any legal basis (including legitimate interest) for automated decisions with significant effects, subject to appropriate safeguards. This does not, however, apply to sensitive data.
To simplify data management:
- New basis of "recognized legitimate interests": exempts the controller from balancing individuals' rights against the interest pursued, for example for public safety purposes.
- Facilitated sharing with authorities: allows personal data to be transferred to other public bodies (such as the police) without having to verify that the request falls within their remit; that verification is the responsibility of the requesting body.
- Presumed compatibility: certain secondary uses (e.g., archiving in the public interest) are deemed automatically compatible with the initial purpose, without a compatibility analysis.
The ICO provides practical guidelines to help organizations apply these new rules as they come into effect.
United States: toward state-level AI regulation after the moratorium is lifted
The U.S. Senate voted 99 to 1 to remove the AI moratorium, paving the way for regulation by the states.
The moratorium refers to a provision adopted in May 2025 by the U.S. House of Representatives, within the bill dubbed the "Big Beautiful Bill" by the Trump administration, that would have prohibited all U.S. states from regulating artificial intelligence for 10 years.
Why? Proponents of the moratorium argue that the current regulatory fragmentation (each state being able to adopt its own laws) creates legal uncertainty and harms American competitiveness, particularly against Europe and China. Large tech companies support this freeze, believing that strict rules would hinder innovation.
However, critics from both political sides have strongly opposed this measure, warning that such a ban would expose consumers to risks and allow large AI companies to operate with minimal accountability.
The decisive action of the Senate reflects a growing awareness of the need for agile and responsive technology policy, particularly in rapidly evolving areas like AI.
The lifting of the moratorium marks a turning point for AI innovation in the United States. Rather than a centralized federal freeze on state rulemaking, the way is now open for states to formulate their own regulatory frameworks.
This could lead to a more nuanced approach to governance, allowing states to tailor regulation to their specific economic, social, and technological contexts.
Texas adopts a law on responsible AI governance
On June 22, the Texas governor signed the Texas Responsible Artificial Intelligence Governance Act, a unanimously passed law that aims to establish a framework for the responsible use of AI within state public agencies.
Once the law takes effect on January 1, 2026, Texas will:
- Establish basic obligations for AI "developers" and "users."
- Prohibit AI systems intended for social scoring or discrimination.
- Create the first regulatory sandbox for AI in the United States.
- Grant the Attorney General exclusive authority to enforce the law.
- Supersede local AI regulations through broad preemption.
Texas thus joins a broader movement across the United States to regulate the development and deployment of AI in the public sector, ensuring that it is used ethically and responsibly.