Tired of general newsletters that skim over your real concerns? Dastra Insights offers legal and regulatory monitoring designed specifically for DPOs, lawyers, and privacy professionals.
Each month, we go beyond a simple recap: we select around ten decisions, news items, or regulatory positions that have a concrete impact on your work and your organisations.
🎯 Targeted, useful monitoring grounded in the day-to-day realities of data protection and AI.
Here is our selection for December 2025:
United States: AI policy enters a new phase
On 11 December 2025, President Trump signed an executive order aimed at limiting the impact of state-level artificial intelligence laws.
Why?
The administration considers that U.S. leadership in AI requires a single, harmonised national framework rather than a patchwork of 50 different state regimes likely to fragment regulation, slow innovation, and increase compliance complexity for businesses.
What the executive order does (and does not do)
The executive order does not amend existing federal law. Instead, it instructs the White House to prepare legislative proposals for Congress to establish a federal AI framework capable of pre-empting state laws.
It increases federal pressure on states. Several states, including California and Texas, have already adopted or proposed AI safety laws. The order signals a clear intent to neutralise these initiatives at the federal level.
It establishes an AI Litigation Task Force. The Department of Justice is creating a task force responsible for challenging the constitutionality of state AI laws that conflict with the federal policy outlined in the order.
It mobilises federal agencies. The Federal Communications Commission (FCC) and the Federal Trade Commission (FTC) are instructed to take steps that could lead to the adoption of federal standards replacing state rules in cases of overlap or conflict.
Major legal uncertainties
In the absence of a clear statutory basis adopted by Congress, the legality and effectiveness of this strategy may be challenged before the courts. It is worth noting that this executive order follows failed congressional attempts earlier in 2025 to impose a federal moratorium on state AI laws.
AI Act: publication of the first draft Code of Practice on marking AI-generated content
The European Commission has published the first draft Code of Practice on the marking and labelling of AI-generated or AI-manipulated content. The publication is in line with the timeline for finalising the Code by June 2026, ahead of the entry into application of the corresponding obligations under the AI Act.
A voluntary tool to anticipate AI Act obligations
Article 50 of the AI Act sets out specific transparency requirements. In particular, it requires providers of AI systems to mark AI-generated or manipulated content in a machine-readable format.
It also requires deployers of generative AI systems acting in a professional capacity to clearly label deepfakes and certain AI-generated text, in particular text published to inform the public on matters of public interest.
To assist providers and deployers in complying with these requirements, the Commission has initiated the development of a voluntary Code of Practice, drafted by independent experts. The aim is to provide an operational framework ahead of the entry into force of legally binding rules.
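By way of illustration only, a machine-readable mark could take the form of a signed provenance manifest attached to each generated asset. The field names, the HMAC signing scheme, and the generator identifier below are assumptions made for this sketch; neither the AI Act nor the draft Code prescribes this exact format, and real deployments would more plausibly rely on an established standard such as C2PA with asymmetric signatures.

```typescript
import { createHmac } from "node:crypto";

// Hypothetical provenance manifest: field names are illustrative,
// not mandated by the AI Act or the draft Code of Practice.
interface ProvenanceManifest {
  generator: string;   // identifier of the AI system that produced the content
  createdAt: string;   // ISO 8601 timestamp
  contentType: string; // e.g. "image/png", "text/plain"
  aiGenerated: true;   // the machine-readable flag itself
  signature: string;   // integrity check over the other fields
}

// Builds a detached, machine-readable marker for a piece of AI-generated
// content, signed with a shared key so tampering can be detected.
function markAsAiGenerated(
  generator: string,
  contentType: string,
  signingKey: string,
): ProvenanceManifest {
  const body = {
    generator,
    createdAt: new Date().toISOString(),
    contentType,
    aiGenerated: true as const,
  };
  const signature = createHmac("sha256", signingKey)
    .update(JSON.stringify(body))
    .digest("hex");
  return { ...body, signature };
}

// Example: manifest shipped alongside a generated image.
console.log(markAsAiGenerated("acme-image-model-v2", "image/png", "demo-key"));
```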
Content of the draft Code
The draft Code is structured into two distinct sections.
The first section is addressed to providers of generative AI systems and focuses on rules for marking and detecting AI-generated content, with an emphasis on technical solutions enabling automated identification.
The second section targets deployers of generative AI systems and concerns the labelling of deepfakes and certain AI-generated or manipulated text relating to matters of public interest, in order to ensure that the public is informed in a fair and transparent manner.
Consultation and next steps
The Commission will collect feedback from participants and observers on this first draft until 23 January 2026. A second version of the Code is expected by mid-March 2026, with final adoption planned for June 2026.
EU–UK data transfers: adequacy decisions renewed until 2031
On 19 December 2025, the European Commission renewed the two adequacy decisions adopted in 2021, confirming that personal data may continue to flow freely between the European Union and the United Kingdom until 27 December 2031.
These decisions had been technically extended in June 2025 for a period of six months, allowing the Commission to carry out an in-depth assessment of the UK legal framework, in particular following the adoption of the Data (Use and Access) Act.
Following this assessment, the Commission concluded that the level of protection provided by the United Kingdom remains essentially equivalent to that guaranteed under EU law.
The renewal follows a favourable opinion from the European Data Protection Board and approval by the Member States under the comitology procedure.
The renewed decisions are subject to a six-year validity period, with a review scheduled after four years in order to monitor developments in the UK framework.
What this means for organisations
For organisations operating in both the EU and the UK, this decision provides welcome legal certainty in a changing regulatory landscape. It confirms that EU–UK data transfers can continue without additional safeguards, such as standard contractual clauses or specific technical measures, thereby reducing cost and compliance complexity.
However, adequacy is not guaranteed indefinitely. The UK framework will remain under close scrutiny, particularly given ongoing debates around potential regulatory divergence.
Data breach: CNIL fines Mobius Solutions Ltd one million euros
The CNIL imposed a one-million-euro fine on Mobius Solutions Ltd following a personal data breach that revealed serious failures to comply with the security obligations laid down in the GDPR.
The investigation identified insufficient technical and organisational measures, which allowed unauthorised access to personal data. In particular, the CNIL noted the absence of adequate safeguards to prevent intrusion, as well as shortcomings in security risk management.
This decision reiterates that security obligations are not limited to general principles but require concrete, risk-appropriate measures that are regularly reviewed and updated. It also highlights the importance of a prompt and structured response to incidents, both from a technical standpoint and in terms of notification to the authority and affected individuals.
Security: CNIL sanctions NEXPUBLICA FRANCE for inadequate security measures
The CNIL has imposed a fine of over €1 million on NEXPUBLICA FRANCE for failing to pay sufficient attention to the security of the data it processes.
With this decision, the CNIL firmly reiterates that acting as a processor in no way lessens the requirements for personal data security. As the publisher of a user relationship management tool in the sensitive field of social welfare, NEXPUBLICA FRANCE processed data on behalf of the MDPH (the French departmental centres for people with disabilities) in its capacity as a processor. As such, it was responsible for ensuring a level of security appropriate to the risks, in accordance with Article 32 of the GDPR.
NEXPUBLICA had proven expertise in developing IT solutions and could not have been unaware of either its regulatory obligations or the known vulnerabilities affecting the security of its tool.
Although multi-factor authentication had been implemented, it proved insufficient in the absence of essential complementary measures, such as active traceability to detect, alert on, and analyse abnormal behaviour. The publisher's inability to identify precisely which data were affected by the breaches, combined with its failure to correct identified vulnerabilities, led the CNIL to conclude that the security measures deployed were insufficient.
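Purely as an illustration of what such active traceability might involve, the sketch below logs authentication events and raises an alert on one simple abnormal pattern (repeated failures within a short window). The thresholds, event fields, and alerting channel are assumptions made for the example, not requirements drawn from the CNIL decision.

```typescript
// Minimal sketch of active traceability: keep an audit trail of
// authentication events and alert on an abnormal pattern.
interface AuthEvent {
  userId: string;
  ip: string;
  success: boolean;
  at: number; // epoch milliseconds
}

const WINDOW_MS = 10 * 60 * 1000; // look-back window: 10 minutes (assumed)
const MAX_FAILURES = 5;           // alert threshold (assumed)
const auditLog: AuthEvent[] = []; // in production: append-only, centralised storage

function recordAuthEvent(event: AuthEvent): void {
  auditLog.push(event); // full trail so incidents can be reconstructed later
  const recentFailures = auditLog.filter(
    (e) =>
      e.userId === event.userId &&
      !e.success &&
      event.at - e.at <= WINDOW_MS,
  );
  if (recentFailures.length >= MAX_FAILURES) {
    // In production: notify the security team, lock the account, open a ticket.
    console.warn(
      `ALERT: ${recentFailures.length} failed logins for ${event.userId} within 10 minutes`,
    );
  }
}
```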
This decision illustrates the extent of the processor's responsibility: the processor is subject to a genuine obligation to implement enhanced security measures, independently of the data controller's own role.
Cookies: CNIL imposes a 15-million-euro fine on American Express
The CNIL fined American Express 15 million euros for breaches of the rules governing cookies and trackers.
The authority found that advertising cookies were placed before user consent was obtained, in breach of Article 82 of the French Data Protection Act. It also identified defective mechanisms for refusing and withdrawing consent, which prevented users from exercising their choices effectively.
This decision confirms the CNIL’s consistent position that cookie compliance does not rest solely on displaying a banner, but on the effective and fair technical implementation of consent mechanisms.
It also serves as a reminder that international companies operating in France are fully subject to national and European data protection requirements.
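By way of illustration, a minimal browser-side pattern consistent with this position could gate any advertising tracker behind an explicit opt-in and honour withdrawal by removing the cookie. The script URL and cookie name below are placeholders invented for the sketch, not elements of the American Express decision.

```typescript
// Illustrative pattern: advertising trackers are loaded only after explicit
// consent, and the cookie is expired when consent is withdrawn.
// "ad-tracker.example/tag.js" and "_ads_id" are placeholder names.

function loadAdvertisingTag(): void {
  const script = document.createElement("script");
  script.src = "https://ad-tracker.example/tag.js"; // placeholder URL
  document.head.appendChild(script);
}

function onConsentGranted(): void {
  // Only now may advertising cookies be set: nothing fires before this call.
  loadAdvertisingTag();
}

function onConsentWithdrawn(): void {
  // Withdrawal must be as effective as consent: expire the advertising cookie.
  document.cookie = "_ads_id=; Max-Age=0; path=/";
}
```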
United Kingdom: ICO fines a password manager provider
The UK Information Commissioner’s Office imposed a financial penalty on a password manager provider for failing to comply with security obligations under the UK GDPR.
The investigation revealed deficiencies in technical and organisational measures that exposed sensitive data to a risk of unauthorised access. The ICO identified weaknesses in secure-by-design implementation and vulnerability management, despite the fact that the service was intended to enhance user security.
This decision underscores that organisations offering cybersecurity or identity protection services are subject to heightened vigilance requirements.
It also reminds organisations that a promise of security must be reflected in robust, documented, and regularly assessed technical choices, particularly where highly sensitive data is involved.
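Purely as an illustration of what such technical choices might look like for a password manager, the sketch below derives an encryption key from the user's master password and encrypts a vault entry with authenticated encryption. The parameters (scrypt key derivation, AES-256-GCM) are common defaults assumed for the example, not details taken from the ICO decision.

```typescript
import { scryptSync, randomBytes, createCipheriv } from "node:crypto";

// Illustrative only: derive a key from the master password and encrypt a
// vault entry. Parameters are common defaults, not the provider's design.
function encryptVaultEntry(masterPassword: string, plaintext: string) {
  const salt = randomBytes(16);                     // per-user random salt
  const key = scryptSync(masterPassword, salt, 32); // memory-hard derivation
  const iv = randomBytes(12);                       // fresh nonce per encryption
  const cipher = createCipheriv("aes-256-gcm", key, iv);
  const ciphertext = Buffer.concat([
    cipher.update(plaintext, "utf8"),
    cipher.final(),
  ]);
  const authTag = cipher.getAuthTag(); // integrity and authenticity check
  return { salt, iv, ciphertext, authTag }; // store all four; never the key
}
```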
User accounts: when commercial convenience clashes with the GDPR
In December 2025, the European Data Protection Board (EDPB) published Recommendations 2/2025 aimed at clarifying the conditions under which an e-commerce site can legally require the creation of a user account under the GDPR.
On e-commerce websites, users are often required to create an online account before they can access offers or purchase goods and services.
While data controllers in this sector may have a commercial interest in requiring users to create an account, the recommendations reiterate that account creation must not become a pretext for collecting personal data beyond what is necessary. Systematically imposing an account amounts to large-scale processing of personal data, which can increase the risks to the rights and freedoms of the individuals concerned.
The EDPB recommends offering a “guest” mode by default, allowing users to browse and purchase without creating an account. This approach promotes the principles of privacy by design and data minimisation under the GDPR, limiting the collection and storage of data to what is strictly necessary for the transaction.
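As a small, hypothetical illustration of minimisation at the data-model level, compare a guest order that keeps only what one transaction needs with a persistent account profile. All field names below are assumptions made for the example, not taken from the EDPB recommendations.

```typescript
// Hypothetical data model contrasting guest checkout with a full account.

// Guest checkout: only what this one transaction strictly needs.
interface GuestOrder {
  items: string[];
  deliveryAddress: string;
  email: string; // order confirmation and delivery updates only
}

// Account: a persistent profile retained across sessions, and therefore a
// larger collection of personal data with a higher risk surface.
interface AccountProfile {
  email: string;
  passwordHash: string;
  deliveryAddresses: string[];
  orderHistory: GuestOrder[];
  marketingPreferences: { newsletter: boolean };
}
```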
