Tired of general newsletters that skim over your real concerns? Dastra Insights offers legal and regulatory monitoring specifically designed for DPOs, lawyers, and privacy professionals.
Each month, we go beyond a simple recap: we select about ten decisions, news, or positions that have a concrete impact on your missions and organizations.
🎯 Targeted, useful monitoring grounded in the real-world realities of data protection and AI.
Here is our selection for January 2026:
Informing data subjects: the CNIL fines a company for transmitting data to a social network for advertising purposes without consent
On 30 December 2025, the CNIL imposed a fine of €3.5 million on a company for having transmitted the data of members of its loyalty program to a social network for advertising targeting purposes, without valid consent.
Regarding advertising targeting on social networks, the European Data Protection Board (EDPB) recalls in its Guidelines 8/2020 on the targeting of social media users, adopted on 13 April 2021, that the targeter acts as a data controller when it “determines the purposes and means of the processing by actively collecting, processing and transmitting the personal data of the data subjects to the social media provider for advertising purposes.”
The sanctioned company should therefore be regarded as the controller for the processing related to the transmission of data to the social network.
Under Article 6(1) of the GDPR, processing of personal data is lawful only if it is based on one of the legal bases provided by the Regulation, notably the consent of the data subject.
The CNIL’s restricted panel found that the information appearing on the loyalty program sign-up form was not sufficient to guarantee informed consent to the transmission of members’ data to a social network for targeted advertising. Although the form did refer to electronic prospecting (by SMS and/or email) aimed at promoting the company’s products, the CNIL stressed that targeted advertising on a social network is a distinct processing operation, both in its methods and in its implications.
Indeed, transmitting data to a third party in order to display targeted advertising slots on a website is substantially different from sending commercial communications by email or SMS, which do not necessarily involve transmitting data to another entity. The specific purpose of targeted advertising on the social network, which requires transmission of data to that network, was not clearly specified in the form presented to the data subjects.
Accordingly, the CNIL considers that members were not able to understand the exact nature of the processing, the recipients of their data, or the consequences of that transmission. It concluded that the consent obtained could not be considered specific and informed, in breach of the GDPR’s requirements, thus justifying the sanction imposed.
Security: the CNIL fines FREE and FREE MOBILE a total of €42 million
On 13 January 2026, the CNIL issued two sanction decisions against FREE MOBILE and FREE, imposing fines of €27 million and €15 million respectively, in light of the inadequacy of the measures taken to ensure the security of their subscribers’ data.
Under Article 32(1) of the GDPR, the controller and the processor must implement, taking into account the state of the art, implementation costs, the nature, scope, context and purposes of the processing and the risks to the rights and freedoms of natural persons, appropriate technical and organisational measures to ensure a level of security appropriate to the risk.
The CNIL’s restricted panel recalls that the security obligation under Article 32 of the GDPR is an obligation of means, which requires the controller to take measures that, given the characteristics of the processing, reduce the likelihood of a data breach occurring and, where applicable, mitigate its severity. It is therefore not expected that measures eliminate all risk, and the mere occurrence of a breach does not by itself characterise a failure to comply with Article 32.
The controller is thus required to mitigate the risk of breaches, not to prevent every breach (CJEU, 25 January 2024, C‑687/21, para. 39). Conversely, a failure to meet the security obligation can be found irrespective of the actual occurrence of a breach.
In this case, the CNIL found that:
The authentication procedure for connecting to the companies’ VPNs—used notably for employees’ teleworking—was not sufficiently robust.
Measures implemented to detect abnormal behaviours on the information system were ineffective.
Given the number and nature of the data processed, the restricted panel considered that the security measures deployed by the companies were not appropriate to ensure the confidentiality of subscribers’ personal data.
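The decisions do not describe the expected safeguards in technical detail, but a common baseline for the second finding (detecting abnormal behaviour on an information system) is rate-based alerting on failed authentications. The sketch below is a generic illustration under assumed thresholds, not the CNIL’s prescription or the operators’ actual setup:

```python
from collections import defaultdict, deque

# Illustrative sliding-window detector: flags an account when failed
# VPN logins exceed a threshold within a time window. Threshold and
# window are arbitrary example values, not regulatory requirements.
WINDOW_SECONDS = 300
MAX_FAILURES = 5

failures: dict[str, deque] = defaultdict(deque)

def is_suspicious(account: str, event_time: float) -> bool:
    """Record a failed login and report whether the account exceeds
    the allowed failure rate within the sliding window."""
    q = failures[account]
    q.append(event_time)
    # Drop failures that have fallen out of the sliding window.
    while q and event_time - q[0] > WINDOW_SECONDS:
        q.popleft()
    return len(q) > MAX_FAILURES

# Six rapid failures for the same account trigger an alert.
alerts = [is_suspicious("alice", t) for t in range(6)]
print(alerts[-1])  # → True
```

In practice such detection would feed a SIEM or alerting pipeline rather than a print statement; the point is that the measure must actually fire on abnormal patterns, which is what the CNIL found lacking.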
Marketing: the CNIL opens a public consultation on proof of consent
The CNIL announced the opening of a public consultation on proof of consent, against a background of repeated inspections and sanctions relating to commercial prospecting, targeted advertising and cookies, with a view to drafting a recommendation on the subject.
This initiative falls under Article 7(1) of the GDPR, according to which, where processing is based on consent, the controller must be able to demonstrate that the data subject has indeed given consent. This requirement is part of the accountability principle and lies solely with the controller.
Proof of consent is not limited to the existence of a formal agreement. It must make it possible to establish that consent was freely given, specific, informed and unambiguous, and that it was given by a clear affirmative act. Failing that, the processing is regarded as lacking a legal basis, even if consent is asserted.
Such a recommendation could include, in particular, clarifications on:
the concrete elements of evidence to be retained (timestamp, identity of the person, collection path, information provided);
the allocation of the burden of proof in complex marketing chains (advertisers, partners, data brokers, joint controllers);
the consent collection interfaces, especially where their design may unduly steer the user.
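The evidentiary elements listed above can be pictured as a minimal consent record kept by the controller. The structure below is purely illustrative: the field names and the helper are assumptions for the sketch, not formats prescribed by the CNIL or the GDPR:

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass(frozen=True)
class ConsentRecord:
    """Illustrative consent-proof record; field names are assumed,
    not prescribed by any authority."""
    subject_id: str          # identity of the person (e.g. pseudonymised reference)
    purpose: str             # specific purpose consented to
    collection_path: str     # interface/journey where consent was given
    information_shown: str   # reference to the notice displayed at collection
    timestamp: datetime      # when the clear affirmative act occurred

def record_consent(subject_id: str, purpose: str,
                   collection_path: str, notice_version: str) -> ConsentRecord:
    # Timestamp is captured at the moment of the affirmative act.
    return ConsentRecord(
        subject_id=subject_id,
        purpose=purpose,
        collection_path=collection_path,
        information_shown=notice_version,
        timestamp=datetime.now(timezone.utc),
    )

rec = record_consent("user-123", "targeted advertising",
                     "signup-form-v2", "notice-2026-01")
print(rec.purpose)  # → targeted advertising
```

Keeping the record immutable (`frozen=True`) and tied to a versioned notice reflects the idea that the controller must be able to show, after the fact, exactly what the person saw and agreed to.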
Cookies and other trackers: the CNIL publishes its final recommendations on multi-device consent
The CNIL published its final recommendations regarding collection of consent in a multi-device context, following a consultation phase with stakeholders.
These recommendations continue the “cookies and other trackers” guidance and aim to respond to practices that are increasingly common in digital environments.
Today, digital usage is no longer limited to a single device. The same user can browse a site or an app from a computer, smartphone, tablet or connected TV.
In this context, some actors have sought to reuse consent given on one device so as to extend it to other devices belonging to the same user, notably for advertising or audience measurement purposes.
This practice raised a central question: can consent given on one device be valid for another, even though technical environments, interfaces and information conditions differ?
The CNIL’s recommendations aim to strictly frame the conditions under which consent may be taken into account across multiple devices. They recall that consent must remain under the data subject’s effective control, and that any extension of consent to other devices requires enhanced guarantees, both as to the information provided and as to the technical means implemented.
The CNIL thus insists on preserving the user’s freedom of choice and avoiding any automatic generalisation of consent that could render it meaningless.
These recommendations provide an operational framework awaited in a field where technical practices had outpaced the law.
Cybersecurity: proposal to revise the Cybersecurity Act
The European Commission has paved the way for a revision of the Cybersecurity Act, the regulation adopted in 2019, to adjust the European cybersecurity framework to a digital environment that has become more complex and exposed.
The current framework
The Cybersecurity Act structured European action around two main axes:
strengthening the role of ENISA, the EU Agency for Cybersecurity;
creating a European cybersecurity certification framework for digital products, services and processes.
The objective was to establish a common level of trust within the internal market while allowing Member States some room for implementation.
A profoundly changed context
Since 2019, conditions have evolved. Cyberattacks have intensified, digital supply chains have lengthened and technological dependencies have increased, including in sensitive sectors.
At the same time, the European legal framework has grown richer, notably with NIS 2 and DORA, raising the question of coherence among the different texts applicable to cybersecurity.
Orientation of the revision
The envisaged revision aims to adjust the Cybersecurity Act without changing its overall structure. It follows a logic of clarification and strengthening, notably to:
improve the functioning and uptake of European certification schemes;
clarify ENISA’s role in an increasingly dense regulatory landscape;
better cover recent technologies and uses that have become central to economic and public activities.
The challenge is to ensure a high level of security across the Union while limiting national divergences.
A further step in European construction
This revision is part of a broader dynamic to structure European cybersecurity law. It reflects the institutions’ desire to consolidate a common framework capable of responding to risks that are now cross-border, while preserving the clarity of obligations for the actors concerned.
Security incident: HubEE, victim of a leak of 160,000 documents
HubEE (the Hub d’Échange de l’État) is a digital platform operated by the Interministerial Directorate for Digital Affairs (DINUM) that serves as a document exchange network between public administrations, processing services and the online services used by citizens to carry out administrative procedures. It plays a central role in the circulation of documents originating from online services such as those offered via service-public.fr, notably requests for civil status records, social files or public benefits procedures, and connects more than 8,000 municipalities, several ministries and public bodies in their digital exchanges.
The incident
In early January 2026, HubEE was the victim of a cyberattack that resulted in the exfiltration of at least 70,000 case files, or approximately 160,000 administrative documents, some of which contain sensitive personal data provided by users during their online procedures.
DINUM detected the intrusion after 9 January and immediately deployed containment measures to block the attacker’s access. The competent authorities — notably the National Cybersecurity Agency of France (ANSSI) and the CNIL — were informed, and a complaint was filed with the competent courts.
It is important to note that it was not the Service-Public.fr website itself that was hacked, but the internal technical platform HubEE that transmits documents between public services in the context of online services.
What types of data?
According to the information released, the exfiltrated documents may contain identifiers, identity data and supporting documents (for example, identity documents or proofs provided by users in the context of their procedures), potentially exposing data subjects to risks of identity theft, phishing or other malicious uses.
This incident raises several important issues from a data protection law perspective:
Security of public platforms: shared platforms that concentrate large volumes of personal data must be subject to particularly high security levels to prevent intrusions.
Mandatory notification: the CNIL was notified in accordance with the law, which is an obligation in the event of a personal data breach involving a risk to the rights and freedoms of data subjects.
Risks to individuals: the exploitation of documents containing personal data can lead to identity theft, targeted phishing campaigns or other fraud if such information is made public or used by malicious third parties.
Spain: the Spanish Authority (AEPD) updates its GDPR FAQ to help SMEs
On 21 January 2026, the Spanish Data Protection Agency (Agencia Española de Protección de Datos – AEPD) significantly updated its “Preguntas frecuentes” (Frequently Asked Questions) section on the GDPR, aiming to better meet the needs of small and medium-sized enterprises (SMEs), controllers and data protection professionals.
The renewed FAQ now contains more than 200 answers to the most common questions, covering essential topics such as the obligation to maintain records of processing activities, the legal bases for processing, information obligations, and model consent notices adapted to different processing contexts.
The AEPD states that this update is part of its Strategic Plan 2025–2030, which focuses on strengthening its role as a guide and facilitator for organisations, particularly micro-enterprises, SMEs, self-employed workers and administrations. The plan seeks to encourage a practical, simple and effective GDPR compliance culture by providing tools and resources immediately usable in practice.
The revised FAQ includes, in particular:
targeted, practical categories on controllers’ obligations;
concrete examples of processing and compliance;
useful template documents (records of processing activities, information notices, consent forms);
answers to the most frequently asked questions by SMEs and privacy professionals.
Germany: court ruling on transparency of credit scores
SCHUFA Holding AG is Germany’s main credit reference agency: it compiles personal financial information and assigns credit scores used to assess the likelihood that a person will repay their debts. These scores influence key decisions such as loan approvals, telephony contracts or rental agreements.
On 19 November 2025, the 6th chamber of the Administrative Court of Wiesbaden issued an important judgment on transparency of credit scores in a case brought by a consumer against SCHUFA. It ruled that SCHUFA must provide detailed and individualized explanations of how its scores are calculated, under the access right provided by Article 15 of the GDPR.
In this case:
The claimant had applied for a personal loan in 2018 which was refused after a credit score (~86%) was transmitted by SCHUFA to the lenders.
She subsequently requested from SCHUFA explanations about the data factors used and the concrete logic of the score calculation.
SCHUFA responded in general terms, citing information available on its website but without explaining how her personal data had specifically influenced the precise score calculation, nor why the score had been classified as a “high risk” for her.
The court found that these replies did not satisfy the GDPR requirements:
Providing a general description of methods or types of data used was not sufficient; SCHUFA had to indicate which specific data were taken into account, their weighting in the calculation, and why the resulting score was interpreted as a high probability of risk for that person.
The court’s decision also relies on a Court of Justice of the European Union (CJEU) ruling of 7 December 2023 (case C-634/21), which held that the establishment of a credit score by a credit bureau constitutes an “automated decision” within the meaning of Article 22 of the GDPR when a third party draws strongly on that score in making its own contractual decisions.
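What an “individualized” explanation could look like can be illustrated with a toy additive score whose per-factor contributions are attributable to the person’s own data. The factors, weights and baseline below are invented for the sketch and have nothing to do with SCHUFA’s actual (undisclosed) model:

```python
# Purely illustrative: a toy linear score with per-factor contributions,
# showing the kind of individualized breakdown the court required.
# Factor names, weights and baseline are invented example values.
WEIGHTS = {
    "payment_history": 40.0,
    "credit_utilisation": -25.0,
    "account_age_years": 2.0,
}
BASELINE = 50.0

def explain_score(profile: dict[str, float]) -> tuple[float, dict[str, float]]:
    """Return the score and each factor's individual contribution."""
    contributions = {k: WEIGHTS[k] * profile[k] for k in WEIGHTS}
    return BASELINE + sum(contributions.values()), contributions

score, parts = explain_score({
    "payment_history": 1.0,
    "credit_utilisation": 0.3,
    "account_age_years": 5.0,
})
# Each factor's contribution is individually attributable:
# payment_history +40.0, credit_utilisation -7.5, account_age_years +10.0
print(round(score, 1))  # → 92.5
```

The contrast with SCHUFA’s response is the point: a general description of the method corresponds to publishing `WEIGHTS` in the abstract, whereas the court required the `parts` breakdown, i.e. which of the person’s data were used, with what weight, and why the result was read as high risk.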
South Korea: adoption of a new legal framework for artificial intelligence
On 22 January 2026, South Korea officially enacted an ambitious framework law on artificial intelligence, known as the AI Basic Act (Framework Act on the Development of Artificial Intelligence and the Creation of a Foundation for Trust), making the country one of the first in the world to introduce a comprehensive regulatory package for AI. This initiative is part of the national strategy to strengthen the safety, trust and international competitiveness of the AI sector.
The law seeks to regulate the use of AI systems, in particular those qualified as “high-impact”, meaning likely to have significant effects on people’s lives, public safety, or sensitive sectors such as healthcare, transport, drinking water, finance or nuclear safety. It notably imposes:
Mandatory human oversight for high-impact AI applications;
Transparency obligations, with notifications to users and clear labeling of AI-generated content when it is hard to distinguish from reality;
Risk management and security requirements for AI operators.
The government has provided at least a one-year grace period before strict sanctions apply, to allow companies and startups to adapt their systems and practices. After this phase, organisations failing to comply with obligations, such as labeling AI-generated content, could face fines of up to 30 million South Korean won (around €17,400).
While the law is presented as a means to consolidate public trust in AI and to boost the sector by providing a clear legal framework, several stakeholders, notably local startups, expressed concerns about compliance burdens, which they consider sometimes vague or costly to implement. Some fear these requirements could heavily burden young companies, potentially hindering innovation or increasing development costs.
President Lee Jae-myung acknowledged these concerns and encouraged a balance between regulation and support, with guidance measures and assistance platforms dedicated to companies during the transition period.
Cybersecurity standards for AI: ETSI publishes its expectations
The European Telecommunications Standards Institute (ETSI) has published its security expectations for AI, which will serve as a baseline cybersecurity reference for organisations implementing AI technologies in Europe.
These standards were developed based on guidance issued by the UK National Cyber Security Centre (NCSC) and the UK Department for Science, Innovation and Technology (DSIT).
They specify security expectations applicable to AI tools from design through deployment and use.
