How do the risks specific to AI differ from the risks of traditional software?
The risks specific to artificial intelligence differ from those of traditional software mainly because of how AI systems are designed, their complexity, and their deep embedding in social dynamics. While traditional software relies on explicit code written by humans, AI systems often rely on data-driven learning, which introduces several new challenges.
The main differences identified are:
1. Dependence on data and dynamic nature
Evolving training data: AI systems are trained on data that may change significantly and unpredictably over time, which can affect their behavior in ways that are hard to understand.
Loss of context: Datasets used for training can become disconnected from their original context or outdated relative to real-world deployment conditions.
Drift: AI systems require more frequent maintenance because of data, model, or concept drift, i.e., changes over time in the statistical properties of the inputs or in the relationship the model has learned (a minimal detection sketch follows this list).
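As a hedged sketch of how such drift can be surfaced in practice, the following compares a feature's training-time distribution against live data with a two-sample Kolmogorov-Smirnov test from scipy; the distributions, sample sizes, and significance threshold are all illustrative assumptions.

```python
# Minimal drift check: compare a feature's training distribution to live data
# with a two-sample Kolmogorov-Smirnov test. All values here are synthetic.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(42)
train_feature = rng.normal(loc=0.0, scale=1.0, size=5_000)  # seen at training time
live_feature = rng.normal(loc=0.4, scale=1.2, size=1_000)   # shifted in production

result = ks_2samp(train_feature, live_feature)
if result.pvalue < 0.01:  # the significance level is a policy choice
    print(f"Possible data drift (KS={result.statistic:.3f}, p={result.pvalue:.4f})")
else:
    print("No significant drift detected on this feature")
```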
2. Complexity and lack of transparency
Number of decision points: AI systems can contain billions or even trillions of decision points, making them far more complex than traditional software.
Lack of interpretability: AI systems are often "opaque," meaning it is difficult to explain how or why a specific decision was made (see the probing sketch after this list).
Emergent properties: Large-scale models can exhibit emergent behaviors, making their failure modes much harder to anticipate than those of conventional software.
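Opacity can be probed, even if not eliminated, by post-hoc techniques. The sketch below uses scikit-learn's permutation importance on a synthetic model to estimate how much each input feature drives predictions; such methods only approximate the behavior of very large models and should be read as one diagnostic among several.

```python
# Post-hoc interpretability sketch: permutation importance measures how much a
# model's score degrades when one feature is shuffled. Data is synthetic.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

X, y = make_classification(n_samples=1_000, n_features=5, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X, y)

result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for i, importance in enumerate(result.importances_mean):
    print(f"feature_{i}: score drop when shuffled = {importance:.3f}")
```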
3. Socio-technical nature and bias
Societal influence: AI-related risks are inherently socio-technical, arising from the interaction between technical code and social factors, such as the people operating the system or the context in which it is deployed.
Harmful bias: Unlike traditional software, AI can more easily amplify, perpetuate, or exacerbate systemic, computational, or human cognitive biases, leading to unfair outcomes.
4. Security and testing challenges
Specific attack surfaces: AI systems are vulnerable to specific attacks that traditional frameworks do not fully cover, such as data poisoning, adversarial examples, or model extraction.
Limits of testing: Traditional software testing methods are often insufficient for AI, because expected behavior is learned from data rather than explicitly specified; it can even be difficult to determine what should be tested (one pragmatic workaround is sketched below).
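That workaround is metamorphic testing: when no exact oracle exists for "the right answer," we instead test properties that should hold, such as a prediction staying stable under a label-preserving change to the input. The predict_sentiment function below is a hypothetical stand-in for a real model call.

```python
# Metamorphic test sketch: check that a sentiment prediction does not flip when
# neutral punctuation is appended. `predict_sentiment` is a toy placeholder.
def predict_sentiment(text: str) -> str:
    """Stand-in for a real model; returns 'positive' or 'negative'."""
    return "positive" if "good" in text.lower() else "negative"

def test_invariance_to_trailing_punctuation():
    base = "The service was good"
    expected = predict_sentiment(base)
    for variant in (base + ".", base + "!", base + " ..."):
        assert predict_sentiment(variant) == expected, (
            f"Prediction changed for label-preserving variant: {variant!r}"
        )

test_invariance_to_trailing_punctuation()
print("Invariance property holds on these variants")
```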
5. Human perception and interaction
Overreliance: Users tend to perceive AI systems as more objective or capable than traditional software, which can lead to excessive trust and a failure to intervene when errors occur.
Loss of context: Translating complex human phenomena into mathematical models for AI often entails a loss of context, complicating the management of individual and societal impacts.
What are the risks related to artificial intelligence (AI)?
1. Algorithmic bias and discrimination
This is one of the best-documented risks. An AI system reflects the data on which it was trained: if that data contains biases, the system can reproduce and even amplify them in its decisions, potentially leading to discrimination.
A prominent example is Amazon’s recruiting algorithm, where analysts found in 2018 that the program penalized female applicants because it had been trained on résumé datasets in which men were historically overrepresented in technical roles.
Beyond recruitment, medical diagnostic systems may return less accurate results for historically underrepresented populations, and predictive policing tools may disproportionately target certain marginalized communities.
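A simple, widely used screening check for such outcomes is the disparate impact ratio: the selection rate of one group divided by that of another. The sketch below applies it to purely synthetic decisions, using the "four-fifths rule" from US employment practice as an illustrative warning threshold.

```python
# Bias screening sketch: disparate impact ratio on synthetic screening outcomes.
import numpy as np

# 1 = candidate shortlisted, 0 = rejected (invented data for illustration)
group_a = np.array([1, 1, 0, 1, 1, 0, 1, 1, 1, 0])
group_b = np.array([1, 0, 0, 1, 0, 0, 1, 0, 0, 0])

rate_a, rate_b = group_a.mean(), group_b.mean()
ratio = rate_b / rate_a

print(f"Selection rates: A={rate_a:.2f}, B={rate_b:.2f}, ratio={ratio:.2f}")
if ratio < 0.8:  # the "four-fifths rule" threshold, used here illustratively
    print("Warning: possible adverse impact - audit the model and training data")
```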
2. Privacy and data protection violations
AI can have a severe impact on the right to privacy. It can be used in facial recognition devices or to profile and track people online. It can also combine different data sources to infer new, sometimes sensitive, information about a person, producing unexpected results.
Technically, large language models require immense volumes of training data. Data scraped from the web is often collected without users’ consent and may contain identifiable information.
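To illustrate why this matters, even a crude scan can surface identifiable information in scraped text. The patterns below are deliberately simplified regular expressions; production pipelines rely on far more robust detectors.

```python
# Illustrative PII scan over scraped text; patterns are simplified on purpose.
import re

PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "phone_fr": re.compile(r"\b0[1-9](?:[ .-]?\d{2}){4}\b"),  # simplified French format
}

def find_pii(text: str) -> dict[str, list[str]]:
    """Return the matches found in the text, grouped by PII category."""
    return {name: pattern.findall(text) for name, pattern in PII_PATTERNS.items()}

sample = "Contact jane.doe@example.com or call 06 12 34 56 78 for details."
print(find_pii(sample))
```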
3. Disinformation, deepfakes, and democratic manipulation
AI may also pose a risk to democracy: recommender systems are blamed for creating "echo chambers" on the web, presenting individuals only with content they already agree with. AI is also used to create deepfakes. These phenomena polarize public discourse and can have major political consequences.
The DGSI illustrates this risk concretely: the manager of a French industrial site received a videoconference call from someone presenting themselves as the group's CEO, whose physical appearance and voice exactly matched the CEO's. It was in fact an attempted fraud using a deepfake that combined the CEO's face and voice, generated by AI.
4. Cybersecurity risks
Malicious actors can exploit AI to launch cyberattacks: cloning voices, generating fake identities and convincing phishing emails, with the aim of scamming, hacking, or stealing identities. According to IBM, the global average cost of a data breach was $4.88 million in 2024, and only 24% of generative AI initiatives are secured.
Attackers can also manipulate an AI system’s input data—for example, subtly altering an image—to fool the algorithm and cause errors or dangerous behavior in critical systems such as autonomous vehicles or medical devices. These are called adversarial example attacks.
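The fast gradient sign method (FGSM) is the textbook form of this attack: each input value is nudged in the direction that most increases the model's loss. The sketch below uses an untrained toy PyTorch model, purely for illustration.

```python
# FGSM sketch: perturb an input along the sign of the loss gradient.
# The model is an untrained placeholder; epsilon is an illustrative budget.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))  # toy classifier
loss_fn = nn.CrossEntropyLoss()

image = torch.rand(1, 1, 28, 28, requires_grad=True)  # stand-in input
label = torch.tensor([3])                             # stand-in true class
epsilon = 0.03                                        # perturbation budget

loss = loss_fn(model(image), label)
loss.backward()

# Step in the gradient-sign direction, then clamp back to a valid pixel range.
adversarial = (image + epsilon * image.grad.sign()).clamp(0, 1).detach()
print("Max pixel change:", (adversarial - image).abs().max().item())
```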
5. System opacity: the "black box" problem
Decisions made by some AI systems are described as "black boxes," difficult for users to understand or challenge, which complicates informed decision-making and can reduce trust.
This problem of explainability is directly linked to legal liability: in the event of a dispute, the use of AI may complicate litigation resolution, especially if the organization cannot demonstrate transparency or compliance in data processing, or cannot clearly justify automated decision-making before relevant legal authorities.
6. Hallucinations and information reliability
Hallucinations, i.e., plausible-sounding but false or fabricated content produced by AI, can have serious consequences if not detected and corrected: inappropriate strategic decisions, erroneous financial reports, non-compliant contracts, or the spread of misinformation.
AI systems also introduce risks related to data quality. Result reliability depends directly on the quality, representativeness, and timeliness of the data used. Incomplete, biased, or outdated data can lead to incorrect outcomes, undermining the relevance of decisions made and trust in systems.
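As a hedged sketch of what a minimal quality gate might look like before retraining or inference, the function below flags missing values and stale data with pandas; the column names and thresholds are illustrative assumptions, not recommendations.

```python
# Minimal data-quality gate; column names and thresholds are placeholders.
from datetime import datetime, timedelta
import pandas as pd

def check_quality(df: pd.DataFrame, timestamp_col: str = "updated_at") -> list[str]:
    """Return human-readable data-quality warnings for a dataframe."""
    warnings = []
    for col, rate in df.isna().mean().items():
        if rate > 0.05:  # more than 5% missing values
            warnings.append(f"{col}: {rate:.0%} missing values")
    if timestamp_col in df.columns:
        age = datetime.now() - pd.to_datetime(df[timestamp_col]).max()
        if age > timedelta(days=30):
            warnings.append(f"data is stale: last update {age.days} days ago")
    return warnings

df = pd.DataFrame({"income": [50_000, None, 62_000], "updated_at": ["2023-01-01"] * 3})
print(check_quality(df) or "All quality checks passed")
```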
7. Impact on employment and labor market transformation
A major risk of AI is the automation of repetitive and analytical tasks, which threatens many jobs traditionally performed by humans, especially in industrial production, logistics, retail, and administrative services.
8. Environmental risks
Another often-overlooked risk concerns the environmental impact of AI systems. The development, training, and deployment of models—especially large models—require considerable computing resources, involving significant energy consumption and infrastructure. Training a model can occupy data centers for days or weeks, with a substantial carbon footprint tied to electricity production, which in some regions still depends on fossil fuels.
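The order of magnitude can be estimated with a common back-of-envelope formula: energy equals accelerator count times power draw times training hours times the data center's PUE, and emissions equal that energy times the grid's carbon intensity. Every number in the sketch below is an illustrative assumption, not a measurement.

```python
# Back-of-envelope training-emissions estimate; all inputs are assumptions.
gpu_count = 64
gpu_power_kw = 0.4         # ~400 W per accelerator under load (assumed)
training_hours = 14 * 24   # two weeks of training (assumed)
pue = 1.2                  # data-center power usage effectiveness (assumed)
grid_kg_co2_per_kwh = 0.4  # grid carbon intensity; varies widely by region

energy_kwh = gpu_count * gpu_power_kw * training_hours * pue
emissions_tonnes = energy_kwh * grid_kg_co2_per_kwh / 1000

print(f"Estimated energy: {energy_kwh:,.0f} kWh")
print(f"Estimated emissions: {emissions_tonnes:.1f} tonnes CO2e")
```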
9. Systemic risks of general-purpose AI models (GPAI)
Some general-purpose AI models, capable of performing a wide range of tasks and serving as the foundation for many AI systems in the EU, could pose systemic risks if they are very powerful or widely deployed.
10. Intellectual property risks
AI systems, especially generative models, rely on vast training datasets whose origins and associated rights are not always clearly identified. This raises complex questions about the use of copyrighted content, notably when works, texts, images, or code have been used without explicit authorization. Downstream, AI-generated content can itself closely reproduce or be heavily inspired by existing works, exposing organizations to infringement or other intellectual property risks.
In summary, AI-related risks unfold across four interdependent levels: individual (discrimination, privacy), organizational (cyberattacks, legal liability), societal (democracy, employment), and systemic (critical infrastructure security).
How to manage these risks with a platform like Dastra?
Rigorous governance, combining regulatory compliance, algorithmic transparency, and effective human oversight, is now indispensable for any organization deploying AI systems.
To manage these AI-specific risks effectively, organizations must adopt a structured and continuous approach. Solutions like Dastra make it possible to operationalize this risk management through dedicated modules.
The Risk Management module supports the identification, assessment, and monitoring of AI-related risks, covering concepts such as model drift, bias, and AI-specific vulnerabilities; a hypothetical example of what such a risk record can capture is sketched below.
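The sketch below is not Dastra's actual data model; it is only an invented illustration of the kind of information a structured AI risk record typically holds.

```python
# Hypothetical AI risk-register entry (NOT Dastra's real schema), illustrating
# the fields such a record typically captures: scoring, mitigations, ownership.
from dataclasses import dataclass, field
from datetime import date

@dataclass
class AIRiskEntry:
    name: str
    category: str            # e.g. "drift", "bias", "security"
    likelihood: int          # 1 (rare) to 5 (almost certain)
    impact: int              # 1 (negligible) to 5 (critical)
    mitigations: list[str] = field(default_factory=list)
    owner: str = "unassigned"
    review_date: date | None = None

    @property
    def score(self) -> int:
        """Simple likelihood x impact scoring, as in many risk matrices."""
        return self.likelihood * self.impact

risk = AIRiskEntry(
    name="Concept drift on credit-scoring model",
    category="drift",
    likelihood=4,
    impact=3,
    mitigations=["monthly KS-test monitoring", "scheduled retraining"],
)
print(risk.name, "- risk score:", risk.score)
```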
In parallel, the Compliance module incorporates frameworks such as the NIST AI RMF, translating its principles into concrete requirements, tests, and controls to implement.
This approach enables a shift from theoretical understanding of risks to operational management, with evidence of compliance, continuous monitoring, and the ability to demonstrate risk control to stakeholders and regulators.
