Global AI Regulatory Eligibility Questionnaire
1. Introduction
2. SECTION 1 — SYSTEM IDENTIFICATION & CONTEXT
Under the EU AI Act (Art. 3), NIST AI RMF, and ISO 42001, AI includes any system using statistical, logical, symbolic, or machine learning techniques to generate outcomes influencing decisions.
Different model types trigger different regulatory obligations (e.g., foundation models → AI Act GPAI, China GenAI Measures).
High-risk domains are listed in EU AI Act Annex III; industry-specific rules exist in China, US, Brazil.
Provider/Deployer distinction drives regulatory obligations (AI Act, Colorado, China).
Serving vulnerable users (e.g., children, elderly, persons with disabilities) raises the system’s risk under the OECD Principles, the EU AI Act (exploiting vulnerabilities is prohibited under Art. 5), and CPRA.
Decision-making systems fall under CPRA ADMT, Colorado, GDPR Art. 22.
Biometric or behavioral surveillance triggers strict regimes (AI Act prohibited, PIPL, China Algorithmic Regs).
User-facing or interactive deployment triggers transparency duties under AI Act Art. 52, the China GenAI Measures, and the OECD Principles.
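The Section 1 notes above describe a mapping from system attributes to triggered regimes. A minimal sketch of that lookup, assuming hypothetical names (`SystemProfile`, `TRIGGERS`) and an illustrative, non-exhaustive mapping drawn only from the notes above:

```python
# Illustrative sketch, not an official mapping: encode Section 1 answers as
# boolean flags and collect the regimes each "yes" answer implicates.
from dataclasses import dataclass

# Hypothetical flag -> frameworks table, taken from the notes above.
TRIGGERS = {
    "foundation_model": ["EU AI Act GPAI", "China GenAI Measures"],
    "high_risk_domain": ["EU AI Act Annex III"],
    "automated_decisions": ["CPRA ADMT", "Colorado AI Act", "GDPR Art. 22"],
    "biometric_surveillance": ["EU AI Act (prohibited uses)", "PIPL", "China Algorithmic Regs"],
    "user_facing": ["AI Act Art. 52", "China GenAI Measures", "OECD transparency"],
}

@dataclass
class SystemProfile:
    foundation_model: bool = False
    high_risk_domain: bool = False
    automated_decisions: bool = False
    biometric_surveillance: bool = False
    user_facing: bool = False

def triggered_regimes(profile: SystemProfile) -> list[str]:
    """Collect the regimes implicated by each 'yes' answer, de-duplicated."""
    regimes: list[str] = []
    for flag, frameworks in TRIGGERS.items():
        if getattr(profile, flag):
            for framework in frameworks:
                if framework not in regimes:
                    regimes.append(framework)
    return regimes
```

A chatbot making automated decisions, for example, would surface both the ADMT-style rules and the Art. 52-style disclosure duties in one pass.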
3. SECTION 2 — DATA, PRIVACY & SENSITIVE INFORMATION
Personal data triggers GDPR, CPRA, PIPL, LGPD, and Colorado Privacy Act. Even pseudonymized or inferred data is considered personal under several laws.
Sensitive data triggers stricter regimes: GDPR Art. 9, CPRA “sensitive data”, PIPL “sensitive data”, Colorado sensitivity rules, and high-risk classification under AI Act Annex III.
Profiling = automated processing evaluating personal aspects (GDPR Art. 4(4)).
Profiling triggers CPRA ADMT and the Colorado AI Act for “high-risk” automated decisions.
Biometric data (face, voice, gait, keystroke dynamics, fingerprints, iris, vein patterns) activates high-risk categories under the EU AI Act (Annex III), restricted processing under GDPR, and strict obligations under PIPL and China’s biometrics regulations.
Children’s data triggers heightened protection: CPRA minors, GDPR Art. 8, PIPL minors protection, China child-specific safeguards.
Third-party datasets require provenance, licensing, risk documentation (ISO 42001, AI Act data governance, NIST MAP).
Synthetic data may still embed bias; human-labeled data raises fairness and sourcing issues.
Cross-border transfers trigger GDPR Chapter V, PIPL export rules, LGPD Art. 33, and CPRA service-provider contract requirements.
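The data-category triggers in Section 2 can likewise be sketched as a simple flagging routine. Function name, category strings, and note wording are all assumptions for illustration, summarizing only the notes above:

```python
# Hypothetical sketch: flag the data-related triggers listed in Section 2.
def data_triggers(categories: set[str], cross_border: bool) -> list[str]:
    """Return compliance notes for the declared data categories."""
    notes: list[str] = []
    if "personal" in categories:
        notes.append("Personal data: GDPR, CPRA, PIPL, LGPD, Colorado apply")
    if categories & {"health", "biometric", "children"}:
        notes.append("Sensitive data: GDPR Art. 9, CPRA sensitive data, PIPL sensitive data")
    if "children" in categories:
        notes.append("Minors: GDPR Art. 8, CPRA minors, PIPL minors protection")
    if cross_border:
        notes.append("Transfers: GDPR Ch. V, PIPL export rules, LGPD Art. 33")
    return notes
```

Note that pseudonymized or inferred data would still set the `"personal"` flag under several of these laws, per the Section 2 notes.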
4. SECTION 3 — RISK & IMPACT DRIVERS (GLOBAL HIGH-RISK MAPPING)
Safety-sensitive applications trigger high-risk classification under EU AI Act Annex III (healthcare, machinery, transportation), ISO 42001, and OECD Safety Principle.
Consequential automated decision-making triggers CPRA ADMT, Colorado high-risk AI, GDPR Art. 22, and the AI Act in domains such as HR, credit, and welfare.
EU AI Act Annex III covers creditworthiness, biometric identification, access to public services, employment. Colorado & CPRA apply to ADM systems affecting rights.
AI Act Annex III includes critical infrastructure operation & safety functions.
Greater autonomy means higher risk; many frameworks require “meaningful human oversight”.
China Algorithmic Recommendation Regulation (2022) governs any personalized ranking or content curation.
EU AI Act prohibits certain biometric and predictive policing systems.
China and the US also impose strict controls on such systems.
China GenAI Measures (2023) regulate publicly accessible generative AI outputs.
Frameworks such as OECD, ISO 42001, NIST RMF require classification of risk severity.
Contestability is required under GDPR Art. 22, CPRA ADMT, Colorado AI Act, and OECD Human Agency principle.
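The risk drivers in Section 3 follow a tiered logic (prohibited uses dominate high-risk uses, which dominate everything else), mirroring the AI Act’s structure. A minimal sketch, with illustrative and deliberately incomplete category sets:

```python
# Hedged sketch of a severity classifier per the Section 3 mapping.
# The membership sets below are examples drawn from the notes above,
# not a complete restatement of Annex III or the Art. 5 prohibitions.
def risk_tier(uses: set[str]) -> str:
    """Classify declared uses into the most severe applicable tier."""
    prohibited = {"social_scoring", "predictive_policing_biometric"}
    high_risk = {"credit", "employment", "critical_infrastructure",
                 "biometric_id", "public_services", "healthcare"}
    if uses & prohibited:
        return "prohibited"      # the strictest tier always wins
    if uses & high_risk:
        return "high-risk"
    return "limited-risk"
```

The ordering matters: a system with both a prohibited and a high-risk use is classified by its worst use, which is how the questionnaire’s severity mapping is meant to be read.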
5. SECTION 4 — TRANSPARENCY, HUMAN OVERSIGHT & USER INTERACTION
AI Act Art. 52 requires AI-generated content or interactions to be clearly disclosed. China GenAI Measures impose explicit disclosure, especially for public-facing systems.
CPRA ADMT, Colorado AI Act, GDPR Art. 22 and OECD mandate user notification for automated decisions affecting rights.
AI Act Art. 14 requires human oversight for high-risk AI. Oversight must be effective, not symbolic (“rubber-stamping”). NIST & ISO also require governance oversight.
Required under EU AI Act for high-risk systems (Art. 15 safety), ISO 42001 safety controls, NIST RMF “Manage”.
Explainability is mandated by CPRA ADMT, Colorado, OECD Principles, China Rules (algorithmic transparency), and required by AI Act for high-risk AI.
Logging is required by EU AI Act (Art. 12), ISO 42001 (documented lifecycle), and NIST RMF for traceability and auditability.
Monitoring is a requirement under the AI Act (Art. 61 post-market monitoring), NIST RMF (Manage), ISO 42001 (continual improvement), and China Regs (algorithm stability).
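The logging duty noted above (AI Act Art. 12, ISO 42001, NIST RMF) amounts to recording enough per-decision context to reconstruct what the system did. A minimal sketch; the field names are illustrative, not mandated by any regulation:

```python
# Hedged sketch: one structured, machine-readable log record per automated
# decision, supporting the traceability and auditability duties above.
import json
import datetime

def decision_log_entry(model_version: str, inputs_hash: str,
                       output: str, human_reviewed: bool) -> str:
    """Serialize one decision record as a JSON line."""
    record = {
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "model_version": model_version,
        # A hash rather than raw inputs limits personal-data retention in logs.
        "inputs_hash": inputs_hash,
        "output": output,
        "human_reviewed": human_reviewed,
    }
    return json.dumps(record)
```

Recording `human_reviewed` per decision is one way to evidence that oversight is effective rather than symbolic, per the Art. 14 note above.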
6. SECTION 5 — AI LIFECYCLE, GOVERNANCE & RISK MANAGEMENT MATURITY
Data lineage is required under ISO/IEC 42001 §6.3, EU AI Act data governance (Art. 10), and NIST RMF “MAP” for traceability and auditability.
Version tracking ensures auditability and impacts safety & compliance. Required under ISO 42001, NIST RMF, OECD and AI Act (technical documentation).
Training documentation is required by EU AI Act Annex IV, ISO 42001, and NIST RMF (“MAP” and “MEASURE”).
Monitoring is required under EU AI Act Art. 61 (post-market monitoring), ISO 42001, NIST RMF (“MANAGE”), China Algorithmic Regs “algorithm stability”.
Incident management is required under EU AI Act Art. 62 reporting, ISO 42001 incident process, NIST RMF risk mitigation, and OECD accountability.
ISO 42001 requires role assignment; AI Act distinguishes provider/deployer responsibilities; OECD requires accountability; NIST RMF expects clear governance structure.
Vendor risk management is required under CPRA (service providers), Colorado (developers), ISO 42001 (supply chain), NIST RMF, China (provider accountability).
Continuous improvement is required under ISO 42001 (continual improvement), NIST RMF (the “MANAGE” function), China Regulations (periodic reviews), and OECD accountability.
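The version-tracking and documentation notes in Section 5 can be sketched as a small metadata record plus a gap check. `ModelRecord` and its fields are assumptions chosen to echo the documentation items above, not the Annex IV schema itself:

```python
# Illustrative sketch: minimal per-version model metadata and a check for
# which documentation items (cf. AI Act Annex IV, ISO 42001) are still empty.
from dataclasses import dataclass, field, asdict

@dataclass
class ModelRecord:
    version: str
    training_data_sources: list[str]
    intended_purpose: str
    known_limitations: list[str] = field(default_factory=list)

def documentation_gaps(record: ModelRecord) -> list[str]:
    """Report which documentation fields are still empty."""
    return [name for name, value in asdict(record).items() if not value]
```

Running the gap check as part of each release gate is one way to operationalize the continuous-improvement and audit-trail expectations listed above.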
7. SECTION 6 — GEOGRAPHIC FOOTPRINT & JURISDICTION TRIGGERS
Deployment location triggers extraterritorial applicability of AI regulations such as EU AI Act, CPRA, Colorado AI Act, China Algorithmic Regulation, PIPL, Brazil PL 2338, Singapore Guidelines, Japan/Korea/India AI frameworks.
Laws usually protect the individual, not the server.
Even if the processing is done elsewhere, the law applies if the user lives in the jurisdiction (e.g., PIPL, GDPR, CPRA).
Even if AI processing occurs in-region, cloud-vendor infrastructure may create implicit cross-border transfers: GDPR Ch. V, PIPL Export Rules, LGPD, CPRA vendor requirements.
AI regulations apply whenever users in the jurisdiction are affected, regardless of where the AI system is hosted or developed (AI Act Art. 2, CPRA extraterritoriality, PIPL extraterritorial effect).
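Section 6’s rule is that applicability keys off where affected users are, not where the system is hosted. A minimal sketch under that assumption; the jurisdiction codes and law lists are illustrative and non-exhaustive:

```python
# Hedged sketch of "the law follows the user": hosting region does not
# gate applicability (AI Act Art. 2, CPRA, PIPL extraterritorial effect),
# though a differing hosting region may add cross-border transfer duties.
USER_LAWS = {
    "EU": ["EU AI Act", "GDPR"],
    "CA-US": ["CPRA"],
    "CO-US": ["Colorado AI Act"],
    "CN": ["PIPL", "China Algorithmic Regulation"],
    "BR": ["LGPD", "PL 2338"],
}

def applicable_laws(user_jurisdictions: set[str], hosting_region: str) -> list[str]:
    """Collect laws triggered by affected users' jurisdictions."""
    # hosting_region is deliberately unused for applicability itself:
    # the laws above protect the individual, not the server.
    laws: list[str] = []
    for jurisdiction in sorted(user_jurisdictions):
        laws.extend(USER_LAWS.get(jurisdiction, []))
    return laws
```

So a US-hosted system with EU and China users would still surface GDPR, the EU AI Act, PIPL, and the China Algorithmic Regulation, matching the extraterritoriality notes above.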
Attribution / NonCommercial
CC-BY-NC