
Audit template: Global AI Regulatory Eligibility Questionnaire

This assessment identifies the regulatory applicability, risk classification, and compliance obligations arising from global AI frameworks.

1. Introduction

2. SECTION 1 — SYSTEM IDENTIFICATION & CONTEXT

2.1. Q1 — Does the system use machine learning, statistical inference, or algorithmic decision-making?

Under the EU AI Act (Art. 3), NIST AI RMF, and ISO 42001, AI includes any system using statistical, logical, symbolic, or machine learning techniques to generate outcomes influencing decisions.

2.2. Q2 — What type of AI model does the system rely on?

Different model types trigger different regulatory obligations (e.g., foundation models fall under the AI Act's general-purpose AI (GPAI) rules and China's GenAI Measures).

2.3. Q3 — In which domain is the system used?

High-risk domains are listed in EU AI Act Annex III; industry-specific rules exist in China, the US, and Brazil.

2.4. Q4 — Who develops the system (provider role)?

The provider/deployer distinction drives regulatory obligations (AI Act, Colorado AI Act, China).

2.5. Q5 — Who are the impacted users?

The vulnerability of the impacted users increases the system's risk level under the OECD AI Principles, the AI Act, and the CPRA.

2.6. Q6 — Does the system produce decisions or recommendations affecting individuals?

Decision-making systems fall under the CPRA's rules on automated decision-making technology (ADMT), the Colorado AI Act, and GDPR Art. 22.

2.7. Q7 — Does the system perform monitoring, surveillance, or tracking?

Biometric or behavioral surveillance triggers strict regimes (AI Act prohibited practices, PIPL, China's algorithmic regulations).

2.8. Q8 — Does the system interact with humans autonomously (chat, voice, avatars)?

Autonomous interaction with humans triggers transparency duties under AI Act Art. 50 (Art. 52 in the draft), China GenAI Measures, and the OECD transparency principle.

3. SECTION 2 — DATA, PRIVACY & SENSITIVE INFORMATION

3.1. Q9 — Does the system process personal data (directly or indirectly)?

Personal data triggers the GDPR, CPRA, PIPL, LGPD, and the Colorado Privacy Act. Even pseudonymized or inferred data is considered personal under several of these laws.
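To illustrate why pseudonymization does not take data out of scope, here is a minimal Python sketch of keyed pseudonymization (an HMAC over a direct identifier); the key and function names are illustrative assumptions, not requirements of any cited law.

```python
# Minimal sketch of keyed pseudonymization over a direct identifier.
# SECRET_KEY and the function name are illustrative assumptions.
import hashlib
import hmac

SECRET_KEY = b"rotate-and-store-me-separately"  # hypothetical key

def pseudonymize(identifier: str) -> str:
    """Return a stable HMAC-SHA256 token for a direct identifier."""
    return hmac.new(SECRET_KEY, identifier.encode(), hashlib.sha256).hexdigest()

# The token is deterministic: anyone holding the key can re-link it to the
# person, which is why pseudonymized data usually remains "personal data".
assert pseudonymize("user@example.com") == pseudonymize("user@example.com")
```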

3.2. Q10 — Does the system process sensitive personal data (health, biometrics, ethnicity, beliefs, criminal records, political opinions, genetic data)?

Sensitive data triggers stricter regimes: GDPR Art. 9 special categories, "sensitive personal information" under the CPRA and PIPL, Colorado sensitivity rules, and high-risk classification under AI Act Annex III.

3.3. Q11 — Does the system perform profiling or scoring of individuals?

Profiling is automated processing that evaluates personal aspects of an individual (GDPR Art. 4(4)). It triggers the CPRA's ADMT rules and the Colorado AI Act for "high-risk" automated decisions.

3.4. Q12 — Does the system collect or infer biometric data?

(E.g., face, voice, gait, keystroke dynamics, fingerprints, iris, vein patterns.)

Biometric data activates high-risk categories under EU AI Act (Annex III), restricted processing under GDPR, and strict obligations under PIPL & China biometrics regulations.

3.5. Q13 — Does the system use or impact children’s data (under local definitions)?

Children’s data triggers heightened protection: CPRA rules for minors, GDPR Art. 8, PIPL protections for minors, and China's child-specific safeguards.

3.6. Q14 — Are training, validation, or test datasets sourced from external third parties?

Third-party datasets require provenance, licensing, and risk documentation (ISO 42001, AI Act data governance under Art. 10, NIST AI RMF "Map" function).

3.7. Q15 — Are datasets synthetic, human-labeled, or both?

Synthetic data may still embed bias; human-labeled data raises fairness and sourcing issues.

3.8. Q16 — Does the system involve cross-border data transfer or processing?

Cross-border transfers trigger GDPR Chapter V, PIPL data-export rules, LGPD Art. 33, and CPRA service-provider requirements.

4. SECTION 3 — RISK & IMPACT DRIVERS (GLOBAL HIGH-RISK MAPPING)

4.1. Q17 — Does the AI system have the potential to impact human safety (physical harm, operational safety, product safety)?

Safety-sensitive applications trigger high-risk classification under EU AI Act Annex III (healthcare, machinery, transportation), ISO 42001, and OECD Safety Principle.

4.2. Q18 — Does the system evaluate, classify, rank, or score individuals?

This triggers the CPRA's ADMT rules, Colorado high-risk AI provisions, GDPR Art. 22, and the AI Act in domains such as HR, credit, and welfare.

4.3. Q19 — Does the system make or support decisions that may affect fundamental rights (credit, hiring, healthcare access, insurance, mobility)?

EU AI Act Annex III covers creditworthiness, biometric identification, access to public services, and employment. The Colorado AI Act and CPRA apply to automated decision-making systems affecting such rights.

4.4. Q20 — Is the system used inside a regulated or critical infrastructure sector (transportation, energy, telecom, water supply, utilities)?

AI Act Annex III includes critical infrastructure operation & safety functions.

4.5. Q21 — Does the system operate autonomously without systematic human validation?

Greater autonomy means higher risk: many frameworks, including AI Act Art. 14, require "meaningful human oversight".

4.6. Q22 — Does the system influence or personalize content presented to users (recommendation, ranking, feed optimization, nudging)?

China Algorithmic Recommendation Regulation (2022) governs any personalized ranking or content curation.

4.7. Q23 — Does the system involve law enforcement, border control, surveillance, or forensic identification?

The EU AI Act prohibits certain biometric and predictive-policing systems (Art. 5); China and the US impose strict controls as well.

4.8. Q24 — Does the system generate content accessible to the public (e.g., public-facing generative AI outputs)?

China GenAI Measures (2023) regulate publicly accessible generative AI outputs.

4.9. Q25 — Could a failure or incorrect output cause significant financial, physical, psychological, or reputational harm?

Frameworks such as the OECD AI Principles, ISO 42001, and the NIST AI RMF require classifying risks by severity.

4.10. Q26 — Can affected individuals contest decisions or request human review?

Contestability is required under GDPR Art. 22, CPRA ADMT, Colorado AI Act, and OECD Human Agency principle.

5. SECTION 4 — TRANSPARENCY, HUMAN OVERSIGHT & USER INTERACTION

5.1. Q27 — Does the system clearly disclose to users that they are interacting with an AI system?

AI Act Art. 50 requires AI-generated content and AI interactions to be clearly disclosed. China GenAI Measures impose explicit disclosure, especially for public-facing systems.

5.2. Q28 — Are individuals informed when decisions about them are made or supported by automated systems?

The CPRA's ADMT rules, the Colorado AI Act, GDPR Art. 22, and the OECD Principles mandate notifying users of automated decisions affecting their rights.

5.3. Q29 — Is meaningful human oversight integrated into the AI’s critical outputs or decisions?

AI Act Art. 14 requires human oversight for high-risk AI. Oversight must be effective, not symbolic (“rubber-stamping”). NIST & ISO also require governance oversight.

5.4. Q30 — Are fallback, override, or fail-safe mechanisms defined and documented?

Required for high-risk systems under the EU AI Act (Art. 15, accuracy and robustness), ISO 42001 safety controls, and the NIST AI RMF "Manage" function.
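As an illustration, here is a minimal sketch of a fail-safe decision wrapper in Python; the model interface (a scikit-learn-style predict/predict_proba) and the 0.85 confidence threshold are hypothetical assumptions to be replaced by your own risk assessment.

```python
# Minimal sketch of a fail-safe decision wrapper. The model interface
# (predict/predict_proba) and the 0.85 threshold are hypothetical.
from dataclasses import dataclass

CONFIDENCE_THRESHOLD = 0.85  # set per your own risk assessment

@dataclass
class Decision:
    outcome: str           # model label, or "escalate"
    confidence: float
    needs_human_review: bool

def decide_with_fallback(model, features) -> Decision:
    try:
        confidence = max(model.predict_proba([features])[0])
        label = str(model.predict([features])[0])
    except Exception:
        # Fail-safe: any runtime failure escalates instead of guessing.
        return Decision("escalate", 0.0, True)
    if confidence < CONFIDENCE_THRESHOLD:
        # Override path: low-confidence outputs go to a human reviewer.
        return Decision("escalate", confidence, True)
    return Decision(label, confidence, False)
```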

5.5. Q31 — Are model outputs explainable to users, auditors, or regulators?

Explainability is mandated by the CPRA's ADMT rules, the Colorado AI Act, the OECD Principles, and China's algorithmic transparency rules, and is required by the AI Act for high-risk AI.

5.6. Q32 — Are logs maintained for training, inference, errors, and user interactions?

Logging is required by EU AI Act (Art. 12), ISO 42001 (documented lifecycle), and NIST RMF for traceability and auditability.
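A minimal sketch of structured, timestamped inference logging follows; the field names and schema are illustrative assumptions, since Art. 12 does not prescribe a format.

```python
# Minimal sketch of structured inference logging. Field names are
# illustrative; Art. 12 does not prescribe a schema.
import json
import logging
import uuid
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("ai_audit")

def log_inference(model_version: str, inputs: dict, output, latency_ms: float):
    record = {
        "event_id": str(uuid.uuid4()),
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "event_type": "inference",
        "model_version": model_version,
        "inputs": inputs,      # redact or hash personal data before logging
        "output": output,      # must be JSON-serializable
        "latency_ms": latency_ms,
    }
    logger.info(json.dumps(record))

log_inference("credit-scorer-2.3.1", {"income_band": "B"}, "approve", 12.4)
```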

5.7. Q33 — Are output quality, fairness, drift, and bias monitored regularly?

Monitoring is a requirement under the AI Act (Art. 72, post-market monitoring), the NIST AI RMF ("Manage"), ISO 42001 (continuous improvement), and China's regulations (algorithm stability).
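For the fairness part of this monitoring, here is a minimal sketch of a periodic check using the demographic parity gap between two groups' favourable-outcome rates; the metric choice and the 0.1 alert threshold are illustrative assumptions, not values mandated by the frameworks above.

```python
# Minimal sketch of a periodic fairness check: demographic parity gap
# between two groups' favourable-outcome rates. Threshold is illustrative.
def positive_rate(outcomes: list[int]) -> float:
    """Outcomes are 1 (favourable) or 0 (unfavourable) per individual."""
    return sum(outcomes) / len(outcomes) if outcomes else 0.0

def demographic_parity_gap(group_a: list[int], group_b: list[int]) -> float:
    return abs(positive_rate(group_a) - positive_rate(group_b))

# Hypothetical outcome samples for two demographic groups:
if demographic_parity_gap([1, 0, 1, 1], [0, 0, 1, 0]) > 0.1:
    print("fairness alert: investigate and document the disparity")
```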

6. SECTION 5 — AI LIFECYCLE, GOVERNANCE & RISK MANAGEMENT MATURITY

6.1. Q34 — Is data lineage documented (sources, transformations, quality checks)?

Data lineage is required under ISO/IEC 42001 (Annex A data controls), EU AI Act data governance (Art. 10), and the NIST AI RMF "Map" function for traceability and auditability.
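A minimal sketch of a machine-readable lineage record capturing source, license, transformations, and a content hash; the schema fields are assumptions for illustration, not a prescribed format.

```python
# Minimal sketch of a machine-readable dataset lineage record. Schema
# fields are illustrative assumptions.
import hashlib
import json
from dataclasses import asdict, dataclass, field

@dataclass
class DatasetLineage:
    name: str
    source: str                   # vendor, URL, or internal system
    license: str
    transformations: list[str] = field(default_factory=list)
    content_sha256: str = ""

    def record_content(self, raw_bytes: bytes) -> None:
        """Fingerprint the dataset so later audits can verify integrity."""
        self.content_sha256 = hashlib.sha256(raw_bytes).hexdigest()

lineage = DatasetLineage("claims_2024", "internal DWH export", "internal-use")
lineage.transformations.append("dropped rows with null applicant_id")
lineage.record_content(b"...dataset bytes...")
print(json.dumps(asdict(lineage), indent=2))
```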

6.2. Q35 — Are AI model versions tracked, logged, and uniquely identifiable?

Version tracking ensures auditability and supports safety and compliance. It is required under ISO 42001, the NIST AI RMF, the OECD Principles, and the AI Act (technical documentation, Annex IV).
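A minimal sketch of unique model identification, keying a registry entry on the SHA-256 of the serialized artifact plus a semantic version; the registry structure is illustrative, not taken from any of the cited standards.

```python
# Minimal sketch of unique model identification: registry entries keyed by
# a hash of the serialized artifact. Structure is illustrative.
import hashlib

model_registry: dict[str, dict] = {}

def register_model(artifact: bytes, version: str, training_run: str) -> str:
    """Derive a content-based ID so every artifact is uniquely identifiable."""
    model_id = hashlib.sha256(artifact).hexdigest()[:16]
    model_registry[model_id] = {
        "version": version,            # e.g. "2.3.1"
        "training_run": training_run,  # link back to training documentation
    }
    return model_id

model_id = register_model(b"...serialized model...", "2.3.1", "run-2025-10-17-a")
print(model_id, model_registry[model_id])
```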

6.3. Q36 — Are training procedures and evaluation methods documented?

Training documentation is required by EU AI Act Annex IV, ISO 42001, and the NIST AI RMF ("Map" and "Measure").

6.4. Q37 — Is the system monitored in production (drift, quality, anomalies, fairness)?

Monitoring in production is required under EU AI Act Art. 72 (post-market monitoring), ISO 42001, the NIST AI RMF ("Manage"), and China's algorithmic regulations ("algorithm stability").
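A minimal sketch of drift detection using the Population Stability Index (PSI) over binned feature distributions; the 0.2 alert threshold is a widely used rule of thumb, not a value set by any regulation cited here.

```python
# Minimal sketch of drift detection with the Population Stability Index
# (PSI) over binned proportions. The 0.2 threshold is a rule of thumb.
import math

def psi(expected: list[float], actual: list[float]) -> float:
    """Both inputs are binned proportions that each sum to 1."""
    eps = 1e-6  # avoid log(0) on empty bins
    return sum(
        (a - e) * math.log((a + eps) / (e + eps))
        for e, a in zip(expected, actual)
    )

baseline = [0.25, 0.25, 0.25, 0.25]  # training-time feature distribution
live = [0.10, 0.20, 0.30, 0.40]      # current production distribution

if psi(baseline, live) > 0.2:
    print("drift alert: trigger review per the post-market monitoring plan")
```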

6.5. Q38 — Are AI incidents logged, classified, and remediated?

Incident management is required under EU AI Act Art. 73 (serious-incident reporting), the ISO 42001 incident process, NIST AI RMF risk mitigation, and OECD accountability.
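A minimal sketch of an AI incident record with severity classification and remediation status; the severity levels and field names are illustrative assumptions rather than a prescribed taxonomy.

```python
# Minimal sketch of an AI incident record with severity classification.
# Severity levels and fields are illustrative assumptions.
from dataclasses import dataclass
from datetime import datetime, timezone
from enum import Enum

class Severity(Enum):
    LOW = "low"
    MEDIUM = "medium"
    HIGH = "high"  # e.g. rights-impacting or safety-relevant failures

@dataclass
class AIIncident:
    incident_id: str
    description: str
    severity: Severity
    detected_at: str
    remediation: str = "open"
    reported_to_authority: bool = False  # serious incidents may need reporting

incident_log: list[AIIncident] = []
incident_log.append(AIIncident(
    incident_id="INC-0042",
    description="scoring model degraded for one postcode group",
    severity=Severity.HIGH,
    detected_at=datetime.now(timezone.utc).isoformat(),
))
```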

6.6. Q39 — Are governance roles clearly defined (AI owner, risk officer, reviewer, operator)?

ISO 42001 requires role assignment; the AI Act distinguishes provider/deployer responsibilities; the OECD requires accountability; the NIST AI RMF expects a clear governance structure.

6.7. Q40 — Are third-party vendors or external AI providers assessed for AI risks?

Vendor risk management is required under CPRA (service providers), Colorado (developers), ISO 42001 (supply chain), NIST RMF, China (provider accountability).

6.8. Q41 — Are periodic internal reviews or audits performed (quarterly, annual, per release)?

Continuous improvement is required under ISO 42001, the NIST AI RMF "Manage" function, China's regulations (periodic reviews), and OECD accountability.

7. SECTION 6 — GEOGRAPHIC FOOTPRINT & JURISDICTION TRIGGERS

7.1. Q42 — In which regions or countries will the AI system be deployed or used?

Deployment location triggers extraterritorial applicability of AI regulations such as EU AI Act, CPRA, Colorado AI Act, China Algorithmic Regulation, PIPL, Brazil PL 2338, Singapore Guidelines, Japan/Korea/India AI frameworks.

7.2. Q43 — Where are the data subjects located (users, employees, customers)?

Laws generally protect the individual, not the server: even if processing occurs elsewhere, the law applies when the data subject is located in the jurisdiction (e.g., GDPR, PIPL, CPRA).

7.3. Q44 — Are infrastructure, cloud providers, or data centers located outside the country of operation?

Even if AI processing occurs in-region, cloud-vendor infrastructure may create implicit cross-border transfers, triggering GDPR Ch. V, PIPL export rules, LGPD, and CPRA vendor requirements.

7.4. Q45 — Will the decisions or outputs of the AI system affect individuals located in foreign jurisdictions?

AI regulations apply whenever users in the jurisdiction are affected, regardless of where the AI system is hosted or developed (AI Act Art. 2, CPRA extraterritoriality, PIPL extraterritorial effect).

Created: 10/17/2025

Updated: 11/01/2025

License: Creative Commons CC BY-NC (Attribution, NonCommercial)

Author: Paul-Emmanuel Bidault


