⚠️ EU AI Act Full Enforcement: August 2, 2026 · Fines up to €35M or 7% global turnover
EU AI Act — August 2, 2026

Is Your AI System
Legal in the EU After August?

Describe your AI system below. We'll map it against the EU AI Act's prohibited and high-risk categories — and tell you exactly what you need to fix before the enforcement deadline.

€35M
Max fine for
prohibited AI
7%
Global turnover
fine ceiling
Aug 2
Enforcement
starts 2026
3%
Fine for high-risk
AI violations

AI System Compliance Assessment

Answer the questions below — takes 3 minutes. No data is sent to any server; all analysis runs locally in your browser.

Disclaimer: This tool provides a preliminary self-assessment based on publicly available EU AI Act text (Regulation 2024/1689). It is not legal advice. Consult a qualified EU AI Act compliance lawyer for formal assessments, especially for high-risk systems or regulated industries.

Assessment Report

Overall Risk Level

Specific Findings

    EU AI Act — Prohibited AI Practices (Article 5)

    These practices have been banned outright since February 2, 2025, with full enforcement of the Act arriving on August 2, 2026. Fines: up to €35,000,000 or 7% of global annual turnover.

    Subliminal Manipulation

    AI systems that deploy subliminal techniques beyond a person's consciousness to materially distort their behaviour in a way that causes or is likely to cause harm.

    Art. 5(1)(a)

    Exploiting Vulnerabilities

    AI exploiting vulnerabilities of specific groups (age, disability, social/economic situation) to distort their behaviour causing harm.

    Art. 5(1)(b)

    Social Scoring

    AI systems scoring natural persons based on their social behaviour or personal characteristics, leading to detrimental or unfavourable treatment. The ban covers both public authorities and private actors.

    Art. 5(1)(c)

    Real-Time Remote Biometric ID in Public

    Real-time remote biometric identification of natural persons in publicly accessible spaces for law enforcement purposes. (Limited exceptions apply.)

    Art. 5(1)(h)

    Emotion Recognition at Work/Education

    AI systems that infer the emotions of natural persons in the workplace or in educational institutions. (Narrow medical/safety exceptions.)

    Art. 5(1)(f)

    Biometric Categorisation for Sensitive Data

    Categorisation of individuals based on biometric data to deduce race, political opinions, religious beliefs, sexual orientation, or trade union membership.

    Art. 5(1)(g)

    Untargeted Facial Image Scraping

    Scraping the internet or CCTV footage to create or expand facial recognition databases.

    Art. 5(1)(e)

    Predictive Policing on Individuals

    AI assessing the risk of a natural person committing criminal offences based solely on profiling or on personality and character traits.

    Art. 5(1)(d)

    Key High-Risk AI Categories (Annex III)

    These require conformity assessments, CE marking, and registration in the EU database before deployment.

    High Risk

    HR & Recruitment AI

    CV filtering, candidate ranking, interview analysis, promotion decisions, contract terminations. Any AI affecting employment decisions for EU workers.

    High Risk

    Credit Scoring & Financial AI

    AI used in creditworthiness assessment of natural persons, insurance pricing, lending eligibility decisions.

    High Risk

    Educational Assessment AI

    AI determining access to or evaluation within education, including exam proctoring, grading, dropout prediction.

    High Risk

    Critical Infrastructure AI

    AI in management of electricity, water, gas, transport networks, digital infrastructure, and essential services.

    High Risk

    Medical Devices & Health AI

    AI as medical devices or safety components of medical devices, clinical decision support, diagnostic AI.

    High Risk

    Law Enforcement AI

    AI for individual risk assessment, polygraph testing, crime hotspot prediction, deepfake/evidence analysis, crime victim profiling.
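    The prohibited and high-risk categories above amount to a rules table, which is what makes a browser-local pre-screen like this tool feasible. The sketch below is purely illustrative: the keywords, category names, and matching logic are assumptions for demonstration, not the tool's actual rules and not a legal test.

    ```typescript
    // Illustrative keyword-based pre-screen. The rules below are demo
    // assumptions, not legal criteria from Regulation 2024/1689.
    type RiskLevel = "prohibited" | "high-risk";

    interface Rule {
      level: RiskLevel;
      article: string;    // e.g. "Art. 5(1)(c)" or "Annex III"
      keywords: string[]; // crude trigger terms for demonstration only
    }

    const RULES: Rule[] = [
      { level: "prohibited", article: "Art. 5(1)(c)", keywords: ["social scoring", "social score"] },
      { level: "prohibited", article: "Art. 5(1)(e)", keywords: ["facial recognition database", "face scraping"] },
      { level: "high-risk",  article: "Annex III",    keywords: ["cv screening", "candidate ranking", "credit scoring"] },
    ];

    // Return the most severe matching rule, or null if nothing matched.
    function classify(description: string): Rule | null {
      const text = description.toLowerCase();
      const matches = RULES.filter(r => r.keywords.some(k => text.includes(k)));
      if (matches.length === 0) return null;
      // A prohibited match outranks a high-risk match.
      return matches.find(r => r.level === "prohibited") ?? matches[0];
    }

    console.log(classify("We rank applicants with CV screening AI")?.level); // "high-risk"
    ```

    A real assessment needs far more than keyword matching (context, deployer role, exceptions), which is why the questionnaire above asks structured questions rather than free-text scanning alone.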

    EU AI Act FAQ

    Does the EU AI Act apply to my company if we're not in the EU?

    Yes, if your AI system is used by people in the EU or your AI output affects EU residents, the Act applies regardless of where your company is headquartered. This is similar to GDPR's extraterritorial reach.

    What's the difference between "prohibited" and "high-risk"?

    Prohibited practices are banned outright, with only narrow, tightly defined exceptions (for example, certain law-enforcement uses of real-time biometric identification). High-risk systems are allowed but require extensive compliance work: conformity assessment, CE marking, technical documentation, data governance, and registration in the EU AI database.

    When exactly do prohibitions take effect?

    The prohibition rules (Article 5) have applied since February 2, 2025, six months after the Regulation entered into force on August 1, 2024. GPAI model rules applied from August 2, 2025, and most high-risk AI rules apply from August 2, 2026, when the bulk of the Act becomes enforceable.

    We only use AI internally (HR tools, Slack AI, etc.) — do we still need to comply?

    Yes. If your internal AI tool falls into a high-risk category (like recruitment or workplace monitoring), the obligations apply. This includes commercially purchased AI tools that you deploy.

    What is a "conformity assessment"?

    For high-risk AI, you must demonstrate compliance through documentation (technical file), risk assessment, data governance procedures, human oversight measures, and accuracy metrics. Some categories require third-party audits.

    Is a ChatGPT-based chatbot a high-risk AI system?

    Generally no, if it is used for general-purpose tasks. However, if you integrate it into a high-risk use case (like scoring job applicants), the resulting system becomes high-risk even if the underlying model isn't.