Is Your AI System
Legal in the EU After August?
Describe your AI system below. We'll map it against the EU AI Act's prohibited and high-risk categories — and tell you exactly what you need to fix before the enforcement deadline.
8 prohibited AI practices
€35M fine ceiling
Enforcement starts 2026
7% of global turnover for AI violations
AI System Compliance Assessment
Answer the questions below — takes 3 minutes. No data is sent to any server; all analysis runs locally in your browser.
Assessment Report
Specific Findings
EU AI Act — Prohibited AI Practices (Article 5)
These are BANNED outright — the prohibitions have applied since February 2, 2025. Fines: up to €35,000,000 or 7% of global annual turnover, whichever is higher.
Subliminal Manipulation
AI systems that deploy subliminal techniques beyond a person's consciousness to materially distort their behaviour in a way that causes or is likely to cause harm.
Exploiting Vulnerabilities
AI exploiting vulnerabilities of specific groups (age, disability, social/economic situation) to distort their behaviour causing harm.
Social Scoring
AI systems evaluating or classifying natural persons based on social behaviour or personal characteristics, leading to detrimental or unfavourable treatment. The final Act covers both public authorities and private actors.
Real-Time Remote Biometric ID in Public
Real-time remote biometric identification of natural persons in publicly accessible spaces for law enforcement. (Limited exceptions apply.)
Emotion Recognition at Work/Education
AI systems for inferring emotions of natural persons in the areas of workplace and educational institutions. (Narrow medical/safety exceptions.)
Biometric Categorisation for Sensitive Data
Categorisation of individuals based on biometric data to deduce race, political opinions, religious beliefs, sexual orientation, or trade union membership.
Untargeted Facial Image Scraping
Scraping the internet or CCTV footage to create or expand facial recognition databases.
Predictive Policing on Individuals
AI assessing the risk of a natural person to commit criminal offences solely based on profiling or personality/character traits.
Key High-Risk AI Categories (Annex III)
These require conformity assessments, CE marking, and registration in the EU database before deployment.
HR & Recruitment AI
CV filtering, candidate ranking, interview analysis, promotion decisions, contract terminations. Any AI affecting employment decisions for EU workers.
Credit Scoring & Financial AI
AI used in creditworthiness assessment of natural persons, insurance pricing, lending eligibility decisions.
Educational Assessment AI
AI determining access to or evaluation within education, including exam proctoring, grading, dropout prediction.
Critical Infrastructure AI
AI in management of electricity, water, gas, transport networks, digital infrastructure, and essential services.
Medical Devices & Health AI
AI as medical devices or safety components of medical devices, clinical decision support, diagnostic AI.
Law Enforcement AI
AI for individual risk assessment, polygraph testing, crime hotspot prediction, deepfake/evidence analysis, crime victim profiling.
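As a rough illustration of how a local, in-browser assessment like the one above might work, here is a minimal rule-based sketch that maps a free-text system description to a risk tier. The rule set, category names, and keywords below are hypothetical simplifications for illustration, not the Act's legal tests.

```typescript
// Hypothetical sketch: map a system description to an EU AI Act risk tier.
// Rules and keywords are illustrative only — a real assessment needs the
// full questionnaire logic and the Act's actual legal criteria.

type RiskTier = "prohibited" | "high-risk" | "minimal";

interface Rule {
  tier: RiskTier;
  category: string;
  keywords: string[]; // matched against the lowercased description
}

// Prohibited rules come first so they take precedence over high-risk ones.
const RULES: Rule[] = [
  { tier: "prohibited", category: "Social scoring", keywords: ["social score", "social scoring"] },
  { tier: "prohibited", category: "Emotion recognition at work", keywords: ["emotion recognition"] },
  { tier: "high-risk", category: "HR & recruitment (Annex III)", keywords: ["cv", "candidate", "recruit"] },
  { tier: "high-risk", category: "Credit scoring (Annex III)", keywords: ["credit", "lending"] },
];

function assess(description: string): { tier: RiskTier; category: string } {
  const text = description.toLowerCase();
  for (const rule of RULES) {
    if (rule.keywords.some((kw) => text.includes(kw))) {
      return { tier: rule.tier, category: rule.category };
    }
  }
  return { tier: "minimal", category: "No prohibited or high-risk match" };
}
```

Because everything runs as plain client-side logic over the user's answers, no description ever needs to leave the browser.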
EU AI Act FAQ
Does the EU AI Act apply to my company if we're not in the EU?
Yes, if your AI system is used by people in the EU or your AI output affects EU residents, the Act applies regardless of where your company is headquartered. This is similar to GDPR's extraterritorial reach.
What's the difference between "prohibited" and "high-risk"?
Prohibited practices are outright banned, subject only to the narrow exceptions listed in the Act itself (e.g., for real-time biometric identification). High-risk systems are allowed but require extensive compliance work: conformity assessment, CE marking, technical documentation, data governance, and registration in the EU AI database.
When exactly do prohibitions take effect?
The prohibition rules (Article 5) have applied since February 2, 2025 — six months after the Regulation entered into force on August 1, 2024. Most high-risk AI rules under Annex III apply from August 2, 2026. GPAI model rules applied from August 2, 2025.
We only use AI internally (HR tools, Slack AI, etc.) — do we still need to comply?
Yes. If your internal AI tool falls into a high-risk category (like recruitment or workplace monitoring), the obligations apply. This includes commercially purchased AI tools that you deploy.
What is a "conformity assessment"?
For high-risk AI, you must demonstrate compliance through documentation (technical file), risk assessment, data governance procedures, human oversight measures, and accuracy metrics. Some categories require assessment by an independent notified body rather than self-assessment.
Is a ChatGPT-based chatbot a high-risk AI system?
Generally no, if it's used as a general-purpose assistant. However, if you integrate it into a high-risk use case (like scoring job applicants), the resulting system becomes high-risk even if the underlying model isn't.