EU AI Act · plain English
The big pieces of the law — in order
Regulation (EU) 2024/1689 is long, but most product and compliance conversations boil down to a handful of ideas: what is forbidden, what counts as high-risk, what you must prove, and how people stay in control. Here is a simple map — always confirm details with your legal team.
Banned uses (Article 5)
A short list of AI practices the EU does not allow — for example certain social scoring, manipulative systems, or emotion inference in schools or workplaces. If you are in this bucket, it is not a paperwork problem: the use case itself has to change.
High-risk list (Annex III)
If your AI is used in areas like biometrics, critical infrastructure, education, employment, essential services, law enforcement, migration, or justice, it is presumptively high-risk. Article 6(3) offers a narrow exception for systems that do not materially influence decisions, but you must document why it applies. High-risk status triggers the full provider rulebook, not optional extras.
What high-risk providers must do (Arts. 8–15)
Risk management, training data governance, technical documentation, automatic logging, transparency, human oversight, and accuracy, robustness, and cybersecurity: all designed so authorities can see how the system was built and how it stays safe in production.
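To make the record-keeping piece concrete, here is a minimal Python sketch of per-decision event logging in the spirit of Article 12. Everything about it is an assumption: the Act does not prescribe a log schema, field names, or the JSON Lines format; it only requires that high-risk systems automatically record events so behaviour stays traceable.

```python
# Illustrative only: the Act does not prescribe a log schema; these field
# names are assumptions chosen to make behaviour traceable after the fact.
import json
import uuid
from datetime import datetime, timezone

def log_decision_event(model_version: str, input_ref: str, output: str,
                       confidence: float,
                       log_path: str = "decision_log.jsonl") -> str:
    """Append one traceable record per automated decision (JSON Lines)."""
    event = {
        "event_id": str(uuid.uuid4()),        # unique handle for later reference
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,       # ties the decision to a known build
        "input_ref": input_ref,               # pointer to the input, not the raw data
        "output": output,
        "confidence": confidence,
    }
    with open(log_path, "a", encoding="utf-8") as f:
        f.write(json.dumps(event) + "\n")
    return event["event_id"]

# Example: one hypothetical hiring-screener decision.
event_id = log_decision_event("screener-v2.1", "application/48213", "shortlist", 0.83)
```

Writing one append-only record per decision, keyed by model version, is a common way to answer "what did the system do, and which build did it?" months after the fact.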
The documentation pack (Annex IV)
Annex IV spells out what “technical documentation” means: system design, data, testing, monitoring plans, and more. Think of it as the structured evidence bundle behind your AI, not a one-page marketing summary.
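As a thought experiment, a team might mirror Annex IV's headings in an internal checklist so gaps surface early. The class below is a hypothetical structure of our own; the field names paraphrase the annex and are not an official schema.

```python
# Hypothetical internal structure mirroring Annex IV's main headings.
# The section names paraphrase the annex; nothing here is an official schema.
from dataclasses import dataclass, field

@dataclass
class AnnexIVPack:
    general_description: str        # intended purpose, versions, target hardware
    development_process: str        # design choices, architecture, training method
    data_governance: str            # datasets, provenance, labelling, cleaning
    human_oversight_measures: str   # how operators monitor and intervene
    testing_and_validation: str     # metrics, test reports, known failure modes
    risk_management_summary: str    # pointer to the Article 9 risk file
    post_market_monitoring: str     # how performance is watched in production
    attachments: list[str] = field(default_factory=list)

    def missing_sections(self) -> list[str]:
        """Surface empty sections before an audit rather than during one."""
        return [name for name, value in vars(self).items()
                if isinstance(value, str) and not value.strip()]
```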
Transparency & informing users
Many systems must make clear when people are talking to an AI, when content is synthetic, or when emotion recognition or biometric categorisation is used — so users are not misled about what is happening.
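For chat-style systems, the disclosure duty can be as simple as a wrapper that every reply passes through. This is one illustrative pattern, not a mandated mechanism; the `ai_generated` flag and the notice wording are assumptions.

```python
# One illustrative disclosure pattern for a chat endpoint. The flag name and
# the notice wording are assumptions; the Act requires that people are
# informed, not this particular mechanism.
def with_ai_disclosure(reply: str) -> dict:
    """Wrap each assistant reply with a human-readable notice and a
    machine-readable marker so downstream clients can surface both."""
    return {
        "message": reply,
        "ai_generated": True,
        "notice": "You are chatting with an automated assistant.",
    }

response = with_ai_disclosure("Your order ships on Friday.")
```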
Humans in the loop
High-risk systems need meaningful human oversight: people who understand the limits of the system, can stop it, and are not just rubber-stamping outputs — especially where decisions affect rights or safety.
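One way to keep oversight meaningful rather than ceremonial is to make the model's output a proposal that cannot take effect until a named person records a decision and a reason. The sketch below assumes that pattern; the Act describes the goal (effective oversight), not this design.

```python
# Hypothetical oversight gate: the model proposes, a person disposes.
# Names and fields are our own; the Act sets the goal, not this design.
from dataclasses import dataclass

@dataclass
class Recommendation:
    case_id: str
    suggested_outcome: str
    confidence: float
    model_caveats: str   # known limits shown to the reviewer, not hidden

def apply_decision(rec: Recommendation, reviewer: str,
                   approved: bool, reason: str) -> dict:
    """Nothing takes effect until a named person records a decision and a
    reason, which discourages one-click rubber-stamping."""
    if not reason.strip():
        raise ValueError("A review reason is required before anything is applied.")
    return {
        "case_id": rec.case_id,
        "outcome": rec.suggested_outcome if approved else "rejected",
        "reviewed_by": reviewer,
        "review_reason": reason,
        "model_confidence": rec.confidence,
    }
```

Requiring a free-text reason is a small design choice that makes silent batch approval harder and leaves a record of how the reviewer actually engaged with the output.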
Simplified for orientation only — not legal advice. Official text and guidance from the EU and national authorities always prevail.