EU AI Act essentials
The European Union's AI Act classifies AI systems into four risk tiers: unacceptable (prohibited), high, limited and minimal. Compliance requirements scale with risk, so developers placing AI products on the EU market must gauge their tier early and document mitigation measures.
Limited versus minimal risk
Most generative systems fall under limited or minimal risk. Limited-risk applications such as chatbots must disclose to users that they are interacting with AI.[1] Minimal-risk uses such as spam filtering generally face no special rules.
High-risk obligations
Systems used for credit scoring, employment screening or education are classified as high risk. Providers must implement risk management, data governance, technical documentation and automatic event logging, and human oversight plus accuracy, robustness and cybersecurity requirements are mandatory.[2] High-risk systems have two years from the act's entry into force to comply.
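To make the oversight duty concrete, here is a minimal sketch of a human-in-the-loop gate for a high-risk decision such as credit scoring. The Decision type, the console reviewer and the approval flow are illustrative assumptions, not a design the act prescribes.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass(frozen=True)
class Decision:
    subject_id: str
    score: float       # raw model output, e.g. a credit score
    rationale: str     # plain-language explanation shown to the reviewer

def console_reviewer(decision: Decision) -> bool:
    """Stub reviewer; in production this would be a review queue or UI."""
    print(f"{decision.subject_id}: score={decision.score} ({decision.rationale})")
    return input("Approve? [y/N] ").strip().lower() == "y"

def apply_with_oversight(decision: Decision,
                         reviewer: Callable[[Decision], bool]) -> bool:
    """Gate a high-risk decision behind explicit human approval:
    a rejected decision never reaches downstream systems."""
    return reviewer(decision)

if __name__ == "__main__":
    d = Decision("applicant-42", 0.37, "high debt-to-income ratio")
    print(apply_with_oversight(d, console_reviewer))
```

The point of the design is that the model only proposes an outcome; nothing takes effect until a human has seen the score and its rationale.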
General purpose AI
Large foundation models are regulated as general purpose AI. Their providers must document training processes, describe model capabilities and publish a summary of the content used for training.[3] Models trained with more than 10^25 FLOPs of compute are presumed to pose systemic risk and face extra scrutiny.[4]
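For a sense of scale, training compute is often estimated with the rough rule of thumb of about 6 FLOPs per parameter per training token. The sketch below applies that heuristic to the 10^25 threshold; the heuristic and the example model sizes are assumptions for illustration, and the act counts actual cumulative training compute, not an estimate.

```python
# Back-of-the-envelope check against the AI Act's 10^25 FLOP threshold.
# Assumes the common ~6 * params * tokens estimate of training compute.

SYSTEMIC_THRESHOLD_FLOPS = 1e25

def estimated_training_flops(n_params: float, n_tokens: float) -> float:
    """Rough training-compute estimate: ~6 FLOPs per parameter per token."""
    return 6.0 * n_params * n_tokens

def presumed_systemic(n_params: float, n_tokens: float) -> bool:
    return estimated_training_flops(n_params, n_tokens) > SYSTEMIC_THRESHOLD_FLOPS

# Hypothetical examples: a 70B-parameter model trained on 15T tokens
# lands around 6.3e24 FLOPs, below the threshold; a 400B-parameter model
# on the same data (~3.6e25 FLOPs) would be presumed systemic.
print(presumed_systemic(70e9, 15e12))   # False
print(presumed_systemic(400e9, 15e12))  # True
```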
LLM evaluation platforms can help by logging usage data, versioning datasets and monitoring model performance. These features make it easier to show regulators that a high‑risk system remains safe and accurate.
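As one illustration, here is a minimal sketch of the structured record keeping such a platform might automate, using only the Python standard library. The field names and the content-hashing scheme are assumptions for this sketch, not anything the act prescribes.

```python
import hashlib
import json
import logging
from datetime import datetime, timezone

logger = logging.getLogger("ai_act_audit")
logging.basicConfig(level=logging.INFO)

def dataset_version(path: str) -> str:
    """Content-hash a dataset file so each run is tied to exact data."""
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()[:12]

def log_inference(model_version: str, dataset_hash: str,
                  prompt: str, output: str,
                  accuracy: float | None = None) -> None:
    """Emit one structured, timestamped record per model interaction."""
    record = {
        "ts": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,
        "dataset_version": dataset_hash,
        "prompt": prompt,
        "output": output,
        "accuracy": accuracy,  # filled in by periodic evaluation runs
    }
    logger.info(json.dumps(record))
```

Timestamped, versioned records like these are the raw material for demonstrating, after the fact, which model and which data produced a given output.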