The European Union is taking a major step with its proposed AI Act, aimed at regulating the rapidly growing field of artificial intelligence. The new law comes in response to the emergence of advanced technologies like ChatGPT, and seeks to limit AI's risks while harnessing its potential.
The AI Act tries to strike a balance between promoting AI-driven innovation and protecting individual rights and the environment, demonstrating the EU's commitment to responsible leadership in the development of AI technologies.
The AI Act classifies AI systems based on the level of risk they pose:
Unacceptable Risk
- AI systems banned for posing severe threats, such as cognitive manipulation or unauthorized public biometric monitoring.
- Narrow exceptions for high-stakes applications, like law enforcement under stringent conditions.
High Risk
- AI systems that need strict compliance measures before market entry, including those in critical infrastructure and as product safety components.
- Rigorous and ongoing assessment to ensure these high-risk systems protect public safety and fundamental rights.
Limited Risk
- AI systems with lower risk that can still affect users, primarily subject to transparency requirements such as informing people that they are interacting with an AI.
- Sits above the "minimal" risk category, which warrants the least oversight; limited-risk systems may still carry certain disclosure obligations.
Generative AI / Systemic Risk
- Not categorized separately but would fall under other categories based on usage.
- Transparency obligations likely apply, ensuring users are aware of AI-generated content.
- Measures in place to prevent misuse and illicit content generation, as well as disclosure of training data practices.
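The tiered structure above can be sketched as a simple lookup. This is an illustrative simplification only: the tier names follow the article, and the obligation strings are paraphrased examples, not language from the Act.

```python
# Illustrative sketch of the AI Act's risk tiers mapped to example
# obligations. Obligation text is paraphrased for illustration,
# not quoted from the regulation.
from enum import Enum


class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable"
    HIGH = "high"
    LIMITED = "limited"
    MINIMAL = "minimal"


OBLIGATIONS = {
    RiskTier.UNACCEPTABLE: [
        "prohibited (narrow exceptions, e.g. law enforcement under strict conditions)",
    ],
    RiskTier.HIGH: [
        "conformity assessment before market entry",
        "ongoing monitoring of safety and fundamental rights",
    ],
    RiskTier.LIMITED: [
        "transparency: disclose that users are interacting with an AI",
    ],
    RiskTier.MINIMAL: [
        "no mandatory obligations",
    ],
}


def obligations_for(tier: RiskTier) -> list[str]:
    """Return the example obligations attached to a risk tier."""
    return OBLIGATIONS[tier]
```

Under this sketch, a generative AI system would not get its own tier; it would inherit the obligations of whichever tier its use case falls into, plus the transparency measures described above.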

Failure to comply with the AI Act carries serious financial implications, pushing companies to align their practices with EU standards. These punitive measures signal the EU's commitment to enforce its policy decisions within the AI sector.

What does this all mean for businesses?
The EU AI Act will be enforced within the next 12–24 months, and companies of all sizes will have to move quickly to get their AI compliance in order.
Businesses should start by evaluating their AI strategies and processes, holistically reviewing every facet of their AI initiatives—from data quality and model robustness to security. The EU AI Act requires unprecedented transparency across the value chain, and businesses should be prepared to deliver it.
David Haber, Co-founder and CEO of Lakera
At Lakera AI, we are now accelerating our work with our regulatory partners to provide robust security and safety measures that streamline upcoming compliance processes for you.
