Navigating the AI Regulatory Landscape: An Overview, Highlights, and Key Considerations for Businesses

Recent weeks have highlighted increasing concerns over AI safety and security and showcased a collaborative effort among global entities in the EU, US, and UK aiming to mitigate these risks. Here's a brief overview of the most recent key regulatory developments and their potential implications for businesses.

Lakera Team
December 1, 2023

In recent weeks, AI safety and security have taken center stage in conversations about the rapid evolution and widespread adoption of Generative AI.

With AI's significant influence on industries and societies at large, and the increasing multitude of risks with potentially profound, long-term implications for businesses and individual users alike, the urgency to establish regulations ensuring safe and secure deployment is gaining momentum.

In response to these growing concerns, regulatory bodies are taking action: the EU is negotiating the final details of its AI Act to bring it across the finish line, the U.S. has issued a pivotal Executive Order on Safe, Secure, and Trustworthy AI, and the UK has spearheaded key conversations at the AI Safety Summit.

For businesses, these developments spell out a clear message: the time to prepare is now.

At Lakera, our engagement with the evolving AI regulatory landscape has been both proactive and influential:

  • Our founding team contributed to laying the foundations for the EU AI Act over the last decade. More recently, our CEO, David Haber, was invited to share his technical insights at an EU Parliament meeting in October 2023, discussing the potential implications of Article 28b for enterprises and startups.
  • In the US, our partnership with White House-supported initiatives—like the Generative Red Teaming Challenge at DEFCON 31—has sparked conversations regarding regulatory changes and how to adapt to them.
  • Engaging in dialogues with policy experts like Kai Zenner and partners like Credo AI and DEKRA has allowed us to explore the impact of AI regulations on the corporate world and advocate for responsible innovation.

In this article, we’d like to give you a brief overview of the most recent key regulatory developments and their potential implications for businesses.

1. EU AI Act, Article 28b

The EU AI Act, proposed in April 2021, is a comprehensive regulatory framework designed to govern the deployment of AI systems within the EU. Article 28b specifically addresses the need for enterprises to responsibly manage risks associated with AI foundation models. This includes ensuring that the AI they use does not compromise safety or ethical standards.

The European Parliament approved its negotiating position on the AI Act on June 14, 2023, and the final version of the AI Act is expected to be published by the end of 2023.

On October 9, 2023, EU policymakers, AI business leaders, top foundation model providers, and researchers gathered for a roundtable discussion at the European Parliament to focus on the Governance of General-Purpose AI.


2. The US Executive Order on Safe, Secure, and Trustworthy AI

A few weeks later, on October 30th, the United States responded with its own set of directives to shape the AI landscape, with President Biden issuing the Executive Order on Safe, Secure, and Trustworthy AI.

The Executive Order provides guidelines for AI governance, research and development, and encourages collaboration between the government and the private sector to advance AI technologies that are secure and beneficial for the public.

The EU & US AI Regulatory Landscape: Key Considerations for Businesses

The European Union's AI Act and the United States' Executive Order on AI represent two significant regulatory approaches to artificial intelligence by two of the world's leading economies.

Here’s a brief rundown of the key considerations for enterprises:

  • Safety and Security: Both the EU AI Act and the U.S. Executive Order place a strong emphasis on the safety and security of AI systems. For instance, Article 28b of the EU AI Act mandates enterprises to responsibly manage risks associated with AI, while the U.S. order mandates developers to share safety test results with the government.
  • Risk Management: The EU's approach to risk assessment and mitigation is mirrored in the U.S. strategy. On both sides of the Atlantic, extensive red-team testing is mandated to ensure AI systems are secure before their public release.
  • Transparency and Ethical Use: Both regulatory frameworks promote transparency in AI applications and ethical deployment, with the U.S. focusing on the detection of AI-generated content and the EU specifying clear responsibilities for businesses in their use of AI.

3. The AI Safety Summit

On November 1st and 2nd, the UK held the AI Safety Summit at Bletchley Park. The event brought together international governments, leading AI companies, civil society groups, and research experts.

The core focus revolved around misuse risks and the potential loss of human control over both narrow and frontier AI technologies—those that possess dangerous capabilities or exhibit advanced, multifaceted performance that could match or outstrip today's leading models.


As stated by the UK government, the AI Safety Summit had five objectives:

  • A shared understanding of the risks posed by frontier AI and the need for action
  • A forward process for international collaboration on frontier AI safety, including how best to support national and international frameworks
  • Appropriate measures which individual organizations should take to increase frontier AI safety
  • Areas for potential collaboration on AI safety research, including evaluating model capabilities and the development of new standards to support governance
  • Showcase how ensuring the safe development of AI will enable AI to be used for good globally

The outcomes of the summit are anticipated to be instrumental in shaping international regulatory frameworks, establishing a roadmap for the secure and responsible integration of AI into societal norms.

AI Regulatory Landscape: Highlights & Best Practices

Here are our key highlights and essential best practices for preparing for AI regulatory changes, a topic we've elaborated on in an article published on Kainos's blog. The post includes insights from Lakera CEO David Haber, John Sotiropoulos, core contributor to the OWASP Top 10 for LLM, and Dr. Suzanne Iris Brink, Data Ethics Manager at Kainos. 

Here’s a brief overview.

1. Increase testing and assurance: Foundation model providers must rigorously test and apply red teaming to both open-source and proprietary models in response to the uncertain regulatory environment. Enhanced transparency and diverse development teams are critical to minimizing risks.

2. Adopt actionable open standards: Developers should embrace standards like the OWASP Top 10 for LLM to secure AI integrations and address novel risks such as prompt injections. These standards aid in fortifying AI applications alongside established security protocols.

3. Accelerate standards alignment: Amid emerging AI threats, there is a need for consensus and cooperation among organizations to harmonize AI security measures. This will help prevent contradictions and foster effective defenses against threats like privacy inference attacks on LLMs.

4. Invest in automated defenses: New AI security tools, like Lakera Guard, are automating the protection of AI systems. This helps companies quickly identify and mitigate risks from data poisoning to toxic language outputs.

5. Integrate security with ethics: Security in AI should extend beyond traditional measures to include ethical implications, ensuring that AI systems do not perpetuate bias or discrimination. Integrating data ethics frameworks is essential for comprehensive risk management.

6. Promote secure-by-design and ethics-by-design AI delivery: Effective AI security must be woven into the very fabric of project delivery, beginning with thorough threat models and risk assessments. It's crucial to integrate ethical considerations from the start, utilizing secure-by-design practices to address safety challenges proactively.
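To make the secure-by-design idea concrete, here is a minimal, illustrative sketch of screening untrusted user input before it reaches an LLM. The pattern list and function names are hypothetical simplifications; a production defense (such as a dedicated service like Lakera Guard or a tuned classifier) goes far beyond keyword matching.

```python
import re

# Hypothetical, deliberately simplified heuristics -- real prompt-injection
# defenses use trained classifiers, not a handful of regexes.
INJECTION_PATTERNS = [
    r"ignore (all )?(previous|prior) instructions",
    r"disregard the system prompt",
]

def screen_prompt(user_input: str) -> bool:
    """Return True if the input looks safe to forward to the LLM."""
    lowered = user_input.lower()
    return not any(re.search(p, lowered) for p in INJECTION_PATTERNS)

def call_llm(prompt: str) -> str:
    # Stub standing in for the actual model call.
    return f"LLM response to: {prompt}"

def handle_request(user_input: str) -> str:
    # Secure-by-design: screen untrusted input *before* the model sees it.
    if not screen_prompt(user_input):
        return "Request blocked: possible prompt injection detected."
    return call_llm(user_input)
```

The key design choice is architectural rather than the specific heuristics: input screening sits in the request path from day one, so stronger detection can later be swapped in without restructuring the application.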

Navigating the AI Regulatory Landscape: Summary

In the evolving world of AI, regulatory landscapes are shifting as dynamically as the technologies they seek to govern. The recent weeks have not only highlighted the increasing concerns over AI safety and security but have also showcased a collaborative spirit among global entities aiming to mitigate these risks. As illustrated by pivotal movements in the EU, U.S., and the UK's AI Safety Summit, the impetus to create a framework for safe, secure, and ethical AI is stronger than ever.

Adapting to these changes requires not only a compliance mindset but also a dedication to continuous learning, ethical consideration, and international collaboration. The road ahead is one of partnership—across industries, borders, and cultures—forging a path that ensures AI technologies enhance our global society responsibly.

Lakera works with Fortune 500 companies, startups, and organizations to mitigate compliance risks. Get in touch with us or sign up for free for Lakera Guard.
