In recent weeks, AI safety and security have taken center stage in conversations about the rapid evolution and widespread adoption of Generative AI.
With AI's significant influence on industries and societies at large, and the increasing multitude of risks with potentially profound, long-term implications for businesses and individual users alike, the urgency to establish regulations ensuring safe and secure deployment is gaining momentum.
In response to these growing concerns, regulatory bodies are taking action: the EU is half negotiating and half bringing their AI Act to the finish line, the U.S. has issued a pivotal Executive Order on Safe, Secure, and Trustworthy AI, and the UK has spearheaded key conversations at the AI Safety Summit.
For businesses, these developments spell out a clear message: the time to prepare is now.
At Lakera, our engagement with the evolving AI regulatory landscape has been both proactive and influential:
In this article, we’d like to give you a brief overview of the most recent key regulatory developments and their potential implications for businesses.
The EU AI Act, proposed in April 2021, is a comprehensive regulatory framework designed to govern the deployment of AI systems within the EU. Article 28b specifically addresses the need for enterprises to responsibly manage risks associated with AI foundation models. This includes ensuring that the AI they use does not compromise safety or ethical standards.
The European Parliament approved this AI Act on June 14, 2023, and the final version of the AI Act is expected to be published by the end of 2023.
On October 9, 2023, EU policymakers, AI business leaders, top foundation model providers, and researchers gathered for a roundtable discussion at the European Parliament to focus on the Governance of General-Purpose AI.
A few weeks later, on October 30th, the United States responded with its own set of directives to shape the AI landscape, with President Biden issuing the Executive Order on Safe, Secure, and Trustworthy AI.
The Executive Order provides guidelines for AI governance, research and development, and encourages collaboration between the government and the private sector to advance AI technologies that are secure and beneficial for the public.
The European Union's AI Act and the United States' Executive Order on AI represent two significant regulatory approaches to artificial intelligence by two of the world's leading economies.
Here’s a brief rundown of the key considerations for enterprises:
On November 1st and 2nd, the UK held the AI Safety Summit at Bletchley Park. The event brought together international governments, leading AI companies, civil society groups, and experts in research.
The core focus revolved around misuse risks and the potential loss of human control over both narrow and frontier AI technologies—those that possess dangerous capabilities or exhibit advanced, multifaceted performance that could match or outstrip today's leading models.
As stated by the UK government, the AI Safety Summit has 5 objectives:
The outcomes of the summit are anticipated to be instrumental in shaping international regulatory frameworks, establishing a roadmap for the secure and responsible integration of AI into societal norms.
Here are our key highlights and essential best practices for preparing for AI regulatory changes, a topic we've elaborated on in an article published on Kainos's blog. The post includes insights from Lakera CEO David Haber, John Sotiropoulos, core contributor to the OWASP Top 10 for LLM, and Dr. Suzanne Iris Brink, Data Ethics Manager at Kainos.
Here’s a brief overview.
1. Increase testing and assurance: Foundation models providers must rigorously test and apply red teaming to both open-source and proprietary models, responding to the uncertain regulatory environment. Enhanced transparency and diverse development teams are critical to minimize risks.
2. Adopt actionable open standards: Developers should embrace standards like the OWASP Top 10 for LLM to secure AI integrations and address novel risks such as prompt injections. These standards aid in fortifying AI applications alongside established security protocols.
3. Accelerate standards alignment: Amidst emerging AI threats, there is a need for consensus and cooperation among the organizations to harmonize AI security measures.This will help prevent contradictions and foster effective defenses against threats like privacy inference attacks on LLMs.
4. Invest in automated defenses: New AI security tools, like Lakera Guard, are automating the protection of AI systems. This helps companies quickly identify and mitigate risks from data poisoning to toxic language outputs.
5. Integrate security with ethics: Security in AI should extend beyond traditional measures to include ethical implications, ensuring that AI systems do not perpetuate bias or discrimination. Integrating data ethics frameworks is essential for comprehensive risk management.
6. Promote secure-by-design and ethics-by-design AI delivery: Effective AI security must be woven into the very fabric of project delivery, beginning with thorough threat models and risk assessments. It's crucial to integrate ethical considerations from the start, utilizing secure-by-design practices to address safety challenges proactively.
In the evolving world of AI, regulatory landscapes are shifting as dynamically as the technologies they seek to govern. The recent weeks have not only highlighted the increasing concerns over AI safety and security but have also showcased a collaborative spirit among global entities aiming to mitigate these risks. As illustrated by pivotal movements in the EU, U.S., and the UK's AI Safety Summit, the impetus to create a framework for safe, secure, and ethical AI is stronger than ever.
Adapting to these changes requires not only a compliance mindset but also a dedication to continuous learning, ethical consideration, and international collaboration. The road ahead is one of partnership—across industries, borders, and cultures—forging a path that ensures AI technologies enhance our global society responsibly.
Lakera works with the Fortune 500 companies, startups, and organizations to mitigate compliance risks. Get in touch with us at firstname.lastname@example.org or sign up for free for Lakera Guard.
Download this guide to delve into the most common LLM security risks and ways to mitigate them.
Subscribe to our newsletter to get the recent updates on Lakera product and other news in the AI LLM world. Be sure you’re on track!
Lakera Guard protects your LLM applications from cybersecurity risks with a single line of code. Get started in minutes. Become stronger every day.
Several people are typing about AI/ML security. Come join us and 1000+ others in a chat that’s thoroughly SFW.