
AI Safety Unplugged: Key Takeaways and Highlights from the World Economic Forum

Read about key insights on AI safety straight from the World Economic Forum 2024.

Lakera Team
January 22, 2024

We are pleased to share takeaways from our AI House Davos panel titled "AI Safety Unplugged: Navigating the Risks Without the Hype" during the World Economic Forum 2024. 

On stage, panelists Yann LeCun, Chief AI Scientist at Meta, David Haber, CEO of Lakera, Seraphina Goldfarb-Tarrant, Head of Safety at Cohere, and Max Tegmark, Professor at MIT, delved into the challenges, benefits, risks, and future predictions of AI development and deployment. They debated different perspectives on AGI, which fears are real and which aren't, and laid out what each of them envisions for the future of AI.

AI Safety Unplugged: Key Takeaways and Highlights

Max Tegmark highlighted the risks associated with large-scale AI deployment, particularly AI-driven misinformation and deep fakes: “This is going to be the year of fakes. More than four billion people are going to the elections. Brace yourself for some really hardcore deep fakes.”

Seraphina Goldfarb-Tarrant addressed the challenges in enterprise AI deployments, including the lack of effective evaluation methods at an extreme scale, leading to risks such as the propagation of rare events and biases. “For enterprise deployments, the biggest issue we're running into is a combination of two things: the absence of a good evaluation method and dealing with extreme scale.”

David Haber expanded on this and discussed companies deploying AI technologies at scale, introducing new accessible interfaces and capabilities that propagate risks to hundreds of millions of users. He also shared his prediction for the upcoming "Internet of Agents" (IoA) era, which will likely amplify cyber risks: “We’re preparing for the ‘Internet of Agents,’ where a network of AI agents is capable of interacting with one another to complete transactions previously executed by humans. It’s the interconnectedness of AI systems that will quickly amplify many of the risks we see today.”

Yann LeCun was outspoken about the status of AGI, stating, “We're still missing some very, very basic things [...] We're nowhere near human-level intelligence, despite what you might hear from the most optimistic people who tell you AGI is just around the corner … I'd be happy if by the end of my career we can get something as smart as a cat.” Yann also emphasized the limitations of autoregressive LLMs, expressing, “My prediction is that autoregressive LLMs are intrinsically unsafe. They cannot be fine-tuned to death to be safe. It's not possible. You can always jailbreak them.”

The challenge of evaluating AI was a significant discussion point: intelligence comprises a diverse set of skills and abilities, and no single test can capture its full complexity. As LeCun put it, “There is no single test that really measures intelligence [...] depending on which system you build for what skills, you're going to have certain skills and not others. And so you cannot have a single test.”

In the context of AI development, panelists believe the debate over open source should focus on finding a balanced approach. As Tegmark explained, “It's not a binary debate where you open source everything or nothing.” Open-source platforms are essential for creating AI systems that work in all languages and cultures, enabling decentralized development and fine-tuning to cater to diverse values and interests. The challenge lies in avoiding undue power concentration and promoting nuanced discussions about the role of open source in AI development.

Finally, in a time when technology adoption has never moved faster, the panelists emphasized the need to focus on the immediate and certain risks associated with the rapid deployment of AI technologies. As Haber explained, “Rather than having the public discourse be dominated by what may happen in the future, I would love to see a lot more of our discourse happening on the 100% certain risks.” The key takeaway is the importance of clarity and understanding of AI capabilities and control for policymakers, businesses, researchers, and society at large, with a focus on AI tools that empower humans.

Interested in watching the full session? Check out the recording on YouTube.

Lakera LLM Security Playbook
Learn how to protect against the most common LLM vulnerabilities

Download this guide to delve into the most common LLM security risks and ways to mitigate them.

Download
You might be interested
Lakera Featured in a NIST Report on AI Security
1 min read

Lakera's technology has been recognized by NIST in their report on Adversarial Machine Learning.
Activate untouchable mode. Get started for free.

Lakera Guard protects your LLM applications from cybersecurity risks with a single line of code. Get started in minutes. Become stronger every day.

Join our Slack Community.

Several people are typing about AI/ML security. 
Come join us and 1000+ others in a chat that’s thoroughly SFW.