AI Safety Unplugged: Navigating the Risks Without the Hype (WEF)
We're pleased to announce that Lakera will participate in the AI House Davos event, held in conjunction with the World Economic Forum 2024. AI House Davos serves as a hub for tech leaders, scholars, and policy makers to discuss progress in artificial intelligence.
You can register to attend in person here.
Our upcoming session, "AI Safety Unplugged: Navigating the Risks Without the Hype," is designed to stimulate a meaningful conversation about present and looming challenges associated with AI's accelerating integration into the business world.
The expert panelists we’ve invited from both industry and academia will focus on a pragmatic approach to navigating the adoption of AI, with an emphasis on recognizing and managing risks.
Panelists:
- Yann LeCun, VP & Chief AI Scientist, Meta
- Max Tegmark, MIT Professor & President of the Future of Life Institute
- Seraphina Goldfarb-Tarrant, Head of Safety, Cohere
- David Haber, CEO & co-founder, Lakera
Topics:
- The advance toward artificial general intelligence (AGI)—what it does and doesn't entail, and how we can prepare for its arrival
- Intelligent agents—their proliferation and potential dominance, ensuring alignment across use cases, and their expected impact on the internet in the coming years
- The pressing safety and security concerns around generative AI—from critical infrastructure vulnerabilities to misinformation, and the role of model providers, enterprises, researchers, and policy makers in addressing them
- AI governance best practices—and the contentious questions left to be answered, from tightly regulated models to open-source development

For more information on AI House Davos, visit aihousedavos.com. For live updates, ideas, and highlights from the event, follow Lakera on LinkedIn and Instagram.