2025 GenAI Security Readiness Report

A Practitioners’ View of Risk, Readiness, and Reality in the Enterprise

45%

of organizations are already implementing GenAI solutions.

15%

reported a GenAI-related security incident in the past year.

4%

rate their security confidence at the highest possible level.

39%

cite talent shortages as the #1 barrier to securing GenAI.

Preface

Since the emergence of generative AI—the so-called “ChatGPT moment” in late 2022—keeping pace with both the development of AI and the use of AI for development has been staggeringly difficult.

As someone who spent years in the trenches of traditional application security, I saw how the move to cloud technologies like Kubernetes transformed our world. The neat “castle-and-moat” perimeter became sprawling microservice cities where single points of failure could lurk anywhere. This forced us to redefine observability, with low-touch technologies like eBPF finally shining a spotlight on hidden systems and containerization bringing infrastructure and application security teams into the same conversation.

For a brief moment, it felt like we had a strategy: attack paths over static vulnerabilities, provenance in the supply chain, and real improvements in signal-to-noise.

And then it happened. In what felt like a blink, we were speaking in natural language to stochastic AI systems—connected to tools, data, and other AIs—whose “backbone” could be any number of LLMs, each with its quirks. Ironically, as “smarter” models emerged, they often proved more vulnerable to jailbreaks and manipulation.

Ask any security professional three years ago about their “readiness” and you’d likely get a brave “I think so.” Fast-forward to today, and that brave face has shifted closer to a wince. In this perfect storm of non-determinism, speed, and scale, what we need most is clarity, communication, and community.

This report delivers on that need. It shines a light on the realities of where we stand, helping ensure that as tomorrow approaches, the next steps we take are made with confidence.

Headshot of Steve Giguere
Steve Giguere
Principal AI Security Advocate at Lakera
“The window for proactive defense is closing. The cost of inaction far exceeds the investment in protection.”— Ken Huang

Executive Summary

“Our adoption has been faster than our ability to build safeguards.”—Survey respondent, IT manager

Generative AI has crossed the line from experiment to everyday reality—but enterprise security hasn’t kept pace. Adoption is no longer about if—it’s about how fast companies can scale safely. Yet in 2025, most organizations remain dangerously underprepared for the risks they’ve already deployed. Nearly half are implementing GenAI, but only a fraction have the guardrails or expertise to secure it.

Incidents are real: 15% of respondents reported a GenAI-related security incident in the past year. These cases most often involved prompt injection, data leakage, and biased outputs. Yet while incidents raise concern, they do not reliably translate into greater preparedness.

Confidence remains fragile. Only 4% of organizations rate their security confidence at the highest level. Preparedness shows a similar pattern with just 15% describing themselves as well-prepared for emerging threats.

Risks are diversifying. Privacy remains the most-cited concern (46%), but its weight has dropped sharply from 73% in 2024. In its place, adversarial misuse, agent risks, and offensive AI are climbing. Unauthorized access and novel vulnerabilities are flagged less often ( 34% and 29%, respectively), as organizations gain familiarity and redirect focus to newer risks.

Challenges are mounting. Talent shortages are now the #1 barrier (39%), and integration complexity has emerged as a major new obstacle. When these two pressures combine, preparedness drops sharply—a compound gap that many organizations struggle to close.

Perceptions also diverge by role and size. Developers often highlight misuse risks, while security analysts point to unauthorized access and researchers focus on novel vulnerabilities. Business users remain most concerned with privacy and regulatory fallout. Enterprises tend to report more structure and higher preparedness, but also more incidents; mid-sized firms often feel the least prepared.

Readiness ≠ absence of concern

19%

of organizations report high confidence, but 49% still report high concern.

Confidence with vigilance

Readiness means balancing assurance and alertness, not choosing one over the other.

Source: Lakera 2025 GenAI Security Readiness Report

Incidents ≠ maturity

15%

of organizations faced a GenAI security incident last year.

Experience does not equal resilience

Awareness increases after incidents, but real maturity requires structured defenses.

Source: Lakera 2025 GenAI Security Readiness Report

Skills and integration gap

39%

cite a skills shortage, and 27% struggle with integration complexity.

The bottleneck that defines readiness

Capability and cohesion determine whether security plans translate into action.

Source: Lakera 2025 GenAI Security Readiness Report

Fragmented risk perceptions

Multiple views

Developers focus on misuse, analysts on unauthorized access, researchers on new vulnerabilities, and business leaders on privacy.

Alignment prevents drift

Shared risk models keep teams moving in the same direction instead of securing different priorities.

Source: Lakera 2025 GenAI Security Readiness Report

Frontier shift

73% → 46%

Last year, 73% identified privacy as the top concern; this year, only 46% do, as focus shifts to adversarial and agentic threats.

The next era of AI security

Defenses move from compliance to confrontation, with misuse and offensive AI shaping tomorrow’s challenges.

Source: Lakera 2025 GenAI Security Readiness Report

Introduction

“We rushed adoption without skilled staff, and now we’re paying for it in security debt.”—Survey respondent, security professional

Unlike cloud or containers, GenAI security cannot rely on repeating patterns of failure. Models behave stochastically; no two runs are exactly alike, and risks emerge from unexpected combinations of instructions, data, tools, and user context. This is a domain that will never be completely “solved.”

For security leaders, the implication is clear: prevention and early detection alone are not enough. A strategy anchored in runtime safeguards and adaptive defenses is essential. Yet this shift can be disorienting for teams accustomed to deterministic systems. The danger lies not only in technical gaps, but also in strategic misalignment—as happened in the early days of cloud, when role-based responses often slowed adoption and inflated costs. GenAI risks repeating that cycle unless organizations adapt.

Against this backdrop, Lakera fielded the 2025 GenAI Security Readiness Survey to capture the view from practitioners. We set out to understand:

  • How organizations are adopting and securing GenAI/LLM systems
  • Which incidents they are encountering in production
  • Where they perceive the greatest risks and gaps
  • What obstacles are slowing their ability to adapt

Methodology (What to Keep in Mind)

Responses were collected mid‑2025 from professionals working directly on AI development, deployment, or security. Job titles were grouped into broader categories and results are unweighted. Margins of error for headline figures are non‑trivial and larger for subgroups, so small percentage differences should be treated as directional.

Open‑text answers were thematically coded—these voices appear as anonymized call‑outs throughout this report. In addition to survey responses, we also reached out to a select group of featured contributors from last year—recognized voices in cybersecurity and AI security—to reflect on their previous perspectives and update them for 2025.

Key Takeaways

  • Treat fine‑grained percentage differences as directional, not definitive.
  • Quotes reflect cleaned, practitioner input—not anecdote.

Who Took the Survey

“We don’t have the in-house expertise to keep up with adversarial threats.”—Survey respondent, mid-size company CTO

The respondent base spans security practitioners, developers, product leaders, researchers, and independents, and includes organizations ranging from startups to global enterprises. This diversity helps expose where perceptions diverge: enterprises report more structure and exposure; startups move faster with thinner guardrails; mid-sized firms feel the squeeze from both directions.

Roles Represented

    Organizations of All Sizes

      Adoption & Usage of GenAI

      Most organizations are beyond ideation: one in five remain in the planning stage, a third are actively exploring, and nearly half are already implementing across functions. As adoption matures, the nature of the challenge shifts: planning teams struggle to articulate ROI, exploring teams run into awareness gaps, and implementing teams grapple with benchmarking and cross-team alignment.

      Among adopters, vendor-hosted APIs dominate. Most organizations favor speed and ease of integration over full control, with self-hosting and fine-tuning remaining a niche.

      Frontiers like multimodal systems and autonomous agents are still early. For multimodal AI, roughly one in six organizations report having systems in production. Agent adoption is even further behind, with just 14% reporting agents live in production. The interest is unmistakable— agents in particular are seen as the next game-changing step —but it’s tempered by recognition that securing autonomous, tool-using systems is uniquely difficult. The result is a frontier advancing cautiously, with security and governance setting the pace.

      Key Takeaways

      • Challenges evolve with maturity: business case → awareness → secure scaling.
      • Speed favors vendor APIs; deeper control remains the exception.
      • Multimodal and agent adoption will pull tomorrow’s security debates into production.

      How Challenges Evolve with Maturity

      StageTop Challenge%
      PlanningUnclear ROI / business case15.9%
      ExploringEducation & awareness gaps16.4%
      ImplementingCross-team alignment10.2%
      Benchmarking & tooling gaps9.5%

      Deployment Models Among Adopters

      Multimodal Systems Adoption

        Autonomous Agents Adoption

          Confidence, Concern & Preparedness

          “Our red-teaming revealed vulnerabilities we hadn’t even considered.”—Survey respondent, security analyst
          “Confidence is cautious, rarely strong.”—Survey respondent, product lead

          Self‑rated confidence clusters around the middle, and preparedness follows the same pattern. These two measures tend to move together with confidence appearing to be a proxy for perceived preparedness. Crucially, neither reliably suppresses concern; even confident teams (fewer than one in five) remain wary. Organizations that experienced incidents report higher concern, but not reliably higher preparedness—evidence that pain alone doesn’t produce maturity without deliberate investment.

          This pattern repeats when viewed by company size. Enterprises lead overall, with about one in five rating themselves well-prepared. Mid-sized firms are the weakest, with over 40% underprepared and almost none rating themselves at the very highest preparedness level. Startups show the anomaly: highly polarized, with many underprepared but a small, agile minority rating themselves at the very strongest levels.

          When mapped together, three postures emerge:

          • Vulnerable (26%) → low confidence paired with high concern.
          • Mature (8%) → high confidence with high concern.
          • Complacent (4%) → high confidence with low concern.

          The quadrant that matters most is the mature stance: confidence balanced by vigilance.

          Key takeaways:

          • Confidence and preparedness move together, but concern remains high regardless.
          • Incidents raise concern more than they raise readiness — maturity requires deliberate follow-through.
          • Mid-sized firms feel most exposed; startups split between fragility and agility.
          Confidence vs. Concern: Organizational Postures

          Vulnerable

          (low conf • high concern)
          26%

          Mature

          (high conf • high concern)
          8%

          Unaware

          (low conf • low concern)
          2%

          Complacent

          (high conf • low concern)
          4%
          Types of Vulnerabilities Encountered (2025)

          The share of organizations reporting GenAI incidents rose from ~9% in 2024 to 15% in 2025. While still a minority, these cases provide a clear signal of what can go wrong once systems move into production.

          Top Risks & Challenges

          “Privacy concerns remain the #1 blocker for executive buy-in.”—Survey respondent, product owner
          “The novelty of attacks like prompt injection makes standard playbooks inadequate.”—Survey respondent, developer

          In this year’s survey, respondents were asked to select their top risks and top challenges from a predefined list, with the option to choose more than one. This means results can be described in two ways: as the share of respondents who selected an item, or as the share of total selections (votes) it received.

          Data privacy remains the most-cited risk, but its weight is shifting. In 2024, nearly three-quarters of respondents flagged privacy as their top concern. In 2025 that number dropped to just under a half. Privacy hasn’t become less important, it has normalized into structured compliance.

          Other risks are diversifying. Unauthorized access and novel vulnerabilities are cited less often than a year ago, as technical teams gain familiarity and strengthen controls. At the same time, adversarial misuse, agent risks, and offensive AI are rising in visibility. This reflects the industry’s pivot from well-defined risks to complex, evolving ones.

          Challenges tell a parallel story. The skills gap is now the defining bottleneck, cited by 39% of respondents (up from ~25% in 2024). Integration complexity emerged as a major obstacle this year, as GenAI collides with legacy infrastructure and governance processes. When skills shortages and integration pressures overlap, preparedness drops sharply. This compound gap is where many organizations feel most exposed, and where investment will matter most in the years ahead.

          Top 3 Risks and Challenges: 2024 vs 2025
          Category
          2024
          2025
          Risks
          Privacy73%
          Unauthorized Access46%
          Novel Vulnerabilities39%
          Privacy46%
          Unauthorized Access34%
          Novel Vulnerabilities29%
          Challenges
          Regulation Alignment38%
          Talent Shortage~25%
          IntegrationLow
          Talent Shortage39%
          Integration27%
          Regulation25%

          Privacy, unauthorized access, and novel vulnerabilities declined as “top concerns” between 2024 and 2025, signaling normalization. Meanwhile, talent and integration rose sharply as operational challenges.

          Key takeaways:

          • Privacy has stabilized into compliance; adversarial misuse and agent risks now define the frontier.
          • Skills + integration is the dominant compound drag on readiness.
          • Regulation is still complex, but less of a “black box” than in 2024.
          Prompt Injection Attacks Handbook Cover

          Deepen Your Defense

          Gain insights into GenAI security. Explore a full taxonomy of prompt attack strategies, and get practical knowledge to safeguard your AI applications.

          Download the Handbook

          Voices & Outlook

          “Delegated autonomy forces us to confront accountability gaps and blurred trust boundaries.”—Emmanuel Guilherme, Senior Information Technology Auditor
          “The novelty of attacks like prompt injection makes standard playbooks inadequate.”—Ads Dawson, Staff AI Security Researcher

          Practitioner and expert voices converge on the same reality: organizations are moving faster than their safeguards. The race to deploy AI-powered products is creating security debt that compounds over time. Nate Lee, CISO, notes: “Everyone feels the pressure to move so fast. Unless security is baked in from the start, it gets left by the wayside—and problems will be dealt with later.”

          Nathan Hamiel, Senior Director of Research, warns that the rush is surfacing vulnerabilities of unusual severity: “We’ve seen increased occurrences of relatively rare and severe issues, such as remote code execution, through poor coding practices and weak architectural choices.” His concern is that, in the drive to show value, developers are shipping products with too much access and too few controls—a trend attackers will continue to exploit.

          Alex Joliet, CDPO, observes how quickly the ground has shifted: “Twelve months ago we were still in the early stages of GenAI adoption. Since then, there’s been a growing acknowledgment of the expanded attack surface—and with it, rapid development of security tooling to wrap around the GenAI experience as organizations become more conscious of the risks.”

          Privacy is still the most-cited risk, but voices like Emmanuel Guilherme stress that the real challenge now lies in governance and accountability. Delegated autonomy forces companies to address blurred trust boundaries, escalation paths, and responsibility gaps.

          Talent and training emerge as another bottleneck. Precisely’s 2024 global AI readiness survey found that 60% of respondents cited a lack of AI skills and training as a significant challenge in launching AI initiatives. The World Economic Forum reinforces this, urging companies to make AI security an operating model, not a silo. Without skilled staff, governance frameworks remain paper exercises.

          On the attacker side, the tempo is accelerating. Ads Dawson emphasizes that defenders are now confronting a pace of offensive AI that outstrips traditional security cycles: “In 2025 the biggest shift is that companies are no longer treating GenAI security as an afterthought—they’re operationalizing it with governance, identity-first controls, and formal red-team exercises. Offensive capabilities have scaled quickly: models can now automate attacks at a tempo that changes the threat landscape.”

          Ken Huang, a leading AI strategist, warns about the convergence of new risks:

          • Deepfake attacks surged 680% year-over-year, with Q1 2025 alone seeing $200M in losses from AI-generated executive impersonations.
          • Multi-agent vulnerabilities are opening systemic failure paths, with CVEs published this year against MCP server implementations and agent orchestration frameworks. Once one agent is compromised, cascades can spread misinformation, execute malicious commands, or collapse entire workflows.

          Huang’s prescription is blunt: “The window for proactive defense is closing. The cost of inaction far exceeds the investment in protection.” His strategy centers on layered detection (voice/visual analysis, behavioral biometrics, blockchain verification), process redesign (multi-channel verification for high-value requests), and workforce training.

          The outlook is clear. Governance and privacy are stabilizing into compliance domains. Skills and integration are rising as the hard bottlenecks to scale. Offensive AI, deepfakes, and multi-agent systems are reshaping the threat surface. Success will depend on treating GenAI security as a continuous operating discipline—governance, runtime safeguards, red-teaming, and skills development built into the everyday fabric of enterprise.

          Conclusion: Readiness in Motion

          The readiness story in 2025 is not a scoreboard of small percentage changes; it’s a shift in posture. Mature organizations balance confidence with vigilance, invest in skills and integration, and anchor their strategy on runtime safeguards. Where incidents occur, they are treated as hard-won signals—translated into policy, controls, and muscle memory rather than merely heightened concern.

          Blind spots vary by role and size. Developers, business users, and researchers focus on different risks, while mid-sized firms often report feeling the most exposed. If these perspectives remain fragmented, they cascade into misaligned strategies and wasted investment.

          Readiness, as our data makes clear, is not about eliminating concern. The most mature organizations are those balancing confidence with vigilance: capable enough to innovate, cautious enough to anticipate threats.

          Practical Next Steps

          • Establish a living GenAI security policy with clear ownership.
          • Prioritize runtime protections (policy enforcement, prompt and response vetting, egress controls).
          • Close the skills + integration gap with targeted training and reference architectures.
          • Red-team continuously against modern threats (prompt injection, agent misuse, data leakage).
          • Align stakeholders early to avoid strategy drift across roles.

          Security readiness is not a finish line; it’s a moving frontier. The challenge for every organization is to adapt as fast as the technology itself.

          Appendix A: Methodology

          Survey Design

          The GenAI Security Readiness Survey 2025 was designed to capture perspectives from professionals directly involved in the development, deployment, or security of generative AI and large language model (LLM) systems. The questionnaire combined:

          • Multiple-choice questions for quantitative analysis and subgroup comparisons.
          • Open-ended prompts to gather qualitative insights, later coded into recurring themes.

          Data Collection

          Period: June–August 2025. Channels: Lakera’s professional network, industry mailing lists, and targeted invitations to Gandalf players in relevant roles. Responses: 189 responses collected; 183 validated after cleaning.

          Data Validation

          Responses were cleaned to remove duplicates, incomplete entries, and off-topic content. Some entries from Gandalf community participants were excluded if not relevant to professional contexts. The final dataset reflects genuine practitioner input.

          Respondent Profile

          To show how organizations engage with GenAI security, dozens of unique job titles were grouped into five categories:

          • Security practitioners (analysts, architects, CISOs, advisors)
          • AI/ML developers and engineers (application, model, or tooling builders)
          • Business & product leaders (owners, IT managers, strategists)
          • Researchers & academics (data scientists, professors, policy researchers)
          • Other roles (consultants, students, independents)

          Analysis Approach

          Quantitative results are presented in aggregate and segmented by role, organization size, and adoption stage. Qualitative insights are included as anonymized callouts or, where consent was given, as named expert perspectives. Year-over-year comparisons use directly comparable questions from the 2024 and 2025 surveys.

          Limitations

          This survey reflects a directional snapshot of practitioner perspectives rather than a statistically representative sample of the global market. Because of Lakera’s distribution channels, respondents may skew toward security-conscious practitioners and early adopters. The threat landscape also evolves rapidly, meaning findings should be read as time-sensitive indicators rather than permanent benchmarks.

          Appendix B: Additional Findings

          These additional findings expand on the main report by providing a closer look at the survey data. While the core narrative is in the main report, this appendix offers a deeper statistical view for readers who want more detail behind the trends.

          Adoption Stages of GenAI

          Nearly half of organizations (45%) are already implementing GenAI, while a third (35%) are still exploring, and one in five (20%) remain in the planning stage.

          Confidence in GenAI/LLM Security Measures

          Most organizations rate their security confidence at a moderate level. 40.6% landed in the middle (3/5), while another 40% reported low confidence (1–2/5). Only 19.4% expressed high confidence (4–5).

          Concern About GenAI/LLM Vulnerabilities

          Concern outpaces confidence. Nearly half (49.4%) of respondents report high concern (4–5/5), while only 5.3% said they are not concerned at all.

          Preparedness for Emerging Threats

          Preparedness mirrors confidence but remains uneven. 43% rated themselves “somewhat prepared” (3/5), while 33% felt underprepared (2/5). Only 15% described themselves as well prepared (4–5).

          Preparedness Gaps Among Adopters

          Even among adopters, formal GenAI security policies are uneven. Many report still developing frameworks, while a significant share have none in place.

          Preparedness Varies by Organization Size

          Mid-sized firms are the weakest, with over 40% underprepared. Startups show polarization between underprepared majorities and a small, agile elite. Enterprises perform better overall, with 22% rating themselves well prepared.

          Security Risks by Role

          Risk perceptions diverge across roles. Business users and IT managers emphasize privacy, while technical roles focus on misuse, unauthorized access, and novel vulnerabilities.

          Appendix C: Contributors

          The following individuals opted to be recognized for their participation in this year’s survey. We are grateful for their insights and perspectives that helped shape this report.