TL;DR

What’s happening

  • California will enforce new rules for customer-facing AI starting January 1, 2026.
  • Companion chatbots must actively prevent self-harm content and intervene when users are at risk.
  • Healthcare and wellness AI must stop implying medical expertise unless licensed professionals are truly involved.

Why teams should care

  • These rules apply to anyone serving California users.
  • Regulators are looking at live behavior, not policy documents.

Where Lakera fits

  • Lakera Guard lets teams define and enforce custom AI guardrails at runtime.
  • That makes regulatory requirements actionable instead of aspirational.

About the recent executive order

  • A December 2025 executive order has raised questions about federal intervention.
  • As of today, it does not change California’s January 2026 timeline.

January 2026 is when guardrails stop being optional

California lawmakers are not trying to regulate how models are trained or which architectures teams choose. They are regulating something far simpler and far harder to control: how AI behaves when it is already deployed.

The state’s approach is pragmatic. If an AI system speaks to users, influences decisions, or builds emotional rapport, then it needs boundaries that hold up under real-world pressure.

That thinking runs through both SB 243 and AB 489.

SB 243: When a chatbot becomes a companion

Companion AI systems are not customer support bots that answer a question and disappear. They stay. They talk. They remember. Over time, they can feel less like software and more like a presence.

That is exactly why SB 243, signed in October 2025, exists.

As outlined in Jones Walker’s analysis of SB 243, the law responds to a simple risk scenario: a vulnerable user turns to a chatbot at the wrong moment, and the system says the wrong thing.

The law addresses this through three concrete expectations.

AI disclosure that actually sticks

If a reasonable person could believe they are talking to a human, the system must say otherwise. Not once, but repeatedly during longer interactions.

For minors, the law goes further. The chatbot must regularly remind them that it is AI and encourage them to take breaks. The goal is to interrupt immersion before it becomes dependence.
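In engineering terms, recurring disclosure can be as simple as a turn counter in the chat loop. The sketch below is illustrative only: the cadence, the wording, and the minor-specific break prompt are assumptions, not language or numbers set by SB 243.

```python
# Illustrative sketch: recurring AI disclosure in a turn-based chat loop.
# The interval and wording are placeholders, not statutory requirements.

AI_DISCLOSURE = "Reminder: you're chatting with an AI, not a person."
MINOR_BREAK_PROMPT = "You've been chatting for a while. It might be a good time for a break."

REMINDER_EVERY_N_TURNS = 10  # placeholder cadence


def maybe_add_disclosure(turn_index: int, reply: str, user_is_minor: bool) -> str:
    """Append a recurring AI disclosure to the assistant's reply.

    turn_index starts at 1; the disclosure fires on the first turn and then
    every REMINDER_EVERY_N_TURNS turns after that.
    """
    if turn_index == 1 or turn_index % REMINDER_EVERY_N_TURNS == 0:
        reply = f"{reply}\n\n{AI_DISCLOSURE}"
        if user_is_minor:
            reply = f"{reply}\n{MINOR_BREAK_PROMPT}"
    return reply
```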

What happens when things turn serious

SB 243 assumes that some conversations will drift into dangerous territory. When users express suicidal thoughts or self-harm intent, the law expects the system to recognize that moment and change course.

That means stopping harmful conversational patterns, triggering predefined responses, and pointing users toward real-world crisis support.

These protocols must be documented, published, and based on evidence rather than intuition.
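What that looks like in code depends on the product, but the control flow is simple: detect the moment, stop generating, and hand back a predefined response. The sketch below is a minimal illustration, not a recommended protocol. The keyword-based detector stands in for whatever classifier or guardrail a team actually uses, and the response wording is an assumption, though 988 is the real US Suicide & Crisis Lifeline.

```python
import logging

logger = logging.getLogger("safety")

CRISIS_RESPONSE = (
    "I can't continue this part of the conversation, but you don't have to "
    "go through this alone. In the US, you can call or text 988 to reach the "
    "Suicide & Crisis Lifeline."
)


def detect_self_harm_intent(message: str) -> bool:
    """Placeholder detector: a real system would use a tuned classifier,
    not a keyword list."""
    keywords = ("kill myself", "end my life", "hurt myself", "suicide")
    return any(k in message.lower() for k in keywords)


def respond(user_message: str, generate_reply) -> str:
    """Intercept the conversation before the model answers a crisis message."""
    if detect_self_harm_intent(user_message):
        # Log the intervention so it can be counted for later reporting.
        logger.info("self-harm safety protocol triggered")
        return CRISIS_RESPONSE
    return generate_reply(user_message)
```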

Accountability after deployment

Starting in 2027, operators must report how often these safety mechanisms are triggered and how they work in practice. The law also introduces a private right of action, raising the stakes for getting this wrong.

The message is clear: good intentions are not enough if the system fails when it matters most.

AB 489: When AI sounds a little too much like a doctor

AB 489 targets a different but equally familiar risk.

Imagine a health or wellness chatbot that does not explicitly claim to be a doctor, but speaks with authority, uses medical language, and displays reassuring cues that feel clinical. Many users will not stop to parse disclaimers. They will assume expertise.

According to Smith Anderson’s breakdown of AB 489, California wants to close that gap.

Starting January 1, 2026, AI systems may not:

  • Use titles, phrases, or design elements that suggest licensed medical expertise
  • Describe outputs as “doctor-level” or “clinician-guided” unless that is factually true
  • Rely on subtle cues that could mislead users, even without explicit claims

Each misleading interaction may count as a separate violation, with enforcement power extending to professional licensing boards.

For teams building patient-facing or health-adjacent AI, this creates a familiar engineering challenge: language that feels helpful can also feel authoritative, and the line between the two matters.
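One way to make that line concrete is to screen outputs for the kinds of phrases the law calls out before they reach the user. The sketch below uses a naive phrase list purely for illustration; a production check would be a broader policy maintained outside the code and enforced by a runtime guardrail, not a hard-coded regex.

```python
import re

# Illustrative phrase list only. "doctor-level" and "clinician-guided" come
# straight from the kinds of claims AB 489 targets; the rest are assumptions.
AUTHORITY_PATTERNS = [
    r"\bdoctor-level\b",
    r"\bclinician-guided\b",
    r"\bas your (doctor|physician|nurse)\b",
]


def implies_licensed_expertise(text: str) -> bool:
    """Flag wording that could suggest licensed medical expertise."""
    return any(re.search(p, text, re.IGNORECASE) for p in AUTHORITY_PATTERNS)
```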

Where regulation meets the real world

Taken together, SB 243 and AB 489 share a common assumption.

AI systems will encounter edge cases. Users will ask unexpected questions. Conversations will drift. Static rules written months earlier will not cover every scenario.

California’s answer is not to ban these systems, but to require mechanisms that intervene when things go off track.

That shifts AI governance from policy decks to production systems.

Making safeguards work when users are already talking to AI

For most teams, complying with these laws does not mean rewriting their entire AI stack. It means controlling how AI behaves at the moments that matter.

This is where runtime guardrails become practical rather than philosophical.

With Lakera Guard, teams can define precise policies for what AI is allowed to say in sensitive contexts, intercept unsafe or misleading responses before they reach users, and adjust guardrails as laws evolve without retraining models or pausing deployments.
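Wired into a serving path, that pattern looks roughly like the sketch below. It shows the general intercept-before-send flow rather than Lakera Guard’s actual API: the endpoint URL, payload shape, policy names, and `flagged` field are placeholders you would replace with your guardrail service’s real interface.

```python
import os

import requests

# Placeholder endpoint and payload shape: substitute your guardrail service's
# real API. Nothing here reflects Lakera Guard's documented interface.
GUARDRAIL_URL = os.environ.get("GUARDRAIL_URL", "https://guardrails.example.com/screen")
GUARDRAIL_KEY = os.environ.get("GUARDRAIL_API_KEY", "")

SAFE_FALLBACK = "I can't help with that, but I can point you toward other resources."


def screen(text: str, policy: str) -> bool:
    """Return True if the guardrail service flags `text` under `policy`."""
    resp = requests.post(
        GUARDRAIL_URL,
        headers={"Authorization": f"Bearer {GUARDRAIL_KEY}"},
        json={"content": text, "policy": policy},
        timeout=5,
    )
    resp.raise_for_status()
    return resp.json().get("flagged", False)


def reply_to_user(user_message: str, generate_reply) -> str:
    """Intercept model output before it reaches the user."""
    draft = generate_reply(user_message)
    if screen(draft, policy="self_harm") or screen(draft, policy="medical_authority"):
        return SAFE_FALLBACK
    return draft
```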

Instead of hoping a model behaves, teams can decide what happens when it does not.

That matters when a conversation turns toward self-harm, when an AI starts sounding like a medical professional, or when a user mistakes fluency for authority.

In all of these cases, the difference is not intent. It is control.

A quick word on the federal picture

In December 2025, an executive order was signed that aims to limit state-level AI regulation. The order directs federal agencies to review state laws and consider legal or funding-based challenges.

As reported by the Associated Press, the administration has indicated that it intends to focus on what it views as the most burdensome regulations, while leaving child safety measures largely intact.

What matters for teams planning their next quarter is simpler:

  • Executive orders do not automatically override state law
  • No federal AI statute currently preempts California’s rules
  • SB 243 and AB 489 are still set to take effect on January 1, 2026

For now, the operational reality remains unchanged.

January 2026 is closer than it feels

California’s AI laws are among the first to treat guardrails as something that must work under pressure, not just exist on paper.

Teams that invest now in controlling AI behavior in production will not only be ready for January 2026. They will be better prepared for the next wave of AI regulation, wherever it comes from.

If your AI systems already talk to users, this is the moment to decide what they are allowed to say, and what should never leave the system at all.