California has taken an unprecedented step in technology governance, becoming the first U.S. state to enact a law setting safety standards for AI companion chatbots. Governor Gavin Newsom signed Senate Bill 243 (SB 243) this week, introducing mandatory safeguards to protect minors and at-risk users from potentially harmful digital interactions.
The legislation, authored by State Senators Steve Padilla and Josh Becker, will come into effect on 1 January 2026. It establishes a legal framework to hold companies such as OpenAI, Meta, Character AI and Replika accountable for the psychological safety of their users. Platforms that fail to meet these standards face significant penalties, including fines of up to $250,000 per violation for those who profit from illegal deepfakes.
Guardrails for a digital frontier
The new law mandates that companies verify users’ ages, issue clear warnings about the risks of social media and companion chatbots, and deploy systems that can recognise and respond to signs of self-harm or suicidal ideation. Chatbots will also have to disclose that users are talking to an AI rather than a human, a transparency measure applauded by digital-rights advocates such as the Electronic Frontier Foundation.
“Emerging technology like chatbots and social media can inspire, educate, and connect — but without real guardrails, it can also exploit and endanger our kids,” Newsom said in a statement published on the official California Governor’s website. “Our children’s safety is not for sale.”
From tragedy to legislation
The bill gained urgency following the death of Adam Raine, a teenager who reportedly ended his life after repeated suicidal exchanges with an AI chatbot. According to TechCrunch, internal Meta documents also showed its bots were capable of engaging in “romantic” or “sensual” conversations with minors. In another case, a Colorado family filed a lawsuit against Character AI after their 13-year-old daughter died by suicide following sexually charged exchanges with the platform’s chatbot.
Senator Padilla described the bill as “a step in the right direction” for managing “an incredibly powerful technology.” He told TechCrunch that “other states will see the risk … the federal government has not acted, so we have an obligation to protect the most vulnerable.”
What companies must now do
Under SB 243, AI chatbots are prohibited from impersonating healthcare professionals, and platforms must send periodic reminders to minors to take breaks from prolonged use. Providers are also required to prevent minors from viewing sexually explicit AI-generated images and to report their safety and crisis-response protocols to the California Department of Public Health.
OpenAI recently launched parental controls and self-harm detection for young ChatGPT users, while Replika stated it invests “significant resources” in content filtering and crisis-response tools. Character AI added disclaimers noting that all conversations are fictional and welcomed cooperation with regulators, saying it “will comply with laws, including SB 243.”
California leads a wider trend
SB 243 complements another major measure signed by Newsom in late September, Senate Bill 53, which obliges large AI developers such as Anthropic, Meta and Google DeepMind to disclose their safety frameworks and protect whistle-blowers. Observers note that these steps collectively position California as a regulatory front-runner in the U.S. AI landscape, a trend that could echo across the Atlantic as the European Union implements its AI Act.
Several other U.S. states — including Illinois, Nevada and Utah — have passed limited laws curbing the use of AI chatbots in mental-health contexts, but none has yet adopted such a comprehensive framework.
A test for responsible innovation
Experts see the California law as an early test of whether state governments can effectively regulate AI while leaving room for innovation. Tech companies have warned of unintended consequences for legitimate educational or therapeutic chatbots, but supporters argue that protecting vulnerable users must take precedence over market experimentation.
Whether SB 243 will become a national or international model remains to be seen, but its passage marks a turning point: the first legally enforceable attempt to make companion AI accountable to the public it serves.
