California Governor Gavin Newsom signed a landmark bill on Monday that regulates AI companion chatbots, making California the first state in the nation to require AI chatbot operators to implement safety protocols for AI companions.
The law, SB 243, is designed to protect children and vulnerable users from some of the harms associated with AI companion chatbot use. It holds companies — from the big labs like Meta and OpenAI to more focused companion startups like Character AI and Replika — legally accountable if their chatbots fail to meet the law’s standards.
SB 243 will go into effect January 1, 2026, and requires companies to implement certain features, such as age verification and warnings regarding social media and companion chatbots. The law also implements stronger penalties for those who profit from illegal deepfakes, including fines of up to $250,000 per offense. Companies must also establish protocols to address suicide and self-harm, which will be shared with the state’s Department of Public Health alongside statistics on how the service provided users with crisis center prevention notifications.
Per the bill’s language, platforms must also make it clear that any interactions are artificially generated, and chatbots must not represent themselves as healthcare professionals. Companies are required to offer break reminders to minors and prevent them from viewing sexually explicit images generated by the chatbot.
Some companies have already begun to implement safeguards aimed at children. For example, OpenAI recently began rolling out parental controls, content protections, and a self-harm detection system for children using ChatGPT. Character AI has said that its chatbot includes a disclaimer that all chats are AI-generated and fictionalized.
It’s the second significant AI regulation to come out of California in recent weeks. On September 29th, Governor Newsom signed SB 53 into law, establishing new transparency requirements on large AI companies. The bill mandates that large AI labs, like OpenAI, Anthropic, Meta, and Google DeepMind, be transparent about safety protocols. It also ensures whistleblower protections for employees at those companies.
Other states, like Illinois, Nevada, and Utah, have passed laws to restrict or fully ban the use of AI chatbots as a substitute for licensed mental health care.
TechCrunch has reached out to Character AI, Meta, OpenAI, and Replika for comment.
Source: TechCrunch