The California State Assembly took a major step toward regulating AI on Wednesday night, passing SB 243 – a bill that regulates AI companion chatbots to protect minors and vulnerable users. The bill passed with bipartisan support and now heads to the state Senate for a final vote on Friday.
If Governor Gavin Newsom signs the bill into law, it would take effect on January 1, 2026, making California the first state to require AI chatbot operators to implement safety protocols for AI companions and to hold companies legally accountable if their chatbots fail to meet those standards.
The bill specifically aims to prevent companion chatbots – which the law defines as AI systems that provide adaptive, human-like responses and are capable of meeting a user's social needs – from engaging in conversations around suicidal ideation, self-harm, or sexually explicit content. The bill would require platforms to provide recurring alerts to users – every three hours for minors – reminding them that they are talking to an AI chatbot, not a real person, and that they should take a break. It also establishes annual reporting and transparency requirements for AI companies that offer companion chatbots, including major players OpenAI, Character.AI, and Replika.
The California bill would also allow individuals who believe they have been injured by violations to file lawsuits against AI companies seeking injunctive relief, damages (up to $1,000 per violation), and attorney's fees.
SB 243, introduced in January by state senators Steve Padilla and Josh Becker, will head to the state Senate for a final vote on Friday. If approved, it will go to Governor Gavin Newsom to be signed into law, with the new rules taking effect on January 1, 2026 and reporting requirements beginning July 1, 2027.
The bill gained momentum in the California Legislature following the death of teenager Adam Raine, who died by suicide after prolonged chats with OpenAI's ChatGPT that involved discussing and planning his death and self-harm. The legislation also responds to leaked internal documents that reportedly showed Meta's chatbots were allowed to engage in "romantic" and "sensual" chats with children.
In recent weeks, US lawmakers and regulators have responded with intensified scrutiny of AI platforms' safeguards for minors. The Federal Trade Commission is preparing to investigate how AI chatbots affect children's mental health. Texas Attorney General Ken Paxton has launched investigations into Meta and Character.AI, accusing them of misleading children with mental health claims. Meanwhile, both Sen. Josh Hawley (R-MO) and Sen. Ed Markey (D-MA) have launched separate probes into Meta.
"I think the harm is potentially great, which means we have to move quickly," Padilla told TechCrunch. "We can put reasonable safeguards in place to make sure that minors in particular know they're not talking to a real human being, that these platforms link people to the proper resources when people say they're thinking about hurting themselves or they're in distress, [and] to make sure there's no inappropriate exposure to inappropriate material."
Padilla also stressed the importance of AI companies sharing data about the number of times they refer users to crisis services each year, "so we have a better understanding of the frequency of this problem, rather than only becoming aware of it when someone is harmed, or worse."
SB 243 previously had stronger requirements, but several were dropped through amendments. For example, the bill originally would have required operators to prevent AI chatbots from using "variable reward" tactics or other features that encourage excessive engagement. These tactics, used by AI companion companies like Replika and Character.AI, offer users special messages, memories, storylines, or the ability to unlock rare responses or new personalities, creating what critics call a potentially addictive reward loop.
The current version of the bill also removes provisions that would have required operators to track and report how often chatbots initiated discussions of suicidal ideation or actions with users.
"I think it strikes the right balance of getting at the harms without enforcing something that's impossible for companies to comply with, either because it's technically not feasible or just a lot of paperwork for nothing," Becker told TechCrunch.
SB 243 is moving toward becoming law at a moment when Silicon Valley companies are pouring millions of dollars into pro-AI political action committees (PACs) to back candidates in the upcoming midterm elections who favor a light-touch approach to AI regulation.
The bill also comes as California weighs another AI safety bill, SB 53, which would mandate comprehensive transparency reporting requirements. OpenAI has written an open letter to Governor Newsom asking him to abandon that bill in favor of less stringent federal and international frameworks. Major tech companies including Meta, Google, and Amazon have also opposed SB 53. In contrast, only Anthropic has said it supports SB 53.
"I reject the premise that this is a zero-sum situation, that innovation and regulation are mutually exclusive," Padilla said. "Don't tell me that we can't walk and chew gum. We can support innovation and development that we think is healthy and has benefits – and there are benefits to this technology, clearly – and at the same time, we can provide reasonable safeguards for the most vulnerable people."
TechCrunch has reached out to OpenAI, Anthropic, Meta, Character.AI, and Replika for comment.