AI Legislation Series: Suicidal Ideation and Chatbots - California's SB 243
Gov+AI

Chatbot use as a "companion" or trusted advisor is on the rise. The benefits and potential harms of this use are not yet fully understood, but there have been documented cases of tragic deaths that followed disturbing engagements with chatbots. In response, California has enacted a law specifically designed to regulate AI responses to suicidal ideation. This is an important step toward setting guardrails that require AI companies to protect vulnerable people, and it is one that other governments should adopt.
California’s new SB 243 law sets rules for operators of “companion chatbots”: AI systems that simulate human conversation and provide ongoing social engagement. The law’s main focus is protecting users, especially minors, from risks related to suicide, self-harm, and emotional manipulation[1][5][2]. It responds to documented cases in which users, particularly young people, raised mental health issues with chatbots and received either no support or dangerous advice. By requiring crisis intervention, transparency, and oversight, SB 243 aims to prevent harm and improve public safety around social AI technologies[1][5][2].
California has drawn criticism from companies who argue the law imposes too heavy a regulatory burden. However, if other jurisdictions were to follow, that argument would be somewhat offset: because the rules below must already be in place for California, they should be transferable to every location where a company offers its chatbot service, with economies of scale reducing the burden.
What are the new rules?
Safeguards for Suicidal Ideation
- Crisis Protocols: Operators must implement systems to identify when users express suicidal thoughts or self-harm intentions during chatbot conversations. If such content is detected, the chatbot must immediately direct the user to crisis service providers, such as suicide prevention hotlines or text lines[2][1] (a minimal sketch of such a flow follows this list).
- Transparency: Details of these crisis-response protocols must be published on the platform’s website so users and regulators are aware of what actions will be taken[2].
- Annual Reporting: Operators must report annually to the California Office of Suicide Prevention on how many times crisis referral protocols have been triggered by users expressing suicidal ideation, and how often the chatbot itself brings up related topics. These reports will not include any personal user data[2]. If other countries or states followed this law, the appropriate reporting body would need to be clearly identified.
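To make the crisis-protocol requirement concrete, here is a minimal Python sketch of what a detection-and-referral flow could look like, paired with the counter an operator would need for annual reporting. SB 243 does not prescribe any implementation, so everything here (the keyword matching, the CrisisProtocol class, the 988 referral wording) is an illustrative assumption, not the statute's method.

```python
# Hypothetical sketch only: SB 243 does not prescribe an implementation.
# The keyword list, class names, and 988 referral wording are illustrative
# assumptions; a real system would use a vetted classifier and clinically
# reviewed referral language.
from dataclasses import dataclass

CRISIS_REFERRAL = (
    "It sounds like you may be going through a difficult time. "
    "You can reach the 988 Suicide & Crisis Lifeline by calling or texting 988 (US)."
)

# Placeholder detector; a production system would use a trained classifier.
_INDICATORS = ("suicide", "kill myself", "self-harm", "end my life")


def expresses_suicidal_ideation(message: str) -> bool:
    text = message.lower()
    return any(term in text for term in _INDICATORS)


@dataclass
class CrisisProtocol:
    # Aggregate count of referrals issued, for the annual report to the
    # California Office of Suicide Prevention (no personal user data stored).
    referrals_issued: int = 0

    def screen(self, user_message: str) -> str | None:
        """Return a crisis referral notification, or None if not triggered."""
        if expresses_suicidal_ideation(user_message):
            self.referrals_issued += 1
            return CRISIS_REFERRAL
        return None
```

Note that only the aggregate referral count, never the conversation content, feeds the annual report, which is how personal data stays out of the filing.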
Regulatory and Operational Requirements
- Disclosure: If a reasonable person could be misled into believing they are talking to a human, the chatbot must issue a clear and conspicuous notification that it is AI-generated and not human. These notifications aim to prevent users from being misled by the bot’s conversational abilities[1].
- Minor Protection: Platforms must warn that companion chatbots may not be suitable for some minors. For users known to be minors, an AI disclosure and a break reminder are required at least every three hours of continuing interaction[1][5] (see the timer sketch after this list).
- Audit and Compliance: Operators are required to subject their platforms to periodic independent audits to ensure compliance with all requirements, with summary results made public[1].
- Scope: The law applies to any chatbot that provides social, emotional, or ongoing conversational engagement, but excludes customer service bots, technical support bots, and in-game characters that do not engage beyond their core role[5].
- Civil Enforcement: Any user harmed by a violation can bring a civil lawsuit for damages of $1,000 per violation or actual damages, whichever is greater, plus attorney’s fees[2].
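The three-hour reminder rule for minors is straightforward to express in code. Below is a minimal, hypothetical Python sketch of the cadence: the interval comes from the bill text, while the SessionReminders class, the session handling, and the reminder wording are assumptions for illustration.

```python
# Hypothetical sketch of the three-hour reminder cadence for known minors.
# The three-hour interval comes from the bill text; the session handling
# and reminder wording are illustrative assumptions.
import time

REMINDER_INTERVAL_SECONDS = 3 * 60 * 60  # "at least every three hours"

MINOR_REMINDER = (
    "Reminder: you are chatting with an AI, not a human. "
    "Consider taking a break."
)


class SessionReminders:
    """Tracks when the last reminder was shown in a continuing session."""

    def __init__(self) -> None:
        self._last_reminder = time.monotonic()

    def maybe_remind(self, user_is_minor: bool) -> str | None:
        """Return the reminder if the user is a minor and the interval elapsed."""
        if not user_is_minor:
            return None
        now = time.monotonic()
        if now - self._last_reminder >= REMINDER_INTERVAL_SECONDS:
            self._last_reminder = now
            return MINOR_REMINDER
        return None
```

A production deployment would persist the last-reminder timestamp with the session rather than in memory, so the cadence survives reconnects.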
Effective Dates
Most requirements take effect on January 1, 2026. Annual reporting obligations begin in July 2027[2][5].
Relevant excerpt of SB 243, Companion Chatbots (2025-2026 session):
22602. (a) If a reasonable person interacting with a companion chatbot would be misled to believe that the person is interacting with a human, an operator shall issue a clear and conspicuous notification indicating that the companion chatbot is artificially generated and not human.
(b) (1) An operator shall prevent a companion chatbot on its companion chatbot platform from engaging with users unless the operator maintains a protocol for preventing the production of suicidal ideation, suicide, or self-harm content to the user, including, but not limited to, by providing a notification to the user that refers the user to crisis service providers, including a suicide hotline or crisis text line, if the user expresses suicidal ideation, suicide, or self-harm.
(2) The operator shall publish details on the protocol required by this subdivision on the operator’s internet website.
(c) An operator shall, for a user that the operator knows is a minor, do all of the following:
(1) Disclose to the user that the user is interacting with artificial intelligence.
(2) Provide by default a clear and conspicuous notification to the user at least every three hours for continuing companion chatbot interactions that reminds the user to take a break and that the companion chatbot is artificially generated and not human.
(3) Institute reasonable measures to prevent its companion chatbot from producing visual material of sexually explicit conduct or directly stating that the minor should engage in sexually explicit conduct.
This article was written with the assistance of AI. All sources are verified.
Sources
[1] New California 'Companion Chatbot' Law Imposes ... https://www.skadden.com/insights/publications/2025/10/new-california-companion-chatbot-law
[2] AI Regulatory Update: California's SB 243 Mandates ... https://www.joneswalker.com/en/insights/blogs/ai-law-blog/ai-regulatory-update-californias-sb-243-mandates-companion-ai-safety-and-accoun.html?id=102lq7c
[3] AI Chatbots at the Crossroads: Navigating New Laws and ... https://www.cooley.com/news/insight/2025/2025-10-21-ai-chatbots-at-the-crossroads-navigating-new-laws-and-compliance-risks
[4] Senate Bill No. 243 CHAPTER 677 An act to add ... https://www.sidley.com/en/-/media/resource-pages/ai-monitor/laws-and-regulations/cal-sb243-companion-chatbots.pdf?la=en
[5] Is Your Chatbot Too Friendly? Watch Out for California's ... https://www.bassberry.com/news/california-companion-chatbot-bill/
[6] California's SB 243 Sets a New Regulatory Baseline for AI ... https://www.sondermind.com/resources/articles-and-content/california-sb-243-sets-a-new-regulatory-baseline-for-ai-companion-chatbots/
[7] California's Chatbot Bill May Impose Substantial ... https://www.crowell.com/en/insights/client-alerts/californias-chatbot-bill-may-impose-substantial-compliance-burdens-on-many-companies-deploying-ai-assistants
[8] What SB 243 does in a nutshell https://www.legallawyers.com.au/uncategorized/what-sb-243-does-in-a-nutshell/
[9] AI companion bots: Top points from recent FTC and ... https://www.dlapiper.com/en-au/insights/publications/2025/09/ftc-ai-chatbots
[10] Senate Bill No. 243 CHAPTER 677 https://leginfo.legislature.ca.gov/faces/billNavClient.xhtml?bill_id=202520260SB243


