The legislation targets social media and AI platforms interacting with California users
California Governor Gavin Newsom has signed several landmark bills establishing new safeguards for artificial intelligence (AI) chatbots and social media platforms, with a particular focus on protecting minors from psychological and data-privacy risks.
The announcement came Monday from the Governor’s Office, confirming that Senate Bill 243 (SB 243) — co-sponsored by Senators Steve Padilla and Josh Becker — will take effect in January 2026. The law requires platforms using AI tools to clearly disclose when users are interacting with a chatbot rather than a human, especially for children and teens.
Padilla cited several troubling cases where AI companion bots allegedly encouraged self-harm or suicidal behavior among minors. “This technology can be a powerful educational and research tool, but left to their own devices, the tech industry is incentivized to capture young people’s attention at the expense of their real-world relationships,” Padilla said in a prior statement.
What the new laws require
Under the new framework, social media companies, gaming platforms, and websites serving California residents must implement:
- Age-verification systems to identify minors.
- Crisis-response protocols for self-harm or suicide-related interactions.
- Disclosure warnings when an AI chatbot is in use.
The legislation also tightens corporate accountability, limiting the ability of companies to claim their AI systems act “autonomously” to avoid legal liability.
These measures reflect growing national concerns over how AI companions and conversational bots influence young users.
National context: AI regulation spreads across states
California joins Utah, which enacted similar safeguards in 2024, requiring chatbots to inform users that they are not conversing with a human.
At the federal level, Senator Cynthia Lummis introduced the Responsible Innovation and Safe Expertise (RISE) Act earlier this year. The bill seeks to provide limited civil-liability immunity for AI developers in critical sectors such as healthcare, finance, and law — a move designed to balance innovation with ethical responsibility.
Experts say the California laws could set a precedent for other U.S. states and may even influence international digital-safety standards.
With the new measures, California — home to many of the world’s leading AI firms — becomes the first major state to enforce direct safeguards over AI-powered social interactions, reinforcing its reputation as a regulatory trendsetter in technology governance.