
California has enacted the first law regulating the use of artificial intelligence (AI) chatbots by children and adolescents.
On Monday, Governor Gavin Newsom announced that he had signed legislation creating new safeguards to protect children from emerging technologies such as AI. Set to take effect on January 1 next year, the law requires AI chatbot operators to implement age-verification measures and to clearly label all chatbot responses as artificially generated.
Notably, platforms offering “companion chatbots” with intimate conversation capabilities must establish procedures to identify and respond to users’ expressions of suicidal thoughts or self-harm, and report these protocols to the state’s Department of Public Health. The law also prohibits chatbots from posing as medical professionals and mandates periodic “break reminders” for minor users.
Furthermore, chatbots must block minors’ access to sexually explicit images. Companies profiting from illegal deepfake videos could face fines of up to $250,000 per violation.
Newsom stated, “Emerging technology like chatbots and social media can inspire, educate, and connect – but without real guardrails, technology can also exploit, mislead, and endanger our kids.” He emphasized, “We’ve seen some truly horrific and tragic examples of young people harmed by unregulated tech, and we won’t stand by while companies continue without necessary limits and accountability.”
The legislation is the first in the United States to mandate that AI chatbot operators implement safety protocols. Tech media outlet TechCrunch reported that California has emerged as a pioneer in regulating AI chatbots. Several states, including Illinois, Nevada, and Utah, have already passed laws restricting or prohibiting the use of AI chatbots for mental health counseling or therapy.