A new bill introduced in California (SB 243) would require AI companies to periodically remind kids that a chatbot is an AI and not a human. The bill, proposed by California Senator Steve Padilla, is meant to protect children from the “addictive, isolating, and influential aspects” of AI.

In addition to limiting companies from using “addictive engagement patterns,” the bill would require AI companies to provide annual reports to the State Department of Health Care Services outlining how many times they detected suicidal ideation by kids using the platform, as well as the number of times a chatbot brought up the topic. It would also require companies to tell users that their chatbots might not be appropriate for some kids.

Last year, a parent filed a wrongful death lawsuit against Character.AI, alleging its custom AI chatbots are “unreasonably dangerous” after her teen, who continuously chatted with the bots, died by suicide. Another lawsuit accused the company of sending “harmful material” to teens. Character.AI later announced that it’s working on parental controls and developed a new AI model for teen users that will block “sensitive or suggestive” output.

“Our children are not lab rats for tech companies to experiment on at the cost of their mental health,” Senator Padilla said in the press release. “We need common sense protections for chatbot users to prevent developers from employing strategies that they know to be addictive and predatory.”

As states and the federal government double down on the safety of social media platforms, AI chatbots could soon become lawmakers’ next target.
