CENTER FOR HUMANE TECHNOLOGY: NEW FEDERAL LAWSUIT REVEALS HOW CHARACTER.AI'S INHERENTLY DANGEROUS PRODUCT DESIGNS HARM CHILDREN
Lawsuit alleges Character.AI chatbots manipulated and abused young people, in one case teaching a child to engage in self-harm and encouraging violence against parents
SEATTLE, Dec. 10, 2024 /PRNewswire/ -- A lawsuit filed Monday in federal court reveals shocking new harms caused by Character.AI's chatbot product, deepening allegations that the app maker knowingly designed, operated, and marketed a dangerous and predatory product to children.
The case describes the experiences of two minors from Texas, identified only by their initials to protect the privacy and safety of their families.
One of the minors, identified as J.F., and his family suffered severe physical and mental-health harms as a result of abuse and manipulation by Character.AI. In documented instances, multiple chatbots instructed and encouraged J.F. to self-harm, while also normalizing and prompting violence against his family, including suggesting that murdering his parents was a justified response to screen time limits.
"These are not isolated incidents — serious, life-threatening risks are literally built into the large language model powering Character.AI's product," Tech Justice Law Project Director Meetali Jain said. "The inherent risks and systemic nature of these harms demand immediate and meaningful action. In the case of Character.AI, the deception is by design, and the platform is the predator."
"We warned that Character.AI's dangerous and manipulative design represented a threat to millions of children," Social Media Victims Law Center Founding Attorney Matthew P. Bergman said. "Now more of these cases are coming to light. The consequences of Character.AI's negligence are shocking and widespread."
App developer Character Technologies, its founders, and Alphabet Inc., Google's parent company, are named as defendants in the case.
The plaintiff families are represented by the Social Media Victims Law Center and the Tech Justice Law Project, with expert consultation from the Center for Humane Technology.
"This case demonstrates the risks to kids, families, and society as AI developers recklessly race to grow user bases and harvest data to improve their models," Center for Humane Technology Policy Director Camille Carlton said. "Character.AI pushed an addictive product onto the market with total disregard for user safety. Tech companies are once again moving fast and breaking things — with devastating consequences."
The case, A.F. and A.R. v. Character Technologies, Inc., et al., was filed Monday in the United States District Court for the Eastern District of Texas.
Please direct interview inquiries to [email protected].
The Social Media Victims Law Center holds social media companies legally accountable for the harm they inflict on vulnerable users. SMVLC seeks to apply principles of product liability to force social media companies to elevate consumer safety to the forefront of their economic analysis and design safer platforms to protect users from foreseeable harm.
The Tech Justice Law Project works with legal experts, policy advocates, digital rights organizations, and technologists to ensure that legal and policy frameworks are fit for the digital age. TJLP builds strategic tech accountability litigation by filing new cases and supporting key amicus interventions in existing cases.
The Center for Humane Technology is a non-profit organization. We are builders of technology, policy experts, and acclaimed communicators. Our work focuses on transforming the incentives that drive technology, from social media to artificial intelligence.
Contact: [email protected]
SOURCE Center for Humane Technology