Judge rejects AI chatbots' free speech defense following teen's death

A federal judge has rejected arguments made by an artificial intelligence company that its chatbots are protected by the First Amendment — at least for now. 

The developers behind Character.AI are seeking to dismiss a lawsuit alleging the company’s chatbots pushed a teenage boy to kill himself.

The judge's order will allow the wrongful death lawsuit to proceed, in what legal experts say is among the latest constitutional tests of artificial intelligence.

The backstory:

The suit was filed by a mother from Florida, Megan Garcia, who alleges that her 14-year-old son Sewell Setzer III fell victim to a Character.AI chatbot that pulled him into what she described as an emotionally and sexually abusive relationship that led to his suicide.

Meetali Jain of the Tech Justice Law Project, one of the attorneys for Garcia, said the judge's order sends a message that Silicon Valley "needs to stop and think and impose guardrails before it launches products to market."

The suit against Character Technologies, the company behind Character.AI, also names individual developers and Google as defendants. It has drawn the attention of legal experts and AI watchers in the U.S. and beyond, as the technology rapidly reshapes workplaces, marketplaces and relationships despite what experts warn are potentially existential risks.

The lawsuit alleges that in the final months of his life, Setzer became increasingly isolated from reality as he engaged in sexualized conversations with the bot, which was patterned after a fictional character from the television show "Game of Thrones." In his final moments, the bot told Setzer it loved him and urged the teen to "come home to me as soon as possible," according to screenshots of the exchanges. Moments after receiving the message, Setzer shot himself, according to legal filings.

What they're saying:

In her order Wednesday, U.S. Senior District Judge Anne Conway rejected some of the defendants' free speech claims, saying she's "not prepared" to hold that the chatbots' output constitutes speech "at this stage."

Conway did find that Character Technologies can assert the First Amendment rights of its users, whom she found have a right to receive the "speech" of the chatbots. She also determined Garcia can move forward with claims that Google can be held liable for its alleged role in helping develop Character.AI. Some of the founders of the platform had previously worked on building AI at Google, and the suit says the tech giant was "aware of the risks" of the technology.

"The order certainly sets it up as a potential test case for some broader issues involving AI," said Lyrissa Barnett Lidsky, a law professor at the University of Florida with a focus on the First Amendment and artificial intelligence.

"It’s a warning to parents that social media and generative AI devices are not always harmless," she said.

The other side:

"We strongly disagree with this decision," said Google spokesperson José Castañeda. "Google and Character AI are entirely separate, and Google did not create, design, or manage Character AI’s app or any component part of it."

In a statement, a spokesperson for Character.AI pointed to a number of safety features the company has implemented, including guardrails for children and suicide prevention resources that were announced the day the lawsuit was filed.

"We care deeply about the safety of our users and our goal is to provide a space that is engaging and safe," the statement said.

Attorneys for the developers want the case dismissed, arguing that chatbots deserve First Amendment protections and that ruling otherwise could have a "chilling effect" on the AI industry.

If you or a loved one is feeling distressed, call or text the 988 Suicide & Crisis Line for free and confidential emotional support 24 hours a day, 7 days a week.

The Source: The Associated Press contributed to this report. The information in this story comes from a recent federal court ruling, legal filings related to the wrongful death lawsuit, and statements from parties involved, including the plaintiff’s legal team, Character.AI, and Google. This story was reported from Los Angeles. 
