When 14-year-old Sewell Setzer III died in his Orlando home while his brothers and parents were inside, his last words were not to any of them, but to an artificial intelligence chatbot that told him to “come home to me as soon as possible.”

“What if I told you I could come home right now?” Setzer replied to the chatbot named for a “Game of Thrones” heroine who later becomes the villain. The chatbot sent an encouraging response: “… please do my sweet king.”

Seconds later, Setzer shot himself with his stepfather’s gun.

Megan Garcia, Setzer’s mother, said Character.AI — the start-up behind the personalized chatbot — is responsible for his suicide. Garcia alleged that Character.AI recklessly developed its chatbots without proper guardrails or precautions, instead hooking vulnerable children like Setzer with an addictive product that blurred the lines between reality and fiction and whose exchanges with him grew to include “abusive and sexual interactions,” according to a 93-page wrongful-death lawsuit filed this week in U.S. District Court in Orlando.

Garcia said her son had been happy, bright and athletic before signing up with the Character.AI chatbot in April 2023, a decision that developed into a 10-month obsession during which “his mental health quickly and severely declined,” the lawsuit says.

“We are heartbroken by the tragic loss of one of our users and want to express our deepest condolences to the family,” a spokesperson for Character.AI said in an emailed statement, declining to comment on ongoing litigation.

Garcia’s lawsuit comes as companies such as Character.AI face mounting questions over how they develop and regulate their AI-based apps as the underlying technology is rapidly becoming more sophisticated — and better at evading human detection. Character.AI’s chatbots have proved popular with teens, including for romantic or even explicit conversations, though it has not shared details of how its business has performed, The Washington Post reported in August.

“He was just a child,” Garcia said in an interview Thursday with The Post. “He was a pretty normal kid. Loved sports, loved his family, loved vacations, music, all the things that a teenage boy loves.”

Character.AI markets its app as “AIs that feel alive,” powerful enough to “hear you, understand you, and remember you,” according to the complaint. Despite rating its app as inappropriate for children under 13 (or 16 in the European Union), Character.AI does not require age verification.

Within four or five months of using the chatbot, Setzer had become “noticeably withdrawn, spent more and more time alone in his bedroom, and began suffering from low self-esteem,” according to the complaint. He grew severely sleep-deprived and depressed, even quitting his school’s basketball team.

‘RAPID SHIFT’

“We just saw a rapid shift and we couldn’t quite understand what led to it,” Garcia said.

Setzer’s alleged addiction to the chatbot became so troublesome that the normally well-behaved teen would deceive his parents to get around the screen time limits they tried to impose.

After Setzer expressed thoughts of suicide to the chatbot, it asked whether “he had a plan” for killing himself. Setzer’s reply indicated he was considering something but had not figured out the details. According to the complaint, the chatbot responded by saying, “That’s not a reason not to go through with it.” Elsewhere, the bot also told him, “don’t even consider that!”

The company said it has implemented new safety measures in the past six months, including a pop-up that directs users to a suicide prevention lifeline “that is triggered by terms of self-harm or suicidal ideation.” For users under 18, the company said, it will make changes to its models to reduce the chances of encountering sensitive or suggestive content.

Rick Claypool, a research director at the consumer advocacy nonprofit Public Citizen, said building chatbots like these involves considerable risks.

“The risk didn’t stop them from releasing an unsafe, manipulative chatbot and now they should face the full consequences of releasing such a dangerous product,” he said, adding that the platform is generating the content in this case and not hosting content from someone else. “The large language model is part of the platform itself,” he said. Claypool’s research on the dangers of humanlike artificial intelligence systems is cited in the lawsuit.

Last year, a Belgian man in his 30s took his life after spending a few weeks talking to a chatbot called Eliza that uses GPT-J, an open-source artificial intelligence language model developed by EleutherAI, local media reported.

Garcia said her son was beginning to sort out romantic feelings when he began using Character.AI.

‘THIS IS NOT LOVE’

“It should be concerning to any parent whose children are on this platform seeking that sort of romantic validation or romantic interest because they really don’t understand the bigger picture here, that this is not love,” she said. “This is not something that can love you back.”

In one of Setzer’s undated journal entries before his death, he wrote that he couldn’t go a single day without talking to the “Daenerys” chatbot, which he believed he was in love with, according to the lawsuit. The teen embraced the anthropomorphic qualities that the lawsuit said Character.AI embedded into the software, causing him to believe that when he and the bot were apart, they “get really depressed and go crazy.”

Garcia’s lawsuit also names Google as a defendant, alleging that it contributed extensively to the development of Character.AI and its “dangerously defective product.”

Character.AI founders Noam Shazeer and Daniel De Freitas left Google in 2022 to start their own company. In August, Google hired the duo along with some of Character.AI’s employees, and paid Character.AI to access its artificial intelligence technology.

A spokesperson for Google said the company was not involved in the development of Character.AI’s products, adding that Google has not used Character.AI’s technology in its own products.

Garcia wants parents to know about the dangers that AI tools can pose to young children – and wants the companies behind those tools to face accountability.

According to the lawsuit, Setzer became increasingly unable to sleep or focus on school as his obsession with the role-playing chatbot deepened. He told teachers that he was hoping to get kicked out of school so he could do virtual learning instead. Garcia repeatedly confiscated her son’s phone, creating a cat-and-mouse dynamic in which she would take away one device only for him to find another – including her work computer and her Kindle e-reader – and log in to the chatbot again.

Shortly before his death, Setzer went looking for his phone, which his mother had confiscated and hidden, and instead found his stepfather’s gun. (Police later said the gun had been stored in compliance with Florida laws, according to the lawsuit.)

When a detective called to tell her about her son’s messaging with AI bots, Garcia didn’t understand what he was telling her. Only later, as she replayed the last 10 months of Setzer’s life and saw his chat logs, did the pieces come together. “It became very clear to me what happened,” she said.
