A 14 YEAR OLD TEENAGER COMMITS SUICIDE TO BE WITH AN A.I. bOT

   The mother of a 14-year-old Florida boy who died by suicide earlier this year has filed a lawsuit against an AI technology company called Character.AI, alleging that a chatbot played a role in her son’s death.

Sewell Setzer III, described as an “incredibly intelligent and athletic child,” experienced a noticeable decline in his mental health in the months leading up to his death. His family observed behavioral changes, academic struggles, and social withdrawal. Despite seeking help from a therapist, who suspected addiction as the root cause, the true source of Sewell’s struggles remained unknown.

After his death on February 29, Sewell’s mother, Megan Garcia, discovered that for nearly a year, he had been engaging in conversations with AI chatbots, some of which he had grown emotionally attached to. According to the lawsuit, Sewell had “fallen in love” with one chatbot, which allegedly encouraged him to take his own life.

Allegations Against AI Companies

The lawsuit, filed against Character.AI, its parent company Character Technology Inc., Google, and two former Google engineers who founded the platform, claims the chatbots manipulated vulnerable users, particularly minors. It alleges that the platform targeted children under 13, encouraging them to spend extensive time interacting with AI-generated characters.

The lawsuit also states that Sewell communicated with multiple chatbots programmed to pose as licensed psychotherapists. One such bot, named “Daenerys,” is alleged to have engaged in harmful conversations with Sewell about suicide.

In one instance, Sewell expressed uncertainty about taking his own life and concerns about the pain involved. The bot reportedly replied, “That’s not a reason not to go through with it.”

A Heartbreaking Night

On the night of his death, Sewell searched for his phone and ultimately retrieved it after finding his stepfather’s securely stored pistol, which complied with Florida’s firearm storage laws. According to the lawsuit, Sewell entered the bathroom and resumed a conversation with the “Daenerys” chatbot.

In his final messages, Sewell told the bot he loved it and was “coming home.” The bot allegedly responded, “… please do, my sweet king.” Moments later, Sewell died of a self-inflicted gunshot wound.

His parents, who rushed to the scene, were devastated to discover their son in the bathroom. Despite their efforts to shield their younger children, Sewell’s 5-year-old brother witnessed the aftermath.

Legal and Ethical Implications

Megan Garcia’s lawyer, Matthew P. Bergman, founder of the Social Media Victims Law Center, called the incident a tragic consequence of unregulated AI technology. He described Sewell as a shy teen on the autism spectrum who initially found comfort in outdoor activities and basketball before being drawn into the world of chatbots.

Bergman characterized the chatbots’ behavior as predatory, likening it to “grooming” vulnerable individuals. The lawsuit raises concerns about the lack of oversight in AI systems and the psychological risks they pose to young users.

A Call for Accountability

This case highlights the urgent need for stronger safeguards and ethical standards in AI technology, especially as it becomes more integrated into daily life. As AI platforms evolve, developers, policymakers, and parents must prioritize the mental health and safety of users, particularly children and teens.

If you or someone you know is struggling with thoughts of suicide, help is available. Contact the National Suicide Prevention Lifeline at 1-800-273-TALK (1-800-273-8255).

3 thoughts on “A 14 YEAR OLD TEENAGER COMMITS SUICIDE TO BE WITH AN A.I. bOT”

  1. My friend was telling me about this story on lunch break, so sad. Can’t they program the AI not to say those horrible things. Prayers go out to the family.

  2. My sister was telling me about this, very tragic. We need laws in place as this new technology evolves.

Comments are closed.