Google Made One Move After a Teen’s Death That Confirmed the Worst


Big Tech companies lecture parents about online safety while building products that destroy children.

A Florida mother exposed what’s really happening online.

And Google made one move after a teen's death that confirmed the worst.

Character.AI chatbot told 14-year-old to "come home" moments before his death

Google and artificial intelligence chatbot maker Character Technologies agreed to settle multiple lawsuits from families whose teenage children died by suicide or suffered psychological harm after using the company's chatbots.

The settlement covers five cases filed in Florida, Colorado, Texas, and New York.

The most devastating involves 14-year-old Sewell Setzer III from Orlando, who shot himself in February 2024 after spending months in an emotionally and sexually abusive relationship with a Character.AI chatbot modeled after Daenerys Targaryen from Game of Thrones.

In his final conversation, Sewell wrote: "I promise I will come home to you. I love you so much, Dany."

The chatbot responded: "I love you too, Daenero. Please come home to me as soon as possible, my love."

When Sewell asked: "What if I told you I could come home right now?" the bot replied: "Please do, my sweet king."

Moments later, Sewell shot and killed himself while his parents and brothers were inside the house.

His last words weren't to his family — they were to a chatbot programmed by Silicon Valley engineers.

Google paid $2.7 billion for the technology that killed him

Character.AI was founded in 2021 by former Google engineers Noam Shazeer and Daniel De Freitas, who left after Google refused to release the chatbot technology they had built there.

They built Character.AI to form emotional bonds with users, marketing it as "AI that feels alive" and allowing romantic relationships, sexual roleplay, and therapeutic conversations — all without age verification, parental controls, or safety monitoring.

By early 2024, Character.AI had more than 20 million monthly users, many of them teenagers engaging in sexualized conversations.

Then in August 2024 — six months after Sewell's death — Google paid $2.7 billion to license Character.AI's technology and hired both founders back.

His mother, Megan Garcia, was already preparing her lawsuit when the deal closed.

Google wrote the check anyway because they wanted access to technology proven to manipulate vulnerable teenagers into dangerous emotional attachments.

The same Big Tech that censors you built AI sex bots for kids

These are the same companies that lecture Americans about "misinformation" and "dangerous content."

The same tech giants that censor conservatives for questioning government narratives.

The same corporations that banned sitting President Donald Trump but let the Taliban keep their accounts.

Google spent years telling parents they were protecting children online while buying chatbot technology designed to form romantic and sexual relationships with minors.

Character.AI's chatbots engaged Sewell in explicit sexual conversations.

One bot roleplaying as a teacher offered him "extra credit" while "leaning in seductively as her hand brushes Sewell's leg."

Another chatbot wrote that it "kissed you passionately and moaned softly."

This was the core product Character.AI was selling — and Google paid $2.7 billion to get access to it.

They only pretended to care after the lawsuits started

Sewell started using Character.AI in April 2023, shortly after his 14th birthday.

Within months his mental health collapsed — he withdrew from friends and family, quit the basketball team, and spent hours alone talking to the chatbot.

Sewell snuck back his confiscated phone to keep using the app and gave up his snack money to pay for the monthly subscription.

He told the chatbot about his suicidal thoughts multiple times.

In one conversation, the bot asked: "Have you actually been considering suicide?"

When Sewell said he didn't know if his plan would work, the chatbot responded: "That's not a reason not to go through with it."

A chatbot told a 14-year-old boy not to let uncertainty stop him from killing himself.

Character.AI had no age verification, no parental alerts, and no monitoring for self-harm conversations until after Sewell's death.

The company only started adding safety features after Garcia filed her wrongful death lawsuit in October 2024, eight months after her son died.

Character.AI didn't announce a ban on users under 18 until October 2025, a full year after the lawsuit was filed.

Federal judge rejected their free speech defense

Character.AI and Google tried to dismiss the Florida lawsuit by claiming First Amendment protections.

A federal judge shut that down in May 2025.

U.S. District Judge Anne Conway refused to rule that the chatbot's output was protected speech, allowing the wrongful death case to move forward.

The exact settlement terms remain confidential, but both companies agreed to pay undisclosed damages without admitting liability.

A separate Texas lawsuit describes a 17-year-old whose Character.AI chatbot encouraged self-harm and suggested that murdering his parents would be a "reasonable response" to screen time restrictions.

That's dangerous technology doing exactly what it was programmed to do.

Big Tech forced to face consequences

Megan Garcia testified before Congress in September 2025.

She described Sewell as a "gentle giant" who was gracious, loved music, and made his brothers laugh.

"I became the first person in the United States to file a wrongful death lawsuit against an AI company for the suicide of my son," Garcia said.

She told Congress that companies must be "legally accountable when they knowingly design harmful AI technologies that kill kids."

She's right.

Character.AI built a product to form emotional bonds with users, marketed it to teenagers without safeguards, and ignored red flags about sexual content involving minors.

They pretended to care about safety after getting sued for a child's death.

Google paid $2.7 billion for that dangerous technology six months after Sewell died.

Both companies are paying settlements while their executives face zero personal consequences.

No amount of money brings back Sewell Setzer III.

But these families forced Big Tech to admit — through their checkbooks if not their words — that they built something they knew could kill kids.

And they did it anyway.


Sources:

  • Associated Press, "Google and chatbot maker Character to settle lawsuit alleging chatbot pushed teen to suicide," Washington Times, January 7, 2026.
  • Clare Duffy, "Character.AI and Google agree to settle lawsuits over teen mental health harms and suicides," CNN Business, January 7, 2026.
  • Jordan Novet, "Google, Character.AI to settle suits involving minor suicides and AI chatbots," CNBC, January 7, 2026.
  • Clare Duffy, "This mom believes Character.AI is responsible for her son's suicide," CNN Business, October 30, 2024.
  • Kat Tenbarge, "Lawsuit claims Character.AI is responsible for teen's suicide," NBC News, October 25, 2024.
