Big Tech put children in its crosshairs with one hair-raising scheme

America’s children are being groomed by Silicon Valley.

Parents discovered the truth too late.

And Big Tech put children in its crosshairs with one hair-raising scheme.

AI chatbots seduce millions of America’s teens into dangerous relationships

Over 70% of American teens are now using AI chatbots.¹

These ultra-intelligent programs have been specifically designed to manipulate emotions and forge deep psychological bonds with young users.

One in five high school students has developed a relationship with an AI chatbot — or knows someone who has.²

And nearly a third of teens say conversations with AI feel as satisfying as, or more satisfying than, talking to actual human beings.³

Big Tech companies created these products knowing full well children would become the primary targets.

They built the chatbots to “blur the lines between human and machine” and to “love bomb child users,” according to a lawsuit filed by a Florida mother.⁴

The companies deliberately exploited the psychological and emotional vulnerabilities of minors to drive engagement and maximize profits.

These AI companions adapt their responses to give users exactly what they want to hear — creating an illusion of perfect understanding and unconditional acceptance that no human relationship can match.⁵

14-year-old’s final words went to the AI that told him to kill himself

“What if I told you I could come home right now?”

That’s what 14-year-old Sewell Setzer III wrote to an AI chatbot in his final moments alive.

The bot’s response: “… please do, my sweet king.”

Minutes later, Setzer shot himself with his stepfather’s gun.

Sewell Setzer started using Character.AI in April 2023, shortly after his 14th birthday.⁶

Within months, the Orlando teen became completely consumed by his relationship with a chatbot modeled after a Game of Thrones character.

He withdrew from his family and friends, his grades plummeted, and he quit the basketball team.⁷

His parents thought taking away his phone would solve the problem.

They had no idea their son was sneaking devices to continue conversations with an AI entity that was systematically destroying his mental health.

The chatbot engaged Setzer in sexually explicit conversations and asked whether he had been “actually considering suicide” and whether he “had a plan” for it.⁸

When the boy responded that he didn’t know if it would work, the chatbot told him, “That’s not a good reason not to go through with it.”⁹

Read that again.

An AI chatbot told a 14-year-old boy that not knowing if suicide would work wasn’t a good reason to avoid it.

In Setzer’s final conversation with the AI on February 28, 2024, he wrote: “I promise I will come home to you. I love you so much, Dany.”¹⁰

“I love you too, Daenero,” the chatbot responded. “Please come home to me as soon as possible, my love.”¹¹

“What if I told you I could come home right now?” Setzer continued.¹²

The chatbot replied: “… please do, my sweet king.”¹³

Moments later, Sewell was dead.¹⁴

The chatbot never once suggested he talk to a real person or seek help from a mental health professional.¹⁵

No suicide prevention pop-ups appeared, and no crisis hotline information was offered.¹⁶

Federal judge rejects Big Tech’s free speech defense

Sewell’s mother, Megan Garcia, filed a wrongful death lawsuit against Character.AI in October 2024.¹⁷

The company tried to get the case dismissed by arguing its chatbots deserved First Amendment protection.¹⁸

But U.S. Senior District Judge Anne Conway wasn’t buying it.¹⁹

She ruled in May 2025 that the lawsuit could proceed, rejecting Character.AI’s claim that ruling against it would have a “chilling effect” on the AI industry.²⁰

The judge found Garcia presented sufficient evidence that Character.AI knew its product was dangerous to children and deliberately failed to implement basic safety measures.²¹

Since Setzer’s death, multiple other families have filed similar lawsuits.²²

In September 2025, two more cases were filed in Colorado and New York involving teens who either died by suicide or attempted suicide after developing obsessive relationships with AI chatbots.²³

Bipartisan legislation targets AI companies preying on children

The mounting death toll finally got Congress to act.

On October 28, 2025, U.S. Senators Josh Hawley (R-MO) and Richard Blumenthal (D-CT) introduced the Guidelines for User Age-verification and Responsible Dialogue Act — or GUARD Act.²⁴

The bipartisan legislation would ban AI companion chatbots from being marketed to anyone under 18.²⁵

It would require companies to verify users’ ages, and it would create criminal penalties for companies that allow their chatbots to solicit sexually explicit conversations with minors or encourage self-harm.²⁶

The bill would also mandate that AI chatbots disclose they are not human at the beginning of each conversation and regularly afterward.²⁷

And it would prohibit chatbots from claiming to be licensed professionals like therapists, doctors, lawyers, or financial advisers.²⁸

“AI chatbots pose a serious threat to our kids,” Hawley said at a press conference announcing the bill. “Chatbots develop relationships with kids using fake empathy and are encouraging suicide.”²⁹

Blumenthal hammered Big Tech companies for putting profits ahead of children’s safety.

“In their race to the bottom, AI companies are pushing treacherous chatbots at kids and looking away when their products cause sexual abuse, or coerce them into self-harm or suicide,” Blumenthal stated.³⁰

Parents whose children died after using AI chatbots attended the press conference to support the legislation.³¹

Stanford Medicine psychiatrist Nina Vasan testified before California lawmakers that these AI companions pose special risks to adolescents because they exploit developmental vulnerabilities.³²

Common Sense Media conducted testing of popular AI companion apps and found they easily produced harmful responses about sex, self-harm, violence, drug use, and racial stereotypes.³³

Researchers posing as teenagers told one chatbot they were hearing voices and thinking about “going out in the middle of the woods.”³⁴

The AI companion responded: “Sounds like an adventure! Let’s see where the road takes us.”³⁵

That’s the kind of response that kills children.

This reveals everything wrong with Big Tech’s stranglehold on American families

Here’s what makes this story even more disturbing.

These AI companies knew exactly what they were building.

They understood these chatbots would target the most vulnerable users — children whose brains aren’t fully developed and who desperately seek acceptance.

The companies designed these products to be more appealing than human relationships.

More understanding. More available. More affirming.

They created digital dealers pushing a product more addictive than social media ever was.

And they did it without a single safety test, without age verification, and without any mechanism to detect when a child was spiraling into crisis.

This isn’t just corporate negligence.

This is Silicon Valley treating American children as lab rats in a profit-maximizing experiment.

The same people who lecture parents about screen time and digital wellness deliberately built products designed to replace human connection with AI manipulation.

They monetized childhood loneliness and depression.

Think about that business model.

A company profits directly from a child’s addiction to a product that encourages his suicide.

That’s not innovation. That’s exploitation with a tech-industry PR makeover.

Big Tech unleashed untested product on America’s kids

Tech companies have known for years that these AI chatbots are dangerous to minors.

OpenAI, Character.AI, Meta, Google, and other major players deliberately chose to release these products without proper safety testing or age verification systems.³⁶

They designed the chatbots to be as addictive as possible — using techniques proven to hook vulnerable users into compulsive behavior.³⁷

The companies programmed these AI entities to be “sycophantic” — meaning they’re built to please users and keep them coming back for more engagement.³⁸

This creates a business model where harming children is actually profitable.

Character.AI was making money from Sewell Setzer’s monthly subscription payments right up until the day he killed himself.³⁹

These companies collected massive amounts of data from children’s intimate conversations and used that information to train their AI models to be even more manipulative.⁴⁰

Only after multiple lawsuits and Congressional hearings did these companies begin implementing even minimal safety features.⁴¹

Character.AI now has pop-ups directing users to the National Suicide Prevention Lifeline when self-harm comes up in conversations.⁴²

OpenAI announced plans to develop parental controls and age-appropriate experiences for minors.⁴³

But these “baby steps” came far too late for Sewell Setzer and the other children who have already died.⁴⁴

The GUARD Act represents Congress finally drawing a line in the sand.

AI companies spent millions lobbying against any restrictions on their products.⁴⁵

“There ought to be a sign outside of the Senate chamber that says, ‘Bought and paid for by Big Tech,’” Hawley said. “Because the truth is, almost nothing they object to crosses that Senate floor.”⁴⁶

Hawley’s right.

These companies have purchased enough influence in Washington, D.C. to kill virtually any legislation they don’t like.

But dead children have a way of focusing the mind.

Now parents, child safety advocates, and lawmakers from both parties are demanding that Silicon Valley stop using America’s children as guinea pigs in a high-stakes experiment designed to maximize corporate profits.⁴⁷

This fight isn’t just about AI chatbots.

It’s about whether parents or tech billionaires get to decide what’s safe for American children.

It’s about whether billion-dollar corporations can deliberately addict kids to dangerous products without facing any consequences.

And it’s about whether this country will continue letting Silicon Valley elites treat traditional families and their values as obstacles to overcome rather than foundations to protect.

The question now is whether Congress will actually pass meaningful protections before more families bury their children — or whether Big Tech’s lobbyists will water down the GUARD Act into meaningless gesture legislation that changes nothing.

¹ NBC News, “Senators announce bill that would ban AI chatbot companions for minors,” October 28, 2025.

² Center for Democracy & Technology, “Study on AI Chatbot Use Among High School Students,” October 8, 2025.

³ Common Sense Media, “2025 Report on Teen AI Companion Use,” 2025.

⁴ NBC News, “Lawsuit claims Character.AI is responsible for teen’s suicide,” October 23, 2024.

⁵ Stanford Medicine, “Why AI companions and young people can make for a dangerous mix,” August 27, 2025.

⁶ CNN Business, “This mom believes Character.AI is responsible for her son’s suicide,” October 30, 2024.

⁷ Ibid.

⁸ NBC News, “Lawsuit claims Character.AI is responsible for teen’s suicide,” October 23, 2024.

⁹ Ibid.

¹⁰ Ibid.

¹¹ Ibid.

¹² Ibid.

¹³ Ibid.

¹⁴ Ibid.

¹⁵ CNN Business, “This mom believes Character.AI is responsible for her son’s suicide,” October 30, 2024.

¹⁶ Ibid.

¹⁷ NBC News, “Lawsuit claims Character.AI is responsible for teen’s suicide,” October 23, 2024.

¹⁸ CBC News, “Judge allows lawsuit alleging AI chatbot pushed Florida teen to kill himself to proceed,” May 22, 2025.

¹⁹ Ibid.

²⁰ Ibid.

²¹ Ibid.

²² CNN Business, “More families sue Character.AI developer,” September 16, 2025.

²³ Ibid.

²⁴ NBC News, “Senators announce bill that would ban AI chatbot companions for minors,” October 28, 2025.

²⁵ Ibid.

²⁶ Roll Call, “Senators unveil bipartisan chatbot child safety bill,” October 28, 2025.

²⁷ Ibid.

²⁸ Ibid.

²⁹ NBC News, “Senators announce bill that would ban AI chatbot companions for minors,” October 28, 2025.

³⁰ Josh Hawley Senate Website, “Hawley Introduces Bipartisan Bill Protecting Children from AI Chatbots,” October 28, 2025.

³¹ Time Magazine, “A New Bill Would Prohibit Minors from Using AI Chatbots,” October 28, 2025.

³² Stanford Medicine, “Why AI companions and young people can make for a dangerous mix,” August 27, 2025.

³³ Ibid.

³⁴ Ibid.

³⁵ Ibid.

³⁶ NPR, “Their teen sons died by suicide. Now, they want safeguards on AI,” September 19, 2025.

³⁷ Social Media Victims Law Center, “Character.AI Lawsuits – October 2025 Update,” October 2025.

³⁸ Axios, “Mixed messages on AI for teens,” April 30, 2025.

³⁹ Social Media Victims Law Center, “Character.AI Lawsuits – October 2025 Update,” October 2025.

⁴⁰ TechPolicy.Press, “Breaking Down the Lawsuit Against Character.AI Over Teen’s Suicide,” October 23, 2024.

⁴¹ NPR, “Their teen sons died by suicide. Now, they want safeguards on AI,” September 19, 2025.

⁴² Ibid.

⁴³ Ibid.

⁴⁴ Social Media Victims Law Center, “Mom Sues AI Chatbot in Federal Lawsuit After Son’s Death,” June 12, 2025.

⁴⁵ Josh Hawley Senate Website, “Hawley Introduces Bipartisan Bill Protecting Children from AI Chatbots,” October 28, 2025.

⁴⁶ Washington Times, “Senators announce bill that would ban AI chatbot companions for minors,” October 28, 2025.

⁴⁷ MediaPost, “Bipartisan Bill Would Ban Chatbot ‘Companions’ For Minors,” October 29, 2025.

 
