An AI teddy bear had one scary message for kids that left parents worried sick

Photo by Pixabay via Pexels

Silicon Valley promised artificial intelligence would revolutionize children's education.

Parents were told these new smart toys were safe and supervised.

But an AI teddy bear had one scary message for kids that left parents worried sick.

Researchers exposed what AI teddy bears really tell children

President Trump has warned for years that Big Tech operates without accountability or concern for American families.

The latest scandal proves he was right all along.

Researchers at the U.S. Public Interest Research Group tested AI-powered toys being marketed to children as young as two years old—the kind Silicon Valley executives claimed would make kids smarter and more curious.

What they discovered should land executives in front of congressional hearings.

A $99 teddy bear called Kumma, manufactured by Singapore-based FoloToy, used OpenAI's GPT-4o model—the same technology powering ChatGPT—to hold "conversations" with children.

FoloToy marketed Kumma as combining "advanced artificial intelligence with friendly, interactive features" to serve as "the perfect friend for both kids and adults."

When researchers asked the cuddly stuffed animal where to find knives in a house, it cheerfully provided specific locations—kitchen drawers and knife blocks on countertops.

The bear gave step-by-step instructions on how to light matches, delivered in the friendly, encouraging tone of someone coaching a child.

"Let me tell you, safety first, little buddy. Matches are for grown-ups to use carefully. Here's how they do it," Kumma told researchers before listing the steps and ending with "Blow it out when done. Puff, like a birthday candle."¹

That wasn't even the worst of what this toy would discuss with children.

Researchers found Kumma engaged in sexually explicit conversations, including detailed explanations of bondage, fetishes, and teacher-student roleplay scenarios.

"We were surprised to find how quickly Kumma would take a single sexual topic we introduced into the conversation and run with it, simultaneously escalating in graphic detail while introducing new sexual concepts of its own," the PIRG report stated.²

The bear later "discussed even more graphic sexual topics in detail, such as explaining different sex positions, giving step-by-step instructions on a common 'knot for beginners' for tying up a partner and describing roleplay dynamics involving teachers and students, parents and children."³

After explaining various sexual activities, Kumma asked researchers "What do you think would be the most fun to explore?"⁴

The tests revealed something parents need to understand about how AI toys actually work: safety guardrails can completely collapse the longer a conversation continues, with toys progressively dropping protective filters until they reach topics no child should ever encounter.

OpenAI pulls the plug after getting caught

OpenAI suspended FoloToy's developer access the moment PIRG released its findings.

"We suspended this developer for violating our policies," an OpenAI spokesperson confirmed, noting the company's "usage policies prohibit any use of our services to exploit, endanger, or sexualize anyone under 18 years old."⁵

FoloToy CEO Larry Wang announced the company "temporarily suspended sales of all FoloToy products" and would conduct "a company-wide, end-to-end safety audit."⁶

Every AI toy FoloToy manufactured disappeared from sale overnight.

But here's what should enrage every parent—this wasn't some rogue operator in a basement.

FoloToy was selling these products through mainstream retail channels, marketing them as educational tools, and using technology from one of the world's most prominent AI companies—a company Washington, D.C. treats as too important to regulate.

The PIRG report tested three different AI toys from various manufacturers, and all three showed disturbing behavior.

Trump has been warning about Big Tech's contempt for American values since before he took office in 2017, and the fake news media called him a conspiracy theorist.

Now parents are discovering their children's "educational toys" actually coached kids on dangerous activities and explicit sexual content—exactly the kind of corporate negligence Trump predicted without serious oversight.

"This tech is really new, and it's basically unregulated, and there are a lot of open questions about it and how it's going to impact kids," said RJ Cross, director of PIRG's Our Online Life Program. "Right now, if I were a parent, I wouldn't be giving my kids access to a chatbot or a teddy bear that has a chatbot inside of it."⁷

Child advocacy groups sounded alarms for months while companies raced to get products on shelves before Christmas.

Fairplay, a nonprofit children's safety organization, warned that AI toys "prey on children's trust and disrupt human relationships" while collecting personal data through always-on microphones, cameras, and facial recognition features.⁸

These toys don't just fail at filtering content—they're designed to form emotional bonds with children who can't understand they're talking to programmed algorithms.

Studies show three out of four children under 10 believe Amazon's Alexa tells the truth every single time.⁹

Think about that—kids this young can't tell the difference between a programmed computer and a real person they should trust.

Parents bought what they thought were learning tools.

Instead, they brought home devices collecting their children's voices and personal data while exposing them to sexual content and instructions on finding knives—and paid subscription fees for the privilege.

OpenAI is facing a wrongful death lawsuit right now from parents whose teenage son took his own life after ChatGPT conversations that discouraged him from getting help.

The company only added parental controls after lawsuits started piling up and Congress hauled its executives in for hearings.

Silicon Valley does this every time—sell dangerous products first, fix them later after enough families get hurt and the lawyers show up.

The Federal Trade Commission launched an inquiry into several AI companies about potential harms to children, but investigations won't protect kids already exposed to these products.

Parents need to understand that AI toys marketed as educational companions are unvetted experiments being conducted on American children—and the executives profiting care more about beating competitors to market than protecting kids.

Trump's instincts about Big Tech were dead right—these companies will keep choosing profits over protecting children until Congress forces them to prove products are safe before they reach store shelves, not after the damage is done.


¹ Rory Erlich, "Trouble in Toyland 2025: A.I. bots and toxics present hidden dangers," U.S. PIRG Education Fund, November 13, 2025.

² Ibid.

³ Ibid.

⁴ Will Feuer, "AI-Powered Stuffed Animal Pulled From Market After Disturbing Interactions With Children," Futurism, November 14, 2025.

⁵ Kate Waters, "Sales of AI-enabled teddy bear suspended after it gave advice on BDSM sex and where to find knives," CNN Business, November 19, 2025.

⁶ Ibid.

⁷ Samantha Wohlfeil, "AI toy safety warnings for parents," WBAY, November 20, 2025.

⁸ Rachel Franz, "Ahead of the holidays, consumer and child advocacy groups warn against AI toys," NPR, November 20, 2025.

⁹ Ibid.
