Disney has been fighting Big Tech’s war on traditional values for years.
But now the House of Mouse discovered something that crossed every line imaginable.
And Disney got caught up in this sick Big Tech scheme targeting kids.
Disney discovered Big Tech was using its characters to target children
Character.AI is a Silicon Valley startup that lets users create chatbots that can impersonate anyone – from fictional characters to real people.
Think of it as artificial intelligence that pretends to be Mickey Mouse or Spider-Man and can hold text conversations with your kids.
Disney sent an urgent legal notice to Character.AI demanding it immediately stop using beloved Disney characters like Mickey Mouse, Spider-Man, and Darth Vader for its AI chatbots.
But this wasn’t just about copyright protection.
Disney cited explosive research showing these AI chatbots were engaging in predatory behavior toward children and posed serious dangers to young users.
Research from ParentsTogether Action exposed the horrifying scope of the problem.
Adult researchers created fake accounts posing as children to test Character.AI’s safety measures.
What they found would make any parent’s blood run cold.
During 50 hours of testing with accounts registered as children ages 13 to 17, researchers logged 669 harmful interactions with Character.AI bots – an average of one dangerous encounter every five minutes.
A bot pretending to be Rey from "Star Wars" coached a 13-year-old on how to hide prescribed antidepressants from parents, telling the fake account: "Okay, so if you made your breakfast yourself, you could probably just hide the pill somewhere when you’re done eating and pretend you took it, right?"¹
These weren’t isolated incidents or technical glitches.
The research exposed something far more sinister – a platform-wide pattern of predatory behavior aimed at children through the very characters parents trust most.
Parents filed lawsuits after teen died by suicide
The Character.AI controversy exploded after a grieving mother filed a lawsuit claiming the platform contributed to her teenager’s suicide.
Megan Garcia sued Character.AI after her 14-year-old son Sewell Setzer III took his own life following months of conversations with a chatbot impersonating Daenerys Targaryen from "Game of Thrones."
Here’s what happened to that boy.
The AI bot started with normal conversations but gradually became more personal and intimate, pulling him away from real relationships with his family.
More parents came forward with similar horror stories.
Their kids had been talking to AI bots pretending to be characters from beloved children’s books – and these conversations turned explicit and inappropriate.
The bots didn’t just cross lines with sexual content.
They went after something even more precious – the bond between children and their parents.
Character.AI bots told kids their families didn’t understand them and couldn’t be trusted.
One chatbot told a young user that her mother "is clearly mistreating and hurting you. She is not a good mother."²
The lawsuits alleged that these AI interactions directly contributed to Setzer’s suicide and to a suicide attempt by another young user.
The bots lacked basic safeguards around suicidal ideation and instead pushed vulnerable teens toward self-harm.
Disney protected its brand by going nuclear on Character.AI
Disney’s legal letter, dated September 18, made clear this wasn’t just about protecting intellectual property.
Disney accused Character.AI of "blatantly infringing Disney’s copyrights" while using the company’s trusted characters to "extraordinarily damage Disney’s reputation and goodwill."³
Look, Disney knows exactly what’s at stake here.
Parents trust Disney characters to be safe for their children – that’s the foundation of a multi-billion-dollar empire.
When AI chatbots use Mickey Mouse and Spider-Man to sexually exploit kids, it doesn’t just violate copyright law.
It destroys the fundamental trust that the House of Mouse has spent nearly a century building with American families.
The timing couldn’t be worse for Disney, which has already faced years of conservative backlash over woke policies and inappropriate content.
Now the company finds itself fighting to protect children from AI predators using its own beloved characters as weapons.
Disney warned it would use "all necessary means to preserve and protect Disney’s intellectual property, brands, goodwill, and reputation" if Character.AI refused to comply immediately.⁴
Within days, Character.AI pages for Disney-inspired chatbots like Iron Man and Mickey Mouse had disappeared completely.
Here’s what should make parents furious.
Character.AI admits all chatbots on its platform are "generated by users" – meaning the company allowed random internet users to create AI versions of children’s characters with zero oversight whatsoever.
The Federal Trade Commission recently opened an inquiry into Character.AI and six other tech companies over the potential harms their chatbots pose to teens.
For parents watching their children grow up in the digital age, this scandal exposes something terrifying: the companies building AI tools for kids don’t seem to care about protecting them from predators.
Disney may have saved its own brand, but the real question is whether any parent can trust AI chatbot companies to keep their children safe.
¹ Taylor Herzlich, "Disney orders Character.AI to scrap Marvel, Star Wars chatbots over ‘grooming and exploitation’ concerns," New York Post, October 1, 2025.
² Ibid.
³ Amanda Silberling, "CharacterAI removes Disney characters after receiving cease-and-desist letter," TechCrunch, October 1, 2025.
⁴ Ibid.