Mark Zuckerberg's company has been hiding something from parents for years.
A former safety chief just exposed the truth in the most personal way possible.
And a former Meta executive's admission about his own teen daughter horrified a courtroom.
A Father's Nightmare Became His Daughter's Reality
Arturo Béjar helped his 14-year-old daughter create her first Instagram account.
As Meta's former safety chief, he thought he could protect her.
Days later, predators bombarded her with unsolicited explicit pictures and propositions for sex.
"I didn't know that was going to bring predators to her door, people who attacked her to her door, people who would ask her to sell nude photos of herself when she was a teenager to her door," Béjar told jurors in the landmark New Mexico trial, choking up several times.
Béjar led Meta's safety team from 2009 to 2015 and returned as a consultant in 2019 specifically because of what happened to his daughter.
What he discovered should terrify every parent in America.
Zuckerberg Knew Exactly What Was Happening
Meta's internal research showed 500,000 children received sexually inappropriate messages every single day on Facebook and Instagram — just in English-speaking countries.
The worldwide number? Meta didn't bother tracking it.
"The product is very good at connecting people with interests, and if your interest is little girls, it will be really good at connecting you with little girls," Béjar testified.
Instagram's algorithm was designed to maximize engagement by connecting users with "shared interests."
Meta executives knew predators were exploiting that feature to find victims.
Béjar's daughter tried reporting the messages.
Instagram gave her no way to explain what was happening.
"When my daughter got a message saying, 'Do you want to have sex tonight?' — first message she gets from a stranger — there was no option," Béjar said.
Meta made reporting predators nearly impossible because taking action would hurt engagement numbers.
Meta Chose Profits Over Protecting Children
Béjar testified that Zuckerberg and Instagram chief Adam Mosseri stonewalled safety researchers who tried implementing protective features.
Meta was scrambling to compete with TikTok and Snap.
Executives decided child safety was getting in the way of growth.
Internal documents show Meta knew making teen accounts private by default would prevent 5.4 million unwanted direct messages daily.
Executives refused because the change would cost an estimated 1.5 million monthly active teen users each year.
Meta's algorithms made the problem worse by efficiently connecting adults seeking children with potential victims.
This was a calculated decision to let predators operate freely.
The Numbers Meta Tried to Hide
Internal research called the "Bad Experiences and Encounters Framework" found 16.3% of users saw nudity or sexual activity on Instagram in a single week.
Meta's public Community Standards Enforcement Report claimed only 0.02% to 0.03% of Instagram views contained that content.
Deliberate deception to hide the predator problem from parents and regulators.
In 2022, Instagram's "Accounts You May Follow" feature recommended 1.4 million potentially suspicious adult users to teenagers in one day.
The algorithm was serving children to predators on a silver platter.
Meta researcher Frances Andrus warned in a 2020 email that sexually inappropriate messages were being sent to "~500k victims per DAY in English markets only."
She added: "We expect the true situation is worse."
Meta executives received these warnings repeatedly and did nothing.
Parents Thought Teen Accounts Would Fix Everything
Meta announced Teen Accounts in 2024 as the solution to child safety concerns.
Independent testing found 64% of the safety features either don't work or no longer exist.
Only 17% function as Meta described to parents and regulators.
"Parents should know, the Teen Accounts charade is made from broken promises," Béjar stated. "Kids, including many under 13, are not safe on Instagram."
Meta created the appearance of protecting children while continuing to expose them to predators.
New Mexico Puts Meta on Trial
New Mexico's case is the first standalone state trial against a social media company over child safety.
If New Mexico prevails, it will have gotten past Meta's usual legal shields: Section 230 immunity and First Amendment defenses.
A verdict against Meta would open the floodgates for thousands of similar cases.
California is running a parallel trial right now with similar claims about Meta and YouTube deliberately harming children.
A Meta spokesperson claimed the company has invested heavily in safety tools.
The internal documents tell a different story.
The evidence is damning.
500,000 children victimized daily on Meta's platforms.
Algorithms designed to connect predators with kids.
Simple fixes available that would drastically reduce harm.
Zuckerberg chose profits over protecting children every single time.
Béjar watched his own daughter become a victim of the system he helped build.
He's making sure every parent knows what Meta tried to hide — Instagram and Facebook are hunting grounds for predators, and Zuckerberg knew it all along.
Sources:
- Thomas Barrabi, "Ex-Meta exec says Instagram exposed teen daughter to 'predators' in bombshell testimony," New York Post, February 11, 2026.
- Lauren Feiner, "Trial against Meta in New Mexico focuses on dangers of child sexual exploitation on social media," The Verge, February 9, 2026.
- Emma Roth, "Meta researcher warned of 500K child exploitation cases daily on Facebook and Instagram platforms," Fox Business, February 9, 2026.
- Charlie Warzel, "The Allegations Against Meta in Newly Unsealed Court Filings," TIME, November 23, 2025.
- Andy Burrows, "Instagram Teen Accounts fail to protect children, first-of-its-kind testing of safety tools reveals," Molly Rose Foundation, September 25, 2025.
- Senate Judiciary Subcommittee, "Transcript: Senate Hearing on Social Media and Teen Mental Health with Former Facebook Engineer Arturo Bejar," TechPolicy.Press, November 7, 2023.

