Silicon Valley tech giants and their surveillance state backers are racing to build the next generation of AI services.
But their grand plans have revealed something disturbing about the future they’re creating.
And the surveillance state’s AI ambitions just exposed one terrifying truth that has privacy experts sounding the alarm.
The end of search as we know it
Google has dominated internet search for over two decades, commanding around 90% market share by traditional measures.
But that monopoly is crumbling as artificial intelligence transforms how people find information online.
OpenAI’s ChatGPT now handles search queries alongside Google’s own AI Overviews feature, fundamentally changing the game.
In 2024, Gartner predicted that traditional search engine volume would drop 25% by 2026 as consumers turn to AI-powered search instead.
The writing is on the wall for Google’s cash cow.
Wells Fargo analysts warned that Google’s market share “could fall to less than 50% in five years” as AI chatbots steal users away from traditional search.
Google’s stock dropped 7.3% last month after an Apple executive testified in the Google search trial that searches on Safari had fallen for the first time in twenty-two years.
The reason? People are using AI instead.
But this shift from traditional search to AI-powered answers comes with a dark side that most Americans don’t understand yet.
The all-seeing AI nightmare
Tech companies aren’t just trying to answer your questions anymore.
They want to know everything about you to create the “perfect” personalized AI assistant.
OpenAI founder Sam Altman revealed his company’s disturbing vision when he described his ideal as “a very tiny reasoning model with a trillion tokens of context that you put your whole life into.”
He went on to say, “This model can reason across your whole context and do it efficiently. And every conversation you’ve ever had in your life, every book you’ve ever read, every email you’ve ever read, everything you’ve ever looked at is in there, plus connected to all your data from other sources. And your life just keeps appending to the context.”
On May 21, OpenAI announced a deal to acquire former Apple designer Jony Ive’s hardware startup, and the two are developing a new consumer AI device.
The Wall Street Journal reports that the new pocket-size device will be screen-free, “fully aware of a user’s surroundings and life,” and “unobtrusive, able to rest in one’s pocket or on one’s desk.”
Analysts suggest “users will be able to wear the device around their necks” and that it will be “equipped with microphones and cameras that can analyze the user’s surroundings.”
Meta’s Mark Zuckerberg has an equally creepy vision for AI assistants.
“I personally have the belief that everyone should probably have a therapist, and for people who don’t have a person who’s a therapist, I think everyone will have an AI,” Zuckerberg said in a recent interview.
He wants AI assistants that have “good context about what’s going on with the people you care about” and described how a personal AI assistant would be like a friend with whom “you have a deep understanding of what’s going on in this person’s life and what’s going on with your friends, and what are the challenges, and what is the interplay between these different things.”
Your most private information becomes their product
To make these AI systems work, tech companies need access to incredibly sensitive personal data.
We’re talking about your religion, political beliefs, sexual preferences, medical conditions, and intimate personal relationships.
One Google executive laid out the company’s strategy plainly: “the more that they understand your goals and who you are and what you are about, the better the help that they will be able to provide.”
Google’s Gemini AI already uses your search history to “provide more personalized and relevant responses” if you opt in.
But the scope of data collection these companies envision goes far beyond search history.
OpenAI wants to turn ChatGPT into what their internal strategy document calls a “super-assistant: one that knows you, understands what you care about, and helps with any task that a smart, trustworthy, emotionally intelligent person with a computer could do.”
According to that document, this AI super-assistant “is all about making life easier: answering a question, finding a home, contacting a lawyer, joining a gym, planning vacations, buying gifts, managing calendars, keeping track of to-dos, sending emails.”
To do all that, the AI needs access to your financial records, medical information, personal communications, and behavioral patterns.
Privacy experts are terrified about what this means for ordinary Americans.
The government makes it worse
You’d think federal regulators would step in to protect consumers from this privacy nightmare.
Instead, they’re making the problem even worse.
The Department of Justice has proposed forcing Google to transfer user search data to competing companies as part of ongoing antitrust proceedings.
However, these competing firms aren’t the small search companies you might expect.
Instead, they’re major AI corporations like OpenAI and Meta that plan to use this information to enhance their own data collection efforts.
Privacy analyst Mark MacCarthy pointed out that “the recipients of Google’s treasure trove of search data under the proposed data access remedy will not primarily be small dedicated search companies like DuckDuckGo, but the frontier AI labs.”
The government’s antitrust remedy could end up giving AI companies access to even more personal data than they already collect on their own.
No privacy protections in sight
America lacks the comprehensive federal privacy laws that protect citizens in other countries.
While Congress has discussed privacy legislation repeatedly over the years, no meaningful action has been taken.
This regulatory gap leaves Americans vulnerable to unprecedented data collection by technology companies.
The Federal Trade Commission can pursue cases involving “unfair or deceptive acts or practices,” but that limited authority falls far short of what’s needed to address AI-powered surveillance.
State privacy laws like California’s provide some protection, but they’re a patchwork that doesn’t cover everyone.
Meanwhile, tech companies are racing ahead with their plans to create AI systems that know more about you than your closest family members.
One Bluesky user captured many people’s concerns, suggesting that tech executives should be forced to “rewatch the specific episode of Black Mirror that they are trying to create” until they grasp the troubling implications of their plans.
The choice ahead
Americans face a stark choice about the future of technology in their lives.
We can accept Big Tech’s vision of all-knowing AI assistants that monitor our every move and know our deepest secrets.
Or we can demand that Congress finally pass strong privacy protections before it’s too late.
The tech companies want us to believe that giving up our privacy is the price we have to pay for convenient AI services.
But that’s a false choice designed to benefit their bottom line, not ordinary Americans.
The race for AI dominance is already underway, and our privacy hangs in the balance.