America’s culture of free speech is under assault.
Concerns over censorship are rapidly increasing.
Now emerging technology is being used to shut people up in a frightening way.
The “arms race” over artificial intelligence (AI) has begun.
Numerous companies are developing AI platforms in an attempt to corner the market.
Tech entrepreneur Elon Musk has warned that AI could be more dangerous than nuclear weapons, making vigilance regarding the technology a necessity.
One disturbing use for AI is censorship.
Call of Communist Duty
The Chinese Communist Party (CCP) uses AI as a tool to monitor its 1.4 billion subjects and implement social credit scores.
And Western countries are following in China’s footsteps.
Now video game companies are using AI to monitor “toxic” speech via their games’ chat functions.
So speech deemed problematic could land a game user in trouble.
Activision outlined this new feature for its popular series Call of Duty in the following statement:
“Call of Duty is taking the next leap forward in its commitment to combat toxic and disruptive behavior with in-game voice chat moderation beginning with the launch of Call of Duty: Modern Warfare III this November 10th. Activision will team with Modulate to deliver global real-time voice chat moderation, at-scale, starting with this fall’s upcoming Call of Duty blockbuster.”
Activision will deploy a feature called ToxMod to oversee gamers.
It’s not the AI so much as the monitoring of speech that everyone should fear.
The statement continued:
“Call of Duty’s new voice chat moderation system utilizes ToxMod, the AI-Powered voice chat moderation technology from Modulate, to identify in real-time and enforce against toxic speech—including hate speech, discriminatory language, harassment and more. This new development will bolster the ongoing moderation systems led by the Call of Duty anti-toxicity team, which includes text-based filtering across 14 languages for in-game text (chat and usernames) as well as a robust in-game player reporting system.”
Derogatory language during gaming may be a problem in some isolated instances, but using AI to police everyone is a cure that’s worse than the disease.
One problem is that the definition of “harmful” is murky and ever-expanding.
On top of that, the people programming the AI to monitor language will undoubtedly have their own political biases, as seen with the popular ChatGPT platform.
For instance, when ChatGPT was prompted to write a complimentary poem about both Joe Biden and Donald Trump, the AI obliged for Biden but refused to write anything praiseworthy about Trump.
If left unchecked, AI could help authoritarians implement CCP-style social credit monitoring in all avenues of life.
Stay tuned to Unmuzzled News for any updates to this ongoing story.