Even video game companies are jumping on the censorship bandwagon, and they’re utilizing artificial intelligence to do it.
Activision’s popular Call of Duty franchise announced Aug. 30 that its “anti-toxicity team” is now using artificial intelligence (AI) to listen to in-game conversations. The AI company Modulate will provide “real-time voice chat moderation, at-scale,” according to the announcement, through its ToxMod voice chat moderation technology, beginning with the November release of Call of Duty: Modern Warfare III.
The gaming company also announced a beta rollout on the existing games Call of Duty: Modern Warfare II and Warzone. The policy announcement raises multiple questions, including who determines what counts as “hate speech,” and whether ToxMod will become the latest agent for unfairly censoring speech that goes against the leftist narrative.
Call of Duty proudly declared that ToxMod will “identify in real-time and enforce against toxic speech—including hate speech, discriminatory language, harassment and more.” No definition of these terms was provided.
PC Gamer noted on Friday that Riot Games and Blizzard are two other video game companies that have also implemented voice chat moderation. The outlet also reported that Modulate recently added a “violent radicalization” category to ToxMod for “terms and phrases relating to white supremacist groups, radicalization, and extremism.” That category is itself cause for concern. Leftists have absurdly labeled Betsy Ross flags and black Republicans as white supremacist, raising serious questions about how Modulate defines “extremism.”
Activision claimed on its website that the voice chat moderation AI only reports “toxic behavior” and that the company alone decides how to enforce. The current rollout covers only English-language voice chat, with other languages to follow, according to the announcement.
“The system helps enforce the existing Code of Conduct, which allows for ‘trash-talk’ and friendly banter,” the company stated. “Hate speech, discrimination, sexism, and other types of harmful language, as outlined in the Code of Conduct, will not be tolerated.” The code, however, does not give a clear definition of hate speech. Violations can result in text chat bans, temporary suspensions or even permanent suspension.
Modulate claims that ToxMod can even process the “nuance of voice” to “understand emotion.” AI companies usually make grandiose claims about their technology, PC Gamer reported, though real-world performance does not always match the marketing.
AI can be programmed with bias. The popular chatbot ChatGPT, for instance, has been shown to exhibit a leftist and anti-Christian bias, with one study finding it gave left-leaning answers to 14 of 15 questions. Could the same be true of ToxMod?
Conservatives are under attack. Contact your representatives and demand that Big Tech be held to account to mirror the First Amendment and provide transparency and an equal footing for conservatives. If you have been censored, contact us using CensorTrack’s contact form, and help us hold Big Tech accountable.