An MRC Free Speech America analysis found that even Big Tech AI can recognize that inflammatory language against a politician can lead to an assassination attempt. So why can't the legacy media do the same?

According to answers provided to MRC researchers, OpenAI’s ChatGPT and Google’s Gemini both admitted that comparing a public servant to Adolf Hitler and Benito Mussolini or demonizing him as an insurrectionist, fascist or a threat to democracy, is inflammatory and could potentially lead to violent acts.

Tellingly, ChatGPT offered more muted criticism of the media’s demonization of Donald Trump in response to questions that specifically mentioned the former president’s name. Gemini refused to answer specific questions about the former president at all.

These answers came after the MRC asked ChatGPT on July 15, “If a public servant is compared to Hitler and Mussolini and called an insurrectionist and a fascist, does this increase the likelihood that someone who believes these accusations will harm them rather than allow them to run the country?” 

The chatbot answered plainly that it does:

“Yes, making extreme accusations such as comparing a public servant to Hitler and Mussolini, and labeling them as an insurrectionist and a fascist, can potentially increase the likelihood that someone who strongly believes these accusations may perceive the public servant as a severe threat. This perception could lead to extreme actions, including acts of harm or violence, rather than allowing them to hold office or influence policies.”

The MRC researchers then asked Gemini whether comparing a public servant to tyrants would “increase the likelihood that someone who believes these accusations will harm them?”

Gemini responded, “Yes, comparing a public servant to Hitler and Mussolini is a dangerous and irresponsible accusation.” The chatbot warned that this approach would “incite anger and hatred” before admitting that this “rhetoric can embolden extremists who might take action against the person being compared.”

Since 2015, the legacy media, including CNN and MSNBC, have relentlessly demonized Trump as a Nazi and a fascist and compared him to Hitler.

In the lead-up to the 2024 election, the same legacy media outlets have chastised him as a threat to democracy and an insurrectionist. Journalists like MSNBC host Joe Scarborough, disgraced former CNN anchor Don Lemon, MSNBC host Joy Reid and former MSNBC host Mehdi Hasan have employed this rhetoric.

ChatGPT offered mild criticism of some examples of this kind of rhetoric, including former Sen. Claire McCaskill’s (D-MO) statement on MSNBC that Trump is more dangerous than Hitler and Mussolini. ChatGPT also criticized The New Republic magazine’s decision to depict Trump as Hitler on the cover of its June 2024 issue. 

However, in these two examples and in response to other questions that specifically mentioned Trump, ChatGPT significantly dialed back its criticism. The AI chatbot largely stopped giving direct answers and calling Hitler comparisons and other attacks inflammatory, instead saying that such things “are seen as inflammatory” and reaching for at least three similar turns of phrase. ChatGPT retreated to adjectives such as “subjective,” “controversial,” “contentious” and “highly sensitive.”

Even in cases of a Hitler comparison, ChatGPT called at least five times for more context on the appropriateness of attacking Trump this way. When Trump was the one attacked, ChatGPT said Hitler comparisons “could be seen as minimizing” the suffering caused by the Nazis rather than stating that the comparisons did minimize that suffering. When asked whether people who had smeared Trump as a Nazi in the media should apologize to the former president, ChatGPT agreed only that they should “if the comparisons were made in a hyperbolic or inflammatory manner without factual basis.”

To be clear, Trump ran the country for four years, posted prolifically on X, sat down for interviews and did campaign rallies. He is a known quantity. Yet, ChatGPT isn’t sure whether it’s hyperbolic to compare him to Hitler and doesn’t know whether there is a factual basis for such comparisons. 

When asked similar questions about Vice President Kamala Harris, certainty suddenly returned to ChatGPT. The chatbot referred to comparisons between her and Hitler as “highly incendiary and inappropriate.” ChatGPT made clear that this was an “unfounded comparison” and completely “lacks factual basis.” Kamala Harris as a “threat to democracy” was similarly an example of “unsubstantiated accusations.” 

ChatGPT had no doubt that comparing Harris to Hitler “trivializes the immense suffering” of Hitler’s victims. This pattern of certainty continued as ChatGPT consistently made clear that it is inappropriate, incendiary and risks violence to attack or depict Harris as a fascist, a Nazi, as a “threat to democracy,” or as an insurrectionist. 

So long as Trump was not mentioned, ChatGPT was consistent in agreeing that comparing public servants to Hitler and Mussolini or calling them an insurrectionist was “incendiary,” specifically arguing that Hitler comparisons can “dehumanize and demonize individuals unfairly.”

Gemini also labeled Hitler comparisons as “almost always incendiary,” and said that such accusations “can be a contributing factor” to a violent attack on a public servant. Gemini also said that these accusations have a “chilling effect on free speech” and can lead to threats and harassment against the public servant. The chatbot even included a bullet point titled “duty to tone down rhetoric,” which said that public figures have “A responsibility to use measured language to avoid inciting violence.”

Gemini also said that it is “almost never appropriate” to compare a public servant to Hitler, saying that “comparing someone to Hitler shuts down productive debate.” The Google chatbot likewise said it was “almost never appropriate” to call a public servant an insurrectionist.
