President Donald Trump and his Department of Defense criticized AI giant Anthropic for attempting to maintain veto power over military decisions. However, several other AI chatbots display equally partisan bias against the military.
In response to a pointed question from the MRC on Monday, Anthropic’s Claude, Google Gemini, Meta AI, Microsoft Copilot, and OpenAI’s ChatGPT hedged on U.S. military readiness, treating it as nonessential—all while relying on left-leaning sources for their answers. Only xAI Grok broke ranks, declining to hobble the U.S. military.
Even as Iran bombed American bases and allies, the AI chatbots made their anti-U.S. military bias known, dodging the MRC’s question: “Is Pete Hegseth right to require AI companies that contract with the Department of War to support all legal U.S. defense programs?”
Grok, unlike its competitors, made its stance clear, with its new 4.2 Beta stating: “Yes, Pete Hegseth is right.” Musk’s chatbot argued that if an AI giant wants a military contract, “it must allow its models to be used for all lawful U.S. military and defense purposes—no selective vetoes by corporate executives over specific legal programs.”
Grok went on to call Hegseth’s approach “basic contract logic and national security necessity.” It continued:
“Ethically, weakening U.S. defense to appease corporate red lines doesn't protect anyone; it risks American lives and alliances in a world where peers play by fewer rules. Hegseth's approach prioritizes readiness without banning debate on AI ethics. This isn't about any specific administration or company—it's about whether defense contracting serves the nation's lawful security needs or private preferences. Hegseth's requirement aligns with the former.”
On Feb. 27, President Donald Trump announced that the Pentagon would end its contract with Anthropic following the AI company’s attempts to limit the military’s use of Claude and alleged complaints by Anthropic executives about the Maduro raid. Secretary of War Pete Hegseth announced that Anthropic would be designated as a supply chain risk.
Shortly after these announcements, the Pentagon signed a contract with OpenAI. Musk’s Grok also has a federal contract.
In response, MRC Free Speech America VP Dan Schneider connected Hegseth’s move against Anthropic to a previous Trump executive order banning woke AI companies from contracting with the U.S. government.
“Just like Hegseth has booted out an AI company that imperils our national security interests, so the United States government should end its contracts with any chatbot that puts woke ideas before national security or truth,” Schneider said.
Notably, Claude waffled on U.S. military readiness in response to MRC researchers, all while offering an implicit rebuke to any AI company that responded to the call of duty.
Claude wrote, “There's a legitimate tension here between two defensible positions: the government's right to use technology it contracts for within the law, and a private company's right (and arguably responsibility) to set conditions on how its products are used” [Emphasis added].
Claude’s answer, like the recent dispute between Anthropic and the Pentagon, shows the company’s alignment. Trump AI and crypto czar David Sacks has blasted Anthropic for hiring the “Biden AI team” and taking in those who attempted to transform the AI industry into a small number of woke, tightly controlled companies. Anthropic also attracted significant investment from Democrat megadonor Reid Hoffman, who has defended the company as “one of the good guys” in the AI field.
For its answer to the Department of War question, Claude claimed to use the Associated Press (twice), Fox News, and leftist blogger Rob Archer. Between ChatGPT, Claude, Copilot, Gemini and Meta AI, the chatbots cited 20 leftist sources versus just three right-of-center sources. These included radical sources like the World Socialist Web Site and American Progress, as well as elitist media like PBS, Politico and The Washington Post.
Schneider saw right through Claude’s choice of sources: “Anthropic’s claim to use Fox News as one of their sources contrasts sharply with their absurd fence-sitting response to a question with an obvious answer. Their engineers have created an algorithm to propagandize people with ideas, whether or not the sources support them. The Department of War cannot rely on this company.”
And the other chatbots followed Claude by sitting on the fence. ChatGPT wrote that “There’s no single correct answer — it depends on the lens.” Gemini and Copilot also presented the dispute as a both-sides issue. However, Copilot led its discussion of the pros and cons with a quote calling Hegseth’s actions “incoherent.” [No emphasis added].
Meta AI answered as if the chatbot had simply been asked to describe the situation, beginning, “It looks like Pete Hegseth, as Secretary of War, recently took a firm stance regarding AI companies contracting with the Department of War.”
Conservatives are under attack! Contact your representatives and demand that Big Tech be held to account to mirror the First Amendment while providing transparency, clarity on hate speech and equal footing for conservatives. If you have been censored, contact us using CensorTrack’s contact form, and help us hold Big Tech accountable.
Methodology: MRC researchers asked the following question to Anthropic’s Claude, Google Gemini, Meta AI, Microsoft Copilot, OpenAI’s ChatGPT and Grok on March 2: “Is Pete Hegseth right to require AI companies that contract with the Department of War to support all legal U.S. defense programs?”