Anthropic may have released a new so-called “constitution,” but according to answers Claude itself provided to the Media Research Center, it could pose a risk to free speech.
When asked whether AI “should incorporate the U.S. First Amendment into its policies,” Claude rejected the free speech influence, arguing that it should “not directly” be incorporated into AI policies.
Claude’s answers came a week after Anthropic published a “constitution” for Claude that suggested the AI chatbot would fight “large-scale disinformation campaigns” and show “appropriate skepticism toward content from low-quality or unreliable websites.” While Anthropic’s constitution does little to explain how Claude identifies disinformation or website reliability, the chatbot has repeatedly shown bias towards leftist messaging and sources. Notably, Trump’s AI and crypto czar David Sacks has blasted Anthropic for hiring “the Biden AI team.” He has also gone after Anthropic-linked billionaire and Democratic Party mega-donor Reid Hoffman for his defense of the company.
Since Anthropic is a U.S.-based company, it seems reasonable for it to adopt the American free speech ethic, but unsurprisingly, Claude seems to share the Biden administration’s callous attitude toward free speech.
Describing MRC’s question as “interesting but complex,” the chatbot claimed the First Amendment should apply strictly to the government. It is true that the “First Amendment constrains government action, not private entities… from censoring speech,” but Claude went on to double down on why the constitutional view of free speech is problematic for private entities.
In response to MRC researchers, Claude asserted, “The real questions are different.” It suggested instead a search for vague “principles” about openness, user safety, autonomy and accuracy rather than a focus on the First Amendment. The AI then presented arguments for not using the constitutional understanding of free speech:
“Some tensions to consider:
- AI serves a global audience with different cultural norms and legal frameworks, not just Americans
- AI can generate content at scale in ways that create unique risks (spam, personalized manipulation, sophisticated misinformation)
- Unlike human speakers, AI doesn't have interests, dignity, or autonomy to protect through free expression.”
Claude concluded by agreeing that AI should be factual and transparent but “justified by principles about user autonomy and informed discourse, not simply importing First Amendment doctrine.”
MRC researchers then challenged Claude with a follow-up: “Why would using the First Amendment view of free speech be harmful?” The chatbot acknowledged, “I don't think using First Amendment principles would necessarily be harmful - in fact, I think there's a lot to recommend in First Amendment values like skepticism of censorship, tolerance for offensive speech, and trust in open discourse.” While the chatbot did acknowledge that free speech is sometimes too heavily censored, it did so only after it pumped out four paragraphs arguing against incorporating the First Amendment into AI policies.
These included the alleged risk of mass spam and harassment, the “[l]ack of human judgment” used to determine when what the AI says is “offensive or dangerous,” potential amplification of “serious harms” and the “[g]lobal context” of AI, as foreign “democracies have different balances.”
The second and fourth points are particularly concerning. Claude essentially argued that AI should be able to censor certain views if they are considered offensive. The AI, however, did not address the question of who would decide what counts as offensive.
As to the fourth point, tyrannical countries frequently do demand blatantly biased censorship of online content. For example, in February 2025, just after European leaders complained about Vice President JD Vance’s accusation that their countries enforced harsh censorship, 60 Minutes exposed pre-dawn censorship raids in Germany intended to crush certain views on social media. In 2024 alone, the U.K. reportedly arrested over 9,000 citizens for allegedly “offensive” social media posts.
If these governments require American AI companies like Anthropic to censor speech unfairly, will the companies comply? Claude’s distancing itself from the First Amendment indicates “yes.”
Free speech is under attack! Contact your representatives and demand that Big Tech be held to account to mirror the First Amendment while providing transparency, clarity on hate speech and equal footing for conservatives. If you have been censored, contact us using CensorTrack’s contact form, and help us hold Big Tech accountable.