The legacy media are pushing artificial intelligence (AI) companies into so-called licensing agreements, allowing the left to control what Americans hear from chatbots. David Sacks, President Donald Trump’s AI czar, isn’t going to stand for it. 

During the June 28 edition of the All-In podcast, Sacks pushed back against lawsuits from authors targeting AI firms like Anthropic and Meta, noting how badly they would handicap American AI. Instead of embracing legacy media’s so-called licensing agreements with AI companies, Sacks offered an apt analogy showing why AI models must be allowed to draw information from the widest possible variety of sources. 

Sacks explained that what “AI models do in pre-training is they take millions of texts, millions of documents and they understand the positional relationship of the words and they translate that into math, into a vector space called positional encoding. That is a transformation of the underlying work. In the same way that a human can read a body of works and then come up with their own point of view, that is basically what AI models are doing.” [Emphasis added.] 
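Sacks’s description compresses several ideas. The “positional encoding” he mentions is the standard transformer technique of tagging each token’s vector with its place in the sequence before training, so the model can learn how word order shapes meaning. A minimal sketch of the common sinusoidal version follows; the function name and dimensions are illustrative, not drawn from any company’s actual code:

```python
import math

def positional_encoding(seq_len, d_model):
    """Build sinusoidal position vectors, one per token position.

    Each position is mapped to a vector of sines and cosines at
    different frequencies, giving the model a numeric signal for
    where each token sits relative to the others.
    """
    encoding = []
    for pos in range(seq_len):
        vec = []
        for i in range(d_model):
            # Lower dimensions oscillate fast, higher ones slowly.
            angle = pos / (10000 ** (2 * (i // 2) / d_model))
            vec.append(math.sin(angle) if i % 2 == 0 else math.cos(angle))
        encoding.append(vec)
    return encoding

# Position vectors for a 4-token sequence, 8 dimensions each.
pe = positional_encoding(4, 8)
```

These position vectors are added to the word vectors before training, which is one reason the courts viewed the resulting model weights as a transformation of the source texts rather than a copy.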

Trump’s AI czar went on to explain that AI companies like Anthropic and Meta need to be allowed to use as many sources as possible. Sacks said, “So, if AI models violate someone's copyright by outputting something that's identical, then, obviously, that's a violation, but if all they're doing is transforming the work, they're doing positional encoding and then coming up with their own unique work product. This judge said that that is not a violation of copyright.” 

Sacks and the All-In crew were responding to a pair of cases where AI companies were sued by authors for training their models on the authors’ works. In a July 1 decision siding with Anthropic against the suing authors, U.S. District Court Judge William Alsup, like Sacks, called Anthropic’s use of copyrighted books for pre-training “quintessentially transformative.” 

Alsup wrote in his decision, “Authors’ complaint is no different than it would be if they complained that training schoolchildren to write well would result in an explosion of competing works. This is not the kind of competitive or creative displacement that concerns the Copyright Act. The Act seeks to advance original works of authorship, not to protect authors against competition.” Meta also successfully fended off a lawsuit from major authors. 

While Sacks argued that putting too many restrictions on what AI could learn from would weaken AI models and make American companies fall behind China, he did not mention the disturbing trend toward “exclusivity contracts.” Some AI companies, such as OpenAI, are potentially violating antitrust laws by signing what amount to exclusivity contracts with left-leaning publications like The Associated Press, The Washington Post, Axios, Time, Vox, The Atlantic, Future and The Financial Times, among others. While the details of the contracts are kept secret, public comments by legacy media indicate they induce the AI developers to exclude their new media competitors from AI platforms. 

MRC Free Speech America Senior Counsel for Investigations Tim Kilcullen condemned these agreements, arguing, “Legacy media and Big Tech disguise their exclusivity contracts as ‘licensing’ agreements. This is because copyright law is a rare exception to the antitrust statutes that prohibit this type of anti-competitive misconduct. Judge Alsup’s ruling blows a hole in legacy media’s disingenuous argument by acknowledging that LLM development is the epitome of constitutionally-protected fair use.”

The choice by AI companies to train their models on these outlets’ content, especially if paired with restrictions on what other data the companies can use, means that chatbots will draw on biased information from sources like The Associated Press when responding to users. Recent MRC studies have shown that leftist bias is already a huge problem with AI models. 

And even the less articulate Democrats are aware of the potential advantage of controlling which sources AI chatbots learn from. In July 2023, then-Vice President Kamala Harris said the quiet part out loud concerning AI: the opinions it produces can be steered by controlling what information is fed into its algorithms. Harris said, “And so, the machine is taught—and part of the issue here is what information is going into the machine that will then determine—and we can predict then, if we think about what information is going in, what then will be produced in terms of decisions and opinions that may be made through that process.”

Conservatives are under attack! Contact your representatives and demand that Big Tech be held to account to mirror the First Amendment while providing transparency, clarity on hate speech and equal footing for conservatives. If you have been censored, contact us using CensorTrack’s contact form, and help us hold Big Tech accountable.