
As generative A.I. tools proliferate, lawsuits from content creators concerned about how these systems are trained have followed just as swiftly. While two rulings this week favored Anthropic and Meta, upholding their use of copyrighted books to train large language models (LLMs), they also spotlighted unresolved issues, including the use of pirated materials and whether this emerging technology requires a new legal framework. Uncertainty also remains about how A.I. companies will fare in future lawsuits.
“Both cases are broadly positive,” Brandon Butler, executive director of Re:Create, a coalition focused on balanced copyright, told Observer. “But these are District Court decisions, so there will be more steps down the road and there’s a lot of other cases out there.”
Judges split over generative A.I.’s “transformative nature”
On June 23, a federal judge ruled in favor of Anthropic in a lawsuit filed last year by a group of authors who claimed the company’s Claude models were trained on copyrighted books without permission or compensation. Judge William Alsup found that Anthropic’s use was protected under the “fair use” doctrine, citing the transformative nature of how the company used the material.
“Like any reader aspiring to be a writer, Anthropic’s LLMs trained upon works not to race ahead and replicate or supplant them—but to turn a hard corner and create something different,” Alsup wrote in his decision. However, he criticized Anthropic’s decision to download millions of copyrighted books from pirate websites. A separate trial scheduled for December will determine whether the company owes damages.
In 2024, Meta was also sued by a group of authors, including comedian Sarah Silverman and writer Ta-Nehisi Coates. A ruling from Judge Vince Chhabria on June 25 sided with the tech giant—though with some caveats. While Chhabria found that Meta’s use of copyrighted books to train its Llama models qualified as fair use, he noted that the plaintiffs had made flawed arguments, failing to show that Meta’s actions harmed the market for authors.
Chhabria also criticized Judge Alsup’s earlier ruling for focusing “heavily on the transformative nature of generative A.I. while brushing aside concerns about the harm it can inflict on the market for the works it gets trained on.” He suggested that market impact will become increasingly important in future fair use rulings. Generative A.I., he warned, has the potential to “flood the market” with an endless stream of images, songs, articles and books produced with far less effort than human creation requires, undermining the incentive for people to create “the old-fashioned way.”