
The court decision that could turn generative AI upside down

A federal judge will hear a class-action lawsuit filed by illustrators seeking royalties from companies that used their work to train AI models

Illustrator Karla Ortiz, author of this work and one of the plaintiffs in the case, has seen that some generative AI tools make drawings in a style very similar to hers.
Manuel G. Pascual

One of the big, unresolved issues surrounding generative artificial intelligence is copyright. In order to function, these tools require billions of texts, images and videos, from which they extract patterns that allow them to create apparently original content. Many creators and artists complain that the companies behind the AI models use their work without their consent — some have even recognized their personal style in specific AI creations.

Between 2022 and 2023, a group of programmers, authors, scriptwriters and artists filed four class-action lawsuits against major generative AI developers, including Meta, Microsoft, and OpenAI. One of the suits, filed by illustrators, scored a significant victory two weeks ago. Judge William H. Orrick III of the U.S. District Court for the Northern District of California dismissed some claims against the defendant companies — Stability AI, which makes Stable Diffusion, Runway AI, DeviantArt, and Midjourney — but allowed the plaintiffs’ core complaint to proceed.

That means the case will now move to the so-called discovery phase. “Now is the time that we are allowed to request documents from the accused companies and gather declarations and testimonies. We will ask the companies that trained the AI image-generating models to share information about how they copied the plaintiffs’ work and in what manner they used it in the development of their tools,” Matthew Butterick, an attorney for the plaintiffs, told EL PAÍS. Puerto Rican illustrator Karla Ortiz, one of the plaintiffs, didn’t hide her euphoria. “Now we are potentially one of the biggest copyright infringement cases in history. We are excited for the next phase of our struggle!” she announced on social media when she heard the news.

Rodrigo Cetina, a law professor at the Barcelona School of Management, Universitat Pompeu Fabra’s business school, is an expert on the U.S. legal system. He says that “the fact that the lawsuit is being allowed to proceed is a sign that the judge believes there will probably be an affirmative response to its key questions: whether copyrights were infringed by AI training and whether it is an infringement to use billions of images from the internet to train your model.”

More concretely, the judge has decided to evaluate possible copyright violations by Stability AI, Runway AI, DeviantArt and Midjourney, as well as their allegedly fraudulent use of the plaintiff artists’ distinctive names and visual styles. To decide whether the companies must compensate the illustrators, the judge will need to establish whether “they have copied their works or, at least, have exceeded the permissible threshold of copying, in ways that cannot be considered fair use,” says Cetina.

What will the judge look at to decide whether the copying of the works was significant? “Usually, they apply a test developed by case law that considers four factors: the nature of the protected work, the purpose for which it was used, how substantial a portion of the work was used, and the effect of that use on the potential market for the protected works,” says Cetina. It’s a complex process, and it’s difficult to anticipate how the judge will weigh the different factors. According to Cetina, Californian judges tend to be protective of creative industries, as in the case of Napster, the music file-sharing service that was shut down by a court decision in 2001. “A very important part of the substantial copying test is whether you have had access to earlier works and whether there is a high degree of similarity between the original and the alleged infringement. If the AI-generated work is sufficiently similar, there could be something there,” says Cetina.

The future of AI is up to the courts

The potential of generative AI came to the attention of the general public in November 2022, when OpenAI presented its star product: ChatGPT. Suddenly, we were able to converse with a machine that seemed to understand us, that responded fluently to our questions, that was able to carry on a conversation and, after a few months during which it seemed to lie more than tell the truth, proved to be relatively trustworthy.

That was just the beginning. Soon, other tools appeared, like DALL-E, Stable Diffusion, and Midjourney, which were able to create sophisticated, realistic images from a series of written instructions. The latest arrivals are hyper-realistic video generators like OpenAI’s Sora, whose potential is technologically fascinating and ethically terrifying.

Some people smelled danger from the beginning, while the world was largely still in awe of the possibilities that the new technology offered. Butterick’s own alarms went off in the summer of 2022, even before the arrival of ChatGPT. The catalyst was the launch of Microsoft’s GitHub Copilot, an AI-assisted programming tool that was trained on a large quantity of open-source software. The U.S. lawyer, who is also a programmer, filed a lawsuit against Microsoft that year, which has yet to be resolved, accusing the company of violating the terms of open-source licensing agreements.

That was the first legal challenge to generative AI. In January 2023, the illustrators filed their lawsuit, the subject of the recent court decision. In July, it was a group of writers’ turn: they sued OpenAI and Meta for including books they had written in the data used to train their models. Last October, several music publishers, including Universal Music Group, sued Anthropic for training its algorithms on copyright-protected material.

Since then, the battles have only multiplied. Getty Images sued Stability AI for using images from its archives without permission. The New York Times brought OpenAI and Microsoft to court over having used millions of its articles to train ChatGPT, and other writers (among them, George R. R. Martin and Jonathan Franzen) have sued OpenAI on similar grounds.
