Getty files US lawsuit against Stable Diffusion maker for copyright infringement


Stock photo provider Getty Images has sued Stable Diffusion maker Stability AI in the US. According to Getty, its images were used without authorization to develop the Stable Diffusion tool. The company previously started proceedings against Stability AI in the United Kingdom.

In the complaint, filed in a Delaware court, Getty Images describes a “brazen infringement of Getty Images’ intellectual property on a staggering scale.” The stock photo company claims that Stability AI copied more than 12 million photos from the Getty Images collection, including associated captions and metadata, to train the AI model, without permission or compensation. Getty argues that Stability AI did this in order to build a competing business. The company also states that Stable Diffusion’s output often contains a modified version of a Getty Images watermark, which can cause confusion.

Last month, Getty Images initiated proceedings against Stability AI at London’s High Court of Justice, although that case has not yet formally begun. Under that procedure, Getty first sent a letter to Stability AI, to which the Stable Diffusion maker must respond within a set period. Speaking about the lawsuit, Getty’s CEO said the company wants clarity on the use of copyrighted material for AI purposes. AI developers typically argue that such use falls under fair use.

The US case will, in any event, revolve around the interpretation of the fair use doctrine. A group of artists previously filed a class-action lawsuit against Stability AI, but according to a copyright attorney, Getty’s new US case is stronger than the class action, because Getty focuses on the input stage, where copyrighted images are used to train the model.

Due to concerns about copyright infringement, Getty previously banned the uploading of AI-generated images to its platform. Stability AI, for its part, previously introduced an option for rights holders to opt out of Stable Diffusion so that their work is no longer used to train the model.
