AI creative bots hit legal challenges: copyright owners file infringement suits one after another



With the recent surge in popularity of the AI chatbot ChatGPT, the valuation of its developer OpenAI in the primary market has soared to nearly $30 billion. However, applications built on deep-learning networks now face a challenge that has finally arrived: the models behind these bots are often built by algorithmically processing existing works (images, text, code), and the copyright owners have now come knocking.

According to multiple media reports, Getty Images and a number of photographers and artists have taken a series of image-generating AI developers to court, accusing them of knowingly copying and using millions of copyrighted images without authorization.

Take OpenAI’s image bot DALL·E as an example: a user can tell the bot, “Give the Mona Lisa a mohican haircut,” and the software will generate a matching image within seconds.

The Mona Lisa is an easily recognizable work, but this kind of software also draws on vast numbers of images scraped from the Internet. In an example like “photoshop a computer and a hat onto a monkey,” the photos of the monkey, the computer, and the hat are clearly material the AI bot recognized and retrieved automatically. Besides DALL·E, Stable Diffusion and Midjourney are currently the other prominent products.

Although Van Gogh and Leonardo da Vinci are in no position to demand an explanation from these software companies themselves, many living artists, photographers, and copyright owners are certainly not willing to let this go, and the copyright status of “deep-learning AI” urgently needs legal clarification.

AI bots face several legal challenges

Getty Images, in a lawsuit filed in the High Court of Justice in London against Stability AI, the developer of Stable Diffusion, alleges that the startup made commercial gains by infringing copyright.

A group of artists also filed a lawsuit in federal court in San Francisco last week, accusing a series of AI image developers of “infringing the rights of millions of artists.”

These commercial AI bots do in fact charge users. Midjourney, for example, charges a $10/month subscription once the free trial ends, with enterprise plans running up to $600 per month; OpenAI likewise grants users a free monthly credit and charges once it is used up.

Responding to this legal challenge to the business model the company depends on, Stability AI issued a statement saying that anyone who believes this use of AI is improper understands neither the technology nor the law. Midjourney CEO David Holz also said in an interview late last year that the company’s image-generation service is “sort of like a search engine” and that what the AI does is no different from what real people do.

Holz said: “Can humans learn from other people’s paintings and then create similar ones? Obviously that is allowed; otherwise the entire professional art industry would be destroyed. To some extent, AI learns the same way humans do, and as long as the final output image is not identical, there is really no problem.”

But copyright isn’t the only issue facing the AI software industry; bots that can generate images, text, and programming code may raise even deeper legal concerns, such as the deepfake (“face-swapping”) AI that sparked heated controversy earlier. Brexit and the 2016 US election have also shown that these AI tools can generate targeted content at scale, and sooner or later they will be used to spread disinformation and sow social division.

Wael Abd-Almageed, a professor of computer engineering at the University of Southern California, explains that once we lose the ability to distinguish real from fake, everything suddenly becomes fake, because people lose confidence in everything.

Chirag Shah, a professor at the University of Washington’s Information School, also said that once these AI-generated fake images go viral on the Internet, they become difficult to debunk, because tracing the source and the specific tool used is hard to do in reverse. People familiar enough with these tools may be able to guess that certain photos are AI-generated, but there is no scientific or easy way to tell them apart.
