Artists Share AI Experiences And Anxieties In New Video Essay: ‘AI Vs Artists – The Biggest Art Heist In History’
Martin Perhiniak, certified Adobe design master and instructor, has posted a new video that ponders the many legal and ethical questions around generative AI models.
The video, AI vs Artists – The Biggest Art Heist in History, features clips of dozens of artists discussing how AI has already affected them and their anxieties about the tech’s future. Its description reads:
Generative AI can be called many things depending on your point of view: machine, thief, tool, medium, collaborator, muse and even artist. In this video I will try to find answers for a lot of complex things and I will attempt to judge this technology with an open mind. In the last couple of weeks I spoke to many amazing artists and scientists about my mixed feelings of generative AI. Join me to hear their thoughts, my advice to creators and predictions on what’s to come.
The full video is available above, but we’ve picked out a few talking points that stood out to us.
How AI Models Are Trained
Perhiniak starts the video by explaining some basic information about how artificial intelligence models are trained, information that is probably not widely known but should be part of any serious conversation about the technology.
At the moment, the most effective and most widely used dataset, LAION-5B, includes 5.85 billion uncurated images crawled from publicly available websites and cloud storage. The dataset was meant for research purposes only, but companies started to use it commercially within a few months of its release.

The dataset includes public domain images, copyrighted work, and explicit images. Perhiniak says that its creators didn’t get consent from anyone while collecting the images; since the set was put together for research purposes only, it didn’t seem to violate anyone’s rights. That said, the companies behind major for-profit AI models have used the set for training, clearly crossing ethical and likely legal lines in doing so.
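To make the scale of this concrete, the dataset is (or was) distributed as metadata only: captions and URLs pointing at images hosted across the public web. As a minimal sketch, assuming the English-language subset is still published on Hugging Face under the name laion/laion2B-en (its availability has changed over time), the records can be streamed and inspected with the datasets library:

```python
# Minimal sketch (not from the video): peeking at LAION metadata with the
# Hugging Face `datasets` library. Assumes the subset is still publicly
# hosted as "laion/laion2B-en", which may no longer be the case.
from datasets import load_dataset

# Stream rather than download: the metadata alone runs to hundreds of GB.
laion = load_dataset("laion/laion2B-en", split="train", streaming=True)

# Each record is essentially a caption plus a URL to an image somewhere on
# the open web; the dataset links to images, it doesn't contain them.
for i, row in enumerate(laion):
    if i >= 3:
        break
    print({key: str(value)[:80] for key, value in row.items()})
```

Nothing in those records indicates whether the original artist consented, which is precisely the gap Perhiniak is pointing at.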
AI Style Mimicry
AI models scraping copyrighted data is hugely problematic, but the video also showcases another major issue with generative models: their ability to copy styles and techniques that individual artists spend years perfecting. Jon Lam, a storyboard artist on Invincible and Star Trek: Prodigy, says:
I’ve seen a lot of my friends just suffering because their work is so prominent and they’ve struggled for 10 years or more just to see someone else wearing their face and parading it.
Tall Poppy Syndrome
Perhiniak explains clearly how these models punish artists for finding success with their work and developing unique styles:
The unfortunate paradox is that the more distinct an artist’s style is, the more successfully AI can mimic it. Additionally, the more complex and detailed a piece of artwork is, the more likely people will think it’s made by AI, even when it’s not.
But Everyone Is a Target
It’s not only popular artists who are threatened by generative AI, though. Lesser-known artists whose work wasn’t used to train AI models still have plenty to fear, as many of the programs now let users provide an input image for the models to copy when creating output images.
Perhiniak shows four images created by his friend George Tonks and four by Midjourney in the “style of George Tonks” that took only minutes to create. The results are shockingly similar. Perhiniak argues:
What that means is that even if his work wasn’t scraped in the first place and used to train the AI model, the uploaded style references will help to identify similar artwork in the database and use that to generate something very similar.
Perhiniak says he doesn’t know if providing a reference image to an AI model adds it to the model’s training dataset. But if it does, he argues, anyone uploading an image to the models is “complicit in data laundering without their knowledge.”
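How could a style reference “identify similar artwork in the database”? Perhiniak is speculating about Midjourney’s internals, and so is any outside observer, but the general technique is well documented: images are reduced to embedding vectors and compared by similarity, which is how the public clip-retrieval tool searches LAION. Below is a minimal sketch using OpenAI’s openly released CLIP model; the file names are hypothetical.

```python
# Illustrative sketch of embedding-based image retrieval, the general
# technique behind tools like clip-retrieval; this is not Midjourney's code.
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

def embed(path: str) -> torch.Tensor:
    """Reduce an image to a single L2-normalized feature vector."""
    inputs = processor(images=Image.open(path), return_tensors="pt")
    with torch.no_grad():
        features = model.get_image_features(**inputs)
    return features / features.norm(dim=-1, keepdim=True)

# Hypothetical file names, purely for illustration.
reference = embed("uploaded_style_reference.jpg")
library = {name: embed(name) for name in ("artwork_a.jpg", "artwork_b.jpg")}

# Cosine similarity: the closer to 1.0, the more alike the model considers
# two images, which is how a style reference can surface similar artwork.
for name, vector in library.items():
    print(name, float(reference @ vector.T))
```

Once artwork is embedded this way, nearest-neighbor search across billions of vectors is cheap, whether or not a given artist’s work was ever part of the training set.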
Ethical AI?
Perhiniak makes a distinction between programs such as Midjourney and DALL-E, which are trained on datasets including millions or billions of copyrighted images, and those from companies like Adobe and Getty, which were trained only on materials the companies own. He admits that the results are often better from the models that did steal millions or billions of copyrighted images, but argues that, ethically, models trained on public domain and owned stock images hold the upper hand.
Near the end of the video, Perhiniak wonders, “Is there a way to make a completely ethical generative AI tool?”
What I hope will happen in the future is that the law will define the requirements for generative AI data sets.
Rather than only offering speculation, Perhiniak lays out what he believes should be minimum legal requirements for generative AI models:
- Training datasets must be 100% transparent
- Datasets may only include work from artists who gave their consent
- Royalties/compensation must be paid out to artists who opted in
- Generated images must be cataloged and easy to track
- Direct/exact style mimicry must not be allowed
- Generated images may only be used for reference purposes