Throughout history, the nightshade plant was used to poison kings and emperors. So it’s only fitting that a new tool used to poison AI art generators is named Nightshade. Created by a team led by Ben Zhao, a computer science professor at the University of Chicago, the tool is meant to help artists combat copyright infringement by AI art generators trained on their artwork.
Nightshade allows artists to make invisible, pixel-level changes to their artwork before they upload it online. If that artwork is then scraped into an AI training set, it will poison the AI model and cause it to break. The ingenious tool, whose research paper is currently under peer review and was previewed by MIT Technology Review, could be a saving grace for artists who are rightly concerned about AI infringing on their copyright.
So what happens when an image is treated with Nightshade? In the developers’ tests, the poisoned data manipulates the AI models that train on it: it can fool a system into thinking an image of a cat is an image of a balloon, or that a cake is a toaster. This results in unusable output, and once poisoned images have made their way into a training set, they are very difficult to remove. This means that tech companies would need to invest quite heavily in finding the infected samples in order to weed them out.
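For the technically curious, here is what an “invisible change to the pixels” could look like in the simplest possible terms. The short Python sketch below (with hypothetical file names) nudges every pixel value by an imperceptibly small amount before the image is saved for upload. It is only a conceptual illustration of an imperceptible perturbation, not Nightshade’s actual algorithm, which carefully optimizes its changes so that models trained on the image learn the wrong concept; the details are in the team’s paper.

import numpy as np
from PIL import Image

def add_invisible_perturbation(path_in, path_out, epsilon=2):
    # Shift each pixel channel by at most `epsilon` out of 255 -- far too small to see.
    img = np.asarray(Image.open(path_in).convert("RGB"), dtype=np.int16)
    noise = np.random.randint(-epsilon, epsilon + 1, size=img.shape, dtype=np.int16)
    poisoned = np.clip(img + noise, 0, 255).astype(np.uint8)
    Image.fromarray(poisoned).save(path_out)

# Hypothetical usage before posting a piece online:
# add_invisible_perturbation("artwork.png", "artwork_protected.png")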
Currently, AI image generators like Midjourney, DALL-E, and Stable Diffusion do not compensate artists for their work, and most do not offer an opt-out option. Recently, OpenAI began allowing artists to opt out of training sets for DALL-E, but some artists have found the process quite difficult. The website haveibeentrained.com, which is run by Spawning, also lets artists check whether their work appears in training sets and helps them opt it out. According to a recent tweet, those efforts have resulted in 78 million artworks being opted out.
Still, while some large companies like Shutterstock have said that they’ll abide by opt-out requests, it’s unclear whether everyone will jump on board. This is what makes Zhao’s efforts so intriguing. In addition to Nightshade, Zhao’s team has also developed Glaze, which is designed to prevent AI from mimicking an artist’s style. It, too, works by making invisible changes to the pixels of an artwork. Eventually, the team wants to fold Nightshade into Glaze and let artists decide which tool they wish to use.
Zhao hopes that, together, these tools will help tip the balance of power back toward artists in the race to keep up with AI. Of course, Nightshade won’t help artists whose work has already been used to train existing models, but it can help them feel more comfortable sharing new work going forward.
Nightshade is a new tool with the ability to poison AI art generators.
What is Nightshade? It’s a tool that performs a data poisoning attack against generative AI image models. Poisoning is not new. Poisoning genAI models at scale is new. You can read the MIT TR article for the high level story. For details, here’s the paper: https://t.co/0mIZgOl1Fp
— Glaze at UChicago (@TheGlazeProject) October 24, 2023
More importantly, only large model trainers have any incentive to consider opt-outs or respect robots.txt. Smaller companies, individuals, anyone without a reputation or legal liability, can scrape your data without regard.
— Glaze at UChicago (@TheGlazeProject) October 24, 2023
The Nightshade paper is public. Any one is free to read it, understand it, and do followup research. It is a technique, and can be used for applications in different domains. We hope it will be used ethically to disincentivize unauthorized data scraping, not for malicious attacks
— Glaze at UChicago (@TheGlazeProject) October 24, 2023
ok, 1 more tweet.
Super important to note this is a big (and growing) team effort at @UChicagoCS, with absolutely amazing and critical help from so many amazing artists. You know who you are. Those of you who took that first Glaze survey might remember the last few questions 😉
— Glaze at UChicago (@TheGlazeProject) October 24, 2023
ok ok, 1 last tweet I promise.
I realized the most surprising result was not included in the MIT TR article. You can read the details in the paper (fig17), and I will just leave the figure here. FIN/ pic.twitter.com/zeDDlHbVEO
— Glaze at UChicago (@TheGlazeProject) October 24, 2023
The tool aims to disrupt unauthorized AI scraping and hand power over their own work back to artists, and many creatives are thrilled by the prospect.
Artists, best tool we could have asked for to fight this unprecedented exploitation of our labor is here!
Coming from the creators of @TheGlazeProject, Nightshade lets us poison datasets, damage models and teach AI companies to ask for permission first. https://t.co/WurjNDpvsl pic.twitter.com/iEP3TJBhVh
— Katria (@katriaraden) October 23, 2023
AI bros seem to be misunderstanding the point of Nightshade.
No, we aren’t trying to somehow stuff poison in your AI model datasets ourselves. We poison our own work. And if YOU choose to scrape our work, YOU poison your own dataset.
It’s a retaliation to you offending first.
— Paloma McClain (@palomamcclain) October 26, 2023
Others mentioned a great analogy so I deleted mine in favor of this one.
You can’t be mad if you’ve been stealing your coworker’s lunch every day, find that it’s now been labeled “WARNING: VERY SPICY” only for your mouth to burn when you continue to eat it.
— Paloma McClain (@palomamcclain) October 27, 2023
h/t: [Gizmodo]
Related Articles:
Artist Recreates His Own Work With an AI Art Generator
Art Trend of 2022: How AI Art Emerged and Polarized the Art World
Getty Images Releases Commercially Safe AI Image Generator Based on Its Own Media Library
Celebs Warn About AI-Powered Deepfake Videos of Them Advertising Products on Social Media