AI Software Affecting Artists' Lives

If you use social media, you probably recall the wave of AI-generated images people posted of themselves in December 2022. I admit many of the pictures were interesting, but a closer look at the process reveals a frightening development for artists and their careers.

Sarah Andersen, the cartoonist behind the semi-autobiographical comic strip Sarah's Scribbles, wrote an opinion essay featured in The New York Times (Dec. 31, 2022). I have included a link to the essay below. I remembered Andersen from a student's presentation in one of the Illustration classes I taught at the Art Institute of Indianapolis.

In her essay, "The Alt-Right Manipulated My Comic. Then A.I. Claimed It.," Andersen describes her work as feminist-leaning, which made it an early target of alt-right groups. I'm not focusing on politics; I never want to talk politics with you. But politics is part of the title. The point is that Sarah had to deal with humans copying and altering her work before AI came into the picture.

Andersen learned to regularly alter the details of her drawings, keeping the alt-right groups several steps behind her. She understood that it's time-consuming for a human to duplicate a drawing style, so her strategy worked well on people.

Example of a comic strip by Sarah Andersen. She describes her comics this way: "I keep things simple — punchy drawings, minimal text — because I like my ideas to be immediately accessible. I write a lot about day-to-day life. My pets, their personality quirks and my love for them are common themes."

AI programs are the genuine threat to Andersen and other artists. This isn't the argument that "only a human being who has been trained can truly create art." It's about software companies legally destroying artists' careers by mimicking their artwork.

According to Andersen: "A.I. text-to-image generators such as Stable Diffusion, Midjourney, and DALL-E exploded onto the scene this year and, in mere months, have become widely used to create all sorts of images, ranging from digital art pieces to character designs. Stable Diffusion alone has more than 10 million daily users. These A.I. products are built on collections of images known as 'data sets,' from which a detailed map of the data set's contents, the 'model,' is formed by finding the connections among images and between images and words. Images and text are linked in the data set, so the model learns how to associate words with images. It can then make a new image based on the words you type in."

You type in the artist's name and a description of the image you want in that style, and in seconds the software produces it. Scary!
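To see just how low the barrier is, here is a minimal sketch using the open-source Hugging Face diffusers library with a publicly hosted Stable Diffusion checkpoint. The model name and prompt below are illustrative assumptions on my part, not anything from Andersen's essay; the point is that the entire "style" request is a few words typed into a text prompt.

```python
# A minimal sketch, assuming the open-source "diffusers" library and a public
# Stable Diffusion checkpoint (the model ID and prompt are just examples).
import torch
from diffusers import StableDiffusionPipeline

# Download a pretrained text-to-image model (the "model" built from the data set).
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    torch_dtype=torch.float16,
)
pipe = pipe.to("cuda")  # runs on a consumer GPU

# The prompt is all it takes: the model maps these words to images it learned
# to associate with them during training on the data set.
prompt = "a simple black-and-white webcomic panel about a cat, minimal text"
image = pipe(prompt).images[0]
image.save("generated_comic.png")
```

That's the whole workflow. Swap a living artist's name into the prompt and you get the scenario Andersen is describing.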

Andersen looks at how the software companies legally get away with taking artists' work without their consent. In the case of Stable Diffusion, the training data comes from LAION, a not-for-profit. Under that status, numerous artists' copyrighted portfolios have been legally pulled into the software without the artists agreeing or necessarily even knowing.

Andersen adds, “It gets darker. The LAION data sets have also been found to include photos of extreme violence, medical records, and nonconsensual pornography.”

Before you leap into a seemingly harmless AI image generator (and yes, they are fun), consider that skipping the software is a simple way to support artists.

 

The Alt-Right Manipulated My Comic. Then A.I. Claimed It
