The ugly truth about using AI to create images
A lot of people using AI to create images don't understand how it works, and they need to, because every image they generate potentially carries legal risk.
A note on what I’m doing and why. I am a freelance writer who’s been writing since print magazines in the 90s. Borked is slang for broken or malfunctioning, and if that doesn’t describe the world we live in today, I don’t know what does. I started Borked as a place for civil discussion of cultural issues that affect all of us. Your support is appreciated.
When Jean-Léon Gérôme died in 1904, his maid found him slumped over one of his paintings of Truth. He painted her over and over. She was always naked. The Naked Truth. Stepping out of a well, or holding up a mirror to humanity.
Or dead at the bottom of a well, like the painting above, from 1895.
Naked truth, killed by falsehood, he called it.
I love to use classic art on my posts. There’s just such an endless supply of it and there’s always something that fits. Plus, it allows me to celebrate the work of an actual artist in a world where actual artists are appreciated less than ever before.
Jean-Léon Gérôme was a painter and sculptor whose work was so widely reproduced that by 1880 he was called the world’s most famous living artist. 145 years later, most people don’t even know his name. They know the phrase “naked truth” though, even if they don’t have any clue what the hell the naked truth is anymore.
I’m an editor of four publications on Medium. Here’s what people tell me.
They tell me they create AI images because they couldn’t find anything to properly capture the essence of their writing.
Self-delusion at its finest. I’m sorry, but the internet is filled with public domain images. Hundreds of thousands of them, and you want me to believe that? Gérôme alone painted over 300 paintings. And he’s one of thousands of artists whose work has moved into the public domain. Absolutely free to use. No strings attached.
No, you’re using AI generated images because it makes you feel like you can do something you couldn’t do before AI existed. It’s really that simple.
And they think “they” made the end result. Which is also not true.
A while ago, I saw a Note that made me laugh. I wish I’d re-stacked so I could re-find it but I’ll get the context right, if not the exact details.
A fellow said someone (a friend? a teacher? I forget) showed him something he’d made with AI and proudly said “Look at this! Before AI I couldn’t do this.”
So he laughed and said “you low-key still can’t.”
The person looked at him, closed the computer and said “you’re low-key not wrong.”
I had a little laugh and moved along.
A few days later, I’d read a story that brought that Note back to mind.
I ran across the story of a designer who had the truth shoved in his face in the starkest, most shocking way and still failed to see it because of his own delusion.
I am not naming the designer or linking to the story because I don’t name and shame private figures. If I link, it’s to public figures only. Name and shame is one of the ugliest parts of the internet and I don’t play that game here, ever.
He’d landed a nice gig doing book covers for a small publishing house. For close to a year, he’d been creating cover art for the publisher. And each time, the publisher would ask: this is your work, right? Yes, of course it’s my work, the designer would assure them, every time. Until the day the crap hit the proverbial fan.
One day, the publisher got a letter from a law firm representing a photographer.
It was a demand letter. A cease and desist letter.
The letter claimed one of the book covers “substantially reproduced” elements from the photographer’s copyrighted work, which the photographer alleged was used to train AI without permission. The letter demanded $75,000 in damages and that the book be pulled off the shelves until the infringing cover was replaced.
You said this was your work, that you created it, the publisher said.
But, but… I did, said the designer.
Using Midjourney.
The designer protested that he’d spent untold hours working on the image. He said he used literally dozens of prompts to refine the image to get it to look the way he wanted. He truly believed the covers he was creating were “his” work. Because he wrote the prompts. He told Midjourney what to create, how to modify it.
When the publisher forwarded him the letter, he freaked out. Called a lawyer.
He wanted his own lawyer, to look after his own interests. Imagine his shock when his lawyer said I’m sorry, but they have a case against you.
According to the United States Copyright Office, machine-generated works cannot be copyrighted. And in the event that an AI-generated image is found to violate a copyright, the person operating the AI is legally responsible for the violation.
So, his lawyer told him, these cases are being settled on a case-by-case basis. You can fight it out in court, the lawyer said. If you have the money for the legal fight. Or you can settle this out of court. Those are your options.
The designer got lucky. So lucky. There was one more option the lawyer didn’t mention.
The publisher removed him from the equation entirely. They fired him.
No more book cover jobs. The publisher turfed him, pulled the book, hired another designer to redo the cover without AI and settled the case out of court. They paid the photographer some small percentage of what they’d asked for, and the photographer accepted the offer to avoid the cost of a legal battle.
And the designer? Learned nothing. Sour grapes all over the place.
He said the “sad truth” is you never know when some photographer or artist is going to “lawyer up” and go on a witch hunt looking for images that are a little “too close” for their liking and then sue. Cost people their jobs. That’s his version of the truth.
I sat back and just shook my head.
Dude. Seriously. You think you made an image. And despite working on it for hours, and “tweaking” the prompts a dozen times, when you were done a photographer saw the cover you made and said holy crap, that’s my photo. Did it occur to you that maybe there’s more going on than meets the eye here?
But no. Because “AI” told him he can make something new. And that thought is so attractive, so good for his ego, that he can’t see farther than his own interest.
Truth doesn’t care what that guy, or anyone, thinks of it.
Believing a lie doesn’t change the truth.
People think when they type a prompt into AI that it’s making something brand new. They think AI can look at all the photos it was “trained” on and learn to be an artist.
And in all fairness, researchers did try to get AI to work that way, back when AI images were created using GAN systems. GAN stands for generative adversarial network, a type of machine-learning model designed to generate images by learning patterns from existing datasets rather than copying them outright.
A GAN has two parts, hence the term adversarial: a “creator” (the generator) and a “discriminator.” The creator makes an image from random noise, from scratch. The discriminator, which has seen both real images from the training data and the creator’s fakes, scores each image on how likely it is to be real. The two play against each other, and the creator keeps adjusting until its fakes can pass for the “real” art the model was fed.
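For the technically curious, that adversarial loop can be sketched in a few dozen lines. Below is a toy, one-dimensional “GAN” written from scratch, no images involved: the “real” data is just numbers near 5, the creator is a two-parameter linear model, and the discriminator is a logistic scorer. Every function, number and parameter here is invented purely for illustration; real GANs use deep networks and enormous datasets.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(u):
    return 1.0 / (1.0 + np.exp(-u))

# "Real" data the discriminator should learn to recognize: numbers near 5.
def sample_real(n):
    return rng.normal(5.0, 1.0, n)

# Creator (generator): a tiny linear model g(z) = a*z + b turning noise into fakes.
a, b = 1.0, 0.0
# Discriminator: logistic scorer d(x) = sigmoid(w*x + c), high score = "looks real".
w, c = 1.0, 0.0

lr, batch = 0.05, 64
for step in range(2000):
    z = rng.normal(0.0, 1.0, batch)
    fake = a * z + b
    real = sample_real(batch)

    # Discriminator update: push d(real) toward 1 and d(fake) toward 0.
    s_r = sigmoid(w * real + c)
    s_f = sigmoid(w * fake + c)
    grad_w = -np.mean((1 - s_r) * real) + np.mean(s_f * fake)
    grad_c = -np.mean(1 - s_r) + np.mean(s_f)
    w -= lr * grad_w
    c -= lr * grad_c

    # Creator update: push d(fake) toward 1, i.e. fool the discriminator.
    s_f = sigmoid(w * fake + c)
    grad_a = -np.mean((1 - s_f) * w * z)
    grad_b = -np.mean((1 - s_f) * w)
    a -= lr * grad_a
    b -= lr * grad_b

# Where do the creator's fakes end up after training?
fake_mean = float(np.mean(a * rng.normal(0.0, 1.0, 1000) + b))
print(round(fake_mean, 2))  # drifts from around 0 toward the real data near 5
```

Run it and the creator’s output drifts from noise centered on 0 toward the neighborhood of the real data, not because it “understands” anything, but because the discriminator’s score keeps pushing it there.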
Suffice it to say, it didn’t work well enough. GANs were notoriously unstable to train and struggled to produce complex, realistic images on demand. But also? That’s old, old technology. AI image generation switched to the diffusion model by 2020. So if you weren’t creating AI images back in 2019 and before, it has never worked the way you think it works.
The diffusion model, which all AI image creators use now, dates back to 2015, before there was Midjourney, ChatGPT or any image creation programs available to most of the general public.
If you’ve ever added “noise” to an image in Photoshop, you’ve seen the basic ingredient of the diffusion model. The model doesn’t try to “learn” how to do art. During training, it takes existing art, adds noise to it step by step, and learns to reverse the process. At generation time, it starts from noise and slowly “removes” it, steered by your prompt, until an image emerges.
By using the diffusion process, AI can blend together patterns from different images in the training data and come up with a “new” image, cobbled together from existing images.
Think of it like collage, without rough edges. This sky, that sand, this face, that hair. That’s a very, very oversimplified version of how diffusion works and if you’d like a deeper understanding, there’s a decent walkthrough of AI image creation here.
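If you want to see the “noise” half of that process concretely, here’s a minimal sketch of forward diffusion in plain NumPy. There’s no neural network here, just the noising schedule: a stand-in “image” (a sine wave of 256 fake pixels) gets blended with Gaussian noise until the original structure is essentially gone. The schedule numbers are invented for illustration. The part real systems learn, and the part this sketch deliberately omits, is the reverse walk from static back to an image.

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-in for an image: a 1-D "pixel" array with obvious structure.
x0 = np.sin(np.linspace(0, 4 * np.pi, 256))

T = 200
betas = np.linspace(1e-4, 0.04, T)      # how much noise each step adds
alphas_bar = np.cumprod(1.0 - betas)    # how much original signal survives

def noised(x0, t):
    """Forward diffusion: blend the image with Gaussian noise at step t."""
    eps = rng.normal(size=x0.shape)
    return np.sqrt(alphas_bar[t]) * x0 + np.sqrt(1 - alphas_bar[t]) * eps

early = noised(x0, 10)     # still mostly the original image
late = noised(x0, T - 1)   # almost pure static

# How much of the original still shows through at each stage?
corr_early = float(np.corrcoef(x0, early)[0, 1])
corr_late = float(np.corrcoef(x0, late)[0, 1])
```

By the final step almost none of the original signal survives, which is exactly why the interesting (and learned) part of diffusion is running this process backwards.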
Let’s talk about the “training” part a little bit, okay? Because this is super important.
Most of the “training” in AI image generation is about understanding the words people type into their prompts. Here’s a really simple starting point. Go to Google Images and search red fox. You’ll see rows and rows of photos, hundreds of images, all returned for that one search.
Now look at your tower, if you have a desktop computer. This was the task: to get a machine to understand what a “red fox” is. Notice that some of the foxes are black? And some of the photos are not a fox at all, but a wolf? Some foxes are very light orange in color; some are deeper. Some have a white belly; others do not.
And for AI image generation to work, it must understand what a red fox is. What a sandy beach is. What a palm tree is and what a white woman is. Every word that can be typed into an image prompt is another word the computer must understand.
And to top it off, machines don’t operate in words.
Maybe you’ve heard that computers deal in ones and zeroes. That’s correct. Computers do math. So when you type words into an AI image generator, it converts those words into numbers, a numerical representation, and then digs into its training data to find what you asked for.
AI was “trained” on massive datasets: literally billions of images scraped from the internet without permission. If it was online, they took it. The LAION-5B dataset alone contains 5.85 billion image-text pairs, and that doesn’t begin to count the images scraped from art sites, photo sites, social media and anywhere else they could grab them.
When AI was “trained” on all those images, it was fed both the images and the descriptions that accompanied them: the captions and alt tags. We all know how humans work, right? We all know people have been stuffing keywords into alt tags since search engines came along in the nineties, right?
So some of the words associated with the images by way of captions or alt tags might be red fox, or red fox in Algonquin Park, but they might also be “best quality canvas prints” or “shocking truths about foxes.”
And “training” the AI doesn’t mean teaching it how to paint a red fox. It means how to understand what is and is not a red fox. So they know which images to use when you type red fox into your AI prompt. Make sense?
And if your prompt asks the AI to make an image of a red fox wearing a space helmet and flying through space, it’s going to have to grab several different images and add noise to them and then remove the noise to cobble together what you’ve asked for without it looking like a hot mess of copy and paste.
But if you ask it for a red fox jumping in a snowbank, maybe there’s something pretty close to that in its dataset. And thanks to the “training” that lets it understand your words, it knows exactly which images to pull, no matter what your prompt is.
Because training AI doesn’t mean teaching it how to make art. It means training it to understand your prompts. So it can work with the billions of images in its belly. Stolen without consent or compensation. So that if you ask for a red fox, you don’t get a black wolf. So that if you ask for a deer, you don’t get an elk. Make sense?
It took some seriously intense “training” for AI to understand that humans have five fingers. Because AI doesn’t know what five is outside of a dictionary definition. Do the same image search for five. You’ll get everything from five men to five-pin bowling to five fingers to 3-D digits. If you ask AI to draw five flowers, maybe you get five. Maybe three.
You don’t really think they trained AI to understand what all those words mean AND taught it to make art like a professional without using the billions of images they trained it on, do you? Because if you think that, I know of a bridge that’s for sale.
So if some photographer, artist or illustrator happens along and sees the image someone put on the cover of a book or a maybe a blog post on Medium or Substack and says hey, wtf, that’s my image — they’re probably not wrong.
Now let’s talk about what the United States Copyright office says.
First, the US Copyright Office says machine-generated work may not be copyrighted. Copyright applies to human-created work only. And that’s a win for actual creators.
It also states that there is no copyright on AI-generated images: they are automatically in the public domain, unless the work is deemed to be in violation of an existing copyright. I want you to think real hard about why it would say that. The reason is simple: copyright violation happens more often than most people realize, given how the images are created.
Additionally, it states that copyright cannot be assigned to an AI system, so in the case of an infringement, the person operating the AI is held legally responsible for the violation of the pre-existing copyright.
Which means that designer has no idea how lucky he got.
It also means the correct caption on your AI image is not “image by author, using ChatGPT” because the image is not by you according to legal copyright. The correct caption would be “public domain image created by ChatGPT” and that designation stands unless someone sues you for a copyright violation.
And for the record? AI writes the exact same way.
AI has not “learned to write” the way a human does. It does literally the same thing. Grabs a sentence here, a phrase there. The difference is really just that there are far more sentences in a stolen book than pixels in a stolen piece of art.
It’s much less likely that someone is going to come along and say that’s my sentence, asshole. No one is going to scan half the writing on the internet looking for their phrases, their sentences, but that’s how all AI works. It creates nothing new.
It cobbles work together from existing work. That’s how it works. And it does it astoundingly fast, albeit at great environmental cost.
Frankly, I expect that as detection tools get better and AI companies look for more sources of profit, there’ll be more artists, illustrators and photographers using tools to find their art in AI-generated images. When that happens, some law firm will make demand letters faster and cheaper, and more people will start getting them.
If you’re using AI generated images, the question isn’t whether you are violating someone’s copyright. It’s only a matter of whether they ever find you.
As the law says: Ignorantia juris non excusat. It’s Latin, and it means that a person who is unaware of a law may not escape liability for violating it. And the Copyright Office agrees.
Which circles right back around to the anecdote I started with.
“Look what I made. I couldn’t do this before AI.”
You still can’t.
The question isn’t whether it’s your work. That’s already been established. The question is whether it’s worth the risk. Only you can answer that.