This may come as a shock to some, but I’m not a very good artist. I know, I know - I’m a fraud. Subscribers to this blog have probably thought for months that I’m great at drawing “Colourful minimalistic [insert object]. Add stars and galaxies in the sky”. However, that’s just what I’ve been asking a robot to do. The banner images you see on Mind Meandering were made by GPT, and in reality my skill with a pen starts and ends with drawing a smiley face on my hand when I’m bored.
Ahh, so this is that “art” I’ve heard so much about.
To some this is a turn-off. Just recently a friend told me they hoped the images I use weren’t AI generated, and it prompted a conversation with the usual talking points. I’ll admit, I hadn’t put an enormous amount of thought into it and was (still am!) open to being convinced not to use them. If they really are morally problematic, I’d give up the style consistency and settle for something publicly available instead. Given a sizeable focus of the blog is ethical philosophy, it’d be dumb to start every post with an image that undermines that.
As you can probably gather from the banner image in this article, though, I’ve not found the moral reasons to boycott AI images particularly compelling. At least none of the reasons I’ve heard so far.
AI Art is Stolen
I’ll start with what I think is the best argument against using AI art. It’s all basically stolen. AI models are trained on the internet (including this blog, probably) without anyone’s consent. This includes the billions of images that were painstakingly created by artists before the advent of AI. Off the bat, it seems pretty bad to take people’s work without them saying it’s okay to do so. Unless you think intellectual property itself is unjust, but that seems a bit radical, and I’m happy to concede it’s fine for the sake of argument.
So, it seems wrong to train on artwork without permission. This puts the likes of Sam Altman and whoever made Midjourney into the wrongdoer bin. However, you and I are not Sam Altman or Dr Mid von Journey - we are humble mortals who simply use the model instead of training it. I think this distinction is relevant.
It may surprise people that I have a laid-back attitude to the use of AI images, when a lot of my work centres around taking consumption ethics quite seriously. A decent chunk of this blog is just me manically pleading with people to stop eating animals. You’d think someone who spends so much time on his high horse (the view is great up here) would at least be consistent across the board. However, the reason I’m less convinced that AI image use is wrong is that its relationship with supply and demand is less clear.
Depending on the animal product you buy, you can cause weeks and weeks of torturous conditions for a very small benefit to yourself. This is because when you buy animal products, it signals to producers to create more to meet demand, and so additional animals down the line are factory farmed (this is also why the “It’s already dead!” excuse doesn’t work. Sure, you didn’t kill that animal, but you’re killing another one later on). AI image generators don’t work this way. It’s not as if every time you generate an image, they steal an extra bit of artwork. Once a model is created, it’s done all the stealing it’ll ever do, and each new prompt won’t result in further stealing.
In this sense, the people using AI models aren’t directly responsible for the harm being caused, unlike the people that buy animal products. It’s as if instead of killing a cow, a farmer killed a magic cow that would output infinite meat. Eating the meat wouldn’t result in additional cows being killed, and so I’m not sure eating magic beef would be wrong (assuming it’s not an elaborate lie to cover up a soylent green operation).
Now, you might say there is a relationship between supply and demand, as using a model incentivises AI labs to make newer, better models (which will mean more stealing). However, I’d wager the expected value of your contribution to that is tiny. Billions of AI images are generated every year, and there is only one threshold to tick over - the decision to train a new model or not. The chance that your images are the ones that tip an AI lab over that threshold is going to be tiny. Animal products are different: the thresholds there are much more sensitive, and the total harm caused is much greater than everyone’s art being stolen (billions of living things kept in cages, having body parts cut off, and being gassed to death).
So, the consequentialist case against using AI images isn’t particularly strong. This leaves us with an adjusted critique then - while using AI art doesn’t result in measurably worse outcomes, it’s still wrong to use a service that is the product of wrongdoing. The fact is, you can’t get AI images without stealing, and so we shouldn’t use them on principle. I think this isn’t obviously false, and reckon it’s the best argument against using AI. There’s certainly some unease around benefitting from something bad, even if your benefit doesn’t result in further wrongdoing.
However, I also fear that a principle like this sets the bar way too high for consumers. We use a lot of things that have bad origins. Anyone who’s taken the family to the Natural History Museum has wandered through halls of stolen artifacts. We still use buildings that were made by slaves. Many products on our shelves were tested on animals in cruel and barbaric ways. The chances your clothes were made by someone in less than ideal working conditions are basically 100%. We use fossil fuels for frivolous things like playing PlayStation, or keeping our lawns mowed. If we’re going to refuse, purely on principle, to consume anything with unethical origins, even when doing so doesn’t result in further harm, AI art would be out - but it’d be first on a very long list. We have some responsibility as consumers, but I imagine the sort of monastic lifestyle this principle inspires would be too much to expect. It seems more plausible to me that we’re only required to consider actual consequences when deciding whether to use something.
AI is Bad for the Environment
The next biggest talking point is the environmental impact of AI. If you’ve seen people criticise AI before, you’ve probably seen someone claim AI is an electricity sink, and guzzles more water than Shaq does when he wakes up in the morning.
It’s true that AI uses up water and electricity. Some estimates I’ve found say a single generated image uses the same amount of electricity as a phone charge, and ~25 text prompts use 500ml of water (although the latter number has been disputed). AI images are more taxing than text prompts, so let’s steelman and say an AI image takes 2L of water to generate.
These sound like a lot, but compared to other parts of our lives, they’re quite trivial amounts. If we start with electricity, charging a phone every day for a year uses about 1.85kWh. It depends where you live, but that generates about 0.5kg of CO2e in the UK (averages for the year if you scroll down a bit). To put that into perspective, you generate 10kg of CO2e when you buy 1kg of chicken. So generating an AI image every day for a year produces the same amount of CO2e as buying 50g of chicken. If you do Meatless Monday once, you’ve probably offset the emissions of your AI image use for years. You can also offset it by driving 4 fewer miles in a Ford Fiesta. It seems like the electricity use of AI is unfairly scrutinised when we do so many other things that are 100x worse. Scolding people for AI use when you eat meat or drive a car is like Jeff Bezos scolding us for not paying enough in taxes.
The story is similar for water. We gave the steelman estimate for an AI image at 2L. 1kg of cheese uses up 5,600 litres. Generating an AI image uses the same amount of water as staying in the shower for an extra 10 seconds. Those of you who, like me, do all your renditions of One Last Breath by Creed in the shower are probably doing way more harm performing for empty shampoo bottles than you are by prompting AI. If you’re worried about your water use, 99.9% of the gains can be realised from outside your GPT tab.
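If you want to sanity-check the back-of-envelope numbers above, here’s a quick sketch in Python. The grid intensity and shower flow rate are my own rough assumptions; the other figures are the estimates quoted in this post:

```python
# Back-of-envelope check of the comparisons above. Figures marked
# "assumption" are illustrative, not authoritative.
KWH_PER_YEAR_OF_IMAGES = 1.85    # one image/day ~ one phone charge/day (per the post)
UK_GRID_KG_CO2E_PER_KWH = 0.27   # rough UK grid average (assumption)
CHICKEN_KG_CO2E_PER_KG = 10.0    # per the post

co2e = KWH_PER_YEAR_OF_IMAGES * UK_GRID_KG_CO2E_PER_KWH   # ~0.5 kg CO2e/year
chicken_equiv_g = co2e / CHICKEN_KG_CO2E_PER_KG * 1000    # ~50 g of chicken

LITRES_PER_IMAGE = 2.0           # the steelmanned water estimate
SHOWER_LITRES_PER_SECOND = 0.2   # typical shower flow (assumption)
shower_seconds = LITRES_PER_IMAGE / SHOWER_LITRES_PER_SECOND  # ~10 s

print(f"{co2e:.2f} kg CO2e/year, ~{chicken_equiv_g:.0f} g of chicken, "
      f"one image ~ {shower_seconds:.0f} s of shower")
```

Obviously the exact numbers move around depending on whose estimates you trust, but the orders of magnitude are the point.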
It Steals Work Opportunities From Artists
The other concern people have is that by using AI images, valuable work opportunities are being taken from artists. I’m not sure of the impact yet, but I can’t imagine we’re very far away from AI taking a big bite out of the pie. People will start to prompt AI for any image they need, and artists will need to either become ultra competitive in a way that AI can’t beat, or find work elsewhere. This sucks. People work very hard to learn these skills and build careers off of them. To have their life’s work automated away is, understandably, heartbreaking.
I think there are two responses to this. First, even if successful, this objection would only apply to AI images used in place of human work. That’s not always the case. For example, if AI ceased to exist tomorrow, I would not start paying a graphic designer for banner images. I’d probably just steal stock images, or find old paintings or something. So, in this scenario (and I wager a lot of them) AI images aren’t actually stealing work opportunities from artists. They’re stealing work opportunities from Google image search.
The second reply is that while it does suck that people’s work is getting automated away when they don’t want it to, it’s also unreasonable to expect people not to use a new tool so your industry can stay alive. Horse trainers lost their jobs when cars were invented. The internet killed newspapers. Artists themselves have displaced other artists who couldn’t keep up with new disciplines, like 3D animation. If we always stopped using new technology in the effort to preserve existing jobs, we’d have stunted progress a long time ago. While it’s human to lament the loss of something important to you, I also think it’s unfair to blame other people for using something that’s quicker and easier. We all do that all the time, and it’s part of the reason why life is better than it was 100 years ago.
Now, maybe you have a wider concern that AI as a whole is going to automate us all away, and we should push back. AI image generation is just a battlefield in a wider war against humans being useless. This is just the start, and we’re all at risk of losing our jobs to AI. This assessment is probably right, and I reckon my own job will be automated by the end of the decade.
While I understand the resistance to that scenario, I’m not convinced it’s a war worth fighting. I don’t know about you, but I actually don’t like work. For most of us, work has instrumental value, because it provides us with the means to live within our current economic system. However, rather than persisting in a world where we have to daily justify our right to live by doing shit we don’t like, it seems better to automate the boring drudgery and change the system instead. It says a lot about how ill-equipped our world is for the future when the promise of liberating us from labour is seen as a bad thing. We can’t conceive of a world where people just deserve to live in dignity as a baseline, and don’t need to give up half of their waking life to avoid vagrancy. I think this is a shortcoming of ours, and our energy is better spent trying to change that attitude (e.g. through something like UBI) than by trying to hold back the tide of technological progress with a broom.
AI Images aren’t Real Art
AI images aren’t created by people, and AI models aren’t sentient (probably). As a result, it’s hard to say they’re really “art”. Art is notoriously hard to define, but it feels like it should be reserved for pieces that intend to convey some sort of emotion or insight. Not the sort of slop that’s output by the likes of DALL-E.
I’m inclined to agree that, unless AI models become sentient and develop a similar relationship to their work that we do, AI images can’t really be classified as art.
Although some are still pretty. Friend of the blog, Talis, always manages a nice aesthetic.
I’m happy to concede that, because I’m not sure why it counts against using AI images on moral grounds. A lot of the images we use aren’t art! Diagrams, selfies, screenshots - it’s not obvious why something not being art means it’s wrong to use. You could use a picture of Frank Stallone as your profile picture for all I’d care.
Maybe it’s not prudent to use AI art for this reason. Some people say they won’t read something with an AI generated image on it. I also assume that if you made an AI “art” gallery, no one would show up and it’d be crap. However, I wouldn’t consider it a moral error, in the same way I wouldn’t consider it a moral error if you filled an art gallery with copies of your driver’s licence photo. It’d just be a bit weird.
So for the time being, I’m not convinced it’s immoral for us to use AI generated images. AI labs are probably erring in taking people’s work, but I don’t think we’re in the wrong for benefitting from it, and the environmental harm is way overblown. If I had to guess, I’d say in 20 years the stigma around AI images will have lessened in the same way the stigma around pirating movies or reading free news did. I bet by then I’ll have really colourful minimalistic images with stars and galaxies in the sky. Then I’ll make it big, just you wait.
Brilliantly argued, as usual.
Honestly, I don’t think the anti-AI movement really hinges on any of these arguments. People don’t like change (status quo bias and all that). The looming future of AI overlords running society scares people, so they’re grasping at straws to justify their opposition.
Thanks for the shout out!
I’m actually practising to become an artist at the moment (self-teaching). AI art certainly has its place, but it’s far too wishy-washy if you want something very specific. And if you try to get something very specific, you end up spending hours slaving over it anyway.
Though, it’s very good for inspiration.