I’ve mentioned before here that I’m a Moral Realist. I have this really insane view that genocide is objectively bad, and not that I just don’t like it. As a result, I’ve been locked in a years-long war with some Subjectivist friends of mine (You know who you are! Cowards!).
One argument they make is that when we say “X is wrong”, what we really mean is “I have a preference against X”. The word “preference” sounds a bit soft, but of course, you can have very strong preferences. They don’t believe we just have a trivial distaste for murder, but that we really despise it. However, strong though it may be, it’s still just a preference - and moral language is just us describing our tastes. Murder has no intrinsic property of disvalue, and were we to run into someone who really loves it, they’d not be making any kind of error. Much like how people who like mushrooms aren’t making an error (however much it pains me to say it).
Slimy ugly little things. Smeagol hates nasty Elf shrooms!
Now, of course I’m going to disagree with this assessment. One reason is that murder is just obviously bad, and saying that Jeffrey Dahmer didn’t make any moral error because he had a preference for eating people seems like a massive cost to a theory. They would never accept my appeals to the self-evidence of moral properties though, so I’ll try to pick it apart in different ways.
Preferences and Moral Beliefs
The first thing to say about the Preference Theory is that, were it true, it would be awfully weird that we have moral attitudes in the first place. There’s something different about thinking an act is wrong and simply having a preference against it. Take my rugby team being knocked out of the URC quarter-finals last week. As I watched us get battered into oblivion, I very much had a preference against us losing - but I didn’t think it was morally wrong that we lost. I didn’t walk away from that game thinking that James Lowe was a moral monster for scoring what really was a beautiful try.
If we’re going to accept the Preference Theory, we’re going to need to explain exactly why some preferences have a moral character to them and some don’t. You might think it’s the intensity of the preference: we think murder is wrong because we strongly dislike it, while our other preferences are relatively minor. However, I think this fails. There are absolutely some things I think are wrong that I don’t have an intense preference against. It’s wrong for my brother to steal a pound from me - but my preference that we win those quarter-finals was much stronger than my preference that he not do that.
Of course, there’s another way out, which is to simply deny that there’s any difference in the experience of our moral attitudes and our other preferences - but all I can say is that this seems obviously false. To deny it would just mean denying that we think pound stealing is wrong while losing the rugby game isn’t, which seems crazy. I’m not sure how the Preference Theory can be considered more plausible than “I think stealing the pound is wrong, but losing the rugby game isn’t”.
Saw Scenario
Imagine you wake up in a Saw scenario. You’re in a cage, and beside you is another person in a cage. They share all the relevant moral similarities to you - they’re not really old and close to death, they’re not Hitler, and they also subscribe to my blog. There’s no plausible reason for a stranger to value you differently. Across the room is a pair of buttons with a stranger standing beside them. Button one tortures you for 1 hour. Button two tortures the other caged person for 1 hour and 1 second. For some reason, the button pusher is obligated to choose a button (let’s say, for example, if they don’t, the world will implode. Or maybe they’ll be forced to use a Stairmaster for 30 seconds. Haunting). What should they do?
Were I in the cage, it seems obvious that they should torture me. There’s no relevant difference between me and the other caged person, and while 1 hour of torture is bad, it’s incrementally better than an hour and a second of torture. However, do I have a preference that they torture me? No! I’d really like not to get tortured please! If they slipped and tortured the other person by mistake, I’d be so relieved - which isn’t what usually happens after a preference of yours is frustrated. So, it seems there’s a disparity between our moral attitudes and our preferences.
One possible out is to say that our moral statements don’t describe our preferences, but our second order preferences. To which I say, “Ah hah! So they don’t just describe our preferences then!”. My men have risen from the trenches and driven you back an inch. A fine victory that I will presumably win a medal for. Perhaps, some kind of hat.
By second order preference, I mean the preference we have for our preferences (yes, this is a bit confusing, but I promise I won’t start talking about third order preferences). So, for example, I have a first order preference against eating mushrooms. However, my second order preference is in favour of eating mushrooms. Why would I prefer to prefer mushrooms? Because then I’d have more options on the menu, and would have another ingredient to play with! I’m also pretty sure they’re good for you. Alas, I do not have a preference for mushrooms, and that is because they’re the devil.
So, are moral attitudes like that? Are they the sort of thing that were we able to mold our first order preferences, we would choose to prefer? Well, I think no. Let’s take a strong moral conviction of mine - it’s wrong to eat animals. If you gave me a button that would allow me to un-veganify myself, would I press it? Well, I’m a bit uncertain. I think I probably wouldn’t press it, but it’s certainly tempting. After all, it would be nice to not have such a strong conviction that everyone disagrees with. The actions required to be vegan are very trivial, but the feeling of alienation when you realise you’re in such a minority can be a drag.
Compare this to my moral attitude on the matter - is there any of this sort of uncertainty? No, it’s one of the philosophical beliefs I’m most certain of! If moral attitudes merely reported second order preferences, my certainty about the wrongness of meat eating should be no greater than my certainty about which preferences I’d choose to have - but it’s far greater. So moral attitudes probably aren’t just reports of second order preferences.
So, I don’t think when we make moral assessments we’re just talking about our likes and dislikes. We’re probably doing something else. Could it be that we sometimes describe the value of the actions themselves, and not just our attitudes towards them? Could it be that we think murder is bad because it really is bad? Scary to even think about. Even scarier than seeing slices of shiitake in your lasagna.
You say, "One reason is that murder is just obviously bad, and saying that Jeffrey Dahmer didn’t make any moral error because he had a preference for eating people seems like a massive cost to a theory."
What massive cost do you have in mind? I'm a moral antirealist, and I do not think there are any costs to rejecting moral realism.
I've already saved three of your articles for future reference. I rarely do that!
Fantastic article, and I feel like this very popular lay-theory of morality as either being preferences or as being caused by preferences is an underaddressed position. Thanks!