Why Utilitarianism is plausible.

Utilitarianism has the awkward property of seeming entirely obvious to its proponents, and clearly wrong to its opponents. This can make for discussions with much more heat than light. If it already seems obvious to you that utilitarianism is right, by all means skip this section.

There are no ethical first principles which are agreed on by everyone. On the other hand, there is a striking level of agreement about what is actually right and wrong, in concrete cases. Of course, there are disagreements; anthropologists have turned up some pretty surprising ones. But there is something pretty close to a consensus that (in most cases) murder, lying, rape and theft are bad, and that (in most cases) generosity, healing, truthfulness and loyalty are good.

One obvious pattern is that most of the things near-universally agreed to be good are things which make people happy, and most of the things near-universally agreed to be bad are things which make people miserable. And in most exceptional cases, there is a clear recognition that they are exceptional cases: excuses are made.

Furthermore, the actions usually reckoned to be the worst are often the ones which cause the most suffering. Rape, for instance, which causes lasting psychological trauma as well as involving physical injury, is generally reckoned to be morally much worse than theft.

So, utilitarianism seems to do a pretty good job of giving the right answers. There is also a theoretical justification for (at least) something rather like utilitarianism. It seems clear to me that, all else being equal, something which makes me happy is better than something which doesn’t. After all, that’s one way in which I make decisions (though, to be sure, I wouldn’t in such cases call them moral decisions). Since it seems plausible that all people are ethically equal, this means that anything which makes anyone happy is (all else being equal) better than something which does not. This seems to lead naturally to something very like utilitarianism.

If 49.9% of the population were slaves to the other 50.1%, more utility would be achieved.

-Imp

Hi, apple. Utilitarianism suggests or requires some operating principles that any ethical system would share in practice, I think. But you describe here a false analogy. What makes the individual happy is never what utilitarianism seeks. “Society” can’t be happy, anyway. That’s just weird. Imp’s example is to the point. There is no necessary transference, or extension, from the individual’s desires to the group’s. It’s like Hobbes’ “body” of government. The analogy holds up only as poetry.

But any society must have at its disposal techniques with which to employ a moral system. Utilitarianism can provide some of these techniques - anyone who has tended to a large group of children will employ Utilitarian techniques. As a ground for social policy, again, as Imp points out, it is sorely lacking.

Yes, something like Utilitarianism, in some ways. That’s a far cry from a justification for Utilitarianism itself as a guiding principle. Further, you make the point yourself that happiness may not be the goal of morality. You may insert whatever goal you wish for Utilitarianism, I think, which is another of its flaws. It is a better modus operandi than a generator of moral principles, in my view.

f

Interesting post, Apple.

Impenitent, that’s a bad, naive example. More utility than what? Is the maximum exactly at 49.9 / 50.1? You very clearly based the example on intuition, rather than on precise thought. It’s actually a very difficult counterargument to defend.

How would such slavery be Utilitarian? First, 49.9% of people would be, presumably, miserable. If you have one person as a slave and one as a master, does the master’s enjoyment of his power and his gain equal what the slave has lost in freedom? It seems highly doubtful. Thus, every single master/slave relationship is worse than not having such a relationship, and so each pair brings a decrease in relative Utility.
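
To make that concrete, here is a toy ledger; every number in it is a hypothetical illustration, chosen only to show the shape of the argument, not a measurement of anything:

```python
# Toy utility ledger for Imp's slavery scenario.
# Every number here is a hypothetical illustration, not a measurement.

POPULATION = 1_000_000
BASELINE = 5       # assumed happiness of a free, unenslaved person
MASTER_GAIN = 2    # assumed extra happiness an owner gets per slave
SLAVE_LOSS = -10   # assumed happiness change for being enslaved

def total_utility(slave_fraction: float) -> float:
    """Sum of everyone's happiness for a given fraction of enslavement."""
    slaves = slave_fraction * POPULATION
    free = POPULATION - slaves
    return (free * BASELINE
            + slaves * (BASELINE + SLAVE_LOSS)
            + slaves * MASTER_GAIN)

# Because each slave's loss (-10) outweighs each owner's gain (+2),
# utility falls linearly as the slave fraction rises: the maximum is at zero.
print(total_utility(0.0))    # 5000000.0 -> no slavery
print(total_utility(0.499))  # 1008000.0 -> Imp's 49.9% scenario, far worse
```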

Faust,

I think you may be missing Apple’s point. (Apple, please feel free to correct me.) Utilitarianism does care about the individual’s happiness - it just cares about group happiness a lot more. But the happiness of a group is built out of the happiness of individuals. So if you agree that (all other things equal) the happiness of an individual is a good thing, then you can reasonably extrapolate to saying that the happiness of a group is a good thing. If all are equal, then it’s a short step to say that greater sum happiness is better than lower sum happiness. And that’s Utilitarianism. You can’t derive it from the idea that happiness is good, but you can give a damn good argument.
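
For what it’s worth, that extrapolation can be written down compactly; note that the summation rule itself is the assumed step here, not something derived from “happiness is good”:

```latex
% Persons i = 1, ..., n; u_i(A) is person i's happiness under outcome A.
% The utilitarian ranking prefers A to B exactly when the sum is larger:
\[
  A \succeq B \quad\iff\quad \sum_{i=1}^{n} u_i(A) \;\ge\; \sum_{i=1}^{n} u_i(B)
\]
% Weighting every u_i equally is how "all people are equal" enters the argument.
```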

In general, it’s very very difficult to generate a good counterexample to Utilitarianism. Most counterexamples try to make a Utilitarian say that something clearly bad is, in fact, good. And these counterexamples are almost universally flawed. For a good discussion of Utilitarianism, check out this url:

http://www.ushacommunity.com/philosophy/philosophy.php

apple_shampoo: (I like your user name)

I agree with you in some ways but I have some trouble with some other things you say. Perhaps I’ll start with the things I agree with and go from there:

  1. I think being happy, and doing things that make ourselves and others very happy, is important and worthwhile.

  2. We do have something pretty close to a consensus on the sorts of things that are good and bad in practice.

  3. I think that there is alignment between that which is good and that which is pleasurable (in the rewarding sense).

However, as far as disagreements go, I have a couple of them too:

  1. Although there are no ethical principles that are agreed on by everyone, that doesn’t mean that there are no ethical first principles, or that the notion that there may be certain things which are always wrong is incorrect. After all, those who do not agree with the ethical first principles (should we happen to enumerate some in the near future) may be incorrect, or immoral; a lack of unanimity proves nothing.

  2. You invoke the notion of all else being equal. But a fair question is whether or not everything else is actually equal. Certainly in complicated cases it is very difficult to see “all the other things” to ensure that they are indeed equal.

  3. There is a disagreeable odour of accountancy around Utilitarianism. I can’t understand how we can possibly come up with propositions that meaningfully describe complex human interactions out of measurements showing that Bob’s quality of life increased while Mary’s decreased, but that Bob’s increase was larger than Mary’s decrease, so on the balance it was morally right.
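
To make that odour concrete, here is the sort of toy ledger I have in mind; the numbers are invented, which is exactly the problem:

```python
# The naive utilitarian ledger being objected to. The "measurements"
# are invented numbers; the objection is that they always must be.

outcome = {
    "Bob":  +7,  # supposed increase in Bob's quality of life
    "Mary": -4,  # supposed decrease in Mary's quality of life
}

net = sum(outcome.values())
verdict = "morally right" if net > 0 else "morally wrong"
print(f"net change = {net:+d}, so on the balance it was {verdict}")
# net change = +3, so on the balance it was morally right
```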

Twiffy, I would like to take up your challenge about coming up with a counter-example against Ut. that isn’t deeply flawed.

I think I’ll try torture: not Ut. condoning torture itself, but the slippery slope that it places us on.

Let us imagine a situation where one man plans to detonate a nuclear weapon over a city of one million people. He has acted alone (i.e., he contracted out all the work of actually gathering materials), has no family who will miss him and no friends. The bomb is in place and he sends a message to the mayor of the city proclaiming his intent to detonate the weapon in twelve hours. An hour later he is captured alive and taken to a holding cell.

Should he be tortured to get the information to stop the device from being detonated? I firmly believe that the Utilitarian would cry “yes, absolutely!” citing the massive loss of happiness/quality of life that many if not all in the city would undergo if the bomb detonated.

We can safely conclude that, under Utilitarianism, torturing the man in this circumstance is the morally right thing to do.

Once we accept that there are situations that exist where it is morally right to torture we have placed ourselves on an extremely slippery slope.

Let’s change the initial story. There are now two men. Then three, etc.

Then, let’s expand the situation to one where there are thousands of people who are endangering the lives of a million others. Is it still right to torture them?

And so on, until we get to a point where we are at equilibrium between the collective damage we do to the group we are torturing and the interest we serve in torturing them. Where is this equilibrium point for this example?
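
Phrased as arithmetic, with numbers I have simply made up (since nobody can supply real ones), the question looks something like this:

```python
# The 'equilibrium point' as a break-even calculation.
# Every number is a made-up placeholder; the structure is the point.

CITY_POPULATION = 1_000_000
HARM_PER_VICTIM = 100.0   # assumed suffering per person if the bomb goes off
HARM_PER_TORTURE = 500.0  # assumed suffering inflicted on each tortured person

harm_prevented = CITY_POPULATION * HARM_PER_VICTIM

# On this naive account, torturing n people stays "justified" while
# n * HARM_PER_TORTURE < harm_prevented.
equilibrium_n = harm_prevented / HARM_PER_TORTURE
print(f"equilibrium at n = {equilibrium_n:,.0f} tortured people")
# equilibrium at n = 200,000 tortured people
```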

If you feel my example is fatally flawed I’d appreciate your thoughts as to why; otherwise, I’d like to hear how many people you think it’s OK to torture to protect the lives of one million people.

Cheers,
gemty

so it comes down to a question of numbers? morality by statistics? how much slavery is acceptable then? not 49.9%? 30%? or if you want to use the hedonic calculus I am certain that you will find all sorts of excuses for slavery, medical experiments on humans, killing sick people, the handicapped and the elderly… maximizing happiness and reducing suffering and providing the most utility for everyone…

screw that

-Imp

So, if I can persuade, say, 51% of people that the state is justified in killing whoever it wishes in the name of the advancement of the human race, all the while implying to them that they wouldn’t be killed, then by utilitarian principles the state would have license to kill everyone, including itself.

You might say that this is a highly unlikely situation, but we live in a time when a certain South American nation that will remain unnamed recently staged a mock invasion by US troops to prepare its civilians for war. Good old crazy left-wing South American dictators, they keep my interest in politics alive.

The main problem I’ve always had with Utilitarianism is that I feel like the ‘outcome’ can never be labelled as beneficial or detrimental in any sort of examined temporal sense. This is sort of critical, as that is the main point of a moral system.

Why?

Well, I feel like when you expand Utilitarianism to its macroscopic extent we see the shifty shadow of Karma dancing around subversively. Let’s take gemty’s nuke example but stretch out the ‘net happiness’ quantification to, say… 10 years. We come to find out that the bomb going off actually would have helped humanity as a whole, even if that one city did get nuked.

Or we find that the one guy we tortured in order to save the 100 had a vindictive brother who will kill 200 to avenge his death. etc.

See where I’m going with this? The variables (at least in my head) simply overwhelm whatever sort of conceptual theory you throw at it, -especially- when you drag out the timeline. I’m of the personal opinion that morality should stem from the internal, on as individualistic a front as it can, as I think trying to expand to the macro is just too much of a task. Anyways… this isn’t a new argument, nor mine alone, but I forget the counterargument so I bring it up now.

This is all in theory though… obviously Utilitarianism has its benefits in school and places like that.

absolute freedom is an excellent example of that

I think you are missing the point of utilitarianism.

If everybody had your beliefs, then the ‘utilitarian’ act would be to not torture.

Of course it’s ‘plausible’. We could create a ‘moral judgement machine’ that gave its results based on the rate of decay of a chunk of uranium. How long it would last before people took utilitarianism and chucked it is a better question. I don’t think many people are big fans of ‘the greater good’, unless they are part of that greater good. I don’t think it would last long.

Gemty, good post. Because you discuss the slippery slope, I think your argument is a lot more reasonable and potent than if you had cried “torture is wrong” at the outset. I’ll address it more specifically below.

Imp, you may not agree with Utilitarianism, and you might find its hedonic calculus offensive - that’s fine. I was just pointing out that your objection, as stated, wasn’t a good one. According to my argument, no slavery is acceptable, as I’m sure you will understand if you re-read the very last line of what you quoted.

Someone, your example is entirely equivalent to Imp’s. Just because either of you can generate a situation where a majority of people are power-dominant, and have the ability to enslave / kill / whatever, doesn’t mean Utilitarianism would advocate it. The misery of the rest of the population would in just about all cases overwhelm the other side of the equation, and render the situation immoral.

Old_Gobbo, it’s quite true that to consider all the variables in a Utilitarian calculation, even one as simple as “should I tie my shoes today?”, is impossible for us poor limited humans. But this isn’t a good objection for two reasons. First, Utilitarianism doesn’t SAY that we should consider all the factors. In fact, since most people do better things with their time than consider the long-term effects of tying their shoe-laces, trying to consider all the useless factors is generally non-Utilitarian, since it’s a waste of time and a decrease in happiness. Second, there are ALWAYS these extra factors to consider in almost ANY moral system. Utilitarianism is NOT unique in this. In everyday morality, we are taught to consider the consequences of our actions as far as is reasonable, and to not worry about the rest. This is exactly what Utilitarianism asks - it just asks that you try to evaluate “consequences” more precisely than in other moral systems.

Delboy, she isn’t missing the point of Utilitarianism at all - her example is right on. Utilitarianism isn’t about “if everybody had your beliefs”. Utilitarianism doesn’t even necessarily claim that all people should be Utilitarian. Presumably it would advocate that those with lower analytic abilities adopt a preset group of imperatives, like “don’t kill” and so on, and do their best to stick to those.

Gemty,

Yes, this is certainly true. Of course, this is by no means unique to Utilitarianism. I know many non-Utilitarians who would agree that, in the case of the individual, torture is the correct action. I know only a few individuals who would claim otherwise.

Gemty, I agree with everything in your post. However, I don’t think this constitutes a valid objection to Utilitarianism, or any other moral theory in which slippery slopes exist. It merely poses a note of caution to those who use such theories: be careful, lest misjudgment allow you to cause pain.

Really, our society (and our lives) are filled with slippery slopes. We jail those who cause harm to us. Where does this stop? Do we jail everyone who offends us? Those whose religion we don’t agree with? There’s a big slippery slope there, but we’ve drawn the line somewhere, and it’s a place that most are comfortable with. We allow abortion. Where do we draw the line? Can a woman abort in her last month of pregnancy? Do we allow post-birth abortions? Do we not allow abortions at all, because of this slippery slope?

In many cases like this, like with the jailings, we don’t think of there being a slippery slope, because we’re all pretty sure that it’s a good thing to draw the line somewhere near where it’s already drawn. Your torture example is more extreme because there is no official line drawn; torture is officially condemned in all cases. One of your objections is, if you allow one torture, you must draw the line somewhere. This is true; if you imagine a larger and larger group of terrorists about to nuke a city, how big would the group have to be before I wouldn’t condone torturing them?

I think your example is great, but that it isn’t a valid objection to Utilitarianism - just a valid concern.

While it’s true that a line seems to exist somewhere, you shouldn’t expect me to be able to tell you WHERE it is. A Utilitarian might well say, “abortions make for intentional, happier families, and significantly reduce crime rates; thus, they are good things. However, post-birth abortions are terrible ideas, and essentially amount to murdering a newborn.” This Utilitarian is saying that a line must be drawn; however, it is a difficult situation to say WHERE it must be drawn, and that cannot be established without significant analysis. Even then, the conclusion will be very debatable. One trimester? Two, three? It’s difficult to say with any certainty.

However, in this case, I think I can confidently give you an answer. If the question is, “how many terrorists responsible for this atomic bomb would I torture to save a million innocents”, the answer is “all of them”. It’s true that the happiness of the terrorists is an important factor in the Utilitarian calculation. However, when performing any Utilitarian calculation, it should be remembered that you maximize happiness over ALL TIME, not just now. Because of this, if you establish a precedent of not tolerating certain behavior (like terrorism) and punishing it severely, fewer people are likely to become terrorists in the future. (Of course, you have to balance this with a foreign policy that doesn’t make Muslims hate us… unfortunately our current administration doesn’t seem to have realized that at all.)

This is precisely the reason that if Person A punches Person B, and A is made happier by the punch than Person B was made miserable, it is STILL most likely not a Utilitarian outcome, because allowing the punching encourages the behavior of punching, which generally causes more harm than good.
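
To put rough numbers on that punching example (all of them hypothetical, chosen only to show the structure of the calculation):

```python
# Why a locally "profitable" punch is still net-negative once precedent
# is counted. All numbers are hypothetical illustrations.

PUNCHER_GAIN = +10   # A's enjoyment of the punch
PUNCHED_LOSS = -8    # B's misery from being punched
immediate_net = PUNCHER_GAIN + PUNCHED_LOSS   # +2: looks like a gain

# Tolerating the punch encourages more punching, and most future
# punches will not be "profitable" ones.
EXPECTED_COPYCAT_PUNCHES = 5
AVG_NET_PER_COPYCAT = -3

long_run_net = immediate_net + EXPECTED_COPYCAT_PUNCHES * AVG_NET_PER_COPYCAT
print(immediate_net, long_run_net)   # 2 -13: the sign flips over time
```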

Now, in a Utilitarian sense, you may disagree with my assessment. If you find the idea of torturing five million terrorists just to save one million innocents intolerable, remember that I may have made a mistake, overlooked an important factor, and there may be a good reason to not torture all or any of those five million. Conversely, you may be overly sentimental, caring too much about the well-being of murderers and not enough about innocent people. It’s a difficult question in any sense, but not one that really argues against Utilitarianism.

Twiffy,

So how do we then come to the conclusion of what constitutes this net happiness? We presume it to be there in all its quantitative glory, but how do we get to it?

Quite true… Kant would say they are largely removed through deontology though.

Well… that’s mighty brave of it, but it’s that request that I’m investigating here.

Oh, certainly we get to the conclusion by considering factors. But the point is, we can reasonably omit small considerations. If I consider whether or not to tie my shoelaces, I don’t worry about the hypothetical psychopath who will kill anyone he sees with untied shoelaces. It’s a possible factor, but so unlikely that it’s literally not worth taking the time to factor it in.
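
One way to picture this is as an expected-impact cutoff; the factors, probabilities and numbers below are invented purely for illustration:

```python
# Omitting negligible factors via an expected-impact cutoff.
# Factors, probabilities and impacts are all invented illustrations.

factors = [
    # (description, probability, happiness impact if it happens)
    ("trip over untied laces",          0.01,        -50),
    ("ten seconds spent tying them",    1.00,       -0.1),
    ("psychopath who kills the untied", 1e-9, -1_000_000),
]

THRESHOLD = 0.01  # ignore anything whose expected impact is smaller

for description, p, impact in factors:
    expected = p * impact
    if abs(expected) >= THRESHOLD:
        print(f"consider: {description} ({expected:+.2f})")
    else:
        print(f"ignore:   {description} ({expected:+.4f})")
# The psychopath's expected impact is -0.0010, far below the cutoff,
# so it is literally not worth the time it takes to weigh it.
```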

It sounded more like you were arguing against the consideration of all factors, rather than the precise consideration of the more important factors.

It’s true that considering ALL factors is generally impossible, and usually not necessary. However, you’ll have a hard time arguing that considering a particular factor PRECISELY isn’t good. Precision simply means a lower likelihood of mistake. Generally speaking, it’s an excellent thing. In any moral system, it’s better to understand that you shouldn’t murder innocent people because you’re causing needless misery, because others depend on that individual, because you’re probably damaging your own future, and so on. This is better than the vague assertion “killing is wrong”. If you have the precise understanding, you can be confident that killing a murderous intruder in your own home is a good and moral thing, instead of blindly clinging to the imprecise maxim “killing is wrong” and thus letting him kill you.

DietCoke, Utilitarianism is generally considered to be the most popular moral philosophy (as opposed to religious morality) outside of academia. “The Greater Good” is a phrase that appeals to a fairly large number of people, and reasonably so, since anything besides “The Greater Good” is worse for everyone, on average, than what Utilitarianism would advocate. If you don’t find Utilitarianism appealing, I can’t prove to you that you’re wrong - but chances are, if you disapprove, you simply haven’t thought about it as much as some others. You can disagree axiomatically with Utilitarianism, but it’s virtually impossible to generate a good argument against it, since “the greatest good” is really the prime dictate of society, and we all tend to have socially-based morality.

Twiffy,

I happen to like Utilitarianism… I’m just playing devil’s advocate here.

I guess what I’m trying to say is that while we sort of look around and cautiously nod our heads to assertions like ‘If I kill the robber and save my family it’s justified…’ how do we really know that it is?

It can’t be put into numbers.

Just like in the slavery example: we see the misery and the compared ‘happiness’, but it’s just a psychological assumption. Who’s to say my misery isn’t 1000 times more important?

How do we -get- to the quantity? We can’t… it’s an illusion created (in my mind) by the psychological tendencies we’ve developed socially, as a species, up until this point. A biological attempt to weed out, as I said before, the temporal Karmic factors that would plague any sort of decision making.

So are we creating a moral theory, or merely detecting a fairly effective biological one, innate (and so flawed) in all of us?


Unless, like me, you’re not a ‘reasonable’ individual, and you’d rather look out for your own good and the people close to you. If it just so happens that our own good coincides with the greater good 99% of the time, great, but I’m not about to lay my own head on the chopping block when I find that it’s for the ‘greater good’.

You’re right, you can’t prove to me I’m wrong, and I can’t prove to you that I’m right. Sucks arguing about moral philosophy, ’cause we ain’t gonna have any evidence to back anything we say up.

OldGobbo,

Well, the big question is, CAN it be put into numbers? It’s easy to be skeptical about that, since any attempt to put it into numbers, however well defended, still seems like assertion.

“We all know there are different levels of happiness and sadness, and so presumably something that makes me really happy, and you a little sad, is better than nothing.”

Although that statement is reasonable, it’s still a big issue how you quantify these extremes. How do we know that my happiness really would be greater than your sadness? Me saying “I’m REALLY happy” and you saying “I’m a little sad” isn’t enough - maybe you’re always what I would consider “really sad”, and so you saying “I’m a little sad” is the same as me saying “I’m REALLY REALLY MISERABLE!”

That’s the problem, of course - but it still seems like a PRACTICAL problem, and not a FUNDAMENTAL problem. It seems quite plausible that fMRI and brain imaging technology will give us the means to precisely compare levels of emotion between individuals someday. And of course, with Utilitarianism, practical objections are factored into the Utilitarian calculations themselves - only fundamental objections could possibly pose a problem. When confronted with any practical difficulty in Utilitarianism, the correct response is, “well, do the best you can.” We all have our ways of estimating the emotional state of someone else - it’s not 100% reliable, but in many cases it’s the best we have, and it’s what we use with Utilitarianism - or, for that matter, just about any moral theory.
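
As a sketch of what such a comparison could look like: suppose (purely hypothetically) that each person’s verbal scale is an unknown linear rescaling of some common physiological measure. Then two shared calibration readings per person would be enough to line the scales up:

```python
# Interpersonal comparison as a calibration problem (hypothetical model):
# assume person P's reported feeling r relates to a common physiological
# measure m by r = a*m + b, with unknown a, b for each person.

def fit_scale(points: list[tuple[float, float]]) -> tuple[float, float]:
    """Recover (a, b) from two (measure, report) calibration readings."""
    (m1, r1), (m2, r2) = points
    a = (r2 - r1) / (m2 - m1)
    return a, r1 - a * m1

def to_common(report: float, a: float, b: float) -> float:
    """Map a personal report back onto the shared physiological scale."""
    return (report - b) / a

a_you, b_you = fit_scale([(0.0, 0.0), (10.0, 5.0)])   # a muted reporter
a_me, b_me = fit_scale([(0.0, -2.0), (10.0, 18.0)])   # a dramatic reporter

# Your "I'm a little sad" (-1) and my "I'm REALLY REALLY MISERABLE!" (-6)
# turn out to be the very same underlying state:
print(to_common(-1.0, a_you, b_you))  # -2.0
print(to_common(-6.0, a_me, b_me))    # -2.0
```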

Utilitarianism is certainly a created moral theory, but also fairly innate in a lot of ways. Our application of the theory may often be flawed, but I still haven’t seen any well-formed argument that the theory ITSELF is flawed. Difficult to apply, sometimes, yes. Impossible to apply with 100% certainty, of course. Flawed? I don’t think so, but I’ll keep my eyes open.

DietCoke,

Yep, looks like the best we can do is just smile and nod, since at this point it’s all “I believe X!” “Oh yeah, well I believe Y!”

Every theory is flawed.

You should know better than to believe nice-sounding catch-all phrases like this. Your statement is easily disproven.

A theory is a body of information ABOUT something, so the conjecture “every theory is flawed”, being a statement about theories, certainly counts as a theory. Thus, if your statement is true, the statement “every theory is flawed” is itself flawed; that is, it is not correct in full generality, and so there must be at least one theory that is not flawed.

But you can see this conclusion from a different angle, as well. There are facts about the universe; most physics statements seem to fall under that category (and a few will no doubt be disproven later on). Regardless of what these facts ARE, it is certain that SOMETHING exists, even if it is only me. You can take the collection of true statements about whatever exists. That collection is a theory, and it is flawless, since it is abstractly constructed from pure truth.

Of course, we’re constantly trying to discover what this theory is, and most of our attempts have noticeable flaws. However, there are good examples of flawless theories, and they come from math. Math is 100% flawless. Now, it may not apply to the real world, but that’s an issue of applicability, not perfection. Math is consistent and pure, and is a flawless theory with debatable (although not that debatable) application. It could be the case that Utilitarianism is another flawless theory. I could also be wrong, but I’m still waiting for evidence.