Things everyone should believe

Hi everyone!

I posted something similar to this a while ago, but I wanted fresh answers, and possibly to attract the attention of any newbies.

I’m going to list some things I think everyone should believe. To not believe any one of these things is, according to me, to be flawed.

  1. Logic, and all its consequences (namely, math).

  2. Utilitarianism. We can nit-pick about details, but in the end, if you could choose whatever optimizes quality of life for all organisms that HAVE quality of life, over all time, and your only other choice was something less than the optimum, why would you ever choose less? Sometimes we know it is right to sacrifice immediate happiness for the sake of something more important (can’t buy the new car because I need to pay the mortgage), but that is only a local sacrifice in happiness. We justify it because our quality of life is higher overall – in the end, we’re happier with the house and without the new car than vice versa, even if we’re less happy on the day we had to make the choice.

Those who don’t buy Utilitarianism, in my experience, are usually either

a) stupidly concerned with other moral absolutes, like human rights. Why on god’s stinky polluted earth would you believe absolutely in human rights over something like utilitarianism? Idiotic. See further below.

b) hacking away at straw men, or using incorrect examples to justify the dismissal. (Let’s say Bob is a happy, kind, brilliant person dying of kidney failure, and Joe is a miserable self-absorbed stupid jerk who walks into the hospital because he has a small cold. Should the hospital staff kill Joe and give his kidneys to Bob? Utilitarianism seems to say yes! – No it doesn’t, obviously. It pains me when people dismiss good ideas because they’re too stupid or impatient to think them through correctly.)

c) less-stupidly concerned with other moral absolutes. Morality is obviously relative, so you can believe that acting selfishly is morally right. It’s pretty much impossible to argue against this, except to note that few people really believe this. Most people who propound selfishness don’t believe that it’s right of itself, but rather that selfishness “works” - or to put it in utilitarian terms, if each individual acts in his best interest, everyone will be happiest on average. Most selfish-theorists, if you really get into it, don’t believe in selfishness axiomatically, which is good since that would be impossible to disprove. Instead they believe, essentially, that selfishness is the optimal utilitarian strategy. This is trivially false, as exemplified in the prisoner’s dilemma, in general by game theory, and in fact by the very existence of the human tendency towards friendship.
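The prisoner’s dilemma point can be made concrete with a tiny sketch. The payoff numbers below are the standard textbook values (temptation 5, reward 3, punishment 1, sucker 0), chosen purely for illustration:

```python
# One-shot prisoner's dilemma with the standard textbook payoffs.
PAYOFFS = {
    ("cooperate", "cooperate"): (3, 3),  # mutual cooperation: reward
    ("cooperate", "defect"):    (0, 5),  # sucker vs. temptation
    ("defect",    "cooperate"): (5, 0),
    ("defect",    "defect"):    (1, 1),  # mutual defection: punishment
}

def total_utility(move_a, move_b):
    """Sum of both players' payoffs -- the 'utilitarian' score of an outcome."""
    a, b = PAYOFFS[(move_a, move_b)]
    return a + b

# Defection dominates cooperation for each player individually
# (5 > 3 and 1 > 0), so purely selfish players both defect -- yet the
# selfish outcome is the worst one for the group as a whole:
selfish = total_utility("defect", "defect")            # 2
cooperative = total_utility("cooperate", "cooperate")  # 6
assert selfish < cooperative
```

Each player’s individually optimal strategy produces a total well below what mutual cooperation yields, which is exactly the sense in which selfishness fails as a utilitarian strategy.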

If someone axiomatically believes in selfishness (or almost any moral theory), you can’t really argue against it, because of course morality is subjective. (See further below.) Why, then, do I bash those who advocate the moral absolute of human rights? Because all such views are not only poorly thought out, but they are almost always held in an inconsistent manner. Why is it wrong to kidnap someone? Because they have a right to freedom. Oh really - well then, why is it ok (sometimes) to put people in jail? Because they gave up their right to freedom by committing a heinous crime. Under what conditions does an action result in you giving up your “fundamental right”? Well, that’s a difficult question, but when you really get down to it, the answer shockingly ends up being “whenever it causes more good than the alternative”. If you argue Socratically with someone who “believes in human rights”, what you usually find is a dumb utilitarian.

2.5) Pragmatism / Consequentialism. Per utilitarianism, only the sum of quality of life (over all time) matters - what you do to get it doesn’t matter. If you do something “horrible” in an attempt to be utilitarian, if it works, everyone is happier, so good for you. If it fails, people are less happy, and you failed. Lying, stealing, and murder are only wrong because they lower quality of life on average. If they made everyone happier, they would become morally right. Thus, the only thing that has moral value is consequences. Means statistically influence consequences (killing usually reduces happiness), but not always - and thus in an absolute sense, means are irrelevant. This is why, if you had to murder one innocent in order to save a hundred, it would be morally correct to murder, all other factors aside (e.g., if the one guy is Einstein and the hundred are all retarded violent rapists, of course it’s better to kill the 100).

Most people I’ve talked to who reject consequentialism are again too focused on the immediate, and love them some straw men. Ok, our goal is world peace. But what if to obtain world peace we had to kill 9 out of 10 people? Surely that end doesn’t justify those means. Of course, this argument is a straw man, and stupid, because the means - killing 9/10 people - is always INCLUDED in the consequences. We don’t want (world peace and mass murder), we just want world peace. If the only way to get world peace was to kill 9/10 of all people, it would be a bad consequence. If the only way to get world peace was to murder one innocent, lovely human being, on the other hand, it would totally be worth it, and I’d pull the trigger myself.

  3. Science / Empiricism / Occam’s Razor. Per pragmatism and consequentialism, we want a method of operating that maximizes our effectiveness. Our effectiveness is necessarily statistically dependent on our ability to predict consequences, which requires an assumption that consequences are predictable. Thus we have “the inductive fallacy”, only since we’re not stupid and not claiming that induction can be demonstrated from logic alone, it isn’t a fallacy, but rather a consequence of an augmented axiomatic system. Logic → induction is false, but (logic + utilitarianism) → induction seems to be true. And finally we throw in Occam’s Razor as a way to determine the correct expected system given available data. Occam’s Razor is often misunderstood to favor a simpler theory over a more complicated one. What Occam’s Razor actually does is to provide a way to choose between two theories that both explain all observed data. When two theories are equal in terms of matching what’s already been observed, pick the one that requires fewer additional assumptions. If you’re trying to explain the fossil record and you’re faced with two theories, one of which is evolution, and the other of which is evolution together with the tinkering of aliens, clearly you pick the former. The latter is completely possible, but it doesn’t explain anything that the first one didn’t explain, and it involves an extra assumption, so kick it out.
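For what it’s worth, this reading of Occam’s Razor has a formal cousin in statistical model selection: criteria like AIC score two models that fit the data equally well and penalize the one carrying more assumptions (free parameters). A minimal sketch, with made-up numbers standing in for the fossil-record example:

```python
def aic(log_likelihood, k):
    """Akaike information criterion: 2k - 2*ln(L). Lower is better."""
    return 2 * k - 2 * log_likelihood

# Two hypothetical theories that explain all observed data equally well
# (identical log-likelihood), but theory B carries one extra assumption.
log_l = -42.0               # made-up fit quality, the same for both
theory_a = aic(log_l, k=3)  # e.g. plain evolution
theory_b = aic(log_l, k=4)  # evolution + alien tinkering: one more parameter

assert theory_a < theory_b  # prefer the theory with fewer assumptions
```

When the fit to observed data is identical, the only thing separating the scores is the assumption count, so the leaner theory wins — which is exactly the razor as described above.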

Who rejects these? Some people love to focus on the fact that you can’t prove induction from logic. This is completely stupid. Of course you can’t prove induction from logic. You can’t prove Maxwell’s Laws using logic either, but that doesn’t mean they aren’t true. You can’t prove apples exist using logic, but I ate one for lunch today, so I’m pretty sure it was there. This universe is not generated by logic alone - it has additional axioms. That should be obvious.

3.5) Evolution. Seriously, people.

  4. Atheism and associated. There is no god, there is no afterlife, there is no soul, and there is no fate except possibly via determinism, which doesn’t really count. This follows trivially from step 3, simply thanks to Occam’s Razor. The existence of God doesn’t explain anything that science can’t (remember that the lack of an existing explanation is in no way a demonstration of an inability to explain, which in itself would in no way be indicative of the need for a deus ex machina that can violate otherwise inviolable physical laws), and requires an additional assumption, so we reject it. That simple.

Who rejects atheism? Two groups: idiots, and those who depend on emotions or instinct to guide their beliefs. True, there is no disproof of god – but there’s no good reason to believe in god (or santa or the tooth fairy), so smart, rational people don’t do it. The end.

  5. Moral Relativism. This follows from science & empiricism, together with logic. We know the universe operates according to logic. Clearly the universe has extra axioms, since you can’t derive the existence of atoms from logic alone. However, since evolution and evolutionary psychology (read “The Moral Animal” by Robert Wright for a fantastic book on the subject) explain morality very thoroughly, up to and including our very strong belief in its absolute-ness, there is no need (per Occam’s Razor) to ascribe morality, or the actions of humans, to the axiom set of the universe. The universe doesn’t care what humans do. Only humans care. Morality is a human axiom, not a universal one.

Who doesn’t believe this? People who can’t get past the fact that they really, REALLY believe that some things are right and wrong. Not a very good trait for would-be philosophers.

Also, why would I list “utilitarianism” as the primary thing to believe if I think morality is relative? Simple. The universe doesn’t give a damn about morality. That’s the sense in which morality is relative. But to the human species, morality is not relative. The idea that we’re a blank slate is a stupid cultural bias (one that, amusingly, claims that morality is just another stupid cultural bias). Evolutionary psychology and twin studies have shown the existence and prevalence of moral absolutes (along with many other cultural absolutes). Morality, along with sex drive, vision mechanisms, competition for status, and a huge host of other factors, is genetically pre-programmed into each and every person who isn’t brain damaged.

Now, I don’t think that utilitarianism is the “human moral system”. Not only is there no single human moral system (just because we’re morally pre-programmed doesn’t mean that ALL moral beliefs are pre-programmed; just some), but if there were one, it wouldn’t be utilitarianism. In fact, one of the strongest human moral absolutes is reciprocation, or tit for tat. Utilitarianism, on the other hand, sometimes advocates the screwing over of an individual to benefit the whole group, which is very much against the spirit of reciprocation.

But one of the absolutes of human nature is that there are ways of feeling that we value. We value happiness (most of us, anyway). We value satisfaction in accomplishment; we value feelings of peace and serenity. Whether you care to lump these things we value under the term “happiness” or refer to them with a multitude of terms, they are the things that drive us, and we want those feelings. By definition, anything that gives our lives meaning is encompassed in this list of feelings. Everything that we consider human rights, everything that we consider good or beautiful, comes from these feelings. To live, to be free, to pursue happiness. To create, to love, to wonder. To be dominant and successful and respected. Even the darker things we enjoy are still encompassed in these feelings. We usually (although rightfully) consider these our “darker” side only because they give us enjoyment at the expense of the well-being of others, or else in conflict with less-well-founded societal norms.

So these things are what we value, by definition. Now imagine a world with only one person. What would be wrong for this person to do? He clearly has no obligation to other people, since there aren’t any. (Let’s forget about non-human animals and suppose he has, somehow, unlimited food.) He doesn’t have to worry about lying or stealing or killing. His only moral duty seems to be to do whatever the hell he wants, because that’s how he maximizes his expected quality of life. (Go him.)

If we then throw another person into the equation, what has changed? The dynamics are now complicated… person 2 may wish to kill person 1, but probably shouldn’t. But these are only the DYNAMICS that have changed. What has changed, and become more complicated, is HOW to maximize their total quality of life. The idea that they should still maximize the total available quality of life remains unchanged. (To use a semi-crappy analogy, think of physics. We can solve the equations for a planet orbiting a star easily. We can use these to predict the occurrences of eclipses to incredible accuracy. But throw in a second planet, and we can no longer solve the equations exactly - this is the three-body problem, which has no general closed-form solution. But the system still operates according to the same law - it’s just the exact dynamics of the situation that are difficult.)

(Some of you may have noticed that I did not at all prove utilitarianism - I just made an argument for utilitarianism stemming from some scientific fact together with an implicit appeal to intuitive morality. This is the best I – or indeed anyone – can do, since morality is, after all, relative. If I could do better, I would be actually PROVING utilitarianism, which would imply that morality was not relative after all. Nonetheless, I think I’ve made a pretty good case.)

These are things I think everyone should believe. There are many more, but they’re a start. Opinions?

Quite a block of text you have here…

After reading the first 3/4 of your post, I have to finish now and say that I already disagree with much of what you’re saying. These aren’t things “everyone” should believe, because personally I feel there are better things to believe in than what you are writing. So, me being a part of everyone, I shouldn’t believe (all of) those things unless I want to.

I will try again to read this and see if I can make more sense of it.

I won’t pick up the “stupid” rock and throw it back, but I will respond:

All your points that I disagree with fall back on this irrational statement. What organisms have a quality of life (over all time???), and what is “something less than the optimum”?

(See below)

Another statement without foundation. Of course if you want to be irrational, total subjectivity is tailor-made just for you.

Absolute morality, not to be confused with subjective standards for virtue foisted on us by religion and socialists, is nothing more than honoring equally the rights of all to life, liberty and property from violation through force or fraud. If someone violates those rights, putting their rights above others and attempting to live by/establish a double standard, they in turn put their rights below others and make themselves liable to justice. Just that simple, just that consistent.

That’s true, since the Big Bang. Before that science and God are both completely without foundation as an explanation.

Now the idiot rock. I don’t reject atheism, nor do I reject the existence of God. Given the Big Bang barrier, no one can logically reject either and the likelihood (or unlikelihood) of either is 50-50. Santa and the tooth fairy, being on this side of the Big Bang, are indeed irrational as is the god of every “revealed” religion.

We don’t know it, but given the total lack of any evidence for the supernatural, we can assume it with the highest level of confidence possible.

More correctly, morality is a universal code of interactive behavior for sentient/self-aware beings, they being the only entities able to make moral choices between right and wrong. Non-sentient beings are innocent.

You support the argument for a universal moral code (as I’ve defined it) that only deals with the moral interaction between sentient beings. How he treats animals, or whatever he does to or for himself, are matters of individual virtue. I think the inhumane treatment of the higher animals is a gray area which we can reasonably add to our justice system without calling it an issue of morality.

sorry but utilitarianism is weak…

the pragmatists “good-o-meter” of utilitarianism is a tool for someone born without a personality. :wink:

Uh oh, you don’t believe what I believe.

It wasn’t a statement, it was a rhetorical question. Most organisms have a quality of life. Humans do. There’s no reason to suppose larger mammals don’t - it seems almost certain that dogs, monkeys, elephants, etc., have emotions along the lines of fear, aggression, sadness, and so forth, although certainly less potent than human emotions, since they lack the higher brain functions that likely give our emotions added relevance.

The idea of maximizing quality of life over all time is a specification on standard utilitarianism, which simply advocates maximizing happiness. Specifying that we try to achieve the maximum over all time is what makes actions such as restricting pollution now, and suffering economic consequences as a result, morally imperative - so that future generations will have a world to live in.

See, that’s ironic, because you didn’t bother giving any foundation for your statement. I’m assuming you responded to what I wrote piece by piece instead of reading the whole thing first, since I justified the claim of subjectivity further below.

First, let me say again that the subjectivity of morality is obvious, and that those who reject moral subjectivity are, to the best of my ability to tell, wholly guilty of projecting their own intense moral beliefs upon a universe that doesn’t give a shit.

There are three obvious possibilities regarding morality. 1) The axiomatic system of the universe contains axioms pertaining to morality (in addition to the obvious: logic, Maxwell’s laws, etc.) 2) The axiomatic system of the universe doesn’t contain moral axioms, but there is a moral system that is common to all people. Or, 3) The universe doesn’t contain moral axioms, and there is no moral system common to all people.

Situation 1 is Moral Objectivity. Situations 2 and 3 are Moral Subjectivity. It’s pretty clear that the right answer is situation 2.5 – that is, the universe doesn’t contain moral axioms (there’s certainly no evidence that it does, and thanks to Occam’s Razor, that means this is our primary expectation). Morality is not common among all people, obviously - but there are underlying moral principles that are common to NEARLY ALL people. (Reciprocation, for example.)

Unless you’re very stupid, you agree that different people disagree on smaller and larger moral ideas. This brings up the eternal problem of, if there is objective morality, how do you know what it is? In the end, all you can fall back on are statements like “but it’s so obviously right (or wrong)!” and “everyone believes X” and so forth. There is no way to demonstrate that morality either follows from logic (like induction, it clearly doesn’t), or that the universe is the one who cares about morality, rather than just humans.

Look, I’m going to be straight up, here. What you say is a very nice idea. But it’s obviously wrong - at least, it’s obviously wrong in the context of having a functional society. In fact, in the sense of having a functional society, it’s very stupid. This is simply because if we honor equally the rights of all to life etc., we are committing ourselves to jail everyone equally. Presumably this means we don’t put anyone in jail, or kill anyone, or otherwise censure anyone who is willing to cause us harm. This would be great if everyone were nice, but they aren’t. There are some very violent and scary and self-absorbed people out there, who murder on a whim, or in order to get your material possessions. What do you want to do to the psychopaths, give them a stern talking-to? Even if they’re not beyond help, you need to keep them in a place where they can’t hurt others while they’re getting the help. And some are truly beyond help.

What you say is simple and consistent. But it’s also wrong. No one would ever want to live in a society where we followed your rules.

And again, it’s pretty clear that you’re taking ideas that are dear to you, and claiming that they’re objectively correct simply because they’re dear to you. This may seem like a small thing, but it’s an enormously negative personality trait, especially as a philosopher, but just as much so as a member of a voting society in which officials elected by an ignorant or dogmatic populace will become the bumbling policemen of the world.

No, that’s entirely incorrect. Science is never “without foundation”, because it’s a method, not an assertion. God is an assertion, not a method, and so can be (and is) without foundation.

This is terribly terribly wrong. You haven’t really thought this through, have you? You seem to be saying 50/50 just because neither can be shown to be false, and both “explain” the same thing. What if I throw in a third option: that there’s a big cosmic weasel who farted, and this universe is his fart. The weasel isn’t any sort of god, he’s just a weasel. Does that mean we say big bang = 1/3, god = 1/3, weasel = 1/3? No, of course not, because the weasel idea is ridiculous. But so is the god idea. It just doesn’t seem ridiculous to you because you were raised in a world that accepts such idiocy as religion. Santa may be “on this side of the big bang”, but if I start to claim that Santa created the universe, he isn’t any more. Is that suddenly a valid idea? Also, what do you mean by a “revealed” religion? That sounds suspiciously like a religious bias there.

Good for you! A true statement.

I’m going to cut off here, because this is pretty much a waste of my time. Hopefully you agree you need to re-think these things, because many of your ideas are obviously the product of understandable instinct, coupled with a complete lack of rigorous analysis. 50/50 is a nice idea, but obviously wrong. Respecting human rights for all, equally, no matter what, sounds like a nice idea, but is actually a horrible one. You need to attack your own ideas and see if they stand up to honest criticism, and not boldly press forward with them just because they seem nice. That’s what President Bush does, and he’s done a good job of messing up the world as a result. Even if you don’t have the weight of millions on your shoulders, hopefully intellectual honesty presses down a little bit.

Wonderer,

Wow. That’s stupid.

:D/

utilitarianism is the process of assigning values to complete unknowns… stupid indeed.

expected utility my ass…

not to offend or pester but i think you went wrong as soon as you said “should” after the word “everyone”… :wink:

So… you’re saying that I shouldn’t use the word should?

Most people say they want a system like this, without a double standard–many, then, hypocritically exclude themselves. Then there are those who acquire the power to establish the double standard for themselves and their buds.

Where in the hell did you get the idea I want to let psychopaths, or any rights violator (criminal), go?

Should believe nothing…

Hi Twiffy,

  1. It would be worth it to whom? Would you be willing to kill yourself for the good of the aggregate?
  2. If the consequence includes the means, as you say, then consequentialism is a non-theory.

Also,

  3. I don’t think you are using Moral Relativism in its proper sense. Moral Relativism refers to moral relativity within the community of morally relevant agents. There is no sense in talking about moral relativity outside the community of morally relevant agents. If morally relevant agents did not exist, the universe would be wholly amoral, not morally relative. Therefore, if you allow me to reinterpret your beliefs, you do not believe in Moral Relativism.

Twiffy - all in all, a nice enough post. I’m not sure I want to respond to all of it.

Since it’s been commented on, I will say that every moral system relies upon Utilitarian principles to some extent, even if the foundational idea isn’t Utilitarian (although a case can be made that a utilitarian element is basic to morality in general). The example that I use is that every day care provider is a Utilitarian. Think about it for five minutes. There is just no other way.

Logic is a self-referential system. There is no reason not to “believe in it”.

If Pragmatism means, ultimately, that you must be able to live your philosophy, then I agree. But many philosophers don’t apply their philosophical ideas directly to their own lives (certainly many posters here don’t) - so I am not sure this one is a “must”.

I think you could probably live your entire life happily and successfully without an opinion about evolution at all. I have an opinion about it, but I’m not sure that I need one.

There is nothing to believe in, nor is there need to…

[b]In this world, there is nothing to believe![/b]

Fuse, thanks for your comments.

It would be worth it to the group. Or in more precise terminology, it would have a better utilitarian outcome than not killing the one.

Yes, I would absolutely kill myself for the good of the aggregate. I certainly wouldn’t WANT to. And of course, never having been in that situation I couldn’t swear that I really would. But it seems to me that I would, and I certainly hope I would and believe that it would be the right thing for me to do. (Depending on the aggregate, of course. I wouldn’t kill myself to save 100 murderers.)

Let me clarify. The consequences don’t include the motivation of the means, or the actions taken in the means, just the consequences thereof.

Like I said before, if your goal is world peace, you can’t kill 9/10 of all people to achieve that, because (9/10 dead + world peace) is a shitty consequence, and worse than (everyone still alive, no world peace). However, if you could kill one person to achieve world peace, it would definitely be the right thing to do (assuming that was the only avenue to WP). So let’s say we’re going to go ahead and kill the one person. It doesn’t matter who kills the one person, or why they kill him. It could be a moral person, doing an act that is emotionally horrible for him, just because he knows it’s right. Or it could be a vicious murderer who has been let out of jail just so he could do this one thing, and then would get put back in jail. The two individuals would have completely different reasons, but because just the consequences matter, you couldn’t condemn the whole scenario for the motives of the murderer.

In short, consequentialism says, it’s better to have bad people doing good things, than good people doing bad things.

Instinctively I use the Wikipedia definition:

In this sense, morality is relative to the individual, because there is no moral truth inherent in the universe. I believe that that position is trivially true. Additionally, I believe that there are very strong evolved moral common threads in virtually all humans, and that both the strength of these threads, and their prevalence, are what allows some people to be “self-deluded” as it were.

Faust,

Thanks for your comments.

Agreed on both counts (although as an irrelevant note, certain forms of logic are self-referential and certain forms are not. This is the subject matter of Gödel’s Incompleteness Theorem, actually.)

If Pragmatism means, ultimately, that you must be able to live your philosophy, then I agree. But many philosophers don't apply their philosophical ideas directly to their own lives (certainly many posters here don't) - so I am not sure this one is a "must".

I should have clarified this - when I said pragmatism, I meant the notion that, if it works, use it. This is essentially how I justify adopting the methods of science. It’s true that you can’t prove induction - but it certainly seems to work, so keep using it, per pragmatism. (I don’t think this is circular, although it may sound like it.)

I agree, and I feel the same way about god. The reason I think they’re important to believe is that they’re “obvious truths” when analyzed just with the principles of science & Occam’s Razor, and so to NOT believe them means to not value those scientific & philosophical principles as highly as whatever bias leads one to religion. This results in a person who believes based on emotional or intuitive reasons, and not one who makes decisions based on fact. These are the sort of people who voted for Bush. Bush is one of these people, and now he’s pushing faith-based initiatives, birth control funding only for abstinence groups, and other ideas that may (or may not) sound good, but that scientific studies show to be ineffective or actively harmful.

Evolution and god aren’t that big a deal, really - but being a reasonable person very much is, and if you’re reasonable, you should believe in evolution and not in god, not as absolutes, but as the most likely thing to believe given the currently available evidence.

Interesting. Of course, to be consistent with this form of the theory of utilitarianism you must be willing to do anything so long as it contributes to the greater good. This means that each individual has value only in so far as s/he can contribute to the overall happiness. Ok.

Yes, much clearer. And after rereading your earlier post I assumed that this is what you had meant - though it was ambiguous. Needless to say, I agree with this,

but not this:

? I thought consequentialism meant: It doesn’t matter who does what action, so long as the consequences are good.

I have browsed Wikipedia, too, and I still do not understand why you use moral relativism in such a way. Is moral relativism not a prescriptive term? Does it not make a claim about the way things ought to be?

Yes, exactly, you are right. My statement was a particular example of consequentialism, used to illustrate that consequentialism ignores the morality or the motivation of the individual doing the action.

No, it is a descriptive term - it makes a claim about the way things are. (It can be used in your way, but more among sociologists and in political conversations, than in philosophy.) Those who are moral relativists believe that morality is not a part of the universe, it is just a part of human behavior. Moral objectivists, or absolutists, on the other hand, believe that the same way the universe has laws of physics, it also has its own laws of morality.

Just a note - consequentialism can mean that it doesn’t matter who performs the act, but that is an incomplete definition. Kantian morality also denies that it matters who is doing the acting, but is not consequentialist. Consequentialism means that it is the results of the action that count, as opposed to, for instance, strictly following a rule, regardless of consequences. The Ten Commandments is an example of “rule” morality. “Do this because it is the will of God that you do this, no matter what.” That the consequences are good is defined by the rule.

Consequentialism is a flavor - it can be present in many types of moral systems.

Twiffy,

I’ve always agreed with the utilitarian doctrine. I’m loath to see that you would downplay human rights so harshly, but I think I understand where you’re coming from. You did point out that human rights advocates often take those rights and raise them up to the level of absolute universal principles, which I think is as silly as you do, but by and large human rights work very well with the utilitarian program. I’m sure you’ll agree that, while not unconditionally congruent with utilitarianism, human rights are tools that are usually used to defend human happiness, prosperity, and well-being - all OK by utilitarian standards. But as I said, you probably agree with this already.

As for the relativism/utilitarian issue, my solution to that is as follows (use it in your own arguments if you like): Yes, all moral belief systems are relative to the culture or individual that holds them, utilitarianism included, but utilitarianism is still unique in that it is the “original” moral system, and anyone who adopts it is going back to the foundations of morality. What I mean by “original” is that it is the primordial understanding from which we as infants learn to conceive of “morality”. We begin by experiencing the “goodness” in our own pleasures and the “badness” in our pains. We have not understood “morality” from this yet. We come to understand morality, at least a very rudimentary version of it, when we first get the notion of something being “good” without our feeling pleasure, or of something being “bad” without our feeling pain. This most likely happens when we get a stern “No!” from our parents when we do something wrong, like hitting our younger sister, even though it gives us pleasure to do so. When this happens, the concept of good and bad suddenly “splits” from our experiences of pleasure and pain, and thus the seed that eventually grows into the sophisticated “adult” concept of morality is born.

But having been split from our sensual hedonic experiences means that the concept of morality suddenly takes on a more abstract - and distinct (this is key) - meaning from that of pleasure and pain. So we end up with two concepts: morality and what it is to feel pleasure/pain. Morality, now severed from its hedonic roots, can go on to take on any meaning whatsoever, including that which God commands, or human rights, or whatever the law says, etc. Morality is now a loose concept in need of some grounding. What the utilitarian does is he finds that grounding by bringing it back to its original partnership with the experiences of pain and pleasure, assured that because this is what it was supposed to mean all along, this is where it ought to stay.