Hi everyone!
I posted something similar to this a while ago, but I wanted fresh answers, and possibly to attract the attention of any newbies.
I’m going to list some things I think everyone should believe. Not to believe any one of these things is, in my view, to be flawed.
- Logic, and all its consequences (namely, math).
- Utilitarianism. We can nit-pick about details, but in the end, if you could choose whatever optimizes quality of life for all organisms that HAVE quality of life, over all time, and your other choice was something less than the optimum, why would you ever choose less? Sometimes we know it is right to sacrifice immediate happiness for the sake of something more important (can’t buy the new car because I need to pay the mortgage), but that is only a local sacrifice in happiness. We justify it because our quality of life is higher overall – in the end, we’re happier with the house and without the new car than vice versa, even if we’re less happy on the day we had to make the choice.
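To make the house-versus-car point concrete, here’s a toy sketch in Python (every number is invented for illustration): score each choice by summing quality of life over the whole time horizon, not just the day of the decision.

```python
# Toy numbers only: happiness per year under each choice.
car_now  = [9, 4, 3, 3, 3, 3, 3, 3]   # joy today, then years of mortgage stress
mortgage = [4, 6, 6, 6, 6, 6, 6, 6]   # a dull day now, a secure home afterwards

# The locally worse choice wins once you sum over all time.
print(sum(car_now))    # 31
print(sum(mortgage))   # 46
```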
Those who don’t buy Utilitarianism, in my experience, are usually either
a) stupidly concerned with other moral absolutes, like human rights. Why on god’s stinky polluted earth would you believe absolutely in human rights over something like utilitarianism? Idiotic. See further below.
b) hacking away at straw men, or using incorrect examples to justify the dismissal. (Let’s say Bob is a happy, kind, brilliant person dying of kidney failure, and Joe is a miserable self-absorbed stupid jerk who walks into the hospital because he has a small cold. Should the hospital staff kill Joe and give his kidneys to Bob? Utilitarianism seems to say yes! – No it doesn’t, obviously: a hospital known to harvest its walk-in patients destroys everyone’s trust in hospitals, and that loss of quality of life dwarfs whatever Bob gains. It pains me when people dismiss good ideas because they’re too stupid or impatient to think them through correctly.)
c) less-stupidly concerned with other moral absolutes. Morality is obviously relative, so you can believe that acting selfishly is morally right. It’s pretty much impossible to argue against this, except to note that few people really believe it. Most people who propound selfishness don’t believe that it’s right in itself, but rather that selfishness “works” – or to put it in utilitarian terms, that if each individual acts in his own best interest, everyone will be happiest on average. Most selfish-theorists, if you really get into it, don’t believe in selfishness axiomatically, which is good since that would be impossible to disprove. Instead they believe, essentially, that selfishness is the optimal utilitarian strategy. This is trivially false, as the prisoner’s dilemma exemplifies (see the sketch just below), as game theory shows more generally, and as the very existence of the human tendency towards friendship attests.
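Here’s a minimal sketch of that claim, using the textbook prisoner’s dilemma payoffs (the specific numbers are just the conventional ones; only their ordering matters):

```python
# Payoffs indexed by (my move, your move): C = cooperate, D = defect.
# Each entry is (my payoff, your payoff); textbook values, for illustration.
PAYOFF = {
    ("C", "C"): (3, 3),  # mutual cooperation
    ("C", "D"): (0, 5),  # I get exploited
    ("D", "C"): (5, 0),  # I exploit you
    ("D", "D"): (1, 1),  # mutual defection
}

# Selfish play: whatever you do, defecting pays me more individually...
for your_move in ("C", "D"):
    assert PAYOFF[("D", your_move)][0] > PAYOFF[("C", your_move)][0]

# ...so two selfish players land on (D, D), total utility 2,
# even though (C, C) would have produced total utility 6.
print(sum(PAYOFF[("D", "D")]), "<", sum(PAYOFF[("C", "C")]))
```

Selfishness is individually dominant and collectively self-defeating, which is exactly the gap between “selfishness is right” and “selfishness maximizes utility”.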
If someone axiomatically believes in selfishness (or almost any moral theory), you can’t really argue against it, because of course morality is subjective. (See further below.) Why, then, do I bash those who advocate the moral absolute of human rights? Because all such views are not only poorly thought out, but they are almost always held in an inconsistent manner. Why is it wrong to kidnap someone? Because they have a right to freedom. Oh really - well then, why is it ok (sometimes) to put people in jail? Because they gave up their right to freedom by committing a heinous crime. Under what conditions does an action result in you giving up your “fundamental right”? Well, that’s a difficult question, but when you really get down to it, the answer shockingly ends up being “whenever it causes more good than the alternative”. If you argue Socratically with someone who “believes in human rights”, what you usually find is a dumb utilitarian.
- Pragmatism / Consequentialism. Per utilitarianism, only the sum of quality of life (over all time) matters – what you do to get it doesn’t matter. If you do something “horrible” in an attempt to be utilitarian, and it works, everyone is happier, so good for you. If it fails, people are less happy, and you failed. Lying, stealing, and murder are only wrong because they lower quality of life on average. If they made everyone happier, they would become morally right. Thus, the only thing that has moral value is consequences. Means statistically influence consequences (killing usually reduces happiness), but not always – and thus in an absolute sense, means are irrelevant. This is why, if you had to murder one innocent in order to save a hundred, it would be morally correct to murder, all other factors aside (e.g., if the one guy is Einstein and the hundred are all violent rapists, of course it’s better to let the hundred die).
Most people I’ve talked to who reject consequentialism are again too focused on the immediate, and love them some straw men. OK, our goal is world peace. But what if, to obtain world peace, we had to kill 9/10 of all people? Surely those consequences don’t justify the means. Of course, this argument is a straw man, and stupid, because the means – killing 9/10 of all people – are always INCLUDED in the consequences. We don’t want (world peace and mass murder), we just want world peace. If the only way to get world peace was to kill 9/10 of all people, it would be a bad consequence. If the only way to get world peace was to murder one innocent, lovely human being, on the other hand, it would totally be worth it, and I’d pull the trigger myself. (The toy accounting below makes this concrete.)
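A toy version of that accounting in Python (all the numbers are invented; the point is only that you score the entire resulting world, means included, never “world peace” in isolation):

```python
# Score a whole outcome-world by total quality of life, means included.
def total_welfare(world):
    return world["people_alive"] * world["avg_quality_of_life"]

status_quo         = {"people_alive": 8_000_000_000, "avg_quality_of_life": 5.0}
peace_kill_9_in_10 = {"people_alive":   800_000_000, "avg_quality_of_life": 8.0}
peace_kill_one     = {"people_alive": 7_999_999_999, "avg_quality_of_life": 8.0}

for name, world in [("status quo", status_quo),
                    ("peace via mass murder", peace_kill_9_in_10),
                    ("peace via one murder", peace_kill_one)]:
    print(f"{name}: {total_welfare(world):.3g}")

# Peace via mass murder scores *below* the status quo (6.4e9 vs 4e10);
# peace at the cost of one life scores far above it (6.4e10).
```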
- Science / Empiricism / Occam’s Razor. Per pragmatism and consequentialism, we want a method of operating that maximizes our effectiveness. Our effectiveness depends, statistically, on our ability to predict consequences, which requires an assumption that consequences are predictable. Thus we run into “the inductive fallacy” – only, since we’re not stupid and not claiming that induction can be demonstrated from logic alone, it isn’t a fallacy, but rather a consequence of an augmented axiomatic system. Logic → induction is false, but (logic + utilitarianism) → induction seems to be true. And finally we throw in Occam’s Razor as a way to pick the best working theory given the available data. Occam’s Razor is often misunderstood as a blanket preference for simpler theories over more complicated ones. What it actually does is provide a way to choose between two theories that both explain all observed data: when two theories are equal in terms of matching what’s already been observed, pick the one that requires fewer additional assumptions. If you’re trying to explain the fossil record and you’re faced with two theories, one of which is evolution, and the other of which is evolution together with the tinkering of aliens, clearly you pick the former. The latter is completely possible, but it doesn’t explain anything that the first one didn’t explain, and it involves an extra assumption, so kick it out. (A small model-selection sketch follows below.)
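This is roughly how statisticians operationalize the razor. A minimal sketch, assuming truly linear data: a 6-parameter polynomial fits the observations about as well as a 2-parameter line, so a complexity penalty (a crude BIC-style score here) kicks the extra assumptions out:

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0, 10, 50)
y = 2.0 * x + 1.0 + rng.normal(0, 0.5, size=x.size)  # linear "reality" + noise

line   = np.polyfit(x, y, 1)   # theory A: 2 parameters
wiggle = np.polyfit(x, y, 5)   # theory B: 6 parameters

def rss(coeffs):               # residual sum of squares: how well we match the data
    return float(np.sum((np.polyval(coeffs, x) - y) ** 2))

def score(coeffs):             # BIC-flavored: fit quality plus a price per assumption
    n, k = x.size, len(coeffs)
    return n * np.log(rss(coeffs) / n) + k * np.log(n)  # lower is better

# Both theories match the observed data almost equally well,
# so the penalty term decides, and the line should win.
print("line:", round(score(line), 1), " wiggly:", round(score(wiggle), 1))
```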
Who rejects these? Some people love to focus on the fact that you can’t prove induction from logic. This is completely stupid. Of course you can’t prove induction from logic. You can’t prove Maxwell’s equations using logic either, but that doesn’t mean they aren’t true. You can’t prove apples exist using logic, but I ate one for lunch today, so I’m pretty sure it was there. This universe is not generated by logic alone – it has additional axioms. That should be obvious.
- Evolution. Seriously, people.
- Atheism and associated. There is no god, there is no afterlife, there is no soul, and there is no fate except possibly via determinism, which doesn’t really count. This follows trivially from the science/empiricism point above, simply thanks to Occam’s Razor. The existence of God doesn’t explain anything that science can’t (remember that the lack of an existing explanation is in no way a demonstration of an inability to explain, which in itself would in no way be indicative of the need for a deus ex machina that can violate otherwise inviolable physical laws), and it requires an additional assumption, so we reject it. That simple.
Who rejects atheism? Two groups: idiots, and those who depend on emotions or instinct to guide their beliefs. True, there is no disproof of god – but there’s no good reason to believe in god (or Santa, or the tooth fairy), so smart, rational people don’t do it. The end.
- Moral Relativism. This follows from science & empiricism, together with logic. We know the universe operates according to logic, and clearly the universe has extra axioms, since you can’t derive the existence of atoms from logic alone. However, since evolution and evolutionary psychology (read “The Moral Animal” by Robert Wright for a fantastic book on the subject) explain morality very thoroughly, up to and including our very strong belief in its absoluteness, there is no need (per Occam’s Razor) to ascribe morality, or the actions of humans, to the axiom set of the universe. The universe doesn’t care what humans do. Only humans care. Morality is a human axiom, not a universal one.
Who doesn’t believe this? People who can’t get past the fact that they really, REALLY believe that some things are right and wrong. Not a very good trait for would-be philosophers.
Also, why would I list “utilitarianism” as the primary thing to believe if I think morality is relative? Simple. The universe doesn’t give a damn about morality. That’s the sense in which morality is relative. But to the human species, morality is not relative. The idea that we’re a blank slate is a stupid cultural bias (one that, amusingly, claims that morality is just another stupid cultural bias). Evolutionary psychology and twin studies have shown the existence and prevalence of moral absolutes (along with many other cultural absolutes). Morality, along with sex drive, vision mechanisms, competition for status, and a huge host of other factors, is genetically pre-programmed into each and every person who isn’t brain damaged.

Now, I don’t think that utilitarianism is the “human moral system”. Not only is there no single human moral system (just because we’re morally pre-programmed doesn’t mean that ALL moral beliefs are pre-programmed; just some), but if there were one, it wouldn’t be utilitarianism. In fact, one of the strongest human moral absolutes is reciprocation, or tit for tat. Utilitarianism, on the other hand, sometimes advocates screwing over an individual to benefit the whole group, which is very much against the spirit of reciprocation.

But one of the absolutes of human nature is that there are ways of feeling that we value. We value happiness (most of us, anyway). We value satisfaction in accomplishment; we value feelings of peace and serenity. Whether you lump these things under the term “happiness” or refer to them with a multitude of terms, they are the things that drive us, and we want those feelings. By definition, anything that gives our lives meaning is encompassed under this list of feelings. Everything that we consider human rights, everything that we consider good or beautiful, comes from these feelings. To live, to be free, to pursue happiness. To create, to love, to wonder. To be dominant and successful and respected. Even the darker things we enjoy are still encompassed in these feelings. We consider these our “darker” side (rightfully) only because they give us enjoyment at the expense of the well-being of others, or else in conflict with less-well-founded societal norms.
So these things are what we value, by definition. Now imagine a world with only one person. What would be wrong for this person to do? He clearly has no obligation to other people, since there aren’t any. (Let’s forget about non-human animals and suppose he has, somehow, unlimited food.) He doesn’t have to worry about lying or stealing or killing. His only moral duty seems to be to do whatever the hell he wants, because that’s how he maximizes his expected quality of life. (Go him.)
If we then throw another person into the equation, what has changed? The dynamics are now complicated… person 2 may wish to kill person 1, but probably shouldn’t. But these are only the DYNAMICS that have changed. What has changed, and become more complicated, is HOW to maximize their total quality of life. The idea that they should still maximize the total available quality of life remains unchanged. (To use a semi-crappy analogy, think of physics. We can solve the equations for a planet orbiting a star easily, and use them to predict eclipses to incredible accuracy. But throw in a second planet, and we can no longer solve the equations exactly – the three-body problem has no general closed-form solution. Yet it still operates according to the same law; it’s just the exact dynamics of the situation that are difficult. A quick numerical sketch follows below.)
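To underline the analogy, here’s a minimal sketch (made-up masses and orbits, units chosen so G = 1): the same gravitational law runs fine for a star plus two planets; we just have to simulate it step by step instead of solving it in closed form.

```python
import numpy as np

G = 1.0
mass = np.array([1.0, 1e-3, 1e-3])                      # star + two small planets
pos = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.5]])    # starting positions
vel = np.array([[0.0, 0.0], [0.0, 1.0], [-0.82, 0.0]])  # roughly circular orbits

def accelerations(pos):
    """Newtonian gravity, pairwise: the one law that never changed."""
    acc = np.zeros_like(pos)
    for i in range(len(mass)):
        for j in range(len(mass)):
            if i != j:
                r = pos[j] - pos[i]
                acc[i] += G * mass[j] * r / np.linalg.norm(r) ** 3
    return acc

dt = 1e-3
for _ in range(50_000):             # leapfrog (velocity Verlet) integration
    vel += 0.5 * dt * accelerations(pos)
    pos += dt * vel
    vel += 0.5 * dt * accelerations(pos)

print(pos)  # state at t = 50: never "solved", just faithfully simulated
```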
(Some of you may have noticed that I did not at all prove utilitarianism - I just made an argument for utilitarianism stemming from some scientific fact together with an implicit appeal to intuitive morality. This is the best I – or indeed anyone – can do, since morality is, after all, relative. If I could do better, I would be actually PROVING utilitarianism, which would imply that morality was not relative after all. Nonetheless, I think I’ve made a pretty good case.)
These are things I think everyone should believe. There are many more, but these are a start. Opinions?