An Algorithm for Axiom Identification

There are objective and subjective truths - this we know. Barring certain sorts of non-disprovable-yet-not-remotely-useful arguments, we can all agree that certain things are objectively true: trees exist, for example (at least for right now…), and 1+1=2 will always be true. But there are things that are subjectively true - that is, true for a specific individual - maybe even for all individuals - yet not true external to individuals, not true in a way inherent in the physical universe. For example, I hate sweet relish. It is gross. In fact, I find it so repulsive that I secretly like to entertain the thought that sweet relish is OBJECTIVELY disgusting. Nonetheless, the fact remains that it is only subjectively disgusting - it is disgusting to me, but not to some others. Further, unlike with people who disbelieve that 1+1=2, a like or dislike of sweet relish cannot be considered in any meaningful way to be “wrong” - it is simply a difference.

Morality is well known to fall into the latter category - at least in a strong probabilistic sense. That is to say, it is not certain that morality is subjective, but it is close enough to certain that it doesn’t make a huge difference. (For example, it is certain in the same manner that Santa / God / etc. do not exist.) If you disagree with the subjectivity of morality, you will likely not appreciate the rest of this article - but even still, you will certainly agree that people have DIFFERENT moral systems, even if you believe that there is one correct such system.

So the question is - what IS my moral system? Or yours? Intuitively, it seems it should be easy enough to answer, but often it is not. If I ask you, “is killing wrong?”, you may well answer “yes” - but then I come up with an example involving self-defense, and so you agree that sometimes killing is acceptable. So then, perhaps you argue that murder is wrong. What is murder? Perhaps murder is the taking of a life not in the immediate and necessary defense of oneself or others. Killing a random person for no good reason would be murder - so would killing someone who was coming at you to punch you in the face, or someone you knew would try to kill you in the future. Then I could perhaps construct an argument involving going back in time and seeing baby Hitler, and you may or may not agree that murder can sometimes be acceptable.

The point here is this: all our moral beliefs can be simplified down to a collection of rules that are ALWAYS true. It may be the case that some of our moral systems are so complicated that these rules look like “in the situation where there are 4 people, 3 of whom are over 50 years of age and have no living relatives, and one of whom is a young, healthy doctor with a wife and 3 infant children, it is better to kill the 3 than the 1”. However, it is probably much more common for one’s moral rules to be expressible in a more succinct fashion. Some of us, for example, may believe that killing, or murder, IS always wrong, regardless of whatever circumstances may surround the potential act. This would be such a rule.

Such rules - rules that are ALWAYS true in the system of our beliefs - are vastly important. They are, in a very real sense, the AXIOMS of our belief systems. They are what we suppose to be true on a fundamental level, and they generate ALL of our situational moral beliefs. Identifying them can help us analytically assess the morality of a situation when intuition fails us. It can also help us resolve conflicts. When we are capable of understanding that we’re conflicted about the morality of abortion BECAUSE on one hand we value the existence of life, but on another hand we value the quality of life, and the two can sometimes conflict, we can then make a rational meta-evaluation: which of the two axioms do we hold more dear? If we choose quality of life to be more important than existence of life (as would be my personal preference), we can then throw out the existence of life as an axiom and, if necessary, replace it with something similar that doesn’t generate contradictions - and thus be one step closer to moral consistency (should that state be something desirable within your personal moral system, as it is within mine).

Ok - so hopefully I have you convinced that being able to identify one’s moral axioms is potentially useful. But how do we do that?

Simple. Here is an algorithm that is fairly intuitive, although I don’t mean to say that IMPLEMENTING it will be trivial. But I think you’ll all agree that it is an effective algorithm for finding one’s moral axioms.

  1. Consider a moral issue, or a situation that can give rise to a moral resolution. Pick something obvious to start - maybe imagine a situation in which you can choose to stab a random passer-by, or not.

  2. Identify your intuitive moral reaction to the issue or situation. Intuitively, the correct answer is to not stab the passer-by.

  3. Ask yourself why that answer was the correct one. Come up with a rule based on your answer - and make sure that rule is PRECISE. For example, you may answer that it was wrong to stab the passer-by because unwarranted violence is wrong. This would be a bad rule, because “unwarranted” is very vague. Define your terms so that the meaning would be clear to anyone reading the rule. So perhaps “violence is wrong when neither your life nor the lives of others immediately depend upon it.” This is acceptable, although it can be made more precise.

  4. Ask yourself if the rule you have generated is always true, or ever has any exceptions. Don’t let yourself be taken in by the rule sounding good - in fact, strive to attack the rule. Think of extreme unlikely situations that may be counter-examples. A good possible counter-example to the above rule would be the concept of baby Hitler: you go back in time, encounter baby Hitler, and can kill him. Would this be morally good or bad? People will differ on this, but let us suppose you say that the answer is “you should kill baby Hitler”.

  5. If you cannot find an exception after considerable effort, congratulations - you’ve found an axiom! If this axiom “completes” your system - if it enables you to answer all moral questions - you are done. If not, go back up to step (1) and come up with a different moral situation that you can’t answer using your existing rules.

If you do find an exception, this means that the rule under consideration is NOT an axiom of your moral system, but rather is an APPROXIMATE TRUTH - it is a rule that is often, but not always, true. Such rules are useful for everyday life, but it is the axioms we are after, so that rule must get chucked.

  6. Now - and this is very important - you must consider WHAT IT IS that makes your exception true. Perhaps you’d kill Hitler because it would mean saving lives, even though he was innocent as a baby, and hadn’t yet made the choice to kill. If this is what you think, be more analytic. This implies that whether or not he was innocent, and whether or not he had made the choice to kill, were irrelevant, because killing him would still save lives. This may imply that you value results over notions of moral culpability (something which I vehemently endorse).

  7. Next, generate a modified rule based on careful consideration of the exception. In this situation, the modified rule may be something like “any action that results in the maximum number of lives preserved is morally good.” (This is an excellent rule in terms of how precisely it is stated, and how easy it would be to apply thanks to its precision. Even if you disagree with it - and I do disagree with it - its precision makes it a much more manageable rule.) At this point, go back to step 4, and try to find an exception to this rule.
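The refinement loop above can even be sketched in code. This is a toy Python model, not anything from the post itself: the situation dictionaries, the verdicts, and the refinement function are all my own illustrative inventions, chosen to mirror the stab-a-passer-by and baby-Hitler examples.

```python
def find_axiom(candidate_rule, test_cases, refine):
    """Refine candidate_rule until it survives every test case (steps 4-7)."""
    rule = candidate_rule
    while True:
        # Step 4: attack the rule - look for cases where it contradicts intuition.
        exceptions = [case for case in test_cases
                      if rule(case) != case["verdict"]]
        if not exceptions:
            return rule                      # Step 5: no exception found - an axiom!
        rule = refine(rule, exceptions[0])   # Steps 6-7: study the exception, refine.

# Hypothetical test cases: each pairs a situation with an intuitive verdict.
cases = [
    {"kills": True,  "saves_lives": 0,     "verdict": "bad"},   # stab a passer-by
    {"kills": True,  "saves_lives": 10**7, "verdict": "good"},  # baby Hitler
    {"kills": False, "saves_lives": 0,     "verdict": "good"},  # do nothing
]

# Step 3's first candidate rule: "killing is always wrong."
naive = lambda case: "bad" if case["kills"] else "good"

def refine(rule, counterexample):
    # Steps 6-7: the counterexample shows results matter, so weigh lives saved.
    return lambda case: ("good" if case["saves_lives"] > 0 or not case["kills"]
                         else "bad")

axiom = find_axiom(naive, cases, refine)
# The refined rule now agrees with intuition on every case in the toy set.
```

Of course, the hard part the post describes - actually generating counterexamples and refinements - is exactly what this sketch punts on by taking `test_cases` and `refine` as given; the code only captures the shape of the loop.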


Fin


I'm curious to hear thoughts about this.

OK: so here’s my situation. You’re the judge at a trial being held whose question is the status of morality: is it mere conformity and obedience to custom-- or is it a deeper impulse of humanity towards self-overcoming, creativity and compassion? So it’s not your job to decide on the fitness of any particular moral axiom. You are judging morality itself.

But of course, because there are so many different kinds of moral axioms, and because morality itself is a fluid thing and means different things coming from different people, the question cannot be resolved into a simple “guilty” or “innocent” decision. For doubtless morality is both. As for my intuitive judgment, I would side with the court and hold the plaintiff guilty, but only because in this unusual case the victim is the creator of the laws! Since rules and laws themselves are customs, they become moral axioms – even when they have outlasted their usefulness. So morality is always guilty of being out of date; but the possibility of a new morality is purely innocent, though from these virtual beginnings quickly it becomes a concrete and moral axiom, whose violations shall be punished and perhaps severely.

So the final judgment or ruling would have to be that morality itself must be sublimated. Not overcome; not synthesized; not subtracted from the situation. The ethical relation is the relation between human beings, it is language, the dialogue of the attorneys: standing up and face to face with another person, even hearing my name called, and I am already called to justice, to my true personal and ethical responsibility. We cannot even really avoid this: we are social beings at the core, and even at our most negative we still aim our actions at another, even (and perhaps most mysteriously and importantly) when the other is ourselves - like when the law holds itself on trial.

We of course would like to know whether the decision of the court is true. But if the court of morality itself is sublimated, raised to a higher level - its decision dissolves, like when a case gets overturned by a higher court. A remainder of the question resists dissolution: is sublimation always “true”? And it isn’t always; in a sense it’s a simulation, a ‘mere’ change in scope or a ‘shift’ in style, as in moving from a deeper to a lighter idea - but I’m not sure we should try to reduce the good altogether to truth. This is, after all, just a misreading of Plato. Just because the good is eternal, infinite, beyond being - it is not necessarily always the “true.” The unity of the good and the true is a subtle sort of sophistry. Not because the truth isn’t ever good - but it’s clear the truth isn’t always good! Lies can also sometimes be good: it’s almost a question of style, even of taste. Who’s to say, really, that a lie wouldn’t in some situations be objectively better than the truth? No one really “knows” the truth about the future, whether this or that action will have a certain and definite effect…

Your sixth question is good, because in my case it is about the exception to morality. In other words: if every social relation was “moral” it would hardly be interesting to observe it. So the question becomes: what kinds of social relations are immoral? This functions as an interesting sort of ‘axiom of sublimation’: what from the past shall be held in contempt, what shall be raised up high and revered? It is not necessarily as you imply that we should judge a moral axiom by whether it ‘produces results’ - but rather by how it ‘works,’ what makes it function the way it does.

Of course, what it produces is important, but it is a side effect of the actual process of the machine in its interaction with many other machines. The question of morality is one of social engineering: how to construct the spaces for a new way of life, a new utopia. It’s an imagination-game, played amongst the soft fabric of dreams: but this is already far from morality, which again WORKS, whether or NOT it produces anything!

An interesting algorithm.

Am I right in thinking that what you’re seeking is a kind of collective “know thyself” - the closest we can get to an absolute morality (something you acknowledge doesn’t exist)? If so, then you’d better hope the gene pool doesn’t deteriorate much, lest the human condition change and make the carefully discovered moral code out of date!

I thought this was a v. interesting post. As I was reading, the question which began to fizz in my head was the purpose of clarifying one’s own ‘subjective morality.’ To what end? For what purpose? The obvious answer was to know how to act in a given situation: clarify one’s own principles in order to lead an authentic life, in good faith, as existentialists would put it. This would be v. close to a self-determining, willful, life. And do you know, at bottom, what I found the reason for all this to be?

Contentment. To clarify one’s own beliefs, in order to act by principle in particular situations, is done with one end in view. No, not to be “moral.” Not to be “good.” But to be content with one’s own choices, one’s own actions. To free oneself, as best as one can, of contradictions, fangs of conscience.

Maybe there is something more than that, something deeper - but I don’t see it. I thought it was an interesting insight (into my self, and perhaps others) to share.

Thanks for posting Joe, the man.

Depends on your definition. If I define ugly as the chemical response my brain sends me, in the form of the sensation of disgust I get when visually observing someone in particular’s face, it most assuredly IS reflecting reality…i.e. true.

Are you suggesting that because someone else does NOT feel that physiological response, I therefore did not?

Morality, reasonably defined, follows logic; it’s nothing deep or anything that takes a lot of brain power to understand. It’s easy as pie…an easy pie of course.

Bob defines Jill as ugly.
Sam defines Jill as not-ugly.

Jill is ugly to Bob.
Jill is not ugly to Sam.

What’s to understand here?

Jill looks in a mirror and decides she’s ugly.
Jill is ugly to Jill and Bob.
Jill is not ugly to Sam.
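The whole demonstration can even be written down as a few lines of code - a toy sketch of my own (the dictionary and function names are purely illustrative) treating “ugly” as a two-place relation between observer and subject, rather than a one-place property of the subject alone:

```python
# "Ugly" as a relation: (observer, subject) -> opinion.
# No contradiction arises, because each claim is indexed to a different observer.
opinions = {
    ("Bob",  "Jill"): True,    # Jill is ugly to Bob
    ("Sam",  "Jill"): False,   # Jill is not ugly to Sam
    ("Jill", "Jill"): True,    # Jill looks in a mirror and decides she's ugly
}

def ugly_to(observer, subject):
    """Is subject ugly, as judged by observer?"""
    return opinions[(observer, subject)]

# Both statements hold at once; only a one-place "is_ugly(subject)" would conflict.
assert ugly_to("Bob", "Jill") and not ugly_to("Sam", "Jill")
```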

What’s to question? It’s vanilla, pie, cake. Why do easy things always taste so good?

-Mach