Kant's Categorical Imperative

Uccisore asked me:

I responded quickly with:

I think one of my first issues with the CI - and perhaps Kant actually refutes this somewhere, or it is really rather a dead end in and of itself - is the feeling that it boils down to ‘what if everyone did what you do?’. My first objection is ‘they won’t’; my second is, if they start to, I will reevaluate; and last, can it be refuted - if I am correct in my assessment that I am a good person - that I may allow ‘infractions’ for myself that I would not allow others to make, because they lack the judgment I have?

In a sense perhaps it is good that some people follow simple rules - commandments, for example - since without these guidelines, and since they lack certain nuanced skills in evaluating situations, they would do horrible things. But then I may think that I, or Joe, or Group X, should be allowed to do things beyond what the rules allow, because I, or Joe, or Group X, have certain skills.

Another way to couch this is: when asked ‘what if everyone were like you?’, they are asking me as if my acts were causal on others only in a very limited way. I want to say ‘if everyone were like me, things would be a lot better and we wouldn’t need so many rules’.

Objections to this are often based on the argument that everyone - really, quite a lot of people - probably think they are exceptions, or part of a Group X that should be allowed a more flexible set of guidelines.

And then I have a problem because it is so damn heady. Am I, subject X, so split in relation to the good and the right that I must evaluate myself through a very strange thought experiment, one in which, in fact, I cannot really imagine the consequences? (IOW, I think an assumption of original sin is hidden in the trojan horse of the CI.)

Now a pro-CI person - one who doesn’t simply laugh at what seems like a merely narcissistic or facile argument made above - could say that of course there can be varying freedoms. Doctors get to cut people open and others do not. Or that really you are thinking on the level of ‘thou shalt not steal’, when the CI is better thought of as leading to relationship priorities. Perhaps. Though I believe Kant actually stuck fast, at least regarding situational exceptions. One should tell the murderer where their intended victim is, since honesty is right. But we can make a better, less ridiculous version of the CI than he did at one point in his life. But then we are moving towards a kind of consequentialism - where ends justify means - which I think was part of what Kant intended to avoid with the CI.

So I have one objection based on what the CI actually does in one’s psyche - which I think in part is that it maintains a conception of original sin and the necessity of a split in the self. I have another based on the idea that it creates a kind of lowest-common-denominator morality. And then I have consequentialist concerns.

The C.I. isn’t utilitarian. It’s not “If everybody did that, the world would be a shitty place, therefore you ought not do that.” It’s more like “If everybody did that, then the world would change in such a way that your justification for doing it wouldn’t make any sense, therefore your doing it is irrational.” So it’s not ‘if everybody stole, that would suck’, it’s ‘if everybody stole, you wouldn’t be able to profit as you do by stealing things, so your behavior is inconsistent’.

He’s got this idea that when you perform an action, you are effectively stating two things:

“This action serves some end of mine,” and

“This action is representative of the kind of thing that should happen in the society in which I live.”

When the consequence of the second is that the first is no longer true, you have CI issues.

What it’s really about is pointing out that rule-based decisions can only take you so far, in that the real context of our actions is ever changing. That’s why you need judgment - so that you don’t keep going around insisting on keeping the secrets of a murderer because you simply believe that rules about keeping secrets should be universalized, or that they even can be universalized. Am I making sense here? I think this is the gist of it. Making a rule, or a principle, that is unbending will inevitably lead to that rule failing, because the context of human actions, and the psychology relating to those actions, is ever changing; so you need judgment.

I agree with all of that. I don’t know if Kant would, but he’s dead so who cares.

He would agree, but would insist on talking in maths, like the person being complained about in that Radiohead song. He’d buzz like a fridge, and act like a detuned radio, but after a half century of analysis by a gang of “experts” it would be determined that he was in agreement.

That would be a consequentialist interpretation. I think Kant’s was more like ‘you wouldn’t want people to steal from you, so you do not want a universal freedom to steal.’ You might be a skilled or lucky thief, after all.

I more or less agree that’s what he meant, but I did raise some objections to that.

Add to this the fact that people’s abilities to judge vary - their skills at evaluating situations and other people vary - and you have one of my objections to the CI (in the OP).

Well, I’m not going to defend the C.I., I really only value it as a tool in a toolbox. That poster’s one comment briefly made me think higher of the C.I., as I found myself judging his comment according to it, which is not something I am accustomed to doing. I’m wondering if the C.I. applies to belief systems in a way that it doesn’t apply to actions. Arguing from a universalizing of actions is foolish because it’s just obvious that different people will do different things, and besides which, you can always parse your imperatives in a way to make the C.I. say whatever you want it to.

But belief systems are different. It seems to me that if you believe something because (you think) it’s true, that entails that it would be ideal if everybody believed it, on the assumption that beliefs are for truth-getting. So every belief system has an implied C.I. written into it. What then do we say about a belief system that would, admittedly, screw the world all up if it was widely adopted? Is it self-defeating by that alone?

I do think a better version of the CI can be advanced. It seems like Kant really wanted rules, and objected to exceptions even where most people would make them - honestly letting the intended murderer know where his intended victim is, honesty being a value pure unto itself.

But I more or less do not want to assume the better version (yet), and will let someone else take responsibility for it. The trick with the better version is that 1) it may get rather unwieldy, 2) who determines the exceptions or factors?, and 3) utilitarianism and ends-justifying-means start creeping in, when it seemed like he was trying to keep them out.

Not necessarily. Let’s say you believed something to be the case that most people could not deal with emotionally.

Yeah, that’s what has me conflicted. So is the response, “This is the truth, therefore everybody should believe it, therefore the world ought to be screwed up” or is the response “This is the truth, people can’t handle it, therefore we should push a belief system based on falsehood?”

The universe has no obligation to play nice - it’s entirely possible that the most true belief system is one that most people couldn’t deal with emotionally. So we can’t say “if we can’t deal with it emotionally, it probably isn’t true”. But at the same time, it really seems like the whole purpose of believing things is to get at truth.

That’s usually right, but it’s beside the point. It doesn’t really matter what people actually do; it’s about whether you can conceive of a world where everyone does it (e.g., uses deception) - and in that example, you can’t. A world where everyone ‘deceives’ is incoherent, because a ‘deception’ is only possible in a world where truth is expected.

Longer answer: Kant wants to figure out how a free/autonomous/rational being—(those words are often interchangeable)—would act. That’s his locus of morality. The Categorical Imperative is just a test of that. The rational person is going to act with perfect consistency. The free person is going to act unhindered by worldly pressures. The autonomous person is going to act on a law he gives himself. (Depending on how Kant is talking, he speaks in any of those ways.) The fact that the world will burn, or that it won’t, doesn’t change the fact that if a maxim fails the Universalizability Test (i.e., the Categorical Imperative), then you are not perfectly rational (insert: free, or autonomous). So, since being free/autonomous/rational is paramount, your first two objections don’t actually change things. I mean, they don’t change things from what Kant takes to be the locus of morality—your free/rational nature, rather than the consequences of your actions.

The irony, for me, is that the Kantian moral law looks like a pedantic reformulation of the Golden Rule, but it is often utterly and horrendously impersonal.

I mean, maybe the most hardcore Kantian would say that you’re just bringing in consequences that you don’t like as an objection. And then they’d rehearse to you some 200 pages about how freedom/rationality is paramount and the basis of morality, not consequences.

For most of the maxims that you can put to the Categorical Imperative, they pass. They pass clearly—and they’re still immoral. For example, suppose I’m considering whether to slap someone in the face. I put it to the Universalizability Test. It passes. I can quite clearly and consistently will that everyone else slap someone in the face. I don’t, right now. It’s not something I want - but my wants/desires have nothing to do with morality, for Kant. It’s not a maxim that fails a test of consistency.

Slapping people in the face is an “imperfect duty” for Kant—i.e., it is a maxim that passes the CI, but just isn’t something that you want to happen. The problem is that the vast majority of morality is made up of “imperfect duties”, which are quite obviously utilitarian/consequentialist in basis.

So, yea, you’re on to something…

I’m not sure about the Original Sin, or “split in the self” stuff…

Or, one may attach a contingency here, about evaluating the right thing on the basis of an ideal set up by others. What about the right thing, realized by reasonable minds, singly and without validation from others?

That right thing may be a reasonable attainment. To illustrate, an extreme situation:

(And this example has maybe been used to excess, but extreme situations maybe excuse overuse.)

A dozen people have been left bereft on the high seas after their boat went down. They are at their wits’ end and have resorted to drinking urine. Now they are starving to death, and the lead man has decided that the strongest men of good harmony and nature should consume the useless, the weak and the ill. They proceed on this course, but they fear that, if they do make it to shore alive, they will be prosecuted for crimes against humanity on the high seas.

What should they do, and under what authority?

Someone pulls out the categorical imperative and says: hey, the right thing is not what the good is, but how to define the good as it should be defined under present conditions. And then he gives various reasons why: the strong will certainly be able to deliver the survivors, and will have to, for the benefit of others. If, on the other hand, an indiscriminate feeding frenzy on others were to commence, it would amount to murder, because the actions would not be based on reason.

Finally, if nothing were done, and no one consumed anyone else, the chances are there might be no survivors at all.

The thing to do, therefore, is to apply reason to the use of the categorical imperative, and do what should be done.

I bring in an extreme existential example, since in the course of ordinary daily life the CI would be easily contradicted by the problem of defining what the good is.

Moreno: sometimes some people have to accept a reasonably good course of action regardless of how they may feel about it. But I am not at all totally convinced.

This is not exactly the same as deception, but I thought I would toss it out.

My own issue would be that deception is just peachy in certain situations. Not that I think it is good all the time.

Given your emphasis, which I think is appropriate, on rationality, I will now add a further objection: that actions in the world are beyond the scope of what generally gets called the rational mind. IOW, in many situations I am going to follow gut reactions - certainly in crises this will happen (iow, when I have to decide fast and do not have the luxury of even minimal rational analysis). I may even (seem to, perhaps) contradict my general ways of dealing with people/organizations. The rational mind is often seen as being able to handle just about any number of variables, or, if it can’t, nothing can. But actually one can have developed ‘good people sense’ or ‘warning bells’ and, despite not having evidence one can put on the table for others to analyze, make good decisions (not making any claims to infallibility). I do not want to rein in those skills - if I have them, and one can also assess them empirically - to conform to what a rational person would do in the abstract and want everyone to do all the time in general. That, to me, is like cutting out one set of tools.

Is this an objection on your part to it? If so, could you extend it some.

Which to me ends up being a kind of transcendent value that I must accept.

But some people want to slap and worse and society might just chug along OK.

Whew.

I’ll try to reword it. I often like to look at what happens when a meme comes into a body/mind. Often ideas are evaluated sort of on their own: are they right or wrong? I like to look at what happens if a person accepts the belief. What side effects are there? (And side effects can often be primary.) What does the idea do, not just in the area of intended use, but as a whole, once it and anything implicit in it is accepted by that body/mind? (I write ‘body/mind’ not particularly assuming a dualism; rather, I want the reader to picture a person (body), an individual, and to follow me ‘in’ to see what would happen once a mind has accepted the belief.)

So: I look at what happens if I evaluate myself and my values in the way Kant wants. I first accept a split between good and right. What is good for me need not be right morally. This sounds obvious; however, must everyone accept that they cannot focus on the good, and cannot ignore this idea of right? Let’s say you have an empathetic person - one who is disturbed by other people’s pain, raised by people who were also like this, fairly perceptive. I think there is a loss if this person begins with a universalized a priori: the assumption that if one focuses on what is good (for oneself), bad stuff will happen. It certainly seems like this is true for others - that when they follow self-interest, they end up doing what is not right. Or, in any case, for a gain in ultra-luxury they may put children in slavery, for example, which seems like a small gain for a great deal of suffering. But 1) must the existence of some (even many) people like this entail that I should accept this good/right dichotomy and view myself with suspicion? 2) Long before Kant got in the game, we had systematized very harsh judgments of the self - original sin being part of the Christian kind. What if this very notion is causal? Which came first: the conception of the self - the demand that people view themselves as if, but for the grace of self-distrust and rational testing of desires, they would damage others - or the behavior this conception is trying to prevent, but may in fact be creating?

To use his categories, it might be said that if we are going to analyze rationally, I would like to see where the good is failing to include empathy, and why, and focus on that. We can certainly see how one is often trained to not be empathetic - through religion, racism, nationalism and various kinds of beliefs about the cause of suffering, about what a real man, for example, does in life, and so on. Before we start adding inhibitions and self-distrust, I would want to see what memes, rules and training are already doing to cause what Kant would see as the split between good and right, rather than first trying to get people to accept the split and distrust themselves even more.

I haven’t heard that one before. As an assertion, I don’t find it very intuitively convincing. I cannot imagine a world where everyone breaks their promises. In that world, ‘promise’ would not mean what we take it to mean, if anything. A promise is an assurance that you will/will not do something. You can’t assure someone of anything in a world where nobody keeps promises.

Right, and you think it’s peachy because it leads to more happiness, or better consequences, or less pain, or whatever. What’s Kant going to say? I think he might say something like…

Start with a common intuition that you already have. You believe that every person has an inherent moral worth. (You may not, but if you don’t, then Kant has nothing to say to you—or any other moral skeptic. He’s not trying to prove the existence of morality to other people, he’s just trying to explain/justify what they already believe about morality).

What explains the inherent moral worth of people? It can’t be any of their qualities (like courage, or compassion, or strength, etc.), because those can all be used at the wrong time, in the wrong way, for purposes we’d all call bad. It also can’t be that a person is happy, pain free, or something like that—because many people who have inherent moral worth just aren’t. Ultimately, it has to be that they are capable of guiding their action on the basis of reason. Hence, the locus of morality is on figuring out what the right reason is.

Why isn’t the right reason “maximizing happiness”, or any other consequentialist motive? I think it’s because the goodness of ‘happiness’, (or money or power or whatever), is always contingent on the moral worth of a person who has it. I.e., Your happiness just isn’t valuable when you don’t deserve that happiness. On the other hand, the moral worth of a person is NOT dependent on the happiness that they may (or may not) have.

EDIT: So, in a nutshell: The locus of morality has to be on having the right intention behind your action. Kant thinks he’s discovered the test (the CI) that judges intentions right or wrong. And despite the fact that some of those actions might have bad consequences, their moral status isn’t changed. “Do what is right, though the world may perish”, Kant says. He thinks it’s the only way to act consistently with the idea that every person has an inherent moral worth (aka treat them as ends in themselves, etc).

Sometimes you will follow gut reactions, or natural inclinations, or pressures from other places. Some people have good people sense, and generally “get it right” when deciding how to act. Yea, so you’ll need something to say to someone who has all of those skills—thinks he does—and consistently gets it wrong, but thinks he’s right. Kant probably thinks that those things have no moral worth, because what gives you moral worth is your ability to guide yourself based on reason.

Any criticism of Kant like “If we do what Kant says about morality, the world will burn, we’ll suffer, we’ll tear ourselves apart psychologically, etc, etc, etc” is probably right on. “Do what is right, though the world may perish” - pretty sure that’s a direct quote from Kant. Amazingly, it’s a sentiment that is ACTUALLY COMMON.

[i]- I’m going to technologically prolong this person’s suffering because helping them die peacefully when their future contains nothing but torture is morally wrong.

  • I’m going to force this birth and so what if it costs a house, and a life.[/i]

I was once sitting in a class where we were talking about the railroad scenario. You know, basically: would you kill someone in order to save many more lives? Or, specifically, would you push someone in front of a train in order to stop it from running over 10 other people? Someone else in the class said yes. And everyone else gasped in horror. It was a class on Kant.

That was pretty funny.

The truth is almost always funny.

I think the point was, you would know what people would not do. It would still be a fairly predictable world, though less information would be given by promises.

I can certainly come up with justifications along consequentialist lines, but I don’t think it has to be justified consequentially. There is a flaw in abstracting actions out of contexts. Deceiving someone does not exist in a vacuum. (So far the consequentialist is with me.) If I meet someone who gives me the creeps, or seems to be hiding something that sets off warning bells, or is threatening, I adjust my behavior - not simply, I think, because of consequences, but because it is simply not a situation where honesty fits. In a sense, it is like if a woman I am not attracted to, say, starts gently rubbing my arm: I will likely pull back - unless the context fits this interaction somehow. Not (only) because of consequences, but also because it does not fit me. Sure, I may be concerned that if I allow this to continue she may think X, but really I am going to not participate in a relation that does not fit my feelings. In the example of deception, I may deceive about things that could not possibly hurt me or someone else, just because, for example, I do not want to relate honestly to the person AND I do not want to make this known. It’s a distancing move, just for me. Now it may be the case that some of my deceptions are simply extra precautious. I can’t think right now of what negative consequences there might be if I answer honestly about seemingly trivial stuff, but I do this just in case I am missing the threat. But there is a surplus here that goes beyond consequences.

I have to come back to get the rest.

I don’t think a world where nobody kept their promises would be a predictable world—(not that predictability is the reason why that world is inconceivable). It’s not that everyone always does the opposite of what they promise, it’s just that the fact that they promised has no bearing on their future behavior.

That’s fine—however you think deception is justified…

What followed was my attempt to put Kant’s entire theory (from Groundwork) into, like, 3 paragraphs.

Does this mean that a person who does good and wishes others well in general, but who does not always (or generally, or most often) determine their actions via reason, is not a moral (or good) person? And how does a rational person without empathy fare? This would seem to leave a lot of room for rational actions that Kant would likely have problems with. Or?

  1. It seems to me there are rational people who can hurt simply through their presence. (I am not saying this is a necessary part of being rational.) They may pass Kant’s CI and judge others with it. But they are hateful people, and this can be felt.

  2. It doesn’t seem to me the CI is judging intentions, at least not directly. One’s intentions could simply be to be consistent. Or to be rational. Or to have a good defense against criticism. Or to think one is better than other people. The last is really quite common.

I can imagine definitions of ‘moral’ that would mean that not using reason to make choices takes them off the table of moral choices. But it seems to me Kant needs to justify getting everyone to move over to his approach to being a good guy. And to do this, at least for a consequentialist, would mean he needs to show that there are no significant losses when people all decide not to follow their gut reactions, even when these seem to do rather well and even when they get good feedback about them. Further, it would seem the CI might be usable against Kant on the issue of the CI itself: should everyone act as if they do not have certain skills, since some people will think they have them but do not?

Part of this is an ‘if it ain’t broke, don’t fix it’ objection. But I also want to go further and say that you are then placing moral pressure on people to use only one process for determining their actions. (I am quite sure that however much some people may emphasize moral gut reactions, they also reason their way to actions; but now they would have to determine actions only through rational processes, including a test via the CI.) I think there will be a loss, at least for some people, and likely mostly amongst those who would even decide to listen to Kant. The ones who would ignore Kant - and similar heuristics presented by others - completely and without caring would include the important moral failures out there.

I wouldn’t generally say we would tear ourselves apart psychologically. But it seems really rather heady, and to at least add to the distrust we are generally being trained, in various ways, to aim at ourselves. I agree that this is common.

Which is ironic, because they are all implicitly approving of precisely those kinds of decisions all the time - unless they are very, very politically active, with a wide range of political targets: hospital policies, allocations of tax funding, foreign policy, corporate policy and much more. Right now they, or their parents, are paying people who do and will make decisions like that. Jack Nicholson’s general, in that Tom Cruise courtroom drama movie whose title I cannot remember, was right. We tend to want immoral people on our walls.

I appreciate it. I did read the Critique of Pure Reason a good while ago, but not the Critique of Practical Reason. I mean, someone threw me the categorical imperative in college, but how much foundation was thrown with it I don’t know.

I would say reacting non-candidly is often wise/intuitively smart, though the degree of lack of candor that is appropriate varies.