Pre-Rational Morality
This year’s winner of the Levinson Award at the University of Maine.
The aim of this paper is to show that there must be a moral awareness that informs, and is independent of, rational thought. By 'rational thought', I mean the entire scope of what takes us from apprehending evidence to deliberating our way to a conclusion. By 'moral awareness', I mean only the sense that there are some actions we ought to do, and others we ought not. I make no claim about whether morals are ultimately virtue-based, centered on duties, or something else altogether. While it is beyond the scope of this paper to endorse a particular moral system as the best (or indeed to construct a new one), strong criticisms will be offered of certain moral systems that fail to meet the demands this paper makes.
Perhaps the most philosophically discussed role of conscious, rational thought is the attempt to achieve epistemic justification. Justification is another term with a hotly disputed meaning, one this paper unfortunately does not have room to address. Very loosely, however, most philosophers agree that justification means holding one's beliefs for the proper reasons, or in the proper circumstances, such that holding the belief is rational or truth-conducive. For example, believing that Mars is the fourth planet from the Sun because you heard it in a song lyric, and you always believe what song lyrics tell you, would be considered unjustified (even though the belief is true), because the "always believe what song lyrics tell you" rule is inappropriate. "Inappropriate" could mean many things here- that it is unethical or blameworthy, that it is likely to produce wrong or undesirable results in most situations, or other options. Either way, as philosophers we commonly find ourselves examining our beliefs to see whether they achieve justification, with the understanding that justification is a necessary (but generally not sufficient) condition for knowledge.
At this point, let me assert what seems obvious to me- that many of the behaviors we engage in while reasoning are taken to have moral worth. Honesty, open-mindedness, courage, and a host of other qualities are seen not only as important for getting at the truth of things, but as praiseworthy qualities in life generally. We could probably attach every common virtue to the intellectual life in some way or another. One rather unobvious connection, which gets to the very heart of intellectual life, is charity. Consider a statement like the following:
“I saw a man eating chicken on the way home from work today.”
We are immediately faced with a choice of interpretation, since we cannot hear the presence or absence of hyphens in spoken English. In this case, there are four quite legitimate readings- the first clause could refer to a man having a chicken dinner, or to a monstrous chicken that eats people. The second clause could mean that the observer was on her way home from work, or that the subject of the sentence (man or monstrous chicken) was on the way home from work. Now, in this case, choosing the wrong interpretation results in little more than a laugh, but the day-to-day life of a philosopher is filled with confronting arguments and positions, and what is absolutely key to progress is interpreting opposing arguments in the strongest possible way. Interpreting a critic's argument in bad faith to his intentions, in order to make his position easier to defeat, is simultaneously irrational and immoral.
As far as reasoning goes, interpreting sentences is a fairly complex, derivative activity, and if we misinterpret, we can always seek clarification later. Do moral issues apply to more basic, objective operations of reasoning? I think so. Consider the following:
1.) If matter exists, then God exists.
2.) Matter exists.
3.) God exists.
The conclusion follows from the premises as a simple matter of modus ponens. Suppose that premise 1 had been proven, to the satisfaction of all who considered it, by some sound argument. Would everybody then be forced to conclude that God exists? No. From the above argument, one could conclude that God exists. But one could also conclude, just as validly, that the conclusion (3) is unacceptable, and that since premise 1 is proven, premise 2 must be false- and thus matter does not exist (given that rejecting premise 1 is untenable for whatever reason). For any deductive argument, treating the conclusion as an absurdity that shows one of the premises must be false is a 'legal' move. However, we would have to agree that tossing out the existence of matter for no better reason than to avoid accepting the existence of God would be an intellectually unethical move- dishonest to oneself, and a flouting of what is technically permitted that damages the dialectic. Indeed, if one is willing to bite any bullet to avoid a certain conclusion, no argument can convince him. Thus, 'behaving oneself' is a key element even in such basic activities as drawing conclusions from syllogisms.
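The symmetry the paragraph relies on can be set out formally; it is just the standard duality of modus ponens and modus tollens, two equally valid inference rules:

```latex
% Modus ponens: accept both premises, derive the conclusion.
\frac{P \rightarrow Q \qquad P}{Q}\ (\text{MP})
\qquad
% Modus tollens: reject the conclusion, derive the negation of a premise.
\frac{P \rightarrow Q \qquad \neg Q}{\neg P}\ (\text{MT})
```

With $P$ = "matter exists" and $Q$ = "God exists", the same conditional licenses either concluding $Q$ (by MP) or, for one who denies $Q$, concluding $\neg P$ (by MT). Logic alone does not say which rule to run; that is precisely the choice at issue.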
There is a class of philosophers who would readily agree with everything I have said so far: those who take a deontological stand in epistemology. That is to say, they believe that the justification of true belief just is the fulfillment of one's moral duties. However, the aim of my paper is something other than defending deontology. In fact, there are a number of very good objections to that position. It is a common belief, to put it in the words of Kant, that "ought implies can". In order for an act to have a moral component for us, we must be able to do it, or refrain from doing it. Upon reflection, however, there is good reason to doubt that our beliefs are in our control in a way that makes them open to moral criticism.
A thought experiment will outline this clearly. Suppose there is a locked door which you desire to pass through. The only way to open the door is to hook oneself up to an attached lie detector and sincerely answer 'Yes' to a series of questions. So, you hook yourself up, and the first question the automated door asks is, "Are you at least 10 feet tall?" If we think about what it would take to get the door open, we must consider either 'fooling' the lie detector without changing our beliefs, undergoing some long, experimental process that would, over time, force us to believe something untrue, or somehow becoming 10 feet tall. What we could not do is simply decide to believe that we are 10 feet tall because there is an immediate advantage in doing so. In fact, the very usefulness of lie detectors depends on our inability to change our beliefs easily. It is commonly held that in order to be morally culpable for a behavior, the behavior must be under our control- ought implies can, as Kant said. So, if our beliefs are not under our control, it seems likely that epistemic justification cannot be primarily a moral matter.
There are no doubt rebuttals to the above argument, but I would like to defend the existence of a moral component to reason even if deontological justification is a failed concept. Without reference to ethics, then, justification would be a matter of one's beliefs standing in a particular relation to other things one believes, and perhaps to the facts themselves, without reference to any moral judgment. Does it follow, then, that there are no moral judgments required in proper reasoning? Not at all. Consider the example above, of the sentence that could be interpreted in a variety of silly ways. Whatever justification may be, it is still imperative to interpret a person's statements correctly in order to understand or interact with their positions, and interpreting an argument in a weaker or stronger way is still, at least oftentimes, a matter of charity or honesty, and thus a matter of moral judgment. No matter how we define justification, there are moral choices to be made with respect to our reasoning- and those choices often have great impact on the success or failure of our examinations. In fact, if we accept the view that our beliefs are not directly in our control, the reasoning processes that ultimately give rise to those beliefs become even more important, because they are what we are responsible for, and they no doubt have great influence over what we ultimately believe, even if we do not directly control it. A bad belief, then, would be like a sickness- while we cannot simply choose to become sick or well, it is all the more important that we avoid behaviors we know make us prone to sickness.
Is it possible that these truth-getting behaviors are correct, not for moral reasons, but for pragmatic ones? Might it just be that honesty, good research, and so on are the correct things to do simply because they work best, and any moral qualification is beside the point? This seems plausible at first, and I believe this interpretation could be consistent with the thrust of my paper. Consider the case of the biased researcher. A student trying to shed light on a controversial issue will frequently make the mistake of reading only the research that speaks well of their previous views and prejudices, while glossing over (or outright ignoring) the defense of positions that make them uncomfortable. No doubt this happens unconsciously quite often- but not always. A person can choose to read only one-sided research while on a quest for the truth. We can analyze this behavior pragmatically- it is certainly true that if your aim is to get to the truth of a controversy, reading only one side of the issue is a poor means of getting there. Biased research does not work. We can also, however, analyze it morally. Shying away from uncomfortable positions while chasing arguments that bolster our previous views (and thus our egos), all while telling ourselves "I am digging for the truth," is dishonest, and perhaps cowardly as well. Unbiased research, then, is both moral and pragmatic- it is good, and it works. This reinforces the central point of the paper- that morally correct behaviors are necessary for proper reasoning. It is true that a person with no sense of the morality of situations could discover that the methods we consider ethical are the best methods. Indeed, even if this were true in every case, it would be no argument against my position- the point here is just that doing the moral thing leads to success when it comes to reasoning.
Still, there seems to be a legitimate problem here. I believe it is this- if we acknowledge that all these seemingly moral decisions are also the best way to proceed in the pragmatic sense, then why suppose that our actual decision-making in these cases is moral at all? Couldn't it be that all of these activities are done for purely practical reasons, and that interpreting them as having moral value is something we learn to do after the fact? I think there is some merit to this point. In examples such as willful misinterpretation, or rejecting solid premises to avoid undesired conclusions, we immediately see the actions as morally wrong at the same time as we see them as irrational. So perhaps the speculative conclusion to draw is that one can arrive at the best behavior in the absence of one of these faculties- a person with no sense of the moral could get to the best reasoning techniques through pragmatism, and ostensibly a person with weak critical thinking skills but a strong moral sense could get to those same techniques by doing what is right. Is there any way to decide which of these impulses, moral or pragmatic, is more central to reasoning? I think so.
Pragmatic thinking involves concerning oneself with what works, with what is most useful. However, what is most useful in a situation depends a great deal on the intended ends. Consider our example of the biased researcher. As was said, if a person is trying to get to the truth of a matter, one-sided research is unlikely to work. But what if that is not the person's aim? What if the goal in mind is to create a strong polemic against a popular view, or to support the status quo against some new source of criticism? In these cases, not reading the best work of the opposition, or at the very least reading it with a very different 'eye', might be the most useful approach. The pragmatic 'best course of action' has changed. Indeed, what if one's goal in philosophical pursuits in general is not the acquiring of truth at all, but perhaps learning a certain set of persuasive techniques, or gaining the influence to affect culture in a certain way? Clearly there are a variety of potential ends to rationality; however, I think there is a strong impulse in us to see the pursuit of truth for its own sake as more appropriate (to philosophy at least), or more noble, than the others listed. In other words, there is a sense in which pursuit of the truth is a greater good than political power or manipulating one's friends. Even if there are situations in which this is not the case, one still has to choose some end in order for a pragmatic approach to make sense. As such, there is already a value judgment implied in pragmatic reasoning, and such judgments are open to moral evaluation even if it turns out there is no consensus. When somebody chooses to pursue persuasive power or self-satisfaction over a pure quest for the truth, this choice has a moral component- what is the right end to seek, given the circumstances?
So then, any pragmatic approach to reasoning that purports to ignore the moral component of the rational process must acknowledge not only that the right choices are identifiable using moral concepts, but that the process itself began with a value judgment in the choice of which end to pursue. Thus, basic reasoning skills are unavoidably characterized by ethics even under a pragmatic view.
The consequence of my position here is that morality must, to some degree, be epistemologically prior to reason. We must respond in the appropriate way to the moral content we see in the decisions before us in order to reason successfully about any sort of complex issue- such as ethics. The obvious objection is that there seems to be a circularity here- we need to know what is good in order to reason successfully, and we need to reason successfully in order to figure out what is right. Is there a way to break this circle while retaining my point? A survey of some popular moral systems shows that not all of them will fit this new perspective. For example, Kantian ethics holds that morals are derived specifically from the application of reason. Here we see a perfect example of the sort of situation I describe above, as well as the importance of pre-rational morality. It is a well-known problem in dealing with the categorical imperative that if one formulates one's maxims just so, one can seem to get away with any behavior at all. Well, what is 'just so', and what did Kant intend? From what I can tell, getting around the Categorical Imperative involves forming maxims with an attitude of selfishness or duplicity, and Kant is presuming a certain forthrightness from people. Kantian ethics, then, seems to be predicated on the presumption that people know how to, and will, 'play fair'. So it cannot be our grounding of moral reasoning, even as it shows us how important such a grounding is. A person with no sense of fairness who tried to use the categorical imperative to discover which actions were moral would soon find that anything he wanted to do was justified, so long as the maxim was worded carefully enough.
It is important to note that if the Categorical Imperative were a person's sole source of moral information, he would never come to the position that there was anything wrong with tailoring his maxims just so- the maxims would forever seem 'carefully constructed', never unfair, never a form of cheating.
Consequentialism, insofar as it is evaluative, faces a similar problem. It can require all the focus and skill of our reason to discern who is being hurt or helped by a certain action, and that discernment is subject to the same demands of integrity and consistency as constructing a maxim for the Categorical Imperative. We must weigh the happiness of one person against the happiness of another, and since no two people or situations are alike, there is inevitably a degree of interpretation here. We are tempted, at every turn, to quantify one person's potential happiness as lesser, and another's as greater, in order to support whatever action we ultimately wish to perform. Again, we have an example of a complex moral system that depends on pre-existing moral reasoning in order to function.
Are there stronger options? In her paper "The Origin of the Good and Our Animal Nature", Christine Korsgaard presents a reworking of Aristotle's system, in which she locates the final good in a being's evaluation of its own quality of life. In other words, a being's perception that its life IS good is the final good. This seems dependent upon reason at first glance, and it may be so. However, Korsgaard maintains that animals can perform the evaluation (be in an evaluative state, as she puts it) necessary for the final good, so she evidently has some non-rational state in mind. Humans, however, work differently from animals on her account- the Aristotelian virtues come into play, in order to ensure that the final good of one human does not conflict with that of another. It is certain that cultivation of the virtuous life will require, at least at times, the full exercise of our reasoning. We are no doubt getting closer to our goal- on Korsgaard's view, there is indeed a perception of the good that can be had without rational thought, though it is unclear to what degree this applies to humans. I believe we can get closer still to pre-rational morals. I turn now to the philosophy of Thomas Reid.
Reid's philosophy is based around what he calls 'common sense'. While it is bound up with the pre-reflective beliefs of the 'vulgar masses', it is nevertheless a well-developed philosophical concept. Essentially, Reid believes there are certain things we believe due to our nature as human beings, which come to us without any process of rational justification. This is not to say that these beliefs cannot be mistaken, or that they cannot be examined critically. That our wills are free, that our memories are of real events that occurred in the past, and that other human beings have conscious minds like our own are some examples. An important one is the general faith in the reasoning process itself- that we can reach correct conclusions through the application of deduction. There is an important point here. When we search for (or demand of others) a rational basis on which to trust, for example, the deliverances of memory, we are in effect giving reason primacy over memory- we are setting reason as the standard against which memory is judged. Reid says this is arbitrary at best- it would be circular to appeal to a rational principle in justifying this course of action, and the deliverances of memory and of reason both come from the same ultimate source: our human constitution. The principles of common sense are identified through a reference to what Reid calls "all languages". That is to say, people in every culture refer to themselves as free, refer to memories as real references to past events, and so on. It is possible to doubt common sense (Reid is critical of philosophy insofar as philosophers take doubt as a default position, as introduced by Descartes), but to do so puts one at odds with one's own human nature.
The principles of common sense that Reid lists concern human inquiry and knowledge. In separate writings, spread across a series of essays collectively known as the Practical Ethics, Reid makes a similar case for morality. That is to say, human beings possess a sense (a faculty, as he calls it) of right and wrong in just the same way as we possess a sense of the past through our memories, or a sense that the words others say to us can convey truth (the acceptance of testimony). Here we have, at last, a moral system that can satisfy the demand of grounding our higher reason itself. For on this view, when a person considers a potential course of reasoning action (say, how to interpret a statement, what sources of data to use, or how to construct 'fair' Kantian maxims), they are aware of a series of things, all arising from their human nature, with none depending on another for its reality. They are aware of their freedom in choosing one course of action over another. They are aware of their memories, and of how past situations resolved themselves. They are aware of the rational connections between propositions- what contradictions are, that some propositions entail others, that some are simpler than others. They are aware of values in reasoning- that simpler propositions are to be preferred; not just what a contradiction is, but that it is in some way a flaw or failing; that entailment is preferable to inference. Finally, they are aware of the morality of the situation- that there is such a thing as reasoning fairly, honestly, and charitably. None of these is dependent upon the others; we do not use a deductive argument to arrive at our memories, and conversely, we do not need to refer to memories of past contradictions to recognize one before us now. They are all judgments that arise from the observation of the same object- in this case the vista of conceived future actions one may choose to bring about in response to a situation.
Reasoning in the broadest sense is a mixed intellectual activity that draws upon all these faculties to derive its conclusions. In this sense, reasoning is dependent on morality, as the examples above have shown. Reasoning in the narrowest sense, as the recognition of the basic relations between propositions (entailment, contradiction, and so on), does not depend on morality- but then, neither does morality depend upon reason.
Not much more needs to be said about Reid's system in detail- and in fact, we can depart from it in a large way without losing its usefulness here. For example, the real objects of the moral sense could be moral rules, as Reid (as well as Kant and others) implies, but they could just as easily be moral qualities in the agent, such as the Aristotelian virtues. They could even be properties of objects- according to G.E. Moore, something is right or wrong in just the same way that something is blue or triangular, and we immediately perceive rightness and wrongness as part of the essential character of a thing (say, lying). Either way, the essential notion here is that rightness and wrongness are identified independently of rational reflection. This is borne out by reflection- we see a moral character in our basic reasoning, and acting in accordance with it brings us the results we seek. It also supplies a strong defeater to any strongly rationalist ethical system- as we have seen, the Categorical Imperative and Consequentialism both rely on a prior sense of fairness or honesty in order to work through the rational problem-solving which is their strength. A preference for simplicity dictates that if we need a pre-rational system for moral reasoning, we should rely on that same system to answer as many moral problems as possible.
It should be a simple argument from here to show that all human moral thought has its roots in the pre-rational. For there are two options- moral thought which depends upon reasoning (R), and moral thought which does not (N). This paper shows that N exists, and that it is necessary for basic reasoning (B). If R exists, it is, by definition, dependent on basic reasoning. If R depends on (could not exist without) B, and B depends on N, then R depends on N. Thus rational moral thought depends on pre-rational moral thought. But rational and pre-rational are the only types of moral thought possible (either moral thought depends on reason or it does not), so all moral thought is dependent on pre-rational moral attitudes, as are all but perhaps the most basic acts of reasoning.
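The closing argument amounts to a transitivity of dependence, and can be sketched formally. Writing $x \prec y$ for "$x$ could not exist without $y$", the steps above run:

```latex
\begin{align*}
  B &\prec N && \text{(basic reasoning requires pre-rational moral awareness)}\\
  R &\prec B && \text{(rational moral thought requires basic reasoning, by definition)}\\
  (R \prec B) \wedge (B \prec N) &\;\Rightarrow\; R \prec N && \text{(dependence is transitive)}
\end{align*}
```

Since every instance of moral thought is, by the exhaustive division, either $R$ or $N$, and $R \prec N$, all moral thought ultimately rests on $N$.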


