Pre-Rational Ethics

Pre-Rational Morality

This year’s winner of the Levinson Award at the University of Maine.

          The aim of this paper is to show that there must be a moral awareness that informs, and is independent of, rational thought.  By 'rational thought', I mean the entire scope of what takes us from apprehending evidence to deliberating our way to a conclusion. By 'moral awareness', I mean only the sense that there are some actions we ought to do, and others we ought not.  I intend to make no claim about whether morals are ultimately virtue-based, centered on duties, or something else altogether. While it is beyond the scope of this paper to settle on a particular moral system as the best (or indeed to construct a new one), strong criticisms will be offered of certain moral systems that do not meet the demands this paper makes.

          Perhaps the most philosophically discussed role of conscious, rational thought is the process of trying to achieve epistemic justification.  Justification is another term with a hotly disputed meaning, which unfortunately this paper doesn't have room to address.  Very loosely, however, most philosophers agree that justification means holding one's beliefs for the proper reasons, or in the proper circumstances, such that having the belief is rational or truth-conducive. For example, believing that Mars is the fourth planet from the Sun because you heard it in a song lyric, and you always believe what song lyrics tell you, would be considered unjustified (even though the belief is true), because the "always believe what song lyrics tell you" rule is inappropriate.  "Inappropriate" could mean many things here- that it's unethical or blameworthy, that it's likely to produce wrong or undesirable results in a majority of situations, or other options.  Either way, as philosophers we commonly find ourselves examining our beliefs to see whether they achieve justification, with the understanding that justification is a necessary (but generally not sufficient) condition for knowledge.

          At this point, let me assert what seems obvious to me- that many of the behaviors we engage in while reasoning are taken to have moral worth.  Honesty, open-mindedness, courage, and a host of other qualities are seen not only as important for getting at the truth of things, but as praiseworthy qualities in life generally.  We could probably attach every common virtue to the intellectual life in some way or another.  One rather unobvious connection, which gets to the very heart of intellectual life, is charity- when we hear a statement like the following,

“I saw a man eating chicken on the way home from work today.”

   We are immediately faced with a choice of interpretation, since we can't hear the presence or absence of hyphens in spoken English. In this case, there are four quite legitimate interpretations- the first clause could refer to a man having a chicken dinner, or to a monstrous chicken that eats people.  The second clause could mean that the observer was on her way home from work, or that the subject of the sentence (man or monstrous chicken) was.  Now, in this case, choosing the wrong interpretation results in little more than a laugh, but the day-to-day life of a philosopher is filled with confronting arguments and positions, and what's absolutely key to progress is interpreting opposing arguments in the strongest possible way.  Interpreting a critic's argument in bad faith, contrary to his intentions, in order to make his position easier to defeat, is simultaneously irrational and immoral.

          As far as reasoning goes, interpreting sentences is a fairly complex, derivative activity, and if we misinterpret, we can always get clarification later.  Do moral issues apply to more basic, objective operations of reasoning? I think so.  Consider the following:

1.) If matter exists, then God exists.

2.) Matter exists.


3.) God exists.

      The conclusion follows from the premises as a simple matter of modus ponens.  Suppose that premise 1 had been proven to the satisfaction of all who considered it, by some sound argument.  Would everybody then be forced to conclude that God exists?  No. From the above argument, one could conclude that God exists. But one could also conclude, just as validly, that the conclusion (3) is unacceptable, and that since premise 1 is proven, premise 2 must be false, and thus matter doesn't exist (given that the rejection of 1 is untenable for whatever reason).  For any deductive argument, treating the conclusion as an absurdity which shows one of the premises must be false is a 'legal' move. However, we would have to agree that tossing out the existence of matter for no better reason than to avoid accepting the existence of God would be an intellectually unethical move- dishonest to oneself, and flouting what is technically permitted in a way that damages the dialectic. Indeed, if one is willing to bite any bullet to avoid a certain conclusion, no argument can convince them. Thus, 'behaving oneself' is a key element even in such basic behaviors as drawing conclusions from syllogisms.
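The symmetry at work here can be put schematically. Writing M for "matter exists" and G for "God exists" (the symbols are mine, purely illustrative), both of the following inference patterns are classically valid:

```latex
\begin{align*}
  M \rightarrow G,\;\; M       &\;\vdash\; G       && \text{(modus ponens: accept the conclusion)}\\
  M \rightarrow G,\;\; \lnot G &\;\vdash\; \lnot M && \text{(modus tollens: reject premise 2 instead)}
\end{align*}
```

Logic alone licenses either route; which one we take is precisely where intellectual ethics enters.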

          There is a class of philosophers who would readily agree with everything I've said so far: those who take a deontological stand in epistemology.  That is to say, they believe that justification of true belief just is the fulfillment of one's moral duties.  However, the aim of my paper is something other than defending deontology. In fact, there are a number of very good objections to that position. It is a common belief, to put it in the words of Kant, that "ought implies can".  In order for an act to have a moral component for us, we must be able to do it, or refrain from doing it.  Upon reflection, however, there is good reason to doubt that our beliefs are in our control in a way that makes them open to moral criticism.

          A thought experiment will outline this clearly.  Suppose there is a locked door, which you desire to pass through. The only way to open the door is to hook oneself up to an attached lie detector, and to sincerely answer 'Yes' to a series of questions.  So, you hook yourself up, and the first question the automated door asks is, "Are you at least 10 feet tall?"  If we think about what it would take to get the door open, we must consider either 'fooling' the lie detector without changing our beliefs, undergoing some long, experimental process that would, over time, force us to believe something that wasn't true, or somehow becoming 10 feet tall.  What we could not do is simply decide to believe that we are 10 feet tall because there is an immediate advantage in doing so.  In fact, the very usefulness of lie detectors depends on our inability to easily change our beliefs.  It is commonly held that in order to be morally culpable for a behavior, it must be under our control- ought implies can, as Kant said.  So, if our beliefs are not under our control, it seems likely that epistemic justification cannot be primarily a moral matter.

          There are no doubt rebuttals to the above argument, but I would like to defend the existence of a moral component to reason even if deontological justification is a failed concept.  Without reference to ethics, then, justification would be about one's beliefs standing in a particular relation to other things one believes, and perhaps to the facts themselves, without reference to any moral judgment.  Does it follow, then, that there are no moral judgments required in proper reasoning? Not at all.  Consider the example above, of the sentence that could be interpreted in a variety of silly ways. Whatever justification may be, it is still imperative to interpret a person's statements correctly in order to understand or interact with their positions, and interpreting an argument in a weaker or stronger way is still, at least often, a matter of charity or honesty, and thus a matter of moral judgment.  No matter how we define justification, there are moral choices to be made with respect to our reasoning- and those choices often have great impact on the success or failure of our examinations.  In fact, if we accept the view that our beliefs are not directly in our control, the reasoning processes that ultimately give rise to those beliefs become even more important, because they are what we are responsible for, and they no doubt have great influence over what we ultimately believe, even if we do not directly control it.  A bad belief, then, would be like a sickness- while we cannot simply choose to become sick or well, it is that much more important that we avoid behaviors we know make one prone to sickness.

          Is it possible that these truth-getting behaviors are correct, not for moral reasons, but for pragmatic ones?  Might it just be that honesty, good research, and so on, are the correct things to do simply because they work best, and any moral qualification is beside the point? This seems plausible at first, and I believe this interpretation could be consistent with the thrust of my paper. Consider the case of the biased researcher.  A student trying to shed light on a controversial issue will frequently make the mistake of reading only the research that speaks highly of her previous views and prejudices, while glossing over (or outright ignoring) the defense of positions that make her uncomfortable.  No doubt this happens unconsciously quite often- but not always. A person can choose to read only one-sided research while on a quest for the truth.  We can analyze this behavior pragmatically- it is certainly true that if your aim is to get to the truth of a controversy, reading only one side of the issue is a poor means of getting there. Biased research doesn't work.  We can also, however, analyze this morally. Shying away from uncomfortable positions while chasing arguments that bolster our previous views (and thus our ego), while telling ourselves "I am digging for the truth", is dishonest, and perhaps cowardly as well.  Unbiased research, then, is both moral and pragmatic- it's good, and it works.  This reinforces the central point of the paper- that morally correct behaviors are necessary for proper reasoning.  If there were a person who had no sense of the morality of situations, it is true that this person could discover that the methods we consider ethical are the best methods. Indeed, if this were true in every case, it would be no argument against my position- the point here is just that doing the moral thing leads to success when it comes to reasoning.

          Still, it seems that there is a legitimate problem here.  I believe it is this- if we acknowledge that all these seemingly moral decisions are also the best way to proceed in the pragmatic sense, then why suppose our actual decision-making in these cases is moral at all?  Couldn't it be that all of these activities are done for purely practical reasons, and interpreting them as having moral value is something we learn to do after the fact?  I think there is some merit to this point.  In examples such as willful misinterpretations, rejecting solid premises to avoid undesired conclusions, and related situations, we immediately see these actions as morally wrong at the same time as we see them as irrational.  So perhaps the speculative conclusion to draw here is that one can get to the best behavior in the absence of one of these faculties- a person who had no sense of the moral could get to the best reasoning techniques through pragmatism, and ostensibly a person with weak critical thinking skills but a strong moral sense could get to these techniques by doing what is right.  Is there any way to decide which of these impulses, moral or pragmatic, is more central to reasoning? I think so.

          Pragmatic thinking involves concerning oneself with what works, with what is most useful.  However, what is most useful in a situation depends a great deal on the intended ends.  Consider our example of the biased researcher.  As was said, if a person is trying to get to the truth of a matter, one-sided research is unlikely to work.  But what if that's not the person's aim? What if the goal in mind is to create a strong polemic against a popular view, or to defend the status quo from some new source of criticism?  In these cases, not reading the best work of the opposition, or at the very least reading it with a very different 'eye', might be the most useful approach.  The pragmatic 'best course of action' has changed.  Indeed, what if one's goal in philosophical pursuits in general is not the acquiring of truth at all, but perhaps learning a certain set of persuasive techniques, or gaining the influence to affect culture in a certain way?  Clearly there are a variety of potential ends to rationality; however, I think there is a strong impulse in us to see the pursuit of truth for its own sake as more appropriate (to philosophy at least), or more noble, than these others listed. In other words, there is a sense in which pursuit of the truth is a greater good than political power or manipulating one's friends, etc. Even if there are situations in which this is not the case, one still has to choose some end in order for a pragmatic approach to make sense. As such, there is already a value judgment implied in pragmatic reasoning, and such judgments are open to moral evaluation even if it turns out there is no consensus.  When somebody chooses to pursue persuasive power or self-satisfaction over a pure quest for the truth, this choice has a moral component- what is the right end to seek, given the circumstances?
So then, any pragmatic approach to reasoning that purports to ignore the moral component to the rational process must acknowledge that not only are the right choices identifiable using moral concepts, but that the process itself began with a value judgment insofar as which end is pragmatically pursued.  Thus, basic reasoning skills are unavoidably characterized by ethics even under a pragmatic view.

          The consequence of my position here is that morality must, to some degree, be epistemologically prior to reason.  We need to respond in the appropriate way to the moral content we see in the decisions before us in order to reason successfully about any sort of complex issue- such as ethics.  The obvious objection is that there seems to be a circularity here- we need to know what's good in order to reason successfully, and we need to reason successfully in order to figure out what's right.  Is there a way to break this circle while retaining my point?  A survey of some popular moral systems shows that not all of them will fit this new perspective.  For example, Kantian ethics states that morals are derived specifically from the application of reason.  Here we see a perfect example of the sort of situation I describe above, as well as the importance of pre-rational morality.  It is a well-known problem when dealing with the Categorical Imperative that if one formulates one's maxims just so, one can seem to get away with any behavior at all.  Well, what is 'just so', and what did Kant intend?  From what I can tell, getting around the Categorical Imperative involves forming maxims with an attitude of selfishness or duplicity, and Kant is presuming a certain forthrightness from people.  Kantian ethics, then, seems to be predicated on the presumption that people know how to, and will, 'play fair'.  So this cannot be our grounding of moral reasoning, even as it shows us how important such a grounding is.  A person with no sense of fairness who tried to use the Categorical Imperative to discover which actions were moral would soon discover that anything they wanted to do was justified, as long as they worded the maxim carefully enough.
It's important to note that if the Categorical Imperative were the person's sole source of moral information, they would never come to the position that there was anything wrong with tailoring their maxims just so- the maxims would forever seem 'carefully constructed', never unfair, never cheating.

         Consequentialism, insofar as it is evaluative, faces a similar problem. It can require all the focus and skill of our reason to discern who is being hurt or helped by a certain action, and that discernment is subject to the same demands of integrity and consistency as constructing a maxim for the C.I.  We must weigh the happiness of one against the happiness of another, and since no two people or situations are alike, there is inevitably a degree of interpretation here.  We are tempted, at every turn, to quantify one person's potential happiness as lesser, and another's as greater, in order to support whatever action we ultimately wish to perform.  Again, we have an example of a complex moral system that is dependent on pre-existing moral reasoning in order to function.

          Are there stronger options?  Christine Korsgaard, in her paper "The Origin of the Good and Our Animal Nature", presents a rework of Aristotle's system, in which she locates the final good in a being's evaluation of its own quality of life.  In other words, a being's perception that its life IS good is the final good. This seems dependent upon reason at first glance, and it may be so.  However, Korsgaard maintains that animals can perform the evaluation (be in an evaluative state, as she puts it) necessary for the final good, so she obviously has some non-rational situation in mind.  Humans, however, work differently than animals, according to her- the Aristotelian virtues come into play, in order to ensure that the final good of one human doesn't conflict with another's.  It is certain that cultivation of the virtuous life will require, at least at times, the full exercise of our reasoning.  We are no doubt getting closer to our goal- in Korsgaard's view, there is indeed a perception of the good that can be known without rational thought, though it is unclear to what degree this applies to humans.  I believe we can get yet closer to pre-rational morals.  I turn now to the philosophy of Thomas Reid.

          Reid's philosophy is based around what he calls 'common sense'.  While it is wrapped up in the pre-reflective beliefs of the 'vulgar masses', it is nevertheless a well-developed philosophical concept.  Essentially, Reid believes that there are certain things we believe due to our nature as human beings, which come to us without any process of rational justification. This is not to say that these beliefs might not be mistaken, or that they can't be examined critically.  That our wills are free, that our memories are of real events that occurred in the past, and that other human beings have conscious minds like we do are some examples.  An important one is the general faith in the reasoning process itself- that we can get to correct conclusions through the application of deduction.  There is an important point here. When we search for (or demand in others) a rational basis on which to trust (for example) the deliverances of memory, we are in effect giving reason primacy over memory- we are setting reason as the standard against which memory is judged.  Reid says this is arbitrary at best- it would be circular to appeal to a rational principle in justifying this course of action, and the deliverances of memory and of reason both come from the same ultimate source: our human constitution.  The principles of common sense are identified through a reference to what Reid calls "all languages".  That is to say, people in every culture refer to themselves as free, refer to memories as real references to past events, and so on.  It is possible to doubt common sense (Reid is critical of philosophy insofar as philosophers take doubt as a default position, as introduced by Descartes), but to do so puts one at odds with one's own human nature.

          The principles of common sense that Reid lists concern human inquiry and knowledge.  In separate writings, spread across a series of essays collectively known as Practical Ethics, Reid makes a similar case for morality. That is to say, human beings possess a sense (faculty, as he calls it) of right and wrong in just the same way as we possess a sense of the past through our memories, or a sense that the words others say to us can convey truth (acceptance of testimony).  Here we have, at last, a moral system that can satisfy the demand of grounding our higher reason itself. For under this view, when a person considers a potential course of reasoning action (say, how to interpret a statement, what sources of data to use, or how to construct 'fair' Kantian maxims), they are aware of a series of things, all arising from their human nature, with none depending on the others for its reality.  They are aware of their freedom in choosing one course of action over another.  They are aware of their memories, and how past situations resolved themselves.  They are aware of the rational connections between propositions- what contradictions are, that some propositions entail others, that some are simpler than others.  They are aware of values in reasoning- that simpler propositions are to be preferred; not just what a contradiction is, but that it is in some way a flaw or failing; that entailment is preferable to inference.  Finally, they are aware of the morality of the situation- that there is such a thing as reasoning fairly, honestly, and charitably.  None of these things is dependent upon the others; we do not use a deductive argument to arrive at our memories, and conversely, we do not need to refer to memories of past contradictions to recognize one before us now.  They are all judgments that arise from the observation of the same object- in this case, the vista of conceived future actions one may choose to bring about in response to a situation.
Reasoning in the broadest sense is a mixed intellectual activity that draws upon all these faculties to derive its conclusion. In this sense, reasoning is dependent on morality, as has been shown in the examples above.  Reasoning in the narrowest sense, as the recognition of the basic interactions between propositions (entailment, contradiction, etc.), does not depend on morality; but then, neither does morality depend upon reason.

          Not much more needs to be said about Reid's system in detail- and in fact, we can depart from it in large ways without losing its usefulness here.  For example, the real objects of the moral sense could be moral rules, as Reid (as well as Kant and others) implies, but they could just as easily be moral qualities in the agent, such as the Aristotelian virtues.  They could even be properties of objects- according to G.E. Moore, something is right or wrong in just the same way that something is blue or triangular, and we immediately perceive rightness and wrongness as part of the essential character of something (say, lying). Either way, the essential notion here is that rightness and wrongness are identified independently of rational reflection. This is borne out by reflection- we see a moral character in our basic reasoning, acting in accordance with which brings us the results we seek. It provides a strong defeater to any strongly rationalist ethical system- as we've seen, the Categorical Imperative and consequentialism both rely on a prior sense of fairness or honesty in order to work through the rational problem-solving which is their strength. A preference for simplicity dictates that if we need a pre-rational system for moral reasoning, we should rely on that same system to answer as many moral problems as possible.

          It should be a simple argument from here to show that all human moral thought has its roots in the pre-rational.  For there are two options- moral thought which depends upon reasoning (R), and moral thought which does not (N).  This paper shows that N exists, and is necessary for basic reasoning (B).  If R exists, it is, by definition, dependent on basic reasoning.  If R depends on (could not exist without) B, and B depends on N, then R depends on N.  Thus rational moral thought depends on pre-rational moral thought.  But rational and pre-rational are the only types of moral thought possible (either moral thought depends on reason or it doesn't), so all moral thought is dependent on pre-rational moral attitudes, as are all but perhaps the most basic acts of reasoning.
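The dependence argument above can be compressed into a short schema (the arrow notation is mine, added for clarity, not part of the paper's vocabulary):

```latex
\begin{align*}
  &\text{Let } X \Rightarrow Y \text{ abbreviate ``$X$ depends on $Y$''.}\\
  &R \Rightarrow B && \text{(rational moral thought requires basic reasoning)}\\
  &B \Rightarrow N && \text{(basic reasoning requires pre-rational moral awareness)}\\
  &\therefore\; R \Rightarrow N && \text{(dependence is transitive)}
\end{align*}
```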

It's natural for me to affirm the conclusion of this essay, but I arrive at it via an ontological road where you have taken the more difficult epistemological approach.

Ethics pertain to values, and values are not originally rational. Formulating them is a rational activity, and so is formulating morality. But ethics had been well established, in the form of instinctualized dos and don'ts, before the emergence of the rational mind.

Language itself is based on a pre-rational choice: to give similar identity to different things, and, tied to that, to identify them as things out of time, instead of strictly actual identification in terms of the present context.

But the most direct affirmation of this conclusion is to consider the use of rationality itself as a matter of ethics.

What do you mean that values are not originally rational, but formulating them is a rational activity?

I can hold a value pertaining to the taste of a certain kind of food without formulating that value.
A rational formulation would be, for example, the statement "food x is good" or "food x is bad". To make it, we have to identify the type of food as an identical, and divide the world up into good and bad. Both are rational operations not required prior to the original valuation.

Hrm. Is there a way to describe the pre-rational valuation, or is putting it into words necessarily the rational formulation of it?

I think your paper is tarnished by the vague definition of “moral awareness” given here.
ANY METHODOLOGY can fit into this interpretation of “moral awareness”.

What you call "pragmatism" would fit into your "moral awareness" too, making the comparison of the two (moral-based and pragmatism-based reasoning) you showed later virtually meaningless.

I think the perspective, the scope, of "moral" and "moral awareness" isn't well defined/fixed, and wobbles during the paper, too.
First it has a nearly all-encompassing scope, and then later it seems to change into more or less common (but still poorly defined) notions.

I tend to think "justification" is mostly done to make something illogical sound rational/logical.
If someone arrives at a conclusion via logical thinking, there is no need to believe the result nor form beliefs upon it.
So, justification is very often used by illogical people who want to present their beliefs (of often unknown origin, most probably coming from fear) as something logical.

But you can safely discount my opinion as I don’t claim to be a philosopher.

Reasoning, to me, requires clear perspectives.

"Honesty, Open-Mindedness, Courage, and a host of other qualities" are required for people whose perspective is highly restricted/blocked by (usually illogical) beliefs.
Otherwise, their thought would likely remain illogical, obviously.
And it's not really praiseworthy but simply a basic and obvious requirement in rational thinking, to me.

I think it’s probably better to follow the perspective of the speaker, if possible.
When it’s not clear, we can always ask.

This is because I see any logic as the combination and the flow of perspectives.
Now, you can see it as a "moral", too, if you keep a very vague and vast definition of it.
To me, it’s a simple requirement to understand the perspective of others.

You are twisting things here.
For conclusion #3 to be true, both #1 and #2 need to be true.
In other words, IF #1 (although a pretty silly one, at least to me) is proven, THEN #2 must also be proven.
But you are somehow jumping to the conclusion after only #1 is done, which is illogical.

In this example, I think #1 is too silly to begin with, and #2 is too vague (because it lacks a delimiter for existence, and a definition of "matter").

I think it was a bad example with bad presentation.

When we simply want to understand others, it's simply required to follow their perspectives.
I don't think "charity"/"honesty" as you presented is needed.
If you need such a moral decision, it's because you have a motivation other than understanding others, I guess.

In other words, what you are saying is probably correct for the type of people you seem to be familiar with, those who do not simply desire to understand others but talk/interpret with different intentions.
But not all people are like that all the time.

I'm not sure if beliefs are made by reasoning (especially rationally/logically correct reasoning).
I think I have been losing beliefs as I thought more, reasoned more.
Fewer beliefs are required when we can think/feel on the spot.

Your logic leading to this conclusion seems weak, at best.
I mean, you presented the pragmatic perspective, and it didn't require the help of the moral perspective.
So, "morally correct behavior" wasn't necessary. What was needed was either "pragmatic" or "moral" (or both, if you like).

I don’t think it reinforced your point, much.

I do think you have a VERY STRONG bias to see things from a "moral" perspective.
I think we do (and think) for different reasons, including those of moral origin, but often by simple impulse, reflex, and so on.

I tend to see fear and blocking, instead of "wrongness".
We see side-sliding perspective, abrupt jumps/leaps, wobbling focus, and so on, when someone is trying to avoid a certain perspective.
Usually, people who do these aren't aware of what they are doing.
They are more ignorant/unaware than "wrong"/"bad".
And I don’t think they have much choice.
I mean, it’s like sleeping. We can’t really blame sleeping people, much.
I’d say let them sleep comfortably as much as they want.

You brought in "perspectives", finally. :)

But you should have done that for the moral, too.
Moral (good or bad, ought to or not) depends a great deal on the intended beneficiaries.
I mean, is it good FOR WHOM (and WHY)?

Your perspective is crisscrossing, here.

You have stated that "the pursuit of truth for its own sake [is] more appropriate (to philosophy at least)".
If so, it's pragmatic to seek that and not the other possible goals you've presented.
In other words, there is no moral choice to make, and the person behaves in the way you think is best (unbiased research).

I think you are trying too hard to associate moral and reasoning.

So, this part isn't proven at all.

I don’t agree with this.
When I observe moral interpretations/behaviors, I do see (mostly wrong) reasoning/logic.

I think logic precedes morals (and even emotion and sensory interpretation).

I think you are throwing in A LOT, here.

Human logical thought is too slow to make decisions on complex issues.
Some of us can think faster, and some of us can use other methods to make multi-threaded evaluations, so to say.
But I think these are exceptions, and reasoning isn't suited for making quick decisions on complex matters, in general.

It's obvious that you want to retain your point here, instead of seeking to know IF your point is appropriate. So, I think you are biased (and that makes you immoral by the perspective you showed).

Yah- beliefs without justification, of the "just because dad or mom said so" type.

If someone buys these as beliefs without question, I guess that person isn't very aware.

I highly doubt we have similar levels of awareness.
Some people are more aware (about certain things, or in general) than others.
And our memory seems to contain a mixture of real and unreal events (at least mine does).
I'm not sure we have very free will. Maybe so, but maybe not very free.

I’m not sure about these, at all.

So what?
I thought you wanted to seek the truth.
Do you want to obey (vaguely presented) “human nature” and stop examining?
I think this guy is contradicting your “moral.”

Now, Reid seems to be in conflict with himself, too.
As the “common sense” may both push and block inquiry.

Are you taking his words, as is?
I mean, have scientists or researchers confirmed that we humans have such a faculty, indeed?

It seems to me you were looking for someone whose theory might support your point, and you just stumbled upon this guy and are using him without due examination.

The rest seems to be based on the highly doubtful (to me) theory of Reid, and is thus too hypothetical (and weak) for my taste.

As to be expected, academic standards for philosophy still appear to be appallingly low. . .

Not specifically, that I know of.

Unless we devise a non-rational language. A language not based on identicals.

Of course the evasion of identicals would be based on identicals - it would have to hook on to the non-identical, human experience - which repeats patterns, but never amounts to the exact same entity.

The pattern (the language) would then be the identical, the postulate, the object, and we would move inside it without further postulates, without exact statements. I think that this is the quality of poetry. Statements which can be taken as referring to themselves (as objects), but at the same time make possible an emotional comprehension of an undefinable but active and determining field of awareness.

Well, that’s kind of the point. In this paper, I didn’t want to make any claims about which moral systems/positions had to be correct, other than those (like consequentialism) that are ruled out because they’re post-rational. Pre-rational ethics still have a lot of possibilities, including some form of pragmatism, maybe.

I think the salient difference is that I’m using ‘justification’ as a noun, and you’re using it as a verb. Yes, the act of justification often occurs for the reasons you describe. But the question of when, and how, beliefs are justified is something else- it’s just a description of the state a belief/person/state of affairs ought to be/must be in for us to consider the belief valid.

Well, of course it’s better to follow the perspective of the speaker- but there’s the question of the motivation. Perhaps I’m interacting with the statement in order to simply understand what occurred. Or perhaps I want to make the person seem foolish for some reason. Perhaps a lot of things. Even a dispassionate aim for getting at the truth of things and nothing more is something that is chosen to be valued.

You’re skipping over the relevant bit here, which is what it means for something to be ‘proven’. It would be nice if there was some formula that determined when a premise was proven, but there isn’t- deduction only goes so far as the interrelation between premises. Deduction is going to have nothing to say about the premises themselves, they are either accepted, or they aren’t- usually for inductive or other reasons.

So it’s not a matter of 1 and 2 being ‘proven’ in the way the syllogism proves its conclusion IF the premises are true. It’s a matter of 1 being a given from the perspective of the audience such that we can rule it out as being doubted, for the sake of the hypothetical. From the rules of the syllogism, this leaves the audience with the option to either accept the conclusion, or reject premise 2. The very point is that since the premises in a syllogism are very often NEVER deductively proven or provable, the ‘reject a premise’ option is always available, and when a person chooses to do that vs. accepting the conclusion of two premises they have (previously) considered true, that choice is something like a value judgment.

Which is very often, if not always, the case! I don’t know about you, but I don’t spend my life going around trying to merely understand things. If that were the case, I’d read children’s books and count ceiling tiles. I put effort into understanding certain things and not as much effort into understanding certain other things because that is where my interests lie. Nothing logically rigorous about that.

I’m not reinforcing my point in that passage, I’m rebutting it. Sorry if that wasn’t clear.

And if you were to say that when you see somebody’s actions as rooted in ‘fear and blocking’, there’s no accompanying judgment that they have somehow failed or fallen short of what they could have done, I wouldn’t believe you. You say they can’t help it, but I don’t think you can say it without a certain disdain. Maybe I’m reading into it, but it seems obvious to me that philosophers as such strive not to be, or are happy that they aren’t, or at the very least wish they weren’t, people in the situation you describe.

“The pursuit of truth for its own sake is more appropriate” IS the moral choice. I didn’t ground that in any evidence or logical argument- I made a value judgment, and you agreed with it reflexively, because it was the right value judgment. The pragmatic seeking of it is post-moral, it’s simply following the best course towards the desired (valued, morally chosen) goal. It is certainly NOT a pragmatic matter to seek one goal instead of another. Pragmatism doesn’t have any meaning in the absence of a pre-determined goal.

The point here is that we have two ways of understanding rational pursuit- we can say that trying to understand what a person means is ‘being charitable’, or we can say that it’s the pragmatic ‘most useful way to proceed’ if you want to know what they mean. In the second case, the moral component is preserved by pointing out that ‘wanting to know what they mean’ is itself a value judgment made for good/bad reasons.

[quote]
I don’t agree with this.
When I observe moral interpretations/behaviors, I do see (mostly wrong) reasoning/logic.

I think logic precedes morality (and even emotion and sensory interpretation).
[/quote]
Right, and when I see people thinking logically, I see them choosing to think logically because they believe it’s the right thing to do, OR at the very least choosing what to spend their time thinking logically about because they value it.

You know this is my paper, right? What you’ve just accused me of is being biased in favor of my own position that I’m arguing for, as I’m arguing for it. I’m not sure that’s a valid call of bias, but if it is, it’s one everybody is guilty of at every moment. This is a paper arguing for a pre-rational basis for ethical decision making, so obviously it has a bias in favor of there being a pre-rational basis for ethical decision making, and so does the author. Or else I would have written some other paper about some other thing.
Or better yet, if in my research, I discovered that all moral thought was grounded in logic, then when I presented my conclusions in a paper, some other guy would be accusing me of being biased in favor of THAT conclusion. Of course I am, and this paper gives the reasons why.

I’m going to chalk this up to you not having read Reid combined with my lack of desire to explain him to you, and move on. If you’re actually familiar with him and have specific questions, then let me know. I didn’t stumble upon him as someone who supports my theory (for an example of someone I did stumble on, see G.E. Moore, and the sort of brief mention he gets). Rather, Reid is a philosopher that’s been highly influential to me for a number of years, and I set out writing this paper with a goal of incorporating his approach.

Well, you don’t need to make a claim about which moral system is the best, but I think you need to clearly identify what is moral and what isn’t, since you want to talk about “moral”. Otherwise we basically don’t know what you are talking about, or whether you know what you are talking about.

What’s the difference? I mean, the motivation is “for us (you) to consider the belief valid.”
I think you are (subconsciously) hoping/expecting the belief to be valid, rather than simply examining IF the belief is valid, in the justification, as well.
So, justification seems to be biased.

Maybe it’s just the bias toward positive identification coming from how our perspective works, though.

According to your perspective, it’s immoral, and you are not supposed to feel right/good, if you have intentions other than following what others want to convey.
But the way you explain it almost appears as if you don’t have any moral awareness about that …
If you don’t have the sense of “ought to”, then it’s not “moral awareness” for you (by your own definition). And thus you can do anything, even if the act does not lead to better understanding/discussion, as long as you don’t feel the “ought to”. That may render the point in your paper invalid.

First, you didn’t say that you are limiting yourself to the verification of interrelations.

And premises (any proposition) can be verified.
If the elements of propositions are not well identified, the proposition is bogus and meaningless.
Also, if the relations between well identified elements are not clearly declared, the proposition isn’t usable.
When the relation between well identified elements is clearly declared, we can then verify if such relation really exists under what kind of condition/limitation and/or if it’s verifiable and to what kind of accuracy/precision, and so on.

I think playing with bogus propositions is logically meaningless.

This is where I see your perspective to be skewed.
I think the logical order to follow is to go from premise #1 and then #2.
But you are skipping #2, somehow, jumping to the conclusion, and then backtracking to #2.
Why do you want/need to jump to the conclusion, skipping #2?

In your paper, I don’t think you mentioned that the person has “(previously)” considered the two premises to be true.
So, you might be changing the scenario now, or you might have forgotten to write about it in the paper.

Anyway, it’s not so important, but it appeared very weird (and almost illogical) to me when I read that part.

In other words, your “moral awareness” varies depending on your personal interest.
And thus the moral awareness you treat in your paper is based on subjective/personal interest and preferences, most probably.

I think you are judging people (in this case, me) from your probably limited experience.
There is a wide variety of people with very different ways of life.

In my case, I’m a highly selfish and lazy person, so I don’t spend lots of energy making moral judgments.
I’m more interested in analyzing why they failed, for example.

I’m not really the “blaming” type of person.
Usually, I’m highly oriented toward finding a solution (if needed).
Also, I have years of experience in professional training, so I’d rather analyze the situation than make a moral judgment.
I’m not so often surprised by the acts of others.

I don’t consider myself to be a philosopher.
Actually, I don’t really know, nor am I really concerned about, the definition of “a philosopher”.
So, I don’t have any “moral awareness” of being a philosopher.

You mistook what I said and how I said it.
I’m simply using your perspective, which I may or may not agree/accept. And it’s not important if I agree/accept, in this case.

Putting aside that, I do understand that you see “moral” before “pragmatism”.
However, I’d say that there is not much distinction between pragmatism and moral in the way you put it.
“Moral” doesn’t have any meaning in the absence of a pre-determined goal.

When you make value/moral calls, you are evaluating against a certain standard aimed at a certain goal (often subconsciously).
In other words, I think moral feeling/awareness is the result of subconscious information processing, and usually the data and algorithm used for the processing aren’t very rational, as far as I’ve observed and analyzed.

We’ve seen that your moral sense changes depending on your personal interest, and that you don’t take the best course of action (described by you) all the time.

In short, I think there is logic before “moral”. However, the logic isn’t well done, because most people are not aware enough nor well trained in programming their mind so that their subconscious evaluation would yield a logically correct result.

As people don’t understand how “moral awareness” works, I think some of them tend to mystify (or even glorify) the irrational pieces of logic perceived as moral.

I think you are making every action of ours into a moral action.
Any desire, even instinct, is a moral judgment in a perspective like that.
And in this case, the word “moral” becomes meaningless, because anything you do has “moral” components and thus there would be no distinction between “moral” and “non-moral” action.

The vague definition lets you make anything you like into “moral”, but it makes “moral” valueless/meaningless when you push it too far.

[quote]
Right, and when I see people thinking logically, I see them choosing to think logically because they believe it’s the right thing to do, OR at the very least choosing what to spend their time thinking logically about because they value it.
[/quote]
As I’ve said earlier, many moral calls are made subconsciously, and not by what I call the logical mind, where we can model/imagine/manipulate concepts/perspectives in a plastic and dynamic manner.
Subconscious moral evaluation seems to be done in what I call the emotional mind, in which data and algorithms are more or less fixed (like beliefs) and not easy to change.

Nevertheless, there is logic (usually a bad one) behind ANY evaluation, and it’s often done without the person thinking it’s the right thing to do.

However, since you made the definition of “moral” so vague that it’s practically meaningless, even the subconscious logic can be seen as “moral” action, as I said.
But it’s kind of redundant to say “moral” in this case, because a “moral” action is simply an action.

Actually, it’s your own perspective that is accusing your action, to a certain degree.
However, your definition of moral is so vague and personal that I think you can declare anything you do to be “moral” behavior, for you.

But being biased, you are not taking the best course of action for seeking the truth, again, according to your perspective.

You can examine/study something, and you can reach a certain conclusion/theory about it.
Then, you can write a paper explaining how you thought/examined and how you reached the conclusion.
If you do so, I don’t think you are biased toward the conclusion, provided you have done unbiased analysis.

In your paper, I don’t think you are presenting enough critical perspective.
And I got the impression that you are pushing hard for the (desired) conclusion.
To achieve that, the definition (and perspective) of “moral” changes as the paper goes, and that may make some readers believe what you say.
However, if we try to follow the perspective presented and analyze the validity of the points, I don’t think we can arrive at the conclusion you presented (unless we make that irrational move of making “moral” universal, and thus meaningless).

I don’t have the impression that the people who have read my paper don’t know what I’m talking about when I say ‘moral’. If I clarify what is moral and what is not, then that’s the same thing as claiming which moral system is best. Other than a vague ‘pertaining to value judgments’ which you should be able to figure out for yourself, anything more explicit would be to endorse, say, deontology over virtue theory, which is specifically something I didn’t want to do.

You seem to be taking a very common philosophical concept (the cornerstone of epistemology, really), putting it in my mouth as though I came up with it, and then trying to speculate about why I did so. If you have issues with the notion of justification as pursued in philosophy as a whole, that’s really beyond the scope of my paper to address.

I don’t see how belief in a pre-rational moral awareness implies that that awareness is always accurate, or always active. I compare it to other faculties such as sense perception, memory, and so on, and certainly all of those are prone to error. So I don’t think it’s true that a person can do just anything as long as they don’t have a moral perception telling them they oughtn’t. That would be like saying that nothing in your life has ever happened to you that you don’t remember.

And I think you’re doing a lot of rambling to cover for the fact that you missed the point of a very simple thought experiment, in favor of analyzing the hypothetical premises I used as if they mattered. I’m not going to chase you down the rabbit hole of arguing over whether premises can be proven and so on, when my point was much simpler. I’ve read a lot of your posts, and I’m worried that the reference to God in my thought experiment might be provoking you. Let me remove that element with a slightly different presentation:

One can respond to a valid argument by rejecting one or more of its premises, instead of accepting the conclusion.

Despite this being a logically permitted move, it’s a move that can be made for the wrong reasons- one can reject a premise that they in good faith ought to accept, for no reason other than the conclusion being unpalatable to them. If we see that a person shouldn’t do this, the ‘shouldn’t’ isn’t a reference to a violation of the rules of logic, it’s a reference to a violation of something closer to a moral precept.
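Schematically (this is just the standard logic-textbook contrast, not anything from the paper itself- the old line is that one man’s modus ponens is another man’s modus tollens):

```latex
% Accepting the premises forces the conclusion (modus ponens):
%   P1: A
%   P2: A -> B
%   C : B
\[
\frac{A \qquad A \rightarrow B}{B}
\]
% But validity only says: IF the premises are true, B follows.
% Someone who finds B unpalatable can, with equal logical
% legitimacy, deny B and run the argument backwards instead
% (modus tollens):
\[
\frac{\lnot B \qquad A \rightarrow B}{\lnot A}
\]
% The rules of inference are silent about which move to make;
% that choice rests on something other than logic.
```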

Yeah, hopefully the above clears this up.

I’m pretty confident that your not getting this thought experiment isn’t my fault; this isn’t the paper’s debut, and I already have a pretty good idea of how it’s responded to.

Or, the moral awareness informs one’s personal interests, depending on the circumstances they find themselves in to apply it to. That would be closer to my position. Most people see right away that a genius spending his life counting the grains of sand in a bucket is a sad state of affairs- even if counting the grains technically counts as discovering information nobody previously knew.

Absolutely.

Yes, I know there isn’t much distinction, that’s a rebuttal to my position that I put in my paper, and this is you re-stating that rebuttal back at me like it’s news. Later you’re going to tell me how biased my paper is, because I didn’t apparently spend enough time with the rebuttals that I’ve provided for you to criticize my position with.

Without realizing it, I think, you’ve just agreed with my position in this paper in the above quote. Yes, moral awareness comes from the subconscious, and it isn’t rational. As a consequence, our reasoning is informed/guided/directed by it much more than the other way around.
I don’t really have any comment on the notion of a sub-conscious logic that the ‘very skilled’ can tap into if only they temper themselves and rise above the masses. No comment other than to say I’m highly skeptical of systems with an elitist bent.

Do you feel the same way about reason? It seems clear to me that any action of ours has a rational component- it can be judged rational or irrational, can be thought of as a ‘conclusion’ based on the ‘input’. Do you think then that ‘reason’ becomes meaningless in the face of the fact that there are no ‘non-rational’ actions?

Here for example:

You’re very explicitly making the mistake you just accused me of. Now, I’m going to go out on a limb and assume that you think ‘logical’ is a meaningful term, despite your apparent claim that it’s a component to every action (conscious and sub-conscious) we engage in (if that’s not your claim, let me know). So you should be able to see how it is that morals are that way, too.

The main criticism you’ve leveled at my position is that the pragmatic view and the moral view are too similar, and that’s a criticism I gave you in my paper. Obviously, anybody that disagrees with a paper’s conclusion will think the writer didn’t present enough critical perspective, by definition.

Thesis:

Antithesis:

Synthesis:

Maybe you are doing “willful/wishful interpretation” or selective denial, here. One of the people who have read your paper (me :slight_smile:) has been telling you that your definition is vague and probably also changes within the paper.

I think your perspective is upside down, here.
Without defining “moral” and identifying what you are talking about, I don’t think you can really talk about any moral system.

Well, I’m trying to understand your perspective on “moral”, which is currently vague and which you are, somehow, refusing to clarify. Maybe you don’t know it very well. Maybe you have a reason to hide it. Maybe you are misunderstanding it. I don’t know. So, I’ll try to help you clarify it.

What is common among virtue theory, deontology, and whatever else you consider a “moral system”?
I guess you consider them all to be moral systems since you see a common factor among them, and that common factor can be tightly related to what you consider “moral”.
In this way, you don’t have to worry about “claiming which moral system is best”.

And if you can’t make it more specific than “value judgment”, you can probably explain what is required for the specific “value judgment” that leads to (or that is related to, equal to, or contains) “moral”.

Once we get your idea of moral straight, we can examine IF each usage of “moral” in your paper is coherent with that perspective or not. At the moment, I’m skeptical that your perspective is held consistently in your paper.

I’ll comment on the rest later, when we are done with this, since your paper is about “moral”.

Hi Uccisore,

I hope I’m not too late to the party. While I am very interested in the questions your paper asks, I came at it with a predisposition to both think that morality cannot be considered meaningful outside of practical reasoning, and to think that any claims about morality are very hard to interpret without at least a structural description of what it means for a claim to be moral. As such, my main difficulty is predictably where you make the argument that morality needs to precede practical reasoning:

What you’re saying is that (correct me if I’m wrong), because rationality seems to be entirely instrumental, whereas morality defines specific goals, motives, or values, rationality cannot possibly define morality. Thus, you conclude, it must be true that morality defines, or perhaps at least provides an aim for, rationality, and thus precedes rationality.

I read this as very similar to the familiar problem where rationality seems to be entirely instrumental, and our goals, motives, and values do not seem to be governed by rationality. The issue I have is that you’ve essentially begged the question of this familiar problem, because the next question is always, “If we cannot determine what is moral via rationality, then how can we possibly know what is moral?” Your solution is to simply assert that certain noble ends, such as the greater good, are moral and have some sort of authority over other motives.

Absent this question, though, you also do not address many prominent contrasting pictures of the relationship between morality and rationality. Take Christine Korsgaard, whom you quote for her Aristotelian views. Korsgaard is also a Kantian of sorts, and believes that rationality does require one to adopt a goal, absent any external moral forces (Sources of Normativity). Very roughly, any rational agent (virtuous or not) has the goal of perceiving themselves according to a unified set of principles, otherwise they would be incapable of viewing themselves as a singular agent. Korsgaard takes this, plus the premise that we usually cannot help but recognize reasons as objective, to lead to ethics, which in turn leads the agent to try and cultivate particular moral motives.

I hope you’re still interested in discussing this essay, and I look forward to your response; thanks.

ghjui, yes, I think so. Are you questioning Uccisore in that regard somehow?