http://users.ox.ac.uk/~magd1534/JDG/petts.pdf
I should be very interested to read any comments.
James! I am very excited to read this paper!
Please read my essay “values in nature” and see what you think!
I am sorry, but it is now 3:00 in the morning; I will send you a reply tomorrow.
Szpak
Where can I find your paper?
James,
I thought that was a very interesting and insightful essay. I do have an issue with one of its important points, as I understood it, however.
You claim that life behaves as it does because such behavior will tend to bring about certain ends. For example, a lion chases zebras because doing so will tend to give the lion food, and a flower turns itself towards the sun because doing so will tend to give the flower the sunlight it requires for survival. Yet, I would say that only the lion’s behavior is actually teleological, that is, only the lion behaves as it does in order to bring about a certain end. The flower (which I consider to be unconscious, unlike the lion) just exhibits entirely arbitrary behavior, which evolved because such behavior has been beneficial to the flower’s survival on earth, and hence it survived through time. The teleology of the flower is ‘as-if’.
I suspect that you were already familiar with the genuine/as-if distinction regarding teleology and intentionality in general, and just don’t take it seriously for some reason. I take it seriously because I find it perfectly conceivable that something could behave as if it’s striving to achieve certain ends, and yet really just be ‘pushed’ by physical force. I also think that evolutionary theory explains how organisms can exhibit such apparently teleological behavior and yet be mere mechanisms. Furthermore, I think it’s incorrect to deny the existence of teleology altogether, because I am pretty certain that my own behavior is prompted by goals that I desire to obtain.
You argue (as I understand) that value emerges from life tending to cause certain states of affairs, with those states of affairs that it causes becoming valuable for the kind of life that causes them; the value emerges from the teleology. Yet, I see no reason to conclude that this teleology is more than the ‘as-if’ variety, except in the case of conscious life; and then, I would think that values come first, and actions are then directed at obtaining what’s perceived as valuable (that is, a taste for zebra meat develops, and so then zebras are chased; not first zebras are chased, and so then a taste for zebra meat develops).
There is another problem I have with the idea that value emerges from life’s tendency to behave a certain way. If this were so, then it seems that life should be optimally conditioned to live in a way that’s most satisfying (for how life ‘ought’ to be would be entirely dependent upon how it has actually turned out to be). In observing human life, I find this not to be the case; rather, I observe that the natural human condition is in ways a hindrance to living a good (satisfying, pleasurable) life. On my own view, this is because evolution has conditioned human life to be best suited for survival, but the goal of human life is not merely to live but to live well. While an ability to live well is integral to survival in the case of any conscious kind of life (for conscious life must be motivated to do the things that enable it to survive), there is yet some conflict between the two. I think a couple of instances of such conflict are the tendency to fear death, and the tendency to become greedy (or more complexly, to develop expectancy of goods as goods are obtained, such that losing goods will put one into a more miserable state than one was in prior to ever obtaining the goods, and also more goods are needed just to remain content). By fearing death to such a great degree, we are more strongly driven to take measures to prevent it; and by being greedy, we are more strongly driven to obtain more goods, and also maintain the goods that we currently possess. Yet, I think human beings would be happier without the tendencies to fear death so much, and to become greedy. But this is now getting into a different topic, and I don’t want to drag you into it.
Anyway, I enjoyed your essay and I think I’ve learned from it (part VII was especially helpful). I hope I have understood it well, and that my points make sense and have some value to you. I think I may have misunderstood you regarding the point that I’ve discussed here, because later in the essay you make points that seem to me to go counter to what I understood in part II.
Iss
I am glad that you enjoyed it.
You seem to think that teleology - acting in a certain way in consequence of the kinds of consequences that acting in that way tends to have - is either confined to the intentional (i.e., that achieved by the operation of some sort of sentience), or else that there is some important distinction between that achieved by sentience and that not so achieved. There is no reason to believe that such a distinction is relevant in the way that you seem to think that it is: the whole point is that sentient entities and genes both have in common the process of evaluative feedback (doing X in consequence of the fact that the consequences that doing X tends to have result in conditions more conducive to X being done in the future than not doing X), a process that is necessarily common to both sentient decisionmaking and non-conscious decisionmaking, such as natural selection. The point that I make is that what the two processes have in common is far more important, at least as far as an understanding of value is concerned, than how they are distinct.
Did you read the section on independent values? The point is that humans have not evolved to serve the ends of human minds; rather, the converse: human minds have evolved to serve the ends of the genes that created them. Whatever pattern of pleasure and displeasure in minds tends to be most successful at propagating the gene that causes that pattern to tend to be experienced will be the pattern that human minds will tend to adopt.
That does not mean, however, that that which is valuable for human minds is not distinct from that which is valuable for human genes, for reasons that I explain in some detail in the paper itself and therefore need not repeat here.
Humans will, of course, tend to try to make their lives more pleasurable, but they will not always succeed (just as not all genes succeed in propagating themselves). The fact that there is not always success, however, does not mean that the attempt is not significant.
As explained above, the concept of “the goal of human life” conflates the different and potentially conflicting systems of evaluative feedback that exist in humans (or any entity with a conscious mind). There is no single “goal of human life”: there are instead separate, and potentially conflicting, goals possessed by the particular human’s genes and mind respectively.
I have trouble making sense of the notion of “non-conscious decisionmaking”. Conscious decisions are made because there is some end that the conscious agent desires to obtain, and it’s desired because it’s perceived that obtaining the end will bring about satisfying (pleasurable) experiences (or avoid painful experiences). A non-conscious entity cannot have pleasurable or painful experiences, and hence, it seems, it cannot desire to obtain ends. If an entity can’t desire to obtain ends, then I can’t imagine what would cause its decisions. It seems to me that only the behavior of conscious entities can be explained in intentional terms (“conscious entity E does X in order to obtain Y”), and the behavior of non-conscious entities must be limited to functional explanations (“entity E does X because doing X tends to bring about Y”).
In part III, regarding conscious decisionmaking, you say “If performing act A in condition X most increases V, then the creature will tend to perform act A in condition X. If, conversely, performing act A1 on condition X would most increase V, then the creature would tend to do that instead…”, where V is a state of affairs that is valuable to the mind making decisions. It is not always the case, though, that we tend to do what is most valuable for ourselves, or I think, that we even tend to attempt to do what is most valuable for ourselves. I’m aware that there are often conflicting states of affairs that we find valuable, but this does not entail that there isn’t some overall state of affairs which isn’t more valuable (or equally valuable) than any other possible overall state of affairs. I agree that human minds have evolved to find valuable that which tends to serve the replicativity of the genes that created them, but I differ a bit in how I think it is that what’s valuable for human minds can differ from, and conflict with, the function of those genes. We both agree that it’s pleasure (construed broadly), not replicativity, which is ultimately valuable for human minds. On your explanation, it’s changes in the environment which can cause the states of affairs that maximize pleasure to differ from or conflict with the state of affairs most conducive to replicativity. However, I think that even from the start, the states of affairs most valuable to human minds already differ from, and conflict with, those most conducive to replicativity. Human minds just don’t find these states of affairs valuable initially, even though if they were actually to be obtained, there would be greater overall pleasure than were they not obtained.
The state of affairs that includes not fearing death, for example, is a state more suited to pleasurable experience than that which evolution has disposed us towards (which is a state that includes fearing death); however, it requires rational thinking to recognize that fearing death (at least in the way that we fear it) is irrational, and ultimately a detriment to living a pleasurable life.
Perhaps my point in the above paragraph is not inconsistent with your view, I’m not sure.
When I said “the goal of human life is to live well”, I was essentially just saying that the ultimate goal of human minds is obtaining pleasurable experience, which I think is exactly your view. The fact that what particular states of affairs bring pleasure to a given mind can conflict does not entail that obtaining pleasurable experience is not the ultimate goal of human minds.
Iss
Your argument is that there cannot be non-conscious decisionmaking because decisionmaking entails consciousness. My point, however, is that a more useful conception of decisionmaking is one that does not entail consciousness, and is a set of which conscious decisionmaking is a subset: the underlying logic of conscious decisionmaking is identical to all other forms of decisionmaking, but the mechanism is that of consciousness, rather than anything else, such as gene selection.
Why do you state that people do not tend to attempt to do what is most valuable for themselves? This seems to be an empirical proposition for which you have not provided empirical evidence.
Quite possibly (although, it is quite possible to have two very different states of affairs with equal value), but it can be inordinately complex to calculate which decisions will procure the most valuable state of affairs, and human minds tend to be very far from infallible. Inefficiency and failure, however, do not entail absence of attempt.
What do you mean by “the function of those genes”? Do you imagine that genes have any function other than to make lots of identical copies of themselves for a very long time?
I don’t see how those propositions conflict. Certainly, your proposition doesn’t conflict with what I write in the paper: it is quite possible for there to be many possible states of affairs that are differentially valuable for genes and the minds that they create.
What do you mean by “find these states of affairs valuable” here? How can a human find a state of affairs valuable other than by experiencing pleasure in consequence of it? Or are you referring to how accurately humans tend to predict how valuable possible future states of affairs will be?
Fearing death is not irrational, since people do not choose to fear death: there is no decision by a human mind involved. Only decisions (or, more accurately, an attitude towards reason that may or may not result in suboptimal decisions) can be characterised as rational or irrational, since a process of reasoning is nothing other than a process of decisionmaking.
Yes.
Can conflict with what?
James,
At this time, I still can’t make sense of “non-conscious decisionmaking”. The concepts I’ve internalized over time just don’t allow for it (but I certainly admit that something could be wrong with these concepts).
I wasn’t sure if my proposition conflicted with yours, or just said something different but still consistent with what you said. You write: “Significantly, however, even if the environment changed so that, whilst performing act B on condition Y was still most conducive to replicativity, performing act B1 would now most increase V, then the creature would tend to perform act B1”. Since you only talk about changes in the environment as a possible cause of how the states of affairs that maximize pleasure can conflict with the states of affairs most conducive to replicativity, it appears that you believe that the only way this can happen is by changes in the environment. I understand that what best serves the replicativity of individual genes can conflict, but the human mind can still be viewed as a whole entity, which has evolved to serve the replicativity of the set of genes (perhaps a “society of genes” would be an appropriate metaphor) that creates the human organism. You appear to believe that the only way that there can exist a (possible) state of affairs such that if that state of affairs were brought about, the human mind would experience greater pleasure despite the state of affairs being less conducive to the replicativity of the “society of genes” that created it, is via changes in the environment.
In my view, even without changes in the environment, there already exist (possible) states of affairs such that if one were brought about, the human mind would experience greater pleasure despite the state of affairs being less conducive to the replicativity of the “society of genes” that created it. Often, there is some emotionally-based belief that causes us to not recognize the value of such states of affairs (and hence why I said that we don’t always tend to attempt to obtain what is most valuable for ourselves).
I think a very clear example of an emotionally-based belief that causes us to not attempt to obtain a more valuable state of affairs is evidenced in the fear of death. You implied that fears don’t qualify for rationality/irrationality; but to fear something is to hold a certain belief about the thing, and beliefs can be irrational (that is, impossible to justify via reason). We tend to believe that death (that is, permanently losing consciousness) is a really terrible thing, when actually death merely brings about a state of things that’s essentially ‘neutral’ in terms of value. That at an emotional level we believe death to be really bad is to hold an irrational belief. Although holding this belief is a negative in terms of living a pleasurable life, one might not attempt to expunge it just because one likely won’t recognize its irrationality unless one tries to justify the belief via reason and finds that it can’t be done.
In my original response, I thought that my view conflicted with some of what you were saying, but now I actually don’t see that it does. In a sense, I do think that when making decisions, human minds always attempt to obtain what is most valuable for themselves, in that they always attempt to bring about a state of affairs that’s believed will likely be the most pleasurable. The sense in which I don’t think this is always the case is that they can be ignorant of what state of affairs is most likely to be most pleasurable, and hence not decide to take actions directed at obtaining that state.
What I meant was just that our desires can conflict, such that whatever state of affairs is brought about, some desires will remain unsatisfied.
Scratch “the functions of those genes”; I should have just written “that which tends to serve the replicativity of those genes”.
Iss
Is the problem the concept, or just the word “decision”?
No, I didn’t mean that. It is quite possible that a mind is inherently inefficient at serving the ends of genes (but still more efficient than something other than a mind), so that there exist conflicts even if there had been no change. The change in environment there was just intended as an illustration.
You write: “I understand that what best serves the replicativity of individual genes can conflict”.
It is very rare for different genes in a single organism to have conflicting values, since, because they are all passed on by the same sexual mechanism, the same things (the survival and fecundity of the organism that they create) tend to be conducive to the replicativity of each.
Fear does not entail belief: one can have a fearful reaction to something without holding any particular beliefs about it. Fear is not always (and usually not at all) a result of a conscious decision to be afraid.
It is only neutral in comparison with not being alive in the first place. It is, at least for those with reasonably happy lives, markedly less good than being alive.
I conceive of decisions as having intentionality, and in my conception, intentionality requires consciousness. This is a view that I arrived at to some extent on my own, but which has also been influenced by reading various writings by John Searle.
Do you suppose that beliefs need to be deliberately formulated in order to exist? I’d say that cats have many beliefs about their environment, even though I doubt they ever arrive at these beliefs via a deliberate thought process. I have no problem with the idea of beliefs that form as a result of unconscious processes, and which are not articulated in a language.
Yes, but when people do actually overcome their fear of death, recognizing that death is in no way bad (painful) is often a factor. Buddhist thinkers often advise that we view death as just an “eternal sleep”, and that if we can think of it like this, the fear of death will be overcome. I certainly think death is to be avoided (assuming that one’s life is likely to have a positive measure of pleasure), but I don’t think that fear is a proper response to the prospect of dying (at least, not fear with the intensity that it tends to have in human beings).
Iss
But you still haven’t told me whether this means just that you think that the word “decision” denotes intentionality (whether, in other words, your issue is purely semantic), or whether you think that there is something in the concept of the thing that I call decisions that requires intentionality.
Firstly, this doesn’t address the point that I made that fear is very often not a product of a belief at all. Secondly, there must be a decision of some sort involved in the adoption of a belief, since a belief is an attitude towards a proposition: there may very well be little thought involved, but there must be some thought.
What exactly does “deliberate” add to “thought process” here?
It is perfectly possible to be afraid of death without believing that being dead (as opposed to the process of becoming dead) is painful. Indeed, I have yet to meet anybody who believes that being dead is an inherently painful experience.
I suspect that that is how many people, whether Buddhist or not, conceive of it.
In any event, we seem to be diverging from the topic somewhat…
In part II, you say “Life is unique in tending to bring about states of affairs because of the nature of the states of affairs so brought about:…”. So far, this does not entail intentionality. An entity can tend to bring about states of affairs that serve a certain end, such as self-replication, without ever intending to bring about states of affairs. If you want to call such an entity’s behavior “decisionmaking”, then I guess go ahead, even though I think usually people use the word “decision” to denote some intention to bring about a particular state of affairs, rather than the behavior that results from the decision. In your case, “decision” would have to refer to behavior that’s exerted because of its likelihood of bringing about a state of affairs, not the intentions to bring about states of affairs.
You finish the sentence I quoted above with “value is those properties of those states of affairs that cause it to be the case that entities tend to bring them about.” Now, intentionality is required. If an entity tends to bring about a state of affairs because that state of affairs serves an end, it’s possible that the entity is just a robot; but if it does so because the end is valuable for the entity, this entails that the entity desires that end, and behaves as it does due to its intending to bring about a state of affairs that will serve that end.
On your view, “value” is essentially that which causes entities to tend to bring about certain states of affairs; and whatever end is served by bringing about those states, is “valuable” for the entity. This is what I have an issue with. I think that “value”, although it does cause entities to tend to bring about certain states of affairs (but it’s not the only thing that does), is not essentially a behavior-causing thing. Rather, “value” is essentially synonymous with “pleasure” (that is, the good feeling that can exist only for conscious entities), and it cannot otherwise be defined. What states of affairs a conscious entity tends, or intends, to bring about has nothing essential to do with what is valuable (brings pleasure) for the entity (although there tends to be much correlation between them).
EDIT: I just want to give a quick conceivability experiment to illustrate what I mean by “value is not essentially a behavior-causing thing”.
Imagine that you have no ability to control your behavior, and you also know that you can’t control your behavior (hence not only do you not tend to exert behavior because of its likelihood of bringing about certain states of affairs, you also do not intend to exert such behavior). Now, try to imagine that over time, you experience various pleasures of varying intensity; I think you will find that such is indeed imaginable. If so, then pleasure cannot have anything essential to do with causing behaviors or intentions of behaviors. Since pleasure is your value, value cannot have anything essential to do with causing behaviors or intentions of behaviors.
Yeah, perhaps the off-topic points should be dropped. Although, I believe that your argument regarding them is wrong, so I will respond once more to your challenge.
The reason that I asked if you supposed that beliefs need to be deliberately formulated in order to exist is that I thought that this would be the reason why you think that fear does not entail belief. Since I think that a belief can form in a person’s mind without the person making a decision to have that belief, I have no problem with the idea that fears are beliefs.
I think that we can believe one thing at a ‘rational level’, and another, conflicting thing at an ‘emotional level’. For example, at a rational level, nearly everyone believes that their life some years later will still be lived by themselves (unless one expects to be dead soon), but emotionally, it may be hard to grasp that this is really the case. I know I’ve done imprudent things which I knew, rationally, were ultimately harmful to myself, but emotionally, it seemed like the future of my life that would be affected was going to be experienced by a different person. I think that anytime someone decides to do something that obtains pleasure in the present, while also believing that this action will likely cause an overall less pleasurable life, it is because at an emotional level one does not believe that what happens in the future of one’s life matters much, since one does not believe that it will be oneself who experiences it.
But let me give a more concrete example of beliefs held at an emotional level, and at a rational level, conflicting. When we were studying Berkeley’s theory of idealism in an Intro to Phil course, my teacher said regarding his theory “I can accept it rationally, but I don’t believe it.” If our rationally held beliefs were the only beliefs we had, then that could not have been a sincere statement; but, since our emotions can force beliefs upon us, no matter how counter to reason they may be, that statement could be made with sincerity.
When people think about death, I figure most will conclude, rationally, that there is nothing in any way painful about being dead. However, I think it is clear by the fact that people can fear death with such intensity, and then overcome this fear by meditating on how being dead is akin to eternal sleep (so that this idea can ‘sink in’ emotionally), that people do believe at an emotional level that being dead really is in some way painful. (The reason, I think, is that people tend to build emotional attachments to things in their lives, in such a way that makes them feel that not having these things (which in death, one does not) entails a painful state; but this just gets further off the supposed topic of this thread.)
I realize that speaking of “emotional levels” and “rational levels” is somewhat vague, but in my mind at least, these terms still refer to something.
Iss