Descartes, Materialism, and the Problem of Mental Causation

This is the paper I’ll be presenting at the Undergraduate Research Conference here at the university in the middle of next month. Thanks in advance for any comments.

Ryan Smith
History of Modern Philosophy
Feb 18, 2009

Briefly stated, Cartesian dualism is the position that both mind and matter exist, and furthermore, that each has essential characteristics that make it impossible for one to be completely explained in terms of the other. In other words, if you break matter down into its simplest form, you won't find mind, and if you do the same with mind, you won't find matter. In his second Meditation, Descartes goes into detail on this, saying that the most essential property of the mind is that it thinks, and that consequently I cannot know anything more certainly than I know it. The most essential properties of matter are, first, that it is extended: it always has shape, figure, quantity, proportion, and location. Second, matter can only be known indirectly, through the actions of the mind; it is always at some distance from that which we can know certainly. It's important to note that Descartes only argues for dualism conceptually: he shows why we must think of mind and matter as necessarily separate things, but he provides no argument that this conceptual difference is actual, that there is in fact a stuff we would call 'mind' which is not at all the same as that which we would call 'matter'. Despite the lack of an argument, it's clear from the Passions of the Soul that Descartes believes the difference to be actual, since there he goes so far as to describe the part of the brain that he believed the will used to bring about action in the physical world (the pineal gland, incidentally). Dualism enjoyed a long period of popularity in philosophy, and even though it is currently out of favor, it's still very common to think and speak of ourselves as both mental and physical beings.
One of the major challenges to the dualist since Descartes' time has concerned how the mental brings about physical effects in the world, such as the movement of the body. While at first glance saying "the mind chooses to raise the arm" appears to answer the question, the more one looks at it, the more one finds oneself wanting more. We know that the arm is raised by contractions of the muscles, which are stimulated by nervous impulses coming from the brain, so the obvious conclusion is that the mind is doing something to the brain to make this process happen. But what, exactly? According to Descartes, the mind is not extended, and thus not physical, since extension is the primary quality of all physical substance. This means the mind isn't crudely pushing or pulling on the brain; it's not adding some chemical or producing an electric charge in the neurons. In fact, the way the mind is defined seems to make it impossible by definition for it to do anything at all in the physical world without violating natural laws as we have come to understand them.
Another related but less discussed problem is that of association. If the mind isn't extended, then it seems it's not in any particular place. Without a location, what sense can we make of the idea that this body is associated with this mind? In other words, while exactly how my mind can make my arm lift is problematic, there's no obvious reason why my mind can't make your arm lift, or shake the leaves of a tree, or turn all the rocks in my driveway purple. Without a physical account of how the mind affects the things it can affect, there seems to be no reason for any particular limitation on its capabilities. As Descartes pointed out, the will is infinite, so some explanation is needed for why some things willed differ from others in terms of our ability to actually bring them about. It is these major problems, along with the success of brain study in explaining some of the mysteries of human behavior, that have made dualism a less common position than it used to be; materialism seems to be favored these days in philosophy. The thought is that if the mind is something physical, or otherwise wholly explained by the brain, then we don't have these problems of explanation, and things are more consistent and parsimonious. However, what I would like to show in this paper is that mind/brain interaction is just as problematic for the materialist as for the dualist.
One of the ongoing problems with a materialist outlook is the nature and existence of consciousness. It is at least conceivable for there to be creatures or machines that perform all the same physical actions we do (including speech, the physical creation of artistic works, and the performance of religious ritual) without any inner life whatsoever. If the brain works as we suppose it does, there's no obvious reason why it could not govern our behavior without the emergence of, or correspondence to, the sort of self-reflective thought processes that Descartes relies upon for the cogito. In fact, the existence of conscious experience seems entirely superfluous if all our actions can be accounted for through the brain's mechanical responses to sense-stimuli. In addition, there are machines and simple organisms that do in fact perform some of the functions of the human brain and seem to be in just this state. For example, it would be odd to suppose that a pocket calculator is thinking about math.
There's a parallel here to the problems of dualism that I have not seen pointed out before. Specifically, the problem of consciousness for the materialist is precisely the inverse of the problem of causation for the dualist. Where one cannot see how the mental is able to affect the goings-on in the physical world, the other cannot see why the goings-on in the physical world should result in anything characteristically mental. If it's precisely the same problem stated backwards, then why does this shared issue lead to a perceived advantage for the materialist? I think it has something to do with the supposed importance of the question in light of its place in an imagined causal chain. For the dualist, the explanation of a bodily event would look something like this:

1.) A mental event (an act of will to bring something about, say) affects the brain in some mysterious way.
2.) The brain sends nerve impulses to muscles and organs in ways that science has come to understand very well.
3.) The body responds to these impulses in the way the will has directed- presuming all the physical parts are functioning properly (the body is healthy).

The problem in step 1 leaps right out at us. It is as though we can't accept or take seriously the entire explanation so long as step 1 is so vague. What does the materialist's account look like?

1.) A brain event occurs.
2.) The brain event sends nerve impulses to muscles and organs in ways that science has come to understand very well.
3.) The body responds to these impulses in the way the will has directed- presuming all the physical parts are functioning properly (the body is healthy).
4.) Also, the brain event in 1 corresponds to a mental event, in ways that we don’t fully understand.

The overall effect is for the problem of the mental to be shunted off to a side project that a philosopher could work on if he chose, but which is not at all essential to understanding how behavior and the body work together. Is this a pragmatic success for the materialist? It seems so at first. Notice, however, that implicit in the structure of the second account is the premise that our mental behavior (acts of will, desires, fears, etc.) doesn't actually play any role in deciding what our physical actions will be. Both mental and physical results are 'downstream' from brain behavior, each on its own fork. This is called epiphenomenalism.
Note what I said above about the problem of consciousness being an inverse of the problem of causation for the dualist. Just as the dualist has trouble explaining why a mind can move this arm and not that arm, the materialist has a parallel problem explaining why a particular brain event should cause this act of consciousness and not that one. In other words, there's no obvious reason why the brain event that makes me lift a fork to my mouth should be associated with a mental event related to the experience of doing so. It could just as easily be connected to a fear of snakes, or the experience of seeing a green pentagon, or anything else.
The rebuttal will be made that if our mental events were not related to brain events just so, we would do all sorts of silly things. After all, if a sudden fear of snakes comes over me every time I sit down to a meal, I may well end up on the table rather than at it. If, when confronted with a dangerous mugger, I have the experience of seeing a green pentagon accompanied by jazz music, I may well end up dead. Remember, though, that for the epiphenomenalist, mental events do NOT cause physical events at all. The experience of seeing a green pentagon does not affect how we lift a fork to our mouths, in just the same way that the sunlight shining on Venus does not affect the sunlight shining on the Earth. The brain brings both about independently of each other.
Alvin Plantinga has used this fact very well in his Evolutionary Argument Against Naturalism. In it, he argues that the materialistic conception of where the brain comes from leads us to skepticism, and thus to a defeater for our claims to knowledge of things like human origins (1). What's relevant here is the idea that if our mental events need not be connected to our physical actions in any particular way, skepticism follows, since the experience of riding a horse is as likely brought about by eating peanut butter fudge as by actually riding a horse or anything else we might do. If our beliefs aren't connected to what's really going on in the world in any predictable way, that's a strong defeater for anything we may come to believe empirically. Thus, epiphenomenalism is not a viable materialistic description of mind/body interaction.
There are two alternatives to epiphenomenalism for the materialist that I can see. The first is that brain events and mental events are the exact same thing. This seems impossible on the face of it, since it should be a simple matter for a scientist to observe a brain event in somebody else's brain (say, by viewing electrical impulses on a mechanical display of some kind), and his observations will not contain any information about the corresponding mental experience, or indeed whether there is one. When we describe a mental event, explaining the neurological states involved would have nothing at all to do with the information we are trying to get at; similarly, describing what it's like to feel sad gives us no information about the neurochemical processes in the brain. So at least conceptually, talking about a mental event is one thing, and talking about a brain event another, even if materialism is ultimately true. Also, in order for this to be a relevant break from epiphenomenalism, it's not enough to suppose that seeing a green pentagon is just the way we experience a certain brain event. That would lead us right back to the point that there's no necessary reason why the same brain event couldn't be experienced as a blue triangle instead, and into the skepticism described above. What's needed here is for mental events to be both identical with physical brain states and to have propositional content relevant to whatever brought the event about. In turn, the propositional content of the beliefs we form needs to have a real influence on the actions we take by virtue of that content.
The other option for the materialist is to say that while mental events are completely explained by brain events, and are thus physical (or physical in a sort of second-order way), they have a unique role in causation that is NOT completely accounted for by brain events. In other words, it is necessary for the brain to produce the desire to lift my arm first, and then that desire somehow plays a causative role in lifting my arm. In that case, the materialist account of personal action is not like the second one I gave above; it's like this instead:

1.) A brain event gives rise to a mental event.
2.) The mental event takes the form of ‘deciding’ to perform a physical action.
3.) This mental decision results in nervous impulses in a way we don’t understand.
4.) The nervous impulses cause movement.

Of course one immediately sees that this sort of materialist is in exactly the same predicament as the dualist: now they need an explanation of the mental causally affecting the physical in order for their account of action to be complete. In fact, the problem comes at them from both sides at once: they need an explanation of how a brain could produce a mental event with the right sort of content to avoid skepticism, and then they need an explanation of how that mental event can make physical actions occur. The two explanations may or may not be one and the same.
  The conclusion that I'd like to draw from all of the above is that the most viable forms of materialism have the exact same problems as dualism concerning a complete explanation of personal action. From this basis alone, there is no reason to prefer one over the other.  In the next section I will present a reason to prefer dualism over materialism, beginning with one of Descartes' own arguments.
Descartes sets up the idea in his second Meditation that I may not be all that I take myself to be. Through insanity, a dream, or the acts of a capricious god, I could be in a different place, in a different time, or of a different nature than I suppose. More than just my surroundings, my body could be different: I could be a brain in a vat, without a body anything like what I believe myself to have. It could even be true that I have no body at all and am some being of pure spirit, fooled into believing in matter. Nearly everything I take myself to be could be otherwise. Descartes presents this revelation and then asks if there is anything at all that it is impossible to be mistaken about. The conclusion he comes to is that he is, in fact, a thinking thing. From this, he sets up a difference between matter and mind that he will stick with throughout this work, and later in the Passions of the Soul: the mind is that which we must be, first and foremost, and matter is that which we discover through the use of the mind. This establishes a conceptual dualism: there are things we can say about the mind that aren't true of matter, and vice versa. However, this alone is not enough to establish a metaphysical dualism; it may still be the case that matter and mind are fundamentally the same kind of stuff, just as a marble statue is distinct from a chunk of marble in a quarry but not different in substance. Is there an argument for more than a conceptual difference? Leibniz (2) actually makes one I find compelling: he points out that nothing composite, nothing physical, is going to have the potential for thought. Imagining a machine that is alleged to think, such as a brain, one can picture expanding that machine, or shrinking ourselves, until we are able to walk around inside it. All we would find is 'parts pushing on one another', or in the case of the brain, electrical currents flowing back and forth.
Nothing we saw would explain why thinking should be going on around us, and as explained before, it seems entirely possible that a machine could work in just the same way as a brain without any thinking going on at all. Leibniz concludes that only a simple substance without parts can think, and this is surely not matter.
Looking back at Descartes, we see that because of the arguments laid out in Meditation 2, if we accept that mind cannot be matter, then we also have to accept that mind is what we know first and most surely. The two arguments taken together say that not only is there another substance, but of the two of them, it is matter, not mind, that we should be somewhat skeptical of, if either. Not only is there an argument for dualism here, but if we invoke Berkeley, we could possibly argue once again for a monism that excludes matter altogether! That is unneeded for the present paper, though; it is enough to show that there are strong arguments to prefer dualism over materialism. Now, these arguments will face criticisms, it's true. However, to date, one of the strongest criticisms of dualism has been that it makes action impossible to explain, and as I've shown above, that argument fails on the grounds that the alternative, materialism, is in precisely the same situation. If either dualism or materialism is to be endorsed, it must be on some other grounds: perhaps our experiences as conscious beings, some deliverance of science, or abstract argument.
1. Alvin Plantinga, Naturalism Defeated, pp. 7-8.

2. Gottfried Wilhelm Leibniz, The Monadology, section 17.

Hey Uccisore, sorry I’m late to the party, but I’m always interested to read your new thoughts on the mind-body relationship. I’d like to offer my own materialist position as best I can describe it, as I don’t think it fits into any of the categories you describe.

Let’s suppose we have a humanoid robot (let’s call him Andy the android) with arms, legs, sensory devices, and a “brain” CPU. What if the relationship between brain processes and mental processes is like the relationship between Andy’s electronic processes and his programmatic process? (By programmatic process, I mean the process of executing whatever program makes Andy run.)

Physically, Andy’s electronic process is just a seething jumble of electrons which are occasionally affected by his senses, and occasionally cause changes to his body:

  1. Andy receives input from his senses
  2. Andy processes input in his “brain”
  3. Andy sends output to his body

If we stop here, we have an “eliminativist” description of the android; in fact, it’s exactly what an eliminativist would say about the operation of a human being named Andy. But this description is unsatisfactory whether applied to a human or an android. It doesn’t explain how the input is processed in the brain, or if/how/why mental events arise for either of the two. A more complete description of android-Andy should include the program he is running to process the input:

  1. Andy receives input from his senses
  2. Andy processes input in his “brain”
    a) sense input is fed into a program stored in Andy’s memory
    b) program performs operations on inputs and produces outputs
  3. Andy sends output to his body

But what’s the nature of the program? Let’s suppose that in the program code, we see it is written in a high-level language (call the language H), because it would be impossible for anyone to program the android at the electron-by-electron level. In this high-level code, it’s written that the inputs trigger certain “desires” to act according to “values” coded into Andy, he deliberates about how best to act in accordance with these desires and values, and then he makes a choice which is output to the body. So our model is now:

  1. Andy receives input from his senses
  2. Andy processes input in his “brain”
    a) sense input is fed into a program stored in Andy’s memory
    b) program performs operations on inputs and produces outputs
    i) Andy weighs his “desires” and “values” and makes a choice
  3. Andy sends output to his body

At this point we have a fairly satisfactory description of android-Andy. It’s also a good description of how human beings operate, both mentally and physically. Now, what if our mental process is like the high-level program code H in which Andy’s governing program is written? Under this hypothesis, our experiences of perceptions, thoughts, and desires are interpreted as high-level abstract objects signifying complex brain events. For example, my experience of hunger signifies that my brain is undergoing a process which stimulates my stomach juices and encourages my limbs to move towards food. Our perceptions, thoughts, and desires are abstract signifiers in a high-level “programming language” which the brain has evolved to help the human body act adaptively in the world, just like Andy’s “desires” programmed in H help Andy act according to the will of his human programmers. In short, mental events are a high-level abstract description of brain processes. They are an abstract structure of signs which point to the current status of the brain. Let’s name this hypothesis. In the tradition of “eliminativism” and “epiphenomenalism”, where the name refers to the status of mental events in the theory, let’s call it “abstractionism”.
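To make the analogy concrete, here is a minimal, purely illustrative Python sketch of Andy's processing loop under abstractionism. Every name in it (`Desire`, `sense`, `interpret`, `choose_action`, the hard-coded readings) is my own invention for the sake of the example, not a claim about how a real cognitive architecture would be built:

```python
# Illustrative sketch of the abstractionist model of Andy the android.
# High-level "mental" objects (desires, choices) are abstract signifiers;
# the low-level step functions stand in for raw "brain" events.

from dataclasses import dataclass

@dataclass
class Desire:
    """A high-level signifier for a complex low-level state."""
    name: str
    strength: float

def sense() -> dict:
    # Step 1: raw sensory input (a hard-coded example reading)
    return {"food_visible": True, "energy_level": 0.2}

def interpret(inputs: dict) -> list[Desire]:
    # Step 2a: low-level input is summarized as high-level desires
    desires = []
    if inputs["energy_level"] < 0.5:
        desires.append(Desire("eat", 1.0 - inputs["energy_level"]))
    if inputs["food_visible"]:
        desires.append(Desire("approach_food", 0.6))
    return desires

def choose_action(desires: list[Desire]) -> str:
    # Step 2b(i): Andy "weighs" his desires and makes a choice
    strongest = max(desires, key=lambda d: d.strength)
    return {"eat": "lift_fork", "approach_food": "walk_forward"}[strongest.name]

def act(action: str) -> str:
    # Step 3: output to the body
    return f"motor command: {action}"

print(act(choose_action(interpret(sense()))))
```

The point of the sketch is only that the "desire" objects do real work in selecting the output, while remaining nothing over and above a high-level description of the low-level process.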

Now let’s see how abstractionism answers the objections you’ve set against other materialist theories.

First, why would we expect the “right” mental event to arise from a given physical stimulus or action? To paraphrase your essay, why would the brain event that makes me lift a fork to my mouth be associated with a mental event related to the experience of doing so? Because by hypothesis, the mental event is ontologically merely a signifier of the brain event, with no independent semantic content! To say that sitting down to dinner could be experienced as a green pentagon would be as false as saying that “Batman” could signify Lois Lane. “Batman” can’t signify Lois Lane because it is a word bound by social convention to another fictional character. Similarly, the brain activity caused by sitting down to dinner won’t be experienced as a green pentagon because that experience is bound by brain convention to signify a different set of brain events, such as thinking about or seeing an actual green pentagon.

Second, how do mental events cause brain events in abstractionism? This is where the programming language interpretation comes in. Although mental events are just a system of signifiers for brain events in abstractionism, the system is physically encoded in the brain’s information processing centers, much in the same way that Andy contains memory and a compiler/interpreter for the programming language stored in its “brain”. In Andy’s brain, thoughts and desires are both signifier and cause of brain events – signifiers because the objects of the language H represent brain activity, and causes because the compiler turns them into brain activity which affects Andy’s behavior. Similarly, human mental events are both signifiers of brain states and physically encoded causes of brain events. The choice to eat peanut butter is a mental event occurring in our brain’s programming language, and this event gets translated by some kind of compiler into brain activity and action. Under this hypothesis, brain states have both propositional content (in the high-level language encoded in the brain) and causative effect (through the compiler).
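The "both signifier and cause" dual role can be pictured as a toy lookup structure: each high-level mental token points back to the brain state it stands for, and the "compiler" turns it forward into brain activity. All the token and state names here are invented for illustration:

```python
# Toy model: a mental token both signifies a brain state and, via the
# "compiler," causes further brain activity. All names are hypothetical.

SIGNIFIES = {  # mental token -> the brain state it stands for
    "hunger": "brain_state_42",
    "choose_peanut_butter": "brain_state_77",
}

COMPILES_TO = {  # mental token -> the brain activity it brings about
    "hunger": ["stimulate_stomach", "orient_toward_food"],
    "choose_peanut_butter": ["motor_plan_reach", "motor_plan_eat"],
}

def compile_mental_event(token: str) -> list[str]:
    """The 'compiler': translate a mental event into brain activity."""
    return COMPILES_TO[token]

# One and the same token plays both roles:
print(SIGNIFIES["hunger"])             # what it signifies
print(compile_mental_event("hunger"))  # what it causes
```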

In this essay, I’ve offered a (possibly new?) materialist model of cognition called abstractionism, and explained how it can answer some questions about the relationship between mental events and brain events. Abstractionism suggests that man has been programmed by evolution to act according to an evolved high-level language of thoughts and desires, similar to how man programs artificial intelligences to act according to a man-made high-level language. In abstractionism, mental events both signify and cause brain states.

A summarizing thesis:
Just as computer code is a manifestation of the abstract language by which a programmer programs a computer, mental life is a manifestation of the abstract language by which the brain programs itself.

And a slogan:
The mind is how the brain represents and programs itself.

I think this is the crux of my argument, the part that renders abstractionism immune to the EAAN. I would be most interested in your take on this.

Hi Uccisore,

I would like to offer the following critique of your submission.

Whatever problems materialists face in explaining how mental causes can have physical effects, they are not the same problems that Cartesian dualists face. Cartesian dualists are substance dualists, whilst materialists are substance monists. The defining feature of a substance is its independence from other things which exist; nothing else needs to exist in order for a substantial thing to exist. Accordingly, substance dualists posit two things which exist independently of each other (mind and matter); whilst materialists posit one thing that exists independently of anything else (matter).

The problem with interactive causation that faces the substance dualist is in explaining how, given that an effect is dependent upon its cause, one substance can affect another substance. According to substance dualism both mind and matter are sovereign unto themselves; and yet the nature of interactive causality requires that at various times one be subject to the other. Substance dualism and causal interaction are therefore incoherent. Materialism, meanwhile, avoids this difficulty by positing matter as the only substance.

Given this, you may want to either abandon your claim that “dualism” (Cartesian substance dualism?) and materialism suffer from the same problem with regards to causal interaction; or make your commitment to property dualism (rather than substance dualism) more explicit. Clarifying this point may allow your audience to better follow your argument.

:slight_smile:

Aporia–
It seems to me what you have done is add a language component to a functionalist account of how the brain might work to produce situationally appropriate action. The problem I see with your supposed Andy the Android is that your description can have it working perfectly well without any consciousness at all. Andy doesn’t need to experience what it is like to lift a fork to its mouth OR experience fear of snakes when doing so. Andy can lift fork to mouth perfectly well without experiencing anything at all.

Consciousness is superfluous to Andy’s function, just as it is to everyday computers and robots. Consciousness, if it could be produced at all, would be an unnecessary addendum to a fully operational Andy, an epiphenomenon. So why would Andy’s designers bother with it at all?

Consciousness being the sine qua non of mind, Andy the Android doesn’t get to the issue of mind at all. Language-processor Andy joins the well-functioning zombies, Cartesian automatons, and Turing machines in the scrap heap of hypotheticals that fail to experience what it is like to be what they are. Those models tell us nothing at all about the relationship of our minds to our bodies except by contrast.

felix, what does consciousness mean to you? what does it mean to “experience what it is like to be what you are”? To me, it would mean having access to your own current mental state, and being able to use that data in thinking and evaluation of your current situation. I see no reason why Andy couldn’t do that. And the uses of consciousness for Andy could be similar to its uses for us. For example, we get frustrated when our plans aren’t working, and sometimes we notice our own frustration and contemplate changing our plans as a result. This could be powerful functionality for Andy just like it is for us.

If you see no reason why Andy the Android couldn’t think and evaluate, why haven’t you put it into production already? You would be the first to invent a being that is conscious like a human. To do so, Andy would have to be able to appreciate the way things look and feel and sound and taste. Its qualitative experience would have to go beyond its ability to express what it is like in words. And Andy would not just experience a jumbled mess; no, its experience would be a coherent and unified worldview. It would be able to reflect rationally about what its world means to it. The objects of its experience would be about something and have personal meaning to it.

People have been able to construct devices that imitate human function but not human consciousness. Please explain how you can construct consciousness from matter. How would you make such a thing be conscious and aware of its surroundings? How could Andy notice anything, let alone a human emotion like its own frustration?

You can imagine a conscious android because you are conscious. Similarly you can imagine a disembodied consciousness or that your consciousness is a brain in a vat. Imagining these situations isn’t the same as achieving them. And it doesn’t solve the problem of mental causation.

I see no reason human-like thinking is impossible in an android. I am not qualified to speculate on how close we are to producing one, and I suspect it’s far off in the future. What we’re discussing here is what we mean by consciousness and what barriers there might be to a computer possessing it. There will be no proof that it’s possible until it has been done, so the best I can do right now is critique arguments that artificial consciousness is impossible.

Is this your definition of consciousness? It all sounds very reasonable. Now, what barriers do you see to an artificial consciousness having all these properties? And how would you determine whether something has consciousness by this definition? For example, how do we know when something is appreciating beyond what it can put into words? How do we know when it has a coherent and unified worldview? Unless we answer these questions, we cannot really say whether anything is conscious or not. Maybe rocks have a form of consciousness which can feel their surroundings and experience them in a way that is beyond words. We need testable criteria for consciousness if this discussion is to ever get off the ground at all. Something like the test Turing proposed for intelligence, in which a machine is deemed intelligent if it can converse with a human and the human can’t tell that it’s a machine.

It’s conceivable and not metaphysically impossible as is say 2+2=5.

I suppose it will do as a working definition. Consciousness is qualitative, centered, meaningful and intentional and yet simple and unified.

The main barrier is, as Uccisore pointed out, that while mind and body are perceived to interact in a way that is causal, neither the materialist nor the dualist can explain how.

I begin with my own consciousness and then infer consciousness in others when they exhibit behavior similar to that which derives from consciousness in my own case. While third-party monological behavior can be indicative of consciousness, the primary evidence is dialogical. We can take our discourse here as an example: I have not doubted your consciousness once during this discussion.

While I would approach a talking robot with significant skepticism initially, if conversation with the robot reached the level of discussion we are having here now, I would be forced to admit the possibility that either the robot is conscious or that I am being effectively deceived into thinking so.

In the case of our own perceptions it happens all the time. How do you describe blue to a person who has been blind since birth? Recognizing that another is conscious of an ineffable world is not as easy.

How do we know when it has a coherent and unified worldview?
Poets and novelists with the ability to evoke the illusion of a qualitative world are renowned for it. The rest of us express it to a lesser degree. I recognize consciousness in species other than humans. But it is difficult to prove to skeptics who are satisfied with functionalist explanations.

I consider it my responsibility to be true to my own consciousness in the first place. As Uccisore said, “the mind is that which we must be, first and foremost…”

I’m glad you wrote this paper Ucc. Recently I’ve been beating my head against this wall too. Not so much dividing mind from body as such, but dividing ‘carrier’ from ‘content’ - the brain’s gross physiology and the information it biologically encodes, in that simply by studying a CAT scan, one cannot discern the personality it pojects, nor by reading an autobiography, can one map adequately the brain housing the intellect that wrote it. There is the biologic self, and the informational self.

The two seem separate, and I cannot unite them, however fashionable monism may be.

I honestly don’t see the problem here for monism/materialism.

The only problem we have is that we are not equipped with a schematic of our own brains, and so we don’t know “red” as the activation of “neuron #1344623” or what have you… in fact we have no idea what the hell any of our neurons DO… much less what they are in relation to our “experiences”.

It’s like first showing you the effects of running Internet Explorer on your screen… and then showing you the effects on the hardware. There is no way those two would even remotely resemble each other… and there’s no reason to expect they should.

Let’s say we wanted an AI to guide and instruct a new user of a computer. It stands to reason that, to optimize its computational efficiency, it need not be “aware” of the hardware effects of any program… it only needs to know what is taking place on the screen, and how to manipulate THAT via the keyboard or mouse. The AI would be as blind as the user as to what is really causing all this wonderful stuff on the screen… much less the AI itself.

Now imagine a computer that needs to run certain programs at certain times, and the times keep changing and depend on variables, all of which are displayed on the computer screen. We don’t want a person to have to manage this, so we make up a “consciousness” program designed to handle the whole thing. It needs to be aware of the operating system and to have preferences for how things ought to look on the screen, so as to act at all. It needs to be able to learn, in case new programs or situations are introduced, and in order to learn, it also needs to be aware of its own actions, so as to know what caused what… also the ability to alter its own reactions so as to not make the same mistake twice…

Now what do we have that this “consciousness” program would lack?

Where’s the problem?

Sure it’s primitive… sure it’s digital and not analog like us… but you pretty much have everything you need there… now it’s just a question of complexity.
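To make the thought experiment concrete, here is a minimal, purely illustrative sketch of the loop such a “consciousness” program would run - observe a state, act, notice the outcome of its own action, and adjust so it doesn’t repeat the mistake. All the names and states here are invented for illustration; this is a toy, not a claim about how real minds or real AI work.

```python
# Toy sketch of the "consciousness" program described above: it is aware
# of its own actions and their outcomes, and learns not to repeat mistakes.
class ToyAgent:
    def __init__(self):
        # pairs of (state, action) that previously led to bad outcomes
        self.avoid = set()

    def choose(self, state, actions):
        # prefer an action we haven't learned to avoid in this state
        for action in actions:
            if (state, action) not in self.avoid:
                return action
        return actions[0]  # nothing better is known; fall back

    def learn(self, state, action, outcome_ok):
        # awareness of its own action plus the outcome updates preferences
        if not outcome_ok:
            self.avoid.add((state, action))

agent = ToyAgent()
# first try: the chosen action fails in state "boot"
first = agent.choose("boot", ["open_a", "open_b"])
agent.learn("boot", first, outcome_ok=False)
# second try: the failed action is no longer repeated
second = agent.choose("boot", ["open_a", "open_b"])
```

Everything the post lists is there in miniature: state awareness, preference, action, and self-correction - the rest, as the post says, is a question of complexity.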

Or more importantly… how do you describe what YOU experience when exposed to a certain wavelength of EM radiation to anyone else? They might very well experience it very differently than you, but have learned to call it “blue” all the same. So the blind person might as well call a sound “blue”, and for all you know the difference in how it is experienced between you is no greater than how you and another sighted person experience the wavelength of EMR known as “blue”…

So maybe all you are describing when you speak of the quality of “blue” is the activation of a neuron or a cluster of neurons in your own brain… but because you don’t have a schematic or model of your own brain (nor, for that matter, the ability to calculate the significance of any neural activity even if you did), you fail to recognize it as such… and thus you can’t describe it in any meaningful way.

P.S. I urge anyone interested in the nature of consciousness to try to read up on neuroscience and the progress we’ve made mapping the human brain… There’s A LOT of data that supports the notion that the mind is software running on the brain… and virtually nothing other than our ignorance suggests that it might not be.

I’ve read a lot of neuroscience, thank you very much.

The trouble with the software/hardware bit is that it is the wrong distinction to try to make. Because there is a third element.

Okay - we have hardware, a laptop. We have software, microsoft word. Then we have the story I just wrote on it regarding the adventures of three small pigs, assorted building materials and a wolf.

Now I agree, making a distinction of software/hardware is not really very important at all. But making a distinction between the meaning of the symbols which are pixellated and the thing that is doing the pixellating… is a different matter entirely.

Meaning is recursive, any word or phrase only being explainable by using yet more language or some other conveyer of meaning. And as such, isolated. What can the brain know of the difference between the colour blue and the ‘blues’, or the state of a banana looking a bit like a big yellow willy…?

Good for you… :smiley:

I’m sorry… you’ve lost me… I have no clue what you’re talking about… maybe you could explain it in another way?

Sorry for being obtuse. I’ll have another go tonight. :wink:

nothing obtuse there tab…

“the limits of my language are the limits of my world” - Wittgenstein

the question of meaning existing outside the language is the key

-Imp

A color spectrometer may register information about the spectrum of light, but it doesn’t see color the way you and I do. It’s conceivable that the mind exists apart from the brain. Therefore, they are not identical.

I don’t experience neurons or clusters of neurons or my own brain unless I am undergoing neurosurgery. I doubt that you do either. I usually experience a coherent, unified picture of my feelings, thoughts, and memories, and my perceptions of the outer world.

Neuroscience by itself doesn’t exhibit any fallacy in the arguments for dualism presented here.

Thanks Imp. I don’t suppose Witty had a handy answer left lying about, did he…?

I’m knackered so let me link you to a post or two on my blog:

http://writeitorbust.blogspot.com/2009/03/ducks-of-meaning.html

http://writeitorbust.blogspot.com/2009/03/carrier-and-content.html

yeah, he disavowed the Tractatus, crushing the logical positivists…

seems handy enough

-Imp

Well, well done to Witty. It’s not often anyone disavows a work written by their own pen; most hold on for grim death. Mind you, I’m not surprised that positivism took over from Hegelian metaphysics. He was a complete loon with his meaningless triads of thesis, anti, and synth.

clean, dirty => soap.

:confused:

Is that where your love of ‘ambiguous conveyance’ comes from…?