Materialism

Carleas
I understand Materialism to be the belief that only the matter/energy stuff exists, not a particular practice of anything.

 You have a good point here. I do believe that materialistic explanations can be presented in such a way that they don't detract from the human experience- as you say, if pain is a particular arrangement of nervous responses, it doesn't follow that it [i]is not[/i] also heartbreak, or what-have-you. Both descriptions are equally valid as far as the descriptions go. However, I don't think the explanation really adds any meaning, or explains our sense of meaning, which is the larger problem.
 That is to say, while the materialist is not obligated to [i]deny[/i] that there is such a thing as heartbreak, really, if they are asked what heartbreak is, they aren't going to have an answer available that justifies the songs and poems written about it, either.

I disagree with this- if materialism is a means of discovering explanations, then I am in large part a materialist, as I suppose are most other theists. You would be hard pressed to find a religious believer that doesn’t think empiricism is good for anything these days. I don’t think materialism becomes interesting contra religion until it makes claims about the sum total of what exists.

Maybe not refuted, but if there are other methods that do have explanations, and they are superior to what materialism has proposed (if not definitively), then that’s a good reason to go along with the alternatives.

This is important: When one is sufficiently committed to a particular system (That is, when one has enough faith), causes of skepticism become sources of wonder.

That said, there is very good reason to posit more than the brain to explain the mind.

Suppose we added more people to the Earth- say a trillion total, or maybe half a trillion. Now let’s arrange everybody on Earth so that they are connected through vision- that is, every single person can see many other people, and is seen by many other people. Can you picture that? Now, give each one of these people a card, with a 0 on one side, and a 1 on the other.
If we give them certain rules to follow for flipping their cards (flip your card if you see fewer than 1000 zeros, don’t flip your card if you see more than 75 people simultaneously flipping theirs, etc.), we could run computer programs in this way, you see? With enough people flipping enough cards (like I said, a trillion or so) we could run a human consciousness program.
The problem is, by materialism, we wouldn’t have a simulation of human consciousness, or a representation of it, we would have an actual, world-spanning consciousness every bit as real as yours, and just as capable of abstract thought, creativity, and independent motivation as you are. From people flipping cards according to rules.
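(If it helps to see the idea mechanically, here's a minimal sketch in Python of the kind of rule-following card network I mean. The network size, the wiring, and the flip rule are all made up for illustration- the point is only that people following local flip rules are, formally, a computer.)

[code]
import random

# Toy version of the card-flipping network: each "person" holds a card
# showing 0 or 1, watches a handful of other people, and follows one
# local rule. The sizes and the rule are illustrative stand-ins.

N = 1000   # people (stand-in for the trillion)
K = 10     # how many other people each person can see
random.seed(0)

cards = [random.randint(0, 1) for _ in range(N)]
# Fixed "lines of sight": person i watches K other people.
sees = [random.sample([j for j in range(N) if j != i], K) for i in range(N)]

def step(cards):
    """One synchronous round: flip your card if most people you see show 1."""
    new_cards = []
    for i in range(N):
        ones = sum(cards[j] for j in sees[i])
        new_cards.append(1 - cards[i] if ones > K // 2 else cards[i])
    return new_cards

for t in range(5):
    cards = step(cards)
    print("round", t, "-", sum(cards), "cards showing 1")
[/code]

Nothing in that loop cares whether the units are transistors or people, and with enough units and the right rules you could, in principle, run any program at all- which is the whole force of the hypothetical.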

To further complicate the problem, we have random systems in place already like that- the patterns of raindrops striking pavement, or the random twinkling of stars.  Far, far worse than the monkeys typing on keyboards, enough stars twinkling randomly over enough time will bring about a sense of humor, if only for a moment.  Again, not a simulation or a representation- the actual feeling of humor will be there (as to who is having the feeling, anyone's guess).  The main problem here is that we think of consciousness as centered on a sort of 'point'- that is, we can say where it is, or [i]who[/i] is conscious. A materialistic explanation doesn't allow for that.

Argument to absurdity, or sense of wonder? Depends on your commitments.

Morals: What’s tricky in morality is not to explain why rules for behavior exist, but why certain things are Good and others Evil. If they aren’t, and all you can explain is why our brains are put together to experience them that way, then that’s the sort of reductivism you’d want to avoid.

Then I’m a materialist who believes in God, and I don’t see what the dispute is.

We can quickly see that our intelligence is rooted in our physicality. The biggest problem I see with most modern AI work is that it relies on a non-corporeal intelligence.

If you give a robot very minimal intelligence but give it a body, it can navigate around objects much more quickly than if you give a robot incredible intelligence and have it map out a way across a room on a theoretical level and then move a dummy bot across the real terrain.

Out of Control by Kevin Kelly is a pretty good lay resource on the topic. I disagree with him on just what emergent behaviours are (I think that we can get everything from first principles, he does not – but we’ve accomplished that with water, so that is a nice feather in my side’s hat).

Intellect is rooted in the body. Remove the body and what you end up with is a calculator. Human intellect is an outgrowth of the intellect seen in other animals. Our intellect is rooted in our physicality.

Think about it: I could explain the way my apartment looks to you in such a way that you would have a pretty good intellectual understanding of it. If I then gave you a blind robot to navigate around the apartment, in a race a blindfolded me will win 100% of the time – and that is assuming you have a map and I don’t! Heck, a blindfolded random person who has never been to my apartment would win given much less information than you had.
Alternatively, I could explain to you certain techniques that I perform at my job. While you would have a very solid intellectual understanding of what they are, you would have great difficulty doing them the first few times, simply because the real knowledge of how to perform them is rooted in the physical – indeed, many things really only start to make sense once you do them, because the intellectual explanation of them is far too difficult. I could explain how riding a bike works based on the theory of inertia and momentum transfer (as well as how gears and all that work), but none of that will actually help someone learn how to ride a bike.

Indeed, we’ve already created non-organic creatures with intelligence on the level of insects (they say ‘ants’, but the robots aren’t social, so beetles might be a more appropriate description). No reason why we couldn’t take that further. This isn’t a case of intelligence from non-intelligence, but rather physicality manifesting itself in action. Intellect is action in a system with many degrees of freedom.

Given that we already have matrices that ‘dream’, I think it is more than fair to say that these indirect thoughts are just a means of clearing out the junk we carry around in our mind. It is akin to defragmenting a hard drive. It has already been documented that if a person is prevented from dreaming they go pretty crazy pretty quickly. That makes good sense, since they are trying to hold everything in their short-term memory and there just isn’t enough room for it. Continuing in the vein of materialism, I would propose that we treat thoughts as ‘things’, because that is what they are – manifestations of controlled processes in our bodies. Think about it in terms of signaling cascades – there is a signal, a pathway that the signal travels, the means whereby the response can be actualized, and the response itself. I think of thoughts as the means whereby the response can be actualized. A step in the process, if you will.

The ‘mind’ is nothing more than what we call the network of the brain, in the same way that water in a glass is ‘water’ and not just ‘H2O’. So, we could get that world-spanning consciousness going if we could a) properly link everybody’s brains together, rather than having them as separately operating binary units, and b) give that brain some sort of body to interact with.

Materialism entails a practice. Since ‘matter’ is ill-defined, unnuanced materialism should be identified by the practice that it entails, because that practice is the same regardless of how matter is defined. The definition of matter is sticky, no doubt, but the general framework of materialism works without pinning it down, because it sets up a way to explore. And, of course, I agree that you are in large part a materialist. In your daily life, I’d wager that you’re almost entirely a materialist, solving problems as though there were only material in front of you.
The only time you might be less material is when you’re dealing with people. But people, if they’re material, are really complex material objects, so the practical theory is one that generalizes where a more accurate theory would stall over the unknowns.
There are a lot of problems with the Chinese Room thought experiment, which is essentially what you set up (though in a more general context). My personal favorite is that it is just messing with the scale on which our consciousness works, and then saying that it looks so funny, it must be impossible. Why couldn’t the entire world be conscious? By what output should its consciousness be judged? If we make computer programs that are smaller and faster than this theoretical network would be, but just as complex and powerful, and they are indistinguishable from your everyday consciousness, would that seal the deal?
Stars, I have other problems with. First, the network doesn’t exist, because the twinkling is not inter-related. A bunch of unrelated phenomena aren’t set up anything like the neurons in our brain. Second, consciousness seems to entail continuity, so a moment of consciousness is hard to identify. Consciousness, wherever it is instanced, includes inputs and outputs over time.

Ierrellus, I’m sorry, I don’t get it.

Carleas

To my mind you are equating materialism with empiricism. I don’t have any quarrel with empiricism, and to say that a person can be ‘in large part a materialist’ makes no sense to me- it is exactly like saying a person is ‘in large part a monist’; they mostly think there is only one fundamental substance, except for this other one. Since we’re using a lot of different terminology, I’ll try to pick my way through it to respond as best I can.

The Chinese Room argument makes a point that’s more blunt, and in a slightly different direction, than what I wanted to go for. To answer one of your questions, it shows that we can’t identify consciousness through output, period. The original formulation of the situation I presented was given to me as a skyscraper; I changed it to a planet for taste. Now, when you say,

That's more or less what a [i]reductio ad absurdum[/i] is, and that's exactly what I'm going for.  What I'm saying is that a gigantic consciousness formed by people flipping index cards just-so is completely consistent, and very possible, on materialism- and materialism suffers for it.  That is, materialism is damaged by the fact that it leads to such bizarre scenarios and conclusions.  I don't think my risen Christ looks so bad by comparison. 

I don’t understand why inter-relatedness has any effect. Back to the people with index cards- If these trillion people over here flip their cards just so as a result of responding to each other according to rules, and those trillion people over there flip their cards in exactly the same way through blind chance, are you saying that the first situation would produce conscious thought and the second not? If not, then I think my example with the stars works just fine.

Yes, which is another part of the problem. My index-card mind and your computer mind could easily be played and replayed through certain sequences to reproduce the same instants of conscious thought over and over, or once and never again. That is to say, you could run the ‘remembering last Tuesday’ part of the program alone, with nothing before it and nothing after it. You would have a conscious mind- conscious in just the way you are- coming into existence long enough to remember last Tuesday, and then being annihilated.

To bring it back to Earth, if you cut out the part of my brain that concerns itself with remembering last Tuesday and throw it in a ditch, then my memory of last Tuesday is in that ditch. If you stimulate it with the right current, then last Tuesday is being remembered in that ditch. That you can have an act of memory without a rememberer is a real problem.

Sure. The actual method of incorporating it would depend on the exact definition you wish to incorporate. Good can be an adjective describing moral actions, or it can be an adjective meaning beneficial or something, or it can be a noun used to denote, well, I’m not sure what. What do you mean by good and evil?

You can be “in large part materialist” if 95% of your actions are those of a materialist, and it is only in low-stakes situations when you are anti-materialist.

Let me try to explain why the fault does not lie with materialism. As I said before, you’re just messing with scale, applying the same principle to a ridiculous scale. It’s like me saying “what if a 3000 pound toad that was orbiting the sun had a soul? It would have to be a person! What an absurd conclusion!”, or something similar. See, you’ve set up a pretty bizarre scenario before you apply materialist conclusions: trillions of people standing around raising signs in response to each other is weird, regardless of what it produces.
But we have every reason to believe that Windows, Unix, or any other common operating system could run (albeit slowly) on such uncommon hardware, and that seems just as weird. Does that mean computer science suffers for it? I don’t think so.

Interrelatedness has an effect because the mind is a process, not a state (I’m not sure if Xunzian meant it as a state when he said “The ‘mind’ is nothing more than what we call the network of the brain,” but if it was I diverge with him here). Continue with the computer example: If I take a frozen image of the programs I’m running right now, do I have an instant of a program? If a bunch of people randomly hold up their signs in just such a way that they match the signs held by people running OSX 10.4, is it an instant of 10.4? If it is, what does that mean?
Here’s another example. I have a string of shapes: Circles (c), Triangles (t), and Squares (s). They form a pattern: c t s c t s c t s c t s c t s c t s. Now, is the circle an instance of the pattern? It doesn’t seem so. It seems strange to say that if the shapes were arranged randomly, each would be an instant of the pattern. In the same way, the mind is the pattern, and a single brain-state is not consciousness without the connection to the preceding and following brain-state.
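(A quick sketch of that point, if it helps: any test for the pattern has to be handed a stretch of the sequence- hand it a single shape and there is nothing for it to check. The function is invented purely for this example.)

[code]
def is_instance_of_pattern(seq, unit=("c", "t", "s")):
    """True only if seq is one or more whole repetitions of the unit."""
    n = len(unit)
    if len(seq) < n or len(seq) % n != 0:
        return False
    return all(seq[i] == unit[i % n] for i in range(len(seq)))

print(is_instance_of_pattern(("c", "t", "s", "c", "t", "s")))  # True
print(is_instance_of_pattern(("c",)))  # False: a lone circle carries no pattern
[/code]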
“Remembering last Tuesday” is another case of poor context. If I design a computer that has a word document on it, is it so terrible? Nobody ‘wrote’ the document, and yet the computer seems to ‘remember’ that a word document was written to it. It might even be dated from last Tuesday, but it’s still not a problem for computer theory.
Now, let’s throw the part of the computer that represents that document, written last Tuesday, in a ditch. What’s that, there is no one part of your computer that concerns itself with this document? Well, of course, the document is a complex entity, comprising graphics, memory, logic, etc., and drawing from all over the hardware. So there is no problem with the word processing document existing in the ditch, because to throw the document in the ditch, you need most of the computer.
Basically, the folk break-down of the mind is not a one to one match with the material break-down. That’s not to say that there isn’t a material correlate process that maps the folk break down, or that the mind can’t be described materially, but that there aren’t bits of your brain representing certain thoughts or feelings. Thoughts and feelings are processes, a continuity of entire brain-states.

Why are we still splitting what is one thing with aspects into polarities? Mind, body, spirit are one thing.

My take on it is that they’re both one thing and not one thing. We exist here, posting in a forum in a conventional realm, if you will, where they are viewed as ‘not one thing’. It’s half of a duality. Thus they’re noted as differences and discussed from that perspective.

I guess my only question to you would be in what way are they ‘one thing’?

Carleas

Right, but you have to look at the point of separation between the two systems we're comparing:  Yes, a 3000 pound toad orbiting the Sun is pretty ridiculous.  But it would be equally so in both materialist and non-materialist systems. Like you said, it's just a bizarre thing to propose.  Likewise with a computer made of 1 trillion people holding flash cards.  A very bizarre hypothetical, no doubt. But the argument doesn't revolve around "Isn't that many people holding flashcards bizarre?"; it revolves around the fact that a materialist [i]has to[/i] admit that such a situation could result in consciousness, whereas some other system may not.  That's the absurdity in the argument- the very thing that materialism forces us to conclude about the hypothetical.  That's why the argument works.

Computer Science as opposed to what? I don’t know that there’s some other theory that explains what computers do. But no, I completely disagree that running the Frogger code on flashcards is just as absurd as creating an authentic human consciousness with them.

Sure, I can go along with that, but just extend the instance of the program from an instant to something tangible like 3 seconds, and I think my argument is preserved- so we have flash cards, or stars, or stimulated ganglia lying in a ditch forcing a consciousness to come into existence, have your memories of last Tuesday for 3 seconds, and then annihilate. You may call ‘remembering last Tuesday’ a single brain state, but in fact it’s a composite thing that can progress from a beginning through to an end, and I don’t see why that wouldn’t be consciousness.

Do you, though? If I wanted to transfer the word document from one computer to another, I wouldn't need to transfer all that stuff, I'd only need a tiny bit of data- even if the transfer was to be unnecessarily physical, I would only be cutting out the smallest portion of the computer's hard drive, and sticking it in another computer, to have the same document. So yes, you certainly could take the computer's hard drive, toss it in a ditch, shock it with the right amount of electricity in the right way, and 'run' the word document, right? I'm no computer scientist, but I don't see why not. All the other stuff, graphics and memory, revolves around outside observers witnessing the program in some way.
And that's a big part of the difference that makes the consciousness thing absurd- consciousness is witnessed by default, because it's an act that its only witness performs.  So, we can run a computer program without a monitor or any graphics engine with the understanding that the program is running 'invisibly'.  But once we get to consciousness, there is always a witness- because that's part of what a conscious thought is. So questions like "If I stimulate disembodied memory ganglia, who is doing the remembering?" become difficult.

As long as we’re stuck with materialism, I’m pretty sure this has to be false- if you can say that my feet aren’t involved with remembering last Tuesday, you should be able to say which parts of my brain are and are not involved, as well. Every part can’t be used to remember every thing. I mean, isn’t the whole foundation of the brain = mind argument based on the fact that if we damage particular, predictable portions of the brain, then predictable, particular mental functions become inhibited?

Towards the end, we get into language problems. Language developed long before the brain’s role in cognition was understood. Questions like ‘who’s doing the remembering’ make me wonder if we can talk about a ‘who’ and the neurons in a brain in the same sentence. Let’s say I had an operating system encoded in hardware and divided into many chunks, with some part of each system allocated to each chunk. I put these chunks all over the globe, and then run the OS. Does the question “where is the OS?” make sense? The language struggles, but the theory doesn’t.
The problem with ‘remembering last Tuesday’ is the same. As far as we know, only people remember. Our language was set up around people remembering things and telling us about it, or us remembering things. Recently, certain parts of computers have been dubbed ‘memory’, and we use that to talk about the information capacity of a computer hard drive. But still, when we say ‘a memory’, we don’t think of it as something that can lie in a ditch. Can a pattern lie in a ditch? Yeah, if we throw the shapes in the ditch that way. It’s more of an abuse of language than an abuse of reality. Are there correlations between thoughts and brains? Yes, and that’s not so strange. Can a running process that is correlated with a thought lie in a ditch? No reason why it couldn’t. Can we eliminate all this explanation and say “there’s a memory of last Tuesday lying in that ditch”? Only if we want to cloud the merits of a theory that is not significantly stranger than the predictions of other well-established and universally accepted theories (I don’t see why the shift to running a computer on a trillion people spread across the world isn’t as absurd. You could play Halo on these people. Playing Halo on people. What? That’s not weird? Come on).

This may be a slight tangent, but I think it is relevant background discussion for the topic at hand: If I give you a USB drive with a document on it, have I given you the document? No, not really. An alien culture which somehow obtained the drive couldn’t decode it, for a number of reasons: First, the 1s and 0s stored on it do not translate directly into the document. Rather, they talk to the computer system that will be running the drive, which interprets the drive in a standard way. The computer reading the drive inserts some information into the drive in the translation process. Then we get output into letters, and you read them. There is information inserted at this stage, because you must interpret the shapes into recognizable characters that mean something to you. Maybe I insert a few Russian characters that look similar to English, but you read them as their English look-alikes. On top of that, you must interpret the words (maybe I spelled some wrong and you assumed I was using a totally different word), the sentences (maybe my sarcasm was lost), and the meaning of the whole piece (if you took it seriously and I meant it as a joke, you would understand it differently). Now, this drive had all the information on it to present to you the document I created, but much of the information is implied, and information must be inserted by other systems to interpret the data. So throwing a hard drive in a ditch isn’t enough, because part of our experience of a word processing document is in the way it is read by our computer.
Again carrying the analogy to the brain, a certain part of our brain might be associated with a memory, but that doesn’t mean that it is the only part of our brain that is active when we remember it. The rest of the brain could be playing a vital role in making sense of the ‘zeros and ones’ of that memory.
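(The interpretation point can be shown in a couple of lines of Python, for what it's worth- the very same bytes come out as different text depending on which decoding scheme the reading system supplies. The byte values here are arbitrary; what matters is that the 'reader' contributes information the drive alone doesn't carry.)

[code]
data = bytes([0xd0, 0x9c, 0xd0, 0xb8, 0xd1, 0x80])  # six bytes on the "drive"

# The same bits, read by two different interpreting systems:
print(data.decode("utf-8"))    # 'Мир' - Russian for "world"
print(data.decode("latin-1"))  # mojibake: 'Ð\x9cÐ¸Ñ\x80'
[/code]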

Why is that method important when it’s always assumed that good actions come from good nouns (or beings) unless an exception is being made?

Do good and evil exist?

If so give me an example.

How do you determine the difference?

Given materialism, of course good and evil don’t exist objectively… they are man-made terms meant to describe behavior and/or emotion… as for defining the words… that’s been done to death… and really… it’s a matter of agreement between people…

Only theists believe that good and evil exist as objective terms decreed by some mythical god figure… when really it’s a social agreement… hence the variations of understanding between different cultures.

Carleas

I'm glad you brought this up, it's something we need to expand on some, I think.  I don't think the problem here is with language, but with identity.  The difference between consciousness and an OS (or, the difference between consciousness and [i]everything[/i], really) is that it seems to be indivisible and always associated with identity.  If you chop up a computer, and run its different parts in different parts of the world, then you could say that's 20 different computers, or one computer all spread out, or whatever- it's really an arbitrary distinction, and some of it is based on the language we use to describe things, as you say.   However, if you do the same thing with my brain, it makes no sense to say "Those are all me," because there can only be one of me. If I am here in this chair thinking, I am not also over in China thinking- and I am not partially over in China thinking, either.

The other thing is that thinking is only encountered in consciousness. If you ran a portion of a brain in another place, and all tests indicated it was producing sophisticated thoughts, by all accounts there would have to be a ‘someone’- an I- having them, because that’s the only situation in which thoughts occur. Again, we could say that consciousness is somehow emergent and not dependent on brain activity indications of thought, but that’s hard for a materialist. By definition, any conscious act or instance of consciousness is ‘somebody’.

I don’t think there’s anything strange about the storage (organic or otherwise) of a memory lying in a ditch. What I’m saying is that if it’s properly stimulated to produce an act of remembering, then there’s a consciousness involved. Or rather, I think there’s probably not, but that materialism wouldn’t be able to account for that or prefer that explanation.

You're probably right, and I'm tempted to concede that side of the argument, but one final question:  Making sense to [i]whom[/i]? If you run the tiniest portion of the brain with the memory on it, but the interpretation parts are nowhere to be had, in what way can you say the memory data isn't 'making sense'?  Is there a person who is failing to understand it?
 When a computer interprets data, it's either from a foreign system, or it's changing that data (1's and 0's) into a form an end user can understand, right?  I agree with you that aliens wouldn't be able to read our Word documents, but in the case of a brain, there are no aliens, foreign systems, or end users. All of that is self contained.

I’m interested in this statement: “We could say that consciousness is somehow emergent and not dependent on brain activity indications of thought, but that’s hard for a materialist.” My understanding is that emergence is all about brain activity, and it’s the only way to go for a materialist. Consciousness for the materialist is an emergent property of the brain (or, nod to Xunz, of the brain and body.) So, I don’t see why the consciousness is different from the OS in my example. Just as you could say the computer is all over the place, you can say that the brain is all over the place. But the question “where’s the OS/consciousness”, that’s somewhat harder. The consciousness doesn’t strike me as harder to place than the OS. The ‘who’ of the system, it seems, isn’t something that has a place. If I have a pattern that is two miles long, and I ask where the pattern is, there isn’t anywhere along it that one can point and say “there is the pattern”. The pattern emerges from the whole. Likewise, the OSness or whoness of a system is a property of the whole system, not a thing that has a place in the system.
In the case of the memory, I don’t think that the materialist is committed to saying that the memory exists if the consciousness doesn’t. If there is no who, there may be no memory. Like the document isn’t a document without a computer properly tuned, it might be that a memory just isn’t a memory without a consciousness to interpret it.
The problem this might run into is one of degrees, but it is easy to explain. How many memory bits do we need before a thing is conscious? Or how much brain of all different sorts? Well, how many grains of rice comprise a pile? Does that question indicate that there’s more to a pile of rice besides the rice? It doesn’t seem so, it just seems like the word is not strictly defined. When in life does a human become conscious? It’s a difficult question, but it’s really one of semantics. We could define the number of layers of self-reference one needs to be considered conscious (thinking about oneself thinking about oneself thinking about…), but that’s sort of arbitrary. Where along the spectrum of life do thoughts come into play? A bug? A mouse? A dolphin? I don’t know that there needs to be a cutoff in order for materialism to be coherent. (I don’t mean to make a straw man, I know you haven’t argued this, but it did seem like something that should be addressed.)

–Extending the Neo-Confucian Tradition by Michael Kalton

Now, there are a fair number of metaphysical assumptions I am throwing into that ring about what materialism ought to be/contain… however, I do think that when discussing materialism the notion of networking needs to play a major role.

The graphite in your pencil and the diamond in the jewelry store are both made of carbon, just carbon. But, organize the carbon differently and you get a very different product! The same principle applies to things like proteins, which are made up of just twenty amino acids. Put them together one way and you can break sugar down into alcohol, put them together another way and they become a very small motor driving flagellar motion, put them together another way and they can react to light. And then you can network these networks, forming super-structures, where sugar is broken down into alcohol to provide power for the flagellum, which allows the organism to move towards the light. This isn’t ‘emergent behaviour’, per se, because it isn’t some ‘new’ function, but rather a coordination of already existing functions. At the same time, if you take them as separate entities, they don’t necessarily make a lot of sense. That is where the ‘memory in a ditch’ falls apart. Unlike computers, the human mind is a distributed network, so physically removing a memory becomes a good deal more complicated.

What is this “I” who acts upon the brain to “alter or excite” it?

“Seems to be” is right. We’re heading down the path to manipulating genetic codes, controlling states of mind through molecular neuropsychopharmacology, altering brain functions with microchip implants that will affect cognition, consciousness and the sense of identity itself. Quantum physics is creating (has created) a paradigmatic shift in our understanding of reality.

There is no ‘I’, it is an artifact.

Xunzian, I think the reason that ‘emergent properties’ is a necessary description is that, as Uccisore has pointed out repeatedly, we seem to experience ‘consciousness’. It may be a coordination of smaller functions, but it is unsatisfactory to many to simply say “we don’t need to explain consciousness, because it doesn’t exist. We only need to explain the lower-level functions.” When we talk about consciousness as an emergent property, we don’t need to deny that it is a thing all its own, but we can still explain it in terms of the coordination of other functions.

Willamena, the I is the emergent property of a complete brain (or body).

Of course consciousness is real.

But it is just a manifestation of the degrees of freedom established by the network of our mind. If there is a robot that can only go right, then it has no degrees of freedom. As soon as you add the notion of left and right, you have a single degree of freedom. You can actually observe moderately complex behaviours in constructs with that single degree of freedom, as I mentioned earlier in the thread.

Now, when we have the human example where we can do many, many things, the degrees of freedom for our mind approaches infinity (well, it is very large, at the very least). Our consciousness is just shifting from a state of freedom to a state of action (where freedom is gradually restricted until the act is completed and the degrees of freedom are reduced to zero).

The ‘I’ as a placeholder for this process is what is the artifact. It represents a reduction in the degrees of freedom based upon specialization and experience.