Materialism

Sure. The actual method of incorporating it would depend on the exact definition you wish to incorporate. “Good” can be an adjective describing moral actions, it can be an adjective meaning something like “beneficial”, or it can be a noun used to denote, well, I’m not sure what. What do you mean by good and evil?

You can be “in large part materialist” if 95% of your actions are those of a materialist, and it is only in low-stakes situations that you act as an anti-materialist.

Let me try to explain why the fault does not lie with materialism. As I said before, you’re just messing with scale, applying the same principle at a ridiculous magnitude. It’s like me saying “what if a 3000 pound toad that was orbiting the sun had a soul? It would have to be a person! What an absurd conclusion!”, or something similar. See, you’ve set up a pretty bizarre scenario before you apply any materialist conclusions: trillions of people standing around raising signs in response to each other is weird, regardless of what it produces.
But we have every reason to believe that Windows, Unix, or any other common operating system could run (albeit slowly) on such uncommon hardware, and that seems just as weird. Does that mean computer science suffers for it? I don’t think so.

Interrelatedness has an effect because the mind is a process, not a state (I’m not sure if Xunzian meant it as a state when he said “The ‘mind’ is nothing more than what we call the network of the brain,” but if so I diverge from him here). To continue with the computer example: if I take a frozen image of the programs I’m running right now, do I have an instance of a program? If a bunch of people holding up signs randomly happen to hold them in just the arrangement of signs held by people running OSX 10.4, is that an instance of 10.4? If it is, what does that mean?
Here’s another example. I have a string of shapes: Circles (c), Triangles (t) and Squares (s). They form a pattern: c t s c t s c t s c t s c t s c t s. Now, is the circle an instance of the pattern? It doesn’t seem so. And it seems strange to say that if the shapes were arranged randomly, each would be an instance of the pattern. In the same way, the mind is the pattern, and a single brain-state is not consciousness without the connection to the preceding and following brain-states.
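To make that concrete, here is a minimal sketch (my own encoding, with the shapes as characters): whether the repeating pattern is present is a property of the sequence as a whole, not of any single element in it.

[code]
# A sketch: the pattern is a property of the whole sequence, not of any element.
def is_repetition_of(seq, unit):
    """True if seq is exactly unit repeated end to end."""
    return len(seq) % len(unit) == 0 and unit * (len(seq) // len(unit)) == seq

print(is_repetition_of("ctsctscts", "cts"))  # True: the sequence instantiates the pattern
print(is_repetition_of("c", "cts"))          # False: a lone circle is no instance of it
[/code]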
“Remembering last Tuesday” is another case of poor context. If I design a computer that has a word document on it, is it so terrible? Nobody ‘wrote’ the document, and yet the computer seems to ‘remember’ that a word document was written to it. It might even be dated from last Tuesday, but it’s still not a problem for computer theory.
Now, let’s throw the part of the computer that represents that document, written last Tuesday, in a ditch. What’s that, there is no one part of your computer that concerns itself with this document? Well, of course: the document is a complex entity, comprising graphics, memory, logic, etc., and drawing from all over the hardware. So there is no problem with the word processing document existing in the ditch, because to throw the document in the ditch, you need to throw in most of the computer.
Basically, the folk break-down of the mind is not a one-to-one match with the material break-down. That’s not to say that there isn’t a material correlate process that maps onto the folk break-down, or that the mind can’t be described materially, but that there aren’t discrete bits of your brain representing certain thoughts or feelings. Thoughts and feelings are processes, a continuity of entire brain-states.

Why are we still splitting what is one thing with aspects into polarities? Mind, body, spirit are one thing.

My take on it is that they’re both one thing and not one thing. We exist here, posting in a forum in a conventional realm, if you will, where they are viewed as ‘not one thing’. It’s half of a duality. Thus they’re noted as differences and discussed from that perspective.

I guess my only question to you would be in what way are they ‘one thing’?

Carleas

Right, but you have to look at the point of separation between the two systems we're comparing: Yes, a 3000 pound toad orbiting the Sun is pretty ridiculous. But it would be equally so in both materialist and non-materialist systems. Like you said, it's just a bizarre thing to propose. Likewise with a computer made of 1 trillion people holding flash cards. A very bizarre hypothetical, no doubt. But the argument doesn't revolve around "Isn't that many people holding flash cards bizarre?"; it revolves around the difference that a materialist [i]has to[/i] admit that such a situation could result in consciousness, whereas some other system may not. That's the absurdity in the argument- the very thing that materialism forces us to conclude about the hypothetical. That's why the argument works.

Computer Science as opposed to what? I don’t know that there’s some other theory that explains what computers do. But no, I completely disagree that running the Frogger code on flashcards is just as absurd as creating an authentic human consciousness with them.

Sure, I can go along with that, but just extend the instance of the program from an instant to something tangible like 3 seconds, and I think my argument is preserved- so we have flash cards, or stars, or stimulated ganglia lying in a ditch forcing a consciousness to come into existence, have your memories of last Tuesday for 3 seconds, and then annihilate. You may call ‘remembering last Tuesday’ a single brain state, but in fact it’s a composite thing that can progress from a beginning through to an end, and I don’t see why that wouldn’t be consciousness.

Do you, though? If I wanted to transfer the word document from one computer to another, I wouldn't need to transfer all that stuff, I'd only need a tiny bit of data- even if the transfer was to be unnecessarily physical, I would only be cutting out the smallest portion of the computer's hard drive and sticking it in another computer to have the same document. So yes, you certainly could take the computer's hard drive, toss it in a ditch, shock it with the right amount of electricity in the right way, and 'run' the word document, right? I'm no computer scientist, but I don't see why not. All the other stuff, graphics and memory, revolves around outside observers witnessing the program in some way.
And that's a big part of the difference that makes the consciousness thing absurd- consciousness is witnessed by default, because it's an act that its only witness performs. So, we can run a computer program without a monitor or any graphics engine with the understanding that the program is running 'invisibly'. But once we get to consciousness, there is always a witness- because that's part of what a conscious thought is. So questions like "If I stimulate disembodied memory ganglia, who is doing the remembering?" become difficult.

As long as we’re stuck with materialism, I’m pretty sure this has to be false- if you can say that my feet aren’t involved with remembering last Tuesday, you should be able to say which parts of my brain are and are not involved, as well. Every part can’t be used to remember everything. I mean, isn’t the whole foundation of the brain = mind argument based on the fact that if we damage particular, predictable portions of the brain, then predictable, particular mental functions become inhibited?

Towards the end, we get into language problems. Language developed long before the brain’s role in cognition was understood. Questions like ‘who’s doing the remembering’ make me wonder if we can talk about a ‘who’ and the neurons in a brain in the same sentence. Let’s say I had an operating system encoded in hardware and divided into many chunks, with some part of the system allocated to each chunk. I put these chunks all over the globe, and then run the OS. Does the question “where is the OS?” make sense? The language struggles, but the theory doesn’t.
The problem with ‘remembering last Tuesday’ is the same. As far as we know, only people remember. Our language was set up around people remembering things and telling us about it, or us remembering things. Recently, certain parts of computers have been dubbed ‘memory’, and we use that to talk about the information capacity of a computer hard drive. But still, when we say ‘a memory’, we don’t think of it as something that can lie in a ditch. Can a pattern lie in a ditch? Yeah, if we throw the shapes in the ditch that way. It’s more of an abuse of language than an abuse of reality. Are there correlations between thoughts and brains? Yes, and that’s not so strange. Can a running process that is correlated with a thought lie in a ditch? No reason why it couldn’t. Can we eliminate all this explanation and say “there’s a memory of last Tuesday lying in that ditch”? Only if we want to cloud the merits of a theory that is not significantly stranger than the predictions of other well-established and universally accepted theories. (I don’t see why the shift to running a computer on a trillion people spread across the world isn’t as absurd. You could play Halo on these people. Playing Halo on people. What? That’s not weird? Come on.)

This may be a slight tangent, but I think it is relevant background for the topic at hand: If I give you a USB drive with a document on it, have I given you the document? No, not really. An alien culture which somehow obtained the drive couldn’t decode it, for a number of reasons. First, the 1s and 0s stored on it do not translate directly into the document. Rather, they are read by the computer system running the drive, which interprets them in a standard way; the computer inserts some information of its own in the translation process. Then we get output as letters, and you read them. There is information inserted at this stage too, because you must interpret the shapes into recognizable characters that mean something to you. Maybe I insert a few Russian characters that look similar to English ones, but you read them as their English look-alikes. On top of that, you must interpret the words (maybe I spelled some wrong and you assumed I was using a totally different word), the sentences (maybe my sarcasm was lost), and the meaning of the whole piece (if you took it seriously and I meant it as a joke, you would understand it differently). Now, this drive had all the information on it to present to you the document I created, but much of that information is implied, and information must be inserted by other systems to interpret the data. So throwing a hard drive in a ditch isn’t enough, because part of our experience of a word processing document is in the way it is read by our computer.
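A quick concrete illustration of that first step (a sketch using Python’s text decoding; the byte values are arbitrary ones I picked): the very same stored bytes come out as different characters depending on the convention the reading system brings to them, even producing the Russian/English look-alike situation I described.

[code]
# The same raw bytes read under two different conventions:
data = bytes([0xC0, 0xC1, 0xC2, 0xC3])
print(data.decode("latin-1"))  # 'ÀÁÂÃ' under a Western European convention
print(data.decode("cp1251"))   # 'АБВГ' under a Russian convention
[/code]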
Again carrying the analogy to the brain, a certain part of our brain might be associated with a memory, but that doesn’t mean that it is the only part of our brain that is active when we remember it. The rest of the brain could be playing a vital role in making sense of the ‘zeros and ones’ of that memory.

Why is that method important when it’s always assumed that good actions come from good nouns (or beings) unless an exception is being made?

Do good and evil exist?

If so, give me an example.

How do you determine the difference?

Given materialism, of course good and evil don’t exist objectively… they are man-made terms meant to describe behavior and/or emotion… as for defining the words… that’s been done to death… and really… it’s a matter of agreement between people…

Only theists believe that good and evil exist as objective terms decreed by some mythical god figure… when really it’s a social agreement… hence the variations of understanding between different cultures.

Carleas

I'm glad you brought this up, it's something we need to expand on some, I think. I don't think the problem here is with language, but with identity. The difference between consciousness and an OS (or, the difference between consciousness and [i]everything[/i], really) is that it seems to be indivisible and always associated with identity. If you chop up a computer and run its different parts in different parts of the world, then you could say that's 20 different computers, or one computer all spread out, or whatever- it's really an arbitrary distinction, and some of it is based on the language we use to describe things, as you say. However, if you do the same thing with my brain, it makes no sense to say "Those are all me" because there can only be one of me. If I am here in this chair thinking, I am not also over in China thinking- and I am not partially over in China thinking, either.

The other thing is that thinking is only encountered in consciousness. If you ran a portion of a brain in another place, and all tests indicated it was producing sophisticated thoughts, by all accounts there would have to be a ‘someone’ - an I - having them, because that’s the only situation in which thoughts occur. Again, we could say that consciousness is somehow emergent and not dependent on brain activity indications of thought, but that’s hard for a materialist. By definition, any conscious act or instance of consciousness is ‘somebody’.

I don’t think there’s anything strange about the storage (organic or otherwise) of a memory lying in a ditch. What I’m saying is that if it’s properly stimulated to produce an act of remembering, then there’s a consciousness involved. Or rather, I think there’s probably not, but that materialism wouldn’t be able to account for that or prefer that explanation.

You're probably right, and I'm tempted to concede that side of the argument, but one final question: Making sense to [i]whom[/i]? If you run the tiniest portion of the brain with the memory on it, but the interpretation parts are nowhere to be had, in what way can you say the memory data isn't 'making sense'? Is there a person who is failing to understand it?
When a computer interprets data, it's either from a foreign system, or it's changing that data (1s and 0s) into a form an end user can understand, right? I agree with you that aliens wouldn't be able to read our Word documents, but in the case of a brain, there are no aliens, foreign systems, or end users. All of that is self-contained.

I’m interested in this statement: “We could say that consciousness is somehow emergent and not dependent on brain activity indications of thought, but that’s hard for a materialist.” My understanding is that emergence is all about brain activity, and it’s the only way to go for a materialist. Consciousness for the materialist is an emergent property of the brain (or, nod to Xunz, of the brain and body.) So, I don’t see why the consciousness is different from the OS in my example. Just as you could say the computer is all over the place, you can say that the brain is all over the place. But the question “where’s the OS/consciousness”, that’s somewhat harder. The consciousness doesn’t strike me as harder to place than the OS. The ‘who’ of the system, it seems, isn’t something that has a place. If I have a pattern that is two miles long, and I ask where the pattern is, there isn’t anywhere along it that one can point and say “there is the pattern”. The pattern emerges from the whole. Likewise, the OSness or whoness of a system is a property of the whole system, not a thing that has a place in the system.
In the case of the memory, I don’t think that the materialist is committed to saying that the memory exists if the consciousness doesn’t. If there is no who, there may be no memory. Just as the document isn’t a document without a properly tuned computer, it might be that a memory just isn’t a memory without a consciousness to interpret it.
The problem this might run into is one of degrees, but it is easy to address. How many memory bits do we need before a thing is conscious? Or how much brain of all different sorts? Well, how many grains of rice comprise a pile? Does that question indicate that there’s more to a pile of rice besides the rice? It doesn’t seem so; it just seems like the word is not strictly defined. When in life does a human become conscious? It’s a difficult question, but it’s really one of semantics. We could define the number of layers of self-reference one needs to be considered conscious (thinking about oneself thinking about oneself thinking about. . .), but that’s sort of arbitrary. Where along the spectrum of life do thoughts come into play? A bug? A mouse? A dolphin? I don’t know that there needs to be a cutoff in order for materialism to be coherent. (I don’t mean to make a straw man, I know you haven’t argued this, but it did seem like something that should be addressed.)

–Extending the Neo-Confucian Tradition by Michael Kalton

Now, there are a fair number of metaphysical assumptions I am throwing into that ring about what materialism ought to be/contain . . . however, I do think that when discussing materialism the notion of networking needs to play a major role.

The graphite in your pencil and the diamond in the jewelry store are both made of carbon, just carbon. But, organize the carbon differently and you get a very different product! The same principle applies to things like proteins, which are made up of just twenty amino acids. Put them together one way and you can break sugar down into alcohol, put them together another way and they become a very small motor driving flagellar motion, put them together another way and they can react to light. And then you can network these networks, forming super-structures, where sugar is broken down into alcohol to provide power for the flagellum, which allows the organism to move towards the light. This isn’t ‘emergent behaviour’, per se, because it isn’t some ‘new’ function, but rather a coordination of already existing functions. At the same time, if you take them as separate entities, they don’t necessarily make a lot of sense. That is where the ‘memory in a ditch’ falls apart. Unlike computers, the human mind is a distributed network, so physically removing a memory becomes a good deal more complicated.

What is this “I” who acts upon the brain to “alter or excite” it?

“Seems to be” is right. We’re heading down the path to manipulating genetic codes, controlling states of mind through molecular neuropsychopharmacology, altering brain functions with microchip implants that will affect cognition, consciousness and the sense of identity itself. Quantum physics is creating (has created) a paradigmatic shift in our understanding of reality.

There is no ‘I’, it is an artifact.

Xunzian, I think the reason that ‘emergent properties’ is a necessary description is that, as Uccisore has pointed out repeatedly, we seem to experience ‘consciousness’. It may be a coordination of smaller functions, but it is unsatisfactory to many to simply say “we don’t need to explain consciousness, because it doesn’t exist. We only need to explain the lower-level functions.” When we talk about consciousness as an emergent property, we don’t need to deny that it is a thing all its own, but we can still explain it in terms of the coordination of other functions.

Willamena, the I is the emergent property of a complete brain (or body).

Of course consciousness is real.

But it is just a manifestation of the degrees of freedom established by the network of our mind. If there is a robot that can only go right, then it has no degrees of freedom: it just goes right. As soon as you add the notion of left and right, you have a single degree of freedom. You can actually observe moderately complex behaviours in constructs with that single degree of freedom, as I mentioned earlier in the thread.

Now, when we take the human example, where we can do many, many things, the degrees of freedom for our mind approach infinity (or are at the very least very large). Our consciousness is just the shifting from a state of freedom to a state of action (where freedom is gradually restricted until the act is completed and the degrees of freedom are reduced to zero).
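If it helps, here is a toy sketch of that shift from freedom to action (my own construction, not a model anyone has proposed here): an agent starts with several available moves and commits by pruning them until exactly one remains and is performed.

[code]
import random

# Toy model: commitment as the progressive pruning of available moves.
def act(options):
    while len(options) > 1:                     # some freedom remains
        options.remove(random.choice(options))  # each commitment restricts it
        print(f"{len(options) - 1} degree(s) of freedom left")
    print(f"acting: {options[0]}")              # zero degrees of freedom: the act

act(["go left", "go right", "lift", "wait"])
[/code]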

The ‘I’, as a placeholder for this process, is the artifact. It represents a reduction in the degrees of freedom based upon specialization and experience.

Um… don’t you mean there is an I, it is an artifact?

So an emergent property of the complete brain can act on the brain to excite and alter it? But isn’t this the same as positing “more than the brain” to explain the operations of the brain (which you claimed earlier wasn’t necessary)? See, I think it is.

That depends entirely on whether you consider artifacts genuine or not. Does the Sun really revolve around the Earth, or is that an artifact of our reference frame?

A lighter is a machine that makes fire out of butane fuel. With a rubber band, I could hold the button on a plastic lighter, making a sort of mechanical candle. The candle would, eventually, warp itself due to the heat from the flame it produced.
Or how about this. We have a computer program that has a loop. It starts with some parameter set to zero, and with each cycle through the loop, it adds one to that parameter. As this program runs, part of its function is to alter itself by adding one to that parameter.
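For concreteness, that loop might look like this (a minimal sketch; the cap of ten passes is my own addition, just so it halts):

[code]
# The program alters itself: each pass adds one to its own parameter.
parameter = 0
while parameter < 10:  # an arbitrary stopping point, so the sketch terminates
    parameter += 1     # the program changing its own state
    print(f"pass {parameter}: the parameter is now {parameter}")
[/code]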
You could claim that in either of these scenarios, something ‘more than the thing’ is acting to alter the thing. But really, the thing is acting to alter itself, and there’s nothing all that strange about it. We don’t need to call the thing ‘more than itself’, and there’s no benefit to doing so. We just need to understand that what at first seems ‘more than the thing’ is really a part of the thing, so our original picture, our original referent of the whole thing, was actually ‘less than the thing’.