Materialism

Towards the end, we get into language problems. Language developed long before the brain’s role in cognition was understood. Questions like ‘who’s doing the remembering’ make me wonder if we can talk about a ‘who’ and the neurons in a brain in the same sentence. Let’s say I had an operating system encoded in hardware and divided into many chunks, with some part of the system allocated to each chunk. I put these chunks all over the globe, and then run the OS. Does the question “where is the OS?” make sense? The language struggles, but the theory doesn’t.
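To make the thought experiment a bit more concrete, here’s a minimal sketch (the locations and steps are invented for illustration): the ‘system’ is the coordination of all the chunks, and no single chunk is where it ‘is’.

```python
# A toy version of the scattered-OS thought experiment. Each "chunk"
# lives somewhere else (here, just keys in a dict); the system is the
# coordination of all of them. Locations and steps are made up.
chunks = {
    "Tokyo":   lambda x: x + 1,
    "Berlin":  lambda x: x * 2,
    "Chicago": lambda x: x - 3,
}

def run_system(x):
    # Running the system means passing state through every location
    # in turn; there is no single place you can point to as "the OS".
    for location, step in chunks.items():
        x = step(x)
    return x

print(run_system(10))  # (10 + 1) * 2 - 3 = 19
```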
The problem with ‘remembering last Tuesday’ is the same. As far as we know, only people remember. Our language was set up around people remembering things and telling us about it, or us remembering things. Recently, certain parts of computers have been dubbed ‘memory’, and we use that to talk about the information capacity of a computer hard drive. But still, when we say ‘a memory’, we don’t think of it as something that can lie in a ditch. Can a pattern lie in a ditch? Yeah, if we throw the shape in the ditch that way. It’s more of an abuse of language than an abuse of reality. Are there correlations between thoughts and brains? Yes, and that’s not so strange. Can a running process that is correlated with a thought lie in a ditch? No reason why it couldn’t. Can we eliminate all this explanation and say “there’s a memory of last Tuesday lying in that ditch”? Only if we want to cloud the merits of a theory that is not significantly stranger than the predictions of other well-established and universally accepted theories. (I don’t see why the shift to running a computer on a trillion people spread across the world isn’t as absurd. You could play Halo on these people. Playing Halo on people. What? That’s not weird? Come on.)

This may be a slight tangent, but I think it is relevant background discussion for the topic at hand: If I give you a USB drive with a document on it, have I given you the document? No, not really. An alien culture which somehow obtained the drive couldn’t decode it, for a number of reasons: First, the 1s and 0s stored on it do not translate directly into the document. Rather, they talk to the computer system that will be reading the drive, which interprets them in a standard way. The computer reading the drive inserts information of its own in the translation process. Then the output is rendered as letters, and you read them. There is information inserted at this stage too, because you must interpret the shapes as recognizable characters that mean something to you. Maybe I insert a few Russian characters that look similar to English, but you read them as their English look-alikes. On top of that, you must interpret the words (maybe I spelled some wrong and you assumed I was using a totally different word), the sentences (maybe my sarcasm was lost), and the meaning of the whole piece (if you took it seriously and I meant it as a joke, you would understand it differently). Now, this drive had all the information on it to present to you the document I created, but much of the information is implied, and information must be inserted by other systems to interpret the data. So throwing a hard drive in a ditch isn’t enough, because part of our experience of a word processing document is in the way it is read by our computer.
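A minimal sketch of that interpretation point (the byte values are arbitrary examples): the very same stored bits become different ‘documents’ depending on the convention the reader brings to them.

```python
# The same raw bytes, decoded under two different conventions.
data = bytes([0xC1, 0xC2, 0xD7])

# Read as Russian KOI8-R, they are the Cyrillic letters 'абв';
# read as Latin-1, they are 'ÁÂ×'. The bits alone settle nothing.
print(data.decode("koi8-r"))
print(data.decode("latin-1"))
```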
Again carrying the analogy to the brain, a certain part of our brain might be associated with a memory, but that doesn’t mean that it is the only part of our brain that is active when we remember it. The rest of the brain could be playing a vital role in making sense of the ‘zeros and ones’ of that memory.

Why is that method important when it’s always assumed that good actions come from good nouns (or beings) unless an exception is being made?

Do good and evil exist?

If so, give me an example.

How do you determine the difference?

Given materialism, of course good and evil don’t exist objectively… they are man-made terms meant to describe behavior and/or emotion… as for defining the words… that’s been done to death… and really… it’s a matter of agreement between people…

Only theists believe that good and evil exist as objective terms decreed by some mythical god figure… when really it’s a social agreement… hence the variations of understanding between different cultures.

Carleas

I'm glad you brought this up, it's something we need to expand some, I think. I don't think the problem here is with language, but with identity. The difference between consciousness and an OS (or, the difference between consciousness and [i]everything[/i], really) is that it seems to be indivisible and always associated with identity. If you chop up a computer, and run its different parts in different parts of the world, then you could say that's 20 different computers, or one computer all spread out, or whatever- it's really an arbitrary distinction, and some of it is based on the language we use to describe things, as you say. However, if you do the same thing with my brain, it makes no sense to say "Those are all me" because there can only be one of me. If I am here in this chair thinking, I am not also over in China thinking- and I am not partially over in China thinking, either.

The other thing is that thinking is only encountered in consciousness. If you ran a portion of a brain in another place, and all tests indicated it was producing sophisticated thoughts, by all accounts there would have to be a ‘someone’, an I, having them, because that’s the only situation in which thoughts occur. Again, we could say that consciousness is somehow emergent and not dependent on brain-activity indications of thought, but that’s hard for a materialist. By definition, any conscious act or instance of consciousness is ‘somebody’.

I don’t think there’s anything strange about the storage (organic or otherwise) of a memory lying in a ditch. What I’m saying is that if it’s properly stimulated to produce an act of remembering, then there’s a consciousness involved. Or rather, I think there’s probably not, but that materialism wouldn’t be able to account for that or prefer that explanation.

You're probably right, and I'm tempted to concede that side of the argument, but one final question: Making sense to [i]whom[/i]? If you run the tiniest portion of the brain with the memory on it, but the interpretation parts are nowhere to be had, in what way can you say the memory data isn't 'making sense'? Is there a person who is failing to understand it?
When a computer interprets data, it's either from a foreign system, or it's changing that data (1s and 0s) into a form an end user can understand, right? I agree with you that aliens wouldn't be able to read our Word documents, but in the case of a brain, there are no aliens, foreign systems, or end users. All of that is self-contained.

I’m interested in this statement: “We could say that consciousness is somehow emergent and not dependent on brain-activity indications of thought, but that’s hard for a materialist.” My understanding is that emergence is all about brain activity, and it’s the only way to go for a materialist. Consciousness for the materialist is an emergent property of the brain (or, nod to Xunzian, of the brain and body). So, I don’t see why the consciousness is different from the OS in my example. Just as you could say the computer is all over the place, you can say that the brain is all over the place. But the question “where’s the OS/consciousness?” is somewhat harder. The consciousness doesn’t strike me as harder to place than the OS. The ‘who’ of the system, it seems, isn’t something that has a place. If I have a pattern that is two miles long, and I ask where the pattern is, there isn’t anywhere along it that one can point and say “there is the pattern”. The pattern emerges from the whole. Likewise, the OSness or whoness of a system is a property of the whole system, not a thing that has a place in the system.
In the case of the memory, I don’t think that the materialist is committed to saying that the memory exists if the consciousness doesn’t. If there is no who, there may be no memory. Like the document isn’t a document without a properly tuned computer, it might be that a memory just isn’t a memory without a consciousness to interpret it.
The problem this might run into is one of degrees, but it is easy to explain. How many memory bits do we need before a thing is conscious? Or how much brain of all different sorts? Well, how many grains of rice comprise a pile? Does that question indicate that there’s more to a pile of rice besides the rice? It doesn’t seem so, it just seems like the word is not strictly defined. When in life does a human become conscious? It’s a difficult question, but it’s really one of semantics. We could define the number of layers of self-reference one needs to be considered conscious (thinking about oneself thinking about oneself thinking about…), but that’s sort of arbitrary. Where along the spectrum of life do thoughts come into play? A bug? A mouse? A dolphin? I don’t know that there needs to be a cutoff in order for materialism to be coherent. (I don’t mean to make a straw man, I know you haven’t argued this, but it did seem like something that should be addressed.)

–Extending the Neo-Confucian Tradition by Michael Kalton

Now, there are a fair number of metaphysical assumptions I am throwing into that ring about what materialism ought to be/contain… however, I do think that when discussing materialism the notion of networking needs to play a major role.

The graphite in your pencil and the diamond in the jewelry store are both made of carbon, just carbon. But, organize the carbon differently and you get a very different product! The same principle applies to things like proteins, which are made up of just twenty amino acids. Put them together one way and you can break sugar down into alcohol, put them together another way and they become a very small motor driving flagellar motion, put them together another way and they can react to light. And then you can network these networks, forming super-structures, where sugar is broken down into alcohol to provide power for the flagellum, which allows the organism to move towards the light. This isn’t ‘emergent behaviour’, per se, because it isn’t some ‘new’ function, but rather a coordination of already existing functions. At the same time, if you take them as separate entities, they don’t necessarily make a lot of sense. That is where the ‘memory in a ditch’ falls apart. Unlike computers, the human mind is a distributed network, so physically removing a memory becomes a good deal more complicated.

What is this “I” who acts upon the brain to “alter or excite” it?

“Seems to be” is right. We’re heading down the path to manipulating genetic codes, controlling states of mind through molecular neuropsychopharmacology, altering brain functions with microchip implants that will affect cognition, consciousness and the sense of identity itself. Quantum physics is creating (has created) a paradigmatic shift in our understanding of reality.

There is no ‘I’, it is an artifact.

Xunzian, I think the reason that ‘emergent properties’ is a necessary description is that, as Uccisore has pointed out repeatedly, we seem to experience ‘consciousness’. It may be a coordination of smaller functions, but it is unsatisfactory to many to simply say “we don’t need to explain consciousness, because it doesn’t exist. We only need to explain the lower level functions.” When we talk about consciousness as an emergent property, we don’t need to deny that it is a thing all its own, but we can still explain it in terms of the coordination of other functions.

Willamena, the I is the emergent property of a complete brain (or body).

Of course consciousness is real.

But it is just a manifestation of the degrees of freedom established by the network of our mind. If there is a robot that can only go right, then it has no degrees of freedom; it just does the one thing it can do. As soon as you add the notion of left and right, you have a single degree of freedom. You can actually observe moderately complex behaviours in constructs with that single degree of freedom, as I mentioned earlier in the thread.
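As a toy illustration (my own sketch, not a claim about any particular robot): even that single left/right degree of freedom is enough to produce paths that look complicated from the outside.

```python
import random

# An agent with a single degree of freedom: each step it picks
# left (-1) or right (+1). The resulting 1-D random walk already
# looks non-trivial, despite the minimal freedom involved.
position = 0
path = []
for _ in range(20):
    position += random.choice([-1, 1])
    path.append(position)
print(path)
```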

Now, when we have the human example where we can do many, many things, the degrees of freedom for our mind approach infinity (well, the number is very large, at the very least). Our consciousness is just shifting from a state of freedom to a state of action (where freedom is gradually restricted until the act is completed and the degrees of freedom are reduced to zero).

The ‘I’, as a placeholder for this process, is the artifact. It represents a reduction in the degrees of freedom based upon specialization and experience.

Um… don’t you mean there is an I, it is an artifact?

So an emergent property of the complete brain can act on the brain to excite and alter it? But isn’t this the same as positing “more than the brain” to explain the operations of the brain (which you claimed earlier wasn’t necessary)? See, I think it is.

That depends entirely on whether you consider artifacts genuine or not. Does the sun really rotate around the Earth, or is that an artifact of our reference frame?

A lighter is a machine that makes fire out of butane fuel. With a rubber band, I could hold the button on a plastic lighter, making a sort of mechanical candle. The candle would, eventually, warp itself due to the heat from the flame it produced.
Or how about this. We have a computer program that has a loop. It starts with some parameter set to zero, and with each cycle through the loop, it adds one to that parameter. As this program runs, part of its function is to alter itself by adding one to this certain parameter.
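A minimal sketch of that loop, assuming we cut it off after ten cycles just for illustration:

```python
# A program whose normal operation alters its own state:
# each pass through the loop adds one to its counter.
counter = 0
for _ in range(10):
    counter += 1  # the program acting on the program
print(counter)    # -> 10
```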
You could claim that in either of these scenarios, something ‘more than the thing’ is acting to alter the thing. But really, the thing is acting to alter itself, and there’s nothing all that strange about it. We don’t need to call the thing ‘more than itself’, and there’s no benefit to doing so. We just need to understand that what at first seems ‘more than the thing’ is really a part of the thing, so our original picture, our original referent of the whole thing, was actually ‘less than the thing’.

As the conversation drifts towards science, I’m going to have less and less I can say without making a fool of myself, I’m sure (whether or not that will stop me remains to be seen). But, staying with philosophy, let me say a couple of things:
There are many new discoveries that can be made through science or observation. I can go walking down the street, look through a telescope or microscope, interview experts, and come to learn a great deal. But there are limits. For example, I cannot, through reading books, come to learn that I am illiterate. It doesn’t matter how much I read, or how well put together the arguments are, I cannot come to learn of my illiteracy through reading books. If I were illiterate, the information would have to come in some other way- a verbal explanation, perhaps.
If you accept the nature of that limitation, then I would submit that there is nothing, nothing I can do to come to learn that I, ‘the self’, or consciousness doesn’t exist. I can’t even take it on faith reasonably. The best I can do is use compartmentalized thinking to become convinced by an argument or evidence, while separating that from the automatic defeater that we all have.

[b]Carleas[/b] The difference between consciousness and operating systems isn't one of location so much as multiplicity. You were right to point out that a clump of cells is insufficient to be a memory- but then you stand with me: a memory, properly understood, is pure [i]qualia[/i]; the thing is the experience itself. If that's the case, then consciousness is a requirement for a particular memory. So all those difficult questions come flooding back- if you recreate a memory somehow, have you created a new person? Have you somehow created another of the same person? And we come back to the index cards- if the cards are doing what a brain does, then you have a consciousness. If the cards are doing what [i]your[/i] brain does, then we have [i]your[/i] consciousness. And that kind of thing is absurd. Consciousness only makes sense as something irrevocably unique, and that can't happen under materialism.

I consider “I” and the apparent rotation of the earth to be genuine artifacts.

To the idealist, the “emergent property” is the bit that transcends its programming (has existence only in spirit), that has the perceived ability to alter its programming (will) and is still an integral part (result) of the program (“I”).

The lighter’s flame falls short as an example of an “emergent property” of the lighter, but the flame’s apparent capacity to “feed itself” would be an example of an emergent property of flame. In the computer’s looping, it is the apparent evolution of the program that is emergent of the program.

I think that it is utterly necessary to utilize an “I” as a “self” more than the sum of mind and body to define ourselves --not only because without it we would have to restructure most of our spoken and written languages, but because it has all the appearance of being quite real --it is what real being is to consciousness. Without consciousness there is no emergent property “I”, and without the emergent property, we cannot describe consciousness.

But you’re right, there’s nothing strange about it. :slight_smile:

You are gonna have to clarify that position.

Uccisore, I think the point is valid that you can’t believe that you don’t exist, and I’m avoiding saying that. It’s not that you don’t exist, it’s just that you look very different from one angle than from another. This is why I don’t think that materialism has to be reductionism. I clearly exist, my consciousness is obvious to me. To say that it doesn’t exist is not only unproductive, it’s not justified: why prioritize the level on which my consciousness doesn’t exist? It’s not right to say the pattern isn’t a pattern, it’s just shapes.
But the pattern is shapes too, and if you have the same shapes in the same sequence, you have the same pattern. The thing is, if I make a perfect replica of my brain, it’s not me, I’m me. I don’t have its consciousness. Its consciousness would react identically to mine in identical situations, but it won’t experience identical situations. In any reproduction, the thoughts aren’t the same thoughts, just functionally the same thoughts, until it is altered by its experiences enough that it is only functionally similar, and eventually I and my replica may even disagree. And the global index card me won’t act like me, because it will process too slowly (even slower than flesh-and-blood me). So, yes, there are scenarios that are a little bit weird, but only because they are novel. They don’t self-contradict or break any more-sacred rules, they just do things we haven’t yet been able to do. I suspect our children’s children will find them downright commonplace.