Fodor, CTM and LOT

For those who are familiar with Fodor and computational theories of mind, I would like to try to make some progress on the following issue:

Does a radical darwinism via natural selection render Fodor’s CTM improbable? How could such a LOT have evolved? What would previous stages have looked like?

For my own part, I can only see a mutation accounting for how it came about: it could never have evolved gradually, because it would already have had to be perfect at the task in question, with no possibility for improvement (i.e. Pinker’s ‘boxes’ already set up, and the LOT already able to source beliefs vs. desires and produce intentions). Even so, if the LOT is common to all species, how could we explain the mutation?

Does this make animals rational beings, given that the CTM is an unconscious form of reasoning?

Do you see an alternative to the mutation argument or a way in which it can be improved?

The body is a system of information which learns during its lifetime. An example of this would be the immune system.

I do not believe evolution is due to random mutations.

I believe that evolution is hereditary, biological consciousness…

For example, the kind of frame of mind a mother is in during pregnancy is medically/statistically recorded to have drastic effects upon the child’s health. During development, the new generation gets spiritually and slightly genetically imprinted with the consciousness of the parents.

I’m not familiar with the theory, but I can try…

It must have happened quite early on, I guess. Any organism that can receive, interpret, and react correctly to information is evolutionarily far better off than a counterpart that cannot. The process could begin with a single photosensitive cell.

I don’t see the problem.

Sorry Dan, that’s not really relevant to this thread.

thezeus - the problem is only understandable if you understand the theory. It’s detailed, so unless you tell me you would really like to learn it I won’t explain it right now.

Ob-1,

I studied Fodor and the philosophy of AI a couple years ago. I think I have an answer to your question but I need to know what LOT is. I don’t remember and it’s too common an acronym to search.

From what I recollect of the theory, I’m not sure that I’d say radical darwinism via natural selection renders Fodor’s CTM improbable, for one very large reason: the more an animal is able to appreciate variation and POTENTIAL variation, the more likely that animal is to survive. That is to say, with change in the environment being the one constant, the animal most likely to survive where change is perpetual is the animal that can anticipate and react to those changes, more so than the animal built simply to survive a given circumstance. This way of looking at the issue necessarily involves “computation” and would favor the species with the higher computational ability.

Given all of that, I’d say that a species such as humans, rather than being a mutation, would be an expected consequence sooner or later; the ability to anticipate changes and handle them comes in quite handy for survival.

Intermediate stages might have looked less sophisticated in terms of inference abilities and reasoning. Computationally more limited might be the best way of putting it. A dog has a fair ability to infer causal links… if you toss a rock at one from behind, it looks back at you as the entity that tossed the rock… not up at the heavens. A dog, then, has some ability to compute, and that ability serves it well when it is trying to take down prey. The more scenarios it can theoretically “see” and think through, the better a predator it should theoretically be.

Those that do not have a particularly high ability to compute are thus eliminated rather quickly, if it happens that they are selected against for FORM rather than for their ability to respond to change.

Okay, but put this in your list of things to do while feeling particularly generous. Whenever you feel like distributing your massive philosophical intelligence, I’m here.

Thezeus - I didn’t mean what I said to sound condescending, which is obviously how you took it. It was meant sincerely. I just don’t want to waste my time writing it out for your benefit (and no benefit to me) if you don’t first tell me you’ll be interested enough to make it worth my while. Isn’t that fair?

Gob - sorry, by LOT here I mean Language of Thought.

Shinton - thanks. Glad we’re still on speaking terms. I think I need to clarify my question. Assuming Fodor’s nativist CTM and his LOT, we must literally take the mindbrain to be a computer that deals specifically with beliefs and desires. The output of his CTM is intention. Pinker’s metaphor, which I reference, talks about how these three parts might be stored in the brain physically, which leads us to the idea that where a particular language symbol resides is part of what makes it a belief or not - though mainly it is a functional/causal determination.

So given that he says all beings have this LOT in a nativist sense, there is no room for evolution to have happened. Why not? Well, there is no way that this system could work if it did not have 100% of what it has right now. If you remove or reduce any single part of it (which is presumably what “lesser” stages in a radical darwinian NS evolution process would look like), then it ceases to be able to do what it must do.

So we are left to explain, given the context of evolution, how such a CTM managed to get into our mindbrains in the first place if not via NS. Hence my postulation of a mutation that suddenly presented the LOT already completely 100% ready to go - but that would have to have been quite early in the timeline of evolution from bacteria to human, and as an interesting side point, such a theory would have massive moral ramifications, because as soon as animals appear we have rational animals.

Does my problem make more sense? I’m trying to explain it clearly.

Okay, but I didn’t mean to sound sarcastic. That was in earnest.

And I don’t want to waste your time, but what’s the difference between a rational animal and one that isn’t?

I’m not so sure this is the best way of looking at it exactly. To be able to do precisely what it does NOW, it needs all three parts, but then again, there’s nothing to suggest that it started out functioning quite that way. The trick here is in thinking the brain has always computed in the same way with the same fundamental elements. Just as a Commodore 64 doesn’t quite compute things in the same way as a Dual Core Pentium, it’s probably a fair bet to say that the brain hasn’t either.

I’m not sure what elements would be necessary in order for it to be said that something IS computing. It seems like these days everything computes–even atoms when asked the right way.

Well, again, we get into definitional issues here, but I would say that as a neural network becomes more complex, a LOT can also become more complex. I’m not sure the pieces presented here are the only configuration by which a LOT can arise. Part of the problem here may be the “broadness” of the definitions at work. I would say, in defense of the configuration of a LOT, that the varying structural differences between animals hint that a LOT can be arrived at in many different ways - if it is the case that animals do in fact have a LOT.

The main issue that I see is that this model assumes that the only way to “think” has to do with beliefs and computation and the like. I’m not sure that’s true.

I basically agree - and I don’t like Fodor’s CTM. However, as an academic exercise I am trying to shine the most positive light on it possible. Remember it is a nativist theory, so we have no choice but to look at it as either 100% or nothing. So putting your and my general feelings about the validity of the theory as a whole (and perhaps about representational theories of mind in general…) to one side, can Fodor’s CTM be defended in a radical darwinian NS context? Where the hell would it come from?

zeus - a rational animal is an animal that is able to reason rather than simply react instinctually. So certain animals can be observed to do certain things in reaction to certain stimuli over and over again. The main point most people would say distinguishes humans from other animals is our ability to reason and be rational - to avoid doing things just based on our instincts, which in turn obligates us to ethical action, some might argue… and so on. Some might say animals cannot have rights, because animals cannot complete duties or obligations as they are not rationally capable.

I am happy to explain all this stuff if you’re interested, just say the word. It’ll be about a 2000-word-long post though…!

Well, whenever you find the time, I’d be happy.

Fodor’s Representational Theory of Mind

Fodor is a Philosopher of Mind, and he is getting on a bit now - about 70, I think - and is probably the most famous living Philosopher of Mind. He developed a controversial theory about how the mind works. People call his theory different things, but it’s basically a version of the traditional Representational Theory of Mind. You can look that up on Google and it should be fairly easy to get. Fodor’s version could be called the Computational Theory of Mind (or CTM for short).

What Fodor says is this: intentional states of mind are relations to mental representations, so that being in an intentional state involves tokening a mental representation to which one stands in the appropriate relation. Really confusing, right?

A mental representation is basically a symbol. So maybe you have a mental representation of “Cat”. That representation is a symbol in your mind. A symbol is just a thing with meaning - something which means something. And it exists in your mind. By the way, for Fodor, the brain and the mind are one and the same thing.

Fodor also reckons that the representations which are used for thinking about things, or thinking things through, are language-based symbols. So for example, you might have the belief that Obw is a total moron. Believing that Obw is a total moron means you have a language-based symbol in your mind which holds the content “Obw is a total moron”. That’s the content of the symbol. This language which our mind uses is called the Language of Thought (or LOT for short). You shouldn’t think of LOT as a normal language like English or Spanish, but rather a non-natural language which all creatures that have this system can understand, even if they do not speak English, or Spanish, or whatever. It’s not a language we can consciously use.

Languages usually have vocabularies of symbols, which in English are words. We use words to hold the meaning for our language, so the words in English are the symbols of the English language. Each particular word exists within a particular category such as ‘verbs’ or ‘nouns’. That should be fairly obvious, but it’s worth making explicit to avoid confusion when talking about LOT. Languages also have a certain number of rules and ways in which we connect symbols. These rules tell us what are valid ways of saying things and what are not. For example, ‘Obw is really rather handsome’ makes sense because it follows the rules of the English language, but the same symbols - the same words - could be put like this: “Is rather really Obw handsome” and the meaning is lost. So the meaning of a sentence is a result of the meaning of its individual symbols and its overall structure in accordance with the rules of that language. This goes for LOT too.

If you went and looked on Google for the Representational Theory of Mind, then you already know how Fodor’s theory here is different to the traditional RTM - he doesn’t think we have pictures as representations, but rather language-based symbols. So normal RTM says when we believe ‘Obw is a total moron’ we actually have a picture of Obw being a total moron, in some way, in our minds. It is this picture which we refer to when we believe the meaning ‘Obw is a total moron.’

Another important term to understand about Fodor is that he should be considered a physicalist. So he believes that the particular symbols of LOT can be physically embodied. This means he is saying that when you believe ‘Obw is a total moron’ you actually have a physical state of brain which corresponds exactly to the meaning ‘Obw is a total moron’. We could say that the belief is constituted by the physical brain state itself. You might be thinking ‘What is the nature of this physical brain state?’, which is a good question to have at this point. It’s complicated. This state of brain is a state one part of which is identical to a tokening of the LOT symbol/word for ‘Obw’, another part of which is identical to a tokening of the LOT symbol/word for ‘total moron’, and so on. The structure of the sentence itself is transferred into, or encoded by, the brain state’s own internal physical structure. I.e. the physical relations of its parts constitute the structure of the sentence itself.

Any system which uses a LOT is able to map simple symbols onto simple internal physical states that the overall system is able to indicate or express. It will also be able to map syntactic relations between simple symbols onto physical relations that the components of complex internal physical states can bear toward one another. As a result of this, it will also be able to map sentences onto complex internal physical states that the system itself is able to express. In this way, any LOT is said to be ‘multiply realisable’ and can be physically encoded in many different ways, so the physical form that the actual symbols of LOT take in your mind might be completely different from person to person.
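To make the ‘multiply realisable’ point concrete, here is a toy sketch of my own (nothing like the real neural story - the structured sentence and both encodings are made up): the same LOT sentence gets realised in two completely different physical “vocabularies”, and all that matters is that the structure of the sentence is preserved.

```python
# Toy sketch (mine, not Fodor's): one LOT sentence, two different "physical" realisations.
# What carries the syntax is the structure (which part relates to which), not the stuff
# the parts happen to be made of.

# Hypothetical LOT analogue, structured as (relation, subject, predicate)
sentence = ("IS", "OBW", "TOTAL_MORON")

# Two arbitrary, made-up encodings of the same simple symbols
encoding_in_brain_A = {"IS": 0x01, "OBW": 0x2F, "TOTAL_MORON": 0x9C}
encoding_in_brain_B = {"IS": "xx", "OBW": "qz", "TOTAL_MORON": "mmm"}

def realise(sentence, encoding):
    """Map each simple symbol onto a 'physical' token while keeping the structure intact."""
    return tuple(encoding[symbol] for symbol in sentence)

print(realise(sentence, encoding_in_brain_A))  # (1, 47, 156)
print(realise(sentence, encoding_in_brain_B))  # ('xx', 'qz', 'mmm')
```

Two completely different realisations, one and the same LOT sentence - which is all ‘multiply realisable’ is meant to convey here.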

Bear in mind, all this is not metaphorical. The theory is that this is exactly what your mindbrain is like. It’s also one of the best theories to date.

When you believe that Obw is a total moron, the belief involves being in a belief relation to a physically constituted LOT analogue of the sentence ‘Obw is a total moron’. Desiring that Obw is a total moron involves being in a desire relation to an expression or “token” of the very same sentence. The difference between believing it and desiring it is not to do with the symbols involved, but rather the functional role of the sentence. Whether the sentence expresses a desire or a belief or an intention depends on how the sentence is processed by the mental ratchets that have access to it. Sentences that express a belief are processed in a particular way that is characteristic of the belief relation, and desires in a way that is characteristic of the desire relation - which in turn dictates how they are physically encoded.

There is a useful metaphor for trying to understand this, thanks to a guy named Pinker. Think of three boxes in your head. One is for beliefs, one is for desires and one is for intentions. If a sentence is processed by a mechanism that is characteristic of the belief relation, then that sentence is physically encoded in the belief box. It can then be accessed as a belief by the system. If you had a desire in future to meet Obw, your system would be able to access the relevant belief that Obw is a total moron, and as such you might change your desire or your intention.
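If it helps, here is a very rough toy sketch of the boxes metaphor in code (my own illustration - the class and method names are made up, and this isn’t anything Fodor or Pinker actually give): which box a sentence ends up in is what makes it a belief rather than a desire or an intention, and the last bit gestures at how an accessed belief can block or change an intention.

```python
# Toy sketch of Pinker's "boxes" metaphor (my illustration; names are made up).
# The same kind of LOT sentence can play the belief role or the desire role; what differs
# is its functional role - which box it ends up in - not the symbols themselves.

class Mind:
    def __init__(self):
        self.belief_box = []     # sentences playing the belief role
        self.desire_box = []     # sentences playing the desire role
        self.intention_box = []  # sentences playing the intention role

    def believe(self, sentence):
        self.belief_box.append(sentence)

    def desire(self, sentence):
        self.desire_box.append(sentence)

    def form_intention(self, goal, blocked_by=None):
        # Crude stand-in for practical reasoning: before committing to a goal,
        # the system consults the belief box; a relevant belief can block the intention.
        if blocked_by is None or blocked_by not in self.belief_box:
            self.intention_box.append(goal)

m = Mind()
m.believe("Obw is a total moron")          # sentence tokened in the belief box
m.desire("I meet Obw for a chat")          # a different sentence, playing the desire role
m.form_intention("I meet Obw for a chat", blocked_by="Obw is a total moron")
print(m.intention_box)                     # [] - the accessed belief blocked the intention
```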

Fodor adds a theory of how we do the thinking processes themselves, to help support his overall theory about intentional states. Fodor says cognition involves the manipulation of LOT symbols via computation. So, the mindbrain is a computer. Fodor thinks a computer is basically a symbol-manipulating system: one that can take syntactically structured symbols as input and generate syntactically structured symbols as output by means of a set of symbol-manipulation rules (a program!). So as you might imagine, this system could involve symbols being generated in the middle of a process and other symbols being manipulated (i.e. found, identified, used).

The rules that such a computer might use to manipulate symbols don’t have to be represented within the computer itself. Symbols have semantic properties, and as such a computer’s activity can be described in semantic terms (like ‘solving problems’). While this is the case, the computer does not have access to those semantic properties. The computer is just a mechanical device that is capable of identifying syntactic properties of symbols. Let’s look at an example before your head explodes with all this new information:

If I reason that all philosophers are morons and that Obw is a philosopher, the conclusion is that Obw is a moron. A computer ratchet in my brain takes a pair of symbols of LOT from my “belief box”, which is where all my beliefs are stored. It takes these beliefs as input. It then generates a third symbol of LOT as output, which it then places into my belief box. The input sentences are:

“all philosophers are morons”
“Obw is a philosopher”

And they are LOT analogues. The output is then the LOT analogue of:

“Obw is a moron”.

It could be shown like this:

1 [LOT analogue of “all philosophers are morons”] [Belief Box]
2 [LOT analogue of “obw is a philosopher”] [Belief Box]

INPUT> 1+2 [Whirrr…whiirrr…whizzle] OUTPUT > 3

3 [LOT analogue of “obw is a moron”] [Belief Box]
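For what it’s worth, the same toy inference can be written out as a few lines of code (a sketch of my own, not anything Fodor gives): the “ratchet” below only ever looks at the syntactic shape and position of the symbols; it never needs to know what ‘philosophers’ or ‘morons’ mean.

```python
# Toy sketch (mine, not Fodor's) of the syllogism above. The "ratchet" applies a purely
# syntactic rule: from ("ALL", A, B) and ("IS", x, A), derive ("IS", x, B). It never
# consults the meanings of the symbols, only their shapes and positions.

belief_box = [
    ("ALL", "philosophers", "morons"),   # LOT analogue of "all philosophers are morons"
    ("IS", "Obw", "philosophers"),       # LOT analogue of "Obw is a philosopher"
]

def ratchet(beliefs):
    """Mechanically scan pairs of beliefs and return any newly licensed conclusions."""
    new = []
    for first in beliefs:
        for second in beliefs:
            if first[0] == "ALL" and second[0] == "IS" and second[2] == first[1]:
                conclusion = ("IS", second[1], first[2])
                if conclusion not in beliefs and conclusion not in new:
                    new.append(conclusion)
    return new

belief_box.extend(ratchet(belief_box))      # the output goes back into the belief box
print(belief_box[-1])                       # ('IS', 'Obw', 'morons') - "Obw is a moron"
```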

Finally, you might be thinking what reason he has to suggest this is all literally true. And remember, his theory is that this is how the mind really literally is. He is not being metaphorical (apart from the bit about boxes, from Pinker).

Fodor provides two main arguments for why this CTM is the right theory.

  1. It explains the many facts about intentional states and processes, such as:

a) The systematicity of thought - anyone able to believe that X is in relation R1 to Y is also able to think that Y is in relation R1 to X (there’s a tiny sketch of this just after the list)

b) The intensionality of thought - we can believe that X is P without believing that Y is P, even though X and Y are one and the same.

c) The productivity of thought - there are many distinct intentional states that a person is capable of expressing.

d) That thought processes are normally rational or semantically coherent.

  2. It has some scientific support. When he wrote The Language of Thought, he basically argued that the thinking of the day surrounding things like concept learning, decision making and perception was fully supportive of CTM. Fodor actually states that there is no real difference between science and philosophy, and that philosophers cannot legitimately construct theories of the mind without having a knowledge of developments in psychology and other sciences.
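Since points (a) and (c) do a lot of work, here is a tiny toy sketch of what they amount to on the compositional picture (again my own illustration, not Fodor’s): because thoughts are built out of reusable symbols by a construction rule, a system that can token one combination can token the rearranged or embedded combinations too.

```python
# Toy sketch (mine) of systematicity and productivity under a compositional LOT.

def compose(relation, a, b):
    """Build a structured LOT-style representation out of its constituent symbols."""
    return (relation, a, b)

# Systematicity: the same parts and the same rule yield the 'swapped' thought for free.
thought_1 = compose("LOVES", "John", "Mary")
thought_2 = compose("LOVES", "Mary", "John")

# Productivity: the rule can be applied recursively, so finitely many symbols
# support indefinitely many distinct thoughts.
thought_3 = compose("BELIEVES", "Dan", compose("LOVES", "John", "Mary"))

print(thought_1, thought_2, thought_3, sep="\n")
```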

Right, did any of that make any sense? I expect I explained parts of it less than brilliantly so just ask.

Well, here’s a thought, brushing all objections aside. Let us assume that the trinity of components IS necessary. We’ll run with that for the time being.

Sometimes, things that seem complex - things that LOOK as though they could not have come about in a stepwise fashion - may STILL have come about in a stepwise fashion, just not in the way one might expect.

Let’s take a quick stop by abiogenesis land for a minute for the sake of discussion. Certain pieces that randomly appear, IF they survive, are going to necessarily attract certain other kinds of pieces. In other words, once something exists, it limits the possibility pool of things it can exist with, because of the structural requirements that must be met such that piece A will click in with piece B. Let us suppose then that through transcription and replication and primordial soup, the amino acids for the particular genetic strand that is the “mindbrain” begin to form. First, we get some piece that comes into existence, presumably due to some element of chance and some element of existence dictating further growth, that will eventually fit in with piece B. Piece B then quite naturally necessitates a kind of piece C. The environmental conditions govern how these pieces come together, but finally we have a “chunk” of some very abstract thing that resembles the “mindbrain”.

Now, how this “chunk” interacts with the REST of the system it is in depends wholly upon the environment that has shaped the informational structure of that thing. In other words, we’ve got a basic building block now that resembles some “mindbrain” thing, and it works “kinda” the same across all creatures but takes on different kinds of functioning with regard to the rest of the system - the system here being defined as other creatures’ biological makeup. In all these scenarios, we can say that this piece has all of the qualities it does within a human (or the model); it simply isn’t expressed in quite the same way. Hence, we COULD, if we were so inclined, say that while the piece is “irreducible” in some sense within living forms, it is “very reducible” in the sense that it was a component that didn’t HAVE to be part and parcel of brains in creatures - it just happened to come about that way structurally through some combination of chance and environmental circumstances.

What we are seeing then, in creatures through evolution over time, in essence, is the modification of a “chip” that was made long ago in a gradual way. For whatever reason, it was easier for nature to simply modify this existing chip as necessary as opposed to scuttling the whole affair. These modifications might include additional LOT symbols or the ability to remember more or something along those lines.

I think this explanation is strong in the sense that it shows how a component that looks irreducible could come about in a gradual way, and then simply be subject to edits from thereon out.

I quite like that. I don’t fully understand it right now, but it sounds like the right sort of idea. Thanks.

Hooh, wow. Thank you.

First, what is an intentional state of mind? I think that’s the only thing I didn’t get.

Don’t get that either.

Wouldn’t we all be telepathic then? I know that’s a stupid question but I don’t think I agree with the highlighted phrase. For example, while we’re looking at the brain as a computer:

The Nintendo 64 only had a 32-bit processor. I found this out because I was wondering how the Nintendo could have a 64-bit processor when AMD only came out with the Athlon 64 much later. The trick was, or so my pop told me, metaphorically: in a normal computer, about half the processing power is used to communicate the location of the information. In a Nintendo, the location of the information is always the same, meaning that the other half of the processing power can be reserved for more information. The Nintendo’s code, its Language of Thought, was optimized for its task and hardware. Would not our LOT be similarly optimized?

So, let me try and restate the CTM. When a sense is stimulated in a certain way, it triggers a binary switch, one of many. This switch is triggered only by that one signal, others won’t have any effect. Now, when a ton of switches are triggered simultaneously it makes a pattern. This pattern is associated by the brain with whatever it was triggered by. So this pattern is a symbol for the combination of senses that it was triggered by. This is basically the representational theory of mind, right?

More later.

Intentional states of mind are things like beliefs and desires. Not things like emotions, which are “unintentional”.

I’m happy you read that far!! Basically - how do we account for the obvious fact that our thoughts seem to have intention? I.e. how do we explain the fact that our thoughts are thoughts OF things? Our thoughts seem to “point” to other things. They are meaningful. That’s what the intentionality of thought is all about.

I don’t see that it would make us telepathic - to be telepathic we presumably need a way to transmit information and receive it. It’s the manner of the transmission and reception of information that defines telepathy. Having the same internal language system in our minds does not make us able to know what other minds are doing at any given moment in time, just as having the same cheese grater in our kitchens does not help us know what each other is having for supper.

It’s a good question. Be careful not to confuse the overall system LOT with individual programs that are used within the LOT to do various tasks. Does that answer it?

Well, the computer part of the theory is a machine that manipulates symbols. The computer just deals with the symbols themselves, and therefore has no idea what the semantic properties of the symbols actually are. This is an important distinction to get. RTM basically says that intentional states of mind are “represented” by things symbolically: so either an image of Obw being a total moron to represent the intentional belief that Obw is a total moron, or, in Fodor’s case, the language-based symbolic analogue.

But, if the purpose of the CTM is, as you said, to explain how the brain actually works, and not just provide a metaphor for it, it would make sense that somebody would have to actualize the theory. I think it’s great, just what I’ve been looking for, but it doesn’t say what these symbols are made up of, or how the information in the senses is transformed into symbols. I think I understand it, but I’ll read through it again.

Yeah, read it through again - then if you’re not sure what symbols are made up of, I’ll try a different explanation. The theory is great, but I may not be great at explaining it!

I imagine that:
Even though Fodor rejected W’s picture theory (which restricted the world to the type of logic that languages are capable of, or alternatively would have committed language to the multiple logics of science), his theory suffers from formalism. It is a linguistic theory that says things about language. In this domain, it can be understood. There are elements such as words and propositions that fit together according to grammatical rules of construction.

When he takes the further step of describing the workings of mind, he is jumping into the theater of thought, which is not necessarily linguistic. Thought is expressed as language, but its logic is no more restricted to linguistic logic than is the world.

A physical implementation, such as one at the organizational level of the brain, relies on the current state of science, which is a moving platform. Serial, sequential logic is physically insufficient to process the amount of thought that we observe occurring in a fraction of a second. Facial and other image or musical recognition is almost instantaneous. So, at the very least, parallel processing models are called for.