Can Computers Think?

The following was my first foray into the subject, followed up by much longer pieces which can be looked at later. It’s a response to John Searle’s (hideous) essay ‘Can Computers Think?’.


Can Computers Think?

Firstly, it should be stated that the interpretation of the question should not be a significant obstacle. Whether it is posed in the present or the future tense is irrelevant, since the word ‘computer’ refers to a concept rather than an existing machine - I concur with John Searle, in that ‘the nature of the refutation is completely independent of any state of technology. It has to do with the very definition of a digital computer, with what a digital computer is.’ Few would dispute the simplistic but accurate three-stage model of a computer: input, process, and output. Having discounted the relevance of the first two words of the question, we contemplate thought. I postulate that, if we are to accept this simple model of a computer (as compared to the incredible complexity of the actual circuitry and components), then we should also be prepared to accept a simple model of the human mind. I propose that there is no difference: the mind essentially operates in exactly the same way as a computer.

The layout is thus: the input devices of a human (which for a computer are keyboard and mouse, amongst others) are the sensory inputs. Without the senses of touch, sight, hearing, smell, and taste, there is nothing for the mind to consider. Note that I refer always to the entire lifespan of each entity - a human who exists with all senses and then suddenly without them is an entirely different case from one who has existed forever without sense: the former has an extra ‘sense’ - memory, or experience. From these inputs, information is supplied to the central processing unit, or the mind. The mind analyses this information, typically by comparing it with information already gathered and stored in its databanks (memory), and arrives at a conclusion. This conclusion is then output - possibly through physical means, utilising the powers of the body, or internally, storing the conclusion in memory for future reference. This is the power of thought.
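
In modern terms, the loop just described might be sketched as a toy Python program - purely illustrative, and every name and structure in it (the percepts, the process function) is my own invention, not a claim about real minds or machines:

    # A toy input-process-output loop; all names here are invented
    # purely to illustrate the model described above.
    memory = []  # the 'databanks' of gathered experience

    def process(percept):
        # Compare the new input against stored experience and
        # arrive at a conclusion (here, just the items that match).
        recalled = [p for p in memory if p["kind"] == percept["kind"]]
        return {"percept": percept, "recalled": recalled}

    def mind(percepts):
        for percept in percepts:              # input: the sensory stream
            conclusion = process(percept)     # process: thought
            memory.append(percept)            # output: stored for future use
            print(conclusion)                 # output: or a physical action

    mind([{"kind": "sight", "what": "apple"},
          {"kind": "sight", "what": "apple"}])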

In broad terms, this is sufficient to answer the question raised. The difficulty occurs with the concept of thought, and the meaning attached to it. Some who tackle this issue tend to use the notion of thought as a universal description of the superiority of the human mind over other substances - the advantages of the mind can be summed up as the ability to think. I wish to go further than this: firstly, there is constantly a cycle of input-output in a being. Any sensory input produces a relevant output, which is typically in the form of memory storage - for the majority of inputs, there is never a noticeable output: if one were to listen to music and, upon hearing each chord, say its name, life would be incredibly loud. This leads us to conclude that the mind is not purely reactive, as described, but has a rational program, or system of processing information, that determines the importance or relevance of the input information, and whether any subsequent course of action is required. Upon the sight and smell of chocolate, one must determine whether or not one can actually eat chocolate at that moment in time (there are a multitude of reasons for and against, based on knowledge already possessed - e.g. an allergy, or having eaten too much an hour previously). In short, there is evidently a middle, processing stage involved in the function of the human mind - thought.
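
Again purely as an illustration (the scores and threshold below are invented), the ‘rational program’ that decides whether an input deserves a noticeable output might look like this:

    # A sketch of the relevance filter described above; the scores
    # and threshold are invented for illustration only.
    THRESHOLD = 0.7

    knowledge = {"smell of chocolate": 0.9,   # salient: maybe eat it
                 "background music": 0.1}     # noted, but no action

    def consider(percept, memory):
        memory.append(percept)                    # the silent default output
        if knowledge.get(percept, 0.0) > THRESHOLD:
            return "act on " + percept            # the noticeable output
        return None                               # no course of action needed

    memory = []
    consider("smell of chocolate", memory)   # -> 'act on smell of chocolate'
    consider("background music", memory)     # -> None (memory storage only)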

This is not sufficient to describe the activity of a human, however. There are other activities which do not at first glance fall under the same banner, the most obvious of which is emotion, or feeling. If one is to feed this concept through the ‘machine’ outlined above, it does not seem to fit, but the problem is easily solved - emotion is a mental thing, but all that this means is that the input comes not from the senses, but from the output of a previous ‘operation’ or thought process. One occurrence, either in the physical or the mental world, leads to an emotion or feeling - but this feeling is no different to any other thought process. It has an input, though the context is different from sensory input; it has a process, where the field of knowledge against which the comparison is made is also entirely different from that of sensory consideration; and it has an output - which, as always, is dependent solely on the other two stages and their interaction. Emotion is not a uniquely human feature, created by a superior mind or an extra series of genes. It is the result of acquired experience - the knowledge of the individual - and its role in analysing, classifying, and justifying new information. Fundamentally, emotions are a progression from thought.

This seems a bold statement. Take, however, the image of a mother and her child, of a post-infantile age. If the mother should die, and subsequently the child feels sad, the logical explanation is as follows:

  1. The information possessed by the child establishes that the mother was the one who gave birth to the child, and who nurtured him or her.

  2. Therefore, given the significance of birth and nurturing with relation to existence (a significance typically learnt by growing up, revealed and probably amplified in the long term), it follows from 1 that the existence of the child is because of, and was maintained by, the mother.

  3. Given 2, and various other factors developed during childhood and beyond, it is therefore logical that the mother is and was a significant part of that child’s life. This is not a feeling, but a knowing, even though an emotion will likely evolve from the contemplation of it.

  4. If then, the mother is significant in the child’s life, the loss of this person contradicts the knowledge of the child, and disturbs the related information in the child’s mind - it has to adapt, by altering and replacing all the information relevant to the mother that no longer exists. It is this process of contradiction, or more generally, the presentation of a mental problem, that results in the output of an emotion.

Does a cat feel sad when its mother dies? We assume not, because we believe that cats as a species are not as evolved as humans. Why do we think this, though? One of the primary explanations is the notion of language. Try to imagine a world totally void of language, both speech and the written word. Communication reverts to its original form: a primitive recognition system involving sight and sound. With no established language or mode of communication, one must try to understand what is happening entirely by oneself. The most obvious example of such an environment is that of the new-born baby. Can such a child think? Certainly not in the way that you and I do, otherwise there would be no need for nurturing, education, or university! A new-born needs nurturing and feeding by another, more ‘evolved’ being if it is to survive. It may feel the physical pang of hunger, and respond to it through tears and screams, but it knows not what it means, nor how to cure the problem. It is only through repeated feeding by the parent that it comes to realise what is necessary for survival. Through continued exposure to the outside world and interaction with it, the baby begins to pick up key aspects of life, and starts to evolve the concept of thought: the input (hunger pangs), the process (through experience, the remedy for hunger pangs comes through food), and the output (the physical ‘hunt’ for food). This is the construction of the program that allows thought to emerge from birth - it is a routine, a procedure, a computer program. Before the child learns why it is that hunger pangs occur, they are like error messages on a computer - a digital computer does not possess the knowledge to explain what these messages are and how to resolve them (the beginnings of AI), but this is merely because it has not been exposed to them, and furthermore, does not possess the ability to identify them by itself. A computer does not possess the five senses - all it has is a keyboard and mouse, both of which are reliant on human intervention. It is no wonder that a computer cannot learn, if it has no body to go with the brain!
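
The hunger-pang ‘program construction’ can likewise be sketched - a toy of associative learning, not a model of infants, with every name and number invented for illustration:

    # A toy of the learning described above: an association between a
    # signal and a remedy strengthens with each successful repetition.
    from collections import defaultdict

    links = defaultdict(float)   # (signal, response) -> learned strength

    def reinforce(signal, response, relief=1.0):
        links[(signal, response)] += relief   # each feeding strengthens the link

    def respond(signal):
        known = {r: s for (sig, r), s in links.items() if sig == signal}
        if not known:
            return "cry"                      # the new-born's only output
        return max(known, key=known.get)      # the learned 'hunt' for food

    print(respond("hunger"))                  # -> cry
    for _ in range(10):
        reinforce("hunger", "seek food")
    print(respond("hunger"))                  # -> seek food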

In his essay ‘Can Computers Think?’, Searle proves nothing other than to enforce that a) if one has no input, one can have no output, and b) a simple recognition game (the Chinese Room) is possible for all. This asserts little other than what I have stated - recognition is not thought, it is comparison. When a human sees an apple, the mind isolates the shape and colouration of the object, compares these attributes to its memory or database, and arrives at the conclusion that it is most probably an apple, with certain characteristics (e.g. that it can be eaten). If a human is shown that the Chinese symbol for apple corresponds directly with the English, or even the physical, equivalent of an apple, then the human will of course recognise this in the future. A computer can do both of these things, and can do them much more quickly. In fact, the whole of a computer’s life is spent doing this - receiving information, processing it, and outputting it. I again argue that this is exactly the same system that is involved in human thought, and that the reason why it appears inferior is that a computer has a restricted array of inputs (restricted by the human engineers who created it). There is nothing more to it than this. Searle repeatedly states his belief in semantics, but what is semantics without syntax? What is meaning without something to apply meaning to? Is semantics not merely an extension of syntax, a meaning or conclusion derived by comparing inputted information to already accumulated data? Syntax can exist without semantics (the example of a baby or cat), but semantics does not exist without syntax. Which, then, is most important?
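
The apple recognition described here is, computationally, just attribute matching. A minimal sketch, with the exemplars and attributes invented for illustration:

    # Recognition as comparison: match observed attributes against
    # stored exemplars and pick the closest.
    exemplars = {
        "apple":       {"shape": "round", "colour": "red",    "edible": True},
        "tennis ball": {"shape": "round", "colour": "yellow", "edible": False},
    }

    def recognise(observed):
        def overlap(stored):
            return sum(observed.get(k) == v for k, v in stored.items())
        best = max(exemplars, key=lambda name: overlap(exemplars[name]))
        return best, exemplars[best]   # 'most probably an apple', plus traits

    print(recognise({"shape": "round", "colour": "red"}))
    # -> ('apple', {'shape': 'round', 'colour': 'red', 'edible': True})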

That is for the reader to decide, but my position should by now be clear. Computers are not the same as minds, or brains - that is obvious. But in order to compare the possibility of thought in either, we must look beneath the surface, to the simpler inner workings of sentience. Human thought as a concept is an incredible thing, but that does not mean it is any more complicated or elaborate than the simple model above. Given the body, and the experience, a computer can be intelligent, and think for itself. In the same way as we are nurtured and given new ‘programs’ or procedures for carrying out different tasks or taking care of ourselves, so too would a computer be, given the same array of inputs and outputs as a human. The need for a distinction between man and machine arises primarily from fear - do we really want computers to be able to think for themselves, knowing the power they possess when we harness them? Are we really prepared to grant computers the ability to be intelligent and self-sufficient? Whether fortunate or not, in the world of technology man plays God, and so it will be until we give something away - the key to the cell in which AI is contained.

footnotes

J. Searle, ‘Can Computers Think?’, in J. White, Introduction to Philosophy, p. 198.

Following on:

a) I realise now that I did not consider an alternative conception of a computer other than the traditional input-process-output model. This is mostly because I see no other model to consider; I have based my theory of how the human mind works on exactly that model, transcending the digital domain. In essence it is the same as information-consideration-action, and so on, and I have yet to see a system which differs absolutely from this that doesn’t result in useless chaos.

b) The idea that emotion is the result of a computation is a controversial one, and I was quizzically scorned in tutorials for fighting from this corner. But it is still one I believe in 100%: “Fundamentally, emotions are a progression from thought.” If you reincorporate this into the Mindware schema, then we could say that the emotion and qualia we feel are simply the result of an “emotion machine” which takes the recorded experience and redisplays it subconsciously and instantaneously, so that we have a “feeling”. This is simply an illusion on the part of the mind; not an illusion of untruth, but simply a pseudo-reality, like a hologram.

c) Crucially, the answer to the question lies in the third-from-last paragraph, about the senses a computer is permitted - very few. The computer has not been granted any autonomy - it is locked up in a cell, and regardless of the fanciful AI programs, access to data - experience - is the only way any entity can live and learn.

kajun said:
“Fundamentally, emotions are a progression from thought.”

It really saddens me that people think like you do, but I guess there are millions and millions of you - too many to stop. If emotions progress from thoughts, then they are thoughts; that is the only reasonable logical inference one could make. So much for internal checks and balances. If a war against the machines (like in the Terminator movies) ever comes to life, you can hang it on your sleeve.

Of course computers can think. Our brains are computers.

I’m not entirely sure what you’re saying. Yes, emotions are ‘just’ thoughts: the purpose was to demystify the glory of emotions as something soulful and ethereal. Are you saying that this is OK - that emotions are just part of the information-processing parcel - or would you like them to be more special?

I think the problem is that people are looking at the problem all wrong. As computers are now, I don’t think they could possibly “think” as we do. I mean, they just are not set up the way our brains are. If you’re going to say that a computer could possibly think, does that mean you could set up a massively complicated hydraulic system that could think? (I could be so wrong on this and seem a fool, but I seem to remember hearing that anything you can do with electronics you can do with water - after all, it’s just harnessing the flow of water instead of electrons.) And I don’t think anyone would think that, ’cause if you do, who’s to say that the Gaia theory isn’t true?.. um, well, me for one, but that’s just a thought anyway.

Frighter, you actually raised a valid objection. The computers we are using now are sequential processors, which have vast processing power but can only apply it in very narrow ways, and as you said, these can’t really be considered conscious.
Our brains are parallel processors, which means that while we don’t have anything like the speed or processing power of a normal computer, we can learn and exhibit far more complex behaviour than if we were sequential processors. Parallel processors can be and have been produced artificially, and I think it is only a matter of time before they reach a near-human level of intelligence, although how much time is anybody’s guess. The last time I heard about them they were at perhaps insect-level intelligence (guarding territory, seeking “food”, etc.). I think that intelligence in animals is a function of the number of neural connections; hence, when technology can begin to rival the number of connections between neurons in a human brain, true AI will emerge.
I’m a little shaky on the science here, but I think I’m correct in the details I’ve put down; if anyone knows better, please say something.
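
If a rough picture helps, here is a toy contrast in Python - illustrative only, since real neural parallelism is far finer-grained than a thread pool, and the workload here is an arbitrary stand-in:

    # Toy contrast: the same work done one step at a time versus
    # spread across many workers at once.
    from concurrent.futures import ThreadPoolExecutor

    def unit(x):
        return x * x            # stand-in for one small, simple unit of work

    inputs = range(8)

    # Sequential: one fast processor visits each input in turn.
    serial = [unit(x) for x in inputs]

    # Parallel: many workers each handle an input at the same time.
    with ThreadPoolExecutor(max_workers=8) as pool:
        parallel = list(pool.map(unit, inputs))

    assert serial == parallel   # same answer; only the 'how' differs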

Very interesting! Could you possibly further distinguish the difference between a sequential and a parallel processor? Thanks, GD. . .

I have minimal knowledge of this (and indeed conventional computer structures), but this seems like a fairly good link. I think the basic difference is in the connectivity, but I couldn’t put it any more precisely than that.

Good stuff GD. I agree - serial processing can’t possibly hope to gain intelligence without some serious breaches of physics. IMHO the key function of the mind is extrapolation from data, i.e. memory/experience, at a speed that is near instantaneous. For that to happen, you really need a ‘central’ (theoretically speaking) memory bank, and billions upon billions of agents which attack the memory structure, extract possible data, and then destroy each other using that data (based on the quality of the data given the original ‘question’).

Predictably, I no longer completely abide by my Mindware thoughts (the central tenets are fine, but the whole work needs to be reapproached from a more usable perspective), but I certainly still see the core of intelligence as being, firstly, the organisation and prioritisation of data into the memory, as was said, and secondly - the part that I didn’t consider - the effectiveness of attacking that information using agents to retrieve the data. Assuming that good systems of prioritisation allow high levels of intelligence was a bit of a faux pas, because that’s only halfway there - if you have shoddy agents retrieving the data then you’ve taken two steps back. Similarly in reverse - a scrambled and disorganised mind can still behave intelligently if the agents are darn good, much like someone with a chaotic office who still knows where everything is because they’ve memorised (uh-oh! recursive memory structure issue!) the evolution of the chaos.
Interlude to present crude diagram:

(the central red ball is memory, the blue spheres are the “thinking” or conscious systems, and the green are slave systems.)

Something like that, anyway. The point being that yes, you do need everything to happen in parallel, and the more connections the better. Importantly, the connections can’t be fully autonomous if they are to be efficient (they need to communicate and combine efforts), but then again, nobody said the mind had to be efficient (more glucose please!). The language of thought is another thorny issue altogether…
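
To put the agent idea crudely in code - everything here (the memory items, the scoring, the agent count) is invented for illustration, not a working design:

    # A loose sketch of 'agents attacking the memory structure': many
    # agents each grab an item and score it against the question; only
    # the best-scoring answer survives the competition.
    import random

    memory = ["mother gave birth", "chocolate is sweet", "rain is wet"]

    def agent(question):
        item = random.choice(memory)                    # grab one item
        score = sum(word in item for word in question.split())
        return score, item

    def recall(question, n_agents=100):
        return max(agent(question) for _ in range(n_agents))

    print(recall("who gave birth"))   # most likely (2, 'mother gave birth')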

The real idea, Kajun, that cognitive scientists are using to create a virtual intelligence has nothing to do with making a machine mimic us, or learn to perceive. They assert that, from the point of view of mathematical physics, we are really discrete state machines (DSMs). DSMs are information-processing entities. For instance, our neural network and its supporting structure can process at most 10^14 bits of data. This would include perceptions - seeing, feeling, smell, hearing - and higher functions like reasoning, emotional response, etc. They ignore the question of emotions being dependent upon real-world experience. It is a minor glitch, which I don’t take issue with, because it could be remedied by giving a machine a virtual ‘real world’. At present, processors, even in the most advanced machines, can’t process anything like this. With the coming of a quantum machine they could, but that’s at least 100 years away.

The leap they make is to believe DSMs are computable: that whatever our human experience is, it is discrete and thus calculable. This is in opposition to a continuous idea of human thought, where the model is a continuous process that can’t be broken down into bits. An example of this would be in the analysis of real-valued functions that diverge. Even though you know that there are discrete inputs, the output just keeps going to infinity. It’s a continuous function that doesn’t give you ‘this’ for ‘that’, or in other words is not discrete. But we (they posit) are not like this; it’s just that the algorithmic process is at present so damn complex that we can’t make a working model of it in a machine.

This is what Searle is attacking. I dig him for debunking the very weak Turing Test as proving a machine is conscious. But I think he’s on very shaky ground when he tries to show that there is something ‘human’ that is not replicable in the virtual. He believes that the reductionist foundation of computational AI is flawed. So, as you know, reductionists believe we can deconstruct any formal system into its constituent parts, analyse the connections, and reconstruct it to understand the system. Searle asserts that two philosophic concepts make this near impossible when we try to make a virtual emulation of ourselves. First, the epistemic nature of humans: we can have knowledge of things. Second, the ontological nature of humans: we are existing beings. When we try to gain knowledge of our inner structure via reductionist methodology, the fact that we are applying the knowledge to ourselves makes it fall apart. Or, more simply put, ‘to be’ obstructs ‘to know’. This is a simplified version of what he has written in at least two books at present. We can’t make a machine that has our qualities, because trying to know the qualities we have dooms the attempt, since we are subjects of those qualities. It’s very close to Goedel’s Incompleteness Theorem in mathematical logic. I believe Searle is wrong on this point. But my msg has already gone on too long.

Interesting topic. Of course computers nowadays don’t think, but if a brain made up of neurons was inserted into the head of a robot and this was programmed to be sensory, I think that’d make a computer think. Some people may argue ‘how can electric neurons think?’, but to me meat (which is basically what we’re made of) is as inanimate as electricity.

If we were able to make a robot with a powerful enough CPU that was capable of “thinking,” would that robot be considered a machine or something natural? After all, it is man-made, but the materials used to build it were all derived from nature at some point. So, I guess what I am asking is: if something is manufactured using natural products (or products once derived from nature), is it natural?

The above post was me. I guess you get timed out after being on this site for a certain period of time.

But surely, considering all the materials are manufactured, they are not natural; and also, we would be creating the robot, whereas God created the first people and animals. Today, however, we are created by humans, inasmuch as human reproduction makes individual people - does that mean that we are man-made, or are we still natural?


Such is already underfoot.

But as far as computers thinking:
They can
They have
They will forever more.

Homo sapiens has designed his replacement.
As a species, too clever to die and too dumb to survive.

Computers do not self-value. That means they have neither the means nor the tendency to acquire or seek out by themselves the resources and conditions required for their existence. A thinking computer would mean a self-sufficient machine running a program that takes care of this machine’s existence.

Human cognition, what we actually refer to as thinking, is an extension of such self-valuing, an extension of emotions, instincts, nuclear forces, any “tendency” in beings, animate or inanimate, that serves to keep this being intact.

Computers don’t think. They never have. They calculate. Or rather, they are made to calculate, by thinking humans.

You guys really need to get out from under the mysticism of consciousness.
In that definition, what part of all of that do you think a computer can’t do?

Self-aware?
Sensations?
Thoughts?
Surroundings?


“Self-aware” == conceiving of the difference between you and what is not you. In the case of a computer, for it to be self-aware, it must distinguish between what is the computer and what isn’t. There is hardly a common PC that doesn’t meet such a condition. Every OS, especially if it is security “conscious”, is designed around the difference between a third-party application and the OS itself. The OS is permitted to do many things that it blocks and forbids the application program from doing. It is the OS that is making the distinction between itself and “something else”.

When you plug a printer into that same PC, the computer connects the printer interface so as to allow the printer to be considered part of itself, in the same way that you consider your hand to be part of yourself. You can sense what is happening concerning your hand and you respond accordingly. In the common PC, the OS knows when the printer is connected. It then knows if the printer is ready or busy. It knows if the printer is out of paper or ink. To a large degree, it knows when something is wrong. It uses that printer in the exact same way you use your hand. And it is aware of it in the same way you are aware of your hand.

The OS is very largely aware of its memory capacity, the keyboard, its available disk space, the current CPU status and usage, its open thread count, and its internet activity, and it generally holds a memory of all of those things and more for quite some time. It is definitely aware of itself and its recent history. And it responds accordingly. It uses that awareness to its own end (hopefully to accomplish a task given to it by a user… unless Microsoft had anything to do with it).
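
You can even watch a machine report on its own state. A quick sketch using the third-party psutil package (pip install psutil) - this only shows the idea, and is not a claim about how any particular OS implements its self-monitoring internally:

    # A machine inspecting its own state, via the third-party psutil
    # package; a sketch, not a model of OS internals.
    import psutil

    print("CPU usage:       ", psutil.cpu_percent(interval=1), "%")
    print("Memory in use:   ", psutil.virtual_memory().percent, "%")
    print("Disk space used: ", psutil.disk_usage("/").percent, "%")
    print("Processes alive: ", len(psutil.pids()))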

“That’s technology”??
What? So because it has transistors and wires instead of neurons and synaptic nodes, it is not sensing?
Get real. The keyboard senses when any key is pressed and generally how long it has been pressed for. The computer monitors/senses its power levels, its CPU usage, its memory usage, its peripheral attachments. A computer is a bit useless without its ability to sense. Some have video monitors with image recognition (police cars, for example) that don’t merely sense in the simple, direct way, but actually use heuristic analysis to discern the probability that what they are seeing is this person or that - not to mention virus detectors, again distinguishing between self and else. Anything that it can examine, it must be able to sense.

That is kind of silly. Nothing creates anything from “nothing”, and certainly not thoughts. A thought is a complete statement, as is an equation in mathematics. Not only do computers use such “thoughts” that are directly programmed into them, but they create new thoughts all the time by following a means to do so. But so do you. You are, in effect, programmed with the ability to create thought from sensed data plus prewired presumptions and engrams. No living thing creates anything until it has specific abilities pre-established.

Mathematica is a program that doesn’t just deal with equations; it works with those equations so as to simplify them, make them more complex, choose between one set or another, and yes, even derive equations (thoughts) that the programmer would not have had any idea about.
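
Mathematica itself is proprietary, but the same flavour of symbolic manipulation can be sketched with the open-source sympy library - the expressions below are arbitrary examples, not anything from Mathematica:

    # Symbolic manipulation in the spirit described above, using the
    # open-source sympy library.
    import sympy as sp

    x = sp.symbols("x")

    print(sp.simplify((x**2 - 1) / (x - 1)))   # -> x + 1, a simpler form
    print(sp.solve(x**2 - 4, x))               # -> [-2, 2], derived solutions
    print(sp.diff(sp.sin(x), x))               # -> cos(x), a new 'thought'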

My own processor arrangement taught me more about subatomic physics than I would have known without it, even though it was merely working with what I personally gave it. I pointed out specific principles concerning reality and gave it a means to begin working with those principles. It then began displaying to me what the necessary outcome of those principles must be. Many of those end results I had already predicted, but some were quite interesting and different from what I had thought. My computers taught me why it is that the strong and weak forces exist, for example. Contemporary physics can’t even explain that. But more importantly, it takes the results of one scenario and applies them to a higher level, all automatically, even though I had no idea what those results were going to become.

“Jack”, my processor arrangement (a single-bit processor attached to a common PC), knows more about particle physics than any physicist you are ever going to meet. Who taught it? I only told it a few basic principles concerning the fundamental construct of reality, having nothing at all to do with particles. It taught me the rest.

We aren’t looking for “new”. We are looking for a consciousness property that a computer couldn’t have. So far, I haven’t seen anything that every common PC doesn’t have.

They are probably already doing it for themselves, but apparently merely don’t realize it. That is one of the very serious dangers of letting non-philosophical people run wild creating things, like nuclear weapons and machines that can decide that people are just too much of a bother. We have enough other people already doing plenty of that.

What language is their programming in? Or, what is the software?