Can Machines Think?

Are machines able to think? Sure, from the outside AIs appear to give intelligent answers, but is anything truly being thought? Do they just shuffle symbols around in their processors, or do they genuinely understand the responses they are giving?

Enjoy

I suppose it depends on what you mean by “think”. To me, thinking is “purposeful imagining”. So if we can make a machine imagine, with raw thinking material supplied in some database or even over the Internet, and it could synthesize that material into useful things, like a story, a solution, or an apparently useful idea, then I have to conclude it is “thinking”. Turing devised a similar test of machine intelligence.

The contemporary philosopher John Searle believes that only a machine can think. I’m not prepared to go that far, but I do believe that human bodies are machines.

Mechanical engineering texts refer to a simple lever and fulcrum as a machine. But given that there’s no limit to the maximum complexity of a machine, I don’t see why humans can’t be described as biological machines. I can’t imagine why the term would be demeaning. Our heart is a pump, our kidneys are filters, blood vessels are conduits, and nerve impulses are electro-chemical phenomena. What part of our body doesn’t perform some machine-like function?

cs.berkeley.edu/~ravenben/humor/meat

Michael

edit: grammar

Artificial Intelligence does not exist and will never exist. Well, at least we won’t know whether it exists or not. How can we ever know that any entity other than ourselves is fully conscious and self-aware? We can’t, at least not now. And furthermore, we don’t even have a definition of what consciousness fully entails… so we will never know whether we have created it or not.

Vance,

Your second statement defeats, or at least re-qualifies, your first. Your maximum claim, as stated above, is that you don’t know whether or not humans will ever construct a thinking being. Neither am I certain that the human species won’t perish tomorrow in a mushroom cloud or an asteroid collision, but leaving aside apocalyptic thoughts, I think we most likely will create other forms of intelligence. And whatever intelligence we come to create won’t be the least bit artificial; it will be the real thing.

A laboratory rat has a measure of intelligence. Is that intelligence real? What about a spider? When the web built by a spider is damaged, the spider decides whether to repair it or whether it makes “more sense” to abandon the web and start over with a new one. It’s a complex problem that requires a measure of intelligence, real intelligence, to solve.

Now, do you suppose that if we (mankind) continue to improve our command of technology, we might someday be in a position to create a “machine” possessing intelligence equivalent to, or exceeding, that of a spider or a rat? I think the odds are pretty good that we will. And if you agree that a rat or a spider has real intelligence, then our machine’s intelligence will be just as real. From there, it’s a matter of stepping up the complexity of the machine from rodent to dog, from dog to George W. Bush, and from George W. Bush to the equivalent of a real, thinking human being.

Some folks used to think that heavier-than-air flying machines were impossible. And shortly after they were proven wrong, comments were made that real airplanes are made exclusively of fabric and wood. Of course, we now know that metal aircraft fly quite well. I think of this fact when I hear people say that intelligence can only be made of organic molecules. Intelligence is no more dependent upon carbon than flight was dependent upon cellulose.

I expect that someday silicon- or gallium-arsenide-based beings will look up at the night sky and wonder what it’s all about. They’ll ask themselves why their life is worth living, why they find something beautiful, and why they fall in love. James Trefil wrote:

“The goal of humanity is to build machines that will be proud of us.”

Best wishes,
Michael

Good post… but because we will never know whether it exists or not, you could safely say that it will never exist. It does sound contradictory, but those statements are closely related: if we can never know whether something has true “AI”, then for us it will never exist. I do believe it to be possible to an extent, dealing with humans here, but I think there has to be some sort of heavy psycho-somatic connection… almost taking away the full machine and replacing it with more of what we call a cyborg. It might be easy to replicate the intelligence of a rat or dog, but would it have the true essence that is dog? Would it have the instincts? And I agree… I think we will create REAL intelligence even before we create artificial intelligence. It’s just that there are so many unanswered questions about the mind and the brain. Plus, the definition of intelligence is still sketchy, and furthermore so is how we measure it. It’s going to be a lot of work, but I think it is feasible… to an extent :wink:

In my opinion, if it walks like a duck and sounds like a duck, then it’s a duck. As a Computer Science student I’ve been taught that there is, in general, no way of knowing - or computing, for that matter - whether two programs compute the same mathematical function. I could write a factorial function as:

int factorial(int n) {
    if (n <= 1) return 1;   /* base case; n <= 1 also covers n = 0 */
    else return n * factorial(n - 1);
}

Or write it completely differently:

int factorial(int n) {
    int temp = 1;
    for (int i = 2; i <= n; i++) temp *= i;
    return temp;
}

They’re the same function, and yet they’re not. The implementations are completely different: one uses recursion, the other a loop. Natural flying creatures are carbon-based; artificial flying objects can use metal. From the user’s perspective, the two functions above don’t even look different. Both are called by typing factorial(x), where x is a non-negative integer, and both return exactly the same result for any given input.

I claim that artificial intelligence already exists. Ever play a game of Madden or Street Fighter against a computer, and that computer kicks your ass? How about a chess game against the computer? What do you call that? It’s artificial intelligence, though it certainly isn’t AI simulating a human being.

I do think that in our lifetime we will see a robot simulate a natural spider. Hell, they’ve already got spycams on robot bees using parts of the flying algorithm that a natural bee uses. Could we see an artificial dog in our lifetime? That’s a little hard to say, but it could happen. A flag goes up telling the dog it wants to be petted; it begins its master-locating algorithm with optical input, establishes contact, outputs a “bark” wave for attention, and waits to be petted. It’s about desires: prioritising them and finding the means to fulfil them. All you would need then is to plug in some Freudian superego and a strong ego and “bam”, you’ve got a human… to put it in few words.

So I think it’s possible.

Does thought exist without language? I reckon that any stimulus (either external or internal) that we’re exposed to creates a sensation; but I think sensations are unarticulated thoughts. For instance, say I saw an airplane for the first time in my life: I’d probably feel something intense without being quite able to place it. In that sense I think that language and thought are very closely linked. On the other hand, I don’t know whether I’d go as far as to say that thought is a strictly human activity.

Hello Karolina,

The vast bulk of our mental activity is entirely unconscious. We’re not, for example, normally conscious of the signals controlling the rate of our heartbeat or the automatic feedback loop regulating the temperature of our bodies. And at any given moment we’re only conscious of a tiny subset of all the data delivered to our brain by our senses. In his book Cerebral Organization and Behavior, Karl Lashley wrote:

“No activity of mind is ever conscious. This sounds like a paradox, but it is nonetheless true. There are order and arrangement, but there is no experience of the creation of that order. I could give numberless examples, for there is no exception to the rule… Look at a complicated scene. It consists of a number of objects standing out against an indistinct background: desk, chairs, faces. Each consists of a number of lesser sensations combined in the object, but there is no experience of putting them together. The objects are immediately present. When we think in words, the thoughts come in grammatical form with subject, verb, object, and modifying clauses falling into place without our having the slightest perception of how the sentence structure is produced… Experience clearly gives no clue as to the means by which it is organized.”

It’s his view that the process known as “thinking” isn’t a conscious process. Our conscious selves are handed the final draft of all this unconscious gear-turning in a form they recognize: the form of language. Homage is paid to this view by the common quip: “How do I know what I think until I hear what I say?”

Have you ever given up on finding a solution to a problem, only to have it later come to you “out of the blue”? It happens to me occasionally, especially in mathematics. It’s as if I placed an order for a solution, and had to wait for the answer until my unconscious gears had spun 'round a sufficient number of times. My unconscious self will only stop thinking after I’ve died. Even as I sleep it continues to preserve my breathing and my heart rate. It stands guard to alert me of any unusual noises or threats.

But to say, “It stands guard to alert me,” presupposes that my unconscious mind is merely a handy tool of my true conscious self. But given that the vast proportion of my thinking is unconscious does it really make sense to identify myself with such a limited component of this overall thinking; the tiny part that appears to be self-aware?

Words are not the fundamental components of thinking. Words are, themselves, ideas. Ideas result from thinking, they aren’t the stuff thinking is made of. Plato and Kant both remarked that thinking is a matter of talking to oneself. It’s true in the sense that our ideas come to us packaged in the form of language, but we shouldn’t imagine that thinking is fundamentally a process of assembling sentences out of atomic words. Our thinking goes much deeper than language.

David Chalmers, among others (myself included), would tend to say that thermostats engage in very primitive thinking. Our brains are composed of some hundred billion interconnected neurons. If a mad surgeon began removing neurons from my brain, exactly how many would he have to remove in order that I would no longer be a thinking being? We normally wouldn’t say that a single neuron could think, but what about a million? As with so many ideas in philosophy, we’re usually led back to the problem of vagueness, the so-called Sorites problem.

Michael

Yes and no.

Yes, machines will be invented that appear to think, but

No, it’s all just an extension and projection of our own human consciousness. Machines will only “think” with our minds.

Are humans able to think?

Define “think” or “thought” first. Then ask the question again.

I hope you have a well-defined idea of what “intelligence” is, because your statement “Artificial Intelligence does not exist and will never exist” makes this particular AI student think you have no clue.

If machines had thought, that would quickly lead to imagination, and that would lead to (their) life and (our) death. They would soon make themselves indestructible and control everything in the world, and we would end up in a Terminator-esque life.

So I like to think not.

Why?

Why?

Why?

But then again, they could help us prosper.
Look at humans: the most powerful are usually the greediest. The machines might feed off of that and then adapt themselves. It could be the end of life as we know it. But then again, everything is the end of life as we know it. That is life.
If you replace all the ‘woulds’ and ‘wills’ with ‘maybes’ and ‘mights’, then that’s what I meant. If it doesn’t make sense, just don’t bother.

To think, you need an independent mind and hence consciousness. Machines are programmed only to respond, not to think; even when doing heavy calculations where not everything is fed to them, they are still responding. If they could exist independently of the programming in them, then they could of course think. But since they can’t exist outside of their programming, they can only respond to it, so they can’t ever develop consciousness; therefore they can’t become independent entities and THINK. :laughing: Ok, ok, that sounds like circular reasoning, but it’s not.

But then isn’t that just programming? It’s not actually thinking freely or on its own… It just does what it has been programmed to, it follows a set of rules. I think “thinking” is closely related to having “free-will” … Imagine thinking about what you’re doing but still not being able to control yourself. For instance, imagine someone tells you to pick something up and against your will you do so, even though there is no reward/punishment (which would mean any ‘effects’ from your picking it up: no disappointment from the other person, no physical abuse, no reward for picking it up, etc). I think whether we truly have free-will or not is irrelevant here, because we at least THINK we have free-will… When you choose to eat pancakes for breakfast, even if it was already pre-determined, you still feel like you chose to eat pancakes. You didn’t really eat those pancakes against your “free-will.”

(I understand that there are certain circumstances where people do things which in other occasions they know they shouldn’t have done, but did anyway. In that particular moment, however, they weren’t necessarily thinking clearly (meaning, they weren’t thinking as they ‘usually’ do), so it’s not the same as what I meant to say…)

I think just putting in an ego would create independence. It should then be able to make independent decisions.

Well… yeah, I guess so… so that pretty much answers the question, since machines don’t have egos… they can’t think independently from what they’ve been programmed to respond to… :astonished:

The Turing test is a test designed to determine whether a machine is intelligent.

Here’s how it works: you put a human and a computer in two separate rooms, with an interviewer in a third room. The interviewer asks the computer and the human questions. The answers are given to him as printouts, and yes, lying is allowed on both sides.

If the interviewer can’t determine which is which more than 50% of the time (that is, if he does no better than chance), the computer must be intelligent, because you can’t tell the difference.

This is the best test for AI that I have heard of.