Strong or Weak AI?

Do you believe in Strong AI or Weak AI?

  • Strong AI
  • Weak AI
  • Other

Do you think it’s possible for a computer to become truly intelligent and conscious, like a human being (Strong Artificial Intelligence), or not (Weak Artificial Intelligence)?

From Wikipedia: http://en.wikipedia.org/wiki/Artificial_intelligence

Also check out http://en.wikipedia.org/wiki/Strong_AI_vs._Weak_AI

Suppose there are no higher beings or supernatural phenomena; then consciousness is nothing beyond chemical composition. We obviously contain the correct composition for consciousness. So I see no reason why humans, given enough time for knowledge and technology to progress to that state, should be unable to construct another conscious being. We just have to learn the chemical formula and the algorithm to construct it. We would call it a robot, but at least in the beginning, I think it would highly resemble a human.

Russell said that theorems in mathematics are nothing more than an extension of the axioms you start with. This AI question, like many others, follows the same line of thought: the answer is nothing more than an extension of your assumptions. For example, if it takes a god to create consciousness, then we could not create it, assuming of course that we are less than that god and cannot become him. Even that rests on its own assumptions! Philosophy is a rough life =)

I’d go for strong and hope the machines can make a better job of it than we did!

Strong? Yeah, sure, strong. The strongest. Hey, it’s evolution, baby.

Strong AI is akin to the fountain of youth: a myth and a fantasy. It will never happen. I have never heard a good argument for strong AI, and we have never come close to achieving it, or even to proving it’s hypothetically possible, regardless of advances in science.

Awareness is the key to thinking on your own, and we cannot make awareness.

Maybe our great- or great-great-grandchildren will have to worry about this shit, but currently I don’t fear for my safety one bit from AI.

Are cyborgs considered AI?

Part man, part machine, and the man cannot function without the machine…

If so, we already have that…

-Imp

If you are not a substance-dualist, you have to assume that AI is possible. It must be possible for “mind” states to be reproduced in other mediums, since mind cannot be in anything else…there is no “other” substance.

The question is: what is the nature of agreement between a robot and a human being, and could the human being be sure there was an understanding between them? If only a language were known, while the robot could not see a color or smell a biscuit, he could speak about those subjects only by using syntactical probability. He couldn’t know anything about the actual stimulus itself, the color or the biscuit, but only about the ways in which those words have been used before and to what degree those combinations are in memory.

You could extend this scrutiny further and ask “if language acts to express an experience, and the robot has no sensory organs, what language could he possibly use?”

Without similar sensory impressions, knowledge about the world cannot be gained and language cannot be used. The robot is simply calculating the “most probable response within his vocabulary” to a question like “Because the biscuits smelled so good, and we were so hungry, what did we want to do, Robot?”

In a split-second the Robot has already processed billions of data bits. He went Boolean on your ass and comes up with an answer in the time it took you to wipe your nose.
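A minimal sketch of what that “syntactical probability” could look like: the robot ranks candidate replies by how often their words co-occur with the question’s words in a stored corpus. The corpus, candidates, and scoring here are toys invented for illustration, not any real system.

```python
from collections import Counter
from itertools import product

# Hypothetical toy corpus standing in for the robot's stored language.
corpus = [
    "the biscuits smelled so good we wanted to eat them",
    "we were hungry so we ate the warm biscuits",
    "the color red looked warm and bright",
]

# Count how often each pair of words appears in the same sentence.
cooccur = Counter()
for sentence in corpus:
    words = set(sentence.split())
    for a, b in product(words, words):
        cooccur[(a, b)] += 1

def score(question: str, reply: str) -> int:
    """Sum co-occurrence counts between question words and reply words."""
    q_words = question.lower().replace("?", "").replace(",", "").split()
    return sum(cooccur[(q, r)] for q, r in product(q_words, reply.split()))

question = "because the biscuits smelled so good, what did we want to do?"
candidates = ["eat them", "paint a fence", "swim somewhere"]

# The robot "answers" without ever having smelled a biscuit:
print(max(candidates, key=lambda reply: score(question, reply)))  # eat them
```

Note that nothing here ever touches the biscuit or the smell; the answer is purely a fact about which words have kept company with which.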

Yes, this Robot is alive… or… we cannot prove to each other that we are not robots. Prove it. You cannot. All you have is language and your parents’ old Twilight Zone episodes on VHS, which doesn’t help. But we cannot say that a robot which uses the same vocabulary as we do will not respond to, or create, meaningful sentences.

[quote="kingdaddy"SI have never heard a good argument for strong AI and we have never even come close to doing it or even proving it’s hypothetically possible regardless of advances in science.[/quote]

Just because you haven’t heard a good argument, or because we at this point in time cannot create consciousness, does not mean it is absurd to consider. I forget which logical fallacy that is, but it is one of them. =P

It’s just an invitation to show one. What else is there to discuss, hypothetical maybes?

I’m with Searle on this one. The Functionalists make some good points, but I really don’t think that even Strong AI would actually be thought in the same way we produce it.

I debated this topic a couple of years back and made analogies to devices that replicate our behaviour efficaciously. When a computer beats you at Chess, was it “playing” Chess the same way you were? Can a vibrator have sex or make love?

Whether a submarine can swim is semantic. Whether a computer can think is not semantic, because thinking is more than a function. To think implies being aware. Cognition is manipulating the form of awareness. Awareness is not something that has meaning only functionally, or one could not be aware if one could not move, and we wouldn’t be able to tell what we’re thinking because we wouldn’t really be thinking it until we said what it is we’re thinking. So functionalism doesn’t make sense.

As to whether computers can be conscious: no matter what the algorithm, the computer is essentially doing the same thing. It’s just pushing electrons around logic gates in different patterns. The difference between one algorithm and another is superficial. And awareness is singular, yet in one split second of time every gate might as well be separate from every other gate, and one CPU cycle separate from the next. So if a computer can be conscious, every computer is conscious and every logic gate is conscious.

The real question is: even if a strong AI were possible, how would we verify it? Sure, we could devise artificial tests like the Turing test, but those always come up short. There is no test for qualia, so I think this question is moot.

A better question is:

“Can one create a program that allows for evolution in its code?” For instance, you somehow program the code to evolve and change in tandem with external stimuli. If you are a programmer: I am not talking about things that merely interact with their external world, which isn’t too hard to do, but about machines that will take an input and then actually modify their own code in response. I guess it has to do with machines understanding meaning, or the ability to recognize such things as what self-preservation means and how to avoid self-destruction, which doesn’t fall into merely programming “If depth >= 5, then Stop();” Rather it would be something like “If long-term memory has an analogous experience of an ‘other’ being inoperative after an event where depth >= 5, then evaluate whether the circumstances may compromise structural integrity. If evaluate = true, then store the event in long-term memory.”

I think I lost something in this, because there has to be something random that modifies the coding; if the hardware and software are too hardwired to allow evolutionary coding, then there is no space for change or “learning”. A rough sketch of the idea follows below.
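One way to picture that random element is a crude evolutionary loop: a rule is blindly mutated and the mutation is kept only when it scores at least as well against the external stimuli. Everything here, the rule, the fitness function, and the depth-5 scenario, is a toy invented to match the post, not a real learning system.

```python
import random

random.seed(0)  # reproducible toy run

# Toy "external stimuli": the environment punishes any stop rule
# other than stopping at depth 5 (echoing the depth >= 5 example).
def fitness(stop_depth: int) -> float:
    return -abs(stop_depth - 5)

# The 'code' being evolved is reduced to a single parameter.
rule = {"stop_depth": random.randint(1, 10)}

for generation in range(50):
    # The random element: blindly perturb a copy of the current rule...
    mutant = {"stop_depth": rule["stop_depth"] + random.choice([-1, 1])}
    # ...and keep it only if it does at least as well against the stimuli.
    if fitness(mutant["stop_depth"]) >= fitness(rule["stop_depth"]):
        rule = mutant

print(rule)  # drifts toward {'stop_depth': 5}
```

The hardwired part is the fitness function and the mutation scheme; only the parameter is free to “learn”, which is exactly the limitation the post points at.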

I think if you added the line “With our current technology” to this poll, almost everyone would agree that “Strong AI” isn’t really a possibility. Do look, though, at the curve of progress.

How brush fires became campfires, became torches, became candles, became lightbulbs, became LCD screens?

Technology increases exponentially. Each year that passes, we learn and discover more than the year before. Look at the curve: with any invention, the leaps in time from one step of progress to the next get smaller as time goes on.

Perhaps we can’t see “Strong AI” 5 years, or even 25 years from now.

But a hundred years from now?

A thousand?

The problem of other minds is difficult enough with humans, but when applied to “conscious” robots it is formidable. Because of that, one is unable to come close to knowing whether the qualia that accompany thought and emotion, the what-it-feels-like of experience, accompany computer experience. The only way I can see this happening is by merging biological components with mechanical ones; then maybe the problem of other minds could be overcome.

Biological components… yes, though we run into so many other problems. The problem of “the soul”, for example, and the problem of “is the machine truly sentient, or is it just doing what it is programmed to do, acting out a careful simulacrum of sentience?”

How do you define “Life?” How do you define “A machine?”

If we clone a human being from a few hair cells in a laboratory, are we creating “artificial” life?

Already we are seeking to improve on nature’s design. Gene therapy, biogenetics, biotechnology, and so on. Already we are integrating mechanical parts into our biological bodies …how is this different from manipulating the biological parts? Under this definition, couldn’t you consider our bodies nothing more than very complex machines? And yet, we have intelligence, we have life. That’s why I mention the problem of the soul. I don’t see a way around it.

I think, a thousand years from now, mankind will have created a ‘machine’ (biological, mechanical, electrical, …or maybe even a coded program) that will have achieved sentience. There are those who will recognize it as such, and those who will never accept it. The problem of “Strong” vs. “Weak” AI is not in the AI’s actual capabilities, but in our definitions and perceptions.