Artificial Intelligence

On the face of it, a philosophy forum might not seem the best place to
do what I am about to do, but upon closer inspection, you should see the
relevance.

You see, my claim is that I have invented Artificial Intelligence. My
definition of intelligence is the ability, through human language, to
learn simple facts and to subsequently pass a test based on these facts.

It took me about three and a half months to develop a program that could
perform the basic functions that would satisfy me as being the first
working example (of which I am aware) of true Artificial Intelligence.

The reason why I claim to be the inventor of AI is because I have tried
my darndest to find another working example of AI on the internet, but I
failed to find it. I would have even been satisfied with circumstantial
evidence of a non-web-based AI entity that was developed in some lab
somewhere, but I couldn’t even find that.

Obviously, however, the vast majority of my time has been occupied with
the actual development of an entity that could pass as “intelligent”
rather than doing research on who else is working on this subject.

At this point, I figure that the title, “Inventor of Artificial
Intelligence” is wide open. Again, by “Artificial Intelligence,” I
simply mean a computer program that can mimic an “understanding” of
natural human language. I do NOT mean some kind of chess playing
program or anything else of that nature. My definition is very strict
here.

It doesn’t take a genius to understand the “philosophical nutshell” that
is cracked once functioning human language processing programs are
unleashed upon the world. Hello, Terminator? Hello, The Matrix?
Hello, the entire field of literature known as “Science Fiction”?

Of course, I am not posting here simply to make a whole bunch of claims
that I cannot back up. To try out this thing for yourself, you will
need to go to http://geocities.com/ai_project_1 for further information and instructions.

==============================================

Edited on 1/18/07: Changed the website link to the current one at geocities.com/ai_project_1. Also, please send an email request to ai_project_1 at yahoo dot com, so I can send you the program as an attachment.

Artificial intelligence, this is definitely the place for it.

The creator of the simulated intelligence, or the perfected version of it, can only claim the invention once the vehicle for AI yields unexpected results that lead to unexpected uses with more value than the intended design. It must show the potential for producing and expanding on its own capabilities.

That’s the first time I’ve ever defined it that way, so I may be wrong, but after studying it for a while, this is my present conclusion.

That said, most likely someone like you, tinkering around in their spare time, will solve this major quandary.

I advise you to enter it in the next Turing-test tournament.

I’m pretty sure that AI only has implications in philosophy if it can mimic things like empathy and love and stuff.

You can define artificial intelligence any way you want, but unless your system can pass the Turing Test, it isn’t artificial intelligence.

Are artificially intelligent computers “sentient”? I don’t think we can know, because I’m not convinced it can be shown that humans are sentient. Manipulating language and learning from experience with language doesn’t mean there’s a “mind” in there, whether the “there” is made of silicon or flesh and blood…

Or, if we lose the ability to feel in a mechanical, desensitized world, the computer would automatically match the mood of man. Success!

I’m interested.

OK, I’m back.

It has taken me quite a bit of time (since I’ve done the vast majority of the work) to psychologically “heal up” and get ready to take this bull by the horns again.

My quandary is whether I want to release the code that I’ve written so other people can try this program directly on their own computers. The program is written in Perl, which means that you can see all of the code in plain text. This is different from most computer programs, which are compiled into “machine code” that isn’t human-readable. So, I’m slightly worried that someone might take apart the code, learn from it, and try to pass it off as their own.

At the moment, I’m thinking that I will email the program as an attachment to interested people rather than posting an anonymous link for all the world to download. Then, after it is established beyond a reasonable doubt that this is my work, I will be more comfortable releasing it to the world.

By the way, the link I posted in the first post of this thread is busted, so don’t even worry about that.

Rather than boring you with all of the details of the grammatical rules that the program follows, let me show you a little example of a “conversation” that I recently held with it.

==================================================

Me: Who do you know?
Computer: I do not know anyone.
Me: There is a crazy boy and a wild girl.
Computer: Okay…
Me: He is named Joe.
Computer: Confirm the name of the boy is: Joe.
Me: Yes.
Computer: Thank you.
Me: Her name is Sally.
Computer: Confirm the name of the girl is: Sally.
Me: Yes.
Computer: Thank you.
Me: There is a green pool.
Computer: Okay…
Me: Yesterday, Joe swam in it.
Computer: Okay…
Me: He did that because he is crazy.
Computer: Okay…
Me: There is a hilly park.
Computer: Okay…
Me: Sally ran in it.
Computer: Okay…
Me: She is 2 meters tall.
Computer: Okay…
Me: The boy is 5 feet tall.
Computer: Okay…
Me: Who do you know?
Computer: I know a boy named Joe and a girl named Sally.
Me: What is Joe like?
Computer: Joe is crazy.
Me: Why is he like that?
Computer: I do not know.
Me: Where is the boy?
Computer: As of 1-14-07 Joe was in the pool.
Me: What is he doing?
Computer: As of 1-14-07 Joe was swimming.
Me: Why did he swim?
Computer: Because Joe is crazy.
Me: Is Sally crazy?
Computer: No, Sally is not crazy.
Me: She is wild.
Computer: Yes, I know.
Me: Where is she?
Computer: As of 3 minutes ago Sally was in the park.
Me: Is Joe taller than Sally?
Computer: No.
Me: What is the park like?
Computer: It is hilly.
Me: Where is it?
Computer: I do not know the location of the park.

================================================

This kind of simple logical ability is exactly what I want to build upon as I continue this project. I am interested in finding others out there who are interested in this concept, and who would like to take part in it. You may have heard of something called “chatbots”, but this thing is far from being anything like that. Chatbots are filled with nothing but pre-programmed responses: they have no ability to “learn” in the way that my program does.
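To make the contrast with chatbots concrete, here is a minimal sketch, in Python rather than the author’s Perl (which I have not seen), of what “learning” means in the transcript above: parsed facts get stored, and later questions are answered from that store rather than from canned responses. The class and method names are my own placeholders.

```python
# Toy illustration of learn-then-answer behavior, loosely modeled on the
# transcript above. This is NOT dkane75's program -- just a sketch of the
# idea that "learning" means storing parsed facts and querying them later.

class FactStore:
    def __init__(self):
        # name -> {"kind": noun category, "trait": adjective}
        self.people = {}

    def learn(self, name, kind, trait):
        """Register a person with a noun category and an adjective trait."""
        self.people[name] = {"kind": kind, "trait": trait}

    def who_do_you_know(self):
        if not self.people:
            return "I do not know anyone."
        parts = [f"a {p['kind']} named {n}" for n, p in self.people.items()]
        return "I know " + " and ".join(parts) + "."

    def what_is_like(self, name):
        p = self.people.get(name)
        return f"{name} is {p['trait']}." if p else "I do not know."

kb = FactStore()
kb.learn("Joe", "boy", "crazy")
kb.learn("Sally", "girl", "wild")
print(kb.who_do_you_know())    # I know a boy named Joe and a girl named Sally.
print(kb.what_is_like("Joe"))  # Joe is crazy.
```

A chatbot with pre-programmed responses has no such store to update, which is the whole difference being claimed here.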

The program is built to run on Linux. If, by some small chance, you are running Linux, then you’re ready to go. Even on Mac OS X it should be pretty simple to get it up and running, because it’s based upon the same Unix-style system. Most of you Windoze users will need to download something called the “Cygwin environment”, which is available at http://cygwin.com. All you have to do is make sure the necessary packages are installed. I’m pretty sure that the perl package is the only one you’ll need that isn’t installed by default.
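Once you have a working Perl, the setup amounts to two commands. The file name ai.pl below is my own placeholder, since the actual name of the attached script isn’t given:

```shell
# Under Linux, OS X, or Cygwin: verify Perl is installed, then run the script.
# "ai.pl" is a placeholder; use whatever the emailed attachment is called.
perl -v        # prints the installed Perl version
perl ai.pl     # launches the program
```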

I appreciate all of the feedback that I can get. Who knows, if any of you are helpful enough, I would have no problem sharing the credit!

I have set up an email address specially for this project for you to contact me with: ai_project_1 at yahoo dot com. You can also PM me if you want, but I will still need an email address to send you the program.

So, let me know if you want to give this thing a try…

Check out some of John Searle’s work on strong and weak AI. Check out his Chinese Room thought experiment. http://en.wikipedia.org/wiki/Chinese_room

Syntax is not sufficient for Semantics :sunglasses:

I do not know whether to respond to your dialogue or to your First Metaphysics, which I have just read. Let me just keep it brief, then, and compliment you on both. I would like to ask you to take a look at this thread:

ilovephilosophy.com/phpbb/vi … 63#1858763

In my last post there I relate quite elaborately to the difference between existential ‘I-world unity’ and scientific temporality, in very different terms but, I believe, from a rather similar perspective, as unlikely as that may seem. I strongly relate to Zen in my thinking and have found in Heidegger, after studying Nietzsche for a long time, a way of intellectually reestablishing the ‘enduring I.’
About the other thing: I’d be very interested in partaking in your project, and would quite possibly be inclined to invest much time in such an endeavor.

I know all about the Turing test. My whole problem with it is that it concentrates upon the idea of “fooling someone” rather than logical correctness. In fact, there is only one Turing test competition held yearly: the Loebner Prize, sponsored by a guy named Hugh Loebner. The website for it is: http://www.loebner.net/Prizef/loebner-prize.html. All you have to do is look at the transcripts of the “winners” to see the quality of work that is coming to us in the name of Mr. Turing.

I’m not interested in the slightest in fooling anyone into thinking that my program is really a human. I am only interested in writing a program that can parse and register everyday human logic, and can pass a test based on what it has “learned”. To me, the Turing test is more of a question of the intelligence of the judges than of the computer programs themselves.

All of my work is based upon the notion that human language is based upon a set of rigid rules. There is no reason why these rules cannot be programmed into a computer. If a computer program is able to successfully follow these rules, then to me, it passes as “Artificial Intelligence”.
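To illustrate what encoding one such rigid rule might look like (my own sketch in Python, not the author’s Perl), take the “There is a &lt;adjective&gt; &lt;noun&gt;.” pattern from the transcript: the adjective and noun sit in fixed positions, so a single regular expression can extract them deterministically.

```python
import re

# Hypothetical example of one rigid grammar rule encoded as a pattern.
# Sentences like "There is a crazy boy." carry an adjective and a noun
# in fixed positions, so a regex can extract them deterministically.
RULE = re.compile(r"^There is an? (\w+) (\w+)\.$")

def parse_there_is(sentence):
    """Return (adjective, noun) if the sentence matches the rule, else None."""
    m = RULE.match(sentence)
    return (m.group(1), m.group(2)) if m else None

print(parse_there_is("There is a crazy boy."))   # ('crazy', 'boy')
print(parse_there_is("There is a hilly park."))  # ('hilly', 'park')
print(parse_there_is("Joe swam in it."))         # None
```

A real parser needs many such rules plus ways to resolve pronouns like “he” and “it”, but this is the basic move: a fixed rule, followed mechanically.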

I have no interest in machines that “seem” emotional, or anything like that. Much less do I have any interest in the question of whether machines will one day ever actually have “consciousness”. Questions like that are all jokes. That recent Spielberg movie, to me, has only done a disservice to the everyday project of writing computer programs that can effectively parse and register human logic, and then pass a test.

D, your approach to AI is an ontic one, within the temporal framework of cause and effect. What do you think of the suggestion of an ontologically based, transcendental AI? This would amount to direct consciousness of, and response to, physical immediacy on top of or instead of logical induction. You have already said that you do not think computers can become conscious - but after reading your paper, I am inclined to ask this. - M

dkane75, Did you read up on Searle and his Chinese Room? What do you make of it? Here is a short summary of what it concludes:

"The conclusion of this argument is that running a program cannot create understanding. The wider argument includes the claim that the thought experiment shows more generally that one cannot get semantics (meaning) from syntax (formal symbol manipulation).

The core of Searle’s argument is the distinction between syntax and semantics. The room is able to shuffle characters according to the rule book. That is, the room’s behaviour can be described as following syntactical rules. But in Searle’s account it does not know the meaning of what it has done; that is, it has no semantic content. The characters do not even count as symbols because they are not interpreted at any stage of the process."

This is your program, dkane75. :sunglasses: Did anyone else read what I posted?

I’ve read the Chinese Room argument before; it makes sense. But I’ve also heard some responses to it; one of them is that human beings aren’t too different from that. We basically just respond to stimuli, and almost blindly walk through life. Of course, the response to that is that we do understand things; but do we really? We don’t understand gravity, the most basic force out there. We don’t understand quantum mechanics. And we still believe that we have free will. What DKane made just seems like a human intellect; it gains knowledge, and then goes through situations using the knowledge that it’s gained.

Perhaps it can be argued that humans can only manipulate symbols and not really have semantic content. But I wanted to make the point that dkane’s AI is completely founded on the languages that he wrote it in. It is just manipulation. Not real intelligence or AI. It is weak AI. Syntactic symbol manipulation. 101101001010110101001

You said that his AI was “completely founded on the languages that he wrote it in” and that you couldn’t really call it an intelligence because of that. I’m just trying to clear something up in my pile of a head by you answering this:
When you refer to a language, do you mean a programming language, or a language-language (English, Chinese, etc.)?

dkane? What do ya think about some of what I have said?

Bdhanes, that question about language was meant for you.

I was referring to the programming language he wrote it in…

oops… my boss walked in… I gotta act like I’m working, I’ll post as soon as I can.

What Searle was saying is that AI is impossible. But what if you created a program whose code is set up to completely model itself after the human brain, so that it acted just like we do? Would that count as AI?