Could ARTIFICIAL LIFE possibly see more truth than we can?

This is for everybody but specifically for…

Those of you who believe that truth can be known logically and assigned abstract values mathematically: isn't it then possible to make an artificial mind which could deduce the same truth as you?

If our senses could be improved upon through an artificial lifeform, isn't it then possible that this lifeform could know more truth than we do?

If not, then would that be an indication that logic is not an absolute foundation of truth?

With the same basic logical deduction capabilities, which artificial life form would have a more valid opinion on the nature of truth? Conjecture…

-one that was designed to be increasingly self-aware?

-one that was designed to analyze the out-side world?

-one that was designed to serve and protect others?

-one that was designed to evolve beyond you?

Any of these could be designed with the intention to do harm or good to people, but I'm just trying to get a philosopher's point of view as to which could really exceed its original capabilities based on what each one is striving for with a simple logical mind.

Personally, I think that artificial intelligence will find some of its most inspiring insights when it is forced into a strategic situation that requires it to both cooperate and compete with other AI.
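To make that concrete, here's a minimal sketch of the kind of situation I mean, borrowing the iterated prisoner's dilemma. The payoffs and strategies below are just my own illustrative choices, not a claim about how real AI would behave:

```python
# Minimal iterated prisoner's dilemma: two simple agents that must
# both cooperate and compete over repeated rounds.

PAYOFFS = {  # (my_move, their_move) -> my score
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

def tit_for_tat(history):
    """Cooperate first, then copy the opponent's last move."""
    return "C" if not history else history[-1]

def always_defect(history):
    return "D"

def play(strategy_a, strategy_b, rounds=10):
    hist_a, hist_b = [], []  # each side's record of the opponent's moves
    score_a = score_b = 0
    for _ in range(rounds):
        move_a = strategy_a(hist_a)
        move_b = strategy_b(hist_b)
        score_a += PAYOFFS[(move_a, move_b)]
        score_b += PAYOFFS[(move_b, move_a)]
        hist_a.append(move_b)
        hist_b.append(move_a)
    return score_a, score_b

print(play(tit_for_tat, always_defect))  # (9, 14) over 10 rounds
```

Even in this toy, the interesting behaviour (retaliation, forgiveness, exploitation) emerges from the mix of cooperation and competition, not from either alone.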

I really don’t see the logic of the created evolving beyond the creator. It just doesn’t make sense :astonished:

AI cannot think creatively like humans do. They can only think logically and chronologically. Given 1 + 1, an AI immediately calculates that it's 2. Humans, on the other hand, may think outside the box. Could 1 + 1 mean 11? Or maybe even other things (which I could not think of right now :stuck_out_tongue:)? AI has only one perspective when it looks at things, while humans can see things from multiple perspectives.
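For fun, here are the two readings side by side; this is just ordinary Python, nothing deep, but it shows the same two symbols giving different answers depending on how you choose to interpret them:

```python
print(1 + 1)      # 2  -- the arithmetic, "one perspective" reading
print("1" + "1")  # 11 -- the same symbols read as text and concatenated
```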

I see no reason why it couldn’t exceed us and have a higher capacity for understanding than we do. Certainly nothing we can build now is likely to do so, but who knows what advances will come with time?

I think a machine could exceed its creator as easily as a child can exceed its parent. There’s no contradiction there.

Americano,

We are Artificial Life, created by the technologies of matter.

Dunamis

You may like to look at this link:

ilovephilosophy.com/phpbb/vi … p?t=142298

Something similar is proposed there, and the problem you are asking is much larger than you think…

It does seem that we can create artificial lifeforms that could at least mimic us, and yes, exceed us in many ways, but due to Gödel's Incompleteness theorem we know that for any finite machine we create there will ultimately be some truth that the machine cannot understand, yet we can.
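For anyone who wants the formal kernel of that claim, a rough sketch; note that whether this really limits machines more than humans is exactly what's contested in the Lucas-Penrose debate:

```latex
% For a consistent formal system F strong enough for arithmetic,
% Godel constructs a sentence G_F that "says of itself" it is unprovable:
\[
  G_F \;\leftrightarrow\; \neg\,\mathrm{Prov}_F\!\left(\ulcorner G_F \urcorner\right)
\]
% If F is consistent, F cannot prove G_F; yet from outside F we can
% see that G_F is true. The contested step is treating a machine as a
% fixed F while exempting ourselves from the same limitation.
```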

Machines already exhibit mental predicates.

Consider when you play a chess program: you ascribe mental predicates to it, e.g. “it's thinking that if I move here then it will move there to attack my bishop”.

This may not be an advanced mental predicate, but they perform other simple ones too:

“I calculate that x + y = z”

Again, a mental function. We ascribe mental predicates in this way according to Dennett's Intentional Stance, so in reality A.I. will only differ in scale, not qualitatively, just as we, in my opinion, differ from animals merely in scale, not qualitatively.
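To show how mechanical the “it's thinking if I move here then it will move there” predicate really is, here is a minimal lookahead sketch. It's a generic minimax with a made-up toy game for the demo; no real chess engine is this simple, but the shape of the “thinking” is the same:

```python
# Generic minimax lookahead: the mechanical core of the "thinking"
# we ascribe to a chess program. The game itself is abstracted into
# three functions supplied by the caller.

def minimax(state, depth, maximizing, moves, apply_move, evaluate):
    """Best achievable score assuming the opponent also plays its
    best reply: "if I move here, then it will move there"."""
    if depth == 0 or not moves(state):
        return evaluate(state)
    scores = (minimax(apply_move(state, m), depth - 1, not maximizing,
                      moves, apply_move, evaluate)
              for m in moves(state))
    return max(scores) if maximizing else min(scores)

# Toy demo: the "state" is a number, each player adds 1 or 2,
# the maximizer wants the final number high, the minimizer low.
best = minimax(0, 3, True,
               moves=lambda s: [1, 2],
               apply_move=lambda s, m: s + m,
               evaluate=lambda s: s)
print(best)  # 5: max adds 2, min adds 1, max adds 2
```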

Rounder: your use of Gödel was…well, great.

Americano: There is a book that deals exactly with this question if you are interested. It's called Boomeritis and it's written by Ken Wilber.
Anyhow, Wilber was a student at MIT working with AI. He later became a co-founder/editor of “What Is Enlightenment” and these sorts of publications. Most philosophy people believe he is new-agey, but I think that's too loaded of a bias. The man is logical and bright; he brings real ideas to the table. This book especially talks about computers reaching enlightenment before humans and then drawing humans into enlightenment (instead of Matrix warfare, etc.).
The military is spending a whole lotta money on war robots. What if they become “conscious”?

I wonder if consciousness necessarily includes emotion…or let's say compassion?
Thanks

If you mean the Cartesian sense of consciousness, this could prove difficult since it doesn't exist.

If you mean conscious as in performing mental functions:

I.e. I got knocked out, I was unconscious: I ceased mental function.
I woke up, I was conscious: I resumed mental function.

Then this has already happened. I mean, check it out: “Yuxia calculates that 3 x 4 = 12” - “My computer calculates that 3 x 4 = 12”. We are both performing calculations. Okay, this is not a particularly advanced predicate, not like, say, philosophizing :wink: but it is nonetheless a mental one.

Yuxia,

"Then this is already happened, I mean check it out “Yuxia calculates that 3 x 4 = 12” - “My computer calculates that 3 x 4 = 12” we are both performing calculations, okay this is not a paticularly advanced predicate not like say philosophizing but it is none the less a mental one.

Excellent point. In 1997 Deep Blue beat reigning World Chess Champion Kasparov in a well-reported match, and in fact beat him soundly. (Since then there have been various draws between humans and other programs.) This instance certainly would be “artificial life” seeing more of one kind of truth than the best that human beings could offer at that time. It certainly “saw” more “truth” than any of his/her/its chess-playing programmers could, each of whom would not stand a chance against him/her/it.

Dunamis

I suspect that Alexis was talking about consciousness as sentience. Qualia and all that jazz. :sunglasses:

Those who might be looking for a lighter approach to this question would probably enjoy reading Prey by Michael Crichton. It presents an interesting take on artificial life, the evolution of replicators and intelligence.

All we perceive are just electrical impulses, I see no reason why we can’t design something to perceive them better.

Sentience is slightly different again: being aware of one's environment, as it were.

Really I think consciousness should just be the ability to perform a bundle of mental predicates, but I think many of them would require behaviour and action. A simple text-based computer is not going to be able to convince you that, for example, it is happy.

If a program flashed up “I am happy today Yuxia” then I am not going to really believe that my computer is happy.

If a robot came in jumping around going “wahoo” and “yay” and I say “What's up, robot?” and he says “I am happy today Yuxia”, I am more likely to believe that he is happy, and possibly unstable :wink:.

But you see the difference: to ascribe some more advanced mental predicates we must introduce a theory of rationality, namely one that includes action and the associated beliefs and desires etc. in order to understand the being.

xanderman, you are right on.

I've just done a little research on qualia but I'm kinda too busy (I'm trying to learn phenomenalism and semiotics) to get into it deeply. They are the “intrinsic properties of subjective experiences” to some, and the “qualitative content of mental states” (Dennett) to others.
Can you elaborate on your understanding of qualia? I'd appreciate it, and it's pertinent to this thread.

To everyone: are we slip-sliding into a materialism…or really epiphenomenalism? If so…what's to stop the rampant nihilism?
This is something the book Boomeritis addresses directly. Hence “Boomer” refers to the boomer generation and “itis” is the disease of nihilism that is predatory to enlightenment (evolution of consciousness; yes, consciousness…that dirty c-word).
Thanks

Qualia are what Cartesian-style philosophers call the raw feels, or what it is like to be: e.g. the feel of the wood, the feel of the pain, the sound of the car, etc.

There are traditionally 4 (or sometimes 5) categories:

(1) Sense perceptions, e.g. the taste of Marmite, the sight of green. (These are essentially sense data.)

(2) Bodily feelings, e.g. hunger, having a headache, orgasm, etc.

(3) Dispositional moods, e.g. depression, happiness, optimism, etc.

(4) Feelings, e.g. excitement, joy, anger, etc.

[Sometimes included]

(5) External morality, e.g. feeling righteous, feeling responsible.

It is very important to note that Dennett is not a Cartesian philosopher; he is very strongly in the physicalist vein, and only introduces the Intentional Stance to understand mental creatures, not to explain what they actually are.

I don't know about you guys, but electrical impulses aren't all I perceive. I'm not even sure what one looks/sounds/tastes like at all.
We can already make machines that ‘figure out’* things better than we do. I don't think any of them are artificial consciousness, and I'm not convinced that we could create such a thing, at least not using the sorts of technology we play with today. No computer will become artificially intelligent just by giving it more and more RAM and more and more sophisticated behavior-simulating software.

*- I use the term ‘figure out’ loosely, as I don't think computers engage in any actual thought process. It would be better to say that we have machines that allow us to figure out things better than we could without them.

Ucc.,

“and more and more sophisticated behavior-simulating software.”

What distinguishes consciousness from its behavior?

Dunamis

There are a lot of ways to word it. Awareness of the meaning of the behavior is one; “X is such that there is such a thing as ‘what it's like to be X’” is another I've heard. I'm thinking of Searle's Chinese Room Argument here. Let me know if I should explain it.
Unless you're counting understanding as behavior in your question?

Dunamis

As I understand it, conscious behavior is an “action” of “will” while unconscious behavior is the “reaction” from desires. It’s very easy to confuse desire and will.

Ucc (and Nick),

Perhaps I didn’t ask my question accurately. (From the outside), what is the difference between consciousness and behavior? Since whatever (artificial) intelligence we create must be judged “from the outside”, what is the difference?

Dunamis