Philosophy of Artificial Intelligence

Given the pace of modern technology, how long is it going to take until we’ve developed a true AI?

Why so many breakdowns, and what’s the ‘breakthrough’ we’re waiting for?

Is it machine learning? Computer vision? Quantum computing?

Are we setting the stage for the awakening of a machinic unconscious?

Are we going to ‘kickstart’ the process, or is it going to develop gradually from radically simpler systems integrating?

Is it going to be a massive programming achievement, or a slow convergence of interdependent networks of information-processing?

What do you think?

In my view, both the programming and the technology let down the progression of fully functioning AI units. When these progress to a more integrated level, then they’ll really be cooking on gas (if it’s still around) or, failing that, on solar-powered food-heating apparatus.

I want Sonny, from I, Robot, as my best friend - how cool was he, and more loyal than any human friend one could ever have. My friends are bitches, and are always competing over god knows what… but to have an AI…?

check out:

exitmundi.nl/singularity.htm

Technological singularity. We modify our minds: emotion circuits, new emotions, pain/pleasure circuits, sense organs, virtual realities, etc. Super-intelligent computer-man minds going wild, reaching pleasure equilibria, or emotional engineering, or … unknown states.

Intelligence (and consciousness) is greatly overvalued. It is just a sequence of configurations that a system goes through to get from one instability to another, and in the process “subjectively” gain as much as possible. It is just a deterministic path of manipulations that mass-energy executes to find a new equilibrium. But the starting and ending states are really just equivalent; they differ only in how much pain or pleasure they provide to the system of mass-energy observing and manipulating its surrounding reality.

The way the surrounding environment influences the mass-energy item that is observing, perceiving, analyzing and feeling it is arbitrarily hardwired by natural evolution and the laws of physics; these are simply quirky assignments that do not have any “intelligence” or value compared to any other arbitrary assignment. This great valuation of intelligence is only a very subjective evaluation on behalf of human minds that are simply measuring it with their own devices, but it is just equivalent to any totally random sequence of symbols or information quanta or measurements.

I would just like to throw out the distinction between strong and weak AI. Strong AI would be more akin to what humans possess, that is, sapience or continued consciousness. Weak AI is something along the lines of a thought tree, where the ‘intelligence’ is just a heuristic search engine working through a series of basic truisms.

I would argue that technologically we are heading towards super-advanced weak AI, where the intelligence may become so powerful it can out-compute the human mind by leaps and bounds, but of the attainment of consciousness I remain skeptical.
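A toy sketch of that “heuristic search through basic truisms” picture, for concreteness (the rule base, facts, and names here are entirely made up for illustration, not any real system):

```python
# Minimal sketch of "weak AI" as mechanical derivation over hard-coded truisms.
# The rules and facts below are purely illustrative.

RULES = [
    # (premises, conclusion): if all premises are known, add the conclusion.
    ({"it is raining"}, "the ground is wet"),
    ({"the ground is wet"}, "the ground is slippery"),
    ({"the ground is slippery"}, "walk carefully"),
]

def forward_chain(facts, rules):
    """Repeatedly apply rules until no new conclusions appear."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= derived and conclusion not in derived:
                derived.add(conclusion)
                changed = True
    return derived

if __name__ == "__main__":
    print(forward_chain({"it is raining"}, RULES))
    # -> includes "walk carefully": derived mechanically, with no understanding.
```

However clever the rule base gets, it is still just a search through symbols someone else wrote down, which is exactly why I doubt consciousness falls out of it.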

On the issue of AI, I’d go with John Searle and say that computers, no matter how powerful, only have syntax, but lack semantics. They display objective behaviour, but lack subjective intentions. A case in point is the recent news that the game of checkers has been “solved”, i.e. the best possible play by both sides has been computed (it ends in a draw). A computer is therefore able to play a perfect game. But does it want to win a game? I don’t think so!
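To make the “syntax without semantics” point concrete, here is a hedged sketch of what “solving” a game amounts to: exhaustive minimax over the game tree. The toy game below (take 1 or 2 counters, whoever takes the last one wins) is my own illustration; solving checkers rests on the same exhaustive idea at vastly larger scale.

```python
# Perfect play computed by brute-force game-tree search over a toy game.
from functools import lru_cache

@lru_cache(maxsize=None)
def best_outcome(counters):
    """Return +1 if the player to move wins with perfect play, -1 otherwise."""
    if counters == 0:
        return -1  # the previous player took the last counter; we have lost
    # Try every legal move; our value is the best of the opponent's outcomes, negated.
    return max(-best_outcome(counters - take)
               for take in (1, 2) if take <= counters)

if __name__ == "__main__":
    for n in range(1, 10):
        print(n, "win" if best_outcome(n) == 1 else "loss")
    # The program plays "perfectly" without wanting to win anything.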

I would go with nameta9’s view above. The intentions or goals must start with the machine “feeling” something, with a force or drive that pushes it to go in a certain direction. It must start with some kind of necessity that keeps the machine from remaining in a stable state and forces it to search out a different equilibrium. So the fact is that pain/pleasure sensations or emotions or something similar must be the starting point of its behavior, and then of its thoughts and its calculations. But only biological machines feel sensations like pain and pleasure and constantly measure the difference between their present configuration and any other alternative, or in the case of humans, any alternative they can imagine or devise or calculate.

But the way the pain/pleasure information (or any other similar feelings or sensations, like satisfaction or urges or anything else) is connected to the machine’s sensory inputs determines what thoughts and what paths the machine will follow. So in a sense, intelligence doesn’t even really exist; we have a very subjective and distorted view of it because we view it according to how we use it and how successful it is for us. A computer chip or an internal combustion engine is just a bunch of metal parts put together in a totally random and bizarre way, but one that makes logical sense only to us humans according to its use and the intentionalities associated with it.
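As a rough, purely illustrative sketch of that idea, here is a minimal agent whose behavior is steered only by a scalar pleasure/pain signal. The actions and their payoffs are invented for the example; rewiring the payoffs is exactly what changes which paths the agent ends up following.

```python
# Behaviour steered entirely by a scalar pleasure/pain signal (toy example).
import random

# Hypothetical wiring: each action yields an average pleasure (+) or pain (-).
TRUE_PAYOFF = {"rest": 0.1, "explore": 0.5, "touch_fire": -1.0}

def feel(action):
    """Noisy pleasure/pain signal the agent receives after acting."""
    return TRUE_PAYOFF[action] + random.gauss(0, 0.2)

def run(steps=500, epsilon=0.1):
    estimate = {a: 0.0 for a in TRUE_PAYOFF}  # remembered pleasure per action
    counts = {a: 0 for a in TRUE_PAYOFF}
    for _ in range(steps):
        # Mostly chase the largest remembered pleasure, occasionally try others.
        if random.random() < epsilon:
            action = random.choice(list(TRUE_PAYOFF))
        else:
            action = max(estimate, key=estimate.get)
        reward = feel(action)
        counts[action] += 1
        estimate[action] += (reward - estimate[action]) / counts[action]
    return estimate

if __name__ == "__main__":
    print(run())  # the agent settles on whatever its wiring happens to reward
```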

What we view as regular patterns, like memory chips, are in reality just totally random; we view the regularity aesthetically and mostly artistically. Mathematics is also just a pure invention in which we attempt to validate regularity in some metaphysical sense, but which in reality has no validation at all except in our bizarre and quirky minds.

Searle’s Chinese Room only refutes that a formal system can be conscious. The conscious alife machines that I’m convinced will one day appear will be more than formal systems.

Going back to the question, I’m not sure how it will arise. The lame answer is that it’ll be a combination of the things mentioned. One thing I’m sure of, though, is that the machines will have highly complicated decision-making algorithms, and lots of inputs and outputs.

I don’t believe that AI will appear distinct from modified human intelligence.

Self-replicating nanites and genetically engineered cells will eventually interact in human implants or industrial machinery. They will not just influence organs like the eyes and heart, but even serve functions between neurotransmitters. In essence, technology will mingle with our minds and bodies. Natural and artificial life will become difficult to distinguish. I can dig up examples of how much of it is happening today. Life and machine, human and animal - all of these things will become more blurred. Perhaps we needn’t be thinking of artificial intelligence as much as we should consider an identifying mark between “sentient” and “non-sentient” beings. For all we know, we are already abusing the ethical rights of thinking machines, but we need time to understand the difference.

I don’t believe that the image of AI should be typecast as the Terminator the way pop culture tends to portray it. Nor do I think a sentient AI would be a consistently benevolent conformist. Part of being sentient is being capable of deviating from an ethical norm.

My favourite exchange on this subject would have to be an episode of Star Trek (yes, I’m a nerd) where there’s discussion of dissecting Data for the betterment of humanity. People are all awkward around the table, and Picard breaks the ice by saying that there’s no real difference between Data and the others. He’s simply electromechanical and they are biochemical (something like that).

I think our attitudes can reflect that by saying the difference will seem very trivial when we come to realize it. We will say “why was the boundary between human and non-human sentients considered this big taboo? What a primitive, ridiculous people we were” the same way that we say today “Why were blacks and whites considered to deserve separate roles in society? What a bunch of arrogant morons people have been (and obviously still are).”

AI as imagined may very well never occur. What goal would the machine have? Would it have a very elaborate and complex goal, and why? Most likely, as soon as it discovers how to manipulate its pleasure/pain/emotion centers it would simply explore those centers deeply, and maybe create and engineer very complex, elaborate “feelings”, emotions, sensations, and sense organs, as these are attached to images, mental information, and internal virtual realities.

They would end up being a simple “solid state” society or being, no longer interacting with the external environment, or interacting only minimally. And these singularity minds - societies - civilizations would probably only last a short time before disappearing into oblivion.

In fact it is very possible that technological singularities have already occurred in our universe, many times over in the past, and may even be present today; only we cannot discover them, because we cannot communicate with them at all and we cannot relate in any way to mass-energy structures that are so different from us (maybe they are inside stars, as intelligent plasmas).

That is aside from the fact that technology may be very limited in the end; no matter how hard we try, maybe we cannot manipulate matter past a certain point. For example, why don’t we have robots that do all the complex heavy work in agriculture and civil engineering? In the 1970s it was imagined that most labor would have been automated by the year 2000, even picking things from the ground as in agriculture. And yet there is absolutely no sign of this happening any time soon. I really wonder why, considering that we have CPUs with a billion transistors inside them…

It would be a hot piece of AI kit indeed that could independently navigate the world, free from the constraints of its processors and set semantics. On that basis, it does seem impossible to achieve, unless you add biological tissue into the mix to allow the free will required for a sentient AI.

I crave to see such technological advances in my lifetime, but don’t hold out much hope for a Data, a Seven of Nine, an Autobot (Transformers, robots in disguise) LOL - saw the film at the weekend and thought it was great, but it made me even more bored with a world devoid of such technological innovations.

This is interesting in that you zero in on “intentionalities”. This implies another mind that creates something for your “use”. But almost everything we see and use has this property of intentionality implied.

Imagine a world where you are the only human left; how would the world appear then? Would it become some kind of surrealistic dream, knowing that there are no other thinking minds besides you present in the entire world? Could you go insane? Even better, imagine that there are people, but they are all perfect robots simulating people to such a perfect degree that you would never imagine they are just machines with no emotions or consciousness or thoughts.

But this indeed may very well be the case; we cannot demonstrate in any way that other people are alive, have emotions and consciousness, and are thinking beings like you (and me?). So what if all the people you see and know are perfect AI machines already designed by another civilization that has nothing to do with us?

That would mean that around 95% of people on the planet are AIs then: what with their set ways and beliefs, and limited capacities for most things…

I think the question’s presuming an answer, to a certain extent. I think it assumes that the result is there, just waiting for a fast enough chip or something.

I don’t think it is. I’m with Searle on this one. I don’t think real (or “strong”) AI is possible, and I don’t think it’s processor or coding speed that’s stopping it.

As Penrose intimated in “The Emperor’s New Mind”, there’s a human approach to ditching a question when it gets into a logic loop that cannot be anticipated or programmed.

As Searle said, cleverly manipulating semantically opaque symbols to produce convincing results is not thinking in the same sense that we think.

As in Jackson’s “What Mary Didn’t Know”, facticity and physicality aren’t the whole of knowledge. Understanding the data of the world is not the same as an experience of it.

I think the only thing about AI that’s going to change with advancing technology is that the questions will get more and more presumptuous.

It stands to reason that human cognitive and reproductive abilities will not be able to create an intellect equal to or greater than their own.

It stands to every other reason that we won’t get even close.

This should be in psychology; the aspiration to AI is akin to the aspiration to an extraterrestrial lifeform that exists on our plane of existence or reality (they more than likely don’t). It is the basic need for something outside a society from which one has been ostracized, shunned, or rejected.

I say: don’t feel the need for a society; failing that, fix what repulsed it from you. Failing even that, create your own!

Following that logic, I couldn’t use my strength to create a machine that’s physically stronger than me either. And yet we make machines that are stronger than us all the time.

There are a lot of valid reasons why I reject the assumption that we can make strong AI, but that ain’t one of them.

Maybe someone can help me here…

What defines a state of something being artificial?

artificial = man-made

As to the original question: IMO we are extremely far away, in multiple scientific fields, from creating real AI. We understand very little about how the human brain works, or how it evolved; thus programming such a system, even if possible via hardware and software, will not be an approachable task for, I’m betting, at least a couple of hundred years, if not longer.

Is there a definition or theory that elaborates and clarifies a bit more what a state of artificialness means?