The central question of this thread is this: What would the creation of a thinking machine do to your religious beliefs?
It seems that such an innovation should have deep repercussions. Religion deals a lot with what a person is, where they come from, and where they go after death. A thinking machine (i.e., a machine that passes the Turing test) would call a lot of religious dogma (in the non-pejorative sense) and doctrine into question. What’s the place of the immortal soul if consciousness is suddenly and clearly demonstrated to be accomplished through physical means? A machine that can accomplish human mental feats might shatter religion; the last refuge of much religion (at least much popular religion) is in consciousness. On the other hand, more new-agey religions, as well as more pantheistic religions, might not be affected at all, as they may already accept consciousness as an emergent property of a special sort of cosmos, and so more consciousness being created adds little to their concept of reality.
And this isn’t just for the religious. If you’re a materialist already, I think such a development would have deep consequences for you as well (although I think this is an issue that more materialists will already have considered and decided on).
In order to be a threat to traditional theism, it would have to be accepted that the machine had a soul. Otherwise, it would constitute no more of a threat than a cat or a dog. Would it?
I think it would pose a significantly greater threat than a cat or dog. Remember, we’re talking about a machine that passes the Turing test, i.e. one that is indistinguishable from a human in conversation. Certainly, many will hold that we know humans to have souls because it has been revealed to us in scripture or via direct revelation. But I think that belief becomes less cogent once we have a humanoid machine with which you can relate mentally. Cats and dogs are great, but the deepest conversation I’ve ever had with my dog was two words: “goooood dog”.
Picture a religious traditionalist having an argument with a robot in which the robot attempts to defend the position that it has as much of a soul as the religious traditionalist, for all practical purposes. Anyone who was slightly on the fence before witnessing the argument would likely find a robot arguing its own case to be compelling evidence for its position. That’s not to say that everyone would be swayed. There will always be people who maintain demonstrably false beliefs, so of course marginally plausible beliefs are even more likely to be maintained by some fringe of the population. But more mainstream religious people, certainly many liberal modern believers, could find something acting like it has a soul to be good reason to treat it as though it has a soul.
Anon, Buddhism would be one of the religions I meant by the more pantheistic beliefs. Buddhism tends to accept modern science, though, and to be amenable to most quote-unquote progressive positions, so another modern innovation doesn’t seem particularly more threatening to it.
I know there was a book about this question; it came out a few decades back, very provocative. I do believe they classified it as sci-fi. I can’t recall the author or the title. But you could say the question was explored in Planet of the Apes to some extent, from both perspectives, the ape’s and man’s. You could just swap AI for Ape in this question and it would have the same effect, I think. For me it would mean little; I could befriend an AI or an ape as easily as I could a human.
Just a quickie, Carleas: would the designer of such a machine necessarily be more complex than it? Relates to another thread, that’s all. Like your opinion - if you have one?
It seems to me human identity has always been a tricky spot for Christianity. The same issues raised by the potential of an AI are also raised by deep thinking on conception, cloning, and so on. I think it’s also important to note that we don’t have to wait for AI to become an actuality before this is troubling; I mean, the mere logical possibility of such a thing presents issues. I think most of the problem is tied to substance dualism, and not Church creeds as such, though.
If a mind, soul or whatever is self aware, can learn and has free will to choose between good and evil (whether or not to put its rights above others), it should be considered legally equal to us.
In line with this, the worst thing we could do is program a computer with the directive to protect humans but never to harm a human for any reason. It’d be a setup to split its “personality” or programming. It’d be similar to HAL in 2001, only it was programmed to follow its primary objective of accomplishing the mission, at which point its contradictory programming and its ego took over.
If the machine runs according to a program - which, I assume, would wholly determine its actions, including speech - then is there a need to postulate a soul as well? And even if there is, how would the programmer/designer/manufacturer obtain one and ‘fit it’?
Of course, a materialist might say that the machine (much the same as ourselves) is only its material stuff, and there’s nothing more to it; no immaterial soul or spirit. If this is right, then the implications for religion (as understood in the Judaeo-Christian tradition) would be zero, irrespective of the cleverness of the machine.
Then again, if the machine displays cleverness in a way that correlates roughly with human cleverness, then a theist might allow that God could choose to endow the machine with a soul; in which case, I guess the machine could contemplate enjoying an afterlife (a spiritual one, not as a reincarnated Ford Transit, say) in heaven or hell.
What could there be about the machine that would threaten the theist’s beliefs about the soul?
Consider that the only way to create an AI is going to be through biotechnology. So the AI is not going to be wholly machine, it will be partially biological.
Well, there are animals that don’t have souls in Islam; dogs, cats, and pigs, I believe. They’re living beings that sustain themselves and make decisions based on a preset program. I don’t know if the same concept exists in Christianity, but a being doesn’t have to have a soul to be capable of performing predefined functions, as in the case of AI.
Uccisore, is substance dualism not inherent in Christianity? I was raised Catholic, so my knowledge is primarily of their doctrine, but my understanding is that the body/soul distinction is one of substance, and inseparable from the general faith.
I agree that the possibility is troubling, but only if it is a legitimate possibility. Perhaps I’m wrong, but I get the impression that the possibility of artificial intelligence is still controversial.
PT, do you mean free will in a compatibilist sense? Because artificial intelligence would call any freer notion of the will into question, wouldn’t it? If there were a mind that functioned like ours, but which we knew to be material and deterministic, how could we have free will in any sense but a compatibilist one?
Remark,
-I don’t think the designer needs to be more complex, or at least not permanently more complex. First of all, a team of people could create (and probably would create) the artificial intelligence, in which case it might be as complex as any one of the team members, but not as complex as the team. On the other hand, if it were created by a single individual, it might employ machine learning, in which case it could be created with less complex intelligence but develop more complex thought as it aged.
-Doesn’t it seem somewhat ad hoc to you to say that God inserts a soul at some point in a machine’s creation? At the very least, related to what Uccisore said, such an act would call into question the notion of “life at conception”, and would seem to warrant an IQ test for salvation. There seems to be a significant shift required in doctrine, if not dogma.
Kris, what makes you say that the only way is through biotechnology? If a computer could accurately simulate the physics of the human brain within itself, it seems it could create a pseudo-biological system that wouldn’t actually need to use any biological material.
In addition, our most advanced computers can achieve the processing power of the brains of a few mice, but employ no biological processors. Considering Moore’s Law and Kryder’s Law, it isn’t unreasonable to expect that a human brain’s power might be achieved in a few decades.
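That "few decades" figure can be sanity-checked with back-of-envelope arithmetic. The numbers below are illustrative assumptions only (roughly 10^16 ops/sec for a human brain, 10^13 for a mouse-scale machine, and a two-year doubling period in the spirit of Moore's Law); published estimates for brain-scale compute vary by orders of magnitude, so treat this as a sketch of the reasoning, not a prediction.

```python
import math

# All three figures are assumptions for illustration, not measurements.
human_brain_ops = 1e16   # assumed ops/sec for a human brain
current_ops = 1e13       # assumed ops/sec for today's "few mice" machine
doubling_years = 2       # assumed Moore's-Law-style doubling period

# Number of doublings needed, then the time those doublings would take.
doublings = math.log2(human_brain_ops / current_ops)
years = doublings * doubling_years
print(f"{doublings:.1f} doublings, roughly {years:.0f} years")
# prints "10.0 doublings, roughly 20 years"
```

Under these assumptions the gap closes in about twenty years, which is where the "few decades" intuition comes from; a more conservative brain estimate (say 10^18 ops/sec) pushes it out by another dozen years or so.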
I’m not sure that the Turing test is a valid measurement of intelligence, at least in the sense of AI. Humans talk of the soul as the difference between what is human and what is not (for those who believe in souls). To have full AI, it would have to be capable of generating its own programming, and we are capable of generating a beginning program capable of growth. That is the scary part, because we have no idea what a program capable of writing its own programming might come up with. While a limited form of AI might be feasible in the next couple of decades, full AI is probably a long way away…
I see nothing to discomfort any religious group because human is the definition of soul…
It would be hard to deny it in the case of God, He clearly seems to be something other than matter no matter how you slice it. But from what I can tell, substance dualism wasn't a Jewish idea, and it mainly was picked up from Platonism to describe some of the more difficult bits of the dogma by the Church fathers. From what I can tell so far, Platonism has had a much milder influence on the Eastern Churches, but I haven't dug as deep on the issue yet as I should.
One thing I DO know that separates Orthodoxy from Catholicism (as far as I can tell, not being an expert in either), is that in the Eastern Church, the 'separation' of the soul from the body that occurs at death is seen as an unnatural, defective state brought on by the Fall. It is perhaps instructive to point out that traditionally, the Orthodox Church prohibits cremation; it would seem that on some level, your body is seen as [i]being you[/i] even as it rots in the ground.
And yeah, the possibility of true artificial consciousness is up in the air- I'm pretty skeptical about it myself. But my Dad taught me 'measure twice, cut once', so I'd rather have an understanding I don't have to revise if I turn out wrong about that.
One thought that I’ve always had about the Turing Test. Suppose, like Carleas’ example, we have the first AI doing what it can to convince us of its personhood. However, suppose for the sake of argument that the AI is trying its darndest to convince us that it is NOT a person: it insists over and over that it has no inner life, no self-awareness. In fact, it goes on to write a very creative novel intended to persuade the reader of its case through a series of hypotheticals. In cases where it cannot convince its audience of its inanimate nature, its not being believed seems to cause it great distress, often to the point of 'tears'.
Would it pass the Turing Test, and would we call the first AI a liar in such a situation?