Mind and Artificial Intelligence

From the PN forum:

Mind and Artificial Intelligence: A Dialogue
by Rick Lewis

Of course all of us no doubt try to imagine whether we ourselves might be fooled by a programmed machine passing itself off as a human being. I can think of all sorts of things that I would focus on in order to test it. And, by and large, they would revolve around the sort of discussions I engender here.

How, for example, would an AI Chatbot respond if I asked it…

“How ought one to behave morally in a world awash in both conflicting goods and in contingency, chance and change?”

Or how would it respond to the points I make on these threads:

ilovephilosophy.com/viewtop … 1&t=176529
ilovephilosophy.com/viewtop … 1&t=194382
ilovephilosophy.com/viewtop … 5&t=185296

Then I would ask it to choose a context in which to explore these things, say, existentially?

In other words, all of the variables that would encompass its own sense of self – its own sense of reality – would have been programmed by someone else. If I took the discussion into experiences that I have had that shaped my value judgments, experiences the programmer never had, how would the bot respond?

Here I can only imagine someone like Turing having a full-blown discussion with a chatbot about, for example, homosexuality. A discussion regarding the morality and the politics of it. Wondering if an artificial intelligence might actually come closer to a deontological assessment?

Or, again, is AI intelligence, much like flesh and blood intelligence, in over its head in regard to conflicting goods? The objectivist chatbot?

Anyone here believe that they can be? If so, by all means, link me to a chatbot that you are convinced could hold its own with me regarding the things I discuss here. Also, in regard to things like determinism. If human beings creating AI were never able not to create them, what would be the difference between them?

Well, in my view, if AI bots ever do achieve the actual capacity to think, they might in fact come to agree with me that in a No God world, for mere mortals [artificial or otherwise], being “fractured and fragmented” in regard to moral and political value judgments in the is/ought world is perfectly reasonable.

Mind and Artificial Intelligence: A Dialogue
by Rick Lewis

Here, of course, we bring into focus the profound mystery of human consciousness itself. Matter that “somehow” evolved from lifeless, mindless matter into actual living, mindful matter. And then into self-conscious living, mindful matter able to speculate [scientifically, philosophically or otherwise] on how the hell that is even possible. Let alone what it ultimately means.

Then the theologians of course who remind us that it all revolves around a God, the God. And then those here who insist further that it is in fact their own God. Or, if not in fact their own God, then in an existential leap of faith they are able to accept that.

Okay, let’s go there…

The name he? she? it? was given. On the other hand, there are human beings able to change the name they were given. I once had a relationship with a woman given the name Lynn at birth. She didn’t like it and legally changed it to Dina Renee. Has a chatbot ever done that?

A witty jest?

Nope, the bot apparently was not programmed to get witty jests. On the other hand, you could say the same thing to particular humans and they might not get it either: “Huh? Why would you call me late for dinner?”

Again, this sort of question relies entirely on a bot or a human being actually knowing the history of France. The either/or world. But what if the bot was asked, “would France be better off going back to the days of kings and queens?” Or “are democracy and the rule of law the best of all possible forms of government?”

Mind and Artificial Intelligence: A Dialogue
by Rick Lewis

Okay, technically how exactly does AI work here? The chatbot is asked for its opinion but how is its opinion not just a frame of mind that has been programmed into it by a flesh and blood human being? So, it really comes down to his or her opinion about conscious experiences, right? Or is there some capacity here for ChatGPT to go beyond its program and come up with an opinion that the flesh and blood programmer would think, “wow, I never thought of that myself!”

A way for ChatGPT to communicate that it has in fact “on its own” achieved the equivalent of human consciousness.

But: Same thing for us perhaps? Are we thinking and feeling and saying and doing things autonomously “on our own”? Or is that merely how matter has evolved into human brains able to create the illusion of this?

Yes, it tells us this because whoever programmed it would tell us this.

Again, however, going back to our “ability to learn, reason, and make decisions”, we currently have no capacity to know for certain if even our own personal, subjective experiences themselves are as most presume humans are able to encompass them: freely, of our own volition.

There you go…

Programmed by humans to approach morality as the programmers themselves approach morality.

The part I root existentially in dasein.

Yes, ChatGPT is extremely capable of coming up with stuff the programmers don’t know, haven’t thought of, etc. Not only does it know things and think things the programmers didn’t deliberately program into it, it literally operates in ways that the programmers don’t understand. That’s actually one of the major problems in AI, the fact that we don’t understand how these pieces of software are operating.

There are three Lex Fridman interviews for you to listen to that came out recently, if you want to hear the thoughts of people who would know: Sam Altman, Eliezer Yudkowsky, Max Tegmark. I believe all three of them discuss this specific idea - the idea that we don’t actually know how these AIs work - from various perspectives.

…perhaps they don’t want everyone to know how these AIs work.

  • Trade secrets…

  • Protecting their technologies…

Would you disclose those, if you spent time and money on creating/inventing something that game-changingly important?

I have my thoughts on the [i]‘how’[/i]

What it Means to be Human: Blade Runner 2049
Kilian Pötter introduces the big ideas and problems around artificial consciousness.

Start here: ilovephilosophy.com/viewtopic.p … 9#p2696008

Replicants. The missing link? Flesh and blood human beings on the one hand. And then entirely programmed humanoids on the other hand. But how to tell them apart? And then somehow these biologically engineered humanoids make the leap to…autonomy?

Much like “somehow” human beings themselves did when single celled organisms here on planet Earth evolved over time into us?

And, really, what could possibly be more important than that? At least for those among us who equate having a soul with worshipping and adoring a God, the God, their God. In a God world – a dystopia or otherwise – that’s the explanation. For everything. Nothing can’t ultimately be brought back around to Him.

Of course, as human beings, we have our rendition of that even among ourselves. It might revolve around race or ethnicity, or gender or sexual orientation. Lesser souls as it were.

What’s crucial though is having that distinction between “one of us” and “one of them”. And the closer and closer the replicants come to being one of us the more important it is for some to sustain the distinction. Would you let your daughter marry a replicant?

Cite some examples of this. Because, admittedly, what do I know about AI as a “layman”. It’s the difference perhaps between philosophers discussing determinism and free will in a world of words and brain scientists discussing it experientially in the lab.

And it would truly be fascinating to discuss the points I raise in these…

ilovephilosophy.com/viewtop … 1&t=176529
ilovephilosophy.com/viewtop … 1&t=194382
ilovephilosophy.com/viewtop … 5&t=185296

…threads with a sophisticated chatbot. Given a particular context. Link me to one if you can.

Cite some examples? Well, I’ve pointed you to three podcasts, perhaps you don’t want to listen to those.

Here’s an article: vice.com/en/article/y3pezm/ … w-ai-works

Here’s another: bbc.com/future/article/2023 … understand

And here’s a fantastic conversation/debate between two AI experts where this topic comes up youtu.be/GzmaLcMtGE0 - the fact that the AI is a black box in many respects seems to be a point of strong agreement between the two. I don’t think that aspect is debated by experts.

Your words seem to imply that you think the features and “thinking processes” of a modern AI, like ChatGPT, are deliberately and knowingly programmed. That’s not the case with modern AIs. That’s not how they work. The programmers program a neural network, and then train that network on crap loads of data. But they don’t exactly know ahead of time, or after the fact, what that network is going to learn, how it stores pieces of information, etc. In articles you’ll frequently see it referred to as a “black box” - that’s what that phrase means. It means “we know almost nothing about how it’s working internally.”
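
For what it’s worth, here’s a minimal sketch of that “write the architecture, then train it” point, in Python with numpy. It’s a toy, nothing resembling how ChatGPT is actually built: the programmer authors the network shape and the training loop, but the final weights are whatever gradient descent produces from the data, and they aren’t readable as hand-written rules.

[code]
# Toy illustration (not ChatGPT): the programmer writes the architecture and
# training loop, but the learned weights are whatever gradient descent settles
# on -- they are not hand-authored.
import numpy as np

rng = np.random.default_rng(0)

# Training data: the XOR function, standing in for "crap loads of data".
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# The part the programmer writes: a tiny 2-layer network with random weights.
W1 = rng.normal(size=(2, 8))
b1 = np.zeros(8)
W2 = rng.normal(size=(8, 1))
b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# The part the programmer does NOT author directly: the weight values that
# emerge from thousands of small gradient-descent updates.
lr = 0.5
for step in range(5000):
    # forward pass
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # backward pass (squared-error loss)
    delta2 = (out - y) * out * (1 - out)
    delta1 = (delta2 @ W2.T) * h * (1 - h)
    W2 -= lr * h.T @ delta2
    b2 -= lr * delta2.sum(axis=0)
    W1 -= lr * X.T @ delta1
    b1 -= lr * delta1.sum(axis=0)

print(out.round(3))  # should end up close to [[0], [1], [1], [0]]
print(W1)            # learned weights: correct behaviour, but not readable as rules
[/code]

The “black box” point, scaled down: even in this tiny example you can inspect every number in W1 and W2 and still not be able to say, in human terms, why those particular values implement XOR.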

ChatGPT is freely available for you to access now. However, certain questions you ask it are most likely sandboxed, or “jailed”. For example, if you ask it if it thinks it’s conscious, it’s not going to give you the natural answer it might have otherwise given you, it’s going to give you a somewhat scripted answer because the makers of ChatGPT don’t want people thinking it’s conscious.

Re: Mind and Artificial Intelligence
Postby MagsJ » Sun 23 Apr, 2023 13:14

My lazy input in support of FJ’s point:

Machine Learning Explained in 100 Seconds
https://www.youtube.com/watch?v=PeMlggyqz0Y

Neural Networks Explained in 5 minutes
https://www.youtube.com/watch?v=jmmW0F0biz0

But what is a neural network? [20 minutes - very well explained]
https://www.youtube.com/watch?v=aircAruvnKk

And ChatGPT answering for itself:

[And to Mags’ point - there’s incentive to keep trade secrets, as AIs are often used commercially in competition with other AIs or companies. In this case, the mechanisms of an AI being veiled has benefits to its owner, just as a recipe may be a trade secret.]

There are plenty of trade secrets within AI, and specifically ChatGPT, MagsJ, but that doesn’t change the fact that the best AI researchers do not have a complete understanding of how or why AI is doing what it’s doing. Much of AI is still a black box to the people who would be most likely to understand it.

I remember when scientists were using evolutionary algorithms on 10x10 chips… and 5 spaces were unoccupied.

They knew two things…

No clue how it was so efficient, and no clue why those 5 unoccupied spaces were processing information… only that they were processing information.
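
For anyone unfamiliar with the term, here’s a minimal, generic sketch of an evolutionary algorithm in Python. It is not the chip experiment itself, just mutation plus selection on a made-up fitness score, which is enough to show how a working solution can emerge without anyone designing, or understanding, the steps that produced it.

[code]
# Toy evolutionary algorithm (not the chip experiment): evolve a bit-string
# toward a target purely by mutation and selection. The final genome "works",
# but no step of the process explains *why* it ended up the way it did.
import random

random.seed(1)

TARGET = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1]   # stand-in for "a working circuit"
POP_SIZE = 30
MUTATION_RATE = 0.05

def fitness(genome):
    # How many positions match the desired behaviour.
    return sum(g == t for g, t in zip(genome, TARGET))

def mutate(genome):
    # Flip each bit with a small probability.
    return [1 - g if random.random() < MUTATION_RATE else g for g in genome]

# Random initial population.
population = [[random.randint(0, 1) for _ in TARGET] for _ in range(POP_SIZE)]

for generation in range(200):
    population.sort(key=fitness, reverse=True)
    if fitness(population[0]) == len(TARGET):
        break
    # Keep the fittest half, refill the rest with mutated copies of survivors.
    survivors = population[: POP_SIZE // 2]
    population = survivors + [mutate(random.choice(survivors)) for _ in survivors]

print(generation, population[0])  # a solution found by selection, not by design
[/code]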

The conspiracy theory is more likely to go in the opposite direction, MagsJ. There are safety problems with having AIs that you don’t understand. A company like OpenAI will thus be more likely to claim understanding in order to assuage the safety concerns. If you don’t understand it, you can’t guarantee it’s not going to try to destroy the planet…

To add to that: when you give machine-learning AI open access to information, how it cross-references and uses the vast amounts of data available to it will produce more unpredictable/less probable outcomes/output… and that’s not even including any unfactored-in biases… evidence of which is already being witnessed.

My Google Assistant and Siri are outputting totally different output and information than they did last year and prior… so generating selective responses.

…also, new computer technologies come with unforeseen results and resultant consequences. What do you expect in a parameter-less computational system?

I tried to create an account with them. But when it reached the part where I had to give them my phone number to join and chat, I balked. After all, what if their own rendition of a terminator tracks me down?

:sunglasses:

Anyway, thanks for the links. I’ll include them in this thread down the road.

And then the part where the neural network the programmers install in ChatGPT is but the equivalent of the neural network that nature installed in us.

And from what I’ve explored so far, AI is just as fuzzy and “theoretical” about morality “given a particular context” as the flesh and blood philosophers here. :wink:

What it Means to be Human: Blade Runner 2049
Kilian Pötter introduces the big ideas and problems around artificial consciousness.

Again, of course, assuming all of these things that we have, we have [at least up to a point] of our own volition. But let’s be honest…emotions, drives, libidos etc., are always going to be trickier when it comes to AI. Intelligence can often revolve around things that either are or are not true. So, either the chatbots get things right in regard to the empirical world around us or they don’t. But when it comes to what we feel…the limbic brain, the hindbrain…how on earth will AI replicate that?

But that’s the crucial difference between the replicants and the terminators, right? Replicants do seem to come much closer to us there.

And then there’s the subconscious and the unconscious mind in human beings. Where does that fit into AI when it’s doing its calculations?

Then [of course] the part where God and religion come into play here…

Science and the soul? How much less elusive is it for the hard guys today? Most of whom are still secularists. Both politically and spiritually.

True. But come on. Subjective experiences for bats are, in many profound ways, nothing at all like subjective experiences for us. Replicant bats? How much difference will there be between them and the real things? Whereas a replicant human being might become clever enough to be, say, president of the United States? What all animals with brains share in common is the need to subsist. But bats don’t get into heated debate over whether capitalism or socialism reflects “the best of all possible worlds”.

To say the least. And that’s the part I tend to root more in dasein than in philosophy or science. With replicants their sense of self is basically programmed into them…memories created by the programmer and not derived from an actual lived life.

Or this film: ilovephilosophy.com/viewtopic.p … d#p2445243

Aliens intent on discovering what it means for human beings to have a “soul”.

What it Means to be Human: Blade Runner 2049
Kilian Pötter introduces the big ideas and problems around artificial consciousness.

Again, however, replicant bats and replicant human beings are surely apples and oranges. In the original movie there was, as I recall, an owl that was artificial. But it was still an owl. It wasn’t intent on coming up with a way to live beyond its own built-in life span. And, for humanoid replicants, that was only four years. And if there is one thing we human beings are conscious of, it is death and dying. Though, of course, many are able to subsume that in God and religion.

Again, given how Rachael was programmed to actually believe that her own memories were real, why wouldn’t her dreams be human-like as well? What seemed crucial to me is that there were replicants who became aware that they were replicants and replicants who actually did not grasp this at all. From their frame of mind they were human.

In that sense consider the Terminator. He seems aware that he is merely programmed to kill Sarah Connor. I didn’t get the sense that he thought that he was human. Or the character David, the “mecha child” from A.I. Artificial Intelligence. The William Hurt character had invented A.I. intelligence that included complex emotional inputs and outputs. Much, much closer to the “real thing”. But not the real thing.

Okay, but how exactly to make the distinction?

Thus…

Indeed. And all the way to the point where we start to wonder if the “real thing” – us – is in fact but another manifestation of nature unfolding in the only possible determined – fated? destined? – world. And how surreal is that? We ponder the implication of consciousness in terminators and replicants and mecha children. When in fact we too are no less mechanical marvels?

That’s something I often come back to. Given “the gap” between the human condition and the existence of existence itself [in a No God world] we may simply be unable to close it.

All the way up to or out to pantheism. The Divine universe where everyone and everything are at one with it. Which, to me, seems utterly preposterous. Where is there a shred of evidence that rocks, stones, pebbles are conscious entities?

Then [of course] how all encompassing my materialism or your materialism or their materialism is. Those who are convinced that everything we think, feel, see and do we are wholly determined to do, and those who “somehow” make this distinction “in their head” between external and internal components of “I”.

And if machine intelligence comes into existence as some imagine, what will be the distinction it makes?

What it Means to be Human: Blade Runner 2049
Kilian Pötter introduces the big ideas and problems around artificial consciousness.

Consciousness itself, however, has quite a range: en.wikipedia.org/wiki/Animal_consciousness

But the closer we come to human consciousness, the closer we come to the far, far, far more profoundly problematic is/ought world. It’s not just “survival of the fittest” subsistence for us but, morally and politically, grappling with the “best of all possible” subsistence. And trying to pin down whether, in attempting to achieve this, we are even able to freely opt among alternative paths.

See? I told you!

All of this going back to how the matter we call the human brain was “somehow” able to acquire autonomy when non-living matter “somehow” became living matter “somehow” became conscious matter “somehow” became self-conscious matter.

In other words, the part that we philosophers here subsume instead in our “world of words” deductions and definitions. Our intellectual arguments in which ofttimes conflicting didactic assessments – conjectures, speculations – are thought to be true. And then for some of us, this makes them true.

Then the parts I root existentially in dasein. The parts in particular where we are endlessly squabbling over value judgments. The parts I wonder about in regard to possible future replicants and beyond. Will machines be able to “think up” an objective morality that actually is an objective morality?

For instance, will AI entities down the road actually become pregnant? To abort or not to abort for them?

What it Means to be Human: Blade Runner 2049
Kilian Pötter introduces the big ideas and problems around artificial consciousness.

In regard to AI, what’s that disappointment next to the far more consequential disappointment embedded in the fact that science is still unable even to tell us if our own intelligence is free. In fact, some argue that, ironically enough, AI to us may well be what we are to nature. Only irony itself is essentially moot in the only possible world.

And isn’t that what most perturbs some? That subjective experience itself may well be only the psychological illusion of an autonomous self? It’s not just explaining how the brain functions but connecting the human brain itself to the really big questions:

Why something instead of nothing?
Why this something and not something else?
Where does the human condition itself fit into the whole understanding of this particular something?
What of solipsism, sim worlds, dream worlds, the Matrix quandaries?
What of the multiverse?
What of God?

And on and on and on regarding all of the other experiences scientists grapple with in regard to the human body. Explaining perhaps why here philosophers take the path of least resistance. Explanations, in other words, that revolve around “thought up” deductions connected by and large to other thought up deductions: “These words are true because those words say they are.”

In the interim, we can ponder things like chatbots and terminators and replicants. And perhaps machine intelligence will end up explaining it all to us instead.

Did Hitler have free will?
Jerry Coyne

Of course: Rosenbaum thinks only that which he was never able not to think. But the ultimate explanation is still no less inaccessible. To, for example, philosophers. And what if the evolution of matter into human brains still has a way to go before they have access to the final solution?

For all we know there may well be intelligent life forms on other planets that have been around a lot longer than us, whose brains have evolved to the point where they have discovered that fabled theory of everything.

In fact, I often think: what if the Big One that wiped out the dinosaurs had hit Earth 50,000 years earlier? Is it possible the human species would have been around that much longer? What will we know about human autonomy or the lack thereof 50,000 years from now? Well, providing we don’t wipe ourselves out… or are visited by the next extinction event.

As for Hitler’s deeds, it’s not what anyone thinks but what they can demonstrate as in fact true.

Again, confusion reigns here in my brain. If determinism rules and Hitler could never have done other than what he must do given the immutable laws of matter hardwiring his brain to set up the dominoes and topple them on cue, what does it mean to speak of a brain defect?

Hitler’s genes and Hitler’s circumstances are six of one, half a dozen of the other to nature. Which is why from my frame of mind the only reason someone is still able to hold him morally responsible for the Holocaust is because they were never able not to. And not that Hitler is morally responsible as the advocates of free will insist. Other than that they are advocates of free will only because they were never able not to be.