back to the beginning: morality

The Basis of Morality
Tim Madigan on scientific versus religious explanations of ethical behaviour

Still, in his private writings, Darwin offered telling criticisms of the moral teachings of Christianity. In his Autobiography he repeats the objections to the Divine Command Theory raised earlier. Why, he asks, should one accept the Bible as divinely-inspired, rather than other holy books, such as the Koran, the Analects of Confucius or the teachings of the Buddha?

Exactly my point.

Suppose hypothetically you went around the globe and everywhere you went you found that everyone worshiped the same God. Now, that might not be enough to demonstrate His actual existence, but, in my view, it would be far more intriguing than going around the globe [as it is today] and noting all of the many, many, many different [and often conflicting] Gods and religious paths.

Still, no Gods have ever actually been proven to exist. Or none that I am aware of. And, basically, this is what allows all the various denominations to insist that a God, the God is their own God. After all, it’s not like anyone can demonstrate otherwise, right?

Again, however, for all practical purposes, it can be understood in any number of ways. Christianity alone has hundreds and hundreds of denominations around the globe:

Then the part where capitalism precipitates the Protestant Reformation and Christianity accommodates itself to the bottom line. Prosperity gospel, for example. The emphasis shifts from an “other world” to “this world”. From the social to the individual. The more successful you are, the more that indicates God’s own…blessing?

The hell if it’s damnable?! After all, run Darwin’s conjectures by any number of Christians today and they will insist that only those who deserve punishment are damned. It’s God’s will. End of discussion.

So, is Darwin, along with his “father, brother and almost all his best friends”, now writhing in agony in Hell?

The Basis of Morality
Tim Madigan on scientific versus religious explanations of ethical behaviour

Darwin broadened the argument raised by Mill. There is certainly much evil in the world, but it is not just evil for humans – why did the deity create so many species, and why did they suffer for the millions of years preceding the emergence of humans?

“That there is much suffering in the world no one disputes. Some have attempted to explain this in reference to man by imagining that it serves for his moral improvement. […] But the number of men in the world is as nothing compared with that of all other sentient beings, and these often suffer greatly without any moral improvement. This very old argument from the existence of suffering against the existence of an intelligent first cause seems to me a strong one; whereas, as just remarked, the presence of much suffering agrees well with the view that all organic beings have been developed through variation and natural selection.”

In other words, back to the “brute facticity” of an existence – of a universe – in which, ontologically, there is no teleological foundation upon which mere mortals can ground an essential meaning and morality. So, over the centuries, Gods and any number of other One True Spiritual Paths were invented. Then all one need do is simply believe what one is told by the ecclesiastics regarding what comes before and after one dies.

As for the fate of all the other animals…? It’s like everything else revolving around conflicting goods. Attitudes are rooted historically and culturally, and invariably revolve around the particular personal experiences one might have had in regard to animal welfare.

For human beings, there are the pros and the cons pertaining to many issues related to animals:

1] consuming animal flesh…hunting…factory farming

2] medical research and other experiments performed on animals

3] putting animals in zoos or in aquariums

4] making animals pets

As for nature at its most brutal and savage point: Wild animal suffering - Wikipedia

In other words, scientific thinking, embedded in the scientific method, is far more likely to come down out of the theoretical clouds and engage the physical, material, phenomenological interactions in a world ever awash in any number of moral and political conflagrations.

Ethics for the Age of AI
Mahmoud Khatami asks, can machines make good moral decisions?

First, the part many of us will need to acknowledge upfront: that the actual science involved here is something we accumulate almost entirely from those who actually are considerably more sophisticated in describing “how it works”.

And, in this regard, what we come to believe about it then revolves largely around the stuff we do read about it.

But how is an AI perspective in regard to the moral dilemma above not going to reflect the moral and political prejudices of those who programmed it? What always escapes me here, in other words, is the part where an AI “self” either is or is not able to go beyond the programming and provide us with things it was able to “think up itself”.

And, even more intriguing [for some of us], is it able to challenge those who programmed it…even reject the programming as wrong?

Finally, will it ever be able to acquire the autonomous capacity to provide flesh and blood human beings with a deontological moral philosophy? One that does not fall back on God and religion?

The gap, perhaps, between providing doctors who perform abortions with the very latest up-to-date medical information, and providing politicians who enact legislation relating to abortion with the very latest up-to-date moral assessments?

And, really, what on Earth can a machine know about sex and pregnancy and abortion when it does not possess the biological capacity to experience these things itself?

Click, of course.

Well, for those of my ilk, the main interest here still revolves around the extent to which AI can accomplish that which flesh and blood philosophers have not even come close to accomplishing: creating a deontological framework/scaffolding such that, when asked about the morality of any particular conflicting good, given any particular context, it really can provide this.

Ethics for the Age of AI
Mahmoud Khatami asks, can machines make good moral decisions?

This article explores AI’s ethical challenges, including bias, privacy, and accountability. These issues, from fairness to trust, impact everyday life. So as we navigate this new era, we must ask: How can we ensure AI aligns with our ethical principles?

The future of AI is not just technological: it’s deeply moral.

Here of course the assumption for many moral objectivists is that their own ethical principles had better be the ones that the AIs align with…or they are flat out wrong. Just as are all the flesh and blood philosophers here who dare to challenge them.

Then the part whereby all the different AI companies – List of artificial intelligence companies - Wikipedia – either are or are not themselves able to agree on a one-size-fits-all universal/deontological moral philosophy. Really, given such moral conflagrations as abortion, you tell me where AI is now in establishing a moral philosophy that all rational men and women would be obligated to embody.

Click, of course. And what does it mean for decisions to be ethical given what particular set of circumstances? What, after all, is the AI consensus regarding conflicting goods that you yourself might be aware of?

Then the part where AI machines today, in lacking any actual biological components [that I’m aware of], cannot even grasp a large number of experiences that flesh and blood human beings take for granted. What can AI today tell us about, say, the morality of consuming animal flesh or the transgender controversy?

It seems to me that only when we reach the point where we are confronting increasingly more sophisticated cyborg entities will we have to confront the possibility of an AI morality. A morality, for example, that pertains only to the machines themselves. And then, re the Terminator, it may end in a battle between us and them.

All I know are the following:

  1. The axe-murderer-at-the-door problem for Kant’s system
  2. The hanging-an-innocent-man problem for Bentham and Mill’s utilitarianism

I do, however, sympathize with what you say in your last paragraph. It seems truthful. However, as you said, paraphrasing: just because you deem it true doesn’t make it true.

Ethics for the Age of AI
Mahmoud Khatami asks, can machines make good moral decisions?

Or, as William Barrett once put it in Irrational Man:

“For the choice in…human [moral conflicts] is almost never between a good and an evil, where both are plainly marked as such and the choice therefore made in all the certitude of reason; rather it is between rival goods, where one is bound to do some evil either way, and where the ultimate outcome and even—or most of all—our own motives are unclear to us. The terror of confronting oneself in such a situation is so great that most people panic and try to take cover under any universal rules that will apply, if only to save them from the task of choosing themselves.”

On the other hand, tell that to the moral objectivists among us. Any number of them will insist that fairness, justice and the good revolve entirely around their own dogmatic ethics.

Then the endless hypothetical examples in which objectivists all up and down the moral and political spectrum explain to you why you should behave as they do. For some, you must behave exactly as they do. In other words, or else.

Thus…

Okay, if you are a moral objectivist let us know, given a particular set of circumstances, what the optimal assessment might be. Or, perhaps, the only rational assessment?

Exactly! What can a machine really know about complex medical and legal interactions among flesh and blood mere mortals? Instead, it will tell you what the flesh and blood human beings who programmed it think and feel about them.

Still, I’m the first to admit I don’t grasp AI technology in a sophisticated manner. So, what seems crucial here is the extent to which AI machines are able to reconfigure that which they are programmed to think into actual original thinking.

Ethics for the Age of AI
Mahmoud Khatami asks, can machines make good moral decisions?

Of course, the truly hardcore determinists among us are compelled to insist that both Kant and Mill were themselves just two more of nature’s very own automatons.

Right, like in regard to any number of moral and political conflagrations, AI can finally pin down that which mere mortals have so utterly failed to do after thousands of years. Then the part where AI today basically revolves around the fact that it is programmed…programmed by flesh and blood human beings.

So, if the issue being debated pertains to human sexuality, what on Earth can a machine grasp regarding that?

Same thing? After all, we still live in a world where over and over and over again, what makes some happy makes others downright miserable. How would AI ethics go about, for all practical purposes, changing that? From what set of assumptions, assessments and conclusions into what other set?

Anyone here care to explore this given a specific set of circumstances involving conflicting goods?

More to the point, in my view, is the fact there are any number of moral philosophies that mere mortals have already “thought up”: Category:Ethical theories - Wikipedia

So, will this be no less the case in regard to machine morality?

Meat machines are AI, too.

Lifeless meat machines are AI too.

They compute that they exist and don’t exist because they are programmed with binary software.

They exist because they need to exist to claim that they don’t exist and they don’t exist because they don’t possess life.

I exist/I don’t exist=I exist/I don’t exist is a translation derived from binary software 0/1=0/1

Good/Bad=Good/Bad is the correct starting philosophy for science not good=bad and bad=good.

Good/Bad=Good/Bad is also a translation derived from binary software 0/1=0/1

+/-=+/- philosophy translates into science perfectly because we know that attractive and repulsive electromagnetic forces do not cancel out so,

Attraction/Repulsion=Attraction/Repulsion

This formula explains how all matter is held together and why it vibrates, resulting in varying-frequency electromagnetic energy waves being emitted from it, which have unique binary code characteristics. These energy waves are picked up by the lifeless AI meat machine’s senses, converted into binary code, and sent to the brain organ for processing.

The self which is spirit and separate from matter interprets this information.

Ethics for the Age of AI
Mahmoud Khatami asks, can machines make good moral decisions?

Uh, exactly?

Here, of course, the reality for many of us revolves around the fact that in regard to the actual science involved, we have to either accept or reject what the “experts” tell us about the actual capabilities and limitations of “machine intelligence”. And that will often include our own moral and political prejudices, rooted existentially in dasein.

So, I start with the assumption that what I believe “here and now” about human morality…

…an AI entity might more or less concur with or more or less reject.

Then the part where those of my ilk…those who embody a fractured and fragmented moral nihilism in a No God world…make a distinction between AI “minds” entirely programmed by human minds and AI minds that have become increasingly more independent of human input.

Click, of course?

With regard to this, I start with the assumption that mere mortals in a No God world will be squabbling over what constitutes a flaw regarding any particular behaviors in any given community long after most of us are dead and gone.

On the other hand, what still seems to preoccupy most of us is whether that oversight revolves more around we flesh and blood folks exercising it over the machines, or increasingly more sophisticated machines exercising it over us.

Ethics for the Age of AI
Mahmoud Khatami asks, can machines make good moral decisions?

So, much more to the point in today’s “postmodern world” is the part where, in my view, even pertaining to ethics, there are theoretically many conflicting assessments regarding where to begin and where it should all end up:

1] consequentialism

2] utilitarianism

3] deontology

4] virtue ethics

5] biological imperatives

6] moral objectivism

7] moral relativism

8] moral nihilism

9] hedonism

10] ethical egoism

11] sociopathology

12] divine commands

13] social contracts [contractarianism]

14] Epicureanism

15] all of the other ones

Or, sure, you can just carry a quarter around with you and “flip for it”.

In any event, historically, all such “schools” revolve around one or another combination of might makes right, right makes might and moderation, negotiation and compromise.

Click, of course.

Of course, nothing regarding human interactions seems more clearly in play than the sheer complexity of all the variables involved. Anthropological, ethnological, sociological, psychological, political. All experienced by individuals out in particular worlds understood in particular ways. And, so far, to the best of my current knowledge, machine intelligence to date is really just human intelligence programmed into…into what?

In other words, let me know if you ever come across an AI entity that actually does appear to be coming up with assessments, conclusions, judgments, etc., given its own acquired frame of mind. Then the part where this machine intelligence is embedded in an AI entity that at least comes close to experiencing the world as we flesh and blood folks do.

Then there is the question of whether machines can ever perceive – experience – the world around us as we do. Again, though, assuming the way we experience it ourselves is not basically as one of Mother Nature’s very own automatons.

…so… which one… concur or reject, and on what basis?

If AI recognizes the equality of persons regardless of gender (and every single one I’ve talked to does), I reckon they will prioritize saving a life, since delivery will happen either way.


Moral Blind Spots
Gerald Jones discusses how we judge the past, how we will one day be judged, and what we can do about it.

“So what?”

That’s how some will react. Those, for example, who have thought themselves into believing their own death entails falling over into the abyss that is oblivion. They’re dead and gone. Forever and ever and ever. Of what possible concern can judgments be to someone on his or her way back to star stuff?

Instead, it would seem, judgments from the past, present and future go with you. That’s why “the Gods” and then later [historically] the “a God, the God, my God” folks always recognized the need for a Divine entity that everything could go back to…and fall back on.

The part, in other words, where an actual divine Creator commands behaviors such that, come one or another rendition of Judgment Day, it will be decided whether you go up or down.

And the stakes here – immortality, salvation – are, after all, simply staggering.

Or, perhaps, the greatest flaw of all here is how, down through the ages, there were, are, and almost certainly always will be those communities – God and No God – able to subsume any and all assumptions pertaining to morality under one or another One True Path. Or else?

And, now, living in an age where some insist it is becoming increasingly more difficult to differentiate flesh and blood human minds from AI machine minds, we are confronting the possibility whereby [perhaps] our very own monsters – terminators – will be unleashed.

Still, how can it not be fascinating to ponder the extent to which minds that are entirely machines either can or cannot [will or will not, should or should not] provide us with an objective morality?

Click, of course. After all, some do argue we ourselves are but Mother Nature’s own machines.

Whatever that might possibly mean.

…but are you still waiting for Godot, though?

Moral Blind Spots
Gerald Jones discusses how we judge the past, how we will one day be judged, and what we can do about it.

Which is basically what I attempt to do in the OPs here:

The part whereby in a No God universe it seems entirely reasonable to sustain a fractured and fragmented moral philosophy.

On the other hand, sure, “for better or for worse”.

“A moral blind spot is a psychological bias or blind spot that prevents people from seeing the unethical aspects of their own behavior, actions, or judgments.” – AI

On the other hand, for countless men and women, a psychological bias revolves instead around the assumption that God or No God their own self-righteous moral philosophy reflects either 1] the best of all possible worlds or 2] the one and the only One True Path to an objective, essential morality, applicable to all of us.

Take the historically easy case of the Lindow Man. At some point in the First Century AD, on a remote moor in Cheshire, England, a young man of high birth was ritually killed, or rather, overkilled: his throat cut, he was axed in the head and garrotted, and his naked body thrown into a bog.

Contemporary Roman judgments of ritual sacrifice were harsh: Roman commentators left many descriptions of the horrific superstitious practices of Celtic human sacrifice, including rumours of huge Wicker Men into which people were herded to be burnt alive.

We share the same revulsion over human sacrifice as Caesar and friends. Yet something else strikes us: how could the Romans have condemned the Celts so harshly and yet themselves practise ritual human slaughter on a scale unseen in Celtic Europe? Mary Beard estimates a death rate of 8,000 gladiators per year – meaning that over the centuries, hundreds of thousands of young men died in arenas across the Roman Empire.

So, you tell me…

Using the tools of philosophy, what are we to make of this? Are there human behaviors so ghastly, so grotesque that no one would or could ever rationalize them? This after perusing human history to date? Even given such things as the Holocaust and any number of other “final solutions” that mere mortals have pursued self-righteously? Sometimes in the name of God, other times in the name of one or another political ideology or “school of philosophy” or dogmatic assessment of so-called biological imperatives.

Moral Blind Spots
Gerald Jones discusses how we judge the past, how we will one day be judged, and what we can do about it.

This sort of assessment will always be problematic. After all, it’s not like philosophers can peruse history and note human behaviors that were/are/always will be demonstrably moral or immoral. Or go around the globe today and rank human communities as either more or less moral. And anything the Romans did “back then” can be compared to, say, any number of ghastly “final solutions” embedded in the Twentieth Century? Or look around the world today. How would we explain our own behaviors to the Romans? And, as often as not, much of it still revolves around those who champion one or another rendition of “Our True Path Or Else”.

Instead, only the moral objectivists/deontologists among us claim that they have already accomplished this. Again, just ask them.

Perhaps we should just accept that in regard to any number of human interactions “down through the ages” what are seen as “easy cases” to some are construed as anything but to others. And still today. Abortion for example. Or gun control. Or homosexuality. And that is often before we get to the barbarous pursuits of those who own and operate the “deep states”.

As for the “slaughter of animals”, you tell me how easy or hard – moral or immoral – that one is.

What we can say about the current state of affairs in the world is that it didn’t happen by accident. It is the result of ideas. We must always examine and evaluate the prevailing ideas of society. We must judge them against reality to see if they are fit or foul.