Suppose Earth is visited by a race of extremely sophisticated aliens, who have come on a mission of peace to cure all our moral problems. The offer they propose is this:
1.) Immediately, if this is agreed to, a satellite of alien design will orbit the earth, bombarding us with a ray that will sterilize every human on the planet.
2.) People who want to raise children will instead be given an android of alien design. These androids appear human - they start as babies, they grow, they eat and drink, and so on. Within 100 years or so, all humans will be replaced by these androids, as humans grow old and die off, unable to reproduce. Humans are made as comfortable as possible during this period.
3.) The androids are morally perfect - once humans are gone, there will be no more war, crime, violence, and so on. They are perfect stewards of the earth; the environment will improve under their care.
4.) The aliens are asking for nothing in return, and will leave quietly if we refuse. They simply believe this to be the best way to bring harmony to the universe.
5.) The androids are NOT human. Under the skin, they are nothing like us; our DNA is in no way involved in their creation.
So, would you accept the aliens' offer, or not? Would your answer change based on this:
6.) The aliens use their amazing future-technology to show you that these androids have no consciousness or self-awareness - they have no more mental life than a hammer or a cardboard box. This is no coincidence; the aliens assure you this is an absolutely necessary condition of the androids being morally perfect beings.
The question is kinda moot. Our ethics concern humans and life in general, and if these things are not human, then it doesn't matter whether they are morally perfect or not.
Changing ‘human nature’ (as in, changing it to ‘morally perfect android nature’) cannot be judged, since our standards of judgement will have changed.
Great thread idea. I’d say no, purely based on the last clause - no mental activity.
If they had the capacity to be taught and to absorb 'human' cultural/individual ideas - then I might have answered differently - raising the question: "How much of humanity is meme, how much gene…?"
Even if the androids were simplistic solid-state 'moral' machines, with the limited range of behavioural/creative response patterns this fixity of purpose would require - would they have needs…? A need for some sort of external energy source…? Food etc… Are they invulnerable…? Would they need to protect themselves…? From the elements if nothing else…? Do they have an individual sense of worth…? Are they flexible…? Susceptible to 'mutagens', either physical or memetic…? Programming errors…?
Because I'd say, despite their fundamental lack of 'humanity', if they have any of these needs/qualities… then they would, in time, make the same mistakes we made - though whether these could even be classed as 'mistakes' is another topic for debate.
BUT the point I was rather cynically making (trying to make), and one possible reading of the original post, is that maybe "moral perfection" is actually not achievable by real, living, breathing humans.
In fact, maybe it's not even a good "aim".
Further, that these sorts of conceptions are quite anti-human and anti-life, and more suited to zombies and machines than real people.
That they deny everything affirmative and positive in us - that they are "anti-life".
Maybe we need to move on from Plato.
Tabula Rasa's speculations on how the android society would function are interesting in their own way - they may even become a real issue one day, though AI doesn't exactly seem to be flying.
How about a Nietzschean/Deleuzian perspective on it - which would be that abstract morality is the systematic slaughter of what it means to be actually human.
E.g. you have to kill what is really human in us (creative, wilful, cruel, irrational) to establish "morality".
We are only "moral" in so far as we can turn everything human in us off?
Plato's "form of the good" still exercises its rule from a distance of thousands of years and forms us into gold, silver and brass people who know their place… robots -
“Some of you have the power of command, and in the composition of these he has mingled gold, wherefore also they have the greatest honor; others he has made of silver, to be auxiliaries; others again who are to be husbandmen and craftsmen he has composed of brass and iron; and the species will generally be preserved in the children. But as all are of the same original stock, a golden parent will sometimes have a silver son, or a silver parent a golden son. And God proclaims as a first principle to the rulers, and above all else, that there is nothing which they should so anxiously guard, or of which they are to be such good guardians, as of the purity of the race.”
(Republic 415d)
We're flesh and blood alright, but tied to universal "moral forms" and so completely unrealised as free, creative, affirming beings.
OR is that a very, very crude reading of Deleuze and Nietzsche???
Except when, you know, no one knows what point you are making.
You could call it something else if you like, but morality is just a system to guide behaviour towards a purpose. Sentience (whatever that means) and emotion have nothing to do with it as a principle; they are just part of the method by which it is enforced among humans.
The androids would be acting entirely morally (that is to say, properly) and, by coincidence, in a manner that is consistent with an ideal of our current ethics.
What I think you are disagreeing with is the absence of emotion and ‘sentience’ per se. This is a purely aesthetic opinion. Emotion and ‘sentience’ have no intrinsic value.
My whole point summarised: If you change human nature, you change ethics. There is no ‘meta-ethics’ above these ethical systems to guide which system is ‘right’. At best you can make an aesthetic choice between them (ethical systems themselves have no intrinsic value).
The fact that the androids would be acting out our current ideal of ethical behaviour is moot.
krossie, 'morality' is part of what makes us human. Being creative, wilful, cruel and irrational are ethical choices (that is to say, they are guided by some principle; they are not arbitrary). Nietzsche was arguing against the extreme case of others enforcing moral standards upon you for their own benefit; he was not arguing against having any criteria for behaviour at all.
Ah yeah, everyone has some "code" by which they live - that's trivial - and, in that sense, there's a Nietzschean ethics.
But Nietzsche (after Deleuze's version, anyway) thinks that being wilful, cruel, creative, affirmative is simply and absolutely inherent - wired in. It is to act as we are if we have the power or, at least, to act to the limits of our ability.
All "thought out" moral principles/schemas try to turn us into something we are not, and make us complicit in this farce, making us guilty and "evil" simply for trying to express our will - for being what we actually are:
“it is not surprising that the lambs should bear a grudge against the great birds of prey, but that is no reason for blaming the great birds of prey for taking the little lambs”
Where the good Plato and co. intervene is to gather the lambs together and say "those birds of prey are evil". And what is good?
Why, us, the lovely weak lambs. So it's a total turning of things on their head. Good is everything that's reactive. It's to make us what we are not.
Learning is an individual requirement, as much as an enforced social one. I knew exactly what I was saying and why.
Sentient
1. conscious: capable of feeling and perception ("a sentient being")
2. responding with feeling: capable of responding emotionally rather than intellectually
From Latin sentire, meaning "to feel".
Which simply means that an android, being an unaware, unfeeling mechanism, cannot exhibit morality. Morality arises first from the consequences of emotions.
Which is, again, why I said: kill the humans, kill the morality… and thus the entire proposition is ludicrous, at best.
This board isn't particularly for learning Latin, and in fact the Latin got in the way of what the board is particularly for. What good is it to anyone else that you knew what you meant, when they can't find out?
Um, I don't see a mention of morality anywhere in that definition.
It seems to me that any complex system capable of expressing itself could be said to be 'ethically loaded'; it just wouldn't necessarily be morality based on the same things as ours (those things in us that are manifested by emotions). An emotionless serial killer is still immoral insofar as he is aware of the moral standard, but acts contrary to it.
If Deleuze's imperative is "do as thou wilt", then fine. I think, however, that my little model promoting self, genes and species is more intuitive than assuming that cruelty (or whatever) is more fundamental. People are naturally not only selfish, but also loving and selfless.
But if you remove all schemas, I would argue that 'what we actually are' doesn't amount to much. Sure, those schemas must run along our natural inclinations (certainly not against them), but without them you are kinda arbitrary.
Whether you value your own opinion of your actions over other folks' opinions is a separate matter, but in practice it shouldn't be that hard. If morality is based on human nature and it's only humans who are judging, the differences in opinion shouldn't be irreconcilable (or at least, they will concern differences in facts, not differences in values).
Well, it's harder than that!! To will and to really actively create is difficult.
We may start off as natural, willing, creative beings (Deleuze/Nietzsche), or even as the species embodiment of particular complexes of genes wanting to ensure their own survival (you?/Dawkins).
Either way, though, on top of this we now have thousands of years of culture and history over whatever we were at the start.
So we can't just say "I will", because we are trapped in a mesh of social, economic and political circumstances.
For Deleuze/Nietzsche, these things above and over us are also a massive creation of will - but reactive will, will turned inwards, the willing of the weak and the slave (who, ironically, have collectively installed their values as dominant for now).
Deleuze and Nietzsche, though, just see it as a further downturn away from what we can be, another turn of the screw - "pity" frees no one, neither the pitied nor the one pitying. It devalues us all. So to be the one who can will, who can create new values based on active creativity, to be what we are capable of being is, right now, a massive task.
It is to cut against the entire project of Western philosophy and culture (and without any supporting props - science, religion, etc., or absolute values).
Anyway - I'm not sure what I'm trying to say exactly, so I'll knock it on the head at that and come back when I do know.
Another thought: If those android babies were blank slates upon which we could scrawl, I think their ‘parents’ - aware that it would not be their genetic legacy that would be passed down, but their worldview, their values - would be a whole hell of a lot more careful in bringing those cuckoos up, and a lot more thoughtful about what they believed in in the first place.
So - if the androids could be taught, and they had fewer ‘bodily’ needs - immortality would be nice - I’d say yes, allow ‘them’ to replace ‘us’. Though the distinction would be an arguable one - the body is not of us, but the mind is… Back to ‘what is human…?’ again.
Call me Mr. Sci-Fi, but I'd say we're headed in the direction of organo-mechanical bodies anyway - it's the old axe-blade/handle question: if I replace an arm, am I still me? If I replace my entire body, am I still me? If I upload my 'I' into a different support medium, am I still me…?