monad wrote:Yeah right! I don't notice anything here that wasn't done constantly for the last ten thousand years.
You're missing the point. You seemed to be arguing that neutrality and a lack of hate are pluses. I don't think so. Sure, humans have acted horrifically. Should we add a new agent THAT HAS NO EMPATHY AT ALL to a world with the problems you want us to focus on?
Actually one can add a few more "human defects" to this summary that even a robot wouldn't think of.
You have no idea what a robot would not think of. Perhaps an AI-driven robot would be curious about torturing, killing, and resurrecting someone, or everyone, for millions of years to see what happens to their minds.
Why do we fear intelligent machines so much in the first place?Well, I answered that, at least in part.
Could it be because initially we would be "infecting" them with our own codes, and if that's the case, what then amounts to a greater evil, us or them?So you want to focus on blame. I was focusing on consequences. Sure, the fact that humans would create them — and not just humans, but humans at the behest of a tiny segment of the human population, one that has shown repeated disdain for human and other life — that's a factor. So yes, in part it is the combination of our flaws and hubris that makes me very skeptical about what we would create. Then there are all the possible errors, and then what I focused on: having a lot of entities with great power and no empathy.
Generally with humans you have to teach them NOT to feel empathy, usually via some ideology that classifies other groups as non-human, evil, or sub-human, which then justifies the coming violence. With robots there is no need to override the empathy and caution, because they are non-existent to begin with.
I don't see why you take fear and concern about robots and AIs as some kind of approval of all human behavior. That's a false dilemma. One can be skeptical of human behavior and still have tremendous concern about what we would create.