The three laws of robotics, invented by Isaac Asimov, are well known:
- Thou shalt never harm a human being, nor through inaction allow a human being to come to harm.
- Thou shalt carry out thine orders, without violating #1.
- Thou shalt preserve thyself, without violating #1 and #2.
Sensible as these look, it’s easy to find problem scenarios. For example, if a man is about to kill two men, and the robot has him in range, what does it do? Whatever it chooses, it violates the first law: harming the attacker breaks it directly, and standing by lets two humans come to harm through inaction.
I think the problem arises from the notion, implicit here, that human lives are infinitely valuable. The common-sense solution of killing the would-be killer can only be justified by assigning values to the lives of the three men, making the pair twice as valuable as the lone one. The lives of the men now have finite value; a human life has a price.
The improved version of the three laws can now be introduced. Instead of fallible yes/no clauses, you create a point system: the robot keeps a running total of points, and its ongoing task is to maximise that total.
Let a human life be worth 100 points, the robot’s own life 50, and the task of vacuum-cleaning the carpet 10; say the robot’s current total is 606. It’s clear that this regime encodes the desired three-law behaviour, and it also delivers the common-sense outcomes in the cases where the three laws break down.
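Here is a minimal sketch of how such a regime might be coded, in Python. The numbers are the ones above, but the function names and the action names are my own invention, not anything from Asimov or a real control system:

```python
# Illustrative point values (the numbers from the paragraph above).
POINTS = {
    "human_life":    100,
    "robot_self":     50,
    "vacuum_carpet":  10,
}

def expected_points(outcomes):
    """Expected change in the robot's running total for one action.

    `outcomes` is a list of (probability, point_change) pairs.
    """
    return sum(p * delta for p, delta in outcomes)

def choose(actions):
    """Pick the action whose expected point change is largest."""
    return max(actions, key=lambda name: expected_points(actions[name]))

# A human is in danger; saving them will destroy the robot and leave the
# carpet unvacuumed.  The totals still say: go.
actions = {
    "save_human":    [(1.0, POINTS["human_life"] - POINTS["robot_self"])],
    "keep_cleaning": [(1.0, POINTS["vacuum_carpet"])],
}
print(choose(actions))  # -> save_human (+50 beats +10)
```

The three-law priorities fall straight out of the relative sizes of the values: a human life outweighs the robot, and the robot outweighs the chores.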
Remember the scene in “I, Robot” where Will Smith’s character complains that the robot which saved his life should have tried to save the little girl instead? The robot gave him a 50% chance of survival and the girl 10%. With the point system this could have gone differently. If children were valued at ten times adults – 1000 and 100 points respectively – then the expected cost of abandoning the girl would have been 100 (0.1 * 1000), against 50 (0.5 * 100) for abandoning him, so the robot would have done what we see as the right thing.
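Continuing the sketch, and assuming that hypothetical 1000/100 split for children and adults, the film’s rescue comes out the other way:

```python
# Hypothetical values: a child's life worth ten times an adult's.
ADULT_LIFE = 100
CHILD_LIFE = 1000

# Expected points preserved by each rescue attempt (chance of success * value).
options = {
    "save_smith": 0.5 * ADULT_LIFE,   # 50
    "save_girl":  0.1 * CHILD_LIFE,   # 100
}

print(max(options, key=options.get))  # -> save_girl
```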
This last point opens up the real problem area: assigning different values to different people. In another thread [about abortion] I gave the example of the great mathematician who was planning to kill ten drug dealers. I’d assign maybe 150 points to the mathematician, and maybe 8 points to each dealer. A robot would obviously know what to do in this situation.
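To spell out the arithmetic with those numbers (and assuming, purely for the sketch, that the only way to stop the mathematician would cost him his life):

```python
MATHEMATICIAN = 150   # value assigned to the mathematician
DEALER        = 8     # value assigned to each dealer

# Points lost under each course of action.
losses = {
    "sacrifice_mathematician": MATHEMATICIAN,   # 150
    "let_dealers_die":         10 * DEALER,     # 80
}

print(min(losses, key=losses.get))  # -> let_dealers_die (80 < 150)
```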
I predict that even though many people will be uncomfortable with the points system, because it rattles their rosy notion of human equality, its superiority over the three-laws system will force it – or something very similar to it – to be used in the robots of the future.