Humean, I am beginning to feel very much as if this is the coming scene. Almost simultaneously, I wrote a piece in the off-topic forum, 'Daily Journal', about the introduction in Japan of robots actually placed on homebound students' desks, learning through that kind of interaction and memory. At nearly the same time, you brought in the superintelligence article. Uncanny and strange. The article, however, brings to light concerns about the negative aspects of robotics, standing in contrast to their usefulness, and even to the technically feasible empathy of the Japanese robotic student being used in Japanese classrooms.
PS: I checked the times of the postings. Yours was posted prior to mine, but I had no way of knowing about it; therefore, if there is a connection (which to me seems to be the case), it is not as if the two blogs were totally unrelated.
The relationship between a developing, benign system (my 'good' robot) and the article's concern with the risk of developing metastatically dubious super-intelligent systems, as measured by the two systems' different rates of change, may parallel a coincidental divergence of error as risk. If the question is asked at all of concurrent risk management between two different systems, as a new basis to be developed, using extra differential systems to intervene between them and diminish the risk, can coincidences such as my posting a very similar piece be some indication of the risks of assuming traditional logic formats in the future?
If so, probability-function data should very much be focused upon as legitimate correlates of such variables in the risk management of super-intelligence.
Just a thought, but traditional hardware may reach limits that make it impossible not to involve concepts of simultaneity in the equation. (The example here is very crude: your blog predated mine by twenty minutes, but then again I had not read your blog prior to writing mine.)