Hi Only,
An algorithm can’t think. Thought is an aspect of consciousness, and an algorithm isn’t the kind of thing that could be conscious; for one thing, an algorithm is itself a thought, a concept in our minds. Computers like the ones we are using now should be considered, in this context, as representations of algorithms, representations of our concepts. The same applies to the computers/programs in the article you kindly linked to.
There really isn’t any question about where we draw the legal and moral lines, and the computer programs described with such breathless enthusiasm and naivety in the New Scientist article do nothing to change that.
Computers will never have the moral or legal status of humans because they can’t feel, see, hear, think, or understand. They aren’t conscious. And they haven’t become any more conscious as a result of the programming described in the article.
Computers can’t do human-style translation because they can’t feel, see, hear, think, or understand. To take a specific example, you can’t understand what “good” means in the appropriate way unless you can feel things like toothache, orgasm, disappointment, joy. This applies throughout human language, because language is essentially based on experience.
The article concludes with this quote:
[i]“To match this human ability, we have to find a way to teach computers some basic world knowledge, as well as knowledge about the specific area of translation, and how to use this knowledge to interpret the text to be translated.”[/i]
The speaker here knows there’s a problem, but he doesn’t realise that it is an insurmountable barrier to human-level translation by computer. The basic world knowledge he is talking about can only be gained through experience: through consciousness, feeling, seeing, hearing. And that is something a computer can never have.
I find it fascinating but also a little scary that so many people seem to believe that computers are moving towards consciousness, and that they might have legal and moral responsibilities.