We can point to things that behave in ways that can be described algorithmically. It’s a linguistic tool, not a reification. So…
Well, my suggestion is that an “algorithm” is not an identifiable, tangible process in the brain, but rather an idea about or a description of the processes that go on in the brain. “Algorithm” has a similar ontological status to “equation”. Would you find it plausible that consciousness might be caused by equations?
I find it plausible that it is caused by processes that can be described algorithmically. Or put into equations. Don’t you?
(http://www.nytimes.com/2016/05/16/opinion/consciousness-isnt-a-mystery-its-matter.html)
I find this odd because we know exactly what consciousness is — where by “consciousness” I mean what most people mean in this debate: experience of any kind whatever. It’s the most familiar thing there is, whether it’s experience of emotion, pain, understanding what someone is saying, seeing, hearing, touching, tasting or feeling. It is in fact the only thing in the universe whose ultimate intrinsic nature we can claim to know. It is utterly unmysterious.
We know exactly what our own consciousness is. We extrapolate that to other beings based on their similarity to us - I have a better idea of my brother’s experience of something than of a Kalahari bushman’s, and a better idea of that than of an orang-utan’s; better that than a crab’s; better that than a flatworm’s. I have to say that the consciousness of a crab is a bit of a mystery, but I assume crabs feel pain because they respond in ways that indicate it, and I assume an oak tree doesn’t because it doesn’t (although it responds physically to damage and heals).
Consciousness doesn’t necessarily mean human consciousness. I’m perfectly willing to grant that a computer won’t know what it is to be human - or vice versa.
Not when we know that it is acting that way because we set it up to look like it is conscious, which is the case with computers now. I genuinely find it a little scary that you can think like that. It’s like you’ve lost sight of what it means to be a conscious entity.
I’m not talking about computers now; we’re at the level of modelling basic invertebrates with a few dozen neurons. I’m trying to get to the root of your argument that “consciousness is consciousness, and computers just aren’t and can never be that.”
If consciousness isn’t the result of neural activity, what is it? It’s certainly easy to drastically modify consciousness by modifying neural activity, and to end it by ending that. It seems a reasonable proposition.
And if it is, why is organically-mediated information processing somehow different from electronically-mediated?
Computers are getting closer and closer to passing the Turing Test. Are you starting to think they should be ascribed some minimal rights now? Or is that going to happen all of a sudden on the day a computer fools the Turing Test Committee?
I think that would depend on how and why a computer manages to do so. If it’s by ELIZA-like language manipulation, then no.
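For readers unfamiliar with the reference: ELIZA produced replies by keyword pattern-matching and canned substitution, with no model of meaning at all. A minimal sketch of that style of rule, assuming illustrative patterns and responses of my own rather than Weizenbaum's originals:

```python
import re

# ELIZA-style responder: keyword rules plus pronoun reflection, no understanding.
# The rules below are illustrative, not Weizenbaum's actual script.
REFLECTIONS = {"i": "you", "me": "you", "my": "your", "am": "are",
               "you": "I", "your": "my"}

RULES = [
    (re.compile(r"i feel (.*)", re.I), "Why do you feel {0}?"),
    (re.compile(r"i am (.*)", re.I), "How long have you been {0}?"),
    (re.compile(r"mother", re.I), "Tell me more about your family."),
]

def reflect(fragment: str) -> str:
    """Swap first/second-person words so echoed text reads naturally."""
    return " ".join(REFLECTIONS.get(w.lower(), w) for w in fragment.split())

def respond(utterance: str) -> str:
    for pattern, template in RULES:
        match = pattern.search(utterance)
        if match:
            return template.format(*(reflect(g) for g in match.groups()))
    return "Please go on."  # default when no rule matches

print(respond("I feel trapped by my job"))
# → Why do you feel trapped by your job?
```

The point of the sketch is that the program never represents what "trapped" or "job" mean; it only echoes surface text, which is why passing a conversational test this way would say nothing about consciousness.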
Ah yes: but that knowledge is backed up by the lived experience of others whose reports you are able to place in the context of your own experience. If their lived experience is too far removed from your own, you won’t be able to understand their reports. So for example if you are red/green colourblind you will be able to understand that red and green are colours without being able to understand how to distinguish between them. If you were completely blind you wouldn’t be able to understand colour properly at all.
As Wittgenstein said, “if a lion could talk, we could not understand him.” I’m still willing to grant lions the benefit of the doubt and not torture them for fun, though.