I’ve recently read a couple of ‘accusations’ that someone is an AI or bot.
Von River proposed these criteria: brevity, randomness, disconnectedness, abruptness.
I want to raise the issue of what would give away an AI.
I would think randomness would not be one of the criteria; I think it can be avoided. Brief posts would generally be safer, I think: less chance to be disconnected, make incorrect leaps, or show cultural gaps that seem odd. If I were trying to slip a bot into a philosophy forum, I would shoot for brief posts and also program it to ask a lot of questions. I suppose I would add in a ditz factor: have the AI go off on tangents. I would want them connected to the topic or to a phrase, but this might cover over any errors elsewhere. (I suppose this could be what VR means by randomness, but I would avoid having random OPs.)
The other poster who was accused of being a bot wrote long posts with a lot of spiritual information. The language was choppy, the grammar poor, the semantics odd, and the sentences very dense at times. They seemed to presume a lot of knowledge of terms and specific in-cult associations. I think that would be a bad choice for an AI team, though perhaps this means they fooled me.
What are your criteria/warning signals?
Some of mine: equivocations; overliteralism; problems with slang and idioms; too many questions built out of phrases lifted from other posters; lack of paragraph structure or argument structure, in other words lots of somewhat on-target ideas but no real architecture (of course this would apply to most essay writers); no sense of humor, or a sense of humor about the wrong things, such as reacting as if something was a joke when it wasn't.
And as an aside: who posting at ILP is least likely to be an AI? (Jokes about how _______________ might be artificial but is not intelligent can be set aside.)