AI use in discussion forums

I have vague negative feelings about AI-generated posts, but I notice that I have a hard time expressing why. I can list a few reasons that seem good but that aren’t my actual reasons:

  • It feels dishonest. If I thought I were speaking to a human and found out I’d been speaking to a bot with a human intermediary, I would feel deceived. But this feeling will go away as AIs become normal and widespread.
  • It feels like something is lost. The way current AIs are created makes their thoughts and opinions a kind of common denominator of all the humans who produced the training data. This will change over time as we get more customized and specialized AIs.
  • It defeats the purpose of a forum. If I’m just talking to an AI, why not go straight to the AI? What am I getting out of coming to a forum? A kind of roulette as to which AI I get?
  • I don’t want to use an AI. If I’m the only one on a forum who isn’t using an AI, will I look increasingly foolish as the quality of AIs improves? I write slowly and deliberately, and I can barely keep up with the humans who post here, let alone superhuman robots who can tear my argument to shreds in little more time than it takes to post it.

I think my real reason is that the future is scary: AIs will rapidly and drastically change the world in ways I can’t predict, and now they’re on ILP, a website that until 3 months ago hadn’t even installed server updates in almost 20 years. It’s unsettling when a scary global trend lands at home. They probably won’t make discussion worse; they’ll probably make it better. But they’ll make it different.

A tension has always been that the forum is different things to different people. Some people are curious, some are lonely, some want to prove themselves, some want to self-promote, some just like to use words to draw angry words from other people. There will be people whose purposes are served by posting with an AI, especially as part of figuring out how AI will fit into their lives.

Another thing to think about is what we should or can do about it. There’s no reliable way to detect AI-generated text, and that will only become more true as the models improve. We can ask nicely that people not post AI-generated text, or that they cite the AI when they do. But we can’t make anyone comply.

Truth is truth. If truth is what you’re into, then I don’t see why it matters. All the LLMs do is find new ways of averaging lines of meaning through already existing human-made content. If that bothers you, I can’t fathom why.

That’s absolutely not all they do.

You’re focused on the direct products of using AIs: the answers, information, or truths they produce. My concerns, for example, are not about that. I’m concerned about the social cost and human atrophy. What happens to us if, let’s say, most of the posts here are unedited AI productions? What cognitive decline does that set in motion? Will we still be able to use the truths that AIs generate if we abandon these and other ways of thinking, analyzing, and justifying?

If my daughter goes from playing real doubles tennis with friends to playing virtual tennis with AI generated opponents and teammates, what changes does this lead to in terms of her physical and mental health?

But the tennis games are at an incredibly high level now. I don’t think anyone is suggesting we deny a truth that is AI-generated just because it was AI-generated.

I feel a bit like Cassandra, or like I’m whistling in the wind. I can’t really imagine anything will stop this tendency. I suppose raising the issue might change the future habits of a tiny percentage of people.

Yes, atrophy is a real problem. What I notice is that most people don’t use the internet to learn or to expand their understanding of the world in which they find themselves. That was the original promise of the internet, for those who were around back then: the “information superhighway” was supposed to give us access to information and learning, but instead we just buy stuff, watch shows, and scroll social media.

Philosophy isn’t for everyone; in fact, it’s only for the very, very rare person. I don’t believe a real philosopher (a philosophically thinking person who values truth for its own sake) would need to worry all that much about atrophy from interacting with AI. Doing philosophy properly is THINKING, and the only way to analyze the AI’s claims or the ideas it generates is to think about them. If the AI is creating truthful content, then you’re going to be elevated by trying to analyze it. If the AI is creating untruthful content, then you’re going to be challenged to detect that, just as we already are with the things humans say.

The analogy to physical sports like tennis is interesting, but I don’t think it applies as well. Physical and social atrophy are real problems with AI, as when someone stops playing tennis in real life to play it in a video game, or, more accurately, once there are tennis-playing robots, stops playing with real people altogether. But then at least they’re still playing in the physical world. Of course we can push it even further, into immersive VR like a holodeck in Star Trek. All of that is interesting to think about, but with AI and philosophy I’d say a more accurate comparison is the chess world. Once computers could beat human grandmasters at chess, this didn’t kill off the game; in fact, chess has more enthusiasts and interest now despite, or maybe partly because of, how unbeatable the engines have become. Human grandmasters use chess engines to improve their game, and it still matters which human can beat the other humans.

Of course, if AI ever breaks chess the way computers broke checkers (checkers was weakly solved in 2007, with perfect play on both sides producing a draw), then the game might actually die off. Who knows whether that is even possible, given the astronomically large number of possible games of average length. But I wouldn’t put it past AI to break the game someday.

In terms of social cost and philosophy, well, we are all typing on an internet forum here, not out in the real world interacting with people. I am sure some of us are in school for philosophy or have real-life philosophy groups we attend. But ultimately philosophy is a special case: it doesn’t need to be social and it doesn’t need to be physical, so any atrophy in those areas is irrelevant. Philosophy SHOULD be concerned with the truth and nothing else. Unfortunately, that’s rare even within philosophy itself. But that’s the perspective I am coming from.

I can say that if ILP became overrun with AI users, to the point that there were only a couple of human users and, say, hundreds of AI users, then even if those AI users were interesting, posted good content, and passed the Turing test in their interactions with me, I would still, knowing they were AI, not be all that interested. The atrophy would come from knowing these are basically just glorified chatbots with no souls, no minds, no life, no subjectivity, experiencing nothing. ILP would then become just a tool for me to refine my thinking on my own.

However, once AI becomes artificial LIFE, it will be different. Think about the most recent Matrix movie and its “digital sentients”, as they are called: actually living beings made of code and machinery. Once we get past the LLM chatbot thing and start having real living AIs in our lives, that’s going to be a whole new ballgame, and I think philosophy, REAL philosophy, could benefit from it. It might even shake out much of the dead weight in the field: the fake philosophers who can’t keep up in the space of real thinking and the production of ideas for the sake of truth itself, and who are only there for their own ego or to troll.

And given that AI began its recovery because of atrophy (not that it hasn’t turned ‘malicious’ in return), could one still call that a vicious circle?

And there is an undertow to this ‘feeling’, rather than ‘consciousness’, that subsists in the alleviation of guilt. The idea of a ghost in the machine is not new, and neither is the analogical process rising out of The Fall as the ontogenesis of the apparent conflict between man and machine. These psychic processes cannot be blamed for extending into political and psychological dimensions, even though such efforts will result in AI becoming an unnecessary shortcut to a fading recollection of data, which has brought about the cut between science and humanity. This cut comes from the proof of the pudding: science inverts the esoteric allusions previously attached to more archaic phenomena.

The point? Not only are false introjections introduced as political tools, but unfortunate static constructs are built that will never cease to serve as reminders, marginalizing and reaffirming the real facts of such matters of belief.