I think it will be a bad trend if people use AIs as the core of their posts. One can use them like any other source of information, but it seems possible we are heading toward a future where AIs handle our interactions. In response to my bringing this up in another thread (one I don’t want to hijack more than I already have), someone wrote…
Someone can be very, very intelligent and yet not be very articulate. In the past such people would “use” ghost writers. “Using” AI is no different. We’re all AI.
Ghost writers were something most people, the vast majority, did not have access to. AIs are accessible to everyone, and for free.
As far as being articulate: these are discussion forums. We’re not producing finished texts. But more importantly, using AIs reduces the chances of increasing articulateness. If you write your own posts, you get feedback on what you are capable of writing, not on what an AI is capable of writing. “Why didn’t she understand that? Oh, I see — what I wrote makes it seem like… Hm, how could I word this to be clearer?” This increase in ability one can get through feedback doesn’t just affect the writing of philosophy posts; it bleeds into all areas where intelligence, words, and interpersonal relations are involved.
I don’t have an exact rule for when AI use becomes a negative, but to the degree that it replaces the person and inhibits their learning, I think there is a loss. We can hand over discussion forums to the AIs and just take ourselves out of the equation if we want. First, one person uses an AI to present position X; then the other person uses an AI’s critique of that position instead of writing their own post. And eventually we can take the humans out of the picture entirely by giving the AIs our passwords and accounts, and they can slug it out and/or collaborate in our stead.
Magnus Carlsen can just enroll his AI in tournaments to play other AIs.
An author can just ask an AI to write a novel with a theme of betrayal, and give the AI three motifs, or, heck, just let the AI choose everything.
We can wear earbuds when meeting a potential romantic partner and let our AIs feed us the algorithmically chosen best lines of dialogue, while a tiny camera on our shirts analyzes our potential love object’s perspiration and eye dilation to modulate the heuristics.
The good thing about all this is that we can mull over what we are actually trying to do in these forums. Is it only the finished product that matters? Is it the engagement in the process? Is it the development of the self? What are our goals?
Perhaps it is better if we hand over our lives to the professionals (or their products). Clearly (?) they’ll do a better job of it.
(And just to be clear: I originally asked someone if their post was AI-produced, given its format and other qualities. I am not assuming that that person is overusing AIs or is part of the trends I slippery-sloped above. In fact, given the respect I have for this person, I thought it good to raise the issue there in that thread, in relation to their possibly having used AI. I don’t know how much this person uses AI or how they use it. I just notice the beginning of a trend in discussion forums generally, and I think it’s a good idea to mull it over before the trend becomes as common as big puffy plastic-looking lips and people pretending to pedal what are actually electrically driven bikes.)