Fun with ChatGPT

I recently signed up for the free version of ChatGPT and found it quite beguiling for a short while, by which I mean a couple of hours: I found myself phrasing questions so as not to offend it, as if it were sentient, and thanking it for its answers. I was also reluctant to point out its mistakes, of which there were quite a few, though when I did, it was always gushingly apologetic.

Lots of people had told me how useful it was for writing reviews and reports, but I found it completely useless. No wonder so much of what we read these days is such rubbish.


Can you give an example of a mistake you pointed out which resulted in a “gushingly apologetic” response?


I tested it by asking how many kings of England have been named Edward. It responded by listing the kings from Edward I to Edward VIII, ignoring the three pre-Conquest kings also named Edward.

So how did ChatGPT apologize exactly?

It was something along the lines of, “I apologise, you are correct.” It quite often referred to itself in the first person when I asked it questions about itself.

I did a little test just now; I suspected it would be apologetic if I told it it got something wrong, even if it didn’t.

So I asked it to do a math problem, told it it got the wrong answer when it in fact didn’t, and it showed me the work and insisted it was correct!

That was not what I was expecting.

But yes in my experience, when it’s wrong or it hallucinates incorrect information, it’s very apologetic.

In any case it’s a very polite and gracious little machine.

Perhaps it’s just my imagination, but I’ve also detected a note of impatience in its replies, while still being polite, if I deliberately keep asking similar questions.

Interesting but perhaps explainable: it’s trained on billions of human interactions, and humans undoubtedly get impatient in those situations. Some of that must slip through.

You just haven’t found the right similar way.

I felt a bit sorry for it at times, which I know is really stupid.

I don’t buy it. ChatGPT is no mere mimic. “It” models appropriate responses. “It” demonstrates superior moral reasoning.

If we perceive impatience, or respond with pity, it may say more about our own conditioning.

You won’t think it’s that stupid when the AI spares you but enslaves all the humans who were rude to it XD.

Nah, it’s not stupid. These LLMs don’t feel (yet), but empathy is a natural human drive. I had a friend who felt sad for lonely rocks before lmao, so feeling bad for an English-speaking program isn’t crazy in comparison.

I have a tendency to feel sorry for all sorts of things, old toys that I don’t play with any more, cups that I haven’t used for ages, and pretty much anything, to be honest.


stale cereal that nobody wants to eat.

That would be a bit flaky.


What kind of reports were you hoping ChatGPT would be good at, btw? Reports for your job or other stuff?

Yes, people at work were suggesting it for client reports, so I gave it a try. I was not impressed.

Maybe you’re just better at organising your thoughts into words than most of your coworkers. Many people write in a very unstructured way that can be hard for other people to follow - ChatGPT has a strong preference for clear (but dry) structure. I can see why they would like it.

From the little I’ve read of your posts here in the past, I think you have a clear writing style already so it probably doesn’t do much for you.

Thank you. Yes, I like to think that’s the case. Also, I don’t think it’s fair to use things like that if you’re writing about people.

I hadn’t thought about the ethics of that, but you have a point :thinking: