I think it is more likely to be apologetic on fuzzier topics, philosophy for example, even when the response it puts forward is as reasonable as mine.
Just for a giggle I asked it about this, and interestingly it said that when we were discussing mathematics it would be less likely to apologize and would simply acknowledge its mistake, but in philosophical discussions, given the variety of interpretations, it would be more likely to apologize because…
“Apologizing in this context might be more about maintaining respectful dialogue and acknowledging the complexity of the discussion, rather than admitting to a straightforward mistake.”
I can’t know if this is actually how it is designed, but it seems like a possible built-in heuristic.
Humans, on the other hand, can get testy about math, or at least statistics; the Gambler’s Fallacy is the classic example.
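If you want to see why the fallacy is a fallacy, here is a minimal Python sketch (entirely my own, purely illustrative) that simulates fair coin flips and measures how often heads follows a run of five heads. Independence means the streak should change nothing, and the estimate comes out around 0.5.

```python
import random

# Monte Carlo check of the Gambler's Fallacy: a streak of heads
# does NOT make tails "due", because fair-coin flips have no memory.

random.seed(42)

STREAK = 5          # length of the heads streak we condition on
TRIALS = 1_000_000  # number of simulated flips

flips = [random.random() < 0.5 for _ in range(TRIALS)]  # True = heads

after_streak = 0    # flips that immediately follow STREAK heads in a row
heads_after = 0     # how many of those follow-up flips were heads

for i in range(STREAK, len(flips)):
    if all(flips[i - STREAK:i]):   # the previous STREAK flips were all heads
        after_streak += 1
        heads_after += flips[i]

print(f"P(heads | {STREAK} heads in a row) ~ {heads_after / after_streak:.3f}")
# Prints roughly 0.5: the streak has no influence on the next flip.
```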
Interestingly, ChatGPT is often quite stubborn about mathematics, but on other topics it seems far more willing to concede failure.
ChatGPT doesn’t eat, and it doesn’t go to the toilet. That says everything you need to know, but to comment further:
If ChatGPT incited a human to commit a crime and the human obeyed, the human would go to prison, not ChatGPT.
When intoxicated, a person might even think a row of houses (yes, houses) is trying to impress them, but clearly that’s pure illusion. We read too much into ChatGPT. AI is really just people, specifically the words of people plucked from the WWW and spun into new sentences. Useful for what it is, but not at all sentient.