Sounding right while being wrong
AI chatbots are useful machines that are very good at sounding right while being wrong.
Is this only true for AI chatbots?
I don't think so. I remember one of my teachers telling us something true one day: "People who write textbooks make mistakes." So when you read a textbook and something seems wrong, maybe you're the one who's right.
Sounding correct while being wrong is a human trait. It is not something limited to AI.
The difference is that AI does it at a much larger scale. Faster. With more detail. With more confidence.
As with humans and textbooks, part of me would prefer an AI that is less good at sounding right and more often actually right.
Then I could be less vigilant and less suspicious while chatting with it.
Or maybe it is better like this. Like it has always been. You read, you doubt, you verify, and you form your own opinion.
What about you? How do you deal with systems that are mostly, but not always, correct?