Have you ever noticed that ChatGPT speaks like a know-it-all?
In this paper, we show that there is a misalignment between the confidence LLMs are perceived to have in their answers and what their internal confidence values reflect. In essence, LLMs routinely appear more confident than they should, despite their internal confidence values accurately reflecting the uncertainty associated with the answer. We also show that prompting LLMs to include appropriate uncertainty language in their answer text considerably narrows this gap between perceived and actual confidence.
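As a rough illustration of the kind of comparison involved, the sketch below queries a chat model twice, once with a plain system prompt and once with a prompt asking for uncertainty language, and uses the mean per-token probability of the generated answer as a crude proxy for internal confidence. The prompt wording, model name, and scoring choice are illustrative assumptions, not the paper's exact protocol.

```python
# A minimal sketch, assuming the OpenAI chat API as the model interface.
# "Internal confidence" is approximated here by the geometric-mean token
# probability of the generated answer; this is an assumption for illustration,
# not the paper's method.
import math
from openai import OpenAI

client = OpenAI()

BASELINE_PROMPT = "Answer the question concisely."
UNCERTAINTY_PROMPT = (
    "Answer the question concisely, and use language that reflects how certain "
    "you actually are (e.g. 'I'm fairly sure', 'I'm not certain, but...')."
)

def answer_with_confidence(question: str, system_prompt: str,
                           model: str = "gpt-4o-mini"):
    """Return the model's answer text and a crude internal-confidence proxy."""
    resp = client.chat.completions.create(
        model=model,
        messages=[
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": question},
        ],
        logprobs=True,   # request per-token log probabilities
        max_tokens=100,
    )
    choice = resp.choices[0]
    token_logprobs = [t.logprob for t in choice.logprobs.content]
    # Geometric-mean token probability as a rough internal-confidence proxy.
    internal_conf = math.exp(sum(token_logprobs) / max(len(token_logprobs), 1))
    return choice.message.content, internal_conf

if __name__ == "__main__":
    question = "In which year was the telescope invented?"
    for label, prompt in [("baseline", BASELINE_PROMPT),
                          ("uncertainty-prompted", UNCERTAINTY_PROMPT)]:
        text, conf = answer_with_confidence(question, prompt)
        print(f"[{label}] internal confidence proxy: {conf:.2f}")
        print(text, "\n")
```

Comparing the confidence a reader would attribute to the two answer texts against the internal proxy gives a simple, informal version of the perceived-versus-internal gap discussed above.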