
The Calibration Gap between Model and Human Confidence in Large Language Models (arXiv:2401.13835v1 [cs.LG])



For large language models (LLMs) to be trusted by humans, they need to be well-calibrated: they must be able to accurately assess, and communicate, how likely it is that their predictions are correct.
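A standard way to quantify this notion of calibration is the expected calibration error (ECE), which compares a model's stated confidence with its actual accuracy across confidence bins. The sketch below is illustrative only; the function name, binning scheme, and toy data are assumptions, not details from the paper.

```python
def expected_calibration_error(confidences, correct, n_bins=10):
    """Average |accuracy - confidence| over equal-width confidence bins,
    weighted by the fraction of predictions falling in each bin."""
    n = len(confidences)
    ece = 0.0
    for b in range(n_bins):
        lo, hi = b / n_bins, (b + 1) / n_bins
        # Half-open bins (lo, hi]; the first bin also catches confidence 0.0.
        in_bin = [i for i, c in enumerate(confidences)
                  if lo < c <= hi or (b == 0 and c == 0.0)]
        if not in_bin:
            continue
        avg_conf = sum(confidences[i] for i in in_bin) / len(in_bin)
        accuracy = sum(correct[i] for i in in_bin) / len(in_bin)
        ece += (len(in_bin) / n) * abs(accuracy - avg_conf)
    return ece

# Toy example (hypothetical data): a perfectly calibrated predictor scores 0;
# here the model is overconfident in one bin and underconfident in the other.
confs = [0.9, 0.9, 0.6, 0.6, 0.6]
right = [1, 1, 1, 0, 1]
print(round(expected_calibration_error(confs, right), 3))
```

A well-calibrated model would, for example, be correct about 90% of the time on the predictions it assigns 0.9 confidence to; the calibration gap in the paper's title refers to mismatches between such model-reported confidence and the confidence humans place in the model.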



