
Don't Believe Everything You Read: Enhancing Summarization Interpretability through Automatic Identification of Hallucinations in Large Language Models



[Submitted on 22 Dec 2023]

Authors: Priyesh Vakharia and 6 other authors


Abstract: Large Language Models (LLMs) are adept at text manipulation tasks such as machine translation and text summarization. However, these models can also be prone to hallucination, which can be detrimental to the faithfulness of any answers they provide. Recent work on combating hallucinations in LLMs focuses on identifying hallucinated sentences and categorizing the different ways in which models hallucinate. This paper takes a deep dive into LLM behavior with respect to hallucinations, defines a token-level approach to identifying different kinds of hallucinations, and further utilizes this token-level tagging to improve the interpretability and faithfulness of LLMs in dialogue summarization tasks. Through this, the paper presents a new, enhanced dataset and a new training paradigm.
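To make the token-level tagging idea concrete, the sketch below shows what tagging a dialogue summary for hallucinated spans might look like. It is a minimal illustration only: the BIO-style tag set, the `bert-base-uncased` backbone, and the `tag_summary` helper are assumptions for this example, not the paper's actual labels, model, or code, and the paper distinguishes multiple hallucination kinds rather than a single HALLU tag.

```python
# Minimal sketch of token-level hallucination tagging (illustrative, not the
# paper's implementation). A token-classification model reads the (dialogue,
# summary) pair and labels each summary token as faithful or hallucinated.
import torch
from transformers import AutoTokenizer, AutoModelForTokenClassification

# Hypothetical tag set: O = faithful token, B/I-HALLU = start/inside of a
# hallucinated span. A finer-grained scheme would use one B/I pair per
# hallucination kind.
LABELS = ["O", "B-HALLU", "I-HALLU"]

CHECKPOINT = "bert-base-uncased"  # placeholder backbone
tokenizer = AutoTokenizer.from_pretrained(CHECKPOINT)
model = AutoModelForTokenClassification.from_pretrained(
    CHECKPOINT, num_labels=len(LABELS)
)

def tag_summary(dialogue: str, summary: str):
    """Tag each token of the summary against the source dialogue."""
    # Encode the pair so the model can condition on the source dialogue.
    enc = tokenizer(dialogue, summary, return_tensors="pt", truncation=True)
    with torch.no_grad():
        logits = model(**enc).logits  # shape: (1, seq_len, num_labels)
    preds = logits.argmax(dim=-1).squeeze(0).tolist()
    tokens = tokenizer.convert_ids_to_tokens(enc["input_ids"].squeeze(0))
    return [(tok, LABELS[p]) for tok, p in zip(tokens, preds)]

if __name__ == "__main__":
    # The summary states a time that contradicts the dialogue, so an
    # intrinsic-hallucination tagger should flag the "5pm" tokens.
    pairs = tag_summary(
        "A: The meeting is at 3pm. B: Okay, see you then.",
        "The meeting is at 5pm.",
    )
    for tok, label in pairs:
        print(f"{tok}\t{label}")
```

The classification head here is untrained, so the printed labels are meaningless until the model is fine-tuned on token-annotated (dialogue, summary) pairs such as the dataset the paper introduces.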

Submission history

From: Dhananjay Sonawane
[v1] Fri, 22 Dec 2023 00:31:46 UTC (1,735 KB)


