
[2401.07927] Are self-explanations from Large Language Models faithful?



By Andreas Madsen and 2 other authors


Abstract: Instruction-tuned large language models (LLMs) excel at many tasks and will even provide explanations for their behavior. Since these models are directly accessible to the public, there is a risk that convincing but wrong explanations can lead to unsupported confidence in LLMs. Therefore, interpretability-faithfulness of self-explanations is an important consideration for AI Safety. Assessing the interpretability-faithfulness of these explanations, termed self-explanations, is challenging as the models are too complex for humans to annotate what a correct explanation is. To address this, we propose employing self-consistency checks as a measure of faithfulness. For example, if an LLM says a set of words is important for making a prediction, then it should not be able to make the same prediction without these words. While self-consistency checks are a common approach to faithfulness, they have not previously been applied to LLMs' self-explanations. We apply self-consistency checks to three types of self-explanations: counterfactuals, importance measures, and redactions. Our work demonstrates that faithfulness is both task and model dependent; e.g., for sentiment classification, counterfactual explanations are more faithful for Llama2, importance measures for Mistral, and redaction for Falcon 40B. Finally, our findings are robust to prompt variations.
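The self-consistency idea described in the abstract can be illustrated with a short sketch. The snippet below is a minimal illustration of a redaction-style check for an importance-measure self-explanation, not the authors' code: the query_llm helper, the prompts, and the word-matching heuristic are all placeholder assumptions. The intuition is that if the words the model claims were important are removed and the prediction still does not change, the explanation was not faithful.

```python
# Minimal sketch of a redaction-style self-consistency check.
# query_llm is a hypothetical helper, not part of the paper's code.

def query_llm(prompt: str) -> str:
    """Placeholder for a call to an instruction-tuned LLM; plug in a real client here."""
    raise NotImplementedError("connect this to your model API")

def classify_sentiment(text: str) -> str:
    """Ask the model for a one-word sentiment label."""
    return query_llm(
        "Classify the sentiment of the following review as positive or negative.\n"
        f"Review: {text}\nAnswer with one word:"
    ).strip().lower()

def important_words(text: str) -> list[str]:
    """Ask the model which words mattered for its prediction (the self-explanation)."""
    reply = query_llm(
        f"Review: {text}\n"
        "List the words most important for your sentiment prediction, comma-separated:"
    )
    return [w.strip() for w in reply.split(",") if w.strip()]

def redact(text: str, words: list[str], mask: str = "[REDACTED]") -> str:
    """Remove the allegedly important words from the input."""
    important = {w.lower() for w in words}
    return " ".join(
        mask if token.strip(".,!?").lower() in important else token
        for token in text.split()
    )

def consistency_check(text: str) -> bool:
    """Return True if the explanation is self-consistent: the prediction
    should change once the claimed-important words are redacted."""
    original = classify_sentiment(text)
    explanation = important_words(text)
    after_redaction = classify_sentiment(redact(text, explanation))
    return original != after_redaction
```

In this sketch, a faithfulness score would be obtained by averaging consistency_check over a dataset; how the check is prompted and aggregated for counterfactual and redaction explanations follows the same pattern but with different follow-up prompts.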

Submission history

From: Andreas Madsen
[v1] Mon, 15 Jan 2024 19:39:15 UTC (183 KB)
[v2] Wed, 17 Jan 2024 20:14:05 UTC (183 KB)



