
Beyond Accuracy: Evaluating Self-Consistency of Code Large Language Models with IdentityChain

Authors: Marcus J. Min and 6 other authors
Abstract: Code Large Language Models (Code LLMs) are increasingly employed in real-life applications, so evaluating them is critical. While conventional accuracy evaluates the performance of Code LLMs on a set of individual tasks, their self-consistency across different tasks is overlooked. Intuitively, a trustworthy model should be self-consistent when generating natural language specifications for its own code and generating code for its own specifications. Failure to preserve self-consistency reveals a lack of understanding of the shared semantics underlying natural language and programming language, and therefore undermines the trustworthiness of a model. In this paper, we first formally define the self-consistency of Code LLMs and then design a framework, IdentityChain, which effectively and efficiently evaluates the self-consistency and conventional accuracy of a model at the same time. We study eleven Code LLMs and show that they fail to preserve self-consistency, which is indeed an aspect distinct from conventional accuracy. Furthermore, we show that IdentityChain can be used as a model-debugging tool to expose weaknesses of Code LLMs, demonstrating three major weaknesses that we identify in current models using IdentityChain. Our code is available at this https URL.
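The abstract describes an iterative chain: the model alternates between generating code from a specification and a specification from its own code, and self-consistency holds if program semantics are preserved across rounds. Below is a minimal Python sketch of such a loop, based only on the description above; the functions generate_code, generate_spec, and semantically_equal are hypothetical placeholders supplied by the caller, not the IdentityChain API.

```python
# Minimal sketch of a self-consistency evaluation loop in the spirit of the
# chain described in the abstract. All callables below are hypothetical
# placeholders, not part of the actual IdentityChain implementation.
from typing import Callable


def self_consistency_chain(
    nl_0: str,                                       # initial natural-language specification
    generate_code: Callable[[str], str],             # NL -> PL step (model under test)
    generate_spec: Callable[[str], str],             # PL -> NL step (same model)
    semantically_equal: Callable[[str, str], bool],  # e.g., both programs pass the same test inputs
    chain_length: int = 5,
) -> int:
    """Run the chain nl_0 -> pl_0 -> nl_1 -> pl_1 -> ... and return the number
    of round trips for which the regenerated program stays semantically
    equivalent to pl_0. A fully self-consistent model survives every round."""
    pl_0 = generate_code(nl_0)       # reference program from the original spec
    pl_prev = pl_0
    for i in range(1, chain_length + 1):
        nl_i = generate_spec(pl_prev)              # model re-describes its own program
        pl_i = generate_code(nl_i)                 # model re-implements its own description
        if not semantically_equal(pl_0, pl_i):     # semantic drift: the chain broke here
            return i - 1
        pl_prev = pl_i
    return chain_length
```

Conventional accuracy would additionally check pl_0 against a ground-truth test suite, whereas the self-consistency score above only compares the model's own outputs against each other across the chain, which is why the two metrics can diverge.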

Submission history

From: Marcus J. Min
[v1]
Sat, 21 Oct 2023 16:14:56 UTC (3,388 KB)
[v2]
Tue, 16 Jan 2024 14:03:10 UTC (3,388 KB)



