
PROXYQA: An Alternative Framework for Evaluating Long-Form Text Generation with Large Language Models



Authors: Haochen Tan and 10 other authors

Abstract: Large Language Models (LLMs) have exhibited remarkable success in long-form context comprehension tasks. However, their capacity to generate long-form content, such as reports and articles, remains insufficiently explored. Current benchmarks do not adequately assess LLMs' ability to produce informative and comprehensive content, necessitating a more rigorous evaluation approach. In this study, we introduce ProxyQA, a framework for evaluating long-form text generation, comprising in-depth human-curated meta-questions spanning various domains. Each meta-question is paired with corresponding proxy-questions and annotated answers. LLMs are prompted to generate extensive content in response to these meta-questions. ProxyQA then evaluates the quality of the generated content by supplying it to an evaluator as background context and measuring the evaluator's performance in answering the proxy-questions. We examine multiple LLMs, emphasizing ProxyQA's demanding nature as a high-quality assessment tool. Human evaluation demonstrates that evaluating through proxy-questions is a highly self-consistent validation method that correlates well with human criteria. The dataset and leaderboard will be available at this https URL.
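
To make the evaluation protocol concrete, the following is a minimal sketch of a ProxyQA-style scoring loop as described in the abstract. The names and data structures here (ProxyQuestion, MetaQuestion, generate_fn, evaluator_fn, judge_answer) are illustrative assumptions, not the authors' released code: the LLM under test writes long-form content for a meta-question, an evaluator then answers each proxy-question with that content as context, and the score is the fraction of proxy-questions answered correctly.

```python
# Illustrative sketch of a ProxyQA-style evaluation loop (not the official code).
from dataclasses import dataclass, field
from typing import Callable, List


@dataclass
class ProxyQuestion:
    question: str
    reference_answer: str  # human-annotated answer from the benchmark


@dataclass
class MetaQuestion:
    prompt: str  # the meta-question posed to the LLM under test
    proxy_questions: List[ProxyQuestion] = field(default_factory=list)


def judge_answer(predicted: str, reference: str) -> bool:
    """Crude string-containment check; a real setup would use a stronger judge."""
    return reference.lower() in predicted.lower()


def evaluate_meta_question(
    meta: MetaQuestion,
    generate_fn: Callable[[str], str],        # LLM under test: meta-question -> long-form text
    evaluator_fn: Callable[[str, str], str],  # evaluator: (context, proxy-question) -> answer
) -> float:
    """Score one meta-question as the fraction of proxy-questions the evaluator
    answers correctly when given the generated text as background context."""
    generated = generate_fn(meta.prompt)
    correct = sum(
        judge_answer(evaluator_fn(generated, pq.question), pq.reference_answer)
        for pq in meta.proxy_questions
    )
    return correct / max(len(meta.proxy_questions), 1)
```

Under this reading, the generated text is never scored directly; its informativeness is measured only indirectly through the evaluator's proxy-question accuracy.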

Submission history

From: Haochen Tan
[v1] Fri, 26 Jan 2024 18:12:25 UTC (7,869 KB)
[v2] Mon, 12 Feb 2024 03:32:43 UTC (8,091 KB)
[v3] Tue, 13 Feb 2024 13:24:49 UTC (8,091 KB)


