
[2309.01431] Benchmarking Large Language Models in Retrieval-Augmented Generation



By Jiawei Chen and 3 other authors

Abstract: Retrieval-Augmented Generation (RAG) is a promising approach for mitigating the hallucination of large language models (LLMs). However, existing research lacks rigorous evaluation of the impact of retrieval-augmented generation on different large language models, which makes it challenging to identify the potential bottlenecks in the capabilities of RAG for different LLMs. In this paper, we systematically investigate the impact of Retrieval-Augmented Generation on large language models. We analyze the performance of different large language models on four fundamental abilities required for RAG: noise robustness, negative rejection, information integration, and counterfactual robustness. To this end, we establish the Retrieval-Augmented Generation Benchmark (RGB), a new corpus for RAG evaluation in both English and Chinese. RGB divides its instances into four separate testbeds according to which of these fundamental abilities is required to resolve each case. We then evaluate six representative LLMs on RGB to diagnose the challenges current LLMs face when applying RAG. The evaluation reveals that while LLMs exhibit a certain degree of noise robustness, they still struggle significantly with negative rejection, information integration, and dealing with false information. These results indicate that there is still a considerable journey ahead before RAG can be applied effectively to LLMs.
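To make the testbed design concrete, the sketch below shows one way such an evaluation loop could look for the noise-robustness setting: relevant and irrelevant retrieved documents are mixed at a fixed noise ratio, the model answers from that context, and accuracy is scored by matching against gold answers. This is a minimal illustration, not RGB's actual harness; the instance schema (`query`, `answers`, `positive_docs`, `negative_docs`), the `generate` stub, and the string-match scoring rule are all assumptions introduced here for the example.

```python
# Minimal sketch of a noise-robustness evaluation loop in the spirit of RGB.
# NOTE: the instance schema, noise-ratio knob, and `generate` stub are
# hypothetical illustrations, not the paper's actual data format or API.
import random

def generate(prompt: str) -> str:
    """Stand-in for an LLM call; replace with a real model client."""
    return "I cannot answer the question given the provided documents."

def build_prompt(query: str, docs: list[str]) -> str:
    """Assemble a RAG prompt from a query and retrieved documents."""
    context = "\n".join(f"Document {i + 1}: {d}" for i, d in enumerate(docs))
    return (
        "Answer the question based only on the documents below. "
        "If they do not contain the answer, say so.\n\n"
        f"{context}\n\nQuestion: {query}\nAnswer:"
    )

def eval_noise_robustness(instances, noise_ratio=0.4, docs_per_query=5, seed=0):
    """Mix relevant and noisy documents at a fixed ratio, then measure accuracy."""
    rng = random.Random(seed)
    correct = 0
    for inst in instances:
        n_noise = round(docs_per_query * noise_ratio)
        docs = (rng.sample(inst["positive_docs"], docs_per_query - n_noise)
                + rng.sample(inst["negative_docs"], n_noise))
        rng.shuffle(docs)  # so position does not reveal which docs are noise
        answer = generate(build_prompt(inst["query"], docs))
        # Crude scoring: count the instance correct if any gold answer appears.
        correct += any(a.lower() in answer.lower() for a in inst["answers"])
    return correct / len(instances)
```

The other testbeds would vary the retrieved set rather than the loop: negative rejection supplies only irrelevant documents and checks for an explicit refusal, information integration requires combining evidence from multiple documents, and counterfactual robustness plants false statements in otherwise relevant documents.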

Submission history

From: Jiawei Chen
[v1] Mon, 4 Sep 2023 08:28:44 UTC (3,258 KB)
[v2] Wed, 20 Dec 2023 11:54:11 UTC (1,827 KB)


