
[2310.02567] Improving Automatic VQA Evaluation Using Large Language Models



By Oscar Mañas and 2 other authors


Abstract: Eight years after the visual question answering (VQA) task was proposed, accuracy remains the primary metric for automatic evaluation. VQA Accuracy has been effective so far in the IID evaluation setting. However, our community is undergoing a shift towards open-ended generative models and OOD evaluation. In this new paradigm, the existing VQA Accuracy metric is overly stringent and underestimates the performance of VQA systems. Thus, there is a need to develop more robust automatic VQA metrics that can serve as a proxy for human judgment. In this work, we propose to leverage the in-context learning capabilities of instruction-tuned large language models (LLMs) to build a better VQA metric. We formulate VQA evaluation as an answer-rating task where the LLM is instructed to score the accuracy of a candidate answer given a set of reference answers. We demonstrate that the proposed metric correlates better with human judgment than existing metrics across several VQA models and benchmarks. We hope that wide adoption of our metric will contribute to better estimating research progress on the VQA task. We plan to release the evaluation code and collected human judgments.
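To make the answer-rating formulation concrete, here is a minimal Python sketch of how an instruction-tuned LLM could be prompted to score a candidate VQA answer against reference answers. The prompt wording, the 1-5 rating scale, and the `query_llm` callable are assumptions for illustration only, not the paper's actual prompt or implementation.

```python
# Hypothetical sketch of LLM-based VQA answer rating (not the paper's exact method).
# `query_llm` stands in for any call to an instruction-tuned LLM that returns text.

def build_rating_prompt(question, references, candidate):
    refs = "\n".join(f"- {r}" for r in references)
    return (
        "You are rating an answer to a visual question.\n"
        f"Question: {question}\n"
        f"Reference answers:\n{refs}\n"
        f"Candidate answer: {candidate}\n"
        "On a scale of 1 (incorrect) to 5 (fully correct), how accurate is the "
        "candidate answer? Reply with a single digit."
    )

def rate_answer(question, references, candidate, query_llm):
    prompt = build_rating_prompt(question, references, candidate)
    reply = query_llm(prompt)          # e.g., one in-context call to the LLM
    score = int(reply.strip()[0])      # parse the leading digit as the rating
    return (score - 1) / 4             # normalize the 1-5 rating to [0, 1]
```

For example, `rate_answer("What color is the bus?", ["yellow", "yellow bus"], "it is yellow", my_llm)` would return a normalized score near 1.0 if the LLM judges the candidate answer correct.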

Submission history

From: Oscar Mañas [view email]
[v1] Wed, 4 Oct 2023 03:59:57 UTC (3,109 KB)
[v2] Wed, 10 Jan 2024 17:00:05 UTC (3,119 KB)

