
[2310.14703] Establishing Vocabulary Tests as a Benchmark for Evaluating Large Language Models



By Gonzalo Martínez and 6 other authors

Abstract: Vocabulary tests, once a cornerstone of language modeling evaluation, have been largely overlooked in the current landscape of Large Language Models (LLMs) like Llama, Mistral, and GPT. While most LLM evaluation benchmarks focus on specific tasks or domain-specific knowledge, they often neglect the fundamental linguistic aspects of language understanding and production. In this paper, we advocate for the revival of vocabulary tests as a valuable tool for assessing LLM performance. We evaluate seven LLMs using two vocabulary test formats across two languages and uncover surprising gaps in their lexical knowledge. These findings shed light on the intricacies of LLM word representations, their learning mechanisms, and performance variations across models and languages. Moreover, the ability to automatically generate and perform vocabulary tests offers new opportunities to expand the approach and provide a more complete picture of LLMs' language skills.
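To illustrate what administering a vocabulary test to an LLM might look like, here is a minimal sketch of a multiple-choice, definition-matching evaluation loop. The abstract does not specify the two test formats used, so the item format, prompts, and the `query_llm` function below are illustrative assumptions, not the paper's method; `query_llm` is a hypothetical stand-in for any chat-completion call.

```python
from dataclasses import dataclass

@dataclass
class VocabItem:
    word: str
    options: list[str]  # candidate definitions, exactly one correct
    answer: int         # index of the correct definition

# Tiny illustrative item bank (not from the paper).
ITEMS = [
    VocabItem("ephemeral",
              ["lasting a very short time", "extremely large",
               "related to the sea", "bitter in taste"], 0),
    VocabItem("gregarious",
              ["easily frightened", "fond of company",
               "slow-moving", "brightly colored"], 1),
]

def build_prompt(item: VocabItem) -> str:
    """Format one multiple-choice vocabulary question."""
    letters = "ABCD"
    lines = [f"Which option best defines the word '{item.word}'?"]
    lines += [f"{letters[i]}. {opt}" for i, opt in enumerate(item.options)]
    lines.append("Answer with a single letter.")
    return "\n".join(lines)

def query_llm(prompt: str) -> str:
    # Hypothetical stub: replace with a real model API call.
    return "A"

def score(items: list[VocabItem]) -> float:
    """Fraction of items where the model's letter matches the key."""
    correct = 0
    for item in items:
        reply = query_llm(build_prompt(item)).strip().upper()
        if reply[:1] == "ABCD"[item.answer]:
            correct += 1
    return correct / len(items)

if __name__ == "__main__":
    print(f"Accuracy: {score(ITEMS):.2f}")
```

Because items like these can be generated automatically from a dictionary, this kind of loop scales to large word lists and multiple languages, which is the opportunity the abstract points to.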

Submission history

From: Pedro Reviriego
[v1] Mon, 23 Oct 2023 08:45:12 UTC (220 KB)
[v2] Mon, 29 Jan 2024 09:26:36 UTC (607 KB)



