Table Meets LLM: Can Large Language Models Understand Structured Table Data? A Benchmark and Empirical Study



By Yuan Sui and 4 other authors

Abstract: Large language models (LLMs) are becoming attractive as few-shot reasoners for solving natural language (NL) tasks. However, there is still much to learn about how well LLMs understand structured data, such as tables. Although tables can be serialized as input to LLMs, there is a lack of comprehensive studies examining whether LLMs can truly comprehend such data. In this paper, we investigate this question by designing a benchmark to evaluate the structural understanding capabilities (SUC) of LLMs. The benchmark comprises seven tasks, each with its own unique challenges, e.g., cell lookup, row retrieval, and size detection. We perform a series of evaluations on GPT-3.5 and GPT-4 and find that performance varies depending on several input choices, including table input format, content order, role prompting, and partition marks. Drawing on the insights gained from the benchmark evaluations, we propose self-augmentation for effective structural prompting, such as critical value / range identification using the internal knowledge of LLMs. When combined with carefully chosen input choices, these structural prompting methods lead to promising improvements in LLM performance on a variety of tabular tasks, e.g., TabFact (+2.31%), HybridQA (+2.13%), SQA (+2.72%), Feverous (+0.84%), and ToTTo (+5.68%). We believe that our open-source benchmark and proposed prompting methods can serve as a simple yet generic baseline for future research.
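To make the input choices discussed above concrete, here is a minimal sketch of serializing a table into an LLM prompt with role prompting and partition marks. This is not the paper's released code; the table content, the `<table>`/`</table>` marker style, and the delimiter are illustrative assumptions.

```python
# Sketch: serialize a table as delimiter-separated text and wrap it in
# partition marks inside a prompt. Purely illustrative; the benchmark
# compares several such serialization formats and marker choices.

def serialize_table(headers, rows, sep="|"):
    """Render a table as delimiter-separated text, one row per line."""
    lines = [sep.join(headers)]
    lines += [sep.join(str(cell) for cell in row) for row in rows]
    return "\n".join(lines)

headers = ["Player", "Team", "Points"]
rows = [["A. Smith", "Hawks", 21], ["B. Jones", "Bulls", 34]]

prompt = (
    # Role prompting: tell the model what it is before showing data.
    "You are a helpful assistant for table question answering.\n"
    # Partition marks delimit where the table starts and ends, helping
    # the model separate table structure from the question text.
    "<table>\n" + serialize_table(headers, rows) + "\n</table>\n"
    "Question: Which player scored the most points?"
)
print(prompt)
```

The same helper can be swapped for other formats the study compares (e.g., HTML or markdown serialization) by changing only the rendering function, which keeps the prompt scaffold fixed across experiments.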

Submission history

From: Yuan Sui
[v1] Mon, 22 May 2023 14:23:46 UTC (166 KB)
[v2] Fri, 20 Oct 2023 08:35:01 UTC (236 KB)
[v3] Wed, 15 Nov 2023 12:18:39 UTC (237 KB)
[v4] Sat, 17 Feb 2024 08:28:05 UTC (237 KB)
