Synthetic Data Generation in Low-Resource Settings via Fine-Tuning of Large Language Models
Jean Kaddour and 1 other author
Abstract: The in-context learning ability of large language models (LLMs) enables them to generalize to novel downstream tasks with relatively few labeled examples. However, they require enormous computational resources to deploy. Alternatively, smaller models can solve specific tasks if fine-tuned with enough labeled examples, which, however, are expensive to obtain. In pursuit of the best of both worlds, we study the generation of synthetic fine-tuning data by fine-tuned teacher LLMs to improve the downstream performance of much smaller models. On four text classification and two text generation tasks, we find that both data generation and annotation dramatically improve the respective downstream model's performance, occasionally requiring only a small fraction of the original training dataset.
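The abstract outlines a teacher-student pipeline: fine-tune a teacher LLM on the available labeled examples, sample synthetic labeled data from it, and train a much smaller downstream model on that data. Below is a minimal sketch of the generation step, assuming a Hugging Face transformers-style API; the teacher checkpoint, prompt format, and sampling parameters are illustrative placeholders, not the paper's actual configuration.

```python
# Sketch of class-conditional synthetic data generation from a teacher LLM.
# All names and hyperparameters below are assumptions for illustration.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

teacher_name = "gpt2"  # placeholder; the paper uses a fine-tuned teacher LLM
tokenizer = AutoTokenizer.from_pretrained(teacher_name)
teacher = AutoModelForCausalLM.from_pretrained(teacher_name)

def generate_synthetic_example(label: str, max_new_tokens: int = 64) -> str:
    """Sample one synthetic training text for a given class label from the
    (assumed already fine-tuned) teacher."""
    prompt = f"Label: {label}\nText:"  # hypothetical prompt format
    inputs = tokenizer(prompt, return_tensors="pt")
    with torch.no_grad():
        output = teacher.generate(
            **inputs,
            max_new_tokens=max_new_tokens,
            do_sample=True,
            top_p=0.95,
            pad_token_id=tokenizer.eos_token_id,
        )
    text = tokenizer.decode(output[0], skip_special_tokens=True)
    return text[len(prompt):].strip()  # keep only the generated continuation

# Build a synthetic dataset; a small student classifier would then be
# fine-tuned on it in place of the expensive human-labeled data.
labels = ["positive", "negative"]  # e.g., a binary sentiment task
synthetic_data = [
    {"text": generate_synthetic_example(lbl), "label": lbl}
    for lbl in labels
    for _ in range(100)  # samples per class; a tunable generation budget
]
```

The annotation variant the abstract also mentions would instead prompt the teacher with unlabeled texts and have it predict labels, rather than generating texts from scratch.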
Submission history
From: Jean Kaddour
[v1] Mon, 2 Oct 2023 11:49:05 UTC (57 KB)
[v2] Mon, 8 Jan 2024 13:09:24 UTC (98 KB)