A Survey on Fairness in Large Language Models, by Yingji Li and 4 other authors
Abstract: Large Language Models (LLMs) have shown powerful performance and strong development prospects and are widely deployed in the real world. However, LLMs can capture social biases from unprocessed training data and propagate those biases to downstream tasks. Unfair LLM systems have undesirable social impacts and potential harms. In this paper, we provide a comprehensive review of related research on fairness in LLMs. Considering the influence of parameter magnitude and training paradigm on research strategy, we divide existing fairness research into two categories: work oriented to medium-sized LLMs under pre-training and fine-tuning paradigms, and work oriented to large-sized LLMs under prompting paradigms. First, for medium-sized LLMs, we introduce evaluation metrics and debiasing methods from the perspectives of intrinsic bias and extrinsic bias, respectively. Then, for large-sized LLMs, we introduce recent fairness research, including fairness evaluation, causes of bias, and debiasing methods. Finally, we discuss the challenges and future directions for the development of fairness in LLMs and provide insight into them.
Submission history
From: Yingji Li
[v1] Sun, 20 Aug 2023 03:30:22 UTC (286 KB)
[v2] Wed, 21 Feb 2024 13:52:11 UTC (4,915 KB)