[2308.13111] Bayesian Low-rank Adaptation for Large Language Models

By Adam X. Yang and 3 other authors

Abstract: Low-rank adaptation (LoRA) has emerged as a new paradigm for cost-efficient fine-tuning of large language models (LLMs). However, fine-tuned LLMs often become overconfident, especially when fine-tuned on small datasets. Bayesian methods, with their inherent ability to estimate uncertainty, serve as potent tools to mitigate overconfidence and enhance calibration. In this work, we introduce Laplace-LoRA, which applies a Bayesian approach to the LoRA parameters. Specifically, Laplace-LoRA applies a Laplace approximation to the posterior over the LoRA parameters, considerably improving the calibration of fine-tuned LLMs.
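The core idea above — fit the parameters as usual, then wrap a Gaussian around the fitted values via a Laplace approximation and average predictions over that Gaussian — can be illustrated on a toy problem. The sketch below uses a tiny logistic regression in place of the LoRA parameters; it is a minimal illustration of the general Laplace-approximation recipe, not the paper's implementation, and every name in it is illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy binary classification data (stands in for a small fine-tuning set).
X = rng.normal(size=(40, 2))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

prior_prec = 1.0  # precision of the Gaussian prior on the weights

# 1) MAP estimate: gradient descent on the negative log posterior.
w = np.zeros(2)
for _ in range(500):
    p = sigmoid(X @ w)
    grad = X.T @ (p - y) + prior_prec * w
    w -= 0.1 * grad / len(X)

# 2) Laplace approximation: posterior ~ N(w_map, H^{-1}), where H is the
#    Hessian of the negative log posterior evaluated at the MAP.
p = sigmoid(X @ w)
H = (X * (p * (1 - p))[:, None]).T @ X + prior_prec * np.eye(2)
cov = np.linalg.inv(H)

# 3) Predictive distribution: Monte Carlo average over the Gaussian posterior,
#    instead of a single point-estimate prediction.
def predict(x, n_samples=1000):
    ws = rng.multivariate_normal(w, cov, size=n_samples)
    return sigmoid(ws @ x).mean()

x_test = np.array([3.0, 0.0])       # a point far from the training data
p_map = sigmoid(w @ x_test)         # confident point-estimate probability
p_laplace = predict(x_test)         # posterior-averaged probability
```

Averaging over the parameter uncertainty captured by `cov` pulls predictions on unfamiliar inputs toward 0.5, which is the mechanism by which the approximation improves calibration; in Laplace-LoRA the same recipe is applied to the LoRA parameters of an LLM rather than to a two-weight classifier.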

Submission history

From: Adam Yang
[v1] Thu, 24 Aug 2023 23:06:21 UTC (186 KB)
[v2] Mon, 28 Aug 2023 00:38:43 UTC (185 KB)
[v3] Wed, 4 Oct 2023 16:29:23 UTC (624 KB)
[v4] Sun, 28 Jan 2024 12:23:21 UTC (643 KB)
