OWQ: Lessons learned from activation outliers for weight quantization in large language models



Authors: Changhun Lee and 4 other authors

Abstract: Large language models (LLMs) with hundreds of billions of parameters require powerful server-grade GPUs for inference, limiting their practical deployment. To address this challenge, we introduce the outlier-aware weight quantization (OWQ) method, which aims to minimize the LLM's footprint through low-precision representation. OWQ prioritizes a small subset of structured weights sensitive to quantization, storing them in high precision, while applying highly tuned quantization to the remaining dense weights. This sensitivity-aware mixed-precision scheme reduces the quantization error notably, and extensive experiments demonstrate that 3.1-bit models using OWQ perform comparably to 4-bit models optimized by OPTQ. Furthermore, OWQ incorporates a parameter-efficient fine-tuning method for task-specific adaptation, called weak column tuning (WCT), enabling accurate task-specific LLM adaptation with minimal memory overhead in the optimized format. OWQ represents a notable advance in the flexibility, efficiency, and practicality of LLM optimization. The source code is available at this https URL
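The core idea described in the abstract, keeping a few activation-sensitive "weak" weight columns in full precision while quantizing the remaining dense weights to low bit-width, can be illustrated with a minimal sketch. This is not the paper's implementation: the sensitivity score below is approximated from calibration activations (diagonal of X^T X), the dense part uses plain per-row round-to-nearest quantization instead of the OPTQ-based procedure, and the function name and parameters (`owq_style_quantize`, `k`, `n_bits`) are placeholders chosen for this example.

```python
import torch

def owq_style_quantize(W, X_calib, k=8, n_bits=3):
    """Illustrative sketch of outlier-aware mixed-precision weight quantization.

    W:        (out_features, in_features) weight matrix
    X_calib:  (n_samples, in_features) calibration activations
    k:        number of sensitive ("weak") columns kept in full precision
    n_bits:   bit-width for the remaining dense weights

    Simplified illustration only; the sensitivity proxy and the round-to-nearest
    quantizer stand in for the paper's actual procedure.
    """
    # Columns that see large activations contribute more to output error
    # when their weights are perturbed, so treat them as quantization-sensitive.
    sensitivity = (X_calib.float() ** 2).sum(dim=0)       # (in_features,)
    weak_cols = torch.topk(sensitivity, k).indices        # kept in high precision

    mask = torch.zeros(W.shape[1], dtype=torch.bool)
    mask[weak_cols] = True

    W_weak = W[:, mask].clone()                           # high-precision columns
    W_dense = W[:, ~mask].clone()                         # columns to be quantized

    # Per-row symmetric round-to-nearest quantization of the dense part.
    qmax = 2 ** (n_bits - 1) - 1
    scale = W_dense.abs().amax(dim=1, keepdim=True) / qmax
    W_q = torch.clamp(torch.round(W_dense / scale), -qmax - 1, qmax)

    # Reassemble a dequantized matrix for inspecting the quantization error.
    W_hat = W.clone()
    W_hat[:, ~mask] = W_q * scale
    W_hat[:, mask] = W_weak
    return W_hat, weak_cols
```

In this sketch the retained high-precision columns also suggest how weak column tuning could work: since they are stored separately, only those columns would need to be updated during task-specific fine-tuning, keeping the trainable parameter count and memory overhead small.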

Submission history

From: Eunhyeok Park
[v1] Sun, 4 Jun 2023 06:33:13 UTC (1,629 KB)
[v2] Tue, 13 Jun 2023 07:31:38 UTC (1,629 KB)
[v3] Tue, 23 Jan 2024 16:28:49 UTC (197 KB)


