
E^2-LLM: Efficient and Extreme Length Extension of Large Language Models



By Jiaheng Liu and 13 other authors


Abstract: Training LLMs with long context sizes is typically computationally expensive, requiring extensive training hours and GPU resources. Existing long-context extension methods usually need additional training procedures to support the corresponding long-context windows, where long-context training data (e.g., 32k) must be collected and high GPU training costs are incurred. To address these issues, we propose an Efficient and Extreme length extension method for Large Language Models, called E^2-LLM, which requires only one training procedure, dramatically reduces computation cost, and removes the need to collect long-context data. Concretely, first, the training data of our E^2-LLM only requires a short length (e.g., 4k), which greatly reduces the tuning cost. Second, the training procedure on the short training context window is performed only once, and different evaluation context windows can be supported at inference. Third, in E^2-LLM, based on RoPE position embeddings, we introduce two different augmentation methods on the scale and position index parameters for different samples in training, which aim to make the model more robust to different relative position differences when directly interpolating to an arbitrary context length at inference. Comprehensive experimental results on multiple benchmark datasets demonstrate the effectiveness of our E^2-LLM on challenging long-context tasks.
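The abstract only describes the augmentation at a high level. As a rough illustration of the general idea, the sketch below shows RoPE with a per-sample scale factor and position-index offset; all function and parameter names (rope_angles, apply_rope, scale, offset) are hypothetical and not taken from the paper's implementation.

```python
# Minimal sketch (not the authors' code): RoPE with a scale factor and a
# position-index offset, the two kinds of parameters the abstract says are
# augmented per training sample. Names and defaults here are assumptions.
import torch


def rope_angles(positions: torch.Tensor, dim: int, base: float = 10000.0,
                scale: float = 1.0, offset: int = 0) -> torch.Tensor:
    """Return RoPE rotation angles of shape (seq_len, dim/2).

    positions: (seq_len,) integer position indices.
    scale:     interpolation factor; scale > 1 squeezes positions so a longer
               context maps into the position range seen during training.
    offset:    shift added to the position indices (augmentation knob).
    """
    inv_freq = 1.0 / (base ** (torch.arange(0, dim, 2).float() / dim))
    pos = (positions.float() + offset) / scale      # augmented position indices
    return torch.outer(pos, inv_freq)


def apply_rope(x: torch.Tensor, angles: torch.Tensor) -> torch.Tensor:
    """Rotate query/key features x of shape (seq_len, dim) by the given angles."""
    x1, x2 = x[..., 0::2], x[..., 1::2]
    cos, sin = angles.cos(), angles.sin()
    out = torch.empty_like(x)
    out[..., 0::2] = x1 * cos - x2 * sin
    out[..., 1::2] = x1 * sin + x2 * cos
    return out


# During training on short (e.g., 4k) sequences, one could sample a different
# (scale, offset) pair per sample so the model sees many relative-distance
# patterns; at inference, scale would be set to target_len / trained_len.
if __name__ == "__main__":
    seq_len, dim = 16, 64
    q = torch.randn(seq_len, dim)
    angles = rope_angles(torch.arange(seq_len), dim, scale=2.0, offset=5)
    print(apply_rope(q, angles).shape)  # torch.Size([16, 64])
```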

Submission history

From: Jiaheng Liu
[v1] Sat, 13 Jan 2024 02:11:20 UTC (4,255 KB)
[v2] Thu, 18 Jan 2024 02:18:43 UTC (4,255 KB)


