Locating Factual Knowledge in Large Language Models: Exploring the Residual Stream and Analyzing Subvalues in Vocabulary Space, by Zeping Yu and 1 other authors
Abstract: We locate factual knowledge in large language models by exploring the residual stream and analyzing subvalues in vocabulary space. We explain why subvalues show human-interpretable concepts when projected into vocabulary space: the before-softmax values of subvalues are combined by addition, so the probabilities of the top tokens in vocabulary space increase. Based on this, we find that using the log probability increase to compute the significance of layers and subvalues is better than using the probability increase, since the curve of log probability increase is linear and monotonically increasing. Moreover, we calculate inner products to evaluate how much a feed-forward network (FFN) subvalue is activated by previous layers. Based on these methods, we find where the factual knowledge <France, capital, Paris> is stored. Specifically, attention layers store "Paris is related to France", while FFN layers store "Paris is a capital/city", activated by attention subvalues related to "capital". We apply our method to Baevski-18, GPT-2 medium, Llama-7B, and Llama-13B. Overall, we provide a new method for understanding the mechanism of transformers. We will release our code on GitHub.
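As a rough illustration of the measurements the abstract describes, the sketch below shows, in PyTorch, how one might project a subvalue into vocabulary space through the unembedding matrix, score a layer by the log probability increase of a target token, and measure activation with an inner product. This is not the authors' released code; all tensor names (subvalue, unembed, residual_before, residual_after, prev_output, ffn_key) are illustrative assumptions.

```python
# Minimal sketch, assuming a decoder-only transformer with an unembedding
# matrix `unembed` of shape (vocab_size, d_model) and residual-stream
# vectors of shape (d_model,). Names are hypothetical, not the paper's API.
import torch
import torch.nn.functional as F

def top_tokens_in_vocab_space(subvalue, unembed, k=10):
    """Project a subvalue into vocabulary space and return its top-k token ids."""
    logits = unembed @ subvalue  # before-softmax values in vocabulary space
    return torch.topk(logits, k).indices

def log_prob_increase(residual_before, residual_after, unembed, token_id):
    """Log probability increase of a target token contributed by one layer:
    log p(token | residual_after) - log p(token | residual_before)."""
    logp_after = F.log_softmax(unembed @ residual_after, dim=-1)[token_id]
    logp_before = F.log_softmax(unembed @ residual_before, dim=-1)[token_id]
    return (logp_after - logp_before).item()

def activation_score(prev_output, ffn_key):
    """Inner product estimating how much a previous layer's output activates
    an FFN subvalue whose coefficient comes from the key vector ffn_key."""
    return torch.dot(prev_output, ffn_key).item()
```

Under these assumptions, comparing log_prob_increase across layers would yield the per-layer significance curve the abstract refers to, and activation_score would rank which earlier subvalues most strongly activate a given FFN subvalue.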
Submission history
From: Zeping Yu
[v1] Tue, 19 Dec 2023 13:23:18 UTC (904 KB)
[v2] Tue, 30 Jan 2024 12:19:09 UTC (2,065 KB)