Academic Paper
An Adversarial Example for Direct Logit Attribution: Memory Management in GELU-4L
Document Type: Working Paper
Source: BlackboxNLP Workshop 2024, pages 232-237
Abstract
Prior work suggests that language models manage the limited bandwidth of the residual stream through a "memory management" mechanism, where certain attention heads and MLP layers clear residual stream directions set by earlier layers. Our study provides concrete evidence for this erasure phenomenon in a 4-layer transformer, identifying heads that consistently remove the output of earlier heads. We further demonstrate that direct logit attribution (DLA), a common technique for interpreting the output of intermediate transformer layers, can produce misleading results because it does not account for erasure.
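To make the DLA technique named in the abstract concrete, the sketch below computes a single attention head's direct logit attribution: the head's residual-stream contribution, rescaled by the final LayerNorm and projected onto a target token's unembedding direction. It assumes the TransformerLens library (which hosts the gelu-4l model); the prompt, layer/head indices, and target token are illustrative choices, not taken from the paper.

```python
# Minimal DLA sketch, assuming TransformerLens and the "gelu-4l" model.
# Prompt, layer/head, and target token below are illustrative only.
import torch
from transformer_lens import HookedTransformer

model = HookedTransformer.from_pretrained("gelu-4l")
model.set_use_attn_result(True)  # cache each head's residual-stream output

tokens = model.to_tokens("The capital of France is")
logits, cache = model.run_with_cache(tokens)

layer, head = 2, 3                         # hypothetical head to attribute
target = model.to_single_token(" Paris")   # hypothetical target token

# Per-head contribution written to the residual stream at the final position;
# cache["result", layer] has shape [batch, pos, head_index, d_model].
head_out = cache["result", layer][0, -1, head]

# DLA rescales the contribution by the final LayerNorm's scale at that
# position, then projects onto the target token's unembedding direction.
ln_scale = cache["ln_final.hook_scale"][0, -1]
dla = (head_out / ln_scale) @ model.W_U[:, target]
print(f"L{layer}H{head} direct logit attribution to ' Paris': {dla.item():.3f}")
```

The abstract's caution applies directly to this score: if a later head writes a vector that cancels `head_out`, the component's true end-to-end effect on the logits is smaller than this per-head attribution suggests.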