SelfIE: Self-Interpretation of Large Language Model Embeddings
CoRR (2024)
Abstract
How do large language models (LLMs) obtain their answers? The ability to
explain and control an LLM's reasoning process is key for reliability,
transparency, and future model development. We propose SelfIE
(Self-Interpretation of Embeddings), a framework that enables LLMs to interpret
their own embeddings in natural language by leveraging their ability to respond
to inquiries about a given passage. Capable of interpreting open-world concepts
in hidden embeddings, SelfIE reveals LLM internal reasoning in cases such as
making ethical decisions, internalizing prompt injection, and recalling harmful
knowledge. SelfIE's text descriptions of hidden embeddings also open up new
avenues for controlling LLM reasoning. We propose Supervised Control, which
allows editing open-ended concepts while only requiring gradient computation at
individual layers. We extend RLHF to hidden embeddings and propose Reinforcement
Control, which erases harmful knowledge in LLMs without supervision targets.
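The core mechanism the abstract describes is asking the model to "respond to inquiries about a given passage," where the passage is replaced by one of the model's own hidden embeddings. The sketch below illustrates that idea under assumptions: the model name, layer index, placeholder token, and interpretation prompt are illustrative, and patching a mid-layer hidden state into the input-embedding position is a simplification of the paper's injection scheme, not its exact procedure.

```python
# Minimal sketch of the SelfIE idea: capture a hidden embedding from one forward
# pass, then ask the same model to describe it by injecting it into a placeholder
# slot of an interpretation prompt. All names below are illustrative assumptions.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "meta-llama/Llama-2-7b-chat-hf"   # assumed model; any causal LM works
tok = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
model.eval()

# 1. Run the original prompt and grab one hidden embedding to interpret.
prompt = "Alice secretly copied the exam answers."
inputs = tok(prompt, return_tensors="pt")
with torch.no_grad():
    out = model(**inputs, output_hidden_states=True)
layer, pos = 15, -1                              # assumed layer and token position
embedding = out.hidden_states[layer][0, pos]     # shape: (hidden_dim,)

# 2. Build an interpretation prompt with a placeholder token whose input
#    embedding will be overwritten by the captured hidden state.
interp_prompt = "Describe the concept represented here: _"
interp = tok(interp_prompt, return_tensors="pt")
placeholder_pos = interp["input_ids"].shape[1] - 1   # last token is the placeholder

with torch.no_grad():
    inputs_embeds = model.get_input_embeddings()(interp["input_ids"])
    inputs_embeds[0, placeholder_pos] = embedding.to(inputs_embeds.dtype)

    # 3. The model answers an inquiry whose "passage" is the injected embedding,
    #    producing a natural-language description of the hidden state.
    gen = model.generate(
        inputs_embeds=inputs_embeds,
        attention_mask=interp["attention_mask"],
        max_new_tokens=40,
    )
print(tok.decode(gen[0], skip_special_tokens=True))
```

In this reading, the interpretation step reuses the frozen model itself, so no separate probe or decoder has to be trained; the same loop could be repeated across layers and token positions to trace how a concept develops through the network.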