Characterizing Large Language Model Geometry Solves Toxicity Detection and Generation

CoRR (2023)

Abstract
Large Language Models (LLMs) drive current AI breakthroughs despite very little being known about their internal representations, e.g., how to extract a few informative features to solve various downstream tasks. To provide a practical and principled answer, we propose to characterize LLMs from a geometric perspective. We obtain in closed form (i) the intrinsic dimension in which the Multi-Head Attention embeddings are constrained to exist and (ii) the partition and per-region affine mappings of the per-layer feedforward networks. Our results are informative, do not rely on approximations, and are actionable. First, we show that, motivated by our geometric interpretation, we can bypass Llama$2$'s RLHF by controlling its embedding's intrinsic dimension through informed prompt manipulation. Second, we derive $7$ interpretable spline features that can be extracted from any (pre-trained) LLM layer, providing a rich abstract representation of their inputs. Those features alone ($224$ for Mistral-7B and Llama$2$-7B) are sufficient to help solve toxicity detection, infer the domain of the prompt, and even tackle the Jigsaw challenge, which aims at characterizing the type of toxicity of various prompts. Our results demonstrate how, even in large-scale regimes, exact theoretical results can answer practical questions in language models. Code: \url{https://github.com/RandallBalestriero/SplineLLM}.
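The idea of per-layer spline features can be sketched as follows. A ReLU feedforward layer partitions its input space into regions, and simple geometric statistics of the region an embedding falls into (fraction of active units, distances to the partition's hyperplane boundaries) summarize that layer's view of the input. The statistics below are illustrative stand-ins under this interpretation, not the paper's exact seven features; the toy layer and function names are hypothetical.

```python
import math
import random

random.seed(0)

def layer_spline_features(h, W, b):
    """Illustrative geometry statistics for one ReLU layer (hypothetical helper).

    h: input embedding (list of floats)
    W: weight matrix, one row per hidden unit; b: per-unit biases.
    Returns summaries of the spline partition region containing h.
    """
    # Pre-activations: one hyperplane test per hidden unit.
    pre = [sum(wi * hi for wi, hi in zip(row, h)) + bi
           for row, bi in zip(W, b)]
    norms = [math.sqrt(sum(wi * wi for wi in row)) for row in W]
    # Signed distance from h to each unit's partition boundary.
    dists = [p / n for p, n in zip(pre, norms)]
    active = [p > 0 for p in pre]
    return {
        "frac_active": sum(active) / len(active),        # density of the region code
        "min_abs_dist": min(abs(d) for d in dists),      # distance to nearest boundary
        "mean_abs_dist": sum(abs(d) for d in dists) / len(dists),
        "mean_pre": sum(pre) / len(pre),                 # average pre-activation
    }

# Toy layer: 8 hidden units over a 4-dimensional embedding.
dim, units = 4, 8
W = [[random.gauss(0, 1) for _ in range(dim)] for _ in range(units)]
b = [random.gauss(0, 0.1) for _ in range(units)]
h = [random.gauss(0, 1) for _ in range(dim)]

feats = layer_spline_features(h, W, b)
```

Concatenating such per-layer statistics across all layers (e.g., 7 features over the 32 layers of a 7B model gives the 224-dimensional representation mentioned above) yields a compact input description that a linear probe can then classify.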
Authors: Randall Balestriero, Romain Cosentino, Sarath Shekkizhar
arXiv: https://arxiv.org/abs/2312.01648 (DOI: https://doi.org/10.48550/arXiv.2312.01648)