DHA: Learning Decoupled-Head Attention from Transformer Checkpoints Via Adaptive Heads Fusion

NeurIPS 2024

Keywords
Large Language Models, Multi-Head Attention, Pre-training Acceleration, Efficient Inference, Model Fusion