HyperRouter: Towards Efficient Training and Inference of Sparse Mixture of Experts.
EMNLP 2023
Abstract
By routing input tokens to only a few split experts, Sparse
Mixture-of-Experts has enabled efficient training of large language models.
Recent findings suggest that fixing the routers can achieve competitive
performance by alleviating the collapsing problem, where all experts eventually
learn similar representations. However, this strategy has two key limitations:
(i) the policy derived from random routers might be sub-optimal, and (ii) it
requires extensive resources during training and evaluation, leading to limited
efficiency gains. This work introduces HyperRouter, which dynamically generates
the router's parameters through a fixed hypernetwork and trainable embeddings
to achieve a balance between training the routers and freezing them to learn an
improved routing policy. Extensive experiments across a wide range of tasks
demonstrate the superior performance and efficiency gains of HyperRouter
compared to existing routing methods. Our implementation is publicly available
at https://github.com/giangdip2410/HyperRouter.
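The core idea above — a frozen hypernetwork that maps a small trainable embedding to the router's parameters, so only the embedding is updated during training — can be illustrated with a minimal sketch. This is an assumption-laden toy in pure Python (the actual implementation in the linked repository differs; all dimensions, initializations, and helper names here are hypothetical):

```python
import math
import random

random.seed(0)

def matvec(W, x):
    # Plain matrix-vector product over nested lists.
    return [sum(w * v for w, v in zip(row, x)) for row in W]

def softmax(z):
    m = max(z)
    e = [math.exp(v - m) for v in z]
    s = sum(e)
    return [v / s for v in e]

# Toy sizes (hypothetical): token dim, embedding dim, number of experts.
d_model, d_emb, n_experts = 4, 3, 2

# Fixed (frozen) hypernetwork: maps the trainable embedding to the
# router's weight matrix of shape (n_experts, d_model), flattened.
H = [[random.gauss(0, 0.5) for _ in range(d_emb)]
     for _ in range(n_experts * d_model)]

# Trainable embedding: the only routing parameter a gradient step would touch.
emb = [random.gauss(0, 0.5) for _ in range(d_emb)]

def router_weights(embedding):
    # Generate router parameters on the fly from the frozen hypernetwork.
    flat = matvec(H, embedding)
    return [flat[i * d_model:(i + 1) * d_model] for i in range(n_experts)]

def route(x, k=1):
    """Return the top-k expert indices and their softmax gate values."""
    logits = matvec(router_weights(emb), x)
    gates = softmax(logits)
    top = sorted(range(n_experts), key=lambda i: -gates[i])[:k]
    return top, [gates[i] for i in top]

token = [0.1, -0.2, 0.3, 0.4]
experts, gates = route(token, k=1)
print(experts, gates)
```

Because the hypernetwork stays frozen, the routing policy can still adapt (through the embedding) without the full router-training dynamics that the abstract links to expert collapse; this sketch only shows the parameter-generation and top-k gating mechanics, not the SMoE experts themselves.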
Keywords
sparse mixture, efficient training, experts