Octavius: Mitigating Task Interference in MLLMs Via MoE

CoRR (2023)

Keywords
Large Language Model (LLM), Multi-task learning, Multi-modal learning, Mixture-of-Experts (MoE), PEFT