DRLAR: A deep reinforcement learning-based adaptive routing framework for network-on-chips

Computer Networks (2024)

Abstract
Adaptive routing plays a pivotal role in the overall performance of Network-on-Chips (NoCs). However, with many-core architectures supporting complex and constantly changing traffic patterns for emerging applications, adaptive routing faces significant challenges. Our examination and analysis of existing heuristic adaptive routing algorithms revealed three key limiting factors: reliance on a single network status metric, lack of system feedback awareness, and lack of customizability. Reinforcement Learning (RL) methods have demonstrated promising opportunities for exploring adaptive routing design. Deep reinforcement learning (DRL) techniques, in particular, enable efficient exploration of adaptive routing design spaces where heuristic strategies may be inadequate. This paper proposes DRLAR, a novel deep reinforcement learning framework for adaptive routing that is suitable for diversified traffic patterns and optimizes multiple objectives simultaneously. DRLAR formulates routing as an agent that makes routing decisions through autonomous learning from multiple network state features and system-level feedback information. We conduct extensive experiments against state-of-the-art routing algorithms to evaluate our design. The results show that DRLAR reduces packet latency by an average of 33.3%, achieving reductions of 41.6% and 10.5% on average under heavy synthetic traffic and the PARSEC 2.1 benchmark, respectively. We also perform a cost analysis to validate the feasibility of implementing DRLAR on NoCs with low computational, storage, and power overheads.
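The abstract describes an agent that selects output ports from multiple network state features and learns from system-level feedback. As an illustration only, the sketch below shows the general shape of such an RL routing agent for a 2D-mesh NoC; it is not the paper's implementation. The feature choices (per-port buffer occupancy and hop-distance reduction), the linear Q-function in place of a deep network, and all class and parameter names are assumptions made for brevity.

```python
import random
import numpy as np

class RoutingAgent:
    """Illustrative RL routing agent for a 2D-mesh NoC router.

    Hypothetical simplification of the DRLAR idea: each candidate
    output port is described by several network features, and a
    linear Q-function (standing in for the paper's deep network)
    scores the ports. Feedback (e.g., negative latency) drives a
    one-step TD update.
    """

    def __init__(self, n_ports=4, n_features=2, lr=0.1, gamma=0.9, eps=0.1):
        self.n_ports = n_ports
        self.lr, self.gamma, self.eps = lr, gamma, eps
        # One weight vector per output port (linear function approximation).
        self.w = np.zeros((n_ports, n_features))

    def q_values(self, features):
        # features: array of shape (n_ports, n_features).
        return (self.w * features).sum(axis=1)

    def select_port(self, features, valid_ports):
        # Epsilon-greedy choice among the valid output ports.
        if random.random() < self.eps:
            return random.choice(valid_ports)
        q = self.q_values(features)
        return max(valid_ports, key=lambda p: q[p])

    def update(self, features, port, reward, next_features, next_valid):
        # One-step TD (Q-learning) update on the chosen port's weights.
        target = reward + self.gamma * max(
            self.q_values(next_features)[p] for p in next_valid)
        td_error = target - self.q_values(features)[port]
        self.w[port] += self.lr * td_error * features[port]
```

In use, a router would build the feature matrix each cycle, call `select_port`, and later call `update` with a reward derived from the observed system feedback (such as negative per-hop latency), so that ports leading to congestion are gradually scored lower.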
Keywords
Network-on-chips, Adaptive routing, Deep reinforcement learning