A General Deep Reinforcement Learning Framework for Grant-Free NOMA Optimization in mURLLC

arXiv (2021)

Abstract
Grant-free non-orthogonal multiple access (GF-NOMA) is a potential technique to support massive Ultra-Reliable and Low-Latency Communication (mURLLC) services. However, dynamic resource configuration in GF-NOMA systems is challenging due to random traffic and collisions, which are unknown at the base station (BS). Meanwhile, jointly considering the latency and reliability requirements makes the resource configuration of GF-NOMA for mURLLC even more complex. To address this problem, we develop a general learning framework for signature-based GF-NOMA in the mURLLC service that takes into account MA signature collisions as well as the UE detection and data decoding procedures of the K-repetition GF scheme and the Proactive GF scheme. The goal of our learning framework is to maximize the long-term average number of successfully served users (UEs) under the latency constraint. We first perform real-time repetition value configuration based on a double deep Q-network (DDQN) and then propose a Cooperative Multi-Agent (CMA) learning technique based on the DQN to jointly optimize the configuration of both the repetition values and the number of contention-transmission units (CTUs). Our results show that, under the same latency constraint, the number of successfully served UEs achieved by our proposed learning framework is up to ten times (for the K-repetition scheme) and two times (for the Proactive scheme) higher than that achieved in systems with fixed repetition values and CTU numbers. Importantly, our general learning framework can be used to optimize the resource configuration problems in all signature-based GF-NOMA schemes.
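The abstract only names the DDQN component used for real-time repetition value configuration. The sketch below illustrates the double-DQN update rule that such an agent relies on; it is not the authors' code. The state dimension, the set of candidate repetition values, and the reward are illustrative placeholders (the paper would define the state from observed collisions/decodings and the reward from the number of successfully served UEs under the latency constraint).

```python
# Minimal double-DQN update sketch (PyTorch), assuming:
#   STATE_DIM  - placeholder size of the BS observation vector
#   N_ACTIONS  - placeholder count of candidate repetition values K
# Neither value comes from the paper; both are for illustration only.
import torch
import torch.nn as nn
import torch.nn.functional as F

STATE_DIM = 4   # assumed: e.g., counts of collided / decoded transmissions
N_ACTIONS = 8   # assumed: repetition values K in {1, ..., 8}
GAMMA = 0.95    # discount factor for the long-term average objective

def make_qnet():
    return nn.Sequential(nn.Linear(STATE_DIM, 64), nn.ReLU(),
                         nn.Linear(64, N_ACTIONS))

online, target = make_qnet(), make_qnet()
target.load_state_dict(online.state_dict())  # sync periodically in training
opt = torch.optim.Adam(online.parameters(), lr=1e-3)

def ddqn_step(batch):
    """One gradient step on a (state, action, reward, next_state) batch."""
    s, a, r, s_next = batch
    with torch.no_grad():
        # Double DQN: the online net *selects* the next action,
        # the target net *evaluates* it, reducing overestimation bias.
        a_next = online(s_next).argmax(dim=1, keepdim=True)
        y = r + GAMMA * target(s_next).gather(1, a_next).squeeze(1)
    q = online(s).gather(1, a.unsqueeze(1)).squeeze(1)
    loss = F.mse_loss(q, y)
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()

if __name__ == "__main__":
    B = 32  # synthetic batch, stands in for a replay-buffer sample
    batch = (torch.randn(B, STATE_DIM),
             torch.randint(N_ACTIONS, (B,)),
             torch.rand(B),
             torch.randn(B, STATE_DIM))
    print(ddqn_step(batch))
```

The CMA extension described in the abstract would run one such agent per configuration variable (repetition value, CTU number) with a shared reward; that coordination logic is beyond this sketch.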
Keywords
NOMA, optimization, reinforcement learning, grant-free