Deep Learning Based Resources Allocation for Internet-of-Things Deployment Underlaying Cellular Networks

MOBILE NETWORKS & APPLICATIONS (2020)

Abstract
Resource allocation (RA) is a challenging task in many fields and applications, including communications and computer networks. Conventional solutions to such problems usually come with a time and memory cost, especially for massive networks such as Internet-of-Things (IoT) networks. In this paper, two RA deep network models are proposed for enabling a clustered underlay IoT deployment, where a group of IoT nodes upload information to a centralized gateway in their vicinity by reusing the communication channels of conventional cellular users. The RA problem is formulated as a two-dimensional matching problem, which can be expressed as a traditional linear sum assignment problem (LSAP). The two proposed models are based on the recurrent neural network (RNN). Specifically, we investigate the performance of two long short-term memory (LSTM) based architectures. The results show that the proposed techniques could be used as a replacement for the well-known Hungarian algorithm for solving LSAPs, due to their ability to solve problems of different sizes with high accuracy and very fast execution time. Additionally, the results show that the obtained accuracy outperforms that of state-of-the-art deep network techniques.
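The abstract casts the RA problem as a linear sum assignment problem (LSAP), i.e., finding a one-to-one matching between IoT clusters and reusable cellular channels that minimizes a total assignment cost, and uses the Hungarian algorithm as the classical exact solver against which the LSTM models are benchmarked. The following is a minimal sketch of that LSAP baseline using SciPy's Hungarian solver; the cost matrix, its interpretation as interference penalties, and the problem size are illustrative assumptions, not values taken from the paper.

```python
# Minimal sketch (not the authors' LSTM model): solving the channel-assignment
# LSAP baseline with the Hungarian algorithm via SciPy. The cost values and
# problem size below are hypothetical, purely for illustration.
import numpy as np
from scipy.optimize import linear_sum_assignment

rng = np.random.default_rng(0)

n = 8  # hypothetical number of IoT clusters / reusable cellular channels
# cost[i, j]: hypothetical cost (e.g., interference penalty) of letting
# IoT cluster i reuse the channel of cellular user j
cost = rng.random((n, n))

# Hungarian algorithm: returns the one-to-one assignment minimizing total cost
rows, cols = linear_sum_assignment(cost)
print("channel assigned to each cluster:", cols)
print("total assignment cost:", cost[rows, cols].sum())
```

A learned solver such as the proposed LSTM architectures would aim to approximate this optimal matching directly from the cost matrix, trading a small accuracy loss for much faster inference on large problem instances.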
Keywords
Resource allocation, Linear sum assignment problems, Recurrent neural network, Long short-term memory