LMM: latency-aware micro-service mashup in mobile edge computing environment

Neural Computing & Applications (2020)

Abstract
Internet of Things (IoT) applications impose a set of stringent requirements (e.g., low latency, high bandwidth) on the network and computing paradigm. 5G networks face great challenges in supporting IoT services, and the centralized cloud computing paradigm becomes inefficient under these stringent requirements. Extending spectrum resources alone cannot solve the problem effectively. Mobile edge computing offers an IT service environment at the Radio Access Network edge and presents great opportunities for the development of IoT applications. With its capability to reduce latency and improve user experience, mobile edge computing has become a key technology toward 5G. To achieve abundant sharing, complex IoT applications have been implemented as sets of lightweight micro-services distributed among containers over the mobile edge network. How to produce the optimal composition of suitable micro-services for an application in a mobile edge computing environment is an important issue that should be addressed. To address this issue, we propose a latency-aware micro-service mashup approach in this paper. First, the problem is formulated as an integer nonlinear program. Then, we prove the NP-hardness of the problem by reducing it to the delay-constrained least-cost problem. Finally, we propose an approximate latency-aware micro-service mashup approach to solve the problem. Experimental results show that the proposed approach achieves a substantial reduction in network resource consumption while still satisfying the latency constraint.
Keywords
Micro-service, Mobile edge computing, Network resource consumption, Latency, Mashup
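
The abstract describes choosing micro-service instances so that end-to-end latency stays within a bound while network resource consumption is minimized. The sketch below illustrates that kind of selection for a simple service chain with a dynamic program over an integer latency budget; the data layout, the `min_cost_mashup` function, and the DP itself are illustrative assumptions only, not the paper's LMM algorithm or its integer nonlinear programming formulation.

```python
# Illustrative sketch only: pick one candidate micro-service instance per
# stage of an application chain, minimizing total network resource cost
# subject to an end-to-end latency bound. Hypothetical, not the LMM method.
from math import inf


def min_cost_mashup(stages, latency_bound):
    """stages: one list per stage of (latency, cost) candidates.
    latency_bound: integer end-to-end latency budget.
    Returns (total_cost, chosen indices), or (inf, None) if infeasible."""
    # best[l] = (min cost with accumulated latency exactly l, chosen indices)
    best = {0: (0.0, [])}
    for candidates in stages:
        nxt = {}
        for used, (cost, picks) in best.items():
            for idx, (lat, c) in enumerate(candidates):
                total_lat = used + lat
                if total_lat > latency_bound:
                    continue  # would violate the latency constraint
                total_cost = cost + c
                if total_lat not in nxt or total_cost < nxt[total_lat][0]:
                    nxt[total_lat] = (total_cost, picks + [idx])
        best = nxt
        if not best:
            return inf, None  # no feasible composition for this stage
    _, (cost, picks) = min(best.items(), key=lambda kv: kv[1][0])
    return cost, picks


# Toy example: three stages, each with two candidate instances (latency, cost).
stages = [
    [(2, 5.0), (4, 2.0)],
    [(3, 4.0), (1, 7.0)],
    [(2, 3.0), (5, 1.0)],
]
print(min_cost_mashup(stages, latency_bound=8))  # -> (12.0, [0, 0, 0])
```

This toy formulation only captures the latency/cost trade-off along a linear chain; the actual problem studied in the paper is an integer nonlinear program over the mobile edge network and is NP-hard, which is why the authors resort to an approximation approach.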