
Machine Learning Based Interference Modelling in Cloud-Native Applications

International Conference on Performance Engineering (2022)

Abstract
Cloud-native applications are often composed of lightweight containers and conform to the microservice architecture. Cloud providers offer platforms for container hosting and orchestration. These platforms reduce the level of support required from the application owner as operational tasks are delegated to the platform. Furthermore, containers belonging to different applications can be co-located on the same virtual machine to utilize resources more efficiently. Given that there are underlying shared resources and consequently potential performance interference, predicting the level of interference before deciding to share virtual machines can avoid undesirable performance deterioration. We propose a lightweight performance interference modelling technique for cloud-native microservices. The technique constructs ML models for response time prediction and can dynamically account for changing runtime conditions through the use of a sliding window method. We evaluate our technique against realistic microservices on AWS EC2. Our technique outperforms baseline and competing techniques in MAPE by at least 1.45% and at most 92.04%.
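The abstract describes the core mechanism: per-microservice ML models that predict response time from runtime metrics, with a sliding window over recent observations so the models track changing conditions. The sketch below illustrates that idea under assumptions not stated in the abstract — hypothetical feature vectors (e.g. resource usage of co-located containers) and a scikit-learn random-forest regressor; the paper does not specify the model family or feature set here.

```python
# Minimal sketch of sliding-window interference modelling.
# Assumptions (not from the paper): features are resource-usage vectors of
# co-located containers, and the regressor is a random forest.
from collections import deque

import numpy as np
from sklearn.ensemble import RandomForestRegressor


class SlidingWindowInterferenceModel:
    """Predicts microservice response time from recent runtime samples.

    Only the most recent `window_size` observations are kept, so the model
    adapts to changing runtime conditions (e.g. new co-located containers).
    """

    def __init__(self, window_size: int = 500):
        self.window = deque(maxlen=window_size)  # (features, response_time)
        self.model = RandomForestRegressor(n_estimators=100, random_state=0)

    def observe(self, features: np.ndarray, response_time: float) -> None:
        """Record one monitoring sample; old samples fall out of the window."""
        self.window.append((features, response_time))

    def refit(self) -> None:
        """Retrain the regressor on the current window contents."""
        X = np.vstack([f for f, _ in self.window])
        y = np.array([t for _, t in self.window])
        self.model.fit(X, y)

    def predict(self, features: np.ndarray) -> float:
        """Predict response time under a candidate co-location scenario."""
        return float(self.model.predict(features.reshape(1, -1))[0])


def mape(y_true: np.ndarray, y_pred: np.ndarray) -> float:
    """Mean absolute percentage error, the metric reported in the abstract."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    return float(np.mean(np.abs((y_true - y_pred) / y_true)) * 100)
```

In such a setup, the predicted response time for a candidate placement could be compared against a service-level threshold before deciding whether to co-locate containers on the same virtual machine.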
Keywords
Microservice, Interference, Cloud Computing, Machine Learning