Cold Start Latency in Serverless Computing: A Systematic Review, Taxonomy, and Future Directions

CoRR (2023)

Abstract
Serverless computing has recently attracted attention from both academia and industry because it offers dynamic scalability and a pay-as-you-go economic model: users pay only for the time their code actually runs. Scaling to zero optimises cost and resource utilisation, but it is also the fundamental cause of the serverless cold start problem. Numerous academic and industrial studies aim to tackle cold starts, which remain a significant research challenge. This article provides a comprehensive overview of recent research on the cold start problem in serverless computing. In addition, we present a detailed taxonomy of approaches for addressing cold start latency. Several academic and industrial organisations have proposed methods for reducing cold start time and cold start frequency, and we use this taxonomy to explore them. Current work on cold start latency falls into several categories, including caching-based, application-level optimisation-based, and AI/ML-based solutions. We analyse existing methods and group them according to their commonalities and features. Finally, we conclude with a review of open challenges and possible future research directions.
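To make the cold start effect concrete, the sketch below (not taken from the paper) measures the latency gap between a first (cold) and an immediate second (warm) invocation of an HTTP-triggered function, and then applies the simplest pre-warming mitigation from the caching/keep-warm category: periodically re-invoking the function so the platform does not scale it to zero. The endpoint URL and the keep-warm interval are hypothetical, illustrative assumptions.

```python
# Illustrative sketch (not from the paper): quantifying the cold/warm latency gap
# of an HTTP-triggered serverless function and applying a naive keep-warm pinger.
# FUNCTION_URL and KEEP_WARM_INTERVAL_S are hypothetical assumptions.

import time
import threading
import urllib.request

FUNCTION_URL = "https://example.com/my-function"  # hypothetical endpoint
KEEP_WARM_INTERVAL_S = 300  # chosen below a typical platform idle timeout

def timed_invoke(url: str) -> float:
    """Invoke the function once and return end-to-end latency in seconds."""
    start = time.perf_counter()
    with urllib.request.urlopen(url) as resp:
        resp.read()
    return time.perf_counter() - start

def measure_cold_vs_warm(url: str) -> None:
    cold = timed_invoke(url)  # first call after idle: an instance must be provisioned
    warm = timed_invoke(url)  # immediate second call: the instance is already running
    print(f"cold: {cold:.3f}s  warm: {warm:.3f}s  cold-start overhead: {cold - warm:.3f}s")

def keep_warm(url: str, interval_s: int = KEEP_WARM_INTERVAL_S) -> None:
    """Naive pre-warming: re-invoke before the platform's idle timeout expires."""
    while True:
        timed_invoke(url)
        time.sleep(interval_s)

if __name__ == "__main__":
    measure_cold_vs_warm(FUNCTION_URL)
    # Run the keep-warm pinger in the background (daemon thread).
    threading.Thread(target=keep_warm, args=(FUNCTION_URL,), daemon=True).start()
```

Keep-warm pinging trades extra (idle) invocations for lower tail latency; the survey's taxonomy also covers approaches that avoid this overhead, such as snapshot/caching of initialised runtimes and AI/ML-based prediction of invocation times.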
Keywords
serverless computing, cold start