DyCo: Dynamic, Contextualized AI models

ACM Transactions on Embedded Computing Systems (TECS), 2022

Abstract
Devices with limited computing resources use smaller AI models to achieve low-latency inferencing. However, the accuracy of these models is typically much lower than that of a bigger model trained and deployed where computing resources are relatively abundant. We describe DyCo, a novel system that ensures the privacy of stream data and dynamically improves the accuracy of the small models used on devices. Unlike knowledge distillation or federated learning, DyCo treats AI models as black boxes. DyCo uses a semi-supervised approach that leverages existing training frameworks and network model architectures to periodically train contextualized, smaller models for resource-constrained devices. A bigger, highly accurate model in the edge-cloud auto-labels the data received from each sensor stream. Training in the edge-cloud (as opposed to the public cloud) ensures data privacy, and bespoke models for thousands of live data streams can be designed in parallel by using multiple edge-clouds. DyCo uses the auto-labeled data to periodically re-train stream-specific, bespoke small models. To reduce the periodic training costs, DyCo uses different policies based on stride, accuracy, and confidence information. We evaluate our system and the contextualized models using two object detection models (for vehicles and people) and two data sets (a public benchmark and a real-world proprietary data set). Our results show that DyCo increases the mAP accuracy of small models by an average of 16.3% (and up to 20%) on the public benchmark, and by an average of 19.0% (and up to 64.9%) on the real-world data set. DyCo also decreases the training costs for contextualized models by more than an order of magnitude.
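The core loop described in the abstract, where a large edge-cloud model auto-labels a sensor stream and a small, stream-specific model is periodically re-trained when a policy fires, can be sketched roughly as follows. This is an illustrative outline only, assuming black-box predict/train interfaces and a hypothetical confidence-threshold re-training policy; the function and parameter names here are assumptions for illustration, not the authors' implementation.

```python
# Illustrative sketch (not the authors' code): a big edge-cloud model
# auto-labels frames from one sensor stream, and a small per-stream model
# is periodically re-trained on those labels. Models are treated as
# black-box callables, mirroring the paper's description; the
# confidence-based trigger and all names are hypothetical.
from dataclasses import dataclass, field
from typing import Callable, List, Tuple

Frame = bytes   # placeholder for an image/frame from the stream
Label = dict    # e.g. bounding boxes produced by the big model


@dataclass
class RetrainPolicy:
    """Hypothetical policy: re-train when the small model's mean confidence
    over the last `window` frames drops below `min_confidence`."""
    window: int = 100
    min_confidence: float = 0.6
    _scores: List[float] = field(default_factory=list)

    def observe(self, confidence: float) -> bool:
        self._scores.append(confidence)
        if len(self._scores) < self.window:
            return False
        recent = self._scores[-self.window:]
        return sum(recent) / len(recent) < self.min_confidence


def contextualize(
    stream: List[Frame],
    big_predict: Callable[[Frame], Tuple[Label, float]],          # edge-cloud model
    small_predict: Callable[[Frame], Tuple[Label, float]],        # on-device model
    retrain_small: Callable[[List[Tuple[Frame, Label]]], None],   # black-box trainer
    policy: RetrainPolicy,
) -> None:
    """Auto-label frames with the big model and periodically re-train
    the small, stream-specific model when the policy fires."""
    auto_labeled: List[Tuple[Frame, Label]] = []
    for frame in stream:
        label, _ = big_predict(frame)          # auto-label in the edge-cloud
        auto_labeled.append((frame, label))
        _, small_conf = small_predict(frame)   # monitor the small model
        if policy.observe(small_conf):
            retrain_small(auto_labeled)        # stream-specific re-training
            auto_labeled.clear()
```

The abstract also mentions stride- and accuracy-based policies; in this sketch those would simply be alternative implementations of the `observe` trigger.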
Keywords
Object detector, semi-supervised learning, contextualized, edge computing, edge cloud, deep learning