A light-weight edge-enabled knowledge distillation technique for next location prediction of multitude transportation means

Future Generation Computer Systems: The International Journal of eScience (2024)

Abstract
In this article we study how knowledge can be transferred between mobility models that represent different locations and means of transport. Specifically, we propose the use of knowledge distillation and fine-tuning techniques to build accurate next-location prediction models on a light-weight architecture that significantly reduces inference time. Our goal is not to add one more model to the mobility literature; instead, we believe it is of paramount importance to show how well-trained mobility predictors can be managed, specialized, and enhanced. In addition, we take into consideration the continuously generated mobility data and the limited resources of the devices that run the models, and focus on how their computational requirements can be reduced. We evaluate three variations of knowledge distillation, namely the distilled agent, the double-distilled agent, and the pre-distilled agent, with the latter achieving an overall improvement of 6.57% in distance error compared with a state-of-the-art next-location predictor that does not use knowledge distillation, and a 99.8% reduction in inference time on edge devices through light-weight machine learning frameworks such as TensorFlow Lite.
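To make the distillation and edge-deployment pipeline concrete, the following is a minimal sketch, not the authors' implementation: a hypothetical teacher and a smaller student next-location model (layer sizes, NUM_LOCATIONS, SEQ_LEN, temperature, and alpha are all illustrative assumptions), a soft-target distillation step, and conversion of the distilled student to TensorFlow Lite for on-device inference.

```python
# Hedged sketch of knowledge distillation for a next-location predictor,
# followed by TensorFlow Lite conversion for edge inference.
# All model sizes and hyperparameters below are illustrative assumptions.
import tensorflow as tf

NUM_LOCATIONS = 100   # hypothetical size of the location vocabulary
SEQ_LEN = 10          # hypothetical length of the visited-location history

def build_model(hidden_units):
    """Simple sequence model over location IDs; architecture is illustrative."""
    return tf.keras.Sequential([
        tf.keras.layers.Embedding(NUM_LOCATIONS, 32),
        tf.keras.layers.LSTM(hidden_units),
        tf.keras.layers.Dense(NUM_LOCATIONS),   # logits over the next location
    ])

teacher = build_model(hidden_units=128)   # stands in for a well-trained predictor
student = build_model(hidden_units=16)    # light-weight architecture for the edge

temperature = 2.0
kld = tf.keras.losses.KLDivergence()
ce = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)
optimizer = tf.keras.optimizers.Adam(1e-3)

@tf.function
def distill_step(x, y, alpha=0.5):
    """One distillation step: mix hard-label loss with soft-target loss."""
    teacher_logits = teacher(x, training=False)
    with tf.GradientTape() as tape:
        student_logits = student(x, training=True)
        soft_loss = kld(
            tf.nn.softmax(teacher_logits / temperature),
            tf.nn.softmax(student_logits / temperature),
        )
        hard_loss = ce(y, student_logits)
        loss = alpha * hard_loss + (1.0 - alpha) * soft_loss
    grads = tape.gradient(loss, student.trainable_variables)
    optimizer.apply_gradients(zip(grads, student.trainable_variables))
    return loss

# Synthetic batch just to exercise the step; real data would be visit sequences.
x = tf.random.uniform((8, SEQ_LEN), maxval=NUM_LOCATIONS, dtype=tf.int32)
y = tf.random.uniform((8,), maxval=NUM_LOCATIONS, dtype=tf.int32)
distill_step(x, y)

# Convert the distilled student to TensorFlow Lite for on-device inference.
converter = tf.lite.TFLiteConverter.from_keras_model(student)
tflite_model = converter.convert()
with open("student_next_location.tflite", "wb") as f:
    f.write(tflite_model)
```

In practice the soft-target term lets the small student mimic the teacher's full output distribution over candidate locations rather than only the single ground-truth label, which is the mechanism the distilled, double-distilled, and pre-distilled agents described above build on.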
Key words
Mobility, Next-location prediction, Deep learning, Transfer knowledge, Knowledge distillation, Fine-tuning