Model Reuse With Reduced Kernel Mean Embedding Specification
IEEE Transactions on Knowledge and Data Engineering (2023)
Abstract
Given a publicly available pool of machine learning models constructed for various tasks, when a user plans to build a model for her own machine learning application, is it possible to build upon models in the pool so that previous efforts on these existing models can be reused rather than starting from scratch? A grand challenge here is how to find models that are helpful for the current application without accessing the raw training data of the models in the pool. In this paper, we present a two-phase framework. In the upload phase, when a model is uploaded into the pool, we construct a reduced kernel mean embedding (RKME) as a specification for the model. In the deployment phase, the relatedness between the current task and the pre-trained models is measured based on their RKME specifications. Theoretical results and extensive experiments validate the effectiveness of our approach.
Keywords
Machine learning, data mining, information theory, model reuse, kernel mean embedding, privacy, domain adaptation