Adversarial Distillation for Efficient Recommendation with External Knowledge

ACM Trans. Inf. Syst. (2019)

Abstract
Integrating external knowledge into recommendation systems has attracted increasing attention in both industry and academia. Recent methods mostly leverage the power of neural networks for effective knowledge representation to improve recommendation performance. However, the heavy deep architectures in existing models are usually incorporated in an embedded manner, which may greatly increase model complexity and lower runtime efficiency. To exploit the power of deep learning for external knowledge modeling while maintaining model efficiency at test time, we reformulate the problem of recommendation with external knowledge as a generalized distillation framework. The general idea is to decouple the complex deep architecture into a separate model, which is used only in the training phase and abandoned at test time. In particular, during training, the external knowledge is processed by a comprehensive teacher model to produce valuable information that teaches a simple and efficient student model. Once the framework is learned, the teacher model is discarded, and only the succinct yet enhanced student model is used to make fast predictions at test time. In this article, we specify the external knowledge as user reviews, and to leverage it effectively, we further extend the traditional generalized distillation framework by designing a Selective Distillation Network (SDNet) with adversarial adaptation and orthogonality constraint strategies that make it more robust to noisy information. Extensive experiments verify that our model not only improves rating-prediction performance but also significantly reduces prediction-time cost compared with several state-of-the-art methods.
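To make the teacher-student setup concrete, the following is a minimal, hypothetical sketch of the generalized distillation idea the abstract describes, not the paper's actual SDNet (which additionally uses adversarial adaptation and orthogonality constraints): a heavier teacher consumes review features during training, a lightweight ID-only student learns to imitate the teacher's predictions alongside the ground-truth ratings, and only the student is evaluated at test time. All names and hyperparameters (ReviewTeacher, IDStudent, alpha) are illustrative.

```python
import torch
import torch.nn as nn

class ReviewTeacher(nn.Module):
    """Heavier model: consumes review features plus IDs (training only)."""
    def __init__(self, n_users, n_items, review_dim, dim=64):
        super().__init__()
        self.user_emb = nn.Embedding(n_users, dim)
        self.item_emb = nn.Embedding(n_items, dim)
        self.review_mlp = nn.Sequential(nn.Linear(review_dim, dim), nn.ReLU())
        self.head = nn.Linear(3 * dim, 1)

    def forward(self, u, i, review_feat):
        h = torch.cat([self.user_emb(u), self.item_emb(i),
                       self.review_mlp(review_feat)], dim=-1)
        return self.head(h).squeeze(-1)  # predicted rating

class IDStudent(nn.Module):
    """Succinct model: IDs only, used alone at test time."""
    def __init__(self, n_users, n_items, dim=64):
        super().__init__()
        self.user_emb = nn.Embedding(n_users, dim)
        self.item_emb = nn.Embedding(n_items, dim)

    def forward(self, u, i):
        # Plain dot-product matching, cheap to evaluate online.
        return (self.user_emb(u) * self.item_emb(i)).sum(-1)

def distillation_loss(student_pred, teacher_pred, rating, alpha=0.5):
    # Imitation term (match the detached teacher output) plus a
    # supervised term (match the observed rating).
    mse = nn.functional.mse_loss
    return alpha * mse(student_pred, teacher_pred.detach()) + \
           (1 - alpha) * mse(student_pred, rating)

# Toy usage: reviews are available during training but not at test time.
teacher = ReviewTeacher(n_users=100, n_items=200, review_dim=32)
student = IDStudent(n_users=100, n_items=200)
u = torch.randint(0, 100, (8,)); i = torch.randint(0, 200, (8,))
reviews = torch.randn(8, 32); ratings = torch.rand(8) * 5
loss = distillation_loss(student(u, i), teacher(u, i, reviews), ratings)
loss.backward()  # at test time, only student(u, i) is evaluated
```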
Keywords
Recommendation system, adversarial training, distillation network, external knowledge, personalization