Fully adaptive recommendation paradigm: top-enhanced recommender distillation for intelligent education systems

Complex & Intelligent Systems (2022)

Abstract
Top-N recommendation has received great attention for providing students with personalized learning guidance in a required subject or domain. Existing approaches mainly aim to maximize the overall accuracy of the recommendation list while ignoring the accuracy of the highly ranked recommended exercises, which seriously dampens students' learning enthusiasm. Motivated by the Knowledge Distillation (KD) technique, we design a fully adaptive recommendation paradigm, the Top-enhanced Recommender Distillation framework (TERD), to improve recommendation quality at the top positions. Specifically, TERD transfers the knowledge of an arbitrary recommender (the teacher network) and injects it into a well-designed student network. The prior knowledge provided by the teacher network, including student-exercise embeddings and candidate exercise subsets, is further used to define the state and action spaces of the student network (a DDQN). In addition, the student network introduces a well-designed state representation scheme and an effective individual ability tracing model to enhance recommendation accuracy at the top positions. TERD follows a flexible, model-agnostic paradigm that not only simplifies the action space of the student network but also improves recommendation accuracy at the top positions, thereby enhancing students' motivation and engagement in e-learning environments. We implement the proposed approach on three well-established, publicly available datasets and evaluate its Top-enhanced performance; the experimental results show that TERD effectively resolves the Top-enhanced recommendation issue.
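The teacher-to-student handoff described in the abstract can be sketched as follows. This is an illustrative toy, not the authors' implementation: the teacher is a random-embedding stand-in for an arbitrary recommender, and the re-ranking step is a placeholder for the trained DDQN; all names and dimensions are hypothetical. The key structural idea it shows is that the teacher's top-k candidate subset defines the student's action space, so the student only re-ranks a small set rather than scoring the full catalogue.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical teacher: learner/exercise embeddings, e.g. from matrix
# factorization (a stand-in for the paper's arbitrary teacher recommender).
n_learners, n_exercises, d = 5, 50, 8
learner_emb = rng.normal(size=(n_learners, d))
exercise_emb = rng.normal(size=(n_exercises, d))

def teacher_candidates(u, k=10):
    """Teacher scores all exercises for learner u and returns the top-k
    candidate subset, which defines the student network's action space."""
    scores = exercise_emb @ learner_emb[u]
    return np.argsort(scores)[::-1][:k]

def student_rerank(u, candidates, top_n=3):
    """Placeholder for the DDQN student: re-scores ONLY the candidate
    subset. A perturbed embedding stands in for learned Q-values."""
    w = learner_emb[u] + 0.1 * rng.normal(size=d)  # proxy for Q-network output
    q = exercise_emb[candidates] @ w
    return candidates[np.argsort(q)[::-1][:top_n]]

cands = teacher_candidates(0)       # action space: 10 exercises, not 50
top = student_rerank(0, cands)      # top positions the student commits to
```

Restricting the action space to the teacher's candidates is what makes the reinforcement-learning student tractable: the DDQN never has to explore the full exercise catalogue.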
Keywords
Top-N recommendation, Knowledge distillation, Reinforcement learning