Joint Word and Entity Embeddings for Entity Retrieval from a Knowledge Graph

European Conference on Information Retrieval (2020)

Abstract
Recent years have witnessed the emergence of novel models for ad-hoc entity search in knowledge graphs of varying complexity. Since these models are based on direct term matching, their accuracy can suffer from a mismatch between the vocabularies used in queries and in entity descriptions. Although recent literature has reported successful applications of word embeddings to address vocabulary mismatch in ad-hoc document retrieval and of knowledge graph entity embeddings to address knowledge graph noisiness and incompleteness, the utility of joint word and entity embeddings for entity search in knowledge graphs has remained relatively unexplored. In this paper, we propose Knowledge graph Entity and Word Embedding for Retrieval (KEWER), a novel method to embed entities and words into the same low-dimensional vector space. KEWER takes into account a knowledge graph's local structure and its structural components, such as entities, attributes, and categories, and is designed specifically for entity search. It is based on random walks over the knowledge graph and can be considered a hybrid of word and network embedding methods. Like word embedding methods, KEWER uses contextual co-occurrences as training data; however, it treats words and entities as distinct objects. Like network embedding methods, it takes into account the knowledge graph's local structure; however, it also differentiates between structural components. Experiments on publicly available entity search benchmarks against state-of-the-art word and joint word-entity embedding methods indicate that combining KEWER with BM25F yields a consistent improvement in retrieval accuracy over BM25F alone.
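To make the described idea concrete, the sketch below illustrates (in Python) how joint word and entity embeddings of this general kind could be trained: short random walks over a toy knowledge graph emit mixed sequences of entity identifiers and words from literals, a Skip-gram model places both token types in one vector space, and the resulting query-entity similarity is interpolated with a lexical BM25F score for re-ranking. This is an illustrative approximation only, not the authors' KEWER implementation; the toy triples, sample_walks, embedding_score, combined_score, the alpha weight, and the assumption of gensim >= 4.0 are all hypothetical choices made for the example.

# Illustrative sketch: joint word/entity embeddings from random walks over a
# toy knowledge graph, trained with gensim's Skip-gram (gensim >= 4.0 assumed).
import random
import numpy as np
from gensim.models import Word2Vec

# Toy knowledge graph: (subject entity, predicate, object) triples.
# Objects are either other entities ("<...>") or literal attribute strings.
triples = [
    ("<Einstein>", "birthPlace", "<Ulm>"),
    ("<Einstein>", "field", "<Physics>"),
    ("<Einstein>", "comment", "theoretical physicist relativity"),
    ("<Ulm>", "country", "<Germany>"),
]

def sample_walks(triples, num_walks=200, walk_len=4):
    """Sample short random walks; literal objects are expanded into word tokens."""
    out_edges = {}
    for s, _, o in triples:
        out_edges.setdefault(s, []).append(o)
    walks = []
    start_nodes = list(out_edges)
    for _ in range(num_walks):
        node = random.choice(start_nodes)
        walk = [node]
        for _ in range(walk_len):
            if node not in out_edges:
                break
            node = random.choice(out_edges[node])
            if node.startswith("<"):      # entity token: keep walking
                walk.append(node)
            else:                         # literal: emit its words and stop
                walk.extend(node.split())
                break
        walks.append(walk)
    return walks

walks = sample_walks(triples)
# Skip-gram over mixed entity/word sequences places both in the same space.
model = Word2Vec(walks, vector_size=50, window=5, sg=1, min_count=1, epochs=20)

def embedding_score(query_words, entity):
    """Cosine similarity between averaged query word vectors and an entity vector."""
    q_vecs = [model.wv[w] for w in query_words if w in model.wv]
    if not q_vecs or entity not in model.wv:
        return 0.0
    q = np.mean(q_vecs, axis=0)
    e = model.wv[entity]
    return float(np.dot(q, e) / (np.linalg.norm(q) * np.linalg.norm(e) + 1e-9))

def combined_score(query_words, entity, bm25f_score, alpha=0.5):
    """Hypothetical linear interpolation of embedding and BM25F scores for re-ranking."""
    return alpha * embedding_score(query_words, entity) + (1 - alpha) * bm25f_score

For instance, combined_score(["relativity", "physicist"], "<Einstein>", bm25f_score=1.2) would re-rank an entity by mixing its lexical BM25F score with its embedding-based similarity to the query, which is the general flavor of combination the abstract reports improving over BM25F alone.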
Keywords
entity embeddings, entity retrieval, joint word, knowledge