Probabilistic Models for Ontology Learning: Transitivity in semantic relation learning (2012)

Abstract
Capturing word meaning is one of the challenges of natural language processing (NLP). Formal models of meaning, such as semantic networks of words or concepts, are knowledge repositories used in a variety of applications. To be used effectively, these networks have to be large or at least adapted to specific domains. Our main goal is to contribute practically to research on models for learning semantic networks by covering different aspects of the task. We propose a novel probabilistic model for learning semantic networks that expands existing networks by taking into account both corpus-extracted evidence and the structure of the generated semantic network. The model exploits structural properties of the target relations, such as transitivity, during learning, and introduces some innovations in how the probabilities are estimated. We then propose two extensions of the probabilistic model: a model for learning from a generic domain that can be exploited to extract new information in a specific domain, and an incremental ontology learning system that puts human validation in the learning loop.
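The abstract describes combining corpus-extracted evidence with structural properties of the target relation, such as transitivity, when expanding a semantic network. Below is a minimal sketch of that general idea, not the paper's actual model: the evidence scores, the weighting `alpha`, the structural prior values, and the acceptance threshold are all hypothetical, and the probability estimation innovations mentioned in the abstract are not reproduced here.

```python
# Illustrative sketch: greedily expanding a semantic network with a
# transitive (ISA-style) relation, mixing corpus-extracted evidence with a
# simple structural prior that rewards edges implied by transitive chains.
import networkx as nx

def expand_network(seed_edges, candidates, alpha=0.7, threshold=0.6):
    """seed_edges: known (x, y) relation instances.
    candidates: dict mapping (x, y) -> corpus evidence score in [0, 1].
    alpha: weight of corpus evidence vs. the structural (transitivity) prior."""
    g = nx.DiGraph(seed_edges)
    accepted = []
    # Rank candidates by evidence so strong pairs shape the structure first.
    for (x, y), evidence in sorted(candidates.items(), key=lambda kv: -kv[1]):
        # Structural prior: high if the edge is already implied by a
        # transitive chain x -> ... -> y in the network, low otherwise.
        implied = g.has_node(x) and g.has_node(y) and nx.has_path(g, x, y)
        structure = 1.0 if implied else 0.1
        score = alpha * evidence + (1 - alpha) * structure
        if score >= threshold:
            g.add_edge(x, y)
            accepted.append((x, y, round(score, 3)))
    return g, accepted

if __name__ == "__main__":
    seed = [("dog", "mammal"), ("mammal", "animal")]
    cand = {("dog", "animal"): 0.55, ("dog", "vehicle"): 0.50}
    _, added = expand_network(seed, cand)
    print(added)  # ("dog", "animal") is accepted thanks to the transitive chain
```

In this toy example, ("dog", "animal") has the same rough level of corpus evidence as ("dog", "vehicle"), but only the former is supported by the transitive chain dog -> mammal -> animal already in the network, so only it clears the threshold.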
Keywords
Ontology learning, human validation, word meaning, corpus-extracted evidence, formal models of meaning, probabilistic models, generic domain, specific domain, semantic networks, semantic relation learning