Lexicalized Well-Founded Grammars: Learnability and Merging

msra(2005)

Cited by 24
Abstract
This paper presents the theoretical foundation of a new type of constraint-based grammars, Lexicalized Well-Founded Grammars, which are adequate for modeling human language and are learnable. These properties make the grammars suitable for building robust, scalable natural language understanding systems. Our grammars capture both syntax and semantics and have two types of constraints at the rule level: one for semantic composition and one for ontology-based semantic interpretation. We prove that these grammars can always be learned from a small set of semantically annotated, ordered representative examples, using a relational learning algorithm. We introduce a new semantic representation for natural language, which is suitable for ontology-based interpretation and allows us to learn the compositional constraints together with the grammar rules. Besides the learnability results, we give a principle for grammar merging. The experiments presented in this paper show promising results for the adequacy of these grammars in learning natural language. Relatively simple linguistic knowledge is needed to build the small set of semantically annotated examples required for grammar induction.
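The abstract describes grammar rules that pair syntax with a semantic-composition constraint. As a rough illustration of that idea (not the paper's actual formalism: the rule format, lexicon, and dictionary-based semantics below are all invented for this sketch), one can imagine a binary rule whose constraint builds the parent's semantic representation from its children's:

```python
from dataclasses import dataclass

# Toy sketch only: a grammar rule that couples a syntactic rewrite with a
# compositional-semantics constraint, loosely in the spirit of Lexicalized
# Well-Founded Grammars. All names and structures here are illustrative
# assumptions, not the paper's definitions.

@dataclass
class Rule:
    lhs: str          # parent category
    rhs: tuple        # sequence of child categories
    compose: callable # constraint: builds parent semantics from children's

# Lexicon: word -> (category, semantics). A flat attribute-value dict
# stands in for an ontology-based semantic representation.
LEXICON = {
    "major": ("Adj", {"mod": "major"}),
    "damage": ("Noun", {"head": "damage"}),
}

# One rule: Noun2 -> Adj Noun, whose composition constraint merges the
# modifier's semantics into the head noun's semantics.
RULES = [
    Rule("Noun2", ("Adj", "Noun"),
         compose=lambda adj_sem, noun_sem: {**noun_sem, **adj_sem}),
]

def parse_pair(w1, w2):
    """Try to combine two adjacent words with one binary rule."""
    c1, s1 = LEXICON[w1]
    c2, s2 = LEXICON[w2]
    for rule in RULES:
        if rule.rhs == (c1, c2):
            return rule.lhs, rule.compose(s1, s2)
    return None

print(parse_pair("major", "damage"))
# -> ('Noun2', {'head': 'damage', 'mod': 'major'})
```

In the paper's setting, such composition constraints are not hand-written but learned together with the rules from the annotated examples; the sketch only shows what "a constraint at the rule level for semantic composition" might mean operationally.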
Keywords
natural language understanding, inductive logic programming, ontology-based semantic representation, constraint-based grammar induction, computer science