Benchmarking for syntax-based sentential inference.

COLING '10: Proceedings of the 23rd International Conference on Computational Linguistics: Posters (2010)

Abstract
We propose a methodology for investigating how well NLP systems handle meaning-preserving syntactic variations. We start by presenting a method for the semi-automated creation of a benchmark in which entailment is mediated solely by meaning-preserving syntactic variations. We then use this benchmark to compare a semantic role labeller and two grammar-based RTE systems. We argue that the proposed methodology (i) supports a modular evaluation of the ability of NLP systems to handle the syntax/semantics interface and (ii) permits focused error mining and error analysis.
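To make the setup concrete, the sketch below is a hypothetical illustration (not code from the paper) of what one benchmark entry and a per-variation scoring loop might look like: each entry pairs a sentence with a meaning-preserving syntactic variant, and accuracy is broken down by variation type to support focused error analysis. The `EntailmentPair` fields, the example pair, and the `accuracy_by_variation` helper are all illustrative assumptions.

```python
# Hypothetical sketch of a syntax-mediated entailment benchmark entry
# and a per-variation evaluation loop; not taken from the paper.
from dataclasses import dataclass
from typing import Callable, Dict, Iterable

@dataclass
class EntailmentPair:
    text: str          # source sentence
    hypothesis: str    # syntactic variant expressing the same meaning
    variation: str     # label of the variation mediating the entailment

# Example pair: entailment is mediated solely by the active/passive alternation.
PAIRS = [
    EntailmentPair(
        text="The committee approved the proposal.",
        hypothesis="The proposal was approved by the committee.",
        variation="passivization",
    ),
]

def accuracy_by_variation(system: Callable[[str, str], bool],
                          pairs: Iterable[EntailmentPair]) -> Dict[str, float]:
    """Score an RTE system per variation type to support focused error analysis."""
    correct: Dict[str, int] = {}
    total: Dict[str, int] = {}
    for p in pairs:
        total[p.variation] = total.get(p.variation, 0) + 1
        if system(p.text, p.hypothesis):  # every benchmark pair is a true entailment
            correct[p.variation] = correct.get(p.variation, 0) + 1
    return {v: correct.get(v, 0) / total[v] for v in total}
```

Grouping scores by variation type is what makes the evaluation modular: a system that handles passivization but fails on, say, clefting shows up directly in the per-variation breakdown.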
Keywords
NLP system, syntactic variation, error analysis, error mining, proposed methodology, semantic interface, semantic role labeller, RTE system, modular evaluation, semi-automated creation, syntax-based sentential inference