Language Simplification through Error-Correcting and Grammatical Inference Techniques

Machine Learning (2001)

Abstract
In many language processing tasks, most sentences convey rather simple meanings. Moreover, these tasks have a limited semantic domain that can be properly covered with a simple lexicon and a restricted syntax. Nevertheless, casual users cannot be expected to comply with any kind of formal syntactic restrictions, owing to the inherently "spontaneous" nature of human language. In this work, the use of error-correcting-based learning techniques is proposed to cope with the complex syntactic variability generally exhibited by natural language. In our approach, a complex task is modeled in terms of a basic finite-state model, F, and a stochastic error model, E. F should account for the basic (syntactic) structures underlying the task, which convey its meaning; E should account for general vocabulary variations, word disappearance, superfluous words, and so on. Each "natural" user sentence is thus considered a corrupted version (according to E) of some "simple" sentence of L(F). Adequate bootstrapping procedures are presented that incrementally improve the structure of F while estimating the probabilities of the operations of E. These techniques have been applied to a practical task of moderately high syntactic variability, and results showing the potential of the proposed approach are presented.
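The core idea of the abstract can be illustrated with a small sketch (this is not the paper's actual bootstrapping algorithm): a noisy user sentence is mapped to the nearest "simple" sentence of L(F) under word-level edit operations (substitutions for vocabulary variation, deletions for superfluous words, insertions for word disappearance). The tiny language `L_F` and the example query below are hypothetical.

```python
# Illustrative sketch only: uniform-cost, word-level error correction
# against a small finite language L(F). The paper instead uses a
# stochastic error model E with learned operation probabilities.

def edit_distance(src, tgt):
    """Word-level Levenshtein distance between two token sequences."""
    m, n = len(src), len(tgt)
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        d[i][0] = i
    for j in range(n + 1):
        d[0][j] = j
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if src[i - 1] == tgt[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,        # drop a superfluous word
                          d[i][j - 1] + 1,        # reinsert a missing word
                          d[i - 1][j - 1] + cost) # vocabulary substitution
    return d[m][n]

def correct(sentence, simple_language):
    """Map a noisy sentence to the closest sentence of L(F)."""
    words = sentence.split()
    return min(simple_language, key=lambda s: edit_distance(words, s.split()))

# Hypothetical L(F) for a toy flight-query task:
L_F = ["show flights to boston", "show flights to denver"]
print(correct("uh please show me the flights to boston", L_F))
# → show flights to boston
```

In the paper's setting the unit costs above would be replaced by probabilities estimated for E, and the minimization would be carried out over the (possibly infinite) language of the finite-state model F rather than an explicit sentence list.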
Keywords
language processing,error-correcting techniques,bootstrapping,incremental learning algorithms,edit operations