Discriminative training methods for hidden Markov models: theory and experiments with perceptron algorithms

EMNLP (2002)

Cited 2712 | Viewed 560
Abstract
We describe new algorithms for training tagging models, as an alternative to maximum-entropy models or conditional random fields (CRFs). The algorithms rely on Viterbi decoding of training examples, combined with simple additive updates. We describe theory justifying the algorithms through a modification of the proof of convergence of the perceptron algorithm for classification problems. We give experimental results on part-of-speech tagging and base noun phrase chunking, in both cases showing improvements over results for a maximum-entropy tagger.
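The training scheme the abstract describes, Viterbi decoding of each training sentence followed by a simple additive update toward the gold tag sequence and away from the predicted one, is the structured perceptron. A minimal sketch follows, assuming only emission (tag, word) and transition (previous tag, tag) features and a toy three-tag inventory; the tag set, data, and feature templates are illustrative, not taken from the paper's experiments.

```python
# Sketch of structured-perceptron training for sequence tagging, assuming
# simple emission and transition features (illustrative, not the paper's
# exact feature set or data).
from collections import defaultdict

TAGS = ["D", "N", "V"]  # hypothetical toy tag inventory

def score(w, prev_tag, tag, word):
    """Local score of assigning `tag` after `prev_tag` for `word`."""
    return w[("trans", prev_tag, tag)] + w[("emit", tag, word)]

def viterbi(w, words):
    """Highest-scoring tag sequence under weights `w`."""
    # chart[i][t] = (best score of a sequence ending in tag t at i, backpointer)
    chart = [{t: (score(w, "<s>", t, words[0]), None) for t in TAGS}]
    for i in range(1, len(words)):
        row = {}
        for t in TAGS:
            row[t] = max((chart[i - 1][p][0] + score(w, p, t, words[i]), p)
                         for p in TAGS)
        chart.append(row)
    tag = max(TAGS, key=lambda t: chart[-1][t][0])
    tags = [tag]
    for i in range(len(words) - 1, 0, -1):  # follow backpointers
        tag = chart[i][tag][1]
        tags.append(tag)
    return tags[::-1]

def features(words, tags):
    """Global feature counts of a (sentence, tag sequence) pair."""
    f, prev = defaultdict(int), "<s>"
    for word, tag in zip(words, tags):
        f[("trans", prev, tag)] += 1
        f[("emit", tag, word)] += 1
        prev = tag
    return f

def train(data, epochs=5):
    """Perceptron loop: decode, and on a mistake add gold features
    and subtract predicted features (the additive update)."""
    w = defaultdict(float)
    for _ in range(epochs):
        for words, gold in data:
            pred = viterbi(w, words)
            if pred != gold:
                for k, v in features(words, gold).items():
                    w[k] += v
                for k, v in features(words, pred).items():
                    w[k] -= v
    return w

# Toy training data (hypothetical):
data = [(["the", "dog", "runs"], ["D", "N", "V"]),
        (["a", "cat", "sleeps"], ["D", "N", "V"])]
w = train(data)
print(viterbi(w, ["the", "cat", "runs"]))  # → ['D', 'N', 'V']
```

In practice the paper's experiments also use averaged weights, which the convergence theory motivates; the sketch above omits averaging for brevity.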
Keywords
training example, conditional random field, hidden Markov model, Viterbi decoding, perceptron algorithm, classification problem, part-of-speech tagging, maximum-entropy tagger, base noun phrase chunking, discriminative training method, maximum-entropy model