End-To-End Neural Text Classification For Tibetan

Chinese Computational Linguistics and Natural Language Processing Based on Naturally Annotated Big Data (CCL 2017), 2017

Abstract
As a minority language, Tibetan has received relatively little attention in the field of natural language processing (NLP), especially with regard to current neural network models. In this paper, we investigate three end-to-end neural models for Tibetan text classification. The experimental results show that the end-to-end models outperform traditional Tibetan text classification methods. The dataset and code are available at https://github.com/FudanNLP/Tibetan-Classification.
Keywords
Neural Model, Tibetan Word, Tibetan Script, Fixed-length Vector Representation, Segment Words
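The abstract does not describe the three models themselves, so the sketch below is only an illustrative, assumed example of what an end-to-end neural text classifier over Tibetan input might look like: syllables are split on the tsheg mark, embedded, pooled into a fixed-length vector, and classified with a linear layer. It is a generic stand-in written in PyTorch, not the authors' method.

```python
# Illustrative sketch (assumed, not from the paper): a minimal end-to-end
# neural classifier over Tibetan syllables using PyTorch.
import torch
import torch.nn as nn


def tibetan_syllables(text: str) -> list[str]:
    """Split Tibetan text into syllables on the tsheg mark (U+0F0B)."""
    return [s for s in text.split("\u0f0b") if s]


class SyllableClassifier(nn.Module):
    """Embed each syllable, average the embeddings into a fixed-length
    vector, and classify with a single linear layer."""

    def __init__(self, vocab_size: int, embed_dim: int, num_classes: int):
        super().__init__()
        self.embedding = nn.EmbeddingBag(vocab_size, embed_dim, mode="mean")
        self.classifier = nn.Linear(embed_dim, num_classes)

    def forward(self, token_ids: torch.Tensor, offsets: torch.Tensor) -> torch.Tensor:
        pooled = self.embedding(token_ids, offsets)   # (batch, embed_dim)
        return self.classifier(pooled)                # (batch, num_classes) logits


if __name__ == "__main__":
    # Tiny smoke test with a toy vocabulary of 1000 syllable ids and 12 classes.
    model = SyllableClassifier(vocab_size=1000, embed_dim=64, num_classes=12)
    token_ids = torch.tensor([3, 17, 256, 42, 9])     # two documents, flattened
    offsets = torch.tensor([0, 3])                    # doc 0 = ids[0:3], doc 1 = ids[3:]
    logits = model(token_ids, offsets)
    print(logits.shape)                               # torch.Size([2, 12])
```

The mean-pooled embedding here plays the role of a fixed-length vector representation of the document; the actual models evaluated in the paper may use different encoders and segmentation schemes.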