Cross-lingual Transfer of Knowledge in Distributional Language Models: Experiments in Hungarian

Acta Linguistica Academica (2022)

Abstract
In this paper, we argue that the convincing performance of recent deep-neural-model-based NLP applications demonstrates that the distributionalist approach to language description has proven more successful than the earlier subtle rule-based models created by the generative school. The now ubiquitous neural models handle ambiguity naturally and achieve human-like linguistic performance, although most of their training consists only of noisy raw linguistic data without any multimodal grounding or external supervision, refuting Chomsky's argument that no generic neural architecture can arrive at the linguistic performance exhibited by humans given the limited input available to children. In addition, we demonstrate in experiments with Hungarian as the target language that the shared internal representations in multilingually trained versions of these models enable them to transfer specific linguistic skills, including structured annotation skills, from one language to another remarkably efficiently.
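
The zero-shot transfer setting described above can be illustrated with a minimal sketch (not the authors' code) using the Hugging Face transformers library: a multilingual encoder whose token-classification head was fine-tuned on English NER data only is applied directly to Hungarian text, relying solely on the shared multilingual representations. The checkpoint name below is a hypothetical placeholder, not a real model ID.

```python
# Minimal sketch of zero-shot cross-lingual NER transfer (assumption, not the
# authors' implementation): an encoder such as mBERT or XLM-R, fine-tuned for
# NER on English data only, is used to tag Hungarian text with no Hungarian
# labels seen during training.
from transformers import AutoTokenizer, AutoModelForTokenClassification, pipeline

# Hypothetical checkpoint name; any multilingual encoder with an NER head
# trained on a source language (e.g. English) would play this role.
MODEL_NAME = "some-multilingual-ner-checkpoint"  # placeholder, not a real model ID

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForTokenClassification.from_pretrained(MODEL_NAME)

# Standard token-classification pipeline; aggregation merges word-piece
# predictions back into whole entity spans.
ner = pipeline(
    "token-classification",
    model=model,
    tokenizer=tokenizer,
    aggregation_strategy="simple",
)

# Hungarian input sentence, even though the NER head never saw Hungarian labels.
print(ner("Kovács János Budapesten dolgozik a Magyar Tudományos Akadémián."))
```

The same pattern extends to other structured annotation tasks mentioned in the paper, such as meaning representation parsing, by swapping the task-specific head while keeping the shared multilingual encoder.
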
Keywords
distributional vs. generative models of language, zero-shot cross-lingual knowledge transfer, multilingual contextual neural language models, meaning representation parsing, named entity recognition