Language Models Are Better Than Humans at Next-Token Prediction.

Trans Mach Learn Res (2024)

Abstract
Current language models are considered to have sub-human capabilities at natural language tasks like question answering or writing code. However, language models are not trained to perform well at these tasks; they are trained to accurately predict the next token given the previous tokens in tokenized text. It is not clear whether language models are better or worse than humans at next-token prediction. To try to answer this question, we performed two distinct experiments to directly compare humans and language models on this front: one measuring top-1 accuracy and the other measuring perplexity. In both experiments, we find humans to be consistently worse than even relatively small language models like GPT3-Ada at next-token prediction.
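
As a rough illustration of the two metrics named in the abstract (not the authors' code), the sketch below computes top-1 next-token accuracy and perplexity for a causal language model. GPT-2 via the Hugging Face `transformers` library stands in for GPT3-Ada, which is only reachable through the OpenAI API, and the sample text is an arbitrary placeholder.

```python
# Minimal sketch of the two evaluation metrics from the abstract:
# top-1 next-token accuracy and perplexity. GPT-2 is used as a stand-in
# model; any Hugging Face causal LM would work the same way.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

model = GPT2LMHeadModel.from_pretrained("gpt2")
tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model.eval()

text = "The quick brown fox jumps over the lazy dog."  # placeholder text
ids = tokenizer(text, return_tensors="pt").input_ids   # shape (1, seq_len)

with torch.no_grad():
    logits = model(ids).logits  # shape (1, seq_len, vocab_size)

# The model at position t predicts token t+1, so compare the argmax at
# positions 0..seq_len-2 against the actual tokens at positions 1..seq_len-1.
preds = logits[0, :-1].argmax(dim=-1)  # greedy top-1 predictions
targets = ids[0, 1:]                   # ground-truth next tokens
top1_acc = (preds == targets).float().mean().item()

# Perplexity = exp(mean negative log-likelihood of the true next tokens).
log_probs = torch.log_softmax(logits[0, :-1], dim=-1)
nll = -log_probs.gather(1, targets.unsqueeze(1)).mean()
ppl = torch.exp(nll).item()

print(f"top-1 accuracy: {top1_acc:.3f}, perplexity: {ppl:.2f}")
```

The paper's human comparison replaces the model's argmax with a person's single guess at the next token (for top-1 accuracy) and an elicited probability over tokens (for perplexity); the model-side computation is as above.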