Examining Human and Automated Ratings of Elementary Students' Writing Quality: A Multivariate Generalizability Theory Application

American Educational Research Journal (2022)

Abstract
We used multivariate generalizability theory to examine the reliability of hand-scoring and automated essay scoring (AES) and to identify how these scoring methods could be used in conjunction to optimize writing assessment. Students (n = 113) included subsamples of struggling writers and non-struggling writers in Grades 3-5 drawn from a larger study. Students wrote six essays across three genres. All essays were hand-scored by four raters and an AES system called Project Essay Grade (PEG). Both scoring methods were highly reliable, but PEG was more reliable for non-struggling students, while hand-scoring was more reliable for struggling students. We provide recommendations regarding ways of optimizing writing assessment and blending hand-scoring with AES.
Keywords
elementary grades, generalizability theory, writing assessment
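To make the reliability framework concrete, below is a minimal sketch of a single-facet generalizability (G) study using simulated scores for a fully crossed persons x raters design, with an ANOVA-based estimate of variance components and a small D-study. This is an illustration only and is not the authors' analysis: the study described above uses a multivariate G-theory design crossing persons with raters and tasks/genres, and the score values, variance magnitudes, and variable names here are assumptions chosen for the example.

```python
"""
Sketch of a single-facet generalizability (G) study on simulated data.
Assumes a fully crossed persons x raters design; the paper's actual design
also includes tasks/genres and a multivariate (per-genre) decomposition.
"""
import numpy as np

rng = np.random.default_rng(0)

n_p, n_r = 113, 4                                  # students and raters, mirroring the abstract
true_person = rng.normal(0, 1.0, size=(n_p, 1))    # person (universe-score) effects
rater_bias = rng.normal(0, 0.3, size=(1, n_r))     # rater severity effects
noise = rng.normal(0, 0.6, size=(n_p, n_r))        # person x rater interaction + residual
scores = true_person + rater_bias + noise          # persons x raters score matrix

grand = scores.mean()
person_means = scores.mean(axis=1, keepdims=True)
rater_means = scores.mean(axis=0, keepdims=True)

# ANOVA (expected mean squares) estimates of the variance components
ms_p = n_r * ((person_means - grand) ** 2).sum() / (n_p - 1)
ms_r = n_p * ((rater_means - grand) ** 2).sum() / (n_r - 1)
ms_pr = ((scores - person_means - rater_means + grand) ** 2).sum() / ((n_p - 1) * (n_r - 1))

var_p = max((ms_p - ms_pr) / n_r, 0.0)   # person (universe-score) variance
var_r = max((ms_r - ms_pr) / n_p, 0.0)   # rater variance
var_pr = ms_pr                           # interaction + residual variance

# D-study: relative generalizability coefficient for a planned number of raters
for n_prime in (1, 2, 4):
    g_coef = var_p / (var_p + var_pr / n_prime)
    print(f"raters={n_prime}: E(rho^2) = {g_coef:.3f}")
```

In this kind of D-study, increasing the number of raters (or essays) shrinks the error term and raises the generalizability coefficient, which is the logic behind the paper's recommendations for blending hand-scoring with AES to optimize reliability at a given scoring cost.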