Sample Size Requirements For Training To A Kappa Agreement Criterion On Clinical Dementia Ratings

Alzheimer Disease & Associated Disorders (2010)

Abstract
The Clinical Dementia Rating (CDR) is a valid and reliable global measure of dementia severity. Diagnosis and transition across stages hinge on its consistent administration. Reports of CDR rating reliability have been based on 1 or 2 test cases at each severity level; agreement (kappa) statistics based on so few rated cases have large error, and their confidence intervals are incorrect. Simulations varied the number of test cases and their distribution across CDR stages to derive the sample size yielding 95% confidence that the estimated kappa is at least 0.60. We found that testing raters on 5 or more patients per CDR level (total N = 25) will yield the desired confidence in the estimated kappa, and that if the test involves greater representation of CDR stages that are harder to evaluate, at least 42 ratings are needed. Testing newly trained raters with at least 5 patients per CDR stage will provide valid estimation of rater consistency, given that the point estimate for kappa is roughly 0.80; fewer test cases increase the standard error, and an unequal distribution of test cases across CDR stages will lower kappa and increase error.
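As a rough illustration of the simulation logic described above (not the paper's exact design), the Python sketch below estimates the sampling distribution of Cohen's kappa when a newly trained rater matches the gold-standard CDR stage with an assumed probability of 0.80 and any disagreement falls on an adjacent stage. The values of `cases_per_level`, `p_agree`, and the adjacent-stage error model are illustrative assumptions, not parameters reported in the study.

```python
import numpy as np


def cohen_kappa(y1, y2, n_levels):
    """Cohen's kappa for two raters scoring the same cases."""
    conf = np.zeros((n_levels, n_levels))
    for a, b in zip(y1, y2):
        conf[a, b] += 1
    n = conf.sum()
    p_obs = np.trace(conf) / n
    p_exp = (conf.sum(axis=0) * conf.sum(axis=1)).sum() / n ** 2
    return (p_obs - p_exp) / (1 - p_exp)


def simulate(cases_per_level=5, n_levels=5, p_agree=0.80,
             n_sims=10_000, seed=0):
    """Sampling distribution of estimated kappa for a trainee who agrees
    with the gold-standard CDR stage with probability p_agree; on a miss,
    the rating shifts to a neighbouring stage (assumed error model)."""
    rng = np.random.default_rng(seed)
    gold = np.repeat(np.arange(n_levels), cases_per_level)
    kappas = []
    for _ in range(n_sims):
        rated = gold.copy()
        miss = rng.random(gold.size) > p_agree
        shift = rng.choice([-1, 1], size=miss.sum())
        rated[miss] = np.clip(gold[miss] + shift, 0, n_levels - 1)
        kappas.append(cohen_kappa(gold, rated, n_levels))
    kappas = np.array(kappas)
    return kappas.mean(), np.quantile(kappas, 0.025)


mean_k, lower_2_5 = simulate()
print(f"mean kappa ~ {mean_k:.2f}, 2.5th percentile ~ {lower_2_5:.2f}")
```

Comparing the 2.5th percentile against the 0.60 criterion for different values of `cases_per_level` mimics the question the paper addresses: how many test cases per CDR stage are needed before a small-sample kappa estimate can be trusted.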
Keywords
agreement, kappa, CDR, training