Statistical practice and transparent reporting in the neurosciences: Preclinical motor behavioral experiments

Olivia Hogue, Tucker Harvey, Dena Crozier, Claire Sonneborn, Abagail Postle, Hunter Block-Beach, Eashwar Somasundaram, Francis J May, Monica Snyder Braun, Felicia L Pasadyn, Khandi King, Casandra Johnson, Mary A Dolansky, Nancy A Obuchowski, Andre G Machado, Kenneth B Baker, Jill S Barnholtz-Sloan

PLOS ONE (2022)

Abstract
Longitudinal and behavioral preclinical animal studies generate complex data, which may not be well matched to statistical approaches common in this literature. Analyses that do not adequately account for complexity may result in overly optimistic study conclusions, with consequences for reproducibility and translational decision-making. Recent work interrogating methodological shortcomings in animal research has not yet comprehensively investigated statistical shortcomings in the analysis of complex longitudinal and behavioral data. To this end, the current cross-sectional meta-research study rigorously reviewed published mouse or rat controlled experiments for motor rehabilitation in three neurologic conditions to evaluate statistical choices and reporting. Medline via PubMed was queried in February 2020 for English-language articles published January 1, 2017 through December 31, 2019. Included were articles that used rat or mouse models of stroke, Parkinson's disease, or traumatic brain injury, employed a therapeutic controlled experimental design to determine efficacy, and included at least one functional behavioral assessment or global evaluation of function. A total of 241 articles from 99 journals were evaluated independently by a team of nine raters. Articles were assessed for statistical handling of non-independence, animal attrition, outliers, ordinal data, and multiplicity. Exploratory analyses evaluated whether transparency or statistical choices differed as a function of journal factors. A majority of articles failed to account for sources of non-independence in the data (74-93%) and/or did not analytically account for mid-treatment animal attrition (78%). Ordinal variables were often treated as continuous (37%), outliers were predominantly not mentioned (83%), and plots often concealed the distribution of the data (51%). Statistical choices and transparency did not differ with regard to journal rank or reporting requirements. Statistical misapplication can result in invalid experimental findings, and inadequate reporting obscures such errors. Clinician-scientists evaluating preclinical work for translational promise should be mindful of commonplace errors. Interventions are needed to improve statistical decision-making in preclinical behavioral neurosciences research.
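
Illustrative note (not part of the article): the abstract's central concern that analyses "account for sources of non-independence" refers to repeated measurements taken on the same animal over time. The sketch below shows one common way to model such clustering, a linear mixed-effects model with a random intercept per animal, using Python's statsmodels. The simulated data and variable names (animal_id, week, group, score) are hypothetical and this is only one of several approaches the article's authors might consider appropriate.

    # Purely illustrative sketch (not taken from the article): fitting a linear
    # mixed-effects model so repeated measurements within the same animal are
    # not treated as independent observations. All names are hypothetical.
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(42)

    # Simulate a longitudinal design: 20 animals (10 treated, 10 control),
    # each measured weekly for 6 weeks on a behavioral score.
    animals = np.repeat(np.arange(20), 6)
    weeks = np.tile(np.arange(6), 20)
    group = (animals < 10).astype(int)                 # 1 = treatment, 0 = control
    animal_effect = rng.normal(0, 2, 20)[animals]      # animal-level clustering
    score = (10 + 0.5 * weeks + 1.0 * group * weeks
             + animal_effect + rng.normal(0, 1, animals.size))

    df = pd.DataFrame({"animal_id": animals, "week": weeks,
                       "group": group, "score": score})

    # A random intercept for each animal absorbs within-animal correlation
    # across time points, unlike a t-test or ANOVA on pooled observations.
    model = smf.mixedlm("score ~ week * group", data=df, groups=df["animal_id"])
    result = model.fit()
    print(result.summary())

In this sketch, the treatment effect is read from the week-by-group interaction while the random intercept accounts for the fact that observations from the same animal are correlated, which is the kind of structure the reviewed analyses frequently ignored.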