A Target Population Derived Method for Developing a Competency Standard in Radiograph Interpretation

Teaching and Learning in Medicine (2022)

Abstract
Construct: For assessing skills of visual diagnosis such as radiograph interpretation, competency standards are often developed in an ad hoc manner, with a poorly delineated connection to the target clinical population.

Background: Commonly used methods of assessing competency in radiograph interpretation are subjective and potentially biased: they rely on small case samples, subjective evaluations, or an expert-generated case mix rather than a representative sample from the clinical field. Further, while digital platforms are available to assess radiograph interpretation skill against an objective standard, they have not adopted a data-driven competency standard that informs educators and the public that a physician has achieved adequate mastery to enter practice, where they will be making high-stakes clinical decisions.

Approach: Operating on a purposeful sample of radiographs drawn from the clinical domain, we adapted the Ebel Method, an established standard-setting method, to derive a defensible, clinically relevant mastery-learning competency standard for the skill of radiograph interpretation, as a model for deriving competency thresholds in visual diagnosis. Using a previously established digital platform, emergency physicians interpreted pediatric musculoskeletal extremity radiographs. Using one-parameter item response theory, these data were used to categorize radiographs into interpretation difficulty terciles (i.e., easy, intermediate, hard). A panel of emergency physicians, orthopedic surgeons, and plastic surgeons rated each radiograph for clinical significance (low, medium, high). These data were then used to create a three-by-three matrix in which radiographic diagnoses were categorized by interpretation difficulty and clinical significance. Subsequently, a multidisciplinary panel that included medical and parent stakeholders determined acceptable accuracy for each of the nine cells.
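The difficulty-tercile step described above can be sketched as follows. This is an illustrative simulation, not the study's data or code: the response matrix is synthetic, and the logit-of-error-rate difficulty estimate is a crude stand-in for a full one-parameter (Rasch) IRT fit.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated response matrix: 244 physicians x 1,835 radiographs
# (True = correct interpretation), generated from a 1PL model.
n_raters, n_items = 244, 1835
true_difficulty = rng.normal(0, 2, n_items)   # item difficulty (logits)
ability = rng.normal(0, 1, n_raters)          # rater ability (logits)
p = 1 / (1 + np.exp(-(ability[:, None] - true_difficulty[None, :])))
responses = rng.random((n_raters, n_items)) < p

# Crude 1PL difficulty estimate: logit of each item's error rate.
# (A real Rasch fit would use joint or conditional maximum likelihood.)
p_correct = responses.mean(axis=0).clip(1e-3, 1 - 1e-3)
difficulty = np.log((1 - p_correct) / p_correct)

# Split items into terciles: 0 = easy, 1 = intermediate, 2 = hard.
cuts = np.quantile(difficulty, [1 / 3, 2 / 3])
tercile = np.digitize(difficulty, cuts)
```

Binning on empirical terciles guarantees the three difficulty strata are roughly equal in size, which is what makes the later three-by-three matrix a balanced sampling frame.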
An overall competency standard was derived from the weighted sum. Finally, to examine the consequences of implementing this standard, we report the types of diagnostic errors that may occur under the derived competency standard.

Findings: To determine radiograph interpretation difficulty scores, 244 emergency physicians interpreted 1,835 pediatric musculoskeletal extremity radiographs. Analyses of these data showed a median interpretation difficulty of -1.8 logits (IQR -4.1, 3.2), with a significant difference in difficulty across body regions (p < 0.0001). Physician review classified 1,055 (57.8%) radiographs as low, 424 (23.1%) as medium, and 356 (19.1%) as high clinical significance. The multidisciplinary panel suggested acceptable scores ranging from 76% to 95% across the cells of the three-by-three table, and the sum of equal-weighted scores yielded an overall performance-based competency standard of 85.5% accuracy. Of the 14.5% of diagnostic interpretation errors that could occur at the bedside if this competency standard were implemented, 9.8% would involve radiographs of low clinical significance, while 2.5% and 2.3% would involve radiographs of medium and high clinical significance, respectively.

Conclusion(s): This study's novel integration of radiograph selection and a standard-setting method can be used to empirically derive an evidence-based competency standard for radiograph interpretation and can serve as a model for deriving competency thresholds for clinical tasks emphasizing visual diagnosis.
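The Ebel-style aggregation can be sketched as below. The per-cell acceptable accuracies here are hypothetical placeholders chosen only to fall within the panel's reported 76-95% range and to average to the reported 85.5%; the study's actual cell values are not given in the abstract. With equal weights across the nine cells, the weighted sum reduces to a simple mean.

```python
import numpy as np

# Hypothetical acceptable accuracies (%) for the 3x3 matrix.
# Rows: easy / intermediate / hard interpretation difficulty;
# columns: low / medium / high clinical significance.
acceptable = np.array([
    [95.0, 90.0, 88.0],   # easy radiographs
    [87.5, 86.0, 85.0],   # intermediate radiographs
    [82.0, 80.0, 76.0],   # hard radiographs
])

# Equal weighting across the nine cells, as described in the abstract.
weights = np.full((3, 3), 1 / 9)
overall = float((weights * acceptable).sum())
print(f"Overall competency standard: {overall:.1f}%")
```

A non-uniform `weights` matrix would let a panel emphasize, say, high-significance cells, which is the usual flexibility of the Ebel Method; the equal-weighted variant is what yields the single 85.5% threshold.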