Assessing Correlations Between Competency Ratings and Assessment-Specific Global Rating Scores Across Seven Core Clinical Clerkships at the University of Michigan Medical School.

Academic Medicine: Journal of the Association of American Medical Colleges (2023)

Abstract
Purpose: A major challenge in implementing competency-based medical education (CBME) in undergraduate medical education (UME) is that students are assessed in a wide range of contexts and learning environments. These diverse learning environments may emphasize development of certain competencies more than others. This may be particularly true of the core clerkships, where students encounter a broad spectrum of medical disciplines, each with its own culture,1 values, and approach to patient care. These contextual differences are not necessarily addressed by measures intended to standardize assessment (such as entrustable professional activities or milestones, entrustment scales, and rater training or other faculty development initiatives). One step in optimally implementing competency-based programmatic assessment within clerkship education2 is to identify which competencies are most culturally valued—and therefore potentially best assessed—by different clerkships. This cross-sectional study investigated how competency assessments correlated with assessment-specific global rating scores (GRS) across all core clinical clerkships and within specific clerkships at the University of Michigan Medical School (UMMS).

Method: Clinical assessment forms assess 9 competencies within 6 UMMS competency domains (5 ACGME + 1 institutional) using a 3-point Likert scale, while also soliciting a 9-point GRS. Clinical assessment forms for 524 students who each completed 7 core clinical clerkships from 2018 to 2021 were analyzed (n = 25,995 assessments) using linear mixed models. GRS was regressed on (1) the average competency score per assessment form across all clerkships; (2) each of the 9 individual competency scores per assessment form across all clerkships; and (3) the interaction between the 9 individual competency scores and individual clerkships. All models included random intercepts for student and assessor to account for non-independence of the outcome.
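The first model described above can be written out as a linear mixed model with crossed random intercepts for student and assessor; the notation below is ours (the abstract does not give an explicit equation), with assessment form k completed by assessor j for student i:

```latex
\mathrm{GRS}_{ijk} \;=\; \beta_0 \;+\; \beta_1 \,\overline{\mathrm{Comp}}_{ijk}
  \;+\; u_i \;+\; v_j \;+\; \varepsilon_{ijk},
\qquad
u_i \sim \mathcal{N}(0,\sigma_u^2),\;\;
v_j \sim \mathcal{N}(0,\sigma_v^2),\;\;
\varepsilon_{ijk} \sim \mathcal{N}(0,\sigma^2)
```

Here $\overline{\mathrm{Comp}}_{ijk}$ is the average competency score on the form, $u_i$ and $v_j$ are the student and assessor random intercepts, and the reported $\beta = 2.41$ corresponds to $\beta_1$. Models (2) and (3) replace the single average with the 9 individual competency scores and their clerkship interactions, respectively.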
Results: Overall, average competency score was positively associated with GRS across all clerkships (β = 2.41, P < .0001, r2 = 0.49). Additionally, there were notable differences in the relative strength of association between individual competencies and GRS. Across all clerkships, medical knowledge/knowledge of basic and clinical sciences (MK-SM) was most strongly associated with increases in GRS (β = 0.4162), whereas professionalism/responsibility and accountability to patients, co-workers, and profession (PR-RA) was most weakly associated with GRS (β = 0.1307). Specific competencies were also variably associated with GRS across different clerkships. MK-SM was most strongly associated with GRS in the internal medicine and surgery clerkships. Practice-based learning and improvement/self-directed learning was most strongly associated with GRS in surgery and obstetrics and gynecology. Finally, communication/patients and families (C-PF) was most strongly associated with GRS in the pediatrics and psychiatry clerkships. Some competencies, such as patient care/clinical reasoning, patient care/history physical, and patient care/management plan, showed little variation between clerkships.

Discussion: These findings demonstrate that individual competencies do not correlate equally with GRS. The MK-SM competency had the strongest association with GRS regardless of clerkship, but the degree to which other competencies correlated with GRS varied significantly between clerkships. One possible explanation is that certain competencies may be valued differently in different clerkships. Patient care competencies were similarly correlated across all clerkships, suggesting these may be universally important competencies.
Significance: Understanding how competency-oriented assessments align with global rating assessments can provide valuable insight regarding how to implement programmatic assessment in UME.2,3 These results are consistent with the intuitive concept that different specialties would inherently value different competencies. These findings can help bridge the gap between best practices and implementation by providing students with clearer expectations regarding competency assessment in different clerkship learning environments. Articulating this aspect of the “hidden curriculum” can help learners navigate the complicated and evolving competency assessment environment, and therefore meaningfully contribute to CBME implementation in UME.

Acknowledgments: The authors wish to thank Dr. Douglas Gelb for substantive editing of the abstract.