Academic Journal

Impact of panelists' experience on script concordance test scores of medical students
Document Type
Discussion
Source
BMC Medical Education. September 17, 2020, Vol. 20 Issue 1
Subject
Rankings
Analysis
Information accessibility
College students -- Analysis
Medical students -- Analysis
Information management -- Analysis
Medical schools -- Analysis
Language
English
ISSN
1472-6920
Abstract
Author(s): Olivier Peyrony^1, Alice Hutin^2, Jennifer Truchot^3,4, Raphaël Borie^5,6, David Calvet^7,8, Adrien Albaladejo^4, Yousrah Baadj^9, Pierre-Emmanuel Cailleaux^10,11, Martin Flamant^12,13,14, Clémence Martin^15,16, Jonathan Messika^14,17,18, Alexandre Meunier^9, Mariana Mirabel^14,19,20, Victoria Tea^21, Xavier [...]
Background: The evaluation process of French medical students will evolve in the next few years in order to improve assessment validity. Script concordance testing (SCT) offers the possibility to assess medical knowledge alongside clinical reasoning under conditions of uncertainty. In this study, we aimed to compare the SCT scores of a large cohort of undergraduate medical students according to the experience level of the reference panel.

Methods: In 2019, the authors developed a 30-item SCT and sent it to experts with varying levels of experience. Data analysis included score comparisons with paired Wilcoxon rank sum tests and concordance analysis with Bland & Altman plots.

Results: A panel of 75 experts was divided into three groups: 31 residents, 21 non-experienced physicians (NEP) and 23 experienced physicians (EP). Within each group, random samples of N = 20, 15 and 10 were selected. A total of 985 students from nine different medical schools participated in the SCT examination. Regardless of panel size (N = 20, 15 or 10), students' SCT scores were lower with the NEP group than with the resident panel (median score 67.1 vs 69.1, p < 0.0001 if N = 20; 67.2 vs 70.1, p < 0.0001 if N = 15; and 67.7 vs 68.4, p < 0.0001 if N = 10) and lower with EP than with NEP (65.4 vs 67.1, p < 0.0001 if N = 20; 66.0 vs 67.2, p < 0.0001 if N = 15; and 62.5 vs 67.7, p < 0.0001 if N = 10). Bland & Altman plots showed good concordance between students' SCT scores, whatever the experience level of the expert panel.

Conclusions: Even though student SCT scores differed statistically significantly across expert panels, the differences were small. These results open the possibility of including less-experienced experts in panels for the evaluation of medical students.

Keywords: Script concordance test, Medical student, Panelist
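The Bland & Altman concordance analysis mentioned in the abstract can be illustrated with a minimal sketch. The computation below is the conventional Bland-Altman bias and 95% limits of agreement (mean difference ± 1.96 × SD of the differences); the panel names and score values are hypothetical assumptions, not data from the study.

```python
# Sketch: Bland-Altman limits of agreement between two sets of student SCT
# scores, one per reference panel. All values below are illustrative only.
from statistics import mean, stdev

def bland_altman(scores_a, scores_b):
    """Return the bias (mean difference) and its 95% limits of agreement."""
    diffs = [a - b for a, b in zip(scores_a, scores_b)]
    bias = mean(diffs)
    sd = stdev(diffs)
    # Conventional 95% limits of agreement: bias +/- 1.96 * SD of differences
    return bias, bias - 1.96 * sd, bias + 1.96 * sd

# Hypothetical per-student SCT scores under two different panels
resident_panel = [69.0, 71.5, 66.0, 70.2, 68.8]
nep_panel      = [67.1, 69.0, 64.5, 68.0, 66.9]

bias, lower, upper = bland_altman(resident_panel, nep_panel)
print(f"bias={bias:.2f}, limits of agreement=({lower:.2f}, {upper:.2f})")
```

In a Bland-Altman plot these per-student differences are plotted against the per-student means, with horizontal lines at the bias and at the two limits of agreement; "good concordance" means the differences cluster tightly around a small bias.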