Abstract: Purpose: The purpose of the study was to examine the recording accuracy of faculty observers and standardized patients (SPs) on a clinical performance examination (CPX). Methods: This was a cross-sectional study of a fourth-year medical student CPX held at a medical school in Seoul, Korea. The CPX consisted of 4 cases and was administered to 118 examinees, with the participation of 52 SPs and 45 faculty observers. For the study we chose 15 examinees per case and analyzed 60 student-SP encounters in total. To determine the level of recording accuracy, 2 SP trainers developed an answer key for each encounter. First, we computed agreement rates (P) and kappa coefficient (K) values between the answer key and SPs and between the answer key and faculty observers. Second, we performed a repeated-measures analysis of variance (ANOVA) to determine whether the mean percentage of correct checklist scores differed as a function of the rater, the case, or the interaction between the two factors. Results: Mean P rates ranged from 0.72 to 0.86, while mean K values varied from 0.39 to 0.59. At the level of item comparison, SP checklist accuracy was higher than that of faculty observers. The ANOVA showed no significant difference among the percentages of correct scores from the answer key, faculty observers, and SPs, and no significant interaction between the rater and case factors. Conclusion: Acceptable levels of recording accuracy were obtained in both rater groups. With thorough preparation, SP raters can replace faculty raters in a large-scale CPX.
Keywords: Clinical competence; Undergraduate medical education; Observer variation; Educational measurement
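As an illustration of the agreement statistics named in the Methods, the following minimal Python sketch computes a percent agreement rate (P) and Cohen's kappa (K) between one rater's binary (done / not done) checklist recording and the answer key for a single encounter. The item values shown are hypothetical and are not taken from the study's data.

```python
import numpy as np

def percent_agreement(rater, answer_key):
    """Proportion of checklist items on which the rater matches the answer key (P)."""
    rater, answer_key = np.asarray(rater), np.asarray(answer_key)
    return np.mean(rater == answer_key)

def cohens_kappa(rater, answer_key):
    """Cohen's kappa (K) for two binary checklist recordings of the same encounter."""
    rater, answer_key = np.asarray(rater), np.asarray(answer_key)
    p_o = np.mean(rater == answer_key)            # observed agreement
    p_rater_yes = np.mean(rater == 1)             # rater's marginal "done" rate
    p_key_yes = np.mean(answer_key == 1)          # answer key's marginal "done" rate
    p_e = p_rater_yes * p_key_yes + (1 - p_rater_yes) * (1 - p_key_yes)  # chance agreement
    return (p_o - p_e) / (1 - p_e)

# Hypothetical 10-item encounter: answer key vs. one SP's checklist recording
answer_key = [1, 1, 0, 1, 0, 1, 1, 0, 1, 0]
sp_record  = [1, 1, 0, 1, 1, 1, 0, 0, 1, 0]
print(f"P = {percent_agreement(sp_record, answer_key):.2f}")   # 0.80
print(f"K = {cohens_kappa(sp_record, answer_key):.2f}")        # 0.58
```

Kappa corrects the raw agreement rate for the agreement expected by chance given each rater's marginal frequencies, which is why the reported K values (0.39 to 0.59) are lower than the corresponding P rates (0.72 to 0.86).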