TY - JOUR
T1 - Validation of a Surgical Objective Structured Clinical Examination (S-OSCE) Using Convergent, Divergent, and Trainee-Based Assessments of Fidelity
AU - Orovec, Adele
AU - Bishop, Alex
AU - Scott, Stephanie A.
AU - Wilson, Dave
AU - Richardson, C. Glen
AU - Oxner, William
AU - Glennie, R. Andrew
N1 - Publisher Copyright:
© 2022 Association of Program Directors in Surgery
PY - 2022/7/1
Y1 - 2022/7/1
AB - Objective: To describe the validation of a surgical objective structured clinical examination (S-OSCE) for the purpose of competency assessment based on the Royal College of Physicians and Surgeons of Canada's CanMEDS framework. Design: A surgical OSCE was developed to evaluate the management of common orthopedic surgical problems. The scores derived from this S-OSCE were compared to the Ottawa Surgical Competency Operating Room Evaluation (O-SCORE), a validated entrustability assessment, to establish convergent validity. The S-OSCE scores were compared to Orthopedic In-Training Examination (OITE) scores to evaluate divergent validity. Resident evaluations of the clinical encounter with a standardized patient and of the operative procedure were scored on a 10-point Likert scale for fidelity. Setting: A tertiary-level academic teaching hospital. Participants: Twenty-one postgraduate year 2 to 5 trainees of a 5-year Canadian orthopedic residency program, generating 160 operative case performances for review. Results: There were 5 S-OSCE days over a 4-year period (2016-2019), encompassing a variety of surgical procedures. Performance on the S-OSCE correlated strongly with the O-SCORE (Pearson correlation coefficient 0.88), and linear regression showed a moderate association with year of training (R² = 0.5345). The Pearson correlation coefficient between the S-OSCE and OITE scores was 0.57. There was a significant increase in the average OITE score after the introduction of the surgical OSCE. Resident fidelity ratings were available from 16 residents encompassing 8 different surgical cases. The average score for the overall simulation (8.0 ± 1.6) was significantly higher than that for the cadaveric surgical simulation (6.5 ± 0.8) (p < 0.001). Conclusions: The S-OSCE scores correlate strongly with an established form of assessment, demonstrating convergent validity. The correlation between the S-OSCE and OITE scores was weaker, demonstrating divergent validity. Although residents rated the overall simulation highly, the fidelity of the cadaveric simulation may need improvement. A surgical OSCE can be used to evaluate preoperative and intraoperative decision making and to complement other forms of assessment.
UR - http://www.scopus.com/inward/record.url?scp=85125437510&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85125437510&partnerID=8YFLogxK
U2 - 10.1016/j.jsurg.2022.01.014
DO - 10.1016/j.jsurg.2022.01.014
M3 - Article
C2 - 35232691
AN - SCOPUS:85125437510
SN - 1931-7204
VL - 79
SP - 1000
EP - 1008
JO - Journal of Surgical Education
JF - Journal of Surgical Education
IS - 4
ER -