Can standardized patients replace physicians as OSCE examiners?

Kevin McLaughlin, Laura Gregor, Allan Jones, Sylvain Coderre
Author Information
  1. Kevin McLaughlin: University of Calgary, Calgary, Alberta, Canada. kevin.mclaughlin@calgaryhealthregion.ca

Abstract

BACKGROUND: To reduce inter-rater variability in evaluations and the demand on physician time, standardized patients (SP) are being used as examiners in OSCEs. There is concern, however, that SP have insufficient training to provide a valid evaluation of student competence and/or useful feedback on clinical skills. It is also unknown whether SP ratings predict student competence in other areas. The objectives of this study were: to examine student attitudes towards SP examiners; to compare SP and physician evaluations of competence; and to compare the predictive validity of these scores, using performance on the multiple choice question examination (MCQE) as the outcome variable.
METHODS: This was a cross-sectional study of third-year medical students undergoing an OSCE during the Internal Medicine clerkship rotation. Fifty-two students rotated through eight stations (six with physician examiners, two with SP examiners). Statistical analyses included Pearson's correlation coefficient, the two-sample t-test, effect size calculation, and multiple linear regression.
RESULTS: Most students reported that SP stations were less stressful, that SP were as good as physicians in giving feedback, and that SP were sufficiently trained to judge clinical skills. SP scored students higher than physicians (mean 90.4% ± 8.9 vs. 82.2% ± 3.7, d = 1.5, p < 0.001), and there was a weak correlation between the SP and physician scores (coefficient 0.4, p = 0.003). Physician scores were predictive of summative MCQE scores (regression coefficient = 0.88 [0.15, 1.61], p = 0.019), but there was no relationship between SP scores and summative MCQE scores (regression coefficient = -0.23, p = 0.133).
CONCLUSION: These results suggest that SP examiners are acceptable to medical students, that SP rate students higher than physicians do, and that, unlike physician scores, SP scores are not related to other measures of competence.
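As a check on the reported effect size, Cohen's d is the standardized mean difference between the two examiner groups. The sketch below uses the summary statistics from the abstract with a simple equal-weight pooled SD; this pooling is an assumption, since the abstract does not state how the SDs were combined (e.g. weighting by the number of stations per examiner type), which may explain why the paper reports d = 1.5.

```python
from math import sqrt

# Summary statistics from the abstract (SP vs. physician OSCE scores)
mean_sp, sd_sp = 90.4, 8.9
mean_phys, sd_phys = 82.2, 3.7

# Cohen's d with an equal-weight pooled SD -- an assumption, since the
# abstract does not specify the pooling method used by the authors.
pooled_sd = sqrt((sd_sp ** 2 + sd_phys ** 2) / 2)
d = (mean_sp - mean_phys) / pooled_sd
print(round(d, 2))  # ~1.2 with this pooling; the paper reports d = 1.5
```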


MeSH Terms

Alberta
Attitude of Health Personnel
Clinical Clerkship
Clinical Competence
Communication
Cross-Sectional Studies
Educational Measurement
Feedback
Humans
Internal Medicine
Medical History Taking
Observer Variation
Patient Simulation
Students, Medical
Teaching
