Virginia Research Day 2021

Comparing Peer-Evaluation Scores with Self-Evaluation Scores during a Cardiac Dysrhythmia Management Simulation with Undergraduate Medical Students

Watson Edwards, BSN, CHSE; Fred Rawlins, DO, FACEP; Janella Looney, MSHI; Kim Gittings, BSN; Ryan Martin, HSOS; Ning Cheng, PhD; Sarah Astrab, OMS3; Nirav Patel, OMS3

Abstract

Manikin-based simulation assessment provides a mechanism to evaluate a student's performance in applying knowledge to clinically relevant scenarios. The aim of this study was to compare first-year medical students' self-assessment responses with the associated peer-assessments on the management of two sets of scenarios presenting cardiac dysrhythmias. Objectively and consistently evaluating student performance places increased demand on faculty and staff; our focus was to compare live peer-assessment scores with student self-assessment scores to determine whether this scoring modality may be used in future simulations. A retrospective review of data on first-year osteopathic medical students enrolled in a clinical medicine cardiopulmonary course was conducted. All 167 student participants received similar instruction and treatments throughout the cardiopulmonary course. Simulation assessments were conducted over three days using Laerdal Manikin Simulators, LLEAP software, and BLINE Medical software, in four simulation rooms all set up in a similar fashion. Seven cardiac dysrhythmias were presented during instructional events. To provide randomization, two sets of cardiac dysrhythmia assessments were designed, and each student was randomly tested on one set with a 20-minute time limit. Testing days varied in length. Each simulation intern (peer-evaluator) was scheduled to a single room for a finite amount of time, with scheduled breaks, to avoid evaluator fatigue. For each test, students were evaluated in real time by a simulation intern (peer-evaluator) and were also directed to complete a self-evaluation using scenario video review. Students' test scores were compared between self-evaluation and peer-evaluation by a t-test for each assessment, with a p-value < 0.05 indicating a difference at a 95% confidence level. The results revealed no statistically significant difference between peer and self-evaluation in either scenario set, with the exception of one cardiac dysrhythmia in Set 1 (Scenario SIM | VF, p = 0.014), for which peer scores were lower. In all other cases, peer-evaluation scores and student self-evaluation scores were comparable, demonstrating that self-grading is an accurate evaluation technique. As academic calendars and faculty demands increase, assessment scoring could be assigned to peer-evaluators or to students self-reporting performance, which could also serve as a feedback mechanism. Further exploration of the data should be completed to determine the significance of the lower peer-evaluation scores compared with the self-evaluation scores.

INTRODUCTION

• Medical education involves the development of both academic and clinical skills during a student's undergraduate experience. Medical skills are further developed during residency and throughout a clinician's professional career
• Undergraduate medical education incorporates manikin simulation into a student's training during the first year of medical school. Competency examinations are integrated into the simulation curriculum to assess each student's cognitive and psychomotor abilities
• The use of simulation often requires intensive faculty resources to assess and monitor student performance
• Medical schools align curricula to prepare students for success by providing a firm foundation in medicine and clinical experiences
• During the first two years of medical school, students are exposed to a number of disease conditions and pathologies; however, the clinical relevance of these conditions is not readily understood until the clinical years
• To increase the relevance of diseases and their management, medical colleges utilize manikin-based simulation to provide early clinical exposure in a safe environment
• For medical colleges with large class sizes, individual assessment presents challenges, and faculty resources can quickly be exceeded during simulation sessions
• Student self-assessment from recorded videos can relieve demands on clinical faculty, but students may not have the expertise to accurately identify poorly performed skills. Real-time evaluations, in turn, require faculty presence and place large demands on their clinical or educational responsibilities
• The Edward Via College of Osteopathic Medicine's Simulation Center utilized ten specially trained simulation interns to operate the manikin software and perform live evaluations of peer students
• The use of simulation interns as peer-evaluators reduced the number of clinical faculty needed on each simulation assessment day, and the trained interns were prepared to objectively assess peer students using the designed checklist

METHODS

• The study utilized a retrospective, descriptive research design within the curricular structure. A comparative analysis was performed to assess the use of peer-evaluation and student self-assessment
• Each participant completed an approved electrocardiogram course; completed an orientation to the simulators, defibrillation, and pacing equipment; observed an emulation scenario providing an example of how the simulation should be performed; and completed an interactive online module
• Simulations were conducted over three days using Laerdal Manikin Simulators, LLEAP software, and BLINE Medical software
• Four simulation rooms, all set up in a similar fashion, were used to conduct the assessments
• Seven cardiac dysrhythmias were presented during instructional events. To provide randomization, two sets of cardiac dysrhythmia assessments were designed, and each student was randomly tested on one set with a 20-minute time limit
• Each simulation intern (peer-evaluator) was scheduled to a single room for a finite amount of time, with scheduled breaks, to avoid evaluator fatigue
• For each test, students were evaluated in real time by a simulation intern (peer-evaluator) and were also directed to complete a self-evaluation using scenario video review. Scores from the two evaluation types were then compared by t-test, as sketched below
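The poster reports only summary statistics, not raw checklist scores, and does not state whether the t-tests were paired or independent. The following is a minimal sketch of the scoring comparison described above, assuming an independent two-sample t-test; peer_scores and self_scores are hypothetical placeholder values, not data from the study.

```python
# Minimal sketch of the peer- vs. self-evaluation comparison.
# Assumes an independent two-sample t-test; the score lists below are
# hypothetical checklist totals, not data from the study.
from scipy import stats

peer_scores = [14.0, 15.5, 13.5, 15.0, 14.5]  # hypothetical peer-evaluator totals
self_scores = [15.0, 15.5, 14.5, 15.5, 15.0]  # hypothetical self-evaluation totals

t_stat, p_value = stats.ttest_ind(peer_scores, self_scores)
print(f"t = {t_stat:.3f}, p = {p_value:.3f}")
if p_value < 0.05:
    print("Peer and self scores differ significantly at the 0.05 level")
```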

RESULTS

• A total of 167 first-year osteopathic medical students and 10 simulation interns participated in the peer-review and self-evaluation study
• Students' test scores were compared between self-evaluation and peer-evaluation by a t-test for each assessment
• A p-value < 0.05 was used to indicate a difference at a 95% confidence level
• The results revealed no statistically significant difference in grading between peer and self-evaluation in either scenario set, with the exception of one cardiac dysrhythmia in Set 1 (Scenario SIM | VF, p = 0.014)

DISCUSSION

• The use of self-assessments and/or peer-evaluations is a valid and efficient method of assessing simulation performance
• Peer-evaluators, when properly trained, can serve as expert evaluators of clinical performance in lieu of physician faculty
• Student self-assessments with video review are a viable method of assessing clinical performance when objective checklists are provided, and they may serve as a feedback mechanism
• Review of the data showed that students graded themselves more leniently than their peer-evaluators. In the future, peer-evaluations could be used to offset demands on faculty

Sample Checklist | Student Self-Assessment & Peer-Evaluation

CONCLUSIONS

• Peer-evaluation scores and student self-evaluation scores were comparable, demonstrating that self-grading is an accurate evaluation technique
• As academic calendars and faculty demands increase, assessment scoring could be assigned to students to self-report performance, which could also serve as a mechanism for performance feedback
• Peer-evaluations by simulation interns may serve as a quality-assurance mechanism and expert review throughout the evaluation process
• Further exploration of the data should be completed to determine whether the discrepancy in the VF case was related to insufficient student knowledge or confusion about the proper treatment

Cardiac Dysrhythmia Set 1 (N = 85 per evaluation type)

Specific area                         Evaluation Type    Mean   Std. Deviation   t-test p-value
Total: Scenario SIM | 3RD DEGREE AVB  Peer (P)            9.74       1.481            0.294
                                      Student (S)         9.98       1.431
Total: Scenario SIM | AFIB UNSTABLE   Peer (P)            8.41       0.835            0.079
                                      Student (S)         8.62       0.723
Total: Scenario SIM | SVT STABLE      Peer (P)            8.72       0.59             0.254
                                      Student (S)         8.81       0.475
Total: Scenario SIM | VF              Peer (P)           14.66       1.41             0.014*
                                      Student (S)        15.14       1.114
Total: All Dysrhythmias               Peer (P)           41.53       3.157            0.021*
                                      Student (S)        42.55       2.543

* p < 0.05; the scores differ significantly between evaluation types P (peer) and S (self).
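As a cross-check, the reported p-values can be approximately reproduced from the summary statistics in the table above, again assuming independent two-sample t-tests (the exact test variant is an assumption, since the poster does not specify it):

```python
# Approximate reproduction of the Set 1 p-values from the table's
# summary statistics (N = 85 per evaluation type). Assumes independent
# two-sample t-tests, which the poster does not explicitly confirm.
from scipy.stats import ttest_ind_from_stats

set1 = [
    # (scenario, peer mean, peer SD, student mean, student SD)
    ("3RD DEGREE AVB",    9.74, 1.481,  9.98, 1.431),
    ("AFIB UNSTABLE",     8.41, 0.835,  8.62, 0.723),
    ("SVT STABLE",        8.72, 0.590,  8.81, 0.475),
    ("VF",               14.66, 1.410, 15.14, 1.114),
    ("All Dysrhythmias", 41.53, 3.157, 42.55, 2.543),
]

for name, peer_m, peer_sd, self_m, self_sd in set1:
    t, p = ttest_ind_from_stats(peer_m, peer_sd, 85, self_m, self_sd, 85)
    flag = "*" if p < 0.05 else ""
    print(f"{name:<16} t = {t:6.3f}  p = {p:.3f}{flag}")
```

Run this way, the VF and All Dysrhythmias rows come out significant (p ≈ 0.015 and p ≈ 0.021), in line with the starred entries above; the same approach applies to Set 2 with N = 82.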

Cardiac Dysrhythmia Set 2 (N = 82 per evaluation type)

Specific area                         Evaluation Type    Mean   Std. Deviation   t-test p-value
Total: Scenario SIM | 3RD DEGREE AVB  Peer (P)            9.94       1.081            0.382
                                      Student (S)        10.09       1.056
Total: Scenario SIM | AFIB STABLE     Peer (P)            8.32       0.873            0.278
                                      Student (S)         8.46       0.849
Total: Scenario SIM | SVT UNSTABLE    Peer (P)            8.65       0.616            0.216
                                      Student (S)         8.76       0.511
Total: Scenario SIM | VT PULSELESS    Peer (P)           14.38       1.529            0.678
                                      Student (S)        14.5        2.167
Total: Overall                        Peer (P)           41.28       2.654            0.242
                                      Student (S)        41.8        3.053

There was no significant difference between evaluation types P (peer) and S (self) in any specific area.

