Identifying the narrative used by educators in articulating judgement of performance
Abstract
Modern assessment in medical education is increasingly reliant on human judgement, as quantitative scales have clear limitations in fully assessing registrars' development of competence and in providing them with meaningful feedback to assist learning. For this, possession of an expert vocabulary is essential. This study aims to explore how medical education experts voice their subjective judgements about learners, to establish the extent to which they use clear, information-rich terminology (high-level semantic qualifiers), and to gain a better understanding of the language experts use in these subjective judgements. Six experienced medical educators from urban and rural environments were purposively selected. Each educator reviewed a registrar's clinical case analysis while thinking aloud. The transcribed data were analyzed, and codes were identified and ordered into themes. Analysis continued until saturation was reached. Five themes with subthemes emerged. The main themes were: (1) Demonstration of expertise; (2) Personal credibility; (3) Professional credibility; (4) Using a predefined structure; and (5) Relevance. Analogous to what experienced clinicians do in clinical reasoning, experienced medical educators verbalize their judgements using high-level semantic qualifiers, and in this study we were able to unpack these. Although there may be individual variability in the exact words used, clear themes emerged. These findings can be used to develop a helpful shared narrative for educators in observation-based assessment. The provision of a rich, detailed narrative will also help to clarify feedback to registrars, with areas of weakness clearly articulated to improve learning and remediation.