Understanding resident ratings of teaching in the workplace: a multi-centre study
Springer Science and Business Media LLC - Volume 20 - Pages 691-707 - 2014
Cornelia R. M. G. Fluit, Remco Feskens, Sanneke Bolhuis, Richard Grol, Michel Wensing, Roland Laan
Providing clinical teachers with feedback about their teaching skills is a powerful tool to improve teaching. Evaluations are mostly based on questionnaires completed by residents. We investigated to what extent characteristics of residents, clinical teachers, and the clinical environment influenced these evaluations, and the relation between residents’ scores and their teachers’ self-scores. The Evaluation and Feedback for Effective Clinical Teaching (EFFECT) questionnaire was used to (self-)assess clinical teachers from 12 disciplines (15 departments, four hospitals). Items were scored on a five-point Likert scale. Main outcome measures were residents’ mean overall scores (MOSs), specific scale scores (MSSs), and clinical teachers’ self-evaluation scores. Multilevel regression analysis was used to identify predictors. Residents’ scores and self-evaluations were compared. Residents filled in 1,013 questionnaires, evaluating 230 clinical teachers. We received 160 self-evaluations. ‘Planning Teaching’ and ‘Personal Support’ (4.52, SD .61 and 4.53, SD .59) were rated highest; ‘Feedback Content’ (CanMEDS related) (4.12, SD .71) was rated lowest. Teachers in affiliated hospitals showed the highest MOS and MSS. Medical specialty did not influence MOS. Female clinical teachers were rated significantly higher on most MSS. Residents in years 1–2 were most positive about their teachers. Residents’ gender did not affect the mean scores, except for role modeling. At group level, self-evaluations and residents’ ratings correlated highly (Kendall’s τ 0.859). Resident evaluations of clinical teachers are influenced by teachers’ gender, year of residency training, type of hospital, and to a lesser extent residents’ gender. Clinical teachers and residents agree on the strong and weak points of clinical teaching.
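The group-level agreement reported above is quantified with Kendall's rank correlation. As a minimal sketch of that statistic (the paired scale scores below are hypothetical illustrations, not the study's data), Kendall's τ can be computed directly from concordant and discordant pairs:

```python
def kendall_tau(a, b):
    """Kendall's tau-a: (concordant - discordant pairs) / total pairs.

    No tie correction; adequate as a sketch for continuous mean scores.
    """
    assert len(a) == len(b)
    n = len(a)
    concordant = discordant = 0
    for i in range(n):
        for j in range(i + 1, n):
            s = (a[i] - a[j]) * (b[i] - b[j])
            if s > 0:
                concordant += 1
            elif s < 0:
                discordant += 1
    return (concordant - discordant) / (n * (n - 1) / 2)

# Hypothetical group-level mean scores per EFFECT scale (not the study's data):
self_scores = [4.5, 4.4, 4.0, 4.2, 4.3]           # teachers' self-evaluations
resident_scores = [4.52, 4.53, 4.12, 4.25, 4.30]  # residents' ratings
print(kendall_tau(self_scores, resident_scores))  # → 0.8
```

A high τ here means the two groups rank the teaching scales in nearly the same order, which is exactly the sense in which teachers and residents "agree on strong and weak points" even when their absolute scores differ.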
Is reflection like soap? A critical narrative umbrella review of approaches to reflection in medical education research
Springer Science and Business Media LLC - Volume 27, Issue 2 - Pages 537-551 - 2022
Sven P. C. Schaepkens, Mario Veen, Anne de la Croix
Reflection is a complex concept in medical education research. No consensus exists on what reflection exactly entails; thus far, cross-comparing empirical findings has not resulted in definite evidence on how to foster reflection. The concept is as slippery as soap. This leaves the research field with the question, ‘how can research approach the conceptual indeterminacy of reflection to produce knowledge?’. The authors conducted a critical narrative umbrella review of research on reflection in medical education. Forty-seven review studies on reflection research from 2000 onwards were reviewed. The authors used the foundational literature on reflection from Dewey and Schön as an analytical lens to identify and critically juxtapose common approaches in reflection research that tackle the conceptual complexity. Research on reflection must deal with the paradox that every conceptualization of reflection is either too sharp or too broad because it is entrenched in practice. The key to conceptualizing reflection lies in its use and purpose, which can be provided by in situ research of reflective practices.
Competences for implementation science: what trainees need to learn and where they learn it
Springer Science and Business Media LLC - Volume 26 - Pages 19-35 - 2020
Marie-Therese Schultes, Monisa Aijaz, Julia Klug, Dean L. Fixsen
Education in implementation science, which involves the training of health professionals in how to implement evidence-based findings into health practice systematically, has become a highly relevant topic in health sciences education. The present study advances education in implementation science by compiling a competence profile for implementation practice and research and by exploring implementation experts’ sources of expertise. The competence profile is theoretically based on educational psychology, which implies the definition of improvable and teachable competences. In an online survey, an international, multidisciplinary sample of 82 implementation experts named competences that they considered most helpful for conducting implementation practice and implementation research. For these competences, they also indicated whether they had acquired them in their professional education, additional training, or by self-study and on-the-job experience. Data were analyzed using a mixed-methods approach that combined qualitative content analyses with descriptive statistics. The participants deemed collaboration knowledge and skills most helpful for implementation practice. For implementation research, they named research methodology knowledge and skills as the most important ones. The participants had acquired most of the competences that they found helpful for implementation practice in self-study or by on-the-job experience. However, participants had learned most of their competences for implementation research in their professional education. The present results inform education and training activities in implementation science and serve as a starting point for a fluid set of interdisciplinary implementation science competences that will be updated continuously. Implications for curriculum development and the design of educational activities are discussed.
Using conversation analysis to explore feedback on resident performance
Springer Science and Business Media LLC - Volume 24 - Pages 577-594 - 2019
Marrigje E. Duitsman, Marije van Braak, Wyke Stommel, Marianne ten Kate-Booij, Jacqueline de Graaf, Cornelia R. M. G. Fluit, Debbie A. D. C. Jaarsma
Feedback on clinical performance of residents is seen as a fundamental element in postgraduate medical education. Although literature on feedback in medical education is abundant, many supervisors struggle with providing this feedback and residents experience feedback as insufficiently constructive. With a detailed analysis of real-world feedback conversations, this study aims to contribute to the current literature by deepening the understanding of how feedback on residents’ performance is provided, and to formulate recommendations for improvement of feedback practice. Eight evaluation meetings between program directors and residents were recorded in 2015–2016. These meetings were analyzed using conversation analysis, an ethnomethodological approach that uses a data-driven, iterative procedure to uncover interactional patterns that structure naturally occurring, spoken interaction. Feedback in our data took two forms: feedback as a unidirectional activity and feedback as a dialogic activity. The unidirectional feedback activities prevailed over the dialogic activities. The two different formats elicit different types of resident responses and have different implications for the progress of the interaction. Both feedback formats concerned positive as well as negative feedback, and both were often mitigated by the participants. Unidirectional feedback and mitigating or downplaying feedback are at odds with the aim of feedback in medical education. Dialogic feedback avoids the pitfall of a program director-dominated conversation and gives residents the opportunity to take ownership of their strengths and weaknesses, which increases the chances of changing resident behavior. On the basis of linguistic analysis of our real-life data we suggest implications for feedback conversations.
Supervision training in healthcare: a realist synthesis
Springer Science and Business Media LLC - Volume 25 - Pages 523-561 - 2019
Charlotte E. Rees, Sarah L. Lee, Eve Huang, Charlotte Denniston, Vicki Edouard, Kirsty Pope, Keith Sutton, Susan Waller, Bernadette Ward, Claire Palermo
Supervision matters: it serves educational, supportive and management functions. Despite a plethora of evidence on the effectiveness of supervision, scant evidence for the impact of supervision training exists. While three previous literature reviews have begun to examine the effectiveness of supervision training, they fail to explore the extent to which supervision training works, for whom, and why. We adopted a realist approach to answer the question: to what extent do supervision training interventions work (or not), for whom and in what circumstances, and why? We conducted a team-based realist synthesis of the supervision training literature focusing on Pawson’s five stages: (1) clarifying the scope; (2) determining the search strategy; (3) study selection; (4) data extraction; and (5) data synthesis. We extracted contexts (C), mechanisms (M), outcomes (O), and CMO configurations from 29 outputs including short (n = 19) and extended-duration (n = 10) supervision training interventions. Irrespective of duration, interventions including mixed pedagogies involving active and/or experiential learning, social learning and protected time served as mechanisms triggering multiple positive supervisor outcomes. Short-duration interventions also led to positive outcomes through mechanisms such as supervisor characteristics, whereas facilitator characteristics were a key mechanism triggering positive and negative outcomes for extended-duration interventions. Disciplinary and organisational contexts were not especially influential. While our realist synthesis builds on previous non-realist literature reviews, our findings extend previous work considerably. Our realist synthesis presents a broader array of outcomes and mechanisms than have been previously identified, and provides novel insights into the causal pathways by which short and extended-duration supervision training interventions produce their effects.
Future realist evaluation should explore further any differences between short and extended-duration interventions. Educators are encouraged to prioritize mixed pedagogies, social learning and protected time to maximize the positive supervisor outcomes from training.
An exploration of “real time” assessments as a means to better understand preceptors’ judgments of student performance
Springer Science and Business Media LLC - Volume 28 - Pages 793-809 - 2022
Kimberly Luu, Ravi Sidhu, Neil K Chadha, Kevin W Eva
Clinical supervisors are known to assess trainee performance idiosyncratically, causing concern about the validity of their ratings. The literature on this issue relies heavily on retrospective collection of decisions, resulting in the risk of inaccurate information regarding what actually drives raters’ perceptions. Capturing in-the-moment information about supervisors’ impressions could yield better insight into how to intervene. The purpose of this study, therefore, was to gather “real-time” judgments to explore what drives preceptors’ judgments of student performance. We performed a prospective study in which physicians were asked to adjust a rating scale in real time while watching two video-recordings of trainee clinical performances. Scores were captured in 1-s increments, examined for frequency, direction, and magnitude of adjustments, and compared to assessors’ final entrustability judgment as measured by the modified Ottawa Clinic Assessment Tool. The standard deviation in raters’ judgment was examined as a function of time to determine how long it takes impressions to begin to vary. Twenty participants viewed two clinical vignettes. Considerable variability in ratings was observed, with different behaviours triggering scale adjustments for different raters. That idiosyncrasy occurred very quickly, with the standard deviation in raters’ judgments rapidly increasing within 30 s of case onset. Particular moments appeared to be generally influential, but their degree of influence still varied. Correlations between the final assessment and (a) the score assigned upon first adjustment of the scale, (b) the score upon last adjustment, and (c) the mean score, were r = 0.13, 0.32, and 0.57 for one video and r = 0.30, 0.50, and 0.52 for the other, indicating the degree to which overall impressions reflected accumulation of raters’ idiosyncratic moment-by-moment observations.
Our results demonstrated that variability in raters’ impressions begins very early in a case presentation and is associated with different behaviours having different influence on different raters. More generally, this study outlines a novel methodology that offers a new path for gaining insight into factors influencing assessor judgments.
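The correlations reported above compare three summaries of each rater's real-time trace (first adjustment, last adjustment, and mean score) against the final judgment. A minimal sketch of that computation, using short hypothetical traces rather than the study's 1-s recordings:

```python
import math

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length score lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical per-rater data: each trace is a rater's moment-by-moment
# score stream; `final` is that rater's final entrustability judgment.
traces = [[3, 3, 4, 4, 5], [2, 3, 3, 2, 2], [4, 4, 3, 4, 4], [1, 2, 2, 3, 3]]
final = [4.5, 2.0, 4.0, 2.5]

first = [t[0] for t in traces]                 # score at first adjustment
last = [t[-1] for t in traces]                 # score at last adjustment
mean = [sum(t) / len(t) for t in traces]       # mean over the whole trace

for label, summary in [("first", first), ("last", last), ("mean", mean)]:
    print(label, round(pearson_r(summary, final), 2))
```

The study's pattern, in which the mean of the trace correlates with the final judgment more strongly than the first adjustment does, is what supports the claim that final impressions accumulate from moment-by-moment observations rather than being fixed early.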
Alphas, betas and skewy distributions: two ways of getting the wrong answer
Springer Science and Business Media LLC - Volume 16 - Pages 291-296 - 2011
Peter Fayers
Although many parametric statistical tests are considered to be robust, as recently shown in Methodologist’s Corner, it still pays to be circumspect about the assumptions underlying statistical tests. In this paper I show that robustness mainly refers to α, the type-I error. If the underlying distribution of the data is ignored, there can be a major penalty in terms of β, the type-II error, representing a large increase in the false negative rate or, equivalently, a severe loss of power of the test.
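The α/β asymmetry described here can be illustrated with a small simulation (a sketch under assumed lognormal data, not the paper's own example): a two-sample mean test applied to skewed data roughly preserves its nominal type-I error rate, yet loses considerable power compared with analysing the same data on the scale that matches its distribution.

```python
import math
import random

def z_test_p(x, y):
    """Two-sided two-sample z-test (normal approximation to the t-test;
    reasonable for group sizes of 50 or more)."""
    nx, ny = len(x), len(y)
    mx, my = sum(x) / nx, sum(y) / ny
    vx = sum((v - mx) ** 2 for v in x) / (nx - 1)
    vy = sum((v - my) ** 2 for v in y) / (ny - 1)
    z = (mx - my) / math.sqrt(vx / nx + vy / ny)
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

def rejection_rate(shift, log_transform, trials=2000, n=50, alpha=0.05):
    """Share of simulated trials in which the test rejects at level alpha.

    Samples are lognormal; `shift` moves the second group on the log scale.
    """
    rng = random.Random(0)
    hits = 0
    for _ in range(trials):
        x = [math.exp(rng.gauss(0.0, 1.0)) for _ in range(n)]
        y = [math.exp(rng.gauss(shift, 1.0)) for _ in range(n)]
        if log_transform:
            x = [math.log(v) for v in x]
            y = [math.log(v) for v in y]
        if z_test_p(x, y) < alpha:
            hits += 1
    return hits / trials

# Type-I error on raw skewed data stays close to the nominal 0.05 ...
print(rejection_rate(0.0, log_transform=False))
# ... but beta is inflated: power is clearly lower on the raw scale ...
print(rejection_rate(0.5, log_transform=False))
# ... than after the log transform that matches the data's distribution.
print(rejection_rate(0.5, log_transform=True))
```

The first rate is the paper's robustness claim about α; the gap between the last two rates is the β penalty, i.e. the extra false negatives incurred by ignoring the skew.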