Journal of Educational Evaluation for Health Professions

Notable scientific publications

* Data provided for reference purposes only

Assessment of students’ satisfaction with a student-led team-based learning course
Journal of Educational Evaluation for Health Professions - Volume 12 - Page 23
Justin W. Bouw, Vasudha Gupta, Ana L. Hincapie

Purpose: To date, no studies in the literature have examined student delivery of team-based learning (TBL) modules in the classroom. We aimed to assess student perceptions of a student-led TBL elective. Methods: Third-year pharmacy students were assigned topics in teams and developed learning objectives, a 15-minute mini-lecture, and a TBL application exercise and presented them to student colleagues. Students completed a survey upon completion of the course and participated in a focus group discussion to share their views on learning. Results: The majority of students (n=23/30) agreed that creating TBL modules enhanced their understanding of concepts, improved their self-directed learning skills (n=26/30), and improved their comprehension of TBL pedagogy (n=27/30). However, 60% disagreed with incorporating student-generated TBL modules into core curricular classes. Focus group data identified student-perceived barriers to success in the elective, in particular the development of TBL application exercises. Conclusion: This study provides evidence that students positively perceived student-led TBL as encouraging proactive learning from peer-to-peer teaching.

Are ChatGPT's knowledge and interpretation ability comparable to those of medical students in Korea for taking a parasitology examination?: a descriptive study
Journal of Educational Evaluation for Health Professions - Volume 20 - Page 1
Sun Huh

This study aimed to compare the knowledge and interpretation ability of ChatGPT, an artificial intelligence language model, with those of medical students in Korea by administering a parasitology examination to both ChatGPT and the students. The examination consisted of 79 items and was administered to ChatGPT on January 1, 2023. The results were analyzed in terms of ChatGPT's overall performance score, its correct answer rate by the items' knowledge level, and the acceptability of its explanations of the items. ChatGPT's performance was lower than that of the medical students, and its correct answer rate was not related to the items' knowledge level. However, there was a relationship between acceptable explanations and correct answers. In conclusion, ChatGPT's knowledge and interpretation ability for this parasitology examination were not yet comparable to those of medical students in Korea.

Total: 2