Classroom observation systems in context: A case for the validation of observation systems

Journal of Personnel Evaluation in Education - Volume 31 - Pages 61-95 - 2019
Shuangshuang Liu1, Courtney A. Bell1, Nathan D. Jones2, Daniel F. McCaffrey1
1Educational Testing Service, Princeton, USA
2Boston University, Boston, USA

Abstract

Researchers and practitioners sometimes assume that using a "validated" instrument will produce "valid" scores; however, contemporary views of validity point to many reasons this assumption can break down. To illustrate some of the problems with this view, and to support comparisons of observation protocols across contexts, we introduce and define the conceptual tool of an observation system. We then describe the psychometric evidence for a widely used teacher observation instrument, Charlotte Danielson's Framework for Teaching, in three contexts of use: a low-stakes research context, a low-stakes practice context, and a high-stakes practice context. Although they share a common instrument, we find that the three observation systems and their associated contexts of use combine to produce different teacher mean scores, different score distributions, and different levels of score accuracy. Nevertheless, all three systems produce higher mean scores in the classroom environment domain than in the instruction domain, and all three sets of scores support a one-factor model, whereas the Framework for Teaching posits four factors. We discuss how the interdependencies among the facets of an observation system, together with practical constraints, leave researchers with substantial validation challenges and opportunities.
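Purely as an illustration of the factor-structure claim (a minimal, hypothetical sketch, not drawn from the paper's data, instruments, or code): when component-level observation scores are driven by a single latent dimension, the first eigenvalue of their correlation matrix dominates the rest, which is the informal signal behind statements like "the scores support a one-factor model."

```python
# Hypothetical sketch: simulate component scores driven by ONE latent
# "teaching quality" factor, then inspect correlation-matrix eigenvalues.
# All numbers (500 teachers, 8 components, 1-4 rating scale) are invented.
import numpy as np

rng = np.random.default_rng(0)
n_teachers, n_components = 500, 8

quality = rng.normal(size=(n_teachers, 1))               # latent factor
loadings = rng.uniform(0.5, 0.9, size=(1, n_components))  # factor loadings
noise = rng.normal(scale=0.5, size=(n_teachers, n_components))

# Map the continuous signal onto a discrete 1-4 rubric-style scale.
scores = np.clip(np.round(2.5 + quality @ loadings + noise), 1, 4)

# A dominant first eigenvalue (relative to the second) is the classic
# informal indicator of unidimensionality; formal work would fit and
# compare one- versus four-factor models instead.
corr = np.corrcoef(scores, rowvar=False)
eigvals = np.sort(np.linalg.eigvalsh(corr))[::-1]
print("eigenvalues:", np.round(eigvals, 2))
print("first-to-second ratio:", round(float(eigvals[0] / eigvals[1]), 2))
```

A finding like the one reported here, where four posited domains collapse empirically to one factor, would show up in such a check as a first-to-second eigenvalue ratio far above 1 even when the instrument's components are grouped into four domains.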

Keywords

#observation systems #instrument validation #teachers #psychometrics #learning contexts

References

American Educational Research Association, American Psychological Association, & National Council on Measurement in Education [AERA/APA/NCME]. (2014). Standards for educational and psychological testing. Washington, D.C.: American Educational Research Association.
Archer, J., Cantrell, S., Holtzman, S. L., Joe, J. N., Tocci, C. M., & Wood, J. (2016). Better feedback for better teaching: a practical guide to improving classroom observations. New York: John Wiley & Sons.
Bell, C. A., Gitomer, D. H., McCaffrey, D. F., Hamre, B. K., Pianta, R. C., & Qi, Y. (2012). An argument approach to observation protocol validity. Educational Assessment, 17(2–3), 62–87. https://doi.org/10.1080/10627197.2012.715014
Bell, C., Jones, N., Lewis, J., Qi, Y., Kirui, D., Stickler, L., & Liu, S. (2016). Understanding consequential assessment systems of teaching: Year 1 final report to Los Angeles Unified School District (Research Memorandum No. RM-16-12). Princeton, NJ: Educational Testing Service.
Carey, M. D., Mannell, R. H., & Dunn, P. K. (2011). Does a rater's familiarity with a candidate's pronunciation affect the rating in oral proficiency interviews? Language Testing, 28(2), 201–219. https://doi.org/10.1177/0265532210393704
Casabianca, J. M., Lockwood, J. R., & McCaffrey, D. F. (2015). Trends in classroom observation scores. Educational and Psychological Measurement, 75(2), 311–337. https://doi.org/10.1177/0013164414539163
Chaplin, D., Gill, B., Thompkins, A., & Miller, H. (2014). Professional practice, student surveys, and value-added: multiple measures of teacher effectiveness in the Pittsburgh Public Schools (REL 2014-024). Regional Educational Laboratory Mid-Atlantic.
Charalambous, C. Y., & Praetorius, A. K. (2018). Studying mathematics instruction through different lenses: setting the ground for understanding instructional quality more comprehensively. ZDM, 50(3), 355–366.
Cohen, J., & Grossman, P. (2016). Respecting complexity in measures of teaching: keeping students and schools in focus. Teaching and Teacher Education, 55, 308–317. https://doi.org/10.1016/j.tate.2016.01.017
Cohen, J., Ruzek, E., & Sandilos, L. (2018). Does teaching quality cross subjects? Exploring consistency in elementary teacher practice across subjects. AERA Open, 4(3), 2332858418794492.
Dalland, C. P., Klette, K., & Svenkerud, S. (2018). Video studies and the challenge of selecting time scales. International Journal of Research & Method in Education. Manuscript submitted for publication.
Danielson, C. (1996). Enhancing professional practice: a framework for teaching. Alexandria, VA: Association for Supervision and Curriculum Development.
Danielson, C. (2007). Enhancing professional practice: a framework for teaching. Alexandria, VA: Association for Supervision and Curriculum Development.
Danielson, C. (2011). Enhancing professional practice: a framework for teaching. Princeton, NJ: The Danielson Group.
Danielson, C. (2013). The Framework for Teaching evaluation instrument, 2013 edition. Retrieved January 17, 2017, from https://www.danielsongroup.org/framework/
Darling-Hammond, L., & Rothman, R. (2015). Teaching in the flat world: learning from high-performing systems. New York: Teachers College Press.
Donaldson, M. L., & Woulfin, S. (2018). From tinkering to going "rogue": how principals use agency when enacting new teacher evaluation systems. Educational Evaluation and Policy Analysis, 0162373718784205.
Engelhard, G. (1996). Evaluating rater accuracy in performance assessments. Journal of Educational Measurement, 33(1), 56–70.
Floman, J. L., Hagelskamp, C., Brackett, M. A., & Rivers, S. E. (2017). Emotional bias in classroom observations: within-rater positive emotion predicts favorable assessments of classroom quality. Journal of Psychoeducational Assessment, 35(3), 291–301.
Goe, L., Bell, C., & Little, O. (2008). Approaches to evaluating teacher effectiveness: a research synthesis. National Comprehensive Center for Teacher Quality. Retrieved December 3, 2008, from https://gtlcenter.org/sites/default/files/docs/EvaluatingTeachEffectiveness.pdf
Hafen, C. A., Hamre, B. K., Allen, J. P., Bell, C. A., Gitomer, D. H., & Pianta, R. C. (2015). Teaching through interactions in secondary school classrooms: revisiting the factor structure and practical application of the Classroom Assessment Scoring System–Secondary. The Journal of Early Adolescence, 35(5–6), 651–680.
Harik, P., Clauser, B. E., Grabovsky, I., Nungester, R. J., Swanson, D., & Nandakumar, R. (2009). An examination of rater drift within a generalizability theory framework. Journal of Educational Measurement, 46(1), 43–58.
Herlihy, C., Karger, E., Pollard, C., Hill, H. C., Kraft, M. A., Williams, M., & Howard, S. (2014). State and local efforts to investigate the validity and reliability of scores from teacher evaluation systems. Teachers College Record, 116(1), 1–28.
Hess, F. M. (2015). Lofty promises but little change for America's schools. Education Next, 15(4), 50–56.
Hill, H. C., Charalambous, C. Y., Blazar, D., McGinn, D., Kraft, M. A., Beisiegel, M., et al. (2012a). Validating arguments for observational instruments: attending to multiple sources of variation. Educational Assessment, 17(2–3), 88–106.
Hill, H. C., Charalambous, C. Y., & Kraft, M. A. (2012b). When rater reliability is not enough: teacher observation systems and a case for the generalizability study. Educational Researcher, 41(2), 56–64. https://doi.org/10.3102/0013189X12437203
Ho, A. D., & Kane, T. J. (2013). The reliability of classroom observations by school personnel (MET Project research paper). Seattle: Bill & Melinda Gates Foundation.
Hoffman, J. V., Sailors, M., Duffy, G. R., & Beretvas, S. N. (2004). The effective elementary classroom literacy environment: examining the validity of the TEX-IN3 Observation System. Journal of Literacy Research, 36(3), 303–334.
Joe, J. N., McClellan, C. A., & Holtzman, S. L. (2014). Scoring design decisions: reliability and the length and focus of classroom observations. In T. J. Kane, K. Kerr, & R. C. Pianta (Eds.), Designing teacher evaluation systems (pp. 415–443). New York: Jossey-Bass.
Joe, J. N., Tocci, C. M., Holtzman, S. L., & Williams, J. C. (2013). Foundations of observation: considerations for developing a classroom observation system that helps districts achieve consistent and accurate scores (MET Project policy and practice brief). Retrieved January 21, 2019, from http://k12education.gatesfoundation.org/resource/foundations-of-observations-considerations-for-developing-a-classroom-observation-system-that-helps-districts-achieve-consistent-and-accurate-scores/
Jølle, L. (2015). Rater strategies for reaching agreement on pupil text quality. Assessment in Education: Principles, Policy & Practice, 22(4), 458–474.
Kane, M. T. (2006). Validation. In R. L. Brennan (Ed.), Educational measurement (pp. 17–64). New York: Praeger.
Kane, M. T. (2013a). Validating the interpretations and uses of test scores. Journal of Educational Measurement, 50(1), 1–73. https://doi.org/10.1111/jedm.12000
Kane, M. T. (2013b). Validation as a pragmatic, scientific activity. Journal of Educational Measurement, 50(1), 115–122. https://doi.org/10.1111/jedm.12007
Kane, T. J., & Staiger, D. O. (2012). Gathering feedback for teaching: combining high-quality observations with student surveys and achievement gains. Retrieved January 4, 2013, from http://metproject.org/downloads/MET_Gathering_Feedback_Research_Paper.pdf
Kane, T. J., Taylor, E. S., Tyler, J. H., & Wooten, A. L. (2010). Identifying effective classroom practices using student achievement data (NBER Working Paper No. 15803). https://doi.org/10.3386/w15803
Kraft, M. A., & Gilmour, A. F. (2016). Can principals promote teacher development as evaluators? A case study of principals' views and experiences. Educational Administration Quarterly, 52(5), 711–753.
Lazarev, V., Newman, D., & Sharp, A. (2014). Properties of the multiple measures in Arizona's teacher evaluation model (REL 2015-050). Regional Educational Laboratory West. Retrieved July 23, 2018, from https://files.eric.ed.gov/fulltext/ED548027.pdf
Leckie, G., & Baird, J. A. (2011). Rater effects on essay scoring: a multilevel analysis of severity drift, central tendency, and rater experience. Journal of Educational Measurement, 48(4), 399–418.
Lockwood, J. R., Savitsky, T. D., & McCaffrey, D. F. (2015). Inferring constructs of effective teaching from classroom observations: an application of Bayesian exploratory factor analysis without restrictions. Annals of Applied Statistics, 9(3), 1484–1509.
Martin-Raugh, M., Tannenbaum, R. J., Tocci, C. M., & Reese, C. (2016). Behaviorally anchored rating scales: an application for evaluating teaching practice. Teaching and Teacher Education, 59, 414–419. https://doi.org/10.1016/j.tate.2016.07.026
Martinez, F., Taut, S., & Schaaf, K. (2016). Classroom observation for evaluating and improving teaching: an international perspective. Studies in Educational Evaluation, 49, 15–29.
McCaffrey, D. F., Yuan, K., Savitsky, T. D., Lockwood, J. R., & Edelen, M. O. (2015). Uncovering multivariate structure in classroom observations in the presence of rater errors. Educational Measurement: Issues and Practice, 34(2), 34–46.
McClellan, C. (2013). What it looks like: master coding videos for observer training and assessment. Seattle: Bill & Melinda Gates Foundation. Retrieved January 14, 2014, from http://k12education.gatesfoundation.org/resource/what-it-looks-like-master-coding-videos-for-observer-training-and-assessment/
McClellan, C., Atkinson, M., & Danielson, C. (2012). Teacher evaluator training & certification: lessons learned from the Measures of Effective Teaching project (Practitioner Series for Teacher Evaluation). San Francisco: Teachscape. Retrieved January 3, 2019, from https://www.issuelab.org/resource/teacher-evaluator-training-certification-lessons-learned-from-themeasures-of-effective-teaching-project.html
Muijs, D., Kyriakides, L., van der Werf, G., Creemers, B., Timperley, H., & Earl, L. (2014). State of the art–teacher effectiveness and professional learning. School Effectiveness and School Improvement, 25(2), 231–256.
Myford, C. M., & Wolfe, E. W. (2009). Monitoring rater performance over time: a framework for detecting differential accuracy and differential scale category use. Journal of Educational Measurement, 46(4), 371–389.
Netolicky, D. M. (2016). Coaching for professional growth in one Australian school: "oil in water". International Journal of Mentoring and Coaching in Education, 5(2), 66–86. https://doi.org/10.1108/IJMCE-09-2015-0025
Pianta, R. C., La Paro, K. M., & Hamre, B. K. (2008). Classroom Assessment Scoring System (CLASS) manual, pre-K. Baltimore: Brookes.
Pons, A. (2018). What does teaching look like? A new video study [Blog post]. Retrieved December 2, 2018, from http://oecdeducationtoday.blogspot.com/2018/01/what-does-teaching-look-like-new-video.html
Praetorius, A.-K., Pauli, C., Reusser, K., Rakoczy, K., & Klieme, E. (2014). One lesson is all you need? Stability of instructional quality across lessons. Learning and Instruction, 31, 2–12. https://doi.org/10.1016/j.learninstruc.2013.12.002
Praetorius, A. K., & Charalambous, C. Y. (2018). Classroom observation frameworks for studying instructional quality: looking back and looking forward. ZDM, 50(3), 535–553. https://doi.org/10.1007/s11858-018-0946-0
Roegman, R., Goodwin, A. L., Reed, R., & Scott-McLaughlin, R. M. (2016). Unpacking the data: an analysis of the use of Danielson's (2007) Framework for Professional Practice in a teaching residency program. Educational Assessment, Evaluation and Accountability, 28(2), 111–137. https://doi.org/10.1007/s11092-015-9228-3
Sahlberg, P. (2011). Finnish lessons. New York: Teachers College Press.
Schoenfeld, A. H., Floden, R., El Chidiac, F., Gillingham, D., Fink, H., Hu, S., et al. (2018). On classroom observations. Journal for STEM Education Research, 1, 34–59. https://doi.org/10.1007/s41979-018-0001-7
Seidel, T., Prenzel, M., & Kobarg, M. (2005). How to run a video study: technical report of the IPN Video Study. Berlin: Waxmann.
Shepard, L. A. (2016). Evaluating test validity: reprise and progress. Assessment in Education: Principles, Policy and Practice, 23(2), 268–280. https://doi.org/10.1080/0969594X.2016.1141168
State of New Jersey Administrative Code, 6A:10-7.1 (2016), Subchapter 7.
Steinberg, M. P., & Garrett, R. (2016). Classroom composition and measured teacher performance: what do teacher observation scores really measure? Educational Evaluation and Policy Analysis, 38(2), 293–317. https://doi.org/10.3102/0162373715616249
Stigler, J. W., Gonzales, P., Kawanaka, T., Knoll, S., & Serrano, A. (1999). The TIMSS videotape classroom study: methods and findings from an exploratory research project on eighth-grade mathematics instruction in Germany, Japan, and the United States. Washington, D.C.: National Center for Education Statistics. Retrieved October 12, 2014, from http://nces.ed.gov/pubs99/1999074.pdf
Taut, S., Santelices, M. V., & Stecher, B. (2012). Validation of a national teacher assessment and improvement system. Educational Assessment, 17(4), 163–199.
Taut, S., & Sun, Y. (2014). The development and implementation of a national, standards-based, multi-method teacher performance assessment system in Chile. Education Policy Analysis Archives, 22(71), 1–31. https://doi.org/10.14507/epaa.v22n71.2014
van der Lans, R. M., van de Grift, W. J., & van Veen, K. (2017). Individual differences in teacher development: an exploration of the applicability of a stage model to assess individual teachers. Learning and Individual Differences, 58, 46–55.
van der Lans, R. M., van de Grift, W. J., van Veen, K., & Fokkens-Bruinsma, M. (2016). Once is not enough: establishing reliability criteria for feedback and evaluation decisions based on classroom observations. Studies in Educational Evaluation, 50, 88–95.
White, T. (2014a). Evaluating teachers more strategically: using performance results to streamline evaluation systems. Retrieved September 6, 2018, from https://www.carnegiefoundation.org/wp-content/uploads/2014/12/BRIEF_evaluating_teachers_strategically_Jan2014.pdf
White, T. (2014b). Adding eyes: the rise, rewards, and risks of multi-rater teacher observation systems. Retrieved September 6, 2018, from https://www.carnegiefoundation.org/wp-content/uploads/2014/12/BRIEF_Multi-rater_evaluation_Dec2014.pdf
White, M. C. (2018). Rater performance standards for classroom observation instruments. Educational Researcher, 47(8), 492–501. https://doi.org/10.3102/0013189X18785623
Whitehurst, G., Chingos, M., & Lindquist, K. (2014). Evaluating teachers with classroom observations: lessons learned in four districts. Washington, D.C.: Brown Center on Education Policy at the Brookings Institution.