Revisiting an Analysis of Threats to Internal Validity in Multiple Baseline Designs
Abstract
In our previous article on threats to the internal validity of multiple baseline design variations (Slocum et al., 2022), we argued that nonconcurrent multiple baseline designs (NCMB) are capable of rigorously demonstrating experimental control and should be considered equivalent to concurrent multiple baseline designs (CMB) in terms of internal validity. We were fortunate to receive five excellent commentaries on our article from experts in single-subject research design, four of whom endorsed the conclusion that NCMBs should be considered strong experimental designs capable of demonstrating experimental control. In the current article, we address the most salient points made in the five commentaries by further elaborating and clarifying the logic described in our original article. We address arguments related to classic threats including maturation, testing and session experience, and coincidental events (history). We rebut the notion that, although NCMBs are strong, CMBs provide an increment of additional control, and we discuss the application of probability-based analysis of the likelihood of threats to internal validity. We conclude by emphasizing our agreement with many of the commentaries that selection of single-case experimental designs should be based on the myriad subtleties of research priorities and contextual factors rather than on a decontextualized hierarchy of designs.
References
Christ, T. J. (2007). Experimental control and threats to internal validity of concurrent and nonconcurrent multiple baseline designs. Psychology in the Schools, 44(5), 451–459. https://doi.org/10.1002/pits.20237
Hantula, D. A. (2019). Replication and reliability in behavior science and behavior analysis [special section]. Perspectives on Behavior Science, 42, 1–132.
Horner, R. H., Carr, E. G., Halle, J., McGee, G., Odom, S., & Wolery, M. (2005). The use of single-subject research to identify evidence-based practice in special education. Exceptional Children, 71, 165–179. https://doi.org/10.1177/001440290507100203
Jarmolowicz, D. P., Greer, B. D., Killeen, P. R., & Huskinson, S. L. (2021). Applications of quantitative methods [special section]. Perspectives on Behavior Science, 44, 503–682.
Kennedy, C. H. (2005). Single-case designs for educational research (Vol. 1). Pearson/Allyn & Bacon.
Kratochwill, T. R., Horner, R. H., Levin, J. R., Machalicek, W., Ferron, J., & Johnson, A. (2021). Single-case design standards: An update and proposed upgrades. Journal of School Psychology, 89, 91–105. https://doi.org/10.1016/j.jsp.2021.10.006
Ledford, J., & Zimmerman, K. N. (2022). Rethinking rigor in multiple baseline and multiple probe designs. Remedial & Special Education. Advance online publication. https://doi.org/10.1177/07419325221102539
Maggin, D. M., Barton, E., Reichow, B., Lane, K., & Shogren, K. A. (2021). Commentary on the What Works Clearinghouse standards and procedures handbook (v. 4.1) for the review of single-case research. Remedial & Special Education. Advance online publication. https://doi.org/10.1177/07419325211051317
Ninci, J., Vannest, K. J., Willson, V., & Zhang, N. (2015). Interrater agreement between visual analysts of single-case data: A meta-analysis. Behavior Modification, 39, 510–541. https://doi.org/10.1177/0145445515581327
Slocum, T. A., Pinkelman, S. E., Joslyn, P. R., & Nichols, B. (2022). Threats to internal validity in multiple-baseline design variations. Perspectives on Behavior Science. Advance online publication. https://doi.org/10.1007/s40614-022-00326-1
Wolfe, K., Seaman, M. A., & Drasgow, E. (2016). Interrater agreement on the visual analysis of individual tiers and functional relations in multiple baseline designs. Behavior Modification, 40, 852–873. https://doi.org/10.1177/0145445516644699