Introducing EMMIE: an evidence rating scale to encourage mixed-method crime prevention synthesis reviews

Journal of Experimental Criminology - Volume 11 - Pages 459–473 - 2015
Shane D. Johnson1, Nick Tilley1, Kate J. Bowers1
1UCL Department of Security and Crime Science, University College London, London, UK

Abstract

This paper describes the need for, and the development of, a coding system to distil the quality and coverage of systematic reviews of the evidence relating to crime prevention interventions. The starting point for the coding system is the evidence needs of policymakers and practitioners. The proposed coding scheme (EMMIE) builds on previous scales developed to assess the probity, coverage and utility of evidence in both health and criminal justice. It also draws on the principles of realist synthesis and review. The EMMIE scale identifies five dimensions to which systematic reviews intended to inform crime prevention should speak: the Effect of the intervention, the causal Mechanism(s) through which the intervention is intended to work, the factors that Moderate its impact, the articulation of practical Implementation issues, and the Economic costs of the intervention. Systematic reviews of crime prevention, and the primary studies on which they are based, typically address the question of effect size but are often silent on the other dimensions of EMMIE. This gap in knowledge is unhelpful to practitioners, who want to know more than simply what might work to reduce crime. The EMMIE framework is intended to encourage the collection of primary data on these issues and the synthesis of such knowledge in future systematic reviews.