Defending and managing the pipeline: lessons for running a randomized experiment in a correctional institution

Journal of Experimental Criminology - Volume 8 - Pages 307-329 - 2012
Caterina G. Roman1, Jocelyn Fontaine2, John Fallon3, Jacquelyn Anderson4, Corinne Rearer5
1Department of Criminal Justice, Temple University, Philadelphia, USA
2The Urban Institute, Washington, USA
3Corporation for Supportive Housing, Chicago, USA
4Corporation for Supportive Housing, Oakland, USA
5Trilogy, Inc., Chicago, USA

Abstract

This paper discusses the challenges faced in an experimental prisoner reentry evaluation with regard to managing the pipeline of eligible cases. It uses a case study approach, coupled with a review of the relevant literature on case flow in experimental studies in criminal justice settings. Included are recommendations for researchers on the management of case flow, reflections on the major research design issues encountered, and a list of dilemmas likely to plague experimental evaluations of prisoner reentry programs. Particularly in a jail setting, anticipating the timing of a prisoner's release to the community is probably impossible given the large number of issues that affect release, many of which will be unanticipated. A detailed pipeline study is critical to the success of an experimental study targeting returning prisoners, and pipeline studies should be conducted under what will be the true conditions and context for enrollment, given all eligibility criteria. With continued and systematic documentation of enrollment challenges in future experimental evaluations of reentry programs, as well as other experimental evaluations that involve individuals, researchers can build a deep literature that would help facilitate future successful randomized experiments in the criminal justice field.
