Testing Hypotheses on Risk Factors for Scientific Misconduct via Matched-Control Analysis of Papers Containing Problematic Image Duplications

Science and Engineering Ethics, Volume 25, pages 771-789, 2018
Daniele Fanelli1, Rodrigo Costas2, Ferric C. Fang3, Arturo Casadevall4, Elisabeth M. Bik5
1Department of Methodology, London School of Economics and Political Science, Columbia House, London, UK
2Centre for Science and Technology Studies (CWTS), Leiden University, Leiden, The Netherlands
3Departments of Laboratory Medicine and Microbiology, University of Washington School of Medicine, Seattle, USA
4Department of Molecular Microbiology and Immunology, Johns Hopkins Bloomberg School of Public Health, Baltimore, USA
5uBiome, San Francisco, USA

Abstract

It is commonly hypothesized that scientists are more likely to engage in data falsification and fabrication when they are subject to pressures to publish, when they are not restrained by forms of social control, when they work in countries lacking policies to tackle scientific misconduct, and when they are male. Evidence to test these hypotheses, however, is inconclusive due to the difficulty of obtaining unbiased data. Here we report a pre-registered test of these four hypotheses, conducted on papers that were identified in a previous study as containing problematic image duplications through a systematic screening of the journal PLoS ONE. Image duplications were classified into three categories based on their complexity, with category 1 being most likely to reflect unintentional error and category 3 being most likely to reflect intentional fabrication. We tested multiple parameters connected to the hypotheses above with a matched-control paradigm, collecting two control papers for each paper containing duplications. Category 1 duplications were mostly not associated with any of the parameters tested, as predicted on the assumption that these duplications were mostly not due to misconduct. Categories 2 and 3, however, exhibited numerous statistically significant associations. Results of univariable and multivariable analyses support the hypotheses that academic culture, peer control, cash-based publication incentives and national misconduct policies might affect scientific integrity. No clear support was found for the "pressures to publish" hypothesis. Female authors were found to be as likely as male authors to publish duplicated images. Country-level parameters generally exhibited stronger effects than individual-level parameters, with developing countries being significantly more likely to produce problematic image duplications. This suggests that promoting good research practices in all countries should be a priority for the international research integrity agenda.
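The matched-control design described above (one case paper paired with two matched control papers) is typically analysed with conditional logistic regression, which conditions on each matched set and so cancels out set-level confounders. The following is a minimal sketch of the conditional likelihood for 1:2 matched sets with a single covariate; the data and function names are illustrative assumptions, not taken from the study.

```python
import math

def matched_set_loglik(beta, case_x, control_xs):
    """Conditional log-likelihood contribution of one matched set
    (one case, k controls) for a single covariate."""
    num = math.exp(beta * case_x)
    den = num + sum(math.exp(beta * x) for x in control_xs)
    return math.log(num / den)

def total_loglik(beta, matched_sets):
    """Sum the contributions of all matched sets."""
    return sum(matched_set_loglik(beta, case_x, ctrls)
               for case_x, ctrls in matched_sets)

# Hypothetical toy data: (case covariate, [covariates of its two controls]).
matched_sets = [(1, [0, 0]), (1, [0, 1]), (0, [0, 1]), (1, [1, 0])]

# Crude grid search for the maximum-likelihood estimate of beta; a real
# analysis would use a dedicated optimisation routine instead.
beta_hat = max((b / 100 for b in range(-300, 301)),
               key=lambda b: total_loglik(b, matched_sets))
```

A positive `beta_hat` indicates that, within matched sets, the covariate is associated with higher odds of being a case (here, a paper containing duplicated images).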
