The Underlying Causes of Careless Responding and Bias Among Research Participants in Science
Abstract
The reproducibility of survey-based scientific research has suffered in recent years because of small sample sizes and response biases such as inattention and malingering. Increasing sample size carries its own difficulties, and a more effective way to improve data accuracy is to address the problems underlying response bias. This paper reviews the root causes of response bias, particularly careless responding and malingering by survey participants. These root causes fall into two groups: participant predispositions, such as cognitive ability, personality traits, motivation level, and response time; and situational variables, such as data-collection mode, environmental distractions, researcher-participant interaction, individual idiosyncrasies, and cross-cultural differences.
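As a concrete illustration of the screening approach the abstract favors over simply enlarging samples, the following is a minimal Python sketch (not from the paper) that flags potentially careless respondents using two screens common in this literature: implausibly short completion times and failed instructed-response attention checks. The column names `duration_sec`, `attn_check_1`, and `attn_check_2`, and the 120-second threshold, are hypothetical placeholders.

```python
# Minimal sketch: flag potentially careless respondents via two common screens
# discussed in the careless-responding literature: speeding and failed
# attention-check items. All column names and thresholds are hypothetical.
import pandas as pd


def flag_careless(df: pd.DataFrame,
                  min_duration_sec: float = 120.0,
                  attn_cols: tuple = ("attn_check_1", "attn_check_2")) -> pd.DataFrame:
    """Return a copy of df with boolean flags for speeding and failed checks."""
    out = df.copy()
    # Speeding: total completion time below a preregistered minimum.
    out["too_fast"] = out["duration_sec"] < min_duration_sec
    # Attention checks: instructed-response items, coded 1 when answered as instructed.
    out["failed_attention"] = (out[list(attn_cols)] != 1).any(axis=1)
    out["flag_careless"] = out["too_fast"] | out["failed_attention"]
    return out


if __name__ == "__main__":
    sample = pd.DataFrame({
        "duration_sec": [95, 480, 310],
        "attn_check_1": [1, 1, 0],
        "attn_check_2": [1, 1, 1],
    })
    print(flag_careless(sample)[["too_fast", "failed_attention", "flag_careless"]])
```

Flagged cases would typically be examined or excluded in sensitivity analyses rather than dropped automatically.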
Keywords
#response bias #scientific research #survey samples #careless responding #malingering #cognitive ability #personality traits #motivation #response time #situational variables #data collection #environmental distractions #research interaction