Is the CVI an acceptable indicator of content validity? Appraisal and recommendations

Research in Nursing & Health - Volume 30, Issue 4, Pages 459-467 - 2007
Denise F. Polit1,2, Cheryl Tatano Beck3, Steven V. Owen4
1Griffith University School of Nursing, Gold Coast, Australia
2Humanalysis, Inc., 75 Clinton Street, Saratoga Springs, NY 12866
3University of Connecticut School of Nursing, Storrs, CT
4School of Medicine, University of Texas Health Science Center at San Antonio, San Antonio, TX

Abstract

Nurse researchers typically provide evidence of content validity for instruments by computing a content validity index (CVI), based on experts' ratings of item relevance. We compared the CVI to alternative indexes and concluded that the widely‐used CVI has advantages with regard to ease of computation, understandability, focus on agreement of relevance rather than agreement per se, focus on consensus rather than consistency, and provision of both item and scale information. One weakness is its failure to adjust for chance agreement. We addressed this by translating item‐level CVIs (I‐CVIs) into values of a modified kappa statistic. Our translation suggests that items with an I‐CVI of .78 or higher for three or more experts could be considered evidence of good content validity. © 2007 Wiley Periodicals, Inc. Res Nurs Health 30:459–467, 2007.
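The abstract does not reproduce the underlying formulas, so the sketch below rests on the definitions standard in this literature: the I-CVI is the proportion of experts rating an item 3 or 4 on a 4-point relevance scale, chance agreement p_c is the binomial probability of the observed split assuming each expert endorses relevance with probability .5, and the modified kappa is k* = (I-CVI - p_c) / (1 - p_c). The function names below are illustrative, not taken from the paper.

```python
from math import comb

def i_cvi(ratings, relevant=(3, 4)):
    """Item-level CVI: proportion of experts rating the item
    relevant (3 or 4 on a 4-point relevance scale)."""
    return sum(r in relevant for r in ratings) / len(ratings)

def modified_kappa(icvi, n_experts):
    """Adjust an I-CVI for chance agreement (assumed formula):
    p_c = C(N, A) * 0.5**N, where A of N experts rated the item
    relevant; k* = (I-CVI - p_c) / (1 - p_c)."""
    a = round(icvi * n_experts)  # experts who endorsed relevance
    p_c = comb(n_experts, a) * 0.5 ** n_experts
    return (icvi - p_c) / (1 - p_c)

# Five experts, four of whom rate the item 3 or 4:
ratings = [4, 3, 4, 2, 4]
icvi = i_cvi(ratings)                       # 0.80
kappa = modified_kappa(icvi, len(ratings))  # ~0.76
print(f"I-CVI = {icvi:.2f}, modified kappa = {kappa:.2f}")
```

Under these assumptions, an I-CVI near .78 with a small expert panel already yields a k* above the conventional benchmark (roughly .75) for excellent agreement, which is consistent with the threshold the abstract recommends.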
