Decision curve analysis revisited: overall net benefit, relationships to ROC curve analysis, and application to case-control studies
Abstract
Decision curve analysis has been introduced as a method to evaluate prediction models in terms of their clinical consequences when they are used to classify subjects into a group that should be treated and a group that should not. The key concept in this type of evaluation is the "net benefit", a concept borrowed from utility theory. We recall the foundations of decision curve analysis and discuss some new aspects. First, we stress the formal distinction between the net benefit for the treated and for the untreated, and define the concept of the "overall net benefit". Next, we revisit the important distinction between the accuracy of a prediction model, as typically assessed using the Youden index and receiver operating characteristic (ROC) analysis, and its utility, as assessed using decision curve analysis. Finally, we provide an explicit implementation of decision curve analysis for the context of case-control studies. We show that the overall net benefit, which combines the net benefit for the treated and the untreated, is a natural alternative measure of the benefit achieved by a model: it is invariant with respect to the coding of the outcome and conveys a more comprehensive picture of the situation. Further, within the framework of decision curve analysis, we illustrate the important difference between the accuracy and the utility of a model, showing that an accurate model may nevertheless perform poorly in terms of its net benefit. Lastly, we show that decision curve analysis can be applied to case-control studies, where an accurate estimate of the true prevalence of the disease cannot be obtained from the data, with only a few modifications to the original calculation procedure. We present several interrelated extensions to decision curve analysis that will both facilitate its interpretation and broaden its potential area of application.
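To make the quantities named above concrete, the sketch below computes decision curves from predicted risks and observed outcomes: the net benefit for the treated follows the usual definition from the decision curve analysis literature, the net benefit for the untreated is the analogous quantity for the decision not to treat, and their sum is taken here as one plausible way to combine the two into an overall net benefit. The function name `decision_curve`, its arguments, and the reweighting of cases and controls to an externally supplied `prevalence` (a stand-in for the case-control modification mentioned in the abstract) are illustrative assumptions, not the exact procedure developed in the paper.

```python
import numpy as np

def decision_curve(y, p, thresholds, prevalence=None):
    """Sketch: net benefit for the treated and the untreated over thresholds.

    y          : array of 0/1 outcomes (1 = diseased / event)
    p          : array of predicted risks in [0, 1]
    thresholds : threshold probabilities p_t, strictly between 0 and 1
    prevalence : optional external disease prevalence for case-control data
                 (assumption: cases and controls are reweighted to it)
    """
    y = np.asarray(y)
    p = np.asarray(p)
    n = y.size
    if prevalence is None:
        w = np.full(n, 1.0 / n)                        # cohort data: every subject weighs 1/n
    else:
        n_case = int(y.sum())
        w = np.where(y == 1,
                     prevalence / n_case,              # case weights sum to the prevalence
                     (1.0 - prevalence) / (n - n_case))
    curves = []
    for pt in thresholds:
        treat = p >= pt                                # classify as "treat" if risk reaches p_t
        tp = w[treat & (y == 1)].sum()
        fp = w[treat & (y == 0)].sum()
        tn = w[~treat & (y == 0)].sum()
        fn = w[~treat & (y == 1)].sum()
        nb_treated = tp - fp * pt / (1.0 - pt)         # net benefit for the treated
        nb_untreated = tn - fn * (1.0 - pt) / pt       # net benefit for the untreated
        curves.append((pt, nb_treated, nb_untreated,
                       nb_treated + nb_untreated))     # sum as an "overall" combination (assumption)
    return curves
```

For example, `decision_curve(y, p, [0.1, 0.2, 0.3], prevalence=0.05)` would evaluate the curves at three threshold probabilities while standardizing a case-control sample to an assumed 5% prevalence; with `prevalence=None` the calculation reduces to the standard cohort-based formulas.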