Statistics in Medicine

Notable scientific publications

Evaluating the added predictive ability of a new marker: From area under the ROC curve to reclassification and beyond
Statistics in Medicine - Volume 27, Issue 2, Pages 157-172 - 2008
Michael J. Pencina, Ralph B. D'Agostino, Ramachandran S. Vasan
Identification of key factors associated with the risk of developing cardiovascular disease and quantification of this risk using multivariable prediction algorithms are among the major advances made in preventive cardiology and cardiovascular epidemiology in the 20th century. The ongoing discovery of new risk markers by scientists presents opportunities and challenges for statisticians and clinicians to evaluate these biomarkers and to develop new risk formulations that incorporate them. One of the key questions is how best to assess and quantify the improvement in risk prediction offered by these new models. Demonstration of a statistically significant association of a new biomarker with cardiovascular risk is not enough. Some researchers have advanced that the improvement in the area under the receiver‐operating‐characteristic curve (AUC) should be the main criterion, whereas others argue that better measures of performance of prediction models are needed. In this paper, we address this question by introducing two new measures, one based on integrated sensitivity and specificity and the other on reclassification tables. These new measures offer incremental information over the AUC. We discuss the properties of these new measures and contrast them with the AUC. We also develop simple asymptotic tests of significance. We illustrate the use of these measures with an example from the Framingham Heart Study. We propose that scientists consider these types of measures in addition to the AUC when assessing the performance of newer biomarkers. Copyright © 2007 John Wiley & Sons, Ltd.
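The two measures introduced here are now widely known as the net reclassification improvement (NRI) and the integrated discrimination improvement (IDI). A minimal sketch of how they can be computed from predicted risks under the old and new models follows; the risk-category boundaries and variable names are illustrative, not the paper's prescription.

```python
import numpy as np

def reclassification_measures(p_old, p_new, y, cuts=(0.06, 0.20)):
    """NRI over risk categories and IDI. p_old, p_new: predicted risks
    from the old and new models; y: 0/1 event indicator; cuts are
    illustrative category boundaries (not the paper's mandate)."""
    p_old, p_new, y = map(np.asarray, (p_old, p_new, y))
    cat_old = np.digitize(p_old, cuts)   # risk category under old model
    cat_new = np.digitize(p_new, cuts)   # risk category under new model
    up, down = cat_new > cat_old, cat_new < cat_old
    ev, ne = y == 1, y == 0
    # NRI: net proportion of events moving up minus non-events moving up
    nri = (up[ev].mean() - down[ev].mean()) - (up[ne].mean() - down[ne].mean())
    # IDI: change in discrimination slope (mean risk in events minus
    # mean risk in non-events) between the new and old models
    idi = (p_new[ev].mean() - p_new[ne].mean()) - (p_old[ev].mean() - p_old[ne].mean())
    return nri, idi
```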
Classification accuracy and cut point selection
Statistics in Medicine - Volume 31, Issue 23, Pages 2676-2686 - 2012
Xinhua Liu
In biomedical research and practice, quantitative tests or biomarkers are often used for diagnostic or screening purposes, with a cut point established on the quantitative measurement to aid binary classification. This paper introduces an alternative to the traditional methods based on the Youden index and the closest‐to‐(0, 1) criterion for threshold selection. A concordance probability evaluating the classification accuracy of a dichotomized measure is defined as an objective function of the possible cut point. A nonparametric approach is used to search for the optimal cut point maximizing the objective function. The procedure is shown to perform well in a simulation study. Using data from a real‐world study of arsenic‐induced skin lesions, we apply the method to a measure of blood arsenic levels, selecting a cut point to be used as a warning threshold. Copyright © 2012 John Wiley & Sons, Ltd.
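A minimal nonparametric sketch of such a cut point search follows. The "product" criterion, sensitivity times specificity, stands in here for the paper's concordance-probability objective (an assumption about its exact form), with the Youden index included for comparison.

```python
import numpy as np

def best_cut_point(cases, controls, criterion="product"):
    """Nonparametric search for a cut point c, classifying values above c
    as positive. 'product' maximizes Se(c) * Sp(c) (a concordance-type
    objective); 'youden' maximizes Se(c) + Sp(c) - 1."""
    cases, controls = np.asarray(cases), np.asarray(controls)
    candidates = np.unique(np.concatenate([cases, controls]))
    best_c, best_val = None, -np.inf
    for c in candidates:
        se = np.mean(cases > c)       # empirical sensitivity at cut c
        sp = np.mean(controls <= c)   # empirical specificity at cut c
        val = se * sp if criterion == "product" else se + sp - 1.0
        if val > best_val:
            best_c, best_val = c, val
    return best_c, best_val
```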
Number needed to treat (NNT): estimation of a measure of clinical benefit
Statistics in Medicine - Volume 20, Issue 24, Pages 3947-3962 - 2001
Stephen D. Walter
The number needed to treat (NNT) is becoming increasingly popular as an index for reporting the results of randomized trials and other clinical studies. It represents the expected number of patients who must be treated with an experimental therapy in order to prevent one additional adverse outcome event (or, depending on the context, to expect one additional beneficial outcome), compared to the expected event rates under the control therapy. Although NNT is a clinically useful measure, little work has been done on its statistical properties. In this paper, alternative NNT‐type measures are defined for use with discrete or continuous data. Estimators and their variances are obtained for these measures in cross‐over or parallel group designs. The ideas are illustrated with data on quality of life in asthma patients. Copyright © 2001 John Wiley & Sons, Ltd.
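For a parallel-group trial with a binary outcome, the basic estimator is the reciprocal of the absolute risk reduction. A short sketch with a Wald-type interval follows for orientation only; the paper derives estimators and variances for more general designs, including continuous outcomes and cross-over trials.

```python
import numpy as np

def nnt(events_ctrl, n_ctrl, events_trt, n_trt, z=1.96):
    """Point estimate and Wald-type CI for NNT in a parallel-group trial
    with a binary adverse outcome. NNT = 1 / ARR, where
    ARR = control event risk - treated event risk."""
    p_c, p_t = events_ctrl / n_ctrl, events_trt / n_trt
    arr = p_c - p_t
    se = np.sqrt(p_c * (1 - p_c) / n_ctrl + p_t * (1 - p_t) / n_trt)
    lo, hi = arr - z * se, arr + z * se
    # The CI for NNT inverts the CI limits for ARR; if the ARR interval
    # spans zero, the NNT interval is disjoint (benefit to infinity,
    # infinity to harm) and needs careful reporting.
    return 1 / arr, (1 / hi, 1 / lo)

print(nnt(events_ctrl=40, n_ctrl=200, events_trt=25, n_trt=200))
```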
A comparison of confidence interval methods for the intraclass correlation coefficient in cluster randomized trials
Statistics in Medicine - Volume 21, Issue 24, Pages 3757-3774 - 2002
Obioha C. Ukoumunne
A correction has been published for this article in Statistics in Medicine 23(18) 2004, 2935. This study compared different methods for assigning confidence intervals to the analysis of variance estimator of the intraclass correlation coefficient (ρ). The context of the comparison was the use of ρ to estimate the variance inflation factor when planning cluster randomized trials. The methods were compared using Monte Carlo simulations of unbalanced clustered data and data from a cluster randomized trial of an intervention to improve the management of asthma in a general practice setting. The coverage and precision of the intervals were compared for data with different numbers of clusters, mean numbers of subjects per cluster and underlying values of ρ. The performance of the methods was also compared for data with Normal and non‐Normally distributed cluster specific effects. Results of the simulations showed that methods based upon the variance ratio statistic provided greater coverage levels than those based upon large sample approximations to the standard error of ρ. Searle's method provided close to nominal coverage for data with Normally distributed random effects. Adjusted versions of Searle's method to allow for lack of balance in the data generally did not improve upon it either in terms of coverage or precision. Analyses of the trial data, however, showed that limits provided by Thomas and Hultquist's method may differ from those of the other variance ratio statistic methods when the arithmetic mean differs markedly from the harmonic mean cluster size. The simulation results demonstrated that marked non‐Normality in the cluster level random effects compromised the performance of all methods. Confidence intervals for the methods were generally wide relative to the underlying size of ρ, suggesting that there may be great uncertainty associated with sample size calculations for cluster trials where large clusters are randomized. Data from cluster based studies with sample sizes much larger than those typical of cluster randomized trials are required to estimate ρ with a reasonable degree of precision. Copyright © 2002 John Wiley & Sons, Ltd.
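The analysis of variance estimator of ρ can be written down compactly. The sketch below computes it from unbalanced clustered data, together with the design effect 1 + (m̄ − 1)ρ used for sample size inflation; it does not implement any of the paper's confidence interval constructions.

```python
import numpy as np

def anova_icc(groups):
    """One-way ANOVA estimator of the intraclass correlation rho from
    clustered data (one array per cluster), plus the design effect
    1 + (m_bar - 1) * rho used when planning cluster randomized trials."""
    groups = [np.asarray(g, dtype=float) for g in groups]
    k = len(groups)
    n = sum(len(g) for g in groups)
    grand = np.concatenate(groups).mean()
    msb = sum(len(g) * (g.mean() - grand) ** 2 for g in groups) / (k - 1)
    msw = sum(((g - g.mean()) ** 2).sum() for g in groups) / (n - k)
    # m0: 'average' cluster size adjusted for imbalance across clusters
    m0 = (n - sum(len(g) ** 2 for g in groups) / n) / (k - 1)
    rho = (msb - msw) / (msb + (m0 - 1) * msw)
    m_bar = n / k
    return rho, 1 + (m_bar - 1) * rho
```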
A goodness‐of‐fit approach to inference procedures for the kappa statistic: Confidence interval construction, significance‐testing and sample size estimation
Statistics in Medicine - Volume 13, Issue 8, Pages 876-880 - 1994
Helena Chmura Kraemer, Daniel A. Bloch, Allan Donner, Michael Eliasziw
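The kappa statistic itself is quick to compute; a sketch with a crude Wald-type interval follows for orientation. The standard error used here ignores variability in the chance-agreement term and is not the goodness-of-fit procedure the paper develops.

```python
import numpy as np

def kappa_ci(table, z=1.96):
    """Cohen's kappa for a KxK agreement table with a crude large-sample
    Wald interval. The SE below neglects variability in the
    chance-agreement term p_e, so it is a rough approximation only."""
    t = np.asarray(table, dtype=float)
    n = t.sum()
    p = t / n
    po = np.trace(p)                             # observed agreement
    pe = (p.sum(axis=1) * p.sum(axis=0)).sum()   # chance agreement
    kappa = (po - pe) / (1 - pe)
    se = np.sqrt(po * (1 - po)) / ((1 - pe) * np.sqrt(n))
    return kappa, (kappa - z * se, kappa + z * se)

print(kappa_ci([[40, 10], [5, 45]]))
```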
New confidence intervals for the difference between two sensitivities at a fixed level of specificity
Statistics in Medicine - Volume 25, Issue 20, Pages 3487-3502 - 2006
Gengsheng Qin, Yu‐Sheng Hsu, Xiao‐Hua Zhou
For two continuous‐scale diagnostic tests, it is of interest to compare their sensitivities at a predetermined level of specificity. In this paper, we propose three new intervals for the difference between two sensitivities at a fixed level of specificity. These intervals are easy to compute. We also conduct simulation studies to compare the relative performance of the new intervals with the existing normal‐approximation‐based interval proposed by Wieand et al. Our simulation results show that the newly proposed intervals perform better than the existing normal‐approximation‐based interval in terms of coverage accuracy and interval length. Copyright © 2005 John Wiley & Sons, Ltd.
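Estimating a sensitivity at fixed specificity amounts to reading the case distribution at a quantile of the control distribution. The sketch below does this nonparametrically and attaches a paired percentile-bootstrap interval to the difference; it is a generic alternative shown for orientation, not one of the paper's three proposed intervals.

```python
import numpy as np

def sens_at_spec(cases, controls, spec=0.90):
    """Sensitivity of a continuous test at fixed specificity: the cut
    point is the empirical `spec`-quantile of the controls."""
    c = np.quantile(controls, spec)
    return np.mean(np.asarray(cases) > c)

def diff_sens_bootstrap_ci(cases1, cases2, controls1, controls2,
                           spec=0.90, B=2000, alpha=0.05, seed=0):
    """Percentile-bootstrap CI for the difference in sensitivities of two
    tests measured on the same subjects, resampling subjects jointly to
    preserve the pairing between tests."""
    rng = np.random.default_rng(seed)
    cases1, cases2 = np.asarray(cases1), np.asarray(cases2)
    controls1, controls2 = np.asarray(controls1), np.asarray(controls2)
    diffs = np.empty(B)
    for b in range(B):
        ic = rng.integers(0, len(cases1), len(cases1))        # resample cases
        ik = rng.integers(0, len(controls1), len(controls1))  # resample controls
        diffs[b] = (sens_at_spec(cases1[ic], controls1[ik], spec)
                    - sens_at_spec(cases2[ic], controls2[ik], spec))
    return np.quantile(diffs, [alpha / 2, 1 - alpha / 2])
```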
Choice of time‐scale in Cox's model analysis of epidemiologic cohort data: a simulation study
Statistics in Medicine - Volume 23, Issue 24, Pages 3803-3820 - 2004
A. Thiébaut, Jacques Bénichou
Cox's regression model is widely used for assessing associations between potential risk factors and disease occurrence in epidemiologic cohort studies. Although age is often a strong determinant of disease risk, authors have frequently used time‐on‐study instead of age as the time‐scale, as for clinical trials. Unless the baseline hazard is an exponential function of age, this approach can yield different estimates of relative hazards than using age as the time‐scale, even when age is adjusted for. We performed a simulation study in order to investigate the existence and magnitude of bias for different degrees of association between age and the covariate of interest. Age to disease onset was generated from exponential, Weibull or piecewise Weibull distributions, and both fixed and time‐dependent dichotomous covariates were considered. We observed no bias upon using age as the time‐scale. Upon using time‐on‐study, we verified the absence of bias for exponentially distributed age to disease onset. For non‐exponential distributions, we found that bias could occur even when the covariate of interest was independent from age. It could be severe in case of substantial association with age, especially with time‐dependent covariates. These findings were illustrated on data from a cohort of 84,329 French women followed prospectively for breast cancer occurrence. In view of our results, we strongly recommend not using time‐on‐study as the time‐scale for analysing epidemiologic cohort data. Copyright © 2004 John Wiley & Sons, Ltd.
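Using age as the time-scale means treating cohort entry as left truncation: each subject joins the risk set at their entry age. A sketch with simulated data follows, assuming the Python lifelines package; the variable names and data-generating mechanism are illustrative.

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(0)
n = 500
entry_age = rng.uniform(40, 60, n)          # age at cohort entry
exposure = rng.integers(0, 2, n)
# Illustrative Weibull-type age at onset, earlier for the exposed
onset_age = 40 + 35 * rng.weibull(3.0, n) * np.exp(-0.3 * exposure)
end_age = entry_age + 15                    # administrative censoring

df = pd.DataFrame({
    "age_entry": entry_age,
    "age_exit": np.minimum(onset_age, end_age),
    "event": (onset_age <= end_age).astype(int),
    "exposure": exposure,
}).query("age_exit > age_entry")            # drop onsets before entry

# Age as the time-scale: entry_col encodes left truncation, so the
# baseline hazard is a function of age rather than of time-on-study.
cph = CoxPHFitter()
cph.fit(df, duration_col="age_exit", event_col="event", entry_col="age_entry")
print(cph.summary[["coef", "exp(coef)"]])
```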
Flexible modeling of the cumulative effects of time‐dependent exposures on the hazard
Statistics in Medicine - Volume 28, Issue 27, Pages 3437-3453 - 2009
Marie‐Pierre Sylvestre, Michał Abrahamowicz
Many epidemiological studies assess the effects of time‐dependent exposures, where both the exposure status and its intensity vary over time. One example that attracts public attention concerns pharmacoepidemiological studies of the adverse effects of medications. The analysis of such studies poses challenges for modeling the impact of complex time‐dependent drug exposure, especially given the uncertainty about the way effects cumulate over time and about the etiological relevance of doses taken in different time periods. We present a flexible method for modeling cumulative effects of time‐varying exposures, weighted by recency, represented by time‐dependent covariates in the Cox proportional hazards model. The function that assigns weights to doses taken in the past is estimated using cubic regression splines. We validated the method in simulations and applied it to re‐assess the association between exposure to a psychotropic drug and fall‐related injuries in the elderly. Copyright © 2009 John Wiley & Sons, Ltd.
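The core quantity is a weighted sum of past doses, WCE(u) = Σ_{t ≤ u} w(u − t) X(t). The sketch below evaluates it for a given weight function over a fixed window; in the paper the weight function is estimated with cubic regression splines rather than supplied, so this only illustrates the covariate construction.

```python
import numpy as np

def wce(doses, weight, window):
    """Weighted cumulative exposure at each time u:
    WCE(u) = sum over past times t of weight(u - t) * dose(t), restricted
    to the last `window` time units. `doses` holds doses at times
    0, 1, 2, ...; `weight` maps time-since-dose to a weight and here is
    a known function for illustration (the paper estimates it)."""
    doses = np.asarray(doses, dtype=float)
    out = np.zeros(len(doses))
    for u in range(len(doses)):
        lags = np.arange(min(u + 1, window))   # time since each past dose
        out[u] = np.sum(weight(lags) * doses[u - lags])
    return out

# Example: exponentially decaying relevance of doses over a 90-day window.
history = np.random.default_rng(1).integers(0, 2, 180).astype(float)
print(wce(history, lambda lag: np.exp(-lag / 30.0), window=90)[-5:])
```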
Comparison of algorithms to generate event times conditional on time‐dependent covariates
Statistics in Medicine - Volume 27, Issue 14, Pages 2618-2634 - 2008
Marie‐Pierre Sylvestre, Michał Abrahamowicz
The Cox proportional hazards model with time‐dependent covariates (TDC) is now a part of the standard statistical analysis toolbox in medical research. As new methods involving more complex modeling of time‐dependent variables are developed, simulations could often be used to systematically assess the performance of these models. Yet, generating event times conditional on TDC requires well‐designed and efficient algorithms. We compare two classes of such algorithms: permutational algorithms (PAs) and algorithms based on a binomial model. We also propose a modification of the PA to incorporate a rejection sampler. We performed a simulation study to assess the accuracy, stability, and speed of these algorithms in several scenarios. Both classes of algorithms generated data sets that, once analyzed, provided virtually unbiased estimates with comparable variances. In terms of computational efficiency, the PA with the rejection sampler reduced the time necessary to generate data by more than 50 per cent relative to alternative methods. The PAs also allowed more flexibility in the specification of the marginal distributions of event times and required less calibration. Copyright © 2007 John Wiley & Sons, Ltd.
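A stripped-down version of the permutational algorithm, for fixed covariates and no censoring, conveys the idea: event times drawn from the desired marginal distribution are sorted and matched, one by one, to subjects sampled from the shrinking risk set with Cox-type probabilities. Time-dependent covariates and the rejection sampler are omitted here.

```python
import numpy as np

def permutational_assignment(z, beta, event_times, rng=None):
    """Simplified permutational algorithm: sorted marginal event times are
    assigned to subjects drawn from the remaining risk set with probability
    proportional to exp(beta * z_i), mimicking the Cox partial likelihood.
    (The paper's version also handles TDC and censoring.)"""
    rng = rng or np.random.default_rng()
    z = np.asarray(z, dtype=float)
    at_risk = list(range(len(z)))
    assigned = {}
    for t in np.sort(event_times):
        w = np.exp(beta * z[at_risk])
        i = rng.choice(len(at_risk), p=w / w.sum())
        assigned[at_risk.pop(i)] = t     # subject leaves the risk set
    return assigned                      # maps subject index -> event time

# Example: 100 subjects, binary covariate, log-hazard ratio 0.7,
# exponential marginal event times.
rng = np.random.default_rng(2)
z = rng.integers(0, 2, 100)
times = rng.exponential(5.0, size=100)
print(list(permutational_assignment(z, 0.7, times, rng).items())[:3])
```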
Evaluation of Cox's model and logistic regression for matched case‐control data with time‐dependent covariates: a simulation study
Statistics in Medicine - Volume 22, Issue 24, Pages 3781-3794 - 2003
Karen Leffondré, Michał Abrahamowicz, Jack Siemiatycki
Case‐control studies are typically analysed using the conventional logistic model, which does not directly account for changes in the covariate values over time. Yet, many exposures may vary over time. The most natural alternative to handle such exposures would be to use the Cox model with time‐dependent covariates. However, its application to case‐control data opens the question of how to manipulate the risk sets. Through a simulation study, we investigate how the accuracy of the estimates of Cox's model depends on the operational definition of risk sets and/or on some aspects of the time‐varying exposure. We also assess the estimates obtained from conventional logistic regression. The lifetime experience of a hypothetical population is first generated, and a matched case‐control study is then simulated from this population. We control the frequency, the age at initiation, and the total duration of exposure, as well as the strengths of their effects. All models considered include a fixed‐in‐time covariate and one or two time‐dependent covariate(s): the indicator of current exposure and/or the exposure duration. Simulation results show that none of the models always performs well. The discrepancies between the odds ratios yielded by logistic regression and the ‘true’ hazard ratio depend on both the type of the covariate and the strength of its effect. In addition, it seems that logistic regression has difficulty separating the effects of inter‐correlated time‐dependent covariates. By contrast, each of the two versions of Cox's model systematically induces either a serious under‐estimation or a moderate over‐estimation bias. The magnitude of the latter bias is proportional to the true effect, suggesting that an improved manipulation of the risk sets may eliminate, or at least reduce, the bias. Copyright © 2003 John Wiley & Sons, Ltd.
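The risk-set manipulation at issue can be illustrated with incidence density sampling: for each case, controls are drawn from subjects still at risk at the case's failure time, within matching strata. A sketch follows; the names and the matching rule are illustrative, not the paper's simulation design.

```python
import numpy as np

def incidence_density_sample(event_time, is_case, match_var, m=1, rng=None):
    """Risk-set (incidence density) sampling from a cohort: for each case,
    draw m controls from subjects still at risk at the case's event time
    and matched on `match_var`. The matched sets can be analysed with
    conditional logistic regression, or used as the risk sets of a Cox
    model with time-dependent covariates."""
    rng = rng or np.random.default_rng()
    event_time = np.asarray(event_time, dtype=float)
    is_case = np.asarray(is_case, dtype=bool)
    match_var = np.asarray(match_var)
    sets = []
    for i in np.flatnonzero(is_case):
        at_risk = np.flatnonzero((event_time > event_time[i])
                                 & (match_var == match_var[i]))
        if len(at_risk) >= m:
            sets.append((i, rng.choice(at_risk, size=m, replace=False)))
    return sets  # list of (case index, array of control indices)
```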
Total: 52