Similar Documents
20 similar documents found.
1.
Estimation and inference in time-to-event analysis typically focus on hazard functions and their ratios under the Cox proportional hazards model. These hazard functions, while popular in the statistical literature, are not always easily or intuitively communicated in clinical practice, such as in patient counseling or resource planning. Expressing and comparing quantiles of event times may allow for easier understanding. In this article, we focus on residual time, i.e., the remaining time to event at an arbitrary time t given that the event has yet to occur by t. In particular, we develop estimation and inference procedures for covariate-specific quantiles of the residual time under the Cox model. Our methods and theory are assessed by simulations and demonstrated in analyses of two real data sets.
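As a rough illustration of the quantity being estimated (not the authors' estimator), the sketch below fits a Cox model with the Python lifelines package to synthetic data with hypothetical column names, then reads a covariate-specific residual-time quantile off the predicted survival curve via the identity S(t0 + q | x) = (1 - p) * S(t0 | x).

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

# Synthetic data with a hypothetical covariate "age" (illustration only).
rng = np.random.default_rng(0)
n = 300
age = rng.normal(60, 10, n)
t = rng.exponential(1 / (0.02 * np.exp(0.03 * (age - 60))))  # Cox-type times
c = rng.exponential(50, n)                                   # censoring times
df = pd.DataFrame({"time": np.minimum(t, c),
                   "status": (t <= c).astype(int), "age": age})

cph = CoxPHFitter().fit(df, duration_col="time", event_col="status")

def residual_time_quantile(cph, x_row, t0, p=0.5):
    """p-th quantile of T - t0 given T > t0 for covariate profile x_row:
    the smallest u with S(t0 + u | x) <= (1 - p) * S(t0 | x)."""
    sf = cph.predict_survival_function(x_row)   # indexed by the timeline
    times, surv = sf.index.values, sf.values.ravel()
    target = (1.0 - p) * np.interp(t0, times, surv)
    below = times[surv <= target]
    return below[0] - t0 if len(below) else np.inf  # beyond follow-up

print(residual_time_quantile(cph, pd.DataFrame({"age": [65.0]}), t0=5.0))
```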

2.
In randomized clinical trials with time-to-event outcomes, the hazard ratio is commonly used to quantify the treatment effect relative to a control. The Cox regression model is commonly used to adjust for relevant covariates to obtain more accurate estimates of the hazard ratio between treatment groups. However, it is well known that the treatment hazard ratio based on a covariate-adjusted Cox regression model is conditional on the specific covariates and differs from the unconditional hazard ratio, which is an average across the population. Therefore, covariate-adjusted Cox models cannot be used when unconditional inference is desired. In addition, the covariate-adjusted Cox model requires the relatively strong assumption of proportional hazards for each covariate. To overcome these challenges, a nonparametric randomization-based analysis of covariance method was proposed to estimate covariate-adjusted hazard ratios for multivariate time-to-event outcomes. However, the performance (power and type I error rate) of the method has not been evaluated empirically. Although the method is derived for multivariate situations, the primary endpoint in most registration trials is a univariate outcome. In this paper, we therefore apply the approach to univariate outcomes and evaluate its performance through a simulation study. Stratified analysis is also investigated. As an illustration of the method, we apply both the covariate-adjusted and unadjusted analyses to an oncology trial.
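The conditional-versus-unconditional distinction (non-collapsibility of the hazard ratio) can be seen in a toy simulation. The sketch below, using the lifelines package with made-up column names, contrasts the treatment hazard ratio from a covariate-adjusted Cox fit against the unadjusted fit.

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(1)
n = 2000
trt = rng.integers(0, 2, n)
x = rng.normal(0, 1, n)                                   # prognostic covariate
t = rng.exponential(1 / (0.05 * np.exp(-0.5 * trt + 1.0 * x)))
c = rng.exponential(40, n)
df = pd.DataFrame({"time": np.minimum(t, c),
                   "status": (t <= c).astype(int), "trt": trt, "x": x})

adj = CoxPHFitter().fit(df, "time", "status")             # conditional on x
unadj = CoxPHFitter().fit(df[["time", "status", "trt"]], "time", "status")
print("covariate-conditional HR:", np.exp(adj.params_["trt"]))
print("unconditional (population-averaged) HR:", np.exp(unadj.params_["trt"]))
# With a strong prognostic covariate the two estimands differ even under
# randomization -- the motivation for randomization-based covariate
# adjustment when unconditional inference is desired.
```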

3.
The random censorship model (RCM) is commonly used in biomedical science for modeling life distributions. The popular nonparametric Kaplan–Meier estimator and semiparametric models such as the Cox proportional hazards model are discussed extensively in the literature. In this paper, we propose to fit the RCM under the assumption that the actual life distribution and the censoring distribution have a proportional odds relationship. The parametric model is defined using Marshall–Olkin's extended Weibull distribution. We use maximum likelihood to estimate the model parameters, the survival distribution, the mean residual life function, and the hazard rate. The proportional odds assumption is also checked with a newly proposed bootstrap Kolmogorov–Smirnov-type goodness-of-fit test. A simulation study of the MLEs of the model parameters and the median survival time is carried out to assess the finite-sample performance of the model. Finally, we apply the proposed model to two real-life data sets.
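A minimal sketch of censored maximum likelihood under a Marshall–Olkin extended Weibull fit, using only numpy/scipy. The proportional odds structure follows from the MO form S(t) = a·Sw(t)/(1 - (1 - a)·Sw(t)); the simulated data are a stand-in, not the paper's.

```python
import numpy as np
from scipy.optimize import minimize

def mo_weibull_logsf(t, a, k, lam):
    """log S(t) for the Marshall-Olkin extended Weibull:
    S(t) = a * Sw(t) / (1 - (1 - a) * Sw(t)), Sw(t) = exp(-(t/lam)**k)."""
    log_sw = -(t / lam) ** k
    return np.log(a) + log_sw - np.log1p(-(1 - a) * np.exp(log_sw))

def mo_weibull_logpdf(t, a, k, lam):
    # density: f(t) = a * fw(t) / (1 - (1 - a) * Sw(t))**2, fw = Weibull density
    log_sw = -(t / lam) ** k
    log_fw = np.log(k / lam) + (k - 1) * np.log(t / lam) + log_sw
    return np.log(a) + log_fw - 2 * np.log1p(-(1 - a) * np.exp(log_sw))

def neg_loglik(theta, t, d):
    a, k, lam = np.exp(theta)  # optimize on the log scale to stay positive
    return -np.sum(d * mo_weibull_logpdf(t, a, k, lam)
                   + (1 - d) * mo_weibull_logsf(t, a, k, lam))

rng = np.random.default_rng(2)
t_true = 10 * rng.weibull(1.5, 500)       # stand-in data (a = 1 truth)
c = rng.exponential(15, 500)
obs, d = np.minimum(t_true, c), (t_true <= c).astype(float)
start = np.log([1.0, 1.0, float(np.median(obs))])
fit = minimize(neg_loglik, x0=start, args=(obs, d), method="Nelder-Mead")
a_hat, k_hat, lam_hat = np.exp(fit.x)
print(a_hat, k_hat, lam_hat)
```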

4.
The linear transformation model is a semiparametric model that contains the Cox proportional hazards model and the proportional odds model as special cases. Cai et al. (Biometrika 87:867–878, 2000) proposed an inference procedure for the linear transformation model with correlated censored observations. In this article, we develop formal and graphical model-checking techniques for linear transformation models based on cumulative sums of martingale-type residuals. The proposed method is illustrated with data from a clinical trial.

5.
The stratified Cox model is commonly used for stratified clinical trials with time-to-event endpoints. The estimated log hazard ratio is approximately a weighted average of the corresponding stratum-specific Cox model estimates using inverse-variance weights; the latter are optimal only under the (often implausible) assumption of a constant hazard ratio across strata. Focusing on trials with limited sample sizes (50-200 subjects per treatment), we propose an alternative approach in which stratum-specific estimates are obtained using a refined generalized logrank (RGLR) approach and then combined using either sample size or minimum risk weights for overall inference. Our proposal extends the work of Mehrotra et al. to incorporate the RGLR statistic, which outperforms the Cox model in the setting of proportional hazards and small samples. This work also entails development of a remarkably accurate plug-in formula for the variance of RGLR-based estimated log hazard ratios. We demonstrate using simulations that the proposed two-step RGLR analysis delivers notably smaller estimation bias and mean squared error and larger power than the stratified Cox model analysis when there is a treatment-by-stratum interaction, with similar performance when there is no interaction. Additionally, in small samples our method controls the type I error rate while the stratified Cox model does not. We illustrate our method using data from a clinical trial comparing two treatments for colon cancer.
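The two-step structure (stratum-specific estimates, then a weighted average) can be sketched with ordinary Cox fits standing in for the paper's RGLR estimates; this is a lifelines-based illustration with hypothetical column names.

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

def combined_log_hr(df, weighting="sample_size"):
    """Weighted average of stratum-specific log hazard ratios.
    Plain Cox fits stand in here for the paper's RGLR estimates."""
    betas, ws = [], []
    for _, sub in df.groupby("stratum"):
        cph = CoxPHFitter().fit(sub[["time", "status", "trt"]], "time", "status")
        betas.append(cph.params_["trt"])
        ws.append(len(sub) if weighting == "sample_size"
                  else 1.0 / cph.standard_errors_["trt"] ** 2)
    ws = np.asarray(ws, float) / np.sum(ws)
    return float(np.dot(ws, betas))

rng = np.random.default_rng(3)
frames = []
for s, log_hr in enumerate([-0.2, -0.5, -0.8]):  # treatment-by-stratum interaction
    trt = rng.integers(0, 2, 120)
    t = rng.exponential(1 / (0.05 * np.exp(log_hr * trt)))
    c = rng.exponential(30, 120)
    frames.append(pd.DataFrame({"time": np.minimum(t, c),
                                "status": (t <= c).astype(int),
                                "trt": trt, "stratum": s}))
df = pd.concat(frames, ignore_index=True)
print("combined HR:", np.exp(combined_log_hr(df, "sample_size")))
```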

6.
Progression-free survival is recognized as an important endpoint in oncology clinical trials. In trials aimed at new drug development, the target population often comprises patients who are refractory to standard therapy and whose tumors progress rapidly. This situation increases the bias of the hazard ratio calculated for progression-free survival, reducing power for such patients. New measures are therefore needed at the sample size planning stage to guard against this loss of power. Here, I propose a novel procedure for deriving the hazard ratio to assume for progression-free survival under the Cox proportional hazards model, which can be applied in sample size calculation. The hazard ratios derived by the proposed procedure were almost identical to those obtained by simulation. A hazard ratio calculated by the proposed procedure is applicable to sample size calculation and attains the nominal power. Methods that compensate for the loss of power due to bias in the hazard ratio are also discussed from a practical point of view.
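For context (the standard formula the assumed hazard ratio is plugged into, not the author's bias-adjustment procedure), Schoenfeld's relation gives the number of events needed for a given hazard ratio, level, and power:

```python
import numpy as np
from scipy.stats import norm

def required_events(hr, alpha=0.05, power=0.80, p=0.5):
    """Schoenfeld's formula: events needed to detect hazard ratio `hr` at
    two-sided level alpha with the given power; p = allocation fraction."""
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    return int(np.ceil(z ** 2 / (p * (1 - p) * np.log(hr) ** 2)))

print(required_events(0.70))  # roughly 247 events for HR = 0.70, 80% power
```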

7.
In survival analysis, it is routine to test the equality of two survival curves, often by the log-rank test. Although it is optimal under the proportional hazards assumption, the log-rank test is known to have little power when the survival or hazard functions cross. To test the overall homogeneity of hazard rate functions, we propose a family of partitioned log-rank tests. By partitioning the time axis and taking the supremum of the sum of the two partitioned log-rank statistics over different partitioning points, the proposed test gains substantial power when the hazards cross. On the other hand, when the hazards are indeed proportional, our test still maintains power close to that of the optimal log-rank test. Extensive simulation studies compare the proposed test with existing methods, and three real data examples illustrate how common crossing hazards are and the advantages of the partitioned log-rank tests.
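A bare-bones reading of the construction (illustrative, not the authors' exact statistic; the null reference would be obtained by permutation in practice): accumulate the per-event-time log-rank increments, split them at each candidate cut point, and take the supremum of the sum of the two squared standardized partial sums.

```python
import numpy as np

def logrank_increments(time, event, group):
    """Per distinct event time: (time, O - E, V) for the two-sample log-rank,
    where O - E is observed minus expected events in group 1."""
    out = []
    for tj in np.unique(time[event == 1]):
        at_risk = time >= tj
        n, n1 = at_risk.sum(), (at_risk & (group == 1)).sum()
        dead = (time == tj) & (event == 1)
        d, d1 = dead.sum(), (dead & (group == 1)).sum()
        v = d * (n1 / n) * (1 - n1 / n) * (n - d) / max(n - 1, 1)
        out.append((tj, d1 - d * n1 / n, v))
    return np.array(out)

def partitioned_logrank(time, event, group):
    """sup over cut points c of Z^2(0, c] + Z^2(c, inf); powerful when the
    hazards cross, close to the plain log-rank when they are proportional."""
    inc = logrank_increments(time, event, group)
    best = 0.0
    for c in inc[:-1, 0]:                       # candidate partition points
        stat = 0.0
        for part in (inc[inc[:, 0] <= c], inc[inc[:, 0] > c]):
            if part[:, 2].sum() > 0:
                stat += part[:, 1].sum() ** 2 / part[:, 2].sum()
        best = max(best, stat)
    return best

# Crossing-hazards toy example: constant vs. increasing hazard.
rng = np.random.default_rng(4)
g = np.repeat([0, 1], 150)
t = np.where(g == 0, rng.exponential(10, 300), 11 * rng.weibull(3.0, 300))
print(partitioned_logrank(t, np.ones(300, dtype=int), g))
```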

8.

Time-to-event data often violate the proportional hazards assumption inherent in the popular Cox regression model. Such violations are especially common in biological and medical data, where latent heterogeneity due to unmeasured covariates or time-varying effects arises frequently. A variety of parametric survival models have been proposed in the literature that make more appropriate assumptions on the hazard function, at least for certain applications. One such model is derived from the first hitting time (FHT) paradigm, which assumes that a subject's event time is determined by a latent stochastic process reaching a threshold value. Several random effects specifications of the FHT model have also been proposed to better model data with unmeasured covariates. While useful, these methods can display limited flexibility because they cannot model a wide range of heterogeneities. To address this issue, we propose a Bayesian model that loosens the assumptions on the mixing distribution inherent in the random effects FHT models currently in use. We demonstrate via a simulation study that the proposed model greatly improves both survival and parameter estimation in the presence of latent heterogeneity. We also apply the proposed methodology to data from a toxicology/carcinogenicity study that exhibits nonproportional hazards and contrast the results with both the Cox model and two popular FHT models.
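To fix ideas about the FHT paradigm (standard background, not the authors' Bayesian model): for a Wiener process starting at y0 > 0 with drift mu and variance sigma^2, the first hitting time of zero follows an inverse Gaussian law, with a positive probability of never hitting when the drift points away from the boundary.

```python
import numpy as np

def fht_density(t, y0, mu, sigma):
    """Inverse Gaussian first-hitting-time density for a Wiener process
    X(t) = y0 + mu*t + sigma*W(t) absorbed at 0 (defective when mu > 0)."""
    return (y0 / (sigma * np.sqrt(2 * np.pi * t ** 3))
            * np.exp(-(y0 + mu * t) ** 2 / (2 * sigma ** 2 * t)))

def prob_event(y0, mu, sigma):
    """P(ever hitting 0): 1 if drifting toward the boundary, else < 1."""
    return 1.0 if mu <= 0 else np.exp(-2 * mu * y0 / sigma ** 2)

print(fht_density(np.array([1.0, 5.0, 10.0]), y0=5.0, mu=-0.5, sigma=1.0))
print(prob_event(5.0, 0.2, 1.0))   # latent never-event fraction when mu > 0
```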


9.
This paper discusses goodness-of-fit testing for the proportional odds model with K-sample interval-censored failure time data, which frequently occur in, for example, periodic follow-up survival studies. The proportional odds model allows the ratio of two hazard functions to be monotone and to converge to one, and it provides an important tool for modeling survival data. To test the model, we propose a procedure that generalizes the method of Dauxois and Kirmani (Biometrika 90:913–922, 2003). The asymptotic distribution of the test statistic is established, and its properties are evaluated by simulation studies.

10.
Cluster analysis is the automated search for groups of homogeneous observations in a data set. A popular model-based approach to clustering uses finite normal mixture models, which assume that each cluster follows a multivariate normal distribution. However, the symmetry implied by the normality assumption is often unrealistic. Furthermore, normal mixture models are not robust against outliers; they often require extra components to model outliers and/or give a poor representation of the data. To address these issues, we propose a new class of distributions for mixture modeling: multivariate t distributions with the Box-Cox transformation. This class generalizes the normal distribution to the heavier-tailed t distribution and introduces skewness via the Box-Cox transformation, providing a unified framework that simultaneously handles outlier identification and data transformation, two interrelated issues. We describe an Expectation-Maximization algorithm for parameter estimation along with transformation selection. We demonstrate the proposed methodology with three real data sets and simulation studies. Compared with a wealth of approaches, including the skew-t mixture model, the proposed t mixture model with the Box-Cox transformation performs favorably in terms of accuracy in the assignment of observations, robustness against model misspecification, and selection of the number of components.
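A much-simplified stand-in for the idea: a marginal Box-Cox transformation followed by a plain Gaussian mixture (the paper instead fits the transformation and a heavier-tailed t mixture jointly within EM).

```python
import numpy as np
from scipy.stats import boxcox
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(5)
x = np.vstack([rng.lognormal(0.0, 0.4, (300, 2)),      # skewed cluster 1
               rng.lognormal(1.2, 0.4, (300, 2))])     # skewed cluster 2

# Marginal Box-Cox per coordinate (requires positive data); the paper
# selects the transformation jointly with the mixture parameters.
z = np.column_stack([boxcox(x[:, j])[0] for j in range(x.shape[1])])
gm = GaussianMixture(n_components=2, random_state=0).fit(z)
print(np.bincount(gm.predict(z)))   # cluster assignments after transformation
```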

11.
Many clinical research studies evaluate a time-to-event outcome, illustrate survival functions, and conventionally report estimated hazard ratios to express the magnitude of the treatment effect when comparing groups. However, the hazard ratio may not be straightforward to interpret clinically or statistically when the proportional hazards assumption is invalid. Some recent papers in clinical journals discuss the restricted mean survival time (RMST), or τ-year mean survival time, as an alternative summary measure for time-to-event outcomes. The RMST is defined as the expected event time truncated at a specific time point, which equals the area under the survival curve up to that point. This article summarizes the information needed to conduct statistical analysis using the RMST, including its definition and statistical properties, adjusted analysis methods, sample size calculation, the information fraction for the RMST difference, and its clinical and statistical meaning and interpretation. Additionally, we discuss how to choose the specific time point defining the RMST from two main points of view. We also provide SAS code to determine the sample size required to detect an expected RMST difference with appropriate power, and to reconstruct individual survival data so as to estimate an RMST reference value from a reported survival curve.
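Estimating the RMST difference at a pre-specified truncation time τ is straightforward from Kaplan–Meier fits; a minimal lifelines sketch on synthetic two-arm data (the article's own examples use SAS):

```python
import numpy as np
from lifelines import KaplanMeierFitter
from lifelines.utils import restricted_mean_survival_time

rng = np.random.default_rng(6)
event_time = np.concatenate([rng.exponential(12, 200),   # control
                             rng.exponential(16, 200)])  # treatment
cens = rng.uniform(0, 24, 400)
time, status = np.minimum(event_time, cens), (event_time <= cens).astype(int)
arm = np.repeat([0, 1], 200)

tau = 18.0                       # truncation point, chosen in advance
rmst = []
for g in (0, 1):
    km = KaplanMeierFitter().fit(time[arm == g], status[arm == g])
    rmst.append(restricted_mean_survival_time(km, t=tau))  # area under KM to tau
print("RMST difference up to tau:", rmst[1] - rmst[0])
```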

12.
In longitudinal studies, the proportional hazards model is often used to analyze covariate effects on the duration time, defined as the elapsed time between a first and a second event. In this article, we consider the situation in which the first event is partly interval-censored and the second event is left-truncated and right-censored. We propose a two-step procedure for estimating the regression coefficients of the proportional hazards model. A simulation study is conducted to investigate the performance of the proposed estimator.

13.
In event time data analysis, comparisons between distributions are made with the logrank test. When the data appear to exhibit crossing hazards, nonparametric weighted logrank statistics are usually suggested, accommodating different weight functions to increase power. However, the gain in power from imposing different weights has its limits, since differences before and after the crossing point may cancel each other out. In contrast to the weighted logrank tests, we propose a score-type statistic based on the semiparametric heteroscedastic-hazards regression model of Hsieh (J. Roy. Statist. Soc. Ser. B 63:63–79, 2001), in which the nonproportionality is explicitly modeled. Our score test is based on estimating functions derived from the partial likelihood under the heteroscedastic model considered herein. Simulation results show the benefit of modeling the heteroscedasticity and compare the power of the proposed test with that of two classes of weighted logrank tests (including Fleming–Harrington's test and Moreau's locally most powerful test), a Rényi-type test, and Breslow's test for acceleration. We also demonstrate the test by analyzing data from actual clinical trials.

14.
In this article, we propose a class of Box-Cox transformation models for recurrent event data, which includes the proportional means model as a special case. The new model offers great flexibility in formulating the effects of covariates on the mean functions of the counting processes while leaving the stochastic structure completely unspecified. For inference on the proposed models, we apply a profile pseudo-partial likelihood method to estimate the model parameters via estimating equations, establish large-sample properties of the estimators, and examine their performance in moderate-sized samples through simulation studies. In addition, some graphical and numerical procedures are presented for model checking. The methods are illustrated with multiple-infection data from a clinical study of chronic granulomatous disease (CGD).

15.
In this paper, we discuss inference for the Box-Cox transformation model with left-truncated and right-censored data, which often occur in studies involving, for example, cross-sectional sampling. It is well known that the Box-Cox transformation model includes many commonly used models as special cases, such as the proportional hazards model and the additive hazards model. For inference, a Bayesian estimation approach is proposed in which a piecewise function is used to approximate the baseline hazard function. A conditional marginal prior, whose marginal part is free of any constraints, is employed to deal with the computational challenges caused by the constraints on the parameters, and an MCMC sampling procedure is developed. A simulation study assesses the finite-sample performance of the proposed method and indicates that it works well in practical situations. We apply the approach to a data set from a retirement center.

16.
In this paper, we propose a hybrid method for estimating the baseline hazard in the Cox proportional hazards model. In the proposed method, the nonparametric Kaplan–Meier estimate of the survival function is combined with the partial-likelihood estimate of the regression component of the Cox model to estimate a parametric baseline hazard function. We compare the baseline hazard estimated by the proposed hybrid method with that from the Cox model. The performance of each method is measured by the estimated parameters of the baseline distribution as well as the goodness of fit of the model, using both real data and Monte Carlo simulation studies. The results show that the hybrid method provides a better estimate of the baseline hazard than the Cox model.
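One way to realize a nonparametric-plus-parametric hybrid (a sketch of the general idea, with a Weibull baseline as an assumed parametric family, not necessarily the authors' choice): take the nonparametric baseline survival from a partial-likelihood Cox fit and fit the parametric form to it by least squares on the log cumulative hazard scale.

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(7)
n = 500
x = rng.normal(0, 1, n)
t = 8 * rng.weibull(1.4, n) * np.exp(-0.5 * x / 1.4)   # Weibull-baseline truth
c = rng.exponential(20, n)
df = pd.DataFrame({"time": np.minimum(t, c),
                   "status": (t <= c).astype(int), "x": x})

cph = CoxPHFitter().fit(df, "time", "status")
base = cph.baseline_survival_                 # nonparametric baseline S0(t)
tt, S0 = base.index.values, base.values.ravel()
keep = (S0 > 0) & (S0 < 1) & (tt > 0)

# Weibull fit: log H0(t) = k*log(t) - k*log(lam), least squares on log-log scale
y = np.log(-np.log(S0[keep]))
X = np.column_stack([np.ones(keep.sum()), np.log(tt[keep])])
b0, k_hat = np.linalg.lstsq(X, y, rcond=None)[0]
lam_hat = np.exp(-b0 / k_hat)
print("Weibull baseline estimate: shape", k_hat, "scale", lam_hat)
```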

17.
During follow-up, patients with cancer can experience several types of recurrent events and can also die. Over the last decades, several joint models have been proposed to deal with recurrent events and a dependent terminal event. Most of them require the proportional hazards assumption, which may be violated over long follow-up. We propose a joint frailty model for two types of recurrent events and a dependent terminal event that accounts for potential dependencies between events, with potentially time-varying coefficients modeled by regression splines. Baseline hazard functions are estimated with piecewise constant functions or cubic M-spline functions. Parameters are estimated by maximum likelihood, and likelihood ratio tests are performed to test the time dependency and the statistical association of the covariates. The model is motivated by breast cancer data with maximum follow-up close to 20 years.

18.
This paper addresses the problem of testing whether an individual covariate in the Cox model has a proportional (i.e., time-constant) effect on the hazard. Two existing methods are considered: one based on a component of the score process, and the other a Neyman-type smooth test. Simulations show that, when the model contains both proportional and nonproportional covariates, these methods are not reliable tools for discrimination. A simple yet effective solution is proposed, based on smooth modeling of the effects of the covariates not in focus.
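The standard per-covariate check under discussion (a scaled Schoenfeld residual test, which the paper shows can mislead when other covariates are non-proportional) runs as follows in lifelines, on synthetic data in which only one covariate violates proportionality. The piecewise data-generating scheme is an assumption for illustration.

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter
from lifelines.statistics import proportional_hazard_test

rng = np.random.default_rng(8)
n = 800
x1, x2 = rng.normal(0, 1, n), rng.normal(0, 1, n)

# Piecewise-exponential inverse transform: x2's effect reverses after t = 8,
# so x2 is non-proportional while x1 has a constant effect.
E = rng.exponential(size=n)
r1 = 0.10 * np.exp(0.5 * x1 + 0.8 * x2)    # hazard before t = 8
r2 = 0.10 * np.exp(0.5 * x1 - 0.8 * x2)    # hazard after t = 8
t = np.where(E <= 8 * r1, E / r1, 8 + (E - 8 * r1) / r2)
c = rng.exponential(25, n)
df = pd.DataFrame({"time": np.minimum(t, c),
                   "status": (t <= c).astype(int), "x1": x1, "x2": x2})

cph = CoxPHFitter().fit(df, "time", "status")
res = proportional_hazard_test(cph, df, time_transform="rank")
print(res.summary)   # one test per covariate; small p flags non-proportionality
```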

19.
Sensitivity analysis for unmeasured confounding should be reported more often, especially in observational studies. In the standard Cox proportional hazards model, such analysis requires substantial assumptions and can be computationally difficult. The marginal structural Cox proportional hazards model (Cox MSM) with inverse probability weighting has several advantages over the standard Cox model, including in situations with only one assessment of exposure (point exposure) and time-independent confounders. We describe how simple computations provide a sensitivity analysis for unmeasured confounding in a Cox MSM with a point exposure. This is achieved by translating the general sensitivity-analysis framework for MSMs of Robins and colleagues to survival time data. Instead of bias-corrected observations, we correct the hazard rate to adjust for a specified amount of unmeasured confounding. As an additional bonus, the Cox MSM is robust against bias from differential loss to follow-up. As an illustration, the Cox MSM was applied in a reanalysis of the association between smoking and depression in a population-based cohort of Norwegian adults; the association was moderately sensitive to unmeasured confounding.
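The estimation side of a Cox MSM with a point exposure — stabilized inverse probability weights from a propensity model, then a weighted Cox fit — can be sketched as follows (lifelines plus scikit-learn, synthetic data; the sensitivity correction itself, which adjusts the hazard for a posited amount of unmeasured confounding, is not shown):

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(9)
n = 1500
x = rng.normal(0, 1, n)                               # measured confounder
a = rng.binomial(1, 1 / (1 + np.exp(-0.8 * x)))       # point exposure
t = rng.exponential(1 / (0.05 * np.exp(0.4 * a + 0.6 * x)))
c = rng.exponential(30, n)
df = pd.DataFrame({"time": np.minimum(t, c),
                   "status": (t <= c).astype(int), "a": a, "x": x})

# Stabilized inverse probability of treatment weights.
ps = LogisticRegression().fit(df[["x"]], df["a"]).predict_proba(df[["x"]])[:, 1]
p_a = df["a"].mean()
df["w"] = np.where(df["a"] == 1, p_a / ps, (1 - p_a) / (1 - ps))

msm = CoxPHFitter().fit(df[["time", "status", "a", "w"]], "time", "status",
                        weights_col="w", robust=True)
print("marginal HR:", np.exp(msm.params_["a"]))
```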

20.
Survival data are in general time-to-event data, such as time to death, time to appearance of a tumor, or time to recurrence of a disease. Models for survival data have frequently been based on the proportional hazards model proposed by Cox, which is applied intensively in the social, medical, behavioral, and public health sciences. In this paper we propose a more efficient sampling method for recruiting subjects into survival studies: a Moving Extreme Ranked Set Sampling (MERSS) scheme, with ranking based on an easy-to-evaluate baseline auxiliary variable known to be associated with survival time. We demonstrate that this approach provides a more powerful testing procedure, as well as a more efficient estimate of the hazard ratio, than simple random sampling (SRS). Theoretical derivations and simulation studies are provided. The Iowa 65+ Rural study data are used to illustrate the methods developed in this paper.
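The selection mechanism can be sketched as below; this follows one common description of extreme ranked set sampling (keeping the auxiliary-ranked extreme from each drawn set), and the set-size schedule and function names are assumptions for illustration, not the paper's exact design.

```python
import numpy as np

def merss_indices(aux, set_sizes, rng, extreme="max"):
    """For each set size s, draw s units at random, rank them by the cheap
    auxiliary variable, and retain only the extreme unit; only retained
    units are then followed for the (expensive) survival outcome."""
    chosen = []
    for s in set_sizes:
        idx = rng.choice(len(aux), size=s, replace=False)
        pos = np.argmax(aux[idx]) if extreme == "max" else np.argmin(aux[idx])
        chosen.append(idx[pos])
    return np.array(chosen)

rng = np.random.default_rng(10)
aux = rng.normal(size=100_000)            # easy-to-evaluate baseline variable
sample = merss_indices(aux, set_sizes=list(range(1, 11)) * 20, rng=rng)
print(sample[:10])
```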
