Similar Documents
 20 similar documents found (search time: 703 ms)
1.
Summary.  In an outbreak of a completely new infectious disease such as severe acute respiratory syndrome (SARS), estimating the fatality rate over the course of the epidemic is of clinical and epidemiological importance. In contrast with the constant case fatality rate, a new measure, termed the 'real-time' fatality rate, is proposed for monitoring a newly emerging epidemic at the population level. A competing risk model, implemented via a counting process, is used to estimate the real-time fatality rate in an epidemic of SARS. It can capture and reflect the time-varying nature of the fatality rate over the course of the outbreak in a timely and accurate manner. More importantly, it can provide information on the efficacy of a given treatment and management policy for the disease. The method has been applied to SARS data from the affected regions, namely Hong Kong, Singapore, Toronto, Taiwan and Beijing. The magnitudes and patterns of the estimated fatality rates are virtually the same except in Beijing, which has a lower rate; it is speculated that this effect is linked to the different treatment protocols that were used. The standard case fatality rate estimate used by the World Health Organization is shown to be unable to provide useful information for monitoring the time-varying fatalities caused by the epidemic.
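A minimal numpy sketch of the idea on simulated data: track the counting processes of deaths and recoveries and report, at each time point, the proportion of deaths among cases resolved so far. This is a crude stand-in for the paper's competing-risk estimator; the rates, sample size and time grid are all hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy competing-risks data for 500 cases: latent times to death and to
# recovery; the earlier one determines the outcome (rates hypothetical).
t_death = rng.exponential(20.0, 500)
t_recov = rng.exponential(10.0, 500)
t_event = np.minimum(t_death, t_recov)   # resolution time
is_death = t_death < t_recov             # outcome indicator

# Counting processes N_D(t), N_R(t): cumulative deaths and recoveries.
grid = np.linspace(0.0, 60.0, 121)
N_D = np.array([np.sum(is_death & (t_event <= u)) for u in grid])
N_R = np.array([np.sum(~is_death & (t_event <= u)) for u in grid])

# Crude real-time fatality rate: deaths among cases resolved by time t.
with np.errstate(invalid="ignore"):
    rt_cfr = N_D / (N_D + N_R)

for u, r in zip(grid[20::20], rt_cfr[20::20]):
    print(f"t = {u:5.1f}   real-time fatality rate = {r:.3f}")
```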

2.
This article studies ordering results for the sample spacings arising from single- and multiple-outlier exponential models. For single-outlier exponential models, it is shown that the weak majorization order between the two hazard rate vectors implies both the hazard rate order and the dispersive order between the corresponding sample spacings. We also extend this result from the single-outlier model to the multiple-outlier model for the special case of the second sample spacing. Furthermore, we obtain necessary and sufficient conditions under which, for the second sample spacings of two independent samples of heterogeneous exponential random variables, the hazard rate, dispersive and usual stochastic orders on the one hand, and the likelihood ratio and reversed hazard rate orders on the other, are equivalent.
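A quick Monte Carlo illustration (not part of the paper) of the stochastic-order implication for the second sample spacing: two single-outlier exponential rate vectors, one weakly majorizing the other, are compared through the empirical survival function of D2. The rate vectors are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)

def second_spacing(rates, size):
    """Simulate D2 = X(2) - X(1), the second sample spacing, from
    independent exponentials with the given rate vector."""
    x = rng.exponential(1.0 / np.asarray(rates), (size, len(rates)))
    xs = np.sort(x, axis=1)
    return xs[:, 1] - xs[:, 0]

# Two single-outlier models with three unit-rate components; the first
# rate vector weakly majorizes the second (hypothetical values).
d2_a = second_spacing([1.0, 1.0, 1.0, 4.0], 200_000)
d2_b = second_spacing([1.0, 1.0, 1.0, 2.0], 200_000)

# Pointwise check of the implied stochastic ordering of the spacings.
for u in [0.1, 0.3, 0.6, 1.0]:
    print(f"t={u:.1f}  P(D2a>t)={np.mean(d2_a > u):.3f}"
          f"  P(D2b>t)={np.mean(d2_b > u):.3f}")
```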

3.
The stratified Cox model is commonly used for stratified clinical trials with time-to-event endpoints. The estimated log hazard ratio is approximately a weighted average of the corresponding stratum-specific Cox model estimates using inverse-variance weights; the latter are optimal only under the (often implausible) assumption of a constant hazard ratio across strata. Focusing on trials with limited sample sizes (50-200 subjects per treatment), we propose an alternative approach in which stratum-specific estimates are obtained using a refined generalized logrank (RGLR) approach and then combined using either sample size or minimum risk weights for overall inference. Our proposal extends the work of Mehrotra et al. to incorporate the RGLR statistic, which outperforms the Cox model in the setting of proportional hazards and small samples. This work also entails the development of a remarkably accurate plug-in formula for the variance of RGLR-based estimated log hazard ratios. We demonstrate using simulations that the proposed two-step RGLR analysis delivers notably better results, through smaller estimation bias and mean squared error and larger power, than the stratified Cox model analysis when there is a treatment-by-stratum interaction, with similar performance when there is no interaction. Additionally, our method controls the type I error rate in small samples, while the stratified Cox model does not. We illustrate our method using data from a clinical trial comparing two treatments for colon cancer.
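A minimal sketch of the second (combination) step, assuming stratum-specific log hazard ratio estimates and variances are already available from per-stratum fits; the numbers are hypothetical and the RGLR machinery itself is not reproduced here.

```python
import numpy as np

# Hypothetical stratum-specific estimated log hazard ratios, their
# variances (e.g., from per-stratum fits) and stratum sample sizes.
log_hr = np.array([-0.45, -0.10, -0.70])
var = np.array([0.040, 0.055, 0.090])
n = np.array([120.0, 80.0, 60.0])

def combine(est, v, weights):
    """Weighted average of stratum estimates and its standard error."""
    w = weights / weights.sum()
    return np.sum(w * est), np.sqrt(np.sum(w ** 2 * v))

# Inverse-variance weights: optimal only under a common true HR.
print("inverse-variance: %.3f (SE %.3f)" % combine(log_hr, var, 1.0 / var))
# Sample-size weights: more robust under treatment-by-stratum interaction.
print("sample-size:      %.3f (SE %.3f)" % combine(log_hr, var, n))
```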

4.
In clinical trials, survival endpoints are usually compared using the log-rank test. Sequential methods for the log-rank test and the Cox proportional hazards model are widely reported in the statistical literature. When the proportional hazards assumption is violated, the hazard ratio is ill-defined and the power of the log-rank test depends on the distribution of the censoring times. The average hazard ratio was proposed as an alternative effect measure; it has a meaningful interpretation in the case of non-proportional hazards and equals the hazard ratio if the hazards are indeed proportional. In the present work we prove that sequential test statistics based on the average hazard ratio are asymptotically multivariate normal with the independent increments property. This allows group-sequential boundaries to be calculated using standard methods and existing software. The finite sample characteristics of the new method are examined in a simulation study in both a proportional and a non-proportional hazards setting.

5.
Several omnibus tests of the proportional hazards assumption have been proposed in the literature. In the two-sample case, tests have also been developed against ordered alternatives such as a monotone hazard ratio and a monotone ratio of cumulative hazards. Here we propose a natural extension of these partial orders to the case of continuous and potentially time-varying covariates, and develop tests for the proportional hazards assumption against such ordered alternatives. The work is motivated by applications in biomedicine and economics, where covariate effects often decay over the lifetime. The proposed tests do not make restrictive assumptions on the underlying regression model, and are applicable in the presence of time-varying covariates, multiple covariates and frailty. Small-sample performance and an application to real data highlight the use of the framework and methodology to identify and model the nature of departures from proportionality.

6.
Progression-free survival is recognized as an important endpoint in oncology clinical trials. In trials aimed at new drug development, the target population often comprises patients who are refractory to standard therapy and whose tumors show rapid progression. This situation increases the bias of the hazard ratio calculated for progression-free survival, resulting in decreased power in such patients. New measures are therefore needed at the sample size estimation stage to prevent this loss of power. Here, I propose a novel procedure for calculating the hazard ratio for progression-free survival under the Cox proportional hazards model, which can be applied in sample size calculation. The hazard ratios derived by the proposed procedure are almost identical to those obtained by simulation. The hazard ratio calculated by the proposed procedure is applicable to sample size calculation and attains the nominal power. Methods that compensate for the loss of power due to biases in the hazard ratio are also discussed from a practical point of view.
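For context, sample size calculations of this kind typically convert a working hazard ratio into a required number of events; the sketch below uses Schoenfeld's standard approximation (not the paper's refined procedure) to show how sensitive the event target is to a biased hazard ratio.

```python
import numpy as np
from scipy.stats import norm

def required_events(hr, alpha=0.05, power=0.8, alloc=0.5):
    """Schoenfeld's approximation to the number of events needed to
    detect hazard ratio `hr` with a two-sided log-rank test."""
    z_a = norm.ppf(1 - alpha / 2)
    z_b = norm.ppf(power)
    return (z_a + z_b) ** 2 / (alloc * (1 - alloc) * np.log(hr) ** 2)

# If rapid progressors attenuate the working hazard ratio toward the
# null (say from a planned 0.70 to 0.75), the event target rises sharply.
print(round(required_events(0.70)))  # ~247 events
print(round(required_events(0.75)))  # ~379 events
```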

7.
The use of relevance vector machines to flexibly model hazard rate functions is explored. The technique is adapted to survival analysis problems through the partial logistic approach. The method exploits the Bayesian automatic relevance determination procedure to obtain sparse solutions and incorporates the flexibility of kernel-based models. Example results are presented on literature data from a head-and-neck cancer survival study, using Gaussian and spline kernels. A sensitivity analysis is conducted to assess the influence of the hyperprior distribution parameters. The proposed method is then contrasted with other flexible hazard regression methods, in particular the HARE model proposed by Kooperberg et al. [16]; a simulation study is conducted to carry out the comparison. The model developed in this paper exhibited good performance in predicting the hazard rate. The application of this sparse Bayesian technique to a real cancer data set demonstrated that the proposed method can reveal characteristics of the hazards, associated with the dynamics of the studied diseases, that may be missed by existing modeling approaches based on different perspectives on the bias-variance balance.
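The partial logistic reduction itself is easy to sketch: follow-up is split into intervals, each subject contributes one binary record per interval at risk, and a classifier on those records estimates the discrete-time hazard. The sketch below uses plain logistic regression on toy data rather than a relevance vector machine, so it illustrates only the data reduction, not the sparse Bayesian model.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)

# Toy right-censored survival data with one covariate.
n = 400
x = rng.normal(size=n)
t = rng.exponential(np.exp(-0.5 * x))   # hazard increases with x
c = rng.exponential(2.0, n)             # independent censoring
time, event = np.minimum(t, c), (t <= c)

# Partial logistic reduction: one binary record per subject per
# interval at risk; the outcome is "event occurred in this interval".
edges = np.quantile(time, [0.0, 0.25, 0.5, 0.75, 1.0])
rows, labels = [], []
for ti, ei, xi in zip(time, event, x):
    for k in range(4):
        if ti <= edges[k]:
            break                        # no longer at risk
        ended_here = edges[k] < ti <= edges[k + 1]
        rows.append([xi] + [float(j == k) for j in range(4)])
        labels.append(int(ended_here and ei))

# Logistic regression on person-intervals estimates the discrete hazard;
# the interval dummies act as a piecewise-constant baseline.
clf = LogisticRegression(max_iter=1000).fit(np.array(rows), labels)
print("estimated covariate effect (log-odds):", clf.coef_[0][0].round(3))
```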

8.
The hazard function plays an important role in cancer patient survival studies, as it quantifies the instantaneous risk of death of a patient at any given time. Unimodal hazard functions are often observed in cancer clinical trials, and it is of interest to detect (estimate) the turning point (mode) of the hazard function, as this may be an important measure in treatment strategies for cancer patients. Moreover, when patient cure is a possibility, estimating cure rates at different stages of cancer, in addition to their proportions, may provide a better summary of the effects of the stages on survival rates. The main objective of this paper is therefore to estimate the mode of the hazard function of patients at different stages of cervical cancer in the presence of long-term survivors. To this end, a mixture cure rate model is proposed using the log-logistic distribution. The model is conveniently parameterized through the mode of the hazard function, and the cancer stages can affect both the cured fraction and the mode. In addition, we discuss aspects of model inference through the maximum likelihood estimation method. A Monte Carlo simulation study assesses the coverage probability of the asymptotic confidence intervals.
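The parameterization rests on a closed form for the log-logistic hazard mode: for shape beta > 1 the hazard is unimodal with turning point alpha * (beta - 1)^(1/beta). A short sketch with hypothetical parameter values:

```python
import numpy as np

def loglogistic_hazard(t, alpha, beta):
    """Hazard of the log-logistic(alpha, beta) distribution."""
    return (beta / alpha) * (t / alpha) ** (beta - 1) / (1 + (t / alpha) ** beta)

def hazard_mode(alpha, beta):
    """Turning point of the log-logistic hazard (requires beta > 1)."""
    return alpha * (beta - 1) ** (1.0 / beta)

# Mixture cure model: S_pop(t) = pi + (1 - pi) * S_u(t), with pi the
# cured fraction and S_u the log-logistic survival of the uncured.
alpha, beta, pi = 3.0, 2.5, 0.35        # hypothetical stage-specific values
t = np.linspace(0.5, 8.0, 4)
S_u = 1.0 / (1.0 + (t / alpha) ** beta)
S_pop = pi + (1 - pi) * S_u

m = hazard_mode(alpha, beta)
print("hazard mode (uncured):", round(m, 3))
print("hazard at mode:", round(loglogistic_hazard(m, alpha, beta), 3))
print("population survival:", S_pop.round(3))
```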

9.
Marginal hazard models for multivariate failure time data have been studied extensively in the recent literature. However, standard hypothesis test statistics based on the likelihood method are not strictly appropriate for this kind of model. In this paper, extensions of the three commonly used likelihood-based test statistics are discussed. Generalized Wald, generalized score and generalized likelihood ratio tests for hazard ratio parameters in a marginal hazard model for multivariate failure time data are proposed and their asymptotic distributions examined. The finite sample properties of these statistics are studied through simulations. The proposed method is applied to data from the Busselton Population Health Surveys.

10.
Abstract. We propose a spline-based semiparametric maximum likelihood approach to analysing the Cox model with interval-censored data. In this approach, the baseline cumulative hazard function is approximated by a monotone B-spline function. We extend the generalized Rosen algorithm to compute the maximum likelihood estimate. We show that the estimator of the regression parameter is asymptotically normal and semiparametrically efficient, although the estimator of the baseline cumulative hazard function converges at a rate slower than root-n. We also develop an easy-to-implement method for consistently estimating the standard error of the estimated regression parameter, which facilitates the proposed inference procedure for the Cox model with interval-censored data. The proposed method is evaluated by simulation studies regarding its finite sample performance and is illustrated using data from a breast cosmesis study.
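The monotonicity device is easy to illustrate: a B-spline with nondecreasing coefficients is itself nondecreasing, which is what makes it a valid approximation to a cumulative hazard. A small sketch with hypothetical knots and coefficients, using scipy's generic BSpline rather than the paper's Rosen algorithm:

```python
import numpy as np
from scipy.interpolate import BSpline

# Monotone B-spline: with nondecreasing coefficients, the spline itself
# is nondecreasing, so it can represent a cumulative hazard. Knots and
# coefficient increments below are hypothetical.
degree = 3
knots = np.array([0, 0, 0, 0, 2, 4, 6, 6, 6, 6], dtype=float)
coef = np.cumsum([0.0, 0.3, 0.5, 0.2, 0.8, 0.4])   # nondecreasing
Lambda0 = BSpline(knots, coef, degree)

t = np.linspace(0.0, 6.0, 7)
print("Lambda0(t):", Lambda0(t).round(3))
vals = Lambda0(np.linspace(0.0, 6.0, 200))
print("nondecreasing:", bool(np.all(np.diff(vals) >= -1e-12)))
```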

11.
The case-cohort design has been demonstrated to be an economical and efficient approach in large cohort studies when the measurement of some covariates on all individuals is expensive. Various methods have been proposed for case-cohort data when the dimension of the covariates is smaller than the sample size. However, limited work has been done for high-dimensional case-cohort data, which are frequently collected in large epidemiological studies. In this paper, we propose a variable screening method for ultrahigh-dimensional case-cohort data under the framework of the proportional hazards model, which allows the covariate dimension to increase with the sample size at an exponential rate. Our procedure enjoys the sure screening property and ranking consistency under mild regularity conditions. We further extend the method to an iterative version to handle scenarios in which some covariates are jointly important but are marginally unrelated or only weakly correlated with the response. The finite sample performance of the proposed procedure is evaluated via both simulation studies and an application to real data from a breast cancer study.
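A generic SIS-style sketch of marginal screening on toy data; the marginal utility below (absolute correlation with log event time among observed events) is a simple stand-in, not the paper's case-cohort statistic, and the dimensions and effect sizes are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy ultrahigh-dimensional survival data: p >> n, three active covariates.
n, p = 200, 2000
X = rng.normal(size=(n, p))
beta = np.zeros(p)
beta[[0, 1, 2]] = [1.0, -1.0, 0.8]
t = rng.exponential(np.exp(-X @ beta))          # event times
c = rng.exponential(2.0 * np.median(t), n)      # censoring times
time, event = np.minimum(t, c), (t <= c)

# SIS-style marginal screening: rank covariates by a simple marginal
# utility and keep the top n/log(n) of them.
util = np.abs([np.corrcoef(X[event, j], np.log(time[event]))[0, 1]
               for j in range(p)])
keep = np.argsort(util)[::-1][: int(n / np.log(n))]
print("screened set size:", keep.size)
print("active covariates retained:", sorted(set([0, 1, 2]) & set(keep.tolist())))
```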

12.
A stochastic model with exponential components is used to describe our data collected from a phase III cancer clinical trial. Criteria which guarantee that disease-free survival (DFS) can be used as a surrogate for overall survival are explored under this model. We examine several colorectal adjuvant clinical trials and find that these conditions are not satisfied. The relationship between the hazard ratio of DFS for an active treatment versus a control treatment and the cumulative hazard ratio of survival for the same two treatments is then explored. An almost linear relationship is found, such that a hazard ratio for DFS of less than a threshold R corresponds to a non-null treatment effect on survival. The threshold value R is determined for our colorectal adjuvant trial data. Based on this relationship, a one-sided test of equal hazard rates of survival is equivalent to a test of the hazard ratio of DFS being smaller than R. This approach assumes that recurrence information is assessed accurately and without bias, an assumption which is sometimes difficult to ensure for multicenter clinical trials, particularly at interim analyses.

13.
In clinical trials with survival data, investigators may wish to re-estimate the sample size based on the observed effect size while the trial is ongoing. Beyond the inflation of the type-I error rate due to sample size re-estimation, the method for calculating the sample size at an interim analysis should be chosen carefully, because in trials with survival data the data in each stage are mutually dependent. Although the interim hazard estimate is commonly used to re-estimate the sample size, that estimate can by chance be considerably higher or lower than the hypothesized hazard. We propose an interim hazard ratio estimate that can be used to re-estimate the sample size under those circumstances. The proposed method is demonstrated through a simulation study and illustrated with an actual clinical trial. The effect of the shape parameter of the Weibull survival distribution on sample size re-estimation is also presented.

14.
In this paper, an extension of the piecewise exponential distribution, based on the distribution of the maximum of a random sample, is considered. Properties of its density and hazard function are investigated. Maximum likelihood inference is discussed and the Fisher information matrix is identified. Results of two real-data applications are reported, with model fitting implemented by maximum likelihood. The applications illustrate the better performance of the new distribution compared with other recently proposed alternative models.
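The construction can be sketched directly: if F is a piecewise exponential CDF, the maximum of n i.i.d. copies has CDF F(t)^n, from which the density and hazard follow. The cut points, rates and n below are hypothetical:

```python
import numpy as np

# Hypothetical piecewise exponential baseline: cut points and rates.
cuts = np.array([0.0, 1.0, 3.0])
rates = np.array([0.5, 1.0, 2.0])

def pw_cum_hazard(t):
    """Cumulative hazard of the piecewise exponential distribution."""
    t = np.atleast_1d(t).astype(float)
    widths = np.diff(np.append(cuts, np.inf))
    H = np.zeros_like(t)
    for lo, w, r in zip(cuts, widths, rates):
        H += r * np.clip(t - lo, 0.0, w)
    return H

def max_density_and_hazard(t, n=3):
    """Density and hazard of the maximum of n i.i.d. piecewise
    exponential variables: G(t) = F(t)**n."""
    F = 1.0 - np.exp(-pw_cum_hazard(t))
    f = rates[np.searchsorted(cuts, t, side="right") - 1] * (1.0 - F)
    g = n * F ** (n - 1) * f
    return g, g / (1.0 - F ** n)

t = np.array([0.5, 1.5, 2.5, 4.0])
g, h = max_density_and_hazard(t)
print("density:", g.round(4))
print("hazard: ", h.round(4))
```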

15.
For random variables with an Archimedean copula or survival copula, we develop the reversed hazard rate order and the hazard rate order on sample extremes in the context of proportional reversed hazard models and proportional hazard models, respectively. The likelihood ratio order on the sample maximum is also investigated for the proportional reversed hazard model. Several numerical examples are presented for illustration.

16.
The random censorship model (RCM) is commonly used in the biomedical sciences for modeling life distributions. The popular non-parametric Kaplan–Meier estimator and semiparametric models such as the Cox proportional hazards model are extensively discussed in the literature. In this paper, we propose to fit the RCM under the assumption that the actual life distribution and the censoring distribution have a proportional odds relationship. The parametric model is defined using Marshall–Olkin's extended Weibull distribution. We use the maximum likelihood procedure to estimate the model parameters, the survival distribution, the mean residual life function and the hazard rate. The proportional odds assumption is also justified by a newly proposed bootstrap Kolmogorov–Smirnov type goodness-of-fit test. A simulation study of the MLE of the model parameters and the median survival time is carried out to assess the finite sample performance of the model. Finally, we apply the proposed model to two real-life data sets.
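The proportional odds link is easy to verify numerically: under Marshall–Olkin's extension with tilt parameter gamma, the survival odds equal gamma times the baseline odds at every t. A short sketch with hypothetical parameter values:

```python
import numpy as np

def weibull_sf(t, shape, scale):
    """Baseline Weibull survival function."""
    return np.exp(-((t / scale) ** shape))

def mo_weibull_sf(t, shape, scale, gamma):
    """Marshall-Olkin extended Weibull survival:
    S(t) = gamma * S0(t) / (1 - (1 - gamma) * S0(t))."""
    s0 = weibull_sf(t, shape, scale)
    return gamma * s0 / (1.0 - (1.0 - gamma) * s0)

# Proportional odds check: survival odds under the extended model equal
# gamma times the baseline odds, at every time point.
t = np.array([0.5, 1.0, 2.0, 4.0])
s0 = weibull_sf(t, 1.5, 2.0)              # hypothetical baseline
s1 = mo_weibull_sf(t, 1.5, 2.0, gamma=2.5)
print("odds ratio:", ((s1 / (1 - s1)) / (s0 / (1 - s0))).round(6))
```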

17.
In this paper, we introduce a partially linear single-index additive hazards model for current status data. Both the unknown link function of the single-index term and the cumulative baseline hazard function are approximated by B-splines, under a monotonicity constraint on the latter. The sieve method is applied to estimate the nonparametric and parametric components simultaneously. We show that, when the nonparametric link function is an exact B-spline, the resulting estimator of the regression parameter vector is asymptotically normal and achieves the semiparametric information bound, and that the rate of convergence of the estimator of the cumulative baseline hazard function is optimal. Simulation studies are presented to examine the finite sample performance of the proposed estimation method. For illustration, we apply the method to a clinical dataset with a current status outcome.

18.
The purpose of this paper is to consider the problem of statistical inference about a hazard rate function that is specified as the product of a parametric regression part and a non-parametric baseline hazard. Unlike Cox's proportional hazards model, the baseline hazard depends not only on the duration variable but also on the starting date of the phenomenon of interest. We propose a new estimator of the regression parameter which allows for non-stationarity in the hazard rate. We show that it is asymptotically normal at root-n and that its asymptotic variance attains the information bound for estimation of the regression coefficient. We also consider an estimator of the integrated baseline hazard and determine its asymptotic properties. The finite sample performance of our estimators is studied.

19.
Unknown or unobservable risk factors in survival analysis cause heterogeneity between individuals. Frailty models are used in survival analysis to account for this unobserved heterogeneity in individual risks of disease and death. Shared frailty models have been suggested for analysing bivariate data on related survival times. The most common shared frailty model is one in which the frailty acts multiplicatively on the hazard function. In this paper, we introduce the shared gamma frailty model and the inverse Gaussian frailty model with the reversed hazard rate. We introduce a Bayesian estimation procedure using the Markov chain Monte Carlo (MCMC) technique to estimate the parameters involved in the models. We present a simulation study to compare the true values of the parameters with the estimated values. We also apply the proposed models to the Australian twin data set, and a better-fitting model is suggested.
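A small simulation sketch of the shared gamma frailty mechanism on the (unreversed) hazard scale, with hypothetical baseline rate and frailty variance; it shows the within-pair dependence the frailty induces, including the known Kendall's tau formula theta/(theta+2) for the gamma frailty.

```python
import numpy as np

rng = np.random.default_rng(4)

# Shared gamma frailty: each pair (e.g., twins) shares a frailty Z with
# E[Z] = 1 and Var[Z] = theta; conditional on Z, the hazard is Z * h0.
n_pairs, theta, h0 = 10_000, 0.8, 0.1    # hypothetical values
z = rng.gamma(shape=1.0 / theta, scale=theta, size=n_pairs)

t1 = rng.exponential(1.0 / (z * h0))     # twin 1 lifetime given Z
t2 = rng.exponential(1.0 / (z * h0))     # twin 2 lifetime given Z

# The shared frailty induces positive within-pair dependence.
print("within-pair correlation:", np.corrcoef(t1, t2)[0, 1].round(3))
print("Kendall tau (theory):", round(theta / (theta + 2.0), 3))
```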

20.
For time-to-event outcomes, current methods for sample size determination are based on the proportional hazards model. However, if the proportionality assumption fails to capture the relationship between the event times and the covariates, the proportional hazards model is not suitable for analysing survival data. The accelerated failure time (AFT) model is an alternative for dealing with such data. In this paper, we address the design of a multiregional trial in which the relationship between the event time and the treatment effect follows the AFT model. The log-rank test is employed to deal with heterogeneous effect sizes among regions. The test statistic for the overall treatment effect is used to determine the total sample size for the multiregional trial, and the proposed criteria are used to rationally partition the sample size across regions.
