Similar Articles
20 similar articles found (search time: 31 ms)
1.
In recent years, immunological science has evolved, and cancer vaccines are now approved and available for treating existing cancers. Because cancer vaccines require time to elicit an immune response, a delayed treatment effect is expected and is actually observed in drug approval studies. Accordingly, we propose the evaluation of survival endpoints by weighted log‐rank tests with the Fleming–Harrington class of weights. We consider group sequential monitoring, which allows early efficacy stopping, and determine a semiparametric information fraction for the Fleming–Harrington family of weights, which is necessary for the error spending function. Moreover, we give a flexible survival model in cancer vaccine studies that considers not only the delayed treatment effect but also the long‐term survivors. In a Monte Carlo simulation study, we illustrate that when the primary analysis is a weighted log‐rank test emphasizing the late differences, the proposed information fraction can be a useful alternative to the surrogate information fraction, which is proportional to the number of events. Copyright © 2016 John Wiley & Sons, Ltd.
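The Fleming–Harrington weights referred to above are w(t) = S(t-)^ρ (1 − S(t-))^λ, where S(t-) is the left-continuous pooled Kaplan–Meier estimate. A minimal NumPy sketch of this computation (function and parameter names are illustrative, not taken from the paper):

```python
import numpy as np

def km_survival(times, events):
    """Kaplan-Meier survival estimate at each distinct event time."""
    order = np.argsort(times)
    t, d = times[order], events[order]
    uniq = np.unique(t[d == 1])
    surv, s = [], 1.0
    for u in uniq:
        at_risk = np.sum(t >= u)
        deaths = np.sum((t == u) & (d == 1))
        s *= 1.0 - deaths / at_risk
        surv.append(s)
    return uniq, np.array(surv)

def fh_weights(times, events, rho=0.0, lam=1.0):
    """Fleming-Harrington weights w(t) = S(t-)^rho * (1 - S(t-))^lam,
    using the left-continuous KM estimate; rho=0, lam>0 emphasizes
    late differences, as is natural for delayed treatment effects."""
    uniq, surv = km_survival(times, events)
    s_left = np.concatenate(([1.0], surv[:-1]))  # S(t-), left limit
    return uniq, s_left**rho * (1.0 - s_left)**lam
```

With rho = lam = 0 the weights are all one and the ordinary log-rank test is recovered.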

2.
Proportional hazards are a common assumption when designing confirmatory clinical trials in oncology. This assumption not only affects the analysis part but also the sample size calculation. The presence of delayed effects causes a change in the hazard ratio while the trial is ongoing, since at the beginning we do not observe any difference between treatment arms, and after some unknown time point the differences between treatment arms start to appear. Hence, the proportional hazards assumption no longer holds, and both the sample size calculation and the analysis methods to be used should be reconsidered. The weighted log‐rank test allows a weighting for early, middle, and late differences through the Fleming and Harrington class of weights and is proven to be more efficient when the proportional hazards assumption does not hold. The Fleming and Harrington class of weights, along with the estimated delay, can be incorporated into the sample size calculation in order to maintain the desired power once the treatment arm differences start to appear. In this article, we explore the impact of delayed effects in group sequential and adaptive group sequential designs and make an empirical evaluation in terms of power and type‐I error rate of the weighted log‐rank test in a simulated scenario with fixed values of the Fleming and Harrington class of weights. We also give some practical recommendations regarding which methodology should be used in the presence of delayed effects depending on certain characteristics of the trial.
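A delayed effect of the kind described above is usually simulated with a piecewise-constant hazard: the treatment arm has hazard h0 before the delay and h0·hr afterwards. A minimal inverse-CDF sampler under that assumption (names are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_delayed_effect(n, h0, hr, delay):
    """Sample n survival times whose hazard is h0 on [0, delay) and
    h0*hr afterwards, by inverting the cumulative hazard
    H(t) = h0*t                          for t <= delay
    H(t) = h0*delay + h0*hr*(t - delay)  for t >  delay
    at a standard-exponential draw E (since H(T) ~ Exp(1))."""
    e = -np.log(rng.uniform(size=n))
    return np.where(e <= h0 * delay,
                    e / h0,
                    delay + (e - h0 * delay) / (h0 * hr))
```

Setting delay = 0 recovers a plain proportional-hazards exponential arm; setting hr = 1 recovers the control-arm distribution.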

3.
In comparative clinical trials or animal carcinogenesis studies, the effect of increasing dose levels of an agent or an increasing number of additional modalities is frequently evaluated on the prolonged survival time of patients with a particular disease. It is of particular interest to test the ordered alternative that a treatment level increase leads to better survival. This paper considers an ordered test based on the two-sample weighted Kaplan–Meier statistics (Pepe & Fleming, 1989, 1991). It evaluates asymptotic relative efficiencies of the proposed ordered weighted Kaplan–Meier test, the competing ordered weighted logrank test (Liu et al., 1993) and the modified ordered logrank test (Liu & Tsai, 1999) under Lehmann alternatives, for various piecewise exponential survival distributions. Finally, it demonstrates the proposed test on an appropriate dataset.

4.
The class $G^{\rho,\lambda }$ of weighted log‐rank tests proposed by Fleming & Harrington [Fleming & Harrington (1991) Counting Processes and Survival Analysis, Wiley, New York] has been widely used in survival analysis and is nowadays, unquestionably, the established method to compare, nonparametrically, k different survival functions based on right‐censored survival data. This paper extends the $G^{\rho,\lambda }$ class to interval‐censored data. First we introduce a new general class of rank based tests, then we show the analogy to the above proposal of Fleming & Harrington. The asymptotic behaviour of the proposed tests is derived using an observed Fisher information approach and a permutation approach. Aiming to make this family of tests interpretable and useful for practitioners, we explain how to interpret different choices of weights and we apply it to data from a cohort of intravenous drug users at risk for HIV infection. The Canadian Journal of Statistics 40: 501–516; 2012 © 2012 Statistical Society of Canada

5.
In the traditional study design of a single‐arm phase II cancer clinical trial, the one‐sample log‐rank test has been frequently used. A common practice in sample size calculation is to assume that the event time under the new treatment follows an exponential distribution. Such a study design may not be suitable for immunotherapy cancer trials when both long‐term survivors (or even patients cured of the disease) and a delayed treatment effect are present, because the exponential distribution is not appropriate to describe such data and consequently could lead to a severely underpowered trial. In this research, we propose a piecewise proportional hazards cure rate model with a random delayed treatment effect for designing single‐arm phase II immunotherapy cancer trials. To improve test power, we propose a new weighted one‐sample log‐rank test and provide a sample size calculation formula for designing trials. Our simulation study shows that the proposed log‐rank test performs well, is robust to misspecification of the weights, and that the sample size calculation formula also performs well.
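For background, the classical (unweighted) one-sample log-rank statistic that the paper's weighted test builds on compares the observed event count O with the count E expected under a reference cumulative hazard Λ0. A minimal sketch (this is the textbook version, not the paper's weighted proposal):

```python
import numpy as np
from scipy import stats

def one_sample_logrank(times, events, cumhaz0):
    """Classical one-sample log-rank test.
    O = observed number of events; E = sum_i Lambda0(follow-up_i),
    the expected count under the reference hazard; under H0,
    Z = (O - E)/sqrt(E) is approximately N(0, 1)."""
    O = np.sum(events)
    E = np.sum(cumhaz0(np.asarray(times)))
    z = (O - E) / np.sqrt(E)
    return z, 2.0 * stats.norm.sf(abs(z))
```

A weighted version would multiply each subject's contribution to O and E by a weight such as a Fleming–Harrington function of time, which is the direction the paper takes.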

6.
This article studies a risk model involving one type of main claim and two types of by-claims, which extends the general risk model with delayed claims. We suppose that each main claim may induce no by-claim, or may induce one by-claim belonging to one of the two types, each with a certain probability. In addition, we assume that a by-claim may occur either simultaneously with its associated main claim or after a delay. An integro-differential equation system for the survival probabilities is derived using two auxiliary risk models. The expression for the survival probability is obtained by applying Laplace transforms and Rouché's theorem. Furthermore, we provide a method for computing the survival probability when the two by-claim amounts follow different exponential distributions. As a special case, an explicit expression for the survival probability is given when all claim amounts follow the same exponential distribution. Finally, numerical results are provided to examine the proposed method.

7.
The Kaplan–Meier (KM) estimator is ubiquitously used for estimating survival functions, but it provides only a discrete approximation at the observation times and does not deliver a proper distribution if the largest observation is censored. Using KM as a starting point, we devise an empirical saddlepoint approximation‐based method for producing a smooth survival function that is unencumbered by choice of tuning parameters. The procedure inverts the moment generating function (MGF) defined through a Riemann–Stieltjes integral with respect to an underlying mixed probability measure consisting of the discrete KM mass function weights and an absolutely continuous exponential right‐tail completion. Uniform consistency, and weak and strong convergence results are established for the resulting MGF and its derivatives, thus validating their usage as inputs into the saddlepoint routines. Relevant asymptotic results are also derived for the density and distribution function estimates. The performance of the resulting survival approximations is examined in simulation studies, which demonstrate a favourable comparison with the log spline method (Kooperberg & Stone, 1992) in small sample settings. For smoothing survival functions we argue that the methodology has no immediate competitors in its class, and we illustrate its application on several real data sets. The Canadian Journal of Statistics 47: 238–261; 2019 © 2019 Statistical Society of Canada
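The discrete part of the mixed measure above is just the set of drops of the KM curve, and the Riemann–Stieltjes MGF over that part is a finite sum. A minimal sketch of those two ingredients (the exponential right-tail completion and the saddlepoint inversion are omitted; names are illustrative):

```python
import numpy as np

def km_masses(times, events):
    """Probability masses of the KM measure at the distinct event times
    (the drops of the KM curve). If the largest observation is censored,
    the masses sum to less than one, which is the mass the paper
    completes with an exponential right tail."""
    order = np.argsort(times)
    t, d = times[order], events[order]
    uniq = np.unique(t[d == 1])
    mass, s = [], 1.0
    for u in uniq:
        at_risk = np.sum(t >= u)
        deaths = np.sum((t == u) & (d == 1))
        s_new = s * (1.0 - deaths / at_risk)
        mass.append(s - s_new)
        s = s_new
    return uniq, np.array(mass)

def km_mgf(times, events, s):
    """MGF of the discrete KM measure as a Riemann-Stieltjes sum:
    M(s) = sum_i p_i * exp(s * t_i)."""
    u, p = km_masses(times, events)
    return np.sum(p * np.exp(s * u))
```

For fully uncensored data the masses reduce to 1/n at each observation and M(0) = 1.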

8.
In this paper, we introduce a new lifetime distribution by compounding exponential and Poisson–Lindley distributions, named the exponential Poisson–Lindley (EPL) distribution. A practical situation where the EPL distribution is more appropriate for modelling lifetime data than the exponential–geometric, exponential–Poisson and exponential–logarithmic distributions is presented. We obtain the density and failure rate of the EPL distribution and properties such as mean lifetime, moments, order statistics and Rényi entropy. Furthermore, estimation by maximum likelihood and inference for large samples are discussed. The paper is motivated by two applications to real data sets and we hope that this model will be able to attract wider applicability in survival and reliability.
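Compound lifetime distributions of this family are commonly built as the minimum of N i.i.d. exponential lifetimes with N drawn from the compounding count distribution. The sketch below uses that construction with a zero-truncated Poisson–Lindley count; note this specific construction is an assumption for illustration, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(42)

def rlindley(n, theta):
    """Sample Lindley(theta) via its mixture representation:
    with prob theta/(theta+1) draw Gamma(1, 1/theta), else Gamma(2, 1/theta)."""
    shape = np.where(rng.uniform(size=n) < theta / (theta + 1.0), 1.0, 2.0)
    return rng.gamma(shape, 1.0 / theta)

def repl_sample(n, theta, lam):
    """Illustrative compound sample: N ~ zero-truncated Poisson-Lindley(theta)
    (Poisson mixed over Lindley, resampled until N >= 1), then
    X = min of N iid Exp(lam) lifetimes."""
    x = np.empty(n)
    for i in range(n):
        N = 0
        while N == 0:                      # zero-truncation
            N = rng.poisson(rlindley(1, theta)[0])
        x[i] = rng.exponential(1.0 / lam, size=N).min()
    return x
```

Since X is a minimum over at least one Exp(lam) lifetime, its mean is at most 1/lam, with strict shortening whenever N can exceed one.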

9.
The generalized exponential distribution proposed by Gupta and Kundu [Gupta, R.D. and Kundu, D., 1999, Generalized exponential distributions. Australian and New Zealand Journal of Statistics, 41(2), 173–188] is an important lifetime distribution in survival analysis. In this paper, we consider the maximum likelihood estimation procedure for the parameters of the generalized exponential distribution when the data are left censored. We obtain the maximum likelihood estimators of the unknown parameters and the Fisher information matrix. Simulation studies are carried out to observe the performance of the estimators in small samples.

10.
The present paper is concerned with statistical models for the dependence of survival time or time to occurrence of an event, such as time to tumor, on a vector X of covariates or prognostic variables such as age, sex, blood pressure, length of exposure to a toxic material, etc., measured on a group of individuals in biomedical investigations. It is assumed that the covariates influence the distribution of time to tumor only through a linear predictor μ = βX.

The object of our paper is to investigate the effect due to the covariates on the Life Expectancy and the Percentile Residual Life (PRL) function of a family of organisms under the proportional hazards and the accelerated life models. The key result is that the families of survival distributions under these models have the 'setting the clock back to zero' property if the family of baseline survival distributions does. This property is a generalization of the lack of memory property of the exponential distribution. Simple examples of the members of this family are the linear hazard exponential, Pareto and Gompertz life distributions.

As a simple application of the main results obtained in the present paper, we consider a stochastic survival model recently proposed by Chiang and Conforti (1989) for the time-to-tumor distribution in the context of a large-scale serial sacrifice experiment by the National Center for Toxicological Research (NCTR). This experiment involved mice that were fed 2-AAF from infancy, some of which developed bladder and/or liver neoplasms; see Farmer et al. (1980). It is shown that their stochastic model for the tumor incidence intensity at time t leads to a family of survival models that has the 'setting the clock back to zero' property. The survival functions and the effect of the vector X of covariates on the PRL and the tumor-free life expectancies are evaluated for the proportional hazards and accelerated life models.

11.
We introduce a new family of distributions using the truncated discrete Linnik distribution. This is a rich family that includes many important families of distributions, such as the Marshall–Olkin family, the family generated through the truncated negative binomial distribution, and the family generated through the truncated discrete Mittag–Leffler distribution. Some properties of the new family of distributions are derived. A particular case of the family, a five-parameter generalization of the Weibull distribution, namely the discrete Linnik Weibull distribution, is given special attention. This distribution generalizes many distributions, such as the extended exponentiated Weibull, exponentiated Weibull, Weibull truncated negative binomial, generalized exponential truncated negative binomial, Marshall–Olkin extended Weibull, Marshall–Olkin generalized exponential, exponential truncated negative binomial, Marshall–Olkin exponential and generalized exponential distributions. The shape properties, moments, median, distribution of order statistics, stochastic ordering and stress–strength properties of the new generalized Weibull distribution are derived. The unknown parameters of the distribution are estimated using the maximum likelihood method. The discrete Linnik Weibull distribution is fitted to a survival time data set, and it is shown that the distribution is more appropriate than other competitive models.

12.
Recently, molecularly targeted agents and immunotherapy have been advanced for the treatment of relapsed or refractory cancer patients, where progression‐free survival or event‐free survival is often a primary endpoint for the trial design. However, existing methods for two‐stage single‐arm phase II trials with a time‐to‐event endpoint assume an exponential distribution, which limits their application to real trial designs. In this paper, we develop an optimal two‐stage design that applies to four commonly used parametric survival distributions. The proposed method has advantages over existing methods in that the choice of the underlying survival model is more flexible and the power of the study is more adequately addressed. Therefore, the proposed two‐stage design can be used routinely for single‐arm phase II trial designs with a time‐to‐event endpoint, as a complement to Simon's two‐stage design, which is commonly used for binary outcomes.

13.
We implement semiparametric random censorship model aided inference for censored median regression models. This is based on the idea that, when the censoring is specified by a common distribution, a semiparametric survival function estimator acts as an improved weight in the so-called inverse censoring weighted estimating function. We show that the proposed method will always produce estimates of the model parameters that are as good as or better than an existing estimator based on the traditional Kaplan–Meier weights. We also provide an illustration of the method through an analysis of a lung cancer data set.

14.
In this paper, we present new one-stage multiple comparison procedures with the average for the location parameters of two-parameter exponential distributions under heteroscedasticity, modifying the existing procedure proposed by Wu [One stage multiple comparisons with the average for exponential location parameters under heteroscedasticity. Comput Stat Data Anal. 2013;68:352–360] to allow unequal sample sizes. A simulation study shows that the proposed procedures have shorter confidence lengths, with coverage probabilities closer to the nominal ones. Finally, an example comparing the survival days of patients across four categories of lung cancer is given to demonstrate the proposed procedures.

15.
As the treatments of cancer progress, a certain number of cancers are curable if diagnosed early. In population‐based cancer survival studies, cure is said to occur when mortality rate of the cancer patients returns to the same level as that expected for the general cancer‐free population. The estimates of cure fraction are of interest to both cancer patients and health policy makers. Mixture cure models have been widely used because the model is easy to interpret by separating the patients into two distinct groups. Usually parametric models are assumed for the latent distribution for the uncured patients. The estimation of cure fraction from the mixture cure model may be sensitive to misspecification of latent distribution. We propose a Bayesian approach to mixture cure model for population‐based cancer survival data, which can be extended to county‐level cancer survival data. Instead of modeling the latent distribution by a fixed parametric distribution, we use a finite mixture of the union of the lognormal, loglogistic, and Weibull distributions. The parameters are estimated using the Markov chain Monte Carlo method. Simulation study shows that the Bayesian method using a finite mixture latent distribution provides robust inference of parameter estimates. The proposed Bayesian method is applied to relative survival data for colon cancer patients from the Surveillance, Epidemiology, and End Results (SEER) Program to estimate the cure fractions. The Canadian Journal of Statistics 40: 40–54; 2012 © 2012 Statistical Society of Canada
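The mixture cure model described above has the standard population survival function S(t) = π + (1 − π)·S_u(t), where π is the cure fraction and S_u the latency survival of the uncured. A minimal sketch with an illustrative Weibull latency (the paper's latency is a finite mixture of lognormal, loglogistic and Weibull):

```python
import numpy as np
from scipy.stats import weibull_min

def mixture_cure_sf(t, pi, latency_sf):
    """Population survival under a mixture cure model:
    S(t) = pi + (1 - pi) * S_u(t). As t grows, S(t) plateaus at the
    cure fraction pi instead of tending to zero."""
    return pi + (1.0 - pi) * latency_sf(t)

# Illustrative latency: Weibull(shape=1.5, scale=2); values are made up.
latency = lambda t: weibull_min.sf(t, 1.5, scale=2.0)
```

The plateau at π is exactly what distinguishes a cure model's survival curve from an ordinary parametric fit.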

16.
In survival analysis and reliability studies, problems with random sample size arise quite frequently. More specifically, in cancer studies, the number of clonogens is unknown and the time to relapse of the cancer is defined by the minimum of the incubation times of the various clonogenic cells. In this article, we have proposed a new model where the distribution of the incubation time is taken as Weibull and the distribution of the random sample size as Bessel, giving rise to a Weibull–Bessel distribution. The maximum likelihood estimation of the model parameters is studied and a score test is developed to compare it with its special submodel, namely, exponential–Bessel distribution. To illustrate the model, two real datasets are examined, and it is shown that the proposed model, presented here, fits better than several other existing models in the literature. Extensive simulation studies are also carried out to examine the performance of the estimates.
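The "minimum over a random number of incubation times" construction can be simulated directly. The sketch below uses Weibull incubation times as in the paper, but substitutes a zero-truncated Poisson count purely as an illustrative stand-in for the paper's Bessel distribution:

```python
import numpy as np

rng = np.random.default_rng(1)

def time_to_relapse(n_patients, mu, shape, scale):
    """Relapse time = min of N clonogen incubation times, each
    Weibull(shape) scaled by `scale`. N is zero-truncated Poisson(mu)
    here (an assumption for illustration; the paper takes N ~ Bessel)."""
    out = np.empty(n_patients)
    for i in range(n_patients):
        N = 0
        while N == 0:                 # at least one clonogen must exist
            N = rng.poisson(mu)
        out[i] = scale * rng.weibull(shape, size=N).min()
    return out
```

More clonogens (larger mu) means an earlier minimum, so relapse times shorten stochastically as mu grows.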

17.
This paper is concerned with estimating θ, the mean of an exponential distribution, under a single-outlier exchangeable model. It is assumed that the single outlying observation is also exponential, with mean θ/α, where 0 < α < 1. The estimators proposed are weighted averages of the order statistics. Formulas for the weights minimizing the mean square error are presented, and these weights are calculated for certain combinations of the sample size n and of α. It is found that the optimal weights very nearly have a certain simple form. The mean square errors of a simplified estimator are compared to those of Joshi (1972, 1988) and of Chikkagoudar and Kunchur (1980). A modification of Joshi's iterative procedure is suggested.

18.
Epstein [Truncated life tests in the exponential case, Ann. Math. Statist. 25 (1954), pp. 555–564] introduced a hybrid censoring scheme (called Type-I hybrid censoring) and Chen and Bhattacharyya [Exact confidence bounds for an exponential parameter under hybrid censoring, Comm. Statist. Theory Methods 17 (1988), pp. 1857–1870] derived the exact distribution of the maximum-likelihood estimator (MLE) of the mean of a scaled exponential distribution based on a Type-I hybrid censored sample. Childs et al. [Exact likelihood inference based on Type-I and Type-II hybrid censored samples from the exponential distribution, Ann. Inst. Statist. Math. 55 (2003), pp. 319–330] provided an alternate simpler expression for this distribution, and also developed analogous results for another hybrid censoring scheme (called Type-II hybrid censoring). The purpose of this paper is to derive the exact bivariate distribution of the MLE of the parameter vector of a two-parameter exponential model based on hybrid censored samples. The marginal distributions are derived and exact confidence bounds for the parameters are obtained. The results are also used to derive the exact distribution of the MLE of the pth quantile, as well as the corresponding confidence bounds. These exact confidence intervals are then compared with parametric bootstrap confidence intervals in terms of coverage probabilities. Finally, we present some numerical examples to illustrate the methods of inference developed here.
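Under Type-I hybrid censoring the experiment stops at τ = min(x_(r), T), and the MLE of the exponential mean is the familiar total-time-on-test statistic divided by the number of observed failures. A minimal sketch of that one-parameter case (the paper itself treats the two-parameter model):

```python
import numpy as np

def hybrid_type1_mle(x, r, T):
    """MLE of the exponential mean from a Type-I hybrid censored sample:
    stop at tau = min(x_(r), T); with D failures observed by tau,
    theta_hat = (sum of observed failure times + (n - D)*tau) / D.
    (Undefined when no failures occur by tau, i.e. D = 0.)"""
    xs = np.sort(np.asarray(x, dtype=float))
    tau = min(xs[r - 1], T)
    D = int(np.sum(xs <= tau))
    return (xs[:D].sum() + (len(xs) - D) * tau) / D
```

For example, with failure times 1..5 and r = 3: if T = 10 the rth failure ends the test (tau = 3), while if T = 2.5 the time limit ends it first (tau = 2.5).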

19.
We consider the right truncated exponential distribution where the truncation point is unknown and show that the ML equation has a unique solution over an extended parameter space. In the case of the estimation of the truncation point T, we show that the asymptotic distribution of the MLE is not centered at T. A modified MLE is introduced which outperforms all other considered estimators, including the minimum variance unbiased estimator. Asymptotic as well as small-sample properties of the different estimators are investigated and compared. The truncated exponential distribution has an increasing failure rate, making it ideally suited for use as a survival distribution for biological and industrial data.
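For the plain (unmodified) MLE mentioned above, the truncation point is estimated by the sample maximum, and the rate solves a one-dimensional score equation. A direct numerical sketch under those standard facts (function names are illustrative):

```python
import numpy as np
from scipy.optimize import brentq

def trunc_exp_mle(x):
    """MLE for an exponential right-truncated at an unknown point T.
    T_hat = max(x) (note the abstract: this MLE is not centered at T);
    lambda_hat solves the score equation
        1/lam - T*exp(-lam*T)/(1 - exp(-lam*T)) = mean(x),
    which has a positive root whenever mean(x) < max(x)/2."""
    T, xbar = x.max(), x.mean()
    score = lambda lam: 1.0/lam - T*np.exp(-lam*T)/(-np.expm1(-lam*T)) - xbar
    return brentq(score, 1e-6, 1e6), T
```

A quick check: rejection-sampling Exp(rate 2) below 1 gives an exact truncated-exponential sample, and the fit recovers rate ≈ 2 and T ≈ 1.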

20.
This article considers the maximum likelihood and Bayes estimation of the stress–strength reliability based on two-parameter generalized exponential records. Here, we extend the results of Baklizi [Computational Statistics and Data Analysis 52 (2008), 3468–3473] to explain a wide variety of real datasets. We also consider the estimation of R when the common shape parameter is known. The results for the exponential distribution can be obtained as a special case with different scale parameters.
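For the generalized exponential distribution GE(α, λ), with F(x) = (1 − e^{−λx})^α, the stress–strength reliability R = P(X < Y) has the closed form α_y/(α_x + α_y) when the two variables share a common scale λ. A minimal Monte Carlo sketch of that fact (this is background on the quantity R, not the paper's record-based estimators):

```python
import numpy as np

rng = np.random.default_rng(7)

def rge(n, alpha, lam):
    """Inverse-CDF sample from GE(alpha, lam): F(x) = (1 - exp(-lam*x))**alpha,
    so x = -log(1 - u**(1/alpha)) / lam for u ~ Uniform(0, 1)."""
    u = rng.uniform(size=n)
    return -np.log(1.0 - u**(1.0 / alpha)) / lam

def reliability_mc(alpha_x, alpha_y, lam, n=100_000):
    """Monte Carlo estimate of R = P(X < Y) for X ~ GE(alpha_x, lam),
    Y ~ GE(alpha_y, lam); the common-scale closed form is
    alpha_y / (alpha_x + alpha_y)."""
    return np.mean(rge(n, alpha_x, lam) < rge(n, alpha_y, lam))
```

With alpha_x = 1 and alpha_y = 2 the closed form gives R = 2/3, and the simulation agrees to Monte Carlo accuracy.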
