Similar Documents
20 similar documents found.
1.
In the traditional study design of a single-arm phase II cancer clinical trial, the one-sample log-rank test has been frequently used. A common practice in sample size calculation is to assume that the event time under the new treatment follows an exponential distribution. Such a study design may not be suitable for immunotherapy cancer trials, where both long-term survivors (or even patients cured of the disease) and a delayed treatment effect are present, because the exponential distribution is not appropriate for such data and consequently could lead to a severely underpowered trial. In this research, we propose a piecewise proportional hazards cure rate model with a random delayed treatment effect for designing single-arm phase II immunotherapy cancer trials. To improve test power, we propose a new weighted one-sample log-rank test and provide a sample size calculation formula for designing trials. Our simulation study shows that the proposed log-rank test performs well and is robust to misspecification of the weight, and that the sample size calculation formula also performs well.
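As a rough illustration of the classical (unweighted) one-sample log-rank test that this abstract builds on — not the authors' weighted version — the following sketch compares the observed event count with the count expected under an exponential null hazard. The patient data and the historical hazard value are hypothetical.

```python
import math

def one_sample_logrank(times, events, lam0):
    """Classical one-sample log-rank test against an exponential null
    with hazard rate lam0.

    times  : follow-up time for each patient
    events : 1 if the event was observed, 0 if censored

    Under H0, the expected number of events is the cumulative hazard
    summed over all patients: E = lam0 * sum(times).
    """
    O = sum(events)                # observed events
    E = lam0 * sum(times)          # expected events under H0
    z = (O - E) / math.sqrt(E)     # approximately N(0, 1) under H0
    return O, E, z

# hypothetical data: 6 patients, historical control hazard 0.10 per month
times = [12.0, 8.0, 24.0, 6.0, 18.0, 30.0]
events = [1, 1, 0, 1, 0, 0]
O, E, z = one_sample_logrank(times, events, 0.10)
```

A large negative z indicates fewer events than the historical control predicts, i.e. evidence in favor of the new treatment.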

2.
A challenge arising in cancer immunotherapy trial design is the presence of a delayed treatment effect, under which the proportional hazards assumption no longer holds. As a result, a traditional survival trial design based on the standard log-rank test, which ignores the delayed treatment effect, will lead to a substantial loss of statistical power. Recently, a piecewise weighted log-rank test has been proposed to incorporate the delayed treatment effect into the trial design. However, because its sample size formula was derived under a sequence of local alternative hypotheses, it underestimates the sample size when the hazard ratio is relatively small for a balanced design and yields inaccurate estimates for an unbalanced design. In this article, we derive a new sample size formula under a fixed alternative hypothesis for the delayed treatment effect model. Simulation results show that the new formula provides accurate sample size estimation for both balanced and unbalanced designs.

3.
A cancer clinical trial with an immunotherapy often has two special features: some patients are potentially cured of the cancer, and the immunotherapy starts to take clinical effect only after a certain delay time. Existing testing methods may be inadequate for immunotherapy clinical trials because they do not take both features into account simultaneously, and hence have low power to detect the true treatment effect. In this paper, we propose a piecewise proportional hazards cure rate model with a random delay time to fit the data, and a new weighted log-rank test to detect the treatment effect of an immunotherapy over a chemotherapy control. We show that the proposed weight is nearly optimal under mild conditions. Our simulation study shows a substantial gain in power for the proposed test over existing tests, and robustness of the test to a misspecified weight. We also introduce a sample size calculation formula for designing immunotherapy clinical trials using the proposed weighted log-rank test.

4.
The standard log-rank test has been extended by adopting various weight functions. Cancer vaccine and immunotherapy trials have shown a delayed onset of effect for the experimental therapy, manifested as a delayed separation of the survival curves. This work proposes new weighted log-rank tests to account for such delay. The weight function is motivated by the time-varying hazard ratio between the experimental and control therapies. We implement a numerical evaluation of the Schoenfeld approximation (NESA) for the mean of the test statistic. The NESA enables us to assess power and to calculate the sample size for detecting such a delayed treatment effect, and also for a more general specification of non-proportional hazards in a trial. We further show a connection between the proposed test and weighted Cox regression; the average hazard ratio using the same weight is then obtained as an estimand of the treatment effect. Extensive simulation studies compare the performance of the proposed tests with the standard log-rank test and assess their robustness to model misspecification. Our tests outperform the G(ρ, γ) class in general and perform close to the optimal test. We demonstrate our methods on two cancer immunotherapy trials.
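The weighted log-rank tests recurring throughout these abstracts generalize the Fleming–Harrington G(ρ, γ) family mentioned here. A minimal sketch of a two-sample weighted log-rank test with G(ρ, γ) weights — not the authors' NESA-based procedure — might look as follows; all data are hypothetical.

```python
import math

def fh_weighted_logrank(t1, e1, t0, e0, rho=0.0, gamma=1.0):
    """Two-sample weighted log-rank test with Fleming-Harrington
    G(rho, gamma) weights w(t) = S(t-)^rho * (1 - S(t-))^gamma,
    where S is the pooled (left-continuous) Kaplan-Meier estimate.
    gamma > 0 emphasizes late differences (delayed treatment effects).

    t1, e1 : times and event indicators (1 = event) for treatment
    t0, e0 : times and event indicators for control
    Returns the standardized statistic, approximately N(0, 1) under H0.
    """
    data = [(t, e, 1) for t, e in zip(t1, e1)] + \
           [(t, e, 0) for t, e in zip(t0, e0)]
    event_times = sorted({t for t, e, _ in data if e == 1})
    s_pool = 1.0          # pooled KM, used left-continuously as S(t-)
    num, var = 0.0, 0.0
    for t in event_times:
        n  = sum(1 for ti, _, _ in data if ti >= t)              # at risk, pooled
        n1 = sum(1 for ti, _, g in data if ti >= t and g == 1)   # at risk, treatment
        d  = sum(1 for ti, ei, _ in data if ti == t and ei == 1)
        d1 = sum(1 for ti, ei, g in data if ti == t and ei == 1 and g == 1)
        w = (s_pool ** rho) * ((1.0 - s_pool) ** gamma)
        num += w * (d1 - d * n1 / n)          # weighted observed minus expected
        if n > 1:
            var += w * w * d * (n1 / n) * (1 - n1 / n) * (n - d) / (n - 1)
        s_pool *= 1.0 - d / n                 # update pooled KM after using S(t-)
    return num / math.sqrt(var)

# hypothetical data: treatment events occur later than control events
t_trt, e_trt = [2, 4, 6, 8], [1, 1, 1, 1]
t_ctl, e_ctl = [1, 3, 5, 7], [1, 1, 1, 1]
z_late = fh_weighted_logrank(t_trt, e_trt, t_ctl, e_ctl, rho=0.0, gamma=1.0)
z_std  = fh_weighted_logrank(t_trt, e_trt, t_ctl, e_ctl, rho=0.0, gamma=0.0)
```

With ρ = γ = 0 the statistic reduces to the standard (unweighted) log-rank test, which is one way to sanity-check such an implementation.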

5.
When examining the effect of treatment A versus B, there may be a choice between a parallel group design, an AA/BB design, an AB/BA cross-over design, and Balaam's design. Assuming a linear mixed effects regression analysis, we examine, starting from a flexible function of the costs involved and allowing for subject dropout, which design is most efficient in estimating this effect. With no carry-over, the AB/BA cross-over design is most efficient as long as the dropout rate at the second measurement does not exceed /(1 + ρ), where ρ is the intraclass correlation. For steady-state carry-over, depending on the costs involved, the dropout rate, and ρ, either a parallel design or an AA/BB design is most efficient. For types of carry-over that allow for self carry-over, interest is in the direct treatment effect plus the self carry-over effect, and either an AA/BB or Balaam's design is most efficient. In case of insufficient knowledge of the dropout rate or ρ, a maximin strategy is devised: choose the design that minimizes the maximum variance of the treatment estimator. Such maximin designs are derived for each type of carry-over. Copyright © 2012 John Wiley & Sons, Ltd.

6.
Suppose p + 1 experimental groups correspond to increasing dose levels of a treatment and all groups are subject to right censoring. In such instances, permutation tests for trend can be performed based on statistics derived from the weighted log-rank class. This article uses saddlepoint methods to determine the mid-P-values for such permutation tests for any test statistic in the weighted log-rank class. Permutation simulations are replaced by analytical saddlepoint computations, which provide extremely accurate mid-P-values that are exact for most practical purposes and almost always more accurate than normal approximations. The speed of mid-P-value computation allows for the inversion of such tests to determine confidence intervals for the percentage increase in mean (or median) survival time per unit increase in dosage. The Canadian Journal of Statistics 37: 5–16; 2009 © 2009 Statistical Society of Canada

7.
This paper considers the maximin approach for designing clinical studies. A maximin efficient design maximizes the smallest efficiency, relative to a standard design, as the parameters vary over a specified subset of the parameter space. To specify this subset of parameters in a real situation, a four-step elicitation procedure based on expert opinion is proposed. Further, we describe why and how we extend the initially chosen subset of parameters to a much larger set in our procedure. This procedure makes the maximin approach feasible for dose-finding studies. Maximin efficient designs have been shown to be numerically difficult to construct; however, a new algorithm, the H-algorithm, considerably simplifies their construction. We exemplify the maximin efficient approach with a sigmoid Emax model describing a dose–response relationship and compare the inferential precision with that obtained using a uniform design. The design obtained is shown to be at least 15% more efficient than the uniform design. © 2014 The Authors. Pharmaceutical Statistics Published by John Wiley & Sons Ltd.

8.
Proportional hazards are a common assumption when designing confirmatory clinical trials in oncology. This assumption affects not only the analysis but also the sample size calculation. The presence of delayed effects causes the hazard ratio to change while the trial is ongoing: at the beginning no difference between treatment arms is observed, and only after some unknown time point do differences between the arms start to appear. Hence, the proportional hazards assumption no longer holds, and both the sample size calculation and the analysis methods should be reconsidered. The weighted log-rank test allows weighting of early, middle, and late differences through the Fleming and Harrington class of weights and has been shown to be more efficient when the proportional hazards assumption does not hold. The Fleming and Harrington weights, along with the estimated delay, can be incorporated into the sample size calculation in order to maintain the desired power once the treatment arm differences start to appear. In this article, we explore the impact of delayed effects in group sequential and adaptive group sequential designs, and make an empirical evaluation, in terms of power and type-I error rate, of the weighted log-rank test in a simulated scenario with fixed values of the Fleming and Harrington weights. We also give some practical recommendations regarding which methodology should be used in the presence of delayed effects, depending on certain characteristics of the trial.

9.
In this paper, we consider the problem of model-robust design for simultaneous parameter estimation among a class of polynomial regression models of degree up to k. A generalized D-optimality criterion, the Ψα-optimality criterion, first introduced by Läuter (1974), is considered for this problem. By applying the theory of canonical moments and the maximin principle, we derive a model-robust optimal design in the sense of having the highest minimum Ψα-efficiency. Numerical comparison indicates that the proposed design has remarkable performance for parameter estimation in all of the considered rival models.

10.
Molecularly targeted, genomic-driven, and immunotherapy-based clinical trials continue to be advanced for the treatment of relapsed or refractory cancer patients, where the growth modulation index (GMI) is often considered a primary endpoint of treatment efficacy. However, little literature is available on trial design with GMI as the primary endpoint. In this article, we derive a sample size formula for the score test under a log-linear model of the GMI. Study designs using the derived sample size formula are illustrated under a bivariate exponential model, the Weibull frailty model, and the generalized treatment effect size. The proposed designs provide sound statistical methods for a single-arm phase II trial with GMI as the primary endpoint.

11.
The authors consider the problem of constructing standardized maximin D-optimal designs for weighted polynomial regression models. In particular, they show that by following the approach to the construction of maximin designs introduced recently by Dette, Haines & Imhof (2003), such designs can be obtained as weak limits of the corresponding Bayesian q-optimal designs. They further demonstrate that the results are more broadly applicable to certain families of nonlinear models. The authors examine two specific weighted polynomial models in some detail and illustrate their results by means of a weighted quadratic regression model and the Bleasdale–Nelder model. They also present a capstone example involving a generalized exponential growth model.

12.
This paper proposes an affine-invariant test extending the univariate Wilcoxon signed-rank test to the bivariate location problem. Two versions of the null distribution of the test statistic are given. The first leads to a conditionally distribution-free test which can be used with any sample size. The second can be used for larger sample sizes and has a limiting χ² distribution with 2 degrees of freedom under the null hypothesis. The paper investigates the relationship with a test proposed by Jan & Randles (1994), showing that the Pitman efficiency of that test relative to the new test equals 1 for elliptical distributions, but that the two tests are not necessarily equivalent for non-elliptical distributions. These facts are also demonstrated empirically in a simulation study. The new test has the advantage of not requiring the assumption of elliptical symmetry, which is needed to perform the asymptotic version of the Jan and Randles test.

13.
In recent years, immunological science has evolved, and cancer vaccines are now approved and available for treating existing cancers. Because cancer vaccines require time to elicit an immune response, a delayed treatment effect is expected and is actually observed in drug approval studies. Accordingly, we propose evaluating survival endpoints by weighted log-rank tests with the Fleming–Harrington class of weights. We consider group sequential monitoring, which allows early stopping for efficacy, and determine a semiparametric information fraction for the Fleming–Harrington family of weights, which is necessary for the error spending function. Moreover, we give a flexible survival model for cancer vaccine studies that accounts not only for the delayed treatment effect but also for long-term survivors. In a Monte Carlo simulation study, we illustrate that when the primary analysis is a weighted log-rank test emphasizing late differences, the proposed information fraction can be a useful alternative to the surrogate information fraction, which is proportional to the number of events. Copyright © 2016 John Wiley & Sons, Ltd.

14.
We propose a non-linear density estimator, which is locally adaptive, like wavelet estimators, and positive everywhere, without a log- or root-transform. The estimator is based on maximizing a non-parametric log-likelihood function regularized by a total variation penalty. The smoothness is driven by a single penalty parameter, and to avoid cross-validation, we derive an information criterion based on the idea of a universal penalty. The penalized log-likelihood maximization is reformulated as an ℓ1-penalized strictly convex programme whose unique solution is the density estimate. A Newton-type method cannot be applied to calculate the estimate because the ℓ1-penalty is non-differentiable. Instead, we use a dual block coordinate relaxation method that exploits the problem structure. By comparing with kernel, spline, and taut string estimators in a Monte Carlo simulation, and by investigating the sensitivity to ties on two real data sets, we observe that the new estimator achieves good L1 and L2 risk for densities with sharp features and behaves well in the presence of ties.

15.
The assessment of overall homogeneity of time-to-event curves is a key element of survival analysis in biomedical research. The commonly used testing methods, e.g. the log-rank, Wilcoxon, and Kolmogorov–Smirnov tests, may suffer a significant loss of statistical power under certain circumstances. In this paper we propose a new, robust testing method for comparing the overall homogeneity of survival curves, based on the absolute difference of the areas under the survival curves, using a normal approximation by Greenwood's formula. Monte Carlo simulations are conducted to investigate the performance of the new testing method against the log-rank, Wilcoxon, and Kolmogorov–Smirnov tests under a variety of circumstances. The proposed method is robust and has greater power to detect overall differences than the log-rank, Wilcoxon, and Kolmogorov–Smirnov tests in many of the simulated scenarios. Furthermore, the applicability of the new testing approach is illustrated with real data from a kidney dialysis trial. Copyright © 2009 John Wiley & Sons, Ltd.
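The statistic described above is built on the difference of areas under the two Kaplan–Meier curves. A minimal sketch of that numerator on hypothetical data (without the Greenwood-based variance that the paper uses for the normal approximation) could be:

```python
def km_curve(times, events):
    """Kaplan-Meier estimate: returns the (time, S(time)) step points."""
    steps = []
    s = 1.0
    for t in sorted({ti for ti, ei in zip(times, events) if ei == 1}):
        n = sum(1 for ti in times if ti >= t)                          # at risk
        d = sum(1 for ti, ei in zip(times, events) if ti == t and ei == 1)
        s *= 1.0 - d / n
        steps.append((t, s))
    return steps

def area_under_km(times, events, tau):
    """Area under the KM curve on [0, tau], i.e. the restricted mean
    survival time up to the truncation point tau."""
    area, prev_t, prev_s = 0.0, 0.0, 1.0
    for t, s in km_curve(times, events):
        if t > tau:
            break
        area += prev_s * (t - prev_t)       # rectangle under the current step
        prev_t, prev_s = t, s
    area += prev_s * (tau - prev_t)         # last partial rectangle up to tau
    return area

# hypothetical two-arm data; tau chosen within both follow-up ranges
t_trt, e_trt = [3.0, 5.0, 7.0, 9.0], [1, 0, 1, 0]
t_ctl, e_ctl = [1.0, 2.0, 4.0, 6.0], [1, 1, 1, 0]
tau = 6.0
diff = abs(area_under_km(t_trt, e_trt, tau) - area_under_km(t_ctl, e_ctl, tau))
```

The truncation point tau is needed because the area under a KM curve is only identified up to the end of common follow-up.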

16.
Given a random sample (X1, Y1), …, (Xn, Yn) from a bivariate (BV) absolutely continuous c.d.f. H(x, y), we consider rank tests for the null hypothesis of interchangeability H0: H(x, y) = H(y, x). Three linear rank test statistics, the Wilcoxon (WN), sum of squared ranks (SSRN), and Savage (SN) statistics, are described in Section 1. In Section 2, asymptotic relative efficiency (ARE) comparisons of the three types of tests are made for Morgenstern (Plackett, 1965) and Moran (1969) BV alternatives with marginal distributions satisfying G(x) = F(x/θ) for some θ ≠ 1. Both gamma and lognormal marginal distributions are used.

17.
In this paper, we construct a new ranked set sampling protocol that maximizes the Pitman asymptotic efficiency of the signed rank test. The new sampling design is a function of the set size and of independent order statistics. If the set size is odd and the underlying distribution is symmetric and unimodal, the new sampling protocol quantifies only the middle observation; if the set size is even, it quantifies the two middle observations. This data collection procedure outperforms the standard ranked set sampling procedure for use in the signed rank test. We show that, for odd set sizes, the exact null distribution of the signed rank statistic WRSS+ based on a data set generated by the new ranked set sample design is the same as the null distribution of the simple random sample signed rank statistic WSRS+ based on the same number of measured observations. For even set sizes, the exact null distribution of WRSS+ is simulated.
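For an odd set size, the protocol described above quantifies only the middle observation of each set. A hypothetical sketch of that collection step, followed by the signed rank statistic W+ computed on the quantified values (the `draw` function and seed are illustrative assumptions, and ties are ignored as the data are continuous):

```python
import random

def median_rss_sample(draw, set_size, n_sets, rng):
    """Ranked set sample that measures only the middle order statistic
    of each set of size set_size (set_size must be odd)."""
    assert set_size % 2 == 1
    mid = set_size // 2
    # draw each set, sort it, and quantify only the middle observation
    return [sorted(draw(rng) for _ in range(set_size))[mid]
            for _ in range(n_sets)]

def signed_rank_statistic(sample):
    """Wilcoxon signed-rank statistic W+ for H0: symmetric about 0.
    Ranks observations by absolute value and sums the ranks of the
    positive ones (no tie correction; data assumed continuous)."""
    ranked = sorted(range(len(sample)), key=lambda i: abs(sample[i]))
    return sum(r + 1 for r, i in enumerate(ranked) if sample[i] > 0)

rng = random.Random(1)
xs = median_rss_sample(lambda r: r.gauss(0.0, 1.0), set_size=3, n_sets=10, rng=rng)
w_plus = signed_rank_statistic(xs)
```

With 10 quantified observations, W+ ranges from 0 to 55, and under symmetry about 0 it is centered at 27.5.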

18.
Immuno‐oncology has emerged as an exciting new approach to cancer treatment. Common immunotherapy approaches include cancer vaccine, effector cell therapy, and T‐cell–stimulating antibody. Checkpoint inhibitors such as cytotoxic T lymphocyte–associated antigen 4 and programmed death‐1/L1 antagonists have shown promising results in multiple indications in solid tumors and hematology. However, the mechanisms of action of these novel drugs pose unique statistical challenges in the accurate evaluation of clinical safety and efficacy, including late‐onset toxicity, dose optimization, evaluation of combination agents, pseudoprogression, and delayed and lasting clinical activity. Traditional statistical methods may not be the most accurate or efficient. It is highly desirable to develop the most suitable statistical methodologies and tools to efficiently investigate cancer immunotherapies. In this paper, we summarize these issues and discuss alternative methods to meet the challenges in the clinical development of these novel agents. For safety evaluation and dose‐finding trials, we recommend the use of a time‐to‐event model‐based design to handle late toxicities, a simple 3‐step procedure for dose optimization, and flexible rule‐based or model‐based designs for combination agents. For efficacy evaluation, we discuss alternative endpoints/designs/tests including the time‐specific probability endpoint, the restricted mean survival time, the generalized pairwise comparison method, the immune‐related response criteria, and the weighted log‐rank or weighted Kaplan‐Meier test. The benefits and limitations of these methods are discussed, and some recommendations are provided for applied researchers to implement these methods in clinical practice.
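One of the alternative endpoints mentioned above, the time-specific survival probability, can be estimated by the Kaplan–Meier curve at a landmark time together with a Greenwood standard error. A minimal sketch on hypothetical data:

```python
import math

def km_with_greenwood(times, events, t_star):
    """Kaplan-Meier estimate S(t*) and its Greenwood standard error,
    for a time-specific survival probability endpoint.

    Greenwood: Var[S(t*)] = S(t*)^2 * sum over event times t <= t* of
    d / (n * (n - d)), with n at risk and d events at t.
    """
    s, gw = 1.0, 0.0
    for t in sorted({ti for ti, ei in zip(times, events) if ei == 1}):
        if t > t_star:
            break
        n = sum(1 for ti in times if ti >= t)                          # at risk
        d = sum(1 for ti, ei in zip(times, events) if ti == t and ei == 1)
        s *= 1.0 - d / n
        if n > d:
            gw += d / (n * (n - d))
    return s, s * math.sqrt(gw)

# hypothetical single-arm data, landmark at 6 months
times  = [1.0, 3.0, 4.0, 5.0, 8.0, 10.0]
events = [1, 1, 0, 1, 0, 0]
s6, se6 = km_with_greenwood(times, events, 6.0)
```

A Wald confidence interval for the landmark probability then follows as s6 ± 1.96 · se6 (transformations such as log(−log S) are often preferred near 0 or 1).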

19.
Many nonparametric tests have been proposed for the hypothesis of no row (treatment) effect in a one-way layout design; examples are the Kruskal-Wallis H-test, Bhapkar's (1961) V-test, and Deshpande's (1965) L-test. However, not many tests are available for testing the same hypothesis in a two-way layout design without interaction. Perhaps the only "established" test is the one due to Friedman (1937), which applies to the case of one observation per cell only. In this paper, a new distribution-free test is proposed for the hypothesis of no row effect in a two-way layout design. It applies to the case of several observations per cell, not necessarily equal in number. The asymptotic efficiency of the proposed test relative to other tests is studied.

20.
In randomized clinical trials, the log-rank test is often used to test the null hypothesis of equality of the treatment-specific survival distributions. In observational studies, however, the ordinary log-rank test is no longer guaranteed to be valid, and we must be cautious about potential confounders, that is, covariates that affect both the treatment assignment and the survival distribution. In this paper, two cases are considered: the first is when all the potential confounders are believed to be captured in the primary database, and the second is when a substudy is conducted to capture additional confounding covariates. We generalize the augmented inverse probability weighted complete case estimators for the treatment-specific survival distribution proposed in Bai et al. (Biometrics 69:830–839, 2013) and develop log-rank type tests for both cases. The consistency and double robustness of the proposed test statistics are demonstrated in simulation studies. The statistics are then applied to data from the observational study that motivated this research.
