Similar Articles
20 similar articles found.
1.
With the advent of ever more effective second- and third-line cancer treatments and the growing use of 'crossover' trial designs in oncology, in which patients switch to the alternate randomized treatment upon disease progression, progression-free survival (PFS) is an increasingly important endpoint in oncologic drug development. However, several concerns exist regarding the use of PFS as a basis to compare treatments. Unlike survival, the exact time of progression is unknown, so progression times may be overestimated and, consequently, bias may be introduced when comparing treatments. Further, it is not uncommon for randomized therapy to be stopped before progression is documented, owing to toxicity or the initiation of additional anti-cancer therapy; in such cases patients are frequently not followed further for progression and, consequently, are right-censored in the analysis. This article reviews these issues and concludes that concerns relating to the exact timing of progression are generally overstated, with analysis techniques and simple alternative endpoints available to either remove bias entirely or at least provide reassurance via supportive analyses that bias is not present. Further, it is concluded that the regularly recommended manoeuvre to censor PFS time at dropout due to toxicity or upon the initiation of additional anti-cancer therapy is likely to favour the more toxic, less efficacious treatment and so should be avoided whenever possible.
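A minimal numpy sketch of the timing point above: when progression is detected only at scheduled visits, recording it at the first visit where it is seen overestimates PFS, while a midpoint convention reduces the bias. The event rate and visit spacing here are illustrative assumptions, not values from the article.

```python
import numpy as np

rng = np.random.default_rng(0)

# True progression times (months), detected only at scheduled assessments.
true_t = rng.exponential(6.0, size=10_000)
gap = 2.0                                    # assumed visit spacing (months)
recorded = np.ceil(true_t / gap) * gap       # first visit at or after progression
midpoint = recorded - gap / 2                # a simple bias-reducing convention

print(f"mean true {true_t.mean():.2f}, recorded {recorded.mean():.2f}, "
      f"midpoint {midpoint.mean():.2f}")
```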

2.
Evaluation-time (or assessment-time) bias can arise in oncology trials that study progression-free survival (PFS) when randomized groups have different patterns of timing of assessments. Modelling or computer simulation is sometimes used to explore the extent of such bias; valid results require building such simulations under realistic assumptions about the timing of assessments. This paper considers a trial that used a logrank test, where computer simulations were based on unrealistic assumptions that severely overestimated the extent of potential bias. The paper shows that seemingly small differences in assumptions can lead to dramatic differences in the apparent operating characteristics of logrank tests. Copyright © 2010 John Wiley & Sons, Ltd.
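To see how assessment-schedule assumptions drive the apparent bias, the sketch below simulates a null trial (identical true PFS in both arms) with arm-specific visit spacings and a hand-rolled two-sample logrank statistic; the type I error it reports depends entirely on the assumed schedules. Sample sizes, visit gaps, and event rates are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)

def logrank_z(time, event, group):
    """Two-sample logrank Z statistic (hand-rolled for this sketch)."""
    oe, var = 0.0, 0.0
    for t in np.unique(time[event == 1]):
        risk = time >= t
        n, n1 = risk.sum(), (risk & (group == 1)).sum()
        d = ((time == t) & (event == 1)).sum()
        d1 = ((time == t) & (event == 1) & (group == 1)).sum()
        oe += d1 - d * n1 / n
        if n > 1:
            var += d * (n1 / n) * (1 - n1 / n) * (n - d) / (n - 1)
    return oe / np.sqrt(var)

def one_trial(n=200, gaps=(2.0, 3.0), follow=24.0):
    """Identical true PFS in both arms, but arm-specific assessment schedules."""
    group = np.repeat([0, 1], n // 2)
    true_t = rng.exponential(8.0, size=n)
    gap = np.where(group == 1, gaps[1], gaps[0])
    time = np.minimum(np.ceil(true_t / gap) * gap, follow)
    event = (time < follow).astype(int)      # progression seen at a visit vs censored
    return logrank_z(time, event, group)

z = np.array([one_trial() for _ in range(2000)])
print("two-sided type I error at nominal 5%:", np.mean(np.abs(z) > 1.96))
```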

3.
In oncology/hematology early phase clinical trials, efficacy is often observed in terms of response rate, depth, timing, and duration. However, the true clinical benefits that eventually support registration are progression-free survival (PFS) and/or overall survival (OS), for which follow-up in early phase trials is typically not long enough. This gap imposes challenges on strategies for late phase drug development. In this article, we tackle the question by leveraging published studies to establish a quantitative link between early efficacy outcomes and late phase efficacy endpoints. Using solid tumor cancer as the disease model, we model the disease course of a RECIST v1.1-assessed solid tumor with a continuous-time Markov chain (CMC) model. We parameterize the transition intensity matrix of the CMC model from published aggregate-level summary statistics and then simulate subject-level time-to-event data; the simulated data approximate published studies well. PFS and/or OS can then be predicted by modifying the transition intensity matrix, guided by clinical knowledge, to reflect various assumptions on response rate, depth, timing, and duration. The authors have built an R Shiny application named PubPredict, which implements the algorithm described above and offers customized features including multiple response levels, treatment crossover, and varying follow-up duration. This toolset has been applied to advise phase 3 trial design when only early efficacy data are available from phase 1 or 2 studies.
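A sketch of the simulation idea, assuming a four-state model (stable, response, progression, death) with a hypothetical intensity matrix Q; subject-level PFS is the time of first entry into progression or death. This illustrates the general CMC mechanism, not the parameterization used by PubPredict.

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical intensity matrix over states 0=stable, 1=response, 2=progression, 3=death.
# Rates are per month; off-diagonals are transition intensities, rows sum to zero.
Q = np.array([
    [-0.30, 0.15, 0.12, 0.03],
    [ 0.02, -0.10, 0.06, 0.02],
    [ 0.00, 0.00, -0.08, 0.08],
    [ 0.00, 0.00, 0.00, 0.00],   # death is absorbing
])

def simulate_pfs(q, n=1000, horizon=60.0):
    """Simulate time to progression or death (PFS) from the CMC."""
    pfs = np.empty(n)
    for i in range(n):
        state, t = 0, 0.0
        while state < 2:                     # states 2/3 end PFS
            rate = -q[state, state]
            t += rng.exponential(1.0 / rate)
            if t >= horizon:
                t = horizon                  # administratively censored
                break
            p = np.clip(q[state], 0.0, None)
            state = rng.choice(4, p=p / p.sum())
        pfs[i] = t
    return pfs

print("median simulated PFS (months):", np.median(simulate_pfs(Q)))
```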

4.
Progression-free survival is recognized as an important endpoint in oncology clinical trials. In trials aimed at new drug development, the target population often comprises patients who are refractory to standard therapy and whose tumors progress rapidly. This situation inflates the bias of the hazard ratio calculated for progression-free survival, resulting in decreased power for such patients. Measures to prevent this loss of power therefore need to be taken in advance, when estimating the sample size. Here, I propose a novel procedure for deriving the assumed hazard ratio for progression-free survival under the Cox proportional hazards model, which can then be applied in sample size calculation. The hazard ratios derived by the proposed procedure were almost identical to those obtained by simulation. A hazard ratio calculated by the proposed procedure is applicable to sample size calculation and attains the nominal power. Methods that compensate for the loss of power due to biases in the hazard ratio are also discussed from a practical point of view.
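The paper's own derivation is not reproduced here, but once an assumed PFS hazard ratio is in hand it feeds a standard event-count calculation; the sketch below uses Schoenfeld's classical approximation, which is a textbook formula rather than the proposed procedure.

```python
import math
from scipy.stats import norm

def required_events(hr, alpha=0.05, power=0.8, alloc=0.5):
    """Schoenfeld's approximation: total events for a two-sided logrank/Cox comparison."""
    za, zb = norm.ppf(1 - alpha / 2), norm.ppf(power)
    return math.ceil((za + zb) ** 2 / (alloc * (1 - alloc) * math.log(hr) ** 2))

print(required_events(hr=0.7))   # about 247 events for an assumed PFS HR of 0.70
```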

5.
Predictive enrichment strategies use biomarkers to selectively enroll oncology patients into clinical trials in order to demonstrate therapeutic benefit more efficiently. Because the enriched population differs from the patient population eligible for screening with the biomarker assay, there is potential for bias when estimating clinical utility for the screening-eligible population if the selection process is ignored. We write estimators of clinical utility as integrals that average regression-model predictions over the conditional distribution of the biomarker scores defined by the assay cutoff, and we discuss the conditions under which consistent estimation can be achieved, accounting for nuances that may arise as the biomarker assay progresses toward a companion diagnostic. We outline and implement a Bayesian approach to estimating these clinical utility measures and use simulations to illustrate performance and the potential biases when estimation naively ignores enrichment. Results suggest that the proposed integral representation of clinical utility, combined with Bayesian methods, provides a practical strategy to facilitate cutoff decision-making in this setting. Copyright © 2015 John Wiley & Sons, Ltd.
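A toy Monte Carlo version of the integral representation: average regression-model predictions over the biomarker distribution, with and without conditioning on the assay cutoff. The normal biomarker model, logistic response model, and all parameter values are assumptions for illustration, not the paper's Bayesian estimator.

```python
import numpy as np
from scipy.special import expit

rng = np.random.default_rng(3)

# Hypothetical: biomarker score X ~ N(0,1) in the screening population;
# response model P(Y=1|x) = expit(a + b*x) for treated patients.
a, b, cutoff = -1.0, 1.2, 0.5
x = rng.normal(size=200_000)

p_all = expit(a + b * x).mean()                    # screening-eligible utility
p_enriched = expit(a + b * x[x >= cutoff]).mean()  # utility in the enriched subset

print(f"response rate, all-comers: {p_all:.3f}; enriched (X>={cutoff}): {p_enriched:.3f}")
```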

6.
Clinical trials of experimental treatments must be designed with primary endpoints that directly measure clinical benefit for patients. In many disease areas, the recognised gold standard primary endpoint can take many years to mature, leading to challenges in the conduct and quality of clinical studies. There is increasing interest in using shorter-term surrogate endpoints as substitutes for costly long-term clinical trial endpoints; such surrogates need to be selected according to biological plausibility, as well as the ability to reliably predict the unobserved treatment effect on the long-term endpoint. A number of statistical methods to evaluate this prediction have been proposed; this paper uses a simulation study to explore one such method in the context of time-to-event surrogates for a time-to-event true endpoint. This two-stage meta-analytic copula method has been studied extensively for time-to-event surrogate endpoints with one event of interest, but has not yet been explored for assessing surrogates with multiple events of interest, such as those incorporating information directly from the true clinical endpoint. We assess the sensitivity of the method to various factors, including the strength of association between endpoints, the quantity of data available, and the effect of censoring. In particular, we consider scenarios where there exist very little data on which to assess surrogacy. Results show that the two-stage meta-analytic copula method performs well under certain circumstances and could be considered useful in practice, but it has limitations that may prevent universal use.
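Correlated surrogate/true event-time pairs of the kind such a simulation study needs can be generated from a copula; the sketch below uses conditional inversion for a Clayton copula with exponential margins (a common choice, though not necessarily the copula used in the paper) and checks the induced Kendall's tau. The margin rates are assumed values.

```python
import numpy as np
from scipy.stats import kendalltau

rng = np.random.default_rng(11)
theta, n = 2.0, 5000                 # Clayton theta = 2 gives Kendall's tau = 0.5

u = rng.uniform(size=n)
w = rng.uniform(size=n)
# Conditional inversion for the Clayton copula: draw v given u
v = ((w ** (-theta / (1 + theta)) - 1) * u ** (-theta) + 1) ** (-1 / theta)

surrogate = -np.log(u) / 0.10        # exponential margins; rates are assumed
true_end = -np.log(v) / 0.05

print("empirical Kendall's tau:", round(kendalltau(surrogate, true_end)[0], 3))
```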

7.
The addendum to the ICH E9 guideline on statistical principles for clinical trials introduced the estimand framework. The framework is designed to strengthen the dialog between different stakeholders, to introduce greater clarity into clinical trial objectives, and to provide alignment between the estimand and the statistical analysis. Estimand-framework publications have thus far focused mainly on randomized clinical trials. The intention of the Early Development Estimand Nexus (EDEN), a task force of the cross-industry Oncology Estimand Working Group (www.oncoestimand.org), is to apply the framework to single-arm Phase 1b or Phase 2 trials designed to detect a treatment-related efficacy signal, typically measured by objective response rate. Key recommendations regarding the estimand attributes include that, in a single-arm early clinical trial, the treatment attribute should start when the first dose is received by the participant. When the focus is estimation of an absolute effect, the population-level summary measure should reflect only the property used for the estimation. Another major component introduced in the ICH E9 addendum is the definition of intercurrent events and the possible ways to handle them. Different strategies reflect different clinical questions of interest that can be answered based on the journeys an individual subject can take during a trial. We provide detailed strategy recommendations for intercurrent events typically seen in early-stage oncology, and we highlight where implicit assumptions should be made transparent; for example, whenever follow-up is suspended, a while-on-treatment strategy is implied.

8.
The Pareto distribution is a well-known probability distribution in statistics that has been widely used in many fields, such as finance, physics, hydrology, geology, and astronomy. However, parameter estimation for the truncated Pareto distribution is much more complicated than for the Pareto distribution. In this paper, we demonstrate that the bias of the maximum likelihood estimator for the truncated Pareto distribution can be significantly reduced by jackknife estimation, which takes a very simple form.
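A sketch of the idea with the truncation bounds treated as known (the full problem also estimates them): the shape MLE solves the score equation numerically, and the jackknife bias correction combines the full-sample estimate with leave-one-out estimates. The true shape and bounds are illustrative.

```python
import numpy as np
from scipy.optimize import brentq

rng = np.random.default_rng(5)

def rtrunc_pareto(n, alpha, lo, hi):
    """Sample a Pareto(alpha, lo) truncated above at hi via the inverse CDF."""
    u = rng.uniform(size=n)
    c = 1 - (lo / hi) ** alpha
    return lo * (1 - u * c) ** (-1 / alpha)

def mle_alpha(x, lo, hi):
    """Shape MLE for the truncated Pareto: solve the score equation numerically."""
    n, s = len(x), np.sum(np.log(x) - np.log(lo))
    r = lo / hi
    def score(a):
        return n / a + n * r ** a * np.log(r) / (1 - r ** a) - s
    return brentq(score, 0.01, 50.0)

def jackknife(x, lo, hi):
    """Bias correction: n * full-sample MLE - (n-1) * mean of leave-one-out MLEs."""
    n = len(x)
    full = mle_alpha(x, lo, hi)
    loo = [mle_alpha(np.delete(x, i), lo, hi) for i in range(n)]
    return n * full - (n - 1) * np.mean(loo)

x = rtrunc_pareto(50, alpha=1.5, lo=1.0, hi=20.0)
print("MLE:", mle_alpha(x, 1.0, 20.0), " jackknife:", jackknife(x, 1.0, 20.0))
```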

9.
It is well known that spontaneous reporting systems suffer from significant under-reporting of adverse drug reactions from the source population. Existing methods do not adjust for such under-reporting when calculating measures of association between a drug and the adverse drug reaction under study. Often there is direct and/or indirect information on the reporting probabilities. This work incorporates reporting probabilities into existing methodologies, specifically the Bayesian confidence propagation neural network and DuMouchel's empirical Bayes method, and shows how the two methods lead to biased results in the presence of under-reporting. When all cases are assumed to be reported, the association measure for the source population can be estimated using only exposure information obtained through a reference sample from the source population. Copyright © 2014 John Wiley & Sons, Ltd.
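The mechanics can be seen in a toy 2x2 disproportionality table: dividing cell counts by assumed cell-specific reporting probabilities changes the association measure. The counts and probabilities below are hypothetical, and the relative reporting ratio stands in for the BCPNN/DuMouchel statistics discussed in the paper.

```python
import numpy as np

# Hypothetical 2x2 spontaneous-report table: rows = drug (yes/no), cols = ADR (yes/no)
obs = np.array([[40.0, 960.0], [100.0, 8900.0]])
# Assumed reporting probabilities per cell (drug-ADR combinations reported most often)
rep_prob = np.array([[0.5, 0.1], [0.3, 0.1]])

adj = obs / rep_prob                      # crude inverse-probability correction

def rrr(t):
    """Relative reporting ratio: observed vs expected under row/column independence."""
    return t[0, 0] * t.sum() / (t[0].sum() * t[:, 0].sum())

print(f"naive RRR: {rrr(obs):.2f}  adjusted RRR: {rrr(adj):.2f}")
```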

10.
The identification of factors related to human immunodeficiency virus (HIV) disease progression, whether they interact directly with HIV to increase the rate of disease progression or provide an indication of an infected individual's likely prognosis, can be of great value for understanding HIV pathogenesis and for developing novel therapeutic approaches. This paper describes the roles of the CD4 cell count and the viral load as markers of disease progression and discusses recent findings on chemokine receptors in HIV infection. Our current knowledge of these factors is summarized and unresolved statistical issues are highlighted.

11.
Summary. Malignancy grade is a histological measure of attributes related to a breast tumour's aggressive potential. It is not established whether grade is an innate characteristic that remains unchanged throughout the tumour's development or whether it evolves as the tumour grows. It is likely that a proportion of tumours have the potential to evolve, so a statistical method was required to assess this hypothesis and, if possible, to estimate the proportion with the potential for evolution. A mover-stayer mixture of Markov chain models was therefore developed, with the complication that 'movers' were unobservable because tumours were excised on diagnosis. A quasi-likelihood method was used for estimation. The methods are demonstrated using data from the Swedish two-county trial of breast-cancer screening.
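A minimal sketch of the mover-stayer mixture (not the paper's quasi-likelihood estimation): a fraction of "stayers" keeps its initial grade, while "movers" evolve under a one-step transition matrix. The grades, mixing fraction, and rates are invented for illustration.

```python
import numpy as np

# Mover-stayer sketch: a fraction `stay` of tumours never changes grade; the rest
# evolve by a one-step transition matrix M over grades 1..3 (values hypothetical).
stay = 0.4
M = np.array([[0.8, 0.15, 0.05],
              [0.0, 0.85, 0.15],
              [0.0, 0.0, 1.0]])

def grade_dist(start, steps):
    """Marginal grade distribution after `steps` periods under the mixture."""
    e = np.eye(3)[start]
    return stay * e + (1 - stay) * e @ np.linalg.matrix_power(M, steps)

print(grade_dist(start=0, steps=4).round(3))
```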

12.
13.
Summary. Statistical methods of ecological analysis that attempt to reduce ecological bias are evaluated empirically to determine the circumstances in which each method might be practicable. The method that is most successful at reducing ecological bias is stratified ecological regression. It allows individual-level covariate information to be incorporated into a stratified ecological analysis, as well as the combination of disease and risk factor information from two separate data sources, e.g. outcomes from a cancer registry and risk factor information from the census Sample of Anonymised Records data set. The aggregated individual-level model compares favourably with this model but has convergence problems. In addition, it is shown that the large areas covered by local authority districts seem to reduce between-area variability and may therefore not be as informative as a ward-level analysis. This has policy implications because access to ward-level data is restricted.

14.
In this article, we study the moment-based test for homogeneity in a mixture of natural exponential family distributions with quadratic variance functions (NEF-QVF) proposed by Ning et al. (2009b, Statistics & Probability Letters, 79: 828-834) in the small-sample-size scenario. We derive an approximation to the null distribution of the test statistic via an Edgeworth expansion. Simulations are conducted for a binomial mixture distribution, which includes the situation of detecting linkage in genetic analysis, with different sample sizes and family sizes at various significance levels. The simulation results show that our test performs reasonably well. We also apply the proposed method to real clinical data to test for a significant difference between two drug treatments. Critical values associated with a binomial mixture distribution are also provided.
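The paper's statistic and its Edgeworth correction are not reproduced here; as a stand-in, the sketch below simulates the small-sample null behaviour of Tarone's classical moment-based Z for binomial homogeneity, showing why a plain normal critical value can be inaccurate with few, small families. Family counts and sizes are illustrative.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(9)

def tarone_z(x, n):
    """Tarone's moment-based Z for binomial homogeneity (not the paper's statistic)."""
    p = x.sum() / n.sum()
    s = np.sum((x - n * p) ** 2) / (p * (1 - p))
    return (s - n.sum()) / np.sqrt(2 * np.sum(n * (n - 1)))

# Null model: every family is Binomial(n_i, 0.3); few, small families
fams, size, reps = 15, 4, 20_000
z = np.array([tarone_z(rng.binomial(size, 0.3, fams),
                       np.full(fams, size)) for _ in range(reps)])

print("nominal 5% upper tail, actual:", np.mean(z > norm.ppf(0.95)))
```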

15.
M. Bloznelis, Statistics (2013), 47(6): 489-504
Using the ANOVA decomposition, we obtain an explicit formula for the bias of the jackknife variance estimator in stratified samples drawn without replacement. For a wide class of asymptotically linear statistics, we show the consistency of the jackknife variance estimator and establish the asymptotic normality of the Studentized versions of such statistics.
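A small numerical illustration of the bias in question, assuming stratified simple random sampling without replacement and the stratified mean as the statistic: the delete-one jackknife ignores the finite-population correction and so overstates the variance relative to the textbook design-based estimator. Stratum sizes and means are made up.

```python
import numpy as np

rng = np.random.default_rng(4)

# Stratified SRS without replacement: population sizes N_h, sample sizes n_h
N = np.array([200, 500, 300]); n = np.array([20, 50, 30])
pops = [rng.normal(mu, 1.0, Nh) for mu, Nh in zip([1.0, 2.0, 3.0], N)]
samples = [rng.choice(p, nh, replace=False) for p, nh in zip(pops, n)]
W = N / N.sum()

def strat_mean(s):
    return sum(w * x.mean() for w, x in zip(W, s))

# Delete-one jackknife within each stratum (no finite-population correction)
theta = strat_mean(samples)
vj = 0.0
for h, x in enumerate(samples):
    loo = np.array([strat_mean(samples[:h] + [np.delete(x, i)] + samples[h + 1:])
                    for i in range(len(x))])
    vj += (len(x) - 1) / len(x) * np.sum((loo - theta) ** 2)

# Textbook design-based variance with the fpc, for comparison
vd = sum(w ** 2 * (1 - nh / Nh) * x.var(ddof=1) / nh
         for w, nh, Nh, x in zip(W, n, N, samples))

print(f"jackknife: {vj:.5f}  design-based (with fpc): {vd:.5f}")
```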

16.
In assessing biosimilarity between two products, the question to ask is always “How similar is similar?” Traditionally, equivalence of the means between products has been the primary consideration in a clinical trial. This study suggests an alternative assessment that tests whether a certain percentage of the population of differences lies within a prespecified interval. In doing so, accuracy and precision are assessed simultaneously by judging whether a two-sided tolerance interval falls within a prespecified acceptance range. We further derive an asymptotic distribution of the tolerance limits to determine the sample size needed to achieve a targeted level of power. Our numerical study shows that the proposed two-sided tolerance interval test controls the type I error rate and provides sufficient power. A real example is presented to illustrate the proposed approach.
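A sketch of the decision rule using Howe's standard approximation to the two-sided normal tolerance factor (the paper derives its own asymptotics): compute the tolerance interval for the paired differences and check containment in the acceptance range. The data and acceptance limits are made up.

```python
import numpy as np
from scipy.stats import norm, chi2

def tol_factor(n, coverage=0.9, conf=0.95):
    """Howe's approximation to the two-sided normal tolerance factor."""
    df = n - 1
    zp = norm.ppf((1 + coverage) / 2)
    return zp * np.sqrt(df * (1 + 1 / n) / chi2.ppf(1 - conf, df))

rng = np.random.default_rng(2)
diff = rng.normal(0.02, 0.15, size=60)      # paired differences, test vs reference
k = tol_factor(len(diff))
lo, hi = diff.mean() - k * diff.std(ddof=1), diff.mean() + k * diff.std(ddof=1)
accept = (-0.5, 0.5)                        # prespecified acceptance range (assumed)

print(f"tolerance interval ({lo:.3f}, {hi:.3f}); "
      f"similar: {accept[0] <= lo and hi <= accept[1]}")
```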

17.
Children represent a large underserved population of “therapeutic orphans,” as an estimated 80% of children are treated off-label. However, pediatric drug development often faces substantial challenges, including economic, logistical, technical, and ethical barriers. Among the many efforts to remove these barriers, increased attention has recently been paid to extrapolation; that is, leveraging available data from adults or older age groups to draw conclusions for the pediatric population. The Bayesian statistical paradigm is natural in this setting, as it permits the combining (or “borrowing”) of information across disparate sources, such as adult and pediatric data. In this paper, authored by the pediatric subteam of the Drug Information Association Bayesian Scientific Working Group and Adaptive Design Working Group, we develop, illustrate, and provide suggestions on Bayesian statistical methods that could be used to design improved pediatric development programs that use all available information in the most efficient manner. A variety of relevant Bayesian approaches are described, several of which are illustrated through two case studies: extrapolating adult efficacy data to expand the labeling for Remicade to include pediatric ulcerative colitis, and extrapolating adult exposure-response information for antiepileptic drugs to pediatrics.
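One of the simplest borrowing devices in this family is a power prior with conjugate beta-binomial updating, sketched below with hypothetical adult and pediatric response counts; the borrowing weight a0 interpolates between ignoring the adult data and fully pooling it. This is an illustration, not the case-study analyses.

```python
from scipy.stats import beta

# Power-prior sketch: discount adult response data by a0 before combining with
# a small pediatric cohort (all counts hypothetical).
adult_resp, adult_n = 120, 300
ped_resp, ped_n = 8, 20
a0 = 0.5                                   # borrowing weight: 0 = none, 1 = full pooling

post = beta(1 + a0 * adult_resp + ped_resp,
            1 + a0 * (adult_n - adult_resp) + (ped_n - ped_resp))
print(f"posterior mean {post.mean():.3f}, 95% CrI {post.ppf([0.025, 0.975]).round(3)}")
```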

18.
Two-stage designs offer substantial advantages for early phase II studies. The interim analysis following the first stage allows the study to be stopped for futility or, more positively, it might lead to early progression to the trials needed for late phase II and phase III. If the study is to continue to its second stage, there is an opportunity to revise the total sample size. Two-stage designs have been implemented widely in oncology studies in which there is a single treatment arm and patient responses are binary. In this paper, the case of two-arm comparative studies in which responses are quantitative is considered; this setting is common in therapeutic areas other than oncology. It is assumed that observations are normally distributed but that there is some doubt concerning their standard deviation, motivating the need for a sample size review. The work reported was motivated by a study in diabetic neuropathic pain, and the development of the design for that trial is described in detail.
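The sample size review step can be sketched with the usual normal-approximation formula for comparing two means: recompute the per-arm size after replacing the design-stage standard deviation with the interim estimate. The numbers are illustrative, and real re-estimation procedures add safeguards this sketch omits.

```python
import math
from scipy.stats import norm

def n_per_arm(sd, delta, alpha=0.05, power=0.8):
    """Normal-approximation sample size for a two-arm comparison of means."""
    za, zb = norm.ppf(1 - alpha / 2), norm.ppf(power)
    return math.ceil(2 * ((za + zb) * sd / delta) ** 2)

planned = n_per_arm(sd=1.0, delta=0.5)      # design-stage assumption for the SD
revised = n_per_arm(sd=1.3, delta=0.5)      # interim SD larger than assumed
print(f"planned {planned}/arm -> revised {revised}/arm")
```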

19.
20.
For the time-to-event outcome, current methods for sample size determination are based on the proportional hazards model. However, if the proportionality assumption fails to capture the relationship between the event time and covariates, the proportional hazards model is not suitable for analyzing the survival data. The accelerated failure time (AFT) model is an alternative for dealing with survival data. In this paper, we design a multiregional trial under the assumption that the relationship between the event time and the treatment effect follows the AFT model. The log-rank test is employed to deal with heterogeneous effect sizes among regions. The test statistic for the overall treatment effect is used to determine the total sample size for a multiregional trial, and the proposed criteria are used to rationalize the partition of the sample size across regions.
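One way to connect the two models: under a Weibull AFT model, the acceleration factor maps to a proportional hazard ratio (HR = exp(-shape * theta)), so a logrank-style event target can still be back-calculated and then partitioned across regions. The Weibull shape, effect size, and regional weights below are assumptions, and the proportional split merely stands in for the paper's criteria.

```python
import math
from scipy.stats import norm

shape, theta = 1.2, 0.3                      # Weibull shape; log acceleration factor
hr = math.exp(-shape * theta)                # implied hazard ratio under Weibull AFT

# Schoenfeld-style total event count for a two-sided 5% test with 80% power
za, zb = norm.ppf(0.975), norm.ppf(0.8)
events = math.ceil((za + zb) ** 2 / (0.25 * math.log(hr) ** 2))
print(f"implied HR {hr:.3f} -> total events {events}")

# A simple proportional split of the event target across regions (illustrative)
weights = {"Asia": 0.3, "EU": 0.4, "US": 0.3}
print({region: math.ceil(w * events) for region, w in weights.items()})
```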
