Similar Articles
20 similar articles retrieved.
1.
In recent years, high failure rates have been observed in phase III trials. One of the main reasons is overoptimistic assumptions in the planning of phase III, resulting from limited phase II information and/or unawareness of realistic success probabilities. We present an approach for planning a phase II trial in a time-to-event setting that considers the whole phase II/III clinical development programme. We derive stopping boundaries after phase II that minimise the number of events, subject to side conditions on the conditional probabilities of a correct go/no-go decision after phase II as well as the conditional success probabilities for phase III. In addition, we give general recommendations for the choice of the phase II sample size. Our simulations show that the unconditional probabilities of a go/no-go decision, as well as the unconditional success probabilities for phase III, are influenced by the number of events observed in phase II. However, choosing more than 150 events in phase II seems unnecessary, as the impact on these probabilities then becomes quite small. We recommend considering aspects such as the number of compounds in phase II and the resources available when determining the sample size. The fewer the compounds in phase II and the scarcer the resources for phase III, the higher the investment in phase II should be. Copyright © 2015 John Wiley & Sons, Ltd.
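A rough numerical sketch of this kind of boundary search, under purely illustrative assumptions (the true hazard ratios, the 80% thresholds on the conditional decision probabilities, the fixed phase III event count, and the normal approximation Var(log HR) ≈ 4/d are all ours, not the paper's):

```python
# Grid-search a phase II go/no-go boundary c on the estimated log hazard
# ratio, using Var(log HR-hat) ~= 4/d for d events with 1:1 allocation.
import numpy as np
from scipy.stats import norm

theta_alt = np.log(0.70)   # assumed true log HR for an effective drug
theta_null = 0.0           # log HR for an ineffective drug
d3, alpha = 300, 0.025     # illustrative phase III events, one-sided level
z_a = norm.ppf(1 - alpha)

def power_ph3(theta):
    # one-sided log-rank power via the normal approximation
    return norm.cdf(-z_a - theta * np.sqrt(d3) / 2)

best = None
for d2 in range(50, 301, 10):                 # candidate phase II events
    se2 = np.sqrt(4 / d2)
    for c in np.linspace(-0.5, 0.0, 201):     # go if log HR-hat <= c
        p_go_alt = norm.cdf((c - theta_alt) / se2)          # correct go
        p_stop_null = 1 - norm.cdf((c - theta_null) / se2)  # correct no-go
        if p_go_alt >= 0.80 and p_stop_null >= 0.80:
            best = (d2, c, p_go_alt, p_stop_null)
            break
    if best:
        break

d2, c, p_go, p_stop = best
print(f"d2={d2} events, go if log HR-hat <= {c:.3f}: "
      f"P(go|alt)={p_go:.3f}, P(stop|null)={p_stop:.3f}, "
      f"phase III power at alt = {power_ph3(theta_alt):.3f}")
```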

2.
Owing to increased costs and competitive pressure, drug development is becoming more and more challenging. There is therefore a strong need to improve the efficiency of clinical research by developing and applying methods for quantitative decision making. In this context, the integrated planning of phase II/III programs plays an important role, as numerous quantities that are crucial for cost, benefit, and program success can be varied. Recently, a utility-based framework has been proposed for the optimal planning of phase II/III programs that puts the choice of decision boundaries and phase II sample sizes on a quantitative basis. However, this method is restricted to studies with a single time-to-event endpoint. We generalize the procedure to the setting of clinical trials with multiple endpoints and (asymptotically) normally distributed test statistics. Optimal phase II sample sizes and go/no-go decision rules are provided for both the "all-or-none" and "at-least-one" win criteria. Application of the proposed method is illustrated by drug development programs in the fields of Alzheimer disease and oncology.
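To illustrate the two win criteria only (not the paper's utility optimisation), a Monte Carlo sketch for two correlated, asymptotically normal test statistics; the noncentralities and correlation are invented:

```python
# Estimate phase III success probabilities under the "all-or-none" and
# "at-least-one" win criteria for two correlated z-statistics.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(6)
mu = np.array([2.8, 2.2])            # assumed noncentralities of the two z-stats
rho = 0.5                            # assumed correlation between them
cov = np.array([[1, rho], [rho, 1]])
z = rng.multivariate_normal(mu, cov, 200_000)
sig = z > norm.ppf(0.975)            # per-endpoint significance indicators
print("all-or-none success :", sig.all(axis=1).mean())
print("at-least-one success:", sig.any(axis=1).mean())
```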

3.
A late-stage clinical development program typically contains multiple trials. Conventionally, the program's success or failure may not be known until the completion of all trials. Nowadays, interim analyses are often used to allow early evaluation of success and/or futility for each individual study by calculating conditional power, predictive power, and other indices. This presents a good opportunity to estimate the probability of program success (POPS) for the entire clinical development earlier. The sponsor may abandon the program early if the estimated POPS is very low, permitting resource savings and reallocation to other products. We provide a method to calculate the probability of success (POS) at the individual study level, and also the POPS for clinical programs with multiple trials with binary outcomes. Methods for calculating variation and confidence measures of POS and POPS, as well as the timing of interim analyses, are discussed and evaluated through simulations. We also illustrate our approaches retrospectively on historical data from a completed clinical program for depression. Copyright © 2015 John Wiley & Sons, Ltd.
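A posterior-predictive sketch in this spirit, with made-up interim counts and the two trials treated as independent (a shared-treatment-effect model would induce correlation between them):

```python
# POS for each binary-endpoint trial at interim, and a program-level POPS.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)

def success_draws(rt, nt, rc, nc, n_final, n_sims=100_000):
    """Simulate trial completion: rt/nt responders on treatment and rc/nc
    on control at interim, n_final patients per arm at the end; success is
    a pooled one-sided z-test at level 0.025 favoring treatment."""
    pt = rng.beta(0.5 + rt, 0.5 + nt - rt, n_sims)   # Jeffreys posteriors
    pc = rng.beta(0.5 + rc, 0.5 + nc - rc, n_sims)
    xt = rt + rng.binomial(n_final - nt, pt)         # complete the trial
    xc = rc + rng.binomial(n_final - nc, pc)
    pbar = (xt + xc) / (2 * n_final)
    se = np.sqrt(pbar * (1 - pbar) * 2 / n_final)
    z = (xt - xc) / n_final / np.maximum(se, 1e-12)
    return z > norm.ppf(0.975)

s1 = success_draws(28, 60, 18, 60, 150)   # trial 1 interim data (fictive)
s2 = success_draws(25, 50, 17, 50, 150)   # trial 2 interim data (fictive)
print("POS trial 1:", s1.mean())
print("POS trial 2:", s2.mean())
# with independent posteriors POPS is close to the product of the POSs;
# a shared treatment-effect model would correlate the two trials
print("POPS (both succeed):", (s1 & s2).mean())
```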

4.
In drug development, after completion of phase II proof-of-concept trials, the sponsor needs to make a go/no-go decision about starting expensive phase III trials. The probability of statistical success (PoSS) of the phase III trials, based on data from earlier studies, is an important factor in that decision-making process. Instead of statistical power, the predictive power of a phase III trial, which takes into account the uncertainty in the estimation of the treatment effect from earlier studies, has been proposed to evaluate the PoSS of a single trial. However, regulatory authorities generally require statistical significance in two (or more) trials for marketing licensure. We show that the predictive statistics of two future trials are statistically correlated through their use of the common observed data from earlier studies. Thus, the joint predictive power should not be evaluated as a simplistic product of the predictive powers of the individual trials. We develop the relevant formulae for the appropriate evaluation of the joint predictive power and provide numerical examples. Our methodology is further extended to the more complex phase III development scenario comprising more than two (K > 2) trials, that is, the evaluation of the PoSS of at least k0 (1 ≤ k0 ≤ K) trials from a program of K total trials. Copyright © 2013 John Wiley & Sons, Ltd.
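The correlation mechanism is easy to see by Monte Carlo: both trials' success probabilities are averaged over the same posterior, so by Jensen's inequality the joint predictive power exceeds the naive product. A sketch with invented phase II numbers:

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(7)
theta_hat, se2 = 0.30, 0.15    # illustrative phase II estimate and its SE
se3 = 0.10                     # SE of the estimate in each phase III trial
z_a = norm.ppf(0.975)

theta = rng.normal(theta_hat, se2, 200_000)    # posterior draws (flat prior)
pow_given_theta = norm.cdf(theta / se3 - z_a)  # one trial's power given theta

marginal = pow_given_theta.mean()              # predictive power of one trial
joint = (pow_given_theta ** 2).mean()          # trials independent given theta
print(f"marginal predictive power : {marginal:.3f}")
print(f"naive product             : {marginal**2:.3f}")
print(f"joint predictive power    : {joint:.3f}")   # exceeds the naive product
```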

5.
Population pharmacokinetics (POPPK) has many important uses at various stages of drug development and approval. At the phase III stage, one of the major uses of POPPK is to identify covariate influences on human pharmacokinetics, which is important for potential dose adjustment and drug labeling. One common analysis approach is nonlinear mixed-effect modeling, which typically involves a time-consuming, extensive search for best fits among a large number of possible models. We propose that the analysis goal can be better achieved with a more standard confirmatory statistical analysis approach, which uses a prespecified primary analysis and additional sensitivity analyses. We illustrate this approach using a phase III study data set and compare the result with that obtained using the common exploratory approach. We argue that the confirmatory approach not only substantially reduces analysis time but also yields more accurate and interpretable results. Some aspects of this confirmatory approach may also be extended to data analysis in earlier stages of clinical drug development, i.e., phases I and II. Copyright © 2009 John Wiley & Sons, Ltd.

6.
Evidence-based quantitative methodologies have been proposed to inform decision-making in drug development, such as metrics for making go/no-go decisions or predictions of success, identified with the statistical significance of future clinical trials. While these methodologies appropriately address some critical questions about the potential of a drug, they either consider the past evidence without predicting the outcome of the future trials or focus only on efficacy, failing to account for the multifaceted aspects of successful drug development. As quantitative benefit-risk assessments could enhance decision-making, we propose a more comprehensive approach using a composite definition of success based not only on the statistical significance of the treatment effect on the primary endpoint but also on its clinical relevance and on a favorable benefit-risk balance in the next pivotal studies. For one drug, we can thus study several development strategies before starting the pivotal trials by comparing their predictive probabilities of success. The predictions are based on the available evidence from the previous trials, to which new hypotheses on the future development can be added. The resulting predictive probability of composite success provides a useful summary to support the discussions of the decision-makers. We present a fictitious, but realistic, example in major depressive disorder inspired by a real decision-making case.
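A hedged Monte Carlo sketch of such a composite criterion (the effect sizes, margins, and the linear benefit-risk trade-off below are invented, not the paper's model):

```python
# Composite success in the next pivotal trial = statistical significance
# AND an estimated effect above a clinical-relevance margin AND a
# favorable (toy) benefit-risk score.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(3)
M = 200_000

eff_hat, se_prev = 2.5, 0.9     # efficacy estimate (points) from earlier trials
harm_hat, se_harm = 0.04, 0.02  # excess adverse-event risk estimate
se_next, margin = 0.8, 1.5      # next trial's SE, clinical-relevance margin
lam = 20.0                      # points of efficacy traded per unit of harm

eff = rng.normal(eff_hat, se_prev, M)     # posterior draws, efficacy
harm = rng.normal(harm_hat, se_harm, M)   # posterior draws, harm
est = rng.normal(eff, se_next)            # predicted next-trial estimate

significant = est / se_next > norm.ppf(0.975)
relevant = est > margin
favorable = eff - lam * harm > 0          # invented benefit-risk rule

print("P(significance)     :", significant.mean())
print("P(composite success):", (significant & relevant & favorable).mean())
```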

7.
Subgroup-by-treatment interaction assessments are routinely performed when analysing clinical trials and are particularly important for phase 3 trials, where the results may affect regulatory labelling. Interpretation of such interactions is particularly difficult: on the one hand, a subgroup finding can be due to chance, but equally, such analyses are known to have a low chance of detecting differential treatment effects across subgroup levels, and so may overlook important differences in therapeutic efficacy. EMA have therefore issued draft guidance on the use of subgroup analyses in this setting. Although this guidance provides clear proposals on the importance of pre-specifying likely subgroup effects and on how to use this when interpreting trial results, it is less clear about which analysis methods would be reasonable and how to interpret apparent subgroup effects in terms of whether further evaluation or action is necessary. A PSI/EFSPI Working Group has therefore been investigating a focused set of analysis approaches for assessing treatment effect heterogeneity across subgroups in confirmatory clinical trials that take account of the number of subgroups explored, and also investigating the ability of each method to detect such subgroup heterogeneity. This evaluation has shown that the plotting of standardised effects, the bias-adjusted bootstrapping method, and the SIDES method all perform more favourably than traditional approaches such as investigating all subgroup-by-treatment interactions individually or applying a global test of interaction. Therefore, these approaches should be considered to aid interpretation and provide context for observed results from subgroup analyses conducted for phase 3 clinical trials.

8.
Bayesian predictive power, the expectation of the power function with respect to a prior distribution for the true underlying effect size, is routinely used in drug development to quantify the probability of success of a clinical trial. Choosing the prior is crucial for the properties and interpretability of Bayesian predictive power. We review recommendations on the choice of prior for Bayesian predictive power and explore its features as a function of the prior. The density of power values induced by a given prior is derived analytically and its shape characterized. We find that for a typical clinical trial scenario, this density has a u-shape very similar, but not identical, to that of a beta distribution. Alternative priors are discussed, and practical recommendations for assessing the sensitivity of Bayesian predictive power to its input parameters are provided. Copyright © 2016 John Wiley & Sons, Ltd.
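For a normal prior on the effect and a normal test statistic, Bayesian predictive power has a well-known closed form, and simulation makes the induced u-shaped distribution of power values visible. A sketch with illustrative inputs:

```python
import numpy as np
from scipy.stats import norm

mu, tau = 0.25, 0.20   # prior mean and SD of the true effect (illustrative)
se = 0.10              # standard error of the planned trial's estimate
z_a = norm.ppf(0.975)

# marginally, z-hat ~ N(mu/se, 1 + tau^2/se^2), hence the closed form
bpp = norm.cdf((mu / se - z_a) / np.sqrt(1 + tau**2 / se**2))

rng = np.random.default_rng(11)
theta = rng.normal(mu, tau, 500_000)         # draws from the prior
power_vals = norm.cdf(theta / se - z_a)      # power at each drawn effect
print(f"closed form BPP = {bpp:.4f}, Monte Carlo = {power_vals.mean():.4f}")
hist, _ = np.histogram(power_vals, bins=10, range=(0, 1))
print("mass per power decile:", np.round(hist / len(power_vals), 3))
# most mass sits in the outer deciles: the u-shape noted in the abstract
```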

9.
The application of discrete fuzzy probabilities to alternative selection
To address the uncertainty in the occurrence probabilities of the states of nature faced by decision-makers when selecting among alternatives, the problem of alternative selection under discrete fuzzy state probabilities is proposed and discussed. The three-point estimates of each state's occurrence probability are represented as triangular fuzzy numbers, yielding a discrete fuzzy probability distribution; a restricted fuzzy algorithm is used to determine the characteristic values of this distribution, and, given the decision-maker's decision criterion, alternatives are selected using the characteristic values of the evaluation indices. The advantages of this approach are also analysed.
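A minimal sketch of the flavour of this approach, replacing the paper's restricted fuzzy algorithm with simple centroid defuzzification of triangular fuzzy numbers; the probabilities and payoff table are invented:

```python
# Represent each state probability as a triangular fuzzy number
# (low, mode, high), defuzzify by the centroid (l+m+h)/3, renormalize,
# and rank alternatives by expected payoff.
import numpy as np

# three states of nature, fuzzy probabilities as (low, mode, high)
fuzzy_p = np.array([[0.2, 0.3, 0.4],
                    [0.3, 0.4, 0.5],
                    [0.2, 0.3, 0.4]])
p = fuzzy_p.mean(axis=1)        # centroid of a triangular fuzzy number
p /= p.sum()                    # renormalize to a proper distribution

# payoff of each alternative (rows) under each state (columns)
payoff = np.array([[60.0, 40.0, 10.0],
                   [45.0, 45.0, 30.0],
                   [30.0, 35.0, 50.0]])
expected = payoff @ p
print("defuzzified probabilities:", np.round(p, 3))
print("expected payoffs:", np.round(expected, 2))
print("choose alternative", int(np.argmax(expected)) + 1)
```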

10.
It is often critical to accurately model the upper tail behaviour of a random process. Nonparametric density estimation methods are commonly implemented as exploratory data analysis techniques for this purpose and can avoid the model specification biases implied by using parametric estimators. In particular, kernel-based estimators place minimal assumptions on the data and provide improved visualisation over scatterplots and histograms. However, kernel density estimators can perform poorly when estimating tail behaviour above a threshold, and can over-emphasise bumps in the density for heavy-tailed data. We develop a transformation kernel density estimator which is able to handle heavy-tailed and bounded data, and is robust to threshold choice. We derive closed-form expressions for its asymptotic bias and variance, which demonstrate its good performance in the tail region. Finite-sample performance is illustrated in numerical studies and in an expanded analysis of the performance of global climate models.
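A bare-bones transformation-KDE sketch (a fixed log transform rather than the estimator developed in the paper), comparing an ordinary kernel estimator with the transformed one in the right tail of simulated log-normal data:

```python
# Estimate the density of positive, heavy-tailed data by applying a
# Gaussian KDE on the log scale and mapping back with the Jacobian 1/x.
import numpy as np
from scipy.stats import gaussian_kde, lognorm

rng = np.random.default_rng(5)
x = lognorm.rvs(s=1.0, size=2000, random_state=rng)   # heavy right tail

kde_raw = gaussian_kde(x)             # ordinary KDE on the raw scale
kde_log = gaussian_kde(np.log(x))     # KDE on the transformed scale

grid = np.linspace(0.05, 20, 400)
f_raw = kde_raw(grid)
f_trans = kde_log(np.log(grid)) / grid   # f_X(x) = f_Y(log x) / x

true = lognorm.pdf(grid, s=1.0)
tail = grid > 5
print("tail MAE, raw KDE        :", np.abs(f_raw - true)[tail].mean())
print("tail MAE, transformed KDE:", np.abs(f_trans - true)[tail].mean())
```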

11.
The performance of computationally inexpensive model selection criteria in the context of tree-structured subgroup analysis is investigated. It is shown through simulation that no single model selection criterion exhibits uniformly superior performance over a wide range of scenarios. Therefore, a two-stage approach to model selection is proposed and shown to perform satisfactorily. An applied example of subgroup analysis is presented. Problems associated with tree-structured subgroup analysis are discussed and practical solutions are suggested.

12.
Student dropout is a worldwide problem, leading private and public universities in developed and underdeveloped countries to study the subject carefully or, as has recently been done, to analyse what drives student success. Different approaches are used to obtain information useful for decision-making. We propose a model that considers the time from enrolment to the dropout or graduation date, together with covariates, to measure student success rates, to identify the academic and non-academic factors involved, and to establish how they drive student success. Our proposal assumes that one part of the population is not at risk of dropping out and that the part of the population at risk is heterogeneous; that is, we assume two types of heterogeneity. We highlight two advantages of our model: first, by considering the academic survival time, it identifies the period of highest dropout risk; second, the inclusion of covariates enables us to identify the characteristics linked to dropout. We also demonstrate the identifiability of the model and describe the estimation procedures. To exemplify the applicability of the approach, we use two real datasets.

13.
We consider the estimation of a regression coefficient in a linear regression when observations are missing due to nonresponse. Response is assumed to be determined by a nonobservable variable which is linearly related to an observable variable. The values of the observable variable are assumed to be available for the whole sample, but the variable is not included in the regression relationship of interest. Several alternative estimators have been proposed for this situation under various simplifying assumptions. A sampling-theory approach provides three alternative estimators by considering the observations as obtained from a sub-sample selected on the basis of the fully observable variable, as formulated by Nathan and Holt (1980). Under an econometric approach, Heckman (1979) proposed a two-stage (probit and OLS) estimator which is consistent under specific conditions. A simulation comparison of the four estimators and the ordinary least squares estimator, under multivariate normality of all the variables involved, indicates that the econometric estimator is not robust to departures from the conditions underlying its derivation, while two of the other estimators exhibit similarly stable performance over a wide range of conditions. Simulations for a non-normal distribution show that gains in performance can be obtained if observations on the independent variable are available for the whole population.
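A simulation sketch of Heckman's (1979) two-step estimator on generated data (all parameter values are invented): a probit of response on the observable variables, then OLS augmented with the inverse Mills ratio, versus naive OLS on the responders only:

```python
import numpy as np
import statsmodels.api as sm
from scipy.stats import norm

rng = np.random.default_rng(2)
n = 5000
x = rng.normal(size=n)                       # regressor of interest
z = rng.normal(size=n)                       # fully observable selection variable
u, e = rng.multivariate_normal([0, 0], [[1.0, 0.6], [0.6, 1.0]], n).T
respond = 0.5 + 1.0 * z - 0.8 * x + u > 0    # nonresponse mechanism
y = 1.0 + 2.0 * x + e                        # true slope on x is 2

# naive OLS on responders is biased because selection depends on x and
# the errors u and e are correlated
naive = sm.OLS(y[respond], sm.add_constant(x[respond])).fit()

# step 1: probit of response on (x, z); step 2: OLS with inverse Mills ratio
W = sm.add_constant(np.column_stack([x, z]))
probit = sm.Probit(respond.astype(int), W).fit(disp=0)
xb = (W @ probit.params)[respond]
mills = norm.pdf(xb) / norm.cdf(xb)
X2 = sm.add_constant(np.column_stack([x[respond], mills]))
heckman = sm.OLS(y[respond], X2).fit()

print("naive slope  :", round(naive.params[1], 3))     # biased away from 2
print("Heckman slope:", round(heckman.params[1], 3))   # close to 2
```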

14.
In this commentary, we show that the treatment selection probabilities in Morita and Sakamoto [1] could be different if safety information is considered. Copyright © 2012 John Wiley & Sons, Ltd.

15.
Although the statistical methods enabling efficient adaptive seamless designs are increasingly well established, it is important to continue to use the endpoints and specifications that best suit the therapy area and stage of development concerned when conducting such a trial. Approaches exist that allow adaptive designs to continue seamlessly either in a subpopulation of patients or in the whole population on the basis of data obtained from the first stage of a phase II/III design: our proposed design adds extra flexibility by also allowing the trial to continue in all patients but with both the subgroup and the full population as co-primary populations. Further, methodology is presented which controls the Type-I error rate at less than 2.5% when the phase II and III endpoints are different but correlated time-to-event endpoints. The operating characteristics of the design are described along with a discussion of the practical aspects in an oncology setting.

16.
Assessing dose response from flexible-dose clinical trials is problematic. The true dose effect may be obscured, and even reversed, in observed data, because dose is related to both previous and subsequent outcomes. To remove this selection bias, we propose a marginal structural model (MSM) with inverse probability of treatment weighting (IPTW). Potential clinical outcomes are compared across dose groups using an MSM based on a weighted pooled repeated-measures analysis (generalized estimating equations with robust estimates of standard errors), with the dose effect represented by current dose and recent dose history, and with weights estimated from the data (via logistic regression) and determined as products of (i) the inverse probability of receiving the dose assignments that were actually received and (ii) the inverse probability of remaining on treatment by this time. In simulations, this method led to almost unbiased estimates of the true dose effect under various scenarios. Results were compared with those obtained by unweighted analyses and by weighted analyses under various model specifications. The simulations showed that the IPTW MSM methodology is highly sensitive to model misspecification, even when the weights are known. Practitioners applying MSMs should be cautious about the challenges of implementing them with real clinical data. Clinical trial data are used to illustrate the methodology. Copyright © 2012 John Wiley & Sons, Ltd.
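A schematic of the weight construction only, on invented longitudinal data with simplified models (the paper's analysis then feeds such weights into a weighted GEE of outcome on current and recent dose):

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(4)
rows = []
for i in range(500):                       # 500 subjects, up to 4 visits
    prev_y = rng.normal()
    for t in range(4):
        # worse previous outcome (lower y) makes up-titration more likely
        dose_up = rng.binomial(1, 1 / (1 + np.exp(prev_y)))
        stay = rng.binomial(1, 0.9)        # 10% chance of dropout per visit
        y = rng.normal(-0.3 * dose_up + 0.5 * prev_y)
        rows.append((i, t, prev_y, dose_up, stay, y))
        if not stay:
            break
        prev_y = y
df = pd.DataFrame(rows, columns=["id", "visit", "prev_y", "dose_up", "stay", "y"])

X = df[["prev_y", "visit"]].to_numpy()
m_dose = LogisticRegression().fit(X, df["dose_up"])   # (i) dose given history
m_stay = LogisticRegression().fit(X, df["stay"])      # (ii) remaining on treatment

p_dose = np.where(df["dose_up"] == 1,
                  m_dose.predict_proba(X)[:, 1],      # P(dose actually received)
                  m_dose.predict_proba(X)[:, 0])
p_stay = m_stay.predict_proba(X)[:, 1]
df["w"] = 1.0 / (p_dose * p_stay)                     # per-visit inverse probability
df["iptw"] = df.groupby("id")["w"].cumprod()          # cumulative subject weight
print(df[["id", "visit", "dose_up", "iptw"]].head(8))
```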

17.
How do we communicate nuanced regulatory information to different audiences, recognizing that the consumer audience is very different from the physician audience? In particular, how do we communicate the heterogeneity of treatment effects, that is, the potential differences in treatment effects based on sex, race, and age? That is the fundamental question at the heart of this panel discussion. Each panelist addressed a specific "challenge question" during their five-minute presentation, and the list of questions is provided. The presentations were followed by a question-and-answer session with members of the audience and the panelists.

18.
An increasing number of contemporary datasets are high dimensional. Applications require these datasets to be screened (or filtered) to select a subset for further study. Multiple testing is the standard tool in such applications, although alternatives have begun to be explored. In order to assess the quality of selection in these high-dimensional contexts, Cui and Wilson (2008b, Biometrical Journal 50(5): 870–883) proposed two viable methods of calculating the probability that any such selection is correct (PCS). PCS thereby serves as a measure of the quality of competing statistics used for selection. The first simulation study in this article investigates the two PCS statistics of that article; it shows that in the high-dimensional case PCS can be accurately estimated and is robust under certain conditions. The second simulation study investigates a nonparametric estimator of PCS.
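A Monte Carlo sketch of the PCS idea itself, cruder than the estimators studied in the article (the population means, variance, and selection size are illustrative):

```python
# With k populations and the d largest means deemed "correct", select the
# d populations with the largest sample means and record how often the
# selection is exactly right.
import numpy as np

rng = np.random.default_rng(8)
k, d, n, reps = 100, 10, 5, 20_000
means = np.zeros(k)
means[:d] = 1.0                      # the true top-d populations
sigma = 1.0

correct = 0
for _ in range(reps):
    xbar = rng.normal(means, sigma / np.sqrt(n))   # sample means, n obs each
    picked = np.argsort(xbar)[-d:]                 # select the d largest
    correct += set(picked) == set(range(d))
print("estimated PCS:", correct / reps)
```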

19.
To design a phase III study with a final endpoint and calculate the sample size required for the desired probability of success, we need a good estimate of the treatment effect on that endpoint. It is prudent to fully utilize all available information, including the historical and phase II information on the treatment as well as external data on other treatments. It is not uncommon for a phase II study to use a surrogate endpoint as the primary endpoint and to have no, or only limited, data on the final endpoint. On the other hand, external information from other studies of other treatments on the surrogate and final endpoints may be available to establish a relationship between the treatment effects on the two endpoints. Through this relationship, making full use of the surrogate information may enhance the estimate of the treatment effect on the final endpoint. In this research, we propose a bivariate Bayesian analysis approach to deal with the problem comprehensively. A dynamic borrowing approach is considered to regulate the amount of historical data and surrogate information borrowed, based on the level of consistency. A much simpler frequentist method is also discussed. Simulations are conducted to compare the performance of the different approaches. An example is used to illustrate the application of the methods.
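One ingredient of such borrowing can be sketched crudely: regress the final-endpoint effect on the surrogate-endpoint effect across external studies, then map the new drug's phase II surrogate estimate through that relation. All numbers are invented, and the paper's bivariate Bayesian model is considerably richer:

```python
import numpy as np

# external studies: (surrogate effect, final effect) per treatment
ext = np.array([[0.10, 0.05], [0.25, 0.15], [0.40, 0.22], [0.55, 0.35]])
s, f = ext[:, 0], ext[:, 1]
b, a = np.polyfit(s, f, 1)                  # final ~ a + b * surrogate
resid_sd = np.std(f - (a + b * s), ddof=2)  # scatter around the relation

s_new, se_new = 0.30, 0.08                  # new drug's phase II surrogate estimate
f_pred = a + b * s_new
# crude SE: propagate the surrogate SE and the relation's residual scatter
# (ignores uncertainty in a and b, which a full Bayesian model would carry)
se_pred = np.sqrt((b * se_new) ** 2 + resid_sd ** 2)
print(f"predicted final-endpoint effect: {f_pred:.3f} (SE ~ {se_pred:.3f})")
```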

20.
In a seamless phase II/III/IIIb trial, K (K ≥ 2) doses versus placebo control are evaluated at phase II. Based on the phase II results, one dose will be selected for phases III and IIIb. Pre-specified additional numbers of patients will be enrolled into the selected dose and placebo control during phases III and IIIb. Results for the phase III endpoint may be submitted for an early New Drug Application. Final analyses will be conducted for the ultimate claims of treatment effects for the selected dose on the phase III and IIIb endpoints. Multiplicity adjustment is performed to control the overall type I error rate.
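A toy simulation of why multiplicity adjustment is needed in such select-the-winner designs (illustrative settings, not the paper's adjustment): under the global null, choosing the apparently best dose at phase II and pooling its data into the final test inflates the type I error:

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(9)
K, n2, n3, reps, sd = 3, 100, 300, 20_000, 1.0
crit = norm.ppf(0.975)

rejections = 0
for _ in range(reps):
    m_d2 = rng.normal(0, sd / np.sqrt(n2), K)   # stage 1 dose-arm means (null)
    m_p2 = rng.normal(0, sd / np.sqrt(n2))      # stage 1 placebo mean
    sel = np.argmax(m_d2)                       # pick the apparently best dose
    m_d3 = rng.normal(0, sd / np.sqrt(n3))      # stage 2, selected dose
    m_p3 = rng.normal(0, sd / np.sqrt(n3))      # stage 2, placebo
    md = (n2 * m_d2[sel] + n3 * m_d3) / (n2 + n3)   # pooled means
    mp = (n2 * m_p2 + n3 * m_p3) / (n2 + n3)
    z = (md - mp) / (sd * np.sqrt(2 / (n2 + n3)))
    rejections += z > crit
print("unadjusted type I error:", rejections / reps)   # noticeably > 0.025
```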
