Similar Literature
20 similar documents retrieved.
1.
The development of a new drug is a major undertaking and it is important to consider carefully the key decisions in the development process. Decisions are made in the presence of uncertainty and outcomes such as the probability of successful drug registration depend on the clinical development programme. The Rheumatoid Arthritis Drug Development Model was developed to support key decisions for drugs in development for the treatment of rheumatoid arthritis. It is configured to simulate Phase 2b and 3 trials based on the efficacy of new drugs at the end of Phase 2a, evidence about the efficacy of existing treatments, and expert opinion regarding key safety criteria. The model evaluates the performance of different development programmes with respect to the duration of disease of the target population, Phase 2b and 3 sample sizes, the dose(s) of the experimental treatment, the choice of comparator, the duration of the Phase 2b clinical trial, the primary efficacy outcome and decision criteria for successfully passing Phases 2b and 3. It uses Bayesian clinical trial simulation to calculate the probability of successful drug registration based on the uncertainty about parameters of interest, thereby providing a more realistic assessment of the likely outcomes of individual trials and sequences of trials for the purpose of decision making. In this case study, the results show that, depending on the trial design, the new treatment has assurances of successful drug registration in the range 0.044–0.142 for an ACR20 outcome and 0.057–0.213 for an ACR50 outcome. Copyright © 2009 John Wiley & Sons, Ltd.
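The assurance figures quoted above come from averaging trial success over prior uncertainty about the treatment effect. Below is a minimal sketch of that idea for a binary ACR20-style responder endpoint; the Beta priors, sample size and one-sided test are illustrative assumptions, not the Rheumatoid Arthritis Drug Development Model itself.

```python
# Sketch: assurance (probability of trial success) via Bayesian clinical
# trial simulation, under assumed priors and an assumed Phase 3 design.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

def assurance(n_per_arm=200, n_sims=10_000, alpha=0.025):
    """Probability of Phase 3 success averaged over prior uncertainty."""
    successes = 0
    for _ in range(n_sims):
        # Draw "true" ACR20 responder rates: comparator from historical data,
        # new drug from a prior summarising Phase 2a evidence (assumed here).
        p_ctrl = rng.beta(30, 70)   # ~0.30 responder rate
        p_trt = rng.beta(45, 55)    # ~0.45, still uncertain
        # Simulate one Phase 3 trial under those true rates.
        x_ctrl = rng.binomial(n_per_arm, p_ctrl)
        x_trt = rng.binomial(n_per_arm, p_trt)
        # One-sided two-proportion z-test as the registration hurdle.
        pooled = (x_trt + x_ctrl) / (2 * n_per_arm)
        se = np.sqrt(2 * pooled * (1 - pooled) / n_per_arm)
        if se > 0:
            z = (x_trt - x_ctrl) / n_per_arm / se
            successes += z > stats.norm.ppf(1 - alpha)
    return successes / n_sims

print(f"Simulated assurance for this illustrative design: {assurance():.3f}")
```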

2.
Modelling and simulation (M&S) is increasingly being applied in (clinical) drug development. It provides an opportune area for the community of pharmaceutical statisticians to pursue. In this article, we highlight useful principles behind the application of M&S. We claim that M&S should be focussed on decisions, tailored to its purpose and based in applied sciences, not relying entirely on data-driven statistical analysis. Further, M&S should be a continuous process making use of diverse information sources and applying Bayesian and frequentist methodology, as appropriate. In addition to forming a basis for analysing decision options, M&S provides a framework that can facilitate communication between stakeholders. Besides the discussion on modelling philosophy, we also describe how standard simulation practice can be ineffective and how simulation efficiency can often be greatly improved.

3.
The option to stop a project is fundamental in drug development. The majority of drugs do not reach the market. Furthermore, many marketed drugs do not repay their development costs. It is therefore crucial to optimize the value of the option to stop. We formulate two examples of statistical models. One is based on success/failure in a series of trials; the other assumes that the commercial value evolves as a stochastic process as more information becomes available. These models are used to study a number of issues: the number and timing of decision points; value of information; speed of development; and order of trials. The results quantify the value of options. They show that early information that can change key decisions is most valuable. That is, we should nip bad projects in the bud. Modelling is also useful to analyse more complex decisions, for example, weighing the value of decision points against the cost of information or the speed of development. Copyright © 2003 John Wiley & Sons, Ltd.

4.
Evidence-based quantitative methodologies have been proposed to inform decision-making in drug development, such as metrics to make go/no-go decisions or predictions of success, identified with statistical significance of future clinical trials. While these methodologies appropriately address some critical questions on the potential of a drug, they either consider the past evidence without predicting the outcome of the future trials or focus only on efficacy, failing to account for the multifaceted aspects of a successful drug development. As quantitative benefit-risk assessments could enhance decision-making, we propose a more comprehensive approach using a composite definition of success based not only on the statistical significance of the treatment effect on the primary endpoint but also on its clinical relevance and on a favorable benefit-risk balance in the next pivotal studies. For one drug, we can thus study several development strategies before starting the pivotal trials by comparing their predictive probability of success. The predictions are based on the available evidence from the previous trials, to which new hypotheses on the future development could be added. The resulting predictive probability of composite success provides a useful summary to support the discussions of the decision-makers. We present a fictive, but realistic, example in major depressive disorder inspired by a real decision-making case.
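A sketch of what a predictive probability of "composite success" could look like for a single future pivotal study: a true effect is drawn from the Phase 2 posterior, one trial is simulated, and success requires statistical significance, a clinically relevant estimate and an acceptable safety contrast. All summaries, margins and adverse-event rates below are invented; the paper's actual composite definition and depression example are not reproduced here.

```python
# Sketch: predictive probability of composite success for one future pivotal
# trial, under assumed Phase 2 posterior summaries and safety rates.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)

# Posterior for the true treatment effect from the completed Phase 2 trial,
# on a normal approximation (assumed values).
post_mean, post_sd = 2.5, 1.2
# Assumed true adverse-event rates on drug and control.
ae_trt, ae_ctrl = 0.18, 0.12

def composite_pos(n_per_arm=150, sigma=8.0, margin=1.5, ae_margin=0.10,
                  n_sims=20_000, alpha=0.025):
    hits = 0
    for _ in range(n_sims):
        delta = rng.normal(post_mean, post_sd)       # plausible true effect
        se = sigma * np.sqrt(2 / n_per_arm)
        est = rng.normal(delta, se)                  # simulated pivotal estimate
        significant = est / se > stats.norm.ppf(1 - alpha)  # (a) significance
        relevant = est > margin                      # (b) clinical relevance
        # (c) crude benefit-risk gate: observed excess AE rate below a cap.
        ae_diff = (rng.binomial(n_per_arm, ae_trt) -
                   rng.binomial(n_per_arm, ae_ctrl)) / n_per_arm
        hits += significant and relevant and (ae_diff < ae_margin)
    return hits / n_sims

print(f"Predictive probability of composite success: {composite_pos():.3f}")
```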

5.
ABSTRACT

The cost and time of pharmaceutical drug development continue to grow at rates that many say are unsustainable. These trends have enormous impact on what treatments get to patients, when they get them and how they are used. The statistical framework for supporting decisions in regulated clinical development of new medicines has followed a traditional path of frequentist methodology. Trials using hypothesis tests of “no treatment effect” are done routinely, and the p-value < 0.05 is often the determinant of what constitutes a “successful” trial. Many drugs fail in clinical development, adding to the cost of new medicines, and some evidence points blame at the deficiencies of the frequentist paradigm. An unknown number of effective medicines may have been abandoned because trials were declared “unsuccessful” due to a p-value exceeding 0.05. Recently, the Bayesian paradigm has shown utility in the clinical drug development process for its probability-based inference. We argue for a Bayesian approach that employs data from other trials as a “prior” for Phase 3 trials so that synthesized evidence across trials can be utilized to compute probability statements that are valuable for understanding the magnitude of treatment effect. Such a Bayesian paradigm provides a promising framework for improving statistical inference and regulatory decision making.
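The kind of probability statement advocated above can be sketched with a conjugate normal update: evidence synthesized from earlier trials enters as a (possibly discounted) prior for the Phase 3 treatment effect, and the posterior yields direct probabilities about its magnitude. All numbers and the discount factor below are illustrative assumptions.

```python
# Sketch: combining a prior built from earlier trials with Phase 3 data
# via a conjugate normal-normal update, then reading off probabilities.
import numpy as np
from scipy import stats

# Prior from synthesised earlier trials: effect 3.0 (SE 1.5), down-weighted
# by an assumed discount factor to reflect between-trial differences.
prior_mean, prior_se, discount = 3.0, 1.5, 0.5
prior_var = prior_se**2 / discount

# Phase 3 result: estimated effect and its standard error (assumed).
lik_mean, lik_var = 2.2, 1.0**2

# Conjugate normal-normal update.
post_var = 1 / (1 / prior_var + 1 / lik_var)
post_mean = post_var * (prior_mean / prior_var + lik_mean / lik_var)
post_sd = np.sqrt(post_var)

print(f"Posterior mean effect: {post_mean:.2f} (SD {post_sd:.2f})")
print(f"P(effect > 0): {1 - stats.norm.cdf(0, post_mean, post_sd):.3f}")
print(f"P(effect > 2): {1 - stats.norm.cdf(2, post_mean, post_sd):.3f}")
```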

6.
Model-informed drug discovery and development offers the promise of more efficient clinical development, with increased productivity and reduced cost through scientific decision making and risk management. Go/no-go development decisions in the pharmaceutical industry are often driven by effect size estimates, with the goal of meeting commercially generated target profiles. Sufficient efficacy is critical for eventual success, but the decision to advance development phase is also dependent on adequate knowledge of appropriate dose and dose-response. Doses which are too high or low pose risk of clinical or commercial failure. This paper addresses this issue and continues the evolution of formal decision frameworks in drug development. Here, we consider the integration of both efficacy and dose-response estimation accuracy into the go/no-go decision process, using a model-based approach. Using prespecified target and lower reference values associated with both efficacy and dose accuracy, we build a decision framework to more completely characterize development risk. Given the limited knowledge of dose response in early development, our approach incorporates a set of dose-response models and uses model averaging. The approach and its operating characteristics are illustrated through simulation. Finally, we demonstrate the decision approach on a post hoc analysis of the phase 2 data for naloxegol (a drug approved for opioid-induced constipation).

7.
Summary.  The instigation of mass screening for breast cancer has, over the last three decades, raised various statistical issues and led to the development of new statistical approaches. Initially, the design of screening trials was the main focus of research but, as the evidence in favour of population-based screening programmes mounts, a variety of other applications have also been identified. These include administrative and quality control tasks, for monitoring routine screening services, as well as epidemiological modelling of incidence and mortality. We review the commonly used methods of cancer screening evaluation, highlight some current issues in breast screening and, using examples from randomized trials and established screening programmes, illustrate the role that statistical science has played in the development of clinical research in this field.

8.
In drug development, after completion of phase II proof-of-concept trials, the sponsor needs to make a go/no-go decision to start expensive phase III trials. The probability of statistical success (PoSS) of the phase III trials based on data from earlier studies is an important factor in that decision-making process. Instead of statistical power, the predictive power of a phase III trial, which takes into account the uncertainty in the estimation of treatment effect from earlier studies, has been proposed to evaluate the PoSS of a single trial. However, regulatory authorities generally require statistical significance in two (or more) trials for marketing licensure. We show that the predictive statistics of two future trials are statistically correlated through use of the common observed data from earlier studies. Thus, the joint predictive power should not be evaluated as a simplistic product of the predictive powers of the individual trials. We develop the relevant formulae for the appropriate evaluation of the joint predictive power and provide numerical examples. Our methodology is further extended to the more complex phase III development scenario comprising more than two (K > 2) trials, that is, the evaluation of the PoSS of at least k0 (k0 ≤ K) trials from a program of K total trials. Copyright © 2013 John Wiley & Sons, Ltd.
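The correlation argument can be illustrated numerically: conditional on the true effect, the two Phase 3 trials are independent, but averaging over the common posterior from the earlier studies induces positive correlation between their predictive statistics, so the naive product of single-trial predictive powers understates the joint probability. The sketch below uses assumed posterior summaries and trial sizes, not the paper's formulae.

```python
# Sketch: joint predictive power of two Phase 3 trials versus the naive
# product of individual predictive powers, under an assumed posterior.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)

# Posterior (from the earlier studies) for the true standardised effect size.
post_mean, post_sd = 0.30, 0.12
n_per_arm = 150                    # per-arm size of each future Phase 3 trial
z_crit = stats.norm.ppf(0.975)     # one-sided 2.5% significance threshold

def power(delta, n):
    """Frequentist power of one two-arm trial given the true effect delta."""
    return 1 - stats.norm.cdf(z_crit - delta * np.sqrt(n / 2))

deltas = rng.normal(post_mean, post_sd, size=200_000)  # posterior draws
pw = power(deltas, n_per_arm)      # power of each trial, per posterior draw

single = pw.mean()                 # predictive power of one trial
joint = np.mean(pw * pw)           # both succeed (independent given delta)
naive = single**2                  # simplistic product of predictive powers

print(f"Predictive power of a single trial: {single:.3f}")
print(f"Joint predictive power (correct)  : {joint:.3f}")
print(f"Naive product (understates joint) : {naive:.3f}")
```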

9.
Since the publication of the International Conference on Harmonization E5 guideline, new drug approvals in Japan based on the bridging strategy have been increasing. To further streamline and expedite new drug development in Japan, the Ministry of Health, Labour and Welfare, the Japanese regulatory authority, recently issued the 'Basic Principles on Global Clinical Trials' guidance to promote Japan's participation in multi-regional trials. The guidance, in a Q&A format, provides two methods as examples for recommending the number of Japanese patients in a multi-regional trial. Method 1 in the guidance is the focus of this paper. We derive formulas for the sample size calculations for normal, binary and survival endpoints. Computations and simulation results are provided to compare different approaches. Trial examples are used to illustrate the applications of the approaches. Copyright © 2009 John Wiley & Sons, Ltd.
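Method 1 of the guidance is commonly described as requiring that the Japanese subgroup preserve at least a fraction pi (often 0.5) of the overall treatment effect with probability of at least 80%. Under that reading, and for a normal endpoint with a common true effect across regions, the required Japanese sample size can be found by simulation as sketched below; the effect size, standard deviation, overall sample size and search grid are illustrative assumptions rather than the paper's derived formulas.

```python
# Sketch: simulation-based search for the Japanese per-arm sample size that
# preserves a fraction pi of the overall effect with 80% probability.
import numpy as np

rng = np.random.default_rng(4)

delta, sigma = 3.0, 10.0      # common true effect and SD in all regions (assumed)
N = 500                       # overall per-arm sample size of the trial
pi, target = 0.5, 0.80        # preserve >= 50% of the overall effect w.p. 80%
n_sims = 100_000

def preservation_prob(n_j):
    """P(D_J >= pi * D_all) with n_j Japanese patients per arm."""
    n_nj = N - n_j
    d_j = rng.normal(delta, sigma * np.sqrt(2 / n_j), n_sims)
    d_nj = rng.normal(delta, sigma * np.sqrt(2 / n_nj), n_sims)
    d_all = (n_j * d_j + n_nj * d_nj) / N     # overall estimate, same draws
    return np.mean(d_j >= pi * d_all)

for n_j in range(20, N, 20):
    if preservation_prob(n_j) >= target:
        print(f"Smallest Japanese per-arm size on this grid: {n_j}")
        break
```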

10.
Owing to increased costs and competition pressure, drug development is becoming more and more challenging. Therefore, there is a strong need to improve the efficiency of clinical research by developing and applying methods for quantitative decision making. In this context, the integrated planning of phase II/III programs plays an important role, as numerous quantities that are crucial for cost, benefit, and program success can be varied. Recently, a utility-based framework has been proposed for the optimal planning of phase II/III programs that puts the choice of decision boundaries and phase II sample sizes on a quantitative basis. However, this method is restricted to studies with a single time-to-event endpoint. We generalize this procedure to the setting of clinical trials with multiple endpoints and (asymptotically) normally distributed test statistics. Optimal phase II sample sizes and go/no-go decision rules are provided for both the "all-or-none" and "at-least-one" win criteria. Application of the proposed method is illustrated by drug development programs in the fields of Alzheimer disease and oncology.

11.
The study of HIV dynamics is one of the most important developments in recent AIDS research. It has led to a new understanding of the pathogenesis of HIV infection. Although important findings in HIV dynamics have been published in prestigious scientific journals, the statistical methods for parameter estimation and model-fitting used in those papers appear surprisingly crude and have not been studied in detail. For example, the unidentifiable parameters were simply imputed by mean estimates from previous studies, and important pharmacological/clinical factors were not considered in the modelling. In this paper, a viral dynamic model is developed to evaluate the effect of pharmacokinetic variation, drug resistance and adherence on antiviral responses. In the context of this model, we investigate a Bayesian modelling approach under a non-linear mixed-effects (NLME) model framework. In particular, our modelling strategy allows us to estimate the time-varying antiviral efficacy of a regimen during the whole course of a treatment period by incorporating information on drug exposure and drug susceptibility. Both simulated and real clinical data examples are given to illustrate the proposed approach. The Bayesian approach has great potential to be used in many aspects of viral dynamics modelling since it allows us to fit complex dynamic models and identify all the model parameters. Our results suggest that the Bayesian approach to estimating parameters in HIV dynamic models is flexible and powerful.

12.
In some randomized (drug versus placebo) clinical trials, the estimand of interest is the between-treatment difference in population means of a clinical endpoint that is free from the confounding effects of "rescue" medication (e.g., HbA1c change from baseline at 24 weeks that would be observed without rescue medication regardless of whether or when the assigned treatment was discontinued). In such settings, a missing data problem arises if some patients prematurely discontinue from the trial or initiate rescue medication while in the trial, the latter necessitating the discarding of post-rescue data. We caution that the commonly used mixed-effects model repeated measures analysis with the embedded missing at random assumption can deliver an exaggerated estimate of the aforementioned estimand of interest. This happens, in part, due to implicit imputation of an overly optimistic mean for "dropouts" (i.e., patients with missing endpoint data of interest) in the drug arm. We propose an alternative approach in which the missing mean for the drug arm dropouts is explicitly replaced with either the estimated mean of the entire endpoint distribution under placebo (primary analysis) or a sequence of increasingly more conservative means within a tipping point framework (sensitivity analysis); patient-level imputation is not required. A supplemental "dropout = failure" analysis is considered in which a common poor outcome is imputed for all dropouts followed by a between-treatment comparison using quantile regression. All analyses address the same estimand and can adjust for baseline covariates. Three examples and simulation results are used to support our recommendations.
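A numerical sketch of the proposed primary and tipping-point analyses, working only with arm-level summary statistics (no patient-level imputation). The HbA1c-like means, sample sizes and tipping shifts are invented for illustration and are not taken from the paper's examples.

```python
# Sketch: drug-arm dropouts get the estimated placebo mean (primary) or
# increasingly conservative means (tipping-point sensitivity analysis).
import numpy as np

# Observed summaries (change from baseline at week 24), all assumed:
n_drug, n_drug_obs = 300, 240          # randomised / with endpoint data
mean_drug_obs = -1.10                  # mean among drug-arm patients with data
mean_pbo = -0.40                       # estimated mean of the full placebo arm

p_drop = 1 - n_drug_obs / n_drug       # drug-arm dropout proportion

def drug_arm_mean(dropout_mean):
    """Drug-arm mean with the dropout mean set explicitly."""
    return (1 - p_drop) * mean_drug_obs + p_drop * dropout_mean

# Primary analysis: dropouts assigned the placebo mean.
primary_effect = drug_arm_mean(mean_pbo) - mean_pbo
print(f"Primary treatment effect: {primary_effect:.3f}")

# Tipping-point sensitivity: progressively worse means for drug-arm dropouts.
for shift in np.arange(0.0, 1.01, 0.25):
    effect = drug_arm_mean(mean_pbo + shift) - mean_pbo
    print(f"dropout mean = placebo mean + {shift:.2f} -> effect {effect:.3f}")
```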

13.
Benefit-risk assessment is a fundamental element of drug development with the aim to strengthen decision making for the benefit of public health. Appropriate benefit-risk assessment can provide useful information for proactive intervention in health care settings, which could save lives, reduce litigation, improve patient safety and health care outcomes, and furthermore, lower overall health care costs. Recent development in this area presents challenges and opportunities to statisticians in the pharmaceutical industry. We review the development and examine statistical issues in comparative benefit-risk assessment. We argue that a structured benefit-risk assessment should be a multi-disciplinary effort involving experts in clinical science, safety assessment, decision science, health economics, epidemiology and statistics. Well planned and conducted analyses with clear consideration on benefit and risk are critical for appropriate benefit-risk assessment. Pharmaceutical statisticians should extend their knowledge to relevant areas such as pharmaco-epidemiology, decision analysis, modeling, and simulation to play an increasingly important role in comparative benefit-risk assessment.

14.
Phase II trials evaluate whether a new drug or therapy is worth pursuing further, or whether certain treatments are feasible. A typical Phase II study is a single-arm (open-label) trial with a binary clinical endpoint (response to therapy). Although many oncology Phase II clinical trials are designed with a two-stage procedure, multi-stage designs for Phase II cancer clinical trials are now feasible owing to the increased capability of data capture. Such designs adjust for multiple analyses and variations in analysis time, and provide greater flexibility, such as minimizing the number of patients treated on an ineffective therapy and identifying the minimum number of patients needed to evaluate whether the trial warrants further development. In most NIH-sponsored studies, the early stopping rule is determined so that the number of patients treated on an ineffective therapy is minimized. In pharmaceutical trials, it is also important to know as early as possible whether the trial is highly promising and how likely it is that the early conclusion will be sustained. Although various methods are available to address these issues, practitioners often use disparate methods for different issues and do not realize that a single unified method exists. This article shows how to use a unified approach via a fully sequential procedure, the sequential conditional probability ratio test, to address the multiple needs of a Phase II trial. We show that the fully sequential program can be used to derive an optimized, efficient multi-stage design for either low or high activity, to identify the minimum number of patients required to assess whether a new drug warrants further study, and to adjust for unplanned interim analyses. In addition, we calculate a probability of discordance, that is, the probability that the statistical test would conclude otherwise should the trial continue to its planned end, usually the sample size of a fixed-sample design. This probability can be used to aid decision making in a drug development program. All computations are based on the exact binomial distribution.
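One simple way to put a number on the "probability of discordance" idea is sketched below using only the exact binomial distribution: given promising interim data in a single-arm trial, how likely is it that the fixed-sample test at the planned end would nonetheless fail to reject? This is a simplified stand-in for the sequential conditional probability ratio test of the article; the response rates, sample sizes and plug-in evaluation point are assumptions.

```python
# Sketch: probability of a discordant final conclusion, given interim data,
# for a single-arm binary-endpoint trial with an exact binomial test.
from scipy import stats

p0, alpha = 0.20, 0.05        # uninteresting response rate, one-sided level
N = 40                        # planned fixed-sample size

# Critical value r: smallest total number of responses rejecting H0: p <= p0.
r = min(k for k in range(N + 1)
        if stats.binom.sf(k - 1, N, p0) <= alpha)

# Interim data: n1 patients, x1 responses, and the trial looks promising.
n1, x1 = 20, 9
p_hat = x1 / n1               # plug-in estimate of the true response rate

# Discordance: too few remaining responses to reach r by the planned end,
# evaluated under the plug-in estimate (exact binomial for the remainder).
needed = max(r - x1, 0)
p_discord = stats.binom.cdf(needed - 1, N - n1, p_hat) if needed > 0 else 0.0

print(f"Final critical value: >= {r} responses out of {N}")
print(f"P(discordant final conclusion | interim data): {p_discord:.3f}")
```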

15.
This paper illustrates an approach to setting the decision framework for a study in early clinical drug development. It shows how the criteria for a go and a stop decision are calculated based on pre-specified target and lower reference values. The framework can lead to a three-outcome approach by including a consider zone; this could enable smaller studies to be performed in early development, with other information either external to or within the study used to reach a go or stop decision. In this way, Phase I/II trials can be geared towards providing actionable decision-making rather than the traditional focus on statistical significance. The example provided illustrates how the decision criteria were calculated for a Phase II study, including an interim analysis, and how the operating characteristics were assessed to ensure the decision criteria were robust. Copyright © 2016 John Wiley & Sons, Ltd.
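A sketch of one common frequentist formulation of such a framework: "go" when the confidence interval gives reasonable assurance that the effect clears the lower reference value (LRV) and the estimate reaches the target value (TV), "stop" when the interval makes the TV implausible, and "consider" otherwise. The confidence level, TV, LRV and rule details are illustrative and not necessarily those used in the paper's Phase II example.

```python
# Sketch: go / consider / stop classification from an estimate, its standard
# error, and pre-specified target (TV) and lower reference (LRV) values.
from scipy import stats

def decision(estimate, se, tv, lrv, conf=0.80):
    """Classify a study result into go / consider / stop."""
    z = stats.norm.ppf(1 - (1 - conf) / 2)      # two-sided confidence level
    lower, upper = estimate - z * se, estimate + z * se
    if lower > lrv and estimate >= tv:
        return "go"
    if upper < tv:
        return "stop"
    return "consider"

tv, lrv = 3.0, 1.0
for est in (4.2, 2.0, 0.5):
    print(f"estimate {est:>4}:", decision(est, se=1.0, tv=tv, lrv=lrv))
```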

16.
Non-compliance with the specified dosing strategy of assigned treatments is a common problem in randomized drug clinical trials. Recently, there has been much interest in methods used for analysing treatment effects in randomized clinical trials that are subject to non-compliance. In this paper, we estimate and compare treatment effects based on the Grizzle model (GM) (ignorable non-compliance) as the custom model and the generalized Grizzle model (GGM) (non-ignorable non-compliance) as the new model. A real data set based on the treatment of knee osteoarthritis is used to compare these models. The results based on the likelihood ratio statistics and simulation study show the advantage of the proposed model (GGM) over the custom model (GM).

17.
A considerable problem in statistics and risk management is finding distributions that capture the complex behaviour exhibited by financial data. The importance of higher order moments in decision making has been well recognized and there is increasing interest in modelling with distributions that are able to account for these effects. The Pearson system can be used to model a wide range of distributions with various skewness and kurtosis. This paper provides computational examples of a new, easily implemented method for selecting probability density functions from the Pearson family of distributions. We apply this method to daily, monthly, and annual series using a range of data from commodity markets to macroeconomic variables.

18.
In current industry practice, it is difficult to assess QT effects at potential therapeutic doses based on Phase I dose-escalation trials in oncology due to data scarcity, particularly in combination trials. In this paper, we propose to use dose-concentration and concentration-QT models jointly to model the exposures and effects of multiple drugs in combination. The fitted models can then be used to make early predictions of QT prolongation to aid in choosing recommended dose combinations for further investigation. The models consider potential correlation between concentrations of test drugs and potential drug–drug interactions at the PK and QT levels. In addition, this approach allows for the assessment of the probability of QT prolongation exceeding given thresholds of clinical significance. The performance of this approach was examined via simulation under practical scenarios for dose-escalation trials for a combination of two drugs. The simulation results show that invaluable information about QT effects at therapeutic dose combinations can be gained by the proposed approaches. Early detection of dose combinations with substantial QT prolongation is evaluated effectively through the CIs of the predicted peak QT prolongation at each dose combination. Furthermore, the probability of QT prolongation exceeding a certain threshold is also computed to support early detection of safety signals while accounting for uncertainty associated with data from Phase I studies. While the prediction of QT effects is sensitive to the dose-escalation process, this sensitivity and the limited sample size should be considered when providing support to the decision-making process for further developing certain dose combinations. Copyright © 2016 John Wiley & Sons, Ltd.
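The kind of prediction described can be sketched as a simple two-stage simulation: draw exposures at a candidate dose combination from a dose-concentration model, push them through a linear concentration-QTc model with parameter uncertainty, and report the probability that the predicted QTc prolongation exceeds 10 ms. All parameters, the dose-proportionality assumption and the absence of explicit PK interaction terms are illustrative simplifications of the joint models in the paper.

```python
# Sketch: probability that QTc prolongation at a dose combination exceeds
# 10 ms, combining dose-exposure and exposure-QTc models with uncertainty.
import numpy as np

rng = np.random.default_rng(5)
n_sims = 50_000

def predicted_dqtc(dose_a, dose_b):
    """Uncertainty draws of mean delta-QTc (ms) at a dose combination."""
    # Dose-proportional geometric-mean Cmax with uncertainty (log scale).
    cmax_a = np.exp(rng.normal(np.log(0.02 * dose_a), 0.10, n_sims))
    cmax_b = np.exp(rng.normal(np.log(0.05 * dose_b), 0.12, n_sims))
    # Linear concentration-QTc slopes (ms per concentration unit) and an
    # intercept; the spread stands in for fitted-model standard errors.
    slope_a = rng.normal(1.5, 0.5, n_sims)
    slope_b = rng.normal(2.0, 0.6, n_sims)
    intercept = rng.normal(1.0, 1.0, n_sims)
    return intercept + slope_a * cmax_a + slope_b * cmax_b

for dose_a, dose_b in [(100, 50), (200, 100), (400, 100)]:
    dqtc = predicted_dqtc(dose_a, dose_b)
    print(f"doses ({dose_a}, {dose_b}) mg: mean dQTc {dqtc.mean():5.1f} ms, "
          f"P(dQTc > 10 ms) = {np.mean(dqtc > 10):.3f}")
```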

19.
Assessment of the time needed to attain steady state is a key pharmacokinetic objective during drug development. Traditional approaches for assessing steady state include ANOVA-based methods for comparing mean plasma concentration values from each sampling day, with either a difference or equivalence test. However, hypothesis-testing approaches are ill suited for assessment of steady state. This paper presents a nonlinear mixed effects modelling approach for estimation of steady state attainment, based on fitting a simple nonlinear mixed model to observed trough plasma concentrations. The simple nonlinear mixed model is developed and proposed for use under certain pharmacokinetic assumptions. The nonlinear mixed modelling estimation approach is described and illustrated by application to trough data from a multiple dose trial in healthy subjects. The performance of the nonlinear mixed modelling approach is compared to ANOVA-based approaches by means of simulation techniques. Copyright © 2005 John Wiley & Sons, Ltd.
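A minimal sketch of the underlying modelling idea: trough concentrations accumulate towards steady state roughly as Css*(1 - exp(-k*day)), and the fitted rate constant translates into an estimated day by which, say, 90% of steady state is reached. For brevity this fits a fixed-effects nonlinear model with scipy to simulated data rather than the nonlinear mixed-effects model of the paper; all values are illustrative.

```python
# Sketch: estimating steady-state attainment from trough concentrations via
# a simple exponential-accumulation model (fixed-effects approximation).
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(6)

def accumulation(day, c_ss, k_acc):
    """Expected trough concentration on a given dosing day."""
    return c_ss * (1 - np.exp(-k_acc * day))

# Simulated trough data: days 1-10, several samples per day, lognormal noise.
days = np.repeat(np.arange(1, 11), 6)
conc = accumulation(days, 12.0, 0.6) * np.exp(rng.normal(0, 0.15, days.size))

params, _ = curve_fit(accumulation, days, conc, p0=[10.0, 0.5])
c_ss_hat, k_hat = params
t90 = -np.log(0.10) / k_hat        # day by which 90% of steady state is reached

print(f"Estimated Css = {c_ss_hat:.1f}, accumulation rate = {k_hat:.2f}/day")
print(f"Estimated time to reach 90% of steady state: {t90:.1f} days")
```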

20.
Specific efficacy criteria were defined by the International Headache Society for controlled clinical trials on acute migraine. They are derived from the pain profile and the timing of rescue medication intake. We present a methodology to improve the analysis of such trials. Instead of analysing each endpoint separately, we model the joint distribution and derive success rates for any criterion as predictions. We use cumulative regression models for one response at a time and a multivariate normal copula to model the dependence between responses. Parameters are estimated using maximum likelihood. Benefits of the method include a reduction in the number of tests performed and an increase in their power. The method is well suited to dose–response trials, from which predictions can be used to select doses and optimize the design of subsequent trials. More generally, our method permits very flexible modelling of longitudinal series of ordinal data. Copyright © 2003 John Wiley & Sons, Ltd.
