Similar Literature
20 similar documents retrieved
1.
‘Success’ in drug development is bringing to patients a new medicine that has an acceptable benefit–risk profile and that is also cost-effective. Cost-effectiveness means that the incremental clinical benefit is deemed worth paying for by a healthcare system, and it has an important role in enabling manufacturers to bring new medicines to patients as soon as possible following regulatory approval. Subgroup analyses are increasingly being used by decision-makers when determining the cost-effectiveness of new medicines and making recommendations. This paper highlights the statistical considerations when using subgroup analyses to support cost-effectiveness for a health technology assessment. The key principles recommended for subgroup analyses supporting clinical effectiveness published by Paget et al. are evaluated with respect to subgroup analyses supporting cost-effectiveness. A health technology assessment case study is included to highlight the importance of subgroup analyses when incorporated into cost-effectiveness analyses. In summary, we recommend planning subgroup analyses for cost-effectiveness analyses early in the drug development process and adhering to good statistical principles when using subgroup analyses in this context. In particular, we consider it important to provide transparency in how subgroups are defined, to be able to demonstrate the robustness of the subgroup results, and to be able to quantify the uncertainty in the subgroup analyses of cost-effectiveness. Copyright © 2014 John Wiley & Sons, Ltd.
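As a minimal illustration of quantifying uncertainty in a subgroup cost-effectiveness analysis, the sketch below bootstraps the incremental cost-effectiveness ratio (ICER) within a single pre-defined subgroup. The data, thresholds, and variable names are hypothetical and are not taken from the case study above.

```python
import numpy as np

rng = np.random.default_rng(2024)

# Hypothetical individual-level data for one pre-defined subgroup:
# per-patient costs and QALYs under the new medicine and the comparator.
n = 200
cost_new, qaly_new = rng.normal(12000, 2500, n), rng.normal(1.45, 0.30, n)
cost_old, qaly_old = rng.normal(8000, 2000, n), rng.normal(1.30, 0.30, n)

def icer(c1, q1, c0, q0):
    """Incremental cost-effectiveness ratio: extra cost per extra QALY."""
    return (c1.mean() - c0.mean()) / (q1.mean() - q0.mean())

# Bootstrap the ICER to express the uncertainty of the subgroup result.
boot = []
for _ in range(2000):
    i1 = rng.integers(0, n, n)   # resample patients within each arm,
    i0 = rng.integers(0, n, n)   # keeping each patient's (cost, QALY) pair intact
    boot.append(icer(cost_new[i1], qaly_new[i1], cost_old[i0], qaly_old[i0]))

lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"Subgroup ICER: {icer(cost_new, qaly_new, cost_old, qaly_old):,.0f} per QALY")
print(f"Bootstrap 95% interval: ({lo:,.0f}, {hi:,.0f})")
```

In practice the same resampling would be repeated for each subgroup of interest, and an incremental net-monetary-benefit formulation is often preferred because the ICER becomes unstable when the QALY difference approaches zero.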

2.
Evidence-based quantitative methodologies have been proposed to inform decision-making in drug development, such as metrics to make go/no-go decisions or predictions of success, defined in terms of the statistical significance of future clinical trials. While these methodologies appropriately address some critical questions on the potential of a drug, they either consider the past evidence without predicting the outcome of the future trials or focus only on efficacy, failing to account for the multifaceted aspects of successful drug development. As quantitative benefit-risk assessments could enhance decision-making, we propose a more comprehensive approach using a composite definition of success based not only on the statistical significance of the treatment effect on the primary endpoint but also on its clinical relevance and on a favorable benefit-risk balance in the next pivotal studies. For a given drug, we can thus study several development strategies before starting the pivotal trials by comparing their predictive probabilities of success. The predictions are based on the available evidence from the previous trials, to which new hypotheses on the future development could be added. The resulting predictive probability of composite success provides a useful summary to support the discussions of the decision-makers. We present a fictitious, but realistic, example in major depressive disorder inspired by a real decision-making case.
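To make the composite definition of success concrete, the hedged Monte Carlo sketch below draws a plausible treatment effect from a posterior summarising earlier trials, simulates a future two-arm pivotal trial, and declares success only if the result is statistically significant, clinically relevant, and has an acceptable excess adverse-event risk. All models, thresholds, and parameter values are illustrative assumptions, not the authors' specification.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)

# Posterior for the true treatment effect on the primary endpoint,
# summarising evidence from earlier trials (illustrative values).
mu_post, sd_post = 2.0, 1.0          # e.g. mean difference on a depression scale
n_per_arm, sigma = 150, 8.0          # planned pivotal trial size and outcome SD
mcid = 1.5                           # minimal clinically relevant difference
risk_diff_limit = 0.10               # acceptable excess adverse-event risk
p_ae_ctrl, p_ae_trt = 0.15, 0.20     # assumed adverse-event rates

n_sim, success = 20000, 0
se = sigma * np.sqrt(2 / n_per_arm)  # standard error of the mean difference
for _ in range(n_sim):
    theta = rng.normal(mu_post, sd_post)        # draw a plausible true effect
    diff = rng.normal(theta, se)                # simulated observed effect
    significant = (diff / se) > stats.norm.ppf(0.975)   # one-sided 2.5% level
    relevant = diff > mcid                              # clinical relevance
    ae_trt = rng.binomial(n_per_arm, p_ae_trt) / n_per_arm
    ae_ctrl = rng.binomial(n_per_arm, p_ae_ctrl) / n_per_arm
    acceptable_risk = (ae_trt - ae_ctrl) < risk_diff_limit   # benefit-risk proxy
    success += significant and relevant and acceptable_risk

print(f"Predictive probability of composite success: {success / n_sim:.2f}")
```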

3.
Bayesian methods are increasingly used in proof-of-concept studies. An important benefit of these methods is the potential to use informative priors, thereby reducing sample size. This is particularly relevant for treatment arms where there is a substantial amount of historical information, such as placebo and active comparators. One issue with using an informative prior is the possibility of a mismatch between the informative prior and the observed data, referred to as prior-data conflict. We focus on two methods for dealing with this: a testing approach and a mixture prior approach. The testing approach assesses prior-data conflict by comparing the observed data to the prior predictive distribution and resorting to a non-informative prior if prior-data conflict is declared. The mixture prior approach uses a prior with a precise and a diffuse component. We assess these approaches for the normal case via simulation and show they have some attractive features compared with the standard one-component informative prior. For example, when the discrepancy between the prior and the data is sufficiently marked that, intuitively, one feels less certain about the results, both the testing and mixture approaches typically yield wider posterior credible intervals than when there is no discrepancy. In contrast, when there is no discrepancy, the results of these approaches are typically similar to those of the standard approach. Whilst for any specific study the operating characteristics of the selected approach should be assessed and agreed at the design stage, we believe these two approaches are each worthy of consideration. Copyright © 2015 John Wiley & Sons, Ltd.
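The mixture prior approach can be sketched for a normal mean with known sampling variance (an assumption made here purely for the illustration): the prior is a two-component mixture of an informative and a diffuse normal, and the posterior mixture weights update according to how well each component's prior predictive distribution explains the observed mean. All numbers below are made up.

```python
import numpy as np
from scipy import stats

# Observed data from the new proof-of-concept arm (illustrative numbers).
n, ybar, sigma = 30, 1.8, 4.0          # sample size, sample mean, known SD
se2 = sigma**2 / n

# Two-component mixture prior on the mean: an informative component
# (e.g. from historical data) plus a diffuse "robustifying" component.
priors = [
    {"w": 0.8, "m": 0.0, "v": 0.5**2},   # informative component
    {"w": 0.2, "m": 0.0, "v": 10.0**2},  # diffuse component
]

post = []
for p in priors:
    # The prior-predictive density of the observed mean under each component
    # drives how the mixture weight updates: prior-data conflict shifts
    # weight towards the diffuse component.
    marg = stats.norm.pdf(ybar, loc=p["m"], scale=np.sqrt(p["v"] + se2))
    prec = 1 / p["v"] + 1 / se2          # posterior precision within component
    post.append({
        "w_unnorm": p["w"] * marg,
        "m": (p["m"] / p["v"] + ybar / se2) / prec,
        "v": 1 / prec,
    })

total = sum(c["w_unnorm"] for c in post)
for c in post:
    c["w"] = c["w_unnorm"] / total

post_mean = sum(c["w"] * c["m"] for c in post)
print("posterior mixture weights:", [round(c["w"], 3) for c in post])
print("posterior mean:", round(post_mean, 3))
```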

4.
5.
One of the primary purposes of an oncology dose-finding trial is to identify an optimal dose (OD) that is both tolerable and has an indication of therapeutic benefit for subjects in subsequent clinical trials. In addition, it is quite important to accelerate early-stage trials in order to shorten the entire period of drug development. However, it is often challenging to make adaptive decisions on dose escalation and de-escalation in a timely manner because of the fast accrual rate, the difference in outcome evaluation periods for efficacy and toxicity, and late-onset outcomes. To solve these issues, we propose the time-to-event Bayesian optimal interval design to accelerate dose-finding based on cumulative and pending data on both efficacy and toxicity. The new design, named the “TITE-BOIN-ET” design, is a nonparametric, model-assisted design. Thus, it is robust, much simpler, and easier to implement in actual oncology dose-finding trials than the model-based approaches. These characteristics are quite useful from a practical point of view. A simulation study shows that the TITE-BOIN-ET design has advantages over the model-based approaches in both the percentage of correct OD selection and the average number of patients allocated to the ODs across a variety of realistic settings. In addition, the TITE-BOIN-ET design significantly shortens the trial duration compared with designs without sequential enrollment and therefore has the potential to accelerate early-stage dose-finding trials.

6.
Benefit-risk assessment is a fundamental element of drug development, with the aim of strengthening decision making for the benefit of public health. Appropriate benefit-risk assessment can provide useful information for proactive intervention in health care settings, which could save lives, reduce litigation, improve patient safety and health care outcomes, and, furthermore, lower overall health care costs. Recent developments in this area present challenges and opportunities to statisticians in the pharmaceutical industry. We review these developments and examine statistical issues in comparative benefit-risk assessment. We argue that a structured benefit-risk assessment should be a multi-disciplinary effort involving experts in clinical science, safety assessment, decision science, health economics, epidemiology and statistics. Well-planned and well-conducted analyses with clear consideration of benefit and risk are critical for appropriate benefit-risk assessment. Pharmaceutical statisticians should extend their knowledge to relevant areas such as pharmaco-epidemiology, decision analysis, modeling, and simulation to play an increasingly important role in comparative benefit-risk assessment.

7.
Relative risks are often considered preferable to odds ratios for quantifying the association between a predictor and a binary outcome. Relative risk regression is an alternative to logistic regression in which the parameters are relative risks rather than odds ratios. It uses a log-link binomial generalised linear model, or log-binomial model, which requires parameter constraints to prevent probabilities from exceeding 1. This leads to numerical problems with standard approaches for finding the maximum likelihood estimate (MLE), such as Fisher scoring, and has motivated various non-MLE approaches. In this paper, we discuss the roles of the MLE and its main competitors for relative risk regression. It is argued that reliable alternatives to Fisher scoring mean that numerical issues are no longer a motivation for non-MLE methods. Nonetheless, non-MLE methods may be worthwhile for other reasons, and we evaluate this possibility for alternatives within a class of quasi-likelihood methods. The MLE obtained using a reliable computational method is recommended, but this approach requires bootstrapping when estimates are on the boundary of the parameter space. If convenience is paramount, then quasi-likelihood estimation can be a good alternative, although parameter constraints may be violated. Sensitivity to model misspecification and outliers is also discussed, along with recommendations and priorities for future research.
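For orientation, the generic sketch below fits a log-binomial model (binomial family, log link) to simulated data with statsmodels and exponentiates the coefficients to obtain relative risks; the data and starting values are illustrative, and convergence problems near the boundary of the parameter space, which motivate the alternatives discussed above, can still occur in practice.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)

# Simulate a binary outcome whose probability follows a log-linear model,
# so the exposure coefficient is a log relative risk.
n = 1000
exposure = rng.binomial(1, 0.4, n)
age = rng.uniform(40, 70, n)
log_p = -2.5 + 0.5 * exposure + 0.01 * (age - 55)   # true log risk
y = rng.binomial(1, np.exp(log_p))

X = sm.add_constant(np.column_stack([exposure, age - 55]))

# Log-binomial model: binomial family with a log link.  Starting values in the
# interior of the parameter space help keep fitted probabilities below 1.
model = sm.GLM(y, X, family=sm.families.Binomial(link=sm.families.links.Log()))
fit = model.fit(start_params=[-3.0, 0.0, 0.0])

print("estimated relative risks:", np.round(np.exp(fit.params[1:]), 2))
```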

8.
With the advancement of technologies such as genomic sequencing, predictive biomarkers have become a useful tool for the development of personalized medicine. Predictive biomarkers can be used to select subsets of patients that are most likely to benefit from a treatment. A number of approaches for subgroup identification have been proposed in recent years. Although overviews of subgroup identification methods are available, systematic comparisons of their performance in simulation studies are rare. Interaction trees (IT), model-based recursive partitioning, subgroup identification based on differential effect, the simultaneous threshold interaction modeling algorithm (STIMA), and adaptive refinement by directed peeling have been proposed for subgroup identification. We compared these methods in a simulation study using a structured approach. In order to identify a target population for subsequent trials, a selection of the identified subgroups is needed. We therefore propose a subgroup criterion leading to a target subgroup consisting of the identified subgroups with an estimated treatment difference no less than a pre-specified threshold. In our simulation study, we evaluated these methods by considering measures for binary classification, such as sensitivity and specificity. In settings with large effects or huge sample sizes, most methods perform well. For more realistic settings in drug development involving data from a single trial only, however, none of the methods seems suitable for selecting a target population. Using the subgroup criterion as an alternative to the proposed pruning procedures, STIMA and IT can improve their performance in some settings. The methods and the subgroup criterion are illustrated by an application in amyotrophic lateral sclerosis.
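The proposed subgroup criterion, selecting the identified subgroups whose estimated treatment difference is no less than a pre-specified threshold, can be sketched generically as below; the subgroup definitions, estimates, and threshold are hypothetical placeholders for the output of any of the identification methods listed above.

```python
import pandas as pd

# Hypothetical output of a subgroup identification method (e.g. IT or STIMA):
# each row is an identified subgroup with its estimated treatment difference.
identified = pd.DataFrame({
    "subgroup": ["biomarker > 2.1", "age <= 55", "biomarker > 2.1 & age <= 55"],
    "trt_diff": [4.2, 1.1, 6.0],     # estimated treatment difference
    "n":        [180, 240, 90],      # subgroup size
})

THRESHOLD = 3.0   # pre-specified clinically relevant treatment difference

# Target population = the identified subgroups meeting the criterion.
target = identified[identified["trt_diff"] >= THRESHOLD]
print("Subgroups forming the target population:")
print(target.to_string(index=False))
```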

9.
Biomarkers that predict efficacy and safety for a given drug therapy are becoming increasingly important for treatment strategy and drug evaluation in personalized medicine. Methodology for appropriately identifying and validating such biomarkers is critically needed, although it is very challenging to develop, especially in trials of terminal diseases with survival endpoints. The marker-by-treatment predictiveness curve serves this need by visualizing the treatment effect on survival as a function of the biomarker for each treatment. In this article, we propose the weighted predictiveness curve (WPC). Based on the nature of the data, it generates predictiveness curves using either parametric or nonparametric approaches. Especially for nonparametric predictiveness curves, by incorporating local assessment techniques, it requires minimal model assumptions and provides great flexibility for visualizing the marker-by-treatment relationship. The WPC can be used to compare biomarkers and identify the one with the highest potential impact. Equally important, by simultaneously viewing several treatment-specific predictiveness curves across the biomarker range, the WPC can also guide biomarker-based treatment regimens. Simulations representing various scenarios are employed to evaluate the performance of the WPC. Application to a well-known liver cirrhosis trial sheds new light on the data and leads to the discovery of novel patterns of treatment-biomarker interaction. Copyright © 2015 John Wiley & Sons, Ltd.
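As a rough illustration of the local-assessment idea (using a continuous outcome rather than the survival endpoints targeted by the WPC, and not the authors' estimator), the sketch below computes kernel-weighted mean responses per treatment arm across the biomarker range; all data are simulated.

```python
import numpy as np

rng = np.random.default_rng(6)

# Simulated trial: continuous biomarker, binary treatment, and an outcome whose
# treatment benefit grows with the biomarker (a marker-by-treatment interaction).
n = 600
biomarker = rng.uniform(0, 1, n)
trt = rng.binomial(1, 0.5, n)
outcome = 0.2 + 1.5 * biomarker * trt + rng.normal(0, 0.5, n)

def local_mean(x0, x, y, bandwidth=0.1):
    """Nadaraya-Watson (kernel-weighted) estimate of E[y | biomarker = x0]."""
    w = np.exp(-0.5 * ((x - x0) / bandwidth) ** 2)
    return np.sum(w * y) / np.sum(w)

grid = np.linspace(0.05, 0.95, 10)
curve_trt = [local_mean(g, biomarker[trt == 1], outcome[trt == 1]) for g in grid]
curve_ctl = [local_mean(g, biomarker[trt == 0], outcome[trt == 0]) for g in grid]

# Viewing the two treatment-specific curves together shows how the estimated
# treatment effect changes across the biomarker range.
for g, a, b in zip(grid, curve_trt, curve_ctl):
    print(f"biomarker={g:.2f}  treated={a:.2f}  control={b:.2f}  effect={a - b:+.2f}")
```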

10.
Failure to adjust for informative non‐compliance, a common phenomenon in endpoint trials, can lead to a considerably underpowered study. However, standard methods for sample size calculation assume that non‐compliance is non‐informative. One existing method to account for informative non‐compliance, based on a two‐subpopulation model, is limited with respect to the degree of association between the risk of non‐compliance and the risk of a study endpoint that can be modelled, and with respect to the maximum allowable rates of non‐compliance and endpoints. In this paper, we introduce a new method that largely overcomes these limitations. This method is based on a model in which time to non‐compliance and time to endpoint are assumed to follow a bivariate exponential distribution. Parameters of the distribution are obtained by equating them with the study design parameters. The impact of informative non‐compliance is investigated across a wide range of conditions, and the method is illustrated by recalculating the sample size of a published clinical trial. Copyright © 2005 John Wiley & Sons, Ltd.
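A hypothetical illustration of how informative non-compliance erodes the number of observed endpoints is sketched below using a Marshall-Olkin bivariate exponential, one convenient bivariate exponential family; the paper's exact construction and parameterisation are not reproduced here, and the rates are made up.

```python
import numpy as np

rng = np.random.default_rng(11)

def marshall_olkin(n, lam1, lam2, lam12):
    """Marshall-Olkin bivariate exponential: X = min(Z1, Z12), Y = min(Z2, Z12)."""
    z1 = rng.exponential(1 / lam1, n)
    z2 = rng.exponential(1 / lam2, n)
    z12 = rng.exponential(1 / lam12, n)   # shared shock induces the association
    return np.minimum(z1, z12), np.minimum(z2, z12)

n, followup = 5000, 3.0                   # patients and years of follow-up
# Correlated times to non-compliance (t_nc) and to the study endpoint (t_ev).
t_nc, t_ev = marshall_olkin(n, lam1=0.15, lam2=0.08, lam12=0.05)

events_ignoring_nc = np.mean(t_ev < followup)
# With informative non-compliance, endpoints occurring after a patient stops
# complying are no longer observed under the assigned treatment.
events_with_nc = np.mean((t_ev < followup) & (t_ev < t_nc))

print(f"Endpoint rate ignoring non-compliance: {events_ignoring_nc:.3f}")
print(f"Observed on-treatment endpoint rate:   {events_with_nc:.3f}")
```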

11.
We propose new ensemble approaches to estimating the population mean for missing response data with fully observed auxiliary variables. We first compress the working models according to their categories through a weighted average, where the weights are proportional to the squares of the least-squares coefficients from model refitting. Based on the compressed values, we develop two ensemble frameworks: one adjusts the weights in the inverse probability weighting procedure, and the other is built upon an additive structure by reformulating the augmented inverse probability weighting function. The asymptotic normality property is established for the proposed estimators through the theory of estimating functions with plugged-in nuisance parameter estimates. Simulation studies show that the new proposals have substantial advantages over existing ones for small sample sizes, and an acquired immune deficiency syndrome data example is used for illustration.
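For background, a bare-bones augmented inverse probability weighting (AIPW) estimator of the population mean is sketched below with a single working outcome model and a single propensity model, rather than the ensemble of compressed models proposed in the paper; all data are simulated.

```python
import numpy as np
from sklearn.linear_model import LinearRegression, LogisticRegression

rng = np.random.default_rng(3)

# Fully observed auxiliary variables X; the response y is missing for some subjects.
n = 2000
X = rng.normal(size=(n, 2))
y = 1.0 + X @ np.array([0.8, -0.5]) + rng.normal(scale=1.0, size=n)
p_obs = 1 / (1 + np.exp(-(0.3 + 0.6 * X[:, 0])))   # missingness depends on X
R = rng.binomial(1, p_obs)                          # R = 1: response observed

# Working models: propensity of being observed and outcome regression.
prop = LogisticRegression().fit(X, R)
pi_hat = prop.predict_proba(X)[:, 1]
outcome = LinearRegression().fit(X[R == 1], y[R == 1])
m_hat = outcome.predict(X)

# AIPW estimator of E[y]: outcome-model prediction plus an inverse-probability-
# weighted correction using the observed residuals.
aipw = np.mean(m_hat + R * (y - m_hat) / pi_hat)
print(f"AIPW estimate of the population mean: {aipw:.3f}")
print(f"Complete-case mean (biased here):     {y[R == 1].mean():.3f}")
```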

12.
Clinical trials of experimental treatments must be designed with primary endpoints that directly measure clinical benefit for patients. In many disease areas, the recognised gold standard primary endpoint can take many years to mature, leading to challenges in the conduct and quality of clinical studies. There is increasing interest in using shorter‐term surrogate endpoints as substitutes for costly long‐term clinical trial endpoints; such surrogates need to be selected according to biological plausibility, as well as the ability to reliably predict the unobserved treatment effect on the long‐term endpoint. A number of statistical methods to evaluate this prediction have been proposed; this paper uses a simulation study to explore one such method in the context of time‐to‐event surrogates for a time‐to‐event true endpoint. This two‐stage meta‐analytic copula method has been extensively studied for time‐to‐event surrogate endpoints with one event of interest, but thus far has not been explored for the assessment of surrogates which have multiple events of interest, such as those incorporating information directly from the true clinical endpoint. We assess the sensitivity of the method to various factors including strength of association between endpoints, the quantity of data available, and the effect of censoring. In particular, we consider scenarios where there exist very little data on which to assess surrogacy. Results show that the two‐stage meta‐analytic copula method performs well under certain circumstances and could be considered useful in practice, but demonstrates limitations that may prevent universal use.

13.
In this study, we investigate the concept of the mean response for a treatment group, as well as its estimation and prediction, for generalized linear models with a subject-wise random effect. Generalized linear models are commonly used to analyze categorical data. The model-based mean for a treatment group usually estimates the response at the mean covariate. However, the mean response for the treatment group in the studied population is at least equally important in the context of clinical trials. New methods have been proposed to estimate such a mean response in generalized linear models; however, this has only been done when there are no random effects in the model. We suggest that, in a generalized linear mixed model (GLMM), there are at least two possible definitions of a treatment group mean response that can serve as estimation/prediction targets. The estimation of these treatment group means is important for healthcare professionals to be able to understand the absolute benefit versus risk. We propose a new set of methods for estimating and predicting both of these treatment group means in a GLMM with a univariate subject-wise random effect. Our methods also suggest an easy way of constructing corresponding confidence and prediction intervals for both possible treatment group means. Simulations show that the proposed confidence and prediction intervals provide correct empirical coverage probability under most circumstances. The proposed methods have also been applied to analyze hypoglycemia data from diabetes clinical trials.
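To make the two possible targets concrete, the hypothetical sketch below contrasts, for a logistic GLMM with a random intercept, the response of the typical subject (random effect set to zero) with the population-averaged treatment group mean obtained by integrating over the random-effect distribution via Gauss-Hermite quadrature; the parameter values are illustrative, not estimates from the paper.

```python
import numpy as np

def expit(x):
    return 1 / (1 + np.exp(-x))

# Illustrative logistic GLMM for a treatment group:
# logit P(Y = 1 | b) = beta0 + beta_trt + b,   b ~ N(0, tau^2)
beta0, beta_trt, tau = -1.0, 0.8, 1.5

# Target 1: response of the "typical" subject (random intercept b = 0).
p_typical = expit(beta0 + beta_trt)

# Target 2: population-averaged treatment group mean,
# E_b[ expit(beta0 + beta_trt + b) ], via probabilists' Gauss-Hermite quadrature.
nodes, weights = np.polynomial.hermite_e.hermegauss(40)
p_marginal = np.sum(weights * expit(beta0 + beta_trt + tau * nodes)) / np.sqrt(2 * np.pi)

print(f"Conditional mean at b = 0 (typical subject): {p_typical:.3f}")
print(f"Population-averaged treatment group mean:    {p_marginal:.3f}")
```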

14.
The re-randomization test has been considered a robust alternative to traditional population model-based methods for analyzing randomized clinical trials. This is especially so when the clinical trials are randomized according to minimization, which is a popular covariate-adaptive randomization method for ensuring balance among prognostic factors. Among various re-randomization tests, the fixed-entry-order re-randomization test is advocated as an effective strategy when a temporal trend is suspected. Yet when minimization is applied to trials with unequal allocation, the fixed-entry-order re-randomization test is biased and its power is thus compromised. We find that the bias is due to non-uniform re-allocation probabilities incurred by the re-randomization in this case. We therefore propose a weighted fixed-entry-order re-randomization test to overcome the bias. The performance of the new test was investigated in simulation studies that mimic the settings of a real clinical trial. The weighted re-randomization test was found to work well in the scenarios investigated, including in the presence of a strong temporal trend. Copyright © 2013 John Wiley & Sons, Ltd.

15.
Three new entropy estimators for multivariate distributions are introduced. The two cases considered here are when the distribution is supported on the unit sphere and when it is supported on the unit cube. In the former case, the consistency of the proposed entropy estimator and an upper bound on its absolute error are established. In the latter case, under the assumption that only the moments of the underlying distribution are available, a non-traditional estimator of the entropy is suggested. We also study the practical performance of the constructed estimators through simulation studies and compare the estimators based on the moment-recovered approaches with their counterparts derived using the histogram and kth-nearest-neighbour constructions. In addition, one worked example is briefly discussed.
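As general background on the comparator mentioned above (not the moment-recovered estimators proposed in the paper), the kth-nearest-neighbour construction can be sketched with the classical Kozachenko-Leonenko estimator:

```python
import numpy as np
from scipy.spatial import cKDTree
from scipy.special import digamma, gammaln

def kl_entropy(x, k=3):
    """Kozachenko-Leonenko kNN estimator of differential entropy (in nats)."""
    n, d = x.shape
    tree = cKDTree(x)
    # distance to the k-th nearest neighbour (excluding the point itself)
    dist, _ = tree.query(x, k=k + 1)
    r = dist[:, -1]
    # log volume of the d-dimensional unit ball
    log_c_d = (d / 2) * np.log(np.pi) - gammaln(d / 2 + 1)
    return digamma(n) - digamma(k) + log_c_d + d * np.mean(np.log(r))

rng = np.random.default_rng(5)
sample = rng.normal(size=(5000, 2))          # bivariate standard normal
true_h = np.log(2 * np.pi * np.e)            # analytic entropy: (d/2) log(2*pi*e), d = 2
print(f"kNN estimate: {kl_entropy(sample):.3f}, true value: {true_h:.3f}")
```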

16.
The QT interval is regarded as an important biomarker for the assessment of arrhythmia liability, and evidence of QT prolongation has led to the withdrawal and relabeling of numerous compounds. Traditional methods of assessing QT prolongation correct the QT interval for the length of the RR interval (which varies inversely with heart rate) in a variety of ways. These methods often disagree with each other and do not take into account changes in autonomic state. Correcting the QT interval for RR reduces a bivariate observation (RR, QT) to a univariate observation (QTc). The development of automatic electrocardiogram (ECG) signal acquisition systems has made it possible to collect continuous (so-called ‘beat-to-beat’) ECG data. ECG data collected prior to administration of a compound allow us to define a region of (RR, QT) values that encompasses typical activity. Such reference regions are used in clinical applications to define the ‘normal’ region of clinical or laboratory measurements. This paper motivates the need for reference regions of (RR, QT) values from beat-to-beat ECG data and describes a way of constructing them. We introduce a measure of agreement between two reference regions that points to the reliability of 12-lead digital Holter data. We discuss the use of reference regions in establishing baselines for ECG parameters to assist in the evaluation of cardiac risk, and we illustrate this using data from two methodological studies. Copyright © 2010 John Wiley & Sons, Ltd.
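One simple way to construct a bivariate reference region, shown below, is a normal-theory ellipse that accepts (RR, QT) pairs whose Mahalanobis distance from the baseline mean lies within a chi-square quantile; this is a generic sketch with simulated data and is not necessarily the construction used in the paper.

```python
import numpy as np
from scipy.stats import chi2

rng = np.random.default_rng(8)

# Hypothetical baseline beat-to-beat (RR, QT) pairs in milliseconds.
n = 3000
rr = rng.normal(900, 80, n)
qt = 0.25 * rr + rng.normal(170, 12, n)     # QT loosely tracks RR
baseline = np.column_stack([rr, qt])

mean = baseline.mean(axis=0)
cov_inv = np.linalg.inv(np.cov(baseline, rowvar=False))
cutoff = chi2.ppf(0.95, df=2)               # 95% normal-theory reference region

def in_reference_region(points):
    """True where on-treatment (RR, QT) pairs fall inside the baseline region."""
    d = points - mean
    md2 = np.einsum("ij,jk,ik->i", d, cov_inv, d)   # squared Mahalanobis distance
    return md2 <= cutoff

on_treatment = np.column_stack([rng.normal(880, 80, 5), rng.normal(400, 20, 5)])
print(in_reference_region(on_treatment))
```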

17.
PSI, the UK body of statisticians in the pharmaceutical industry, has called on the heads of European regulatory agencies responsible for assessing applications for marketing authorizations for new medicines in the EU to employ full-time statisticians. In order to assess the present situation, a survey was conducted to identify the number of agencies employing one or more full-time statisticians. Out of 29 responding agencies, 12 employed one or more statisticians on a full-time basis, whereas 17 did not. Among these 17, 7 involved external experts on a regular basis, 5 involved external statisticians on a case-by-case basis, and 5 never involved external statistical expertise. Failure to involve statisticians in the assessment of the efficacy and safety of medicines does not automatically lead to reports of low quality or invalid assessment of benefit-risk. However, in-depth knowledge of statistical methodology is often necessary to uncover weaknesses and potentially biased efficacy estimates. This might be of importance for the final opinion on granting a marketing authorization, and statistical review should therefore be conducted by those who are professionally expert in the area. A positive trend toward increased involvement of statistical expertise in the European network of regulatory agencies is observed. Copyright © 2009 John Wiley & Sons, Ltd.

18.
In parallel group trials, long‐term efficacy endpoints may be affected if some patients switch or cross over to the alternative treatment arm prior to the event. In oncology trials, switch to the experimental treatment can occur in the control arm following disease progression and potentially impact overall survival. It may be a clinically relevant question to estimate the efficacy that would have been observed if no patients had switched, for example, to estimate ‘real‐life’ clinical effectiveness for a health technology assessment. Several commonly used statistical methods are available that try to adjust time‐to‐event data to account for treatment switching, ranging from naive exclusion and censoring approaches to more complex inverse probability of censoring weighting and rank‐preserving structural failure time models. These are described, along with their key assumptions, strengths, and limitations. Best practice guidance is provided for both trial design and analysis when switching is anticipated. Available statistical software is summarized, and examples are provided of the application of these methods in health technology assessments of oncology trials. Key considerations include having a clearly articulated rationale and research question and a well‐designed trial with sufficient good quality data collection to enable robust statistical analysis. No analysis method is universally suitable in all situations, and each makes strong untestable assumptions. There is a need for further research into new or improved techniques. This information should aid statisticians and their colleagues to improve the design and analysis of clinical trials where treatment switch is anticipated. Copyright © 2013 John Wiley & Sons, Ltd.

19.
The analysis of adverse events (AEs) is a key component in the assessment of a drug's safety profile. Inappropriate analysis methods may result in misleading conclusions about a therapy's safety and, consequently, its benefit-risk ratio. The statistical analysis of AEs is complicated by the fact that follow-up times can vary between the patients included in a clinical trial. This paper focuses on the analysis of AE data in the presence of varying follow-up times within the benefit assessment of therapeutic interventions. Instead of approaching this issue directly and solely from an analysis point of view, we first discuss what should be estimated in the context of safety data, leading to the concept of estimands. Although the current discussion on estimands is mainly related to efficacy evaluation, the concept is applicable to safety endpoints as well. Within the framework of estimands, we present statistical methods for analysing AEs, with the focus on the time to the occurrence of the first AE of a specific type. We give recommendations on which estimators should be used for the estimands described. Furthermore, we state practical implications of the analysis of AEs in clinical trials and give an overview of examples across different indications. We also provide a review of the current practices of health technology assessment (HTA) agencies with respect to the evaluation of safety data. Finally, we describe problems with meta-analyses of AE data and sketch possible solutions.
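For the estimand based on the time to the first AE of a specific type, a nonparametric one-minus-Kaplan-Meier estimate accounts for varying follow-up times. The sketch below, using simulated data and the lifelines package as one possible implementation, contrasts it with the naive crude proportion, which ignores differing exposure.

```python
import numpy as np
from lifelines import KaplanMeierFitter

rng = np.random.default_rng(4)

# Simulated times (days) to the first AE of interest and to end of follow-up.
n = 400
t_ae = rng.exponential(300, n)          # latent time to first AE
t_fu = rng.uniform(30, 365, n)          # varying follow-up per patient
time = np.minimum(t_ae, t_fu)
observed = (t_ae <= t_fu).astype(int)   # 1 = AE observed, 0 = censored

# The naive crude proportion ignores that many patients were followed only briefly.
crude = observed.mean()

kmf = KaplanMeierFitter()
kmf.fit(durations=time, event_observed=observed)
# Probability of at least one AE by day 365 = 1 - S(365).
p_365 = 1 - kmf.survival_function_at_times(365).iloc[0]

print(f"Crude AE proportion:                   {crude:.3f}")
print(f"1 - KM estimate of AE risk by day 365: {p_365:.3f}")
```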

20.
One of the main aims of early-phase clinical trials is to identify a safe dose with an indication of therapeutic benefit to administer to subjects in further studies. Ideally, therefore, dose-limiting events (DLEs) and responses indicative of efficacy should be considered in the dose-escalation procedure. Several methods have been suggested for incorporating both DLEs and efficacy responses in early-phase dose-escalation trials. In this paper, we describe and evaluate a Bayesian adaptive approach based on one binary response (occurrence of a DLE) and one continuous response (a measure of potential efficacy) per subject. A logistic regression and a linear log-log relationship are used, respectively, to model the binary DLEs and the continuous efficacy responses. A gain function concerning both the DLEs and the efficacy responses is used to determine the dose to administer to the next cohort of subjects. Stopping rules are proposed to enable efficient decision making. Simulation results show that our approach performs better than taking account of DLE responses alone. To assess the robustness of the approach, scenarios where the efficacy responses of subjects are generated from an Emax model but modelled by the linear log-log model are also considered. This evaluation shows that the simpler log-log model leads to robust recommendations even under this misspecification, showing that it is a useful approximation that avoids the difficulty of estimating the Emax model. Additionally, we find comparable performance to alternative approaches using efficacy and safety for dose-finding. Copyright © 2015 John Wiley & Sons, Ltd.
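A hedged sketch of the gain-function idea is given below: toxicity is modelled by a logistic curve, efficacy by a linear log-log model, and the next dose is the admissible dose (under a DLE-rate constraint) that maximises the gain. The models, weights, and constraint are made-up values, not the authors' exact specification.

```python
import numpy as np

doses = np.array([10, 25, 50, 100, 200])        # hypothetical dose levels (mg)

def p_dle(dose, a=-5.0, b=1.0):
    """Logistic model for the probability of a dose-limiting event."""
    x = a + b * np.log(dose)
    return 1 / (1 + np.exp(-x))

def efficacy(dose, c=0.5, d=0.45):
    """Linear log-log model for the continuous efficacy response."""
    return np.exp(c + d * np.log(dose))

def gain(dose, w_eff=1.0, w_tox=10.0):
    """Gain trades off expected efficacy against the expected DLE risk."""
    return w_eff * efficacy(dose) - w_tox * p_dle(dose)

max_dle_rate = 0.33                              # safety constraint
admissible = doses[p_dle(doses) <= max_dle_rate]
next_dose = admissible[np.argmax(gain(admissible))]

print("P(DLE) by dose:   ", np.round(p_dle(doses), 2))
print("Efficacy by dose: ", np.round(efficacy(doses), 2))
print("Recommended next dose:", next_dose, "mg")
```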
