Similar Literature
20 similar articles found.
1.
In organ transplantation, placebo-controlled clinical trials are not possible for ethical reasons, and hence non-inferiority trials are used to evaluate new drugs. Patients with a transplanted kidney typically receive three to four immunosuppressant drugs to prevent organ rejection. In the non-inferiority trial described here, the dose of one of these immunosuppressants is changed and another is replaced by an investigational drug; this test regimen is compared with the active control regimen. Justifying the non-inferiority margin is challenging because the putative placebo has never been studied in a clinical trial. We propose a random-effects meta-regression in which each immunosuppressant component of the regimen enters as a covariate. This allows us to make inference on the difference between the putative placebo and the active control, from which various methods can then be used to derive the non-inferiority margin. A hybrid of the 95/95 and synthesis approaches is suggested. Data from 51 trials with a total of 17,002 patients were used in the meta-regression. Our approach was motivated by a recent large confirmatory trial in kidney transplantation. The results and the methodological documents of this evaluation were submitted to the Food and Drug Administration, which accepted our proposed non-inferiority margin and our rationale.
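Below is a minimal numerical sketch, with invented historical effect estimates, of the fixed-margin ("95/95") step that such a meta-analysis feeds into: pool historical control-versus-placebo effects with a DerSimonian-Laird random-effects model, take the conservative bound of the 95% CI as M1, and retain a fraction of M1 as the margin M2. It is an illustration only, not the authors' meta-regression.

```python
import numpy as np

# Hypothetical historical effects of the active control over (putative) placebo,
# on a scale where larger is better, with their standard errors.
effect = np.array([0.35, 0.28, 0.41, 0.22, 0.30])
se     = np.array([0.10, 0.12, 0.15, 0.11, 0.09])

# DerSimonian-Laird between-trial variance tau^2
w = 1.0 / se**2
mu_fixed = np.sum(w * effect) / np.sum(w)
Q = np.sum(w * (effect - mu_fixed) ** 2)
c = np.sum(w) - np.sum(w**2) / np.sum(w)
tau2 = max(0.0, (Q - (len(effect) - 1)) / c)

# Random-effects pooled estimate and 95% confidence interval
w_re = 1.0 / (se**2 + tau2)
mu_re = np.sum(w_re * effect) / np.sum(w_re)
se_re = np.sqrt(1.0 / np.sum(w_re))
lo = mu_re - 1.96 * se_re

M1 = lo        # conservative (smallest plausible) control-vs-placebo effect
M2 = 0.5 * M1  # retain 50% of the control effect: a common, but not universal, choice
print(f"pooled effect {mu_re:.3f}, M1 = {M1:.3f}, M2 = {M2:.3f}")
```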

2.
In a non-inferiority trial assessing a new investigational treatment, an indirect comparison with placebo may need to be considered, using the active control of the current trial as the common comparator between the investigational treatment and placebo. In analysing a non-inferiority trial, the ABC of Assay sensitivity, Bias minimisation and the Constancy assumption needs to be considered. We highlight how the ABC assumptions can fail when there is placebo creep or a shift in the patient population. In this situation, beliefs about the placebo response, expressed as a prior probability in a Bayesian formulation, can be combined with the observed treatment effects to set the non-inferiority limit.

3.
In the absence of placebo-controlled trials, the efficacy of a test treatment can alternatively be examined by showing its non-inferiority to an active control; that is, the test treatment is not worse than the active control by a pre-specified margin. The margin is based on the effect of the active control over placebo in historical studies. In other words, the non-inferiority setup involves a network of direct and indirect comparisons between the test treatment, active controls, and placebo. Given this framework, we consider a Bayesian network meta-analysis that propagates the uncertainty and heterogeneity of the historical trials into the non-inferiority trial in a data-driven manner through the use of the Dirichlet process and power priors. Depending on whether placebo was present in the historical trials, two cases of non-inferiority testing are discussed, analogous to the synthesis and fixed-margin approaches. In each case, the model provides a more reliable estimate of the control effect given its effect in other trials in the network, and, when placebo was present only in the historical trials, the model can predict the effect of the test treatment over placebo as if placebo had been present in the non-inferiority trial. It can further answer other questions of interest, such as the comparative effectiveness of the test treatment among its comparators. More importantly, the model permits disproportionate randomization or the use of small sample sizes by borrowing information from a network of trials to draw explicit conclusions on non-inferiority.

4.
Noninferiority trials intend to show that a new treatment is 'not worse' than a standard-of-care active control, and can be used when the new treatment is likely to cause fewer side effects than the active control. For time-to-event endpoints, existing methods of sample size calculation assume either proportional hazards between the two study arms or exponentially distributed lifetimes. When these assumptions do not hold, there are few reliable methods for calculating the sample size of a time-to-event noninferiority trial. Additionally, the non-inferiority margin is usually obtained from a meta-analysis of prior studies, from strongly justifiable 'expert opinion', or from a 'well conducted' definitive large-sample study. Thus, when historical data do not support the traditional assumptions, it would not be appropriate to use these methods to design a noninferiority trial. For such scenarios, an alternative method of sample size calculation based on the assumption of proportional time is proposed; it uses the generalized gamma ratio distribution to perform the calculations. A practical example is discussed, followed by insights on the choice of the non-inferiority margin and the indirect testing of superiority of the treatment compared with placebo.
Keywords: generalized gamma; noninferiority; non-proportional hazards; proportional time; relative time
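As a rough illustration of the proportional-time idea, the sketch below simulates a noninferiority trial in which test-arm event times are control-arm times multiplied by a constant time ratio, and evaluates per-arm sample sizes by simulated power. The Weibull lifetimes, absence of censoring, and t-test on log event times are all my simplifying assumptions; the paper's generalized-gamma formula is not reproduced.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

def ni_power(n_per_arm, shape=1.5, scale=10.0, time_ratio=1.0, margin=0.8,
             alpha=0.025, nsim=2000):
    """Simulated power to declare non-inferiority on the time-ratio scale."""
    rejections = 0
    for _ in range(nsim):
        t_ctrl = scale * rng.weibull(shape, n_per_arm)
        t_test = time_ratio * scale * rng.weibull(shape, n_per_arm)
        # H0: time ratio <= margin. Shift test-arm log-times by log(margin)
        # and test for a positive difference in mean log event time.
        shifted = np.log(t_test) - np.log(margin)
        _, p = stats.ttest_ind(shifted, np.log(t_ctrl), alternative="greater")
        rejections += p < alpha
    return rejections / nsim

for n in (100, 200, 300, 400):
    print(n, ni_power(n))
```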

5.
The FDA released its final guidance on noninferiority trials in November 2016. In noninferiority trials, the validity of the assessment of the test treatment's efficacy depends on the control treatment's efficacy. It is therefore critically important to have a reliable estimate of the control treatment effect, which is generally obtained from historical trials and often assumed to hold in the current setting (the assay constancy assumption). Validating the constancy assumption requires clinical data, which are typically lacking. The guidance acknowledges that "lack of constancy can occur for many reasons." We clarify the objectives of noninferiority trials and conclude that correction for bias, rather than assay constancy, is critical to conducting valid noninferiority trials. We propose that assay constancy not be assumed and that discounting or thresholds be used to address concern about loss of historical efficacy. Examples are provided for illustration.

6.
Two multiple decision procedures are proposed for unifying the non-inferiority, equivalence and superiority tests in a comparative clinical trial of a new drug against an active control. One is a confidence-set method with confidence coefficient 0.95 that improves on the conventional 0.95 confidence interval with respect to the producer's risk, and in some cases also the consumer's risk. It requires the region to include 0 as well as to clear the non-inferiority margin, so that a trial that attempts to prove non-inferiority of an actually inferior drug by combining a large number of subjects with an inappropriately large non-inferiority margin should be unsuccessful. The other is a closed testing procedure that combines the one- and two-sided tests by applying the partitioning principle and justifies the switching procedure by unifying the non-inferiority, equivalence and superiority tests. Regarding non-inferiority in particular, the proposed method simultaneously justifies the old Japanese statistical guideline (one-sided 0.05 test) and the international guideline ICH E9 (one-sided 0.025 test). The method is particularly attractive in that it grades the strength of evidence for the relative efficacy of the test drug against the control at five levels, according to what the trial achieves. The meaning of the non-inferiority test and the rationale for switching from it to a superiority test are also discussed.
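The sketch below is a toy decision rule, not the authors' partitioning-based procedure: it grades the outcome of a trial from a single 95% confidence interval for the treatment difference (test minus control, larger is better) against a margin delta, to make the idea of switching between non-inferiority and superiority concrete. The five labels are my own illustrative gradation.

```python
def classify(lower, upper, delta):
    """Grade a trial outcome from a 95% CI (test minus control, larger = better)."""
    if lower > 0:
        return "superior to control"
    if lower > -delta and upper > 0:
        return "non-inferior; superiority not established"
    if lower > -delta:
        return "non-inferior, though statistically worse than control"
    if upper < -delta:
        return "inferior beyond the margin"
    return "inconclusive"

print(classify(-0.02, 0.15, delta=0.10))  # -> "non-inferior; superiority not established"
```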

7.
In oncology, it may not always be possible to evaluate the efficacy of new medicines in placebo-controlled trials. Furthermore, while some newer, biologically targeted anti-cancer treatments may be expected to deliver therapeutic benefit in terms of better tolerability or improved symptom control, they may not always be expected to provide increased efficacy relative to existing therapies. This naturally leads to the use of active-control, non-inferiority trials to evaluate such treatments. In recent evaluations of anti-cancer treatments, the non-inferiority margin has often been defined in terms of demonstrating that at least 50% of the active-control effect has been retained by the new drug, using methods such as those described by Rothmann et al. (Statistics in Medicine 2003; 22:239-264) and Wang and Hung (Controlled Clinical Trials 2003; 24:147-155). However, this approach can lead to prohibitively large clinical trials, tends to dichotomize the trial outcome as either 'success' or 'failure', and thus oversimplifies interpretation. With relatively modest modification, these methods can be used to define a stepwise approach to design and analysis. In the first (design) step, the trial is sized to show indirectly that the new drug would have beaten placebo; in the second (analysis) step, the probability that the new drug is superior to placebo is assessed; and, if that probability is sufficiently high, in the third and final step the relative efficacy of the new drug to control is assessed on a continuum of effect retention via an 'effect retention likelihood plot'. This stepwise approach is likely to provide a more complete assessment of relative efficacy so that the value of new treatments can be better judged.
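A simplified numerical sketch of the two quantities this stepwise approach revolves around, using invented log hazard ratios: the indirect test-versus-placebo comparison (assuming constancy) and the point estimate of the fraction of the control effect retained. The retention likelihood plot itself and the exact variance treatment of Rothmann et al. are not reproduced.

```python
import numpy as np
from scipy import stats

est_tc, se_tc = 0.05, 0.08    # test vs control (log HR) from the NI trial; hypothetical
est_cp, se_cp = -0.30, 0.06   # control vs placebo (log HR) from meta-analysis; hypothetical

# Indirect test-vs-placebo estimate (constancy assumed) and its standard error
est_tp = est_tc + est_cp
se_tp = np.sqrt(se_tc**2 + se_cp**2)
p_one_sided = stats.norm.cdf(est_tp / se_tp)  # one-sided p-value for H0: no benefit vs placebo

# Point estimate of the fraction of the control effect retained by the test drug
retention = 1.0 - est_tc / (-est_cp)
print(f"indirect log HR vs placebo {est_tp:.3f} (SE {se_tp:.3f}), "
      f"p = {p_one_sided:.4f}, retention {retention:.2f}")
```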

8.
An important evolution in the missing data arena has been the recognition of the need for clarity in objectives. The objectives of primary focus in clinical trials can often be categorized as assessing efficacy or effectiveness. The present investigation illustrates a structured framework for choosing estimands and estimators when testing investigational drugs to treat the symptoms of chronic illnesses. Key issues are discussed and illustrated using a reanalysis of the confirmatory trials from a new drug application in depression. The primary analysis used a likelihood-based approach to assess efficacy: mean change to the planned endpoint of the trial, assuming patients stayed on drug. Secondarily, effectiveness was assessed using a multiple imputation approach. The imputation model, derived solely from the placebo group, was used to impute missing values for both the drug and placebo groups. This so-called placebo multiple imputation (also known as controlled imputation) approach therefore assumed that patients had reduced benefit from the drug after discontinuing it. Results from the example data provided clear evidence of efficacy for the experimental drug and characterized its effectiveness. Data after discontinuation of study medication were not required for these analyses. Given the idiosyncratic nature of drug development, no estimand or approach is universally appropriate. However, the general practice of pairing efficacy and effectiveness estimands may often be useful in understanding the overall risks and benefits of a drug. Controlled imputation approaches, such as placebo multiple imputation, can be a flexible and transparent framework for formulating primary analyses of effectiveness estimands and sensitivity analyses for efficacy estimands.
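The following is a bare-bones single-endpoint sketch of placebo multiple imputation under my own simplifications (one visit, a normal model, and fixed plug-in placebo parameters, where a proper implementation would draw them from their posterior in each imputation): missing values in both arms are imputed from a model fitted to placebo completers, and results are combined with Rubin's rules.

```python
import numpy as np

rng = np.random.default_rng(7)

def placebo_mi(y_drug, y_placebo, n_imp=50):
    """Placebo MI for a single endpoint; y_* contain np.nan for dropouts."""
    obs_p = y_placebo[~np.isnan(y_placebo)]
    mu_p, sd_p = obs_p.mean(), obs_p.std(ddof=1)  # placebo-based imputation model
    ests, vars_ = [], []
    for _ in range(n_imp):
        d, p = y_drug.copy(), y_placebo.copy()
        # draw imputations for BOTH arms from the placebo-based model
        d[np.isnan(d)] = rng.normal(mu_p, sd_p, np.isnan(d).sum())
        p[np.isnan(p)] = rng.normal(mu_p, sd_p, np.isnan(p).sum())
        ests.append(d.mean() - p.mean())
        vars_.append(d.var(ddof=1) / len(d) + p.var(ddof=1) / len(p))
    est = np.mean(ests)                                   # Rubin's combined estimate
    total = np.mean(vars_) + (1 + 1 / n_imp) * np.var(ests, ddof=1)  # Rubin's total variance
    return est, np.sqrt(total)

y_drug    = np.array([5.0, 7.0, np.nan, 6.0, np.nan, 8.0])
y_placebo = np.array([3.0, np.nan, 4.0, 2.0, 5.0, np.nan])
print(placebo_mi(y_drug, y_placebo))
```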

9.
A 3-arm trial design that includes an experimental treatment, an active reference treatment, and a placebo is useful for assessing the noninferiority of an experimental treatment. The inclusion of a placebo arm enables the assessment of assay sensitivity and internal validation, in addition to the testing of the noninferiority of the experimental treatment compared with the reference treatment. In 3-arm noninferiority trials, various statistical test procedures have been considered to evaluate the following three hypotheses: (i) superiority of the experimental treatment over the placebo, (ii) superiority of the reference treatment over the placebo, and (iii) noninferiority of the experimental treatment compared with the reference treatment. However, hypothesis (ii) can be insufficient and may not accurately assess the assay sensitivity for the noninferiority comparison: it can be necessary to demonstrate that the superiority of the reference treatment over the placebo is greater than the noninferiority margin, rather than merely that the reference is superior to placebo. Here, we propose log-rank statistical procedures for evaluating data from 3-arm noninferiority trials so as to assess assay sensitivity with a prespecified margin Δ. In addition, we derive the approximate sample size and the optimal allocation that minimize the total sample size and, hierarchically, the placebo-arm sample size.

10.
The feasibility of a new clinical trial may be increased by incorporating historical data from previous trials. In the particular case where data from only a single historical trial are available, there is no clear recommendation in the literature regarding the most favorable approach. A main problem with the incorporation of historical data is the possible inflation of the type I error rate. One way to control this error is the so-called power prior approach. This Bayesian method does not "borrow" the full historical information but uses a parameter 0 ≤ δ ≤ 1 to determine the amount of borrowed data. Based on the methodology of the power prior, we propose a frequentist framework that allows incorporation of historical data from both arms of two-armed trials with binary outcome while simultaneously controlling the type I error rate. It is shown that for any specific trial scenario a value δ > 0 can be determined such that the type I error rate falls below the prespecified significance level. The magnitude of this value of δ depends on the characteristics of the data observed in the historical trial. Conditionally on these characteristics, an increase in power compared with a trial without borrowing may result. Similarly, we propose methods by which the required sample size can be reduced. The results are discussed and compared with those obtained in a Bayesian framework. Application is illustrated by a clinical trial example.
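A small simulation sketch, illustrative rather than the paper's algorithm, of the machinery involved: borrow historical control data with a power prior weight delta, test via the posterior probability that the treatment rate exceeds the control rate, and evaluate the frequentist type I error under the null. The historical counts and design values are invented.

```python
import numpy as np

rng = np.random.default_rng(3)
x0, n0 = 35, 100          # hypothetical historical control: 35/100 responders
p_true, n = 0.35, 100     # current trial per arm, simulated under H0 (no difference)

def type1(delta, nsim=2000, alpha=0.025):
    rej = 0
    for _ in range(nsim):
        xc = rng.binomial(n, p_true)   # current control
        xt = rng.binomial(n, p_true)   # treatment, H0 true
        # control posterior: Beta(1,1) prior plus delta-weighted historical data
        a_c, b_c = 1 + xc + delta * x0, 1 + n - xc + delta * (n0 - x0)
        a_t, b_t = 1 + xt, 1 + n - xt
        # approximate P(p_t > p_c) by Monte Carlo over the two Beta posteriors
        pt, pc = rng.beta(a_t, b_t, 400), rng.beta(a_c, b_c, 400)
        rej += (pt > pc).mean() > 1 - alpha
    return rej / nsim

for d in (0.0, 0.3, 0.7, 1.0):
    print(d, type1(d))
```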

11.
We consider the problem of sample size calculation for non-inferiority based on the hazard ratio in time-to-event trials where the overall study duration is fixed and subject enrollment is staggered with variable follow-up. We present an adaptation of formulae previously developed for the superiority framework that specifically allows for effect reversal under the non-inferiority setting and for its consequent effect on the variance. Empirical performance is assessed through a small simulation study, and an example based on an ongoing trial is presented. The formulae are straightforward to program and may prove a useful tool in planning trials of this type.
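For orientation, here is a minimal sketch of the standard Schoenfeld-type events formula with a hazard-ratio non-inferiority margin; the staggered-entry and variable-follow-up refinements that are the paper's contribution are not reproduced, and all inputs are hypothetical.

```python
import numpy as np
from scipy import stats

def ni_events(hr_alt=1.0, margin=1.3, alpha=0.025, power=0.9, alloc=0.5):
    """Events required to reject H0: HR >= margin when the true HR is hr_alt."""
    za, zb = stats.norm.ppf(1 - alpha), stats.norm.ppf(power)
    return (za + zb) ** 2 / (alloc * (1 - alloc) *
                             (np.log(hr_alt) - np.log(margin)) ** 2)

events = ni_events()
print(round(events))  # divide by the expected event probability to get total n
```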

12.
The objective of this research was to demonstrate a framework for drawing inference from sensitivity analyses of incomplete longitudinal clinical trial data, via a re-analysis of data from a confirmatory clinical trial in depression. A likelihood-based approach that assumed missing at random (MAR) was the primary analysis. Robustness to departures from MAR was assessed by comparing the primary result with those from a series of analyses that employed varying missing not at random (MNAR) assumptions (selection models, pattern-mixture models and shared-parameter models) and with MAR methods that used inclusive models. The key sensitivity analysis used multiple imputation assuming that, after dropout, the trajectory of drug-treated patients was that of placebo-treated patients with a similar outcome history (placebo multiple imputation). This result was used as the worst reasonable case to define the lower limit of plausible values for the treatment contrast. The endpoint contrast from the primary analysis was -2.79 (p = 0.013). In placebo multiple imputation, the result was -2.17. Results from the other sensitivity analyses ranged from -2.21 to -3.87 and were symmetrically distributed around the primary result. Hence, no clear evidence of bias from missing not at random data was found. In the worst reasonable case scenario, the treatment effect was 80% of the magnitude of the primary result; it was therefore concluded that a treatment effect existed. This structured sensitivity framework, in which a worst-reasonable-case result based on a controlled imputation approach with transparent and debatable assumptions is supplemented by a series of plausible alternative models under varying assumptions, was useful in this specific situation and holds promise as a generally useful framework.

13.
A longitudinal mixture model for classifying patients into responders and non-responders is established using both likelihood-based and Bayesian approaches. The model takes into consideration responders in the control group and is therefore especially useful when the placebo response is strong, or in equivalence trials where the drug in development is compared with a standard treatment. Under our model, a treatment shows evidence of being effective if it increases the proportion of responders or increases the response rate among responders in the treated group compared with the control group; the model thus has the flexibility to accommodate different situations. The proposed method is illustrated using simulation and a depression clinical trial dataset for the likelihood-based approach, and the same depression trial dataset for the Bayesian approach. The two approaches generated consistent results for the depression trial data. In both the placebo group and the treated group, patients were classified into two components with distinct response rates. The proportion of responders was shown to be significantly higher in the treated group than in the control group, suggesting that the treatment, paroxetine, is effective.
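As a toy version of the mixture idea, the EM sketch below fits a two-component normal mixture (shared variance, invented change scores) to a single arm, recovering a "responder" fraction and component means; the paper's longitudinal mixture model is considerably richer.

```python
import numpy as np

rng = np.random.default_rng(11)
# hypothetical endpoint changes: a "responder" and a "non-responder" component
y = np.concatenate([rng.normal(-12, 4, 60), rng.normal(-2, 4, 40)])

pi, mu1, mu2, sd = 0.5, y.min(), y.max(), y.std()
for _ in range(200):
    # E-step: responsibility of the responder component for each patient
    d1 = np.exp(-0.5 * ((y - mu1) / sd) ** 2)
    d2 = np.exp(-0.5 * ((y - mu2) / sd) ** 2)
    r = pi * d1 / (pi * d1 + (1 - pi) * d2)
    # M-step: update mixing proportion, component means, and shared SD
    pi = r.mean()
    mu1 = np.sum(r * y) / np.sum(r)
    mu2 = np.sum((1 - r) * y) / np.sum(1 - r)
    sd = np.sqrt(np.mean(r * (y - mu1) ** 2 + (1 - r) * (y - mu2) ** 2))

print(f"responder fraction {pi:.2f}, component means {mu1:.1f} / {mu2:.1f}")
```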

14.
International Conference on Harmonization E10 concerns non-inferiority trials and the assessment of comparative efficacy, both of which often involve indirect comparisons. In the non-inferiority setting, there are clinical trial results directly comparing an experimental treatment with an active control, and clinical trial results directly comparing the active control with placebo, and there is an interest in the indirect comparison of the experimental treatment with placebo. In the comparative efficacy setting, there may be separate clinical trial results comparing each of two treatments with placebo, and there is interest in an indirect comparison of the treatments. First, we show that the sample size required for a trial intended to demonstrate superiority through an indirect comparison is always greater than the sample size required for a direct comparison. In addition, by introducing the concept of preservation of effect, we show that the hypothesis addressed in the two settings is identical. Our main result concerns the logical inconsistency between a reasonable criterion for preference of an experimental treatment to a standard treatment and existing regulatory guidance for approval of the experimental treatment on the basis of an indirect comparison. Specifically, the preferred treatment will not always meet the criterion for regulatory approval. This is due to the fact that the experimental treatment bears the burden of overcoming the uncertainty in the effect of the standard treatment. We consider an alternative approval criterion that avoids this logical inconsistency.
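A one-screen numeric sketch of why indirect comparisons need more patients: the variance of an indirect contrast is the sum of the variances of its two direct contrasts, so the effective standard error is always larger and, at fixed power, the sample size inflates with the square of the SE ratio. The standard errors are hypothetical.

```python
import numpy as np

se_tc = 0.10  # SE of the test-vs-control estimate (direct, hypothetical)
se_cp = 0.10  # SE of the control-vs-placebo estimate (direct, hypothetical)

se_direct   = se_tc
se_indirect = np.sqrt(se_tc**2 + se_cp**2)
inflation = (se_indirect / se_direct) ** 2  # sample-size inflation at fixed power
print(f"indirect SE {se_indirect:.3f} vs direct {se_direct:.3f}; "
      f"n inflated ~{inflation:.1f}x")
```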

15.
Missing data, and the bias they can cause, are an almost ever-present concern in clinical trials. The last observation carried forward (LOCF) approach has frequently been used to handle missing data in clinical trials, and is often specified in conjunction with analysis of variance (LOCF ANOVA) for the primary analysis. Considerable advances in statistical methodology, and in our ability to implement these methods, have been made in recent years. Likelihood-based, mixed-effects model approaches implemented under the missing at random (MAR) framework are now easy to implement and are commonly used to analyse clinical trial data. Furthermore, such approaches are more robust to the biases from missing data and provide better control of Type I and Type II errors than LOCF ANOVA. Empirical research and analytic proof have demonstrated that the behaviour of LOCF is uncertain and in many situations not conservative. Using LOCF as a composite measure of safety, tolerability and efficacy can lead to erroneous conclusions regarding the effectiveness of a drug. This approach also violates a fundamental basis of statistics, as it involves testing an outcome that is not a physical parameter of the population but rather a quantity that can be influenced by investigator behaviour, trial design, and so on. Practice should shift away from using LOCF ANOVA as the primary analysis and focus on likelihood-based, mixed-effects model approaches developed under the MAR framework, with missing not at random methods used to assess the robustness of the primary analysis.
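To make the contrast concrete, here is a minimal pandas sketch of what LOCF actually does (hypothetical columns and values); the passage's point is that likelihood-based mixed models fitted to the observed data are usually preferable.

```python
import pandas as pd

df = pd.DataFrame({
    "id":    [1, 1, 1, 2, 2, 2],
    "visit": [1, 2, 3, 1, 2, 3],
    "score": [10.0, 8.0, float("nan"), 11.0, float("nan"), float("nan")],
})

# LOCF: within each patient, carry the last observed score forward
df["score_locf"] = (df.sort_values(["id", "visit"])
                      .groupby("id")["score"].ffill())
print(df)
# An MAR-based alternative would instead fit a mixed-effects model to the
# observed data, e.g. statsmodels' mixedlm("score ~ visit", df, groups=df["id"]).
```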

16.
Statistics, 2012, 46(6): 1187-1209.
According to the general law of likelihood, the strength of statistical evidence for a hypothesis as opposed to its alternative is the ratio of their likelihoods, each maximized over the parameter of interest. Consider the problem of assessing the weight of evidence for each of several hypotheses. Under a realistic model with a free parameter for each alternative hypothesis, this leads to weighing evidence without any shrinkage toward a presumption of the truth of each null hypothesis. That lack of shrinkage can lead to many false positives in settings with large numbers of hypotheses. A related problem is that point hypotheses cannot have more support than their alternatives. Both problems may be solved by fusing the realistic model with a model of a more restricted parameter space for use with the general law of likelihood. Applying the proposed framework of model fusion to data sets from genomics and education yields intuitively reasonable weights of evidence.
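A quick sketch of the general law of likelihood for a point null about a normal mean, with simulated data: evidence is the maximized likelihood under the alternative divided by the likelihood under the null. Because the alternative has a free parameter, the ratio is at least 1 by construction, illustrating the "no shrinkage" and "point hypotheses cannot have more support" issues the abstract raises; the model-fusion remedy is not reproduced.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
x = rng.normal(0.0, 1.0, 25)   # data actually generated under the null

mu0 = 0.0
mle = x.mean()                 # free parameter, maximized under the alternative
loglik = lambda mu: stats.norm.logpdf(x, mu, 1.0).sum()
lr = np.exp(loglik(mle) - loglik(mu0))
print(f"likelihood ratio H1:H0 = {lr:.2f} (>= 1 by construction)")
```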

17.
There is considerable debate surrounding the choice of methods to estimate the information fraction for futility monitoring in a randomized non-inferiority maximum-duration trial. This question was motivated by a pediatric oncology study that aimed to establish non-inferiority for two primary outcomes. While non-inferiority was determined for one outcome, the futility monitoring of the other outcome failed to stop the trial early, despite accumulating evidence of inferiority. For a one-sided trial design in which the intervention is inferior to the standard therapy, futility monitoring should provide the opportunity to terminate the trial early. Our research focuses on the Total Control Only (TCO) method, which estimates the information fraction as the ratio of observed to total events exclusively within the standard-treatment arm, and investigates its properties in stopping a trial early in favor of inferiority. Simulation results comparing the TCO method with two alternatives, one based on the assumption of an inferior treatment effect (TH0) and the other based on a specified hypothesis of a non-inferior treatment effect (THA), are provided under various pediatric oncology trial design settings. The TCO method is the only one that provides unbiased information-fraction estimates regardless of the hypothesis assumptions, and it exhibits good power and a comparable type I error rate at each interim analysis relative to the other methods. Although no method is uniformly superior on all criteria, the TCO method possesses favorable characteristics, making it a compelling choice for estimating the information fraction when the aim is to reduce cancer treatment-related adverse outcomes.
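A tiny sketch contrasting information-fraction estimates at an interim look; this is my reading of the three approaches the abstract compares, with invented counts, where the "total" denominators are the design's expected event totals and the TH0/THA denominators depend on the assumed hypothesis.

```python
d_ctrl_obs, d_ctrl_total = 48, 120    # events observed / expected, control arm only
d_all_obs = 90                        # events observed in both arms combined
d_total_h0, d_total_ha = 240, 210     # expected totals under H0 and under H_A

if_tco = d_ctrl_obs / d_ctrl_total    # Total Control Only: standard-treatment arm only
if_th0 = d_all_obs / d_total_h0       # denominator assumes the inferior-effect hypothesis
if_tha = d_all_obs / d_total_ha       # denominator assumes the non-inferior-effect hypothesis
print(if_tco, if_th0, if_tha)
```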

18.
Traditionally, noninferiority hypotheses have been tested using a frequentist method with a fixed margin. Given that information for the control group is often available from previous studies, it is interesting to consider a Bayesian approach in which information is "borrowed" for the control group to improve efficiency. However, construction of an appropriate informative prior can be challenging. In this paper, we consider a hybrid Bayesian approach for testing noninferiority hypotheses in studies with a binary endpoint. To account for heterogeneity between the historical information and the current trial for the control group, a dynamic P-value-based power prior parameter is proposed to adjust the amount of information borrowed from the historical data. This approach extends the simple test-then-pool method to allow a continuous discounting power parameter. An adjusted α level is also proposed to better control the type I error. Simulations are conducted to investigate the performance of the proposed method and to make comparisons with other methods, including test-then-pool and hierarchical modeling. The methods are illustrated with data from vaccine clinical trials.
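A sketch of the test-then-discount idea for a binary control rate: the borrowing weight delta is a smooth function of the p-value from a homogeneity test between historical and current control data, so consistent data are pooled heavily and discordant data are discounted. The functional form of delta and all counts are my assumptions; the paper's exact calibration differs.

```python
import numpy as np
from scipy import stats

def dynamic_delta(x0, n0, xc, nc, power=2.0):
    """Borrowing weight from a homogeneity test of historical vs current control."""
    table = np.array([[x0, n0 - x0], [xc, nc - xc]])
    _, p = stats.fisher_exact(table)
    return p ** (1.0 / power)   # large p (consistent data) -> more borrowing

x0, n0 = 40, 100   # historical control (hypothetical)
xc, nc = 30, 100   # current control (hypothetical)
delta = dynamic_delta(x0, n0, xc, nc)

# power-prior posterior for the current control rate (Beta(1,1) baseline)
a, b = 1 + xc + delta * x0, 1 + nc - xc + delta * (n0 - x0)
print(f"delta = {delta:.2f}, posterior mean control rate = {a / (a + b):.3f}")
```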

19.
The increasing concern over antibacterial resistance has been well documented, as has the relative lack of antibiotic development. This paradox is in part due to challenges with the clinical development of antibiotics. Because of their rapid progression, untreated bacterial infections are associated with significant morbidity and mortality; as a consequence, placebo-controlled studies of new agents are unethical. Rather, pivotal development studies are mostly conducted using non-inferiority designs against an active comparator. Further, infections caused by comparator-resistant isolates must usually be excluded from the trial programme. Unfortunately, the placebo-controlled data classically used in support of non-inferiority designs are largely unavailable for antibiotics: the only available data are from the 1930s and 1940s, and their use raises significant concerns regarding constancy and assay sensitivity. Extended public debate on this challenge has led some to propose solutions in which these concerns are addressed through very conservative approaches to trial design, endpoints and non-inferiority margins, in some cases leading to potentially impractical studies. To compound the challenge, different regulatory authorities appear to be taking different approaches to these key issues. If harmonisation does not occur, antibiotic development will become increasingly challenging, with the risk of further decreases in the amount of antibiotic drug development. With clarity on regulatory requirements and the ability to feasibly conduct global development programmes, however, it should be possible to bring much-needed additional antibiotics to patients.

20.
One of the cornerstones of any non-inferiority trial is the choice of the non-inferiority margin delta. This threshold of clinical relevance is very difficult to determine, and in practice delta is often "negotiated" between the sponsor of the trial and the regulatory agencies. However, for patient-reported, or more precisely patient-observed, outcomes, the patients' minimal clinically important difference (MCID) can be determined empirically by relating the treatment effect, for example a change on a 100-mm visual analogue scale, to the patient's satisfaction with the change. This MCID can then be used to define delta. We used an anchor-based approach, with non-parametric discriminant analysis and ROC analysis, and a distribution-based approach, with Norman's half-standard-deviation rule, to determine delta in three examples: endometriosis-related pelvic pain measured on a 100-mm visual analogue scale, facial acne measured by lesion counts, and hot flush counts. In each example, all three methods yielded quite similar results. In two of the cases, the empirically derived MCIDs were smaller than or similar to deltas used previously in non-inferiority trials; in the third case, the empirically derived MCID was used to derive a responder definition that was accepted by the FDA. In conclusion, for patient-observed endpoints, delta can be derived empirically. In our view, this is a better approach than asking the clinician for a "nice round number" for delta, such as 10, 50%, π, e, or i.
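A sketch of the two MCID approaches the abstract mentions: an anchor-based ROC cutpoint (Youden index) separating "satisfied" from "unsatisfied" patients by their change score, and Norman's half-standard-deviation rule. The data are simulated stand-ins for, e.g., change on a 100-mm VAS.

```python
import numpy as np

rng = np.random.default_rng(21)
change = np.concatenate([rng.normal(25, 12, 150),   # patients satisfied with change
                         rng.normal(8, 12, 150)])   # patients not satisfied
satisfied = np.r_[np.ones(150, bool), np.zeros(150, bool)]

# ROC over candidate cutpoints: pick the one maximizing sensitivity + specificity - 1
cuts = np.unique(change)
youden = [(change[satisfied] >= c).mean()           # sensitivity
          + (change[~satisfied] < c).mean() - 1     # + specificity - 1
          for c in cuts]
mcid_roc = cuts[int(np.argmax(youden))]

mcid_half_sd = 0.5 * change.std(ddof=1)             # distribution-based rule
print(f"anchor-based MCID ~ {mcid_roc:.1f} mm; half-SD rule ~ {mcid_half_sd:.1f} mm")
```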
