Similar articles
20 similar articles found.
1.
The minimum clinically important difference (MCID) between treatments is recognized as a key concept in the design and interpretation of results from a clinical trial. Yet even assuming such a difference can be derived, it is not necessarily clear how it should be used. In this paper, we consider three possible roles for the MCID: (1) using the MCID to determine the required sample size so that the trial has a pre-specified statistical power to conclude a significant treatment effect when the treatment effect is equal to the MCID; (2) requiring, with high probability, that the observed treatment effect in a trial, in addition to being statistically significant, be at least as large as the MCID; (3) demonstrating via hypothesis testing that the effect of the new treatment is at least as large as the MCID. We examine the implications of these three roles on sample size, expectations of a new treatment, and the chance of a successful trial. We also give our opinion on how the MCID should generally be used in the design and interpretation of results from a clinical trial.
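The contrast between roles (1) and (2) can be made concrete with a small numerical sketch for a hypothetical two-arm trial with known variance and a normal approximation (the function names and inputs are illustrative, not from the paper). Sizing for power at the MCID, as in role (1), still leaves about a 50% chance that the observed effect falls below the MCID when the true effect equals it, which is what role (2) guards against:

```python
from statistics import NormalDist

def n_per_arm(delta, sigma, alpha=0.05, power=0.9):
    """Role (1): per-arm sample size so a two-sided level-alpha z-test
    has the stated power when the true effect equals the MCID (delta)."""
    z = NormalDist()
    return 2 * sigma**2 * (z.inv_cdf(1 - alpha / 2) + z.inv_cdf(power))**2 / delta**2

def prob_observed_at_least(mcid, true_effect, sigma, n):
    """Role (2): chance the observed effect estimate is >= the MCID,
    given the true effect and per-arm sample size n."""
    se = sigma * (2 / n) ** 0.5
    return 1 - NormalDist().cdf((mcid - true_effect) / se)

n = n_per_arm(delta=0.5, sigma=1.0)            # ~84 subjects per arm
p = prob_observed_at_least(0.5, 0.5, 1.0, n)   # exactly 0.5 when true effect = MCID
```

The second function makes the paper's point directly: however large the trial, an observed effect sized under role (1) exceeds the MCID only half the time when the true effect sits exactly at the MCID.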

2.
We present likelihood methods for defining the non-inferiority margin and measuring the strength of evidence in non-inferiority trials using the 'fixed-margin' framework. Likelihood methods are used to (1) evaluate and combine the evidence from historical trials to define the non-inferiority margin, (2) assess and report the smallest non-inferiority margin supported by the data, and (3) assess potential violations of the constancy assumption. Data from six aspirin-controlled trials for acute coronary syndrome and data from an active-controlled trial for acute coronary syndrome, Organisation to Assess Strategies for Ischemic Syndromes (OASIS-2) trial, are used for illustration. The likelihood framework offers important theoretical and practical advantages when measuring the strength of evidence in non-inferiority trials. Besides eliminating the influence of sample spaces and prior probabilities on the 'strength of evidence in the data', the likelihood approach maintains good frequentist properties. Violations of the constancy assumption can be assessed in the likelihood framework when it is appropriate to assume a unifying regression model for trial data and a constant control effect including a control rate parameter and a placebo rate parameter across historical placebo controlled trials and the non-inferiority trial. In situations where the statistical non-inferiority margin is data driven, lower likelihood support interval limits provide plausibly conservative candidate margins.

3.
In organ transplantation, placebo-controlled clinical trials are not possible for ethical reasons, and hence non-inferiority trials are used to evaluate new drugs. Patients with a transplanted kidney typically receive three to four immunosuppressant drugs to prevent organ rejection. In the described case of a non-inferiority trial for one of these immunosuppressants, the dose is changed, and another is replaced by an investigational drug. This test regimen is compared with the active control regimen. Justification for the non-inferiority margin is challenging as the putative placebo has never been studied in a clinical trial. We propose the use of a random-effects meta-regression, where each immunosuppressant component of the regimen enters as a covariate. This allows us to make inference on the difference between the putative placebo and the active control. From this, various methods can then be used to derive the non-inferiority margin. A hybrid of the 95/95 and synthesis approaches is suggested. Data from 51 trials with a total of 17,002 patients were used in the meta-regression. Our approach was motivated by a recent large confirmatory trial in kidney transplantation. The results and the methodological documents of this evaluation were submitted to the Food and Drug Administration, which accepted our proposed non-inferiority margin and our rationale.

4.
In a non-inferiority trial assessing a new investigative treatment, it may be necessary to consider an indirect comparison with placebo through the active control in the current trial. We can therefore exploit the fact that the active control is common to the comparisons of the investigative treatment and placebo. In analysing a non-inferiority trial, the ABC of Assay sensitivity, Bias minimisation and Constancy assumption needs to be considered. We highlight how the ABC assumptions can fail when there is placebo creep or a shift in the patient population. In this situation, a belief about the placebo response, expressed as a prior probability in a Bayesian formulation, can be combined with the observed treatment effects to set the non-inferiority limit.

5.
Two multiple decision procedures are proposed for unifying the non-inferiority, equivalence and superiority tests in a comparative clinical trial of a new drug against an active control. One is a confidence-set method with confidence coefficient 0.95 that improves on the conventional 0.95 confidence interval with respect to the producer's risk, and in some cases also the consumer's risk. It requires that 0 be included in the region as well as that the non-inferiority margin be cleared, so that a trial with a somewhat large number of subjects and an inappropriately large non-inferiority margin, aimed at proving non-inferiority of a drug that is actually inferior, should be unsuccessful. The other is a closed testing procedure which combines the one- and two-sided tests by applying the partitioning principle and justifies the switching procedure by unifying the non-inferiority, equivalence and superiority tests. Regarding non-inferiority in particular, the proposed method simultaneously justifies the old Japanese Statistical Guideline (one-sided 0.05 test) and the International Guideline ICH E9 (one-sided 0.025 test). The method is particularly attractive in grading the strength of evidence of relative efficacy of the test drug against the control at five levels according to the achievement of the clinical trial. The meaning of the non-inferiority test and the rationale for switching from it to a superiority test are also discussed.

6.
Multiple-arm dose-response superiority trials are widely studied for continuous and binary endpoints, while non-inferiority designs have been studied recently in two-arm trials. In this paper, a unified asymptotic formulation of sample size calculation for k-arm (k ≥ 1) trials with different endpoints (continuous, binary and survival) is derived for both superiority and non-inferiority designs. The proposed method covers sample size calculation for single-arm and k-arm (k ≥ 2) designs with survival endpoints, which has not previously been covered in the statistical literature. A simple, closed form for power and sample size calculations is derived from a contrast test. Application examples are provided. The effect of the contrasts on power is discussed, and a ready-to-use SAS program for sample size calculation is provided.
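The closed-form contrast-based calculation can be sketched for the continuous-endpoint case (a simplified illustration with assumed means, contrast coefficients and allocation fractions; this is not the paper's SAS program, and it omits the binary and survival cases):

```python
from statistics import NormalDist

def contrast_n_total(mu, c, frac, sigma, alpha=0.025, power=0.9):
    """Total sample size for a one-sided level-alpha single-contrast z-test of
    H0: sum(c_i * mu_i) <= 0, with a continuous endpoint, common sigma,
    and per-arm allocation fractions frac (summing to 1)."""
    effect = sum(ci * mi for ci, mi in zip(c, mu))
    variance = sigma**2 * sum(ci**2 / fi for ci, fi in zip(c, frac))
    z = NormalDist().inv_cdf(1 - alpha) + NormalDist().inv_cdf(power)
    return z**2 * variance / effect**2

# Linear-trend contrast across three equally allocated dose arms (assumed means)
n = contrast_n_total(mu=[0.0, 0.3, 0.6], c=[-1, 0, 1],
                     frac=[1/3, 1/3, 1/3], sigma=1.0)   # ~175 subjects in total
```

Changing `c` shows the effect of the contrast on power that the abstract mentions: a contrast well matched to the true dose-response shape yields a smaller total sample size.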

7.
In oncology, it may not always be possible to evaluate the efficacy of new medicines in placebo-controlled trials. Furthermore, while some newer, biologically targeted anti-cancer treatments may be expected to deliver therapeutic benefit in terms of better tolerability or improved symptom control, they may not always be expected to provide increased efficacy relative to existing therapies. This naturally leads to the use of active-control, non-inferiority trials to evaluate such treatments. In recent evaluations of anti-cancer treatments, the non-inferiority margin has often been defined in terms of demonstrating that at least 50% of the active control effect has been retained by the new drug, using methods such as those described by Rothmann et al., Statistics in Medicine 2003; 22:239-264, and Wang and Hung, Controlled Clinical Trials 2003; 24:147-155. However, this approach can lead to prohibitively large clinical trials and results in a tendency to dichotomize trial outcome as either 'success' or 'failure', which oversimplifies interpretation. With relatively modest modification, these methods can be used to define a stepwise approach to design and analysis. In the first, design step, the trial is sized to show indirectly that the new drug would have beaten placebo; in the second, analysis step, the probability that the new drug is superior to placebo is assessed; if that probability is sufficiently high, in the third and final step the relative efficacy of the new drug to control is assessed on a continuum of effect retention via an 'effect retention likelihood plot'. This stepwise approach is likely to provide a more complete assessment of relative efficacy so that the value of new treatments can be better judged.

8.
We consider regression modeling of survival data subject to right censoring when the full effect of some covariates (e.g. treatment) may be delayed. Several models are proposed, and methods for computing the maximum likelihood estimates of the parameters are described. Consistency and asymptotic normality properties of the estimators are derived. Some numerical examples are used to illustrate the implementation of the modeling and estimation procedures. Finally, we apply the theory to interim data from a large-scale randomized clinical trial for the prevention of skin cancer.

9.
An approach to the analysis of time-dependent ordinal quality score data from robust design experiments is developed and applied to an experiment from commercial horticultural research, using concepts of product robustness and longevity that are familiar to analysts in engineering research. A two-stage analysis is used to develop models describing the effects of a number of experimental treatments on the rate of post-sales product quality decline. The first stage uses a polynomial function on a transformed scale to approximate the quality decline for an individual experimental unit using derived coefficients and the second stage uses a joint mean and dispersion model to investigate the effects of the experimental treatments on these derived coefficients. The approach, developed specifically for an application in horticulture, is exemplified with data from a trial testing ornamental plants that are subjected to a range of treatments during production and home-life. The results of the analysis show how a number of control and noise factors affect the rate of post-production quality decline. Although the model is used to analyse quality data from a trial on ornamental plants, the approach developed is expected to be more generally applicable to a wide range of other complex production systems.

10.
There is considerable debate surrounding the choice of methods to estimate the information fraction for futility monitoring in a randomized non-inferiority maximum duration trial. This question was motivated by a pediatric oncology study that aimed to establish non-inferiority for two primary outcomes. While non-inferiority was determined for one outcome, the futility monitoring of the other outcome failed to stop the trial early, despite accumulating evidence of inferiority. For a one-sided trial design in which the intervention is inferior to the standard therapy, futility monitoring should provide the opportunity to terminate the trial early. Our research focuses on the Total Control Only (TCO) method, which is defined as a ratio of observed events to total events exclusively within the standard treatment regimen. We investigate its properties in stopping a trial early in favor of inferiority. Simulation results comparing the TCO method with two alternatives, one based on the assumption of an inferior treatment effect (TH0) and the other based on a specified hypothesis of a non-inferior treatment effect (THA), are provided under various pediatric oncology trial design settings. The TCO method is the only method that provides unbiased information fraction estimates regardless of the hypothesis assumptions, and it exhibits good power and a comparable type I error rate at each interim analysis relative to the other methods. Although none of the methods is uniformly superior on all criteria, the TCO method possesses favorable characteristics, making it a compelling choice for estimating the information fraction when the aim is to reduce cancer treatment-related adverse outcomes.
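As defined above, the TCO information fraction uses only the standard-therapy arm, so it does not depend on any assumed treatment effect. A minimal sketch (the function name and interface are illustrative, not from the paper):

```python
def information_fraction_tco(control_events, planned_control_events):
    """Total Control Only (TCO) information fraction: observed events in the
    standard-therapy arm divided by the total events planned for that arm
    alone, so the estimate is free of hypothesis assumptions about the
    experimental arm."""
    if planned_control_events <= 0:
        raise ValueError("planned_control_events must be positive")
    return control_events / planned_control_events

# e.g. 45 of 90 planned control-arm events observed -> information fraction 0.5
f = information_fraction_tco(45, 90)
```

By contrast, TH0- and THA-style estimates project the total event count under an assumed treatment effect, which is where the bias described in the abstract can enter.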

11.
Binary data are commonly used as responses to assess the effects of independent variables in longitudinal factorial studies. Such effects can be assessed in terms of the rate difference (RD), the odds ratio (OR), or the rate ratio (RR). Traditionally, logistic regression has been the recommended method, with statistical comparisons made in terms of the OR. Statistical inference in terms of the RD and RR can then be derived using the delta method. However, this approach is hard to realize when repeated measures occur. To obtain statistical inference in longitudinal factorial studies, the current article shows that the mixed-effects model for repeated measures, the logistic regression for repeated measures, the log-transformed regression for repeated measures, and the rank-based methods are all valid methods that lead to inference in terms of the RD, OR, and RR, respectively. Asymptotic linear relationships between the estimators of the regression coefficients of these models are derived when the weight (working covariance) matrix is an identity matrix. Conditions for the Wald-type tests to be asymptotically equivalent in these models are provided, and powers are compared using simulation studies. A phase III clinical trial is used to illustrate the investigated methods, with corresponding SAS® code supplied.

12.
The development of a new pneumococcal conjugate vaccine involves assessing the responses of the new serotypes included in the vaccine. The World Health Organization guidance states that the response from each new serotype in the new vaccine should be compared with the aggregate response from the existing vaccine to evaluate non-inferiority. However, no details are provided on how to define and estimate the aggregate response and what methods to use for non-inferiority comparisons. We investigate several methods to estimate the aggregate response based on binary data including simple average, model-based, and lowest response methods. The response of each new serotype is then compared with the estimated aggregate response for non-inferiority. The non-inferiority test p-value and confidence interval are obtained from Miettinen and Nurminen's method, using an effective sample size. The methods are evaluated using simulations and demonstrated with a real clinical trial example.

13.
A comparison of two classification schemes based on restricted mean lifetime   Cited by: 1 (1 self-citation, 0 by others)
Jin Hua. 《统计研究》 (Statistical Research) 2008, 25(1): 98-103
Abstract: Prognostic classification schemes, such as disease staging systems, are often used to support clinical management decisions. Several classification schemes may be available for the same disease population, so it is necessary to compare them in order to identify the best one, or to find alternatives that are not inferior to the best. Based on restricted mean lifetime, this paper introduces a separability index to measure the prognostic classification efficiency of a scheme. The index can be used to compare the performance of two classification schemes when survival time is the outcome, and in particular supports non-inferiority and equivalence testing. Estimation and testing procedures for the two separability indices are given for paired data. Simulation results suggest that the tests control the type I error rate at moderate sample sizes, and two real examples illustrate the application in clinical medicine.

14.
We consider an approach to deriving Bahadur–Kiefer theorems based on a "delta method" for sequences of minimizers. This approach is used to derive Bahadur–Kiefer theorems for the sample median and other estimators.

15.
Influence diagnostics are investigated in this study. In particular, an approach based on the generalized linear mixed model setting is presented for formulating ordered categorical counts in stratified contingency tables. Deletion diagnostics and their first-order approximations are developed for assessing the stratum-specific influence on parameter estimates in the models. To illustrate the proposed model diagnostic technique, the method is applied to analyze two sets of data: a clinical trial and a survey study. The two examples demonstrate that the presence of influential strata may substantially change the results in ordinal contingency table analysis.

16.
The increasing concern of antibacterial resistance has been well documented, as has the relative lack of antibiotic development. This paradox is in part due to challenges with the clinical development of antibiotics. Because of their rapid progression, untreated bacterial infections are associated with significant morbidity and mortality. As a consequence, placebo-controlled studies of new agents are unethical. Rather, pivotal development studies are mostly conducted using non-inferiority designs versus an active comparator. Further, infections caused by comparator-resistant isolates must usually be excluded from the trial programme. Unfortunately, the placebo-controlled data classically used in support of non-inferiority designs are largely unavailable for antibiotics. The only available data are from the 1930s and 1940s, and their use is associated with significant concerns regarding constancy and assay sensitivity. Extended public debate on this challenge has led some to propose solutions in which these concerns are addressed by using very conservative approaches to trial design, endpoints and non-inferiority margins, in some cases leading to potentially impractical studies. To compound this challenge, different Regulatory Authorities seem to be taking different approaches to these key issues. If harmonisation does not occur, antibiotic development will become increasingly challenging, with the risk of further decreases in the amount of antibiotic drug development. However, with clarity on Regulatory requirements and the ability to feasibly conduct global development programmes, it should be possible to bring much needed additional antibiotics to patients.

17.
We consider the problem of sample size calculation for non-inferiority based on the hazard ratio in time-to-event trials where overall study duration is fixed and subject enrollment is staggered with variable follow-up. An adaptation of previously developed formulae for the superiority framework is presented that specifically allows for effect reversal under the non-inferiority setting, and its consequent effect on variance. Empirical performance is assessed through a small simulation study, and an example based on an ongoing trial is presented. The formulae are straightforward to program and may prove a useful tool in planning trials of this type.
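Ignoring staggered entry and variable follow-up, the core event-count calculation for a hazard-ratio non-inferiority test follows a standard Schoenfeld-type formula. The sketch below is a simplified baseline under proportional hazards, not the paper's adapted formulae (which additionally handle effect reversal and its impact on variance); the margin and parameter values are illustrative:

```python
from math import log
from statistics import NormalDist

def required_events(margin, true_hr=1.0, alpha=0.025, power=0.9, alloc=0.5):
    """Schoenfeld-type total event count to reject H0: HR >= margin with a
    one-sided level-alpha log-rank test, assuming the stated true hazard
    ratio and allocation fraction alloc to the experimental arm."""
    z = NormalDist().inv_cdf(1 - alpha) + NormalDist().inv_cdf(power)
    return z**2 / (alloc * (1 - alloc) * (log(margin) - log(true_hr))**2)

d = required_events(margin=1.3)   # ~611 events for a 1.3 margin, true HR = 1
```

The event count, not the enrolled sample size, drives power here; converting events to subjects is where the fixed study duration and staggered accrual treated in the paper come into play.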

18.
For testing the non-inferiority (or equivalence) of an experimental treatment to a standard treatment, the odds ratio (OR) of patient response rates has been recommended to measure the relative treatment efficacy. On the basis of an exact test procedure proposed elsewhere for a simple crossover design, we develop an exact sample-size calculation procedure with respect to the OR of patient response rates for a desired power of detecting non-inferiority at a given nominal type I error. We note that the sample size calculated for a desired power based on an asymptotic test procedure can be much smaller than that based on the exact test procedure in a given situation. We further discuss the advantages and disadvantages of sample-size calculation using the exact and asymptotic test procedures. An example comparing two inhalation devices for asthmatics illustrates the use of the sample-size calculation procedure developed here.

19.
In many biomedical applications, tests for the classical hypotheses based on the difference of treatment means in a one-way layout can be replaced by tests for ratios (or tests for relative changes). This approach is well noted for its simplicity in defining the margins, as for example in tests for non-inferiority. Here, we derive approximate and efficient sample size formulas in a multiple testing situation and then thoroughly investigate the relative performance of hypothesis testing based on the ratios of treatment means when compared with differences of means. The results will be illustrated with an example on simultaneous tests for non-inferiority.
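For a single ratio-based non-inferiority comparison, the sample size calculation can be sketched by recasting H0: mu_T/mu_C <= theta as the linear contrast mu_T - theta*mu_C <= 0. This is a simplified one-comparison illustration with an assumed common coefficient of variation and equal allocation; the paper's formulas additionally handle the multiple-testing situation:

```python
from statistics import NormalDist

def ratio_ni_n_per_arm(theta, mu_c, mu_t, cv, alpha=0.025, power=0.9):
    """Per-arm sample size for the ratio-based non-inferiority test
    H0: mu_t / mu_c <= theta, recast as the linear contrast
    mu_t - theta * mu_c <= 0, with sigma = cv * mu_c and equal allocation."""
    sigma = cv * mu_c
    z = NormalDist().inv_cdf(1 - alpha) + NormalDist().inv_cdf(power)
    return sigma**2 * (1 + theta**2) * z**2 / (mu_t - theta * mu_c)**2

n = ratio_ni_n_per_arm(theta=0.8, mu_c=10.0, mu_t=10.0, cv=0.3)   # ~39 per arm
```

The simplicity the abstract notes is visible here: the margin theta is a dimensionless fraction of the control mean, so no absolute-scale margin needs to be specified.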

20.
Noninferiority trials intend to show that a new treatment is 'not worse' than a standard-of-care active control, and such a treatment can be used as an alternative when it is likely to cause fewer side effects than the active control. In the case of time-to-event endpoints, existing methods of sample size calculation assume either proportional hazards between the two study arms or exponentially distributed lifetimes. In scenarios where these assumptions do not hold, there are few reliable methods for calculating the sample size for a time-to-event noninferiority trial. Additionally, the choice of the non-inferiority margin is obtained either from a meta-analysis of prior studies, from strongly justifiable 'expert opinion', or from a 'well conducted' definitive large-sample study. Thus, when historical data do not support the traditional assumptions, it would not be appropriate to use these methods to design a noninferiority trial. For such scenarios, an alternative method of sample size calculation based on the assumption of Proportional Time is proposed. This method utilizes the generalized gamma ratio distribution to perform the sample size calculations. A practical example is discussed, followed by insights on the choice of the non-inferiority margin and on the indirect testing of superiority of treatment compared to placebo.
Keywords: generalized gamma, noninferiority, non-proportional hazards, proportional time, relative time, sample size
