Similar Literature
Found 20 similar documents (search time: 31 ms)
1.
The clinical efficacy of a new treatment may often be better evaluated by two or more co-primary endpoints. Recently, in pharmaceutical drug development, there has been increasing discussion regarding establishing statistically significant favorable results on more than one endpoint in comparisons between treatments, which is referred to as the problem of multiple co-primary endpoints. Several methods have been proposed for calculating the sample size required to design a trial with multiple correlated co-primary endpoints. However, because these methods demand considerable mathematical sophistication and programming skill from their users, their uptake in practice may be limited. To improve the convenience of these methods, in this paper we provide a useful formula with accompanying numerical tables for sample size calculations to design two-treatment clinical trials in which the efficacy of a new treatment is demonstrated on continuous co-primary endpoints. In addition, we provide examples illustrating sample size calculations made using the formula. The formula and the tables, which can be read according to the patterns of correlations and effect size ratios expected among the co-primary endpoints, make it convenient to evaluate the required sample size promptly.
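As a rough illustration of this type of calculation, the sketch below (a minimal Python sketch, not the paper's formula) finds the smallest per-arm n such that one-sided z-tests succeed on both of two correlated continuous endpoints simultaneously, assuming known variances and a bivariate normal test statistic; all parameter values are hypothetical.

```python
import numpy as np
from scipy.stats import norm, multivariate_normal

def coprimary_power(n, deltas, sigmas, rho, alpha=0.025):
    """Power to show superiority on BOTH continuous endpoints
    (one-sided z-tests, n subjects per arm, known variances)."""
    c = norm.ppf(1 - alpha)
    m = np.sqrt(n / 2) * np.array(deltas) / np.array(sigmas)  # noncentrality
    cov = [[1, rho], [rho, 1]]
    # P(Z1 > c, Z2 > c) for a BVN with mean m and correlation rho
    return multivariate_normal(mean=[0, 0], cov=cov).cdf(m - c)

def coprimary_n(deltas, sigmas, rho, alpha=0.025, power=0.8):
    n = 2
    while coprimary_power(n, deltas, sigmas, rho, alpha) < power:
        n += 1
    return n

# e.g. effect sizes 0.3 and 0.24 (effect size ratio 0.8), correlation 0.5
print(coprimary_n(deltas=[0.3, 0.24], sigmas=[1, 1], rho=0.5))
```

Reading the required n across grids of rho and effect size ratios is exactly what the paper's tables are designed to short-cut.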

2.
In a clinical trial comparing drug with placebo, where there are multiple primary endpoints, we consider testing problems where an efficacious drug effect can be claimed only if statistical significance is demonstrated at the nominal level for all endpoints. Under the assumption that the data are multivariate normal, the multiple endpoint-testing problem is formulated. The usual testing procedure involves testing each endpoint separately at the same significance level using two-sample t-tests, and claiming drug efficacy only if each t-statistic is significant. In this paper we investigate properties of this procedure. We show that it is identical to both an intersection union test and the likelihood ratio test. A simple expression for the p-value is given. The level and power function are studied; it is shown that the test may be conservative and that it is biased. Computable bounds for the power function are established.
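The intersection-union structure means the overall p-value is simply the largest of the per-endpoint p-values. A minimal sketch with simulated data (all numbers hypothetical; requires a SciPy recent enough for one-sided t-tests):

```python
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(1)
n = 100
# two correlated endpoints per subject (illustrative data only)
cov = [[1.0, 0.5], [0.5, 1.0]]
drug = rng.multivariate_normal([0.4, 0.3], cov, size=n)
placebo = rng.multivariate_normal([0.0, 0.0], cov, size=n)

# one two-sample t-test per endpoint, each at the nominal level
p_values = [ttest_ind(drug[:, k], placebo[:, k], alternative="greater").pvalue
            for k in range(2)]
p_iut = max(p_values)  # IUT p-value: significant only if ALL endpoints are
print(p_values, "reject:", p_iut < 0.025)
```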

3.
Multiple-arm dose-response superiority trials are widely studied for continuous and binary endpoints, while non-inferiority designs have recently been studied in two-arm trials. In this paper, a unified asymptotic formulation of the sample size calculation for k-arm (k > 0) trials with different endpoints (continuous, binary and survival) is derived for both superiority and non-inferiority designs. The proposed method covers the sample size calculation for single-arm and k-arm (k ≥ 2) designs with survival endpoints, which had not been covered in the statistical literature. A simple, closed form for power and sample size calculations is derived from a contrast test. Application examples are provided. The effect of the contrasts on the power is discussed, and a ready-to-use SAS program for sample size calculation is provided.
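For the continuous-endpoint case, a closed-form contrast-test sample size can be sketched as below (a hedged Python reconstruction of a standard single-contrast z-test formula, not necessarily the paper's exact expression); the means, contrast and allocation fractions are hypothetical.

```python
import numpy as np
from scipy.stats import norm

def contrast_total_n(mu, c, sigma, frac, alpha=0.025, power=0.8):
    """Total N for a one-sided single-contrast z-test across k arms
    (continuous endpoint, common sigma, allocation fractions frac)."""
    mu, c, frac = map(np.asarray, (mu, c, frac))
    assert abs(c.sum()) < 1e-9, "contrast coefficients must sum to zero"
    effect = c @ mu
    z = norm.ppf(1 - alpha) + norm.ppf(power)
    return np.ceil(sigma**2 * z**2 * np.sum(c**2 / frac) / effect**2)

# 4-arm dose-response trial, linear contrast, equal allocation
print(contrast_total_n(mu=[0, 0.2, 0.35, 0.45], c=[-3, -1, 1, 3],
                       sigma=1.0, frac=[0.25] * 4))
```

The dependence of N on the chosen contrast c is visible directly in the sum(c**2/frac)/ (c @ mu)**2 factor, which is the effect the abstract refers to.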

4.
Some multiple comparison procedures are described for multiple-armed studies. The procedures are appropriate for testing all hypotheses comparing two endpoints across several test arms against a single control group, for example three different fixed doses compared with a placebo. Among the two endpoints, one is designated as primary, such that for a given treatment arm no hypothesis for the secondary endpoint can be rejected unless the hypothesis for the primary endpoint has been rejected. The procedures described control the family-wise error rate in the strong sense at a specified level α.
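A simple member of this family is a Bonferroni split across arms combined with a fixed-sequence primary-then-secondary test within each arm. The sketch below (hypothetical p-values, and a deliberately simple rule without the alpha-recycling refinements such procedures often add) illustrates the gatekeeping logic.

```python
def gatekeeper(p_primary, p_secondary, alpha=0.025):
    """Per-arm fixed-sequence gatekeeper with Bonferroni across arms:
    each arm gets alpha/k, and its secondary endpoint is tested (at the
    same local level) only if its primary endpoint was rejected."""
    k = len(p_primary)
    local = alpha / k
    decisions = []
    for pp, ps in zip(p_primary, p_secondary):
        prim = pp <= local
        sec = prim and ps <= local  # secondary gated by primary
        decisions.append((prim, sec))
    return decisions

# three doses vs placebo: (primary rejected?, secondary rejected?)
print(gatekeeper([0.004, 0.020, 0.30], [0.006, 0.04, 0.01]))
```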

5.
Adaptation of clinical trial design generates many issues that have not been resolved for practical applications, though statistical methodology has advanced greatly. This paper focuses on some methodological issues. In one type of adaptation, such as sample size re-estimation, only the postulated value of a parameter used for planning the trial size may be altered. In another type, the originally intended hypothesis for testing may be modified using the internal data accumulated at an interim time of the trial, for example by changing the primary endpoint or dropping a treatment arm. For sample size re-estimation, we contrast an adaptive test that weights the two-stage test statistics with the statistical information given by the original design against the original sample mean test with a properly corrected critical value. We point out the difficulty of planning a confirmatory trial based on the crude information generated by exploratory trials. Regarding selection of a primary endpoint, we argue that a selection process that allows switching from one endpoint to the other with the internal data of the trial is not very likely to gain a power advantage over the simple process of selecting one of the two endpoints by testing both with an equal split of alpha (Bonferroni adjustment). For dropping a treatment arm, redistributing the remaining sample size of the discontinued arm to the other treatment arms can substantially improve the statistical power for identifying a superior treatment arm. A common and difficult methodological issue is how to select an adaptation rule at the trial planning stage. Pre-specification of the adaptation rule is important for practicality. Changing the originally intended hypothesis for testing with the internal data raises serious concerns among clinical trial researchers.
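The weighted two-stage test mentioned here can be sketched as an inverse-normal combination whose weights are fixed by the original design's information fractions, which is what keeps the type I error intact after a stage-2 sample size change (a minimal sketch under a one-sided normal setting; all numbers hypothetical):

```python
import numpy as np
from scipy.stats import norm

def two_stage_weighted_z(z1, z2, n1, n2_planned):
    """Inverse-normal combination with weights fixed by the ORIGINAL
    design's information fractions; the weights do not change even if
    the stage-2 sample size is re-estimated at the interim."""
    w1 = np.sqrt(n1 / (n1 + n2_planned))
    w2 = np.sqrt(n2_planned / (n1 + n2_planned))
    return w1 * z1 + w2 * z2  # N(0, 1) under the null hypothesis

z = two_stage_weighted_z(z1=1.1, z2=1.7, n1=50, n2_planned=50)
print(z, "reject:", z > norm.ppf(0.975))
```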

6.
In May 2012, the Committee of Health and Medicinal Products issued a concept paper on the need to review the points-to-consider document on multiplicity issues in clinical trials. In preparation for the release of the updated guidance document, Statisticians in the Pharmaceutical Industry held a one-day expert group meeting in January 2013. Topics debated included multiplicity and the drug development process, the usefulness and limitations of newly developed strategies to deal with multiplicity, multiplicity issues arising from interim decisions and multiregional development, and the need for simultaneous confidence intervals (CIs) corresponding to multiple test procedures. A clear message from the meeting was that multiplicity adjustments need to be considered when the intention is to make a formal statement about efficacy or safety based on hypothesis tests. Statisticians have a key role when designing studies to assess what adjustment really means in the context of the research being conducted. More thought during the planning phase needs to be given to multiplicity adjustments for secondary endpoints, given that these are increasingly important in differentiating products in the marketplace. No consensus was reached on the role of simultaneous CIs in the context of superiority trials. It was argued that unadjusted intervals should be employed, as the primary purpose of the intervals is estimation, while the purpose of hypothesis testing is to formally establish an effect. The opposing view was that CIs should correspond to the test decision whenever possible.

7.
Clinical trials of experimental treatments must be designed with primary endpoints that directly measure clinical benefit for patients. In many disease areas, the recognised gold standard primary endpoint can take many years to mature, leading to challenges in the conduct and quality of clinical studies. There is increasing interest in using shorter-term surrogate endpoints as substitutes for costly long-term clinical trial endpoints; such surrogates need to be selected according to biological plausibility, as well as the ability to reliably predict the unobserved treatment effect on the long-term endpoint. A number of statistical methods to evaluate this prediction have been proposed; this paper uses a simulation study to explore one such method in the context of time-to-event surrogates for a time-to-event true endpoint. This two-stage meta-analytic copula method has been extensively studied for time-to-event surrogate endpoints with one event of interest, but thus far has not been explored for the assessment of surrogates which have multiple events of interest, such as those incorporating information directly from the true clinical endpoint. We assess the sensitivity of the method to various factors including strength of association between endpoints, the quantity of data available, and the effect of censoring. In particular, we consider scenarios where there exist very little data on which to assess surrogacy. Results show that the two-stage meta-analytic copula method performs well under certain circumstances and could be considered useful in practice, but demonstrates limitations that may prevent universal use.
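The second (trial-level) stage of such a meta-analytic evaluation amounts to regressing the estimated treatment effects on the true endpoint against those on the surrogate across trials and reading off the trial-level R². The sketch below fabricates trial-level effects purely for illustration and omits the first-stage copula fitting and the within-trial estimation error.

```python
import numpy as np
from scipy.stats import linregress

rng = np.random.default_rng(7)
n_trials = 10
# hypothetical trial-level log-HRs on surrogate (col 0) and true (col 1)
true_R = 0.9
cov = np.array([[0.04, true_R * 0.04], [true_R * 0.04, 0.04]])
effects = rng.multivariate_normal([-0.3, -0.25], cov, size=n_trials)

# stage 2 of the meta-analytic approach: regress the true-endpoint
# effect on the surrogate effect across trials
fit = linregress(effects[:, 0], effects[:, 1])
print(f"slope={fit.slope:.2f}, trial-level R^2={fit.rvalue**2:.2f}")
```

With only a handful of trials, as in the "very little data" scenarios the abstract mentions, this R² estimate becomes extremely noisy.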

8.
To design a phase III study with a final endpoint and calculate the required sample size for the desired probability of success, we need a good estimate of the treatment effect on that endpoint. It is prudent to fully utilize all available information, including the historical and phase II information on the treatment as well as external data on other treatments. It is not uncommon for a phase II study to use a surrogate endpoint as the primary endpoint and to have no or limited data on the final endpoint. On the other hand, external information from other studies of other treatments on the surrogate and final endpoints may be available to establish a relationship between the treatment effects on the two endpoints. Through this relationship, making full use of the surrogate information may enhance the estimate of the treatment effect on the final endpoint. In this research, we propose a bivariate Bayesian analysis approach to deal with the problem comprehensively. A dynamic borrowing approach is considered to regulate the amount of historical data and surrogate information borrowed, based on the level of consistency. A much simpler frequentist method is also discussed. Simulations are conducted to compare the performance of the different approaches. An example is used to illustrate the application of the methods.
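At its core, borrowing through an externally estimated bivariate normal relationship amounts to a conditional-normal prediction of the final-endpoint effect given the observed surrogate effect. A minimal sketch follows (all inputs hypothetical, and without the paper's dynamic-borrowing machinery):

```python
import numpy as np

def predicted_final_effect(delta_S, mu, Sigma):
    """Conditional-normal prediction of the final-endpoint effect given
    an observed surrogate effect delta_S, under a bivariate normal
    relationship (mu, Sigma) estimated from external studies."""
    mu_S, mu_F = mu
    var_S, cov_SF = Sigma[0][0], Sigma[0][1]
    var_F = Sigma[1][1]
    mean = mu_F + cov_SF / var_S * (delta_S - mu_S)   # E[dF | dS]
    var = var_F - cov_SF**2 / var_S                   # Var[dF | dS]
    return mean, var

# hypothetical external relationship between surrogate and final log-HRs
print(predicted_final_effect(delta_S=-0.35,
                             mu=[-0.30, -0.20],
                             Sigma=[[0.02, 0.012], [0.012, 0.015]]))
```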

9.
Two multiple decision process approaches are proposed for unifying the non-inferiority, equivalence and superiority tests in a comparative clinical trial of a new drug against an active control. One is a confidence-set method with confidence coefficient 0.95 that improves on the conventional 0.95 confidence interval with respect to the producer's risk, and in some cases also the consumer's risk. It requires the region to include 0 as well as to clear the non-inferiority margin, so that a trial with a somewhat large number of subjects and an inappropriately large non-inferiority margin cannot succeed in proving non-inferiority of a drug that is actually inferior. The other is a closed testing procedure that combines the one- and two-sided tests by applying the partitioning principle and justifies the switching procedure by unifying the non-inferiority, equivalence and superiority tests. Regarding non-inferiority in particular, the proposed method simultaneously justifies the old Japanese Statistical Guideline (one-sided 0.05 test) and the International Guideline ICH E9 (one-sided 0.025 test). The method is particularly attractive in that it grades the strength of the evidence for the relative efficacy of the test drug against the control at five levels according to the achievement of the clinical trial. The meaning of the non-inferiority test and the rationale for switching from it to a superiority test are also discussed.
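A crude version of such graded conclusions can be read directly off a single 95% confidence interval for the treatment difference. The sketch below is a simplified reading, not the paper's confidence-set or partitioning construction, and the margin is hypothetical.

```python
def classify(ci_low, ci_high, margin):
    """Grade the evidence from a 95% CI for (test - control) against a
    non-inferiority margin; a simplified reading of the multi-level
    decision idea, not the paper's exact confidence-set construction."""
    if ci_low > 0:
        return "superior"
    if ci_low > -margin and ci_high < margin:
        return "equivalent"
    if ci_low > -margin:
        return "non-inferior"
    return "not demonstrated"

print(classify(-0.02, 0.15, margin=0.10))  # -> "non-inferior"
```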

10.
A placebo-controlled randomized clinical trial is required to demonstrate that an experimental treatment is superior to its corresponding placebo on multiple coprimary endpoints. This is particularly true in the field of neurology. In fact, clinical trials for neurological disorders need to show the superiority of an experimental treatment over a placebo in two coprimary endpoints. Unfortunately, these trials often fail to detect a true treatment effect for the experimental treatment versus the placebo owing to an unexpectedly high placebo response rate. Sequential parallel comparison design (SPCD) can be used to address this problem. However, the SPCD has not yet been discussed in relation to clinical trials with coprimary endpoints. In this article, our aim was to develop a hypothesis-testing method and a method for calculating the corresponding sample size for the SPCD with two coprimary endpoints. In a simulation, we show that the proposed hypothesis-testing method achieves the nominal type I error rate and power and that the proposed sample size calculation method has adequate power accuracy. In addition, the usefulness of our methods is confirmed by returning to an SPCD trial with a single primary endpoint of Alzheimer disease-related agitation.
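One way to sketch an SPCD test is as a weighted combination of the stage-1 (all randomized patients) and stage-2 (re-randomized placebo non-responders) z-statistics, applied per endpoint with success requiring both endpoints to reject. The code below assumes independent stage statistics and a hypothetical weight, and is not the authors' exact procedure.

```python
import numpy as np
from scipy.stats import norm

def spcd_reject(z1, z2, w=0.6, alpha=0.025):
    """SPCD-style weighted combination of the stage-1 and stage-2
    z-statistics for one endpoint, assuming the two stage statistics
    are independent so the combination is N(0,1) under the null."""
    z = (w * z1 + (1 - w) * z2) / np.sqrt(w**2 + (1 - w)**2)
    return z > norm.ppf(1 - alpha)

# coprimary endpoints: claim success only if BOTH combined tests reject
stage1_z, stage2_z = [2.3, 2.1], [1.8, 2.0]
print(all(spcd_reject(a, b) for a, b in zip(stage1_z, stage2_z)))
```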

11.
This paper describes how a multistage analysis strategy for a clinical trial can assess a sequence of hypotheses that pertain to successively more stringent criteria for excess risk exclusion or superiority for a primary endpoint with a low event rate. The criteria for assessment can correspond to excess risk of an adverse event or to a guideline for sufficient efficacy as in the case of vaccine trials. The proposed strategy is implemented through a set of interim analyses, and success for one or more of the less stringent criteria at an interim analysis can be the basis for a regulatory submission, whereas the clinical trial continues to accumulate information to address the more stringent, but not futile, criteria. Simulations show that the proposed strategy is satisfactory for control of type I error, sufficient power, and potential success at interim analyses when the true relative risk is more favorable than assumed for the planned sample size.
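The backbone of such a strategy, checking at an interim look which of a nested sequence of risk-exclusion criteria are already met, can be sketched as follows (illustrative margins and estimates; the group-sequential alpha-spending needed for strict type I error control is omitted):

```python
import numpy as np

def margins_met(log_rr_hat, se, margins, z_crit=1.96):
    """Check which of a sequence of successively stricter relative-risk
    criteria are already excluded at an interim look, i.e. the upper
    confidence bound for RR falls below the margin."""
    upper = np.exp(log_rr_hat + z_crit * se)
    return {m: bool(upper < m) for m in margins}

# e.g. exclude RR margins 2.0 (weakest), then 1.5, then 1.2 (strictest)
print(margins_met(log_rr_hat=np.log(0.9), se=0.18,
                  margins=[2.0, 1.5, 1.2]))
```

A submission could rest on the weaker margins already excluded, while the trial keeps accruing events toward the stricter ones.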

12.
The author considers studies with multiple dependent primary endpoints. Testing hypotheses with multiple primary endpoints may require unmanageably large populations. Composite endpoints consisting of several binary events may be used to reduce a trial to a manageable size. The primary difficulties with composite endpoints are that different endpoints may have different clinical importance and that higher-frequency variables may overwhelm effects of smaller, but equally important, primary outcomes. To compensate for these inconsistencies, we weight each type of event, and the total number of weighted events is counted. To reflect the mutual dependency of primary endpoints and to make the weighting method effective in small clinical trials, we use the Bayesian approach. We assume a multinomial distribution of multiple endpoints with Dirichlet priors and apply the Bayesian test of noninferiority to the calculation of weighting parameters. We use composite endpoints to test hypotheses of superiority in single-arm and two-arm clinical trials. The composite endpoints have a beta distribution. We illustrate this technique with an example. The results provide a statistical procedure for creating composite endpoints.
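A minimal sketch of the Dirichlet-multinomial step: posterior draws of the category probabilities induce a posterior for the weighted composite score. The counts, weights and probability threshold below are all hypothetical, and the paper's noninferiority calibration of the weights is not reproduced.

```python
import numpy as np

rng = np.random.default_rng(3)
# counts of mutually exclusive outcome categories in one arm:
# (death, stroke, hospitalization, none), with clinical weights
counts = np.array([3, 5, 20, 72])
weights = np.array([1.0, 0.8, 0.3, 0.0])
prior = np.ones(4)  # flat Dirichlet prior

# posterior draws of the category probabilities, then of the
# weighted composite event score
draws = rng.dirichlet(prior + counts, size=100_000)
score = draws @ weights
print("P(weighted score < 0.15 | data) =", np.mean(score < 0.15))
```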

13.
In late-phase confirmatory clinical trials in oncology, time-to-event (TTE) endpoints are commonly used as primary endpoints for establishing the efficacy of investigational therapies. Among these TTE endpoints, overall survival (OS) is always considered the gold standard. However, OS data can take years to mature, and its use for measuring efficacy can be confounded by post-treatment rescue therapies or supportive care. Therefore, to accelerate the development process and better characterize the treatment effect of new investigational therapies, other TTE endpoints such as progression-free survival and event-free survival (EFS) are used as primary efficacy endpoints in some confirmatory trials, either as surrogates for OS or as direct measures of clinical benefit. For evaluating novel treatments for acute myeloid leukemia, EFS has gradually been recognized as a direct measure of clinical benefit. However, the application of an EFS endpoint remains controversial, mainly because of debate surrounding the definition of treatment-failure (TF) events. In this article, we investigate the EFS endpoint under the most conservative definition of the timing of TF, namely Day 1 after randomization. Specifically, the corresponding non-proportional-hazards pattern of the EFS endpoint is investigated with both analytical and numerical approaches.
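To see why counting TF at Day 1 breaks proportional hazards, one can write each arm's EFS survival function as a point mass at Day 1 followed by an ordinary event-time distribution. In the stylized sketch below (hypothetical TF probabilities and exponential post-Day-1 times), the cumulative-hazard ratio drifts over follow-up instead of staying constant.

```python
import numpy as np

def efs_surv(t, p_tf, lam):
    """EFS survival with treatment failure counted at Day 1: a point
    mass p_tf at t = 1 day, then exponential times with rate lam."""
    return np.where(t < 1, 1.0, (1 - p_tf) * np.exp(-lam * t))

t = np.linspace(1, 365 * 2, 200)                       # days 1..730
s_ctl = efs_surv(t, p_tf=0.40, lam=np.log(2) / 300)
s_trt = efs_surv(t, p_tf=0.25, lam=np.log(2) / 420)

# cumulative-hazard ratio over time: dominated by the Day-1 mass early,
# then drifting toward the post-Day-1 hazard ratio -> non-proportional
chr_t = np.log(s_trt) / np.log(s_ctl)
print(round(chr_t[0], 3), round(chr_t[-1], 3))
```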

14.
In many morbidity/mortality studies, composite endpoints are considered. Although the primary interest is to demonstrate that an intervention delays death, the expected death rate is often so low that studies focusing exclusively on survival are not feasible. Components of the composite endpoint are chosen such that their occurrence is predictive of time to death. Therefore, if the time to non-fatal events is censored by death, the censoring is no longer independent. As a consequence, the analysis of the components of a composite endpoint cannot reasonably be performed using classical methods for the analysis of survival times, such as Kaplan-Meier estimates or log-rank tests. In this paper we visualize the impact of disregarding dependent censoring in the analysis and discuss practicable alternatives for the analysis of morbidity/mortality studies. Using simulations, we provide evidence that copula-based methods have the potential to deliver practically unbiased estimates of the hazards of components of a composite endpoint. At the same time, they require minimal assumptions, which is important since not all assumptions are verifiable in general because of censoring. Therefore, there are alternative ways to analyze morbidity/mortality studies more appropriately by accounting for the dependencies among the components of composite endpoints. Despite the limitations mentioned, these alternatives can at minimum serve as sensitivity analyses to check the robustness of the currently used methods.
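The bias itself is easy to visualize: simulate the non-fatal-event time and the death time from a copula with strong positive dependence, then analyze the non-fatal component as if death were independent censoring. The sketch below does only this demonstration (hypothetical exponential margins and a Gaussian copula); it does not implement the copula-based estimation methods the paper studies.

```python
import numpy as np
from scipy.stats import norm, expon

rng = np.random.default_rng(11)
n = 200_000
# Gaussian copula: strong positive dependence between time to the
# non-fatal component (t1) and time to death (t2)
z = rng.multivariate_normal([0, 0], [[1, 0.8], [0.8, 1]], size=n)
u = norm.cdf(z)
t1 = expon.ppf(u[:, 0], scale=5.0)  # true marginal mean of t1 is 5
t2 = expon.ppf(u[:, 1], scale=4.0)

# naive analysis: treat death as independent censoring of t1 and
# estimate the exponential mean as exposure / events
obs = np.minimum(t1, t2)
event = t1 <= t2
naive_mean = obs.sum() / event.sum()
print("true marginal mean 5.0, naive estimate:", round(naive_mean, 2))
```

Under independent censoring the exposure/events estimator would be consistent; the dependence injected by the copula makes it visibly biased.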

15.
16.
The choice between single-arm designs and randomized double-arm designs has been contentiously debated in the literature on phase II oncology trials. Recently, as a compromise, the single-to-double arm transition design was proposed, combining the two designs into one trial over two stages. Successful implementation of the two-stage transition design requires a suspension period at the end of the first stage to collect the response data of the already enrolled patients. When evaluation of the primary efficacy endpoint takes a long time, the between-stage suspension period may unfavorably prolong the trial duration and delay the treatment of future eligible patients. To accelerate the trial, we propose a Bayesian single-to-double arm design with short-term endpoints (BSDS), where an intermediate short-term endpoint is used for making early termination decisions at the end of the single-arm stage, followed by an evaluation of the long-term endpoint at the end of the subsequent double-arm stage. Bayesian posterior probabilities are used as the primary decision-making tool at the end of the trial. Design calibration steps are proposed for this Bayesian monitoring process to control the frequentist operating characteristics and minimize the expected sample size. Extensive simulation studies demonstrate that our design has comparable power and average sample size but a much shorter trial duration than the conventional single-to-double arm design. Applications of the design are illustrated using two phase II oncology trials with binary endpoints.
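The stage-1 decision in such a design can be sketched as a Beta-Binomial posterior-probability check on the short-term endpoint. The prior, null rate p0 and cutoff below are hypothetical; the design-calibration step that controls the frequentist operating characteristics is the paper's contribution and is not shown.

```python
from scipy.stats import beta

def go_decision(responses, n, p0=0.20, prior=(0.5, 0.5), cutoff=0.10):
    """End-of-stage-1 monitoring on a short-term binary endpoint:
    continue to the randomized stage only if the posterior probability
    that the response rate exceeds p0 is above a calibrated cutoff."""
    a, b = prior[0] + responses, prior[1] + n - responses
    post_prob = beta.sf(p0, a, b)  # P(p > p0 | data) under Beta posterior
    return post_prob, post_prob > cutoff

print(go_decision(responses=6, n=20))
```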

17.
Tuberculosis (TB) is one of the biggest killers among infectious diseases worldwide. Together with the identification of drugs that can benefit patients, a key challenge in TB is the optimisation of treatment duration. While the conventional duration of TB treatment is 6 months, there is evidence that shorter durations might be as effective and could be associated with fewer side effects and better adherence. Building on a recent proposal of an adaptive order-restricted superiority design that exploits the ordering assumption across durations of the same drug, we propose an adaptive non-inferiority design (non-inferiority being typical in TB trials) that effectively uses the ordering assumption. Together with the general construction of the hypothesis tests and expressions for the type I and type II errors, we focus on how the novel design was developed for a TB trial concept. We consider a number of practical aspects, such as the choice of design parameters, randomisation ratios, and the timing of the interim analyses, and how these were discussed with the clinical team.

18.
Noninferiority trials intend to show that a new treatment is 'not worse' than a standard-of-care active control, and can be used as an alternative when the new treatment is likely to cause fewer side effects than the active control. For time-to-event endpoints, existing methods of sample size calculation assume either proportional hazards between the two study arms or exponentially distributed lifetimes. In scenarios where these assumptions do not hold, there are few reliable methods for calculating the sample size of a time-to-event noninferiority trial. Additionally, the non-inferiority margin is obtained either from a meta-analysis of prior studies, from strongly justifiable 'expert opinion', or from a 'well conducted' definitive large-sample study. Thus, when historical data do not support the traditional assumptions, it would not be appropriate to use these methods to design a noninferiority trial. For such scenarios, an alternative method of sample size calculation based on the assumption of proportional time is proposed. This method uses the generalized gamma ratio distribution to perform the sample size calculations. A practical example is discussed, followed by insights on the choice of the non-inferiority margin and on the indirect testing of superiority of the treatment over placebo.
Keywords: generalized gamma, noninferiority, non-proportional hazards, proportional time, relative time, sample size
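Under proportional time, treatment event times are distributed as a constant multiple of control event times, so a quantile ratio (here the median ratio) recovers the time ratio regardless of the baseline shape. The simulation sketch below uses a generalized gamma baseline with hypothetical shapes, time ratio and margin; the paper's generalized-gamma-ratio sample size machinery is not reproduced.

```python
import numpy as np
from scipy.stats import gengamma

rng = np.random.default_rng(5)
a, c, scale = 2.0, 0.8, 12.0   # generalized gamma baseline (control arm)
lam = 1.10                     # true time ratio: treatment slightly better
margin = 0.80                  # non-inferiority margin on the time ratio

t_ctl = gengamma.rvs(a, c, scale=scale, size=50_000, random_state=rng)
# proportional time: treatment times equal lam times control times
t_trt = lam * gengamma.rvs(a, c, scale=scale, size=50_000, random_state=rng)

# the median ratio recovers the time ratio under proportional time
ratio = np.median(t_trt) / np.median(t_ctl)
print(f"estimated time ratio {ratio:.3f} vs margin {margin}")
```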

19.
This paper illustrates how the design and statistical analysis of the primary endpoint of a proof-of-concept study can be formulated within a Bayesian framework and is motivated by and illustrated with a Pfizer case study in chronic kidney disease. It is shown how decision criteria for success can be formulated, and how the study design can be assessed in relation to these, both using the traditional approach of probability of success conditional on the true treatment difference and also using Bayesian assurance and pre-posterior probabilities. The case study illustrates how an informative prior on placebo response can have a dramatic effect in reducing sample size, saving time and resource, and we argue that in some cases, it can be considered unethical not to include relevant literature data in this way.
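Assurance, in contrast to power at a fixed assumed effect, averages power over a prior for the true treatment difference. A minimal simulation sketch under a normal prior and a one-sided z-test (all inputs hypothetical):

```python
import numpy as np
from scipy.stats import norm

def assurance(n_per_arm, sigma, prior_mean, prior_sd, alpha=0.025,
              n_sims=100_000, seed=2):
    """Bayesian assurance: average the conditional power over a normal
    prior for the true treatment difference, by simulation."""
    rng = np.random.default_rng(seed)
    delta = rng.normal(prior_mean, prior_sd, n_sims)  # draws from prior
    se = sigma * np.sqrt(2 / n_per_arm)
    power = norm.sf(norm.ppf(1 - alpha) - delta / se)  # power given delta
    return power.mean()

print(assurance(n_per_arm=60, sigma=1.0, prior_mean=0.4, prior_sd=0.2))
```

A sharper (more informative) prior on the placebo or treatment response narrows the spread of delta and typically moves assurance closer to the conventional power figure, which is the mechanism behind the sample size savings in the case study.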

20.
We consider the problem of sample size calculation for non-inferiority based on the hazard ratio in time-to-event trials where overall study duration is fixed and subject enrollment is staggered with variable follow-up. An adaptation of previously developed formulae for the superiority framework is presented that specifically allows for effect reversal under the non-inferiority setting, and its consequent effect on variance. Empirical performance is assessed through a small simulation study, and an example based on an ongoing trial is presented. The formulae are straightforward to program and may prove a useful tool in planning trials of this type.
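A simplified version of this kind of calculation (a Schoenfeld-type events formula for a hazard-ratio non-inferiority test, converted to subjects via the event probability under uniform staggered accrual and exponential survival) is sketched below; it omits the paper's effect-reversal variance adjustment, and all inputs are hypothetical.

```python
import numpy as np
from scipy.stats import norm

def ni_events(hr_alt, margin, alpha=0.025, power=0.8):
    """Required events for a non-inferiority test of the hazard ratio
    against a margin, 1:1 allocation (Schoenfeld-type formula)."""
    z = norm.ppf(1 - alpha) + norm.ppf(power)
    return np.ceil(4 * z**2 / np.log(hr_alt / margin) ** 2)

def event_prob(lam, accrual, followup):
    """P(event by study end) for exponential hazard lam, uniform accrual
    over `accrual`, and total study duration accrual + followup."""
    return 1 - (np.exp(-lam * followup)
                - np.exp(-lam * (accrual + followup))) / (lam * accrual)

d = ni_events(hr_alt=1.0, margin=1.3)
p = event_prob(lam=np.log(2) / 24, accrual=24, followup=12)  # months
print(d, np.ceil(d / p))  # required events, then total subjects
```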
