Similar literature
 20 similar documents found (search time: 420 ms)
1.
In this paper we consider the mean length of the success runs appearing in a sequence of binary trials. We derive the exact and limiting distributions of the mean success run length for i.i.d. Bernoulli trials. The exact distribution of the corresponding random variable is also derived for a sequence of Markov-dependent Bernoulli trials. In addition, a combinatorial formula for the distribution of any success run statistic defined on Markov-dependent trials is presented.
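A minimal Monte Carlo sketch of the statistic studied above, for i.i.d. Bernoulli(p) trials; the paper's exact and limiting distributions are not reproduced here, and the parameter values are purely illustrative:

```python
import random

def success_run_lengths(trials):
    """Lengths of the maximal runs of successes (1s) in a binary sequence."""
    runs, current = [], 0
    for t in trials:
        if t == 1:
            current += 1
        elif current > 0:
            runs.append(current)
            current = 0
    if current > 0:
        runs.append(current)
    return runs

def mean_run_length(trials):
    """Mean success run length; taken as 0 when no success run occurs."""
    runs = success_run_lengths(trials)
    return sum(runs) / len(runs) if runs else 0.0

# Monte Carlo estimate of E[mean run length] for i.i.d. Bernoulli(p) trials
random.seed(1)
p, n, reps = 0.5, 20, 10_000
est = sum(mean_run_length([1 if random.random() < p else 0 for _ in range(n)])
          for _ in range(reps)) / reps
```

For p = 0.5 the individual run lengths are approximately geometric with mean 2, so the simulated mean run length lands a little below 2 because of truncation at the sequence end.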

2.
The number of success runs in nonhomogeneous Markov-dependent trials is represented as a sum of Bernoulli trials, and the expected number of runs is obtained from this representation. The distribution of the longest run, and bounds for it, are derived for Markov-dependent trials.
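A small simulation sketch of the longest-run statistic under Markov dependence; the chain parameters below are hypothetical, and the exact distribution and bounds derived in the paper are not reproduced:

```python
import random

def longest_success_run(trials):
    """Length of the longest run of 1s in a binary sequence."""
    best = cur = 0
    for t in trials:
        cur = cur + 1 if t == 1 else 0
        best = max(best, cur)
    return best

def markov_binary_chain(n, p1, p11, p01, rng):
    """Two-state Markov chain: P(X1=1)=p1, P(X_t=1 | X_{t-1}=1)=p11,
    P(X_t=1 | X_{t-1}=0)=p01."""
    x = [1 if rng.random() < p1 else 0]
    for _ in range(n - 1):
        p = p11 if x[-1] == 1 else p01
        x.append(1 if rng.random() < p else 0)
    return x

# Monte Carlo distribution of the longest run under Markov dependence
rng = random.Random(7)
counts = {}
for _ in range(5000):
    L = longest_success_run(markov_binary_chain(15, 0.5, 0.7, 0.3, rng))
    counts[L] = counts.get(L, 0) + 1
```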

3.
Historical control trials compare an experimental treatment with a previously conducted control treatment. By assigning all recruited patients to the experimental arm, historical control trials can better identify promising treatments in early phase trials than randomized control trials. Existing designs of historical control trials with survival endpoints are based on the asymptotic normal distribution. However, it remains unclear whether the asymptotic distribution of the test statistic is close enough to the true distribution given the relatively small sample sizes in early phase trials. In this article, we address this question by introducing an exact design approach for exponentially distributed survival endpoints and comparing it with an asymptotic design in both real and simulated examples. Simulation results show that the asymptotic test can lead to bias in the sample size estimation. We conclude that the proposed exact design should be used in the design of historical control trials.
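A sketch of why an exact test is available for exponential endpoints: with n uncensored exponential survival times, the scaled total follow-up time is Gamma(n, 1) under the null, so an exact tail probability can be computed and compared with the usual normal approximation. This is a generic illustration under a no-censoring assumption, not the article's full design:

```python
import math
from statistics import NormalDist

def gamma_cdf(x, n):
    """CDF of Gamma(shape=n, scale=1) for integer n (Erlang):
    P(T <= x) = 1 - exp(-x) * sum_{k=0}^{n-1} x^k / k!"""
    s, term = 0.0, 1.0
    for k in range(n):
        s += term
        term *= x / (k + 1)
    return 1.0 - math.exp(-x) * s

def exact_pvalue(total_time, n, lam0):
    """Exact one-sided p-value for H0: hazard = lam0 vs H1: hazard < lam0;
    under H0, lam0 * total_time ~ Gamma(n, 1)."""
    return 1.0 - gamma_cdf(lam0 * total_time, n)

def asymptotic_pvalue(total_time, n, lam0):
    """Normal approximation: under H0, lam0*T has mean n and variance n."""
    z = (lam0 * total_time - n) / math.sqrt(n)
    return 1.0 - NormalDist().cdf(z)
```

For small n the two p-values can differ noticeably, which is the kind of discrepancy the exact design is meant to avoid.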

4.
Traditionally, in a clinical development plan, phase II trials are relatively small and can be expected to yield a large degree of uncertainty in the estimates on which phase III trials are planned. Phase II trials are also used to explore appropriate primary efficacy endpoints or patient populations. When the biology of the disease and the pathogenesis of disease progression are well understood, the phase II and phase III studies may be performed in the same patient population with the same primary endpoint, e.g. efficacy measured by HbA1c in non-insulin-dependent diabetes mellitus trials with a treatment duration of at least three months. In disease areas where the molecular pathways are not well established, or the clinical outcome endpoint cannot be observed in a short-term study, e.g. mortality in cancer or AIDS trials, the treatment effect may be postulated through use of an intermediate surrogate endpoint in phase II trials. In many cases, however, the appropriate clinical endpoint is still being explored in the phase II trials. An important question is how much of the effect observed in the surrogate endpoint in the phase II study can be translated into the clinical effect in the phase III trial. Another question is how much uncertainty remains in phase III trials. In this work, we study the utility of adaptation by design (not by statistical test), in the sense of adapting the phase II information for planning the phase III trials. That is, we investigate the impact of using various phase II effect size estimates on the sample size planning for phase III trials. In general, if the point estimate from the phase II trial is used for planning, it is advisable to size the phase III trial by choosing a smaller alpha level or a higher power level. Adaptation using the lower limit of the one-standard-deviation confidence interval from the phase II trial appears to be a reasonable choice, since it balances the empirical power of the launched trials against the proportion of trials not launched, provided a threshold lower than the true phase III effect size can be chosen for deciding whether the phase III trial is to be launched.
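A sketch of the discounting idea above for a normal endpoint: sizing the phase III trial on the point estimate versus on the lower limit of a one-standard-error interval. The sample size formula is the standard two-arm normal one; the phase II numbers are hypothetical:

```python
import math
from statistics import NormalDist

def per_arm_sample_size(delta, sd=1.0, alpha=0.05, power=0.9):
    """Per-arm sample size for a two-arm trial with a normal endpoint,
    two-sided level alpha: n = 2 * ((z_{1-a/2} + z_{power}) * sd / delta)^2."""
    z_a = NormalDist().inv_cdf(1 - alpha / 2)
    z_b = NormalDist().inv_cdf(power)
    return math.ceil(2 * ((z_a + z_b) * sd / delta) ** 2)

# Phase II point estimate vs a discounted estimate (lower limit of a
# one-standard-error interval); both numbers are hypothetical.
delta_hat, se = 0.40, 0.12
n_point = per_arm_sample_size(delta_hat)
n_discounted = per_arm_sample_size(delta_hat - se)
```

Discounting the phase II estimate always enlarges the planned phase III trial, which is the trade-off against not launching that the abstract describes.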

5.
The FDA released the final guidance on noninferiority trials in November 2016. In noninferiority trials, the validity of the assessment of the efficacy of the test treatment depends on the control treatment's efficacy. Therefore, it is critically important that there be a reliable estimate of the control treatment effect, which is generally obtained from historical trials and often assumed to hold in the current setting (the assay constancy assumption). Validating the constancy assumption requires clinical data, which are typically lacking. The guidance acknowledges that "lack of constancy can occur for many reasons." We clarify the objectives of noninferiority trials. We conclude that correction for bias, rather than assay constancy, is critical to conducting valid noninferiority trials. We propose that assay constancy not be assumed and that discounting or thresholds be used to address concern about loss of historical efficacy. Examples are provided for illustration.

6.
Randomized controlled trials are recognized as the 'gold standard' for evaluating the effect of health interventions, yet few such trials of human immunodeficiency virus (HIV) preventive interventions have been conducted. We discuss the role of randomized trials in the evaluation of such interventions, and we review the strengths and weaknesses of this and other approaches. Randomization of clusters (groups of individuals) may sometimes be appropriate, and we discuss several issues in the design of such cluster-randomized trials, including sample size, the definition and size of clusters, matching and the role of baseline data. Finally, we review some general issues in the design of HIV prevention trials, including the choice of the study population, trial end points and ethical issues. It is argued that randomized trials have an important role to play in the evolution of HIV control.

7.
For ethical reasons, group sequential trials were introduced to allow trials to stop early in the event of extreme results. Endpoints in such trials are usually mortality or irreversible morbidity. For a given endpoint, the norm is to use a single test statistic and to use that same statistic for each analysis. This approach is risky because the test statistic has to be specified before the study is unblinded, and there is a loss in power if the assumptions that ensure optimality for each analysis are not met. To minimize the risk of moderate to substantial loss in power due to a suboptimal choice of statistic, a robust method was developed for nonsequential trials. The concept is analogous to diversification of financial investments to minimize risk. The method is based on combining P values from multiple test statistics for formal inference while controlling the type I error rate at its designated value. This article evaluates the performance of 2 P value combining methods for group sequential trials. The emphasis is on time-to-event trials, although results from less complex trials are also included. The gain or loss in power with the combination method relative to a single statistic is asymmetric in its favor. Depending on the power of each individual test, the combination method can give more power than any single test or give power that is closer to the test with the most power. The versatility of the method is that it can combine P values from different test statistics for analysis at different times. The robustness of the results suggests that inference from group sequential trials can be strengthened with the use of combined tests.
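To illustrate the general idea of combining P values from multiple test statistics, here is Fisher's method, a standard combination rule; the article's group-sequential combination differs in detail, so this is only a sketch of the concept:

```python
import math

def fisher_combined_pvalue(pvalues):
    """Fisher's method: under H0, -2 * sum(log p_i) ~ chi-square with 2k df.
    The tail probability uses the Erlang survival function, valid because
    chi-square with 2k df is Gamma(shape=k, scale=2)."""
    k = len(pvalues)
    stat = -2.0 * sum(math.log(p) for p in pvalues)
    x = stat / 2.0
    s, term = 0.0, 1.0
    for i in range(k):          # s = sum_{i=0}^{k-1} x^i / i!
        s += term
        term *= x / (i + 1)
    return math.exp(-x) * s     # P(chi2_{2k} > stat)
```

With a single P value the method returns it unchanged; with several moderately small P values the combined evidence is stronger than any individual one.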

8.
Crossover designs have some advantages over standard clinical trial designs and they are often used in trials evaluating the efficacy of treatments for infertility. However, clinical trials of infertility treatments violate a fundamental condition of crossover designs, because women who become pregnant in the first treatment period are not treated in the second period. In previous research, to deal with this problem, some new designs, such as re‐randomization designs, and analysis methods including the logistic mixture model and the beta‐binomial mixture model were proposed. Although the performance of these designs and methods has previously been evaluated in large‐scale clinical trials with sample sizes of more than 1000 per group, the actual sample sizes of infertility treatment trials are usually around 100 per group. The most appropriate design and analysis for these moderate‐scale clinical trials are currently unclear. In this study, we conducted simulation studies to determine the appropriate design and analysis method of moderate‐scale clinical trials for irreversible endpoints by evaluating the statistical power and bias in the treatment effect estimates. The Mantel–Haenszel method had similar power and bias to the logistic mixture model. The crossover designs had the highest power and the smallest bias. We recommend using a combination of the crossover design and the Mantel–Haenszel method for two‐period, two‐treatment clinical trials with irreversible endpoints. Copyright © 2015 John Wiley & Sons, Ltd.
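The Mantel–Haenszel common odds ratio recommended above has a simple closed form over stratified 2x2 tables. A minimal sketch (the table layout and numbers are hypothetical, not from the cited trials):

```python
def mantel_haenszel_or(tables):
    """Mantel-Haenszel common odds ratio from 2x2 tables given as
    (a, b, c, d) = (trt success, trt failure, ctl success, ctl failure):
    OR_MH = sum(a*d/n) / sum(b*c/n)."""
    num = den = 0.0
    for a, b, c, d in tables:
        n = a + b + c + d
        num += a * d / n
        den += b * c / n
    return num / den
```

Usage: with a single stratum the estimate reduces to the ordinary odds ratio of that table.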

9.
It has recently been suggested [1] that many clinical trials should have a data monitoring and ethics committee, and that on this committee should be a statistician. Such committees are attached to individual trials and are distinct from local ethics committees, which are not required to have a statistician and are not concerned solely with trials. Given the plethora of trials, there will be increasing demand for statisticians to sit on these committees. Although it is both an honour and a privilege, Mike Campbell warns that membership should not be undertaken lightly.

10.
Phase II clinical trials often use a binary outcome, so assessing the success rate of the treatment is a primary objective. Reporting confidence intervals is common practice for clinical trials. Owing to the group sequential design and relatively small sample size, many existing confidence intervals for phase II trials are quite conservative. In this paper, we propose a class of confidence intervals for binary outcomes. We also provide a general theory to assess the coverage of confidence intervals for discrete distributions, and hence make recommendations for choosing the parameter in calculating the confidence interval. The proposed method is applied to Simon's [14] optimal two-stage design with numerical studies. The proposed method can be viewed as an alternative approach to the confidence interval for discrete distributions in general.
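Because the binomial distribution is discrete, the exact coverage of any interval can be computed by summing the binomial pmf over the outcomes whose interval contains p. A sketch using the standard Wilson score interval (not the article's proposed class) to illustrate the coverage calculation:

```python
import math
from statistics import NormalDist

def wilson_interval(k, n, alpha=0.05):
    """Wilson score interval for a binomial proportion."""
    z = NormalDist().inv_cdf(1 - alpha / 2)
    phat = k / n
    denom = 1 + z * z / n
    centre = (phat + z * z / (2 * n)) / denom
    half = z * math.sqrt(phat * (1 - phat) / n + z * z / (4 * n * n)) / denom
    return centre - half, centre + half

def exact_coverage(p, n, alpha=0.05):
    """Exact coverage probability at p: sum the binomial pmf over the
    outcomes k whose interval contains p."""
    cov = 0.0
    for k in range(n + 1):
        lo, hi = wilson_interval(k, n, alpha)
        if lo <= p <= hi:
            cov += math.comb(n, k) * p ** k * (1 - p) ** (n - k)
    return cov
```

The same `exact_coverage` loop works for any interval-constructing function, which is how coverage comparisons between interval families are typically made for discrete distributions.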

11.
This article describes a generalization of the binomial distribution. The closed form probability function for the probability of k successes out of n correlated, exchangeable Bernoulli trials depends on the number of trials and its two parameters: the common success probability and the common correlation. The distribution is derived under the assumption that the common correlation between all pairs of Bernoulli trials remains unchanged conditional on successes in all completed trials. The distribution was developed to model bond defaults but may be suited to biostatistical applications involving clusters of binary data encountered in repeated measurements or toxicity studies of families of organisms. Maximum likelihood estimates for the parameters of the distribution are found for a set of binary data from a developmental toxicity study on litters of mice.
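To make the two parameters concrete, here is one standard latent-indicator construction of exchangeable Bernoulli trials with common success probability p and common pairwise correlation rho; this is one of several such constructions and is not the article's closed-form model:

```python
import math
import random

def correlated_bernoulli(n, p, rho, rng):
    """Exchangeable Bernoulli trials: each X_i copies a shared draw Z with
    probability sqrt(rho), else is an independent Bernoulli(p). Pairwise
    correlation is (sqrt(rho))^2 = rho."""
    g = math.sqrt(rho)
    z = 1 if rng.random() < p else 0
    return [z if rng.random() < g else (1 if rng.random() < p else 0)
            for _ in range(n)]

# Empirical check of the construction on pairs of trials
rng = random.Random(3)
pairs = [correlated_bernoulli(2, 0.4, 0.25, rng) for _ in range(20000)]
xs = [x for x, _ in pairs]
ys = [y for _, y in pairs]
mx = sum(xs) / len(xs)
my = sum(ys) / len(ys)
cov = sum((x - mx) * (y - my) for x, y in pairs) / len(pairs)
corr = cov / math.sqrt(mx * (1 - mx) * my * (1 - my))
```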

12.
We examine the issue of asymptotic efficiency of estimation for response adaptive designs of clinical trials, in which the collected data contain a dependency structure. We establish the asymptotic lower bound of exponential rates for consistent estimators. Under certain regularity conditions, we show that the maximum likelihood estimator achieves the asymptotic lower bound for response adaptive trials with dichotomous responses. Furthermore, it is shown that the maximum likelihood estimator of the treatment effect is asymptotically efficient in the Bahadur sense for response adaptive clinical trials.

13.
In Japan, comparative clinical trials with the same pair of test and control drugs are rarely repeated. The simple approach for a complete randomized block design regarding the trials as blocks therefore cannot be applied to strengthening the evidence of the difference between the two drugs. In this paper a method is discussed to recover information from the collected trials in which those two drugs are respectively compared with a common third drug. The method consists of testing the homogeneity of trials, forming a combined estimator of the log odds ratios if the homogeneity is verified, and giving the asymptotic variance of the combined estimator. In particular, a multiple comparisons approach is taken so as to give a homogeneous subset of trials when the overall homogeneity is not satisfied. Although the paper is motivated by comparative clinical trials, the resulting method can be applied to general incomplete block experiments if the outcomes are binomial variables and the assumption of no interaction between treatment and block is suspect. On the other hand, in applications to clinical trials the other, non-mathematical conditions for a meta-analysis, such as the coincidence of the protocols, should be satisfied, as discussed by Hedges and Olkin (1985).

14.
We present likelihood methods for defining the non-inferiority margin and measuring the strength of evidence in non-inferiority trials using the 'fixed-margin' framework. Likelihood methods are used to (1) evaluate and combine the evidence from historical trials to define the non-inferiority margin, (2) assess and report the smallest non-inferiority margin supported by the data, and (3) assess potential violations of the constancy assumption. Data from six aspirin-controlled trials for acute coronary syndrome and data from an active-controlled trial for acute coronary syndrome, Organisation to Assess Strategies for Ischemic Syndromes (OASIS-2) trial, are used for illustration. The likelihood framework offers important theoretical and practical advantages when measuring the strength of evidence in non-inferiority trials. Besides eliminating the influence of sample spaces and prior probabilities on the 'strength of evidence in the data', the likelihood approach maintains good frequentist properties. Violations of the constancy assumption can be assessed in the likelihood framework when it is appropriate to assume a unifying regression model for trial data and a constant control effect including a control rate parameter and a placebo rate parameter across historical placebo controlled trials and the non-inferiority trial. In situations where the statistical non-inferiority margin is data driven, lower likelihood support interval limits provide plausibly conservative candidate margins.

15.
In drug development, after completion of phase II proof‐of‐concept trials, the sponsor needs to make a go/no‐go decision to start expensive phase III trials. The probability of statistical success (PoSS) of the phase III trials based on data from earlier studies is an important factor in that decision‐making process. Instead of statistical power, the predictive power of a phase III trial, which takes into account the uncertainty in the estimation of treatment effect from earlier studies, has been proposed to evaluate the PoSS of a single trial. However, regulatory authorities generally require statistical significance in two (or more) trials for marketing licensure. We show that the predictive statistics of two future trials are statistically correlated through use of the common observed data from earlier studies. Thus, the joint predictive power should not be evaluated as a simplistic product of the predictive powers of the individual trials. We develop the relevant formulae for the appropriate evaluation of the joint predictive power and provide numerical examples. Our methodology is further extended to the more complex phase III development scenario comprising more than two (K > 2) trials, that is, the evaluation of the PoSS of at least k0 trials from a program of K total trials. Copyright © 2013 John Wiley & Sons, Ltd.
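A Monte Carlo sketch of the correlation effect described above: draw the true effect from the phase II posterior, compute each future trial's conditional power, and compare the joint predictive power E[power^2] with the naive product (E[power])^2. All numeric inputs are hypothetical, and this simulation stands in for the article's formulae:

```python
import random
from statistics import NormalDist

def predictive_powers(theta_hat, se2, se3, alpha=0.025, reps=20000, seed=5):
    """Monte Carlo: theta ~ N(theta_hat, se2^2) reflects phase II uncertainty;
    each future trial's effect estimate is N(theta, se3^2), so conditional
    power is Phi(theta/se3 - z_{1-alpha}). Returns (marginal, joint)
    predictive power for one trial and for both trials together."""
    nd = NormalDist()
    z = nd.inv_cdf(1 - alpha)
    rng = random.Random(seed)
    pows = [nd.cdf(rng.gauss(theta_hat, se2) / se3 - z) for _ in range(reps)]
    marginal = sum(pows) / reps
    joint = sum(p * p for p in pows) / reps
    return marginal, joint

marginal, joint = predictive_powers(0.3, 0.1, 0.08)
# joint >= marginal**2 because both trials share the same phase II data
```

The inequality joint >= marginal² is exactly the point of the abstract: the product of the individual predictive powers understates the joint probability of success.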

16.
A new generalization of the binomial distribution is introduced that allows dependence between trials and nonconstant probabilities of success from trial to trial, and which contains the usual binomial distribution as a special case. Along with the number of trials and an initial probability of 'success', an additional parameter that controls the degree of correlation between trials is introduced. The resulting class of distributions includes the binomial, unimodal distributions, and bimodal distributions. Formulas for the moments, mean, and variance of this distribution are given, along with a method for fitting the distribution to sample data.

17.
Collecting individual patient data has been described as the 'gold standard' for undertaking meta-analysis. If studies involve time-to-event outcomes, conducting a meta-analysis based on aggregate data can be problematical. Two meta-analyses of randomized controlled trials with time-to-event outcomes are used to illustrate the practicality and value of several proposed methods to obtain summary statistic estimates. In the first example the results suggest that further effort should be made to find unpublished trials. In the second example the use of aggregate data for trials where no individual patient data have been supplied allows the totality of evidence to be assessed and indicates previously unrecognized heterogeneity.

18.
The first randomized trial of antiviral therapy in human immunodeficiency virus (HIV) disease included 282 patients with acquired immune deficiency syndrome (AIDS) or AIDS-related complex and was stopped in 1986 after an average follow-up of 4 months because of a substantial reduction in mortality in the group who received zidovudine (AZT). The era of anti-HIV treatment had begun. This paper discusses some of the difficulties which have emerged over the subsequent 10 years as new anti-HIV drugs have been developed requiring evaluation in clinical trials. The trials in which the British Medical Research Council has played a major role (the Concorde, Alpha and Delta trials) and some of the key trials conducted by the AIDS Clinical Trials Group (ACTG) (the ACTG 019 and ACTG 175 trials) and the Community Programs for Clinical Research on AIDS (CPCRA) (the CPCRA 007 trial) in the US will be used to illustrate some of the issues faced by clinical trialists and governmental regulatory agencies in the evaluation of therapies for a disease which, in spite of advances in therapy, still has a high mortality.

19.
Often, single‐arm trials are used in phase II to gather the first evidence of an oncological drug's efficacy, with drug activity determined through tumour response using the RECIST criterion. Provided the null hypothesis of ‘insufficient drug activity’ is rejected, the next step could be a randomised two‐arm trial. However, single‐arm trials may provide a biased treatment effect because of patient selection, and thus, this development plan may not be an efficient use of resources. Therefore, we compare the performance of development plans consisting of single‐arm trials followed by randomised two‐arm trials with stand‐alone single‐stage or group sequential randomised two‐arm trials. Through this, we are able to investigate the utility of single‐arm trials and determine the most efficient drug development plans, setting our work in the context of a published single‐arm non‐small‐cell lung cancer trial. Reference priors, reflecting the opinions of ‘sceptical’ and ‘enthusiastic’ investigators, are used to quantify and guide the suitability of single‐arm trials in this setting. We observe that the explored development plans incorporating single‐arm trials are often non‐optimal. Moreover, even the most pessimistic reference priors have a considerable probability in favour of alternative plans. Analysis suggests expected sample size savings of up to 25% could have been made, and the issues associated with single‐arm trials avoided, for the non‐small‐cell lung cancer treatment through direct progression to a group sequential randomised two‐arm trial. Careful consideration should thus be given to the use of single‐arm trials in oncological drug development when a randomised trial will follow. Copyright © 2015 The Authors. Pharmaceutical Statistics published by John Wiley & Sons Ltd.

20.
Because of its simplicity, the Q statistic is frequently used to test the heterogeneity of the estimated intervention effect in meta-analyses of individually randomized trials. However, it is inappropriate to apply it directly to the meta-analyses of cluster randomized trials without taking clustering effects into account. We consider the properties of the adjusted Q statistic for testing heterogeneity in the meta-analyses of cluster randomized trials with binary outcomes. We also derive an analytic expression for the power of this statistic to detect heterogeneity in meta-analyses, which can be useful when planning a meta-analysis. A simulation study is used to assess the performance of the adjusted Q statistic, in terms of its Type I error rate and power. The simulation results are compared to those obtained from the proposed formula. It is found that the adjusted Q statistic has a Type I error rate close to the nominal level of 5%, whereas the unadjusted Q statistic commonly used to test for heterogeneity in the meta-analyses of individually randomized trials has an inflated Type I error rate. Data from a meta-analysis of four cluster randomized trials are used to illustrate the procedures.
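A sketch of the adjustment idea: compute Cochran's Q after inflating each trial's variance by its design effect, 1 + (m - 1) * ICC. The effect sizes, variances, cluster size, and ICC below are hypothetical, and the full adjusted statistic in the paper may differ in detail:

```python
def cochran_q(effects, variances):
    """Cochran's Q: weighted sum of squared deviations from the
    fixed-effect pooled estimate, with weights 1/variance."""
    w = [1.0 / v for v in variances]
    pooled = sum(wi * e for wi, e in zip(w, effects)) / sum(w)
    return sum(wi * (e - pooled) ** 2 for wi, e in zip(w, effects))

def design_effect(m, icc):
    """Variance inflation for cluster randomization: 1 + (m - 1) * icc,
    with m the average cluster size and icc the intracluster correlation."""
    return 1.0 + (m - 1) * icc

# Adjusted Q: inflate each trial's variance by its design effect before pooling
effects = [0.30, 0.10, 0.25, 0.05]      # hypothetical log odds ratios
naive_var = [0.01, 0.02, 0.015, 0.012]  # hypothetical individual-level variances
adj_var = [v * design_effect(20, 0.05) for v in naive_var]
q_unadjusted = cochran_q(effects, naive_var)
q_adjusted = cochran_q(effects, adj_var)
```

Ignoring the design effect overstates Q, which is one source of the inflated Type I error rate the abstract reports for the unadjusted statistic.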


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号