Similar articles: 20 results found
1.
Non-parametric group sequential designs in randomized clinical trials
This paper examines some non-parametric group sequential designs applicable to randomized clinical trials for comparing two continuous treatment effects, with observations taken in matched pairs, or to event-based analyses. Two inverse binomial sampling schemes are considered, the second of which is an adaptive, data-dependent design. These designs are compared with fixed sample size competitors, and power and expected sample sizes are calculated for the proposed procedures.
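The inverse binomial sampling idea can be made concrete with a small calculation: informative matched pairs are observed until a fixed number r of them favour treatment B, and the null hypothesis of no difference is rejected if, by that point, at least c pairs have favoured A. The count of A-preferences is then negative binomial, so size, power, and the average sample number follow from its tail probabilities. The sketch below is a generic single inverse binomial scheme under assumed values of r and alpha, not the specific (or adaptive) designs compared in the paper.

```python
import numpy as np
from scipy import stats

def inverse_binomial_design(r=15, alpha=0.05):
    """Critical value for an inverse binomial (negative binomial) sampling
    test on matched pairs: pairs are observed until r of them favour B,
    and H0 (no difference) is rejected if the number of pairs favouring A
    observed by then is >= c.  Under H0 that count is NB(r, 1/2)."""
    c = int(stats.nbinom.isf(alpha, r, 0.5)) + 1
    size = stats.nbinom.sf(c - 1, r, 0.5)      # attained significance level
    return c, size

def power_and_asn(p_a, r=15, alpha=0.05):
    """Power and average sample number when each informative pair favours
    A with probability p_a (0.5 under H0)."""
    c, _ = inverse_binomial_design(r, alpha)
    q = 1.0 - p_a                              # probability a pair favours B
    power = stats.nbinom.sf(c - 1, r, q)       # P(count of A-preferences >= c)
    asn = r / q                                # expected number of informative pairs
    return power, asn

if __name__ == "__main__":
    c, size = inverse_binomial_design()
    print("critical value:", c, " attained size:", round(size, 4))
    print("power, E[pairs] at p_a = 0.7:", power_and_asn(0.7))
```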

2.
Optimal three-stage designs with equal sample sizes at each stage are presented and compared to fixed sample designs, fully sequential designs, designs restricted to use the fixed sample critical value at the final stage, and to modifications of other group sequential designs previously proposed in the literature. Typically, the greatest savings realized with interim analyses are obtained by the first interim look. More than 50% of the savings possible with a fully sequential design can be realized with a simple two-stage design. Three-stage designs can realize as much as 75% of the possible savings. Without much loss in efficiency, the designs can be modified so that the critical value at the final stage equals the usual fixed sample value while maintaining the overall level of significance, alleviating some potential confusion should a final stage be necessary. Some common group sequential designs, modified to allow early acceptance of the null hypothesis, are shown to be nearly optimal in some settings while performing poorly in others. An example is given to illustrate the use of several three-stage plans in the design of clinical trials.
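The kind of sample-size savings described above can be explored with a short Monte Carlo sketch: a one-sided three-stage z-test with equal stage sizes, a common Pocock-type critical value at each look, and early stopping only to reject H0. The stage size, effect size and critical value below are assumptions for illustration; the optimal three-stage boundaries of the paper (and its early-acceptance modifications) are not reproduced here.

```python
import numpy as np

def three_stage_sim(delta, n_per_stage=30, crit=2.289, n_sim=50000, seed=0):
    """Monte Carlo for a one-sided three-stage one-sample z-test with equal
    stage sizes and a common critical value at each look.

    delta : true standardized mean (0 under H0).
    crit  : illustrative Pocock-type constant for three looks
            (approximately one-sided level 0.025).
    Returns (rejection probability, expected total sample size).
    """
    rng = np.random.default_rng(seed)
    reject = np.zeros(n_sim, dtype=bool)
    n_used = np.full(n_sim, 3 * n_per_stage, dtype=float)
    # stage-wise sums of observations, accumulated into cumulative sums
    increments = rng.normal(delta, 1.0, size=(n_sim, 3, n_per_stage)).sum(axis=2)
    cum = np.cumsum(increments, axis=1)
    for k in range(3):
        n_k = (k + 1) * n_per_stage
        z_k = cum[:, k] / np.sqrt(n_k)
        newly = (~reject) & (z_k > crit)   # trials crossing the boundary at look k
        reject |= newly
        if k < 2:
            n_used[newly] = n_k            # early stop saves the remaining stages
    return reject.mean(), n_used.mean()

if __name__ == "__main__":
    print("type I error, E[N] under H0:", three_stage_sim(0.0))
    print("power, E[N] under H1:       ", three_stage_sim(0.4))
```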

3.
Two-stage k-sample designs for the ordered alternative problem
In preclinical studies and clinical dose-ranging trials, the Jonckheere-Terpstra test is widely used in the assessment of dose-response relationships. Hewett and Spurrier (1979) presented a two-stage analog of the test in the context of large sample sizes. In this paper, we propose an exact test based on Simon's minimax and optimal design criteria originally used in one-arm phase II designs with binary endpoints. The convergence rate of the joint distribution of the first- and second-stage test statistics to the limiting distribution is studied, and design parameters are provided for a variety of assumed alternatives. The behavior of the test is also examined in the presence of ties, and the proposed designs are illustrated through application to the planning of a hypercholesterolemia clinical trial. The minimax and optimal two-stage procedures are shown to be preferable to the one-stage procedure because of the associated reduction in expected sample size for given error constraints.
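For reference, the Jonckheere-Terpstra statistic underlying these designs is the sum, over all ordered pairs of dose groups, of Mann-Whitney counts. The minimal single-stage computation below uses the standard normal approximation without a tie correction; it is not the exact two-stage procedure proposed in the paper, and the dose-group data are simulated for illustration only.

```python
import itertools
import numpy as np
from scipy import stats

def jonckheere_terpstra(groups):
    """Jonckheere-Terpstra test for an ordered (monotone) alternative.

    groups : list of 1-d arrays ordered by increasing dose.
    Returns (JT statistic, approximate one-sided p-value); the normal
    approximation below ignores the tie correction.
    """
    jt = 0.0
    for (i, x), (j, y) in itertools.combinations(enumerate(groups), 2):
        # Mann-Whitney count: pairs in which the higher-dose value is larger
        jt += sum((yv > xv) + 0.5 * (yv == xv) for xv in x for yv in y)
    n = np.array([len(g) for g in groups])
    N = n.sum()
    mean = (N**2 - (n**2).sum()) / 4.0
    var = (N**2 * (2 * N + 3) - (n**2 * (2 * n + 3)).sum()) / 72.0
    z = (jt - mean) / np.sqrt(var)
    return jt, stats.norm.sf(z)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    doses = [rng.normal(mu, 1.0, 12) for mu in (0.0, 0.3, 0.6, 0.9)]
    print(jonckheere_terpstra(doses))
```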

4.
In the usual design and analysis of a phase II trial, there is no differentiation between complete response and partial response. Since complete response is considered more desirable, this paper proposes a weighted score method which extends Simon's (1989) two-stage design to the situation where complete and partial responses are differentiated. The weight assigned to the complete response is suggested by examining the likelihood ratio (LR) statistic for testing a simple hypothesis of a trinomial distribution. Both optimal and minimax designs are tabulated for a wide range of design parameters. The weighted score approach is shown to give more efficient designs, especially when the response probability is moderate to large.
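As background for this extension, the operating characteristics of an ordinary Simon-type two-stage design for a binary endpoint can be computed in a few lines; the weighted-score designs of the paper replace the binomial response count with a weighted complete/partial response score. The sketch below uses assumed rates p0 = 0.1 and p1 = 0.3 and an illustrative candidate design (not claimed to be the tabulated optimal one).

```python
from scipy import stats

def simon_characteristics(r1, n1, r, n, p):
    """Operating characteristics of a Simon-type two-stage design for a
    binary endpoint: stop for futility after stage 1 if responses <= r1,
    declare the treatment ineffective at the end if total responses <= r.
    Returns (PET, prob. of declaring ineffective, expected sample size).
    """
    stage1 = stats.binom(n1, p)
    pet = stage1.cdf(r1)                              # early termination probability
    accept_h0 = pet + sum(
        stage1.pmf(x) * stats.binom.cdf(r - x, n - n1, p)
        for x in range(r1 + 1, min(n1, r) + 1)
    )
    expected_n = n1 + (1 - pet) * (n - n1)
    return pet, accept_h0, expected_n

if __name__ == "__main__":
    # Illustrative candidate design for p0 = 0.1 vs p1 = 0.3
    r1, n1, r, n = 1, 10, 5, 29
    pet0, acc0, en0 = simon_characteristics(r1, n1, r, n, p=0.1)
    _, acc1, _ = simon_characteristics(r1, n1, r, n, p=0.3)
    print(f"type I error = {1 - acc0:.3f}, power = {1 - acc1:.3f}, E[N|p0] = {en0:.1f}")
```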

5.
Consider a longitudinal experiment where subjects are allocated to one of two treatment arms and are subjected to repeated measurements over time. Two non-parametric group sequential procedures, based on the Wilcoxon rank sum test and fitted with asymptotically efficient allocation rules, are derived to test the equality of the rates of change over time of the two treatments when the distribution of responses is unknown. The procedures are designed to allow early stopping to reject the null hypothesis while allocating fewer subjects to the inferior treatment. Simulations based on the normal, logistic and exponential distributions showed that the proposed allocation rules substantially reduce allocations to the inferior treatment, but at the expense of a relatively small increase in the total sample size and a moderate decrease in power compared to the pairwise allocation rule.
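The interplay between interim stopping and skewed allocation can be illustrated with a deliberately simplified simulation: a two-look trial analysed with a Wilcoxon rank-sum comparison of a single response, with a toy rule that tilts post-interim allocation toward the arm that currently looks better. This is a stand-in under assumed boundaries and a skew parameter, not the longitudinal rate-of-change comparison or the asymptotically efficient allocation rules of the paper.

```python
import numpy as np
from scipy import stats

def adaptive_wilcoxon_trial(delta, n_total=120, n_interim=60,
                            crit=(2.80, 1.98), skew=0.7, seed=None):
    """Toy two-look sequential trial analysed with a Wilcoxon rank-sum test.

    Stage 1 allocates 1:1; after the interim look the remaining subjects go
    to the arm that currently looks better with probability `skew`.  `crit`
    are approximate O'Brien-Fleming-type z boundaries for two looks.
    Returns (rejected?, sample size used, fraction allocated to treatment).
    """
    rng = np.random.default_rng(seed)
    arm = rng.permutation([0] * (n_interim // 2) + [1] * (n_interim // 2))
    y = rng.normal(delta * arm, 1.0)
    p1 = stats.mannwhitneyu(y[arm == 1], y[arm == 0], alternative="greater").pvalue
    z1 = stats.norm.isf(p1)
    if z1 > crit[0]:
        return True, n_interim, arm.mean()            # early rejection
    treat_prob = skew if z1 > 0 else 1 - skew         # tilt toward the better-looking arm
    arm2 = (rng.random(n_total - n_interim) < treat_prob).astype(int)
    y2 = rng.normal(delta * arm2, 1.0)
    arm, y = np.concatenate([arm, arm2]), np.concatenate([y, y2])
    p2 = stats.mannwhitneyu(y[arm == 1], y[arm == 0], alternative="greater").pvalue
    return stats.norm.isf(p2) > crit[1], n_total, arm.mean()

if __name__ == "__main__":
    results = [adaptive_wilcoxon_trial(0.5, seed=s) for s in range(2000)]
    print("power ~", np.mean([r for r, _, _ in results]),
          " mean fraction on treatment ~", np.mean([f for _, _, f in results]))
```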

6.
For a group-sequential trial with two pre-planned analyses, stopping boundaries can be calculated using a simple SAS programme on the basis of the asymptotic bivariate normality of the interim and final test statistics. Given the simplicity and transparency of this approach, it is appropriate for researchers to apply their own bespoke spending function as long as the rate of alpha spend is pre-specified. One such application could be an oncology trial where progression-free survival (PFS) is the primary endpoint and overall survival (OS) is also assessed, both at the same time as the analysis of PFS and again later following further patient follow-up. In many circumstances it is likely, if PFS is significantly extended, that the protocol will be amended to allow patients in the control arm to start receiving the experimental regimen. Such an eventuality is likely to result in the diminution of any effect on OS. It is shown that spending a greater proportion of alpha at the first analysis of OS, using either Pocock or bespoke boundaries, will maintain, and in some cases increase, power for a fixed number of events. Copyright © 2009 John Wiley & Sons, Ltd.
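The calculation referred to above (done in SAS in the paper) rests on the fact that the interim and final z-statistics are asymptotically bivariate normal with correlation equal to the square root of the information fraction t. Given a pre-specified amount of alpha spent at the first analysis, the final critical value follows from a one-dimensional root search. The Python sketch below uses assumed values of t, alpha and the first-look spend; it reproduces the general principle, not the paper's programme.

```python
import numpy as np
from scipy import optimize, stats

def two_look_boundaries(t, alpha=0.025, alpha1=0.010):
    """Critical values (c1, c2) for a one-sided two-look group sequential test.

    t      : information fraction at the interim analysis (0 < t < 1).
    alpha  : overall one-sided significance level.
    alpha1 : alpha spent at the interim look (a bespoke choice).
    Uses the asymptotic bivariate normality of (Z1, Z2) with corr sqrt(t).
    """
    c1 = stats.norm.isf(alpha1)
    bvn = stats.multivariate_normal(mean=[0.0, 0.0],
                                    cov=[[1.0, np.sqrt(t)], [np.sqrt(t), 1.0]])

    def excess(c2):
        # total rejection probability = P(Z1 > c1) + P(Z1 <= c1, Z2 > c2)
        p_continue_and_reject = stats.norm.cdf(c1) - bvn.cdf([c1, c2])
        return alpha1 + p_continue_and_reject - alpha

    c2 = optimize.brentq(excess, stats.norm.isf(alpha), 4.0)
    return c1, c2

if __name__ == "__main__":
    print(two_look_boundaries(t=0.5, alpha=0.025, alpha1=0.010))
```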

7.
8.
This paper is concerned with the problem of allocating a fixed number of trials between two independent binomial populations with unknown success probabilities θ1 and θ2, in order to estimate θ1 − θ2 under squared error loss. Introducing independent beta priors on θ1 and θ2, a heuristic allocation procedure is introduced and compared both with the optimal and with the best fixed allocation procedure. Numerical and asymptotic results of these comparisons are given and seem to indicate that there are situations in which the best fixed allocation procedure performs almost as well as the optimal procedure.
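For the fixed-allocation benchmark, the preposterior (Bayes) risk of estimating θ1 − θ2 under squared error loss has a closed form with independent beta priors: for a Beta(a, b) prior and n binomial observations, the expected posterior variance is ab / ((a+b)(a+b+1)(a+b+n)). The sketch below uses this standard beta-binomial result to locate the best fixed split of N trials under assumed priors; the heuristic and fully optimal sequential procedures of the paper are not reproduced here.

```python
def expected_posterior_variance(a, b, n):
    """Preposterior E[Var(theta | data)] for a Beta(a, b) prior and a
    Binomial(n, theta) sample (standard beta-binomial result)."""
    return a * b / ((a + b) * (a + b + 1) * (a + b + n))

def best_fixed_allocation(N, prior1=(1.0, 1.0), prior2=(1.0, 1.0)):
    """Best fixed split n1 + n2 = N for estimating theta1 - theta2 under
    squared error loss; the risk is the sum of the two expected posterior
    variances because the arms are independent."""
    risks = {
        n1: expected_posterior_variance(*prior1, n1)
            + expected_posterior_variance(*prior2, N - n1)
        for n1 in range(0, N + 1)
    }
    n1_best = min(risks, key=risks.get)
    return n1_best, N - n1_best, risks[n1_best]

if __name__ == "__main__":
    # Equal uniform priors give an even split; a tighter prior on arm 2
    # shifts trials toward the less well-known arm 1.
    print(best_fixed_allocation(40))
    print(best_fixed_allocation(40, prior1=(1.0, 1.0), prior2=(8.0, 8.0)))
```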

9.
10.
This article deals with multistage group screening in which group-factors contain the same number of factors. A usual assumption of this procedure is that the directions of possible effects are known. In practice, however, this assumption is often unreasonable. This paper examines, in the case of no errors in observations, the performance of multistage group screening when this assumption is false. This entails consideration of cancellation effects within group-factors.

11.
The efficiency of a sequential test is related to the “importance” of the trials within the test. This relationship is used to find the optimal test for selecting the greater of two binomial probabilities, p_a and p_b: the optimal stopping rule is “gambler's ruin”, and the optimal discipline when p_a + p_b ≤ 1 (≥ 1) is play-the-winner (play-the-loser), i.e. an a-trial which results in a success is followed by an a-trial (b-trial), whereas an a-trial which results in a failure is followed by a b-trial (a-trial), and correspondingly for b-trials.
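A small simulation makes the sampling discipline concrete. The sketch below implements play-the-winner allocation with a gambler's-ruin-type stopping rule on the lead in successes (stop as soon as one arm is r successes ahead and select that arm); the boundary r and the success probabilities are illustrative assumptions, and this is not claimed to reproduce the paper's optimality analysis.

```python
import numpy as np

def pw_gamblers_ruin(p_a, p_b, r=5, rng=None):
    """Play-the-winner sampling with a gambler's-ruin stopping rule.

    The next trial stays on the current arm after a success and switches
    after a failure; sampling stops when one arm leads the other by r
    successes, and that arm is selected.  Returns (selected arm, n trials).
    """
    rng = np.random.default_rng() if rng is None else rng
    p = {"a": p_a, "b": p_b}
    current, successes, n = "a", {"a": 0, "b": 0}, 0
    while abs(successes["a"] - successes["b"]) < r:
        n += 1
        if rng.random() < p[current]:
            successes[current] += 1                        # success: stay on this arm
        else:
            current = "b" if current == "a" else "a"       # failure: switch arms
    selected = "a" if successes["a"] > successes["b"] else "b"
    return selected, n

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    sims = [pw_gamblers_ruin(0.7, 0.5, r=5, rng=rng) for _ in range(5000)]
    print("P(select the better arm a) ~", np.mean([s == "a" for s, _ in sims]),
          " E[total trials] ~", np.mean([n for _, n in sims]))
```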

12.
Through random cut‐points theory, the author extends inference for ordered categorical data to the unspecified continuum underlying the ordered categories. He shows that a random cut‐point Mann‐Whitney test yields slightly smaller p‐values than the conventional test for most data. However, when at least P% of the data lie in one of the k categories (with P = 80 for k = 2, P = 67 for k = 3,…, P = 18 for k = 30), he also shows that the conventional test can yield much smaller p‐values, and hence misleadingly liberal inference for the underlying continuum. The author derives formulas for exact tests; for k = 2, the Mann‐Whitney test is but a binomial test.

13.
The effects of inspection error on a two-stage procedure for the identification of defective units are studied. The first stage is intended to provide the number of defective units in a group of n units; the second stage consists of individual inspection until the status of all units is (apparently) established.

14.
The two-way two-level crossed factorial design is commonly used by practitioners at the exploratory phase of industrial experiments. The F-test in the usual linear model for analysis of variance (ANOVA) is a key instrument for assessing the impact of each factor and of their interaction on the response variable. However, if assumptions such as normality and homoscedasticity of the errors are violated, the conventional wisdom is to resort to nonparametric tests. Nonparametric methods, rank-based as well as permutation, have been the subject of recent investigations aimed at making them effective in testing the hypotheses of interest and at improving their performance in small-sample situations. In this study, we assess the performance of several nonparametric methods and, more importantly, compare their power. Specifically, we examine three permutation methods (Constrained Synchronized Permutations, Unconstrained Synchronized Permutations and the Wald-Type Permutation Test), a rank-based method (the Aligned Rank Transform) and a parametric method (the ANOVA-Type Test). In the simulations, we generate datasets with different configurations of error distribution, variance, factor effects and number of replicates. The objective is to give practitioners practical advice and guidance on the sensitivity of the tests in the various configurations, the conditions under which some tests cannot be used, the trade-off between power and type I error, and the bias in the power for one main factor due to the presence of an effect of the other factor. A dataset from an industrial engineering experiment on thermoformed packaging production is used to illustrate the application of the various methods of analysis, taking into account the power of the test suggested by the objective of the experiment.
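As a concrete reference point for the permutation approaches compared above, the sketch below shows a basic permutation test of the A x B interaction in a balanced two-way two-level layout: the observed interaction F-statistic is compared with its distribution under random reshuffling of the responses across cells. This naive unrestricted permutation is only an approximation for interaction effects and is not one of the synchronized-permutation or ANOVA-type procedures evaluated in the paper; the simulated data are purely illustrative.

```python
import numpy as np

def interaction_F(y, a, b):
    """Interaction F-statistic for a balanced two-way two-level layout.
    y : responses; a, b : factor levels coded 0/1."""
    cells = [(i, j) for i in (0, 1) for j in (0, 1)]
    cell_mean = {c: y[(a == c[0]) & (b == c[1])].mean() for c in cells}
    nrep = len(y) // 4
    grand = y.mean()
    a_mean = {i: y[a == i].mean() for i in (0, 1)}
    b_mean = {j: y[b == j].mean() for j in (0, 1)}
    ss_ab = nrep * sum((cell_mean[(i, j)] - a_mean[i] - b_mean[j] + grand) ** 2
                       for i, j in cells)
    ss_err = sum(((y[(a == i) & (b == j)] - cell_mean[(i, j)]) ** 2).sum()
                 for i, j in cells)
    ms_ab = ss_ab                      # 1 df for the interaction in a 2x2 design
    return ms_ab / (ss_err / (len(y) - 4))

def permutation_interaction_test(y, a, b, n_perm=5000, seed=0):
    """Naive permutation p-value for the interaction: reshuffle y over cells."""
    rng = np.random.default_rng(seed)
    obs = interaction_F(y, a, b)
    perm = np.array([interaction_F(rng.permutation(y), a, b) for _ in range(n_perm)])
    return obs, (1 + np.sum(perm >= obs)) / (n_perm + 1)

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    a = np.repeat([0, 0, 1, 1], 10)
    b = np.tile(np.repeat([0, 1], 10), 2)
    y = 0.5 * a + 0.3 * b + 0.8 * a * b + rng.standard_normal(40)
    print(permutation_interaction_test(y, a, b))
```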

15.
In this paper, the two-sample scale problem is addressed within the rank framework, which does not require specifying the underlying continuous distribution. However, since the power of a rank test depends on the underlying distribution, it is very useful for the researcher to have some information about it in order to choose the most suitable test. A two-stage adaptive design is used with adaptive tests, where the data from the first stage are used to compute a selector statistic that selects the test statistic for stage 2. More precisely, an adaptive scale test due to Hall and Padmanabhan and its components are considered in one-stage and in several adaptive and non-adaptive two-stage procedures. A simulation study shows that the two-stage test with the adaptive choice in the second stage and Liptak combination, even when it is not more powerful than the corresponding one-stage test, shows quite similar power behavior. The test procedures are illustrated using two ecological applications and a clinical trial.
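The Liptak (weighted inverse-normal) combination used in such two-stage procedures merges the stage-wise p-values through their normal quantiles. A minimal sketch, assuming pre-specified weights proportional to the square roots of the stage sample sizes, is:

```python
import numpy as np
from scipy import stats

def liptak_combination(p1, p2, n1, n2):
    """Weighted inverse-normal (Liptak) combination of two stage-wise
    one-sided p-values, with weights proportional to sqrt(stage size)."""
    w1, w2 = np.sqrt(n1), np.sqrt(n2)
    z1, z2 = stats.norm.isf(p1), stats.norm.isf(p2)
    z = (w1 * z1 + w2 * z2) / np.sqrt(w1**2 + w2**2)
    return stats.norm.sf(z)            # combined one-sided p-value

if __name__ == "__main__":
    print(liptak_combination(p1=0.08, p2=0.04, n1=40, n2=60))
```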

16.
Group sequential trials with time to event end points can be complicated to design. Not only are there unlimited choices for the number of events required at each stage, but for each of these choices, there are unlimited combinations of accrual and follow-up at each stage that provide the required events. Methods are presented for determining optimal combinations of accrual and follow-up for two-stage clinical trials with time to event end points. Optimization is based on minimizing the expected total study length as a function of the expected accrual duration or sample size while providing an appropriate overall size and power. Optimal values of expected accrual duration and minimum expected total study length are given assuming an exponential proportional hazards model comparing two treatment groups. The expected total study length can be substantially decreased by including a follow-up period during which accrual is suspended. Conditions that warrant an interim follow-up period are considered, and the gain in efficiency achieved by including an interim follow-up period is quantified. The gain in efficiency should be weighed against the practical difficulties in implementing such designs. An example is given to illustrate the use of these techniques in designing a clinical trial to compare two chemotherapy regimens for lung cancer. Practical considerations of including an interim follow-up period are discussed.
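The accrual/follow-up trade-off discussed above can be illustrated with the standard expected-events formula for exponential survival and uniform accrual: a subject enrolled uniformly over an accrual period of length a and analysed after an additional follow-up f has event probability 1 − e^(−λf)(1 − e^(−λa))/(λa). The sketch below compares ways of reaching a target number of events in a single arm (longer accrual versus added follow-up) under assumed hazard, accrual rate and event target; it illustrates the trade-off only, not the paper's optimal two-stage construction.

```python
import numpy as np
from scipy import optimize

def event_probability(lam, accrual, follow_up):
    """P(event observed by the analysis) for Exp(lam) survival, uniform
    accrual over [0, accrual], and extra follow_up after accrual ends."""
    return 1.0 - np.exp(-lam * follow_up) * (1.0 - np.exp(-lam * accrual)) / (lam * accrual)

def accrual_needed(lam, rate, target_events, follow_up):
    """Accrual duration giving the target expected number of events when
    patients accrue at `rate` per unit time."""
    gap = lambda a: rate * a * event_probability(lam, a, follow_up) - target_events
    return optimize.brentq(gap, 1e-6, 1e4)

if __name__ == "__main__":
    lam = np.log(2) / 12.0          # median survival 12 months (illustrative)
    rate, target = 10.0, 150        # 10 patients/month, 150 events required
    for follow_up in (0.0, 6.0, 12.0):
        a = accrual_needed(lam, rate, target, follow_up)
        print(f"follow-up {follow_up:4.1f} mo: accrual {a:5.1f} mo, "
              f"total length {a + follow_up:5.1f} mo, patients {rate * a:5.0f}")
```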

17.
There is considerable debate surrounding the choice of methods to estimate the information fraction for futility monitoring in a randomized non-inferiority maximum duration trial. This question was motivated by a pediatric oncology study that aimed to establish non-inferiority for two primary outcomes. While non-inferiority was determined for one outcome, the futility monitoring of the other outcome failed to stop the trial early, despite accumulating evidence of inferiority. For a one-sided trial design in which the intervention is inferior to the standard therapy, futility monitoring should provide the opportunity to terminate the trial early. Our research focuses on the Total Control Only (TCO) method, which is defined as the ratio of observed events to total events exclusively within the standard treatment regimen. We investigate its properties in stopping a trial early in favor of inferiority. Simulation results comparing the TCO method with alternative methods, one based on the assumption of an inferior treatment effect (TH0) and the other based on a specified hypothesis of a non-inferior treatment effect (THA), are provided under various pediatric oncology trial design settings. The TCO method is the only method that provides unbiased information fraction estimates regardless of the hypothesis assumptions, and it exhibits good power and a comparable type I error rate at each interim analysis relative to the other methods. Although none of the methods is uniformly superior on all criteria, the TCO method possesses favorable characteristics, making it a compelling choice for estimating the information fraction when the aim is to reduce cancer treatment-related adverse outcomes.

18.
In phase III clinical trials, some adverse events may not be rare or unexpected and can be considered as a primary measure for safety, particularly in trials of life-threatening conditions, such as stroke or traumatic brain injury. In some clinical areas, efficacy endpoints may be highly correlated with safety endpoints, yet the interim efficacy analyses under group sequential designs usually do not consider safety measures formally in the analyses. Furthermore, safety is often statistically monitored more frequently than efficacy measures. Because early termination of a trial in this situation can be triggered by either efficacy or safety, the impact of safety monitoring on the error probabilities of efficacy analyses may be nontrivial if the original design does not take the multiplicity effect into account. We estimate the actual error probabilities for a bivariate binary efficacy-safety response in large confirmatory group sequential trials. The estimated probabilities are verified by Monte Carlo simulation. Our findings suggest that type I error for efficacy analyses decreases as efficacy-safety correlation or between-group difference in the safety event rate increases. In addition, although power for efficacy is robust to misspecification of the efficacy-safety correlation, it decreases dramatically as between-group difference in the safety event rate increases.
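The kind of Monte Carlo check described above can be sketched directly: correlated binary efficacy and safety responses are generated through a Gaussian copula, the trial is stopped early if an interim safety comparison crosses an illustrative boundary, and the final efficacy test is run only if the trial survives the safety look. Under the efficacy null, the proportion of efficacy rejections estimates the actual type I error after safety monitoring. All rates, boundaries, the correlation, and the single safety look below are assumptions for illustration, not the paper's design.

```python
import numpy as np
from scipy import stats

def correlated_binary(n, p_eff, p_saf, rho, rng):
    """Binary (efficacy, safety) pairs with a Gaussian-copula correlation rho."""
    z = rng.multivariate_normal([0, 0], [[1, rho], [rho, 1]], size=n)
    return (z[:, 0] < stats.norm.ppf(p_eff)), (z[:, 1] < stats.norm.ppf(p_saf))

def one_trial(n_per_arm, p_eff, p_saf_ctrl, p_saf_trt, rho, rng,
              z_safety=2.5, z_eff=1.96):
    """Returns True if efficacy is declared at the final analysis."""
    half = n_per_arm // 2
    # interim look on safety only (half the data per arm)
    e_t, s_t = correlated_binary(half, p_eff, p_saf_trt, rho, rng)
    e_c, s_c = correlated_binary(half, p_eff, p_saf_ctrl, rho, rng)
    diff = s_t.mean() - s_c.mean()
    se = np.sqrt(s_t.mean() * (1 - s_t.mean()) / half + s_c.mean() * (1 - s_c.mean()) / half)
    if se > 0 and diff / se > z_safety:
        return False                                   # stopped early for harm
    # final efficacy analysis on the full sample
    e_t2, _ = correlated_binary(n_per_arm - half, p_eff, p_saf_trt, rho, rng)
    e_c2, _ = correlated_binary(n_per_arm - half, p_eff, p_saf_ctrl, rho, rng)
    pe_t = np.concatenate([e_t, e_t2]).mean()
    pe_c = np.concatenate([e_c, e_c2]).mean()
    se_e = np.sqrt(pe_t * (1 - pe_t) / n_per_arm + pe_c * (1 - pe_c) / n_per_arm)
    return se_e > 0 and (pe_t - pe_c) / se_e > z_eff

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # efficacy null (same rate in both arms); safety rate elevated on treatment
    rejections = [one_trial(400, 0.30, 0.10, 0.15, rho=0.4, rng=rng) for _ in range(4000)]
    print("efficacy type I error after safety monitoring ~", np.mean(rejections))
```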

19.
In this paper we focus on the problem of supersaturated (fewer runs than factors) screening experiments. We consider two major types of designs which have been proposed in this situation: random balance and two-stage group screening. We discuss the relative merits and demerits of each strategy. In addition, we compare the performance of these strategies by means of a case study in which 100 factors are screened in 20, 42, 62, and 84 runs.

20.
How do we communicate nuanced regulatory information to different audiences, recognizing that the consumer audience is very different from the physician audience? In particular, how do we communicate the heterogeneity of treatment effects: the potential differences in treatment effects based on sex, race, and age? That is the fundamental question at the heart of this panel discussion. Each panelist addressed a specific "challenge question" during their five-minute presentation, and the list of questions is provided. The presentations were followed by a question-and-answer session with members of the audience and the panelists.

