Similar Articles (20 results)
1.
Several researchers have proposed solutions for controlling the type I error rate in sequential designs. Bayesian sequential designs are becoming more common; however, they are subject to inflation of the type I error rate. We propose a Bayesian sequential design for a binary outcome that uses an alpha-spending function to control the overall type I error rate. Algorithms are presented for calculating critical values and power for the proposed designs. We also propose a new stopping rule for futility. A sensitivity analysis assesses the effects of varying the prior-distribution parameters and the maximum total sample size on the critical values. Alpha-spending functions are compared through simulation in terms of power and actual sample size. Further simulations show that, when the total sample size is fixed, the proposed design has greater power than the traditional Bayesian sequential design, which sets equal stopping bounds at all interim analyses. We also find that, with all other conditions held constant, the proposed design with the new futility stopping rule yields greater power and can stop earlier with a smaller actual sample size than the traditional futility stopping rule. Finally, we apply the proposed method to a real data set and compare the results with traditional designs.
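To make the alpha-spending idea above concrete, the following minimal sketch computes the cumulative and incremental type I error allowed at four interim looks using a Lan-DeMets O'Brien-Fleming-type spending function. It is a frequentist illustration of the spending-function mechanism only, not the authors' Bayesian algorithm; the one-sided level and information fractions are assumed for the example.

```python
# Minimal sketch of an alpha-spending function (Lan-DeMets, O'Brien-Fleming-type):
# the cumulative type I error allowed up to each look is a function of the
# information fraction, so that the total spent at the final look equals alpha.
from scipy.stats import norm

alpha = 0.025                        # one-sided overall type I error (assumed)
info_frac = [0.25, 0.5, 0.75, 1.0]   # information fractions at the four looks (assumed)

def of_spending(t, a):
    """O'Brien-Fleming-type spending: alpha*(t) = 2 - 2*Phi(z_{a/2} / sqrt(t))."""
    return 2.0 - 2.0 * norm.cdf(norm.ppf(1.0 - a / 2.0) / t ** 0.5)

cum_spent = [of_spending(t, alpha) for t in info_frac]
incr = [cum_spent[0]] + [b - a for a, b in zip(cum_spent, cum_spent[1:])]
print("cumulative alpha spent:", [round(x, 5) for x in cum_spent])
print("incremental alpha     :", [round(x, 5) for x in incr])
```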

2.
Consider a longitudinal experiment in which subjects are allocated to one of two treatment arms and undergo repeated measurements over time. Two non-parametric group sequential procedures, based on the Wilcoxon rank sum test and fitted with asymptotically efficient allocation rules, are derived to test the equality of the two treatments' rates of change over time when the distribution of responses is unknown. The procedures are designed to allow early stopping to reject the null hypothesis while allocating fewer subjects to the inferior treatment. Simulations based on the normal, logistic and exponential distributions showed that the proposed allocation rules substantially reduce allocations to the inferior treatment, at the expense of a relatively small increase in total sample size and a moderate decrease in power compared with the pairwise allocation rule.
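As a rough illustration of the group sequential structure described above, the sketch below recomputes a standardized Wilcoxon rank-sum statistic on accumulating data at three interim looks. The simulated responses, the number and spacing of looks, and the constant (Pocock-style) boundary of about 2.29 are assumptions for the example; the paper's asymptotically efficient allocation rules are not implemented here.

```python
# Illustration only: a standardized Wilcoxon rank-sum statistic recomputed at
# interim looks on accumulating data, compared with a constant Pocock-style
# boundary (~2.29 for three equally spaced looks, two-sided 5%). The adaptive
# allocation rules from the paper are not implemented.
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(1)
x_full = rng.normal(0.0, 1.0, 90)   # control-arm responses (hypothetical)
y_full = rng.normal(0.5, 1.0, 90)   # treatment-arm responses (hypothetical)

for look, n in enumerate([30, 60, 90], start=1):
    x, y = x_full[:n], y_full[:n]
    u = mannwhitneyu(y, x, alternative="two-sided").statistic
    mu = len(x) * len(y) / 2.0
    sigma = np.sqrt(len(x) * len(y) * (len(x) + len(y) + 1) / 12.0)
    z = (u - mu) / sigma            # normal approximation, no tie correction
    print(f"look {look}: n per arm = {n}, z = {z:.2f}, cross boundary: {abs(z) > 2.29}")
```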

3.
Optimized group sequential designs proposed in the literature minimize the average sample size with respect to a prior distribution of the treatment effect while controlling the overall type I and type II error rates (i.e., at the final stage). The optimized asymmetric group sequential designs presented here additionally impose constraints on the stage-one stopping probabilities: the probability of stopping for futility at stage one when no drug effect exists, and the probability of rejection at stage one when the maximum effect size is true, so that the accountability of the group sequential design is ensured from the first stage onward.
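The stage-one constraints mentioned above can be written down directly for a design whose interim test statistic is normally distributed. The sketch below, with an assumed stage-one sample size, maximum effect size, and illustrative futility/efficacy bounds, computes the two stopping probabilities that such an optimization would constrain.

```python
# Sketch of the two stage-one operating characteristics discussed above, for a
# two-arm design whose stage-1 Z statistic is N(theta*sqrt(n1/2), 1). The sample
# size, maximum effect size, and bounds f1/c1 are illustrative assumptions.
from math import sqrt
from scipy.stats import norm

n1 = 50               # per-arm stage-1 sample size (assumed)
theta_max = 0.5       # maximum standardized effect size (assumed)
f1, c1 = 0.0, 2.8     # stage-1 futility and efficacy bounds (assumed)

drift1 = theta_max * sqrt(n1 / 2.0)                 # mean of Z1 under theta_max
p_futility_stop_null = norm.cdf(f1)                 # P(stop for futility at stage 1 | no effect)
p_reject_stage1_max = 1.0 - norm.cdf(c1 - drift1)   # P(reject at stage 1 | theta_max)

print(f"P(futility stop at stage 1 | H0)        = {p_futility_stop_null:.3f}")
print(f"P(efficacy stop at stage 1 | theta_max) = {p_reject_stage1_max:.3f}")
```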

4.
We consider empirical Bayes decision theory in which the component problems are an optimal fixed-sample-size decision problem and a sequential decision problem. With these components, an empirical Bayes decision procedure selects both a stopping rule function and a terminal decision rule function. Empirical Bayes stopping rules are constructed for each case and their asymptotic behaviour is investigated.

5.
We propose a two-stage design for a single-arm clinical trial with an early stopping rule for futility. The design employs different endpoints to assess early stopping and efficacy. The early stopping rule is based on a criterion that can be determined more quickly than the efficacy criterion. The two criteria are nested in the sense that efficacy is a special case of, but usually not identical to, the early stopping endpoint. The design readily allows for planning in terms of statistical significance, power, expected sample size, and expected duration. The method is illustrated with a phase II design comparing rates of disease progression in elderly patients treated for lung cancer with rates found using a historical control. In this example, the early stopping rule is based on the number of patients who exhibit progression-free survival (PFS) at 2 months post-treatment follow-up, while efficacy is judged by the number of patients who have PFS at 6 months. We demonstrate that our design has expected sample size and power comparable with the Simon two-stage design but a shorter expected duration over a range of useful parameter values.

6.
We introduce a new design for dose-finding in the context of toxicity studies for which it is assumed that toxicity increases with dose. The goal is to identify the maximum tolerated dose, which is taken to be the dose associated with a prespecified “target” toxicity rate. The decision to decrease, increase or repeat a dose for the next subject depends on how far an estimated toxicity rate at the current dose is from the target. The size of the window within which the current dose will be repeated is obtained from the theory of Markov chains as applied to group up-and-down designs. But whereas the treatment allocation rule in Markovian group up-and-down designs is based only on information from the current cohort of subjects, the treatment allocation rule for the proposed design is based on the cumulative information at the current dose. We then consider an extension of this new design for clinical trials in which the subject's outcome is not known immediately. The new design is compared to the continual reassessment method.
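The decision rule described above, which repeats the dose when the cumulative toxicity estimate is close to the target and moves up or down otherwise, can be sketched as follows. The target rate, the window half-width, and the number of dose levels are illustrative assumptions; in the paper the window is derived from the Markov-chain theory of group up-and-down designs rather than fixed in advance.

```python
# Sketch of a cumulative-information dose-assignment rule: the next cohort's dose
# is decreased, repeated, or increased according to how far the toxicity rate
# estimated from all subjects treated at the current dose lies from the target.
# The window half-width delta is an illustrative assumption, not the value
# obtained from the group up-and-down Markov-chain theory in the paper.
def next_dose(current_dose, n_tox, n_treated, target=0.30, delta=0.10, n_doses=5):
    p_hat = n_tox / n_treated              # cumulative toxicity estimate at the current dose
    if p_hat > target + delta:             # estimate well above target: de-escalate
        return max(current_dose - 1, 1)
    if p_hat < target - delta:             # estimate well below target: escalate
        return min(current_dose + 1, n_doses)
    return current_dose                    # within the window: repeat the dose

# Example: 2 toxicities among 6 subjects treated so far at dose level 3 (0.33 vs target 0.30)
print(next_dose(current_dose=3, n_tox=2, n_treated=6))   # -> 3, i.e., repeat the dose
```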

7.
Adaptive cluster sampling (ACS) is a suitable sampling design for rare and clustered populations. In environmental and ecological applications, biological populations are generally animals or plants with highly patchy spatial distributions. However, ACS is a less efficient design when the study population is not rare and is weakly aggregated, since the final sample size can easily get out of control. In this paper, a new variant of ACS is proposed in order to improve its performance (in terms of precision and cost) relative to simple random sampling (SRS). The idea is to detect the optimal sample size by means of a data-driven stopping rule that determines when to stop the adaptive procedure. Because introducing a stopping rule violates the theoretical basis of ACS, the behaviour of the ordinary ACS estimators is explored using Monte Carlo simulations. Results show that the proposed variant of ACS controls the effective sample size and prevents the excessive loss of efficiency typical of ACS when the population is less clustered than anticipated. The proposed strategy may be recommended especially when no prior information about the population structure is available, as it does not require prior knowledge of the degree of rarity and clustering of the population of interest.

8.
In this paper, we derive sequential conditional probability ratio tests to compare diagnostic tests without distributional assumptions on the test results. The test statistics in our method are nonparametric weighted areas under the receiver-operating characteristic curves. With the new method, a decision to stop the diagnostic trial early is unlikely to be reversed should the trial continue to its planned end. The conservatism of this approach, with more conservative stopping boundaries during the course of the trial, is especially appealing for diagnostic trials since the end point is not death. In addition, the maximum sample size of our method is no greater than that of a fixed sample test with a similar power function. Simulation studies are performed to evaluate the properties of the proposed sequential procedure. We illustrate the method using data from a thoracic aorta imaging study.
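The basic building block of the test statistic above is the nonparametric (Mann-Whitney) estimate of the area under the ROC curve; a minimal version is sketched below on hypothetical scores. The weighting across diagnostic tests and the sequential conditional boundaries of the paper are not reproduced.

```python
# Minimal sketch of the empirical (Mann-Whitney) AUC estimate that underlies the
# nonparametric test statistic: P(X > Y) + 0.5 * P(X = Y) over all diseased/healthy
# pairs. The weighting and sequential boundaries from the paper are not shown.
import numpy as np

def empirical_auc(scores_diseased, scores_healthy):
    x = np.asarray(scores_diseased)[:, None]
    y = np.asarray(scores_healthy)[None, :]
    return (x > y).mean() + 0.5 * (x == y).mean()

rng = np.random.default_rng(0)
diseased = rng.normal(1.0, 1.0, 40)   # hypothetical scores in diseased subjects
healthy = rng.normal(0.0, 1.0, 60)    # hypothetical scores in non-diseased subjects
print(f"empirical AUC = {empirical_auc(diseased, healthy):.3f}")
```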

9.
In the present work, we formulate a two-treatment, single-period, two-stage adaptive allocation design that allocates a larger proportion of subjects to the better treatment arm in the course of the trial while increasing the precision of the parameter estimator. We examine some properties of the proposed rule, compare it with existing allocation rules, and report a substantial gain in efficiency with a considerably larger number of allocations to the better treatment, even for moderate sample sizes.

10.
This paper studies a sequential procedure R for selecting a random-size subset that contains the multinomial cell with the smallest cell probability. The stopping rule of the proposed procedure R is a composite of the stopping rules of curtailed sampling, inverse sampling, and Ramey-Alam sampling. A result on the worst configuration is established and used to compute the procedure parameters that guarantee certain probability requirements. Tables of these procedure parameters, the corresponding probability of correct selection, the expected sample size, and the expected subset size are given for comparison purposes.

11.
Adaptive sample size adjustment (SSA) for clinical trials consists of examining early subsets of on-trial data to adjust estimates of sample size requirements. Blinded SSA is often preferred over unblinded SSA because it obviates many logistical complications of the latter and generally introduces less bias. On the other hand, current blinded SSA methods for binary data offer little to no new information about the treatment effect, ignore uncertainties associated with the population treatment proportions, and/or depend on enhanced randomization schemes that risk partial unblinding. I propose an innovative blinded SSA method for use when the primary analysis is a non-inferiority or superiority test regarding a risk difference. The method incorporates evidence about the treatment effect via the likelihood function of a mixture distribution. I compare the new method with an established one and with the fixed-sample-size study design in terms of maximization of an expected utility function. The new method maximizes the expected utility better than the comparators do, under a range of assumptions. I illustrate the use of the proposed method with an example that incorporates a Bayesian hierarchical model. Lastly, I suggest topics for future study regarding the proposed methods. Copyright © 2015 John Wiley & Sons, Ltd.
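The mixture-likelihood idea mentioned above can be sketched for pooled blinded binary data: under 1:1 allocation, each pooled response is marginally a 50/50 mixture of Bernoulli(p_c) and Bernoulli(p_c + delta). The control rate, interim counts, and grid below are illustrative assumptions, and the sketch is not the author's utility-based SSA procedure; it also makes visible the identifiability limitation that motivates bringing in external (e.g., prior) information.

```python
# Sketch of the blinded mixture likelihood: with 1:1 allocation the pooled data are
# Bernoulli with success probability 0.5*p_c + 0.5*(p_c + delta). Profiling over
# delta with p_c held at an assumed value illustrates the idea; note that blinded
# data alone identify only the pooled rate, which is why additional (e.g., Bayesian)
# structure is needed. All numbers here are hypothetical.
import numpy as np

def blinded_loglik(delta, p_c, successes, n):
    p_t = p_c + delta
    if not (0.0 < p_c < 1.0 and 0.0 < p_t < 1.0):
        return -np.inf
    p_mix = 0.5 * p_c + 0.5 * p_t
    return successes * np.log(p_mix) + (n - successes) * np.log(1.0 - p_mix)

successes, n = 42, 120                          # pooled blinded interim counts (hypothetical)
grid = np.linspace(-0.3, 0.3, 61)
ll = [blinded_loglik(d, p_c=0.30, successes=successes, n=n) for d in grid]
print("delta maximizing the blinded log-likelihood (p_c fixed at 0.30):",
      round(float(grid[int(np.argmax(ll))]), 2))
```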

12.
Use of full Bayesian decision-theoretic approaches to obtain optimal stopping rules for clinical trial designs typically requires the use of Backward Induction. However, the implementation of Backward Induction, apart from simple trial designs, is generally impossible due to analytical and computational difficulties. In this paper we present a numerical approximation of Backward Induction in a multiple-arm clinical trial design comparing k experimental treatments with a standard treatment where patient response is binary. We propose a novel stopping rule, denoted by τ_p, as an approximation of the optimal stopping rule, using the optimal stopping rule of a single-arm clinical trial obtained by Backward Induction. We then present an example of a double-arm (k=2) clinical trial where we use a simulation-based algorithm together with τ_p to estimate the expected utility of continuing and compare our estimates with exact values obtained by an implementation of Backward Induction. For trials with more than two treatment arms, we evaluate τ_p by studying its operating characteristics in a three-arm trial example. Results from these examples show that our approximate trial design has attractive properties and hence offers a relevant solution to the problem posed by Backward Induction.
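For a sense of what exact Backward Induction involves even in the single-arm binary case on which the approximation τ_p builds, the sketch below runs the dynamic programme over states (patients observed, responses) for a small trial with a Beta prior. The terminal utility (posterior probability that the response rate exceeds a target) and the per-patient cost are illustrative assumptions, not the utility structure used in the paper.

```python
# Sketch of exact Backward Induction for a small single-arm binary trial with a
# Beta(a, b) prior: at each state (n observed, s responses) compare the utility of
# stopping now with the expected utility of observing one more patient, recursing
# backwards from the maximum sample size. Utility and cost are illustrative.
from functools import lru_cache
from scipy.stats import beta

A, B = 1.0, 1.0        # Beta prior parameters (assumed)
N_MAX = 15             # maximum number of patients (assumed)
COST = 0.01            # cost of treating one more patient (assumed)
P_TARGET = 0.5         # response rate worth declaring success (assumed)

def stop_utility(n, s):
    # utility of stopping: posterior probability that the response rate exceeds P_TARGET
    return beta.sf(P_TARGET, A + s, B + n - s)

@lru_cache(maxsize=None)
def value(n, s):
    u_stop = stop_utility(n, s)
    if n == N_MAX:
        return u_stop, "stop"
    p_next = (A + s) / (A + B + n)                       # posterior predictive P(response)
    u_cont = (p_next * value(n + 1, s + 1)[0]
              + (1.0 - p_next) * value(n + 1, s)[0] - COST)
    return (u_stop, "stop") if u_stop >= u_cont else (u_cont, "continue")

print("action at the start of the trial:      ", value(0, 0)[1])
print("action after 3 responses in 5 patients:", value(5, 3)[1])
```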

13.
We propose an efficient group sequential monitoring rule for clinical trials. At each interim analysis both efficacy and futility are evaluated through a specified loss structure together with the predicted power. The proposed design is robust to a wide range of priors, and achieves the specified power with a saving of sample size compared to existing adaptive designs. A method is also proposed to obtain a reduced-bias estimator of treatment difference for the proposed design. The new approaches hold great potential for efficiently selecting a more effective treatment in comparative trials. Operating characteristics are evaluated and compared with other group sequential designs in empirical studies. An example is provided to illustrate the application of the method.

14.
Consider a finite population of large but unknown size of hidden objects. Consider searching for these objects for a period of time, at a certain cost, and receiving a reward depending on the sizes of the objects found. Suppose that the size and discovery time of the objects both have unknown distributions, but the conditional distribution of time given size is exponential with an unknown non-negative and non-decreasing function of the size as failure rate. The goal is to find an optimal way to stop the discovery process. Assuming that the above parameters are known, an optimal stopping time is derived and its asymptotic properties are studied. Then, an adaptive rule based on order restricted estimates of the distributions from truncated data is presented. This adaptive rule is shown to perform nearly as well as the optimal stopping time for large population size.

15.
In this paper, given an arbitrary fixed target sample size, we describe a sequential allocation scheme for comparing two competing treatments in clinical trials. The proposed scheme is a compromise between ethical and optimum allocations. Using some specific probability models, we have shown that, for estimating the risk difference (RD) between two treatment effects, the scheme provides smaller variance than that provided by the corresponding fixed sample size equal allocation sampling scheme.

16.
In a two-sample testing problem, observations from one of the samples are sometimes more difficult and/or costly to collect than those from the other. It may also be that observations from one population have already been collected and, for operational reasons, we do not wish to collect more observations from the second population than are necessary to reach a decision. Partially sequential techniques are very useful in such situations; they gained popularity in the statistics literature by combining the best aspects of fixed-sample and sequential procedures. The literature contains various partially sequential techniques for different types of data, yet the multivariate framework, although very common in practice, has not been addressed. The present paper develops a class of partially sequential nonparametric test procedures for two-sample multivariate continuous data. We suggest a suitable stopping rule based on inverse sampling and propose a class of test statistics computed from the samples drawn under the suggested sampling scheme. Various asymptotic properties of the proposed tests are explored, and an extensive simulation study examines their asymptotic performance. Finally, the benefit of the proposed procedure is demonstrated with an application to real-life data on liver disease.

17.
Many assumptions, including assumptions regarding treatment effects, are made at the design stage of a clinical trial for power and sample size calculations. It is desirable to check these assumptions during the trial by using blinded data. Methods for sample size re-estimation based on blinded data analyses have been proposed for normal and binary endpoints. However, it has been debated whether a reliable estimate of the treatment effect can be obtained in a typical clinical trial situation. In this paper, we consider the case of a survival endpoint and investigate the feasibility of estimating the treatment effect in an ongoing trial without unblinding. We incorporate information from a surrogate endpoint and investigate three estimation procedures, including a classification method and two expectation-maximization (EM) algorithms. Simulations and a clinical trial example are used to assess the performance of the procedures. Our studies show that the EM algorithms depend heavily on the initial estimates of the model parameters. Despite the use of a surrogate endpoint, all three methods show large variation in the treatment effect estimates and hence fail to provide a precise conclusion about the treatment effect. Copyright © 2012 John Wiley & Sons, Ltd.
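To see why the EM estimates are sensitive to their starting values, the simplified sketch below fits a 50/50 mixture of two exponential survival distributions to pooled, blinded event times and runs EM from two different initializations. The rates, sample sizes, and absence of censoring and of the surrogate endpoint are all simplifying assumptions relative to the paper's setting.

```python
# Simplified sketch of the EM idea: blinded pooled survival times modeled as an
# equal mixture of Exponential(rate1) and Exponential(rate2). Censoring and the
# surrogate endpoint used in the paper are omitted; the two runs below start from
# different initial rates to illustrate the sensitivity noted in the abstract.
import numpy as np

rng = np.random.default_rng(7)
times = np.concatenate([rng.exponential(1.0 / 0.10, 100),    # control arm, rate 0.10
                        rng.exponential(1.0 / 0.15, 100)])   # treatment arm, rate 0.15
rng.shuffle(times)                                           # blinded: arm labels unknown

def em_exponential_mixture(t, rate1, rate2, n_iter=200):
    for _ in range(n_iter):
        d1 = 0.5 * rate1 * np.exp(-rate1 * t)    # component densities, equal mixing weights
        d2 = 0.5 * rate2 * np.exp(-rate2 * t)
        w1 = d1 / (d1 + d2)                      # E-step: membership probabilities
        rate1 = w1.sum() / (w1 * t).sum()        # M-step: weighted exponential-rate MLEs
        rate2 = (1.0 - w1).sum() / ((1.0 - w1) * t).sum()
    return rate1, rate2

print(em_exponential_mixture(times, 0.05, 0.30))   # one set of starting values
print(em_exponential_mixture(times, 0.12, 0.13))   # another; the fits can differ markedly
```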

18.
金勇进, 石可. 《统计研究》2000, 17(2): 56-60
I. Statement of the problem. How to allocate the sample size across the strata in stratified sampling is an important question in survey design. Computing the stratum sample sizes requires auxiliary information, such as the variance of the target variable within each stratum. In survey practice, and particularly in one-off surveys, this auxiliary information is often unavailable, so we face the problem of allocating the sample size across the strata under conditions of minimal information. This paper grew out of an example from a survey design the authors carried out while doing research at NORC (National Opinion Research Center) in the United States; the example is summarized, refined and analysed here in the hope of addressing the question of how to allocate the sample size in stratified sampling when only minimal information is available…

19.
This paper considers the problem of sequential point estimation, under an appropriate loss function, of the location parameter when the errors form an autoregressive process with unknown scale and autoregressive parameters. A sequential procedure is developed and an asymptotic second-order expansion is provided for the difference between the expected stopping time and the optimal fixed-sample-size procedure. The asymptotic normality of the stopping time is also proved. Although the procedure is asymptotically risk efficient, it is not clear whether it has bounded regret.

20.
Bayesian sequential and adaptive randomization designs are gaining popularity in clinical trials because of their potential to reduce the number of required participants and save resources. We propose a Bayesian sequential design with adaptive randomization rates so as to allocate newly recruited patients to the treatment arms more efficiently. In this paper, we consider 2-arm clinical trials in which patients are allocated to the two arms with a randomization rate chosen to achieve minimum variance for the test statistic. Algorithms are presented to calculate the optimal randomization rate, critical values, and power for the proposed design. A sensitivity analysis checks the influence of changing the prior distributions on the design. Simulation studies compare the proposed method with traditional methods in terms of power and actual sample size, and show that, when the total sample size is fixed, the proposed design can achieve greater power and/or require a smaller actual sample size than the traditional Bayesian sequential design. Finally, we apply the proposed method to a real data set and compare the results with the Bayesian sequential design without adaptive randomization in terms of sample size; the proposed method can further reduce the required sample size.
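The variance-minimizing randomization rate mentioned above has a simple closed form for a difference in proportions: allocate to each arm in proportion to the standard deviation of its responses (Neyman allocation). The response rates below are illustrative assumptions, and the sketch fixes the rate in advance rather than updating it adaptively from accumulating data as the proposed design does.

```python
# Sketch of the variance-minimizing (Neyman) randomization rate for a difference in
# proportions: Var = p1(1-p1)/n1 + p2(1-p2)/n2 is smallest when the fraction sent
# to arm 1 is s1/(s1+s2) with s_i = sqrt(p_i(1-p_i)). Rates here are illustrative;
# the proposed design updates the rate adaptively rather than fixing it up front.
from math import sqrt

def neyman_fraction(p1, p2):
    s1, s2 = sqrt(p1 * (1 - p1)), sqrt(p2 * (1 - p2))
    return s1 / (s1 + s2)                 # fraction of patients randomized to arm 1

def var_diff(p1, p2, n_total, frac1):
    n1 = n_total * frac1
    return p1 * (1 - p1) / n1 + p2 * (1 - p2) / (n_total - n1)

p1, p2, n = 0.5, 0.1, 200                 # hypothetical response rates and total size
r = neyman_fraction(p1, p2)
print(f"optimal fraction to arm 1: {r:.3f}")
print(f"variance at the optimum  : {var_diff(p1, p2, n, r):.5f}")
print(f"variance under 1:1       : {var_diff(p1, p2, n, 0.5):.5f}")
```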
