Similar Articles
20 similar articles retrieved (search time: 31 ms)
1.
The purpose of our study is to propose a procedure for determining the sample size at each stage of repeated group significance tests intended to compare the efficacy of two treatments when the response variable is normal. A procedure for reducing the maximum sample size is needed because group sequential tests often require large sample sizes. In order to reduce the sample size at each stage, we construct repeated confidence boundaries that enable us to identify which of the two treatments is more effective at an early stage, and we use recursive numerical-integration formulae to determine the sample size at the intermediate stages. We compare our procedure with Pocock's in terms of maximum sample size and average sample size in simulations.
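
A rough feel for this trade-off can be obtained by Monte Carlo simulation. The sketch below is not the authors' recursive-integration procedure; it simply simulates a two-arm group sequential z-test with a constant Pocock-type boundary applied at each look and reports empirical power and average sample size per arm. The boundary constant, stage size, and effect size are illustrative assumptions.

```python
import numpy as np

def simulate_group_sequential(delta, sigma=1.0, n_per_stage=25, n_stages=2,
                              c_boundary=2.178, n_sim=10_000, seed=1):
    """Monte Carlo estimate of power and average sample size per arm for a
    two-arm group sequential z-test that applies the same (Pocock-type)
    critical value c_boundary at every look."""
    rng = np.random.default_rng(seed)
    rejections, total_n = 0, 0.0
    for _ in range(n_sim):
        x_sum = y_sum = 0.0
        n = 0
        for stage in range(1, n_stages + 1):
            x_sum += rng.normal(delta, sigma, n_per_stage).sum()
            y_sum += rng.normal(0.0, sigma, n_per_stage).sum()
            n = stage * n_per_stage
            z = (x_sum / n - y_sum / n) / (sigma * np.sqrt(2.0 / n))
            if abs(z) >= c_boundary:      # repeated boundary crossed: stop early
                rejections += 1
                break
        total_n += n
    return rejections / n_sim, total_n / n_sim

# c_boundary = 2.178 is approximately the Pocock constant for two equally
# spaced looks at two-sided alpha = 0.05 (taken from published tables).
power, asn = simulate_group_sequential(delta=0.5)
print(f"empirical power ≈ {power:.3f}, average n per arm ≈ {asn:.1f}")
```

Varying the stage size or the number of stages changes both quantities, which is exactly the comparison the abstract reports against Pocock's design.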

2.
In this paper, we derive sequential conditional probability ratio tests to compare diagnostic tests without distributional assumptions on test results. The test statistics in our method are nonparametric weighted areas under the receiver-operating characteristic curves. With the new method, a decision to stop the diagnostic trial early is unlikely to be reversed should the trial continue to its planned end. This approach imposes more conservative stopping boundaries during the course of the trial, a conservatism that is especially appealing for diagnostic trials since the endpoint is not death. In addition, the maximum sample size of our method is no greater than that of a fixed-sample test with a similar power function. Simulation studies are performed to evaluate the properties of the proposed sequential procedure. We illustrate the method using data from a thoracic aorta imaging study.
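
The building block of the test statistics described above is the nonparametric (Mann–Whitney) estimate of the area under the ROC curve. A minimal sketch with illustrative simulated marker scores; the weighting and sequential stopping boundaries of the paper are not implemented here.

```python
import numpy as np

def empirical_auc(diseased, healthy):
    """Nonparametric (Mann-Whitney) estimate of the area under the ROC curve:
    the probability that a randomly chosen diseased subject scores higher than
    a randomly chosen healthy one, counting ties as one half."""
    d = np.asarray(diseased, float)[:, None]
    h = np.asarray(healthy, float)[None, :]
    return (d > h).mean() + 0.5 * (d == h).mean()

# Illustrative marker scores for cases and controls
rng = np.random.default_rng(0)
cases = rng.normal(1.0, 1.0, 80)
controls = rng.normal(0.0, 1.0, 80)
print(empirical_auc(cases, controls))
```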

3.
In a two-sample testing problem, observations from one of the samples are sometimes more difficult and/or costlier to collect than those from the other. It may also be the case that observations from one of the populations have been collected previously and, for operational reasons, we do not wish to collect more observations from the second population than are necessary for reaching a decision. Partially sequential techniques are very useful in such situations. They gained popularity in the statistics literature because, by their very nature, they capitalize on the best aspects of both fixed-sample and sequential procedures. The literature offers various types of partially sequential techniques usable under different data set-ups. Nonetheless, the multivariate framework, although very common in practice, has not been addressed in this context. The present paper aims at developing a class of partially sequential nonparametric test procedures for two-sample multivariate continuous data. For this we suggest a suitable stopping rule adopting an inverse sampling technique and propose a class of test statistics based on the samples drawn under the suggested sampling scheme. Various asymptotic properties of the proposed tests are explored. An extensive simulation study is also performed to study the asymptotic performance of the tests. Finally, the benefit of the proposed test procedure is demonstrated with an application to real-life data on liver disease.

4.
Assuming that the frequency of occurrence follows the Poisson distribution, we develop sample size calculation procedures for testing equality based on an exact test procedure and an asymptotic test procedure under an AB/BA crossover design. We employ Monte Carlo simulation to demonstrate the use of these sample size formulae and to evaluate the accuracy, with respect to power, of the sample size calculation formula derived from the asymptotic test procedure in a variety of situations. We note that when both the relative treatment effect of interest and the underlying intraclass correlation between frequencies within patients are large, the sample size calculation based on the asymptotic test procedure can lose accuracy. In this case, the sample size calculation procedure based on the exact test is recommended. On the other hand, if the relative treatment effect of interest is small, the minimum required number of patients per group will be large, and the asymptotic test procedure will be valid for use. In this case, we may consider using the sample size calculation formula derived from the asymptotic test procedure to reduce the number of patients needed for the exact test procedure. We include an example regarding a double-blind randomized crossover trial comparing salmeterol with a placebo in exacerbations of asthma to illustrate the practical use of these sample size formulae. Copyright © 2013 John Wiley & Sons, Ltd.
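
For orientation, the sketch below shows a generic normal-approximation sample size for comparing two Poisson event rates with independent groups. It is not the paper's AB/BA crossover formula, which additionally accounts for the within-patient intraclass correlation; the rates in the example are illustrative.

```python
import numpy as np
from scipy.stats import norm

def n_per_group_poisson(lambda1, lambda2, alpha=0.05, power=0.8):
    """Normal-approximation sample size per group for testing equality of two
    Poisson event rates with independent samples (one observed count per
    subject): n = (z_{1-alpha/2} + z_{power})^2 (l1 + l2) / (l1 - l2)^2."""
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    return int(np.ceil(z ** 2 * (lambda1 + lambda2) / (lambda1 - lambda2) ** 2))

# e.g. mean exacerbation counts of 1.0 versus 0.7 per treatment period
print(n_per_group_poisson(1.0, 0.7))
```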

5.
In group sequential clinical trials, several sample size re-estimation methods have been proposed in the literature that allow the sample size to be changed at the interim analysis. Most of these methods are based on either the conditional error function or the interim effect size. Our simulation studies compared the operating characteristics of three commonly used sample size re-estimation methods: Chen et al. (2004), Cui et al. (1999), and Muller and Schafer (2001). Gao et al. (2008) extended the CDL method and provided an analytical expression for the lower and upper thresholds of conditional power within which the type I error is preserved. Recently, Mehta and Pocock (2010) argued extensively that the real benefit of the adaptive approach is to invest the sample size resources in stages and to increase the sample size only if the interim results fall in the so-called “promising zone” defined in their article. We incorporated this concept in our simulations while comparing the three methods. To test the robustness of these methods, we explored the impact of an incorrect variance assumption on the operating characteristics. We found that the operating characteristics of the three methods are very comparable. In addition, the promising-zone concept suggested by Mehta and Pocock gives the desired power with a smaller average sample size, and thus increases the efficiency of the trial design.
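
A minimal sketch of the two ingredients discussed above: conditional power under the interim trend and a promising-zone style sample size increase, under a one-sided normal (Brownian-motion) approximation. The zone thresholds and the search over new sample sizes are illustrative assumptions; in a real design the final test statistic or critical value must also be adjusted, for example via the Cui–Hung–Wang weighting or the Mehta–Pocock conditions, to preserve the type I error.

```python
import numpy as np
from scipy.stats import norm

def conditional_power(z1, t, z_alpha):
    """Conditional power of a one-sided z-test at the final analysis, given the
    interim z-statistic z1 at information fraction t, assuming the true effect
    equals the interim estimate (the 'current trend')."""
    return norm.cdf((z1 / np.sqrt(t) - z_alpha) / np.sqrt(1 - t))

def reestimate_n(z1, n1, n_planned, n_max, z_alpha=norm.ppf(0.975),
                 cp_low=0.30, cp_high=0.80):
    """Illustrative promising-zone rule: if conditional power under the current
    trend lies in [cp_low, cp_high), raise the final sample size (capped at
    n_max) until conditional power reaches cp_high."""
    cp = conditional_power(z1, n1 / n_planned, z_alpha)
    if cp_low <= cp < cp_high:
        for n_new in range(n_planned, n_max + 1):
            if conditional_power(z1, n1 / n_new, z_alpha) >= cp_high:
                return n_new, cp
        return n_max, cp
    return n_planned, cp              # unfavourable or already favourable zone

print(reestimate_n(z1=1.4, n1=100, n_planned=200, n_max=400))
```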

6.
The Hosmer–Lemeshow (H–L) test is a widely used method for assessing the goodness-of-fit of a logistic regression model. However, the H–L test is sensitive to the sample size and to the number of groups used in the test, so caution is needed when interpreting an H–L test with a large sample size. In this paper, we propose a simple test procedure to evaluate the fit of a logistic regression model with a large sample size, in which a bootstrap method is used and the test result is determined by the power of the H–L test at a target sample size. Simulation studies show that the proposed method can effectively standardize the power of the H–L test under the pre-specified level of type I error. Application to two datasets illustrates the usefulness of the proposed method.
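
For reference, a compact sketch of the H–L statistic itself; the bootstrap standardization at a target sample size proposed in the paper is not implemented here. The grouping by deciles of fitted risk and the g − 2 degrees of freedom follow the conventional formulation.

```python
import numpy as np
from scipy.stats import chi2

def hosmer_lemeshow(y, p_hat, g=10):
    """Hosmer-Lemeshow statistic: sort subjects by fitted probability, split them
    into g roughly equal groups, and compare observed with expected event counts."""
    order = np.argsort(p_hat)
    y, p_hat = np.asarray(y)[order], np.asarray(p_hat)[order]
    stat = 0.0
    for idx in np.array_split(np.arange(len(y)), g):
        n_k = len(idx)
        obs, exp = y[idx].sum(), p_hat[idx].sum()
        pbar = np.clip(exp / n_k, 1e-10, 1 - 1e-10)   # guard degenerate groups
        stat += (obs - exp) ** 2 / (n_k * pbar * (1 - pbar))
    return stat, chi2.sf(stat, df=g - 2)              # conventional g - 2 df

rng = np.random.default_rng(3)
p = rng.uniform(0.05, 0.9, 500)
y = rng.binomial(1, p)                                # well-calibrated probabilities
print(hosmer_lemeshow(y, p))
```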

7.
In this study we discuss group sequential procedures for comparing two treatments based on multivariate observations in clinical trials. We suppose that the response vector under each treatment has a multivariate normal distribution with unknown covariance matrix, and we propose a group sequential χ2 statistic to carry out repeated significance tests of the hypothesis of no difference between the two population mean vectors. To obtain a group sequential test with a reduced average sample number, we propose a modified group sequential χ2 statistic that extends Jennison and Turnbull (1991). After constructing repeated confidence boundaries for the repeated significance test, we compare the two group sequential procedures based on these statistics with respect to average sample number and power in simulations.
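
The per-look statistic underlying such procedures is the two-sample Hotelling-type quadratic form, which is approximately χ² with p degrees of freedom in large samples. A minimal sketch; the repeated boundaries of the group sequential procedure are not implemented here.

```python
import numpy as np
from scipy.stats import chi2

def two_sample_chi2(x, y):
    """Large-sample chi-square statistic for H0: equal mean vectors of two
    multivariate samples (rows = subjects, columns = response components),
    using the pooled sample covariance matrix."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    n1, n2, p = x.shape[0], y.shape[0], x.shape[1]
    diff = x.mean(axis=0) - y.mean(axis=0)
    s_pooled = ((n1 - 1) * np.cov(x, rowvar=False) +
                (n2 - 1) * np.cov(y, rowvar=False)) / (n1 + n2 - 2)
    stat = (n1 * n2 / (n1 + n2)) * diff @ np.linalg.solve(s_pooled, diff)
    return stat, chi2.sf(stat, df=p)

rng = np.random.default_rng(4)
x = rng.multivariate_normal([0.3, 0.1], np.eye(2), size=60)
y = rng.multivariate_normal([0.0, 0.0], np.eye(2), size=60)
print(two_sample_chi2(x, y))
```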

8.
In this paper we derive the predictive density function of a future observation under the assumption of an Edgeworth-type non-normal prior distribution for the unknown mean of a normal population. Fixed-size single-sample and sequential sampling inspection plans, in a decisive prediction framework, are examined for their sensitivity to departures from normality of the prior distribution. Numerical illustrations indicate that, for a fixed-size plan, the decision to market the remaining items of a given lot may be sensitive to the presence of skewness or kurtosis in the prior distribution. However, the Bayes decision based on the sequential plan may not change, although the expected gains may change as the non-normality of the prior distribution varies.

9.
A statistical test can be seen as a procedure for producing a decision based on observed data, where some decisions consist of rejecting a hypothesis (yielding a significant result) and some do not, and where one controls the probability of making a wrong rejection at some prespecified significance level. Whereas traditional hypothesis testing involves only two possible decisions (to reject a null hypothesis or not), Kaiser's directional two-sided test, as well as the more recently introduced testing procedure of Jones and Tukey, each equivalent to running two one-sided tests, involves three possible decisions when inferring the value of a unidimensional parameter. The latter procedure assumes that a point null hypothesis is impossible (e.g., that two treatments cannot have exactly the same effect), allowing a gain of statistical power. There are, however, situations where a point hypothesis is indeed plausible, for example, when considering hypotheses derived from Einstein's theories. In this article, we introduce a five-decision rule testing procedure, equivalent to running a traditional two-sided test in addition to two one-sided tests, which combines the advantages of the testing procedures of Kaiser (no assumption that a point hypothesis is impossible) and of Jones and Tukey (higher power), allowing a nonnegligible (typically 20%) reduction of the sample size needed to reach a given statistical power for obtaining a significant result, compared to the traditional approach.
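
One plausible way to combine the three tests mentioned above into five decisions is sketched below. The labelling of the intermediate, direction-only decisions is an illustrative reading of the abstract, not necessarily the published rule.

```python
from scipy.stats import norm

def five_decision(z, alpha=0.05):
    """Illustrative combination of a two-sided test and two one-sided tests at
    level alpha into five possible decisions about a parameter theta
    (hypothetical labelling of the outcomes)."""
    z_two = norm.ppf(1 - alpha / 2)   # two-sided critical value
    z_one = norm.ppf(1 - alpha)       # one-sided critical value
    if z >= z_two:
        return "conclude theta > 0"
    if z >= z_one:
        return "conclude theta >= 0 (direction only, weaker claim)"
    if z <= -z_two:
        return "conclude theta < 0"
    if z <= -z_one:
        return "conclude theta <= 0 (direction only, weaker claim)"
    return "no decision"

print(five_decision(1.80))   # falls between the one- and two-sided thresholds
```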

10.
Bayesian sequential and adaptive randomization designs are gaining popularity in clinical trials thanks to their potential to reduce the number of required participants and save resources. We propose a Bayesian sequential design with adaptive randomization rates so as to allocate newly recruited patients to the treatment arms more efficiently. In this paper, we consider 2-arm clinical trials. Patients are allocated to the 2 arms with a randomization rate chosen to achieve minimum variance for the test statistic. Algorithms are presented to calculate the optimal randomization rate, critical values, and power for the proposed design. Sensitivity analyses are implemented to check the influence of changing the prior distributions on the design. Simulation studies are used to compare the proposed method and traditional methods in terms of power and actual sample size. The simulations show that, when the total sample size is fixed, the proposed design can achieve greater power and/or a smaller actual sample size than the traditional Bayesian sequential design. Finally, we apply the proposed method to a real data set and compare the results with those of the Bayesian sequential design without adaptive randomization in terms of sample size; the proposed method can further reduce the required sample size.
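
The variance-minimizing target that such adaptive randomization rules aim for is the Neyman allocation. A small sketch using plain interim standard deviation estimates; in the Bayesian design of the paper these would be replaced by posterior summaries.

```python
import numpy as np

def optimal_allocation(sd1, sd2):
    """Allocation fraction to arm 1 that minimizes the variance of the
    difference-in-means statistic, Var = sd1^2/(pi*N) + sd2^2/((1-pi)*N);
    the minimizer is the Neyman allocation pi = sd1 / (sd1 + sd2)."""
    return sd1 / (sd1 + sd2)

# Re-compute the randomization rate at an interim look from current estimates
rng = np.random.default_rng(2)
arm1, arm2 = rng.normal(0.4, 1.5, 40), rng.normal(0.0, 1.0, 40)
pi_next = optimal_allocation(arm1.std(ddof=1), arm2.std(ddof=1))
print(f"randomize the next cohort to arm 1 with probability {pi_next:.2f}")
```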

11.
Conditional power calculations are frequently used to guide the decision whether or not to stop a trial for futility or to modify the planned sample size. They ignore the information in short-term endpoints and baseline covariates, and thereby do not make fully efficient use of the information in the data. We therefore propose an interim decision procedure based on the conditional power approach which exploits the information contained in baseline covariates and short-term endpoints. We realize this by treating the estimation of the treatment effect at the interim analysis as a missing data problem. This problem is addressed by employing specific prediction models for the long-term endpoint which enable the incorporation of baseline covariates and multiple short-term endpoints. We show that the proposed procedure leads to an efficiency gain and a reduced sample size, without compromising the Type I error rate of the procedure, even when the adopted prediction models are misspecified. In particular, implementing our proposal in the conditional power approach enables earlier decisions relative to standard approaches, whilst controlling the probability of an incorrect decision. This time gain results in a lower expected number of recruited patients in case of stopping for futility, such that fewer patients receive the futile regimen. We explain how these methods can be used in adaptive designs with unblinded sample size re-assessment based on the inverse normal P-value combination method to control the Type I error. We support the proposal with Monte Carlo simulations based on data from a real clinical trial.

12.
In early drug development, especially when studying new mechanisms of action or new disease areas, little is known about the targeted or anticipated treatment effect or about variability estimates. Adaptive designs that allow for early stopping but also use interim data to adapt the sample size have been proposed as a practical way of dealing with these uncertainties. Predictive power and conditional power are two commonly mentioned techniques that allow predictions of what will happen at the end of the trial based on the interim data. Decisions about stopping or continuing the trial can then be based on these predictions. However, unless the user of these statistics has a deep understanding of their characteristics, important pitfalls may be encountered, especially with the use of predictive power. The aim of this paper is to highlight these potential pitfalls. It is critical that statisticians understand the fundamental differences between predictive power and conditional power, as they can have dramatic effects on decision making at the interim stage, especially if used to re-evaluate the sample size. The use of predictive power can lead to much larger sample sizes than either conditional power or standard sample size calculations. One crucial difference is that predictive power takes account of all uncertainty, parts of which are ignored by standard sample size calculations and by conditional power. By comparing the characteristics of each of these statistics we highlight important characteristics of predictive power that experimenters need to be aware of when using this approach.
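
Under the usual normal (Brownian-motion) approximation with a vague prior, the two quantities contrasted above have simple closed forms in terms of the interim z-statistic z1, the information fraction t, and the final one-sided critical value. The sketch below uses these standard formulas, not any paper-specific variant.

```python
import numpy as np
from scipy.stats import norm

def conditional_power(z1, t, z_alpha):
    """Conditional power at the final analysis, assuming the true effect equals
    the interim estimate ('current trend')."""
    return norm.cdf((z1 / np.sqrt(t) - z_alpha) / np.sqrt(1 - t))

def predictive_power(z1, t, z_alpha):
    """Predictive power: conditional power averaged over the posterior of the
    effect under a vague (flat) prior, so the uncertainty about the effect that
    conditional power ignores is folded in."""
    return norm.cdf((z1 - z_alpha * np.sqrt(t)) / np.sqrt(1 - t))

z1, t, z_alpha = 1.5, 0.5, norm.ppf(0.975)   # halfway through the trial
print(conditional_power(z1, t, z_alpha), predictive_power(z1, t, z_alpha))
```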

13.
In this paper, we develop the methodology for designing clinical trials with any factorial arrangement when the primary outcome is time to event. We provide a matrix formulation for calculating the sample size and study duration necessary to test any effect with a prespecified type I error rate and power. Assuming that a time to event follows an exponential distribution, we describe the relationships between the effect size, the power, and the sample size. We present examples for illustration purposes. We provide a simulation study to verify the numerical calculations of the expected number of events and the duration of the trial. The change in the power produced by a reduced number of observations or by accruing no patients to certain factorial combinations is also described.
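
As a point of reference for the exponential case, the sketch below shows the familiar two-arm special case: Schoenfeld's approximation for the required number of events and its translation into a number of subjects under exponential event times with uniform accrual. The paper's matrix formulation covers general factorial arrangements; the hazard, accrual, and follow-up values here are illustrative.

```python
import numpy as np
from scipy.stats import norm

def required_events(hr, alpha=0.05, power=0.8, allocation=0.5):
    """Schoenfeld's approximation to the number of events needed for a two-arm
    log-rank comparison with hazard ratio `hr` and allocation fraction to arm 1."""
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    return z ** 2 / (allocation * (1 - allocation) * np.log(hr) ** 2)

def sample_size_exponential(hr, lam_control, accrual, follow_up, **kw):
    """Translate required events into subjects assuming exponential event times,
    uniform accrual over `accrual` time units and additional follow-up of
    `follow_up` (average event probability over entry times)."""
    def p_event(lam):
        # P(event observed by end of study), entry time uniform on (0, accrual)
        return 1 - (np.exp(-lam * follow_up) -
                    np.exp(-lam * (accrual + follow_up))) / (lam * accrual)
    lam_trt = lam_control * hr
    d = required_events(hr, **kw)
    p_bar = 0.5 * (p_event(lam_control) + p_event(lam_trt))
    return d, int(np.ceil(d / p_bar))     # (events needed, total subjects)

print(sample_size_exponential(hr=0.7, lam_control=0.1, accrual=24, follow_up=12))
```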

14.
Sequential methods are developed for testing multiple hypotheses, resulting in a statistical decision for each individual test and controlling the familywise error rate and the familywise power in the strong sense. Extending the ideas of step-up and step-down methods for multiple comparisons to sequential designs, the new techniques improve on the Bonferroni and closed testing methods proposed earlier through a substantial reduction of the expected sample size.

15.
In this article, we systematically study the optimal truncated group sequential test on binomial proportions. Through an analysis of the cost structure, average test cost is introduced as a new optimality criterion. According to the new criterion, optimal tests are defined with respect to the design parameters, including the boundaries, the success discriminant value, the stage sample vector, the stage size, and the maximum sample size. Since the computation time required to find optimal designs by exhaustive search is intolerably long, a group sequential sample-space sorting method and accompanying procedures are developed to find near-optimal designs. In comparison with the international standard ISO 2859-1, the truncated group sequential designs proposed in this article can reduce average test costs by around 20%.

16.
Proportional hazards is a common assumption when designing confirmatory clinical trials in oncology. This assumption affects not only the analysis but also the sample size calculation. The presence of delayed effects causes the hazard ratio to change while the trial is ongoing: at the beginning we do not observe any difference between treatment arms, and after some unknown time point the differences between treatment arms start to appear. Hence, the proportional hazards assumption no longer holds, and both the sample size calculation and the analysis methods to be used should be reconsidered. The weighted log-rank test allows weighting of early, middle, and late differences through the Fleming and Harrington class of weights and is proven to be more efficient when the proportional hazards assumption does not hold. The Fleming and Harrington class of weights, along with the estimated delay, can be incorporated into the sample size calculation in order to maintain the desired power once the treatment arm differences start to appear. In this article, we explore the impact of delayed effects in group sequential and adaptive group sequential designs and make an empirical evaluation, in terms of power and type I error rate, of the weighted log-rank test in a simulated scenario with fixed values of the Fleming and Harrington class of weights. We also give some practical recommendations regarding which methodology should be used in the presence of delayed effects, depending on certain characteristics of the trial.
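
A compact sketch of the Fleming–Harrington G(rho, gamma) weighted log-rank statistic referred to above, with weights based on the left-continuous pooled Kaplan–Meier estimate; under the null hypothesis the statistic is asymptotically standard normal. The group sequential and adaptive layers discussed in the article are not implemented here.

```python
import numpy as np

def fh_weighted_logrank(time, event, group, rho=0.0, gamma=1.0):
    """Fleming-Harrington G(rho, gamma) weighted log-rank z-statistic for two
    groups (group coded 0/1, event = 1 for an observed event, 0 for censoring).
    Weights w(t) = S(t-)^rho * (1 - S(t-))^gamma with S the pooled Kaplan-Meier
    estimate; rho = 0 with gamma > 0 emphasises late differences."""
    time, event, group = (np.asarray(a) for a in (time, event, group))
    num = var = 0.0
    s_prev = 1.0                                   # pooled KM just before t
    for t in np.unique(time[event == 1]):
        at_risk = time >= t
        n = at_risk.sum()
        n1 = (at_risk & (group == 1)).sum()
        d = ((time == t) & (event == 1)).sum()
        d1 = ((time == t) & (event == 1) & (group == 1)).sum()
        w = s_prev ** rho * (1 - s_prev) ** gamma
        num += w * (d1 - d * n1 / n)
        if n > 1:
            var += w ** 2 * d * (n1 / n) * (1 - n1 / n) * (n - d) / (n - 1)
        s_prev *= 1 - d / n                        # update pooled KM after t
    return num / np.sqrt(var)
```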

17.
We consider empirical Bayes decision theory in which the component problems are the optimal fixed-sample-size decision problem and a sequential decision problem. With these components, an empirical Bayes decision procedure selects both a stopping rule function and a terminal decision rule function. Empirical Bayes stopping rules are constructed for each case, and their asymptotic behaviour is investigated.

18.
When more than two treatments are under comparison, we may consider the incomplete block crossover design (IBCD) to reduce the number of patients needed relative to a parallel-groups design and to shorten the duration of a crossover trial. We develop an asymptotic procedure for simultaneously testing the equality of two treatments versus a control treatment (or placebo) in frequency data under the IBCD with two periods. We derive a sample size calculation procedure for the desired power of detecting the given treatment effects at a nominal level and suggest a simple ad hoc adjustment to improve the accuracy of the sample size determination when the resulting minimum required number of patients is not large. We employ Monte Carlo simulation to evaluate the finite-sample performance of the proposed test, the accuracy of the sample size calculation procedure, and that of the procedure with the simple ad hoc adjustment suggested here. We use data taken from a crossover trial comparing the number of exacerbations under salbutamol or salmeterol versus a placebo in asthma patients to illustrate the sample size calculation procedure.

19.
For binary endpoints, the required sample size depends not only on the specified significance level, power, and clinically relevant difference but also on the overall event rate. However, the overall event rate may vary considerably between studies and, as a consequence, the assumptions made about this nuisance parameter in the planning phase are to a great extent uncertain. The internal pilot study design is an appealing strategy for dealing with this problem. Here, the overall event probability is estimated during the ongoing trial based on the pooled data of both treatment groups and, if necessary, the sample size is adjusted accordingly. From a regulatory viewpoint, besides preserving blindness, it is required that any consequences for the Type I error rate be explained. We present analytical computations of the actual Type I error rate for the internal pilot study design with binary endpoints and compare them with the actual level of the chi-square test for the fixed sample size design. A method is given that permits control of the specified significance level for the chi-square test under blinded sample size recalculation. Furthermore, the properties of the procedure with respect to power and expected sample size are assessed. Throughout the paper, both the situation of equal sample sizes per group and that of an unequal allocation ratio are considered. The method is illustrated with an application to a clinical trial in depression. Copyright © 2004 John Wiley & Sons Ltd.
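
The recalculation step itself is simple to sketch: estimate the overall event rate from the pooled (blinded) interim data and recompute the sample size for the assumed clinically relevant difference. The formula below is a plain normal approximation for two proportions, and splitting the difference symmetrically around the pooled rate is an illustrative convention; the paper's contribution, the exact assessment and control of the resulting Type I error rate, is not reproduced here.

```python
import numpy as np
from scipy.stats import norm

def n_per_group_binary(p1, p2, alpha=0.05, power=0.8):
    """Approximate per-group sample size for a two-sided comparison of two
    proportions (simple normal approximation, no continuity correction)."""
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    return z ** 2 * (p1 * (1 - p1) + p2 * (1 - p2)) / (p1 - p2) ** 2

def blinded_reestimation(events_pooled, n_pooled, delta, **kw):
    """Internal pilot step: estimate the overall event rate from the pooled
    (blinded) interim data and recompute the per-group sample size, assuming the
    clinically relevant difference `delta` sits symmetrically around that rate."""
    p_overall = events_pooled / n_pooled
    p1, p2 = p_overall + delta / 2, p_overall - delta / 2
    return int(np.ceil(n_per_group_binary(p1, p2, **kw)))

# Planning assumed an overall event rate of 0.30; the blinded interim data
# (44 events in 200 patients) suggest 0.22, so the sample size is recalculated.
print(blinded_reestimation(events_pooled=44, n_pooled=200, delta=0.15))
```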

20.
The concept of a partially sequential hypothesis test was introduced by Wolfe (1977a), and associated procedures were developed under both parametric and nonparametric assumptions. In this paper we consider distribution-free extensions of those indicator tests, based on the placements of the sequentially obtained observations among the previously collected fixed-size sample. Exact and asymptotic properties (as the fixed sample size increases to infinity) of these sequential placement procedures are obtained, including statements about the power and the expected number of sequentially obtained observations. The results of a Monte Carlo study are used to differentiate between various placement scoring schemes.
