Similar Articles
20 similar articles found.
1.
We consider the problem of accounting for multiplicity for two correlated endpoints in the comparison of two treatments using weighted hypothesis tests. Various weighted testing procedures are reviewed, and a more powerful method (a variant of the weighted Simes test) is evaluated for the general bivariate normal case and for a particular clinical trial example. Results from these evaluations are summarized and indicate that the weighted methods perform in a manner similar to unweighted methods. Copyright © 2005 John Wiley & Sons, Ltd.
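For reference, the standard weighted Simes global test that the evaluated variant builds on can be sketched as follows. The formulation below (sort by p-value, compare against alpha times the cumulative weight) is the usual Benjamini and Hochberg version; it is assumed here for illustration, since the paper's exact variant is not spelled out in the abstract.

```python
def weighted_simes(pvals, weights, alpha=0.05):
    """Weighted Simes global test: with weights summing to 1, sort the
    p-values ascending and reject the global null if some ordered p-value
    p_(i) <= alpha * (cumulative weight of the first i hypotheses).
    Equal weights recover the ordinary Simes test."""
    order = sorted(range(len(pvals)), key=lambda i: pvals[i])
    cum_w = 0.0
    for i in order:
        cum_w += weights[i]
        if pvals[i] <= alpha * cum_w:
            return True
    return False

# Two correlated endpoints with unequal importance:
print(weighted_simes([0.012, 0.30], [0.8, 0.2]))  # True: 0.012 <= 0.05*0.8
```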

2.
A generalization of step-up and step-down multiple test procedures is proposed. This step-up-down procedure is useful when the objective is to reject a specified minimum number, q, out of a family of k hypotheses. If this basic objective is met at the first step, then it proceeds in a step-down manner to see whether more than q hypotheses can be rejected. Otherwise it proceeds in a step-up manner to see whether fewer than q hypotheses can be rejected. The usual step-down procedure is the special case where q = 1, and the usual step-up procedure is the special case where q = k. Analytical and numerical comparisons between the powers of the step-up-down procedures with different choices of q are made to see how these powers depend on the actual number of false hypotheses. Examples of application include comparing the efficacy of a treatment to a control for multiple endpoints and testing the sensitivity of a clinical trial for comparing the efficacy of a new treatment with a set of standard treatments.
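The step-up-down decision rule can be sketched on sorted p-values. The critical constants c_i = alpha/(k - i + 1) used below are an illustrative Holm/Hochberg-type choice, not the exact constants of the paper (which are calibrated to the joint distribution of the test statistics); with this choice, q = 1 reduces to a Holm-like step-down and q = k to a Hochberg-like step-up.

```python
def step_up_down(pvals, q, alpha=0.05):
    """Step-up-down test of order q on the sorted p-values p(1)<=...<=p(k).
    Start at step q: if p(q) clears its critical value, reject the q most
    significant hypotheses and keep stepping down the list; otherwise step
    up toward fewer rejections.  Returns the number of rejections."""
    p = sorted(pvals)
    k = len(p)
    c = [alpha / (k - i) for i in range(k)]  # c[i] tests the (i+1)-th smallest p
    if p[q - 1] <= c[q - 1]:
        r = q
        while r < k and p[r] <= c[r]:  # step down: try to reject more
            r += 1
        return r
    r = q - 1
    while r > 0 and p[r - 1] > c[r - 1]:  # step up: settle for fewer
        r -= 1
    return r

print(step_up_down([0.03, 0.04], q=1))  # 0 (Holm-like)
print(step_up_down([0.03, 0.04], q=2))  # 2 (Hochberg-like)
```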

3.
We present step-wise test procedures based on the Bonferroni-Holm principle for multi-way ANOVA-type models. It is shown for two plausible modifications that the multiple level α is preserved. These theoretical results are supplemented by a simulation study, in a two-way ANOVA setting, to compare the multiple procedures with respect to their simultaneous power and the relative frequency of correctly rejected false hypotheses. Financial support of the Deutsche Forschungsgemeinschaft is gratefully acknowledged.
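The Bonferroni-Holm principle underlying these procedures is, in its basic form (a generic sketch, not the paper's ANOVA-specific modifications):

```python
def holm(pvals, alpha=0.05):
    """Bonferroni-Holm step-down test: compare the i-th smallest p-value
    against alpha/(k - i + 1) and stop at the first failure.  Controls
    the familywise error rate at alpha under arbitrary dependence.
    Returns a reject flag per hypothesis in the original order."""
    k = len(pvals)
    order = sorted(range(k), key=lambda i: pvals[i])
    reject = [False] * k
    for step, i in enumerate(order):
        if pvals[i] > alpha / (k - step):
            break
        reject[i] = True
    return reject

print(holm([0.04, 0.001, 0.03]))  # [False, True, False]
```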

4.
The comparison of increasing doses of a compound to a zero-dose control is of interest in medical and toxicological studies. Assume that the mean dose effects are non-decreasing among the non-zero doses of the compound. A simple modification of Dunnett's procedure is proposed to construct simultaneous confidence intervals for pairwise comparisons of each dose group with the zero-dose control by utilizing the ordering of the means. The simultaneous lower and upper bounds of the new procedure are monotone, which is not the case with Dunnett's procedure and which is useful for categorizing dose levels. The expected gains of the new procedure over Dunnett's procedure are studied, and a real-data example shows that the new procedure compares well with its predecessor.

5.
For ethical reasons, group sequential trials were introduced to allow trials to stop early in the event of extreme results. Endpoints in such trials are usually mortality or irreversible morbidity. For a given endpoint, the norm is to use a single test statistic and to use that same statistic at each analysis. This approach is risky because the test statistic has to be specified before the study is unblinded, and there is a loss of power if the assumptions that ensure optimality at each analysis are not met. To minimize the risk of moderate to substantial loss of power due to a suboptimal choice of statistic, a robust method was developed for nonsequential trials. The concept is analogous to diversifying financial investments to minimize risk. The method is based on combining P values from multiple test statistics for formal inference while controlling the type I error rate at its designated value. This article evaluates the performance of 2 P value combining methods for group sequential trials. The emphasis is on time-to-event trials, although results from less complex trials are also included. The gain or loss in power of the combination method relative to a single statistic is asymmetric in its favor. Depending on the power of each individual test, the combination method can give more power than any single test or give power close to that of the most powerful test. The versatility of the method is that it can combine P values from different test statistics for analyses at different times. The robustness of the results suggests that inference from group sequential trials can be strengthened by the use of combined tests.
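As one concrete example of combining P values from several statistics, Fisher's classical method is sketched below. Note the caveat: Fisher's chi-square reference distribution assumes independent P values, whereas P values from different statistics computed on the same trial data are correlated, so the article's combination method necessarily adjusts for that. This sketch is illustration only.

```python
import math

def fisher_combined_p(pvals):
    """Fisher's combination: X = -2 * sum(log p_i) is chi-square with 2m
    degrees of freedom under the global null when the m p-values are
    independent.  For even df = 2m the chi-square survival function has
    the closed form exp(-x/2) * sum_{j<m} (x/2)^j / j!."""
    m = len(pvals)
    x = -2.0 * sum(math.log(p) for p in pvals)
    h = x / 2.0
    return math.exp(-h) * sum(h ** j / math.factorial(j) for j in range(m))

print(fisher_combined_p([0.5]))         # a single p-value is returned as is
print(fisher_combined_p([0.04, 0.09]))  # ~0.024, stronger than either alone
```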

6.
A radio frequency (RF) repeater is a wireless electronic device that transmits signals from a base transceiver station to a mobile station. When inspecting RF repeaters, various items are required to be tested to ensure their quality. In this paper, we propose a systematic procedure for the inspection by using a multiple linear regression method. The basic idea is to predict the inspection result without real inspection. In particular, a multicollinearity problem is considered in the regression analysis. Two case studies are conducted for validating the proposed method with an RF repeater production company in Korea.
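The multicollinearity check in such a regression can be illustrated with variance inflation factors (VIFs). The abstract does not say which diagnostic or remedy the authors use, so the sketch below is a generic assumption:

```python
import numpy as np

def vifs(X):
    """Variance inflation factor per predictor: VIF_j = 1 / (1 - R_j^2),
    where R_j^2 comes from regressing column j on the other columns.
    Since R_j^2 = 1 - SS_res/SS_tot, VIF_j = SS_tot/SS_res.  Values far
    above ~10 are a common rule-of-thumb flag for multicollinearity."""
    X = np.asarray(X, dtype=float)
    out = []
    for j in range(X.shape[1]):
        y = X[:, j]
        A = np.column_stack([np.ones(len(y)), np.delete(X, j, axis=1)])
        beta, *_ = np.linalg.lstsq(A, y, rcond=None)
        resid = y - A @ beta
        ss_res = float(resid @ resid)
        ss_tot = float(((y - y.mean()) ** 2).sum())
        out.append(ss_tot / ss_res if ss_res > 0 else float("inf"))
    return out

print(vifs([[1, 2], [2, 4.1], [3, 6], [4, 8.2]]))  # both VIFs are huge
```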

7.
Randomised controlled trials are considered the gold standard in trial design. However, phase II oncology trials with a binary outcome are often single-arm. Although a number of reasons exist for choosing a single-arm trial, the primary reason is that single-arm designs require fewer participants than their randomised equivalents. Therefore, the development of novel methodology that makes randomised designs more efficient is of value to the trials community. This article introduces a randomised two-arm binary outcome trial design that includes stochastic curtailment (SC), allowing for the possibility of stopping a trial before the final conclusions are known with certainty. In addition to SC, the proposed design involves the use of a randomised block design, which allows investigators to control the number of interim analyses. This approach is compared with existing designs that also use early stopping, through the use of a loss function comprising a weighted sum of design characteristics. Comparisons are also made using an example from a real trial. The comparisons show that for many possible loss functions, the proposed design is superior to existing designs. Further, the proposed design may be more practical, by allowing a flexible number of interim analyses. One existing design produces superior design realisations when the anticipated response rate is low. However, when using this design, the probability of rejecting the null hypothesis is sensitive to misspecification of the null response rate. Therefore, when considering randomised designs in phase II, we recommend the proposed approach be preferred over other sequential designs.

8.
In most cases, the distribution of communications is unknown, and one may summarize social network communications with categorical attributes in a contingency table. Due to the categorical nature of the data and the large number of features, many parameters must be considered and estimated in the model, and the accuracy of the estimators decreases. To overcome the problems of high dimensionality and an unknown communications distribution, multiple correspondence analysis is used to reduce the number of parameters. The rescaled data are then studied in a Dirichlet model whose parameters are to be estimated. Moreover, two control charts, Hotelling's T2 and the multivariate exponentially weighted moving average (MEWMA), are developed to monitor the parameters of the Dirichlet distribution. The performance of the proposed method is evaluated through simulation studies in terms of the average run length criterion. Finally, the proposed method is applied to a real case.
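The MEWMA chart statistic used in the monitoring step can be sketched generically as follows. Here it is applied to arbitrary multivariate observations with a known in-control mean and covariance, whereas the paper applies the chart to estimated Dirichlet parameters:

```python
import numpy as np

def mewma_statistics(X, mean, cov, lam=0.2):
    """MEWMA chart: Z_t = lam*(x_t - mean) + (1 - lam)*Z_{t-1}, monitored
    through T2_t = Z_t' Sigma_Z^{-1} Z_t with the asymptotic covariance
    Sigma_Z = (lam / (2 - lam)) * cov.  A signal is raised when T2_t
    exceeds a control limit chosen for the desired in-control ARL."""
    sigma_z_inv = np.linalg.inv(lam / (2.0 - lam) * np.asarray(cov, float))
    z = np.zeros(len(mean))
    t2 = []
    for x in X:
        z = lam * (np.asarray(x, float) - mean) + (1.0 - lam) * z
        t2.append(float(z @ sigma_z_inv @ z))
    return t2

# An in-control point followed by a shifted one:
print(mewma_statistics([[0, 0], [5, 5]], np.zeros(2), np.eye(2)))  # [0.0, ~18.0]
```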

9.
By applying Tiku's MML robust procedure to Brown and Forsythe's (1974) statistic, this paper derives a robust and more powerful procedure for comparing several means under heteroscedasticity and nonnormality. Monte Carlo studies indicate that, for four of the five nonnormal distributions investigated (the uniform distribution being the exception), the new test is more powerful than the Brown and Forsythe test in all cases, and that the two tests have substantially the same power under the normal distribution.
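The Brown and Forsythe (1974) statistic being robustified has the closed form below; the paper's robust version replaces the ordinary sample means and variances with Tiku's modified maximum likelihood (MML) estimators, which could be plugged into the same formula.

```python
import numpy as np

def brown_forsythe_stat(groups):
    """Brown-Forsythe statistic for equality of k means under unequal
    variances:
        F* = sum_i n_i * (ybar_i - ybar)^2 / sum_i (1 - n_i/N) * s_i^2,
    where ybar is the weighted grand mean and s_i^2 the sample variances."""
    n = np.array([len(g) for g in groups], dtype=float)
    N = n.sum()
    means = np.array([np.mean(g) for g in groups])
    variances = np.array([np.var(g, ddof=1) for g in groups])
    grand = np.sum(n * means) / N
    return float(np.sum(n * (means - grand) ** 2)
                 / np.sum((1.0 - n / N) * variances))

print(brown_forsythe_stat([[1, 2, 3], [4, 5, 6]]))  # 13.5
```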

10.
In this paper, we derive sequential conditional probability ratio tests to compare diagnostic tests without distributional assumptions on the test results. The test statistics in our method are nonparametric weighted areas under the receiver-operating characteristic curves. With the new method, a decision to stop the diagnostic trial early is unlikely to be reversed should the trial continue to its planned end. The resulting more conservative stopping boundaries during the course of the trial are especially appealing for diagnostic trials, since the endpoint is not death. In addition, the maximum sample size of our method is no greater than that of a fixed sample test with a similar power function. Simulation studies are performed to evaluate the properties of the proposed sequential procedure. We illustrate the method using data from a thoracic aorta imaging study.

11.
This paper considers p-value based step-wise rejection procedures for testing multiple hypotheses. Existing procedures use constants as critical values at all steps. With the intention of incorporating the exact magnitudes of the p-values at the earlier steps into the decisions at the later steps, this paper applies a different strategy in which the critical values at the later steps are determined as functions of the p-values from the earlier steps. As a result, we derive a new equality and, based on it, develop a two-step rejection procedure. The new procedure is a shortcut of a step-up procedure and is very simple. In terms of power, the proposed procedure is generally comparable to existing ones and is markedly superior when the largest p-value is anticipated to be less than 0.5.

12.
The case of selecting between a set of fixed models is considered. The true model is assumed to be contained in the set of proposed models, and errors are taken to be normally distributed. A sequential procedure with controlled probabilities of incorrect selection is proposed. The procedure is shown to have optimal properties and is extended to the estimated-model case by a bootstrap procedure.

13.
A robust Bayesian design is presented for a single-arm phase II trial with an early stopping rule to monitor a time-to-event endpoint. The assumed model is a piecewise exponential distribution with non-informative gamma priors on the hazard parameters in subintervals of a fixed follow-up interval. As an additional comparator, we also define and evaluate a version of the design based on an assumed Weibull distribution. Except for the assumed models, the piecewise exponential and Weibull model based designs are identical to an established design that assumes an exponential event time distribution with an inverse gamma prior on the mean event time. The three designs are compared by simulation under several log-logistic and Weibull distributions having different shape parameters, and for different monitoring schedules. The simulations show that, compared to the exponential inverse gamma model based design, the piecewise exponential design has substantially better performance, with much higher probabilities of correctly stopping the trial early, and shorter and less variable trial duration, when the assumed median event time is unacceptably low. Compared to the Weibull model based design, the piecewise exponential design does a much better job of maintaining small incorrect stopping probabilities in cases where the true median survival time is desirably large.

14.
This paper describes how a multistage analysis strategy for a clinical trial can assess a sequence of hypotheses that pertain to successively more stringent criteria for excess risk exclusion or superiority for a primary endpoint with a low event rate. The criteria for assessment can correspond to excess risk of an adverse event or to a guideline for sufficient efficacy as in the case of vaccine trials. The proposed strategy is implemented through a set of interim analyses, and success for one or more of the less stringent criteria at an interim analysis can be the basis for a regulatory submission, whereas the clinical trial continues to accumulate information to address the more stringent, but not futile, criteria. Simulations show that the proposed strategy is satisfactory for control of type I error, sufficient power, and potential success at interim analyses when the true relative risk is more favorable than assumed for the planned sample size. Copyright © 2013 John Wiley & Sons, Ltd.

15.
This paper demonstrates how certain statistics, computed from a sample of size n (from almost any distribution), may be simulated using a sequence of substantially fewer than n random normal variates. Many statistics θ, including almost all maximum likelihood estimates, can be expressed in terms of the sample trigonometric moments (STM). The STM are asymptotically multivariate normal, with a mean vector and variance-covariance matrix easily expressible in terms of equally spaced characteristic function evaluations. Thus one only needs to know the Fourier transform, or equivalently the characteristic function, associated with the elements of any moderate to large i.i.d. sample, and to have access to a normal random number generator, to generate a sequence of STM with distributional properties almost identical to those of the STM computed from that sample. These STM can in turn be used to compute the desired statistic θ.

16.
A method for controlling the familywise error rate that combines the Bonferroni adjustment and fixed-sequence testing procedures is proposed. This procedure allots Type I error as in the Bonferroni adjustment, but allows the Type I error to accumulate whenever a null hypothesis is rejected. In this manner, power for hypotheses tested later in a prespecified order is increased. The order of the hypothesis tests needs to be prespecified, as in a fixed-sequence testing procedure, but unlike that procedure all hypotheses can always be tested, allowing for an a priori method of concluding a difference in the various endpoints. One application is in clinical trials in which mortality is a concern but the power to detect a difference in mortality is expected to be low. If the effect on mortality is larger than anticipated, this method allows a test with a prespecified method of controlling the Type I error rate. Copyright © 2003 John Wiley & Sons, Ltd.
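Read directly from the abstract, the procedure can be sketched as follows. This is a minimal interpretation: the weights w_i that split alpha across the ordered hypotheses are an assumed ingredient, and carried-over alpha is reset after a non-rejection.

```python
def fallback_test(pvals, weights, alpha=0.05):
    """Fallback-type testing: the overall alpha is split over the ordered
    hypotheses via prespecified weights summing to 1.  Each hypothesis is
    tested at its own share plus any alpha carried forward from earlier
    rejections; unlike a pure fixed-sequence test, every hypothesis is
    tested even after a non-rejection (but then without the carry-over)."""
    carry = 0.0
    reject = []
    for p, w in zip(pvals, weights):
        level = alpha * w + carry
        ok = p <= level
        reject.append(ok)
        carry = level if ok else 0.0
    return reject

# Mortality first, with most of the alpha, then a secondary endpoint:
print(fallback_test([0.030, 0.045], [0.8, 0.2]))  # [True, True]
print(fallback_test([0.500, 0.009], [0.8, 0.2]))  # [False, True]
```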

17.
A version of the multiple decision problem is studied in which the procedure is based only on the current observation and the previous decision. A necessary and sufficient condition for inconsistency of the stepwise maximum likelihood procedure is shown to be the boundedness of the likelihood ratios. In the case of consistency, the (typically slow) rate of convergence to zero of the error probabilities is determined.

18.
All-pairs power in a one-way ANOVA is the probability of detecting all true differences between pairs of means. Ramsey (1978) found that for normal distributions having equal variances, step-down multiple comparison procedures can have substantially more all-pairs power than single-step procedures, such as Tukey’s HSD, when equal sample sizes are randomly sampled from each group. This paper suggests a step-down procedure for the case of unequal variances and compares it to Dunnett's T3 technique. The new procedure is similar in spirit to one of the heteroscedastic procedures described by Hochberg and Tamhane (1987), but it has certain advantages that are discussed in the paper. Included are results on unequal sample sizes.

19.
We present a unifying approach to multiple testing procedures for sequential (or streaming) data by giving sufficient conditions for a sequential multiple testing procedure to control the familywise error rate (FWER). Together, we call these conditions a ‘rejection principle for sequential tests’, which we then apply to some existing sequential multiple testing procedures to give simplified understanding of their FWER control. Next, the principle is applied to derive two new sequential multiple testing procedures with provable FWER control, one for testing hypotheses in order and another for closed testing. Examples of these new procedures are given by applying them to a chromosome aberration data set and finding the maximum safe dose of a treatment.

20.
This article considers a Bayesian hierarchical model for multiple comparisons in linear models where the population medians satisfy a simple order restriction. Representing the asymmetric Laplace distribution as a scale mixture of normals with an exponential mixing density, and using a continuous prior restricted to the order constraints, a Gibbs sampling algorithm for parameter estimation and simultaneous comparison of treatment medians is proposed. Posterior probabilities of all possible hypotheses on the equality or inequality of treatment medians are estimated using Bayes factors computed via Savage-Dickey density ratios. The performance of the proposed median-based model is investigated on simulated and real datasets. The results show that the proposed method can outperform the commonly used method based on treatment means when the data come from nonnormal distributions.
