Similar Articles
20 similar articles retrieved.
1.
For testing the non-inferiority (or equivalence) of an experimental treatment to a standard treatment, the odds ratio (OR) of patient response rates has been recommended to measure the relative treatment efficacy. On the basis of an exact test procedure proposed elsewhere for a simple crossover design, we develop an exact sample-size calculation procedure with respect to the OR of patient response rates for a desired power of detecting non-inferiority at a given nominal type I error. We note that the sample size calculated for a desired power based on an asymptotic test procedure can be much smaller than that based on the exact test procedure under a given situation. We further discuss the advantages and disadvantages of sample-size calculation using the exact test and the asymptotic test procedures. We use an example comparing two inhalation devices for asthmatics to illustrate the sample-size calculation procedure developed here.
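As a rough illustration of how a power-driven sample-size search of this kind can be organized, the sketch below runs a simulation-based search. `simulate_trial` and `reject_noninferiority` are hypothetical placeholders standing in for the crossover data-generating model and the exact (or asymptotic) non-inferiority test, neither of which is specified here.

```python
# Minimal sketch of a simulation-based sample-size search for a desired power.
# `simulate_trial` and `reject_noninferiority` are hypothetical placeholders
# for generating crossover data under the alternative and applying the chosen
# (exact or asymptotic) test; they are not from the paper.
import numpy as np

def required_sample_size(simulate_trial, reject_noninferiority,
                         target_power=0.8, alpha=0.05,
                         n_grid=range(10, 201, 5), n_sim=2000, seed=1):
    rng = np.random.default_rng(seed)
    power = 0.0
    for n in n_grid:
        rejections = 0
        for _ in range(n_sim):
            data = simulate_trial(n, rng)           # data under the alternative
            if reject_noninferiority(data, alpha):  # exact or asymptotic test
                rejections += 1
        power = rejections / n_sim
        if power >= target_power:
            return n, power   # smallest n on the grid reaching the target power
    return None, power
```

The same skeleton applies whether the plugged-in test is exact or asymptotic, which is what drives the sample-size difference noted above.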

2.
Sunset Salvo     
The Wilcoxon-Mann-Whitney test enjoys great popularity among scientists comparing two groups of observations, especially when measurements made on a continuous scale are non-normally distributed. Triggered by different results for the procedure from two statistics programs, we compared the outcomes from 11 PC-based statistics packages. The findings were that the delivered p values ranged from significant to nonsignificant at the 5% level, depending on whether a large-sample approximation or an exact permutation form of the test was used and, in the former case, whether or not a correction for continuity was used and whether or not a correction for ties was made. Some packages also produced pseudo-exact p values, based on the null distribution under the assumption of no ties. A further crucial point is that the variant of the algorithm used for computation by the packages is rarely indicated in the output or documented in the Help facility and the manuals. We conclude that the only accurate form of the Wilcoxon-Mann-Whitney procedure is one in which the exact permutation null distribution is compiled for the actual data.
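The sketch below, using made-up tie-free data and SciPy (version 1.7 or later assumed for the `method` argument), shows how the exact-permutation and large-sample versions of the test, with and without the continuity correction, can be computed side by side.

```python
# Illustrative comparison of exact vs. large-sample Wilcoxon-Mann-Whitney p values.
# The data are made up for illustration; they are not from the article.
from scipy.stats import mannwhitneyu

x = [1.1, 2.3, 3.7, 4.0, 5.2, 6.6]
y = [2.0, 2.9, 4.4, 5.0, 6.1, 7.3, 8.5]

exact = mannwhitneyu(x, y, alternative="two-sided", method="exact")
asym_cc = mannwhitneyu(x, y, alternative="two-sided", method="asymptotic",
                       use_continuity=True)    # normal approximation with continuity correction
asym = mannwhitneyu(x, y, alternative="two-sided", method="asymptotic",
                    use_continuity=False)      # normal approximation without correction

print(exact.pvalue, asym_cc.pvalue, asym.pvalue)
```

With ties in the data, the choice of tie handling adds yet another source of discrepancy, which is precisely the point made above.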

3.
Assuming that the frequency of occurrence follows the Poisson distribution, we develop sample size calculation procedures for testing equality based on an exact test procedure and an asymptotic test procedure under an AB/BA crossover design. We employ Monte Carlo simulation to demonstrate the use of these sample size formulae and evaluate the accuracy of the sample size calculation formula derived from the asymptotic test procedure with respect to power in a variety of situations. We note that when both the relative treatment effect of interest and the underlying intraclass correlation between frequencies within patients are large, the sample size calculation based on the asymptotic test procedure can lose accuracy. In this case, the sample size calculation procedure based on the exact test is recommended. On the other hand, if the relative treatment effect of interest is small, the minimum required number of patients per group will be large, and the asymptotic test procedure will be valid for use. In this case, we may consider use of the sample size calculation formula derived from the asymptotic test procedure to reduce the number of patients needed for the exact test procedure. We include an example regarding a double-blind randomized crossover trial comparing salmeterol with a placebo in exacerbations of asthma to illustrate the practical use of these sample size formulae.

4.
In this paper, we consider a nonparametric test procedure for multivariate data with grouped components under the two-sample problem setting. For the construction of the test statistic, we use linear rank statistics which were derived by applying the likelihood ratio principle for each component. For the null distribution of the test statistic, we apply the permutation principle for small or moderate sample sizes and derive the limiting distribution for the large sample case. Also, we illustrate our test procedure with an example and compare it with other procedures through a simulation study. Finally, we discuss some additional interesting features as concluding remarks.

5.
A sequential method for approximating a general permutation test (SAPT) is proposed and evaluated. Permutations are randomly generated from some set G, and a sequential probability ratio test (SPRT) is used to determine whether an observed test statistic falls sufficiently far in the tail of the permutation distribution to warrant rejecting some hypothesis. An estimate and bounds on the power function of the SPRT are used to find bounds on the effective significance level of the SAPT. Guidelines are developed for choosing parameters in order to obtain a desired significance level and minimize the number of permutations needed to reach a decision. A theoretical estimate of the average number of permutations under the null hypothesis is given along with simulation results demonstrating the power and average number of permutations for various alternatives. The sequential approximation retains the generality of the permutation test, while avoiding the computational complexities that arise in attempting to compute the full permutation distribution exactly.
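A minimal sketch of the general idea, not the authors' exact algorithm or tuning: each random permutation yields a Bernoulli indicator of whether the permuted statistic exceeds the observed one, and a Wald SPRT of exceedance probability p0 versus p1 (bracketing the significance level) stops sampling once a decision is warranted. The `statistic` argument is a user-supplied function of the data and permuted labels.

```python
# Sketch of a sequential approximation to a permutation test via an SPRT on the
# exceedance probability. p0/p1 bracket the significance level; a_err/b_err are
# the SPRT error rates. This is an illustrative reading of the approach, not
# the paper's exact procedure.
import numpy as np

def sequential_permutation_test(t_obs, statistic, data, labels,
                                p0=0.04, p1=0.06, a_err=0.01, b_err=0.01,
                                max_perms=100_000, seed=0):
    rng = np.random.default_rng(seed)
    upper = np.log((1 - b_err) / a_err)   # SPRT boundaries on the log-likelihood ratio
    lower = np.log(b_err / (1 - a_err))
    llr = 0.0
    for _ in range(max_perms):
        perm = rng.permutation(labels)
        exceed = statistic(data, perm) >= t_obs
        if exceed:
            llr += np.log(p1 / p0)                  # evidence for a large permutation p value
        else:
            llr += np.log((1 - p1) / (1 - p0))      # evidence for a small permutation p value
        if llr >= upper:
            return "do not reject"   # permutation p value judged to be at least p1
        if llr <= lower:
            return "reject"          # permutation p value judged to be at most p0
    return "undecided"
```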

6.
In this article, we consider nonparametric test procedures based on a group of quantile test statistics. We consider the quadratic form for the two-sided test and the maximal and summing types of statistics for the one-sided alternatives. Then we derive the null limiting distributions of the proposed test statistics using large-sample approximation theory. Also, we consider applying the permutation principle to obtain the null distribution; in particular, the supremum type requires the permutation principle for obtaining its null distribution. Then we illustrate our procedure with an example and compare the proposed tests with other existing tests, including the individual quantile tests, by obtaining empirical powers through a simulation study. We also comment on related issues for this testing procedure as concluding remarks. Finally, we prove the lemmas and theorems in the appendices.

7.
Covariance changes detection in multivariate time series
This paper studies the detection of step changes in the variances and in the correlation structure of the components of a vector of time series. Two procedures based on the likelihood ratio test (LRT) statistic and on a cumulative sums (cusum) statistic are considered and compared in a simulation study. We conclude that for a single covariance change the cusum procedure is more powerful in small and medium samples, whereas the likelihood ratio test is more powerful in large samples. However, for several covariance changes the cusum procedure works clearly better. The procedures are illustrated in two real data examples.
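As a toy illustration of an LRT-type scan for a single covariance change (not the exact statistics or calibration used in the paper), one can compare the log-determinants of the covariance matrices before and after each candidate change point:

```python
# Toy sketch: Gaussian likelihood-ratio scan for a single covariance change
# point in a d-dimensional series X (n x d). Illustrative only; the statistics
# and calibration in the paper may differ.
import numpy as np

def lrt_covariance_change(X, min_seg=20):
    n, d = X.shape
    stats = np.full(n, -np.inf)
    S_all = np.cov(X, rowvar=False)
    for k in range(min_seg, n - min_seg):
        S1 = np.cov(X[:k], rowvar=False)
        S2 = np.cov(X[k:], rowvar=False)
        # LRT-type comparison of one common covariance vs. two segment covariances
        stats[k] = (n * np.log(np.linalg.det(S_all))
                    - k * np.log(np.linalg.det(S1))
                    - (n - k) * np.log(np.linalg.det(S2)))
    k_hat = int(np.argmax(stats))
    return k_hat, stats[k_hat]   # candidate change point and scan maximum
```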

8.
In this paper, we propose a nonparametric test for homogeneity of overall variabilities for two multi-dimensional populations. Comparisons between the proposed nonparametric procedure and the asymptotic parametric procedure and a permutation test based on standardized generalized variances are made when the underlying populations are multivariate normal. We also study the performance of these test procedures when the underlying populations are non-normal. We observe that the nonparametric procedure and the permutation test based on standardized generalized variances are not as powerful as the asymptotic parametric test under normality. However, they are reliable and powerful tests for comparing overall variability under other multivariate distributions such as the multivariate Cauchy, the multivariate Pareto and the multivariate exponential distributions, even with small sample sizes. A Monte Carlo simulation study is used to evaluate the performance of the proposed procedures. An example from an educational study is used to illustrate the proposed nonparametric test.
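A sketch of a permutation test of this kind, taking the standardized generalized variance of a p-dimensional sample as |S|^(1/p) and using the absolute log-ratio as the test statistic; the authors' exact statistic and calibration may differ in detail.

```python
# Sketch of a permutation test comparing overall variability of two multivariate
# samples via standardized generalized variances |S|^(1/p). Illustrative only.
import numpy as np

def sgv(X):
    p = X.shape[1]
    return np.linalg.det(np.cov(X, rowvar=False)) ** (1.0 / p)

def permutation_sgv_test(X, Y, n_perm=5000, seed=0):
    rng = np.random.default_rng(seed)
    pooled = np.vstack([X, Y])
    n1 = X.shape[0]
    t_obs = abs(np.log(sgv(X) / sgv(Y)))   # observed |log| ratio of standardized GVs
    count = 0
    for _ in range(n_perm):
        idx = rng.permutation(pooled.shape[0])
        Xp, Yp = pooled[idx[:n1]], pooled[idx[n1:]]
        if abs(np.log(sgv(Xp) / sgv(Yp))) >= t_obs:
            count += 1
    return (count + 1) / (n_perm + 1)      # permutation p value
```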

9.
We consider the issue of performing accurate small-sample testing inference in beta regression models, which are useful for modeling continuous variates that assume values in (0,1), such as rates and proportions. We derive the Bartlett correction to the likelihood ratio test statistic and also consider a bootstrap Bartlett correction. Using Monte Carlo simulations we compare the finite sample performances of the two corrected tests to that of the standard likelihood ratio test and also to its variant that employs Skovgaard's adjustment; the latter is already available in the literature. The numerical evidence favors the corrected tests we propose. We also present an empirical application.

10.
The most popular goodness-of-fit test for a multinomial distribution is the chi-square test. But this test is generally biased if observations are subject to misclassification. In this paper we shall discuss how to define a new test procedure when we have double-sample data obtained from the true and fallible devices. An adjusted chi-square test based on the imputation method and the likelihood ratio test are considered. Asymptotically, these two procedures are equivalent. However, an example and simulation results show that the former procedure is not only computationally simpler but also more powerful under finite-sample situations.

11.
It is well known that the testing of zero variance components is a non-standard problem since the null hypothesis is on the boundary of the parameter space. The usual asymptotic chi-square distribution of the likelihood ratio and score statistics does not necessarily hold under this null hypothesis. To circumvent this difficulty in balanced linear growth curve models, we introduce an appropriate test statistic and suggest a permutation procedure to approximate its finite-sample distribution. The proposed test alleviates the necessity of any distributional assumptions for the random effects and errors and can easily be applied for testing multiple variance components. Our simulation studies show that the proposed test has a Type I error rate close to the nominal level. The power of the proposed test is also compared with that of the likelihood ratio test in the simulations. An application on data from an orthodontic study is presented and discussed.

12.
In tumorigenicity experiments, each animal begins in a tumor-free state and then either develops a tumor or dies before developing a tumor. Animals that develop a tumor either die from the tumor or from other competing causes. All surviving animals are sacrificed at the end of the experiment, normally two years. The two most commonly used statistical tests are the logrank test for comparing hazards of death from rapidly lethal tumors and the Hoel-Walburg test for comparing prevalences of nonlethal tumors. However, the data obtained from a carcinogenicity experiment generally contain a mixture of fatal and incidental tumors. Peto et al. (1980) suggested combining the fatal and incidental tests for a comparison of tumor onset distributions.

Extensive simulations show that the trend test for tumor onset using the Peto procedure has the proper size, under the simulation constraints, when each group has identical mortality patterns, and that the test with continuity correction tends to be conservative. When the animals in the dosed groups have reduced survival rates, the type I error rate is likely to exceed the nominal level. The continuity correction is recommended for a small reduction in survival time among the dosed groups to ensure the proper size. However, when there is a large reduction in survival times in the dosed groups, the onset test does not have the proper size.

13.
For testing the equality of two independent binomial populations, the Fisher exact test and the chi-squared test with Yates's continuity correction are often suggested for small and intermediate-size samples. The use of these tests is inappropriate in that they are extremely conservative. In this article we demonstrate that, even for small samples, the uncorrected chi-squared test (i.e., the Pearson chi-squared test) and the two-independent-sample t test are robust in that their actual significance levels are usually close to or smaller than the nominal levels. We encourage the use of these latter two tests.
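For a small made-up 2x2 table, the four tests discussed above can be compared directly with SciPy (the counts below are illustrative, not from the article):

```python
# Illustrative comparison of the tests discussed above on a small made-up 2x2 table.
import numpy as np
from scipy.stats import fisher_exact, chi2_contingency, ttest_ind

table = np.array([[7, 3],    # successes / failures in group 1
                  [2, 8]])   # successes / failures in group 2

_, p_fisher = fisher_exact(table)
_, p_yates, _, _ = chi2_contingency(table, correction=True)     # Yates-corrected chi-squared
_, p_pearson, _, _ = chi2_contingency(table, correction=False)  # uncorrected (Pearson) chi-squared

# Two-independent-sample t test on the underlying 0/1 observations
g1 = np.repeat([1, 0], table[0])
g2 = np.repeat([1, 0], table[1])
_, p_t = ttest_ind(g1, g2)

print(p_fisher, p_yates, p_pearson, p_t)
```

On tables like this, the Fisher and Yates-corrected p values are typically the largest, illustrating the conservativeness noted above.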

14.
The permutation distribution of a statistic T equivalent to the usual F ratio for the completely randomized design is considered. A correction to the second moment of T derived by Robinson (1983) is presented, and the third and fourth moments are deduced. Inadequacies in the conventional permutation distribution approximations are demonstrated.

15.
It is shown that the nonparametric two-sample test recently proposed by Baumgartner, Weiß, and Schindler (1998, Biometrics, 54, 1129-1135) does not control the type I error rate in the case of small sample sizes. We investigate the exact permutation test based on their statistic and demonstrate that this test is only minimally conservative. Comparing exact tests, the procedure based on the new statistic has a less conservative size and is, according to simulation results, more powerful than the often-employed Wilcoxon test. Furthermore, the new test is also powerful with regard to less restrictive settings than the location-shift model. For example, the test can detect location-scale alternatives. Therefore, we use the test to create a powerful modification of the nonparametric location-scale test according to Lepage (1971, Biometrika, 58, 213-217). Selected critical values for the proposed tests are given.

16.
A modification of the sequential probability ratio test is proposed in which Wald's parallel boundaries are broken at some preassigned point of the sample number axis and Anderson's converging boundaries are used prior to that. Read's partial sequential probability ratio test can be considered as a special case of the proposed procedure. As far as reducing the maximum average sample number is concerned, the procedure is as good as Anderson's modified sequential probability ratio test.
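A schematic of the boundary shape described above, with purely illustrative slopes and truncation point: linear boundaries that converge toward Wald's limits before a preassigned stage n0, and Wald's parallel boundaries thereafter. The slope and n0 values are assumptions, not the paper's choices.

```python
# Schematic of the modified SPRT boundaries described above: converging (linear)
# boundaries before a preassigned truncation point n0, Wald's parallel
# boundaries afterwards. Slopes and n0 are illustrative only.
import numpy as np

def boundaries(n, n0, logA, logB, slope=0.05):
    """Lower/upper continuation boundaries for the log-likelihood ratio at stage n."""
    if n < n0:
        upper = logB + slope * (n0 - n)   # converges down toward logB as n -> n0
        lower = logA - slope * (n0 - n)   # converges up toward logA as n -> n0
    else:
        upper, lower = logB, logA         # Wald's parallel boundaries
    return lower, upper

alpha, beta = 0.05, 0.10
logA, logB = np.log(beta / (1 - alpha)), np.log((1 - beta) / alpha)
print([boundaries(n, n0=30, logA=logA, logB=logB) for n in (10, 30, 50)])
```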

17.
ABSTRACT

Motivated by an example in marine science, we use Fisher's method to combine independent likelihood ratio tests (LRTs) and asymptotic independent score tests to assess the equivalence of two zero-inflated Beta populations (mixture distributions with three parameters). For each test, test statistics for the three individual parameters are combined into a single statistic to address the overall difference between the two populations. We also develop nonparametric and semiparametric permutation-based tests for simultaneously comparing two or three features of unknown populations. Simulations show that the likelihood-based tests perform well for large sample sizes and that the statistics based on combining LRT statistics outperform those based on combining score test statistics. The permutation-based tests have better overall performance in terms of both power and type I error rate. Our methods are easy to implement and computationally efficient, and can be expanded to more than two populations and to other multiple-parameter families. The permutation tests are entirely generic and can be useful in various applications dealing with zero (or other) inflation.
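Fisher's method itself is simple to apply once the per-parameter p values are in hand: minus twice the sum of their logs is referred to a chi-square distribution with 2k degrees of freedom. The p values below are made up for illustration.

```python
# Fisher's method for combining k independent p values into one overall test.
# The p values are illustrative, not from the article.
import numpy as np
from scipy.stats import chi2

def fisher_combine(pvalues):
    p = np.asarray(pvalues, dtype=float)
    stat = -2.0 * np.sum(np.log(p))            # ~ chi-square with 2k df under H0
    return stat, chi2.sf(stat, df=2 * p.size)

stat, p_combined = fisher_combine([0.09, 0.21, 0.03])  # e.g., one p value per parameter
print(stat, p_combined)
```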

18.
Robust tests for comparing scale parameters, based on deviances (absolute deviations from the median), are examined. Higgins (2004) proposed a permutation test for comparing two treatments based on the ratio of deviances, but the performance of this procedure has not been investigated. A simulation study examines the performance of Higgins' test relative to other tests of scale utilizing deviances that have been shown in the literature to have good properties. An extension of Higgins' procedure to three or more treatments is proposed, and a second simulation study compares its performance to other omnibus tests for comparing scale. While no procedure emerged as a preferred choice in every scenario, Higgins' tests are found to perform well overall with respect to Type I error rate and power.
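The sketch below implements one plausible reading of a deviance-based permutation test of scale, with the two-sided ratio of mean deviances as the statistic; it is not necessarily Higgins' exact formulation.

```python
# Sketch of a two-sample permutation test of scale based on deviances
# (absolute deviations from the group median). The statistic, a two-sided ratio
# of mean deviances, is one plausible reading of the procedure described above.
import numpy as np

def deviances(x):
    x = np.asarray(x, dtype=float)
    return np.abs(x - np.median(x))

def deviance_ratio_test(x, y, n_perm=5000, seed=0):
    rng = np.random.default_rng(seed)
    pooled = np.concatenate([deviances(x), deviances(y)])   # permute the deviances
    n1 = len(x)
    t_obs = np.mean(deviances(x)) / np.mean(deviances(y))
    t_obs = max(t_obs, 1.0 / t_obs)                          # two-sided: larger over smaller
    count = 0
    for _ in range(n_perm):
        idx = rng.permutation(pooled.size)
        r = np.mean(pooled[idx[:n1]]) / np.mean(pooled[idx[n1:]])
        if max(r, 1.0 / r) >= t_obs:
            count += 1
    return (count + 1) / (n_perm + 1)                        # permutation p value
```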

19.
ABSTRACT

In this article we present a new solution to test for effects in unreplicated two-level factorial designs. The proposed test statistic, in the case where the error components are normally distributed, follows an F distribution, though our attention is on its nonparametric permutation version. The proposed procedure does not require any transformation of the data, such as residualization, and it is exact for each effect and distribution-free. Our main aim is to discuss a permutation solution conditional on the original vector of responses. We give two versions of the same nonparametric testing procedure in order to control both the individual error rate and the experiment-wise error rate. A power comparison with Loughin and Noble's test is provided in the case of an unreplicated 2^4 full factorial design.

20.
Communications in Statistics: Theory and Methods, 2012, 41(16-17): 3020-3029
The standard asymptotic chi-square distribution of the likelihood ratio and score statistics under the null hypothesis does not hold when the parameter value is on the boundary of the parameter space. In mixed models it is of interest to test for a zero random-effect variance component. Some available tests for the variance component are reviewed and a new test within the permutation framework is presented. The power and significance level of the different tests are investigated by means of a Monte Carlo simulation study. The proposed test has a significance level closer to the nominal one and is more powerful.
