Similar Articles (20 results)
1.
A number of tests are available for testing the equality of several population variances, some of which are claimed to be robust. We compared six of these claimed-robust procedures in Monte Carlo simulation experiments, particularly for cases of small and unequal sample sizes. Our results show that the jack-knife test compares favorably with the other tests.
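The comparison described above can be sketched in miniature. This is a hedged illustration, not the paper's six procedures: Levene's and Bartlett's tests (both available in SciPy) stand in, and the simulation estimates their empirical type I error rates under normality with small, unequal sample sizes.

```python
import numpy as np
from scipy import stats

# Monte Carlo estimate of the type I error of two variance-homogeneity
# tests, with small unequal sample sizes drawn from normal populations
# that truly have equal variances (so every rejection is a false positive).
rng = np.random.default_rng(0)
n_reps, alpha = 2000, 0.05
sizes = (5, 10, 20)                       # small, unequal sample sizes
rejections = {"levene": 0, "bartlett": 0}
for _ in range(n_reps):
    groups = [rng.normal(0.0, 1.0, n) for n in sizes]
    if stats.levene(*groups).pvalue < alpha:
        rejections["levene"] += 1
    if stats.bartlett(*groups).pvalue < alpha:
        rejections["bartlett"] += 1
for name, count in rejections.items():
    print(name, count / n_reps)           # empirical rate, near nominal 0.05
```

A robust procedure is one whose empirical rate stays near the nominal level when the normality assumption is dropped; repeating the loop with skewed or heavy-tailed draws shows the differences the paper studies.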

2.
Optimality of equal versus unequal cluster sizes in the context of multilevel intervention studies is examined. A Monte Carlo study is done to examine to what degree asymptotic results on the optimality hold for realistic sample sizes and for different estimation methods. The relative D-criterion, comparing equal versus unequal cluster sizes, almost always exceeded 85%, implying that loss of information due to unequal cluster sizes can be compensated for by increasing the number of clusters by 18%. The simulation results are in line with asymptotic results, showing that, for realistic sample sizes and various estimation methods, the asymptotic results can be used in planning multilevel intervention studies.
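The 85% → 18% step in the abstract is just the arithmetic of relative efficiency: if unequal cluster sizes retain a fraction RE of the information, multiplying the number of clusters by 1/RE restores it. A minimal sketch (the function name is ours, not the paper's):

```python
def extra_clusters_fraction(relative_efficiency):
    """Fraction by which the number of clusters must grow to offset
    the information loss implied by a relative efficiency < 1."""
    return 1.0 / relative_efficiency - 1.0

# A relative D-criterion of 85% implies about 18% more clusters.
print(round(extra_clusters_fraction(0.85), 2))  # → 0.18
```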

3.
Alternative ways of using Monte Carlo methods to implement a Cox-type test for separate families of hypotheses are considered. Monte Carlo experiments are designed to compare the finite sample performances of Pesaran and Pesaran's test, a RESET test, and two Monte Carlo hypothesis test procedures. One of the Monte Carlo tests is based on the distribution of the log-likelihood ratio and the other is based on an asymptotically pivotal statistic. The Monte Carlo results provide strong evidence that the size of the Pesaran and Pesaran test is generally incorrect, except for very large sample sizes. The RESET test has lower power than the other tests. The two Monte Carlo tests perform equally well for all sample sizes and are both clearly preferred to the Pesaran and Pesaran test, even in large samples. Since the Monte Carlo test based on the log-likelihood ratio is the simplest to calculate, we recommend using it.

4.
We consider in this article the problem of numerically approximating the quantiles of a sample statistic for a given population, a problem of interest in many applications, such as bootstrap confidence intervals. The proposed Monte Carlo method can be routinely applied to handle complex problems that lack analytical results. Furthermore, the method yields estimates of the quantiles of a sample statistic of any sample size, although Monte Carlo simulations for only two optimally selected sample sizes are needed. An analysis of the Monte Carlo design is performed to obtain the optimal choices of these two sample sizes and the number of simulated samples required for each sample size. Theoretical results are presented for the bias and variance of the proposed numerical method. The results developed are illustrated via simulation studies for the classical problem of estimating a bivariate linear structural relationship. It is seen that the size of the simulated samples used in the Monte Carlo method does not have to be very large, and the method provides a better approximation to quantiles than those based on asymptotic normal theory for skewed sampling distributions.
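The basic ingredient of the method above, Monte Carlo estimation of a quantile of a sample statistic, can be sketched as follows. This is a hedged illustration only (the sample mean of an exponential population stands in for the paper's statistic, and the two-sample-size optimization step is omitted):

```python
import numpy as np

rng = np.random.default_rng(1)

def mc_quantile(statistic, n, q, n_sims=10_000):
    """Estimate the q-quantile of `statistic` computed on samples of
    size n from an Exp(1) population, by brute-force simulation."""
    draws = [statistic(rng.exponential(1.0, n)) for _ in range(n_sims)]
    return np.quantile(draws, q)

q95 = mc_quantile(np.mean, n=30, q=0.95)
print(q95)
# Sits somewhat above the normal approximation 1 + 1.645/sqrt(30) ≈ 1.30,
# because the sampling distribution of the mean is right-skewed here --
# exactly the situation where the abstract says the MC approach beats
# asymptotic normal theory.
```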

5.
For time‐to‐event data, the power of the two sample logrank test for the comparison of two treatment groups can be greatly influenced by the ratio of the number of patients in each of the treatment groups. Despite the possible loss of power, unequal allocations may be of interest due to a need to collect more data on one of the groups or to considerations related to the acceptability of the treatments to patients. Investigators pursuing such designs may be interested in the cost of the unbalanced design relative to a balanced design with respect to the total number of patients required for the study. We present graphical displays to illustrate the sample size adjustment factor, or ratio of the sample size required by an unequal allocation compared to the sample size required by a balanced allocation, for various survival rates, treatment hazards ratios, and sample size allocation ratios. These graphical displays conveniently summarize information in the literature and provide a useful tool for planning sample sizes for the two sample logrank test. Copyright © 2010 John Wiley & Sons, Ltd.
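The flavor of the adjustment factor can be seen from the textbook normal-approximation result: with the total comparison variance proportional to 1/n1 + 1/n2, an r:1 allocation inflates the required total sample size by (1 + r)²/(4r) relative to a balanced design. This is a generic approximation for illustration, not the paper's survival-specific graphical results, which also depend on survival rates and hazard ratios.

```python
def adjustment_factor(r):
    """Total sample size required under r:1 allocation, relative to a
    balanced 1:1 design, from the variance term 1/n1 + 1/n2 (which is
    minimized at equal allocation)."""
    return (1 + r) ** 2 / (4 * r)

for r in (1, 2, 3):
    print(f"{r}:1 allocation -> factor {adjustment_factor(r):.3f}")
# 1:1 -> 1.000 (no penalty); 2:1 -> 1.125 (12.5% more patients);
# 3:1 -> 1.333 (one third more patients).
```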

6.
In a typical carcinogenicity study, animals, usually rats or mice, are divided by randomization into a control group and two to three dose groups of 50 or more. A chemical is administered at a constant daily dose rate for a major portion of the lifetime of the test animals, for example, two years. In general, such an experiment is expensive and time consuming. In this paper, we propose an efficient design with reduced sample size and/or shortened study duration. An equal number of animals per dose group is considered in this study. A power study of the age-adjusted trend test for the tumor incidence rate in single-sacrifice experiments, proposed by Kodell et al. (Drug Information Journal, 1997), is conducted. A Monte Carlo simulation study is performed to compare the performance of the trend test for the standard design and various reduced designs. Based on the Kodell et al. test, the 21-month study duration with sample size 50 per group is recommended as the best reduced design over the traditional 2-year study design with the same sample size.

7.
A Monte Carlo simulation was conducted to compare the type I error rate and power of the analysis of means (ANOM) test with those of the one-way analysis of variance F-test (ANOVA-F). Simulation results showed that, as long as the homogeneity-of-variance assumption was satisfied, both tests displayed similar type I error rates regardless of the shape of the distribution, the number of groups, and the combination of observations. However, both tests were negatively affected by heterogeneity of the variances, and increasingly so as the variance ratios grew. The power of both tests varied with the effect size (Δ), the variance ratio, and the sample size combination. As long as the variances are homogeneous, ANOVA-F and the ANOM test have similar power except in unbalanced cases, where ANOVA-F was observed to be more powerful; an increase in the total number of observations, however, caused the power of the two tests to approach each other. The relation between the effect size (Δ) and the variance ratios affected power, especially when the sample sizes were unequal. ANOVA-F was superior under some of the experimental conditions considered and ANOM under others: in general, when populations with larger means also have larger variances, the ANOM test was superior, whereas when populations with larger means have smaller variances, ANOVA-F was generally superior. This pattern became clearer when the number of groups was 4 or 5.

8.
The present study investigates the performance of Johnson's transformation trimmed t statistic, Welch's t test, Yuen's trimmed t, Johnson's transformation untrimmed t test, and the corresponding bootstrap methods for the two-sample case with small/unequal sample sizes when the distribution is non-normal and variances are heterogeneous. The Monte Carlo simulation is conducted in two-sided as well as one-sided tests. When the variance is proportional to the sample size, Yuen's trimmed t is as good as Johnson's transformation trimmed t. However, when the variance is disproportional to the sample size, the bootstrap Yuen's trimmed t and the bootstrap Johnson's transformation trimmed t are recommended in one-sided tests. For two-sided tests, Johnson's transformation trimmed t is not only valid but also powerful in comparison to the bootstrap methods.
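Yuen's trimmed t, one of the procedures compared above, is available in recent SciPy versions through the `trim` argument of `ttest_ind` (combined with `equal_var=False` it applies Yuen's unequal-variance form). A hedged sketch under conditions like those in the abstract, heavy tails and unequal small samples; the data and settings are ours, not the paper's:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
x = rng.standard_t(df=3, size=15)          # heavy-tailed, small sample
y = rng.standard_t(df=3, size=25) + 1.0    # second group shifted by 1

# trim=0.2 removes 20% of observations from each tail of each sample
# before computing the Welch-type statistic (Yuen's trimmed t).
res = stats.ttest_ind(x, y, equal_var=False, trim=0.2)
print(res.statistic, res.pvalue)
```

Comparing this p-value with that of the untrimmed Welch test (`trim=0`) on the same data illustrates the robustness gain the paper quantifies by simulation.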

9.
The balanced half-sample, jackknife and linearization methods are used to estimate the variance of the slope of a linear regression under a variety of computer-generated situations. The basic sampling design is one in which two PSUs are selected from each of a number of strata. The variance estimation techniques are compared in a Monte Carlo experiment. Results show that variance estimates may be highly biased and variable unless sizeable numbers of observations are available from each stratum. The jackknife and linearization estimates appear superior to the balanced half-sample method, particularly when the number of strata or the number of available observations from each stratum is small.

10.
The common principal components (CPC) model provides a way to model the population covariance matrices of several groups by assuming a common eigenvector structure. When appropriate, this model can provide covariance matrix estimators of which the elements have smaller standard errors than when using either the pooled covariance matrix or the per group unbiased sample covariance matrix estimators. In this article, a regularized CPC estimator under the assumption of a common (or partially common) eigenvector structure in the populations is proposed. After estimation of the common eigenvectors using the Flury–Gautschi (or other) algorithm, the off-diagonal elements of the nearly diagonalized covariance matrices are shrunk towards zero and multiplied with the orthogonal common eigenvector matrix to obtain the regularized CPC covariance matrix estimates. The optimal shrinkage intensity per group can be estimated using cross-validation. The efficiency of these estimators compared to the pooled and unbiased estimators is investigated in a Monte Carlo simulation study, and the regularized CPC estimator is applied to a real dataset to demonstrate the utility of the method.

11.
In this paper, we have reviewed 25 test procedures that are widely reported in the literature for testing the hypothesis of homogeneity of variances under various experimental conditions. Since a theoretical comparison was not possible, a simulation study has been conducted to compare the performance of the test statistics in terms of robustness and empirical power. Monte Carlo simulation was performed for various symmetric and skewed distributions, number of groups, sample size per group, degree of group size inequalities, and degree of variance heterogeneity. Using simulation results and based on the robustness and power of the tests, some promising test statistics are recommended for practitioners.

12.
In this article, an extensive Monte Carlo simulation study is conducted to evaluate and compare nonparametric multiple comparison tests under violations of classical analysis of variance assumptions. The simulation space of the Monte Carlo study is composed of 288 different combinations of balanced and unbalanced sample sizes, number of groups, treatment effects, various levels of heterogeneity of variances, dependence between subgroup levels, and skewed error distributions under the single-factor experimental design. Based on this large simulation space, we present a detailed analysis of the effects of the violations of assumptions on the performance of nonparametric multiple comparison tests in terms of three error and four power measures. The observations of this study help in choosing the optimal nonparametric test according to the requirements and conditions of the experiment at hand. When some of the assumptions of analysis of variance are violated and the number of groups is small, use of the stepwise Steel-Dwass procedure with Holm's approach is appropriate to control type I error at a desired level; Dunn's method should be employed for a greater number of groups. When subgroups are unbalanced and the number of groups is small, Nemenyi's procedure with Duncan's approach produces high power values. Conover's procedure provides high power values with a small number of unbalanced groups or with a greater number of balanced or unbalanced groups, but it is unable to control type I error rates.

13.
The estimation of variance components in heteroscedastic random models is discussed in this paper. Maximum likelihood (ML) estimation is described for one-way heteroscedastic random models. The proportionality condition, that the cell variance is proportional to the cell sample size, is used to eliminate the effect of heteroscedasticity. The algebraic expressions of the estimators are obtained for the model. These expressions depend mainly on the inverse of the variance-covariance matrix of the observation vector, so the variance-covariance matrix is obtained and formulae for its inversion are given. A Monte Carlo study is conducted, considering five different variance patterns with different numbers of cells. For each variance pattern, 1000 Monte Carlo samples are drawn, and the Monte Carlo biases and MSEs of the variance component estimators are calculated. With respect to both bias and MSE, the ML estimators of the variance components are found to perform well.

14.
In this study, we propose a new test based on a computational approach for testing the equality of several log-normal means. We compare this test with some existing methods in terms of type I error rate and power using Monte Carlo simulations for varying numbers of groups and sample sizes. The simulation results indicate that the proposed test can be suggested as a good alternative for testing the equality of several log-normal means.

15.
We consider the test based on the L1-version of the Cramér-von Mises statistic for the nonparametric two-sample problem. Some quantiles of the exact distribution of the test statistic under H0 are computed for small sample sizes. We compare the test, in terms of power against general alternatives, with other two-sample tests, namely the Wilcoxon rank sum test, the Smirnov test and the Cramér-von Mises test, in the case of unbalanced small sample sizes. The computation of the power is rather complicated when the sample sizes are unequal. Monte Carlo power estimates show that the Smirnov test is more sensitive to non-stochastically-ordered alternatives than the new test, while under location-contamination alternatives the power estimates of the new test and of the competing tests are equal.
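An L1-type two-sample statistic of the kind described above can be computed directly from the two empirical CDFs. This sketch uses one common variant, the average absolute ECDF difference over the pooled sample; the paper's exact normalization may differ:

```python
import numpy as np

def l1_cvm(x, y):
    """Average absolute difference between the empirical CDFs of x and y,
    evaluated at the pooled sample points (an L1 Cramér-von Mises-type
    statistic; large values indicate the samples differ)."""
    pooled = np.sort(np.concatenate([x, y]))
    fx = np.searchsorted(np.sort(x), pooled, side="right") / len(x)
    fy = np.searchsorted(np.sort(y), pooled, side="right") / len(y)
    return np.mean(np.abs(fx - fy))

x = np.array([1.0, 2.0, 3.0])
y = np.array([1.5, 2.5, 3.5])
print(l1_cvm(x, y))  # → 1/6 for these interleaved samples
```

For small unequal sample sizes, the null distribution of such a statistic can be tabulated exactly by enumerating the permutations of group labels, which is how the abstract's exact quantiles arise.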

16.
Two new implementations of the EM algorithm are proposed for maximum likelihood fitting of generalized linear mixed models. Both methods use random (independent and identically distributed) sampling to construct Monte Carlo approximations at the E-step. One approach involves generating random samples from the exact conditional distribution of the random effects (given the data) by rejection sampling, using the marginal distribution as a candidate. The second method uses a multivariate t importance sampling approximation. In many applications the two methods are complementary. Rejection sampling is more efficient when sample sizes are small, whereas importance sampling is better with larger sample sizes. Monte Carlo approximation using random samples allows the Monte Carlo error at each iteration to be assessed by using standard central limit theory combined with Taylor series methods. Specifically, we construct a sandwich variance estimate for the maximizer at each approximate E-step. This suggests a rule for automatically increasing the Monte Carlo sample size after iterations in which the true EM step is swamped by Monte Carlo error. In contrast, techniques for assessing Monte Carlo error have not been developed for use with alternative implementations of Monte Carlo EM algorithms utilizing Markov chain Monte Carlo E-step approximations. Three different data sets, including the infamous salamander data of McCullagh and Nelder, are used to illustrate the techniques and to compare them with the alternatives. The results show that the methods proposed can be considerably more efficient than those based on Markov chain Monte Carlo algorithms. However, the methods proposed may break down when the intractable integrals in the likelihood function are of high dimension.

17.
In sample surveys, post-stratification is often used when stratum membership cannot be identified in advance of the survey. If the sample size is large, post-stratification is usually as effective as ordinary stratification with proportional allocation. However, in the case of small samples, no generally accepted theory or technique has been well developed. One of the main difficulties is the possibility of obtaining zero sample sizes in some strata for small samples. In this paper, we overcome this difficulty by employing a sampling scheme referred to as multiple inverse sampling, which ensures that a specified number of observations is sampled from each stratum. A Monte Carlo simulation is carried out to compare the estimator obtained from multiple inverse sampling with some other existing estimators. The estimator under multiple inverse sampling is superior in the sense that it is unbiased and its variance does not depend on the values of the stratum means in the population.

18.
The use of a goodness-of-fit test based on the Anderson–Darling (AD) statistic is discussed, with reference to the composite hypothesis that a sample of observations comes from a generalized Rayleigh distribution whose parameters are unspecified. Monte Carlo simulation studies were performed to calculate the critical values for the AD test. These critical values are then used for testing whether a set of observations follows a generalized Rayleigh distribution when the scale and shape parameters are unspecified and are estimated from the sample. A functional relationship between the critical values of AD and the shape parameter (α), sample size (n) and significance level (γ) is also examined. A power study is performed with the hypothesized generalized Rayleigh distribution against alternative distributions.
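The Monte Carlo recipe for AD critical values can be sketched in its simplest form. This hedged illustration uses a fully specified standard exponential CDF as a stand-in for the generalized Rayleigh; in the paper's composite-hypothesis setting the parameters would be re-estimated from each simulated sample, which shifts the critical values:

```python
import numpy as np

def ad_stat(sample, cdf):
    """Anderson-Darling statistic A^2 for a fully specified CDF:
    A^2 = -n - (1/n) * sum_i (2i-1) [ln u_(i) + ln(1 - u_(n+1-i))]."""
    u = cdf(np.sort(sample))
    n = len(u)
    i = np.arange(1, n + 1)
    return -n - np.mean((2 * i - 1) * (np.log(u) + np.log(1 - u[::-1])))

rng = np.random.default_rng(3)
cdf = lambda t: 1.0 - np.exp(-t)     # standard exponential CDF (stand-in)

# Simulate the null distribution of A^2 and read off the gamma = 0.05
# critical value as its 95th percentile.
null_stats = [ad_stat(rng.exponential(1.0, 30), cdf) for _ in range(5000)]
crit = np.quantile(null_stats, 0.95)
print(crit)  # near the asymptotic simple-hypothesis critical value
```

Tabulating `crit` over a grid of α, n and γ, as the abstract describes, is a matter of repeating this simulation for each combination.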

19.
In the analysis of stationary stochastic processes, one has to deal with covariance matrices of Toeplitz (or Laurent) structure. Such a structure has the feature that not only the elements on the principal diagonal but also those lying on each of the parallel sub-diagonals are equal. The present investigation concerns the problem of large-sample testing of the Toeplitz pattern of the population covariance matrix. Apart from the usual application of the likelihood ratio and Rao's efficient score criteria, some heuristic two-stage tests are suggested. The results of a Monte Carlo experiment on the size of the proposed tests are reported.

20.
Assuming that the frequency of occurrence follows the Poisson distribution, we develop sample size calculation procedures for testing equality based on an exact test procedure and an asymptotic test procedure under an AB/BA crossover design. We employ Monte Carlo simulation to demonstrate the use of these sample size formulae and evaluate the accuracy of sample size calculation formula derived from the asymptotic test procedure with respect to power in a variety of situations. We note that when both the relative treatment effect of interest and the underlying intraclass correlation between frequencies within patients are large, the sample size calculation based on the asymptotic test procedure can lose accuracy. In this case, the sample size calculation procedure based on the exact test is recommended. On the other hand, if the relative treatment effect of interest is small, the minimum required number of patients per group will be large, and the asymptotic test procedure will be valid for use. In this case, we may consider use of the sample size calculation formula derived from the asymptotic test procedure to reduce the number of patients needed for the exact test procedure. We include an example regarding a double‐blind randomized crossover trial comparing salmeterol with a placebo in exacerbations of asthma to illustrate the practical use of these sample size formulae. Copyright © 2013 John Wiley & Sons, Ltd.  
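For orientation, the simplest asymptotic Poisson sample size calculation looks like the following. This is the textbook normal-approximation formula for two independent Poisson rates, NOT the paper's AB/BA crossover formulae, which additionally involve the within-patient (intraclass) correlation; the rate values are invented for illustration:

```python
from scipy.stats import norm

def n_per_group(lam1, lam2, alpha=0.05, power=0.8):
    """Patients per group to detect a difference between two independent
    Poisson rates lam1 and lam2, by normal approximation:
    n = (z_{1-alpha/2} + z_{power})^2 * (lam1 + lam2) / (lam1 - lam2)^2."""
    za, zb = norm.ppf(1 - alpha / 2), norm.ppf(power)
    return (za + zb) ** 2 * (lam1 + lam2) / (lam1 - lam2) ** 2

n = n_per_group(2.0, 1.5)        # hypothetical event rates per patient
print(round(n))                  # about 110 patients per group
```

A crossover design exploits within-patient comparisons, so, as the abstract notes, the required numbers depend on the intraclass correlation and can be markedly smaller than this independent-groups figure.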
