Similar Literature (20 records)
1.
We propose replacing the usual Student's-t statistic, which tests for equality of means of two distributions and is used to construct a confidence interval for the difference, by a biweight-"t" statistic. The biweight-"t" is the ratio of the difference of the biweight estimates of location from the two samples to an estimate of the standard error of this difference. Three forms of the denominator are evaluated: weighted variance estimates using both pooled and unpooled scale estimates, and unweighted variance estimates using an unpooled scale estimate. Monte Carlo simulations reveal that the resulting confidence intervals are highly efficient for moderate sample sizes, and that nominal levels are nearly attained, even when considering extreme percentage points.
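The biweight location estimate at the heart of such a statistic can be sketched as a one-step Tukey biweight; this is a generic textbook form, with a conventional tuning constant `c` and MAD-based scaling, not necessarily the exact formulation used by the authors:

```python
from statistics import median

def biweight_location(x, c=6.0, eps=1e-12):
    """One-step Tukey biweight location estimate (generic sketch).

    Points farther than c MADs from the median get zero weight,
    which is what gives the estimator its outlier resistance.
    """
    m = median(x)
    mad = median(abs(v - m) for v in x) + eps  # median absolute deviation
    num = den = 0.0
    for v in x:
        u = (v - m) / (c * mad)
        if abs(u) < 1.0:
            w = (1.0 - u * u) ** 2   # biweight (bisquare) weight
            num += w * v
            den += w
    return num / den
```

For symmetric data the estimate coincides with the median, while a gross outlier is simply down-weighted to zero rather than dragging the estimate toward it.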

2.
The distribution of the sample correlation coefficient is derived when the population is a mixture of two bivariate normal distributions with zero mean but different covariances and mixing proportions 1 - λ and λ respectively; λ will be called the proportion of contamination. The test of ρ = 0 based on Student's t, Fisher's z, arcsine, or Ruben's transformation is shown numerically to be nonrobust when λ, the proportion of contamination, lies between 0.05 and 0.50 and the contaminated population has 9 times the variance of the standard (bivariate normal) population. These tests are also sensitive to the presence of outliers.

3.
When the two-sample t-test has equal sample sizes, it is widely considered to be a robust procedure (with respect to the significance level) under violation of the assumption of equal variances. This paper is concerned with quantifying the amount of robustness this procedure has under such violations. The approach is through the concept of a "region of robustness", and the results show an extremely strong degree of robustness for the equal-sample-size t-test, probably more so than most statisticians realise. This extremely high level of robustness, however, diminishes quickly as the sample sizes begin to depart from equality. The regions of robustness obtained show that while most users would likely be satisfied with the degree of robustness inherent when the two sample sizes each vary by 10% from equality, most would wish to be much more cautious when the variation is 20%. The study covers sample sizes n1 = n2 = 5(5)30(10)50, plus 10% and 20% variations thereof, for the two-tailed test at nominal significance levels of 0.01 and 0.05.

4.
In this paper we consider and propose some confidence intervals for estimating the mean, or the difference of means, of skewed populations. We extend the median t interval to the two-sample problem. Further, we suggest using the bootstrap to find the critical points for use in the calculation of median t intervals. A simulation study has been carried out to compare the performance of the intervals, and a real-life example illustrates the application of the methods.

5.
SUMMARY When the assumptions of parametric statistical tests for the difference between two means are violated, it is commonly advised that non-parametric tests are a more robust substitute. The history of the investigation of this issue is summarized. The robustness of the t-test was evaluated by repeated computer testing for differences between samples from two populations with equal means but non-normal distributions, different variances, and different sample sizes. Two common alternatives to t, Welch's approximate t and the Mann-Whitney U-test, were evaluated in the same way. The t-test is sufficiently robust for use in all likely cases, except when skew is severe or when population variances and sample sizes both differ. The Welch test satisfactorily addressed the latter problem, but was itself sensitive to departures from normality. Contrary to its popular reputation, the U-test showed a dramatic lack of robustness in many cases, largely because it is sensitive to population differences other than those between means; it is therefore not properly a 'non-parametric analogue' of the t-test, as it is too often described.
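Welch's approximate t, the alternative evaluated above, uses unpooled variances with the Welch-Satterthwaite degrees of freedom. A minimal sketch:

```python
import math

def welch_t(x, y):
    """Welch's approximate t statistic and its Welch-Satterthwaite df."""
    nx, ny = len(x), len(y)
    mx, my = sum(x) / nx, sum(y) / ny
    vx = sum((a - mx) ** 2 for a in x) / (nx - 1)  # unbiased sample variances
    vy = sum((b - my) ** 2 for b in y) / (ny - 1)
    se2 = vx / nx + vy / ny          # squared standard error, unpooled
    t = (mx - my) / math.sqrt(se2)
    # Welch-Satterthwaite approximation to the degrees of freedom
    df = se2 ** 2 / ((vx / nx) ** 2 / (nx - 1) + (vy / ny) ** 2 / (ny - 1))
    return t, df
```

Because the variances are never pooled, the statistic stays approximately t-distributed even when the two population variances differ, which is exactly the situation in which the ordinary t-test with unequal sample sizes breaks down.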

6.
In this paper we compare the power properties of some location tests. The most widely used such test is Student's t. Recently, bootstrap-based tests have received much attention in the literature, and a bootstrap version of the t-test is included in our comparison. Finally, the nonparametric tests based on the idea of permuting the signs are represented in our comparison; again, we initially concentrate on a version of that test based on the mean. The permutation tests predate the bootstrap by about forty years. Theoretical results of Pitman (1937) and Bickel & Freedman (1981) show that these three methods are asymptotically equivalent if the underlying distribution is symmetric and has finite second moment. In the modern literature, the use of the nonparametric techniques is advocated on the grounds that the size of the test is either exact or more nearly exact. In this paper we report on a simulation study that compares the power curves, and we show that it is not necessary to use resampling tests with a statistic based on the mean of the sample.
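The sign-permutation test referred to above can be sketched for the one-sample case: under a symmetric null hypothesis of zero location, all 2^n sign assignments of the observations are equally likely, and the p-value is the fraction whose absolute sum is at least the observed one. This exhaustive enumeration is illustrative only and practical only for small n:

```python
import itertools

def sign_permutation_pvalue(d):
    """Exact two-sided sign-flip permutation test of zero location.

    Enumerates all 2**n sign assignments, so use only for small n;
    in practice one samples sign vectors at random instead.
    """
    n = len(d)
    obs = abs(sum(d))
    count = 0
    for signs in itertools.product((1, -1), repeat=n):
        if abs(sum(s * x for s, x in zip(signs, d))) >= obs - 1e-12:
            count += 1
    return count / 2 ** n
```

For d = [1, 1, 1] only the all-positive and all-negative assignments reach |sum| = 3, giving an exact p-value of 2/8 = 0.25.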

7.
Tibor K. Pogány, Statistics, 2013, 47(6): 1363-1369
The need for the convolution of normal and Student's t random variables arises in many areas. Since the 1930s, various authors have attempted to derive closed-form expressions for the probability density function (pdf) of the convolution, but with little success. Here, general closed-form expressions are derived for the pdf.

8.
The authors propose a class of statistics based on Rao's score for the sequential testing of composite hypotheses comparing two treatments (populations). Asymptotic approximations of the statistics lead them to propose sequential tests and to derive their monitoring boundaries. As special cases, they construct sequential versions of the two-sample t-test for normal populations and two-sample z-score tests for binomial populations. The proposed algorithms are simple and easy to compute, as no numerical integration is required. Furthermore, the user can analyze the data at any time regardless of how many inspections have been made. Monte Carlo simulations allow the authors to compare the power and the average stopping time (also known as average sample number) of the proposed tests to those of nonsequential and group sequential tests. A two-armed comparative clinical trial in patients with adult leukemia allows them to illustrate the efficiency of their methods in the case of binary responses.

9.
We propose two tests for testing compound periodicities which are the uniformly most powerful invariant decision procedures against simple periodicities. The second test can provide an excellent estimate of a compound periodic nonlinear function from observed data. These tests were compared with the tests proposed by Fisher and Siegel in Monte Carlo studies, and we found that all the tests showed high power and high probability of a correct decision when all the amplitudes of the underlying periods were the same. However, if there are several different periods with unequal amplitudes, the second proposed test always showed high power and high probability of a correct decision, whereas the tests proposed by Fisher and Siegel gave zero power and zero probability of a correct decision, whatever the standard deviation of the pseudo-normal random numbers. Overall, the second proposed test is the best of all in view of the probability of a correct decision and power.

10.
Bartlett's test (1937) for equality of variances is based on a χ2 distribution approximation. This approximation deteriorates either when the sample sizes are small (particularly < 4) or when the number of populations is large. A simulation investigation reveals a similar varying trend in the mean differences between the empirical distributions of Bartlett's statistic and their χ2 approximations. Using these mean differences to represent the distributional departure, a simple adjustment of Bartlett's statistic is proposed on the basis of an equal-mean principle. Performance before and after adjustment is extensively investigated under equal and unequal sample sizes, with the number of populations varying from 3 to 100. Compared with the traditional Bartlett statistic, the adjusted statistic is distributed more closely to the χ2 distribution for homogeneous samples from normal populations. The type I error is well controlled and the power is slightly higher after adjustment. In conclusion, the adjustment gives good control of the type I error and higher power, and is thus recommended for small samples and a large number of populations when the underlying distribution is normal.
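For reference, the unadjusted Bartlett statistic being corrected here can be computed in its standard textbook form (the adjustment proposed in the paper is not reproduced):

```python
import math

def bartlett_stat(samples):
    """Classical Bartlett test statistic for homogeneity of variances.

    Under H0 (equal variances, normal populations) it is approximately
    chi-square with k-1 degrees of freedom.
    """
    k = len(samples)
    ns = [len(s) for s in samples]
    N = sum(ns)
    variances = []
    for s in samples:
        m = sum(s) / len(s)
        variances.append(sum((x - m) ** 2 for x in s) / (len(s) - 1))
    # pooled variance
    sp2 = sum((n - 1) * v for n, v in zip(ns, variances)) / (N - k)
    num = (N - k) * math.log(sp2) - sum(
        (n - 1) * math.log(v) for n, v in zip(ns, variances))
    # Bartlett's correction factor
    C = 1 + (sum(1 / (n - 1) for n in ns) - 1 / (N - k)) / (3 * (k - 1))
    return num / C
```

When all sample variances are equal the statistic is exactly zero, and it grows as the variances spread apart; the paper's point is that the χ2 approximation to its null distribution is poor for very small samples or very many populations.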

11.
Fisher's method of combining independent tests is used to construct tests of means of multivariate normal populations when the covariance matrix has intraclass correlation structure. Monte Carlo studies are reported which show that the tests are more powerful than Hotelling's T2-test in both one- and two-sample situations.
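Fisher's method combines k independent p-values through the statistic -2 Σ ln p_i, which is χ2-distributed with 2k degrees of freedom under the joint null. Since that degrees-of-freedom count is always even, the tail probability has a closed form and a sketch needs no special functions:

```python
import math

def fisher_combine(pvalues):
    """Fisher's combining statistic: chi-square with 2k df under H0."""
    return -2.0 * sum(math.log(p) for p in pvalues)

def chi2_sf_even_df(x, df):
    """Chi-square survival function, closed form valid for even df = 2k:
    P(X > x) = exp(-x/2) * sum_{j=0}^{k-1} (x/2)^j / j!
    """
    k = df // 2
    term, s = 1.0, 1.0
    for j in range(1, k):
        term *= (x / 2) / j
        s += term
    return math.exp(-x / 2) * s

def fisher_combined_pvalue(pvalues):
    x = fisher_combine(pvalues)
    return chi2_sf_even_df(x, 2 * len(pvalues))
```

For two p-values of 0.5 each, the combined p-value is p1*p2*(1 - ln(p1*p2)) ≈ 0.5966, which the code reproduces.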

12.
The definition of the distance between two populations with equal covariance matrices is extended to two and more than two populations with unequal covariance matrices, and Rao's U test for the conditional contribution of a subset of variables to the distance is extended to this situation, even when the sample sizes are not necessarily equal.

13.
A hierarchical Bayesian approach to the problem of estimating the largest normal mean is considered. Calculation of the posterior mean and the posterior variance involves, at worst, 3-dimensional numerical integration, for which an efficient Monte Carlo method of evaluation is given. An example is presented to illustrate the methodology. In the two-populations case, computation of the posterior estimates can be substantially simplified and in special cases can actually be performed using closed-form solutions. A simulation study has been done to compare the mean square errors of some hierarchical Bayesian estimators that are expressed in closed form and several existing estimators of the larger mean.

14.
The inverse Gaussian family of nonnegative, skewed random variables is analytically simple, and its inference theory is well known to be analogous to normal theory in numerous ways. Hence, it is widely used for modeling nonnegative, positively skewed data. In this note, we consider the problem of testing homogeneity of order-restricted means of several inverse Gaussian populations with a common unknown scale parameter, using an approach based on classical methods, such as Fisher's, for combining independent tests. Unlike the likelihood approach, which can only be readily applied to a limited number of restrictions and to settings with equal sample sizes, this approach is applicable to problems involving a broad variety of order restrictions and arbitrary sample size settings, and, most importantly, no new null distributions are needed. An empirical power study shows that, in the case of the simple order, the test based on Fisher's combination method compares reasonably with the corresponding likelihood ratio procedure.

15.
Although the percentage points of the Student-t distribution have been widely tabulated, a simple approximation is given and derived in this article. The approximation can be re-derived easily, since it is based on the percentage points of the Gaussian distribution, and can thus be used for applications requiring non-integer degrees of freedom (e.g., Welch's two-sample t test) and for arbitrary significance levels (e.g., for Bonferroni multiple comparison procedures). Comparisons between this approximation and others suggested in the literature indicate three-digit accuracy even for small degrees of freedom and tail areas.
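One classical approximation of this kind, a Cornish-Fisher-type expansion of the Student-t quantile in powers of 1/df with the Gaussian quantile as the leading term, is easy to state; the paper's exact formula may differ, but this illustrates the approach:

```python
from statistics import NormalDist

def t_quantile_approx(p, df):
    """Approximate Student-t quantile from the Gaussian quantile.

    Uses the first three terms of the classical Cornish-Fisher-type
    expansion; df may be non-integer (e.g. Welch-Satterthwaite df).
    """
    z = NormalDist().inv_cdf(p)  # Gaussian quantile as leading term
    return (z
            + (z ** 3 + z) / (4 * df)
            + (5 * z ** 5 + 16 * z ** 3 + 3 * z) / (96 * df ** 2))
```

For p = 0.975 and df = 10 the true quantile is about 2.228, and the three-term expansion already lands within 0.003 of it; as df grows the correction terms vanish and the value collapses to the Gaussian quantile 1.960.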

16.
We revisit the question of optimal performance of goodness-of-fit tests based on sample spacings. We reveal the importance of centering the test statistic, and of the sample size, when choosing a suitable test statistic from a family of statistics based on power transformations of sample spacings. In particular, we find that a test statistic based on empirical estimation of the Hellinger distance between the hypothetical and data-supported distributions does possess some optimality properties for moderate sample sizes. These findings confirm earlier statements about the robust behaviour of the test statistic based on the Hellinger distance, and are in contrast to findings about the asymptotic behaviour (as the sample size approaches infinity) of statistics such as Moran's and Greenwood's. We include simulation results that support our findings.

17.
Likelihood ratio tests for the homogeneity of k normal means with the alternative restricted by an increasing trend are considered, as well as the likelihood ratio tests of the null hypothesis that the means satisfy the trend. While the work is primarily a survey of results concerning the power functions of these tests, extensions of some results to the case of not necessarily equal sample sizes are presented. For the case of known or unknown population variances, exact expressions are given for the power functions for k = 3, 4, and approximations are discussed for larger k. The topics of consistency, bias, and monotonicity of the power functions are included. Also, Bartholomew's conjectures concerning minimal and maximal powers are investigated, with results of a new numerical study given.

18.
We respond to criticism leveled at bootstrap confidence intervals for the correlation coefficient by recent authors by arguing that in the correlation coefficient case, non-standard methods should be employed. We propose two such methods. The first is a bootstrap coverage correction algorithm using iterated bootstrap techniques (Hall, 1986; Beran, 1987a; Hall and Martin, 1988) applied to ordinary percentile-method intervals (Efron, 1979), giving intervals with high coverage accuracy and stable lengths and endpoints. The simulation study carried out for this method gives results for sample sizes 8, 10, and 12 in three parent populations. The second technique involves the construction of percentile-t bootstrap confidence intervals for a transformed correlation coefficient, followed by an inversion of the transformation, to obtain "transformed percentile-t" intervals for the correlation coefficient. In particular, Fisher's z-transformation is used, and nonparametric delta method and jackknife variance estimates are used to Studentize the transformed correlation coefficient, with the jackknife-Studentized transformed percentile-t interval yielding the better coverage accuracy in general. Percentile-t intervals constructed without first using the transformation perform very poorly, having large expected lengths and erratically fluctuating endpoints. The simulation study illustrating this technique gives results for sample sizes 10, 15, and 20 in four parent populations. Our techniques provide confidence intervals for the correlation coefficient which have good coverage accuracy (unlike ordinary percentile intervals) and stable lengths and endpoints (unlike ordinary percentile-t intervals).
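As a point of reference for the transformed intervals above, the classical normal-theory interval based on Fisher's z-transformation, which the "transformed percentile-t" construction refines with bootstrap critical points, can be sketched as:

```python
import math

def fisher_z_ci(r, n, z_crit=1.959964):
    """Classical normal-theory CI for a correlation coefficient.

    Transforms r with Fisher's z = atanh(r), builds a symmetric
    interval on the z scale (standard error 1/sqrt(n-3)), then
    inverts the transformation with tanh.
    """
    z = math.atanh(r)
    se = 1.0 / math.sqrt(n - 3)
    lo, hi = z - z_crit * se, z + z_crit * se
    return math.tanh(lo), math.tanh(hi)
```

The interval is asymmetric about r on the original scale, which is exactly the behaviour the transformation is meant to capture; the bootstrap variants replace the fixed normal critical value with data-driven ones.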

19.
The Durbin–Watson (DW) test for lag 1 autocorrelation has been generalized (DWG) to test for autocorrelations at higher lags; this includes the Wallis test for lag 4 autocorrelation. These tests are also applicable to testing the important hypothesis of randomness. It is found that for small sample sizes a normal distribution, or a scaled beta distribution obtained by matching the first two moments, approximates the null distribution of the DW and DWG statistics well. The approximations seem to be adequate even when the samples are from nonnormal distributions. These approximations require the first two moments of the statistics, and expressions for these moments are derived.
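The generalized statistic itself is simple to compute from the residuals; lag 1 gives the classical Durbin–Watson statistic and lag 4 the Wallis statistic (the moment expressions derived in the paper are not reproduced here):

```python
def dw_statistic(e, lag=1):
    """Generalized Durbin-Watson statistic at the given lag.

    Ratio of the sum of squared lagged differences of the residuals
    to their total sum of squares; near 2 under no autocorrelation,
    near 0 for strong positive and near 4 for strong negative
    autocorrelation at that lag.
    """
    num = sum((e[t] - e[t - lag]) ** 2 for t in range(lag, len(e)))
    den = sum(x * x for x in e)
    return num / den
```

An alternating residual sequence, which is perfectly correlated at lag 2 but negatively correlated at lag 1, pushes the lag-1 statistic toward 4 and drives the lag-2 statistic to 0.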

20.
Various statistical tests have been developed for testing the equality of means in matched pairs with missing values. However, most existing methods are commonly based on certain distributional assumptions such as normality, 0-symmetry or homoscedasticity of the data. The aim of this paper is to develop a statistical test that is robust against deviations from such assumptions and also leads to valid inference in case of heteroscedasticity or skewed distributions. This is achieved by applying a clever randomization approach to handle missing data. The resulting test procedure is not only shown to be asymptotically correct but is also finitely exact if the distribution of the data is invariant with respect to the considered randomization group. Its small sample performance is further studied in an extensive simulation study and compared to existing methods. Finally, an illustrative data example is analysed.
