Similar Documents
20 similar documents retrieved (search time: 15 ms)
1.
A double sampling plan based on truncated life tests is proposed and designed under a general life distribution. The design parameters, such as the sample sizes and acceptance numbers for the first and second samples, are determined so as to minimize the average sample number subject to satisfying the consumer's and producer's risks at their respectively specified quality levels. The resulting tables can be used regardless of the underlying distribution as long as the reliability requirements are specified at the two risks. In addition, the gamma and Weibull distributions are considered in particular, and the design parameters are reported according to quality levels expressed in terms of mean ratios.
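The article's contribution is the choice of the design parameters; the decision rule of a double sampling plan itself is standard and can be sketched as below, where `d1` and `d2` are the observed failure counts in the two truncated life tests and `c1`, `c2` are the acceptance numbers (all names illustrative):

```python
def double_sample_decision(d1, c1, c2, d2=None):
    """Standard double-sampling decision rule: accept if the first
    sample's failure count d1 <= c1, reject if d1 > c2, and otherwise
    draw the second sample and decide on the combined count d1 + d2."""
    if d1 <= c1:
        return "accept"
    if d1 > c2:
        return "reject"
    if d2 is None:
        return "second sample required"
    return "accept" if d1 + d2 <= c2 else "reject"

# e.g. with acceptance numbers c1 = 1, c2 = 3
decision = double_sample_decision(2, 1, 3, d2=1)  # combined count 3 <= c2
```

The tabled `(n1, n2, c1, c2)` combinations in the article are the ones minimizing the average sample number under the two risk constraints; this sketch only shows how such a plan is executed.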

2.
The Durbin–Watson (DW) test for lag-1 autocorrelation has been generalized (DWG) to test for autocorrelations at higher lags; this includes the Wallis test for lag-4 autocorrelation. These tests are also applicable to the important hypothesis of randomness. It is found that, for small sample sizes, a normal distribution or a scaled beta distribution obtained by matching the first two moments approximates the null distribution of the DW and DWG statistics well. The approximations appear adequate even when the samples come from nonnormal distributions. These approximations require the first two moments of the statistics, and expressions for these moments are derived.
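The moment-matched approximations are the article's contribution; the statistic itself is simple to compute. A minimal sketch of the lag-j generalization of the DW statistic (a value near 2 indicates no autocorrelation at that lag):

```python
import numpy as np

def dw_stat(e, lag=1):
    """Generalized Durbin-Watson statistic at the given lag:
    d_lag = sum_t (e_t - e_{t-lag})^2 / sum_t e_t^2."""
    e = np.asarray(e, dtype=float)
    return np.sum((e[lag:] - e[:-lag]) ** 2) / np.sum(e ** 2)

rng = np.random.default_rng(0)
e = rng.normal(size=200)   # a white-noise series (randomness holds)
d1 = dw_stat(e, lag=1)     # classical DW statistic, near 2 here
d4 = dw_stat(e, lag=4)     # Wallis-type lag-4 statistic, near 2 here
```

`lag=4` gives the Wallis-type statistic for quarterly data; the null quantiles needed for a formal test are what the article's two-moment approximations supply.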

3.
The size of the two-sample t test is generally thought to be robust against nonnormal distributions if the sample sizes are large. This belief is based on central limit theory, and asymptotic expansions of the moments of the t statistic suggest that robustness may be improved for moderate sample sizes if the variance, skewness, and kurtosis of the distributions are matched, particularly if the sample sizes are also equal.

It is shown that asymptotic arguments such as these can be misleading and that, in fact, the size of the t test can be as large as unity if the distributions are allowed to be completely arbitrary. Restricting the distributions to be identical or symmetric (but otherwise arbitrary) does not guarantee that the size can be controlled either, but controlling the tail-heaviness of the distributions does. The last result is proved more generally for the k-sample F test.

4.
Test statistics from the class of two-sample linear rank tests are commonly used to compare a treatment group with a control group. Two independent random samples of sizes m and n are drawn from two populations. As a result, N = m + n observations in total are obtained. The aim is to test the null hypothesis of identical distributions. The alternative hypothesis is that the populations are of the same form but with a different measure of central tendency. This article examines mid p-values from the null permutation distributions of tests based on the class of two-sample linear rank statistics. The results obtained indicate that normal approximation-based computations are very close to the permutation simulations, and they provide p-values that are close to the exact mid p-values for all practical purposes.
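For small samples the exact mid p-value can be computed by full enumeration of the permutation distribution. A sketch for the rank-sum statistic (one-sided alternative "x tends to be larger", no ties assumed; not the article's code):

```python
import itertools
import numpy as np

def midp_ranksum(x, y):
    """Exact mid p-value of the Wilcoxon rank-sum of x against the
    one-sided alternative that x is stochastically larger, from the
    full null permutation distribution (assumes no ties)."""
    data = np.concatenate([x, y])
    ranks = data.argsort().argsort() + 1.0   # ranks of the pooled data
    m = len(x)
    w_obs = ranks[:m].sum()
    # every way of assigning m of the N ranks to the first group
    w_all = np.array([sum(c) for c in itertools.combinations(ranks, m)])
    # mid p = P(W > w_obs) + 0.5 * P(W = w_obs)
    return (w_all > w_obs).mean() + 0.5 * (w_all == w_obs).mean()
```

The enumeration grows combinatorially in N, which is why the normal approximation studied in the article matters for practical sample sizes.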

5.
Bartholomew's statistics for testing homogeneity of normal means against ordered alternatives have null distributions that are mixtures of chi-squared or beta distributions, according as the variances are known or not. If the sample sizes are unequal, the mixing coefficients can be difficult to compute. For a simple order and a simple tree ordering, approximations to the significance levels of these tests have been developed based on patterns in the weight sets. However, for a moderate or large number of means, these approximations can be tedious to implement. Employing the same approach used in developing those approximations, two-moment chi-squared and beta approximations are derived for these significance levels. Approximations are also developed for the testing situation in which the order restriction is the null hypothesis. Numerical studies show that in each case the two-moment approximation is quite satisfactory for most practical purposes.

6.
Heterogeneity of variances of treatment groups influences the validity and power of significance tests of location in two distinct ways. First, if sample sizes are unequal, the Type I error rate and power are depressed if a larger variance is associated with a larger sample size, and elevated if a larger variance is associated with a smaller sample size. This well-established effect, which occurs in t and F tests, and to a lesser degree in nonparametric rank tests, results from unequal contributions of pooled estimates of error variance in the computation of test statistics. It is observed in samples from normal distributions, as well as non-normal distributions of various shapes. Second, transformation of scores from skewed distributions with unequal variances to ranks produces differences in the means of the ranks assigned to the respective groups, even if the means of the initial groups are equal, and a subsequent inflation of Type I error rates and power. This effect occurs for all sample sizes, equal and unequal. For the t test, the discrepancy diminishes, and for the Wilcoxon–Mann–Whitney test, it becomes larger, as sample size increases. The Welch separate-variance t test overcomes the first effect but not the second. Because of interaction of these separate effects, the validity and power of both parametric and nonparametric tests performed on samples of any size from unknown distributions with possibly unequal variances can be distorted in unpredictable ways.
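The first effect is easy to reproduce. A minimal simulation sketch (not from the article; the sample sizes and variances are illustrative) pairing the larger variance with the smaller sample, where the pooled t test's Type I error rate inflates while the Welch test stays near nominal:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
reps = 2000
pooled_rej = welch_rej = 0
for _ in range(reps):
    # equal means (H0 true); larger variance with the SMALLER sample
    a = rng.normal(0.0, 3.0, 10)
    b = rng.normal(0.0, 1.0, 40)
    pooled_rej += stats.ttest_ind(a, b, equal_var=True).pvalue < 0.05
    welch_rej += stats.ttest_ind(a, b, equal_var=False).pvalue < 0.05

pooled_rate = pooled_rej / reps   # well above the nominal 0.05
welch_rate = welch_rej / reps     # close to the nominal 0.05
```

Reversing the pairing (larger variance with the larger sample) would depress the pooled test's rejection rate below nominal instead; the second, rank-transformation effect described above is not captured by this normal-data sketch.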

7.
Sample size determination for testing the hypothesis of equality of two proportions against an alternative, with specified type I and type II error probabilities, is considered for two finite populations. When the two finite populations are quite different in size, the equal-size assumption may not be appropriate. In this paper, we impose a balanced sampling condition to determine the necessary samples taken without replacement from the finite populations. It is found that our solution requires smaller samples than those based on binomial distributions. Furthermore, our solution agrees with the sampling-with-replacement solution when the population sizes are large. Finally, three examples are given to show the application of the derived sample size formula.
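The article's balanced-sampling solution is not reproduced here; as a point of comparison, the textbook binomial (infinite-population) sample-size formula, followed by a standard finite-population correction, can be sketched as follows (function names and defaults are illustrative):

```python
from math import ceil
from statistics import NormalDist

def n_binomial(p1, p2, alpha=0.05, beta=0.20):
    """Per-group sample size from the usual binomial formula for a
    two-sided test of p1 = p2 with type I error alpha and power 1-beta."""
    z_a = NormalDist().inv_cdf(1 - alpha / 2)
    z_b = NormalDist().inv_cdf(1 - beta)
    pbar = (p1 + p2) / 2
    num = (z_a * (2 * pbar * (1 - pbar)) ** 0.5
           + z_b * (p1 * (1 - p1) + p2 * (1 - p2)) ** 0.5) ** 2
    return num / (p1 - p2) ** 2

def fpc_adjust(n0, N):
    """Standard finite-population correction: sampling without
    replacement from a population of size N needs fewer draws."""
    return ceil(n0 / (1 + n0 / N))

n0 = n_binomial(0.5, 0.3)       # about 93 per group
n_small = fpc_adjust(n0, 500)   # noticeably fewer for a population of 500
```

This illustrates the direction of the article's finding (smaller samples than the binomial solution, converging to it as N grows), not its specific formula.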

8.
We consider in this article the problem of numerically approximating the quantiles of a sample statistic for a given population, a problem of interest in many applications, such as bootstrap confidence intervals. The proposed Monte Carlo method can be routinely applied to handle complex problems that lack analytical results. Furthermore, the method yields estimates of the quantiles of a sample statistic at any sample size, even though Monte Carlo simulations are needed for only two optimally selected sample sizes. An analysis of the Monte Carlo design is performed to obtain the optimal choices of these two sample sizes and of the number of simulated samples required for each. Theoretical results are presented for the bias and variance of the proposed numerical method. The results are illustrated via simulation studies for the classical problem of estimating a bivariate linear structural relationship. It is seen that the simulated sample sizes used in the Monte Carlo method need not be very large, and that the method approximates quantiles better than asymptotic normal theory does for skewed sampling distributions.
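The two-sample-size design is the article's contribution; the underlying building block, a Monte Carlo quantile estimate of a statistic's sampling distribution at one sample size, can be sketched generically (all names illustrative):

```python
import numpy as np

def mc_quantile(statistic, sampler, n, p, reps=5000, seed=0):
    """Monte Carlo estimate of the p-quantile of the sampling
    distribution of `statistic` at sample size n: draw `reps`
    samples, compute the statistic on each, take the empirical
    quantile."""
    rng = np.random.default_rng(seed)
    values = np.array([statistic(sampler(rng, n)) for _ in range(reps)])
    return np.quantile(values, p)

# 0.975-quantile of the sample mean of Exp(1) data at n = 50;
# the skewness pushes it slightly above the normal-theory value
q = mc_quantile(np.mean, lambda rng, n: rng.exponential(1.0, n), 50, 0.975)
```

The article's method replaces the brute-force loop over every n of interest with simulations at only two optimally chosen sample sizes, from which quantiles at other sample sizes are deduced.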

9.
Several statistical hypothesis tests are available for assessing the normality assumption, an a priori requirement for most parametric statistical procedures. The usual method for comparing the performance of normality tests is to use Monte Carlo simulations to obtain point estimates of the corresponding powers. The aim of this work is to improve the assessment of nine normality hypothesis tests. For that purpose, random samples were drawn from several symmetric and asymmetric nonnormal distributions, and Monte Carlo simulations were carried out to compute confidence intervals for the power achieved, for each distribution, by two of the most common normality tests: Kolmogorov–Smirnov with the Lilliefors correction and Shapiro–Wilk. In addition, the specificity of each test was computed, again by Monte Carlo simulation, using samples from standard normal distributions. The analysis was then extended to the Anderson–Darling, Cramér–von Mises, Pearson chi-square, Shapiro–Francia, Jarque–Bera, D'Agostino, and uncorrected Kolmogorov–Smirnov tests by determining confidence intervals for the areas under the receiver operating characteristic curves; in these simulations, for each sample from a nonnormal distribution, an equal-sized sample was taken from a normal distribution. The Shapiro–Wilk test had the best overall performance, though in some circumstances the Shapiro–Francia or D'Agostino tests offered better results. The differences between the tests were less clear for smaller sample sizes. Notably, the Shapiro–Wilk and Kolmogorov–Smirnov tests performed quite poorly in distinguishing samples drawn from normal distributions from samples drawn from Student's t distributions.
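The basic Monte Carlo power/specificity computation underlying such comparisons can be sketched as follows (a generic sketch using `scipy.stats.shapiro`, not the article's code; sample size, repetitions, and the exponential alternative are illustrative):

```python
import numpy as np
from scipy import stats

def rejection_rate(test_pvalue, sampler, n=50, reps=500, alpha=0.05, seed=1):
    """Monte Carlo estimate of a test's rejection rate against a given
    data-generating process: power if the sampler is nonnormal, size
    (1 - specificity) if it is normal."""
    rng = np.random.default_rng(seed)
    rejections = sum(test_pvalue(sampler(rng, n)) < alpha for _ in range(reps))
    return rejections / reps

shapiro_p = lambda x: stats.shapiro(x).pvalue
# power against a skewed alternative, and size under the null
pw = rejection_rate(shapiro_p, lambda rng, n: rng.exponential(1.0, n))
sz = rejection_rate(shapiro_p, lambda rng, n: rng.normal(0.0, 1.0, n))
```

The article goes further by attaching confidence intervals to such rates and by comparing tests through areas under ROC curves rather than power at a single fixed level.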

10.
This paper examines the effects of rounding data sampled from the exponential distribution. It studies the nature of the rounded distribution, together with the resulting error distribution, and investigates the influence of these distributions on estimates and hypothesis tests. The results indicate that even a moderate degree of rounding can increase the bias of an estimator, and that in hypothesis tests it alters the level of significance.
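The bias effect is easy to see by simulation. A minimal sketch (illustrative, not the article's analysis): exponential data with mean 1, rounded to the nearest multiple of a grid width `h`, yields a sample mean that is systematically biased when `h` is coarse:

```python
import numpy as np

rng = np.random.default_rng(42)
x = rng.exponential(1.0, 200_000)   # true mean is exactly 1

h = 1.0                             # coarse rounding width
x_rounded = np.round(x / h) * h     # record each value to the nearest h
bias = x_rounded.mean() - 1.0       # clearly negative for this h

h_fine = 0.1                        # fine rounding: bias nearly vanishes
bias_fine = (np.round(x / h_fine) * h_fine).mean() - 1.0
```

The asymmetry of the exponential density across each rounding interval is what produces the systematic (here negative) bias; a symmetric density would not show it at first order.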

11.
The use of a range estimator of the population standard deviation, sigma (σ), for determining sample sizes is discussed in this study. Standardized mean ranges (d_n's), when divided into the ranges of sampling frames, provide estimates of the population standard deviation, which can in turn be used to determine sample sizes. The d_n's are provided for seven different distributions, for sampling frame sizes ranging from 2 to 2000. For each of the seven distributions, functional relationships of the form d_n = f(n_SF) are developed, where n_SF is the size of the sampling frame; from these functions, d_n's can be estimated for sampling frame sizes not presented in the study.
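The standardized mean range is E[range]/σ for a frame of size n, and the study's tables can be mimicked by simulation. A sketch (illustrative, not the study's fitted functions), estimating d_n for the standard normal and using it to estimate σ from a frame's range:

```python
import numpy as np

def d_n(n, sampler, reps=20_000, seed=3):
    """Standardized mean range E[range]/sigma for frames of size n,
    estimated by simulation from a unit-variance distribution."""
    rng = np.random.default_rng(3 if seed is None else seed)
    frames = sampler(rng, (reps, n))
    return np.ptp(frames, axis=1).mean()   # mean of per-frame ranges

# d_n for the standard normal at frame size 10 (tabled value about 3.08)
d10 = d_n(10, lambda rng, size: rng.normal(0.0, 1.0, size))

# using it: sigma estimated as (frame range) / d_n
frame = np.random.default_rng(7).normal(50.0, 5.0, 10)
sigma_hat = np.ptp(frame) / d10
```

A single frame gives a noisy σ estimate, which is acceptable for its intended use here: a rough σ to plug into a sample-size formula.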

12.
This paper deals with preliminary test estimation (PTE) of the parameters of the exponential and Pareto distributions in censored samples. Biases, risk functions, efficiency tables, and graphs of the relative efficiency of the proposed estimators are given. We find that the proposed estimators dominate the corresponding unrestricted (usual) estimators in the neighborhood of the null hypothesis. The ranges of the parameters over which the proposed estimators dominate the usual estimators, for different sample sizes and levels of significance, are given. The findings will be useful for practitioners dealing with censored samples in life testing experiments.
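The preliminary test idea itself is simple: keep the hypothesised parameter value unless a pretest rejects it, otherwise fall back to the usual estimator. A sketch for a complete (uncensored) exponential sample, which is a simplification of the paper's censored setting (the exact pivot 2·Σxᵢ/θ₀ ~ χ²(2n) under H₀ is standard):

```python
import numpy as np
from scipy import stats

def pte_exponential_mean(x, theta0, alpha=0.05):
    """Preliminary test estimator of an exponential mean theta for a
    complete sample: return theta0 if a level-alpha chi-squared pretest
    retains H0: theta = theta0, else return the MLE xbar."""
    n = len(x)
    xbar = np.mean(x)
    t = 2 * n * xbar / theta0          # ~ chi2(2n) under H0
    lo = stats.chi2.ppf(alpha / 2, 2 * n)
    hi = stats.chi2.ppf(1 - alpha / 2, 2 * n)
    return theta0 if lo <= t <= hi else xbar
```

The dominance result in the paper concerns exactly this kind of estimator: near θ₀ the shrinkage toward the hypothesised value reduces risk relative to always using the unrestricted estimator.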

13.
Conditional Studentized Survival Tests for Randomly Censored Models
It is shown that in the case of heterogeneous censoring distributions, Studentized survival tests can be carried out as conditional permutation tests given the order statistics and their censoring status. The result is based on a conditional central limit theorem for permutation statistics, and it holds for linear test statistics as well as for sup-statistics. For the two-sample problem, the procedure works under any of the following general circumstances: unbalanced sample sizes, highly censored data, certain non-convergent weight functions, or under alternatives. For instance, the two-sample log-rank test can be carried out asymptotically as a conditional test if the relative amount of uncensored observations vanishes asymptotically, as long as the number of uncensored observations becomes infinite. Similar results hold whenever the sample sizes are suitably unbalanced.

14.
Null as well as alternative distributions of two types of statistics used to test for multiple outliers in exponential samples are obtained. Of these two types, one is based on the ratio of the sum of the observations suspected to be outliers to the sum of all sample observations, and the other is of Dixon's type. The powers of the tests based on these statistics are compared.
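The first type of statistic is straightforward to compute; a sketch for k suspected upper outliers (critical values come from the null distributions derived in the paper and are not reproduced here):

```python
import numpy as np

def upper_outlier_ratio(x, k):
    """Ratio of the sum of the k largest observations (those suspected
    to be outliers) to the total sum; large values point to upper
    outliers in an exponential sample."""
    x = np.sort(np.asarray(x, dtype=float))
    return x[-k:].sum() / x.sum()

# one suspiciously large value among five
r = upper_outlier_ratio([1, 1, 1, 1, 6], 1)   # 6 / 10 = 0.6
```

A Dixon-type alternative would instead compare gaps between extreme order statistics; the paper compares the power of both families.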


15.
We study various bootstrap and permutation methods for matched pairs, whose distributions can have different shapes even under the null hypothesis of no treatment effect. Although the data may not be exchangeable under the null, we investigate different permutation approaches as valid procedures for finite sample sizes. It will be shown that permutation or bootstrap schemes, which neglect the dependency structure in the data, are asymptotically valid. Simulation studies show that these new tests improve the power of the t-test under non-normality.
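One standard matched-pairs permutation scheme of the kind under study randomly flips the sign of each within-pair difference. A minimal sketch (illustrative, not the article's specific studentized procedure):

```python
import numpy as np

def signflip_pvalue(x, y, reps=10_000, seed=0):
    """Matched-pairs sign-flip permutation test: compare the observed
    |mean difference| to its distribution under random sign flips of
    the within-pair differences (two-sided p-value)."""
    rng = np.random.default_rng(seed)
    d = np.asarray(x, dtype=float) - np.asarray(y, dtype=float)
    t_obs = abs(d.mean())
    signs = rng.choice([-1.0, 1.0], size=(reps, len(d)))
    t_perm = np.abs((signs * d).mean(axis=1))
    # add-one correction keeps the p-value strictly positive
    return (1 + np.sum(t_perm >= t_obs)) / (reps + 1)
```

Sign flipping assumes symmetry of the differences under the null; the article's point is precisely that suitably studentized versions remain valid even when the two marginal distributions have different shapes.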

16.
An empirical distribution function estimator for the difference of order statistics from two independent populations can be used for inference between quantiles from these populations. The inferential properties of the approach are evaluated in a simulation study where different sample sizes, theoretical distributions, and quantiles are studied. Small to moderate sample sizes, tail quantiles, and quantiles which do not coincide with the expectation of an order statistic are identified as problematic for appropriate Type I error control.

17.
An expectation maximization (EM) algorithm is proposed to find fibre length distributions in standing trees. The available data come from cylindric wood samples (increment cores). The sample contains uncut fibres as well as fibres cut once or twice. The sample contains not only fibres, but also other cells, the so-called 'fines'. The lengths are measured by an automatic fibre-analyser, which is not able to distinguish fines from fibres and cannot tell if a cell has been cut. The data thus come from a censored version of a mixture of the fine and fibre length distributions in the tree. The parameters of the length distributions are estimated by a stochastic version of the EM algorithm, and an estimate of the corresponding covariance matrix is derived. The method is applied to data from northern Sweden. A simulation study is also presented. The method works well for sample sizes commonly obtained from increment cores.

18.
This paper describes a permutation procedure to test for the equality of selected elements of a covariance or correlation matrix across groups. It involves either centring or standardising each variable within each group before randomly permuting observations between groups. Since the assumption of exchangeability of observations between groups does not strictly hold following such transformations, Monte Carlo simulations were used to compare expected and empirical rejection levels as a function of group size, the number of groups and distribution type (Normal, mixtures of Normals and Gamma with various values of the shape parameter). The Monte Carlo study showed that the estimated probability levels are close to those that would be obtained with an exact test except at very small sample sizes (5 or 10 observations per group). The test appears robust against non-normal data, different numbers of groups or variables per group and unequal sample sizes per group. Power was increased with increasing sample size, effect size and the number of elements in the matrix and power was decreased with increasingly unequal numbers of observations per group.
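A two-group, single-correlation version of the described procedure can be sketched as follows (a simplified illustration, not the paper's full multi-group, multi-element implementation; all names are illustrative):

```python
import numpy as np

def corr_diff_perm_test(g1, g2, reps=1000, seed=0):
    """Permutation test for equality of the correlation between two
    variables across two groups: standardise each variable within its
    group, then permute rows between groups and recompute the absolute
    between-group difference in correlation (two-sided)."""
    rng = np.random.default_rng(seed)

    def standardise(g):
        return (g - g.mean(axis=0)) / g.std(axis=0, ddof=1)

    def stat(a, b):
        return abs(np.corrcoef(a.T)[0, 1] - np.corrcoef(b.T)[0, 1])

    z1 = standardise(np.asarray(g1, dtype=float))
    z2 = standardise(np.asarray(g2, dtype=float))
    t_obs = stat(z1, z2)
    pooled, n1 = np.vstack([z1, z2]), len(z1)
    count = 0
    for _ in range(reps):
        p = rng.permutation(len(pooled))
        count += stat(pooled[p[:n1]], pooled[p[n1:]]) >= t_obs
    return (1 + count) / (reps + 1)

# two groups of 40 with opposite-sign correlations: should reject
rng = np.random.default_rng(1)
x1, x2 = rng.normal(size=40), rng.normal(size=40)
g1 = np.column_stack([x1, x1 + 0.3 * rng.normal(size=40)])   # corr near +1
g2 = np.column_stack([x2, -x2 + 0.3 * rng.normal(size=40)])  # corr near -1
p_val = corr_diff_perm_test(g1, g2)
```

Standardising within groups before permuting is exactly the transformation whose effect on exchangeability the paper's Monte Carlo study investigates.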

19.
Goodness-of-fit tests for the innovation distribution in GARCH models based on measuring deviations between the empirical characteristic function of the residuals and the characteristic function under the null hypothesis have been proposed in the literature. The asymptotic distributions of these test statistics depend on unknown quantities, so their null distributions are usually estimated through parametric bootstrap (PB). Although easy to implement, the PB can become very computationally expensive for large sample sizes, which is typically the case in applications of these models. This work proposes to approximate the null distribution through a weighted bootstrap. The procedure is studied both theoretically and numerically. Its asymptotic properties are similar to those of the PB, but, from a computational point of view, it is more efficient.

20.
In this article the authors show how, by adequately decomposing the null hypothesis of the multi-sample block-scalar sphericity test, it is possible to obtain the likelihood ratio test statistic as well as a different view of its exact distribution. This enables the construction of well-performing near-exact approximations for the distribution of the test statistic, whose exact distribution is quite elaborate and unmanageable. The near-exact distributions obtained are manageable and perform much better than the available asymptotic distributions, even for small sample sizes, and they show good asymptotic behavior for increasing sample sizes as well as for increasing numbers of variables and/or populations involved.
