Similar Documents
20 similar documents found.
1.
Bartholomew's statistics for testing homogeneity of normal means with ordered alternatives have null distributions which are mixtures of chi-squared or beta distributions, depending on whether the variances are known or unknown. If the sample sizes are not equal, the mixing coefficients can be difficult to compute. For a simple order and a simple tree ordering, approximations to the significance levels of these tests have been developed which are based on patterns in the weight sets. However, for a moderate or large number of means, these approximations can be tedious to implement. Employing the same approach that was used in the development of these approximations, two-moment chi-squared and beta approximations are derived for these significance levels. Approximations are also developed for the testing situation in which the order restriction is the null hypothesis. Numerical studies show that in each case the two-moment approximation is quite satisfactory for most practical purposes.
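As a rough illustration of the two-moment idea, the sketch below matches the first two moments of a chi-bar-squared mixture to a scaled chi-squared distribution, Satterthwaite style, and compares the resulting tail probabilities with the exact mixture tail. The mixing weights and degrees of freedom are made-up values, not the weight-set-based coefficients derived in the paper, so this is only a generic stand-in for the approximation described above.

```python
import numpy as np
from scipy.stats import chi2

# Hypothetical mixing weights over chi-squared components with 0, 1, 2, 3 degrees
# of freedom (in practice these are the level probabilities of the ordering).
w = np.array([0.125, 0.375, 0.375, 0.125])
df = np.array([0, 1, 2, 3])

def mixture_tail(c):
    """Exact tail P(chi-bar-squared >= c); the 0-df component is a point mass at 0."""
    return float(np.sum(w * np.where(df > 0, chi2.sf(c, np.maximum(df, 1)), 0.0)))

def two_moment_tail(c):
    """Approximate tail from matching the mixture's first two moments to a*chi^2_b."""
    m1 = np.sum(w * df)                      # mean of the mixture
    m2 = np.sum(w * (2 * df + df ** 2))      # second moment of the mixture
    a = (m2 - m1 ** 2) / (2 * m1)
    b = m1 / a
    return float(chi2.sf(c / a, b))

for c in (2.0, 4.0, 6.0):
    print(c, mixture_tail(c), two_moment_tail(c))
```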

2.
Goodness-of-fit tests are proposed for unimodal densities and U-shaped hazards. The tests are based on maximum-product-of-spacings estimators, and incorporate unimodality or U-shapedness using order restrictions. A slightly improved “maximum violator” algorithm is given for computing the order-restricted estimates and test statistics. Modified spacings such as “k-spacings”, which may actually increase power, ensure computational feasibility when sample sizes are large. Simulations demonstrate that for samples of size less than twenty, the use of order restrictions can increase power, even with modified spacings. The proposed methods can also be used as approximations when the null hypothesis is specified only up to unknown parameters, which must be estimated.
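The estimators above rest on the maximum-product-of-spacings (MPS) principle. The sketch below shows that principle in its plainest form — fitting an exponential scale parameter by maximizing the log-spacings of the fitted CDF. It is only a generic illustration; it does not implement the order-restricted unimodal/U-shaped versions, the maximum-violator algorithm, or the k-spacings modification.

```python
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.stats import expon

rng = np.random.default_rng(7)
x = np.sort(rng.exponential(scale=2.0, size=40))   # simulated data, true scale = 2

def neg_log_spacings(scale):
    # Spacings of the fitted CDF at the ordered data, padded with 0 and 1 at the ends.
    u = expon.cdf(x, scale=scale)
    d = np.diff(np.concatenate(([0.0], u, [1.0])))
    d = np.maximum(d, 1e-300)                       # guard against ties / underflow
    return -np.sum(np.log(d))

res = minimize_scalar(neg_log_spacings, bounds=(1e-3, 50.0), method="bounded")
print("MPS estimate of the exponential scale:", res.x)
```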

3.
Estimation of two normal means with an order restriction is considered when the covariance matrix is known. It is shown that the restricted maximum likelihood estimator (MLE) stochastically dominates both estimators proposed by Hwang and Peddada [Confidence interval estimation subject to order restrictions. Ann Statist. 1994;22(1):67–93] and Peddada et al. [Estimation of order-restricted means from correlated data. Biometrika. 2005;92:703–715]. The estimators are also compared under the Pitman nearness criterion, and it is shown that the MLE is closer to the ordered means than the other two estimators. Estimation of linear functions of ordered means is also considered, and a necessary and sufficient condition on the coefficients is given for the MLE to dominate the other estimators in terms of mean squared error.
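For the bivariate case considered above, the order-restricted MLE has a simple closed form: it is the generalized-least-squares projection of the observation onto the cone {μ1 ≤ μ2}, pooling the two coordinates when the unrestricted estimate violates the order. A minimal sketch, with an arbitrary illustrative covariance matrix:

```python
import numpy as np

def restricted_mle(x, Sigma):
    """MLE of (mu1, mu2) under mu1 <= mu2 when X ~ N(mu, Sigma) with Sigma known.
    If the unrestricted estimate x already satisfies the order it is returned;
    otherwise the constraint binds and both components equal the GLS pooled value."""
    x = np.asarray(x, dtype=float)
    if x[0] <= x[1]:
        return x
    Sinv = np.linalg.inv(Sigma)
    ones = np.ones(2)
    pooled = (ones @ Sinv @ x) / (ones @ Sinv @ ones)
    return np.array([pooled, pooled])

print(restricted_mle([1.3, 0.9], np.array([[1.0, 0.4], [0.4, 2.0]])))
```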

4.
Likelihood ratio tests are considered for two testing situations: testing for the homogeneity of k normal means against an alternative restricted by a simple tree ordering, and testing the null hypothesis that the means satisfy this ordering against all alternatives. Exact expressions are given for the power functions for k = 3 and 4 and unequal sample sizes, for both known and unknown population variances, and approximations are discussed for larger k. Bartholomew's conjectures concerning minimal and maximal powers are also investigated for the cases of equal and unequal sample sizes. The power formulas are used to compute powers for a numerical example.

5.
Inferences for survival curves based on right-censored data are studied for situations in which it is believed that the treatments have survival times at least as large as, or at least as small as, those of the control. Testing homogeneity against the appropriate order-restricted alternative and testing the order restriction as the null hypothesis are both considered. Under a proportional hazards model, the ordering on the survival curves corresponds to an ordering on the regression coefficients. Approximate likelihood methods, which are obtained by applying order-restricted procedures to the estimates of the regression coefficients, and ordered analogues of the log-rank test, which are based on the score statistics, are considered. Mau's (1988) test, which does not require proportional hazards, is extended to this ordering on the survival curves. Using Monte Carlo techniques, the type I error rates are found to be close to the nominal level, and the powers of these tests are compared. Other order restrictions on the survival curves are discussed briefly.

6.
Tests of homogeneity of normal means with the alternative restricted by an ordering on the means are considered. The simply ordered case, μ1 ≤ μ2 ≤ ··· ≤ μk, and the simple tree ordering, μ1 ≤ μj for j = 2, 3, …, k, are emphasized. A modification of the likelihood-ratio test is proposed which is asymptotically equivalent to it but more robust to violations of the hypothesized orderings. At points satisfying the hypothesized ordering, the new test has power similar to that of the likelihood-ratio test, provided the degrees of freedom are not too small. The modified test is shown to be unbiased and consistent.
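Likelihood-ratio statistics for such ordered alternatives are built from isotonic estimates of the means, usually computed with the pool-adjacent-violators algorithm (PAVA). Below is a small weighted PAVA sketch and the resulting chi-bar-squared-type statistic for the simply ordered case with known variances; the sample means and weights are made up for illustration, and this is the classical statistic rather than the modified test proposed in the paper.

```python
import numpy as np

def pava(y, w):
    """Weighted pool-adjacent-violators algorithm: nondecreasing isotonic fit."""
    vals, wts, counts = [], [], []
    for yi, wi in zip(map(float, y), map(float, w)):
        vals.append(yi)
        wts.append(wi)
        counts.append(1)
        # Pool backwards while the monotonicity constraint is violated.
        while len(vals) > 1 and vals[-2] > vals[-1]:
            new_w = wts[-2] + wts[-1]
            new_v = (wts[-2] * vals[-2] + wts[-1] * vals[-1]) / new_w
            new_c = counts[-2] + counts[-1]
            vals[-2:], wts[-2:], counts[-2:] = [new_v], [new_w], [new_c]
    return np.repeat(vals, counts)

# Chi-bar-squared-type statistic for H0: equal means vs. a nondecreasing simple order,
# with known variances (weights w_i = n_i / sigma_i^2); values are illustrative only.
xbar = np.array([0.2, 0.1, 0.5, 0.7])
w = np.array([10.0, 12.0, 8.0, 10.0])
fit = pava(xbar, w)
grand = np.sum(w * xbar) / np.sum(w)
chibar_01 = np.sum(w * (fit - grand) ** 2)
print(fit, chibar_01)
```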

7.
Approximations to the power functions of the likelihood ratio tests of homogeneity of normal means against the simple loop ordering at slippage alternatives are considered. If a researcher knows which mean is smallest and which is largest, but does not know how the other means are ordered, then a simple loop ordering is appropriate. The accuracy of several moment approximations is studied for the case of known variances, and it is found that, for powers in the range typically of interest, the two-moment approximation is quite adequate. Approximations based on mixtures of noncentral F variables are developed for the case of unknown variances. The critical values of the test statistics are also tabulated for selected levels of significance.

8.
Let X(n) and X(1) be the largest and smallest order statistics, respectively, of a random sample of fixed size n. Quite generally, X(1) and X(n) are approximately independent for n sufficiently large. In this article, we study the dependence properties of random extremes in terms of their copula when the sample size has a left-truncated binomial distribution, and show that the extremes tend to be more dependent in this case. We also give closed-form formulas for the measures of association Kendall's τ and Spearman's ρ to quantify the amount of dependence between the two extremes.
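A quick Monte Carlo counterpart to the closed-form formulas mentioned above: simulate the sample minimum and maximum when the sample size follows a binomial distribution truncated on the left, and estimate Kendall's τ and Spearman's ρ empirically. Truncating at N ≥ 2 is an assumption made here so that both extremes are defined and not trivially equal; all parameters are illustrative.

```python
import numpy as np
from scipy.stats import kendalltau, spearmanr

rng = np.random.default_rng(4)
n_max, p, reps = 20, 0.3, 10000

mins, maxs = [], []
for _ in range(reps):
    N = 0
    while N < 2:                      # left-truncated binomial sample size (N >= 2 assumed)
        N = rng.binomial(n_max, p)
    x = rng.standard_normal(N)
    mins.append(x.min())
    maxs.append(x.max())

print("Kendall's tau:", kendalltau(mins, maxs)[0])
print("Spearman's rho:", spearmanr(mins, maxs)[0])
```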

9.
A failed system is repaired minimally if, after failure, it is restored to the working condition of an identical system of the same age. We extend the nonparametric maximum likelihood estimator (MLE) of a system's lifetime distribution function to test units that are known to have an increasing failure rate (IFR). Such items comprise a significant portion of working components in industry. The order-restricted MLE is shown to be consistent. Similar results hold for the Brown–Proschan imperfect repair model, which dictates that a failed component is repaired perfectly with some unknown probability and is otherwise repaired minimally. The estimators derived are motivated and illustrated by failure data from the nuclear industry. Failure times for groups of emergency diesel generators and motor-driven pumps are analyzed using the order-restricted methods. The order-restricted estimators show distinct differences from the ordinary MLEs. Simulation results suggest that a significant improvement in reliability estimation is available in many cases when component failure data exhibit the IFR property.

10.
In this paper, we discuss some stochastic comparisons of the sample median and the sample mean in a random sample from a normal distribution. Specifically, we establish that the sample median is stochastically farther from the population mean than the sample mean is. To verify the comparison, we derive an upper bound for some distributional characteristics of the distance between the sample median and the population mean. The stochastic ordering considered here is the likelihood ratio order.
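The comparison can be checked empirically: for normal samples, tabulate the tail probabilities of |median − μ| and |mean − μ| over a grid of thresholds; under the ordering claimed above one survival curve should dominate the other. Note this sketch only examines the usual (weaker) stochastic order by simulation, not the likelihood ratio order established in the paper.

```python
import numpy as np

rng = np.random.default_rng(5)
n, mu, reps = 25, 0.0, 50000

x = rng.normal(mu, 1.0, size=(reps, n))
d_mean = np.abs(x.mean(axis=1) - mu)
d_median = np.abs(np.median(x, axis=1) - mu)

# Empirical survival functions of the two distances at a grid of thresholds.
for t in (0.1, 0.2, 0.3, 0.4):
    print(t, "P(|median - mu| > t) =", (d_median > t).mean(),
          "P(|mean - mu| > t) =", (d_mean > t).mean())
```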

11.
The purpose of our study is to propose a procedure for determining the sample size at each stage of repeated group significance tests intended to compare the efficacy of two treatments when the response variable is normal. A procedure for reducing the maximum sample size is needed because group sequential tests often require large sample sizes. In order to reduce the sample size at each stage, we construct repeated confidence boundaries which enable us to determine which of the two treatments is the more effective at an early stage. We then use recursive numerical-integration formulae to determine the sample size at each intermediate stage. We compare our procedure with Pocock's in terms of maximum sample size and average sample size in simulations.
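The reason repeated confidence boundaries are needed is that testing after every group with the fixed-sample critical value inflates the overall type I error. The sketch below is a generic Monte Carlo illustration of that inflation for a two-arm normal-response trial with equal group sizes; it is not the authors' sample-size procedure, and all settings are arbitrary.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(6)
stages, n_per_stage, reps = 5, 20, 20000
z_fixed = norm.ppf(0.975)                 # naive fixed-sample critical value

rejected = 0
for _ in range(reps):
    # Stagewise differences of treatment-arm means under no true difference (sigma = 1):
    # each stage adds n_per_stage patients per arm, so each stage difference has
    # variance 2 / n_per_stage.
    diffs = rng.normal(0.0, np.sqrt(2.0 / n_per_stage), stages)
    k = np.arange(1, stages + 1)
    cum_diff = np.cumsum(diffs) / k       # cumulative mean difference after k stages
    z = cum_diff / np.sqrt(2.0 / (n_per_stage * k))
    rejected += np.any(np.abs(z) > z_fixed)

print("overall type I error with repeated looks at 1.96:", rejected / reps)
```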

12.
Xiong Cai & Yiying Zhang, Statistics, 2017, 51(3): 615–626
In this paper, we compare the hazard rate functions of the second-order statistics arising from two sets of independent multiple-outlier proportional hazard rates (PHR) samples. It is proved that the submajorization order between the sample size vectors, together with the supermajorization order between the hazard rate vectors, implies the hazard rate ordering between the corresponding second-order statistics from multiple-outlier PHR random variables. The results established here provide theoretical guidance both for the winner's bid price in second-price reverse auctions in auction theory and for fail-safe system design in reliability. Some numerical examples are also provided for illustration.

13.
Normal probability plots for a simple random sample and normal probability plots for residuals from linear regression are not treated differently in statistical textbooks. In the statistical literature, 1 − α simultaneous probability intervals for augmenting a normal probability plot of a simple random sample are available. The first purpose of this article is to demonstrate that the tests associated with the 1 − α simultaneous probability intervals for a simple random sample may have a size substantially different from α when applied to the residuals from linear regression. This leads to the second purpose of this article: the construction of four normal-probability-plot-based tests for residuals that have size exactly α. We then compare the powers of these four graphical tests and a non-graphical test for residuals, in order to assess the power performance of the graphical tests and to identify the ones with better power. Finally, an example is provided to illustrate the methods.
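The size distortion described above can be reproduced in a simplified setting: calibrate a normal-probability-plot correlation statistic by simulation for an i.i.d. normal sample, then apply the same critical value to OLS residuals from a fixed design and record the empirical rejection rate. The statistic, plotting positions, and design below are illustrative choices and are not the simultaneous intervals or the four size-α tests constructed in the article.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def ppcc(x):
    """Normal probability-plot correlation: ordered values vs. normal scores."""
    x = np.sort(x)
    n = len(x)
    p = (np.arange(1, n + 1) - 0.375) / (n + 0.25)   # Blom-type plotting positions
    return np.corrcoef(x, stats.norm.ppf(p))[0, 1]

n, alpha, reps = 50, 0.05, 5000

# Calibrate the critical value for an i.i.d. normal sample of size n.
null_stats = np.array([ppcc(rng.standard_normal(n)) for _ in range(reps)])
crit = np.quantile(null_stats, alpha)                # reject when the correlation is small

# Apply the same critical value to OLS residuals from a fixed quadratic design.
X = np.column_stack([np.ones(n), np.linspace(0, 1, n), np.linspace(0, 1, n) ** 2])
M = np.eye(n) - X @ np.linalg.solve(X.T @ X, X.T)    # residual-maker matrix
rejections = 0
for _ in range(reps):
    e = M @ rng.standard_normal(n)                   # residuals under a correct normal model
    rejections += ppcc(e) < crit
print("nominal size:", alpha, "empirical size on residuals:", rejections / reps)
```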

14.
The sample linear discriminant function (LDF) is known to perform poorly when the number of features p is large relative to the size of the training samples. A simple and rarely applied alternative to the sample LDF is the sample Euclidean distance classifier (EDC). Raudys and Pikelis (1980) compared the sample LDF with three other discriminant functions, including the sample EDC, when classifying individuals from two spherical normal populations, and concluded that the sample EDC outperforms the sample LDF when p is large relative to the training sample size. This paper derives conditions under which the two classifiers are equivalent when all parameters are known, and employs a Monte Carlo simulation to compare the sample EDC with the sample LDF not only for the spherical normal case but also for several nonspherical parameter configurations. For many practical situations, the sample EDC performs as well as or better than the sample LDF, even for nonspherical covariance configurations.
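A minimal Monte Carlo along the lines of the comparison above: train the sample LDF (using a pseudo-inverse, since the pooled covariance is singular when p exceeds the pooled degrees of freedom) and the sample EDC on two spherical normal populations, and estimate their misclassification rates on independent test data. All settings (p, n, separation) are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(1)

def error_rates(p=30, n=15, delta=1.5, reps=300, n_test=500):
    """Monte Carlo misclassification rates of the sample LDF and the sample EDC."""
    mu1 = np.zeros(p)
    mu2 = np.full(p, delta / np.sqrt(p))              # fixed overall separation
    err = {"LDF": 0.0, "EDC": 0.0}
    for _ in range(reps):
        x1 = rng.normal(mu1, 1.0, (n, p))
        x2 = rng.normal(mu2, 1.0, (n, p))
        m1, m2 = x1.mean(0), x2.mean(0)
        S = ((x1 - m1).T @ (x1 - m1) + (x2 - m2).T @ (x2 - m2)) / (2 * n - 2)
        weights = {"LDF": np.linalg.pinv(S) @ (m1 - m2),  # pseudo-inverse: S can be singular
                   "EDC": m1 - m2}                        # nearest (Euclidean) mean rule
        mid = (m1 + m2) / 2
        t1 = rng.normal(mu1, 1.0, (n_test, p))
        t2 = rng.normal(mu2, 1.0, (n_test, p))
        for name, w in weights.items():
            e1 = np.mean((t1 - mid) @ w < 0)              # population-1 points misclassified
            e2 = np.mean((t2 - mid) @ w > 0)              # population-2 points misclassified
            err[name] += (e1 + e2) / (2 * reps)
    return err

print(error_rates())
```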

15.
The aim of this paper is to find an optimal alternative bivariate ranked-set sample for the one-sample location-model bivariate sign test. Our numerical and theoretical results indicate that the optimal designs for the bivariate sign test are the alternative designs that quantify the order statistics with labels {((r+1)/2, (r+1)/2)} when the set size r is odd, and {(r/2+1, r/2), (r/2, r/2+1)} when the set size r is even. The asymptotic distribution and Pitman efficiencies of these designs are derived. A simulation study is conducted to investigate the power of the proposed optimal designs. An illustration using real data, with a bootstrap algorithm for p-value estimation, is provided.

16.
This paper provides closed form expressions for the sample size for two-level factorial experiments when the response is the number of defectives. The sample sizes are obtained by approximating the two-sided test for no effect through tests for the mean of a normal distribution, and borrowing the classical sample size solution for that problem. The proposals are appraised relative to the exact sample sizes computed numerically, without appealing to any approximation to the binomial distribution, and the use of the sample size tables provided is illustrated through an example.
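For context, the "classical sample size solution" being borrowed is essentially the textbook normal-approximation formula for comparing two proportions. A hedged sketch of that generic formula (not the paper's factorial-specific closed forms) follows.

```python
import math
from scipy.stats import norm

def n_per_group(p1, p2, alpha=0.05, power=0.90):
    """Normal-approximation sample size per group for a two-sided test that the
    proportion defective changes from p1 to p2 (generic textbook formula)."""
    z_a = norm.ppf(1 - alpha / 2)
    z_b = norm.ppf(power)
    num = (z_a + z_b) ** 2 * (p1 * (1 - p1) + p2 * (1 - p2))
    return math.ceil(num / (p1 - p2) ** 2)

print(n_per_group(0.10, 0.05))   # e.g. detecting a drop from 10% to 5% defective
```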

17.
The authors discuss the bias of the estimate of the variance of the overall effect synthesized from individual studies by the variance-weighted method. This bias is proven to be negative. Furthermore, the conditions for, likelihood of, and magnitude of underestimation by this conventional estimate are studied under the assumption that the effect estimates are normally distributed with a common mean. The likelihood of underestimation is very high (e.g. greater than 85% when the sample sizes of the two combined studies are less than 120). Alternative, less biased estimates for the cases with and without homogeneity of the variances are given in order to adjust for the sample size and the variation of the population variance. In addition, the sample-size-weighted method is suggested when the consistency of the sample variances is violated. Finally, a real example is presented to show the differences among the three estimation methods.
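The direction of the bias can be checked with a short simulation: compute the variance-weighted overall effect using estimated study variances, and compare the conventional variance estimate 1/Σw_i with the Monte Carlo variance of the combined estimate. The number of studies and the study sizes below are arbitrary illustrative values.

```python
import numpy as np

rng = np.random.default_rng(2)
k, n_i, reps = 5, 20, 20000               # number of studies, per-study size, replications

overall, conv_var = [], []
for _ in range(reps):
    y = rng.normal(0.0, 1.0, (k, n_i))                # k studies sharing a common true mean 0
    means = y.mean(axis=1)
    w = n_i / y.var(axis=1, ddof=1)                   # estimated precision of each study mean
    overall.append(np.sum(w * means) / np.sum(w))     # variance-weighted overall effect
    conv_var.append(1.0 / np.sum(w))                  # conventional variance estimate

print("mean of the conventional variance estimates:", np.mean(conv_var))
print("Monte Carlo variance of the overall effect:  ", np.var(overall, ddof=1))
```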

18.
An unbiased estimator for the common mean of k normal distributions is suggested. A necessary and sufficient condition for this estimator to have a smaller variance than each sample mean is given. In the case of estimating the common mean vector of k p-variate (p ≤ 3) normal distributions, a combined unbiased estimator may be used. We give a class of estimators which are better than the combined estimator when the loss is quadratic and the restriction of unbiasedness is removed.
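As a point of reference, the classical Graybill–Deal-type combination weights each sample mean by its estimated precision and is unbiased because the weights are independent of the means under normality; whether it beats every individual sample mean depends on the sample sizes, which is the kind of condition the abstract refers to. The sketch below uses only this classical stand-in, not necessarily the estimator proposed in the paper.

```python
import numpy as np

rng = np.random.default_rng(8)
n = np.array([10, 25, 40])                 # sample sizes of the k = 3 populations
sigma = np.array([1.0, 2.0, 0.5])          # unequal (in practice unknown) std deviations
mu, reps = 1.0, 20000                      # common mean

estimates = []
for _ in range(reps):
    samples = [rng.normal(mu, s, m) for s, m in zip(sigma, n)]
    xbar = np.array([x.mean() for x in samples])
    w = n / np.array([x.var(ddof=1) for x in samples])   # estimated precision of each mean
    estimates.append(np.sum(w * xbar) / np.sum(w))       # Graybill-Deal-type combination

print("estimated bias:", np.mean(estimates) - mu)
print("variance of the combined estimator:", np.var(estimates, ddof=1))
print("variances of the individual sample means:", sigma ** 2 / n)
```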

19.
The need to establish the relative superiority of each treatment when compared to all the others, i.e., to order the underlying populations according to some pre-specified criteria, arises in many applied research studies and technical or business problems. When the populations are multivariate in nature, the problem can become quite difficult to deal with, especially in the case of small sample sizes or unreplicated designs. The purpose of this work is to propose a new approach to the problem of ranking several multivariate normal populations. It is argued theoretically and demonstrated numerically that our method controls the risk of false ranking classification under the hypothesis of population homogeneity, while under nonhomogeneity alternatives the true ranking can be estimated with satisfactory accuracy, especially for the “best” populations. Our simulation study also shows that the method is robust to moderate deviations from multivariate normality. Finally, an application to a real case study in the field of life cycle assessment is presented to highlight the practical relevance of the proposed methodology.

20.

Engineers who conduct reliability tests need to choose the sample size when designing a test plan. The model parameters and quantiles are the typical quantities of interest. The large-sample procedure relies on the property that the distribution of the t-like quantities is close to the standard normal in large samples. In this paper, we use a new procedure based on both simulation and asymptotic theory to determine the sample size for a test plan. Unlike the complete-data case, the t-like quantities are not in general pivotal when the data are time censored. However, we show that the distribution of the t-like quantities depends only on the expected proportion failing, and we obtain these distributions by simulation for both the complete and time-censored cases when the data follow a Weibull distribution. We find that the large-sample procedure usually underestimates the sample size, even when it is said to be 200 or more. The sample size given by the proposed procedure ensures the requested nominal accuracy and confidence of the estimation when the test plan results in complete or time-censored data. Some useful figures displaying the required sample size for the new procedure are also presented.
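A rough sketch of the simulation idea for time-censored Weibull data: fit the censored likelihood numerically and tabulate the empirical distribution of a t-like quantity for the log shape parameter. The standard error taken from the BFGS inverse Hessian is a crude stand-in for an information-based one, and the settings are arbitrary; this only illustrates the general approach, not the paper's procedure.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(3)
beta, eta, t_c, n, reps = 2.0, 1.0, 1.2, 50, 1000   # true shape/scale, censoring time, size

def neg_loglik(theta, t, d):
    """Weibull negative log-likelihood with Type-I (time) censoring.
    theta = (log shape, log scale); d = 1 for observed failures, 0 for censored."""
    b, e = np.exp(theta)
    z = (t / e) ** b
    return -(np.sum(d * (np.log(b / e) + (b - 1) * np.log(t / e))) - np.sum(z))

t_like = []
for _ in range(reps):
    t = eta * rng.weibull(beta, n)
    d = (t <= t_c).astype(float)                    # censoring indicator
    t = np.minimum(t, t_c)
    res = minimize(neg_loglik, x0=np.log([1.0, t.mean()]), args=(t, d), method="BFGS")
    se_log_shape = np.sqrt(res.hess_inv[0, 0])      # crude s.e. from the BFGS inverse Hessian
    t_like.append((res.x[0] - np.log(beta)) / se_log_shape)

# If the t-like quantity were standard normal, these would be near -1.96 and 1.96.
print("empirical 2.5% and 97.5% points:", np.quantile(t_like, [0.025, 0.975]))
```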
