Similar Articles
20 similar articles found (search time: 78 ms)
1.
Using the methods of asymptotic decision theory, rank tests are constructed that are asymptotically optimal for translation and scale families as well as for certain nonparametric families. Moreover, two new classes of nonlinear rank tests are introduced. These tests are designed for detecting either “omnibus alternatives” or “one-sided alternatives of trend”. Under the null hypothesis of randomness all tests are distribution-free. The asymptotic distributions of the test statistics are derived under contiguous alternatives.
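To make the flavor of such rank tests concrete, here is a minimal Python sketch of a classical linear rank statistic against “alternatives of trend” (the paper's tests are nonlinear; this simpler statistic is only illustrative, not one of the tests introduced there). Under the null hypothesis of randomness the standardized statistic is asymptotically standard normal.

```python
import numpy as np
from scipy.stats import rankdata, norm

def linear_rank_trend_test(x):
    """Standardized linear rank statistic against a monotone trend.

    T = sum_i (i - (n+1)/2) * (R_i - (n+1)/2), with exact null variance
    n^2 (n+1)^2 (n-1) / 144; T / sqrt(Var) is asymptotically N(0, 1)
    under the hypothesis of randomness.
    """
    x = np.asarray(x, dtype=float)
    n = x.size
    r = rankdata(x)
    i = np.arange(1, n + 1)
    t = np.sum((i - (n + 1) / 2) * (r - (n + 1) / 2))
    var = n**2 * (n + 1) ** 2 * (n - 1) / 144.0
    z = t / np.sqrt(var)
    return z, 2 * norm.sf(abs(z))  # two-sided p-value

rng = np.random.default_rng(0)
# data with a mild upward drift added, so a trend should be detectable
print(linear_rank_trend_test(rng.normal(size=50) + 0.05 * np.arange(50)))
```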

2.
For two-parameter exponential populations with the same scale parameter (known or unknown), comparisons are made between the location parameters. This is done by constructing confidence intervals, which can then be used for selection procedures. Comparisons are made with a control, and with the (unknown) “best” or “worst” population. Emphasis is laid on finding approximations to the confidence coefficient so that calculations are simple and tables are unnecessary. (Since we consider unequal sample sizes, tables of exact values would need to be extensive.)

3.
The cumulative count of conforming (CCC) chart is effective in detecting a very low fraction of nonconforming items in high-yield manufacturing processes. In this study, a combination of runs rules and a variable sampling interval feature is proposed for a lower-sided CCC chart in which items are inspected one by one. The performance measures of the control chart are derived using the Markov chain approach. The numerical comparisons show that the performance of the CCC chart can be improved by adding the runs rules and varying the sampling interval.
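As background, the basic lower-sided CCC limit (without the runs rules or variable sampling interval studied in the paper) follows directly from the geometric distribution of the count of items inspected up to the first nonconforming one. The sketch below is a minimal illustration under that standard setup.

```python
import math

def ccc_lower_limit(p0: float, alpha: float) -> int:
    """Largest count n with P(N <= n | p0) <= alpha, where
    N ~ Geometric(p0) is the number of items inspected up to and
    including the first nonconforming one.  A count at or below
    this limit signals a possible increase in the fraction p0.
    """
    # P(N <= n) = 1 - (1 - p0)^n  =>  n <= ln(1 - alpha) / ln(1 - p0)
    return math.floor(math.log(1.0 - alpha) / math.log(1.0 - p0))

# e.g. in-control fraction nonconforming 0.05%, false-alarm rate 0.27%
print(ccc_lower_limit(p0=0.0005, alpha=0.0027))  # -> 5
```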

4.
Confidence intervals for location parameters are expanded (in either direction) to some “crucial” points and the resulting increase in the confidence coefficient is investigated. Particular crucial points are chosen to illuminate some hypothesis-testing problems. Special results are derived for the normal distribution with estimated variance and, in particular, for the problem of classifying treatments as better or worse than a control. For this problem the usual two-sided Dunnett procedure is seen to be inefficient. Suggestions are made for the use of already published tables for this problem. Mention is made of the use of expanded confidence intervals for all pairwise comparisons of treatments using an “honest ordering difference” rather than Tukey's “honest significant difference”.

5.
Teresa Ledwina, Statistics, 2013, 47(1): 105-118
We state some necessary and sufficient conditions for admissibility of tests of a simple and of a composite null hypothesis against “one-sided” alternatives for multivariate exponential distributions with discrete support.

Admissibility of the maximum likelihood test for “one-sided” alternatives and of the χ² test for the independence hypothesis in r × s contingency tables is deduced, among others.

6.
In regression analysis we are often interested in using an estimator which is “precise” and which simultaneously provides a model with “good fit”. In this paper we consider the risk properties of several estimators of the regression coefficient vector under “balanced” loss. This loss function (Zellner, 1994) reflects both of the described attributes. Under a particular form of balanced loss, we derive the predictive risk of the pre-test estimator which results after a test for exact linear restrictions on the coefficient vector. The corresponding risks of Stein-rule and positive-part Stein-rule estimators are also established. The risks based on loss functions which allow only for estimation precision, or only for goodness of fit, are special cases of our results, and we draw appropriate comparisons. In particular, we show that some of the well-known results under (quadratic) precision-only loss are not robust to our generalization of the loss function.

7.
If the unknown mean of a univariate population is sufficiently close to the value of an initial guess, then an appropriate shrinkage estimator has smaller average squared error than the sample mean. This principle has been known for some time, but it does not appear to have been extended to problems of interval estimation. The author presents valid two-sided 95% and 99% “shrinkage” confidence intervals for the mean of a normal distribution. These intervals are narrower than the usual interval based on the Student t distribution when the population mean lies in such an “effective interval.” A reduction of 20% in the mean width of the interval is possible when the population mean is sufficiently close to the value of the guess. The author also describes a modification to existing shrinkage point estimators of the general univariate mean that enables the effective interval to be enlarged.
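The paper's interval construction is not reproduced here, but a generic Thompson-type shrinkage point estimator conveys the idea of pulling the sample mean toward an initial guess. The sketch below is only an illustration of that general principle, not the author's procedure.

```python
import numpy as np

def shrink_to_guess(x, guess):
    """Shrink the sample mean toward a prior guess by a data-driven
    factor (Thompson-type form; a generic illustration only).
    The factor is near 0 when xbar is close to the guess, so the
    estimate then leans heavily on the guess.
    """
    x = np.asarray(x, dtype=float)
    xbar, n = x.mean(), x.size
    se2 = x.var(ddof=1) / n          # estimated variance of xbar
    d2 = (xbar - guess) ** 2
    c = d2 / (d2 + se2)              # shrinkage factor in [0, 1)
    return guess + c * (xbar - guess)

rng = np.random.default_rng(3)
print(shrink_to_guess(rng.normal(0.2, 1.0, 25), guess=0.0))
```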

8.
Consider a two-way factorial experiment involving a “treatment” factor A with fixed effects, a “blocking” factor B with random effects, and interaction effects that are perhaps non-negligible. The degree of balance required for multiple comparison procedures to be applicable to the comparison of the treatment effects using ordinary least-squares estimates is investigated. For main effects to be estimated independently of MSAB, a sufficient condition is that the design consist of identical blocks, a strong form of proportional frequencies. Surprisingly, under this condition of proportional frequencies, MSAB does not provide an appropriate variance estimate for inferences on each treatment contrast, even though the statistic F = MSA/MSAB is appropriate for testing equality of the treatment effects. In short, when factor B is random, standard methods of multiple comparisons apply using the interaction mean square MSAB as a variance estimator only when the treatment-block incidences are constant. Nevertheless, for designs with identical blocks, appropriate variance estimates can be identified to allow conservative or approximate multiple comparisons. This is illustrated for certain treatment-balanced designs for comparisons with a control.

9.
The purpose of this paper is to revisit the response surface technique of ridge analysis within the context of the “trust region” problem in numerical analysis. It is found that these two approaches inherently solve the same problem. We show that the computational difficulty termed the “hard case”, which originates in trust region methods, also exists in ridge analysis but has never been formally discussed in response surface methodology (RSM). The dual response global optimization algorithm (DRSALG) based on the trust region method is applied (with a certain modification) to solving the ridge analysis problem. Some numerical comparisons against a general-purpose nonlinear optimization algorithm are illustrated in terms of examples appearing in the literature.
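For readers unfamiliar with the connection: both ridge analysis and the trust-region subproblem seek stationary points of a fitted quadratic y = b0 + x'b + x'Bx on a sphere, characterized by (B − μI)x = −b/2. The sketch below traces a ridge path by varying the multiplier μ; the fitted model is hypothetical, and this bare-bones illustration is not the DRSALG algorithm. It also does not handle the “hard case”, which arises when b is orthogonal to the extreme eigenvector of B.

```python
import numpy as np

def ridge_path_point(b, B, mu):
    """Stationary point of y = b0 + x'b + x'Bx on a sphere for a given
    Lagrange multiplier mu: solve (B - mu*I) x = -b/2.  For a maximum
    on the sphere, mu must exceed the largest eigenvalue of B."""
    n = len(b)
    x = np.linalg.solve(B - mu * np.eye(n), -0.5 * b)
    return x, np.linalg.norm(x)  # point and its distance from the origin

# hypothetical fitted second-order model in two factors
b = np.array([1.0, -0.5])
B = np.array([[-2.0, 0.3],
              [ 0.3, -1.0]])
lam_max = np.linalg.eigvalsh(B)[-1]          # eigenvalues in ascending order
for mu in lam_max + np.array([0.1, 0.5, 2.0, 10.0]):
    x, r = ridge_path_point(b, B, mu)
    print(f"mu={mu:6.2f}  radius={r:.3f}  x={x}")
```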

10.
Designs for the first-order trigonometric regression model over an interval on the real line are considered for the situation where estimation of the slope of the response surface at various points in the factor space is of primary interest. Minimization of the variance of the estimated slope at a point, maximized over all points in the region of interest, is taken as the design criterion. Optimal designs under this minimax criterion are derived for the situation where the design region and the region of interest are identical and form a symmetric “partial cycle”. Some comparisons of the minimax designs with the traditional D- and A-optimal designs are provided. Efficiencies of some exact designs under the minimax criterion are also investigated.

11.
The main objective of the study is to compare four different procedures for testing the stability of regression coefficients. The comparisons are based on a numerical study and concern the procedures' ability to detect various simple forms of parameter instability. Besides the power comparisons, special interest is directed towards the choice of “window length” in the tests based on moving sums of squared recursive and ordinary least-squares residuals.

12.
This paper provides an extension of the “sequential order statistics” (SOS) introduced by Kamps. It is called “developed sequential order statistics” (DSOS) and is useful for describing lifetimes of engineering systems whose component lifetimes are dependent. Explicit expressions for the joint density function, the marginal distributions and the means of DSOS are derived. Under the well-known “conditional proportional hazard rate” (CPHR) model and the Gumbel families of copulas for the dependence among component lifetimes, some findings are reported. For example, it is proved that the joint density functions of DSOS and SOS have the same structure. Various illustrative examples are also given.

13.
We introduce a matrix operator, which we call the “vecd” operator. This operator stacks up the “diagonals” of a symmetric matrix and is more convenient for some statistical analyses than the commonly used “vech” operator. We show an explicit relationship between the vecd and vech operators. Using this relationship, various properties of the vecd operator are derived. As applications of the vecd operator, we derive concise and explicit expressions of the Wald and score tests for equal variances of a multivariate normal distribution and for the diagonality of variance coefficient matrices in a multivariate generalized autoregressive conditional heteroscedastic (GARCH) model, respectively.
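A minimal sketch of the two operators follows; the exact diagonal ordering used in the article may differ, so this assumes the main diagonal comes first, followed by successive superdiagonals.

```python
import numpy as np

def vech(A):
    """Stack the columns of the lower triangle (including the diagonal)."""
    n = A.shape[0]
    return np.concatenate([A[j:, j] for j in range(n)])

def vecd(A):
    """Stack the diagonals of a symmetric matrix: main diagonal first,
    then the 1st superdiagonal, ..., up to the (n-1)th (assumed ordering)."""
    n = A.shape[0]
    return np.concatenate([np.diag(A, k) for k in range(n)])

A = np.array([[1., 2., 3.],
              [2., 4., 5.],
              [3., 5., 6.]])
print(vech(A))  # [1. 2. 3. 4. 5. 6.]
print(vecd(A))  # [1. 4. 6. 2. 5. 3.]
```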

14.
In this paper a derivation of Akaike's Information Criterion (AIC) is presented for selecting the number of bins of a histogram given only the data, showing that AIC strikes a balance between the “bias” and “variance” of the histogram estimate. Consistency of the criterion is discussed, an asymptotically optimal histogram bin width for the criterion is derived, and its relationship to penalized likelihood methods is shown. A formula relating the optimal number of bins for a sample and a sub-sample obtained from it is derived. A number of numerical examples are presented.
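As a rough illustration of the idea (not the paper's exact derivation), AIC-based bin selection can be implemented by maximizing the histogram log-likelihood minus the number of free cell probabilities; the sketch below assumes equal-width bins.

```python
import numpy as np

def aic_histogram_bins(x, max_bins=50):
    """Choose the number of equal-width histogram bins by AIC:
    maximize the log-likelihood of the histogram density estimate
    minus (m - 1), the number of free cell probabilities."""
    x = np.asarray(x, dtype=float)
    n = x.size
    best_m, best_score = 1, -np.inf
    for m in range(1, max_bins + 1):
        counts, edges = np.histogram(x, bins=m)
        h = edges[1] - edges[0]            # common bin width
        nz = counts[counts > 0]            # empty bins contribute nothing
        loglik = np.sum(nz * np.log(nz / (n * h)))
        score = loglik - (m - 1)           # equivalent to minimizing AIC
        if score > best_score:
            best_m, best_score = m, score
    return best_m

rng = np.random.default_rng(0)
print(aic_histogram_bins(rng.normal(size=500)))
```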

15.
Recent work, spearheaded by Charles Dunnett (1980a), leads to the conclusion that the Tukey-Kramer (TK) method (popularly known as “Kramer's method”) is the recommended multiple comparisons procedure for the simultaneous estimation of all pairwise differences of means in an unbalanced one-way ANOVA design with homogeneous variances. Nine other multiple comparisons methods are compared to each other and to the TK method using the criteria of conservativeness, narrowness of confidence intervals, robustness, and ease of use. The degree of superiority of the TK method over these methods, especially over the popular Bonferroni method, is sufficient to warrant recommending its use. Because of the lack of robustness of the TK method in heterogeneous variance cases, other methods are recommended there.
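For reference, the Tukey-Kramer intervals for unequal sample sizes take the form ȳi − ȳj ± (q/√2)·√(MSE·(1/ni + 1/nj)), with q the upper-α studentized range quantile. A sketch using SciPy's studentized_range distribution (available in SciPy ≥ 1.7):

```python
import numpy as np
from itertools import combinations
from scipy.stats import studentized_range

def tukey_kramer(groups, alpha=0.05):
    """Tukey-Kramer simultaneous CIs for all pairwise mean differences
    in a one-way layout with possibly unequal sample sizes."""
    k = len(groups)
    ns = [len(g) for g in groups]
    means = [np.mean(g) for g in groups]
    df = sum(ns) - k
    mse = sum(np.sum((np.asarray(g) - m) ** 2)      # pooled within-group
              for g, m in zip(groups, means)) / df  # mean square
    q = studentized_range.ppf(1 - alpha, k, df)
    for i, j in combinations(range(k), 2):
        half = (q / np.sqrt(2)) * np.sqrt(mse * (1 / ns[i] + 1 / ns[j]))
        print(f"mu{i+1}-mu{j+1}: {means[i] - means[j]:+.3f} +/- {half:.3f}")

rng = np.random.default_rng(1)
tukey_kramer([rng.normal(0.0, 1, 8),
              rng.normal(0.5, 1, 12),
              rng.normal(1.0, 1, 10)])
```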

16.
This article presents the results of a simulation study investigating the performance of an approach developed by Miller and Landis (1991) for the analysis of clustered categorical responses. Evaluation of this “two-step” approach, which utilizes the method of moments to estimate the extra-variation parameters and subsequently incorporates these parameters into estimating equations for modelling the marginal expectations, is carried out in an experimental setting involving a comparison between two groups of observations. We assume that data for both groups are collected from each cluster and responses are measured on a three-point ordinal scale. The performance of the estimators used in both “steps” of the analysis is investigated and comparisons are made to an alternative analysis method that ignores the clustering. The results indicate that in the chosen setting the test for a difference between groups generally operates at the nominal α = 0.05 level for 10 or more clusters and has increasing power with both an increasing number of clusters and an increasing treatment effect. These results provide a striking contrast to those obtained from an improper analysis that ignores clustering.

17.
In most practical situations to which analysis of variance tests are applied, they do not supply the information that the experimenter aims at. If, for example, in one-way ANOVA the hypothesis is rejected in an actual application of the F-test, the resulting conclusion that the true means θ1, …, θk are not all equal would by itself usually be insufficient to satisfy the experimenter. In fact, his problems would begin at this stage. The experimenter may desire to select the “best” population or a subset of the “good” populations; he may wish to rank the populations in order of “goodness”; or he may wish to draw some other inferences about the parameters of interest.

The extensive literature on selection and ranking procedures depends heavily on the assumption of independence between populations (blocks, treatments, etc.) in the analysis of variance. In practical applications, it is desirable to drop this assumption of independence and to consider cases more general than the normal.

In the present paper, we derive a method for constructing optimal (in some sense) selection procedures that select a nonempty subset of the k populations containing the best population as ranked in terms of the θi's, that control the size of the selected subset, and that maximize the minimum average probability of selecting the best. We also consider the usual selection procedures in one-way ANOVA based on the generalized least-squares estimates and apply the method to the two-way layout case. Some examples are discussed and some results on comparisons with other procedures are also obtained.

18.
In many engineering problems it is necessary to draw statistical inferences on the mean of a lognormal distribution based on a complete sample of observations. Statistical demonstration of mean time to repair (MTTR) is one example. Although optimum confidence intervals and hypothesis tests for the lognormal mean have been developed, they are difficult to use, requiring extensive tables and/or a computer. In this paper, simplified conservative methods for calculating confidence intervals or hypothesis tests for the lognormal mean are presented. Here, “conservative” refers to confidence intervals (hypothesis tests) whose infimum coverage probability (supremum probability of rejecting the null hypothesis, taken over parameter values under the null hypothesis) equals the nominal level. The term “conservative” has obvious implications for confidence intervals (they are “wider” in some sense than their optimum or exact counterparts). Applying the term “conservative” to hypothesis tests should not be confusing if it is remembered that their equivalent confidence intervals are conservative. No claim of optimality is made for these conservative procedures. It is emphasized that these are direct statistical inference methods for the lognormal mean, as opposed to the already well-known methods for the parameters of the underlying normal distribution. The method currently employed in MIL-STD-471A for statistical demonstration of MTTR is analyzed and compared to the new method in terms of asymptotic relative efficiency. The new methods are also compared to the optimum methods derived by Land (1971, 1973).
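The paper's conservative procedures are not reproduced here, but the widely used Cox approximation for the lognormal mean shows the kind of direct inference involved: work on the log scale with θ = μ + σ²/2 and exponentiate the resulting interval. A minimal sketch:

```python
import numpy as np
from scipy.stats import norm

def lognormal_mean_ci(x, alpha=0.05):
    """Approximate two-sided CI for the lognormal mean via Cox's method
    (a standard approximation, not the conservative procedure of the
    paper).  The lognormal mean is exp(mu + sigma^2 / 2)."""
    y = np.log(np.asarray(x, dtype=float))
    n = y.size
    ybar, s2 = y.mean(), y.var(ddof=1)
    theta = ybar + s2 / 2                         # log of the lognormal mean
    se = np.sqrt(s2 / n + s2**2 / (2 * (n - 1)))  # approx. std. error of theta
    z = norm.ppf(1 - alpha / 2)
    return np.exp(theta - z * se), np.exp(theta + z * se)

rng = np.random.default_rng(2)
print(lognormal_mean_ci(rng.lognormal(mean=1.0, sigma=0.8, size=40)))
```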

19.
20.
Nonparametric families of aging distributions have been the subject of investigation for more than three decades. Both probabilistic and statistical properties of these distributions have been studied for such families as “increasing failure rate”, “new better than used”, “new better than used in expectation”, and “harmonic new better than used in expectation”. In the present work, moment inequalities are derived for the above-mentioned four families, demonstrating that if the mean life is finite for any of them then all higher-order moments exist. Next, based on these inequalities, new testing procedures for exponentiality against any one of the above classes are introduced and studied, showing that they are simpler than most earlier ones and have high relative efficiency for some commonly used alternatives.
