Similar Articles (20 records found)
1.
The goal of achieving high quality products has led to an emphasis on reducing variation in performance characteristics. It may often happen that one of the product's components is responsible for much of the observed variation. This research is stimulated by the problem of detecting a component that impairs quality by systematically inflating the variance in a product that is assembled from “interchangeable components.” We consider the class of “disassembly-reassembly” experiments, in which components are swapped among assemblies. The specific units used in the experiment are sampled from a large population of units, so it is natural to measure the influence of each factor by its variance component. We present the model for these experiments as a special case of the mixed linear model, compare several estimators for the variance components and consider the problem of testing hypotheses to identify troublesome components.
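For a balanced one-way random-effects layout, the classical ANOVA (method-of-moments) estimators illustrate what "estimating a variance component" means concretely. This is a minimal sketch under a balanced-design assumption; the function name and the truncation-at-zero convention are illustrative choices, not the estimators the abstract compares:

```python
import numpy as np

def anova_variance_components(data):
    """Method-of-moments estimators for a balanced one-way
    random-effects model y_ij = mu + a_i + e_ij.
    `data` is a (groups x replicates) array; returns
    (sigma2_between, sigma2_within)."""
    a, n = data.shape
    group_means = data.mean(axis=1)
    grand_mean = data.mean()
    # mean square between groups and mean square within groups
    msb = n * ((group_means - grand_mean) ** 2).sum() / (a - 1)
    msw = ((data - group_means[:, None]) ** 2).sum() / (a * (n - 1))
    sigma2_within = msw
    # moment estimator can be negative; truncating at zero is a common convention
    sigma2_between = max((msb - msw) / n, 0.0)
    return sigma2_between, sigma2_within

# simulated assemblies: 6 components, 8 measurements each,
# between-component sd 2, within sd 1 (illustrative data)
rng = np.random.default_rng(0)
a_i = rng.normal(0.0, 2.0, size=(6, 1))
y = 10 + a_i + rng.normal(0.0, 1.0, size=(6, 8))
print(anova_variance_components(y))
```

A large estimated between-component variance relative to the within variance is the kind of signal such a disassembly-reassembly experiment looks for.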

2.
A new approach to inference on variance components is propounded. This approach not only gives a new justification for Fisher's fiducial distribution for the “between classes” component, but also leads to a new distribution for the “within classes” component. This latter distribution is studied, and has some intuitively very reasonable properties. Numerical results are given.

3.
Bayesian hierarchical models typically involve specifying prior distributions for one or more variance components. This is rather removed from the observed data, so specification based on expert knowledge can be difficult. While there are suggestions for “default” priors in the literature, often a conditionally conjugate inverse‐gamma specification is used, despite documented drawbacks of this choice. The authors suggest “conservative” prior distributions for variance components, which deliberately give more weight to smaller values. These are appropriate for investigators who are skeptical about the presence of variability in the second‐stage parameters (random effects) and want to particularly guard against inferring more structure than is really present. The suggested priors readily adapt to various hierarchical modelling settings, such as fitting smooth curves, modelling spatial variation and combining data from multiple sites.

4.
The fiducial approach to the two components of variance random effects model developed by Venables and James (1978) is related to the Bayesian approach of Box and Tiao (1973). The operating characteristics, under repeated sampling, of the resulting interval estimators for the “within classes” variance component are investigated, and the behaviour of the two sets of intervals is found to be very similar, the coverage frequency of 95% probability intervals being approximately 91% when the “between classes” variance component is zero but rising rapidly to 95% as the between component increases. The probability intervals are shown to be shorter on average than a comparable confidence interval based upon the within classes sum of squares, and to be robust against nonnormality in the class means.

5.
Burk et al. (1984) gave a result concerning the comparison of the lengths of two different confidence intervals for the variance ratio, when the construction of the intervals was based on the principle of “equal tails”. The purpose of this paper is to solve the analogous problem in the case of the principle of “minimal length”.

6.
In the context of accident theory, the bivariate generalized Waring distribution (Xekalaki, 1984) is known to offer the possibility of obtaining distinguishable estimates of the “contribution” of chance, risk exposure and proneness to an accident situation. In this paper an estimation procedure based on the first and second order factorial moments is discussed for fitting the distribution to data. Expressions for the asymptotic standard errors of the estimators of the distribution parameters as well as of the resulting estimators of the variance components that represent the roles of the above mentioned factors are given.

7.
We re-examine the criteria of “hyper-admissibility” and “necessary bestness” for the choice of estimator, from the point of view of their relevance to the design of actual surveys. Both these criteria give rise to a unique choice of estimator (viz. the Horvitz-Thompson estimator) whatever the character under investigation or sample design. However, we show here that the “principal hyper-surfaces” (or “domains”) of dimension one (which are practically uninteresting) play the key role in arriving at the unique choice. A variance estimator v1, due to Horvitz and Thompson, which takes negative values “often”, is shown to be uniquely “hyper-admissible” in a wide class of unbiased estimators of the variance of the Horvitz-Thompson estimator. Extensive empirical evidence on the superiority of the Sen-Yates-Grundy variance estimator v2 over v1 is presented.

8.
The problem of choosing optimal levels of the acceleration variable for accelerated testing is an important issue in reliability analysis. Most recommendations have focused on minimizing the variance of an estimator of a particular characteristic, such as a percentile, for a specific parametric model. In this paper, a general approach based on “locally penalized” D-optimality (LPD-optimality) is proposed, which simultaneously minimizes the variances of the model parameter estimators. Application of the method is illustrated for inverse-Gaussian accelerated test models fitted to carbon fiber tensile strength data, where the fiber length is the “acceleration variable”.

9.
The subject of this paper is Bayesian inference about the fixed and random effects of a mixed-effects linear statistical model with two variance components. It is assumed that a priori the fixed effects have a noninformative distribution and that the reciprocals of the variance components are distributed independently (of each other and of the fixed effects) as gamma random variables. It is shown that techniques similar to those employed in a ridge analysis of a response surface can be used to construct a one-dimensional curve that contains all of the stationary points of the posterior density of the random effects. The “ridge analysis” (of the posterior density) can be useful (from a computational standpoint) in finding the number and the locations of the stationary points and can be very informative about various features of the posterior density. Depending on what is revealed by the ridge analysis, a multivariate normal or multivariate-t distribution that is centered at a posterior mode may provide a satisfactory approximation to the posterior distribution of the random effects (which is of the poly-t form).

10.
Nearest Neighbor Adjusted Best Linear Unbiased Prediction
Statistical inference for linear models has classically focused on either estimation or hypothesis testing of linear combinations of fixed effects or of variance components for random effects. A third form of inference—prediction of linear combinations of fixed and random effects—has important advantages over conventional estimators in many applications. None of these approaches will result in accurate inference if the data contain strong, unaccounted for local gradients, such as spatial trends in field-plot data. Nearest neighbor methods to adjust for such trends have been widely discussed in recent literature. So far, however, these methods have been developed exclusively for classical estimation and hypothesis testing. In this article a method of obtaining nearest neighbor adjusted (NNA) predictors, along the lines of “best linear unbiased prediction,” or BLUP, is developed. A simulation study comparing “NNABLUP” to conventional NNA methods and to non-NNA alternatives suggests considerable potential for improved efficiency.

11.
In many areas of application mixed linear models serve as a popular tool for analyzing highly complex data sets. For inference about fixed effects and variance components, likelihood-based methods such as (restricted) maximum likelihood estimators, (RE)ML, are commonly pursued. However, it is well-known that these fully efficient estimators are extremely sensitive to small deviations from hypothesized normality of random components as well as to other violations of distributional assumptions. In this article, we propose a new class of robust-efficient estimators for inference in mixed linear models. The new three-step estimation procedure provides truncated generalized least squares and variance components' estimators with hard-rejection weights adaptively computed from the data. More specifically, our data re-weighting mechanism first detects and removes within-subject outliers, then identifies and discards between-subject outliers, and finally it employs maximum likelihood procedures on the “clean” data. Theoretical efficiency and robustness properties of this approach are established.

12.
Several authors have discussed Kalman filtering procedures using a mixture of normals as a model for the distributions of the noise in the observation and/or the state space equations. Under this model, resulting posteriors involve a mixture of normal distributions, and a “collapsing method” must be found in order to keep the recursive procedure simple. We prove that the Kullback-Leibler distance between the mixture posterior and that of a single normal distribution is minimized when we choose the mean and variance of the single normal distribution to be the mean and variance of the mixture posterior. Hence, “collapsing by moments” is optimal in this sense. We then develop the resulting optimal algorithm for “Kalman filtering” for this situation, and illustrate its performance with an example.
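The "collapsing by moments" step described above reduces to matching the mixture's overall mean and variance. A minimal sketch for a univariate Gaussian mixture (the function name is illustrative; this is only the collapsing step, not the full filtering algorithm):

```python
import numpy as np

def collapse_mixture(weights, means, variances):
    """Collapse a Gaussian mixture to the single normal minimizing the
    Kullback-Leibler distance from the mixture: match the mixture's
    mean and variance (moment matching)."""
    w = np.asarray(weights, dtype=float)
    m = np.asarray(means, dtype=float)
    v = np.asarray(variances, dtype=float)
    mean = np.sum(w * m)
    # law of total variance: E[component variance] + Var[component mean]
    var = np.sum(w * (v + m ** 2)) - mean ** 2
    return mean, var

# equal-weight mixture of N(-1, 1) and N(1, 1) collapses to N(0, 2)
print(collapse_mixture([0.5, 0.5], [-1.0, 1.0], [1.0, 1.0]))
```

Within a mixture Kalman filter, applying this after each update keeps the number of posterior components from growing exponentially.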

13.
A multivariate “errors in variables” regression model is proposed which generalizes a model previously considered by Gleser and Watson (1973). Maximum likelihood estimators (MLEs) for the parameters of this model are obtained, and the consistency properties of these estimators are investigated. The distribution of the MLE of the “error” variance is obtained in a simple case, while the mean and the variance of the estimator are obtained in this case without appealing to the exact distribution.

14.
There are available several point estimators of the percentiles of a normal distribution with both mean and variance unknown. Consequently, it would seem appropriate to make a comparison among the estimators through some “closeness to the true value” criteria. Along these lines, the concept of Pitman-closeness efficiency is introduced. Essentially, when comparing two estimators, the Pitman-closeness efficiency gives the “odds” in favor of one of the estimators being closer to the true value than is the other in a given situation. Through the use of Pitman-closeness efficiency, this paper compares (a) the maximum likelihood estimator, (b) the minimum variance unbiased estimator, (c) the best invariant estimator, and (d) the median unbiased estimator within a class of estimators which includes (a), (b), and (c). Mean squared efficiency is also discussed.
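The paper works with such comparisons analytically, but the underlying probability is easy to approximate by simulation. A hedged Monte Carlo sketch (all names and the mean-vs-median example are illustrative, not the percentile estimators the paper compares):

```python
import numpy as np

def pitman_closeness(est1, est2, true_value, sampler, n_rep=2000, seed=1):
    """Monte Carlo estimate of P(|est1 - theta| < |est2 - theta|):
    the probability that estimator 1 lands closer to the true value,
    from which Pitman-closeness 'odds' can be formed."""
    rng = np.random.default_rng(seed)
    wins = 0
    for _ in range(n_rep):
        x = sampler(rng)
        wins += abs(est1(x) - true_value) < abs(est2(x) - true_value)
    return wins / n_rep

# example: sample mean vs sample median as estimators of a normal mean
p = pitman_closeness(np.mean, np.median, 0.0,
                     lambda rng: rng.normal(0.0, 1.0, size=25))
print(p)  # fraction of samples where the mean was closer
```

A value of p above 0.5 means estimator 1 is Pitman-closer; the corresponding "odds" are p / (1 - p).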

15.
In this paper a derivation of Akaike's Information Criterion (AIC) is presented to select the number of bins of a histogram given only the data, showing that AIC strikes a balance between the “bias” and “variance” of the histogram estimate. Consistency of the criterion is discussed, an asymptotically optimal histogram bin width for the criterion is derived and its relationship to penalized likelihood methods is shown. A formula relating the optimal number of bins for a sample and a sub-sample obtained from it is derived. A number of numerical examples are presented.
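An AIC-style bin-count selector can be sketched as a penalized histogram log-likelihood. This is a simplified illustration of the idea, not the paper's exact derivation; the penalty (number of free bin probabilities) and the equal-width binning are assumptions made here:

```python
import numpy as np

def aic_histogram_bins(x, max_bins=50):
    """Choose the number of equal-width histogram bins by maximizing
    the histogram log-likelihood minus an AIC-style penalty of m - 1
    free parameters (the bin probabilities)."""
    x = np.asarray(x, dtype=float)
    n = x.size
    lo, hi = x.min(), x.max()
    best_m, best_score = 1, -np.inf
    for m in range(1, max_bins + 1):
        counts, _ = np.histogram(x, bins=m, range=(lo, hi))
        h = (hi - lo) / m                      # bin width
        nz = counts[counts > 0]
        # log-likelihood of the histogram density estimate n_j / (n h)
        loglik = np.sum(nz * np.log(nz / (n * h)))
        score = loglik - (m - 1)               # AIC penalty
        if score > best_score:
            best_m, best_score = m, score
    return best_m

# a clearly bimodal sample should call for more than one bin
rng = np.random.default_rng(1)
x = np.concatenate([rng.normal(-5, 1, 500), rng.normal(5, 1, 500)])
m = aic_histogram_bins(x)
```

More bins always raise the log-likelihood term (less "bias"), while the penalty grows with m (more "variance"), which is exactly the trade-off the abstract describes.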

16.
This paper is concerned with methods of reducing variability and computer time in a simulation study. The Monte Carlo swindle, through mathematical manipulations, has been shown to yield more precise estimates than the “naive” approach. In this study computer time is considered in conjunction with the variance estimates. It is shown that by this measure the naive method is often a viable alternative to the swindle. This study concentrates on the problem of estimating the variance of an estimator of location. The advantage of one technique over another depends upon the location estimator, the sample size, and the underlying distribution. For a fixed number of samples, while the naive method gives a less precise estimate than the swindle, it requires fewer computations. In addition, for certain location estimators and distributions, the naive method is able to take advantage of certain shortcuts in the generation of each sample. The small amount of time required by this “enlightened” naive method often more than compensates for its relative lack of precision.  
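The "naive" baseline the abstract refers to is simply an empirical variance over repeated simulated samples. A sketch of that baseline only (the swindle's variance-reduction manipulations are not reproduced here; names are illustrative):

```python
import numpy as np

def naive_mc_variance(estimator, sampler, n_rep=5000, seed=0):
    """Naive Monte Carlo estimate of the variance of a location
    estimator: draw n_rep independent samples, apply the estimator
    to each, and take the empirical variance of the results."""
    rng = np.random.default_rng(seed)
    values = np.array([estimator(sampler(rng)) for _ in range(n_rep)])
    return values.var(ddof=1)

# example: variance of the sample median for n = 20 standard-normal data
v = naive_mc_variance(np.median, lambda rng: rng.normal(size=20))
print(v)
```

A swindle would replace part of this computation with exact analytical terms, trading extra algebra per sample for a smaller Monte Carlo error, which is the time-versus-precision comparison the paper studies.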

17.
Selection of the “best” t out of k populations has been considered in the indifference zone formulation by Bechhofer (1954) and in the subset selection formulation by Carroll, Gupta and Huang (1975). The latter approach is used here to obtain conservative solutions for the goals of selecting (i) all the “good” or (ii) only “good” populations, where “good” means having a location parameter among the t largest. For the case of normal distributions, with common unknown variance, tables are produced for implementing these procedures. Also, for this case, simulation results suggest that the procedure may not be too conservative.

18.
We propose replacing the usual Student's-t statistic, which tests for equality of means of two distributions and is used to construct a confidence interval for the difference, by a biweight-“t” statistic. The biweight-“t” is a ratio of the difference of the biweight estimates of location from the two samples to an estimate of the standard error of this difference. Three forms of the denominator are evaluated: weighted variance estimates using both pooled and unpooled scale estimates, and unweighted variance estimates using an unpooled scale estimate. Monte Carlo simulations reveal that resulting confidence intervals are highly efficient on moderate sample sizes, and that nominal levels are nearly attained, even when considering extreme percentage points.
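The numerator of such a statistic rests on the biweight (bisquare) location estimate. A generic sketch of that estimate with MAD scaling; the tuning constant c = 6 and the iteration scheme are common illustrative choices, not necessarily the paper's specification, and the three denominator forms are not reproduced here:

```python
import numpy as np

def biweight_location(x, c=6.0, tol=1e-8, max_iter=100):
    """Tukey biweight M-estimate of location, iterated from the median
    with MAD scaling; observations beyond c * MAD get zero weight."""
    x = np.asarray(x, dtype=float)
    t = np.median(x)
    for _ in range(max_iter):
        mad = np.median(np.abs(x - t)) or 1e-12   # guard against zero MAD
        u = (x - t) / (c * mad)
        w = np.where(np.abs(u) < 1, (1 - u ** 2) ** 2, 0.0)
        t_new = np.sum(w * x) / np.sum(w)
        if abs(t_new - t) < tol:
            return t_new
        t = t_new
    return t

# the gross outlier at 100 receives zero weight and cannot drag the estimate
t = biweight_location(np.array([0.0, 1.0, 2.0, 3.0, 4.0, 100.0]))
print(t)
```

The resistance to outliers shown here is what makes a biweight-"t" interval behave well on heavy-tailed data where the classical Student's-t interval does not.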

19.
This paper uses random scales similar to random effects used in the generalized linear mixed models to describe “inter-location” population variation in variance components for modeling complicated data obtained from applications such as antenna manufacturing. Our distribution studies lead to a complicated integrated extended quasi-likelihood (IEQL) for parameter estimations and large sample inference derivations. Laplace's expansion and several approximation methods are employed to simplify the IEQL estimation procedures. Asymptotic properties of the approximate IEQL estimates are derived for general structures of the covariance matrix of random scales. Focusing on a few special covariance structures in simpler forms, the authors further simplify IEQL estimates such that typically used software tools such as weighted regression can compute the estimates easily. Moreover, these special cases allow us to derive interesting asymptotic results in much more compact expressions. Finally, numerical simulation results show that IEQL estimates perform very well in several special cases studied.

20.
“Step down” or “sequentially rejective” procedures for comparisons with a control are considered for both one-sided and two-sided comparisons. Confidence bounds (in terms of the control) are derived for those (location) parameters not in a selected set. Special results are derived for the normal distribution with unknown variance where the sample sizes are (possibly) unequal.


Copyright©北京勤云科技发展有限公司  京ICP备09084417号