Similar Articles
20 similar articles found (search time: 78 ms).
1.
Communications in Statistics - Theory and Methods (2012), 41(13-14): 2405-2418
In this article, we consider two linear models, ℳ₁ = {y, Xβ, V₁} and ℳ₂ = {y, Xβ, V₂}, which differ only in their covariance matrices. Our main focus lies on the difference of the best linear unbiased estimators, BLUEs, of Xβ under these models. The corresponding problems between the models {y, Xβ, I_n} and {y, Xβ, V}, i.e., between the OLSE (ordinary least squares estimator) and the BLUE, are pretty well studied. Our purpose is to review the corresponding considerations between the BLUEs of Xβ under ℳ₁ and ℳ₂. This article is expository but also presents new results.
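As a hedged illustration of the comparison above (not code from the article), the sketch below computes the BLUE of Xβ under two covariance matrices, assuming X has full column rank and V₁, V₂ are positive definite; the data and matrices are made up.

```python
# Comparing the BLUEs of X beta under two covariance matrices V1 and V2.
import numpy as np

rng = np.random.default_rng(0)
n, p = 20, 3
X = rng.standard_normal((n, p))
beta = np.array([1.0, -2.0, 0.5])

def blue_of_Xb(X, V, y):
    """BLUE of X beta in {y, X beta, V}: X (X' V^-1 X)^-1 X' V^-1 y."""
    Vinv = np.linalg.inv(V)
    return X @ np.linalg.solve(X.T @ Vinv @ X, X.T @ Vinv @ y)

V1 = np.eye(n)                       # i.i.d. errors: BLUE reduces to OLSE
A = rng.standard_normal((n, n))
V2 = A @ A.T + n * np.eye(n)         # an arbitrary positive definite matrix
y = X @ beta + rng.standard_normal(n)

mu1 = blue_of_Xb(X, V1, y)
mu2 = blue_of_Xb(X, V2, y)
print("max |BLUE_1 - BLUE_2| =", np.max(np.abs(mu1 - mu2)))
```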

2.
Consider the linear regression model y = β₀1 + Xβ + ε in the usual notation. It is argued that the class of ordinary ridge estimators obtained by shrinking the least squares estimator by the matrix (X′X + kI)⁻¹X′X is sensitive to outliers in the y-variable. To overcome this problem, we propose a new class of ridge-type M-estimators, obtained by shrinking an M-estimator (instead of the least squares estimator) by the same matrix. Since the optimal value of the ridge parameter k is unknown, we suggest a procedure for choosing it adaptively. In a reasonably large-scale simulation study with a particular M-estimator, we found that if the conditions are such that the M-estimator is more efficient than the least squares estimator, then the corresponding ridge-type M-estimator proposed here is better, in terms of a mean squared error criterion, than the ordinary ridge estimator with k chosen suitably. An example illustrates that the estimators proposed here are less sensitive to outliers in the y-variable than ordinary ridge estimators.
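A hedged sketch of the construction just described (not the authors' code): a Huber M-estimate computed by iteratively reweighted least squares, then shrunk by (X′X + kI)⁻¹X′X. The IRLS loop, the Huber constant c, and the fixed k are illustrative choices; the article chooses k adaptively.

```python
import numpy as np

def huber_m_estimate(X, y, c=1.345, n_iter=50):
    """Huber M-estimator of beta via iteratively reweighted least squares."""
    beta = np.linalg.lstsq(X, y, rcond=None)[0]       # start from OLS
    for _ in range(n_iter):
        r = y - X @ beta
        s = np.median(np.abs(r)) / 0.6745 + 1e-12     # robust scale (MAD)
        u = r / s
        w = np.where(np.abs(u) <= c, 1.0, c / np.abs(u))  # Huber weights
        beta = np.linalg.solve(X.T @ (w[:, None] * X), X.T @ (w * y))
    return beta

def ridge_type_m(X, y, k):
    """Shrink the M-estimator by the ridge matrix (X'X + kI)^{-1} X'X."""
    p = X.shape[1]
    bm = huber_m_estimate(X, y)
    XtX = X.T @ X
    return np.linalg.solve(XtX + k * np.eye(p), XtX @ bm)

rng = np.random.default_rng(1)
X = rng.standard_normal((50, 4))
y = X @ np.array([1.0, 0.5, -1.0, 2.0]) + rng.standard_normal(50)
y[0] += 15.0                                          # an outlier in y
print(ridge_type_m(X, y, k=0.5))
```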

3.
Among criteria for the least squares estimator in a linear model (y, Xβ, V) to be simultaneously the best linear unbiased estimator, one convenient for applications is that of Anderson (1971, 1972). His result, however, has been developed under assumptions of full column rank for X and nonsingularity for V. Subsequently, this result has been extended by Styan (1973) to the case when the restriction on X is removed. In this note, it is shown that the restriction on V can also be relaxed and, consequently, that Anderson's criterion is applicable to the general linear model without any rank assumptions at all.

4.
In this note we consider the equality of the ordinary least squares estimator (OLSE) and the best linear unbiased estimator (BLUE) of an estimable parametric function in the general Gauss–Markov model. In particular, we consider the structures of the covariance matrix V for which the OLSE equals the BLUE. Our results are based on the properties of a particular reparametrized version of the original Gauss–Markov model.
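A small numerical check of the OLSE = BLUE question (an illustration, not the note's method): one classical Zyskind-type condition is that PV is symmetric, where P is the orthogonal projector onto the column space of X. The intraclass-correlation example below satisfies it because the intercept is in C(X).

```python
import numpy as np

def olse_equals_blue(X, V, tol=1e-10):
    """True if the OLSE of X beta is the BLUE in {y, X beta, V}."""
    P = X @ np.linalg.pinv(X)        # orthogonal projector onto C(X)
    PV = P @ V
    return np.max(np.abs(PV - PV.T)) < tol

n = 6
X = np.column_stack([np.ones(n), np.arange(n, dtype=float)])
V_iid = np.eye(n)                                 # OLSE trivially BLUE
V_int = np.eye(n) + 0.5 * np.ones((n, n))         # intraclass correlation
print(olse_equals_blue(X, V_iid))    # True
print(olse_equals_blue(X, V_int))    # True: V X stays inside C(X) here
```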

5.
Several estimators of Xβ under the general Gauss–Markov model are considered. Particular attention is paid to those estimators whose efficiency lies between that of the ordinary least squares estimator and that of the best linear unbiased estimator.

6.
We consider the Gauss–Markoff model (Y, X₀β, σ²V) and provide solutions to the following problem: what is the class of all models (Y, Xβ, σ²V) such that a specific linear representation/some linear representation/every linear representation of the BLUE of every estimable parametric functional p′β under (Y, X₀β, σ²V) is (a) an unbiased estimator, (b) a BLUE, (c) a linear minimum bias estimator, and (d) the best linear minimum bias estimator of p′β under (Y, Xβ, σ²V)? We also analyse the above problems when attention is restricted to a subclass of estimable parametric functionals.

7.
Two often-quoted necessary and sufficient conditions for ordinary least squares estimators to be best linear unbiased estimators are described. Another necessary and sufficient condition is described, providing an additional tool for checking whether the covariance matrix of a given linear model is such that the ordinary least squares estimator is also the best linear unbiased estimator. The new condition is used to show that one of the two published conditions is only a sufficient condition.

8.
The equality of the ordinary least squares estimator (OLSE), the best linear unbiased estimator (BLUE), and the best linear unbiased predictor (BLUP) in the general linear model with new observations is investigated through the matrix rank method, and some new necessary and sufficient conditions are given.

9.
10.
We consider the estimation of the parameters in two partitioned linear models, denoted by 𝒜 = {y, X₁β₁ + X₂β₂, V_𝒜} and ℬ = {y, X₁β₁ + X₂β₂, V_ℬ}, which we call full models. Correspondingly, we define the submodels 𝒜₁ = {y, X₁β₁, V_𝒜} and ℬ₁ = {y, X₁β₁, V_ℬ}. Using the so-called Pandora's Box approach introduced by Rao (1971), we give new necessary and sufficient conditions for the equality between the best linear unbiased estimators (BLUEs) of X₁β₁ under 𝒜₁ and ℬ₁ as well as under 𝒜 and ℬ. In our considerations we utilise the Frisch–Waugh–Lovell theorem, which provides a connection between the full model 𝒜 and the reduced model 𝒜_r = {M₂y, M₂X₁β₁, M₂V_𝒜M₂}, with M₂ being an appropriate orthogonal projector. Moreover, we consider the equality of the BLUEs under the full models assuming that they are equal under the submodels.
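An illustrative check of the Frisch–Waugh–Lovell theorem mentioned above (not from the article): regressing M₂y on M₂X₁, with M₂ = I − X₂(X₂′X₂)⁻X₂′, recovers the β₁ coefficients of the full OLS regression on [X₁, X₂].

```python
import numpy as np

rng = np.random.default_rng(2)
n, p1, p2 = 40, 2, 3
X1 = rng.standard_normal((n, p1))
X2 = rng.standard_normal((n, p2))
y = X1 @ np.array([1.0, -1.0]) + X2 @ np.array([0.5, 2.0, -0.5]) \
    + rng.standard_normal(n)

# Full regression on [X1, X2]
b_full = np.linalg.lstsq(np.hstack([X1, X2]), y, rcond=None)[0]

# Reduced (FWL) regression: project out X2 first
M2 = np.eye(n) - X2 @ np.linalg.pinv(X2)
b_fwl = np.linalg.lstsq(M2 @ X1, M2 @ y, rcond=None)[0]

print(np.allclose(b_full[:p1], b_fwl))   # True
```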

11.
Regression models are usually used in forecasting (predicting) unknown values of the response variable y. This article considers the predictive performance of the almost unbiased Liu estimator compared to the ordinary least squares estimator, the principal component regression estimator, and the Liu estimator. Finally, we present a numerical example to illustrate the theoretical results, and we obtain a region where the almost unbiased Liu estimator is uniformly superior to the ordinary least squares estimator, the principal component regression estimator, and the Liu estimator.
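For reference, a hedged sketch of the Liu estimator (Liu, 1993), one of the estimators compared in the article: β̂_d = (X′X + I)⁻¹(X′y + d·β̂_OLS) with shrinkage parameter d in (0, 1). The data and the value of d below are illustrative.

```python
import numpy as np

def liu_estimator(X, y, d):
    """Liu estimator: (X'X + I)^{-1} (X'y + d * b_ols)."""
    p = X.shape[1]
    b_ols = np.linalg.lstsq(X, y, rcond=None)[0]
    return np.linalg.solve(X.T @ X + np.eye(p), X.T @ y + d * b_ols)

rng = np.random.default_rng(3)
X = rng.standard_normal((30, 3))
X[:, 2] = X[:, 0] + 0.01 * rng.standard_normal(30)   # induce collinearity
y = X @ np.array([1.0, 2.0, 3.0]) + rng.standard_normal(30)
print(liu_estimator(X, y, d=0.7))
```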

12.
Hilmar Drygas, Statistics, 2013, 47(2): 211-231
This paper deals with the existence of best quadratic unbiased estimators in variance-covariance component models. It extends and unifies results previously obtained by Seely, Zyskind, Klonecki, Zmyślony, Gnot, Kleffe and Pincus. The author considers a quasinormally distributed random vector y such that Ey = Xβ and Cov y ∈ L, where L is a linear space of symmetric square matrices. Conditions for the existence of a BLUE of Ey and a BQUE of Cov y (Eyy′) are investigated. A BLUE exists iff symmetry conditions for certain matrices are met, while a BQUE exists iff some modified quadratic subspace conditions are met. At the end of the paper three examples are studied in which all these conditions are met: the random coefficient regression model, the multivariate linear model and the Behrens–Fisher model. The proofs of the theorems are obtained by considering linear models in y and yy′, respectively.

13.
It is well known that when the true values of the independent variable are unobservable due to measurement error, the least squares estimator for a regression model is biased and inconsistent. When repeated observations on each x_i are taken, consistent estimators for the linear-plateau model can be formed. The repeated observations are required to classify each observation to the appropriate line segment. Two cases of repeated observations are treated in detail. First, when a single value of y_i is observed with the repeated observations of x_i, the least squares estimator using the mean of the repeated x_i observations is consistent and asymptotically normal. Second, when repeated observations on the pair (x_i, y_i) are taken, the least squares estimator is inconsistent, but two consistent estimators are proposed: one consistently estimates the bias of the least squares estimator and adjusts accordingly; the second is the least squares estimator using the mean of the repeated observations on each pair.

14.
In the standard linear regression model with independent, homoscedastic errors, the Gauss–Markov theorem asserts that β̂ = (X′X)⁻¹X′y is the best linear unbiased estimator of β and, furthermore, that c′β̂ is the best linear unbiased estimator of c′β for all p × 1 vectors c. In the corresponding random regressor model, X is a random sample of size n from a p-variate distribution. If attention is restricted to linear estimators of c′β that are conditionally unbiased, given X, the Gauss–Markov theorem applies. If, however, the estimator is required only to be unconditionally unbiased, the Gauss–Markov theorem may or may not hold, depending on what is known about the distribution of X. The results generalize to the case in which X is a random sample without replacement from a finite population.

15.
This paper considers the general linear regression model y_t = x_t′β + u_t under the heteroscedastic structure E(u_t) = 0, E(u_t²) = σ²(x_t′β)², E(u_t u_s) = 0 for t ≠ s, t, s = 1, …, T. It is shown that any estimated GLS estimator for β is asymptotically equivalent to the GLS estimator under some regularity conditions. A three-step GLS estimator, which calls upon the assumption E(u_t²) = σ²(x_t′β)² for the estimation of the disturbance covariance matrix, is considered.
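An illustrative feasible-GLS sketch under this variance structure (a plain reading of the abstract, not the authors' code): (1) OLS, (2) estimate the variances from the OLS fit, (3) weighted least squares with the estimated weights.

```python
import numpy as np

rng = np.random.default_rng(4)
T = 200
X = np.column_stack([np.ones(T), rng.uniform(1.0, 3.0, T)])
beta = np.array([2.0, 1.5])
mu = X @ beta
y = mu + 0.3 * mu * rng.standard_normal(T)        # sd proportional to x_t' beta

b_ols = np.linalg.lstsq(X, y, rcond=None)[0]      # step 1: OLS
w = 1.0 / (X @ b_ols) ** 2                        # step 2: weights 1/(x_t' b)^2
b_gls = np.linalg.solve(X.T @ (w[:, None] * X),   # step 3: weighted LS
                        X.T @ (w * y))
print("OLS:", b_ols, " FGLS:", b_gls)
```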

16.
Consistency and asymptotic normality of the maximum likelihood estimator of β in the loglinear model E(y_i) = e^(α+βx_i), where the y_i are independent Poisson observations, 1 ≤ i ≤ n, are proved under conditions which are near necessary and sufficient. The asymptotic distribution of the deviance test for β = β₀ is shown to be chi-squared with 1 degree of freedom under the same conditions, and a second-order correction to the deviance is derived. The exponential model for censored survival data is also treated by the same methods.
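A minimal sketch of the model and test (illustrative, not from the paper): Newton-Raphson ML fit of the Poisson loglinear model and the deviance test of β = 0 referred to chi-squared with 1 degree of freedom.

```python
import numpy as np
from scipy.stats import chi2

def fit_poisson_loglinear(x, y, n_iter=25):
    """ML fit of E(y_i) = exp(alpha + beta * x_i) by Newton-Raphson."""
    X = np.column_stack([np.ones_like(x), x])
    theta = np.zeros(2)                      # (alpha, beta)
    for _ in range(n_iter):
        mu = np.exp(X @ theta)
        score = X.T @ (y - mu)               # gradient of the log-likelihood
        info = X.T @ (mu[:, None] * X)       # Fisher information
        theta += np.linalg.solve(info, score)
    return theta

def deviance(y, mu):
    with np.errstate(divide="ignore", invalid="ignore"):
        term = np.where(y > 0, y * np.log(y / mu), 0.0)
    return 2.0 * np.sum(term - (y - mu))

rng = np.random.default_rng(5)
x = rng.uniform(0, 1, 100)
y = rng.poisson(np.exp(0.5 + 1.0 * x))

a, b = fit_poisson_loglinear(x, y)
mu_full = np.exp(a + b * x)
mu_null = np.full_like(x, y.mean())          # ML fit under beta = 0
dev_diff = deviance(y, mu_null) - deviance(y, mu_full)
print("deviance test:", dev_diff, " p =", chi2.sf(dev_diff, df=1))
```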

17.
Let κ̂ and κ̂_r denote the best linear unbiased estimators of a given vector of parametric functions κ = Kβ in the general linear models ℳ = {y, Xβ, σ²V} and ℳ_r = {y, Xβ | Rβ = r, σ²V}, respectively. A bound for the Euclidean distance between κ̂ and κ̂_r is expressed by the spectral distance between the dispersion matrices of the two estimators, and the difference between sums of squared errors evaluated in the model ℳ and the sub-restricted model ℳ_r* containing an essential part of the restrictions Rβ = r with respect to estimating κ.
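A numerical illustration of the two estimators being compared (an assumption-laden sketch, not the paper's derivation): for V = I the restricted BLUE is the restricted least squares estimator b_r = b − (X′X)⁻¹R′(R(X′X)⁻¹R′)⁻¹(Rb − r).

```python
import numpy as np

rng = np.random.default_rng(6)
n, p = 30, 3
X = rng.standard_normal((n, p))
y = X @ np.array([1.0, 2.0, 3.0]) + rng.standard_normal(n)

K = np.eye(p)                       # estimate kappa = beta itself
R = np.array([[1.0, -1.0, 0.0]])    # restriction: beta_1 = beta_2
r = np.array([0.0])

XtX_inv = np.linalg.inv(X.T @ X)
b = XtX_inv @ X.T @ y                                   # unrestricted BLUE
adj = XtX_inv @ R.T @ np.linalg.solve(R @ XtX_inv @ R.T, R @ b - r)
b_r = b - adj                                           # restricted BLUE

print("Euclidean distance |K b - K b_r| =", np.linalg.norm(K @ b - K @ b_r))
```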

18.
General mixed linear models for experiments conducted over a series of sites and/or years are described. The ordinary least squares (OLS) estimator is simple to compute, but is not the best unbiased estimator. Also, the usual formula for the variance of the OLS estimator is not correct and seriously underestimates the true variance. The best linear unbiased estimator is the generalized least squares (GLS) estimator. However, it requires an inversion of the variance-covariance matrix V, which is usually of large dimension. Also, in practice, V is unknown.

We presented an estimator V̂ of the matrix V using the estimators of variance components [for sites, blocks (sites), etc.]. We also presented a simple transformation of the data, such that an ordinary least squares regression of the transformed data gives the estimated generalized least squares (EGLS) estimator (a sketch of this transformation follows this abstract). The standard errors obtained from the transformed regression serve as asymptotic standard errors of the EGLS estimators. We also established that the EGLS estimator is unbiased.

An example of fitting a linear model to data for 18 sites (environments) located in Brazil is given. One of the site variables (soil test phosphorus) was measured by plot rather than by site, and this established the need for a covariance model such as the one used rather than the usual analysis-of-variance model. It is for this variable that the resulting parameter estimates did not correspond well between the OLS and EGLS estimators. Regression statistics and the analysis of variance for the example are presented and summarized.
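A hedged sketch of the transformation described above (not the authors' code): with an estimate V̂ of the covariance matrix, premultiplying y and X by the inverse Cholesky factor of V̂ turns GLS into an ordinary least squares problem, whose standard errors serve as the asymptotic EGLS ones. The block-structured V̂ below is a made-up stand-in for a sites/blocks variance-component estimate.

```python
import numpy as np

def egls_via_transformation(X, y, V_hat):
    L = np.linalg.cholesky(V_hat)
    Xs = np.linalg.solve(L, X)        # "whitened" design matrix L^{-1} X
    ys = np.linalg.solve(L, y)        # "whitened" response L^{-1} y
    b, res, *_ = np.linalg.lstsq(Xs, ys, rcond=None)
    n, p = X.shape
    s2 = float(res[0]) / (n - p)      # residual variance of transformed fit
    cov_b = s2 * np.linalg.inv(Xs.T @ Xs)
    return b, np.sqrt(np.diag(cov_b))

rng = np.random.default_rng(7)
n = 24
X = np.column_stack([np.ones(n), rng.standard_normal(n)])
V_hat = np.kron(np.eye(4), 0.5 * np.ones((6, 6))) + np.eye(n)
y = X @ np.array([1.0, 2.0]) + np.linalg.cholesky(V_hat) @ rng.standard_normal(n)

b, se = egls_via_transformation(X, y, V_hat)
print("EGLS estimates:", b, " SEs:", se)
```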

19.
In this article, we discuss how to predict a combined quadratic parametric function of the form β′Hβ + hσ² in a general linear model with stochastic regression coefficients, denoted by y = Xβ + e. First, the quadratic predictability of β′Hβ + hσ² is investigated in order to obtain a quadratic unbiased predictor (QUP) via a general method of structuring an unbiased estimator. This QUP is also optimal in some situations, and we therefore expect it to be a good predictor. To show this idea, we apply the Lagrange multipliers method to this problem and finally reach the expected conclusion through permutation matrix techniques.

20.
This note investigates the efficiency of using near-best or approximate L1 estimators as starting values in L1 linear programming procedures. In particular, it is shown that the total computer time can often be reduced if one first computes the least squares estimator, β̂, and then adjusts y to y − Xβ̂ in Barrodale and Roberts' improved algorithm.
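An illustrative LP formulation of L1 (LAD) regression (solved with scipy's HiGHS backend, not Barrodale and Roberts' specialised algorithm): fitting to the adjusted response y − Xβ̂ and adding β̂ back yields the same L1 estimate as fitting to the raw data, which is the adjustment the note exploits.

```python
import numpy as np
from scipy.optimize import linprog

def lad(X, y):
    """Minimise sum |y - X b| via the LP: min 1'(e+ + e-) s.t. Xb + e+ - e- = y."""
    n, p = X.shape
    c = np.concatenate([np.zeros(p), np.ones(2 * n)])
    A_eq = np.hstack([X, np.eye(n), -np.eye(n)])
    bounds = [(None, None)] * p + [(0, None)] * (2 * n)
    res = linprog(c, A_eq=A_eq, b_eq=y, bounds=bounds, method="highs")
    return res.x[:p]

rng = np.random.default_rng(8)
X = np.column_stack([np.ones(50), rng.standard_normal(50)])
y = X @ np.array([1.0, 2.0]) + rng.standard_normal(50)

b_ls = np.linalg.lstsq(X, y, rcond=None)[0]
b_direct = lad(X, y)                       # L1 fit on the raw data
b_adjusted = b_ls + lad(X, y - X @ b_ls)   # L1 fit on LS residuals, shifted back
print(np.allclose(b_direct, b_adjusted, atol=1e-6))
```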
