Linear models that allow perfect estimation
Authors: Ronald Christensen, Yong Lin
Affiliation: 1. Department of Mathematics and Statistics, University of New Mexico, Albuquerque, NM 87131-0001, USA
Abstract: The general Gauss–Markov model, $Y = X\beta + e$, $E(e) = 0$, $\mathrm{Cov}(e) = \sigma^2 V$, has been intensively studied and widely used. Most studies consider covariance matrices $V$ that are nonsingular, but we focus on the most difficult case, wherein $C(X)$, the column space of $X$, is not contained in $C(V)$. This forces $V$ to be singular. Under this condition there exist nontrivial linear functions of $X\beta$, namely $Q'X\beta$, that are known with probability 1 (perfectly), where $C(Q) = C(V)^\perp$. To treat $C(X) \not\subset C(V)$, much of the existing literature obtains estimates and tests by replacing $V$ with a pseudo-covariance matrix $T = V + XUX'$ for some nonnegative definite $U$ such that $C(X) \subset C(T)$; see Christensen (Plane Answers to Complex Questions: The Theory of Linear Models, 2002, Chap. 10). We find it more intuitive to first eliminate what is known about $X\beta$ and then to adjust $X$ while keeping $V$ unchanged. We show that we can decompose $\beta$ into the sum of two orthogonal parts, $\beta = \beta_0 + \beta_1$, where $\beta_0$ is known. We also show that the unknown component of $X\beta$ is $X\beta_1 \equiv \tilde{X}\gamma$, where $C(\tilde{X}) = C(X) \cap C(V)$. We replace the original model with $Y - X\beta_0 = \tilde{X}\gamma + e$, $E(e) = 0$, $\mathrm{Cov}(e) = \sigma^2 V$, and perform estimation and tests under this new model, for which the simplifying assumption $C(\tilde{X}) \subset C(V)$ holds. This allows us to focus on the part of the parameters that is not known perfectly. We show that this method provides the usual estimates and tests.
Keywords:
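As an illustration of the construction described in the abstract, the following is a minimal numerical sketch in Python/NumPy. It builds a small model with singular $V$ and $C(X) \not\subset C(V)$, verifies that $Q'Y = Q'X\beta$ holds exactly, and then fits the adjusted model $Y - X\beta_0 = \tilde{X}\gamma + e$. The symbols $Q$, $\beta_0$, $\tilde{X}$, and $\gamma$ follow the abstract, but the specific computational choices here (the pseudoinverse formula for $\beta_0$, taking $\tilde{X}$ as $X$ times the null space of $Q'X$, and the generalized-inverse weighted estimator for $\gamma$) are assumptions made for this sketch, not formulas taken from the paper.

```python
# A minimal, illustrative sketch (not code from the paper): a Gauss-Markov model
# with singular V and C(X) not contained in C(V), in which part of X*beta is
# known with probability 1 and the rest is estimated from the adjusted model.
import numpy as np

rng = np.random.default_rng(0)

# Toy design and singular covariance.  The 4th error component has zero
# variance, so C(V) = {v : v_4 = 0}, and the second column of X lies outside C(V).
X = np.array([[1., 0.],
              [1., 1.],
              [1., 2.],
              [0., 1.]])
V = np.diag([1., 1., 1., 0.])
beta_true = np.array([2., 3.])
sigma = 1.0

# Simulate e with Cov(e) = sigma^2 V (componentwise because V is diagonal here).
e = sigma * np.sqrt(np.diag(V)) * rng.standard_normal(4)
Y = X @ beta_true + e

# Q spans C(V)^perp, which for symmetric V is the null space of V.
U, s, _ = np.linalg.svd(V)
Q = U[:, s < 1e-10]

# The direction orthogonal to C(V) carries no noise, so Q'Y = Q'X*beta exactly.
print("Q'Y      :", Q.T @ Y)
print("Q'X beta :", Q.T @ X @ beta_true)

# One concrete (pseudoinverse) choice of the known component beta_0.
beta0 = np.linalg.pinv(Q.T @ X) @ (Q.T @ Y)
print("beta_0   :", beta0)            # here (0, 3): the 2nd coordinate is known

# Xtilde spans C(X) ∩ C(V) = {Xc : Q'Xc = 0}, i.e. X times the null space of Q'X.
_, s2, Vt2 = np.linalg.svd(Q.T @ X)
r = np.linalg.matrix_rank(Q.T @ X)
Xtilde = X @ Vt2[r:].T

# Adjusted model Y - X*beta_0 = Xtilde*gamma + e, now with C(Xtilde) ⊂ C(V).
# Estimate gamma with a generalized-inverse weighted least squares step.
Vplus = np.linalg.pinv(V)
Z = Y - X @ beta0
gamma_hat = np.linalg.pinv(Xtilde.T @ Vplus @ Xtilde) @ (Xtilde.T @ Vplus @ Z)

# Fitted mean = known part + estimated unknown part.
print("fitted Xb:", X @ beta0 + Xtilde @ gamma_hat)
print("true   Xb:", X @ beta_true)
```

In this toy example the fourth observation is noise-free, so the second coordinate of $\beta$ is recovered exactly from $Q'Y$, while the remaining direction of $X\beta$ is estimated from the adjusted model; a diagonal $V$ is used only so the degenerate error can be simulated componentwise.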
|