Similar Articles
20 similar articles found.
1.
In this paper we obtain the complete class of representations and useful subclasses of MV-UB-LE and MV-MB-LE (minimum variance unbiased and minimum bias linear estimators) of linear parametric functions in the Gauss-Markoff model (Y, Xβ, σ²V) when V is possibly singular.

2.
In this note we consider the equality of the ordinary least squares estimator (OLSE) and the best linear unbiased estimator (BLUE) of the estimable parametric function in the general Gauss–Markov model. In particular, we consider the structures of the covariance matrix V for which the OLSE equals the BLUE. Our results are based on the properties of a particular reparametrized version of the original Gauss–Markov model.
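A small numerical illustration of this equality (my own sketch, not taken from the paper): one well-known sufficient condition is C(VX) ⊆ C(X), which holds, for example, for V = I + XX′. The NumPy check below confirms that the OLSE and the BLUE then coincide.

```python
import numpy as np

# Illustrative sketch (not from the paper): a well-known sufficient condition
# for OLSE = BLUE is C(VX) ⊆ C(X).  Taking V = I + X X' satisfies it, because
# V X = X (I + X'X) has all its columns in the column space of X.
rng = np.random.default_rng(0)
n, p = 20, 3
X = rng.standard_normal((n, p))
y = X @ np.array([1.0, -2.0, 0.5]) + rng.standard_normal(n)

V = np.eye(n) + X @ X.T                     # covariance structure with C(VX) ⊆ C(X)
olse = np.linalg.solve(X.T @ X, X.T @ y)    # ordinary least squares
W = np.linalg.inv(V)
blue = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)   # generalized (Aitken) least squares

print(np.allclose(olse, blue))              # True: OLSE coincides with BLUE here
```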

3.
Consider the linear model (y, Xβ, V), where the model matrix X may not have full column rank and V may be singular. In this paper we introduce a formula for the difference between the BLUEs of Xβ under the full model and under the model where one observation has been deleted. We also consider the partitioned linear regression model whose model matrix is (X1 : X2), the corresponding vector of unknown parameters being (β1′ : β2′)′. We show that the BLUE of X1β1 under a specific reduced model equals the corresponding BLUE under the original full model, and we consider some interesting consequences of this result.

4.
Growth curve models are used to analyze repeated measures data (longitudinal data), which are functions of time. In this paper, some necessary and sufficient conditions for a linear function B1YB2 to be the best linear unbiased estimator (BLUE) of the estimable function X1ΘX2 (or K1ΘK2) under the general growth curve model are established. In addition, representations of BLUE(K1ΘK2) (or BLUE(X1ΘX2)) are derived when the conditions are satisfied. Two special notions of linear sufficiency with respect to the general growth curve model are given at the end. The findings of this paper enrich some known results in the literature.

5.
Let (θ, X) be a random vector such that E(X|θ) = θ and Var(X|θ) = a + bθ + cθ² for some known constants a, b and c. Assume X1,…,Xn are independent observations which have the same distribution as X. Let t(X) be the linear regression of θ on X. The linear empirical Bayes estimator is used to approximate the linear regression function. It is shown that under appropriate conditions, the linear empirical Bayes estimator approximates the linear regression well in the sense of mean squared error.
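As a rough illustration of the idea (my own sketch under the stated moment assumptions, not the paper's construction), the following code builds a linear empirical Bayes rule by plugging sample moments into the linear regression of θ on X; the constants a, b, c and the gamma distribution for θ are arbitrary choices.

```python
import numpy as np

# Minimal sketch (my own illustration, not the paper's construction).
# Model: given theta, X has mean theta and variance a + b*theta + c*theta^2
# with a, b, c known.  The linear regression (linear Bayes rule) of theta on X is
#   t(X) = E(theta) + w * (X - E(theta)),   w = Var(theta) / Var(X),
# and the linear *empirical* Bayes estimator replaces the unknown moments of X
# by sample moments computed from X1, ..., Xn.
a, b, c = 1.0, 0.5, 0.2          # assumed known constants

def linear_empirical_bayes(x_sample, x_new):
    mu_hat = x_sample.mean()                 # estimates E(X) = E(theta)
    var_x_hat = x_sample.var(ddof=1)         # estimates Var(X)
    ex2_hat = np.mean(x_sample ** 2)         # estimates E(X^2)
    # E[Var(X|theta)] = a + b*E(theta) + c*E(theta^2); combining this with
    # E(theta^2) = E(X^2) - E[Var(X|theta)] gives the expression below.
    evar_hat = (a + b * mu_hat + c * ex2_hat) / (1.0 + c)
    w_hat = max(0.0, 1.0 - evar_hat / var_x_hat)   # estimated Var(theta)/Var(X)
    return mu_hat + w_hat * (x_new - mu_hat)

# Usage: shrink a new observation toward the sample mean.
rng = np.random.default_rng(1)
theta = rng.gamma(shape=4.0, scale=1.0, size=500)
x = rng.normal(theta, np.sqrt(a + b * theta + c * theta ** 2))
print(linear_empirical_bayes(x, x_new=8.0))
```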

6.
7.
This paper considers the general linear regression model yt = Xt′β + ut under the heteroscedastic structure E(ut) = 0, E(ut²) = σ²(Xt′β)², E(ut us) = 0 for t ≠ s, t, s = 1,…,T. It is shown that any estimated GLS estimator for β is asymptotically equivalent to the GLS estimator under some regularity conditions. A three-step GLS estimator, which calls upon the assumption E(ut²) = σ²(Xt′β)² for the estimation of the disturbance covariance matrix, is considered.
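The sketch below illustrates the general idea of a feasible (estimated) GLS under this multiplicative-heteroscedasticity assumption; it is not the paper's exact three-step procedure, just an OLS fit followed by two reweighting passes with weights 1/(Xt′β̂)².

```python
import numpy as np

# Rough sketch of a feasible (estimated) GLS under E(u_t^2) = sigma^2 * (x_t' beta)^2.
# The exact three-step scheme in the paper is not reproduced here; this is just
# the generic idea: start from OLS, build weights from fitted values, re-estimate.
rng = np.random.default_rng(2)
T, p = 200, 2
X = np.column_stack([np.ones(T), rng.uniform(1.0, 3.0, T)])
beta_true, sigma = np.array([2.0, 1.5]), 0.3
y = X @ beta_true + sigma * (X @ beta_true) * rng.standard_normal(T)

beta = np.linalg.lstsq(X, y, rcond=None)[0]          # step 1: OLS
for _ in range(2):                                   # steps 2-3: reweight and refit
    w = 1.0 / (X @ beta) ** 2                        # inverse of estimated variances
    Xw, yw = X * w[:, None], y * w
    beta = np.linalg.solve(X.T @ Xw, X.T @ yw)       # weighted least squares
print(beta)
```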

8.
In this paper we consider linear sufficiency and linear completeness in the context of estimating the estimable parametric function Kβ under the general Gauss–Markov model {y, Xβ, σ²V}. We give new characterizations of linear sufficiency, and define and characterize linear completeness in the case of estimation of Kβ. Also, we consider a predictive approach for obtaining the best linear unbiased estimator of Kβ, and subsequently, we give the linear analogues of the Rao–Blackwell and Lehmann–Scheffé Theorems in the context of estimating Kβ.

9.
Consider the Gauss-Markoff model (Y, Xβ, σ²V) in the usual notation (Rao, 1973a, p. 294). If V is singular, there exists a matrix N such that N′Y has zero covariance. The minimum variance unbiased estimator of an estimable parametric function p′β is obtained in the wider class of (non-linear) unbiased estimators of the form f(N′Y) + Y′g(N′Y), where f is a scalar and g is a vector function.

10.
Rasul A. Khan, Statistics, 2015, 49(3): 705–710
Let X1, X2, …, Xn be iid N(μ, aμ²) (a > 0) random variables with an unknown mean μ > 0 and known coefficient of variation (CV) √a. The estimation of μ is revisited and it is shown that a modified version of an unbiased estimator of μ [cf. Khan RA. A note on estimating the mean of a normal distribution with known CV. J Am Stat Assoc. 1968;63:1039–1041] is more efficient. A certain linear minimum mean square estimator of Gleser and Healy [Estimating the mean of a normal distribution with known CV. J Am Stat Assoc. 1976;71:977–981] is also modified and improved. These improved estimators are compared with the maximum likelihood estimator under a squared-error loss function. Based on asymptotic considerations, a large-sample confidence interval is also mentioned.
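A quick hedged illustration of why a known CV helps (not the estimators studied in the paper): since Var(X̄) = aμ²/n, the simple shrinkage estimator (n/(n + a))X̄ has uniformly smaller mean squared error than X̄, which the small simulation below confirms.

```python
import numpy as np

# Illustrative simulation, not the estimators from the paper: when the CV sqrt(a)
# is known, even the simple shrinkage n/(n+a) * Xbar has smaller mean squared
# error than the sample mean Xbar, since Var(Xbar) = a*mu^2/n is known up to mu.
rng = np.random.default_rng(3)
a, mu, n, reps = 0.5, 10.0, 20, 100_000

x = rng.normal(mu, np.sqrt(a) * mu, size=(reps, n))
xbar = x.mean(axis=1)
shrunk = (n / (n + a)) * xbar                     # linear estimator c * Xbar with c = n/(n+a)

print("MSE of Xbar:   ", np.mean((xbar - mu) ** 2))     # ~ a*mu^2/n = 2.5
print("MSE of shrunk: ", np.mean((shrunk - mu) ** 2))   # strictly smaller
```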

11.
Suppose independent random samples are available from k (k ≥ 2) exponential populations Π1,…,Πk with a common location θ and scale parameters σ1,…,σk, respectively. Let Xi and Yi denote the minimum and the mean, respectively, of the ith sample, and further let X = min{X1,…,Xk} and Ti = Yi − X, i = 1,…,k. For selecting a nonempty subset of {Π1,…,Πk} containing the best population (the one associated with max{σ1,…,σk}), we use the decision rule which selects Πi if Ti ≥ c max{T1,…,Tk}, i = 1,…,k. Here 0 < c ≤ 1 is chosen so that the probability of including the best population in the selected subset is at least P* (1/k ≤ P* < 1), a pre-assigned level. The problem is to estimate the average worth W of the selected subset, the arithmetic average of the means of the selected populations. In this article, we derive the uniformly minimum variance unbiased estimator (UMVUE) of W. The bias and risk function of the UMVUE are compared numerically with those of analogs of the best affine equivariant estimator (BAEE) and the maximum likelihood estimator (MLE).
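A short sketch of the selection rule itself (the constant c below is a placeholder, not the calibrated value that guarantees the P* requirement), together with a naive plug-in estimate of the average worth rather than the UMVUE derived in the paper:

```python
import numpy as np

# Sketch of the subset-selection rule described above.
rng = np.random.default_rng(4)
k, n, theta = 4, 15, 2.0
sigmas = np.array([1.0, 1.5, 2.0, 3.0])                 # Pi_4 is the "best" population
samples = theta + rng.exponential(sigmas[:, None], size=(k, n))

X_i = samples.min(axis=1)          # per-sample minima
Y_i = samples.mean(axis=1)         # per-sample means
X = X_i.min()                      # overall minimum
T = Y_i - X

c = 0.5                            # placeholder; in the paper c is chosen to meet the P* guarantee
selected = np.flatnonzero(T >= c * T.max())
print("selected populations:", selected + 1)
print("naive plug-in estimate of average worth:", Y_i[selected].mean())
```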

12.
A Gauss–Markov model is said to be singular if the covariance matrix of the observable random vector in the model is singular. In such a case, there exist some natural restrictions associated with the observable random vector and the unknown parameter vector in the model. In this paper, we derive through the matrix rank method a necessary and sufficient condition for a vector of parametric functions to be estimable, and necessary and sufficient conditions for a linear estimator to be unbiased in the singular Gauss–Markov model. In addition, we give some necessary and sufficient conditions for the ordinary least-square estimator (OLSE) and the best linear unbiased estimator (BLUE) under the model to satisfy the natural restrictions.
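For the estimability part, a convenient hedged illustration (not from the paper, which works through the matrix rank method more generally): Kβ is estimable exactly when the rows of K lie in the row space of X, which can be checked by comparing ranks.

```python
import numpy as np

# Small rank-based check (my own illustration): a vector of parametric functions
# K @ beta is estimable iff the rows of K lie in the row space of X, i.e.
# rank(vstack([X, K])) == rank(X).
def is_estimable(X, K, tol=1e-10):
    return np.linalg.matrix_rank(np.vstack([X, K]), tol=tol) == np.linalg.matrix_rank(X, tol=tol)

X = np.array([[1.0, 1.0, 0.0],
              [1.0, 0.0, 1.0]])                          # rank 2, three parameters
print(is_estimable(X, np.array([[0.0, 1.0, -1.0]])))     # True: this contrast is estimable
print(is_estimable(X, np.array([[0.0, 1.0,  0.0]])))     # False: this coordinate alone is not
```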

13.
An incomplete factorial design based on an extension of the familiar 2^k factorial, called a nested cube, is proposed for use in response surface investigations. The simplicity and general efficiency of the nested cube suggest its suitability to many areas of research, especially research repeated at many locations or conducted over a long period. Comparisons to potentially competing designs are provided for bias in response estimation due to fitting an inappropriate model and for profiles of variance. Merits of the nested cube are (1) a level of relative bias and variance judged to be favorable though not optimal, (2) an ability to utilize a minimum bias estimator not available to competing designs, and (3) a simplicity associated with the use of equal spacing and nearly equal replication on the margin for each factor level.

14.
In this article, we consider a partially linear single-index model Y = g(Z′θ0) + X′β0 + ε when the covariate X may be missing at random. We propose weighted estimators for the unknown parametric and nonparametric parts by applying weighted estimating equations. We establish asymptotic normality of the estimators of the parameters and an asymptotic expansion for the estimator of the nonparametric part when the selection probabilities are unknown. Simulation studies are also conducted to illustrate the finite-sample properties of these estimators.

15.
Communications in Statistics: Theory and Methods, 2012, 41(13–14): 2405–2418
In this article, we consider two linear models, M1 = {y, Xβ, V1} and M2 = {y, Xβ, V2}, which differ only in their covariance matrices. Our main focus lies on the difference of the best linear unbiased estimators, BLUEs, of Xβ under these models. The corresponding problems between the models {y, Xβ, In} and {y, Xβ, V}, i.e., between the OLSE (ordinary least squares estimator) and the BLUE, are pretty well studied. Our purpose is to review the corresponding considerations between the BLUEs of Xβ under M1 and M2. This article is an expository one, presenting also new results.

16.
Among criteria for the least squares estimator in a linear model (y, Xβ, V) to be simultaneously the best linear unbiased estimator, one convenient for applications is that of Anderson (1971, 1972). His result, however, has been developed under the assumptions of full column rank for X and nonsingularity for V. Subsequently, this result has been extended by Styan (1973) to the case when the restriction on X is removed. In this note, it is shown that the restriction on V can also be relaxed and, consequently, that Anderson's criterion is applicable to the general linear model without any rank assumptions at all.

17.
In this note we present a criterion for linear estimation which is similar to the MV-MB-LE of Rao (1978) in the Gauss-Markoff model (Y, Xβ, σ²G). We call this criterion MMS-MB-LE (Minimum Mean Square Error–Minimum Bias Linear Estimation). Representations of solutions for such estimators, similar to those of Rao (1978), are provided.

18.
The paper introduces a new difference-based Liu estimator, β̂_Ldiff = (X̃′X̃ + I)⁻¹(X̃′ỹ + η β̂_diff), of the regression parameters β in the semiparametric regression model y = Xβ + f + ε. The difference-based estimator β̂_diff = (X̃′X̃)⁻¹X̃′ỹ and the difference-based Liu estimator are analysed and compared with respect to the mean-squared error (MSE) criterion. Finally, the performance of the new estimator is evaluated on a real data set. A Monte Carlo simulation is given to show the improvement in the scalar MSE of the estimator.
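Since the abstract states the estimator explicitly, a minimal sketch is easy to give; the differencing scheme below uses simple first differences and an arbitrary biasing parameter η, both of which are my assumptions rather than the paper's exact choices.

```python
import numpy as np

# Sketch of the difference-based Liu estimator from the abstract's formula,
# using simple first differences to remove the smooth component f (the paper may
# use a different differencing scheme; eta here is an arbitrary biasing parameter).
rng = np.random.default_rng(5)
n, beta_true = 100, np.array([1.0, -0.5])
t = np.linspace(0.0, 1.0, n)
X = rng.standard_normal((n, 2))
y = X @ beta_true + np.sin(2 * np.pi * t) + 0.1 * rng.standard_normal(n)

D = np.eye(n)[1:] - np.eye(n)[:-1]         # first-difference matrix: (Df)_i = f_{i+1} - f_i
Xt, yt = D @ X, D @ y                      # differencing nearly cancels the smooth f

beta_diff = np.linalg.solve(Xt.T @ Xt, Xt.T @ yt)           # difference-based estimator
eta = 0.7
beta_Ldiff = np.linalg.solve(Xt.T @ Xt + np.eye(2),
                             Xt.T @ yt + eta * beta_diff)   # difference-based Liu estimator
print(beta_diff, beta_Ldiff)
```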

19.
In this article, we discuss how to predict a combined quadratic parametric function of the form β′Hβ + hσ² in a general linear model with stochastic regression coefficients, denoted by y = Xβ + e. Firstly, the quadratic predictability of β′Hβ + hσ² is investigated in order to obtain a quadratic unbiased predictor (QUP) via a general method of structuring an unbiased estimator. This QUP is also optimal in some situations and is therefore expected to be a good predictor. To demonstrate this, we apply the Lagrange multipliers method to the problem and finally reach the expected conclusion through permutation matrix techniques.

20.
The general Gauss–Markov model, Y = Xβ + e, E(e) = 0, Cov(e) = σ²V, has been intensively studied and widely used. Most studies consider covariance matrices V that are nonsingular, but we focus on the most difficult case wherein C(X), the column space of X, is not contained in C(V). This forces V to be singular. Under this condition, Q′Y, where C(Q) = C(V)⊥, provides nontrivial linear functions of β that are known with probability 1 (perfectly). To treat C(X) ⊄ C(V), much of the existing literature obtains estimates and tests by replacing V with a pseudo-covariance matrix T = V + XUX′ for some nonnegative definite U such that C(X) ⊂ C(T), see Christensen (Plane answers to complex questions: the theory of linear models, 2002, Chap. 10). We find it more intuitive to first eliminate what is known about β and then to adjust X while keeping V unchanged. We show that we can decompose β into the sum of two orthogonal parts, β = β0 + β1, where β0 is known. We also show that the unknown component of Xβ is Xβ1 ≡ X̃γ, where C(X̃) = C(X) ∩ C(V). We replace the original model with Y − Xβ0 = X̃γ + e, E(e) = 0, Cov(e) = σ²V, and perform estimation and tests under this new model, for which the simplifying assumption C(X̃) ⊂ C(V) holds. This allows us to focus on the part of the parameters that is not known perfectly. We show that this method provides the usual estimates and tests.
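A toy numerical check of the "known with probability 1" phenomenon (my own example, not from the paper): with a singular V and C(X) ⊄ C(V), the coordinates of Q′Y carry no noise, so Q′Y = Q′Xβ exactly.

```python
import numpy as np

# Toy demonstration (my own numbers): with a singular V and C(X) not contained in
# C(V), the components of Q'Y, where the columns of Q span the orthogonal
# complement of C(V), are free of error, so Q'Y = Q'X beta holds exactly and part
# of beta is known "perfectly".
rng = np.random.default_rng(6)
V = np.diag([1.0, 1.0, 0.0, 0.0])                   # singular covariance matrix
X = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0],
              [1.0, -1.0]])
beta = np.array([2.0, 3.0])

# Simulate e with Cov(e) = V: only the first two coordinates carry noise.
e = np.concatenate([rng.standard_normal(2), np.zeros(2)])
Y = X @ beta + e

Q = np.eye(4)[:, 2:]                                # columns spanning C(V)-perp
print(Q.T @ Y)                                      # equals Q'X beta with probability 1
print(Q.T @ X @ beta)                               # [ 5. -1.]
```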
