Full text (subscription): 8 articles; free: 0.
By subject: theoretical methodology 1, statistics 7.
By year: 2013 (2), 2010 (2), 2003 (1), 2002 (1), 1997 (1), 1993 (1).
A total of 8 results were found.
1.
Modeling prior information as a fuzzy set and using Zadeh's extension principle, a general approach is presented for rating linear affine estimators in linear regression. This approach is applied to fuzzy prior information sets given by ellipsoidal α-cuts. In an important and meaningful subclass, a uniformly best linear affine estimator can be determined explicitly. Surprisingly, such a uniformly best linear affine estimator is also optimal with respect to a corresponding relative squared error approach. Two illustrative special cases are discussed, in which a generalized least squares estimator on the one hand and a general ridge or Kuks–Olman estimator on the other turn out to be uniformly best.
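As a rough illustration of the second special case mentioned above, the following sketch computes a general ridge estimator of the Kuks–Olman form for a centered ellipsoidal prior set {β : βᵀTβ ≤ 1} with known error variance σ². The matrix T, the spherical choice in the example, and the function name are assumptions made here for illustration; this is not the paper's fuzzy-set construction.

```python
import numpy as np

def kuks_olman_estimator(X, y, T, sigma2):
    """General ridge / Kuks-Olman-form estimator for the prior ellipsoid
    {beta : beta' T beta <= 1} with known error variance sigma2.
    Illustrative parameterization only, not the paper's uniformly best
    estimator for fuzzy ellipsoidal alpha-cuts."""
    XtX = X.T @ X
    return np.linalg.solve(XtX + sigma2 * T, X.T @ y)

# small usage example with simulated data (all numbers are made up)
rng = np.random.default_rng(0)
X = rng.normal(size=(50, 3))
beta_true = np.array([1.0, -0.5, 0.25])
y = X @ beta_true + rng.normal(scale=0.5, size=50)
T = np.eye(3)            # spherical prior ellipsoid (assumption)
print(kuks_olman_estimator(X, y, T, sigma2=0.25))
```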
2.
In linear regression models, predictors based on least squares or generalized least squares estimators are usually applied; these, however, fail in the presence of multicollinearity. As an alternative, biased estimators such as ridge, Kuks–Olman, Bayes, or minimax estimators are sometimes suggested. In our analysis the relative, rather than the commonly used absolute, squared error enters the objective function. An explicit minimax solution is derived which, in an important special case, can be viewed as a predictor based on a Kuks–Olman estimator.
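A minimal sketch of the kind of biased alternative the abstract refers to: predicting at a new design point x₀ with a general ridge estimator β̂ = (XᵀX + K)⁻¹Xᵀy instead of least squares, which becomes unstable when XᵀX is nearly singular. The shrinkage matrix K and the function name are illustrative assumptions; the paper's explicit minimax solution for the relative squared error is not reproduced here.

```python
import numpy as np

def ridge_predictor(X, y, x0, K):
    """Predict x0' beta_hat with a general ridge estimator
    beta_hat = (X'X + K)^{-1} X' y.  K = 0 gives the least squares
    predictor, which fails under multicollinearity; a positive definite
    K (e.g. of Kuks-Olman form) keeps the linear system well conditioned.
    Illustrative only, not the paper's minimax solution."""
    beta_hat = np.linalg.solve(X.T @ X + K, X.T @ y)
    return x0 @ beta_hat
```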
3.
It has already been shown in Arnold and Stahlecker (2009) that, in linear regression, a uniformly best estimator exists in the class of all Γ-compatible linear affine estimators. Here, prior information is given by a fuzzy set Γ defined by its ellipsoidal α-cuts. Surprisingly, such a uniformly best linear affine estimator is uniformly best not only in the class of all Γ-compatible linear affine estimators but also in the class of all estimators satisfying a very weak and sensible condition. This property of the uniformly best linear affine estimator is established in the present paper. Furthermore, two illustrative special cases are mentioned, in which a generalized least squares estimator on the one hand and a general ridge or Kuks–Olman estimator on the other turn out to be uniformly best.
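For orientation, one common way to parameterize the ellipsoidal α-cuts of a fuzzy prior set Γ is sketched below; the center μ, the positive definite matrix T, and the radius function r(α) are generic placeholders, not necessarily the specification used by Arnold and Stahlecker.

```latex
% Nested ellipsoidal alpha-cuts of a fuzzy prior set Gamma
% (illustrative parameterization; mu, T and r(.) are placeholders):
\Gamma_\alpha = \bigl\{\beta \in \mathbb{R}^p :
  (\beta-\mu)^{\top} T\,(\beta-\mu) \le r(\alpha)\bigr\},
\qquad 0 < \alpha \le 1,
```

with T positive definite and r nonincreasing in α, so that higher membership levels correspond to smaller ellipsoids.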
4.
In order to obtain optimal estimators in a generalized linear regression model, we apply the minimax principle to the relative squared error. It turns out that this approach is equivalent to applying the minimax principle to the absolute squared error when an ellipsoidal prior information set is given. We discuss the admissibility of these minimax estimators. Furthermore, a close relation to a Bayesian approach is derived.
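The absolute-squared-error minimax problem over an ellipsoidal prior set, to which the relative-error approach is said to be equivalent, can be written schematically as follows; the weighting matrix A, the centered ellipsoid, and the homoskedastic error covariance are simplifying assumptions made here.

```latex
% Linear affine minimax estimation over an ellipsoidal prior set
% (schematic; A, the centered ellipsoid and Cov(u) = sigma^2 I are
% assumptions of this sketch):
\min_{C,\,d}\ \max_{\beta^{\top} T \beta \le 1}\
  \mathbb{E}\bigl[(C y + d - \beta)^{\top} A\,(C y + d - \beta)\bigr],
\qquad y = X\beta + u,\quad \mathbb{E}[u] = 0,\quad
  \operatorname{Cov}(u) = \sigma^{2} I .
```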
5.
In this paper the Hurwicz decision rule is applied to an adjustment problem: the decision whether a given action should be improved in the light of some knowledge about the states of nature or about other actors' behaviour. The general Hurwicz rule is compared with the minimax and minimin adjustment principles and reduces to these specific classes whenever the underlying loss function is quadratic and knowledge is given by an ellipsoidal set. In the framework of the adjustment model discussed here, Hurwicz's optimism index can be interpreted as a mobility index representing the actor's attitude towards new external information. Examples are given to illustrate the theoretical findings.
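The general Hurwicz rule on a finite loss table is easy to state in code: each action is scored by a convex combination of its best-case and worst-case loss, weighted by the optimism index. The discrete states and the toy numbers below are assumptions for illustration; the paper works with a quadratic loss and an ellipsoidal knowledge set rather than a finite table.

```python
import numpy as np

def hurwicz_choice(loss, optimism):
    """Hurwicz rule for a loss table loss[action, state]: score each
    action by optimism * best-case loss + (1 - optimism) * worst-case
    loss, then pick the minimizer.  optimism = 0 reproduces the minimax
    adjustment, optimism = 1 the minimin adjustment."""
    scores = optimism * loss.min(axis=1) + (1 - optimism) * loss.max(axis=1)
    return int(np.argmin(scores)), scores

# toy adjustment problem: rows = candidate adjusted actions,
# columns = possible states of nature (numbers are made up)
loss = np.array([[4.0, 4.0, 4.0],    # keep the current action
                 [1.0, 3.0, 9.0],    # moderate adjustment
                 [0.0, 5.0, 16.0]])  # large adjustment
for lam in (0.0, 0.5, 1.0):
    best, _ = hurwicz_choice(loss, lam)
    print(f"optimism index {lam}: choose action {best}")
```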
6.
We consider the problem of estimating the parameter vector in the linear model when observations on the independent variables are partially missing or incorrect. A new estimator is developed which systematically combines prior restrictions on the exogenous variables with the incomplete data. We compare this method with the alternative strategy of deleting missing values.
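The deletion strategy used as the benchmark is complete-case (listwise) OLS; one standard, generic device for folding prior restrictions into an estimator is Theil–Goldberger mixed estimation, sketched below. Both functions are illustrative assumptions; the paper's new estimator, which combines the restrictions with the incomplete data itself, is not reproduced here.

```python
import numpy as np

def complete_case_ols(X, y):
    """Drop every row of X with a missing value, then run OLS -- the
    deletion strategy the abstract compares against."""
    keep = ~np.isnan(X).any(axis=1)
    return np.linalg.lstsq(X[keep], y[keep], rcond=None)[0]

def mixed_estimator(X, y, R, r, sigma2, V):
    """Theil-Goldberger mixed estimation: combine the sample with
    stochastic prior restrictions r = R beta + v, Cov(v) = V.
    A standard device for merging prior restrictions with data,
    not necessarily the estimator developed in the paper."""
    A = X.T @ X / sigma2 + R.T @ np.linalg.solve(V, R)
    b = X.T @ y / sigma2 + R.T @ np.linalg.solve(V, r)
    return np.linalg.solve(A, b)
```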
7.
We consider the ordinary linear regression model in which the parameter vector β is constrained to lie in a given ellipsoid. It is shown that within the class of linear statistics for β there exists a (sub-)sequence of approximate minimax estimators converging to an exact minimax estimator. This result is valid for an arbitrary quadratic loss function.
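The notion of an approximate minimax sequence can be sketched as follows; the weight matrix A of the quadratic loss and the centered ellipsoid βᵀTβ ≤ 1 are generic placeholders, not the paper's exact setting.

```latex
% A sequence of linear statistics \hat\beta_n = C_n y is approximately
% minimax if its worst-case risk over the prior ellipsoid approaches the
% minimax value (schematic; A and the ellipsoid are placeholders):
\sup_{\beta^{\top} T \beta \le 1} R_A(\hat\beta_n, \beta)
  \;\longrightarrow\;
  \inf_{C}\ \sup_{\beta^{\top} T \beta \le 1} R_A(C y, \beta),
\qquad
R_A(\hat\beta, \beta)
  = \mathbb{E}\bigl[(\hat\beta - \beta)^{\top} A\,(\hat\beta - \beta)\bigr].
```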
8.
We consider the linear regression model y = Xβ + u with prior information on the unknown parameter vector β. The additional information on β is given by a fuzzy set. Using the mean squared error criterion, we derive linear estimators that optimally combine the data with the fuzzy prior information. Our approach generalizes the classical minimax procedure first proposed by Kuks and Olman.
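For a linear (affine) estimator β̂ = Cy + d in the model above, the mean squared error criterion has the standard decomposition sketched below; the homoskedastic error covariance Cov(u) = σ²I is an assumption made here for brevity.

```latex
% MSE of a linear affine estimator \hat\beta = C y + d in y = X\beta + u
% with E[u] = 0 and Cov(u) = sigma^2 I (the homoskedastic covariance is
% an assumption of this sketch):
\operatorname{MSE}(\hat\beta, \beta)
  = \mathbb{E}\,\bigl\lVert C y + d - \beta \bigr\rVert^{2}
  = \bigl\lVert (C X - I)\beta + d \bigr\rVert^{2}
    + \sigma^{2}\operatorname{tr}\bigl(C C^{\top}\bigr).
```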