Similar Articles
 20 similar articles found (search time: 31 ms)
1.
Under some nonstochastic linear restrictions based on either additional information or prior knowledge in a semiparametric regression model, a family of feasible generalized robust estimators for the regression parameter is proposed. The least trimmed squares (LTS) method, proposed by Rousseeuw as a highly robust regression estimator, is a statistical technique for fitting a regression model based on the subset of h observations (out of n) whose least-squares fit possesses the smallest sum of squared residuals. The coverage h may be set between n/2 and n. The LTS estimator involves computing the hyperplane that minimizes the sum of the smallest h squared residuals. For practical purposes, the covariance matrix of the error term is assumed unknown and is replaced by feasible estimators. We then develop an algorithm for the LTS estimator based on feasible methods. Through Monte Carlo simulation studies and a real data example, the performance of the feasible robust estimators is compared with that of the classical ones in restricted semiparametric regression models.
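The LTS objective described above, minimizing the sum of the h smallest squared residuals, can be sketched with a random-restart concentration algorithm in the spirit of Rousseeuw and Van Driessen's FAST-LTS. This is an illustrative sketch only, not the authors' feasible algorithm; the function name `lts_fit` and all tuning constants are made up:

```python
import numpy as np

def lts_fit(X, y, h, n_starts=500, seed=0):
    """Least trimmed squares via random elemental starts plus a few
    concentration steps: refit OLS on the h points with the smallest
    squared residuals, keep the best trimmed objective over all starts."""
    rng = np.random.default_rng(seed)
    n, p = X.shape
    best_beta, best_obj = None, np.inf
    for _ in range(n_starts):
        idx = rng.choice(n, size=p, replace=False)           # elemental start
        beta = np.linalg.lstsq(X[idx], y[idx], rcond=None)[0]
        for _ in range(3):                                   # concentration steps
            r2 = (y - X @ beta) ** 2
            keep = np.argsort(r2)[:h]                        # h smallest residuals
            beta = np.linalg.lstsq(X[keep], y[keep], rcond=None)[0]
        obj = np.sort((y - X @ beta) ** 2)[:h].sum()         # LTS objective
        if obj < best_obj:
            best_obj, best_beta = obj, beta
    return best_beta

# Line y = 1 + 2x with 20% gross outliers; h = 75 of n = 100 trims them out
rng = np.random.default_rng(1)
x = rng.uniform(0, 10, 100)
X = np.column_stack([np.ones_like(x), x])
y = 1 + 2 * x + 0.1 * rng.normal(size=100)
y[:20] += 50                                                 # contaminate
beta = lts_fit(X, y, h=75)
```

Because the trimmed fit discards the 25 largest squared residuals, the 20 shifted observations have no influence on the final coefficients.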

2.
Estimators are often defined as the solutions to data-dependent optimization problems. A common form of objective function (the function to be optimized) that arises in statistical estimation is the sum of a convex function V and a quadratic complexity penalty. A standard paradigm for creating kernel-based estimators leads to such an optimization problem. This article describes an optimization algorithm designed for unconstrained optimization problems in which the objective function is the sum of a nonnegative convex function and a known quadratic penalty. The algorithm is described and compared with BFGS on some penalized logistic regression and penalized L3/2 regression problems.
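A minimal sketch of this class of objectives: a nonnegative convex function V (here the average logistic negative log-likelihood) plus a quadratic penalty lam * ||b||^2, minimized by plain gradient descent. This is a generic illustration, not the article's algorithm; the function name and step sizes are assumptions:

```python
import numpy as np

def penalized_logistic(X, y, lam, lr=0.1, n_iter=2000):
    """Minimize V(b) + lam * ||b||^2 by gradient descent, where V is the
    (nonnegative, convex) average logistic negative log-likelihood."""
    b = np.zeros(X.shape[1])
    for _ in range(n_iter):
        p_hat = 1.0 / (1.0 + np.exp(-X @ b))          # logistic probabilities
        grad = X.T @ (p_hat - y) / len(y) + 2.0 * lam * b
        b -= lr * grad
    return b

# Toy data: binary labels generated from a noisy linear score
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = (X @ np.array([2.0, -1.0]) + 0.5 * rng.normal(size=200) > 0).astype(float)
b = penalized_logistic(X, y, lam=0.01)
acc = np.mean((X @ b > 0) == (y == 1.0))
```

The quadratic penalty keeps the objective strongly convex, so a fixed small step size suffices here; the article's point is that a specialized algorithm can exploit this structure better than general-purpose BFGS.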

3.
Jump-detection and curve estimation methods for discontinuous regression functions are proposed in this article. First, two estimators of the regression function based on B-splines are considered. The first estimator is obtained when the knot sequence is quasi-uniform; by adding a knot with multiplicity p + 1 at a fixed point x0 in the support [a, b], we obtain the second estimator. The jump locations are then detected through the behavior of the difference of the residual sums of squares, DRSS(x0), for x0 ∈ (a, b); subsequently, the regression function with jumps can be fitted by a piecewise B-spline function. Asymptotic properties are established under mild conditions. Several numerical examples using both simulated and real data are presented to evaluate the performance of the proposed method.

4.
The problem of estimating ordered parameters is encountered in biological, agricultural, reliability, and various other experiments. Consider two populations with densities f1(x|ω1) and f2(x|ω2), where ω1 ≤ ω2. The estimation of (ω1, ω2) under the loss function given by the sum of squared errors is studied. When fi is the N(ωi, σi2) density with σi known, i = 1, 2, we obtain a class of minimax estimators. When ω1 ≤ ω2, we show that some of these estimators are improved by the maximum likelihood estimator. For a general fi we give sufficient conditions for the minimaxity of the analogue of the Pitman estimator.

5.
In linear regression models, predictors based on least squares or generalized least squares estimators are usually applied; these, however, fail in the case of multicollinearity. As alternatives, biased estimators such as ridge estimators, Kuks-Olman estimators, Bayes estimators, or minimax estimators are sometimes suggested. In our analysis the relative, instead of the generally used absolute, squared error enters the objective function. An explicit minimax solution is derived which, in an important special case, can be viewed as a predictor based on a Kuks-Olman estimator.

6.
We consider the simple linear calibration problem where only the response y of the regression line y = β0 + β1t is observed with error; the experimental conditions t are observed without error. For the errors in the observations y we allow for gross errors producing outlying observations. This situation can be modeled by a conditionally contaminated regression model, in which the classical calibration estimator based on the least squares estimator has an unbounded asymptotic bias. We therefore introduce calibration estimators based on robust one-step M-estimators, which have bounded asymptotic bias. For this class of estimators we discuss two problems: the optimal estimators and their corresponding optimal designs. We derive the locally optimal solutions and show that the maximin efficient designs for non-robust estimation and robust estimation coincide.

7.
For the regression model y = Xβ + ε, where the errors follow an elliptically contoured distribution, we consider the least squares, restricted least squares, preliminary test, Stein-type shrinkage, and positive-rule shrinkage estimators for the regression parameters β.

We compare the quadratic risks of the estimators to determine the relative dominance properties of the five estimators.

8.

We consider adaptive ridge regression estimators in the general linear model with homogeneous spherically symmetric errors. A restriction on the regression parameter is considered: all components are assumed nonnegative (i.e., the parameter lies in the positive orthant). For this setting, we produce under general quadratic loss estimators whose risk function dominates that of the least squares estimator provided the number of regressors is at least four.

9.
J. Kleffe, Statistics, 2013, 47(2): 233-250
The subject of this contribution is a survey of new methods for variance component estimation that have appeared in the literature in recent years. Starting from mixed models treated in the analysis of variance, research in this field turned to a more general approach in which the covariance matrix of the vector of observations is assumed to be an unknown linear combination of known symmetric matrices. Much interest has been shown in developing various kinds of optimal estimators for the unknown parameters, and most results were obtained for estimators that are invariant with respect to a certain group of translations; we therefore restrict attention to this class of estimates. We deal with minimum variance unbiased estimators, least squared errors estimators, maximum likelihood estimators, and Bayes quadratic estimators, and show some relations to the minimum norm quadratic unbiased estimation principle (MINQUE) introduced by C. R. Rao [20]. We do not discuss the original motivation of MINQUE, since the notion of minimum norm depends on a measure that is not accepted by all statisticians. We also do not deal with other approaches such as the Bayesian and fiducial methods, which were successfully applied, although only in very special situations, by S. Portnoy [18], P. Rusolph [22], G. C. Tiao and W. Y. Tan [28], M. J. K. Healy [9], and others. Additionally, we add some new results and new insight into the properties of known estimators. We give a new characterization of MINQUE in the class of all estimators, extend explicit expressions for locally optimal quadratic estimators given by C. R. Rao [22] to a slightly more general situation, and prove complete class theorems useful for the computation of Bayes quadratic estimators. We also investigate situations in which Bayes quadratic unbiased estimators do not change if the distribution of the error terms differs from the normal distribution.

10.
In this paper we seek designs and estimators that are optimal in some sense for multivariate linear regression on cubes and simplices when the true regression function is unknown. More precisely, we assume that the unknown true regression function is the sum of a linear part and a contamination term orthogonal to the set of all linear functions in the L2 norm with respect to Lebesgue measure. The contamination is assumed bounded in absolute value, and it is shown that the usual designs for multivariate linear regression on cubes and simplices, together with the usual least squares estimators, minimize the supremum over all possible contaminations of the expected mean square error. Additional results for extrapolation and interpolation, among other things, are discussed. For suitable loss functions, optimal designs are found to have support on the extreme points of the design space.

11.
This paper introduces a novel hybrid regression method (MixReg) combining two linear regression methods: ordinary least squares (OLS) and least squares ratio (LSR) regression. LSR regression finds the regression coefficients minimizing the sum of squared error rates, while OLS minimizes the sum of squared errors itself. The goal of this study is to combine the two methods so that the proposed method is superior to both OLS and LSR regression in terms of the R2 statistic and the relative error rate. Applications of MixReg, on both simulated and real data, show that MixReg outperforms both OLS and LSR regression.
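The two component criteria can be sketched as follows. Assuming the squared "error rate" means the squared relative error ((y_i - x_i'b)/y_i)^2, LSR reduces to an OLS fit of a vector of ones on the row-rescaled design X/y (this requires every y_i to be nonzero). The MixReg combination rule itself is not specified in the abstract, so only the two building blocks are shown:

```python
import numpy as np

def ols(X, y):
    """Ordinary least squares: minimize the sum of squared errors."""
    return np.linalg.lstsq(X, y, rcond=None)[0]

def lsr(X, y):
    """Least squares ratio: minimize the sum of squared relative errors
    ((y_i - x_i'b) / y_i)^2, assuming every y_i is nonzero. Dividing each
    row by y_i turns this into OLS of a vector of ones on X / y."""
    return np.linalg.lstsq(X / y[:, None], np.ones_like(y), rcond=None)[0]

# Sanity check on exact (noise-free) data: both recover the same line
x = np.linspace(1.0, 10.0, 50)
X = np.column_stack([np.ones_like(x), x])
y = 5.0 + 2.0 * x
b_ols, b_lsr = ols(X, y), lsr(X, y)
```

On noisy data the two criteria differ: LSR downweights observations with large |y_i|, which is what makes it attractive when relative error matters.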

12.
In regression analysis, to overcome the problem of multicollinearity, the r-k class estimator is proposed as an alternative to the ordinary least squares estimator; it is a general estimator that includes the ordinary ridge regression estimator, the principal components regression estimator, and the ordinary least squares estimator as special cases. In this article, we derive the necessary and sufficient conditions for the superiority of the r-k class estimator over each of these estimators under the Mahalanobis loss function by the average loss criterion. We then compare these estimators with each other using the same criterion. We also suggest a test to verify whether these conditions are indeed satisfied. Finally, a numerical example and a Monte Carlo simulation illustrate the theoretical results.
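In the usual spectral notation, the r-k class estimator keeps the leading r principal components of X'X and applies ridge shrinkage k to them: r = p with k = 0 recovers OLS, r = p gives ordinary ridge regression, and k = 0 gives principal components regression. A small illustrative sketch (the function name is hypothetical):

```python
import numpy as np

def r_k_class(X, y, r, k):
    """r-k class estimator: project onto the first r principal components
    of X'X and apply ridge shrinkage k there. Special cases: r = p and
    k = 0 is OLS; r = p is ordinary ridge; k = 0 is principal components
    regression."""
    lam, T = np.linalg.eigh(X.T @ X)              # ascending eigenvalues
    order = np.argsort(lam)[::-1]                 # reorder descending
    lam, T = lam[order], T[:, order]
    Tr, lam_r = T[:, :r], lam[:r]
    return Tr @ ((Tr.T @ (X.T @ y)) / (lam_r + k))

rng = np.random.default_rng(2)
X = rng.normal(size=(40, 3))
y = X @ np.array([1.0, 2.0, 3.0]) + rng.normal(size=40)
b_ols = np.linalg.lstsq(X, y, rcond=None)[0]
```

With r = p and k = 0 the spectral decomposition simply reassembles (X'X)^{-1}X'y, while k > 0 shrinks every component, giving a shorter coefficient vector than OLS.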

13.
This paper studies partially time-varying coefficient models in which some covariates are measured with additive errors. To overcome the bias of the usual profile least squares estimation when measurement errors are ignored, we propose a modified profile least squares estimator of the regression parameter and construct estimators of the nonlinear coefficient function and the error variance. The three proposed estimators are proved to be asymptotically normal under mild conditions. In addition, we introduce the profile likelihood ratio test and demonstrate that it follows an asymptotically χ2 distribution under the null hypothesis. Finite sample behavior of the estimators is also investigated via simulations.

14.
Methods for linear regression with multivariate response variables are well described in the statistical literature. In this study we conduct a theoretical evaluation of the expected squared prediction error in bivariate linear regression where one of the response variables contains missing data. We assume a known covariance structure for the error terms and, on this basis, evaluate three well-known estimators: standard ordinary least squares, generalized least squares, and a James-Stein inspired estimator. Theoretical risk functions are worked out for all three estimators to evaluate under which circumstances it is advantageous to take the error covariance structure into account.

15.
Consider the problem of pointwise estimation of f in the multivariate isotonic regression model Z = f(X1, …, Xd) + ϵ, where Z is the response variable, f is an unknown nonparametric regression function that is isotonic with respect to each component, and ϵ is the error term. In this article, we investigate the behavior of the least squares estimator of f. We generalize the greatest convex minorant characterization of the isotonic regression estimator to the multivariate case and use it to establish the asymptotic distribution of a properly normalized version of the estimator. Moreover, based on this estimator we test whether the multivariate isotonic regression function at a fixed point is larger (or smaller) than a specified value, and the consistency of the test is established. The practicability of the estimator and the test is demonstrated on simulated and real data.

16.
A simple estimation procedure, based on the generalized least squares method, for the parameters of the Weibull distribution is described and investigated. Through a simulation study, this estimation technique is compared with maximum likelihood estimation, ordinary least squares estimation, and Menon's estimation procedure; the comparison is based on observed relative efficiencies (that is, the ratio of the Cramer-Rao lower bound to the observed mean squared error). Simulation results are presented for samples of size 25. Among the estimators considered in this simulation study, the generalized least squares estimator was found to be the "best" estimator for the shape parameter and a close competitor to the maximum likelihood estimator for the scale parameter.
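A common least-squares approach of this kind linearizes the Weibull CDF: with F(x) = 1 - exp(-(x/lam)^k), we get ln(-ln(1 - F(x))) = k*ln(x) - k*ln(lam), so regressing the left side (with plotting positions for F) on ln(x) estimates the shape as the slope. The sketch below is the ordinary least squares version with median-rank plotting positions; the generalized least squares variant studied in the paper would additionally weight by the covariance structure of the order statistics:

```python
import numpy as np

def weibull_ls(sample):
    """Least squares estimates of Weibull shape k and scale lam via the
    linearization ln(-ln(1 - F(x))) = k*ln(x) - k*ln(lam), using
    median-rank plotting positions for F."""
    x = np.sort(sample)
    n = len(x)
    F = (np.arange(1, n + 1) - 0.3) / (n + 0.4)    # median ranks
    slope, intercept = np.polyfit(np.log(x), np.log(-np.log(1.0 - F)), 1)
    return slope, np.exp(-intercept / slope)       # shape, scale

rng = np.random.default_rng(3)
data = 3.0 * rng.weibull(2.0, size=2000)           # true shape 2, scale 3
k_hat, lam_hat = weibull_ls(data)
```

With a large sample both estimates land close to the true parameters; the paper's simulations use the much harder case of n = 25, where the choice among these estimators matters.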

17.
This article is concerned with the problem of multicollinearity in a linear model with linear restrictions. After introducing a spherical restricted condition, a new restricted ridge estimation method is proposed by minimizing the sum of squared residuals. The superiority of the new estimator over the ordinary restricted least squares estimator is then theoretically analyzed. Furthermore, a necessary and sufficient condition for selecting the ridge parameter k is obtained; to simplify the selection, a sufficient condition is also given. Finally, a numerical example demonstrates the merit of the new method in addressing multicollinearity relative to ordinary restricted least squares estimation.

18.

In this paper, we consider the estimation problem for the parameter vector in the linear regression model with heteroscedastic errors. First, under heteroscedastic errors, we study the performance of shrinkage-type estimators compared with the unrestricted and restricted least squares estimators. In order to accommodate the heteroscedastic structure, we generalize an identity that is useful in deriving the risk function; thanks to this identity, we prove that the shrinkage estimators dominate the unrestricted estimator. Finally, we explore the performance of a high-dimensional heteroscedastic regression estimator compared with the classical LASSO and shrinkage estimators.

19.
The assumption of independent error terms in a linear regression model often fails in practice, so linear regression models with correlated error terms arise in many applications. According to earlier studies, such error terms can undermine the robustness of the linear regression analysis. It has also been shown that the robustness of the parameter estimators can be retained by using M-estimators, although this robustness comes at the cost of efficiency; the minimum Matusita distance estimators, by contrast, possess both robustness and efficiency at the same time. On the other hand, the Cochrane-Orcutt adjusted least squares estimators are not affected by the dependence of the error terms and are therefore efficient. Here we use a non-parametric kernel density estimation method to obtain minimum Matusita distance estimators for the linear regression model with correlated error terms in the presence of outliers. Simulation and real data studies are conducted for the proposed estimation method; in each case, it yields lower biases and mean squared errors than the other two methods.
KEYWORDS: Robust estimation method; minimum Matusita distance estimation method; non-parametric kernel density estimation method; correlated error terms; outliers

20.
Several estimators of squared prediction error have been suggested for use in model and bandwidth selection problems. Among these are cross-validation, generalized cross-validation, and a number of related techniques based on the residual sum of squares. For many situations with squared error loss, e.g. nonparametric smoothing, these estimators have been shown to be asymptotically optimal in the sense that, in large samples, the estimator minimizing the selection criterion also minimizes squared error loss. However, cross-validation is known not to be asymptotically optimal for some 'easy' location problems. We consider selection criteria based on estimators of squared prediction risk for choosing between location estimators. We show that criteria based on adjusted residual sums of squares are not asymptotically optimal for choosing between asymptotically normal location estimators that converge at rate n1/2, but are asymptotically optimal when the rate of convergence is slower. We also show that leave-one-out cross-validation is not asymptotically optimal for choosing between √n-differentiable statistics, but leave-d-out cross-validation is optimal when d → ∞ at the appropriate rate.
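Leave-one-out cross-validation for choosing between location estimators, the setting discussed above, can be sketched as follows. For n^{1/2}-consistent estimators such as the mean and the median on Gaussian data, both CV scores estimate essentially the same prediction error, which illustrates why the criterion fails to separate such estimators asymptotically:

```python
import numpy as np

def loo_cv(sample, estimator):
    """Leave-one-out CV estimate of the squared prediction error of a
    location estimator: predict each held-out point by the estimator
    computed on the remaining observations."""
    n = len(sample)
    return np.mean([(sample[i] - estimator(np.delete(sample, i))) ** 2
                    for i in range(n)])

rng = np.random.default_rng(4)
z = rng.normal(size=200)                     # N(0, 1) data
score_mean = loo_cv(z, np.mean)
score_median = loo_cv(z, np.median)
```

Both scores are dominated by the irreducible noise variance (about 1 here); the O(1/n) difference between the estimators' variances is swamped, which is the phenomenon the leave-d-out result addresses.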
