Similar Articles
20 similar articles found (search time: 22 ms)
1.
Sielken and Hartley (1973) have shown that the L1 and L∞ estimation problems may be formulated in such a way as to yield unbiased estimators of β in the standard linear model y = Xβ + ε. In this paper we show that the L1 estimation problem is closely related to the dual of the L∞ estimation problem and vice versa. We use this result to obtain four distinct linear programming problems which yield unbiased L1 and L∞ estimators of β.
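The linear programming formulation behind such L1 estimators can be sketched concretely. The snippet below is a generic LP statement of least-absolute-deviations regression in SciPy (variable names and solver are our own illustration, not one of the paper's four specific programs): minimize 1ᵀ(u + v) subject to Xβ + u − v = y with u, v ≥ 0.

```python
# Sketch: simple linear L1 (least absolute deviations) regression as an LP.
# Illustrative only; assumes SciPy's HiGHS-based linprog.
import numpy as np
from scipy.optimize import linprog

def l1_regression(X, y):
    """Minimize sum |y - X b| via: min 1'(u+v) s.t. X b + u - v = y, u,v >= 0."""
    n, p = X.shape
    # decision vector: [b (free), u >= 0, v >= 0]
    c = np.concatenate([np.zeros(p), np.ones(2 * n)])
    A_eq = np.hstack([X, np.eye(n), -np.eye(n)])
    bounds = [(None, None)] * p + [(0, None)] * (2 * n)
    res = linprog(c, A_eq=A_eq, b_eq=y, bounds=bounds, method="highs")
    return res.x[:p]

# y = 1 + 2x with one gross outlier; the L1 fit ignores it.
x = np.arange(10.0)
y = 1.0 + 2.0 * x
y[5] += 50.0                      # single outlier
X = np.column_stack([np.ones_like(x), x])
b = l1_regression(X, y)           # ~ [1.0, 2.0]
```

Because nine of the ten points lie exactly on a line, the L1 optimum passes through them and the outlier contributes only its absolute residual.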

2.
The authors introduce a penalized minimum distance regression estimator. They show the estimator to balance, among a sequence of nested models of increasing complexity, the L1-approximation error of each model class and a penalty term which reflects the richness of each model and serves as an upper bound for the estimation error.

3.
A fast routine for converting regression algorithms into corresponding orthogonal regression (OR) algorithms was introduced in Ammann and Van Ness (1988). The present paper discusses the properties of various ordinary and robust OR procedures created using this routine. OR minimizes the sum of the orthogonal distances from the regression plane to the data points. OR has three types of applications. First, L2 OR is the maximum likelihood solution of the Gaussian errors-in-variables (EV) regression problem. This L2 solution is unstable, thus the robust OR algorithms created from robust regression algorithms should prove very useful. Secondly, OR is intimately related to principal components analysis. Therefore, the routine can also be used to create L1, robust, etc. principal components algorithms. Thirdly, OR treats the x and y variables symmetrically, which is important in many modeling problems. Using Monte Carlo studies this paper compares the performance of standard regression, robust regression, OR, and robust OR on Gaussian EV data, contaminated Gaussian EV data, heavy-tailed EV data, and contaminated heavy-tailed EV data.
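The connection to principal components noted above means the L2 OR line can be read off the SVD of the centered data: the smallest right-singular vector is the normal of the best-fitting line. A minimal sketch in plain NumPy (names are illustrative):

```python
# Sketch: L2 orthogonal regression (one predictor) via the SVD,
# i.e. the principal-components view of OR. Illustrative names.
import numpy as np

def orthogonal_fit(x, y):
    """Line minimizing the sum of squared *orthogonal* distances."""
    xm, ym = x.mean(), y.mean()
    Z = np.column_stack([x - xm, y - ym])
    # last right-singular vector = unit normal of the best-fitting line
    _, _, Vt = np.linalg.svd(Z, full_matrices=False)
    nx, ny = Vt[-1]
    slope = -nx / ny
    intercept = ym - slope * xm
    return intercept, slope

x = np.linspace(0.0, 10.0, 200)
y = 3.0 + 0.5 * x                 # noise-free line
a, b = orthogonal_fit(x, y)       # recovers intercept 3, slope 0.5
```

For noise-free collinear data the smallest singular value is zero, so the fit is exact; with EV noise in both x and y this estimator is consistent where ordinary least squares is not.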

4.
A number of efficient computer codes are available for the simple linear L1 regression problem. However, a number of these codes can be made more efficient by utilizing the least squares solution. In fact, a couple of available computer programs already do so.

We report the results of a computational study comparing several openly available computer programs for solving the simple linear L1 regression problem with and without computing and utilizing a least squares solution.

5.
Nonparametric regression techniques such as spline smoothing and local fitting depend implicitly on a parametric model. For instance, the cubic smoothing spline estimate of a regression function μ based on observations (ti, Yi) is the minimizer of Σ{Yi − μ(ti)}² + λ∫(μ″)². Since ∫(μ″)² is zero when μ is a line, the cubic smoothing spline estimate favors the parametric model μ(t) = α0 + α1t. Here the authors consider replacing ∫(μ″)² with the more general expression ∫(Lμ)², where L is a linear differential operator with possibly nonconstant coefficients. The resulting estimate of μ performs well, particularly if Lμ is small. They present an O(n) algorithm for the computation of the estimate, applicable to a wide class of L's. They also suggest a method for the estimation of L. They study their estimates via simulation and apply them to several data sets.
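The role of the penalty can be sketched on an equally spaced grid by replacing ∫(Lμ)² with a squared difference penalty ‖Dm‖² and solving a ridge-type linear system. Below, D is the second-difference matrix (the discrete analogue of the cubic-spline penalty L = d²/dt²); other choices of L correspond to other band matrices. This is our own discrete illustration, not the authors' O(n) algorithm:

```python
# Sketch: discrete penalized smoother min ||y - m||^2 + lam * ||D m||^2,
# with D the second-difference operator. Illustrative, not the paper's code.
import numpy as np

def penalized_smoother(y, lam):
    n = len(y)
    D = np.diff(np.eye(n), n=2, axis=0)          # (n-2) x n second differences
    return np.linalg.solve(np.eye(n) + lam * D.T @ D, y)

t = np.linspace(0.0, 1.0, 50)
y = 2.0 + 3.0 * t                                # exactly linear data
m = penalized_smoother(y, lam=100.0)
```

Lines lie in the null space of D, so linear data pass through unpenalized (m equals y exactly), mirroring the fact that ∫(μ″)² favors the model μ(t) = α0 + α1t.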

6.
Given the regression model Yi = m(xi) + εi (xi ∈ C, i = 1,…,n, C a compact set in R), where m is unknown and the random errors {εi} have an ARMA structure, we design a bootstrap method for testing the hypothesis that the regression function follows a general linear model: H0 : m ∈ {mθ(·) = At(·)θ : θ ∈ Θ ⊂ Rq}, with A a functional from R to Rq. The test criterion derives from a Cramér–von Mises type functional distance D = d²(m̂n, At(·)θ̂n) between m̂n, a Gasser–Müller nonparametric estimator of m, and the member of the class defined in H0 that is closest to m̂n in terms of this distance. The consistency of the bootstrap distribution of D and θ̂n is obtained under general conditions. Finally, simulations show the good behavior of the bootstrap approximation with respect to the asymptotic distribution of D.

7.
The authors analyze the L1 performance of wavelet density estimators. They prove that under mild conditions on the family of wavelets, such estimates are universally consistent in the L1 sense.

8.
Let f̂n,h denote the kernel density estimate based on a sample of size n drawn from an unknown density f. Using techniques from L2 projection density estimators, the author shows how to construct a data-driven estimator f̂n,ĥ whose expected L1 error is close to that of the best estimate in the family. This paper is inspired by work of Stone (1984), Devroye and Lugosi (1996) and Birgé and Massart (1997).
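The object f̂n,h under study is the classical kernel density estimate, which can be sketched in a few lines of NumPy (Gaussian kernel; the bandwidth here is fixed by hand and is not the author's data-driven selection rule):

```python
# Sketch: Gaussian kernel density estimate f_hat(x) = (1/(n h)) sum K((x-X_i)/h).
# Illustrative; the fixed bandwidth h is our assumption, not a selected one.
import numpy as np

def kde(x_grid, sample, h):
    u = (x_grid[:, None] - sample[None, :]) / h
    K = np.exp(-0.5 * u**2) / np.sqrt(2.0 * np.pi)   # Gaussian kernel
    return K.mean(axis=1) / h

rng = np.random.default_rng(1)
sample = rng.standard_normal(2000)
grid = np.linspace(-4.0, 4.0, 401)
fhat = kde(grid, sample, h=0.3)
mass = fhat.sum() * (grid[1] - grid[0])   # Riemann sum, should be near 1
```

The estimate is nonnegative and integrates to one (up to grid truncation), which is why the L1 distance ∫|f̂n,h − f| is the natural error criterion in this literature.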

9.
We propose a new adaptive L1-penalized quantile regression estimator for high-dimensional sparse regression models with heterogeneous error sequences. We show that, under weaker conditions than alternative procedures require, the adaptive L1 quantile regression selects the true underlying model with probability converging to one, and the unique estimates of nonzero coefficients it provides have the same asymptotic normal distribution as the quantile estimator which uses only the covariates with nonzero impact on the response. Thus, the adaptive L1 quantile regression enjoys oracle properties. We propose a completely data-driven choice of the penalty level λn, which ensures good performance of the adaptive L1 quantile regression. Extensive Monte Carlo simulation studies have been conducted to demonstrate the finite-sample performance of the proposed method.

10.
Let X1, …, Xn be a random sample from a normal distribution with mean θ and variance σ². The problem is to estimate θ with loss function L(θ, e) = v(e − θ), where v(x) = b(exp(ax) − ax − 1) and where a, b are constants with b > 0, a ≠ 0. Zellner (1986) showed that X̄ − σ²a/(2n) dominates X̄, and hence X̄ is inadmissible. The question of what values of c and d render cX̄ + d admissible is studied here.

11.
Elliott and Müller (2006) considered the problem of testing for general types of parameter variations, including infrequent breaks. They developed a framework that yields optimal tests, in the sense that they nearly attain some local Gaussian power envelope. The main ingredient in their setup is that the variance of the process generating the changes in the parameters must go to zero at a fast rate. They recommended the so-called qLL test, a partial-sums type test based on the residuals obtained from the restricted model. We show that for breaks that are very small, its power is indeed higher than that of other tests, including the popular sup-Wald (SW) test. However, the differences are very minor. When the magnitude of change is moderate to large, the power of the qLL test is very low in the context of a regression with lagged dependent variables or when a correction is applied to account for serial correlation in the errors. In many cases, the power goes to zero as the magnitude of change increases. The SW test does not show this non-monotonicity, and its power is far superior to that of the qLL test when the break is not very small. We claim that the optimality of the qLL test comes not from the properties of the test statistic but from the criterion adopted, which is not useful for analyzing structural change tests. Instead, we use fixed-break-size asymptotic approximations to assess the relative efficiency or power of the two tests. When doing so, it is shown that the SW test indeed dominates the qLL test and, in many cases, the latter has zero relative asymptotic efficiency.

12.
This paper proposes robust regression to solve the problem of outliers in seemingly unrelated regression (SUR) models. The authors present an adaptation of S-estimators to SUR models. S-estimators are robust, have a high breakdown point and are much more efficient than other robust regression estimators commonly used in practice. Furthermore, modifications to Ruppert's algorithm allow a fast evaluation of them in this context. The classical example of U.S. corporations is revisited, and it appears that the procedure gives an interesting insight into the problem.

13.
Several methods have been suggested to calculate robust M- and GM-estimators of the regression parameter β and of the error scale parameter σ in a linear model. This paper shows that, for some data sets well known in robust statistics, the nonlinear systems of equations for the simultaneous estimation of β, with an M-estimate with a redescending ψ-function, and σ, with the residual median absolute deviation (MAD), have many solutions. This multiplicity is not caused by the possible lack of uniqueness, for redescending ψ-functions, of the solutions of the system defining β with known σ; rather, the simultaneous estimation of β and σ creates the problem. A way to avoid these multiple solutions is to proceed in two steps. First take σ as the median absolute deviation of the residuals of a uniquely defined robust M-estimate such as Huber's Proposal 2 or the L1-estimate. Then solve the nonlinear system for the M-estimate with σ equal to the value obtained at the first step to get the estimate of β. Analytical conditions for the uniqueness of M- and GM-estimates are also given.
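The two-step recipe can be sketched as follows, assuming a (monotone) Huber ψ solved by iteratively reweighted least squares. For brevity the initial scale is the MAD of least-squares residuals rather than of an L1 or Proposal 2 fit, so this is an illustration of the two-step idea, not the paper's exact procedure:

```python
# Sketch: two-step M-estimation. Step 1: fix sigma by the (normalized) MAD
# of initial residuals. Step 2: solve the Huber M-estimating equations for
# beta by IRLS with sigma held fixed. Illustrative names and constants.
import numpy as np

def huber_m_estimate(X, y, c=1.345, n_iter=50):
    beta = np.linalg.lstsq(X, y, rcond=None)[0]            # LS start
    r = y - X @ beta
    sigma = np.median(np.abs(r - np.median(r))) / 0.6745   # fixed MAD scale
    for _ in range(n_iter):
        r = y - X @ beta
        u = r / (sigma + 1e-12)
        w = np.where(np.abs(u) <= c, 1.0, c / np.abs(u))   # Huber weights
        sw = np.sqrt(w)
        beta = np.linalg.lstsq(X * sw[:, None], y * sw, rcond=None)[0]
    return beta, sigma

x = np.arange(20.0)
X = np.column_stack([np.ones_like(x), x])
y = 1.0 + 2.0 * x
y[3] += 30.0                                               # one outlier
beta, s = huber_m_estimate(X, y)
```

Because σ is fixed before the β-iteration starts, the IRLS objective is convex (monotone ψ) and the fixed point is unique, which is exactly how the two-step scheme sidesteps the multiple-solution problem described above.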

14.
By modifying the direct method to solve the overdetermined linear system we are able to present an algorithm for L1 estimation which appears to be superior computationally to any other known algorithm for the simple linear regression problem.

15.
Because outliers and leverage observations unduly affect least squares regression, the identification of influential observations is considered an important and integral part of the analysis. However, very few techniques have been developed for residual analysis and diagnostics for the minimum sum of absolute errors, L1 regression. Although L1 regression is more resistant to outliers than least squares regression, outliers (leverage points) in the predictor variables may still affect it. In this paper, our objective is to develop an influence measure for L1 regression based on the likelihood displacement function. We illustrate the proposed influence measure with examples.

16.
Let S (p × p) have a Wishart distribution with parameter matrix Σ and n degrees of freedom. We consider here the problem of estimating the precision matrix Σ⁻¹ under the loss functions L1(Σ̂⁻¹) = tr(ΣΣ̂⁻¹) − log|ΣΣ̂⁻¹| − p and L2(Σ̂⁻¹) = tr(ΣΣ̂⁻¹ − I)². James–Stein-type estimators have been derived for arbitrary p. We also obtain an orthogonally invariant and a diagonally invariant minimax estimator under both loss functions. A Monte Carlo simulation study indicates that the risk improvement of the orthogonally invariant estimators over the James–Stein type estimators, the Haff (1979) estimator, and the "testimator" given by Sinha and Ghosh (1987) is substantial.

17.
Given an unknown function f (e.g. a probability density, a regression function, …) and a constant c, the problem of estimating the level set L(c) = {f ≥ c} is considered. This problem is tackled in a very general framework, which allows f to be defined on a metric space different from Rd. Such a degree of generality is motivated by practical considerations and, in fact, an example with astronomical data is analyzed where the domain of f is the unit sphere. A plug-in approach is followed; that is, L(c) is estimated by Ln(c) = {fn ≥ c}, where fn is an estimator of f. Two results are obtained concerning consistency and convergence rates, with respect to the Hausdorff metric, of the boundaries ∂Ln(c) towards ∂L(c). Also, the consistency of Ln(c) to L(c) is shown, under mild conditions, with respect to the L1 distance. Special attention is paid to the particular case of spherical data.

18.
L1-type regularization provides a useful tool for variable selection in high-dimensional regression modeling. Various algorithms have been proposed to solve optimization problems with L1-type regularization; the coordinate descent algorithm, in particular, has been shown to be effective in sparse regression modeling. Although the algorithm shows a remarkable performance on optimization problems with L1-type regularization, it suffers from outliers, since the procedure is based on the inner product of predictor variables and partial residuals obtained in a non-robust manner. To overcome this drawback, we propose a robust coordinate descent algorithm, especially focusing on high-dimensional regression modeling based on the principal components space. We show that the proposed robust algorithm converges to the minimum value of its objective function. Monte Carlo experiments and real data analysis are conducted to examine the efficiency of the proposed robust algorithm. We observe that our robust coordinate descent algorithm performs effectively for high-dimensional regression modeling even in the presence of outliers.
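The (non-robust) coordinate descent update the abstract builds on, soft-thresholding the inner product of each predictor with its partial residual, can be sketched as follows (our own minimal lasso implementation, not the authors' robust variant):

```python
# Sketch: coordinate descent for the lasso,
# min (1/2)||y - X b||^2 + lam * ||b||_1. Illustrative, unoptimized.
import numpy as np

def soft_threshold(z, t):
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def lasso_cd(X, y, lam, n_iter=200):
    n, p = X.shape
    beta = np.zeros(p)
    col_sq = (X**2).sum(axis=0)
    for _ in range(n_iter):
        for j in range(p):
            # partial residual: remove all fitted effects except coordinate j
            r_j = y - X @ beta + X[:, j] * beta[j]
            beta[j] = soft_threshold(X[:, j] @ r_j, lam) / col_sq[j]
    return beta

rng = np.random.default_rng(2)
X = rng.standard_normal((100, 10))
true = np.zeros(10)
true[:2] = [3.0, -2.0]                    # sparse truth
y = X @ true
beta = lasso_cd(X, y, lam=1.0)
```

The inner product X[:, j] @ r_j in the update is exactly the non-robust quantity the abstract points to: a single gross outlier in y enters it linearly, which is what motivates replacing it with a robustified version.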

19.
In the classical regression model Yi = h(xi) + εi, i = 1,…,n, Cheng (1984) introduced linear combinations of regression quantiles as a new class of estimators for the unknown regression function h(x). The asymptotic properties studied in Cheng (1984) are reconsidered. We obtain a sharper strong consistency rate, and we improve on the conditions for asymptotic normality by proving a new result on the remainder term in the Bahadur representation for regression quantiles.

20.
In many experiments, not all explanatory variables can be controlled. When the units arise sequentially, different approaches may be used. The authors study a natural sequential procedure for "marginally restricted" D-optimal designs. They assume that one set of explanatory variables (x1) is observed sequentially, and that the experimenter responds by choosing an appropriate value of the explanatory variable x2. In order to solve the sequential problem a priori, the authors consider the problem of constructing optimal designs with a prior marginal distribution for x1. This eliminates the influence of units already observed on the next unit to be designed. They give explicit designs for various cases in which the mean response follows a linear regression model; they also consider a case study with a nonlinear logistic response. They find that the optimal strategy often consists of randomizing the assignment of the values of x2.


Copyright © Beijing Qinyun Technology Development Co., Ltd. (北京勤云科技发展有限公司) | ICP licence: 京ICP备09084417号