Similar Literature (20 records found)
1.
We present a concise summary of recent progress in developing algorithms for restricted least absolute value (LAV) estimation (i.e., ℓ1 approximation subject to linear constraints). The emphasis is on our own new algorithm, and we provide some numerical results obtained with it.

2.
The Barrodale and Roberts algorithm for least absolute value (LAV) regression and the algorithm proposed by Bartels and Conn both have the advantage that they are often able to skip across points at which the conventional simplex-method algorithms for LAV regression would be required to carry out an (expensive) pivot operation.

We indicate here that this advantage holds in the Bartels-Conn approach for a wider class of problems: the minimization of piecewise linear functions. We show how LAV regression, restricted LAV regression, general linear programming and least maximum absolute value regression can all be easily expressed as piecewise linear minimization problems.
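For readers who want to see one of these reformulations concretely, the sketch below casts unconstrained LAV regression as a linear program by splitting each residual into nonnegative parts. It is an illustrative formulation solved with scipy.optimize.linprog, not the Bartels-Conn or Barrodale-Roberts code itself; the function name lav_fit and the test data are invented for the example.

```python
import numpy as np
from scipy.optimize import linprog

def lav_fit(X, y):
    """Minimize sum_i |y_i - x_i' beta| by splitting each residual into
    nonnegative parts u_i, v_i:  min sum(u + v)  s.t.  X beta + u - v = y."""
    n, p = X.shape
    # decision vector: [beta (free), u (>= 0), v (>= 0)]
    c = np.concatenate([np.zeros(p), np.ones(n), np.ones(n)])
    A_eq = np.hstack([X, np.eye(n), -np.eye(n)])
    bounds = [(None, None)] * p + [(0, None)] * (2 * n)
    res = linprog(c, A_eq=A_eq, b_eq=y, bounds=bounds, method="highs")
    return res.x[:p]

# small usage example with heavy-tailed noise
rng = np.random.default_rng(0)
X = np.column_stack([np.ones(50), rng.normal(size=50)])
y = X @ np.array([1.0, 2.0]) + rng.standard_t(df=2, size=50)
print(lav_fit(X, y))
```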

3.
The L1-type regularization provides a useful tool for variable selection in high-dimensional regression modeling. Various algorithms have been proposed to solve optimization problems for L1-type regularization. In particular, the coordinate descent algorithm has been shown to be effective in sparse regression modeling. Although the algorithm shows remarkable performance in solving optimization problems for L1-type regularization, it suffers from outliers, since the procedure is based on the inner product of predictor variables and partial residuals obtained in a non-robust manner. To overcome this drawback, we propose a robust coordinate descent algorithm, focusing especially on high-dimensional regression modeling based on the principal components space. We show that the proposed robust algorithm converges to the minimum value of its objective function. Monte Carlo experiments and real data analysis are conducted to examine the efficiency of the proposed robust algorithm. We observe that our robust coordinate descent algorithm performs effectively for high-dimensional regression modeling even in the presence of outliers.
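As background for the inner-product update the abstract refers to, here is a minimal sketch of plain (non-robust) coordinate descent for the lasso; the robust modification proposed in the paper is not reproduced. The name lasso_cd and the penalty scaling are illustrative assumptions.

```python
import numpy as np

def soft_threshold(z, gamma):
    return np.sign(z) * np.maximum(np.abs(z) - gamma, 0.0)

def lasso_cd(X, y, lam, n_iter=100):
    """Coordinate descent for (1/(2n))||y - X beta||^2 + lam * ||beta||_1."""
    n, p = X.shape
    beta = np.zeros(p)
    col_sq = (X ** 2).sum(axis=0)
    for _ in range(n_iter):
        for j in range(p):
            # partial residual excluding predictor j
            r_j = y - X @ beta + X[:, j] * beta[j]
            # inner product of predictor j with the partial residual
            z = X[:, j] @ r_j
            beta[j] = soft_threshold(z, n * lam) / col_sq[j]
    return beta
```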

4.
The resistance of least absolute values (L1) estimators to outliers and their robustness to heavy-tailed distributions make these estimators useful alternatives to the usual least squares estimators. The recent development of efficient algorithms for L1 estimation in linear models has permitted their use in practical data analysis. Although in general the L1 estimators are not unique, there are a number of properties they all share. The set of all L1 estimators for a given model and data set can be characterized as the convex hull of some extreme estimators. Properties of the extreme estimators and of the L1-estimate set are considered.

5.
Dummy (0, 1) variables are frequently used in statistical modeling to represent the effect of certain extraneous factors. This paper presents a special purpose linear programming algorithm for obtaining least-absolute-value estimators in a linear model with dummy variables. The algorithm employs a compact basis inverse procedure and incorporates the advanced basis exchange techniques available in specialized algorithms for the general linear least-absolute-value problem. Computational results with a computer code version of the algorithm are given.

6.
A number of efficient computer codes are available for the simple linear L1 regression problem. However, several of these codes can be made more efficient by utilizing the least squares solution. In fact, a couple of available computer programs already do so.

We report the results of a computational study comparing several openly available computer programs for solving the simple linear L1 regression problem with and without computing and utilizing a least squares solution.
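One generic way a least squares solution can seed an L1 fit, not necessarily the device used by the codes in this study, is to start an iteratively reweighted least squares (IRLS) approximation to LAV regression from the OLS estimate, as in the hedged sketch below.

```python
import numpy as np

def l1_irls(X, y, n_iter=50, eps=1e-6):
    """Approximate LAV regression by IRLS, warm-started at the OLS solution."""
    beta = np.linalg.lstsq(X, y, rcond=None)[0]    # least squares warm start
    for _ in range(n_iter):
        r = y - X @ beta
        w = 1.0 / np.maximum(np.abs(r), eps)       # downweight large residuals
        sw = np.sqrt(w)
        beta = np.linalg.lstsq(sw[:, None] * X, sw * y, rcond=None)[0]
    return beta
```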

7.
The nonlinear least squares algorithm of Gill and Murray (1978) is extended and modified to solve nonlinear Lp-norm estimation problems efficiently. The new algorithm uses a mixture of first-order derivative (Gauss-Newton) and second-order derivative (Newton) search directions. A new rule for selecting the “grade” r of the p-Jacobian matrix Jp was also incorporated. This brought about rapid convergence of the algorithm on previously reported test examples.

8.
When one or a few observations are deleted from the multiple linear regression model, they can affect the variable selection. In this paper we derive the formula for the Mallows Cp criterion when k observations are deleted and express it as a function of basic building blocks such as residuals and leverages. Also, two real data sets are used to see how the selected model changes as a few observations are deleted.
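For orientation, the standard (no-deletion) Mallows Cp can be computed from the same building blocks, as in the sketch below; the deletion-adjusted formula derived in the paper is not reproduced here, and mallows_cp is an invented name.

```python
import numpy as np

def mallows_cp(X_sub, X_full, y):
    """Cp = RSS_sub / sigma2_full - n + 2 * p_sub, with sigma2 from the full model."""
    n = len(y)

    def rss_and_df(X):
        H = X @ np.linalg.pinv(X)          # hat matrix of the candidate model
        resid = y - H @ y
        return resid @ resid, X.shape[1]

    rss_full, p_full = rss_and_df(X_full)
    sigma2 = rss_full / (n - p_full)       # error variance estimate from full model
    rss_sub, p_sub = rss_and_df(X_sub)
    return rss_sub / sigma2 - n + 2 * p_sub
```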

9.
Sielken and Hartley (1973) have shown that the L1 and L∞ estimation problems may be formulated in such a way as to yield unbiased estimators of β in the standard linear model y = Xβ + ε. In this paper we will show that the L1 estimation problem is closely related to the dual of the L∞ estimation problem and vice versa. We will use this result to obtain four distinct linear programming problems which yield unbiased L1 and L∞ estimators of β.

10.
In this article, we propose a more general criterion, called the Sp-criterion, for subset selection in the multiple linear regression model. Many subset selection methods are based on the Least Squares (LS) estimator of β, but whenever the data contain an influential observation or the distribution of the error variable deviates from normality, the LS estimator performs ‘poorly’ and hence a method based on this estimator (for example, Mallows’ Cp-criterion) tends to select a ‘wrong’ subset. The proposed method overcomes this drawback and its main feature is that it can be used with any type of estimator (either the LS estimator or any robust estimator) of β without any need for modification of the proposed criterion. Moreover, this technique is operationally simple to implement as compared to other existing criteria. The method is illustrated with examples.

11.
In multiple linear regression analysis each lower-dimensional subspace L of a known linear subspace M of ℝn corresponds to a nonempty subset of the columns of the regressor matrix. For a fixed subspace L, the Cp statistic is an unbiased estimator of the mean square error if the projection of the response vector onto L is used to estimate the expected response. In this article, we consider two truncated versions of the Cp statistic that can also be used to estimate this mean square error. The Cp statistic and its truncated versions are compared in two example data sets, illustrating that use of the truncated versions may result in models different from those selected by standard Cp.

12.
We describe a method for fitting a least absolute residual (LAR) line through a set of two-dimensional points. The algorithm is based on a labeling technique derived from linear programming. It is suited for interactive data analysis and can be carried out with graph paper and a programmable hand calculator. Tests conducted with a Pascal program indicate that the algorithm is computationally efficient.

13.
Selection of appropriate predictors for right-censored time-to-event data is very often encountered by practitioners. We consider ℓ1-penalized regression, or the “least absolute shrinkage and selection operator”, as a tool for predictor selection in association with the accelerated failure time model. The choice of the penalizing parameter λ is crucial to identify the correct set of covariates. In this paper, we propose an information theory-based method to choose λ under the log-normal distribution. Furthermore, an efficient algorithm is discussed in the same context. The performance of the proposed λ and the algorithm is illustrated through simulation studies and a real data analysis. The convergence of the algorithm is also discussed.
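The rough sketch below only conveys the flavor of selecting λ by an information criterion over a grid; for brevity it ignores censoring and treats the log-normal AFT model as penalized least squares on log survival times, which the paper's actual proposal does not do. The function select_lambda_bic and the BIC form used are illustrative assumptions.

```python
import numpy as np
from sklearn.linear_model import Lasso

def select_lambda_bic(X, log_t, lambdas):
    """Pick the lasso penalty with the smallest BIC over a user-supplied grid."""
    n = len(log_t)
    best_bic, best_lam = np.inf, None
    for lam in lambdas:
        fit = Lasso(alpha=lam).fit(X, log_t)
        resid = log_t - fit.predict(X)
        df = np.count_nonzero(fit.coef_)    # nonzero coefficients as model size
        bic = n * np.log(resid @ resid / n) + df * np.log(n)
        if bic < best_bic:
            best_bic, best_lam = bic, lam
    return best_lam
```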

14.
The least squares estimator is usually applied when estimating the parameters in linear regression models. As this estimator is sensitive to departures from normality in the residual distribution, several alternatives have been proposed. The Lp norm estimators are one class of such alternatives. It has been proposed that the kurtosis of the residual distribution be taken into account when a choice of estimator in the Lp norm class is made (i.e., the choice of p). In this paper, the asymptotic variance of the estimators is used as the criterion in the choice of p. It is shown that when this criterion is applied, characteristics of the residual distribution other than the kurtosis (namely moments of order p-2 and 2p-2) are important.
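For reference, under standard M-estimation regularity conditions (and a suitably smooth, symmetric error density) the asymptotic variance factor of the Lp estimator is commonly written as below, which makes the role of the moments of order p-2 and 2p-2 explicit; this is the textbook form, not a formula quoted from the paper.

```latex
\omega^2(p) \;=\; \frac{E\!\left[\,|\varepsilon|^{2p-2}\right]}
                       {(p-1)^2\left(E\!\left[\,|\varepsilon|^{p-2}\right]\right)^2},
\qquad
\sqrt{n}\,\bigl(\hat{\beta}_p-\beta\bigr)\;\xrightarrow{d}\;
N\!\left(0,\;\omega^2(p)\,Q^{-1}\right),
\quad Q=\lim_{n\to\infty} X'X/n .
```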

15.
In this article, we propose a method of averaging generalized least squares estimators for linear regression models with heteroskedastic errors. The averaging weights are chosen to minimize Mallows’ Cp-like criterion. We show that the weight vector selected by our method is optimal. It is also shown that this optimality holds even when the variances of the error terms are estimated and the feasible generalized least squares estimators are averaged. The variances can be estimated parametrically or nonparametrically. Monte Carlo simulation results are encouraging. An empirical example illustrates that the proposed method is useful for predicting a measure of firms’ performance.
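A toy version of the weight-selection step, assuming ordinary least squares candidate fits and a known error variance rather than the feasible GLS estimators the paper actually averages, might look like the following; mallows_weights and its arguments are illustrative.

```python
import numpy as np
from scipy.optimize import minimize

def mallows_weights(y, fits, dfs, sigma2):
    """fits: (n, M) matrix of fitted values; dfs: parameter count of each model."""
    M = fits.shape[1]

    def criterion(w):
        resid = y - fits @ w
        # Cp-like criterion: squared error of averaged fit plus complexity penalty
        return resid @ resid + 2.0 * sigma2 * (np.asarray(dfs) @ w)

    cons = ({"type": "eq", "fun": lambda w: w.sum() - 1.0},)
    res = minimize(criterion, np.full(M, 1.0 / M), bounds=[(0, 1)] * M,
                   constraints=cons, method="SLSQP")
    return res.x
```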

16.
Estimating multivariate location and scatter with both affine equivariance and positive breakdown has always been difficult. A well-known estimator which satisfies both properties is the Minimum Volume Ellipsoid Estimator (MVE). Computing the exact MVE is often not feasible, so one usually resorts to an approximate algorithm. In the regression setup, algorithms for positive-breakdown estimators like Least Median of Squares typically recompute the intercept at each step, to improve the result. This approach is called intercept adjustment. In this paper we show that a similar technique, called location adjustment, can be applied to the MVE. For this purpose we use the Minimum Volume Ball (MVB), in order to lower the MVE objective function. An exact algorithm for calculating the MVB is presented. As an alternative to MVB location adjustment we propose L1 location adjustment, which does not necessarily lower the MVE objective function but yields more efficient estimates for the location part. Simulations compare the two types of location adjustment. We also obtain the maxbias curves of L1 and the MVB in the multivariate setting, revealing the superiority of L1.
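The multivariate L1 location estimate referred to here is the spatial median; a generic sketch of computing it with Weiszfeld's algorithm is given below. How it is combined with the MVE scatter part follows the paper, not this sketch.

```python
import numpy as np

def spatial_median(X, n_iter=200, tol=1e-8):
    """Weiszfeld iterations for the multivariate L1 (spatial) median."""
    mu = X.mean(axis=0)
    for _ in range(n_iter):
        d = np.linalg.norm(X - mu, axis=1)
        d = np.maximum(d, tol)            # avoid division by zero at data points
        w = 1.0 / d
        mu_new = (w[:, None] * X).sum(axis=0) / w.sum()
        if np.linalg.norm(mu_new - mu) < tol:
            break
        mu = mu_new
    return mu
```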

17.
This is the first of a projected series of papers dealing with computational experimentation in mathematical programming. This paper provides early results of a test case using four discrete linear L1 approximation codes. Variables influencing code behavior are identified and measures of performance are specified. More importantly, an experimental design is developed for assessing code performance and is illustrated using the variable “problem size”.

18.
In many real-life situations the linear cost function does not adequately approximate the actual cost incurred: the cost of traveling between the units selected in the sample within a stratum is significant. In this paper, we consider the problem of finding a compromise allocation for a multivariate stratified sample survey with significant within-stratum travel costs, formulated as a non-linear stochastic programming problem with multiple objective functions. The compromise solutions are obtained through the Chebyshev approximation technique, D1-distance, and goal programming. A numerical example is presented to illustrate the computational details of the proposed methods.

19.
In this paper, we discuss a parsimonious approach to estimation of high-dimensional covariance matrices via the modified Cholesky decomposition with lasso. Two different methods are proposed. They are the equi-angular and equi-sparse methods. We use simulation to compare the performance of the proposed methods with others available in the literature, including the sample covariance matrix, the banding method, and the L1-penalized normal loglikelihood method. We then apply the proposed methods to a portfolio selection problem using 80 series of daily stock returns. To facilitate the use of lasso in high-dimensional time series analysis, we develop the dynamic weighted lasso (DWL) algorithm that extends the LARS-lasso algorithm. In particular, the proposed algorithm can efficiently update the lasso solution as new data become available. It can also add or remove explanatory variables. The entire solution path of the L1-penalized normal loglikelihood method is also constructed.
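A bare-bones sketch of the modified Cholesky idea with a lasso penalty (regress each centered series on its predecessors, collect the negative coefficients into a unit lower-triangular factor, then invert) is shown below; the equi-angular and equi-sparse refinements and the DWL algorithm are not reproduced, and mod_cholesky_cov is an invented name.

```python
import numpy as np
from sklearn.linear_model import Lasso

def mod_cholesky_cov(X, lam):
    """Covariance estimate from lasso regressions in the modified Cholesky order.
    Assumes the columns of X are centered."""
    n, p = X.shape
    T = np.eye(p)                      # unit lower-triangular factor
    d = np.empty(p)                    # innovation variances
    d[0] = X[:, 0].var()
    for t in range(1, p):
        fit = Lasso(alpha=lam, fit_intercept=False).fit(X[:, :t], X[:, t])
        resid = X[:, t] - fit.predict(X[:, :t])
        T[t, :t] = -fit.coef_
        d[t] = resid.var()
    T_inv = np.linalg.inv(T)
    return T_inv @ np.diag(d) @ T_inv.T   # Sigma = T^{-1} D T^{-T}
```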

20.
In an earlier paper it was recommended that an experimental design for the study of a mixture system in which the components had lower and upper limits should consist of a subset of the vertices and centroids of the region defined by the limits on the components. This paper extends this methodology to the situation where linear combinations of two or more components (e.g., liquid content = x3 + x4 ≤ 0.35) are subject to lower and upper constraints. The CONSIM algorithm, developed by R. E. Wheeler, is recommended for computing the vertices of the resulting experimental region. Procedures for developing linear and quadratic mixture model designs are discussed. A five-component example which has two multiple-component constraints is included to illustrate the proposed methods of mixture experimentation.
