Similar Documents
20 similar documents found (search time: 31 ms)
1.
Dummy (0, 1) variables are frequently used in statistical modeling to represent the effect of certain extraneous factors. This paper presents a special purpose linear programming algorithm for obtaining least-absolute-value estimators in a linear model with dummy variables. The algorithm employs a compact basis inverse procedure and incorporates the advanced basis exchange techniques available in specialized algorithms for the general linear least-absolute-value problem. Computational results with a computer code version of the algorithm are given.
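For illustration, here is a minimal sketch of the generic linear-programming formulation underlying least-absolute-value estimation, solved with a general-purpose solver (scipy.optimize.linprog) rather than the paper's special-purpose compact-basis algorithm; the dummy-variable design in the example is made up.

```python
import numpy as np
from scipy.optimize import linprog

def lav_fit(X, y):
    """Least-absolute-value (L1) regression via a standard LP formulation:
    minimize sum(e_plus + e_minus)
    subject to X @ beta + e_plus - e_minus = y,  e_plus, e_minus >= 0."""
    n, p = X.shape
    c = np.concatenate([np.zeros(p), np.ones(2 * n)])       # cost only on the residual parts
    A_eq = np.hstack([X, np.eye(n), -np.eye(n)])             # X*beta + e+ - e- = y
    bounds = [(None, None)] * p + [(0, None)] * (2 * n)      # beta free, residuals non-negative
    res = linprog(c, A_eq=A_eq, b_eq=y, bounds=bounds, method="highs")
    return res.x[:p]

# Example: an intercept plus one dummy (0/1) group indicator
rng = np.random.default_rng(0)
group = rng.integers(0, 2, size=50)
X = np.column_stack([np.ones(50), group])
y = 1.0 + 0.5 * group + rng.standard_normal(50)
print(lav_fit(X, y))
```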

2.
The importance of the two-way classification model is well known, but the standard method of analysis is least squares. Often, the data call for a more robust estimation technique. This paper demonstrates the equivalence between the problem of obtaining least absolute value estimates for the two-way classification model and a capacitated transportation problem. A special purpose primal algorithm is developed to provide the least absolute value estimates. A computational comparison is made between an implementation of this specialized algorithm and a standard capacitated transportation code.

3.
The L1-type regularization provides a useful tool for variable selection in high-dimensional regression modeling. Various algorithms have been proposed to solve optimization problems for L1-type regularization; in particular, the coordinate descent algorithm has been shown to be effective in sparse regression modeling. Although the algorithm performs remarkably well on such optimization problems, it suffers from outliers, since the procedure is based on the inner product of predictor variables and partial residuals computed in a non-robust manner. To overcome this drawback, we propose a robust coordinate descent algorithm, focusing especially on high-dimensional regression modeling based on the principal components space. We show that the proposed robust algorithm converges to the minimum value of its objective function. Monte Carlo experiments and real data analysis are conducted to examine the efficiency of the proposed robust algorithm. We observe that our robust coordinate descent algorithm performs effectively for high-dimensional regression modeling even in the presence of outliers.
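As background, here is a minimal sketch of the standard (non-robust) coordinate descent step for the lasso, i.e. the inner-product/partial-residual update the abstract identifies as outlier-sensitive; the robust variant proposed in the paper would replace these plain inner products with robust counterparts, which this sketch does not attempt.

```python
import numpy as np

def soft_threshold(z, gamma):
    """Soft-thresholding operator S(z, gamma) = sign(z) * max(|z| - gamma, 0)."""
    return np.sign(z) * np.maximum(np.abs(z) - gamma, 0.0)

def lasso_cd(X, y, lam, n_iter=200):
    """Plain coordinate descent for the lasso:
       minimize (1/2n) ||y - X beta||^2 + lam * ||beta||_1."""
    n, p = X.shape
    beta = np.zeros(p)
    col_sq = (X ** 2).sum(axis=0) / n                 # per-column mean squares
    for _ in range(n_iter):
        for j in range(p):
            r_j = y - X @ beta + X[:, j] * beta[j]    # partial residual excluding feature j
            rho = X[:, j] @ r_j / n                   # inner product driving the update
            beta[j] = soft_threshold(rho, lam) / col_sq[j]
    return beta
```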

4.
The Barrodale and Roberts algorithm for least absolute value (LAV) regression and the algorithm proposed by Bartels and Conn both have the advantage that they are often able to skip across points at which the conventional simplex-method algorithms for LAV regression would be required to carry out an (expensive) pivot operation.

We indicate here that this advantage holds in the Bartels-Conn approach for a wider class of problems: the minimization of piecewise linear functions. We show how LAV regression, restricted LAV regression, general linear programming and least maximum absolute value regression can all be easily expressed as piecewise linear minimization problems.
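As one concrete instance of this piecewise-linear viewpoint, here is a sketch of least-maximum-absolute-value (Chebyshev) regression written as a linear program; this is the generic formulation only, not the Bartels-Conn algorithm itself.

```python
import numpy as np
from scipy.optimize import linprog

def linf_fit(X, y):
    """Least-maximum-absolute-value (Chebyshev) regression as an LP:
       minimize t  subject to  -t <= y_i - x_i' beta <= t  for all i."""
    n, p = X.shape
    c = np.concatenate([np.zeros(p), [1.0]])                 # cost on the bound t only
    A_ub = np.vstack([np.hstack([-X, -np.ones((n, 1))]),     # y - X beta <= t
                      np.hstack([ X, -np.ones((n, 1))])])    # X beta - y <= t
    b_ub = np.concatenate([-y, y])
    bounds = [(None, None)] * p + [(0, None)]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
    return res.x[:p], res.x[-1]                               # coefficients, minimax residual
```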

5.
Selection of appropriate predictors for right-censored time-to-event data is a problem practitioners encounter very often. We consider ℓ1-penalized regression, or the “least absolute shrinkage and selection operator”, as a tool for predictor selection in association with the accelerated failure time model. The choice of the penalizing parameter λ is crucial to identify the correct set of covariates. In this paper, we propose an information theory-based method to choose λ under the log-normal distribution. Furthermore, an efficient algorithm is discussed in the same context. The performance of the proposed λ and the algorithm is illustrated through simulation studies and a real data analysis. The convergence of the algorithm is also discussed.
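A rough sketch of the idea under strong simplifying assumptions: run the lasso on the log event times and pick λ on a grid with a BIC-type score. It ignores censoring entirely, whereas the paper's information-theoretic criterion is built for right-censored data, so this only conveys the flavour of an information-based choice of λ; the grid and the degrees-of-freedom proxy are arbitrary choices.

```python
import numpy as np
from sklearn.linear_model import Lasso

def select_lambda_bic(X, log_t, lambdas):
    """Illustrative information-criterion choice of the lasso penalty for a
    log-normal AFT-style fit on uncensored log event times (censoring ignored)."""
    n = len(log_t)
    best = None
    for lam in lambdas:
        fit = Lasso(alpha=lam).fit(X, log_t)
        resid = log_t - fit.predict(X)
        sigma2 = np.mean(resid ** 2)
        df = np.count_nonzero(fit.coef_)              # active-set size as degrees of freedom
        bic = n * np.log(sigma2) + df * np.log(n)     # BIC-type score
        if best is None or bic < best[0]:
            best = (bic, lam, fit.coef_)
    return best[1], best[2]                           # chosen lambda and its coefficients
```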

6.
We present a concise summary of recent progress in developing algorithms for restricted least absolute value (LAV) estimation (i.e., ℓ1 approximation subject to linear constraints). The emphasis is on our own new algorithm, and we provide some numerical results obtained with it.

7.
By modifying the direct method for solving the overdetermined linear system, we are able to present an algorithm for L1 estimation which appears to be computationally superior to any other known algorithm for the simple linear regression problem.

8.
The nonlinear least squares algorithm of Gill and Murray (1978) is extended and modified to solve nonlinear Lp-norm estimation problems efficiently. The new algorithm uses a mixture of first-order derivative (Gauss–Newton) and second-order derivative (Newton) search directions. A new rule for selecting the “grade” r of the p-Jacobian matrix Jp was also incorporated. This brought about rapid convergence of the algorithm on previously reported test examples.
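For comparison, here is a sketch of a generic iteratively reweighted least squares (IRLS) approach to the same Lp-norm objective; it is not the Gauss–Newton/Newton scheme of the paper, just a simple baseline.

```python
import numpy as np

def lp_norm_irls(X, y, p=1.5, n_iter=50, eps=1e-8):
    """Iteratively reweighted least squares for Lp-norm regression:
    minimize sum |y - X beta|^p, with weights |r|^(p-2) guarded near zero."""
    beta = np.linalg.lstsq(X, y, rcond=None)[0]       # least-squares start
    for _ in range(n_iter):
        r = y - X @ beta
        w = np.maximum(np.abs(r), eps) ** (p - 2)     # IRLS weights
        Xw = w[:, None] * X
        beta = np.linalg.solve(X.T @ Xw, X.T @ (w * y))
    return beta
```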

9.
10.
Estimating multivariate location and scatter with both affine equivariance and positive breakdown has always been difficult. A well-known estimator which satisfies both properties is the Minimum Volume Ellipsoid Estimator (MVE). Computing the exact MVE is often not feasible, so one usually resorts to an approximate algorithm. In the regression setup, algorithms for positive-breakdown estimators like Least Median of Squares typically recompute the intercept at each step, to improve the result. This approach is called intercept adjustment. In this paper we show that a similar technique, called location adjustment, can be applied to the MVE. For this purpose we use the Minimum Volume Ball (MVB), in order to lower the MVE objective function. An exact algorithm for calculating the MVB is presented. As an alternative to MVB location adjustment we propose L1 location adjustment, which does not necessarily lower the MVE objective function but yields more efficient estimates for the location part. Simulations compare the two types of location adjustment. We also obtain the maxbias curves of L1 and the MVB in the multivariate setting, revealing the superiority of L1.
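A minimal sketch of the Weiszfeld iteration for the spatial (L1) median, one standard way to carry out an L1 location adjustment; the MVE/MVB computations themselves are not reproduced here.

```python
import numpy as np

def spatial_median(X, n_iter=100, tol=1e-8):
    """Weiszfeld iteration for the L1 (spatial) median: the point m minimizing
    sum_i ||x_i - m||."""
    m = X.mean(axis=0)
    for _ in range(n_iter):
        d = np.linalg.norm(X - m, axis=1)
        d = np.maximum(d, tol)                 # avoid division by zero at data points
        w = 1.0 / d
        m_new = (w[:, None] * X).sum(axis=0) / w.sum()
        if np.linalg.norm(m_new - m) < tol:
            break
        m = m_new
    return m
```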

11.
In this paper, we discuss a parsimonious approach to estimation of high-dimensional covariance matrices via the modified Cholesky decomposition with lasso. Two different methods are proposed. They are the equi-angular and equi-sparse methods. We use simulation to compare the performance of the proposed methods with others available in the literature, including the sample covariance matrix, the banding method, and the L1-penalized normal loglikelihood method. We then apply the proposed methods to a portfolio selection problem using 80 series of daily stock returns. To facilitate the use of lasso in high-dimensional time series analysis, we develop the dynamic weighted lasso (DWL) algorithm that extends the LARS-lasso algorithm. In particular, the proposed algorithm can efficiently update the lasso solution as new data become available. It can also add or remove explanatory variables. The entire solution path of the L1-penalized normal loglikelihood method is also constructed.
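A hedged sketch of the basic modified-Cholesky-with-lasso idea: regress each variable on its predecessors with an ℓ1 penalty to obtain a sparse unit lower-triangular factor T and diagonal D with Sigma^{-1} = T' D^{-1} T. The equi-angular, equi-sparse and DWL refinements of the paper are not reproduced, the penalty value is an arbitrary placeholder, and the result depends on the variable ordering.

```python
import numpy as np
from sklearn.linear_model import Lasso

def cholesky_lasso_precision(X, lam=0.1):
    """Sparse precision-matrix estimate via the modified Cholesky decomposition:
    lasso-regress each variable on its predecessors, giving Sigma^{-1} = T' D^{-1} T."""
    n, p = X.shape
    Xc = X - X.mean(axis=0)
    T = np.eye(p)
    d = np.empty(p)
    d[0] = Xc[:, 0].var()
    for j in range(1, p):
        fit = Lasso(alpha=lam).fit(Xc[:, :j], Xc[:, j])
        T[j, :j] = -fit.coef_                        # negatives of the regression coefficients
        resid = Xc[:, j] - fit.predict(Xc[:, :j])
        d[j] = resid.var()                           # prediction error variance
    return T.T @ np.diag(1.0 / d) @ T                # estimated precision matrix
```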

12.
The well-known Johnson system of distributions was developed by N. L. Johnson (1949). Slifker and Shapiro (1980) presented a criterion for choosing a member from the three distributional classes (SB, SL, and SU) in the Johnson system to fit a set of data. The criterion is based on the value of a quantile ratio which depends on a specified positive z value and the parameters of the distribution. In this paper, we present some properties of the quantile ratio for various distributions and for some selected z values. Some comments are made on using the criterion for selecting a Johnson distribution to fit empirical data.
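A sketch of the quantile ratio as I read Slifker and Shapiro (1980): take sample quantiles at the standard normal points ±z and ±3z, set m = x(3z) - x(z), n = x(-z) - x(-3z), p = x(z) - x(-z), and compare mn/p² with 1 (above 1 suggesting SU, below 1 suggesting SB, near 1 suggesting SL). The decision rule and the default z value below are my assumptions, not taken from this abstract.

```python
import numpy as np
from scipy.stats import norm

def johnson_quantile_ratio(data, z=0.524):
    """Quantile ratio m*n/p^2 built from sample quantiles at +/-z and +/-3z
    (Slifker-Shapiro style criterion for choosing a Johnson family)."""
    probs = norm.cdf([-3 * z, -z, z, 3 * z])
    x_m3, x_m1, x_p1, x_p3 = np.quantile(data, probs)
    m = x_p3 - x_p1
    n = x_m1 - x_m3
    p = x_p1 - x_m1
    return (m * n) / p ** 2
```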

13.
In this article, we propose a more general criterion, called the Sp-criterion, for subset selection in the multiple linear regression model. Many subset selection methods are based on the Least Squares (LS) estimator of β, but whenever the data contain an influential observation or the distribution of the error variable deviates from normality, the LS estimator performs ‘poorly’ and hence a method based on this estimator (for example, Mallows’ Cp-criterion) tends to select a ‘wrong’ subset. The proposed method overcomes this drawback and its main feature is that it can be used with any type of estimator (either the LS estimator or any robust estimator) of β without any need for modification of the proposed criterion. Moreover, this technique is operationally simple to implement as compared to other existing criteria. The method is illustrated with examples.

14.
Empirical likelihood (EL) is a nonparametric method based on observations. The EL method is formulated as a constrained optimization problem, whose solution is obtained using a duality approach. In this study, we propose an alternative algorithm to solve this constrained optimization problem. The new algorithm is based on a Newton-type algorithm for the Lagrange multipliers of the constrained optimization problem. We provide a simulation study and a real data example to compare the performance of the proposed algorithm with the classical algorithm. The simulation and real data results show that the performance of the proposed algorithm is comparable with that of the existing algorithm in terms of efficiencies and CPU times.
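For orientation, a sketch of the classical dual (Lagrange-multiplier) computation for the empirical likelihood of a scalar mean, solved by a plain Newton iteration; it omits the step-halving safeguards a production implementation would need and is not the algorithm proposed in the paper.

```python
import numpy as np

def el_lagrange_multiplier(x, mu, n_iter=50, tol=1e-10):
    """Newton iteration for the Lagrange multiplier in empirical likelihood for a
    scalar mean: solve sum((x - mu) / (1 + lam*(x - mu))) = 0; the EL weights are
    w_i = 1 / (n * (1 + lam*(x_i - mu))). No step-halving safeguard is included."""
    z = x - mu
    lam = 0.0
    for _ in range(n_iter):
        denom = 1.0 + lam * z
        g = np.sum(z / denom)                    # derivative of the dual objective
        h = -np.sum(z ** 2 / denom ** 2)         # its derivative in lam
        step = g / h
        lam -= step
        if abs(step) < tol:
            break
    w = 1.0 / (len(x) * (1.0 + lam * z))
    stat = -2.0 * np.sum(np.log(len(x) * w))     # -2 log empirical likelihood ratio
    return lam, w, stat
```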

15.
In this article, we obtain a Stein operator for the sum of n independent random variables (rvs), which is shown to be a perturbation of the negative binomial (NB) operator. Comparing this operator with the NB operator, we derive error bounds on the total variation distance by matching parameters. Also, a three-parameter approximation for such a sum is considered and is shown to improve the existing bounds in the literature. Finally, an application of our results to a function of waiting time for (k1, k2)-events is given.

16.
The Hastings–Metropolis algorithm is a general MCMC method for sampling from a density known up to a constant. Geometric convergence of this algorithm has been proved under conditions relative to the instrumental (or proposal) distribution. We present an inhomogeneous Hastings–Metropolis algorithm for which the proposal density approximates the target density as the number of iterations increases. The proposal density at the nth step is a non-parametric estimate of the density of the algorithm, and uses an increasing number of i.i.d. copies of the Markov chain. The resulting algorithm converges (in n) geometrically faster than a Hastings–Metropolis algorithm with any fixed proposal distribution. The case of a strictly positive density with compact support is presented first, then an extension to more general densities is given. We conclude by proposing a practical way of implementing the algorithm, and illustrate it over simulated examples.
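A simplified, univariate stand-in for the idea: an independence Metropolis–Hastings sampler whose proposal is periodically refreshed by a kernel density estimate of the chain generated so far. The refresh schedule, warm-up length and random-walk start are arbitrary choices of this sketch, not the paper's construction (which uses i.i.d. copies of the chain).

```python
import numpy as np
from scipy.stats import gaussian_kde

def adaptive_independence_mh(log_target, n_steps, x0=0.0, warmup=500, seed=0):
    """Independence MH whose proposal density is refreshed every 500 steps by a
    kernel density estimate of the chain so far; random-walk moves during warm-up."""
    rng = np.random.default_rng(seed)
    x = x0
    chain = [x]
    proposal = None
    for t in range(1, n_steps):
        if proposal is None or t < warmup:
            y = x + rng.normal(scale=1.0)                 # symmetric random-walk proposal
            log_q_ratio = 0.0
        else:
            y = proposal.resample(1)[0, 0]                # draw from the KDE proposal
            log_q_ratio = proposal.logpdf([x])[0] - proposal.logpdf([y])[0]
        log_alpha = log_target(y) - log_target(x) + log_q_ratio
        if np.log(rng.uniform()) < log_alpha:
            x = y
        chain.append(x)
        if t % 500 == 0:                                  # refresh the proposal density
            proposal = gaussian_kde(np.asarray(chain))
    return np.asarray(chain)
```

For instance, adaptive_independence_mh(lambda x: -0.5 * x**2, 5000) targets a standard normal density known only up to a constant.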

17.
In the optimal experimental design literature, G-optimality is defined as minimizing the maximum prediction variance over the entire experimental design space. Although G-optimality is a highly desirable property in many applications, few computer algorithms have been developed for constructing G-optimal designs. Some existing methods employ an exhaustive search over all candidate designs, which is time-consuming and inefficient. In this paper, a new algorithm for constructing G-optimal experimental designs is developed for both linear and generalized linear models. The new algorithm is based on clustering candidate or evaluation points over the design space and combines the point exchange algorithm with the coordinate exchange algorithm. In addition, a robust design algorithm is proposed for generalized linear models by modifying an existing method. The proposed algorithms are compared with the methods proposed by Rodriguez et al. [Generating and assessing exact G-optimal designs. J. Qual. Technol. 2010;42(1):3–20] and Borkowski [Using a genetic algorithm to generate small exact response surface designs. J. Prob. Stat. Sci. 2003;1(1):65–88] for linear models, and with the simulated annealing method and the genetic algorithm for generalized linear models, through several examples in terms of G-efficiency and computation time. The results show that the proposed algorithm can obtain a design with higher G-efficiency in a much shorter time. Moreover, the computation time of the proposed algorithm increases only polynomially as the size of the model increases.
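As a small helper in the same spirit, here is a sketch of how G-efficiency is typically evaluated for a linear model: the maximum scaled prediction variance f(x)' M^{-1} f(x) over a grid of candidate points, with G-efficiency taken as p divided by that maximum. This is the evaluation step only, not the proposed exchange/clustering algorithm.

```python
import numpy as np

def g_efficiency(F_design, F_candidates):
    """G-efficiency of a linear-model design. F_design and F_candidates hold the
    model-matrix rows f(x)' at the N design points and at the candidate points."""
    N, p = F_design.shape
    M = F_design.T @ F_design / N                                  # per-run information matrix
    M_inv = np.linalg.inv(M)
    spv = np.einsum("ij,jk,ik->i", F_candidates, M_inv, F_candidates)  # f' M^{-1} f
    return p / spv.max(), spv.max()                                # G-efficiency, max SPV
```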

18.
We propose a two-stage algorithm for computing maximum likelihood estimates for a class of spatial models. The algorithm combines Markov chain Monte Carlo methods, such as the Metropolis–Hastings–Green algorithm and the Gibbs sampler, with stochastic approximation methods, such as the off-line average and adaptive search direction. A new criterion is built into the algorithm so that stopping is automatic once the desired precision has been set. Simulation studies and applications to some real data sets have been conducted with three spatial models. We compared the proposed algorithm with a direct application of the classical Robbins–Monro algorithm using Wiebe's wheat data and found that our procedure is at least 15 times faster.

19.
The classical Shewhart c-chart and p-chart, which are constructed based on the Poisson and binomial distributions, are inappropriate for monitoring zero-inflated counts. They tend to underestimate the dispersion of zero-inflated counts and consequently lead to a higher false alarm rate in detecting out-of-control signals. Another drawback of these charts is that their 3-sigma control limits, evaluated based on the asymptotic normality assumption of the attribute counts, have a systematic negative bias in their coverage probability. We recommend that zero-inflated models, which account for the excess number of zeros, should first be fitted to the zero-inflated Poisson and binomial counts. The Poisson parameter λ estimated from a zero-inflated Poisson model is then used to construct a one-sided c-chart with its upper control limit based on the Jeffreys prior interval, which provides good coverage probability for λ. Similarly, the binomial parameter p estimated from a zero-inflated binomial model is used to construct a one-sided np-chart with its upper control limit based on the Jeffreys prior interval or the Blyth–Still interval of the binomial proportion p. A simple two-of-two control rule is also recommended to further improve the performance of these two proposed charts.
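A minimal sketch of the first step the abstract recommends, fitting a zero-inflated Poisson model by EM; the estimated λ would then feed the one-sided c-chart limit (the Jeffreys-interval construction itself is not reproduced here).

```python
import numpy as np

def fit_zip_em(counts, n_iter=200, tol=1e-10):
    """EM fit of a zero-inflated Poisson model: with probability pi a count is a
    structural zero, otherwise it is Poisson(lam)."""
    y = np.asarray(counts, dtype=float)
    pi, lam = 0.5, max(y.mean(), 1e-6)
    for _ in range(n_iter):
        # E-step: posterior probability that an observed zero is a structural zero
        tau = np.where(y == 0, pi / (pi + (1 - pi) * np.exp(-lam)), 0.0)
        # M-step
        pi_new = tau.mean()
        lam_new = y.sum() / (1 - tau).sum()
        if abs(pi_new - pi) + abs(lam_new - lam) < tol:
            pi, lam = pi_new, lam_new
            break
        pi, lam = pi_new, lam_new
    return pi, lam
```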

20.
Simple iterative and exact solutions, based on symmetric percentage points, are described for the Johnson translation system of distributions. A condition is given for a zero skewness parameter.
