Similar articles
20 similar articles found (search time: 31 ms)
1.
The fused lasso penalizes a loss function by the L1 norm of both the regression coefficients and their successive differences, to encourage sparsity in both. In this paper, we propose Bayesian generalized fused lasso modeling based on a normal-exponential-gamma (NEG) prior distribution. The NEG prior is placed on the differences of successive regression coefficients. The proposed method enables us to construct a more versatile sparse model than the ordinary fused lasso through a flexible regularization term. Simulation studies and real data analyses show that the proposed method outperforms the ordinary fused lasso.

2.
Summary. The lasso penalizes a least squares regression by the sum of the absolute values (L1-norm) of the coefficients. The form of this penalty encourages sparse solutions (with many coefficients equal to 0). We propose the 'fused lasso', a generalization designed for problems with features that can be ordered in some meaningful way. The fused lasso penalizes the L1-norm of both the coefficients and their successive differences. Thus it encourages sparsity of the coefficients and also sparsity of their differences, i.e. local constancy of the coefficient profile. The fused lasso is especially useful when the number of features p is much greater than the sample size N. The technique is also extended to the 'hinge' loss function that underlies the support vector classifier. We illustrate the methods on examples from protein mass spectroscopy and gene expression data.
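A minimal sketch of the penalty just described (numpy assumed; function and variable names are illustrative, not from the paper):

```python
import numpy as np

def fused_lasso_penalty(beta, lam1, lam2):
    """Fused lasso penalty: lam1 * sum(|beta_j|) + lam2 * sum(|beta_j - beta_{j-1}|)."""
    beta = np.asarray(beta, dtype=float)
    return lam1 * np.sum(np.abs(beta)) + lam2 * np.sum(np.abs(np.diff(beta)))

# A piecewise-constant profile pays the difference penalty only at its jumps.
beta = [0.0, 0.0, 2.0, 2.0, 2.0, 0.0]
print(fused_lasso_penalty(beta, 1.0, 1.0))  # 6.0 (L1 part) + 4.0 (two jumps of 2) = 10.0
```

The first term drives individual coefficients to zero; the second drives neighboring coefficients to be equal, which is what produces locally constant profiles.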

3.
The lasso has become popular for variable selection in recent years. In this paper, lasso-type penalty functions, including the lasso and adaptive lasso, are employed for simultaneous variable selection and parameter estimation in the covariate-adjusted linear model, where the predictors and response cannot be observed directly but are distorted by an observable covariate through unknown multiplicative smooth functions. Estimation procedures are proposed and some asymptotic properties are obtained under mild conditions. It is worth noting that, under appropriate conditions, the adaptive lasso estimator correctly selects the covariates with nonzero coefficients with probability converging to one, and that the estimators of the nonzero coefficients have the same asymptotic distribution they would have if the zero coefficients were known in advance; i.e. the adaptive lasso estimator has the oracle property in the sense of Fan and Li [6]. Simulation studies are carried out to examine its finite-sample performance, and the Boston Housing data are analyzed for illustration.
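The adaptive lasso idea can be seen most cleanly in the orthonormal-design case, where the lasso reduces to soft-thresholding the OLS estimate and the adaptive version simply uses coefficient-specific thresholds. A sketch under that simplifying assumption (not the covariate-adjusted model of the paper):

```python
import numpy as np

def soft_threshold(z, t):
    """Soft-thresholding operator: shrink toward zero, set to zero below t."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

# Orthonormal design: adaptive lasso thresholds are lam / |beta_ols|^gamma,
# so large pilot estimates are barely shrunk and small ones are zeroed out.
beta_ols = np.array([3.0, 0.1, -2.0, 0.05])
lam, gamma = 0.5, 1.0
beta_alasso = soft_threshold(beta_ols, lam / np.abs(beta_ols) ** gamma)
print(beta_alasso)  # small coefficients set exactly to 0, large ones lightly shrunk
```

This coefficient-specific weighting is what yields the oracle property: truly zero coefficients receive (asymptotically) huge thresholds, while nonzero ones receive vanishing ones.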

4.
The lasso has proved to be an extremely successful technique for simultaneous estimation and variable selection. However, the lasso has two major drawbacks: first, it does not enforce any grouping effect, and second, in some situations lasso solutions are inconsistent for variable selection. To overcome this inconsistency, the adaptive lasso was proposed, in which adaptive weights are used to penalize different coefficients. More recently, a doubly regularized technique, the elastic net, was proposed, which encourages a grouping effect, i.e. the joint selection or omission of correlated variables. However, the elastic net is also inconsistent. In this paper we study the adaptive elastic net, which does not have this drawback. We focus in particular on the grouped selection property of the adaptive elastic net, along with its model selection complexity. We also shed some light on the bias-variance tradeoff of different regularization methods, including the adaptive elastic net. An efficient algorithm along the lines of LARS-EN is proposed and illustrated with simulated as well as real-life data examples.
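The double regularization can be illustrated by the closed-form coordinate update of the (naive) elastic net for a standardized feature: soft-threshold by the L1 weight, then shrink by the L2 weight. A sketch with illustrative names:

```python
import numpy as np

def enet_coordinate_update(rho, lam1, lam2):
    """One coordinate-descent update for the naive elastic net with a
    standardized feature; rho is the feature's partial residual correlation.
    Soft-threshold by the L1 weight lam1, then shrink by the L2 weight lam2."""
    return np.sign(rho) * max(abs(rho) - lam1, 0.0) / (1.0 + lam2)

print(enet_coordinate_update(2.0, 0.5, 1.0))  # (2.0 - 0.5) / (1 + 1) = 0.75
```

The extra 1/(1 + lam2) shrinkage is what stabilizes the updates of strongly correlated features and produces the grouping effect; an adaptive variant would replace lam1 with coefficient-specific weights.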

5.
6.
The lasso procedure is an estimator-shrinkage and variable selection method. This paper shows that there always exists an interval of tuning parameter values for which the mean squared prediction error of the lasso estimator is smaller than that of the ordinary least squares estimator. For an estimator satisfying some condition, such as unbiasedness, the paper defines a corresponding generalized lasso estimator, whose mean squared prediction error is shown to be smaller than that of the original estimator for tuning parameter values in some interval. This implies that no unbiased estimator is admissible. Simulation results for five models support the theoretical results.

7.
Abstract. In a range of imaging problems, particularly those where the images are of man-made objects, edges join at points comprising three or more distinct boundaries between textures. In such cases the set of edges in the plane forms what a mathematician would call a planar graph. Smooth edges in the graph meet one another at junctions, called 'vertices', whose 'degrees' denote the respective numbers of edges that join there. Conventional image reconstruction methods do not always draw clear distinctions among different degrees of junction, however. In such cases the algorithm is, in a sense, too locally adaptive; it inserts junctions without checking more globally whether another configuration might be more suitable. In this paper we suggest an alternative approach to edge reconstruction, which combines a junction classification step with an edge-tracking routine. The algorithm still makes its decisions locally, so the method retains an adaptive character. However, because it focuses specifically on estimating the degree of a junction, it is relatively unlikely to insert multiple low-degree junctions when the evidence in the data supports a single high-degree junction. Numerical and theoretical properties of the method are explored, and theoretical optimality is discussed. The technique is based on local least squares, or local likelihood in the case of Gaussian data. This feature, and the fact that the algorithm takes a tracking approach that does not require analysis of the full spatial data set, make it relatively simple to implement.

8.
Motivated by an entropy inequality, we propose for the first time a penalized profile likelihood method for simultaneously selecting significant variables and estimating the unknown coefficients in multiple linear regression models. The new method is robust to outliers and heavy-tailed errors, and works well even for errors with infinite variance. Our proposed approach outperforms the adaptive lasso in both theory and practice. The simulation studies show that (i) the new approach has a higher probability of selecting the exact model than the least absolute deviation lasso and the adaptively penalized composite quantile regression approach, and (ii) exact model selection via our approach is robust to the error distribution. An application to a real dataset is also provided.
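The entropy-based method itself is not spelled out in the abstract; as a loose illustration of the robustness idea, the least-absolute-deviation lasso mentioned as a comparator replaces the squared loss with an absolute loss, which stays finite-influence under heavy-tailed errors. A sketch with illustrative names:

```python
import numpy as np

def lad_lasso_objective(beta, X, y, lam):
    """Least-absolute-deviation loss plus an L1 penalty: the absolute loss keeps
    the fit robust to outliers and heavy-tailed (even infinite-variance) errors,
    while the penalty performs variable selection."""
    return np.sum(np.abs(y - X @ beta)) + lam * np.sum(np.abs(beta))

X = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
y = np.array([1.0, 2.0, 3.0])
print(lad_lasso_objective(np.array([1.0, 2.0]), X, y, 0.1))  # zero loss + 0.1 * 3 penalty
```

Because each residual enters linearly rather than quadratically, a single gross outlier changes the objective by a bounded amount per unit of contamination.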

9.
This article proposes a variable selection approach for zero-inflated count data analysis based on the adaptive lasso technique. Two models, the zero-inflated Poisson and the zero-inflated negative binomial, are investigated. An efficient algorithm is used to minimize the penalized log-likelihood function in an approximate manner. Both generalized cross-validation and Bayesian information criterion procedures are employed to determine the optimal tuning parameter, and a consistent sandwich formula for the standard errors of nonzero estimates is given based on local quadratic approximation. We evaluate the performance of the proposed adaptive lasso approach through extensive simulation studies and apply it to real-life data on doctor visits.
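For readers unfamiliar with the likelihood being penalized, the zero-inflated Poisson mixes a point mass at zero with a Poisson component; a stdlib-only sketch of its probability mass function (names are illustrative):

```python
import math

def zip_pmf(y, lam, pi):
    """Zero-inflated Poisson: an extra point mass pi at zero on top of a
    Poisson(lam) component with weight 1 - pi."""
    pois = math.exp(-lam) * lam ** y / math.factorial(y)
    return pi + (1.0 - pi) * pois if y == 0 else (1.0 - pi) * pois

# Zero inflation raises P(Y = 0) above the plain Poisson value exp(-2).
print(zip_pmf(0, 2.0, 0.3))  # pi + (1 - pi) * exp(-2)
```

The penalized log-likelihood in the paper sums the log of this pmf (or its negative binomial analogue) over observations and adds adaptive-lasso weights on the regression coefficients of both model parts.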

10.
Abstract. For the problem of estimating a sparse sequence of coefficients in a parametric or non-parametric generalized linear model, posterior mode estimation with a Subbotin(λ, ν) prior achieves thresholding, and therefore model selection, when ν ∈ [0,1], for a class of likelihood functions. The proposed estimator also offers a continuum between the (forward/backward) best subset estimator (ν = 0), its approximate convexification the lasso (ν = 1), and ridge regression (ν = 2). Rather than fixing ν, selecting the two hyperparameters λ and ν adds flexibility for a better fit, provided both are well selected from the data. Considering first the canonical Gaussian model, we generalize the Stein unbiased risk estimate, SURE(λ, ν), to the situation where the thresholding function is not almost differentiable (i.e. ν < 1). We then propose a more general selection of λ and ν by deriving an information criterion that can be employed, for instance, for the lasso or wavelet smoothing. We investigate some asymptotic properties in parametric and non-parametric settings. Simulations and applications to real data show excellent performance.
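Up to constants, the posterior-mode view of the Subbotin prior corresponds to the bridge-type penalty λ Σ|βⱼ|^ν, which makes the stated continuum concrete; a sketch (function names are illustrative):

```python
import numpy as np

def subbotin_penalty(beta, lam, nu):
    """lam * sum(|beta_j|^nu): best-subset count at nu = 0, lasso at nu = 1,
    ridge at nu = 2 (posterior-mode view of the Subbotin prior, constants aside)."""
    b = np.abs(np.asarray(beta, dtype=float))
    if nu == 0:
        return lam * np.count_nonzero(b)  # limiting l0 case
    return lam * np.sum(b ** nu)

beta = [0.0, 0.5, -2.0]
print(subbotin_penalty(beta, 1.0, 0))  # counts the 2 nonzero coefficients
print(subbotin_penalty(beta, 1.0, 1))  # |0.5| + |-2.0| = 2.5
print(subbotin_penalty(beta, 1.0, 2))  # 0.25 + 4.0 = 4.25
```

Sweeping ν from 0 through 1 to 2 moves smoothly from subset selection through the lasso to ridge, which is why jointly tuning (λ, ν) can outperform any fixed member of the family.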

11.
We propose the marginalized lasso, a new nonconvex penalty for variable selection in regression. The marginalized lasso penalty is motivated by integrating out the penalty parameter in the original lasso penalty with respect to a gamma prior distribution. This study provides a thresholding rule and a lasso-based iterative algorithm for parameter estimation under the marginalized lasso. We also provide a coordinate descent algorithm to efficiently optimize marginalized lasso penalized regression. Numerical comparison studies demonstrate its competitiveness with existing sparsity-inducing penalties and suggest guidelines for tuning parameter selection.
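As a sketch of the construction (the paper's exact parameterization may differ): marginalizing the Laplace form exp(-λ|β|) over λ ~ Gamma(a, b) yields, up to additive constants, a log-type penalty (a + 1) log(b + |β|), whose derivative decays with |β|, the diminishing-shrinkage behavior typical of nonconvex penalties. Here a and b are illustrative hyperparameters:

```python
import numpy as np

# Sketch: integrating lambda out of the Laplace kernel exp(-lambda * |beta|)
# against a Gamma(a, b) prior gives a log-type marginal penalty (constants aside).
def marginalized_penalty(beta, a, b):
    return (a + 1.0) * np.log(b + np.abs(beta))

def marginalized_penalty_deriv(beta, a, b):
    return (a + 1.0) / (b + np.abs(beta))  # shrinkage fades for large |beta|

a, b = 2.0, 1.0
print(marginalized_penalty_deriv(0.1, a, b) > marginalized_penalty_deriv(5.0, a, b))  # True
```

Unlike the lasso's constant shrinkage λ, large coefficients here are penalized at a vanishing marginal rate, which reduces estimation bias for strong signals.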

12.
The lasso is a popular technique for simultaneous estimation and variable selection in many research areas. When the regression coefficients have independent Laplace priors, their marginal posterior mode is equivalent to the estimate given by the non-Bayesian lasso. Because of the flexibility of its statistical inferences, the Bayesian approach has attracted a growing body of research in recent years. Current approaches either perform a fully Bayesian analysis using a Markov chain Monte Carlo (MCMC) algorithm or use Monte Carlo expectation maximization (MCEM) methods with an MCMC algorithm in each E-step. However, MCMC-based Bayesian methods carry a heavy computational burden and converge slowly. Tan et al. [An efficient MCEM algorithm for fitting generalized linear mixed models for correlated binary data. J Stat Comput Simul. 2007;77:929–943] proposed a non-iterative sampling approach, the inverse Bayes formula (IBF) sampler, for computing posteriors of a hierarchical model within the MCEM structure. Motivated by their paper, we develop an IBF sampler within the MCEM structure to obtain the marginal posterior mode of the regression coefficients for the Bayesian lasso, adjusting the weights of importance sampling when the full conditional distribution is not explicit. Simulation experiments show that the computational time of our expectation-maximization-based method is much reduced, and that our method behaves comparably with other Bayesian lasso methods in both prediction accuracy and variable selection accuracy, and even better when the sample size is relatively large.
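The Laplace-prior/lasso equivalence stated above can be checked numerically in one dimension: with a Gaussian likelihood, the negative log posterior is 0.5(y − β)² + λ|β|, whose minimizer is the soft-threshold solution. A sketch (grid search just for verification):

```python
import numpy as np

# One-dimensional check: posterior mode under a Laplace prior equals the
# lasso (soft-threshold) solution sign(y) * max(|y| - lam, 0).
y, lam = 2.0, 0.5
grid = np.linspace(-5.0, 5.0, 200001)          # fine grid; step 5e-5
neg_log_post = 0.5 * (y - grid) ** 2 + lam * np.abs(grid)
mode = grid[np.argmin(neg_log_post)]
print(mode)  # ≈ 1.5, the soft-threshold of 2.0 at 0.5
```

The sampling machinery in the paper exists precisely because, with correlated features and hierarchical hyperpriors, this mode no longer has such a closed form.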

13.
14.
This study considers the binary classification of functional data collected in the form of curves. In particular, we assume a situation in which the curves are highly mixed over the entire domain, so that global discriminant analysis based on the entire domain is not effective. This study proposes an interval-based classification method for functional data: informative intervals are selected and used to separate the curves into two classes. The proposed method, called functional logistic regression with the fused lasso penalty, combines functional logistic regression as a classifier with the fused lasso for selecting discriminant segments. The method automatically selects the most informative segments of the functional data by employing the fused lasso penalty and simultaneously classifies the data based on the selected segments using functional logistic regression. The effectiveness of the proposed method is demonstrated with simulated and real data examples.

15.
Linear structural equation models, which relate random variables via linear interdependencies and Gaussian noise, are a popular tool for modelling multivariate joint distributions. The models correspond to mixed graphs that include both directed and bidirected edges, representing the linear relationships and the correlations between noise terms, respectively. A question of interest for these models is parameter identifiability: whether it is possible to recover edge coefficients from the joint covariance matrix of the random variables. For the problem of determining generic parameter identifiability, we present an algorithm building upon the half-trek criterion. Underlying our new algorithm is the idea that ancestral subsets of vertices in the graph can be used to extend the applicability of a decomposition technique.

16.
An important feature of the accelerated hazards (AH) model is that it can capture a gradual treatment effect. Because of the complexity of its estimation, little discussion has been devoted to variable selection for the AH model. A Bayesian non-parametric prior, the transformed Bernstein polynomial prior, is employed for simultaneous robust estimation and variable selection in sparse AH models. We first introduce a naive lasso-type accelerated hazards model and then, to reduce estimation bias and improve variable selection accuracy, consider an adaptive lasso AH model as a direct extension of the naive lasso-type model. Our simulation studies show that the adaptive lasso AH model performs better than the lasso-type model in terms of variable selection and prediction accuracy. We also illustrate the performance of the proposed methods via a brain tumour study.

17.
We study a Bayesian analysis of the proportional hazards model with time-varying coefficients. We consider two priors for the time-varying coefficients, one based on B-spline basis functions and the other based on Gamma processes, and we use a beta process prior for the baseline hazard functions. We show that the two priors provide optimal posterior convergence rates (up to a logarithmic term) and that the Bayes factor is consistent for testing the proportional hazards assumption when the two priors are used for the alternative hypothesis. In addition, adaptive priors are considered for theoretical investigation, in which the smoothness of the true function is assumed to be unknown and prior distributions are assigned based on B-splines.

18.
Summary. We propose covariance-regularized regression, a family of methods for prediction in high dimensional settings that uses a shrunken estimate of the inverse covariance matrix of the features to achieve superior prediction. An estimate of the inverse covariance matrix is obtained by maximizing the log-likelihood of the data, under a multivariate normal model, subject to a penalty; it is then used to estimate coefficients for the regression of the response onto the features. We show that ridge regression, the lasso and the elastic net are special cases of covariance-regularized regression, and we demonstrate that certain previously unexplored forms of covariance-regularized regression can outperform existing methods in a range of situations. The covariance-regularized regression framework is extended to generalized linear models and linear discriminant analysis, and is used to analyse gene expression data sets with multiple class and survival outcomes.
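The ridge special case can be verified directly: shrinking the sample covariance toward the identity before inverting it in the normal equations reproduces the ridge estimator. A numpy sketch with synthetic data (names are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 200, 3
X = rng.standard_normal((n, p))
y = X @ np.array([1.0, 0.0, -2.0]) + 0.1 * rng.standard_normal(n)

# Ridge as covariance-regularized regression: shrink the sample covariance S
# toward the identity, then plug its inverse into the normal equations.
lam = 1.0
S = X.T @ X / n
beta_cov = np.linalg.solve(S + lam * np.eye(p), X.T @ y / n)
beta_ridge = np.linalg.solve(X.T @ X + n * lam * np.eye(p), X.T @ y)
print(np.allclose(beta_cov, beta_ridge))  # True: the two estimators coincide
```

The lasso and elastic net cases in the paper arise from other penalties on the inverse covariance estimate rather than the simple identity shrinkage used here.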

19.
We consider a continuous-time model for the evolution of social networks. A social network is here conceived as a (di-)graph on a set of vertices representing actors, and the changes of interest are the creation and disappearance over time of edges (arcs) in the graph. Hence we model a collection of random edge indicators that are not, in general, independent. We explicitly model the interdependencies between edge indicators that arise from interaction between social entities. A Markov chain is defined in terms of an embedded chain with holding times and transition probabilities. Data are observed at fixed points in time, and hence we are not able to observe the embedded chain directly. Introducing a prior distribution for the parameters, we may implement an MCMC algorithm for exploring the posterior distribution of the parameters by simulating the evolution of the embedded process between observations.
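The embedded-chain-with-holding-times construction can be sketched with a Gillespie-style simulation. The constant, independent toggle rates below are an illustrative simplification; the paper's model makes these rates depend on the current network:

```python
import random

random.seed(1)
# Sketch: each ordered pair of actors gains an absent edge at rate lam_on and
# loses a present edge at rate lam_off (rates are illustrative assumptions).
n, lam_on, lam_off, horizon = 4, 0.3, 0.5, 10.0
pairs = [(i, j) for i in range(n) for j in range(n) if i != j]
edges, t = set(), 0.0
while True:
    rates = [lam_off if e in edges else lam_on for e in pairs]
    total = sum(rates)
    t += random.expovariate(total)      # holding time of the embedded chain
    if t > horizon:
        break
    r, acc = random.random() * total, 0.0
    for e, rate in zip(pairs, rates):   # next transition, chosen proportional to its rate
        acc += rate
        if r < acc:
            edges.symmetric_difference_update({e})  # toggle the edge
            break
print(len(edges))
```

The inferential problem in the paper is the reverse of this simulation: given snapshots of `edges` at a few time points, reconstruct plausible trajectories of the embedded chain between them.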

20.
The problem of modeling the relationship between a set of covariates and a multivariate response with correlated components arises in many areas of research, such as genetics, psychometrics, and signal processing. In the linear regression framework, such a task can be addressed using a number of existing methods. In the high-dimensional sparse setting, most of these methods rely on penalization to efficiently estimate the regression matrix; examples include the lasso, the group lasso, the adaptive group lasso, and the simultaneous variable selection (SVS) method. Crucially, a suitably chosen penalty also allows for efficient exploitation of the correlation structure within the multivariate response. In this paper we introduce a novel variant of the latter method, called the adaptive SVS, which is closely linked with the adaptive group lasso. Via a simulation study we investigate its performance in the high-dimensional sparse regression setting, provide a comparison with a number of other popular methods under different scenarios, and show that the adaptive SVS is a powerful tool for efficient signal recovery in this setting. The methods are applied to genetic data.
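The group-lasso-type penalties mentioned above all rest on blockwise shrinkage: a whole group of coefficients (e.g. one covariate's row across the response components) is kept or zeroed together. A sketch of the underlying group soft-thresholding operator (names are illustrative; the adaptive variants use group-specific thresholds):

```python
import numpy as np

def group_soft_threshold(b, t):
    """Blockwise shrinkage underlying group-lasso-type penalties: zero out the
    whole coefficient group if its Euclidean norm is below t, else shrink it
    toward zero while preserving its direction."""
    norm = np.linalg.norm(b)
    if norm <= t:
        return np.zeros_like(b)
    return (1.0 - t / norm) * b

print(group_soft_threshold(np.array([3.0, 4.0]), 2.5))  # norm 5 halved: [1.5, 2.0]
```

Selecting or omitting groups as units is what lets these methods exploit the correlation structure of the multivariate response rather than treating each response column separately.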


Copyright © 北京勤云科技发展有限公司 (Beijing Qinyun Technology Development Co., Ltd.)  京ICP备09084417号