Sparsity and smoothness via the fused lasso
Authors:Robert Tibshirani  Michael Saunders  Saharon Rosset  Ji Zhu  Keith Knight
Institution:Stanford University, USA; IBM T. J. Watson Research Center, Yorktown Heights, USA; University of Michigan, Ann Arbor, USA; University of Toronto, Canada
Abstract: The lasso penalizes a least squares regression by the sum of the absolute values (the L1-norm) of the coefficients. This penalty encourages sparse solutions (many coefficients exactly 0). We propose the 'fused lasso', a generalization designed for problems whose features can be ordered in some meaningful way. The fused lasso penalizes the L1-norm of both the coefficients and their successive differences. It therefore encourages sparsity of the coefficients and also sparsity of their differences, i.e. local constancy of the coefficient profile. The fused lasso is especially useful when the number of features p is much greater than the sample size N. The technique is also extended to the 'hinge' loss function that underlies the support vector classifier. We illustrate the methods on examples from protein mass spectroscopy and gene expression data.
Keywords: Fused lasso; Gene expression; Lasso; Least squares regression; Protein mass spectroscopy; Sparse solutions; Support vector classifier
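For concreteness, the objective described in the abstract can be sketched in a few lines of NumPy. This is an illustrative sketch, not the authors' code: the function name and the penalty weights `lam1` and `lam2` (the two tuning parameters weighting the coefficient and difference penalties) are assumed names.

```python
import numpy as np

def fused_lasso_objective(y, X, beta, lam1, lam2):
    """Least-squares loss plus the fused lasso penalty:
        0.5 * ||y - X @ beta||^2
          + lam1 * sum_j |beta_j|                 (sparsity of coefficients)
          + lam2 * sum_j |beta_j - beta_{j-1}|    (sparsity of successive differences)
    """
    loss = 0.5 * np.sum((y - X @ beta) ** 2)          # squared-error fit term
    sparsity = lam1 * np.sum(np.abs(beta))            # lasso (L1) penalty
    smoothness = lam2 * np.sum(np.abs(np.diff(beta))) # fusion penalty on adjacent differences
    return loss + sparsity + smoothness
```

A piecewise-constant coefficient vector such as `[0, 0, 2, 2, 2, 0]` incurs only a few nonzero difference terms, which is why minimizing this objective favours locally constant coefficient profiles over ordered features.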