Learning rates of regularized regression for exponentially strongly mixing sequence
Authors: Yong-Li Xu, Di-Rong Chen
Affiliation: Department of Mathematics, and LMIB, Beijing University of Aeronautics and Astronautics, Beijing 100083, PR China
Abstract:
The study of regularized learning algorithms associated with the least squares loss is an important issue. Wu et al. [2006. Learning rates of least-square regularized regression. Found. Comput. Math. 6, 171–192] established fast learning rates m^{-θ} for least squares regularized regression in reproducing kernel Hilbert spaces, under some assumptions on the Mercer kernels and on the regression functions, where m denotes the number of samples and θ may be arbitrarily close to 1. They assumed, as do most existing works, that the samples were drawn independently from the underlying probability distribution. However, independence is a very restrictive assumption. Without independence of the samples, the study of learning algorithms is more involved, and little progress has been made. The aim of this paper is to establish the above results of Wu et al. for dependent samples, where the dependence is expressed in terms of an exponentially strongly mixing sequence.
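The regularization scheme the abstract refers to can be sketched concretely. Below is a minimal, illustrative implementation of least squares regularized regression in an RKHS (kernel ridge regression): by the representer theorem the minimizer of (1/m)∑(f(x_i)−y_i)² + λ‖f‖_K² has the form f(x) = ∑α_i K(x_i, x), with α solving (K + mλI)α = y. The Gaussian kernel, the regularization parameter λ, and the AR(1) data-generating process (a standard example of an exponentially strongly mixing sequence) are illustrative assumptions, not details from the paper.

```python
import numpy as np

def gaussian_kernel(X1, X2, sigma=1.0):
    """Mercer (Gaussian) kernel matrix between two sample sets."""
    d2 = np.sum(X1**2, 1)[:, None] + np.sum(X2**2, 1)[None, :] - 2 * X1 @ X2.T
    return np.exp(-d2 / (2 * sigma**2))

def fit_regularized(X, y, lam=1e-2, sigma=1.0):
    """Solve (K + m*lam*I) alpha = y for the regularized estimator f_z."""
    m = len(X)
    K = gaussian_kernel(X, X, sigma)
    return np.linalg.solve(K + m * lam * np.eye(m), y)

def predict(X_train, alpha, X_new, sigma=1.0):
    """Evaluate f_z(x) = sum_i alpha_i K(x_i, x) at new points."""
    return gaussian_kernel(X_new, X_train, sigma) @ alpha

# Dependent samples: a Gaussian AR(1) chain with |rho| < 1 is
# exponentially strongly (alpha-)mixing, unlike an i.i.d. draw.
rng = np.random.default_rng(0)
m = 200
x = np.zeros(m)
for t in range(1, m):
    x[t] = 0.5 * x[t - 1] + rng.normal(scale=0.5)
X = x[:, None]
y = np.sin(x) + rng.normal(scale=0.1, size=m)  # regression target + noise

alpha = fit_regularized(X, y)
y_hat = predict(X, alpha, X)
```

The estimator itself is the same as in the i.i.d. setting; what the paper changes is the analysis of its learning rate when the samples are drawn from a mixing process rather than independently.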
Keywords: Learning theory; Regularized learning algorithm; Exponentially strongly mixing sequence; Reproducing kernel Hilbert space
This article is indexed by ScienceDirect and other databases.