Learning rates of regularized regression for exponentially strongly mixing sequence

Authors: Yong-Li Xu, Di-Rong Chen

Affiliation: Department of Mathematics and LMIB, Beijing University of Aeronautics and Astronautics, Beijing 100083, PR China

Abstract: The study of regularized learning algorithms associated with the least square loss is an important issue in learning theory. Wu et al. [2006. Learning rates of least-square regularized regression. Found. Comput. Math. 6, 171–192] established fast learning rates of order m^{-θ} for least square regularized regression in reproducing kernel Hilbert spaces, under certain assumptions on the Mercer kernel and on the regression function, where m denotes the number of samples and θ can be arbitrarily close to 1. As in most existing work, they assumed that the samples are drawn independently from the underlying probability distribution. Independence, however, is a very restrictive assumption: without it, the analysis of learning algorithms is considerably more involved, and little progress has been made. The aim of this paper is to establish the above results of Wu et al. for dependent samples, where the dependence is expressed in terms of an exponentially strongly mixing sequence.
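To make the algorithm in question concrete: least square regularized regression selects f_z = argmin_{f ∈ H_K} (1/m) Σ_{i=1}^m (f(x_i) − y_i)^2 + λ‖f‖_K^2 over a reproducing kernel Hilbert space H_K. The sketch below illustrates this estimator under choices not fixed by the abstract (a Gaussian Mercer kernel, NumPy, and an AR(1) sample path standing in for an exponentially strongly mixing sequence); all function names are illustrative, not taken from the paper.

```python
import numpy as np

def gaussian_kernel(X, Z, sigma=0.5):
    """Mercer kernel K(x, z) = exp(-|x - z|^2 / (2 sigma^2)) on 1-D inputs."""
    return np.exp(-(X[:, None] - Z[None, :]) ** 2 / (2 * sigma ** 2))

def regularized_regression(x, y, lam, sigma=0.5):
    """Least square regularized regression in the RKHS H_K.

    By the representer theorem, the minimizer of
        (1/m) * sum_i (f(x_i) - y_i)^2 + lam * ||f||_K^2
    has the form f_z = sum_i alpha_i K(x_i, .), where alpha solves
        (K + lam * m * I) alpha = y.
    """
    m = len(x)
    K = gaussian_kernel(x, x, sigma)
    alpha = np.linalg.solve(K + lam * m * np.eye(m), y)
    return lambda t: gaussian_kernel(np.atleast_1d(t), x, sigma) @ alpha

# Dependent samples (assumed example): an AR(1) chain x_{i+1} = a*x_i + noise
# with |a| < 1 is geometrically ergodic, a standard instance of an
# exponentially strongly (alpha-)mixing sequence.
rng = np.random.default_rng(0)
m, a = 200, 0.5
x = np.empty(m)
x[0] = rng.normal()
for i in range(m - 1):
    x[i + 1] = a * x[i] + rng.normal(scale=0.5)
y = np.sin(x) + 0.1 * rng.normal(size=m)   # noisy regression targets

f_z = regularized_regression(x, y, lam=1e-3)
print(f_z(np.array([0.0, 1.0])))           # predictions at two test points
```

The factor m multiplying λ in the linear system comes from the 1/m empirical average in the objective; letting λ decay with m is what produces learning rates of the form m^{-θ} in analyses of this type.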
| |
Keywords: Learning theory; Regularized learning algorithm; Exponentially strongly mixing sequence; Reproducing kernel Hilbert space
|