Similar Articles
2 similar articles found
1.
Learning the kernel function has recently received considerable attention in machine learning. In this paper, we consider the multi-kernel regularized regression (MKRR) algorithm associated with the least squares loss over reproducing kernel Hilbert spaces. We provide an error analysis for the MKRR algorithm based on Rademacher chaos complexity and iteration techniques. The main result is an explicit learning rate for the MKRR algorithm. Two examples are given to illustrate that the learning rates are much improved compared with those in the literature.
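
As a rough illustration of the setting only, and not of the paper's algorithm or its error analysis, the following minimal sketch performs regularized least squares regression in an RKHS whose kernel is a convex combination of candidate Gaussian kernels, choosing the combination by validation error. All names (gaussian_kernel, fit_rls), the kernel widths, the regularization parameter, and the grid search are our own assumptions for the example.

```python
import numpy as np

def gaussian_kernel(X, Z, sigma):
    # Gaussian kernel matrix between the rows of X and Z
    d2 = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def fit_rls(K, y, lam):
    # Regularized least squares in the RKHS:
    # alpha = (K + lam*m*I)^{-1} y minimizes (1/m) sum (f(x_i)-y_i)^2 + lam*||f||_K^2
    m = K.shape[0]
    return np.linalg.solve(K + lam * m * np.eye(m), y)

# toy data: y = sin(2*pi*x) + noise
rng = np.random.default_rng(0)
m = 200
X = rng.uniform(0.0, 1.0, size=(m, 1))
y = np.sin(2.0 * np.pi * X[:, 0]) + 0.1 * rng.standard_normal(m)
Xtr, ytr, Xva, yva = X[:150], y[:150], X[150:], y[150:]

sigmas = [0.05, 0.2, 1.0]   # hypothetical candidate kernel widths
lam = 1e-3                  # hypothetical regularization parameter
best = None
for i in range(len(sigmas)):
    for j in range(i, len(sigmas)):
        for w in np.linspace(0.0, 1.0, 11):
            # convex combination of two candidate kernels defines the multi-kernel
            Ktr = (w * gaussian_kernel(Xtr, Xtr, sigmas[i])
                   + (1 - w) * gaussian_kernel(Xtr, Xtr, sigmas[j]))
            Kva = (w * gaussian_kernel(Xva, Xtr, sigmas[i])
                   + (1 - w) * gaussian_kernel(Xva, Xtr, sigmas[j]))
            alpha = fit_rls(Ktr, ytr, lam)
            err = np.mean((Kva @ alpha - yva) ** 2)
            if best is None or err < best[0]:
                best = (err, sigmas[i], sigmas[j], w)

print("best validation MSE %.4f with widths (%g, %g) and weight %.1f" % best)
```

The validation-based selection here is only a stand-in: the paper's contribution is a learning-rate bound for such multi-kernel estimators via Rademacher chaos complexity, not a model-selection procedure.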

2.
The study of regularized learning algorithms associated with the least squares loss is an important issue. Wu et al. [2006. Learning rates of least-square regularized regression. Found. Comput. Math. 6, 171–192] established fast learning rates of order m^{-θ} for least squares regularized regression in reproducing kernel Hilbert spaces under certain assumptions on the Mercer kernels and on the regression functions, where m denotes the number of samples and θ may be arbitrarily close to 1. They assumed, as in most existing works, that the samples were drawn independently from the underlying probability distribution. However, independence is a very restrictive assumption. Without independence of the samples, the study of learning algorithms is more involved, and little progress has been made. The aim of this paper is to establish the above results of Wu et al. for dependent samples. The dependence of the samples is expressed in terms of an exponentially strongly mixing sequence.
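
For intuition only, the following sketch simulates the dependent-sample setting: an AR(1) chain is a standard example of an exponentially (geometrically) strongly mixing sequence, and the estimator applied to it is ordinary regularized least squares, unchanged from the i.i.d. case. The data model, kernel width, and function names are our own assumptions; the printout merely suggests an error decaying polynomially in m, it does not verify the paper's m^{-θ} rate.

```python
import numpy as np

def gaussian_kernel(X, Z, sigma=0.3):
    # Gaussian kernel matrix for 1-d inputs
    d2 = (X[:, None] - Z[None, :]) ** 2
    return np.exp(-d2 / (2.0 * sigma ** 2))

def test_error(m, rng, lam=1e-3):
    # AR(1) chain x_t = 0.5*x_{t-1} + eps_t: dependence decays geometrically,
    # a textbook instance of an exponentially strongly mixing sequence
    x = np.empty(m)
    x[0] = rng.standard_normal()
    for t in range(1, m):
        x[t] = 0.5 * x[t - 1] + rng.standard_normal()
    y = np.sin(x) + 0.1 * rng.standard_normal(m)

    # regularized least squares estimator, identical in form to the i.i.d. case
    K = gaussian_kernel(x, x)
    alpha = np.linalg.solve(K + lam * m * np.eye(m), y)

    # squared error against the true regression function on a fresh grid
    grid = np.linspace(-3.0, 3.0, 200)
    pred = gaussian_kernel(grid, x) @ alpha
    return np.mean((pred - np.sin(grid)) ** 2)

rng = np.random.default_rng(1)
for m in [100, 400, 1600]:
    print(m, test_error(m, rng))   # error should shrink roughly polynomially in m
```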
