21.
We consider a regularized D-classification rule for high-dimensional binary classification, which adopts the linear shrinkage estimator of the covariance matrix as an alternative to the sample covariance matrix in the D-classification rule (D-rule for short). We derive an asymptotic expression for the misclassification rate of the regularized D-rule when the sample size n and the dimension p both increase and their ratio p/n approaches a positive constant γ. In addition, we compare its misclassification rate to that of the standard D-rule under various settings via simulation.
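A minimal sketch of the core idea, not the paper's exact D-rule: plug a Ledoit-Wolf linear shrinkage covariance estimate, which stays well conditioned even when p is comparable to n, into a Fisher-type plug-in linear discriminant. The discriminant form shown here is an assumption; the paper's D-rule may differ in detail.

```python
import numpy as np
from sklearn.covariance import LedoitWolf

def fit_shrinkage_discriminant(X0, X1):
    """Plug-in linear rule using a linear shrinkage covariance estimate."""
    mu0, mu1 = X0.mean(axis=0), X1.mean(axis=0)
    pooled = np.vstack([X0 - mu0, X1 - mu1])        # within-class residuals
    # Linear shrinkage: Sigma_hat = (1 - a) * S + a * (trace(S)/p) * I
    sigma = LedoitWolf(assume_centered=True).fit(pooled).covariance_
    w = np.linalg.solve(sigma, mu1 - mu0)           # discriminant direction
    b = -0.5 * w @ (mu0 + mu1)                      # threshold at the midpoint
    return w, b

def predict(X, w, b):
    return (X @ w + b > 0).astype(int)              # 1 if assigned to class 1

rng = np.random.default_rng(0)
p = 100                                             # dimension comparable to n
X0 = rng.normal(0.0, 1.0, size=(60, p))
X1 = rng.normal(0.3, 1.0, size=(60, p))
w, b = fit_shrinkage_discriminant(X0, X1)
print(predict(X1[:5], w, b))
```

The shrinkage target keeps the estimate invertible in the p/n → γ regime, which is exactly where the sample covariance breaks down.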
22.
张景肖, 刘燕平. 《统计研究》 2012, 29(9): 95-102
This paper gives a fairly comprehensive review of regularization methods for curve selection in functional generalized linear models and compares the properties of the various methods. We find that the curve selection problem in functional generalized linear models exhibits a grouping effect and may also involve high-dimensional data. Simulation studies show that Group Bridge, Group MCP, Elastic Net, and Mnet deliver good numerical performance.
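A minimal sketch of the grouping effect the abstract refers to, under simplifying assumptions: a linear (Gaussian) functional model in which each functional covariate is reduced to a group of basis coefficients, and a group lasso penalty (used here as a simple stand-in for Group Bridge, Group MCP, or Mnet) zeroes out whole groups, i.e. selects or drops entire curves. Solved by proximal gradient descent.

```python
import numpy as np

def group_lasso(X, y, groups, lam=0.1, n_iter=500):
    """Proximal gradient for 0.5*||y - Xb||^2 / n + lam * sum_g ||b_g||_2."""
    n, p = X.shape
    step = n / np.linalg.norm(X, 2) ** 2          # 1 / Lipschitz constant
    beta = np.zeros(p)
    for _ in range(n_iter):
        grad = X.T @ (X @ beta - y) / n
        z = beta - step * grad
        for g in groups:                          # blockwise soft-thresholding
            norm_g = np.linalg.norm(z[g])
            z[g] = 0.0 if norm_g == 0 else max(0.0, 1 - step * lam / norm_g) * z[g]
        beta = z
    return beta

rng = np.random.default_rng(1)
n, k, q = 200, 5, 4                  # 5 curves, 4 basis coefficients per curve
X = rng.normal(size=(n, k * q))
groups = [range(j * q, (j + 1) * q) for j in range(k)]
beta_true = np.zeros(k * q)
beta_true[:q] = 1.0                  # only the first curve is relevant
y = X @ beta_true + 0.1 * rng.normal(size=n)
beta = group_lasso(X, y, groups, lam=0.05)
print([round(np.linalg.norm(beta[g]), 3) for g in groups])   # group norms
```

Only the first group norm should be clearly nonzero, illustrating curve-level selection.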
23.
We summarize, review, and comment upon three papers which discuss the use of discrete, noisy, incomplete, scattered pairwise dissimilarity data in statistical model building. Convex cone optimization codes are used to embed the objects into a Euclidean space which respects the dissimilarity information while controlling the dimension of the space. A “newbie” algorithm is provided for embedding new objects into this space. This allows the dissimilarity information to be incorporated into a smoothing spline ANOVA penalized likelihood model, a support vector machine, or any model that will admit reproducing kernel Hilbert space components, for nonparametric regression, supervised learning, or semi-supervised learning. Future work and open questions are discussed. The papers are:
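A minimal sketch of the embedding step, using classical MDS as a simplified stand-in for the papers' convex cone optimization codes: double-center the squared dissimilarity matrix and keep the top positive eigenvalues. Clipping at zero is a crude analogue of constraining the inner-product (kernel) matrix to the positive semidefinite cone.

```python
import numpy as np

def classical_mds(D, dim=2):
    """Embed objects from an n x n dissimilarity matrix D into R^dim."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n        # centering matrix
    B = -0.5 * J @ (D ** 2) @ J                # Gram matrix of the embedding
    vals, vecs = np.linalg.eigh(B)
    idx = np.argsort(vals)[::-1][:dim]         # largest eigenvalues first
    vals = np.clip(vals[idx], 0.0, None)       # project onto the PSD cone
    return vecs[:, idx] * np.sqrt(vals)

rng = np.random.default_rng(2)
X = rng.normal(size=(20, 5))
D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
Z = classical_mds(D, dim=2)
print(Z.shape)                                 # (20, 2)
```

The resulting coordinates (or the Gram matrix B) can then serve as reproducing kernel Hilbert space components in a downstream penalized likelihood model or support vector machine.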
24.
Finding an appropriate low-dimensional representation for complex data is one of the central problems in machine learning and data analysis. In this paper, a nonlinear dimensionality reduction algorithm called regularized Laplacian eigenmaps (RLEM) is proposed, motivated by the method of regularized spectral clustering. The algorithm provides a natural out-of-sample extension for dealing with points not in the original data set. The consistency of the RLEM algorithm is investigated, and a convergence rate is established that depends on the approximation property and the capacity of the reproducing kernel Hilbert space, measured by covering numbers. Experiments are given to illustrate the algorithm.
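A minimal sketch, assuming a Gaussian kernel graph: basic Laplacian eigenmaps, plus a Nyström-style out-of-sample extension that places a new point by kernel-weighted averaging of the training embedding. RLEM's regularized, RKHS-based extension differs; this only illustrates the out-of-sample idea.

```python
import numpy as np

def laplacian_eigenmaps(X, dim=2, sigma=1.0):
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    W = np.exp(-sq / (2 * sigma ** 2))         # Gaussian affinity matrix
    L = np.diag(W.sum(1)) - W                  # unnormalized graph Laplacian
    vals, vecs = np.linalg.eigh(L)             # eigenvalues in ascending order
    return vecs[:, 1:dim + 1]                  # skip the constant eigenvector

def out_of_sample(x_new, X, Y, sigma=1.0):
    """Embed a point not in the original data set (heuristic extension)."""
    w = np.exp(-((X - x_new) ** 2).sum(1) / (2 * sigma ** 2))
    return (w @ Y) / w.sum()                   # kernel-weighted average

rng = np.random.default_rng(3)
X = rng.normal(size=(50, 4))
Y = laplacian_eigenmaps(X, dim=2)
print(out_of_sample(rng.normal(size=4), X, Y))
```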
25.
The study of regularized learning algorithms associated with the least squares loss is an important issue. Wu et al. [2006. Learning rates of least-square regularized regression. Found. Comput. Math. 6, 171–192] established fast learning rates of order m^{-θ} for least squares regularized regression in reproducing kernel Hilbert spaces under some assumptions on the Mercer kernels and on the regression functions, where m denotes the number of samples and θ may be arbitrarily close to 1. They assumed, as in most existing works, that the samples were drawn independently from the underlying probability distribution. However, independence is a very restrictive assumption: without it, the study of learning algorithms is more involved, and little progress has been made. The aim of this paper is to establish the above results of Wu et al. for dependent samples, where the dependence is expressed in terms of an exponentially strongly mixing sequence.
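A minimal sketch of the estimator whose rates are studied: least squares regularized regression in an RKHS is kernel ridge regression, f_z = argmin_f (1/m) Σ_i (f(x_i) − y_i)² + λ‖f‖_K², whose expansion coefficients solve (K + mλI)α = y for a Mercer kernel matrix K. The Gaussian kernel and parameter values below are illustrative choices, not the paper's.

```python
import numpy as np

def kernel_ridge(X, y, lam=1e-2, sigma=1.0):
    """Fit f_z by solving (K + m*lam*I) alpha = y with a Gaussian kernel."""
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    K = np.exp(-sq / (2 * sigma ** 2))          # Mercer kernel matrix
    m = len(y)
    alpha = np.linalg.solve(K + m * lam * np.eye(m), y)
    def f(Xt):                                  # the learned function f_z
        sq_t = ((Xt[:, None, :] - X[None, :, :]) ** 2).sum(-1)
        return np.exp(-sq_t / (2 * sigma ** 2)) @ alpha
    return f

rng = np.random.default_rng(4)
X = rng.uniform(-1, 1, size=(100, 1))
y = np.sin(3 * X[:, 0]) + 0.1 * rng.normal(size=100)
f = kernel_ridge(X, y)
print(f(np.array([[0.5]])))                     # prediction near sin(1.5)
```

The learning-rate analysis asks how fast the error of f_z decays in m; the paper's contribution is to recover the m^{-θ} rates when the m samples form an exponentially strongly mixing sequence rather than an i.i.d. draw.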