1.
In this paper, bootstrap prediction is adapted to resolve some problems that arise with small-sample datasets. The bootstrap predictive distribution is obtained by applying Breiman's bagging to the plug-in distribution with the maximum likelihood estimator. The effectiveness of bootstrap prediction has previously been shown, but problems may arise when bootstrap prediction is constructed from small-sample datasets. In this paper, the Bayesian bootstrap is used to resolve these problems, and its effectiveness is confirmed by several examples. Analysis of small-sample data is important in many fields, and several such datasets are analyzed here. For real datasets, it is shown that plug-in prediction and bootstrap prediction perform very poorly when the sample size is close to the dimension of the parameter, while Bayesian bootstrap prediction remains stable.
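Below is a minimal sketch, not the authors' code, of the two constructions described above for a univariate normal model: the ordinary bootstrap predictive distribution (bagging the plug-in density over resamples) and a Bayesian-bootstrap variant that replaces resampling with Dirichlet weights. The sample, the replicate count `B`, and the helper `predictive_density` are illustrative assumptions.

```python
# Illustrative sketch: bootstrap vs. Bayesian bootstrap predictive densities
# for a normal model. Settings are assumptions, not taken from the paper.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
x = rng.normal(loc=1.0, scale=2.0, size=8)   # a small sample
B = 2000                                      # number of bootstrap replicates

def predictive_density(grid, x, bayesian):
    """Average the plug-in N(mu_hat, sigma_hat^2) density over replicates."""
    n = len(x)
    dens = np.zeros_like(grid)
    used = 0
    for _ in range(B):
        if bayesian:
            w = rng.dirichlet(np.ones(n))           # Bayesian bootstrap weights
        else:
            idx = rng.integers(0, n, size=n)        # ordinary bootstrap resample
            w = np.bincount(idx, minlength=n) / n
        mu = np.sum(w * x)                          # weighted MLE of the mean
        sigma = np.sqrt(np.sum(w * (x - mu) ** 2))  # weighted MLE of the sd
        if sigma > 0:                               # skip degenerate resamples
            dens += stats.norm.pdf(grid, mu, sigma)
            used += 1
    return dens / used

grid = np.linspace(-8.0, 10.0, 200)
boot = predictive_density(grid, x, bayesian=False)
bayes_boot = predictive_density(grid, x, bayesian=True)
dx = grid[1] - grid[0]
print(boot.sum() * dx, bayes_boot.sum() * dx)  # both should be close to 1
```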
2.
On Parametric Bootstrapping and Bayesian Prediction
We investigate bootstrapping and Bayesian methods for prediction in a setting where the observations and the variable being predicted follow different distributions. Many important problems can be formulated in this setting; it appears, for example, when we deal with a Poisson process, and regression problems can also be cast in this form. First, we show that bootstrap predictive distributions are equivalent to Bayesian predictive distributions in the second-order expansion when certain conditions are satisfied. Next, the performance of predictive distributions is compared with that of a plug-in distribution based on an estimator, with prediction accuracy evaluated by the Kullback–Leibler divergence. Finally, we give some examples.
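The following sketch illustrates, under assumed settings, the kind of comparison described in the abstract for a simple Poisson model: the plug-in predictive distribution based on the MLE versus a parametric-bootstrap predictive distribution, with accuracy measured by the Kullback–Leibler divergence from the true distribution. The rate, sample size, exposure `r`, and helper `kl` are illustrative, not taken from the paper.

```python
# Illustrative sketch: plug-in vs. parametric-bootstrap predictive pmf for a
# Poisson model, compared by KL divergence from the truth. All settings assumed.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
lam, n, r, B = 2.0, 5, 1.0, 1000        # true rate, sample size, exposure of Y
support = np.arange(0, 60)               # truncated support for the KL sum
true_pmf = stats.poisson.pmf(support, r * lam)

def kl(p, q):
    """KL divergence D(p || q) on the truncated support."""
    mask = p > 0
    return np.sum(p[mask] * np.log(p[mask] / np.maximum(q[mask], 1e-300)))

risks = {"plug-in": [], "bootstrap": []}
for _ in range(500):                      # Monte Carlo over training samples
    x = rng.poisson(lam, size=n)
    lam_hat = x.mean()                    # MLE of the rate
    plug_in = stats.poisson.pmf(support, r * lam_hat)
    # Parametric bootstrap: resample from Poisson(lam_hat), average the plug-ins.
    lam_star = rng.poisson(lam_hat, size=(B, n)).mean(axis=1)
    boot = stats.poisson.pmf(support[None, :], r * lam_star[:, None]).mean(axis=0)
    risks["plug-in"].append(kl(true_pmf, plug_in))
    risks["bootstrap"].append(kl(true_pmf, boot))

for name, vals in risks.items():
    print(name, np.mean(vals))            # estimated KL risk of each predictive pmf
```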
3.
Estimating prediction accuracy is important when the aim is prediction. The training error is an easy estimate of the prediction error, but it is biased downward; K-fold cross-validation, on the other hand, is biased upward. The upward bias may be negligible in leave-one-out cross-validation, but it sometimes cannot be neglected in 5-fold or 10-fold cross-validation, which are favored on computational grounds. Since the training error is biased downward and K-fold cross-validation is biased upward, a family that connects the two estimates should contain an appropriate estimate somewhere in between. In this paper, we investigate two families that connect the training error and K-fold cross-validation.
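As a rough illustration only (the paper's two families are not reproduced here), the sketch below computes the training error and the K-fold cross-validation error for ordinary least squares and evaluates a simple convex combination `(1 - alpha) * training + alpha * CV`, one obvious family that connects the two estimates. The model, sample size, and fold count are assumptions.

```python
# Illustrative sketch: training error, K-fold CV error, and a simple convex
# combination connecting them, for OLS regression. Settings are assumptions.
import numpy as np

rng = np.random.default_rng(2)

def fit_ols(X, y):
    return np.linalg.lstsq(X, y, rcond=None)[0]

def mse(X, y, beta):
    return np.mean((y - X @ beta) ** 2)

n, p, K = 50, 5, 5
X = rng.normal(size=(n, p))
beta_true = rng.normal(size=p)
y = X @ beta_true + rng.normal(size=n)

beta_hat = fit_ols(X, y)
train_err = mse(X, y, beta_hat)           # biased downward

folds = np.array_split(rng.permutation(n), K)
cv_err = 0.0
for fold in folds:                         # K-fold CV: biased upward (smaller fits)
    mask = np.ones(n, dtype=bool)
    mask[fold] = False
    b = fit_ols(X[mask], y[mask])
    cv_err += mse(X[fold], y[fold], b) * len(fold) / n

for alpha in (0.0, 0.5, 1.0):              # points along the connecting family
    print(alpha, (1 - alpha) * train_err + alpha * cv_err)
```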