1.
In this paper, bootstrap prediction is adapted to resolve some problems that arise with small-sample datasets. The bootstrap predictive distribution is obtained by applying Breiman's bagging to the plug-in distribution with the maximum likelihood estimator. The effectiveness of bootstrap prediction has previously been shown, but problems can arise when it is constructed from small samples; here, the Bayesian bootstrap is used to resolve them, and its effectiveness is confirmed by examples. Analysis of small-sample data is increasingly important in various fields, and some such datasets are analyzed in this paper. For real datasets, it is shown that plug-in prediction and bootstrap prediction perform very poorly when the sample size is close to the dimension of the parameter, while Bayesian bootstrap prediction remains stable.
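The contrast between the two constructions can be sketched in a few lines (a hypothetical illustration, not the paper's experiments: the normal model, the sample size n = 8, and the number of replicates B are invented here). The ordinary bootstrap refits the MLE on resamples drawn with replacement, while the Bayesian bootstrap replaces the multinomial resample counts with smooth Dirichlet(1, …, 1) weights:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(loc=1.0, scale=2.0, size=8)  # a small sample, n = 8
n = len(x)

def normal_mle(sample, w=None):
    # (Weighted) maximum likelihood estimates for a normal model.
    w = np.full(len(sample), 1.0 / len(sample)) if w is None else w
    mu = float(np.sum(w * sample))
    sigma = float(np.sqrt(np.sum(w * (sample - mu) ** 2)))
    return mu, sigma

B = 2000
# Ordinary bootstrap: refit the MLE on resamples drawn with replacement
# (equivalently, multinomial weights); bagging these plug-in densities
# gives the bootstrap predictive distribution.
boot = [normal_mle(x[rng.integers(0, n, size=n)]) for _ in range(B)]
# Bayesian bootstrap: smooth Dirichlet(1, ..., 1) weights instead of
# multinomial counts, so every observation keeps strictly positive
# weight and the fitted sigma cannot collapse onto a degenerate resample.
bayes = [normal_mle(x, w=rng.dirichlet(np.ones(n))) for _ in range(B)]

boot_sigmas = np.array([s for _, s in boot])
bayes_sigmas = np.array([s for _, s in bayes])
print(boot_sigmas.min(), bayes_sigmas.min())
```

With distinct data points, the Dirichlet weights keep every weighted sigma strictly positive, which is one way the smoother weighting stabilizes small-sample fits.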
2.
On Parametric Bootstrapping and Bayesian Prediction
Tadayoshi Fushiki, Fumiyasu Komaki, Kazuyuki Aihara. Scandinavian Journal of Statistics, 2004, 31(3): 403–416
We investigate bootstrapping and Bayesian methods for prediction. The observations and the variable being predicted are distributed according to different distributions; many important problems can be formulated in this setting. It arises, for example, when we deal with a Poisson process, and regression problems can also be formulated in it. First, we show that bootstrap predictive distributions are equivalent to Bayesian predictive distributions in the second-order expansion when certain conditions are satisfied. Next, the performance of predictive distributions is compared with that of a plug-in distribution with an estimator. The accuracy of prediction is evaluated by using the Kullback–Leibler divergence. Finally, we give some examples.
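The kind of comparison described here can be illustrated for a Poisson observation (a hedged sketch only: the sample statistics n = 5 and s = 12, the Jeffreys-type Gamma(1/2) prior, and the truncated support are assumptions of this example, not taken from the paper). Under that prior, the Bayesian predictive distribution is negative binomial, and both it and the plug-in Poisson can be scored against the true distribution with the Kullback–Leibler divergence:

```python
import math

def kl(p, q):
    # Kullback-Leibler divergence between two pmfs on a common support.
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

lam_true = 3.0
support = range(60)  # truncated Poisson support; remaining tail mass is negligible

def poisson_pmf(lam):
    return [math.exp(-lam) * lam**k / math.factorial(k) for k in support]

# Suppose n = 5 observations summed to s = 12, so the MLE is 12/5.
n, s = 5, 12
plug_in = poisson_pmf(s / n)

# With a Jeffreys-type Gamma(1/2) prior, the posterior is Gamma(s + 1/2, n)
# and the predictive distribution is negative binomial with
# r = s + 1/2 and success probability p = n / (n + 1).
r, p = s + 0.5, n / (n + 1)
bayes = [math.gamma(r + k) / (math.gamma(r) * math.factorial(k))
         * p**r * (1 - p)**k for k in support]

true = poisson_pmf(lam_true)
print(kl(true, plug_in), kl(true, bayes))
```

The two printed divergences measure how far each predictive distribution sits from the truth; smaller is better.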
3.
Tadayoshi Fushiki. Statistics and Computing, 2011, 21(2): 137–146
Estimation of prediction accuracy is important when our aim is prediction. The training error is an easy estimate of prediction error, but it has a downward bias. On the other hand, K-fold cross-validation has an upward bias. The upward bias may be negligible in leave-one-out cross-validation, but it sometimes cannot be neglected in 5-fold or 10-fold cross-validation, which are favored from a computational standpoint. Since the training error has a downward bias and K-fold cross-validation has an upward bias, there should be an appropriate estimate in a family that connects the two estimates. In this paper, we investigate two families that connect the training error and K-fold cross-validation.
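The two biases are easy to see numerically (a minimal sketch with an invented linear-regression setup: ordinary least squares, n = 25 with 8 predictors, Gaussian noise with variance 1, and 5 folds; none of these choices come from the paper). Averaged over repeated datasets, the training error lands below the true noise level and the 5-fold estimate lands above it:

```python
import numpy as np

rng = np.random.default_rng(0)

def fit_ols(X, y):
    # Ordinary least squares coefficients (intercept column already in X).
    return np.linalg.lstsq(X, y, rcond=None)[0]

def mse(X, y, beta):
    return float(np.mean((y - X @ beta) ** 2))

def one_run(n=25, p=8, noise=1.0):
    X = np.hstack([np.ones((n, 1)), rng.normal(size=(n, p))])
    y = X @ np.ones(p + 1) + noise * rng.normal(size=n)
    beta = fit_ols(X, y)
    train = mse(X, y, beta)  # fit and evaluate on the same data: biased down
    # 5-fold CV: each fold is predicted by a model fit on the other four,
    # i.e. a model trained on only 4n/5 points, so the estimate is biased up.
    idx = rng.permutation(n)
    cv = float(np.mean([mse(X[f], y[f],
                            fit_ols(np.delete(X, f, 0), np.delete(y, f)))
                        for f in np.array_split(idx, 5)]))
    return train, cv

runs = np.array([one_run() for _ in range(200)])
print(runs.mean(axis=0))  # mean training error vs mean 5-fold CV error
```

With the noise variance fixed at 1, the averaged training error falls below 1 and the averaged 5-fold CV error above it, which is the gap the families in the paper interpolate across.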