Posterior robustness with more than one sampling model

Authors: Luis Raúl Pericchi, María Eglée Pérez

Institution: Departamento de Matemática y Centro de Estadística y Software Matemático (CESMa), Universidad Simón Bolívar, Caracas, Venezuela

Abstract: In order to robustify posterior inference, it is necessary to consider uncertainty about the sampling model in addition to using large classes of priors. In this article we suggest that a convenient and simple way to incorporate model robustness is to consider a discrete set of competing sampling models and to combine it with a suitably large class of priors. This set reflects foreseeable departures from the base model, such as thinner or heavier tails or asymmetry. We combine the models with different classes of priors that have been proposed in the vast literature on Bayesian robustness with respect to the prior. We also explore links with the related literature on stable estimation and precise measurement theory, now with more than one model entertained. To these ends it is necessary to introduce a procedure for model comparison that does not depend on an arbitrary constant or scale. We utilize a recent development in automatic Bayes factors with self-adjusted scale, the 'intrinsic Bayes factor' (Berger and Pericchi, Technical Report, 1993).

Keywords: Adaptive weighted averages; intrinsic Bayes factor; model robustness; precise measurement; prior robustness
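The core idea of the abstract, entertaining a discrete set of competing sampling models and weighting their posterior inferences by each model's marginal likelihood, can be sketched in a toy example. The sketch below is not the paper's method: it uses a fixed proper prior and equal prior model weights instead of the intrinsic Bayes factor, and the Normal/Student-t pair, the data, and all numerical settings are illustrative assumptions chosen only to show a base model next to a heavier-tailed alternative.

```python
import math

def normal_pdf(x, mu, sigma):
    # Density of N(mu, sigma^2).
    z = (x - mu) / sigma
    return math.exp(-0.5 * z * z) / (sigma * math.sqrt(2.0 * math.pi))

def student_t_pdf(x, mu, scale, df):
    # Density of a location-scale Student-t: a heavier-tailed alternative.
    z = (x - mu) / scale
    c = math.gamma((df + 1) / 2) / (math.gamma(df / 2) * math.sqrt(df * math.pi) * scale)
    return c * (1.0 + z * z / df) ** (-(df + 1) / 2)

def marginal_likelihood(data, likelihood, prior_pdf, grid):
    # Crude grid integration of prod_i f(x_i | mu) * pi(mu) over the location mu.
    width = grid[1] - grid[0]
    total = 0.0
    for mu in grid:
        lik = 1.0
        for x in data:
            lik *= likelihood(x, mu)
        total += lik * prior_pdf(mu) * width
    return total

# Illustrative data with one mild outlier, and an illustrative proper prior on mu.
data = [0.1, -0.4, 0.3, 2.8, 0.0]
grid = [i * 0.01 - 5.0 for i in range(1001)]          # mu in [-5, 5]
prior = lambda mu: normal_pdf(mu, 0.0, 2.0)

# Two entertained sampling models: Normal base model vs. Student-t with 3 df.
m_normal = marginal_likelihood(data, lambda x, mu: normal_pdf(x, mu, 1.0), prior, grid)
m_t = marginal_likelihood(data, lambda x, mu: student_t_pdf(x, mu, 1.0, 3), prior, grid)

# Posterior model probabilities under equal prior model weights; any posterior
# quantity would then be averaged across models with these adaptive weights.
p_normal = m_normal / (m_normal + m_t)
p_t = 1.0 - p_normal
print(f"P(Normal | data) = {p_normal:.3f}, P(Student-t3 | data) = {p_t:.3f}")
```

In the paper's framework the fixed prior above would be replaced by a large class of priors, and the arbitrary-constant problem in the model weights is what motivates the intrinsic Bayes factor.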