Query returned 20 similar documents (search time: 9 ms)
1.
Biviana M. Suárez-Sierra, Eliane R. Rodrigues & Guadalupe Tzintzun 《Journal of applied statistics》2022,49(9):2430
It is very important to study the occurrence of high levels of particulate matter because of the potential harm to people's health and to the environment. In the present work we use a non-homogeneous Poisson model to analyse the rate of exceedances of particulate matter with diameter smaller than 2.5 microns (PM2.5). Models with and without change-points are considered and applied to data from Bogotá, Colombia, and Mexico City, Mexico. Results show that whereas in Bogotá larger particles pose the more serious problem, in Mexico City, even though levels are more controlled nowadays, in the recent past PM2.5 was the pollutant causing serious problems.
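To make the setup concrete, here is a minimal sketch (not the authors' code) of a non-homogeneous Poisson exceedance process with a single change-point, simulated by thinning. The Weibull-type rate and every parameter value below are illustrative assumptions:

```python
import numpy as np

def intensity(t, alpha, beta):
    """Weibull-type NHPP rate: lambda(t) = (alpha/beta) * (t/beta)**(alpha - 1)."""
    return (alpha / beta) * (t / beta) ** (alpha - 1.0)

def rate(t, tau=365.0, pre=(1.4, 90.0), post=(0.8, 200.0)):
    """One change-point at day tau: the (alpha, beta) pair switches regimes."""
    alpha, beta = pre if t < tau else post
    return intensity(t, alpha, beta)

def simulate_exceedance_days(T=730.0, lam_max=0.05, seed=0):
    """Ogata-style thinning: propose times from a homogeneous process at
    rate lam_max and keep each with probability rate(t) / lam_max."""
    rng = np.random.default_rng(seed)
    t, days = 0.0, []
    while True:
        t += rng.exponential(1.0 / lam_max)
        if t > T:
            return np.array(days)
        if rng.uniform() < rate(t) / lam_max:
            days.append(t)

print(len(simulate_exceedance_days()), "simulated exceedance days in two years")
```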
2.
C. P. Robert, T. Rydén & D. M. Titterington 《Journal of the Royal Statistical Society. Series B, Statistical methodology》2000,62(1):57-75
Hidden Markov models form an extension of mixture models which provides a flexible class of models exhibiting dependence and a possibly large degree of variability. We show how reversible jump Markov chain Monte Carlo techniques can be used to estimate the parameters as well as the number of components of a hidden Markov model in a Bayesian framework. We employ a mixture of zero-mean normal distributions as our main example and apply this model to three sets of data from finance, meteorology and geomagnetism.
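As a point of reference for the model class, the following toy sketch simulates a two-state hidden Markov chain with zero-mean normal emissions whose variance depends on the hidden state. The transition matrix and variances are invented, and the paper's actual contribution (reversible jump moves over the number of components) is not reproduced here:

```python
import numpy as np

def simulate_hmm(n=500, sigmas=(0.5, 2.0), seed=0):
    """First-order Markov chain over states; emission y_t ~ N(0, sigma_state^2)."""
    rng = np.random.default_rng(seed)
    P = np.array([[0.95, 0.05],      # persistence of the low-variance state
                  [0.10, 0.90]])     # persistence of the high-variance state
    states = np.zeros(n, dtype=int)
    for t in range(1, n):
        states[t] = rng.choice(len(sigmas), p=P[states[t - 1]])
    y = rng.normal(0.0, np.asarray(sigmas)[states])
    return states, y

states, y = simulate_hmm()
print(f"fraction of time in the low-variance state: {np.mean(states == 0):.2f}")
```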
3.
The authors present theoretical results that show how one can simulate a mixture distribution whose components live in subspaces of different dimension by reformulating the problem in such a way that observations may be drawn from an auxiliary continuous distribution on the largest subspace and then transformed in an appropriate fashion. Motivated by the importance of enlarging the set of available Markov chain Monte Carlo (MCMC) techniques, the authors show how their results can be fruitfully employed in problems such as model selection (or averaging) of nested models, or regeneration of Markov chains for evaluating standard deviations of estimated expectations derived from MCMC simulations.
4.
Parallel algorithms for Markov chain Monte Carlo methods in latent spatial Gaussian models
Markov chain Monte Carlo (MCMC) implementations of Bayesian inference for latent spatial Gaussian models are very computationally intensive, and restrictions on storage and computation time are limiting their application to large problems. Here we propose various parallel MCMC algorithms for such models. The algorithms' performance is discussed with respect to a simulation study, which demonstrates the increase in speed with which the algorithms explore the posterior distribution as a function of the number of processors. We also discuss how feasible problem size is increased by use of these algorithms.
5.
It is well known that long-term exposure to high levels of pollution is hazardous to human health. Therefore, it is important to study and understand the behavior of pollutants in general. In this work, we study the occurrence of a pollutant concentration surpassing a given threshold (an exceedance), as well as the length of time the concentration stays above it. A general N(t)/D/1 queueing model is considered to jointly analyze those problems. A non-homogeneous Poisson process is used to model the arrivals of clusters of exceedances. Geometric and generalized negative binomial distributions are used to model the amount of time (cluster size) that the pollutant concentration stays above the threshold. A mixture model is also used for the cluster size distribution. The rate function of the non-homogeneous Poisson process is assumed to be of either the Weibull or the Musa–Okumoto type. The selection of the model that best fits the data is performed using the Bayes discrimination method and the sum of absolute differences as well as using a graphical criterion. Results are applied to the daily maximum ozone measurements provided by the monitoring network of the Metropolitan Area of Mexico City.
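A toy sketch of the queueing view described above: clusters of exceedances arrive as a non-homogeneous Poisson process, and each cluster keeps the concentration above the threshold for a geometrically distributed number of days. The Musa–Okumoto rate uses one common parameterisation, and all values are made up:

```python
import numpy as np

def musa_okumoto_rate(t, lam0=0.08, theta=0.002):
    """lambda(t) = lam0 / (1 + lam0*theta*t): a decaying cluster-arrival rate."""
    return lam0 / (1.0 + lam0 * theta * t)

def simulate_clusters(T=365.0, p=0.4, seed=1):
    """Thinning for cluster-arrival days; Geometric(p) days above threshold."""
    rng = np.random.default_rng(seed)
    lam_max = musa_okumoto_rate(0.0)   # rate is decreasing, so lambda(0) bounds it
    t, clusters = 0.0, []
    while True:
        t += rng.exponential(1.0 / lam_max)
        if t > T:
            return clusters
        if rng.uniform() < musa_okumoto_rate(t) / lam_max:
            clusters.append((t, rng.geometric(p)))   # (arrival day, cluster size)

episodes = simulate_clusters()
print(len(episodes), "clusters;", sum(s for _, s in episodes), "exceedance days")
```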
6.
Eugene D. Hahn 《Journal of the Royal Statistical Society. Series A (Statistics in Society)》2006,169(1):37-48
In recent years, advances in Markov chain Monte Carlo techniques have had a major influence on the practice of Bayesian statistics. An interesting but hitherto largely underexplored corollary of this fact is that Markov chain Monte Carlo techniques make it practical to consider broader classes of informative priors than have been used previously. Conjugate priors, long the workhorse of classic methods for eliciting informative priors, have their roots in a time when modern computational methods were unavailable. In the current environment more attractive alternatives are practicable. A reappraisal of these classic approaches is undertaken, and principles for generating modern elicitation methods are described. A new prior elicitation methodology in accord with these principles is then presented.
7.
《Journal of Statistical Computation and Simulation》2012,82(2):394-413
Mixture models are flexible tools in density estimation and classification problems. Bayesian estimation of such models typically relies on sampling from the posterior distribution using Markov chain Monte Carlo. Label switching arises because the posterior is invariant to permutations of the component parameters. Methods for dealing with label switching have been studied fairly extensively in the literature, with the most popular approaches being those based on loss functions. However, many of these algorithms turn out to be too slow in practice, and can be infeasible as the size and/or dimension of the data grow. We propose a new, computationally efficient algorithm based on a loss function interpretation, and show that it can scale up well in large data set scenarios. We then review earlier solutions that can scale up well for large data sets, and compare their performance on simulated and real data sets. We conclude with a discussion of, and recommendations for, all the methods studied.
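A minimal loss-function-style relabelling sketch (not the algorithm proposed in the paper): each posterior draw of component parameters is permuted to minimise squared distance to a pivot draw, which undoes label switching for small numbers of components:

```python
import numpy as np
from itertools import permutations

def relabel(draws, pivot):
    """Permute each draw's K component labels to minimise squared distance to
    a pivot draw. Brute force over K! permutations: fine for small K only."""
    K = draws.shape[1]
    out = np.empty_like(draws)
    for i, d in enumerate(draws):
        best = min(permutations(range(K)),
                   key=lambda p: np.sum((d[list(p)] - pivot) ** 2))
        out[i] = d[list(best)]
    return out

# Toy demo: two well-separated means whose labels flip halfway through the run.
rng = np.random.default_rng(0)
draws = np.array([-2.0, 2.0]) + 0.1 * rng.standard_normal((1000, 2))
draws[500:] = draws[500:, ::-1].copy()      # inject a label switch
fixed = relabel(draws, pivot=draws[0])
print(fixed.mean(axis=0))                   # close to [-2, 2] after relabelling
```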
8.
This paper considers a connected Markov chain for sampling 3 × 3 × K contingency tables having fixed two‐dimensional marginal totals. Such sampling arises in performing various tests of the hypothesis of no three‐factor interactions. A Markov chain algorithm is a valuable tool for evaluating P‐values, especially for sparse datasets where large‐sample theory does not work well. To construct a connected Markov chain over high‐dimensional contingency tables with fixed marginals, algebraic algorithms have been proposed. These algorithms involve computations in polynomial rings using Gröbner bases. However, algorithms based on Gröbner bases do not incorporate symmetry among variables and are very time‐consuming when the contingency tables are large. We construct a minimal basis for a connected Markov chain over 3 × 3 × K contingency tables. The minimal basis is unique. Some numerical examples illustrate the practicality of our algorithms.
9.
This article presents a new way of modeling time-varying volatility. We generalize the usual stochastic volatility models to encompass regime-switching properties. The unobserved state variables are governed by a first-order Markov process. Bayesian estimators are constructed by Gibbs sampling. High-, medium- and low-volatility states are identified for the Standard & Poor's 500 weekly return data. Persistence in volatility is explained by the persistence in the low- and the medium-volatility states. The high-volatility regime is able to capture the 1987 crash and overlap considerably with four U.S. economic recession periods.
10.
S. P. Brooks 《Statistics and Computing》1998,8(3):267-274
Yu (1995) introduced a novel convergence diagnostic for Markov chain Monte Carlo (MCMC) which provides a qualitative measure of mixing for Markov chains via a cusum path plot for univariate parameters of interest. The method is based upon the output of a single replication of an MCMC sampler and is therefore widely applicable and simple to use. One criticism of the method is that its interpretation is subjective, since it is based upon a graphical comparison of two cusum path plots. In this paper, we develop a quantitative measure of smoothness which we can associate with any given cusum path, and show how we can use this measure to obtain a quantitative measure of mixing. In particular, we derive the large-sample distribution of this smoothness measure, so that objective inference is possible. In addition, we show how this quantitative measure may also be used to provide an estimate of the burn-in length for any given sampler. We discuss the utility of this quantitative approach, and highlight a problem which may occur if the chain is able to remain in any one state for some period of time. We provide a more general implementation of the method to overcome the problem in such cases.
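A simple sketch in the spirit of the cusum diagnostic: build the cusum path of a scalar MCMC trace and count direction changes. Brooks' paper derives a large-sample distribution for a smoothness statistic; the naive "turning-point fraction" below is only an illustration of the idea, not the paper's exact measure:

```python
import numpy as np

def cusum_path(trace):
    """Cumulative sums of the mean-centred trace (Yu's cusum path)."""
    return np.cumsum(trace - trace.mean())

def turning_point_fraction(trace):
    """Fraction of steps at which the cusum path changes direction: close to
    1/2 for a fast-mixing chain, noticeably smaller for a slowly mixing one."""
    d = np.diff(cusum_path(trace))
    return np.mean(d[:-1] * d[1:] < 0)

rng = np.random.default_rng(0)
fast = rng.standard_normal(5000)            # stand-in for a well-mixing chain
slow = np.zeros(5000)                       # AR(1) with rho = 0.95: slow mixing
for t in range(1, 5000):
    slow[t] = 0.95 * slow[t - 1] + rng.standard_normal()
print(turning_point_fraction(fast), turning_point_fraction(slow))
```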
11.
TOM BRITTON, THEODORE KYPRAIOS & PHILIP D. O'NEILL 《Scandinavian Journal of Statistics》2011,38(3):578-599
A stochastic epidemic model is defined in which each individual belongs to a household, a secondary grouping (typically school or workplace) and also the community as a whole. Moreover, infectious contacts take place in these three settings according to potentially different rates. For this model, we consider how different kinds of data can be used to estimate the infection rate parameters with a view to understanding what can and cannot be inferred. Among other things we find that temporal data can be of considerable inferential benefit compared with final size data, that the degree of heterogeneity in the data can have a considerable effect on inference for non‐household transmission, and that inferences can be materially different from those obtained from a model with only two levels of mixing. We illustrate our findings by analysing a highly detailed dataset concerning a measles outbreak in Hagelloch, Germany.
12.
《Journal of Statistical Computation and Simulation》2012,82(4):235-248
In this work we study robustness in Bayesian models through a generalization of the Normal distribution. We present appropriate new techniques for handling this distribution in Bayesian inference. We then propose two approaches to decide, in some applications, whether we should replace the usual Normal model by this generalization. First, we pose this dilemma as a model rejection problem, using diagnostic measures. In the second approach we evaluate the model's predictive efficiency. We illustrate those perspectives with a simulation study, a nonlinear model and a longitudinal data model.
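One standard generalization of the Normal is the generalized normal (exponential power) family; whether this matches the paper's exact generalization is an assumption, but it illustrates how a shape parameter controls tail heaviness:

```python
import numpy as np
from math import gamma

def gnorm_pdf(x, mu=0.0, alpha=1.0, beta=2.0):
    """f(x) = beta / (2*alpha*Gamma(1/beta)) * exp(-(|x - mu| / alpha)**beta).
    beta = 2 recovers a normal shape, beta = 1 a Laplace; beta < 2 gives
    heavier tails, which is what drives the robustness properties."""
    z = np.abs(x - mu) / alpha
    return beta / (2.0 * alpha * gamma(1.0 / beta)) * np.exp(-z ** beta)

xs = np.linspace(-3.0, 3.0, 7)
print(gnorm_pdf(xs, beta=1.5).round(4))     # heavier-tailed than the normal
```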
13.
Zheng Wei 《Journal of applied statistics》2019,46(11):1917-1936
Due to the escalating growth of big data sets in recent years, new Bayesian Markov chain Monte Carlo (MCMC) parallel computing methods have been developed. These methods partition large data sets by observations into subsets. However, for Bayesian nested hierarchical models, typically only a few parameters are common for the full data set, with most parameters being group specific. Thus, parallel Bayesian MCMC methods that take into account the structure of the model and split the full data set by groups rather than by observations are a more natural approach for analysis. Here, we adapt and extend a recently introduced two-stage Bayesian hierarchical modeling approach, and we partition complete data sets by groups. In stage 1, the group-specific parameters are estimated independently in parallel. The stage 1 posteriors are used as proposal distributions in stage 2, where the target distribution is the full model. Using three-level and four-level models, we show in both simulation and real data studies that results of our method agree closely with the full data analysis, with greatly increased MCMC efficiency and greatly reduced computation times. The advantages of our method versus existing parallel MCMC computing methods are also described.
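A structural sketch of the two-stage idea (a toy paraphrase, not the authors' implementation): stage 1 samples each group's sub-posterior independently in parallel; stage 2, only indicated in comments here, would recycle those draws as independence proposals against the full hierarchical target:

```python
import numpy as np
from multiprocessing import Pool

rng = np.random.default_rng(0)
groups = [rng.normal(loc=m, size=50) for m in (-1.0, 0.0, 2.0)]  # fake data

def stage1(y, n_draws=2000, sigma=1.0, prior_var=100.0):
    """Stage 1: conjugate normal-mean sub-posterior for one group, run
    independently of the other groups (hence trivially parallel)."""
    post_var = 1.0 / (len(y) / sigma**2 + 1.0 / prior_var)
    post_mean = post_var * y.sum() / sigma**2
    return np.random.default_rng().normal(post_mean, np.sqrt(post_var), n_draws)

if __name__ == "__main__":
    with Pool(3) as pool:                    # one worker per group
        proposals = pool.map(stage1, groups)
    # Stage 2 (indicated only): resample these stage-1 draws as independence
    # proposals in a Metropolis-Hastings sampler whose target is the full
    # hierarchical model, which ties the groups together via the hyperprior.
    print([float(p.mean().round(2)) for p in proposals])
```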
14.
I. Bray & D. E. Wright 《Journal of the Royal Statistical Society. Series C, Applied statistics》1998,47(4):589-602
Data collected before the routine application of prenatal screening are of unique value in estimating the natural live-birth prevalence of Down syndrome. However, much of these data come from births over 20 years ago and are of uncertain quality. In particular, they are subject to varying degrees of underascertainment. Published approaches have used ad hoc corrections to deal with this problem or have been restricted to data sets in which ascertainment is assumed to be complete. In this paper we adopt a Bayesian approach to modelling ascertainment and live-birth prevalence. We consider three prior specifications concerning ascertainment and compare predicted maternal-age-specific prevalence under these three different prior specifications. The computations are carried out by using Markov chain Monte Carlo methods in which model parameters and missing data are sampled.
15.
Bayesian analysis of mortality data
Petros Dellaportas, Adrian F. M. Smith & Photis Stavropoulos 《Journal of the Royal Statistical Society. Series A (Statistics in Society)》2001,164(2):275-291
Congdon argued that the use of parametric modelling of mortality data is necessary in many practical demographical problems. In this paper, we focus on a form of model introduced by Heligman and Pollard in 1980, and we adopt a Bayesian analysis, using Markov chain Monte Carlo simulation, to produce the posterior summaries required. This opens the way to richer, more flexible inference summaries and avoids the numerical problems that are encountered with classical methods. Particular methodologies to cope with incomplete life-tables and a derivation of joint lifetimes, median times to death and related quantities of interest are also presented.
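For reference, the Heligman–Pollard law the paper builds on has the well-known eight-parameter form below; the parameter values in the demo are illustrative magnitudes, not estimates from the paper:

```python
import numpy as np

def heligman_pollard_qx(x, A, B, C, D, E, F, G, H):
    """q_x/(1-q_x) = A**((x+B)**C) + D*exp(-E*(log x - log F)**2) + G*H**x:
    childhood, accident-hump and senescent mortality terms respectively."""
    eta = (A ** ((x + B) ** C)
           + D * np.exp(-E * (np.log(x) - np.log(F)) ** 2)
           + G * H ** x)
    return eta / (1.0 + eta)

ages = np.arange(1, 100, dtype=float)
qx = heligman_pollard_qx(ages, A=5e-4, B=0.01, C=0.10, D=1e-3,
                         E=10.0, F=20.0, G=5e-5, H=1.10)
print(qx[[0, 20, 60, 90]].round(5))   # q_x at ages 1, 21, 61 and 91
```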
16.
GIORGOS SERMAIDIS, OMIROS PAPASPILIOPOULOS, GARETH O. ROBERTS, ALEXANDROS BESKOS & PAUL FEARNHEAD 《Scandinavian Journal of Statistics》2013,40(2):294-321
We develop exact Markov chain Monte Carlo methods for discretely sampled, directly and indirectly observed diffusions. The qualification 'exact' refers to the fact that the invariant and limiting distribution of the Markov chains is the posterior distribution of the parameters free of any discretization error. The class of processes to which our methods directly apply are those which can be simulated using the most general exact simulation algorithm to date. The article introduces various methods to boost the performance of the basic scheme, including reparametrizations and auxiliary Poisson sampling. We contrast both theoretically and empirically how this new approach compares to irreducible high-frequency imputation, which is the state‐of‐the‐art alternative for the class of processes we consider, and we uncover intriguing connections. All methods discussed in the article are tested on typical examples.
17.
D. A. Stephens & A. F. M. Smith 《Journal of the Royal Statistical Society. Series C, Applied statistics》1997,46(4):477-492
To assess radiation damage in steel for reactor pressure vessels in the nuclear industry, specimens are subjected to the Charpy test, which measures how much energy a specimen can absorb at a given test temperature before cracking. The resulting Charpy impact energy data are well represented by a three-parameter Burr curve as a function of test temperature, in which the parameters of the Burr curve are themselves dependent on irradiation dose. The resulting non-linear model function, combined with heteroscedastic random errors, gives rise to complicated likelihood surfaces that make conventional statistical techniques difficult to implement. To compute estimates of parameters of practical interest, Markov chain Monte Carlo sampling-based techniques are implemented. The approach is applied to 40 data sets from specimens subjected to no irradiation or one or two doses of irradiation. The influence of irradiation dose on the amount of energy absorbed is investigated.
18.
H. Haario, M. Laine, M. Lehtinen, E. Saksman & J. Tamminen 《Journal of the Royal Statistical Society. Series B, Statistical methodology》2004,66(3):591-607
We discuss the inversion of gas profiles (ozone, NO3, NO2, aerosols and neutral density) in the upper atmosphere from spectral occultation measurements. The data are produced by the 'Global Ozone Monitoring by Occultation of Stars' (GOMOS) instrument on board the Envisat satellite, which was launched in March 2002. The instrument measures the attenuation of light spectra at various horizontal paths from about 100 km down to 10–20 km. The new feature is that these data allow the inversion of the gas concentration height profiles. A short introduction is given to the present operational data management procedure, with examples of the first real data inversion. Several solution options for a more comprehensive statistical inversion are presented. A direct inversion leads to a non-linear model with hundreds of parameters to be estimated. The problem is solved with an adaptive single-step Markov chain Monte Carlo algorithm. Another approach is to divide the problem into several non-linear smaller-dimensional problems, to run parallel adaptive Markov chain Monte Carlo chains for them, and to solve the gas profiles in repetitive linear steps. The effect of grid size is discussed, and we present how the prior regularization takes the grid size into account in a way that effectively leads to a grid-independent inversion.
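A compact sketch of an adaptive Metropolis step in the spirit of the authors' adaptive MCMC work: the Gaussian proposal covariance is tuned from the chain's own history. The target below is a toy correlated 2-D normal, not the atmospheric inversion problem:

```python
import numpy as np

def log_target(x):
    """Toy target: a correlated 2-D Gaussian standing in for the posterior."""
    cov_inv = np.linalg.inv(np.array([[1.0, 0.9], [0.9, 1.0]]))
    return -0.5 * x @ cov_inv @ x

def adaptive_metropolis(n=5000, d=2, adapt_start=1000, eps=1e-6, seed=0):
    rng = np.random.default_rng(seed)
    sd = 2.38**2 / d                         # classic adaptive-MCMC scaling
    chain = np.zeros((n, d))
    for t in range(1, n):
        if t < adapt_start:
            cov = 0.1 * np.eye(d)            # fixed proposal while warming up
        else:                                # adapt to the chain's own history
            cov = sd * np.cov(chain[:t].T) + sd * eps * np.eye(d)
        y = rng.multivariate_normal(chain[t - 1], cov)
        accept = np.log(rng.uniform()) < log_target(y) - log_target(chain[t - 1])
        chain[t] = y if accept else chain[t - 1]
    return chain

chain = adaptive_metropolis()
print(np.cov(chain[2000:].T).round(2))       # should approach [[1, .9], [.9, 1]]
```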
19.
There are two conceptually distinct tasks in Markov chain Monte Carlo (MCMC): a sampler is designed for simulating a Markov chain and then an estimator is constructed on the Markov chain for computing integrals and expectations. In this article, we aim to address the second task by extending the likelihood approach of Kong et al. for Monte Carlo integration. We consider a general Markov chain scheme and use partial likelihood for estimation. Basically, the Markov chain scheme is treated as a random design and a stratified estimator is defined for the baseline measure. Further, we propose useful techniques including subsampling, regulation, and amplification for achieving overall computational efficiency. Finally, we introduce approximate variance estimators for the point estimators. The method can yield substantially improved accuracy compared with Chib's estimator and the crude Monte Carlo estimator, as illustrated with three examples.
20.
Cathy W. S. Chen, Richard H. Gerlach, S. T. Boris Choy & Celine Lin 《Journal of statistical planning and inference》2010
A family of threshold nonlinear generalised autoregressive conditionally heteroscedastic models that allows smooth transitions between regimes is considered, capturing size asymmetry via an exponential smooth transition function. A Bayesian approach is taken and an efficient adaptive sampling scheme is employed for inference, including a novel extension to a recently proposed prior for the smoothing parameter that solves a likelihood identification problem. A simulation study illustrates that the sampling scheme performs well, with the chosen prior kept close to uninformative, while successfully ensuring identification of model parameters and accurate inference for the smoothing parameter. An empirical study confirms the potential suitability of the model, highlighting the presence of both mean and volatility (size) asymmetry; the model is favoured over modern, popular competitors, including those with sign asymmetry, via the deviance information criterion.
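One common exponential smooth-transition GARCH recursion, sketched to show how a smooth, size-asymmetric regime change works; this is a generic textbook form with invented parameters, not necessarily the exact parameterisation used in the paper:

```python
import numpy as np

def estgarch_volatility(eps, a0=0.05, a1=0.05, g1=0.10, b1=0.85, delta=2.0):
    """h_t = a0 + (a1 + g1 * F(eps_{t-1})) * eps_{t-1}**2 + b1 * h_{t-1},
    with F(e) = 1 - exp(-delta * e**2): small shocks stay in one regime,
    large shocks of either sign move the ARCH coefficient smoothly upward,
    which is exactly a size (not sign) asymmetry."""
    h = np.empty(len(eps))
    h[0] = a0 / (1.0 - a1 - b1)              # rough unconditional start value
    for t in range(1, len(eps)):
        F = 1.0 - np.exp(-delta * eps[t - 1] ** 2)
        h[t] = a0 + (a1 + g1 * F) * eps[t - 1] ** 2 + b1 * h[t - 1]
    return h

rng = np.random.default_rng(0)
print(estgarch_volatility(rng.standard_normal(1000))[-5:].round(3))
```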