Similar Literature
20 similar records found.
1.
The study of spatial variations in disease rates is a common epidemiological approach used to describe the geographical clustering of diseases and to generate hypotheses about the possible 'causes' which could explain apparent differences in risk. Recent statistical and computational developments have led to the use of realistically complex models to account for overdispersion and spatial correlation. However, these developments have focused almost exclusively on spatial modelling of a single disease. Many diseases share common risk factors (smoking being an obvious example) and, if similar patterns of geographical variation of related diseases can be identified, this may provide more convincing evidence of real clustering in the underlying risk surface. We propose a shared component model for the joint spatial analysis of two diseases. The key idea is to separate the underlying risk surface for each disease into a shared and a disease-specific component. The various components of this formulation are modelled simultaneously by using spatial cluster models implemented via reversible jump Markov chain Monte Carlo methods. We illustrate the methodology through an analysis of oral and oesophageal cancer mortality in the 544 districts of Germany, 1986–1990.

2.
Summary. The method of Bayesian model selection for join point regression models is developed. Given a set of K+1 join point models M_0, M_1, …, M_K with 0, 1, …, K join points respectively, the posterior distributions of the parameters and competing models M_k are computed by Markov chain Monte Carlo simulations. The Bayes information criterion BIC is used to select the model M_k with the smallest value of BIC as the best model. Another approach based on the Bayes factor selects the model M_k with the largest posterior probability as the best model when the prior distribution of M_k is discrete uniform. Both methods are applied to analyse the observed US cancer incidence rates for some selected cancer sites. The graphs of the join point models fitted to the data are produced by using the methods proposed and compared with the method of Kim and co-workers that is based on a series of permutation tests. The analyses show that the Bayes factor is sensitive to the prior specification of the variance σ², and that the model which is selected by BIC fits the data as well as the model that is selected by the permutation test and has the advantage of producing the posterior distribution for the join points. The Bayesian join point model and model selection method that are presented here will be integrated in the National Cancer Institute's join point software (http://www.srab.cancer.gov/joinpoint/) and will be available to the public.
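As a rough illustration of the BIC route described above (not the paper's MCMC machinery), the sketch below fits continuous piecewise-linear join point models by least squares over a grid of candidate join point locations and picks the number of join points with the smallest BIC. All function names, the grid, and the parameter count are illustrative assumptions.

```python
from itertools import combinations

import numpy as np

def fit_joinpoint(x, y, taus):
    """Least-squares fit of a continuous piecewise-linear (join point) model;
    taus is a tuple of join point locations (empty tuple => a simple line)."""
    # Design matrix: intercept, slope, and one hinge term max(x - tau, 0) per join point.
    cols = [np.ones_like(x), x] + [np.maximum(x - t, 0.0) for t in taus]
    X = np.column_stack(cols)
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    rss = float(np.sum((y - X @ beta) ** 2))
    return beta, rss

def bic_select(x, y, max_k=2, n_grid=5):
    """Choose the number of join points by BIC = n*log(RSS/n) + p*log(n),
    grid-searching candidate join point locations over sample quantiles."""
    n = len(x)
    grid = np.quantile(x, np.linspace(0.1, 0.9, n_grid))
    best_bic, best_k = np.inf, 0
    for k in range(max_k + 1):
        for taus in ([()] if k == 0 else combinations(grid, k)):
            _, rss = fit_joinpoint(x, y, tuple(taus))
            p = 2 + 2 * k  # intercept and slope, plus a slope change and location per join point
            bic = n * np.log(rss / n) + p * np.log(n)
            if bic < best_bic:
                best_bic, best_k = bic, k
    return best_k
```

On data with a single clear kink, BIC should prefer one join point over zero (huge fit improvement) and over two (penalty outweighs the noise-level gain).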

3.
4.
Summary. A fully Bayesian analysis of directed graphs, with particular emphasis on applications in social networks, is explored. The model is capable of incorporating the effects of covariates, within and between block ties and multiple responses. Inference is straightforward by using software that is based on Markov chain Monte Carlo methods. Examples are provided which highlight the variety of data sets that can be entertained and the ease with which they can be analysed.

5.
The analysis of failure time data often involves two strong assumptions. The proportional hazards assumption postulates that hazard rates corresponding to different levels of explanatory variables are proportional. The additive effects assumption specifies that the effect associated with a particular explanatory variable does not depend on the levels of other explanatory variables. A hierarchical Bayes model is presented, under which both assumptions are relaxed. In particular, time-dependent covariate effects are explicitly modelled, and the additivity of effects is relaxed through the use of a modified neural network structure. The hierarchical nature of the model is useful in that it parsimoniously penalizes violations of the two assumptions, with the strength of the penalty being determined by the data.

6.
This article designs a Sequential Monte Carlo (SMC) algorithm for estimation of a Bayesian semi-parametric stochastic volatility model for financial data. In particular, it makes use of one of the most recent particle filters, called Particle Learning (PL). SMC methods are especially well suited for state-space models and can be seen as a cost-efficient alternative to Markov chain Monte Carlo (MCMC), since they allow for online inference: the posterior distributions are updated as new data are observed, which is exceedingly costly using MCMC. Also, PL allows for consistent online model comparison using sequential predictive log Bayes factors. A simulated data set is used to compare the posterior outputs of the PL and MCMC schemes, which are shown to be almost identical. Finally, a short real-data application is included.
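Particle Learning additionally tracks parameter sufficient statistics; the sketch below shows only the simpler state-filtering core that any such scheme builds on: a bootstrap particle filter for a basic stochastic volatility model. The parametrisation and default values are assumptions for illustration, not the article's specification.

```python
import numpy as np

def sv_bootstrap_filter(y, mu=0.0, phi=0.95, sigma=0.3, n_part=2000, seed=1):
    """Bootstrap particle filter for a basic stochastic volatility model:
       h_t = mu + phi*(h_{t-1} - mu) + sigma*eps_t,   y_t ~ N(0, exp(h_t)).
    Returns the filtered mean of the log-variance h_t at each time point."""
    rng = np.random.default_rng(seed)
    # Initialise particles from the stationary distribution of the AR(1) state.
    h = rng.normal(mu, sigma / np.sqrt(1 - phi**2), n_part)
    means = []
    for yt in y:
        h = mu + phi * (h - mu) + sigma * rng.standard_normal(n_part)
        # Gaussian log-likelihood of y_t given each particle's h (up to a constant).
        logw = -0.5 * (h + yt**2 * np.exp(-h))
        w = np.exp(logw - logw.max())
        w /= w.sum()
        means.append(float(np.sum(w * h)))
        # Multinomial resampling keeps the particle cloud from degenerating.
        h = rng.choice(h, size=n_part, p=w)
    return np.array(means)
```

Run on returns whose true volatility doubles midway, the filtered log-variance should be visibly higher in the second half.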

7.
In this paper we present a review of population-based simulation for static inference problems. Such methods can be described as generating a collection of random variables {X_n}_{n=1,…,N} in parallel in order to simulate from some target density π (or potentially a sequence of target densities). Population-based simulation is important as many challenging sampling problems in applied statistics cannot be dealt with successfully by conventional Markov chain Monte Carlo (MCMC) methods. We summarize population-based MCMC (Geyer, Computing Science and Statistics: The 23rd Symposium on the Interface, pp. 156–163, 1991; Liang and Wong, J. Am. Stat. Assoc. 96, 653–666, 2001) and sequential Monte Carlo samplers (SMC) (Del Moral, Doucet and Jasra, J. Roy. Stat. Soc. Ser. B 68, 411–436, 2006a), providing a comparison of the approaches. We give numerical examples from Bayesian mixture modelling (Richardson and Green, J. Roy. Stat. Soc. Ser. B 59, 731–792, 1997).
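A minimal population-based MCMC sketch in the spirit of the parallel tempering methods reviewed; the temperatures, step sizes, and toy bimodal target below are all illustrative assumptions, not anything from the paper.

```python
import numpy as np

def log_target(x):
    """Bimodal toy target: equal mixture of N(-5, 1) and N(5, 1), up to a constant."""
    return np.logaddexp(-0.5 * (x + 5) ** 2, -0.5 * (x - 5) ** 2)

def parallel_tempering(n_iter=8000, temps=(1.0, 2.0, 4.0, 8.0, 16.0), seed=3):
    """Population-based MCMC: one random-walk chain per temperature, plus
    swap moves between neighbouring temperatures; returns the cold chain."""
    rng = np.random.default_rng(seed)
    x = np.zeros(len(temps))
    cold = []
    for _ in range(n_iter):
        for i, T in enumerate(temps):
            prop = x[i] + rng.normal(0, np.sqrt(T))  # wider steps when hotter
            if np.log(rng.random()) < (log_target(prop) - log_target(x[i])) / T:
                x[i] = prop
        # Propose swapping a randomly chosen neighbouring pair of chains.
        i = rng.integers(len(temps) - 1)
        a = (1 / temps[i] - 1 / temps[i + 1]) * (log_target(x[i + 1]) - log_target(x[i]))
        if np.log(rng.random()) < a:
            x[i], x[i + 1] = x[i + 1], x[i]
        cold.append(x[0])
    return np.array(cold)
```

A single random-walk chain at temperature 1 would typically get stuck in one mode; the hot chains cross the barrier and feed both modes down to the cold chain via swaps.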

8.
Standard methods for maximum likelihood parameter estimation in latent variable models rely on the Expectation-Maximization algorithm and its Monte Carlo variants. Our approach is different and motivated by similar considerations to simulated annealing; that is, we build a sequence of artificial distributions whose support concentrates itself on the set of maximum likelihood estimates. We sample from these distributions using a sequential Monte Carlo approach. We demonstrate state-of-the-art performance for several applications of the proposed approach.
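A toy version of the idea, under strong simplifying assumptions (a one-parameter Gaussian model and plain Metropolis steps at each temperature, rather than a full SMC sampler over a particle population): sampling from likelihood^γ for increasing γ concentrates the draws on the maximum likelihood estimate.

```python
import numpy as np

def anneal_mle(y, gammas=(1, 4, 16, 64, 256), n_steps=2000, seed=5):
    """Sampling-based MLE in the annealing spirit described above: draw from
    likelihood(theta)^gamma for an increasing ladder of gamma values; the
    draws concentrate on the MLE as gamma grows.
    Toy model: y_i ~ N(theta, 1), so the MLE is the sample mean."""
    rng = np.random.default_rng(seed)
    loglik = lambda th: -0.5 * np.sum((y - th) ** 2)
    th = 0.0
    for g in gammas:
        for _ in range(n_steps):
            # Shrink the proposal as the tempered target sharpens.
            prop = th + rng.normal(0, 1.0 / np.sqrt(g))
            if np.log(rng.random()) < g * (loglik(prop) - loglik(th)):
                th = prop
        # (A full SMC implementation would reweight and resample a whole
        # particle population between temperatures instead.)
    return th
```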

9.
This paper presents the Bayesian analysis of a semiparametric regression model that consists of parametric and nonparametric components. The nonparametric component is represented with a Fourier series where the Fourier coefficients are assumed a priori to have zero means and to decay to 0 in probability at either algebraic or geometric rates. The rate of decay controls the smoothness of the response function. The posterior analysis automatically selects the amount of smoothing that is coherent with the model and data. Posterior probabilities of the parametric and semiparametric models provide a method for testing the parametric model against a non-specific alternative. The Bayes estimator's mean integrated squared error compares favourably with the theoretically optimal estimator for kernel regression.
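The decay-rate prior acts like frequency-dependent shrinkage; a crude non-Bayesian analogue is a ridge fit that penalises the Fourier coefficients at frequency j by a geometrically growing factor. Everything below (function name, decay constant, number of terms) is an illustrative assumption, not the paper's posterior computation.

```python
import numpy as np

def fourier_ridge(x, y, n_terms=10, decay=0.7):
    """Fit y on an intercept plus Fourier terms in x (assumed scaled to [0, 1]),
    shrinking the frequency-j coefficients by a factor growing like decay**(-j),
    a stand-in for a prior under which coefficients decay geometrically to zero."""
    cols = [np.ones_like(x)]
    pen = [0.0]                       # no shrinkage on the intercept
    for j in range(1, n_terms + 1):
        cols += [np.cos(2 * np.pi * j * x), np.sin(2 * np.pi * j * x)]
        pen += [decay ** (-j)] * 2    # heavier penalty on higher frequencies
    X = np.column_stack(cols)
    P = np.diag(pen)
    # Ridge solution, i.e. the posterior-mean form under a Gaussian prior.
    beta = np.linalg.solve(X.T @ X + P, X.T @ y)
    return X @ beta
```

On a smooth signal plus noise, the fit should recover the signal well below the noise level.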

10.
11.
We deal with two-way contingency tables having ordered column categories. We use a row effects model wherein each interaction term is assumed to have a multiplicative form involving a row effect parameter and a fixed column score. We propose a methodology to cluster row effects in order to simplify the interaction structure and to enhance the interpretation of the model. Our method uses a product partition model with a suitable specification of the cohesion function, so that we can carry out our analysis on a collection of models of varying dimensions using a straightforward MCMC sampler. The methodology is illustrated with reference to simulated and real data sets.

12.
Modelling accelerated life test data by using a Bayesian approach
Summary. Because of the high reliability of many modern products, accelerated life tests are becoming widely used to obtain timely information about their time-to-failure distributions. We propose a general class of accelerated life testing models which are motivated by the actual failure process of units from a limited failure population with a positive probability of not failing during the technological lifetime. We demonstrate a Bayesian approach to this problem, using a new class of models with non-monotone hazard rates; the hazard model has potential scope for use far beyond accelerated life testing. Our methods are illustrated with the modelling and analysis of a data set on lifetimes of printed circuit boards under humidity accelerated life testing.

13.
A threshold autoregressive model for wholesale electricity prices
Summary. We introduce a discrete time model for electricity prices which accounts for both transitory spikes and temperature effects. The model allows for different rates of mean reversion: one for weather events, one around price jumps and another for the remainder of the process. We estimate the model by using a Markov chain Monte Carlo approach with 3 years of daily data from Allegheny County, Pennsylvania. We show that our model outperforms existing stochastic jump diffusion models for this data set. Results also demonstrate the importance of model parameters corresponding to both the temperature effect and the multilevel mean reversion rate.
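The multilevel mean-reversion idea can be shown with a deterministic skeleton (no noise, jump, or temperature terms; all numbers are illustrative): reversion is fast while the price is far from its mean, mimicking the rapid decay of transitory spikes, and slow otherwise.

```python
def tar_path(p0, mu=50.0, thresh=20.0, fast=0.8, slow=0.1, n=10):
    """Deterministic skeleton of a threshold autoregression: the mean-reversion
    rate switches on how far the price is from its mean. A price 'spike' decays
    quickly back towards mu; small deviations decay slowly."""
    path = [p0]
    for _ in range(n):
        p = path[-1]
        rate = fast if abs(p - mu) > thresh else slow
        path.append(p + rate * (mu - p))
    return path
```

Starting at a spike of 100 against a mean of 50, one fast step takes the price to 60 (inside the threshold band), after which decay continues at the slow rate.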

14.
In a Bayesian analysis of finite mixture models, parameter estimation and clustering are sometimes less straightforward than might be expected. In particular, the common practice of estimating parameters by their posterior mean, and summarizing joint posterior distributions by marginal distributions, often leads to nonsensical answers. This is due to the so-called 'label switching' problem, which is caused by symmetry in the likelihood of the model parameters. A frequent response to this problem is to remove the symmetry by using artificial identifiability constraints. We demonstrate that this fails in general to solve the problem, and we describe an alternative class of approaches, relabelling algorithms, which arise from attempting to minimize the posterior expected loss under a class of loss functions. We describe in detail one particularly simple and general relabelling algorithm and illustrate its success in dealing with the label switching problem on two examples.
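A minimal relabelling algorithm in this spirit (not the specific algorithm of the paper): permute each MCMC draw's component labels to minimise the squared distance to a running reference built from the draws already relabelled.

```python
from itertools import permutations

import numpy as np

def relabel(draws):
    """Simple relabelling for MCMC output from a k-component mixture.
    draws: array of shape (n_draws, k) of component-specific parameters
    (e.g. component means). Each draw's labels are permuted to match a
    running reference, undoing label switching."""
    draws = np.asarray(draws, dtype=float)
    k = draws.shape[1]
    out = draws.copy()
    ref = out[0].copy()
    for t in range(1, len(out)):
        # Exhaustive search over the k! label permutations (fine for small k).
        best = min(permutations(range(k)),
                   key=lambda p: np.sum((out[t, list(p)] - ref) ** 2))
        out[t] = out[t, list(best)]
        ref = out[:t + 1].mean(axis=0)   # update reference with relabelled draws
    return out
```

On draws whose labels are randomly flipped, the raw columns mix both components; after relabelling each column tracks a single component.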

15.
Summary. Road safety has recently become a major concern in most modern societies. The identification of sites that are more dangerous than others (black spots) can help in better scheduling road safety policies. This paper proposes a methodology for ranking sites according to their level of hazard. The model is innovative in at least two respects. Firstly, it makes use of all relevant information per accident location, including the total number of accidents and the number of fatalities, as well as the number of slight and serious injuries. Secondly, the model includes the use of a cost function to rank the sites with respect to their total expected cost to society. Bayesian estimation for the model via a Markov chain Monte Carlo approach is proposed. Accident data from 519 intersections in Leuven (Belgium) are used to illustrate the methodology proposed. Furthermore, different cost functions are used to show the effect of the proposed method on the use of different costs per type of injury.
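For fixed casualty counts, the cost-function ranking reduces to ordering sites by expected total cost; a bare-bones sketch with illustrative cost figures follows (the paper's Bayesian treatment would replace the raw counts with posterior expected counts).

```python
def rank_sites(site_counts, costs):
    """Rank accident sites by total expected cost to society.
    site_counts: {site: (n_fatal, n_serious, n_slight)};
    costs: per-casualty costs in the same order (illustrative numbers,
    not the paper's estimates)."""
    total = {s: sum(n * c for n, c in zip(cnt, costs))
             for s, cnt in site_counts.items()}
    # Most costly (most hazardous under this cost function) first.
    return sorted(total, key=total.get, reverse=True)
```

Changing the cost vector can reorder the sites, which is exactly the sensitivity the paper examines with different cost functions.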

16.
Prediction of possible cliff erosion at some future date is fundamental to coastal planning and shoreline management, for example to avoid development in vulnerable areas. Historically, deterministic methods were used to predict cliff recession rates. More recently, recession predictions have been expressed in probabilistic terms; however, to date, only simplistic models have been developed. We consider the cliff erosion along the Holderness Coast. Since 1951 a monitoring programme has been running at 118 stations along the coast, providing an invaluable, but often missing, source of information. We build hierarchical random-effect models, taking account of the known dynamics of the process and including the missing information.

17.
Malaria illness can be diagnosed by the presence of fever and parasitaemia. However, in highly endemic areas the diagnosis of clinical malaria can be difficult since children may tolerate parasites without fever and may have fever due to other causes. We propose a novel, simulation-based Bayesian approach for obtaining precise estimates of the probabilities of children with different levels of parasitaemia having fever due to malaria, by formulating the problem as a mixture of distributions. The methodology suggested is a general methodology for decomposing any two-component mixture distribution nonparametrically, when an independent training sample is available from one of the components. It is based on the assumption that one of the component distributions lies on the left of the other but there is some overlap between the distributions.

18.
The authors propose a procedure for determining the unknown number of components in mixtures by generalizing a Bayesian testing method proposed by Mengersen & Robert (1996). The testing criterion they propose involves a Kullback–Leibler distance, which may be weighted or not. They give explicit formulas for the weighted distance for a number of mixture distributions and propose a stepwise testing procedure to select the minimum number of components adequate for the data. Their procedure, which is implemented using the BUGS software, exploits a fast collapsing approach which accelerates the search for the minimum number of components by avoiding full refitting at each step. The performance of their method is compared, using both distances, to the Bayes factor approach.

19.
Hidden Markov models form an extension of mixture models which provides a flexible class of models exhibiting dependence and a possibly large degree of variability. We show how reversible jump Markov chain Monte Carlo techniques can be used to estimate the parameters as well as the number of components of a hidden Markov model in a Bayesian framework. We employ a mixture of zero-mean normal distributions as our main example and apply this model to three sets of data from finance, meteorology and geomagnetism.
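For a fixed number of states, the likelihood such an analysis builds on is computed by the forward algorithm; a minimal version for a Gaussian hidden Markov model is sketched below (the reversible jump moves that change the number of states are beyond this sketch, and the interface is an illustrative assumption).

```python
import numpy as np

def hmm_loglik(y, trans, means, sds, init):
    """Forward-algorithm log-likelihood of observations y under a Gaussian
    hidden Markov model with transition matrix trans, state-specific means
    and standard deviations, and initial state distribution init."""
    def dens(x):
        # Vector of Gaussian densities of x under each hidden state.
        return np.exp(-0.5 * ((x - means) / sds) ** 2) / (sds * np.sqrt(2 * np.pi))
    alpha = init * dens(y[0])
    ll = 0.0
    for x in y[1:]:
        c = alpha.sum()          # normalise for numerical stability,
        ll += np.log(c)          # accumulating the log-likelihood as we go
        alpha = (alpha / c) @ trans * dens(x)
    return ll + np.log(alpha.sum())
```

A sanity check: when every row of the transition matrix equals the initial distribution, states are independent over time and the HMM likelihood collapses to an ordinary mixture likelihood.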

20.
This article uses a Bayesian unit-root test in stochastic volatility models. The time series of interest is the volatility, which is unobservable. The unit-root testing is based on the posterior odds ratio, which is approximated by Markov chain Monte Carlo methods. Simulations show that the testing procedure is efficient for moderate sample sizes. The unit-root hypothesis is rejected in seven market indexes, and some evidence of nonstationarity is observed in the TWSI of Taiwan.
