Similar articles
20 similar articles found (search time: 0 ms)
1.
Based on a Bayesian framework that places a Gaussian prior on the univariate nonparametric link function and an asymmetric Laplace distribution (ALD) on the residuals, we develop a Bayesian treatment of the Tobit quantile single-index regression model (TQSIM). Using the location-scale mixture representation of the ALD, posterior inferences for the latent variables and other parameters are obtained via Markov chain Monte Carlo. TQSIM broadens the scope of applicability of Tobit models by accommodating nonlinearity in the data. The proposed method is illustrated by two simulation examples and a labour supply dataset.
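The location-scale mixture identity this abstract relies on can be checked numerically. The sketch below (values of tau, mu and the sample size are illustrative assumptions, not from the paper) verifies that mixing a normal over an exponential scale reproduces an ALD at quantile level tau, i.e. that P(Y <= mu) = tau:

```python
import numpy as np

# Sketch: the location-scale mixture representation of the asymmetric
# Laplace distribution (ALD).  If z ~ Exp(1) and u ~ N(0, 1), then
#   y = mu + theta * z + kappa * sqrt(z) * u
# follows an ALD with location mu and quantile level tau, where
#   theta = (1 - 2*tau) / (tau * (1 - tau)),  kappa^2 = 2 / (tau * (1 - tau)).
# This identity is what makes the Gibbs sampler conditionally Gaussian.
rng = np.random.default_rng(0)
tau, mu = 0.25, 1.0
theta = (1 - 2 * tau) / (tau * (1 - tau))
kappa = np.sqrt(2 / (tau * (1 - tau)))

z = rng.exponential(1.0, size=200_000)
u = rng.standard_normal(200_000)
y = mu + theta * z + kappa * np.sqrt(z) * u

# For an ALD(mu, sigma, tau), P(Y <= mu) = tau.
prob_below_mu = np.mean(y <= mu)
```

The Monte Carlo estimate of P(Y <= mu) should sit very close to tau = 0.25.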

2.
Markov chain Monte Carlo (MCMC) implementations of Bayesian inference for latent spatial Gaussian models are very computationally intensive, and restrictions on storage and computation time are limiting their application to large problems. Here we propose various parallel MCMC algorithms for such models. The algorithms' performance is discussed with respect to a simulation study, which demonstrates the increase in speed with which the algorithms explore the posterior distribution as a function of the number of processors. We also discuss how feasible problem size is increased by use of these algorithms.
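One simple member of the family of parallel schemes discussed here is the embarrassingly parallel approach: run independent chains on separate workers and pool the draws. The sketch below is an illustration under assumed choices (standard-normal target, Gaussian random-walk Metropolis); threads stand in for the separate processes or MPI ranks a real implementation would use:

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

# Sketch: independent random-walk Metropolis chains targeting N(0, 1),
# run on parallel workers and pooled.  Target and proposal are
# illustrative assumptions only.
def run_chain(seed, n=5_000):
    rng = np.random.default_rng(seed)
    x, draws = 0.0, []
    for _ in range(n):
        prop = x + rng.normal(0.0, 1.0)
        # log acceptance ratio for a standard-normal target
        if np.log(rng.uniform()) < 0.5 * (x**2 - prop**2):
            x = prop
        draws.append(x)
    return np.array(draws)

with ThreadPoolExecutor(max_workers=4) as pool:
    chains = list(pool.map(run_chain, range(4)))

# Discard burn-in per chain, then pool.
pooled = np.concatenate([c[1_000:] for c in chains])
```

The pooled draws should have mean near 0 and standard deviation near 1.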

3.
In this paper, we adopt a Bayesian approach to expectile regression, employing a likelihood function based on an asymmetric normal distribution. We demonstrate that improper uniform priors for the unknown model parameters yield a proper joint posterior. Three simulated data sets were generated to evaluate the proposed method; the results show that Bayesian expectile regression performs well and has characteristics different from those of Bayesian quantile regression. We also apply the approach to two real data analyses.
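Up to a constant, the asymmetric-normal likelihood corresponds to the asymmetric squared-error loss, so the tau-expectile of a sample can be computed by a simple fixed-point iteration on asymmetrically weighted means. The sketch below is an illustration of that correspondence, not code from the paper:

```python
import numpy as np

# Sketch: the tau-expectile minimises the asymmetric squared loss
#   rho_tau(r) = |tau - 1{r < 0}| * r**2,
# which is (up to a constant) the negative log of the asymmetric normal
# likelihood.  A fixed-point iteration on weighted means finds it.
def expectile(y, tau, n_iter=100):
    m = np.mean(y)
    for _ in range(n_iter):
        w = np.where(y < m, 1.0 - tau, tau)  # asymmetric weights
        m = np.sum(w * y) / np.sum(w)
    return m

rng = np.random.default_rng(1)
y = rng.standard_normal(50_000)
e50 = expectile(y, 0.5)   # the 0.5-expectile is the sample mean
e90 = expectile(y, 0.9)   # higher tau pulls the expectile upward
```

For tau = 0.5 the weights are constant, so the iteration returns the sample mean exactly; for tau > 0.5 the expectile lies above it.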

4.
Shoukri and Consul (1989) and Scollnik (1995) have previously considered the Bayesian analysis of an overdispersed generalized Poisson model. Scollnik (1995) also considered the Bayesian analysis of a mixture of an ordinary Poisson and an overdispersed generalized Poisson model. In this paper, we discuss the Bayesian analysis of these models when they are utilised in a regression context. Markov chain Monte Carlo methods are utilised, and an illustrative analysis is provided.

5.
6.
Hidden Markov models form an extension of mixture models which provides a flexible class of models exhibiting dependence and a possibly large degree of variability. We show how reversible jump Markov chain Monte Carlo techniques can be used to estimate the parameters as well as the number of components of a hidden Markov model in a Bayesian framework. We employ a mixture of zero-mean normal distributions as our main example and apply this model to three sets of data from finance, meteorology and geomagnetism.
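The likelihood of such a hidden Markov model is computed by the forward recursion. A minimal sketch, assuming two states that differ only in emission variance (the zero-mean normal example mentioned above; the particular parameter values are illustrative):

```python
import numpy as np

# Sketch: scaled forward algorithm for an HMM with zero-mean normal
# emissions, where states differ only in their variance.
def hmm_loglik(y, trans, sigmas, init):
    loglik = 0.0
    alpha = init                      # state distribution before first emission
    for obs in y:
        dens = np.exp(-0.5 * (obs / sigmas) ** 2) / (sigmas * np.sqrt(2 * np.pi))
        alpha = (alpha @ trans) * dens    # predict, then weight by emission density
        c = alpha.sum()                   # scaling constant = p(y_t | y_1:t-1)
        loglik += np.log(c)
        alpha /= c
    return loglik

rng = np.random.default_rng(4)
y = rng.standard_normal(200)              # toy observations
trans = np.array([[0.9, 0.1], [0.2, 0.8]])
sigmas = np.array([1.0, 3.0])             # a "calm" and a "volatile" state
init = np.array([0.5, 0.5])
ll = hmm_loglik(y, trans, sigmas, init)
```

A reversible jump sampler would evaluate this likelihood at proposed parameter values and component counts.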

7.
This paper considers the analysis of round robin interaction data whereby individuals from a group of subjects interact with one another, producing a pair of outcomes, one for each individual. The authors provide an overview of the various analyses applied to these types of data and extend the work in several directions. In particular, they provide a fully Bayesian analysis for such data and use a real data example for illustration purposes.

8.
Mixture models are flexible tools in density estimation and classification problems. Bayesian estimation of such models typically relies on sampling from the posterior distribution using Markov chain Monte Carlo. Label switching arises because the posterior is invariant to permutations of the component parameters. Methods for dealing with label switching have been studied fairly extensively in the literature, with the most popular approaches being those based on loss functions. However, many of these algorithms turn out to be too slow in practice, and can be infeasible as the size and/or dimension of the data grow. We propose a new, computationally efficient algorithm based on a loss function interpretation, and show that it scales well to large data sets. We then review earlier solutions that also scale well, and compare their performance on simulated and real data sets. We conclude with a discussion and recommendations concerning all the methods studied.
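As a baseline for the label-switching problem (not the authors' loss-based algorithm), the simplest relabelling strategy reorders the components within each MCMC draw by an identifiability constraint such as increasing component means:

```python
import numpy as np

# Sketch (a baseline, not the paper's method): relabel each MCMC draw of a
# mixture by sorting its components on the component means.  This ordering
# constraint makes component-wise posterior summaries meaningful; loss-based
# relabelling algorithms refine the idea.
def relabel_by_mean(means, weights):
    """means, weights: arrays of shape (n_draws, n_components)."""
    order = np.argsort(means, axis=1)
    return (np.take_along_axis(means, order, axis=1),
            np.take_along_axis(weights, order, axis=1))

# Two draws of a 2-component mixture whose labels have switched:
means = np.array([[0.1, 2.0], [2.1, 0.0]])
weights = np.array([[0.3, 0.7], [0.6, 0.4]])
m, w = relabel_by_mean(means, weights)
```

After relabelling, column 0 always holds the lower-mean component, so averaging down each column gives coherent posterior means.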

9.
Markov chain Monte Carlo (MCMC) sampling is a numerically intensive simulation technique which has greatly improved the practicality of Bayesian inference and prediction. However, MCMC sampling is too slow to be of practical use in problems involving a large number of posterior (target) distributions, as in dynamic modelling and predictive model selection. Alternative simulation techniques for tracking moving target distributions, known as particle filters, which combine importance sampling, importance resampling and MCMC sampling, tend to suffer from a progressive degeneration as the target sequence evolves. We propose a new technique, based on these same simulation methodologies, which does not suffer from this progressive degeneration.
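The bootstrap particle filter that this line of work takes as its starting point propagates particles through the state equation, weights them by the observation likelihood, and resamples. A minimal sketch, under an assumed toy model (Gaussian random-walk state, Gaussian observations) that is not from the paper:

```python
import numpy as np

# Sketch: bootstrap particle filter for x_t = x_{t-1} + N(0, 0.3^2),
# y_t = x_t + N(0, 0.5^2).  Model and parameters are illustrative.
rng = np.random.default_rng(2)
T, N = 50, 2_000
true_x = np.cumsum(rng.standard_normal(T) * 0.3)
y = true_x + rng.standard_normal(T) * 0.5

particles = np.zeros(N)
estimates = []
for t in range(T):
    particles = particles + rng.standard_normal(N) * 0.3   # propagate
    logw = -0.5 * ((y[t] - particles) / 0.5) ** 2          # observation log-weights
    w = np.exp(logw - logw.max())
    w /= w.sum()
    estimates.append(np.sum(w * particles))                # filtering mean
    idx = rng.choice(N, size=N, p=w)                       # multinomial resampling
    particles = particles[idx]

rmse = np.sqrt(np.mean((np.array(estimates) - true_x) ** 2))
```

The repeated resampling step is precisely where the progressive degeneration criticised in the abstract originates: over many steps the particle set collapses onto few ancestral lineages.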

10.
The Reed-Frost epidemic model is a simple stochastic process with parameter q that describes the spread of an infectious disease among a closed population. Given data on the final outcome of an epidemic, it is possible to perform Bayesian inference for q using a simple Gibbs sampler algorithm. In this paper it is illustrated that by choosing latent variables appropriately, certain monotonicity properties hold which facilitate the use of a perfect simulation algorithm. The methods are applied to real data.
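The Reed-Frost chain-binomial mechanism itself takes only a few lines to simulate: with escape probability q per infective contact, each susceptible avoids infection in a generation with probability q raised to the number of current infectives. A sketch (population size and q are illustrative assumptions; the paper's Gibbs and perfect-simulation machinery targets the posterior of q given such final-size data):

```python
import numpy as np

# Sketch: simulate the final size of a Reed-Frost epidemic.
def final_size(n_susceptible, n_infective, q, rng):
    s, i = n_susceptible, n_infective
    while i > 0:
        escape = q ** i                      # P(a susceptible escapes this generation)
        new_i = rng.binomial(s, 1 - escape)  # new infectives
        s -= new_i
        i = new_i
    return n_susceptible - s                 # initial susceptibles ever infected

rng = np.random.default_rng(3)
sizes = [final_size(100, 1, 0.97, rng) for _ in range(2_000)]
mean_size = np.mean(sizes)
```

The resulting final-size distribution is typically bimodal: minor outbreaks that die out quickly, and major outbreaks infecting a large fraction of the population.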

11.
This work extends the integrated nested Laplace approximation (INLA) method to latent models outside the scope of latent Gaussian models, where independent components of the latent field can have a near-Gaussian distribution. The proposed methodology is an essential component of a larger project that aims to extend the R package INLA so that the user can add flexibility and challenge the Gaussian assumptions of some of the model components in a straightforward and intuitive way. Our approach is applied to two examples, and the results are compared with those obtained by Markov chain Monte Carlo, showing similar accuracy at only a small fraction of the computational time. An implementation of the proposed extension is available in the R-INLA package.

12.
Individual-level models (ILMs) for infectious disease can be used to model disease spread between individuals while taking into account important covariates. Spatial location is often an important covariate in determining the risk of infection transmission. At the same time, measurement error is a concern in many areas of statistical analysis, and infectious disease modelling is no exception. In this paper, we are concerned with the issue of measurement error in the recorded location of individuals when using a simple spatial ILM to model the spread of disease within a population. An ILM that incorporates spatial location random effects is introduced within a hierarchical Bayesian framework. This model is tested on both simulated data and data from the UK 2001 foot-and-mouth disease epidemic. The ability of the model to successfully identify both the spatial infection kernel and the basic reproduction number (R0) of the disease is tested.

13.
In most practical applications, the quality of count data is often compromised by errors-in-variables (EIVs). In this paper, we apply a Bayesian approach to reduce bias in estimating the parameters of count-data regression models with mismeasured independent variables. Furthermore, the exposure model is specified with a flexible distribution, so our approach remains robust against departures from normality in the true underlying exposure distribution. The proposed method is also useful in realistic situations because the variance of the EIVs is estimated rather than assumed known, in contrast with other bias-correction methods for count-data EIV regression models. We conduct simulation studies on synthetic data sets using Markov chain Monte Carlo techniques to investigate the performance of our approach. Our findings show that the flexible Bayesian approach estimates the true regression parameters consistently and accurately.

14.
One of the standard problems in statistics is determining the relationship between a response variable and a single predictor through a regression function. Background scientific knowledge often suggests that the regression function should have a certain shape (e.g. monotonically increasing or concave) but not necessarily a specific parametric form. Bernstein polynomials have been used to impose such shape restrictions; they provide a smooth estimate over equidistant knots and are attractive for their ease of implementation, continuous differentiability, and theoretical properties. In this work, we demonstrate a connection between the monotonic regression problem and the variable selection problem in the linear model, and develop a Bayesian procedure for fitting the monotonic regression model by adapting currently available variable selection procedures. We demonstrate the effectiveness of our method through simulations and the analysis of real data.
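The shape restriction behind this connection is easy to state: a Bernstein polynomial is monotonically increasing whenever its coefficients are nondecreasing, so monotonicity reduces to sign constraints on the coefficient increments. A minimal sketch with illustrative coefficients (not from the paper):

```python
import numpy as np
from math import comb

# Sketch: a degree-n Bernstein polynomial on [0, 1] is
#   f(x) = sum_k b_k * C(n, k) * x**k * (1 - x)**(n - k),
# and f is increasing whenever b_0 <= b_1 <= ... <= b_n.  It is this
# constraint on increments b_k - b_{k-1} that maps monotone regression
# onto a variable-selection-style problem.
def bernstein_design(x, n):
    return np.column_stack(
        [comb(n, k) * x**k * (1 - x) ** (n - k) for k in range(n + 1)]
    )

x = np.linspace(0.0, 1.0, 200)
b = np.array([0.0, 0.2, 0.2, 0.8, 1.0])   # nondecreasing coefficients
f = bernstein_design(x, len(b) - 1) @ b

monotone = bool(np.all(np.diff(f) >= 0))
```

Note also that f(0) = b_0 and f(1) = b_n, which makes the endpoint behaviour of the fit easy to control.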

15.
The rjmcmc package for R implements the post-processing reversible jump Markov chain Monte Carlo (MCMC) algorithm of Barker & Link. MCMC output from each of the models is used to estimate posterior model probabilities and Bayes factors. Automatic differentiation is used to simplify implementation. The package is demonstrated on two examples.

16.
New approaches to prior specification and structuring in autoregressive time series models are introduced and developed. We focus on defining classes of prior distributions for parameters and latent variables related to latent components of an autoregressive model for an observed time series. These new priors naturally permit the incorporation of both qualitative and quantitative prior information about the number and relative importance of physically meaningful components that represent low frequency trends, quasi-periodic subprocesses and high frequency residual noise components of observed series. The class of priors also naturally incorporates uncertainty about model order, and hence leads in posterior analysis to model order assessment and to posterior and predictive inferences that incorporate full uncertainty about model order as well as model parameters. The analysis also formally incorporates uncertainty about the unknown initial values of the time series, leading to inferences for these values as well as predictions of future values. Posterior analysis involves easily implemented iterative simulation methods, developed and described here. One motivating field of application is climatology, where the evaluation of latent structure, especially quasi-periodic structure, is of critical importance in connection with issues of global climatic variability. We explore the analysis of data from the southern oscillation index, one of several series that have been central in recent high profile debates in the atmospheric sciences about recent apparent trends in climatic indicators.
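The latent components referred to above come from the roots of the AR characteristic polynomial: a complex root pair with modulus r and argument omega contributes a quasi-periodic component with period 2*pi/omega, while real roots give trend and noise terms. A sketch of the root-based decomposition, with AR(2) coefficients chosen (as an assumption) to embed a cycle of period 12:

```python
import numpy as np

# Sketch: recover the quasi-periodic component of an AR(2) process from
# the roots of its characteristic polynomial.  An AR(2) with reciprocal
# roots r * exp(+/- i*omega) has coefficients
#   phi_1 = 2 * r * cos(omega),  phi_2 = -r**2.
period, r = 12.0, 0.95
omega = 2 * np.pi / period
phi = np.array([2 * r * np.cos(omega), -r**2])

# Roots of z**2 - phi_1 * z - phi_2:
roots = np.roots([1.0, -phi[0], -phi[1]])
recovered_period = 2 * np.pi / np.abs(np.angle(roots[0]))
```

Inverting this map, from posterior draws of the AR coefficients to root moduli and periods, is how posterior inference on the latent component structure proceeds.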

17.
Summary.  We deal with contingency table data that are used to examine the relationships between a set of categorical variables or factors. We assume that such relationships can be adequately described by the conditional independence structure that is imposed by an undirected graphical model. If the contingency table is large, a desirable simplified interpretation can be achieved by combining some categories, or levels, of the factors. We introduce conditions under which such an operation does not alter the Markov properties of the graph. Implementation of these conditions leads to Bayesian model uncertainty procedures based on reversible jump Markov chain Monte Carlo methods. The methodology is illustrated on contingency tables ranging from 2×3×4 up to 4×5×5×2×2.

18.
Summary.  The paper is concerned with new methodology for statistical inference for final outcome infectious disease data using certain structured population stochastic epidemic models. A major obstacle to inference for such models is that the likelihood is both analytically and numerically intractable. The approach that is taken here is to impute missing information in the form of a random graph that describes the potential infectious contacts between individuals. This level of imputation overcomes various constraints of existing methodologies and yields more detailed information about the spread of disease. The methods are illustrated with both real and test data.

19.
Estimation in mixed linear models is, in general, computationally demanding, since applied problems may involve extensive data sets and large numbers of random effects. Existing computer algorithms are slow and/or require large amounts of memory. These problems are compounded in generalized linear mixed models for categorical data, since even approximate methods involve fitting of a linear mixed model within steps of an iteratively reweighted least squares algorithm. Only in models in which the random effects are hierarchically nested can the computations for fitting these models to large data sets be carried out rapidly. We describe a data augmentation approach to these computational difficulties in which we repeatedly fit an overlapping series of submodels, incorporating the missing terms in each submodel as 'offsets'. The submodels are chosen so that they have a nested random-effect structure, thus allowing maximum exploitation of the computational efficiency which is available in this case. Examples of the use of the algorithm for both metric and discrete responses are discussed, all calculations being carried out using macros within the MLwiN program.

20.
Summary. The development of time series models for traffic volume data constitutes an important step in constructing automated tools for the management of computing infrastructure resources. We analyse two traffic volume time series: one is the volume of hard disc activity, aggregated into half-hour periods, measured on a workstation, and the other is the volume of Internet requests made to a workstation. Both of these time series exhibit features that are typical of network traffic data, namely strong seasonal components and highly non-Gaussian distributions. For these time series, a particular class of non-linear state space models is proposed, and practical techniques for model fitting and forecasting are demonstrated.
