Similar Documents
1.
This paper describes how importance sampling can be applied to estimate likelihoods for spatio-temporal stochastic models of epidemics in plant populations, where observations consist of the set of diseased individuals at two or more distinct times. Likelihood computation is problematic because of the inherent lack of independence of the status of individuals in the population whenever disease transmission is distance-dependent. The methods of this paper overcome this by partitioning the population into a number of sectors and then attempting to take account of this dependence within each sector, while neglecting that between sectors. Applications to both simulated and real epidemic data sets show that the techniques perform well in comparison with existing approaches. Moreover, the results confirm the validity of likelihood estimates obtained elsewhere using Markov chain Monte Carlo methods.
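
As a hedged illustration of the importance-sampling machinery (a toy latent-variable model, not the paper's epidemic model), the sketch below estimates a marginal likelihood p(y) = ∫ p(y|x) p(x) dx by sampling from a proposal q and averaging the weights p(y|x)p(x)/q(x); the exact answer is available here for comparison.

```python
# Toy importance-sampling likelihood estimate (NOT the paper's epidemic model):
# y | x ~ N(x, 1), x ~ N(0, 1), so the true marginal is p(y) = N(y; 0, 2).
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
y = 1.5                                    # observed datum
n = 100_000

# Proposal q(x) = N(y/2, 1) roughly matches the true posterior N(y/2, 1/2).
x = rng.normal(y / 2, 1.0, size=n)
log_w = (norm.logpdf(y, loc=x) + norm.logpdf(x)        # p(y | x) p(x)
         - norm.logpdf(x, loc=y / 2))                  # / q(x)
est = np.exp(log_w).mean()

print(f"IS estimate {est:.5f}  vs exact {norm.pdf(y, scale=np.sqrt(2)):.5f}")
```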

2.
Standard methods for maximum likelihood parameter estimation in latent variable models rely on the Expectation-Maximization algorithm and its Monte Carlo variants. Our approach is different and is motivated by considerations similar to those of simulated annealing: we build a sequence of artificial distributions whose support concentrates on the set of maximum likelihood estimates. We sample from these distributions using a sequential Monte Carlo approach. We demonstrate state-of-the-art performance for several applications of the proposed approach.
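
A minimal sketch of the annealing idea, assuming a toy Gaussian likelihood: a sequential Monte Carlo sampler targets π_t(θ) ∝ L(θ)^γ_t with γ_t increasing past 1, so the particle cloud concentrates on the maximizer. The schedule, proposal scale, and particle count below are invented for illustration.

```python
# SMC sampler concentrating on the MLE: target pi_t(theta) ∝ L(theta)^gamma_t.
# Toy likelihood: 50 i.i.d. N(theta, 1) observations, so the MLE is the mean.
import numpy as np

rng = np.random.default_rng(1)
data = rng.normal(2.0, 1.0, size=50)

def loglik(theta):
    return -0.5 * np.sum((data[None, :] - theta[:, None]) ** 2, axis=1)

N = 2000
theta = rng.uniform(-10, 10, size=N)          # particles from a flat initial dist.
gammas = np.linspace(0.01, 40.0, 60)          # annealing schedule (grows past 1)
g_prev = 0.0
for g in gammas:
    # Reweight by the likelihood-temperature increment, then resample.
    logw = (g - g_prev) * loglik(theta)
    w = np.exp(logw - logw.max()); w /= w.sum()
    theta = theta[rng.choice(N, size=N, p=w)]
    # Move step: random-walk Metropolis invariant for L(theta)^g.
    prop = theta + rng.normal(0, 0.2, size=N)
    acc = np.log(rng.uniform(size=N)) < g * (loglik(prop) - loglik(theta))
    theta[acc] = prop[acc]
    g_prev = g

print("particle mean:", theta.mean(), " sample-mean MLE:", data.mean())
```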

3.
In this paper, we develop a Bayesian estimation procedure for semiparametric models under shape constraints. The approach uses a hierarchical Bayes framework and characterizations of shape-constrained B-splines. We employ Markov chain Monte Carlo methods for model fitting, using a truncated normal distribution as the prior for the coefficients of basis functions to ensure the desired shape constraints. The small sample properties of the function estimators are provided via simulation and compared with existing methods. A real data analysis is conducted to illustrate the application of the proposed method.
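
One possible reading of the construction, sketched under strong simplifications: for a B-spline basis, nondecreasing coefficients yield a nondecreasing function, so a normal prior truncated to the ordered cone enforces monotonicity; a random-walk Metropolis sampler then explores the constrained posterior. Knots, variances, and step sizes below are illustrative.

```python
# Monotone (nondecreasing) B-spline regression via a truncated-normal prior:
# proposals violating the coefficient ordering get prior density zero.
import numpy as np
from scipy.interpolate import BSpline

rng = np.random.default_rng(2)
x = np.sort(rng.uniform(0, 1, 100))
y = np.sqrt(x) + rng.normal(0, 0.1, size=x.size)        # monotone truth

k = 3
t = np.concatenate([[0] * (k + 1), [0.25, 0.5, 0.75], [1] * (k + 1)])
p = len(t) - k - 1
B = np.column_stack([BSpline(t, np.eye(p)[j], k)(x) for j in range(p)])

def logpost(c, sigma=0.1, tau=10.0):
    if np.any(np.diff(c) < 0):                          # outside the truncated
        return -np.inf                                  # normal prior's support
    return (-0.5 * np.sum((y - B @ c) ** 2) / sigma**2
            - 0.5 * np.sum(c**2) / tau**2)

c = np.linspace(0, 1, p)                                # feasible start
lp, draws = logpost(c), []
for it in range(20000):
    prop = c + rng.normal(0, 0.02, size=p)
    lp_prop = logpost(prop)
    if np.log(rng.uniform()) < lp_prop - lp:
        c, lp = prop, lp_prop
    if it >= 10000 and it % 10 == 0:
        draws.append(c.copy())

fit = B @ np.mean(draws, axis=0)                        # posterior-mean curve
print("fit is monotone:", np.all(np.diff(fit) >= -1e-9))
```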

4.
A new procedure is proposed for deriving variable bandwidths in univariate kernel density estimation, based upon likelihood cross-validation and an analysis of a Bayesian graphical model. The procedure admits bandwidth selection which is flexible in terms of the amount of smoothing required. In addition, the basic model can be extended to incorporate local smoothing of the density estimate. The method is shown to perform well in both theoretical and practical situations, and we compare our method with those of Abramson (The Annals of Statistics 10: 1217–1223) and Sain and Scott (Journal of the American Statistical Association 91: 1525–1534). In particular, we note that in certain cases, the Sain and Scott method performs poorly even with relatively large sample sizes. We compare various bandwidth selection methods using standard mean integrated squared error criteria to assess the quality of the density estimates. We study situations where the underlying density is assumed both known and unknown, and note that in practice, our method performs well when sample sizes are small. In addition, we apply the methods to real data, and again we believe our methods perform at least as well as existing methods.
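
The sketch below illustrates the two ingredients named above, not the paper's Bayesian graphical-model derivation: a global bandwidth chosen by leave-one-out likelihood cross-validation, followed by Abramson-style local bandwidths proportional to the inverse square root of a pilot density.

```python
# Likelihood cross-validation bandwidth, then Abramson local bandwidths
# h_i = h * (f_pilot(x_i) / g)^(-1/2), g = geometric mean of the pilot values.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(3)
x = rng.standard_normal(200)

def loo_loglik(h):
    d = norm.pdf((x[:, None] - x[None, :]) / h) / h     # kernel matrix
    np.fill_diagonal(d, 0.0)                            # leave-one-out
    return np.sum(np.log(d.sum(axis=1) / (x.size - 1)))

hs = np.linspace(0.05, 1.0, 60)
h = hs[np.argmax([loo_loglik(v) for v in hs])]          # LCV bandwidth

pilot = (norm.pdf((x[:, None] - x[None, :]) / h) / h).mean(axis=1)
lam = (pilot / np.exp(np.mean(np.log(pilot)))) ** -0.5  # Abramson factors

def fhat(grid):                                          # variable-bandwidth KDE
    hi = h * lam
    return (norm.pdf((grid[:, None] - x[None, :]) / hi) / hi).mean(axis=1)

print("LCV bandwidth:", h)
```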

5.
Often the dependence in multivariate survival data is modeled through an individual level effect called the frailty. Due to its mathematical simplicity, the gamma distribution is often used as the frailty distribution for hazard modeling. However, it is well known that the gamma frailty distribution has many drawbacks. For example, it weakens the effect of covariates. In addition, in the presence of a multilevel model, overall frailty comes from several levels. To overcome such drawbacks, more heavy-tailed distributions are needed to model the frailty distribution in order to incorporate extra variability. In this article, we develop a class of log-skew-t distributions for the frailty. This class includes the log-normal distribution along with many other heavy-tailed distributions, e.g., the log-Cauchy and log-t, as special cases.

Conditional on the frailty, the survival times are assumed to be independent with a proportional hazard structure. The modeling process is then completed by assuming multilevel frailty effects. Instead of imposing a strict parameterization of the baseline hazard function, we consider the partial likelihood approach and thus leave the baseline function unspecified. By eliminating the baseline hazard, specification and computation are simplified considerably.
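
An illustrative simulation of a log-skew-t frailty in a proportional-hazards setting (all parameter values invented): a skew-t draw is a skew-normal draw divided by sqrt(χ²/df), and exponentiating gives the frailty shared within a group.

```python
# Simulating shared log-skew-t frailties and conditional survival times.
import numpy as np
from scipy.stats import skewnorm, chi2

rng = np.random.default_rng(4)
n_groups, n_per, alpha, df = 200, 5, 3.0, 4

z = skewnorm.rvs(alpha, size=n_groups, random_state=rng)
frailty = np.exp(z / np.sqrt(chi2.rvs(df, size=n_groups, random_state=rng) / df))

x = rng.standard_normal((n_groups, n_per))              # one covariate
beta, base_rate = 0.5, 0.1
rate = base_rate * frailty[:, None] * np.exp(beta * x)  # shared within a group
T = rng.exponential(1.0 / rate)                         # exponential baseline

print("heavy right tail of the frailty:", np.quantile(frailty, [0.5, 0.99]))
```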

6.
We consider a non-centered parameterization of the standard random-effects model, which is based on the Cholesky decomposition of the variance-covariance matrix. The regression-type structure of the non-centered parameterization allows us to use Bayesian variable selection methods for covariance selection. We search for a parsimonious variance-covariance matrix by identifying the non-zero elements of the Cholesky factors. With this method, we are able to learn from the data, for each effect, whether it is random or not, and whether covariances among random effects are zero. An application in marketing shows a substantial reduction of the number of free elements in the variance-covariance matrix.
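
A sketch of why the non-centered parameterization enables covariance selection, under made-up dimensions: with b_i = C z_i, z_i ~ N(0, I) and C lower triangular, each element of C multiplies a known regressor, so zeroing entries of C is ordinary variable selection.

```python
# Non-centered random effects: b_i = C z_i with standard-normal z_i, so the
# elements of the Cholesky factor C enter the model as regression coefficients.
import numpy as np

rng = np.random.default_rng(5)
q, n = 3, 500
C = np.array([[0.8, 0.0, 0.0],
              [0.3, 0.5, 0.0],
              [0.0, 0.0, 0.0]])      # a zero row = "this effect is not random"

z = rng.standard_normal((n, q))
b = z @ C.T                          # subject-specific deviations, Cov = C C'
# In y_ij = x_ij'(beta + C z_i) + eps, each entry of C multiplies the known
# regressor x_ij * z_i, so spike-and-slab priors on C select the covariance.
print("implied covariance:\n", C @ C.T)
```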

7.
We present a method for predicting future pavement distresses such as longitudinal cracking. These predicted distress values are used to plan road repairs. Pavement cracking data are characterized by large inherent variability in measured cracking and an extremely small number of observations, which calls for a parametric Bayesian approach. We model theoretical pavement distress with a sigmoidal equation whose coefficients are based on prior engineering knowledge. We show that a Bayesian formulation akin to Kalman filtering gives sensible predictions and provides defensible uncertainty statements for those predictions. The method is demonstrated on data collected by the Texas Transportation Institute at several sites in Texas. The predictions behave in a reasonable and statistically valid manner.
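
To convey the flavour only (a toy grid posterior rather than the paper's Kalman-filter-like formulation), the sketch below updates two uncertain sigmoid coefficients observation by observation and forms a posterior-predictive distress forecast.

```python
# Sequential Bayesian updating of a sigmoidal distress curve on a grid.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(6)
t_obs = np.array([2.0, 4.0, 6.0])                      # pavement ages (years)
a_true, b_true, sigma = 80.0, 5.0, 4.0
y_obs = a_true / (1 + np.exp(-(t_obs - b_true))) + rng.normal(0, sigma, 3)

a_grid, b_grid = np.meshgrid(np.linspace(40, 120, 81), np.linspace(2, 9, 71))
logpost = np.zeros_like(a_grid)                        # flat prior on the grid
for t, y in zip(t_obs, y_obs):                         # one observation at a time
    mu = a_grid / (1 + np.exp(-(t - b_grid)))
    logpost += norm.logpdf(y, mu, sigma)

post = np.exp(logpost - logpost.max()); post /= post.sum()
t_new = 8.0
pred = np.sum(post * a_grid / (1 + np.exp(-(t_new - b_grid))))
print("posterior-predictive distress at year 8:", pred)
```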

8.
The analysis of failure time data often involves two strong assumptions. The proportional hazards assumption postulates that hazard rates corresponding to different levels of explanatory variables are proportional. The additive effects assumption specifies that the effect associated with a particular explanatory variable does not depend on the levels of other explanatory variables. A hierarchical Bayes model is presented, under which both assumptions are relaxed. In particular, time-dependent covariate effects are explicitly modelled, and the additivity of effects is relaxed through the use of a modified neural network structure. The hierarchical nature of the model is useful in that it parsimoniously penalizes violations of the two assumptions, with the strength of the penalty being determined by the data.
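
A minimal sketch of the non-additive idea, with arbitrary weights: the covariates and time enter a small neural network jointly, so the resulting log hazard ratio can vary with time and with the levels of other covariates.

```python
# lambda(t | x) = lambda0(t) * exp(f(x, t)), f a one-hidden-layer network taking
# (x, t) jointly: effects can interact and change over time. Weights are arbitrary.
import numpy as np

rng = np.random.default_rng(7)
W1, b1 = rng.normal(0, 0.5, (4, 3)), np.zeros(4)      # 2 covariates + time -> 4
w2 = rng.normal(0, 0.5, 4)

def log_hazard_ratio(x, t):
    h = np.tanh(W1 @ np.append(x, t) + b1)            # hidden layer
    return w2 @ h                                      # non-additive, time-varying

x = np.array([1.0, -0.5])
print([log_hazard_ratio(x, t) for t in (1.0, 2.0, 5.0)])  # varies with t
```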

9.
Quasi-life tables, in which the data arise from many concurrent, independent, discrete-time renewal processes, were defined by Baxter (1994, Biometrika 81:567–577), who outlined some methods for estimation. The processes are not observed individually; only the total numbers of renewals at each time point are observed. Crowder and Stephens (2003, Lifetime Data Anal 9:345–355) implemented a formal estimating-equation approach that invokes large-sample theory. However, these asymptotic methods fail to yield sensible estimates for smaller samples. In this paper, we implement a Bayesian analysis based on MCMC computation that works equally well for large and small sample sizes. We give three simulated examples, studying the Bayesian results, the impact of changing prior specification, and empirical properties of the Bayesian estimators of the lifetime distribution parameters. We also study the Baxter (1994, Biometrika 81:567–577) data, and uncover structure that has not been commented upon previously.
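
An illustrative simulation of how quasi-life-table data arise (parameter values invented): many independent discrete-time renewal processes run concurrently, and only the total number of renewals at each time point is recorded.

```python
# Quasi-life-table data: m concurrent renewal processes with geometric
# lifetimes; only the aggregate renewal count per time point is observed.
import numpy as np

rng = np.random.default_rng(8)
m, T, p_geom = 500, 30, 0.3                 # geometric(p) lifetimes on 1, 2, ...

totals = np.zeros(T, dtype=int)
for _ in range(m):
    t = rng.geometric(p_geom)               # first renewal epoch
    while t <= T:
        totals[t - 1] += 1                  # only the total is recorded
        t += rng.geometric(p_geom)

print("observed totals per time point:", totals)
```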

10.
Statistical methods are formulated for fitting and testing percolation-based, spatio-temporal models that are generally applicable to biological or physical processes that evolve in spatially distributed populations. The approach is developed and illustrated in the context of the spread of Rhizoctonia solani, a fungal pathogen, in radish but is readily generalized to other scenarios. The particular model considered represents processes of primary and secondary infection between nearest-neighbour hosts in a lattice, and time-varying susceptibility of the hosts. Bayesian methods for fitting the model to observations of disease spread through space and time in replicate populations are developed. These use Markov chain Monte Carlo methods to overcome the problems associated with partial observation of the process. We also consider how model testing can be achieved by embedding classical methods within the Bayesian analysis. In particular we show how a residual process, with known sampling distribution, can be defined. Model fit is then examined by generating samples from the posterior distribution of the residual process, to which a classical test for consistency with the known distribution is applied, enabling the posterior distribution of the test's P-value to be estimated. For the Rhizoctonia-radish system the methods confirm the findings of earlier non-spatial analyses regarding the dynamics of disease transmission and yield new evidence of environmental heterogeneity in the replicate experiments.
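
A hedged sketch of the kind of lattice dynamics described (not the paper's fitted model; the lattice below wraps around at the edges for simplicity): susceptible hosts escape infection at each step with probability exp(-(α + β × number of infected neighbours)).

```python
# Primary infection at rate alpha (external source) plus secondary infection at
# rate beta per infected nearest neighbour, one discrete time step per update.
import numpy as np

rng = np.random.default_rng(9)
L, steps, alpha, beta = 20, 10, 0.01, 0.15
infected = np.zeros((L, L), dtype=bool)
infected[L // 2, L // 2] = True

for _ in range(steps):
    nbrs = (np.roll(infected, 1, 0).astype(int) + np.roll(infected, -1, 0)
            + np.roll(infected, 1, 1) + np.roll(infected, -1, 1))
    # P(susceptible host is infected this step) = 1 - exp(-(alpha + beta*nbrs))
    p_inf = 1 - np.exp(-(alpha + beta * nbrs))
    infected |= (~infected) & (rng.uniform(size=(L, L)) < p_inf)

print("diseased hosts after", steps, "steps:", infected.sum())
```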

11.
In this paper, efficient importance sampling (EIS) is used to perform a classical and Bayesian analysis of univariate and multivariate stochastic volatility (SV) models for financial return series. EIS provides a highly generic and very accurate procedure for the Monte Carlo (MC) evaluation of high-dimensional interdependent integrals. It can be used to carry out ML-estimation of SV models as well as simulation smoothing where the latent volatilities are sampled at once. Based on this EIS simulation smoother, a Bayesian Markov chain Monte Carlo (MCMC) posterior analysis of the parameters of SV models can be performed.
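
EIS constructs an optimized global importance sampler for the latent volatilities. As a simpler stand-in that illustrates the Monte Carlo likelihood evaluation it refines, the sketch below runs a bootstrap particle filter for the basic SV model h_t = μ + φ(h_{t-1} − μ) + σ_η η_t, y_t = exp(h_t/2) ε_t.

```python
# Bootstrap particle filter likelihood for a basic univariate SV model
# (a simple baseline for the MC integral that EIS evaluates more efficiently).
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(10)
mu, phi, sig, T = -1.0, 0.95, 0.2, 300
h = np.empty(T); h[0] = rng.normal(mu, sig / np.sqrt(1 - phi**2))
for t in range(1, T):
    h[t] = mu + phi * (h[t - 1] - mu) + sig * rng.standard_normal()
y = np.exp(h / 2) * rng.standard_normal(T)

def pf_loglik(y, mu, phi, sig, N=2000):
    hp = rng.normal(mu, sig / np.sqrt(1 - phi**2), size=N)   # stationary init
    ll = 0.0
    for t in range(len(y)):
        logw = norm.logpdf(y[t], scale=np.exp(hp / 2))       # measurement dens.
        c = logw.max()
        w = np.exp(logw - c)
        ll += c + np.log(w.mean())                           # likelihood factor
        hp = hp[rng.choice(N, size=N, p=w / w.sum())]        # resample
        hp = mu + phi * (hp - mu) + sig * rng.standard_normal(N)  # propagate
    return ll

print("PF log-likelihood at the true parameters:", pf_loglik(y, mu, phi, sig))
```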

12.
In this article, we develop a Bayesian variable selection method that concerns selection of covariates in the Poisson change-point regression model with both discrete and continuous candidate covariates. Ranging from a null model with no selected covariates to a full model including all covariates, the Bayesian variable selection method searches the entire model space, estimates posterior inclusion probabilities of the covariates, and obtains model-averaged estimates of the covariate coefficients, while simultaneously estimating a time-varying baseline rate due to change-points. For posterior computation, a Metropolis-Hastings within partially collapsed Gibbs sampler is developed to efficiently fit the Poisson change-point regression model with variable selection. We illustrate the proposed method using simulated and real datasets.
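
A toy rendition of the variable-selection output, not the paper's sampler: with few candidate covariates one can enumerate all subsets, fit each Poisson regression by Newton's method, and convert BIC values into approximate posterior model probabilities, inclusion probabilities, and model-averaged coefficients. The change-point baseline is omitted for brevity.

```python
# Enumerate covariate subsets for a Poisson regression; BIC -> model weights.
import itertools
import numpy as np

rng = np.random.default_rng(11)
n, names = 300, ["x1", "x2", "x3"]
X = rng.standard_normal((n, 3))
y = rng.poisson(np.exp(0.2 + 0.8 * X[:, 0]))            # only x1 matters

def fit_poisson(Z):
    b = np.zeros(Z.shape[1])
    for _ in range(25):                                  # Newton-Raphson
        mu = np.exp(Z @ b)
        b = b + np.linalg.solve(Z.T @ (mu[:, None] * Z), Z.T @ (y - mu))
    return np.sum(y * (Z @ b) - np.exp(Z @ b)), b        # loglik (no y! term)

ones = np.ones((n, 1))
models, bics = [], []
for S in itertools.chain.from_iterable(
        itertools.combinations(range(3), k) for k in range(4)):
    Z = np.hstack([ones, X[:, S]])
    ll, b = fit_poisson(Z)
    models.append(S); bics.append(-2 * ll + Z.shape[1] * np.log(n))

w = np.exp(-(np.array(bics) - min(bics)) / 2); w /= w.sum()
for j, nm in enumerate(names):
    print(nm, "inclusion prob ~", sum(wi for wi, S in zip(w, models) if j in S))
```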

13.
This paper presents the Bayesian analysis of a semiparametric regression model that consists of parametric and nonparametric components. The nonparametric component is represented with a Fourier series where the Fourier coefficients are assumed a priori to have zero means and to decay to 0 in probability at either algebraic or geometric rates. The rate of decay controls the smoothness of the response function. The posterior analysis automatically selects the amount of smoothing that is coherent with the model and data. Posterior probabilities of the parametric and semiparametric models provide a method for testing the parametric model against a non-specific alternative. The Bayes estimator's mean integrated squared error compares favourably with that of the theoretically optimal estimator for kernel regression.
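
A sketch of the conjugate computation this implies, under illustrative settings: with a Fourier design Z and independent priors β_j ~ N(0, τ²r^j) decaying geometrically, the posterior mean is a generalized ridge estimate, and the decay rate r controls the smoothness of the fit.

```python
# Bayes fit with a Fourier design and geometrically decaying prior variances.
import numpy as np

rng = np.random.default_rng(12)
n, J, sigma2, tau2, r = 200, 15, 0.04, 1.0, 0.7
x = np.sort(rng.uniform(0, 1, n))
y = np.sin(2 * np.pi * x) + 0.3 * np.cos(6 * np.pi * x) + rng.normal(0, 0.2, n)

cols, decay = [np.ones(n)], [1.0]
for j in range(1, J + 1):
    cols += [np.cos(2 * np.pi * j * x), np.sin(2 * np.pi * j * x)]
    decay += [r**j, r**j]                               # prior var shrinks with j
Z, V = np.column_stack(cols), tau2 * np.array(decay)

A = Z.T @ Z / sigma2 + np.diag(1 / V)                   # posterior precision
beta_hat = np.linalg.solve(A, Z.T @ y / sigma2)         # posterior mean
fit = Z @ beta_hat
print("leading coefficients:", beta_hat[:5].round(3))
```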

14.
The max-stable process is a natural approach for modelling extremal dependence in spatial data. However, estimation is difficult due to the intractability of the full likelihoods. One approach that can be used to estimate the posterior distribution of the parameters of the max-stable process is to employ composite likelihoods in the Markov chain Monte Carlo (MCMC) samplers, possibly with adjustment of the credible intervals. In this paper, we investigate the performance of the composite likelihood-based MCMC samplers under various settings of the Gaussian extreme value process and the Brown–Resnick process. Based on our findings, some suggestions are made to facilitate the application of this estimator to real data.
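
Because the bivariate densities of max-stable processes are the hard part, this sketch swaps in a Gaussian process, whose pairwise densities are trivial, to show the mechanics of a pairwise composite likelihood inside a Metropolis sampler; as the abstract notes, such posteriors are overconfident without adjustment of the credible intervals. All settings are invented.

```python
# Pairwise composite likelihood in a random-walk Metropolis sampler, with a
# Gaussian field standing in for the max-stable process; target is the range rho.
import numpy as np
from scipy.stats import multivariate_normal

rng = np.random.default_rng(13)
s = rng.uniform(0, 1, (15, 2))                          # 15 sites in [0,1]^2
D = np.linalg.norm(s[:, None] - s[None, :], axis=2)
rho_true = 0.3
z = rng.multivariate_normal(np.zeros(15), np.exp(-D / rho_true), size=40)

pairs = [(i, j) for i in range(15) for j in range(i + 1, 15)]

def pair_loglik(rho):                                   # sum over site pairs
    ll = 0.0
    for i, j in pairs:
        c = np.exp(-D[i, j] / rho)
        ll += multivariate_normal([0, 0], [[1, c], [c, 1]]).logpdf(z[:, [i, j]]).sum()
    return ll

rho, lp, keep = 0.5, pair_loglik(0.5), []
for _ in range(2000):
    prop = rho + rng.normal(0, 0.05)
    if 0.05 < prop < 2.0:                               # flat prior support
        lp_prop = pair_loglik(prop)
        if np.log(rng.uniform()) < lp_prop - lp:
            rho, lp = prop, lp_prop
    keep.append(rho)
print("composite-posterior mean of rho:", np.mean(keep[500:]))
```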

15.
Based on the Bayesian framework of utilizing a Gaussian prior for the univariate nonparametric link function and an asymmetric Laplace distribution (ALD) for the residuals, we develop a Bayesian treatment for the Tobit quantile single-index regression model (TQSIM). With the location-scale mixture representation of the ALD, the posterior inferences of the latent variables and other parameters are achieved via the Markov chain Monte Carlo computation method. TQSIM broadens the scope of applicability of the Tobit models by accommodating nonlinearity in the data. The proposed method is illustrated by two simulation examples and a labour supply dataset.
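
The location-scale mixture representation in sampling form (unit scale, values invented): if z ~ Exp(1) and u ~ N(0,1), then μ + θz + τ√z·u follows an ALD at quantile level p with θ = (1−2p)/(p(1−p)) and τ² = 2/(p(1−p)); the code checks that P(Y ≤ μ) ≈ p.

```python
# Normal-exponential mixture representation of the (unit-scale) ALD.
import numpy as np

rng = np.random.default_rng(14)
p, mu, n = 0.25, 1.0, 200_000
theta = (1 - 2 * p) / (p * (1 - p))
tau = np.sqrt(2 / (p * (1 - p)))

z = rng.exponential(1.0, n)                              # mixing variable
y = mu + theta * z + tau * np.sqrt(z) * rng.standard_normal(n)
print("P(Y <= mu) =", np.mean(y <= mu), " (target:", p, ")")
```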

16.
It is well known that the approximate Bayesian computation algorithm based on Markov chain Monte Carlo methods suffers from sensitivity to the choice of starting values, inefficiency, and a low acceptance rate. To overcome these problems, this study proposes a generalization of the multiple-point Metropolis algorithm, which proceeds by generating multiple dependent proposals and then selecting a candidate from the set of proposals on the basis of weights that can be chosen arbitrarily. The performance of the proposed algorithm is illustrated using both simulated and real data.
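
A hedged toy of the multiple-try idea applied to ABC-MCMC, using independent tries with a symmetric random-walk proposal for simplicity (the paper's proposals are dependent, and its weights are more general): the selection weights then reduce to noisy estimates of the ABC target itself.

```python
# Multiple-try Metropolis step inside ABC-MCMC for y ~ N(theta, 1),
# summary = sample mean, uniform prior on (-10, 10). Tuning values invented.
import numpy as np

rng = np.random.default_rng(15)
n, theta_true = 100, 1.0
s_obs = rng.normal(theta_true, 1, n).mean()
eps, k, step = 0.05, 5, 0.5

def w(theta):                       # noisy ABC weight: prior x indicator
    if abs(theta) > 10:
        return 0.0
    return float(abs(rng.normal(theta, 1, n).mean() - s_obs) <= eps)

theta, chain = 0.0, []
for _ in range(5000):
    props = theta + rng.normal(0, step, k)              # k independent tries
    wp = np.array([w(t) for t in props])
    if wp.sum() > 0:
        sel = props[rng.choice(k, p=wp / wp.sum())]     # pick one by weight
        refs = sel + rng.normal(0, step, k - 1)         # reference set + current
        wr = np.array([w(t) for t in refs] + [w(theta)])
        if rng.uniform() < wp.sum() / max(wr.sum(), 1e-12):
            theta = sel
    chain.append(theta)

print("ABC posterior mean:", np.mean(chain[1000:]), " vs s_obs:", s_obs)
```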

17.
Data from large surveys are often supplemented with sampling weights that are designed to reflect unequal probabilities of response and selection inherent in complex survey sampling methods. We propose two methods for Bayesian estimation of parametric models in a setting where the survey data and the weights are available, but information on how the weights were constructed is unavailable. The first approach simply replaces the likelihood with the pseudo-likelihood in the formulation of Bayes' theorem. This is proven to lead to a consistent estimator, but also to credible intervals that suffer from systematic undercoverage. Our second approach uses the weights to generate a representative sample, which is integrated into Markov chain Monte Carlo (MCMC) or other simulation algorithms designed to estimate the parameters of the model. In extensive simulation studies, the latter methodology achieves performance comparable to the standard frequentist solution of pseudo maximum likelihood, with the added advantage of being applicable to models that require inference via MCMC. The methodology is demonstrated further by fitting a mixture of gamma densities to a sample of Australian household incomes.
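
Both ideas sketched on a toy unequal-probability sample (all settings invented): approach 1 runs Metropolis on the pseudo-posterior with the weighted log-likelihood, and approach 2 resamples a representative sample with the weights and then uses any standard sampler; the naive unweighted mean is shown for contrast.

```python
# Pseudo-likelihood Bayes vs weight-based resampling for a simple mean model.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(16)
pop = rng.normal(5.0, 2.0, 100_000)
pi = np.clip((pop - pop.min()) / np.ptp(pop), 0.02, 1.0)  # selection favours large y
sel = rng.uniform(size=pop.size) < 0.01 * pi
y, w = pop[sel], 1.0 / pi[sel]

def log_pseudo_post(mu):                                 # flat prior on mu
    return np.sum(w * norm.logpdf(y, mu, 2.0))

# Approach 1: Metropolis on the pseudo-posterior (weighted log-likelihood).
mu, lp, chain = y.mean(), log_pseudo_post(y.mean()), []
for _ in range(5000):
    prop = mu + rng.normal(0, 0.05)
    lpp = log_pseudo_post(prop)
    if np.log(rng.uniform()) < lpp - lp:
        mu, lp = prop, lpp
    chain.append(mu)

# Approach 2: weight-based representative resample, then any standard sampler.
rep = rng.choice(y, size=y.size, p=w / w.sum())
print("pseudo-post mean:", np.mean(chain[1000:]),
      " resample mean:", rep.mean(), " naive mean:", y.mean())
```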

18.
A Bayesian approach is considered for identifying sources of nonstationarity in models with a unit root and breaks. Different types of multiple breaks are allowed through crash models, changing-growth models, and mixed models. All possible nonstationary models are represented by combinations of zero or nonzero parameters associated with time trends, dummies for breaks, or previous levels, for which Bayesian posterior probabilities are computed. Multiple tests based on Markov chain Monte Carlo procedures are implemented. The proposed method is applied to a real data set, the Korean GDP series, showing strong evidence of two breaks rather than the usual unit root or a single break.
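
A toy rendition of the enumeration idea using BIC weights in place of the paper's MCMC-based tests: each candidate model is a combination of zero/nonzero coefficients on a time trend, a level-shift (crash) dummy, and the lagged level in a regression for Δy_t.

```python
# Enumerate zero/nonzero combinations of trend, break dummy, and lagged level;
# convert BIC into approximate posterior model probabilities.
import itertools
import numpy as np

rng = np.random.default_rng(17)
T, brk = 120, 60
t = np.arange(T, dtype=float)
y = np.cumsum(rng.normal(0.2, 1.0, T)); y[brk:] -= 8.0   # unit root + one crash

dy = np.diff(y)
terms = {"trend": t[1:], "break": (t[1:] >= brk).astype(float), "level": y[:-1]}

results = {}
for S in itertools.chain.from_iterable(
        itertools.combinations(terms, k) for k in range(len(terms) + 1)):
    Z = np.column_stack([np.ones(T - 1)] + [terms[v] for v in S])
    b, *_ = np.linalg.lstsq(Z, dy, rcond=None)
    rss = np.sum((dy - Z @ b) ** 2)
    results[S] = (T - 1) * np.log(rss / (T - 1)) + Z.shape[1] * np.log(T - 1)

bic = np.array(list(results.values()))
pp = np.exp(-(bic - bic.min()) / 2); pp /= pp.sum()
for S, prob in sorted(zip(results, pp), key=lambda z: -z[1])[:3]:
    print(S, round(float(prob), 3))                      # top three models
```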

19.
In this paper, we extend the structural probit measurement error model by considering that the unobserved covariate follows a skew-normal distribution. The new model is termed the structural skew-normal probit model. As in the normal case, the likelihood function is obtained analytically and can be maximized using existing statistical software. A Bayesian approach using Markov chain Monte Carlo techniques to generate from the posterior distributions is also developed. A simulation study demonstrates the usefulness of the approach in avoiding the attenuation that occurs with the naive procedure, and it appears more efficient than the structural probit model when the distribution of the covariate (predictor) is skewed.
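
A simulation sketch of the attenuation point (parameter values invented, and the probit MLE obtained by direct optimization rather than the paper's analytic likelihood): the true covariate is skew-normal, only an error-prone surrogate is observed, and the naive fit understates the slope.

```python
# Naive probit with an error-prone skew-normal covariate: attenuation demo.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm, skewnorm

rng = np.random.default_rng(18)
n, beta0, beta1 = 5000, -0.5, 1.0
x = skewnorm.rvs(5, size=n, random_state=rng)           # skewed true covariate
w = x + rng.normal(0, 0.8, n)                           # error-prone surrogate
yb = (rng.uniform(size=n) < norm.cdf(beta0 + beta1 * x)).astype(float)

def negll(b, z):                                        # probit log-likelihood
    eta = np.clip(b[0] + b[1] * z, -8, 8)
    p = norm.cdf(eta)
    return -np.sum(yb * np.log(p) + (1 - yb) * np.log(1 - p))

naive = minimize(negll, [0.0, 0.0], args=(w,)).x
oracle = minimize(negll, [0.0, 0.0], args=(x,)).x
print("true slope 1.0 | oracle:", round(oracle[1], 3), "| naive:", round(naive[1], 3))
```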
