Similar Literature
20 similar articles found (search time: 15 ms)
1.
Summary.  There are models for which the evaluation of the likelihood is infeasible in practice. For these models the Metropolis–Hastings acceptance probability cannot be easily computed. This is the case, for instance, when only departure times from a G/G/1 queue are observed and inference on the arrival and service distributions is required. Indirect inference is a method to estimate a parameter θ in models whose likelihood function does not have an analytical closed form, but from which random samples can be drawn for fixed values of θ. First an auxiliary model is chosen whose parameter β can be directly estimated. Next, the parameters in the auxiliary model are estimated for the original data, leading to an estimate β̂. The parameter β is also estimated by using several sampled data sets, simulated from the original model for different values of the original parameter θ. Finally, the parameter θ which leads to the best match to β̂ is chosen as the indirect inference estimate. We analyse which properties an auxiliary model should have to give satisfactory indirect inference. We look at the situation where the data are summarized in a vector statistic T, and the auxiliary model is chosen so that inference on β is drawn from T only. Under appropriate assumptions the asymptotic covariance matrix of the indirect estimators is proportional to the asymptotic covariance matrix of T and componentwise inversely proportional to the square of the derivative, with respect to θ, of the expected value of T. We discuss how these results can be used in selecting good estimating functions. We apply our findings to the queuing problem.
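The indirect inference recipe described above can be sketched in a few lines. The following is a minimal illustration under simplifying assumptions, not the paper's queueing application: the model is a toy exponential distribution with rate θ, the summary statistic T is the sample mean, and the best-matching θ is found over a grid.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy model: Exp(theta) with unknown rate theta. The likelihood is tractable
# here, but we pretend it is not and only ever simulate from the model.
theta_true = 2.0
data = rng.exponential(scale=1.0 / theta_true, size=2000)

# Summary statistic T: the sample mean, whose expectation is 1/theta.
t_obs = data.mean()

def expected_T(theta, n_sims=200, n=2000):
    """Monte Carlo estimate of E[T] under the model at a given theta."""
    sims = rng.exponential(scale=1.0 / theta, size=(n_sims, n))
    return sims.mean()

# Indirect inference step: pick the theta whose simulated E[T] best
# matches the observed statistic.
grid = np.linspace(0.5, 5.0, 91)
losses = [(expected_T(th) - t_obs) ** 2 for th in grid]
theta_hat = grid[int(np.argmin(losses))]
```

With enough simulations per grid point, `theta_hat` lands close to the data-generating rate.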

2.
A Bayesian mixture model for differential gene expression
Summary.  We propose model-based inference for differential gene expression, using a nonparametric Bayesian probability model for the distribution of gene intensities under various conditions. The probability model is a mixture of normal distributions. The resulting inference is similar to a popular empirical Bayes approach that is used for the same inference problem. The use of fully model-based inference mitigates some of the necessary limitations of the empirical Bayes method. We argue that inference is no more difficult than posterior simulation in traditional nonparametric mixture-of-normal models. The approach proposed is motivated by a microarray experiment that was carried out to identify genes that are differentially expressed between normal tissue and colon cancer tissue samples. Additionally, we carried out a small simulation study to verify the methods proposed. In the motivating case studies we show how the nonparametric Bayes approach facilitates the evaluation of posterior expected false discovery rates. We also show how inference can proceed even in the absence of a null sample of known non-differentially expressed scores. This highlights the difference from alternative empirical Bayes approaches that are based on plug-in estimates.

3.
Synthetic likelihood is an attractive approach to likelihood-free inference when an approximately Gaussian summary statistic for the data, informative for inference about the parameters, is available. The synthetic likelihood method derives an approximate likelihood function from a plug-in normal density estimate for the summary statistic, with plug-in mean and covariance matrix obtained by Monte Carlo simulation from the model. In this article, we develop alternatives to Markov chain Monte Carlo implementations of Bayesian synthetic likelihoods with reduced computational overheads. Our approach uses stochastic gradient variational inference methods for posterior approximation in the synthetic likelihood context, employing unbiased estimates of the log likelihood. We compare the new method with a related likelihood-free variational inference technique in the literature, while at the same time improving the implementation of that approach in a number of ways. These new algorithms are feasible to implement in situations which are challenging for conventional approximate Bayesian computation methods, in terms of the dimensionality of the parameter and summary statistic.
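The synthetic likelihood itself (before any variational machinery) is simple to write down. The following hedged sketch assumes a normal location model and a two-dimensional summary statistic (sample mean and log variance), and maximises the synthetic log-likelihood over a grid rather than using the paper's stochastic gradient approach.

```python
import numpy as np

rng = np.random.default_rng(1)

def summaries(x):
    """Summary statistic vector: sample mean and log sample variance."""
    return np.array([x.mean(), np.log(x.var())])

def synthetic_loglik(theta, s_obs, n_sims=200, n=500):
    """Gaussian synthetic log-likelihood: fit N(mu, Sigma) to summaries
    simulated at theta and evaluate the observed summary under it."""
    sims = np.array([summaries(rng.normal(theta, 1.0, size=n))
                     for _ in range(n_sims)])
    mu = sims.mean(axis=0)
    Sigma = np.cov(sims.T)
    diff = s_obs - mu
    _, logdet = np.linalg.slogdet(Sigma)
    return -0.5 * (logdet + diff @ np.linalg.solve(Sigma, diff))

data = rng.normal(3.0, 1.0, size=500)
s_obs = summaries(data)
grid = np.linspace(1.0, 5.0, 41)
theta_hat = grid[int(np.argmax([synthetic_loglik(t, s_obs) for t in grid]))]
```

The grid maximiser of the synthetic log-likelihood recovers the data-generating location parameter up to Monte Carlo noise.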

4.
Hidden Markov random field models provide an appealing representation of images and other spatial problems. The drawback is that inference is not straightforward for these models as the normalisation constant for the likelihood is generally intractable except for very small observation sets. Variational methods are an emerging tool for Bayesian inference and they have already been successfully applied in other contexts. Focusing on the particular case of a hidden Potts model with Gaussian noise, we show how variational Bayesian methods can be applied to hidden Markov random field inference. To tackle the obstacle of the intractable normalising constant for the likelihood, we explore alternative estimation approaches for incorporation into the variational Bayes algorithm. We consider a pseudo-likelihood approach as well as the more recent reduced dependence approximation of the normalisation constant. To illustrate the effectiveness of these approaches we present empirical results from the analysis of simulated datasets. We also analyse a real dataset and compare results with those of previous analyses as well as those obtained from the recently developed auxiliary variable MCMC method and the recursive MCMC method. Our results show that the variational Bayesian analyses can be carried out much faster than the MCMC analyses and produce good estimates of model parameters. We also found that the reduced dependence approximation of the normalisation constant outperformed the pseudo-likelihood approximation in our analysis of real and synthetic datasets.
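The pseudo-likelihood device can be illustrated on the closely related Ising model (a two-state Potts model, without the Gaussian noise layer); everything below is a simplified sketch, not the paper's algorithm. A checkerboard Gibbs sampler generates a field at a known interaction strength, and the parameter is then recovered by maximising the product of full conditionals, which requires no normalising constant.

```python
import numpy as np

rng = np.random.default_rng(9)

L = 30
beta_true = 0.3
s = rng.choice([-1, 1], size=(L, L))

def neighbour_sum(s):
    """Sum of the four toroidal neighbours at each site."""
    return (np.roll(s, 1, 0) + np.roll(s, -1, 0)
            + np.roll(s, 1, 1) + np.roll(s, -1, 1))

# Checkerboard Gibbs sweeps: each parity class is conditionally
# independent given the other, so it can be updated in one vector step.
for _ in range(300):
    for parity in (0, 1):
        ns = neighbour_sum(s)
        p = 1.0 / (1.0 + np.exp(-2.0 * beta_true * ns))  # P(s_i = +1 | rest)
        mask = (np.indices((L, L)).sum(axis=0) % 2) == parity
        flip = rng.uniform(size=(L, L)) < p
        s = np.where(mask, np.where(flip, 1, -1), s)

def neg_pseudo_loglik(beta, s):
    """Negative log pseudo-likelihood: product of full conditionals."""
    ns = neighbour_sum(s)
    return -np.sum(np.log(1.0 / (1.0 + np.exp(-2.0 * beta * s * ns))))

grid = np.linspace(0.05, 0.6, 56)
beta_hat = grid[int(np.argmin([neg_pseudo_loglik(b, s) for b in grid]))]
```

In the subcritical regime used here the pseudo-likelihood maximiser is a reasonable estimate of the interaction parameter, which is the property the variational algorithm above exploits.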

5.
We consider likelihood and Bayesian inferences for seemingly unrelated (linear) regressions for the joint multivariate t-error (e.g. Zellner, 1976) and the independent t-error (e.g. Maronna, 1976) models. For likelihood inference, the scale matrix and the shape parameter for the joint t-error model cannot be consistently estimated because of the lack of adequate information to identify the latter. The joint t-error model also yields the same MLEs for the regression coefficients and the scale matrix as for the independent normal error model, which are not robust against outliers. Further, linear hypotheses with respect to the regression coefficients also give rise to the same null distributions as for the independent normal error model, though the MLE has a non-normal limiting distribution. In contrast to the striking similarities between the joint t-error and the independent normal error models, the independent t-error model yields MLEs that are robust against outliers. Since the MLE of the shape parameter reflects the tails of the data distributions, this model extends the independent normal error model for modelling data distributions with relatively thicker tails. These differences are also discussed with respect to the posterior and predictive distributions for Bayesian inference.

6.
We present a method for using posterior samples produced by the computer program BUGS (Bayesian inference Using Gibbs Sampling) to obtain approximate profile likelihood functions of parameters or functions of parameters in directed graphical models with incomplete data. The method can also be used to approximate integrated likelihood functions. It is easily implemented and it provides a good approximation. The profile likelihood represents an aspect of the parameter uncertainty which does not depend on the specification of prior distributions, and it can be used as a worthwhile supplement to BUGS that enables us to do both Bayesian and likelihood based analyses in directed graphical models.
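One simple way to read an approximate profile likelihood off a set of posterior samples is to bin the parameter of interest and keep the largest log-likelihood attained within each bin. The sketch below is a hypothetical stand-in: the joint draws of (μ, σ) are generated directly rather than by BUGS, for a normal model with interest parameter μ and nuisance parameter σ.

```python
import numpy as np

rng = np.random.default_rng(2)

# Data from N(mu, sigma^2); interest parameter mu, nuisance sigma.
data = rng.normal(1.0, 2.0, size=200)

def loglik(mu, sigma):
    return np.sum(-0.5 * np.log(2 * np.pi * sigma ** 2)
                  - (data - mu) ** 2 / (2 * sigma ** 2))

# Stand-in for BUGS output: joint posterior-style draws of (mu, sigma),
# here drawn from a crude approximation around the sample estimates.
mu_draws = rng.normal(data.mean(), 0.5, size=5000)
sig_draws = np.abs(rng.normal(data.std(), 0.5, size=5000))

# Approximate profile log-likelihood of mu: bin the mu draws and record
# the largest log-likelihood achieved within each bin (maximising over
# the sampled nuisance values).
bins = np.linspace(0.0, 2.0, 21)
idx = np.digitize(mu_draws, bins)
profile = {}
for i, (m, sg) in enumerate(zip(mu_draws, sig_draws)):
    ll = loglik(m, sg)
    profile[idx[i]] = max(profile.get(idx[i], -np.inf), ll)

# The bin with the highest profile value should contain the MLE of mu.
best_bin = max(profile, key=profile.get)
```

The resulting step function approximates the profile likelihood of μ without any further optimisation over the nuisance parameter.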

7.
Semiparametric Bayesian models are nowadays a popular tool in event history analysis. An important area of research concerns the investigation of frequentist properties of posterior inference. In this paper, we propose novel semiparametric Bayesian models for the analysis of competing risks data and investigate the Bernstein–von Mises theorem for differentiable functionals of model parameters. The model is specified by expressing the cause-specific hazard as the product of the conditional probability of a failure type and the overall hazard rate. We take the conditional probability as a smooth function of time and leave the cumulative overall hazard unspecified. A prior distribution is defined on the joint parameter space, which includes a beta process prior for the cumulative overall hazard. We first develop the large-sample properties of maximum likelihood estimators by giving simple sufficient conditions for them to hold. Then, we show that, under the chosen priors, the posterior distribution for any differentiable functional of interest is asymptotically equivalent to the sampling distribution derived from maximum likelihood estimation. A simulation study is provided to illustrate the coverage properties of credible intervals on cumulative incidence functions.

8.
In this paper, we consider the analysis of hybrid censored competing risks data, based on Cox's latent failure time model assumptions. It is assumed that lifetime distributions of latent causes of failure follow Weibull distribution with the same shape parameter, but different scale parameters. Maximum likelihood estimators (MLEs) of the unknown parameters can be obtained by solving a one-dimensional optimization problem, and we propose a fixed-point type algorithm to solve this optimization problem. Approximate MLEs have been proposed based on Taylor series expansion, and they have explicit expressions. Bayesian inference of the unknown parameters are obtained based on the assumption that the shape parameter has a log-concave prior density function, and for the given shape parameter, the scale parameters have Beta–Gamma priors. We propose to use Markov Chain Monte Carlo samples to compute Bayes estimates and also to construct highest posterior density credible intervals. Monte Carlo simulations are performed to investigate the performances of the different estimators, and two data sets have been analysed for illustrative purposes.
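The fixed-point idea for the common Weibull shape parameter can be illustrated on a complete (uncensored) sample; this simplified sketch ignores the hybrid censoring and competing risks structure of the paper. The iteration below is the standard profile-likelihood fixed point for the shape parameter k, after which the scale follows in closed form.

```python
import numpy as np

rng = np.random.default_rng(7)

# Complete Weibull sample with shape 1.8 and scale 2.0.
x = rng.weibull(1.8, size=1000) * 2.0

def weibull_shape_fixed_point(x, k0=1.0, tol=1e-8, max_iter=500):
    """Profile-likelihood fixed point for the Weibull shape parameter:
    1/k = sum(x^k log x)/sum(x^k) - mean(log x)."""
    logx = np.log(x)
    k = k0
    for _ in range(max_iter):
        xk = x ** k
        k_new = 1.0 / (np.sum(xk * logx) / np.sum(xk) - logx.mean())
        if abs(k_new - k) < tol:
            return k_new
        k = k_new
    return k

k_hat = weibull_shape_fixed_point(x)
# Given the shape, the scale MLE has an explicit expression.
scale_hat = np.mean(x ** k_hat) ** (1.0 / k_hat)
```

Starting the iteration at k = 1 is a common default; convergence is typically fast for well-behaved samples.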

9.
This article deals with the issue of using a suitable pseudo-likelihood, instead of an integrated likelihood, when performing Bayesian inference about a scalar parameter of interest in the presence of nuisance parameters. The proposed approach has the advantages of avoiding the elicitation on the nuisance parameters and the computation of multidimensional integrals. Moreover, it is particularly useful when it is difficult, or even impractical, to write the full likelihood function.

We focus on Bayesian inference about a scalar regression coefficient in various regression models. First, in the context of non-normal regression-scale models, we give a theoretical result showing that there is no loss of information about the parameter of interest when using a posterior distribution derived from a pseudo-likelihood instead of the correct posterior distribution. Second, we present non-trivial applications with high-dimensional, or even infinite-dimensional, nuisance parameters in the context of nonlinear normal heteroscedastic regression models, and of models for binary outcomes and count data, accounting also for possible overdispersion. In all these situations, we show that non-Bayesian methods for eliminating nuisance parameters can be usefully incorporated into a one-parameter Bayesian analysis.

10.
We apply some log-linear modelling methods, which have been proposed for treating non-ignorable non-response, to some data on voting intention from the British General Election Survey. We find that, although some non-ignorable non-response models fit the data very well, they may generate implausible point estimates and predictions. Some explanation is provided for the extreme behaviour of the maximum likelihood estimates for the most parsimonious model. We conclude that point estimates for such models must be treated with great caution. To allow for the uncertainty about the non-response mechanism we explore the use of profile likelihood inference and find the likelihood surfaces to be very flat and the interval estimates to be very wide. To reduce the width of these intervals we propose constraining confidence regions to values where the parameters governing the non-response mechanism are plausible and study the effect of such constraints on inference. We find that the widths of these intervals are reduced but remain wide.

11.
The likelihood function is often used for parameter estimation. Its use, however, may cause difficulties in specific situations. In order to circumvent these difficulties, we propose a parameter estimation method based on the replacement of the likelihood in the formula of the Bayesian posterior distribution by a function which depends on a contrast measuring the discrepancy between observed data and a parametric model. The properties of the contrast-based (CB) posterior distribution are studied to understand what the consequences of incorporating a contrast in the Bayes formula are. We show that the CB-posterior distribution can be used to make frequentist inference and to assess the asymptotic variance matrix of the estimator with limited analytical calculations compared to the classical contrast approach. Although the primary focus of this paper is on frequentist estimation, it is shown that for specific contrasts the CB-posterior distribution can be used to make inference in the Bayesian way. The method was used to estimate the parameters of a variogram (simulated data), a Markovian model (simulated data) and a cylinder-based autosimilar model describing soil roughness (real data). Even if the method is presented in the spatial statistics perspective, it can be applied to non-spatial data.
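A minimal sketch of the contrast-based posterior idea, under the simplifying assumptions of a scalar location parameter, a flat prior, and a least-squares contrast (for Gaussian data this particular contrast happens to reproduce an ordinary posterior, which makes the sketch easy to check): the log-likelihood in a random-walk Metropolis sampler is simply replaced by the negative (scaled) contrast.

```python
import numpy as np

rng = np.random.default_rng(3)

# Observations with unknown mean theta.
data = rng.normal(2.5, 1.0, size=300)

def contrast(theta):
    """Least-squares contrast measuring data-model discrepancy."""
    return np.sum((data - theta) ** 2) / (2 * len(data))

def log_cb_posterior(theta):
    """Contrast-based log 'posterior': flat prior, and the likelihood
    replaced by exp(-n * contrast)."""
    return -len(data) * contrast(theta)

# Random-walk Metropolis targeting the CB-posterior.
theta, samples = 0.0, []
for _ in range(20000):
    prop = theta + rng.normal(0, 0.2)
    if np.log(rng.uniform()) < log_cb_posterior(prop) - log_cb_posterior(theta):
        theta = prop
    samples.append(theta)
post_mean = np.mean(samples[5000:])
```

The CB-posterior mean concentrates near the data-generating value, and its spread can be used to assess the estimator's variance with no analytical work.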

12.
Both approximate Bayesian computation (ABC) and composite likelihood methods are useful for Bayesian and frequentist inference, respectively, when the likelihood function is intractable. We propose to use composite likelihood score functions as summary statistics in ABC in order to obtain accurate approximations to the posterior distribution. This is motivated by the use of the score function of the full likelihood, and extended to general unbiased estimating functions in complex models. Moreover, we show that if the composite score is suitably standardised, the resulting ABC procedure is invariant to reparameterisations and automatically adjusts the curvature of the composite likelihood, and of the corresponding posterior distribution. The method is illustrated through examples with simulated data, and an application to modelling of spatial extreme rainfall data is discussed.
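A rejection-ABC sketch in the spirit of score-based summaries: here the "composite score" degenerates to the score of a simple working Gaussian likelihood evaluated at a fixed pilot point, and no standardisation is applied, so this only illustrates the mechanics, not the paper's adjusted procedure.

```python
import numpy as np

rng = np.random.default_rng(4)

data = rng.normal(1.5, 1.0, size=200)
pilot = 0.0  # fixed evaluation point for the working score

def score_summary(x):
    """Score of a working N(mu, 1) likelihood at mu = pilot, scaled by 1/n.
    For this working model it reduces to the sample mean minus the pilot."""
    return np.mean(x - pilot)

s_obs = score_summary(data)

# Rejection ABC: keep prior draws whose simulated score is close to s_obs.
prior_draws = rng.uniform(-3, 5, size=20000)
accepted = []
for th in prior_draws:
    sim = rng.normal(th, 1.0, size=200)
    if abs(score_summary(sim) - s_obs) < 0.05:
        accepted.append(th)
abc_mean = np.mean(accepted)
```

Because the score summary is informative about the location parameter, the accepted draws concentrate around the data-generating value.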

13.
Approximate Bayesian Inference for Survival Models
Abstract. Bayesian analysis of time-to-event data, usually called survival analysis, has received increasing attention in recent years. In Cox-type models it allows the use of information from the full likelihood instead of from a partial likelihood, so that the baseline hazard function and the model parameters can be jointly estimated. In general, Bayesian methods permit a full and exact posterior inference for any parameter or predictive quantity of interest. On the other hand, Bayesian inference often relies on Markov chain Monte Carlo (MCMC) techniques which, from the user's point of view, may appear slow at delivering answers. In this article, we show how a new inferential tool named integrated nested Laplace approximations can be adapted and applied to many survival models, making Bayesian analysis both fast and accurate without having to rely on MCMC-based inference.

14.
In this paper, we introduce the empirical likelihood (EL) method to longitudinal studies. By considering the dependence within subjects in the auxiliary random vectors, we propose a new weighted empirical likelihood (WEL) inference for generalized linear models with longitudinal data. We show that the weighted empirical likelihood ratio always follows an asymptotically standard chi-squared distribution no matter which working weight matrix is chosen, but a well chosen working weight matrix can improve the efficiency of statistical inference. Simulations are conducted to demonstrate the accuracy and efficiency of our proposed WEL method, and a real data set is used to illustrate the proposed method.

15.
We propose a Bayesian computation and inference method for the Pearson-type chi-squared goodness-of-fit test with right-censored survival data. Our test statistic is derived from the classical Pearson chi-squared test using the differences between the observed and expected counts in the partitioned bins. In the Bayesian paradigm, we generate posterior samples of the model parameter using the Markov chain Monte Carlo procedure. By replacing the maximum likelihood estimator in the quadratic form with a random observation from the posterior distribution of the model parameter, we can easily construct a chi-squared test statistic. The degrees of freedom of the test equal the number of bins and are thus independent of the dimensionality of the underlying parameter vector. The test statistic recovers the conventional Pearson-type chi-squared structure. Moreover, the proposed algorithm circumvents the burden of evaluating the Fisher information matrix, its inverse and the rank of the variance–covariance matrix. We examine the proposed model diagnostic method using simulation studies and illustrate it with a real data set from a prostate cancer study.

16.
Approximate Bayesian inference on the basis of summary statistics is well-suited to complex problems for which the likelihood is either mathematically or computationally intractable. However, the methods that use rejection suffer from the curse of dimensionality when the number of summary statistics is increased. Here we propose a machine-learning approach to the estimation of the posterior density by introducing two innovations. The new method fits a nonlinear conditional heteroscedastic regression of the parameter on the summary statistics, and then adaptively improves estimation using importance sampling. The new algorithm is compared to the state-of-the-art approximate Bayesian methods, and achieves considerable reduction of the computational burden in two examples of inference in statistical genetics and in a queueing model.
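The regression-adjustment step can be sketched in its classical linear, homoscedastic form; the paper's method uses a nonlinear heteroscedastic fit plus importance sampling, which this does not attempt. Accepted parameter draws are projected along a local regression line onto the observed summary.

```python
import numpy as np

rng = np.random.default_rng(5)

data = rng.normal(0.8, 1.0, size=100)
s_obs = data.mean()

# Simulate (theta, summary) pairs and keep the closest 2%.
thetas = rng.uniform(-2, 3, size=10000)
summaries = np.array([rng.normal(t, 1.0, size=100).mean() for t in thetas])
d = np.abs(summaries - s_obs)
keep = d < np.quantile(d, 0.02)
th_k, s_k = thetas[keep], summaries[keep]

# Local-linear regression adjustment: fit theta ~ summary among the
# accepted pairs, then shift each accepted theta to the observed summary.
X = np.column_stack([np.ones(s_k.size), s_k - s_obs])
beta, *_ = np.linalg.lstsq(X, th_k, rcond=None)
th_adj = th_k - beta[1] * (s_k - s_obs)
post_mean = th_adj.mean()
```

The adjustment lets a relatively loose acceptance tolerance behave like a much tighter one, which is the basic motivation for the regression-based refinements developed in the article.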

17.
As is the case in many studies, the data collected are limited and an exact value is recorded only if it falls within an interval range. Hence, the responses can be either left, interval or right censored. Linear (and nonlinear) regression models are routinely used to analyze these types of data and are based on normality assumptions for the error terms. However, these analyses might not provide robust inference when the normality assumptions are questionable. In this article, we develop a Bayesian framework for censored linear regression models by replacing the Gaussian assumptions for the random errors with scale mixtures of normal (SMN) distributions. The SMN is an attractive class of symmetric heavy-tailed densities that includes the normal, Student-t, Pearson type VII, slash and the contaminated normal distributions, as special cases. Using a Bayesian paradigm, an efficient Markov chain Monte Carlo algorithm is introduced to carry out posterior inference. A new hierarchical prior distribution is suggested for the degrees of freedom parameter in the Student-t distribution. The likelihood function is utilized to compute not only some Bayesian model selection measures but also to develop Bayesian case-deletion influence diagnostics based on the q-divergence measure. The proposed Bayesian methods are implemented in the R package BayesCR. The newly developed procedures are illustrated with applications using real and simulated data.
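The scale-mixture-of-normals representation used for the error terms is easy to verify by simulation. For example, a Student-t variate with ν degrees of freedom arises by dividing a standard normal by the square root of an independent Gamma(ν/2, rate ν/2) variable; this is a generic illustration, not code from the BayesCR package.

```python
import numpy as np

rng = np.random.default_rng(6)

def student_t_smn(df, size, rng):
    """Student-t draws via the scale mixture of normals representation."""
    # W ~ Gamma(df/2, rate=df/2) has mean 1; then Z / sqrt(W) ~ t_df.
    w = rng.gamma(shape=df / 2.0, scale=2.0 / df, size=size)
    z = rng.normal(size=size)
    return z / np.sqrt(w)

x = student_t_smn(5, 200_000, rng)
var_hat = x.var()  # theoretical variance of t_5: df/(df-2) = 5/3
```

The same construction, with other mixing distributions for W, yields the slash and contaminated normal members of the SMN family, which is what makes Gibbs-type samplers for these models convenient.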

18.
Inference, quantile forecasting and model comparison for an asymmetric double smooth transition heteroskedastic model is investigated. A Bayesian framework is employed and an adaptive Markov chain Monte Carlo scheme is designed. A mixture prior is proposed that alleviates the usual identifiability problem as the speed of transition parameter tends to zero, and an informative prior for this parameter is suggested, that allows for reliable inference and a proper posterior, despite the non-integrability of the likelihood function. A formal Bayesian posterior model comparison procedure is employed to compare the proposed model with its two limiting cases: the double threshold GARCH and symmetric ARX GARCH models. The proposed methods are illustrated using both simulated and international stock market return series. Some illustrations of the advantages of an adaptive sampling scheme for these models are also provided. Finally, Bayesian forecasting methods are employed in a Value-at-Risk study of the international return series. The results generally favour the proposed smooth transition model and highlight explosive and smooth nonlinear behaviour in financial markets.

19.
The authors propose a general model for the joint distribution of nominal, ordinal and continuous variables. Their work is motivated by the treatment of various types of data. They show how to construct parameter estimates for their model, based on the maximization of the full likelihood. They provide algorithms to implement it, and present an alternative estimation method based on the pairwise likelihood approach. They also touch upon the issue of statistical inference. They illustrate their methodology using data from a foreign language achievement study.

20.
While most regression models focus on explaining distributional aspects of one single response variable alone, interest in modern statistical applications has recently shifted towards simultaneously studying multiple response variables as well as their dependence structure. A particularly useful tool for pursuing such an analysis is the copula-based regression model, since it enables the separation of the marginal response distributions and the dependence structure summarised in a specific copula model. However, so far copula-based regression models have mostly relied on two-step approaches where the marginal distributions are determined first, whereas the copula structure is studied in a second step after plugging in the estimated marginal distributions. Moreover, the parameters of the copula are mostly treated as a constant not related to covariates, and most regression specifications for the marginals are restricted to purely linear predictors. We therefore propose simultaneous Bayesian inference for both the marginal distributions and the copula using computationally efficient Markov chain Monte Carlo simulation techniques. In addition, we replace the commonly used linear predictor by a generic structured additive predictor comprising, for example, nonlinear effects of continuous covariates, spatial effects or random effects, and furthermore allow the copula parameters to be covariate-dependent. To facilitate Bayesian inference, we construct proposal densities for a Metropolis–Hastings algorithm relying on quadratic approximations to the full conditionals of regression coefficients, avoiding manual tuning. The performance of the resulting Bayesian estimates is evaluated in simulations comparing our approach with penalised likelihood inference, studying the choice of a specific copula model based on the deviance information criterion, and comparing a simultaneous approach with a two-step procedure.
Furthermore, the flexibility of Bayesian conditional copula regression models is illustrated in two applications on childhood undernutrition and macroecology.
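The separation of marginals and dependence that copula models exploit can be sketched as follows. This is a generic Gaussian-copula illustration using rank-based pseudo-observations (which avoids needing the normal CDF), not the structured additive regression model of the paper.

```python
import numpy as np

rng = np.random.default_rng(8)

# Dependence structure: a Gaussian copula with correlation rho.
rho, n = 0.6, 50000
z = rng.multivariate_normal([0, 0], [[1, rho], [rho, 1]], size=n)

def to_uniform(v):
    """Rank-based pseudo-observations mapping a margin to (0, 1)."""
    return (np.argsort(np.argsort(v)) + 1) / (len(v) + 1)

u1, u2 = to_uniform(z[:, 0]), to_uniform(z[:, 1])

# Attach arbitrary marginals without touching the dependence structure.
y1 = -np.log(1 - u1)                   # Exponential(1) margin
y2 = 2 + 0.5 * np.log(u2 / (1 - u2))   # logistic-type margin

# Rank correlation is preserved by the monotone marginal transforms:
# for a Gaussian copula it equals (6/pi) * arcsin(rho/2) ~ 0.582 here.
spearman = np.corrcoef(to_uniform(y1), to_uniform(y2))[0, 1]
```

Because the monotone marginal transforms leave the ranks unchanged, the dependence of (y1, y2) is entirely governed by the copula, which is exactly what allows the marginals and the copula to be modelled, and regressed on covariates, separately.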
