Similar articles (20 results)
1.
Likelihood computation in spatial statistics requires accurate and efficient calculation of the normalizing constant (i.e. the partition function) of the Gibbs distribution of the model. Two available Markov chain Monte Carlo methods for calculating the normalizing constant are compared in simulation experiments for an Ising model, a Gaussian Markov field model and a pairwise-interaction point field model.
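
As a concrete illustration of the kind of MCMC-based computation being compared, the following is a minimal sketch of thermodynamic integration (path sampling) for the log normalizing constant of a small Ising model; the lattice size, inverse-temperature grid and sweep counts are illustrative choices, not settings taken from the paper.

```python
# Hedged sketch: estimate log Z(beta) for a small free-boundary Ising model by
# thermodynamic integration, one MCMC-based strategy for the partition function.
import numpy as np

rng = np.random.default_rng(0)
L = 6  # 6 x 6 lattice with free boundary (illustrative size)

def neighbours_sum(s, i, j):
    total = 0
    if i > 0:     total += s[i - 1, j]
    if i < L - 1: total += s[i + 1, j]
    if j > 0:     total += s[i, j - 1]
    if j < L - 1: total += s[i, j + 1]
    return total

def gibbs_sweep(s, beta):
    for i in range(L):
        for j in range(L):
            p = 1.0 / (1.0 + np.exp(-2.0 * beta * neighbours_sum(s, i, j)))
            s[i, j] = 1 if rng.random() < p else -1

def interaction(s):
    # sum of s_i * s_j over horizontal and vertical neighbour pairs
    return np.sum(s[:-1, :] * s[1:, :]) + np.sum(s[:, :-1] * s[:, 1:])

betas = np.linspace(0.0, 0.4, 9)
means = []
for beta in betas:
    s = rng.choice([-1, 1], size=(L, L))
    for _ in range(100):                      # burn-in sweeps
        gibbs_sweep(s, beta)
    draws = []
    for _ in range(300):
        gibbs_sweep(s, beta)
        draws.append(interaction(s))
    means.append(np.mean(draws))

# d log Z / d beta = E_beta[interaction];  Z(0) = 2^(L*L)
log_Z = L * L * np.log(2.0) + np.trapz(means, betas)
print("estimated log normalizing constant at beta = 0.4:", log_Z)
```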

2.
Motivated by the autologistic model for the analysis of spatial binary data on the two-dimensional lattice, we develop efficient computational methods for calculating the normalizing constant of models for discrete data defined on the cylinder and on the lattice. Because the normalizing constant is generally unknown analytically, statisticians have developed various ad hoc methods to overcome this difficulty. Our aim is to provide computationally and statistically efficient methods for calculating the normalizing constant so that efficient likelihood-based methods become available for inference. We extend the so-called transition method to obtain a feasible computation of the normalizing constant under the cylinder boundary condition. To extend the result to the free-boundary condition on the lattice we use an efficient path-sampling Markov chain Monte Carlo scheme. The methods are generally applicable to association patterns other than spatial, such as clustered binary data, and to variables taking three or more values described by, for example, Potts models.
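
To convey the flavour of the transition (transfer-matrix) approach for the cylinder boundary condition, here is a minimal sketch that computes the exact normalizing constant of a small binary Ising/autologistic strip wrapped into a cylinder along its length; the interaction parameterization and the sizes are illustrative assumptions, not the paper's model.

```python
# Hedged sketch of the transfer-matrix ("transition") idea: exact normalizing
# constant of a width-m binary strip wrapped into a cylinder of length n.
import itertools
import numpy as np

def cylinder_log_Z(m, n, beta):
    states = list(itertools.product([-1, 1], repeat=m))   # 2^m column configurations

    def within(col):                    # interactions inside one column (free ends)
        return sum(col[i] * col[i + 1] for i in range(m - 1))

    def between(a, b):                  # interactions linking adjacent columns
        return sum(x * y for x, y in zip(a, b))

    K = len(states)
    T = np.empty((K, K))
    for i, a in enumerate(states):
        for j, b in enumerate(states):
            T[i, j] = np.exp(beta * (within(b) + between(a, b)))
    # wrapping column n back to column 1 turns the product of transfer
    # matrices into a trace, so the constant is exact, with no Monte Carlo
    return np.log(np.trace(np.linalg.matrix_power(T, n)))

print(cylinder_log_Z(m=4, n=10, beta=0.3))
```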

3.
Markov random fields (MRFs) express spatial dependence through conditional distributions, although their stochastic behavior is defined by their joint distribution. These joint distributions are typically difficult to obtain in closed form, the problem being a normalizing constant that is a function of unknown parameters. The Gaussian MRF (or conditional autoregressive model) is one case where the normalizing constant is available in closed form; however, when sample sizes are moderate to large (thousands to tens of thousands) and beyond, its computation can be problematic. Because the conditional autoregressive (CAR) model is often used for spatial-data modeling, we develop likelihood-inference methodology for this model in situations where the sample size is too large for its normalizing constant to be computed directly. In particular, we use simulation methodology to obtain maximum likelihood estimators of the mean, variance, and spatial-dependence parameters (including their asymptotic variances and covariances) of CAR models.
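
For moderate sample sizes the CAR normalizing constant can still be evaluated exactly through a log-determinant; the sketch below uses the standard eigenvalue device for log |D − ρW| and is only a baseline, not the simulation-based large-sample methodology developed in the paper.

```python
# Hedged sketch: exact Gaussian log-likelihood for a CAR model on a moderate
# graph, with the normalizing constant entering through log|Q|.  Assumes a
# symmetric 0/1 adjacency W with no isolated sites and |rho * lambda_max| < 1.
import numpy as np

def car_loglik(z, W, mu, sigma2, rho):
    n = len(z)
    d = W.sum(axis=1).astype(float)          # neighbour counts
    Q = (np.diag(d) - rho * W) / sigma2      # CAR precision matrix
    # log|D - rho W| = sum(log d_i) + sum(log(1 - rho * lambda_i)),
    # where lambda_i are the eigenvalues of D^{-1/2} W D^{-1/2}
    lam = np.linalg.eigvalsh(np.diag(d ** -0.5) @ W @ np.diag(d ** -0.5))
    logdet_Q = np.sum(np.log(d)) + np.sum(np.log1p(-rho * lam)) - n * np.log(sigma2)
    r = z - mu
    return 0.5 * logdet_Q - 0.5 * r @ Q @ r - 0.5 * n * np.log(2 * np.pi)

# toy usage on a 4-cycle graph
W = np.array([[0, 1, 0, 1], [1, 0, 1, 0], [0, 1, 0, 1], [1, 0, 1, 0]], dtype=float)
print(car_loglik(np.array([0.2, -0.1, 0.3, 0.0]), W, mu=0.0, sigma2=1.0, rho=0.4))
```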

4.
A density estimation method in a Bayesian nonparametric framework is presented for the case where the recorded data do not come directly from the distribution of interest, but from a length-biased version. From a Bayesian perspective, efforts to computationally evaluate posterior quantities conditionally on length-biased data have been hindered by the inability to circumvent the problem of a normalizing constant. In this article, we present a novel Bayesian nonparametric approach to the length-biased sampling problem that circumvents the issue of the normalizing constant. Numerical illustrations as well as a real data example are presented, and the estimator is compared against its frequentist counterpart, Jones's kernel density estimator for indirect data.
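
The frequentist benchmark mentioned at the end is, in spirit, a weighted kernel estimator; the sketch below follows the usual form attributed to Jones for length-biased data, but the exact weighting convention and the bandwidth are assumptions made here for illustration, not details taken from the article.

```python
# Hedged sketch of a kernel density estimator for length-biased data in the
# spirit of Jones: observations are reweighted by 1/y_i and rescaled by a
# harmonic-mean estimate of the mean mu of the target density.
import numpy as np

def length_biased_kde(x_grid, y, h):
    """y: data from the length-biased density g(y) = y f(y) / mu, with y > 0."""
    y = np.asarray(y, dtype=float)
    mu_hat = len(y) / np.sum(1.0 / y)                     # estimate of mu
    u = (x_grid[:, None] - y[None, :]) / h
    kern = np.exp(-0.5 * u ** 2) / np.sqrt(2 * np.pi)     # Gaussian kernel
    return mu_hat * np.mean(kern / y[None, :], axis=1) / h

# toy check: if f is Exponential(1), the length-biased density is Gamma(2, 1),
# so f_hat below should roughly track exp(-x) over the grid
rng = np.random.default_rng(1)
y = rng.gamma(shape=2.0, scale=1.0, size=500)
grid = np.linspace(0.05, 5.0, 100)
f_hat = length_biased_kde(grid, y, h=0.3)
```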

5.
Recently, a new family of skew distributions was proposed using a specific class of transformations of scale, in which the normalizing constant remains unchanged and unimodality is readily assured. In this paper, we introduce mode invariance in this family, which allows us to easily study certain properties, including monotonicity of skewness, and to incorporate various favorable properties. Entropy maximization for a skew distribution is discussed, and a numerical study is also conducted.

6.
We present a novel model that is a two-parameter extension of the Poisson distribution. Its normalizing constant is related to the Touchard polynomials, hence the name of the model. It is a flexible distribution that can account for both under- and overdispersion and for the concentration of zeros frequently found in non-Poisson count data. In contrast to some other generalizations, the Hessian matrix for maximum likelihood estimation of the Touchard parameters has a simple form. We illustrate with three data sets, showing that the suggested model is a competitive candidate for fitting non-Poisson counts.
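
One published parameterization of the Touchard model takes the probability mass function proportional to λ^k (k+1)^δ / k!, with the normalizing constant given by a Touchard-polynomial-type sum; the sketch below assumes that form and simply truncates the sum, so it should be read as an illustration rather than the authors' exact formulation.

```python
# Hedged sketch: log-pmf of a Touchard-type count model with pmf assumed
# proportional to lambda^k (k+1)^delta / k!; the normalizing sum is truncated.
import numpy as np
from scipy.special import gammaln

def touchard_logpmf(k, lam, delta, kmax=500):
    ks = np.arange(kmax + 1)
    log_terms = ks * np.log(lam) + delta * np.log(ks + 1.0) - gammaln(ks + 1.0)
    log_norm = np.logaddexp.reduce(log_terms)      # log of the normalizing constant
    k = np.asarray(k, dtype=float)
    return k * np.log(lam) + delta * np.log(k + 1.0) - gammaln(k + 1.0) - log_norm

# delta = 0 recovers the Poisson(lambda) pmf; delta != 0 shifts the dispersion
print(np.exp(touchard_logpmf(np.arange(5), lam=2.0, delta=-0.5)))
```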

7.
A closed-form expression is presented for the probability integral of the Pearson Type IV distribution, and a corresponding method of evaluation is given. This analysis addresses a long-standing gap in the theory of the Pearson system of distributions. In addition, a simple derivation is given of an expression for the normalizing constant in the Type IV integral.
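
Even without the closed form derived in the paper, the Type IV normalizing constant can be checked numerically; the sketch below uses the common four-parameter density form and plain quadrature, and is meant only as such a check, not as the paper's analytic expression.

```python
# Hedged sketch: numerical normalizing constant for a Pearson Type IV density
# written as f(x) ∝ [1 + ((x - lam)/a)^2]^(-m) * exp(-nu * arctan((x - lam)/a)).
import numpy as np
from scipy.integrate import quad

def pearson4_kernel(x, m, nu, a, lam):
    z = (x - lam) / a
    return (1.0 + z ** 2) ** (-m) * np.exp(-nu * np.arctan(z))

def pearson4_norm_const(m, nu, a, lam):
    integral, _ = quad(pearson4_kernel, -np.inf, np.inf, args=(m, nu, a, lam))
    return 1.0 / integral      # requires m > 1/2 for the integral to exist

print(pearson4_norm_const(m=2.5, nu=1.0, a=1.0, lam=0.0))
```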

8.
Gaussian Markov random-field (GMRF) models are frequently used in a wide variety of applications. In most cases parts of the GMRF are observed through mutually independent data; hence the full conditional of the GMRF, a hidden GMRF (HGMRF), is of interest. We are concerned with the case where the likelihood is non-Gaussian, leading to non-Gaussian HGMRF models. Several researchers have constructed block-sampling Markov chain Monte Carlo schemes based on approximations of the HGMRF by a GMRF, using a second-order expansion of the log-density at or near the mode. This is possible because the GMRF approximation can be sampled exactly with a known normalizing constant, and the Markov property of the GMRF approximation yields computational efficiency. The main contribution of the paper is to go beyond the GMRF approximation and to construct a class of non-Gaussian approximations which adapt automatically to the particular HGMRF under study. The accuracy can be tuned by intuitive parameters to nearly any precision. These non-Gaussian approximations share the same computational complexity as those based on GMRFs and can be sampled exactly with computable normalizing constants. We apply our approximations in spatial disease mapping and in model-based geostatistical models with different likelihoods, obtain procedures for block updating and construct Metropolized independence samplers.
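
The GMRF approximation that the paper improves upon can be written down in a few lines for a Poisson-count HGMRF: Newton iterations locate the mode, and the negative Hessian there defines a Gaussian with an explicit normalizing constant. The prior precision and data below are illustrative, and the paper's non-Gaussian refinements are not reproduced.

```python
# Hedged sketch of the second-order (Laplace-type) GMRF approximation for a
# Poisson HGMRF: y_i ~ Poisson(exp(x_i)) with x ~ N(0, Q^{-1}).
import numpy as np

def gmrf_approx(y, Q, n_iter=25):
    x = np.zeros(len(y))
    for _ in range(n_iter):                       # Newton iterations to the posterior mode
        mu = np.exp(x)
        grad = y - mu - Q @ x                     # gradient of the log posterior
        H = Q + np.diag(mu)                       # negative Hessian = precision of the approx
        x = x + np.linalg.solve(H, grad)
    prec = Q + np.diag(np.exp(x))
    # the Gaussian approximation N(x, prec^{-1}) has an explicit normalizing constant
    sign, logdet = np.linalg.slogdet(prec)
    log_norm = 0.5 * logdet - 0.5 * len(y) * np.log(2 * np.pi)
    return x, prec, log_norm

# a small tridiagonal prior precision matrix (illustrative) and toy counts
n = 5
Q = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1) + 0.1 * np.eye(n)
mode, prec, log_norm = gmrf_approx(np.array([3, 0, 1, 4, 2], dtype=float), Q)
print(mode, log_norm)
```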

9.
10.
Motivated by examples in protein bioinformatics, we study a mixture model of multivariate angular distributions. The distribution treated here, the multivariate sine distribution, is a multivariate extension of the well-known von Mises distribution on the circle. The density of the sine distribution has an intractable normalizing constant, and we propose to replace it in the concentrated case by a simple approximation. We study the EM algorithm for this distribution and apply it to a practical example from protein bioinformatics.

11.
While conjugate Bayesian inference in decomposable Gaussian graphical models is largely solved, the non-decomposable case still poses difficulties concerning the specification of suitable priors and the evaluation of normalizing constants. In this paper we derive the DY-conjugate prior (Diaconis & Ylvisaker, 1979) for non-decomposable models and show that it can be regarded as a generalization to an arbitrary graph G of the hyper inverse Wishart distribution (Dawid & Lauritzen, 1993). In particular, if G is an incomplete prime graph it constitutes a non-trivial generalization of the inverse Wishart distribution. Inference based on the marginal likelihood requires the evaluation of a normalizing constant, and we propose an importance sampling algorithm for its computation. Examples of structural learning involving non-decomposable models are given. In order to deal efficiently with the set of all positive definite matrices with a non-decomposable zero-pattern we introduce the operation of triangular completion of an incomplete triangular matrix. Such a device turns out to be extremely useful both in the proof of theoretical results and in the implementation of the Monte Carlo procedure.

12.
The Ising model has become more prominent in spatial statistics since its application to image analysis was pioneered by Besag. This paper describes three multilevel generalizations of the Ising model, including the general spin Ising model, and compares and contrasts the three. In all three cases the normalizing constant is intractable but, using developments made by physicists, we give adequate approximations for the general spin model, together with complete expressions for the binary Ising model. We show that these approximations allow inference for the general spin model. An application to texture analysis is also given.

13.
Holonomic function theory has been successfully applied in a series of recent papers to efficiently calculate the normalizing constant and perform likelihood estimation for the Fisher–Bingham distributions. A key ingredient in establishing the standard holonomic gradient algorithms is the calculation of the Pfaffian equations. So far, these papers have either calculated them symbolically or applied certain methods to simplify the process. Here we give the explicit form of the Pfaffian equations using expressions from Laplace inversion methods. This improves the implementation of the holonomic algorithms for these problems and enables their adjustment to degenerate cases. As a result, an exact and more dimensionally efficient ODE is implemented for likelihood inference.

14.
We study holonomic gradient descent for maximum likelihood estimation of the exponential-polynomial distribution, whose density is the exponential function of a polynomial in the random variable. We first consider the case in which the support of the distribution is the set of positive reals. We show that the maximum likelihood estimate (MLE) can be computed easily by holonomic gradient descent, even though the normalizing constant of this family does not have a closed-form expression, and we discuss determination of the degree of the polynomial based on the score test statistic. We then present extensions to the whole real line and to the bivariate distribution on the positive orthant.
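
For low polynomial degrees, a brute-force alternative to holonomic gradient descent is to handle the normalizing constant by quadrature inside the likelihood; the sketch below does exactly that for a degree-two exponential-polynomial density on the positive reals, and is not the holonomic algorithm studied in the paper.

```python
# Hedged sketch: quadrature-based MLE for f(x) ∝ exp(t1*x + t2*x^2) on (0, inf),
# with t2 kept negative so the density is integrable.
import numpy as np
from scipy.integrate import quad
from scipy.optimize import minimize

def neg_loglik(params, x):
    t1, s2 = params
    t2 = -np.exp(s2)                               # enforce a negative leading coefficient
    poly = lambda t: t1 * t + t2 * t ** 2
    norm, _ = quad(lambda t: np.exp(poly(t)), 0, np.inf)   # normalizing constant
    return len(x) * np.log(norm) - np.sum(poly(x))

rng = np.random.default_rng(3)
x = np.abs(rng.standard_normal(400))               # density on (0, inf) ∝ exp(-x^2/2)
fit = minimize(neg_loglik, x0=np.array([0.0, 0.0]), args=(x,), method="Nelder-Mead")
print(fit.x[0], -np.exp(fit.x[1]))                 # true values are roughly (0, -0.5)
```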

15.
Estimation of the time-average variance constant (TAVC), the asymptotic variance of the sample mean of a dependent process, is of fundamental importance in various fields of statistics. For frequentists, it is crucial for constructing confidence intervals for the mean and serves as a normalizing constant in various test statistics. For Bayesians, it is widely used for evaluating effective sample size and for convergence diagnosis in Markov chain Monte Carlo methods. In this paper, by considering high-order corrections to the asymptotic biases, we develop a new class of TAVC estimators that enjoys optimal convergence rates under different degrees of serial dependence of the stochastic process. The high-order correction procedure is also applicable to estimation of the so-called smoothness parameter, which is essential in determining the optimal bandwidth. Comparisons with existing TAVC estimators are comprehensively investigated. In particular, the proposed optimal high-order corrected estimator has the best performance in terms of mean squared error.
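
As a point of reference for what a baseline TAVC estimator looks like, here is a minimal non-overlapping batch-means sketch on a toy AR(1) chain; the n^(1/3) batch length is a common rule of thumb, not the optimal bandwidth or the high-order corrected estimator of the paper.

```python
# Hedged sketch: non-overlapping batch-means estimate of the time-average
# variance constant, i.e. the long-run variance of the sample mean.
import numpy as np

def batch_means_tavc(x):
    x = np.asarray(x, dtype=float)
    n = len(x)
    m = max(1, int(n ** (1 / 3)))               # batch length (rule of thumb)
    b = n // m                                  # number of full batches
    batches = x[: b * m].reshape(b, m).mean(axis=1)
    return m * np.var(batches, ddof=1)          # estimates lim n * Var(sample mean)

# toy AR(1) chain: true TAVC = 1 / (1 - phi)^2 with unit innovation variance
rng = np.random.default_rng(4)
phi, n = 0.6, 100_000
e = rng.standard_normal(n)
x = np.zeros(n)
for t in range(1, n):
    x[t] = phi * x[t - 1] + e[t]
print(batch_means_tavc(x), 1.0 / (1 - phi) ** 2)
```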

16.
Fisher's transformation of the bivariate-normal correlation coefficient is usually derived as a variance-stabilizing transformation and its normalizing property is then demonstrated by the reduced skewness of the distribution resulting from the transformation. In this note the transformation is derived as a normalizing transformation that incorporates variance stabilization. Some additional remarks are made on the transformation and its uses.
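
For reference, the transformation and the confidence interval it yields take only a few lines; these are standard textbook facts (z = arctanh r with approximate variance 1/(n − 3)) rather than details from the note itself.

```python
# Hedged sketch: Fisher's z transform and the usual approximate confidence
# interval for a bivariate-normal correlation coefficient.
import numpy as np
from scipy import stats

def fisher_ci(r, n, level=0.95):
    z = np.arctanh(r)                                 # variance-stabilizing / normalizing transform
    half = stats.norm.ppf(0.5 + level / 2) / np.sqrt(n - 3)
    return np.tanh(z - half), np.tanh(z + half)       # back-transform the endpoints

print(fisher_ci(r=0.42, n=50))
```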

17.

This paper develops test procedures for testing the validity of general linear identifying restrictions imposed on cointegrating vectors in the context of a vector autoregressive model. In addition to overidentifying restrictions, the considered restrictions may also involve normalizing restrictions. Tests for both types of restrictions are developed and their asymptotic properties are obtained. Under the null hypothesis, tests for normalizing restrictions have an asymptotic "multivariate unit root distribution", similar to that obtained for the likelihood ratio test for cointegration, while tests for overidentifying restrictions have a standard chi-square limiting distribution. Since these two types of tests are asymptotically independent, they are easy to combine into an overall test for the specified identifying restrictions. An overall test of this kind can consistently reveal the failure of the identifying restrictions in a wider class of cases than previous tests, which only test for overidentifying restrictions.

18.
Models for which the likelihood function can be evaluated only up to a parameter-dependent unknown normalizing constant, such as Markov random field models, are used widely in computer science, statistical physics, spatial statistics, and network analysis. However, Bayesian analysis of these models using standard Monte Carlo methods is not possible due to the intractability of their likelihood functions. Several methods that permit exact, or close to exact, simulation from the posterior distribution have recently been developed. However, estimating the evidence and Bayes factors for these models remains challenging in general. This paper describes new random-weight importance sampling and sequential Monte Carlo methods for estimating Bayes factors that use simulation to circumvent the evaluation of the intractable likelihood, and compares them to existing methods. In some cases we observe an advantage in using biased weight estimates. An initial investigation into the theoretical and empirical properties of this class of methods is presented; some support for the use of biased estimates emerges, but we advocate caution in their use.

19.
A spatial hidden Markov model (SHMM) is introduced to analyse the distribution of a species on an atlas, taking into account that false observations and false non-detections of the species can occur during the survey, blurring the true map of presence and absence of the species. The reconstruction of the true map is tackled as the restoration of a degraded pixel image, where the true map is an autologistic model, hidden behind the observed map, whose normalizing constant is efficiently computed by simulating an auxiliary map. The distribution of the species is explained under the Bayesian paradigm and Markov chain Monte Carlo (MCMC) algorithms are developed. We are interested in the spatial distribution of the bird species Greywing Francolin in the south of Africa. Many climatic and land-use explanatory variables are also available: they are included in the SHMM and a subset of them is selected by the mutation operators within the MCMC algorithm.
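
The auxiliary-map device mentioned here is, in essence, an exchange-type Metropolis-Hastings step in which the autologistic normalizing constants cancel; the sketch below illustrates one such update under a flat prior and a symmetric random-walk proposal, with a Gibbs run standing in for the exact simulation the method formally requires.

```python
# Hedged sketch: one auxiliary-variable (exchange-type) MH update for the
# parameters of an autologistic field p(x) ∝ exp(alpha * sum x_i + beta * sum_{i~j} x_i x_j).
import numpy as np

rng = np.random.default_rng(2)

def autolog_stat(x, W):
    # sufficient statistics: (sum of x_i, sum over neighbour pairs of x_i x_j)
    return np.array([x.sum(), 0.5 * x @ W @ x])

def gibbs_autologistic(alpha, beta, W, sweeps=200):
    n = W.shape[0]
    x = rng.integers(0, 2, size=n).astype(float)
    for _ in range(sweeps):
        for i in range(n):
            eta = alpha + beta * (W[i] @ x)
            x[i] = 1.0 if rng.random() < 1.0 / (1.0 + np.exp(-eta)) else 0.0
    return x

def exchange_step(theta, x_obs, W, step=0.1):
    # flat prior over theta and symmetric proposal assumed, so only the
    # sufficient statistics appear and the normalizing constants cancel
    prop = theta + step * rng.standard_normal(2)
    aux = gibbs_autologistic(prop[0], prop[1], W)        # auxiliary map at the proposal
    s_obs, s_aux = autolog_stat(x_obs, W), autolog_stat(aux, W)
    log_r = (prop - theta) @ s_obs + (theta - prop) @ s_aux
    return prop if np.log(rng.random()) < log_r else theta

# toy usage: a 4-cycle "map", a synthetic observed configuration, a few updates
W = np.array([[0, 1, 0, 1], [1, 0, 1, 0], [0, 1, 0, 1], [1, 0, 1, 0]], dtype=float)
x_obs = np.array([1.0, 0.0, 1.0, 1.0])
theta = np.array([0.0, 0.0])
for _ in range(5):
    theta = exchange_step(theta, x_obs, W)
print(theta)
```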

20.
We consider a method for setting second-order accurate confidence intervals for a scalar parameter by applying normalizing transformations to unbiased estimating functions. Normalizing a nonlinear estimating function is usually easier than normalizing the estimator defined as the solution of the corresponding estimating equation, which usually has to be obtained by some iterative algorithm. Numerical examples include a canonical Poisson regression and estimation of the correlation coefficient. Numerical comparisons are made with the asymptotically equivalent estimating-function bootstrap proposed by Hu and Kalbfleisch (Canad. J. Statist. 28 (2000) 449).
