Similar articles: 20 results found
1.
We consider the optimal scaling problem for proposal distributions in Hastings–Metropolis algorithms derived from Langevin diffusions. We prove an asymptotic diffusion limit theorem and show that the relative efficiency of the algorithm can be characterized by its overall acceptance rate, independently of the target distribution. The asymptotically optimal acceptance rate is 0.574. We show that, as a function of dimension n, the complexity of the algorithm is O(n^{1/3}), which compares favourably with the O(n) complexity of random walk Metropolis algorithms. We illustrate this comparison with some example simulations.
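The Langevin-diffusion proposal behind this algorithm (MALA) can be sketched as follows. This is a minimal illustration, not code from the paper: the standard normal target is an assumption, and the step size `h` plays the role of the scaling to be tuned toward the 0.574 acceptance rate.

```python
import numpy as np

def mala_step(x, log_pi, grad_log_pi, h, rng):
    """One Metropolis-adjusted Langevin step with step size h."""
    # Langevin proposal: drift half a step along the gradient of log pi.
    mu_x = x + 0.5 * h * grad_log_pi(x)
    y = mu_x + np.sqrt(h) * rng.standard_normal(x.shape)
    mu_y = y + 0.5 * h * grad_log_pi(y)
    # Gaussian proposal log-densities; normalizing constants cancel.
    log_q_y_given_x = -np.sum((y - mu_x) ** 2) / (2.0 * h)
    log_q_x_given_y = -np.sum((x - mu_y) ** 2) / (2.0 * h)
    log_alpha = log_pi(y) - log_pi(x) + log_q_x_given_y - log_q_y_given_x
    if np.log(rng.uniform()) < log_alpha:
        return y, True
    return x, False

# Standard normal target in 10 dimensions (illustrative assumption).
log_pi = lambda x: -0.5 * np.sum(x ** 2)
grad_log_pi = lambda x: -x

rng = np.random.default_rng(0)
x = np.zeros(10)
accepts = 0
for _ in range(5000):
    x, ok = mala_step(x, log_pi, grad_log_pi, h=0.6, rng=rng)
    accepts += ok
print(accepts / 5000)  # observed acceptance rate; tune h toward ~0.574
```

In practice one would adjust `h` until the observed acceptance rate is near the asymptotically optimal 0.574.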

2.
Markov chain Monte Carlo (MCMC) is an important computational technique for generating samples from non-standard probability distributions. A major challenge in the design of practical MCMC samplers is to achieve efficient convergence and mixing properties. One way to accelerate convergence and mixing is to adapt the proposal distribution in light of previously sampled points, thus increasing the probability of acceptance. In this paper, we propose two new adaptive MCMC algorithms based on the Independent Metropolis–Hastings algorithm. In the first, we adjust the proposal to minimize an estimate of the cross-entropy between the target and proposal distributions, using the experience of pre-runs. This approach provides a general technique for deriving natural adaptive formulae. The second approach uses multiple parallel chains, and involves updating chains individually, then updating a proposal density by fitting a Bayesian model to the population. An important feature of this approach is that adapting the proposal does not change the limiting distributions of the chains. Consequently, the adaptive phase of the sampler can be continued indefinitely. We include results of numerical experiments indicating that the new algorithms compete well with traditional Metropolis–Hastings algorithms. We also demonstrate the method for a realistic problem arising in comparative genomics.

3.
Rejection sampling is a well-known method to generate random samples from arbitrary target probability distributions. It demands the design of a suitable proposal probability density function (pdf) from which candidate samples can be drawn. These samples are either accepted or rejected depending on a test involving the ratio of the target and proposal densities. The adaptive rejection sampling method is an efficient algorithm to sample from a log-concave target density that attains high acceptance rates by improving the proposal density whenever a sample is rejected. In this paper we introduce a generalized adaptive rejection sampling procedure that can be applied with a broad class of target probability distributions, possibly non-log-concave and exhibiting multiple modes. The proposed technique yields a sequence of proposal densities that converge toward the target pdf, thus achieving very high acceptance rates. We provide a simple numerical example to illustrate the basic use of the proposed technique, together with a more elaborate positioning application using real data.
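The accept/reject test involving the ratio of target and proposal densities can be sketched minimally as below. The Beta(2,2) target, uniform proposal, and bound `M` are illustrative assumptions; this is plain (non-adaptive) rejection sampling, not the generalized adaptive procedure proposed in the paper.

```python
import numpy as np

def rejection_sample(target_pdf, proposal_rvs, proposal_pdf, M, n, rng):
    """Draw n samples from target_pdf by rejection sampling.

    Requires the envelope condition target_pdf(x) <= M * proposal_pdf(x).
    """
    out = []
    while len(out) < n:
        x = proposal_rvs(rng)
        u = rng.uniform()
        # Accept with probability target(x) / (M * proposal(x)).
        if u * M * proposal_pdf(x) <= target_pdf(x):
            out.append(x)
    return np.array(out)

# Illustrative assumptions: Beta(2,2) target, Uniform(0,1) proposal.
target = lambda x: 6.0 * x * (1.0 - x)   # Beta(2,2) density on (0,1)
prop_pdf = lambda x: 1.0                 # Uniform(0,1) density
prop_rvs = lambda rng: rng.uniform()
M = 1.5                                  # sup of target/proposal, at x = 0.5

rng = np.random.default_rng(1)
samples = rejection_sample(target, prop_rvs, prop_pdf, M, 2000, rng)
print(samples.mean())  # close to the Beta(2,2) mean of 0.5
```

The overall acceptance rate is 1/M; the adaptive scheme in the paper drives the proposal toward the target so that this rate approaches one.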

4.
We consider modified tolerance intervals for assessing content (dose) uniformity from one‐ and two‐stage designs. This work was motivated by a proposal from the International Pharmaceutical Aerosol Consortium on Regulation and Science (IPAC‐RS) to modify tolerance intervals to reduce their known conservatism away from the centre of the acceptance interval. IPAC‐RS's original proposal suffers from anti‐conservatism, but we find this is due to the choice of constants, not the form of the acceptance rule. We propose an alternative basis for choosing the constants in the acceptance rules and compare attained levels to those of other choices, including revised constants proposed by IPAC‐RS. These alternate choices of the constants lead to modified tolerance intervals that are either conservative at the sample sizes for which they were chosen or may have slight anti‐conservatism, within the limits of the simulations. Copyright © 2004 John Wiley & Sons, Ltd.

5.
Use of auxiliary variables for generating proposal variables within a Metropolis–Hastings setting has been suggested in many different settings. This has in particular been of interest for simulation from complex distributions such as multimodal distributions or in transdimensional approaches. For many of these approaches, the acceptance probabilities that are used can appear somewhat magical, and different proofs of their validity have been given in each case. In this article, we present a general framework for the construction of acceptance probabilities in auxiliary variable proposal generation. In addition to showing the similarities between many of the proposed algorithms in the literature, the framework also demonstrates that there is great flexibility in how to construct acceptance probabilities. With this flexibility, alternative acceptance probabilities are suggested. Some numerical experiments are also reported.

6.
Appropriately designing the proposal kernel of particle filters is an issue of significant importance, since a bad choice may lead to deterioration of the particle sample and, consequently, wasted computational power. In this paper we introduce a novel algorithm adaptively approximating the so-called optimal proposal kernel by a mixture of integrated curved exponential distributions with logistic weights. This family of distributions, referred to as mixtures of experts, is broad enough to be used in the presence of multi-modality or strongly skewed distributions. The mixtures are fitted, via online-EM methods, to the optimal kernel through minimisation of the Kullback-Leibler divergence between the auxiliary target and instrumental distributions of the particle filter. At each iteration of the particle filter, the algorithm is required to solve only a single optimisation problem for the whole particle sample, yielding an algorithm with only linear complexity. In addition, we illustrate in a simulation study how the method can be successfully applied to optimal filtering in nonlinear state-space models.

7.
We study the properties of truncated gamma distributions and we derive simulation algorithms which dominate the standard algorithms for these distributions. For the right truncated gamma distribution, an optimal accept–reject algorithm is based on the fact that its density can be expressed as an infinite mixture of beta distributions. For integer values of the parameters, the density of the left truncated distributions can be rewritten as a mixture which can be easily generated. We give an optimal accept–reject algorithm for the other values of the parameter. We compare the efficiency of our algorithm with the previous method and show the improvement in terms of minimum acceptance probability. The algorithm proposed here has an acceptance probability which is superior to e/4.
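For orientation, a naive accept–reject baseline for the right truncated gamma (simply discarding draws that exceed the truncation point) looks as follows. This is the inefficient standard approach such papers improve upon, not the optimal beta-mixture algorithm; the shape, scale, and truncation point are illustrative assumptions.

```python
import numpy as np

def right_truncated_gamma(shape, scale, t, n, rng):
    """Sample Gamma(shape, scale) restricted to (0, t) by naive rejection.

    Acceptance probability equals P(X < t) under the untruncated gamma,
    so this degrades badly when t sits far in the left tail.
    """
    out = []
    while len(out) < n:
        x = rng.gamma(shape, scale, size=n)  # batch of untruncated draws
        out.extend(x[x < t])                 # keep only draws below t
    return np.array(out[:n])

rng = np.random.default_rng(2)
s = right_truncated_gamma(2.0, 1.0, t=1.0, n=1000, rng=rng)
print(s.mean())  # all samples lie in (0, 1)
```

Here the acceptance probability is P(Gamma(2,1) < 1) ≈ 0.26, which illustrates why a tailored algorithm with acceptance probability above e/4 is a substantial improvement.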

8.
The entropy and mutual information index are important concepts developed by Shannon in the context of information theory. They have been widely studied in the case of the multivariate normal distribution. We first extend these tools to the full symmetric class of multivariate elliptical distributions and then to the more flexible families of multivariate skew‐elliptical distributions. We study in detail the cases of the multivariate skew‐normal and skew‐t distributions. We apply our findings to the optimal design of an ozone monitoring station network in Santiago de Chile.

9.
The authors consider the problem of estimating, under quadratic loss, the mean of a spherically symmetric distribution when its norm is supposed to be known and when a residual vector is available. They give a necessary and sufficient condition for the optimal James‐Stein estimator to dominate the usual estimator. Various examples are given that are not necessarily variance mixtures of normal distributions. Consideration is also given to an alternative class of robust James‐Stein type estimators that take into account the residual vector. A more general domination condition is given for this class.

10.
New sequential Monte Carlo methods for nonlinear dynamic systems
In this paper we present several new sequential Monte Carlo (SMC) algorithms for online estimation (filtering) of nonlinear dynamic systems. SMC has been shown to be a powerful tool for dealing with complex dynamic systems. It sequentially generates Monte Carlo samples from a proposal distribution, adjusted by a set of importance weights with respect to a target distribution, to facilitate statistical inferences on the characteristic (state) of the system. The key to a successful implementation of SMC in complex problems is the design of an efficient proposal distribution from which the Monte Carlo samples are generated. We propose several such proposal distributions that are efficient yet easy to generate samples from. They are efficient because they tend to utilize both the information in the state process and the observations. They are all Gaussian distributions, hence easy to sample from. The central ideas of the conventional nonlinear filters, such as the extended Kalman filter, the unscented Kalman filter and the Gaussian quadrature filter, are used to construct these proposal distributions. The effectiveness of the proposed algorithms is demonstrated through two applications: real-time target tracking and multiuser parameter tracking in CDMA communication systems. This work was supported in part by the U.S. National Science Foundation (NSF) under grants CCR-9875314, CCR-9980599, DMS-9982846, DMS-0073651 and DMS-0073601.

11.
The adaptive Metropolis (AM) algorithm of Haario, Saksman and Tamminen (Bernoulli 7(2):223–242, 2001) uses the estimated covariance of the target distribution in the proposal distribution. This paper introduces a new robust adaptive Metropolis algorithm estimating the shape of the target distribution and simultaneously coercing the acceptance rate. The adaptation rule is computationally simple, adding no extra cost compared with the AM algorithm. The adaptation strategy can be seen as a multidimensional extension of the previously proposed method adapting the scale of the proposal distribution in order to attain a given acceptance rate. The empirical results show promising behaviour of the new algorithm in an example with a Student t target distribution having no finite second moment, where the AM covariance estimate is unstable. In the examples with finite second moments, the performance of the new approach seems to be competitive with the AM algorithm combined with scale adaptation.

12.
We present an adaptive method for the automatic scaling of random-walk Metropolis–Hastings algorithms, which quickly and robustly identifies the scaling factor that yields a specified overall sampler acceptance probability. Our method relies on the use of the Robbins–Monro search process, whose performance is determined by an unknown steplength constant. Based on theoretical considerations we give a simple estimator of this constant for Gaussian proposal distributions. The effectiveness of our method is demonstrated with both simulated and real data examples.
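A minimal sketch of Robbins–Monro scale adaptation for random-walk Metropolis, in the spirit of the method described above: the log-scale is nudged up when a move is accepted more readily than desired and down otherwise. The standard normal target, the 1/i steplength schedule, and the 0.234 target acceptance rate are illustrative assumptions, not the paper's estimator of the steplength constant.

```python
import numpy as np

def adapt_scale_rwm(log_pi, x0, target_acc=0.234, n_iter=5000, rng=None):
    """Random-walk Metropolis with Robbins-Monro adaptation of the
    proposal scale toward a target acceptance probability."""
    rng = np.random.default_rng() if rng is None else rng
    x = np.asarray(x0, dtype=float)
    log_sigma = 0.0
    for i in range(1, n_iter + 1):
        y = x + np.exp(log_sigma) * rng.standard_normal(x.shape)
        alpha = min(1.0, np.exp(log_pi(y) - log_pi(x)))
        if rng.uniform() < alpha:
            x = y
        # Robbins-Monro step: move log(sigma) by (1/i) * (alpha - target).
        # The 1/i schedule stands in for the paper's tuned steplength constant.
        log_sigma += (1.0 / i) * (alpha - target_acc)
    return x, np.exp(log_sigma)

log_pi = lambda x: -0.5 * np.sum(x ** 2)  # standard normal target (assumed)
x, sigma = adapt_scale_rwm(log_pi, np.zeros(5), rng=np.random.default_rng(3))
print(sigma)  # adapted proposal scale
```

The point of the paper is precisely that the steplength constant multiplying the 1/i schedule matters for performance, and that it can be estimated for Gaussian proposals rather than guessed.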

13.
We propose new control variates for variance reduction in estimation of mean values using the Metropolis–Hastings algorithm. Traditionally, states that are rejected in the Metropolis–Hastings algorithm are simply ignored, which intuitively seems to be a waste of information. We present a setting for construction of zero mean control variates for general target and proposal distributions and develop ideas for the standard Metropolis–Hastings and reversible jump algorithms. We give results for three simulation examples. We get best results for variates that are functions of the current state x and the proposal y, but we also consider variates that in addition are functions of the Metropolis–Hastings acceptance/rejection decision. The variance reduction achieved varies depending on the target distribution and proposal mechanisms used. In simulation experiments, we typically achieve relative variance reductions between 15% and 35%.
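One simple way to reuse rejected proposals, related in spirit to (though not identical to) the control variates described above, is "waste recycling": weight each proposal by its acceptance probability instead of discarding it. The standard normal target and the test function f(x) = x² below are illustrative assumptions, not the paper's construction.

```python
import numpy as np

def mh_waste_recycling(log_pi, f, x0, sigma, n_iter, rng):
    """Random-walk Metropolis that also forms a 'waste recycling'
    estimate of E[f(X)], using both the current state and the
    proposal weighted by the acceptance probability."""
    x = float(x0)
    plain, recycled = [], []
    for _ in range(n_iter):
        y = x + sigma * rng.standard_normal()
        alpha = min(1.0, np.exp(log_pi(y) - log_pi(x)))
        # Rao-Blackwellised term: alpha*f(proposal) + (1-alpha)*f(current).
        recycled.append(alpha * f(y) + (1.0 - alpha) * f(x))
        if rng.uniform() < alpha:
            x = y
        plain.append(f(x))
    return np.mean(plain), np.mean(recycled)

log_pi = lambda x: -0.5 * x * x  # standard normal target (assumed)
rng = np.random.default_rng(4)
m1, m2 = mh_waste_recycling(log_pi, lambda x: x * x, 0.0, 2.4, 20000, rng)
print(m1, m2)  # both estimates are near E[X^2] = 1 for this target
```

Both estimators are consistent for E[f(X)]; the recycled version uses the rejected information that the standard chain throws away.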

14.
In this paper we describe a sequential importance sampling (SIS) procedure for counting the number of vertex covers in general graphs. The optimal SIS proposal distribution is the uniform over a suitably restricted set, but is not implementable. We will consider two proposal distributions as approximations to the optimal. Both proposals are based on randomization techniques. The first randomization is the classic probability model of random graphs, and in fact, the resulting SIS algorithm shows polynomial complexity for random graphs. The second randomization introduces a probabilistic relaxation technique that uses dynamic programming. The numerical experiments show that the resulting SIS algorithm enjoys excellent practical performance in comparison with existing methods. In particular the method is compared with cachet (an exact model counter) and with the state-of-the-art SampleSearch, which is based on belief networks and importance sampling.

15.
The author shows how to find M‐estimators of location whose generating function is monotone and which are optimal or close to optimal. It is easy to identify a consistent sequence of estimators in this class. In addition, it contains simple and efficient approximations in cases where the likelihood function is difficult to obtain. In some neighbourhoods of the normal distribution, the loss of efficiency due to the approximation is quite small. Optimal monotone M‐estimators can also be determined in cases when the underlying distribution is known only up to a certain neighbourhood. The author considers the ε‐contamination model and an extension thereof that allows the distributions to be arbitrary outside compact intervals. His results also have implications for distributions with monotone score functions. The author illustrates his methodology using Student and stable distributions.

16.
The design of double acceptance sampling (AS) plans for attributes based on the operating characteristic (OC) curve paradigm is usually addressed by enumeration algorithms. These AS plans may be non-optimal with respect to the sample size to inspect, as they were obtained without the requirement that the constraints at the OC curve controlled points are not violated for minimum Average Sample Number (ASN) scenarios. An approach based on mathematical programming is proposed to systematically design double AS plans for attributes, where the characteristics controlled are modelled by binomial or Poisson distributions. Specifically, Mixed Integer Nonlinear Programming (MINLP) formulations are developed and combined with an enumeration algorithm that allows finding ASN minimax optimal plans. A theoretical result is developed with the purpose of assuring that the global optimum design is reached by iteration where a convenient solver is used to find local optima. To validate the algorithm, we compare our results with those of tables commonly used for practical purposes, consider different rates of risk, and setups commonly used in Lot Quality Assurance Sampling (LQAS) for health monitoring programmes. Finally, we compare AS plans determined for processes described by binomial and Poisson distributions.

17.
The authors propose methods for Bayesian inference for generalized linear models with missing covariate data. They specify a parametric distribution for the covariates that is written as a sequence of one‐dimensional conditional distributions. They propose an informative class of joint prior distributions for the regression coefficients and the parameters arising from the covariate distributions. They examine the properties of the proposed prior and resulting posterior distributions. They also present a Bayesian criterion for comparing various models, and a calibration is derived for it. A detailed simulation is conducted and two real data sets are examined to demonstrate the methodology.

18.
Grouped data are commonly encountered in applications. All data from a continuous population are grouped due to rounding of the individual observations. The Bernstein polynomial model is proposed as an approximate model in this paper for estimating a univariate density function based on grouped data. The coefficients of the Bernstein polynomial, as the mixture proportions of beta distributions, can be estimated using an EM algorithm. The optimal degree of the Bernstein polynomial can be determined using a change-point estimation method. The rate of convergence of the proposed density estimate to the true density is proved to be almost parametric by an acceptance–rejection argument used for generating random numbers. The proposed method is compared with some existing methods in a simulation study and is applied to the Chicken Embryo Data.

19.
The authors present an improved ranked set two‐sample Mann‐Whitney‐Wilcoxon test for a location shift between samples from two distributions F and G. They define a function that measures the amount of information provided by each observation from the two samples, given the actual joint ranking of all the units in a set. This information function is used as a guide for improving the Pitman efficacy of the Mann‐Whitney‐Wilcoxon test. When the underlying distributions are symmetric, observations at their mode(s) must be quantified in order to gain efficiency. Analogous results are provided for asymmetric distributions.

20.
The method of tempered transitions was proposed by Neal (Stat. Comput. 6:353–366, 1996) for tackling the difficulties arising when using Markov chain Monte Carlo to sample from multimodal distributions. In common with methods such as simulated tempering and Metropolis-coupled MCMC, the key idea is to utilise a series of successively easier to sample distributions to improve movement around the state space. Tempered transitions does this by incorporating moves through these less modal distributions into the MCMC proposals. Unfortunately the improved movement between modes comes at a high computational cost with a low acceptance rate of expensive proposals. We consider how the algorithm may be tuned to increase the acceptance rates for a given number of temperatures. We find that the commonly assumed geometric spacing of temperatures is reasonable in many but not all applications.
