Similar articles
20 similar articles found.
1.
Approximate Bayesian Computation (ABC) methods, or likelihood-free methods, have emerged over the past fifteen years as useful tools for Bayesian analysis when the likelihood is analytically or computationally intractable. Several ABC methods have been proposed: MCMC methods have been developed by Marjoram et al. (2003) and Bortot et al. (2007), for instance, and sequential methods have been proposed among others by Sisson et al. (2007), Beaumont et al. (2009) and Del Moral et al. (2012). More recently, sequential ABC methods have been advanced as an alternative to ABC-MCMC methods (see for instance McKinley et al., 2009; Sisson et al., 2007). In this paper a new algorithm combining population-based MCMC methods with ABC requirements is proposed, drawing on an analogy with the parallel tempering algorithm (Geyer, 1991). Performance is compared with existing ABC algorithms on simulations and on a real example.
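The likelihood-free idea underlying these methods can be sketched with a minimal ABC rejection sampler on a toy normal-mean model. The summary statistic, tolerance `eps`, and prior below are illustrative choices, not taken from the paper:

```python
import random

def abc_rejection(data, prior_sample, simulate, eps, n_accept):
    """Minimal ABC rejection: keep prior draws whose simulated data
    have a summary statistic within eps of the observed one."""
    obs = sum(data) / len(data)            # summary statistic: sample mean
    accepted = []
    while len(accepted) < n_accept:
        theta = prior_sample()             # draw a candidate from the prior
        sim = simulate(theta, len(data))   # simulate a data set given theta
        if abs(sum(sim) / len(sim) - obs) < eps:
            accepted.append(theta)
    return accepted

random.seed(1)
data = [random.gauss(2.0, 1.0) for _ in range(100)]  # observed data, true mean 2
post = abc_rejection(
    data,
    prior_sample=lambda: random.uniform(-5.0, 5.0),
    simulate=lambda th, n: [random.gauss(th, 1.0) for _ in range(n)],
    eps=0.1,
    n_accept=200,
)
print(sum(post) / len(post))  # approximate posterior mean, near 2.0
```

ABC-MCMC and sequential ABC methods replace the blind prior draws above with correlated or adaptively reweighted proposals, which is what makes them practical when the prior is diffuse.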

2.
Statistics and Computing - Simulated tempering is a popular method of allowing MCMC algorithms to move between modes of a multimodal target density π. One problem with simulated tempering...

3.
4.
ABSTRACT

Markov chain Monte Carlo (MCMC) methods can be used for statistical inference, but they are time-consuming for dynamic models with time-varying parameters. To address this problem, we apply parallel tempering (PT), a population-based MCMC method, to dynamic generalized linear models (DGLMs), and establish several optimal properties of the proposed method. In PT, two or more samples are drawn at the same time, and the samples can exchange information with each other. We also present simulations of DGLMs and provide two applications of Poisson-type DGLMs in financial research.

5.
Differential equations are used in modeling diverse system behaviors in a wide variety of sciences. Methods for estimating the differential equation parameters traditionally depend on the inclusion of initial system states and numerically solving the equations. This paper presents Smooth Functional Tempering, a new population Markov Chain Monte Carlo approach for posterior estimation of parameters. The proposed method borrows insights from parallel tempering and model based smoothing to define a sequence of approximations to the posterior. The tempered approximations depend on relaxations of the solution to the differential equation model, reducing the need for estimating the initial system states and obtaining a numerical differential equation solution. Rather than tempering via approximations to the posterior that are more heavily rooted in the prior, this new method tempers towards data features. Using our proposed approach, we observed faster convergence and robustness to both initial values and prior distributions that do not reflect the features of the data. Two variations of the method are proposed and their performance is examined through simulation studies and a real application to the chemical reaction dynamics of producing nylon.

6.
7.
Abstract. The Adaptive Multiple Importance Sampling algorithm aims at an optimal recycling of past simulations in an iterated importance sampling (IS) scheme. The difference from earlier adaptive IS implementations such as Population Monte Carlo is that the importance weights of all simulated values, past as well as present, are recomputed at each iteration, following the technique of the deterministic multiple mixture estimator of Owen & Zhou (J. Amer. Statist. Assoc., 95, 2000, 135). Although the convergence properties of the algorithm cannot be fully investigated, we demonstrate through a challenging banana-shaped target distribution and a population genetics example that the improvement brought by this technique is substantial.
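The deterministic multiple mixture reweighting that distinguishes AMIS can be sketched as follows; the Gaussian proposal family, its moment-matching adaptation rule, and the toy target are illustrative assumptions rather than the authors' exact scheme:

```python
import math, random

def gauss_pdf(x, m, s):
    return math.exp(-0.5 * ((x - m) / s) ** 2) / (s * math.sqrt(2 * math.pi))

def amis(target_logpdf, n_iter=5, n_per_iter=200):
    """Sketch of AMIS: at every iteration the weights of ALL draws,
    past and present, are recomputed against the equal-weight mixture
    of every proposal used so far (deterministic multiple mixture)."""
    mu, sigma = 0.0, 5.0      # initial Gaussian proposal (illustrative)
    proposals, draws, w = [], [], []
    for _ in range(n_iter):
        proposals.append((mu, sigma))
        draws += [random.gauss(mu, sigma) for _ in range(n_per_iter)]
        # deterministic mixture weights over all past and present draws
        w = [
            math.exp(target_logpdf(x))
            / (sum(gauss_pdf(x, m, s) for m, s in proposals) / len(proposals))
            for x in draws
        ]
        tot = sum(w)
        # adapt the proposal to the weighted mean and std of all draws
        mu = sum(wi * x for wi, x in zip(w, draws)) / tot
        var = sum(wi * (x - mu) ** 2 for wi, x in zip(w, draws)) / tot
        sigma = max(math.sqrt(var), 1e-3)
    return draws, w

random.seed(2)
target = lambda x: -0.5 * (x - 3.0) ** 2   # unnormalised log of N(3, 1)
draws, w = amis(target)
est = sum(wi * x for wi, x in zip(w, draws)) / sum(w)
print(est)  # self-normalised estimate of the target mean, near 3
```

Because every draw is reweighted under the full mixture, no sample is ever discarded; this is the "optimal recycling" the abstract refers to.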

8.
We introduce Markov Chain Importance Sampling (MCIS), which combines importance sampling (IS) and Markov chain Monte Carlo (MCMC) to estimate characteristics of a non-normalized multi-dimensional distribution. In particular, we introduce importance functions whose variates are regeneratively generated by MCMC; these variates are then used to estimate the quantity of interest through IS. Because MCIS is regenerative, it overcomes the burn-in problem associated with MCMC. It can also speed up the mixing rate in MCMC.

9.
In this paper we present extensions to the original adaptive Parallel Tempering algorithm. Two different approaches are presented. In the first, we introduce state-dependent strategies that use current information to perform the swap step. This encompasses a wide family of potential moves, including the standard and equi-energy-type moves, without any loss of tractability. In the second, we introduce online trimming of the number of temperatures. Numerical experiments demonstrate the effectiveness of the proposed methods.
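For reference, a standard (non-adaptive) parallel tempering sweep with the usual swap acceptance ratio can be sketched as follows; the tempered targets π^β, the temperature ladder, and the bimodal toy target are illustrative:

```python
import math, random

def pt_sweep(states, betas, logpi):
    """One sweep of basic parallel tempering: a random-walk Metropolis
    update targeting pi^beta in each chain, then a standard swap move
    between a random adjacent pair of temperatures."""
    for k, beta in enumerate(betas):
        prop = states[k] + random.gauss(0.0, 1.0)
        if math.log(random.random()) < beta * (logpi(prop) - logpi(states[k])):
            states[k] = prop
    i = random.randrange(len(betas) - 1)      # pick the adjacent pair (i, i+1)
    log_acc = (betas[i] - betas[i + 1]) * (logpi(states[i + 1]) - logpi(states[i]))
    if math.log(random.random()) < log_acc:
        states[i], states[i + 1] = states[i + 1], states[i]
    return states

random.seed(3)
# bimodal toy target: equal mixture of N(-4, 1) and N(4, 1)
logpi = lambda x: math.log(math.exp(-0.5 * (x + 4) ** 2)
                           + math.exp(-0.5 * (x - 4) ** 2))
betas = [1.0, 0.5, 0.25, 0.1]                 # beta = 1 is the cold (target) chain
states = [0.0] * len(betas)
samples = []
for _ in range(20000):
    states = pt_sweep(states, betas, logpi)
    samples.append(states[0])
print(min(samples) < -2 and max(samples) > 2)  # True: the cold chain visits both modes
```

The state-dependent swap strategies proposed in the paper generalise the uniform adjacent-pair swap above.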

10.
Alternative methods of estimating properties of unknown distributions include the bootstrap and the smoothed bootstrap. In the standard bootstrap setting, Johns (1988) introduced an importance resampling procedure that results in a more accurate approximation to the bootstrap estimate of a distribution function or a quantile. With a suitable "exponential tilting" similar to that used by Johns, we derive a smoothed version of importance resampling in the framework of the smoothed bootstrap. Smoothed importance resampling procedures are developed for estimating the distribution functions of the Studentized mean, the Studentized variance, and the correlation coefficient. These procedures are illustrated via simulation results that concentrate on estimating the distribution functions of the Studentized mean and Studentized variance for different sample sizes and various pre-specified smoothing bandwidths for normal data. Additional simulations were conducted for estimating quantiles of the distribution of the Studentized mean under an optimal smoothing bandwidth when the original data were simulated from three different parent populations: lognormal, t(3) and t(10). The results suggest that in cases where it is advantageous to use the smoothed bootstrap rather than the standard bootstrap, the amount of resampling necessary may be substantially reduced by importance resampling, with efficiency gains that depend on the bandwidth used in the kernel density estimation.
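The importance resampling idea that underlies this work, shown here in its standard (unsmoothed) form due to Johns; the smoothed variant would additionally perturb each resampled value with kernel noise. The tilting parameter `lam`, sample, and threshold are illustrative:

```python
import math, random

def tilted_bootstrap_tail(data, t, lam, n_boot=5000):
    """Importance resampling for the bootstrap: resample indices with
    exponentially tilted probabilities p_i proportional to exp(lam * x_i),
    and weight each resample by its likelihood ratio against uniform
    resampling, to estimate the bootstrap tail probability P*(mean > t)."""
    n = len(data)
    raw = [math.exp(lam * x) for x in data]
    tot = sum(raw)
    p = [r / tot for r in raw]
    est = 0.0
    for _ in range(n_boot):
        idx = random.choices(range(n), weights=p, k=n)
        if sum(data[i] for i in idx) / n > t:
            # likelihood ratio: uniform resampling prob over tilted prob
            est += math.exp(sum(math.log(1.0 / (n * p[i])) for i in idx))
    return est / n_boot

random.seed(6)
data = [random.gauss(0.0, 1.0) for _ in range(50)]
xbar = sum(data) / len(data)
t = xbar + 2.0 / math.sqrt(50)   # threshold two nominal standard errors up
p_hat = tilted_bootstrap_tail(data, t, lam=0.3)
print(p_hat)  # small bootstrap tail probability
```

Tilting pushes resamples toward the tail event, so far fewer bootstrap replicates are needed than with uniform resampling for the same accuracy.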

11.
Importance measures are used to identify weak components and/or states in a system based on the component state random variables, which may be inadequate to reflect actual operating situations. By contrast, performance random variables have clear practical meaning and avoid the subjectivity and limitations of state division and definition in many practical situations. In this paper, instead of state random variables, performance stochastic processes are used to model all the components and the entire system, and the integrated importance measure (IIM) is extended to performance random variables. The generalized IIM evaluates the contribution of component performance to the desired level of system performance. A case study of an oil transmission system illustrates the effectiveness of the approach.

12.
Which component is most important for a system's survival? We answer this question by ranking the information relationship between a system and its components. The mutual information (M) measures dependence between the operational states of the system and a component for a mission time, as well as between their life lengths. This measure ranks each component in terms of its expected utility for predicting the system's survival. We explore some relationships between the ordering of importance of components by M and by Zellner's Maximal Data Information (MDIP) criterion. For many systems the bivariate distribution of the component and system lifetimes does not have a density with respect to the two-dimensional Lebesgue measure. For these systems, M is not defined, so we use a modification of a mutual information index to cover such situations. Our results for ordering dependence are general in terms of binary structures, sums of random variables, and order statistics.
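A minimal computation of the mutual information between a system and one of its components, for a two-component series system with independent binary components (an illustrative toy case, not from the paper):

```python
import math

def mutual_info(joint):
    """Mutual information (in nats) from a joint pmf {(x, y): p}."""
    px, py = {}, {}
    for (x, y), p in joint.items():
        px[x] = px.get(x, 0.0) + p
        py[y] = py.get(y, 0.0) + p
    return sum(p * math.log(p / (px[x] * py[y]))
               for (x, y), p in joint.items() if p > 0)

def series_joint(p_this, p_other):
    """Joint pmf of (system works, this component works) for a
    two-component series system with independent components."""
    return {(1, 1): p_this * p_other,
            (0, 1): p_this * (1.0 - p_other),
            (0, 0): 1.0 - p_this,
            (1, 0): 0.0}           # a series system cannot work if this fails

m_strong = mutual_info(series_joint(0.9, 0.6))  # info from the 0.9 component
m_weak = mutual_info(series_joint(0.6, 0.9))    # info from the 0.6 component
print(m_weak > m_strong)  # True: the weaker component predicts survival better
```

In this series system the less reliable component carries more information about system survival, matching the intuition that it is the one most likely to cause failure.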

13.
The solution of the Kolmogorov backward equation is expressed as a functional integral by means of the Feynman–Kac formula. The expectation value is approximated as a mean over trajectories. In order to reduce the variance of the estimate, importance sampling is utilized. From the optimal importance density, a modified drift function is derived which is used to simulate optimal trajectories from an Itô equation. The method is applied to option pricing and the simulation of transition densities and likelihoods for diffusion processes. The results are compared to known exact solutions and results obtained by numerical integration of the path integral using Euler transition kernels. The importance sampling leads to strong variance reduction, even if the unknown solution appearing in the drift is replaced by known reference solutions. In models with low-dimensional state space, the numerical integration method is more efficient, but in higher dimensions it soon becomes infeasible, whereas the Monte Carlo method still works.
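The mechanism, a modified drift plus a likelihood-ratio correction, can be sketched for the special case of European option pricing under geometric Brownian motion. The constant drift tilt and the Girsanov-type weight below are a crude stand-in for the optimal importance density derived in the paper:

```python
import math, random

def mc_price(strike, s0=100.0, r=0.05, sigma=0.2, T=1.0,
             n_paths=20000, n_steps=50, tilt=0.0):
    """Euler Monte Carlo for a European call under GBM, with an optional
    constant drift tilt; the accumulated likelihood ratio corrects the
    expectation back to the original (risk-neutral) measure."""
    dt = T / n_steps
    payoffs = []
    for _ in range(n_paths):
        x = math.log(s0)
        logw = 0.0
        for _ in range(n_steps):
            z = random.gauss(0.0, 1.0)
            # Euler step for the log-price under the tilted drift
            x += (r - 0.5 * sigma ** 2 + tilt) * dt + sigma * math.sqrt(dt) * z
            # accumulate the log likelihood ratio dP/dQ for this step
            u = tilt / sigma
            logw += -u * math.sqrt(dt) * z - 0.5 * u * u * dt
        payoffs.append(math.exp(logw) * max(math.exp(x) - strike, 0.0))
    return math.exp(-r * T) * sum(payoffs) / n_paths

random.seed(4)
plain = mc_price(strike=160.0)               # few paths reach this far strike
tilted = mc_price(strike=160.0, tilt=0.45)   # drift pushed toward the strike
print(plain, tilted)  # the two estimates agree closely
```

For this deep out-of-the-money strike the tilted drift sends most paths into the region where the payoff is nonzero, which is where the variance reduction comes from.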

14.
When computing the market risk of an investment portfolio, efficient importance sampling techniques for handling large-scale, high-dimensional and rare-event problems can improve the speed and efficiency of the computation. Building on a Delta-Gamma approximation of the portfolio loss, the minimum-sampling-variance problem for the sampling parameters is transformed, via an auxiliary distribution transformation function, into a nonlinear generalized least squares problem. Under the assumption of an exponential-family sampling kernel, the problem is further reduced to an iterative linear regression, which simplifies the computation. Simulated examples with delta-hedged and exponentially hedged portfolios verify the effectiveness of the proposed method.
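The rare-event setting can be illustrated with a one-dimensional Delta-Gamma loss and a simple mean-shift importance sampler; the loss coefficients, threshold and shift below are illustrative, and the fixed mean shift is a much simpler device than the optimized sampling parameters described above:

```python
import math, random

def delta_gamma_tail(delta, gamma, t, shift, n=50000):
    """Mean-shift importance sampling for the tail probability P(L > t)
    of a one-dimensional Delta-Gamma loss L = delta*x + 0.5*gamma*x^2
    with x ~ N(0,1); samples are drawn from N(shift, 1) and reweighted."""
    hits = 0.0
    for _ in range(n):
        x = random.gauss(shift, 1.0)
        loss = delta * x + 0.5 * gamma * x * x
        if loss > t:
            # likelihood ratio N(0,1) / N(shift,1)
            hits += math.exp(-shift * x + 0.5 * shift * shift)
    return hits / n

random.seed(7)
# rare loss: the threshold sits far in the tail, the sampler is shifted toward it
p_is = delta_gamma_tail(delta=1.0, gamma=0.5, t=6.0, shift=3.0)
p_plain = delta_gamma_tail(delta=1.0, gamma=0.5, t=6.0, shift=0.0)
print(p_is, p_plain)  # both estimate the same small tail probability
```

The plain estimate relies on a handful of tail hits, while the shifted sampler hits the loss region on a large fraction of draws and is far more stable at the same cost.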

15.
Johns (1988), Davison (1988), and Do and Hall (1991) used importance sampling for calculating bootstrap distributions of one-dimensional statistics. Realizing that their methods cannot be extended easily to multi-dimensional statistics, Fuh and Hu (2004) proposed an exponential tilting formula for multi-dimensional statistics, which is optimal in the sense that the asymptotic variance is minimized for estimating tail probabilities of asymptotically normal statistics. For one-dimensional statistics, Hu and Su (2008) proposed a multi-step variance minimization approach that can be viewed as a generalization of the two-step variance minimization approach of Do and Hall (1991). In this article, we generalize the approach of Hu and Su (2008) to multi-dimensional statistics; the generalization applies to general statistics and does not resort to asymptotics. Empirical results on a real survival data set show that the proposed algorithm provides significant computational efficiency gains.

16.
In this paper, the generalized exponential power (GEP) density is proposed as an importance function for Monte Carlo simulations in the context of estimating posterior moments of a location parameter. The density is divided into five classes according to its tail behaviour, which may be exponential, polynomial or logarithmic. The notion of p-credence is also defined to characterize and order the tails of a large class of symmetric densities by comparing them to the tails of the GEP density. The choice of the GEP density as an importance function yields reliable and effective results when the p-credences of the prior and the likelihood are defined, even when the sources of information conflict. The posterior tails can then be characterized using p-credence, so parameters of the GEP density can be chosen to give an importance function with slightly heavier tails than the posterior. Simulation of observations from the GEP density is also addressed.

17.
We show that, in the context of double-bootstrap confidence intervals, linear interpolation at the second level of the double bootstrap can reduce the simulation error component of coverage error by an order of magnitude. Intervals that are indistinguishable in terms of coverage error with theoretical, infinite simulation, double-bootstrap confidence intervals may be obtained at substantially less computational expense than by using the standard Monte Carlo approximation method. The intervals retain the simplicity of uniform bootstrap sampling and require no special analysis or computational techniques. Interpolation at the first level of the double bootstrap is shown to have a relatively minor effect on the simulation error.

18.
Statistics and Computing - We present an importance sampling algorithm that can produce realisations of Markovian epidemic models that exactly match observations, taken to be the number of a single...

19.
Abstract. The sampling-importance resampling (SIR) algorithm aims at drawing a random sample from a target distribution π. First, a sample is drawn from a proposal distribution q, and then from this a smaller sample is drawn with sample probabilities proportional to the importance ratios π/q. We propose a simple adjustment of the sample probabilities and show that this gives faster convergence. The results indicate that our version also converges better for small sample sizes. The SIR algorithms are compared with the Metropolis–Hastings (MH) algorithm with independent proposals. Although MH converges asymptotically faster, the results indicate that our improved SIR version is better than MH for small sample sizes. We also establish a connection between the SIR algorithms and importance sampling with normalized weights, and show that using the adjusted SIR sample probabilities as importance weights reduces the bias of the importance sampling estimate.
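The basic (unadjusted) SIR algorithm described above can be sketched in a few lines; the toy target and proposal are illustrative:

```python
import math, random

def sir(target_logpdf, proposal_sample, proposal_logpdf, n, m):
    """Basic sampling-importance resampling: draw n values from the
    proposal q, then resample m of them with probabilities proportional
    to the importance ratios pi/q."""
    xs = [proposal_sample() for _ in range(n)]
    logw = [target_logpdf(x) - proposal_logpdf(x) for x in xs]
    mx = max(logw)
    w = [math.exp(lw - mx) for lw in logw]   # stabilised, unnormalised weights
    return random.choices(xs, weights=w, k=m)

random.seed(5)
# toy target N(1, 1), over-dispersed proposal N(0, 3)
post = sir(
    target_logpdf=lambda x: -0.5 * (x - 1.0) ** 2,
    proposal_sample=lambda: random.gauss(0.0, 3.0),
    proposal_logpdf=lambda x: -0.5 * (x / 3.0) ** 2 - math.log(3.0),
    n=20000,
    m=2000,
)
print(sum(post) / len(post))  # sample mean close to the target mean 1.0
```

Normalising constants of π and q cancel in the resampling weights, which is why SIR, like self-normalised importance sampling, only needs both densities up to proportionality.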

20.
Joint reliability importance (JRI) evaluates the interaction of two components in contributing to the reliability of a system. Traditional JRI measures mainly concern the change in system reliability caused by the interactive change in the reliabilities of the two components, and seldom consider the probability distributions and transition rates of the component states or the system performance. This article extends the JRI concept for two components from multi-state systems to multi-state transition systems, focusing on the joint integrated importance measure (JIIM), which takes the transition rates of component states into account. Firstly, the concept and physical meaning of JIIM in binary systems are described. Secondly, the JIIM for the deterioration process (JIIMDP) and the JIIM for the maintenance process (JIIMMP) in multi-state systems are studied. The corresponding characteristics of JIIMDP and JIIMMP for series and parallel systems are also analyzed. Finally, an application to an offshore electrical power generation system demonstrates the proposed JIIM.
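For the binary case mentioned above, the classical JRI of components i and j is the second-order difference of the system reliability with the two components fixed up or down. A brute-force sketch over an illustrative 2-out-of-3 system (the structure function and reliabilities are not from the paper):

```python
from itertools import product

def system_reliability(structure, probs):
    """Reliability of a binary system: sum of structure(states) times
    the probability of each component state vector (independent components)."""
    rel = 0.0
    for states in product([0, 1], repeat=len(probs)):
        p = 1.0
        for s, q in zip(states, probs):
            p *= q if s else 1.0 - q
        rel += structure(states) * p
    return rel

def jri(structure, probs, i, j):
    """Binary JRI: R(i up, j up) - R(i up, j down) - R(i down, j up)
    + R(i down, j down)."""
    def fix(vals):
        q = list(probs)
        for k, v in vals:
            q[k] = v
        return system_reliability(structure, q)
    return (fix([(i, 1.0), (j, 1.0)]) - fix([(i, 1.0), (j, 0.0)])
            - fix([(i, 0.0), (j, 1.0)]) + fix([(i, 0.0), (j, 0.0)]))

# 2-out-of-3 system: works when at least two components work
two_of_three = lambda s: int(sum(s) >= 2)
val = jri(two_of_three, [0.9, 0.8, 0.7], 0, 1)
print(val)  # about -0.4: components 0 and 1 act as substitutes here
```

A negative JRI means one component matters less when the other is working; the JIIM of the paper additionally weights such differences by state probabilities and transition rates.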
