Similar literature
20 similar records found (search time: 31 ms)
1.
Summary. The major implementational problem for reversible jump Markov chain Monte Carlo methods is that there is commonly no natural way to choose jump proposals since there is no Euclidean structure in the parameter space to guide our choice. We consider mechanisms for guiding the choice of proposal. The first group of methods is based on an analysis of acceptance probabilities for jumps. Essentially, these methods involve a Taylor series expansion of the acceptance probability around certain canonical jumps and turn out to have close connections to Langevin algorithms. The second group of methods generalizes the reversible jump algorithm by using the so-called saturated space approach. These allow the chain to retain some degree of memory so that, when proposing to move from a smaller to a larger model, information is borrowed from the last time that the reverse move was performed. The main motivation for this paper is that, in complex problems, the probability that the Markov chain moves between such spaces may be prohibitively small, as the probability mass can be very thinly spread across the space. Therefore, finding reasonable jump proposals becomes extremely important. We illustrate the procedure by using several examples of reversible jump Markov chain Monte Carlo applications including the analysis of autoregressive time series, graphical Gaussian modelling and mixture modelling.

2.
In the literature, different optimality criteria have been considered for model identification. Most of the proposals assume the normal distribution for the response variable and thus they provide optimality criteria for discriminating between regression models. In this paper, a max–min approach is followed to discriminate among competing statistical models (i.e., probability distribution families). More specifically, k different statistical models (plausible for the data) are embedded in a more general model, which includes them as particular cases. The proposed optimal design maximizes the minimum KL-efficiency to discriminate between each rival model and the extended one. An equivalence theorem is proved and an algorithm is derived from it, which is useful to compute max–min KL-efficiency designs. Finally, the algorithm is run on two illustrative examples.

3.
In this paper we describe a sequential importance sampling (SIS) procedure for counting the number of vertex covers in general graphs. The optimal SIS proposal distribution is the uniform distribution over a suitably restricted set, but it is not implementable. We consider two proposal distributions as approximations to the optimal one. Both proposals are based on randomization techniques. The first randomization is the classic probability model of random graphs, and in fact, the resulting SIS algorithm shows polynomial complexity for random graphs. The second randomization introduces a probabilistic relaxation technique that uses dynamic programming. The numerical experiments show that the resulting SIS algorithm enjoys excellent practical performance in comparison with existing methods. In particular, the method is compared with Cachet, an exact model counter, and the state-of-the-art SampleSearch, which is based on belief networks and importance sampling.
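The abstract's two proposal distributions are not reproduced here, but the basic SIS counting idea can be sketched with a simple forced-choice proposal (an illustrative assumption, not the paper's method): vertices are decided in index order, a vertex is forced into the cover whenever an already-excluded neighbour would leave an edge uncovered, and the inverse proposal probability serves as an unbiased weight for the count.

```python
import random
from itertools import product

def sis_count_vertex_covers(n, edges, n_samples=20000, seed=0):
    """Unbiased SIS estimate of the number of vertex covers.

    Vertices are decided in index order; a vertex is forced into
    the cover when an already-excluded neighbour would otherwise
    leave an edge uncovered, and is included with probability 1/2
    when the choice is free. Every sampled assignment is then a
    valid cover, and the inverse proposal probability is an
    unbiased weight for the total count."""
    rng = random.Random(seed)
    nbrs = [[] for _ in range(n)]
    for u, v in edges:
        nbrs[u].append(v)
        nbrs[v].append(u)
    total = 0.0
    for _ in range(n_samples):
        in_cover = [False] * n
        weight = 1.0
        for v in range(n):
            forced = any(u < v and not in_cover[u] for u in nbrs[v])
            if forced:
                in_cover[v] = True           # proposal probability 1
            else:
                in_cover[v] = rng.random() < 0.5
                weight *= 2.0                # free choice had probability 1/2
        total += weight
    return total / n_samples

# Brute-force check on the triangle graph K3, which has 4 vertex covers
triangle = [(0, 1), (1, 2), (0, 2)]
exact = sum(all(s[u] or s[v] for u, v in triangle)
            for s in product([False, True], repeat=3))
est = sis_count_vertex_covers(3, triangle)
print(exact, round(est, 2))
```

On the triangle the estimator's per-sample variance is small, so the estimate settles near the exact count of 4 after a few thousand samples; the real difficulty the paper addresses is keeping the weight variance manageable on large general graphs.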

4.
A model for media exposure probabilities is developed which has the joint probability of exposure proportional to the product of the marginal probabilities. The model is a generalization of Goodhardt & Ehrenberg's 'duplication of viewing law', with the duplication constant computed from a truncated canonical expansion of the joint exposure probability. The proposed model is compared, on the basis of estimation accuracy and computation speed, with a previously proposed accurate and quick 'approximate' log-linear model and the popular Metheringham beta-binomial model. Our model is shown to be more accurate than the approximate log-linear model and four times faster. In addition, it is much more accurate than Metheringham's model.
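As a small numerical illustration of the duplication idea (with made-up exposure probabilities and a hypothetical duplication constant, not the paper's fitted values): the joint exposure probability of two vehicles is taken proportional to the product of their marginals, and the reach of a two-vehicle schedule then follows by inclusion-exclusion.

```python
def pairwise_reach(p1, p2, k):
    """Joint exposure proportional to the product of marginals
    (the duplication-of-viewing law), then the reach of the
    two-vehicle schedule by inclusion-exclusion."""
    p12 = k * p1 * p2           # duplication constant k scales the product
    reach = p1 + p2 - p12       # exposed to at least one vehicle
    return p12, reach

# Hypothetical marginals 0.3 and 0.4, hypothetical duplication constant 1.2
p12, reach = pairwise_reach(0.3, 0.4, 1.2)
print(round(p12, 3), round(reach, 3))
```

With these illustrative numbers the pairwise duplication is 0.144 and the schedule reach 0.556; a duplication constant above 1 encodes positive association between the two audiences.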

5.
A mixture of order statistics is a random variable whose distribution is a finite mixture of the distributions for order statistics. Such mixtures show up in the literature on ranked-set sampling and related sampling schemes as models for imperfect rankings. In this paper, we derive an algorithm for computing the probability that independent mixtures of order statistics come in a particular order. The algorithm is far faster than previous proposals from the literature. As an application, we show that the algorithm can be used to create Kolmogorov–Smirnov-type confidence bands that adjust for the presence of imperfect rankings.
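To make the kind of ordering probability concrete (using plain order statistics rather than the paper's mixtures, and Monte Carlo rather than its exact algorithm): for the minimum X of two independent U(0,1) draws and an independent maximum Y of two such draws, P(X < Y) has the closed form 5/6, which simulation reproduces.

```python
import random

def mc_order_prob(n_samples=200000, seed=1):
    """Monte Carlo estimate of P(X < Y), where X is the minimum of
    two independent U(0,1) draws and Y is the maximum of two other
    independent U(0,1) draws.  Exact value:
    P(X < Y) = integral of F_X(y) f_Y(y) dy
             = integral of (2y - y^2) * 2y dy over [0, 1] = 5/6."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_samples):
        x = min(rng.random(), rng.random())
        y = max(rng.random(), rng.random())
        hits += x < y
    return hits / n_samples

est = mc_order_prob()
print(round(est, 3))
```

An exact recursion such as the one the paper derives replaces this simulation with a deterministic computation, which matters when the ordering probability feeds into confidence-band construction.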

6.
In this article, two new approaches are introduced to design attributes single sampling plans, and the corresponding models are constructed separately. For Approach I, an algorithm is proposed to design sampling plans by setting a goal function to fulfil the two-point conditions on the operating characteristic curve. For Approach II, the plan parameters are solved by a nonlinear optimization model which minimizes the integral of the probability of acceptance over the interval from the producer's risk quality to the consumer's risk quality. Numerical examples and discussions based on the computational results are then given to illustrate the approaches, and tables of the designed plans under various conditions are provided. Moreover, it is proved that a relation exists between the conventional design and the new approaches.

7.
Summary. We propose new Metropolis–Hastings algorithms for sampling from multimodal distributions on R^n. Tjelmeland and Hegstad have obtained direct mode jumping proposals by optimization within Metropolis–Hastings updates and different proposals for 'forward' and 'backward' steps. We generalize their scheme by allowing the probability distribution for forward and backward kernels to depend on the current state. We use the new setting to combine mode jumping proposals and proposals from a prior approximation. As a result, the frequency of proposals from the different proposal kernels is automatically adjusted to their quality. Mode jumping proposals include local optimizations. When combining this with a prior approximation it is tempting to use local optimization results not only for mode jumping proposals but also to improve the prior approximation. We show how this idea can be implemented. The resulting algorithm is adaptive but has a Markov structure. We evaluate the effectiveness of the proposed algorithms in two simulation examples.

8.
Tolerance limits are those limits that contain a certain proportion of the distribution of a characteristic with a given probability. 'They are used to make sure that the production will not be outside of specifications' (Amin & Lee, 1999). Usually, tolerance limits are constructed at the beginning of the monitoring of the process. Because they are calculated only once, these tolerance limits cannot reflect changes in tolerance level over the lifetime of the process. This research proposes an algorithm to construct tolerance limits continuously over time for any given distribution. This algorithm makes use of the exponentially weighted moving average (EWMA) technique. It can be observed that the sample size required by this method is reduced over time.
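A minimal sketch of the general idea (assuming normal data and a fixed tolerance factor k, unlike the paper's distribution-free algorithm): EWMA-smoothed estimates of the process mean and variance are updated as each batch arrives, and the tolerance limits move with them instead of being frozen at start-up.

```python
import random

def ewma_tolerance_limits(batches, lam=0.2, k=2.58):
    """Track tolerance limits over time with EWMA smoothing.

    lam is the EWMA weight given to the newest batch; k is a fixed
    normal tolerance factor (an assumption made for this sketch).
    Returns the (lower, upper) limits after each batch."""
    mean = var = None
    limits = []
    for batch in batches:
        m = sum(batch) / len(batch)
        v = sum((x - m) ** 2 for x in batch) / (len(batch) - 1)
        mean = m if mean is None else lam * m + (1 - lam) * mean
        var = v if var is None else lam * v + (1 - lam) * var
        sd = var ** 0.5
        limits.append((mean - k * sd, mean + k * sd))
    return limits

# Stable process: 30 batches of 50 observations from N(0, 1)
rng = random.Random(2)
batches = [[rng.gauss(0.0, 1.0) for _ in range(50)] for _ in range(30)]
lo, hi = ewma_tolerance_limits(batches)[-1]
print(round(lo, 2), round(hi, 2))
```

For a stable N(0, 1) process the limits settle near ±k; if the process mean drifts, the EWMA updates pull the limits along with it, which is the behaviour a one-time construction cannot provide.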

9.
In this paper, the author presents an efficient method of analyzing an interest-rate model using a new approach called 'data augmentation Bayesian forecasting'. First, a dynamic linear model estimation was constructed with a hierarchically incorporated model. Next, an observational replication was generated based on the one-step forecast distribution derived from the model. A Markov chain Monte Carlo sampling method was conducted on it as a new observation and unknown parameters were estimated. At that stage, the EM algorithm was applied to establish initial values of unknown parameters, while the 'quasi Bayes factor' was used to assess parameter candidates. 'Data augmentation Bayesian forecasting' is a method of evaluating the transition and history of the 'future', 'present' and 'past' of an arbitrary stochastic process, by which an appropriate evaluation is conducted based on the probability measure that has been sequentially modified with additional information. It would be possible to use future prediction results for modifying the model to grasp the present state or re-evaluate the past state. It would also be possible to raise the degree of precision in predicting the future through the modification of the present and the past. Thus, 'data augmentation Bayesian forecasting' is applicable not only in the field of financial data analysis but also in forecasting and controlling stochastic processes.

10.
We consider causal inference in randomized studies for survival data with a cure fraction and all-or-none treatment non-compliance. To describe the causal effects, we consider the complier average causal effect (CACE) and the complier effect on survival probability beyond time t (CESP), where CACE and CESP are defined as the difference in cure rate and in non-cured subjects' survival probability between treatment and control groups within the complier class. These estimands depend on the distributions of survival times in treatment and control groups. Given covariates and latent compliance type, we model these distributions with a transformation promotion time cure model whose parameters are estimated by maximum likelihood. Both the infinite-dimensional parameter in the model and the mixture structure of the problem create computational difficulties, which are overcome by an expectation-maximization (EM) algorithm. We show the estimators are consistent and asymptotically normal. Simulation studies are conducted to assess the finite-sample performance of the proposed approach. We also illustrate our method by analyzing real data from the Health Insurance Plan of Greater New York.

11.
This paper studies a class of risk models with two types of insurance, in which the claim-arrival counting processes of both insurance types and the premium-arrival counting process of one of the types are all homogeneous Poisson processes. An upper bound for the ultimate ruin probability is obtained, along with a Feller representation of the survival probability, and an explicit expression for the upper bound on the ruin probability is given when the premium income per policy is an exponentially distributed random variable.

12.
We examine three media exposure distribution (e.d.) simulation methods. The first is based on the maximum likelihood estimate of an individual's exposure, the second on 'personal probability' (Greene 1970) and the third on a dependent Bernoulli trials model (Klotz 1973). The last method uses population exposure probabilities rather than individual exposure probabilities, thereby markedly reducing computation time. Magazine exposure data are used to compare the accuracy and computation times of the simulation methods with a log-linear e.d. model (Danaher 1988b) and the popular Metheringham (1964) model based on the beta-binomial distribution (BBD). The results show that the simulation methods are not as accurate as the log-linear model but are more accurate than Metheringham's model. However, all the simulation methods take less computation time than the log-linear model for schedules with more than six magazines, making them viable competitors for large schedule sizes.

13.
Qi Zheng, Statistics, 2013, 47(5): 529-540
In this paper, we study a limiting distribution induced by Bartlett's formulation of the Luria–Delbrück mutation model. We establish the validity of the probability generating function and devise an algorithm for computing the probability mass function. Maximum-likelihood estimation and asymptotic behaviour of the distribution are considered.

14.
Interval-censored data arise when a sequence of random examinations determines only that the failure time of interest occurs within an interval. In some medical studies, there exist long-term survivors who can be considered permanently cured. We consider a mixture model in which the uncured group follows a linear transformation model and the cured group follows a logistic regression model. For the inference of parameters, an EM algorithm is developed for a full likelihood approach. To investigate finite-sample properties of the proposed method, simulation studies are conducted. The approach is applied to the National Aeronautics and Space Administration's hypobaric decompression sickness data.

15.
In this article, maximum likelihood estimates of an exchangeable multinomial distribution are derived using a parametric form to model the parameters as functions of covariates. The nonlinearity of the exchangeable multinomial distribution and the parametric model make direct application of Newton–Raphson and Fisher's scoring algorithms computationally infeasible. Instead, parameter estimates are obtained as solutions to an iterative weighted least-squares algorithm. A completely monotonic parametric form is proposed for defining the marginal probabilities that results in a valid probability model.
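The exchangeable multinomial model itself is beyond a short sketch, but the iterative weighted least-squares idea can be illustrated with the standard IWLS (Fisher scoring) update for logistic regression; the simulated data and the logistic model here are illustrative assumptions, not the article's setting.

```python
import numpy as np

def irls_logistic(X, y, n_iter=25):
    """Iteratively reweighted least squares for logistic regression.

    Each iteration solves a weighted least-squares problem for the
    working response z, which is exactly the Fisher scoring update."""
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        eta = X @ beta
        mu = 1.0 / (1.0 + np.exp(-eta))
        w = np.maximum(mu * (1.0 - mu), 1e-10)   # IWLS weights
        z = eta + (y - mu) / w                   # working response
        XtW = X.T * w
        beta = np.linalg.solve(XtW @ X, XtW @ z)
    return beta

# Simulated data with true intercept 0.5 and slope 1.5
rng = np.random.default_rng(3)
x = rng.normal(size=400)
X = np.column_stack([np.ones_like(x), x])
p = 1.0 / (1.0 + np.exp(-(0.5 + 1.5 * x)))
y = rng.binomial(1, p).astype(float)
beta = irls_logistic(X, y)
print(np.round(beta, 2))
```

The article's point is that when even these Newton-type iterations are infeasible in the original parametrization, the weighted least-squares reformulation is what remains tractable.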

16.
In this paper, we consider a Bayesian mixture model that allows us to integrate out the weights of the mixture in order to obtain a procedure in which the number of clusters is an unknown quantity. To determine clusters and estimate parameters of interest, we develop an MCMC algorithm termed the sequential data-driven allocation sampler. In this algorithm, a single observation has a non-null probability of creating a new cluster, and a set of observations may create a new cluster through split-merge movements. The split-merge movements are developed using a sequential allocation procedure based on allocation probabilities that are calculated according to the Kullback–Leibler divergence between the posterior distribution using the observations previously allocated and the posterior distribution including a 'new' observation. We verified the performance of the proposed algorithm on simulated data and then illustrate its use on three publicly available real data sets.

17.
A reversible jump algorithm for Bayesian model determination among generalised linear models, under relatively diffuse prior distributions for the model parameters, is proposed. Orthogonal projections of the current linear predictor are used so that knowledge from the current model parameters is used to make effective proposals. This idea is generalised to moves of a reversible jump algorithm for model determination among generalised linear mixed models. Therefore, this algorithm exploits the full flexibility available in the reversible jump method. The algorithm is demonstrated via two examples and compared to existing methods.

18.
In this paper, we introduce a multilevel model specification with time-series components for the analysis of prices of artworks sold at auctions. Since auction data do not constitute a panel or a time series but are composed of repeated cross-sections, they require a specification with items at the first level nested in time points. Our approach combines the flexibility of mixed effect models with the predictive performance of time series, as it allows the time dynamics to be modelled directly. Model estimation is obtained by means of maximum likelihood through the expectation-maximization algorithm. The model is motivated by the analysis of the first database of ethnic artworks sold in the most important auctions worldwide. The results show that the proposed specification improves considerably over classical proposals both in terms of fit and prediction.

19.
To address multi-criteria group decision-making problems in which the uncertainty arising from randomness and fuzziness in hesitant environments biases actual decisions, this paper proposes an improved MULTIMOORA decision method based on hesitant probabilistic fuzzy linguistic information. First, the hesitant probabilistic fuzzy linguistic decision matrix of each decision maker is constructed, and the decision makers' weights are obtained via the entropy method combined with deviation maximization and distance minimization. The individual decision matrices are then aggregated into a comprehensive matrix using the hesitant probabilistic fuzzy weighted averaging operator, and the final ranking is obtained through the improved MULTIMOORA method. Finally, a case study on the selection of logistics-park partners verifies the feasibility and effectiveness of the algorithm, with comparative analyses against TOPSIS, VIKOR and HFL-MULTIMOORA.
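One step of such procedures, the entropy-based determination of weights, can be sketched on an ordinary numeric decision matrix (a made-up example; the hesitant probabilistic fuzzy linguistic machinery of the paper is omitted): criteria whose scores vary more across alternatives carry lower entropy and therefore receive larger weights.

```python
import math

def entropy_weights(matrix):
    """Entropy weight method on an alternatives-by-criteria score
    matrix: a criterion whose scores are spread unevenly across
    alternatives (low entropy) gets a large weight; a criterion
    with identical scores everywhere gets weight zero."""
    n = len(matrix)                       # number of alternatives
    m = len(matrix[0])                    # number of criteria
    weights = []
    for j in range(m):
        col = [row[j] for row in matrix]
        s = sum(col)
        p = [c / s for c in col]
        e = -sum(q * math.log(q) for q in p if q > 0) / math.log(n)
        weights.append(1.0 - e)           # degree of divergence
    total = sum(weights)
    return [w_j / total for w_j in weights]

# Hypothetical 3-alternative x 3-criterion score matrix; criterion 2
# is identical across alternatives and so should get weight ~0
w = entropy_weights([[0.9, 0.5, 0.4],
                     [0.1, 0.5, 0.5],
                     [0.2, 0.5, 0.6]])
print([round(x, 3) for x in w])
```

In the paper this weighting is one ingredient among several (deviation maximization and distance minimization are combined with it), but the entropy component alone already shows why uninformative criteria are down-weighted.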

20.
Finite mixtures of multivariate skew t (MST) distributions have proven to be useful in modelling heterogeneous data with asymmetric and heavy-tail behaviour. Recently, they have been exploited as an effective tool for modelling flow cytometric data. A number of algorithms for the computation of the maximum likelihood (ML) estimates for the model parameters of mixtures of MST distributions have been put forward in recent years. These implementations use various characterizations of the MST distribution, which are similar but not identical. While exact implementation of the expectation-maximization (EM) algorithm can be achieved for 'restricted' characterizations of the component skew t-distributions, Monte Carlo (MC) methods have been used to fit the 'unrestricted' models. In this paper, we review several recent fitting algorithms for finite mixtures of multivariate skew t-distributions, at the same time clarifying some of the connections between the various existing proposals. In particular, recent results have shown that the EM algorithm can be implemented exactly for faster computation of ML estimates for mixtures with unrestricted MST components. The gain in computational time is achieved by noting that the semi-infinite integrals on the E-step of the EM algorithm can be put in the form of moments of the truncated multivariate non-central t-distribution, similar to the restricted case, which subsequently can be expressed in terms of the non-truncated form of the central t-distribution function, for which fast algorithms are available. We present comparisons to illustrate the relative performance of the restricted and unrestricted models, and demonstrate the usefulness of the recently proposed methodology for the unrestricted MST mixture through applications to three real datasets.
