Similar Documents
20 similar documents retrieved.
1.
This paper discusses simulation from an absolutely continuous distribution on the positive real line when the Laplace transform of the distribution is known but its density and distribution functions may not be available. We advocate simulation by the inversion method, using a modified Newton-Raphson iteration with values of the distribution and density functions obtained by numerical transform inversion. We show that this algorithm performs well in a series of increasingly complex examples. Caution is needed in some situations when the numerical Laplace transform inversion becomes unreliable; in particular, the algorithm should not be used for distributions with finite range. Otherwise, except for rather pathological distributions, the approach offers a rapid way of generating random samples with minimal user effort. We contrast our approach with an alternative algorithm due to Devroye (Comput. Math. Appl. 7, 547–552, 1981).
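
The core idea can be illustrated with a short sketch: solve F(x) = U for a uniform draw U, using a Newton-Raphson step safeguarded by bisection. This is a hypothetical minimal implementation, not the authors' code; the cdf and pdf callables stand in for values that the paper obtains by numerical inversion of the known Laplace transform, and the closed-form exponential example is included only so the snippet is self-contained.

    import numpy as np

    def sample_by_inversion(cdf, pdf, u=None, x0=1.0, lo=1e-12, hi=1e6,
                            tol=1e-10, max_iter=100, rng=None):
        # Draw one sample from a distribution on (0, inf) by solving cdf(x) = u.
        # A Newton-Raphson step is taken when it stays inside the current
        # bracket [lo, hi]; otherwise the update falls back to bisection.
        rng = np.random.default_rng() if rng is None else rng
        u = rng.uniform() if u is None else u
        x = x0
        for _ in range(max_iter):
            fx = cdf(x) - u
            if fx > 0:
                hi = min(hi, x)
            else:
                lo = max(lo, x)
            dens = pdf(x)
            x_new = x - fx / dens if dens > 0 else 0.5 * (lo + hi)
            if not (lo < x_new < hi):
                x_new = 0.5 * (lo + hi)
            if abs(x_new - x) < tol * (1.0 + abs(x)):
                return x_new
            x = x_new
        return x

    # Check against a case with a known transform: Exponential(1), whose
    # Laplace transform is 1/(1+s).  Closed-form cdf/pdf are used here only
    # to make the example runnable without a numerical inversion routine.
    from math import exp
    draws = [sample_by_inversion(lambda x: 1.0 - exp(-x), lambda x: exp(-x))
             for _ in range(5)]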

2.
The need to simulate from a univariate density arises in several settings, particularly in Bayesian analysis. An especially efficient algorithm which can be used to sample from a univariate density, f_X, is the adaptive accept–reject algorithm. To implement the adaptive accept–reject algorithm, the user has to envelop T ∘ f_X, where T is some transformation such that the density g(x) ∝ T^{-1}(α + βx) is easy to sample from. Successfully enveloping T ∘ f_X, however, requires that the user identify the number and location of T ∘ f_X's inflection points. This is not always a trivial task. In this paper, we propose an adaptive accept–reject algorithm which relieves the user of precisely identifying the location of T ∘ f_X's inflection points. This new algorithm is shown to be efficient and can be used to sample from any density whose support is bounded and whose log is three times differentiable.
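
For context, the classical building block behind such envelopes, with T = log and a single tangent line, looks roughly as follows. This is only a sketch under the assumption of a log-concave target on [0, ∞); it is not the paper's algorithm, which removes the need to track inflection points, and the half-normal target is an illustrative choice.

    import numpy as np

    def tangent_envelope_sampler(logf, dlogf, x0, n, rng=None):
        # Accept-reject with T = log: the tangent to log f at x0 lies above
        # log f for a log-concave target, so exp(tangent) is a valid
        # exponential envelope on [0, inf).
        rng = np.random.default_rng() if rng is None else rng
        a, b = logf(x0), dlogf(x0)          # tangent: log f(x) <= a + b*(x - x0)
        assert b < 0, "tangent slope must be negative for a proper envelope on [0, inf)"
        out = []
        while len(out) < n:
            x = rng.exponential(scale=-1.0 / b)   # proposal density proportional to exp(b*x)
            log_env = a + b * (x - x0)
            if np.log(rng.uniform()) < logf(x) - log_env:
                out.append(x)
        return np.array(out)

    # Half-normal target on [0, inf): log f(x) = -x**2/2 up to an additive constant.
    draws = tangent_envelope_sampler(lambda x: -0.5 * x**2, lambda x: -x, x0=1.0, n=1000)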

3.
Log-normal and log-logistic distributions are often used to analyze lifetime data. For certain ranges of the parameters, the shapes of the probability density functions or the hazard functions can be very similar, and it can be very difficult to discriminate between the two distributions. In this article, we consider a procedure for discriminating between the two distribution functions, using the ratio of the maximized likelihoods. The asymptotic properties of the proposed criterion are investigated, and it is observed that the asymptotic distributions are independent of the unknown parameters. The asymptotic distributions are used to determine the minimum sample size needed to discriminate between these two distribution functions for a user-specified probability of correct selection. We perform some simulation experiments to see how the asymptotic results work for small sample sizes. For illustrative purposes, two data sets are analyzed.
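
The discrimination statistic itself is straightforward to compute. Below is a minimal sketch using scipy's built-in log-normal and log-logistic (fisk) families; the decision rule and the sample-size calculations based on the asymptotic distributions are not reproduced, and the simulated data set is illustrative only.

    import numpy as np
    from scipy import stats

    def rml_lognormal_vs_loglogistic(x):
        # Ratio of maximized likelihoods: fit both families by maximum
        # likelihood (location fixed at 0 for lifetime data) and return the
        # log likelihood ratio.  Positive values favour the log-normal,
        # negative values the log-logistic.
        ln_shape, _, ln_scale = stats.lognorm.fit(x, floc=0)
        ll_shape, _, ll_scale = stats.fisk.fit(x, floc=0)   # fisk = log-logistic
        loglik_ln = stats.lognorm.logpdf(x, ln_shape, 0, ln_scale).sum()
        loglik_ll = stats.fisk.logpdf(x, ll_shape, 0, ll_scale).sum()
        return loglik_ln - loglik_ll

    rng = np.random.default_rng(0)
    x = rng.lognormal(mean=0.0, sigma=0.5, size=200)
    print(rml_lognormal_vs_loglogistic(x))   # expected positive for log-normal data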

4.
Rejection sampling is a well-known method to generate random samples from arbitrary target probability distributions. It demands the design of a suitable proposal probability density function (pdf) from which candidate samples can be drawn. These samples are either accepted or rejected depending on a test involving the ratio of the target and proposal densities. The adaptive rejection sampling method is an efficient algorithm to sample from a log-concave target density, that attains high acceptance rates by improving the proposal density whenever a sample is rejected. In this paper we introduce a generalized adaptive rejection sampling procedure that can be applied with a broad class of target probability distributions, possibly non-log-concave and exhibiting multiple modes. The proposed technique yields a sequence of proposal densities that converge toward the target pdf, thus achieving very high acceptance rates. We provide a simple numerical example to illustrate the basic use of the proposed technique, together with a more elaborate positioning application using real data.  相似文献   
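
The accept/reject test the abstract refers to is the standard one sketched below, here in its plain, non-adaptive form with a fixed normal proposal for a bimodal toy target. The target, proposal, and bound M are illustrative choices, and none of the paper's adaptive refinement of the proposal is included.

    import numpy as np
    from scipy import stats

    def rejection_sample(target_pdf, proposal_rvs, proposal_pdf, M, n, rng=None):
        # Draw x from the proposal and accept it with probability
        # target_pdf(x) / (M * proposal_pdf(x)), where M bounds the ratio
        # target/proposal over the whole support.
        rng = np.random.default_rng() if rng is None else rng
        accepted = []
        while len(accepted) < n:
            x = proposal_rvs(rng)
            if rng.uniform() * M * proposal_pdf(x) < target_pdf(x):
                accepted.append(x)
        return np.array(accepted)

    # Bimodal toy target (two-component normal mixture) with a wide normal proposal.
    target = lambda x: 0.5 * stats.norm.pdf(x, -2, 1) + 0.5 * stats.norm.pdf(x, 2, 1)
    proposal_pdf = lambda x: stats.norm.pdf(x, 0, 3)
    proposal_rvs = lambda rng: rng.normal(0, 3)
    draws = rejection_sample(target, proposal_rvs, proposal_pdf, M=2.5, n=1000)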

5.
In this paper, we make use of an algorithm of Huffer & Lin (2001) in order to develop exact prediction intervals for failure times from one-parameter and two-parameter exponential distributions based on doubly Type-II censored samples. We show that this method yields the same results as those of Lawless (1971, 1977) and Likeš (1974) in the case when the available sample is Type-II right censored. We present a computational algorithm for the determination of the exact percentage points of the pivotal quantities used in the construction of these prediction intervals. We also present some tables of these percentage points for the prediction of the ℓth order statistic in a sample of size n for both one- and two-parameter exponential distributions, assuming that the available sample is doubly Type-II censored. Finally, we present two examples to illustrate the methods of inference developed here.

6.
Appropriately designing the proposal kernel of particle filters is an issue of significant importance, since a bad choice may lead to deterioration of the particle sample and, consequently, waste of computational power. In this paper we introduce a novel algorithm that adaptively approximates the so-called optimal proposal kernel by a mixture of integrated curved exponential distributions with logistic weights. This family of distributions, referred to as mixtures of experts, is broad enough to be used in the presence of multi-modality or strongly skewed distributions. The mixtures are fitted, via online EM methods, to the optimal kernel through minimisation of the Kullback-Leibler divergence between the auxiliary target and instrumental distributions of the particle filter. At each iteration of the particle filter, the algorithm is required to solve only a single optimisation problem for the whole particle sample, yielding an algorithm with only linear complexity. In addition, we illustrate in a simulation study how the method can be successfully applied to optimal filtering in nonlinear state-space models.
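
For readers unfamiliar with the setting, the baseline that adaptive proposals improve upon is the bootstrap particle filter, which uses the state transition itself as the proposal. The sketch below shows only that baseline on an arbitrary toy nonlinear state-space model; the paper's mixture-of-experts adaptation of the proposal is not implemented here.

    import numpy as np

    def bootstrap_particle_filter(y, n_particles, f_sample, g_logpdf, x0_sample, rng=None):
        # Baseline filter: propagate particles with the transition kernel,
        # weight them by the observation density, then resample.
        rng = np.random.default_rng() if rng is None else rng
        x = x0_sample(n_particles, rng)
        filt_means = []
        for y_t in y:
            x = f_sample(x, rng)
            logw = g_logpdf(y_t, x)
            w = np.exp(logw - logw.max())
            w /= w.sum()
            filt_means.append(np.sum(w * x))
            x = x[rng.choice(n_particles, size=n_particles, p=w)]   # multinomial resampling
        return np.array(filt_means)

    # Toy nonlinear model: x_t = 0.9*x_{t-1} + N(0,1),  y_t = x_t**2/20 + N(0, 0.5**2).
    rng = np.random.default_rng(1)
    T, x_prev = 50, 0.0
    y = np.empty(T)
    for t in range(T):
        x_prev = 0.9 * x_prev + rng.normal(0.0, 1.0)
        y[t] = x_prev**2 / 20.0 + rng.normal(0.0, 0.5)
    means = bootstrap_particle_filter(
        y, 500,
        f_sample=lambda x, rng: 0.9 * x + rng.normal(0.0, 1.0, size=x.shape),
        g_logpdf=lambda y_t, x: -0.5 * ((y_t - x**2 / 20.0) / 0.5) ** 2,
        x0_sample=lambda n, rng: rng.normal(0.0, 1.0, size=n),
    )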

7.
Latent class models (LCMs) are used increasingly for addressing a broad variety of problems, including sparse modeling of multivariate and longitudinal data, model-based clustering, and flexible inferences on predictor effects. Typical frequentist LCMs require estimation of a single finite number of classes, which does not increase with the sample size, and have a well-known sensitivity to parametric assumptions on the distributions within a class. Bayesian nonparametric methods have been developed to allow an infinite number of classes in the general population, with the number represented in a sample increasing with sample size. In this article, we propose a new nonparametric Bayes model that allows predictors to flexibly impact the allocation to latent classes, while limiting sensitivity to parametric assumptions by allowing class-specific distributions to be unknown subject to a stochastic ordering constraint. An efficient MCMC algorithm is developed for posterior computation. The methods are validated using simulation studies and applied to the problem of ranking medical procedures in terms of the distribution of patient morbidity.

8.
Grouped data are commonly encountered in applications. All data from a continuous population are grouped due to rounding of the individual observations. In this paper, the Bernstein polynomial model is proposed as an approximate model for estimating a univariate density function based on grouped data. The coefficients of the Bernstein polynomial, as the mixture proportions of beta distributions, can be estimated using an EM algorithm. The optimal degree of the Bernstein polynomial can be determined using a change-point estimation method. The rate of convergence of the proposed density estimate to the true density is proved to be almost parametric by an acceptance–rejection argument of the kind used for generating random numbers. The proposed method is compared with some existing methods in a simulation study and is applied to the chicken embryo data.
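
A minimal sketch of the EM step for the mixture-of-betas representation is given below, under the assumption of data on [0, 1] grouped into bins with known edges. The change-point choice of the polynomial degree is not included, and the data set is simulated rather than the chicken embryo data.

    import numpy as np
    from scipy import stats

    def bernstein_density_grouped(edges, counts, degree, n_iter=200):
        # EM for a Bernstein-polynomial density on [0, 1] from grouped data.
        # The estimate is a mixture of Beta(k, degree - k + 1) densities,
        # k = 1, ..., degree; only the mixture weights are estimated.
        edges = np.asarray(edges, dtype=float)
        counts = np.asarray(counts, dtype=float)
        ks = np.arange(1, degree + 1)
        # P(bin b | component k): beta cdf differences across the bin edges.
        comp_bin_prob = np.array([stats.beta.cdf(edges[1:], k, degree - k + 1)
                                  - stats.beta.cdf(edges[:-1], k, degree - k + 1)
                                  for k in ks])
        w = np.full(degree, 1.0 / degree)
        for _ in range(n_iter):
            joint = w[:, None] * comp_bin_prob
            resp = joint / joint.sum(axis=0, keepdims=True)   # E-step
            w = (resp * counts).sum(axis=1)                   # M-step
            w /= w.sum()
        density = lambda x: sum(w[k - 1] * stats.beta.pdf(x, k, degree - k + 1) for k in ks)
        return w, density

    # Grouped sample from a Beta(2, 5) population, ten equal-width bins on [0, 1].
    rng = np.random.default_rng(2)
    edges = np.linspace(0.0, 1.0, 11)
    counts, _ = np.histogram(rng.beta(2, 5, size=500), bins=edges)
    w, density = bernstein_density_grouped(edges, counts, degree=8)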

9.
Sample-entropy-based tests, methods of sieves, and Grenander-type estimation procedures are known to be very efficient tools for assessing normality of underlying data distributions in one-dimensional nonparametric settings. Recently, it has been shown that the density-based empirical likelihood (EL) concept extends and standardizes these methods, presenting a powerful approach for approximating optimal parametric likelihood ratio test statistics in a distribution-free manner. In this paper, we discuss difficulties related to constructing density-based EL ratio techniques for testing bivariate normality and propose a solution to this problem. Toward this end, a novel bivariate sample entropy expression is derived and shown to satisfy the known concept underlying bivariate histogram density estimation. Monte Carlo results show that the new density-based EL ratio tests for bivariate normality behave very well for finite sample sizes. To exemplify the applicability of the proposed approach, we present a real data example.

10.
This article compares four methods used to approximate value at risk (VaR) from the first four moments of a probability distribution: the Cornish–Fisher, Edgeworth, and Gram–Charlier expansions, and Johnson distributions. Increasing rearrangements are applied to the first three methods. Simulation results suggest that for large-sample situations, Johnson distributions yield the most accurate VaR approximation, whereas for small-sample situations with small tail probabilities they yield the worst approximation. A particularly relevant case arises in banking applications, when calculating the size of operational risk to cover certain loss types; for this case, the rearranged Gram–Charlier method is recommended.
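
As a reference point, the Cornish–Fisher method (one of the four compared) turns the first four moments into an approximate quantile as sketched below. The moment values in the example are illustrative and the increasing-rearrangement step is not applied.

    from scipy import stats

    def cornish_fisher_var(mu, sigma, skew, excess_kurt, alpha=0.01):
        # Fourth-order Cornish-Fisher adjustment of the standard normal
        # quantile, scaled back to the original mean and standard deviation.
        z = stats.norm.ppf(alpha)
        w = (z
             + (z**2 - 1.0) * skew / 6.0
             + (z**3 - 3.0 * z) * excess_kurt / 24.0
             - (2.0 * z**3 - 5.0 * z) * skew**2 / 36.0)
        return mu + sigma * w

    # 1% lower-tail quantile for a left-skewed, heavy-tailed return distribution.
    print(cornish_fisher_var(mu=0.0, sigma=1.0, skew=-0.8, excess_kurt=3.0, alpha=0.01))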

11.
The two-parameter generalized exponential distribution was recently introduced by Gupta and Kundu (Austral. New Zealand J. Statist. 41 (1999) 173). It has been observed that the generalized exponential distribution can be used quite effectively to analyze skewed data sets as an alternative to the more popular log-normal distribution. In this paper, we use the ratio of the maximized likelihoods in choosing between the log-normal and generalized exponential distributions. We obtain the asymptotic distributions of the logarithm of the ratio of the maximized likelihoods and use them to determine the sample size required to discriminate between the two distributions for a user-specified probability of correct selection and tolerance limit.
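
The likelihood-ratio statistic can be computed along the following lines; since the generalized exponential family is not built into scipy, its log-likelihood is written out and maximized numerically. This is a sketch of the statistic only, not of the paper's sample-size determination, and the simulated data are illustrative.

    import numpy as np
    from scipy import stats, optimize

    def ge_negloglik(params, x):
        # Two-parameter generalized exponential density:
        # f(x) = a * lam * (1 - exp(-lam*x))**(a - 1) * exp(-lam*x),  x > 0.
        log_a, log_lam = params                      # optimize on the log scale
        a, lam = np.exp(log_a), np.exp(log_lam)
        return -np.sum(np.log(a) + np.log(lam)
                       + (a - 1.0) * np.log1p(-np.exp(-lam * x)) - lam * x)

    def rml_lognormal_vs_ge(x):
        # Log of the ratio of maximized likelihoods: positive values favour
        # the log-normal, negative values the generalized exponential.
        s, _, scale = stats.lognorm.fit(x, floc=0)
        loglik_ln = stats.lognorm.logpdf(x, s, 0, scale).sum()
        res = optimize.minimize(ge_negloglik, x0=np.zeros(2), args=(x,), method="Nelder-Mead")
        return loglik_ln - (-res.fun)

    rng = np.random.default_rng(3)
    print(rml_lognormal_vs_ge(rng.lognormal(0.0, 0.7, size=300)))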

12.
In this article, to reduce the computational load of Bayesian variable selection, we use a variant of reversible jump Markov chain Monte Carlo methods and the Holmes and Held (HH) algorithm to sample model index variables in logistic mixed models involving a large number of explanatory variables. Furthermore, we propose a simple proposal distribution for the model index variables, and use a simulation study and a real example to compare the performance of the HH algorithm under our proposed and existing proposal distributions. The results show that the HH algorithm with our proposed proposal distribution is a computationally efficient and reliable selection method.

13.
The method of tempered transitions was proposed by Neal (Stat. Comput. 6:353–366, 1996) for tackling the difficulties that arise when using Markov chain Monte Carlo to sample from multimodal distributions. In common with methods such as simulated tempering and Metropolis-coupled MCMC, the key idea is to utilise a series of successively easier-to-sample distributions to improve movement around the state space. Tempered transitions does this by incorporating moves through these less modal distributions into the MCMC proposals. Unfortunately, the improved movement between modes comes at a high computational cost, with a low acceptance rate of expensive proposals. We consider how the algorithm may be tuned to increase the acceptance rate for a given number of temperatures. We find that the commonly assumed geometric spacing of temperatures is reasonable in many but not all applications.

14.
Within the context of mixture modeling, the normal distribution is typically used as the component distribution. However, if a cluster is skewed or heavy-tailed, then the normal distribution is inefficient and many normal components may be needed to model a single cluster. In this paper, we present an attempt to solve this problem. We define a cluster, in the absence of further information, to be a group of data which can be modeled by a unimodal density function. Hence, our intention is to replace the normal with a family of univariate distribution functions whose only constraint is unimodality. With this aim, we devise a new family of nonparametric unimodal distributions, which has large support over the space of univariate unimodal distributions. The difficult aspect of the Bayesian model is to construct a suitable MCMC algorithm to sample from the correct posterior distribution. The key will be the introduction of strategic latent variables and the use of the product space view of reversible jump methodology.

15.
Typically, in the brief discussion of Bayesian inferential methods presented at the beginning of calculus-based undergraduate or graduate mathematical statistics courses, little attention is paid to the process of choosing the parameter value(s) for the prior distribution, and even less attention is paid to the impact of these choices on the predictive distribution of the data. Reasons for this include that the posterior can be found while ignoring the predictive distribution, thereby streamlining its derivation, and that computer software can be used to find the posterior distribution. In this paper, the binomial, negative binomial, and Poisson distributions, along with their conjugate beta and gamma priors, are utilized to obtain the resulting predictive distributions. It is then demonstrated that specific choices of the parameters of the priors can lead to predictive distributions with properties that might be surprising to a non-expert user of Bayesian methods.
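
For the conjugate pairs mentioned, the prior predictive distributions have closed forms (beta-binomial and negative binomial), as sketched below. The particular prior parameter values are illustrative and are not the choices examined in the paper.

    from scipy import stats

    def binomial_predictive(n_future, a, b):
        # Prior predictive of a future Binomial(n_future, p) count when
        # p ~ Beta(a, b): a Beta-Binomial(n_future, a, b) distribution.
        return stats.betabinom(n_future, a, b)

    def poisson_predictive(a, b):
        # Prior predictive of a future Poisson(lam) count when
        # lam ~ Gamma(shape=a, rate=b): Negative Binomial with r = a, p = b/(b+1).
        return stats.nbinom(a, b / (b + 1.0))

    # A seemingly vague Beta(0.5, 0.5) prior piles predictive mass on the
    # extreme counts 0 and n, which can surprise a non-expert user.
    pred = binomial_predictive(10, 0.5, 0.5)
    print([round(pred.pmf(k), 3) for k in range(11)])
    print(poisson_predictive(a=2.0, b=0.5).mean())   # predictive mean = a/b = 4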

16.
Log-location-scale distributions are widely used parametric models that have fundamental importance in both parametric and semiparametric frameworks. The likelihood equations based on a Type II censored sample from a location-scale distribution do not provide explicit solutions for the parameters. Statistical software is widely available and is based on iterative methods (such as the Newton–Raphson algorithm or the EM algorithm), which require starting values near the global maximum, and there are also many situations that the specialized software does not handle. This paper provides a method for determining explicit estimators of the location and scale parameters by approximating the likelihood function; the method does not require any starting values. The performance of the proposed approximate method for the Weibull and log-logistic distributions is compared with that of iterative methods through simulation studies for a wide range of sample sizes and Type II censoring schemes. We also examine the probability coverages of pivotal quantities based on asymptotic normality. In addition, two examples are given.

17.
This paper develops a novel and efficient algorithm for Bayesian inference in inverse gamma stochastic volatility models. It is shown that by conditioning on auxiliary variables, it is possible to sample all the volatilities jointly, directly from their posterior conditional density, using simple distributions that are easy to draw from. Furthermore, this paper develops a generalized inverse gamma process with more flexible tails in the distribution of volatilities, which still allows for simple and efficient calculations. Using several macroeconomic and financial datasets, it is shown that the inverse gamma and generalized inverse gamma processes can greatly outperform the commonly used log-normal volatility processes with Student's t errors or jumps in the mean equation.

18.
Edgeworth expansions as well as saddle-point methods are used to approximate the distributions of some spacing statistics for small to moderate sample sizes. By comparison with the exact values when available, it is shown that a particular form of Edgeworth expansion produces extremely good results even for fairly small sample sizes. However, this expansion suffers from negative tail probabilities, and an accurate approximation without this disadvantage is shown to be the one based on the saddle-point method. Finally, quantiles of some spacing statistics whose exact distributions are not known are tabulated, making them available in a variety of testing contexts.

19.
Methods for estimating a nonparametric regression function are quite common in statistical applications. In this paper, a new Bayesian wavelet thresholding estimator is considered. New mixture prior distributions for estimating the nonparametric regression function after applying a wavelet transformation are investigated, and a reversible jump algorithm is used to obtain the appropriate prior distributions and the thresholding value. The performance of the proposed estimator is assessed on simulated data from well-known test functions by comparing its convergence rate with that of an alternative estimator, evaluating the average mean square error and standard deviations. Finally, by applying the developed method, the density function of the galaxy data is estimated.

20.
In this article, we propose an efficient and robust estimator for the semiparametric mixture model that is a mixture of unknown location-shifted symmetric distributions. Our estimator is derived by minimizing the profile Hellinger distance (MPHD) between the model and a nonparametric density estimate, and we propose a simple and efficient algorithm to compute it. A Monte Carlo simulation study is conducted to examine the finite-sample performance of the proposed procedure and to compare it with other existing methods. Based on our empirical studies, the newly proposed procedure is very competitive with existing methods for normal component cases and much better for non-normal component cases. More importantly, the proposed procedure is robust when the data are contaminated with outlying observations. A real data application is also provided to illustrate the proposed estimation procedure.
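
To give a feel for the distance being minimized, the sketch below fits a fully parametric two-component normal mixture by minimum Hellinger distance against a kernel density estimate. It is a simplification under assumed known weights and unit variances, not the profile (MPHD) estimator of the paper, which leaves the component density unspecified.

    import numpy as np
    from scipy import stats, optimize

    def hellinger_sq(pdf, kde, grid):
        # Squared Hellinger distance between a parametric density and a kernel
        # density estimate, approximated by a Riemann sum on a regular grid.
        diff = np.sqrt(pdf(grid)) - np.sqrt(kde(grid))
        return 0.5 * np.sum(diff**2) * (grid[1] - grid[0])

    def mhd_normal_mixture(x, p=0.5):
        # Minimum Hellinger distance fit of the two location parameters of a
        # two-component normal mixture with known weight p and unit variances.
        kde = stats.gaussian_kde(x)
        grid = np.linspace(x.min() - 3.0, x.max() + 3.0, 400)
        def objective(mu):
            pdf = lambda t: (p * stats.norm.pdf(t, mu[0], 1.0)
                             + (1.0 - p) * stats.norm.pdf(t, mu[1], 1.0))
            return hellinger_sq(pdf, kde, grid)
        start = [np.percentile(x, 25), np.percentile(x, 75)]
        return optimize.minimize(objective, x0=start, method="Nelder-Mead").x

    rng = np.random.default_rng(4)
    x = np.concatenate([rng.normal(-2.0, 1.0, 150), rng.normal(2.0, 1.0, 150)])
    print(mhd_normal_mixture(x))   # estimated component locations, near (-2, 2)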
