Similar Literature
A total of 20 similar documents were retrieved (search time: 31 ms).
1.
A ranked sampling procedure with random subsamples is proposed to estimate the population mean. Four methods of obtaining random subsamples are described, and several estimators of the population mean based on random subsamples in ranked set sampling are proposed. These estimators are compared with the mean of a simple random sample for estimating the mean of symmetric and skewed distributions. Extensive simulation under several subsampling distributions, sample sizes, and symmetric and skewed distributions shows that the estimators of the mean based on random subsamples are more accurate than existing methods.
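For orientation, here is a minimal sketch of the classical balanced ranked set sampling (RSS) estimator of the mean, compared against a simple random sample of the same measured size; the N(10, 2) population, perfect ranking, and all parameter values are illustrative assumptions, and the random-subsample estimators proposed in the paper are not reproduced.

import numpy as np

# Minimal sketch of the classical balanced RSS estimator of the mean (not the
# random-subsample variants proposed above). The N(10, 2) population and
# perfect ranking are illustrative assumptions.
def rss_mean(set_size, cycles, rng):
    measured = []
    for _ in range(cycles):
        for i in range(set_size):
            # Draw a set of `set_size` units, rank them (perfect ranking here),
            # and measure only the (i+1)-th ranked unit.
            ranked = np.sort(rng.normal(10.0, 2.0, set_size))
            measured.append(ranked[i])
    return np.mean(measured)

rng = np.random.default_rng(1)
print("RSS estimate:", rss_mean(set_size=4, cycles=25, rng=rng))
print("SRS estimate:", rng.normal(10.0, 2.0, 100).mean())   # same measured size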

2.
The adaptive rejection sampling (ARS) algorithm is a universal random generator for drawing samples efficiently from a univariate log-concave target probability density function (pdf). ARS generates independent samples from the target via rejection sampling with high acceptance rates. Indeed, ARS yields a sequence of proposal functions that converge toward the target pdf, so that the probability of accepting a sample approaches one. However, sampling from the proposal pdf becomes more computationally demanding each time it is updated. In this work, we propose a novel ARS scheme, called Cheap Adaptive Rejection Sampling (CARS), in which the computational effort for drawing from the proposal remains constant and is fixed in advance by the user. For generating a large number of samples, CARS is faster than ARS.
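For context, the sketch below illustrates standard tangent-based adaptive rejection sampling for a log-concave target; the standard normal target and the initial abscissae are illustrative assumptions, and this is not the CARS variant, whose point is to keep the cost of drawing from the proposal constant.

import numpy as np

# Minimal sketch of standard tangent-based ARS for a log-concave target (a
# standard normal, chosen purely for illustration). This is NOT the CARS
# variant: here the piecewise-exponential envelope keeps growing, whereas
# CARS keeps the cost of drawing from the proposal constant.
def log_f(x):
    return -0.5 * np.asarray(x) ** 2          # log-density up to a constant

def dlog_f(x):
    return -np.asarray(x)                      # derivative of the log-density

def ars(n, init=(-2.0, 1.0), rng=None):
    rng = np.random.default_rng() if rng is None else rng
    xs, out = sorted(init), []
    while len(out) < n:
        x = np.array(xs)
        h, dh = log_f(x), dlog_f(x)
        # Intersections of consecutive tangents delimit the envelope segments.
        z = np.empty(len(x) + 1)
        z[0], z[-1] = -np.inf, np.inf
        z[1:-1] = (h[1:] - h[:-1] + dh[:-1] * x[:-1] - dh[1:] * x[1:]) / (dh[:-1] - dh[1:])
        # Mass of each piecewise-exponential envelope segment.
        mass = np.exp(h - dh * x) * (np.exp(dh * z[1:]) - np.exp(dh * z[:-1])) / dh
        j = rng.choice(len(x), p=mass / mass.sum())
        # Inverse-CDF draw within the chosen segment.
        a, b = np.exp(dh[j] * z[j]), np.exp(dh[j] * z[j + 1])
        prop = np.log(a + rng.uniform() * (b - a)) / dh[j]
        # Accept against the tangent envelope; a rejected point refines the envelope.
        if np.log(rng.uniform()) <= log_f(prop) - (h[j] + dh[j] * (prop - x[j])):
            out.append(prop)
        else:
            xs = sorted(xs + [prop])
    return np.array(out)

draws = ars(5000, rng=np.random.default_rng(0))
print(draws.mean(), draws.std())               # roughly 0 and 1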

3.
A computational problem in many fields is to estimate simultaneously multiple integrals and expectations, assuming that the data are generated by some Monte Carlo algorithm. Consider two scenarios in which draws are simulated from multiple distributions but the normalizing constants of those distributions may be known or unknown. For each scenario, existing estimators can be classified as using individual samples separately or using all the samples jointly. The latter pooled-sample estimators are statistically more efficient but computationally more costly to evaluate than the separate-sample estimators. We develop a cluster-sample approach to obtain computationally effective estimators, after draws are generated for each scenario. We divide all the samples into mutually exclusive clusters and combine samples from each cluster separately. Furthermore, we exploit a relationship between estimators based on samples from different clusters to achieve variance reduction. The resulting estimators, compared with the pooled-sample estimators, typically yield similar statistical efficiency but have reduced computational cost. We illustrate the value of the new approach by two examples for an Ising model and a censored Gaussian random field. The Canadian Journal of Statistics 41: 151–173; 2013 © 2012 Statistical Society of Canada

4.
Neoteric ranked set sampling (NRSS) is a recently developed sampling plan derived from the well-known ranked set sampling (RSS) scheme. It has already been proved that NRSS provides more efficient estimators of the population mean and variance than RSS and other sampling designs based on ranked sets. In this work, we propose and evaluate the performance of some two-stage sampling designs based on NRSS. Five different sampling schemes are proposed. Through an extensive Monte Carlo simulation study, we verify that all proposed sampling designs outperform RSS, NRSS, and the original double RSS design, producing estimators of the population mean with a lower mean square error. Furthermore, as with NRSS, the two-stage NRSS estimators exhibit some bias for asymmetric distributions. We complement the study with a discussion of the relative performance of the proposed estimators and with an additional simulation based on data on the diameter and height of pine trees.

5.
The Monte Carlo method provides estimators for evaluating the expectation E_f[h(X)] of a function h under a density f, based on samples drawn either from the true density f or from some instrumental density. In this paper, we show that the Riemann estimators introduced by Philippe (1997) can be improved by using the importance sampling method. This approach produces a class of Monte Carlo estimators whose variance is of order O(n^-2). The choice of an optimal estimator within this class is discussed. Some simulations illustrate the improvement brought by this method. Moreover, we give a criterion to assess the convergence of our optimal estimator to the integral of interest.
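As a point of reference for the importance sampling ingredient, a minimal self-normalized importance sampling estimator of E_f[h(X)] is sketched below; the standard normal target, the wider normal instrumental density, and h(x) = x^2 are illustrative assumptions, and the Riemann-sum construction of Philippe (1997) is not reproduced.

import numpy as np

# Minimal self-normalized importance sampling estimator of E_f[h(X)]; the
# standard normal target, the wider normal instrumental density, and
# h(x) = x^2 (so the true value is 1) are illustrative assumptions.
rng = np.random.default_rng(0)
n = 10_000

f = lambda x: np.exp(-0.5 * x ** 2)                 # target density, up to a constant
g_scale = 2.0
g = lambda x: np.exp(-0.5 * (x / g_scale) ** 2)     # instrumental density, up to a constant
h = lambda x: x ** 2                                 # integrand

x = rng.normal(0.0, g_scale, n)                      # draws from the instrumental density
w = f(x) / g(x)                                      # unnormalized importance weights
print(np.sum(w * h(x)) / np.sum(w))                  # self-normalized estimate, close to 1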

6.
This article extends the steady-state ranked simulated sampling approach (SRSIS) of Al-Saleh and Samawi (2000), used for improving Monte Carlo methods for single integration problems, to multiple integration problems. We demonstrate that this approach provides unbiased estimators and substantially improves the performance of some Monte Carlo methods for bivariate integral approximations, and that it can be extended to approximations of multiple integrals. This results in a significant reduction in the cost and time required to attain a given level of accuracy. To compare the performance of our method with that of Samawi and Al-Saleh (2007), we use the same two illustrations for the bivariate case.
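For orientation, the crude Monte Carlo baseline that such schemes aim to improve can be sketched as follows for a bivariate integral; the integrand and the uniform sampling region are illustrative assumptions, and the SRSIS step itself is not reproduced.

import numpy as np

# Crude Monte Carlo baseline for a bivariate integral: approximate the integral
# of g(x, y) = exp(x + y) over the unit square (exact value (e - 1)^2) by
# averaging g over uniform draws. The integrand is an illustrative choice, and
# the SRSIS improvement described above is not reproduced.
rng = np.random.default_rng(0)
x, y = rng.uniform(size=(2, 100_000))
print("crude MC:", np.mean(np.exp(x + y)))
print("exact   :", (np.e - 1) ** 2)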

7.
This paper deals with Bayesian estimation of the generalized exponential distribution in the proportional hazards model of random censorship under asymmetric loss functions. Since continuous conjugate priors for the parameters of two-parameter lifetime distributions are well known not to exist, we assume independent gamma priors for the scale and shape parameters. Because closed-form expressions for the Bayes estimators cannot be obtained, we propose Tierney–Kadane's approximation and Gibbs sampling to approximate the Bayes estimates. A Monte Carlo simulation is carried out to study the behavior of the proposed methods, and a real data analysis is performed for illustration. The Bayesian methods are compared with maximum likelihood, and the Bayes estimators are observed to perform better than the maximum likelihood estimators in some cases.

8.
Monte Carlo methods represent the de facto standard for approximating complicated integrals involving multidimensional target distributions. In order to generate random realizations from the target distribution, Monte Carlo techniques use simpler proposal probability densities to draw candidate samples. The performance of any such method is strictly related to the specification of the proposal distribution, and unfortunate choices can easily wreak havoc on the resulting estimators. In this work, we introduce a layered (i.e., hierarchical) procedure for generating the samples employed within a Monte Carlo scheme. This approach ensures that an appropriate equivalent proposal density is always obtained automatically (thus eliminating the risk of a catastrophic performance), at the expense of a moderate increase in complexity. Furthermore, we provide a general unified importance sampling (IS) framework in which multiple proposal densities are employed, and several IS schemes are introduced by applying the so-called deterministic mixture approach. Finally, given these schemes, we also propose a novel class of adaptive importance samplers using a population of proposals, where the adaptation is driven by independent parallel or interacting Markov chain Monte Carlo (MCMC) chains. The resulting algorithms efficiently combine the benefits of both IS and MCMC methods.
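The deterministic mixture idea can be made concrete with a small sketch: each draw is weighted by the target density divided by the full mixture of proposal densities rather than by the single proposal it was drawn from. The bimodal target and the two fixed Gaussian proposals below are illustrative assumptions; the layered and adaptive schemes of the paper are not reproduced.

import numpy as np

# Sketch of deterministic-mixture importance sampling weights: each draw is
# weighted by target(x) divided by the full mixture of proposal densities,
# not by the single proposal it came from. The bimodal target and the two
# fixed Gaussian proposals are illustrative assumptions.
rng = np.random.default_rng(0)

def target(x):                                    # unnormalized bimodal target
    return np.exp(-0.5 * (x - 4) ** 2) + np.exp(-0.5 * (x + 4) ** 2)

def normal_pdf(x, mu, sigma):
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

means, sigma, n_per = [-3.0, 3.0], 2.0, 5_000
samples = np.concatenate([rng.normal(m, sigma, n_per) for m in means])
mixture = np.mean([normal_pdf(samples, m, sigma) for m in means], axis=0)
w = target(samples) / mixture                     # deterministic-mixture weights
w /= w.sum()
print(np.sum(w * samples))                        # E[X] under the target, about 0
print(np.sum(w * samples ** 2))                   # E[X^2] under the target, about 17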

9.
Improved unbiased estimators in adaptive cluster sampling
Summary.  The usual design-unbiased estimators in adaptive cluster sampling are easy to compute but are not functions of the minimal sufficient statistic and hence can be improved. Improved unbiased estimators obtained by conditioning on sufficient statistics—not necessarily minimal—are described. First, estimators that are as easy to compute as the usual design-unbiased estimators are given. Estimators obtained by conditioning on the minimal sufficient statistic, which are more difficult to compute, are also discussed. Estimators are compared in examples.

10.
There are two conceptually distinct tasks in Markov chain Monte Carlo (MCMC): a sampler is designed for simulating a Markov chain and then an estimator is constructed on the Markov chain for computing integrals and expectations. In this article, we aim to address the second task by extending the likelihood approach of Kong et al. for Monte Carlo integration. We consider a general Markov chain scheme and use partial likelihood for estimation. Basically, the Markov chain scheme is treated as a random design and a stratified estimator is defined for the baseline measure. Further, we propose useful techniques including subsampling, regulation, and amplification for achieving overall computational efficiency. Finally, we introduce approximate variance estimators for the point estimators. The method can yield substantially improved accuracy compared with Chib's estimator and the crude Monte Carlo estimator, as illustrated with three examples.

11.
The Gibbs sampler is a computer-intensive algorithm that is an important statistical tool in both applied and theoretical work. In many cases, however, the algorithm is time-consuming. This paper extends the steady-state ranked simulated sampling approach, used in Monte Carlo methods by Samawi [On the approximation of multiple integrals using steady state ranked simulated sampling, 2010, submitted for publication], to improve the well-known Gibbs sampling algorithm. It is demonstrated that this approach provides unbiased estimators when estimating means and distribution functions, and that it substantially improves the performance and convergence of the Gibbs sampling algorithm, resulting in a significant reduction in the cost and time required to attain a given level of accuracy. Similar to Casella and George [Explaining the Gibbs sampler, Am. Statist. 46(3) (1992), pp. 167–174], we provide some analytical properties in simple cases and compare the performance of our method using the same illustrations.
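For reference, a minimal sketch of a standard Gibbs sampler for a bivariate normal target, a toy example in the spirit of Casella and George, is given below; the correlation value is an illustrative assumption, and the steady-state ranked simulated sampling modification proposed in the paper is not reproduced.

import numpy as np

# Minimal Gibbs sampler for a bivariate normal target with zero means, unit
# variances, and correlation rho (a toy example in the spirit of Casella and
# George); the steady-state ranked simulated sampling modification described
# above is not reproduced.
def gibbs_bivariate_normal(n_iter, rho, rng):
    x = y = 0.0
    cond_sd = np.sqrt(1.0 - rho ** 2)
    chain = np.empty((n_iter, 2))
    for t in range(n_iter):
        # Full conditionals: X | Y = y ~ N(rho*y, 1 - rho^2), and symmetrically for Y.
        x = rng.normal(rho * y, cond_sd)
        y = rng.normal(rho * x, cond_sd)
        chain[t] = x, y
    return chain

chain = gibbs_bivariate_normal(20_000, rho=0.8, rng=np.random.default_rng(0))
print(chain.mean(axis=0))                 # close to (0, 0)
print(np.corrcoef(chain.T)[0, 1])         # close to 0.8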

12.
The problem of sampling random variables with overlapping pdfs subject to inequality constraints is addressed. The values of physical variables in an engineering model are often interrelated, and this mutual dependence imposes inequality constraints on the random variables representing these parameters. Ignoring the interdependencies and sampling the variables independently can lead to inconsistency or bias. We propose an algorithm to generate samples of constrained random variables that are characterized by typical continuous probability distributions and are subject to different kinds of inequality constraints. The sampling procedure is illustrated for various representative cases and for a realistic application, the simulation of structural natural frequencies.
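One simple baseline for honouring such constraints is an accept/reject step: draw candidates from the unconstrained distributions and keep only those satisfying the inequalities. The sketch below uses an illustrative constraint x1 < x2 and illustrative marginals; it is not the paper's algorithm, and it becomes inefficient when the feasible region has small probability.

import numpy as np

# Simple accept/reject baseline for sampling under an inequality constraint:
# draw candidates from the unconstrained marginals and keep only pairs with
# x1 < x2. The constraint and the lognormal/normal marginals are illustrative
# assumptions, not the algorithm proposed above.
def sample_constrained(n, rng):
    kept = []
    while len(kept) < n:
        x1 = rng.lognormal(mean=0.0, sigma=0.25, size=1_000)
        x2 = rng.normal(loc=2.0, scale=0.5, size=1_000)
        ok = x1 < x2                               # enforce the inequality constraint
        kept.extend(zip(x1[ok], x2[ok]))
    return np.array(kept[:n])

pairs = sample_constrained(5_000, rng=np.random.default_rng(0))
print((pairs[:, 0] < pairs[:, 1]).all())           # True: every pair is feasible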

13.
In this paper, proportion estimators and associated variance estimators are proposed for a binary variable with a concomitant variable, based on modified ranked set sampling methods: extreme ranked set sampling (ERSS), median ranked set sampling (MRSS), percentile ranked set sampling (Per-RSS), and L ranked set sampling (LRSS). A Monte Carlo simulation study is performed to compare the performance of the estimators in terms of bias, mean squared error, and relative efficiency for different levels of the correlation coefficient and different set and cycle sizes under normal and log-normal distributions. Moreover, the study is supported by a real-data application.

14.
In this paper, we propose an adaptive algorithm that iteratively updates both the weights and the component parameters of a mixture importance sampling density so as to optimise the performance of importance sampling, as measured by an entropy criterion. The method, called M-PMC, is shown to be applicable to a wide class of importance sampling densities, which includes in particular mixtures of multivariate Student t distributions. The performance of the proposed scheme is studied on both artificial and real examples, highlighting in particular the benefit of a novel Rao-Blackwellisation device which can be easily incorporated into the updating scheme. This work was supported by the Agence Nationale de la Recherche (ANR) through a 2006–2008 project. The last two authors are grateful to the participants of the BIRS meeting on "Bioinformatics, Genetics and Stochastic Computation: Bridging the Gap", Banff, for their comments on an earlier version of this paper. The last author also acknowledges a helpful discussion with Geoff McLachlan. The authors wish to thank both referees for their encouraging comments.
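To make the general idea concrete, here is a heavily simplified one-dimensional sketch of adaptive mixture importance sampling in which component weights and means are updated from Rao-Blackwellised importance weights; the bimodal target, the Gaussian components, and the fixed component scale are illustrative assumptions, and this is only a caricature of the M-PMC updates described in the paper.

import numpy as np

# Heavily simplified one-dimensional adaptive mixture importance sampler:
# component weights and means are updated from Rao-Blackwellised importance
# weights, with the component scale kept fixed. The bimodal target and all
# parameter values are illustrative assumptions.
rng = np.random.default_rng(0)

def target(x):                                    # unnormalized bimodal target
    return np.exp(-0.5 * (x - 5) ** 2) + np.exp(-0.5 * (x + 5) ** 2)

def normal_pdf(x, mu, sigma):
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

D, n, sigma = 2, 4_000, 2.0
alpha = np.full(D, 1.0 / D)                       # mixture weights
mu = np.array([-1.0, 1.0])                        # component means (deliberately poor start)

for _ in range(10):
    comp = rng.choice(D, size=n, p=alpha)         # draw from the current mixture proposal
    x = rng.normal(mu[comp], sigma)
    dens = np.array([normal_pdf(x, m, sigma) for m in mu])     # D x n component densities
    q = alpha @ dens                                            # mixture proposal density
    w = target(x) / q                                           # importance weights
    w /= w.sum()
    resp = alpha[:, None] * dens / q                            # Rao-Blackwellised responsibilities
    alpha = resp @ w                                            # updated mixture weights
    mu = (resp * x) @ w / alpha                                 # updated component means

print(alpha, mu)    # weights near (0.5, 0.5); means pulled toward the modes near -5 and 5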

15.
Mixture distributions have recently become increasingly popular in many scientific fields. Statistical computation and analysis of mixture models, however, are extremely complex because of the large number of parameters involved. Both EM algorithms for likelihood inference and MCMC procedures for Bayesian analysis have various difficulties in dealing with mixtures with an unknown number of components. In this paper, we propose a direct sampling approach to the computation of Bayesian finite mixture models with a varying number of components. This approach requires only knowledge of the density function up to a multiplicative constant. It is easy to implement, numerically efficient, and very practical in real applications. A simulation study shows that it performs quite satisfactorily on relatively high-dimensional distributions. A well-known genetic data set is used to demonstrate the simplicity of this method and its power for the computation of high-dimensional Bayesian mixture models.

16.
On sequential Monte Carlo sampling methods for Bayesian filtering
In this article, we present an overview of methods for sequential simulation from posterior distributions. These methods are of particular interest in Bayesian filtering for discrete-time dynamic models that are typically nonlinear and non-Gaussian. A general importance sampling framework is developed that unifies many of the methods proposed over the last few decades in several different scientific disciplines. Novel extensions to the existing methods are also proposed. We show in particular how to incorporate local linearisation methods similar to those previously employed in the deterministic filtering literature; these lead to very effective importance distributions. Furthermore, we describe a method which uses Rao-Blackwellisation in order to take advantage of the analytic structure present in some important classes of state-space models. In a final section we develop algorithms for prediction, smoothing and evaluation of the likelihood in dynamic models.
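A minimal bootstrap (sampling importance resampling) particle filter, the simplest member of the framework surveyed here, is sketched below for an illustrative linear-Gaussian state-space model; the local-linearisation and Rao-Blackwellisation refinements discussed in the paper are not reproduced.

import numpy as np

# Minimal bootstrap (sampling importance resampling) particle filter for the
# illustrative linear-Gaussian model x_t = 0.9 x_{t-1} + v_t, y_t = x_t + e_t.
# The transition prior is used as the importance distribution; the
# local-linearisation and Rao-Blackwellisation refinements described above
# are not reproduced.
rng = np.random.default_rng(0)
T, N = 100, 1_000
phi, q, r = 0.9, 1.0, 1.0

x_true = np.zeros(T)                                  # simulate synthetic data
for t in range(1, T):
    x_true[t] = phi * x_true[t - 1] + rng.normal(0, np.sqrt(q))
y = x_true + rng.normal(0, np.sqrt(r), T)

particles = rng.normal(0, 1, N)
filtered_mean = np.zeros(T)
for t in range(T):
    particles = phi * particles + rng.normal(0, np.sqrt(q), N)   # propagate through the prior
    logw = -0.5 * (y[t] - particles) ** 2 / r                    # observation log-likelihood
    w = np.exp(logw - logw.max())
    w /= w.sum()
    filtered_mean[t] = np.sum(w * particles)
    particles = particles[rng.choice(N, size=N, p=w)]            # multinomial resampling

print(np.sqrt(np.mean((filtered_mean - x_true) ** 2)))           # filtering RMSE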

17.
Summary. The task of estimating an integral by Monte Carlo methods is formulated as a statistical model using simulated observations as data. The difficulty in this exercise is that we ordinarily have at our disposal all of the information required to compute integrals exactly by calculus or numerical integration, but we choose to ignore some of the information for simplicity or computational feasibility. Our proposal is to use a semiparametric statistical model that makes explicit what information is ignored and what information is retained. The parameter space in this model is a set of measures on the sample space, which is ordinarily an infinite-dimensional object. Nonetheless, from simulated data the baseline measure can be estimated by maximum likelihood, and the required integrals computed by a simple formula previously derived by Vardi and by Lindsay in a closely related model for biased sampling. The same formula was also suggested by Geyer and by Meng and Wong using entirely different arguments. In contrast with Geyer's retrospective likelihood, a correct estimate of simulation error is available directly from the Fisher information. The principal advantage of the semiparametric model is that variance reduction techniques are associated with submodels in which the maximum likelihood estimator in the submodel may have substantially smaller variance than the traditional estimator. The method is applicable to Markov chain and more general Monte Carlo sampling schemes with multiple samplers.

18.
In this paper, we consider the analysis of hybrid censored competing risks data, based on Cox's latent failure time model assumptions. It is assumed that the lifetime distributions of the latent causes of failure follow Weibull distributions with a common shape parameter but different scale parameters. Maximum likelihood estimators (MLEs) of the unknown parameters can be obtained by solving a one-dimensional optimization problem, and we propose a fixed-point type algorithm to solve this optimization problem. Approximate MLEs, which have explicit expressions, are proposed based on a Taylor series expansion. Bayesian inference on the unknown parameters is obtained under the assumption that the shape parameter has a log-concave prior density function and that, given the shape parameter, the scale parameters have Beta–Gamma priors. We propose to use Markov chain Monte Carlo samples to compute Bayes estimates and to construct highest posterior density credible intervals. Monte Carlo simulations are performed to investigate the performance of the different estimators, and two data sets are analysed for illustrative purposes.

19.
We consider Particle Gibbs (PG) for Bayesian analysis of non-linear, non-Gaussian state-space models. As a Monte Carlo (MC) approximation of the Gibbs procedure, PG uses sequential MC (SMC) importance sampling within the Gibbs sampler to update the latent states. We propose to combine PG with Particle Efficient Importance Sampling (PEIS). By using SMC sampling densities that are approximately globally fully adapted to the targeted density of the states, PEIS can substantially improve the simulation efficiency of PG relative to existing PG implementations. The efficiency gains are illustrated in PG applications to a non-linear local-level model and to stochastic volatility models.

20.
In this article, we propose the best linear unbiased estimators (BLUEs) and best linear invariant estimators (BLIEs) for the unknown parameters of the location-scale family of distributions based on double-ranked set sampling (DRSS) using perfect and imperfect rankings. These estimators are then compared with the BLUEs and BLIEs based on ranked set sampling (RSS). It is shown that, under perfect ranking, the proposed estimators are uniformly better than the BLUEs and BLIEs obtained via RSS. We also propose the best linear unbiased quantile (BLUQ) and best linear invariant quantile (BLIQ) estimators for the normal distribution under DRSS. It is observed that the proposed quantile estimators are more efficient than the BLUQ and BLIQ estimators based on RSS for both perfect and imperfect rankings.
