Similar Literature
20 similar documents found (search time: 468 ms).
1.
Rejection sampling is a well-known method to generate random samples from arbitrary target probability distributions. It demands the design of a suitable proposal probability density function (pdf) from which candidate samples can be drawn. These samples are either accepted or rejected depending on a test involving the ratio of the target and proposal densities. The adaptive rejection sampling method is an efficient algorithm for sampling from a log-concave target density; it attains high acceptance rates by improving the proposal density whenever a sample is rejected. In this paper we introduce a generalized adaptive rejection sampling procedure that can be applied to a broad class of target probability distributions, possibly non-log-concave and exhibiting multiple modes. The proposed technique yields a sequence of proposal densities that converge toward the target pdf, thus achieving very high acceptance rates. We provide a simple numerical example to illustrate the basic use of the proposed technique, together with a more elaborate positioning application using real data.
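The generalized construction itself is not reproduced in the abstract. As background, here is a minimal sketch of plain rejection sampling, assuming a target p known only up to a constant and a bound M with p(x) ≤ M·q(x); the function names and the normal-via-Cauchy example are illustrative, not from the paper:

```python
import numpy as np

def rejection_sample(p, q_sample, q_pdf, M, n, rng=np.random.default_rng()):
    """Draw n samples from the (unnormalized) target p by rejection sampling.
    Assumes p(x) <= M * q_pdf(x) for all x; p and q_pdf are vectorized pdfs,
    and q_sample(size, rng) draws candidates from the proposal."""
    out = []
    while len(out) < n:
        x = q_sample(n, rng)                 # candidate draws from the proposal
        u = rng.uniform(size=n)
        accept = u < p(x) / (M * q_pdf(x))   # accept with probability p/(M q)
        out.extend(x[accept].tolist())
    return np.array(out[:n])

# Example: sample a standard normal using a Cauchy proposal.
# max_x p(x)/q(x) = 2*pi/sqrt(e), attained at x = +/-1.
p = lambda x: np.exp(-0.5 * x**2)            # unnormalized N(0, 1)
q_pdf = lambda x: 1.0 / (np.pi * (1 + x**2))
q_sample = lambda size, rng: rng.standard_cauchy(size)
samples = rejection_sample(p, q_sample, q_pdf, M=2 * np.pi / np.sqrt(np.e), n=10_000)
```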

2.
Multipath fading is one of the most common distortions in wireless communications. The simulation of a fading channel typically requires drawing samples from a Rayleigh, Rice or Nakagami distribution. The Nakagami-m distribution is particularly important due to its good agreement with empirical channel measurements, as well as its ability to generalize the well-known Rayleigh and Rice distributions. In this paper, a simple and extremely efficient rejection sampling (RS) algorithm for generating independent samples from a Nakagami-m distribution is proposed. This RS approach is based on a novel proposal density composed of three pieces of well-known densities from which samples can be drawn easily and efficiently. The proposed method is valid for any combination of parameters of the Nakagami distribution, without any restriction on the domain and without requiring any adjustment by the final user. Simulations for several parameter combinations show that the proposed approach attains acceptance rates above 90% in all cases, outperforming all the RS techniques currently available in the literature.
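The paper's three-piece proposal is not given in the abstract, so the sketch below uses the standard exact gamma-transform generator instead, relying on the known fact that if G ~ Gamma(shape = m, scale = Ω/m) then √G is Nakagami-m with spread Ω:

```python
import numpy as np

def nakagami(m, omega, size, rng=np.random.default_rng()):
    """Exact Nakagami-m draws via the standard gamma transform (not the
    paper's three-piece rejection sampler): if G ~ Gamma(shape=m,
    scale=omega/m), then sqrt(G) ~ Nakagami(m, omega)."""
    g = rng.gamma(shape=m, scale=omega / m, size=size)
    return np.sqrt(g)

# m = 1 recovers the Rayleigh distribution, with E[X^2] = omega.
x = nakagami(m=1.0, omega=2.0, size=100_000)
print(np.mean(x**2))   # should be close to 2.0
```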

3.
Different strategies have been proposed to improve mixing and convergence properties of Markov chain Monte Carlo algorithms. These are mainly concerned with customizing the proposal density in the Metropolis–Hastings algorithm to the specific target density and require a detailed exploratory analysis of the stationary distribution and/or some preliminary experiments to determine an efficient proposal. Various Metropolis–Hastings algorithms have been suggested that make use of previously sampled states in defining an adaptive proposal density. Here we propose a general class of adaptive Metropolis–Hastings algorithms based on Metropolis–Hastings-within-Gibbs sampling. For the case of a one-dimensional target distribution, we present two novel algorithms using mixtures of triangular and trapezoidal densities. These can also be seen as improved versions of the all-purpose adaptive rejection Metropolis sampling (ARMS) algorithm for sampling from non-log-concave univariate densities. Using various different examples, we demonstrate their properties and efficiencies and point out their advantages over ARMS and other adaptive alternatives such as the Normal Kernel Coupler.
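The triangular/trapezoidal mixture proposals are not specified in the abstract; the following is a minimal sketch of the underlying Metropolis–Hastings-within-Gibbs scheme with a plain random-walk proposal per coordinate (all names illustrative):

```python
import numpy as np

def mh_within_gibbs(log_target, x0, n_iter, step=0.5, rng=np.random.default_rng()):
    """Metropolis-Hastings-within-Gibbs: update one coordinate at a time with
    a symmetric random-walk proposal. (The paper instead adapts the per-
    coordinate proposal using mixtures of triangular/trapezoidal densities.)"""
    x = np.array(x0, dtype=float)
    chain = np.empty((n_iter, x.size))
    for t in range(n_iter):
        for i in range(x.size):
            prop = x.copy()
            prop[i] += step * rng.standard_normal()
            # symmetric proposal => MH ratio reduces to the target ratio
            if np.log(rng.uniform()) < log_target(prop) - log_target(x):
                x = prop
        chain[t] = x
    return chain

# Example: correlated bivariate normal target.
cov_inv = np.linalg.inv(np.array([[1.0, 0.8], [0.8, 1.0]]))
log_target = lambda x: -0.5 * x @ cov_inv @ x
chain = mh_within_gibbs(log_target, x0=[0.0, 0.0], n_iter=5000)
```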

4.
The sampling-importance resampling (SIR) algorithm aims at drawing a random sample from a target distribution π. First, a sample is drawn from a proposal distribution q, and then from this a smaller sample is drawn with sample probabilities proportional to the importance ratios π/q. We propose here a simple adjustment of the sample probabilities and show that this gives faster convergence. The results indicate that our version also converges better for small sample sizes. The SIR algorithms are compared with the Metropolis–Hastings (MH) algorithm with independent proposals. Although MH converges asymptotically faster, the results indicate that our improved SIR version is better than MH for small sample sizes. We also establish a connection between the SIR algorithms and importance sampling with normalized weights. We show that the use of adjusted SIR sample probabilities as importance weights reduces the bias of the importance sampling estimate.
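A minimal sketch of the baseline SIR algorithm as described (draw from q, resample proportionally to π/q); the adjusted sample probabilities proposed in the paper are not reproduced here, and resampling with replacement is just one common variant:

```python
import numpy as np

def sir(target_pdf, q_sample, q_pdf, n_draw, n_keep, rng=np.random.default_rng()):
    """Sampling-importance resampling: draw from the proposal q, then
    resample with probabilities proportional to the ratios pi/q."""
    x = q_sample(n_draw, rng)
    w = target_pdf(x) / q_pdf(x)          # importance ratios pi/q
    p = w / w.sum()                       # normalized resampling probabilities
    idx = rng.choice(n_draw, size=n_keep, replace=True, p=p)
    return x[idx]

# Example: target N(0,1) via a wide normal proposal.
target = lambda x: np.exp(-0.5 * x**2) / np.sqrt(2 * np.pi)
q_pdf = lambda x: np.exp(-0.5 * (x / 3) ** 2) / (3 * np.sqrt(2 * np.pi))
q_sample = lambda n, rng: 3 * rng.standard_normal(n)
sample = sir(target, q_sample, q_pdf, n_draw=50_000, n_keep=1000)
```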

5.
An algorithm for sampling from non-log-concave multivariate distributions is proposed, which improves the adaptive rejection Metropolis sampling (ARMS) algorithm by incorporating hit-and-run sampling. It is not rare for ARMS to become trapped away from subspaces that carry significant probability in the support of a multivariate distribution. While ARMS updates samples only in directions parallel to the coordinate axes, our proposed method, hit-and-run ARMS (HARARMS), updates samples in arbitrary directions determined by the hit-and-run algorithm, which makes it almost impossible for the sampler to become trapped in an isolated subspace. HARARMS performs the same as ARMS in a single dimension but is more reliable in multidimensional spaces. Its performance is illustrated by a Bayesian free-knot spline regression example: it overcomes the well-known 'lethargy' property and decisively finds the globally optimal number and locations of the knots of the spline function.
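As a hedged illustration of the hit-and-run idea (not the full HARARMS algorithm, which runs ARMS along the sampled line), the sketch below combines a uniformly random direction with a simple Metropolis step along it:

```python
import numpy as np

def hit_and_run_metropolis(log_target, x0, n_iter, step=1.0, rng=np.random.default_rng()):
    """Simplified hit-and-run: pick a uniformly random direction, then take a
    Metropolis step along that line. The increment lam*u is spherically
    symmetric, so the proposal is symmetric and the plain Metropolis
    acceptance rule applies. (HARARMS instead samples the one-dimensional
    conditional along the direction with ARMS.)"""
    x = np.array(x0, dtype=float)
    chain = np.empty((n_iter, x.size))
    for t in range(n_iter):
        u = rng.standard_normal(x.size)
        u /= np.linalg.norm(u)               # uniform direction on the sphere
        lam = step * rng.standard_normal()   # signed step size along the line
        prop = x + lam * u
        if np.log(rng.uniform()) < log_target(prop) - log_target(x):
            x = prop
        chain[t] = x
    return chain
```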

6.
Markov chain Monte Carlo (MCMC) is an important computational technique for generating samples from non-standard probability distributions. A major challenge in the design of practical MCMC samplers is to achieve efficient convergence and mixing properties. One way to accelerate convergence and mixing is to adapt the proposal distribution in light of previously sampled points, thus increasing the probability of acceptance. In this paper, we propose two new adaptive MCMC algorithms based on the Independent Metropolis–Hastings algorithm. In the first, we adjust the proposal to minimize an estimate of the cross-entropy between the target and proposal distributions, using the experience of pre-runs. This approach provides a general technique for deriving natural adaptive formulae. The second approach uses multiple parallel chains, and involves updating chains individually, then updating a proposal density by fitting a Bayesian model to the population. An important feature of this approach is that adapting the proposal does not change the limiting distributions of the chains. Consequently, the adaptive phase of the sampler can be continued indefinitely. We include results of numerical experiments indicating that the new algorithms compete well with traditional Metropolis–Hastings algorithms. We also demonstrate the method for a realistic problem arising in comparative genomics.
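A minimal sketch of the (non-adaptive) Independent Metropolis–Hastings kernel that both proposed algorithms build on; the cross-entropy and population-based adaptation steps are not reproduced:

```python
import numpy as np

def independent_mh(log_target, q_sample, log_q, x0, n_iter, rng=np.random.default_rng()):
    """Independent Metropolis-Hastings: proposals do not depend on the
    current state, so the acceptance ratio is [pi(y) q(x)] / [pi(x) q(y)]."""
    x = x0
    chain = []
    for _ in range(n_iter):
        y = q_sample(rng)
        log_alpha = (log_target(y) + log_q(x)) - (log_target(x) + log_q(y))
        if np.log(rng.uniform()) < log_alpha:
            x = y
        chain.append(x)
    return np.array(chain)

# Example: standard normal target with a wide normal proposal.
log_target = lambda x: -0.5 * x**2
log_q = lambda x: -0.5 * (x / 2) ** 2
q_sample = lambda rng: 2 * rng.standard_normal()
chain = independent_mh(log_target, q_sample, log_q, x0=0.0, n_iter=10_000)
```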

7.
Pareto sampling was introduced by Rosén in the late 1990s. It is a simple method to get a fixed-size πps sample, though with inclusion probabilities only approximately as desired. Sampford sampling, introduced by Sampford in 1967, gives the desired inclusion probabilities, but it may take time to generate a sample. Using probability functions and Laplace approximations, we show that from a probabilistic point of view these two designs are very close to each other and asymptotically identical. A Sampford sample can rapidly be generated in all situations by letting a Pareto sample pass an acceptance–rejection filter. A new, very efficient method to generate conditional Poisson (CP) samples appears as a byproduct. Further, it is shown how the inclusion probabilities of all orders for the Pareto design can be calculated from those of the CP design. A new explicit and very accurate approximation of the second-order inclusion probabilities, valid for several designs, is presented and applied to obtain single-sum-type variance estimates for the Horvitz–Thompson estimator.
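A sketch of Rosén's Pareto πps selection rule in its standard formulation, with ranking variables Q_i = [U_i/(1−U_i)] / [π_i/(1−π_i)]; the acceptance–rejection filter to Sampford sampling is omitted:

```python
import numpy as np

def pareto_sample(pi, n, rng=np.random.default_rng()):
    """Rosen's Pareto pi-ps sampling: draw U_i ~ Uniform(0,1), form the
    ranking variables Q_i = [U_i/(1-U_i)] / [pi_i/(1-pi_i)], and keep the n
    units with the smallest Q. Inclusion probabilities are approximately
    (not exactly) pi_i, as the abstract notes."""
    pi = np.asarray(pi, dtype=float)
    u = rng.uniform(size=pi.size)
    q = (u / (1 - u)) / (pi / (1 - pi))
    return np.argsort(q)[:n]              # indices of the sampled units

# Example: inclusion probabilities proportional to size, summing to n.
size = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 5.0])
n = 2
pi = n * size / size.sum()
sample = pareto_sample(pi, n)
```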

8.
The coefficient of variation is one of the most commonly used statistical tools across various scientific fields. This paper proposes using the coefficient of variation, obtained by sampling, to define the polynomial probability density function (pdf) of a continuous and symmetric random variable on the interval [a, b]. The basic idea behind the first proposed algorithm is the transformation of the interval from [a, b] to [0, b-a]. The chi-square goodness-of-fit test is used to compare the proposed (observed) sample distribution with the expected probability distribution, and the experimental results show that the collected data are well approximated by the proposed pdf. The second algorithm proposes a new method to obtain a fast estimate of the degree of the polynomial pdf when the random variable is normally distributed. Using the known percentages of values that lie within one, two and three standard deviations of the mean (the three-sigma rule of thumb), we conclude that the degree of the polynomial pdf takes values between 1.8127 and 1.8642. In the case of a Laplace(μ, b) distribution, the degree of the polynomial pdf takes values greater than 1. All calculations and graphs are produced using the statistical software R.
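The polynomial-pdf construction itself is not specified in the abstract, but the chi-square goodness-of-fit step is standard. A hedged sketch (in Python rather than the paper's R, with a generic candidate pdf as an illustrative argument):

```python
import numpy as np
from scipy import stats

def chi2_gof(data, pdf, a, b, n_bins=10):
    """Chi-square goodness-of-fit of data on [a, b] against a candidate pdf
    (a vectorized callable; data is assumed to lie entirely in [a, b])."""
    edges = np.linspace(a, b, n_bins + 1)
    observed, _ = np.histogram(data, bins=edges)
    # expected counts: midpoint-rule integral of the pdf over each bin,
    # renormalized over [a, b] so expected and observed totals match
    mids = 0.5 * (edges[:-1] + edges[1:])
    probs = pdf(mids) * np.diff(edges)
    probs /= probs.sum()
    expected = probs * len(data)
    return stats.chisquare(observed, expected)

# Example: uniform data on [-1, 1] against the uniform pdf 0.5.
data = np.random.default_rng(0).uniform(-1, 1, 500)
print(chi2_gof(data, lambda x: np.full_like(x, 0.5), a=-1, b=1))
```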

9.
We analyze the computational efficiency of approximate Bayesian computation (ABC), which approximates a likelihood function by drawing pseudo-samples from the associated model. For the rejection sampling version of ABC, it is known that multiple pseudo-samples cannot substantially increase (and can substantially decrease) the efficiency of the algorithm as compared to employing a high-variance estimate based on a single pseudo-sample. We show that this conclusion also holds for a Markov chain Monte Carlo version of ABC, implying that it is unnecessary to tune the number of pseudo-samples used in ABC-MCMC. This conclusion is in contrast to particle MCMC methods, for which increasing the number of particles can provide large gains in computational efficiency.
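A minimal sketch of the rejection-sampling version of ABC with a single pseudo-sample per proposed parameter, which the paper argues is essentially all one needs; all names and the normal-mean example are illustrative:

```python
import numpy as np

def abc_rejection(observed_stat, prior_sample, simulate, stat, eps, n_accept,
                  rng=np.random.default_rng()):
    """ABC rejection sampling: draw theta from the prior, simulate one
    pseudo-sample from the model, and accept theta when the summary
    statistic lands within eps of the observed one."""
    accepted = []
    while len(accepted) < n_accept:
        theta = prior_sample(rng)
        pseudo = simulate(theta, rng)
        if abs(stat(pseudo) - observed_stat) < eps:
            accepted.append(theta)
    return np.array(accepted)

# Example: infer a normal mean from its sample mean.
obs = 1.3                                     # observed summary statistic
prior_sample = lambda rng: rng.uniform(-5, 5)
simulate = lambda th, rng: th + rng.standard_normal(50)
posterior = abc_rejection(obs, prior_sample, simulate, np.mean, eps=0.1, n_accept=500)
```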

10.
We give a formal definition of a representative sample, but roughly speaking, it is a scaled-down version of the population, capturing its characteristics. New methods for selecting representative probability samples in the presence of auxiliary variables are introduced. Representative samples are needed for multipurpose surveys, when several target variables are of interest. Such samples also enable estimation of parameters in subspaces and improved estimation of target variable distributions. We describe how two recently proposed sampling designs can be used to produce representative samples. Both designs use distance between population units when producing a sample. We propose a distance function that can calculate distances between units in general auxiliary spaces. We also propose a variance estimator for the commonly used Horvitz–Thompson estimator. Real data as well as illustrative examples show that representative samples are obtained and that the variance of the Horvitz–Thompson estimator is reduced compared with simple random sampling.

11.
Monte Carlo methods represent the de facto standard for approximating complicated integrals involving multidimensional target distributions. In order to generate random realizations from the target distribution, Monte Carlo techniques use simpler proposal probability densities to draw candidate samples. The performance of any such method is directly related to the specification of the proposal distribution, such that unfortunate choices easily wreak havoc on the resulting estimators. In this work, we introduce a layered (i.e., hierarchical) procedure to generate samples employed within a Monte Carlo scheme. This approach ensures that an appropriate equivalent proposal density is always obtained automatically (thus eliminating the risk of a catastrophic performance), although at the expense of a moderate increase in complexity. Furthermore, we provide a general unified importance sampling (IS) framework, where multiple proposal densities are employed and several IS schemes are introduced by applying the so-called deterministic mixture approach. Finally, given these schemes, we also propose a novel class of adaptive importance samplers using a population of proposals, where the adaptation is driven by independent parallel or interacting Markov chain Monte Carlo (MCMC) chains. The resulting algorithms efficiently combine the benefits of both IS and MCMC methods.
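A sketch of multiple importance sampling with the deterministic mixture weights the framework builds on: a draw x from proposal j is weighted by π(x) / [(1/N)Σ_k q_k(x)]. The layered/adaptive machinery of the paper is not reproduced:

```python
import numpy as np

def dm_importance_sampling(target_pdf, proposals, n_per, rng=np.random.default_rng()):
    """Multiple importance sampling with deterministic mixture weights:
    every draw is weighted against the equal-weight mixture of all proposal
    pdfs. `proposals` is a list of (sample_fn, pdf_fn) pairs."""
    xs, ws = [], []
    for sample_fn, _ in proposals:
        x = sample_fn(n_per, rng)
        mix = np.mean([pdf(x) for _, pdf in proposals], axis=0)  # mixture pdf
        xs.append(x)
        ws.append(target_pdf(x) / mix)
    x = np.concatenate(xs)
    w = np.concatenate(ws)
    return x, w / w.sum()                 # self-normalized weights

# Example: bimodal (unnormalized) target covered by two normal proposals.
target = lambda x: 0.5 * np.exp(-0.5 * (x + 3) ** 2) + 0.5 * np.exp(-0.5 * (x - 3) ** 2)
props = [
    (lambda n, rng, m=m: m + rng.standard_normal(n),
     lambda x, m=m: np.exp(-0.5 * (x - m) ** 2) / np.sqrt(2 * np.pi))
    for m in (-3.0, 3.0)
]
x, w = dm_importance_sampling(target, props, n_per=5000)
print(np.sum(w * x))                      # estimate of the target mean, ~ 0
```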

12.
An acceptance sampling plan is a method used to decide whether to accept or reject a product lot, based on adherence to a standard. Process capability indices (PCIs) have been applied in different manufacturing industries as capability measures based on specified criteria, including process departure from a target, process consistency, process yield and process loss. In this paper, a repetitive group sampling (RGS) plan based on a PCI is introduced for inspection by variables. First, the optimal parameters of the developed RGS plan are obtained under constraints on the consumer's and producer's risks; a double sampling plan, a multiple dependent state sampling plan and a sampling plan for resubmitted lots are also designed. Finally, after developing variables sampling plans based on Bayesian and exact approaches, a comparison study is performed between the developed RGS plan and the other types of sampling plans, and the results are discussed.
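The RGS design itself is not specified in the abstract. As a simpler illustration of how consumer's and producer's risks enter an acceptance sampling plan, here is the operating characteristic of a single-sampling attributes plan (a much simpler relative of the paper's variables plan; all parameters illustrative):

```python
from scipy import stats

def oc_curve(n, c, p):
    """Operating characteristic of a single-sampling attributes plan:
    accept the lot if the number of defectives X ~ Binomial(n, p)
    found in a sample of n items is at most c."""
    return stats.binom.cdf(c, n, p)

# Illustrative plan n=80, c=2: producer's risk alpha at good quality
# p=0.01, consumer's risk beta at poor quality p=0.08.
print(1 - oc_curve(80, 2, 0.01))   # producer's risk (rejecting a good lot)
print(oc_curve(80, 2, 0.08))       # consumer's risk (accepting a bad lot)
```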

13.
A new method for sampling from a finite population that is spread in one, two or more dimensions is presented. Weights are used to create strong negative correlations between the inclusion indicators of nearby units. The method can be used to produce unequal probability samples that are well spread over the population in every dimension, without any spatial stratification. Since the method is very general there are numerous possible applications, especially in sampling of natural resources where spatially balanced sampling has proven to be efficient. Two examples show that the method gives better estimates than other commonly used designs.

14.
Exact Sampling from a Continuous State Space
Propp & Wilson (1996) described a protocol, called coupling from the past, for exact sampling from a target distribution using a coupled Markov chain Monte Carlo algorithm. In this paper we extend coupling from the past to various MCMC samplers on a continuous state space; rather than following the monotone sampling device of Propp & Wilson, our approach uses methods related to gamma-coupling and rejection sampling to simulate the chain, and direct accounting of sample paths.
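A minimal finite-state, monotone illustration of the coupling-from-the-past protocol (the continuous-state extensions developed in the paper are substantially more involved); the lazy reflected random walk below is an illustrative chain whose stationary law is uniform:

```python
import numpy as np

def monotone_update(x, u, K):
    """One coupled step of a lazy random walk on {0,...,K}; monotone in x."""
    if u < 1/3:
        return max(x - 1, 0)
    elif u < 2/3:
        return min(x + 1, K)
    return x

def cftp(K, rng=np.random.default_rng()):
    """Coupling from the past (Propp & Wilson): reuse the same random inputs
    each time the horizon doubles, and stop when chains started from the
    bottom (0) and top (K) states coalesce by time 0. The returned value is
    an exact draw from the stationary distribution."""
    T = 1
    us = []                                # random inputs for times -1, -2, ...
    while True:
        while len(us) < T:
            us.append(rng.uniform())
        lo, hi = 0, K
        for t in range(T - 1, -1, -1):     # run from time -T up to time 0
            lo = monotone_update(lo, us[t], K)
            hi = monotone_update(hi, us[t], K)
        if lo == hi:
            return lo
        T *= 2

draws = [cftp(K=10) for _ in range(2000)]  # ~uniform on {0,...,10}
```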

15.
Random sampling from databases: a survey
This paper reviews recent literature on techniques for obtaining random samples from databases. We begin with a discussion of why one would want to include sampling facilities in database management systems. We then review basic sampling techniques used in constructing DBMS sampling algorithms, e.g. acceptance/rejection and reservoir sampling. A discussion of sampling from various data structures follows: B+ trees, hash files and spatial data structures (including R-trees and quadtrees). Algorithms for sampling from simple relational queries, e.g. single relational operators such as selection, intersection, union, set difference, projection and join, are then described. We then describe sampling for the estimation of aggregates (e.g. the size of query results), discussing both clustered sampling and sequential sampling approaches. Finally, decision-theoretic approaches to sampling for query optimization are reviewed.
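One of the basic techniques the survey names, reservoir sampling (Algorithm R), in a minimal sketch:

```python
import random

def reservoir_sample(stream, k, rng=random.Random()):
    """Algorithm R: keep a uniform random sample of size k from a stream of
    unknown length in a single pass."""
    reservoir = []
    for i, item in enumerate(stream):
        if i < k:
            reservoir.append(item)
        else:
            j = rng.randint(0, i)          # uniform over the first i+1 items
            if j < k:
                reservoir[j] = item        # replace with probability k/(i+1)
        # invariant: reservoir is a uniform k-subset of the first i+1 items
    return reservoir

print(reservoir_sample(range(1_000_000), k=5))
```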

16.
A computational problem in many fields is to evaluate multiple integrals and expectations simultaneously. Consider probability distributions with unnormalized density functions indexed by parameters on a 2-dimensional grid, and assume that samples are simulated from distributions on a subgrid. Examples of such unnormalized density functions include the observed-data likelihoods in the presence of missing data and the prior times the likelihood in Bayesian inference. There are various methods using a single sample only or multiple samples jointly to compute each integral. Path sampling offers a compromise, using samples along a 1-dimensional path to compute each integral. However, different choices of the path lead to different estimators, which should ideally be identical. We propose calibrated estimators by the method of control variates to exploit such constraints for variance reduction. We also propose biquadratic interpolation to approximate integrals with parameters outside the subgrid, consistently with the calibrated estimators on the subgrid. These methods can be extended to compute differences of expectations through an auxiliary identity for path sampling. Furthermore, we develop stepwise bridge-sampling methods in parallel but complementary to path sampling. In three simulation studies, the proposed methods lead to substantially reduced mean squared errors compared with existing methods.
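A minimal sketch of path sampling (thermodynamic integration) along a one-dimensional path, using a Gaussian family where the answer is known in closed form so the estimate can be checked; the calibration and bridge-sampling refinements of the paper are not reproduced:

```python
import numpy as np

def path_sampling_log_ratio(beta_grid, n_mc=100_000, rng=np.random.default_rng()):
    """Path sampling for the family q(x; beta) = exp(-beta x^2 / 2):
    d/dbeta log Z(beta) = E_beta[-x^2/2], integrated over a 1-D path in beta
    by the trapezoidal rule. Here E_beta is estimated with exact draws from
    N(0, 1/beta), so the result can be checked against the closed form
    log Z(b1) - log Z(b0) = -0.5 * log(b1/b0)."""
    d = np.array([
        np.mean(-0.5 * (rng.standard_normal(n_mc) / np.sqrt(beta)) ** 2)
        for beta in beta_grid
    ])
    # trapezoidal rule over the path
    return float(np.sum(0.5 * (d[1:] + d[:-1]) * np.diff(beta_grid)))

grid = np.linspace(0.5, 1.0, 11)
print(path_sampling_log_ratio(grid))   # ~ -0.5*log(2) = -0.3466
```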

17.
This article describes a method for producing size-biased probability samples as originally proposed by Hanurav (1967) and Vijayan (1968). The complexity of the procedure has led to the development of microcomputer software that greatly facilitates the production of sampling plans as well as the computation of population estimates.

18.
When sampling from a continuous population (or distribution), we often want a rather small sample due to some cost attached to processing the sample or to collecting information in the field. Moreover, a probability sample that allows for design-based statistical inference is often desired. Given these requirements, we want to reduce the sampling variance of the Horvitz–Thompson estimator as much as possible. To achieve this, we introduce different approaches to using the local pivotal method for selecting well-spread samples from multidimensional continuous populations. The results of a simulation study clearly indicate that we succeed in selecting spatially balanced samples and improve the efficiency of the Horvitz–Thompson estimator.
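The local pivotal method is not spelled out in the abstract; the sketch below follows its commonly published form (pick a random undecided unit, pair it with its nearest undecided neighbor, resolve the pair pivotally), with illustrative data:

```python
import numpy as np

def local_pivotal(pi, coords, rng=np.random.default_rng()):
    """Local pivotal method sketch: repeatedly resolve pairs of nearby units
    pivotally, so nearby inclusion indicators become negatively correlated
    and the sample spreads over space. Preserves each unit's inclusion
    probability in expectation."""
    pi = np.asarray(pi, dtype=float).copy()
    eps = 1e-9
    while True:
        undecided = np.where((pi > eps) & (pi < 1 - eps))[0]
        if undecided.size < 2:
            break
        i = rng.choice(undecided)
        others = undecided[undecided != i]
        d = np.linalg.norm(coords[others] - coords[i], axis=1)
        j = others[np.argmin(d)]           # nearest undecided neighbor
        a, b = pi[i], pi[j]
        if a + b < 1:
            if rng.uniform() < b / (a + b):
                pi[i], pi[j] = 0.0, a + b
            else:
                pi[i], pi[j] = a + b, 0.0
        else:
            if rng.uniform() < (1 - b) / (2 - a - b):
                pi[i], pi[j] = 1.0, a + b - 1
            else:
                pi[i], pi[j] = a + b - 1, 1.0
    return np.where(pi > 0.5)[0]           # units whose pi resolved to 1

# Example: 10 spatially balanced units out of 200 random points in the plane.
rng = np.random.default_rng(1)
coords = rng.uniform(size=(200, 2))
sample = local_pivotal(np.full(200, 10 / 200), coords, rng)
```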

19.
20.
This paper considers multiple change-point estimation for the exponential distribution with truncated and censored data, using Gibbs sampling. After all the missing data of interest are filled in by sampling methods such as rejection sampling, the complete-data likelihood function is obtained. The full conditional distributions of all parameters are derived, and the means of the Gibbs samples are taken as the Bayesian estimates of the parameters. The implementation steps of the Gibbs sampler are described in detail. Finally, a simulation study shows that the Bayesian estimates are fairly accurate.
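A hedged single change-point sketch with complete data (the paper treats multiple change-points with truncation and censoring): the exponential rates get conjugate gamma full conditionals and the change-point has a discrete full conditional; all priors and data here are illustrative:

```python
import numpy as np

def gibbs_changepoint(x, n_iter=5000, a=1.0, b=1.0, rng=np.random.default_rng()):
    """Gibbs sampler for one change-point in exponential data:
    x[t] ~ Exp(lam1) for t < k, Exp(lam2) for t >= k, with Gamma(a, b)
    priors on the rates. Both rate conditionals are Gamma; k has a
    discrete full conditional proportional to the likelihood."""
    n = len(x)
    cum = np.concatenate([[0.0], np.cumsum(x)])   # cum[k] = sum of first k obs
    k, lam1, lam2 = n // 2, 1.0, 1.0
    draws = np.empty((n_iter, 3))
    for t in range(n_iter):
        lam1 = rng.gamma(a + k, 1.0 / (b + cum[k]))
        lam2 = rng.gamma(a + n - k, 1.0 / (b + cum[n] - cum[k]))
        ks = np.arange(1, n)                      # candidate change-points
        logp = (ks * np.log(lam1) - lam1 * cum[ks]
                + (n - ks) * np.log(lam2) - lam2 * (cum[n] - cum[ks]))
        p = np.exp(logp - logp.max())
        k = rng.choice(ks, p=p / p.sum())
        draws[t] = (k, lam1, lam2)
    return draws

# Example: rate changes from 1 to 4 after observation 60.
rng = np.random.default_rng(0)
x = np.concatenate([rng.exponential(1.0, 60), rng.exponential(0.25, 40)])
draws = gibbs_changepoint(x, rng=rng)
print(draws[1000:, 0].mean())                     # posterior mean of k, ~60
```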
