Similar Documents
20 similar documents found
1.
The Gibbs sampler, a computer-intensive algorithm, is an important statistical tool in both applied and theoretical work. Because the algorithm is often time-consuming, this paper extends the steady-state ranked simulated sampling approach, used in Monte Carlo methods by Samawi [On the approximation of multiple integrals using steady state ranked simulated sampling, 2010, submitted for publication], to improve the well-known Gibbs sampling algorithm. It is demonstrated that this approach yields unbiased estimators of means and distribution functions and substantially improves the performance and convergence of the Gibbs sampling algorithm, giving a significant reduction in the cost and time required to attain a given level of accuracy. Following Casella and George [Explaining the Gibbs sampler, Am. Statist. 46(3) (1992), pp. 167–174], we provide some analytical properties in simple cases and compare the performance of our method using the same illustrations.
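The two-conditional Gibbs update this line of work builds on can be sketched for a toy bivariate normal target; the target, correlation value, and function names here are illustrative, not taken from the paper:

```python
import random

def gibbs_bivariate_normal(rho, n_iter=5000, seed=1):
    """Gibbs sampler for a standard bivariate normal with correlation rho.
    Each full conditional is N(rho * other, 1 - rho**2)."""
    rng = random.Random(seed)
    sd = (1 - rho ** 2) ** 0.5
    x, y = 0.0, 0.0
    draws = []
    for _ in range(n_iter):
        x = rng.gauss(rho * y, sd)   # draw X | Y = y
        y = rng.gauss(rho * x, sd)   # draw Y | X = x
        draws.append((x, y))
    return draws

draws = gibbs_bivariate_normal(rho=0.8)
```

After discarding a burn-in, the empirical means and correlation of the draws should be close to the target's values of 0 and rho.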

2.
Samawi (1999) showed that the efficiency of Monte Carlo integral estimation can be substantially improved by using ranked simulated samples (RSIS) in place of uniform simulated samples (USIS). This paper shows that further substantial gains in efficiency can be achieved by using the steady-state ranked simulated sample (SRSIS). The modified Monte Carlo methods using SRSIS provide unbiased and more efficient estimators of the integrals. Some theoretical properties of SRSIS are given, and a simulation study compares the performance of methods using SRSIS with that of USIS on several examples.
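A minimal sketch of the basic RSIS idea of Samawi (1999), without the steady-state refinement: for each rank i = 1..n, a fresh set of n uniforms is drawn and only its i-th order statistic is used. Because the order-statistic densities of a U(0,1) sample sum to n, the average stays unbiased for the integral. Function names and the test integrand are invented for illustration:

```python
import random, math

def rsis_integral(g, n, cycles=200, seed=7):
    """Estimate the integral of g over [0, 1] with ranked simulated
    samples (RSIS): for each rank i, draw a fresh set of n uniforms
    and evaluate g only at the i-th smallest.  Averaging over all
    ranks is unbiased and typically less variable than plain
    uniform sampling with the same number of evaluations."""
    rng = random.Random(seed)
    total, count = 0.0, 0
    for _ in range(cycles):
        for i in range(n):
            u = sorted(rng.random() for _ in range(n))[i]
            total += g(u)
            count += 1
    return total / count

est = rsis_integral(math.exp, n=5)   # true value is e - 1
```

The ranking acts much like stratification, which is the source of the variance reduction over USIS reported in the paper.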

3.
Markov chain Monte Carlo methods, in particular the Gibbs sampler, are widely used algorithms in both applied and theoretical work in the classical and Bayesian paradigms. However, these algorithms are often computer intensive. Samawi et al. [Steady-state ranked Gibbs sampler. J. Stat. Comput. Simul. 2012;82(8), 1223–1238. doi:10.1080/00949655.2011.575378] demonstrate through theory and simulation that the dependent steady-state Gibbs sampler is more efficient and accurate in model parameter estimation than the original Gibbs sampler. This paper proposes the independent steady-state Gibbs sampler (ISSGS) approach to improve the original Gibbs sampler in multidimensional problems. It is demonstrated that ISSGS provides accurate, unbiased estimation and improves the performance and convergence of the Gibbs sampler in multidimensional problems.

4.
When measuring units is expensive or time-consuming, while ranking them is relatively easy and inexpensive, it is known that ranked set sampling (RSS) is preferable to simple random sampling (SRS). Many authors have suggested extensions of RSS. As a variation, Al-Saleh and Al-Kadiri [Double ranked set sampling, Statist. Probab. Lett. 48 (2000), pp. 205–212] introduced double ranked set sampling (DRSS), which was extended by Al-Saleh and Al-Omari [Multistage ranked set sampling, J. Statist. Plann. Inference 102 (2002), pp. 273–286] to multistage ranked set sampling (MSRSS). The entropy of a random variable (r.v.) is a measure of its uncertainty: the amount of information required, on average, to determine the value of a (discrete) r.v. In this work, we discuss entropy estimation under the RSS design and the aforementioned extensions and compare the results with those under the SRS design in terms of bias and root mean square error (RMSE). Motivated by the observed efficiency gains, we then investigate an entropy-based goodness-of-fit test for the inverse Gaussian distribution using RSS. Critical values for some sample sizes, determined by Monte Carlo simulation, are presented for each design. A Monte Carlo power analysis is performed under various alternative hypotheses in order to compare the proposed testing procedure with existing methods. The results indicate that tests based on RSS and its extensions are superior alternatives to the entropy test based on SRS.
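The balanced RSS measurement scheme underlying these designs can be sketched as follows; perfect ranking is assumed and the names and target distribution are illustrative:

```python
import random, statistics

def ranked_set_sample(draw, m, cycles, seed=3):
    """Balanced ranked set sampling (RSS): in each cycle, for each
    rank i = 1..m, draw a set of m units and quantify only the i-th
    smallest.  Ranking uses the simulated values themselves here,
    i.e. perfect ranking."""
    rng = random.Random(seed)
    sample = []
    for _ in range(cycles):
        for i in range(m):
            units = sorted(draw(rng) for _ in range(m))
            sample.append(units[i])   # measure only the i-th ranked unit
    return sample

rss = ranked_set_sample(lambda r: r.gauss(0, 1), m=3, cycles=300)
mean_hat = statistics.fmean(rss)
```

Only m units per set are fully measured while m * m are ranked, which is why RSS pays off when ranking is cheap relative to measurement.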

5.
Recent developments in forensic science have led to a proliferation of methods for quantifying the probative value of evidence by constructing a Bayes Factor that allows a decision-maker to select between the prosecution and defense models. Unfortunately, the analytical form of a Bayes Factor is often computationally intractable. A typical approach in statistics uses Monte Carlo integration to numerically approximate the marginal likelihoods composing the Bayes Factor. This article focuses on developing a generally applicable method for characterizing the numerical error associated with Monte Carlo integration techniques used in constructing the Bayes Factor. The derivation of an asymptotic Monte Carlo standard error (MCSE) for the Bayes Factor is presented, and its applicability to quantifying the value of evidence is explored using a simulation-based example involving a benchmark data set. The simulation also explores the effect of prior choice on the Bayes Factor approximations and corresponding MCSEs.
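The Monte Carlo marginal-likelihood approximation and a delta-method MCSE for the resulting Bayes factor can be sketched on a toy pair of normal models; the models, data value, and names are invented for illustration and are not the article's forensic setup:

```python
import random, math, statistics

def normal_pdf(x, mu):
    """Density of N(mu, 1) at x."""
    return math.exp(-0.5 * (x - mu) ** 2) / math.sqrt(2 * math.pi)

def mc_marginal(x, prior_draw, lik, n, rng):
    """Monte Carlo estimate of the marginal likelihood
    m = E_prior[L(x | theta)], with its Monte Carlo standard error."""
    vals = [lik(x, prior_draw(rng)) for _ in range(n)]
    m = statistics.fmean(vals)
    se = statistics.stdev(vals) / math.sqrt(n)
    return m, se

rng = random.Random(11)
x = 1.5                                 # a single toy observation
# Numerator model: mu ~ N(0, 1) prior; denominator model: mu fixed at 0.
m1, se1 = mc_marginal(x, lambda r: r.gauss(0, 1), normal_pdf, 50_000, rng)
m0 = normal_pdf(x, 0.0)                 # analytic here, so no MC error
bf = m1 / m0
mcse_bf = bf * (se1 / m1)               # delta method for the ratio
```

When both marginal likelihoods are estimated by Monte Carlo, the delta-method MCSE combines both relative standard errors; here the denominator is exact, so only the numerator's error enters.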

6.
Two new implementations of the EM algorithm are proposed for maximum likelihood fitting of generalized linear mixed models. Both methods use random (independent and identically distributed) sampling to construct Monte Carlo approximations at the E-step. One approach involves generating random samples from the exact conditional distribution of the random effects (given the data) by rejection sampling, using the marginal distribution as a candidate. The second method uses a multivariate t importance sampling approximation. In many applications the two methods are complementary. Rejection sampling is more efficient when sample sizes are small, whereas importance sampling is better with larger sample sizes. Monte Carlo approximation using random samples allows the Monte Carlo error at each iteration to be assessed by using standard central limit theory combined with Taylor series methods. Specifically, we construct a sandwich variance estimate for the maximizer at each approximate E-step. This suggests a rule for automatically increasing the Monte Carlo sample size after iterations in which the true EM step is swamped by Monte Carlo error. In contrast, techniques for assessing Monte Carlo error have not been developed for use with alternative implementations of Monte Carlo EM algorithms utilizing Markov chain Monte Carlo E-step approximations. Three different data sets, including the infamous salamander data of McCullagh and Nelder, are used to illustrate the techniques and to compare them with the alternatives. The results show that the methods proposed can be considerably more efficient than those based on Markov chain Monte Carlo algorithms. However, the methods proposed may break down when the intractable integrals in the likelihood function are of high dimension.
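The importance-sampling E-step can be sketched in one dimension with a Student-t proposal, using a toy conjugate model chosen so the exact answer E[b | y] = y/2 is known; the model, proposal centring, and names are assumptions for illustration, not the paper's mixed-model setup:

```python
import random, math

def t_pdf(x, df):
    """Density of a Student-t distribution with df degrees of freedom."""
    c = math.gamma((df + 1) / 2) / (math.gamma(df / 2) * math.sqrt(df * math.pi))
    return c * (1 + x * x / df) ** (-(df + 1) / 2)

def is_estep_mean(y, n=50_000, df=3, seed=5):
    """Importance-sampling approximation of the E-step quantity
    E[b | y] for the toy model y | b ~ N(b, 1), b ~ N(0, 1), using a
    heavy-tailed t proposal (the 1-D analogue of a multivariate t
    importance sampler), self-normalized over the weights."""
    rng = random.Random(seed)
    num = den = 0.0
    for _ in range(n):
        # t(df) draw centred at y/2, a rough guess at the posterior mode:
        # Z / sqrt(V / df) with Z ~ N(0,1), V ~ chi-squared(df).
        b = y / 2 + rng.gauss(0, 1) / math.sqrt(rng.gammavariate(df / 2, 2 / df))
        # Weight = unnormalized posterior / proposal density.
        w = math.exp(-0.5 * (y - b) ** 2 - 0.5 * b * b) / t_pdf(b - y / 2, df)
        num += w * b
        den += w
    return num / den

est = is_estep_mean(2.0)   # conjugacy gives E[b | y = 2] = 1 exactly
```

The heavy t tails keep the weights bounded, which is what makes this proposal safer than a normal approximation with the same centre.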

7.
We consider the use of Monte Carlo methods to obtain maximum likelihood estimates for random effects models and distinguish between the pointwise and functional approaches. We explore the relationship between the two approaches and compare them with the EM algorithm. The functional approach is more ambitious, but the approximation is local in nature, which we demonstrate graphically using two simple examples. A remedy is to obtain successively better approximations of the relative likelihood function near the true maximum likelihood estimate. To save computing time, we use only one Newton iteration to approximate the maximiser of each Monte Carlo likelihood and show that this is equivalent to the pointwise approach. The procedure is applied to fit a latent process model to a set of polio incidence data. The paper ends with a comparison between the marginal likelihood and the recently proposed hierarchical likelihood, which avoids integration altogether.

8.
A copula can fully characterize the dependence of multiple variables. The purpose of this paper is to provide a Bayesian nonparametric approach to the estimation of a copula, which we do by mixing over a class of parametric copulas. In particular, we show that any bivariate copula density can be arbitrarily accurately approximated by an infinite mixture of Gaussian copula density functions. The model can be estimated by Markov chain Monte Carlo methods and is demonstrated on both simulated and real data sets.

9.
In this paper, a robust extreme ranked set sampling (RERSS) procedure for estimating the population mean is introduced. It is shown that the proposed method gives an unbiased estimator with smaller variance, provided the underlying distribution is symmetric. However, for asymmetric distributions a weighted mean is given, where the optimal weights are computed by using Shannon's entropy. The performance of the population mean estimator is discussed along with its properties. Monte Carlo simulations are used to demonstrate the performance of the RERSS estimator relative to the simple random sample (SRS), ranked set sampling (RSS) and extreme ranked set sampling (ERSS) estimators. The results indicate that the proposed estimator is more efficient than the estimators based on the traditional sampling methods.
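The basic ERSS measurement scheme that RERSS builds on, without the paper's robust weighting via Shannon's entropy, can be sketched as:

```python
import random, statistics

def erss_sample(draw, m, cycles, seed=9):
    """Extreme ranked set sampling (ERSS): from each set of m units,
    quantify only an extreme, alternating the smallest and the
    largest.  With even m this balances minima and maxima, so for a
    symmetric distribution the sample mean remains unbiased."""
    rng = random.Random(seed)
    sample = []
    for _ in range(cycles):
        for i in range(m):
            units = [draw(rng) for _ in range(m)]
            sample.append(min(units) if i % 2 == 0 else max(units))
    return sample

erss = erss_sample(lambda r: r.gauss(0, 1), m=4, cycles=250)
mean_hat = statistics.fmean(erss)
```

Ranking extremes within a set is usually the easiest judgement to make, which is the practical appeal of ERSS-type designs.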

10.
We investigate the relative performance of stratified bivariate ranked set sampling (SBVRSS), with respect to stratified simple random sampling (SSRS) for estimating the population mean with regression methods. The mean and variance of the proposed estimators are derived with the mean being shown to be unbiased. We perform a simulation study to compare the relative efficiency of SBVRSS to SSRS under various data-generating scenarios. We also compare the two sampling schemes on a real data set from trauma victims in a hospital setting. The results of our simulation study and the real data illustration indicate that using SBVRSS for regression estimation provides more efficiency than SSRS in most cases.

11.
Ranked set sampling (RSS) is an advanced sampling method that is very effective for estimating the population mean when exact measurement of observations is difficult and/or expensive. Balanced Groups RSS (BGRSS) is one of the modifications of RSS in which only the lowest, median and largest ranked units are taken into account. Although BGRSS is advantageous and useful in some specific cases, it has strict restrictions on the set size, which can be problematic for sampling plans. In this study, we improve on BGRSS and propose a new design, called Partial Groups RSS, which offers a more flexible sampling plan by making the set size independent of the sample size. Partial Groups RSS also has a cost advantage over BGRSS. We construct a Monte Carlo simulation study comparing the mean estimators of the proposed sampling design and BGRSS according to their sampling costs and mean squared errors for various types of distributions. In addition, we give a biometric data application to investigate the efficiency of Partial Groups RSS in real-life applications.

12.
The purpose of the current work is to introduce stratified bivariate ranked set sampling (SBVRSS) and investigate its performance for estimating the population mean using both naïve and ratio methods. The properties of the proposed estimator are derived along with the optimal allocation with respect to stratification. We conduct a simulation study to demonstrate the relative efficiency of SBVRSS as compared to stratified bivariate simple random sampling (SBVSRS) for ratio estimation. Data that consist of weights and bilirubin levels in the blood of 120 babies are used to illustrate the procedure on a real data set. Based on our simulation, SBVRSS for ratio estimation is more efficient than using SBVSRS in all cases.

13.
In this paper, proportion estimators and associated variance estimators are proposed for a binary variable with a concomitant variable, based on modified ranked set sampling methods: extreme ranked set sampling (ERSS), median ranked set sampling (MRSS), percentile ranked set sampling (Per-RSS) and L ranked set sampling (LRSS). A Monte Carlo simulation study is performed to compare the performance of the estimators in terms of bias, mean squared error and relative efficiency for different levels of the correlation coefficient and different set and cycle sizes, under normal and log-normal distributions. Moreover, the study is supported by a real data application.

14.
There are two conceptually distinct tasks in Markov chain Monte Carlo (MCMC): a sampler is designed for simulating a Markov chain and then an estimator is constructed on the Markov chain for computing integrals and expectations. In this article, we aim to address the second task by extending the likelihood approach of Kong et al. for Monte Carlo integration. We consider a general Markov chain scheme and use partial likelihood for estimation. Basically, the Markov chain scheme is treated as a random design and a stratified estimator is defined for the baseline measure. Further, we propose useful techniques including subsampling, regulation, and amplification for achieving overall computational efficiency. Finally, we introduce approximate variance estimators for the point estimators. The method can yield substantially improved accuracy compared with Chib's estimator and the crude Monte Carlo estimator, as illustrated with three examples.

15.
In this article, we propose to evaluate and compare Markov chain Monte Carlo (MCMC) methods to estimate the parameters in a generalized extreme value model. We employed the Bayesian approach using traditional Metropolis-Hastings methods, Hamiltonian Monte Carlo (HMC), and Riemann manifold HMC (RMHMC) methods to obtain the approximations to the posterior marginal distributions of interest. Applications to real datasets and simulation studies provide evidence that the extra analytical work involved in Hamiltonian Monte Carlo algorithms is compensated by a more efficient exploration of the parameter space.
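A minimal HMC sampler with a leapfrog integrator can be sketched for a 1-D standard normal target; the step size, trajectory length, and names are illustrative, and RMHMC's metric adaptation is not shown:

```python
import random, math

def hmc_standard_normal(n_iter=5000, eps=0.2, n_leap=10, seed=4):
    """Hamiltonian Monte Carlo for a 1-D standard normal target.
    U(q) = q**2 / 2 is the negative log density; leapfrog integrates
    the Hamiltonian dynamics and a Metropolis step corrects the
    discretization error."""
    rng = random.Random(seed)
    grad_U = lambda q: q
    q = 0.0
    draws = []
    for _ in range(n_iter):
        p = rng.gauss(0, 1)                 # resample the momentum
        q_new, p_new = q, p
        p_new -= 0.5 * eps * grad_U(q_new)  # leapfrog: half momentum step
        for _ in range(n_leap - 1):
            q_new += eps * p_new
            p_new -= eps * grad_U(q_new)
        q_new += eps * p_new
        p_new -= 0.5 * eps * grad_U(q_new)
        # Accept/reject on the change in total energy H = U + K.
        dH = (q_new ** 2 - q ** 2) / 2 + (p_new ** 2 - p ** 2) / 2
        if math.log(rng.random()) < -dH:
            q = q_new
        draws.append(q)
    return draws

draws = hmc_standard_normal()
```

The "extra analytical work" the abstract refers to is visible even here: HMC needs the gradient of the log density, and RMHMC additionally needs a position-dependent metric.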

16.
This article describes two bivariate geometric distributions. We investigate characterizations of bivariate geometric distributions using conditional failure rates and study properties of the bivariate geometric distributions. The bivariate models are fitted to real-life data using method-of-moments, maximum likelihood, and Bayes estimators. Two method-of-moments estimators in each bivariate geometric model are compared and evaluated for their performance in terms of the bias vector and covariance matrix. This comparison is done through a Monte Carlo simulation. Chi-square goodness-of-fit tests are used to evaluate model performance.

17.
Estimation of bivariate characteristics using ranked set sampling
The superiority of ranked set sampling (RSS) over simple random sampling (SRS) for estimating the mean of a population is well known. This paper introduces and investigates a bivariate version of RSS for estimating the means of two characteristics simultaneously. It turns out that this technique is always superior to SRS and the usual univariate RSS of the same size. The performance of this procedure for a specific distribution can be evaluated using simulation or numerical computation. For the bivariate normal distribution, the efficiency of the procedure with respect to that of SRS is evaluated exactly for set size m = 2 and 3. The paper shows that the proposed estimator is more efficient than the regression RSS estimators proposed by Yu & Lam (1997) and Chen (2001). Real data that consist of heights and diameters of 399 trees are used to illustrate the procedure. The procedure can be generalized to the case of multiple characteristics.

18.
Monte Carlo methods represent the de facto standard for approximating complicated integrals involving multidimensional target distributions. In order to generate random realizations from the target distribution, Monte Carlo techniques use simpler proposal probability densities to draw candidate samples. The performance of any such method is strictly related to the specification of the proposal distribution, such that unfortunate choices easily wreak havoc on the resulting estimators. In this work, we introduce a layered (i.e., hierarchical) procedure to generate samples employed within a Monte Carlo scheme. This approach ensures that an appropriate equivalent proposal density is always obtained automatically (thus eliminating the risk of a catastrophic performance), although at the expense of a moderate increase in the complexity. Furthermore, we provide a general unified importance sampling (IS) framework, where multiple proposal densities are employed and several IS schemes are introduced by applying the so-called deterministic mixture approach. Finally, given these schemes, we also propose a novel class of adaptive importance samplers using a population of proposals, where the adaptation is driven by independent parallel or interacting Markov chain Monte Carlo (MCMC) chains. The resulting algorithms efficiently combine the benefits of both IS and MCMC methods.
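The deterministic mixture weighting can be sketched with fixed (non-adaptive) Gaussian proposals: each draw is weighted against the whole mixture rather than only the proposal that generated it, which keeps the weights stable even when one proposal is badly placed. The target and proposal settings below are invented for illustration:

```python
import random, math

def dm_mixture_is(target_logpdf, proposals, n_per, seed=2):
    """Multiple importance sampling with deterministic mixture
    weights: every sample x is weighted by
    pi(x) / [(1/J) * sum_j q_j(x)], the full mixture density,
    then the target mean is estimated by self-normalization."""
    rng = random.Random(seed)
    J = len(proposals)
    samples, weights = [], []
    for mu, sd in proposals:
        for _ in range(n_per):
            x = rng.gauss(mu, sd)
            mix = sum(
                math.exp(-0.5 * ((x - m) / s) ** 2) / (s * math.sqrt(2 * math.pi))
                for m, s in proposals) / J
            w = math.exp(target_logpdf(x)) / mix
            samples.append(x)
            weights.append(w)
    wsum = sum(weights)
    return sum(w * x for w, x in zip(weights, samples)) / wsum

# Target: N(3, 1); three deliberately mismatched Gaussian proposals.
target = lambda x: -0.5 * (x - 3) ** 2 - 0.5 * math.log(2 * math.pi)
mean_hat = dm_mixture_is(target, [(-2, 2), (1, 2), (5, 2)], n_per=5000)
```

Weighting against the mixture bounds the weights wherever at least one proposal covers the target, which is the safety property the layered and adaptive schemes in the paper exploit.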

19.
The generation of decision-theoretic Bayesian optimal designs is complicated by the significant computational challenge of minimising an analytically intractable expected loss function over a, potentially, high-dimensional design space. A new general approach for approximately finding Bayesian optimal designs is proposed which uses computationally efficient normal-based approximations to posterior summaries to aid in approximating the expected loss. This new approach is demonstrated on illustrative, yet challenging, examples including hierarchical models for blocked experiments, and experimental aims of parameter estimation and model discrimination. Where possible, the results of the proposed methodology are compared, both in terms of performance and computing time, to results from using computationally more expensive, but potentially more accurate, Monte Carlo approximations. Moreover, the methodology is also applied to problems where the use of Monte Carlo approximations is computationally infeasible.

20.
This article suggests Monte Carlo multiple test procedures that are provably valid in finite samples. These include combination methods originally proposed for independent statistics and further improvements that formalize statistical practice. We also adapt the Monte Carlo test method to noncontinuous combined statistics. The methods suggested are applied to test serial dependence and predictability. In particular, we introduce and analyze new procedures that account for endogenous lag selection. A simulation study illustrates the properties of the proposed methods. Results show that concrete and nonspurious power gains (over standard combination methods) can be achieved through the combined Monte Carlo test approach, and confirm arguments in favor of variance-ratio type criteria.
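The exact Monte Carlo test underlying these procedures, without the combination or endogenous lag-selection layers, can be sketched as follows; the test statistic and sample size are invented for illustration:

```python
import random, statistics

def mc_pvalue(stat, simulate_null, n_sim, seed=6):
    """Monte Carlo test p-value: p = (1 + #{T_sim >= T_obs}) / (N + 1).
    For a continuous test statistic this is exactly valid in finite
    samples: under H0, P(p <= alpha) <= alpha for any N."""
    rng = random.Random(seed)
    exceed = sum(simulate_null(rng) >= stat for _ in range(n_sim))
    return (1 + exceed) / (n_sim + 1)

# Toy example: |mean| of 20 N(0, 1) draws as the test statistic.
def null_stat(rng):
    return abs(statistics.fmean(rng.gauss(0, 1) for _ in range(20)))

p = mc_pvalue(0.9, null_stat, n_sim=999)   # 0.9 lies far in the null tail
```

The "+1" terms count the observed statistic among the simulated ones, which is what delivers exact finite-sample validity; handling ties in noncontinuous statistics is the complication the article addresses.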


Copyright©北京勤云科技发展有限公司  京ICP备09084417号