Similar Articles
20 similar articles found.
1.
We present a versatile Monte Carlo method for estimating multidimensional integrals, with applications to rare-event probability estimation. The method fuses two distinct and popular Monte Carlo simulation methods, Markov chain Monte Carlo (MCMC) and importance sampling, into a single algorithm. We show that for some applied numerical examples the proposed Markov chain importance sampling algorithm performs better than methods based solely on importance sampling or MCMC.
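For intuition, here is a minimal importance sampling sketch for a rare-event probability P(X > γ) with X standard normal, using a proposal shifted into the rare region. It is a generic illustration only, not the fused MCMC/importance sampling algorithm described in the abstract; all settings are hypothetical.

```python
# Plain importance sampling for P(X > gamma), X ~ N(0, 1), with a proposal
# centred on the rare region.  Generic sketch, not the paper's algorithm.
import numpy as np

rng = np.random.default_rng(0)
gamma = 4.0          # rare-event threshold
n = 100_000

# Proposal: N(gamma, 1), shifted into the tail of interest.
x = rng.normal(loc=gamma, scale=1.0, size=n)

# Likelihood ratio  phi_target(x) / phi_proposal(x)  for two unit-variance normals.
log_w = -0.5 * x**2 + 0.5 * (x - gamma) ** 2

estimate = np.mean(np.exp(log_w) * (x > gamma))
print(estimate)      # close to the true tail probability, about 3.17e-05
```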

2.
We propose Bayesian parameter estimation in a multidimensional item response theory model using the Gibbs sampling algorithm. We apply this approach to dichotomous responses to a questionnaire on sleep quality. The analysis helps determine the underlying dimensions.
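As a reminder of the mechanics of Gibbs sampling, the toy sketch below alternates draws from the full conditionals of a bivariate normal. The actual multidimensional item response model in the abstract involves far richer conditionals, so this is only a schematic illustration with made-up settings.

```python
# Toy Gibbs sampler for a standard bivariate normal with correlation rho:
# alternate draws from the two full conditionals.  Illustration only.
import numpy as np

rng = np.random.default_rng(1)
rho, n_iter = 0.8, 5_000
x, y = 0.0, 0.0
draws = np.empty((n_iter, 2))

for t in range(n_iter):
    # Full conditionals: x | y ~ N(rho*y, 1 - rho^2), and symmetrically for y.
    x = rng.normal(rho * y, np.sqrt(1 - rho**2))
    y = rng.normal(rho * x, np.sqrt(1 - rho**2))
    draws[t] = x, y

# Discard burn-in and check the sample mean and correlation.
print(draws[1000:].mean(axis=0), np.corrcoef(draws[1000:].T)[0, 1])
```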

3.
We present a maximum likelihood estimation procedure for the multivariate frailty model. The estimation is based on a Monte Carlo EM algorithm. The expectation step is approximated by averaging over random samples drawn from the posterior distribution of the frailties using rejection sampling. The maximization step reduces to a standard partial likelihood maximization. We also propose a simple rule, based on the relative change in the parameter estimates, for choosing the sample size at each iteration and the stopping time of the algorithm. An important new feature is that absolute convergence of the algorithm is obtained through this sample size determination together with an efficient sampling technique. The method is illustrated using a rat carcinogenesis dataset and data on the vase lifetimes of cut roses. The estimation results are compared with approximate inference based on penalized partial likelihood using these two examples. Unlike penalized partial likelihood estimation, the proposed full maximum likelihood estimation method accounts for all sources of uncertainty when estimating standard errors for the parameters.

4.
The cross-entropy (CE) method is an adaptive importance sampling procedure that has been successfully applied to a diverse range of complicated simulation problems. However, recent research has shown that in some high-dimensional settings, the likelihood ratio degeneracy problem becomes severe and the importance sampling estimator obtained from the CE algorithm becomes unreliable. We consider a variation of the CE method whose performance does not deteriorate as the dimension of the problem increases. We then illustrate the algorithm via a high-dimensional estimation problem in risk management.
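The standard CE recipe that the abstract builds on can be sketched for estimating P(ΣX_i ≥ γ) with X ~ N(0, I), using a mean-shifted normal sampling family. The dimension-robust variant discussed in the paper is not reproduced here, and all settings are illustrative.

```python
# Basic cross-entropy method for a rare-event probability P(sum(X) >= gamma),
# X ~ N(0, I_d), with a mean-shifted normal proposal family.  Minimal sketch.
import numpy as np

rng = np.random.default_rng(2)
d, gamma, n, rho = 10, 20.0, 10_000, 0.1
v = np.zeros(d)                        # current proposal mean

for _ in range(20):                    # adaptive levels
    x = rng.normal(loc=v, scale=1.0, size=(n, d))
    s = x.sum(axis=1)
    level = min(np.quantile(s, 1 - rho), gamma)
    elite = x[s >= level]
    # Likelihood ratios f(x; 0) / f(x; v) for the elite samples.
    w = np.exp(-elite @ v + 0.5 * v @ v)
    v = (w[:, None] * elite).sum(axis=0) / w.sum()   # CE update of the mean
    if level >= gamma:
        break

# Final importance-sampling estimate with the tuned proposal.
x = rng.normal(loc=v, scale=1.0, size=(n, d))
w = np.exp(-x @ v + 0.5 * v @ v)
estimate = np.mean(w * (x.sum(axis=1) >= gamma))
print(estimate)    # compare with the exact value 1 - Phi(gamma / sqrt(d))
```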

5.
An algorithm for sampling from non-log-concave multivariate distributions is proposed that improves the adaptive rejection Metropolis sampling (ARMS) algorithm by incorporating hit-and-run sampling. It is not rare for ARMS to become trapped away from a subspace carrying significant probability in the support of the multivariate distribution. Whereas ARMS updates samples only in directions parallel to the coordinate axes, the proposed method, hit-and-run ARMS (HARARMS), updates samples in arbitrary directions determined by the hit-and-run algorithm, which makes it almost impossible to become trapped in an isolated subspace. HARARMS behaves the same as ARMS in one dimension but is more reliable in multidimensional spaces. Its performance is illustrated by a Bayesian free-knot spline regression example, where it overcomes the well-known 'lethargy' property and decisively finds the globally optimal number and locations of the spline knots.
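The hit-and-run ingredient can be illustrated on its own: pick a uniformly random direction, then update along that line. In the sketch below the one-dimensional move is a plain random-walk Metropolis step rather than ARMS, and the banana-shaped target is an arbitrary example.

```python
# Hit-and-run with a simple 1-D random-walk Metropolis step along each
# sampled direction.  The paper replaces this 1-D step with ARMS.
import numpy as np

def log_target(x):
    # Example: a banana-shaped, non-log-concave 2-D density (unnormalized).
    return -0.5 * (x[0] ** 2 / 10.0 + (x[1] + 0.5 * x[0] ** 2 - 5.0) ** 2)

rng = np.random.default_rng(3)
x = np.zeros(2)
samples = []

for _ in range(20_000):
    d = rng.normal(size=2)
    d /= np.linalg.norm(d)            # uniform direction on the unit circle
    t = rng.normal(scale=1.0)         # proposed step length along the line
    y = x + t * d
    if np.log(rng.uniform()) < log_target(y) - log_target(x):
        x = y                         # Metropolis accept/reject
    samples.append(x.copy())

print(np.array(samples).mean(axis=0))
```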

6.
Layer Sampling
Layer sampling is an algorithm for generating variates from a non-normalized multidimensional distribution p(·). It empirically constructs a majorizing function for p(·) from a sequence of layers. The method first selects a layer based on the previous variate. Next, a sample is drawn from the selected layer, using a method such as rejection sampling. Layer sampling is regenerative. At regeneration times, the layers may be adapted to increase mixing of the Markov chain. Layer sampling may also be used to estimate arbitrary integrals, including normalizing constants.
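Plain rejection sampling under a constant majorizing envelope, which layer sampling refines by building the envelope adaptively from layers, looks like the sketch below (toy target; the bound M is chosen by hand).

```python
# Rejection sampling from an unnormalized density p(x) under a constant
# envelope M over a uniform proposal -- the basic building block that
# layer sampling refines adaptively.  Toy example.
import numpy as np

rng = np.random.default_rng(4)

def p(x):                              # unnormalized target on [0, 1]
    return np.where((0 <= x) & (x <= 1), x ** 2 * (1 - x), 0.0)

M = 0.15                               # upper bound of p on [0, 1] (max is 4/27)
samples = []
while len(samples) < 10_000:
    x = rng.uniform(0, 1)              # proposal q = Uniform(0, 1)
    if rng.uniform(0, M) < p(x):       # accept with probability p(x) / M
        samples.append(x)

print(np.mean(samples))                # accepted draws are Beta(3, 2); mean = 0.6
```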

7.
贺建风 《统计研究》2018,35(4):104-116
In modern sample surveys, calibration estimation improves the precision of estimators by making effective use of auxiliary information, while multiple-frame sampling not only addresses the incomplete coverage of a single sampling frame but also reduces costs at the design stage. This paper combines these two modern approaches to survey estimation and design by introducing calibration estimation into multiple-frame surveys, reducing survey costs while improving estimator precision. We first systematically review traditional multiple-frame estimation methods, organized by the distinction between separate-frame and combined-frame estimators. Then, within the minimum-distance calibration framework, we derive calibration estimators of various forms for the two classes of multiple-frame estimation, depending on the auxiliary information available at the time of the survey. Numerical comparisons show that calibration estimators in multiple-frame surveys are markedly more efficient than traditional estimators. Finally, we summarize the study and discuss prospects for applying this estimation framework in Chinese sampling practice.
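A minimal single-frame calibration sketch may help fix ideas: design weights are adjusted so that weighted auxiliary totals reproduce known population totals under the chi-square (GREG) distance. The data and totals below are hypothetical, and the multiple-frame estimators of the paper are not implemented.

```python
# Linear (chi-square distance) calibration weighting: adjust design weights d
# to weights w so that weighted auxiliary totals match known totals t_x.
# Hypothetical single-frame data.
import numpy as np

rng = np.random.default_rng(5)
n = 200
d = np.full(n, 50.0)                                       # design weights (N = 10,000)
X = np.column_stack([np.ones(n), rng.normal(30, 8, n)])    # intercept + one auxiliary
y = 2.0 * X[:, 1] + rng.normal(0, 5, n)                    # study variable

t_x = np.array([10_000.0, 10_000.0 * 31.0])  # known totals: population size, total of x

# Closed form under the chi-square distance:  w = d * (1 + x' lambda),
# lambda = (X' D X)^(-1) (t_x - X' d).
lam = np.linalg.solve(X.T @ (d[:, None] * X), t_x - X.T @ d)
w = d * (1.0 + X @ lam)

print(X.T @ w)                         # reproduces t_x exactly
print(w @ y, d @ y)                    # calibrated vs Horvitz-Thompson total
```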

8.
A method of maximum likelihood estimation of gross flows from overlapping stratified sample data is developed. The approach taken is model-based and the EM algorithm is used to solve the estimation problem. Inference is thus based on information from the total sample at each time period. This can be contrasted with the conventional approach to gross flows estimation which only uses information from the overlapping sub-sample. An application to estimation of flows of Australian cropping and livestock industries farms into and out of an “at risk” situation over the period 1979–84 is presented, as well as a discussion of extensions to more complex sampling situations.

9.
We consider the problem of estimating a collection of integrals with respect to an unknown finite measure μ from noisy observations of some of the integrals. A new method to carry out Bayesian inference for the integrals is proposed. We use a Dirichlet or Gamma process as a prior for μ, and construct an approximation to the posterior distribution of the integrals using the sampling importance resampling algorithm and samples from a new multidimensional version of a Markov chain by Feigin and Tweedie. We prove that the Markov chain is positive Harris recurrent, and that the approximating distribution converges weakly to the posterior as the sample size increases, under a mild integrability condition. Applications to polymer chemistry and mathematical finance are given.
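The sampling importance resampling step itself is simple; the sketch below applies it to an arbitrary one-dimensional mixture target with a normal proposal, purely as an illustration of the mechanism rather than of the paper's posterior over random measures.

```python
# Sampling importance resampling (SIR): draw from a proposal, weight by
# target/proposal, then resample in proportion to the weights.
import numpy as np

rng = np.random.default_rng(6)
m = 50_000

def log_target(x):                     # unnormalized equal-weight mixture of N(-2,1), N(2,1)
    return np.logaddexp(-0.5 * (x - 2) ** 2, -0.5 * (x + 2) ** 2)

x = rng.normal(0.0, 3.0, size=m)       # proposal N(0, 3^2)
log_w = log_target(x) - (-0.5 * (x / 3.0) ** 2)   # constants cancel after normalization
w = np.exp(log_w - log_w.max())
w /= w.sum()

resampled = rng.choice(x, size=5_000, replace=True, p=w)
print(resampled.mean(), resampled.std())   # roughly 0 and sqrt(5) for this mixture
```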

10.
Complex models can be realized only a limited number of times because of their large computational requirements. Methods exist for generating input parameters for model realizations, including Monte Carlo simulation (MCS) and Latin hypercube sampling (LHS). Recent algorithms such as maximinLHS seek to maximize the minimum distance between model inputs in the multivariate space. A novel extension of Latin hypercube sampling (LHSMDU) for multivariate models is developed here that increases the multidimensional uniformity of the input parameters through sequential realization elimination. Correlations are incorporated into the LHSMDU sampling matrix using a Cholesky decomposition of the correlation matrix. Computer code implementing the proposed algorithm supplements this article. A simulation study comparing MCS, LHS, maximinLHS and LHSMDU demonstrates that increased multidimensional uniformity can significantly improve realization efficiency and that LHSMDU is effective for large multivariate problems.
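Basic Latin hypercube sampling in [0, 1]^d can be written in a few lines; the LHSMDU extension with sequential realization elimination and the Cholesky-based correlation induction are not reproduced in this sketch.

```python
# Basic Latin hypercube sampling: one point per equal-probability stratum in
# each dimension, with strata randomly permuted across dimensions.
import numpy as np

def latin_hypercube(n, d, rng):
    u = rng.uniform(size=(n, d))                 # jitter within each stratum
    strata = np.array([rng.permutation(n) for _ in range(d)]).T
    return (strata + u) / n                      # n points in [0, 1]^d

rng = np.random.default_rng(7)
X = latin_hypercube(100, 3, rng)
# Each column has exactly one point in each interval [k/n, (k+1)/n).
print(np.sort((X[:, 0] * 100).astype(int))[:10])
```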

11.
Boosting is one of the most important methods for fitting regression models and building prediction rules. A notable feature of boosting is that the technique can be modified such that it includes a built-in mechanism for shrinking coefficient estimates and variable selection. This regularization mechanism makes boosting a suitable method for analyzing data characterized by small sample sizes and large numbers of predictors. We extend the existing methodology by developing a boosting method for prediction functions with multiple components. Such multidimensional functions occur in many types of statistical models, for example in count data models and in models involving outcome variables with a mixture distribution. As will be demonstrated, the new algorithm is suitable for both the estimation of the prediction function and regularization of the estimates. In addition, nuisance parameters can be estimated simultaneously with the prediction function.
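Component-wise L2 boosting with linear base learners illustrates the built-in shrinkage and variable selection mentioned above; the sketch fits a single prediction function to simulated data, so the paper's multi-component extension is not implemented here.

```python
# Component-wise L2 boosting: at each step, the single predictor that best
# fits the current residuals is selected and its coefficient is updated by a
# small step nu, giving shrinkage and variable selection.  Simulated data.
import numpy as np

rng = np.random.default_rng(8)
n, p = 100, 20
X = rng.normal(size=(n, p))
beta_true = np.zeros(p)
beta_true[:3] = [2.0, -1.0, 0.5]
y = X @ beta_true + rng.normal(0, 0.5, n)

nu, n_steps = 0.1, 500
beta = np.zeros(p)
resid = y.copy()
for _ in range(n_steps):
    # Univariate least-squares coefficient of each predictor on the residuals.
    coefs = X.T @ resid / (X ** 2).sum(axis=0)
    sse = ((resid[:, None] - X * coefs) ** 2).sum(axis=0)
    j = np.argmin(sse)                 # best-fitting component
    beta[j] += nu * coefs[j]
    resid -= nu * coefs[j] * X[:, j]

print(np.round(beta, 2))               # mostly zeros outside the first three entries
```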

12.
Length-biased sampling, widely encountered in fields including economics, engineering, epidemiology, the health sciences, technology, and wildlife management, generates biased and right-censored data that nevertheless often provide the best information available for statistical inference. Unlike traditional right-censored data, length-biased data have unique features arising from their sampling procedure. We exploit these features and propose a general imputation-based estimation method for analyzing length-biased data under a class of flexible semiparametric transformation models. We present new computational algorithms that jointly estimate the regression coefficients and the baseline function semiparametrically. The imputation-based method under the transformation model yields an unbiased estimator regardless of whether the censoring depends on the covariates. We establish large-sample properties using empirical process methods. Simulation studies show that, for small to moderate sample sizes, the proposed procedure has smaller mean squared errors than two existing estimation procedures. Finally, we illustrate the estimation procedure with a real data example.

13.
艾小青 《统计教育》2010,(1):29-32,36
Using the example of estimating a proportion, this paper illustrates how different sampling philosophies, statistical schools, and estimation methods are applied in sampling inference and what characterizes each, with particular attention to Bayesian and maximum likelihood ideas under the model-based sampling framework. The paper shows that, within statistics, the same problem can be understood and solved from many different perspectives.
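A toy version of the proportion example may be useful: the maximum likelihood estimate is the sample proportion, while a Bayesian analysis with a Beta prior yields a full posterior. The data below are simulated, not taken from the paper.

```python
# Estimating a proportion from a simple random sample: maximum likelihood
# versus a Bayesian Beta-Binomial analysis.  Hypothetical data.
import numpy as np

rng = np.random.default_rng(9)
y = rng.binomial(1, 0.3, size=50)      # 50 binary responses, true proportion 0.3

# Maximum likelihood: the sample proportion.
p_mle = y.mean()

# Bayesian: Beta(1, 1) prior  ->  Beta(1 + sum(y), 1 + n - sum(y)) posterior.
a, b = 1 + y.sum(), 1 + len(y) - y.sum()
p_post_mean = a / (a + b)
ci = np.quantile(rng.beta(a, b, size=100_000), [0.025, 0.975])

print(p_mle, p_post_mean, ci)
```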

14.
The retrieval of wind vectors from satellite scatterometer observations is a non-linear inverse problem. A common approach to solving inverse problems is to adopt a Bayesian framework and to infer the posterior distribution of the parameters of interest given the observations by using a likelihood model relating the observations to the parameters, and a prior distribution over the parameters. We show how Gaussian process priors can be used efficiently with a variety of likelihood models, using local forward (observation) models and direct inverse models for the scatterometer. We present an enhanced Markov chain Monte Carlo method to sample from the resulting multimodal posterior distribution. We go on to show how the computational complexity of the inference can be controlled by using a sparse, sequential Bayes algorithm for estimation with Gaussian processes. This helps to overcome the most serious barrier to the use of probabilistic, Gaussian process methods in remote sensing inverse problems, which is the prohibitively large size of the data sets. We contrast the sampling results with the approximations that are found by using the sparse, sequential Bayes algorithm.

15.
Combinatorial estimation is a new area of application for sequential Monte Carlo methods. We use ideas from sampling theory to introduce new without-replacement sampling methods in such discrete settings. These without-replacement sampling methods allow the addition of merging steps, which can significantly improve the resulting estimators. We give examples showing the use of the proposed methods in combinatorial rare-event probability estimation and in discrete state-space models.

16.
We propose a novel Bayesian nonparametric (BNP) model, built on a class of species sampling models, for estimating density functions of temporal data. In particular, we introduce species sampling mixture models with temporal dependence. To accommodate temporal dependence, we define dependent species sampling models by modeling the random support points and weights through an autoregressive model, and we then construct mixture models based on the collection of these dependent species sampling models. We propose an algorithm to generate posterior samples and present simulation studies comparing the performance of the proposed models with competitors based on Dirichlet process mixture models. We apply our method to estimating densities for apartment prices in Seoul, the closing price of the Korea Composite Stock Price Index (KOSPI), and climate variables (daily maximum temperature and precipitation) around the Korean peninsula.

17.
We discuss posterior sampling for two distinct multivariate generalisations of the univariate autoregressive integrated moving average (ARIMA) model with fractional integration. The existing approach to Bayesian estimation, introduced by Ravishanker & Ray, claims to provide a posterior-sampling algorithm for fractionally integrated vector autoregressive moving averages (FIVARMAs). We show that this algorithm produces posterior draws for vector autoregressive fractionally integrated moving averages (VARFIMAs), a model of independent interest that has not previously received attention in the Bayesian literature.

18.
金勇进  张喆 《统计研究》2014,31(9):79-84
Weights play an essential role when inferring population characteristics from sample data. They not only scale the sample up to the population but also adjust the sample structure to match that of the population, so the correct use of weights is the foundation of statistical inference. This paper gives a systematic account of how weights are obtained in survey analysis and how the initial weights are subsequently adjusted. Because weights are a double-edged sword that can increase estimator error even as they improve precision, we propose methods for evaluating weights and discuss how to control them. An empirical analysis based on data from the Chinese General Social Survey (CGSS) shows that the proposed approach both improves estimation precision and reduces the weighting effect in sampling inference.
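One common way to quantify the weighting effect mentioned above is Kish's unequal-weighting design effect, 1 + cv(w)^2, together with the implied effective sample size. The sketch below computes it before and after trimming extreme weights, using made-up weights rather than CGSS data.

```python
# Kish's unequal-weighting design effect and effective sample size, before
# and after trimming extreme weights.  Hypothetical weights.
import numpy as np

rng = np.random.default_rng(10)
w = rng.lognormal(mean=0.0, sigma=0.8, size=1_000)   # skewed adjusted weights

def deff_w(weights):
    cv2 = weights.var() / weights.mean() ** 2        # squared coefficient of variation
    return 1.0 + cv2

w_trim = np.minimum(w, np.quantile(w, 0.99))         # trim the top 1% of weights
w_trim *= w.sum() / w_trim.sum()                     # rescale to preserve the total

for label, ww in [("raw", w), ("trimmed", w_trim)]:
    print(label, deff_w(ww), len(ww) / deff_w(ww))   # design effect and effective n
```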

19.
Network tomography is concerned with reconstruction and estimation of properties of traffic flow that are linked to the observed data through an underdetermined linear system. The likelihood function for such problems can be expressed only as the sum over integer-valued points in a convex polytope. Typically this set is too large to enumerate, so that statistical inference must proceed by sampling from the polytope. Recent progress has seen the development of polytope sampling algorithms that operate well when the network link-path incidence matrix is totally unimodular. In this paper we examine whether this property is likely to hold in practical applications. We find that total unimodularity is assured for certain simple networks, but that it can fail in more complex cases. We show that when it does fail, the existing polytope samplers may not generate the requisite irreducible Markov chain. As a remedy, a modified algorithm is proposed in which the basis for the polytope adapts to ensure adequate mixing over the entire polytope. The operation of this algorithm is illustrated by a numerical example.

20.
We improve, by using quasi-random sequences, a Monte Carlo algorithm that computes accurate approximations of smooth functions in terms of multidimensional Tchebychef polynomials. We first show that with these sequences the previous algorithm converges twice as fast. We then slightly modify the algorithm so that it works from a single set of random or quasi-random points. In particular, this yields a quasi-Monte Carlo method with an increased rate of convergence for numerical integration.
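A crude comparison of plain Monte Carlo and quasi-Monte Carlo integration with a hand-rolled Halton sequence shows the kind of convergence gain quasi-random sequences provide; it does not implement the paper's Tchebychef-polynomial approximation scheme, and the test function is arbitrary.

```python
# Monte Carlo vs quasi-Monte Carlo integration of a smooth function on
# [0, 1]^2, using a simple Halton sequence (bases 2 and 3).
import numpy as np

def van_der_corput(n, base):
    x = np.zeros(n)
    for i in range(n):
        k, f = i + 1, 1.0
        while k > 0:
            f /= base
            x[i] += f * (k % base)     # radical-inverse digit expansion
            k //= base
    return x

def halton(n):                         # 2-D Halton points
    return np.column_stack([van_der_corput(n, 2), van_der_corput(n, 3)])

f = lambda u: np.cos(np.pi * u[:, 0]) * u[:, 1] ** 2 + 1.0   # exact integral = 1
rng = np.random.default_rng(11)
n = 4_096

mc = f(rng.uniform(size=(n, 2))).mean()
qmc = f(halton(n)).mean()
print(abs(mc - 1.0), abs(qmc - 1.0))   # the QMC error is typically much smaller
```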
