Similar Literature
20 similar documents were retrieved (search time: 15 ms).
1.
We obtain approximate Bayes–confidence intervals for a scalar parameter based on directed likelihood. The posterior probabilities of these intervals agree with their unconditional coverage probabilities to fourth order, and with their conditional coverage probabilities to third order. These intervals are constructed for arbitrary smooth prior distributions. A key feature of the construction is that log-likelihood derivatives beyond second order are not required, unlike the asymptotic expansions of Severini.

2.
It has been indicated by some researchers in the literature that it might be difficult to determine exactly the minimum sample size for estimating a binomial parameter with a prescribed margin of error and confidence level. In this paper, we investigate this old but extremely important problem and demonstrate that the difficulty of obtaining the exact solution is not insurmountable. Unlike the classical approximate sample size method based on the central limit theorem, we develop a new approach for computing the minimum sample size that does not require any approximation. Moreover, our approach overcomes the conservatism of existing rigorous sample size methods derived from Bernoulli's theorem or the Chernoff-Hoeffding bound. Our computational machinery consists of two essential ingredients. First, we prove that the minimum of the coverage probability with respect to a binomial parameter bounded in an interval is attained at a discrete set of finitely many values of the binomial parameter. This allows infinitely many evaluations of the coverage probability to be reduced to finitely many evaluations. Second, a recursive bounding technique is developed to further improve the efficiency of the computation.
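The abstract gives only the outline of the exact computation; as a rough illustration of the gap it describes, the Python sketch below contrasts the classical CLT-based sample size with a brute-force search that checks worst-case coverage over a fine grid of p values. The grid and the upward search from the CLT value are simplifying assumptions of this sketch; the paper instead evaluates coverage only at finitely many exactly characterised points and accelerates the search with a recursive bounding technique.

```python
import math
import numpy as np
from scipy.stats import norm, binom

def clt_sample_size(eps, delta):
    """Classical sample size from the central limit theorem:
    n >= z_{delta/2}^2 / (4 eps^2), using the worst case p = 1/2."""
    z = norm.ppf(1 - delta / 2)
    return math.ceil(z ** 2 / (4 * eps ** 2))

def worst_coverage(n, eps, grid):
    """Worst-case P(|X/n - p| <= eps) over a grid of p values, X ~ Binomial(n, p)."""
    lo = np.ceil(n * (grid - eps)) - 1          # CDF just below the lower count
    hi = np.floor(n * (grid + eps))
    cov = binom.cdf(hi, n, grid) - binom.cdf(lo, n, grid)
    return cov.min()

def min_sample_size(eps, delta, grid_size=2000):
    """Search upward from the CLT value for an n whose worst-case coverage on the
    grid is >= 1 - delta.  (Heuristic for this sketch only: coverage is not
    monotone in n, and the paper uses exact evaluation points plus recursive bounds.)"""
    grid = np.linspace(0.001, 0.999, grid_size)
    n = clt_sample_size(eps, delta)
    while worst_coverage(n, eps, grid) < 1 - delta:
        n += 1
    return n

print("CLT approximation:", clt_sample_size(0.05, 0.05))
print("grid-based search:", min_sample_size(0.05, 0.05))
```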

3.
This paper addresses estimation of the unknown scale parameter of the half-logistic distribution based on a Type-I progressively hybrid censoring scheme. We evaluate the maximum likelihood estimate (MLE) via a numerical method and the EM algorithm, and also obtain the approximate maximum likelihood estimate (AMLE). We use a modified acceptance-rejection method to obtain the Bayes estimate and the corresponding highest posterior density credible intervals. We perform Monte Carlo simulations to compare the performance of the different methods, and we analyze one dataset for illustrative purposes.

4.
In this work, we propose a method for estimating the Hurst index, or memory parameter, of a stationary process with long memory in a Bayesian fashion. This approach provides an approximation to the posterior distribution of the memory parameter and is based on a simple application of so-called approximate Bayesian computation (ABC), also known as the likelihood-free method. Some popular existing estimators are reviewed and compared with this method for fractional Brownian motion, for a long-range dependent binary process and for the Rosenblatt process. The proposed method proves remarkably efficient.
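The abstract does not spell out the simulator, prior or summary statistics; the sketch below is a generic ABC rejection scheme of the kind described, applied to fractional Gaussian noise (the increments of fractional Brownian motion). The Cholesky-based simulator, the uniform prior on (0.5, 1) and the use of low-order sample autocorrelations as summaries are illustrative assumptions, not the paper's choices.

```python
import numpy as np

def fgn_cov(n, H):
    """Autocovariance matrix of fractional Gaussian noise with Hurst index H."""
    k = np.arange(n)
    gamma = 0.5 * (np.abs(k + 1) ** (2 * H) - 2 * np.abs(k) ** (2 * H)
                   + np.abs(k - 1) ** (2 * H))
    i, j = np.meshgrid(k, k)
    return gamma[np.abs(i - j)]

def simulate_fgn(n, H, rng):
    """Exact simulation via the Cholesky factor of the covariance matrix."""
    L = np.linalg.cholesky(fgn_cov(n, H) + 1e-10 * np.eye(n))
    return L @ rng.standard_normal(n)

def summary(x, lags=(1, 2, 5, 10)):
    """Assumed summary statistic: sample autocorrelations at a few lags."""
    x = x - x.mean()
    denom = np.dot(x, x)
    return np.array([np.dot(x[:-l], x[l:]) / denom for l in lags])

def abc_posterior(x_obs, n_sims=2000, quantile=0.01, rng=None):
    """ABC rejection: keep the H draws whose simulated summaries are closest
    to the observed one (a likelihood-free posterior approximation)."""
    rng = rng or np.random.default_rng(0)
    s_obs = summary(x_obs)
    draws = rng.uniform(0.5, 1.0, n_sims)                 # prior on the long-memory range
    dists = np.array([np.linalg.norm(summary(simulate_fgn(len(x_obs), H, rng)) - s_obs)
                      for H in draws])
    keep = dists <= np.quantile(dists, quantile)
    return draws[keep]

rng = np.random.default_rng(1)
x_obs = simulate_fgn(256, 0.75, rng)                      # synthetic "observed" series
post = abc_posterior(x_obs)
print("ABC posterior mean of H:", post.mean())
```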

5.
It has long been asserted that in univariate location-scale models, when concerned with inference for either the location or scale parameter, the use of the inverse of the scale parameter as a Bayesian prior yields posterior credible sets that have exactly the correct frequentist confidence set interpretation. This claim dates to at least Peers, and has subsequently been noted by various authors, with varying degrees of justification. We present a simple, direct demonstration of the exact matching property of the posterior credible sets derived under use of this prior in the univariate location-scale model. This is done by establishing an equivalence between the conditional frequentist and posterior densities of the pivotal quantities on which conditional frequentist inferences are based.
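For the Gaussian location-scale case, the exact matching property is easy to verify numerically: under the prior π(μ, σ) ∝ 1/σ the marginal posterior of μ is x̄ + (s/√n) t_{n-1}, so the equal-tailed credible interval coincides with the classical t-interval and its frequentist coverage is exact. The check below is a sketch of this special case only, not of the paper's general conditional argument.

```python
import numpy as np
from scipy.stats import t as student_t

def credible_interval(x, level=0.95):
    """Equal-tailed posterior credible interval for mu under pi(mu, sigma) ~ 1/sigma:
    the marginal posterior of mu is xbar + (s/sqrt(n)) * t_{n-1}."""
    n, xbar, s = len(x), x.mean(), x.std(ddof=1)
    q = student_t.ppf(1 - (1 - level) / 2, df=n - 1)
    return xbar - q * s / np.sqrt(n), xbar + q * s / np.sqrt(n)

def coverage(mu=1.0, sigma=2.0, n=10, level=0.95, reps=20000, seed=0):
    """Monte Carlo frequentist coverage of the posterior credible interval."""
    rng = np.random.default_rng(seed)
    hits = 0
    for _ in range(reps):
        x = rng.normal(mu, sigma, n)
        lo, hi = credible_interval(x, level)
        hits += lo <= mu <= hi
    return hits / reps

print(coverage())   # close to 0.95 up to Monte Carlo error
```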

6.
We consider the issue of sampling from the posterior distribution of exponential random graph (ERG) models and other statistical models with intractable normalizing constants. Existing methods based on exact sampling are either infeasible or require very long computing time. We study a class of approximate Markov chain Monte Carlo (MCMC) sampling schemes that deal with this issue. We also develop a new Metropolis–Hastings kernel to sample sparse large networks from ERG models. We illustrate the proposed methods on several examples.

7.
In this article, we present a procedure for approximate negative binomial tolerance intervals. We utilize a well-studied approach for approximating tolerance intervals in the binomial and Poisson settings, which is based on a confidence interval for the parameter of the respective distribution. A simulation study is performed to assess the coverage probabilities and expected widths of the tolerance intervals. The simulation study also compares eight different confidence interval approaches for the negative binomial proportion. We recommend for practical use the approaches that perform best in our simulation results. The method is also illustrated using two real data examples.
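As a rough illustration of the general recipe (not the paper's specific construction), the sketch below follows the binomial/Poisson analogue: form a confidence interval for the negative binomial probability p and plug its endpoints into the distribution's quantiles. The Wald-type interval and the parametrisation by the number of failures before the r-th success are assumptions of this sketch; the paper compares eight different interval choices.

```python
import numpy as np
from scipy.stats import nbinom, norm

def wald_ci_p(r, x, level=0.95):
    """Assumed Wald-type confidence interval for the negative binomial probability p,
    based on x observed failures before the r-th success (one illustrative choice;
    not necessarily one of the paper's eight intervals)."""
    p_hat = r / (r + x)
    z = norm.ppf(1 - (1 - level) / 2)
    se = np.sqrt(p_hat ** 2 * (1 - p_hat) / r)            # from the Fisher information
    return max(p_hat - z * se, 1e-9), min(p_hat + z * se, 1 - 1e-9)

def nb_tolerance_interval(r, x, content=0.90, level=0.95):
    """Two-sided tolerance interval for a future NegBin(r, p) count: plug the CI
    endpoints for p into the quantiles so the interval conservatively captures
    at least `content` of the population."""
    p_lo, p_hi = wald_ci_p(r, x, level)
    alpha = 1 - content
    lower = nbinom.ppf(alpha / 2, r, p_hi)                # larger p -> smaller counts
    upper = nbinom.ppf(1 - alpha / 2, r, p_lo)            # smaller p -> heavier upper tail
    return int(lower), int(upper)

print(nb_tolerance_interval(r=20, x=35))
```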

8.
In this paper we propose fast approximate methods for computing posterior marginals in spatial generalized linear mixed models. We consider the common geostatistical case with a high dimensional latent spatial variable and observations at known registration sites. The methods of inference are deterministic, using no simulation-based inference. The first proposed approximation is fast to compute and is 'practically sufficient', meaning that results do not show any bias or dispersion effects that might affect decision making. Our second approximation, an improvement of the first version, is 'practically exact', meaning that one would have to run MCMC simulations for very much longer than is typically done to detect any indication of error in the approximate results. For small-count data the approximations are slightly worse, but still very accurate. Our methods are limited to likelihood functions that give unimodal full conditionals for the latent variable. The methods help to expand the future scope of non-Gaussian geostatistical models as illustrated by applications of model choice, outlier detection and sampling design. The approximations take seconds or minutes of CPU time, in sharp contrast to overnight MCMC runs for solving such problems.

9.
In this article, Bayes estimates of the two-parameter gamma distribution are considered. It is well known that the Bayes estimators of the two-parameter gamma distribution do not have a compact form. In this paper, it is assumed that the scale parameter has a gamma prior and the shape parameter has any log-concave prior, and that they are independently distributed. Under these priors, we use the Gibbs sampling technique to generate samples from the posterior density function. Based on the generated samples, we compute the Bayes estimates of the unknown parameters and construct HPD credible intervals. We also compute approximate Bayes estimates using Lindley's approximation under the assumption of a gamma prior on the shape parameter. Monte Carlo simulations are performed to compare the performance of the Bayes estimators with that of the classical estimators. One dataset is analyzed for illustrative purposes. We further discuss Bayesian prediction of a future observation based on the observed sample, and it is seen that the Gibbs sampling technique can be used quite effectively for estimating the posterior predictive density and for constructing predictive intervals for order statistics from the future sample.
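A minimal sketch of this kind of sampling scheme is given below, under illustrative prior choices: a Gamma(a, b) prior on the rate β and an exponential (hence log-concave) prior on the shape α. The β-update is the conjugate Gibbs step; for α, the sketch substitutes a random-walk Metropolis step on log α in place of exact sampling from the log-concave full conditional, so it is Metropolis-within-Gibbs rather than the paper's pure Gibbs sampler.

```python
import numpy as np
from scipy.special import gammaln
from scipy.stats import gamma as gamma_dist

def gibbs_gamma(x, n_iter=5000, a=1.0, b=1.0, c=1.0, step=0.3, seed=0):
    """Metropolis-within-Gibbs for Gamma(alpha, beta) data (beta = rate).
    Illustrative priors: beta ~ Gamma(a, rate=b), alpha ~ Exp(c) (log-concave)."""
    rng = np.random.default_rng(seed)
    n, sx, slx = len(x), x.sum(), np.log(x).sum()

    def log_cond_alpha(alpha, beta):
        # full conditional of alpha up to a constant
        return n * alpha * np.log(beta) + (alpha - 1) * slx - n * gammaln(alpha) - c * alpha

    alpha, beta = 1.0, 1.0
    draws = np.empty((n_iter, 2))
    for t in range(n_iter):
        # conjugate update for the rate: Gamma(a + n*alpha, rate = b + sum(x))
        beta = rng.gamma(a + n * alpha, 1.0 / (b + sx))
        # random-walk Metropolis update for the shape on the log scale
        prop = alpha * np.exp(step * rng.standard_normal())
        log_acc = (log_cond_alpha(prop, beta) - log_cond_alpha(alpha, beta)
                   + np.log(prop) - np.log(alpha))        # Jacobian of the log transform
        if np.log(rng.uniform()) < log_acc:
            alpha = prop
        draws[t] = alpha, beta
    return draws[n_iter // 2:]                            # drop burn-in

x = gamma_dist.rvs(a=2.5, scale=1 / 1.5, size=100, random_state=42)  # true alpha=2.5, rate=1.5
post = gibbs_gamma(x)
print("posterior means (alpha, beta):", post.mean(axis=0))
print("equal-tailed 95% intervals (the paper uses HPD):",
      np.percentile(post, [2.5, 97.5], axis=0))
```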

10.
Elimination of a nuisance variable is often non-trivial and may involve the evaluation of an intractable integral. One approach to evaluating these integrals is to use the Laplace approximation. This paper concentrates on a new approximation, called the partial Laplace approximation, which is useful when the integrand can be partitioned into two multiplicative disjoint functions. The technique is applied to the linear mixed model, showing that the approximate likelihood obtained can be partitioned to provide a conditional likelihood for the location parameters and a marginal likelihood for the scale parameters equivalent to restricted maximum likelihood (REML). Similarly, the partial Laplace approximation is applied to the t-distribution to obtain an approximate REML for its scale parameter. A simulation study reveals that, in comparison to maximum likelihood, the scale parameter estimates of the t-distribution obtained from the approximate REML show reduced bias.

11.
We introduce two classes of multivariate log-skewed distributions with normal kernel: the log canonical fundamental skew-normal (log-CFUSN) and the log unified skew-normal. We also discuss some properties of the log-CFUSN family of distributions. These new classes of log-skewed distributions include the log-normal and multivariate log-skew-normal families as particular cases. We discuss some issues related to Bayesian inference in the log-CFUSN family of distributions, focusing mainly on how to model the prior uncertainty about the skewing parameter. Based on the stochastic representation of the log-CFUSN family, we propose a data augmentation strategy for sampling from the posterior distributions. The proposed family is used to analyse the US national monthly precipitation data. We conclude that a high-dimensional skewing function leads to a better model fit.

12.
In this article we propose a novel non-parametric sampling approach to estimate the posterior distributions of parameters of interest. Starting from an initial sample over the parameter space, the method uses this initial information to form a geometrical structure, a Voronoi tessellation, over the whole parameter space. This rough approximation to the posterior distribution provides a way to generate new points from the posterior distribution without any additional costly model evaluations. By running a traditional Markov chain Monte Carlo (MCMC) sampler over the non-parametric tessellation, the initial approximate distribution is refined sequentially. We apply this method to two climate models to show that this hybrid scheme successfully approximates the posterior distribution of the model parameters.
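The core idea, running a standard MCMC sampler over a cheap piecewise-constant surrogate defined by the Voronoi cells of an initial design, can be sketched as follows. The toy two-dimensional Gaussian target, the fixed design and the absence of the sequential refinement step are all simplifications relative to the scheme described in the abstract.

```python
import numpy as np
from scipy.spatial import cKDTree

def log_post(theta):
    """Stand-in 'expensive' log posterior (a correlated 2-d Gaussian here)."""
    a, b = theta
    return -0.5 * (a ** 2 + b ** 2 - 1.2 * a * b) / (1 - 0.36)

# Initial design: the only points at which the expensive model is evaluated.
rng = np.random.default_rng(0)
design = rng.uniform(-4, 4, size=(400, 2))
design_logp = np.array([log_post(p) for p in design])
tree = cKDTree(design)                       # nearest neighbour == Voronoi cell lookup

def surrogate_logp(theta):
    """Piecewise-constant surrogate: the log posterior of the design point whose
    Voronoi cell contains theta (i.e. its nearest neighbour)."""
    _, idx = tree.query(theta)
    return design_logp[idx]

def metropolis(logp, n_iter=20000, step=0.5, seed=1):
    """Random-walk Metropolis over the surrogate; no new model evaluations needed."""
    rng = np.random.default_rng(seed)
    theta, lp = np.zeros(2), logp(np.zeros(2))
    chain = np.empty((n_iter, 2))
    for t in range(n_iter):
        prop = theta + step * rng.standard_normal(2)
        lp_prop = logp(prop)
        if np.log(rng.uniform()) < lp_prop - lp:
            theta, lp = prop, lp_prop
        chain[t] = theta
    return chain

chain = metropolis(surrogate_logp)
print("surrogate posterior mean:", chain.mean(axis=0))
```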

13.
This paper discusses five methods for constructing approximate confidence intervals for the binomial parameter θ, based on Y successes in n Bernoulli trials. In a recent paper, Chen (1990) discusses various approximate methods and suggests a new method based on a Bayes argument, which we call method I here. Methods II and III are based on the normal approximation without and with continuity correction. Method IV uses the Poisson approximation to the binomial distribution and then exploits the fact that exact confidence limits for the parameter of the Poisson distribution can be found through the χ2 distribution. The confidence limits of method IV are then provided by the Wilson-Hilferty approximation of the χ2. Similarly, the exact confidence limits for the binomial parameter can be expressed through the F distribution. Method V approximates these limits through a suitable version of the Wilson-Hilferty approximation. We undertake a comparison of the five methods with respect to coverage probability and expected length. The results indicate that method V has an advantage over Chen's Bayes method as well as over the other three methods.
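For concreteness, the sketch below implements methods II and III (the normal approximation without and with continuity correction) and, for reference, the exact limits via the beta/F relationship that methods IV and V approximate through Wilson-Hilferty; methods I, IV and V themselves are not reproduced.

```python
import numpy as np
from scipy.stats import norm, beta

def wald_ci(y, n, level=0.95, continuity=False):
    """Method II / III: normal approximation without or with continuity correction."""
    z = norm.ppf(1 - (1 - level) / 2)
    p = y / n
    half = z * np.sqrt(p * (1 - p) / n) + (0.5 / n if continuity else 0.0)
    return max(0.0, p - half), min(1.0, p + half)

def clopper_pearson_ci(y, n, level=0.95):
    """Exact limits via the beta (equivalently F) distribution; these are the
    limits that the paper's methods IV and V approximate via Wilson-Hilferty."""
    a = 1 - level
    lo = 0.0 if y == 0 else beta.ppf(a / 2, y, n - y + 1)
    hi = 1.0 if y == n else beta.ppf(1 - a / 2, y + 1, n - y)
    return lo, hi

for name, ci in [("Wald", wald_ci(7, 30)),
                 ("Wald + continuity corr.", wald_ci(7, 30, continuity=True)),
                 ("exact (Clopper-Pearson)", clopper_pearson_ci(7, 30))]:
    print(f"{name:>24s}: ({ci[0]:.3f}, {ci[1]:.3f})")
```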

14.
In this paper, we consider maximum likelihood and Bayes estimation of the scale parameter of the half-logistic distribution based on a multiply Type II censored sample. However, neither the maximum likelihood estimator (MLE) nor the Bayes estimator of the scale parameter exists in explicit form. We consider a simple method of deriving an explicit estimator by approximating the likelihood function, and we discuss the asymptotic variances of the MLE and the approximate MLE. An approximation based on the Laplace approximation (Tierney & Kadane, 1986) is used to obtain the Bayes estimator. Monte Carlo simulation is used to compare the MLE, the approximate MLE and the Bayes estimates of the scale parameter.

15.
In this article, we develop a new kernel density estimator for a sum of weighted averages from a single population, based on combining the well-defined kernel density estimator with classic inversion theory. The idea is further developed into a kernel density estimator for the difference of weighted averages from two independent populations. The resulting estimator is “bootstrap-like” in its properties with respect to the derivation of approximate confidence intervals via a “plug-in” approach. This approach is distinct from the bootstrap methodology in that it is analytically and computationally feasible to provide an exact estimate of the distribution function through direct calculation. Thus, our approach eliminates the error due to Monte Carlo resampling that arises within simulation-based approaches, which are often necessary for deriving bootstrap-based confidence intervals for statistics involving weighted averages of i.i.d. random variables. We provide several examples and carry out a simulation study to show that our kernel density estimator performs better than the standard central limit theorem based approximation in terms of coverage probability.

16.
We study the Jeffreys prior and its properties for the shape parameter of univariate skew-t distributions with linear and nonlinear Student's t skewing functions. In both cases, we show that the resulting priors for the shape parameter are symmetric around zero and proper. Moreover, we propose a Student's t approximation of the Jeffreys prior that makes an objective Bayesian analysis easy to perform. We carry out a Monte Carlo simulation study that demonstrates an overall better behaviour of the maximum a posteriori estimator compared with the maximum likelihood estimator. We also compare the frequentist coverage of the credible intervals based on the Jeffreys prior and its approximation and show that they are similar. We further discuss location-scale models under scale mixtures of skew-normal distributions and show some conditions for the existence of the posterior distribution and its moments. Finally, we present three numerical examples to illustrate the implications of our results on inference for skew-t distributions.

17.
This paper studies lower confidence limits for response probabilities based on a sensitivity testing data set. A saddlepoint approximation to a conditional distribution is developed. Based on it, we give a modified algorithm to find approximate confidence limits for the parameter of interest. A simulation study shows that the saddlepoint approximation with proper corrections gives better coverage probability than the direct saddlepoint approximation and the asymptotic normality approximation. Finally, we apply the proposed approximation to a real data set.

18.
We present a maximum likelihood estimation procedure for the multivariate frailty model. The estimation is based on a Monte Carlo EM algorithm. The expectation step is approximated by averaging over random samples drawn from the posterior distribution of the frailties using rejection sampling. The maximization step reduces to a standard partial likelihood maximization. We also propose a simple rule based on the relative change in the parameter estimates to decide on sample size in each iteration and a stopping time for the algorithm. An important new concept is acquiring absolute convergence of the algorithm through sample size determination and an efficient sampling technique. The method is illustrated using a rat carcinogenesis dataset and data on vase lifetimes of cut roses. The estimation results are compared with approximate inference based on penalized partial likelihood using these two examples. Unlike the penalized partial likelihood estimation, the proposed full maximum likelihood estimation method accounts for all the uncertainty while estimating standard errors for the parameters.

19.
Bootstrap smoothed (bagged) parameter estimators have been proposed as an improvement on estimators found after preliminary data-based model selection. A result of Efron in 2014 is a very convenient and widely applicable formula for a delta method approximation to the standard deviation of the bootstrap smoothed estimator. This approximation provides an easily computed guide to the accuracy of this estimator. In addition, Efron considered a confidence interval centred on the bootstrap smoothed estimator, with width proportional to the estimate of this approximation to the standard deviation. We evaluate this confidence interval in the scenario of two nested linear regression models, the full model and a simpler model, and a preliminary test of the null hypothesis that the simpler model is correct. We derive computationally convenient expressions for the ideal bootstrap smoothed estimator and the coverage probability and expected length of this confidence interval. In terms of coverage probability, this confidence interval outperforms the post-model-selection confidence interval with the same nominal coverage and based on the same preliminary test. We also compare the performance of the confidence interval centred on the bootstrap smoothed estimator, in terms of expected length, to the usual confidence interval, with the same minimum coverage probability, based on the full model.
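A minimal sketch of the two ingredients is given below: the bootstrap-smoothed (bagged) estimator and Efron's (2014) delta-method standard deviation, sd = (Σ_j cov_j²)^{1/2}, where cov_j is the bootstrap covariance between the resampling count of observation j and the estimator. The pretest estimator for two nested linear regressions is a toy stand-in, and the finite-B bootstrap here only approximates the ideal smoothed estimator analysed in the paper.

```python
import numpy as np

def pretest_estimate(y, X, test_col=2, target_col=1, crit=1.96):
    """Post-model-selection (pretest) estimator of one regression coefficient:
    fit the full model; if the t-statistic of column `test_col` is small,
    refit the simpler model without it.  A toy stand-in for the paper's setting."""
    n, p = X.shape
    bhat, *_ = np.linalg.lstsq(X, y, rcond=None)
    sigma2 = np.sum((y - X @ bhat) ** 2) / (n - p)
    se = np.sqrt(sigma2 * np.linalg.inv(X.T @ X)[test_col, test_col])
    if abs(bhat[test_col] / se) < crit:                   # keep the simpler model
        keep = [j for j in range(p) if j != test_col]
        bhat_s, *_ = np.linalg.lstsq(X[:, keep], y, rcond=None)
        return bhat_s[keep.index(target_col)]
    return bhat[target_col]

def bagged_estimate_and_sd(y, X, B=2000, seed=0):
    """Bootstrap-smoothed estimator and Efron's (2014) delta-method sd:
    sd = sqrt(sum_j cov_j^2), with cov_j the bootstrap covariance between the
    resampling count of observation j and the bootstrap estimates."""
    rng = np.random.default_rng(seed)
    n = len(y)
    counts = np.empty((B, n))
    t_star = np.empty(B)
    for b in range(B):
        idx = rng.integers(0, n, n)
        counts[b] = np.bincount(idx, minlength=n)
        t_star[b] = pretest_estimate(y[idx], X[idx])
    cov_j = ((counts - counts.mean(axis=0)) * (t_star - t_star.mean())[:, None]).mean(axis=0)
    return t_star.mean(), np.sqrt(np.sum(cov_j ** 2))

rng = np.random.default_rng(3)
n = 80
X = np.column_stack([np.ones(n), rng.standard_normal(n), rng.standard_normal(n)])
y = X @ np.array([1.0, 0.5, 0.2]) + rng.standard_normal(n)
smoothed, sd = bagged_estimate_and_sd(y, X)
print(f"bagged estimate = {smoothed:.3f}, Efron delta-method sd = {sd:.3f}")
```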

20.
Reference priors are theoretically attractive for the analysis of geostatistical data since they enable automatic Bayesian analysis and have desirable Bayesian and frequentist properties. But their use is hindered by computational hurdles that make their application in practice challenging. In this work, we derive a new class of default priors that approximate reference priors for the parameters of some Gaussian random fields. It is based on an approximation to the integrated likelihood of the covariance parameters derived from the spectral approximation of stationary random fields. This prior depends on the structure of the mean function and the spectral density of the model evaluated at a set of spectral points associated with an auxiliary regular grid. In addition to preserving the desirable Bayesian and frequentist properties, these approximate reference priors are more stable, and their computations are much less onerous than those of exact reference priors. Unlike exact reference priors, the marginal approximate reference prior of the correlation parameter is always proper, regardless of the mean function or the smoothness of the correlation function. This property has important consequences for covariance model selection. An illustration comparing default Bayesian analyses is provided with a dataset of lead pollution in Galicia, Spain.
