Similar literature
 20 similar records found (search time: 31 ms)
1.
The full Bayesian significance test (FBST) was introduced by Pereira and Stern for measuring the evidence of a precise null hypothesis. The FBST requires both numerical optimization and multidimensional integration, whose computational cost may be heavy when testing a precise null hypothesis on a scalar parameter of interest in the presence of a large number of nuisance parameters. In this paper we propose a higher order approximation of the measure of evidence for the FBST, based on tail area expansions of the marginal posterior of the parameter of interest. When, in particular, the focus is on matching priors, further results are highlighted. Numerical illustrations are discussed.
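As a rough illustration of the FBST e-value (not the paper's higher-order approximation), the sketch below computes the evidence for a precise null by Monte Carlo in a toy conjugate setting with a Beta posterior and no nuisance parameters; the function names and the Beta example are my own.

```python
import math
import random

def log_beta_pdf_kernel(theta, a, b):
    # Unnormalized log density of Beta(a, b); the normalizing constant
    # cancels in the density comparison below.
    return (a - 1) * math.log(theta) + (b - 1) * math.log(1 - theta)

def fbst_evidence(a, b, theta0, n_draws=20000, seed=1):
    """Monte Carlo e-value for H0: theta = theta0 under a Beta(a, b) posterior.

    The tangential set is T = {theta : p(theta | x) > p(theta0 | x)};
    the evidence in favor of H0 is ev = 1 - Pr(T | x).
    """
    rng = random.Random(seed)
    log_p0 = log_beta_pdf_kernel(theta0, a, b)
    hits = sum(
        1
        for _ in range(n_draws)
        if log_beta_pdf_kernel(rng.betavariate(a, b), a, b) > log_p0
    )
    return 1.0 - hits / n_draws

# Posterior Beta(50, 50): the posterior mode is 0.5, so H0: theta = 0.5
# receives full support, while theta0 = 0.2 lies deep in the tail.
ev_null = fbst_evidence(50, 50, 0.5)
ev_far = fbst_evidence(50, 50, 0.2)
```

The paper's contribution is precisely to avoid this kind of brute-force integration when many nuisance parameters are present.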

2.
This article deals with the issue of using a suitable pseudo-likelihood, instead of an integrated likelihood, when performing Bayesian inference about a scalar parameter of interest in the presence of nuisance parameters. The proposed approach has the advantages of avoiding prior elicitation on the nuisance parameters and the computation of multidimensional integrals. Moreover, it is particularly useful when it is difficult, or even impractical, to write the full likelihood function.

We focus on Bayesian inference about a scalar regression coefficient in various regression models. First, in the context of non-normal regression-scale models, we give a theoretical result showing that there is no loss of information about the parameter of interest when using a posterior distribution derived from a pseudo-likelihood instead of the correct posterior distribution. Second, we present nontrivial applications with high-dimensional, or even infinite-dimensional, nuisance parameters in the context of nonlinear normal heteroscedastic regression models, and of models for binary outcomes and count data, accounting also for possible overdispersion. In all these situations, we show that non-Bayesian methods for eliminating nuisance parameters can be usefully incorporated into a one-parameter Bayesian analysis.

3.
For testing a scalar interest parameter in a large sample asymptotic context, methods with third-order accuracy are now available that make a reduction to the simple case having a scalar parameter and scalar variable. For such simple models on the real line, we develop canonical versions that correspond closely to an exponential model and to a location model; these canonical versions are obtained by standardizing and reexpressing the variable and the parameters, the needed steps being given in algorithmic form. The exponential and location approximations have three parameters, two corresponding to the pure-model type and one for departure from that type. We also record the connections among the common test quantities: the signed likelihood departure, the standardized score variable, and the location-scale corrected signed likelihood ratio. These connections are for a fixed data point and would bear on the effectiveness of the quantities for inference with the particular data; an earlier paper recorded the connections for a fixed parameter value, which would bear on distributional properties.

4.
Inference for a scalar parameter in the presence of nuisance parameters requires high-dimensional integrations of the joint density of the pivotal quantities. Recent developments in asymptotic methods provide accurate approximations for significance levels, and thus confidence intervals, for a scalar component parameter. In this paper, a simple, efficient and accurate numerical procedure is first developed for the location model and is then extended to the location-scale model and the linear regression model. This numerical procedure requires only a fine tabulation of the parameter and the observed log-likelihood function, which can be the full, marginal or conditional observed log-likelihood function, as input; the output is the corresponding significance function. Numerical results show that this approximation is not only simple but also very accurate, outperforming the usual approximations such as the signed likelihood ratio statistic, the maximum likelihood estimate and the score statistic.
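The tabulation-to-significance-function idea can be sketched as follows: given a grid of parameter values and the tabulated log-likelihood, return p(theta) = Phi(r(theta)) with r the signed likelihood root. This uses only the first-order approximation, not the paper's higher-order refinement, and the example model is my own choice.

```python
import math
from statistics import NormalDist

def significance_function(theta_grid, loglik):
    """First-order significance function Phi(r(theta)) from a fine
    tabulation of the log-likelihood over theta_grid."""
    l_hat = max(loglik)
    i_hat = loglik.index(l_hat)          # grid index of the MLE
    phi = NormalDist().cdf
    sig = []
    for theta, l in zip(theta_grid, loglik):
        r = math.copysign(math.sqrt(max(2 * (l_hat - l), 0.0)),
                          theta_grid[i_hat] - theta)
        sig.append(phi(r))
    return sig

# Exponential(rate) log-likelihood with n = 10 and sum = 10, tabulated
# on a fine grid; the MLE is at rate = 1.
grid = [0.2 + 0.001 * k for k in range(1800)]
ll = [10 * math.log(lam) - 10 * lam for lam in grid]
p = significance_function(grid, ll)
```

The significance function decreases monotonically through 0.5 at the MLE; its level crossings give confidence limits directly, which is the sense in which the tabulated likelihood is the only input needed.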

5.
This paper synthesizes a global approach to both Bayesian and likelihood treatments of the estimation of the parameters of a hidden Markov model in the cases of normal and Poisson distributions. The first step of this global method is to construct a non-informative prior based on a reparameterization of the model; this prior is to be considered as a penalizing and bounding factor from a likelihood point of view. The second step takes advantage of the special structure of the posterior distribution to build up a simple Gibbs algorithm. The maximum likelihood estimator is then obtained by an iterative procedure replicating the original sample until the corresponding Bayes posterior expectation stabilizes on a local maximum of the original likelihood function.

6.
Asymptotic cumulants of the maximum likelihood estimator of the canonical parameter in the exponential family are obtained up to the fourth order with the added higher-order asymptotic variance. In the case of a scalar parameter, the corresponding results with and without studentization are given. These results are also obtained for the estimators by the weighted score, especially for those using the Jeffreys prior. The asymptotic cumulants are used for reducing bias and mean square error to improve a point estimator and for interval estimation to have higher-order accuracy. It is shown that the kurtosis to squared skewness ratio of the sufficient statistic plays a fundamental role.

7.
We discuss higher-order adjustments for a quasi-profile likelihood for a scalar parameter of interest, in order to alleviate some of the problems inherent to the presence of nuisance parameters, such as bias and inconsistency. Indeed, quasi-profile score functions for the parameter of interest have bias of order O(1), and such bias can lead to poor inference on the parameter of interest. The higher-order adjustments are obtained so that the adjusted quasi-profile score estimating function is unbiased and its variance is the negative expected derivative matrix of the adjusted profile estimating equation. The modified quasi-profile likelihood is then obtained as the integral of the adjusted profile estimating function. We discuss two methods for the computation of the modified quasi-profile likelihoods: a bootstrap simulation method and a first-order asymptotic expression, which can be simplified under an orthogonality assumption. Examples in the context of generalized linear models and of robust inference are provided, showing that the use of a modified quasi-profile likelihood ratio statistic may lead to coverage probabilities more accurate than those pertaining to first-order Wald-type confidence intervals.

8.
We obtain approximate Bayes–confidence intervals for a scalar parameter based on directed likelihood. The posterior probabilities of these intervals agree with their unconditional coverage probabilities to fourth order, and with their conditional coverage probabilities to third order. These intervals are constructed for arbitrary smooth prior distributions. A key feature of the construction is that log-likelihood derivatives beyond second order are not required, unlike the asymptotic expansions of Severini.

9.
Deterministic Bayesian inference for latent Gaussian models has recently become available using integrated nested Laplace approximations (INLA). Applying the INLA‐methodology, marginal estimates for elements of the latent field can be computed efficiently, providing relevant summary statistics like posterior means, variances and pointwise credible intervals. In this article, we extend the use of INLA to joint inference and present an algorithm to derive analytical simultaneous credible bands for subsets of the latent field. The algorithm is based on approximating the joint distribution of the subsets by multivariate Gaussian mixtures. Additionally, we present a saddlepoint approximation to compute Bayesian contour probabilities, representing the posterior support of fixed parameter vectors of interest. We perform a simulation study and apply the given methods to two real examples.

10.
This article addresses the various properties and different methods of estimation of the unknown parameter of length- and area-biased Maxwell distributions. Although our main focus is on estimation from both the frequentist and Bayesian points of view, various mathematical and statistical properties of length- and area-biased Maxwell distributions (such as moments, moment-generating function (mgf), hazard rate function, mean residual lifetime function, residual lifetime function, reversed residual life function, conditional moments and conditional mgf, stochastic ordering, and measures of uncertainty) are also derived. We briefly describe different frequentist approaches, namely, the maximum likelihood estimator, moments estimator, least-square and weighted least-square estimators, and maximum product of spacings estimator, and compare them using extensive numerical simulations. Next we consider Bayes estimation under different types of loss function (symmetric and asymmetric) using an inverted gamma prior for the scale parameter. Furthermore, Bayes estimators and their respective posterior risks are computed and compared using a Markov chain Monte Carlo (MCMC) algorithm. Also, bootstrap confidence intervals using frequentist approaches are provided for comparison with Bayes credible intervals. Finally, a real dataset has been analyzed for illustrative purposes.

11.
Relative surprise inferences are based on how beliefs change from a priori to a posteriori. As they are based on the posterior distribution of the integrated likelihood, inferences of this type are invariant under relabellings of the parameter of interest. The authors demonstrate that these inferences possess a certain optimality property. Further, they develop computational techniques for implementing them, provided that algorithms are available to sample from the prior and posterior distributions.

12.
In most hierarchical Bayes cases the posterior distributions are difficult to derive and cannot be obtained in closed form. In some special cases, however, it is possible to obtain the exact moments of the posterior distributions.

By applying these moments and Pearson curves or Cornish-Fisher expansions to real problems, good approximations of the exact posterior distributions of individual parameter values, as well as of linear combinations of parameter values, can easily be obtained.
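A minimal sketch of the moment-based route: given the exact posterior mean, standard deviation and skewness, a third-order Cornish-Fisher expansion approximates posterior quantiles without knowing the posterior in closed form. The function name is my own, and this shows only the skewness term of the expansion, not the Pearson-curve alternative.

```python
from statistics import NormalDist

def cornish_fisher_quantile(p, mean, sd, skew):
    """Approximate p-quantile from the first three moments via the
    third-order Cornish-Fisher expansion:
        w = z + (z^2 - 1) * skew / 6,   z = Phi^{-1}(p),
    so the quantile is mean + sd * w."""
    z = NormalDist().inv_cdf(p)
    w = z + (z * z - 1) * skew / 6.0   # skewness correction to the normal quantile
    return mean + sd * w

# With zero skewness this reduces to the usual normal quantile; positive
# skewness pushes the upper quantile outward.
q_sym = cornish_fisher_quantile(0.975, 0.0, 1.0, 0.0)
q_skew = cornish_fisher_quantile(0.975, 0.0, 1.0, 0.5)
```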

13.

Point estimators for a scalar parameter of interest in the presence of nuisance parameters can be defined as zero-level confidence intervals as explained in Skovgaard (1989). A natural implementation of this approach is based on estimating equations obtained from higher-order pivots for the parameter of interest. In this paper, generalising the results in Pace and Salvan (1999) outside exponential families, we take as an estimating function the modified directed likelihood. This is a higher-order pivotal quantity that can be easily computed in practice for a wide range of models, using recent advances in higher-order asymptotics (HOA, 2000). The estimators obtained from these estimating equations are a refinement of the maximum likelihood estimators, improving their small sample properties and keeping equivariance under reparameterisation. Simple explicit approximate versions of these estimators are also derived and have the form of the maximum likelihood estimator plus a function of derivatives of the log-likelihood function. Some examples and simulation studies are discussed for widely used model classes.

14.
Confidence intervals for a single parameter are spanned by quantiles of a confidence distribution, and one‐sided p‐values are cumulative confidences. Confidence distributions are thus a unifying format for representing frequentist inference for a single parameter. The confidence distribution, which depends on data, is exact (unbiased) when its cumulative distribution function evaluated at the true parameter is uniformly distributed over the unit interval. A new version of the Neyman–Pearson lemma is given, showing that the confidence distribution based on the natural statistic in exponential models with continuous data is less dispersed than all other confidence distributions, regardless of how dispersion is measured. Approximations are necessary for discrete data, and also in many models with nuisance parameters. Approximate pivots might then be useful. A pivot based on a scalar statistic determines a likelihood in the parameter of interest along with a confidence distribution. This proper likelihood is reduced of all nuisance parameters, and is appropriate for meta‐analysis and updating of information. The reduced likelihood is generally different from the confidence density. Confidence distributions and reduced likelihoods are rooted in Fisher–Neyman statistics. This frequentist methodology has many of the Bayesian attractions, and the two approaches are briefly compared. Concepts, methods and techniques of this brand of Fisher–Neyman statistics are presented. Asymptotics and bootstrapping are used to find pivots and their distributions, and hence reduced likelihoods and confidence distributions. A simple form of inverting bootstrap distributions to approximate pivots of the abc type is proposed. Our material is illustrated in a number of examples and in an application to multiple capture data for bowhead whales.
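The "quantiles span confidence intervals" claim is easiest to see in the textbook normal-mean case with known variance, where the confidence distribution is available in closed form; the sketch below is my own illustration of that special case, not the paper's general construction.

```python
from math import sqrt
from statistics import NormalDist

def confidence_distribution(mu, xbar, sigma, n):
    """Confidence distribution C(mu) for the mean of N(mu, sigma^2),
    sigma known: C(mu) = Phi(sqrt(n) * (mu - xbar) / sigma).
    Its p-quantiles are one-sided confidence limits, and 1 - C(mu0)
    is the one-sided p-value for H0: mu <= mu0."""
    return NormalDist().cdf(sqrt(n) * (mu - xbar) / sigma)

xbar, sigma, n = 1.2, 2.0, 25
# The 2.5% and 97.5% confidence quantiles span the usual 95% interval.
lo = xbar + NormalDist().inv_cdf(0.025) * sigma / sqrt(n)
hi = xbar + NormalDist().inv_cdf(0.975) * sigma / sqrt(n)
```

Here C is exact in the paper's sense: evaluated at the true mean it is uniformly distributed on the unit interval.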

15.
We present a practical way to find matching priors via the use of saddlepoint approximations and obtain p-values of tests of an interest parameter in the presence of nuisance parameters. The advantages of our procedure are the flexibility in choosing different initial conditions, so that one may adjust the performance of a test, and the less intensive computational effort compared with a Markov chain Monte Carlo method.

16.
This paper applies the general third-order theory to the log-normal regression model, where the interest parameter is the conditional mean. For inference, traditional first-order approximations need large sample sizes and normal-like distributions. Some specific third-order methods need the explicit forms of the nuisance parameter and ancillary statistic, which are quite complicated. Note that this general third-order theory can be applied to any continuous model with standard asymptotic properties; it needs only the log-likelihood function. In small-sample settings, simulation studies for confidence intervals of the conditional mean illustrate that the general third-order theory is much superior to the traditional first-order methods.

17.
In influence analysis, several problems arise in the field of principal components when different sample versions are applied. Among these are the difficulty of determining a correspondence between the eigenvalues before and after the deletion of observations, the choice of the sign of the eigenvectors, and the computational burden of solving a great number of eigenproblems. In this article, such problems are discussed from the joint-influence point of view and a solution based on approximations is proposed. Furthermore, influence on a new parameter of interest is introduced: the proportion of variance explained by a set of principal components.

18.
Scoring rules give rise to methods for statistical inference and are useful tools to achieve robustness or reduce computations. Scoring rule inference is generally performed through first-order approximations to the distribution of the scoring rule estimator or of the ratio-type statistic. In order to improve the accuracy of first-order methods even in simple models, we propose bootstrap adjustments of signed scoring rule root statistics for a scalar parameter of interest in the presence of nuisance parameters. The method relies on the parametric bootstrap approach, which avoids the onerous calculations specific to analytical adjustments. Numerical examples illustrate the accuracy of the proposed method.
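The parametric bootstrap adjustment of a signed root statistic can be sketched in a toy case: here the root is the ordinary signed likelihood root in an i.i.d. exponential model with no nuisance parameters (the paper treats general scoring rule roots with nuisance parameters), and the p-value is calibrated by simulating from the model fitted under the null. Function names are my own.

```python
import math
import random

def signed_root(data, lam0):
    # Signed likelihood root for H0: rate = lam0 in an i.i.d. exponential model
    n, s = len(data), sum(data)
    lam_hat = n / s
    llr = 2 * (n * math.log(lam_hat) - lam_hat * s
               - (n * math.log(lam0) - lam0 * s))
    return math.copysign(math.sqrt(max(llr, 0.0)), lam_hat - lam0)

def bootstrap_pvalue(data, lam0, n_boot=2000, seed=7):
    """One-sided parametric bootstrap p-value: simulate samples from the
    model fitted under H0 and count how often the resampled root exceeds
    the observed one."""
    rng = random.Random(seed)
    r_obs = signed_root(data, lam0)
    n = len(data)
    exceed = sum(
        1
        for _ in range(n_boot)
        if signed_root([rng.expovariate(lam0) for _ in range(n)], lam0) >= r_obs
    )
    return exceed / n_boot

# With data whose MLE equals lam0, the observed root is 0 and the
# bootstrap p-value sits near one half.
p = bootstrap_pvalue([1.0] * 10, 1.0)
```

Replacing the N(0, 1) reference distribution of the root with its bootstrap distribution under the fitted model is what spares the analytical higher-order corrections.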

19.
In this article we consider the sample size determination problem in the context of robust Bayesian parameter estimation of the Bernoulli model. Following a robust approach, we consider classes of conjugate Beta prior distributions for the unknown parameter. We assume that inference is robust if posterior quantities of interest (such as point estimates and limits of credible intervals) do not change too much as the prior varies in the selected classes of priors. For the sample size problem, we consider criteria based on predictive distributions of the lower bound, upper bound and range of the posterior quantity of interest. The sample size is selected so that, before observing the data, one is confident to observe a small value for the posterior range and, depending on design goals, a large (small) value of the lower (upper) bound of the quantity of interest. We also discuss relationships with, and comparison to, non-robust and non-informative Bayesian methods.

20.
Threshold methods for multivariate extreme values are based on the use of asymptotically justified approximations of both the marginal distributions and the dependence structure in the joint tail. Models derived from these approximations are fitted to a region of the observed joint tail which is determined by suitably chosen high thresholds. A drawback of the existing methods is the necessity for the same thresholds to be taken for the convergence of both marginal and dependence aspects, which can result in inefficient estimation. In this paper an extension of the existing models, which removes this constraint, is proposed. The resulting model is semi-parametric and requires computationally intensive techniques for likelihood evaluation. The methods are illustrated using a coastal engineering application.
