Similar Articles
20 similar articles found.
1.
The theory of higher-order asymptotics provides accurate approximations to posterior distributions for a scalar parameter of interest, and to the corresponding tail area, for practical use in Bayesian analysis. The aim of this article is to extend these approximations to pseudo-posterior distributions, i.e., posterior distributions based on a pseudo-likelihood function and a suitable prior, which prove particularly useful when the full likelihood is analytically or computationally infeasible. In particular, from a theoretical point of view, we derive the Laplace approximation for a pseudo-posterior distribution, and for the corresponding tail area, for a scalar parameter of interest, also in the presence of nuisance parameters. From a computational point of view, starting from these higher-order approximations, we discuss the higher-order tail area (HOTA) algorithm, which is useful for approximating marginal posterior distributions and related quantities. Compared with standard Markov chain Monte Carlo methods, the main advantage of the HOTA algorithm is that it gives independent samples at a negligible computational cost. The relevant computations are illustrated by two examples.
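A minimal sketch of the kind of Laplace approximation this entry builds on (a toy conjugate example for checking, not the HOTA algorithm itself): the posterior mode is found numerically and the curvature at the mode gives a normal approximation. The Gamma prior values `a`, `b` and the data are illustrative assumptions.

```python
import math

# Toy setting: exponential data with rate lam and a conjugate
# Gamma(a, b) prior, so the exact posterior is Gamma(a + n, b + sum(x))
# and the Laplace fit can be checked against a closed-form mode.

def log_post(lam, x, a=2.0, b=1.0):
    """Unnormalised log-posterior for the exponential rate lam."""
    n, s = len(x), sum(x)
    return (a + n - 1) * math.log(lam) - (b + s) * lam

def laplace_fit(x, a=2.0, b=1.0, lo=1e-6, hi=50.0, iters=200):
    """Posterior mode by golden-section search, curvature by a central
    finite difference; returns (mode, sd) of the N(mode, sd^2) fit."""
    phi = (math.sqrt(5) - 1) / 2
    l, h = lo, hi
    for _ in range(iters):  # golden-section maximisation
        m1, m2 = h - phi * (h - l), l + phi * (h - l)
        if log_post(m1, x, a, b) < log_post(m2, x, a, b):
            l = m1
        else:
            h = m2
    mode = (l + h) / 2
    eps = 1e-4 * mode
    d2 = (log_post(mode + eps, x, a, b) - 2 * log_post(mode, x, a, b)
          + log_post(mode - eps, x, a, b)) / eps ** 2
    return mode, math.sqrt(-1.0 / d2)

x = [0.8, 1.3, 0.4, 2.1, 0.9, 1.7]  # toy sample
mode, sd = laplace_fit(x)
# Conjugacy gives the exact posterior mode (a + n - 1) / (b + sum(x)).
exact_mode = (2.0 + len(x) - 1) / (1.0 + sum(x))
```

Tail areas follow by evaluating the normal CDF of the fitted approximation; the higher-order versions discussed above refine exactly this step.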

2.
In semiparametric inference we distinguish between the parameter of interest, which may be a location parameter, and a nuisance parameter that determines the remaining shape of the sampling distribution. As was pointed out by Diaconis and Freedman, the main problem in semiparametric Bayesian inference is to obtain a consistent posterior distribution for the parameter of interest. The present paper considers a semiparametric Bayesian method based on a pivotal likelihood function. It is shown that when the parameter of interest is the median, this method produces a consistent posterior distribution and is easily implemented. Numerical comparisons with classical methods and with Bayesian methods based on a Dirichlet prior are provided. It is also shown that in the case of symmetric intervals, the classical confidence coefficients have a Bayesian interpretation as the limiting posterior probability of the interval based on the Dirichlet prior with a parameter that converges to zero.
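The pivotal idea can be illustrated in a deliberately simplified, distribution-free form (a stand-in for the authors' construction, not their exact likelihood): if m is the true median, the number of observations below m is Binomial(n, 1/2) regardless of the sampling distribution, which yields a likelihood for the median alone.

```python
import math

def sign_loglik(m, x):
    """Log-likelihood for a candidate median m based on the sign pivot:
    the count of observations below the true median is Binomial(n, 1/2)
    whatever the underlying distribution is."""
    n = len(x)
    k = sum(1 for xi in x if xi < m)
    return math.log(math.comb(n, k)) + n * math.log(0.5)

x = [2.1, 3.5, 1.2, 4.8, 2.9, 3.1, 0.7, 5.2, 2.4]  # n = 9, toy data
# The log-likelihood is maximised when k is as close to n/2 as
# possible, i.e. for candidate medians near the sample median.
sample_median = sorted(x)[len(x) // 2]
best = sign_loglik(sample_median, x)
```

Combined with a prior on the median, this kind of nuisance-free likelihood gives a posterior that does not require modelling the rest of the distribution.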

3.
Abstract. This paper deals with the issue of performing a default Bayesian analysis on the shape parameter of the skew-normal distribution. Our approach is based on a suitable pseudo-likelihood function and a matching prior distribution for this parameter, when location (or regression) and scale parameters are unknown. This approach is important for both theoretical and practical reasons. From a theoretical perspective, it is shown that the proposed matching prior is proper, thus inducing a proper posterior distribution for the shape parameter, even when the likelihood is monotone. From a practical perspective, the proposed approach has the advantage of avoiding both prior elicitation for the nuisance parameters and the computation of multidimensional integrals.

4.
Confidence intervals for a single parameter are spanned by quantiles of a confidence distribution, and one-sided p-values are cumulative confidences. Confidence distributions are thus a unifying format for representing frequentist inference for a single parameter. The confidence distribution, which depends on data, is exact (unbiased) when its cumulative distribution function evaluated at the true parameter is uniformly distributed over the unit interval. A new version of the Neyman–Pearson lemma is given, showing that the confidence distribution based on the natural statistic in exponential models with continuous data is less dispersed than all other confidence distributions, regardless of how dispersion is measured. Approximations are necessary for discrete data, and also in many models with nuisance parameters. Approximate pivots might then be useful. A pivot based on a scalar statistic determines a likelihood in the parameter of interest along with a confidence distribution. This proper likelihood is free of all nuisance parameters and is appropriate for meta-analysis and updating of information. The reduced likelihood is generally different from the confidence density. Confidence distributions and reduced likelihoods are rooted in Fisher–Neyman statistics. This frequentist methodology has many of the Bayesian attractions, and the two approaches are briefly compared. Concepts, methods and techniques of this brand of Fisher–Neyman statistics are presented. Asymptotics and bootstrapping are used to find pivots and their distributions, and hence reduced likelihoods and confidence distributions. A simple form of inverting bootstrap distributions to approximate pivots of the abc type is proposed. Our material is illustrated in a number of examples and in an application to multiple capture data for bowhead whales.
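A minimal sketch of the opening claims, in the textbook case of a normal mean with known sigma (toy numbers, not from the paper): the confidence distribution is C(theta) = Phi(sqrt(n)(theta - xbar)/sigma), its quantiles span the usual intervals, and its value at a null point is a one-sided p-value.

```python
import math

def norm_cdf(z):
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def confidence_cdf(theta, xbar, sigma, n):
    """Confidence distribution for a normal mean with known sigma."""
    return norm_cdf(math.sqrt(n) * (theta - xbar) / sigma)

xbar, sigma, n = 1.2, 2.0, 25  # illustrative summary statistics
z975 = 1.959963984540054       # 97.5% standard-normal quantile
lo = xbar - z975 * sigma / math.sqrt(n)  # 2.5% confidence quantile
hi = xbar + z975 * sigma / math.sqrt(n)  # 97.5% confidence quantile
# C(hi) - C(lo) = 0.95: the quantiles span the usual 95% interval,
# and C(0) is the one-sided p-value for H0: theta <= 0.
p_onesided = confidence_cdf(0.0, xbar, sigma, n)
```

In this exactly pivotal case the confidence distribution is itself exact; the approximations discussed above are needed once the pivot is only approximate.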

5.
Effective implementation of likelihood inference in models for high-dimensional data often requires a simplified treatment of nuisance parameters, with these having to be replaced by handy estimates. In addition, the likelihood function may have been simplified by means of a partial specification of the model, as is the case when composite likelihood is used. In such circumstances tests and confidence regions for the parameter of interest may be constructed using Wald type and score type statistics, defined so as to account for nuisance parameter estimation or partial specification of the likelihood. In this paper a general analytical expression for the required asymptotic covariance matrices is derived, and suggestions for obtaining Monte Carlo approximations are presented. The same matrices are involved in a rescaling adjustment of the log likelihood ratio type statistic that we propose. This adjustment restores the usual chi-squared asymptotic distribution, which is generally invalid after the simplifications considered. The practical implication is that, for a wide variety of likelihoods and nuisance parameter estimates, confidence regions for the parameters of interest are readily computable from the rescaled log likelihood ratio type statistic as well as from the Wald type and score type statistics. Two examples, a measurement error model with full likelihood and a spatial correlation model with pairwise likelihood, illustrate and compare the procedures. Wald type and score type statistics may give rise to confidence regions with unsatisfactory shape in small and moderate samples. In addition to having satisfactory shape, regions based on the rescaled log likelihood ratio type statistic show empirical coverage in reasonable agreement with nominal confidence levels.
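In its simplest scalar form, the covariance adjustment referred to here is the sandwich (Godambe) construction H^{-1} J H^{-1}. The sketch below is a toy one-parameter version, assuming a misspecified unit-variance normal working likelihood for a mean, so the naive and adjusted standard errors can be compared directly.

```python
# Working score for the mean mu under a (possibly wrong) unit-variance
# normal working likelihood: u_i(mu) = x_i - mu.

def sandwich_se(x):
    """Godambe-adjusted standard error H^{-1} J H^{-1} / n (square-rooted)."""
    n = len(x)
    mu = sum(x) / n                           # root of sum u_i(mu) = 0
    H = 1.0                                   # sensitivity: -d u_i / d mu
    J = sum((xi - mu) ** 2 for xi in x) / n   # variability of the score
    return (J / (H * H) / n) ** 0.5

def naive_se(x):
    """Standard error if the working model's unit variance is believed."""
    return (1.0 / len(x)) ** 0.5

x = [0.3, -1.2, 2.5, 0.8, -0.4, 3.1, -2.2, 1.0]  # toy data
# The sandwich se recovers s / sqrt(n) (with the 1/n variance divisor),
# while the naive se wrongly fixes the error variance at 1.
```

For vector parameters and composite likelihoods, H and J become the matrices whose general expressions the paper derives, and the same matrices drive the rescaling of the likelihood-ratio-type statistic.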

6.
Suppose a prior is specified only on the interest parameter and a posterior distribution, free from nuisance parameters, is considered on the basis of the profile likelihood or an adjusted version thereof. In this setup, we derive higher order asymptotic results on the construction of confidence intervals that have approximately correct posterior as well as frequentist coverage. Apart from meeting both Bayesian and frequentist objectives under prior specification on the interest parameter alone, these results allow a comparison with their counterpart arising when the nuisance parameters are known, and hence provide additional justification for the Cox and Reid adjustment from a Bayesian-cum-frequentist perspective, with regard to neutralization of unknown nuisance parameters.

7.
In this study, adjustment of the profile likelihood function of the parameter of interest in the presence of many nuisance parameters is investigated for survival regression models. Our objective is to extend Barndorff–Nielsen's technique to Weibull regression models for estimation of the shape parameter in the presence of many nuisance and regression parameters. We conducted Monte Carlo simulation studies and a real data analysis, which demonstrate that the modified profile likelihood estimators outperform the profile likelihood estimators in terms of three comparison criteria: mean squared error, bias and standard error.
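The baseline being modified here is the ordinary profile likelihood for the Weibull shape, with the scale profiled out in closed form (for fixed shape k, the scale MLE satisfies lambda^k = mean of x_i^k). The sketch below shows that unmodified profile step with illustrative data and no regression part; the Barndorff–Nielsen modification adds an adjustment term not shown here.

```python
import math

def weibull_profile_loglik(k, x):
    """Profile log-likelihood for the Weibull shape k: the scale is
    replaced by its conditional MLE, lam(k)^k = mean(x_i^k), so the
    remaining term sum((x_i/lam)^k) collapses to n."""
    n = len(x)
    sk = sum(xi ** k for xi in x) / n  # lam(k)^k
    return (n * math.log(k) - n * math.log(sk)
            + (k - 1) * sum(math.log(xi) for xi in x) - n)

x = [0.9, 1.4, 2.2, 0.6, 1.8, 1.1, 2.5, 0.7]  # toy uncensored times
grid = [0.2 + 0.01 * i for i in range(500)]
k_hat = max(grid, key=lambda kk: weibull_profile_loglik(kk, x))  # profile MLE
```

With many stratum-specific or regression nuisance parameters this profile estimator is biased, which is what the modified profile likelihood corrects.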

8.
In studies that involve censored time-to-event data, stratification is frequently encountered for various reasons, such as stratified sampling or model adjustment due to violation of model assumptions. Often, the main interest is not in the clustering variables, and the cluster-related parameters are treated as nuisance parameters. When inference is about a parameter of interest in the presence of many nuisance parameters, standard likelihood methods often perform very poorly and may lead to severe bias. This problem is particularly evident in models for clustered data with cluster-specific nuisance parameters when the number of clusters is relatively high with respect to the within-cluster size. However, it is still unclear how the presence of censoring affects this issue. We consider clustered failure time data with independent censoring, and propose frequentist inference based on an integrated likelihood. We then apply the proposed approach to a stratified Weibull model. Simulation studies show that appropriately defined integrated likelihoods provide very accurate inferential results in all circumstances, such as for highly clustered data or heavy censoring, even in extreme settings where standard likelihood procedures lead to strongly misleading results. We show that the proposed method generally performs as well as the frailty model, but is superior when the frailty distribution is seriously misspecified. An application, which concerns treatments for a frequent disease in late-stage HIV-infected people, illustrates the proposed inferential method in Weibull regression models and compares different inferential conclusions from alternative methods.

9.
We consider approximate Bayesian inference about scalar parameters of linear regression models with possible censoring. A second-order expansion of their Laplace posterior is seen to have a simple and intuitive form for logconcave error densities with nondecreasing hazard functions. The accuracy of the approximations is assessed for normal and Gumbel errors when the number of regressors increases with sample size. Perturbations of the prior and the likelihood are seen to be easily accommodated within our framework. Links with the work of DiCiccio et al. (1990) and Viveros and Sprott (1987) extend the applicability of our results to conditional frequentist inference based on likelihood-ratio statistics.

10.
The aim of this paper is to extend in a natural fashion the results on the treatment of nuisance parameters from profile likelihood theory to the field of robust statistics. As in the case with no nuisance parameters, the goal is to derive a bounded estimating function for a parameter of interest in the presence of nuisance parameters. The proposed method is based on a classical truncation argument from the theory of robustness, applied to a generalized profile score function. By means of comparative studies, we show that this robust procedure for inference in the presence of a nuisance parameter can be used successfully in a parametric setting.
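The truncation argument, in its classical nuisance-free form, can be sketched as follows (a Huber-type toy example with an assumed clipping constant c = 1.345, not the paper's generalized profile score): the unbounded location score x - mu is clipped, and the root of the resulting bounded estimating function is a robust location estimate.

```python
def clipped_score(mu, x, c=1.345):
    """Bounded estimating function: the score x_i - mu clipped at +/- c."""
    return sum(max(-c, min(c, xi - mu)) for xi in x)

def huber_location(x, c=1.345, tol=1e-10):
    """Solve sum of clipped scores = 0 by bisection; the clipped score
    is non-increasing in mu, so bisection brackets the root."""
    lo, hi = min(x), max(x)
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if clipped_score(mid, x, c) > 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

clean = [0.2, -0.5, 0.9, 0.1, -0.3, 0.6, -0.1, 0.4]
with_outlier = clean + [50.0]
# One gross outlier drags the mean far away, but shifts the Huber
# estimate by at most c / n, because its influence is bounded.
```

The paper's contribution is to carry exactly this boundedness over to a generalized profile score, so the protection holds with nuisance parameters present.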

11.
The full Bayesian significance test (FBST) was introduced by Pereira and Stern for measuring the evidence for a precise null hypothesis. The FBST requires both numerical optimization and multidimensional integration, whose computational cost may be heavy when testing a precise null hypothesis on a scalar parameter of interest in the presence of a large number of nuisance parameters. In this paper we propose a higher-order approximation of the measure of evidence for the FBST, based on tail area expansions of the marginal posterior of the parameter of interest. Further results are highlighted for the particular case of matching priors. Numerical illustrations are discussed.

12.
Approximate Bayesian Inference for Survival Models
Abstract. Bayesian analysis of time-to-event data, usually called survival analysis, has received increasing attention in recent years. In Cox-type models it allows the use of information from the full likelihood instead of a partial likelihood, so that the baseline hazard function and the model parameters can be estimated jointly. In general, Bayesian methods permit full and exact posterior inference for any parameter or predictive quantity of interest. On the other hand, Bayesian inference often relies on Markov chain Monte Carlo (MCMC) techniques which, from the user's point of view, may appear slow at delivering answers. In this article, we show how a new inferential tool named integrated nested Laplace approximations can be adapted and applied to many survival models, making Bayesian analysis both fast and accurate without having to rely on MCMC-based inference.

13.
The conventional Cox proportional hazards regression model contains a loglinear relative risk function, linking the covariate information to the hazard ratio with a finite number of parameters. A generalization, termed the partly linear Cox model, allows for both finite dimensional parameters and an infinite dimensional parameter in the relative risk function, providing a more robust specification of the relative risk function. In this work, a likelihood based inference procedure is developed for the finite dimensional parameters of the partly linear Cox model. To alleviate the problems associated with a likelihood approach in the presence of an infinite dimensional parameter, the relative risk is reparameterized such that the finite dimensional parameters of interest are orthogonal to the infinite dimensional parameter. Inference on the finite dimensional parameters is accomplished through maximization of the profile partial likelihood, profiling out the infinite dimensional nuisance parameter using a kernel function. The asymptotic distribution theory for the maximum profile partial likelihood estimate is established. It is determined that this estimate is asymptotically efficient; the orthogonal reparameterization enables employment of profile likelihood inference procedures without adjustment for estimation of the nuisance parameter. An example from a retrospective analysis in cancer demonstrates the methodology.

14.
Scoring rules give rise to methods for statistical inference and are useful tools for achieving robustness or reducing computation. Scoring rule inference is generally performed through first-order approximations to the distribution of the scoring rule estimator or of the ratio-type statistic. In order to improve the accuracy of first-order methods, even in simple models, we propose bootstrap adjustments of signed scoring rule root statistics for a scalar parameter of interest in the presence of nuisance parameters. The method relies on the parametric bootstrap approach, which avoids the onerous calculations specific to analytical adjustments. Numerical examples illustrate the accuracy of the proposed method.
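The parametric-bootstrap idea can be sketched in its ordinary-likelihood special case (the log score, with no nuisance parameters; toy data and B are illustrative): compute the observed signed root, then calibrate it against signed roots from samples simulated under the null, instead of against the standard normal.

```python
import math
import random

def signed_root_exp(x, theta0):
    """Signed likelihood-root statistic for the mean of exponential data:
    sign(xbar - theta0) * sqrt of the likelihood-ratio statistic."""
    n, xbar = len(x), sum(x) / len(x)
    w = 2 * n * (math.log(theta0 / xbar) + xbar / theta0 - 1)  # LR statistic
    return math.copysign(math.sqrt(max(w, 0.0)), xbar - theta0)

def bootstrap_pvalue(x, theta0, B=2000, seed=1):
    """Parametric-bootstrap upper-tail p-value: simulate under theta0
    and compare bootstrap signed roots with the observed one."""
    rng = random.Random(seed)
    r_obs = signed_root_exp(x, theta0)
    n, hits = len(x), 0
    for _ in range(B):
        xb = [rng.expovariate(1.0 / theta0) for _ in range(n)]
        if signed_root_exp(xb, theta0) >= r_obs:
            hits += 1
    return hits / B
```

The paper's version applies the same calibration to signed roots built from general scoring rules, with nuisance parameters handled in the bootstrap fit.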

15.
We present a method for using posterior samples produced by the computer program BUGS (Bayesian inference Using Gibbs Sampling) to obtain approximate profile likelihood functions of parameters, or functions of parameters, in directed graphical models with incomplete data. The method can also be used to approximate integrated likelihood functions. It is easily implemented and provides a good approximation. The profile likelihood represents an aspect of parameter uncertainty that does not depend on the specification of prior distributions, and it can be used as a worthwhile supplement to BUGS that enables both Bayesian and likelihood-based analyses in directed graphical models.

16.
The aim of this paper is to investigate the robustness properties of likelihood inference with respect to rounding effects. Attention is focused on exponential families and on inference about a scalar parameter of interest, also in the presence of nuisance parameters. A summary value of the influence function of a given statistic, the local-shift sensitivity, is considered. It accounts for small fluctuations in the observations. The main result is that the local-shift sensitivity is bounded for the usual likelihood-based statistics, i.e. the directed likelihood, the Wald and score statistics. It is also bounded for the modified directed likelihood, which is a higher-order adjustment of the directed likelihood. The practical implication is that likelihood inference is expected to be robust with respect to rounding effects. Theoretical analysis is supplemented and confirmed by a number of Monte Carlo studies, performed to assess the coverage probabilities of confidence intervals based on likelihood procedures when data are rounded. In addition, simulations indicate that the directed likelihood is less sensitive to rounding effects than the Wald and score statistics. This provides another criterion for choosing among first-order equivalent likelihood procedures. The modified directed likelihood shows the same robustness as the directed likelihood, so that its gain in inferential accuracy does not come at the price of an increase in instability with respect to rounding.

17.
The conditional likelihood is widely used in logistic regression models with stratified binary data. In particular, it leads to accurate inference for the parameters of interest, which are common to all strata, eliminating stratum-specific nuisance parameters. The modified profile likelihood is an accurate approximation to the conditional likelihood, but has the advantage of being available for general parametric models. Here, we propose the modified profile likelihood as an ideal extension of the conditional likelihood in generalized linear models for binary data, with generic link function. An important feature is that the implementation requires only standard outputs of routines for generalized linear models. The accuracy of the method is supported by theoretical properties and is confirmed by simulation results. This research was supported by MIUR COFIN 2001-2003.

18.
Longitudinal data are commonly modeled with normal mixed-effects models. Most modeling methods are based on traditional mean regression, which yields non-robust estimates in the presence of extreme values or outliers. Median regression is also not the best choice for estimation, especially for non-normal errors. Compared with conventional modeling methods, composite quantile regression can provide robust estimates even for non-normal errors. In this paper, based on a so-called pseudo composite asymmetric Laplace distribution (PCALD), we develop a Bayesian treatment of composite quantile regression for mixed-effects models. Furthermore, using the location-scale mixture representation of the PCALD, we establish a Bayesian hierarchical model and achieve posterior inference for all unknown parameters and latent variables using the Markov chain Monte Carlo (MCMC) method. Finally, the newly developed procedure is illustrated by Monte Carlo simulations and a case analysis of an HIV/AIDS clinical data set.

19.
Lu Lin, Statistical Methodology, 2006, 3(4): 444-455
If the form of the distribution of the data is unknown, the Bayesian method fails in parametric inference because there is no posterior distribution for the parameter. In this paper, a theoretical framework of Bayesian likelihood is introduced via the Hilbert space method, which is free of the distributions of the data and the parameter. The posterior distribution and posterior score function based on given inner products are defined and, consequently, the quasi-posterior distribution and quasi-posterior score function are derived, respectively, as the projections of the posterior distribution and posterior score function onto the space spanned by given estimating functions. In the space spanned by the data, in particular, an explicit representation for the quasi-posterior score function is obtained, which can be derived as a projection of the true posterior score function onto this space. Methods of constructing conservative quasi-posterior scores and quasi-posterior log-likelihoods are proposed. Some examples are given to illustrate the theoretical results. As an application, the quasi-posterior distribution functions are used to select variables for generalized linear models. It is proved that, for linear models, variable selection via quasi-posterior distribution functions is equivalent to variable selection via the penalized residual sum of squares or regression sum of squares.

20.
Abstract

This article is concerned with the comparison of Bayesian and classical testing of a point null hypothesis for the Pareto distribution when there is a nuisance parameter. In the first stage, using a fixed prior distribution, the posterior probability is obtained and compared with the P-value. In the second stage, lower bounds of the posterior probability of H0, under a reasonable class of prior distributions, are compared with the P-value. It is shown that even in the presence of nuisance parameters for the model, these two approaches can lead to different results in statistical inference.
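The disagreement between the two approaches can be sketched in a normal toy model (not the Pareto setting of the paper; the prior odds and the alternative's spread tau are illustrative assumptions): a result that is "significant at 5%" by the P-value can still leave most of the posterior mass on the null.

```python
import math

# H0: theta = 0 vs H1: theta ~ N(0, tau^2); data summarised by
# xbar ~ N(theta, 1/n); prior P(H0) = 1/2.

def norm_pdf(x, var):
    return math.exp(-x * x / (2 * var)) / math.sqrt(2 * math.pi * var)

def norm_cdf(z):
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def p_value(xbar, n):
    """Two-sided P-value for H0: theta = 0."""
    return 2.0 * (1.0 - norm_cdf(math.sqrt(n) * abs(xbar)))

def post_prob_h0(xbar, n, tau=1.0):
    """Posterior probability of H0 with prior odds 1."""
    m0 = norm_pdf(xbar, 1.0 / n)            # marginal of xbar under H0
    m1 = norm_pdf(xbar, 1.0 / n + tau ** 2) # marginal of xbar under H1
    return m0 / (m0 + m1)

n, xbar = 100, 0.196  # chosen so the P-value is approximately 0.05
# p_value(xbar, n) is about 0.05, yet post_prob_h0(xbar, n) exceeds 0.5.
```

This is the Jeffreys-Lindley phenomenon underlying the contrast the article studies; the lower-bound analysis in its second stage quantifies how far the two measures can diverge over a class of priors.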


Copyright © Beijing Qinyun Technology Development Co., Ltd. (Beijing ICP Licence No. 09084417)