Similar Articles
20 similar articles found
1.
We study the Jeffreys prior and its properties for the shape parameter of univariate skew-t distributions with linear and nonlinear Student's t skewing functions. In both cases, we show that the resulting priors for the shape parameter are symmetric around zero and proper. Moreover, we propose a Student's t approximation of the Jeffreys prior that makes an objective Bayesian analysis easy to perform. We carry out a Monte Carlo simulation study that demonstrates an overall better behaviour of the maximum a posteriori estimator compared with the maximum likelihood estimator. We also compare the frequentist coverage of the credible intervals based on the Jeffreys prior and its approximation and show that they are similar. We further discuss location-scale models under scale mixtures of skew-normal distributions and show some conditions for the existence of the posterior distribution and its moments. Finally, we present three numerical examples to illustrate the implications of our results on inference for skew-t distributions.

2.
Bayesian methods are increasingly used in proof-of-concept studies. An important benefit of these methods is the potential to use informative priors, thereby reducing sample size. This is particularly relevant for treatment arms where there is a substantial amount of historical information, such as placebo and active comparators. One issue with using an informative prior is the possibility of a mismatch between the informative prior and the observed data, referred to as prior-data conflict. We focus on two methods for dealing with this: a testing approach and a mixture prior approach. The testing approach assesses prior-data conflict by comparing the observed data to the prior predictive distribution and resorting to a non-informative prior if prior-data conflict is declared. The mixture prior approach uses a prior with a precise and a diffuse component. We assess these approaches for the normal case via simulation and show that they have some attractive features compared with the standard one-component informative prior. For example, when the discrepancy between the prior and the data is sufficiently marked, and intuitively one feels less certain about the results, both the testing and mixture approaches typically yield wider posterior credible intervals than when there is no discrepancy. In contrast, when there is no discrepancy, the results of these approaches are typically similar to those of the standard approach. Whilst for any specific study the operating characteristics of any selected approach should be assessed and agreed at the design stage, we believe these two approaches are each worthy of consideration.
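As a hedged illustration of the mixture-prior idea described above (not the authors' implementation), the sketch below computes the exact posterior for a normal mean under a two-component normal mixture prior with a precise and a diffuse component, assuming a known sampling variance; all numerical values are hypothetical. When the observed mean conflicts with the informative component, the posterior weight shifts towards the diffuse component and the posterior widens.

```python
import numpy as np
from scipy import stats

def mixture_posterior(ybar, n, sigma2, w, mu, tau2):
    """Posterior for a normal mean under a two-component normal mixture prior.

    ybar   : observed sample mean
    n      : sample size
    sigma2 : known sampling variance
    w, mu, tau2 : prior weights, means and variances of the two components
    Returns the updated weights and the component-wise posterior means/variances.
    """
    w, mu, tau2 = map(np.asarray, (w, mu, tau2))
    se2 = sigma2 / n
    # Prior predictive density of ybar under each component
    marg = stats.norm.pdf(ybar, loc=mu, scale=np.sqrt(tau2 + se2))
    post_w = w * marg / np.sum(w * marg)            # updated mixture weights
    post_var = 1.0 / (1.0 / tau2 + 1.0 / se2)       # conjugate normal update per component
    post_mean = post_var * (mu / tau2 + ybar / se2)
    return post_w, post_mean, post_var

# Hypothetical numbers: a precise component centred at 0 and a diffuse component
weights, means, variances = mixture_posterior(ybar=2.5, n=30, sigma2=4.0,
                                              w=[0.8, 0.2], mu=[0.0, 0.0],
                                              tau2=[0.1, 10.0])
print(weights, means, variances)   # the diffuse component gains weight under conflict
```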

3.
This paper deals with the issue of performing a default Bayesian analysis on the shape parameter of the skew-normal distribution. Our approach is based on a suitable pseudo-likelihood function and a matching prior distribution for this parameter, when location (or regression) and scale parameters are unknown. This approach is important for both theoretical and practical reasons. From a theoretical perspective, it is shown that the proposed matching prior is proper, thus inducing a proper posterior distribution for the shape parameter even when the likelihood is monotone. From a practical perspective, the proposed approach has the advantage of avoiding prior elicitation for the nuisance parameters and the computation of multidimensional integrals.

4.
In this paper, we consider the analysis of hybrid censored competing risks data, based on Cox's latent failure time model assumptions. It is assumed that the lifetime distributions of the latent causes of failure follow Weibull distributions with the same shape parameter but different scale parameters. The maximum likelihood estimators (MLEs) of the unknown parameters can be obtained by solving a one-dimensional optimization problem, and we propose a fixed-point type algorithm to solve this optimization problem. Approximate MLEs are proposed based on a Taylor series expansion, and they have explicit expressions. Bayesian inference on the unknown parameters is obtained under the assumption that the shape parameter has a log-concave prior density function and that, given the shape parameter, the scale parameters have Beta-Gamma priors. We propose to use Markov chain Monte Carlo samples to compute Bayes estimates and to construct highest posterior density credible intervals. Monte Carlo simulations are performed to investigate the performance of the different estimators, and two data sets are analysed for illustrative purposes.
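For orientation only, the following sketch shows the kind of fixed-point iteration referred to above, but in the much simpler complete-sample, single-cause Weibull case rather than the paper's hybrid censored competing risks setting; the function names and starting values are illustrative.

```python
import numpy as np

def weibull_shape_mle(t, alpha0=1.0, tol=1e-8, max_iter=500):
    """Fixed-point iteration for the Weibull shape MLE from a complete, uncensored sample.

    Iterates alpha <- 1 / (sum(t**alpha * log t) / sum(t**alpha) - mean(log t)),
    the profile score equation for the shape parameter.
    """
    t = np.asarray(t, dtype=float)
    logt = np.log(t)
    alpha = alpha0
    for _ in range(max_iter):
        ta = t ** alpha
        new_alpha = 1.0 / (ta @ logt / ta.sum() - logt.mean())
        if abs(new_alpha - alpha) < tol:
            alpha = new_alpha
            break
        alpha = new_alpha
    # Scale MLE given the shape, in the parametrisation S(t) = exp(-(t / lam)**alpha)
    lam = np.mean(t ** alpha) ** (1.0 / alpha)
    return alpha, lam

rng = np.random.default_rng(1)
sample = 1.5 * rng.weibull(2.0, size=200)   # true shape 2, true scale 1.5
print(weibull_shape_mle(sample))
```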

5.
The modelling process in Bayesian statistics constitutes the fundamental stage of the analysis, since depending on the chosen probability laws the inferences may vary considerably. This is particularly true when conflicts arise between two or more sources of information. For instance, inference in the presence of an outlier (which conflicts with the information provided by the other observations) can be highly dependent on the assumed sampling distribution. When heavy-tailed (e.g. t) distributions are used, outliers may be rejected, whereas this kind of robust inference is not available when we use light-tailed (e.g. normal) distributions. A long literature has established sufficient conditions on location-parameter models to resolve conflict in various ways. In this work, we consider a location-scale parameter structure, which is more complex than the single-parameter case because conflicts can arise between three sources of information, namely the likelihood, the prior distribution for the location parameter and the prior for the scale parameter. We establish sufficient conditions on the distributions in a location-scale model to resolve conflicts in different ways as a single observation tends to infinity. In addition, for each case, we explicitly give the limiting posterior distributions as the conflict becomes more extreme.

6.
University first-year students' grades are naturally correlated with the scores obtained on placement tests. Often this characteristic leads the grades in the first university exams to be asymmetrically distributed. Motivated by the analysis of the grades of the basic Statistics examination of first-year students, we discuss informative priors for the shape parameter of the skew-normal model, a class of distributions that accounts for various degrees of asymmetry. Our proposed prior leads to closed-form full-conditional posterior distributions, which is particularly useful in Markov chain Monte Carlo simulation. A Gibbs sampling algorithm is discussed for the joint vector of parameters, and the method is applied to a real data set from the School of Economics, University of Padua, Italy. Our analysis reveals that the correlation between the placement test and the grades of first-year students leads to a measurable positive skewness of the distribution of the university grades.

7.
In this article, the Bayes estimates of the two-parameter gamma distribution are considered. It is well known that the Bayes estimators of the two-parameter gamma distribution do not have closed forms. In this paper, it is assumed that the scale parameter has a gamma prior and the shape parameter has any log-concave prior, and that they are independently distributed. Under these priors, we use the Gibbs sampling technique to generate samples from the posterior density function. Based on the generated samples, we can compute the Bayes estimates of the unknown parameters and construct HPD credible intervals. We also compute approximate Bayes estimates using Lindley's approximation under the assumption of gamma priors on the shape parameter. Monte Carlo simulations are performed to compare the performance of the Bayes estimators with that of the classical estimators. A data analysis is performed for illustrative purposes. We further discuss the Bayesian prediction of a future observation based on the observed sample, and it is seen that the Gibbs sampling technique can be used quite effectively for estimating the posterior predictive density and for constructing predictive intervals of the order statistics from a future sample.
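The abstract above describes a Gibbs scheme with a conjugate update for the scale (here rate) parameter and a log-concave prior on the shape. The sketch below is a simplified, hedged version of such a sampler: it uses an exponential prior (which is log-concave) for the shape and a random-walk Metropolis step on the log scale instead of an exact log-concave sampler; all hyperparameter values are hypothetical.

```python
import numpy as np
from scipy.special import gammaln

def gibbs_gamma(x, n_iter=5000, a=1.0, b=1.0, c=1.0, step=0.2, seed=0):
    """Gibbs sampler for Gamma(shape=alpha, rate=lam) data.

    Priors: lam ~ Gamma(a, b) (conjugate given alpha); alpha ~ Exponential(c).
    The shape is updated by a Metropolis step on log(alpha).
    """
    rng = np.random.default_rng(seed)
    x = np.asarray(x, dtype=float)
    n, sx, slx = len(x), x.sum(), np.log(x).sum()

    def log_cond_alpha(alpha, lam):
        # Log full conditional of alpha up to an additive constant
        return n * alpha * np.log(lam) - n * gammaln(alpha) + alpha * slx - c * alpha

    alpha, lam = 1.0, 1.0
    draws = np.empty((n_iter, 2))
    for it in range(n_iter):
        # Conjugate update for the rate parameter
        lam = rng.gamma(shape=a + n * alpha, scale=1.0 / (b + sx))
        # Multiplicative random-walk proposal for the shape parameter
        prop = alpha * np.exp(step * rng.normal())
        log_acc = (log_cond_alpha(prop, lam) - log_cond_alpha(alpha, lam)
                   + np.log(prop) - np.log(alpha))   # Jacobian of the log-scale proposal
        if np.log(rng.uniform()) < log_acc:
            alpha = prop
        draws[it] = alpha, lam
    return draws

data = np.random.default_rng(2).gamma(shape=2.0, scale=1.0 / 3.0, size=100)  # rate 3
samples = gibbs_gamma(data)
print(samples[1000:].mean(axis=0))   # posterior means of (alpha, lam) after burn-in
```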

8.
The posterior predictive p value (ppp) was invented as a Bayesian counterpart to classical p values. The methodology can be applied to discrepancy measures involving both data and parameters and can hence be targeted to check various modelling assumptions. The interpretation can, however, be difficult, since the distribution of the ppp value under the modelling assumptions varies substantially between cases. A calibration procedure has been suggested, treating the ppp value as a test statistic in a prior predictive test. In this paper, we suggest that a prior predictive test may instead be based on the expected posterior discrepancy, which is somewhat simpler, both conceptually and computationally. Since both of these methods require the simulation of a large posterior parameter sample for each of an equally large prior predictive data sample, we further suggest looking for ways to match the given discrepancy by a computation-saving conflict measure. This approach is also based on simulations but only requires sampling from two different distributions representing two contrasting information sources about a model parameter. The conflict measure methodology is also more flexible in that it handles non-informative priors without difficulty. We compare the different approaches theoretically in some simple models and in a more complex applied example.
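As background, a minimal sketch of how a ppp value is computed by simulation is given below, for a conjugate normal-mean model with a data-only discrepancy (the sample variance); this is a generic illustration, not the calibration or conflict-measure procedures discussed in the paper, and all settings are hypothetical.

```python
import numpy as np

def posterior_predictive_pvalue(y, n_sim=4000, prior_mean=0.0, prior_var=100.0, seed=0):
    """Posterior predictive p value for a normal model with known unit variance.

    Model: y_i ~ N(theta, 1), theta ~ N(prior_mean, prior_var) (conjugate).
    Discrepancy: the sample variance, checking the constant-variance assumption.
    """
    rng = np.random.default_rng(seed)
    y = np.asarray(y, dtype=float)
    n = len(y)
    post_var = 1.0 / (1.0 / prior_var + n)
    post_mean = post_var * (prior_mean / prior_var + y.sum())

    obs_disc = np.var(y, ddof=1)
    exceed = 0
    for _ in range(n_sim):
        theta = rng.normal(post_mean, np.sqrt(post_var))   # posterior draw
        y_rep = rng.normal(theta, 1.0, size=n)             # replicated data set
        exceed += np.var(y_rep, ddof=1) >= obs_disc        # compare discrepancies
    return exceed / n_sim

obs = np.random.default_rng(3).normal(0.0, 2.0, size=50)   # data violating unit variance
print(posterior_predictive_pvalue(obs))                    # a small ppp flags the misfit
```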

9.
In this paper, we develop noninformative priors for the generalized half-normal distribution when the scale and shape parameters are of interest, respectively. In particular, we develop first- and second-order matching priors for both parameters. For the shape parameter, we show that the second-order matching prior is a highest posterior density (HPD) matching prior and a cumulative distribution function (CDF) matching prior; in addition, it matches the alternative coverage probabilities up to the second order. For the scale parameter, we show that the second-order matching prior is neither an HPD matching prior nor a CDF matching prior, and that it does not match the alternative coverage probabilities up to the second order. For both parameters, we show that the one-at-a-time reference prior is a second-order matching prior, whereas Jeffreys' prior is neither a first- nor a second-order matching prior. The methods are illustrated with both a simulation study and a real data set.
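For readers unfamiliar with the terminology, the generic first-order probability-matching condition under parameter orthogonality (the standard result, not a formula taken from this paper) can be stated as follows.

```latex
% If the parameter of interest \theta_1 is orthogonal to the nuisance parameter
% \theta_2, i.e. I_{12}(\theta_1,\theta_2) = 0, then any prior of the form
\pi(\theta_1, \theta_2) \;\propto\; I_{11}(\theta_1, \theta_2)^{1/2}\, g(\theta_2)
% with g an arbitrary smooth positive function is first-order probability matching
% for \theta_1: the frequentist coverage of one-sided posterior credible limits
% agrees with the nominal level up to an error of order o(n^{-1/2}).
```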

10.
In this paper, we develop noninformative priors for the inverse Weibull model when the parameters of interest are the scale and the shape parameters. We develop first-order and second-order matching priors for both parameters. For the scale parameter, we show that the second-order matching prior is not a highest posterior density (HPD) matching prior, does not match the alternative coverage probabilities up to the second order and is not a cumulative distribution function (CDF) matching prior. For the shape parameter, we show that the second-order matching prior is an HPD matching prior and a CDF matching prior and also matches the alternative coverage probabilities up to the second order. For both parameters, we show that the one-at-a-time reference prior is the second-order matching prior, but Jeffreys' prior is neither a first-order nor a second-order matching prior. A simulation study is performed to compare the target coverage probabilities, and a real example is given.

11.
Due to computational challenges and the non-availability of conjugate prior distributions, Bayesian variable selection in quantile regression models is often a difficult task. In this paper, we address these two issues for quantile regression models. In particular, we develop an informative stochastic search variable selection (ISSVS) for quantile regression models that introduces an informative prior distribution. We adopt prior structures that incorporate historical data into the current analysis by quantifying them with a suitable prior distribution on the model parameters. This allows ISSVS to search the model space more efficiently and choose the more likely models. In addition, a Gibbs sampler is derived to facilitate the computation of the posterior probabilities. A major advantage of ISSVS is that it avoids instability in the posterior estimates from the Gibbs sampler as well as convergence problems that may arise from choosing vague priors. Finally, the proposed methods are illustrated with both simulation and real data.

12.
Instrumental variable (IV) regression poses a number of statistical challenges due to the shape of the likelihood. We review the main Bayesian literature on instrumental variables and highlight these pathologies. We discuss Jeffreys priors, the connection to errors-in-variables problems and more general error distributions. We propose, as an alternative to the inverted Wishart prior, a new Cholesky-based prior for the covariance matrix of the errors in IV regressions. We argue that this prior is more flexible and more robust than the inverted Wishart prior since it is not based on only one tightness parameter and can therefore be more informative about certain components of the covariance matrix and less informative about others. We show how prior-posterior inference can be formulated in a Gibbs sampler and compare its performance in the weak-instruments case on synthetic data as well as in two illustrations based on well-known real data sets.

13.
In this article, we consider the sample size determination problem in the context of robust Bayesian parameter estimation for the Bernoulli model. Following a robust approach, we consider classes of conjugate Beta prior distributions for the unknown parameter. We regard inference as robust if posterior quantities of interest (such as point estimates and limits of credible intervals) do not change too much as the prior varies within the selected classes of priors. For the sample size problem, we consider criteria based on the predictive distributions of the lower bound, upper bound and range of the posterior quantity of interest. The sample size is selected so that, before observing the data, one is confident of observing a small value of the posterior range and, depending on the design goals, a large (small) value of the lower (upper) bound of the quantity of interest. We also discuss relationships with, and comparisons to, non-robust and non-informative Bayesian methods.
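A minimal sketch of a predictive sample-size criterion of the kind described above follows: for a candidate n it estimates, by simulation from a hypothetical Beta design prior, the expected range of the posterior mean as the prior varies over a small class of Beta priors; the class and the threshold rule are illustrative, not those of the paper.

```python
import numpy as np

def expected_posterior_range(n, prior_class, design_a=2.0, design_b=2.0,
                             n_sim=2000, seed=0):
    """Expected range of the posterior mean of a Bernoulli parameter as the
    Beta(a, b) prior varies over a finite class, estimated by Monte Carlo.

    Data are generated from the predictive distribution implied by a Beta design prior.
    """
    rng = np.random.default_rng(seed)
    ranges = np.empty(n_sim)
    for i in range(n_sim):
        theta = rng.beta(design_a, design_b)        # draw from the design prior
        s = rng.binomial(n, theta)                  # predictive number of successes
        post_means = [(a + s) / (a + b + n) for a, b in prior_class]
        ranges[i] = max(post_means) - min(post_means)
    return ranges.mean()

# Hypothetical class of conjugate priors with equal strength but shifted means
prior_class = [(1, 3), (2, 2), (3, 1)]
for n in (20, 50, 100, 200):
    print(n, round(expected_posterior_range(n, prior_class), 4))
# Select the smallest n whose expected posterior range falls below a chosen threshold.
```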

14.
Simon's two-stage design is the most commonly applied multi-stage design in phase IIA clinical trials. It combines the sample sizes at the two stages in order to minimize either the expected or the maximum sample size. When the uncertainty about pre-trial beliefs on the expected or desired response rate is high, a Bayesian alternative should be considered, since it allows one to deal with the entire distribution of the parameter of interest in a more natural way. In this setting, a crucial issue is how to construct a distribution from the available summaries to use as a clinical prior in a Bayesian design. In this work, we explore Bayesian counterparts of Simon's two-stage design based on the predictive version of the single threshold design. This design requires specifying two prior distributions: the analysis prior, which is used to compute the posterior probabilities, and the design prior, which is employed to obtain the prior predictive distribution. While the usual approach is to build beta priors for carrying out a conjugate analysis, we derive both the analysis and the design distributions through linear combinations of B-splines. The motivating example is the planning of a phase IIA two-stage trial on an anti-HER2 DNA vaccine in breast cancer, where initial beliefs formed from elicited experts' opinions and historical data showed a high level of uncertainty. In a sample size determination problem, the impact of different priors is evaluated.

15.
We study a Bayesian approach to recovering the initial condition for the heat equation from noisy observations of the solution at a later time. We consider a class of prior distributions indexed by a parameter quantifying "smoothness" and show that the corresponding posterior distributions contract around the true parameter at a rate that depends on the smoothness of the true initial condition and on the smoothness and scale of the prior. Correct combinations of these characteristics lead to the optimal minimax rate. One type of prior leads to a rate-adaptive Bayesian procedure. The frequentist coverage of credible sets is shown to depend on the combination of the prior and the true parameter as well, with smoother priors leading to zero coverage and rougher priors to (extremely) conservative results. In the latter case, credible sets are much larger than frequentist confidence sets, in that the ratio of diameters diverges to infinity. The results are numerically illustrated by a simulated data example.

16.
The choice of prior distributions for the variances can be important and quite difficult in Bayesian hierarchical and variance component models. For situations where little prior information is available, a 'noninformative' type of prior is usually chosen. 'Noninformative' priors have been discussed by many authors and used in many contexts. However, care must be taken in using these prior distributions, as many are improper and thus can lead to improper posterior distributions. Additionally, in small samples, these priors can be 'informative'. In this paper, we investigate a proper 'vague' prior, the uniform shrinkage prior (Strawderman 1971; Christiansen & Morris 1997). We discuss its properties and show that, for common hierarchical models, this prior leads to proper posterior distributions. We also illustrate the attractive frequentist properties of this prior for a normal hierarchical model, including testing and estimation. To conclude, we generalize this prior to the multivariate case of a covariance matrix.

17.
Consider the problem of estimating, under entropy loss, an arbitrary positive, strictly increasing or decreasing parametric function based on a sample of size n from a one-parameter non-regular family of absolutely continuous distributions with both endpoints of the support depending on a single parameter. We first provide sufficient conditions for the admissibility of the generalized Bayes estimator with respect to some specific priors and then treat several examples which illustrate the admissibility of best invariant estimators in some location or scale parameter problems.

18.
The Weibull distribution is widely used due to its versatility and relative simplicity. In this paper, noninformative priors for the ratio of the scale parameters of two Weibull models are provided. The asymptotic matching of the coverage probabilities of Bayesian credible intervals with the corresponding frequentist coverage probabilities is considered. We develop various priors for the ratio of the two scale parameters using the following matching criteria: quantile matching, matching of the distribution function, highest posterior density matching, and inversion of test statistics. One particular prior, which meets all the matching criteria, is found. Next, we derive the reference priors for different orderings of the parameter groups. We see that all the reference priors satisfy a first-order matching criterion and that the one-at-a-time reference prior is a second-order matching prior. A simulation study is performed and an example is given.

19.
Bayesian hierarchical formulations are utilized by the U.S. Bureau of Labor Statistics (BLS) with respondent-level data for missing item imputation because these formulations are readily parameterized to capture correlation structures. BLS collects survey data under informative sampling designs that assign inclusion probabilities correlated with the response; sampling-weighted pseudo-posterior distributions are estimated on these data for asymptotically unbiased inference about population model parameters. The computation is expensive and does not support BLS production schedules. We propose a new method to scale the computation that divides the data into smaller subsets, estimates a sampling-weighted pseudo-posterior distribution, in parallel, for every subset and combines the pseudo-posterior parameter samples from all the subsets through their mean in the Wasserstein space of order 2. We construct conditions on a class of sampling designs under which posterior consistency of the proposed method is achieved. We demonstrate on both synthetic data and in an application to the Current Employment Statistics survey that our method produces results of similar accuracy to the usual approach while offering substantially faster computation.
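To make the combination step concrete: for one-dimensional posteriors, the Wasserstein-2 barycenter of equally weighted distributions is obtained by averaging quantile functions, so averaging sorted, equal-sized posterior samples gives a sample-based approximation. The sketch below applies this to a toy conjugate normal-mean model with a simple likelihood-tempering weight, not to the survey-weighted pseudo-posteriors of the paper.

```python
import numpy as np

def subset_posterior_samples(y_subset, n_total, prior_var=100.0, n_draw=2000, rng=None):
    """Posterior draws for a normal mean (known unit variance) from one data subset.

    The subset likelihood is raised to the power n_total / len(y_subset) so that each
    subset posterior has roughly full-data scale (a common device in parallel MCMC).
    """
    rng = rng or np.random.default_rng()
    m = len(y_subset)
    w = n_total / m                                 # likelihood tempering weight
    post_var = 1.0 / (1.0 / prior_var + w * m)      # = 1 / (1/prior_var + n_total)
    post_mean = post_var * w * np.sum(y_subset)
    return rng.normal(post_mean, np.sqrt(post_var), size=n_draw)

def wasserstein2_barycenter_1d(sample_list):
    """W2 barycenter of equally weighted 1-D samples: quantile-by-quantile average."""
    return np.stack([np.sort(s) for s in sample_list]).mean(axis=0)

rng = np.random.default_rng(4)
y = rng.normal(1.0, 1.0, size=6000)
subsets = np.array_split(y, 10)
draws = [subset_posterior_samples(s, n_total=len(y), rng=rng) for s in subsets]
combined = wasserstein2_barycenter_1d(draws)
print(combined.mean(), combined.std())              # close to the full-data posterior
```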

20.
In this article, we develop an empirical Bayesian approach for the Bayesian estimation of the parameters of four bivariate exponential (BVE) distributions. We have opted for the gamma distribution as a prior for the parameters of the model, with the hyperparameters estimated by the method of moments and by maximum likelihood estimates (MLEs). A simulation study was conducted to compute the empirical Bayesian estimates of the parameters and their standard errors. We use moment estimators or MLEs to estimate the hyperparameters of the prior distributions. Furthermore, we compare the posterior modes of the parameters obtained under different prior distributions; the Bayesian estimates based on gamma priors are much closer to the true values than those based on improper priors. We use an MCMC method to obtain the posterior means and compare them with those obtained under improper priors and with the classical estimates (MLEs).
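As a small, hedged illustration of the hyperparameter step alone, the sketch below fits Gamma(shape, rate) hyperparameters to a set of preliminary parameter estimates by the method of moments; the input values are hypothetical and the BVE-specific details are omitted.

```python
import numpy as np

def gamma_hyperparams_mom(estimates):
    """Method-of-moments fit of Gamma(shape, rate) hyperparameters.

    Matches the mean and variance of preliminary estimates (e.g. MLEs obtained from
    several samples) to those of a gamma distribution:
        mean = shape / rate,  variance = shape / rate**2.
    """
    est = np.asarray(estimates, dtype=float)
    m, v = est.mean(), est.var(ddof=1)
    rate = m / v
    shape = m * rate
    return shape, rate

# Hypothetical preliminary MLEs of a rate parameter from several samples
mles = [0.42, 0.55, 0.61, 0.47, 0.58, 0.50]
print(gamma_hyperparams_mom(mles))   # (shape, rate) hyperparameters of the gamma prior
```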
