Similar Literature
20 similar documents found (search time: 31 ms)
1.
In reliability analysis, it is common to consider several causes, either mechanical or electrical, that compete to cause the failure of a unit. These causes are called "competing risks." In this paper, we consider the simple step-stress model with competing risks for failure from the Weibull distribution under progressive Type-II censoring. Based on the proportional hazards model, we obtain the maximum likelihood estimates (MLEs) of the unknown parameters. Confidence intervals are derived using the asymptotic distributions of the MLEs and the bootstrap method. For comparison, we obtain the Bayesian estimates and the highest posterior density (HPD) credible intervals based on different prior distributions. Finally, their performance is assessed through simulations.
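To make the competing-risks setup concrete, the following is a minimal Python sketch of the latent-failure-time formulation with two independent Weibull causes: the observed data are the minimum of the latent times and the index of the cause attaining it. The step-stress and progressive Type-II censoring structure of the paper is not reproduced here, and all parameter names are illustrative.

```python
# Minimal sketch (not the paper's full model): two independent Weibull causes
# competing to fail each unit; we observe the earliest failure time and the cause.
import numpy as np

def simulate_competing_risks(n, shape1, scale1, shape2, scale2, seed=0):
    rng = np.random.default_rng(seed)
    t1 = scale1 * rng.weibull(shape1, size=n)   # latent failure time from cause 1
    t2 = scale2 * rng.weibull(shape2, size=n)   # latent failure time from cause 2
    time = np.minimum(t1, t2)                   # observed failure time
    cause = np.where(t1 <= t2, 1, 2)            # observed cause of failure
    return time, cause

times, causes = simulate_competing_risks(n=50, shape1=1.5, scale1=2.0,
                                         shape2=1.5, scale2=3.0)
```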

2.
In Bayesian analysis, the highest posterior density (HPD) credible interval is usually reported as an interval estimate of an unknown parameter. However, when the unknown parameter is a nonnegative normal mean, the Bayesian HPD credible interval under the uniform prior has quite a low minimum frequentist coverage probability. To raise the minimum frequentist coverage probability of a credible interval, I propose a new method of reporting the Bayesian credible interval. Numerical results show that the newly reported credible interval has a much higher minimum frequentist coverage probability than the HPD credible interval.
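For context, under a flat prior on [0, ∞) the posterior of a nonnegative normal mean μ given an observation x ~ N(μ, σ²) is a normal distribution truncated to [0, ∞), and its HPD interval can be found numerically. The sketch below illustrates that computation only; it is not the paper's proposed alternative interval, and the function name and defaults are ours.

```python
# Sketch: HPD interval for a nonnegative normal mean under a flat prior on [0, inf).
# The posterior of mu given x is N(x, sigma^2) truncated to [0, inf).
import numpy as np
from scipy.stats import truncnorm
from scipy.optimize import brentq

def hpd_nonneg_normal_mean(x, sigma=1.0, cred=0.95):
    a = (0.0 - x) / sigma                      # lower truncation point in standard units
    post = truncnorm(a, np.inf, loc=x, scale=sigma)
    upper_from_zero = post.ppf(cred)
    # If the density at 0 is at least the density at the cred-quantile, the HPD
    # interval starts at the boundary (this always happens when x <= 0).
    if post.pdf(0.0) >= post.pdf(upper_from_zero):
        return 0.0, upper_from_zero
    # Otherwise (x > 0), solve for the lower endpoint l such that the interval
    # carrying mass `cred` has equal density at both endpoints.
    mode = x
    def density_gap(l):
        u = post.ppf(min(post.cdf(l) + cred, 1.0))
        return post.pdf(l) - post.pdf(u)
    lower = brentq(density_gap, 0.0, mode)     # sign change guaranteed on (0, mode)
    upper = post.ppf(min(post.cdf(lower) + cred, 1.0))
    return lower, upper

print(hpd_nonneg_normal_mean(x=0.5, sigma=1.0))
```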

3.
In this article, we develop a Bayesian analysis of an autoregressive model with explanatory variables. When σ² is known, we consider a normal prior and give the Bayesian estimator for the regression coefficients of the model. When σ² is unknown, another Bayesian estimator is given for all unknown parameters under a conjugate prior. The Bayesian model selection problem is also considered under double-exponential priors. Using the convergence of ρ-mixing sequences, the consistency and asymptotic normality of the Bayesian estimators of the regression coefficients are proved. Simulation results indicate that our Bayesian estimators do not depend strongly on the priors and are robust.
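For orientation in the known-σ² case, the standard conjugate-normal result takes the form below when the model is written as y = Xβ + ε with ε ~ N(0, σ²I) and prior β ~ N(β₀, Σ₀); the notation is generic and not taken from the article.

\[
\beta \mid y \;\sim\; N\!\big(\hat\beta,\, V\big), \qquad
V = \Big(\Sigma_0^{-1} + \tfrac{1}{\sigma^2}X^{\top}X\Big)^{-1}, \qquad
\hat\beta = V\Big(\Sigma_0^{-1}\beta_0 + \tfrac{1}{\sigma^2}X^{\top}y\Big),
\]

and the Bayes estimator under squared-error loss is the posterior mean \(\hat\beta\).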

4.
Suppose that only a lower bound on the probability of a measurable subset K of the parameter space Ω is known a priori, when inferences are to be made about measurable subsets A of Ω. Instead of eliciting a unique prior distribution, consider the class Γ of all distributions compatible with such a bound. Under mild regularity conditions on the likelihood function, the range of the posterior probability of any A is found as the prior distribution varies in Γ. Such ranges are analysed from the robust Bayesian viewpoint. Furthermore, some characterising properties of the extended likelihood sets are proved. The prior distributions in Γ are then considered as a neighbourhood class of an elicited prior, comparing likelihood sets and HPD sets in terms of robustness.

5.
This paper deals with Bayes, robust Bayes, and minimax predictions for a subfamily of scale parameters under an asymmetric precautionary loss function. In Bayesian statistical inference, the goal is to obtain optimal rules under a specified loss function and an explicit prior distribution over the parameter space. In practice, however, we may not be able to specify the prior completely, or, when a problem must be solved by two statisticians, they may agree on the choice of the prior but not on the values of the hyperparameters. A common approach to prior uncertainty in Bayesian analysis is to choose a class of prior distributions and compute some functional quantity. This is known as robust Bayesian analysis, which provides a way to express prior knowledge through a class of priors Γ as a global safeguard against bad choices of the hyperparameters. Under a scale-invariant precautionary loss function, we deal with robust Bayes predictions of Y based on X. We carry out a simulation study and a real data analysis to illustrate the practical utility of the prediction procedure.
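For reference, one commonly used asymmetric precautionary loss is L(θ, δ) = (δ − θ)²/δ; the specific scale-invariant variant studied in the paper is not reproduced here. Minimising the posterior expected loss under this form gives, for a positive parameter,

\[
\delta^{\pi}(x) \;=\; \sqrt{E\!\left[\theta^{2}\mid x\right]} \;\ge\; E\!\left[\theta \mid x\right],
\]

so the rule shades estimates upward relative to the posterior mean, which is the "precautionary" behaviour the loss is designed to enforce.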

6.
The implementation of the Bayesian paradigm for model comparison can be problematic. In particular, prior distributions on the parameter space of each candidate model require special care. While it is well known that improper priors cannot be routinely used for Bayesian model comparison, we argue that the use of proper conventional priors under each model should also be regarded with suspicion, especially when comparing models of different dimensions. The basic idea is that priors should not be assigned separately under each model; rather, they should be related across models in order to acquire some degree of compatibility, and thus allow fairer and more robust comparisons. In this connection, the intrinsic prior and the expected posterior prior (EPP) methodology represent useful tools. In this paper we develop a procedure based on EPP to perform Bayesian model comparison for discrete undirected decomposable graphical models, although our method could also be adapted to directed acyclic graph models. We present two possible approaches: one based on imaginary data, and one that makes use of a limited amount of actual data. The methodology is illustrated through the analysis of a 2×3×4 contingency table.

7.
This paper considers a hierarchical Bayesian analysis of regression models using a class of Gaussian scale mixtures. This class provides a robust alternative to the common use of the Gaussian distribution as a prior, in particular for estimating a regression function subject to uncertainty about its constraint. For this purpose, we use a family of rectangular screened multivariate scale mixtures of Gaussian distributions as a prior for the regression function, which is flexible enough to reflect the degree of uncertainty about the functional constraint. Specifically, we propose a hierarchical Bayesian regression model for the constrained regression function with uncertainty, built on a three-stage prior hierarchy of Gaussian scale mixtures, referred to as hierarchical screened scale mixture of Gaussian regression models (HSMGRM). We describe the distributional properties of HSMGRM and an efficient Markov chain Monte Carlo algorithm for posterior inference, and apply the proposed model to real applications with constrained regression models subject to uncertainty.

8.
Robust Bayesian methodology deals with the problem of accounting for uncertainty in the inputs (the prior, the model, and the loss function) and provides a principled way to take their variation into account. If the uncertainty concerns the prior knowledge, robust Bayesian analysis provides a way to express that knowledge through a class of priors Γ and to derive optimal rules. In this paper, we motivate the use of robust Bayes methodology under the asymmetric general entropy loss function in insurance and pursue two main goals, namely (i) computing premiums and (ii) predicting a future claim size. To achieve these goals, we choose some classes of priors and deal with (i) Bayes and posterior regret gamma minimax premium computation and (ii) Bayes and posterior regret gamma minimax prediction of a future claim size under the general entropy loss. We also perform a prequential analysis and compare the performance of the posterior regret gamma minimax predictors against the Bayes predictors.
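For reference, the general entropy loss is usually written in the form below with a shape constant q ≠ 0, and minimising its posterior expectation yields a negative-moment Bayes rule. This is a standard generic result, not a formula quoted from the paper:

\[
L(\theta,\delta) \;\propto\; \Big(\frac{\delta}{\theta}\Big)^{q} - q\,\log\frac{\delta}{\theta} - 1,
\qquad
\delta^{\pi}(x) \;=\; \Big(E\!\left[\theta^{-q}\mid x\right]\Big)^{-1/q},
\]

provided the posterior negative moment \(E[\theta^{-q}\mid x]\) is finite; q > 0 penalises over-estimation more heavily, while q < 0 penalises under-estimation.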

9.
In this report we describe the Bayesian analysis of a logistic dose-response curve in a Phase I study, and we present two simple and intuitive numerical approaches to constructing prior probability distributions for the model parameters. We combine these priors with expert prior opinion and compare the results of the analyses with those obtained from alternative prior formulations.

10.
In this article, we consider a Bayesian analysis of a possible change in the parameters of an autoregressive time series of known order p, AR(p). An unconditional Bayesian test based on highest posterior density (HPD) credible sets is determined. The test is useful for detecting a change in any one of the parameters separately. Using the Gibbs sampler algorithm, we approximate the posterior densities of the change point and the other parameters in order to calculate the p-values that define our test.

11.
Inference in hybrid Bayesian networks using dynamic discretization
We consider approximate inference in hybrid Bayesian networks (BNs) and present a new iterative algorithm that efficiently combines dynamic discretization with robust propagation algorithms on junction trees. Our approach offers a significant extension to Bayesian network theory and practice by providing a flexible way of modelling continuous nodes in BNs conditioned on complex configurations of evidence and intermixed with discrete nodes as both parents and children of continuous nodes. Our algorithm is implemented in a commercial Bayesian network software package, AgenaRisk, which allows model construction and testing to be carried out easily. The results from the empirical trials clearly show how our software can deal effectively with different types of hybrid models containing elements of expert judgment as well as statistical inference. In particular, we show how the rapid convergence of the algorithm towards zones of high probability density makes robust inference analysis possible even in situations where, owing to the lack of information in both prior and data, robust sampling becomes infeasible.

12.
In this article we consider the sample size determination problem in the context of robust Bayesian parameter estimation for the Bernoulli model. Following a robust approach, we consider classes of conjugate Beta prior distributions for the unknown parameter. We assume that inference is robust if the posterior quantities of interest (such as point estimates and limits of credible intervals) do not change too much as the prior varies in the selected classes of priors. For the sample size problem, we consider criteria based on the predictive distributions of the lower bound, upper bound, and range of the posterior quantity of interest. The sample size is selected so that, before observing the data, one is confident of observing a small value of the posterior range and, depending on the design goals, a large (small) value of the lower (upper) bound of the quantity of interest. We also discuss relationships with, and comparisons to, non-robust and non-informative Bayesian methods.
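As a concrete illustration of how posterior quantities vary over a class of conjugate Beta priors, the following minimal Python sketch computes the range of the posterior mean and of one credible-interval endpoint over a grid of (a, b) hyperparameters. It is not the paper's predictive sample-size criterion, and the grids and function names are ours.

```python
# Minimal sketch: how the posterior mean and a credible-interval endpoint for a
# Bernoulli parameter vary as the Beta(a, b) prior ranges over a small class.
import itertools
from scipy.stats import beta

def posterior_mean_range(s, n, a_grid, b_grid):
    # Posterior under Beta(a, b) with s successes in n trials is Beta(a + s, b + n - s),
    # so the posterior mean is (a + s) / (a + b + n).
    means = [(a + s) / (a + b + n) for a, b in itertools.product(a_grid, b_grid)]
    return min(means), max(means)

def lower_endpoint_range(s, n, a_grid, b_grid, q=0.025):
    # Range of the lower endpoint of the equal-tailed 95% credible interval.
    lows = [beta(a + s, b + n - s).ppf(q)
            for a, b in itertools.product(a_grid, b_grid)]
    return min(lows), max(lows)

a_grid = b_grid = [0.5, 1.0, 2.0]
print(posterior_mean_range(s=12, n=30, a_grid=a_grid, b_grid=b_grid))
print(lower_endpoint_range(s=12, n=30, a_grid=a_grid, b_grid=b_grid))
```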

13.
This article studies the construction of a Bayesian confidence interval for the ratio of marginal probabilities in matched-pair designs. Under a Dirichlet prior distribution, the exact posterior distribution of the ratio is derived. The tail confidence interval and the highest posterior density (HPD) interval are studied, and their frequentist performance is investigated by simulation in terms of the mean coverage probability and the mean expected length of the interval. An advantage of the Bayesian confidence interval is that it is always well defined for any data structure and has a shorter mean expected width. We also find that the Bayesian tail interval under the Jeffreys prior performs as well as or better than the frequentist confidence intervals.
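To illustrate the posterior object involved, the sketch below approximates the equal-tailed (tail) credible interval for the ratio of marginal probabilities by Monte Carlo sampling from the Dirichlet posterior of the matched-pair cell probabilities. The article's exact posterior derivation and HPD interval are not reproduced; the counts and prior weights shown are illustrative.

```python
# Sketch: Monte Carlo tail credible interval for the ratio of marginal probabilities
# in a matched-pair 2x2 table under a Dirichlet prior on the cell probabilities.
import numpy as np

def ratio_tail_interval(counts, alpha=(1, 1, 1, 1), cred=0.95, draws=100_000, seed=0):
    # counts = (n11, n10, n01, n00); the posterior is Dirichlet(alpha + counts).
    rng = np.random.default_rng(seed)
    post = rng.dirichlet(np.asarray(alpha) + np.asarray(counts), size=draws)
    p11, p10, p01, _ = post.T
    ratio = (p11 + p10) / (p11 + p01)          # row margin over column margin
    lo, hi = np.quantile(ratio, [(1 - cred) / 2, 1 - (1 - cred) / 2])
    return lo, hi

print(ratio_tail_interval(counts=(20, 5, 8, 17)))
```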

14.
For normal populations with unequal variances, we develop matching priors and reference priors for a linear combination of the means. Here, we find three second-order matching priors: a highest posterior density (HPD) matching prior, a cumulative distribution function (CDF) matching prior, and a likelihood ratio (LR) matching prior. Furthermore, we show that the reference priors are all first-order matching priors, but that they do not satisfy the second-order matching criterion that establishes the symmetry and the unimodality of the posterior under the developed priors. The results of a simulation indicate that the second-order matching prior outperforms the reference priors in terms of matching the target coverage probabilities, in a frequentist sense. Finally, we compare the Bayesian credible intervals based on the developed priors with the confidence intervals derived from real data.

15.
Kontkanen, P., Myllymäki, P., Silander, T., Tirri, H., & Grünwald, P. (2000). Statistics and Computing, 10(1), 39–54.
In this paper we are interested in discrete prediction problems in a decision-theoretic setting, where the task is to compute the predictive distribution for a finite set of possible alternatives. This question is first addressed in a general Bayesian framework, where we consider a set of probability distributions defined by some parametric model class. Given a prior distribution on the model parameters and a set of sample data, one possible approach for determining a predictive distribution is to fix the parameters to the instantiation with the maximum a posteriori probability. A more accurate predictive distribution can be obtained by computing the evidence (marginal likelihood), i.e., the integral over all the individual parameter instantiations. As an alternative to these two approaches, we demonstrate how to use Rissanen's new definition of stochastic complexity for determining predictive distributions, and show how the evidence predictive distribution with Jeffreys' prior approaches the new stochastic complexity predictive distribution in the limit as the amount of sample data increases. To compare the alternative approaches in practice, each of the predictive distributions discussed is instantiated for the Bayesian network model family. In particular, to determine Jeffreys' prior for this model family, we show how to compute the (expected) Fisher information matrix for a fixed but arbitrary Bayesian network structure. In the empirical part of the paper the predictive distributions are compared using the simple tree-structured Naive Bayes model, which is adopted in the experiments for computational reasons. The experiments with several public-domain classification datasets suggest that the evidence approach produces the most accurate predictions in the log-score sense. The evidence-based methods are also quite robust in the sense that they predict surprisingly well even when only a small fraction of the full training set is used.
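In generic notation (ours, not the paper's), the two predictive distributions contrasted above are the MAP plug-in and the evidence (marginal-likelihood) predictive for a new observation \(x_{n+1}\) given data \(D\):

\[
\hat P_{\mathrm{MAP}}(x_{n+1}\mid D) = P\big(x_{n+1}\mid \hat\theta_{\mathrm{MAP}}\big),
\qquad
\hat\theta_{\mathrm{MAP}} = \arg\max_{\theta}\, p(\theta)\,P(D\mid\theta),
\]
\[
\hat P_{\mathrm{ev}}(x_{n+1}\mid D)
= \int P(x_{n+1}\mid\theta)\, p(\theta\mid D)\, d\theta
= \frac{\int P(x_{n+1}\mid\theta)\,P(D\mid\theta)\,p(\theta)\,d\theta}{\int P(D\mid\theta)\,p(\theta)\,d\theta}.
\]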

16.
We develop strategies for Bayesian modelling as well as model comparison, averaging, and selection for compartmental models, with particular emphasis on those that occur in the analysis of positron emission tomography (PET) data. Both modelling and computational issues are considered. Biophysically inspired informative priors are developed for the problem at hand, and by comparison with default vague priors it is shown that the proposed modelling is not overly sensitive to prior specification. It is also shown that an additive normal error structure does not describe measured PET data well, despite being very widely used, and that within a simple Bayesian framework simultaneous parameter estimation and model comparison can be performed with a more general noise model. The proposed approach is compared with standard techniques using both simulated and real data. In addition to good, robust estimation performance, the proposed technique automatically provides a characterisation of the uncertainty in the resulting estimates, which can be considerable in applications such as PET.

17.
In this paper we consider the problems of estimation and prediction when observed data from a lognormal distribution are based on lower record values and lower record values with inter-record times. We compute maximum likelihood estimates and asymptotic confidence intervals for the model parameters. We also obtain Bayes estimates and the highest posterior density (HPD) intervals using noninformative and informative priors under squared-error and LINEX loss functions. Furthermore, for the problem of Bayesian prediction under the one-sample and two-sample frameworks, we obtain predictive estimates and the associated predictive equal-tail and HPD intervals. Finally, for illustrative purposes, a real data set is analyzed and a simulation study is conducted to compare the methods of estimation and prediction.

18.
In this paper, we develop non-informative priors for the inverse Weibull model when the parameters of interest are the scale and shape parameters. We develop first-order and second-order matching priors for both parameters. For the scale parameter, we show that the second-order matching prior is not a highest posterior density (HPD) matching prior, does not match the alternative coverage probabilities up to second order, and is not a cumulative distribution function (CDF) matching prior. For the shape parameter, we show that the second-order matching prior is an HPD matching prior and a CDF matching prior, and also matches the alternative coverage probabilities up to second order. For both parameters, we show that the one-at-a-time reference prior is the second-order matching prior, but Jeffreys' prior is neither a first-order nor a second-order matching prior. A simulation study is performed to compare the target coverage probabilities, and a real example is given.

19.
Bayesian estimators of variance components, based on the posterior mean and the posterior mode respectively, are developed in a one-way ANOVA random-effects model with independent prior distributions. The formulas for the proposed estimators are simple. The estimators give sensible results for 'badly behaved' datasets, where the standard unbiased estimates are negative. They are markedly robust compared with existing estimators such as the maximum likelihood estimators and the maximum posterior density estimators.

20.
Statistical meta-analysis is mostly carried out with the help of the random-effects normal model, including the case of discrete random variables. We argue that the normal approximation is not always able to adequately capture the underlying uncertainty of the original discrete data. Furthermore, when we examine the influence of the prior distributions considered, in the presence of rare events, the results from this approximation can be very poor. In order to assess the robustness of the quantities of interest in meta-analysis with respect to the choice of priors, this paper proposes an alternative Bayesian model for binomial random variables with several zero responses. Particular attention is paid to the coherence between the prior distributions of the study model parameters and the meta-parameter. Thus, our method introduces a simple way to examine the sensitivity of these quantities to the dependence structure selected for the study. For illustrative purposes, an example with real data is analysed using the proposed Bayesian meta-analysis model for binomial sparse data.
