Similar Articles
20 similar articles found (search time: 0 ms)
1.
Laplace Approximations for Natural Exponential Families with Cuts   (Cited: 1; self-citations: 0, by others: 1)
Standard and fully exponential form Laplace approximations to marginal densities are described, and conditions under which these give exact answers are investigated. A general result is obtained and subsequently applied to natural exponential families with cuts, in order to derive the marginal posterior density of the mean parameter corresponding to the cut, the canonical parameter corresponding to the complement of the cut, and transformations of these. Important cases of families for which a cut exists and the approximations are exact are presented as examples.
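The following sketch is not taken from the paper above; it only illustrates the standard (first-order) Laplace approximation to an integral, using the classical factorial integral as a check. All function names and the choice of example are mine.

```python
import math

def laplace_integral(log_f, d2_log_f, mode):
    """Standard Laplace approximation to the integral of f(x) dx:
    exp(log f at the mode) * sqrt(2*pi / -(log f)''(mode))."""
    return math.exp(log_f(mode)) * math.sqrt(2 * math.pi / -d2_log_f(mode))

# Illustration: approximate n! = integral_0^inf x^n e^{-x} dx for n = 20.
# log f(x) = n*log(x) - x has its mode at x = n, second derivative -n/x^2.
n = 20
approx = laplace_integral(lambda x: n * math.log(x) - x,
                          lambda x: -n / x ** 2,
                          mode=float(n))
exact = math.factorial(n)
rel_err = abs(approx - exact) / exact  # Stirling-type accuracy, well under 1%
```

The approximation slightly underestimates the exact value (it is Stirling's formula), and the fully exponential form of the papers above refines exactly this kind of ratio of integrals.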

2.
In this paper the independence between a block of natural parameters and the complementary block of mean-value parameters, which holds for densities that are natural conjugate to some regular exponential families, is used to design in a convenient way a Gibbs sampler with block updates. Even when the densities of interest are obtained by conditioning to zero a block of natural parameters in a density conjugate to a larger "saturated" model, the updates require only the computation of marginal distributions under the "unconditional" density. For exponential families which are closed under marginalization, including both the zero-mean Gaussian family and the cross-classified Bernoulli family, such an implementation of the Gibbs sampler can be seen as an Iterative Proportional Fitting algorithm with random inputs.
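As a generic illustration only (not the block scheme of the paper), a two-block Gibbs sampler for a zero-mean bivariate normal with correlation rho alternates draws from the two exact full conditionals; the target correlation and sample size below are arbitrary choices:

```python
import math, random

def block_gibbs_bivariate_normal(rho, n_iter=20000, seed=1):
    """Toy two-block Gibbs sampler for a zero-mean bivariate normal with
    correlation rho: each block is drawn from its exact full conditional
    x | y ~ N(rho*y, 1 - rho^2) and symmetrically for y | x."""
    rng = random.Random(seed)
    x = y = 0.0
    s = math.sqrt(1.0 - rho * rho)
    xs, ys = [], []
    for _ in range(n_iter):
        x = rng.gauss(rho * y, s)   # update block 1 given block 2
        y = rng.gauss(rho * x, s)   # update block 2 given block 1
        xs.append(x)
        ys.append(y)
    return xs, ys

xs, ys = block_gibbs_bivariate_normal(0.8)
n = len(xs)
mx, my = sum(xs) / n, sum(ys) / n
cov = sum((a - mx) * (b - my) for a, b in zip(xs, ys)) / n
vx = sum((a - mx) ** 2 for a in xs) / n
vy = sum((b - my) ** 2 for b in ys) / n
corr = cov / math.sqrt(vx * vy)   # should be close to 0.8
```

The sample mean and correlation recover the target's moments; the paper's point is that for conjugate exponential-family densities such block updates reduce to marginal computations.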

3.
ABSTRACT

The important problem of discriminating between separate families of distributions is the theme of this work. The Bayesian significance test, FBST, is compared with the celebrated Cox test. The three families most used in survival analysis, lognormal, gamma and Weibull, are considered for the discrimination. A convex combination, with unknown weights, of the three densities is used for the discrimination. After these weights have been estimated, the one with the highest value indicates the best statistical model among the three. Another important feature considered is the parameterization used: all three densities are written as functions of the common population mean and variance. Including the weights, the number of parameters is reduced from eight (two for each density and two for the convex combination) to four (the common mean and variance plus two weights). Some numerical results from simulations are given. In these simulations, the results of the FBST are compared with those obtained with the Cox test. Two real examples illustrate the procedures.
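A minimal sketch of the weight-estimation idea, not the FBST itself: with the three component densities held fixed (hypothetical parameter values below, not the common mean/variance parameterization of the paper), EM over the mixture weights alone should load the largest weight on the family that generated the data.

```python
import math, random

def lognorm_pdf(x, mu=0.0, sig=1.0):
    return math.exp(-(math.log(x) - mu) ** 2 / (2 * sig ** 2)) / (x * sig * math.sqrt(2 * math.pi))

def gamma_pdf(x, k=2.0, theta=1.0):
    return x ** (k - 1) * math.exp(-x / theta) / (math.gamma(k) * theta ** k)

def weibull_pdf(x, shape=3.0, scale=2.0):
    z = x / scale
    return (shape / scale) * z ** (shape - 1) * math.exp(-z ** shape)

def em_weights(data, pdfs, n_iter=200):
    """EM over the convex-combination weights only; component parameters fixed."""
    K = len(pdfs)
    w = [1.0 / K] * K
    for _ in range(n_iter):
        totals = [0.0] * K
        for x in data:
            like = [wk * f(x) for wk, f in zip(w, pdfs)]
            s = sum(like)
            for k in range(K):
                totals[k] += like[k] / s   # responsibility of component k
        w = [t / len(data) for t in totals]
    return w

# Simulate from the Weibull component by inverse CDF, then recover the weights.
rng = random.Random(7)
data = [2.0 * (-math.log(1.0 - rng.random())) ** (1 / 3.0) for _ in range(2000)]
w = em_weights(data, [lognorm_pdf, gamma_pdf, weibull_pdf])   # Weibull weight wins
```

The highest weight pointing at the generating family is exactly the discrimination rule the abstract describes, here with ML weights instead of a posterior.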

4.
Some matrix representations of diverse diagonal arrays are studied in this work; the results allow new definitions of classes of elliptical distributions indexed by kernels mixing Hadamard and usual products. A number of applications are derived in the setting of prior densities from the Bayesian multivariate regression model and families of non-elliptical distributions, such as the matrix multivariate generalized Birnbaum–Saunders density. The philosophy of the research about matrix representations of quadratic and inverse quadratic forms can be extended as a methodology for exploring possible new applications in non-standard distributions, matrix transformations and inference.

5.
6.
Let X1, X2, … be i.i.d. observations from a mixture density. The support of the unknown prior distribution is the union of two unknown intervals. The paper deals with an empirical Bayes testing approach (θ ≤ c against θ > c, where c is an unknown parameter to be estimated) in order to classify the observed variables as coming from one population or the other, according as θ belongs to one or the other unknown interval. Two methods are proposed in which asymptotically optimal decision rules are constructed while avoiding estimation of the unknown prior. The first method deals with the case of exponential families and generalizes the method of Johns and Van Ryzin (1971, 1972), whereas the second deals with families that are closed under convolution and is a Fourier method. The application of the Fourier method to some densities (e.g., contaminated Gaussian, exponential and double-exponential distributions) that are interesting in view of applications and cannot be studied by means of the direct method is also considered herein.

7.
We propose a Bayesian nonparametric procedure for density estimation for data in a closed, bounded interval, say [0,1]. To this end, we use a prior based on Bernstein polynomials. This corresponds to expressing the density of the data as a mixture of given beta densities, with random weights and a random number of components. The density estimate is then obtained as the corresponding predictive density function. Comparison with classical and Bayesian kernel estimates is provided. The proposed procedure is illustrated in an example; an MCMC algorithm for approximating the estimate is also discussed.
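For intuition only: the frequentist cousin of this construction (Vitale's Bernstein density estimator, not the Bayesian procedure of the paper) replaces the random weights by empirical bin proportions, giving a mixture of Beta(j, k - j + 1) densities:

```python
import math, random

def beta_pdf(x, a, b):
    lg = math.lgamma
    return math.exp(lg(a + b) - lg(a) - lg(b)) * x ** (a - 1) * (1 - x) ** (b - 1)

def bernstein_density(data, k):
    """Bernstein-polynomial density estimate on [0, 1]: a mixture of the k
    beta densities Beta(j, k - j + 1), weighted by the fraction of data
    falling in the j-th of k equal-width bins."""
    n = len(data)
    w = [0.0] * k
    for x in data:
        j = min(int(x * k), k - 1)   # bin index 0 .. k-1
        w[j] += 1.0 / n
    return lambda x: sum(w[j] * beta_pdf(x, j + 1, k - j + 1) for j in range(k))

# Uniform data should give an estimate close to the flat density 1 on [0, 1].
rng = random.Random(0)
data = [rng.random() for _ in range(5000)]
f_hat = bernstein_density(data, k=10)
```

In the paper the weights and the number of components k are instead given a prior, and the estimate is the predictive density.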

8.
Utilizing the notion of matching predictives as in Berger and Pericchi, we show that for the conjugate family of prior distributions in the normal linear model, the symmetric Kullback-Leibler divergence between two particular predictive densities is minimized when the prior hyperparameters are taken to be those corresponding to the predictive priors proposed in Ibrahim and Laud and in Laud and Ibrahim. The main application of this result is to Bayesian variable selection.
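The symmetric (Jeffreys) Kullback-Leibler divergence that the paper minimizes has a simple closed form when both densities are univariate normal; the sketch below is that textbook formula, not the paper's predictive densities:

```python
import math

def kl_normal(mu1, s1, mu2, s2):
    """KL(N(mu1, s1^2) || N(mu2, s2^2)) for univariate normals."""
    return math.log(s2 / s1) + (s1 ** 2 + (mu1 - mu2) ** 2) / (2 * s2 ** 2) - 0.5

def sym_kl_normal(mu1, s1, mu2, s2):
    """Symmetric KL divergence: the sum of the two directed divergences."""
    return kl_normal(mu1, s1, mu2, s2) + kl_normal(mu2, s2, mu1, s1)

d_same = sym_kl_normal(0.0, 1.0, 0.0, 1.0)   # identical densities: 0
d_shift = sym_kl_normal(0.0, 1.0, 1.0, 1.0)  # unit mean shift, equal variances: 1
```

Minimizing such a divergence over prior hyperparameters is the matching-predictives criterion used for variable selection.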

9.
Recent results in information theory, see Soofi (1996; 2001) for a review, include derivations of optimal information processing rules, including Bayes' theorem, for learning from data based on minimizing a criterion functional, namely output information minus input information, as shown in Zellner (1988; 1991; 1997; 2002). Herein, solution post-data densities for parameters are obtained and studied for cases in which the input information is that in (1) a likelihood function and a prior density; (2) only a likelihood function; and (3) neither a prior nor a likelihood function but only input information in the form of post-data moments of parameters, as in the Bayesian method of moments approach. It is then shown how optimal output densities can be employed to obtain predictive densities and optimal, finite-sample structural coefficient estimates using three alternative loss functions. Such optimal estimates are compared with the usual estimates, e.g., maximum likelihood, two-stage least squares, ordinary least squares, etc. Some Monte Carlo experimental results in the literature are discussed and implications for the future are provided.

10.
《Econometric Reviews》2013,32(2):203-215

11.
Construction methods for prior densities are investigated from a predictive viewpoint. Predictive densities for future observables are constructed by using observed data. The simultaneous distribution of future observables and observed data is assumed to belong to a parametric submodel of a multinomial model. Future observables and data are possibly dependent. The discrepancy of a predictive density from the true conditional density of future observables given the observed data is evaluated by the Kullback-Leibler divergence. It is proved that limits of Bayesian predictive densities form an essentially complete class. Latent information priors are defined as priors maximizing the conditional mutual information between the parameter and the future observables given the observed data. Minimax predictive densities are constructed as limits of Bayesian predictive densities based on prior sequences converging to the latent information priors.

12.
The posterior mode under the standardized prior density is proposed to estimate a mean (vector) parameter, and its potential usefulness is discussed. Priors in this study include a conjugate prior and its generalized forms. When a prior density is factored into the standardized prior density and the supporting measure density, our suggestion is to discard the latter density and then to calculate the posterior mode of the mean under the standardized prior density. This treatment makes our choice of a prior density flexible. Implications of this treatment are discussed.

13.
In the case of prior knowledge about the unknown parameter, the Bayesian predictive density coincides with the Bayes estimator for the true density in the sense of the Kullback-Leibler divergence, but this is no longer true if we consider another loss function. In this paper we present a generalized Bayes rule to obtain Bayes density estimators with respect to any α-divergence, including the Kullback-Leibler divergence and the Hellinger distance. For curved exponential models, we study the asymptotic behaviour of these predictive densities. We show that, whatever prior we use, the generalized Bayes rule improves (in a non-Bayesian sense) the estimative density corresponding to a bias modification of the maximum likelihood estimator. It gives rise to a correspondence between choosing a prior density for the generalized Bayes rule and fixing a bias for the maximum likelihood estimator in the classical setting. A criterion for comparing and selecting prior densities is also given.
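The squared Hellinger distance mentioned above (the α-divergence with α = 1/2, up to convention) can be computed numerically and checked against its known closed form for equal-variance normals; this is a generic numerical sketch, not the paper's generalized Bayes rule:

```python
import math

def norm_pdf(x, mu, s):
    return math.exp(-(x - mu) ** 2 / (2 * s * s)) / (s * math.sqrt(2 * math.pi))

def hellinger2_grid(p, q, xs, dx):
    """Squared Hellinger distance 1 - integral of sqrt(p*q) dx by a Riemann sum;
    the integral is the Bhattacharyya coefficient."""
    bc = sum(math.sqrt(p(x) * q(x)) for x in xs) * dx
    return 1.0 - bc

dx = 0.01
xs = [-10.0 + i * dx for i in range(2001)]
h2 = hellinger2_grid(lambda x: norm_pdf(x, 0.0, 1.0),
                     lambda x: norm_pdf(x, 1.0, 1.0), xs, dx)
closed = 1.0 - math.exp(-1.0 / 8.0)   # closed form for N(0,1) vs N(1,1)
```

The grid value agrees with the closed form to high accuracy, which is the kind of discrepancy a Hellinger-risk comparison of predictive densities would evaluate.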

14.
In this article, we propose some families of estimators for the finite population variance of the post-stratified sample mean using information on two auxiliary variables. The families of estimators are discussed in their optimum cases. The MSEs of these estimators are derived to the first order of approximation. The percent relative efficiencies of the proposed families of estimators are demonstrated with numerical illustrations.

15.
A general family of univariate distributions generated by beta random variables, proposed by Jones, has been discussed recently in the literature. This family of distributions possesses great flexibility while fitting symmetric as well as skewed models with varying tail weights. In a similar vein, we define here a family of univariate distributions generated by Stacy’s generalized gamma variables. For these two families of univariate distributions, we discuss maximum entropy characterizations under suitable constraints. Based on these characterizations, an expected ratio of quantile densities is proposed for the discrimination of members of these two broad families of distributions. Several special cases of these results are then highlighted. An alternative to the usual method of moments is also proposed for the estimation of the parameters, and the form of these estimators is particularly amenable to these two families of distributions.
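Sampling from a beta-generated distribution is a one-liner: if B ~ Beta(a, b) and F is the parent distribution function, then X = F^{-1}(B) follows the generated family. The sketch below uses a logistic parent with arbitrary illustrative parameters; it is a generic construction, not the estimators of the paper:

```python
import math, random

def beta_generated_sample(rng, a, b, inv_cdf, n):
    """Draw n values from a beta-generated family: X = F^{-1}(B), B ~ Beta(a, b),
    where inv_cdf is the quantile function of the parent distribution F."""
    return [inv_cdf(rng.betavariate(a, b)) for _ in range(n)]

def logit(u):
    """Quantile function of the standard logistic distribution."""
    return math.log(u / (1.0 - u))

rng = random.Random(3)
xs = beta_generated_sample(rng, 2.0, 2.0, logit, 20000)
m = sum(xs) / len(xs)   # a = b gives a symmetric density, so the mean is near 0
```

Replacing the beta generator by Stacy's generalized gamma (suitably transformed to [0, 1]) gives the second family discussed in the abstract.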

16.
The full Bayesian analysis of multinomial data using informative and flexible prior distributions has, in the past, been restricted by the technical problems involved in performing the numerical integrations required to obtain marginal densities for parameters and other functions thereof. In this paper it is shown that Gibbs sampling is suitable for obtaining accurate approximations to marginal densities for a large and flexible family of posterior distributions—the family. The method is illustrated with a three-way contingency table. Two alternative Monte Carlo strategies are also discussed.

17.
Finite mixtures of densities from an exponential family are frequently used in the statistical analysis of data. Modelling by finite mixtures of densities from different exponential families provides more flexibility in the fitting and yields better results. However, in mixture problems the log-likelihood function very often does not have an upper bound, and therefore a global maximum does not always exist. Redner and Walker (1984. Mixture densities, maximum likelihood and the EM algorithm. SIAM Rev. 26, 195–239) provide conditions that ensure the existence, consistency and asymptotic normality of the maximum likelihood estimator.
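The EM algorithm referenced above (Redner and Walker) can be sketched for the simplest case, a two-component Gaussian mixture with known unit variances; the data, seed, and initialization below are illustrative choices of mine:

```python
import math, random

def norm_pdf(x, mu, s):
    return math.exp(-(x - mu) ** 2 / (2 * s * s)) / (s * math.sqrt(2 * math.pi))

def em_two_gaussians(data, n_iter=100):
    """EM for a two-component Gaussian mixture with known unit variances:
    E-step computes responsibilities, M-step re-estimates weights and means."""
    w, mu = [0.5, 0.5], [min(data), max(data)]   # crude but separated start
    for _ in range(n_iter):
        r_sum, x_sum = [0.0, 0.0], [0.0, 0.0]
        for x in data:
            like = [w[k] * norm_pdf(x, mu[k], 1.0) for k in range(2)]
            s = like[0] + like[1]
            for k in range(2):
                r = like[k] / s          # responsibility of component k
                r_sum[k] += r
                x_sum[k] += r * x
        w = [r_sum[k] / len(data) for k in range(2)]
        mu = [x_sum[k] / r_sum[k] for k in range(2)]
    return w, mu

rng = random.Random(11)
data = ([rng.gauss(-2.0, 1.0) for _ in range(1000)]
        + [rng.gauss(2.0, 1.0) for _ in range(1000)])
w, mu = em_two_gaussians(data)   # recovers means near -2 and 2, weights near 1/2
```

Fixing the variances sidesteps exactly the unboundedness of the likelihood the abstract warns about: letting a component variance shrink to zero at a data point sends the log-likelihood to infinity.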

18.
Information in a statistical procedure arising from sources other than sampling is called prior information, and its incorporation into the procedure forms the basis of the Bayesian approach to statistics. Under hypergeometric sampling, methodology is developed which quantifies the amount of information provided by the sample data relative to that provided by the prior distribution and allows for a ranking of prior distributions with respect to conservatism, where conservatism refers to restraint of extraneous information embedded in any prior distribution. The most conservative prior distribution from a specified class (each member of which carries the available prior information) is that prior distribution within the class over which the likelihood function has the greatest average domination. Four different families of prior distributions are developed by considering a Bayesian approach to the formation of lots. The most conservative prior distribution from each of the four families is determined and compared for the situation when no prior information is available. The results of the comparison advocate the use of the Polya (beta-binomial) prior distribution in hypergeometric sampling.
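The Polya (beta-binomial) distribution advocated above is the binomial with its success probability integrated out against a Beta(a, b) prior; a minimal sketch of its pmf, unrelated to the paper's conservatism ranking:

```python
import math

def beta_binomial_pmf(k, n, a, b):
    """Polya (beta-binomial) pmf: C(n, k) * B(k + a, n - k + b) / B(a, b),
    computed through log-gamma for numerical stability."""
    log_beta = lambda x, y: math.lgamma(x) + math.lgamma(y) - math.lgamma(x + y)
    return math.comb(n, k) * math.exp(log_beta(k + a, n - k + b) - log_beta(a, b))

# With a = b = 1 (uniform prior) the pmf is uniform on {0, ..., n}.
probs = [beta_binomial_pmf(k, 10, 1.0, 1.0) for k in range(11)]
```

The uniform special case makes the "no prior information" comparison in the abstract concrete: a flat Beta prior spreads the predictive mass evenly over all lot compositions.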

19.
The completely random character of radioactive disintegration provides the basis of a strong justification for a Poisson linear model for single-photon emission computed tomography data, which can be used to produce reconstructions of isotope densities, whether by maximum likelihood or Bayesian methods. However, such a model requires the construction of a matrix of weights, which represent the mean rates of arrival at each detector of photons originating from each point within the body space. Two methods of constructing these weights are discussed, and reconstructions resulting from phantom and real data are presented.
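The standard maximum-likelihood reconstruction for this Poisson linear model is the MLEM (EM) iteration; the sketch below applies it to a tiny noise-free example with a hypothetical 2x2 weight matrix, chosen by me only to show the fixed point:

```python
def mlem(A, y, n_iter=5000):
    """MLEM reconstruction for y_i ~ Poisson((A*lam)_i) with weight matrix A:
    lam_j <- lam_j * (sum_i A_ij * y_i / yhat_i) / (sum_i A_ij)."""
    n_det, n_pix = len(A), len(A[0])
    lam = [1.0] * n_pix
    col_sum = [sum(A[i][j] for i in range(n_det)) for j in range(n_pix)]
    for _ in range(n_iter):
        yhat = [sum(A[i][j] * lam[j] for j in range(n_pix)) for i in range(n_det)]
        for j in range(n_pix):
            back = sum(A[i][j] * y[i] / yhat[i] for i in range(n_det))
            lam[j] *= back / col_sum[j]
    return lam

# Hypothetical weights and isotope densities; y is the exact mean count vector.
A = [[0.8, 0.2], [0.3, 0.7]]
true_lam = [10.0, 5.0]
y = [sum(A[i][j] * true_lam[j] for j in range(2)) for i in range(2)]
lam = mlem(A, y)   # converges back to true_lam in this noise-free case
```

The multiplicative update keeps the density estimates nonnegative automatically, which is one reason EM is the workhorse for this model.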



Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号