Similar Documents
20 similar documents found (search time: 305 ms)
1.
In the problem of parametric statistical inference with a finite parameter space, we propose some simple rules for defining posterior upper and lower probabilities directly from the observed likelihood function, without using any prior information. The rules satisfy the likelihood principle and a basic consistency principle ('avoiding sure loss'), they produce vacuous inferences when the likelihood function is constant, and they have other symmetry, monotonicity and continuity properties. One of the rules also satisfies fundamental frequentist principles. The rules can be used to eliminate nuisance parameters, to interpret the likelihood function, and to use it in making decisions. To compare the rules, they are applied to the problem of sampling from a finite population. Our results indicate that there are objective statistical methods which can reconcile three general approaches to statistical inference: likelihood inference, coherent inference and frequentist inference.

2.
This paper considers the problem of making statistical inferences about a parameter when a narrow interval centred at a given value of the parameter is considered special, which is interpreted as meaning that there is a substantial degree of prior belief that the true value of the parameter lies in this interval. A clear justification of the practical importance of this problem is provided. The main difficulty with the standard Bayesian solution to this problem is discussed and, as a result, a pseudo-Bayesian solution is put forward based on determining lower limits for the posterior probability of the parameter lying in the special interval by means of a sensitivity analysis. Since it is not assumed that prior beliefs necessarily need to be expressed in terms of prior probabilities, nor that post-data probabilities must be Bayesian posterior probabilities, hybrid methods of inference are also proposed that are based on specific ways of measuring and interpreting the classical concept of significance. The various methods that are outlined are compared and contrasted both at a foundational level and from a practical viewpoint, by applying them to real data from meta-analyses that appeared in a well-known medical article.

3.
Suppose some quantiles of the prior distribution of a nonnegative parameter θ are specified. Instead of eliciting just one prior density function, consider the class Γ of all the density functions compatible with the quantile specification. Given a likelihood function, find the posterior upper and lower bounds for the expected value of any real-valued function h(θ), as the density varies in Γ. Such a scheme agrees with a robust Bayesian viewpoint. Under mild regularity conditions about h(θ) and the likelihood, a procedure for finding bounds is derived and applied to an example, after transforming the given functional optimisation problems into finite-dimensional ones.
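The finite-dimensional reduction mentioned above can be illustrated with a naive sketch: for a quantile class, the extremal priors are discrete, placing each quantile bin's mass at a single point, so posterior bounds on E[h(θ)] come from optimizing those point locations. The quantile specification, likelihood, and functional below are all hypothetical choices, and the grid search is a crude stand-in for the paper's actual procedure.

```python
import itertools
import math

# Hypothetical quantile specification for a nonnegative parameter theta:
# each bin carries prior mass 0.25 (bounded support assumed for tractability).
bins = [(0.0, 1.0), (1.0, 2.0), (2.0, 4.0), (4.0, 8.0)]
mass = [0.25] * 4

def lik(t):
    # Illustrative Poisson-type likelihood for an observed count x = 3.
    return t**3 * math.exp(-t)

def h(t):
    # Functional of interest: here just theta, giving the posterior mean.
    return t

def post_expect(points):
    # Posterior expectation when bin i's mass sits at the single point points[i].
    num = sum(p * h(t) * lik(t) for p, t in zip(mass, points))
    den = sum(p * lik(t) for p, t in zip(mass, points))
    return num / den

# Crude grid search over point placements within each bin.
grids = [[a + (b - a) * k / 10 for k in range(11)] for a, b in bins]
values = [post_expect(pts) for pts in itertools.product(*grids)]
lower, upper = min(values), max(values)
```

Refining the grids tightens the bracket; the paper's regularity conditions are what justify replacing this search with a genuine finite-dimensional optimisation.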

4.
In this paper, we develop noninformative priors for the generalized half-normal distribution when the scale and shape parameters are of interest, respectively. In particular, we develop the first- and second-order matching priors for both parameters. For the shape parameter, we reveal that the second-order matching prior is a highest posterior density (HPD) matching prior and a cumulative distribution function (CDF) matching prior. In addition, it matches the alternative coverage probabilities up to the second order. For the scale parameter, we reveal that the second-order matching prior is neither an HPD matching prior nor a CDF matching prior. Also, it does not match the alternative coverage probabilities up to the second order. For both parameters, we show that the one-at-a-time reference prior is a second-order matching prior. However, Jeffreys’ prior is neither a first- nor a second-order matching prior. Methods are illustrated with both a simulation study and a real data set.

5.
It is well known that Jeffreys' prior is asymptotically least favorable under the entropy risk, i.e. it asymptotically maximizes the mutual information between the sample and the parameter. However, in this paper we show that the prior that minimizes (subject to certain constraints) the mutual information between the sample and the parameter is natural conjugate when the model belongs to a natural exponential family. A conjugate prior can thus be regarded as maximally informative in the sense that it minimizes the weight of the observations on inferences about the parameter; in other words, the expected relative entropy between prior and posterior is minimized when a conjugate prior is used.

6.
An extension to the class of conventional numerical probability models for nondeterministic phenomena has been identified by Dempster and Shafer in the class of belief functions. We were originally stimulated by this work, but have since come to believe that the bewildering diversity of uncertainty and chance phenomena cannot be encompassed within either the conventional theory of probability, its relatively minor modifications (e.g., not requiring countable additivity), or the theory of belief functions. In consequence, we have been examining the properties of, and prospects for, the generalization of belief functions that is known as upper and lower, or interval-valued, probability. After commenting on what we deem to be problematic elements of common personalist/subjectivist/Bayesian positions that employ either finitely or countably additive probability to represent strength of belief and that are intended to be normative for rational behavior, we sketch some of the ways in which the set of lower envelopes, a subset of the set of lower probabilities that contains the belief functions, enables us to preserve the core of Bayesian reasoning while admitting a more realistic (e.g., in its reduced insistence upon an underlying precision in our beliefs) class of probability-like models. Particular advantages of lower envelopes are identified in the area of the aggregation of beliefs.

The focus of our own research is in the area of objective probabilistic reasoning about time series generated by physical or other empirical (e.g., societal) processes. As it is not the province of a general mathematical methodology such as probability theory to rule empirical phenomena out of existence a priori, we are concerned by the constraint imposed by conventional probability theory that an empirical process of bounded random variables that is believed to have a time-invariant generating mechanism must then exhibit long-run stable time averages. We have shown that lower probability models that allow for unstable time averages can only lie in the class of undominated lower probabilities, a subset of lower probability models disjoint from the lower envelopes and having the weakest relationship to conventional probability measures. Our research has been devoted to exploring and developing the theory of undominated lower probabilities so that it can be applied to model and understand nondeterministic phenomena, and we have also been interested in identifying actual physical processes (e.g., flicker noises) that exhibit behavior requiring such novel models.


7.
In finance, inferences about future asset returns are typically quantified with the use of parametric distributions and single-valued probabilities. It is attractive to use less restrictive inferential methods, including nonparametric methods which do not require distributional assumptions about variables, and imprecise probability methods which generalize the classical concept of probability to set-valued quantities. Main attractions include the flexibility of the inferences to adapt to the available data and that the level of imprecision in inferences can reflect the amount of data on which these are based. This paper introduces nonparametric predictive inference (NPI) for stock returns. NPI is a statistical approach based on few assumptions, with inferences strongly based on data and with uncertainty quantified via lower and upper probabilities. NPI is presented for inference about future stock returns, as a measure for risk and uncertainty, and for pairwise comparison of two stocks based on their future aggregate returns. The proposed NPI methods are illustrated using historical stock market data.
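A minimal sketch of the kind of lower and upper probability NPI produces, based on Hill's assumption A(n): the next observation falls between consecutive order statistics with probability 1/(n+1). For a simple threshold event "the next return exceeds r" (with r distinct from all observations), this yields the interval below. The return data are invented, and the event is far simpler than the aggregate-return comparisons treated in the paper.

```python
def npi_threshold(returns, r):
    """Lower and upper probability that the next observation exceeds r,
    under Hill's A(n) assumption (r assumed distinct from all data points)."""
    n = len(returns)
    above = sum(1 for x in returns if x > r)
    # Imprecision of exactly 1/(n+1): it shrinks as more data arrive.
    return above / (n + 1), (above + 1) / (n + 1)

# Hypothetical daily returns for one stock.
rets = [0.012, -0.004, 0.021, -0.017, 0.008, 0.005, -0.009, 0.015, 0.003, -0.001]
lo, up = npi_threshold(rets, 0.0)
# 6 of the 10 returns are positive, so lo = 6/11 and up = 7/11.
```

The gap between `lo` and `up` is exactly 1/(n+1), illustrating the abstract's point that the level of imprecision reflects the amount of data.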

8.
In this paper, we develop non-informative priors for the inverse Weibull model when the parameters of interest are the scale and shape parameters. We develop the first-order and second-order matching priors for both parameters. For the scale parameter, we reveal that the second-order matching prior is not a highest posterior density (HPD) matching prior, does not match the alternative coverage probabilities up to the second order, and is not a cumulative distribution function (CDF) matching prior. For the shape parameter, we reveal that the second-order matching prior is an HPD matching prior and a CDF matching prior, and also matches the alternative coverage probabilities up to the second order. For both parameters, we reveal that the one-at-a-time reference prior is a second-order matching prior, but Jeffreys’ prior is neither a first- nor a second-order matching prior. A simulation study is performed to compare the target coverage probabilities, and a real example is given.

9.
10.
In the Bayesian analysis of a multiple-recapture census, different diffuse prior distributions can lead to markedly different inferences about the population size N. Through consideration of the Fisher information matrix it is shown that the number of captures in each sample typically provides little information about N. This suggests that if there is no prior information about capture probabilities, then knowledge of just the sample sizes and not the number of recaptures should leave the distribution of N unchanged. A prior model that has this property is identified and the posterior distribution is examined. In particular, asymptotic estimates of the posterior mean and variance are derived. Differences between Bayesian and classical point and interval estimators are illustrated through examples.

11.
This paper examines Bayesian posterior probabilities as a function of selected elements within the set of data, x, when the prior distribution is assumed fixed. The posterior probabilities considered here are those of the parameter vector lying in a subset of the total parameter space. The theorems of this paper provide insight into the effect of elements within x on this posterior probability. These results have applications, for example, in the study of the impact of outliers within the data and in the isolation of misspecified parameters in a model.

12.
A Bayesian test for the point null testing problem in the multivariate case is developed. A procedure to get the mixed distribution using the prior density is suggested. For comparisons between the Bayesian and classical approaches, lower bounds on posterior probabilities of the null hypothesis, over some reasonable classes of prior distributions, are computed and compared with the p-value of the classical test. With our procedure, a better approximation is obtained because the p-value is in the range of the Bayesian measures of evidence.
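One widely used lower bound of this kind (a standard calibration, not necessarily the bound derived in the paper) is the Sellke-Bayarri-Berger bound B(p) >= -e * p * log(p) on the Bayes factor in favour of a point null, valid for p < 1/e. Converted to a posterior probability with equal prior odds, it shows how far a small p-value can be from strong evidence against the null:

```python
import math

def bayes_factor_bound(p):
    """Sellke-Bayarri-Berger lower bound on the Bayes factor in favour
    of the point null; requires p < 1/e."""
    return -math.e * p * math.log(p)

def posterior_null_bound(p, prior_null=0.5):
    """Lower bound on P(H0 | data) implied by the Bayes-factor bound."""
    b = bayes_factor_bound(p)
    prior_odds = (1 - prior_null) / prior_null
    return 1.0 / (1.0 + prior_odds / b)

val = posterior_null_bound(0.05)
print(round(val, 3))  # about 0.289, far above the p-value of 0.05
```

This is the sense in which a p-value can lie "in the range of the Bayesian measures of evidence" only after recalibration: p = 0.05 corresponds to a posterior null probability of at least roughly 0.29 under this bound.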

13.
It has long been asserted that in univariate location-scale models, when concerned with inference for either the location or scale parameter, the use of the inverse of the scale parameter as a Bayesian prior yields posterior credible sets that have exactly the correct frequentist confidence set interpretation. This claim dates back at least to Peers, and has subsequently been noted by various authors, with varying degrees of justification. We present a simple, direct demonstration of the exact matching property of the posterior credible sets derived under this prior in the univariate location-scale model. This is done by establishing an equivalence between the conditional frequentist and posterior densities of the pivotal quantities on which conditional frequentist inferences are based.
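The matching claim is easy to sanity-check by simulation: under the prior π(μ, σ) ∝ 1/σ in the normal case, the 95% posterior credible interval for the mean coincides with the classical t-interval, so its frequentist coverage should be exactly 0.95. A quick Monte Carlo check (sample size, seed, and replication count are arbitrary choices, and this is an empirical illustration, not the paper's analytical demonstration):

```python
import math
import random
import statistics

random.seed(1)
T975_DF9 = 2.2622  # 97.5% quantile of Student's t with 9 degrees of freedom

# Under pi(mu, sigma) proportional to 1/sigma, the 95% credible interval for
# the normal mean equals the classical t-interval, so we can check its
# frequentist coverage directly.
n, reps, hits = 10, 20000, 0
for _ in range(reps):
    x = [random.gauss(0.0, 1.0) for _ in range(n)]
    m, s = statistics.mean(x), statistics.stdev(x)
    half = T975_DF9 * s / math.sqrt(n)
    if m - half <= 0.0 <= m + half:   # does the interval cover the true mean 0?
        hits += 1
coverage = hits / reps
```

The simulated coverage lands at 0.95 up to Monte Carlo error, consistent with the exact matching property the abstract describes.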

14.
Relative surprise inferences are based on how beliefs change from a priori to a posteriori. As they are based on the posterior distribution of the integrated likelihood, inferences of this type are invariant under relabellings of the parameter of interest. The authors demonstrate that these inferences possess a certain optimality property. Further, they develop computational techniques for implementing them, provided that algorithms are available to sample from the prior and posterior distributions.

15.
For normal populations with unequal variances, we develop matching priors and reference priors for a linear combination of the means. Here, we find three second-order matching priors: a highest posterior density (HPD) matching prior, a cumulative distribution function (CDF) matching prior, and a likelihood ratio (LR) matching prior. Furthermore, we show that the reference priors are all first-order matching priors but do not satisfy the second-order matching criterion, and we establish the symmetry and unimodality of the posterior under the developed priors. The results of a simulation indicate that the second-order matching prior outperforms the reference priors in terms of matching the target coverage probabilities, in a frequentist sense. Finally, using real data, we compare the Bayesian credible intervals based on the developed priors with classical confidence intervals.

16.
In statistical practice, it is quite common that some data are unknown or disregarded for various reasons. In the present paper, on the basis of a multiply censored sample from a Pareto population, the problem of finding the highest posterior density (HPD) estimates of the inequality and precision parameters is discussed assuming a natural joint conjugate prior. HPD estimates are obtained in closed form for complete or right-censored data. In the general multiple censoring case, the existence and uniqueness of the estimates are established. Explicit lower and upper bounds are also provided. Due to the posterior unimodality, HPD credibility regions are simply connected sets. For illustration, two numerical examples are included.

17.
We consider a general class of prior distributions for nonparametric Bayesian estimation which uses finite random series with a random number of terms. A prior is constructed through distributions on the number of basis functions and the associated coefficients. We derive a general result on adaptive posterior contraction rates for all smoothness levels of the target function in the true model by constructing an appropriate ‘sieve’ and applying the general theory of posterior contraction rates. We apply this general result on several statistical problems such as density estimation, various nonparametric regressions, classification, spectral density estimation and functional regression. The prior can be viewed as an alternative to the commonly used Gaussian process prior, but properties of the posterior distribution can be analysed by relatively simpler techniques. An interesting approximation property of B‐spline basis expansion established in this paper allows a canonical choice of prior on coefficients in a random series and allows a simple computational approach without using Markov chain Monte Carlo methods. A simulation study is conducted to show that the accuracy of the Bayesian estimators based on the random series prior and the Gaussian process prior are comparable. We apply the method on Tecator data using functional regression models.

18.
The authors consider the problem of Bayesian variable selection for proportional hazards regression models with right censored data. They propose a semi-parametric approach in which a nonparametric prior is specified for the baseline hazard rate and a fully parametric prior is specified for the regression coefficients. For the baseline hazard, they use a discrete gamma process prior, and for the regression coefficients and the model space, they propose a semi-automatic parametric informative prior specification that focuses on the observables rather than the parameters. To implement the methodology, they propose a Markov chain Monte Carlo method to compute the posterior model probabilities. Examples using simulated and real data are given to demonstrate the methodology.

19.
Optimizing criteria for choosing a confidence set for a parameter are formulated as mathematical programming problems. The two optimizing criteria, probability of coverage and size of set, give rise to a pair of inverse programming problems. Several examples are worked out. The programming problems are then formulated to allow the incorporation of partial information about the parameter. By varying the family of prior distributions, a continuum of problems from the frequency approach to a Bayesian approach is obtained. Some examples are considered in which the family of priors contains more than one but not all prior distributions.

20.
The Dempster-Shafer theory of belief functions is a method of quantifying uncertainty that generalizes probability theory. We review the theory of belief functions in the context of statistical inference. We mainly focus on a particular belief function based on the likelihood function and its application to problems with partial prior information. We also consider connections to upper and lower probabilities and Bayesian robustness.
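A common likelihood-based belief construction (plausibly of the kind reviewed here, though the paper's exact definition may differ) takes the plausibility of a hypothesis set A to be the supremum over A of the relative likelihood L(θ)/L(θ̂). A binomial illustration with invented data:

```python
def rel_lik(theta, k=7, n=10):
    """Relative likelihood L(theta) / L(theta_hat) for binomial data:
    k = 7 successes in n = 10 trials (hypothetical numbers)."""
    mle = k / n
    return (theta**k * (1 - theta)**(n - k)) / (mle**k * (1 - mle)**(n - k))

# Plausibility of the hypothesis A = {theta <= 0.5}: sup of the relative
# likelihood over A, approximated here by a fine grid including 0.5.
A = [i / 1000 for i in range(0, 501)]
pl = max(rel_lik(t) for t in A)   # attained at theta = 0.5 since mle = 0.7
```

The resulting plausibility (about 0.44) is interpretable without any prior, which is what makes this belief function attractive for problems with only partial prior information.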


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号