Similar Articles
20 similar articles found (search time: 15 ms)
1.
This paper considers the problem of making statistical inferences about a parameter when a narrow interval centred at a given value of the parameter is considered special, which is interpreted as meaning that there is a substantial degree of prior belief that the true value of the parameter lies in this interval. A clear justification of the practical importance of this problem is provided. The main difficulty with the standard Bayesian solution to this problem is discussed and, as a result, a pseudo-Bayesian solution is put forward based on determining lower limits for the posterior probability of the parameter lying in the special interval by means of a sensitivity analysis. Since it is not assumed that prior beliefs necessarily need to be expressed in terms of prior probabilities, nor that post-data probabilities must be Bayesian posterior probabilities, hybrid methods of inference are also proposed that are based on specific ways of measuring and interpreting the classical concept of significance. The various methods that are outlined are compared and contrasted both at a foundational level and from a practical viewpoint, by applying them to real data from meta-analyses that appeared in a well-known medical article.
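As a hedged numerical sketch of the sensitivity-analysis idea (not the authors' exact procedure): under a normal likelihood, approximate the narrow special interval by a point mass at its centre theta0, place the remaining prior mass on a N(theta0, tau^2) spread, and minimize the posterior probability of the special value over a grid of tau. All numerical settings below are illustrative assumptions.

import numpy as np
from scipy.stats import norm

def posterior_prob_special(xbar, n, sigma, theta0, p, tau):
    """Posterior probability of the special value, approximating the
    narrow special interval by a point mass at theta0."""
    m1 = norm.pdf(xbar, loc=theta0, scale=sigma / np.sqrt(n))              # marginal given theta = theta0
    m2 = norm.pdf(xbar, loc=theta0, scale=np.sqrt(sigma**2 / n + tau**2))  # marginal under the spread prior
    return p * m1 / (p * m1 + (1 - p) * m2)

# Illustrative data: sample mean 2.0 from n = 10 observations with sigma = 1.
xbar, n, sigma, theta0, p = 2.0, 10, 1.0, 0.0, 0.5
taus = np.linspace(0.1, 10.0, 200)   # sensitivity grid for the prior spread
probs = [posterior_prob_special(xbar, n, sigma, theta0, p, t) for t in taus]
print(f"lower limit over the tau grid: {min(probs):.4f}")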

2.
In this paper, we develop noninformative priors for the generalized half-normal distribution when the scale and shape parameters, respectively, are of interest. In particular, we develop first- and second-order matching priors for both parameters. For the shape parameter, we show that the second-order matching prior is a highest posterior density (HPD) matching prior and a cumulative distribution function (CDF) matching prior, and that it matches the alternative coverage probabilities up to the second order. For the scale parameter, we show that the second-order matching prior is neither an HPD matching prior nor a CDF matching prior, and that it does not match the alternative coverage probabilities up to the second order. For both parameters, we show that the one-at-a-time reference prior is a second-order matching prior, whereas Jeffreys' prior is neither a first- nor a second-order matching prior. The methods are illustrated with both a simulation study and a real data set.
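The generalized half-normal posterior itself requires numerical work, but the matching property being targeted is easy to check by simulation in a conjugate stand-in. In the sketch below (an assumption-laden substitute, not the paper's model), the rate of an exponential sample under Jeffreys' prior pi(lambda) proportional to 1/lambda has a Gamma(n, sum x_i) posterior, and one-sided credible limits then attain their nominal frequentist coverage:

import numpy as np
from scipy.stats import gamma

rng = np.random.default_rng(0)
n, lam_true, alpha, reps = 20, 2.0, 0.05, 5000
covered = 0
for _ in range(reps):
    x = rng.exponential(scale=1 / lam_true, size=n)
    # Posterior under Jeffreys' prior: lambda | x ~ Gamma(n, rate = sum(x)).
    upper = gamma.ppf(1 - alpha, a=n, scale=1 / x.sum())
    covered += lam_true <= upper
print(f"coverage of the one-sided 95% credible limit: {covered / reps:.3f}")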

3.
The paper develops some objective priors for the correlation coefficient of the bivariate normal distribution. The criterion used is the asymptotic matching of coverage probabilities of Bayesian credible intervals with the corresponding frequentist coverage probabilities. The paper uses various matching criteria, namely quantile matching, highest posterior density matching, and matching via inversion of test statistics. Each matching criterion leads to a different prior for the parameter of interest. We evaluate their performance by comparing credible intervals through simulation studies. In addition, inference through several likelihood-based methods is also discussed.
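Such coverage comparisons can be mimicked with a simple grid posterior. The sketch below does not use the paper's priors or exact likelihood: it assumes Fisher's z approximation, z(r) ~ N(z(rho), 1/(n-3)), together with a uniform prior on rho, and checks the frequentist coverage of the resulting equal-tailed credible interval:

import numpy as np
from scipy.stats import norm

def credible_interval_rho(r, n, alpha=0.05):
    """Equal-tailed credible interval for rho from a grid posterior based
    on Fisher's z approximation and a uniform prior on rho."""
    grid = np.linspace(-0.999, 0.999, 2001)
    like = norm.pdf(np.arctanh(r), loc=np.arctanh(grid), scale=1 / np.sqrt(n - 3))
    cdf = np.cumsum(like) / like.sum()   # grid is uniform, so the spacing cancels
    return grid[np.searchsorted(cdf, alpha / 2)], grid[np.searchsorted(cdf, 1 - alpha / 2)]

rng = np.random.default_rng(1)
rho, n, hits, reps = 0.5, 30, 0, 2000
for _ in range(reps):
    xy = rng.multivariate_normal([0, 0], [[1, rho], [rho, 1]], size=n)
    lo, hi = credible_interval_rho(np.corrcoef(xy[:, 0], xy[:, 1])[0, 1], n)
    hits += lo <= rho <= hi
print(f"estimated coverage: {hits / reps:.3f}")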

4.
In this paper, we develop non-informative priors for the inverse Weibull model when the parameters of interest are the scale and shape parameters. We develop first-order and second-order matching priors for both parameters. For the scale parameter, we show that the second-order matching prior is not a highest posterior density (HPD) matching prior, does not match the alternative coverage probabilities up to the second order, and is not a cumulative distribution function (CDF) matching prior. For the shape parameter, we show that the second-order matching prior is an HPD matching prior and a CDF matching prior, and that it also matches the alternative coverage probabilities up to the second order. For both parameters, we show that the one-at-a-time reference prior is the second-order matching prior, whereas Jeffreys' prior is neither a first-order nor a second-order matching prior. A simulation study is performed to compare the target coverage probabilities, and a real example is given.

5.
Deterministic Bayesian inference for latent Gaussian models has recently become available using integrated nested Laplace approximations (INLA). Applying the INLA methodology, marginal estimates for elements of the latent field can be computed efficiently, providing relevant summary statistics like posterior means, variances and pointwise credible intervals. In this article, we extend the use of INLA to joint inference and present an algorithm to derive analytical simultaneous credible bands for subsets of the latent field. The algorithm is based on approximating the joint distribution of the subsets by multivariate Gaussian mixtures. Additionally, we present a saddlepoint approximation to compute Bayesian contour probabilities, representing the posterior support of fixed parameter vectors of interest. We perform a simulation study and apply the given methods to two real examples.
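The paper derives its bands analytically from Gaussian-mixture approximations inside INLA; as a generic stand-in, the sketch below shows the standard sample-based construction of a simultaneous band: standardize pointwise deviations and take a quantile of the per-draw maximum. It assumes a matrix of joint posterior draws is already available.

import numpy as np

def simultaneous_band(samples, alpha=0.05):
    """Simultaneous (1 - alpha) credible band from joint posterior draws;
    samples has shape (n_draws, n_locations)."""
    mean = samples.mean(axis=0)
    sd = samples.std(axis=0, ddof=1)
    max_dev = np.max(np.abs(samples - mean) / sd, axis=1)  # per-draw max standardized deviation
    q = np.quantile(max_dev, 1 - alpha)
    return mean - q * sd, mean + q * sd

rng = np.random.default_rng(2)
draws = np.cumsum(rng.normal(size=(4000, 50)), axis=1)  # toy joint draws of a "latent field"
lower, upper = simultaneous_band(draws)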

6.
We obtain approximate Bayes–confidence intervals for a scalar parameter based on directed likelihood. The posterior probabilities of these intervals agree with their unconditional coverage probabilities to fourth order, and with their conditional coverage probabilities to third order. These intervals are constructed for arbitrary smooth prior distributions. A key feature of the construction is that log-likelihood derivatives beyond second order are not required, unlike the asymptotic expansions of Severini.

7.
Quantitative trait locus (QTL) mapping is a growing field in statistical genetics, but dealing with this type of data from a statistical perspective is often perilous. In this paper we extend and apply a Markov chain Monte Carlo model composition (MC3) technique to a data set from the plant Arabidopsis thaliana in order to locate the QTL associated with cotyledon opening. The posterior model probabilities, as well as the marginal posterior probability of each locus belonging to the model, are presented. Furthermore, we show how the MC3 method can handle the situation where the sample size is smaller than the number of parameters in a model, using a restricted model space approach.
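A hedged sketch of an MC3-style search over locus-inclusion indicators is given below. It is not the paper's implementation: it simulates marker data, approximates each model's marginal likelihood by exp(-BIC/2) from an ordinary least-squares fit, and flips one indicator per Metropolis step, recording marginal posterior inclusion probabilities.

import numpy as np

def bic(y, X):
    """BIC of an OLS fit; exp(-BIC/2) approximates the marginal likelihood."""
    n = len(y)
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    rss = np.sum((y - X @ beta) ** 2)
    return n * np.log(rss / n) + X.shape[1] * np.log(n)

rng = np.random.default_rng(3)
n, p = 100, 8
M = rng.integers(0, 2, size=(n, p)).astype(float)        # toy marker genotypes
y = 1.5 * M[:, 1] - 1.0 * M[:, 4] + rng.normal(size=n)   # toy trait with two true loci

incl = np.zeros(p, dtype=bool)                           # locus-inclusion indicators
visits = np.zeros(p)
cur = bic(y, np.column_stack([np.ones(n), M[:, incl]]))
for _ in range(20000):
    j = rng.integers(p)                                  # propose flipping one locus
    prop = incl.copy(); prop[j] = ~prop[j]
    new = bic(y, np.column_stack([np.ones(n), M[:, prop]]))
    if rng.random() < np.exp(np.minimum((cur - new) / 2, 0)):  # MH acceptance
        incl, cur = prop, new
    visits += incl
print("marginal posterior inclusion probabilities:", np.round(visits / 20000, 2))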

8.
In practice, it often happens that we have a number of base classification methods and are not able to determine clearly which of them is optimal in the sense of the smallest error rate. A combined method then allows us to consolidate information from multiple sources into a better classifier. I propose a different, sequential approach. Sequentiality is understood here in the sense of appending posterior probabilities to the original data set, so that the data thus created are used during the classification process. Posterior probabilities obtained from the base classifiers are combined using all of the combining methods, and these probabilities are in turn combined using a mean combining method; the resulting posterior probabilities are added to the original data set as additional features. In each step the additional probabilities are updated so as to achieve the minimum error rate among the base methods. Experimental results on different data sets demonstrate that the method is efficient and that this approach outperforms the base methods, providing a reduction in the mean classification error rate.
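A minimal scikit-learn sketch of the core step, appending out-of-fold posterior probabilities from base classifiers to the original feature set (a single pass with assumed base learners and a public data set; the paper's full procedure iterates this and combines several combining methods):

import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_predict, cross_val_score
from sklearn.naive_bayes import GaussianNB

X, y = load_breast_cancer(return_X_y=True)
bases = [LogisticRegression(max_iter=5000), GaussianNB()]
# Out-of-fold posterior probabilities avoid leaking the training labels.
probs = [cross_val_predict(m, X, y, cv=5, method="predict_proba") for m in bases]
X_aug = np.hstack([X] + probs)   # original features plus posterior probabilities

final = RandomForestClassifier(random_state=0)
print("original features     :", cross_val_score(final, X, y, cv=5).mean())
print("with added posteriors :", cross_val_score(final, X_aug, y, cv=5).mean())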

9.
Bayesian hierarchical formulations are utilized by the U.S. Bureau of Labor Statistics (BLS) with respondent-level data for missing-item imputation because these formulations are readily parameterized to capture correlation structures. BLS collects survey data under informative sampling designs that assign probabilities of inclusion correlated with the response; sampling-weighted pseudo posterior distributions are estimated on these data for asymptotically unbiased inference about population model parameters. Computation is expensive and does not support BLS production schedules. We propose a new method to scale the computation that divides the data into smaller subsets, estimates a sampling-weighted pseudo posterior distribution, in parallel, for every subset, and combines the pseudo posterior parameter samples from all the subsets through their mean in the Wasserstein space of order 2. We construct conditions on a class of sampling designs under which posterior consistency of the proposed method is achieved. We demonstrate on both synthetic data and in application to the Current Employment Statistics survey that our method produces results of similar accuracy as the usual approach while offering substantially faster computation.
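For a scalar parameter, the mean in the Wasserstein space of order 2 has a simple form: average the subset posteriors' quantile functions. The sketch below illustrates only that combination step, on assumed subset samples (the paper's sampling-weighted, multivariate version is more involved):

import numpy as np

def wasserstein2_barycenter_1d(subset_samples, n_grid=1000):
    """W2 barycenter of one-dimensional subset posteriors, obtained by
    averaging their empirical quantile functions."""
    u = (np.arange(n_grid) + 0.5) / n_grid
    quantiles = np.stack([np.quantile(s, u) for s in subset_samples])
    return quantiles.mean(axis=0)   # draws from the combined pseudo posterior

rng = np.random.default_rng(4)
subsets = [rng.normal(loc=m, scale=s, size=5000)   # stand-ins for subset pseudo posteriors
           for m, s in [(0.9, 0.30), (1.1, 0.25), (1.0, 0.35)]]
combined = wasserstein2_barycenter_1d(subsets)
print(f"combined posterior mean: {combined.mean():.3f}")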

10.
The authors consider the correlation between two arbitrary functions of the data and a parameter when the parameter is regarded as a random variable with a given prior distribution. They show how to compute such a correlation and use closed-form expressions to assess the dependence between parameters and various classical or robust estimators thereof, as well as between p-values and posterior probabilities of the null hypothesis in the one-sided testing problem. Other applications involve the Dirichlet process and stationary Gaussian processes. Using this approach, the authors also derive a general nonparametric upper bound on Bayes risks.
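One such dependence is easy to reproduce by Monte Carlo under assumed conjugate choices (theta ~ N(0,1), x | theta ~ N(theta,1), one-sided H0: theta <= 0), where the p-value is Phi(-x) and the posterior probability of H0 is Phi(-x/sqrt(2)):

import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(5)
theta = rng.normal(size=100_000)           # theta ~ N(0, 1)
x = theta + rng.normal(size=theta.size)    # x | theta ~ N(theta, 1)

p_value = norm.cdf(-x)                     # one-sided test of H0: theta <= 0
post_h0 = norm.cdf(-x / np.sqrt(2))        # since theta | x ~ N(x/2, 1/2)
print(f"corr(p-value, posterior probability of H0): {np.corrcoef(p_value, post_h0)[0, 1]:.3f}")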

11.
A new method is proposed for drawing coherent statistical inferences about a real-valued parameter in problems where there is little or no prior information. Prior ignorance about the parameter is modelled by the set of all continuous probability density functions for which the derivative of the log-density is bounded by a positive constant. This set is translation-invariant, it contains density functions with a wide variety of shapes and tail behaviour, and it generates prior probabilities that are highly imprecise. Statistical inferences can be calculated by solving a simple type of optimal control problem whose general solution is characterized. Detailed results are given for the problems of calculating posterior upper and lower means, variances, distribution functions and probabilities of intervals. In general, posterior upper and lower expectations are achieved by prior density functions that are piecewise exponential. The results are illustrated by normal and binomial examples.

12.
In this paper, we consider the classification of high-dimensional vectors based on a small number of training samples from each class. The proposed method follows the Bayesian paradigm, and it is based on a small vector which can be viewed as the regression of the new observation on the space spanned by the training samples. The classification method provides posterior probabilities that the new vector belongs to each of the classes, hence it adapts naturally to any number of classes. Furthermore, we show a direct similarity between the proposed method and the multicategory linear support vector machine introduced in Lee et al. [2004. Multicategory support vector machines: theory and applications to the classification of microarray data and satellite radiance data. Journal of the American Statistical Association 99 (465), 67–81]. We compare the performance of the technique proposed in this paper with the SVM classifier using real-life military and microarray datasets. The study shows that the misclassification errors of both methods are very similar, and that the posterior probabilities assigned to each class are fairly accurate.

13.
In this paper we consider generalized linear models for binary data subject to inequality constraints on the regression coefficients, and propose a simple and efficient Bayesian method for parameter estimation and model selection using Markov chain Monte Carlo (MCMC). In implementing MCMC, we introduce appropriate latent variables and use a simple approximation to the link function to resolve computational difficulties and obtain convenient forms for the full conditional posterior densities of the parameters. Bayes factors are computed via Savage–Dickey density ratios and the method of Oh (Comput. Stat. Data Anal. 29:411–427, 1999), for which posterior samples from the full model with no degenerate parameter and the full conditional posterior densities of its elements are needed. Since the approach uses one set of posterior samples from the full model for every model under consideration, it performs simultaneous comparison of all possible models and is very efficient compared with other model selection methods that require fitting all candidate models. A simulation study shows that significant improvements can be made by taking the constraints into account. Real data on purchase intention of a product subject to order constraints are analyzed using the proposed method. The results show that certain price changes significantly affect consumer behavior, and they also underline the importance of simultaneous comparison of models rather than separate pairwise comparisons, since the latter may yield misleading results by ignoring possible correlations between parameters.
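The Savage–Dickey step is simple to sketch in isolation: the Bayes factor in favour of beta = 0 is the marginal posterior density at 0 divided by the prior density at 0, with the posterior density estimated from full-model MCMC output. The draws and prior below are stand-ins, not output from the paper's sampler:

import numpy as np
from scipy.stats import gaussian_kde, norm

rng = np.random.default_rng(6)
beta_draws = rng.normal(loc=0.8, scale=0.4, size=20_000)  # stand-in MCMC draws of one coefficient

prior_sd = 2.0                                 # assumed N(0, prior_sd^2) prior on beta
post_at_0 = gaussian_kde(beta_draws)(0.0)[0]   # kernel estimate of the posterior density at 0
bf_01 = post_at_0 / norm.pdf(0.0, scale=prior_sd)  # Savage-Dickey density ratio
print(f"BF01 (evidence for beta = 0): {bf_01:.3f}")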

14.
This paper is concerned with developing procedures for constructing confidence intervals that hold approximately equal tail probabilities and coverage probabilities close to the nominal level for the scale parameter θ of the two-parameter exponential lifetime model when the data are time censored. We use a conditional approach to eliminate the nuisance parameter and develop several procedures based on the conditional likelihood. The methods are (a) a method based on the likelihood ratio, (b) a method based on the skewness-corrected score (Bartlett, Biometrika 40 (1953), 12–19), (c) a method based on an adjustment to the signed root likelihood ratio (DiCiccio, Field et al., Biometrika 77 (1990), 77–95), and (d) a method based on parameter transformation to the normal approximation. The performance of these procedures is then compared, through simulations, with the usual likelihood-based procedure. The skewness-corrected score procedure performs best in terms of holding both equal tail probabilities and nominal coverage probabilities, even for small samples.
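Method (a) is straightforward to sketch for a one-parameter version of the problem. For Type-I censored exponential data with d observed failures and total time on test T, the log-likelihood is l(theta) = -d log(theta) - T/theta, and the 95% likelihood-ratio interval solves 2{l(theta_hat) - l(theta)} = chi-square(1, 0.95); the data below are simulated for illustration:

import numpy as np
from scipy.optimize import brentq
from scipy.stats import chi2

rng = np.random.default_rng(7)
theta_true, n, c = 10.0, 30, 12.0             # scale, sample size, censoring time
t = np.minimum(rng.exponential(theta_true, n), c)
d = int(np.sum(t < c))                        # number of observed failures
T = t.sum()                                   # total time on test

loglik = lambda th: -d * np.log(th) - T / th
theta_hat = T / d                             # MLE of the scale parameter
g = lambda th: 2 * (loglik(theta_hat) - loglik(th)) - chi2.ppf(0.95, df=1)
lower = brentq(g, 1e-6, theta_hat)            # roots on either side of the MLE
upper = brentq(g, theta_hat, 1e6)
print(f"MLE {theta_hat:.2f}, 95% LR interval ({lower:.2f}, {upper:.2f})")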

15.
In this paper, we consider a Bayesian mixture model that allows us to integrate out the weights of the mixture in order to obtain a procedure in which the number of clusters is an unknown quantity. To determine clusters and estimate parameters of interest, we develop an MCMC algorithm termed the sequential data-driven allocation sampler. In this algorithm, a single observation has a non-null probability of creating a new cluster, and a set of observations may create a new cluster through split-merge movements. The split-merge movements are developed using a sequential allocation procedure based on allocation probabilities that are calculated according to the Kullback–Leibler divergence between the posterior distribution using the observations previously allocated and the posterior distribution including a 'new' observation. We verify the performance of the proposed algorithm on simulated data and then illustrate its use on three publicly available real data sets.

16.
For analyzing incidence data on diabetes and health problems, the bivariate geometric probability distribution is a natural choice, but it has remained largely unexplored due to the lack of models linking covariates with the probabilities of bivariate incidence of correlated outcomes. In this paper, bivariate geometric models are proposed for two correlated incidence outcomes. Extended generalized linear models are developed to take into account the covariate dependence of the bivariate probabilities of correlated incidence outcomes for diabetes and heart disease in the elderly population. The estimation and test procedures are illustrated using the Health and Retirement Study data. Two models are shown in this paper: one based on a conditional-marginal approach and the other based on the joint probability distribution with an association parameter. The joint model with an association parameter appears to be a very good choice for analyzing the covariate dependence of the joint incidence of diabetes and heart disease. Bootstrapping is performed to measure the accuracy of the estimates, and the results indicate very small bias.
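The bootstrap accuracy check at the end is generic; a hedged sketch for the MLE of a single geometric parameter (rather than the full bivariate model) looks like this:

import numpy as np

rng = np.random.default_rng(8)
x = rng.geometric(p=0.3, size=200)   # illustrative incidence counts

p_hat = 1 / x.mean()                 # MLE of the geometric success probability
boot = np.array([1 / rng.choice(x, size=x.size, replace=True).mean()
                 for _ in range(2000)])
print(f"estimate {p_hat:.4f}, bootstrap bias {boot.mean() - p_hat:+.4f}, "
      f"bootstrap SE {boot.std(ddof=1):.4f}")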

17.
This paper reviews two types of geometric methods proposed in recent years for defining statistical decision rules based on two-dimensional parameters that characterize treatment effect in a medical setting. A common example is that of making decisions, such as comparing treatments or selecting a best dose, based on both the probability of efficacy and the probability of toxicity. In most applications, the two-dimensional parameter is defined in terms of a model parameter of higher dimension, including effects of treatment and possibly covariates. Each method uses a geometric construct in the two-dimensional parameter space, based on a set of elicited parameter pairs, as a basis for defining decision rules. The first construct is a family of contours that partitions the parameter space, with the contours constructed so that all parameter pairs on a given contour are equally desirable. The partition is used to define statistical decision rules that discriminate between parameter pairs in terms of their desirabilities. The second construct is a convex two-dimensional set of desirable parameter pairs, with decisions based on posterior probabilities of this set for given combinations of treatments and covariates under a Bayesian formulation. A general framework for all of these methods is provided, and each method is illustrated by one or more applications.
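The second construct reduces to a posterior probability that is easy to approximate by Monte Carlo. A hedged sketch with independent Beta posteriors for the efficacy and toxicity probabilities (counts, priors and thresholds below are all hypothetical):

import numpy as np

rng = np.random.default_rng(9)
# Posterior draws after 8/20 efficacy responses and 3/20 toxicities, Beta(1,1) priors.
p_eff = rng.beta(1 + 8, 1 + 12, size=100_000)
p_tox = rng.beta(1 + 3, 1 + 17, size=100_000)

# Hypothetical convex desirable set: efficacy >= 0.30 and toxicity <= 0.25.
desirable = (p_eff >= 0.30) & (p_tox <= 0.25)
print(f"posterior probability of the desirable set: {desirable.mean():.3f}")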

18.
Label switching is a well-known and fundamental problem in Bayesian estimation of finite mixture models. It arises when exploring complex posterior distributions by Markov chain Monte Carlo (MCMC) algorithms, because the likelihood of the model is invariant to the relabelling of mixture components. If the MCMC sampler randomly switches labels, it is unsuitable for exploring the posterior distributions of component-related parameters. In this paper, a new procedure based on post-MCMC relabelling of the chains is proposed. The main idea of the method is to apply a clustering technique to the similarity matrix, obtained through the MCMC sample, whose elements are the probabilities that any two units in the observed sample are drawn from the same component. Although it cannot be generalized to every situation, the method may be handy in many applications because of its simplicity and very low computational burden.
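A hedged sketch of the post-MCMC step: build the pairwise co-clustering (similarity) matrix from saved allocation vectors, then cluster the units on 1 - similarity; the resulting hard partition can anchor the relabelling of each draw. The allocations below are simulated stand-ins for MCMC output:

import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage
from scipy.spatial.distance import squareform

rng = np.random.default_rng(10)
n_iter, n_units, K = 500, 60, 3
truth = np.repeat(np.arange(K), n_units // K)
# Stand-in for saved allocations: true groups under random label permutations.
alloc = np.array([rng.permutation(K)[truth] for _ in range(n_iter)])

# Similarity: fraction of iterations in which units i and j share a component.
sim = (alloc[:, :, None] == alloc[:, None, :]).mean(axis=0)
dist = squareform(1 - sim, checks=False)   # condensed distance matrix
labels = fcluster(linkage(dist, method="average"), t=K, criterion="maxclust")
print("recovered cluster sizes:", np.bincount(labels)[1:])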

19.
Nuisance parameter elimination is a central problem in capture–recapture modelling. In this paper, we consider a closed-population capture–recapture model which assumes that the capture probabilities vary only with the sampling occasion. In this model, the capture probabilities are regarded as nuisance parameters and the unknown number of individuals is the parameter of interest. In order to eliminate the nuisance parameters, the likelihood function is integrated with respect to a weight function (uniform or Jeffreys') for the nuisance parameters, resulting in an integrated likelihood function depending only on the population size. For these integrated likelihood functions, analytical expressions for the maximum likelihood estimates are obtained, and it is proved that they are always finite and unique. Variance estimates of the proposed estimators are obtained via a parametric bootstrap resampling procedure. The proposed methods are illustrated on a real data set, and their frequentist properties are assessed by means of a simulation study.
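For the occasion-varying model with a uniform weight on each capture probability, the integration is available in closed form: with r distinct individuals captured and n_j captures on occasion j, the integrated likelihood L(N) is proportional to N!/(N - r)! times the product over j of (N - n_j)!/(N + 1)!. A hedged sketch that maximizes this over integer N (data are illustrative):

import numpy as np
from scipy.special import gammaln

def log_integrated_lik(N, n_j, r):
    """Log integrated likelihood of the population size N under a
    uniform weight on each occasion's capture probability."""
    N = np.asarray(N, dtype=float)
    return (gammaln(N + 1) - gammaln(N - r + 1)
            + np.sum([gammaln(N - nj + 1) - gammaln(N + 2) for nj in n_j], axis=0))

n_j = np.array([30, 22, 29, 26, 31, 32])   # captures on each sampling occasion
r = 93                                     # distinct individuals seen overall
N_grid = np.arange(r, 1000)
N_hat = N_grid[np.argmax(log_integrated_lik(N_grid, n_j, r))]
print(f"integrated-likelihood estimate of N: {N_hat}")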

20.
Lu Lin, Statistical Methodology, 2006, 3(4): 444–455
If the form of the distribution of the data is unknown, the Bayesian method fails for parametric inference because there is no posterior distribution of the parameter. In this paper, a theoretical framework of Bayesian likelihood is introduced via the Hilbert-space method, which is free of distributional assumptions on the data and the parameter. The posterior distribution and posterior score function based on given inner products are defined and, consequently, the quasi posterior distribution and quasi posterior score function are derived, respectively, as the projections of the posterior distribution and posterior score function onto the space spanned by given estimating functions. In particular, in the space spanned by the data, an explicit representation of the quasi posterior score function is obtained, which can be derived as a projection of the true posterior score function onto this space. Methods of constructing conservative quasi posterior scores and quasi posterior log-likelihoods are proposed. Some examples are given to illustrate the theoretical results. As an application, the quasi posterior distribution functions are used to select variables for generalized linear models. It is proved that, for linear models, variable selection via quasi posterior distribution functions is equivalent to variable selection via the penalized residual sum of squares or the regression sum of squares.
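The projection step has a standard closed form worth recording. Under the inner product <f, g> = E[fg], the projection of the posterior score s onto the space spanned by estimating functions g = (g_1, ..., g_k)^T is given by the generic Hilbert-space identity (a sketch of the mechanism under these assumptions, not the paper's full construction):

\tilde{s}(\theta) = \operatorname{Proj}_{\operatorname{span}(g)} s
                  = E\!\left[s\, g^{\top}\right] \left(E\!\left[g\, g^{\top}\right]\right)^{-1} g .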

