Similar Articles
1.
The problem of testing a point null hypothesis involving an exponential mean is considered. It is argued that the usual interpretation of P-values as evidence against precise hypotheses is faulty. As in Berger and Delampady (1986) and Berger and Sellke (1987), lower bounds on Bayesian measures of evidence over wide classes of priors are found, emphasizing the conflict between posterior probabilities and P-values. A hierarchical Bayes approach is also considered as an alternative to computing lower bounds and “automatic” Bayesian significance tests, which further illustrates the point that P-values are highly misleading measures of evidence for tests of point null hypotheses.
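The kind of lower bound described above can be illustrated with the familiar −e·p·ln(p) calibration of Sellke, Bayarri and Berger; this is a related general-purpose bound, not the exponential-mean bound derived in the paper itself. A minimal Python sketch:

```python
import math

def bf_lower_bound(p):
    """Lower bound on the Bayes factor in favor of H0 over wide classes of
    priors: -e * p * ln(p), valid for 0 < p < 1/e (Sellke et al. calibration,
    not the paper's exponential-mean bound)."""
    if not 0.0 < p < 1.0 / math.e:
        raise ValueError("calibration requires 0 < p < 1/e")
    return -math.e * p * math.log(p)

def posterior_prob_h0(p, prior_odds=1.0):
    """Implied lower bound on P(H0 | data), assuming equal prior odds."""
    b = bf_lower_bound(p)
    return 1.0 / (1.0 + 1.0 / (prior_odds * b))
```

Even at p = 0.05 no prior in the class can push P(H0 | data) below roughly 0.29, which is the conflict with the conventional reading of p-values that the abstract describes.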

2.
In the prospective study of a finely stratified population, one individual from each stratum is chosen at random for the “treatment” group and one for the “non-treatment” group. For each individual the probability of failure is a logistic function of parameters designating the stratum, the treatment and a covariate. Uniformly most powerful unbiased tests for the treatment effect are given. These tests are generally cumbersome but, if the covariate is dichotomous, the tests and confidence intervals are simple. Readily usable (but non-optimal) tests are also proposed for polytomous covariates and factorial designs. These are then adapted to retrospective studies (in which one “success” and one “failure” per stratum are sampled). Tests for retrospective studies with a continuous “treatment” score are also proposed.

3.
The subject of this paper is Bayesian inference about the fixed and random effects of a mixed-effects linear statistical model with two variance components. It is assumed that a priori the fixed effects have a noninformative distribution and that the reciprocals of the variance components are distributed independently (of each other and of the fixed effects) as gamma random variables. It is shown that techniques similar to those employed in a ridge analysis of a response surface can be used to construct a one-dimensional curve that contains all of the stationary points of the posterior density of the random effects. The “ridge analysis” (of the posterior density) can be useful (from a computational standpoint) in finding the number and the locations of the stationary points and can be very informative about various features of the posterior density. Depending on what is revealed by the ridge analysis, a multivariate normal or multivariate-t distribution that is centered at a posterior mode may provide a satisfactory approximation to the posterior distribution of the random effects (which is of the poly-t form).

4.
Several authors have discussed Kalman filtering procedures using a mixture of normals as a model for the distributions of the noise in the observation and/or the state space equations. Under this model, resulting posteriors involve a mixture of normal distributions, and a “collapsing method” must be found in order to keep the recursive procedure simple. We prove that the Kullback-Leibler distance between the mixture posterior and that of a single normal distribution is minimized when we choose the mean and variance of the single normal distribution to be the mean and variance of the mixture posterior. Hence, “collapsing by moments” is optimal in this sense. We then develop the resulting optimal algorithm for “Kalman filtering” for this situation, and illustrate its performance with an example.
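The optimality result can be made concrete: the single normal that minimizes the Kullback-Leibler distance from a normal-mixture posterior simply matches the mixture's mean and variance. A small sketch (the two-component mixture here is an arbitrary illustration):

```python
import numpy as np

def collapse_by_moments(weights, means, variances):
    """Collapse a normal mixture to the single normal that minimizes the
    Kullback-Leibler distance from the mixture: match its mean and variance."""
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()
    m = np.asarray(means, dtype=float)
    v = np.asarray(variances, dtype=float)
    mean = float(np.sum(w * m))
    var = float(np.sum(w * (v + m**2)) - mean**2)  # law of total variance
    return mean, var

# Two-component mixture chosen only for illustration.
mean, var = collapse_by_moments([0.7, 0.3], [0.0, 2.0], [1.0, 1.0])
```

The variance term picks up both the within-component variances and the spread of the component means, which is why collapsing never understates the posterior uncertainty.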

5.
Two methods of estimating the intraclass correlation coefficient (ρ) for the one-way random effects model were compared in several simulation experiments using balanced and unbalanced designs. Estimates based on a Bayes approach and a maximum likelihood approach were compared on the basis of their biases and mean square errors in estimating ρ in each of the simulation experiments. The Bayes approach used the median of a conditional posterior density as its estimator.

6.
We propose a novel approach to estimation, where a set of estimators of a parameter is combined into a weighted average to produce the final estimator. The weights are chosen to be proportional to the likelihood evaluated at the estimators. We investigate the method for a set of estimators obtained by using the maximum likelihood principle applied to each individual observation. The method can be viewed as a Bayesian approach with a data-driven prior distribution. We provide several examples illustrating the new method and argue for its consistency, asymptotic normality, and efficiency. We also conduct simulation studies to assess the performance of the estimators. This straightforward methodology produces consistent estimators comparable with those obtained by the maximum likelihood method. The method also approximates the distribution of the estimator through the “posterior” distribution.
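A hedged sketch of the scheme for the simplest case, a normal mean with known unit variance, where the MLE from a single observation x_i is x_i itself; the simulated data and seed are illustrative assumptions, not from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(loc=3.0, scale=1.0, size=200)  # illustrative data

# Per-observation MLEs of the mean (known unit variance): theta_i = x_i.
thetas = x

def loglik(theta, data):
    """Full-sample Gaussian log-likelihood (up to an additive constant)."""
    return -0.5 * np.sum((data - theta) ** 2)

# Weights proportional to the likelihood evaluated at each estimator.
logw = np.array([loglik(t, x) for t in thetas])
w = np.exp(logw - logw.max())   # subtract the max for numerical stability
w /= w.sum()

estimate = float(np.sum(w * thetas))   # likelihood-weighted average
```

Because the likelihood concentrates near the sample mean, the weighted average lands close to the usual MLE, consistent with the comparability claim in the abstract.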

7.
Empirical Bayes is a versatile approach to “learn from a lot” in two ways: first, from a large number of variables and, second, from a potentially large amount of prior information, for example, stored in public repositories. We review applications of a variety of empirical Bayes methods to several well-known model-based prediction methods, including penalized regression, linear discriminant analysis, and Bayesian models with sparse or dense priors. We discuss “formal” empirical Bayes methods that maximize the marginal likelihood but also more informal approaches based on other data summaries. We contrast empirical Bayes to cross-validation and full Bayes and discuss hybrid approaches. To study the relation between the quality of an empirical Bayes estimator and p, the number of variables, we consider a simple empirical Bayes estimator in a linear model setting. We argue that empirical Bayes is particularly useful when the prior contains multiple parameters, which model a priori information on variables termed “co-data”. In particular, we present two novel examples that allow for co-data: first, a Bayesian spike-and-slab setting that facilitates inclusion of multiple co-data sources and types and, second, a hybrid empirical Bayes–full Bayes ridge regression approach for estimation of the posterior predictive interval.

8.
We extend the classical one-dimensional Bayes binary classifier to create a new classification rule that has a region of neutrality to account for cases where the implied weight of evidence is too weak for a confident classification. Our proposed rule allows a “No Prediction” when the observation is too ambiguous to have confidence in a definite prediction. The motivation for making “No Prediction” is that in our microbial community profiling application, a wrong prediction can be worse than making no prediction at all. On the other hand, too many “No Predictions” have adverse implications as well. Consequently, our proposed rule incorporates this trade-off using a cost structure that weighs the penalty for not making a definite prediction against the penalty for making an incorrect definite prediction. We demonstrate that our proposed rule outperforms a naive neutral-zone rule that has been routinely used in biological applications similar to ours.
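The cost trade-off behind the neutral zone can be sketched as follows; the function name and cost values are hypothetical choices for illustration, not those derived in the paper:

```python
def neutral_zone_classify(p1, cost_wrong=1.0, cost_nopred=0.3):
    """One-dimensional Bayes rule with a neutral zone.  p1 is the posterior
    probability of class 1.  A definite prediction risks cost_wrong when it
    is mistaken; abstaining always incurs cost_nopred.  These cost values
    are hypothetical, chosen only to illustrate the trade-off."""
    expected_cost = {
        1: (1.0 - p1) * cost_wrong,        # predict class 1
        0: p1 * cost_wrong,                # predict class 0
        "No Prediction": cost_nopred,      # abstain
    }
    return min(expected_cost, key=expected_cost.get)

confident = neutral_zone_classify(0.9)   # strong evidence: definite call
ambiguous = neutral_zone_classify(0.5)   # weak evidence: abstain
```

Raising cost_nopred shrinks the neutral zone, so the single ratio of the two penalties controls how often the rule abstains.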

9.
A Bayesian formulation of the canonical form of the standard regression model is used to compare various Stein-type estimators and the ridge estimator of regression coefficients. A particular (“constant prior”) Stein-type estimator having the same pattern of shrinkage as the ridge estimator is recommended for use.

10.
Outliers that commonly occur in business sample surveys can have large impacts on domain estimates. The authors consider an outlier-robust design and smooth estimation approach, which can be related to the so-called “surprise stratum” technique [Kish, “Survey Sampling,” Wiley, New York (1965)]. The sampling design utilizes a threshold sample consisting of previously observed outliers that are selected with probability one, together with stratified simple random sampling from the rest of the population. The domain predictor is an extension of the Winsorization-based estimator proposed by Rivest and Hidiroglou [“Outlier Treatment for Disaggregated Estimates,” in “Proceedings of the Section on Survey Research Methods,” American Statistical Association (2004), pp. 4248–4256], and is similar to the estimator for skewed populations suggested by Fuller [Statistica Sinica 1991;1:137–158]. It makes use of a domain Winsorized sample mean plus a domain-specific adjustment of the estimated overall mean of the excess values on top of that. The methods are studied in theory from a design-based perspective and by simulations based on the Norwegian Research and Development Survey data. Guidelines for choosing the threshold values are provided. The Canadian Journal of Statistics 39: 147–164; 2011 © 2010 Statistical Society of Canada
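As a rough illustration of the Winsorization idea (not the authors' exact predictor, whose domain-specific adjustment is more refined): cap the domain values at the threshold, then add back an estimate of the overall mean excess above the threshold.

```python
import numpy as np

def winsorized_domain_estimate(y_domain, y_all, threshold):
    """Illustrative predictor: domain Winsorized mean plus the estimated
    overall mean of the excess above the threshold.  The paper's predictor
    uses a domain-specific adjustment; this equal-share version is only a
    sketch of the idea."""
    base = np.minimum(np.asarray(y_domain), threshold).mean()
    excess_overall = np.maximum(np.asarray(y_all) - threshold, 0.0).mean()
    return float(base + excess_overall)

y_all = np.array([1.0, 2.0, 3.0, 100.0])   # one extreme outlier
y_domain = np.array([1.0, 2.0, 3.0])
estimate = winsorized_domain_estimate(y_domain, y_all, threshold=10.0)
```

Capping keeps a single outlier from dominating any one domain, while the excess term preserves (approximately) the overall total.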

11.
An important problem for fitting local linear regression is the choice of the smoothing parameter. As the smoothing parameter becomes large, the estimator tends to a straight line, which is the least squares fit in the ordinary linear regression setting. This property may be used to assess the adequacy of a simple linear model. Motivated by Silverman's (1981) work in kernel density estimation, a suitable test statistic is the critical smoothing parameter at which the estimate changes from nonlinear to linear, although judging linearity versus nonlinearity requires a more precise criterion. We define the critical smoothing parameter through the approximate F-tests of Hastie and Tibshirani (1990). To assess significance, the “wild bootstrap” procedure is used to replicate the data, and the proportion of bootstrap samples that give a nonlinear estimate when using the critical bandwidth is taken as the p-value. Simulation results show that the critical smoothing test is useful in detecting a wide range of alternatives.

12.
We consider Khamis' (1960) Laguerre expansion with gamma weight function as a class of “near-gamma” priors (K-prior) to obtain the Bayes predictor of a finite population mean under the Poisson regression superpopulation model using Zellner's balanced loss function (BLF). Kullback–Leibler (K-L) distance between gamma and some K-priors is tabulated to examine the quantitative prior robustness. Some numerical investigations are also conducted to illustrate the effects of a change in skewness and/or kurtosis on the Bayes predictor and the corresponding minimal Bayes predictive expected loss (MBPEL). Loss robustness with respect to the class of BLFs is also examined in terms of relative savings loss (RSL).

13.
This paper provides a new method and algorithm for making inferences about the parameters of a two-level multivariate normal hierarchical model. One has observed J p-dimensional vector outcomes, distributed at level 1 as multivariate normal with unknown mean vectors and with known covariance matrices. At level 2, the unknown mean vectors also have normal distributions, with common unknown covariance matrix A and with means depending on known covariates and on unknown regression coefficients. The algorithm samples independently from the marginal posterior distribution of A by using rejection procedures. Functions such as posterior means and covariances of the level 1 mean vectors and of the level 2 regression coefficients are estimated by averaging over posterior values calculated conditionally on each value of A drawn. This estimation accounts for the uncertainty in A, unlike standard restricted maximum likelihood empirical Bayes procedures. It is based on independent draws from the exact posterior distributions, unlike Gibbs sampling. The procedure is demonstrated for profiling hospitals based on patients' responses concerning p = 2 types of problems (non-surgical and surgical). The frequency operating characteristics of the rule corresponding to a particular vague multivariate prior distribution are shown via simulation to achieve their nominal values in that setting.

14.
This paper discusses a pre-test regression estimator which uses the least squares estimate when it is “large” and a ridge regression estimate for “small” regression coefficients, where the preliminary test is applied separately to each regression coefficient in turn to determine whether it is “large” or “small.” For orthogonal regressors, the exact finite-sample bias and mean squared error of the pre-test estimator are derived. The latter is less biased than a ridge estimator, and over much of the parameter space the pre-test estimator has smaller mean squared error than least squares. A ridge estimator is found to be inferior to the pre-test estimator in terms of mean squared error in many situations, and at worst the latter estimator is only slightly less efficient than the former at commonly used significance levels.
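A hedged sketch of such a componentwise pre-test rule for orthonormal regressors; the ridge constant k, the critical value t_crit, and the simulated design are illustrative choices, not those analyzed in the paper:

```python
import numpy as np

def pretest_coefficients(X, y, k=1.0, t_crit=2.0):
    """Componentwise pre-test estimator for orthonormal regressors: keep the
    OLS coefficient when its |t|-statistic exceeds t_crit, otherwise shrink
    it ridge-style by 1/(1 + k).  k and t_crit are illustrative choices."""
    n, p = X.shape
    beta_ols = X.T @ y                     # orthonormal X: X'X = I
    resid = y - X @ beta_ols
    sigma2 = resid @ resid / (n - p)       # residual variance estimate
    t = beta_ols / np.sqrt(sigma2)         # each coefficient has s.e. sigma
    shrunk = beta_ols / (1.0 + k)          # ridge estimate when X'X = I
    return np.where(np.abs(t) > t_crit, beta_ols, shrunk)

rng = np.random.default_rng(1)
Q, _ = np.linalg.qr(rng.normal(size=(50, 3)))   # orthonormal design
beta_true = np.array([5.0, 0.1, 0.0])
y = Q @ beta_true + rng.normal(scale=0.5, size=50)
b = pretest_coefficients(Q, y)
```

The large coefficient survives the pre-test and is left at its least squares value, while coefficients indistinguishable from zero are shrunk, which is the mixed behavior whose bias and MSE the paper derives exactly.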

15.
On Optimality of Bayesian Wavelet Estimators
We investigate the asymptotic optimality of several Bayesian wavelet estimators, namely, the posterior mean, the posterior median and the Bayes Factor, where the prior imposed on wavelet coefficients is a mixture of a mass function at zero and a Gaussian density. We show that in terms of the mean squared error, for properly chosen hyperparameters of the prior, all three resulting Bayesian wavelet estimators achieve optimal minimax rates within any prescribed Besov space B^s_{p,q} for p ≥ 2. For 1 ≤ p < 2, the Bayes Factor is still optimal for (2s+2)/(2s+1) ≤ p < 2 and always outperforms the posterior mean and the posterior median, which can achieve only the best possible rates for linear estimators in this case.

16.
It is often of interest to find the maximum or near maxima among a set of vector-valued parameters in a statistical model; in the case of disease mapping, for example, these correspond to relative-risk “hotspots” where public-health intervention may be needed. The general problem is one of estimating nonlinear functions of the ensemble of relative risks, but biased estimates result if posterior means are simply substituted into these nonlinear functions. The authors obtain better estimates of extrema from a new, weighted ranks squared error loss function. The derivation of these Bayes estimators assumes a hidden-Markov random-field model for relative risks, and their behaviour is illustrated with real and simulated data.

17.
In this article, we consider Bayes prediction in a finite population under the simple location error-in-variables superpopulation model. The Bayes predictor of the finite population mean under Zellner's balanced loss function and the corresponding relative losses and relative savings loss are derived. The prior distribution of the unknown location parameter of the model is assumed to have a non-normal distribution belonging to the class of Edgeworth series distributions. Effects of non-normality of the “true” prior distribution and that of a possible misspecification of the loss function on the Bayes predictor are illustrated for a hypothetical population.

18.
Bayesian estimators of variance components are developed, based on posterior mean and posterior mode, respectively, in a one-way ANOVA random effects model with independent prior distributions. The formulas for the proposed estimators are simple. The estimators give sensible results for 'badly-behaved' datasets, where the standard unbiased estimates are negative. They are markedly robust as compared to the existing estimators such as the maximum likelihood estimators and the maximum posterior density estimators.

19.
It has long been asserted that in univariate location-scale models, when concerned with inference for either the location or scale parameter, the use of the inverse of the scale parameter as a Bayesian prior yields posterior credible sets that have exactly the correct frequentist confidence set interpretation. This claim dates to at least Peers, and has subsequently been noted by various authors, with varying degrees of justification. We present a simple, direct demonstration of the exact matching property of the posterior credible sets derived under use of this prior in the univariate location-scale model. This is done by establishing an equivalence between the conditional frequentist and posterior densities of the pivotal quantities on which conditional frequentist inferences are based.

20.
Researchers commonly use p-values to answer the question: How strongly does the evidence favor the alternative hypothesis relative to the null hypothesis? p-Values themselves do not directly answer this question and are often misinterpreted in ways that lead to overstating the evidence against the null hypothesis. Even in the “post p < 0.05 era,” however, it is quite possible that p-values will continue to be widely reported and used to assess the strength of evidence (if for no other reason than the widespread availability and use of statistical software that routinely produces p-values and thereby implicitly advocates for their use). If so, the potential for misinterpretation will persist. In this article, we recommend three practices that would help researchers more accurately interpret p-values. Each of the three recommended practices involves interpreting p-values in light of their corresponding “Bayes factor bound,” which is the largest odds in favor of the alternative hypothesis relative to the null hypothesis that is consistent with the observed data. The Bayes factor bound generally indicates that a given p-value provides weaker evidence against the null hypothesis than typically assumed. We therefore believe that our recommendations can guard against some of the most harmful p-value misinterpretations. In research communities that are deeply attached to reliance on “p < 0.05,” our recommendations will serve as initial steps away from this attachment. We emphasize that our recommendations are intended merely as initial, temporary steps and that many further steps will need to be taken to reach the ultimate destination: a holistic interpretation of statistical evidence that fully conforms to the principles laid out in the ASA statement on statistical significance and p-values.
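The Bayes factor bound has the closed form BFB = 1/(−e·p·ln p) for p < 1/e, so the mapping from reported p-values to maximal odds is easy to compute:

```python
import math

def bayes_factor_bound(p):
    """Largest odds in favor of H1 over H0 consistent with p-value p:
    1 / (-e * p * ln p), valid for 0 < p < 1/e."""
    if not 0.0 < p < 1.0 / math.e:
        raise ValueError("bound requires 0 < p < 1/e")
    return 1.0 / (-math.e * p * math.log(p))

bound_05 = bayes_factor_bound(0.05)   # roughly 2.5-to-1 odds at most
bound_01 = bayes_factor_bound(0.01)   # roughly 8-to-1 odds at most
```

A p-value of 0.05 therefore corresponds to at most about 2.5-to-1 odds in favor of the alternative, far weaker evidence than the conventional reading of “significance at the 5% level” suggests.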
