Similar literature
 20 similar documents retrieved.
1.
Summary.  Meta-analysis in the presence of unexplained heterogeneity is frequently undertaken by using a random-effects model, in which the effects underlying different studies are assumed to be drawn from a normal distribution. Here we discuss the justification and interpretation of such models, by addressing in turn the aims of estimation, prediction and hypothesis testing. A particular issue that we consider is the distinction between inference on the mean of the random-effects distribution and inference on the whole distribution. We suggest that random-effects meta-analyses as currently conducted often fail to provide the key results, and we investigate the extent to which distribution-free, classical and Bayesian approaches can provide satisfactory methods. We conclude that the Bayesian approach has the advantage of naturally allowing for full uncertainty, especially for prediction. However, it is not without problems, including computational intensity and sensitivity to a priori judgements. We propose a simple prediction interval for classical meta-analysis and offer extensions to standard practice of Bayesian meta-analysis, making use of an example of studies of 'set shifting' ability in people with eating disorders.
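A minimal sketch (not the authors' implementation) of a classical random-effects meta-analysis with a prediction interval for the effect in a new study, using the DerSimonian-Laird estimate of the between-study variance; the effect estimates and within-study variances are hypothetical.

```python
import numpy as np
from scipy import stats

def re_meta_prediction_interval(y, v, level=0.95):
    """Random-effects meta-analysis with a prediction interval for a new study.
    y: study effect estimates, v: within-study variances."""
    k = len(y)
    w = 1.0 / v                                    # fixed-effect weights
    mu_fe = np.sum(w * y) / np.sum(w)
    Q = np.sum(w * (y - mu_fe) ** 2)               # Cochran's Q
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (Q - (k - 1)) / c)             # DerSimonian-Laird tau^2
    w_star = 1.0 / (v + tau2)                      # random-effects weights
    mu_re = np.sum(w_star * y) / np.sum(w_star)
    se_mu = np.sqrt(1.0 / np.sum(w_star))
    # prediction interval for the effect in a new study, based on t with k-2 df
    t_crit = stats.t.ppf(1 - (1 - level) / 2, df=k - 2)
    half = t_crit * np.sqrt(tau2 + se_mu ** 2)
    return mu_re, (mu_re - half, mu_re + half)

# hypothetical data: log odds ratios and their variances from k = 7 studies
y = np.array([0.10, 0.30, -0.05, 0.42, 0.20, 0.15, 0.55])
v = np.array([0.04, 0.09, 0.05, 0.12, 0.06, 0.03, 0.10])
print(re_meta_prediction_interval(y, v))
```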

2.
Abstract.  The plug-in solution is usually not entirely adequate for computing prediction intervals, as their coverage probability may differ substantially from the nominal value. Prediction intervals with improved coverage probability can be defined by adjusting the plug-in ones, using rather complicated asymptotic procedures or suitable simulation techniques. Other approaches are based on the concept of predictive likelihood for a future random variable. The contribution of this paper is the definition of a relatively simple predictive distribution function giving improved prediction intervals. This distribution function is specified as a first-order unbiased modification of the plug-in predictive distribution function based on the constrained maximum likelihood estimator. Applications of the results to the Gaussian and the generalized extreme-value distributions are presented.
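An illustrative coverage simulation (not the paper's first-order unbiased modification) for the Gaussian case: the naive plug-in interval undercovers, while the exactly calibrated Student-t interval recovers the nominal level. Sample size and parameters are arbitrary.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n, level, reps = 10, 0.90, 20000
mu, sigma = 2.0, 1.5
cover_plugin = cover_exact = 0

for _ in range(reps):
    x = rng.normal(mu, sigma, size=n)
    xbar, s = x.mean(), x.std(ddof=1)
    z = stats.norm.ppf(1 - (1 - level) / 2)
    t = stats.t.ppf(1 - (1 - level) / 2, df=n - 1)
    x_new = rng.normal(mu, sigma)
    # plug-in interval: treats the estimated mean and sd as the truth
    cover_plugin += abs(x_new - xbar) <= z * s
    # calibrated interval: accounts for estimation error in xbar and s
    cover_exact += abs(x_new - xbar) <= t * s * np.sqrt(1 + 1 / n)

print("plug-in coverage:   ", cover_plugin / reps)
print("calibrated coverage:", cover_exact / reps)
```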

3.
The computational demand required to perform inference using Markov chain Monte Carlo methods often obstructs a Bayesian analysis. This may be a result of large datasets, complex dependence structures, or expensive computer models. In these instances, the posterior distribution is replaced by a computationally tractable approximation, and inference is based on this working model. However, the error that is introduced by this practice is not well studied. In this paper, we propose a methodology that allows one to examine the impact on statistical inference by quantifying the discrepancy between the intractable and working posterior distributions. This work provides a structure to analyse model approximations with regard to the reliability of inference and computational efficiency. We illustrate our approach through a spatial analysis of yearly total precipitation anomalies where covariance tapering approximations are used to alleviate the computational demand associated with inverting a large, dense covariance matrix.
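A minimal sketch of the kind of working model the abstract refers to: an exponential covariance is tapered elementwise by a compactly supported Wendland-type function, producing a sparse matrix that is cheap to factorize. The locations, covariance range, and taper length are hypothetical, and the paper's discrepancy diagnostics are not reproduced.

```python
import numpy as np
from scipy.spatial.distance import cdist
from scipy import sparse
from scipy.sparse.linalg import splu

rng = np.random.default_rng(1)
locs = rng.uniform(0, 10, size=(800, 2))          # hypothetical spatial sites
d = cdist(locs, locs)

C = np.exp(-d / 2.0)                              # exponential covariance, range 2
gamma = 1.5                                       # taper range
taper = np.clip(1 - d / gamma, 0, None) ** 4 * (1 + 4 * d / gamma)  # Wendland-1
C_tap = sparse.csc_matrix(C * taper)              # zero beyond distance gamma
print("nonzero fraction after tapering:", C_tap.nnz / d.size)

# sparse factorization of the tapered covariance (small nugget for stability)
A = sparse.csc_matrix(C_tap + 0.01 * sparse.eye(len(locs)))
lu = splu(A)
z = rng.standard_normal(len(locs))
x = lu.solve(z)                                   # e.g. a quadratic form in a likelihood
```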

4.
On Parametric Bootstrapping and Bayesian Prediction
Abstract.  We investigate bootstrapping and Bayesian methods for prediction. The observations and the variable being predicted are distributed according to different distributions. Many important problems can be formulated in this setting. This type of prediction problem appears when we deal with a Poisson process. Regression problems can also be formulated in this setting. First, we show that bootstrap predictive distributions are equivalent to Bayesian predictive distributions in the second-order expansion when some conditions are satisfied. Next, the performance of predictive distributions is compared with that of a plug-in distribution based on an estimator. The accuracy of prediction is evaluated by using the Kullback–Leibler divergence. Finally, we give some examples.
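An illustrative comparison in the Poisson setting the abstract mentions: the plug-in, parametric-bootstrap, and Bayesian (Gamma prior, hence negative binomial) predictive distributions are scored against the true predictive by Kullback-Leibler divergence. The prior, sample size, and support cap are hypothetical.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
lam_true, n, B = 3.0, 15, 2000
x = rng.poisson(lam_true, size=n)
lam_hat = x.mean()

ys = np.arange(0, 60)                        # support cap for the KL sums
p_true = stats.poisson.pmf(ys, lam_true)

# plug-in predictive
p_plug = stats.poisson.pmf(ys, lam_hat)

# parametric bootstrap predictive: average of plug-in predictives over resampled MLEs
lam_boot = rng.poisson(lam_hat, size=(B, n)).mean(axis=1)
p_boot = stats.poisson.pmf(ys[None, :], lam_boot[:, None]).mean(axis=0)

# Bayesian predictive under a Gamma(a, b) prior: a negative binomial
a, b = 0.5, 0.5                              # hypothetical prior
r, p = a + x.sum(), (b + n) / (b + n + 1)
p_bayes = stats.nbinom.pmf(ys, r, p)

def kl(p, q):
    mask = p > 0
    return np.sum(p[mask] * np.log(p[mask] / q[mask]))

print("KL(true, plug-in)  :", kl(p_true, p_plug))
print("KL(true, bootstrap):", kl(p_true, p_boot))
print("KL(true, Bayes)    :", kl(p_true, p_bayes))
```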

5.
Deterministic computer simulations are often used as replacement for complex physical experiments. Although less expensive than physical experimentation, computer codes can still be time-consuming to run. An effective strategy for exploring the response surface of the deterministic simulator is the use of an approximation to the computer code, such as a Gaussian process (GP) model, coupled with a sequential sampling strategy for choosing design points that can be used to build the GP model. The ultimate goal of such studies is often the estimation of specific features of interest of the simulator output, such as the maximum, minimum, or a level set (contour). Before approximating such features with the GP model, sufficient runs of the computer simulator must be completed. Sequential designs with an expected improvement (EI) design criterion can yield good estimates of the features with a minimal number of runs. The challenge is that the expected improvement function itself is often multimodal and difficult to maximize. We develop branch and bound algorithms for efficiently maximizing the EI function in specific problems, including the simultaneous estimation of a global maximum and minimum, and in the estimation of a contour. These branch and bound algorithms outperform other optimization strategies such as genetic algorithms, and can lead to significantly more accurate estimation of the features of interest.
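A minimal sketch of the expected improvement criterion for estimating a global maximum from a GP surrogate; the branch-and-bound maximization developed in the paper is replaced here by a simple grid search, the simulator is a toy function, and the GP fit uses scikit-learn for brevity.

```python
import numpy as np
from scipy import stats
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def simulator(x):                        # stand-in for an expensive computer code
    return np.sin(3 * x) + 0.5 * x

rng = np.random.default_rng(3)
X = rng.uniform(0, 3, size=(6, 1))       # initial design
y = simulator(X).ravel()

gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.5), normalize_y=True)
gp.fit(X, y)

grid = np.linspace(0, 3, 1000).reshape(-1, 1)
mu, sd = gp.predict(grid, return_std=True)
f_best = y.max()

# expected improvement for maximization of the simulator output
z = (mu - f_best) / np.maximum(sd, 1e-12)
ei = np.maximum(sd, 1e-12) * (z * stats.norm.cdf(z) + stats.norm.pdf(z))
x_next = grid[np.argmax(ei)]             # next run of the simulator
print("next design point:", x_next)
```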

6.
A Monte Carlo (MC) method is suggested for calculating an upper prediction limit for the mean of a future sample of small size N from a lognormal distribution. This is done by obtaining a Monte Carlo estimator of the limit utilizing the future sample generated from the Gibbs sampler. For the Gibbs sampler, a full conditional posterior predictive distribution of each observation in the future sample is derived. The MC method is straightforward to specify distributionally and to implement computationally, with output readily adapted for required inference summaries. In an example, practical application of the method is described.
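A minimal sketch of the Monte Carlo idea: posterior draws for the log-scale parameters are taken directly from the standard noninformative-prior posterior rather than via the paper's Gibbs sampler, a future sample of size N is simulated for each draw, and the upper prediction limit is read off as a quantile of the simulated future means. All settings are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(4)
x = rng.lognormal(mean=1.0, sigma=0.6, size=25)    # hypothetical observed sample
logx = np.log(x)
n, m, s2 = len(logx), logx.mean(), logx.var(ddof=1)

N, draws = 4, 20000                                # future sample size, MC draws
future_means = np.empty(draws)
for i in range(draws):
    # posterior draw under the standard noninformative prior on (mu, sigma^2)
    sigma2 = (n - 1) * s2 / rng.chisquare(n - 1)
    mu = rng.normal(m, np.sqrt(sigma2 / n))
    # simulate a future lognormal sample of size N and record its mean
    future_means[i] = rng.lognormal(mu, np.sqrt(sigma2), size=N).mean()

upl = np.quantile(future_means, 0.95)              # 95% upper prediction limit
print("upper prediction limit for the future mean:", upl)
```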

7.
Estimation and prediction in generalized linear mixed models are often hampered by intractable high dimensional integrals. This paper provides a framework to solve this intractability, using asymptotic expansions when the number of random effects is large. To that end, we first derive a modified Laplace approximation when the number of random effects is increasing at a lower rate than the sample size. Second, we propose an approximate likelihood method based on the asymptotic expansion of the log-likelihood using the modified Laplace approximation, which is maximized using a quasi-Newton algorithm. Finally, we define the second-order plug-in predictive density based on a similar expansion to the plug-in predictive density and show that it is a normal density. Our simulations show that in comparison to other approximations, our method has better performance. Our methods are readily applied to non-Gaussian spatial data, and as an example, the analysis of the rhizoctonia root rot data is presented.
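As background, a minimal sketch of the ordinary (first-order) Laplace approximation to the marginal log-likelihood of a random-intercept logistic model; the modified expansion and second-order predictive density developed in the paper are not reproduced, and the data and parameter values are hypothetical.

```python
import numpy as np

def laplace_loglik(beta0, sigma2, y_groups):
    """First-order Laplace approximation to the marginal log-likelihood of
    y_ij ~ Bernoulli(logit^-1(beta0 + b_i)) with b_i ~ N(0, sigma2)."""
    total = 0.0
    for y in y_groups:
        b = 0.0
        for _ in range(50):                        # Newton iterations for the mode
            p = 1 / (1 + np.exp(-(beta0 + b)))
            grad = np.sum(y - p) - b / sigma2
            hess = -np.sum(p * (1 - p)) - 1 / sigma2
            b -= grad / hess
        p = 1 / (1 + np.exp(-(beta0 + b)))
        # joint log density of (y, b) at the mode
        h = np.sum(y * np.log(p) + (1 - y) * np.log(1 - p)) \
            - 0.5 * b ** 2 / sigma2 - 0.5 * np.log(2 * np.pi * sigma2)
        hess = -np.sum(p * (1 - p)) - 1 / sigma2
        total += h + 0.5 * np.log(2 * np.pi / -hess)   # Laplace correction
    return total

rng = np.random.default_rng(5)
groups = [rng.integers(0, 2, size=8) for _ in range(30)]   # hypothetical data
print(laplace_loglik(beta0=0.2, sigma2=1.0, y_groups=groups))
```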

8.
Empirical Bayes estimates of the local false discovery rate can reflect uncertainty about the estimated prior by supplementing their Bayesian posterior probabilities with confidence levels as posterior probabilities. This use of coherent fiducial inference with hierarchical models generates set estimators that propagate uncertainty to varying degrees. Some of the set estimates approach estimates from plug-in empirical Bayes methods for high numbers of comparisons and can come close to the usual confidence sets given a sufficiently low number of comparisons.
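An illustrative plug-in estimate of the local false discovery rate, without the confidence-level supplement the abstract proposes: the marginal density of the z-values is estimated by a kernel density and combined with the theoretical null, using a deliberately conservative null proportion. The data are simulated and hypothetical.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(6)
# hypothetical z-values: 90% null N(0,1), 10% non-null shifted to the right
z = np.concatenate([rng.normal(0, 1, 900), rng.normal(3, 1, 100)])

f_hat = stats.gaussian_kde(z)            # estimate of the marginal density f(z)
pi0 = 1.0                                # conservative plug-in for the null proportion
f0 = stats.norm.pdf(z)                   # theoretical null density

local_fdr = np.clip(pi0 * f0 / f_hat(z), 0, 1)
print("z-values with estimated local fdr < 0.2:", int(np.sum(local_fdr < 0.2)))
```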

9.
Bayesian selection of variables is often difficult to carry out because of the challenge in specifying prior distributions for the regression parameters for all possible models, in specifying a prior distribution on the model space, and in carrying out the computations. We address these three issues for the logistic regression model. For the first, we propose an informative prior distribution for variable selection. Several theoretical and computational properties of the prior are derived and illustrated with several examples. For the second, we propose a method for specifying an informative prior on the model space, and for the third we propose novel methods for computing the marginal distribution of the data. The new computational algorithms only require Gibbs samples from the full model to facilitate the computation of the prior and posterior model probabilities for all possible models. Several properties of the algorithms are also derived. The prior specification for the first challenge focuses on the observables, in that the elicitation is based on a prior prediction y0 for the response vector and a quantity a0 quantifying the uncertainty in y0. Then, y0 and a0 are used to specify a prior for the regression coefficients semi-automatically. Examples using real data are given to demonstrate the methodology.
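The paper's informative prior and Gibbs-based marginal likelihood computation are not reproduced here; as a generic stand-in, the sketch below enumerates all subsets of a small hypothetical logistic regression problem and converts BIC values into approximate posterior model probabilities under a uniform model prior.

```python
import numpy as np
from itertools import combinations
import statsmodels.api as sm

rng = np.random.default_rng(14)
n, p = 200, 4
X = rng.normal(size=(n, p))
eta = 1.0 * X[:, 0] - 0.8 * X[:, 1]                 # only two predictors matter
y = rng.binomial(1, 1 / (1 + np.exp(-eta)))

results = []
for k in range(p + 1):
    for subset in combinations(range(p), k):
        Xs = sm.add_constant(X[:, subset]) if subset else np.ones((n, 1))
        fit = sm.Logit(y, Xs).fit(disp=0)
        bic = -2 * fit.llf + Xs.shape[1] * np.log(n)
        results.append((subset, bic))

# BIC values converted to approximate posterior model probabilities
bics = np.array([b for _, b in results])
probs = np.exp(-0.5 * (bics - bics.min()))
probs /= probs.sum()
for (subset, _), pr in sorted(zip(results, probs), key=lambda t: -t[1])[:3]:
    print("model", subset, "approx. posterior probability", round(pr, 3))
```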

10.
ABSTRACT

Most statistical analyses use hypothesis tests or estimation about parameters to form inferential conclusions. I think this is noble, but misguided. The point of view expressed here is that observables are fundamental, and that the goal of statistical modeling should be to predict future observations, given the current data and other relevant information. Further, the prediction of future observables provides multiple advantages to practicing scientists, and to science in general. These include an interpretable numerical summary of a quantity of direct interest to current and future researchers, a calibrated prediction of what’s likely to happen in future experiments, a prediction that can be either “corroborated” or “refuted” through experimentation, and avoidance of inference about parameters, quantities that exist only as convenient indices of hypothetical distributions. Finally, the predictive probability of a future observable can be used as a standard for communicating the reliability of the current work, regardless of whether confirmatory experiments are conducted. Adoption of this paradigm would improve our rigor for scientific accuracy and reproducibility by shifting our focus from “finding differences” among hypothetical parameters to predicting observable events based on our current scientific understanding.

11.
This article reviews several techniques useful for forming point and interval predictions in regression models with Box-Cox transformed variables. The techniques reviewed (plug-in, mean squared error analysis, predictive likelihood, and stochastic simulation) take account of nonnormality and parameter uncertainty in varying degrees. A Monte Carlo study examining their small-sample accuracy indicates that uncertainty about the Box-Cox transformation parameter may be relatively unimportant. For certain parameters, deterministic point predictions are biased, and plug-in prediction intervals are also biased. Stochastic simulation, as usually carried out, leads to badly biased predictions. A modification of the usual approach renders stochastic simulation predictions largely unbiased.
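A minimal sketch of the plug-in technique reviewed in the article: the response is Box-Cox transformed with an estimated lambda, a normal-theory prediction interval is formed on the transformed scale, and the endpoints are back-transformed. The data are hypothetical, and the parameter uncertainty the article emphasizes is ignored here by construction.

```python
import numpy as np
from scipy import stats, special

rng = np.random.default_rng(7)
y = rng.lognormal(mean=2.0, sigma=0.4, size=40)    # hypothetical positive response

yt, lam_hat = stats.boxcox(y)                      # estimate the transformation
n = len(yt)
m, s = yt.mean(), yt.std(ddof=1)

# plug-in 95% prediction interval on the transformed scale (lambda treated as known)
t = stats.t.ppf(0.975, df=n - 1)
lo_t = m - t * s * np.sqrt(1 + 1 / n)
hi_t = m + t * s * np.sqrt(1 + 1 / n)

# back-transform the endpoints to the original scale
lo, hi = special.inv_boxcox(lo_t, lam_hat), special.inv_boxcox(hi_t, lam_hat)
print("plug-in prediction interval for a new observation:", (lo, hi))
```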

12.
In this paper, bootstrap prediction is adapted to resolve some problems that arise with small-sample datasets. The bootstrap predictive distribution is obtained by applying Breiman's bagging to the plug-in distribution with the maximum likelihood estimator. The effectiveness of bootstrap prediction has previously been shown, but some problems may arise when bootstrap prediction is constructed from small-sample datasets. In this paper, the Bayesian bootstrap is used to resolve these problems, and its effectiveness is confirmed by examples. Analysis of small-sample data is increasingly important in various fields, and several such datasets are analyzed here. For real datasets, it is shown that plug-in prediction and bootstrap prediction perform very poorly when the sample size is close to the dimension of the parameter, while Bayesian bootstrap prediction remains stable.
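An illustrative contrast of plug-in, ordinary bootstrap, and Bayesian bootstrap predictive densities for a small normal sample: the Bayesian bootstrap replaces resampling with Dirichlet weights, so the weighted estimates vary smoothly even when n is tiny. This is a sketch with hypothetical data, not the paper's experiments.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(8)
x = rng.normal(0.0, 1.0, size=5)                  # small hypothetical sample
B = 3000
grid = np.linspace(-5, 5, 401)

# plug-in predictive: normal with the MLE plugged in
dens_plug = stats.norm.pdf(grid, x.mean(), x.std())

dens_boot = np.zeros_like(grid)                   # ordinary bootstrap predictive
dens_bb = np.zeros_like(grid)                     # Bayesian bootstrap predictive
n_boot = 0
for _ in range(B):
    xb = rng.choice(x, size=len(x), replace=True)
    if xb.std() > 0:                              # skip degenerate resamples
        dens_boot += stats.norm.pdf(grid, xb.mean(), xb.std())
        n_boot += 1
    w = rng.dirichlet(np.ones(len(x)))            # Dirichlet(1,...,1) weights
    mu_w = np.sum(w * x)
    sd_w = np.sqrt(np.sum(w * (x - mu_w) ** 2))
    dens_bb += stats.norm.pdf(grid, mu_w, sd_w)
dens_boot /= n_boot
dens_bb /= B

print("predictive density at 0:", dens_plug[200], dens_boot[200], dens_bb[200])
```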

13.
New techniques for the analysis of stochastic volatility models in which the logarithm of conditional variance follows an autoregressive model are developed. A cyclic Metropolis algorithm is used to construct a Markov-chain simulation tool. Simulations from this Markov chain converge in distribution to draws from the posterior distribution enabling exact finite-sample inference. The exact solution to the filtering/smoothing problem of inferring about the unobserved variance states is a by-product of our Markov-chain method. In addition, multistep-ahead predictive densities can be constructed that reflect both inherent model variability and parameter uncertainty. We illustrate our method by analyzing both daily and weekly data on stock returns and exchange rates. Sampling experiments are conducted to compare the performance of Bayes estimators to method of moments and quasi-maximum likelihood estimators proposed in the literature. In both parameter estimation and filtering, the Bayes estimators outperform these other approaches.
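This sketch only simulates the log-AR(1) stochastic volatility model the abstract refers to, together with the linearized observation equation that quasi-maximum likelihood methods work with; the cyclic Metropolis sampler and exact filtering developed in the article are not reproduced, and all parameter values are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(15)
T = 1000
mu, phi, sigma_eta = -1.0, 0.95, 0.2              # hypothetical SV parameters

h = np.empty(T)                                   # log conditional variance
h[0] = rng.normal(mu, sigma_eta / np.sqrt(1 - phi ** 2))  # stationary start
for t in range(1, T):
    h[t] = mu + phi * (h[t - 1] - mu) + sigma_eta * rng.normal()
y = np.exp(h / 2) * rng.normal(size=T)            # returns with stochastic volatility

# linearized observation equation used by quasi-maximum likelihood methods:
# log y_t^2 = h_t + log eps_t^2, with E[log eps_t^2] ~ -1.27 and Var ~ pi^2 / 2
z = np.log(y ** 2 + 1e-12)
print("mean of log y^2:", z.mean(), " (compare mu - 1.27 =", mu - 1.27, ")")
```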

14.
We address the issue of performing inference on the parameters that index the modified extended Weibull (MEW) distribution. We show that numerical maximization of the MEW log-likelihood function can be problematic. It is even possible to encounter maximum likelihood estimates that are not finite, that is, it is possible to encounter monotonic likelihood functions. We consider different penalization schemes to improve maximum likelihood point estimation. A penalization scheme based on the Jeffreys’ invariant prior is shown to be particularly useful. Simulation results on point estimation, interval estimation, and hypothesis testing inference are presented. Two empirical applications are presented and discussed.

15.
The prediction problem is considered for the multivariate regression model with an elliptically contoured error distribution. We show that the predictive distribution under the elliptical errors assumption is the same as that obtained under normally distributed errors in both the Bayesian approach using an improper prior and the classical approach. This gives inference robustness with respect to departures from the reference case of independent sampling from the normal distribution.

16.
This paper describes the Bayesian inference and prediction of the two-parameter Weibull distribution when the data are Type-II censored. The aim of this paper is twofold. First, we consider the Bayesian inference of the unknown parameters under different loss functions. The Bayes estimates cannot be obtained in closed form. We use a Gibbs sampling procedure to draw Markov chain Monte Carlo (MCMC) samples, which are used to compute the Bayes estimates and also to construct symmetric credible intervals. Further, we consider the Bayes prediction of future order statistics based on the observed sample. We consider the posterior predictive density of the future observations and also construct a predictive interval with a given coverage probability. Monte Carlo simulations are performed to compare different methods, and one data analysis is performed for illustrative purposes.
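A minimal sketch of posterior sampling for the Type-II censored Weibull likelihood; the paper uses a Gibbs sampling procedure, whereas a random-walk Metropolis on the log parameters with vague hypothetical priors is shown here for brevity.

```python
import numpy as np

rng = np.random.default_rng(9)
n, r = 30, 20
data = np.sort(rng.weibull(1.5, size=n) * 2.0)     # hypothetical failure times
x = data[:r]                                       # Type-II censoring: r smallest times

def log_post(log_a, log_s):
    a, s = np.exp(log_a), np.exp(log_s)
    # Weibull log-likelihood under Type-II censoring at the r-th order statistic
    ll = (r * np.log(a) - r * a * np.log(s) + (a - 1) * np.sum(np.log(x))
          - np.sum((x / s) ** a) - (n - r) * (x[-1] / s) ** a)
    # vague normal priors on the log parameters (hypothetical choice)
    lp = -0.5 * (log_a ** 2 + log_s ** 2) / 100.0
    return ll + lp

la, ls = 0.0, 0.0
draws = []
for it in range(20000):                            # random-walk Metropolis
    la_p, ls_p = la + rng.normal(0, 0.15), ls + rng.normal(0, 0.15)
    if np.log(rng.uniform()) < log_post(la_p, ls_p) - log_post(la, ls):
        la, ls = la_p, ls_p
    if it >= 5000:                                 # discard burn-in
        draws.append((np.exp(la), np.exp(ls)))

draws = np.array(draws)
print("posterior means (shape, scale):", draws.mean(axis=0))
```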

17.
Quantitative fatty acid signature analysis (QFASA) produces diet estimates containing the proportion of each species of prey in a predator's diet. Since the diet estimates are compositional, often contain an abundance of zeros (signifying the absence of a species in the diet), and sample sizes are generally small, inference problems require the use of nonstandard statistical methodology. Recently, a mixture distribution involving the multiplicative logistic normal distribution (and its skew-normal extension) was introduced in relation to QFASA to manage the problematic zeros. In this paper, we examine an alternative mixture distribution, namely, the recently proposed zero-inflated beta (ZIB) distribution. A potential advantage of using the ZIB distribution over the previously considered mixture models is that it does not require transformation of the data. To assess the usefulness of the ZIB distribution in QFASA inference problems, a simulation study is first carried out which compares the small sample properties of the maximum likelihood estimators of the means. The fit of the distributions is then examined using ‘pseudo-predators’ generated from a large real-life prey base. Finally, confidence intervals for the true diet based on the ZIB distribution are compared with earlier results through a simulation study and harbor seal data.
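A minimal sketch of maximum likelihood for a zero-inflated beta model of a single diet proportion: the zero mass is estimated by the observed proportion of zeros and the beta shapes by fitting only the positive values. The data are hypothetical, and the compositional constraint across prey species handled in the paper is ignored.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(10)
# hypothetical diet proportions for one prey species across 60 predators
pi_true, a_true, b_true = 0.3, 2.0, 8.0
y = np.where(rng.uniform(size=60) < pi_true, 0.0, rng.beta(a_true, b_true, size=60))

# the zero-inflated beta likelihood factorizes into the zero part and the beta part
pi_hat = np.mean(y == 0)
pos = y[y > 0]
a_hat, b_hat, _, _ = stats.beta.fit(pos, floc=0, fscale=1)

print("pi:", pi_hat, " beta shapes:", a_hat, b_hat)
print("estimated mean proportion:", (1 - pi_hat) * a_hat / (a_hat + b_hat))
```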

18.
A natural way to deal with the uncertainty of an ergodic finite state space Markov process is to investigate the entropy of its stationary distribution. When the process is observed, it becomes necessary to estimate this entropy. We estimate both the stationary distribution and its entropy by plugging in estimators of the infinitesimal generator. Three situations of observation are discussed: one long trajectory is observed, several independent short trajectories are observed, or the process is observed at discrete times. The good asymptotic behavior of the plug-in estimators is established. We also illustrate the behavior of the estimators through simulation.
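A minimal sketch of the plug-in idea for the discrete-time analogue of the first observation scheme (one long trajectory): transition probabilities are estimated from one-step counts, the stationary distribution from the leading left eigenvector, and the entropy by plugging in. The chain and trajectory length are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(11)
# hypothetical 3-state transition matrix and one long observed trajectory
P_true = np.array([[0.80, 0.15, 0.05],
                   [0.10, 0.70, 0.20],
                   [0.25, 0.25, 0.50]])
T, state, traj = 50000, 0, [0]
for _ in range(T):
    state = rng.choice(3, p=P_true[state])
    traj.append(state)
traj = np.array(traj)

# plug-in estimate of the transition matrix from observed one-step counts
counts = np.zeros((3, 3))
np.add.at(counts, (traj[:-1], traj[1:]), 1)
P_hat = counts / counts.sum(axis=1, keepdims=True)

# stationary distribution = normalized left eigenvector for eigenvalue 1
vals, vecs = np.linalg.eig(P_hat.T)
pi_hat = np.real(vecs[:, np.argmin(np.abs(vals - 1))])
pi_hat = pi_hat / pi_hat.sum()

entropy_hat = -np.sum(pi_hat * np.log(pi_hat))    # plug-in entropy of the stationary law
print("estimated stationary distribution:", pi_hat, " entropy:", entropy_hat)
```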

19.
Just as frequentist hypothesis tests have been developed to check model assumptions, prior predictive p-values and other Bayesian p-values check prior distributions as well as other model assumptions. These model checks not only suffer from the usual threshold dependence of p-values, but also from the suppression of model uncertainty in subsequent inference. One solution is to transform Bayesian and frequentist p-values for model assessment into a fiducial distribution across the models. Averaging the Bayesian or frequentist posterior distributions with respect to the fiducial distribution can reproduce results from Bayesian model averaging or classical fiducial inference.
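As background, a minimal sketch of a prior predictive p-value of the kind the abstract starts from: datasets are simulated from the prior predictive and the tail probability of the observed test statistic is computed. The model, prior, and statistic are hypothetical, and the fiducial averaging proposed in the paper is not shown.

```python
import numpy as np

rng = np.random.default_rng(12)
y_obs = rng.normal(1.2, 1.0, size=20)              # hypothetical observed data
t_obs = np.abs(y_obs.mean())                       # test statistic: |sample mean|

# prior predictive simulation: mu ~ N(0, 2^2) (hypothetical prior), y | mu ~ N(mu, 1)
S = 20000
mu_sim = rng.normal(0, 2, size=S)
y_sim = rng.normal(mu_sim[:, None], 1.0, size=(S, 20))
t_sim = np.abs(y_sim.mean(axis=1))

p_prior_pred = np.mean(t_sim >= t_obs)             # prior predictive p-value
print("prior predictive p-value:", p_prior_pred)
```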

20.
In non-parametric function estimation, selection of a smoothing parameter is one of the most important issues. The performance of smoothing techniques depends highly on the choice of this parameter. Preferably the bandwidth should be determined via a data-driven procedure. In this paper we consider kernel estimators in a white noise model, and investigate whether locally adaptive plug-in bandwidths can achieve optimal global rates of convergence. We consider various classes of functions: Sobolev classes, bounded variation function classes, classes of convex functions and classes of monotone functions. We study the situations of pilot estimation with oversmoothing and without oversmoothing. Our main finding is that simple local plug-in bandwidth selectors can adapt to spatial inhomogeneity of the regression function as long as there are no local oscillations of high frequency. We establish the pointwise asymptotic distribution of the regression estimator with local plug-in bandwidth.
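A minimal sketch of a local plug-in bandwidth of the kind studied in the paper, for a Gaussian-kernel regression estimator: a global polynomial pilot fit supplies the curvature and residual variance that enter the pointwise AMSE-optimal bandwidth formula. The regression function, sample size, and the curvature guard are hypothetical choices.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(13)
n = 400
x = rng.uniform(0, 1, n)
m = lambda t: np.sin(6 * t)                       # hypothetical regression function
y = m(x) + rng.normal(0, 0.3, n)

# pilot: a global polynomial fit supplies curvature m'' and residual variance
coef = np.polyfit(x, y, deg=6)
m_pilot = np.poly1d(coef)
m2_pilot = m_pilot.deriv(2)
sigma2 = np.mean((y - m_pilot(x)) ** 2)
f_hat = stats.gaussian_kde(x)                     # design density estimate

def local_plugin_bandwidth(t):
    """Local plug-in bandwidth for a Gaussian-kernel local linear estimator:
    h(t) = [sigma^2 R(K) / (n f(t) mu2(K)^2 m''(t)^2)]^(1/5),
    with R(K) = 1/(2 sqrt(pi)) and mu2(K) = 1 for the Gaussian kernel."""
    RK, mu2 = 1 / (2 * np.sqrt(np.pi)), 1.0
    curv = max(abs(m2_pilot(t)), 0.1)             # guard against vanishing curvature
    return (sigma2 * RK / (n * f_hat(t)[0] * mu2 ** 2 * curv ** 2)) ** 0.2

grid = np.linspace(0.05, 0.95, 7)
print([round(local_plugin_bandwidth(t), 3) for t in grid])
```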
