Similar Literature
20 similar documents retrieved (search time: 15 ms)
1.
Summary.  Posterior distributions for the joint projections of future temperature and precipitation trends and changes are derived by applying a Bayesian hierarchical model to a rich data set of simulated climate from general circulation models. The simulations that are analysed here constitute the future projections on which the Intergovernmental Panel on Climate Change based its recent summary report on the future of our planet's climate, albeit without any sophisticated statistical handling of the data. Here we quantify the uncertainty that is represented by the variable results of the various models and their limited ability to represent the observed climate both at global and at regional scales. We do so in a Bayesian framework, by estimating posterior distributions of the climate change signals in terms of trends or differences between future and current periods, and we fully characterize the uncertain nature of a suite of other parameters, like biases, correlation terms and model-specific precisions. Besides presenting our results in terms of posterior distributions of the climate signals, we offer as an alternative representation of the uncertainties in climate change projections the use of the posterior predictive distribution of a new model's projections. The results from our analysis can find straightforward applications in impact studies, which require not only best guesses but also a full representation of the uncertainty in climate change projections. For water resource and crop models, for example, it is vital to use joint projections of temperature and precipitation to represent the characteristics of future climate best, and our statistical analysis delivers just that.

2.
Summary.  Data in the social, behavioural and health sciences frequently come from observational studies instead of controlled experiments. In addition to random errors, observational data typically contain additional sources of uncertainty such as missing values, unmeasured confounders and selection biases. Also, the research question is often different from that which a particular source of data was designed to answer, and so not all relevant variables are measured. As a result, multiple sources of data are often necessary to identify the biases and to inform about different aspects of the research question. Bayesian graphical models provide a coherent way to connect a series of local submodels, based on different data sets, into a global unified analysis. We present a unified modelling framework that will account for multiple biases simultaneously and give more accurate parameter estimates than standard approaches. We illustrate our approach by analysing data from a study of water disinfection by-products and adverse birth outcomes in the UK.

3.
Remote sensing of the earth with satellites yields datasets that can be massive in size, nonstationary in space, and non-Gaussian in distribution. To overcome computational challenges, we use the reduced-rank spatial random effects (SRE) model in a statistical analysis of cloud-mask data from NASA's Moderate Resolution Imaging Spectroradiometer (MODIS) instrument on board NASA's Terra satellite. Parameterisations of cloud processes are the biggest source of uncertainty and sensitivity in different climate models' future projections of Earth's climate. An accurate quantification of the spatial distribution of clouds, as well as a rigorously estimated pixel-scale clear-sky-probability process, is needed to establish reliable estimates of cloud-distributional changes and trends caused by climate change. Here we give a hierarchical spatial-statistical modelling approach for a very large spatial dataset of 2.75 million pixels, corresponding to a granule of MODIS cloud-mask data, and we use spatial change-of-support relationships to estimate cloud fraction at coarser resolutions. Our model is non-Gaussian; it postulates a hidden process for the clear-sky probability that makes use of the SRE model, EM-estimation, and optimal (empirical Bayes) spatial prediction of the clear-sky-probability process. Measures of prediction uncertainty are also given.

4.
5.
We present an application study which exemplifies a cutting-edge statistical approach for detecting climate regime shifts. The algorithm uses Bayesian computational techniques that make time-efficient analysis of large volumes of climate data possible. Output includes probabilistic estimates of the number and duration of regimes, the number and probability distribution of hidden states, and the probability of a regime shift in any year of the time series. Analysis of the Pacific Decadal Oscillation (PDO) index is provided as an example. Two states are detected: one is associated with positive values of the PDO and presents lower interannual variability, while the other corresponds to negative values of the PDO and greater variability. We compare this approach with existing alternatives from the literature and highlight the potential for ours to unlock features hidden in climate data.
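The regime-shift idea above can be conveyed with a much simpler single-change-point sketch: with a flat prior over the shift year and Gaussian noise of known scale, the posterior probability of a shift in each year is proportional to the profiled likelihood. This is a minimal stand-in for the multi-state algorithm the abstract describes, run on illustrative simulated data rather than the PDO index:

```python
import numpy as np

def changepoint_posterior(y, sigma=1.0):
    """Posterior over the year of a single mean shift, with a flat prior on
    the shift year and the two regime means profiled out by least squares.
    A simplified stand-in for a full multi-regime Bayesian analysis."""
    n = len(y)
    logpost = np.full(n, -np.inf)
    for k in range(1, n):                  # regime shift after index k-1
        r1, r2 = y[:k], y[k:]
        rss = ((r1 - r1.mean())**2).sum() + ((r2 - r2.mean())**2).sum()
        logpost[k] = -rss / (2 * sigma**2)
    post = np.exp(logpost - logpost.max())  # stabilise before normalising
    return post / post.sum()

# Illustrative series: 30 years in a low regime, 30 in a high regime
rng = np.random.default_rng(0)
y = np.concatenate([rng.normal(-1, 1, 30), rng.normal(1, 1, 30)])
post = changepoint_posterior(y)
```

The vector `post` plays the role of the "probability of a regime shift in any year" output mentioned in the abstract; here it concentrates near the simulated shift at year 30.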

6.
Bayesian palaeoclimate reconstruction
Summary.  We consider the problem of reconstructing prehistoric climates by using fossil data that have been extracted from lake sediment cores. Such reconstructions promise to provide one of the few ways to validate modern models of climate change. A hierarchical Bayesian modelling approach is presented and its use, inversely, is demonstrated in a relatively small but statistically challenging exercise: the reconstruction of prehistoric climate at Glendalough in Ireland from fossil pollen. This computationally intensive method extends current approaches by explicitly modelling uncertainty and reconstructing entire climate histories. The statistical issues that are raised relate to the use of compositional data (pollen) with covariates (climate) which are available at many modern sites but are missing for the fossil data. The compositional data arise as mixtures and the missing covariates have a temporal structure. Novel aspects of the analysis include a spatial process model for compositional data, local modelling of lattice data, the use, as a prior, of a random walk with long-tailed increments, a two-stage implementation of the Markov chain Monte Carlo approach and a fast approximate procedure for cross-validation in inverse problems. We present some details, contrasting its reconstructions with those generated by a method in current use in the palaeoclimatology literature. We suggest that the method provides a basis for resolving important issues in palaeoclimate research, and we draw attention to several challenging statistical problems that remain to be overcome.

7.
ABSTRACT

Expert opinion and judgment enter into the practice of statistical inference and decision-making in numerous ways. Indeed, there is essentially no aspect of scientific investigation in which judgment is not required. Judgment is necessarily subjective, but should be made as carefully, as objectively, and as scientifically as possible.

Elicitation of expert knowledge concerning an uncertain quantity expresses that knowledge in the form of a (subjective) probability distribution for the quantity. Such distributions play an important role in statistical inference (for example as prior distributions in a Bayesian analysis) and in evidence-based decision-making (for example as expressions of uncertainty regarding inputs to a decision model). This article sets out a number of practices through which elicitation can be made as rigorous and scientific as possible.
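The quantile-matching step common to elicitation tools can be sketched as follows: given a few elicited quantiles, search for the member of a parametric family whose quantiles come closest. The gamma family, the elicited values, and the least-squares criterion below are illustrative assumptions, not taken from this article or from any specific protocol:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import gamma

def fit_gamma_to_quantiles(probs, values):
    """Find the gamma(shape, scale) whose quantiles best match elicited
    (probability, value) pairs, by least squares on the quantile scale.
    A minimal sketch of quantile matching; real elicitation software
    offers many families and feedback to the expert."""
    def loss(theta):
        a, scale = np.exp(theta)                 # keep parameters positive
        q = gamma.ppf(probs, a, scale=scale)
        return ((q - values)**2).sum()
    res = minimize(loss, x0=[0.0, 0.0], method="Nelder-Mead")
    a, scale = np.exp(res.x)
    return a, scale

# Illustrative judgments: 25% chance below 2, median 3, 25% chance above 4.5
a, scale = fit_gamma_to_quantiles([0.25, 0.5, 0.75], np.array([2.0, 3.0, 4.5]))
```

The fitted distribution can then be fed back to the expert as further quantiles or a density plot, the kind of feedback loop the protocols discussed below formalise.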

One such practice is to follow a recognized protocol that is designed to address and minimize the cognitive biases that experts are prone to when making probabilistic judgments. We review the leading protocols in the field, and contrast their different approaches to dealing with these biases through the medium of a detailed case study employing the SHELF protocol.

The article ends with discussion of how to elicit a joint probability distribution for multiple uncertain quantities, which is a challenge for all the leading protocols. Supplementary materials for this article are available online.

8.
A simple multiplicative noise model with a constant signal has become a basic mathematical model in processing synthetic aperture radar images. The purpose of this paper is to examine a general multiplicative noise model with linear signals represented by a number of unknown parameters. The ordinary least squares (LS) and weighted LS methods are used to estimate the model parameters. The biases of the weighted LS estimates of the parameters are derived. The biases are then corrected to obtain a second-order unbiased estimator, which is shown to be exactly equivalent to the maximum log quasi-likelihood estimation, although the quasi-likelihood function is founded on a completely different theoretical consideration and is, at present, the only generally accepted theory for multiplicative noise models. Synthetic simulations are carried out to confirm theoretical results and to illustrate problems in processing data contaminated by multiplicative noise. The sensitivity of the LS and weighted LS methods to extremely noisy data is analysed through the simulated examples.
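A minimal simulation of this model class: a linear signal Xβ observed under multiplicative noise with unit mean, estimated first by ordinary LS and then by weighted LS with weights 1/(fitted signal)². The design, parameter values, and single reweighting step are illustrative assumptions, not the paper's exact estimator or its bias correction:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 500
t = np.linspace(0, 1, n)
X = np.column_stack([np.ones(n), t])           # linear signal: a + b*t
beta_true = np.array([2.0, 1.5])
signal = X @ beta_true
y = signal * (1 + 0.2 * rng.standard_normal(n))  # multiplicative noise, mean 1

# Ordinary LS ignores the signal-dependent variance ...
beta_ls = np.linalg.lstsq(X, y, rcond=None)[0]

# ... weighted LS reweights by 1/signal^2, iterated once from the LS fit
w = 1.0 / (X @ beta_ls)**2
W = np.diag(w)
beta_wls = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)
```

Because the noise variance scales with the squared signal, the weighted fit is the natural second step; the paper's contribution is the bias analysis of exactly this kind of weighted estimator.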

9.
We investigate the estimation of dynamic models of criminal activity, when there is significant under-recording of crime. We give a theoretical analysis and use simulation techniques to investigate the resulting biases in conventional regression estimates. We find the biases to be of little practical significance. We develop and apply a new simulated maximum likelihood procedure that estimates simultaneously the measurement error and crime processes, using extraneous survey data. This also confirms that measurement error biases are small. Our estimation results for data from England and Wales imply a significant response of crime to both the economic and the enforcement environment.

10.
We consider time series models of the MA (moving average) family, and deal with the estimation of the residual variance. Results are known for maximum likelihood estimates under normality, both for known or unknown mean, in which case the asymptotic biases depend on the number of parameters (including the mean), and do not depend on the values of the parameters. For moment estimates the situation is different, because we find that the asymptotic biases depend on the values of the parameters, and become large as they approach the boundary of the region of invertibility. Our approach is to use Taylor series expansions, and the objective is to obtain asymptotic biases with error of o(1/T), where T is the sample size. Simulation results are presented, and corrections for bias are suggested.
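For the MA(1) case, the moment estimates discussed above follow from the textbook autocovariances γ₀ = σ²(1 + θ²) and γ₁ = σ²θ: solve the lag-1 autocorrelation for the invertible root θ, then back out σ². This sketch uses those standard formulas with simulated data; the clipping rule at the boundary of invertibility is an ad hoc illustrative choice, and the bias behaviour near that boundary is exactly what the paper analyses:

```python
import numpy as np

def ma1_moment_estimates(y):
    """Moment estimates (theta, sigma^2) for an MA(1) model
    y_t = e_t + theta * e_{t-1}, choosing the invertible root |theta| < 1."""
    y = y - y.mean()
    n = len(y)
    g0 = (y @ y) / n                       # sample autocovariance, lag 0
    g1 = (y[:-1] @ y[1:]) / n              # sample autocovariance, lag 1
    r1 = g1 / g0
    if abs(r1) >= 0.5:                     # outside the MA(1) range: clip
        r1 = np.sign(r1) * 0.499
    theta = (1 - np.sqrt(1 - 4 * r1**2)) / (2 * r1)
    sigma2 = g0 / (1 + theta**2)
    return theta, sigma2

rng = np.random.default_rng(2)
e = rng.standard_normal(5001)
y = e[1:] + 0.5 * e[:-1]                   # true theta = 0.5, sigma^2 = 1
theta_hat, sigma2_hat = ma1_moment_estimates(y)
```

With θ well inside the invertibility region, as here, the estimates are well behaved; pushing θ toward ±1 is where the parameter-dependent biases the abstract describes become large.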

11.
Summary.  Alongside the development of meta-analysis as a tool for summarizing research literature, there is renewed interest in broader forms of quantitative synthesis that are aimed at combining evidence from different study designs or evidence on multiple parameters. These have been proposed under various headings: the confidence profile method, cross-design synthesis, hierarchical models and generalized evidence synthesis. Models that are used in health technology assessment are also referred to as representing a synthesis of evidence in a mathematical structure. Here we review alternative approaches to statistical evidence synthesis, and their implications for epidemiology and medical decision-making. The methods include hierarchical models, models informed by evidence on different functions of several parameters and models incorporating both of these features. The need to check for consistency of evidence when using these powerful methods is emphasized. We develop a rationale for evidence synthesis that is based on Bayesian decision modelling and expected value of information theory, which stresses not only the need for a lack of bias in estimates of treatment effects but also a lack of bias in assessments of uncertainty. The increasing reliance of governmental bodies like the UK National Institute for Clinical Excellence on complex evidence synthesis in decision modelling is discussed.

12.
Econometric Reviews, 2007, 26(5): 529–556
In this paper, I study the timing of high school dropout decisions using data from High School and Beyond. I propose a Bayesian proportional hazard analysis framework that takes into account the specification of piecewise constant baseline hazard, the time-varying covariate of dropout eligibility, and individual, school, and state level random effects in the dropout hazard. I find that students who have reached their state compulsory school attendance ages are more likely to drop out of high school than those who have not reached compulsory school attendance ages. Regarding the school quality effects, a student is more likely to drop out of high school if the school she attends is associated with a higher pupil-teacher ratio or lower district expenditure per pupil. A notable finding that accompanies the empirical results is that failure to account for the time-varying heterogeneity in the hazard, in this application, results in upward biases in the duration dependence estimates. Moreover, these upward biases are comparable in magnitude to the well-known downward biases in the duration dependence estimates when the modeling of the time-invariant heterogeneity in the hazard is absent.
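The piecewise-constant baseline hazard at the heart of the model above has a simple frequentist counterpart: within each interval, the hazard estimate is the event count divided by the person-time at risk. This sketch omits covariates, random effects, and the Bayesian machinery, and uses simulated exponential data with fully observed events; all numbers are illustrative:

```python
import numpy as np

def piecewise_hazard(times, events, cuts):
    """MLE of a piecewise-constant hazard: events / person-time within each
    interval defined by cuts. No covariates or censoring-weighting beyond
    the event indicator; a minimal building block, not the paper's model."""
    edges = np.concatenate([[0.0], cuts, [np.inf]])
    hazards = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        # person-time each subject contributes inside (lo, hi]
        exposure = np.clip(np.minimum(times, hi) - lo, 0, None).sum()
        d = ((times > lo) & (times <= hi) & (events == 1)).sum()
        hazards.append(d / exposure if exposure > 0 else 0.0)
    return np.array(hazards)

rng = np.random.default_rng(3)
t = rng.exponential(2.0, 2000)             # true constant hazard = 0.5
haz = piecewise_hazard(t, np.ones(2000, dtype=int), cuts=np.array([1.0, 3.0]))
```

Because the simulated hazard is constant, all three interval estimates land near 0.5; duration dependence in the paper's sense would show up as systematic variation across intervals.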

13.

14.
This paper investigates bias in parameter estimates and residual diagnostics for parametric multinomial models by considering the effect of deleting a cell. In particular, it describes the average changes in the standardized residuals and maximum likelihood estimates resulting from conditioning on the given cells. These changes suggest how individual cell observations affect biases. Emphasis is placed on the role of individual cell observations in determining bias and on how bias affects standard diagnostic methods. Examples from genetics and log-linear models are considered. Numerical results show that conditioning on an influential cell results in substantial changes in biases.
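The cell-deletion idea can be illustrated with standardized Pearson residuals for a multinomial sample: compute them for the full table, then condition on (delete) a suspect cell and renormalize the model over the remaining cells. The counts and the equal-probability model below are illustrative, and this sketch shows only the conditioning mechanics, not the paper's bias calculations:

```python
import numpy as np

def std_residuals(counts, probs):
    """Standardized Pearson residuals for multinomial counts under a
    hypothesised probability vector (textbook formula: (O - E)/sqrt(E(1-p)))."""
    n = counts.sum()
    e = n * probs
    return (counts - e) / np.sqrt(e * (1 - probs))

counts = np.array([30, 25, 20, 45])            # illustrative cell counts
probs = np.full(4, 0.25)                       # equal-probability model
r_full = std_residuals(counts, probs)

# Condition on (delete) the influential 4th cell: renormalise the model
# over the remaining cells and recompute the residuals.
r_del = std_residuals(counts[:3], probs[:3] / probs[:3].sum())
```

Deleting the over-represented fourth cell changes the sign and size of the remaining residuals, which is the kind of conditioning effect the paper quantifies for biases.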

15.
Multiple-bias modelling for analysis of observational data
Summary.  Conventional analytic results do not reflect any source of uncertainty other than random error, and as a result readers must rely on informal judgments regarding the effect of possible biases. When standard errors are small these judgments often fail to capture sources of uncertainty and their interactions adequately. Multiple-bias models provide alternatives that allow one systematically to integrate major sources of uncertainty, and thus to provide better input to research planning and policy analysis. Typically, the bias parameters in the model are not identified by the analysis data and so the results depend completely on priors for those parameters. A Bayesian analysis is then natural, but several alternatives based on sensitivity analysis have appeared in the risk assessment and epidemiologic literature. Under some circumstances these methods approximate a Bayesian analysis and can be modified to do so even better. These points are illustrated with a pooled analysis of case–control studies of residential magnetic field exposure and childhood leukaemia, which highlights the diminishing value of conventional studies conducted after the early 1990s. It is argued that multiple-bias modelling should become part of the core training of anyone who will be entrusted with the analysis of observational data, and should become standard procedure when random error is not the only important source of uncertainty (as in meta-analysis and pooled analysis).
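The flavour of multiple-bias modelling can be conveyed by the Monte Carlo sensitivity analysis the abstract mentions as an approximation to a Bayesian analysis: draw bias parameters from priors, adjust the conventional estimate draw by draw, and summarize the widened interval. The observed odds ratio and prior settings below are illustrative, not those of the magnetic-field pooled analysis:

```python
import numpy as np

rng = np.random.default_rng(4)
n_draws = 100_000
or_obs = 1.5                              # illustrative observed odds ratio

# Prior on a single bias parameter (illustrative): an unmeasured confounder
# multiplies the odds ratio by a log-normally distributed factor.
log_bias = rng.normal(loc=0.1, scale=0.15, size=n_draws)

# Conventional random error on the log scale (illustrative standard error).
log_random = rng.normal(loc=0.0, scale=0.2, size=n_draws)

# Bias-adjusted odds ratio: divide out the simulated bias, add random error.
or_adj = np.exp(np.log(or_obs) - log_bias + log_random)
lo, mid, hi = np.percentile(or_adj, [2.5, 50, 97.5])
```

The resulting interval is wider than the conventional one because the bias uncertainty and random error combine, which is the core message: small standard errors alone understate total uncertainty.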

16.
Models for which the likelihood function can be evaluated only up to a parameter-dependent unknown normalizing constant, such as Markov random field models, are used widely in computer science, statistical physics, spatial statistics, and network analysis. However, Bayesian analysis of these models using standard Monte Carlo methods is not possible due to the intractability of their likelihood functions. Several methods that permit exact, or close to exact, simulation from the posterior distribution have recently been developed. However, estimating the evidence and Bayes factors for these models remains challenging in general. This paper describes new random weight importance sampling and sequential Monte Carlo methods for estimating Bayes factors that use simulation to circumvent the evaluation of the intractable likelihood, and compares them to existing methods. In some cases we observe an advantage in the use of biased weight estimates. An initial investigation into the theoretical and empirical properties of this class of methods is presented. Some support for the use of biased estimates is presented, but we advocate caution in their use.
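The importance-sampling idea behind these evidence estimators can be checked on a deliberately tractable toy model, where the exact evidence is available in closed form. The model below (normal data, normal prior) is an illustrative choice; the article's methods target precisely the settings where this direct likelihood evaluation is impossible:

```python
import numpy as np
from scipy.stats import norm

def log_evidence_is(y, n_draws=50_000, seed=0):
    """Importance-sampling estimate of the log evidence for the toy model
    y_i ~ N(mu, 1), mu ~ N(0, 1), using an overdispersed Gaussian proposal
    centred at the (here, exactly known) posterior mean."""
    rng = np.random.default_rng(seed)
    n = len(y)
    m, v = n * y.mean() / (n + 1), 1.0 / (n + 1)     # exact posterior here
    s = 2 * np.sqrt(v)                               # overdispersed proposal
    mu = rng.normal(m, s, n_draws)
    log_lik = norm.logpdf(y[:, None], mu, 1).sum(axis=0)
    log_w = log_lik + norm.logpdf(mu, 0, 1) - norm.logpdf(mu, m, s)
    return np.log(np.mean(np.exp(log_w - log_w.max()))) + log_w.max()

rng = np.random.default_rng(5)
y = rng.normal(0.5, 1.0, 20)
logZ = log_evidence_is(y)
```

A Bayes factor is then a ratio of two such evidence estimates; the article's contribution is making the weights computable when the likelihood itself is intractable, at the cost of using estimated (possibly biased) weights.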

17.
We extend the Bayesian Model Averaging (BMA) framework to dynamic panel data models with endogenous regressors using a Limited Information Bayesian Model Averaging (LIBMA) methodology. Monte Carlo simulations confirm the asymptotic performance of our methodology both in BMA and selection, with high posterior inclusion probabilities for all relevant regressors, and parameter estimates very close to their true values. In addition, we illustrate the use of LIBMA by estimating a dynamic gravity model for bilateral trade. Once model uncertainty, dynamics, and endogeneity are accounted for, we find several factors that are robustly correlated with bilateral trade. We also find that applying methodologies that do not account for either dynamics or endogeneity (or both) results in different sets of robust determinants.

18.
Both knowledge-based systems and statistical models are typically concerned with making predictions about future observables. Here we focus on assessment of predictive performance and provide two techniques for improving the predictive performance of Bayesian graphical models. First, we present Bayesian model averaging, a technique for accounting for model uncertainty.

Second, we describe a technique for eliciting a prior distribution for competing models from domain experts. We explore the predictive performance of both techniques in the context of a urological diagnostic problem.
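The mechanics of Bayesian model averaging can be sketched with a common large-sample shortcut: approximate posterior model probabilities by BIC weights under equal prior model probabilities, then average the models' predictions. The BIC values and predictions below are illustrative, and equal model priors are an assumption; the elicitation-based priors the abstract describes would replace them:

```python
import numpy as np

def bma_weights(bics):
    """Approximate posterior model probabilities from BIC values, assuming
    equal prior model probabilities: w_k proportional to exp(-BIC_k / 2).
    A standard large-sample shortcut, not the only way to get model weights."""
    d = -(np.asarray(bics, dtype=float) - np.min(bics)) / 2.0
    w = np.exp(d)
    return w / w.sum()

# Three candidate models with illustrative BICs and predicted probabilities
w = bma_weights([100.0, 102.0, 110.0])
pred = w @ np.array([0.8, 0.6, 0.3])   # model-averaged predictive probability
```

The averaged prediction leans toward the best-supported model but does not discard the others, which is how BMA accounts for model uncertainty in predictive performance.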

19.
The aim of this study is to assess the biases of a Food Frequency Questionnaire (FFQ) by comparing total energy intake (TEI) with total energy expenditure (TEE) obtained from the doubly labelled water (DLW) biomarker after adjusting measurement errors in DLW. We develop several Bayesian hierarchical measurement error models of DLW with different distributional assumptions on TEI to obtain precise bias estimates of TEI. Inference is carried out by using MCMC simulation techniques in a fully Bayesian framework, and model comparisons are done via the mean square predictive error (MSPE). Our results showed that the joint model with random effects under the Gamma distribution is the best fit model in terms of the MSPE and residual diagnostics, in which bias in TEI is not significant based on the 95% credible interval. The Canadian Journal of Statistics 38: 506–516; 2010 © 2010 Statistical Society of Canada

20.
Multiple-membership logit models with random effects are models for clustered binary data, where each statistical unit can belong to more than one group. The likelihood function of these models is analytically intractable. We propose two different approaches for parameter estimation: indirect inference and data cloning (DC). The former is a non-likelihood-based method which uses an auxiliary model to select reasonable estimates. We propose an auxiliary model with the same dimension of parameter space as the target model, which is particularly convenient for reaching good estimates quickly. The latter method computes maximum likelihood estimates through the posterior distribution of an adequate Bayesian model, fitted to cloned data. We implement a DC algorithm specifically for multiple-membership models. A Monte Carlo experiment compares the two methods on simulated data. For further comparison, we also report Bayesian posterior mean and Integrated Nested Laplace Approximation hybrid DC estimates. Simulations show a negligible loss of efficiency for the indirect inference estimator, compensated by a substantial computational gain. The approaches are then illustrated with two real examples on matched paired data.
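The data-cloning device can be seen in a conjugate Beta-Bernoulli toy example: replicating the data k times makes the prior's influence fade, so the posterior mean converges to the MLE while the posterior contracts around it. This is why cloning recovers maximum likelihood estimates from Bayesian machinery. The conjugate model, prior, and counts here are illustrative choices made so the result can be checked in closed form; the article applies the idea to intractable multiple-membership models via MCMC:

```python
import numpy as np

def cloned_posterior(successes, n, k, a=2.0, b=2.0):
    """Beta(a, b) posterior for a Bernoulli probability when the data
    (successes out of n) are cloned k times: Beta(a + k*s, b + k*(n - s)).
    Returns the posterior mean and variance."""
    post_a = a + k * successes
    post_b = b + k * (n - successes)
    total = post_a + post_b
    mean = post_a / total
    var = post_a * post_b / (total**2 * (total + 1))
    return mean, var

mle = 7 / 10                      # MLE from 7 successes in 10 trials
m1, v1 = cloned_posterior(7, 10, k=1)
m100, v100 = cloned_posterior(7, 10, k=100)
```

With one copy of the data the Beta(2, 2) prior pulls the mean toward 0.5; with 100 clones the posterior mean is essentially the MLE, and (as in DC generally) the posterior variance shrinks at rate 1/k rather than estimating the sampling variance directly.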


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号