Similar Documents
20 similar documents found.
1.
We consider the context of probabilistic inference of model parameters given error bars or confidence intervals on model output values, when the data is unavailable. We introduce a class of algorithms in a Bayesian framework, relying on maximum entropy arguments and approximate Bayesian computation methods, to generate data consistent with the given summary statistics. Once we obtain consistent data sets, we pool the respective posteriors, to arrive at a single, averaged density on the parameters. This approach allows us to perform accurate forward uncertainty propagation consistent with the reported statistics.
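As a rough illustration of the idea in this abstract, the sketch below generates an ensemble of data sets consistent with a reported mean and standard deviation (the maximum-entropy choice given those two statistics is a normal distribution), performs a toy Bayesian update on each, and pools the resulting posterior samples. The reported values, the sample size, and the conjugate update step are hypothetical placeholders, not the authors' algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical reported summary: output mean 2.0, standard deviation 0.3, n = 10
# unavailable observations.
mu_rep, sd_rep, n = 2.0, 0.3, 10

pooled = []
for _ in range(200):                       # ensemble of consistent data sets
    # Maximum-entropy distribution given a mean and variance is a normal.
    y = rng.normal(mu_rep, sd_rep, size=n)
    # Toy inference step: y ~ N(theta, sd_rep^2) with a flat prior on theta,
    # so the posterior is N(ybar, sd_rep^2 / n); realistic models need MCMC.
    pooled.append(rng.normal(y.mean(), sd_rep / np.sqrt(n), size=1000))

pooled = np.concatenate(pooled)            # pooled (averaged) posterior sample
print(pooled.mean(), np.percentile(pooled, [2.5, 97.5]))
```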

2.
We consider the use of emulator technology as an alternative method to second-order Monte Carlo (2DMC) in the uncertainty analysis for a percentile from the output of a stochastic model. 2DMC is a technique that uses repeated sampling in order to make inferences on the uncertainty and variability in a model output. The conventional 2DMC approach can be highly computationally demanding, making methods for uncertainty and sensitivity analysis infeasible. We explore the adequacy and efficiency of the emulation approach, and we find that emulation provides a viable alternative in this situation. We demonstrate these methods using two examples of different input dimensions, including an application that considers contamination in pre-pasteurised milk.
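For readers unfamiliar with 2DMC, a minimal nested-loop sketch follows; the stochastic model, its uncertain rate parameter, and the sample sizes are made-up placeholders. The outer loop samples parameter uncertainty, the inner loop samples variability, and a 95th percentile is recorded per outer draw; the 200 x 2000 model runs illustrate why the conventional approach is expensive and why an emulator is attractive.

```python
import numpy as np

rng = np.random.default_rng(1)
n_outer, n_inner = 200, 2000              # uncertainty loop / variability loop

percentiles = np.empty(n_outer)
for i in range(n_outer):
    lam = rng.lognormal(mean=1.0, sigma=0.5)      # uncertain input parameter
    runs = rng.poisson(lam, size=n_inner)         # variability of the stochastic model
    percentiles[i] = np.percentile(runs, 95)      # output percentile for this draw

# Uncertainty about the 95th percentile induced by parameter uncertainty:
print(np.percentile(percentiles, [2.5, 50, 97.5]))
```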

3.
This article considers the uncertainty of a proportion based on a stratified random sample of a small population. Using the hypergeometric distribution, a Clopper–Pearson type upper confidence bound is presented. Another frequentist approach that uses the estimated variance of the proportion estimator is also considered, as well as a Bayesian alternative. These methods are demonstrated with an illustrative example. Some aspects of planning, that is, the impact of the specified stratum sample sizes on uncertainty, are studied through a simulation study.
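A single-stratum sketch of a Clopper–Pearson-type upper bound based on the hypergeometric distribution is shown below; the stratified version in the article combines such calculations across strata, and the population size, sample size, and observed count here are hypothetical.

```python
from scipy.stats import hypergeom

def upper_bound_count(pop_size, sample_size, observed, alpha=0.05):
    """Largest number of 'successes' K in the finite population still compatible
    with the data: the largest K with P(X <= observed | K) >= alpha."""
    upper = observed
    for K in range(observed, pop_size + 1):
        # scipy parameterisation: hypergeom.cdf(k, M=pop_size, n=K, N=sample_size)
        if hypergeom.cdf(observed, pop_size, K, sample_size) >= alpha:
            upper = K
        else:
            break        # the cdf is decreasing in K, so the search can stop here
    return upper

K_upper = upper_bound_count(pop_size=200, sample_size=50, observed=3)
print(K_upper, K_upper / 200)   # 95% upper confidence bound on the count and proportion
```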

4.
Many mathematical models involve input parameters, which are not precisely known. Global sensitivity analysis aims to identify the parameters whose uncertainty has the largest impact on the variability of a quantity of interest (output of the model). One of the statistical tools used to quantify the influence of each input variable on the output is the Sobol sensitivity index. We consider the statistical estimation of this index from a finite sample of model outputs. We study asymptotic and non-asymptotic properties of two estimators of Sobol indices. These properties are applied to significance tests and estimation by confidence intervals.
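The first-order Sobol index of input i is Var(E[Y | X_i]) / Var(Y); a standard pick-freeze Monte Carlo estimator is sketched below on a made-up linear test function. This is a generic estimator, not necessarily one of the two estimators studied in the article.

```python
import numpy as np

def first_order_sobol(model, sample_inputs, i, n=100_000):
    """Pick-freeze Monte Carlo estimate of the first-order Sobol index S_i.
    `sample_inputs(n)` returns an (n, d) array of independent inputs;
    `model` maps such an array to a length-n output vector."""
    X, X_prime = sample_inputs(n), sample_inputs(n)
    X_mix = X_prime.copy()
    X_mix[:, i] = X[:, i]                  # keep column i, resample the others
    Y, Y_mix = model(X), model(X_mix)
    return np.cov(Y, Y_mix)[0, 1] / np.var(Y, ddof=1)

# Toy check on Y = X0 + 0.1 * X1 with independent standard normal inputs:
rng = np.random.default_rng(2)
sample = lambda n: rng.standard_normal((n, 2))
print(first_order_sobol(lambda X: X[:, 0] + 0.1 * X[:, 1], sample, i=0))  # about 1/1.01
```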

5.
The major problem of mean–variance portfolio optimization is parameter uncertainty. Many methods have been proposed to tackle this problem, including shrinkage methods, resampling techniques, and imposing constraints on the portfolio weights. This paper suggests a new estimation method for mean–variance portfolio weights based on the concept of generalized pivotal quantity (GPQ) in the case when asset returns are multivariate normally distributed and serially independent. Both point and interval estimations of the portfolio weights are considered. Compared with Markowitz's mean–variance model and the resampling and shrinkage methods, we find that the proposed GPQ method typically yields the smallest mean-squared error for the point estimate of the portfolio weights and obtains a satisfactory coverage rate for their simultaneous confidence intervals. Finally, we apply the proposed methodology to address a portfolio rebalancing problem.

6.
A deterministic computer model is to be used in a situation where there is uncertainty about the values of some or all of the input parameters. This uncertainty induces uncertainty in the output of the model. We consider the problem of estimating a specific percentile of the distribution of this uncertain output. We also suppose that the computer code is computationally expensive, so we can run the model only at a small number of distinct inputs. This means that we must consider our uncertainty about the computer code itself at all untested inputs. We model the output, as a function of its inputs, as a Gaussian process, and after a few initial runs of the code use a simulation approach to choose further suitable design points and to make inferences about the percentile of interest itself. An example is given involving a model that is used in sewer design.
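The emulation step described here can be sketched with an off-the-shelf Gaussian process regressor; the simulator, the design, and the kernel settings below are placeholders, and the article's sequential choice of further design points and its simulation-based treatment of code uncertainty are not reproduced.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(3)
expensive_code = lambda x: np.sin(3 * x[:, 0]) + x[:, 1] ** 2   # stand-in simulator

# A small design of runs: the real code is assumed too expensive for plain Monte Carlo.
X_design = rng.uniform(0.0, 1.0, size=(20, 2))
gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.3), normalize_y=True)
gp.fit(X_design, expensive_code(X_design))

# Propagate input uncertainty through the (cheap) emulator instead of the code itself.
X_unc = rng.uniform(0.0, 1.0, size=(50_000, 2))   # draws from the input distribution
y_emul = gp.predict(X_unc)
print(np.percentile(y_emul, 95))                  # plug-in estimate of the 95th percentile
```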

7.
Model-based estimates of future uncertainty are generally based on the in-sample fit of the model, as when Box–Jenkins prediction intervals are calculated. However, this approach will generate biased uncertainty estimates in real time when there are data revisions. A simple remedy is suggested, and used to generate more accurate prediction intervals for 25 macroeconomic variables, in line with the theory. A simulation study based on an empirically estimated model of data revisions for U.S. output growth is used to investigate small-sample properties.

8.
舒元, 才国伟. 《统计研究》(Statistical Research), 2007, 24(9): 23-29
This paper first analyses the patterns of change in China's human capital and, on that basis, builds an overlapping-generations model of human capital accumulation. Uncertainty is introduced into the model, and the distribution of human capital across the whole of society is taken as the object of study; the focus is on the endogenous determination of human capital accumulation under market-based and public education systems, and on the reform of the education financing system. The uncertainty of human capital accumulation and the output elasticity of education expenditure endogenously determine the accumulation path of human capital and the direction of reform of the market-based education system.

9.
The starting point in uncertainty quantification is a stochastic model, which is fitted to a technical system in a suitable way, and prediction of uncertainty is carried out within this stochastic model. In any application, such a model will not be perfect, so any uncertainty quantification from such a model has to take into account the inadequacy of the model. In this paper, we rigorously show how the observed data of the technical system can be used to build a conservative non-asymptotic confidence interval on quantiles related to experiments with the technical system. The construction of this confidence interval is based on concentration inequalities and order statistics. An asymptotic bound on the length of this confidence interval is presented. Here we assume that engineers use more and more of their knowledge to build models whose order of error is suitably bounded. The results are illustrated by applying the newly proposed approach to real and simulated data.
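The article's concentration-inequality construction is not reproduced here, but the classical distribution-free order-statistic bound it builds on can be sketched in a few lines; the sample below is simulated as a placeholder.

```python
import numpy as np
from scipy.stats import binom

def quantile_upper_bound(sample, p=0.95, conf=0.95):
    """Conservative upper confidence bound for the p-quantile from order statistics:
    return X_(k) for the smallest k with P(Bin(n, p) <= k - 1) >= conf, so that
    P(X_(k) >= true p-quantile) >= conf for continuous data."""
    x = np.sort(np.asarray(sample))
    n = len(x)
    for k in range(1, n + 1):
        if binom.cdf(k - 1, n, p) >= conf:
            return x[k - 1]
    raise ValueError("sample too small for the requested confidence level")

rng = np.random.default_rng(4)
print(quantile_upper_bound(rng.lognormal(size=200), p=0.95, conf=0.95))
```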

10.
Quantifying uncertainty in the biospheric carbon flux for England and Wales
A crucial issue in the current global warming debate is the effect of vegetation and soils on carbon dioxide (CO2) concentrations in the atmosphere. Vegetation can extract CO2 through photosynthesis, but respiration, decay of soil organic matter and disturbance effects such as fire return it to the atmosphere. The balance of these processes is the net carbon flux. To estimate the biospheric carbon flux for England and Wales, we address the statistical problem of inference for the sum of multiple outputs from a complex deterministic computer code whose input parameters are uncertain. The code is a process model which simulates the carbon dynamics of vegetation and soils, including the amount of carbon that is stored as a result of photosynthesis and the amount that is returned to the atmosphere through respiration. The aggregation of outputs corresponding to multiple sites and types of vegetation in a region gives an estimate of the total carbon flux for that region over a period of time. Expert prior opinions are elicited for marginal uncertainty about the relevant input parameters and for correlations of inputs between sites. A Gaussian process model is used to build emulators of the multiple code outputs and Bayesian uncertainty analysis is then used to propagate uncertainty in the input parameters through to uncertainty on the aggregated output. Numerical results are presented for England and Wales in the year 2000. It is estimated that vegetation and soils in England and Wales constituted a net sink of 7.55 Mt C (1 Mt C = 10^12 g of carbon) in 2000, with standard deviation 0.56 Mt C resulting from the sources of uncertainty that are considered.

11.

Bayesian analysis often concerns an evaluation of models with different dimensionality as is necessary in, for example, model selection or mixture models. To facilitate this evaluation, transdimensional Markov chain Monte Carlo (MCMC) relies on sampling a discrete indexing variable to estimate the posterior model probabilities. However, little attention has been paid to the precision of these estimates. If only few switches occur between the models in the transdimensional MCMC output, precision may be low and assessment based on the assumption of independent samples misleading. Here, we propose a new method to estimate the precision based on the observed transition matrix of the model-indexing variable. Assuming a first-order Markov model, the method samples from the posterior of the stationary distribution. This allows assessment of the uncertainty in the estimated posterior model probabilities, model ranks, and Bayes factors. Moreover, the method provides an estimate for the effective sample size of the MCMC output. In two model selection examples, we show that the proposed approach provides a good assessment of the uncertainty associated with the estimated posterior model probabilities.
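A compact sketch of this idea follows: treat the model-index sequence as a first-order Markov chain, give each row of the transition matrix an independent Dirichlet posterior based on the observed transition counts, and compute the stationary distribution of each posterior draw to quantify uncertainty in the posterior model probabilities. The count matrix and prior weight below are placeholders.

```python
import numpy as np

def stationary_dist(P):
    """Stationary distribution of a transition matrix P (leading left eigenvector)."""
    vals, vecs = np.linalg.eig(P.T)
    v = np.real(vecs[:, np.argmax(np.real(vals))])
    return v / v.sum()

def posterior_model_probs(counts, n_draws=2000, prior=1.0, seed=0):
    """counts[i, j]: observed i -> j transitions of the model index in the MCMC output."""
    rng = np.random.default_rng(seed)
    draws = [stationary_dist(np.vstack([rng.dirichlet(row + prior) for row in counts]))
             for _ in range(n_draws)]
    return np.array(draws)    # each row: one posterior draw of the model probabilities

counts = np.array([[950, 12], [14, 820]])   # hypothetical transition counts, 2 models
draws = posterior_model_probs(counts)
print(draws.mean(axis=0), np.percentile(draws, [2.5, 97.5], axis=0))
```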


12.
A popular account for the demise of the U.K.’s monetary targeting regime in the 1980s blames the fluctuating predictive relationships between broad money and inflation and real output growth. Yet ex post policy analysis based on heavily revised data suggests no fluctuations in the predictive content of money. In this paper, we investigate the predictive relationships for inflation and output growth using both real-time and heavily revised data. We consider a large set of recursively estimated vector autoregressive (VAR) and vector error correction models (VECM). These models differ in terms of lag length and the number of cointegrating relationships. We use Bayesian model averaging (BMA) to demonstrate that real-time monetary policymakers faced considerable model uncertainty. The in-sample predictive content of money fluctuated during the 1980s as a result of data revisions in the presence of model uncertainty. This feature is only apparent with real-time data as heavily revised data obscure these fluctuations. Out-of-sample predictive evaluations rarely suggest that money matters for either inflation or real output. We conclude that both data revisions and model uncertainty contributed to the demise of the U.K.’s monetary targeting regime.

13.
Comment     
Using postwar annual data through 1987 from 46 countries, we confirm our earlier finding that the maximum impact (χ) of monetary shocks on real output is negatively correlated across countries with the variance of such shocks (σ²) [the Lucas proposition (LP)]. This holds whether the time series specification for each country is the one we reported in Kormendi and Meguire (1984) (KM), one selected by a Bayesian pretest (BPT) suggested by Poirier's results, or a uniform specification that nests both. Using the LP to restrict the coefficients of monetary shocks in the real output equation significantly improves forecasts of real output growth over the period 1978–1987. Over the same period, predictions of money and real output growth made from the BPT specifications often do not outperform comparable predictions made from the KM specifications.

14.
The Behrens–Fisher problem concerns inference for the difference between means of two independent normal populations without the assumption of equality of variances. In this article, we compare three approximate confidence intervals and a generalized confidence interval for the Behrens–Fisher problem. We also show how to obtain simultaneous confidence intervals for the three population case (analysis of variance, ANOVA) by the Bonferroni correction factor. We conduct an extensive simulation study to evaluate these methods with respect to their type I error rate, power, expected confidence interval width, and coverage probability. Finally, the considered methods are applied to two real datasets.
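Welch's interval, built on the Satterthwaite degrees-of-freedom approximation, is the best-known approximate solution to the Behrens–Fisher problem and is presumably among the intervals compared; a sketch follows, with simulated data as a placeholder. For the three-population case, the same interval can be applied to each pairwise difference at level 1 - alpha/3, which is the Bonferroni correction mentioned in the abstract.

```python
import numpy as np
from scipy import stats

def welch_ci(x, y, conf=0.95):
    """Welch-Satterthwaite approximate CI for mean(x) - mean(y) with unequal variances."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    vx, vy = x.var(ddof=1) / len(x), y.var(ddof=1) / len(y)
    df = (vx + vy) ** 2 / (vx ** 2 / (len(x) - 1) + vy ** 2 / (len(y) - 1))
    half = stats.t.ppf(0.5 + conf / 2, df) * np.sqrt(vx + vy)
    diff = x.mean() - y.mean()
    return diff - half, diff + half

rng = np.random.default_rng(5)
print(welch_ci(rng.normal(10, 1, 30), rng.normal(9, 3, 20)))
```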

15.
This article provides empirical comparative statics under simultaneous price and output uncertainty. In so doing, it presents a simple (one-step) and general statistical methodology under price and output uncertainty.

16.
We study statistical procedures to quantify uncertainty in multivariate climate projections based on several deterministic climate models. We introduce two different assumptions – called constant bias and constant relation respectively – for extrapolating the substantial additive and multiplicative biases present during the control period to the scenario period. There are also strong indications that the biases in the scenario period are different from the extrapolations from the control period. Including such changes in the statistical models leads to an identifiability problem that we solve in a frequentist analysis using a zero sum side condition and in a Bayesian analysis using informative priors. The Bayesian analysis provides estimates of the uncertainty in the parameter estimates and takes this uncertainty into account for the predictive distributions. We illustrate the method by analysing projections of seasonal temperature and precipitation in the Alpine region from five regional climate models in the PRUDENCE project.
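The two extrapolation assumptions lend themselves to one-line formulas; the sketch below interprets "constant bias" as carrying the control-period additive discrepancy into the scenario period and "constant relation" as carrying the multiplicative one, which is my reading of the abstract rather than the authors' exact specification. The temperatures are invented.

```python
def project(obs_control, model_control, model_scenario, kind="constant_bias"):
    """Bias-correct a climate-model scenario run using the control-period discrepancy.
    'constant_bias' removes the additive bias seen in the control period;
    'constant_relation' removes the multiplicative bias instead."""
    if kind == "constant_bias":
        return model_scenario - (model_control - obs_control)
    if kind == "constant_relation":
        return model_scenario * (obs_control / model_control)
    raise ValueError(kind)

# Hypothetical seasonal means (deg C): observed control 10.0, model control 11.5,
# model scenario 14.0 -> the two assumptions give different projections.
print(project(10.0, 11.5, 14.0, "constant_bias"))       # 12.5
print(project(10.0, 11.5, 14.0, "constant_relation"))   # about 12.17
```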

17.
Chen, T., Tracy, S., Uno, H. Lifetime Data Analysis, 2021, 27(3): 481-498

Classical simultaneous confidence bands for survival functions (i.e., Hall–Wellner, equal precision, and empirical likelihood bands) are derived from transformations of the asymptotic Brownian nature of the Nelson–Aalen or Kaplan–Meier estimators. Due to the properties of Brownian motion, a theoretical derivation of the highest confidence density region cannot be obtained in closed form. Instead, we provide confidence bands derived from a related optimization problem with local time processes. These bands can be applied to the one-sample problem regarding both cumulative hazard and survival functions. In addition, we present a solution to the two-sample problem for testing differences in cumulative hazard functions. The finite sample performance of the proposed method is assessed by Monte Carlo simulation studies. The proposed bands are applied to clinical trial data to assess survival times for primary biliary cirrhosis patients treated with D-penicillamine.


18.
In this article, we present a novel approach to clustering finite or infinite dimensional objects observed with different uncertainty levels. The novelty lies in using confidence sets rather than point estimates to obtain cluster membership and the number of clusters based on the distance between the confidence set estimates. The minimal and maximal distances between the confidence set estimates provide confidence intervals for the true distances between objects. The upper bounds of these confidence intervals can be used to minimize the within clustering variability and the lower bounds can be used to maximize the between clustering variability. We assign objects to the same cluster based on a min–max criterion and we separate clusters based on a max–min criterion. We illustrate our technique by clustering a large number of curves and evaluate our clustering procedure with a synthetic example and with a specific application.
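For the simple case where each confidence set is an axis-aligned box of coordinate-wise confidence bounds, the minimal and maximal distances mentioned here reduce to closed-form expressions; a sketch follows with invented bounds. The resulting upper bounds could, for example, be fed to a standard linkage clustering to mimic the min–max assignment rule.

```python
import numpy as np

def interval_distances(lo1, hi1, lo2, hi2):
    """Minimal and maximal Euclidean distances between two axis-aligned
    confidence boxes [lo, hi] (one box per object, coordinate-wise bounds)."""
    lo1, hi1, lo2, hi2 = map(np.asarray, (lo1, hi1, lo2, hi2))
    gap = np.maximum(0.0, np.maximum(lo1 - hi2, lo2 - hi1))   # closest points per coordinate
    span = np.maximum(hi1 - lo2, hi2 - lo1)                   # farthest points per coordinate
    return np.linalg.norm(gap), np.linalg.norm(span)

# Two objects with confidence bounds on two features (hypothetical numbers):
d_min, d_max = interval_distances([0.0, 1.0], [0.4, 1.6], [0.9, 2.0], [1.3, 2.8])
print(d_min, d_max)   # lower/upper bound on the true distance between the objects
```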

19.
Simultaneous confidence bands provide a useful adjunct to the popular Kaplan–Meier product limit estimator for a survival function, particularly when results are displayed graphically. They allow an assessment of the magnitude of sampling errors and provide a graphical view of a formal goodness-of-fit test. In this paper we evaluate a modified version of Nair's (1981) simultaneous confidence bands. The modification is based on a logistic transformation of the Kaplan–Meier estimator. We show that the modified bands have some important practical advantages.

20.
Classical inferential procedures induce conclusions from a set of data to a population of interest, accounting for the imprecision resulting from the stochastic component of the model. Less attention is devoted to the uncertainty arising from (unplanned) incompleteness in the data. Through the choice of an identifiable model for non-ignorable non-response, one narrows the possible data-generating mechanisms to the point where inference only suffers from imprecision. Some proposals have been made for assessing the sensitivity to these modelling assumptions; many are based on fitting several plausible but competing models. For example, we could assume that the missing data are missing at random in one model, and then fit an additional model where non-random missingness is assumed. On the basis of data from a Slovenian plebiscite conducted in 1991 to prepare for independence, it is shown that such an ad hoc procedure may be misleading. We propose an approach which identifies and incorporates both sources of uncertainty in inference: imprecision due to finite sampling and ignorance due to incompleteness. A simple sensitivity analysis considers a finite set of plausible models. We take this idea one step further by considering more degrees of freedom than the data support. This produces sets of estimates (regions of ignorance) and sets of confidence regions (combined into regions of uncertainty).
