Similar Documents
1.
This paper presents a general model for exposure to homegrown foods that is used with a Monte Carlo analysis to determine the relative contributions of variability (Type A uncertainty) and true uncertainty (Type B uncertainty) to the overall variance in the predicted dose-to-concentration ratio. Although the classification of exposure inputs as uncertain or variable is somewhat subjective, food consumption rates and exposure duration are judged to have a predicted variance that is dominated by variability among individuals by age, income, culture, and geographical region, whereas biotransfer factors and partition factors are inputs that, to a large extent, involve uncertainty. Using ingestion of fruits, vegetables, grains, dairy products, meat, and soil assumed to be contaminated by hexachlorobenzene (HCB) and benzo(a)pyrene (BaP) as case studies, a Monte Carlo analysis is used to explore the relative contributions of uncertainty and variability to the overall variance in the estimated distribution of potential dose within the population that consumes homegrown foods. It is found that, when soil concentrations are specified, variance in the dose-to-concentration ratios for HCB is equally attributable to uncertainty and variability, whereas for BaP, variance in these ratios is dominated by true uncertainty.
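As a sketch of the nested ("two-dimensional") Monte Carlo design this kind of analysis relies on, the following toy Python example separates a variable intake rate from an uncertain biotransfer factor; the model, distributions, and parameter values are illustrative placeholders, not those of the paper.

```python
# Minimal sketch of a nested Monte Carlo separating Type B uncertainty (outer
# loop) from Type A variability (inner loop) in a toy dose model
# dose_ratio = intake * biotransfer. All names and values are illustrative.
import numpy as np

rng = np.random.default_rng(0)
n_unc, n_var = 200, 1000  # outer loop: uncertainty; inner loop: variability

results = np.empty((n_unc, n_var))
for i in range(n_unc):
    # Type B (true uncertainty): one biotransfer factor per outer iteration
    biotransfer = rng.lognormal(mean=-2.0, sigma=0.8)
    # Type A (variability): intake varies across individuals in the population
    intake = rng.lognormal(mean=0.0, sigma=0.5, size=n_var)
    results[i] = intake * biotransfer

# Law of total variance: between-outer-loop variance reflects uncertainty,
# mean within-loop variance reflects interindividual variability.
var_uncertainty = results.mean(axis=1).var()
var_variability = results.var(axis=1).mean()
total = results.var()
print(f"uncertainty share: {var_uncertainty / total:.2f}")
print(f"variability share: {var_variability / total:.2f}")
```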

2.
In a quantitative model with uncertain inputs, the uncertainty of the output can be summarized by a risk measure. We propose a sensitivity analysis method based on derivatives of the output risk measure, in the direction of model inputs. This produces a global sensitivity measure, explicitly linking sensitivity and uncertainty analyses. We focus on the case of distortion risk measures, defined as weighted averages of output percentiles, and prove a representation of the sensitivity measure that can be evaluated on a Monte Carlo sample, as a weighted average of gradients over the input space. When the analytical model is unknown or hard to work with, nonparametric techniques are used for gradient estimation. This process is demonstrated through the example of a nonlinear insurance loss model. Furthermore, the proposed framework is extended in order to measure sensitivity to constant model parameters, uncertain statistical parameters, and random factors driving dependence between model inputs.
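For illustration only, a brute-force finite-difference variant of this idea is sketched below for the 95% expected shortfall of a toy nonlinear loss model; the paper's own estimator uses a gradient representation rather than re-evaluating the risk measure, and all inputs here are made up.

```python
# Minimal sketch: finite-difference sensitivity of a distortion risk measure
# (here 95% expected shortfall) to a shift in each model input, on a toy
# nonlinear loss model with invented input distributions.
import numpy as np

rng = np.random.default_rng(1)
n = 100_000
x1 = rng.lognormal(0.0, 0.5, n)
x2 = rng.gamma(2.0, 1.0, n)

def loss(x1, x2):
    return x1 * x2 + 0.5 * x2**2   # toy nonlinear aggregation

def expected_shortfall(y, alpha=0.95):
    q = np.quantile(y, alpha)
    return y[y >= q].mean()        # average of losses beyond the VaR

base = expected_shortfall(loss(x1, x2))
eps = 0.01
for name, (a, b) in {"x1": (x1 + eps, x2), "x2": (x1, x2 + eps)}.items():
    bumped = expected_shortfall(loss(a, b))
    print(name, "sensitivity ~", (bumped - base) / eps)
```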

3.
Methods to Approximate Joint Uncertainty and Variability in Risk
As interest in quantitative analysis of joint uncertainty and interindividual variability (JUV) in risk grows, so does the need for related computational shortcuts. To quantify JUV in risk, Monte Carlo methods typically require nested sampling of JUV in distributed inputs, which is cumbersome and time-consuming. Two approximation methods proposed here allow simpler and more rapid analysis. The first consists of new upper-bound JUV estimators that involve only uncertainty or variability, not both, and so never require nested sampling to calculate. The second is a discrete-probability-calculus procedure that uses only the mean and one upper-tail mean for each input to estimate mean and upper-bound risk; this procedure is simpler and more intuitive than similar ones in use. Application of these methods is illustrated in an assessment of cancer risk from residential exposures to chloroform in Kanawha Valley, West Virginia. Because each of the multiple exposure pathways considered in this assessment had separate modeled sources of uncertainty and variability, the assessment illustrates a realistic case where a standard Monte Carlo approach to JUV analysis requires nested sampling. In the illustration, the first proposed method quantified JUV in cancer risk much more efficiently than corresponding nested Monte Carlo calculations. The second proposed method also nearly duplicated JUV-related and other estimates of risk obtained using Monte Carlo methods. Both methods were thus found adequate for obtaining basic risk estimates accounting for JUV in a realistically complex risk assessment, making routine JUV analysis more convenient and practical.

4.
Uncertainty of environmental concentrations is calculated with the regional multimedia exposure model of EUSES 1.0 by considering probability input distributions for aqueous solubility, vapor pressure, and octanol-water partition coefficient, K(ow). Only reliable experimentally determined data are selected from available literature for eight reference chemicals representing a wide substance property spectrum. Monte Carlo simulations are performed with uniform, triangular, and log-normal input distributions to assess the influence of the choice of input distribution type on the predicted concentration distributions. The impact of input distribution shapes on output variance exceeds the effect on the output mean by one order of magnitude. Both are affected by influence and uncertainty (i.e., variance) of the input variable as well. Distributional shape has no influence when the sensitivity function of the respective parameter is perfectly linear. For nonlinear relationships, overlap of probability mass of input distribution with influential ranges of the parameter space is important. Differences in computed output distribution are greatest when input distributions differ in the most influential parameter range.
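A minimal sketch of this kind of distribution-shape experiment, assuming a toy nonlinear partitioning model in place of EUSES, might look as follows.

```python
# Minimal sketch: effect of input distribution shape (uniform, triangular,
# log-normal) on the output of a nonlinear model. The model and parameter
# values are illustrative placeholders, not EUSES.
import numpy as np

rng = np.random.default_rng(2)
n = 100_000

def model(kow):
    return kow / (1.0 + 0.01 * kow)   # toy nonlinear partitioning relation

inputs = {
    "uniform":    rng.uniform(10, 1000, n),
    "triangular": rng.triangular(10, 100, 1000, n),
    "lognormal":  rng.lognormal(np.log(100), 1.0, n),
}
for shape, kow in inputs.items():
    y = model(kow)
    print(f"{shape:10s} mean={y.mean():8.2f} var={y.var():10.2f}")
```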

5.
The behavior of research subjects in many fields, including economics, finance, and business management, can be characterized by moment-restriction models. However, parameter estimates in such models are highly sensitive to the choice of moment conditions. How to select the optimal moment conditions, and thereby obtain more accurate parameter estimates and more precise statistical inference, is an important problem facing empirical research. This paper studies optimal moment-condition selection for the two-step efficient generalized method of moments (GMM) estimator of general moment-restriction models, from the standpoint of minimizing the estimator's mean squared error (MSE). First, an iterative argument is used to derive the higher-order MSE of the two-step efficient GMM estimator, and a Nagar decomposition then yields its approximate MSE. Based on this expression, a general theory of moment-condition selection for two-step efficient GMM estimation is developed: the optimal moment conditions are defined, a selection criterion is proposed, and its asymptotic validity is proved. Simulation results show that the proposed selection method markedly improves the finite-sample performance of the two-step efficient GMM estimator and reduces its finite-sample bias. The study provides a theoretical basis for the moment-condition selection problems faced in empirical research.
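The two-step efficient GMM estimator that the selection criterion targets can be sketched as below for a simulated linear instrumental-variables model; the MSE-based moment-selection rule itself is not reproduced.

```python
# Minimal sketch of two-step efficient GMM for y = x*beta + u with
# instruments Z (moment condition E[Z'u] = 0). Simulated data only.
import numpy as np

rng = np.random.default_rng(3)
n = 2000
z = rng.normal(size=(n, 3))                     # three candidate instruments
e = rng.normal(size=n)                          # common shock -> endogeneity
x = z @ np.array([1.0, 0.5, 0.2]) + e + rng.normal(size=n)
y = 2.0 * x + e + rng.normal(size=n)
X, Z = x[:, None], z

def gmm(X, y, Z, W):
    A = X.T @ Z @ W @ Z.T @ X
    b = X.T @ Z @ W @ Z.T @ y
    return np.linalg.solve(A, b)

# Step 1: identity weight matrix
beta1 = gmm(X, y, Z, np.eye(Z.shape[1]))
# Step 2: weight by the inverse of the estimated moment covariance
g = Z * (y - X @ beta1)[:, None]
W2 = np.linalg.inv(g.T @ g / n)
beta2 = gmm(X, y, Z, W2)
print("one-step:", beta1, "two-step efficient:", beta2)
```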

6.
In risk assessment, the moment-independent sensitivity analysis (SA) technique for reducing model uncertainty has attracted a great deal of attention from analysts and practitioners. It aims at measuring the relative importance of an individual input, or a set of inputs, in determining the uncertainty of the model output by looking at the entire distribution range of the model output. In this article, along the lines of Plischke et al., we point out that the original moment-independent SA index (also called the delta index) can also be interpreted as a dependence measure between model output and input variables, and introduce another moment-independent SA index (called the extended delta index) based on copulas. Then, nonparametric methods for estimating the delta and extended delta indices are proposed. Both methods need only a single set of samples to compute all the indices; thus, they overcome the "curse of dimensionality." Finally, an analytical test example, a risk assessment model, and the Level E model are employed to compare the delta and extended delta indices and to test the two calculation methods. Results show that the delta and extended delta indices produce the same importance ranking in these three test examples. It is also shown that the two proposed calculation methods dramatically reduce the computational burden.
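A minimal given-data sketch of the delta index, using histogram density estimates and an illustrative toy model (the copula-based extended index is not implemented), might look like this.

```python
# Minimal sketch of a single-sample (given-data) estimator of the
# moment-independent delta index via histogram densities:
# delta_i = 0.5 * E_{X_i}[ integral |f_Y(y) - f_{Y|X_i}(y)| dy ].
import numpy as np

rng = np.random.default_rng(4)
n, n_bins, m_part = 50_000, 100, 20
x = rng.normal(size=(n, 3))
y = x[:, 0] + 0.5 * x[:, 1]**2 + 0.1 * x[:, 2]   # toy model

edges = np.histogram_bin_edges(y, bins=n_bins)
width = np.diff(edges)[0]
f_y, _ = np.histogram(y, bins=edges, density=True)

def delta(xi, y):
    total = 0.0
    quantiles = np.quantile(xi, np.linspace(0, 1, m_part + 1))
    for lo, hi in zip(quantiles[:-1], quantiles[1:]):
        mask = (xi >= lo) & (xi <= hi)
        f_cond, _ = np.histogram(y[mask], bins=edges, density=True)
        # L1 distance between marginal and conditional output densities
        total += mask.mean() * np.sum(np.abs(f_y - f_cond)) * width
    return 0.5 * total

for i in range(3):
    print(f"delta_{i+1} ~ {delta(x[:, i], y):.3f}")
```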

7.
A. E. Ades, G. Lu. Risk Analysis, 2003, 23(6): 1165-1172
Monte Carlo simulation has become the accepted method for propagating parameter uncertainty through risk models. It is widely appreciated, however, that correlations between input variables must be taken into account if models are to deliver correct assessments of uncertainty in risk. Various two-stage methods have been proposed that first estimate a correlation structure and then generate Monte Carlo simulations, which incorporate this structure while leaving marginal distributions of parameters unchanged. Here we propose a one-stage alternative, in which the correlation structure is estimated from the data directly by Bayesian Markov Chain Monte Carlo methods. Samples from the posterior distribution of the outputs then correctly reflect the correlation between parameters, given the data and the model. Besides its computational simplicity, this approach utilizes the available evidence from a wide variety of structures, including incomplete data and correlated and uncorrelated repeat observations. The major advantage of a Bayesian approach is that, rather than assuming the correlation structure is fixed and known, it captures the joint uncertainty induced by the data in all parameters, including variances and covariances, and correctly propagates this through the decision or risk model. These features are illustrated with examples on emissions of dioxin congeners from solid waste incinerators.
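To illustrate the one-stage idea, the toy sketch below estimates a correlation by a Metropolis sampler and pushes the posterior draws through a stand-in risk model; the data, prior, and model are all invented for the example.

```python
# Minimal sketch: Bayesian MCMC for the correlation of bivariate standard
# normal data, with posterior draws propagated through a toy "risk model",
# so that output samples reflect joint parameter uncertainty.
import numpy as np

rng = np.random.default_rng(5)
data = rng.multivariate_normal([0, 0], [[1, 0.6], [0.6, 1]], size=50)

def log_lik(rho):
    if not -0.99 < rho < 0.99:
        return -np.inf
    x, y = data[:, 0], data[:, 1]
    q = (x**2 - 2 * rho * x * y + y**2) / (1 - rho**2)
    return -0.5 * np.sum(q) - len(data) / 2 * np.log(1 - rho**2)

rho, draws = 0.0, []
for _ in range(20_000):                    # random-walk Metropolis
    prop = rho + rng.normal(0, 0.1)
    if np.log(rng.uniform()) < log_lik(prop) - log_lik(rho):
        rho = prop
    draws.append(rho)
post = np.array(draws[5000:])              # discard burn-in

# Propagate: each posterior draw generates correlated inputs to a risk model
risk = [rng.multivariate_normal([0, 0], [[1, r], [r, 1]]).sum()
        for r in post[::10]]
print("posterior mean rho:", post.mean(), "risk sd:", np.std(risk))
```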

8.
Risk analysis often depends on complex, computer-based models to describe links between policies (e.g., required emission-control equipment) and consequences (e.g., probabilities of adverse health effects). Appropriate specification of many model aspects is uncertain, including details of the model structure; transport, reaction-rate, and other parameters; and application-specific inputs such as pollutant-release rates. Because these uncertainties preclude calculation of the precise consequences of a policy, it is important to characterize the plausible range of effects. In principle, a probability distribution function for the effects can be constructed using Monte Carlo analysis, but the combinatorics of multiple uncertainties and the often high cost of model runs quickly exhaust available resources. This paper presents and applies a method to choose sets of input conditions (scenarios) that efficiently represent knowledge about the joint probability distribution of inputs. A simple score function approximately relating inputs to a policy-relevant output—in this case, globally averaged stratospheric ozone depletion—is developed. The probability density function for the score-function value is analytically derived from a subjective joint probability density for the inputs. Scenarios are defined by selected quantiles of the score function. Using this method, scenarios can be systematically selected in terms of the approximate probability distribution function for the output of concern, and probability intervals for the joint effect of the inputs can be readily constructed.
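A minimal sketch of quantile-based scenario selection with a linear score function, using placeholder weights and input distributions, follows.

```python
# Minimal sketch: select representative scenarios at chosen quantiles of a
# simple score function approximating the policy-relevant output as a
# weighted sum of inputs. Weights and distributions are illustrative.
import numpy as np

rng = np.random.default_rng(6)
n = 100_000
inputs = rng.lognormal(0.0, 0.4, size=(n, 3))   # joint input sample
weights = np.array([0.5, 0.3, 0.2])             # approximate influence
score = inputs @ weights

# Pick the input vectors whose score lands at chosen quantiles; these
# become the scenarios actually run through the expensive model.
for q in (0.05, 0.25, 0.50, 0.75, 0.95):
    idx = np.argmin(np.abs(score - np.quantile(score, q)))
    print(f"q={q:.2f} scenario:", np.round(inputs[idx], 3))
```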

9.
Measures of sensitivity and uncertainty have become an integral part of risk analysis. Many such measures have a conditional probabilistic structure, for which a straightforward Monte Carlo estimation procedure has a double-loop form. Recently, a more efficient single-loop procedure has been introduced, and consistency of this procedure has been demonstrated separately for particular measures, such as those based on variance, density, and information value. In this work, we give a unified proof of single-loop consistency that applies to any measure satisfying a common rationale. This proof is not only more general but invokes less restrictive assumptions than heretofore in the literature, allowing for the presence of correlations among model inputs and of categorical variables. We examine numerical convergence of such an estimator under a variety of sensitivity measures. We also examine its application to a published medical case study.
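The contrast between double-loop and single-loop estimation can be sketched on the familiar first-order variance-based index, one of the measures covered by the unified proof; the toy model below is illustrative.

```python
# Minimal sketch: double-loop vs. single-loop estimation of the first-order
# variance-based index S_i = Var(E[Y|X_i]) / Var(Y) on a toy model. The
# single loop conditions by binning one joint sample instead of re-sampling.
import numpy as np

rng = np.random.default_rng(7)

def model(x):
    return np.sin(x[..., 0]) + 2.0 * x[..., 1]

x_all = rng.uniform([-np.pi, -1], [np.pi, 1], size=(100_000, 2))
y = model(x_all)
var_y = y.var()

# Double loop: for each fixed x1, an inner Monte Carlo over the other input
x1_outer = rng.uniform(-np.pi, np.pi, 200)
inner_means = [model(np.column_stack([np.full(500, v),
                                      rng.uniform(-1, 1, 500)])).mean()
               for v in x1_outer]
s1_double = np.var(inner_means) / var_y

# Single loop: one joint sample, condition on x1 via equal-probability bins
bins = np.quantile(x_all[:, 0], np.linspace(0, 1, 51))
which = np.digitize(x_all[:, 0], bins[1:-1])
cond_means = np.array([y[which == b].mean() for b in range(50)])
s1_single = cond_means.var() / var_y
print(f"double loop S1 ~ {s1_double:.3f}, single loop S1 ~ {s1_single:.3f}")
```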

10.
Uncertainty importance measures are quantitative tools aimed at identifying the contribution of uncertain inputs to output uncertainty. Their applications range from food safety (Frey & Patil, 2002) to hurricane losses (Iman et al., 2005a, 2005b). The results and indications an analyst derives depend on the method selected for the study. In this work, we investigate the assumptions underlying various families of indicators to discuss the information they convey to the analyst/decision maker. We start with nonparametric techniques, and then present variance-based methods. By means of an example, we show that output variance does not always reflect a decision maker's state of knowledge of the inputs. We then examine the use of moment-independent approaches to global sensitivity analysis, i.e., techniques that look at the entire output distribution without specific reference to its moments. Numerical results demonstrate that both moment-independent and variance-based indicators agree in identifying noninfluential parameters. However, differences in the ranking of the most relevant factors show that the inputs that influence variance the most are not necessarily the ones that influence the output uncertainty distribution the most.

11.
A Combined Variance Reduction Technique for Monte Carlo Simulation in Option Pricing
This paper combines the power of importance sampling for pricing exotic derivative securities with the simplicity and flexibility of the control variate and stratified sampling techniques, embedding stratified sampling and control variates within the importance-sampling estimation framework to obtain a more effective combined variance reduction technique for Monte Carlo simulation in option pricing. An empirical simulation study of arithmetic Asian option pricing illustrates the approach.
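One ingredient of such a combination, the control-variate step, is sketched below for an arithmetic Asian call under Black-Scholes dynamics, using the discounted terminal price (known mean S0) as a simple control; the paper's importance-sampling and stratified-sampling components are not reproduced.

```python
# Minimal sketch: control-variate Monte Carlo for an arithmetic Asian call.
# The discounted terminal price e^{-rT} S_T has known risk-neutral mean S0,
# so it serves as a simple control variate. Parameters are illustrative.
import numpy as np

rng = np.random.default_rng(8)
S0, K, r, sigma, T = 100.0, 100.0, 0.05, 0.2, 1.0
n_steps, n_paths = 50, 100_000
dt = T / n_steps

z = rng.normal(size=(n_paths, n_steps))
log_paths = np.cumsum((r - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * z,
                      axis=1)
S = S0 * np.exp(log_paths)

payoff = np.exp(-r * T) * np.maximum(S.mean(axis=1) - K, 0.0)
control = np.exp(-r * T) * S[:, -1]            # E[control] = S0

beta = np.cov(payoff, control)[0, 1] / control.var()
adjusted = payoff - beta * (control - S0)      # variance-reduced estimator

se = lambda a: a.std(ddof=1) / np.sqrt(n_paths)
print(f"plain MC:        {payoff.mean():.4f} +/- {se(payoff):.4f}")
print(f"control variate: {adjusted.mean():.4f} +/- {se(adjusted):.4f}")
```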

12.
This paper addresses the use of data for identifying and characterizing uncertainties in model parameters and predictions. The Bayesian Monte Carlo method is formally presented and elaborated, and applied to the analysis of the uncertainty in a predictive model for global mean sea level change. The method uses observations of output variables, made with an assumed error structure, to determine a posterior distribution of model outputs. This is used to derive a posterior distribution for the model parameters. Results demonstrate the resolution of the uncertainty that is obtained as a result of the Bayesian analysis and also indicate the key contributors to the uncertainty in the sea level rise model. While the technique is illustrated with a simple, preliminary model, the analysis provides an iterative framework for model refinement. The methodology developed in this paper provides a mechanism for the incorporation of ongoing data collection and research in decision-making for problems involving uncertain environmental change.
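A minimal sketch of the Bayesian Monte Carlo weighting step, with a toy linear response standing in for the sea level model and an assumed Gaussian observation error, follows.

```python
# Minimal sketch of Bayesian Monte Carlo: prior Monte Carlo runs are
# weighted by the likelihood of an observed output (Gaussian measurement
# error assumed), yielding posterior distributions for parameters and output.
import numpy as np

rng = np.random.default_rng(9)
n = 50_000
sensitivity = rng.lognormal(0.0, 0.5, n)       # uncertain model parameter
output = sensitivity * 0.3                      # toy model prediction

obs, obs_sd = 0.4, 0.1                          # observation with known error
log_w = -0.5 * ((output - obs) / obs_sd) ** 2
w = np.exp(log_w - log_w.max())
w /= w.sum()                                    # normalized likelihood weights

post_mean = np.sum(w * sensitivity)
post_sd = np.sqrt(np.sum(w * (sensitivity - post_mean) ** 2))
print(f"prior mean {sensitivity.mean():.3f} -> "
      f"posterior {post_mean:.3f} +/- {post_sd:.3f}")
```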

13.
We develop a new parametric estimation procedure for option panels observed with error. We exploit asymptotic approximations assuming an ever increasing set of option prices in the moneyness (cross-sectional) dimension, but with a fixed time span. We develop consistent estimators for the parameters and the dynamic realization of the state vector governing the option price dynamics. The estimators converge stably to a mixed-Gaussian law and we develop feasible estimators for the limiting variance. We also provide semiparametric tests for the option price dynamics based on the distance between the spot volatility extracted from the options and one constructed nonparametrically from high-frequency data on the underlying asset. Furthermore, we develop new tests for the day-by-day model fit over specific regions of the volatility surface and for the stability of the risk-neutral dynamics over time. A comprehensive Monte Carlo study indicates that the inference procedures work well in empirically realistic settings. In an empirical application to S&P 500 index options, guided by the new diagnostic tests, we extend existing asset pricing models by allowing for a flexible dynamic relation between volatility and priced jump tail risk. Importantly, we document that the priced jump tail risk typically responds in a more pronounced and persistent manner than volatility to large negative market shocks.

14.
The purpose of this note is to show how semiparametric estimators with a small bias property can be constructed. The small bias property (SBP) of a semiparametric estimator is that its bias converges to zero faster than the pointwise and integrated bias of the nonparametric estimator on which it is based. We show that semiparametric estimators based on twicing kernels have the SBP. We also show that semiparametric estimators where nonparametric kernel estimation does not affect the asymptotic variance have the SBP. In addition we discuss an interpretation of series and sieve estimators as idempotent transformations of the empirical distribution that helps explain the known result that they lead to the SBP. In Monte Carlo experiments we find that estimators with the SBP have mean-square error that is smaller and less sensitive to bandwidth than those that do not have the SBP.
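For a Gaussian kernel K, the convolution K*K is a Gaussian with variance 2, so the twicing kernel 2K - K*K is easy to write down; the sketch below applies it in a plain density estimate, not in the semiparametric estimators of the paper.

```python
# Minimal sketch of a twicing kernel: M(u) = 2*K(u) - (K*K)(u). For a
# standard Gaussian K, K*K is the N(0, 2) density, so M is available in
# closed form. Used here in a plain kernel density estimate.
import numpy as np

def phi(u, s=1.0):
    return np.exp(-0.5 * (u / s) ** 2) / (s * np.sqrt(2 * np.pi))

def kde(data, grid, h, twiced=False):
    u = (grid[:, None] - data[None, :]) / h
    k = 2 * phi(u) - phi(u, s=np.sqrt(2)) if twiced else phi(u)
    return k.mean(axis=1) / h

rng = np.random.default_rng(10)
data = rng.normal(size=500)
grid = np.linspace(-3, 3, 7)
for twiced in (False, True):
    est = kde(data, grid, h=0.6, twiced=twiced)
    print("twiced" if twiced else "plain ", np.round(est, 3))
```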

15.
Risk Analysis, 2018, 38(8): 1576-1584
Fault trees are used in reliability modeling to create logical models of the fault combinations that can lead to undesirable events. The output of a fault tree analysis (the top event probability) is expressed in terms of the failure probabilities of the basic events that are input to the model. Typically, the basic event probabilities are not known exactly but are modeled as probability distributions; therefore, the top event probability is also represented by an uncertainty distribution. Monte Carlo methods are generally used for evaluating the uncertainty distribution, but such calculations are computationally intensive and do not readily reveal the dominant contributors to the uncertainty. In this article, a closed-form approximation for the fault tree top event uncertainty distribution is developed, which is applicable when the uncertainties in the basic events of the model are lognormally distributed. The results of the approximate method are compared with results from two sampling-based methods: the Monte Carlo method and the Wilks method based on order statistics. It is shown that the closed-form expression can provide a reasonable approximation to results obtained by Monte Carlo sampling, without incurring the computational expense. The Wilks method is found to be a useful means of providing an upper bound for the percentiles of the uncertainty distribution while being computationally inexpensive compared with full Monte Carlo sampling. The lognormal approximation method and the Wilks method appear to be attractive, practical alternatives for the evaluation of uncertainty in the output of fault trees and similar multilinear models.
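A toy version of the comparison, for a two-cut-set top event P = p1*p2 + p3 with independent lognormal basic events, can be sketched as follows; the moment-matching step and the 59-sample Wilks bound are standard, but the numbers are invented.

```python
# Minimal sketch: uncertainty in a toy fault-tree top event P = p1*p2 + p3
# with lognormal basic events. The product of lognormals is exactly
# lognormal; the sum is approximated by moment matching to a lognormal and
# compared against Monte Carlo and a first-order Wilks bound (the maximum
# of 59 samples is a 95%/95% one-sided tolerance limit).
import numpy as np

mu = np.log([1e-3, 2e-3, 1e-5])
sg = np.array([0.5, 0.7, 0.6])

# Monte Carlo reference
rng = np.random.default_rng(11)
p = np.exp(mu + sg * rng.normal(size=(200_000, 3)))
top_mc = p[:, 0] * p[:, 1] + p[:, 2]

def ln_moments(m, s):          # mean and variance of a lognormal term
    mean = np.exp(m + s**2 / 2)
    return mean, (np.exp(s**2) - 1) * mean**2

# Moment-matched lognormal approximation for the sum (independence assumed)
m_prod, v_prod = ln_moments(mu[0] + mu[1], np.hypot(sg[0], sg[1]))
m3, v3 = ln_moments(mu[2], sg[2])
mean, var = m_prod + m3, v_prod + v3
s2 = np.log(1 + var / mean**2)
q95 = np.exp(np.log(mean) - s2 / 2 + 1.645 * np.sqrt(s2))

wilks = (p[:59, 0] * p[:59, 1] + p[:59, 2]).max()   # 95/95 upper bound
print(f"MC 95th pct {np.quantile(top_mc, 0.95):.3e}  "
      f"lognormal approx {q95:.3e}  Wilks {wilks:.3e}")
```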

16.
Convertible Bond Pricing Based on a Total Least Squares Quasi-Monte Carlo Method
Building on the traditional least squares Monte Carlo approach to convertible bond pricing, this paper reduces the estimation error of the model by using randomized Faure sequences and variance reduction techniques, and replaces ordinary least squares regression with a total least squares regression that accounts for errors in both the explanatory and the response variables, yielding a total least squares quasi-Monte Carlo pricing method for convertible bonds together with a concrete algorithm. An empirical analysis of the Yanjing convertible bond issued on October 16, 2002 compares the method with traditional Monte Carlo in terms of the bond's theoretical value, the standard error of the estimate, and computation time. The results show that the total least squares quasi-Monte Carlo method produces more reasonable values with smaller estimation error and less computation time, confirming the method's effectiveness for convertible bond pricing.
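The total least squares regression step that replaces ordinary least squares can be sketched via the classical SVD construction on synthetic data (not the Yanjing bond):

```python
# Minimal sketch of total least squares (TLS) vs. ordinary least squares:
# TLS takes the right singular vector for the smallest singular value of
# the stacked matrix [X | y], allowing for errors in both X and y.
import numpy as np

rng = np.random.default_rng(12)
n = 1000
x_true = rng.uniform(0, 1, n)
X = np.column_stack([np.ones(n),
                     x_true + rng.normal(0, 0.05, n)])   # noisy regressor
y = 1.0 + 2.0 * x_true + rng.normal(0, 0.05, n)          # noisy response

beta_ols = np.linalg.lstsq(X, y, rcond=None)[0]

_, _, vt = np.linalg.svd(np.column_stack([X, y]))
v = vt[-1]                       # singular vector of smallest singular value
beta_tls = -v[:-1] / v[-1]
print("OLS :", np.round(beta_ols, 3))
print("TLS :", np.round(beta_tls, 3))
```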

17.
We develop a new specification test for IV estimators adopting a particular second order approximation of Bekker. The new specification test compares the difference of the forward (conventional) 2SLS estimator of the coefficient of the right-hand side endogenous variable with the reverse 2SLS estimator of the same unknown parameter when the normalization is changed. Under the null hypothesis that conventional first order asymptotics provide a reliable guide to inference, the two estimates should be very similar. Our test sees whether the resulting difference in the two estimates satisfies the results of second order asymptotic theory. Essentially the same idea is applied to develop another new specification test using second-order unbiased estimators of the type first proposed by Nagar. If the forward and reverse Nagar-type estimators are not significantly different we recommend estimation by LIML, which we demonstrate is the optimal linear combination of the Nagar-type estimators (to second order). We also demonstrate the high degree of similarity for k-class estimators between the approach of Bekker and the Edgeworth expansion approach of Rothenberg. An empirical example and Monte Carlo evidence demonstrate the operation of the new specification test.
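The forward and reverse 2SLS estimates that the test compares can be computed as below on simulated data; the second-order test statistic itself is not reproduced.

```python
# Minimal sketch: forward 2SLS instruments y on x; the reverse regression
# swaps the normalization (x on y) and the reverse estimate of beta is the
# reciprocal of the fitted coefficient. Simulated data only.
import numpy as np

rng = np.random.default_rng(13)
n = 5000
z = rng.normal(size=(n, 4))
e = rng.normal(size=n)                          # common shock -> endogeneity
x = z.sum(axis=1) * 0.3 + e + rng.normal(size=n)
y = 1.5 * x + e + rng.normal(size=n)

def tsls(dep, endog, Z):
    fitted = Z @ np.linalg.lstsq(Z, endog, rcond=None)[0]   # first stage
    return (fitted @ dep) / (fitted @ endog)                # second stage

beta_forward = tsls(y, x, z)
beta_reverse = 1.0 / tsls(x, y, z)     # reverse normalization, then invert
print(f"forward 2SLS: {beta_forward:.4f}, reverse 2SLS: {beta_reverse:.4f}")
```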

18.
This paper proposes a dependence-structure model based on higher-moment volatility: the Copula-NAGARCHSK-M model. Taking into account the effects of time-varying conditional variance risk, conditional skewness risk, and conditional kurtosis risk on the marginal distributions, the model is applied to study the dependence structures between the log returns, the conditional variances, the conditional skewnesses, and the conditional kurtoses of the Shanghai Composite Index and the Shenzhen Component Index. The two stock markets are found to have similar dependence structures in log returns, conditional variances, and conditional kurtoses, whereas the dependence between the conditional skewnesses is similar but in the opposite direction.
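A minimal sketch of measuring one such dependence with a Gaussian copula, using rank-based pseudo-observations and synthetic return series in place of the two indices, follows.

```python
# Minimal sketch: Gaussian-copula dependence between two return series via
# normal scores of ranks. Synthetic heavy-tailed returns stand in for the
# index series; the Copula-NAGARCHSK-M marginals are not reproduced.
import numpy as np
from scipy.stats import norm, rankdata

rng = np.random.default_rng(14)
common = rng.standard_t(5, 1500)
r_sh = 0.8 * common + 0.6 * rng.standard_t(5, 1500)   # stand-ins for the
r_sz = 0.8 * common + 0.6 * rng.standard_t(5, 1500)   # two index series

def normal_scores(r):
    u = rankdata(r) / (len(r) + 1)      # pseudo-observations in (0, 1)
    return norm.ppf(u)

rho = np.corrcoef(normal_scores(r_sh), normal_scores(r_sz))[0, 1]
print(f"Gaussian-copula correlation ~ {rho:.3f}")
```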

19.
Monte Carlo simulation requires a pseudo-random number generator with good statistical properties. Linear congruential generators (LCGs) are the most popular and well-studied computer method for generating pseudo-random numbers used in Monte Carlo studies. High quality LCGs are available with sufficient statistical quality to satisfy all but the most demanding needs of risk assessors. However, because of the discrete, deterministic nature of LCGs, it is important to evaluate the randomness and uniformity of the specific pseudo-random number subsequences used in important risk assessments. Recommended statistical tests for uniformity and randomness include the Kolmogorov-Smirnov test, the extreme values test, and the runs test, including the runs-above-the-mean and runs-below-the-mean tests. Risk assessors should evaluate the stability of their risk model's output statistics, paying particular attention to instabilities in the mean and variance. When instabilities in the mean and variance are observed, more stable statistics, e.g., percentiles, should be reported. Analyses should be repeated using several non-overlapping pseudo-random number subsequences. More simulations than those traditionally used are also recommended for each analysis.
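A toy check of this kind, using the classic Park-Miller ("minimal standard") LCG with a Kolmogorov-Smirnov test and a runs-above/below-the-mean count, might look as follows; a serious assessment would apply the full battery the article recommends to the exact subsequence used.

```python
# Minimal sketch: a textbook LCG with the Park-Miller "minimal standard"
# parameters (a = 16807, m = 2^31 - 1), checked with a Kolmogorov-Smirnov
# test for uniformity and a simple runs-above/below-the-mean count.
import numpy as np
from scipy.stats import kstest

def lcg(seed, n, a=16807, m=2**31 - 1):
    out, x = np.empty(n), seed
    for i in range(n):
        x = (a * x) % m
        out[i] = x / m
    return out

u = lcg(seed=12345, n=10_000)
print("KS test vs U(0,1):", kstest(u, "uniform"))

above = u > 0.5
runs = 1 + np.count_nonzero(above[1:] != above[:-1])
print(f"runs above/below mean: {runs} (expected ~ {len(u)/2 + 1:.0f})")
```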

20.
Many approaches to estimation of panel models are based on an average or integrated likelihood that assigns weights to different values of the individual effects. Fixed effects, random effects, and Bayesian approaches all fall into this category. We provide a characterization of the class of weights (or priors) that produce estimators that are first-order unbiased. We show that such bias-reducing weights will depend on the data in general unless an orthogonal reparameterization or an essentially equivalent condition is available. Two intuitively appealing weighting schemes are discussed. We argue that asymptotically valid confidence intervals can be read from the posterior distribution of the common parameters when N and T grow at the same rate. Next, we show that random effects estimators are not bias reducing in general and we discuss important exceptions. Moreover, the bias depends on the Kullback–Leibler distance between the population distribution of the effects and its best approximation in the random effects family. Finally, we show that, in general, standard random effects estimation of marginal effects is inconsistent for large T, whereas the posterior mean of the marginal effect is large-T consistent, and we provide conditions for bias reduction. Some examples and Monte Carlo experiments illustrate the results.
