Similar Literature
Found 20 similar documents (search time: 15 ms)
1.
A simple and useful characterization of many predictive models is in terms of model structure and model parameters. Accordingly, uncertainties in model predictions arise from uncertainties in the values assumed by the model parameters (parameter uncertainty) and the uncertainties and errors associated with the structure of the model (model uncertainty). When assessing uncertainty one is interested in identifying, at some level of confidence, the range of possible, and then probable, values of the unknown of interest. All sources of uncertainty and variability need to be considered. Although parameter uncertainty assessment has been extensively discussed in the literature, model uncertainty is a relatively new topic of discussion by the scientific community, despite often being the major contributor to the overall uncertainty. This article describes a Bayesian methodology for the assessment of model uncertainties, where models are treated as sources of information on the unknown of interest. The general framework is then specialized for the case where models provide point estimates about a single‐valued unknown, and where information about models is available in the form of homogeneous and nonhomogeneous performance data (pairs of experimental observations and model predictions). Several example applications for physical models used in fire risk analysis are also provided.
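The idea of learning a model's error from performance data (observation/prediction pairs) and using it to correct a new prediction can be sketched as follows. This is a minimal illustration under an assumed multiplicative lognormal error model, not the article's full Bayesian framework; the data values are hypothetical.

```python
import math

def performance_posterior(pairs, new_prediction):
    """Given (observation, prediction) performance pairs, fit a
    multiplicative lognormal error model and return a bias-corrected
    median estimate and the geometric standard deviation of the error."""
    logs = [math.log(obs / pred) for obs, pred in pairs]
    mu = sum(logs) / len(logs)                        # mean log bias
    var = sum((x - mu) ** 2 for x in logs) / (len(logs) - 1)
    median = new_prediction * math.exp(mu)            # bias-corrected estimate
    gsd = math.exp(math.sqrt(var))                    # spread of the error factor
    return median, gsd

# Hypothetical model that systematically under-predicts by a factor of ~2:
pairs = [(2.1, 1.0), (4.2, 2.0), (7.8, 4.0), (16.4, 8.0)]
median, gsd = performance_posterior(pairs, new_prediction=10.0)
```

With these pairs the raw prediction of 10 is corrected upward to roughly 20, and the small scatter of the observation/prediction ratios yields a tight error distribution.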

2.
Ali Mosleh, Risk Analysis, 2012, 32(11):1888-1900
Credit risk is the potential exposure of a creditor to an obligor's failure or refusal to repay the debt in principal or interest. This potential exposure is measured in terms of the probability of default. Many models have been developed to estimate credit risk, with rating agencies dating back to the 19th century. They provide their assessment of probability of default and transition probabilities of various firms in their annual reports. Regulatory capital requirements for credit risk outlined by the Basel Committee on Banking Supervision have made it essential for banks and financial institutions to develop sophisticated models in an attempt to measure credit risk with higher accuracy. The Bayesian framework proposed in this article uses the techniques developed in physical sciences and engineering for dealing with model uncertainty and expert accuracy to obtain improved estimates of credit risk and associated uncertainties. The approach uses estimates from one or more rating agencies and incorporates their historical accuracy (past performance data) in estimating future default risk and transition probabilities. Several examples demonstrate that the proposed methodology can assess default probability with accuracy exceeding the estimates of all the individual models. Moreover, the methodology accounts for potentially significant departures from “nominal predictions” due to “upsetting events” such as the 2008 global banking crisis.
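One simple way to combine several agencies' default-probability estimates while crediting historical accuracy is a precision-weighted average on the log-odds scale. This is a hedged sketch, not the article's actual method; the estimates and accuracy weights below are hypothetical.

```python
import math

def logit(p):
    return math.log(p / (1 - p))

def inv_logit(x):
    return 1 / (1 + math.exp(-x))

def combine_agency_estimates(estimates, accuracies):
    """Combine default-probability estimates on the log-odds scale,
    weighting each agency by a (hypothetical) historical-accuracy score."""
    total = sum(accuracies)
    x = sum(a * logit(p) for p, a in zip(estimates, accuracies)) / total
    return inv_logit(x)

# Two agencies disagree; the historically more accurate one dominates:
combined = combine_agency_estimates([0.02, 0.05], accuracies=[3.0, 1.0])
```

The combined estimate lands between the two inputs but closer to the agency with the better track record.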

3.
Mixed Levels of Uncertainty in Complex Policy Models
The characterization and treatment of uncertainty poses special challenges when modeling indeterminate or complex coupled systems such as those involved in the interactions between human activity, climate, and the ecosystem. Uncertainty about model structure may become as important as, or more important than, uncertainty about parameter values. When uncertainty grows so large that prediction or optimization no longer makes sense, it may still be possible to use the model as a behavioral test bed to examine the relative robustness of alternative observational and behavioral strategies. When models must be run into portions of their phase space that are not well understood, different submodels may become unreliable at different rates. A common example involves running a time-stepped model far into the future. Several strategies can be used to deal with such situations. The probability of model failure can be reported as a function of time. Possible alternative surprises can be assigned probabilities, modeled separately, and combined. Finally, through the use of subjective judgments, one may be able to combine, and over time shift between, models, moving from more detailed to progressively simpler order-of-magnitude models, and perhaps ultimately on to simple bounding analysis.

4.
The benchmark dose (BMD) approach has gained acceptance as a valuable risk assessment tool, but risk assessors still face significant challenges associated with selecting an appropriate BMD/BMDL estimate from the results of a set of acceptable dose‐response models. Current approaches do not explicitly address model uncertainty, and there is a need to more fully inform health risk assessors in this regard. In this study, a Bayesian model averaging (BMA) BMD estimation method taking model uncertainty into account is proposed as an alternative to current BMD estimation approaches for continuous data. Using the “hybrid” method proposed by Crump, two strategies of BMA, including both “maximum likelihood estimation based” and “Markov Chain Monte Carlo based” methods, are first applied as a demonstration to calculate model averaged BMD estimates from real continuous dose‐response data. The outcomes from the example data sets examined suggest that the BMA BMD estimates have higher reliability than the estimates from the individual models with highest posterior weight, in terms of higher BMDL and smaller 90th percentile intervals. In addition, a simulation study is performed to evaluate the accuracy of the BMA BMD estimator. The results from the simulation study suggest that the BMA BMD estimates have smaller bias than the BMDs selected using other criteria. To further validate the BMA method, some technical issues, including the selection of models and the use of bootstrap methods for BMDL derivation, need further investigation over a more extensive, representative set of dose‐response data.
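The core of Bayesian model averaging for BMDs can be sketched with the common BIC approximation to posterior model weights. This is a minimal, hypothetical illustration (the per-model BMDs, log-likelihoods, and parameter counts are invented), not the study's MLE- or MCMC-based implementation.

```python
import math

def bma_bmd(bmds, log_likes, n_params, n_obs):
    """Average per-model BMD estimates using BIC-approximated posterior
    model weights, a common shortcut to full Bayesian model averaging."""
    bics = [-2 * ll + k * math.log(n_obs) for ll, k in zip(log_likes, n_params)]
    best = min(bics)
    raw = [math.exp(-0.5 * (b - best)) for b in bics]   # relative weights
    total = sum(raw)
    weights = [w / total for w in raw]
    return sum(w * b for w, b in zip(weights, bmds)), weights

# Hypothetical fits of three dose-response models to one data set:
avg_bmd, weights = bma_bmd(bmds=[1.8, 2.4, 3.1],
                           log_likes=[-52.0, -51.5, -54.0],
                           n_params=[2, 3, 4], n_obs=50)
```

The averaged BMD is pulled toward the best-supported model but retains a contribution from plausible competitors, which is the point of model averaging.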

5.
Treatment of Uncertainty in Performance Assessments for Complex Systems
When viewed at a high level, performance assessments (PAs) for complex systems involve two types of uncertainty: stochastic uncertainty, which arises because the system under study can behave in many different ways, and subjective uncertainty, which arises from a lack of knowledge about quantities required within the computational implementation of the PA. Stochastic uncertainty is typically incorporated into a PA with an experimental design based on importance sampling and leads to the final results of the PA being expressed as a complementary cumulative distribution function (CCDF). Subjective uncertainty is usually treated with Monte Carlo techniques and leads to a distribution of CCDFs. This presentation discusses the use of the Kaplan/Garrick ordered triple representation for risk in maintaining a distinction between stochastic and subjective uncertainty in PAs for complex systems. The topics discussed include (1) the definition of scenarios and the calculation of scenario probabilities and consequences, (2) the separation of subjective and stochastic uncertainties, (3) the construction of CCDFs required in comparisons with regulatory standards (e.g., 40 CFR Part 191, Subpart B for the disposal of radioactive waste), and (4) the performance of uncertainty and sensitivity studies. Results obtained in a preliminary PA for the Waste Isolation Pilot Plant, an uncertainty and sensitivity analysis of the MACCS reactor accident consequence analysis model, and the NUREG-1150 probabilistic risk assessments are used for illustration.
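The two-loop structure described above, an outer Monte Carlo loop over subjective (epistemic) quantities producing one CCDF per inner stochastic sample set, can be sketched as follows. This is a toy illustration with an invented exponential consequence model, not any of the cited PAs.

```python
import random

random.seed(1)

def ccdf(samples, grid):
    """Empirical complementary CDF: P(consequence > c) for each c in grid."""
    n = len(samples)
    return [sum(s > c for s in samples) / n for c in grid]

grid = [0.5, 1.0, 2.0, 4.0]
family = []                                   # one CCDF per subjective sample
for _ in range(200):
    rate = random.uniform(0.5, 2.0)           # subjective uncertainty in a rate
    cons = [random.expovariate(1 / rate) for _ in range(300)]  # stochastic runs
    family.append(ccdf(cons, grid))

# Pointwise mean CCDF over the distribution of CCDFs:
mean_ccdf = [sum(f[i] for f in family) / len(family) for i in range(len(grid))]
```

Percentile curves of `family` at each grid point would display the spread of CCDFs that subjective uncertainty induces, which is the quantity compared against regulatory limits.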

6.
Uncertainty Analysis in Multiplicative Models
Wout Slob, Risk Analysis, 1994, 14(4):571-576
Uncertainties are usually evaluated by Monte Carlo analysis. However, multiplicative models with lognormal uncertainties, which are ubiquitous in risk assessments, allow for a simple and quick analytical uncertainty analysis. The necessary formulae are given, which may be evaluated by a desk calculator. Two examples illustrate the method.
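The analytical shortcut for multiplicative lognormal models follows from the fact that the logarithm of a product of independent lognormal factors is normal: geometric means multiply and log-variances add. A minimal sketch (the GM/GSD values are hypothetical; the formulae here are the standard lognormal identities, not necessarily the paper's exact presentation):

```python
import math

def product_lognormal(gms, gsds):
    """Analytical propagation for R = X1 * ... * Xn with independent
    lognormal factors: ln R is normal, so log-means and log-variances add.
    Returns the geometric mean and geometric standard deviation of R."""
    mu = sum(math.log(g) for g in gms)
    var = sum(math.log(s) ** 2 for s in gsds)
    return math.exp(mu), math.exp(math.sqrt(var))

gm, gsd = product_lognormal(gms=[2.0, 5.0], gsds=[1.5, 2.0])
# 95th percentile of R (z = 1.645 on the log scale):
p95 = gm * gsd ** 1.645
```

No simulation is needed: the whole calculation is indeed small enough for a desk calculator.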

7.
Risk Analysis, 2018, 38(9):1972-1987
Weed risk assessments (WRAs) are used to identify plant invaders before introduction. Unfortunately, very few incorporate uncertainty ratings or evaluate the effects of uncertainty, a fundamental risk component. We developed a probabilistic model to quantitatively evaluate the effects of uncertainty on the outcomes of a question‐based WRA tool for the United States. In our tool, the uncertainty of each response is rated as Negligible, Low, Moderate, or High. We developed the model by specifying the likelihood of a response changing for each uncertainty rating. The simulations determine if responses change, select new responses, and sum the scores to determine the risk rating. The simulated scores reveal potential variation in WRA risk ratings. In testing with 204 species assessments, the ranges of simulated risk scores increased with greater uncertainty, and analyses for most species produced simulated risk ratings that differed from the baseline WRA rating. Still, the most frequent simulated rating matched the baseline rating for every High Risk species, and for 87% of all tested species. The remaining 13% primarily involved ambiguous Low Risk results. Changing final ratings based on the uncertainty analysis results was not justified here because accuracy (match between WRA tool and known risk rating) did not improve. Detailed analyses of three species assessments indicate that assessment uncertainty may be best reduced by obtaining evidence for unanswered questions, rather than obtaining additional evidence for questions with responses. This analysis represents an advance in interpreting WRA results, and has enhanced our regulation and management of potential weed species.
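The simulation logic described, perturbing each response with a probability tied to its uncertainty rating and re-summing the score, can be sketched as below. The change probabilities and answer scores are invented for illustration; the actual tool's values and rating thresholds are not reproduced here.

```python
import random

random.seed(7)

# Hypothetical probabilities that a response flips, by uncertainty rating:
P_CHANGE = {"Negligible": 0.01, "Low": 0.1, "Moderate": 0.3, "High": 0.6}

def simulate_scores(answers, n_sims=1000):
    """answers: list of (score_if_kept, score_if_changed, uncertainty_rating).
    Randomly flips uncertain responses and returns simulated total scores."""
    totals = []
    for _ in range(n_sims):
        total = 0
        for kept, changed, unc in answers:
            total += changed if random.random() < P_CHANGE[unc] else kept
        totals.append(total)
    return totals

answers = [(2, 0, "High"), (1, -1, "Moderate"), (3, 3, "Negligible")]
scores = simulate_scores(answers)
```

The spread of `scores` around the baseline total (here 6) shows how much the final risk rating could move under the stated uncertainty, which is what the authors use to judge rating stability.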

8.
Currently, there is a trend away from the use of single (often conservative) estimates of risk to summarize the results of risk analyses in favor of stochastic methods which provide a more complete characterization of risk. The use of such stochastic methods leads to a distribution of possible values of risk, taking into account both uncertainty and variability in all of the factors affecting risk. In this article, we propose a general framework for the analysis of uncertainty and variability for use in the commonly encountered case of multiplicative risk models, in which risk may be expressed as a product of two or more risk factors. Our analytical methods facilitate the evaluation of overall uncertainty and variability in risk assessment, as well as the contributions of individual risk factors to both uncertainty and variability, which is cumbersome using Monte Carlo methods. The use of these methods is illustrated in the analysis of potential cancer risks due to the ingestion of radon in drinking water.
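For a multiplicative model with independent lognormal factors, each factor's contribution to overall uncertainty falls out analytically: its squared log-GSD as a fraction of the total variance of ln(risk). A hedged sketch with invented GSDs (the factor names are illustrative, not the paper's actual radon inputs):

```python
import math

def logvar_shares(gsds):
    """For a product of independent lognormal factors, each factor's share
    of Var[ln(risk)] is its squared log-GSD over the sum of all squared
    log-GSDs, a simple analytical importance measure."""
    v = [math.log(s) ** 2 for s in gsds]
    total = sum(v)
    return [x / total for x in v]

# Hypothetical GSDs for, say, concentration, intake rate, and potency:
shares = logvar_shares([3.0, 1.5, 2.0])
```

Here the most dispersed factor (GSD 3.0) accounts for roughly two-thirds of the total log-variance, the kind of attribution that is awkward to extract from a plain Monte Carlo run.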

9.
Human health risk assessments use point values to develop risk estimates and thus impart a deterministic character to risk, which, by definition, is a probability phenomenon. The risk estimates are calculated based on individuals and then, using uncertainty factors (UFs), are extrapolated to the population that is characterized by variability. Regulatory agencies have recommended the quantification of the impact of variability in risk assessments through the application of probabilistic methods. In the present study, a framework that deals with the quantitative analysis of uncertainty (U) and variability (V) in target tissue dose in the population was developed by applying probabilistic analysis to physiologically-based toxicokinetic models. The mechanistic parameters that determine kinetics were described with probability density functions (PDFs). Since each PDF depicts the frequency of occurrence of all expected values of each parameter in the population, the combined effects of multiple sources of U/V were accounted for in the estimated distribution of tissue dose in the population, and a unified (adult and child) intraspecies toxicokinetic uncertainty factor UFH-TK was determined. The results show that the proposed framework accounts effectively for U/V in population toxicokinetics. The ratio of the 95th percentile to the 50th percentile of the annual average concentration of the chemical at the target tissue organ (i.e., the UFH-TK) varies with age. The ratio is equivalent to a unified intraspecies toxicokinetic UF, and it is one of the UFs by which the NOAEL can be divided to obtain the RfC/RfD. The 10-fold intraspecies UF is intended to account for uncertainty and variability in toxicokinetics (3.2x) and toxicodynamics (3.2x). This article deals exclusively with the toxicokinetic component of the UF.
The framework provides an alternative to the default methodology and is advantageous in that the evaluation of toxicokinetic variability is based on the distribution of the effective target tissue dose, rather than applied dose. It allows for the replacement of the default adult and children intraspecies UF with toxicokinetic data-derived values and provides accurate chemical-specific estimates for their magnitude. It shows that proper application of probability and toxicokinetic theories can reduce uncertainties when establishing exposure limits for specific compounds and provide better assurance that established limits are adequately protective. It contributes to the development of a probabilistic noncancer risk assessment framework and will ultimately lead to the unification of cancer and noncancer risk assessment methodologies.
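The UF definition used above, the 95th/50th percentile ratio of the tissue-dose distribution, has a closed form when that distribution is lognormal: it is simply GSD raised to the 95th-percentile z-score. A hedged sketch (the GSD of 2.0 is hypothetical, not a value from the study):

```python
from statistics import NormalDist

def uf_from_percentile_ratio(gsd):
    """If target-tissue dose in the population is lognormal with geometric
    standard deviation `gsd`, the intraspecies toxicokinetic factor defined
    as the 95th/50th percentile ratio reduces to gsd ** z95."""
    z95 = NormalDist().inv_cdf(0.95)   # ≈ 1.645
    return gsd ** z95

uf = uf_from_percentile_ratio(gsd=2.0)
```

A GSD of 2.0 gives a ratio of about 3.1, close to the default 3.2x toxicokinetic half of the 10-fold intraspecies UF, which is the kind of data-derived comparison the framework enables.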

10.
Recent work in the assessment of risk in maritime transportation systems has used simulation-based probabilistic risk assessment techniques. In the Prince William Sound and Washington State Ferries risk assessments, the studies' recommendations were backed up by estimates of their impact made using such techniques and all recommendations were implemented. However, the level of uncertainty about these estimates was not available, leaving the decisionmakers unsure whether the evidence was sufficient to assess specific risks and benefits. The first step toward assessing the impact of uncertainty in maritime risk assessments is to model the uncertainty in the simulation models used. In this article, a study of the impact of proposed ferry service expansions in San Francisco Bay is used as a case study to demonstrate the use of Bayesian simulation techniques to propagate uncertainty throughout the analysis. The conclusions drawn in the original study are shown, in this case, to be robust to the inherent uncertainties. The main intellectual merit of this work is the development of Bayesian simulation technique to model uncertainty in the assessment of maritime risk. However, Bayesian simulations have been implemented only as theoretical demonstrations. Their use in a large, complex system may be considered state of the art in the field of computational sciences.

11.
The distributional approach for uncertainty analysis in cancer risk assessment is reviewed and extended. The method considers a combination of bioassay study results, targeted experiments, and expert judgment regarding biological mechanisms to predict a probability distribution for uncertain cancer risks. Probabilities are assigned to alternative model components, including the determination of human carcinogenicity, mode of action, the dosimetry measure for exposure, the mathematical form of the dose‐response relationship, the experimental data set(s) used to fit the relationship, and the formula used for interspecies extrapolation. Alternative software platforms for implementing the method are considered, including Bayesian belief networks (BBNs) that facilitate assignment of prior probabilities, specification of relationships among model components, and identification of all output nodes on the probability tree. The method is demonstrated using the application of Evans, Sielken, and co‐workers for predicting cancer risk from formaldehyde inhalation exposure. Uncertainty distributions are derived for maximum likelihood estimate (MLE) and 95th percentile upper confidence limit (UCL) unit cancer risk estimates, and the effects of resolving selected model uncertainties on these distributions are demonstrated, considering both perfect and partial information for these model components. A method for synthesizing the results of multiple mechanistic studies is introduced, considering the assessed sensitivities and selectivities of the studies for their targeted effects. A highly simplified example is presented illustrating assessment of genotoxicity based on studies of DNA damage response caused by naphthalene and its metabolites. The approach can provide a formal mechanism for synthesizing multiple sources of information using a transparent and replicable weight‐of‐evidence procedure.
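Assigning probabilities to alternative model components and enumerating every path through the resulting probability tree can be sketched in a few lines. The two components, their branch probabilities, and the per-path unit risks below are all invented for illustration; the actual formaldehyde application uses many more components and real estimates.

```python
from itertools import product

# Hypothetical two-component tree: mode of action and dose-response form.
moa = [("genotoxic", 0.7), ("cytotoxic", 0.3)]
form = [("linear", 0.6), ("sublinear", 0.4)]
unit_risk = {("genotoxic", "linear"): 3e-5, ("genotoxic", "sublinear"): 1e-5,
             ("cytotoxic", "linear"): 5e-6, ("cytotoxic", "sublinear"): 1e-6}

# Each path through the tree contributes (risk, probability) to the
# discrete uncertainty distribution for the unit cancer risk:
dist = [(unit_risk[(m, f)], pm * pf) for (m, pm), (f, pf) in product(moa, form)]
mean_risk = sum(r * p for r, p in dist)
```

"Resolving" a component (e.g., learning the mode of action is genotoxic) amounts to conditioning: dropping the inconsistent branches and renormalizing the remaining path probabilities.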

12.
Variability and Uncertainty Meet Risk Management and Risk Communication
In the past decade, the use of probabilistic risk analysis techniques to quantitatively address variability and uncertainty in risks increased in popularity, as recommended by the 1994 National Research Council report Science and Judgment in Risk Assessment. Under the 1996 Food Quality Protection Act, for example, the U.S. EPA supported the development of tools that produce distributions of risk demonstrating the variability and/or uncertainty in the results. This paradigm shift away from the use of point estimates creates new challenges for risk managers, who now struggle with decisions about how to use distributions in decision making. The challenges for risk communication, however, have only been minimally explored. This presentation uses the case studies of variability in the risks of dying on the ground from a crashing airplane and from the deployment of motor vehicle airbags to demonstrate how better characterization of variability and uncertainty in the risk assessment leads to better risk communication. Analogies to food safety and environmental risks are also discussed. This presentation demonstrates that probabilistic risk assessment has an impact on both risk management and risk communication, and highlights remaining research issues associated with using improved sensitivity and uncertainty analyses in risk assessment.

13.
Methods for Uncertainty Analysis: A Comparative Survey
This paper presents a survey and comparative evaluation of methods which have been developed for the determination of uncertainties in accident consequences and probabilities, for use in probabilistic risk assessment. The methods considered are: analytic techniques, Monte Carlo simulation, response surface approaches, differential sensitivity techniques, and evaluation of classical statistical confidence bounds. It is concluded that only the response surface and differential sensitivity approaches are sufficiently general and flexible for use as overall methods of uncertainty analysis in probabilistic risk assessment. The other methods considered, however, are very useful in particular problems.

14.
Research on Bayesian Models for Combination Forecasting
This paper reviews the basic ideas of combination forecasting and the current state of research on Bayesian combination forecasting models. It studies unbiased combination forecasting models for different bias characteristics under multi-step Bayesian information updating, and discusses Bayesian updating models for combination weights that integrate non-sample and sample information. The work extends existing results on Bayesian combination forecasting models and points out directions for further research.

15.
A call for risk assessment approaches that better characterize and quantify uncertainty has been made by the scientific and regulatory community. This paper responds to that call by demonstrating a distributional approach that draws upon human data to derive potency estimates and to identify and quantify important sources of uncertainty. The approach is rooted in the science of decision analysis and employs an influence diagram, a decision tree, probabilistic weights, and a distribution of point estimates of carcinogenic potency. Its results estimate the likelihood of different carcinogenic risks (potencies) for a chemical under a specific scenario. For this exercise, human data on formaldehyde were employed to demonstrate the approach. Sensitivity analyses were performed to determine the relative impact of specific levels and alternatives on the potency distribution. The resulting potency estimates are compared with the results of an exercise using animal data on formaldehyde. The paper demonstrates that distributional risk assessment is readily adapted to situations in which epidemiologic data serve as the basis for potency estimates. Strengths and weaknesses of the distributional approach are discussed. Areas for further application and research are recommended.

16.
In the analysis of the risk associated with rare events that may lead to catastrophic consequences with large uncertainty, it is questionable whether the knowledge and information available for the analysis can be reflected properly by probabilities. Approaches other than purely probabilistic have been suggested, for example, using interval probabilities, possibilistic measures, or qualitative methods. In this article, we look into the problem and identify a number of issues that are foundational for its treatment. The foundational issues addressed reflect on the position that “probability is perfect” and openly consider the need for an extended framework for risk assessment that reflects the separation that practically exists between analyst and decisionmaker.

17.
This paper addresses the use of data for identifying and characterizing uncertainties in model parameters and predictions. The Bayesian Monte Carlo method is formally presented and elaborated, and applied to the analysis of the uncertainty in a predictive model for global mean sea level change. The method uses observations of output variables, made with an assumed error structure, to determine a posterior distribution of model outputs. This is used to derive a posterior distribution for the model parameters. Results demonstrate the resolution of the uncertainty that is obtained as a result of the Bayesian analysis and also indicate the key contributors to the uncertainty in the sea level rise model. While the technique is illustrated with a simple, preliminary model, the analysis provides an iterative framework for model refinement. The methodology developed in this paper provides a mechanism for the incorporation of ongoing data collection and research in decision-making for problems involving uncertain environmental change.
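Bayesian Monte Carlo, in its simplest form, weights prior parameter samples by the likelihood of an observed output under the assumed error structure. The sketch below uses a toy linear model and a normal observation error; the model, observation, and error standard deviation are all hypothetical stand-ins for the sea-level application.

```python
import math
import random

random.seed(3)

def bayesian_monte_carlo(prior_samples, model, obs, obs_sd):
    """Weight prior parameter samples by the (normal) likelihood of an
    observed output; the normalized weights define the posterior."""
    raw = [math.exp(-0.5 * ((obs - model(t)) / obs_sd) ** 2)
           for t in prior_samples]
    total = sum(raw)
    return [w / total for w in raw]

# Toy model: output = 2 * theta; observed output = 1.0, so theta ≈ 0.5.
priors = [random.uniform(0, 1) for _ in range(2000)]
w = bayesian_monte_carlo(priors, lambda t: 2 * t, obs=1.0, obs_sd=0.1)
post_mean = sum(wi * t for wi, t in zip(w, priors))
```

The same weights, applied to any model output of interest, give its posterior distribution directly, which is how the method propagates the observation back onto both parameters and predictions.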

18.
Some analysts suggest that discussing uncertainties in health risk assessments might reduce citizens' perceptions of risk and increase their respect for the risk-assessing agency. We tested this assumption with simulated news stories varying simple displays of uncertainty (e.g., a range of risk estimates, with and without graphics). Subjects from Eugene, Oregon, read one story each, and then answered a questionnaire. Three studies tested between 180 and 272 subjects each. Two focus groups obtained more detailed responses to these stories. The results suggested that (1) people are unfamiliar with uncertainty in risk assessments and in science; (2) people may recognize uncertainty when it is presented simply; (3) graphics may help people recognize uncertainty; (4) reactions to the environmental problems in the stories seemed affected less by presentation of uncertainty than by general risk attitudes and perceptions; (5) agency discussion of uncertainty in risk estimates may signal agency honesty and agency incompetence for some people; and (6) people seem to see lower risk estimates (10^-6, as opposed to 10^-3) as less credible. These findings, if confirmed, would have important implications for risk communication.

19.
Many different techniques have been proposed for performing uncertainty and sensitivity analyses on computer models for complex processes. The objective of the present study is to investigate the applicability of three widely used techniques to three computer models having large uncertainties and varying degrees of complexity in order to highlight some of the problem areas that must be addressed in actual applications. The following approaches to uncertainty and sensitivity analysis are considered: (1) response surface methodology based on input determined from a fractional factorial design; (2) Latin hypercube sampling with and without regression analysis; and (3) differential analysis. These techniques are investigated with respect to (1) ease of implementation, (2) flexibility, (3) estimation of the cumulative distribution function of the output, and (4) adaptability to different methods of sensitivity analysis. With respect to these criteria, the technique using Latin hypercube sampling and regression analysis had the best overall performance. The models used in the investigation are well documented, thus making it possible for researchers to make comparisons of other techniques with the results in this study.
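The best-performing technique in the study, Latin hypercube sampling followed by regression-based sensitivity measures, can be sketched as follows. The toy model and the covariance-based sensitivity measure are illustrative simplifications (the study's models and its regression analysis are more elaborate).

```python
import random

random.seed(5)

def latin_hypercube(n, dims):
    """Draw one stratified sample per equal-probability interval for each
    input, then shuffle each column to decouple the inputs."""
    cols = []
    for _ in range(dims):
        col = [(i + random.random()) / n for i in range(n)]
        random.shuffle(col)
        cols.append(col)
    return list(zip(*cols))

def model(x1, x2):
    return 3 * x1 + x2          # toy model: x1 matters three times more

def cov(a, b):
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    return sum((x - ma) * (y - mb) for x, y in zip(a, b)) / len(a)

sample = latin_hypercube(50, 2)
ys = [model(x1, x2) for x1, x2 in sample]
x1s = [s[0] for s in sample]
x2s = [s[1] for s in sample]
s1, s2 = cov(x1s, ys), cov(x2s, ys)   # crude regression-style sensitivities
```

Even with only 50 runs, the stratification of LHS covers each input's range evenly, and the covariance measures correctly rank `x1` as the dominant input.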

20.
Standard statistical methods understate the uncertainty one should attach to effect estimates obtained from observational data. Among the methods used to address this problem are sensitivity analysis, Monte Carlo risk analysis (MCRA), and Bayesian uncertainty assessment. Estimates from MCRAs have been presented as if they were valid frequentist or Bayesian results, but examples show that they need not be either in actual applications. It is concluded that both sensitivity analyses and MCRA should begin with the same type of prior specification effort as Bayesian analysis.


Copyright © Beijing Qinyun Technology Development Co., Ltd. (北京勤云科技发展有限公司), 京ICP备09084417号