Similar Literature
1.
Prediction of natural disasters and their consequences is difficult due to the uncertainties and complexity of multiple related factors. This article explores the use of domain knowledge and spatial data to construct a Bayesian network (BN) that facilitates the integration of multiple factors and quantification of uncertainties within a consistent system for assessment of catastrophic risk. A BN is chosen due to its advantages such as merging multiple source data and domain knowledge in a consistent system, learning from the data set, inference with missing data, and support of decision making. A key advantage of our methodology is the combination of domain knowledge and learning from the data to construct a robust network. To improve the assessment, we employ spatial data analysis and data mining to extend the training data set, select risk factors, and fine‐tune the network. Another major advantage of our methodology is the integration of an optimal discretizer, informative feature selector, learners, search strategies for local topologies, and Bayesian model averaging. These techniques all contribute to a robust prediction of risk probability of natural disasters. In the flood disaster case study, our methodology achieved a better probability of detection of high risk, a better precision, and a better ROC area compared with other methods, using both cross‐validation and prediction of catastrophic risk based on historic data. Our results suggest that a BN is a good alternative for risk assessment and a useful decision tool in the management of catastrophic risk.
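The core BN computation behind this kind of assessment, combining conditional probability tables and summing out unobserved factors, can be sketched in a few lines of plain Python. The structure, variable names, and probabilities below are invented for illustration and are not from the article:

```python
# Toy discrete Bayesian network for flood risk, inference by enumeration.
# Structure: Rainfall -> FloodRisk <- LowElevation. All numbers hypothetical.

p_low_elev = 0.4  # P(LowElevation = True)

# P(FloodRisk = high | Rainfall heavy?, LowElevation?)
p_high = {
    (True, True): 0.90,
    (True, False): 0.50,
    (False, True): 0.20,
    (False, False): 0.05,
}

def p_flood_high_given_rain(rain_heavy):
    """P(FloodRisk=high | Rainfall) by summing out the elevation variable."""
    return sum(
        p_high[(rain_heavy, elev)] * (p_low_elev if elev else 1.0 - p_low_elev)
        for elev in (True, False)
    )

posterior_heavy = p_flood_high_given_rain(True)   # 0.9*0.4 + 0.5*0.6 = 0.66
posterior_light = p_flood_high_given_rain(False)  # 0.2*0.4 + 0.05*0.6 = 0.11
```

Real applications would learn these conditional probability tables from spatial data rather than fixing them by hand, but the inference step is the same marginalization.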

2.
We conducted a regional‐scale integrated ecological and human health risk assessment by applying the relative risk model with Bayesian networks (BN‐RRM) to a case study of the South River, Virginia mercury‐contaminated site. Risk to four ecological services of the South River (human health, water quality, recreation, and the recreational fishery) was evaluated using a multiple stressor–multiple endpoint approach. These four ecological services were selected as endpoints based on stakeholder feedback and prioritized management goals for the river. The BN‐RRM approach allowed for the calculation of relative risk to 14 biotic, human health, recreation, and water quality endpoints from chemical and ecological stressors in five risk regions of the South River. Results indicated that water quality and the recreational fishery were the ecological services at highest risk in the South River. Human health risk for users of the South River was low relative to the risk to other endpoints. Risk to recreation in the South River was moderate with little spatial variability among the five risk regions. Sensitivity and uncertainty analysis identified stressors and other parameters that influence risk for each endpoint in each risk region. This research demonstrates a probabilistic approach to integrated ecological and human health risk assessment that considers the effects of chemical and ecological stressors across the landscape.

3.
Quantitative risk assessments for physical, chemical, biological, occupational, or environmental agents rely on scientific studies to support their conclusions. These studies often include relatively few observations, and, as a result, models used to characterize the risk may include large amounts of uncertainty. The motivation, development, and assessment of new methods for risk assessment are facilitated by the availability of a set of experimental studies that span a range of dose‐response patterns that are observed in practice. We describe construction of such a historical database focusing on quantal data in chemical risk assessment, and we employ this database to develop priors in Bayesian analyses. The database is assembled from a variety of existing toxicological data sources and contains 733 separate quantal dose‐response data sets. As an illustration of the database's use, prior distributions for individual model parameters in Bayesian dose‐response analysis are constructed. Results indicate that including prior information based on curated historical data in quantitative risk assessments may help stabilize eventual point estimates, producing dose‐response functions that are more stable and precisely estimated. These in turn produce potency estimates that share the same benefit. We are confident that quantitative risk analysts will find many other applications and issues to explore using this database.

4.
Dose‐response models are the essential link between exposure assessment and computed risk values in quantitative microbial risk assessment, yet the uncertainty that is inherent to computed risks because the dose‐response model parameters are estimated using limited epidemiological data is rarely quantified. Second‐order risk characterization approaches incorporating uncertainty in dose‐response model parameters can provide more complete information to decisionmakers by separating variability and uncertainty to quantify the uncertainty in computed risks. Therefore, the objective of this work is to develop procedures to sample from posterior distributions describing uncertainty in the parameters of exponential and beta‐Poisson dose‐response models using Bayes's theorem and Markov Chain Monte Carlo (in OpenBUGS). The theoretical origins of the beta‐Poisson dose‐response model are used to identify a decomposed version of the model that enables Bayesian analysis without the need to evaluate Kummer confluent hypergeometric functions. Herein, it is also established that the beta distribution in the beta‐Poisson dose‐response model cannot address variation among individual pathogens, criteria to validate use of the conventional approximation to the beta‐Poisson model are proposed, and simple algorithms to evaluate actual beta‐Poisson probabilities of infection are investigated. The developed MCMC procedures are applied to analysis of a case study data set, and it is demonstrated that an important region of the posterior distribution of the beta‐Poisson dose‐response model parameters is attributable to the absence of low‐dose data. This region includes beta‐Poisson models for which the conventional approximation is especially invalid and in which many beta distributions have an extreme shape with questionable plausibility.
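The distinction between the exact beta‐Poisson model (which requires the Kummer confluent hypergeometric function 1F1) and its conventional approximation can be illustrated numerically. The parameter values below are illustrative placeholders, not the case-study estimates from the article:

```python
import numpy as np
from scipy.special import hyp1f1  # Kummer confluent hypergeometric function

# Illustrative beta-Poisson parameters (not the article's fitted values).
alpha, beta = 0.145, 7.59

def p_exact(dose):
    """Exact beta-Poisson probability of infection: 1 - 1F1(a, a+b, -dose)."""
    return 1.0 - hyp1f1(alpha, alpha + beta, -dose)

def p_approx(dose):
    """Conventional approximation, valid roughly when beta >> 1 and beta >> alpha."""
    return 1.0 - (1.0 + dose / beta) ** (-alpha)

doses = np.array([0.1, 1.0, 10.0])
exact = np.array([p_exact(d) for d in doses])
approx = p_approx(doses)
```

At doses comparable to or above beta the two curves separate, which is one practical reason the article's validation criteria for the approximation matter.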

5.
The identification of socially vulnerable counties and regions and the factors contributing to social vulnerability are crucial for effective disaster risk management. Significant advances have been made in the study of social vulnerability over the past two decades, but we still know little regarding China's societal vulnerability profiles, especially at the county level. This study investigates the county‐level spatial and temporal patterns in social vulnerability in China from 1980 to 2010. Based on China's four most recent population censuses of 2,361 counties and their corresponding socioeconomic data, a social vulnerability index for each county was created using factor analysis. Exploratory spatial data analysis, including global and local autocorrelations, was applied to reveal the spatial patterns of county‐level social vulnerability. The results demonstrate that the dynamic characteristics of China's county‐level social vulnerability are notably distinct, and the dominant contributors to societal vulnerability for all of the years studied were rural character, development (urbanization), and economic status. The spatial clustering patterns of social vulnerability to natural disasters in China exhibited a gathering–scattering–gathering pattern over time. Further investigations indicate that many counties in the eastern coastal area of China are experiencing a detectable increase in social vulnerability, whereas the societal vulnerability of many counties in the western and northern areas of China has significantly decreased over the past three decades. These findings will provide policymakers with a sound scientific basis for disaster prevention and mitigation decisions.

6.
The Monte Carlo (MC) simulation approach is traditionally used in food safety risk assessment to study quantitative microbial risk assessment (QMRA) models. When experimental data are available, performing Bayesian inference is a good alternative approach that allows backward calculation in a stochastic QMRA model to update the experts’ knowledge about the microbial dynamics of a given food‐borne pathogen. In this article, we propose a complex example where Bayesian inference is applied to a high‐dimensional second‐order QMRA model. The case study is a farm‐to‐fork QMRA model considering genetic diversity of Bacillus cereus in a cooked, pasteurized, and chilled courgette purée. Experimental data are Bacillus cereus concentrations measured in packages of courgette purées stored at different time‐temperature profiles after pasteurization. To perform a Bayesian inference, we first built an augmented Bayesian network by linking a second‐order QMRA model to the available contamination data. We then ran a Markov chain Monte Carlo (MCMC) algorithm to update all the unknown concentrations and unknown quantities of the augmented model. About 25% of the prior beliefs are strongly updated, leading to a reduction in uncertainty. Some updates interestingly question the QMRA model.

7.
We study decision problems in which consequences of the various alternative actions depend on states determined by a generative mechanism representing some natural or social phenomenon. Model uncertainty arises because decision makers may not know this mechanism. Two types of uncertainty result, a state uncertainty within models and a model uncertainty across them. We discuss some two‐stage static decision criteria proposed in the literature that address state uncertainty in the first stage and model uncertainty in the second (by considering subjective probabilities over models). We consider two approaches to the Ellsberg‐type phenomena characteristic of such decision problems: a Bayesian approach based on the distinction between subjective attitudes toward the two kinds of uncertainty; and a non‐Bayesian approach that permits multiple subjective probabilities. Several applications are used to illustrate concepts as they are introduced.

8.
Risk Analysis, 2018, 38(1): 163–176
The U.S. Environmental Protection Agency (EPA) uses health risk assessment to help inform its decisions in setting national ambient air quality standards (NAAQS). EPA's standard approach is to make epidemiologically‐based risk estimates based on a single statistical model selected from the scientific literature, called the “core” model. The uncertainty presented for “core” risk estimates reflects only the statistical uncertainty associated with that one model's concentration‐response function parameter estimate(s). However, epidemiologically‐based risk estimates are also subject to “model uncertainty,” which is a lack of knowledge about which of many plausible model specifications and data sets best reflects the true relationship between health and ambient pollutant concentrations. In 2002, a National Academies of Sciences (NAS) committee recommended that model uncertainty be integrated into EPA's standard risk analysis approach. This article discusses how model uncertainty can be taken into account with an integrated uncertainty analysis (IUA) of health risk estimates. It provides an illustrative numerical example based on risk of premature death from respiratory mortality due to long‐term exposures to ambient ozone, which is a health risk considered in the 2015 ozone NAAQS decision. This example demonstrates that use of IUA to quantitatively incorporate key model uncertainties into risk estimates produces a substantially altered understanding of the potential public health gain of a NAAQS policy decision, and that IUA can also produce more helpful insights to guide that decision, such as evidence of decreasing incremental health gains from progressive tightening of a NAAQS.
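The essential mechanics of an integrated uncertainty analysis, treating each plausible concentration‐response model as one component of a probability‐weighted mixture rather than relying on a single "core" model, can be sketched by Monte Carlo. All means, standard deviations, and model weights below are hypothetical, not EPA values:

```python
import numpy as np

rng = np.random.default_rng(42)

# Each plausible model yields a normal uncertainty distribution for avoided
# premature deaths under a tighter standard; weights are subjective model
# probabilities. Every number here is illustrative.
models = {
    "core":  {"mean": 1000.0, "sd": 150.0, "weight": 0.5},
    "alt_a": {"mean":  600.0, "sd": 200.0, "weight": 0.3},
    "alt_b": {"mean":  200.0, "sd": 100.0, "weight": 0.2},
}

names = list(models)
means = np.array([models[m]["mean"] for m in names])
sds = np.array([models[m]["sd"] for m in names])
weights = np.array([models[m]["weight"] for m in names])

n = 100_000
picks = rng.choice(len(names), size=n, p=weights)  # sample a model per draw,
draws = rng.normal(means[picks], sds[picks])       # then a risk estimate

mean_risk = draws.mean()                       # mixture mean: 720 here
lo, hi = np.percentile(draws, [2.5, 97.5])     # wider than any one model's CI
```

The resulting interval reflects both within‐model statistical uncertainty and across‐model uncertainty, which is the point of the IUA approach.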

9.
Detailed spatial representation of socioeconomic exposure and the related vulnerability to natural hazards has the potential to improve the quality and reliability of risk assessment outputs. We apply a spatially weighted dasymetric approach based on multiple ancillary data to downscale important socioeconomic variables and produce a grid data set for Italy that contains multilayered information about physical exposure, population, gross domestic product, and social vulnerability. We test the performances of our dasymetric approach compared to other spatial interpolation methods. Next, we combine the grid data set with flood hazard estimates to exemplify an application for the purpose of risk assessment.

10.
Pesticide risk assessment for food products involves combining information from consumption and concentration data sets to estimate a distribution for the pesticide intake in a human population. Using this distribution one can obtain probabilities of individuals exceeding specified levels of pesticide intake. In this article, we present a probabilistic, Bayesian approach to modeling the daily consumptions of the pesticide Iprodione through multiple food products. Modeling data on food consumption and pesticide concentration poses a variety of problems, such as the large proportions of consumptions and concentrations that are recorded as zero, and correlation between the consumptions of different foods. We consider daily food consumption data from the Netherlands National Food Consumption Survey and concentration data collected by the Netherlands Ministry of Agriculture. We develop a multivariate latent‐Gaussian model for the consumption data that allows for correlated intakes between products. For the concentration data, we propose a univariate latent‐t model. We then combine predicted consumptions and concentrations from these models to obtain a distribution for individual daily Iprodione exposure. The latent‐variable models allow for both skewness and large numbers of zeros in the consumption and concentration data. The use of a probabilistic approach is intended to yield more robust estimates of high percentiles of the exposure distribution than an empirical approach. Bayesian inference is used to facilitate the treatment of data with a complex structure.
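The central modeling challenge here, exposure distributions dominated by zeros with a skewed positive tail, can be illustrated with a simple zero‐inflated Monte Carlo simulation for a single food. All parameter values are hypothetical, not survey or ministry estimates:

```python
import numpy as np

rng = np.random.default_rng(7)
n = 200_000  # simulated person-days

# Illustrative zero-inflated model: consumption is zero on most days, and
# most samples carry no detectable residue; positive amounts are lognormal.
p_consume = 0.2   # chance the product is eaten that day (hypothetical)
p_detect = 0.3    # chance the product carries a detectable residue (hypothetical)

consumption = rng.binomial(1, p_consume, n) * rng.lognormal(4.0, 0.6, n)    # g/day
concentration = rng.binomial(1, p_detect, n) * rng.lognormal(-1.0, 0.8, n)  # mg/kg

exposure = consumption * concentration / 1000.0  # mg/day

p_zero = np.mean(exposure == 0.0)       # ~1 - 0.2*0.3 = 0.94 of days are zero
p999 = np.percentile(exposure, 99.9)    # high percentile driven by the tail
```

The latent‐Gaussian/latent‐t approach in the article models the zeros and the positive tail jointly (and allows correlation across foods), but the shape of the problem is the same: the median exposure is zero while the regulatory interest lies in percentiles like the 99.9th.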

11.
Survival models are developed to predict response and time‐to‐response for mortality in rabbits following exposures to single or multiple aerosol doses of Bacillus anthracis spores. Hazard function models were developed for a multiple‐dose data set to predict the probability of death through specifying functions of dose response and the time between exposure and the time‐to‐death (TTD). Among the models developed, the best‐fitting survival model (baseline model) is an exponential dose–response model with a Weibull TTD distribution. Alternative models assessed use different underlying dose–response functions and use the assumption that, in a multiple‐dose scenario, earlier doses affect the hazard functions of each subsequent dose. In addition, published mechanistic models are analyzed and compared with models developed in this article. None of the alternative models that were assessed provided a statistically significant improvement in fit over the baseline model. The general approach utilizes simple empirical data analysis to develop parsimonious models with limited reliance on mechanistic assumptions. The baseline model predicts TTDs consistent with reported results from three independent high‐dose rabbit data sets. More accurate survival models depend upon future development of dose–response data sets specifically designed to assess potential multiple‐dose effects on response and time‐to‐response. The process used in this article to develop the best‐fitting survival model for exposure of rabbits to multiple aerosol doses of B. anthracis spores should have broad applicability to other host–pathogen systems and dosing schedules because the empirical modeling approach is based upon pathogen‐specific empirically‐derived parameters.
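The baseline model structure named in the abstract, an exponential dose–response for whether death occurs combined with a Weibull distribution for when it occurs, can be sketched directly. The parameter values below are invented for illustration, not the fitted rabbit values:

```python
import math

# Baseline-style survival model sketch: exponential dose-response for the
# probability of death, Weibull time-to-death among responders.
# All parameter values are hypothetical.
k = 2e-6                  # exponential dose-response rate, per inhaled spore
shape, scale = 2.0, 4.0   # Weibull TTD parameters, time in days

def p_death(dose):
    """Exponential dose-response: probability of eventual death."""
    return 1.0 - math.exp(-k * dose)

def p_death_by(dose, t_days):
    """P(death by time t) = P(death) * F_Weibull(t)."""
    f_t = 1.0 - math.exp(-((t_days / scale) ** shape))
    return p_death(dose) * f_t

risk_total = p_death(1e6)          # 1 - e^-2 for this k and dose
risk_7d = p_death_by(1e6, 7.0)     # portion of that risk realized by day 7
```

Separating the response probability from the time‐to‐response distribution is what lets such a model be compared against hazard‐function alternatives in which earlier doses modify later hazards.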

12.
This article discusses how an analyst's or expert's beliefs on the credibility and quality of models can be assessed and incorporated into the uncertainty assessment of an unknown of interest. The proposed methodology is a specialization of the Bayesian framework for the assessment of model uncertainty presented in an earlier paper. This formalism treats models as sources of information in assessing the uncertainty of an unknown, and it allows the use of predictions from multiple models as well as experimental validation data about the models’ performances. In this article, the methodology is extended to incorporate additional types of information about the model, namely, subjective information in terms of credibility of the model and its applicability when it is used outside its intended domain of application. An example in the context of fire risk modeling is also provided.

13.
To better understand the risk of exposure to food allergens, food challenge studies are designed to slowly increase the dose of an allergen delivered to allergic individuals until an objective reaction occurs. These dose‐to‐failure studies are used to determine acceptable intake levels and are analyzed using parametric failure time models. Though these models can provide estimates of the survival curve and risk, their parametric form may misrepresent the survival function for doses of interest. Different models that describe the data similarly may produce different dose‐to‐failure estimates. Motivated by predictive inference, we developed a Bayesian approach to combine survival estimates based on posterior predictive stacking, where the weights are formed to maximize posterior predictive accuracy. The approach defines a model space that is much larger than traditional parametric failure time modeling approaches. In our case, we use the approach to include random effects accounting for frailty components. The methodology is investigated in simulation, and is used to estimate allergic population eliciting doses for multiple food allergens.
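The stacking idea, weighting candidate models to maximize predictive accuracy on held‐out data rather than picking a single winner, can be shown in miniature with two candidate densities and a grid search over the weight. The densities, parameters, and held‐out values are illustrative, not the article's survival models or allergen data:

```python
import numpy as np

# Minimal stacking sketch: pick the mixture weight on two candidate
# predictive densities that maximizes the log predictive score of
# held-out observations. Everything here is hypothetical.
held_out = np.array([0.8, 1.1, 2.3, 0.5, 1.7])  # hypothetical eliciting doses

def dens_exponential(x, rate=1.0):
    return rate * np.exp(-rate * x)

def dens_lognormal(x, mu=0.0, sigma=0.5):
    return (np.exp(-((np.log(x) - mu) ** 2) / (2 * sigma ** 2))
            / (x * sigma * np.sqrt(2 * np.pi)))

f1 = dens_exponential(held_out)
f2 = dens_lognormal(held_out)

grid = np.linspace(0.0, 1.0, 1001)
scores = [float(np.sum(np.log(w * f1 + (1 - w) * f2))) for w in grid]
w_best = float(grid[int(np.argmax(scores))])
```

By construction the stacked score is at least as good as either single model's score (the endpoints w = 0 and w = 1 are in the grid), which is the basic guarantee that motivates stacking over model selection.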

14.
Li R, Englehardt JD, Li X. Risk Analysis, 2012, 32(2): 345–359
Multivariate probability distributions, such as may be used for mixture dose‐response assessment, are typically highly parameterized and difficult to fit to available data. However, such distributions may be useful in analyzing the large electronic data sets becoming available, such as dose‐response biomarker and genetic information. In this article, a new two‐stage computational approach is introduced for estimating multivariate distributions and addressing parameter uncertainty. The proposed first stage comprises a gradient Markov chain Monte Carlo (GMCMC) technique to find Bayesian posterior mode estimates (PMEs) of parameters, equivalent to maximum likelihood estimates (MLEs) in the absence of subjective information. In the second stage, these estimates are used to initialize a Markov chain Monte Carlo (MCMC) simulation, replacing the conventional burn‐in period to allow convergent simulation of the full joint Bayesian posterior distribution and the corresponding unconditional multivariate distribution (not conditional on uncertain parameter values). When the distribution of parameter uncertainty is such a Bayesian posterior, the unconditional distribution is termed predictive. The method is demonstrated by finding conditional and unconditional versions of the recently proposed emergent dose‐response function (DRF). Results are shown for the five‐parameter common‐mode and seven‐parameter dissimilar‐mode models, based on published data for eight benzene–toluene dose pairs. The common mode conditional DRF is obtained with a 21‐fold reduction in data requirement versus MCMC. Example common‐mode unconditional DRFs are then found using synthetic data, showing a 71% reduction in required data. The approach is further demonstrated for a PCB 126‐PCB 153 mixture. Applicability is analyzed and discussed. Matlab® computer programs are provided.

15.
We analyze the risk of severe fatal accidents, defined as those causing five or more fatalities, for nine different activities covering the entire oil chain. Included are exploration and extraction, transport by different modes, refining and final end use in power plants, heating or gas stations. The risks are quantified separately for OECD and non‐OECD countries and trends are calculated. Risk is analyzed by employing a Bayesian hierarchical model yielding analytical functions for both frequency (Poisson) and severity distributions (Generalized Pareto) as well as frequency trends. This approach addresses a key problem in risk estimation—namely the scarcity of data resulting in high uncertainties in particular for the risk of extreme events, where the risk is extrapolated beyond the historically most severe accidents. Bayesian data analysis allows the pooling of information from different data sets covering, for example, the different stages of the energy chains or different modes of transportation. In addition, it also inherently delivers a measure of uncertainty. This approach provides a framework, which comprehensively covers risk throughout the oil chain, allowing the allocation of risk in sustainability assessments. It also permits the progressive addition of new data to refine the risk estimates. Frequency, severity, and trends show substantial differences between the activities, emphasizing the need for detailed risk analysis.
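The frequency/severity structure named here, Poisson counts of severe accidents with Generalized Pareto severities above a threshold, can be simulated directly to estimate exceedance probabilities for events beyond the historical record. All parameter values below are illustrative, not the study's fitted estimates:

```python
import numpy as np
from scipy.stats import genpareto, poisson

rng = np.random.default_rng(123)

# Compound Poisson / Generalized Pareto sketch of severe-accident risk.
# Frequency and severity parameters are hypothetical.
lam = 3.0                        # expected severe accidents per year
xi, threshold, sigma = 0.4, 5.0, 12.0  # GPD shape, threshold (5 fatalities), scale

years = 10_000
counts = poisson.rvs(lam, size=years, random_state=rng)

annual_worst = np.zeros(years)
for i, c in enumerate(counts):
    if c > 0:
        sev = threshold + genpareto.rvs(xi, scale=sigma, size=c, random_state=rng)
        annual_worst[i] = sev.max()

# Simulated probability that the worst accident in a year exceeds 100 fatalities,
# i.e., an extrapolation into the extreme tail.
p_exceed_100 = float(np.mean(annual_worst > 100.0))
```

In the Bayesian hierarchical version, lam, xi, and sigma carry posterior distributions pooled across data sets, so the exceedance probability comes with an uncertainty measure rather than a single number.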

16.
Mitchell J. Small. Risk Analysis, 2011, 31(10): 1561–1575
A methodology is presented for assessing the information value of an additional dosage experiment in existing bioassay studies. The analysis demonstrates the potential reduction in the uncertainty of toxicity metrics derived from expanded studies, providing insights for future studies. Bayesian methods are used to fit alternative dose‐response models using Markov chain Monte Carlo (MCMC) simulation for parameter estimation and Bayesian model averaging (BMA) is used to compare and combine the alternative models. BMA predictions for benchmark dose (BMD) are developed, with uncertainty in these predictions used to derive the lower bound BMDL. The MCMC and BMA results provide a basis for a subsequent Monte Carlo analysis that backcasts the dosage where an additional test group would have been most beneficial in reducing the uncertainty in the BMD prediction, along with the magnitude of the expected uncertainty reduction. Uncertainty reductions are measured in terms of reduced interval widths of predicted BMD values and increases in BMDL values that occur as a result of this reduced uncertainty. The methodology is illustrated using two existing data sets for TCDD carcinogenicity, fitted with two alternative dose‐response models (logistic and quantal‐linear). The example shows that an additional dose at a relatively high value would have been most effective for reducing the uncertainty in BMA BMD estimates, with predicted reductions in the widths of uncertainty intervals of approximately 30%, and expected increases in BMDL values of 5–10%. The results demonstrate that dose selection for studies that subsequently inform dose‐response models can benefit from consideration of how these models will be fit, combined, and interpreted.
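The model-combination step can be illustrated with the common BIC approximation to BMA weights applied to two candidate dose‐response models. The model names (logistic and quantal‐linear) follow the abstract, but the log‐likelihoods, parameter counts, sample size, and per‐model BMDs below are hypothetical:

```python
import numpy as np

# BIC-approximated Bayesian model averaging of benchmark-dose estimates.
# All numeric inputs are hypothetical placeholders.
n_obs = 50
models = {
    "logistic":       {"loglik": -20.1, "k": 2, "bmd": 1.8},
    "quantal_linear": {"loglik": -21.4, "k": 1, "bmd": 2.6},
}

# BIC = -2*loglik + k*ln(n); posterior weight ~ exp(-BIC/2), normalized.
bic = {m: -2.0 * v["loglik"] + v["k"] * np.log(n_obs) for m, v in models.items()}
b_min = min(bic.values())
raw = {m: float(np.exp(-0.5 * (b - b_min))) for m, b in bic.items()}
total = sum(raw.values())
weights = {m: r / total for m, r in raw.items()}

# Model-averaged BMD: a weighted combination rather than a single model's value.
bmd_bma = sum(weights[m] * models[m]["bmd"] for m in models)
```

The full approach in the article averages entire MCMC posterior predictive distributions, not just point BMDs, but the weight construction is the same idea.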

17.
The development of catastrophe models in recent years allows for assessment of the flood hazard much more effectively than when the federally run National Flood Insurance Program (NFIP) was created in 1968. We propose and then demonstrate a methodological approach to determine pure premiums based on the entire distribution of possible flood events. We apply hazard, exposure, and vulnerability analyses to a sample of 300,000 single‐family residences in two counties in Texas (Travis and Galveston) using state‐of‐the‐art flood catastrophe models. Even in zones of similar flood risk classification by FEMA, there is substantial variation in exposure between coastal and inland flood risk. For instance, homes in the designated moderate‐risk X500/B zones in Galveston are exposed to a flood risk on average 2.5 times greater than residences in X500/B zones in Travis. The results also show very similar average annual loss (corrected for exposure) for a number of residences despite their being in different FEMA flood zones. We also find significant storm‐surge exposure outside of the FEMA designated storm‐surge risk zones. Taken together, these findings highlight the importance of a microanalysis of flood exposure. The process of aggregating risk at a flood zone level—as currently undertaken by FEMA—provides a false sense of uniformity. As our analysis indicates, the technology to delineate the flood risks exists today.
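The pure-premium calculation the abstract refers to is, at its core, an average annual loss (AAL) computed from the distribution of possible flood events for a specific home. The event table below is a deliberately simplified, hypothetical stand-in for catastrophe-model output:

```python
# Pure premium as average annual loss (AAL) from a simplified event-loss table.
# Return periods and losses are hypothetical, not catastrophe-model output.
events = [
    # (annual probability of the event scenario, loss to the home in $)
    (0.10,   2_000),   # ~10-year flood
    (0.02,  15_000),   # ~50-year flood
    (0.01,  40_000),   # ~100-year flood
    (0.002, 90_000),   # ~500-year flood
]

# Treating the rows as mutually exclusive scenarios, the pure premium is the
# probability-weighted expected loss per year: here 200 + 300 + 400 + 180.
pure_premium = sum(p * loss for p, loss in events)
```

A real catastrophe model integrates over the full exceedance-probability curve per structure rather than a handful of scenarios, which is precisely what allows premiums to vary within a single FEMA zone.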

18.
The benchmark dose (BMD) approach has gained acceptance as a valuable risk assessment tool, but risk assessors still face significant challenges associated with selecting an appropriate BMD/BMDL estimate from the results of a set of acceptable dose‐response models. Current approaches do not explicitly address model uncertainty, and there is an existing need to more fully inform health risk assessors in this regard. In this study, a Bayesian model averaging (BMA) BMD estimation method taking model uncertainty into account is proposed as an alternative to current BMD estimation approaches for continuous data. Using the “hybrid” method proposed by Crump, two strategies of BMA, including both “maximum likelihood estimation based” and “Markov Chain Monte Carlo based” methods, are first applied as a demonstration to calculate model averaged BMD estimates from real continuous dose‐response data. The outcomes from the example data sets examined suggest that the BMA BMD estimates have higher reliability than the estimates from the individual models with highest posterior weight in terms of higher BMDL and smaller 90th percentile intervals. In addition, a simulation study is performed to evaluate the accuracy of the BMA BMD estimator. The results from the simulation study recommend that the BMA BMD estimates have smaller bias than the BMDs selected using other criteria. To further validate the BMA method, some technical issues, including the selection of models and the use of bootstrap methods for BMDL derivation, need further investigation over a more extensive, representative set of dose‐response data.

19.
Quantitative risk analysis is being extensively employed to support policymakers and provides a strong conceptual framework for evaluating decision alternatives under uncertainty. Many problems involving environmental risks are, however, of a spatial nature, i.e., containing spatial impacts, spatial vulnerabilities, and spatial risk‐mitigation alternatives. Recent developments in multicriteria spatial analysis have enabled the assessment and aggregation of multiple impacts, supporting policymakers in spatial evaluation problems. However, recent attempts to conduct spatial multicriteria risk analysis have generally been weakly conceptualized, without adequate roots in quantitative risk analysis. Moreover, assessments of spatial risk often neglect the multidimensional nature of spatial impacts (e.g., social, economic, human) that are typically occurring in such decision problems. The aim of this article is therefore to suggest a conceptual quantitative framework for environmental multicriteria spatial risk analysis based on expected multi‐attribute utility theory. The framework proposes: (i) the formal assessment of multiple spatial impacts; (ii) the aggregation of these multiple spatial impacts; (iii) the assessment of spatial vulnerabilities and probabilities of occurrence of adverse events; (iv) the computation of spatial risks; (v) the assessment of spatial risk mitigation alternatives; and (vi) the design and comparison of spatial risk mitigation alternatives (e.g., reductions of vulnerabilities and/or impacts). We illustrate the use of the framework in practice with a case study based on a flood‐prone area in northern Italy.

20.
There is a need to advance our ability to characterize the risk of inhalational anthrax following a low‐dose exposure. The exposure scenario most often considered is a single exposure that occurs during an attack. However, long‐term daily low‐dose exposures also represent a realistic exposure scenario, such as what may be encountered by people occupying areas for longer periods. Given this, the objective of the current work was to model two rabbit inhalational anthrax dose‐response data sets. One data set was from single exposures to aerosolized Bacillus anthracis Ames spores. The second data set exposed rabbits repeatedly to aerosols of B. anthracis Ames spores. For the multiple exposure data, the cumulative dose (i.e., the sum of the individual daily doses) was used for the model. Lethality was the response for both. Modeling was performed using Benchmark Dose Software evaluating six models: logprobit, loglogistic, Weibull, exponential, gamma, and dichotomous‐Hill. All models produced acceptable fits to either data set. The exponential model was identified as the best fitting model for both data sets. Statistical tests suggested there was no significant difference between the single exposure exponential model results and the multiple exposure exponential model results, which suggests the risk of disease is similar between the two data sets. The dose expected to cause 10% lethality was 15,600 inhaled spores and 18,200 inhaled spores for the single exposure and multiple exposure exponential dose‐response models, respectively, and the 95% lower confidence intervals were 9,800 inhaled spores and 9,200 inhaled spores, respectively.
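The exponential dose‐response model named as best fitting has the form P(d) = 1 − exp(−k·d), so the reported 10% lethality dose pins down the rate parameter. Using the article's single‐exposure value (ED10 = 15,600 inhaled spores); the extra dose probed below is an illustrative extrapolation, not a reported result:

```python
import math

# Back out the exponential dose-response rate from the reported ED10 of
# 15,600 inhaled spores (single-exposure fit), then evaluate the model.
ed10 = 15_600.0
k = -math.log(1.0 - 0.10) / ed10   # solves 1 - exp(-k * ed10) = 0.10

def p_lethal(dose):
    """Exponential dose-response model: P(d) = 1 - exp(-k*d)."""
    return 1.0 - math.exp(-k * dose)

check = p_lethal(ed10)       # recovers the 10% response by construction
p_low = p_lethal(1_000.0)    # illustrative low-dose extrapolation
```

The single‐parameter form is what makes low‐dose extrapolation with this model so direct: at low doses P(d) ≈ k·d, i.e., approximately linear in dose.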
