Similar Literature
15 similar documents found.
1.
If the point of view is adopted that in calculations of real-world phenomena we almost invariably have significant uncertainty in the numerical values of our parameters, then, in these calculations, numerical quantities should be replaced by probability distributions, and mathematical operations between these quantities should be replaced by analogous operations between probability distributions. Also, practical calculations always require discretization or truncation in one way or another. Combining these two thoughts leads to a numerical approach to probabilistic calculations having great simplicity, power, and elegance. The philosophy and technique of this approach are described, some pitfalls are pointed out, and an application to seismic risk assessment is outlined.
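The mechanics the abstract alludes to can be sketched in a few lines: each uncertain quantity becomes a discrete distribution (support values with probabilities), and arithmetic between quantities becomes an operation between distributions, with re-discretization providing the truncation that keeps supports bounded. A minimal sketch, with all distributions and bin counts invented for illustration:

```python
import numpy as np

def discretize(samples, bins=50):
    """Approximate a quantity's uncertainty as a discrete distribution."""
    probs, edges = np.histogram(samples, bins=bins)
    values = 0.5 * (edges[:-1] + edges[1:])      # bin midpoints
    return values, probs / probs.sum()

def add_dists(xv, xp, yv, yp, bins=50):
    """Sum of two independent discretized quantities: full outer convolution,
    then re-discretization to keep the support size bounded (truncation)."""
    sums = (xv[:, None] + yv[None, :]).ravel()
    wts = (xp[:, None] * yp[None, :]).ravel()
    probs, edges = np.histogram(sums, bins=bins, weights=wts)
    values = 0.5 * (edges[:-1] + edges[1:])
    return values, probs / probs.sum()

# Example with two invented input uncertainties
rng = np.random.default_rng(0)
xv, xp = discretize(rng.lognormal(0.0, 0.3, 10_000))
yv, yp = discretize(rng.normal(1.0, 0.2, 10_000))
zv, zp = add_dists(xv, xp, yv, yp)
print("mean of sum ≈", float((zv * zp).sum()))
```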

2.
Probabilistic Seismic Hazard Analysis (PSHA) is a methodology that estimates the likelihood that various levels of earthquake-caused ground motions will be exceeded at a given location in a given future time period. Due to large uncertainties in all of the geosciences data and in their modeling, multiple model interpretations are often possible. This leads to disagreements among the experts, which in the past has led to disagreement on the selection of a ground motion for design at a given site. This paper reports on a project, co-sponsored by the U.S. Nuclear Regulatory Commission, the U.S. Department of Energy, and the Electric Power Research Institute, that was undertaken to review the state of the art and improve the overall stability of the PSHA process by providing methodological guidance on how to perform a PSHA. The project reviewed past studies and examined ways to improve on the present state of the art. In analyzing past PSHA studies, the most important conclusion is that differences in PSHA results are commonly due to process rather than technical differences. Thus, the project concentrated heavily on developing process recommendations, especially on the use of multiple experts, and this paper reports on those process recommendations. The problem of facilitating and integrating the judgments of a diverse group of experts is analyzed in detail. The authors believe that the concepts and process principles apply just as well to non-earthquake fields such as volcanic hazard, flood risk, nuclear-plant safety, and climate change.
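For readers unfamiliar with the underlying computation, a PSHA hazard curve combines, over all seismic sources, each source's activity rate with the probability that its ground motion exceeds a given level. The sketch below is deliberately a toy version: the two sources, ground-motion coefficients, and variability value are invented, not taken from the paper.

```python
import numpy as np
from scipy.stats import norm

# Hypothetical seismic sources: (annual event rate, magnitude, distance in km)
sources = [(0.10, 6.0, 20.0), (0.02, 7.0, 50.0)]

def ln_median_pga(m, r):
    # Toy ground-motion model: ln PGA = a + b*M - c*ln(R); coefficients invented
    return -3.5 + 0.9 * m - 1.0 * np.log(r)

sigma = 0.6                           # assumed lognormal aleatory variability
pga_levels = np.logspace(-2, 0, 30)   # ground-motion levels (g)

# Annual exceedance rate: sum over sources of rate * P(PGA > x | M, R)
rates = sum(
    nu * (1.0 - norm.cdf((np.log(pga_levels) - ln_median_pga(m, r)) / sigma))
    for nu, m, r in sources
)
for x, lam in zip(pga_levels[::10], rates[::10]):
    print(f"P(exceed {x:.3f} g in 1 yr) ≈ {1 - np.exp(-lam):.4f}")
```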

3.
We consider the problem of estimating the probability of detection (POD) of flaws in an industrial steel component. Modeled as an increasing function of the flaw height, the POD characterizes the detection process; it is also involved in the estimation of the flaw size distribution, a key input parameter of physical models describing the behavior of the steel component when submitted to extreme thermodynamic loads. Such models are used to assess the resistance of highly reliable systems whose failures are seldom observed in practice. We develop a Bayesian method to estimate the flaw size distribution and the POD function, using flaw height measurements from periodic in‐service inspections conducted with an ultrasonic detection device, together with measurements from destructive lab experiments. Our approach, based on approximate Bayesian computation (ABC) techniques, is applied to a real data set and compared to maximum likelihood estimation (MLE) and a more classical approach based on Markov chain Monte Carlo (MCMC) techniques. In particular, we show that the parametric model describing the POD as the cumulative distribution function (cdf) of a log‐normal distribution, though often used in this context, can be invalidated by the data at hand. We propose an alternative nonparametric model, which assumes no predefined shape, and extend the ABC framework to this setting. Experimental results demonstrate the ability of this method to provide a flexible estimation of the POD function and describe its uncertainty accurately.
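A rough illustration of the ABC rejection idea on synthetic hit/miss data: draw POD parameters from a prior, simulate detections, and keep the draws whose binned detection rates fall within a tolerance of the observed ones. Everything here (data, priors, summary statistics, tolerance) is a hypothetical stand-in for the paper's setup.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)

# Synthetic hit/miss inspection data: flaw heights (mm) and detection outcomes
heights = rng.lognormal(0.5, 0.4, 200)
true_mu, true_sig = 0.6, 0.5
detected = rng.random(200) < norm.cdf((np.log(heights) - true_mu) / true_sig)

# Summary statistic: detection rate in four flaw-height quartile bins
bins = np.quantile(heights, [0, .25, .5, .75, 1])
idx = np.digitize(heights, bins[1:-1])
obs = np.array([detected[idx == k].mean() for k in range(4)])

# ABC rejection under a lognormal-CDF POD model with vague priors
accepted = []
for _ in range(20_000):
    mu, sig = rng.normal(0, 1), rng.uniform(0.1, 1.5)
    sim = rng.random(heights.size) < norm.cdf((np.log(heights) - mu) / sig)
    stat = np.array([sim[idx == k].mean() for k in range(4)])
    if np.abs(stat - obs).max() < 0.1:          # tolerance, chosen arbitrarily
        accepted.append((mu, sig))

post = np.array(accepted)
print("posterior mean (mu, sigma) ≈", post.mean(axis=0), "| kept:", len(post))
```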

4.
Risk assessors often use different probability plots as a way to assess the fit of a particular distribution or model by comparing the plotted points to a straight line and to obtain estimates of the parameters in parametric distributions or models. When empirical data do not fall in a sufficiently straight line on a probability plot, and when no other single parametric distribution provides an acceptable (graphical) fit to the data, the risk assessor may consider a mixture model with two component distributions. Animated probability plots are a way to visualize the possible behaviors of mixture models with two component distributions. When no single parametric distribution provides an adequate fit to an empirical dataset, animated probability plots can help an analyst pick some plausible mixture models for the data based on their qualitative fit. After using animations during exploratory data analysis, the analyst must then use other statistical tools, including but not limited to: Maximum Likelihood Estimation (MLE) to find the optimal parameters, Goodness of Fit (GoF) tests, and a variety of diagnostic plots to check the adequacy of the fit. Using a specific example with two LogNormal components, we illustrate the use of animated probability plots as a tool for exploring the suitability of a mixture model with two component distributions. Animations work well with other types of probability plots, and they may be extended to analyze mixture models with three or more component distributions.
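A crude way to reproduce the animation idea: redraw a lognormal probability plot while sweeping the mixing weight of a two-component LogNormal mixture, using matplotlib's pause() as a simple frame timer. The component parameters and sample size below are invented, not the paper's example.

```python
import numpy as np
from scipy.stats import norm
import matplotlib.pyplot as plt

rng = np.random.default_rng(2)
n = 500

fig, ax = plt.subplots()
# Frames: vary the mixing weight w; each component alone would plot as a line,
# so curvature in the point cloud signals the mixture
for w in np.linspace(0.1, 0.9, 9):
    comp = rng.random(n) < w
    data = np.where(comp,
                    rng.lognormal(0.0, 0.5, n),    # component 1 (hypothetical)
                    rng.lognormal(2.0, 0.3, n))    # component 2 (hypothetical)
    q = norm.ppf((np.arange(1, n + 1) - 0.5) / n)  # plotting positions
    ax.clear()
    ax.plot(q, np.sort(np.log(data)), ".", ms=3)
    ax.set(xlabel="standard normal quantile", ylabel="log(data)",
           title=f"mixture weight w = {w:.1f}")
    plt.pause(0.5)                                 # crude animation frame
plt.show()
```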

5.
This paper addresses the use of data for identifying and characterizing uncertainties in model parameters and predictions. The Bayesian Monte Carlo method is formally presented and elaborated, and applied to the analysis of the uncertainty in a predictive model for global mean sea level change. The method uses observations of output variables, made with an assumed error structure, to determine a posterior distribution of model outputs. This is used to derive a posterior distribution for the model parameters. Results demonstrate the resolution of the uncertainty that is obtained as a result of the Bayesian analysis and also indicate the key contributors to the uncertainty in the sea level rise model. While the technique is illustrated with a simple, preliminary model, the analysis provides an iterative framework for model refinement. The methodology developed in this paper provides a mechanism for the incorporation of ongoing data collection and research in decision-making for problems involving uncertain environmental change.
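The Bayesian Monte Carlo recipe itself is compact: sample parameters from their priors, run the model for each draw, and weight each run by the likelihood of the observations given that run's outputs. The toy quadratic "sea level" model, priors, and observation error below are assumptions for illustration only, not the paper's model.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(3)

# Toy "sea level rise" model: rise(t) = a*t + b*t**2 (form is hypothetical)
t_obs = np.array([10., 20., 30.])
y_obs = np.array([35., 80., 140.])     # synthetic observations (mm)
obs_sigma = 10.0                       # assumed observation error

# 1. Sample parameters from their priors and run the model for each draw
a = rng.normal(3.0, 1.0, 50_000)
b = rng.normal(0.05, 0.03, 50_000)
pred = a[:, None] * t_obs + b[:, None] * t_obs**2

# 2. Weight each run by the likelihood of the observations (Bayes' rule)
loglik = norm.logpdf(y_obs, loc=pred, scale=obs_sigma).sum(axis=1)
w = np.exp(loglik - loglik.max())
w /= w.sum()

# 3. Posterior summaries are weighted averages over the prior sample
print("posterior mean a ≈", float((w * a).sum()))
print("posterior mean b ≈", float((w * b).sum()))
print("effective sample size ≈", float(1.0 / (w**2).sum()))
```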

6.
Utility functions in the form of tables or matrices have often been used to combine discretely rated decision‐making criteria. Matrix elements are usually specified individually, so no one rule or principle can be easily stated for the utility function as a whole. A series of five matrices is presented, each aggregating criteria two at a time using simple rules that express a varying degree of constraint of the lower rating over the higher. A further nine possible matrices were obtained by using a different rule on either side of the main axis of the matrix to describe situations where the criteria have a differential influence on the outcome. Uncertainties in the criteria are represented by three alternative frequency distributions from which the assessors select the most appropriate. The output of the utility function is a distribution of rating frequencies that is dependent on the distributions of the input criteria. In pest risk analysis (PRA), seven of these utility functions were required to mimic the logic by which assessors for the European and Mediterranean Plant Protection Organization arrive at an overall rating of pest risk. The framework enables the development of PRAs that are consistent and easy to understand, criticize, compare, and change. When tested in workshops, PRA practitioners thought that the approach accorded with both the logic and the level of resolution that they used in the risk assessments.
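One of the simple rules described, where the lower rating fully constrains the outcome, corresponds to an element-wise minimum matrix; pushing two rating-frequency distributions through such a matrix yields the output rating distribution. A sketch assuming independent criteria and invented input frequencies:

```python
import numpy as np

# Ratings 1..5; one simple aggregation rule: the lower rating fully
# constrains the outcome (element-wise minimum). The paper's other matrices
# vary how strongly the lower rating pulls the result down.
ratings = np.arange(1, 6)
rule = np.minimum.outer(ratings, ratings)          # 5x5 utility matrix

def combine(p_x, p_y, rule):
    """Push two rating-frequency distributions through a rule matrix."""
    joint = np.outer(p_x, p_y)                     # independence assumed
    p_out = np.zeros(5)
    for i in range(5):
        for j in range(5):
            p_out[rule[i, j] - 1] += joint[i, j]
    return p_out

# Hypothetical assessor inputs: a confident "4" and an uncertain "3"
p_x = np.array([0.00, 0.05, 0.15, 0.70, 0.10])
p_y = np.array([0.10, 0.25, 0.30, 0.25, 0.10])
print("output rating distribution:", combine(p_x, p_y, rule).round(3))
```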

7.
Quantitative risk assessments for physical, chemical, biological, occupational, or environmental agents rely on scientific studies to support their conclusions. These studies often include relatively few observations, and, as a result, models used to characterize the risk may include large amounts of uncertainty. The motivation, development, and assessment of new methods for risk assessment are facilitated by the availability of a set of experimental studies that span a range of dose‐response patterns that are observed in practice. We describe construction of such a historical database focusing on quantal data in chemical risk assessment, and we employ this database to develop priors in Bayesian analyses. The database is assembled from a variety of existing toxicological data sources and contains 733 separate quantal dose‐response data sets. As an illustration of the database's use, prior distributions for individual model parameters in Bayesian dose‐response analysis are constructed. Results indicate that including prior information based on curated historical data in quantitative risk assessments may help stabilize eventual point estimates, producing dose‐response functions that are more stable and precisely estimated. These in turn produce potency estimates that share the same benefit. We are confident that quantitative risk analysts will find many other applications and issues to explore using this database.
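One way such historical priors can be used, sketched under heavy assumptions: summarize per-dataset parameter fits as a multivariate normal prior, then maximize the penalized likelihood for a new small study. The "historical" estimates, the logistic dose-response form, and the new data below are all synthetic stand-ins.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import expit

# Hypothetical "historical" parameter estimates (intercept, log-slope) that
# would come from fitting each of the 733 curated quantal datasets separately
rng = np.random.default_rng(4)
hist = rng.multivariate_normal([-2.0, 0.5], [[1.0, 0.2], [0.2, 0.3]], 733)
prior_mean, prior_cov = hist.mean(axis=0), np.cov(hist.T)

# New, small quantal study: dose, number affected, group size (synthetic)
dose = np.array([0.0, 1.0, 3.0, 10.0])
affected = np.array([1, 2, 4, 8])
n = np.array([10, 10, 10, 10])

def neg_log_posterior(theta):
    """Binomial logistic dose-response with the historical prior as penalty."""
    a, log_b = theta
    p = np.clip(expit(a + np.exp(log_b) * dose), 1e-12, 1 - 1e-12)
    loglik = (affected * np.log(p) + (n - affected) * np.log(1 - p)).sum()
    resid = theta - prior_mean
    logprior = -0.5 * resid @ np.linalg.inv(prior_cov) @ resid
    return -(loglik + logprior)

fit = minimize(neg_log_posterior, x0=prior_mean, method="Nelder-Mead")
print("MAP (intercept, log-slope):", fit.x)
```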

8.
The Monte Carlo (MC) simulation approach is traditionally used in food safety risk assessment to study quantitative microbial risk assessment (QMRA) models. When experimental data are available, performing Bayesian inference is a good alternative approach that allows backward calculation in a stochastic QMRA model to update the experts’ knowledge about the microbial dynamics of a given food‐borne pathogen. In this article, we propose a complex example where Bayesian inference is applied to a high‐dimensional second‐order QMRA model. The case study is a farm‐to‐fork QMRA model considering genetic diversity of Bacillus cereus in a cooked, pasteurized, and chilled courgette purée. Experimental data are Bacillus cereus concentrations measured in packages of courgette purées stored at different time‐temperature profiles after pasteurization. To perform a Bayesian inference, we first built an augmented Bayesian network by linking a second‐order QMRA model to the available contamination data. We then ran a Markov chain Monte Carlo (MCMC) algorithm to update all the unknown concentrations and unknown quantities of the augmented model. About 25% of the prior beliefs are strongly updated, leading to a reduction in uncertainty. Some updates interestingly question the QMRA model.
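The MCMC step can be illustrated on a drastically reduced stand-in: a single unknown growth rate linking storage time to observed log concentrations, updated by random-walk Metropolis-Hastings. The growth model, prior, and data below are invented; the paper's augmented network is far higher-dimensional.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(5)

# Toy stand-in for one node of the augmented network: linear growth of log10
# concentration with storage time; mu (log10 CFU/g per day) is unknown
days = np.array([2.0, 5.0, 8.0, 12.0])
obs_log_conc = np.array([1.1, 2.4, 3.8, 5.6])    # synthetic measurements
c0, meas_sd = 0.2, 0.3                           # assumed initial level, error

def log_post(mu):
    prior = norm.logpdf(mu, loc=0.3, scale=0.2)  # expert prior belief
    lik = norm.logpdf(obs_log_conc, loc=c0 + mu * days, scale=meas_sd).sum()
    return prior + lik

# Random-walk Metropolis-Hastings
mu, chain = 0.3, []
lp = log_post(mu)
for _ in range(20_000):
    prop = mu + rng.normal(0, 0.05)
    lp_prop = log_post(prop)
    if np.log(rng.random()) < lp_prop - lp:      # accept/reject
        mu, lp = prop, lp_prop
    chain.append(mu)

post = np.array(chain[5_000:])                   # discard burn-in
print(f"prior mean 0.30 -> posterior mean {post.mean():.3f} "
      f"(95% CI {np.quantile(post, .025):.3f}-{np.quantile(post, .975):.3f})")
```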

9.
Moolgavkar, Suresh H., Luebeck, E. Georg, Turim, Jay, and Hanna, Linda. Risk Analysis, 1999, 19(4): 599-611.
We present the results of a quantitative assessment of the lung cancer risk associated with occupational exposure to refractory ceramic fibers (RCF). The primary sources of data for our risk assessment were two long-term oncogenicity studies in male Fischer rats conducted to assess the potential pathogenic effects associated with prolonged inhalation of RCF. An interesting feature of the data was the availability of the temporal profile of fiber burden in the lungs of experimental animals. Because of this information, we were able to conduct both exposure–response and dose–response analyses. Our risk assessment was conducted within the framework of a biologically based model for carcinogenesis, the two-stage clonal expansion model, which allows for the explicit incorporation of the concepts of initiation and promotion in the analyses. We found that a model positing that RCF was an initiator had the highest likelihood. We proposed an approach based on biological considerations for the extrapolation of risk to humans. This approach requires estimation of human lung burdens for specific exposure scenarios, which we did by using an extension of a model due to Yu. Our approach acknowledges that the risk associated with exposure to RCF depends on exposure to other lung carcinogens. We present estimates of risk in two populations: (1) a population of nonsmokers and (2) an occupational cohort of steelworkers not exposed to coke oven emissions, a mixed population that includes both smokers and nonsmokers.
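A discrete-time Monte Carlo caricature of the two-stage clonal expansion model: normal cells acquire initiation by a Poisson process, initiated cells divide and die (promotion), and each division carries a small malignant-conversion probability. All rates below are invented, chosen only so the output is nonzero; they are not the paper's fitted values.

```python
import numpy as np

rng = np.random.default_rng(6)

# Discrete-time caricature of the two-stage clonal expansion model.
# All rates (per cell-year) are illustrative assumptions.
N = 1e7                    # normal cells at risk
nu = 1e-7                  # initiation rate per normal cell
alpha, beta = 0.10, 0.09   # initiated-cell division / death rates (promotion)
mu = 1e-4                  # malignant conversion probability per division
dt, t_max = 0.1, 80.0      # time step and horizon (years)

def time_to_malignancy():
    initiated, t = 0, 0.0
    while t < t_max:
        t += dt
        initiated += rng.poisson(N * nu * dt)              # initiation events
        if initiated > 0:
            births = rng.binomial(initiated, alpha * dt)
            deaths = rng.binomial(initiated, beta * dt)
            if rng.random() < 1.0 - (1.0 - mu) ** births:  # conversion
                return t
            initiated += births - deaths                   # clonal expansion
    return np.inf                                          # no tumor by t_max

times = np.array([time_to_malignancy() for _ in range(500)])
print("P(malignant conversion by age 80) ≈", np.mean(np.isfinite(times)))
```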

10.
We consider a general representation of the delegation problem, with and without money burning, and provide sufficient and necessary conditions under which an interval allocation is optimal. We also apply our results to the theory of trade agreements among privately informed governments. For both perfect and monopolistic competition settings, we provide conditions under which tariff caps are optimal.

11.
In most commercial applications of k-means clustering, researchers choose one set of k seed points to start the partitioning process; often, the initial set of seeds is chosen randomly. Using Monte Carlo simulation, we show that significant benefits are associated with replicated starting configurations that incorporate seed selection procedures based on a hierarchical clustering of sample points drawn from the original data matrix. A real-world application of the approach is then presented.
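A sketch of the replicated hierarchical-seeding idea using scipy and scikit-learn: draw a sample, cut a Ward dendrogram at k clusters, use the group centroids as k-means seeds, and keep the best of several replications. The data, k, and replication counts are arbitrary choices for illustration.

```python
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage
from sklearn.cluster import KMeans

rng = np.random.default_rng(7)
X = rng.normal(size=(2000, 4)) + rng.integers(0, 3, 2000)[:, None]  # toy data
k, n_rep, sample_size = 3, 10, 200

best = None
for _ in range(n_rep):                  # replicated starting configurations
    sample = X[rng.choice(len(X), sample_size, replace=False)]
    labels = fcluster(linkage(sample, method="ward"), t=k, criterion="maxclust")
    seeds = np.array([sample[labels == j].mean(axis=0) for j in range(1, k + 1)])
    km = KMeans(n_clusters=k, init=seeds, n_init=1).fit(X)
    if best is None or km.inertia_ < best.inertia_:
        best = km                       # keep the lowest within-cluster SSE

print("best within-cluster SSE:", round(best.inertia_, 1))
```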

12.
Foot and mouth disease (FMD) is considered by many as the most important animal disease in the world. FMD is highly contagious and outbreaks incur significant costs as affected countries are severely limited in their ability to trade. A number of trade commodities may be contaminated with FMD virus (FMDV), including animal products such as meat. As a member of the European Union, Great Britain (GB) has put in place a number of regulations to prevent the importation of pathogens in imported meat products. However, the illegal importation of meat provides a route by which safety controls may be circumvented and meat from FMD affected areas may be imported. This study assesses the FMD infection risk posed to the livestock population of GB from the illegal importation of meat, and estimates the major contributors to this overall risk, through the development of a quantitative risk assessment model. From model results, the total amount of illegal meat entering GB each year is estimated on average to be 11,875 tonnes, with 90% certainty that this is between 4,398 and 28,626 tonnes per year, of which between 64.5 and 565 kg are contaminated with FMDV. This flow of illegal meat results in an estimated frequency of FMD infection in GB livestock of 0.015 cases per year, with 90% certainty that it is between 0.0017 and 0.053. Imports from the Near and Middle East region account for 47% of this risk, and 68% of the risk is attributed to bone-in and dried de-boned products.
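The headline figures come from propagating input uncertainty through the model by Monte Carlo. A skeletal version of that propagation is below; every distribution and parameter is invented rather than taken from the paper's fitted inputs, so only the structure of the calculation is meaningful.

```python
import numpy as np

rng = np.random.default_rng(8)
n = 100_000   # Monte Carlo iterations

# Illustrative uncertainty propagation; all distributions are stand-ins.
tonnes = rng.lognormal(np.log(11_000), 0.6, n)         # illegal meat / year
frac_fmdv = rng.beta(2, 80_000, n)                     # fraction contaminated
kg_contam = tonnes * 1_000 * frac_fmdv                 # contaminated kg / year
p_infect_per_kg = rng.lognormal(np.log(1e-4), 1.0, n)  # exposure -> infection
cases = kg_contam * p_infect_per_kg                    # expected cases / year

lo, med, hi = np.quantile(cases, [0.05, 0.5, 0.95])
print(f"cases/year: median {med:.4f}, 90% interval ({lo:.5f}, {hi:.4f})")
```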

13.
14.
Land application is one of the major methods of managing municipal sludge in China. The sludge is used for fertilizing and conditioning soil, but due to the high concentration of heavy metals and other chemicals that it contains, improper use of sludge will lead to the contamination of farmland soil. To provide guidance on the application of sludge in China, the Control Standards for Pollutants in Sludge for Agricultural Use (CSPSAU) were enacted and implemented in 1985. Subsequently, the National Environment Quality Standards for Soil (NEQSS) were formulated and put into effect in 1996. In this article, these two national standards were examined by means of exposure assessment. The main human exposure pathway considered was dietary intake of crops grown on sludge-applied farmland. Five major types of agricultural crops (rice, wheat, tuber roots, vegetables, and fruits) and three exposure groups (the urban individual group, the rural sludge-applying individual group, and the rural sludge nonapplying individual group) were assessed. This case study in Tianjin, China, shows the necessity of reexamining the national standards of the CSPSAU and the NEQSS in the context of risk assessment. More comprehensive surveys and monitoring programs assessing heavy metals contained in farmland soils and crop tissues will be necessary for examining the risks to human health.
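The dietary-intake pathway reduces to a standard screening calculation: average daily dose ADD = Σ(C_i × IR_i) / BW over the crop types, compared against a reference dose. A sketch with illustrative concentrations, intake rates, and reference dose (none of the numbers are from the Tianjin study):

```python
# Screening-level dietary exposure to a heavy metal (e.g., cadmium) through
# crops grown on sludge-amended soil; all numbers below are illustrative.
crops = {          # metal concentration (mg/kg), daily intake (kg/day)
    "rice":       (0.08, 0.250),
    "wheat":      (0.05, 0.150),
    "tubers":     (0.03, 0.100),
    "vegetables": (0.06, 0.300),
    "fruits":     (0.01, 0.100),
}
body_weight = 60.0   # kg, assumed adult
rfd = 1e-3           # mg/kg-day, assumed oral reference dose

# Average daily dose: ADD = sum(C_i * IR_i) / BW
add = sum(c * ir for c, ir in crops.values()) / body_weight
hq = add / rfd       # hazard quotient; > 1 flags potential concern
print(f"ADD = {add:.5f} mg/kg-day, hazard quotient = {hq:.2f}")
```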

15.
Recently, the concept of black swans has gained increased attention in the fields of risk assessment and risk management. Different types of black swans have been suggested, distinguishing between unknown unknowns (nothing in the past can convincingly point to their occurrence), unknown knowns (known to some, but not to the relevant analysts), and known knowns whose probability of occurrence is judged to be negligible. Traditional risk assessments have been questioned, as their standard probabilistic methods may not be capable of predicting or even identifying these rare and extreme events, thus creating a source of possible black swans. In this article, we show how a simulation model can be used to identify previously unknown potentially extreme events that, if not identified and treated, could occur as black swans. We show that by manipulating a verified and validated model used to predict the impacts of hazards on a system of interest, we can identify hazard conditions not previously experienced that could lead to impacts much larger than any previous level of impact. This makes these potential black swan events known and allows risk managers to more fully consider them. We demonstrate this method using a model developed to evaluate the effect of hurricanes on energy systems in the United States; we identify hurricanes with potentially extreme impacts, storms whose impacts lie well beyond anything the historic record suggests is possible.
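The search procedure can be mimicked with any impact model: sample hazard conditions over a box that deliberately extends past the historical envelope and flag combinations whose predicted impact exceeds the historical maximum. The impact function and parameter ranges below are stand-ins, not the paper's validated hurricane model.

```python
import numpy as np

rng = np.random.default_rng(9)

def impact_model(wind_mph, pressure_mb, forward_speed_mph):
    """Stand-in for a verified impact model (e.g., customer outages)."""
    return 1e4 * np.exp(0.03 * wind_mph) * (1020 - pressure_mb) \
           / np.sqrt(forward_speed_mph)

# Hypothetical historical worst case vs. an expanded, plausible parameter box
historic_max_impact = impact_model(175, 900, 10)

wind = rng.uniform(100, 220, 50_000)        # mph, extends past ~175 observed
pressure = rng.uniform(870, 1000, 50_000)   # mb
speed = rng.uniform(2, 30, 50_000)          # mph, includes near-stalling storms
impacts = impact_model(wind, pressure, speed)

extreme = impacts > historic_max_impact
print(f"{extreme.mean():.1%} of sampled conditions exceed the historic maximum")
worst = np.argmax(impacts)
print("worst case: wind %.0f mph, pressure %.0f mb, speed %.1f mph"
      % (wind[worst], pressure[worst], speed[worst]))
```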
