Similar Literature
14 similar documents found.
1.
Fault Trees vs. Event Trees in Reliability Analysis
Reliability analysis is the study of both the probability and the process of failure of a system. For that purpose, several tools are available, for example, fault trees, event trees, or the GO technique. These tools are often complementary and address different aspects of the question. Experience shows that there is sometimes confusion between two of these methods: fault trees and event trees. Although sometimes identified as equivalent, they in fact serve different purposes. Fault trees lay out relationships among events. Event trees lay out sequences of events linked by conditional probabilities. At least in theory, event trees can better handle notions of continuity (logical, temporal, and physical), whereas fault trees are most powerful in identifying and simplifying failure scenarios. Different characteristics of the system in question (e.g., a dam or a nuclear reactor) may guide the choice between fault trees, event trees, or a combination of the two. Some elements of this choice are examined, and observations are made about the relative capabilities of the two methods.
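To make the contrast concrete, here is a minimal Python sketch (all probabilities are assumed illustrative values and are not taken from the article; independence is assumed throughout): a fault tree combines basic-event probabilities through AND/OR gates into a top-event probability, while an event tree multiplies conditional branch probabilities along each sequence from an initiating event.

```python
# A toy fault tree and a toy event tree for the same hypothetical system.
# All probabilities are assumed illustrative values; basic events are independent.

# Fault tree: TOP = (pump fails AND backup fails) OR power loss
p_pump, p_backup, p_power = 0.01, 0.05, 0.001

p_and = p_pump * p_backup                    # AND gate (independence assumed)
p_top = 1 - (1 - p_and) * (1 - p_power)      # OR gate (independence assumed)
print(f"Fault-tree top-event probability: {p_top:.6f}")

# Event tree: an initiating event followed by two safety barriers, each branch
# carrying a conditional probability of failure.
p_init = 1e-3        # frequency of the initiating event (assumed)
p_b1 = 0.02          # P(barrier 1 fails | initiator)
p_b2 = 0.10          # P(barrier 2 fails | initiator, barrier 1 failed)

p_safe      = p_init * (1 - p_b1)            # sequence ends safely
p_mitigated = p_init * p_b1 * (1 - p_b2)     # barrier 2 mitigates
p_damage    = p_init * p_b1 * p_b2           # both barriers fail
print(f"Event-tree damage-sequence probability: {p_damage:.2e}")
```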

2.
Modeling Uncertainties in Mining Pillar Stability Analysis
Many countries are now facing problems related to their past mining activities. One of the greatest concerns is potential surface instability. In areas where a room-and-pillar extraction method was used, deterministic methodologies are generally used to assess the hazard of surface collapses. However, those methodologies suffer from not being able to take into account all the uncertainties inherent in any hazard analysis. Through the practical example of assessing the stability of a single pillar in a very simple mining layout, this article introduces a logical framework that can be used to incorporate the different kinds of uncertainties related to data and models, as well as to specific expert choices, in the hazard or risk analysis process. Practical recommendations and efficient tools are also provided to help engineers and experts in their daily work.
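A minimal sketch of the kind of probabilistic treatment the abstract argues for, assuming a tributary-area stress estimate and illustrative parameter distributions (none of the numbers come from the article): Monte Carlo propagation of uncertainty in depth, unit weight, and pillar strength into the probability that the factor of safety falls below 1.

```python
# Monte Carlo on the factor of safety (FoS) of a single room-and-pillar pillar.
# Tributary-area stress and all parameter distributions are assumed for illustration.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

depth    = rng.normal(120.0, 10.0, n)                 # overburden depth H [m]
gamma    = rng.normal(0.025, 0.002, n)                # overburden unit weight [MN/m^3]
strength = rng.lognormal(np.log(12.0), 0.3, n)        # pillar strength [MPa]

w, b = 8.0, 5.0                                       # pillar width and room width [m]
stress = gamma * depth * ((w + b) / w) ** 2           # tributary-area pillar stress [MPa]

fos = strength / stress
print(f"mean factor of safety : {fos.mean():.2f}")
print(f"P(FoS < 1)            : {(fos < 1).mean():.3%}")
```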

3.
There are many uncertainties in a probabilistic risk analysis (PRA). We identify the different types of uncertainties and describe their implications. We then summarize the uncertainty analyses that have been performed in current PRAs and characterize the results that have been obtained. We draw conclusions regarding interpretations of uncertainties, the areas having the largest uncertainties, and the needs that exist in uncertainty analysis. We finally characterize the robustness of various utilizations of PRA results.

4.
Current methods for cancer risk assessment result in single values, without any quantitative information on the uncertainties in these values. Therefore, single risk values could easily be overinterpreted. In this study, we discuss a full probabilistic cancer risk assessment approach in which all the generally recognized uncertainties in both exposure and hazard assessment are quantitatively characterized and probabilistically evaluated, resulting in a confidence interval for the final risk estimate. The methodology is applied to three example chemicals (aflatoxin, N‐nitrosodimethylamine, and methyleugenol). These examples illustrate that the uncertainty in a cancer risk estimate may be huge, making single-value estimates of cancer risk meaningless. Further, a risk based on linear extrapolation tends to be lower than the upper 95% confidence limit of a probabilistic risk estimate, and in that sense it is not conservative. Our conceptual analysis showed that there are two possible basic approaches for cancer risk assessment, depending on the interpretation of the dose‐incidence data measured in animals. However, it remains unclear which of the two interpretations is the more adequate one, adding a further uncertainty to the already huge confidence intervals for cancer risk estimates.
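A minimal sketch of a probabilistic (rather than single-value) cancer risk estimate, with assumed lognormal uncertainty on exposure and on the BMD10 and a linear extrapolation of potency; the distributions and numbers are illustrative, not those of the study.

```python
# Probabilistic cancer risk: propagate uncertainty in exposure and in potency
# (via the BMD10) to a confidence interval on the extra lifetime risk.
# All distributions and values are assumed for illustration.
import numpy as np

rng = np.random.default_rng(1)
n = 200_000

# Human exposure [mg/kg bw/day], assumed lognormal uncertainty
exposure = rng.lognormal(mean=np.log(1e-4), sigma=0.8, size=n)

# Dose giving 10% extra tumour incidence in animals (BMD10) [mg/kg bw/day]
bmd10 = rng.lognormal(mean=np.log(1.5), sigma=0.5, size=n)

# Linear extrapolation of potency from the BMD10 (one possible modeling choice)
slope = 0.10 / bmd10
risk = slope * exposure

lo, med, hi = np.percentile(risk, [5, 50, 95])
print(f"extra lifetime cancer risk: median {med:.1e}, 90% interval [{lo:.1e}, {hi:.1e}]")
```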

5.
Access management, which systematically limits opportunities for egress and ingress of vehicles to highway lanes, is critical to protecting trillions of dollars of current investment in transportation. This article addresses allocating resources for access management with incomplete and partially relevant data on crash rates, travel speeds, and other factors. While access management can be effective in avoiding crashes, reducing travel times, and increasing route capacities, the literature suggests a need for performance metrics to guide investments in resource allocation across large corridor networks and several time horizons. In this article, we describe a quantitative decision model to support an access management program via risk‐cost‐benefit analysis under uncertainties from diverse sources of data and expertise. The approach quantifies the potential benefits, including safety improvement and travel time savings, and the costs of access management through functional relationships of input parameters including crash rates, corridor access point densities, and traffic volumes. Parameter uncertainties, which vary across locales and experts, are addressed via numerical interval analyses. The approach is demonstrated at several geographic scales across 7,000 kilometers of highways in a geographic region and several subregions. The demonstration prioritizes route segments that would benefit from risk management measures, including (i) additional data or elicitation, (ii) right‐of‐way purchases, (iii) restriction or closing of access points, (iv) new alignments, and (v) developer proffers, among others. The approach should be of wide interest to analysts, planners, policymakers, and stakeholders who rely on heterogeneous data and expertise for risk management.
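A minimal sketch of the interval-analysis idea, with hypothetical segments and ranges (none from the article): each uncertain input is carried as a [low, high] interval and combined into bounds on a segment's benefit-cost ratio for prioritization.

```python
# Interval analysis for prioritizing corridor segments by benefit-cost ratio when
# crash rates, crash values, and program costs are only known as ranges.
# All segment names and numbers are assumed for illustration.

def interval_mul(a, b):
    products = [a[0] * b[0], a[0] * b[1], a[1] * b[0], a[1] * b[1]]
    return (min(products), max(products))

def interval_div(a, b):
    # Assumes the denominator interval excludes zero.
    quotients = [a[0] / b[0], a[0] / b[1], a[1] / b[0], a[1] / b[1]]
    return (min(quotients), max(quotients))

segments = {
    # name: (crashes avoided per year, value per crash [$], program cost [$/yr])
    "Segment A": ((3.0, 6.0), (80_000, 150_000), (200_000, 300_000)),
    "Segment B": ((1.0, 2.5), (80_000, 150_000), (50_000, 90_000)),
}

for name, (crashes, value, cost) in segments.items():
    benefit = interval_mul(crashes, value)
    bcr = interval_div(benefit, cost)
    print(f"{name}: benefit-cost ratio in [{bcr[0]:.2f}, {bcr[1]:.2f}]")
```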

6.
Distributions of pathogen counts in treated water over time are highly skewed, power‐law‐like, and discrete. Over long periods of record, a long tail is observed, which can strongly determine the long‐term mean pathogen count and the associated health effects. Such distributions have been modeled with the Poisson lognormal (PLN) computed (not closed‐form) distribution, and with a new discrete growth distribution (DGD), also computed, recently proposed and demonstrated for microbial counts in water (Risk Analysis 29, 841–856). In this article, an error in the original theoretical development of the DGD is pointed out, and the approach is shown to support the closed‐form discrete Weibull (DW). Furthermore, an information‐theoretic derivation of the DGD is presented, explaining the fit shown for it to the original nine empirical and three simulated (n = 1,000) long‐term waterborne microbial count data sets. Both developments result from a theory of multiplicative growth of outcome size from correlated, entropy‐forced cause magnitudes. The predicted DW and DGD are first borne out in simulations of continuous and discrete correlated growth processes, respectively. Then the DW and DGD are each demonstrated to fit 10 of the original 12 data sets, passing the chi‐square goodness‐of‐fit test (α = 0.05, overall p = 0.1184). The PLN fit only 4 of 12 data sets (p = 1.6 × 10⁻⁸), a shortfall explained by cause-magnitude correlation. The results bear out the predictions of monotonically decreasing distributions and suggest use of the DW for inhomogeneous counts correlated in time or space. A formula for computing the DW mean is presented.
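A minimal sketch of the closed-form discrete Weibull the abstract supports, using assumed parameters and simulated counts rather than the article's data sets (requires NumPy and SciPy): the survival function, the mean computed as a sum of survival probabilities, and a chi-square goodness-of-fit check on pooled bins.

```python
# Closed-form discrete Weibull (DW): survival function, mean, and a chi-square
# goodness-of-fit check on simulated counts. Parameters and binning are assumed.
import numpy as np
from scipy import stats

def dw_sf(x, q, beta):
    # Discrete Weibull (type I) survival function: P(X >= x) = q**(x**beta)
    return q ** (np.asarray(x, dtype=float) ** beta)

def dw_mean(q, beta, tol=1e-12):
    # E[X] = sum_{x >= 1} P(X >= x) = sum_{x >= 1} q**(x**beta)
    total, x, term = 0.0, 1, 1.0
    while term > tol:
        term = q ** (x ** beta)
        total += term
        x += 1
    return total

q, beta = 0.6, 0.5            # assumed; beta < 1 gives the heavy right tail
print(f"DW mean for q={q}, beta={beta}: {dw_mean(q, beta):.3f}")

# Simulate counts by inverting the survival function, then test the fit.
rng = np.random.default_rng(2)
u = rng.uniform(size=5_000)
samples = np.floor((np.log(u) / np.log(q)) ** (1.0 / beta)).astype(int)

bin_edges = [0, 1, 2, 5, 10]                           # pooled count bins
observed = np.histogram(samples, bins=bin_edges + [np.inf])[0]
upper_sf = np.append(dw_sf(bin_edges[1:], q, beta), 0.0)
expected = (dw_sf(bin_edges, q, beta) - upper_sf) * len(samples)

chi2, p_value = stats.chisquare(observed, f_exp=expected)
print(f"chi-square = {chi2:.2f}, p = {p_value:.3f}")
```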

7.
Probabilistic risk analyses often construct multistage chance trees to estimate the joint probability of compound events. If random measurement error is associated with some or all of the estimates, we show that the resulting estimates of joint probability may be highly skewed. Joint probability estimates based on the analysis of multistage chance trees are more likely than not to be below the true probability of adverse events, but will sometimes substantially overestimate it. In contexts such as insurance markets for environmental risks, skewed distributions of risk estimates amplify the "winner's curse," so that the estimated risk premium for low-probability events is likely to be lower than the normative value. Skewness may result even in unbiased estimators of expected value from simple lotteries, if measurement error is associated with both the probability and pay-off terms. Further, skewness may occur even if the error associated with these two estimates is symmetrically distributed. Under certain circumstances, skewed estimates of expected value may lead risk-neutral decisionmakers to exhibit a tendency to choose a certainty equivalent over a lottery of equal expected value, or vice versa. We show that when distributions of estimates of expected value are positively skewed, under certain circumstances it will be optimal to choose lotteries with nominal values lower than the value of apparently superior certainty equivalents. Extending the previous work of Goodman (1960), we provide an exact formula for the skewness of products.
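A minimal simulation of the central claim, with an assumed three-stage chance tree and assumed symmetric 20% relative measurement error (this is not the article's exact formula): the product of unbiased stage estimates remains unbiased in the mean, yet is positively skewed, so the joint-probability estimate falls below the true value more often than not.

```python
# Product of unbiased, symmetrically perturbed stage probabilities in a
# three-stage chance tree: unbiased in the mean but positively skewed.
# Stage probabilities and the 20% relative error are assumed for illustration.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
true_stage_probs = np.array([0.10, 0.20, 0.05])
true_joint = true_stage_probs.prod()

n = 500_000
noisy = true_stage_probs * (1 + rng.normal(0.0, 0.20, size=(n, 3)))
joint_estimates = noisy.prod(axis=1)

print(f"true joint probability        : {true_joint:.4e}")
print(f"mean of estimates (unbiased)  : {joint_estimates.mean():.4e}")
print(f"P(estimate < true value)      : {(joint_estimates < true_joint).mean():.3f}")
print(f"skewness of the estimates     : {stats.skew(joint_estimates):.2f}")
```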

8.
To ascertain the viability of a project, allocate resources, take part in bidding processes, and make other related decisions, modern project management requires forecasting techniques for the cost, duration, and performance of a project, not only under normal circumstances but also under external events that might abruptly change the status quo. We present a Bayesian framework that provides a global forecast of a project's performance. We aim at predicting the probabilities and impacts of a set of potential scenarios caused by combinations of disruptive events, and at using this information to deal with project management issues. To introduce the methodology, we focus on a project's cost, but the ideas apply equally to project duration or performance forecasting. We illustrate our approach with an example based on a real case study involving estimation of the uncertainty in project cost while bidding for a contract.
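A minimal forward Monte Carlo sketch of the scenario structure described (the article embeds this in a fuller Bayesian treatment; all event probabilities and impacts here are assumed): a baseline cost plus a set of disruptive events, each with its own occurrence probability and cost impact, yields a predictive cost distribution whose upper percentiles can inform a bid.

```python
# Predictive distribution of total project cost: a baseline estimate plus a set
# of disruptive events, each with an occurrence probability and a cost impact.
# All probabilities, impacts, and the baseline distribution are assumed.
import numpy as np

rng = np.random.default_rng(4)
n = 200_000

baseline = rng.lognormal(np.log(10.0), 0.15, n)     # cost under normal circumstances [M$]

events = [
    # (probability of occurrence, mean extra cost [M$], sd of extra cost [M$])
    (0.10, 3.0, 1.0),   # e.g., key supplier failure (assumed)
    (0.05, 6.0, 2.0),   # e.g., regulatory delay (assumed)
    (0.20, 1.0, 0.5),   # e.g., adverse weather (assumed)
]

total = baseline.copy()
for p, mu, sd in events:
    occurs = rng.uniform(size=n) < p
    impact = np.maximum(rng.normal(mu, sd, n), 0.0)
    total += occurs * impact

for q in (50, 80, 95):
    print(f"P{q} of total project cost: {np.percentile(total, q):.1f} M$")
```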

9.
How can risk analysts help to improve policy and decision making when the correct probabilistic relation between alternative acts and their probable consequences is unknown? This practical challenge of risk management with model uncertainty arises in problems from preparing for climate change to managing emerging diseases to operating complex and hazardous facilities safely. We review constructive methods for robust and adaptive risk analysis under deep uncertainty. These methods are not yet as familiar to many risk analysts as older statistical and model‐based methods, such as the paradigm of identifying a single "best‐fitting" model and performing sensitivity analyses for its conclusions. They provide genuine breakthroughs for improving predictions and decisions when the correct model is highly uncertain. We demonstrate their potential by summarizing a variety of practical risk management applications.

10.
This article presents a framework for using probabilistic terrorism risk modeling in regulatory analysis. We demonstrate the framework with an example application involving a regulation under consideration, the Western Hemisphere Travel Initiative for the Land Environment (WHTI‐L). First, we estimate the annualized loss from terrorist attacks with the Risk Management Solutions (RMS) Probabilistic Terrorism Model. We then estimate the critical risk reduction, which is the risk‐reducing effectiveness of WHTI‐L needed for its benefit, in terms of reduced terrorism loss in the United States, to exceed its cost. Our analysis indicates that the critical risk reduction depends strongly not only on uncertainties in the terrorism risk level, but also on uncertainty in the cost of regulation and in how casualties are monetized. For a terrorism risk level based on the RMS standard risk estimate, the baseline regulatory cost estimate for WHTI‐L, and a range of casualty cost estimates based on the willingness‐to‐pay approach, our estimate of the expected annualized loss from terrorism ranges from $2.7 billion to $5.2 billion. For this range in annualized loss, the critical risk reduction for WHTI‐L ranges from 7% to 13%. Basing results on a lower risk level that halves the annualized terrorism loss would double the critical risk reduction (14–26%), and basing them on a higher risk level that doubles the annualized terrorism loss would cut the critical risk reduction in half (3.5–6.6%). Ideally, decisions about terrorism security regulations and policies would be informed by true benefit‐cost analyses in which the estimated benefits are compared to costs. Such analyses for terrorism security efforts face substantial impediments stemming from the great uncertainty in the terrorist threat and the very low recurrence interval for large attacks. Several approaches can be used to estimate how a terrorism security program or regulation reduces the distribution of risks it is intended to manage, but continued research to develop additional tools and data is necessary to support application of these approaches. These include refinement of models and simulations, engagement of subject matter experts, implementation of program evaluation, and estimation of the costs of casualties from terrorism events.
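A minimal sketch of the critical-risk-reduction arithmetic; the annualized regulatory cost used here is an assumed illustrative figure (it is not stated in the abstract), chosen so the outputs land near the 7-13% range quoted.

```python
# Critical risk reduction = annualized regulatory cost / annualized terrorism loss.
# The cost figure below is an assumed illustrative value, not taken from the abstract.
annualized_cost = 0.36e9                     # assumed annualized cost of WHTI-L [$]

for annualized_loss in (2.7e9, 5.2e9):       # range of loss estimates from the abstract
    critical_reduction = annualized_cost / annualized_loss
    print(f"annualized loss ${annualized_loss / 1e9:.1f}B -> "
          f"critical risk reduction {critical_reduction:.1%}")

# Halving the loss estimate doubles the required risk reduction, and doubling the
# loss halves it, which is the scaling behavior described in the abstract.
```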

11.
Many models of exposure-related carcinogenesis, including traditional linearized multistage models and more recent two-stage clonal expansion (TSCE) models, belong to a family of models in which cells progress between successive stages (possibly undergoing proliferation at some stages) at rates that may depend (usually linearly) on biologically effective doses. Biologically effective doses, in turn, may depend nonlinearly on administered doses, due to PBPK nonlinearities. This article provides an exact mathematical analysis of the expected number of cells in the last ("malignant") stage of such a "multistage clonal expansion" (MSCE) model as a function of dose rate and age. The solution displays symmetries such that several distinct sets of parameter values provide identical fits to all epidemiological data, make identical predictions about the effects on risk of changes in exposure levels or timing, and yet make significantly different predictions about the effects on risk of changes in the composition of exposure that affect the pharmacodynamic dose-response relation. Several different predictions for the effects of such an intervention (such as reducing carcinogenic constituents of an exposure) that acts on only one or a few stages of the carcinogenic process may be equally consistent with all preintervention epidemiological data. This is an example of nonunique identifiability of model parameters and predictions from data. The new results on nonunique model identifiability presented here show that the effects of an intervention on changing age-specific cancer risks in an MSCE model can be either large or small, but that which is the case cannot be predicted from preintervention epidemiological data and knowledge of the biological effects of the intervention alone. Rather, biological data that identify which rate parameters hold for which specific stages are required to obtain unambiguous predictions. From epidemiological data alone, only a set of equally likely alternative predictions can be made for the effects on risk of such interventions.
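A minimal numerical sketch of a two-stage special case of the stage-progression structure described (all rate constants and dose coefficients are assumed; this is not the article's exact closed-form solution): initiation, clonal expansion of initiated cells, and malignant conversion, with rates depending linearly on dose, integrated by a simple Euler scheme.

```python
# Expected malignant cells in a two-stage clonal-expansion mean-count model.
# All parameter values and dose coefficients are assumed for illustration.

def expected_malignant_cells(dose, age_years, dt=0.01):
    n_normal = 1e7                       # normal stem cells, held constant (assumed)
    mu_init = 1e-7 * (1 + 0.5 * dose)    # initiation rate per cell-year (assumed)
    mu_conv = 1e-7 * (1 + 0.5 * dose)    # malignant conversion rate per initiated cell-year
    growth = 0.05 * (1 + 0.2 * dose)     # net clonal expansion rate of initiated cells

    initiated, malignant = 0.0, 0.0      # expected cell counts
    for _ in range(int(age_years / dt)):
        d_initiated = (mu_init * n_normal + growth * initiated) * dt
        d_malignant = (mu_conv * initiated) * dt
        initiated += d_initiated
        malignant += d_malignant
    return malignant

for dose in (0.0, 1.0, 2.0):
    m = expected_malignant_cells(dose, 70.0)
    print(f"dose rate {dose:.1f}: expected malignant cells by age 70 = {m:.4f}")
```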

12.
Use of similar or identical antibiotics in both human and veterinary medicine has come under increasing scrutiny by regulators concerned that bacteria resistant to animal antibiotics will infect people and resist treatment with similar human antibiotics, leading to excess illnesses and deaths. Scientists, regulators, and interest groups in the United States and Europe have urged bans on nontherapeutic and some therapeutic uses of animal antibiotics to protect human health. Many regulators and public health experts have also expressed dissatisfaction with the perceived limitations of quantitative risk assessment and have proposed alternative qualitative and judgmental approaches ranging from "attributable fraction" estimates to risk management recommendations based on the precautionary principle or on expert judgments about the importance of classes of compounds in human medicine. This article presents a more traditional quantitative risk assessment of the likely human health impacts of continuing versus withdrawing use of fluoroquinolones and macrolides in the production of broiler chickens in the United States. An analytic framework is developed and applied to available data. It indicates that withdrawing animal antibiotics can cause far more human illness-days than it would prevent: the estimated human benefit:risk health ratio for continued animal antibiotic use exceeds 1,000:1 in many cases. This conclusion is driven by a hypothesized causal sequence in which withdrawing animal antibiotic use increases illness rates in animals, microbial loads in servings from the affected animals, and hence human health risks. This potentially important aspect of human health risk assessment for animal antibiotics has not previously been quantified.
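A minimal sketch of the structure of the benefit:risk comparison implied by the causal chain in the abstract; every number below is an assumed placeholder, so the resulting ratio is illustrative only and does not reproduce the article's estimate.

```python
# Structure of the benefit:risk comparison for continued animal antibiotic use.
# Every number is an assumed placeholder, not data from the article.

servings_per_year = 2.0e10            # chicken servings consumed per year (assumed)

# If animal antibiotics are withdrawn, more flocks are ill, so the average
# microbial load per serving (and the illness risk per serving) rises (assumed).
p_illness_per_serving_use      = 1.0e-5
p_illness_per_serving_withdraw = 1.2e-5
extra_illnesses_caused = servings_per_year * (p_illness_per_serving_withdraw
                                              - p_illness_per_serving_use)

# Illnesses averted by withdrawal: cases in which resistance linked to the animal
# use would have compromised human treatment (assumed fraction).
illnesses_with_use = servings_per_year * p_illness_per_serving_use
p_resistant_and_treatment_affected = 1.0e-3
illnesses_prevented = illnesses_with_use * p_resistant_and_treatment_affected

# Convert to illness-days with assumed severities (resistant cases last longer).
days_caused    = extra_illnesses_caused * 6.0
days_prevented = illnesses_prevented * 2.0    # extra days attributable to resistance

print(f"illness-days caused by withdrawal   : {days_caused:.3e}")
print(f"illness-days prevented by withdrawal: {days_prevented:.3e}")
print(f"benefit:risk ratio of continued use : {days_caused / days_prevented:.0f}:1")
```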

13.
Risk Analysis, 2018, 38(5): 962–977
Attacker/defender models have primarily assumed that each decisionmaker optimizes the cost of the damage inflicted and its economic repercussions from their own perspective. Two streams of recent research have sought to extend such models. One stream suggests that it is more realistic to consider attackers with multiple objectives, but this research has not included the adaptation of a terrorist with multiple objectives to defender actions. The other stream builds on experimental studies showing that decisionmakers deviate from optimal rational behavior. In this article, we extend attacker/defender models to incorporate the multiple objectives that a terrorist might consider in planning an attack. This includes the tradeoffs a terrorist might consider and their adaptation to defender actions. However, we must also consider experimental evidence of deviations from the rationality assumed in the commonly used expected utility model when determining such adaptation. Thus, we model the attacker's behavior using multiattribute prospect theory to account for the attacker's multiple objectives and deviations from rationality. We evaluate our approach by considering an attacker with multiple objectives who wishes to smuggle radioactive material into the United States and a defender who has the option to implement a screening process to hinder the attacker. We discuss the problems with implementing such an approach, but argue that research in this area must continue so that terrorist behavior is not misrepresented when determining optimal defensive actions.
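A minimal sketch of scoring attacker options with multiattribute prospect theory, using the standard Tversky-Kahneman value and probability-weighting forms with textbook parameter values; the options, attributes, weights, and screening effects are assumed, not the article's calibrated model.

```python
# Scoring attacker options with multiattribute prospect theory. Value and
# probability-weighting functions use textbook Tversky-Kahneman parameters;
# all options, attribute weights, and screening effects are assumed.

def pt_value(x, alpha=0.88, beta=0.88, lam=2.25):
    # Prospect-theory value function with the reference point at 0
    return x ** alpha if x >= 0 else -lam * (-x) ** beta

def pt_weight(p, gamma=0.61):
    # Inverse-S-shaped probability weighting
    return p ** gamma / (p ** gamma + (1 - p) ** gamma) ** (1 / gamma)

def attacker_score(p_success, gains, cost, weights):
    # Success yields gains on the attacker's objectives; failure loses the
    # resources committed to the attempt.
    gain_value = sum(w * pt_value(g) for w, g in zip(weights, gains))
    loss_value = pt_value(-cost)
    return pt_weight(p_success) * gain_value + pt_weight(1 - p_success) * loss_value

weights = (0.6, 0.4)    # assumed weights on (impact, publicity)

options = {
    # name: (P(success) without screening, P(success) with screening,
    #        (impact, publicity) if successful, resources committed)
    "route A": (0.40, 0.15, (8.0, 6.0), 3.0),
    "route B": (0.55, 0.30, (5.0, 4.0), 2.0),
}

for label, idx in (("no screening", 0), ("with screening", 1)):
    print(label)
    for name, (p_no, p_yes, gains, cost) in options.items():
        p = (p_no, p_yes)[idx]
        print(f"  {name}: prospect score = {attacker_score(p, gains, cost, weights):.2f}")
```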

14.
A Monte Carlo simulation is incorporated into a risk assessment for trichloroethylene (TCE) using physiologically-based pharmacokinetic (PBPK) modeling coupled with the linearized multistage model to derive human carcinogenic risk extrapolations. The Monte Carlo technique incorporates physiological parameter variability to produce a statistically derived range of risk estimates that quantifies specific uncertainties associated with PBPK risk assessment approaches. Both inhalation and ingestion exposure routes are addressed. Simulated exposure scenarios were consistent with those used by the Environmental Protection Agency (EPA) in its TCE risk assessment. Mean values of physiological parameters were gathered from the literature for both mice (the carcinogenic bioassay subjects) and humans. Realistic physiological value distributions were assumed using existing data on variability. Mouse cancer bioassay data were correlated with total TCE metabolized and with area-under-the-curve (blood concentration) trichloroacetic acid (TCA) as determined by a mouse PBPK model. These internal dose metrics were used in a linearized multistage model analysis to determine the dose metric values corresponding to a 10⁻⁶ lifetime excess cancer risk. Using a human PBPK model, these metabolized doses were then extrapolated to equivalent human exposures (inhalation and ingestion). The Monte Carlo iterations with varying mouse and human physiological parameters yielded a range of human exposure concentrations corresponding to a 10⁻⁶ risk.
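A minimal sketch of the Monte Carlo idea using a one-compartment, linear-uptake stand-in for a full PBPK model (every distribution and the target dose metric are assumed; the article's mouse-to-human PBPK models are far more detailed): physiological variability is propagated to the inhaled concentration whose metabolized dose corresponds to a 10⁻⁶ risk under low-dose-linear extrapolation.

```python
# Monte Carlo over human physiological variability, propagated to the inhaled
# concentration at a target (e.g., 1e-6 risk) metabolized dose. One-compartment,
# linear-uptake toy model; every parameter and the target value are assumed.
import numpy as np

rng = np.random.default_rng(5)
n = 50_000

# Metabolized dose associated with 1e-6 extra risk, as if taken from an animal
# bioassay / multistage fit (value assumed for illustration)
target_metabolized = 1.0e-3                               # mg metabolized / kg / day

# Human physiological parameters (assumed distributions)
body_weight   = rng.normal(70.0, 10.0, n).clip(min=40.0)  # kg
alveolar_vent = rng.normal(0.4, 0.08, n).clip(min=0.2)    # m^3 air / hr
extraction    = rng.uniform(0.4, 0.7, n)                  # fraction of inhaled TCE metabolized

# In the linear (unsaturated) range, daily metabolized dose per unit air
# concentration C [mg/m^3]:
metabolized_per_unit_c = 24 * alveolar_vent * extraction / body_weight  # (mg/kg/day) per (mg/m^3)

# Concentration whose metabolized dose equals the target (hence ~1e-6 risk)
c_at_target = target_metabolized / metabolized_per_unit_c               # mg/m^3

lo, med, hi = np.percentile(c_at_target, [5, 50, 95])
print(f"air concentration at target risk: median {med:.2e} mg/m^3, "
      f"90% interval [{lo:.2e}, {hi:.2e}]")
```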
