Similar Literature
Showing 20 similar records.
1.
Some program managers share a common belief that adding a redundant component to a system cuts the probability of failure in half. This is true only if the failures of the redundant components are independent events, which is rarely the case; for example, the redundant components may be subjected to the same external loads. In general, however, redundancy does decrease the system's failure probability. The redundant element nonetheless comes at a cost, even if that cost is less than the cost of developing the first one when both are based on the same design. Identical parts save the most in design costs, but they are subject to common failure modes from possible design errors, which limit the effectiveness of the redundancy. In the development of critical systems, managers thus need to decide whether the costs of a parallel system are justified by the increase in the system's reliability. NASA, for example, has used redundant spacecraft to increase the chances of mission success, which worked well in the cases of the Viking and Voyager missions. These two successes, however, do not guarantee future ones. We present here a risk analysis framework accounting for dependencies to support the decision to launch simultaneously a twin mission of identical spacecraft, given the incremental costs and risk-reduction benefits of the second one. We illustrate this analytical approach with the case of the Mars Exploration Rovers launched by NASA in 2003, for which we had performed this assessment in 2001.
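The dependence effect described in this abstract can be sketched with the standard beta-factor common-cause model, where a fraction beta of each component's failure probability is attributed to a shared cause that fails both units at once. The numbers below are illustrative, not taken from the MER assessment.

```python
def parallel_failure_prob(p: float, beta: float) -> float:
    """Failure probability of two redundant components with total
    per-component failure probability p, of which beta*p comes from a
    common cause that fails both, and (1 - beta)*p fails each
    component independently."""
    p_common = beta * p
    p_indep = (1 - beta) * p
    # The pair fails if the common cause strikes, or (otherwise)
    # both components fail independently.
    return p_common + (1 - p_common) * p_indep ** 2

p = 0.10
print(parallel_failure_prob(p, beta=0.0))   # fully independent: 0.01
print(parallel_failure_prob(p, beta=1.0))   # fully coupled: redundancy buys nothing, 0.10
print(parallel_failure_prob(p, beta=0.1))   # modest coupling: between the two
```

Even a small beta moves the system failure probability far above the independent-case value, which is the quantitative heart of the twin-spacecraft decision problem.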

2.
Terje Aven, Risk Analysis, 2010, 30(3): 354-360
It is a common perspective in risk analysis that there are two kinds of uncertainty: (i) variability resulting from heterogeneity and stochasticity (aleatory uncertainty) and (ii) partial ignorance or epistemic uncertainty resulting from systematic measurement error and lack of knowledge. Probability theory is recognized as the proper tool for treating aleatory uncertainty, but there are different views on the best approach for describing partial ignorance and epistemic uncertainty. Subjective probabilities are often used to represent this type of ignorance and uncertainty, but several alternative approaches have been suggested, including interval analysis, probability bound analysis, and bounds based on evidence theory. It is argued that probability theory generates too precise results when the background knowledge of the probabilities is poor. In this article, we look more closely into this issue. We argue that this critique of probability theory rests on a conception of risk assessment as a tool to objectively report on the true risk and variabilities. If risk assessment is instead seen as a method for describing the analysts' (and possibly other stakeholders') uncertainties about unknown quantities, the alternative approaches (such as interval analysis) often fail to provide the necessary decision support.
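A one-line flavor of the interval-analysis alternative discussed above: when only bounds on two event probabilities are known, the bound on their conjunction (independence assumed) follows from the endpoint products. The intervals below are invented for illustration.

```python
def interval_product(p_a, p_b):
    """Bounds on P(A and B) for independent A and B when only
    intervals (lo, hi) for P(A) and P(B) are known."""
    return (p_a[0] * p_b[0], p_a[1] * p_b[1])

# A subjective-probability analyst would commit to single numbers,
# e.g. P(A) = 0.2 and P(B) = 0.3; the interval analyst reports a range.
bounds = interval_product((0.1, 0.3), (0.2, 0.4))
print(bounds)
```

The wide output interval illustrates Aven's point: the bounds faithfully reflect poor background knowledge, but a decisionmaker asking "what do you believe?" gets no single answer to act on.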

3.
For some critical applications, both successfully accomplishing the mission and saving the system, by aborting the mission and performing a rescue procedure when a certain deterioration condition is met, are pivotal. This has motivated considerable study of mission abort policies (MAPs) to mitigate the risk of system loss over the past several years, especially for standby systems, which use one or more standby sparing components to continue the mission when the online component fails, improving the mission success probability. Existing MAPs are mainly based on the number of failed online components, ignoring the status of the standby components. This article contributes by modeling standby systems subject to MAPs that depend not only on the number of failed online components but also on the number of available standby components remaining. Further, dynamic MAPs that consider an additional factor, the time elapsed from the beginning of the mission at the moment of the abort decision, are investigated. The solution methodology encompasses an event-transition-based numerical algorithm for evaluating the mission success probability and system survival probability of standby systems subject to the considered MAPs. Examples demonstrate the benefit of considering the state of standby components and the elapsed operation time in obtaining more flexible MAPs.
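The two competing objectives (mission success vs. system survival) can be illustrated with a crude Monte Carlo sketch of a cold-standby system under a failure-count abort rule. This is not the article's event-transition algorithm; the exponential lifetimes, the abort rule, and all parameter values are my own simplifying assumptions.

```python
import random

def mission_probs(tau, rescue_time, rate, n_spares, abort_after, n_trials=20000):
    """Monte Carlo sketch of a 1-out-of-(1 + n_spares) cold-standby system:
    components have exponential lifetimes with failure rate `rate`, the
    mission lasts `tau`, and the mission is aborted once `abort_after`
    online components have failed. An abort engages the next spare, which
    must survive the rescue procedure of length `rescue_time`."""
    random.seed(42)
    success = survive = 0
    for _ in range(n_trials):
        t, failures, spares = 0.0, 0, n_spares
        while True:
            life = random.expovariate(rate)
            if t + life >= tau:
                success += 1
                survive += 1              # completing the mission saves the system
                break
            t += life
            failures += 1
            if failures >= abort_after and spares > 0:
                spares -= 1               # engage a spare only to return home
                if random.expovariate(rate) >= rescue_time:
                    survive += 1
                break
            if spares == 0:
                break                     # system lost
            spares -= 1                   # engage a spare, keep flying the mission
    return success / n_trials, survive / n_trials

p_success, p_survive = mission_probs(tau=10.0, rescue_time=1.0,
                                     rate=0.2, n_spares=2, abort_after=2)
```

Varying `abort_after` against the number of remaining spares shows the trade the article formalizes: aborting earlier sacrifices mission success probability to raise system survival probability.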

4.
Terje Aven, Risk Analysis, 2011, 31(10): 1515-1525
Few policies for risk management have created more controversy than the precautionary principle. A main problem is the sheer number of different definitions and interpretations. Almost all definitions of the precautionary principle identify “scientific uncertainties” as the trigger or criterion for its invocation; however, the meaning of this concept is not clear. For applying the precautionary principle, it is not sufficient that the threats or hazards are uncertain; a stronger requirement is needed. This article provides an in-depth analysis of this issue. We examine how the scientific uncertainties are linked to the interpretation of the probability concept, expected values, the results of probabilistic risk assessments, the common distinction between aleatory and epistemic uncertainties, and the problem of establishing an accurate prediction model (cause-effect relationship). A new classification structure is suggested to define what scientific uncertainties mean.

5.
Probabilistic safety analysis (PSA) has been used in nuclear, chemical, petrochemical, and several other industries. The probability and/or frequency results of most PSAs are based on average component unavailabilities during the mission of interest. While these average results are useful, they provide no indication of the significance of the facility's current status when one or more components are known to be out of service. Recently, several interactive computational models have been developed for nuclear power plants to allow the user to specify the plant's status at a particular time (i.e., to specify equipment known to be out of service) and then to receive updated PSA information. As with conventional PSA results, there are uncertainties associated with the numerical updated results. These uncertainties stem from a number of sources, including parameter uncertainty (uncertainty in equipment failure rates and human error probabilities). This paper presents an analysis of the impact of parameter uncertainty on updated PSA results.
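The "updated PSA" idea can be shown on a toy fault tree: setting an out-of-service component's unavailability to 1 and recomputing the top-event probability. The tree below (two redundant pumps feeding through one valve) and its unavailabilities are invented for illustration; parameter uncertainty of the kind the paper studies would then be propagated by sampling the q's from their uncertainty distributions instead of using point values.

```python
def top_event_prob(q_pump_a, q_pump_b, q_valve):
    """Toy fault tree: the top event occurs if both redundant pumps are
    unavailable (AND gate), or the common discharge valve is (OR gate).
    Basic events are assumed independent."""
    q_pumps = q_pump_a * q_pump_b              # AND gate
    return 1 - (1 - q_pumps) * (1 - q_valve)   # OR gate

baseline = top_event_prob(0.01, 0.01, 0.001)
# Updated result when pump A is known to be out of service (unavailability = 1):
updated = top_event_prob(1.0, 0.01, 0.001)
print(baseline, updated)
```

The roughly tenfold jump from `baseline` to `updated` is exactly the status-dependent information that average-unavailability PSA results cannot show.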

6.
Probabilistic risk assessment (PRA) is a methodology for assessing the probability of failure or success of a mission. Results of the risk assessment are used to make decisions about choices of upgrades, scheduling of maintenance, the decision to launch, and so on. However, current PRA neglects the contribution of software to the risk of mission failure. Our research has developed a methodology to account for the impact of software on system failure. This article focuses on one element of the approach: a comprehensive taxonomy of software-related failure modes. Application of the taxonomy is discussed, a validation of the taxonomy and the conclusions drawn from it are described, and future research is summarized.

7.
Operational risk management of autonomous vehicles in extreme environments depends heavily on expert judgments and, in particular, on judgments of the likelihood that a failure mitigation action, via correction and prevention, will annul the consequences of a specific fault. However, extant research has not examined the reliability of experts in estimating the probability of failure mitigation. For systems operating in extreme environments, the probability of failure mitigation is taken as a proxy for the probability of a fault not reoccurring. Using a priori expert judgments for an autonomous underwater vehicle mission in the Arctic and a posteriori mission field data, we developed a generalized linear model that enabled us to investigate this relationship. We found that the probability of failure mitigation alone cannot be used as a proxy for the probability of a fault not reoccurring. We conclude that it is also essential to include the effort to implement the failure mitigation when estimating the probability of a fault not reoccurring. The effort is the time taken by a person (measured in person-months) to execute the task required to implement the fault correction action. We show that once a modicum of operational data is obtained, it is possible to define a generalized linear logistic model to estimate the probability of a fault not reoccurring. We discuss how our findings apply to all autonomous vehicle operations and how similar operations can benefit from revising expert judgments of risk mitigation to account for the effort required to reduce key risks.
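A generalized linear logistic model of the kind described can be fit in a few lines once field data are available. The tiny dataset below (mitigation effort in person-months vs. whether the fault did not reoccur) and the plain gradient-ascent fit are entirely illustrative, not the article's model or data.

```python
import math

def fit_logistic(xs, ys, lr=0.1, steps=5000):
    """Fit P(y = 1 | x) = 1 / (1 + exp(-(a + b*x))) by gradient ascent on
    the log-likelihood. Here x is the mitigation effort (person-months)
    and y = 1 means the fault did not reoccur."""
    a = b = 0.0
    n = len(xs)
    for _ in range(steps):
        ga = gb = 0.0
        for x, y in zip(xs, ys):
            p = 1 / (1 + math.exp(-(a + b * x)))
            ga += y - p            # gradient w.r.t. intercept
            gb += (y - p) * x      # gradient w.r.t. slope
        a += lr * ga / n
        b += lr * gb / n
    return a, b

effort = [0.1, 0.2, 0.5, 1.0, 1.5, 2.0, 3.0, 4.0]   # hypothetical person-months
fixed  = [0,   0,   0,   1,   0,   1,   1,   1]     # hypothetical outcomes
a, b = fit_logistic(effort, fixed)
```

A positive fitted slope `b` corresponds to the article's conclusion: higher implementation effort is associated with a higher probability that the fault does not reoccur.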

8.
An Evidence-Theory-Based Security Risk Assessment Model for Network Information Systems under Multiple Uncertainties
Feng Nan, Xie Jing, 管理学报 (Chinese Journal of Management), 2011, 8(4): 614-620, 627
Building on evidence theory, this paper constructs a security risk assessment model for network information systems (NIS) that can accommodate multiple kinds of uncertainty. The model establishes a security risk assessment index system and quantifies the index weights; it redefines the basic probability assignment function to describe the uncertainty of evidence in the assessment process; and it implements a consistency check on the evidence, with an accompanying adjustment method, to further reduce the uncertainty introduced by expert judgment. Finally, an empirical analysis verifies the correctness and effectiveness of the model.
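The core evidence-theory machinery such a model builds on is Dempster's rule of combination, which fuses two experts' basic probability assignments while renormalizing away their conflict. The focal elements and mass values below are invented; the paper's redefined assignment function and consistency check are not reproduced here.

```python
from itertools import product

def dempster_combine(m1, m2):
    """Dempster's rule of combination for two basic probability
    assignments, given as dicts mapping frozenset focal elements to
    masses. Conflicting mass (empty intersections) is renormalized out."""
    combined = {}
    conflict = 0.0
    for (a, x), (b, y) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + x * y
        else:
            conflict += x * y
    if conflict >= 1.0:
        raise ValueError("total conflict: masses cannot be combined")
    return {k: v / (1 - conflict) for k, v in combined.items()}

# Two hypothetical experts rating the risk level of one index:
m_expert1 = {frozenset({"high"}): 0.6, frozenset({"high", "low"}): 0.4}
m_expert2 = {frozenset({"high"}): 0.5, frozenset({"low"}): 0.3,
             frozenset({"high", "low"}): 0.2}
combined = dempster_combine(m_expert1, m_expert2)
```

Mass assigned to the full frame (here `{"high", "low"}`) is how evidence theory expresses an expert's partial ignorance, which is what makes it attractive for the multi-uncertainty setting of the paper.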

9.
A central part of probabilistic public health risk assessment is the selection of probability distributions for the uncertain input variables. In this paper, we apply the first-order reliability method (FORM)(1-3) as a probabilistic tool to assess the effect of the probability distributions of the input random variables on the probability that risk exceeds a threshold level (termed the probability of failure) and on the relevant probabilistic sensitivities. The analysis was applied to a case study given by Thompson et al.(4) on cancer risk caused by the ingestion of benzene-contaminated soil. Normal, lognormal, and uniform distributions were used in the analysis. The results show that the selection of a probability distribution function for the uncertain variables in this case study had a moderate impact on the probability that values would fall above a given threshold risk when the threshold was at the 50th percentile of the original distribution given by Thompson et al.,(4) and a much greater impact when the threshold risk level was at the 95th percentile. The impact on uncertainty sensitivity, however, showed the reverse trend: it was more appreciable at the 50th percentile of the original distribution of risk than at the 95th percentile. Nevertheless, the choice of distribution shape did not alter the order of probabilistic sensitivity of the basic uncertain variables.
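The headline finding (distribution choice matters little near the median, much more in the tail) is easy to reproduce with a crude Monte Carlo stand-in for FORM. The two candidate distributions below are matched in mean and standard deviation; all values are illustrative and have nothing to do with the benzene case-study numbers.

```python
import random

def p_exceed(sampler, threshold, n=200000):
    """Monte Carlo estimate of the probability that the uncertain risk
    drawn by `sampler` exceeds `threshold` (the 'probability of failure')."""
    return sum(sampler() > threshold for _ in range(n)) / n

random.seed(0)
mean, sd = 1.0, 0.5
# Two candidate input distributions with (approximately) the same mean and sd:
normal = lambda: random.gauss(mean, sd)
lognormal = lambda: random.lognormvariate(-0.1116, 0.4724)

# Near the median of the normal distribution, the choice matters moderately...
p50_n, p50_l = p_exceed(normal, 1.0), p_exceed(lognormal, 1.0)
# ...but at the normal distribution's 95th percentile it matters much more:
p95_n, p95_l = p_exceed(normal, 1.82), p_exceed(lognormal, 1.82)
```

The lognormal's heavier right tail inflates the exceedance probability at the high threshold far more, in relative terms, than at the median, mirroring the pattern the paper reports.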

10.
The distributional approach for uncertainty analysis in cancer risk assessment is reviewed and extended. The method considers a combination of bioassay study results, targeted experiments, and expert judgment regarding biological mechanisms to predict a probability distribution for uncertain cancer risks. Probabilities are assigned to alternative model components, including the determination of human carcinogenicity, mode of action, the dosimetry measure for exposure, the mathematical form of the dose-response relationship, the experimental data set(s) used to fit the relationship, and the formula used for interspecies extrapolation. Alternative software platforms for implementing the method are considered, including Bayesian belief networks (BBNs) that facilitate assignment of prior probabilities, specification of relationships among model components, and identification of all output nodes on the probability tree. The method is demonstrated using the application of Evans, Sielken, and co-workers for predicting cancer risk from formaldehyde inhalation exposure. Uncertainty distributions are derived for maximum likelihood estimate (MLE) and 95th percentile upper confidence limit (UCL) unit cancer risk estimates, and the effects of resolving selected model uncertainties on these distributions are demonstrated, considering both perfect and partial information for these model components. A method for synthesizing the results of multiple mechanistic studies is introduced, considering the assessed sensitivities and selectivities of the studies for their targeted effects. A highly simplified example is presented illustrating assessment of genotoxicity based on studies of DNA damage response caused by naphthalene and its metabolites. The approach can provide a formal mechanism for synthesizing multiple sources of information using a transparent and replicable weight-of-evidence procedure.
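Synthesizing mechanistic studies via their assessed sensitivities and selectivities amounts to a sequence of Bayes updates in odds form. The sketch below shows that mechanism on invented numbers; it is not the paper's formaldehyde or naphthalene analysis.

```python
def posterior(prior, results, sens, spec):
    """Bayes-update the probability of a hypothesis (e.g., 'compound is
    genotoxic') from several studies. For each study, sens is
    P(positive | hypothesis true) and spec is the selectivity,
    P(negative | hypothesis false); results holds True for a positive
    finding, False for a negative one."""
    odds = prior / (1 - prior)
    for positive, se, sp in zip(results, sens, spec):
        likelihood_ratio = se / (1 - sp) if positive else (1 - se) / sp
        odds *= likelihood_ratio
    return odds / (1 + odds)

# Hypothetical weight-of-evidence run: two positive studies, one negative.
p = posterior(prior=0.3,
              results=[True, True, False],
              sens=[0.9, 0.8, 0.7],
              spec=[0.9, 0.85, 0.8])
```

Because the update is multiplicative in the likelihood ratios, a low-selectivity study (one prone to false positives) contributes a ratio near 1 and properly carries little weight, which is the point of tracking sensitivity and selectivity per study.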

11.
The abandoned-mine legacy is critical in many countries around the world, where mine cave-ins and surface subsidence are perpetual risks that can affect the population, infrastructure, historical legacies, land use, and the environment. This article establishes risk evaluation approaches and quantification techniques for abandoned metal mine failures based on the Canadian mining experience. These rest on clear geomechanics considerations, such as failure mechanisms, which depend on well-defined rock mass parameters. Quantified risk is computed as the probability of failure (obtained probabilistically from limit-equilibrium factors of safety or from applicable numerical-modeling factor-of-safety quantifications) times a consequence impact value. Semi-quantified risk can be based on empirical data from failure case studies used to calculate the probability of failure, and personal experience can provide qualified hazard and impact-consequence assessments. The article provides outlines for land use and the selection of remediation measures based on risk.

12.
Terje Aven, Risk Analysis, 2013, 33(3): 462-468
The risk appetite concept has received considerable attention recently in enterprise risk management contexts. A number of definitions exist, most with a link to risk acceptability, but also to values and goals. The usefulness of the concept is, however, disputed; some authors argue that we can in fact do better without it. In this article, we provide a thorough discussion of what the risk appetite concept is actually trying to express and how it can best be used in the relevant decision making. The main purposes of the article are (i) to argue that the risk appetite concept, suitably interpreted, has a role to play in risk management, (ii) to show that the risk appetite concept is well supported by some types of risk perspectives and not by others, and (iii) to show how the risk appetite concept is linked to related concepts such as risk seeking and risk acceptability. The risk perspectives studied range from expected-value and probability-based definitions of risk to views of risk founded on uncertainties.

13.
Louis Anthony Cox, Jr., Risk Analysis, 2006, 26(6): 1581-1599
This article introduces an approach to estimating the uncertain potential effects on lung cancer risk of removing a particular constituent, cadmium (Cd), from cigarette smoke, given the useful but incomplete scientific information available about its modes of action. The approach considers normal cell proliferation; DNA repair inhibition in normal cells affected by initiating events; proliferation, promotion, and progression of initiated cells; and death or sparing of initiated and malignant cells as they are further transformed to become fully tumorigenic. Rather than estimating unmeasured model parameters by curve fitting to epidemiological or animal experimental tumor data, we attempt rough estimates of parameters based on their biological interpretations and comparison to corresponding genetic polymorphism data. The resulting parameter estimates are admittedly uncertain and approximate, but they suggest a portfolio approach to estimating impacts of removing Cd that gives usefully robust conclusions. This approach views Cd as creating a portfolio of uncertain health impacts that can be expressed as biologically independent relative risk factors having clear mechanistic interpretations. Because Cd can act through many distinct biological mechanisms, it appears likely (subjective probability greater than 40%) that removing Cd from cigarette smoke would reduce smoker risks of lung cancer by at least 10%, although it is possible (consistent with what is known) that the true effect could be much larger or smaller. Conservative estimates and assumptions made in this calculation suggest that the true impact could be greater for some smokers. This conclusion appears to be robust to many scientific uncertainties about Cd and smoking effects.
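The portfolio view can be sketched in two lines: biologically independent mechanisms combine multiplicatively as relative-risk factors, and removing the constituent sets each of its factors back to 1. The three factor values below are hypothetical, not Cox's estimates.

```python
def combined_rr(rr_factors):
    """Combine biologically independent relative-risk factors
    multiplicatively into an overall relative risk."""
    out = 1.0
    for rr in rr_factors:
        out *= rr
    return out

# Hypothetical Cd-mediated mechanisms, each a modest relative-risk factor:
cd_factors = [1.05, 1.04, 1.03]
rr_with_cd = combined_rr(cd_factors)
# Removing Cd sets each factor to 1, so the risk reduction is:
reduction = 1 - 1 / rr_with_cd
```

Even with every individual factor small, several independent mechanisms compound to a double-digit percentage reduction, which is the robustness intuition behind the article's "at least 10%" conclusion.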

14.
Congress is currently considering adopting a mathematical formula to assign shares in cancer causation to specific doses of radiation, for use in establishing liability and compensation awards. The proposed formula, if it were sound, would allow difficult problems in tort law and public policy to be resolved by reference to tabulated "probabilities of causation." This article examines the statistical and conceptual bases of the proposed methodology. We find that the proposed formula is incorrect as an expression for "probability of causation," that it embeds hidden, debatable policy judgments in its treatment of factor interactions and uncertainties, and that it cannot in general be quantified with sufficient precision to be useful. Three generic sources of statistical uncertainty are identified (sampling variability, population heterogeneity, and error propagation) that prevent accurate quantification of "assigned shares." These uncertainties arise whenever aggregate epidemiological or risk data are used to draw causal inferences about individual cases.
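For concreteness, the standard "assigned share" formula of the kind the article critiques computes the share from the relative risk at the received dose. Showing it makes the critique legible: the single number hides the sampling variability, heterogeneity, and propagated error the authors identify. The dose-RR pairing below is invented.

```python
def assigned_share(relative_risk):
    """The conventional assigned-share / probability-of-causation formula:
    AS = (RR - 1) / RR for RR > 1, where RR is the relative risk of the
    cancer at the dose received. This is the point estimate the article
    argues is conceptually and statistically inadequate."""
    if relative_risk <= 1:
        return 0.0
    return (relative_risk - 1) / relative_risk

# A dose doubling the baseline cancer rate (RR = 2, hypothetical)
# yields an assigned share of 0.5:
share = assigned_share(2.0)
```

A compensation scheme keyed to a threshold such as `share >= 0.5` then inherits all the uncertainty in RR, which is exactly why the article finds tabulated values insufficiently precise to settle individual cases.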

15.
16.
Few global threats rival climate change in scale and potential consequence. The principal international authority assessing climate risk is the Intergovernmental Panel on Climate Change (IPCC). Through repeated assessments, the IPCC has devoted considerable effort and interdisciplinary competence to articulating a common characterization of climate risk and uncertainties. We have reviewed the assessment and its foundation for the Fifth Assessment Reports published in 2013 and 2014, in particular the guidance note for lead authors of the fifth IPCC assessment report on consistent treatment of uncertainties. Our analysis shows that the work carried out by the IPCC falls short of providing a theoretically and conceptually convincing foundation for the treatment of risk and uncertainties. The main reasons for our assessment are: (i) the concept of risk is given too narrow a definition (a function of consequences and probability/likelihood); and (ii) the reports lack precision in delineating their concepts and methods. The goal of this article is to contribute to improving the handling of uncertainty and risk in future IPCC studies, thereby obtaining a more theoretically substantiated characterization as well as enhanced scientific quality for risk analysis in this area. Several suggestions for improving the treatment of risk and uncertainty are provided.

17.
Quantitative Assessment of Building Fire Risk to Life Safety
This article presents a quantitative risk assessment framework for evaluating fire risk to life safety. Fire risk is divided into two parts: the probability and the corresponding consequence of each fire scenario. The time-dependent event tree technique is used to analyze probable fire scenarios based on the effect of fire protection systems on fire spread and smoke movement. To obtain the variation of occurrence probability with time, a Markov chain is combined with the time-dependent event tree for stochastic analysis of the occurrence probability of fire scenarios. To obtain the consequences of each fire scenario, several uncertainties are considered in the risk analysis process. When calculating the onset time of untenable conditions, a range of fires is designed based on different fire growth rates, after which the uncertainty of the onset time can be characterized by a probability distribution. When calculating occupant evacuation time, occupant premovement time is modeled as a probability distribution. The consequences of a fire scenario can then be evaluated from the probability distributions of evacuation time and of the onset time of untenable conditions, and fire risk to life safety can be evaluated from the occurrence probability and consequences of every fire scenario. To illustrate the method in detail, a commercial building is presented as a case study, and the assessment result is compared with fire statistics.
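The probability-times-consequence decomposition over an event tree can be sketched in miniature: branch on the success of each protection system, multiply branch probabilities, and weight each scenario's life-loss consequence. The branch probabilities and consequences below are invented, and the time dependence of the article's Markov treatment is omitted.

```python
from itertools import product

def fire_risk(p_sprinkler, p_alarm, consequences):
    """Tiny static event-tree sketch: scenarios are the four combinations
    of sprinkler and alarm success/failure; risk is the probability-
    weighted sum of each scenario's consequence (expected fatalities)."""
    risk = 0.0
    for sprinkler_ok, alarm_ok in product((True, False), repeat=2):
        p = (p_sprinkler if sprinkler_ok else 1 - p_sprinkler) * \
            (p_alarm if alarm_ok else 1 - p_alarm)
        risk += p * consequences[(sprinkler_ok, alarm_ok)]
    return risk

# Hypothetical consequences (fatalities) for each (sprinkler, alarm) scenario:
consequences = {(True, True): 0.0, (True, False): 0.1,
                (False, True): 0.5, (False, False): 2.0}
expected_loss = fire_risk(p_sprinkler=0.95, p_alarm=0.9, consequences=consequences)
```

In the article's full framework, each scenario's consequence would itself be a distribution derived from evacuation time versus onset time of untenable conditions, rather than the fixed numbers used here.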

18.
Ralph F. Miles, Jr., Risk Analysis, 2004, 24(2): 415-424
This article develops a decision-theoretic methodology, the risk-adjusted mission value (RAMV), for selecting between alternative missions in the presence of uncertainty about their outcomes. The methodology permits trading off mission risk for mission value, something that probabilistic risk analysis cannot do unless it explicitly incorporates both mission value and the risk aversion of the project management. In its complete implementation, the methodology is consistent with the decision theory known as expected utility theory, although it differs from conventional decision theory in that the probabilities and all but one of the utilities are not those of the decision maker. The article also introduces a new interpretation of risk aversion. The methodology is consistent with the elementary management concept of division of labor. One example is presented for selecting among discrete alternatives (four landing sites on Mars), and a second for selecting among a set of continuous alternatives (a comet flyby distance). The methodology is developed in the context of scientific missions, but it is equally applicable to any situation requiring outcome value judgments, probability judgments, and risk aversion judgments by different constituencies.
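One common way to turn an uncertain mission value into a single risk-adjusted number is the certainty equivalent under an exponential utility function. This is a generic expected-utility sketch, not Miles's specific RAMV formulation; the utility form and all parameter values are assumptions.

```python
import math

def certainty_equivalent(outcomes, probs, risk_tolerance):
    """Risk-adjusted value of a lottery over mission outcome values,
    using exponential utility u(v) = 1 - exp(-v / risk_tolerance):
    the certainty equivalent solves u(ce) = E[u(V)]."""
    expected_utility = sum(p * (1 - math.exp(-v / risk_tolerance))
                           for v, p in zip(outcomes, probs))
    return -risk_tolerance * math.log(1 - expected_utility)

# Hypothetical landing site: 50% chance of full science value 10, 50% of 0.
ce = certainty_equivalent([0.0, 10.0], [0.5, 0.5], risk_tolerance=5.0)
```

The certainty equivalent lands below the expected value of 5, and shrinking `risk_tolerance` widens that gap: the penalty a risk-averse project office would apply when comparing this site to a safer one of lower peak value.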

19.
20.
Probabilistic risk analysis, based on the identification of failure modes, points to technical malfunctions and operator errors that can be direct causes of system failure. Yet component failures and operator errors are often rooted in management decisions and organizational factors. Extending the analysis to identify these factors allows more effective risk management strategies and permits a more realistic assessment of the overall failure probability. An implicit assumption often made in PRA is that, on the whole, the system has been designed according to specified norms and constructed as designed. Such an analysis tends to overemphasize scenarios in which the system fails because it is subjected to a much higher load than those for which it was designed. In this article, we find that, for the case of jacket-type offshore platforms, this class of scenarios contributes only about 5% of the failure probability. We link the PRA inputs to decisions and errors during the three phases of design, construction, and operation of platforms, and we assess the contribution of different types of error scenarios to the overall probability of platform failure. We compute the benefits of improving the design review and find that, given the costs involved, improving the review process is a more efficient way to increase system safety than reinforcing the structure.
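The closing comparison (design review vs. structural reinforcement) is a risk-reduction-per-cost calculation. The sketch below shows its shape only; the delta-probabilities and cost units are hypothetical, chosen merely to echo the article's finding that pure load-exceedance scenarios are a small slice of the failure probability.

```python
def cost_effectiveness(delta_pf, cost):
    """Failure-probability reduction bought per unit cost by a
    candidate safety measure."""
    return delta_pf / cost

# Hypothetical scenario-class contributions to annual failure probability:
contributions = {"design_error": 5.0e-4, "construction_error": 2.5e-4,
                 "operation_error": 2.0e-4, "pure_load_exceedance": 0.5e-4}
total_pf = sum(contributions.values())
load_share = contributions["pure_load_exceedance"] / total_pf   # ~5%

# Improved design review attacks the dominant class cheaply; reinforcing
# the structure mainly attacks the small load-exceedance class, at high cost.
review = cost_effectiveness(delta_pf=4.0e-4, cost=1.0)
reinforce = cost_effectiveness(delta_pf=0.6e-4, cost=5.0)
```

Under these assumed numbers the review dominates, illustrating why extending PRA to organizational error classes changes which safety investment looks best.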
