Similar Articles
 Found 20 similar articles (search time: 31 ms)
1.
Hybrid Processing of Stochastic and Subjective Uncertainty Data   (cited 1 time: 0 self-citations, 1 by others)
Uncertainty analyses typically recognize separate stochastic and subjective sources of uncertainty, but do not systematically combine the two, although a large amount of data used in analyses is partly stochastic and partly subjective. We have developed a methodology for mathematically combining stochastic and subjective sources of data uncertainty, based on new "hybrid number" approaches. The methodology can be utilized in conjunction with various traditional techniques, such as PRA (probabilistic risk assessment) and risk analysis decision support. Hybrid numbers have previously been examined as a potential method to represent combinations of stochastic and subjective information, but mathematical processing has been impeded by the requirements inherent in the structure of the numbers; for example, there was no known way to multiply hybrids. In this paper, we demonstrate methods for calculating with hybrid numbers that avoid these difficulties. By formulating a hybrid number as a probability distribution that is only fuzzily known, or alternatively as a random distribution of fuzzy numbers, methods are demonstrated for the full suite of arithmetic operations, permitting complex mathematical calculations. We show how information about relative subjectivity (the ratio of subjective to stochastic knowledge about a particular datum) can be incorporated. Techniques are also developed for conveying uncertainty information visually, so that the stochastic and subjective components of the uncertainty, as well as the ratio of knowledge about the two, are readily apparent. The techniques demonstrated can process uncertainty information for independent, uncorrelated data, and for some types of dependent and correlated data. Example applications are suggested, illustrative problems are shown, and graphical results are given.
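The "random distribution of fuzzy numbers" idea can be sketched in a few lines. This is a minimal illustration, not the authors' implementation: it assumes a hybrid number is a triangular fuzzy number whose mode is random (the stochastic part) with a fixed spread (the subjective part), and multiplies two hybrids by alpha-cut interval arithmetic; all parameter values are invented.

```python
import random

def alpha_cut(tri, a):
    """Interval of a triangular fuzzy number (lo, mode, hi) at membership level a."""
    lo, m, hi = tri
    return (lo + a * (m - lo), hi - a * (hi - m))

def mul_intervals(x, y):
    """Interval product via the four corner products (handles any signs)."""
    ps = [x[0] * y[0], x[0] * y[1], x[1] * y[0], x[1] * y[1]]
    return (min(ps), max(ps))

def mul_fuzzy(t1, t2, levels=(0.0, 0.5, 1.0)):
    """Alpha-cut-wise product of two fuzzy numbers: dict level -> interval."""
    return {a: mul_intervals(alpha_cut(t1, a), alpha_cut(t2, a)) for a in levels}

def sample_hybrid(mu, sigma, spread, rng):
    """Hybrid number: the fuzzy mode is random, the +/- spread is subjective."""
    m = rng.gauss(mu, sigma)
    return (m - spread, m, m + spread)

rng = random.Random(1)
# Monte Carlo over the stochastic part, interval arithmetic over the fuzzy part:
products = [mul_fuzzy(sample_hybrid(10, 1, 2, rng), sample_hybrid(5, 0.5, 1, rng))
            for _ in range(1000)]
# At membership level 1.0 the fuzzy part collapses to the random modes,
# so the mean of the core product should be near 10 * 5 = 50.
cores = [p[1.0][0] for p in products]
mean_core = sum(cores) / len(cores)
```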

2.
A wide range of uncertainties will inevitably be introduced during the process of performing a safety assessment of engineering systems. The impact of all these uncertainties must be addressed if the analysis is to serve as a tool in the decision-making process. Uncertainties present in the components of a model (input parameters or basic events) are propagated to quantify their impact on the final results. Several methods are available in the literature, namely, the method of moments, discrete probability analysis, Monte Carlo simulation, fuzzy arithmetic, and Dempster-Shafer theory. These methods differ both in how uncertainty is characterized at the component level and in how it is propagated to the system level. Each has desirable and undesirable features, making it more or less useful in different situations. In the probabilistic framework, which is the most widely used, a probability distribution is used to characterize uncertainty. However, in situations in which one cannot specify (1) parameter values for input distributions, (2) precise probability distributions (shape), or (3) dependencies between input parameters, these methods have limitations and are found to be ineffective. In order to address some of these limitations, the article presents uncertainty analysis in the context of level-1 probabilistic safety assessment (PSA) based on a probability bounds (PB) approach. PB analysis combines probability theory and interval arithmetic to produce probability boxes (p-boxes), structures that allow comprehensive and rigorous propagation through calculations. A practical case study is also carried out with a code developed on the basis of the PB approach, and the results are compared with those of a two-phase Monte Carlo simulation.
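A p-box in the sense described above can be illustrated very simply: when a distribution's shape is known but a parameter is only known to lie in an interval, the envelope of the candidate CDFs bounds any probability of interest. The sketch below is an invented toy (normal shape, mean only known to lie in [9, 11]), not the case study's developed code.

```python
import math

def norm_cdf(x, mu, sigma):
    """Standard closed-form normal CDF via the error function."""
    return 0.5 * (1 + math.erf((x - mu) / (sigma * math.sqrt(2))))

# Epistemic uncertainty: the mean is only known to lie in [9, 11].
mus = [9 + 0.1 * k for k in range(21)]
sigma = 1.0

def pbox_cdf(x):
    """Lower/upper CDF bounds over all candidate means (a simple p-box)."""
    vals = [norm_cdf(x, mu, sigma) for mu in mus]
    return (min(vals), max(vals))

# Bounds on the failure probability P(X > 12), computed from the envelope:
lo_cdf, hi_cdf = pbox_cdf(12.0)
p_fail = (1 - hi_cdf, 1 - lo_cdf)   # interval, not a single number
```

The point of the p-box is visible in the result: instead of one failure probability, the analysis reports a rigorous interval that brackets every distribution consistent with the stated knowledge.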

3.
A Monte Carlo method is presented to study the effect of systematic and random errors on computer models, mainly those dealing with experimental data. It is a common assumption in this type of model (linear and nonlinear regression, and nonregression computer models) involving experimental measurements that the error sources are mainly random and independent, with no constant background errors (systematic errors). However, comparisons of different experimental data sources often reveal evidence of significant bias or calibration errors. The uncertainty analysis approach presented in this work is based on the analysis of cumulative probability distributions for output variables of the models involved, taking into account the effect of both types of errors. The probability distributions are obtained by performing Monte Carlo simulation coupled with appropriate definitions of the random and systematic errors. The main objectives are to detect which error source has stochastic dominance in the uncertainty propagation and to quantify the combined effect on the output variables of the models. The results from the case studies analyzed show that the approach is able to distinguish which error type has the more significant effect on the performance of the model. It was also found that systematic or calibration errors, if present, cannot be neglected in uncertainty analysis of models dependent on experimental measurements such as chemical and physical properties. The approach can be used to facilitate decision making in fields related to safety factor selection, modeling, experimental data measurement, and experimental design.
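The key mechanism, that a shared systematic error does not average out while random noise does, can be reproduced in a toy Monte Carlo. This is a hypothetical illustration with invented parameter values, not the paper's case studies.

```python
import random
import statistics

def simulate(n_runs=2000, n_points=20, bias_sd=0.5, noise_sd=1.0, seed=7):
    """Each run draws one systematic bias (shared by all measurements in the
    run) plus independent random noise per point, then reports the run mean."""
    rng = random.Random(seed)
    means = []
    for _ in range(n_runs):
        bias = rng.gauss(0, bias_sd)          # systematic (calibration) error
        data = [10.0 + bias + rng.gauss(0, noise_sd) for _ in range(n_points)]
        means.append(statistics.fmean(data))
    return means

with_bias = simulate()
no_bias = simulate(bias_sd=0.0)
# Averaging n_points measurements shrinks the random noise by sqrt(n_points),
# but the shared bias passes through untouched, so it dominates the spread.
sd_with = statistics.stdev(with_bias)
sd_without = statistics.stdev(no_bias)
```

Comparing the two cumulative distributions of run means (here summarized by their standard deviations) is exactly the kind of diagnostic the abstract describes for detecting which error type dominates.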

4.
Expert knowledge is an important source of input to risk analysis. In practice, experts might be reluctant to characterize their knowledge and the related (epistemic) uncertainty using precise probabilities. The theory of possibility allows for imprecision in probability assignments. The associated possibilistic representation of epistemic uncertainty can be combined with, and transformed into, a probabilistic representation; in this article, we show this with reference to a simple fault tree analysis. We apply an integrated (hybrid) probabilistic‐possibilistic computational framework for the joint propagation of the epistemic uncertainty on the values of the (limiting relative frequency) probabilities of the basic events of the fault tree, and we use possibility‐probability (probability‐possibility) transformations for propagating the epistemic uncertainty within purely probabilistic and possibilistic settings. The results of the different approaches (hybrid, probabilistic, and possibilistic) are compared with respect to the representation of uncertainty about the top event (limiting relative frequency) probability. Both the rationale underpinning the approaches and the computational efforts they require are critically examined. We conclude that the approaches relevant in a given setting depend on the purpose of the risk analysis, and that further research is required to make the possibilistic approaches operational in a risk analysis context.
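The hybrid propagation idea can be sketched for a single AND gate: sample the probabilistically known basic event, and carry the possibilistically known one through as alpha-cuts of a triangular possibility distribution. This is a minimal invented example (two basic events, independence assumed, made-up numbers), not the article's framework.

```python
import random

def alpha_cut(tri, a):
    """Interval of a triangular possibility distribution at membership level a."""
    lo, m, hi = tri
    return (lo + a * (m - lo), hi - a * (hi - m))

random.seed(3)
# Basic event A: probability known probabilistically (here sampled uniformly).
# Basic event B: probability known only possibilistically (triangular).
tri_b = (0.01, 0.02, 0.05)
levels = [0.0, 0.5, 1.0]

# Hybrid propagation for an AND gate: p_top = p_a * p_b, level by level.
results = []
for _ in range(500):
    p_a = random.uniform(0.1, 0.2)
    cuts = {a: (p_a * alpha_cut(tri_b, a)[0], p_a * alpha_cut(tri_b, a)[1])
            for a in levels}
    results.append(cuts)

# Average bounds on the top-event probability at each membership level:
avg = {a: (sum(r[a][0] for r in results) / len(results),
           sum(r[a][1] for r in results) / len(results))
       for a in levels}
```

Each Monte Carlo iteration yields a fuzzy interval for the top event; summarizing across iterations gives the kind of combined random/fuzzy characterization the abstract compares against purely probabilistic and purely possibilistic treatments.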

5.
In many problems of risk analysis, failure is equivalent to the event of a random risk factor exceeding a given threshold. Failure probabilities can be controlled if a decision maker is able to set the threshold at an appropriate level. This abstract situation applies, for example, to environmental risks with infrastructure controls, to supply chain risks with inventory controls, and to insurance solvency risks with capital controls. However, uncertainty around the distribution of the risk factor implies that parameter error will be present and the measures taken to control failure probabilities may not be effective. We show that parameter uncertainty increases the probability (understood as expected frequency) of failures. For a large class of loss distributions arising from increasing transformations of location‐scale families (including the log‐normal, Weibull, and Pareto distributions), the article shows that failure probabilities can be exactly calculated, as they are independent of the true (but unknown) parameters. Hence, it is possible to obtain an explicit measure of the effect of parameter uncertainty on failure probability. Failure probability can be controlled in two different ways: (1) by reducing the nominal required failure probability, depending on the size of the available data set, and (2) by modifying the distribution itself that is used to calculate the risk control. Approach (1) corresponds to a frequentist/regulatory view of probability, while approach (2) is consistent with a Bayesian/personalistic view. We furthermore show that the two approaches are consistent in achieving the required failure probability. Finally, we briefly discuss the effects of data pooling and its systemic risk implications.
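The central claim, that estimating the threshold from data inflates the realized failure frequency above its nominal level, is easy to check by simulation for the lognormal case. The setup below (plug-in 99% quantile from 20 observations) is an invented illustration of the effect, not the article's exact calculation.

```python
import math
import random
import statistics

Z99 = 2.3263478740408408  # standard normal 99% quantile

def plug_in_threshold(sample_logs):
    """Plug-in estimate of the 99th percentile of a lognormal risk factor."""
    m = statistics.fmean(sample_logs)
    s = statistics.stdev(sample_logs)
    return math.exp(m + Z99 * s)

random.seed(11)
mu, sigma, n = 0.0, 1.0, 20     # true (but unknown) parameters; small data set
trials, failures = 10000, 0
for _ in range(trials):
    logs = [random.gauss(mu, sigma) for _ in range(n)]
    thr = plug_in_threshold(logs)
    # "Failure": the next realization of the risk factor exceeds the control.
    if math.exp(random.gauss(mu, sigma)) > thr:
        failures += 1
freq = failures / trials        # realized expected frequency; nominal is 1%
```

With only 20 observations the realized frequency comes out noticeably above the nominal 1%, which is the motivation for the two correction approaches discussed in the abstract.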

6.
Mitchell J. Small, Risk Analysis, 2011, 31(10): 1561-1575
A methodology is presented for assessing the information value of an additional dosage experiment in existing bioassay studies. The analysis demonstrates the potential reduction in the uncertainty of toxicity metrics derived from expanded studies, providing insights for future studies. Bayesian methods are used to fit alternative dose‐response models using Markov chain Monte Carlo (MCMC) simulation for parameter estimation and Bayesian model averaging (BMA) is used to compare and combine the alternative models. BMA predictions for benchmark dose (BMD) are developed, with uncertainty in these predictions used to derive the lower bound BMDL. The MCMC and BMA results provide a basis for a subsequent Monte Carlo analysis that backcasts the dosage where an additional test group would have been most beneficial in reducing the uncertainty in the BMD prediction, along with the magnitude of the expected uncertainty reduction. Uncertainty reductions are measured in terms of reduced interval widths of predicted BMD values and increases in BMDL values that occur as a result of this reduced uncertainty. The methodology is illustrated using two existing data sets for TCDD carcinogenicity, fitted with two alternative dose‐response models (logistic and quantal‐linear). The example shows that an additional dose at a relatively high value would have been most effective for reducing the uncertainty in BMA BMD estimates, with predicted reductions in the widths of uncertainty intervals of approximately 30%, and expected increases in BMDL values of 5–10%. The results demonstrate that dose selection for studies that subsequently inform dose‐response models can benefit from consideration of how these models will be fit, combined, and interpreted.
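The fit-combine-interpret pipeline can be sketched with the same two model forms on synthetic data. This is a heavily simplified stand-in: grid-search maximum likelihood instead of MCMC, likelihood weights instead of full BIC/posterior weights, a weighted point estimate instead of the posterior BMD mixture, and invented dose-response counts rather than the TCDD data.

```python
import math

# Synthetic bioassay data (dose, number tested, number responding):
data = [(0.0, 50, 2), (1.0, 50, 6), (3.0, 50, 15), (10.0, 50, 38)]

def loglik(pfun):
    ll = 0.0
    for d, n, k in data:
        p = min(max(pfun(d), 1e-9), 1 - 1e-9)
        ll += k * math.log(p) + (n - k) * math.log(1 - p)
    return ll

def fit(make_pfun, grid_a, grid_b):
    """Crude grid-search maximum likelihood (a stand-in for MCMC)."""
    best = None
    for a in grid_a:
        for b in grid_b:
            ll = loglik(make_pfun(a, b))
            if best is None or ll > best[0]:
                best = (ll, a, b)
    return best

def logistic(a, b):
    return lambda d: 1.0 / (1.0 + math.exp(-(a + b * d)))

def qlinear(a, b):  # quantal-linear with background response a
    return lambda d: a + (1.0 - a) * (1.0 - math.exp(-b * d))

ll_log, a1, b1 = fit(logistic, [-4 + 0.05 * i for i in range(80)],
                     [0.02 * i for i in range(1, 60)])
ll_ql, a2, b2 = fit(qlinear, [i / 200 for i in range(1, 40)],
                    [0.01 * i for i in range(1, 60)])

# Both models have two parameters, so BIC weights reduce to likelihood weights.
w_log = 1.0 / (1.0 + math.exp(ll_ql - ll_log))
w_ql = 1.0 - w_log

def bmd10(pfun):
    """Dose giving 10% extra risk over background, found by simple scanning."""
    p0 = pfun(0.0)
    target = p0 + 0.10 * (1.0 - p0)
    d = 0.0
    while pfun(d) < target and d < 50.0:
        d += 0.01
    return d

bmd_log, bmd_ql = bmd10(logistic(a1, b1)), bmd10(qlinear(a2, b2))
bmd_bma = w_log * bmd_log + w_ql * bmd_ql  # point-estimate BMA combination
```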

7.
Terje Aven, Risk Analysis, 2010, 30(3): 354-360
It is a common perspective in risk analysis that there are two kinds of uncertainties: (i) variability resulting from heterogeneity and stochasticity (aleatory uncertainty) and (ii) partial ignorance or epistemic uncertainty resulting from systematic measurement error and lack of knowledge. Probability theory is recognized as the proper tool for treating aleatory uncertainty, but there are different views on the best approach for describing partial ignorance and epistemic uncertainty. Subjective probabilities are often used for representing this type of ignorance and uncertainty, but several alternative approaches have been suggested, including interval analysis, probability bound analysis, and bounds based on evidence theory. It is argued that probability theory generates too precise results when the background knowledge of the probabilities is poor. In this article, we look more closely into this issue. We argue that this critique of probability theory rests on a conception of risk assessment as a tool to objectively report on the true risk and variabilities. If risk assessment is instead seen as a method for describing the analysts' (and possibly other stakeholders') uncertainties about unknown quantities, the alternative approaches (such as interval analysis) often fail to provide the necessary decision support.

8.
Risk Analysis, 2018, 38(8): 1576-1584
Fault trees are used in reliability modeling to create logical models of fault combinations that can lead to undesirable events. The output of a fault tree analysis (the top event probability) is expressed in terms of the failure probabilities of basic events that are input to the model. Typically, the basic event probabilities are not known exactly, but are modeled as probability distributions; therefore, the top event probability is also represented as an uncertainty distribution. Monte Carlo methods are generally used for evaluating the uncertainty distribution, but such calculations are computationally intensive and do not readily reveal the dominant contributors to the uncertainty. In this article, a closed‐form approximation for the fault tree top event uncertainty distribution is developed, which is applicable when the uncertainties in the basic events of the model are lognormally distributed. The results of the approximate method are compared with results from two sampling‐based methods: namely, the Monte Carlo method and the Wilks method based on order statistics. It is shown that the closed‐form expression can provide a reasonable approximation to results obtained by Monte Carlo sampling, without incurring the computational expense. The Wilks method is found to be a useful means of providing an upper bound for the percentiles of the uncertainty distribution while being computationally inexpensive compared with full Monte Carlo sampling. The lognormal approximation method and Wilks's method appear to be attractive, practical alternatives for the evaluation of uncertainty in the output of fault trees and similar multilinear models.
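One standard closed-form route of this kind can be sketched as follows: under the rare-event approximation the top event probability is a sum of lognormal basic event probabilities, and a lognormal can be moment-matched to that sum and compared against Monte Carlo. The event list, error factors, and the specific moment-matching recipe below are invented for illustration and are not necessarily the article's approximation.

```python
import math
import random

# Basic events: lognormal uncertainty given as (median, error factor = p95/p50)
events = [(1e-3, 3.0), (5e-4, 5.0), (2e-3, 2.0)]
Z95 = 1.6448536269514722

def lognorm_params(median, ef):
    """(mu, sigma) of the underlying normal from median and error factor."""
    return math.log(median), math.log(ef) / Z95

# Rare-event approximation: top event probability ~ sum of basic events.
# Moment matching: fit a lognormal to the exact mean/variance of the sum.
mean = var = 0.0
for med, ef in events:
    mu, s = lognorm_params(med, ef)
    m1 = math.exp(mu + s * s / 2)
    mean += m1
    var += (math.exp(s * s) - 1) * m1 * m1
sig2 = math.log(1 + var / mean ** 2)
mu_top = math.log(mean) - sig2 / 2
p95_approx = math.exp(mu_top + Z95 * math.sqrt(sig2))

# Monte Carlo check of the same percentile:
random.seed(5)
samples = []
for _ in range(20000):
    t = 0.0
    for med, ef in events:
        mu, s = lognorm_params(med, ef)
        t += math.exp(random.gauss(mu, s))
    samples.append(t)
samples.sort()
p95_mc = samples[int(0.95 * len(samples))]
```

The approximation costs a handful of arithmetic operations versus tens of thousands of samples, which is the trade-off the abstract highlights.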

9.
10.
In risk analysis, the treatment of the epistemic uncertainty associated with the probability of occurrence of an event is fundamental. Traditionally, probabilistic distributions have been used to characterize the epistemic uncertainty due to imprecise knowledge of the parameters in risk models. On the other hand, it has been argued that in certain instances such uncertainty may be best accounted for by fuzzy or possibilistic distributions. This seems to be the case in particular for parameters for which the available information is scarce and of a qualitative nature. In practice, it is to be expected that a risk model contains some parameters affected by uncertainties that may be best represented by probability distributions and some other parameters that may be more properly described in terms of fuzzy or possibilistic distributions. In this article, a hybrid method that jointly propagates probabilistic and possibilistic uncertainties is considered and compared with pure probabilistic and pure fuzzy methods for uncertainty propagation. The analyses are carried out on a case study concerning the uncertainties in the probabilities of occurrence of accident sequences in an event tree analysis of a nuclear power plant.

11.
This article tries to clarify the potential role to be played by uncertainty theories such as imprecise probabilities, random sets, and possibility theory in the risk analysis process. Instead of opposing an objective bounding analysis, where only statistically founded probability distributions are taken into account, to the full‐fledged probabilistic approach, exploiting expert subjective judgment, we advocate the idea that both analyses are useful and should be articulated with one another. Moreover, the idea that risk analysis under incomplete information is purely objective is misconceived. The use of uncertainty theories cannot be reduced to a choice between probability distributions and intervals. Indeed, they offer representation tools that are more expressive than each of the latter approaches and can capture expert judgments while being faithful to their limited precision. Consequences of this thesis are examined for uncertainty elicitation, propagation, and at the decision‐making step.

12.
Flood loss modeling is an important component of risk analyses and decision support in flood risk management. Commonly, flood loss models describe complex damaging processes by simple, deterministic approaches like depth‐damage functions and are associated with large uncertainty. To improve flood loss estimation and to provide quantitative information about the uncertainty associated with loss modeling, a probabilistic, multivariable Bagging decision Tree Flood Loss Estimation MOdel (BT‐FLEMO) for residential buildings was developed. The application of BT‐FLEMO provides a probability distribution of estimated losses to residential buildings per municipality. BT‐FLEMO was applied and validated at the mesoscale in 19 municipalities that were affected during the 2002 flood by the River Mulde in Saxony, Germany. Validation was undertaken on the one hand via a comparison with six deterministic loss models, including both depth‐damage functions and multivariable models. On the other hand, the results were compared with official loss data. BT‐FLEMO outperforms deterministic, univariable, and multivariable models with regard to model accuracy, although the prediction uncertainty remains high. An important advantage of BT‐FLEMO is the quantification of prediction uncertainty. The probability distribution of loss estimates by BT‐FLEMO well represents the variation range of loss estimates of the other models in the case study.

13.
Risks from exposure to contaminated land are often assessed with the aid of mathematical models. The current probabilistic approach is a considerable improvement on previous deterministic risk assessment practices, in that it attempts to characterize uncertainty and variability. However, some inputs continue to be assigned as precise numbers, while others are characterized as precise probability distributions. Such precision is hard to justify, and we show in this article how rounding errors and distribution assumptions can affect an exposure assessment. The outcomes of traditional deterministic point estimates and Monte Carlo simulations were compared to probability bounds analyses. Assigning all scalars as imprecise numbers (intervals prescribed by significant digits) added about one order of magnitude of uncertainty to the deterministic point estimate. Similarly, representing probability distributions as probability boxes added several orders of magnitude to the uncertainty of the probabilistic estimate. This indicates that the uncertainty in such assessments is actually much greater than currently reported. The article suggests that full disclosure of the uncertainty may facilitate decision making by opening up a negotiation window. In the risk analysis process, it is also an ethical obligation to clarify the boundary between the scientific and social domains.
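The "intervals prescribed by significant digits" idea is mechanical enough to show directly: each reported scalar stands for the interval of values that would round to it, and interval arithmetic carries those widths through the exposure formula. The toy dose calculation below (concentration, intake, body weight) is invented for illustration, not the article's assessment.

```python
def sig_interval(x_str):
    """Interval implied by the significant digits of a reported scalar,
    e.g. '0.12' stands for anything in [0.115, 0.125]."""
    x = float(x_str)
    if '.' in x_str:
        half = 0.5 * 10 ** -(len(x_str.split('.')[1]))
    else:
        half = 0.5
    return (x - half, x + half)

def interval_mul(a, b):
    ps = [a[0] * b[0], a[0] * b[1], a[1] * b[0], a[1] * b[1]]
    return (min(ps), max(ps))

def interval_div(a, b):
    assert b[0] > 0 or b[1] < 0, "divisor interval must not contain zero"
    return interval_mul(a, (1 / b[1], 1 / b[0]))

# Toy dose = concentration * intake / body weight, all reported to 2 digits:
conc = sig_interval('0.12')    # mg/L
intake = sig_interval('2.0')   # L/day
bw = sig_interval('70')        # kg
dose = interval_div(interval_mul(conc, intake), bw)
point = 0.12 * 2.0 / 70        # the usual deterministic point estimate
width_ratio = (dose[1] - dose[0]) / point
```

Even in this three-factor toy the interval width is a double-digit percentage of the point estimate; with more factors and probability boxes in place of scalars, the inflation compounds as the abstract reports.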

14.
Asim Roy, Decision Sciences, 1989, 20(3): 591-601
This paper models the corporate takeover process as a bargaining game under uncertainty. During the takeover process, an acquirer is generally uncertain about the minimum price the target shareholders will accept. Normally, a takeover is concluded after a sequence of offers has been made. This paper derives optimal offer strategies for the buyer at each stage of this bargaining game under uncertainty. Uncertainty about the target's minimum acceptable price is represented by a probability distribution. Optimal offer strategies depend on the probability distribution of the minimum acceptable price, which can change during the offer process.

15.
We consider a dynamic pricing problem that involves selling a given inventory of a single product over a short, two‐period selling season. There is insufficient time to replenish inventory during this season, hence sales are made entirely from inventory. The demand for the product is a stochastic, nonincreasing function of price. We assume interval uncertainty for demand, that is, knowledge of upper and lower bounds but not a probability distribution, with no correlation between the two periods. We minimize the maximum total regret over the two periods that results from the pricing decisions. We consider a dynamic model where the decision maker chooses the price for each period contingent on the remaining inventory at the beginning of the period, and a static model where the decision maker chooses the prices for both periods at the beginning of the first period. Both models can be solved by a polynomial time algorithm that solves systems of linear inequalities. Our computational study demonstrates that the prices generated by both our models are insensitive to errors in estimating the demand intervals. Our dynamic model outperforms our static model and two classical approaches that do not use demand probability distributions, when evaluated by maximum regret, average relative regret, variability, and risk measures. Further, our dynamic model generates a total expected revenue which closely approximates that of a maximum expected revenue approach which requires demand probability distributions.
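The minimax-regret criterion itself can be illustrated in a single-period simplification of this setting (one price decision, interval demand uncertainty via an unknown multiplier, capacity-limited sales). Everything here, the linear demand form, the parameter values, and the grid search, is an invented toy, not the paper's two-period linear-inequality algorithm.

```python
# Single-period minimax-regret pricing sketch: demand is d(p) = theta * (a - b*p)
# with theta only known to lie in [0.7, 1.3]; inventory C caps sales.
a, b, C = 100.0, 2.0, 60.0
prices = [5 + 0.5 * i for i in range(61)]      # candidate prices 5..35
thetas = [0.7 + 0.01 * i for i in range(61)]   # demand scenarios 0.7..1.3

def revenue(p, th):
    d = max(0.0, th * (a - b * p))
    return p * min(d, C)       # cannot sell more than the inventory

# Best achievable revenue in each scenario (with hindsight):
best = {th: max(revenue(p, th) for p in prices) for th in thetas}

def max_regret(p):
    """Worst-case shortfall of price p versus the hindsight-optimal price."""
    return max(best[th] - revenue(p, th) for th in thetas)

# Minimax-regret price: minimize the worst-case regret over scenarios.
p_star = min(prices, key=max_regret)
```

The minimax-regret price hedges between the low-demand and high-demand scenarios rather than optimizing for any single one, which is the behavior the paper's computational study evaluates over two periods.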

16.
We propose a systematic approach that incorporates fuzzy set theory in conjunction with portfolio matrices to assist managers in reaching a better understanding of the overall competitiveness of their business portfolios. Integer linear programming is also accommodated in the proposed integrated approach to help select strategic plans by using the results derived from the previous portfolio analysis and other financial data. The proposed integrated approach is designed from a strategy‐oriented perspective for portfolio management at the corporate level. It has the advantage of dealing with the uncertainty problem of decision makers in doing evaluation, providing a technique that presents the diversity of confidence and optimism levels of decision makers. Furthermore, integer linear programming is used because it offers an effective quantitative method for managers to allocate constrained resources optimally among proposed strategies. An illustration from a real‐world situation demonstrates the integrated approach. Although a particular portfolio matrix model has been adopted in our research, the procedure proposed here can be modified to incorporate other portfolio matrices.
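The strategy-selection step is a small 0-1 knapsack: maximize total attractiveness subject to a budget. The sketch below solves an invented five-plan instance by brute-force enumeration rather than an ILP solver; the plan names, scores, and costs are all hypothetical, with the scores standing in for the outputs of the fuzzy portfolio evaluation.

```python
from itertools import combinations

# Hypothetical strategic plans: (name, attractiveness score from the
# portfolio analysis, required investment)
plans = [("expand-A", 7.5, 40), ("divest-B", 3.0, 10),
         ("harvest-C", 5.0, 25), ("build-D", 6.5, 35), ("hold-E", 2.0, 5)]
budget = 70

# Brute force over all subsets (fine for a handful of plans; an ILP solver
# plays this role at realistic scale):
best_score, best_set = -1.0, ()
for r in range(len(plans) + 1):
    for combo in combinations(plans, r):
        cost = sum(c for _, _, c in combo)
        score = sum(s for _, s, _ in combo)
        if cost <= budget and score > best_score:
            best_score = score
            best_set = tuple(name for name, _, _ in combo)
```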

17.
Our concept of nine risk evaluation criteria, six risk classes, a decision tree, and three management categories was developed to improve the effectiveness, efficiency, and political feasibility of risk management procedures. The main task of risk evaluation and management is to develop adequate tools for dealing with the problems of complexity, uncertainty, and ambiguity. Based on the characteristics of different risk types and these three major problems, we distinguished three types of management: risk-based, precaution-based, and discourse-based strategies. The risk-based strategy is the common solution to risk problems. Once the probabilities and their corresponding damage potentials are calculated, risk managers are required to set priorities according to the severity of the risk, which may be operationalized as a linear combination of damage and probability or as a weighted combination thereof. Within our new risk classification, the two central components have been augmented with other physical and social criteria that still demand risk-based strategies as long as uncertainty is low and ambiguity absent. Risk-based strategies are the best solutions to problems of complexity and some components of uncertainty, for example, variation among individuals. If the two most important risk criteria, probability of occurrence and extent of damage, are relatively well known and little uncertainty is left, the traditional risk-based approach seems reasonable. If uncertainty plays a large role, in particular indeterminacy or lack of knowledge, the risk-based approach becomes counterproductive. Judging the relative severity of risks on the basis of uncertain parameters does not make much sense. Under these circumstances, management strategies belonging to the precautionary management style are required. The precautionary approach has been the basis for much of the European environmental and health protection legislation and regulation. 
Our own approach to risk management has been guided by the proposition that any conceptualization of the precautionary principle should be (1) in line with established methods of scientific risk assessment, (2) consistent and discriminatory (avoiding arbitrary results) when it comes to prioritization, and (3) at the same time, specific with respect to precautionary measures, such as ALARA or BACT, or the strategy of containing risks in time and space. This suggestion does, however, entail a major problem: looking only at the uncertainties does not provide risk managers with a clue about where to set priorities for risk reduction. Risks vary in their degree of remaining uncertainty. How can one judge the severity of a situation when the potential damage and its probability are unknown or contested? In this dilemma, we advise risk managers to use additional criteria of hazardousness, such as "ubiquity," "reversibility," and "pervasiveness over time," as proxies for judging severity. Our approach also distinguishes clearly between uncertainty and ambiguity. Uncertainty refers to a situation of being unclear about factual statements; ambiguity to a situation of contested views about the desirability or severity of a given hazard. Uncertainty can in principle be resolved by further cognitive advances (with the exception of indeterminacy); ambiguity can be resolved only by discourse. Discursive procedures include legal deliberations as well as novel participatory approaches. In addition, discursive methods of planning and conflict resolution can be used. If ambiguities are associated with a risk problem, it is not enough to demonstrate that risk regulators are open to public concerns and address the issues that many people wish them to take care of. The process of risk evaluation itself needs to be open to public input and new forms of deliberation. 
We have recommended a tested set of deliberative processes that are, at least in principle, capable of resolving ambiguities in risk debates (for a review, see Renn, Webler, & Wiedemann, 1995). Deliberative processes are needed, however, for all three types of management. Risk-based management relies on epistemological, uncertainty-based management on reflective, and discourse-based management on participatory discourse forms. These three types of discourse could be labeled an analytic-deliberative procedure for risk evaluation and management. We see the advantage of a deliberative style of regulation and management in a dynamic balance between procedure and outcome. Procedure should not have priority over the outcome; outcome should not have priority over the procedure. An intelligent combination of both can elaborate the required prerequisites of democratic deliberation and its substantial outcomes to enhance the legitimacy of political decisions (Gutmann & Thompson, 1996; Bohman, 1997, 1998).

18.
A Probabilistic Framework for the Reference Dose (Probabilistic RfD)   (cited 5 times: 0 self-citations, 5 by others)
Determining the probabilistic limits for the uncertainty factors used in the derivation of the Reference Dose (RfD) is an important step toward the goal of characterizing the risk of noncarcinogenic effects from exposure to environmental pollutants. If uncertainty factors are seen, individually, as "upper bounds" on the dose-scaling factor for sources of uncertainty, then determining comparable upper bounds for combinations of uncertainty factors can be accomplished by treating uncertainty factors as distributions, which can be combined by probabilistic techniques. This paper presents a conceptual approach to probabilistic uncertainty factors based on the definition and use of RfDs by the U.S. EPA. The approach does not attempt to distinguish one uncertainty factor from another based on empirical data or biological mechanisms, but rather uses a simple displaced lognormal distribution as a generic representation of all uncertainty factors. Monte Carlo analyses show that the upper bounds for combinations of this distribution can vary by factors of two to four when compared to the fixed-value uncertainty factor approach. The probabilistic approach is demonstrated in a comparison of Hazard Quotients based on RfDs with differing numbers of uncertainty factors.
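The core Monte Carlo argument can be sketched directly: treat each uncertainty factor as a displaced lognormal whose 95th percentile equals the conventional value of 10, multiply several of them, and compare the 95th percentile of the product against the fixed-value product 10^k. The spread parameter sigma=1 below is an arbitrary assumption, not the paper's fitted distribution, so the resulting ratios illustrate the direction of the effect rather than reproduce the paper's two-to-four range.

```python
import math
import random

Z95 = 1.6448536269514722

def sample_uf(rng, p95=10.0, sigma=1.0):
    """One uncertainty factor as a displaced lognormal on [1, inf), scaled
    so that the 95th percentile of the factor equals p95."""
    median = (p95 - 1.0) / math.exp(Z95 * sigma)
    return 1.0 + median * math.exp(rng.gauss(0.0, sigma))

rng = random.Random(13)
n = 20000
ratios = {}
for k in (2, 3):
    prods = sorted(math.prod(sample_uf(rng) for _ in range(k))
                   for _ in range(n))
    p95_combined = prods[int(0.95 * n)]
    # Fixed-value approach uses 10^k; the probabilistic 95th percentile
    # of the product is smaller, because extremes rarely coincide.
    ratios[k] = (10.0 ** k) / p95_combined
```

The ratio grows with the number of factors combined, which is the paper's point: stacking fixed "upper bound" factors becomes increasingly conservative relative to a probabilistic combination.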

19.
Risk evaluation of investment projects   (cited 1 time: 0 self-citations, 1 by others)
Charles P. Bonini, Omega, 1975, 3(6): 735-750
A survey is given of techniques for evaluation of risk in individual capital investment projects. The paper identifies the four types of relationships affecting project uncertainty: (1) Accounting-type relationships defining cash flow; (2) Statistical relationships among variables in a given time period; (3) Autocorrelation relationships among cash flows over time; and (4) Uncertainty about project life. Two types of decisions also can affect project profitability and uncertainty: (1) Strategy decisions; and (2) Abandonment decisions. Four types of models for risk evaluation are identified: (1) Certainty model; (2) Hillier model; (3) Monte Carlo model; and (4) Decision Tree model. These four types of models are compared and evaluated in terms of how easily they can incorporate the relationships and decisions mentioned above. Computational issues are also discussed. Suggestions are made for further research.

20.
Governments are responsible for making policy decisions, often in the face of severe uncertainty about the factors involved. Expert elicitation can be used to fill information gaps where data are not available, cannot be obtained, or where there is no time for a full‐scale study and analysis. Various features of distributions for variables may be elicited, for example, the mean, standard deviation, and quantiles, but uncertainty about these values is not always recorded. Distributional and dependence assumptions often have to be made in models and although these are sometimes elicited from experts, modelers may also make assumptions for mathematical convenience (e.g., assuming independence between variables). Probability boxes (p‐boxes) provide a flexible methodology to analyze elicited quantities without having to make assumptions about the distribution shape. If information about distribution shape(s) is available, p‐boxes can provide bounds around the results given these possible input distributions. P‐boxes can also be used to combine variables without making dependence assumptions. This article aims to illustrate how p‐boxes may help to improve the representation of uncertainty for analyses based on elicited information. We focus on modeling elicited quantiles with nonparametric p‐boxes, modeling elicited quantiles with parametric p‐boxes where the elicited quantiles do not match the elicited distribution shape, and modeling elicited interval information.
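A nonparametric p-box from elicited quantiles is particularly simple: between elicited points the CDF can sit anywhere consistent with monotonicity, so the bounds are step functions. The three elicited quantiles below are invented for illustration.

```python
# Elicited quantiles for an uncertain quantity (say an expert's 5th, 50th,
# and 95th percentiles) -- hypothetical values:
quantiles = [(0.05, 2.0), (0.50, 5.0), (0.95, 12.0)]

def pbox_bounds(x):
    """Nonparametric CDF bounds implied by the elicited quantiles alone,
    with no assumption about distribution shape."""
    lower, upper = 0.0, 1.0
    for p, q in quantiles:
        if x >= q:
            lower = max(lower, p)   # at or past the p-quantile, CDF >= p
        if x < q:
            upper = min(upper, p)   # strictly before the p-quantile, CDF <= p
    return (lower, upper)
```

For example, at x = 3 (between the 5th and 50th percentile points) the CDF is only known to lie in [0.05, 0.5]; any distribution passing through the elicited quantiles fits inside these bounds.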


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号