Similar Documents
20 similar documents found.
1.
A method for combining multiple expert opinions that are encoded in a Bayesian Belief Network (BBN) model is presented and applied to a problem involving the cleanup of hazardous chemicals at a site with contaminated groundwater. The method uses Bayes Rule to update each expert model with the observed evidence, then uses it again to compute posterior probability weights for each model. The weights reflect the consistency of each model with the observed evidence, allowing the aggregate model to be tailored to the particular conditions observed in the site-specific application of the risk model. The Bayesian update is easy to implement, since the likelihood for the set of evidence (observations for selected nodes of the BBN model) is readily computed by sequential execution of the BBN model. The method is demonstrated using a simple pedagogical example and subsequently applied to a groundwater contamination problem using an expert-knowledge BBN model. The BBN model in this application predicts the probability that reductive dechlorination of the contaminant trichloroethene (TCE) is occurring at a site (a critical step in demonstrating the feasibility of monitored natural attenuation for site cleanup), given information on 14 measurable antecedent and descendant conditions. The predictions of the BBN models for 21 experts are weighted and aggregated using examples of hypothetical and actual site data. The method gives more weight to those expert models that are more reflective of the site conditions, and is shown to yield an aggregate prediction that differs from that of simple model averaging in a potentially significant manner.
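The aggregation step described in this abstract can be illustrated with a minimal numerical sketch. The per-expert evidence likelihoods and predictions below are invented (in the paper they would come from executing each expert's BBN on the observed node values); only the Bayes-rule weighting itself is shown.

```python
import numpy as np

# Hypothetical likelihoods p(evidence | expert model m), e.g. obtained by
# propagating the observed node values through each expert's BBN.
likelihoods = np.array([0.12, 0.04, 0.30, 0.09])

# Equal prior weight on each expert model.
prior = np.full(len(likelihoods), 1.0 / len(likelihoods))

# Bayes' rule: posterior weight of each expert model given the evidence.
posterior = prior * likelihoods
posterior /= posterior.sum()

# Hypothetical per-model predictions P(reductive dechlorination occurring).
predictions = np.array([0.55, 0.20, 0.80, 0.40])

# Evidence-weighted aggregate vs. simple (equal-weight) model averaging.
weighted = float(posterior @ predictions)
simple_avg = float(predictions.mean())
print(f"posterior weights: {np.round(posterior, 3)}")
print(f"weighted aggregate: {weighted:.3f}  simple average: {simple_avg:.3f}")
```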

2.
This paper presents a protocol for a formal expert judgment process using a heterogeneous expert panel aimed at the quantification of continuous variables. The emphasis is on the process's requirements related to the nature of expertise within the panel, in particular the heterogeneity of both substantive and normative expertise. The process provides the opportunity for interaction among the experts so that they fully understand and agree upon the problem at hand, including qualitative aspects relevant to the variables of interest, prior to the actual quantification task. Individual experts' assessments on the variables of interest, cast in the form of subjective probability density functions, are elicited with a minimal demand for normative expertise. The individual experts' assessments are aggregated into a single probability density function per variable, thereby weighting the experts according to their expertise. Elicitation techniques proposed include the Delphi technique for the qualitative assessment task and the ELI method for the actual quantitative assessment task. The Classical model was used to weight the experts' assessments in order to construct a single distribution per variable; in this model, each expert's quality is based on his or her performance on seed variables. An application of the proposed protocol in the broad and multidisciplinary field of animal health is presented. Results of this expert judgment process showed that the proposed protocol, in combination with the proposed elicitation and analysis techniques, resulted in valid data on the (continuous) variables of interest. In conclusion, the proposed protocol for a formal expert judgment process aimed at the elicitation of quantitative data from a heterogeneous expert panel provided satisfactory results. Hence, this protocol might be useful for expert judgment studies in other broad and/or multidisciplinary fields of interest.
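A simplified sketch of performance-based ("Classical model") weighting of the kind referred to above, assuming each expert supplied 5%/50%/95% quantiles for seed variables with known realizations. Only the calibration component is computed (the information score and the weight-optimization cutoff are omitted), and all quantiles and realizations are invented.

```python
import numpy as np
from scipy.stats import chi2

# Hypothetical 5%/50%/95% quantiles from 2 experts on 4 seed variables,
# plus the realizations of those seeds (all numbers invented).
quantiles = {
    "expert_A": np.array([[1.0, 2.0, 4.0],
                          [0.5, 1.5, 3.0],
                          [2.0, 5.0, 9.0],
                          [0.1, 0.4, 0.9]]),
    "expert_B": np.array([[1.5, 2.5, 3.0],
                          [1.0, 1.2, 1.4],
                          [4.0, 4.5, 5.0],
                          [0.2, 0.3, 0.4]]),
}
realizations = np.array([2.5, 0.8, 6.0, 0.5])
p_theory = np.array([0.05, 0.45, 0.45, 0.05])   # theoretical inter-quantile probabilities

def calibration(q, x):
    """Calibration score: likelihood that the empirical inter-quantile hit
    rates could arise if the expert were statistically accurate."""
    bins = np.zeros(4)
    for qi, xi in zip(q, x):
        bins[np.searchsorted(qi, xi)] += 1      # which inter-quantile bin the realization fell in
    s = bins / len(x)
    mask = s > 0
    kl = np.sum(s[mask] * np.log(s[mask] / p_theory[mask]))   # relative entropy I(s, p)
    return chi2.sf(2 * len(x) * kl, df=3)       # 4 bins -> 3 degrees of freedom

scores = {name: calibration(q, realizations) for name, q in quantiles.items()}
total = sum(scores.values())
weights = {name: s / total for name, s in scores.items()}
print(weights)
```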

3.
As part of its preparation to review a potential license application from the U.S. Department of Energy (DOE), the U.S. Nuclear Regulatory Commission (NRC) is examining the performance of the proposed Yucca Mountain nuclear waste repository. In this regard, we evaluated postclosure repository performance using Monte Carlo analyses with an NRC-developed system model that has 950 input parameters, of which 330 are sampled to represent system uncertainties. The quantitative compliance criterion for dose was established by NRC to protect inhabitants who might be exposed to any releases from the repository. The NRC criterion limits the peak-of-the-mean dose, which in our analysis is estimated by averaging the potential exposure at any instant in time for all Monte Carlo realizations, and then determining the maximum value of the mean curve within 10,000 years, the compliance period. This procedure contrasts in important ways with a more common measure of risk based on the mean of the ensemble of peaks from each Monte Carlo realization. The NRC chose the former (peak-of-the-mean) because it more correctly represents the risk to an exposed individual. Procedures for calculating risk in the expected case of slow repository degradation differ from those for low-probability cases of disruption by external forces such as volcanism. We also explored the possibility of risk dilution (i.e., lower calculated risk) that could result from arbitrarily defining wide probability distributions for certain parameters. Finally, our sensitivity analyses to identify influential parameters used two approaches: (1) the ensemble of doses from each Monte Carlo realization at the time of the peak risk (i.e., peak-of-the-mean) and (2) the ensemble of peak doses calculated from each realization within 10,000 years. The latter measure appears to have more discriminatory power than the former for many parameters (based on the greater magnitude of the sensitivity coefficient), but can yield different rankings, especially for parameters that influence the timing of releases.
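The difference between the two risk measures discussed above comes down to the order of averaging and maximizing over the Monte Carlo output. A sketch with a synthetic dose array (realizations × time steps); the numbers have nothing to do with the NRC analysis.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic Monte Carlo output: dose[i, t] = dose for realization i at time t.
n_realizations, n_times = 500, 1000
dose = rng.lognormal(mean=-2.0, sigma=1.0, size=(n_realizations, n_times))
dose *= np.linspace(0.1, 1.0, n_times)          # crude time trend for illustration

# NRC measure: average over realizations at each time, then take the maximum.
peak_of_mean = dose.mean(axis=0).max()

# Alternative measure: peak of each realization, then average the peaks.
mean_of_peaks = dose.max(axis=1).mean()

print(f"peak of the mean: {peak_of_mean:.3f}")
print(f"mean of the peaks: {mean_of_peaks:.3f}")   # typically larger
```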

4.
Various approaches have been proposed for determining scenario probabilities to facilitate long-range planning and decision making. These include microlevel approaches based on the analysis of relevant underlying events and their interrelations and direct macrolevel examination of the scenarios. Determining a unique solution places excessive consistency and time demands on the expert and often is not guaranteed by these procedures. We propose an interactive information-maximizing scenario probability query procedure (IMQP) that exploits the desirable features of existing methods while circumventing their drawbacks. The approach requires elicitation of cardinal probability assessments and bounds for only marginal and first-order conditional events, as well as ordinal probability comparisons (probability orderings or rankings) of carefully selected scenario subsets determined using concepts of information theory. Guidelines for implementation based on simulation results are also developed. A goal program for handling inconsistent ordinal probability responses is also integrated into the procedure. Behavioral experiments comparing our approach with Expert Choice showed that the IMQP is viable, compares favorably in terms of ease of use and time requirements, and works best for problems with a large number of scenarios. Design modifications to the IMQP suggested by the experiments, such as incorporating interactive graphics, are also in progress.

5.
In statistical applications, logistic regression is a popular method for analyzing binary data accompanied by explanatory variables. But when one of the two outcomes is rare, the estimation of model parameters has been shown to be severely biased and hence estimating the probability of rare events occurring based on a logistic regression model would be inaccurate. In this article, we focus on estimating the probability of rare events occurring based on logistic regression models. Instead of selecting a best model, we propose a local model averaging procedure based on a data perturbation technique applied to different information criteria to obtain different probability estimates of rare events occurring. Then an approximately unbiased estimator of Kullback-Leibler loss is used to choose the best one among them. We design complete simulations to show the effectiveness of our approach. For illustration, a necrotizing enterocolitis (NEC) data set is analyzed.
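The flavor of averaging rare-event probability estimates from several candidate logistic models can be sketched as follows. This is not the authors' procedure: the data are synthetic, the candidate set is just two covariate choices, the data-perturbation step is skipped, and an equal-weight average stands in for the Kullback-Leibler-loss-based selection.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(7)

# Synthetic data with a rare outcome (roughly 1-2% positives).
n = 2000
x1, x2 = rng.normal(size=n), rng.normal(size=n)
y = rng.binomial(1, 1.0 / (1.0 + np.exp(-(-5.0 + 1.2 * x1 + 0.3 * x2))))

# Two candidate logistic models and a new covariate point to predict for.
designs = {
    "x1 only": (sm.add_constant(np.column_stack([x1])), np.array([[1.0, 2.0]])),
    "x1 + x2": (sm.add_constant(np.column_stack([x1, x2])), np.array([[1.0, 2.0, 0.0]])),
}

estimates = []
for name, (X, x_new) in designs.items():
    fit = sm.Logit(y, X).fit(disp=False)
    p_hat = float(fit.predict(x_new)[0])           # estimated P(rare event | x_new)
    estimates.append(p_hat)
    print(f"{name}: AIC={fit.aic:.1f}  BIC={fit.bic:.1f}  P(event)={p_hat:.4f}")

# Equal-weight average of the candidate estimates; the article instead perturbs
# the data and selects among such candidates via an estimated Kullback-Leibler loss.
print(f"averaged estimate: {np.mean(estimates):.4f}")
```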

6.
A structured expert judgment study was organized to obtain input data for a microbial risk-assessment model describing the transmission of campylobacter during broiler-chicken processing in the Netherlands. More specifically, the expert study was aimed at quantifying the uncertainty on input parameters of this model and focused on the contamination of broiler-chicken carcasses with campylobacter during processing. Following the protocol for structured expert judgment studies, expert assessments were elicited individually through subjective probability distribution functions. The classical model was used to aggregate the individual experts' distributions in order to obtain a single combined distribution per variable. Three different weighting schemes were applied, including equal weighting and performance-based weighting with and without optimization of the combined distributions. The individual experts' weights were based on their performance on the seed variables. Results of the various weighting schemes are presented in terms of performance, robustness, and combined distributions of the seed variables and some of the query variables. All three weighting schemes had adequate performance, with the optimized combined distributions significantly outperforming both the equal-weight and the nonoptimized combined distributions. Hence, this weighting scheme, having adequate robustness, was chosen for further processing of the results.

7.
In knowledge acquisition, it is often desirable to aggregate the judgments of multiple experts into a single system. In some cases this takes the form of averaging the judgments of those experts. In these situations it is desirable to determine if the experts have different views of the world before their individual judgments are aggregated. In validation, multiple experts often are employed to compare the performance of expert systems and other human actors. Often those judgments are then averaged to establish performance quality of the expert system. An important part of the comparison process should be determining if the experts have a similar view of the world. If the experts do not have similar views, their evaluations of performance may differ, resulting in a meaningless average performance measure. Alternatively, if all the validating experts do have similar views of the world then the validation process may result in paradigm myopia.
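One generic way to check whether experts share a similar view before averaging their judgments (not a method prescribed by this paper) is to inspect pairwise rank correlations of their ratings. A sketch with invented ratings:

```python
from itertools import combinations
from scipy.stats import spearmanr

# Hypothetical ratings of 6 cases by 3 experts (higher = better performance).
ratings = {
    "expert_1": [7, 5, 9, 4, 6, 8],
    "expert_2": [8, 5, 9, 3, 6, 7],
    "expert_3": [3, 9, 4, 8, 2, 5],
}

for (a, ra), (b, rb) in combinations(ratings.items(), 2):
    rho, p = spearmanr(ra, rb)
    flag = "" if rho > 0.5 else "  <-- divergent views; averaging may be misleading"
    print(f"{a} vs {b}: rho = {rho:+.2f}{flag}")
```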

8.
West, R. Webster; Kodell, Ralph L. Risk Analysis, 1999, 19(3): 453-459
Methods of quantitative risk assessment for toxic responses that are measured on a continuous scale are not well established. Although risk-assessment procedures that attempt to utilize the quantitative information in such data have been proposed, there is no general agreement that these procedures are appreciably more efficient than common quantal dose–response procedures that operate on dichotomized continuous data. This paper points out an equivalence between the dose–response models of the nonquantal approach of Kodell and West (1) and a quantal probit procedure, and provides results from a Monte Carlo simulation study to compare coverage probabilities of statistical lower confidence limits on dose corresponding to specified additional risk based on applying the two procedures to continuous data from a dose–response experiment. The nonquantal approach is shown to be superior, in terms of both statistical validity and statistical efficiency.
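A minimal sketch of the nonquantal idea referred to above, under a common normal dose-response formulation: a subject is counted as adversely affected when its continuous response falls beyond a cutoff defined relative to the control distribution, and the additional risk at dose d is the increase in that tail probability. The linear mean model and all numbers are illustrative, and this is not the authors' estimation or confidence-limit procedure.

```python
from scipy.stats import norm

# Illustrative normal dose-response model for a continuous endpoint:
# response ~ Normal(mu0 + beta * dose, sigma), adverse if below a cutoff
# set k standard deviations below the control mean.
mu0, beta, sigma, k = 100.0, -4.0, 10.0, 2.0
cutoff = mu0 - k * sigma

def additional_risk(dose):
    """P(adverse | dose) - P(adverse | 0) for the model above."""
    p_dose = norm.cdf((cutoff - (mu0 + beta * dose)) / sigma)
    p_ctrl = norm.cdf((cutoff - mu0) / sigma)
    return p_dose - p_ctrl

for d in [0.0, 1.0, 2.0, 5.0]:
    print(f"dose {d:4.1f}: additional risk = {additional_risk(d):.4f}")
```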

9.
Matthew Revie. Risk Analysis, 2011, 31(7): 1120-1132
Traditional statistical procedures for estimating the probability of an event result in an estimate of zero when no events are realized. Alternative inferential procedures have been proposed for the situation where zero events have been realized, but often these are ad hoc, relying on selecting methods dependent on the data that have been realized. Such data-dependent inference decisions violate fundamental statistical principles, resulting in estimation procedures whose benefits are difficult to assess. In this article, we propose estimating the probability of an event occurring through minimax inference on the probability that future samples of equal size realize no more events than that in the data on which the inference is based. Although motivated by inference on rare events, the method is not restricted to zero-event data and closely approximates the maximum likelihood estimate (MLE) for nonzero data. The use of the minimax procedure provides a risk-averse inferential procedure where there are no events realized. A comparison is made with the MLE, and regions of the underlying probability are identified where this approach is superior. Moreover, a comparison is made with three standard approaches to supporting inference where no event data are realized, which we argue are unduly pessimistic. We show that for situations of zero events the estimator can be simply approximated with , where n is the number of trials.
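The approximation formula and the identity of the three standard approaches are not reproduced in this extract, so the sketch below simply contrasts the MLE with three estimators commonly used when zero events have been observed; treat the choice of comparators as an assumption, not as the paper's.

```python
def zero_event_estimates(n: int) -> dict:
    """Common point/bound estimates of an event probability after n trials
    with zero observed events (illustrative comparators, not the paper's method)."""
    return {
        "MLE": 0.0 / n,
        "rule of three (approx. 95% upper bound)": 3.0 / n,
        "Laplace (uniform prior posterior mean)": 1.0 / (n + 2),
        "Jeffreys prior posterior mean": 0.5 / (n + 1),
    }

for n in (10, 100, 1000):
    print(n, {k: round(v, 5) for k, v in zero_event_estimates(n).items()})
```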

10.
Soil lead pollution is a recalcitrant problem in urban areas resulting from a combination of historical residential, industrial, and transportation practices. The emergence of urban gardening movements in postindustrial cities necessitates accurate assessment of soil lead levels to ensure safe gardening. In this study, we examined small‐scale spatial variability of soil lead within a 15 × 30 m urban garden plot established on two adjacent residential lots located in Detroit, Michigan, USA. Eighty samples collected using a variably spaced sampling grid were analyzed for total, fine fraction (less than 250 μm), and bioaccessible soil lead. Measured concentrations varied at sampling scales of 1–10 m and a hot spot exceeding 400 ppm total soil lead was identified in the northwest portion of the site. An interpolated map of total lead was treated as an exhaustive data set, and random sampling was simulated to generate Monte Carlo distributions and evaluate alternative sampling strategies intended to estimate the average soil lead concentration or detect hot spots. Increasing the number of individual samples decreases the probability of overlooking the hot spot (type II error). However, the practice of compositing and averaging samples decreased the probability of overestimating the mean concentration (type I error) at the expense of increasing the chance for type II error. The results reported here suggest a need to reconsider U.S. Environmental Protection Agency sampling objectives and consequent guidelines for reclaimed city lots where soil lead distributions are expected to be nonuniform.
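A stripped-down version of the resampling experiment described above: treat a synthetic concentration surface as the exhaustive "truth", then simulate drawing individual or composited samples and tally how often the hot spot is missed and how often the site mean is badly overestimated. The surface, the 400 ppm screening level, the 50% overestimation margin, and the sample sizes are all invented.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic "exhaustive" lead surface (ppm) on a 15 x 30 grid with one hot spot.
truth = rng.lognormal(mean=np.log(150), sigma=0.4, size=(15, 30))
truth[2:5, 3:7] *= 4.0                      # hot spot in one corner of the plot
flat, threshold = truth.ravel(), 400.0
true_mean = flat.mean()

def simulate(n_samples, composite_size=1, n_sim=5000):
    miss_hotspot = overestimate = 0
    for _ in range(n_sim):
        draws = rng.choice(flat, size=n_samples * composite_size, replace=False)
        if composite_size > 1:              # compositing: average groups of increments
            draws = draws.reshape(n_samples, composite_size).mean(axis=1)
        miss_hotspot += not np.any(draws > threshold)       # type II: hot spot overlooked
        overestimate += draws.mean() > 1.5 * true_mean      # type I flavour: mean overestimated by >50%
    return miss_hotspot / n_sim, overestimate / n_sim

for n, c in [(5, 1), (20, 1), (5, 4)]:
    p2, p1 = simulate(n, c)
    print(f"{n:2d} samples x composite of {c}: P(miss hot spot)={p2:.2f}  P(overestimate mean)={p1:.2f}")
```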

11.
This study examined what lay people mean when they judge the "risk" of activities that involve the potential for accidental fatalities (e.g., hang gliding, living near a nuclear reactor). A sample of German and American students rated the "overall risk" of 14 such activities and provided 3 fatality estimates: the number of fatalities in an "average year," the individual yearly fatality probability (or odds), and the number of fatalities in a "disastrous accident." Subjects' fatality estimates were reasonably accurate and only moderately influenced by attitudes towards nuclear energy. Individual fatality probability correlated most highly with intuitive risk ratings. Disaster estimates correlated positively with risk ratings for those activities that had a low fatality probability and a relatively high disaster potential. Annual average fatality rates did not correlate with risk ratings at all. These findings were interpreted in terms of a two-dimensional cognitive structure. Subjective notions of risk were determined primarily by the personal chance of death; for some activities, "disaster potential" played a secondary role in shaping risk perception.

12.
Our concept of nine risk evaluation criteria, six risk classes, a decision tree, and three management categories was developed to improve the effectiveness, efficiency, and political feasibility of risk management procedures. The main task of risk evaluation and management is to develop adequate tools for dealing with the problems of complexity, uncertainty, and ambiguity. Based on the characteristics of different risk types and these three major problems, we distinguished three types of management: risk-based, precaution-based, and discourse-based strategies. The risk-based strategy is the common solution to risk problems. Once the probabilities and their corresponding damage potentials are calculated, risk managers are required to set priorities according to the severity of the risk, which may be operationalized as a linear combination of damage and probability or as a weighted combination thereof. Within our new risk classification, the two central components have been augmented with other physical and social criteria that still demand risk-based strategies as long as uncertainty is low and ambiguity absent. Risk-based strategies are best solutions to problems of complexity and some components of uncertainty, for example, variation among individuals. If the two most important risk criteria, probability of occurrence and extent of damage, are relatively well known and little uncertainty is left, the traditional risk-based approach seems reasonable. If uncertainty plays a large role, in particular indeterminacy or lack of knowledge, the risk-based approach becomes counterproductive. Judging the relative severity of risks on the basis of uncertain parameters does not make much sense. Under these circumstances, management strategies belonging to the precautionary management style are required. The precautionary approach has been the basis for much of the European environmental and health protection legislation and regulation. Our own approach to risk management has been guided by the proposition that any conceptualization of the precautionary principle should be (1) in line with established methods of scientific risk assessment, (2) consistent and discriminatory (avoiding arbitrary results) when it comes to prioritization, and (3) at the same time, specific with respect to precautionary measures, such as ALARA or BACT, or the strategy of containing risks in time and space. This suggestion does, however, entail a major problem: looking only at the uncertainties does not provide risk managers with a clue about where to set priorities for risk reduction. Risks vary in their degree of remaining uncertainty. How can one judge the severity of a situation when the potential damage and its probability are unknown or contested? In this dilemma, we advise risk managers to use additional criteria of hazardousness, such as "ubiquity," "reversibility," and "pervasiveness over time," as proxies for judging severity. Our approach also distinguishes clearly between uncertainty and ambiguity. Uncertainty refers to a situation of being unclear about factual statements; ambiguity to a situation of contested views about the desirability or severity of a given hazard. Uncertainty can be resolved in principle by further cognitive advances (with the exception of indeterminacy); ambiguity can be resolved only by discourse. Discursive procedures include legal deliberations as well as novel participatory approaches. In addition, discursive methods of planning and conflict resolution can be used.
If ambiguities are associated with a risk problem, it is not enough to demonstrate that risk regulators are open to public concerns and address the issues that many people wish them to take care of. The process of risk evaluation itself needs to be open to public input and new forms of deliberation. We have recommended a tested set of deliberative processes that are, at least in principle, capable of resolving ambiguities in risk debates (for a review, see Renn, Webler, & Wiedemann, 1995). Deliberative processes are needed, however, for all three types of management. Risk-based management relies on epistemological, uncertainty-based management on reflective, and discourse-based management on participatory discourse forms. These three types of discourse could be labeled as an analytic-deliberative procedure for risk evaluation and management. We see the advantage of a deliberative style of regulation and management in a dynamic balance between procedure and outcome. Procedure should not have priority over the outcome; the outcome should not have priority over the procedure. An intelligent combination of both can elaborate the required prerequisites of democratic deliberation and its substantive outcomes to enhance the legitimacy of political decisions (Gutmann & Thompson, 1996; Bohman, 1997, 1998).

13.
Old industrial landfills are important sources of environmental contamination in Europe, including Finland. In this study, we demonstrated the combination of the TRIAD procedure, multicriteria decision analysis (MCDA), and statistical Monte Carlo analysis for assessing the risks to terrestrial biota in a former landfill site contaminated by petroleum hydrocarbons (PHCs) and metals. First, we generated hazard quotients by dividing the concentrations of metals and PHCs in soil by the corresponding risk-based ecological benchmarks. Then we conducted ecotoxicity tests using five plant species, earthworms, and potworms, and determined the abundance and diversity of soil invertebrates from additional samples. We aggregated the results in accordance with the methods used in the TRIAD procedure, rated the assessment methods based on their performance in terms of specific criteria, and weighted the criteria using two alternative weighting techniques to produce performance scores for each method. We faced problems in using the TRIAD procedure; for example, the results from the animal counts had to be excluded from the calculation of integrated risk estimates (IREs) because our reference soil sample showed the lowest biodiversity and abundance of soil animals. In addition, hormesis hampered the use of the results from the ecotoxicity tests. The final probabilistic IREs imply significant risks at all sampling locations. Although linking MCDA with TRIAD provided a useful means to study and consider the performance of the alternative methods in predicting ecological risks, some of the uncertainties involved remained outside the quantitative analysis.
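The first aggregation steps described above (hazard quotients, then a weighted combination of lines of evidence) can be sketched as follows. The concentrations, benchmarks, and weights are invented, and the scaling and aggregation rules of the actual TRIAD calculation are richer than shown.

```python
# Hypothetical measured soil concentrations (mg/kg) and risk-based ecological benchmarks.
concentrations = {"Pb": 350.0, "Zn": 900.0, "PHC_C10_C40": 1500.0}
benchmarks     = {"Pb": 120.0, "Zn": 360.0, "PHC_C10_C40": 600.0}

# Line-of-evidence risk numbers scaled to [0, 1]: chemistry from hazard quotients,
# the other two assumed to come from toxicity tests and field ecology (invented).
hq = {s: concentrations[s] / benchmarks[s] for s in concentrations}
chemistry_risk = 1.0 - 1.0 / max(hq.values())     # simple HQ-based scaling (illustrative)
lines_of_evidence = {"chemistry": chemistry_risk, "ecotoxicity": 0.6, "ecology": 0.4}

# MCDA-style performance weights for the three lines of evidence (invented).
weights = {"chemistry": 0.3, "ecotoxicity": 0.45, "ecology": 0.25}

integrated_risk = sum(weights[k] * lines_of_evidence[k] for k in weights)
print("hazard quotients:", {k: round(v, 2) for k, v in hq.items()})
print(f"integrated risk estimate: {integrated_risk:.2f}")
```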

14.
A TOPSIS-based method for hybrid multi-attribute group decision making (total citations: 4; self-citations: 0; citations by others: 4)
For hybrid multi-attribute group decision-making problems in which the evaluation information is given both as linguistic terms and as intuitionistic fuzzy numbers, this paper proposes a TOPSIS-based decision method. First, new functions are defined to convert linguistic evaluation information of different granularities into intuitionistic fuzzy numbers. Second, a new model for determining expert weights is proposed based on the entropy of intuitionistic fuzzy numbers. Third, after the individual decision matrices are aggregated into a group decision matrix using the IFWA operator, the distances from the group evaluation values to the positive and negative ideal solutions are computed with TOPSIS, yielding a ranking of the alternatives. Finally, an application to an ERP selection problem verifies the effectiveness of the method.
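A compact sketch of the last two steps (IFWA aggregation of the experts' matrices, then TOPSIS over intuitionistic fuzzy numbers, IFNs). IFNs are written as (membership, non-membership) pairs, the positive and negative ideals are taken as (1, 0) and (0, 1), and a normalized Hamming distance is used; the expert weights and evaluations are invented, and the linguistic-to-IFN conversion and entropy-based weighting steps are omitted.

```python
import numpy as np

# decisions[k][i][j] = expert k's evaluation of alternative i on attribute j,
# as an intuitionistic fuzzy number (membership mu, non-membership nu).
decisions = np.array([
    [[(0.7, 0.2), (0.6, 0.3)], [(0.5, 0.4), (0.8, 0.1)]],   # expert 1
    [[(0.6, 0.3), (0.7, 0.2)], [(0.4, 0.5), (0.9, 0.1)]],   # expert 2
])
expert_weights = np.array([0.55, 0.45])      # e.g. from an entropy-based model (invented here)
attr_weights = np.array([0.5, 0.5])

# IFWA aggregation across experts: mu = 1 - prod(1-mu_k)^w_k, nu = prod(nu_k)^w_k.
mu = 1.0 - np.prod((1.0 - decisions[..., 0]) ** expert_weights[:, None, None], axis=0)
nu = np.prod(decisions[..., 1] ** expert_weights[:, None, None], axis=0)

def distance(mu, nu, ideal_mu, ideal_nu):
    """Weighted normalized Hamming distance of each alternative to an ideal IFN."""
    pi, ideal_pi = 1.0 - mu - nu, 1.0 - ideal_mu - ideal_nu
    d = 0.5 * (abs(mu - ideal_mu) + abs(nu - ideal_nu) + abs(pi - ideal_pi))
    return (d * attr_weights).sum(axis=1)

d_pos = distance(mu, nu, 1.0, 0.0)           # distance to the positive ideal solution
d_neg = distance(mu, nu, 0.0, 1.0)           # distance to the negative ideal solution
closeness = d_neg / (d_pos + d_neg)          # TOPSIS relative closeness
print("ranking (best first):", np.argsort(-closeness) + 1)
```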

15.
A time series analysis method based on wavelet-domain hidden Markov models is proposed. First, the discrete wavelet transform is introduced and the wavelet coefficients are modeled statistically: the Gaussian mixture model for individual wavelet coefficients, the hidden Markov tree structure linking coefficients across scales, model training, and likelihood computation are discussed in turn. Second, a unified mathematical model for time series interpolation, smoothing, and prediction is proposed; using maximum a posteriori estimation and Bayesian principles, with the wavelet-domain hidden Markov model serving as prior knowledge, a new approach to time series analysis is obtained. The Euler-Lagrange equation of the time series reconstruction problem and the derivative of the log-likelihood are then derived in detail, reducing interpolation, smoothing, and prediction to the solution of a simple linear system. Finally, the parameters of the wavelet-domain hidden Markov model and the reconstructed time series are computed by alternating iterations of the expectation-maximization (EM) algorithm and the conjugate gradient algorithm. Experimental results demonstrate the effectiveness of the method for time series analysis in economics.
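Only the first modeling step (discrete wavelet transform of the series and a Gaussian mixture fit to the coefficients at one scale) is easy to show briefly; the hidden Markov tree coupling across scales, the EM/conjugate-gradient training, and the reconstruction step are omitted. The sketch assumes the PyWavelets and scikit-learn packages and a synthetic series.

```python
import numpy as np
import pywt
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(42)

# Synthetic "economic" time series: trend + seasonality + noise.
t = np.arange(256)
series = 0.05 * t + 2.0 * np.sin(2 * np.pi * t / 32) + rng.normal(0, 0.5, t.size)

# Discrete wavelet transform (Daubechies-4, 3 levels of decomposition).
coeffs = pywt.wavedec(series, "db4", level=3)   # [approx, detail_3, detail_2, detail_1]

# Two-component Gaussian mixture for the detail coefficients at one scale --
# the per-coefficient marginal used by wavelet-domain hidden Markov models.
detail = coeffs[1].reshape(-1, 1)
gmm = GaussianMixture(n_components=2, random_state=0).fit(detail)
print("mixture weights:", np.round(gmm.weights_, 3))
print("component std devs:", np.round(np.sqrt(gmm.covariances_.ravel()), 3))
```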

16.
Hwang, Jing-Shiang; Chen, James J. Risk Analysis, 1999, 19(6): 1071-1076
The estimation of health risks from exposure to a mixture of chemical carcinogens is generally based on the combination of information from several available single-compound studies. The current practice of directly summing the upper-bound risk estimates of individual carcinogenic components as an upper bound on the total risk of a mixture is known to be generally too conservative. Gaylor and Chen (1996, Risk Analysis) proposed a simple procedure to compute an upper bound on the total risk using only the upper confidence limits and central risk estimates of individual carcinogens. The Gaylor-Chen procedure was derived based on an underlying assumption of normality for the distributions of individual risk estimates. In this paper we evaluated the Gaylor-Chen approach in terms of coverage probability. The performance of the approach depends on the coverages of the upper confidence limits on the true risks of the individual carcinogens: in general, if the coverage probabilities for the individual carcinogens are all approximately equal to the nominal level, then the Gaylor-Chen approach should perform well. However, the Gaylor-Chen approach can be conservative or anti-conservative if some or all individual upper confidence limit estimates are conservative or anti-conservative.
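As far as I understand the Gaylor-Chen procedure evaluated here, the upper bound on the mixture risk is the sum of the central estimates plus the root-sum-of-squares of the individual upper-limit margins, consistent with the stated normality assumption; treat the exact formula as an assumption and the numbers as invented.

```python
import numpy as np

# Central (e.g., MLE) risk estimates and upper confidence limits for the
# individual carcinogens in the mixture (illustrative values).
central = np.array([1.0e-6, 4.0e-6, 2.5e-6])
upper   = np.array([5.0e-6, 9.0e-6, 6.0e-6])

# Naive bound: simply sum the individual upper limits (known to be conservative).
naive_upper = upper.sum()

# Gaylor-Chen style bound: sum of central estimates plus the
# root-sum-of-squares of the individual margins (upper - central).
gc_upper = central.sum() + np.sqrt(((upper - central) ** 2).sum())

print(f"sum of upper limits : {naive_upper:.2e}")
print(f"Gaylor-Chen bound   : {gc_upper:.2e}")
```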

17.
This paper focuses on qualitative multi-attribute group decision making (MAGDM) with linguistic information in terms of single linguistic terms and/or flexible linguistic expressions. To do so, we propose a new linguistic decision rule based on the concepts of random preference and stochastic dominance, via a probability-based interpretation of weight information. The importance weights and the concept of fuzzy majority are incorporated into both the multi-attribute and the collective decision rule by the so-called weighted ordered weighted averaging operator, with the input parameters expressed as probability distributions over a linguistic term set. Moreover, a probability-based method is proposed to measure the consensus degree between individual and collective overall random preferences based on the concept of stochastic dominance, which also takes both the importance weights and the fuzzy majority into account. As such, our proposed approaches are based on the ordinal semantics of linguistic terms and voting statistics. By this, on the one hand, the strict constraint of a uniform linguistic term set in linguistic decision making can be relaxed; on the other hand, the difference and variation of individual opinions can be captured. The proposed approaches can deal with qualitative MAGDM with single linguistic terms and flexible linguistic expressions. Two application examples taken from the literature are used to illustrate the proposed techniques by comparisons with existing studies. The results show that our proposed approaches are comparable with existing studies.
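The stochastic dominance notion used above can be illustrated on its own: with preferences expressed as probability distributions over an ordered linguistic term set, one alternative first-order dominates another when its cumulative probability is never larger at any term. The term set and distributions below are invented, and this is only the dominance check, not the paper's full decision rule.

```python
import numpy as np

# Ordered linguistic term set, worst to best, and two alternatives' random
# preferences expressed as probability distributions over that set.
terms = ["very poor", "poor", "fair", "good", "very good"]
pref_a = np.array([0.05, 0.10, 0.20, 0.40, 0.25])
pref_b = np.array([0.10, 0.25, 0.30, 0.25, 0.10])

def fsd(p, q):
    """True if p first-order stochastically dominates q over the ordered terms."""
    cp, cq = np.cumsum(p), np.cumsum(q)
    return bool(np.all(cp <= cq + 1e-12) and np.any(cp < cq - 1e-12))

print("A dominates B:", fsd(pref_a, pref_b))
print("B dominates A:", fsd(pref_b, pref_a))
```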

18.
This paper is concerned with robust estimation under moment restrictions. A moment restriction model is semiparametric and distribution‐free; therefore it imposes mild assumptions. Yet it is reasonable to expect that the probability law of observations may have some deviations from the ideal distribution being modeled, due to various factors such as measurement errors. It is then sensible to seek an estimation procedure that is robust against slight perturbation in the probability measure that generates observations. This paper considers local deviations within shrinking topological neighborhoods to develop its large sample theory, so that both bias and variance matter asymptotically. The main result shows that there exists a computationally convenient estimator that achieves optimal minimax robust properties. It is semiparametrically efficient when the model assumption holds, and, at the same time, it enjoys desirable robust properties when it does not.

19.
Ali Mosleh. Risk Analysis, 2012, 32(11): 1888-1900
Credit risk is the potential exposure of a creditor to an obligor's failure or refusal to repay the debt in principal or interest. The potential of exposure is measured in terms of probability of default. Many models have been developed to estimate credit risk, with rating agencies dating back to the 19th century. They provide their assessment of probability of default and transition probabilities of various firms in their annual reports. Regulatory capital requirements for credit risk outlined by the Basel Committee on Banking Supervision have made it essential for banks and financial institutions to develop sophisticated models in an attempt to measure credit risk with higher accuracy. The Bayesian framework proposed in this article uses the techniques developed in physical sciences and engineering for dealing with model uncertainty and expert accuracy to obtain improved estimates of credit risk and associated uncertainties. The approach uses estimates from one or more rating agencies and incorporates their historical accuracy (past performance data) in estimating future default risk and transition probabilities. Several examples demonstrate that the proposed methodology can assess default probability with accuracy exceeding the estimations of all the individual models. Moreover, the methodology accounts for potentially significant departures from “nominal predictions” due to “upsetting events” such as the 2008 global banking crisis.
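A toy version of the general idea described above: treat each agency's historically observed accuracy as a likelihood and update a prior over discrete "true default probability" states given the agencies' current assessments. The state grid, prior, and accuracy matrices are invented, and the model in the article is considerably richer.

```python
import numpy as np

# Discrete candidate values for the obligor's true one-year default probability.
states = np.array([0.001, 0.005, 0.02, 0.08])
prior = np.full(states.size, 0.25)

# P(agency reports state j | true state i), estimated from each agency's
# historical performance (rows = true state, columns = reported state; invented).
accuracy = {
    "agency_1": np.array([[0.70, 0.20, 0.08, 0.02],
                          [0.20, 0.60, 0.15, 0.05],
                          [0.05, 0.20, 0.60, 0.15],
                          [0.02, 0.08, 0.20, 0.70]]),
    "agency_2": np.array([[0.60, 0.25, 0.10, 0.05],
                          [0.25, 0.50, 0.20, 0.05],
                          [0.10, 0.20, 0.50, 0.20],
                          [0.05, 0.10, 0.25, 0.60]]),
}
reported = {"agency_1": 1, "agency_2": 2}     # index of the state each agency reported

posterior = prior.copy()
for agency, j in reported.items():
    posterior *= accuracy[agency][:, j]       # likelihood of this report under each true state
posterior /= posterior.sum()

print("posterior over states:", np.round(posterior, 3))
print(f"posterior mean default probability: {states @ posterior:.4f}")
```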

20.
Theodor J. Stewart. Omega, 1984, 12(2): 175-184
Rivett [5] has proposed that an approximate preference ordering may be deduced from statements of pairwise indifferences between decision alternatives by using multi-dimensional scaling. In this paper it is demonstrated that much stronger evaluations of preference are possible by applying formal statistical inferential procedures to a simple parametric model, relating indifference to closeness on a scale defined by a linear function of attribute values. This can be used to screen out a considerable proportion of less desirable decision alternatives. The method is illustrated by application to Rivett's problem of the hypothetical Town of Brove, for which a satisfactory matching with Rivett's utilities is obtained. It is also shown that the method can provide useful preference orderings on the basis of less than 20% of all possible pairwise comparisons between alternatives.
