Similar Documents
20 similar documents retrieved.
1.
Yifan Zhang, Risk Analysis, 2013, 33(1): 109-120
Expert judgment (or expert elicitation) is a formal process for eliciting judgments from subject-matter experts about the value of a decision-relevant quantity. Judgments in the form of subjective probability distributions are obtained from several experts, raising the question of how best to combine information from multiple experts. A number of algorithmic approaches have been proposed, of which the most commonly employed is the equal-weight combination (the average of the experts' distributions). We evaluate the properties of five combination methods (equal-weight, best-expert, performance, frequentist, and copula) using simulated expert-judgment data for which we know the process generating the experts' distributions. We examine cases in which two well-calibrated experts are of equal or unequal quality and their judgments are independent, positively dependent, or negatively dependent. In this setting, the copula, frequentist, and best-expert approaches perform better, and the equal-weight combination method performs worse, than the alternative approaches.
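As an illustration of the equal-weight combination (linear opinion pool) discussed above, here is a minimal Python sketch, assuming each expert's judgment has been fitted as a normal distribution with hypothetical means and standard deviations; the pooled density is simply the average of the expert densities.

    import numpy as np
    from scipy import stats

    # Hypothetical expert judgments: each expert's distribution summarized as (mean, std).
    experts = [(10.0, 2.0), (12.0, 3.0), (9.5, 1.5)]

    x = np.linspace(0.0, 25.0, 501)
    densities = np.array([stats.norm.pdf(x, m, s) for m, s in experts])

    # Equal-weight combination (linear opinion pool): the average of the expert densities.
    pooled = densities.mean(axis=0)

    # Mean of the pooled distribution, approximated on the grid.
    pooled_mean = np.sum(x * pooled) / np.sum(pooled)
    print("pooled mean:", round(pooled_mean, 2))

Performance-based or best-expert combinations replace the equal weights in this pool with data-driven weights over the same expert densities.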

2.
A Distributional Approach to Characterizing Low-Dose Cancer Risk
Since cancer risk at very low doses cannot be directly measured in humans or animals, mathematical extrapolation models and scientific judgment are required. This article demonstrates a probabilistic approach to carcinogen risk assessment that employs probability trees, subjective probabilities, and standard bootstrapping procedures. The probabilistic approach is applied to the carcinogenic risk of formaldehyde in environmental and occupational settings. Sensitivity analyses illustrate conditional estimates of risk for each path in the probability tree. Fundamental mechanistic uncertainties are characterized. A strength of the analysis is the explicit treatment of alternative beliefs about pharmacokinetics and pharmacodynamics. The resulting probability distributions on cancer risk are compared with the point estimates reported by federal agencies. Limitations of the approach are discussed as well as future research directions.
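The probability-tree idea can be sketched in a few lines: each path through the tree carries a subjective probability and an implied risk estimate, and together they define a distribution over cancer risk rather than a single point estimate. The branch probabilities and risk values below are purely hypothetical placeholders, not the formaldehyde results.

    import numpy as np

    # Hypothetical probability tree: each path is (subjective path probability,
    # unit cancer risk implied by that combination of mechanistic assumptions).
    paths = [
        (0.40, 1e-6),
        (0.35, 5e-6),
        (0.20, 2e-5),
        (0.05, 1e-4),
    ]
    probs, risks = map(np.array, zip(*paths))
    assert abs(probs.sum() - 1.0) < 1e-9

    # Probability-weighted mean risk and an upper percentile of the discrete distribution.
    mean_risk = np.sum(probs * risks)
    order = np.argsort(risks)
    cdf = np.cumsum(probs[order])
    p95 = risks[order][np.searchsorted(cdf, 0.95 - 1e-9)]   # 95th percentile
    print("mean risk:", mean_risk, "95th percentile:", p95)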

3.
A mathematical model of chicken processing that quantitatively describes the transmission of Campylobacter on chicken carcasses from slaughter to chicken meat product was developed by Nauta et al. (2005). This model was quantified with expert judgment. The recent availability of data allows the parameters of the model to be updated to better describe the processes observed in slaughterhouses. We propose Bayesian updating as a suitable technique to update expert judgment with microbiological data. Berrang and Dickens's data are used to demonstrate the performance of this method in updating the parameters of the chicken processing line model.
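A minimal sketch of Bayesian updating of an expert-based parameter with microbiological counts, assuming a single contamination/transfer probability with a conjugate Beta prior; the prior parameters and counts are hypothetical, not values from the chicken processing model or the Berrang and Dickens data.

    from scipy import stats

    # Expert-based prior for a contamination/transfer probability (hypothetical Beta prior).
    a_prior, b_prior = 2.0, 8.0          # prior mean 0.2

    # Hypothetical new plant data: positive samples out of total samples.
    positive, total = 34, 80

    # Conjugate Beta-Binomial update of the expert prior with the observed data.
    a_post = a_prior + positive
    b_post = b_prior + (total - positive)
    posterior = stats.beta(a_post, b_post)
    print("posterior mean:", posterior.mean(), "95% interval:", posterior.interval(0.95))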

4.
This paper proposes a short-term load forecasting model based on a Bayesian neural network (BNN). Using sample data on meteorological factors and electricity load, the weight-vector parameters of the BNN are learned with the hybrid Monte Carlo (HMC) algorithm under two choices of prior for the weights: a normal distribution and a Cauchy distribution. Bayesian neural networks trained with the HMC and Laplace algorithms, together with a conventional neural network trained with the BP algorithm, are used to forecast the load at every hour of 25 days in each of April (spring), August (summer), October (autumn), and January (winter). The networks have 11 input nodes, corresponding to the meteorological factors at the current hour, the meteorological factors at the previous hour, and time variables, and a single output node corresponding to the load. Experimental results show that the mean absolute percentage error (MAPE) and root mean squared error (RMSE) of the forecasts from the HMC-trained BNN are far smaller than those of the Laplace-trained BNN and the BP-trained neural network. Moreover, for the HMC-trained BNN the MAPE and RMSE on the test set differ only slightly from those on the training set. These results demonstrate that the HMC-trained BNN achieves high forecasting accuracy and strong generalization ability.
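The two error measures used in the comparison can be computed as follows; the hourly loads and forecasts are hypothetical values for illustration.

    import numpy as np

    def mape(actual, forecast):
        """Mean absolute percentage error, in percent."""
        actual, forecast = np.asarray(actual, float), np.asarray(forecast, float)
        return 100.0 * np.mean(np.abs((actual - forecast) / actual))

    def rmse(actual, forecast):
        """Root mean squared error."""
        actual, forecast = np.asarray(actual, float), np.asarray(forecast, float)
        return np.sqrt(np.mean((actual - forecast) ** 2))

    # Hypothetical hourly loads (MW) and forecasts for a few hours.
    actual   = [610, 585, 602, 640, 655]
    forecast = [600, 590, 610, 632, 660]
    print("MAPE:", mape(actual, forecast), "RMSE:", rmse(actual, forecast))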

5.
Robert Fildes, Edward J. Lusk, Omega, 1984, 12(5): 427-435
The major purpose of studies of forecasting accuracy is to help forecasters select the 'best' forecasting method. This paper examines accuracy studies, in particular that of Makridakis et al. [20], with a view to establishing how they contribute to model choice. It is concluded that they affect the screening that most forecasters go through in selecting a range of methods to analyze; in Bayesian terms, they are a major determinant of 'prior knowledge'. This general conclusion is illustrated in the specific case of the Makridakis Competition (M-Competition). A survey of expert forecasters was made in both the UK and US. The respondents were asked about their familiarity with eight methods of univariate time series forecasting, and the perceived accuracy of those methods in three different forecasting situations. The results, similar for both the UK and US, were that the forecasters were relatively familiar with all the techniques included except Holt-Winters and Bayesian. For short horizons Box-Jenkins was seen as most accurate, while trend curves were perceived as most suitable for long horizons. These results are contrasted with those of the M-Competition, and conclusions drawn on how the results of the M-Competition should influence model screening and model choice.

6.
A method is developed for estimating a probability distribution using estimates of its percentiles provided by experts. The analyst's judgment concerning the credibility of these expert opinions is quantified in the likelihood function of Bayes' Theorem. The model considers explicitly the random variability of each expert estimate, the dependencies among the estimates of each expert, the dependencies among experts, and potential systematic biases. The relation between the results of the formal methods of this paper and methods used in practice is explored. A series of sensitivity studies provides insights into the significance of the parameters of the model. The methodology is applied to the problem of estimation of seismic fragility curves (i.e., the conditional probability of equipment failure given a seismically induced stress).
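A stripped-down sketch of the idea: treat each expert's percentile estimate as a noisy observation of the unknown quantity, encode the analyst's view of each expert's credibility and bias in the likelihood, and apply Bayes' Theorem on a grid. Unlike the full model described above, this sketch assumes the expert estimates are independent; all numbers are hypothetical.

    import numpy as np
    from scipy import stats

    # Unknown quantity theta on a grid, with a flat prior.
    theta = np.linspace(0.0, 20.0, 2001)
    prior = np.ones_like(theta)

    # Hypothetical expert median estimates; sigma encodes the analyst's judgment of
    # each expert's credibility, and b a suspected systematic bias.
    expert_medians = [8.0, 11.0, 9.5]
    sigmas         = [2.0, 3.0, 1.5]
    biases         = [0.0, 1.0, 0.0]

    # Likelihood of the expert statements as a function of theta (independence assumed).
    likelihood = np.ones_like(theta)
    for m, s, b in zip(expert_medians, sigmas, biases):
        likelihood *= stats.norm.pdf(m, loc=theta + b, scale=s)

    posterior = prior * likelihood
    posterior /= posterior.sum()                      # discrete grid approximation
    print("posterior mean:", np.sum(theta * posterior))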

7.
This paper presents a protocol for a formal expert judgment process using a heterogeneous expert panel aimed at the quantification of continuous variables. The emphasis is on the process's requirements related to the nature of expertise within the panel, in particular the heterogeneity of both substantive and normative expertise. The process provides the opportunity for interaction among the experts so that they fully understand and agree upon the problem at hand, including qualitative aspects relevant to the variables of interest, prior to the actual quantification task. Individual experts' assessments on the variables of interest, cast in the form of subjective probability density functions, are elicited with a minimal demand for normative expertise. The individual experts' assessments are aggregated into a single probability density function per variable, thereby weighting the experts according to their expertise. Elicitation techniques proposed include the Delphi technique for the qualitative assessment task and the ELI method for the actual quantitative assessment task. The Classical model was used to weight the experts' assessments in order to construct a single distribution per variable; in this model, an expert's quality is typically based on performance on seed variables. An application of the proposed protocol in the broad and multidisciplinary field of animal health is presented. Results of this expert judgment process showed that the proposed protocol in combination with the proposed elicitation and analysis techniques resulted in valid data on the (continuous) variables of interest. In conclusion, the proposed protocol for a formal expert judgment process aimed at the elicitation of quantitative data from a heterogeneous expert panel provided satisfactory results. Hence, this protocol might be useful for expert judgment studies in other broad and/or multidisciplinary fields of interest.
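The performance-based weighting step can be sketched as follows. Real applications of the Classical model combine a statistical calibration score with an information score computed from seed variables; the sketch below is only a crude stand-in that weights experts by the hit rate of their 90% intervals on seed variables, with all numbers hypothetical.

    import numpy as np
    from scipy import stats

    # Hypothetical seed-variable results: fraction of seed realizations that fell
    # inside each expert's stated 90% credible intervals.
    hit_rates = np.array([0.9, 0.6, 0.8])

    # Crude performance weights (stand-in for the Classical model's calibration x information score).
    weights = hit_rates / hit_rates.sum()

    # Experts' judgments for the variable of interest (hypothetical normal fits).
    experts = [(100.0, 10.0), (120.0, 20.0), (105.0, 8.0)]
    x = np.linspace(40.0, 200.0, 801)
    densities = np.array([stats.norm.pdf(x, m, s) for m, s in experts])

    # Performance-weighted aggregate density for the variable of interest.
    aggregate = weights @ densities
    print("aggregate mean:", np.sum(x * aggregate) / np.sum(aggregate))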

8.
Probabilistic risk analysis (PRA) can be an effective tool to assess risks and uncertainties and to set priorities among safety policy options. Based on systems analysis and Bayesian probability, PRA has been applied to a wide range of cases, three of which are briefly presented here: the maintenance of the tiles of the space shuttle, the management of patient risk in anesthesia, and the choice of seismic provisions of building codes for the San Francisco Bay Area. In the quantification of a risk, a number of problems arise in the public sector where multiple stakeholders are involved. In this article, I describe different approaches to the treatments of uncertainties in risk analysis, their implications for risk ranking, and the role of risk analysis results in the context of a safety decision process. I also discuss the implications of adopting conservative hypotheses before proceeding to what is, in essence, a conditional uncertainty analysis, and I explore some implications of different levels of "conservatism" for the ranking of risk mitigation measures.

9.
The distributional approach for uncertainty analysis in cancer risk assessment is reviewed and extended. The method considers a combination of bioassay study results, targeted experiments, and expert judgment regarding biological mechanisms to predict a probability distribution for uncertain cancer risks. Probabilities are assigned to alternative model components, including the determination of human carcinogenicity, mode of action, the dosimetry measure for exposure, the mathematical form of the dose-response relationship, the experimental data set(s) used to fit the relationship, and the formula used for interspecies extrapolation. Alternative software platforms for implementing the method are considered, including Bayesian belief networks (BBNs) that facilitate assignment of prior probabilities, specification of relationships among model components, and identification of all output nodes on the probability tree. The method is demonstrated using the application of Evans, Sielken, and co-workers for predicting cancer risk from formaldehyde inhalation exposure. Uncertainty distributions are derived for maximum likelihood estimate (MLE) and 95th percentile upper confidence limit (UCL) unit cancer risk estimates, and the effects of resolving selected model uncertainties on these distributions are demonstrated, considering both perfect and partial information for these model components. A method for synthesizing the results of multiple mechanistic studies is introduced, considering the assessed sensitivities and selectivities of the studies for their targeted effects. A highly simplified example is presented illustrating assessment of genotoxicity based on studies of DNA damage response caused by naphthalene and its metabolites. The approach can provide a formal mechanism for synthesizing multiple sources of information using a transparent and replicable weight-of-evidence procedure.
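The core bookkeeping, assigning probabilities to alternative model components and propagating them into a distribution over unit risk, can be illustrated with two components; the component probabilities and unit-risk values below are hypothetical, not the formaldehyde assessment. The second call shows the effect of resolving one model uncertainty.

    # Hypothetical component probabilities.
    p_moa  = {"genotoxic": 0.3, "cytotoxic": 0.7}      # mode of action
    p_form = {"linear": 0.6, "nonlinear": 0.4}         # dose-response form

    # Hypothetical unit-risk estimate for each combination of components.
    unit_risk = {
        ("genotoxic", "linear"):    3e-5,
        ("genotoxic", "nonlinear"): 1e-5,
        ("cytotoxic", "linear"):    5e-6,
        ("cytotoxic", "nonlinear"): 8e-7,
    }

    def risk_distribution(fixed_moa=None):
        """Probability-weighted risk outcomes; optionally condition on a resolved mode of action."""
        dist = {}
        for moa, pm in p_moa.items():
            if fixed_moa is not None and moa != fixed_moa:
                continue
            for form, pf in p_form.items():
                w = (1.0 if fixed_moa else pm) * pf
                dist[(moa, form)] = (w, unit_risk[(moa, form)])
        return dist

    full = risk_distribution()
    print("unconditional mean risk:", sum(w * r for w, r in full.values()))
    resolved = risk_distribution(fixed_moa="cytotoxic")
    print("mean risk if cytotoxic MOA is established:", sum(w * r for w, r in resolved.values()))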

10.
The regulation and management of hazardous industrial activities increasingly rely on formal expert judgment processes to provide wisdom in areas of science and technology where traditional "good science" is, in practice, unable to supply unambiguous "facts." Expert judgment has always played a significant, if often unrecognized, role in analysis; however, recent trends are to make it formal, explicit, and documented so it can be identified and reviewed by others. We propose four categories of expert judgment and present three case studies which illustrate some of the pitfalls commonly encountered in its use. We conclude that there will be an expanding policy role for formal expert judgment and that the openness, transparency, and documentation that it requires have implications for enhanced public involvement in scientific and technical affairs.

11.
A large-sample approximation of the posterior distribution of partially identified structural parameters is derived for models that can be indexed by an identifiable finite-dimensional reduced-form parameter vector. It is used to analyze the differences between Bayesian credible sets and frequentist confidence sets. We define a plug-in estimator of the identified set and show that asymptotically Bayesian highest-posterior-density sets exclude parts of the estimated identified set, whereas it is well known that frequentist confidence sets extend beyond the boundaries of the estimated identified set. We recommend reporting estimates of the identified set and information about the conditional prior along with Bayesian credible sets. A numerical illustration for a two-player entry game is provided.

12.
Adverse outcome pathway Bayesian networks (AOPBNs) are a promising avenue for developing predictive toxicology and risk assessment tools based on adverse outcome pathways (AOPs). Here, we describe a process for developing AOPBNs. AOPBNs use causal networks and Bayesian statistics to integrate evidence across key events. In this article, we use our AOPBN to predict the occurrence of steatosis under different chemical exposures. Since it is an expert-driven model, we use external data (i.e., data not used for modeling) from the literature to validate predictions of the AOPBN model. The AOPBN accurately predicts steatosis for the chemicals from our external data. In addition, we demonstrate how end users can utilize the model to simulate the confidence (based on posterior probability) associated with predicting steatosis. We demonstrate how the network topology impacts predictions across the AOPBN, and how the AOPBN helps us identify the most informative key events that should be monitored for predicting steatosis. We close with a discussion of how the model can be used to predict potential effects of mixtures and how to model susceptible populations (e.g., where a mutation or stressor may change the conditional probability tables in the AOPBN). Using this approach for developing expert AOPBNs will facilitate the prediction of chemical toxicity, facilitate the identification of assay batteries, and greatly improve chemical hazard screening strategies.
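A toy AOP-style Bayesian network, not the published steatosis AOPBN, illustrates how posterior confidence in the adverse outcome is computed by enumeration once conditional probability tables are specified; all probabilities below are hypothetical.

    import itertools

    # Toy network: a molecular initiating event (MIE) drives two key events (KE1, KE2),
    # and the adverse outcome (AO) depends on both key events.
    p_mie = 0.4
    p_ke1 = {1: 0.8, 0: 0.1}                                         # P(KE1=1 | MIE)
    p_ke2 = {1: 0.7, 0: 0.2}                                         # P(KE2=1 | MIE)
    p_ao  = {(1, 1): 0.9, (1, 0): 0.5, (0, 1): 0.4, (0, 0): 0.05}    # P(AO=1 | KE1, KE2)

    def joint(mie, ke1, ke2, ao):
        pm = p_mie if mie else 1 - p_mie
        p1 = p_ke1[mie] if ke1 else 1 - p_ke1[mie]
        p2 = p_ke2[mie] if ke2 else 1 - p_ke2[mie]
        pa = p_ao[(ke1, ke2)] if ao else 1 - p_ao[(ke1, ke2)]
        return pm * p1 * p2 * pa

    states = list(itertools.product((0, 1), repeat=4))

    # Prior probability of the adverse outcome, and the posterior confidence after
    # observing KE1 = 1 in an assay.
    p_ao_prior = sum(joint(*s) for s in states if s[3] == 1)
    p_ao_given_ke1 = (sum(joint(*s) for s in states if s[1] == 1 and s[3] == 1)
                      / sum(joint(*s) for s in states if s[1] == 1))
    print("P(AO):", p_ao_prior, "P(AO | KE1 observed):", p_ao_given_ke1)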

13.
A method for combining multiple expert opinions that are encoded in a Bayesian Belief Network (BBN) model is presented and applied to a problem involving the cleanup of hazardous chemicals at a site with contaminated groundwater. The method uses Bayes Rule to update each expert model with the observed evidence, then uses it again to compute posterior probability weights for each model. The weights reflect the consistency of each model with the observed evidence, allowing the aggregate model to be tailored to the particular conditions observed in the site-specific application of the risk model. The Bayesian update is easy to implement, since the likelihood for the set of evidence (observations for selected nodes of the BBN model) is readily computed by sequential execution of the BBN model. The method is demonstrated using a simple pedagogical example and subsequently applied to a groundwater contamination problem using an expert-knowledge BBN model. The BBN model in this application predicts the probability that reductive dechlorination of the contaminant trichloroethene (TCE) is occurring at a site (a critical step in demonstrating the feasibility of monitored natural attenuation for site cleanup), given information on 14 measurable antecedent and descendant conditions. The predictions for the BBN models for 21 experts are weighted and aggregated using examples of hypothetical and actual site data. The method allows more weight for those expert models that are more reflective of the site conditions, and is shown to yield an aggregate prediction that differs from that of simple model averaging in a potentially significant manner.
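The weighting step reduces to a direct application of Bayes Rule once each expert model's likelihood for the observed evidence is available. In the sketch below the likelihoods, priors, and per-expert predictions are hypothetical placeholders; in the article each likelihood would come from executing that expert's BBN on the observed nodes.

    import numpy as np

    # Likelihood of the observed site evidence under each expert's BBN model (hypothetical).
    likelihoods = np.array([2.1e-3, 7.5e-4, 4.0e-3, 9.0e-4])
    priors = np.full(likelihoods.size, 1.0 / likelihoods.size)   # equal prior weight per expert

    # Bayes Rule: posterior weight of each expert model given the evidence.
    post = priors * likelihoods
    post /= post.sum()

    # Each expert model's prediction, e.g. P(reductive dechlorination is occurring),
    # aggregated with posterior weights versus simple model averaging (hypothetical values).
    predictions = np.array([0.85, 0.40, 0.90, 0.55])
    print("posterior-weighted prediction:", post @ predictions)
    print("simple average prediction:   ", predictions.mean())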

14.
Bayesian Forecasting via Deterministic Model
Rational decision making requires that the total uncertainty about a variate of interest (a predictand) be quantified in terms of a probability distribution, conditional on all available information and knowledge. Suppose the state-of-knowledge is embodied in a deterministic model, which is imperfect and outputs only an estimate of the predictand. Fundamentals are presented of two Bayesian methods for producing a probabilistic forecast via any deterministic model. The Bayesian Processor of Forecast (BPF) quantifies the total uncertainty in terms of a posterior distribution, conditional on model output. The Bayesian Forecasting System (BFS) decomposes the total uncertainty into input uncertainty and model uncertainty, which are characterized independently and then integrated into a predictive distribution. The BFS is compared with Monte Carlo simulation and the ensemble forecasting technique, neither of which can alone produce a probabilistic forecast that quantifies the total uncertainty, but each can serve as a component of the BFS.
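A Gaussian sketch in the spirit of the BPF: a prior on the predictand is revised by a likelihood that describes how the deterministic model output relates to the predictand. The prior, the calibration coefficients, and the model output below are all hypothetical.

    import numpy as np

    # Prior uncertainty about the predictand W, e.g. from climatology: W ~ N(m, s^2).
    m, s = 50.0, 15.0
    # Calibration of the deterministic model output: X | W ~ N(a + b*W, sigma^2).
    a, b, sigma = 5.0, 0.9, 8.0

    x = 58.0                     # today's deterministic model output

    # Conjugate normal update: posterior of W given X = x.
    prec = 1.0 / s**2 + b**2 / sigma**2
    post_var = 1.0 / prec
    post_mean = post_var * (m / s**2 + b * (x - a) / sigma**2)
    print("posterior mean:", post_mean, "posterior sd:", np.sqrt(post_var))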

15.
We consider a cross-calibration test of predictions by multiple potential experts in a stochastic environment. This test checks whether each expert is calibrated conditional on the predictions made by other experts. We show that this test is good in the sense that a true expert (one informed of the true distribution of the process) is guaranteed to pass the test no matter what the other potential experts do, and false experts will fail the test on all but a small (category I) set of true distributions. Furthermore, even when there is no true expert present, a test similar to cross-calibration cannot be simultaneously manipulated by multiple false experts, but at the cost of failing some true experts.
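The flavor of the cross-calibration test can be seen in a small simulation: an uninformed expert who always forecasts 0.5 is marginally calibrated here, but fails once outcomes are examined within cells defined by the informed expert's forecasts. The regime probabilities and forecasts are hypothetical, and this empirical check is only a simplified stand-in for the formal test.

    import numpy as np

    rng = np.random.default_rng(0)

    # Simulated binary process: the true probability each period is 0.3 or 0.7 (hidden regime).
    T = 20000
    true_p = rng.choice([0.3, 0.7], size=T)
    outcomes = rng.random(T) < true_p

    forecast_true  = true_p               # the informed expert forecasts the true probability
    forecast_false = np.full(T, 0.5)      # an uninformed expert always forecasts 0.5

    def cross_calibration(own, other, y, bins=(0.3, 0.5, 0.7)):
        """Empirical frequency of the event within each (own forecast, other forecast) cell."""
        rows = []
        for fo in bins:
            for fx in bins:
                sel = (own == fo) & (other == fx)
                if sel.sum() > 0:
                    rows.append((fo, fx, int(sel.sum()), y[sel].mean()))
        return rows

    for fo, fx, n, freq in cross_calibration(forecast_false, forecast_true, outcomes):
        print(f"own forecast {fo}, informed expert forecast {fx}: n={n}, empirical freq={freq:.2f}")

Swapping the arguments (checking the informed expert conditional on the uninformed one) yields cells whose empirical frequencies match the informed expert's forecasts, which is why a true expert passes.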

16.
This paper considers a panel data model for predicting a binary outcome. The conditional probability of a positive response is obtained by evaluating a given distribution function (F) at a linear combination of the predictor variables. One of the predictor variables is unobserved. It is a random effect that varies across individuals but is constant over time. The semiparametric aspect is that the conditional distribution of the random effect, given the predictor variables, is unrestricted. This paper has two results. If the support of the observed predictor variables is bounded, then identification is possible only in the logistic case. Even if the support is unbounded, so that (from Manski (1987)) identification holds quite generally, the information bound is zero unless F is logistic. Hence consistent estimation at the standard √n rate is possible only in the logistic case.

17.
Li R, Englehardt JD, Li X, Risk Analysis, 2012, 32(2): 345-359
Multivariate probability distributions, such as may be used for mixture dose-response assessment, are typically highly parameterized and difficult to fit to available data. However, such distributions may be useful in analyzing the large electronic data sets becoming available, such as dose-response biomarker and genetic information. In this article, a new two-stage computational approach is introduced for estimating multivariate distributions and addressing parameter uncertainty. The proposed first stage comprises a gradient Markov chain Monte Carlo (GMCMC) technique to find Bayesian posterior mode estimates (PMEs) of parameters, equivalent to maximum likelihood estimates (MLEs) in the absence of subjective information. In the second stage, these estimates are used to initialize a Markov chain Monte Carlo (MCMC) simulation, replacing the conventional burn-in period to allow convergent simulation of the full joint Bayesian posterior distribution and the corresponding unconditional multivariate distribution (not conditional on uncertain parameter values). When the distribution of parameter uncertainty is such a Bayesian posterior, the unconditional distribution is termed predictive. The method is demonstrated by finding conditional and unconditional versions of the recently proposed emergent dose-response function (DRF). Results are shown for the five-parameter common-mode and seven-parameter dissimilar-mode models, based on published data for eight benzene-toluene dose pairs. The common mode conditional DRF is obtained with a 21-fold reduction in data requirement versus MCMC. Example common-mode unconditional DRFs are then found using synthetic data, showing a 71% reduction in required data. The approach is further demonstrated for a PCB 126-PCB 153 mixture. Applicability is analyzed and discussed. Matlab® computer programs are provided.
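The two-stage logic, first locate the posterior mode and then start the MCMC chain there so that no burn-in is discarded, can be sketched on a toy model. Here a generic numerical optimizer stands in for the gradient MCMC stage, and the data and model (a normal likelihood with flat priors) are hypothetical.

    import numpy as np
    from scipy import optimize, stats

    rng = np.random.default_rng(1)
    data = rng.normal(2.0, 1.5, size=200)          # synthetic data (hypothetical)

    # Negative log-posterior for (mu, log_sigma) with flat priors, so the posterior
    # mode coincides with the MLE.
    def neg_log_post(theta):
        mu, log_sigma = theta
        return -np.sum(stats.norm.logpdf(data, mu, np.exp(log_sigma)))

    # Stage 1 (stand-in for the gradient MCMC mode search): optimize to the posterior mode.
    mode = optimize.minimize(neg_log_post, x0=np.array([0.0, 0.0])).x

    # Stage 2: random-walk Metropolis initialized at the mode, so no burn-in period is discarded.
    theta, samples = mode.copy(), []
    lp = -neg_log_post(theta)
    for _ in range(5000):
        prop = theta + rng.normal(scale=0.05, size=2)
        lp_prop = -neg_log_post(prop)
        if np.log(rng.random()) < lp_prop - lp:
            theta, lp = prop, lp_prop
        samples.append(theta.copy())

    samples = np.array(samples)
    print("posterior mode:", mode, "posterior mean:", samples.mean(axis=0))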

18.
Shahid Suddle, Risk Analysis, 2009, 29(7): 1024-1040
Buildings above roads, railways, and existing buildings themselves are examples of multifunctional urban locations. The construction stage of those buildings is in general extremely complicated. Safety is one of the critical issues during the construction stage. Because the traffic on the infrastructure must continue during the construction of the building above the infrastructure, falling objects due to construction activities form a major hazard for third parties, i.e., people present on the infrastructure or beneath it, such as car drivers and passengers. This article outlines a systematic approach to conduct quantitative risk assessment (QRA) and risk management of falling elements for third parties during the construction stage of the building above the infrastructure in multifunctional urban locations. In order to set up a QRA model, quantifiable aspects influencing the risk for third parties were determined. Subsequently, the conditional probabilities of these aspects were estimated from historical data or engineering judgment. This was followed by integrating those conditional probabilities, now used as input parameters for the QRA, into a Bayesian network representing the relation and the conditional dependence between the quantified aspects. The outcome of the Bayesian network, the calculation of both the human and financial risk in quantitative terms, is compared with the risk acceptance criteria as far as possible. Furthermore, the effects of some safety measures were analyzed and optimized in relation to decision making. Finally, the possibility of integrating safety measures into the functional and structural design of the building above the infrastructure is explored.

19.
This article presents a methodology for applying probabilistic inversion in combination with expert judgment to a priority-setting problem. Experts rank scenarios according to severity. A linear multi-criteria analysis model underlying the expert preferences is posited. Using probabilistic inversion, a distribution over attribute weights is found that optimally reproduces the expert rankings. This model is validated in three ways. First, the consistency of the expert rankings is checked; second, a complete model fitted using all expert data is found to adequately reproduce the observed expert rankings; and third, the model is fitted to subsets of the expert data and used to predict rankings in out-of-sample expert data.
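A crude sketch of probabilistic inversion by accept-reject sampling: candidate attribute-weight vectors are drawn from a Dirichlet distribution and retained only if the implied linear multi-criteria scores reproduce the expert ranking. Published probabilistic-inversion algorithms use iterative reweighting schemes instead, and the scenario scores and ranking below are hypothetical.

    import numpy as np

    rng = np.random.default_rng(2)

    # Hypothetical scenarios scored on three criteria (rows: scenarios, columns: attributes).
    attributes = np.array([
        [0.9, 0.2, 0.3],
        [0.4, 0.8, 0.5],
        [0.1, 0.3, 0.9],
    ])
    # Hypothetical expert ranking by severity: scenario 0 most severe, then 1, then 2.
    expert_rank = [0, 1, 2]

    # Accept-reject: keep weight vectors whose linear scores reproduce the expert ranking.
    candidates = rng.dirichlet(np.ones(3), size=200_000)
    scores = candidates @ attributes.T                 # score of each scenario per candidate
    induced_rank = np.argsort(-scores, axis=1)         # ranking implied by each candidate
    keep = np.all(induced_rank == expert_rank, axis=1)

    posterior_weights = candidates[keep]
    print("acceptance rate:", keep.mean())
    print("mean attribute weights:", posterior_weights.mean(axis=0))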

20.
In human reliability analysis (HRA), dependence analysis refers to assessing the influence of the failure of the operators to perform one task on the failure probabilities of subsequent tasks. A commonly used approach is the technique for human error rate prediction (THERP). The assessment of the dependence level in THERP is a highly subjective judgment based on general rules for the influence of five main factors. A frequently used alternative method extends the THERP model with decision trees. Such trees should increase the repeatability of the assessments but they simplify the relationships among the factors and the dependence level. Moreover, the basis for these simplifications and the resulting tree is difficult to trace. The aim of this work is a method for dependence assessment in HRA that captures the rules used by experts to assess dependence levels and incorporates this knowledge into an algorithm and software tool to be used by HRA analysts. A fuzzy expert system (FES) underlies the method. The method and the associated expert elicitation process are demonstrated with a working model. The expert rules are elicited systematically and converted into a traceable, explicit, and computable model. Anchor situations are provided as guidance for the HRA analyst's judgment of the input factors. The expert model and the FES-based dependence assessment method make the expert rules accessible to the analyst in a usable and repeatable way, with an explicit and traceable basis.
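A minimal fuzzy-inference sketch of the idea, not the elicited expert model: two input factors are fuzzified with triangular membership functions, a few Sugeno-style rules map them to a numeric dependence level, and the rule outputs are defuzzified by a weighted average. The factors, membership functions, and rule consequents are all hypothetical.

    import numpy as np

    def tri(x, a, b, c):
        """Triangular membership function with support [a, c] and peak at b."""
        if x <= a or x >= c:
            return 0.0
        return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

    # Input factors (hypothetical scales): time between tasks in minutes, task similarity in [0, 1].
    time_gap, similarity = 3.0, 0.8

    # Fuzzy sets for the inputs (illustrative membership functions).
    short_gap = tri(time_gap, -1, 0, 10)
    long_gap  = tri(time_gap, 5, 30, 60)
    similar   = tri(similarity, 0.4, 1.0, 1.6)
    different = tri(similarity, -0.6, 0.0, 0.6)

    # Sugeno-style rules mapping to a numeric dependence level (0 = zero, 1 = complete).
    rules = [
        (min(short_gap, similar),   0.9),   # short gap and similar tasks -> high dependence
        (min(short_gap, different), 0.5),
        (min(long_gap,  similar),   0.3),
        (min(long_gap,  different), 0.1),
    ]
    strengths = np.array([s for s, _ in rules])
    levels    = np.array([l for _, l in rules])

    # Weighted-average defuzzification of the rule outputs.
    dependence = float(strengths @ levels / strengths.sum())
    print("assessed dependence level:", round(dependence, 2))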
