Similar Articles
1.
Bayesian networks (BNs) are graphical modeling tools that are generally recommended for exploring what‐if scenarios, visualizing systems and problems, and for communication between stakeholders during decision making. In this article, we investigate their potential for exploring different perspectives in trade disputes. To do so, we draw on a specific case study that was arbitrated by the World Trade Organization (WTO): the Australia‐New Zealand apples dispute. The dispute centered on disagreement about judgments contained within Australia's 2006 import risk analysis (IRA). We built a range of BNs of increasing complexity that modeled various approaches to undertaking IRAs, from the basic qualitative and semi‐quantitative risk analyses routinely performed in government agencies, to the more complex quantitative simulation undertaken by Australia in the apples dispute. We found the BNs useful for exploring disagreements under uncertainty because they are probabilistic and transparently represent steps in the analysis. Different scenarios and evidence can easily be entered. Specifically, we explore the sensitivity of the risk output to different judgments (particularly volume of trade). Thus, we explore how BNs could usefully aid WTO dispute settlement. We conclude that BNs are preferable to basic qualitative and semi‐quantitative risk analyses because they offer an accessible interface and are mathematically sound. However, most current BN modeling tools are limited compared with complex simulations, as was used in the 2006 apples IRA. Although complex simulations may be more accurate, they are a black box for stakeholders. BNs have the potential to be a transparent aid to complex decision making, but they are currently computationally limited. Recent technological software developments are promising.
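For readers unfamiliar with how such a network behaves, the following is a minimal sketch (plain Python, with purely hypothetical node names and probabilities, not the structure of the apples IRA) of a three-step import-risk chain in which the trade-volume judgment can be varied and the risk output recomputed.

```python
# A toy import-risk chain: Trade volume -> Pest arrives -> Pest establishes.
# All probabilities are illustrative assumptions, not values from the dispute.
import itertools

p_volume_high = 0.5                      # P(trade volume is high)
p_arrival = {True: 0.20, False: 0.05}    # P(pest arrives | volume high?)
p_establish = {True: 0.30, False: 0.0}   # P(pest establishes | pest arrived?)

def p_risk():
    """P(pest establishes), marginalised over volume and arrival by enumeration."""
    total = 0.0
    for vol, arr in itertools.product([True, False], repeat=2):
        p_v = p_volume_high if vol else 1 - p_volume_high
        p_a = p_arrival[vol] if arr else 1 - p_arrival[vol]
        p_e = p_establish[arr]
        total += p_v * p_a * p_e
    return total

print(f"Baseline P(establishment): {p_risk():.4f}")
# Sensitivity to the trade-volume judgment, the kind of what-if explored above:
p_volume_high = 0.9
print(f"With a higher trade-volume judgment: {p_risk():.4f}")
```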

2.
Risk assessors often use different probability plots as a way to assess the fit of a particular distribution or model by comparing the plotted points to a straight line and to obtain estimates of the parameters in parametric distributions or models. When empirical data do not fall in a sufficiently straight line on a probability plot, and when no other single parametric distribution provides an acceptable (graphical) fit to the data, the risk assessor may consider a mixture model with two component distributions. Animated probability plots are a way to visualize the possible behaviors of mixture models with two component distributions. When no single parametric distribution provides an adequate fit to an empirical dataset, animated probability plots can help an analyst pick some plausible mixture models for the data based on their qualitative fit. After using animations during exploratory data analysis, the analyst must then use other statistical tools, including but not limited to: Maximum Likelihood Estimation (MLE) to find the optimal parameters, Goodness of Fit (GoF) tests, and a variety of diagnostic plots to check the adequacy of the fit. Using a specific example with two LogNormal components, we illustrate the use of animated probability plots as a tool for exploring the suitability of a mixture model with two component distributions. Animations work well with other types of probability plots, and they may be extended to analyze mixture models with three or more component distributions.
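As an illustration of the kind of exploratory plot described above, the sketch below (with hypothetical component parameters) draws normal probability plots of the log-data for a two-component LogNormal mixture at three mixing weights, mimicking successive frames of an animation; departures from a straight line signal that a single LogNormal is inadequate.

```python
# Probability plots for a two-component LogNormal mixture at several mixing weights.
# Component parameters and sample size are illustrative assumptions.
import numpy as np
from scipy import stats
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
n = 500
comp1, comp2 = (0.0, 0.4), (1.5, 0.3)              # (mu, sigma) on the log scale

fig, axes = plt.subplots(1, 3, figsize=(12, 4))
for ax, w in zip(axes, [0.2, 0.5, 0.8]):            # mixing weight of component 1
    k = rng.random(n) < w                           # component membership
    x = np.where(k,
                 rng.lognormal(*comp1, n),
                 rng.lognormal(*comp2, n))
    stats.probplot(np.log(x), dist="norm", plot=ax)  # straight line <=> single LogNormal
    ax.set_title(f"mixing weight w = {w}")
plt.tight_layout()
plt.show()
```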

3.
陆静 《管理工程学报》2012,26(3):136-145
Although the Advanced Measurement Approach is favored by most commercial banks for its computational accuracy and its savings in regulatory capital, there is no consensus on which method best characterizes the low-frequency, high-severity tail data of operational risk. Following the Basel Committee's principles for measuring operational risk, this paper applies the block maxima method with probability-weighted-moment parameter estimation to operational risk data of Chinese commercial banks from 1990 to 2009. Both graphical and numerical tests show that the estimated parameters achieve a good fit and capture the tail distribution of extreme operational losses well, providing a valuable reference for commercial banks in measuring operational risk capital.
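A minimal sketch of the block-maxima step is given below, using synthetic loss data and an arbitrary block size; note that the paper estimates the GEV parameters by probability-weighted moments, whereas this sketch uses scipy's maximum-likelihood fit for brevity.

```python
# Block maxima + GEV fit on synthetic operational losses (all numbers illustrative).
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
losses = rng.lognormal(mean=12.0, sigma=1.8, size=2000)   # hypothetical operational losses

block_size = 50                                           # losses per block (assumption)
n_blocks = losses.size // block_size
block_maxima = losses[: n_blocks * block_size].reshape(n_blocks, block_size).max(axis=1)

# Fit a Generalized Extreme Value distribution to the block maxima (MLE, not PWM).
shape, loc, scale = stats.genextreme.fit(block_maxima)
print(f"GEV shape={shape:.3f}, loc={loc:,.0f}, scale={scale:,.0f}")

# Tail quantile of the block-maximum distribution, e.g. a 99.9% level of the kind
# used for operational-risk capital.
print(f"99.9% block-maximum quantile: {stats.genextreme.ppf(0.999, shape, loc, scale):,.0f}")
```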

4.
A novel approach to the quantitative assessment of food-borne risks is proposed. The basic idea is to use Bayesian techniques in two distinct steps: first by constructing a stochastic core model via a Bayesian network based on expert knowledge, and second, using the data available to improve this knowledge. Unlike the Monte Carlo simulation approach as commonly used in quantitative assessment of food-borne risks where data sets are used independently in each module, our consistent procedure incorporates information conveyed by data throughout the chain. It allows "back-calculation" in the food chain model, together with the use of data obtained "downstream" in the food chain. Moreover, the expert knowledge is introduced more simply and consistently than with classical statistical methods. Other advantages of this approach include the clear framework of an iterative learning process, considerable flexibility enabling the use of heterogeneous data, and a justified method to explore the effects of variability and uncertainty. As an illustration, we present an estimation of the probability of contracting campylobacteriosis as a result of broiler contamination, from the standpoint of quantitative risk assessment. Although the model thus constructed is oversimplified, it clarifies the principles and properties of the method proposed, which demonstrates its ability to deal with quite complex situations and provides a useful basis for further discussions with different experts in the food chain.
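The flavor of the second Bayesian step can be conveyed with a much simpler, self-contained sketch: an expert prior on a contamination probability updated with monitoring data via a conjugate Beta-Binomial calculation. All numbers are hypothetical, and the paper itself propagates such updates through a full network of the broiler chain rather than a single node.

```python
# Conjugate Beta-Binomial update of an expert prior with downstream monitoring data.
from scipy import stats

# Expert prior on the probability that a broiler carcass is contaminated (assumption).
a_prior, b_prior = 2.0, 8.0            # Beta(2, 8): prior mean 0.2

# Monitoring data collected downstream in the food chain (illustrative counts).
positives, samples = 57, 200

posterior = stats.beta(a_prior + positives, b_prior + samples - positives)
print(f"posterior mean contamination probability: {posterior.mean():.3f}")
print(f"95% credible interval: {posterior.ppf([0.025, 0.975]).round(3)}")
```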

5.
Louis Anthony Cox Jr. 《Risk analysis》2009,29(12):1664-1671
Do pollution emissions from livestock operations increase infant mortality rate (IMR)? A recent regression analysis of changes in IMR against changes in aggregate “animal units” (a weighted sum of cattle, pig, and poultry numbers) over time, for counties throughout the United States, suggested the provocative conclusion that they do: “[A] doubling of production leads to a 7.4% increase in infant mortality.” Yet, we find that regressing IMR changes against changes in specific components of “animal units” (cattle, pigs, and broilers) at the state level reveals statistically significant negative associations between changes in livestock production (especially, cattle production) and changes in IMR. We conclude that statistical associations between livestock variables and IMR variables are very sensitive to modeling choices (e.g., selection of explanatory variables, and use of specific animal types vs. aggregate “animal units”). Such associations, whether positive or negative, do not warrant causal interpretation. We suggest that standard methods of quantitative risk assessment (QRA), including emissions release (source) models, fate and transport modeling, exposure assessment, and dose-response modeling, really are important—and indeed, perhaps, essential—for drawing valid causal inferences about health effects of exposures to guide sound, well-informed public health risk management policy. Reduced-form regression models, which skip most or all of these steps, can only quantify statistical associations (which may be due to model specification, variable selection, residual confounding, or other noncausal factors). Sound risk management requires the extra work needed to identify and model valid causal relations.

6.
Adverse outcome pathway Bayesian networks (AOPBNs) are a promising avenue for developing predictive toxicology and risk assessment tools based on adverse outcome pathways (AOPs). Here, we describe a process for developing AOPBNs. AOPBNs use causal networks and Bayesian statistics to integrate evidence across key events. In this article, we use our AOPBN to predict the occurrence of steatosis under different chemical exposures. Since it is an expert-driven model, we use external data (i.e., data not used for modeling) from the literature to validate predictions of the AOPBN model. The AOPBN accurately predicts steatosis for the chemicals from our external data. In addition, we demonstrate how end users can utilize the model to simulate the confidence (based on posterior probability) associated with predicting steatosis. We demonstrate how the network topology impacts predictions across the AOPBN, and how the AOPBN helps us identify the most informative key events that should be monitored for predicting steatosis. We close with a discussion of how the model can be used to predict potential effects of mixtures and how to model susceptible populations (e.g., where a mutation or stressor may change the conditional probability tables in the AOPBN). Using this approach for developing expert AOPBNs will facilitate the prediction of chemical toxicity, facilitate the identification of assay batteries, and greatly improve chemical hazard screening strategies.
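As a toy illustration of reading a posterior "confidence" off such a network, the sketch below integrates evidence from two hypothetical key-event assays with Bayes' rule under a conditional-independence assumption; the numbers, key-event names, and structure are invented and far simpler than the steatosis AOPBN.

```python
# Toy evidence integration: posterior probability of the adverse outcome given two
# key-event observations, assuming the key events are conditionally independent
# given the outcome. All probabilities are illustrative assumptions.
prior = 0.10                                   # prior P(steatosis) for an untested chemical

# (P(key event observed | steatosis), P(key event observed | no steatosis))
likelihoods = {
    "nuclear receptor activation": (0.90, 0.20),
    "lipid accumulation assay":    (0.80, 0.10),
}

evidence = {"nuclear receptor activation": True, "lipid accumulation assay": True}

p_yes, p_no = prior, 1.0 - prior
for key_event, observed in evidence.items():
    p_ke_yes, p_ke_no = likelihoods[key_event]
    p_yes *= p_ke_yes if observed else 1 - p_ke_yes
    p_no *= p_ke_no if observed else 1 - p_ke_no

posterior = p_yes / (p_yes + p_no)
print(f"Posterior P(steatosis | evidence) = {posterior:.3f}")
```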

7.
We examined drivers of article citations using 776 articles that were published from 1990 to 2012 in a broad-based and high-impact social sciences journal, The Leadership Quarterly. These articles had 1,191 unique authors who, at the time of their most recent article in our dataset, had published a total of 16,817 articles and received 284,777 citations. Our models explained 66.6% of the variance in citations and showed that quantitative, review, method, and theory articles were significantly more cited than were qualitative articles or agent-based simulations. For quantitative articles, which constituted the majority of the sample, our model explained 80.3% of the variance in citations; some methods (e.g., use of SEM) and designs (e.g., meta-analysis), as well as theoretical approaches (e.g., use of transformational, charismatic, or visionary type-leadership theories) predicted higher article citations. Regarding statistical conclusion validity of quantitative articles, articles having endogeneity threats received significantly fewer citations than did those using a more robust design or an estimation procedure that ensured correct causal estimation. We make several general recommendations on how to improve research practice and article citations.

8.
When they do not use formal quantitative risk assessment methods, many scientists (like other people) make mistakes and exhibit biases in reasoning about causation, if‐then relations, and evidence. Decision‐related conclusions or causal explanations are reached prematurely based on narrative plausibility rather than adequate factual evidence. Then, confirming evidence is sought and emphasized, but disconfirming evidence is ignored or discounted. This tendency has serious implications for health‐related public policy discussions and decisions. We provide examples occurring in antimicrobial health risk assessments, including a case study of a recently reported positive relation between virginiamycin (VM) use in poultry and risk of resistance to VM‐like (streptogramin) antibiotics in humans. This finding has been used to argue that poultry consumption causes increased resistance risks, that serious health impacts may result, and therefore use of VM in poultry should be restricted. However, the original study compared healthy vegetarians to hospitalized poultry consumers. Our examination of the same data using conditional independence tests for potential causality reveals that poultry consumption acted as a surrogate for hospitalization in this study. After accounting for current hospitalization status, no evidence remains supporting a causal relationship between poultry consumption and increased streptogramin resistance. This example emphasizes both the importance and the practical possibility of analyzing and presenting quantitative risk information using data analysis techniques (such as Bayesian model averaging (BMA) and conditional independence tests) that are as free as possible from potential selection, confirmation, and modeling biases.
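The logic of the conditional independence check can be sketched with synthetic counts (illustrative only, not the study's data): an exposure-outcome association that appears in the pooled table vanishes once hospitalization status is held fixed.

```python
# Stratified chi-square check: exposure (poultry) vs. outcome (resistance), within
# strata of the suspected confounder (hospitalization). Counts are fabricated so the
# pooled association is driven entirely by hospitalization.
import numpy as np
from scipy.stats import chi2_contingency

# Rows: poultry consumer yes/no; columns: resistant yes/no (hypothetical counts).
strata = {
    "hospitalized":     np.array([[30, 70], [ 6, 14]]),
    "not hospitalized": np.array([[ 5, 95], [10, 190]]),
}

# Marginal (pooled) table ignores hospitalization status and shows an association.
pooled = sum(strata.values())
chi2, p, _, _ = chi2_contingency(pooled)
print(f"Pooled association:  chi2={chi2:.2f}, p={p:.3f}")

# Within each stratum the exposure-outcome association disappears.
for name, table in strata.items():
    chi2, p, _, _ = chi2_contingency(table)
    print(f"{name:>16}:  chi2={chi2:.2f}, p={p:.3f}")
```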

9.
10.
《Risk analysis》2018,38(7):1474-1489
Complex statistical models fitted to data from studies of atomic bomb survivors are used to estimate the human health effects of ionizing radiation exposures. We describe and illustrate an approach to estimate population risks from ionizing radiation exposure that relaxes many assumptions about radiation‐related mortality. The approach draws on developments in methods for causal inference. The results offer a different way to quantify radiation's effects and show that conventional estimates of the population burden of excess cancer at high radiation doses are driven strongly by projecting outside the range of current data. Summary results obtained using the proposed approach are similar in magnitude to those obtained using conventional methods, although estimates of radiation‐related excess cancers differ for many age, sex, and dose groups. At low doses relevant to typical exposures, the strength of evidence in data is surprisingly weak. Statements regarding human health effects at low doses rely strongly on the use of modeling assumptions.

11.
In quantitative uncertainty analysis, it is essential to define rigorously the endpoint or target of the assessment. Two distinctly different approaches using Monte Carlo methods are discussed: (1) the end point is a fixed but unknown value (e.g., the maximally exposed individual, the average individual, or a specific individual) or (2) the end point is an unknown distribution of values (e.g., the variability of exposures among unspecified individuals in the population). In the first case, values are sampled at random from distributions representing various "degrees of belief" about the unknown "fixed" values of the parameters to produce a distribution of model results. The distribution of model results represents a subjective confidence statement about the true but unknown assessment end point. The important input parameters are those that contribute most to the spread in the distribution of the model results. In the second case, Monte Carlo calculations are performed in two dimensions producing numerous alternative representations of the true but unknown distribution. These alternative distributions permit subjective confidence statements to be made from two perspectives: (1) for the individual exposure occurring at a specified fractile of the distribution or (2) for the fractile of the distribution associated with a specified level of individual exposure. The relative importance of input parameters will depend on the fractile or exposure level of interest. The quantification of uncertainty for the simulation of a true but unknown distribution of values represents the state-of-the-art in assessment modeling.
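A minimal sketch of such a two-dimensional Monte Carlo calculation (with entirely hypothetical exposure distributions) is shown below: the outer loop samples the uncertain parameters, the inner loop samples inter-individual variability, and a subjective confidence interval is then read off for a chosen fractile of the variability distribution.

```python
# Two-dimensional Monte Carlo: outer dimension = parameter uncertainty ("degree of
# belief"), inner dimension = variability among individuals. All distributions are
# illustrative assumptions.
import numpy as np

rng = np.random.default_rng(42)
n_uncertainty, n_variability = 200, 5000

p95_exposures = np.empty(n_uncertainty)
for i in range(n_uncertainty):
    # Outer: uncertain (fixed but unknown) parameters of the exposure model.
    gm = rng.normal(1.0, 0.2)        # uncertain log-scale mean of intake
    gsd = rng.uniform(0.3, 0.6)      # uncertain log-scale spread across individuals
    # Inner: variability of exposure among individuals in the population.
    exposures = rng.lognormal(gm, gsd, n_variability)
    p95_exposures[i] = np.quantile(exposures, 0.95)

lo, hi = np.quantile(p95_exposures, [0.05, 0.95])
print(f"90% subjective confidence interval for the 95th-percentile exposure: [{lo:.2f}, {hi:.2f}]")
```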

12.
This article develops a methodology for quantifying model risk in quantile risk estimates. The application of quantile estimates to risk assessment has become common practice in many disciplines, including hydrology, climate change, statistical process control, insurance and actuarial science, and the uncertainty surrounding these estimates has long been recognized. Our work is particularly important in finance, where quantile estimates (called Value‐at‐Risk) have been the cornerstone of banking risk management since the mid 1980s. A recent amendment to the Basel II Accord recommends additional market risk capital to cover all sources of “model risk” in the estimation of these quantiles. We provide a novel and elegant framework whereby quantile estimates are adjusted for model risk, relative to a benchmark which represents the state of knowledge of the authority that is responsible for model risk. A simulation experiment in which the degree of model risk is controlled illustrates how to quantify Value‐at‐Risk model risk and compute the required regulatory capital add‐on for banks. An empirical example based on real data shows how the methodology can be put into practice, using only two time series (daily Value‐at‐Risk and daily profit and loss) from a large bank. We conclude with a discussion of potential applications to nonfinancial risks.
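The core idea, a quantile estimate adjusted for model risk relative to a benchmark, can be sketched as follows. The code uses synthetic profit-and-loss data and a simple empirical benchmark rather than the authors' full framework, so every number and distributional choice is an illustration.

```python
# Model-risk adjustment sketch: a 99% Value-at-Risk from a (misspecified) normal
# model is compared against a benchmark quantile, and the gap is reported as an
# add-on. Data and choices are illustrative assumptions.
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
pnl = stats.t.rvs(df=4, scale=1.0, size=2500, random_state=rng)   # heavy-tailed daily P&L

alpha = 0.99
# Risk model: normal with parameters fitted to the P&L history (a misspecification here).
model_var = -stats.norm.ppf(1 - alpha, loc=pnl.mean(), scale=pnl.std(ddof=1))
# Benchmark: the empirical quantile of the same P&L series.
benchmark_var = -np.quantile(pnl, 1 - alpha)

add_on = max(benchmark_var - model_var, 0.0)
print(f"model VaR={model_var:.2f}, benchmark VaR={benchmark_var:.2f}, model-risk add-on={add_on:.2f}")
```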

13.
Fault diagnosis includes the main task of classification. Bayesian networks (BNs) present several advantages in the classification task, and previous works have suggested their use as classifiers. Because a classifier is often only one part of a larger decision process, this article proposes, for industrial process diagnosis, the use of a Bayesian method called dynamic Markov blanket classifier that has as its main goal the induction of accurate Bayesian classifiers having dependable probability estimates and revealing actual relationships among the most relevant variables. In addition, a new method, named variable ordering multiple offspring sampling capable of inducing a BN to be used as a classifier, is presented. The performance of these methods is assessed on the data of a benchmark problem known as the Tennessee Eastman process. The obtained results are compared with naive Bayes and tree augmented network classifiers, and confirm that both proposed algorithms can provide good classification accuracies as well as knowledge about relevant variables.
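For context, the sketch below shows the kind of naive Bayes baseline that the proposed classifiers are benchmarked against, trained on synthetic two-class data standing in for Tennessee Eastman sensor readings; scikit-learn is assumed to be available, and the fault pattern is invented for illustration.

```python
# Naive Bayes fault-classification baseline on synthetic process data.
import numpy as np
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
n = 1000
X_normal = rng.normal(0.0, 1.0, size=(n, 10))          # normal operation
X_fault = rng.normal(0.5, 1.2, size=(n, 10))           # a simulated fault shifts/widens sensors
X = np.vstack([X_normal, X_fault])
y = np.array([0] * n + [1] * n)                        # 0 = normal, 1 = fault

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = GaussianNB().fit(X_tr, y_tr)
print(f"fault-classification accuracy: {accuracy_score(y_te, clf.predict(X_te)):.3f}")
```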

14.
Aggregating information from multiple sources plays an important role in improving the credibility of statistical data in natural-disaster settings, but the diversity of information channels easily produces inconsistent and incompatible data types, forming grey heterogeneous data sequences. This paper applies grey-system modeling techniques to the problem of forecasting with grey heterogeneous data. First, the grey heterogeneous data are normalized on the basis of their "kernel" and "degree of greyness." Then, a DGM(1,1) model is built for the kernel sequence of the grey heterogeneous data; taking the kernel as the basis and invoking the axiom of non-decreasing greyness, the information domain corresponding to the largest degree of greyness in the sequence is adopted as the information domain of the forecast, from which a grey heterogeneous data prediction model is derived and constructed. Finally, the model is applied to forecasting tent demand after an earthquake. The results extend the objects of traditional grey simulation and prediction models from "homogeneous data" to "heterogeneous data," which helps enrich and refine the theory of grey simulation and prediction models and improve the efficiency of natural-disaster relief.
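A minimal sketch of the DGM(1,1) step applied to a kernel sequence is given below; the series is invented, and the greyness-based information domain placed around the point forecast is not reproduced here.

```python
# DGM(1,1) fitted to a hypothetical "kernel" series: fit x1(k+1) = b1*x1(k) + b2 on the
# accumulated (1-AGO) series by least squares, extrapolate one step, then recover the
# raw-series forecast by inverse accumulation (IAGO).
import numpy as np

kernel = np.array([820.0, 876.0, 941.0, 1010.0, 1087.0])   # hypothetical demand kernels

x1 = np.cumsum(kernel)                                      # 1-AGO series
B = np.column_stack([x1[:-1], np.ones(len(x1) - 1)])
beta1, beta2 = np.linalg.lstsq(B, x1[1:], rcond=None)[0]    # least-squares coefficients

x1_next = beta1 * x1[-1] + beta2                            # one-step-ahead AGO value
forecast = x1_next - x1[-1]                                 # IAGO back to the raw series
print(f"DGM(1,1) coefficients: beta1={beta1:.4f}, beta2={beta2:.2f}")
print(f"One-step-ahead kernel forecast: {forecast:.1f}")
```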

15.
Use of similar or identical antibiotics in both human and veterinary medicine has come under increasing scrutiny by regulators concerned that bacteria resistant to animal antibiotics will infect people and resist treatment with similar human antibiotics, leading to excess illnesses and deaths. Scientists, regulators, and interest groups in the United States and Europe have urged bans on nontherapeutic and some therapeutic uses of animal antibiotics to protect human health. Many regulators and public health experts have also expressed dissatisfaction with the perceived limitations of quantitative risk assessment and have proposed alternative qualitative and judgmental approaches ranging from "attributable fraction" estimates to risk management recommendations based on the precautionary principle or on expert judgments about the importance of classes of compounds in human medicine. This article presents a more traditional quantitative risk assessment of the likely human health impacts of continuing versus withdrawing use of fluoroquinolones and macrolides in production of broiler chickens in the United States. An analytic framework is developed and applied to available data. It indicates that withdrawing animal antibiotics can cause far more human illness-days than it would prevent: the estimated human BENEFIT:RISK health ratio for human health impacts of continued animal antibiotic use exceeds 1,000:1 in many cases. This conclusion is driven by a hypothesized causal sequence in which withdrawing animal antibiotic use increases illnesses rates in animals, microbial loads in servings from the affected animals, and hence human health risks. This potentially important aspect of human health risk assessment for animal antibiotics has not previously been quantified.

16.
Weight of Evidence: A Review of Concept and Methods
Douglas L. Weed 《Risk analysis》2005,25(6):1545-1557
"Weight of evidence" (WOE) is a common term in the published scientific and policy-making literature, most often seen in the context of risk assessment (RA). Its definition, however, is unclear. A systematic review of the scientific literature was undertaken to characterize the concept. For the years 1994 through 2004, PubMed was searched for publications in which "weight of evidence" appeared in the abstract and/or title. Of the 276 papers that met these criteria, 92 were selected for review: 71 papers published in 2003 and 2004 (WOE appeared in abstract/title) and 21 from 1994 through 2002 (WOE appeared in title). WOE has three characteristic uses in this literature: (1) metaphorical, where WOE refers to a collection of studies or to an unspecified methodological approach; (2) methodological, where WOE points to established interpretative methodologies (e.g., systematic narrative review, meta-analysis, causal criteria, and/or quality criteria for toxicological studies) or where WOE means that "all" rather than some subset of the evidence is examined, or rarely, where WOE points to methods using quantitative weights for evidence; and (3) theoretical, where WOE serves as a label for a conceptual framework. Several problems are identified: the frequent lack of definition of the term "weight of evidence," multiple uses of the term and a lack of consensus about its meaning, and the many different kinds of weights, both qualitative and quantitative, which can be used in RA. A practical recommendation emerges: the WOE concept and its associated methods should be fully described when used. A research agenda should examine the advantages of quantitative versus qualitative weighting schemes, how best to improve existing methods, and how best to combine those methods (e.g., epidemiology's causal criteria with toxicology's quality criteria).  相似文献   

17.
A manufacturing optimization strategy is developed and demonstrated, which combines an asset utilization model and a process optimization framework with multivariate statistical analysis in a systematic manner to focus and drive process improvement activities. Although this manufacturing strategy is broadly applicable, the approach is discussed with respect to a polymer sheet manufacturing operation. The asset utilization (AU) model demonstrates that efficient equipment utilization can be monitored quantitatively and improvement opportunities identified so that the greatest benefit to the operation can be obtained. The process optimization framework, comprising three parallel activities and a designed experiment, establishes the process-product relationship. The overall strategy of predictive model development provided by the parallel activities comprising the optimization framework is to synthesize a model from existing data, both qualitative and quantitative, using canonical discriminant analysis, to identify the main-effect variables affecting the principal efficiency constraints identified using AU; operator knowledge and order-of-magnitude calculations are then employed to refine this model using designed experiments, where appropriate, to facilitate the development of a quantitative, proactive optimization strategy for eliminating the constraints. Most importantly, this overall strategy plays a significant role in demonstrating, and facilitating employee acceptance of the fact, that the manufacturing operation has evolved from an experience-based process to one based on quantifiable science.

18.
Given the heavy-tailed distribution of operational risk, and in line with the Basel Accord requirements, this paper uses the peaks-over-threshold (POT) extreme value model to estimate the marginal distribution of each operational risk cell, and then uses multivariate Copula functions to characterize the dependence among these cells and to compute Value-at-Risk. An empirical analysis of operational risk data of Chinese commercial banks from 1990 to 2010 shows that the Clayton Copula best captures the dependence structure among the risk cells, and that the VaR computed under Copula-based dependence is about 32.3% lower than the VaR obtained by simple summation. Measuring operational risk dependence with Copula functions therefore not only improves estimation accuracy but also realizes portfolio diversification effects, lowering operational risk capital requirements and helping commercial banks improve profitability.
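A minimal sketch of the aggregation idea is shown below, using synthetic LogNormal marginals, an arbitrary Clayton dependence parameter, and plain Monte Carlo quantiles in place of the paper's POT-based marginals; it compares the aggregate VaR under the copula with the simple sum of stand-alone cell VaRs.

```python
# Clayton-copula aggregation of two operational-loss cells (all parameters illustrative).
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
n, theta = 100_000, 2.0                        # Clayton dependence parameter (assumption)

# Sample (u1, u2) from a bivariate Clayton copula by conditional inversion.
u1 = rng.random(n)
v = rng.random(n)
u2 = (u1 ** (-theta) * (v ** (-theta / (1 + theta)) - 1) + 1) ** (-1 / theta)

# Map the uniforms through heavy-tailed LogNormal marginals (hypothetical cells).
loss1 = stats.lognorm.ppf(u1, s=1.6, scale=np.exp(11))
loss2 = stats.lognorm.ppf(u2, s=1.4, scale=np.exp(10))

q = 0.999
var_sum = np.quantile(loss1 + loss2, q)                    # aggregate VaR under the copula
var_add = np.quantile(loss1, q) + np.quantile(loss2, q)    # simple summation of cell VaRs
print(f"copula VaR={var_sum:,.0f}, summed VaR={var_add:,.0f}, "
      f"diversification benefit={1 - var_sum / var_add:.1%}")
```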

19.
This article presents an iterative six‐step risk analysis methodology based on hybrid Bayesian networks (BNs). In typical risk analysis, systems are usually modeled as discrete and Boolean variables with constant failure rates via fault trees. Nevertheless, in many cases, it is not possible to perform an efficient analysis using only discrete and Boolean variables. The approach put forward by the proposed methodology makes use of BNs and incorporates recent developments that facilitate the use of continuous variables whose values may have any probability distributions. Thus, this approach makes the methodology particularly useful in cases where the available data for quantification of hazardous events probabilities are scarce or nonexistent, there is dependence among events, or when nonbinary events are involved. The methodology is applied to the risk analysis of a regasification system of liquefied natural gas (LNG) on board an FSRU (floating, storage, and regasification unit). LNG is becoming an important energy source option and the world's capacity to produce LNG is surging. Large reserves of natural gas exist worldwide, particularly in areas where the resources exceed the demand. Thus, this natural gas is liquefied for shipping and the storage and regasification process usually occurs at onshore plants. However, a new option for LNG storage and regasification has been proposed: the FSRU. As very few FSRUs have been put into operation, relevant failure data on FSRU systems are scarce. The results show the usefulness of the proposed methodology for cases where the risk analysis must be performed under considerable uncertainty.
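To give a flavor of what a hybrid (continuous-plus-discrete) node buys over a constant-failure-rate fault tree, the sketch below (hypothetical numbers, not the FSRU study's data) propagates an uncertain, continuously distributed leak frequency through a discrete ignition event by simple Monte Carlo.

```python
# Hybrid-BN flavored sketch: a continuous parent (uncertain leak frequency) feeding a
# discrete child (ignition given a leak). All numbers are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(11)
n = 100_000

# Continuous node: annual leak frequency, uncertain because failure data are scarce.
leak_rate = rng.lognormal(mean=np.log(0.05), sigma=0.8, size=n)   # events per year
p_leak_per_year = 1.0 - np.exp(-leak_rate)                        # Poisson: P(>=1 leak in a year)

# Discrete node: ignition given at least one leak (fixed conditional probability).
p_ignition_given_leak = 0.1

p_fire = p_leak_per_year * p_ignition_given_leak
print(f"Mean annual probability of an ignited leak: {p_fire.mean():.4f}")
print(f"95th percentile (parameter uncertainty): {np.quantile(p_fire, 0.95):.4f}")
```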

20.