Similar Literature
20 similar documents found
1.
Matthew Revie. Risk Analysis, 2011, 31(7): 1120-1132
Traditional statistical procedures for estimating the probability of an event yield an estimate of zero when no events have been realized. Alternative inferential procedures have been proposed for the situation where zero events have been realized, but these are often ad hoc, relying on methods selected according to the data that have been realized. Such data-dependent inference decisions violate fundamental statistical principles, resulting in estimation procedures whose benefits are difficult to assess. In this article, we propose estimating the probability of an event occurring through minimax inference on the probability that future samples of equal size realize no more events than the data on which the inference is based. Although motivated by inference on rare events, the method is not restricted to zero-event data and closely approximates the maximum likelihood estimate (MLE) for nonzero data. The minimax procedure provides a risk-averse inferential procedure when no events are realized. A comparison is made with the MLE, and regions of the underlying probability are identified where this approach is superior. Moreover, a comparison is made with three standard approaches to supporting inference where no event data are realized, which we argue are unduly pessimistic. We show that for zero-event situations the estimator can be closely approximated by a simple function of n, the number of trials.
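For context on the zero-event problem, a minimal sketch (not the paper's minimax estimator) of the MLE and two standard conventions often applied when zero events are observed, the "rule of three" upper bound and the Jeffreys-prior posterior mean; the trial counts are illustrative:

```python
def zero_event_baselines(n):
    """Standard estimates/bounds when 0 events are observed in n trials."""
    mle = 0.0                        # maximum likelihood estimate: exactly zero
    rule_of_three = 3.0 / n          # approximate 95% upper confidence bound
    jeffreys_mean = 0.5 / (n + 1.0)  # posterior mean under a Beta(1/2, 1/2) prior
    return {"MLE": mle, "rule of three": rule_of_three, "Jeffreys mean": jeffreys_mean}

for n in (10, 50, 200):
    print(n, zero_event_baselines(n))
```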

2.
Adverse outcome pathway Bayesian networks (AOPBNs) are a promising avenue for developing predictive toxicology and risk assessment tools based on adverse outcome pathways (AOPs). Here, we describe a process for developing AOPBNs, which use causal networks and Bayesian statistics to integrate evidence across key events. In this article, we use our AOPBN to predict the occurrence of steatosis under different chemical exposures. Because it is an expert-driven model, we use external data (i.e., data not used for modeling) from the literature to validate its predictions. The AOPBN accurately predicts steatosis for the chemicals in our external data. In addition, we demonstrate how end users can use the model to simulate the confidence (based on posterior probability) associated with predicting steatosis. We show how the network topology affects predictions across the AOPBN, and how the AOPBN helps identify the most informative key events to monitor for predicting steatosis. We close with a discussion of how the model can be used to predict potential effects of mixtures and how to model susceptible populations (e.g., where a mutation or stressor may change the conditional probability tables in the AOPBN). Using this approach to develop expert AOPBNs will facilitate chemical toxicity prediction, support the identification of assay batteries, and greatly improve chemical hazard screening strategies.
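A minimal sketch of the kind of calculation such a network performs; the three-node chain (exposure, key event, steatosis) and all conditional probabilities below are hypothetical illustration values, not the paper's expert-elicited tables:

```python
# Hypothetical two-step chain: exposure -> key event (KE) -> steatosis.
# All probabilities are made-up illustration values.
p_ke_given_exposure = {True: 0.8, False: 0.1}      # P(KE = 1 | exposure)
p_steatosis_given_ke = {True: 0.7, False: 0.05}    # P(steatosis = 1 | KE)

def p_steatosis(exposed: bool) -> float:
    """Marginalize over the unobserved key event."""
    p_ke = p_ke_given_exposure[exposed]
    return p_ke * p_steatosis_given_ke[True] + (1 - p_ke) * p_steatosis_given_ke[False]

print("P(steatosis | exposed)     =", round(p_steatosis(True), 3))
print("P(steatosis | not exposed) =", round(p_steatosis(False), 3))
```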

3.
A Flexible Count Data Regression Model for Risk Analysis
In many cases, risk and reliability analyses involve estimating the probabilities of discrete events such as hardware failures and occurrences of disease or death. There is often additional information, in the form of explanatory variables, that can be used to help estimate the likelihood of different numbers of events in the future through an appropriate regression model, such as a generalized linear model. However, existing generalized linear models (GLMs) are limited in their ability to handle the variance structures often encountered with count data in risk and reliability analysis. In particular, standard models cannot handle both underdispersed data (variance less than the mean) and overdispersed data (variance greater than the mean) in a single coherent modeling framework. This article presents a new GLM based on a reformulation of the Conway-Maxwell Poisson (COM) distribution that is useful for both underdispersed and overdispersed count data, and demonstrates this model by applying it to the assessment of electric power system reliability. The results show that the proposed COM GLM fits overdispersed data sets as well as the commonly used existing models while outperforming those models on underdispersed data sets.
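A minimal sketch of the Conway-Maxwell Poisson probability mass function and of how its dispersion parameter nu moves the variance above or below the mean (nu = 1 recovers the Poisson). The truncation point and parameter values are illustrative, and this is the basic distribution rather than the paper's specific GLM reparameterization:

```python
import numpy as np

def com_poisson_pmf(lam, nu, max_y=200):
    """COM-Poisson pmf on 0..max_y: weights lam**y / (y!)**nu, normalized by truncation."""
    y = np.arange(max_y + 1)
    log_fact = np.concatenate(([0.0], np.cumsum(np.log(np.arange(1, max_y + 1)))))
    log_w = y * np.log(lam) - nu * log_fact
    w = np.exp(log_w - log_w.max())          # stabilize before normalizing
    return y, w / w.sum()

for nu in (0.5, 1.0, 2.0):                   # over-, equi-, underdispersed
    y, p = com_poisson_pmf(lam=3.0, nu=nu)
    mean = (y * p).sum()
    var = ((y - mean) ** 2 * p).sum()
    print(f"nu={nu}: mean={mean:.2f}, var={var:.2f}, dispersion index={var / mean:.2f}")
```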

4.
This study presents a tree-based logistic regression approach to assessing work zone casualty risk, defined as the probability of a vehicle occupant being killed or injured in a work zone crash. First, a decision tree approach is employed to determine the tree structure and the interacting factors. Based on the Michigan M-94\I-94\I-94BL\I-94BR highway work zone crash data, an optimal tree comprising four leaf nodes is determined, and the interacting factors are found to be airbag, occupant identity (i.e., driver or passenger), and gender. The data are then split into four groups according to the tree structure. Finally, a logistic regression analysis is conducted separately for each group. The results show that the proposed approach outperforms the pure decision tree model because the former can examine the marginal effects of risk factors. Compared with the pure logistic regression method, the proposed approach avoids variable interaction effects and thus significantly improves prediction accuracy.
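A minimal sketch of the tree-then-logistic idea on synthetic data: a small decision tree defines the groups (leaves), and a separate logistic regression is then fit within each leaf. The feature names and simulated data are illustrative only, not the Michigan crash data:

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 2000
X = rng.normal(size=(n, 3))                      # stand-ins for airbag, occupant role, gender codes
logit = 0.5 * X[:, 0] - 0.8 * X[:, 1] * (X[:, 0] > 0) + 0.3 * X[:, 2]
y = rng.binomial(1, 1 / (1 + np.exp(-logit)))    # 1 = killed or injured (synthetic)

# Step 1: a shallow tree defines the interacting groups (four leaves).
tree = DecisionTreeClassifier(max_leaf_nodes=4, random_state=0).fit(X, y)
leaf_id = tree.apply(X)

# Step 2: a separate logistic regression within each leaf.
for leaf in np.unique(leaf_id):
    mask = leaf_id == leaf
    model = LogisticRegression().fit(X[mask], y[mask])
    print(f"leaf {leaf}: n={mask.sum()}, coefficients={np.round(model.coef_[0], 2)}")
```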

5.
In general, two types of dependence need to be considered when estimating the probability of the top event (TE) of a fault tree (FT): "objective" dependence between the (random) occurrences of different basic events (BEs) in the FT, and "state-of-knowledge" (epistemic) dependence between estimates of the epistemically uncertain probabilities of some BEs of the FT model. In this article, we study the effects of objective and epistemic dependences on the TE probability. The well-known Fréchet bounds and the distribution envelope determination (DEnv) method are used to model all kinds of (possibly unknown) objective and epistemic dependences, respectively. For exemplification, the analyses are carried out on a FT with six BEs. Results show that both types of dependence significantly affect the TE probability; however, the effects of epistemic dependence are likely to be overwhelmed by those of objective dependence (if present).
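A minimal sketch of the Fréchet bounds for an AND gate and an OR gate with two basic events; the bounds hold whatever the dependence between the events, and the basic-event probabilities used here are illustrative:

```python
def frechet_and(p_a, p_b):
    """Bounds on P(A and B) under unknown dependence."""
    return max(0.0, p_a + p_b - 1.0), min(p_a, p_b)

def frechet_or(p_a, p_b):
    """Bounds on P(A or B) under unknown dependence."""
    return max(p_a, p_b), min(1.0, p_a + p_b)

p_a, p_b = 0.02, 0.05            # illustrative basic-event probabilities
print("AND gate bounds:", frechet_and(p_a, p_b))
print("OR gate bounds: ", frechet_or(p_a, p_b))
print("Under independence, AND:", p_a * p_b, " OR:", p_a + p_b - p_a * p_b)
```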

6.
Operational risk management of autonomous vehicles in extreme environments is heavily dependent on expert judgments and, in particular, judgments of the likelihood that a failure mitigation action, via correction and prevention, will annul the consequences of a specific fault. However, extant research has not examined the reliability of experts in estimating the probability of failure mitigation. For systems operating in extreme environments, the probability of failure mitigation is taken as a proxy for the probability of a fault not reoccurring. Using a priori expert judgments for an autonomous underwater vehicle mission in the Arctic and a posteriori mission field data, we developed a generalized linear model that enabled us to investigate this relationship. We found that the probability of failure mitigation alone cannot be used as a proxy for the probability of a fault not reoccurring. We conclude that it is also essential to include the effort required to implement the failure mitigation when estimating the probability of a fault not reoccurring. The effort is the time taken by a person (measured in person-months) to execute the task required to implement the fault correction action. We show that once a modicum of operational data is obtained, it is possible to define a generalized linear logistic model to estimate the probability of a fault not reoccurring. We discuss how our findings are important to all autonomous vehicle operations and how similar operations can benefit from revising expert judgments of risk mitigation to take account of the effort required to reduce key risks.
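A minimal sketch of a logistic generalized linear model of this kind on synthetic data, with the expert-judged mitigation probability and the implementation effort (person-months) as predictors of whether a fault reoccurred; the data-generating values are illustrative, not the mission data:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 120
expert_p_mitigation = rng.uniform(0.3, 0.99, n)      # elicited probability of mitigation
effort_pm = rng.gamma(2.0, 1.5, n)                   # implementation effort in person-months

# Synthetic truth: reoccurrence is less likely with a higher judged mitigation
# probability and with more effort actually spent on the fix.
logit = 1.0 - 2.0 * expert_p_mitigation - 0.6 * effort_pm
reoccurred = rng.binomial(1, 1 / (1 + np.exp(-logit)))

X = sm.add_constant(np.column_stack([expert_p_mitigation, effort_pm]))
fit = sm.GLM(reoccurred, X, family=sm.families.Binomial()).fit()
print("coefficients (const, p_mitigation, effort):", np.round(fit.params, 2))
print("P(fault does not reoccur | p=0.9, effort=3 person-months) =",
      round(1 - fit.predict([[1.0, 0.9, 3.0]])[0], 3))
```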

7.
Using option and underlying asset price data, the objective (physical) and risk-neutral densities are estimated with a discrete-time EGARCH model and a continuous-time GARCH diffusion model, respectively, and the empirical pricing kernel is then derived. On this basis, and building on the rank-dependent expected utility model, the corresponding probability weighting function is constructed under a standard utility function form. An empirical study using price data for the Hong Kong Hang Seng Index and its index warrants shows that: (1) the empirical pricing kernel is not monotonically decreasing but exhibits a hump (non-monotonicity), i.e., the "pricing kernel puzzle"; (2) the empirical probability weighting function is S-shaped, indicating that market investors underweight tail-probability events and overweight medium- and high-probability events; and (3) the pricing kernel puzzle can be explained by a rank-dependent expected utility model with a standard utility function and an S-shaped probability weighting function.
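A minimal sketch of the pricing kernel computation as the discounted ratio of an estimated risk-neutral density to an estimated physical density on a grid of index returns; the two lognormal densities below are placeholders for the EGARCH and GARCH-diffusion estimates and are purely illustrative:

```python
import numpy as np
from scipy.stats import lognorm

# Illustrative stand-ins for the estimated densities of the gross index return S_T / S_0.
grid = np.linspace(0.6, 1.5, 400)
physical = lognorm(s=0.18, scale=np.exp(0.06))       # "objective" density (placeholder)
risk_neutral = lognorm(s=0.22, scale=np.exp(0.02))   # risk-neutral density (placeholder)

r, T = 0.02, 1.0                                     # risk-free rate and horizon
pricing_kernel = np.exp(-r * T) * risk_neutral.pdf(grid) / physical.pdf(grid)

# Standard utility theory predicts a monotonically decreasing kernel; the
# "pricing kernel puzzle" is an empirical hump in this curve.
for x in (0.8, 0.9, 1.0, 1.1, 1.2):
    i = np.argmin(np.abs(grid - x))
    print(f"gross return {x:.1f}: M = {pricing_kernel[i]:.3f}")
```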

8.
李庆  张虎 《中国管理科学》2020,28(10):43-53
This paper develops an improved nonparametric option pricing model, called the single-index nonparametric option pricing model. Whereas existing nonparametric regression option pricing models express the option price as a multivariate regression function of the individual pricing factors, the proposed model transforms the multiple factors into one composite variable, a single index, through a change of variables, yielding a univariate nonparametric regression of the option price on that index. The improved model reduces the dimensionality of multivariate nonparametric option pricing models and simplifies the computation; it also addresses the sample-size problem of nonparametric estimation by pooling options of multiple maturities through the single index, and it resolves the calendar effect present in existing nonparametric pricing models through maturity smoothing. An empirical analysis using SSE 50 ETF option data shows that both the in-sample estimates and the out-of-sample forecasts improve on the traditional Black-Scholes model, the semiparametric Black-Scholes model, and the multivariate nonparametric regression option pricing model.
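A minimal sketch of the single-index idea: compress several pricing factors into one index and run a univariate kernel (Nadaraya-Watson) regression of the option price on that index. The index definition (moneyness scaled by volatility and time to maturity) and the synthetic data are illustrative guesses, not the paper's specification:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 500
moneyness = rng.uniform(0.85, 1.15, n)        # S / K
tau = rng.uniform(0.05, 0.5, n)               # time to maturity in years
sigma = 0.25

# Hypothetical single index combining the factors (a stand-in for the paper's transform).
index = np.log(moneyness) / (sigma * np.sqrt(tau))
price = np.maximum(moneyness - 1, 0) + 0.05 * np.exp(-0.5 * index**2) + rng.normal(0, 0.005, n)

def nadaraya_watson(x0, x, y, h=0.15):
    """Univariate kernel regression estimate at x0 with a Gaussian kernel."""
    w = np.exp(-0.5 * ((x - x0) / h) ** 2)
    return np.sum(w * y) / np.sum(w)

for z in (-1.0, 0.0, 1.0):
    print(f"index={z:+.1f}: fitted price ~ {nadaraya_watson(z, index, price):.4f}")
```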

9.
Louis Anthony Cox, Jr. Risk Analysis, 2006, 26(6): 1581-1599
This article introduces an approach to estimating the uncertain potential effects on lung cancer risk of removing a particular constituent, cadmium (Cd), from cigarette smoke, given the useful but incomplete scientific information available about its modes of action. The approach considers normal cell proliferation; DNA repair inhibition in normal cells affected by initiating events; proliferation, promotion, and progression of initiated cells; and death or sparing of initiated and malignant cells as they are further transformed to become fully tumorigenic. Rather than estimating unmeasured model parameters by curve fitting to epidemiological or animal experimental tumor data, we attempt rough estimates of parameters based on their biological interpretations and comparison to corresponding genetic polymorphism data. The resulting parameter estimates are admittedly uncertain and approximate, but they suggest a portfolio approach to estimating impacts of removing Cd that gives usefully robust conclusions. This approach views Cd as creating a portfolio of uncertain health impacts that can be expressed as biologically independent relative risk factors having clear mechanistic interpretations. Because Cd can act through many distinct biological mechanisms, it appears likely (subjective probability greater than 40%) that removing Cd from cigarette smoke would reduce smoker risks of lung cancer by at least 10%, although it is possible (consistent with what is known) that the true effect could be much larger or smaller. Conservative estimates and assumptions made in this calculation suggest that the true impact could be greater for some smokers. This conclusion appears to be robust to many scientific uncertainties about Cd and smoking effects.
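A minimal sketch of the portfolio idea: treat removal of a constituent as acting through several independent relative-risk factors, sample each factor's uncertain effect, and estimate the probability that the combined risk reduction is at least 10%. All distributions and values are hypothetical illustrations, not the article's parameter estimates:

```python
import numpy as np

rng = np.random.default_rng(3)
n_sim = 100_000

# Hypothetical uncertain relative-risk factors (1.0 = no change) for distinct mechanisms
# affected by removing the constituent; each is sampled independently.
rr_proliferation = rng.uniform(0.85, 1.00, n_sim)
rr_dna_repair = rng.uniform(0.80, 1.00, n_sim)
rr_promotion = rng.uniform(0.90, 1.00, n_sim)

rr_total = rr_proliferation * rr_dna_repair * rr_promotion   # multiplicative portfolio
reduction = 1.0 - rr_total

print("mean risk reduction:", round(reduction.mean(), 3))
print("P(reduction >= 10%):", round((reduction >= 0.10).mean(), 3))
```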

10.
Standard errors of the coefficients of a logistic regression (a binary response model) based on the asymptotic formula are compared to those obtained from the bootstrap through Monte Carlo simulations. The computer-intensive bootstrap method, a nonparametric alternative to the asymptotic estimate, overestimates the true value of the standard errors, while the asymptotic formula underestimates it. However, for small samples the bootstrap estimates are substantially closer to the true value than their counterparts derived from the asymptotic formula. The methodology is discussed using two illustrative data sets. The first example deals with a logistic model explaining the log-odds of passing the ERA amendment by the 1982 deadline as a function of the percent of women legislators and the percent vote for Reagan. In the second example, the probability that an ingot is ready to roll is modeled using heating time and soaking time as explanatory variables. The results agree with those obtained from the simulations. The value of the study to better decision making through accurate statistical inference is discussed.
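A minimal sketch comparing asymptotic and bootstrap standard errors for a logistic regression on synthetic data; the sample size and data-generating values are illustrative, not the ERA or ingot data sets:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(4)
n = 40                                             # small sample, where the contrast matters most
x = rng.normal(size=n)
y = rng.binomial(1, 1 / (1 + np.exp(-(0.5 + 1.2 * x))))
X = sm.add_constant(x)

asymptotic_se = sm.Logit(y, X).fit(disp=0).bse     # SEs from the asymptotic information matrix

boot_params = []
for _ in range(500):                               # nonparametric bootstrap over observations
    idx = rng.integers(0, n, n)
    try:
        boot_params.append(sm.Logit(y[idx], X[idx]).fit(disp=0).params)
    except Exception:                              # skip degenerate resamples (e.g., separation)
        continue
bootstrap_se = np.std(boot_params, axis=0, ddof=1)

print("asymptotic SEs:", np.round(np.asarray(asymptotic_se), 3))
print("bootstrap SEs: ", np.round(bootstrap_se, 3))
```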

11.
A wide range of uncertainties is inevitably introduced during the process of performing a safety assessment of engineering systems. The impact of all these uncertainties must be addressed if the analysis is to serve as a tool in the decision-making process. Uncertainties present in the components of the model (input parameters or basic events) are propagated to quantify their impact on the final results. Several methods are available in the literature, namely, the method of moments, discrete probability analysis, Monte Carlo simulation, fuzzy arithmetic, and Dempster-Shafer theory. These methods differ in how uncertainty is characterized at the component level and how it is propagated to the system level, and each has desirable and undesirable features that make it more or less useful in different situations. In the probabilistic framework, which is most widely used, a probability distribution is used to characterize uncertainty. However, in situations in which one cannot specify (1) parameter values for input distributions, (2) precise probability distributions (shape), and (3) dependencies between input parameters, these methods have limitations and are not effective. In order to address some of these limitations, the article presents uncertainty analysis in the context of level-1 probabilistic safety assessment (PSA) based on a probability bounds (PB) approach. PB analysis combines probability theory and interval arithmetic to produce probability boxes (p-boxes), structures that allow rigorous and comprehensive propagation of uncertainty through the calculations. A practical case study is also carried out with a code developed using the PB approach, and the results are compared with those of a two-phase Monte Carlo simulation.
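A minimal sketch of the interval-arithmetic ingredient of probability bounds analysis: basic-event probabilities known only as intervals are propagated through independent AND/OR gates, giving an interval on the top-event probability. Full p-boxes bound entire CDFs; this simplified version bounds only scalar probabilities, and the interval values are illustrative:

```python
def and_gate(a, b):
    """Interval for P(A and B) assuming independence; a and b are (lo, hi) pairs."""
    return a[0] * b[0], a[1] * b[1]

def or_gate(a, b):
    """Interval for P(A or B) assuming independence."""
    return 1 - (1 - a[0]) * (1 - b[0]), 1 - (1 - a[1]) * (1 - b[1])

# Illustrative interval-valued basic-event probabilities.
be1, be2, be3 = (1e-3, 5e-3), (2e-3, 4e-3), (1e-2, 3e-2)

top = or_gate(and_gate(be1, be2), be3)   # TE = (BE1 AND BE2) OR BE3
print(f"top-event probability interval: [{top[0]:.4e}, {top[1]:.4e}]")
```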

12.
Quantitative risk assessment often begins with an estimate of the exposure or dose associated with a particular risk level, from which exposure levels posing low risk to populations can be extrapolated. For continuous exposures, this value, the benchmark dose, is often defined by a specified increase (or decrease) from the median or mean response at no exposure. This method of calculating the benchmark dose does not take the response distribution into account and, consequently, cannot be interpreted in terms of probability statements about the target population. We investigate quantile regression as an alternative to median or mean regression. By defining the dose–response quantile relationship and an impairment threshold, we specify a benchmark dose as the dose associated with a specified probability that the population will have a response equal to or more extreme than the specified impairment threshold. In addition, in an effort to minimize model uncertainty, we use Bayesian monotonic semiparametric regression to define the exposure–response quantile relationship, which gives the model flexibility in estimating the quantile dose–response function. We describe this methodology and apply it to both epidemiology and toxicology data.
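A minimal sketch of the quantile-based benchmark dose idea using a linear quantile regression in place of the article's Bayesian monotonic semiparametric model: fit the q-th response quantile as a function of dose, then find the dose at which that quantile crosses an impairment threshold. The data, quantile, and threshold are illustrative:

```python
import numpy as np
from statsmodels.regression.quantile_regression import QuantReg

rng = np.random.default_rng(5)
n = 300
dose = rng.uniform(0, 10, n)
response = 100 - 2.0 * dose + rng.normal(0, 8, n)     # synthetic decreasing response

# Fit the 5th percentile of the response as a linear function of dose.
X = np.column_stack([np.ones(n), dose])
fit = QuantReg(response, X).fit(q=0.05)
b0, b1 = fit.params

impairment_threshold = 75.0                           # responses below this count as impaired
# Benchmark dose: the dose at which the fitted 5th-percentile response reaches the
# threshold, i.e., 5% of the population is expected to be at or beyond impairment.
bmd = (impairment_threshold - b0) / b1
print(f"5th-percentile fit: {b0:.1f} + ({b1:.2f}) * dose   ->   BMD ~ {bmd:.2f}")
```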

13.
Count data are pervasive in many areas of risk analysis; deaths, adverse health outcomes, infrastructure system failures, and traffic accidents are all recorded as count events, for example. Risk analysts often wish to estimate the probability distribution for the number of discrete events as part of a risk assessment. Traditional count data regression models of the type often used in risk assessment for this problem suffer from limitations due to the assumed variance structure. A more flexible model based on the Conway-Maxwell Poisson (COM-Poisson) distribution was recently proposed, a model that has the potential to overcome the limitations of the traditional models. However, the statistical performance of this new model has not yet been fully characterized. This article assesses the performance of a maximum likelihood estimation method for fitting the COM-Poisson generalized linear model (GLM). The objectives of this article are to (1) characterize the parameter estimation accuracy of the MLE implementation of the COM-Poisson GLM, and (2) estimate the prediction accuracy of the COM-Poisson GLM using simulated data sets. The results of the study indicate that the COM-Poisson GLM is flexible enough to model under-, equi-, and overdispersed data sets with different sample mean values. The results also show that the COM-Poisson GLM yields accurate parameter estimates. The COM-Poisson GLM provides a promising and flexible approach for performing count data regression.
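A minimal sketch of the kind of simulation check described here: draw counts from a COM-Poisson distribution with known parameters, then recover them by maximizing the log-likelihood. This fits the two distribution parameters directly rather than a full GLM with covariates, and the true values, truncation point, and sample size are illustrative:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(6)
MAX_Y = 60
LOG_FACT = np.concatenate(([0.0], np.cumsum(np.log(np.arange(1, MAX_Y + 1)))))

def com_poisson_probs(lam, nu):
    """Truncated COM-Poisson pmf on 0..MAX_Y."""
    log_w = np.arange(MAX_Y + 1) * np.log(lam) - nu * LOG_FACT
    w = np.exp(log_w - log_w.max())
    return w / w.sum()

# Simulate counts with known parameters, then recover them by maximum likelihood.
true_lam, true_nu = 4.0, 1.5
y = rng.choice(np.arange(MAX_Y + 1), size=1000, p=com_poisson_probs(true_lam, true_nu))

def neg_loglik(theta):
    lam, nu = np.exp(theta)                  # log-parameterization keeps both positive
    return -np.sum(np.log(com_poisson_probs(lam, nu)[y] + 1e-300))

res = minimize(neg_loglik, x0=np.log([y.mean() + 1.0, 1.0]), method="Nelder-Mead")
print("true (lam, nu):", (true_lam, true_nu))
print("MLE  (lam, nu):", np.round(np.exp(res.x), 3))
```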

14.
A Bayesian forecasting model is developed to quantify uncertainty about the postflight state of a field-joint primary O-ring (not damaged or damaged), given the O-ring temperature at the time of launch of the space shuttle Challenger in 1986. The crux of this problem is the enormous extrapolation that must be performed: 23 previous shuttle flights were launched at temperatures between 53 °F and 81 °F, but the next launch is planned at 31 °F. The fundamental advantage of the Bayesian model is its theoretic structure, which remains correct over the entire sample space of the predictor and affords flexibility of implementation. A novel approach to extrapolating the input elements based on expert judgment is presented; it recognizes that extrapolation is equivalent to changing the conditioning of the model elements. The prior probability of O-ring damage can be assessed subjectively by experts following a nominal-interacting process in a group setting. The Bayesian model can output several posterior probabilities of O-ring damage, each conditional on the given temperature and on a different strength of the temperature-effect hypothesis. A lower bound on, or a value of, the posterior probability can be selected for decision making consistently with expert judgment, which encapsulates engineering information, knowledge, and experience. The Bayesian forecasting model is posed as a replacement for the logistic regression and the nonparametric approach advocated in earlier analyses of the Challenger O-ring data. A comparison demonstrates the inherent deficiency of generalized linear models for risk analyses that require (1) forecasting an event conditional on a predictor value outside the sampling interval, and (2) combining empirical evidence with expert judgment.
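A minimal sketch of the kind of Bayesian combination involved: a subjective prior probability of O-ring damage is updated by the likelihood of the launch temperature under "damage" versus "no damage", with the likelihood ratio varying by how strong a temperature effect is assumed. All numbers are hypothetical illustrations, not the article's elicited values:

```python
# Hypothetical prior and likelihood ratios for a 31 °F launch.
prior_damage = 0.2                     # expert prior P(damage) before seeing the temperature

# P(T near 31 °F | damage) / P(T near 31 °F | no damage) under different assumed
# strengths of the temperature effect (purely illustrative values).
likelihood_ratios = {"weak effect": 2.0, "moderate effect": 8.0, "strong effect": 30.0}

for hypothesis, lr in likelihood_ratios.items():
    prior_odds = prior_damage / (1 - prior_damage)
    posterior_odds = prior_odds * lr                  # Bayes' rule in odds form
    posterior = posterior_odds / (1 + posterior_odds)
    print(f"{hypothesis:16s}: P(damage | T = 31 °F) = {posterior:.2f}")
```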

15.
In Study 1, different groups of female students were randomly assigned to one of four probabilistic information formats. Five different levels of probability of a genetic disease in an unborn child were presented to participants (a within-subject factor). After the presentation of each probability level, participants were asked to indicate the level of pain they would tolerate to avoid the disease (in their unborn child), their subjective evaluation of the disease risk, and their subjective evaluation of being worried by this risk. The results of Study 1 confirmed the hypothesis that an experience-based probability format decreases the subjective sense of worry about the disease, thus, presumably, weakening the tendency to overrate the probability of rare events. Study 2 showed that for emotionally laden stimuli, the experience-based probability format resulted in higher sensitivity to probability variations than the other formats of probabilistic information. These advantages of the experience-based probability format are interpreted in terms of two systems of information processing, the rational-deliberative versus the affective-experiential, and the principle of stimulus-response compatibility.

16.
Omega, 2005, 33(1): 85-91
This paper proposes a quadratic interval logit model (quadratic interval logistic regression) based on a quadratic programming approach to deal with binary response variables. The model combines the advantages of the logit model (logistic regression) and Tanaka's quadratic interval regression model. As a demonstration, we applied the model to forecasting corporate distress in the UK. The results show that the model can support the logit model in discriminating between groups, and that it provides more information to researchers.

17.
黄履珺  佘廉 《中国管理科学》2018,26(12):146-157
Emergencies are extreme events of "low probability and high loss." Emergency risk objectively exists, and the public's preparedness intention and behavior have an important influence on mitigating emergency risk and reducing emergency losses. In practice, however, the public generally shows weak preparedness intention and rarely takes preparedness measures on its own initiative. To explain the differences in the public's preparedness behavior aimed at avoiding emergency losses, this paper applies protection motivation theory (PMT) to the prediction of public preparedness intention based on empirical survey data, builds a theoretical model of public cognition and preparedness intention for emergencies, and uses multiple regression analysis and structural equation modeling to test the path relationships among risk appraisal, coping appraisal, and preparedness intention. Risk appraisal comprises two variables, perceived probability and perceived severity; coping appraisal comprises three variables, response efficacy, self-efficacy, and perceived response cost. The empirical data were collected through a questionnaire survey of 405 urban residents across the seven main administrative districts of Wuhan, Hubei Province. The analysis shows that preparedness intention is jointly influenced by risk appraisal and coping appraisal, with coping appraisal having greater explanatory power than risk appraisal; three demographic characteristics (age, education level, and income) also have some explanatory power for preparedness intention. The findings suggest that, to foster public preparedness intention, risk communication should address not only the probability and severity of emergency risks but also response efficacy, self-efficacy, and response cost, which significantly influence the public's preparedness intention.

18.
Electric power is a critical infrastructure service after hurricanes, and rapid restoration of electric power is important in order to minimize losses in the impacted areas. However, rapid restoration of electric power after a hurricane depends on obtaining the necessary resources, primarily repair crews and materials, before the hurricane makes landfall and then appropriately deploying these resources as soon as possible after the hurricane. This, in turn, depends on having sound estimates of both the overall severity of the storm and the relative risk of power outages in different areas. Past studies have developed statistical, regression-based approaches for estimating the number of power outages in advance of an approaching hurricane. However, these approaches have either not been applicable for future events or have had lower predictive accuracy than desired. This article shows that a different type of regression model, a generalized additive model (GAM), can outperform the types of models used previously. This is done by developing and validating a GAM based on power outage data during past hurricanes in the Gulf Coast region and comparing the results from this model to the previously used generalized linear models.
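A minimal sketch contrasting a GLM with a GAM-style fit for outage counts, using spline features for the covariate in place of a linear term; the covariate, simulated data, and model settings are illustrative and not the article's fitted model:

```python
import numpy as np
from sklearn.linear_model import PoissonRegressor
from sklearn.metrics import mean_absolute_error
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import SplineTransformer

rng = np.random.default_rng(7)
n = 1500
wind = rng.uniform(20, 110, n)                              # max gust speed (mph), illustrative
rate = np.exp(0.5 + 3.0 * ((wind - 20) / 90) ** 2)          # nonlinear true outage rate
outages = rng.poisson(rate)
X = wind.reshape(-1, 1)

glm = PoissonRegressor(alpha=1e-6).fit(X, outages)          # GLM: linear in wind on the log scale
gam = make_pipeline(SplineTransformer(n_knots=6, degree=3),
                    PoissonRegressor(alpha=1e-6)).fit(X, outages)   # spline (GAM-style) fit

print("GLM       MAE:", round(mean_absolute_error(outages, glm.predict(X)), 3))
print("GAM-style MAE:", round(mean_absolute_error(outages, gam.predict(X)), 3))
```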

19.
Epidemiology textbooks often interpret population attributable fractions based on 2 × 2 tables or logistic regression models of exposure-response associations as preventable fractions, i.e., as the fractions of illnesses in a population that would be prevented if exposure were removed. In general, this causal interpretation is not correct, since statistical association need not indicate causation; moreover, it does not identify how much risk would be prevented by removing specific constituents of complex exposures. This article introduces and illustrates an approach to calculating useful bounds on preventable fractions, having valid causal interpretations, from the types of partial but useful molecular epidemiological and biological information often available in practice. The method applies probabilistic risk assessment concepts from systems reliability analysis, together with bounding constraints on the relationship between event probabilities and causation (such as that the probability that exposure X causes response Y cannot exceed the probability that exposure X precedes response Y, or the probability that both X and Y occur), to bound the contribution to causation from specific causal pathways. We illustrate the approach by estimating an upper bound on the contribution to lung cancer risk made by a specific, much-discussed causal pathway that links smoking to polycyclic aromatic hydrocarbon (PAH) (specifically, benzo(a)pyrene diol epoxide)-DNA adducts at hot-spot codons of p53 in lung cells. The result is a surprisingly small preventable fraction (perhaps 7% or less) for this pathway, suggesting that it will be important to consider other mechanisms and non-PAH constituents of tobacco smoke in designing less risky tobacco-based products.
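A minimal worked sketch of the bounding logic: a pathway can only have caused a case if all of its necessary events occurred, so the joint probability of those events upper-bounds that pathway's preventable fraction among cases. All numbers are illustrative, not the article's estimates:

```python
# Hypothetical conditional probabilities, among smokers who develop lung cancer,
# for the necessary steps of the pathway (illustrative values only).
p_adduct = 0.6        # P(relevant PAH-DNA adduct formed in lung cells)
p_hotspot = 0.3       # P(adduct sits at a p53 hot-spot codon | adduct formed)
p_unrepaired = 0.4    # P(adduct escapes repair and fixes a mutation | hot-spot adduct)

# The joint probability of all necessary pathway events upper-bounds the
# probability that the pathway caused the case.
p_all_events = p_adduct * p_hotspot * p_unrepaired
print(f"upper bound on the pathway's preventable fraction: {p_all_events:.1%}")
```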

20.
The market share of the Tietê–Paraná inland waterway (TPIW) in the transport matrix of São Paulo state, Brazil, is currently only 0.6%, but it is expected to increase to 6% over the next 20 years. In this scenario, a risk assessment is necessary to identify and explore potential undesired events. Part of this involves assigning the probability of occurrence of events, which is usually accomplished with a frequentist approach. However, in many cases this approach is not possible because data are unavailable or nonrepresentative. This is the case for the TPIW: even though an extensive accident history is available, a frequentist approach is not suitable because current operational conditions differ from those met in the past. A subjective assessment is therefore an option, as it works independently of the historical data and thus delivers more reliable results. In this context, this article proposes a methodology for assessing the probability of occurrence of undesired events based on expert opinion combined with fuzzy analysis. The methodology defines a criterion for weighting the experts and, using fuzzy logic, evaluates the similarities among the experts' beliefs, which are used in the aggregation process before the defuzzification that quantifies the probability of occurrence of the events. The proposed methodology is applied to the real case of the TPIW, and the results obtained from the elicited experts are compared with a frequentist approach, showing how different interpretations of probability affect the results.
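A minimal sketch of one common way to combine expert opinions expressed as triangular fuzzy numbers: compute pairwise similarities, weight each expert by a blend of an importance weight and average similarity to the others, aggregate, and defuzzify with the centroid. The fuzzy numbers and weights are illustrative, and this is a generic similarity-aggregation scheme rather than the article's exact procedure:

```python
import numpy as np

# Each expert's opinion on an event probability as a triangular fuzzy number (a, b, c).
experts = np.array([
    [0.02, 0.05, 0.10],
    [0.03, 0.06, 0.12],
    [0.01, 0.04, 0.08],
])
importance = np.array([0.5, 0.3, 0.2])        # hypothetical expert weights (experience, credentials)

def similarity(u, v):
    """Simple similarity between two triangular fuzzy numbers."""
    return 1.0 - np.mean(np.abs(u - v))

n = len(experts)
avg_sim = np.array([np.mean([similarity(experts[i], experts[j])
                             for j in range(n) if j != i]) for i in range(n)])

weights = 0.5 * importance + 0.5 * avg_sim / avg_sim.sum()   # blend importance and consensus
weights /= weights.sum()

aggregated = weights @ experts                                # weighted triangular fuzzy number
centroid = aggregated.mean()                                  # centroid defuzzification of a triangle
print("aggregated (a, b, c):", np.round(aggregated, 4))
print("defuzzified probability:", round(centroid, 4))
```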
