Similar Documents
20 similar documents found (search time: 46 ms)
1.
This study presents a tree-based logistic regression approach to assessing work zone casualty risk, defined as the probability of a vehicle occupant being killed or injured in a work zone crash. First, a decision tree approach is employed to determine the tree structure and the interacting factors. Based on the Michigan M-94\I-94\I-94BL\I-94BR highway work zone crash data, an optimal tree comprising four leaf nodes is determined, and the interacting factors are found to be airbag, occupant identity (i.e., driver or passenger), and gender. The data are then split into four groups according to the tree structure. Finally, a logistic regression analysis is conducted separately for each group. The results show that the proposed approach outperforms the pure decision tree model because it can examine the marginal effects of risk factors. Compared with the pure logistic regression method, the proposed approach avoids variable interaction effects and thus significantly improves prediction accuracy.
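The two-stage idea in this abstract (a tree to find the interacting factor, then a separate logistic regression in each leaf group) can be sketched as follows. This is a minimal illustration, not the study's code: the simulated crash records, the single `speed_norm` feature, and the gradient-descent fit are all hypothetical.

```python
import math
import random

def fit_logistic(X, y, lr=0.1, epochs=2000):
    """Minimal single-feature logistic regression fitted by gradient descent."""
    w, b = 0.0, 0.0
    n = len(X)
    for _ in range(epochs):
        gw = gb = 0.0
        for xi, yi in zip(X, y):
            p = 1.0 / (1.0 + math.exp(-(w * xi + b)))
            gw += (p - yi) * xi
            gb += (p - yi)
        w -= lr * gw / n
        b -= lr * gb / n
    return w, b

def predict(w, b, x):
    """Predicted casualty probability for feature value x."""
    return 1.0 / (1.0 + math.exp(-(w * x + b)))

# Hypothetical crash records: (airbag_deployed, normalized_speed, injured).
random.seed(0)
base = [(random.randint(0, 1), random.random()) for _ in range(200)]
records = [(a, s, 1 if random.random() < (0.2 + 0.5 * s - 0.15 * a) else 0)
           for a, s in base]

# Stage 1 (tree): split the data on the interacting factor found by the tree
# (airbag deployment, in this toy example).
groups = {0: [r for r in records if r[0] == 0],
          1: [r for r in records if r[0] == 1]}

# Stage 2: fit a separate logistic regression in each leaf group.
models = {}
for g, recs in groups.items():
    X = [s for _, s, _ in recs]
    y = [inj for _, _, inj in recs]
    models[g] = fit_logistic(X, y)
```

Splitting first removes the interaction term from each group's regression, which is the mechanism the abstract credits for the improved accuracy.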

2.
《Risk analysis》2018,38(8):1576-1584
Fault trees are used in reliability modeling to create logical models of fault combinations that can lead to undesirable events. The output of a fault tree analysis (the top event probability) is expressed in terms of the failure probabilities of basic events that are input to the model. Typically, the basic event probabilities are not known exactly but are modeled as probability distributions; therefore, the top event probability is also represented as an uncertainty distribution. Monte Carlo methods are generally used for evaluating the uncertainty distribution, but such calculations are computationally intensive and do not readily reveal the dominant contributors to the uncertainty. In this article, a closed-form approximation for the fault tree top event uncertainty distribution is developed, which is applicable when the uncertainties in the basic events of the model are lognormally distributed. The results of the approximate method are compared with results from two sampling-based methods: the Monte Carlo method and the Wilks method based on order statistics. It is shown that the closed-form expression can provide a reasonable approximation to results obtained by Monte Carlo sampling, without incurring the computational expense. The Wilks method is found to be a useful means of providing an upper bound for the percentiles of the uncertainty distribution while being computationally inexpensive compared with full Monte Carlo sampling. The lognormal approximation method and the Wilks method thus appear to be attractive, practical alternatives for evaluating uncertainty in the output of fault trees and similar multilinear models.
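The closed-form idea can be illustrated with a moment-matching (Fenton-Wilkinson style) sketch for an OR gate whose three basic-event probabilities are lognormal. The gate structure and the (mu, sigma) parameters are hypothetical, and the approximation shown is the generic one (sum of lognormals matched to a lognormal), not necessarily the article's exact derivation.

```python
import math
import random

random.seed(1)

# Hypothetical OR gate with three basic events; each basic-event probability
# is lognormal with the given (mu, sigma) on the log scale.
params = [(-7.0, 0.8), (-6.5, 0.6), (-8.0, 1.0)]

def or_gate(ps):
    """Top event probability of an OR gate with independent basic events."""
    prod = 1.0
    for p in ps:
        prod *= (1.0 - p)
    return 1.0 - prod

# Monte Carlo: sample basic-event probabilities, push through the gate.
samples = []
for _ in range(20000):
    ps = [math.exp(random.gauss(mu, sig)) for mu, sig in params]
    samples.append(or_gate(ps))
mc_mean = sum(samples) / len(samples)

# Closed form: when each p_i << 1 the OR gate is approximately a sum, and the
# sum of lognormals is moment-matched to a single lognormal.
m = sum(math.exp(mu + sig**2 / 2) for mu, sig in params)                    # mean
v = sum((math.exp(sig**2) - 1) * math.exp(2 * mu + sig**2)
        for mu, sig in params)                                              # variance
sigma_t = math.sqrt(math.log(1 + v / m**2))
mu_t = math.log(m) - sigma_t**2 / 2
approx_mean = math.exp(mu_t + sigma_t**2 / 2)
```

With rare basic events the two means agree closely, while the closed form costs essentially nothing compared with the 20,000-sample loop.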

3.
Dan Gorton 《Risk analysis》2014,34(9):1763-1774
The article introduces the use of probabilistic risk assessment for modeling the incident response process of online financial services. The main contribution is the creation of incident response trees, using event tree analysis, which provide a visual tool and a systematic way to estimate the probability of a successful incident response process against the currently known risk landscape, making it possible to measure the balance between front-end and back-end security measures. The model is presented using an illustrative example and is then applied to the incident response process of a Swedish bank. Access to relevant data is confirmed, and the applicability and usability of the proposed model are assessed using one year of historical data. Potential advantages and possible shortcomings are discussed, with reference to both the design phase and the operational phase, and future work is presented.
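The core event-tree calculation behind an incident response tree, the probability that at least one barrier in a serial response chain stops an incident, can be sketched as follows. The barrier names and success probabilities are hypothetical, not the Swedish bank's figures.

```python
# Hypothetical serial incident response tree: each barrier either stops the
# incident (success) or lets it pass to the next barrier (failure).
barriers = {
    "front_end_detection": 0.70,
    "back_end_monitoring": 0.80,
    "manual_review": 0.50,
}

def response_success_probability(barriers):
    """P(at least one barrier succeeds) = 1 - P(every barrier fails)."""
    p_all_fail = 1.0
    for p_success in barriers.values():
        p_all_fail *= (1.0 - p_success)
    return 1.0 - p_all_fail

p = response_success_probability(barriers)
```

Comparing the same calculation with front-end probabilities raised and back-end probabilities lowered (or vice versa) is one way to "measure the balance" the abstract mentions.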

4.
Quantitative Assessment of Building Fire Risk to Life Safety
This article presents a quantitative risk assessment framework for evaluating fire risk to life safety. Fire risk is divided into two parts: the probability and the corresponding consequence of every fire scenario. The time-dependent event tree technique is used to analyze probable fire scenarios based on the effect of fire protection systems on fire spread and smoke movement. To obtain the variation of occurrence probability with time, a Markov chain is combined with the time-dependent event tree for stochastic analysis of the occurrence probability of fire scenarios. To obtain the consequences of every fire scenario, several uncertainties are considered in the risk analysis process. When calculating the onset time to untenable conditions, a range of fires is designed based on different fire growth rates, after which the uncertainty of the onset time to untenable conditions can be characterized by a probability distribution. When calculating occupant evacuation time, occupant premovement time is treated as a probability distribution. The consequences of a fire scenario can then be evaluated from the probability distributions of evacuation time and onset time of untenable conditions, and fire risk to life safety can be evaluated based on the occurrence probability and consequences of every fire scenario. To illustrate the assessment method in detail, a commercial building is presented as a case study, and the assessment result is compared with fire statistics.

5.
This article presents a regression-tree-based meta-analysis of rodent pulmonary toxicity studies of uncoated, nonfunctionalized carbon nanotube (CNT) exposure. The resulting analysis provides quantitative estimates of the contribution of CNT attributes (impurities, physical dimensions, and aggregation) to pulmonary toxicity indicators in bronchoalveolar lavage fluid: neutrophil and macrophage counts, and lactate dehydrogenase and total protein concentrations. The method employs classification and regression tree (CART) models, techniques that are relatively insensitive to data defects that impair other types of regression analysis: high dimensionality, nonlinearity, correlated variables, and significant quantities of missing values. Three types of analysis are presented: the regression tree (RT), the random forest (RF), and a random-forest-based dose-response model. The RT shows the best single model supported by all the data and typically contains a small number of variables. The RF shows how much variance reduction is associated with every variable in the data set. The dose-response model is used to isolate the effects of CNT attributes from the CNT dose, showing the shift in the dose-response relationship caused by each attribute across the measured range of CNT doses. The CNT attributes that contribute the most to pulmonary toxicity were found to be metallic impurities (cobalt significantly increased observed toxicity, while other impurities had mixed effects), CNT length (negatively correlated with most toxicity indicators), CNT diameter (significantly positively associated with toxicity), and aggregate size (negatively correlated with cell damage indicators and positively correlated with immune response indicators). Increasing CNT N2-BET-specific surface area decreased toxicity indicators.
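The variance-reduction split search at the heart of a regression tree, the building block of both the RT and RF analyses above, can be sketched on hypothetical dose-toxicity data (not the CNT data set):

```python
def sse(ys):
    """Sum of squared errors around the mean; a node's impurity in a regression tree."""
    if not ys:
        return 0.0
    m = sum(ys) / len(ys)
    return sum((y - m) ** 2 for y in ys)

def best_split(xs, ys):
    """Return the threshold on x that minimizes total child-node SSE."""
    pairs = sorted(zip(xs, ys))
    best = (None, sse(ys))  # (threshold, resulting total SSE)
    for i in range(1, len(pairs)):
        thr = (pairs[i - 1][0] + pairs[i][0]) / 2
        left = [y for x, y in pairs if x <= thr]
        right = [y for x, y in pairs if x > thr]
        total = sse(left) + sse(right)
        if total < best[1]:
            best = (thr, total)
    return best

# Hypothetical data: a toxicity indicator rises sharply above dose 5.
dose = [1, 2, 3, 4, 6, 7, 8, 9]
tox = [1.0, 1.1, 0.9, 1.0, 3.0, 3.2, 2.9, 3.1]
thr, _ = best_split(dose, tox)
```

A random forest repeats this search over bootstrap samples and random attribute subsets, and the per-variable variance reduction it accumulates is what the RF analysis in the abstract reports.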

6.
Because misclassification costs differ and customers of different value are unevenly distributed, data mining methods based on overall accuracy cannot capture the effect of customer value on classification performance. To solve this classification problem with imbalanced misclassification costs, this study extends existing decision tree models with cost-sensitive learning and applies the method to customer value segmentation. A misclassification cost matrix based on customer value is constructed, minimization of classification cost is used as the splitting criterion of the decision tree, and an expected classification loss function is established as the evaluation criterion. Experiments on credit card customer data from a Chinese bank show that, compared with the traditional decision tree method, the cost-sensitive decision tree classifies customer value segments more effectively: it controls cost sensitivity and the distribution of different types of misclassification more precisely, reduces the overall misclassification cost, makes the model reflect classification costs more accurately, and identifies customer value effectively.
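The cost-sensitive leaf-labeling rule (choose the class that minimizes expected misclassification cost rather than the majority class) can be sketched with a hypothetical two-class cost matrix; the 5:1 cost ratio below is illustrative, not the paper's:

```python
# cost[true][pred]: misclassifying a high-value customer as low-value is
# assumed to cost five times the opposite error.
cost = {
    "high": {"high": 0, "low": 5},
    "low": {"high": 1, "low": 0},
}

def expected_cost(counts, pred):
    """Expected per-customer cost of labeling a leaf with class `pred`."""
    n = sum(counts.values())
    return sum(counts[t] * cost[t][pred] for t in counts) / n

def cost_sensitive_label(counts):
    """Label the leaf with the class of minimum expected cost."""
    return min(cost["high"], key=lambda pred: expected_cost(counts, pred))

# A leaf with 30 high-value and 70 low-value customers: majority vote says
# "low", but the expected-cost rule flips the label to "high".
counts = {"high": 30, "low": 70}
label = cost_sensitive_label(counts)
```

The same expected-cost quantity, aggregated over all leaves, is the evaluation criterion the abstract describes.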

7.
This study proposes an integrated analytical framework for effective management of project risks that combines a multiple-criteria decision-making technique with decision tree analysis. First, a conceptual risk management model was developed through a thorough literature review. The model was then applied through action research on a petroleum oil refinery construction project in central India in order to demonstrate its effectiveness. Oil refinery construction projects are risky because of technical complexity, resource unavailability, the involvement of many stakeholders, and strict environmental requirements. Although project risk management has been researched extensively, a practical and easily adoptable framework is missing. In the proposed framework, risks are identified using a cause-and-effect diagram, analysed using the analytic hierarchy process, and responses are developed using a risk map. Additionally, decision tree analysis allows various options for risk response development to be modelled and optimises the selection of the risk-mitigating strategy. The proposed risk management framework could be easily adopted and applied in any project and integrated with other project management knowledge areas.

8.
Epidemiological miner cohort data used to estimate lung cancer risks related to occupational radon exposure often lack cohort‐wide information on exposure to tobacco smoke, a potential confounder and important effect modifier. We have developed a method to project data on smoking habits from a case‐control study onto an entire cohort by means of a Monte Carlo resampling technique. As a proof of principle, this method is tested on a subcohort of 35,084 former uranium miners employed at the WISMUT company (Germany), with 461 lung cancer deaths in the follow‐up period 1955–1998. After applying the proposed imputation technique, a biologically‐based carcinogenesis model is employed to analyze the cohort's lung cancer mortality data. A sensitivity analysis based on a set of 200 independent projections with subsequent model analyses yields narrow distributions of the free model parameters, indicating that parameter values are relatively stable and independent of individual projections. This technique thus offers a possibility to account for unknown smoking habits, enabling us to unravel risks related to radon, to smoking, and to the combination of both.
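One Monte Carlo projection of smoking habits onto a cohort can be sketched as a stratified random imputation. The age strata and prevalence values below are hypothetical placeholders, not the WISMUT case-control estimates:

```python
import random

random.seed(42)

# Hypothetical smoking prevalence by age stratum, as would be estimated from
# a case-control subsample and then projected onto the full cohort.
prevalence = {"<40": 0.55, "40-60": 0.45, ">60": 0.30}

def impute_smoking(cohort, prevalence):
    """One Monte Carlo projection: draw each miner's unknown smoking status
    from the stratum-specific prevalence."""
    return [dict(m, smoker=random.random() < prevalence[m["age_group"]])
            for m in cohort]

# Hypothetical cohort with known age strata but unknown smoking status.
cohort = [{"id": i, "age_group": random.choice(list(prevalence))}
          for i in range(1000)]

projection = impute_smoking(cohort, prevalence)
rate = sum(m["smoker"] for m in projection) / len(projection)
```

Repeating `impute_smoking` many times (the abstract uses 200 projections) and refitting the risk model on each yields the distribution of model parameters used in the sensitivity analysis.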

9.
In general, two types of dependence need to be considered when estimating the probability of the top event (TE) of a fault tree (FT): "objective" dependence between the (random) occurrences of different basic events (BEs) in the FT, and "state-of-knowledge" (epistemic) dependence between estimates of the epistemically uncertain probabilities of some BEs of the FT model. In this article, we study the effects of objective and epistemic dependences on the TE probability. The well-known Fréchet bounds and the distribution envelope determination (DEnv) method are used to model all kinds of (possibly unknown) objective and epistemic dependences, respectively. For exemplification, the analyses are carried out on a FT with six BEs. Results show that both types of dependence significantly affect the TE probability; however, the effects of epistemic dependence are likely to be overwhelmed by those of objective dependence (if present).
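The Fréchet bounds used to envelope unknown objective dependence have a simple closed form for two-event AND and OR gates; a minimal sketch with hypothetical basic-event probabilities:

```python
def frechet_and(pa, pb):
    """Bounds on P(A and B) when the dependence between A and B is unknown."""
    lower = max(0.0, pa + pb - 1.0)
    upper = min(pa, pb)
    return lower, upper

def frechet_or(pa, pb):
    """Bounds on P(A or B) when the dependence between A and B is unknown."""
    lower = max(pa, pb)
    upper = min(1.0, pa + pb)
    return lower, upper

# Hypothetical basic-event probabilities.
lo_and, hi_and = frechet_and(0.3, 0.4)
lo_or, hi_or = frechet_or(0.3, 0.4)
```

Propagating these interval bounds gate by gate, instead of assuming independence, gives the envelope on the TE probability that the article compares with the DEnv treatment of epistemic dependence.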

10.
In statistical applications, logistic regression is a popular method for analyzing binary data accompanied by explanatory variables. But when one of the two outcomes is rare, the estimation of model parameters has been shown to be severely biased and hence estimating the probability of rare events occurring based on a logistic regression model would be inaccurate. In this article, we focus on estimating the probability of rare events occurring based on logistic regression models. Instead of selecting a best model, we propose a local model averaging procedure based on a data perturbation technique applied to different information criteria to obtain different probability estimates of rare events occurring. Then an approximately unbiased estimator of Kullback‐Leibler loss is used to choose the best one among them. We design complete simulations to show the effectiveness of our approach. For illustration, a necrotizing enterocolitis (NEC) data set is analyzed.

11.
邓立治  刘希宋 《管理学报》2009,6(7):885-889
Addressing the hazard posed by pollution sources in oilfield development, this paper proposes a hazard assessment model that combines grey relational analysis with fault tree analysis. The model uses grey relational analysis to identify accident patterns and, for a given fault tree, analyzes and derives the likelihood of the various failure modes that lead to the top event. A case study of a typical project of Daqing Oilfield Engineering Co., Ltd. is then conducted, and the results show that the method is feasible and practical for assessing the hazard of pollution sources in oilfield development.

12.
Kun Xie  Kaan Ozbay  Hong Yang  Di Yang 《Risk analysis》2019,39(6):1342-1357
The widely used empirical Bayes (EB) and full Bayes (FB) methods for before–after safety assessment are sometimes limited because of the extensive data needs from additional reference sites. To address this issue, this study proposes a novel before–after safety evaluation methodology based on survival analysis and longitudinal data as an alternative to the EB/FB method. A Bayesian survival analysis (SARE) model with a random effect term to address the unobserved heterogeneity across sites is developed. The proposed survival analysis method is validated through a simulation study before its application. Subsequently, the SARE model is developed in a case study to evaluate the safety effectiveness of a recent red‐light‐running photo enforcement program in New Jersey. As demonstrated in the simulation and the case study, the survival analysis can provide valid estimates using only data from treated sites, and thus its results will not be affected by the selection of defective or insufficient reference sites. In addition, the proposed approach can take into account the censored data generated due to the transition from the before period to the after period, which has not been previously explored in the literature. Using individual crashes as units of analysis, survival analysis can incorporate longitudinal covariates such as the traffic volume and weather variation, and thus can explicitly account for the potential temporal heterogeneity.

13.
Data mining (DM) has been applied in many advanced science and technology fields, but it has still not been used for domino effect risk management to explore minimum risk scenarios. This work investigates the feasibility of DM in minimizing the risk of fire-induced domino effects in chemical processing facilities. Based on DM, an evidential failure mode and effects analysis (E-FMEA), which could bridge chemical facilities’ operational reliability and domino effect risk, is combined with fault tree analysis (FTA) for the occurrence risk modeling of loss of containment (LOC) event of chemical facilities, which is often the triggering point of fire-induced domino effects. Industry specific data such as reliability data, inspection records, and maintenance records are of great value to model the potential occurrence criticality of LOC. The data are used to characterize the LOC risk priority number (RPN) of chemical facilities through FTA and E-FMEA, search and statistics rules are proposed to mine inspection records to assess LOC risk factors. According to the RPN scores of facilities, inherent safety strategies to minimize risk via inventory control are proposed, and their effectiveness is tested using a well-known probit model. In this way, the approach proposes a unit-specific evidence-based risk minimization strategy for fire-induced domino effects. A case study demonstrates the capability of DM in the risk minimization of fire-induced domino effects.
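The RPN scoring step that ranks LOC failure modes can be sketched as the standard FMEA computation (RPN = severity x occurrence x detection), where in the approach above the occurrence score would come from fault tree analysis of inspection and maintenance records. The failure modes and scores below are hypothetical:

```python
# Hypothetical LOC failure modes with FMEA scores on the usual 1-10 scales.
failure_modes = [
    {"name": "flange leak", "severity": 7, "occurrence": 4, "detection": 3},
    {"name": "corrosion hole", "severity": 9, "occurrence": 2, "detection": 6},
    {"name": "seal rupture", "severity": 8, "occurrence": 5, "detection": 2},
]

# RPN = severity * occurrence * detection; higher means higher priority.
for fm in failure_modes:
    fm["rpn"] = fm["severity"] * fm["occurrence"] * fm["detection"]

# Rank facilities/failure modes by RPN to target inherent safety measures.
ranked = sorted(failure_modes, key=lambda fm: fm["rpn"], reverse=True)
```

The top-ranked modes are the ones the abstract's inventory-control strategies would target first.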

14.
Existing financial distress early-warning studies are generally based on financial indicators alone, or add a single efficiency indicator to them, whereas multidimensional efficiency indicators can more comprehensively and effectively reflect the overall condition of listed companies across different industries and asset scales, and thus yield better distress warnings. This paper constructs corresponding input-output indicator systems for four dimensions (operating efficiency, financial efficiency, financing efficiency, and human capital efficiency) and uses data envelopment analysis (DEA) to evaluate the relative efficiency of listed companies on each dimension. The resulting multidimensional efficiency indicators are then combined with financial indicators to build a financial distress early-warning model for listed companies. To validate the proposed model, three commonly used early-warning techniques (support vector machines, artificial neural networks, and decision trees) are applied to Chinese listed companies under different financial indicator systems. The results show that the early-warning model incorporating multidimensional efficiency indicators significantly improves prediction accuracy.

15.
In risk analysis, the treatment of the epistemic uncertainty associated with the probability of occurrence of an event is fundamental. Traditionally, probability distributions have been used to characterize the epistemic uncertainty due to imprecise knowledge of the parameters in risk models. On the other hand, it has been argued that in certain instances such uncertainty may be best accounted for by fuzzy or possibilistic distributions. This seems to be the case in particular for parameters for which the information available is scarce and of a qualitative nature. In practice, it is to be expected that a risk model contains some parameters affected by uncertainties that may be best represented by probability distributions and some other parameters that may be more properly described in terms of fuzzy or possibilistic distributions. In this article, a hybrid method that jointly propagates probabilistic and possibilistic uncertainties is considered and compared with pure probabilistic and pure fuzzy methods for uncertainty propagation. The analyses are carried out on a case study concerning the uncertainties in the probabilities of occurrence of accident sequences in an event tree analysis of a nuclear power plant.
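The hybrid propagation of one probabilistic and one possibilistic input can be sketched by combining Monte Carlo sampling with alpha-cut interval arithmetic on a triangular fuzzy number. The output model y = a * b, the uniform distribution, and the fuzzy number below are all hypothetical placeholders:

```python
import random

random.seed(7)

def tri_alpha_cut(a, m, b, alpha):
    """Alpha-cut [lo, hi] of a triangular fuzzy number (a, m, b)."""
    return a + alpha * (m - a), b - alpha * (b - m)

def hybrid_propagate(n_samples=5000, alpha=0.5):
    """Sample the probabilistic input; for each sample, propagate the alpha-cut
    interval of the possibilistic input through the (monotone) model y = a * b."""
    lows, highs = [], []
    for _ in range(n_samples):
        a_sample = random.uniform(1.0, 2.0)                 # probabilistic input
        b_lo, b_hi = tri_alpha_cut(0.1, 0.2, 0.4, alpha)    # possibilistic input
        lows.append(a_sample * b_lo)                        # interval endpoints map
        highs.append(a_sample * b_hi)                       # directly (monotone model)
    return sum(lows) / n_samples, sum(highs) / n_samples

mean_lo, mean_hi = hybrid_propagate()
```

Repeating over a grid of alpha levels yields the pair of lower/upper cumulative distributions that a hybrid method reports, instead of the single distribution a pure probabilistic treatment would give.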

16.
While the event tree is a useful diagrammatic aid to an appreciation of various event sequence possibilities, it is of a nature that suggests no obvious manner in which the associated probability data may be compiled as computer input. Here, we propose a complementary numerical representation of scenario possibilities which incorporates probability data in a succinct fashion. While its mnemonic properties facilitate the logical development of a system's characteristics, its compactness and unambiguity permit its utilization directly as computer input.
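One simple numerical representation in the spirit of this proposal encodes each event-tree scenario as a tuple of branch outcomes (1 = success, 0 = failure), with the scenario probability given by the product of branch probabilities. The branch names and values are hypothetical, and this encoding is an illustration rather than the article's specific scheme:

```python
from itertools import product

# Hypothetical branch success probabilities: detection, suppression, evacuation.
branch_success = [0.9, 0.8, 0.95]

def scenario_probability(code, branch_success):
    """Probability of the scenario encoded by a tuple of branch outcomes."""
    p = 1.0
    for outcome, ps in zip(code, branch_success):
        p *= ps if outcome == 1 else (1.0 - ps)
    return p

# Enumerate all 2^n scenarios; the codes double as compact computer input.
scenarios = {code: scenario_probability(code, branch_success)
             for code in product([0, 1], repeat=len(branch_success))}
total = sum(scenarios.values())
```

The scenario probabilities of a complete event tree necessarily sum to one, which serves as a built-in consistency check on the compiled input.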

17.
《Risk analysis》2018,38(8):1585-1600
Historical data analysis shows that escalation accidents, so‐called domino effects, have an important role in disastrous accidents in the chemical and process industries. In this study, an agent‐based modeling and simulation approach is proposed to study the propagation of domino effects in the chemical and process industries. Different from the analytical or Monte Carlo simulation approaches, which normally study the domino effect at probabilistic network levels, the agent‐based modeling technique explains the domino effects from a bottom‐up perspective. In this approach, the installations involved in a domino effect are modeled as agents whereas the interactions among the installations (e.g., by means of heat radiation) are modeled via the basic rules of the agents. Application of the developed model to several case studies demonstrates the ability of the model not only in modeling higher‐level domino effects and synergistic effects but also in accounting for temporal dependencies. The model can readily be applied to large‐scale complicated cases.

18.
The methodology presented here identifies an approach to accurately and economically analyze the effects on risk of various containment performance issues. Although this method facilitates the evaluation of potential containment improvements, it does so while utilizing the significant amount of information accumulated by the U.S. NRC Reactor Risk Reference Program. The use of hindsight and the acceptance of point estimate quantifications of risks allows the proposed methodology to be scrutable and understandable to the community as well as relatively simple and inexpensive to apply. A study of containment venting strategies was used to demonstrate the capabilities of the simplified containment event tree methodology. However, the methodology is flexible enough for a wide range of risk evaluations.

19.
Quantitative risk analysis (QRA) is a systematic approach for evaluating likelihood, consequences, and risk of adverse events. QRA based on event (ETA) and fault tree analyses (FTA) employs two basic assumptions. The first assumption is related to likelihood values of input events, and the second assumption is regarding interdependence among the events (for ETA) or basic events (for FTA). Traditionally, FTA and ETA both use crisp probabilities; however, to deal with uncertainties, the probability distributions of input event likelihoods are assumed. These probability distributions are often hard to come by and even if available, they are subject to incompleteness (partial ignorance) and imprecision. Furthermore, both FTA and ETA assume that events (or basic events) are independent. In practice, these two assumptions are often unrealistic. This article focuses on handling uncertainty in a QRA framework of a process system. Fuzzy set theory and evidence theory are used to describe the uncertainties in the input event likelihoods. A method based on a dependency coefficient is used to express interdependencies of events (or basic events) in ETA and FTA. To demonstrate the approach, two case studies are discussed.
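A common fuzzy arithmetic for FTA gates applies the crisp gate formulas component-wise to triangular fuzzy probabilities (a, m, b). This is a generic sketch of that technique, not the article's dependency-coefficient method, and the basic-event numbers are hypothetical:

```python
# Triangular fuzzy probabilities (lower bound, mode, upper bound).
def fuzzy_and(p, q):
    """AND gate: multiply component-wise (independence assumed within the gate)."""
    return tuple(pi * qi for pi, qi in zip(p, q))

def fuzzy_or(p, q):
    """OR gate: 1 - (1 - p)(1 - q), applied component-wise."""
    return tuple(1 - (1 - pi) * (1 - qi) for pi, qi in zip(p, q))

# Hypothetical basic-event fuzzy probabilities.
be1 = (0.01, 0.02, 0.04)
be2 = (0.005, 0.01, 0.02)

top = fuzzy_or(be1, be2)  # fuzzy top event probability for a two-event OR gate
```

The output is again a triple whose spread reflects the imprecision of the inputs, which is exactly what crisp FTA discards; a dependency coefficient, as in the article, would replace the independence formulas inside the gates.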

20.
As flood risks grow worldwide, a well-designed insurance program engaging various stakeholders becomes a vital instrument in flood risk management. The main challenge concerns the applicability of standard approaches for calculating insurance premiums of rare catastrophic losses. This article focuses on the design of a flood-loss-sharing program involving private insurance based on location-specific exposures. The analysis is guided by a developed integrated catastrophe risk management (ICRM) model consisting of a GIS-based flood model and a stochastic optimization procedure with respect to location-specific risk exposures. To achieve the stability and robustness of the program towards floods with various recurrences, the ICRM uses a stochastic optimization procedure that relies on quantile-related risk functions of a systemic insolvency involving overpayments and underpayments of the stakeholders. Two alternative ways of calculating insurance premiums are compared: the robust premium derived with the ICRM and the traditional average annual loss approach. The applicability of the proposed model is illustrated in a case study of a Rotterdam area outside the main flood protection system in the Netherlands. Our numerical experiments demonstrate essential advantages of the robust premiums, namely, that they: (1) guarantee the program's solvency under all relevant flood scenarios rather than one average event; (2) establish a tradeoff between the security of the program and the welfare of locations; and (3) decrease the need for other risk transfer and risk reduction measures.


Copyright © 北京勤云科技发展有限公司 (Beijing Qinyun Technology Development Co., Ltd.)  京ICP备09084417号