Similar Documents
20 similar documents retrieved.
1.
A substantial body of empirical accounting, finance, management, and marketing research utilizes single equation models with discrete dependent variables. Generally, the interpretation of the coefficients of the exogenous variables is limited to the sign and relative magnitude. This paper presents three methods of interpreting the coefficients in these models. The first method interprets the coefficients as marginal probabilities and the second method interprets the coefficients as elasticities of probability. The third method utilizes sensitivity analysis and examines the effect of hypothetical changes in exogenous variables on the probability of choice. This paper applies these methods to a published research study.
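As a rough illustration of the three interpretation methods, the sketch below assumes a binary logit specification with hypothetical coefficient values and an arbitrary evaluation point; the paper itself does not fix a particular model or data set.

```python
import numpy as np

def logit_prob(x, beta):
    """Choice probability P(y = 1 | x) in a binary logit model."""
    return 1.0 / (1.0 + np.exp(-x @ beta))

beta = np.array([-1.2, 0.8, 0.05])   # hypothetical estimates: intercept, x1, x2
x    = np.array([1.0, 0.5, 30.0])    # evaluation point (first entry is the intercept term)

p = logit_prob(x, beta)

# Method 1: marginal probability (marginal effect) of exogenous variable k
k = 2
marginal_effect = beta[k] * p * (1.0 - p)

# Method 2: elasticity of probability with respect to variable k
elasticity = beta[k] * x[k] * (1.0 - p)

# Method 3: sensitivity analysis with a hypothetical 10% increase in x_k
x_new = x.copy()
x_new[k] *= 1.10
p_new = logit_prob(x_new, beta)

print(p, marginal_effect, elasticity, p_new - p)
```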

2.
Ali Mosleh. Risk Analysis, 2012, 32(11): 1888–1900.
Credit risk is the potential exposure of a creditor to an obligor's failure or refusal to repay a debt in principal or interest, and this exposure is measured in terms of the probability of default. Many models have been developed to estimate credit risk, and rating agencies have provided such assessments since the 19th century, publishing their estimates of default probabilities and transition probabilities for various firms in their annual reports. Regulatory capital requirements for credit risk outlined by the Basel Committee on Banking Supervision have made it essential for banks and financial institutions to develop sophisticated models in an attempt to measure credit risk with higher accuracy. The Bayesian framework proposed in this article uses techniques developed in the physical sciences and engineering for dealing with model uncertainty and expert accuracy to obtain improved estimates of credit risk and the associated uncertainties. The approach takes estimates from one or more rating agencies and incorporates their historical accuracy (past performance data) in estimating future default risk and transition probabilities. Several examples demonstrate that the proposed methodology can assess default probability with accuracy exceeding that of any of the individual models. Moreover, the methodology accounts for potentially significant departures from "nominal predictions" due to "upsetting events" such as the 2008 global banking crisis.
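The article's full Bayesian treatment is not reproduced in the abstract; the following is only a simplified sketch of the underlying idea, combining several agencies' default-probability estimates on the log-odds scale and weighting each by a precision derived from its historical accuracy. All numbers are hypothetical.

```python
import numpy as np

def logit(p):
    return np.log(p / (1 - p))

def inv_logit(z):
    return 1 / (1 + np.exp(-z))

# Hypothetical estimates of one obligor's annual default probability
agency_pd    = np.array([0.020, 0.035, 0.015])
# Hypothetical historical accuracy: standard deviation of each agency's past
# prediction errors on the log-odds scale (smaller = more accurate)
agency_sigma = np.array([0.30, 0.60, 0.45])

# Precision-weighted (inverse-variance) combination with a diffuse prior
w         = 1.0 / agency_sigma**2
post_mean = np.sum(w * logit(agency_pd)) / np.sum(w)
post_sd   = np.sqrt(1.0 / np.sum(w))

combined_pd = inv_logit(post_mean)
ci = inv_logit(post_mean + np.array([-1.96, 1.96]) * post_sd)
print(combined_pd, ci)
```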

3.
Methods of engineering risk analysis are based on a functional analysis of systems and on the probabilities (generally Bayesian) of the events and random variables that affect their performance. These methods allow identification of a system's failure modes, computation of its probability of failure or performance deterioration per time unit or operation, and of the contribution of each component to the probabilities and consequences of failures. The model has been extended to include the human decisions and actions that affect components' performance, and the management factors that affect behaviors and can thus be root causes of system failures. By computing the risk with and without proposed measures, one can then set priorities among different risk management options under resource constraints. In this article, I briefly present the engineering risk analysis method, then several illustrations of risk computations that can be used to identify a system's weaknesses and the most cost-effective way to fix them. The first example concerns the heat shield of the space shuttle orbiter and shows the relative risk contribution of the tiles in different areas of the orbiter's surface. The second application is to patient risk in anesthesia and demonstrates how the engineering risk analysis method can be used in the medical domain to rank the benefits of risk mitigation measures, in that case mostly organizational ones. The third application is a model of seismic risk analysis and mitigation, applied to the San Francisco Bay Area to assess the costs and benefits of different seismic provisions of building codes. In all three cases, some aspects of the results were not intuitively obvious. The probabilistic risk analysis (PRA) method allowed identification of system weaknesses and the most cost-effective way to fix them.
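The prioritization step described in this abstract (computing the risk with and without each proposed measure and ranking options under a budget) can be sketched as below; the failure probabilities, losses, and costs are placeholder values, not figures from the three applications.

```python
# Hypothetical risk management options: annual failure probability and loss
# before/after the measure, plus the measure's annualized cost.
options = [
    # (name, p_before, p_after, loss, cost)
    ("reinforce tiles in high-risk zone", 1e-3, 2e-4, 5e9, 2e6),
    ("add redundant monitoring",          1e-3, 6e-4, 5e9, 5e5),
    ("revise maintenance procedures",     1e-3, 8e-4, 5e9, 1e5),
]

ranked = []
for name, p0, p1, loss, cost in options:
    risk_reduction = (p0 - p1) * loss      # expected loss avoided per year
    ranked.append((risk_reduction / cost, name, risk_reduction, cost))

# Highest benefit per unit cost first: the priority order under a resource constraint
for ratio, name, dr, cost in sorted(ranked, reverse=True):
    print(f"{name}: avoided loss {dr:.2e}/yr, cost {cost:.2e}/yr, ratio {ratio:.1f}")
```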

4.
Quantitative Assessment of Building Fire Risk to Life Safety
This article presents a quantitative risk assessment framework for evaluating fire risk to life safety. Fire risk is divided into two parts: the probability and the corresponding consequence of every fire scenario. The time-dependent event tree technique is used to analyze probable fire scenarios based on the effect of fire protection systems on fire spread and smoke movement. To obtain the variation of occurrence probability with time, a Markov chain is combined with the time-dependent event tree for stochastic analysis of the occurrence probability of fire scenarios. To obtain the consequences of every fire scenario, several uncertainties are considered in the risk analysis process. When calculating the onset time to untenable conditions, a range of fires is designed based on different fire growth rates, after which the uncertainty of the onset time to untenable conditions can be characterized by a probability distribution. When calculating occupant evacuation time, occupant pre-movement time is treated as a probability distribution. The consequences of a fire scenario can then be evaluated from the probability distributions of evacuation time and onset time of untenable conditions, and fire risk to life safety is evaluated based on the occurrence probability and consequences of every fire scenario. To illustrate the assessment method in detail, a commercial building is presented as a case study, and the assessment result is compared with fire statistics.
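A minimal sketch of how the occurrence probability of a fire-protection state can evolve over time via a discrete-time Markov chain, which then feeds the corresponding event-tree branch; the transition probabilities and time step are hypothetical.

```python
import numpy as np

# States of the suppression system during a fire: 0 = operating, 1 = failed.
# Hypothetical per-minute transition matrix (a failed system stays failed).
P = np.array([[0.98, 0.02],
              [0.00, 1.00]])

state = np.array([0.95, 0.05])   # hypothetical initial state probabilities
for minute in range(1, 16):
    state = state @ P            # occurrence probabilities after each minute
    print(minute, state)

# The scenario "suppression failed by minute 15" corresponds to an event-tree
# branch whose consequence is weighted by state[1] at that time.
```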

5.
In decision-making under uncertainty, a decision-maker is required to specify, possibly with the help of decision analysts, point estimates of the probabilities of uncertain events. In this setting, it is often difficult to obtain very precise measurements of the decision-maker's probabilities on the states of nature, particularly when little information is available to evaluate probabilities, when the available information is not specific enough, or when several conflicting information sources must be modeled. In this paper, imprecise probabilities are considered for representing the decision-maker's perception or past experience about the states of nature; to be specific, interval probabilities, which can be further categorized as (a) intervals of individual probabilities, (b) intervals of probability differences, and (c) intervals of ratio probabilities. We present a heuristic approach to modeling a wider range of types of probabilities as well as these three types of interval probabilities. In particular, the intervals of ratio probabilities, which are widely used in the uncertain AHP context, are analyzed to find extreme points by a change of variables, in addition to the first two types of interval probabilities. Finally, we examine how these extreme points can be used to determine an ordering or partial ordering of the expected values of strategies.
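As a sketch of how strategies can be compared under intervals of individual probabilities (type (a) above), the expected value of a strategy can be bounded by linear programming over the probability simplex intersected with the intervals; the payoffs and intervals below are hypothetical, and the extreme-point analysis of the paper is not reproduced.

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical interval probabilities for three states of nature
lower = np.array([0.2, 0.3, 0.1])
upper = np.array([0.5, 0.6, 0.4])
# Hypothetical payoffs of one strategy in each state
v = np.array([100.0, 40.0, -20.0])

A_eq, b_eq = np.ones((1, 3)), [1.0]      # probabilities must sum to one
bounds = list(zip(lower, upper))

lo = linprog( v, A_eq=A_eq, b_eq=b_eq, bounds=bounds).fun    # min expected value
hi = -linprog(-v, A_eq=A_eq, b_eq=b_eq, bounds=bounds).fun   # max expected value
print(lo, hi)   # if [lo, hi] of strategy A lies above hi of B, A dominates B
```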

6.
A Monte Carlo method is presented to study the effect of systematic and random errors on computer models that mainly deal with experimental data. A common assumption in this type of model (linear and nonlinear regression, and nonregression computer models) involving experimental measurements is that the error sources are mainly random and independent, with no constant background (systematic) errors. However, comparisons of different experimental data sources often reveal evidence of significant bias or calibration errors. The uncertainty analysis approach presented in this work is based on the analysis of cumulative probability distributions for the output variables of the models involved, taking into account the effect of both types of errors. The probability distributions are obtained by performing Monte Carlo simulation coupled with appropriate definitions of the random and systematic errors. The main objectives are to detect which error source has stochastic dominance over the uncertainty propagation and to quantify the combined effect of both on the output variables of the models. The results from the case studies analyzed show that the approach is able to distinguish which error type has the more significant effect on the performance of the model. It was also found that systematic or calibration errors, if present, cannot be neglected in uncertainty analysis of models that depend on experimental measurements such as chemical and physical properties. The approach can be used to facilitate decision making in fields related to safety factor selection, modeling, experimental data measurement, and experimental design.
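A minimal sketch of the general procedure, assuming a placeholder model and error magnitudes: for each Monte Carlo replicate draw one systematic bias (constant within a data set) plus independent random noise, propagate through the model, and compare the output distributions with and without the systematic component.

```python
import numpy as np

rng = np.random.default_rng(0)

def model(x):                    # placeholder computer model
    return 3.0 * x**0.8

x_true, n_meas, n_mc = 10.0, 25, 20_000
sd_random, sd_systematic = 0.3, 0.5     # hypothetical error magnitudes

outputs = {"random only": [], "random + systematic": []}
for _ in range(n_mc):
    bias  = rng.normal(0.0, sd_systematic)         # one calibration error per data set
    noise = rng.normal(0.0, sd_random, n_meas)     # independent random errors
    outputs["random only"].append(model(np.mean(x_true + noise)))
    outputs["random + systematic"].append(model(np.mean(x_true + bias + noise)))

for label, y in outputs.items():
    print(label, np.percentile(y, [5, 50, 95]))    # wider spread => dominant error source
```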

7.
The catastrophic nature of seismic risk is attributed to spatiotemporal correlation of seismic losses of buildings and infrastructure. For seismic risk management, such correlated seismic effects must be adequately taken into account, since they affect the probability distribution of aggregate seismic losses of spatially distributed structures significantly, and its upper tail behavior can be of particular importance. To investigate seismic loss dependence for two closely located portfolios of buildings, simulated seismic loss samples, which are obtained from a seismic risk model of spatially distributed buildings by taking spatiotemporally correlated ground motions into account, are employed. The characterization considers a loss frequency model that incorporates one dependent random component acting as a common shock to all buildings, and a copula-based loss severity model, which facilitates the separate construction of marginal loss distribution functions and nonlinear copula function with upper tail dependence. The proposed method is applied to groups of wood-frame buildings located in southwestern British Columbia. Analysis results indicate that the dependence structure of aggregate seismic losses can be adequately modeled by the right heavy tail copula or Gumbel copula, and that for the considered example, overall accuracy of the proposed method is satisfactory at probability levels of practical interest (at most 10% estimation error of fractiles of aggregate seismic loss). The developed statistical seismic loss model may be adopted in dynamic financial analysis for achieving faster evaluation with reasonable accuracy.
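The fitting of the right-heavy-tail or Gumbel copula itself is not reproduced here; the sketch below only illustrates the diagnostic behind that model choice, an empirical estimate of upper tail dependence between two portfolios' loss samples. Synthetic common-shock data stand in for the correlated seismic loss simulations.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 50_000

# Synthetic stand-in for correlated loss samples of two portfolios:
# a common shock times portfolio-specific components (placeholder magnitudes).
shock = rng.lognormal(0.0, 1.0, n)
loss1 = shock * rng.lognormal(0.0, 0.5, n)
loss2 = shock * rng.lognormal(0.0, 0.5, n)

# Rank-transform to pseudo-observations on (0, 1)
u1 = np.argsort(np.argsort(loss1)) / (n + 1)
u2 = np.argsort(np.argsort(loss2)) / (n + 1)

# Empirical upper tail dependence: P(U2 > q | U1 > q) for high thresholds q.
# Values staying well above zero as q -> 1 motivate a Gumbel-type copula.
for q in (0.90, 0.95, 0.99):
    lam = np.mean(u2[u1 > q] > q)
    print(q, round(lam, 3))
```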

8.
In risk analysis, the treatment of the epistemic uncertainty associated with the probability of occurrence of an event is fundamental. Traditionally, probabilistic distributions have been used to characterize the epistemic uncertainty due to imprecise knowledge of the parameters in risk models. On the other hand, it has been argued that in certain instances such uncertainty may be best accounted for by fuzzy or possibilistic distributions. This seems to be the case in particular for parameters for which the available information is scarce and of a qualitative nature. In practice, a risk model can be expected to contain some parameters affected by uncertainties that may be best represented by probability distributions and other parameters that may be more properly described in terms of fuzzy or possibilistic distributions. In this article, a hybrid method that jointly propagates probabilistic and possibilistic uncertainties is considered and compared with pure probabilistic and pure fuzzy methods for uncertainty propagation. The analyses are carried out on a case study concerning the uncertainties in the probabilities of occurrence of accident sequences in an event tree analysis of a nuclear power plant.

9.
Shangde Gao, Yan Wang. Risk Analysis, 2023, 43(6): 1222–1234.
Climate change and rapid urban development have intensified the impact of hurricanes, especially on the southeastern coasts of the United States. Localized and timely risk assessments can facilitate coastal communities' preparedness and response to imminent hurricanes. Existing assessment methods focus on hurricane risks at large spatial scales and are therefore not specific enough to provide actionable knowledge for residents or property owners. Fragility functions and other widely utilized assessment methods cannot effectively model the complex relationships between building features and hurricane risk levels. Therefore, we develop and test a building-level hurricane risk assessment with deep feedforward neural network (DFNN) models. The input features of the DFNN models cover building characteristics as well as fine-grained meteorological and hydrological environmental parameters. The assessment outcomes, that is, risk levels, include the probability and intensity of building/property damage induced by wind and surge hazards. We interpret the DFNN models with local interpretable model-agnostic explanations (LIME). We apply the DFNN models to a case building in Cameron County, Louisiana, in response to a hypothetical imminent hurricane to illustrate how the building's risk levels can be assessed in a timely manner as the weather forecast is updated. This research shows the potential of deep-learning models to integrate multi-sourced features and accurately predict buildings' risks from weather extremes for property owners and households. The AI-powered risk assessment model can help coastal populations form appropriate and up-to-date perceptions of imminent hurricanes and inform actionable knowledge for proactive risk mitigation and long-term climate adaptation.
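The paper's DFNN architecture, feature set, and data are not given in the abstract; the following is only a small stand-in using scikit-learn's multilayer perceptron on synthetic building and weather features, showing the mapping from input features to a damage probability for one case building. All feature names and values are hypothetical, and LIME interpretation is omitted.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
n = 2000

# Hypothetical features: [roof age (yr), elevation (m), peak gust (m/s), surge depth (m)]
X = np.column_stack([rng.uniform(0, 40, n), rng.uniform(0, 10, n),
                     rng.uniform(20, 80, n), rng.uniform(0, 3, n)])
# Synthetic "damage" label, purely for illustration
y = (0.02 * X[:, 0] - 0.1 * X[:, 1] + 0.05 * X[:, 2] + 0.8 * X[:, 3]
     + rng.normal(0, 1, n)) > 3.0

clf = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=2000, random_state=0)
clf.fit(X[:1500], y[:1500])

# Risk level of one case building under the latest (hypothetical) forecast;
# re-running as the forecast updates gives a timely, building-specific assessment.
case = np.array([[25.0, 2.0, 55.0, 1.2]])
print(clf.predict_proba(case)[0, 1])
```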

10.
This article presents a flood risk analysis model that considers the spatially heterogeneous nature of flood events. The basic concept of the approach is to generate a large sample of flood events that can be regarded as a temporal extrapolation of observed floods. These are combined with cumulative flood impact indicators, such as building damages, to derive time series of damages for risk estimation. To this end, a multivariate modeling procedure that is able to take into account the spatial characteristics of flooding, the regionalization method top-kriging, and three different impact indicators are combined in a model chain. Eventually, the expected annual flood impact (e.g., expected annual damages) and the flood impact associated with a low probability of occurrence are determined for a study area. The risk model has the potential to augment the understanding of flood risk in a region and thereby contribute to enhanced risk management by, for example, risk analysts, policymakers, or insurance companies. The modeling framework was successfully applied in a proof-of-concept exercise in Vorarlberg (Austria). The results of the case study show that risk analysis has to be based on spatially heterogeneous flood events in order to estimate flood risk adequately.

11.
In many problems of risk analysis, failure is equivalent to the event of a random risk factor exceeding a given threshold. Failure probabilities can be controlled if a decisionmaker is able to set the threshold at an appropriate level. This abstract situation applies, for example, to environmental risks with infrastructure controls; to supply chain risks with inventory controls; and to insurance solvency risks with capital controls. However, uncertainty around the distribution of the risk factor implies that parameter error will be present and the measures taken to control failure probabilities may not be effective. We show that parameter uncertainty increases the probability (understood as expected frequency) of failures. For a large class of loss distributions, arising from increasing transformations of location-scale families (including the log-normal, Weibull, and Pareto distributions), the article shows that failure probabilities can be exactly calculated, as they are independent of the true (but unknown) parameters. Hence it is possible to obtain an explicit measure of the effect of parameter uncertainty on failure probability. Failure probability can be controlled in two different ways: (1) by reducing the nominal required failure probability, depending on the size of the available data set, and (2) by modifying the distribution itself that is used to calculate the risk control. Approach (1) corresponds to a frequentist/regulatory view of probability, while approach (2) is consistent with a Bayesian/personalistic view. We furthermore show that the two approaches are consistent in achieving the required failure probability. Finally, we briefly discuss the effects of data pooling and its systemic risk implications.
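A small Monte Carlo check of the qualitative claim for the lognormal case: when the control threshold is set at the quantile estimated from a finite sample, the realized failure frequency exceeds the nominal target. The parameter values, sample size, and nominal probability are illustrative only; the paper's exact calculations are not reproduced.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
mu, sigma = 1.0, 0.8               # true (unknown) lognormal parameters
n, p_nominal, reps = 30, 0.01, 20_000
z = norm.ppf(1 - p_nominal)

realized = []
for _ in range(reps):
    logs = rng.normal(mu, sigma, n)             # log of the observed risk-factor data
    mu_hat, sig_hat = logs.mean(), logs.std(ddof=1)
    threshold = mu_hat + z * sig_hat            # control set from the estimated quantile
    realized.append(1 - norm.cdf((threshold - mu) / sigma))   # true exceedance probability

print(p_nominal, np.mean(realized))   # mean realized failure probability exceeds the nominal
```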

12.
Risk Analysis, 2018, 38(4): 666–679.
We test here the risk communication proposition that explicit expert acknowledgment of uncertainty in risk estimates can enhance trust and other reactions. We manipulated such a scientific uncertainty message, accompanied by probabilities (20%, 70%, or an implicit ["will occur"] 100%) and time periods (10 or 30 years) in major (magnitude ≥ 8) earthquake risk estimates, to test potential effects on residents potentially affected by seismic activity on the San Andreas fault in the San Francisco Bay Area (n = 750). The uncertainty acknowledgment increased belief that these specific experts were more honest and open, and led to statistically (but not substantively) significant increases in trust in seismic experts generally, but only for the 20% probability (vs. certainty) and the shorter versus longer time period. The acknowledgment did not change judged risk, preparedness intentions, or mitigation policy support. Probability effects independent of the explicit admission of expert uncertainty were also insignificant, except for judged risk, which rose or fell slightly depending upon the measure of judged risk used. Overall, both qualitative expressions of uncertainty and quantitative probabilities had limited effects on public reaction. These results imply that both the theoretical arguments for positive effects and practitioners' potential concerns about negative effects of uncertainty expression may have been overblown. There may still be good reasons to acknowledge experts' uncertainties, but those merit separate justification and their own empirical tests.

13.
Flood loss modeling is an important component of risk analyses and decision support in flood risk management. Commonly, flood loss models describe complex damaging processes by simple, deterministic approaches such as depth-damage functions and are associated with large uncertainty. To improve flood loss estimation and to provide quantitative information about the uncertainty associated with loss modeling, a probabilistic, multivariable Bagging decision Tree Flood Loss Estimation MOdel (BT-FLEMO) for residential buildings was developed. The application of BT-FLEMO provides a probability distribution of estimated losses to residential buildings per municipality. BT-FLEMO was applied and validated at the mesoscale in 19 municipalities that were affected by the 2002 flood of the River Mulde in Saxony, Germany. Validation was undertaken via a comparison with six deterministic loss models, including both depth-damage functions and multivariable models, as well as against official loss data. BT-FLEMO outperforms deterministic, univariable, and multivariable models with regard to model accuracy, although the prediction uncertainty remains high. An important advantage of BT-FLEMO is the quantification of prediction uncertainty: the probability distribution of loss estimates produced by BT-FLEMO represents well the variation range of the loss estimates of the other models in the case study.
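The published model's predictors and data are not reproduced here; the sketch below uses scikit-learn's BaggingRegressor on synthetic data only to show how per-tree predictions yield a loss distribution rather than a single point estimate, which is the core idea behind a probabilistic bagging-tree loss model.

```python
import numpy as np
from sklearn.ensemble import BaggingRegressor

rng = np.random.default_rng(0)
n = 1500

# Hypothetical predictors: [water depth (m), building area (m2), precaution score]
X = np.column_stack([rng.uniform(0, 3, n),
                     rng.uniform(80, 300, n),
                     rng.integers(0, 4, n)])
loss = 5000 * X[:, 0] * (X[:, 1] / 100) * (1 - 0.1 * X[:, 2]) * rng.lognormal(0, 0.4, n)

# Bagged regression trees (the default base estimator is a decision tree)
model = BaggingRegressor(n_estimators=200, random_state=0).fit(X, loss)

building = np.array([[1.5, 150.0, 1.0]])
per_tree = np.array([t.predict(building)[0] for t in model.estimators_])

# Approximate probability distribution of the loss estimate for this building
print(np.percentile(per_tree, [5, 50, 95]))
```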

14.
The Technical Research Centre of Finland (VTT) and Studsvik AB, Sweden, have simulated the decision making of the Swedish Nuclear Power Inspectorate and a power company by applying decision models in a benchmark study. Based on the experience from the benchmark study, a decision analysis framework to be used in safety-related problems is outlined. Such a framework could provide both the power companies and the safety authorities with a more rigorous, systematic approach to their decision making. A decision analytic approach provides a structure for identifying the information requirements of problem solving, and it could thus serve as a discussion forum between the authorities and the utilities. In this context, probabilistic safety assessment (PSA) has the crucial role of expressing the plant safety status in terms of the probability of a reactor core damage accident and of the risk contributions from various accident precursors. However, a decision under uncertainty should not be based solely on probabilities, particularly when the event in question is a rare one and its probability of occurrence is estimated by means of different kinds of approximations.

15.
In this work, we study the effect of epistemic uncertainty on the ranking and categorization of elements of probabilistic safety assessment (PSA) models. We show that, while in a deterministic setting a PSA element belongs to a given category unambiguously, in the presence of epistemic uncertainty a PSA element belongs to a given category only with a certain probability. We propose an approach to estimate these probabilities, showing that their knowledge allows one to appreciate "the sensitivity of component categorizations to uncertainties in the parameter values" (U.S. NRC Regulatory Guide 1.174). We investigate the meaning and utilization of an assignment method based on the expected value of importance measures. We discuss the problem of evaluating changes in quality assurance, maintenance activity prioritization, etc., in the presence of epistemic uncertainty, and show that including epistemic uncertainty in the evaluation makes it necessary to evaluate changes through their effect on PSA model parameters. We propose a categorization of parameters based on the Fussell-Vesely and differential importance (DIM) measures. In addition, issues arise in the calculation of the expected value of the joint importance measure when evaluating changes affecting groups of components; we illustrate that this problem can be solved using DIM. A numerical application to a case study concludes the work.
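A small sketch of the central idea that, under epistemic uncertainty, a component belongs to an importance category only with some probability: sample the basic-event probabilities, compute the Fussell-Vesely importance per sample, and count how often it exceeds a category threshold. The fault tree, distributions, and threshold are placeholders, not the paper's case study.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 20_000

# Placeholder fault tree (rare-event approximation): TOP = A*B + C
# Epistemic uncertainty on basic-event probabilities as lognormal distributions
pA = rng.lognormal(np.log(1e-2), 0.5, n)
pB = rng.lognormal(np.log(5e-2), 0.5, n)
pC = rng.lognormal(np.log(2e-4), 0.8, n)

top = pA * pB + pC
# Fussell-Vesely importance: probability of cut sets containing the component / top
fv = {"A": pA * pB / top, "B": pA * pB / top, "C": pC / top}

threshold = 0.5   # hypothetical boundary of the "high importance" category
for comp, values in fv.items():
    print(comp,
          "P(high importance) =", round(np.mean(values > threshold), 3),
          "mean FV =", round(values.mean(), 3))
```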

16.
Risk Analysis, 2018, 38(8): 1576–1584.
Fault trees are used in reliability modeling to create logical models of fault combinations that can lead to undesirable events. The output of a fault tree analysis (the top event probability) is expressed in terms of the failure probabilities of the basic events that are input to the model. Typically, the basic event probabilities are not known exactly but are modeled as probability distributions; therefore, the top event probability is also represented by an uncertainty distribution. Monte Carlo methods are generally used for evaluating the uncertainty distribution, but such calculations are computationally intensive and do not readily reveal the dominant contributors to the uncertainty. In this article, a closed-form approximation for the fault tree top event uncertainty distribution is developed, which is applicable when the uncertainties in the basic events of the model are lognormally distributed. The results of the approximate method are compared with results from two sampling-based methods: the Monte Carlo method and the Wilks method based on order statistics. It is shown that the closed-form expression can provide a reasonable approximation to results obtained by Monte Carlo sampling without incurring the computational expense. The Wilks method is found to be a useful means of providing an upper bound for the percentiles of the uncertainty distribution while being computationally inexpensive compared with full Monte Carlo sampling. The lognormal approximation method and Wilks's method appear to be attractive, practical alternatives for evaluating uncertainty in the output of fault trees and similar multilinear models.
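The article's exact closed-form expression is not given in the abstract; the sketch below uses a standard moment-matching lognormal approximation for a simple two-cut-set fault tree and compares its percentiles with a Monte Carlo reference. The tree structure and basic-event parameters are placeholders.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)

# Placeholder fault tree (rare-event approx.): TOP = A*B + C*D, with
# lognormal basic-event probabilities specified by (median, sigma of the log)
events = {"A": (1e-2, 0.5), "B": (2e-2, 0.5), "C": (5e-3, 0.8), "D": (1e-2, 0.8)}
mu = {k: np.log(m) for k, (m, s) in events.items()}
sg = {k: s for k, (m, s) in events.items()}

# --- moment-matched lognormal approximation of the top event -----------------
def ln_moments(m, s2):   # mean and variance of a lognormal with log-mean m, log-variance s2
    return np.exp(m + s2 / 2), (np.exp(s2) - 1) * np.exp(2 * m + s2)

mAB, vAB = ln_moments(mu["A"] + mu["B"], sg["A"]**2 + sg["B"]**2)   # product term A*B
mCD, vCD = ln_moments(mu["C"] + mu["D"], sg["C"]**2 + sg["D"]**2)   # product term C*D
m_top, v_top = mAB + mCD, vAB + vCD                                 # sum of independent terms
s2_top = np.log(1 + v_top / m_top**2)
mu_top = np.log(m_top) - s2_top / 2
approx = np.exp(mu_top + norm.ppf([0.05, 0.5, 0.95]) * np.sqrt(s2_top))

# --- Monte Carlo reference ----------------------------------------------------
n = 200_000
sample = {k: rng.lognormal(mu[k], sg[k], n) for k in events}
top = sample["A"] * sample["B"] + sample["C"] * sample["D"]
print(approx, np.percentile(top, [5, 50, 95]))
```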

17.
Shahid Suddle. Risk Analysis, 2009, 29(7): 1024–1040.
Buildings constructed above roads, railways, or other existing buildings are examples of multifunctional urban locations. The construction stage of such buildings is in general extremely complicated, and safety is one of the critical issues during this stage. Because traffic on the infrastructure must continue while the building is constructed above it, falling objects due to construction activities form a major hazard for third parties, i.e., people present on or beneath the infrastructure, such as car drivers and passengers. This article outlines a systematic approach to quantitative risk assessment (QRA) and risk management of falling elements with respect to third parties during the construction stage of a building above infrastructure in multifunctional urban locations. To set up the QRA model, quantifiable aspects influencing the risk for third parties were determined, and the conditional probabilities of these aspects were estimated from historical data or engineering judgment. These conditional probabilities were then used as input parameters of a Bayesian network representing the relations and conditional dependences between the quantified aspects. The outcome of the Bayesian network, the calculation of both the human and the financial risk in quantitative terms, is compared with risk acceptance criteria as far as possible. Furthermore, the effects of some safety measures were analyzed and optimized to support decision making. Finally, the possibility of integrating safety measures in the functional and structural design of the building above the infrastructure is explored.
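The article's full Bayesian network is not given in the abstract; the following is only a minimal chain-of-events sketch with hypothetical conditional probabilities, showing how quantified aspects combine into an annual third-party risk figure with and without one safety measure.

```python
# Hypothetical conditional probabilities for one construction activity above a road
# (placeholder values, not from the article)
p_drop         = 1e-4      # P(object falls) per crane lift
p_reach_road   = {"no net": 0.30, "safety net": 0.03}   # P(object reaches road | fall)
p_hit_vehicle  = 0.10      # P(object strikes an occupied vehicle | reaches road)
p_fatality     = 0.20      # P(fatality | strike)
lifts_per_year = 5000

for measure, p_reach in p_reach_road.items():
    risk_per_lift = p_drop * p_reach * p_hit_vehicle * p_fatality
    annual_risk = 1 - (1 - risk_per_lift) ** lifts_per_year
    print(measure, f"annual third-party fatality probability = {annual_risk:.2e}")

# Comparing both results against a risk acceptance criterion (e.g., 1e-6 per year)
# supports the decision on whether the safety net is required.
```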

18.
The present study investigates U.S. Department of Agriculture inspection records in the Agricultural Quarantine Activity System database to estimate the probability of quarantine pests on propagative plant materials imported from various countries of origin and to develop a methodology for ranking the risk of country–commodity combinations based on quarantine pest interceptions. Data collected from October 2014 to January 2016 were used for developing predictive models and for a validation study. A generalized linear model with Bayesian inference and a generalized linear mixed effects model were used to compare the interception rates of quarantine pests for different country–commodity combinations. The prediction ability of the generalized linear mixed effects models was greater than that of the generalized linear models. The estimated pest interception probability and confidence interval for each country–commodity combination were categorized into one of four compliance levels, "High," "Medium," "Low," and "Poor/Unacceptable," using K-means clustering analysis. This study presents a risk-based categorization for each country–commodity combination based on the probability of quarantine pest interceptions and the uncertainty in that assessment.
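A minimal sketch of the categorization step only, under simplified assumptions: each country–commodity interception probability is estimated with a beta posterior, and the (estimate, uncertainty) pairs are clustered into four compliance levels with K-means. The counts and combination names are hypothetical, and the study's GLM/GLMM estimation is not reproduced.

```python
import numpy as np
from scipy.stats import beta
from sklearn.cluster import KMeans

# Hypothetical (interceptions, inspections) per country–commodity combination
records = {"X-orchid": (2, 400), "Y-orchid": (35, 900), "Z-fern": (0, 150),
           "X-cactus": (9, 300), "Y-fern": (60, 700), "Z-orchid": (4, 1200)}

features, names = [], []
for name, (k, n) in records.items():
    post = beta(k + 0.5, n - k + 0.5)           # Jeffreys posterior for the interception rate
    mean = post.mean()
    width = post.ppf(0.975) - post.ppf(0.025)   # uncertainty of the estimate
    features.append([mean, width])
    names.append(name)

labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(np.array(features))
for name, feat, lab in zip(names, features, labels):
    print(f"{name}: rate = {feat[0]:.3f}, cluster {lab}")

# Clusters are then mapped to "High", "Medium", "Low", "Poor/Unacceptable"
# by ordering the cluster centers on the interception rate.
```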

19.
Risk Analysis, 2018, 38(7): 1455–1473.
Recently, growing earthquake activity in the northeastern Netherlands has aroused considerable concern among the 600,000 provincial inhabitants. There, at 3 km depth, the rich Groningen gas field extends over 900 km2 and still contains about 600 of the original 2,800 billion cubic meters (bcm). Particularly after 2001, earthquakes have increased in number, magnitude (M, on the logarithmic Richter scale), and damage to numerous buildings. The man-made nature of extraction-induced earthquakes challenges static notions of risk, complicates formal risk assessment, and questions familiar conceptions of acceptable risk. Here, a 26-year set of 294 earthquakes with M ≥ 1.5 is statistically analyzed in relation to increasing cumulative gas extraction since 1963. Extrapolations from a fast-rising trend over 2001–2013 indicate that, under "business as usual," around 2021 some 35 earthquakes with M ≥ 1.5 might occur annually, including four with M ≥ 2.5 (ten-fold stronger) and one with M ≥ 3.5 every 2.5 years. Given this uneasy prospect, annual gas extraction has been reduced from 54 bcm in 2013 to 24 bcm in 2017, which has significantly reduced earthquake activity so far. However, when extraction is stabilized at 24 bcm per year for 2017–2021 (or 21.6 bcm, as judicially established in November 2017), the annual number of earthquakes would gradually increase again, with an expected all-time maximum M ≈ 4.5. Further safety management may best follow the distinct stages of seismic risk generation, with moderation of gas extraction and massive (but late and slow) building reinforcement as the outstanding strategies. Officially, "acceptable risk" is mainly approached by quantification of risk (e.g., of fatal building collapse) for testing against national safety standards, but actual (local) risk estimation remains problematic. Also important are societal cost–benefit analysis, equity considerations, and precautionary restraint. Socially and psychologically, deliberate attempts are being made to improve risk communication, reduce public anxiety, and restore people's confidence in the responsible experts and policymakers.

20.
This paper presents a methodology for analyzing Analytic Hierarchy Process (AHP) rankings when the pairwise preference judgments are uncertain (stochastic). If the relative preference statements are represented by judgment intervals, rather than single values, then the rankings resulting from a traditional (deterministic) AHP analysis based on single judgment values may be reversed, and therefore incorrect. In the presence of stochastic judgments, the traditional AHP rankings may be stable or unstable, depending on the nature of the uncertainty. We develop multivariate statistical techniques to obtain both point estimates and confidence intervals of the rank reversal probabilities, and show how simulation experiments can be used as an effective and accurate tool for analyzing the stability of the preference rankings under uncertainty. If the rank reversal probability is low, then the rankings are stable and the decision maker can be confident that the AHP ranking is correct. However, if the likelihood of rank reversal is high, then the decision maker should interpret the AHP rankings cautiously, as there is a substantial probability that these rankings are incorrect. High rank reversal probabilities indicate a need for exploring alternative problem formulations and methods of analysis. Information about the extent to which the ranking of the alternatives is sensitive to the stochastic nature of the pairwise judgments should be valuable input to the decision-making process, much as variability and confidence intervals are crucial tools for statistical inference. We provide simulation experiments and numerical examples to evaluate our method. Our analysis of rank reversal due to stochastic judgments is not related to previous research on rank reversal that focuses on mathematical properties inherent to the AHP methodology, for instance, the occurrence of rank reversal when a new alternative is added or an existing one is deleted.
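A minimal sketch of the simulation idea for a three-alternative problem: draw pairwise judgments uniformly from hypothetical judgment intervals, derive priority weights from the principal eigenvector, and count how often the deterministic top choice loses its rank, together with a normal-approximation confidence interval. The paper's multivariate techniques are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical judgment intervals (lower, upper) for comparisons a12, a13, a23
intervals = {"12": (2.0, 4.0), "13": (3.0, 6.0), "23": (0.8, 2.0)}

def weights(a12, a13, a23):
    """Priority weights from the principal eigenvector of the comparison matrix."""
    A = np.array([[1, a12, a13], [1 / a12, 1, a23], [1 / a13, 1 / a23, 1]])
    vals, vecs = np.linalg.eig(A)
    w = np.abs(vecs[:, np.argmax(vals.real)].real)
    return w / w.sum()

# Deterministic ranking from interval midpoints
w0 = weights(*[np.mean(v) for v in intervals.values()])
top0 = np.argmax(w0)

reps, reversals = 10_000, 0
for _ in range(reps):
    draws = [rng.uniform(lo, hi) for lo, hi in intervals.values()]
    reversals += np.argmax(weights(*draws)) != top0

p_hat = reversals / reps
se = np.sqrt(p_hat * (1 - p_hat) / reps)
print(p_hat, (p_hat - 1.96 * se, p_hat + 1.96 * se))   # point estimate and 95% CI
```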
