Similar Literature
20 similar records found
1.
The maritime industry is moving toward a "goal-setting" risk-based regime. This opens the way for safety engineers to explore and exploit flexible and advanced risk modeling and decision-making approaches in the design and operation processes. In this article, following a brief review of the current status of maritime risk assessment, a design/operation selection framework and a design/operation optimization framework are outlined. A general discussion of control engineering techniques and their application to risk modeling and decision making is given. Four novel risk modeling and decision-making approaches are then outlined with illustrative examples to demonstrate their use. Such approaches may be used as alternatives to facilitate risk modeling and decision making in situations where conventional techniques cannot be appropriately applied. Finally, recommendations on further exploitation of advances in general engineering and technology are suggested with respect to risk modeling and decision making.

2.
The treatment of uncertainties associated with modeling and risk assessment has recently attracted significant attention. The methodology and guidance for dealing with parameter uncertainty have been fairly well developed, and quantitative tools such as Monte Carlo modeling are often recommended. However, the issue of model uncertainty is still rarely addressed in practical applications of risk assessment. The use of several alternative models to derive a range of model outputs or risks is one of a few available techniques. This article addresses the often-overlooked issue of what we call "modeler uncertainty," i.e., differences in problem formulation, model implementation, and parameter selection originating from subjective interpretation of the problem at hand. This study uses results from the Fruit Working Group, which was created under the International Atomic Energy Agency (IAEA) BIOMASS program (BIOsphere Modeling and ASSessment). Model-model and model-data intercomparisons reviewed in this study were conducted by the working group for a total of three different scenarios. The greatest uncertainty was found to result from modelers' interpretation of scenarios and approximations made by modelers. In scenarios that were unclear to modelers, the initial differences in model predictions were as high as seven orders of magnitude. Only after several meetings and discussions about specific assumptions did the predictions of the various models converge. Our study shows that parameter uncertainty (as evaluated by a probabilistic Monte Carlo assessment) may have contributed over one order of magnitude to the overall modeling uncertainty. The final model predictions still ranged over one to three orders of magnitude, depending on the specific scenario. This study illustrates the importance of problem formulation and of implementing an analytic-deliberative process in risk characterization.

3.
This paper is a tutorial that demonstrates the current state-of-the-art methods for incorporating risk into project selection decision making. The projects under consideration might be R&D, IT, or other capital expenditure programs. We show six decision-making methods: 1. mean-variance (MV), 2. mean-semivariance, 3. mean-critical probability, 4. stochastic dominance, 5. almost stochastic dominance (ASD), and 6. mean-Gini. We also describe the assumptions about the risk attitudes of the decision maker that are associated with each of the techniques. While all these methods have been previously applied elsewhere, this is the first paper that shows all of their applications in the project selection context, together with their interrelationships, strengths, and weaknesses. We have applied all six techniques to the same group of five hypothetical projects and evaluated the resulting nondominated sets. Among the methods reviewed here, stochastic dominance is recommended because it requires the least restrictive assumptions. ASD and mean-Gini are recommended when stochastic dominance is not practical or does not yield definitive choices. MV, mean-semivariance, and mean-critical probability are shown to be flawed.
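The five hypothetical projects used in the paper are not reproduced in the abstract. Purely as an illustrative sketch of how two of the listed screening rules can be coded, the snippet below checks mean-variance dominance and first-/second-order stochastic dominance for a few invented discrete payoff distributions; the project data and the Python formulation are assumptions, not the paper's example.

```python
import numpy as np

# Invented, equally likely payoff scenarios for three hypothetical projects.
projects = {
    "A": np.array([10.0, 20.0, 30.0, 40.0]),
    "B": np.array([ 5.0, 25.0, 35.0, 45.0]),
    "C": np.array([15.0, 20.0, 25.0, 50.0]),
}
grid = np.linspace(0.0, 60.0, 601)            # common evaluation grid for the CDFs

def cdf(payoffs, t):
    return np.array([(payoffs <= v).mean() for v in t])

def mv_dominates(x, y):
    """Mean-variance rule: higher (or equal) mean and lower (or equal) variance,
    strict in at least one of the two."""
    return (x.mean() >= y.mean() and x.var() <= y.var()
            and (x.mean() > y.mean() or x.var() < y.var()))

def fsd_dominates(x, y):
    """First-order stochastic dominance: F_x lies at or below F_y everywhere."""
    Fx, Fy = cdf(x, grid), cdf(y, grid)
    return np.all(Fx <= Fy) and np.any(Fx < Fy)

def ssd_dominates(x, y):
    """Second-order stochastic dominance: the running integral of F_x never exceeds that of F_y."""
    Fx, Fy = cdf(x, grid), cdf(y, grid)
    step = grid[1] - grid[0]
    Ix, Iy = np.cumsum(Fx) * step, np.cumsum(Fy) * step
    return np.all(Ix <= Iy) and np.any(Ix < Iy)

for a in projects:
    for b in projects:
        if a != b:
            print(a, "vs", b,
                  "| MV:", mv_dominates(projects[a], projects[b]),
                  "| FSD:", fsd_dominates(projects[a], projects[b]),
                  "| SSD:", ssd_dominates(projects[a], projects[b]))
```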

4.
Machine Learning (ML) techniques offer exciting new avenues for leadership research. In this paper, we discuss how ML techniques can be used to inform predictive and causal models of leadership effects and clarify why both types of model are important for leadership research. We propose combining ML and experimental designs to draw causal inferences by introducing a recently developed technique for isolating "heterogeneous treatment effects." We provide a step-by-step guide on how to design studies that combine field experiments with the application of ML to establish causal relationships with maximal predictive power. Drawing on examples in the leadership literature, we illustrate how the suggested approach can be applied to examine the impact of, for example, leadership behavior on follower outcomes. We also discuss how ML can be used to advance leadership research from theoretical, methodological, and practical perspectives, and we consider its limitations.
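The specific estimator for heterogeneous treatment effects discussed in the paper is not given in the abstract. The sketch below shows one simple way such effects can be recovered from a randomized field experiment with off-the-shelf ML: a "T-learner" that fits one random forest per experimental arm and takes the difference in predictions as the individual-level effect. The simulated data, variable names, and the choice of random forests are illustrative assumptions.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

# Simulated field experiment (invented): X = follower/context covariates,
# T = randomized leadership intervention, y = follower outcome.
n = 2000
X = rng.normal(size=(n, 5))
T = rng.integers(0, 2, size=n)                      # random assignment
tau = 0.5 + 1.0 * (X[:, 0] > 0)                     # true heterogeneous effect (unknown in practice)
y = X[:, 1] + tau * T + rng.normal(scale=1.0, size=n)

# T-learner: fit separate outcome models in the treated and control arms.
m1 = RandomForestRegressor(n_estimators=200, random_state=0).fit(X[T == 1], y[T == 1])
m0 = RandomForestRegressor(n_estimators=200, random_state=0).fit(X[T == 0], y[T == 0])

cate = m1.predict(X) - m0.predict(X)                # individual-level effect estimates
print("average effect estimate:", cate.mean())
print("effect when X0 > 0 vs X0 <= 0:",
      cate[X[:, 0] > 0].mean(), cate[X[:, 0] <= 0].mean())
```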

5.
Security risk management is essential for ensuring effective airport operations. This article introduces AbSRiM, a novel agent-based modeling and simulation approach to perform security risk management for airport operations that uses formal sociotechnical models that include temporal and spatial aspects. The approach contains four main steps: scope selection, agent-based model definition, risk assessment, and risk mitigation. The approach is based on traditional security risk management methodologies, but uses agent-based modeling and Monte Carlo simulation at its core. Agent-based modeling is used to model threat scenarios, and Monte Carlo simulations are then performed with this model to estimate security risks. The use of the AbSRiM approach is demonstrated with an illustrative case study. This case study includes a threat scenario in which an adversary attacks an airport terminal with an improvised explosive device. The approach provides a promising way to include important elements, such as human aspects and spatiotemporal aspects, in the assessment of risk. More research is still needed to better identify the strengths and weaknesses of the AbSRiM approach in different case studies, but results demonstrate the feasibility of the approach and its potential.
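AbSRiM's formal sociotechnical agent model cannot be reconstructed from the abstract. The toy sketch below illustrates only the final step described, Monte Carlo simulation over randomized runs of a threat scenario to estimate security risk; the scenario structure, detection probability, and consequence measure are all invented placeholders.

```python
import numpy as np

rng = np.random.default_rng(42)

def simulate_scenario(rng):
    """One toy run of an attack scenario: interception at a checkpoint,
    otherwise an explosion whose consequence depends on crowd density (all invented)."""
    detected = rng.random() < 0.6                 # assumed detection probability
    if detected:
        return 0.0                                # no consequence if intercepted
    crowd = rng.uniform(50, 400)                  # people in the affected area
    casualty_fraction = rng.beta(2, 8)            # severity of the blast
    return crowd * casualty_fraction              # consequence measure (casualties)

n_runs = 100_000
losses = np.array([simulate_scenario(rng) for _ in range(n_runs)])

print("estimated risk (expected consequence per attack attempt):", losses.mean())
print("P(consequence > 100):", (losses > 100).mean())
```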

6.
We review approaches to dose-response modeling and risk assessment for binary data from developmental toxicity studies. In particular, we focus on jointly modeling fetal death and malformation and use a continuation ratio formulation of the multinomial distribution to provide a model for risk. Generalized estimating equations are used to account for clustering of animals within litters. The fitted model is then used to calculate doses corresponding to a specified level of excess risk. Two methods of arriving at a lower confidence limit, or benchmark dose, are illustrated and compared. We also discuss models based on single binary end points and compare our approach to a binary analysis of whether or not the animal was 'affected' (either dead or malformed). The models are illustrated using data from four developmental toxicity studies of EG, DEHP, TGDM, and DYME conducted through the National Toxicology Program.
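As a much-simplified sketch of the benchmark-dose idea in the abstract (a single binary "affected" end point, no continuation-ratio structure, and litter clustering ignored, so no GEE), the snippet below fits a logistic dose-response to invented quantal counts and solves for the dose giving 10% extra risk.

```python
import numpy as np
from scipy.optimize import brentq, minimize

# Invented dose groups and counts; not data from the cited NTP studies.
dose  = np.array([0.0, 0.25, 0.5, 1.0, 1.5])   # g/kg/day
n_fet = np.array([120, 118, 115, 110, 105])    # fetuses per dose group
n_aff = np.array([  6,  14,  30,  60,  85])    # affected (dead or malformed)

def p_affected(d, b0, b1):
    return 1.0 / (1.0 + np.exp(-(b0 + b1 * d)))

def neg_loglik(theta):
    p = np.clip(p_affected(dose, *theta), 1e-9, 1 - 1e-9)
    return -np.sum(n_aff * np.log(p) + (n_fet - n_aff) * np.log(1 - p))

b0, b1 = minimize(neg_loglik, x0=[-2.0, 2.0]).x
p0 = p_affected(0.0, b0, b1)

def extra_risk(d):
    return (p_affected(d, b0, b1) - p0) / (1.0 - p0)

# Dose corresponding to 10% extra risk (BMD10); a lower confidence limit (BMDL)
# would additionally require profiling or bootstrapping the fit.
bmd10 = brentq(lambda d: extra_risk(d) - 0.10, 1e-6, dose.max())
print(f"background risk: {p0:.3f}, BMD10: {bmd10:.3f} g/kg/day")
```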

7.
By building on a genetic-inspired, attribute-based conceptual framework for safety risk analysis, we propose a novel approach to define, model, and simulate univariate and bivariate construction safety risk at the situational level. Our fully data-driven techniques provide construction practitioners and academicians with an easy and automated way of getting valuable empirical insights from attribute-based data extracted from unstructured textual injury reports. By applying our methodology to a data set of 814 injury reports, we first show the frequency-magnitude distribution of construction safety risk to be very similar to that of many natural phenomena such as precipitation or earthquakes. Motivated by this observation, and drawing on state-of-the-art techniques in hydroclimatology and insurance, we then introduce univariate and bivariate nonparametric stochastic safety risk generators based on kernel density estimators and copulas. These generators enable the user to produce large numbers of synthetic safety risk values faithful to the original data, allowing safety-related decision making under uncertainty to be grounded in extensive empirical evidence. One of the implications of our study is that, like natural phenomena, construction safety may benefit from being studied quantitatively by leveraging empirical data rather than strictly being approached from a managerial perspective using subjective data, which is the current industry standard. Finally, a side but interesting finding is that, in our data set, attributes related to high energy levels (e.g., machinery, hazardous substance) and to human error (e.g., improper security of tools) emerge as strong risk shapers.
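The 814-report attribute data set is not available here, so the snippet below only sketches the type of generator the abstract describes: a Gaussian-kernel density estimate for the univariate case and a Gaussian copula with empirical marginals for the bivariate case, each used to draw synthetic risk values. The input data and the Gaussian-copula choice are illustrative assumptions.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Placeholders for attribute-based safety risk values extracted from injury reports.
risk  = rng.lognormal(mean=0.0, sigma=1.0, size=814)        # invented univariate data
risk2 = 0.6 * risk + rng.lognormal(sigma=0.8, size=814)     # invented second variable

# --- Univariate generator: Gaussian kernel density estimate, then sampling ---
kde = stats.gaussian_kde(risk)
synthetic_uni = kde.resample(10_000).ravel()

# --- Bivariate generator: Gaussian copula with empirical marginals -----------
u = np.column_stack([stats.rankdata(risk)  / (len(risk)  + 1),
                     stats.rankdata(risk2) / (len(risk2) + 1)])
z = stats.norm.ppf(u)
corr = np.corrcoef(z.T)                                      # copula correlation matrix
z_new = rng.multivariate_normal([0.0, 0.0], corr, size=10_000)
u_new = stats.norm.cdf(z_new)
# Map uniforms back through the empirical marginals (simple quantile inversion).
synthetic_biv = np.column_stack([np.quantile(risk,  u_new[:, 0]),
                                 np.quantile(risk2, u_new[:, 1])])

rho_orig, _ = stats.spearmanr(risk, risk2)
rho_synth, _ = stats.spearmanr(synthetic_biv[:, 0], synthetic_biv[:, 1])
print("original vs synthetic mean:", risk.mean(), synthetic_uni.mean())
print("original vs synthetic rank correlation:", rho_orig, rho_synth)
```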

8.
Legionnaires' disease (LD), first reported in 1976, is an atypical pneumonia caused by bacteria of the genus Legionella, and most frequently by L. pneumophila (Lp). Subsequent research on exposure to the organism employed various animal models, and with quantitative microbial risk assessment (QMRA) techniques, the animal model data may provide insights on human dose-response for LD. This article focuses on the rationale for selection of the guinea pig model, comparison of the dose-response model results, comparison of projected low-dose responses for guinea pigs, and risk estimates for humans. Based on both in vivo and in vitro comparisons, the guinea pig (Cavia porcellus) dose-response data were selected for modeling human risk. We completed dose-response modeling for the beta-Poisson (approximate and exact), exponential, probit, logistic, and Weibull models for Lp inhalation, mortality, and infection (end point: elevated body temperature) in guinea pigs. For mechanistic reasons, including low-dose exposure probability, further work on human risk estimates for LD employed the exponential and beta-Poisson models. With an exposure of 10 colony-forming units (CFU) (retained dose), the QMRA model predicted a mild infection risk of 0.4 (as evaluated by seroprevalence) and a clinically severe LD case (e.g., hospitalization and supportive care) risk of 0.0009. The calculated rates based on estimated human exposures for outbreaks used for the QMRA model validation are within an order of magnitude of the reported LD rates. These validation results suggest that the LD QMRA animal model selection, dose-response modeling, and extension to human risk projections were appropriate.
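The fitted guinea pig parameters are not given in the abstract, so the sketch below only shows the two dose-response forms carried forward for human risk, the exponential and the approximate beta-Poisson models, evaluated at the 10 CFU retained dose with placeholder parameter values that would be replaced by the fitted estimates.

```python
import numpy as np

def exponential_dr(dose, r):
    """Exponential dose-response: each organism independently initiates a response with probability r."""
    return 1.0 - np.exp(-r * dose)

def beta_poisson_dr(dose, alpha, n50):
    """Approximate beta-Poisson dose-response, parameterized via the median infectious dose N50."""
    return 1.0 - (1.0 + dose * (2.0 ** (1.0 / alpha) - 1.0) / n50) ** (-alpha)

dose = 10.0                                       # retained dose, CFU
r_placeholder = 0.06                              # illustrative value, not the paper's fitted parameter
alpha_placeholder, n50_placeholder = 0.2, 1000.0  # illustrative values

print("exponential model risk :", exponential_dr(dose, r_placeholder))
print("beta-Poisson model risk:", beta_poisson_dr(dose, alpha_placeholder, n50_placeholder))
```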

9.
Extrapolation relationships are of keen interest in chemical risk assessment, where they play a prominent role in translating experimentally derived (usually in animals) toxicity estimates into estimates more relevant to human populations. A standard approach for characterizing each extrapolation relies on ratios of pre-existing toxicity estimates. Applications of this "ratio approach" have overlooked several sources of error. This article examines the case of ratios of benchmark doses, seeking to better understand their informativeness. The approach involves mathematically modeling the process by which the ratios are generated in practice. Both closed-form and simulation-based models of this "data-generating process" (DGP) are developed, paying special attention to the influence of experimental design. The results show the potential for significant limits to informativeness and reveal dependencies. Future applications of the ratio approach should take imprecision and bias into account. Bootstrap techniques are recommended for gauging imprecision, but more complicated techniques will be required for gauging bias (and capturing dependencies). Strategies for mitigating the errors are suggested.
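As a minimal illustration of the bootstrap recommended for gauging the imprecision of a ratio of toxicity estimates, the snippet below resamples two invented experimental samples, recomputes a stand-in "toxicity estimate" for each, and reports a percentile interval for their ratio; the data and the estimator are placeholders, not the article's data-generating-process model.

```python
import numpy as np

rng = np.random.default_rng(3)

# Invented samples standing in for the experiments behind two toxicity estimates.
sample_a = rng.lognormal(mean=1.0, sigma=0.4, size=50)
sample_b = rng.lognormal(mean=0.4, sigma=0.5, size=50)

def toxicity_estimate(x):
    # Stand-in for a fitted benchmark dose; here simply a low percentile of the sample.
    return np.percentile(x, 10)

ratio_hat = toxicity_estimate(sample_a) / toxicity_estimate(sample_b)

# Nonparametric bootstrap of the ratio to gauge its imprecision.
B = 5000
boot = np.empty(B)
for b in range(B):
    a = rng.choice(sample_a, size=sample_a.size, replace=True)
    c = rng.choice(sample_b, size=sample_b.size, replace=True)
    boot[b] = toxicity_estimate(a) / toxicity_estimate(c)

lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"ratio estimate: {ratio_hat:.2f}, 95% bootstrap interval: ({lo:.2f}, {hi:.2f})")
```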

10.
A Monte Carlo method is presented to study the effect of systematic and random errors on computer models that deal mainly with experimental data. It is a common assumption in this type of model (linear and nonlinear regression, and nonregression computer models) involving experimental measurements that the error sources are mainly random and independent, with no constant background (systematic) errors. However, comparisons of different experimental data sources often reveal evidence of significant bias or calibration errors. The uncertainty analysis approach presented in this work is based on the analysis of cumulative probability distributions for output variables of the models involved, taking into account the effect of both types of errors. The probability distributions are obtained by performing Monte Carlo simulation coupled with appropriate definitions of the random and systematic errors. The main objectives are to detect which error source has stochastic dominance in the uncertainty propagation and to assess the combined effect on the output variables of the models. The results from the case studies analyzed show that the approach is able to distinguish which error type has a more significant effect on the performance of the model. It was also found that systematic or calibration errors, if present, cannot be neglected in the uncertainty analysis of models that depend on experimental measurements such as chemical and physical properties. The approach can be used to facilitate decision making in fields related to safety factor selection, modeling, experimental data measurement, and experimental design.
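As a compact sketch of the approach described, Monte Carlo propagation of both random and systematic errors through a model of experimental data with comparison of the resulting cumulative distributions, the snippet below uses an invented two-input property model; the error magnitudes and the functional form are placeholder assumptions.

```python
import numpy as np

rng = np.random.default_rng(7)

def model(temperature, pressure):
    # Placeholder nonlinear model of a measured property (invented functional form).
    return 0.5 * temperature ** 1.2 + 3.0 * np.log(pressure)

true_T, true_P = 300.0, 10.0
n = 50_000

# Random (independent) measurement errors: redrawn for every simulated measurement.
T_rand = true_T + rng.normal(0.0, 2.0, n)
P_rand = true_P + rng.normal(0.0, 0.3, n)

# Systematic (calibration) errors: one offset per realization, shared within that realization.
T_bias = rng.uniform(-5.0, 5.0, n)
P_bias = rng.uniform(-0.5, 0.5, n)

y_random_only = model(T_rand, P_rand)
y_both = model(T_rand + T_bias, P_rand + P_bias)

# Compare the cumulative probability distributions of the model output.
for label, y in [("random errors only", y_random_only), ("random + systematic", y_both)]:
    q05, q50, q95 = np.percentile(y, [5, 50, 95])
    print(f"{label}: 5th={q05:.2f}, median={q50:.2f}, 95th={q95:.2f}")
```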

11.
倪志凌  周好文 《管理学报》2009,6(7):890-894
Modeling and performance analysis of business processes is an important topic in research on process-oriented banks. By extending generalized stochastic timed Petri nets, the paper applies them to modeling the business processes of a process-oriented bank. Under the assumption that activity execution times follow a normal distribution, equivalent performance-analysis methods are given for four process meta-patterns. On this basis, activity execution times are further extended to arbitrary distributions, and a general equivalent performance-analysis method is given.
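The paper's Petri-net formalism cannot be reconstructed from the abstract alone. The sketch below only illustrates, numerically, what equivalent performance analysis can look like for two elementary patterns when activity execution times are normally distributed: a sequence pattern (means and variances add exactly) and a parallel AND-join pattern (the maximum of the branches, approximated here by simulation). The activity parameters are invented.

```python
import numpy as np

rng = np.random.default_rng(11)

# Invented activities: (mean, std) of normally distributed execution times, in minutes.
activities = [(5.0, 1.0), (8.0, 2.0), (3.0, 0.5)]

# Sequence pattern: execution times add, so means and variances add exactly.
seq_mean = sum(m for m, s in activities)
seq_var = sum(s ** 2 for m, s in activities)
print(f"sequence pattern: mean={seq_mean:.2f}, std={seq_var ** 0.5:.2f}")

# Parallel (AND-split/join) pattern: completion time is the maximum of the branches,
# which is no longer normal; estimate its moments by Monte Carlo simulation.
samples = np.column_stack([rng.normal(m, s, 100_000) for m, s in activities])
par_time = samples.max(axis=1)
print(f"parallel pattern: mean={par_time.mean():.2f}, std={par_time.std():.2f}")
```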

12.
This paper analyzes the properties of standard estimators, tests, and confidence sets (CSs) for parameters that are unidentified or weakly identified in some parts of the parameter space. The paper also introduces methods to make the tests and CSs robust to such identification problems. The results apply to a class of extremum estimators and corresponding tests and CSs that are based on criterion functions that satisfy certain asymptotic stochastic quadratic expansions and that depend on the parameter that determines the strength of identification. This covers a class of models estimated using maximum likelihood (ML), least squares (LS), quantile, generalized method of moments, generalized empirical likelihood, minimum distance, and semi-parametric estimators. The consistency/lack-of-consistency and asymptotic distributions of the estimators are established under a full range of drifting sequences of true distributions. The asymptotic sizes (in a uniform sense) of standard and identification-robust tests and CSs are established. The results are applied to the ARMA(1, 1) time series model estimated by ML and to the nonlinear regression model estimated by LS. In companion papers, the results are applied to a number of other models.

13.
Prediction error identification methods have recently been the object of much study and have wide applicability. The maximum likelihood (ML) identification methods for Gaussian models and the least squares prediction error method (LSPE) are special cases of the general approach. In this paper, we investigate conditions for distinguishability, or identifiability, of multivariate random processes, for both continuous and discrete observation time T. We consider stationary stochastic processes, for the ML and LSPE methods, and for a large observation interval T we resolve the identifiability question. Our analysis begins by considering stationary autoregressive moving average models, but the conclusions apply to general stationary, stable vector models. The limiting value of the criterion function as T → ∞ is evaluated, and it is viewed as a distance measure in the parameter space of the model. The main new result of this paper is to specify the equivalence classes of stationary models that achieve the global minimum of this distance measure, and hence to determine precisely the classes of models that are not identifiable from each other. The new conclusions are useful for parameterizing multivariate stationary models in system identification problems. Relationships to previously discovered identifiability conditions are discussed.
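As a small numerical aside, the snippet below reproduces the textbook case of ARMA(1,1) non-identifiability that motivates equivalence-class results of this kind: when the AR and MA polynomials share a common root, the model's second-order properties coincide with those of white noise, so the two parameterizations cannot be told apart from stationary data. This is a standard illustration, not the paper's general multivariate result.

```python
from statsmodels.tsa.arima_process import ArmaProcess

# ARMA(1,1) with a common factor: (1 - 0.5 L) x_t = (1 - 0.5 L) e_t, i.e. x_t = e_t.
# Polynomial coefficients are given with the leading 1 included.
common_factor = ArmaProcess(ar=[1, -0.5], ma=[1, -0.5])
white_noise   = ArmaProcess(ar=[1], ma=[1])

print("ARMA(1,1) with common root, ACF:", common_factor.acf(5).round(6))
print("white noise, ACF:               ", white_noise.acf(5).round(6))
# Both print [1, 0, 0, 0, 0]: the two parameterizations are observationally
# equivalent, so the parameter pair is not identifiable along this ridge.
```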

14.
Floods are a natural hazard evolving in space and time according to meteorological and river basin dynamics, so that a single flood event can affect different regions over the event duration. This physical mechanism introduces spatio-temporal relationships between flood records and losses at different locations over a given time window that should be taken into account for an effective assessment of the collective flood risk. However, since extreme floods are rare events, the limited number of historical records usually prevents a reliable frequency analysis. To overcome this limit, we move from the analysis of extreme events to the modeling of continuous stream flow records, preserving the spatio-temporal correlation structures of the entire process and making more efficient use of the information provided by continuous flow records. The approach is based on the dynamic copula framework, which allows the modeling of spatio-temporal properties to be split by coupling suitable time series models, accounting for temporal dynamics, with multivariate distributions describing spatial dependence. The model is applied to 490 stream flow sequences recorded across 10 of the largest river basins in central and eastern Europe (Danube, Rhine, Elbe, Oder, Weser, Meuse, Rhone, Seine, Loire, and Garonne). Using available proxy data to quantify local flood exposure and vulnerability, we show that temporal dependence plays a key role in reproducing interannual persistence, and thus the magnitude and frequency of annual proxy flood losses aggregated at the basin-wide scale, while the copulas preserve the spatial dependence of losses at weekly and annual time scales.
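The full dynamic-copula model for 490 gauges is well beyond a short example. As a minimal two-gauge sketch of the general recipe described, a time series model for temporal dynamics coupled with a copula for spatial dependence, the code below fits an AR(1) to each of two simulated flow series, estimates a Gaussian copula on the residual ranks, and generates synthetic flows that retain both structures. The data and the Gaussian-copula choice are illustrative assumptions.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(14)

# Invented weekly "streamflow" at two neighboring gauges with temporal persistence
# and spatial dependence (stand-ins for observed records).
n = 1000
shock = rng.multivariate_normal([0, 0], [[1.0, 0.7], [0.7, 1.0]], size=n)
flow = np.zeros((n, 2))
for t in range(1, n):
    flow[t] = 0.6 * flow[t - 1] + shock[t]

# 1) Temporal dynamics: fit an AR(1) to each gauge separately.
phi = np.array([np.polyfit(flow[:-1, j], flow[1:, j], 1)[0] for j in range(2)])
resid = flow[1:] - phi * flow[:-1]

# 2) Spatial dependence: Gaussian copula on the AR(1) residuals.
u = np.column_stack([stats.rankdata(resid[:, j]) / (len(resid) + 1) for j in range(2)])
rho = np.corrcoef(stats.norm.ppf(u).T)[0, 1]

# 3) Synthetic generation: correlated innovations fed back through the AR(1) models.
m = 5000
z = rng.multivariate_normal([0, 0], [[1.0, rho], [rho, 1.0]], size=m)
innov = np.column_stack([np.quantile(resid[:, j], stats.norm.cdf(z[:, j])) for j in range(2)])
synth = np.zeros((m, 2))
for t in range(1, m):
    synth[t] = phi * synth[t - 1] + innov[t]

print("fitted AR(1) coefficients:", phi.round(2))
print("observed vs synthetic cross-correlation:",
      np.corrcoef(flow.T)[0, 1].round(2), np.corrcoef(synth.T)[0, 1].round(2))
```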

15.
Research on Multi-Stage Asset-Liability Management Models
金秀  黄小原 《管理学报》2007,4(1):118-126
This paper surveys the state of research on multi-stage asset-liability management (ALM) models and their applications, and reviews the related literature. The discussion covers multi-stage ALM models, bank asset-liability management, pension funds and insurance companies, financial planning, dynamic asset allocation, and scenario generation. Finally, in light of the current state of related research and application in China, several directions for future research are proposed.

16.
We examine whether the risk characterization estimated by catastrophic loss projection models is sensitive to the revelation of new information regarding risk type. We use commercial loss projection models from two widely employed modeling firms to estimate the expected hurricane losses of Florida Atlantic University's building stock, both including and excluding secondary information regarding hurricane mitigation features that influence damage vulnerability. We then compare the results of the models without and with this revealed information and find that the revelation of additional, secondary information influences modeled losses for the windstorm-exposed university building stock, primarily evidenced by meaningful percent differences in the loss exceedance output indicated after secondary modifiers are incorporated in the analysis. Secondary risk characteristics for the data set studied appear to have substantially greater impact on probable maximum loss estimates than on average annual loss estimates. While it may be intuitively expected for catastrophe models to indicate that secondary risk characteristics hold value for reducing modeled losses, the finding that the primary value of secondary risk characteristics is in the reduction of losses in "tail" (low-probability, high-severity) events is less intuitive, and therefore especially interesting. Further, we address the benefit-cost tradeoffs that commercial entities must consider when deciding whether to undergo the data collection necessary to include secondary information in modeling. Although we assert the long-term benefit-cost tradeoff is positive for virtually every entity, we acknowledge short-term disincentives to such an effort.
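The commercial loss projection models and the university's exposure data are proprietary, so the sketch below only illustrates the two metrics being contrasted, average annual loss (AAL) and a tail probable maximum loss (PML) read from the simulated annual-loss distribution, for two invented loss distributions standing in for runs "without" and "with" secondary modifiers. The event frequencies, severities, and the assumed effect of mitigation are placeholders.

```python
import numpy as np

rng = np.random.default_rng(16)

def simulate_annual_losses(sigma, n_years=20_000):
    """Toy annual hurricane loss model: Poisson storm counts and lognormal severities."""
    counts = rng.poisson(0.8, n_years)            # storms per year (invented rate)
    return np.array([rng.lognormal(mean=14.0, sigma=sigma, size=c).sum() for c in counts])

# Invented assumption: secondary mitigation features mainly damp severe-event damage,
# represented here as a thinner severity tail.
losses_without = simulate_annual_losses(sigma=1.3)
losses_with    = simulate_annual_losses(sigma=1.1)

for label, losses in [("without secondary modifiers", losses_without),
                      ("with secondary modifiers", losses_with)]:
    aal = losses.mean()                           # average annual loss
    pml_100 = np.percentile(losses, 99)           # approx. 1-in-100-year loss from the exceedance curve
    print(f"{label}: AAL = {aal:,.0f}, 1-in-100 PML = {pml_100:,.0f}")
```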

17.
The assumption that the bond price diffusion function v(t, T) is a quadratic function of time t is the key to constructing a stochastic term-structure model by the risk-neutral method, and the stochastic term-structure model is in turn the basis for building bond pricing models. This paper not only introduces the relevant theoretical models but also conducts an empirical study using price data from China's treasury bond market, establishing a concrete stochastic term-structure model for the instantaneous annual interest rate and a pricing model for treasury bond 961.

18.
Flood loss modeling is an important component of risk analyses and decision support in flood risk management. Commonly, flood loss models describe complex damage processes by simple, deterministic approaches such as depth-damage functions and are associated with large uncertainty. To improve flood loss estimation and to provide quantitative information about the uncertainty associated with loss modeling, a probabilistic, multivariable Bagging decision Tree Flood Loss Estimation MOdel (BT-FLEMO) for residential buildings was developed. The application of BT-FLEMO provides a probability distribution of estimated losses to residential buildings per municipality. BT-FLEMO was applied and validated at the mesoscale in 19 municipalities that were affected during the 2002 flood of the River Mulde in Saxony, Germany. Validation was undertaken, on the one hand, via a comparison with six deterministic loss models, including both depth-damage functions and multivariable models, and, on the other hand, by comparison with official loss data. BT-FLEMO outperforms deterministic, univariable, and multivariable models with regard to model accuracy, although the prediction uncertainty remains high. An important advantage of BT-FLEMO is the quantification of prediction uncertainty. The probability distribution of loss estimates produced by BT-FLEMO represents the variation range of loss estimates of the other models in the case study well.
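Neither BT-FLEMO itself nor the Mulde loss data are available from the abstract. The snippet below sketches the core idea, bagged regression trees whose individual members provide a distribution of loss estimates rather than a single value, using scikit-learn's BaggingRegressor on simulated multivariable flood-loss data; the predictors, the damage rule, and the ensemble settings are invented.

```python
import numpy as np
from sklearn.ensemble import BaggingRegressor

rng = np.random.default_rng(18)

# Simulated building-level predictors: water depth (m), building area (m2),
# precaution indicator, building quality score (all invented).
n = 3000
X = np.column_stack([rng.uniform(0, 3, n),
                     rng.uniform(80, 300, n),
                     rng.integers(0, 2, n),
                     rng.uniform(1, 5, n)])
loss = (0.15 * X[:, 0] * X[:, 1] * (1 - 0.3 * X[:, 2]) / X[:, 3]
        + rng.normal(0, 3, n)).clip(min=0)

# Bagging of regression trees (the default base estimator is a decision tree).
model = BaggingRegressor(n_estimators=200, random_state=0).fit(X, loss)

# A probabilistic prediction for one new building: query every tree in the ensemble.
new_building = np.array([[2.0, 150.0, 0, 2.5]])
member_preds = np.array([tree.predict(new_building)[0] for tree in model.estimators_])
print("point estimate:", member_preds.mean().round(2))
print("5th-95th percentile of the loss estimate:",
      np.percentile(member_preds, [5, 95]).round(2))
```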

19.
Many tests of asset-pricing models address only the pricing predictions, but these pricing predictions rest on portfolio choice predictions that seem obviously wrong. This paper suggests a new approach to asset pricing and portfolio choices based on unobserved heterogeneity. This approach yields the standard pricing conclusions of classical models but is consistent with very different portfolio choices. Novel econometric tests link the price and portfolio predictions and take into account the general equilibrium effects of sample-size bias. This paper works through the approach in detail for the case of the classical capital asset pricing model (CAPM), producing a model called CAPM+ε. When these econometric tests are applied to data generated by large-scale laboratory asset markets that reveal both prices and portfolio choices, CAPM+ε is not rejected.

20.
The purpose of this research study is to analyze sustainable supply chain (SSC) management practices in the Indian automobile industry and to identify the critical factors for their successful implementation. Despite the fact that SSC has been frequently promoted as a means of improving business competitiveness, little empirical evidence exists in the literature validating its positive link with organizational performance. Sustainable supply chain practices (SSCP) not only help in reducing environmental degradation but also have social and economic implications (as per the triple bottom line approach). For this purpose, empirical data are collected to measure the SSCP prevailing in the Indian automobile industry. A structural equation modeling technique is used to build the measurement and structural models. Statistical estimates are then used to validate the model that has been built. The data analysis helps to determine whether to accept or reject the hypotheses stated on the basis of the structural model. The results show how SSCP are correlated and help in improving supply chain performance among the industries surveyed. It is also observed that environmental and social performance have a positive relationship with economic performance.
