Similar Documents
20 similar documents found
1.
Jan F. Van Impe, Risk Analysis, 2011, 31(8): 1295-1307
The aim of quantitative microbiological risk assessment is to estimate the risk of illness caused by the presence of a pathogen in a food type, and to study the impact of interventions. Because of inherent variability and uncertainty, risk assessments are generally conducted stochastically, and where possible it is advisable to characterize variability separately from uncertainty. Sensitivity analysis indicates the input variables to which the outcome of a quantitative microbiological risk assessment is most sensitive. Although a number of methods exist for applying sensitivity analysis to a risk assessment with probabilistic input variables (such as contamination, storage temperature, storage duration, etc.), it is challenging to perform sensitivity analysis when a risk assessment includes a separate characterization of the variability and uncertainty of input variables. A procedure is proposed that focuses on the relation between risk estimates obtained by Monte Carlo simulation and the location of pseudo-randomly sampled input variables within the uncertainty and variability distributions. Within this procedure, two methods are used, an ANOVA-like model and Sobol sensitivity indices, to obtain and compare the impact of the variability and the uncertainty of all input variables, and of model uncertainty and scenario uncertainty. As a case study, the methodology is applied to a risk assessment estimating the risk of contracting listeriosis from the consumption of deli meats.
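The variability/uncertainty separation procedure is specific to the article, but one of its two ingredients, Sobol sensitivity indices, is standard and easy to sketch. Below is a minimal Monte Carlo pick-freeze estimator (Saltelli/Jansen form) of first-order and total indices; the toy model and uniform sampler are hypothetical stand-ins, not the article's listeriosis model.

```python
import numpy as np

def sobol_indices(model, sampler, n=10000, k=3, seed=0):
    """Estimate first-order and total Sobol indices with the
    standard pick-freeze (Saltelli/Jansen) estimators."""
    rng = np.random.default_rng(seed)
    A = sampler(rng, n)          # n x k matrix of inputs
    B = sampler(rng, n)          # independent replicate
    fA, fB = model(A), model(B)
    V = np.var(np.concatenate([fA, fB]), ddof=1)
    S, ST = np.empty(k), np.empty(k)
    for i in range(k):
        ABi = A.copy()
        ABi[:, i] = B[:, i]      # "freeze" every input except X_i
        fABi = model(ABi)
        S[i] = np.mean(fB * (fABi - fA)) / V          # first-order effect
        ST[i] = 0.5 * np.mean((fA - fABi) ** 2) / V   # total effect
    return S, ST

# toy usage: a non-additive model with three inputs
model = lambda X: X[:, 0] + 2 * X[:, 1] + 0.5 * X[:, 0] * X[:, 2]
sampler = lambda rng, n: rng.uniform(-1, 1, size=(n, 3))
print(sobol_indices(model, sampler))
```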

2.
Measures of sensitivity and uncertainty have become an integral part of risk analysis. Many such measures have a conditional probabilistic structure, for which a straightforward Monte Carlo estimation procedure has a double-loop form. Recently, a more efficient single-loop procedure has been introduced, and the consistency of this procedure has been demonstrated separately for particular measures, such as those based on variance, density, and information value. In this work, we give a unified proof of single-loop consistency that applies to any measure satisfying a common rationale. This proof is not only more general but also invokes less restrictive assumptions than previous work, allowing for correlations among model inputs and for categorical variables. We examine the numerical convergence of such an estimator under a variety of sensitivity measures, and we examine its application to a published medical case study.
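A minimal given-data (single-loop) estimator makes the contrast with the double-loop form concrete: one sample of inputs and outputs is drawn, each input's sample is partitioned into equal-frequency bins, and the conditional statistic is estimated per bin. The sketch below does this for the variance-based first-order measure V[E(Y|X_i)]/V[Y]; the binning scheme and toy model are illustrative assumptions, not the paper's construction.

```python
import numpy as np

def single_loop_first_order(x, y, bins=20):
    """Given-data (single-loop) estimate of V[E(Y|X_i)]/V[Y] for each
    column of x, using equal-frequency bins of each input's sample."""
    n, k = x.shape
    V = np.var(y, ddof=1)
    S = np.empty(k)
    for i in range(k):
        edges = np.quantile(x[:, i], np.linspace(0, 1, bins + 1))
        idx = np.clip(np.searchsorted(edges, x[:, i], side="right") - 1,
                      0, bins - 1)
        cond_means = np.array([y[idx == b].mean() for b in range(bins)])
        weights = np.bincount(idx, minlength=bins) / n
        S[i] = np.sum(weights * (cond_means - y.mean()) ** 2) / V
    return S

rng = np.random.default_rng(1)
x = rng.normal(size=(20000, 3))
y = x[:, 0] + 0.5 * x[:, 1] ** 2 + 0.1 * rng.normal(size=20000)
print(single_loop_first_order(x, y))   # X3 should come out near zero
```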

3.
In risk assessment, the moment-independent sensitivity analysis (SA) technique for reducing model uncertainty has attracted a great deal of attention from analysts and practitioners. It aims at measuring the relative importance of an individual input, or a set of inputs, in determining the uncertainty of the model output by looking at the entire distribution range of the model output. In this article, along the lines of Plischke et al., we point out that the original moment-independent SA index (also called the delta index) can also be interpreted as a dependence measure between model output and input variables, and we introduce another moment-independent SA index (the extended delta index) based on copulas. Nonparametric methods for estimating the delta and extended delta indices are then proposed. Both methods need only a single set of samples to compute all the indices; thus, they avoid the "curse of dimensionality." Finally, an analytical test example, a risk assessment model, and the Level E model are employed to compare the delta and extended delta indices and to test the two calculation methods. Results show that the delta and extended delta indices produce the same importance ranking in all three test examples, and that the two proposed calculation methods dramatically reduce the computational burden.
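As a rough illustration of how the delta index can be estimated nonparametrically from a single sample, the sketch below approximates Borgonovo's delta, half the expected L1 distance between the unconditional output density and the density conditional on X_i, using histograms over quantile bins. The bin counts and toy model are assumptions; the article's estimators are more refined.

```python
import numpy as np

def delta_index(x, y, x_bins=15, y_bins=50):
    """Single-sample estimate of the delta index: half the expected L1
    distance between the unconditional output density and the density
    conditional on X_i, approximated with histograms."""
    n, k = x.shape
    y_edges = np.histogram_bin_edges(y, bins=y_bins)
    p_y, _ = np.histogram(y, bins=y_edges, density=True)
    dy = np.diff(y_edges)
    delta = np.empty(k)
    for i in range(k):
        edges = np.quantile(x[:, i], np.linspace(0, 1, x_bins + 1))
        idx = np.clip(np.searchsorted(edges, x[:, i], side="right") - 1,
                      0, x_bins - 1)
        sep = 0.0
        for b in range(x_bins):
            yb = y[idx == b]
            if len(yb) == 0:
                continue
            p_cond, _ = np.histogram(yb, bins=y_edges, density=True)
            sep += (len(yb) / n) * 0.5 * np.sum(np.abs(p_y - p_cond) * dy)
        delta[i] = sep
    return delta

rng = np.random.default_rng(2)
x = rng.uniform(-1, 1, size=(50000, 3))
y = x[:, 0] + x[:, 1] ** 2
print(delta_index(x, y))   # X3 should come out unimportant
```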

4.
In risk analysis problems, the decision-making process is supported by quantitative models. Assessing the relevance of interactions provides essential information for interpreting model results: with such knowledge, analysts and decision makers can understand whether risk is apportioned by individual factor contributions or by their joint action. However, models are often large, requiring a high number of input parameters, and complex, with individual model runs being time consuming. Computational complexity leads analysts to use one-parameter-at-a-time sensitivity methods, which prevent them from assessing interactions. In this work, we illustrate a methodology to quantify interactions in probabilistic safety assessment (PSA) models while still varying one parameter at a time. The method is based on a property of the functional ANOVA decomposition of a finite change that makes it possible to determine exactly the relevance of factors when considered individually or together with their interactions with all other factors. A set of test cases illustrates the technique. We apply the methodology to the analysis of the core damage frequency of the large loss-of-coolant accident of a nuclear reactor. Numerical results reveal the nonadditive model structure, quantify the relevance of interactions, and identify the direction of change (increase or decrease in risk) implied by individual factor variations and by their cooperation.
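The key property is easiest to see with two factors: one-at-a-time runs at the base point and the shifted point recover the individual effects exactly, and the residual of the total finite change is the interaction term. A minimal sketch (two inputs only; the paper handles the general k-factor decomposition):

```python
def finite_change_decomposition(f, x0, x1):
    """Exact functional-ANOVA decomposition of the finite change
    f(x1) - f(x0) for a two-input model, from one-at-a-time runs."""
    f00 = f(x0[0], x0[1])
    d1 = f(x1[0], x0[1]) - f00              # individual effect of factor 1
    d2 = f(x0[0], x1[1]) - f00              # individual effect of factor 2
    d12 = f(x1[0], x1[1]) - f00 - d1 - d2   # residual interaction term
    return d1, d2, d12

f = lambda a, b: a * b + 2 * a              # non-additive test model
print(finite_change_decomposition(f, (1.0, 1.0), (2.0, 3.0)))
```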

5.
Using real estate price data for China's provinces and seventy large and medium-sized cities, matched with listed companies' holdings of buildings and land-use rights, this paper examines at the micro level the role of financing constraints in the transmission mechanism of real estate prices, and the economic consequences of that mechanism. We show that real estate price fluctuations have a significant transmission effect on the financing and investment behavior of Chinese firms, and that the extent of this effect depends mainly on a firm's financing constraints: when the value of a firm's real estate holdings rises, more financially constrained firms raise more external debt, invest at higher levels, and exhibit a stronger tendency toward overinvestment. We further examine the resource-allocation consequences of this transmission effect and find no substantive improvement in firm performance, suggesting that the collateral-price transmission channel may have negative economic consequences. We also find that, as real estate prices fluctuate, more financially constrained firms display larger swings in investment, indicating that financing constraints may amplify cyclical fluctuations in the economy, consistent with the theoretical prediction of Kiyotaki and Moore (1997).

6.
Moment-independent methods for the sensitivity analysis of model output are attracting growing attention among both academics and practitioners. However, the lack of benchmarks against which to compare numerical strategies forces one to rely on ad hoc experiments when estimating the sensitivity measures. This article introduces a methodology for obtaining moment-independent sensitivity measures analytically. We illustrate the procedure by implementing four test cases with different model structures and model input distributions. Numerical experiments are performed at increasing sample size to check the convergence of the sensitivity estimates to the analytical values.

7.
This paper considers linearly constrained stochastic optimization problems in which only noisy measurements of the loss function are available. We propose a method that combines a genetic algorithm (GA) with simultaneous perturbation stochastic approximation (SPSA) to solve such problems. The hybrid method uses the GA to search for the optimum over the whole feasible region, and SPSA to refine the search locally. During the GA and SPSA search, the hybrid method generates new solutions along the gradient projection direction, computed from the active constraints. Because the gradient projection method projects the search direction into the subspace tangent to the active constraints, new solutions satisfy all constraints strictly. The hybrid method is applied to nine typical constrained optimization problems, and the results coincide with the ideal solutions cited in the references. The numerical results indicate that the hybrid method is suitable for multimodal constrained stochastic optimization problems; moreover, every solution it generates satisfies all linear constraints strictly.
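As a sketch of the local-search ingredient, the following implements one SPSA iteration with the gradient estimate projected onto the null space of the active linear constraints via P = I - A^T (A A^T)^{-1} A. Active-set updating and the GA layer are omitted; the gains and the toy problem are illustrative assumptions, not the paper's settings.

```python
import numpy as np

def projected_spsa_step(x, loss, A, a=0.1, c=0.05, rng=None):
    """One SPSA iteration with the gradient estimate projected onto
    the null space of the active linear constraints A x = b, so the
    step preserves feasibility with respect to those constraints."""
    rng = rng or np.random.default_rng()
    delta = rng.choice([-1.0, 1.0], size=x.size)   # Rademacher perturbation
    g_hat = (loss(x + c * delta) - loss(x - c * delta)) / (2 * c) * (1.0 / delta)
    P = np.eye(x.size) - A.T @ np.linalg.solve(A @ A.T, A)  # projection matrix
    return x - a * (P @ g_hat)

# toy: minimize noisy ||x||^2 subject to x1 + x2 = 1
rng = np.random.default_rng(3)
loss = lambda x: float(x @ x + 0.01 * rng.normal())
A = np.array([[1.0, 1.0]])
x = np.array([0.8, 0.2])                 # feasible start
for _ in range(200):
    x = projected_spsa_step(x, loss, A, rng=rng)
print(x)                                  # -> approx [0.5, 0.5]
```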

8.
The parameters in a physiologically based pharmacokinetic (PBPK) model of methylene chloride were varied systematically, and the resulting variation in a number of model outputs was determined as a function of time for mice and humans at several exposure concentrations. The importance of the various parameters in the model was highly dependent on the conditions (concentration, species) for which the simulation was performed and on the model output (dose surrogate) being considered. Model structure also had a significant impact on the results. For sensitivity analysis, particular attention must be paid to the conservation equations to ensure that the variational calculations do not alter mass balance, which would introduce extraneous effects into the model. All of the normalized sensitivity coefficients calculated in this study ranged between −1.12 and 1, and most were much less than 1 in absolute value, indicating that individual input errors are not greatly amplified in the outputs. In addition to ranking parameters by their impact on model predictions, time-dependent sensitivity analysis can also aid the design of experiments for parameter estimation, by predicting the experimental conditions and sampling points that maximize parameter identifiability.
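The normalized sensitivity coefficients referred to here are the standard dimensionless form S_j = (∂Y/∂p_j)(p_j/Y), so that a value of 1 means input errors pass through proportionally. A minimal finite-difference sketch; the one-line pharmacokinetic toy model is an assumption, not the methylene chloride PBPK model:

```python
import numpy as np

def normalized_sensitivity(model, p, dp=1e-3):
    """Normalized sensitivity coefficients S_j = (dY/dp_j) * (p_j / Y),
    estimated by central finite differences."""
    y0 = model(p)
    S = np.empty(len(p))
    for j in range(len(p)):
        h = dp * p[j]
        pp, pm = p.copy(), p.copy()
        pp[j] += h
        pm[j] -= h
        S[j] = (model(pp) - model(pm)) / (2 * h) * (p[j] / y0)
    return S

# toy one-compartment relation: AUC = dose / clearance
model = lambda p: p[0] / p[1]                  # p = [dose, clearance]
print(normalized_sensitivity(model, np.array([100.0, 5.0])))  # ~[1, -1]
```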

9.
Expert knowledge is an important source of input to risk analysis. In practice, experts may be reluctant to characterize their knowledge and the related (epistemic) uncertainty using precise probabilities. The theory of possibility allows for imprecision in probability assignments. The associated possibilistic representation of epistemic uncertainty can be combined with, and transformed into, a probabilistic representation; in this article, we show this with reference to a simple fault tree analysis. We apply an integrated (hybrid) probabilistic-possibilistic computational framework for the joint propagation of the epistemic uncertainty on the values of the (limiting relative frequency) probabilities of the basic events of the fault tree, and we use possibility-probability (probability-possibility) transformations to propagate the epistemic uncertainty within purely probabilistic and purely possibilistic settings. The results of the different approaches (hybrid, probabilistic, and possibilistic) are compared with respect to the representation of uncertainty about the top event (limiting relative frequency) probability. Both the rationale underpinning the approaches and the computational effort they require are critically examined. We conclude that which approaches are relevant in a given setting depends on the purpose of the risk analysis, and that further research is required to make the possibilistic approaches operational in a risk analysis context.
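For a monotone top-event function, possibilistic propagation reduces to interval arithmetic on alpha-cuts. The sketch below propagates two triangular possibility distributions for basic-event probabilities through a two-event OR gate, P = 1 - (1 - p1)(1 - p2); the triangular shapes and numbers are illustrative assumptions, not the article's fault tree.

```python
import numpy as np

def alpha_cut(tri, alpha):
    """Alpha-cut [lo, hi] of a triangular possibility distribution
    tri = (a, m, b) with mode m and support [a, b]."""
    a, m, b = tri
    return a + alpha * (m - a), b - alpha * (b - m)

def top_event_possibility(p1_tri, p2_tri, n_alpha=11):
    """Propagate possibilistic basic-event probabilities through an
    OR gate. P = 1 - (1-p1)(1-p2) is monotone increasing in p1 and p2,
    so interval endpoints map directly to endpoints."""
    cuts = []
    for alpha in np.linspace(0, 1, n_alpha):
        lo1, hi1 = alpha_cut(p1_tri, alpha)
        lo2, hi2 = alpha_cut(p2_tri, alpha)
        cuts.append((alpha,
                     1 - (1 - lo1) * (1 - lo2),    # lower endpoint
                     1 - (1 - hi1) * (1 - hi2)))   # upper endpoint
    return cuts

for alpha, lo, hi in top_event_possibility((1e-4, 1e-3, 1e-2),
                                           (1e-4, 1e-3, 1e-2)):
    print(f"alpha={alpha:.1f}  P_top in [{lo:.2e}, {hi:.2e}]")
```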

10.
Introduction of classical swine fever virus (CSFV) is a continuing threat to the pig production sector in the European Union. A scenario tree model was developed to obtain more insight into the main risk factors determining the probability of CSFV introduction, P(CSFV). As this model contains many uncertain input parameters, sensitivity analysis was used to indicate which of these parameters influence model results most. Group screening, combined with the statistical techniques of design of experiments and meta-modeling, was applied to detect the most important uncertain input parameters among a total of 257 parameters. The response variable chosen was the annual P(CSFV) for the Netherlands. Only 128 scenario calculations were needed to specify the final meta-model. A subsequent one-at-a-time sensitivity analysis was performed on the main effects of this meta-model to explore their impact on the ranking of the risk factors contributing most to the annual P(CSFV). The results indicated that the model outcome is most sensitive to the uncertain input parameters concerning the expected number of classical swine fever epidemics in Germany, Belgium, and the United Kingdom, and the probability that CSFV survives in an empty livestock truck traveling over a distance of 0-900 km.
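The design/meta-model step can be illustrated with a two-level screening design built from a Hadamard matrix: n runs support up to n - 1 factor columns, and main effects fall out of a least-squares fit. This is a generic stand-in for the paper's group-screening procedure; the toy model is hypothetical.

```python
import numpy as np
from scipy.linalg import hadamard

def screening_main_effects(model, k):
    """Two-level screening: build a design from a Hadamard matrix
    (n runs, n-1 >= k factor columns at levels -1/+1) and recover
    main effects by least squares, i.e. a linear meta-model."""
    n = 1
    while n - 1 < k:
        n *= 2                       # Hadamard order must be a power of two
    H = hadamard(n)                  # entries in {-1, +1}
    X = H[:, 1:k + 1]                # drop the all-ones column, keep k factors
    y = np.array([model(row) for row in X])
    beta, *_ = np.linalg.lstsq(np.column_stack([np.ones(n), X]), y,
                               rcond=None)
    return beta[1:]                  # main-effect estimates, one per factor

# toy model: factors 2 and 5 dominate
model = lambda z: 3.0 * z[1] - 2.0 * z[4] + 0.1 * z[0]
effects = screening_main_effects(model, k=10)
print(np.argsort(-np.abs(effects)))  # importance ranking
```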

11.
This paper examines whether a bank exercises a monitoring role when a banker is represented on a firm's board. Bank monitoring reduces information asymmetries and hence lessens a firm's financial constraints, a phenomenon frequently measured by investment-cash flow sensitivity; the sample comprises all non-financial companies listed on the Polish stock exchange during 1999-2002. I find that firms with a banker on the board rely more heavily on bank loans than on internal capital in their investment activities; firms with no banker on the board, by contrast, finance their investment to a larger extent with internal capital than with credit. However, firms with bank-lender representation on the board are almost as financially constrained as firms without it. Hence, the presence of bankers on boards is not associated with bank monitoring; rather, they promote their employer's business. The findings show that the investment of firms with a banker on the board is less sensitive to cash flow than that of firms without bank representatives. This suggests that bankers on the board provide financial expertise that helps those firms reduce financial constraints.

12.
The resource-constrained project scheduling problem (RCPSP) is one of the most representative project scheduling problems; scheduling can be understood as turning resource-constrained parallel activities into sequential ones. This paper studies a local variant of RCPSP, widespread in practice, in which resources are constrained locally rather than globally, and focuses on one class of problem: a set of parallel activities in some stage of a project for which only half the required resources are available. Each resource is reusable and multi-functional but can serve at most two activities, so the activities must be arranged in pairs to minimize the project makespan. We first explore the "local" structure of the problem and quantify the impact of local scheduling on the project makespan; on this basis, we build a 0-1 programming model covering only the locally scheduled activities. We then develop a strong duality theory for the integer program and, combining it with Dantzig-Wolfe decomposition, propose an exact polynomial-time algorithm. Finally, computational tests verify the advantages of the algorithm; for example, on large-scale instances it computes optimal solutions tens of thousands of times faster than conventional exact methods.
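The paper's exact method rests on a 0-1 program, integer programming strong duality, and Dantzig-Wolfe decomposition. As a simplified illustration of the pairing subproblem alone, under the stylized assumption that paired activities run back-to-back on a shared resource (so the stage makespan is the largest pair sum), the classic greedy of pairing longest with shortest minimizes that maximum. This is not the paper's algorithm.

```python
def pair_for_min_makespan(durations):
    """Pair an even number of parallel activities so that each pair
    runs sequentially on one resource; pairing the longest remaining
    activity with the shortest minimizes the maximum pair sum."""
    d = sorted(durations)
    pairs = [(d[i], d[-1 - i]) for i in range(len(d) // 2)]
    makespan = max(a + b for a, b in pairs)
    return pairs, makespan

print(pair_for_min_makespan([3, 7, 2, 8, 5, 4]))
# pairs (2,8), (3,7), (4,5) -> stage makespan 10
```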

13.
This study develops a modified two-stage model to evaluate the productive efficiency, occupancy, and catering service effectiveness of Taiwan's international tourist hotels. The difference between the modified and original two-stage models is that the modified model allows multiple efficiencies to be calculated in a single stage and introduces the concept of intermediate inputs. The modified model was tested on 58 Taiwanese international hotels, and the results show that it offers a more efficient and effective approach, calculating all the efficiencies in a single data envelopment analysis (DEA) implementation rather than through independent efficiency calculations.
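The modified two-stage construction is specific to the study, but its building block, a DEA efficiency score, can be sketched with the standard input-oriented CCR envelopment program: min theta subject to X lambda <= theta * x0, Y lambda >= y0, lambda >= 0. The hotel data below are made up for illustration.

```python
import numpy as np
from scipy.optimize import linprog

def ccr_efficiency(X, Y, j0):
    """Input-oriented CCR efficiency of DMU j0 (envelopment form).
    X is (m inputs x n DMUs), Y is (s outputs x n DMUs).
    Variables: [theta, lambda_1..lambda_n]."""
    m, n = X.shape
    s = Y.shape[0]
    c = np.zeros(n + 1)
    c[0] = 1.0                                   # minimize theta
    A1 = np.hstack([-X[:, [j0]], X])             # X lam - theta*x0 <= 0
    A2 = np.hstack([np.zeros((s, 1)), -Y])       # -Y lam <= -y0
    res = linprog(c,
                  A_ub=np.vstack([A1, A2]),
                  b_ub=np.concatenate([np.zeros(m), -Y[:, j0]]),
                  bounds=[(None, None)] + [(0, None)] * n)
    return res.fun

# 4 hotels, 2 inputs (staff, rooms), 1 output (revenue) -- hypothetical
X = np.array([[20, 30, 25, 40], [100, 120, 90, 150]], dtype=float)
Y = np.array([[300, 360, 310, 380]], dtype=float)
print([round(ccr_efficiency(X, Y, j), 3) for j in range(4)])
```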

14.
Monte Carlo simulations are commonplace in quantitative risk assessments (QRAs). Designed to propagate the variability and uncertainty associated with each individual exposure input parameter in a quantitative risk assessment, Monte Carlo methods statistically combine the individual parameter distributions to yield a single, overall distribution. Critical to such an assessment is the representativeness of each individual input distribution. The authors performed a literature review to collect and compare the distributions used in published QRAs for the parameters of body weight, food consumption, soil ingestion rates, breathing rates, and fluid intake. To provide a basis for comparison, all estimated exposure parameter distributions were evaluated with respect to four properties: consistency, accuracy, precision, and specificity. The results varied depending on the exposure parameter. Even where extensive, well-collected data exist, investigators used a variety of different distributional shapes to approximate these data. Where such data do not exist, investigators have collected their own, often leading to substantial disparity in parameter estimates and in the subsequent choice of distribution. The present findings indicate that more attention must be paid to the data underlying these distributional choices. More emphasis should be placed on sensitivity analyses, on quantifying the impact of assumptions, and on discussing sources of variation when presenting risk assessment results. If such practices and disclosures are followed, Monte Carlo simulations can greatly enhance the accuracy and appropriateness of specific risk assessments. Without them, researchers will only enlarge the risk assessment "black box," a concern already raised by many critics of more traditional risk assessments.
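A minimal example of the propagation step these QRAs perform: draw each exposure parameter from its assumed distribution and combine the draws through an exposure equation, here an average-daily-dose form ADD = C * IR * EF / (BW * AT). All distributions, parameter values, and units below are hypothetical placeholders, which is precisely the representativeness issue the review raises.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 100_000

# hypothetical input distributions for an average-daily-dose model
C  = rng.lognormal(mean=np.log(0.5), sigma=0.6, size=n)  # mg/kg in food
IR = rng.normal(0.25, 0.05, size=n).clip(min=0)          # kg food/day
EF = rng.uniform(200, 350, size=n)                       # days/year
BW = rng.normal(70, 12, size=n).clip(min=30)             # kg body weight
AT = 365.0                                               # averaging time, days

add = C * IR * EF / (BW * AT)                            # mg/kg-day
print(np.percentile(add, [5, 50, 95]))                   # overall distribution
```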

15.
The implications of constrained dependent and independent variables for model parameters are examined. In the context of linear model systems, it is shown that polyhedral constraints on the dependent variables will hold over the domain of the independent variables when a set of polyhedral constraints is satisfied by the model parameters. This result may be used in parameter estimation, in which case all predicted values of the dependent variables are consistent with constraints on the actual values. Also, the implicit constraints that define the set of parameters for many commonly used linear stochastic models with an error term yield values of the dependent variables consistent with the explicit constraints. Models possessing these properties are termed “logically consistent”.

16.
In this article, we propose an integrated direct and indirect flood risk model for small- and large-scale flood events, allowing for dynamic modeling of total economic losses from a flood event through to full economic recovery. A novel approach is taken that translates direct losses of both capital and labor into production losses using the Cobb-Douglas production function, aiming at improved consistency in loss accounting. The recovery of the economy is modeled using a hybrid input-output model, applied to the port region of Rotterdam for six flood events (return periods of 1/10 up to 1/10,000). This procedure provides better insight into the consequences of both high- and low-probability floods. The results show that in terms of expected annual damage, direct losses remain more substantial than indirect losses (approximately 50% larger), but for low-probability events the indirect losses outweigh the direct losses. Furthermore, we explored parameter uncertainty using a global sensitivity analysis, and varied critical assumptions in the modeling framework related to, among others, flood duration and labor recovery, using a scenario approach. Our findings have two important implications for disaster modelers and practitioners. First, high-probability events are qualitatively different from low-probability events in terms of the scale of damages and the length of the full recovery period. Second, parameter influence differs substantially between high-probability and low-probability flood modeling. These findings suggest that a detailed approach is required when assessing the flood risk for a specific region.
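The translation of direct losses into production losses can be sketched directly from the Cobb-Douglas form Y = A * K^alpha * L^(1-alpha): reduce capital and labor by their direct losses and take the difference in output. The parameter values below are illustrative, not the Rotterdam calibration.

```python
def production_loss(K, L, dK, dL, A=1.0, alpha=0.3):
    """Translate direct losses of capital (dK) and labor (dL) into a
    production loss via Cobb-Douglas: Y = A * K^alpha * L^(1-alpha)."""
    y_pre = A * K ** alpha * L ** (1 - alpha)
    y_post = A * (K - dK) ** alpha * (L - dL) ** (1 - alpha)
    return y_pre - y_post

# e.g. a flood destroys 10% of capital and idles 5% of labor
print(production_loss(K=100.0, L=200.0, dK=10.0, dL=10.0))
```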

17.
Sea levels are rising in many areas around the world, posing risks to coastal communities and infrastructure. Strategies for managing these flood risks present decision challenges that require a combination of geophysical, economic, and infrastructure models. Previous studies have broken important new ground on the considerable tensions between the costs of upgrading infrastructure and the damages that could result from extreme flood events. However, many risk-based adaptation strategies remain silent on certain potentially important uncertainties, as well as on the tradeoffs between competing objectives. Here, we implement and improve on a classic decision-analytical model (Van Dantzig, 1956) to: (i) capture tradeoffs across conflicting stakeholder objectives, (ii) demonstrate the consequences of structural uncertainties in the sea-level rise and storm surge models, and (iii) identify the parametric uncertainties that most strongly influence each objective, using global sensitivity analysis. We find that the flood adaptation model produces potentially myopic solutions when formulated using traditional mean-centric decision theory. Moving from a single-objective problem formulation to one with multiobjective tradeoffs dramatically expands the decision space and highlights the need for compromise solutions that address stakeholder preferences. We find deep structural uncertainties that have large effects on the model outcome, with the storm surge parameters accounting for the greatest impacts. Global sensitivity analysis effectively identifies important parameter interactions that local methods overlook and that could have critical implications for flood adaptation strategies.
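The Van Dantzig (1956) starting point balances the cost of heightening a dike against discounted expected flood damages, with the exceedance probability decaying exponentially in the heightening X. A single-objective sketch with illustrative parameters (not the study's calibration), including the closed-form optimum:

```python
import numpy as np

# Van Dantzig-style expected-cost model for dike heightening X (m):
# total cost = I0 + k*X + V * p0 * exp(-alpha * X) / delta
p0, alpha = 0.0038, 2.6      # base exceedance probability, decay rate per m
V, delta = 2e10, 0.02        # value at risk (currency) and discount rate
I0, k = 1.1e8, 4.0e7         # fixed and marginal heightening costs

X = np.linspace(0, 3, 301)
cost = I0 + k * X + V * p0 * np.exp(-alpha * X) / delta

# setting dC/dX = 0 gives the analytical optimum
x_star = (1 / alpha) * np.log(alpha * V * p0 / (delta * k))
print(f"optimal heightening ~ {x_star:.2f} m; "
      f"min expected cost ~ {cost.min():.3e}")
```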

18.
A genetic algorithm (GA) approach is developed for solving the P-model of chance-constrained data envelopment analysis (CCDEA), which incorporates the concept of "satisficing." The problems considered include cases in which both inputs and outputs are stochastic, as well as cases in which only the outputs are stochastic. The standard solution technique has been to derive "deterministic equivalents," which is difficult when all parameters are stochastic, as no compact methods are available. In the proposed approach, the stochastic objective function and chance constraints are used directly within the genetic process, and the feasibility of the chance constraints is checked by stochastic simulation. A case from the Indian banking sector illustrates the approach.
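The feasibility test at the heart of the approach is simple to sketch: for a candidate solution, draw random scenarios and check empirically that the chance constraint holds at the required confidence level. The scenario distribution and toy constraint below are assumptions, not the banking case.

```python
import numpy as np

def chance_constraint_ok(x, g, beta=0.95, n=5000, rng=None):
    """Feasibility check of the kind used inside a GA: estimate by
    stochastic simulation whether P[g(x, xi) <= 0] >= beta for a
    random scenario variable xi."""
    rng = rng or np.random.default_rng()
    xi = rng.normal(size=n)                 # assumed scenario distribution
    satisfied = np.mean(g(x, xi) <= 0)      # empirical satisfaction rate
    return satisfied >= beta

# toy constraint: x1 + x2*xi <= 10 must hold with probability >= 0.95
g = lambda x, xi: x[0] + x[1] * xi - 10.0
print(chance_constraint_ok(np.array([5.0, 2.0]), g))
```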

19.
20.
Sensitivity analysis (SA) methods are a valuable tool for identifying critical control points (CCPs), one of the important steps in the hazard analysis and critical control point (HACCP) approach used to ensure safe food. Many SA methods are used across various disciplines. Furthermore, food safety process risk models pose challenges because they are often highly nonlinear, contain thresholds, and have discrete inputs. It is therefore useful to compare and evaluate SA methods by applying them to an example food safety risk model. Ten SA methods were applied to a draft Vibrio parahaemolyticus (Vp) risk assessment model developed by the Food and Drug Administration. The model was modified so that all inputs were independent. Rankings of key inputs from the different methods were compared. Inputs such as water temperature, the number of oysters per meal, and the distributional assumption for the unrefrigerated time were the most important, whereas time on water, the fraction of pathogenic Vp, and the distributional assumption for the weight of oysters were the least important. Most of the methods gave a similar ranking of key inputs even though they differed in being graphical, mathematical, or statistical, in accounting for individual effects or the joint effect of inputs, and in being model dependent or model independent. A key recommendation is that the methods be compared further on different and more complex food safety models. Model-independent methods, such as ANOVA, the mutual information index, and scatter plots, are expected to be more robust than the others evaluated.
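A minimal version of the comparison exercise: compute importance rankings for the same sample with two different SA methods and measure their agreement. Here the two stand-in methods are Pearson correlation and a binned ANOVA-like conditional-variance measure (ANOVA is among the methods the abstract names; the rest is an assumption), and the toy threshold model imitates the kind of nonlinearity noted above; it is not the Vp model.

```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(7)
n, k = 5000, 4
X = rng.uniform(size=(n, k))
# toy risk model with a threshold nonlinearity; input 4 is noise only
y = 3 * X[:, 0] + (X[:, 1] > 0.8) * 5 + 0.5 * X[:, 2] + rng.normal(0, 0.1, n)

# method 1: absolute Pearson correlation coefficients
r = np.array([abs(np.corrcoef(X[:, i], y)[0, 1]) for i in range(k)])

# method 2: ANOVA-like measure from binned conditional means
def binned_importance(xi, y, bins=20):
    edges = np.quantile(xi, np.linspace(0, 1, bins + 1))
    idx = np.clip(np.searchsorted(edges, xi, side="right") - 1, 0, bins - 1)
    means = np.array([y[idx == b].mean() for b in range(bins)])
    return np.var(means) / np.var(y)

v = np.array([binned_importance(X[:, i], y) for i in range(k)])

print("rankings:", np.argsort(-r), np.argsort(-v))
rho, _ = spearmanr(r, v)              # rank agreement between methods
print("agreement (Spearman):", rho)
```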
