Similar Documents
20 similar documents found.
1.
This article demonstrates the application of sensitivity analysis to risk assessment models with two-dimensional probabilistic frameworks that distinguish between variability and uncertainty. A microbial food safety process risk (MFSPR) model is used as a test bed. The process of identifying key controllable inputs and key sources of uncertainty using sensitivity analysis is challenged by typical characteristics of MFSPR models such as nonlinearity, thresholds, interactions, and categorical inputs. Among many available sensitivity analysis methods, analysis of variance (ANOVA) is evaluated in comparison to commonly used methods based on correlation coefficients. In a two-dimensional risk model, the identification of key controllable inputs that can be prioritized with respect to risk management is confounded by uncertainty. However, as shown here, ANOVA provided robust insights regarding the controllable inputs most likely to lead to effective risk reduction despite uncertainty. ANOVA appropriately selected the top six important inputs, whereas correlation-based methods provided misleading insights. Bootstrap simulation is used to quantify uncertainty in the ranks of inputs due to sampling error. For the selected sample size, differences in F values of 60% or more were associated with clear differences in rank order between inputs. Sensitivity analysis results identified inputs related to the storage of ground beef servings at home as the most important. Risk management recommendations are suggested in the form of a consumer advisory for better handling and storage practices.
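To make the ANOVA-based ranking concrete, here is a minimal sketch assuming a generic Monte Carlo sample; the toy output expression, bin counts, and bootstrap size are illustrative, not taken from the paper:

```python
# Sketch: rank model inputs by one-way ANOVA F statistics, with a bootstrap
# over the sample to gauge stability of the ranks. The toy expression for y
# stands in for an actual risk model's output sample.
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
n, k = 2000, 4
X = rng.uniform(0, 1, size=(n, k))                           # sampled model inputs
y = X[:, 0] ** 2 + 3 * X[:, 1] + 0.1 * rng.normal(size=n)    # stand-in model output

def anova_f(x, y, bins=5):
    """F statistic of the output grouped by equal-count bins of one input."""
    edges = np.quantile(x, np.linspace(0, 1, bins + 1))
    groups = np.digitize(x, edges[1:-1])                     # bin labels 0..bins-1
    return stats.f_oneway(*[y[groups == g] for g in range(bins)]).statistic

def rank_inputs(X, y):
    f_vals = np.array([anova_f(X[:, j], y) for j in range(X.shape[1])])
    return f_vals, np.argsort(-f_vals)                       # descending importance

f_vals, order = rank_inputs(X, y)

# Bootstrap the ranking to quantify sampling error in the ranks.
boot_ranks = np.empty((200, k), dtype=int)
for b in range(200):
    idx = rng.integers(0, n, n)
    _, o = rank_inputs(X[idx], y[idx])
    boot_ranks[b, o] = np.arange(1, k + 1)
print("F values:", np.round(f_vals, 1))
print("mean bootstrap rank per input:", boot_ranks.mean(axis=0))
```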

2.
In risk assessment, the moment‐independent sensitivity analysis (SA) technique for reducing model uncertainty has attracted a great deal of attention from analysts and practitioners. It aims at measuring the relative importance of an individual input, or a set of inputs, in determining the uncertainty of the model output by looking at the entire distribution range of the model output. In this article, along the lines of Plischke et al., we point out that the original moment‐independent SA index (also called the delta index) can also be interpreted as a dependence measure between the model output and input variables, and we introduce another moment‐independent SA index (called the extended delta index) based on copulas. Nonparametric methods for estimating the delta and extended delta indices are then proposed. Both methods need only a single set of samples to compute all the indices; thus, they overcome the “curse of dimensionality.” Finally, an analytical test example, a risk assessment model, and the levelE model are employed to compare the delta and extended delta indices and to test the two calculation methods. Results show that the delta and extended delta indices produce the same importance ranking in these three test examples. It is also shown that the two proposed calculation methods dramatically reduce the computational burden.
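A rough given-data estimator of the delta index along these lines can be sketched with histogram densities; the partition sizes and toy model below are illustrative, and the authors' own estimators differ in detail:

```python
# Sketch of a given-data estimator of the moment-independent delta index:
#   delta_i = 0.5 * E_{X_i}[ integral |f_Y(y) - f_{Y|X_i}(y)| dy ],
# approximated with histograms over an equal-count partition of X_i.
import numpy as np

def delta_index(x, y, n_part=20, n_bins=40):
    edges_y = np.histogram_bin_edges(y, bins=n_bins)
    f_y, _ = np.histogram(y, bins=edges_y, density=True)    # unconditional density
    widths = np.diff(edges_y)
    d = 0.0
    for idx in np.array_split(np.argsort(x), n_part):       # slices of x's range
        f_cond, _ = np.histogram(y[idx], bins=edges_y, density=True)
        d += (len(idx) / len(x)) * 0.5 * np.sum(np.abs(f_y - f_cond) * widths)
    return d

rng = np.random.default_rng(1)
n = 50_000
x1, x2 = rng.normal(size=n), rng.normal(size=n)
y = x1 + 0.2 * x2                                           # toy model
print(delta_index(x1, y), delta_index(x2, y))               # x1 should dominate
```

Note that one sample of (X, Y) serves for every input, which is where the savings over nested-loop estimators come from.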

3.
This guest editorial is a summary of the NCSU/USDA Workshop on Sensitivity Analysis held June 11–12, 2001 at North Carolina State University and sponsored by the U.S. Department of Agriculture's Office of Risk Assessment and Cost Benefit Analysis. The objective of the workshop was to draw lessons from across disciplines in identifying, evaluating, and recommending sensitivity analysis methods and practices for application to food‐safety process risk models. The workshop included presentations regarding the Hazard Analysis and Critical Control Point (HACCP) framework used in food‐safety risk assessment, a survey of sensitivity analysis methods, invited white papers on sensitivity analysis, and invited case studies regarding risk assessment of microbial pathogens in food. Drawing on the interdisciplinary information shared in the presentations, the workshop participants, divided into breakout sessions, responded to three trigger questions: What are the key criteria for sensitivity analysis methods applied to food‐safety risk assessment? Which sensitivity analysis methods are most promising for application to food‐safety risk assessment? And what are the key needs for implementation and demonstration of such methods? The workshop produced agreement regarding key criteria for sensitivity analysis methods and the need to use two or more methods to try to obtain robust insights. Recommendations were made regarding a guideline document to assist practitioners in selecting, applying, interpreting, and reporting the results of sensitivity analysis.

4.
Identification and Review of Sensitivity Analysis Methods
Identification and qualitative comparison of sensitivity analysis methods that have been used across various disciplines, and that merit consideration for application to food-safety risk assessment models, are presented in this article. Sensitivity analysis can help in identifying critical control points, prioritizing additional data collection or research, and verifying and validating a model. Ten sensitivity analysis methods, including four mathematical methods, five statistical methods, and one graphical method, are identified. The selected methods are compared on the basis of their applicability to different types of models, computational issues such as initial data requirements and complexity of their application, representation of the sensitivity, and the specific uses of these methods. Applications of these methods are illustrated with examples from various fields. No one method is clearly best for food-safety risk models. In general, use of two or more methods, preferably with dissimilar theoretical foundations, may be needed to increase confidence in the ranking of key inputs.
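The closing recommendation, comparing methods with dissimilar theoretical foundations, can be sketched as follows; the toy model and the particular pairing of Spearman rank correlation with mutual information are illustrative assumptions, not the article's method list:

```python
# Sketch: rank the same inputs with two methods of dissimilar foundation.
# A rank-correlation method misses a nonmonotonic effect that an
# entropy-based method picks up, illustrating why agreement between
# dissimilar methods builds confidence.
import numpy as np
from scipy import stats
from sklearn.feature_selection import mutual_info_regression

rng = np.random.default_rng(3)
n = 5000
X = rng.uniform(-1, 1, size=(n, 3))
y = X[:, 0] ** 2 + X[:, 1] + 0.05 * rng.normal(size=n)   # X[:,0] acts nonmonotonically

spearman = [abs(stats.spearmanr(X[:, j], y)[0]) for j in range(3)]
mi = mutual_info_regression(X, y, random_state=0)

print("Spearman ranking:", np.argsort(spearman)[::-1])   # misses the nonmonotonic X[:,0]
print("MI ranking:      ", np.argsort(mi)[::-1])         # recovers X[:,0]
```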

5.
This article considers the location-inventory problem in logistics networks under multiple safety stock placement strategies, which requires not only the integrated optimization of location, ordering, transportation, and inventory, but also consideration of different safety stock placement and transshipment policies. Accordingly, six safety stock placement strategies for a two-echelon logistics network are examined in depth, and six corresponding location-inventory models are formulated. When safety stock is pooled centrally, the centralized stock must be delivered through transshipment, so transshipment costs are introduced into the location-inventory model, making the new models more realistic. For the six new models, a genetic algorithm based on allocating individual cost differences is proposed to iteratively search for the optimal combination of location, allocation, and inventory placement strategies. Finally, numerical experiments verify the validity of the models: (1) safety stock and transshipment exhibit a trade-off in which one grows as the other shrinks; and (2) the effect of the safety stock placement and transshipment strategies on total cost depends on the relative weights of the two cost rates. This research can serve as a reference for integrated decisions on location, ordering, and safety stock strategy in two-echelon logistics networks.
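A minimal sketch of a genetic algorithm for this kind of trade-off, assuming invented cost data and a simplified two-strategy comparison (decentralized safety stock versus centralized stock with a transshipment charge); nothing below reproduces the paper's six actual models:

```python
# Sketch: binary genes open/close depots; cost = fixed + transport + safety
# stock, with a transshipment charge under the centralized strategy.
import numpy as np

rng = np.random.default_rng(0)
m, r = 6, 20                                    # candidate depots, retailers
fixed = rng.uniform(50, 120, m)                 # depot fixed costs
dist = rng.uniform(1, 10, (m, r))               # depot-retailer distances
demand_sd = rng.uniform(2, 6, r)                # retailer demand std devs
z, h, t = 1.65, 2.0, 0.8                        # service factor, holding rate, transship rate

def total_cost(genes, centralized):
    open_idx = np.flatnonzero(genes)
    if open_idx.size == 0:
        return 1e9                              # infeasible: no depot open
    assign = open_idx[np.argmin(dist[open_idx], axis=0)]   # nearest-open-depot allocation
    transport = dist[assign, np.arange(r)].sum()
    if centralized:   # pooled safety stock at a hub, moved out by transshipment
        safety = h * z * np.sqrt((demand_sd ** 2).sum()) + t * transport
    else:             # safety stock held separately at each open depot
        safety = h * z * sum(np.sqrt((demand_sd[assign == i] ** 2).sum())
                             for i in open_idx)
    return fixed[open_idx].sum() + transport + safety

def ga(centralized, pop=40, gens=80):
    P = rng.integers(0, 2, (pop, m))
    for _ in range(gens):
        f = np.array([total_cost(ind, centralized) for ind in P])
        parents = P[np.argsort(f)[: pop // 2]]                  # truncation selection
        a = parents[rng.integers(0, pop // 2, pop // 2)]
        b = parents[rng.integers(0, pop // 2, pop // 2)]
        kids = np.where(rng.random((pop // 2, m)) < 0.5, a, b)  # uniform crossover
        kids ^= rng.random((pop // 2, m)) < 0.05                # bit-flip mutation
        P = np.vstack([parents, kids])
    return min(total_cost(ind, centralized) for ind in P)

for strategy in (False, True):
    print("centralized" if strategy else "decentralized", round(ga(strategy), 1))
```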

6.
On Modeling Correlated Random Variables in Risk Assessment
Haas, Charles N. Risk Analysis, 1999, 19(6):1205-1214
Monte Carlo methods in risk assessment are finding increasingly widespread application. With the recognition that inputs may be correlated, the incorporation of such correlations into the simulation has become important. Most implementations rely upon the method of Iman and Conover for generating correlated random variables. In this work, alternative methods using copulas are presented for deriving correlated random variables. It is further shown that the particular algorithm or assumption used may have a substantial effect on the output results, due to differences in higher order bivariate moments.
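A minimal sketch of the copula alternative, assuming a Gaussian copula and illustrative lognormal/gamma marginals standing in for risk-model inputs:

```python
# Sketch: induce correlation between inputs with a Gaussian copula, as an
# alternative to Iman-Conover rank reordering.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n, rho = 100_000, 0.7

# 1) Correlated standard normals with the target copula correlation.
cov = [[1.0, rho], [rho, 1.0]]
z = rng.multivariate_normal([0.0, 0.0], cov, size=n)

# 2) Probability-integral transform to uniforms, then to target marginals.
u = stats.norm.cdf(z)
x1 = stats.lognorm.ppf(u[:, 0], s=0.5)
x2 = stats.gamma.ppf(u[:, 1], a=2.0)

# Rank (Spearman) correlation is preserved almost exactly; the Pearson
# correlation of the transformed variables generally differs from rho,
# which is one way the choice of algorithm shows up in the output.
print(stats.spearmanr(x1, x2)[0], np.corrcoef(x1, x2)[0, 1])
```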

7.
Microbiological food safety is an important economic and health issue in the context of globalization and presents food business operators with new challenges in providing safe foods. The hazard analysis and critical control point approach involves identifying the main steps in food processing and the physical and chemical parameters that have an impact on the safety of foods. In the risk‐based approach, as defined in the Codex Alimentarius, controlling these parameters in such a way that the final products meet a food safety objective (FSO), fixed by the competent authorities, is a big challenge and of great interest to food business operators. Process risk models, issued from the quantitative microbiological risk assessment framework, provide useful tools in this respect. We propose a methodology, called multivariate factor mapping (MFM), for establishing a link between process parameters and compliance with an FSO. For a stochastic and dynamic process risk model of soft cheese made from pasteurized milk with many uncertain inputs, multivariate sensitivity analysis and MFM are combined to (i) identify the critical control points (CCPs) throughout the food chain and (ii) compute the critical limits of the most influential process parameters, located at the CCPs, with regard to the specific process implemented in the model. Due to certain forms of interaction among parameters, the results show some new possibilities for the management of microbiological hazards when an FSO is specified.
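The factor-mapping idea can be sketched as Monte Carlo filtering on an FSO threshold; the toy growth model, parameter names, and quantile-based critical limits below are assumptions for illustration only:

```python
# Sketch: run a Monte Carlo process model, split runs by FSO compliance,
# flag the parameters whose complying/failing distributions differ most,
# and read off candidate critical limits from the complying runs.
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
n = 20_000
storage_temp = rng.uniform(2, 12, n)         # illustrative process parameters
storage_days = rng.uniform(1, 20, n)
initial_conc = rng.normal(-1, 1, n)          # log10 CFU/g

final_conc = initial_conc + 0.05 * storage_temp * storage_days  # toy growth model
fso = 2.0                                    # log10 CFU/g at consumption
comply = final_conc <= fso

params = {"storage_temp": storage_temp,
          "storage_days": storage_days,
          "initial_conc": initial_conc}
for name, p in params.items():
    ks = stats.ks_2samp(p[comply], p[~comply]).statistic   # Smirnov-style split test
    limit = np.quantile(p[comply], 0.95)     # candidate upper critical limit
    print(f"{name}: KS = {ks:.2f}, 95% limit among complying runs = {limit:.2f}")
```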

8.
We describe a one-dimensional probabilistic model of the role of domestic food handling behaviors in salmonellosis risk associated with the consumption of eggs and egg-containing foods. Six categories of egg-containing foods were defined based on the amount of egg contained in the food, whether eggs are pooled, and the degree of cooking practiced by consumers. We used bootstrap simulation to quantify uncertainty in risk estimates due to sampling error, and sensitivity analysis to identify key sources of variability and uncertainty in the model. Because of typical model characteristics such as nonlinearity, interaction between inputs, thresholds, and saturation points, Sobol's method, a novel sensitivity analysis approach, was used to identify key sources of variability. Based on the mean probability of illness, examples of foods from the food categories ranked from most to least risk of illness were: (1) home-made salad dressings/ice cream; (2) fried eggs/boiled eggs; (3) omelettes; and (4) baked foods/breads. For food categories that may include uncooked eggs (e.g., home-made salad dressings/ice cream), consumer handling conditions such as storage time and temperature after food preparation were the key sources of variability. In contrast, for food categories associated with undercooked eggs (e.g., fried/soft-boiled eggs), the initial level of Salmonella contamination and the log10 reduction due to cooking were the key sources of variability. Important sources of uncertainty varied with both the risk percentile and the food category under consideration. This work adds to previous risk assessments focused on egg production and storage practices, and provides a science-based approach to inform consumer risk communications regarding safe egg handling practices.
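A minimal sketch of Sobol's method using the standard pick-and-freeze estimator for first-order indices, on a toy model with a threshold and an interaction; this is not the article's egg model:

```python
# Sketch: first-order Sobol indices via the Saltelli pick-and-freeze scheme,
# which handles nonlinearity, thresholds, and interactions that defeat
# correlation-based measures.
import numpy as np

def model(X):                        # nonlinear, with threshold and interaction
    return np.where(X[:, 0] > 0.5, 5.0, 1.0) * X[:, 1] + X[:, 2]

rng = np.random.default_rng(11)
n, k = 50_000, 3
A = rng.uniform(0, 1, (n, k))
B = rng.uniform(0, 1, (n, k))
yA, yB = model(A), model(B)
var_y = np.var(np.concatenate([yA, yB]))

for i in range(k):
    ABi = A.copy()
    ABi[:, i] = B[:, i]              # freeze all columns but i
    s1 = np.mean(yB * (model(ABi) - yA)) / var_y   # Saltelli-style estimator
    print(f"S{i+1} ≈ {s1:.2f}")
```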

9.
A Note on Compounded Conservatism
Compounded conservatism (or "creeping safety") describes the impact of using conservative, upper-bound estimates of the values of multiple input variates to obtain a conservative estimate of risk modeled as an increasing function of those variates. In a simple multiplicative model of risk, for example, if upper p-fractile (100p-th percentile) values are used for each of several statistically independent input variates, the resulting risk estimate will be the upper p'-fractile of risk predicted according to that multiplicative model, where p' > p. The amount of compounded conservatism reflected by the difference between p' and p may be substantial, depending on the number of inputs, their relative uncertainties, and the value of p selected. Particular numerical examples of compounded conservatism are often cited, but an analytic approach may better serve to conceptualize and communicate its potential quantitative impact. This note briefly outlines such an approach and illustrates its application to the case of risk modeled as a product of lognormally distributed inputs.
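For the lognormal-product case, the compounding admits a closed form: if R is a product of k independent lognormal inputs with log-scale standard deviations sigma_i, and each input is set at its upper-p fractile, the product sits at p' = Phi(z_p * sum(sigma_i) / sqrt(sum(sigma_i^2))), since ln R is normal with variance sum(sigma_i^2). A short sketch with illustrative sigma values:

```python
# Sketch: compute p' from p for risk modeled as a product of independent
# lognormal inputs, each set at its upper-p fractile.
import numpy as np
from scipy.stats import norm

def compounded_percentile(p, sigmas):
    z = norm.ppf(p)
    return norm.cdf(z * np.sum(sigmas) / np.sqrt(np.sum(np.square(sigmas))))

sigmas = [0.5, 0.5, 0.5, 0.5]        # equal uncertainty in four inputs (illustrative)
for p in (0.90, 0.95, 0.99):
    print(p, "->", round(compounded_percentile(p, sigmas), 4))
# With four equal sigmas the ratio is sqrt(4) = 2, so z_{p'} = 2 * z_p:
# p = 0.95 maps to p' ≈ 0.9995, a substantial compounding.
```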

10.
In practice, systems are often composed of a group of sub-units. Each sub-unit has a set of performance metrics that are classified as inputs and outputs in data envelopment analysis (DEA). Conventional DEA views such a system as a "black box"; other DEA-based models have been developed to investigate the inner structure, either with a serial structure in which components are connected by intermediate products, or with a parallel structure under the key assumption that all sub-units are associated with the same types of inputs and outputs (in differing amounts) without links between them. In many applications, however, this property of identical input/output factors may not hold. For example, factories may have various manufacturing lines whose inputs and outputs differ from one another. The current paper proposes a series of DEA models to accommodate settings where non-homogeneous sub-units operate in parallel network structures with intermediate measures or links. Both the overall performance of the entire parallel network system and the efficiency decomposition for each sub-unit can be evaluated through our method.
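As a building block, the basic input-oriented CCR multiplier model can be sketched as a linear program; the data are invented, and the paper's parallel-network extensions would add linking constraints on the intermediate measures:

```python
# Sketch: input-oriented CCR DEA in multiplier form, solved per unit.
# Maximize u'y0 subject to v'x0 = 1 and u'y_j - v'x_j <= 0 for all units j.
import numpy as np
from scipy.optimize import linprog

X = np.array([[2.0, 3.0], [4.0, 1.0], [3.0, 2.0]])   # units x inputs
Y = np.array([[1.0], [1.0], [1.5]])                  # units x outputs

def ccr_efficiency(j0):
    n_out, n_in = Y.shape[1], X.shape[1]
    c = np.concatenate([-Y[j0], np.zeros(n_in)])     # maximize u'y0 (minimize negative)
    A_ub = np.hstack([Y, -X])                        # u'y_j - v'x_j <= 0
    b_ub = np.zeros(len(X))
    A_eq = [np.concatenate([np.zeros(n_out), X[j0]])]  # normalization v'x0 = 1
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0],
                  bounds=[(0, None)] * (n_out + n_in))
    return -res.fun

for j in range(len(X)):
    print(f"unit {j}: efficiency = {ccr_efficiency(j):.3f}")
```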

11.
M. C. Kennedy. Risk Analysis, 2011, 31(10):1597-1609
Two‐dimensional Monte Carlo simulation is frequently used to implement probabilistic risk models, as it allows uncertainty and variability to be quantified separately. In many cases, we are interested in the proportion of individuals from a variable population exceeding a critical threshold, together with uncertainty about this proportion. In this article we introduce a new method that can estimate these quantities accurately and much more efficiently than conventional algorithms. We also show how those model parameters having the greatest impact on the probabilities of rare events can be quickly identified via this method. The algorithm combines elements from well‐established statistical techniques in extreme value theory and Bayesian analysis of computer models. We demonstrate the practical application of these methods with a simple example, in which the true distributions are known exactly, and also with a more realistic model of microbial contamination of milk with seven parameters. For the latter, sensitivity analysis (SA) is shown to identify the two inputs explaining the majority of variation in distribution tail behavior. In the subsequent prediction of probabilities of large contamination events, similar results are obtained using the new approach, which takes 43 seconds, and the conventional simulation, which requires more than 3 days.
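For contrast, the conventional nested two-dimensional Monte Carlo that the article improves on can be sketched as follows, with illustrative distributions:

```python
# Sketch: nested 2D Monte Carlo estimating the uncertain proportion of a
# variable population exceeding a threshold.
import numpy as np

rng = np.random.default_rng(8)
n_unc, n_var, threshold = 200, 10_000, 6.0   # outer x inner sample sizes

# Outer loop: uncertainty about the population mean and sd.
mus = rng.normal(3.0, 0.3, n_unc)
sds = rng.uniform(0.8, 1.2, n_unc)

exceed = np.empty(n_unc)
for i in range(n_unc):
    pop = rng.normal(mus[i], sds[i], n_var)  # inner loop: variability
    exceed[i] = np.mean(pop > threshold)

lo, med, hi = np.percentile(exceed, [2.5, 50, 97.5])
print(f"P(exceed): median {med:.2e}, 95% interval [{lo:.2e}, {hi:.2e}]")
# For rarer events the inner sample must grow sharply, which is exactly the
# cost the extreme-value/emulator approach is designed to avoid.
```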

12.
Quantitative risk assessment (QRA) models are used to estimate the risks of transporting dangerous goods and to assess the merits of introducing alternative risk reduction measures for different transportation scenarios and assumptions. A comprehensive QRA model recently was developed in Europe for application to road tunnels. This model can assess the merits of a limited number of "native safety measures." In this article, we introduce a procedure for extending its scope to include the treatment of a number of important "nonnative safety measures" of interest to tunnel operators and decisionmakers. Nonnative safety measures were not included in the original model specification. The suggested procedure makes use of expert judgment and Monte Carlo simulation methods to model uncertainty in the revised risk estimates. The results of a case study application are presented that involve the risks of transporting a given volume of flammable liquid through a 10-km road tunnel.
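Folding an expert-judged nonnative measure into a Monte Carlo risk estimate can be sketched with an elicited triangular distribution for the measure's effect; all numbers below are illustrative:

```python
# Sketch: experts give min/mode/max for a measure's multiplicative
# risk-reduction factor; Monte Carlo propagates that uncertainty onto
# an uncertain baseline risk estimate.
import numpy as np

rng = np.random.default_rng(2)
n = 100_000
baseline = rng.lognormal(mean=np.log(1e-6), sigma=0.5, size=n)  # uncertain baseline risk

# Elicited triangular distribution for the measure's effect (illustrative).
reduction = rng.triangular(left=0.3, mode=0.5, right=0.9, size=n)
revised = baseline * reduction

for q in (5, 50, 95):
    print(f"{q}th pct: baseline {np.percentile(baseline, q):.2e}, "
          f"revised {np.percentile(revised, q):.2e}")
```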

13.
Since the National Food Safety Initiative of 1997, risk assessment has been an important issue in food safety. Microbial risk assessment is a systematic process for describing and quantifying the potential for adverse health effects associated with exposure to microorganisms. Various dose-response models for estimating microbial risks have been investigated. We considered four two-parameter models and four three-parameter models in order to evaluate variability among models for microbial risk assessment, using infectivity and illness data from studies with human volunteers exposed to a variety of microbial pathogens. Model variability is measured in terms of estimated ED01s and ED10s, with the view that these effective dose levels correspond to the lower and upper limits of the 1% to 10% risk range generally recommended for establishing benchmark doses in risk assessment. Parameters of the statistical models are estimated using the maximum likelihood method. In this article, a weighted average of effective dose estimates from eight two- and three-parameter dose-response models, with weights determined by the Kullback information criterion, is proposed to address model uncertainties in microbial risk assessment. The proposed procedures for incorporating model uncertainties and making inferences are illustrated with human infection/illness dose-response data sets.
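A minimal sketch of the workflow, fitting two of the dose-response forms by binomial maximum likelihood and weighting the resulting ED10s; the data are invented, and AIC-style weights stand in here for the Kullback information criterion:

```python
# Sketch: ML fits of exponential and beta-Poisson dose-response forms,
# ED10 from each, and an information-criterion-weighted average.
import numpy as np
from scipy.optimize import minimize, brentq

dose = np.array([1e2, 1e3, 1e4, 1e5, 1e6])      # invented dosing trial
n_sub = np.array([10, 10, 10, 10, 10])
n_ill = np.array([0, 1, 3, 7, 9])

def exp_model(d, th):                            # exponential: 1 - exp(-r d)
    return 1.0 - np.exp(-np.exp(th[0]) * d)

def bp_model(d, th):                             # beta-Poisson approximation
    a, b = np.exp(th)
    return 1.0 - (1.0 + d / b) ** (-a)

def nll(th, f):                                  # binomial negative log-likelihood
    p = np.clip(f(dose, th), 1e-12, 1 - 1e-12)
    return -np.sum(n_ill * np.log(p) + (n_sub - n_ill) * np.log(1 - p))

fits = [(exp_model, minimize(nll, [-10.0], args=(exp_model,), method="Nelder-Mead")),
        (bp_model, minimize(nll, [0.0, 10.0], args=(bp_model,), method="Nelder-Mead"))]

ed10, aic = [], []
for f, res in fits:
    ed10.append(brentq(lambda d: f(d, res.x) - 0.10, 1e-3, 1e9))  # dose at 10% risk
    aic.append(2 * len(res.x) + 2 * res.fun)     # AIC stand-in for the Kullback criterion
w = np.exp(-0.5 * (np.array(aic) - min(aic)))
w /= w.sum()
print("ED10s:", [f"{e:.3g}" for e in ed10], "weights:", np.round(w, 2))
print("weighted ED10:", f"{np.dot(w, ed10):.3g}")
```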

14.
Felicia Wu & Joseph V. Rodricks. Risk Analysis, 2020, 40(Z1):2218-2230
Before the founding of the Society for Risk Analysis (SRA) in 1980, food safety in the United States had long been a concern, but there was a lack of systematic methods to assess food-related risks. In 1906, the U.S. Congress passed, and President Roosevelt signed, the Pure Food and Drug Act and the Meat Inspection Act to regulate food safety at the federal level. These Acts followed the publication of multiple reports of food contamination, culminating in Upton Sinclair's novel The Jungle, which highlighted food and worker abuses in the meatpacking industry. Later in the 20th century, important developments in agricultural and food technology greatly increased food production. But chemical exposures from agricultural and other practices resulted in major amendments to federal food laws, including the Delaney Clause, aimed specifically at cancer-causing chemicals. Later in the 20th century, when quantitative risk assessment methods were given greater scientific status in a seminal National Research Council report, food safety risk assessment became more systematized. Additionally, in these last 40 years, food safety research has resulted in increased understanding of a range of health effects from foodborne chemicals, and technological developments have improved U.S. food safety from farm to fork by offering new ways to manage risks. We discuss the history of food safety and the role risk analysis has played in its evolution, starting from over a century ago, but focusing on the last 40 years. While we focus on chemical risk assessment in the U.S., we also discuss microbial risk assessment and international food safety.

15.
Increasing evidence suggests that persistence of Listeria monocytogenes in food processing plants has been the underlying cause of a number of human listeriosis outbreaks. This study extracts criteria used by food safety experts in determining bacterial persistence in the environment, using retail delicatessen operations as a model. Using the Delphi method, we conducted an expert elicitation with 10 food safety experts from academia, industry, and government to classify L. monocytogenes persistence based on environmental sampling results collected over six months for 30 retail delicatessen stores. The results were modeled using variations of random forest, support vector machine, logistic regression, and linear regression; variable importance values of random forest and support vector machine models were consolidated to rank important variables in the experts' classifications. The duration of subtype isolation ranked most important across all expert categories. Sampling site category also ranked high in importance and validation errors doubled when this covariate was removed. Support vector machine and random forest models successfully classified the data with average validation errors of 3.1% and 2.2% (n = 144), respectively. Our findings indicate that (i) the frequency of isolations over time and sampling site information are critical factors for experts determining subtype persistence, (ii) food safety experts from different sectors may not use the same criteria in determining persistence, and (iii) machine learning models have potential for future use in environmental surveillance and risk management programs. Future work is necessary to validate the accuracy of expert and machine classification against biological measurement of L. monocytogenes persistence.
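The random forest step can be sketched with scikit-learn on synthetic stand-ins for the elicited labels; the feature names and the label rule below are assumptions, not the study's data:

```python
# Sketch: classify subtype persistence from sampling covariates with a
# random forest and inspect variable importances.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(4)
n = 144
duration = rng.integers(1, 7, n)          # months over which a subtype was isolated
n_isolates = rng.integers(1, 15, n)       # number of positive samples
site_cat = rng.integers(0, 3, n)          # e.g., food contact / nonfood contact / drain
X = np.column_stack([duration, n_isolates, site_cat])
y = (duration >= 4) & (site_cat != 1)     # synthetic "expert" persistence label

clf = RandomForestClassifier(n_estimators=500, random_state=0)
print("CV accuracy:", cross_val_score(clf, X, y, cv=5).mean())
clf.fit(X, y)
print("importances (duration, isolates, site):",
      np.round(clf.feature_importances_, 2))
```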

16.
Public risk perceptions and demand for safer food are important factors shaping agricultural production practices in the United States. Despite documented food safety concerns, little attempt has been made to elicit consumers' subjective risk judgments for a range of food safety hazards or to identify factors most predictive of perceived food safety risks. In this study, over 700 conventional and organic fresh produce buyers in the Boston area were surveyed for their perceived food safety risks. Survey results showed that consumers perceived relatively high risks associated with the consumption and production of conventionally grown produce compared with other public health hazards. For example, conventional and organic food buyers estimated the median annual fatality rate due to pesticide residues on conventionally grown food to be about 50 per million and 200 per million, respectively, which is similar in magnitude to the annual mortality risk from motor vehicle accidents in the United States. Over 90% of survey respondents also perceived a reduction in pesticide residue risk associated with substituting organically grown produce for conventionally grown produce, and nearly 50% perceived a risk reduction due to natural toxins and microbial pathogens. Multiple regression analyses indicate that only a few factors are consistently predictive of higher risk perceptions, including feelings of distrust toward regulatory agencies and the safety of the food supply. A variety of factors were found to be significant predictors of specific categories of food hazards, suggesting that consumers may view food safety risks as dissimilar from one another. Based on study findings, it is recommended that future agricultural policies and risk communication efforts utilize a comparative risk approach that targets a range of food safety hazards.

17.
An assumption of multivariate normality for a decision model is validated in this paper. Measurements for the independent variables of a bond rating model were taken from a sample of municipal bonds. Three methods for examining both univariate and multivariate normality (including normal probability plots) are described and applied to the bond data. The results imply, after applying normalizing transformations to four of the variables, that the data reasonably approximate multivariate normality, thereby validating a distributional requirement of the discriminant-analysis-based decision model. The methods described in the paper may also be used by others interested in examining multivariate normality assumptions of decision models.
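A minimal sketch of such checks, combining univariate screening, a normalizing transformation, and a Mahalanobis-distance check of multivariate normality, on synthetic stand-ins for the bond measurements:

```python
# Sketch: univariate Shapiro-Wilk screening, a log transform for a skewed
# variable, then squared Mahalanobis distances compared against their
# chi-square reference distribution as a multivariate normality check.
import numpy as np
from scipy import stats

rng = np.random.default_rng(9)
X = rng.multivariate_normal(np.zeros(3), np.eye(3), size=150)  # stand-in data
X[:, 2] = np.exp(X[:, 2])                    # one skewed variable, as in the paper

for j in range(X.shape[1]):                  # univariate screening
    print(f"variable {j}: Shapiro-Wilk p = {stats.shapiro(X[:, j]).pvalue:.3f}")
X[:, 2] = np.log(X[:, 2])                    # normalizing transformation

# Under multivariate normality, squared Mahalanobis distances ~ chi-square(p).
diff = X - X.mean(axis=0)
d2 = np.einsum("ij,jk,ik->i", diff, np.linalg.inv(np.cov(X, rowvar=False)), diff)
ks = stats.kstest(d2, stats.chi2(df=X.shape[1]).cdf)
print(f"Mahalanobis d^2 vs chi-square: KS p = {ks.pvalue:.3f}")
```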

18.
The classic newsvendor model was developed under the assumption that period‐to‐period demand is independent over time. In real‐life applications, the notion of independent demand is often challenged. In this article, we examine the newsvendor model in the presence of correlated demands. Specifically, under a stationary AR(1) demand, we study the performance of the traditional newsvendor implementation versus a dynamic forecast‐based implementation. We demonstrate theoretically that implementing a minimum mean square error (MSE) forecast model will always have improved performance relative to the traditional implementation in terms of cost savings. In light of the widespread usage of all‐purpose models like the moving‐average method and exponential smoothing method, we compare the performance of these popular alternative forecasting methods against both the MSE‐optimal implementation and the traditional newsvendor implementation. If only alternative forecasting methods are being considered, we find that under certain conditions it is best to ignore the correlation and opt out of forecasting and to simply implement the traditional newsvendor model.
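The core comparison can be sketched for a stationary AR(1) demand; parameters and cost rates below are illustrative:

```python
# Sketch: traditional newsvendor (quantile of the unconditional demand
# distribution) vs. an MSE-optimal one-step forecast plus the quantile of
# the forecast error, under AR(1) demand.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(6)
mu, rho, sigma = 100.0, 0.6, 10.0
cu, co = 4.0, 1.0                            # underage / overage costs
z = norm.ppf(cu / (cu + co))                 # critical fractile

T = 200_000
eps = rng.normal(0, sigma, T)
d = np.empty(T); d[0] = mu
for t in range(1, T):
    d[t] = mu + rho * (d[t - 1] - mu) + eps[t]

def avg_cost(q):                             # per-period newsvendor cost
    return np.mean(cu * np.maximum(d[1:] - q, 0) + co * np.maximum(q - d[1:], 0))

# (a) traditional: unconditional sd of AR(1) is sigma / sqrt(1 - rho^2)
q_trad = mu + z * sigma / np.sqrt(1 - rho ** 2)
# (b) MSE forecast: conditional mean mu + rho*(d_{t-1} - mu), error sd sigma
q_mse = mu + rho * (d[:-1] - mu) + z * sigma
print("traditional:", avg_cost(q_trad), "MSE forecast:", avg_cost(q_mse))
```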

19.
Estimation of uncertainties associated with model predictions is an important component of the application of environmental and biological models. "Traditional" methods for propagating uncertainty, such as standard Monte Carlo and Latin Hypercube Sampling, however, often require performing a prohibitive number of model simulations, especially for complex, computationally intensive models. Here, a computationally efficient method for uncertainty propagation, the Stochastic Response Surface Method (SRSM), is coupled with another method, the Automatic Differentiation of FORTRAN (ADIFOR). The SRSM is based on series expansions of model inputs and outputs in terms of a set of "well-behaved" standard random variables. The ADIFOR method is used to transform the model code into one that calculates the derivatives of the model outputs with respect to inputs or transformed inputs. The calculated model outputs and the derivatives at a set of sample points are used to approximate the unknown coefficients in the series expansions of outputs. A framework for the coupling of the SRSM and ADIFOR is developed and presented here. Two case studies are presented, involving (1) a physiologically based pharmacokinetic model for perchloroethylene for humans, and (2) an atmospheric photochemical model, the Reactive Plume Model. The results obtained agree closely with those of traditional Monte Carlo and Latin hypercube sampling methods, while reducing the required number of model simulations by about two orders of magnitude.
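The SRSM idea, a polynomial expansion in standard random variables fitted on a small design, can be sketched as follows; the ADIFOR-supplied derivatives, which would augment the regression, are omitted, and the "expensive model" is a stand-in:

```python
# Sketch: expand the model output in probabilists' Hermite polynomials of
# standard normal inputs, fit coefficients by least squares on a small
# design, then propagate uncertainty with the cheap surrogate.
import numpy as np
from numpy.polynomial.hermite_e import hermevander

def expensive_model(x):                       # stand-in for the real simulator
    return np.exp(0.3 * x[:, 0]) + 0.5 * x[:, 1] ** 2

def basis(X, deg):                            # tensor Hermite basis up to `deg`
    return np.einsum("ni,nj->nij",
                     hermevander(X[:, 0], deg),
                     hermevander(X[:, 1], deg)).reshape(len(X), -1)

rng = np.random.default_rng(10)
n_design, deg = 60, 3
X = rng.normal(size=(n_design, 2))            # "well-behaved" standard random variables
coef, *_ = np.linalg.lstsq(basis(X, deg), expensive_model(X), rcond=None)

# Uncertainty propagation on a large Monte Carlo sample of the surrogate.
Xb = rng.normal(size=(100_000, 2))
yb = basis(Xb, deg) @ coef
print("surrogate mean/sd:", yb.mean(), yb.std())
print("direct MC mean/sd:", expensive_model(Xb).mean(), expensive_model(Xb).std())
```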

20.
The choice of a dose-response model is decisive for the outcome of quantitative risk assessment. Single-hit models have played a prominent role in dose-response assessment for pathogenic microorganisms since their introduction. Hit theory models are based on a few simple concepts that are attractive for their clarity and plausibility. These models, in particular the Beta Poisson model, are used for extrapolation of experimental dose-response data to low doses, as are often present in drinking water or food products. Unfortunately, the Beta Poisson model, as it is used throughout the microbial risk literature, is an approximation whose validity is not widely known. The exact functional relation is numerically complex, especially for use in optimization or uncertainty analysis. Here it is shown that although the discrepancy between the Beta Poisson formula and the exact function is not very large for many data sets, the differences are greatest at low doses, the region of interest for many risk applications. Errors may become very large, however, in the results of uncertainty analysis, or when the data contain little low-dose information. One striking property of the exact single-hit model is that it has a maximum risk curve, limiting the upper confidence level of the dose-response relation. This is because the risk cannot exceed the probability of exposure, a property that is not retained in the Beta Poisson approximation. This maximum possible response curve is important for uncertainty analysis, and for risk assessment of pathogens with unknown properties.
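The gap can be sketched directly, since the exact single-hit (beta) dose-response is P(d) = 1 - 1F1(alpha, alpha + beta, -d) with Kummer's confluent hypergeometric function; the parameter values below are chosen to exaggerate the low-dose discrepancy:

```python
# Sketch: exact single-hit response vs. the common Beta Poisson
# approximation P(d) = 1 - (1 + d/beta)^(-alpha), compared against the
# maximum-risk curve 1 - exp(-d) (the probability of ingesting at least
# one organism).
import numpy as np
from scipy.special import hyp1f1

a, b = 0.5, 0.2                   # illustrative; approximation is worst for small beta
doses = np.logspace(-3, 2, 6)

exact = 1.0 - hyp1f1(a, a + b, -doses)
approx = 1.0 - (1.0 + doses / b) ** (-a)
max_risk = 1.0 - np.exp(-doses)   # risk cannot exceed probability of exposure

for row in zip(doses, exact, approx, max_risk):
    print("dose {:9.3f}: exact {:.3e}  approx {:.3e}  max {:.3e}".format(*row))
# With these parameters the approximation overshoots the maximum-risk curve
# at low doses, while the exact single-hit form never does.
```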
