Similar Literature (20 results)
1.
Modeling Microbial Growth Within Food Safety Risk Assessments
Risk estimates for food-borne infection will usually depend heavily on numbers of microorganisms present on the food at the time of consumption. As these data are seldom available directly, attention has turned to predictive microbiology as a means of inferring exposure at consumption. Codex guidelines recommend that microbiological risk assessment should explicitly consider the dynamics of microbiological growth, survival, and death in foods. This article describes predictive models and resources for modeling microbial growth in foods, and their utility and limitations in food safety risk assessment. We also aim to identify tools, data, and knowledge sources, and to provide an understanding of the microbial ecology of foods so that users can recognize model limits, avoid modeling unrealistic scenarios, and thus be able to appreciate the levels of confidence they can have in the outputs of predictive microbiology models. The microbial ecology of foods is complex. Developing reliable risk assessments involving microbial growth in foods will require the skills of both microbial ecologists and mathematical modelers. Simplifying assumptions will need to be made, but because of the potential for apparently small errors in growth rate to translate into very large errors in the estimate of risk, the validity of those assumptions should be carefully assessed. Quantitative estimates of absolute microbial risk within narrow confidence intervals do not yet appear to be possible. Nevertheless, the expression of microbial ecology knowledge in "predictive microbiology" models does allow decision support using the tools of risk assessment.
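As a concrete illustration of the kind of predictive microbiology model the abstract refers to, the sketch below combines a Ratkowsky square-root secondary model for the maximum growth rate with a simple exponential primary model and shows how a modest error in the growth rate propagates into the predicted count. The parameter values (b, t_min, storage conditions) are illustrative placeholders, not values taken from the article.

```python
import numpy as np

def mu_max_sqrt(temp_c, b=0.023, t_min=-1.9):
    """Ratkowsky square-root secondary model: sqrt(mu_max) = b * (T - Tmin).
    Returns mu_max in ln units per hour. b and t_min are illustrative values,
    not fitted parameters from the article."""
    return 0.0 if temp_c <= t_min else (b * (temp_c - t_min)) ** 2

def predicted_log10_count(hours, temp_c, log10_n0=2.0, rate_scale=1.0):
    """Primary model: exponential growth with no lag or stationary phase."""
    mu = rate_scale * mu_max_sqrt(temp_c)
    return log10_n0 + mu * hours / np.log(10)

# A 10% error in growth rate at 10 C over a 48 h cold-chain abuse period:
exact = predicted_log10_count(48, 10.0)
inflated = predicted_log10_count(48, 10.0, rate_scale=1.1)
print(f"log10 CFU/g: {exact:.2f} vs {inflated:.2f}")
print(f"factor difference in predicted dose: {10 ** (inflated - exact):.1f}x")
```

Because risk scales roughly with dose at low doses, even the fraction-of-a-log difference printed here translates into a comparable multiplicative error in the risk estimate, and the gap widens with longer storage or warmer temperature abuse.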

2.
Sensitivity analysis (SA) methods are a valuable tool for identifying critical control points (CCPs), which is one of the important steps in the hazard analysis and CCP approach that is used to ensure safe food. There are many SA methods used across various disciplines. Furthermore, food safety process risk models pose challenges because they often are highly nonlinear, contain thresholds, and have discrete inputs. Therefore, it is useful to compare and evaluate SA methods based upon applications to an example food safety risk model. Ten SA methods were applied to a draft Vibrio parahaemolyticus (Vp) risk assessment model developed by the Food and Drug Administration. The model was modified so that all inputs were independent. Rankings of key inputs from different methods were compared. Inputs such as water temperature, number of oysters per meal, and the distributional assumption for the unrefrigerated time were the most important inputs, whereas time on water, fraction of pathogenic Vp, and the distributional assumption for the weight of oysters were the least important inputs. Most of the methods gave a similar ranking of key inputs even though the methods differed in terms of being graphical, mathematical, or statistical, accounting for individual effects or joint effect of inputs, and being model dependent or model independent. A key recommendation is that methods be further compared by application on different and more complex food safety models. Model independent methods, such as ANOVA, mutual information index, and scatter plots, are expected to be more robust than others evaluated.
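A minimal sketch of one of the simpler, model-independent sensitivity measures the article compares (a rank correlation read from scatter-plot-style samples), applied to a toy stand-in risk model. The input names echo the Vp assessment, but the model form and every parameter value below are hypothetical, not the FDA model.

```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(1)
n = 10_000

# Hypothetical stand-in inputs for a food safety risk model (NOT the FDA Vp model):
water_temp = rng.normal(25, 3, n)              # deg C
oysters_per_meal = rng.poisson(12, n) + 1
unrefrigerated_h = rng.gamma(2.0, 2.0, n)      # hours

# Toy exposure/risk model with an interaction between temperature and time:
log10_dose = (0.05 * water_temp + 0.02 * water_temp * unrefrigerated_h
              + np.log10(oysters_per_meal) + rng.normal(0, 0.3, n))
risk = 1 - np.exp(-(10 ** log10_dose) / 1e6)   # toy exponential dose-response

# Model-independent sensitivity measure: Spearman rank correlation of each
# input with the output.
for name, x in [("water_temp", water_temp),
                ("oysters_per_meal", oysters_per_meal),
                ("unrefrigerated_h", unrefrigerated_h)]:
    rho, _ = spearmanr(x, risk)
    print(f"{name:18s} rank correlation = {rho:+.2f}")
```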

3.
Pesticide risk assessment for food products involves combining information from consumption and concentration data sets to estimate a distribution for the pesticide intake in a human population. Using this distribution one can obtain probabilities of individuals exceeding specified levels of pesticide intake. In this article, we present a probabilistic, Bayesian approach to modeling the daily consumptions of the pesticide Iprodione through multiple food products. Modeling data on food consumption and pesticide concentration poses a variety of problems, such as the large proportions of consumptions and concentrations that are recorded as zero, and correlation between the consumptions of different foods. We consider daily food consumption data from the Netherlands National Food Consumption Survey and concentration data collected by the Netherlands Ministry of Agriculture. We develop a multivariate latent-Gaussian model for the consumption data that allows for correlated intakes between products. For the concentration data, we propose a univariate latent-t model. We then combine predicted consumptions and concentrations from these models to obtain a distribution for individual daily Iprodione exposure. The latent-variable models allow for both skewness and large numbers of zeros in the consumption and concentration data. The use of a probabilistic approach is intended to yield more robust estimates of high percentiles of the exposure distribution than an empirical approach. Bayesian inference is used to facilitate the treatment of data with a complex structure.
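The sketch below is a much-simplified, non-Bayesian version of the combination step: predicted consumptions and concentrations, each with a large proportion of zeros, are multiplied to give a daily intake distribution whose upper percentiles are then read off. Independent zero-inflated lognormals stand in for the article's correlated latent-Gaussian and latent-t models, and all parameters are made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(7)
n_days = 100_000

def zero_inflated_lognormal(p_nonzero, mu_log, sigma_log, size):
    """Two-part model: a point mass at zero plus a lognormal for positive values."""
    nonzero = rng.random(size) < p_nonzero
    values = rng.lognormal(mu_log, sigma_log, size)
    return np.where(nonzero, values, 0.0)

# One food product; parameters are illustrative placeholders.
consumption_g = zero_inflated_lognormal(0.30, np.log(80), 0.6, n_days)    # g/day
residue_mg_kg = zero_inflated_lognormal(0.15, np.log(0.05), 1.0, n_days)  # mg/kg

intake_mg = consumption_g / 1000.0 * residue_mg_kg   # mg/day

for q in (50, 95, 99, 99.9):
    print(f"P{q}: {np.percentile(intake_mg, q):.5f} mg/day")
print(f"fraction of zero-intake days: {(intake_mg == 0).mean():.2%}")
```

The Bayesian latent-variable treatment additionally propagates parameter uncertainty and the correlation between foods, which matters most for exactly these extreme percentiles.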

4.
The food industry faces two paradoxical demands: on the one hand, foods need to be microbiologically safe for consumption and on the other hand, consumers want fresh, minimally processed foods. To meet these demands, more insight into the mechanisms of microbial growth is needed, which includes, among others, the microbial lag phase. This is the time needed by bacterial cells to adapt to a new environment (for example, after food product contamination) before starting an exponential growth regime. Since food products are often contaminated with low amounts of pathogenic microorganisms, it is important to know the distribution of these individual cell lag times to make accurate predictions concerning food safety. More precisely, cells with the shortest lag times (i.e., appearing in the left tail of the distribution) are largely decisive for the outgrowth of the population. In this study, an integrated modeling approach is proposed and applied to an existing data set of individual cell lag time measurements of Listeria monocytogenes. In a first step, a logistic modeling approach is applied to predict the fraction of zero-lag cells (which start growing immediately) as a function of temperature, pH, and water activity. For the nonzero-lag cells, the mean and variance of the lag time distribution are modeled with a hyperbolic-type model structure. This mean and variance allow identification of the parameters of a two-parameter Weibull distribution, representing the nonzero-lag cell lag time distribution. The integration of the developed models allows prediction of a global distribution of individual cell lag times for any combination of environmental conditions in the interpolation domain of the original temperature, pH, and water activity settings. The global fitting quality of the model is quantified using several measures indicating that the model gives accurate predictions, erring slightly on the fail-safe side when predicting the shortest lag times.
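A sketch of how the integrated model can be turned into a sampler for the global single-cell lag distribution: a point mass at zero for the zero-lag fraction plus a two-parameter Weibull whose shape and scale are backed out from a modeled mean and variance. The zero-lag probability, mean, and variance used below are illustrative inputs; in the article they would come from the fitted logistic and hyperbolic secondary models.

```python
import numpy as np
from scipy.optimize import brentq
from scipy.special import gamma as G

rng = np.random.default_rng(3)

def weibull_from_mean_var(mean, var):
    """Back out the Weibull shape k and scale lam from a given mean and variance,
    using CV^2 = Gamma(1+2/k)/Gamma(1+1/k)^2 - 1 (monotone decreasing in k)."""
    cv2 = var / mean ** 2
    f = lambda k: G(1 + 2 / k) / G(1 + 1 / k) ** 2 - 1 - cv2
    k = brentq(f, 0.1, 50.0)
    lam = mean / G(1 + 1 / k)
    return k, lam

def sample_single_cell_lags(n, p_zero_lag, mean_lag_h, var_lag_h2):
    """Global lag distribution: zero lag for a fraction of cells, Weibull for the rest.
    The numbers passed in below are illustrative, not fitted values from the study."""
    k, lam = weibull_from_mean_var(mean_lag_h, var_lag_h2)
    zero = rng.random(n) < p_zero_lag
    lags = lam * rng.weibull(k, n)
    return np.where(zero, 0.0, lags)

lags = sample_single_cell_lags(100_000, p_zero_lag=0.12, mean_lag_h=9.0, var_lag_h2=25.0)
print(f"left tail (5th percentile) lag: {np.percentile(lags, 5):.2f} h")
print(f"fraction of cells growing within 2 h: {(lags <= 2.0).mean():.2%}")
```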

5.
We used an agent-based modeling (ABM) framework and developed a mathematical model to explain the complex dynamics of microbial persistence and spread within a food facility and to aid risk managers in identifying effective mitigation options. The model explicitly considered personal hygiene practices by food handlers as well as their activities and simulated a spatially explicit dynamic system representing complex interaction patterns among food handlers, facility environment, and foods. To demonstrate the utility of the model in a decision-making context, we created a hypothetical case study and used it to compare different risk mitigation strategies for reducing contamination and spread of Listeria monocytogenes in a food facility. Model results indicated that areas with no direct contact with foods (e.g., loading dock and restroom) can serve as contamination niches and recontaminate areas that have direct contact with food products. Furthermore, food handlers' behaviors, including, for example, hygiene and sanitation practices, can impact the persistence of microbial contamination in the facility environment and the spread of contamination to prepared foods. Using this case study, we also demonstrated benefits of an ABM framework for addressing food safety in a complex system in which emergent system-level responses are predicted using a bottom-up approach that observes individual agents (e.g., food handlers) and their behaviors. Our model can be applied to a wide variety of pathogens, food commodities, and activity patterns to evaluate efficacy of food-safety management practices and quantify contamination reductions associated with proposed mitigation strategies in food facilities.
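The toy agent-based sketch below illustrates the mechanism the article describes: a contamination niche with no direct food contact (here the loading dock) seeds handlers' hands, and poor hygiene compliance lets contamination reach food-contact surfaces. The zones, transfer probabilities, and the absence of surface sanitation and of microbial growth/die-off are all simplifications; this is not the published model.

```python
import random

random.seed(42)

ZONES = ["loading_dock", "restroom", "cold_room", "prep_table", "packing_line"]
FOOD_CONTACT = {"prep_table", "packing_line"}

class Handler:
    def __init__(self, hygiene_compliance):
        self.hygiene_compliance = hygiene_compliance   # prob. of effective handwashing per move
        self.hands_contaminated = False
        self.zone = random.choice(ZONES)

def simulate(n_handlers=5, hygiene=0.5, steps=200, transfer_p=0.3, niche="loading_dock"):
    """Toy ABM: the niche seeds handlers' hands, handlers seed surfaces elsewhere.
    Returns the fraction of time steps with a contaminated food-contact surface.
    All rates are illustrative."""
    surfaces = {z: (z == niche) for z in ZONES}        # True = surface contaminated
    handlers = [Handler(hygiene) for _ in range(n_handlers)]
    exposed_steps = 0
    for _ in range(steps):
        for h in handlers:
            h.zone = random.choice(ZONES)              # random task assignment
            if random.random() < h.hygiene_compliance:
                h.hands_contaminated = False           # effective handwashing
            if surfaces[h.zone] and random.random() < transfer_p:
                h.hands_contaminated = True            # surface -> hands
            elif h.hands_contaminated and random.random() < transfer_p:
                surfaces[h.zone] = True                # hands -> surface
        if any(surfaces[z] for z in FOOD_CONTACT):
            exposed_steps += 1
    return exposed_steps / steps

for hygiene in (0.2, 0.5, 0.9):
    print(f"hygiene compliance {hygiene:.0%}: "
          f"food-contact exposure fraction = {simulate(hygiene=hygiene):.2f}")
```

Even this stripped-down version reproduces the qualitative finding that handler hygiene governs how quickly an indirect-contact niche contaminates food-contact areas.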

6.
The Monte Carlo (MC) simulation approach is traditionally used in food safety risk assessment to study quantitative microbial risk assessment (QMRA) models. When experimental data are available, performing Bayesian inference is a good alternative approach that allows backward calculation in a stochastic QMRA model to update the experts' knowledge about the microbial dynamics of a given food-borne pathogen. In this article, we propose a complex example where Bayesian inference is applied to a high-dimensional second-order QMRA model. The case study is a farm-to-fork QMRA model considering genetic diversity of Bacillus cereus in a cooked, pasteurized, and chilled courgette purée. Experimental data are Bacillus cereus concentrations measured in packages of courgette purées stored at different time-temperature profiles after pasteurization. To perform a Bayesian inference, we first built an augmented Bayesian network by linking a second-order QMRA model to the available contamination data. We then ran a Markov chain Monte Carlo (MCMC) algorithm to update all the unknown concentrations and unknown quantities of the augmented model. About 25% of the prior beliefs are strongly updated, leading to a reduction in uncertainty. Some updates interestingly question the QMRA model.
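To make the idea of backward calculation concrete, the sketch below runs a random-walk Metropolis-Hastings update for a single unknown growth rate in a toy storage model, given hypothetical concentration measurements. The article's augmented Bayesian network updates many unknowns of a full second-order farm-to-fork model; the data, prior, and one-parameter forward model here are invented purely for illustration.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(11)

# Toy forward model: log10 concentration after storage = initial + rate * time.
times = np.array([2.0, 4.0, 6.0, 8.0, 10.0])     # storage time, days
obs = np.array([1.1, 1.8, 2.9, 3.6, 4.4])        # hypothetical log10 CFU/g measurements
log10_n0, sigma_obs = 0.5, 0.3

def log_posterior(rate):
    prior = norm.logpdf(rate, loc=0.25, scale=0.15)   # expert prior on the growth rate
    lik = norm.logpdf(obs, loc=log10_n0 + rate * times, scale=sigma_obs).sum()
    return prior + lik

# Random-walk Metropolis-Hastings
chain, rate = [], 0.25
current_lp = log_posterior(rate)
for _ in range(20_000):
    prop = rate + rng.normal(0, 0.02)
    prop_lp = log_posterior(prop)
    if np.log(rng.random()) < prop_lp - current_lp:
        rate, current_lp = prop, prop_lp
    chain.append(rate)

post = np.array(chain[5_000:])                    # drop burn-in
print(f"prior mean 0.25 -> posterior mean {post.mean():.3f} "
      f"(95% CI {np.percentile(post, 2.5):.3f}-{np.percentile(post, 97.5):.3f})")
```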

7.
Efficient food safety monitoring should achieve optimal resource allocation. In this article, a methodology is presented to optimize the use of resources for food safety monitoring aimed at identifying noncompliant samples and estimating background level of hazards in food products. A Bayesian network (BN) model and an optimization model were combined in a single framework. The framework was applied to monitoring dioxins and dioxin-like polychlorinated biphenyls (DL-PCBs) in primary animal-derived food products in the Netherlands. The BN model was built using a national dataset with monitoring results of dioxins and DL-PCBs in animal-derived food products over a 10-year period (2008–2017). These data were used to estimate the probability of detecting suspect samples with dioxins and DL-PCBs levels above preset thresholds, given certain sample conditions. The results of the BN model were then inserted into the optimization model to compute an optimal monitoring scheme. Model estimates showed that the probability of dioxins and DL-PCBs exceeding threshold limits was higher in laying hen eggs and sheep meat than in other animal-derived food (except deer meat). Compared with the monitoring scheme used in the Netherlands in 2018, the optimal monitoring scheme would save around 10,000 EUR per year. This could be obtained by reallocating monitoring resources from products with lower probability of dioxin and DL-PCBs exceeding threshold limits (e.g., pig meat) to products with higher probability (e.g., bovine animal meat), and by shifting sample collection from the last quarter of the year toward the first three quarters of the year.
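A minimal sketch of the optimization step, assuming the Bayesian network has already supplied an exceedance probability per product: allocate a fixed budget across products to maximize the expected number of detected noncompliant samples, subject to a minimum number of samples per product. The product list, probabilities, costs, and budget below are hypothetical, and the linear program ignores the integrality of sample counts.

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical stand-ins for the BN model's outputs and monitoring costs:
products    = ["laying_hen_eggs", "sheep_meat", "bovine_meat", "pig_meat", "milk"]
p_exceed    = np.array([0.020, 0.015, 0.008, 0.001, 0.002])   # P(sample exceeds limit)
cost_eur    = np.array([120.0, 150.0, 150.0, 150.0, 100.0])   # cost per sample
min_samples = np.array([20, 20, 20, 20, 20])                  # minimum coverage per product
budget_eur  = 60_000.0

# Linear program: maximize expected detected noncompliant samples within the budget.
# linprog minimizes, hence the negated objective.
res = linprog(c=-p_exceed,
              A_ub=cost_eur.reshape(1, -1), b_ub=[budget_eur],
              bounds=[(m, None) for m in min_samples],
              method="highs")

for name, n in zip(products, res.x):
    print(f"{name:16s} {n:7.1f} samples")
print(f"expected detections per year: {-res.fun:.2f}")
```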

8.
Public risk perceptions and demand for safer food are important factors shaping agricultural production practices in the United States. Despite documented food safety concerns, little attempt has been made to elicit consumers' subjective risk judgments for a range of food safety hazards or to identify factors most predictive of perceived food safety risks. In this study, over 700 conventional and organic fresh produce buyers in the Boston area were surveyed for their perceived food safety risks. Survey results showed that consumers perceived relatively high risks associated with the consumption and production of conventionally grown produce compared with other public health hazards. For example, conventional and organic food buyers estimated the median annual fatality rate due to pesticide residues on conventionally grown food to be about 50 per million and 200 per million, respectively, which is similar in magnitude to the annual mortality risk from motor vehicle accidents in the United States. Over 90% of survey respondents also perceived a reduction in pesticide residue risk associated with substituting organically grown produce for conventionally grown produce, and nearly 50% perceived a risk reduction due to natural toxins and microbial pathogens. Multiple regression analyses indicate that only a few factors are consistently predictive of higher risk perceptions, including feelings of distrust toward regulatory agencies and the safety of the food supply. A variety of factors were found to be significant predictors of specific categories of food hazards, suggesting that consumers may view food safety risks as dissimilar from one another. Based on study findings, it is recommended that future agricultural policies and risk communication efforts utilize a comparative risk approach that targets a range of food safety hazards.

9.
Topics in Microbial Risk Assessment: Dynamic Flow Tree Process
Microbial risk assessment is emerging as a new discipline in risk assessment. A systematic approach to microbial risk assessment is presented that employs data analysis for developing parsimonious models and accounts formally for the variability and uncertainty of model inputs using analysis of variance and Monte Carlo simulation. The purpose of the paper is to raise and examine issues in conducting microbial risk assessments. The enteric pathogen Escherichia coli O157:H7 was selected as an example for this study due to its significance to public health. The framework for our work is consistent with the risk assessment components described by the National Research Council in 1983 (hazard identification; exposure assessment; dose-response assessment; and risk characterization). Exposure assessment focuses on hamburgers, cooked to a range of temperatures from rare to well done, the latter typical for fast food restaurants. Features of the model include predictive microbiology components that account for stochastic growth and death of organisms in hamburger. For dose-response modeling, Shigella data from human feeding studies were used as a surrogate for E. coli O157:H7. Risks were calculated using a threshold model and an alternative nonthreshold model. The 95% probability intervals for risk of illness for product cooked to a given internal temperature spanned five orders of magnitude for these models. The existence of even a small threshold has a dramatic impact on the estimated risk.
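The sketch below reproduces the threshold-versus-nonthreshold comparison in a stripped-down form: the same Monte Carlo dose sample is pushed through an exponential (single-hit, nonthreshold) dose-response curve and through the same curve truncated below a minimum infectious dose. The dose distribution and the parameters r and n_min are illustrative, not the Shigella-surrogate values used in the article.

```python
import numpy as np

rng = np.random.default_rng(5)
n = 200_000

# Hypothetical post-cooking dose distribution (CFU ingested per hamburger).
log10_dose = rng.normal(loc=0.0, scale=0.8, size=n)
dose = 10 ** log10_dose

def risk_nonthreshold(d, r=1e-3):
    """Exponential (single-hit, nonthreshold) dose-response."""
    return 1 - np.exp(-r * d)

def risk_threshold(d, r=1e-3, n_min=100):
    """Same curve, but zero risk below a minimum infectious dose of n_min organisms."""
    return np.where(d >= n_min, 1 - np.exp(-r * d), 0.0)

for name, f in [("nonthreshold", risk_nonthreshold), ("threshold", risk_threshold)]:
    risk = f(dose)
    print(f"{name:12s} mean risk per serving = {risk.mean():.2e}")
```

Even with identical exposure, the two dose-response assumptions give mean risks that differ severalfold here, consistent with the article's observation that a small threshold has a dramatic impact.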

10.
Taking into account the competitive environment of the food market, consumers' degree of food safety risk aversion, and the level of food traceability, this paper builds a duopoly competition game model for food producers and examines how the traceability level and consumers' risk aversion affect producers' prices, safety effort levels, and profits. The study finds that a producer's optimal price and food safety effort level increase with its own food traceability level and decrease with the competitor's traceability level. When a producer's traceability level is high, its safety effort level rises as consumers' risk aversion increases, and its optimal price increases as the proportion of highly risk-averse consumers grows. When a producer's traceability level is low, its safety effort level first increases and then decreases as consumers' risk aversion grows, while its optimal price always decreases as the proportion of highly risk-averse consumers increases. As the proportion of highly risk-averse consumers grows, the influence of each producer's traceability level on its decisions changes in different ways.

11.
Microbial food safety has been the focus of research across various disciplines within the risk analysis community. Natural scientists involved in food microbiology and related disciplines work on the identification of health hazards, and the detection of pathogenic microorganisms. To perform risk assessment, research activities are increasingly focused on the quantification of microbial contamination of food products at various stages in the food chain, and modeling the impact of this contamination on human health. Social scientists conduct research into how consumers perceive food risks, and how best to develop effective risk communication with consumers in order to improve public health through improved food handling practices. The two approaches converge at the end of the food chain, where the activities regarding food preparation and food consumption are considered. Both natural and social sciences may benefit from input and expertise from the perspective of the alternative discipline, although, to date, the integration of social and natural sciences has been somewhat limited. This article therefore explores the potential of a transdisciplinary approach to food risk analysis in terms of delivering additional improvements to public health. Developing knowledge arising from research in both the natural and social sciences, we present a novel framework involving the integration of the two approaches that might provide the most effective way to improve the consumer health associated with food-borne illness.

12.
To better understand the risk of exposure to food allergens, food challenge studies are designed to slowly increase the dose of an allergen delivered to allergic individuals until an objective reaction occurs. These dose-to-failure studies are used to determine acceptable intake levels and are analyzed using parametric failure time models. Though these models can provide estimates of the survival curve and risk, their parametric form may misrepresent the survival function for doses of interest. Different models that describe the data similarly may produce different dose-to-failure estimates. Motivated by predictive inference, we developed a Bayesian approach to combine survival estimates based on posterior predictive stacking, where the weights are formed to maximize posterior predictive accuracy. The approach defines a model space that is much larger than traditional parametric failure time modeling approaches. In our case, we use the approach to include random effects accounting for frailty components. The methodology is investigated in simulation, and is used to estimate allergic population eliciting doses for multiple food allergens.
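A simplified sketch of the stacking idea: fit several candidate parametric failure-time distributions, evaluate their predictive densities on held-out eliciting doses, and choose simplex weights that maximize the held-out log predictive density. The article stacks full Bayesian posterior predictive distributions with frailty random effects; here maximum-likelihood fits, a single held-out split, and synthetic doses stand in for that machinery.

```python
import numpy as np
from scipy import stats
from scipy.optimize import minimize

rng = np.random.default_rng(2)

# Hypothetical eliciting doses (mg protein at which an objective reaction occurred).
doses = rng.lognormal(mean=3.0, sigma=1.0, size=120)
train, held_out = doses[:90], doses[90:]

# Candidate parametric failure-time models, fit by maximum likelihood.
candidates = {
    "lognormal":   stats.lognorm,
    "weibull":     stats.weibull_min,
    "loglogistic": stats.fisk,
}
pred_dens = []
for name, dist in candidates.items():
    params = dist.fit(train, floc=0)                # fix location at zero
    pred_dens.append(dist.pdf(held_out, *params))
pred_dens = np.array(pred_dens)                     # shape (n_models, n_held_out)

def neg_log_score(theta):
    w = np.exp(theta) / np.exp(theta).sum()         # softmax -> weights on the simplex
    return -np.log(w @ pred_dens).sum()             # held-out log predictive density

res = minimize(neg_log_score, x0=np.zeros(len(candidates)))
weights = np.exp(res.x) / np.exp(res.x).sum()
for name, w in zip(candidates, weights):
    print(f"{name:12s} stacking weight = {w:.2f}")
```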

13.
Steven M. Quiring, Risk Analysis, 2011, 31(12): 1897-1906
This article compares statistical methods for modeling power outage durations during hurricanes and examines the predictive accuracy of these methods. Being able to make accurate predictions of power outage durations is valuable because the information can be used by utility companies to plan their restoration efforts more efficiently. This information can also help inform customers and public agencies of the expected outage times, enabling better collective response planning, and coordination of restoration efforts for other critical infrastructures that depend on electricity. In the long run, outage duration estimates for future storm scenarios may help utilities and public agencies better allocate risk management resources to balance the disruption from hurricanes with the cost of hardening power systems. We compare the out-of-sample predictive accuracy of five distinct statistical models for estimating power outage duration times caused by Hurricane Ivan in 2004. The methods compared include both regression models (accelerated failure time (AFT) and Cox proportional hazard models (Cox PH)) and data mining techniques (regression trees, Bayesian additive regression trees (BART), and multivariate adaptive regression splines). We then validate our models against two other hurricanes. Our results indicate that BART yields the best prediction accuracy and that it is possible to predict outage durations with reasonable accuracy.

14.
We propose a new modeling approach for inspection data that provides a more useful interpretation of the patterns of detections of invasive pests, using cargo inspection as a motivating example. Methods that are currently in use generally classify shipments according to their likelihood of carrying biosecurity risk material, given available historical and contextual data. Ideally, decisions regarding which cargo containers to inspect should be made in real time, and the models used should be able to focus efforts when the risk is higher. In this study, we propose a dynamic approach that treats the data as a time series in order to detect periods of high risk. A regulatory organization will respond differently to evidence of systematic problems than evidence of random problems, so testing for serial correlation is of major interest. We compare three models that account for various degrees of serial dependence within the data. First is the independence model where the prediction of the arrival of a risky shipment is made solely on the basis of contextual information. We also consider a Markov chain that allows dependence between successive observations, and a hidden Markov model that allows further dependence on past data. The predictive performance of the models is then evaluated using ROC and leakage curves. We illustrate this methodology on two sets of real inspection data.
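The hidden-Markov component can be illustrated with the standard forward filter: given a binary sequence of daily detection outcomes, it returns the filtered probability of currently being in a high-risk period. The two-state transition matrix, emission probabilities, and observation sequence below are toy values, not estimates from the inspection data.

```python
import numpy as np

# Toy two-state hidden Markov model for daily inspection outcomes:
# hidden state 0 = "baseline risk", 1 = "high-risk period";
# observation 1 = biosecurity risk material detected that day.
A = np.array([[0.95, 0.05],        # transition matrix (illustrative values)
              [0.20, 0.80]])
emit = np.array([0.02, 0.30])      # P(detection | state)
pi = np.array([0.9, 0.1])          # initial state distribution

obs = np.array([0, 0, 1, 0, 1, 1, 1, 0, 1, 0, 0, 0])

def forward_filter(obs, A, emit, pi):
    """Forward algorithm: filtered P(state_t | obs_1..t), normalized at each step."""
    alpha = pi * np.where(obs[0], emit, 1 - emit)
    alpha /= alpha.sum()
    filtered = [alpha]
    for y in obs[1:]:
        alpha = (alpha @ A) * np.where(y, emit, 1 - emit)
        alpha /= alpha.sum()
        filtered.append(alpha)
    return np.array(filtered)

f = forward_filter(obs, A, emit, pi)
for t, (y, p_high) in enumerate(zip(obs, f[:, 1])):
    print(f"day {t:2d}  detection={y}  P(high-risk period)={p_high:.2f}")
```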

15.
The present study investigates U.S. Department of Agriculture inspection records in the Agricultural Quarantine Activity System database to estimate the probability of quarantine pests on propagative plant materials imported from various countries of origin and to develop a methodology ranking the risk of country-commodity combinations based on quarantine pest interceptions. Data collected from October 2014 to January 2016 were used for developing predictive models and validation study. A generalized linear model with Bayesian inference and a generalized linear mixed effects model were used to compare the interception rates of quarantine pests on different country-commodity combinations. Prediction ability of generalized linear mixed effects models was greater than that of generalized linear models. The estimated pest interception probability and confidence interval for each country-commodity combination was categorized into one of four compliance levels: "High," "Medium," "Low," and "Poor/Unacceptable," using K-means clustering analysis. This study presents risk-based categorization for each country-commodity combination based on the probability of quarantine pest interceptions and the uncertainty in that assessment.
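A sketch of the final categorization step, assuming the interception probability and an uncertainty measure have already been estimated for each country-commodity combination: cluster them into four groups with K-means and label the clusters from "High" to "Poor/Unacceptable" by their mean interception probability. The synthetic estimates below are placeholders for the model output.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(4)

# Hypothetical model output for 40 country-commodity combinations:
# estimated interception probability and the width of its interval estimate.
n_combos = 40
p_hat = rng.beta(0.8, 30, n_combos)
ci_width = p_hat * rng.uniform(0.5, 1.5, n_combos)
X = np.column_stack([p_hat, ci_width])

km = KMeans(n_clusters=4, n_init=10, random_state=0).fit(X)

# Map clusters to compliance levels by ordering them on mean interception probability.
order = np.argsort([p_hat[km.labels_ == k].mean() for k in range(4)])
labels = ["High", "Medium", "Low", "Poor/Unacceptable"]      # best -> worst compliance
level_of_cluster = {k: labels[rank] for rank, k in enumerate(order)}

for level in labels:
    members = [i for i in range(n_combos) if level_of_cluster[km.labels_[i]] == level]
    print(f"{level:18s} {len(members):2d} combinations, mean p = {p_hat[members].mean():.3f}")
```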

16.
Decision making in food safety is a complex process that involves several criteria of different nature like the expected reduction in the number of illnesses, the potential economic or health-related cost, or even the environmental impact of a given policy or intervention. Several multicriteria decision analysis (MCDA) algorithms are currently used, mostly individually, in food safety to rank different options in a multifactorial environment. However, the selection of the MCDA algorithm is a decision problem on its own because different methods calculate different rankings. The aim of this study was to compare the impact of different uncertainty sources on the rankings of MCDA problems in the context of food safety. For that purpose, a previously published data set on emerging zoonoses in the Netherlands was used to compare different MCDA algorithms: MMOORA, TOPSIS, VIKOR, WASPAS, and ELECTRE III. The rankings were calculated with and without considering uncertainty (using fuzzy sets), to assess the importance of this factor. The rankings obtained differed between algorithms, emphasizing that the selection of the MCDA method had a relevant impact in the rankings. Furthermore, considering uncertainty in the ranking had a high influence on the results. Both factors were more relevant than the weights associated with each criterion in this case study. A hierarchical clustering method was suggested to aggregate results obtained by the different algorithms. This complementary step seems to be a promising way to decrease extreme difference among algorithms and could provide a strong added value in the decision-making process.
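Of the algorithms compared, TOPSIS is compact enough to sketch in full: normalize the decision matrix, weight it, and rank alternatives by their relative closeness to the ideal (here, lowest-risk) point. The hazards, criteria scores, and weights below are invented for illustration and do not come from the Dutch emerging-zoonoses data set; the fuzzy-set treatment of uncertainty is omitted.

```python
import numpy as np

# Decision matrix: rows = hypothetical hazards, columns = criteria
# (expected illnesses, economic cost, severity); higher scores = worse.
alternatives = ["hazard_A", "hazard_B", "hazard_C", "hazard_D"]
X = np.array([[120.0, 3.0, 0.7],
              [ 40.0, 8.0, 0.9],
              [300.0, 1.5, 0.3],
              [ 80.0, 5.0, 0.6]])
weights = np.array([0.5, 0.3, 0.2])
benefit = np.array([False, False, False])   # all criteria treated as costs

def topsis(X, weights, benefit):
    """Plain TOPSIS: vector-normalize, weight, measure distance to the ideal
    and anti-ideal points, and score by relative closeness to the ideal."""
    V = weights * X / np.linalg.norm(X, axis=0)
    ideal = np.where(benefit, V.max(axis=0), V.min(axis=0))
    anti  = np.where(benefit, V.min(axis=0), V.max(axis=0))
    d_plus  = np.linalg.norm(V - ideal, axis=1)
    d_minus = np.linalg.norm(V - anti, axis=1)
    return d_minus / (d_plus + d_minus)      # closeness to the low-risk ideal

closeness = topsis(X, weights, benefit)
print("priority ranking (highest-risk first):")
for name, c in sorted(zip(alternatives, closeness), key=lambda t: t[1]):
    print(f"  {name}: closeness to low-risk ideal = {c:.3f}")
```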

17.
In this paper, we investigate the relationship between external auditor characteristics and the likelihood of bankruptcy. We use a sample of US public companies to analyse whether auditor attributes are associated with default. We also test whether the inclusion of such attributes in bankruptcy prediction models improves their predictive ability. We find that firms audited by industry-expert auditors, large audit firms and long-tenured auditors are less likely to default. Firms with higher audit fees are more likely to default. Our results also show that the inclusion of auditor attributes significantly increases the predictive ability of bankruptcy prediction models. This paper contributes to the literature about auditing and bankruptcy prediction. Our results suggest that the auditor attributes can provide predictive signals concerning a default risk and that an external audit can play a relevant role in early warnings of financial distress. Our study also suggests that bankruptcy prediction models can become more effective if they are complemented with audit data. Our results are of interest to market participants, auditors, regulating authorities, banks and other financial institutions that are interested in credit risk assessment.

18.
The choice of a dose-response model is decisive for the outcome of quantitative risk assessment. Single-hit models have played a prominent role in dose-response assessment for pathogenic microorganisms since their introduction. Hit theory models are based on a few simple concepts that are attractive for their clarity and plausibility. These models, in particular the Beta Poisson model, are used for extrapolation of experimental dose-response data to low doses, as are often present in drinking water or food products. Unfortunately, the Beta Poisson model, as it is used throughout the microbial risk literature, is an approximation whose validity is not widely known. The exact functional relation is numerically complex, especially for use in optimization or uncertainty analysis. Here it is shown that although the discrepancy between the Beta Poisson formula and the exact function is not very large for many data sets, the differences are greatest at low doses, the region of interest for many risk applications. Errors may become very large, however, in the results of uncertainty analysis, or when the data contain little low-dose information. One striking property of the exact single-hit model is that it has a maximum risk curve, limiting the upper confidence level of the dose-response relation. This is due to the fact that the risk cannot exceed the probability of exposure, a property that is not retained in the Beta Poisson approximation. This maximum possible response curve is important for uncertainty analysis, and for risk assessment of pathogens with unknown properties.
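The two functional forms at issue can be written down directly: the exact single-hit beta-Poisson model is 1 - 1F1(alpha, alpha + beta, -d) (Kummer confluent hypergeometric), while the common approximation is 1 - (1 + d/beta)^(-alpha). The sketch below compares both against the probability of exposure, 1 - exp(-d). The parameter sets are chosen to make the discrepancy, and the approximation's violation of the exposure bound, easy to see; they are not fitted values for any pathogen.

```python
import numpy as np
from scipy.special import hyp1f1

def beta_poisson_exact(dose, alpha, beta):
    """Exact single-hit beta-Poisson: P = 1 - 1F1(alpha, alpha + beta, -dose)."""
    return 1.0 - hyp1f1(alpha, alpha + beta, -dose)

def beta_poisson_approx(dose, alpha, beta):
    """Widely used approximation: P = 1 - (1 + dose/beta)^(-alpha)."""
    return 1.0 - (1.0 + dose / beta) ** (-alpha)

# Illustrative parameter sets; the second is chosen so the approximation
# exceeds the probability of exposure at low doses.
for alpha, beta in [(0.3, 1.0), (2.0, 0.5)]:
    print(f"\nalpha={alpha}, beta={beta}")
    print(f"{'dose':>8s} {'exact':>10s} {'approx':>10s} {'P(exposure)':>12s}")
    for dose in (0.01, 0.1, 1.0, 10.0):
        exact = beta_poisson_exact(dose, alpha, beta)
        approx = beta_poisson_approx(dose, alpha, beta)
        p_exposed = 1.0 - np.exp(-dose)      # probability of ingesting >= 1 organism
        print(f"{dose:8.2f} {exact:10.4f} {approx:10.4f} {p_exposed:12.4f}")
```

The exact curve never exceeds P(exposure), whereas the approximation does for the second parameter set, which is exactly the property the article highlights.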

19.
When analyzing reorder point, order quantity (r, Q) inventory systems, one important question that often gets very little, if any, attention is: When a stockout occurs, how large is it? This paper is directed at researchers and practicing inventory planners with two objectives. First, we provide several models and algorithms to compute the Expected Shortages When a Stockout Occurs (ESWSO) for a variety of stochastic environments. We show that when the ESWSO is used in conjunction with the traditional fill rate measures it greatly enhances a planner's ability to plan for shortages. Second, we develop two cost-minimizing inventory models, one addressing the backorder scenario and the other the shortage scenario, to show how the ESWSO can be seamlessly integrated into an inventory-cost framework to specify lot sizes and safety stocks.
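For normally distributed lead-time demand, the ESWSO has a closed form via the standard normal loss function: the expected shortage per cycle divided by the probability of a stockout. The sketch below computes it alongside the conventional fill rate for a few reorder points; the demand parameters and order quantity are illustrative, and the paper's algorithms cover other stochastic environments as well.

```python
from scipy.stats import norm

def eswso_normal(r, mu_L, sigma_L, Q):
    """Expected Shortage When a Stockout Occurs for normally distributed lead-time
    demand in an (r, Q) system: conditional on a stockout, how large is it on average?
    Also returns the conventional fill rate for comparison."""
    z = (r - mu_L) / sigma_L
    p_stockout = 1 - norm.cdf(z)                       # per replenishment cycle
    loss = norm.pdf(z) - z * (1 - norm.cdf(z))         # standard normal loss function
    expected_shortage = sigma_L * loss                  # units short per cycle
    eswso = expected_shortage / p_stockout
    fill_rate = 1 - expected_shortage / Q
    return p_stockout, eswso, fill_rate

# Illustrative numbers: lead-time demand ~ N(500, 80^2), order quantity 1,000 units.
for r in (550, 600, 650, 700):
    p, eswso, fr = eswso_normal(r, mu_L=500, sigma_L=80, Q=1000)
    print(f"r={r}: P(stockout/cycle)={p:.3f}  ESWSO={eswso:5.1f} units  fill rate={fr:.4f}")
```

The point the measure makes is visible in the output: fill rates can look excellent while the shortage, conditional on one occurring, is still substantial.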

20.
Food safety objectives (FSOs) are established in order to minimize the risk of foodborne illnesses to consumers, but these have not yet been incorporated into regulatory policy. An FSO states the maximum frequency and/or concentration of a microbiological hazard in a food at the time of consumption that provides an acceptable level of protection to the public and leads to a performance criterion for industry. However, in order to be implemented as a regulation, this criterion has to be achievable by the affected industry. In order to determine an FSO, the steps to produce and store that food need to be known, especially where they have an impact on contamination, growth, and destruction. This article uses existing models for growth of Listeria monocytogenes in conjunction with calculations of FSOs to approximate the outcome of more than one introduction of the foodborne organism throughout the food-processing path from the farm to the consumer. Most models for the growth and reduction of foodborne pathogens are logarithmic in nature, which fits the nature of the growth of microorganisms, spanning many orders of magnitude. However, these logarithmic models are normally limited to a single introduction step and a single reduction step. The model presented as part of this research addresses more than one introduction of food contamination, each of which can be separated by a substantial amount of time. The advantage of treating the problem this way is the accommodation of multiple introductions of foodborne pathogens over a range of time durations and conditions.
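The arithmetic the article builds on can be sketched directly: growth and inactivation steps are additive on the log10 scale, but each new introduction of contamination has to be added in arithmetic space before converting back, which is why multiple introductions cannot simply be summed as logs. The processing chain, step sizes, and the example FSO below are illustrative, not taken from the article.

```python
import numpy as np

def final_log10_level(initial_log10, events):
    """Track log10 concentration through a farm-to-fork chain: growth and reduction
    steps are additive on the log scale, while each new introduction of contamination
    is added in arithmetic space before converting back."""
    level = 10.0 ** initial_log10            # CFU/g, arithmetic space
    for kind, value in events:
        if kind == "growth":                 # value in +log10 units
            level *= 10.0 ** value
        elif kind == "reduction":            # value in log10 reductions (positive number)
            level /= 10.0 ** value
        elif kind == "introduction":         # value is the log10 level introduced
            level += 10.0 ** value
        else:
            raise ValueError(kind)
    return np.log10(level)

# Illustrative chain: in-plant growth, a listericidal treatment, recontamination at
# packing, and growth during retail storage.
chain = [("growth", 1.0), ("reduction", 5.0),
         ("introduction", -2.0), ("growth", 2.5)]
fso = 2.0                                    # e.g. 100 CFU/g at consumption, in log10
final = final_log10_level(initial_log10=-1.0, events=chain)
print(f"level at consumption: {final:+.2f} log10 CFU/g  (FSO = {fso:.1f}) "
      f"-> {'meets' if final <= fso else 'fails'} FSO")
```

In this example the recontamination event, not the survivors of the 5-log treatment, dominates the level at consumption, which is the kind of outcome the multiple-introduction bookkeeping is meant to expose.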
