Similar Documents (20 results)
1.
A quantitative assessment of the exposure to Listeria monocytogenes from cold-smoked salmon (CSS) consumption in France is developed. The general framework is a second-order (or two-dimensional) Monte Carlo simulation, which characterizes the uncertainty and variability of the exposure estimate. The model takes into account the competitive bacterial growth between L. monocytogenes and the background competitive flora from the end of the production line to the consumer phase. An original algorithm is proposed to integrate this growth under conditions of varying temperature. As part of a more general project led by the French Food Safety Agency (Afssa), specific data were acquired and modeled for this quantitative exposure assessment model, particularly time-temperature profiles, prevalence data, and contamination-level data. The sensitivity analysis identifies the mean temperature of household refrigerators and the prevalence of contaminated CSS as the main drivers of the exposure level. The outputs of this model can be used as inputs for further risk assessment.
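As an illustration of the second-order structure described above, the following Python sketch separates uncertainty (outer loop) from variability (inner loop). All distributions, parameter values, and the toy growth rule are hypothetical placeholders, not the ones used in the Afssa model.

    import numpy as np

    rng = np.random.default_rng(0)
    N_UNC, N_VAR = 100, 10_000  # outer (uncertainty) and inner (variability) sample sizes

    exposure_p95 = []
    for _ in range(N_UNC):
        # Outer loop: draw uncertain parameters (hypothetical hyper-parameters)
        prevalence = rng.beta(2, 50)                  # uncertain prevalence of contaminated CSS
        mean_fridge_T = rng.normal(7.0, 0.5)          # uncertain mean household fridge temperature (degC)

        # Inner loop: variability between servings
        contaminated = rng.random(N_VAR) < prevalence
        log10_N0 = rng.normal(-1.0, 1.0, N_VAR)       # initial contamination, log10 CFU/g
        temp = rng.normal(mean_fridge_T, 2.0, N_VAR)  # serving-specific storage temperature
        days = rng.uniform(1, 20, N_VAR)              # storage duration, days
        mu = np.maximum(0.0, 0.02 * (temp + 2.0))     # toy growth rate, log10 CFU/g per day
        log10_N = np.minimum(log10_N0 + mu * days, 7.5)  # growth capped at a maximum density
        dose = np.where(contaminated, 10.0 ** log10_N, 0.0)

        exposure_p95.append(np.percentile(dose, 95))

    # Spread of the 95th-percentile exposure across the uncertainty loop
    print("95th-percentile exposure (CFU/g): median over uncertainty =", np.median(exposure_p95))

Each outer iteration yields one variability distribution of exposure; the spread of a chosen summary statistic (here the 95th percentile) across outer iterations expresses uncertainty.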

2.
Consumer Phase Risk Assessment for Listeria monocytogenes in Deli Meats
The foodborne disease risk associated with the pathogen Listeria monocytogenes has been the subject of recent efforts in quantitative microbial risk assessment. Building upon one of these efforts undertaken jointly by the U.S. Food and Drug Administration and the U.S. Department of Agriculture (USDA), the purpose of this work was to expand on the consumer phase of the risk assessment to focus on handling practices in the home. One-dimensional Monte Carlo simulation was used to model variability in growth and cross-contamination of L. monocytogenes during food storage and preparation of deli meats. Simulations approximated that 0.3% of the servings were contaminated with >10^4 CFU/g of L. monocytogenes at the time of consumption. The estimated mean risk associated with the consumption of deli meats for the intermediate-age population was approximately 7 deaths per 10^11 servings. Food handling in homes increased the estimated mean mortality by 10^6-fold. Of all the home food-handling practices modeled, inadequate storage, particularly refrigeration temperatures, provided the greatest contribution to increased risk. The impact of cross-contamination in the home was considerably less. Adherence to USDA Food Safety and Inspection Service recommendations for consumer handling of ready-to-eat foods substantially reduces the risk of listeriosis.

3.
In this study, a variance-based global sensitivity analysis method was first applied to a contamination assessment model of Listeria monocytogenes in cold smoked vacuum packed salmon at consumption. The impact of the choice of the modeling approach (populational or cellular) for the primary and secondary models, as well as the effect of their associated input factors on the final contamination level, was investigated. Results provided a subset of important factors, including the food water activity, its storage temperature, and the storage duration in the domestic refrigerator. A refined sensitivity analysis was then performed to rank these important factors, tested over narrower ranges of variation corresponding to their current distributions, using three techniques: ANOVA, the Spearman correlation coefficient, and partial least squares regression.
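For readers unfamiliar with variance-based global sensitivity analysis, the sketch below estimates first-order Sobol indices with a pick-freeze (Saltelli-type) estimator on a deliberately simplified contamination model; the factor ranges and the model itself are illustrative, not those of the published study.

    import numpy as np

    rng = np.random.default_rng(1)

    def final_log10_count(X):
        # Toy contamination model; columns = [water activity, storage temperature (degC), storage time (days)].
        # Coefficients are illustrative only.
        aw, temp, time = X[:, 0], X[:, 1], X[:, 2]
        mu = 0.03 * (temp + 2.0) * (aw - 0.92) / 0.06   # toy secondary model, log10/day
        return np.clip(mu * time, 0.0, 8.0)

    names = ["a_w", "temperature", "time"]
    bounds = np.array([[0.94, 0.99], [2.0, 12.0], [1.0, 28.0]])

    N = 20_000
    A = rng.uniform(bounds[:, 0], bounds[:, 1], size=(N, 3))
    B = rng.uniform(bounds[:, 0], bounds[:, 1], size=(N, 3))
    yA, yB = final_log10_count(A), final_log10_count(B)
    var_y = np.var(np.concatenate([yA, yB]))

    for i, name in enumerate(names):
        ABi = A.copy()
        ABi[:, i] = B[:, i]                 # "pick-freeze": swap in column i from the second matrix
        S_i = np.mean(yB * (final_log10_count(ABi) - yA)) / var_y   # first-order index estimator
        print(f"First-order Sobol index, {name}: {S_i:.2f}")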

4.
Currently, there is a growing preference for convenience food products, such as ready-to-eat (RTE) foods, which have long refrigerated shelf-lives and do not require a heat treatment prior to consumption. Because Listeria monocytogenes is able to grow at refrigeration temperatures, inconsistent temperatures during production, distribution, and in the consumer's household may allow the pathogen to reach unsafe levels. L. monocytogenes is the causative agent of listeriosis, a rare but severe human illness with high fatality rates, transmitted almost exclusively by food consumption. With the aim of assessing the quantitative microbial risk of L. monocytogenes in RTE chicken salads, a challenge test was performed. Salads were inoculated with a three-strain mixture of cold-adapted L. monocytogenes and stored at 4, 12, and 16 °C for eight days. Results revealed that the salad was able to support L. monocytogenes growth, even at refrigeration temperatures. The Baranyi primary model was fitted to the microbiological data to estimate the pathogen's growth kinetic parameters. The effect of temperature on the maximum specific growth rate (μmax) was modeled using a square-root-type model, and storage temperature significantly influenced μmax of L. monocytogenes (p < 0.05). The predicted growth models were subsequently used to develop a quantitative microbial risk assessment, which estimated a median of 0.00008726 listeriosis cases per year linked to the consumption of these RTE salads. Sensitivity analysis considering different time-temperature scenarios indicated a very low median risk per portion (<−7 log), even if the assessed RTE chicken salad was kept under abuse storage conditions.
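The square-root-type secondary model mentioned above is linear in temperature after a square-root transform, so its parameters can be obtained by ordinary least squares. The sketch below assumes hypothetical μmax estimates at the three storage temperatures; the numbers are not the study's data.

    import numpy as np

    # Hypothetical mu_max estimates (1/h) at the three storage temperatures -- illustrative only
    T = np.array([4.0, 12.0, 16.0])
    mu_max = np.array([0.010, 0.055, 0.110])

    # Square-root (Ratkowsky-type) model: sqrt(mu_max) = b * (T - Tmin)
    slope, intercept = np.polyfit(T, np.sqrt(mu_max), deg=1)
    b, Tmin = slope, -intercept / slope
    print(f"b = {b:.4f} sqrt(1/h)/degC, Tmin = {Tmin:.1f} degC")

    def predicted_mu_max(temp):
        return (b * np.maximum(0.0, temp - Tmin)) ** 2

    print("Predicted mu_max at 8 degC:", round(float(predicted_mu_max(8.0)), 4), "1/h")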

5.
Recently, lag phase research in predictive microbiology has focused more on individual cell variability, especially for pathogenic microorganisms that typically occur at very low contamination levels, like Listeria monocytogenes. In this study, the effect of this individual cell lag phase variability was introduced in an exposure assessment study for L. monocytogenes in a liver pâté. A basic framework was designed to estimate the contamination level of pâté at the time of consumption, taking into account the frequency of contamination and the initial contamination levels of pâté at retail. Growth was calculated on pâté units of 150 g, comparing an individual-based approach with a classical population-based approach. The two protocols were compared using simulations. If only the individual cell lag variability was taken into account, important differences in cell density at the time of consumption were observed between the individual-based approach and the classical approach, especially at low inoculum levels, resulting in high variability under the individual-based approach. However, when all variable factors were taken into account, no significant differences were observed between the approaches, leading to the conclusion that the individual cell lag phase variability was overruled by the global variability of the exposure assessment framework. Even in more extreme conditions, such as a low inoculum level or a low water activity, no differences in cell density at the time of consumption were observed between the individual-based approach and the classical approach. This means that the individual cell lag phase variability of L. monocytogenes has important consequences when studying specific growth cases, especially when the applied inoculum levels are low, but in more general exposure assessment studies, the variability between individual cell lag phases is too limited to have a major impact on the total exposure assessment.
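The contrast between the population-based and individual-based approaches can be made concrete with a small simulation: below, each cell of the inoculum draws its own lag time, and the resulting variability in the final count shrinks as the inoculum grows. The lag-time distribution and growth rate are illustrative assumptions, not the study's values.

    import numpy as np

    rng = np.random.default_rng(2)
    mu = 0.2        # specific growth rate after the lag, log10 units/h (illustrative)
    t = 72.0        # storage time, h
    pop_lag = 10.0  # single population lag, h (illustrative)

    def final_counts(n0, n_sims=10_000):
        # Population-based: all cells start growing after the same lag
        pop = n0 * 10.0 ** (mu * max(0.0, t - pop_lag))
        # Individual-based: each cell draws its own lag time (gamma with mean 10 h, illustrative)
        lags = rng.gamma(shape=4.0, scale=2.5, size=(n_sims, n0))
        ind = np.sum(10.0 ** (mu * np.clip(t - lags, 0.0, None)), axis=1)
        return pop, ind

    for n0 in (1, 5, 100):
        pop, ind = final_counts(n0)
        print(f"n0={n0:>3}: population-based log10 N = {np.log10(pop):.2f}, "
              f"individual-based median {np.log10(np.median(ind)):.2f} "
              f"(5th-95th pct {np.log10(np.percentile(ind, 5)):.2f}-{np.log10(np.percentile(ind, 95)):.2f})")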

6.
Evaluations of Listeria monocytogenes dose-response relationships are crucially important for risk assessment and risk management, but are complicated by considerable variability across population subgroups and L. monocytogenes strains. Despite difficulties associated with the collection of adequate data from outbreak investigations or sporadic cases, the limitations of currently available animal models, and the inability to conduct human volunteer studies, some of the available data now allow refinements of the well-established exponential L. monocytogenes dose response to more adequately represent extremely susceptible population subgroups and highly virulent L. monocytogenes strains. Here, a model incorporating adjustments for variability in L. monocytogenes strain virulence and host susceptibility was derived for 11 population subgroups with similar underlying comorbidities using data from multiple sources, including human surveillance and food survey data. In light of the unique inherent properties of L. monocytogenes dose response, a lognormal-Poisson dose-response model was chosen, and proved able to reconcile dose-response relationships developed based on surveillance data with outbreak data. This model was compared to a classical beta-Poisson dose-response model, which was insufficiently flexible for modeling the specific case of L. monocytogenes dose-response relationships, especially in outbreak situations. Overall, the modeling results suggest that most listeriosis cases are linked to the ingestion of food contaminated with medium to high concentrations of L. monocytogenes. While additional data are needed to refine the derived model and to better characterize and quantify the variability in L. monocytogenes strain virulence and individual host susceptibility, the framework derived here represents a promising approach to more adequately characterize the risk of listeriosis in highly susceptible population subgroups.
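To illustrate how a lognormal-Poisson formulation differs from the classical exponential model, the sketch below marginalizes the single-hit probability of illness over a lognormal distribution of the parameter r, representing variability in strain virulence and host susceptibility. The values of r and its spread are illustrative placeholders, not the parameters derived in the article.

    import numpy as np

    rng = np.random.default_rng(3)

    def p_ill_exponential(dose, r=1e-12):
        # Classical exponential dose-response with one fixed r (illustrative value)
        return 1.0 - np.exp(-r * dose)

    def p_ill_lognormal_poisson(dose, log10_r_mean=-13.0, log10_r_sd=2.0, n=100_000):
        # Marginal probability of illness when log10(r) varies across strain/host pairs
        r = 10.0 ** rng.normal(log10_r_mean, log10_r_sd, n)
        return np.mean(1.0 - np.exp(-np.outer(r, dose)), axis=0)

    doses = np.array([1e2, 1e5, 1e8])   # CFU ingested per serving
    print("dose        exponential    lognormal-Poisson")
    for d, pe, pl in zip(doses, p_ill_exponential(doses), p_ill_lognormal_poisson(doses)):
        print(f"{d:8.0e}   {pe:11.2e}   {pl:11.2e}")

With these illustrative parameters, the marginal risk at low doses is driven largely by the rare draws of high r, that is, by the most virulent strains reaching the most susceptible hosts.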

7.
A pragmatic quantitative risk assessment (QRA) of the risks of waterborne Cryptosporidium parvum infection and cryptosporidiosis in immunocompetent and immunodeficient French populations is proposed. The model takes into account French specificities such as the performance of the French oocyst enumeration technique and tap water consumption. The proportion of infective oocysts is based on a literature review and expert knowledge. The probability of infection for a given number of ingested viable oocysts is modeled using the exponential dose-response model applied to published data from experimental infections in immunocompetent human volunteers challenged with the IOWA strain. Second-order Monte Carlo simulations are used to characterize the uncertainty and variability of the risk estimates. Daily risks of infection and illness for the immunocompetent and the immunodeficient populations are estimated according to the number of oocysts observed in a single storage reservoir water sample. As an example, the mean daily risk of infection in the immunocompetent population is estimated to be 1.08 × 10^-4 (95% confidence interval: [0.20 × 10^-4; 6.83 × 10^-4]) when five oocysts are observed in a 100 L storage reservoir water sample. Annual risks of infection and disease are estimated from a set of oocyst enumeration results from distributed water samples, assuming a negative binomial distribution for day-to-day contamination variation. The model and the various assumptions used in it are fully explained and discussed. While the caveats of this model are well recognized, this pragmatic QRA could represent a useful tool for the French Food Safety Agency (AFSSA) to define recommendations in case of water resource contamination by C. parvum whose infectivity is comparable to that of the IOWA strain.

8.
Modeling Logistic Performance in Quantitative Microbial Risk Assessment
In quantitative microbial risk assessment (QMRA), food safety in the food chain is modeled and simulated. In general, prevalences, concentrations, and numbers of microorganisms in media are investigated in the different steps from farm to fork. The underlying rates and conditions (such as storage times, temperatures, gas conditions, and their distributions) are determined. However, the logistic chain, with its queues (storages, shelves) and mechanisms for ordering products, is usually not taken into account. As a consequence, storage times, which are mutually dependent across successive steps in the chain, cannot be described adequately. This may have a great impact on the tails of risk distributions. Because food safety risks are generally very small, it is crucial to model the tails of the underlying distributions as accurately as possible. Logistic performance can be modeled by describing the underlying planning and scheduling mechanisms in discrete-event modeling. This is common practice in operations research, specifically in supply chain management. In this article, we present the application of discrete-event modeling in the context of a QMRA for Listeria monocytogenes in fresh-cut iceberg lettuce. We show the potential value of discrete-event modeling in QMRA by calculating logistic interventions (modifications in the logistic chain) and determining their significance with respect to food safety.
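A flavor of how discrete-event logistics modeling differs from drawing storage times independently: in the sketch below, a FIFO retail shelf is restocked under a simple reorder policy, and the distribution of time-on-shelf (including its tail) emerges from the ordering mechanism and demand process rather than being an independent input. The policy parameters and demand rate are hypothetical.

    import numpy as np
    from collections import deque

    rng = np.random.default_rng(4)

    shelf = deque()                 # FIFO shelf: each entry records the day the pack arrived
    shelf_times = []
    REORDER_LEVEL, ORDER_SIZE, DAYS = 10, 25, 365   # hypothetical (s, Q) ordering policy

    for day in range(DAYS):
        if len(shelf) < REORDER_LEVEL:              # reorder event
            shelf.extend([day] * ORDER_SIZE)
        for _ in range(rng.poisson(8)):             # daily demand, oldest pack sold first
            if shelf:
                shelf_times.append(day - shelf.popleft())

    shelf_times = np.array(shelf_times)
    print(f"mean shelf time {shelf_times.mean():.1f} d, "
          f"95th percentile {np.percentile(shelf_times, 95):.1f} d, max {shelf_times.max()} d")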

9.
Listeria monocytogenes is a leading cause of hospitalization, fetal loss, and death due to foodborne illnesses in the United States. A quantitative assessment of the relative risk of listeriosis associated with the consumption of 23 selected categories of ready-to-eat foods, published by the U.S. Department of Health and Human Services and the U.S. Department of Agriculture in 2003, has been instrumental in identifying the food products and practices that pose the greatest listeriosis risk and has guided the evaluation of potential intervention strategies. Dose-response models, which quantify the relationship between an exposure dose and the probability of adverse health outcomes, were essential components of the risk assessment. However, because of data gaps and limitations in the available data and modeling approaches, considerable uncertainty existed. Since publication of the risk assessment, new data have become available for modeling L. monocytogenes dose-response. At the same time, recent advances in the understanding of L. monocytogenes pathophysiology and strain diversity have warranted a critical reevaluation of the published dose-response models. To discuss strategies for modeling L. monocytogenes dose-response, the Interagency Risk Assessment Consortium (IRAC) and the Joint Institute for Food Safety and Applied Nutrition (JIFSAN) held a scientific workshop in 2011 (details available at http://foodrisk.org/irac/events/). The main findings of the workshop and the most current and relevant data identified during the workshop are summarized and presented in the context of L. monocytogenes dose-response. This article also discusses new insights on dose-response modeling for L. monocytogenes and research opportunities to meet future needs.

10.
This article presents a Listeria monocytogenes growth model in milk at the farm bulk tank stage. The main objective was to judge the feasibility and value to risk assessors of introducing a complex model, including a complete thermal model, within a microbial quantitative risk assessment scheme. Predictive microbiology models are used under varying temperature conditions to predict bacterial growth. Input distributions are estimated from data in the literature when available; otherwise, reasonable assumptions are made for the considered context. Previously published results based on a Bayesian analysis of growth parameters are used. A Monte Carlo simulation that forecasts bacterial growth is the focus of this study. Three scenarios that take account of the variability and uncertainty of growth parameters are compared. The effect of a sophisticated thermal model taking account of continuous variations in milk temperature was tested by comparison with a simplified model where milk temperature was considered constant. Limited multiplication of bacteria within the farm bulk tank was modeled. The two principal factors influencing bacterial growth were found to be tank thermostat regulation and bacterial population growth parameters. The dilution phenomenon due to the introduction of new milk was the main factor affecting the final bacterial concentration. The results show that a model that assumes constant environmental conditions at an average temperature should be acceptable for this process. This work may constitute a first step toward exposure assessment for L. monocytogenes in milk. In addition, this partly conceptual work provides guidelines for other risk assessments where continuous variation of a parameter needs to be taken into account.
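The comparison between a full thermal model and a constant-temperature simplification amounts to integrating the growth rate over a time-temperature profile versus evaluating it once at the average temperature. The sketch below does this for an invented 48-hour tank profile and an illustrative square-root secondary model; neither is taken from the article.

    import numpy as np

    # Invented hourly milk-temperature profile over 48 h (degC): warming spikes when
    # fresh milk is added every 12 h, then cooling back toward the thermostat set point.
    hours = np.arange(48)
    temp = 4.0 + 6.0 * np.exp(-((hours % 12) / 2.0))

    def growth_rate(T, b=0.013, Tmin=-2.0):
        # Illustrative square-root secondary model, returning the rate in ln units/h
        return (b * np.maximum(0.0, T - Tmin)) ** 2

    # Varying temperature: integrate mu(T(t)) hour by hour (exponential phase assumed)
    ln_increase_varying = np.sum(growth_rate(temp) * 1.0)
    # Constant temperature: same model evaluated once at the time-averaged temperature
    ln_increase_constant = growth_rate(temp.mean()) * len(hours)

    print(f"log10 increase, varying T : {ln_increase_varying / np.log(10):.3f}")
    print(f"log10 increase, constant T: {ln_increase_constant / np.log(10):.3f}")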

11.
Concern about the degree of uncertainty and potential conservatism in deterministic point estimates of risk has prompted researchers to turn increasingly to probabilistic methods for risk assessment. With Monte Carlo simulation techniques, distributions of risk reflecting uncertainty and/or variability are generated as an alternative. In this paper the compounding of conservatism between the level associated with point estimate inputs selected from probability distributions and the level associated with the deterministic value of risk calculated using these inputs is explored. Two measures of compounded conservatism are compared and contrasted. The first measure considered, F, is defined as the ratio of the risk value Rd, calculated deterministically as a function of n inputs each at the jth percentile of its probability distribution, to the risk value Rj that falls at the jth percentile of the simulated risk distribution (i.e., F = Rd/Rj). The percentile of the simulated risk distribution which corresponds to the deterministic value Rd serves as a second measure of compounded conservatism. Analytical results for simple products of lognormal distributions are presented. In addition, a numerical treatment of several complex cases is presented using five simulation analyses from the literature to illustrate. Overall, there are cases in which conservatism compounds dramatically for deterministic point estimates of risk constructed from upper percentiles of input parameters, as well as those for which the effect is less notable. The analytical and numerical techniques discussed are intended to help analysts explore the factors that influence the magnitude of compounding conservatism in specific cases.
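The two measures of compounded conservatism can be reproduced numerically for the simple lognormal-product case mentioned above. In the sketch below, the risk is a product of three lognormal inputs with illustrative geometric means and geometric standard deviations; both F = Rd/Rj and the percentile at which Rd falls are reported.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(5)
    j = 95                                     # percentile used for the point-estimate inputs

    # Risk = product of three lognormal inputs (illustrative GMs and GSDs)
    gm = np.array([1.0, 0.5, 2.0])
    gsd = np.array([2.0, 1.5, 3.0])
    mu, sigma = np.log(gm), np.log(gsd)

    # Deterministic risk with every input at its j-th percentile
    z = stats.norm.ppf(j / 100)
    R_d = np.prod(np.exp(mu + z * sigma))

    # Simulated risk distribution
    samples = np.exp(rng.normal(mu, sigma, size=(100_000, 3))).prod(axis=1)
    R_j = np.percentile(samples, j)

    print(f"F = R_d / R_j = {R_d / R_j:.2f}")
    print(f"R_d falls at the {stats.percentileofscore(samples, R_d):.1f}th percentile of the simulated risk")

For products of lognormals, F exceeds 1 because the upper percentile of the product depends on the root-sum-square of the log standard deviations, whereas the deterministic calculation effectively uses their sum.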

12.
Monte Carlo simulations are commonplace in quantitative risk assessments (QRAs). Designed to propagate the variability and uncertainty associated with each individual exposure input parameter in a quantitative risk assessment, Monte Carlo methods statistically combine the individual parameter distributions to yield a single, overall distribution. Critical to such an assessment is the representativeness of each individual input distribution. The authors performed a literature review to collect and compare the distributions used in published QRAs for the parameters of body weight, food consumption, soil ingestion rates, breathing rates, and fluid intake. To provide a basis for comparison, all estimated exposure parameter distributions were evaluated with respect to four properties: consistency, accuracy, precision, and specificity. The results varied depending on the exposure parameter. Even where extensive, well-collected data exist, investigators used a variety of different distributional shapes to approximate these data. Where such data do not exist, investigators have collected their own data, often leading to substantial disparity in parameter estimates and subsequent choice of distribution. The present findings indicate that more attention must be paid to the data underlying these distributional choices. More emphasis should be placed on sensitivity analyses, quantifying the impact of assumptions, and on discussion of sources of variation as part of the presentation of any risk assessment results. If such practices and disclosures are followed, it is believed that Monte Carlo simulations can greatly enhance the accuracy and appropriateness of specific risk assessments. Without such disclosures, researchers will be increasing the size of the risk assessment "black box," a concern already raised by many critics of more traditional risk assessments.

13.
Peanut allergy is a public health concern, owing to its high prevalence in France and the severity of the reactions. Despite avoidance diets excluding peanut-containing products, a risk may exist due to the adventitious presence of peanut allergens in a wide range of food products. Peanut is not mentioned in the ingredient lists of these products, but precautionary labeling is often present. A method of quantifying the risk of allergic reactions following the consumption of such products is developed, taking the example of peanut in chocolate tablets. The occurrence of adventitious peanut proteins in chocolate and the dose-response relationship are estimated with a Bayesian approach using available published data. The consumption pattern is described by the French individual consumption survey INCA2. Risk simulations are performed using second-order Monte Carlo simulations, which propagate the variability and uncertainty of the model input variables separately. Peanut allergens occur in approximately 36% of the chocolates, leading to a mean exposure level of 0.2 mg of peanut proteins per eating occasion. The estimated risk of reaction averages 0.57% per eating occasion for peanut-allergic adults. The 95th percentile of the risk lies between 0 and 3.61%, which illustrates the risk variability. The uncertainty, represented by 95% credible intervals, is concentrated around these risk estimates. Results for children are similar. The conclusion is that adventitious peanut allergens induce a risk of reaction for part of the French peanut-allergic population. The method developed can be generalized to assess the risk due to the consumption of any foodstuff potentially contaminated by allergens.

14.
The uncertainty associated with estimates should be taken into account in quantitative risk assessment. Each input's uncertainty can be characterized through a probability distribution for use in Monte Carlo simulations. In this study, the sampling uncertainty associated with estimating a low proportion on the basis of a small sample size was considered. A common application in microbial risk assessment is the estimation of a prevalence (the proportion of contaminated food products) on the basis of few tested units. Three Bayesian approaches (based on beta(0, 0), beta(1/2, 1/2), and beta(1, 1) priors) and one frequentist approach (based on the frequentist confidence distribution) were compared and evaluated on the basis of simulations. For small samples, we demonstrated some differences between the four tested methods. We concluded that the best method depends on the true proportion of contaminated products, which is by definition unknown in common practice. When no prior information is available, we recommend the beta(1/2, 1/2) prior or the confidence distribution. To illustrate the importance of these differences, the four methods were used in an applied example. We performed two-dimensional Monte Carlo simulations to estimate the proportion of cold smoked salmon packs contaminated by Listeria monocytogenes, one dimension representing within-factory uncertainty, modeled by each of the four studied methods, and the other dimension representing variability between companies.
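A minimal sketch of the three Bayesian options for a prevalence estimated from a small sample, using the conjugate beta posterior for a binomial count; the sample (2 positive packs out of 25 tested) is invented for illustration. Note that the beta(0, 0) prior is improper and yields a proper posterior only when at least one positive and one negative result are observed.

    from scipy import stats

    k, n = 2, 25   # 2 contaminated packs out of 25 tested -- illustrative sample

    priors = {
        "beta(0, 0)":     (0.0, 0.0),   # improper (Haldane) prior
        "beta(1/2, 1/2)": (0.5, 0.5),   # Jeffreys prior
        "beta(1, 1)":     (1.0, 1.0),   # uniform prior
    }

    for name, (a, b) in priors.items():
        post = stats.beta(a + k, b + n - k)          # conjugate posterior for a binomial proportion
        lo, hi = post.ppf([0.025, 0.975])
        print(f"{name:>14}: posterior mean {post.mean():.3f}, 95% interval [{lo:.3f}, {hi:.3f}]")

In the two-dimensional setting described above, such a posterior (or the confidence distribution) would supply the within-factory uncertainty dimension of the simulation.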

15.
Whether and to what extent contaminated sites harm ecologic and human health are topics of considerable interest, but also considerable uncertainty. Several federal and state agencies have approved the use of some or many aspects of probabilistic risk assessment (PRA), but its site-specific application has often been limited to high-profile sites and large projects. Nonetheless, times are changing: newly developed software tools, and recent federal and state guidance documents formalizing PRA procedures, now make PRA a readily available method of analysis for even small-scale projects. This article presents and discusses a broad review of PRA literature published since 2000.

16.
Increasing evidence suggests that persistence of Listeria monocytogenes in food processing plants has been the underlying cause of a number of human listeriosis outbreaks. This study extracts criteria used by food safety experts in determining bacterial persistence in the environment, using retail delicatessen operations as a model. Using the Delphi method, we conducted an expert elicitation with 10 food safety experts from academia, industry, and government to classify L. monocytogenes persistence based on environmental sampling results collected over six months for 30 retail delicatessen stores. The results were modeled using variations of random forest, support vector machine, logistic regression, and linear regression; variable importance values of random forest and support vector machine models were consolidated to rank important variables in the experts’ classifications. The duration of subtype isolation ranked most important across all expert categories. Sampling site category also ranked high in importance and validation errors doubled when this covariate was removed. Support vector machine and random forest models successfully classified the data with average validation errors of 3.1% and 2.2% (n = 144), respectively. Our findings indicate that (i) the frequency of isolations over time and sampling site information are critical factors for experts determining subtype persistence, (ii) food safety experts from different sectors may not use the same criteria in determining persistence, and (iii) machine learning models have potential for future use in environmental surveillance and risk management programs. Future work is necessary to validate the accuracy of expert and machine classification against biological measurement of L. monocytogenes persistence.

17.
Although there has been nearly complete agreement in the scientific community that Monte Carlo techniques represent a significant improvement in the exposure assessment process, virtually all state and federal risk assessments still rely on the traditional point estimate approach. One of the rate-determining steps to a timely implementation of Monte Carlo techniques to regulatory decision making is the development of "standard" data distributions that are considered applicable to any setting. For many exposure variables, there is no need to wait any longer to adopt Monte Carlo techniques into regulatory policy since there is a wealth of data from which a robust distribution can be developed and ample evidence to indicate that the variable is not significantly influenced by site-specific conditions. In this paper, we propose several distributions that can be considered standard and customary for most settings. Age-specific distributions for soil ingestion rates, inhalation rates, body weights, skin surface area, tapwater and fish consumption, residential occupancy and occupational tenure, and soil-on-skin adherence were developed. For each distribution offered in this paper, we discuss the adequacy of the database, derivation of the distribution, and applicability of the distribution to various settings and conditions.

18.
We propose 14 principles of good practice to assist people in performing and reviewing probabilistic or Monte Carlo risk assessments, especially in the context of the federal and state statutes concerning chemicals in the environment. Monte Carlo risk assessments for hazardous waste sites that follow these principles will be easier to understand, will explicitly distinguish assumptions from data, and will consider and quantify effects that could otherwise lead to misinterpretation of the results. The proposed principles are neither mutually exclusive nor collectively exhaustive. We think and hope that these principles will evolve as new ideas arise and come into practice.

19.
Standard statistical methods understate the uncertainty one should attach to effect estimates obtained from observational data. Among the methods used to address this problem are sensitivity analysis, Monte Carlo risk analysis (MCRA), and Bayesian uncertainty assessment. Estimates from MCRAs have been presented as if they were valid frequentist or Bayesian results, but examples show that they need not be either in actual applications. It is concluded that both sensitivity analyses and MCRA should begin with the same type of prior specification effort as Bayesian analysis.

20.
Applications of Monte Carlo simulation methods to quantitative risk assessment are becoming increasingly popular. With this methodology, investigators have become concerned about correlations among input variables that might affect the resulting distribution of risk. We show that the choice of input distributions in these simulations likely has a larger effect on the resultant risk distribution than does the inclusion or exclusion of correlations. Previous investigators have studied the effect of correlated input variables for the addition of variables with any underlying distribution and for the product of lognormally distributed variables. The effects in the main part of the distribution are small unless the correlation and variances are large. We extend this work by considering addition, multiplication, and division of two variables with assumed normal, lognormal, uniform, and triangular distributions. For all possible pairwise combinations, we find that the effects of correlated input variables are similar to those observed for lognormal distributions, and thus relatively small overall. The effect of using different distributions, however, can be large.
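The following sketch reproduces, in miniature, the kind of comparison described: the 95th percentiles of the sum and product of two lognormal inputs are computed with and without correlation, the correlation being induced on the underlying normal scale (a Gaussian copula). Parameter values are illustrative.

    import numpy as np

    rng = np.random.default_rng(6)
    N = 200_000
    mu, sigma = np.array([0.0, 0.0]), np.array([0.5, 0.5])

    def lognormal_pair(corr):
        # Two lognormal inputs correlated through their underlying normal logs
        cov = np.array([[1.0, corr], [corr, 1.0]]) * np.outer(sigma, sigma)
        return np.exp(rng.multivariate_normal(mu, cov, size=N))

    for corr in (0.0, 0.8):
        x = lognormal_pair(corr)
        s, p = x.sum(axis=1), x.prod(axis=1)
        print(f"corr={corr:.1f}: 95th pct of sum {np.percentile(s, 95):.2f}, "
              f"of product {np.percentile(p, 95):.2f}")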
