Similar Articles (20 results)
1.
Lag-phase research in predictive microbiology has recently focused more on individual cell variability, especially for pathogenic microorganisms that typically occur at very low contamination levels, such as Listeria monocytogenes. In this study, the effect of this individual cell lag phase variability was introduced into an exposure assessment for L. monocytogenes in a liver paté. A basic framework was designed to estimate the contamination level of paté at the time of consumption, taking into account the frequency of contamination and the initial contamination levels of paté at retail. Growth was calculated for paté units of 150 g, comparing an individual-based approach with a classical population-based approach, and the two protocols were compared using simulations. If only the individual cell lag variability was taken into account, important differences in cell density at the time of consumption were observed between the individual-based approach and the classical approach, especially at low inoculum levels, resulting in high variability under the individual-based approach. However, when all variable factors were taken into account, no significant differences were observed between the two approaches, leading to the conclusion that the individual cell lag phase variability was overruled by the global variability of the exposure assessment framework. Even under more extreme conditions, such as a low inoculum level or a low water activity, no differences in cell density at the time of consumption arose between the individual-based approach and the classical approach. This means that the individual cell lag phase variability of L. monocytogenes has important consequences when studying specific growth cases, especially at low inoculum levels, but in more general exposure assessment studies the variability between individual cell lag phases is too limited to have a major impact on the total exposure assessment.
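As a rough illustration of why the two protocols diverge at low inoculum levels, the following Python sketch contrasts individual-cell lags with a single population lag. All parameter values (growth rate, lag distribution, storage time) are assumed for illustration and are not taken from the study.

```python
import numpy as np

rng = np.random.default_rng(1)

MU = 0.2                        # specific growth rate (1/h), assumed
T = 72.0                        # storage time (h), assumed
MEAN_LAG, SD_LAG = 10.0, 6.0    # individual-cell lag distribution (h), assumed

def individual_based(n0):
    """Each cell gets its own lag; cells grow exponentially afterwards."""
    lags = rng.gamma((MEAN_LAG / SD_LAG) ** 2, SD_LAG ** 2 / MEAN_LAG, size=n0)
    growth_time = np.clip(T - lags, 0.0, None)
    return np.log10(np.sum(np.exp(MU * growth_time)))

def population_based(n0):
    """All cells share one population lag equal to the mean individual lag."""
    growth_time = max(T - MEAN_LAG, 0.0)
    return np.log10(n0 * np.exp(MU * growth_time))

for n0 in (1, 5, 100):
    ind = [individual_based(n0) for _ in range(2000)]
    print(f"N0={n0:4d}  individual-based: {np.mean(ind):.2f} +/- {np.std(ind):.2f} log10 CFU"
          f" | population-based: {population_based(n0):.2f} log10 CFU")
```

With a single cell the individual-based outcome spreads widely around the population-based value, while at 100 cells the two essentially coincide, mirroring the inoculum-size effect described in the abstract.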

2.
Cheese smearing is a complex process and the potential for cross-contamination with pathogenic or undesirable microorganisms is critical. During ripening, cheeses are salted and washed with brine to develop flavor and remove molds that could develop on the surfaces. Considering the potential for cross-contamination of this process in quantitative risk assessments could contribute to a better understanding of this phenomenon and, eventually, improve its control. The purpose of this article is to model the cross-contamination of smear-ripened cheeses due to the smearing operation under industrial conditions. A compartmental, dynamic, and stochastic model is proposed for mechanical brush smearing. This model has been developed to describe the exchange of microorganisms between compartments. Based on the analytical solution of the model equations and on experimental data collected with an industrial smearing machine, we assessed the values of the transfer parameters of the model. Monte Carlo simulations, using the distributions of transfer parameters, provide the final number of contaminated products in a batch and their final level of contamination for a given scenario taking into account the initial number of contaminated cheeses of the batch and their contaminant load. Based on analytical results, the model provides indicators for smearing efficiency and propensity of the process for cross-contamination. Unlike traditional approaches in mechanistic models, our approach captures the variability and uncertainty inherent in the process and the experimental data. More generally, this model could represent a generic base to use in modeling similar processes prone to cross-contamination.
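A minimal sketch of the kind of compartmental transfer logic described above, assuming hypothetical per-cell transfer probabilities between the brush and the cheese surface; the fitted industrial transfer parameters and the full stochastic model are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(7)

P_CHEESE_TO_BRUSH = 0.10   # assumed per-cell transfer probability, cheese -> brush
P_BRUSH_TO_CHEESE = 0.05   # assumed per-cell transfer probability, brush -> cheese
BATCH_SIZE = 200

def smear_batch(initial_loads):
    """Process a batch sequentially with one brush; return final load on each cheese."""
    brush = 0
    final = np.zeros_like(initial_loads)
    for i, load in enumerate(initial_loads):
        to_brush = rng.binomial(load, P_CHEESE_TO_BRUSH)   # picked up by the brush
        to_cheese = rng.binomial(brush, P_BRUSH_TO_CHEESE) # deposited on this cheese
        brush += to_brush - to_cheese
        final[i] = load - to_brush + to_cheese
    return final

# one contaminated cheese (10^4 CFU) enters first; the rest are initially clean
loads = np.zeros(BATCH_SIZE, dtype=int)
loads[0] = 10_000
final = smear_batch(loads)
print("cross-contaminated cheeses:", np.sum(final[1:] > 0))
print("max load on an initially clean cheese:", final[1:].max())
```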

3.
Jan F. Van Impe, Risk Analysis, 2011, 31(8): 1295–1307
The aim of quantitative microbiological risk assessment is to estimate the risk of illness caused by the presence of a pathogen in a food type, and to study the impact of interventions. Because of inherent variability and uncertainty, risk assessments are generally conducted stochastically, and if possible it is advised to characterize variability separately from uncertainty. Sensitivity analysis indicates to which of the input variables the outcome of a quantitative microbiological risk assessment is most sensitive. Although a number of methods exist to apply sensitivity analysis to a risk assessment with probabilistic input variables (such as contamination, storage temperature, storage duration, etc.), it is challenging to perform sensitivity analysis when a risk assessment includes a separate characterization of variability and uncertainty of input variables. A procedure is proposed that focuses on the relation between risk estimates obtained by Monte Carlo simulation and the location of pseudo‐randomly sampled input variables within the uncertainty and variability distributions. Within this procedure, two methods are used—that is, an ANOVA‐like model and Sobol sensitivity indices—to obtain and compare the impact of variability and of uncertainty of all input variables, and of model uncertainty and scenario uncertainty. As a case study, this methodology is applied to a risk assessment to estimate the risk of contracting listeriosis due to consumption of deli meats.
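The first-order Sobol index used in such a procedure can be estimated directly from paired Monte Carlo samples. The sketch below applies the standard A/B/AB estimator to a toy risk model in which two inputs represent variability and one represents uncertainty; the model and all distributions are assumed for illustration only and are not those of the deli-meat case study.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 20_000

def risk_model(x):
    """Toy log10 risk; columns = [storage temp (variability), dose-response
    parameter (uncertainty), storage time (variability)] -- all assumed."""
    temp, r_param, time = x.T
    log_growth = 0.05 * temp * time
    return log_growth + np.log10(r_param)

def sample_inputs(n):
    return np.column_stack([
        rng.normal(7.0, 2.0, n),        # storage temperature, deg C
        rng.lognormal(-12.0, 1.0, n),   # dose-response parameter
        rng.uniform(1.0, 10.0, n),      # storage time, days
    ])

A, B = sample_inputs(N), sample_inputs(N)
fA, fB = risk_model(A), risk_model(B)
var_total = np.var(np.concatenate([fA, fB]))

for j, name in enumerate(["temperature (variability)",
                          "dose-response param (uncertainty)",
                          "time (variability)"]):
    AB = A.copy()
    AB[:, j] = B[:, j]                  # replace one column from the B sample
    fAB = risk_model(AB)
    S1 = np.mean(fB * (fAB - fA)) / var_total   # first-order Sobol estimator
    print(f"{name:35s} S1 = {S1:.2f}")
```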

4.
A Bayesian statistical temporal‐prevalence‐concentration model (TPCM) was built to assess the prevalence and concentration of pathogenic Campylobacter species in batches of fresh chicken and turkey meat at retail. The data set was collected from Finnish grocery stores in all the seasons of the year. Observations at low concentration levels are often censored due to the limit of determination of the microbiological methods. This model utilized the potential of Bayesian methods to borrow strength from related samples in order to perform under heavy censoring. In this extreme case the majority of the observed batch‐specific concentrations were below the limit of determination. The hierarchical structure was included in the model in order to take into account the within‐batch and between‐batch variability, which may have a significant impact on the sample outcome depending on the sampling plan. Temporal changes in the prevalence of Campylobacter were modeled using a Markovian time series. The proposed model is adaptable for other pathogens if the same type of data set is available. The computation of the model was performed using OpenBUGS software.
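The key technical point when most concentrations fall below the limit of determination (LOD) is that censored observations enter the likelihood through the cumulative probability below the limit rather than through a density term. A minimal, non-Bayesian maximum-likelihood sketch on simulated data illustrates this; the paper itself uses a full hierarchical Bayesian model in OpenBUGS, and all values below are assumed.

```python
import numpy as np
from scipy import optimize, stats

rng = np.random.default_rng(3)

TRUE_MU, TRUE_SD, LOD = 0.5, 1.2, 1.0    # log10 CFU/g, assumed
obs = rng.normal(TRUE_MU, TRUE_SD, 60)   # latent log10 concentrations
censored = obs < LOD                     # below limit of determination
obs_detected = obs[~censored]

def neg_log_lik(params):
    mu, log_sd = params
    sd = np.exp(log_sd)
    # detected samples contribute the density, censored ones the CDF below the LOD
    ll = stats.norm.logpdf(obs_detected, mu, sd).sum()
    ll += censored.sum() * stats.norm.logcdf(LOD, mu, sd)
    return -ll

fit = optimize.minimize(neg_log_lik, x0=[0.0, 0.0], method="Nelder-Mead")
mu_hat, sd_hat = fit.x[0], np.exp(fit.x[1])
print(f"{censored.sum()}/{len(obs)} observations censored;"
      f" estimated mu={mu_hat:.2f}, sd={sd_hat:.2f} (true {TRUE_MU}, {TRUE_SD})")
```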

5.
A simple and useful characterization of many predictive models is in terms of model structure and model parameters. Accordingly, uncertainties in model predictions arise from uncertainties in the values assumed by the model parameters (parameter uncertainty) and the uncertainties and errors associated with the structure of the model (model uncertainty). When assessing uncertainty, one is interested in identifying, at some level of confidence, the range of possible and then probable values of the unknown of interest. All sources of uncertainty and variability need to be considered. Although parameter uncertainty assessment has been extensively discussed in the literature, model uncertainty is a relatively new topic of discussion in the scientific community, despite often being the major contributor to the overall uncertainty. This article describes a Bayesian methodology for the assessment of model uncertainties, where models are treated as sources of information on the unknown of interest. The general framework is then specialized for the case where models provide point estimates about a single‐valued unknown, and where information about the models is available in the form of homogeneous and nonhomogeneous performance data (pairs of experimental observations and model predictions). Several example applications for physical models used in fire risk analysis are also provided.

6.
There has been an increasing interest in physiologically based pharmacokinetic (PBPK) models in the area of risk assessment. The use of these models raises two important issues: (1) How good are PBPK models for predicting experimental kinetic data? (2) How is the variability in the model output affected by the number of parameters and the structure of the model? To examine these issues, we compared a five-compartment PBPK model, a three-compartment PBPK model, and nonphysiological compartmental models of benzene pharmacokinetics. Monte Carlo simulations were used to take into account the variability of the parameters. The models were fitted to three sets of experimental data and a hypothetical experiment was simulated with each model to provide a uniform basis for comparison. Two main results are presented: (1) the difference is larger between the predictions of the same model fitted to different data sets than between the predictions of different models fitted to the same data; and (2) the type of data used to fit the model has a larger effect on the variability of the predictions than the type of model and the number of parameters.

7.
Regulatory agencies often perform microbial risk assessments to evaluate the change in the number of human illnesses as the result of a new policy that reduces the level of contamination in the food supply. These agencies generally have regulatory authority over the production and retail sectors of the farm‐to‐table continuum. Any predicted change in contamination that results from a new policy regulating production practices occurs many steps prior to consumption of the product. This study proposes a framework for conducting microbial food‐safety risk assessments; this framework can be used to quantitatively assess the annual effects of national regulatory policies. Advantages of the framework are that estimates of human illnesses are consistent with national disease surveillance data (which are usually summarized on an annual basis) and that some of the modeling steps that occur between production and consumption can be collapsed or eliminated. The framework leads to probabilistic models that include uncertainty and variability in critical input parameters; these models can be solved using a number of different Bayesian methods. The Bayesian synthesis method performs well for this application and generates posterior distributions of parameters that are relevant to assessing the effect of implementing a new policy. An example, based on Campylobacter and chicken, estimates the annual number of illnesses avoided by a hypothetical policy; this output could be used to assess the economic benefits of a new policy. Empirical validation of the policy effect is also examined by estimating the annual change in the numbers of illnesses observed via disease surveillance systems.

8.
According to Codex Alimentarius Commission recommendations, management options applied at the process production level should be based on good hygiene practices, the HACCP system, and new risk management metrics such as the food safety objective. To follow this last recommendation, the use of quantitative microbiological risk assessment is an appealing approach to link new risk‐based metrics to management options that may be applied by food operators. Through a specific case study, Listeria monocytogenes in soft cheese made from pasteurized milk, the objective of the present article is to show in practice how quantitative risk assessment could be used to direct potential intervention strategies at different food processing steps. Based on many assumptions, the model developed estimates the risk of listeriosis at the moment of consumption, taking into account the entire manufacturing process and potential sources of contamination. From pasteurization to consumption, the amplification of an initial contamination event of the milk, the fresh cheese, or the process environment is simulated over time, space, and between products, accounting for the impact of management options such as hygienic operations and sampling plans. A sensitivity analysis of the model will help prioritize the data to be collected for improving and validating the model. What‐if scenarios were simulated and allowed for the identification of major parameters contributing to the risk of listeriosis and the optimization of preventive and corrective measures.

9.
The quantification of the relationship between the amount of microbial organisms ingested and a specific outcome such as infection, illness, or mortality is a key aspect of quantitative risk assessment. A main problem in determining such dose-response models is the availability of appropriate data. Human feeding trials have been criticized because only young healthy volunteers are selected to participate and because low doses, as often occur in real life, are typically not considered. Epidemiological outbreak data are considered to be more valuable, but are more subject to data uncertainty. In this article, we model the dose-illness relationship based on data of 20 Salmonella outbreaks, as discussed by the World Health Organization. In particular, we model the dose-illness relationship using generalized linear mixed models and fractional polynomials of dose. The fractional polynomial models are modified to satisfy the properties of different types of dose-illness models as proposed by Teunis et al. Within these models, differences in host susceptibility (susceptible versus normal population) are modeled as fixed effects, whereas differences in serovar type and food matrix are modeled as random effects. In addition, two bootstrap procedures are presented. The first procedure accounts for stochastic variability, whereas the second procedure accounts for both stochastic variability and data uncertainty. The analyses indicate that the susceptible population has a higher probability of illness at low dose levels when the combination pathogen-food matrix is extremely virulent and at high dose levels when the combination is less virulent. Furthermore, the analyses suggest that immunity exists in the normal population but not in the susceptible population.
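A stripped-down version of the bootstrap idea: fit a simple logistic dose-illness curve to hypothetical outbreak records and resample outbreaks to capture stochastic variability. The illustrative data are not the WHO Salmonella outbreak data, and the GLMM/fractional-polynomial structure of the paper is omitted.

```python
import numpy as np
from scipy import optimize

rng = np.random.default_rng(11)

# illustrative outbreak records: (log10 dose, number exposed, number ill) -- assumed values
outbreaks = np.array([
    [2.0, 50, 4], [3.1, 80, 15], [4.2, 40, 18], [5.0, 60, 40],
    [2.6, 30, 3], [3.8, 70, 25], [4.9, 25, 16], [5.6, 45, 38],
])

def fit_dose_illness(data):
    """Binomial MLE of a logistic dose-illness curve on log10 dose."""
    logdose, n, y = data[:, 0], data[:, 1], data[:, 2]
    def nll(b):
        p = 1.0 / (1.0 + np.exp(-(b[0] + b[1] * logdose)))
        p = np.clip(p, 1e-12, 1 - 1e-12)
        return -np.sum(y * np.log(p) + (n - y) * np.log(1.0 - p))
    return optimize.minimize(nll, x0=[-3.0, 1.0], method="Nelder-Mead").x

def p_ill(b, logdose):
    return 1.0 / (1.0 + np.exp(-(b[0] + b[1] * logdose)))

# nonparametric bootstrap over outbreaks captures stochastic variability
boot = []
for _ in range(500):
    idx = rng.integers(0, len(outbreaks), len(outbreaks))
    boot.append(p_ill(fit_dose_illness(outbreaks[idx]), logdose=3.0))

lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"P(ill) at 10^3 organisms: {p_ill(fit_dose_illness(outbreaks), 3.0):.3f}"
      f" (bootstrap 95% CI {lo:.3f}-{hi:.3f})")
```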

10.
Biomagnification of organochlorine and other persistent organic contaminants by higher trophic level organisms represents one of the most significant sources of uncertainty and variability in evaluating potential risks associated with disposal of dredged materials. While it is important to distinguish between population variability (e.g., true population heterogeneity in fish weight and lipid content) and uncertainty (e.g., measurement error), they can be operationally difficult to define separately in probabilistic estimates of human health and ecological risk. We propose a disaggregation of uncertain and variable parameters based on: (1) availability of supporting data; (2) the specific management and regulatory context (in this case, of the U.S. Army Corps of Engineers/U.S. Environmental Protection Agency tiered approach to dredged material management); and (3) professional judgment and experience in conducting probabilistic risk assessments. We describe and quantitatively evaluate several sources of uncertainty and variability in estimating risk to human health from trophic transfer of polychlorinated biphenyls (PCBs) using a case study of sediments obtained from the New York-New Jersey Harbor and evaluated for disposal at an open-water offshore disposal site within the northeast region. The estimates of PCB concentrations in fish and dietary doses of PCBs to humans ingesting fish are expressed as distributions of values, of which the arithmetic mean or mode represents a particular fractile. The distribution of risk values is obtained using a food chain biomagnification model developed by Gobas by specifying distributions for input parameters disaggregated to represent either uncertainty or variability. Only those sources of uncertainty that could be quantified were included in the analysis. Results for several different two-dimensional Latin Hypercube analyses are provided to evaluate the influence of the uncertain versus variable disaggregation of model parameters. The analysis suggests that variability in human exposure parameters is greater than the uncertainty bounds on any particular fractile, given the described assumptions.

11.
The World Trade Organization introduced the concept of appropriate level of protection (ALOP) as a public health target. For this public health objective to be interpretable by the actors in the food chain, the concept of food safety objective (FSO) was proposed by the International Commission on Microbiological Specifications for Foods and adopted later by the Codex Alimentarius Food Hygiene Committee. How to translate an ALOP into an FSO is still under debate. The purpose of this article is to develop a methodological tool to derive an FSO from an ALOP expressed as a maximal annual marginal risk. We explore the different models relating the annual marginal risk to the parameters of the FSO depending on whether the variability in the survival probability and in the concentration of the pathogen are considered or not. If they are not, determination of the FSO is straightforward. If they are, we propose to use stochastic Monte Carlo simulation models and logistic discriminant analysis in order to determine which sets of parameters are compatible with the ALOP. The logistic discriminant function was chosen such that the kappa coefficient is maximized. We illustrate this method with the example of the risks of listeriosis and salmonellosis in one type of soft cheese. We conclude that the definition of the FSO should integrate three dimensions: the prevalence of contamination, the average concentration per contaminated typical serving, and the dispersion of the concentration among those servings.
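A hedged sketch of the simulation-plus-discriminant idea: candidate FSO parameter sets (prevalence, mean concentration, dispersion) are sampled, the annual marginal risk is computed with an exponential dose-response model, and a logistic discriminant separates the sets compatible with the ALOP. All numerical values and the dose-response form are assumptions for illustration, not the cheese case study's.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(5)

ALOP = 1e-4          # maximal annual marginal risk, assumed
SERVINGS = 50        # servings per person per year, assumed
R = 1e-11            # exponential dose-response parameter, assumed
SERVING_G = 30.0     # serving size in grams, assumed

def annual_risk(prev, mean_log_c, sd_log_c, n_mc=5_000):
    """Annual marginal risk for one (prevalence, mean, dispersion) candidate FSO."""
    conc = 10 ** rng.normal(mean_log_c, sd_log_c, n_mc)   # CFU/g among contaminated servings
    p_serving = prev * np.mean(1.0 - np.exp(-R * conc * SERVING_G))
    return 1.0 - (1.0 - p_serving) ** SERVINGS

# sample candidate FSO parameter sets and flag those compatible with the ALOP
prev = 10 ** rng.uniform(-4, -1, 2_000)
mu = rng.uniform(0, 4, 2_000)          # mean log10 CFU/g in contaminated servings
sd = rng.uniform(0.2, 1.5, 2_000)      # dispersion of the log10 concentration
compatible = np.array([annual_risk(p, m, s) <= ALOP for p, m, s in zip(prev, mu, sd)])

# logistic discriminant separating compatible from non-compatible parameter sets
X = np.column_stack([np.log10(prev), mu, sd])
clf = LogisticRegression(max_iter=1000).fit(X, compatible.astype(int))
print("discriminant coefficients (log10 prevalence, mean, dispersion):", clf.coef_[0])
print("share of sampled parameter sets meeting the ALOP:", compatible.mean())
```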

12.
We describe a one-dimensional probabilistic model of the role of domestic food handling behaviors on salmonellosis risk associated with the consumption of eggs and egg-containing foods. Six categories of egg-containing foods were defined based on the amount of egg contained in the food, whether eggs are pooled, and the degree of cooking practiced by consumers. We used bootstrap simulation to quantify uncertainty in risk estimates due to sampling error, and sensitivity analysis to identify key sources of variability and uncertainty in the model. Because of typical model characteristics such as nonlinearity, interaction between inputs, thresholds, and saturation points, Sobol's method, a novel sensitivity analysis approach, was used to identify key sources of variability. Based on the mean probability of illness, examples of foods from the food categories ranked from most to least risk of illness were: (1) home-made salad dressings/ice cream; (2) fried eggs/boiled eggs; (3) omelettes; and (4) baked foods/breads. For food categories that may include uncooked eggs (e.g., home-made salad dressings/ice cream), consumer handling conditions such as storage time and temperature after food preparation were the key sources of variability. In contrast, for food categories associated with undercooked eggs (e.g., fried/soft-boiled eggs), the initial level of Salmonella contamination and the log10 reduction due to cooking were the key sources of variability. Important sources of uncertainty varied with both the risk percentile and the food category under consideration. This work adds to previous risk assessments focused on egg production and storage practices, and provides a science-based approach to inform consumer risk communications regarding safe egg handling practices.

13.
The alleviation of food-borne diseases caused by microbial pathogens remains a great concern for ensuring the well-being of the general public. The relation between the ingested dose of organisms and the associated infection risk can be studied using dose-response models. Traditionally, a model selected according to a goodness-of-fit criterion has been used for making inferences. In this article, we propose a modified set of fractional polynomials as competitive dose-response models in risk assessment. The article not only shows instances where it is not obvious how to single out one best model but also illustrates that model averaging can best circumvent this dilemma. The set of candidate models is chosen based on biological plausibility and rationale, and the risk at a dose common to all these models is estimated using the selected models and by averaging over all models using Akaike's weights. In addition to including parameter estimation inaccuracy, as in the case of a single selected model, model averaging accounts for the uncertainty arising from the other competitive models. This leads to a better and more honest estimation of standard errors and construction of confidence intervals for risk estimates. The approach is illustrated for risk estimation at low dose levels based on Salmonella typhi and Campylobacter jejuni data sets in humans. Simulation studies indicate that model averaging has reduced bias, better precision, and also attains coverage probabilities that are closer to the 95% nominal level compared to best-fitting models selected according to the Akaike information criterion.
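The model-averaging arithmetic with Akaike's weights can be illustrated on a small, entirely hypothetical dose-response data set with two candidate models; the article itself uses a modified set of fractional polynomials, which is not reproduced here.

```python
import numpy as np
from scipy import optimize

# illustrative dose-response data (dose in CFU, subjects, infected) -- assumed,
# not the original Salmonella typhi / Campylobacter jejuni feeding-trial data
dose = np.array([1e3, 1e4, 1e5, 1e6, 1e7, 1e8])
n    = np.array([20,  20,  20,  20,  20,  20])
y    = np.array([1,   3,   7,   12,  17,  19])

def binom_nll(p):
    """Binomial negative log-likelihood (constant term dropped)."""
    p = np.clip(p, 1e-12, 1 - 1e-12)
    return -np.sum(y * np.log(p) + (n - y) * np.log(1 - p))

models = {
    # exponential: P(inf) = 1 - exp(-r * dose), one parameter (log r)
    "exponential": (lambda b, d: 1 - np.exp(-np.exp(b[0]) * d), [-12.0]),
    # log-logistic: P(inf) = 1 / (1 + exp(-(a + b * log10 d))), two parameters
    "log-logistic": (lambda b, d: 1 / (1 + np.exp(-(b[0] + b[1] * np.log10(d)))), [-5.0, 1.0]),
}

fits, aics = {}, {}
for name, (f, x0) in models.items():
    res = optimize.minimize(lambda b: binom_nll(f(b, dose)), x0, method="Nelder-Mead")
    fits[name] = (f, res.x)
    aics[name] = 2 * res.fun + 2 * len(x0)        # AIC = -2 logL + 2k

# Akaike weights and model-averaged risk at a low dose (100 organisms)
delta = np.array([aics[m] - min(aics.values()) for m in models])
w = np.exp(-0.5 * delta) / np.exp(-0.5 * delta).sum()
risk_low = sum(wi * fits[m][0](fits[m][1], 100.0) for wi, m in zip(w, models))
for wi, m in zip(w, models):
    print(f"{m:12s} AIC={aics[m]:6.1f} weight={wi:.2f}")
print(f"model-averaged P(infection) at dose 100: {risk_low:.4f}")
```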

14.
Currently, there is a trend away from the use of single (often conservative) estimates of risk to summarize the results of risk analyses in favor of stochastic methods which provide a more complete characterization of risk. The use of such stochastic methods leads to a distribution of possible values of risk, taking into account both uncertainty and variability in all of the factors affecting risk. In this article, we propose a general framework for the analysis of uncertainty and variability for use in the commonly encountered case of multiplicative risk models, in which risk may be expressed as a product of two or more risk factors. Our analytical methods facilitate the evaluation of overall uncertainty and variability in risk assessment, as well as the contributions of individual risk factors to both uncertainty and variability, which is cumbersome using Monte Carlo methods. The use of these methods is illustrated in the analysis of potential cancer risks due to the ingestion of radon in drinking water.
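For independent lognormal risk factors, the analytical convenience of a multiplicative model is that log-variances simply add, so each factor's contribution to the overall uncertainty/variability can be read off directly. A short sketch with assumed geometric means and geometric standard deviations (not the radon case-study values):

```python
import numpy as np

# multiplicative risk model: risk = C * IR * EF * SF (concentration, intake rate,
# exposure factor, slope factor) -- illustrative lognormal factors, assumed values
factors = {                      # (geometric mean, geometric standard deviation)
    "concentration":   (1.0, 2.5),
    "intake rate":     (2.0, 1.4),
    "exposure factor": (0.5, 1.8),
    "slope factor":    (1e-3, 3.0),
}

# for independent lognormal factors the log-variances add, so the overall
# geometric standard deviation and each factor's contribution follow analytically
log_vars = {k: np.log(gsd) ** 2 for k, (gm, gsd) in factors.items()}
total_log_var = sum(log_vars.values())
gm_risk = np.prod([gm for gm, _ in factors.values()])
gsd_risk = np.exp(np.sqrt(total_log_var))

print(f"geometric mean risk: {gm_risk:.2e}, overall GSD: {gsd_risk:.2f}")
for k, v in log_vars.items():
    print(f"  {k:16s} contributes {100 * v / total_log_var:4.1f}% of the log-variance")
```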

15.
High Risk or Low: How Location on a "Risk Ladder" Affects Perceived Risk
Efforts to explain risk magnitude often rely on a "risk ladder" in which exposure levels and associated risk estimates are arrayed with low levels at the bottom and high ones at the top. Two experiments were conducted to test the hypothesis that perceived threat and intended mitigation vary with the location of the subject's assigned level on the risk ladder. Subjects were New Jersey homeowners, asked to assume a particular level of radon or asbestos contamination in their homes, to read a brochure explaining the risk, and then to complete a questionnaire. Both studies found that the difference between an assigned level one-quarter of the way up the ladder and the same level three-quarters of the way up the ladder significantly affected threat perception; the effect on mitigation intentions was significant in only one of the studies. Variations in assigned risk also affected threat perception and mitigation intentions. Variations in test magnitude (e.g., 15 fibers per liter vs. 450 fibers per cubic foot, roughly equivalent risks) had no effect, nor did the distinction between radon and asbestos affect the dependent variables. These findings suggest that communicators can design risk ladders to emphasize particular risk characteristics.

16.
The development of catastrophe models in recent years allows for assessment of the flood hazard much more effectively than when the federally run National Flood Insurance Program (NFIP) was created in 1968. We propose and then demonstrate a methodological approach to determine pure premiums based on the entire distribution of possible flood events. We apply hazard, exposure, and vulnerability analyses to a sample of 300,000 single‐family residences in two counties in Texas (Travis and Galveston) using state‐of‐the‐art flood catastrophe models. Even in zones of similar flood risk classification by FEMA there is substantial variation in exposure between coastal and inland flood risk. For instance, homes in the designated moderate‐risk X500/B zones in Galveston are exposed to a flood risk on average 2.5 times greater than residences in X500/B zones in Travis. The results also show very similar average annual loss (corrected for exposure) for a number of residences despite their being in different FEMA flood zones. We also find significant storm‐surge exposure outside of the FEMA designated storm‐surge risk zones. Taken together these findings highlight the importance of a microanalysis of flood exposure. The process of aggregating risk at a flood zone level—as currently undertaken by FEMA—provides a false sense of uniformity. As our analysis indicates, the technology to delineate the flood risks exists today.
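The pure-premium calculation from a catastrophe model reduces to an average annual loss over the event set. A toy event-loss table (assumed values, not the output of an actual flood catastrophe model) illustrates the idea:

```python
import numpy as np

# illustrative event-loss table for one residence: annual occurrence rates and losses
events = [
    # (annual rate of occurrence, loss to this residence in USD)
    (0.10,   5_000),
    (0.02,  40_000),
    (0.005, 120_000),
    (0.001, 250_000),
]

# the pure premium is the average annual loss: sum of rate * loss over all events
aal = sum(rate * loss for rate, loss in events)
print(f"pure premium (average annual loss): ${aal:,.0f}")

# a simple annual simulation reproduces the same expectation and shows the spread
rng = np.random.default_rng(2)
years = 100_000
annual_losses = sum(rng.poisson(rate, years) * loss for rate, loss in events)
print(f"simulated mean: ${annual_losses.mean():,.0f}, "
      f"99th percentile annual loss: ${np.percentile(annual_losses, 99):,.0f}")
```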

17.
Due to the increasing level of supply risk, it is imperative to obtain a better understanding of the nature of risk, which is a prerequisite for developing well-grounded risk mitigation strategies. This paper examines how to mitigate supply risk by discussing the concept of risk in relation to three closely related concepts: uncertainty, variability, and trust. The proposed three perspectives are supported and explained using four case studies comprising two manufacturers based in Australia and four suppliers based in China. The study provides evidence that supply risk can be mitigated by a high level of information and knowledge sharing as well as by building trust, commitment, and goal congruence in a buyer–supplier relationship. It offers theoretical and managerial implications.

18.
Risk Analysis, 2018, 38(8): 1718–1737
We developed a probabilistic mathematical model for the postharvest processing of leafy greens focusing on Escherichia coli O157:H7 contamination of fresh‐cut romaine lettuce as the case study. Our model can (i) support the investigation of cross‐contamination scenarios, and (ii) evaluate and compare different risk mitigation options. We used an agent‐based modeling framework to predict the pathogen prevalence and levels in bags of fresh‐cut lettuce and quantify spread of E. coli O157:H7 from contaminated lettuce to surface areas of processing equipment. Using an unbalanced factorial design, we were able to propagate combinations of random values assigned to model inputs through different processing steps and ranked statistically significant inputs with respect to their impacts on selected model outputs. Results indicated that whether contamination originated on incoming lettuce heads or on the surface areas of processing equipment, pathogen prevalence among bags of fresh‐cut lettuce and batches was most significantly impacted by the level of free chlorine in the flume tank and frequency of replacing the wash water inside the tank. Pathogen levels in bags of fresh‐cut lettuce were most significantly influenced by the initial levels of contamination on incoming lettuce heads or surface areas of processing equipment. The influence of surface contamination on pathogen prevalence or levels in fresh‐cut bags depended on the location of that surface relative to the flume tank. This study demonstrates that developing a flexible yet mathematically rigorous modeling tool, a "virtual laboratory," can provide valuable insights into the effectiveness of individual and combined risk mitigation options.

19.
Soil contaminated with heavy metals is a salient example of environmental risk. Consumption of vegetables cultivated in contaminated soil or direct ingestion of soil by small children can damage health. In contrast to other kinds of pollution or risks, such as air pollution or exposure to ozone, the individual risk posed by soil contamination depends strongly on the way one is exposed to the local source of risk. Thus, we wanted to know whether risk perception varies according to the level of exposure. A quasi-experimental, questionnaire-based study was conducted in a community in northwest Switzerland, where the soil is widely contaminated. The level of contamination varies with the distance from the source of the contamination, a metal processing plant. We investigated the perception of the risk of heavy-metal-contaminated soil by inhabitants with high exposure levels (N = 27) and those with low exposure levels (N = 30). Both groups judged the risk to themselves similarly, whereas the low-exposure group, compared to the high-exposure group, judged the perceived risk for other affected people living in their community to be higher. Besides this exposure effect, risk perception was mainly determined by emotional concerns. Participants with higher scores in self-estimated knowledge tended to provide low-risk judgments, were less interested in further information, showed low emotional concern, and thus displayed high risk acceptance. In contrast, actual knowledge showed no correlation with any of these variables. Judgments on the need for decontamination are determined by risk perception, less application of dissonance-reducing heuristics, and commitment to sustainability. The desire for additional information is not affected by missing knowledge but is affected by emotional concerns.

20.
M. C. Kennedy, Risk Analysis, 2011, 31(10): 1597–1609
Two‐dimensional Monte Carlo simulation is frequently used to implement probabilistic risk models, as it allows uncertainty and variability to be quantified separately. In many cases, we are interested in the proportion of individuals from a variable population exceeding a critical threshold, together with uncertainty about this proportion. In this article we introduce a new method that can estimate these quantities accurately and much more efficiently than conventional algorithms. We also show how those model parameters having the greatest impact on the probabilities of rare events can be quickly identified via this method. The algorithm combines elements from well‐established statistical techniques in extreme value theory and Bayesian analysis of computer models. We demonstrate the practical application of these methods with a simple example, in which the true distributions are known exactly, and also with a more realistic model of microbial contamination of milk with seven parameters. For the latter, sensitivity analysis (SA) is shown to identify the two inputs explaining the majority of variation in distribution tail behavior. In the subsequent prediction of probabilities of large contamination events, similar results are obtained using the new approach, which takes 43 seconds, and the conventional simulation, which requires more than 3 days.
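For reference, the conventional nested (two-dimensional) Monte Carlo approach that the new method is benchmarked against looks roughly like the sketch below; the toy model and threshold are assumed, and the noisiness of the estimated tail proportions at small exceedance probabilities is exactly what motivates the extreme-value/emulator approach.

```python
import numpy as np

rng = np.random.default_rng(42)
THRESHOLD = 5.0          # critical log10 contamination level, assumed
N_UNC, N_VAR = 200, 10_000

fractions = []
for _ in range(N_UNC):                           # outer loop: uncertainty about parameters
    mu = rng.normal(1.0, 0.3)                    # uncertain mean of the variable population
    sigma = np.abs(rng.normal(1.2, 0.2))         # uncertain population spread
    pop = rng.normal(mu, sigma, N_VAR)           # inner loop: variability between units
    fractions.append(np.mean(pop > THRESHOLD))   # proportion exceeding the threshold

lo, med, hi = np.percentile(fractions, [2.5, 50, 97.5])
print(f"P(exceed {THRESHOLD} log10): median {med:.2e},"
      f" 95% uncertainty interval [{lo:.2e}, {hi:.2e}]")
```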

