Similar Articles
20 similar articles found (search time: 31 ms)
1.
Several statistical models for salmonella source attribution have been presented in the literature. However, these models have often been found to be sensitive to the model parameterization, as well as to the specifics of the data set used. The Bayesian salmonella source attribution model presented here was developed to be generally applicable with the small and sparse annual data sets obtained over several years. The full Bayesian model was modularized into three parts (an exposure model, a subtype distribution model, and an epidemiological model) in order to separately estimate the unknown parameters in each module. The proposed model takes advantage of the consumption and overall salmonella prevalence of the studied sources, as well as bacteria typing results from adjacent years. The latter were used for a smoothed estimation of the annual relative proportions of the different salmonella subtypes in each of the sources. Source-specific effects and salmonella subtype-specific effects were included in the epidemiological model to describe the differences between sources and between subtypes in their ability to infect humans. The estimation of these parameters was based on data from multiple years. Finally, the model combines the total evidence from the different modules to apportion human salmonellosis cases among their sources. The model was applied to allocate reported human salmonellosis cases from the years 2008 to 2015 to eight food sources.
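A minimal numerical sketch of the attribution bookkeeping described above (not the authors' code; all distributions, dimensions, and variable names are illustrative assumptions): expected cases of subtype i from source j combine consumption, prevalence, the per-source subtype mix, and the source- and subtype-specific effects, and attribution shares follow by normalization.

```python
import numpy as np

# Hypothetical sketch: lambda[i, j] = consumption[j] * prevalence[j]
#   * subtype_prop[i, j] * source_effect[j] * subtype_effect[i],
# mirroring the exposure, subtype-distribution, and epidemiological modules.
rng = np.random.default_rng(0)
n_subtypes, n_sources = 5, 8                       # e.g., eight food sources

consumption    = rng.uniform(10, 100, n_sources)   # amount consumed (toy units)
prevalence     = rng.uniform(0.01, 0.2, n_sources) # overall salmonella prevalence
subtype_prop   = rng.dirichlet(np.ones(n_subtypes), n_sources).T  # subtype mix
source_effect  = rng.gamma(2.0, 0.5, n_sources)    # source's ability to cause cases
subtype_effect = rng.gamma(2.0, 0.5, n_subtypes)   # subtype's ability to infect

lam = (consumption * prevalence * subtype_prop
       * source_effect * subtype_effect[:, None])  # expected cases per (i, j)

attribution = lam.sum(axis=0) / lam.sum()          # share of cases per source
print(np.round(attribution, 3))
```

In the full model the source and subtype effects carry priors and are estimated from multi-year data; the sketch only shows how the three modules combine into attribution shares.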

2.
Models for the assessment of the risk of complex engineering systems are affected by uncertainties due to the randomness of several phenomena involved and the incomplete knowledge about some of the characteristics of the system. The objective of this article is to provide operative guidelines for handling some conceptual and technical issues related to the treatment of uncertainty in risk assessment for engineering practice. In particular, the following issues are addressed: (1) quantitative modeling and representation of uncertainty coherently with the information available on the system of interest; (2) propagation of the uncertainty from the input(s) to the output(s) of the system model; (3) (Bayesian) updating as new information on the system becomes available; and (4) modeling and representation of dependences among the input variables and parameters of the system model. Different approaches and methods are recommended for efficiently tackling each of issues (1)–(4) above; the tools considered are derived from both classical probability theory and alternative, non-fully probabilistic uncertainty representation frameworks (e.g., possibility theory). The recommendations drawn are supported by the results obtained in illustrative applications from the literature.
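Issue (2), uncertainty propagation, is commonly handled with a two-level (nested) Monte Carlo scheme. The sketch below is a generic illustration under an assumed toy model and made-up distributions, not an example from the article: the outer loop samples epistemically uncertain parameters, the inner loop samples aleatory variability.

```python
import numpy as np

# Two-level Monte Carlo: epistemic uncertainty (outer) vs. aleatory
# variability (inner) for a toy system model y = k * x**2.
rng = np.random.default_rng(1)

def system_model(x, k):
    return k * x**2                              # toy model, not from the article

n_epistemic, n_aleatory = 200, 1000
percentiles = []
for _ in range(n_epistemic):
    k = rng.normal(1.0, 0.1)                     # epistemic: imperfectly known parameter
    x = rng.lognormal(0.0, 0.5, n_aleatory)      # aleatory: random load
    y = system_model(x, k)
    percentiles.append(np.percentile(y, 95))     # 95th percentile of the output

# The spread of the 95th percentile across the outer loop reflects incomplete
# knowledge, kept separate from the inherent variability of x.
print(np.percentile(percentiles, [5, 50, 95]))
```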

3.
Prevention of the emergence and spread of foodborne diseases is an important prerequisite for the improvement of public health. Source attribution models link sporadic human cases of a specific illness to food sources and animal reservoirs. With next generation sequencing technology, it is possible to develop novel source attribution models. We investigated the potential of machine learning to predict, based on whole-genome sequencing, the animal reservoir from which a bacterial strain isolated from a human salmonellosis case originated. Machine learning methods recognize patterns in large and complex data sets and use this knowledge to build models. The model learns patterns associated with genetic variations in bacteria isolated from the different animal reservoirs. We selected different machine learning algorithms to predict the sources of human salmonellosis cases and trained the model with Danish Salmonella Typhimurium isolates sampled from broilers (n = 34), cattle (n = 2), ducks (n = 11), layers (n = 4), and pigs (n = 159). Using cgMLST as input features, the model yielded an average accuracy in source prediction of 0.783 (95% CI: 0.77-0.80) for the random forest and 0.933 (95% CI: 0.92-0.94) for the logit boost algorithm. The logit boost algorithm was the most accurate (validation accuracy: 92%, CI: 0.8706-0.9579) and predicted the origin of 81% of the domestic sporadic human salmonellosis cases. The most important source was Danish-produced pigs (53%), followed by imported pigs (16%), imported broilers (6%), imported ducks (2%), Danish-produced layers (2%), and Danish-produced and imported cattle (<1%), while 18% of cases could not be predicted. Machine learning has potential for improving source attribution modeling based on sequence data. The results of such models can inform risk managers in identifying and prioritizing food safety interventions.
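A minimal sketch of this classification setup on synthetic data (not the Danish isolates): cgMLST allele calls are categorical features per locus, and a classifier maps them to a source label. The random forest is one of the algorithms named in the study; LogitBoost has no scikit-learn implementation, so it is omitted here. Locus count, allele ranges, and the injected signal are all assumptions.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)
sources = ["broiler", "cattle", "duck", "layer", "pig"]
n_isolates, n_loci = 210, 1000          # 210 matches the study's isolate total

y = rng.choice(len(sources), n_isolates)                 # synthetic source labels
X = rng.integers(0, 5, (n_isolates, n_loci)).astype(float)  # toy allele calls
X[:, :50] += y[:, None]                 # inject a weak source-specific signal

clf = RandomForestClassifier(n_estimators=500, random_state=0)
scores = cross_val_score(clf, X, y, cv=5)                # cross-validated accuracy
print(f"accuracy: {scores.mean():.3f} +/- {scores.std():.3f}")
```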

4.
Marc Kennedy & Andy Hart. Risk Analysis, 2009, 29(10): 1427–1442
We propose new models for dealing with various sources of variability and uncertainty that influence risk assessments for dietary exposure. The uncertain or random variables involved can interact in complex ways, and the focus is on methodology for integrating their effects and on assessing the relative importance of including different uncertainty model components in the calculation of dietary exposures to contaminants, such as pesticide residues. The combined effect is reflected in the final inferences about the population of residues and subsequent exposure assessments. In particular, we show how measurement uncertainty can have a significant impact on results and discuss novel statistical options for modeling this uncertainty. The effect of measurement error is often ignored, perhaps because the laboratory process conforms to the relevant international standards, or is treated in an ad hoc way. These issues are common to many dietary risk analysis problems, and the methods could be applied to any food and chemical of interest. An example is presented using data on carbendazim in apples and consumption surveys of toddlers.
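A toy illustration of why measurement error matters here (not the article's model): observed residue concentrations equal the true values times a multiplicative laboratory error, so ignoring that error distorts the upper tail of the exposure distribution. All distributions and magnitudes below are assumptions.

```python
import numpy as np

rng = np.random.default_rng(8)
n = 100_000
true_residue = rng.lognormal(np.log(0.05), 1.0, n)    # mg/kg in apples (toy)
measured = true_residue * rng.lognormal(0.0, 0.3, n)  # ~30% log-scale lab error
consumption = rng.lognormal(np.log(100), 0.5, n)      # g/day, toddler-like scale

for tag, residue in [("error ignored", true_residue), ("error included", measured)]:
    exposure = residue * consumption / 1000           # mg/day
    print(f"{tag}: 97.5th pct exposure = {np.percentile(exposure, 97.5):.4f} mg/day")
```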

5.
A Bayesian approach was developed by Hald et al. (1) to estimate the contribution of different food sources to the burden of human salmonellosis in Denmark. This article describes the development of several modifications that can be used to adapt the model to different countries and pathogens. Our modified Hald model has several advantages over the original approach, which include the introduction of uncertainty in the estimates of source prevalence and an improved strategy for identifiability. We have applied our modified model to the two major food-borne zoonoses in New Zealand, namely, campylobacteriosis and salmonellosis. Major challenges were the data quality for salmonellosis and the inclusion of environmental sources of campylobacteriosis. We conclude that by modifying the Hald model we have improved its identifiability, made it more applicable to countries with less intensive surveillance, and made it feasible for other pathogens, in particular with respect to the inclusion of nonfood sources. The wider application and better understanding of this approach are of particular importance due to the value of the model for decision making and risk management.

6.
This article discusses how an analyst's or expert's beliefs about the credibility and quality of models can be assessed and incorporated into the uncertainty assessment of an unknown of interest. The proposed methodology is a specialization of the Bayesian framework for the assessment of model uncertainty presented in an earlier paper. This formalism treats models as sources of information in assessing the uncertainty of an unknown, and it allows the use of predictions from multiple models as well as experimental validation data about the models' performances. In this article, the methodology is extended to incorporate additional types of information about the model, namely, subjective information in terms of the credibility of the model and its applicability when it is used outside its intended domain of application. An example in the context of fire risk modeling is also provided.

7.
Attributing foodborne illnesses to food sources is essential to conceive, prioritize, and assess the impact of public health policy measures. The Bayesian microbial subtyping attribution model by Hald et al. is one of the most advanced approaches for attributing sporadic cases; notably, it takes into account the level of exposure to the sources and the differences between bacterial types and between sources. This step forward requires introducing type- and source-dependent parameters and generates overparameterization, which was addressed in Hald's paper by setting some parameters to constant values. We question the impact of the choices made for the parameterization (the parameters set and the values used) on model robustness and propose an alternative parameterization for the Hald model. We illustrate this analysis with the 2005 French data set of non-typhi Salmonella. Mullner's modified Hald model and a simple deterministic model were used to compare the results and assess the accuracy of the estimates. Setting the parameters for bacterial types specific to a unique source, instead of the most frequent one, and using data-based values instead of arbitrary values enhanced the convergence and adequacy of the estimates and led to attribution estimates consistent with the other models' results. The type and source parameter estimates were also coherent with Mullner's model estimates. The model appeared to be highly sensitive to the parameterization. The proposed solution, based on specific types and data-based values, improved the robustness of the estimates and enabled this highly valuable tool to be used successfully with the French data set.

8.
Risk Analysis, 2018, 38(6): 1183–1201
In assessing environmental health risks, the risk characterization step synthesizes information gathered in evaluating exposures to stressors together with dose-response relationships, characteristics of the exposed population, and external environmental conditions. This article summarizes the key steps of a cumulative risk assessment (CRA), followed by a discussion of considerations for characterizing cumulative risks. Cumulative risk characterizations differ considerably from single chemical- or single source-based risk characterizations. First, CRAs typically focus on a specific population instead of a pollutant or pollutant source, and should include an evaluation of all relevant sources contributing to the exposures in the population and other factors that influence dose-response relationships. Second, CRAs may include influential environmental and population-specific conditions, involving multiple chemical and nonchemical stressors. Third, a CRA could examine multiple health effects, reflecting joint toxicity and the potential for toxicological interactions. Fourth, the complexities often necessitate simplifying methods, including judgment-based and semi-quantitative indices that collapse disparate data into numerical scores. Fifth, because of the higher dimensionality and potentially large number of interactions, the information needed to quantify risk is typically incomplete, necessitating an uncertainty analysis. Three approaches that could be used for characterizing risks in a CRA are presented: the multiroute hazard index, stressor grouping by exposure and toxicity, and indices for screening multiple factors and conditions. Other key roles of the risk characterization in CRAs are also described, mainly the translational aspect of including a characterization summary for lay readers (in addition to the technical analysis) and placing the results in the context of the likely risk-based decisions.
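Of the three approaches, the multiroute hazard index is the most mechanical: route-specific hazard quotients (exposure divided by a route-specific reference value) are summed across chemicals and routes. A toy sketch with hypothetical chemicals, doses, and reference values:

```python
# Hypothetical multiroute hazard index; all numbers are made up.
exposures = {                          # dose by (chemical, route), mg/kg-day
    ("chem_A", "oral"): 0.002, ("chem_A", "inhalation"): 0.0005,
    ("chem_B", "oral"): 0.010, ("chem_B", "dermal"): 0.001,
}
reference_doses = {                    # route-specific reference values, mg/kg-day
    ("chem_A", "oral"): 0.01, ("chem_A", "inhalation"): 0.005,
    ("chem_B", "oral"): 0.05, ("chem_B", "dermal"): 0.02,
}

hazard_index = sum(dose / reference_doses[key] for key, dose in exposures.items())
print(f"multiroute HI = {hazard_index:.2f}")   # HI > 1 flags potential concern
```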

9.
Evaluations of Listeria monocytogenes dose-response relationships are crucially important for risk assessment and risk management, but are complicated by considerable variability across population subgroups and L. monocytogenes strains. Despite difficulties associated with the collection of adequate data from outbreak investigations or sporadic cases, the limitations of currently available animal models, and the inability to conduct human volunteer studies, some of the available data now allow refinements of the well-established exponential L. monocytogenes dose response to more adequately represent extremely susceptible population subgroups and highly virulent L. monocytogenes strains. Here, a model incorporating adjustments for variability in L. monocytogenes strain virulence and host susceptibility was derived for 11 population subgroups with similar underlying comorbidities using data from multiple sources, including human surveillance and food survey data. In light of the unique inherent properties of L. monocytogenes dose response, a lognormal-Poisson dose-response model was chosen, and proved able to reconcile dose-response relationships developed based on surveillance data with outbreak data. This model was compared to a classical beta-Poisson dose-response model, which was insufficiently flexible for modeling the specific case of L. monocytogenes dose-response relationships, especially in outbreak situations. Overall, the modeling results suggest that most listeriosis cases are linked to the ingestion of food contaminated with medium to high concentrations of L. monocytogenes. While additional data are needed to refine the derived model and to better characterize and quantify the variability in L. monocytogenes strain virulence and individual host susceptibility, the framework derived here represents a promising approach to more adequately characterize the risk of listeriosis in highly susceptible population subgroups.
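A sketch contrasting the two dose-response forms discussed above: the exponential model fixes a single dose-response parameter r, with P(ill | d) = 1 - exp(-r d), while a lognormal-Poisson form lets r vary across strains and hosts and marginalizes over that variability. Parameter values here are illustrative, not the fitted ones.

```python
import numpy as np

rng = np.random.default_rng(3)
dose = np.logspace(0, 8, 9)                     # ingested cells (toy doses)

r_fixed = 1e-7                                  # illustrative exponential parameter
p_exponential = 1.0 - np.exp(-r_fixed * dose)

# Lognormal variability in r (illustrative sigma), marginalized by Monte Carlo:
r_samples = rng.lognormal(mean=np.log(1e-7), sigma=3.0, size=100_000)
p_lognormal_poisson = 1.0 - np.exp(-np.outer(dose, r_samples)).mean(axis=1)

for d, p1, p2 in zip(dose, p_exponential, p_lognormal_poisson):
    print(f"dose {d:9.0e}: exponential {p1:.3e}, lognormal-Poisson {p2:.3e}")
```

The heavy right tail of the lognormal inflates low-dose risk relative to the fixed-r exponential, which is the kind of flexibility the abstract says the classical beta-Poisson model lacked for this pathogen.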

10.
In the quest to model various phenomena, the foundational importance of parameter identifiability to sound statistical modeling may be less well appreciated than goodness of fit. Identifiability concerns the quality of objective information in data to facilitate estimation of a parameter, while nonidentifiability means there are parameters in a model about which the data provide little or no information. In purely empirical models where parsimonious good fit is the chief concern, nonidentifiability (or parameter redundancy) implies overparameterization of the model. In contrast, nonidentifiability implies underinformativeness of available data in mechanistically derived models where parameters are interpreted as having strong practical meaning. This study explores illustrative examples of structural nonidentifiability and its implications using mechanistically derived models (for repeated presence/absence analyses and dose-response of Escherichia coli O157:H7 and norovirus) drawn from quantitative microbial risk assessment. Following algebraic proof of nonidentifiability in these examples, profile likelihood analysis and Bayesian Markov Chain Monte Carlo with uniform priors are illustrated as tools to help detect model parameters that are not strongly identifiable. It is shown that identifiability should be considered during experimental design and ethics approval to ensure generated data can yield strong objective information about all mechanistic parameters of interest. When Bayesian methods are applied to a nonidentifiable model, the subjective prior effectively fabricates information about any parameters about which the data carry no objective information. Finally, structural nonidentifiability can lead to spurious models that fit data well but can yield severely flawed inferences and predictions when they are interpreted or used inappropriately.
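A toy demonstration of structural nonidentifiability and the profile likelihood diagnostic (not one of the article's QMRA models): if the model is y = a * b * x + noise, the data inform only the product a*b, so the profile log-likelihood over a, re-optimizing b at each fixed a, is perfectly flat.

```python
import numpy as np

rng = np.random.default_rng(4)
x = np.linspace(1, 10, 50)
y = 2.0 * 3.0 * x + rng.normal(0, 1, x.size)    # true a*b = 6

def neg_log_lik(a, b):
    resid = y - a * b * x
    return 0.5 * np.sum(resid**2)               # Gaussian errors, unit variance

for a in [0.5, 1.0, 2.0, 6.0, 12.0]:
    b_hat = np.sum(y * x) / (a * np.sum(x**2))  # closed-form optimum of b given a
    print(f"a = {a:5.1f}: profile NLL = {neg_log_lik(a, b_hat):.3f}")
# The profile is identical for every a: a flat profile likelihood is the
# telltale sign that a is not identifiable from these data.
```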

11.
Based on the data from the integrated Danish Salmonella surveillance in 1999, we developed a mathematical model for quantifying the contribution of each of the major animal-food sources to human salmonellosis. The model was set up to calculate the number of domestic and sporadic cases caused by different Salmonella sero and phage types as a function of the prevalence of these Salmonella types in the animal-food sources and the amount of food source consumed. A multiparameter prior accounting for the presumed but unknown differences between serotypes and food sources with respect to causing human salmonellosis was also included. The joint posterior distribution was estimated by fitting the model to the reported number of domestic and sporadic cases per Salmonella type in a Bayesian framework using Markov Chain Monte Carlo simulation. The number of domestic and sporadic cases was obtained by subtracting the estimated number of travel- and outbreak-associated cases from the total number of reported cases, i.e., the observed data. The most important food sources were found to be table eggs and domestically produced pork comprising 47.1% (95% credibility interval, CI: 43.3-50.8%) and 9% (95% CI: 7.8-10.4%) of the cases, respectively. Taken together, imported foods were estimated to account for 11.8% (95% CI: 5.0-19.0%) of the cases. Other food sources considered had only a minor impact, whereas 25% of the cases could not be associated with any source. This approach of quantifying the contribution of the various sources to human salmonellosis has proved to be a valuable tool in risk management in Denmark and provides an example of how to integrate quantitative risk assessment and zoonotic disease surveillance.  相似文献   
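A sketch of the core Hald-type likelihood in approximate notation (the article's actual parameterization and priors differ in detail): expected cases of type i from source j are lambda[i, j] = M[j] * p[i, j] * q[i] * a[j], where M is the amount consumed, p the type prevalence, and q and a are type- and source-dependent factors; observed case counts per type are Poisson with mean lambda summed over sources. In the article, q and a carry priors and the joint posterior is sampled by MCMC; this sketch only evaluates the log-likelihood at one assumed parameter point.

```python
import numpy as np
from scipy.stats import poisson

rng = np.random.default_rng(5)
n_types, n_sources = 10, 5
M = rng.uniform(10, 100, n_sources)              # amounts consumed (toy)
p = rng.uniform(0, 0.05, (n_types, n_sources))   # type prevalence per source
q = rng.gamma(1.0, 1.0, n_types)                 # type-dependent factor
a = rng.gamma(1.0, 1.0, n_sources)               # source-dependent factor

lam = M * p * q[:, None] * a                     # broadcasts to (n_types, n_sources)
observed = rng.poisson(lam.sum(axis=1))          # stand-in for reported case counts
log_lik = poisson.logpmf(observed, lam.sum(axis=1)).sum()
print(f"log-likelihood: {log_lik:.1f}")
```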

12.
Epidemiology and quantitative microbiological risk assessment are disciplines in which the same public health measures are estimated, but their results frequently differ. If large, these differences can confuse public health policymakers. This article aims to identify the uncertainty sources that explain the apparent differences in estimates of Campylobacter spp. incidence and attribution in the Netherlands, based on four previous studies (two from each discipline). An uncertainty typology was used to identify uncertainty sources, and the NUSAP method was applied to characterize the uncertainty and its influence on the estimates. Model outcomes were subsequently calculated for alternative scenarios that simulated very different but realistic alternatives in parameter estimates, modeling, data handling, or analysis to obtain impressions of the total uncertainty. For the epidemiological assessment, 32 uncertainty sources were identified; for the QMRA, 67. Definitions (e.g., of a case) and study boundaries (e.g., of the studied pathogen) were identified as important drivers of the differences between the estimates of the original studies. The ranges of the alternatively calculated estimates usually overlapped between disciplines, showing that proper appreciation of uncertainty can explain the apparent differences between the initial estimates from both disciplines. Uncertainty was not estimated in the original QMRA studies and was underestimated in the epidemiological studies. We advise giving appropriate attention to uncertainty in QMRA and epidemiological studies, even if only qualitatively, so that scientists and policymakers can interpret reported outcomes more correctly. Ideally, both disciplines would be joined by merging their respective strengths, leading to unified public health measures.

13.
Safety analysis of rare events with potentially catastrophic consequences is challenged by data scarcity and uncertainty. Traditional causation-based approaches, such as fault trees and event trees (used to model rare events), suffer from a number of weaknesses. These include the static structure of the event causation, the lack of event occurrence data, and the need for reliable prior information. In this study, a new hierarchical Bayesian modeling-based technique is proposed to overcome these drawbacks. The proposed technique can be used as a flexible technique for risk analysis of major accidents. It enables both forward and backward analysis in quantitative reasoning and the treatment of interdependence among the model parameters. Source-to-source variability in data sources is also taken into account through a robust probabilistic safety analysis. The applicability of the proposed technique has been demonstrated through a case study in the marine and offshore industry.
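A hypothetical gamma-Poisson sketch of the hierarchical pooling idea: each data source k has its own event rate lambda_k drawn from a shared population distribution, so sparse sources borrow strength from the others. Moment matching is used below as an empirical-Bayes shortcut; the article samples a full hierarchical posterior instead. Counts and exposure times are made up.

```python
import numpy as np

events   = np.array([0, 1, 0, 3, 1])          # rare-event counts per data source
exposure = np.array([12., 8., 15., 20., 9.])  # observation years per source

rates = events / exposure
m, v = rates.mean(), rates.var() + 1e-9
alpha, beta = m**2 / v, m / v                 # gamma population prior (rate form)

# Conjugate update: lambda_k | data ~ gamma(alpha + x_k, beta + T_k)
post_mean = (alpha + events) / (beta + exposure)
for k, (raw, pooled) in enumerate(zip(rates, post_mean)):
    print(f"source {k}: raw rate {raw:.3f}/yr -> pooled estimate {pooled:.3f}/yr")
```

The pooled estimates shrink extreme raw rates (including the zeros) toward the population mean, which is exactly how the hierarchy handles source-to-source variability under data scarcity.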

14.
Semiconductor manufacturing is confronted with a large number of products whose mix is changing over time, heterogeneous fabrication processes, re-entrant flows of material, and different sources of environmental and system uncertainty. In this context, the mid-term production planning approach, i.e., master planning, typically does not capture the entire complexity of the shop-floor. It deals with an aggregated representation of the production system. There is a need for evaluating the planning algorithm in use while taking the execution level into account. Therefore, we introduce in this paper a simulation-based framework that allows for modeling the behavior of the market demand and the production system. An appropriate performance assessment methodology is proposed. The performance of two heuristic approaches for master planning in semiconductor manufacturing, a genetic algorithm and a rule-based assignment procedure, is evaluated within a rolling horizon setting while considering demand and execution uncertainty. A reduced discrete-event simulation model is used to mimic a one-stage network of wafer fabrication facilities. The results of simulation experiments are discussed.
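A toy rolling-horizon loop illustrating the evaluation setup (a stand-in for the framework, not its implementation): each period the planner (re)plans against a noisy demand forecast over the next H periods, only the first period is executed under demand and execution noise, and the horizon rolls forward. The greedy capacity-capped rule below substitutes for the GA and rule-based planners compared in the paper.

```python
import numpy as np

rng = np.random.default_rng(9)
T, H, capacity = 12, 4, 100.0        # periods, horizon length, capacity per period
backlog, served = 0.0, []

for t in range(T):
    forecast = np.full(H, 90.0) + rng.normal(0, 10, H)   # noisy demand forecast
    plan = np.minimum(forecast + backlog / H, capacity)  # simple planning rule
    actual_demand = 90.0 + rng.normal(0, 15)             # realized demand
    produced = plan[0] * rng.uniform(0.9, 1.0)           # execution uncertainty
    backlog = max(0.0, backlog + actual_demand - produced)
    served.append(produced)

print(f"mean output {np.mean(served):.1f}, final backlog {backlog:.1f}")
```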

15.
The treatment of uncertainties associated with modeling and risk assessment has recently attracted significant attention. The methodology and guidance for dealing with parameter uncertainty have been fairly well developed, and quantitative tools such as Monte Carlo modeling are often recommended. However, the issue of model uncertainty is still rarely addressed in practical applications of risk assessment. The use of several alternative models to derive a range of model outputs or risks is one of the few available techniques. This article addresses the often-overlooked issue of what we call "modeler uncertainty," i.e., differences in problem formulation, model implementation, and parameter selection originating from subjective interpretation of the problem at hand. This study uses results from the Fruit Working Group, which was created under the International Atomic Energy Agency (IAEA) BIOMASS program (BIOsphere Modeling and ASSessment). The model-model and model-data intercomparisons reviewed in this study were conducted by the working group for a total of three different scenarios. The greatest uncertainty was found to result from modelers' interpretations of the scenarios and the approximations made by modelers. In scenarios that were unclear to modelers, the initial differences in model predictions were as high as seven orders of magnitude. Only after several meetings and discussions about specific assumptions did the predictions of the various models converge. Our study shows that parameter uncertainty (as evaluated by a probabilistic Monte Carlo assessment) may have contributed over one order of magnitude to the overall modeling uncertainty. The final model predictions ranged over one to three orders of magnitude, depending on the specific scenario. This study illustrates the importance of problem formulation and of implementing an analytic-deliberative process in risk characterization.

16.
Lack of information about technology and prices often hampers the empirical assessment of the profit maximization hypothesis (viz., by measuring the degree of profit efficiency). The non-parametric Data Envelopment Analysis (DEA) methodology can deal with such incomplete information. We exploit the implicit but largely neglected profit interpretation of the DEA model, which builds on assumptions of monotone and convex production possibility sets. We show how its embedded assessment of necessary conditions for profit maximization can be strengthened given partial information in the form of monetary sub-cost/-revenue data (which are often easier to obtain than pure quantity data). Finally, we argue that a 'mix' efficiency analysis is a natural complement to such a profit efficiency analysis. An application to German farm types complements our methodological discussion. Using non-parametric statistical tests, we further demonstrate the potential of the non-parametric approach for deriving strong and robust statistical evidence while imposing minimal structure on the setting under study. In particular, we look for significant efficiency variation over regions.
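One simplified reading of the profit interpretation (a sketch under variable returns to scale, with hypothetical data, not the paper's estimator): on a monotone, convex technology spanned by the observed units, maximal attainable profit at given prices is reached at one of the observed units, so each unit's profit efficiency can be gauged against the best observed profit.

```python
import numpy as np

X = np.array([[10.,  4.], [ 8.,  6.], [12.,  5.], [ 9.,  9.]])  # inputs per farm
Y = np.array([[20.], [18.], [25.], [22.]])                      # outputs per farm
w = np.array([2.0, 3.0])     # input prices (hypothetical)
p = np.array([4.0])          # output price (hypothetical)

profit = Y @ p - X @ w                     # observed profit per farm
best = profit.max()                        # best attainable profit on the hull
for k, pi in enumerate(profit):
    print(f"farm {k}: profit {pi:6.1f}, profit efficiency {pi / best:.2f}")
```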

17.
The paper addresses the problem of plant location in the formal context of decision making under uncertainty and presents a framework employing Bayesian analysis in the collection and assessment of information. As a general model, the Bayesian approach is shown to subsume two practical approaches common to plant-location literature: satisficing and spatial hierarchy of plant-site characteristics.

18.
We describe a one-dimensional probabilistic model of the role of domestic food handling behaviors on salmonellosis risk associated with the consumption of eggs and egg-containing foods. Six categories of egg-containing foods were defined based on the amount of egg contained in the food, whether eggs are pooled, and the degree of cooking practiced by consumers. We used bootstrap simulation to quantify uncertainty in risk estimates due to sampling error, and sensitivity analysis to identify key sources of variability and uncertainty in the model. Because of typical model characteristics such as nonlinearity, interaction between inputs, thresholds, and saturation points, Sobol's method, a novel sensitivity analysis approach, was used to identify key sources of variability. Based on the mean probability of illness, examples of foods from the food categories ranked from most to least risk of illness were: (1) home-made salad dressings/ice cream; (2) fried eggs/boiled eggs; (3) omelettes; and (4) baked foods/breads. For food categories that may include uncooked eggs (e.g., home-made salad dressings/ice cream), consumer handling conditions such as storage time and temperature after food preparation were the key sources of variability. In contrast, for food categories associated with undercooked eggs (e.g., fried/soft-boiled eggs), the initial level of Salmonella contamination and the log10 reduction due to cooking were the key sources of variability. Important sources of uncertainty varied with both the risk percentile and the food category under consideration. This work adds to previous risk assessments focused on egg production and storage practices, and provides a science-based approach to inform consumer risk communications regarding safe egg handling practices.
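A minimal first-order Sobol estimator (the Saltelli pick-and-freeze scheme) on a toy model with an interaction term, illustrating the variance-based method named above. The inputs, their names, and the model are assumptions, not the egg-handling model.

```python
import numpy as np

rng = np.random.default_rng(6)
n = 100_000

def model(x):
    storage_temp, storage_time, log_reduction = x.T
    return storage_temp * storage_time - 2.0 * log_reduction  # toy model

A = rng.uniform(0, 1, (n, 3))
B = rng.uniform(0, 1, (n, 3))
fA, fB = model(A), model(B)
var_y = np.var(np.concatenate([fA, fB]))

for i, name in enumerate(["storage_temp", "storage_time", "log_reduction"]):
    AB = A.copy()
    AB[:, i] = B[:, i]                        # resample only the i-th input
    S_i = np.mean(fB * (model(AB) - fA)) / var_y
    print(f"first-order Sobol index {name}: {S_i:.3f}")
```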

19.
A simple and useful characterization of many predictive models is in terms of model structure and model parameters. Accordingly, uncertainties in model predictions arise from uncertainties in the values assumed by the model parameters (parameter uncertainty) and from the uncertainties and errors associated with the structure of the model (model uncertainty). When assessing uncertainty, one is interested in identifying, at some level of confidence, the range of possible, and then probable, values of the unknown of interest. All sources of uncertainty and variability need to be considered. Although parameter uncertainty assessment has been extensively discussed in the literature, model uncertainty is a relatively new topic of discussion in the scientific community, despite often being the major contributor to the overall uncertainty. This article describes a Bayesian methodology for the assessment of model uncertainties, in which models are treated as sources of information on the unknown of interest. The general framework is then specialized for the case where models provide point estimates of a single-valued unknown and where information about the models is available in the form of homogeneous and nonhomogeneous performance data (pairs of experimental observations and model predictions). Several example applications to physical models used in fire risk analysis are also provided.
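A simplified sketch of the performance-data idea (a moment-based shortcut with an assumed Gaussian error model, not the article's full Bayesian treatment, which would place priors on the bias and spread): past (observation, prediction) pairs calibrate an additive error term, which is then applied to a new point prediction to yield a distribution for the unknown.

```python
import numpy as np

pairs = np.array([                      # (observed, predicted), hypothetical
    [510.0, 480.0], [620.0, 650.0], [455.0, 430.0],
    [700.0, 660.0], [540.0, 560.0], [605.0, 580.0],
])
errors = pairs[:, 0] - pairs[:, 1]      # observation minus prediction
bias, sigma = errors.mean(), errors.std(ddof=1)

new_prediction = 590.0                  # model's point estimate for the unknown
# Under the calibrated error model, unknown ~ Normal(prediction + bias, sigma):
lo, hi = new_prediction + bias + np.array([-1.96, 1.96]) * sigma
print(f"unknown: {new_prediction + bias:.0f} (95% band {lo:.0f} to {hi:.0f})")
```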

20.
A risk assessment was performed to incorporate uncertainty in food processing conditions into a risk-based sterilization process design. The focus of this analysis was the uncertainty associated with heterogeneous food products. Quartered button mushrooms were chosen as the food product because they are the most typical example of this type. A model for sterilization of spherical particles was utilized, and each parameter's uncertainty was characterized for use in Monte Carlo simulation. Various particle distributions and fluid types were compared. The output of the model was the sterilization time required to achieve the target sterilization conditions with 95% probability. This value was then used to determine the mean fluid velocity for a given tube length. Finally, the output from the model was analyzed to determine the confidence in the output based on the uncertainty in the input parameters. The model was more sensitive to variation in particle size distribution than to fluid type for power-law fluids. The 90% confidence interval spanned a holding time range of 1 min. With a 95% confidence level that only 8% of the data would fall below the target sterilization conditions, a maximum of 9% of the data were expected to reach double the target level. The results of such an analysis would be useful for management decisions concerning the design of aseptic food processing operations.
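A sketch of the risk-based design logic: sample uncertain process inputs, compute the achieved lethality (F0, min) per draw for a candidate holding time using the standard F0 = t * 10^((T - 121.1)/z) relation, and pick the shortest time whose target-attainment probability is at least 95%. The temperature distribution, target, and z-value are simplified stand-ins for the mushroom model's inputs.

```python
import numpy as np

rng = np.random.default_rng(7)
n, target_F0, z = 10_000, 6.0, 10.0                # target lethality, z-value (degC)

particle_temp = rng.normal(121.5, 1.5, n)          # coldest-point temperature, degC
def achieved_F0(hold_min):
    return hold_min * 10.0 ** ((particle_temp - 121.1) / z)

for hold in np.arange(5.0, 15.5, 0.5):             # candidate holding times, min
    p_ok = np.mean(achieved_F0(hold) >= target_F0)
    if p_ok >= 0.95:
        print(f"holding time {hold:.1f} min meets target with prob {p_ok:.3f}")
        break
```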
