Similar Documents
Found 20 similar documents (search time: 603 ms)
1.
The Grunow–Finke epidemiological assessment tool (GFT) has several limitations in its ability to differentiate between natural and man-made epidemics. Our study aimed to improve the GFT and to validate the improved model against historical epidemics. Using gray relational analysis (GRA), we improved the GFT by revising the existing criteria and adding five new ones. We then removed the artificial weights and the final decision threshold. Finally, using typical unnatural epidemic events as references, we applied the GRA to calculate the probability that an epidemic is unnatural and obtain assessment results. We conducted retrospective and case analyses to test the advanced tool's performance. In the validation set of 13 historical epidemics, unnatural and natural epidemics separated into two categories around an unnatural probability of 45%, showing evident differences (p < 0.01) and an assessment accuracy close to 100%. The unnatural probabilities of the 2013 Ebola virus disease epidemic and the 2012 Middle East respiratory syndrome epidemic were 30.6% and 36.1%, respectively. Our advanced epidemic assessment tool improved the accuracy of the original GFT from approximately 55% to approximately 100% and effectively reduced the impact of human factors on the outcomes.

2.
The Grunow–Finke assessment tool (GFT) is an accepted scoring system for determining the likelihood that an outbreak is unnatural in origin. Given its high specificity but low sensitivity, a modified Grunow–Finke tool (mGFT) has been developed with improved sensitivity. The mGFT has been validated against some past disease outbreaks, but it has not been applied to ongoing outbreaks. This study aimed to score the outbreak of Middle East respiratory syndrome coronavirus (MERS-CoV) in Saudi Arabia using both the original GFT and the mGFT. Publicly available data on human cases of MERS-CoV infection reported in Saudi Arabia (2012–2018) were sourced from FluTrackers, the World Health Organization, the Saudi Ministry of Health, and published literature associated with MERS outbreak investigations. The risk assessment of MERS-CoV in Saudi Arabia was conducted using the original GFT and mGFT criteria, algorithms, and thresholds. The scoring points for each criterion were determined by three researchers to minimize subjectivity. The results showed 40 points of a total possible 54 points using the original GFT (likelihood: 74%) and 40 points of a total possible 60 points using the mGFT (likelihood: 67%), with both tools indicating a high likelihood that human MERS-CoV in Saudi Arabia is unnatural in origin. The findings simply flag unusual patterns in this outbreak; they do not prove unnatural etiology. Proof of a bioattack can only be obtained by law enforcement and intelligence agencies. This study demonstrated the value and flexibility of the mGFT in assessing and predicting risk for an ongoing outbreak using simple criteria.
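The likelihood percentages quoted above are simply the summed criterion points over the maximum attainable points. A minimal sketch, reproducing only this final arithmetic (the per-criterion scoring itself is not shown here):

```python
def gft_likelihood(points_awarded, points_possible):
    """Likelihood (as a rounded percentage) that an outbreak is unnatural,
    given the summed criterion points and the tool's maximum score."""
    return round(100 * points_awarded / points_possible)

original_gft = gft_likelihood(40, 54)  # original GFT: 40 of 54 points
modified_gft = gft_likelihood(40, 60)  # mGFT: 40 of 60 points
```

The same raw score of 40 yields different likelihoods (74% vs. 67%) because the two tools have different maximum totals.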

3.
Root cause analysis can be used in foodborne illness outbreak investigations to determine the underlying causes of an outbreak and to help identify actions that could be taken to prevent future outbreaks. We developed a new tool, the Quantitative Risk Assessment-Epidemic Curve Prediction Model (QRA-EC), to assist with these goals and applied it to a case study to investigate and illustrate the utility of leveraging quantitative risk assessment to provide unique insights for foodborne illness outbreak root cause analysis. We used a 2019 Salmonella outbreak linked to melons as a case study to demonstrate the utility of this model (Centers for Disease Control and Prevention [CDC], 2019). The model was used to evaluate the impact of various root cause hypotheses (representing different contamination sources and food safety system failures in the melon supply chain) on the predicted number and timeline of illnesses. The predicted number of illnesses varied by contamination source and was strongly impacted by the prevalence and level of Salmonella contamination on the surface/inside of whole melons and inside contamination niches on equipment surfaces. The timeline of illnesses was most strongly impacted by equipment sanitation efficacy for contamination niches. Evaluating a wide range of scenarios representing various potential root causes enabled us to identify which hypotheses were likely to result in an outbreak of similar size and illness timeline to the 2019 Salmonella melon outbreak. The QRA-EC framework can be adapted to accommodate any food–pathogen pair to provide insights for foodborne outbreak investigations.
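The idea of predicting an epidemic curve from a root-cause hypothesis can be illustrated with a toy simulation: a hypothesis implies when contaminated servings reach consumers, and convolving exposures with shelf-life and reporting delays yields a curve of reported illnesses per day. This is only a sketch of the concept; all parameters (shipping day, illness probability, delay windows) are illustrative placeholders, not the QRA-EC model's calibrated values.

```python
import random

def predict_epi_curve(n_contaminated_servings, ship_day, p_ill, rng, days=60):
    """Toy epidemic-curve prediction: each contaminated serving may cause
    an illness, which is reported after shelf time plus onset/reporting lag."""
    curve = [0] * days
    for _ in range(n_contaminated_servings):
        if rng.random() < p_ill:
            consume_day = ship_day + rng.randint(0, 10)    # days on shelf/at home
            report_day = consume_day + rng.randint(3, 14)  # onset + reporting delay
            if report_day < days:
                curve[report_day] += 1
    return curve

rng = random.Random(7)
curve = predict_epi_curve(5000, ship_day=5, p_ill=0.02, rng=rng)
total_cases = sum(curve)
```

Comparing the simulated curve's size and timespan against the observed outbreak curve is what lets a hypothesis be retained or ruled out.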

4.
A novel method was used to incorporate in vivo host–pathogen dynamics into a new robust outbreak model for legionellosis. Dose-response and time-dose-response (TDR) models were generated for Legionella longbeachae exposure to mice via the intratracheal route using a maximum likelihood estimation approach. The best-fit TDR model was then incorporated into two L. pneumophila outbreak models: an outbreak that occurred at a spa in Japan, and one that occurred in a Melbourne aquarium. The best-fit TDR from the murine dosing study was the beta-Poisson with exponential-reciprocal dependency model, which had a minimized deviance of 32.9. This model was tested against other incubation distributions in the Japan outbreak, and performed consistently well, with reported deviances ranging from 32 to 35. In the case of the Melbourne outbreak, the exponential model with exponential dependency was tested against non-time-dependent distributions to explore the performance of the time-dependent model with the lowest number of parameters. This model reported low minimized deviances around 8 for the Weibull, gamma, and lognormal exposure distribution cases. This work shows that the incorporation of a time factor into outbreak distributions provides models with acceptable fits that can provide insight into the in vivo dynamics of the host-pathogen system.
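One way to picture a time-dose-response model of the kind described is to make a beta-Poisson parameter time-dependent, so that the same dose yields a higher response probability as post-exposure time grows. The sketch below uses the common approximate beta-Poisson form with an exponential-reciprocal time term; the study's exact parameterization and fitted values are not reproduced here, and all numbers are illustrative.

```python
import math

def beta_poisson(dose, alpha, beta):
    """Approximate beta-Poisson dose-response: P(response | dose)."""
    return 1.0 - (1.0 + dose / beta) ** (-alpha)

def tdr_beta_poisson(dose, t, alpha, beta0, k):
    """Time-dose-response variant: the scale parameter decays toward beta0
    via an exponential-reciprocal term exp(k/t), so response probability
    rises with post-exposure time t (for k > 0)."""
    beta_t = beta0 * math.exp(k / t)
    return beta_poisson(dose, alpha, beta_t)

# Illustrative parameters: the same dose observed at day 1 vs. day 5.
p_day1 = tdr_beta_poisson(1000.0, 1.0, alpha=0.2, beta0=50.0, k=3.0)
p_day5 = tdr_beta_poisson(1000.0, 5.0, alpha=0.2, beta0=50.0, k=3.0)
```

Fitting such a model lets the incubation-time distribution of an outbreak inform the dose-response parameters, which is the deviance comparison the abstract reports.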

5.
Tucker Burch, Risk Analysis, 2019, 39(3): 599–615
The assumptions underlying quantitative microbial risk assessment (QMRA) are simple and biologically plausible, but QMRA predictions have never been validated for many pathogens. The objective of this study was to validate QMRA predictions against epidemiological measurements from outbreaks of waterborne gastrointestinal disease. I screened 2,000 papers and identified 12 outbreaks with the necessary data: disease rates measured using epidemiological methods and pathogen concentrations measured in the source water. Eight of the 12 outbreaks were caused by Cryptosporidium, three by Giardia, and one by norovirus. Disease rates varied from 5.5 × 10⁻⁶ to 1.1 × 10⁻² cases/person-day, and reported pathogen concentrations varied from 1.2 × 10⁻⁴ to 8.6 × 10² per liter. I used these concentrations with single-hit dose–response models for all three pathogens to conduct QMRA, producing both point and interval predictions of disease rates for each outbreak. Comparison of QMRA predictions to epidemiological measurements showed good agreement; interval predictions contained measured disease rates for 9 of 12 outbreaks, with point predictions off by factors of 1.0–120 (median = 4.8). Furthermore, 11 outbreaks occurred at mean doses of less than 1 pathogen per exposure. Measured disease rates for these outbreaks were clearly consistent with a single-hit model, and not with a "two-hit" threshold model. These results demonstrate the validity of QMRA for predicting disease rates due to Cryptosporidium and Giardia.
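The single-hit point referenced above can be made concrete with the exponential dose-response model: even when the mean dose is below one organism per exposure, the predicted risk is small but nonzero, whereas a threshold ("two-hit") model would predict essentially no risk. This is a generic sketch; r and the mean dose below are hypothetical, not values fitted in the study.

```python
import math

def exponential_dose_response(mean_dose, r):
    """Single-hit exponential model: probability of infection given a
    Poisson-distributed ingested dose with the stated mean, where each
    organism independently initiates infection with probability r."""
    return 1.0 - math.exp(-r * mean_dose)

# Hypothetical values: a Cryptosporidium-like r and a mean dose well
# below one oocyst per exposure, as in most outbreaks in the study.
p = exponential_dose_response(mean_dose=0.1, r=0.09)
```

Because the dose is Poisson-distributed, some exposures contain one or more organisms even at mean doses below 1, which is why the single-hit prediction stays positive.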

6.
Risk Analysis, 2018, 38(3): 429–441
The 2014 Ebola virus disease (EVD) outbreak affected several countries worldwide, including six West African countries. It was the largest Ebola epidemic in history and the first to affect multiple countries simultaneously. Significant national and international delay in response to the epidemic resulted in 28,652 cases and 11,325 deaths. The aim of this study was to develop a risk analysis framework to prioritize rapid response for situations of high risk. Based on findings from the literature, sociodemographic features of the affected countries, and documented epidemic data, a risk scoring framework using 18 criteria was developed. The framework includes measures of socioeconomics, health systems, geographical factors, cultural beliefs, and traditional practices. The three worst affected West African countries (Guinea, Sierra Leone, and Liberia) had the highest risk scores. The scores were much lower in developed countries that experienced Ebola compared to West African countries. A more complex risk analysis framework using 18 measures was compared with a simpler one with 10 measures, and both predicted risk equally well. A simple risk scoring system can incorporate measures of hazard and impact that may otherwise be neglected in prioritizing outbreak response. This framework can be used by public health personnel as a tool to prioritize outbreak investigation and flag outbreaks with potentially catastrophic outcomes for urgent response. Such a tool could mitigate costly delays in epidemic response.

7.
Increasing evidence suggests that persistence of Listeria monocytogenes in food processing plants has been the underlying cause of a number of human listeriosis outbreaks. This study extracts criteria used by food safety experts in determining bacterial persistence in the environment, using retail delicatessen operations as a model. Using the Delphi method, we conducted an expert elicitation with 10 food safety experts from academia, industry, and government to classify L. monocytogenes persistence based on environmental sampling results collected over six months for 30 retail delicatessen stores. The results were modeled using variations of random forest, support vector machine, logistic regression, and linear regression; variable importance values of random forest and support vector machine models were consolidated to rank important variables in the experts' classifications. The duration of subtype isolation ranked most important across all expert categories. Sampling site category also ranked high in importance and validation errors doubled when this covariate was removed. Support vector machine and random forest models successfully classified the data with average validation errors of 3.1% and 2.2% (n = 144), respectively. Our findings indicate that (i) the frequency of isolations over time and sampling site information are critical factors for experts determining subtype persistence, (ii) food safety experts from different sectors may not use the same criteria in determining persistence, and (iii) machine learning models have potential for future use in environmental surveillance and risk management programs. Future work is necessary to validate the accuracy of expert and machine classification against biological measurement of L. monocytogenes persistence.

8.
Elizabethkingia spp. are common environmental pathogens responsible for infections in more vulnerable populations. Although the exposure routes of concern are not well understood, some hospital-associated outbreaks have indicated possible waterborne transmission. In order to facilitate quantitative microbial risk assessment (QMRA) for Elizabethkingia spp., this study fit dose–response models to frog and mice datasets that evaluated intramuscular and intraperitoneal exposure to Elizabethkingia spp. The frog datasets could be pooled, and the exact beta-Poisson model was the best fitting model with optimized parameters α = 0.52 and β = 86,351. Using the exact beta-Poisson model, the dose of Elizabethkingia miricola resulting in a 50% morbidity response (LD50) was estimated to be approximately 237,000 CFU. The model developed herein was used to estimate the probability of infection for a hospital patient under a modeled exposure scenario involving a contaminated medical device and reported Elizabethkingia spp. concentrations isolated from hospital sinks after an outbreak. The median exposure dose was approximately 3 CFU/insertion event, and the corresponding median risk of infection was 3.4 × 10⁻⁵. The median risk estimated in this case study was lower than the 3% attack rate observed in a previous outbreak; however, there are noted gaps pertaining to the possible concentrations of Elizabethkingia spp. in tap water and the most likely exposure routes. This is the first dose–response model developed for Elizabethkingia spp., enabling future risk assessments to help determine levels of risk and potential effective risk management strategies.
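The reported parameters can be sanity-checked with the *approximate* beta-Poisson closed form, which stands in here for the exact (hypergeometric) model the study actually fit; because β is large relative to α, the approximation should be close, and indeed its closed-form LD50 (≈241,000 CFU) lands near the exact-model estimate of 237,000 CFU.

```python
def beta_poisson(dose, alpha=0.52, beta=86_351.0):
    """Approximate beta-Poisson dose-response, using the study's
    reported alpha and beta. The study fit the exact (Kummer
    hypergeometric) form; this closed form approximates it."""
    return 1.0 - (1.0 + dose / beta) ** (-alpha)

def ld50(alpha=0.52, beta=86_351.0):
    """Dose giving a 50% response, solved from the approximate form:
    set P = 0.5 and invert for dose."""
    return beta * (2.0 ** (1.0 / alpha) - 1.0)

dose_50 = ld50()
```

At the case study's median exposure of ~3 CFU, the same curve gives a risk on the order of 10⁻⁵, consistent with the abstract's 3.4 × 10⁻⁵.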

9.
Giardia is a zoonotic gastrointestinal parasite responsible for a substantial global public health burden, and quantitative microbial risk assessment (QMRA) is often used to forecast and manage this burden. QMRA requires dose–response models to extrapolate available dose–response data, but the existing model for Giardia ignores valuable dose–response information, particularly data from several well-documented waterborne outbreaks of giardiasis. The current study updates Giardia dose–response modeling by synthesizing all available data from outbreaks and experimental studies using a Bayesian random effects dose–response model. For outbreaks, mean doses (D) and the degree of spatial and temporal aggregation among cysts were estimated using exposure assessment implemented via two-dimensional Monte Carlo simulation, while potential overreporting of outbreak cases was handled using published overreporting factors and censored binomial regression. Parameter estimation was by Markov chain Monte Carlo simulation and indicated that a typical exponential dose–response parameter for Giardia is r = 1.6 × 10⁻² [3.7 × 10⁻³, 6.2 × 10⁻²] (posterior median [95% credible interval]), while a typical morbidity ratio is m = 3.8 × 10⁻¹ [2.3 × 10⁻¹, 5.5 × 10⁻¹]. Corresponding (logistic-scale) variance components were σr = 5.2 × 10⁻¹ [1.1 × 10⁻¹, 9.6 × 10⁻¹] and σm = 9.3 × 10⁻¹ [7.0 × 10⁻², 2.8 × 10⁰], indicating substantial variation in the Giardia dose–response relationship. Compared to the existing Giardia dose–response model, the current study provides more representative estimation of uncertainty in r and novel quantification of its natural variability. Several options for incorporating variability in r (and m) into QMRA predictions are discussed, including incorporation via Monte Carlo simulation as well as evaluation of the current study's model using the approximate beta-Poisson.
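Using the posterior medians reported above, a point-estimate QMRA calculation chains the exponential dose-response model with the morbidity ratio: infection risk first, then illness risk as a fraction of infections. A minimal sketch (the mean dose below is hypothetical; a full QMRA would propagate the credible intervals and variance components rather than use medians):

```python
import math

def p_infection(mean_dose, r=1.6e-2):
    """Exponential dose-response with the study's posterior-median r."""
    return 1.0 - math.exp(-r * mean_dose)

def p_illness(mean_dose, r=1.6e-2, m=3.8e-1):
    """Morbidity ratio m converts infection risk into illness risk."""
    return m * p_infection(mean_dose, r)

# Hypothetical exposure: a mean dose of 10 cysts in one exposure event.
risk = p_illness(10.0)
```

Incorporating the reported variability (σr, σm) would replace the fixed r and m with draws from their logistic-scale distributions inside a Monte Carlo loop, one of the options the study discusses.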

10.
Emerging diseases (ED) can have devastating effects on agriculture. Consequently, agricultural insurance for ED can develop if basic insurability criteria are met, including the capability to estimate the severity of ED outbreaks with associated uncertainty. The U.S. farm-raised channel catfish (Ictalurus punctatus) industry was used to evaluate the feasibility of using a disease spread simulation modeling framework to estimate the potential losses from new ED for agricultural insurance purposes. Two stochastic models were used to simulate the spread of ED between and within channel catfish ponds in Mississippi (MS) under high, medium, and low disease impact scenarios. The mean (95% prediction interval (PI)) proportion of ponds infected within disease-impacted farms was 7.6% (3.8%, 22.8%), 24.5% (3.8%, 72.0%), and 45.6% (4.0%, 92.3%), and the mean (95% PI) proportion of fish mortalities in ponds affected by the disease was 9.8% (1.4%, 26.7%), 49.2% (4.7%, 60.7%), and 88.3% (85.9%, 90.5%) for the low, medium, and high impact scenarios, respectively. The farm-level mortality losses from an ED were up to 40.3% of the total farm inventory and can be used for insurance premium rate development. Disease spread modeling provides a systematic way to organize the current knowledge on the ED perils and, ultimately, use this information to help develop actuarially sound agricultural insurance policies and premiums. However, the estimates obtained will include a large amount of uncertainty driven by the stochastic nature of disease outbreaks, by the uncertainty in the frequency of future ED occurrences, and by the often sparse data available from past outbreaks.
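The within-farm quantities reported above (proportion of ponds infected, farm-level loss fraction) can be illustrated with a toy stochastic simulation in the same spirit: each pond on an affected farm becomes infected with some probability, and infected ponds lose a fraction of their fish. The probabilities below are placeholders, not the study's calibrated scenario parameters.

```python
import random

def simulate_farm(n_ponds, p_pond_infected, pond_mortality, rng):
    """One stochastic realization of within-farm spread: returns the
    number of infected ponds and the fraction of farm inventory lost
    (assuming equal fish inventory per pond)."""
    infected = [rng.random() < p_pond_infected for _ in range(n_ponds)]
    n_infected = sum(infected)
    farm_loss = n_infected * pond_mortality / n_ponds
    return n_infected, farm_loss

# Toy "medium impact" scenario: 20 ponds, 25% pond infection probability,
# 50% mortality in infected ponds, repeated over many simulated farms.
rng = random.Random(42)
runs = [simulate_farm(20, 0.25, 0.5, rng) for _ in range(1000)]
mean_infected = sum(r[0] for r in runs) / len(runs)
mean_loss = sum(r[1] for r in runs) / len(runs)
```

Repeating many such realizations yields the mean and prediction-interval loss estimates that feed into premium rate development.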

11.
This article details a systemic analysis of the controls in place and possible interventions available to further reduce the risk of a foot and mouth disease (FMD) outbreak in the United Kingdom. Using a research-based network analysis tool, we identify vulnerabilities within the multibarrier control system and their corresponding critical control points (CCPs). CCPs represent opportunities for active intervention that produce the greatest improvement to the United Kingdom's resilience to future FMD outbreaks. Using an adapted 'features, events, and processes' (FEPs) methodology and network analysis, our results suggest that movements of animals and goods associated with legal activities significantly influence the system's behavior due to their higher frequency and ability to combine and create scenarios of exposure similar in origin to the U.K. FMD outbreaks of 1967/8 and 2001. The systemic risk assessment highlights areas outside of disease control that are relevant to disease spread. Further, it proves to be a powerful tool for demonstrating the need for implementing disease controls that have not previously been part of the system.

12.
To prevent and control foodborne diseases, there is a fundamental need to identify the foods that are most likely to cause illness. The goal of this study was to rank 25 commonly consumed food products associated with Salmonella enterica contamination in the Central Region of Mexico. A multicriteria decision analysis (MCDA) framework was developed to obtain an S. enterica risk score for each food product based on four criteria: probability of exposure to S. enterica through domestic food consumption (Se); S. enterica growth potential during home storage (Sg); per capita consumption (Pcc); and food attribution of S. enterica outbreaks (So). Risk scores were calculated by the equation Se·W1 + Sg·W2 + Pcc·W3 + So·W4, where each criterion was assigned a normalized value (1–5) and the relative weights (W) were defined by the opinion of 22 experts. Se had the largest effect on the risk score, being the criterion with the highest weight (35%; 95% CI: 20%–60%), followed by So (24%; 5%–50%), Sg (23%; 10%–40%), and Pcc (18%; 10%–35%). The results identified chicken (4.4 ± 0.6), pork (4.2 ± 0.6), and beef (4.2 ± 0.5) as the highest risk foods, followed by seed fruits (3.6 ± 0.5), tropical fruits (3.4 ± 0.4), and dried fruits and nuts (3.4 ± 0.5), while the food products with the lowest risk were yogurt (2.1 ± 0.3), chorizo (2.1 ± 0.4), and cream (2.0 ± 0.3). Approaches with expert-based weighting and equal weighting showed good correlation (R = 0.96) and did not show significant differences in the ranking order within the top 20. This study can help risk managers select interventions and develop targeted surveillance programs against S. enterica in high-risk food products.
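The scoring equation above is a straightforward weighted sum and can be sketched directly, using the mean expert weights reported in the abstract. The criterion values for the example food are illustrative placeholders, not the study's underlying data.

```python
# Mean expert-elicited weights from the abstract (they sum to 1).
WEIGHTS = {"Se": 0.35, "So": 0.24, "Sg": 0.23, "Pcc": 0.18}

def risk_score(criteria):
    """MCDA risk score: Se*W1 + Sg*W2 + Pcc*W3 + So*W4, with each
    criterion normalized to a 1-5 value."""
    return sum(criteria[name] * w for name, w in WEIGHTS.items())

# Hypothetical criterion values for a high-risk, chicken-like product.
chicken_like = {"Se": 5, "Sg": 4, "Pcc": 4, "So": 4}
score = risk_score(chicken_like)
```

Because all criteria sit on a 1-5 scale and the weights sum to 1, scores are directly comparable across products, which is what makes the ranking in the abstract meaningful.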

13.
Risk Analysis, 2018, 38(10): 2178–2192
While it seems intuitive that highly visible vaccine-preventable disease outbreaks should impact perceptions of disease risk and facilitate vaccination, few empirical studies exist to confirm or dispel these beliefs. This study investigates the impact of the 2014–2015 Disneyland measles outbreak on parents' vaccination attitudes and future vaccination intentions. The analysis relies on a pair of public opinion surveys of American parents with at least one child under the age of six (N = 1,000 across each survey). Controlling for basic demographics, we found higher levels of reported confidence in the safety and efficacy of childhood vaccinations in our follow-up data collection. However, this confidence was also accompanied by elevated levels of concern toward childhood vaccines among American parents. We then examined how different subgroups in the population scored on these measures before and after the outbreak. We found that parents with high levels of interest in the topic of vaccines and a child who is not fully up to date with the recommended vaccination schedule reported more supportive attitudes toward vaccines. However, future intentions to follow the recommended vaccination schedule were not positively impacted by the outbreak. Possible explanations for these results and implications for vaccination outreach are discussed.

14.
The objective of this study was to leverage quantitative risk assessment to investigate possible root cause(s) of foodborne illness outbreaks related to Shiga toxin-producing Escherichia coli O157:H7 (STEC O157) infections in leafy greens in the United States. To this end, we developed the FDA leafy green quantitative risk assessment epidemic curve prediction model (FDA-LG QRA-EC) that simulated the lettuce supply chain. The model was used to predict the number of reported illnesses and the epidemic curve associated with lettuce contaminated with STEC O157 for a wide range of scenarios representing various contamination conditions and facility processing/sanitation practices. Model predictions were generated for fresh-cut and whole lettuce, quantifying the differing impacts of facility processing and home preparation on predicted illnesses. Our model revealed that the timespan (i.e., number of days with at least one reported illness) and the peak (i.e., day with the most predicted reported illnesses) of the epidemic curve of a STEC O157-lettuce outbreak were not strongly influenced by facility processing/sanitation practices and were indications of the contamination pattern among incoming lettuce batches received by the facility or distribution center. Through comparisons with observed numbers of illnesses from recent STEC O157-lettuce outbreaks, the model identified contamination conditions on incoming lettuce heads that could result in an outbreak of similar size, which can be used to narrow down potential root cause hypotheses.

15.
Obvious spatial infection patterns are often observed in cases associated with airborne transmissible diseases. Existing quantitative infection risk assessment models analyze the observed cases by assuming a homogeneous infectious particle concentration and ignore the spatial infection pattern, which may cause errors. This study aims at developing an approach to analyze spatial infection patterns associated with infectious respiratory diseases or other airborne transmissible diseases using infection risk assessment and likelihood estimation. Mathematical likelihood, based on binomial probability, was used to formulate the retrospective component with some additional mathematical treatments. Together with an infection risk assessment model that can address spatial heterogeneity, the method can be used to analyze the spatial infection pattern and retrospectively estimate the influencing parameters causing the cases, such as the infectious source strength of the pathogen. A Varicella outbreak was selected to demonstrate the use of the new approach. The infectious source strength estimated by the Wells-Riley concept using the likelihood estimation was compared with the estimation using the existing method. It was found that the maximum likelihood estimation matches the epidemiological observation of the outbreak case much better than the estimation under the assumption of homogeneous infectious particle concentration. Influencing parameters retrospectively estimated using the new approach can be used as input parameters in quantitative infection risk assessment of the disease under other scenarios. The approach developed in this study can also serve as an epidemiological tool in outbreak investigation. Limitations and further developments are also discussed.
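The combination described, a Wells-Riley risk model per zone plus a binomial likelihood over observed cases, can be sketched as follows. Each zone gets its own ventilation rate, so infection risk varies spatially rather than assuming one homogeneous concentration, and the quanta generation rate (source strength) is recovered by maximizing the likelihood. All zone data below are hypothetical; this illustrates the approach, not the study's Varicella dataset.

```python
import math

def wells_riley(q, n_infectors, breathing_rate, hours, vent_rate):
    """Wells-Riley infection probability for one well-mixed zone:
    P = 1 - exp(-I*q*p*t/Q), with q in quanta/h, p in m^3/h, Q in m^3/h."""
    exposure = n_infectors * q * breathing_rate * hours / vent_rate
    return 1.0 - math.exp(-exposure)

def log_likelihood(q, zones):
    """Binomial log-likelihood of the observed cases per zone for a
    candidate quanta generation rate q."""
    ll = 0.0
    for z in zones:
        p = wells_riley(q, z["I"], z["p"], z["t"], z["Q"])
        ll += z["cases"] * math.log(p) + (z["n"] - z["cases"]) * math.log(1.0 - p)
    return ll

# Hypothetical two-zone outbreak: same exposure time, different ventilation,
# and correspondingly different attack rates (6/30 vs. 2/30).
zones = [
    {"I": 1, "p": 0.48, "t": 2.0, "Q": 200.0, "n": 30, "cases": 6},
    {"I": 1, "p": 0.48, "t": 2.0, "Q": 800.0, "n": 30, "cases": 2},
]
best_q = max(range(1, 400), key=lambda q: log_likelihood(float(q), zones))
```

A coarse grid search suffices here; the estimated q can then be reused as an input to prospective risk assessments under other ventilation scenarios, as the abstract suggests.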

16.
Herbert Moskowitz, Omega, 1982, 10(6): 647–661
Linear aggregation models employing unit and equal weights have been shown to be superior to human decisions in a surprising range of decision situations. In addition, decisions based on these models have often been found to be superior to those based on linear regression models (LRMs). This general issue was explored for repetitive decisions in production planning. The problem considered differs in several aspects from the types of problems investigated previously: (1) the problem is dynamic rather than static; (2) a set (or vector) of interactive decisions dependent on previous decisions is required, where a decision in stage t, the dependent variable, becomes an independent variable in stage t + 1; and (3) the criterion function is cost with a quadratic loss function (rather than the correlation measure R²). Moreover, since repetitive decisions were involved, the parameters of the models were estimated using past human decisions. These were used to predict specific values of the decision variables (rather than rank order), which in turn were employed recursively to predict values of the decision variables at subsequent stages. While decisions from equal weighting rules were found to be superior to human decisions and not greatly inferior to decisions from linear regression models, decisions from unit weighting rules performed poorly. The rationale for such performance is discussed, indicating that previous theoretical and empirical research on linear weighting models is not generally applicable to dynamic, multivariate interactive decision problems with lagged variables.

17.
In this study, a variance-based global sensitivity analysis method was first applied to a contamination assessment model of Listeria monocytogenes in cold smoked vacuum packed salmon at consumption. The impact of the choice of the modeling approach (populational or cellular) for the primary and secondary models, as well as the effect of their associated input factors on the final contamination level, was investigated. Results provided a subset of important factors, including the food water activity, its storage temperature, and its duration in the domestic refrigerator. A refined sensitivity analysis was then performed to rank the important factors, tested over narrower ranges of variation corresponding to their current distributions, using three techniques: ANOVA, Spearman correlation coefficient, and partial least squares regression.

18.
Mathematical programming and multicriteria approaches to classification and discrimination are reviewed, with an emphasis on preference disaggregation. The latter include the UTADIS family and a new method, Multigroup Hierarchical DIScrimination (MHDIS). They are used to assess investing risk in 51 countries that have stock exchanges, according to 27 criteria. These criteria include quantitative and qualitative measures of market risk (volatility and currency fluctuations); range of investment opportunities; quantity and quality of market information; investor protection (security regulations, treatment of minority shareholders); and administrative "headaches" (custody, settlement, and taxes). The model parameters are determined so that the results best match the risk level assigned to those countries by experienced international investment managers commissioned by The Wall Street Journal. Among the six evaluation models developed, one (MHDIS) classifies all countries correctly into the appropriate groups. Thus, this model is able to reproduce consistently the evaluation of the expert investment analysts. The most significant criteria and their weights for assessing global investing risk are also presented, along with their marginal utilities, leading to identifiers of risk groups and global utilities portraying the strength of each country's risk classification. The same method, MHDIS, outperformed the other five methods in a 10-fold validation experiment. These results are promising for the study of emerging new markets in fast-growing regions, which present fertile areas for investment growth but also […]

19.
Certification is an essential feature in organic farming, and it is based on inspections to verify compliance with European Council Regulation (EC) No 834/2007. A risk-based approach to noncompliance that alerts the control bodies when planning inspections would contribute to a more efficient and cost-effective certification system. An analysis of factors that can affect the probability of noncompliance in organic farming has thus been developed. This article examines the application of zero-inflated count data models to farm-level panel data from inspection results and sanctions obtained from the Ethical and Environmental Certification Institute, one of the main control bodies in Italy. We tested many a priori hypotheses related to the risk of noncompliance. We find evidence of an important role for past noncompliant behavior in predicting future noncompliance, while farm size and the presence of livestock also play roles in an increased probability of noncompliance. We conclude by proposing that an efficient risk-based inspection system should be designed by weighting the known probability of occurrence of a given noncompliance according to the severity of its impact.
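A zero-inflated count model of the kind applied here mixes a point mass at zero (farms that are structurally compliant) with a standard count distribution for the rest. A minimal sketch of the zero-inflated Poisson probability mass function, with illustrative parameters rather than the article's fitted values:

```python
import math

def zip_pmf(k, lam, pi):
    """Zero-inflated Poisson: with probability pi the farm is a
    structural zero (never noncompliant); otherwise its noncompliance
    count is Poisson(lam). Returns P(count = k)."""
    poisson = math.exp(-lam) * lam ** k / math.factorial(k)
    if k == 0:
        return pi + (1.0 - pi) * poisson
    return (1.0 - pi) * poisson

# Illustrative parameters: 30% structural zeros, mean 1 noncompliance
# per inspection period among the at-risk farms.
p_zero = zip_pmf(0, lam=1.0, pi=0.3)
```

The excess of zeros relative to a plain Poisson is exactly what makes zero-inflated models appropriate for inspection data, where many farms are never sanctioned.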

20.
Evaluations of Listeria monocytogenes dose-response relationships are crucially important for risk assessment and risk management, but are complicated by considerable variability across population subgroups and L. monocytogenes strains. Despite difficulties associated with the collection of adequate data from outbreak investigations or sporadic cases, the limitations of currently available animal models, and the inability to conduct human volunteer studies, some of the available data now allow refinements of the well-established exponential L. monocytogenes dose response to more adequately represent extremely susceptible population subgroups and highly virulent L. monocytogenes strains. Here, a model incorporating adjustments for variability in L. monocytogenes strain virulence and host susceptibility was derived for 11 population subgroups with similar underlying comorbidities using data from multiple sources, including human surveillance and food survey data. In light of the unique inherent properties of L. monocytogenes dose response, a lognormal-Poisson dose-response model was chosen, and proved able to reconcile dose-response relationships developed based on surveillance data with outbreak data. This model was compared to a classical beta-Poisson dose-response model, which was insufficiently flexible for modeling the specific case of L. monocytogenes dose-response relationships, especially in outbreak situations. Overall, the modeling results suggest that most listeriosis cases are linked to the ingestion of food contaminated with medium to high concentrations of L. monocytogenes. While additional data are needed to refine the derived model and to better characterize and quantify the variability in L. monocytogenes strain virulence and individual host susceptibility, the framework derived here represents a promising approach to more adequately characterize the risk of listeriosis in highly susceptible population subgroups.
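A lognormal-Poisson dose-response model marginalizes the single-hit exponential model over a lognormal distribution of the per-organism infectivity parameter r, capturing strain and host variability. The sketch below computes that marginal by simple quadrature over the normal latent variable; the parameter values are illustrative placeholders, not the study's fitted values.

```python
import math

def lognormal_poisson_response(dose, mu, sigma, n_grid=2000):
    """Marginal response probability P(ill | dose) when the single-hit
    parameter r varies as lognormal(mu, sigma): integrates
    1 - exp(-r*dose) against the lognormal density via midpoint
    quadrature over the standard-normal latent variable z."""
    lo, hi = -8.0, 8.0
    dz = (hi - lo) / n_grid
    total = 0.0
    for i in range(n_grid):
        z = lo + (i + 0.5) * dz
        weight = math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)
        r = math.exp(mu + sigma * z)  # lognormal draw at this grid point
        total += weight * (1.0 - math.exp(-r * dose)) * dz
    return total

# Illustrative parameters: median r of 0.01 with substantial spread.
p = lognormal_poisson_response(10.0, mu=math.log(0.01), sigma=1.0)
```

As sigma shrinks, the model collapses to the plain exponential dose-response; the heavy lognormal tail is what lets it accommodate highly virulent strains and highly susceptible hosts within one framework.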


Copyright © Beijing Qinyun Technology Development Co., Ltd. (京ICP备09084417号)