Similar Documents
1.
Consumer Phase Risk Assessment for Listeria monocytogenes in Deli Meats
The foodborne disease risk associated with the pathogen Listeria monocytogenes has been the subject of recent efforts in quantitative microbial risk assessment. Building upon one of these efforts undertaken jointly by the U.S. Food and Drug Administration and the U.S. Department of Agriculture (USDA), the purpose of this work was to expand on the consumer phase of the risk assessment to focus on handling practices in the home. One-dimensional Monte Carlo simulation was used to model variability in growth and cross-contamination of L. monocytogenes during food storage and preparation of deli meats. Simulations approximated that 0.3% of the servings were contaminated with >10⁴ CFU/g of L. monocytogenes at the time of consumption. The estimated mean risk associated with the consumption of deli meats for the intermediate-age population was approximately 7 deaths per 10¹¹ servings. Food handling in homes increased the estimated mean mortality by 10⁶-fold. Of all the home food-handling practices modeled, inadequate storage, particularly refrigeration temperatures, provided the greatest contribution to increased risk. The impact of cross-contamination in the home was considerably less. Adherence to USDA Food Safety and Inspection Service recommendations for consumer handling of ready-to-eat foods substantially reduces the risk of listeriosis.
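As an illustration of this kind of consumer-phase simulation, the sketch below runs a one-dimensional Monte Carlo over hypothetical distributions for initial contamination, home refrigerator temperature, and storage time, with a square-root-type growth model; all parameter values are placeholders rather than the FDA/USDA model inputs.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000  # simulated servings

# Hypothetical input distributions (illustrative only, not the FDA/USDA values)
log_n0 = rng.normal(-2.0, 1.5, n)            # initial contamination, log10 CFU/g
temp_c = rng.normal(5.5, 2.5, n)             # home refrigerator temperature, degrees C
days = rng.uniform(1, 28, n)                 # storage time before consumption, days

# Ratkowsky-type square-root growth model (illustrative parameters)
b, t_min = 0.05, -1.3                        # growth ceases below t_min
sqrt_mu = np.maximum(b * (temp_c - t_min), 0.0)
mu = sqrt_mu ** 2                            # specific growth rate, log10 CFU/g per day

log_n = np.minimum(log_n0 + mu * days, 8.0)  # cap at stationary-phase density
frac_high = np.mean(log_n > 4.0)             # servings above 10^4 CFU/g at consumption
print(f"Fraction of servings > 1e4 CFU/g at consumption: {frac_high:.4f}")
```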

2.
This article compares two nonparametric tree‐based models, quantile regression forests (QRF) and Bayesian additive regression trees (BART), for predicting storm outages on an electric distribution network in Connecticut, USA. We evaluated point estimates and prediction intervals of outage predictions for both models using high‐resolution weather, infrastructure, and land use data for 89 storm events (including hurricanes, blizzards, and thunderstorms). We found that, spatially, BART predicted more accurate point estimates than QRF. However, QRF produced better prediction intervals at high spatial resolutions (2‐km grid cells and towns), while BART predictions aggregated to coarser resolutions (divisions and service territory) more effectively. We also found that the predictive accuracy was dependent on the season (e.g., tree‐leaf condition, storm characteristics), and that the predictions were most accurate for winter storms. Given the merits of each individual model, we suggest that BART and QRF be implemented together to show the complete picture of a storm's potential impact on the electric distribution network, which would allow a utility to make better decisions about allocating prestorm resources.
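BART has no standard scikit-learn implementation, but the flavor of forest-based point estimates and prediction intervals can be sketched with a random forest, using the spread of per-tree predictions as a rough stand-in for quantile-regression-forest intervals (a simplification of true QRF); the outage data and features below are synthetic.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
n = 2000
# Synthetic storm/infrastructure features: wind gust, rainfall, tree density, line length
X = rng.uniform(0, 1, size=(n, 4))
outages = np.round(np.exp(3 * X[:, 0] + 1.5 * X[:, 2]) + rng.gamma(2.0, 2.0, n))

train, test = slice(0, 1500), slice(1500, None)
forest = RandomForestRegressor(n_estimators=500, min_samples_leaf=5, random_state=0)
forest.fit(X[train], outages[train])

point = forest.predict(X[test])                                   # point estimates
per_tree = np.stack([t.predict(X[test]) for t in forest.estimators_])
lo, hi = np.percentile(per_tree, [5, 95], axis=0)                 # crude 90% interval
coverage = np.mean((outages[test] >= lo) & (outages[test] <= hi))
print(f"Mean predicted outages: {point.mean():.1f}, interval coverage: {coverage:.2f}")
```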

3.
Eren Demir, Decision Sciences, 2014, 45(5): 849-880
The number of emergency (or unplanned) readmissions in the United Kingdom National Health Service (NHS) has been rising for many years. This trend, which is possibly related to poor patient care, places financial pressures on hospitals and on national healthcare budgets. As a result, clinicians and key decision makers (e.g., managers and commissioners) are interested in predicting patients at high risk of readmission. Logistic regression is the most popular method of predicting patient‐specific probabilities. However, these studies have produced conflicting results with poor prediction accuracies. We compared the predictive accuracy of logistic regression with that of regression trees for predicting emergency readmissions within 45 days of being discharged from hospital. We also examined the predictive ability of two other types of data‐driven models: generalized additive models (GAMs) and multivariate adaptive regression splines (MARS). We used data on 963 patients readmitted to hospitals with chronic obstructive pulmonary disease and asthma. We used repeated split‐sample validation: the data were divided into derivation and validation samples. Predictive models were estimated using the derivation sample, and the predictive accuracy of the resultant model was assessed using a number of performance measures, such as area under the receiver operating characteristic (ROC) curve, in the validation sample. This process was repeated 1,000 times—the initial data set was divided into derivation and validation samples 1,000 times, and the predictive accuracy of each method was assessed each time. The mean ROC curve area for the regression tree models in the 1,000 derivation samples was .928, while the mean ROC curve area of a logistic regression model was .924. Our study shows that the logistic regression model and regression trees had performance comparable to that of more flexible, data‐driven models such as GAMs and MARS. Given that the models produced excellent predictive accuracies, this could be a valuable decision support tool for clinicians (healthcare managers, policy makers, etc.) for informed decision making in the management of diseases, which ultimately contributes to improved measures of hospital performance management.
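A compact sketch of the repeated split-sample validation described above, using scikit-learn's logistic regression and a decision tree on synthetic readmission data (the NHS data are not reproduced here); fewer repeats are used than the study's 1,000.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(42)
n = 963  # same sample size as the study; features and outcome are synthetic
X = rng.normal(size=(n, 6))                       # e.g., age, prior admissions, comorbidities
p = 1 / (1 + np.exp(-(0.8 * X[:, 0] + 0.6 * X[:, 1] - 0.4 * X[:, 2])))
y = rng.binomial(1, p)                            # 1 = readmitted within 45 days

aucs = {"logistic": [], "tree": []}
for seed in range(200):                           # the study repeated this 1,000 times
    X_dev, X_val, y_dev, y_val = train_test_split(X, y, test_size=0.3, random_state=seed)
    lr = LogisticRegression(max_iter=1000).fit(X_dev, y_dev)
    tree = DecisionTreeClassifier(max_depth=4, random_state=seed).fit(X_dev, y_dev)
    aucs["logistic"].append(roc_auc_score(y_val, lr.predict_proba(X_val)[:, 1]))
    aucs["tree"].append(roc_auc_score(y_val, tree.predict_proba(X_val)[:, 1]))

for name, vals in aucs.items():
    print(f"{name}: mean validation AUC = {np.mean(vals):.3f}")
```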

4.
Tucker Burch, Risk Analysis, 2019, 39(3): 599-615
The assumptions underlying quantitative microbial risk assessment (QMRA) are simple and biologically plausible, but QMRA predictions have never been validated for many pathogens. The objective of this study was to validate QMRA predictions against epidemiological measurements from outbreaks of waterborne gastrointestinal disease. I screened 2,000 papers and identified 12 outbreaks with the necessary data: disease rates measured using epidemiological methods and pathogen concentrations measured in the source water. Eight of the 12 outbreaks were caused by Cryptosporidium, three by Giardia, and one by norovirus. Disease rates varied from 5.5 × 10⁻⁶ to 1.1 × 10⁻² cases/person‐day, and reported pathogen concentrations varied from 1.2 × 10⁻⁴ to 8.6 × 10² per liter. I used these concentrations with single‐hit dose–response models for all three pathogens to conduct QMRA, producing both point and interval predictions of disease rates for each outbreak. Comparison of QMRA predictions to epidemiological measurements showed good agreement; interval predictions contained measured disease rates for 9 of 12 outbreaks, with point predictions off by factors of 1.0–120 (median = 4.8). Furthermore, 11 outbreaks occurred at mean doses of less than 1 pathogen per exposure. Measured disease rates for these outbreaks were clearly consistent with a single‐hit model, and not with a "two‐hit" threshold model. These results demonstrate the validity of QMRA for predicting disease rates due to Cryptosporidium and Giardia.
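The single-hit exponential dose-response calculation at the core of such a QMRA can be sketched as follows; the pathogen concentration, water intake, and dose-response parameter are illustrative placeholders rather than the study's values.

```python
import numpy as np

def single_hit_daily_risk(conc_per_l, intake_l, r):
    """Exponential (single-hit) model: P(infection) = 1 - exp(-r * dose)."""
    dose = conc_per_l * intake_l          # mean pathogens ingested per person-day
    return 1.0 - np.exp(-r * dose)

# Illustrative inputs for a Cryptosporidium-like scenario (placeholders)
conc = 0.05        # oocysts per liter in source water
intake = 1.0       # liters of unboiled tap water per person-day
r = 0.09           # single-hit dose-response parameter

risk = single_hit_daily_risk(conc, intake, r)
print(f"Predicted cases/person-day: {risk:.2e}")

# Even at mean doses well below one organism per exposure, the single-hit model
# predicts a nonzero infection rate, unlike a threshold ("two-hit") model.
print(f"Mean dose per exposure: {conc * intake:.3f} organisms")
```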

5.
The food industry faces two paradoxical demands: on the one hand, foods need to be microbiologically safe for consumption and on the other hand, consumers want fresh, minimally processed foods. To meet these demands, more insight into the mechanisms of microbial growth is needed, which includes, among others, the microbial lag phase. This is the time needed by bacterial cells to adapt to a new environment (for example, after food product contamination) before starting an exponential growth regime. Since food products are often contaminated with low amounts of pathogenic microorganisms, it is important to know the distribution of these individual cell lag times to make accurate predictions concerning food safety. More precisely, cells with the shortest lag times (i.e., appearing in the left tail of the distribution) are largely decisive for the outgrowth of the population. In this study, an integrated modeling approach is proposed and applied to an existing data set of individual cell lag time measurements of Listeria monocytogenes. In a first step, a logistic modeling approach is applied to predict the fraction of zero-lag cells (which start growing immediately) as a function of temperature, pH, and water activity. For the nonzero-lag cells, the mean and variance of the lag time distribution are modeled with a hyperbolic-type model structure. This mean and variance allow identification of the parameters of a two-parameter Weibull distribution, representing the nonzero-lag cell lag time distribution. The integration of the developed models allows prediction of a global distribution of individual cell lag times for any combination of environmental conditions in the interpolation domain of the original temperature, pH, and water activity settings. The global fitting quality of the model is quantified using several measures indicating that the model gives accurate predictions, erring slightly on the fail-safe side when predicting the shortest lag times.
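A small sketch of the sampling step implied by this integrated approach: a logistic model gives the probability that a cell has zero lag, and nonzero lags are drawn from a two-parameter Weibull whose shape and scale would in practice come from the modeled mean and variance; every parameter value below is invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(7)

def p_zero_lag(temp_c, ph, aw, beta=(-12.0, 0.25, 0.9, 2.0)):
    """Hypothetical logistic model for the fraction of zero-lag cells."""
    b0, bt, bph, baw = beta
    z = b0 + bt * temp_c + bph * ph + baw * aw
    return 1.0 / (1.0 + np.exp(-z))

def sample_lag_times(n, temp_c, ph, aw, shape=1.6, scale=12.0):
    """Mixture: zero-lag cells start growing immediately; the rest follow a Weibull (hours)."""
    p0 = p_zero_lag(temp_c, ph, aw)
    is_zero = rng.random(n) < p0
    lags = scale * rng.weibull(shape, n)   # numpy's weibull draws the scale-1 form
    lags[is_zero] = 0.0
    return lags

lags = sample_lag_times(10_000, temp_c=10.0, ph=6.0, aw=0.98)
print(f"Zero-lag fraction: {np.mean(lags == 0):.2f}")
print(f"5th percentile of nonzero lags (h): {np.percentile(lags[lags > 0], 5):.1f}")
```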

6.

The aim of this study was to examine associations between psychosocial, physical and individual factors, and musculoskeletal symptoms in the neck, shoulder and hand/wrist regions of computer users. Questionnaires were distributed to 5033 employees in 11 Danish companies; these employees all used computers for at least some of their work time. The response rate was 69% (n = 3475). The analyses were based on employees working 32-41 h/week (n = 2579). Symptomatic respondents reported symptoms for at least 8 days within the previous year. Multiple logistic regression analyses were used. Gender (female), age and duration of employment in the same job were associated with an increased prevalence of symptoms. High quantitative job demands and low possibilities for development at work were predictors of neck and hand/wrist symptoms. A high degree of repetitiveness and disturbing reflections on the computer screen were associated with symptoms in all three body regions. Repetitive movements were defined as the same finger, hand or arm movements performed many times per minute for at least 75% of work time. Repetitiveness was the only factor that could partly explain associations between symptoms and duration of computer use, i.e. respondents who used the computer almost all the time at work reported more repetitive movements than those who used it less. Thus, long hours of computer use may be associated with musculoskeletal symptoms, due to physical factors such as repetitive movements, whereas psychosocial factors appeared to be associated with the symptoms independently of the duration of computer use.

8.
This research examines the relationship between U.S. foreign direct investment (FDI) and exports of processed foods to China and identifies management strategies to enhance U.S. competitiveness. Two-stage least-squares empirical econometric results from a simultaneous equation system indicate that there exists a strong complementary relationship between U.S. exports and FDI into China. Therefore, the appropriate managerial strategy to access Chinese processed foods markets is to increase overall business activity, both FDI and exports into China.

9.
Using an additive super-efficiency data envelopment analysis (DEA) model, this paper develops a new assessment index based on two frontiers for predicting corporate failure and success. The proposed approach is applied to a random sample of 1001 firms, which is composed of 50 large US bankrupt firms randomly selected from Altman's bankruptcy database and 901 healthy matching firms. This sample represents the largest firms that went bankrupt over the period 1991–2004 and represents a full spectrum of industries. Our findings demonstrate that the DEA model is relatively weak in predicting corporate failures compared to healthy firm predictions, and the assessment index improves this weakness by giving the decision maker various options to achieve different precision levels of bankrupt, non-bankrupt, and total predictions.

10.
Semisoft cheese made from raw sheep's milk is traditionally and economically important in southern Europe. However, raw milk cheese is also a known vehicle of human listeriosis and contamination of sheep cheese with Listeria monocytogenes has been reported. In the present study, we have developed and applied a quantitative risk assessment model, based on available evidence and challenge testing, to estimate risk of invasive listeriosis due to consumption of an artisanal sheep cheese made with raw milk collected from a single flock in central Italy. In the model, contamination of milk may originate from the farm environment or from mastitic animals, with potential growth of the pathogen in bulk milk and during cheese ripening. Based on the 48‐day challenge test of a local semisoft raw sheep's milk cheese we found limited growth only during the initial phase of ripening (24 hours) and no growth or limited decline during the following ripening period. In our simulation, in the baseline scenario, 2.2% of cheese servings are estimated to have at least 1 colony forming unit (CFU) per gram. Of these, 15.1% would be above the current E.U. limit of 100 CFU/g (5.2% would exceed 1,000 CFU/g). Risk of invasive listeriosis per random serving is estimated in the 10⁻¹² range (mean) for healthy adults, and in the 10⁻¹⁰ range (mean) for vulnerable populations. When small flocks (10–36 animals) are combined with the presence of a sheep with undetected subclinical mastitis, risk of listeriosis increases and such flocks may represent a public health risk.

11.
There has been an increasing interest in physiologically based pharmacokinetic (PBPK) models in the area of risk assessment. The use of these models raises two important issues: (1) How good are PBPK models for predicting experimental kinetic data? (2) How is the variability in the model output affected by the number of parameters and the structure of the model? To examine these issues, we compared a five-compartment PBPK model, a three-compartment PBPK model, and nonphysiological compartmental models of benzene pharmacokinetics. Monte Carlo simulations were used to take into account the variability of the parameters. The models were fitted to three sets of experimental data and a hypothetical experiment was simulated with each model to provide a uniform basis for comparison. Two main results are presented: (1) the difference is larger between the predictions of the same model fitted to different data sets than between the predictions of different models fitted to the same data; and (2) the type of data used to fit the model has a larger effect on the variability of the predictions than the type of model and the number of parameters.
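To illustrate the kind of comparison involved, the toy example below propagates Monte Carlo parameter variability through a one-compartment (nonphysiological) kinetic model; a real PBPK model would add flow-limited tissue compartments, but the uncertainty propagation has the same structure. All distributions are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(3)
n_sim = 2000

# One-compartment model: dC/dt = input/V - k*C  (illustrative, not a fitted benzene model)
v_d = rng.lognormal(np.log(40.0), 0.2, n_sim)     # volume of distribution, L
k_el = rng.lognormal(np.log(0.3), 0.3, n_sim)     # elimination rate constant, 1/h
dose_rate = 1.0                                    # mg/h absorbed during a 4-h exposure

t = np.linspace(0, 12, 121)                        # hours
conc = np.empty((n_sim, t.size))
dt = t[1] - t[0]
for i in range(n_sim):
    exposure = (t <= 4.0) * dose_rate / v_d[i]     # input only during the exposure window
    c = 0.0
    for j in range(t.size):                        # simple Euler integration
        conc[i, j] = c
        c += dt * (exposure[j] - k_el[i] * c)

lo, med, hi = np.percentile(conc[:, -1], [2.5, 50, 97.5])
print(f"Blood concentration at 12 h (mg/L): median {med:.3f}, 95% interval [{lo:.3f}, {hi:.3f}]")
```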

12.
Adverse outcome pathway Bayesian networks (AOPBNs) are a promising avenue for developing predictive toxicology and risk assessment tools based on adverse outcome pathways (AOPs). Here, we describe a process for developing AOPBNs. AOPBNs use causal networks and Bayesian statistics to integrate evidence across key events. In this article, we use our AOPBN to predict the occurrence of steatosis under different chemical exposures. Since it is an expert-driven model, we use external data (i.e., data not used for modeling) from the literature to validate predictions of the AOPBN model. The AOPBN accurately predicts steatosis for the chemicals from our external data. In addition, we demonstrate how end users can utilize the model to simulate the confidence (based on posterior probability) associated with predicting steatosis. We demonstrate how the network topology impacts predictions across the AOPBN, and how the AOPBN helps us identify the most informative key events that should be monitored for predicting steatosis. We close with a discussion of how the model can be used to predict potential effects of mixtures and how to model susceptible populations (e.g., where a mutation or stressor may change the conditional probability tables in the AOPBN). Using this approach for developing expert AOPBNs will facilitate the prediction of chemical toxicity, facilitate the identification of assay batteries, and greatly improve chemical hazard screening strategies.
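A hand-rolled miniature of the idea (not the authors' AOPBN): two key events feed an adverse outcome, the conditional probability tables are invented, and the posterior probability of steatosis is computed by enumeration.

```python
from itertools import product

# Toy AOP Bayesian network: chemical exposure -> KE1 -> KE2 -> steatosis (adverse outcome)
# All conditional probability tables below are illustrative, not from the published AOPBN.
p_ke1_given_exposed = {True: 0.8, False: 0.1}       # P(KE1 | exposure status)
p_ke2_given_ke1 = {True: 0.7, False: 0.05}          # P(KE2 | KE1)
p_ao_given_ke2 = {True: 0.6, False: 0.02}           # P(steatosis | KE2)

def p_steatosis(exposed, ke1_obs=None, ke2_obs=None):
    """Posterior P(steatosis) by enumerating latent key events, with optional assay evidence."""
    num = den = 0.0
    for ke1, ke2, ao in product([True, False], repeat=3):
        if (ke1_obs is not None and ke1 != ke1_obs) or (ke2_obs is not None and ke2 != ke2_obs):
            continue
        p = p_ke1_given_exposed[exposed] if ke1 else 1 - p_ke1_given_exposed[exposed]
        p *= p_ke2_given_ke1[ke1] if ke2 else 1 - p_ke2_given_ke1[ke1]
        p *= p_ao_given_ke2[ke2] if ao else 1 - p_ao_given_ke2[ke2]
        den += p
        if ao:
            num += p
    return num / den

print(f"P(steatosis | exposed)              = {p_steatosis(True):.3f}")
print(f"P(steatosis | exposed, KE2 assay +) = {p_steatosis(True, ke2_obs=True):.3f}")
```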

13.
Using 1985–99 data from the German Socio‐Economic Panel Study (GSOEP) we confirm the hypothesis that existing computer wage premiums are determined by ability or other unobserved individual characteristics rather than by productivity effects. In addition to the conventional longitudinal regression analysis, the two competing hypotheses were tested by employing future PC variables in the wage regressions in order to obtain a further control for worker heterogeneity. The finding that future PC variables have a statistically significant effect on current wages leads us to conclude that computer wage differentials can be attributed to worker heterogeneity rather than to computer‐induced productivity.

14.
Tetrachloroethylene (PCE) is a commonly used organic solvent and a suspected human carcinogen, reportedly transferred to human breast milk following inhalation exposure. Transfer of PCE to milk may represent a threat to the nursing infant. A physiologically based pharmacokinetic (PBPK) model was developed to quantitatively assess the transfer of inhaled PCE into breast milk and the consequent exposure of the nursing infant. The model was validated in lactating rats. Lactating Sprague-Dawley female rats were exposed via inhalation to PCE at concentrations ranging from 20-1000 ppm, and then returned to their nursing, 10- to 11-day-old pups. Tetrachloroethylene concentrations in the air, blood, milk, and tissue were determined by gas chromatography and compared to model predictions. The model described the distribution of inhaled PCE in maternal blood and milk, as well as the nursed pup's gastrointestinal tract, blood, and tissue. Several computer simulations of PCE distribution kinetics in exhaled air, blood, and milk of exposed human subjects were run and compared with limited human data available from the literature. It is concluded that the PBPK model successfully described the concentration of PCE in both lactating rats and humans. Although predictions vs. observations were good, the model slightly underpredicted the peak whole pup PCE concentration and underpredicted systemic clearance of PCE from the pup.

15.
Volatility Forecasting Models for CSI 300 Stock Index Futures
Using 5-minute high-frequency data from simulated trading of CSI 300 stock index futures, this study applies rolling-window out-of-sample forecasting and the bootstrap-based SPA test to compare, comprehensively, the ability of historical volatility models based on daily returns and realized volatility models based on high-frequency data to characterize and forecast volatility. The main empirical results show that realized volatility models, together with extended stochastic volatility models that include additional explanatory variables, achieve the highest forecasting accuracy, whereas the GARCH models and their extensions widely used in academia and practice have the weakest ability to forecast the volatility of CSI 300 stock index futures.
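A brief sketch of the realized volatility construction from 5-minute returns, with a naive rolling-window forecast; the price series is simulated, and the SPA test and the full model set compared in the paper are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(5)
n_days, bars_per_day = 250, 48          # roughly 4 trading hours of 5-minute bars per day

# Simulated 5-minute log returns with day-to-day volatility clustering
daily_sigma = 0.01 * np.exp(0.4 * np.cumsum(rng.normal(0, 0.1, n_days)))
r = rng.normal(0, daily_sigma[:, None] / np.sqrt(bars_per_day), (n_days, bars_per_day))

rv = np.sum(r ** 2, axis=1)             # daily realized variance = sum of squared intraday returns
rv_vol = np.sqrt(rv)                    # daily realized volatility

# Naive rolling-window forecast: tomorrow's RV approximated by the mean RV of the past 20 days
window = 20
forecast = np.array([rv[i - window:i].mean() for i in range(window, n_days)])
actual = rv[window:]
mse = np.mean((forecast - actual) ** 2)
print(f"Mean daily realized vol: {rv_vol.mean():.4f}, rolling-forecast MSE: {mse:.2e}")
```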

16.
To prevent and control foodborne diseases, there is a fundamental need to identify the foods that are most likely to cause illness. The goal of this study was to rank 25 commonly consumed food products associated with Salmonella enterica contamination in the Central Region of Mexico. A multicriteria decision analysis (MCDA) framework was developed to obtain an S. enterica risk score for each food product based on four criteria: probability of exposure to S. enterica through domestic food consumption (Se); S. enterica growth potential during home storage (Sg); per capita consumption (Pcc); and food attribution of S. enterica outbreaks (So). Risk scores were calculated as Se·W1 + Sg·W2 + Pcc·W3 + So·W4, where each criterion was assigned a normalized value (1–5) and the relative weights (W) were defined by the opinion of 22 experts. Se had the largest effect on the risk score, being the criterion with the highest weight (35%; 95% CI 20%–60%), followed by So (24%; 5%–50%), Sg (23%; 10%–40%), and Pcc (18%; 10%–35%). The results identified chicken (4.4 ± 0.6), pork (4.2 ± 0.6), and beef (4.2 ± 0.5) as the highest risk foods, followed by seed fruits (3.6 ± 0.5), tropical fruits (3.4 ± 0.4), and dried fruits and nuts (3.4 ± 0.5), while the food products with the lowest risk were yogurt (2.1 ± 0.3), chorizo (2.1 ± 0.4), and cream (2.0 ± 0.3). Approaches with expert-based weighting and equal weighting showed good correlation (R = 0.96) and did not show significant differences in ranking order within the top 20 tier. This study can help risk managers select interventions and develop targeted surveillance programs against S. enterica in high-risk food products.
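The weighted-sum scoring step is simple enough to show directly; the criterion values below are made-up examples rather than the study's elicited scores, but the formula is the one given above.

```python
# Illustrative MCDA ranking: risk = Se*W1 + Sg*W2 + Pcc*W3 + So*W4
# Criterion values (1-5) below are invented examples, not the study's elicited scores.
weights = {"Se": 0.35, "Sg": 0.23, "Pcc": 0.18, "So": 0.24}   # expert weights, sum to 1.0

foods = {
    #            Se  Sg  Pcc  So
    "chicken":  (5,  4,  4,   5),
    "pork":     (5,  4,  3,   4),
    "yogurt":   (2,  2,  3,   1),
    "cream":    (2,  2,  2,   1),
}

def risk_score(values):
    se, sg, pcc, so = values
    return (se * weights["Se"] + sg * weights["Sg"]
            + pcc * weights["Pcc"] + so * weights["So"])

for food, vals in sorted(foods.items(), key=lambda kv: -risk_score(kv[1])):
    print(f"{food:8s} risk score = {risk_score(vals):.2f}")
```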

17.
This paper investigates whether fund net asset value (NAV) growth rates are significantly pushed up on the last trading day of the month, quarter, and year, thereby exhibiting a calendar effect. We first introduce dummy variables into a regression analysis, and then examine calendar-end performance across different performance groups as well as the pattern of NAV growth rates over the 10 trading days before and after these dates. The results empirically support the hypothesis that fund NAV growth rates rise abnormally at calendar ends. We also find that, over the 10 trading days surrounding month-end and quarter-end, NAV growth rates clearly rise from a low point and then fall back.
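A minimal version of the dummy-variable regression described above, run on simulated daily NAV growth rates; the coefficient on the calendar-end dummy measures the abnormal last-trading-day return, and the size of the injected effect is synthetic.

```python
import numpy as np

rng = np.random.default_rng(11)
n_days = 750
ret = rng.normal(0.0003, 0.01, n_days)              # daily NAV growth rate (synthetic)

# Mark roughly every 21st trading day as a month/quarter/year-end last trading day
is_period_end = np.zeros(n_days)
is_period_end[20::21] = 1.0
ret += 0.004 * is_period_end                         # inject an artificial calendar-end bump

# OLS of returns on a constant and the calendar-end dummy
X = np.column_stack([np.ones(n_days), is_period_end])
beta, *_ = np.linalg.lstsq(X, ret, rcond=None)
resid = ret - X @ beta
se = np.sqrt(np.diag(np.linalg.inv(X.T @ X)) * resid.var(ddof=2))
t_stat = beta[1] / se[1]
print(f"Calendar-end dummy coefficient: {beta[1]:.4f} (t = {t_stat:.2f})")
```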

18.
In either simple or multivariate regression models, measured coefficients represent a kind of average of co-variant relationships implicit in the data. For time-series data, if co-variant relationships follow no particular pattern over the sample period, use of this average in forecasting is rational. However, if relationships among independent and dependent variables are undergoing change over time, then measured regression coefficients are likely to be poor tools for prediction. This paper is an exploratory study of this general problem area. It develops implications as to the predictive impact of the problem and offers preliminary suggestions as to a method of developing predictions under such circumstances.
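One common way to check whether co-variant relationships are drifting over the sample, in the spirit of the discussion above, is to re-estimate the regression on a rolling window and track the coefficient path; the data below are simulated with a deliberately drifting slope.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 300
x = rng.normal(size=n)
true_beta = np.linspace(0.5, 2.0, n)        # relationship drifts over the sample period
y = true_beta * x + rng.normal(0, 0.5, n)

window = 60
betas = []
for start in range(n - window + 1):
    xs, ys = x[start:start + window], y[start:start + window]
    X = np.column_stack([np.ones(window), xs])
    b, *_ = np.linalg.lstsq(X, ys, rcond=None)
    betas.append(b[1])

full_sample_beta = np.polyfit(x, y, 1)[0]    # the single "average" coefficient
print(f"Full-sample slope: {full_sample_beta:.2f}")
print(f"Rolling slope, first vs last window: {betas[0]:.2f} -> {betas[-1]:.2f}")
```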

19.
Biomonitoring programs for urinary chromium (Cr) typically attempt to evaluate occupational exposure via the inhalation route. This study investigated whether Cr can be detected in the urine of people following the ingestion of soils that contain relatively high concentrations of chromium in chromite ore processing residue (COPR). To evaluate the reasonableness of using urinary monitoring to assess environmental exposure, six volunteers ingested 400 mg of soil/day (low-dose group), two others ingested 2.0 g of soil/day (high-dose group) for 3 consecutive days, and one person ingested a placebo on each of 3 days. The soil and COPR mixture contained concentrations of total chromium (Cr) and hexavalent chromium [Cr(VI)] of 103 ± 20 and 9.3 ± 3.8 mg/kg, respectively. Therefore, the low-dose group ingested 41 μg Cr/day [including 3.7 μg Cr(VI)] and the high-dose group ingested 206 μg Cr/day [including 18.6 μg Cr(VI)] on each of 3 consecutive days. All urine samples were collected and analyzed individually for total Cr on the day prior to dosing, during the 3 days of dosing, and up to the first void 48 h after the last dose. No significant increases in urinary Cr excretion were found when background excretion data were compared with data following each of the 3 days of dosing or in daily mean urine concentrations of the high- vs the low-dose groups. It appears that Cr present in a soil and COPR mixture at Cr doses up to 200 μg/day is not sufficiently bioavailable for biomonitoring of urine to be informative. These results are consistent with previously published findings suggesting that incidental exposure to dusts and soils containing comparable levels of Cr will not result in increased concentrations of Cr in urine.

20.
A novel extension of traditional growth models for exposure assessment of food-borne microbial pathogens was developed to address the complex interactions of competing microbial populations in foods. Scenarios were designed for baseline refrigeration and mild abuse of servings of chicken broiler and ground beef. Our approach employed high-quality data for microbiology of foods at production, refrigerated storage temperatures, and growth kinetics of microbial populations in culture media. Simple parallel models were developed for exponential growth of multiple pathogens and the abundant and ubiquitous nonpathogenic indigenous microbiota. Monte Carlo simulations were run for unconstrained growth and growth with the density-dependent constraint based on the "Jameson effect," inhibition of pathogen growth when the indigenous microbiota reached 10⁹ counts per serving. The modes for unconstrained growth of the indigenous microbiota were 10⁸, 10¹⁰, and 10¹¹ counts per serving for chicken broilers, and 10⁷, 10⁹, and 10¹¹ counts per serving for ground beef at the respective sites of backroom, meat case, and home refrigeration. Contamination rates and likelihoods of reaching temperatures supporting growth of the pathogens in the baseline refrigeration scenario were rare events. The unconstrained exponential growth models appeared to overestimate L. monocytogenes growth maxima for the baseline refrigeration scenario by 1500–7233% (10⁶–10⁷ counts/serving) when the inhibitory effects of the indigenous microbiota are ignored. The extreme tails of the distributions for the constrained models appeared to overestimate growth maxima by 110% (10⁴–10⁵ counts/serving) for Salmonella spp. and 108% (6 × 10³ counts/serving) for E. coli O157:H7 relative to the extremes of the unconstrained models. The approach of incorporating parallel models for pathogens and the indigenous microbiota into exposure assessment modeling motivates the design of validation studies to test the modeling assumptions, consistent with the analytical-deliberative process of risk analysis.
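A schematic version of the parallel growth models with the Jameson-effect constraint: pathogen and indigenous microbiota grow exponentially, and pathogen growth halts once the indigenous population reaches 10⁹ counts per serving. Initial counts and growth rates are placeholders, not the study's inputs.

```python
import numpy as np

def parallel_growth(log_pathogen0, log_flora0, mu_pathogen, mu_flora,
                    days, jameson_cap=9.0, max_density=11.0):
    """Exponential growth (log10/day) with the Jameson effect: pathogen growth stops
    once the indigenous microbiota reaches the cap (10^9 counts per serving)."""
    t = np.linspace(0, days, int(days * 24) + 1)   # hourly steps
    dt = t[1] - t[0]
    lp, lf = log_pathogen0, log_flora0
    for _ in t[1:]:
        lf = min(lf + mu_flora * dt, max_density)
        if lf < jameson_cap:                       # pathogen grows only before the cap is hit
            lp = min(lp + mu_pathogen * dt, max_density)
    return lp, lf

# Placeholder scenario: mild temperature abuse of a ground beef serving
lp, lf = parallel_growth(log_pathogen0=0.5, log_flora0=3.0,
                         mu_pathogen=0.4, mu_flora=0.8, days=10)
print(f"Constrained pathogen level after 10 days: 10^{lp:.1f} counts/serving (flora 10^{lf:.1f})")

# Without the constraint, the pathogen would keep growing for the full 10 days
print(f"Unconstrained pathogen level: 10^{0.5 + 0.4 * 10:.1f} counts/serving")
```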

