Similar Literature
20 similar documents retrieved.
1.
A model for the assessment of exposure to Listeria monocytogenes from cold-smoked salmon consumption in France was presented in the first of this pair of articles (Pouillot et al., 2007, Risk Analysis, 27:683–700). In the present study, the exposure model output was combined with an internationally accepted hazard characterization model, adapted to the French situation, to assess the risk of invasive listeriosis from cold-smoked salmon consumption in France in a second-order Monte Carlo simulation framework. The annual number of cases of invasive listeriosis due to cold-smoked salmon consumption in France is estimated to be 307, with a very large credible interval ([10; 12,453]) reflecting data uncertainty. This uncertainty is mainly associated with the dose-response model. Despite the significant uncertainty associated with the predictions, this model provides a scientific basis for risk managers and food business operators to manage the risk linked to cold-smoked salmon contaminated with L. monocytogenes. Under the modeling assumptions, risk would be efficiently reduced through a decrease in the prevalence of L. monocytogenes or better control of the last steps of the cold chain (shorter and/or colder storage during the consumer step), whereas reducing the initial contamination levels of contaminated products and improving the first steps of the cold chain do not appear to be promising strategies. An attempt to apply the recent risk-based concept of the FSO (food safety objective) to this example underlines the ambiguity in the practical implementation of risk management metrics and the need for further elaboration of these concepts.
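As a minimal sketch of the second-order Monte Carlo structure described above, the fragment below separates uncertainty (an outer loop over the dose-response parameter) from variability (an inner loop over servings). All distributions, parameter values, and the serving count are hypothetical placeholders, not the inputs of the Afssa model:

```python
import numpy as np

rng = np.random.default_rng(0)

# Outer loop: uncertainty (the dose-response parameter r is uncertain).
# Inner loop: variability (the dose varies from serving to serving).
n_unc, n_var = 1000, 10_000
servings_per_year = 1e8          # hypothetical number of contaminated servings

annual_cases = np.empty(n_unc)
for i in range(n_unc):
    # uncertain exponential dose-response parameter (log10 scale; illustrative)
    r = 10 ** rng.normal(-12.0, 0.5)
    # variable dose at consumption, CFU per serving (lognormal; illustrative)
    dose = rng.lognormal(mean=np.log(1e4), sigma=2.0, size=n_var)
    p_ill = 1.0 - np.exp(-r * dose)          # exponential dose-response model
    annual_cases[i] = p_ill.mean() * servings_per_year

lo, med, hi = np.percentile(annual_cases, [2.5, 50, 97.5])
print(f"median annual cases: {med:.0f}, 95% credible interval: [{lo:.0f}; {hi:.0f}]")
```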

2.
We used simulation modeling to assess potential climate change impacts on wildfire exposure in Italy and Corsica (France). Weather data were obtained from a regional climate model for the period 1981–2070 using the IPCC A1B emissions scenario. Wildfire simulations were performed with the minimum travel time fire spread algorithm, using predicted fuel moisture, wind speed, and wind direction to simulate expected changes in weather for three climatic periods (1981–2010, 2011–2040, and 2041–2070). Overall, the wildfire simulations showed very slight changes in flame length, while other outputs such as burn probability and fire size increased significantly in the second future period (2041–2070), especially in the southern portion of the study area. The projected changes in fuel moisture could result in a lengthening of the fire season for the entire study area. This work represents the first application in Europe of a methodology based on high-resolution (250 m) landscape wildfire modeling to assess potential impacts of climate change on wildfire exposure at a national scale. The findings can provide information and support for wildfire management planning and fire risk mitigation activities.

3.
To better understand the risk of exposure to food allergens, food challenge studies are designed to slowly increase the dose of an allergen delivered to allergic individuals until an objective reaction occurs. These dose-to-failure studies are used to determine acceptable intake levels and are analyzed using parametric failure time models. Though these models can provide estimates of the survival curve and risk, their parametric form may misrepresent the survival function for doses of interest. Different models that describe the data similarly may produce different dose-to-failure estimates. Motivated by predictive inference, we developed a Bayesian approach to combine survival estimates based on posterior predictive stacking, where the weights are formed to maximize posterior predictive accuracy. The approach defines a model space that is much larger than that of traditional parametric failure time modeling approaches. In our case, we use the approach to include random effects accounting for frailty components. The methodology is investigated in simulation and is used to estimate allergic population eliciting doses for multiple food allergens.
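A simplified sketch of the stacking idea, with MLE fits standing in for full posteriors and leave-one-out predictive densities approximating posterior predictive accuracy; the data, the two candidate failure-time models, and all settings are illustrative assumptions:

```python
import numpy as np
from scipy import stats, optimize

rng = np.random.default_rng(1)
doses = rng.weibull(1.5, size=40) * 30.0     # synthetic dose-to-failure data (mg)

def fit_expon(x):                             # MLE fits; stand-ins for posteriors
    return stats.expon(scale=x.mean())

def fit_weibull(x):
    c, _, s = stats.weibull_min.fit(x, floc=0)
    return stats.weibull_min(c, scale=s)

fitters = [fit_expon, fit_weibull]

# Leave-one-out predictive density of each model at each held-out dose.
n, K = len(doses), len(fitters)
loo = np.empty((n, K))
for i in range(n):
    train = np.delete(doses, i)
    for k, fit in enumerate(fitters):
        loo[i, k] = fit(train).pdf(doses[i])

# Stacking: simplex weights maximizing mean log pointwise predictive density.
def neg_lpd(z):
    w = np.exp(z) / np.exp(z).sum()          # softmax keeps w on the simplex
    return -np.log(loo @ w).mean()

z = optimize.minimize(neg_lpd, np.zeros(K)).x
w = np.exp(z) / np.exp(z).sum()
print("stacking weights (exponential, Weibull):", w.round(3))

# Stacked survival curve on a dose grid, e.g., for an eliciting-dose readout.
grid = np.linspace(0.1, 60, 200)
surv = sum(wk * fit(doses).sf(grid) for wk, fit in zip(w, fitters))
```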

4.
Longitudinal data are important in exposure and risk assessments, especially for pollutants with long half-lives in the human body and where chronic exposure to current levels in the environment raises concerns for human health effects. It is usually difficult and expensive to obtain large longitudinal data sets for human exposure studies. This article reports a new simulation method to generate longitudinal data with flexible numbers of subjects and days. Mixed models are used to describe the variance-covariance structure of the input longitudinal data. Based on the estimated model parameters, simulated data are generated with statistical characteristics similar to those of the input data. Three criteria are used to determine similarity: the overall mean and standard deviation, the variance component percentages, and the average autocorrelation coefficients. Building on the discussion of mixed models, a simulation procedure is presented and numerical results are shown for one human exposure study. Simulations of three sets of exposure data successfully meet the above criteria. In particular, the simulations always retain the correct weights of inter- and intrasubject variances found in the input data. Autocorrelations are also well reproduced. Compared with other simulation algorithms, this new method retains more information about the overall input distribution and so satisfies the above multiple criteria for statistical targets. In addition, it can generate values from numerous data sources and simulates continuous observed variables better than existing methods. The method also provides flexible options in both the modeling and simulation procedures to meet various user requirements.
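A minimal sketch of the simulation step, assuming the mixed model reduces to a random subject intercept plus AR(1) within-subject errors; the variance components, autocorrelation, and study dimensions below are hypothetical, not values from the article:

```python
import numpy as np

rng = np.random.default_rng(2)

# Suppose a mixed model fitted to the input data gave these estimates
# (hypothetical): overall mean, between-subject SD, within-subject SD,
# and lag-1 autocorrelation of the daily errors.
mu, sd_between, sd_within, rho = 2.5, 0.6, 0.9, 0.4
n_subjects, n_days = 50, 28

b = rng.normal(0.0, sd_between, size=n_subjects)       # random subject effects
y = np.empty((n_subjects, n_days))
for i in range(n_subjects):
    e = np.empty(n_days)                               # AR(1) within-subject errors
    e[0] = rng.normal(0.0, sd_within)
    for t in range(1, n_days):
        e[t] = rho * e[t - 1] + rng.normal(0.0, sd_within * np.sqrt(1 - rho**2))
    y[i] = mu + b[i] + e

# Check the three similarity criteria against the targets.
print("mean, sd:", y.mean().round(2), y.std(ddof=1).round(2))
between = y.mean(axis=1).var(ddof=1)                   # inter-subject component (approx.)
within = y.var(axis=1, ddof=1).mean()                  # intra-subject component
print("between-subject share:", round(between / (between + within), 2))
lag1 = np.mean([np.corrcoef(y[i, :-1], y[i, 1:])[0, 1] for i in range(n_subjects)])
print("average lag-1 autocorrelation:", round(lag1, 2))
```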

5.
A quantitative assessment of the exposure to Listeria monocytogenes from cold-smoked salmon (CSS) consumption in France is developed. The general framework is a second-order (or two-dimensional) Monte Carlo simulation, which characterizes the uncertainty and variability of the exposure estimate. The model takes into account the competitive bacterial growth between L. monocytogenes and the background competitive flora from the end of the production line to the consumer phase. An original algorithm is proposed to integrate this growth under conditions of varying temperature. As part of a more general project led by the French Food Safety Agency (Afssa), specific data were acquired and modeled for this quantitative exposure assessment model, particularly time-temperature profiles, prevalence data, and contamination-level data. The sensitivity analysis identifies the mean temperature in household refrigerators and the prevalence of contaminated CSS as the main influences on the exposure level. The outputs of this model can be used as inputs for further risk assessment.
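A rough sketch of competitive growth integrated over a varying time-temperature profile, using a square-root (Ratkowsky-type) secondary model and a Jameson-effect cutoff as stand-ins for the article's algorithm; the profile and all growth parameters are invented for illustration:

```python
import numpy as np

# Square-root secondary model: sqrt(mu_max) = b * (T - Tmin), mu in ln/h.
def mu_max(T, b, Tmin):
    return (b * max(T - Tmin, 0.0)) ** 2

# Hypothetical time-temperature profile: retail (4 C), transport (8 C), home (7 C).
profile = [(72.0, 4.0), (2.0, 8.0), (168.0, 7.0)]      # (hours, Celsius)

# State: ln CFU/g of L. monocytogenes (lm) and background flora (bf).
lm, bf = np.log(10.0), np.log(1e3)
LM_MAX, BF_MAX = np.log(1e7), np.log(1e8)
dt = 0.5                                                # Euler step, hours

for hours, T in profile:
    for _ in range(int(hours / dt)):
        # Jameson effect: L. monocytogenes stops growing once the background
        # flora reaches its maximum density (or it reaches its own maximum).
        if bf < BF_MAX:
            bf = min(bf + mu_max(T, 0.023, -3.0) * dt, BF_MAX)   # illustrative b, Tmin
            if lm < LM_MAX:
                lm = min(lm + mu_max(T, 0.020, -2.9) * dt, LM_MAX)

print(f"L. monocytogenes at consumption: {np.exp(lm):.2e} CFU/g")
```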

6.
The developmental neurotoxicity of methylmercury (MeHg) in humans has been described following catastrophic events in Minamata Bay, Japan, and in Iraq, and following exposure to lower doses elsewhere in the world. The most common route of MeHg exposure in humans is through the intake of contaminated food, especially fish. Although precautions against the ingestion of potentially contaminated food during pregnancy are well recognized, precautions against the ingestion of MeHg during lactation are not so uniformly recognized. However, the continued development of the central nervous system during the early postnatal period prolongs the window during which this critical system is susceptible to the toxic insult of MeHg. Because no direct method is available to quantitatively assess the lactational transfer of MeHg to humans, a computer-aided simulation method was developed. An available gestational physiologically based pharmacokinetic model was refined and expanded to include parameters and algorithms specific to the elimination of MeHg in breast milk. The predictions of the completed model were compared with experimental data obtained from rodents, and the model parameters were allometrically scaled to humans. Finally, the model was validated by comparing its predictions against the available clinical data for MeHg distribution and elimination in mothers and their nursing infants. This model incorporates current and previous maternal exposures to MeHg to predict the kinetics of MeHg excretion in breast milk and the daily intake by the nursing infant. The model may be used to quantify MeHg intake by the nursing infant under different rates of maternal MeHg ingestion.
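A deliberately reduced sketch of the lactational-transfer idea, collapsing the PBPK structure to a one-compartment maternal body burden with a milk-excretion pathway; the rate constants, intake scenario, and milk fraction are illustrative assumptions, not the published model's parameters:

```python
import numpy as np
from scipy.integrate import odeint

# One-compartment maternal body burden with a milk-excretion pathway;
# all rate constants below are illustrative, not fitted values.
K_ELIM = np.log(2) / 50.0      # /day, overall elimination (~50-day half-life)
F_MILK = 0.2                   # fraction of elimination routed to milk while nursing

def burden(A, t, intake):
    # A: maternal MeHg body burden (ug); intake(t): ug/day
    return intake(t) - K_ELIM * A

intake = lambda t: 20.0 if t < 270 else 5.0   # higher exposure during pregnancy

t = np.linspace(0, 500, 2000)                 # days
A = odeint(burden, 100.0, t, args=(intake,)).ravel()

nursing = t > 270                              # lactation starts after delivery
infant_daily_intake = F_MILK * K_ELIM * A      # ug/day transferred via milk
print("peak infant intake (ug/day):", infant_daily_intake[nursing].max().round(2))
```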

7.
Various methods for risk characterization have been developed using probabilistic approaches. Data on Vietnamese farmers are available for comparing the outcomes of risk characterization using different probabilistic methods. This article addresses the health risk characterization of chlorpyrifos using epidemiological dose-response data and probabilistic techniques obtained from a case study with rice farmers in Vietnam. Urine samples were collected from farmers and analyzed for trichloropyridinol (TCP), which was converted into the absorbed daily dose of chlorpyrifos. Adverse health response doses due to chlorpyrifos exposure were collected from epidemiological studies to develop dose-adverse health response relationships. The health risk of chlorpyrifos was quantified using the hazard quotient (HQ), Monte Carlo simulation (MCS), and overall risk probability (ORP) methods. With baseline (prior to pesticide spraying) and lifetime exposure levels (over a lifetime of pesticide spraying events), the HQ ranged from 0.06 to 7.1. The MCS method indicated that less than 0.05% of the population would be affected, while the ORP method indicated that less than 1.5% of the population would be adversely affected. With postapplication exposure levels, the HQ ranged from 1 to 32.5. The risk calculated by the MCS method was that 29% of the population would be affected, and the risk calculated by the ORP method was 33%. The MCS and ORP methods have advantages in risk characterization because they use the full distributions of both exposure and dose response, whereas the HQ method uses only the exposure data distribution. These evaluations indicate that single-event spraying is likely to have adverse effects on Vietnamese rice farmers.
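The three risk characterization methods can be contrasted on simulated data, as in this hedged sketch; the dose and effect-dose distributions and the reference dose below are placeholders, not the Vietnamese field data:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)

# Hypothetical absorbed daily doses of chlorpyrifos (mg/kg/day), lognormal.
dose = rng.lognormal(np.log(0.005), 1.0, size=100_000)

# 1) Hazard quotient: point comparisons against a reference dose.
RFD = 0.003                                    # illustrative reference value
print("HQ range:", (dose.min() / RFD).round(2), "-", (dose.max() / RFD).round(1))

# 2) MCS: fraction of the population whose dose exceeds a sampled effect dose.
effect_dose = rng.lognormal(np.log(0.05), 0.5, size=dose.size)
print("MCS % affected:", 100 * np.mean(dose > effect_dose))

# 3) ORP: integrate the exposure distribution against a dose-response curve
#    P(response | d), here a log-probit shape (parameters illustrative).
p_response = stats.norm.cdf((np.log(dose) - np.log(0.05)) / 0.5)
print("ORP % affected:", 100 * p_response.mean())
```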

8.
Because banks' bilateral risk exposures are not directly observable, this article uses a Bayesian approach based on balance-sheet data for 185 commercial banks from 2013 to 2017. Gibbs samplers are constructed under different network-structure assumptions, and a large sample of interbank asset and liability distribution matrices is drawn to examine each bank's probability of default, and its distribution, after a negative shock. The results show that the structure of the interbank lending network significantly affects banks' systemic risk and default probabilities. When the network connection probability is at an intermediate level, shocks propagate most widely; under a complete network structure, risk sharing outweighs risk contagion. In short, interbank lending both shares risk and serves as a channel for contagion, and the switch between these two roles depends on the interaction of several factors: the nature of the shock, such as its size, the number of banks hit, and the types of banks involved; the degree of asset depreciation at liquidation; and the characteristics of banks' own balance sheets. Considering the interbank lending channel alone, the banking system was most robust in 2017 and most fragile in 2014 within the sample period.
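A toy default-cascade sketch in the spirit of the approach: a simple proportional random draw stands in for one Gibbs-sampler sample of the bilateral exposure matrix, and a Furfine-style cascade propagates a shock; the balance-sheet figures, link probability, and loss-given-default are invented:

```python
import numpy as np

rng = np.random.default_rng(4)
n = 8                                           # small illustrative banking system

# Hypothetical balance-sheet aggregates (interbank assets/liabilities, capital).
ib_assets = rng.uniform(50, 150, n)
ib_liab = ib_assets[rng.permutation(n)]         # system totals stay consistent
capital = rng.uniform(20, 60, n)

def sample_matrix(p_link):
    """One draw of a bilateral exposure matrix matching row totals; a crude
    stand-in for one Gibbs-sampler draw of the interbank network."""
    adj = (rng.random((n, n)) < p_link) & ~np.eye(n, dtype=bool)
    w = adj * rng.random((n, n))
    w = w / np.maximum(w.sum(axis=1, keepdims=True), 1e-12) * ib_liab[:, None]
    return w                                    # w[i, j]: amount bank i owes bank j

def cascade(L, shocked, loss_given_default=0.6):
    defaulted = np.zeros(n, dtype=bool)
    defaulted[shocked] = True
    while True:
        # each bank's credit losses from all currently defaulted counterparties
        losses = (L * defaulted[:, None]).sum(axis=0) * loss_given_default
        new = (losses > capital) & ~defaulted
        if not new.any():
            return defaulted
        defaulted |= new

# Default probability of each bank over many sampled networks, bank 0 shocked.
draws = np.array([cascade(sample_matrix(p_link=0.5), shocked=0) for _ in range(2000)])
print("default probabilities:", draws.mean(axis=0).round(3))
```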

9.
Phthalates have been detected in various types of retail foods, and consumer exposure to phthalates is common. Consumers are also concerned about chemicals in food. Our aim was to investigate the relationships between consumers' exposure to phthalates through food, their interest in a natural and healthy diet, their risk perception of food chemicals, and their diet patterns. We collected data through a mail survey of the adult Swiss-German population (N = 1,200). We modeled exposure to di(2-ethylhexyl) phthalate (DEHP), dibutyl phthalate (DBP), benzyl butyl phthalate (BBP), and diethyl phthalate (DEP) based on a food frequency questionnaire and phthalate concentrations reported in food surveys. Using rating scales, we assessed risk perceptions of chemicals in food and interest in a natural and healthy diet. Higher risk perceptions and higher interest in a natural and healthy diet were associated with higher daily doses of DEHP, BBP, and DEP. No health risk from phthalates in food was identified for the vast majority of the population. Four consumer diet clusters were discerned, with differences in phthalate exposure, risk perceptions, and interest in a natural and healthy diet. This study shows that even consumers who express strong interest in natural food and low acceptance of food chemicals, and who try to make corresponding food choices, are exposed to contaminants such as phthalates.
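A minimal sketch of FFQ-based exposure modeling: daily dose as the sum over foods of frequency × serving size × concentration, scaled by body weight. The food list, concentrations, and consumption distributions are invented; the 50 μg/kg bw/day comparison point is the tolerable daily intake commonly cited from EFSA for DEHP:

```python
import numpy as np

rng = np.random.default_rng(5)

# Hypothetical per-food DEHP concentrations (ug/g) and serving sizes (g),
# loosely in the spirit of values reported in food surveys.
foods = {"dairy": (0.05, 200), "meat": (0.10, 150), "oils_fats": (0.60, 20)}

n_persons = 10_000
bw = rng.normal(70, 12, n_persons).clip(40, 120)        # body weight, kg
dose = np.zeros(n_persons)                               # ug/kg bw/day
for conc, serving in foods.values():
    freq = rng.poisson(1.0, n_persons)                   # servings/day from the FFQ
    dose += freq * serving * conc / bw

TDI = 50.0  # ug/kg bw/day, commonly cited EFSA tolerable daily intake for DEHP
print("median dose:", np.median(dose).round(2), "ug/kg bw/day")
print("% above TDI:", 100 * np.mean(dose > TDI))
```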

10.
There are a number of sources of variability in food consumption patterns and in residue levels of a particular chemical (e.g., pesticide, food additive) in commodities, which lead to an expectedly high level of variability in dietary exposures across a population. This paper focuses on examples of consumption pattern survey data for specific commodities, namely wine and grape juice, and demonstrates how such data might be analyzed in preparation for stochastic analyses of dietary exposure. Data from the NIAAA/NHIS wine consumption survey were subset by gender and age group and, with matched body weight data from the survey database, were used to define empirically based percentile estimates of wine intake (μl wine/kg body weight) for the strata of interest. The data for these two subpopulations were analyzed to estimate 14-day consumption distributional statistics and distributions for only those days on which wine was consumed. The data subsets for all wine-consuming adults and for wine-consuming females ages 18 through 45 were both determined to fit a lognormal distribution (R² = 0.99 for both data sets). Market share data were incorporated into the estimation of chronic exposures to hypothetical chemical residues in imported table wine. As a separate example, treating grape juice consumption data for females ages 18–40 as a simple lognormal distribution resulted in a significant underestimation of intake, and thus exposure, because the actual distribution is a mixture (i.e., multiple subpopulations of grape juice consumers exist in the parent distribution). Thus, deriving dietary intake statistics from food consumption survey data requires careful analysis of the underlying empirical distributions.
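The grape-juice caveat can be reproduced on synthetic data: fitting a single lognormal to a mixture of two consumer subpopulations understates the upper percentiles. All mixture parameters below are invented for illustration:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(6)

# The parent population is a mixture: occasional and heavy consumers.
light = rng.lognormal(np.log(50), 0.5, 8000)     # ul juice/kg bw, illustrative
heavy = rng.lognormal(np.log(400), 0.4, 2000)
intake = np.concatenate([light, heavy])

# Fitting a single lognormal to the mixture...
shape, _, scale = stats.lognorm.fit(intake, floc=0)
fitted = stats.lognorm(shape, scale=scale)

# ...understates the upper tail, and hence exposure, for high consumers.
for q in (0.90, 0.95, 0.99):
    print(f"P{int(q * 100)}: empirical {np.quantile(intake, q):7.1f}  "
          f"single lognormal {fitted.ppf(q):7.1f}")
```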

11.
This article analyzes the debate about data acquisition and assessment in health claims regulation by identifying the underlying controversies over methodological choice. Regulation in the European Union requires scientific substantiation of all health claims (claims about a relationship between consumption of certain food ingredients and positive health effects). Randomized controlled trials (RCTs) are generally considered to provide the highest-quality data for decision making in claims regulation because they allow cause-effect relationships to be established, which European regulatory practice demands for authorization of a claim. This requirement has contributed to a debate about the advantages and limitations of the RCT methodology in nutrition research and regulation. Our analysis identifies five types of tensions that underlie the controversy, with respect to evidence, cognitive values, standards of proof, future lines of research, and expert judgment. We conclude that there is a direct and mutual interaction between methodological decisions in nutrition science and different strategies in health claims regulation. The latter have social and public health consequences because they may affect not only the European market for functional foods, and concomitant consumption patterns, but also the generation of future regulation-relevant evidence in nutrition.

12.
This study illustrates a newly developed methodology, as part of the U.S. EPA ecological risk assessment (ERA) framework, to predict exposure concentrations in a marine environment due to an underwater release of oil and gas. It combines the hydrodynamics of an underwater blowout, weathering algorithms, and multimedia fate and transport modeling to estimate the exposure concentration. Naphthalene and methane are used as surrogate compounds for oil and gas, respectively. Uncertainties in the multimedia input parameters are accounted for in the analysis. The 95th percentile of the exposure concentration (EC95%) is taken as the representative exposure concentration for the risk estimation. A bootstrapping method is utilized to characterize EC95% and its associated uncertainty. Toxicity data for 19 species available in the literature are used to calculate the 5th percentile of the predicted no observed effect concentration (PNEC5%), again employing the bootstrapping method. The risk is characterized by transforming the risk quotient (RQ), the ratio of EC95% to PNEC5%, into a cumulative risk distribution. This article describes a probabilistic basis for the ERA, which is essential from risk management and decision-making viewpoints. Two case studies, an underwater release of an oil and gas mixture and an oil release with no gaseous mixture, are used to show the systematic implementation of the methodology, the elements of ERA, and the probabilistic method in assessing and characterizing the risk.
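A compact sketch of the bootstrapping step: resample to characterize EC95% and PNEC5%, then form the RQ distribution. The exposure and NOEC samples below are synthetic placeholders, not the case-study outputs:

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical model outputs and toxicity data (ug/L naphthalene).
exposure = rng.lognormal(np.log(2.0), 0.8, 500)    # predicted concentrations
noec = rng.lognormal(np.log(120.0), 1.0, 19)       # NOECs for 19 species

def boot_pct(x, q, B=5000):
    """Bootstrap distribution of the q-th percentile of x."""
    idx = rng.integers(0, len(x), size=(B, len(x)))
    return np.percentile(x[idx], q, axis=1)

ec95 = boot_pct(exposure, 95)                      # EC95% and its uncertainty
pnec5 = boot_pct(noec, 5)                          # PNEC5% and its uncertainty

rq = ec95 / pnec5                                  # risk quotient distribution
print("median RQ:", np.median(rq).round(3))
print("P(RQ > 1):", np.mean(rq > 1).round(3))
```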

13.
Aircraft routing and crew pairing problems aim at building the sequences of flight legs operated, respectively, by the airplanes and by the crews of an airline. Given their impact on airlines' operating costs, both have been extensively studied for decades. Our goal is to provide reliable and easy-to-maintain frameworks for both problems at Air France. We propose simple approaches to deal with Air France's current setting. For routing, we introduce an exact compact IP formulation that can be solved to optimality by current MIP solvers in at most a few minutes, even on Air France's largest instances. Regarding crew pairing, we provide a methodology to model the column generation pricing subproblem within a new resource-constrained shortest path framework recently introduced by the first author. This framework, which can be used as a black box, leverages bounds to discard partial solutions and speed up the resolution. The resulting approach solves Air France's largest instances to optimality. Recent literature has focused on integrating the aircraft routing and crew pairing problems. As a side result, we are able to solve large industrial instances of the integrated problem to near optimality by combining the aforementioned algorithms within a simple cut generation method.
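Crew pairing is classically cast as set partitioning over candidate pairings, the columns that a pricing subproblem such as a resource-constrained shortest path would generate. Below is a toy instance of that master problem, with invented legs, pairings, and costs, using the PuLP modeling library:

```python
import pulp

# Flight legs to cover, and candidate pairings (sets of legs) with costs.
legs = ["CDG-JFK", "JFK-CDG", "CDG-NCE", "NCE-CDG"]
pairings = {
    "p1": (["CDG-JFK", "JFK-CDG"], 10.0),
    "p2": (["CDG-NCE", "NCE-CDG"], 4.0),
    "p3": (["CDG-JFK", "JFK-CDG", "CDG-NCE", "NCE-CDG"], 13.0),
    "p4": (["CDG-NCE"], 3.0),
    "p5": (["NCE-CDG"], 3.0),
}

prob = pulp.LpProblem("crew_pairing", pulp.LpMinimize)
x = {p: pulp.LpVariable(p, cat="Binary") for p in pairings}
prob += pulp.lpSum(cost * x[p] for p, (_, cost) in pairings.items())
for leg in legs:  # each leg is flown by exactly one selected pairing
    prob += pulp.lpSum(x[p] for p, (ls, _) in pairings.items() if leg in ls) == 1

prob.solve(pulp.PULP_CBC_CMD(msg=False))
print("chosen pairings:", [p for p in pairings if x[p].varValue == 1])
```

In column generation, the dictionary of pairings would not be enumerated up front; the pricing subproblem would repeatedly add new columns with negative reduced cost until none remain.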

14.
A cancer risk assessment methodology based upon the Armitage–Doll multistage model of cancer is applied to animal bioassay data. The method utilizes the exact time-dependent dose pattern used in a bioassay rather than some single measure of dose such as average dose rate or cumulative dose. The methodology can be used to predict risks from arbitrary exposure patterns including, for example, intermittent exposure and short-term exposure occurring at an arbitrary age. The methodology is illustrated by applying it to a National Cancer Institute bioassay of ethylene dibromide in which dose rates were modified several times during the course of the experiment.
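A sketch of the idea for a two-stage special case under a rare-event approximation, with stage transition rates depending linearly on the exact time-dependent dose pattern; the rates, cell count, and dosing schedule are illustrative assumptions, not the article's fitted values:

```python
import numpy as np
from scipy.integrate import cumulative_trapezoid, trapezoid

# Two-stage Armitage-Doll sketch under a rare-event approximation:
# expected number of fully transformed cells
#   E = N * int_0^T l1(u) [ int_u^T l2(v) dv ] du,   risk ~ 1 - exp(-E),
# with stage transition rates l_i(t) = a_i + b_i * d(t) driven by the
# exact time-dependent dose d(t). All parameter values are illustrative.
a1, b1, a2, b2, N = 1e-7, 4e-7, 1e-7, 2e-9, 1e6

def dose(t):
    # dose rate modified mid-experiment, echoing the NCI bioassay design
    return 20.0 if t < 40.0 else 40.0          # mg/kg/day; t in weeks

T = 104.0
t = np.linspace(0.0, T, 2000)
d = np.array([dose(u) for u in t])
l1, l2 = a1 + b1 * d, a2 + b2 * d

cum2 = cumulative_trapezoid(l2, t, initial=0.0)   # int_0^u l2
inner = cum2[-1] - cum2                           # int_u^T l2
E = N * trapezoid(l1 * inner, t)
print(f"lifetime tumor risk under this dosing pattern: {1 - np.exp(-E):.4f}")
```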

15.
When assessing risks posed by environmental chemical mixtures, whole-mixture approaches are preferred to component approaches. When toxicological data on whole mixtures as they occur in the environment are not available, Environmental Protection Agency guidance states that toxicity data from a mixture considered “sufficiently similar” to the environmental mixture can serve as a surrogate. We propose a novel method to examine whether mixtures are sufficiently similar when exposure data and mixture toxicity study data from at least one representative mixture are available. We define sufficient similarity using equivalence testing methodology that compares the distance between benchmark dose estimates for mixtures in both data-rich and data-poor cases. We construct a “similar mixtures risk indicator” (SMRI), analogous to the hazard index, on sufficiently similar mixtures, linking exposure data with mixtures toxicology data. The methods are illustrated using pyrethroid mixture occurrence data collected in child care centers (CCC) and dose-response data examining acute neurobehavioral effects of pyrethroid mixtures in rats. Our method shows that the mixtures from 90% of the CCCs were sufficiently similar to the dose-response study mixture. Using exposure estimates for a hypothetical child, the 95th percentile of the (weighted) SMRI for these sufficiently similar mixtures was 0.20 (where SMRI < 1 indicates less concern and SMRI > 1 more concern).
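A rough sketch of equivalence testing on benchmark doses: bootstrap the BMD ratio of two mixtures from logistic dose-response fits and ask whether its confidence interval falls inside an equivalence margin; the study data, the margin, and the benchmark response are invented for illustration:

```python
import numpy as np
from scipy import optimize, special

rng = np.random.default_rng(8)

def fit_bmd(doses, n, events, bmr=0.10):
    """BMD from a logistic dose-response fit (extra-risk definition)."""
    def nll(theta):
        a, b = theta
        p = np.clip(special.expit(a + b * doses), 1e-12, 1 - 1e-12)
        return -np.sum(events * np.log(p) + (n - events) * np.log(1 - p))
    a, b = optimize.minimize(nll, [-2.0, 0.1], method="Nelder-Mead").x
    p0 = special.expit(a)
    target = p0 + bmr * (1 - p0)                # extra risk at the BMD
    return (special.logit(target) - a) / b

# Hypothetical study data: reference mixture vs. an environmental mixture.
doses = np.array([0.0, 5.0, 10.0, 20.0, 40.0])
n = np.full(5, 50)
ref_events = np.array([2, 5, 12, 25, 40])
env_events = np.array([1, 6, 14, 23, 42])

# Bootstrap the BMD ratio; call the mixtures "sufficiently similar" if the
# 90% CI of the ratio lies within an equivalence margin of (1/2, 2).
ratios = []
for _ in range(2000):
    rb = rng.binomial(n, ref_events / n)
    eb = rng.binomial(n, env_events / n)
    ratios.append(fit_bmd(doses, n, rb) / fit_bmd(doses, n, eb))
lo, hi = np.percentile(ratios, [5, 95])
print(f"90% CI for BMD ratio: [{lo:.2f}, {hi:.2f}]  "
      f"sufficiently similar: {0.5 < lo and hi < 2.0}")
```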

16.
Methylmercury (Me-Hg) is widely distributed through freshwater and saltwater food chains, and human consumption of fish and shellfish has led to widespread exposure. Both the U.S. EPA Reference Dose (0.3 μg/kg/day) and the FAO/WHO Permissible Tolerable Weekly Intake (3.3 μg/kg/week) are currently based on the prevention of paraesthesia in adults and older children. However, Me-Hg exposure in utero is known to result in a range of developmental neurologic effects, including clinical CNS symptoms and delayed onset of walking. Based on a critical review of developmental toxicity data from human and animal studies, it is concluded that current guidelines for the prevention of paraesthesia are not adequate to address developmental effects. A dose of 0.07 μg/kg/day is suggested as the best estimate of a potential reference dose for developmental effects. Data on nationwide fish consumption rates and Me-Hg levels in fish/seafood, weighted by the proportion of the catch intended for human consumption, are analyzed in a Monte Carlo simulation to derive a probability distribution of background Me-Hg exposure. While various uncertainties in the toxicologic and exposure data limit the precision with which health risk can be estimated, this analysis suggests that at current levels of Me-Hg exposure, a significant fraction of women of childbearing age have exposures above this suggested reference dose.
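The Monte Carlo step has this general shape; the consumption and Me-Hg concentration distributions below are invented placeholders, while the 0.07 μg/kg/day comparison point is the suggested developmental reference dose from the abstract:

```python
import numpy as np

rng = np.random.default_rng(9)

n = 100_000                                     # simulated women of childbearing age
bw = rng.normal(65, 12, n).clip(40, 120)        # body weight, kg

# Hypothetical inputs: daily fish/seafood intake (g/day, with non-consumers)
# and catch-weighted Me-Hg levels in fish (ug/g); both lognormal.
consumer = rng.random(n) < 0.7
fish = np.where(consumer, rng.lognormal(np.log(15), 1.0, n), 0.0)
hg = rng.lognormal(np.log(0.12), 0.7, n)

dose = fish * hg / bw                           # ug Me-Hg / kg body weight / day
RFD_DEV = 0.07                                  # suggested developmental reference dose
print("median dose:", np.median(dose).round(4), "ug/kg/day")
print("% above 0.07 ug/kg/day:", round(100 * np.mean(dose > RFD_DEV), 1))
```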

17.
Trond Rafoss, Risk Analysis, 2003, 23(4):651–661
Pest risk analysis is an emerging field of risk analysis that evaluates the potential risks of the introduction and establishment of plant pests in a new geographic location and then assesses the management options to reduce those potential risks. Development of new and adapted methodology is required to answer questions concerning pest risk analysis of exotic plant pests. This research describes a new method for predicting the potential establishment and spread of a plant pest in new areas, using as a case study Ralstonia solanacearum, a bacterial disease of potato. The method combines current quantitative methodologies, stochastic simulation, and geographic information systems with knowledge of pest biology and environmental data to derive new information about pest establishment potential in a geographical region where the pest has not been introduced. The proposed method extends an existing methodology for matching pest characteristics with environmental conditions by modeling and simulating the dissemination behavior of a pest organism. Issues related to integrating spatial variables into risk analysis models are further discussed in this article.

18.
Twenty-four-hour recall data from the Continuing Survey of Food Intake by Individuals (CSFII) are frequently used to estimate dietary exposure for risk assessment. Food frequency questionnaires are traditional instruments of epidemiological research; however, their application in dietary exposure and risk assessment has been limited. This article presents a probabilistic method of bridging the National Health and Nutrition Examination Survey (NHANES) food frequency data and the CSFII data to estimate longitudinal (usual) intake, using a case study of seafood mercury exposures for two population subgroups (females 16 to 49 years and children 1 to 5 years). Two hundred forty-nine CSFII food codes were mapped into 28 NHANES fish/shellfish categories. FDA and state/local seafood mercury data were used. A uniform distribution of blood-diet ratios with a minimum of 0.66 and a maximum of 1.07 was assumed. A probabilistic assessment was conducted to estimate distributions of individual 30-day average daily fish/shellfish intakes, methyl mercury exposure, and blood levels. The upper-percentile estimates of fish and shellfish intakes based on the 30-day daily averages were lower than those based on two- and three-day daily averages. These results support previous findings that distributions of "usual" intakes based on a small number of consumption days overestimate the upper percentiles. About 10% of the females (16 to 49 years) and children (1 to 5 years) may be exposed to mercury levels above the EPA's RfD. The predicted 75th and 90th percentile blood mercury levels for females in the 16-to-49-year group were similar to those reported by NHANES. The predicted 90th percentile blood mercury levels for children in the 1-to-5-year subgroup were similar to NHANES, and the 75th percentile estimates were slightly above the NHANES values.

19.
Kun Xie, Kaan Ozbay, Hong Yang, Di Yang, Risk Analysis, 2019, 39(6):1342–1357
The widely used empirical Bayes (EB) and full Bayes (FB) methods for before-after safety assessment are sometimes limited by their extensive data needs from additional reference sites. To address this issue, this study proposes a novel before-after safety evaluation methodology based on survival analysis and longitudinal data as an alternative to the EB/FB methods. A Bayesian survival analysis (SARE) model with a random effect term to address unobserved heterogeneity across sites is developed. The proposed survival analysis method is validated through a simulation study before its application. Subsequently, the SARE model is developed in a case study to evaluate the safety effectiveness of a recent red-light-running photo enforcement program in New Jersey. As demonstrated in the simulation and the case study, survival analysis can provide valid estimates using only data from treated sites, so its results are not affected by the selection of defective or insufficient reference sites. In addition, the proposed approach can take into account the censored observations generated by the transition from the before period to the after period, which has not previously been explored in the literature. Using individual crashes as the units of analysis, survival analysis can incorporate longitudinal covariates such as traffic volume and weather variation, and thus can explicitly account for potential temporal heterogeneity.
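A stripped-down, frequentist stand-in for the survival idea (no random site effects or longitudinal covariates): exponential crash inter-arrival times at treated sites, right-censored at the before/after transition and at the study end, with the after-period hazard ratio estimated by maximum likelihood. All rates and dimensions are invented:

```python
import numpy as np
from scipy import optimize

rng = np.random.default_rng(10)
n_sites, treat_day, study_days = 60, 730, 1460
lam_before, lam_after = 1 / 90, 1 / 130        # true crash rates, illustrative

def simulate(lam, window):
    """Crash inter-arrival gaps in one observation window; the last gap is
    right-censored at the window end (the period transition or study end)."""
    t, gaps, ev = 0.0, [], []
    while True:
        g = rng.exponential(1 / lam)
        if t + g >= window:
            gaps.append(window - t); ev.append(0.0)
            return gaps, ev
        gaps.append(g); ev.append(1.0); t += g

times, event, after = [], [], []
for _ in range(n_sites):
    for lam, window, flag in ((lam_before, treat_day, 0.0),
                              (lam_after, study_days - treat_day, 1.0)):
        g, e = simulate(lam, window)
        times += g; event += e; after += [flag] * len(g)
t_, e_, a_ = map(np.array, (times, event, after))

# Exponential proportional-hazards log-likelihood with right censoring:
# hazard h = exp(b0 + b1 * after); exp(b1) is the after/before hazard ratio.
def nll(beta):
    h = np.exp(beta[0] + beta[1] * a_)
    return -(e_ * np.log(h) - h * t_).sum()

b0, b1 = optimize.minimize(nll, [0.0, 0.0]).x
print(f"estimated after/before crash hazard ratio: {np.exp(b1):.2f} "
      f"(true {lam_after / lam_before:.2f})")
```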

20.
The appearance of measurement error in exposure and risk factor data potentially affects any inferences regarding variability and uncertainty because the distribution representing the observed data set deviates from the distribution that represents an error-free data set. A methodology for improving the characterization of variability and uncertainty with known measurement errors in data is demonstrated in this article based on an observed data set, known measurement error, and a measurement-error model. A practical method for constructing an error-free data set is presented and a numerical method based upon bootstrap pairs, incorporating two-dimensional Monte Carlo simulation, is introduced to address uncertainty arising from measurement error in selected statistics. When measurement error is a large source of uncertainty, substantial differences between the distribution representing variability of the observed data set and the distribution representing variability of the error-free data set will occur. Furthermore, the shape and range of the probability bands for uncertainty differ between the observed and error-free data set. Failure to separately characterize contributions from random sampling error and measurement error will lead to bias in the variability and uncertainty estimates. However, a key finding is that total uncertainty in mean can be properly quantified even if measurement and random sampling errors cannot be separated. An empirical case study is used to illustrate the application of the methodology.
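One simple way to construct an error-free data set, assuming additive independent error with known variance, is moment-matching shrinkage toward the mean, with bootstrap pairs supplying the uncertainty in selected statistics. Note that the correction leaves the mean itself unchanged, echoing the finding that total uncertainty in the mean can be quantified even when the error sources cannot be separated. The setup below is illustrative:

```python
import numpy as np

rng = np.random.default_rng(11)

# Observed data = true values + additive, independent measurement error with
# known standard deviation (setup and values are illustrative).
SD_ME = 0.8
observed = rng.normal(5.0, 1.5, 200) + rng.normal(0.0, SD_ME, 200)

def error_free(x, sd_me):
    """Shrink observations toward the mean so the reconstructed data set has
    the method-of-moments error-free variance, var(x) - sd_me**2."""
    v_obs = x.var(ddof=1)
    f = np.sqrt(max(v_obs - sd_me**2, 0.0) / v_obs)
    return x.mean() + f * (x - x.mean())

# Bootstrap pairs: resample the observed data and re-apply the correction,
# so the intervals reflect random sampling error plus the error adjustment.
boot = np.array([
    [ef.mean(), ef.std(ddof=1)]
    for ef in (error_free(rng.choice(observed, observed.size), SD_ME)
               for _ in range(5000))
])
mean_ci, sd_ci = np.percentile(boot, [2.5, 97.5], axis=0).T
print("95% uncertainty interval for the mean:", mean_ci.round(2))
print("95% uncertainty interval for the SD: ", sd_ci.round(2))
```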
