Similar Articles

20 similar records found.
1.
This paper presents a general model for exposure to homegrown foods that is used with a Monte Carlo analysis to determine the relative contributions of variability (Type A uncertainty) and true uncertainty (Type B uncertainty) to the overall variance in prediction of the dose-to-concentration ratio. Although classification of exposure inputs as uncertain or variable is somewhat subjective, food consumption rates and exposure duration are judged to have a predicted variance that is dominated by variability among individuals by age, income, culture, and geographical region, whereas biotransfer factors and partition factors are inputs that, to a large extent, involve uncertainty. Using ingestion of fruits, vegetables, grains, dairy products, meat, and soils assumed to be contaminated by hexachlorobenzene (HCB) and benzo(a)pyrene (BaP) as case studies, a Monte Carlo analysis is used to explore the relative contribution of uncertainty and variability to overall variance in the estimated distribution of potential dose within the population that consumes homegrown foods. It is found that, when soil concentrations are specified, variance in dose-to-concentration ratios for HCB is equally attributable to uncertainty and variability, whereas for BaP, variance in these ratios is dominated by true uncertainty.
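The Type A / Type B decomposition described above can be sketched with a two-dimensional (nested) Monte Carlo, where the outer loop samples uncertain inputs and the inner loop samples variable ones. All distributions and parameter values below are hypothetical stand-ins, not the paper's fitted inputs.

```python
import numpy as np

rng = np.random.default_rng(42)

n_unc, n_var = 200, 500          # outer (uncertainty) x inner (variability)

inner_means = np.empty(n_unc)
ratios = np.empty((n_unc, n_var))
for i in range(n_unc):
    btf = rng.lognormal(0.0, 0.8)                # biotransfer factor: uncertain
    cons = rng.lognormal(-1.0, 0.5, size=n_var)  # consumption rate: variable
    ratios[i] = cons * btf                       # simplified dose:concentration
    inner_means[i] = ratios[i].mean()

# Law of total variance: total = E[Var | uncertain] + Var[E | uncertain]
total_var = ratios.var()
var_var = ratios.var(axis=1).mean()   # contribution of variability
unc_var = inner_means.var()           # contribution of uncertainty
print(f"variability share: {var_var / total_var:.2f}")
print(f"uncertainty share: {unc_var / total_var:.2f}")
```

The two shares sum to the total variance by the law of total variance, which is what lets such an analysis attribute overall variance to one source or the other.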

2.
Jan F. Van Impe. Risk Analysis, 2011, 31(8): 1295-1307
The aim of quantitative microbiological risk assessment is to estimate the risk of illness caused by the presence of a pathogen in a food type, and to study the impact of interventions. Because of inherent variability and uncertainty, risk assessments are generally conducted stochastically, and where possible it is advised to characterize variability separately from uncertainty. Sensitivity analysis indicates to which of the input variables the outcome of a quantitative microbiological risk assessment is most sensitive. Although a number of methods exist to apply sensitivity analysis to a risk assessment with probabilistic input variables (such as contamination, storage temperature, storage duration, etc.), it is challenging to perform sensitivity analysis when a risk assessment includes a separate characterization of variability and uncertainty of input variables. A procedure is proposed that focuses on the relation between risk estimates obtained by Monte Carlo simulation and the location of pseudo-randomly sampled input variables within the uncertainty and variability distributions. Within this procedure, two methods, an ANOVA-like model and Sobol sensitivity indices, are used to obtain and compare the impact of variability and of uncertainty of all input variables, and of model uncertainty and scenario uncertainty. As a case study, this methodology is applied to a risk assessment to estimate the risk of contracting listeriosis due to consumption of deli meats.
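First-order Sobol indices, one of the two methods named above, can be estimated with a Saltelli-style pick-freeze scheme. The toy dose model and input ranges below are invented for illustration; they are not the listeriosis model of the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy risk model: output grows with contamination, storage temperature,
# and storage time. Functional form and ranges are hypothetical.
def model(x):
    contam, temp, time = x[:, 0], x[:, 1], x[:, 2]
    return contam * np.exp(0.1 * temp) * time

n, d = 100_000, 3
lo, hi = [0.0, 2.0, 0.5], [1.0, 10.0, 2.0]
A = rng.uniform(lo, hi, size=(n, d))
B = rng.uniform(lo, hi, size=(n, d))

fA, fB = model(A), model(B)
var = np.var(np.concatenate([fA, fB]))

# First-order Sobol index per input (Saltelli-style estimator):
# S_i = E[f(B) * (f(A with column i taken from B) - f(A))] / Var(f)
S = []
for i in range(d):
    ABi = A.copy()
    ABi[:, i] = B[:, i]
    S.append(float(np.mean(fB * (model(ABi) - fA)) / var))
print(dict(zip(["contamination", "temperature", "time"], np.round(S, 3))))
```

For this multiplicative model the first-order indices sum to a little under one; the remainder is interaction effects, which total-order indices would capture.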

3.
Information on exposure factors used in quantitative risk assessments has previously been compiled and reported for U.S. and European populations. However, due to the advancement of science and knowledge, these reports are in continuous need of updating with new data. Equally important is the change over time of many exposure factors related to both physiological characteristics and human behavior. Body weight, skin surface, time use, and dietary habits are some of the most obvious examples covered here. A wealth of data is available from literature not primarily gathered for the purpose of risk assessment. Here we review a number of key exposure factors and compare these factors between northern Europe, here represented by Sweden, and the United States. Many previous compilations of exposure factor data focus on interindividual variability and variability between sexes and age groups, while uncertainty is mainly dealt with in a qualitative way. In this article, variability is assessed along with uncertainty. As estimates of central tendency and interindividual variability, the mean, standard deviation, skewness, kurtosis, and multiple percentiles were calculated, while uncertainty was characterized using 95% confidence intervals for these parameters. The presented statistics are appropriate for use in deterministic analyses using point estimates for each input parameter as well as in probabilistic assessments.
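The pairing of point statistics with 95% confidence intervals described above can be sketched with a simple bootstrap. The body-weight sample here is synthetic, standing in for survey data such as the Swedish and U.S. data sets discussed.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic body-weight sample (kg); a stand-in for real survey data.
bw = rng.lognormal(np.log(75.0), 0.2, size=400)

def summarize(x):
    return {"mean": x.mean(), "sd": x.std(ddof=1),
            "p50": np.percentile(x, 50), "p95": np.percentile(x, 95)}

point = summarize(bw)   # central tendency and variability statistics

# Uncertainty in each statistic: 95% bootstrap confidence intervals.
boot = {k: [] for k in point}
for _ in range(2000):
    res = summarize(rng.choice(bw, size=bw.size, replace=True))
    for k, v in res.items():
        boot[k].append(v)
ci = {k: np.percentile(v, [2.5, 97.5]) for k, v in boot.items()}

for k in point:
    print(f"{k}: {point[k]:.1f} (95% CI {ci[k][0]:.1f}-{ci[k][1]:.1f})")
```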

4.
We describe a one-dimensional probabilistic model of the role of domestic food handling behaviors on salmonellosis risk associated with the consumption of eggs and egg-containing foods. Six categories of egg-containing foods were defined based on the amount of egg contained in the food, whether eggs are pooled, and the degree of cooking practiced by consumers. We used bootstrap simulation to quantify uncertainty in risk estimates due to sampling error, and sensitivity analysis to identify key sources of variability and uncertainty in the model. Because of typical model characteristics such as nonlinearity, interaction between inputs, thresholds, and saturation points, Sobol's method, a novel sensitivity analysis approach, was used to identify key sources of variability. Based on the mean probability of illness, examples of foods from the food categories ranked from most to least risk of illness were: (1) home-made salad dressings/ice cream; (2) fried eggs/boiled eggs; (3) omelettes; and (4) baked foods/breads. For food categories that may include uncooked eggs (e.g., home-made salad dressings/ice cream), consumer handling conditions such as storage time and temperature after food preparation were the key sources of variability. In contrast, for food categories associated with undercooked eggs (e.g., fried/soft-boiled eggs), the initial level of Salmonella contamination and the log10 reduction due to cooking were the key sources of variability. Important sources of uncertainty varied with both the risk percentile and the food category under consideration. This work adds to previous risk assessments focused on egg production and storage practices, and provides a science-based approach to inform consumer risk communications regarding safe egg handling practices.

5.
Biomagnification of organochlorine and other persistent organic contaminants by higher trophic level organisms represents one of the most significant sources of uncertainty and variability in evaluating potential risks associated with disposal of dredged materials. While it is important to distinguish between population variability (e.g., true population heterogeneity in fish weight and lipid content) and uncertainty (e.g., measurement error), they can be operationally difficult to define separately in probabilistic estimates of human health and ecological risk. We propose a disaggregation of uncertain and variable parameters based on: (1) availability of supporting data; (2) the specific management and regulatory context (in this case, the U.S. Army Corps of Engineers/U.S. Environmental Protection Agency tiered approach to dredged material management); and (3) professional judgment and experience in conducting probabilistic risk assessments. We describe and quantitatively evaluate several sources of uncertainty and variability in estimating risk to human health from trophic transfer of polychlorinated biphenyls (PCBs), using a case study of sediments obtained from the New York-New Jersey Harbor and being evaluated for disposal at an open water off-shore disposal site within the northeast region. The estimates of PCB concentrations in fish and dietary doses of PCBs to humans ingesting fish are expressed as distributions of values, of which the arithmetic mean or mode represents a particular fractile. The distribution of risk values is obtained using a food chain biomagnification model developed by Gobas, by specifying distributions for input parameters disaggregated to represent either uncertainty or variability. Only those sources of uncertainty that could be quantified were included in the analysis.
Results for several different two-dimensional Latin Hypercube analyses are provided to evaluate the influence of the uncertain versus variable disaggregation of model parameters. The analysis suggests that variability in human exposure parameters is greater than the uncertainty bounds on any particular fractile, given the described assumptions.
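A Latin Hypercube sample like the ones used in the analyses above can be built with plain NumPy: one draw per equal-probability stratum in each dimension, with columns shuffled independently. The two input distributions mapped through the inverse CDFs are hypothetical examples, not the study's fitted inputs.

```python
import numpy as np
from scipy.stats import lognorm, norm

rng = np.random.default_rng(7)

def latin_hypercube(n, d, rng):
    """One uniform draw per equal-probability stratum in each of d
    dimensions, with independently shuffled columns."""
    u = (rng.random((n, d)) + np.arange(n)[:, None]) / n
    for j in range(d):
        rng.shuffle(u[:, j])
    return u

n = 100
u = latin_hypercube(n, 2, rng)

# Map stratified uniforms through inverse CDFs of two hypothetical inputs:
# fish consumption (g/day, lognormal) and fish lipid fraction (%, normal).
consumption = lognorm.ppf(u[:, 0], s=0.5, scale=20.0)
lipid = norm.ppf(u[:, 1], loc=5.0, scale=1.0)
dose = consumption * lipid * 1e-3      # toy dose metric
print(f"mean toy dose: {dose.mean():.3f}")
```

Compared with plain Monte Carlo, the stratification guarantees coverage of the tails of each marginal distribution at small sample sizes, which is why Latin Hypercube designs are common in risk simulations.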

6.
The human toxicity potential, a weighting scheme used to evaluate toxic emissions for life cycle assessment and toxics release inventories, is based on potential dose calculations and toxicity factors. This paper evaluates the variance in potential dose calculations that can be attributed to the uncertainty in chemical-specific input parameters as well as the variability in exposure factors and landscape parameters. Knowledge of the uncertainty allows us to assess the robustness of a decision based on the toxicity potential; knowledge of the sources of uncertainty allows us to focus our resources if we want to reduce the uncertainty. The potential dose of 236 chemicals was assessed. The chemicals were grouped by dominant exposure route, and a Monte Carlo analysis was conducted for one representative chemical in each group. The variance in potential dose typically spans one to two orders of magnitude. For comparison, the point estimates of potential dose for the 236 chemicals span ten orders of magnitude. Most of the variance in the potential dose is due to chemical-specific input parameters, especially half-lives, although exposure factors such as fish intake and the source of drinking water can be important for chemicals whose dominant exposure is through indirect routes. Landscape characteristics are generally of minor importance.

7.
Because of their mouthing behaviors, children have a higher potential for exposure to available chemicals through the nondietary ingestion route; thus, the frequency of hand-to-mouth activity is an important variable for exposure assessments. Such data are limited and difficult to collect. Few published studies report such information, and the studies that have been conducted used different data collection approaches (e.g., videography versus real-time observation), data analysis and reporting methods, ages of children, locations, and even definitions of "mouthing." For this article, hand-to-mouth frequency data were gathered from 9 available studies representing 429 subjects and more than 2,000 hours of behavior observation. A meta-analysis was conducted to study differences in hand-to-mouth frequency based on study, age group, gender, and location (indoor vs. outdoor), to fit variability and uncertainty distributions that can be used in probabilistic exposure assessments, and to identify any data gaps. Results of this analysis indicate that age and location are important for hand-to-mouth frequency, but study and gender are not. As age increases, both indoor and outdoor hand-to-mouth frequencies decrease. Hand-to-mouth behavior is significantly greater indoors than outdoors. For both indoor and outdoor hand-to-mouth frequencies, interpersonal and intrapersonal variability are approximately 60% and 30%, respectively. The variance among different studies is much larger than the differences among their means, indicating that studies with different methodologies have similar central values. Weibull distributions best fit the observed data for the different variables considered and are presented in this article by study, age group, and location. Average indoor hand-to-mouth frequency ranged from 6.7 to 28.0 contacts/hour, with the lowest value corresponding to the 6 to <11 year olds and the highest value corresponding to the 3 to <6 month olds.
Average outdoor hand-to-mouth frequency ranged from 2.9 to 14.5 contacts/hour, with the lowest value corresponding to the 6 to <11 year olds and the highest value corresponding to the 6 to <12 month olds. The analysis highlights the need for additional hand-to-mouth data for the <3 months, 3 to <6 months, and 3 to <6 year age groups using standardized collection and analysis because of lack of data or high uncertainty in available data. This is the first publication to report Weibull distributions as the best fitting distribution for hand-to-mouth frequency; using the best fitting exposure factor distribution will help improve estimates of exposure. The analyses also represent a first comprehensive effort to fit hand-to-mouth frequency variability and uncertainty distributions by indoor/outdoor location and by age groups, using the new standard set of age groups recommended by the U.S. Environmental Protection Agency for assessing childhood exposures. Thus, the data presented in this article can be used to update the U.S. EPA's Child-Specific Exposure Factors Handbook and to improve estimates of nondietary ingestion in probabilistic exposure modeling.
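Fitting a Weibull distribution to contact-frequency data, as done above, might look like the following with SciPy. The frequencies are simulated here, not the study's observations, and the shape/scale values are arbitrary.

```python
import numpy as np
from scipy.stats import weibull_min

rng = np.random.default_rng(3)

# Simulated indoor hand-to-mouth frequencies (contacts/hour) for one age
# group; a stand-in for the observational data described above.
freq = weibull_min.rvs(c=1.5, scale=15.0, size=300, random_state=rng)

# Fit a Weibull with location fixed at zero (frequencies are nonnegative).
shape, loc, scale = weibull_min.fit(freq, floc=0)
print(f"shape={shape:.2f}, scale={scale:.2f}")

# The fitted distribution can then feed a probabilistic exposure model.
simulated = weibull_min.rvs(shape, loc=loc, scale=scale, size=10_000,
                            random_state=rng)
print(f"simulated mean: {simulated.mean():.1f} contacts/hour")
```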

8.
Peanut allergy is a public health concern, owing to its high prevalence in France and the severity of the reactions. Despite peanut-containing product avoidance diets, a risk may exist due to the adventitious presence of peanut allergens in a wide range of food products. Peanut is not mentioned in their ingredients list, but precautionary labeling is often present. A method of quantifying the risk of allergic reactions following the consumption of such products is developed, taking the example of peanut in chocolate tablets. The occurrence of adventitious peanut proteins in chocolate and the dose-response relationship are estimated with a Bayesian approach using available published data. The consumption pattern is described by the French individual consumption survey INCA2. Risk simulations are performed using second-order Monte Carlo simulations, which separately propagate the variability and uncertainty of the model input variables. Peanut allergens occur in approximately 36% of the chocolates, leading to a mean exposure level of 0.2 mg of peanut proteins per eating occasion. The estimated risk of reaction averages 0.57% per eating occasion for peanut-allergic adults. The 95% range of the risk lies between 0 and 3.61%, which illustrates the risk variability. The uncertainty, represented by 95% credible intervals, is concentrated around these risk estimates. Results for children are similar. The conclusion is that adventitious peanut allergens induce a risk of reaction for part of the French peanut-allergic population. The method developed can be generalized to assess the risk due to the consumption of any foodstuff potentially contaminated by allergens.

9.
This article presents a general model for estimating population heterogeneity and "lack of knowledge" uncertainty in methylmercury (MeHg) exposure assessments using two-dimensional Monte Carlo analysis. Using data from fish-consuming populations in Bangladesh, Brazil, Sweden, and the United Kingdom, predictive model estimates of dietary MeHg exposures were compared against those derived from biomarkers (i.e., [Hg]hair and [Hg]blood). By disaggregating parameter uncertainty into components (i.e., population heterogeneity, measurement error, recall error, and sampling error), estimates were obtained of the contribution of each component to the overall uncertainty. Steady-state diet:hair and diet:blood MeHg exposure ratios were estimated for each population and were used to develop distributions useful for conducting biomarker-based probabilistic assessments of MeHg exposure. The 5th and 95th percentile modeled MeHg exposure estimates around the mean population exposure from each of the four study populations are presented to demonstrate lack-of-knowledge uncertainty about a best estimate for a true mean. Results from a U.K. study population showed that a predictive dietary model resulted in a 74% lower lack-of-knowledge uncertainty around a central mean estimate relative to a hair biomarker model, and in a 31% lower lack-of-knowledge uncertainty relative to a blood biomarker model. Similar results were obtained for the Brazil and Bangladesh populations. Such analyses, used here to evaluate alternative models of dietary MeHg exposure, can be used to refine exposure instruments, improve information used in site management and remediation decision making, and identify sources of uncertainty in risk estimates.

10.
Human health risk assessments use point values to develop risk estimates and thus impart a deterministic character to risk, which, by definition, is a probabilistic phenomenon. The risk estimates are calculated based on individuals and then, using uncertainty factors (UFs), are extrapolated to the population, which is characterized by variability. Regulatory agencies have recommended the quantification of the impact of variability in risk assessments through the application of probabilistic methods. In the present study, a framework that deals with the quantitative analysis of uncertainty (U) and variability (V) in target tissue dose in the population was developed by applying probabilistic analysis to physiologically-based toxicokinetic models. The mechanistic parameters that determine kinetics were described with probability density functions (PDFs). Since each PDF depicts the frequency of occurrence of all expected values of each parameter in the population, the combined effects of multiple sources of U/V were accounted for in the estimated distribution of tissue dose in the population, and a unified (adult and child) intraspecies toxicokinetic uncertainty factor UFH-TK was determined. The results show that the proposed framework accounts effectively for U/V in population toxicokinetics. The ratio of the 95th percentile to the 50th percentile of the annual average concentration of the chemical at the target tissue or organ (i.e., the UFH-TK) varies with age. The ratio is equivalent to a unified intraspecies toxicokinetic UF, and it is one of the UFs by which the NOAEL can be divided to obtain the RfC/RfD. The 10-fold intraspecies UF is intended to account for uncertainty and variability in toxicokinetics (3.2x) and toxicodynamics (3.2x). This article deals exclusively with the toxicokinetic component of the UF.
The framework provides an alternative to the default methodology and is advantageous in that the evaluation of toxicokinetic variability is based on the distribution of the effective target tissue dose, rather than the applied dose. It allows for the replacement of the default adult and child intraspecies UF with toxicokinetic data-derived values and provides accurate chemical-specific estimates of their magnitude. It shows that proper application of probability and toxicokinetic theories can reduce uncertainties when establishing exposure limits for specific compounds and provide better assurance that established limits are adequately protective. It contributes to the development of a probabilistic noncancer risk assessment framework and will ultimately lead to the unification of cancer and noncancer risk assessment methodologies.
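The UFH-TK defined above, the ratio of the 95th to the 50th percentile of target tissue dose, can be computed from any simulated population dose distribution. The lognormal distribution and its parameters below are assumptions for illustration; a real analysis would use the tissue-dose distribution produced by the PBTK model.

```python
import numpy as np

rng = np.random.default_rng(5)

# Hypothetical population distribution of annual-average target-tissue
# concentration (arbitrary units), e.g. as produced by a PBTK model run
# over sampled physiological parameters; lognormal is an assumption here.
tissue_dose = rng.lognormal(0.0, 0.4, size=50_000)

# Data-derived intraspecies toxicokinetic UF: 95th/50th percentile ratio.
p50, p95 = np.percentile(tissue_dose, [50, 95])
uf_tk = p95 / p50
print(f"UF_H-TK = {uf_tk:.2f}")
```

For a lognormal dose distribution this ratio is exp(1.645 * sigma), so the resulting data-derived UF can be directly compared against the default 3.2-fold toxicokinetic half of the 10-fold intraspecies UF.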

11.
Risks from exposure to contaminated land are often assessed with the aid of mathematical models. The current probabilistic approach is a considerable improvement on previous deterministic risk assessment practices, in that it attempts to characterize uncertainty and variability. However, some inputs continue to be assigned as precise numbers, while others are characterized as precise probability distributions. Such precision is hard to justify, and we show in this article how rounding errors and distribution assumptions can affect an exposure assessment. The outcomes of traditional deterministic point estimates and Monte Carlo simulations were compared to probability bounds analyses. Assigning all scalars as imprecise numbers (intervals prescribed by significant digits) added uncertainty to the deterministic point estimate of about one order of magnitude. Similarly, representing probability distributions as probability boxes added several orders of magnitude to the uncertainty of the probabilistic estimate. This indicates that the size of the uncertainty in such assessments is actually much greater than currently reported. The article suggests that full disclosure of the uncertainty may facilitate decision making by opening up a negotiation window. In the risk analysis process, it is also an ethical obligation to clarify the boundary between the scientific and social domains.
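The "intervals prescribed by significant digits" idea above can be propagated with elementary interval arithmetic. The intake equation and all point values below are illustrative, not the article's case study.

```python
# Elementary interval arithmetic for significant-digit imprecision.

def interval(x, decimals):
    """Read a reported value as the interval implied by its last digit."""
    half = 0.5 * 10.0 ** (-decimals)
    return (x - half, x + half)

def imul(a, b):
    products = (a[0] * b[0], a[0] * b[1], a[1] * b[0], a[1] * b[1])
    return (min(products), max(products))

def idiv(a, b):
    return imul(a, (1.0 / b[1], 1.0 / b[0]))   # assumes b strictly positive

# Hypothetical inputs: soil concentration (mg/kg), soil ingestion rate
# (kg/day), body weight (kg), each read to its reported precision.
C = interval(2.3, 1)
IR = interval(0.0001, 4)
BW = interval(70.0, 1)

dose = idiv(imul(C, IR), BW)      # mg/(kg*day)
print(f"dose interval: [{dose[0]:.2e}, {dose[1]:.2e}]")
```

Even with only three rounded scalars, the bounds span roughly a factor of three around the point estimate, which is the effect the article quantifies at full scale.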

12.
Probabilistic seismic risk analysis is a well-established method in the insurance industry for modeling portfolio losses from earthquake events. In this context, precise exposure locations are often unknown. However, location uncertainty has so far received little research attention. In this article, we propose a novel framework for the treatment of location uncertainty. As a case study, a large number of synthetic portfolios resembling typical real-world cases were created. We investigate the effect of portfolio characteristics such as value distribution, portfolio size, or the proportion of risk items with unknown coordinates on the variability of loss frequency estimations. The results indicate that, due to loss aggregation effects and spatial hazard variability, location uncertainty in isolation and in conjunction with ground motion uncertainty can induce significant variability in probabilistic loss results, especially for portfolios with a small number of risks. After quantifying its effect, we conclude that location uncertainty should not be neglected when assessing probabilistic seismic risk, but should be treated stochastically, and the resulting variability should be visualized and interpreted carefully.

13.
Variability and Uncertainty Meet Risk Management and Risk Communication
In the past decade, the use of probabilistic risk analysis techniques to quantitatively address variability and uncertainty in risks increased in popularity, as recommended in the National Research Council's 1994 report Science and Judgment in Risk Assessment. Under the 1996 Food Quality Protection Act, for example, the U.S. EPA supported the development of tools that produce distributions of risk demonstrating the variability and/or uncertainty in the results. This paradigm shift away from the use of point estimates creates new challenges for risk managers, who now struggle with decisions about how to use distributions in decision making. The challenges for risk communication, however, have only been minimally explored. This presentation uses case studies of variability in the risks of dying on the ground from a crashing airplane and from the deployment of motor vehicle airbags to demonstrate how better characterization of variability and uncertainty in the risk assessment leads to better risk communication. Analogies to food safety and environmental risks are also discussed. This presentation demonstrates that probabilistic risk assessment has an impact on both risk management and risk communication, and highlights remaining research issues associated with using improved sensitivity and uncertainty analyses in risk assessment.

14.
Regulatory agencies often perform microbial risk assessments to evaluate the change in the number of human illnesses as the result of a new policy that reduces the level of contamination in the food supply. These agencies generally have regulatory authority over the production and retail sectors of the farm-to-table continuum. Any predicted change in contamination that results from new policy that regulates production practices occurs many steps prior to consumption of the product. This study proposes a framework for conducting microbial food-safety risk assessments; this framework can be used to quantitatively assess the annual effects of national regulatory policies. Advantages of the framework are that estimates of human illnesses are consistent with national disease surveillance data (which are usually summarized on an annual basis) and some of the modeling steps that occur between production and consumption can be collapsed or eliminated. The framework leads to probabilistic models that include uncertainty and variability in critical input parameters; these models can be solved using a number of different Bayesian methods. The Bayesian synthesis method performs well for this application and generates posterior distributions of parameters that are relevant to assessing the effect of implementing a new policy. An example, based on Campylobacter and chicken, estimates the annual number of illnesses avoided by a hypothetical policy; this output could be used to assess the economic benefits of a new policy. Empirical validation of the policy effect is also examined by estimating the annual change in the numbers of illnesses observed via disease surveillance systems.

15.
Pesticide risk assessment for food products involves combining information from consumption and concentration data sets to estimate a distribution for the pesticide intake in a human population. Using this distribution, one can obtain probabilities of individuals exceeding specified levels of pesticide intake. In this article, we present a probabilistic, Bayesian approach to modeling the daily consumption of the pesticide Iprodione through multiple food products. Modeling data on food consumption and pesticide concentration poses a variety of problems, such as the large proportions of consumptions and concentrations that are recorded as zero, and correlation between the consumptions of different foods. We consider daily food consumption data from the Netherlands National Food Consumption Survey and concentration data collected by the Netherlands Ministry of Agriculture. We develop a multivariate latent-Gaussian model for the consumption data that allows for correlated intakes between products. For the concentration data, we propose a univariate latent-t model. We then combine predicted consumptions and concentrations from these models to obtain a distribution for individual daily Iprodione exposure. The latent-variable models allow for both skewness and large numbers of zeros in the consumption and concentration data. The use of a probabilistic approach is intended to yield more robust estimates of high percentiles of the exposure distribution than an empirical approach. Bayesian inference is used to facilitate the treatment of data with a complex structure.
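The zero-heavy structure of consumption and concentration data described above can be illustrated with a much simpler zero-inflated simulation than the latent-Gaussian/latent-t models of the paper. All probabilities and distribution parameters below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(11)

# A product is consumed on a given day with some probability, and a
# consumed product carries residue with some probability; the nonzero
# amounts and concentrations are lognormal. Values are hypothetical.
n_days = 100_000
p_consume, p_residue = 0.3, 0.2

eats = rng.random(n_days) < p_consume
amount = np.where(eats, rng.lognormal(np.log(150.0), 0.4, n_days), 0.0)  # g
contaminated = rng.random(n_days) < p_residue
conc = np.where(contaminated,
                rng.lognormal(np.log(0.05), 0.8, n_days), 0.0)           # mg/kg

intake = amount * conc / 1000.0    # mg/day
p_zero = float(np.mean(intake == 0.0))
p97_5 = float(np.percentile(intake, 97.5))
print(f"zero-intake days: {p_zero:.2f}")
print(f"97.5th percentile intake: {p97_5:.4f} mg/day")
```

Because most days contribute zero intake, the median is zero and all of the risk-relevant information sits in the upper percentiles, which is why the paper targets robust estimation of high percentiles.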

16.
Biomarkers such as DNA adducts have significant potential to improve quantitative risk assessment by characterizing individual differences in metabolism of genotoxins and DNA repair and accounting for some of the factors that could affect interindividual variation in cancer risk. Inherent uncertainty in laboratory measurements and within-person variability of DNA adduct levels over time are putatively unrelated to cancer risk and should be subtracted from observed variation to better estimate interindividual variability of response to carcinogen exposure. A total of 41 volunteers, both smokers and nonsmokers, were asked to provide a peripheral blood sample every 3 weeks for several months in order to specifically assess intraindividual variability of polycyclic aromatic hydrocarbon (PAH)-DNA adduct levels. The intraindividual variance in PAH-DNA adduct levels, together with measurement uncertainty (laboratory variability and unaccounted-for differences in exposure), constituted roughly 30% of the overall variance. An estimated 70% of the total variance was contributed by interindividual variability and is probably representative of the true biologic variability of response to carcinogenic exposure in lymphocytes. The estimated interindividual variability in DNA damage after subtracting intraindividual variability and measurement uncertainty was 24-fold. Interindividual variance was higher (52-fold) in persons who constitutively lack the Glutathione S-Transferase M1 (GSTM1) gene, which is important in the detoxification pathway of PAH. Risk assessment models that do not consider the variability of susceptibility to DNA damage following carcinogen exposure may underestimate risks to the general population, especially for those people who are most vulnerable.

17.
A Latin Hypercube probabilistic risk assessment methodology was employed in the assessment of health risks associated with exposures to contaminated sediment and biota in an estuary in the Tidewater region of Virginia. The primary contaminants were polychlorinated biphenyls (PCBs), polychlorinated terphenyls (PCTs), polynuclear aromatic hydrocarbons (PAHs), and metals released into the estuary from a storm sewer system. The exposure pathways associated with the highest contaminant intake and risks were dermal contact with contaminated sediment and ingestion of contaminated aquatic and terrestrial biota from the contaminated area. As expected, all of the output probability distributions of risk were highly skewed, and the ratios of the expected value (mean) to median risk estimates ranged from 1.4 to 14.8 for the various exposed populations. The 99th percentile risk estimates were as much as two orders of magnitude above the mean risk estimates. For the sediment exposure pathways, the stability of the median risk estimates was found to be much greater than the stability of the expected value risk estimates. The interrun variability in the median risk estimate was found to be +/-1.9% at 3000 iterations. The interrun stability of the mean risk estimates was found to be approximately equal to that of the 95th percentile estimates at any number of iterations. The variation in neither contaminant concentrations nor any other single input variable contributed disproportionately to the overall simulation variance. The inclusion or exclusion of spatial correlations among contaminant concentrations in the simulation model did not significantly affect either the magnitude or the variance of the simulation risk estimates for sediment exposures.

18.
A probabilistic and interdisciplinary risk-benefit assessment (RBA) model integrating microbiological, nutritional, and chemical components was developed for infant milk, with the objective of predicting the health impact of different scenarios of consumption. Infant feeding is a particular concern in RBA, as breast milk and powdered infant formula have both been associated with risks and benefits related to chemicals, bacteria, and nutrients; hence, the model considers these three facets. Cronobacter sakazakii, dioxin-like polychlorinated biphenyls (dl-PCB), and docosahexaenoic acid (DHA) were the three risk/benefit factors selected as key issues in microbiology, chemistry, and nutrition, respectively. The model was probabilistic, with variability and uncertainty separated using a second-order Monte Carlo simulation process. In this study, advantages and limitations of undertaking probabilistic and interdisciplinary RBA are discussed. In particular, the probabilistic technique was found to be powerful in dealing with missing data and in translating assumptions into quantitative inputs while taking uncertainty into account. In addition, separation of variability and uncertainty strengthened the interpretation of the model outputs by enabling better consideration and distinction of natural heterogeneity from lack of knowledge. Interdisciplinary RBA is necessary to give more structured conclusions and avoid contradictory messages to policymakers and consumers, leading to more decisive food recommendations. This assessment provides a conceptual development of the RBA methodology and a robust basis on which to build.

19.
Human variability is a very important factor considered in human health risk assessment for protecting sensitive populations from chemical exposure. Traditionally, to account for this variability, an interhuman uncertainty factor is applied to lower the exposure limit. However, using a fixed uncertainty factor rather than probabilistically accounting for human variability cannot readily support the probabilistic risk assessment advocated by a number of researchers; new methods are needed to probabilistically quantify human population variability. We propose a Bayesian hierarchical model to quantify variability among different populations. This approach jointly characterizes the distribution of risk at background exposure and the sensitivity of response to exposure, which are commonly represented by model parameters. We demonstrate, through both an application to real data and a simulation study, that the proposed hierarchical structure adequately characterizes variability across different populations.

20.
Food web models have two uses in assessments of environmental contaminants. First, they are used to determine whether remediation is needed by estimating exposure of end-point species and subsequent effects. Second, they are used to establish cleanup goals by estimating concentrations of contaminants in ambient media that will not cause significant effects. This paper demonstrates how achievement of these goals can be enhanced by the use of stochastic food web models. The models simulate the dynamics of PCBs and mercury in the food webs of mink and great blue herons. All parameters of the models are treated as having knowledge uncertainty, due to imperfect knowledge of the actual parameter values for the site, chemicals, and species of interest. This uncertainty is an indicator of the potential value of additional measurements. In addition, those parameters that are responsible for variance among individual organisms are assigned stochastic uncertainty. This uncertainty indicates the range of body burdens that are expected when the end-point species are monitored. These two types of uncertainty are separately accounted for in Monte Carlo simulations of the models. Preliminary monitoring results indicate that the models give reasonably good estimates of heron egg and nestling body burdens and of variance among individuals.
