Similar Articles
20 similar articles found
1.
The use of probabilistic approaches in exposure assessments of contaminants migrating from food packages is of increasing interest, but the lack of concentration or migration data is often cited as a limitation. Data accounting for the variability and uncertainty that can be expected in migration, for example, due to heterogeneity in the packaging system, variation of the temperature along the distribution chain, and different time of consumption of each individual package, are required for probabilistic analysis. The objective of this work was to characterize quantitatively the uncertainty and variability in estimates of migration. A Monte Carlo simulation was applied to a typical solution of Fick's law with given variability in the input parameters. The analysis was performed based on experimental data of a model system (migration of Irgafos 168 from polyethylene into isooctane) and illustrates how important sources of variability and uncertainty can be identified in order to refine analyses. For long migration times and controlled temperature conditions, the affinity of the migrant for the food can be the major factor determining the variability in the migration values (more than 70% of the variance). In situations where both the time of consumption and the temperature can vary, these factors can be responsible, respectively, for more than 60% and 20% of the variance in the migration estimates. The approach presented can be used with databases from consumption surveys to yield a true probabilistic estimate of exposure.
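To make the approach concrete, the following is a minimal Python sketch of such a Monte Carlo migration estimate, built on a short-time solution of Fick's law for a polymer slab capped at the partition-limited equilibrium; all distributions, parameter values, and the film geometry are illustrative assumptions, not data from the study.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000

# Illustrative input distributions (all values hypothetical, not from the paper)
c_p0 = rng.normal(1000.0, 100.0, n)          # initial conc. in polymer, mg/kg
D    = rng.lognormal(np.log(1e-12), 0.5, n)  # diffusion coeff. in polymer, cm^2/s
K    = rng.lognormal(np.log(5.0), 0.8, n)    # food/polymer partition coefficient
t    = rng.uniform(30, 180, n) * 86400.0     # time of consumption, s
L    = 0.02                                  # film thickness, cm
Vf_over_Vp = 100.0                           # food-to-polymer volume ratio

# Short-time solution of Fick's law for a slab, capped at the
# partition-limited equilibrium fraction of migrant transferred
f_kinetic = 2.0 * np.sqrt(D * t / np.pi) / L
f_eq = K * Vf_over_Vp / (K * Vf_over_Vp + 1.0)
migrated = c_p0 * np.minimum(f_kinetic, f_eq)   # mg migrant per kg polymer

print(f"median {np.median(migrated):.1f}, "
      f"P95 {np.percentile(migrated, 95):.1f} mg/kg polymer")
```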

2.
This article presents a general model for estimating population heterogeneity and "lack of knowledge" uncertainty in methylmercury (MeHg) exposure assessments using two-dimensional Monte Carlo analysis. Using data from fish-consuming populations in Bangladesh, Brazil, Sweden, and the United Kingdom, predictive model estimates of dietary MeHg exposures were compared against those derived from biomarkers (i.e., [Hg]hair and [Hg]blood). By disaggregating parameter uncertainty into components (i.e., population heterogeneity, measurement error, recall error, and sampling error), estimates were obtained of the contribution of each component to the overall uncertainty. Steady-state diet:hair and diet:blood MeHg exposure ratios were estimated for each population and were used to develop distributions useful for conducting biomarker-based probabilistic assessments of MeHg exposure. The 5th and 95th percentile modeled MeHg exposure estimates around the mean population exposure from each of the four study populations are presented to demonstrate lack-of-knowledge uncertainty about a best estimate for a true mean. Results from a U.K. study population showed that a predictive dietary model resulted in a 74% lower lack-of-knowledge uncertainty around a central mean estimate relative to a hair biomarker model, and a 31% lower lack-of-knowledge uncertainty around a central mean estimate relative to a blood biomarker model. Similar results were obtained for the Brazil and Bangladesh populations. Such analyses, used here to evaluate alternative models of dietary MeHg exposure, can be used to refine exposure instruments, improve information used in site management and remediation decision making, and identify sources of uncertainty in risk estimates.
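The nested structure described above can be sketched as a two-dimensional Monte Carlo loop: an outer loop over uncertain distribution parameters and an inner loop over interindividual variability. All distributions and values below are hypothetical stand-ins, not the study's data.

```python
import numpy as np

rng = np.random.default_rng(2)
n_unc, n_var = 500, 2000   # outer (uncertainty) and inner (variability) samples

pop_means = np.empty(n_unc)
for j in range(n_unc):
    # Outer loop: "lack of knowledge" uncertainty about distribution parameters
    mu_c  = rng.normal(np.log(0.3), 0.15)   # uncertain mean log Hg conc. (mg/kg)
    mu_ir = rng.normal(np.log(60.0), 0.10)  # uncertain mean log fish intake (g/day)

    # Inner loop: interindividual variability given those parameters
    conc   = rng.lognormal(mu_c, 0.6, n_var)
    intake = rng.lognormal(mu_ir, 0.5, n_var)
    bw     = rng.normal(70.0, 12.0, n_var).clip(min=40.0)

    exposure = conc * intake / 1000.0 / bw  # mg/kg-bw/day (intake g -> kg)
    pop_means[j] = exposure.mean()

lo, hi = np.percentile(pop_means, [5, 95])
print(f"90% uncertainty interval on the population mean: [{lo:.2e}, {hi:.2e}]")
```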

3.
Methods to Approximate Joint Uncertainty and Variability in Risk
As interest in quantitative analysis of joint uncertainty and interindividual variability (JUV) in risk grows, so does the need for related computational shortcuts. To quantify JUV in risk, Monte Carlo methods typically require nested sampling of JUV in distributed inputs, which is cumbersome and time-consuming. Two approximation methods proposed here allow simpler and more rapid analysis. The first consists of new upper-bound JUV estimators that involve only uncertainty or variability, not both, and so never require nested sampling to calculate. The second is a discrete-probability-calculus procedure that uses only the mean and one upper-tail mean for each input in order to estimate mean and upper-bound risk, a procedure that is simpler and more intuitive than similar ones in use. Application of these methods is illustrated in an assessment of cancer risk from residential exposures to chloroform in Kanawha Valley, West Virginia. Because each of the multiple exposure pathways considered in this assessment had separate modeled sources of uncertainty and variability, the assessment illustrates a realistic case where a standard Monte Carlo approach to JUV analysis requires nested sampling. In the illustration, the first proposed method quantified JUV in cancer risk much more efficiently than corresponding nested Monte Carlo calculations. The second proposed method also nearly duplicated JUV-related and other estimates of risk obtained using Monte Carlo methods. Both methods were thus found adequate to obtain basic risk estimates accounting for JUV in a realistically complex risk assessment. These methods make routine JUV analysis more convenient and practical.

4.
Health risk assessments have become so widely accepted in the United States that their conclusions are a major factor in many environmental decisions. Although the risk assessment paradigm is 10 years old, the basic risk assessment process has been used by certain regulatory agencies for nearly 40 years. Each of the four components of the paradigm has undergone significant refinements, particularly during the last 5 years. A recent step in the development of the exposure assessment component can be found in the 1992 EPA Guidelines for Exposure Assessment. Rather than assuming worst-case or hypothetical maximum exposures, these guidelines are designed to lead to an accurate characterization, making use of a number of scientific advances. Many exposure parameters have become better defined, and more sensitive techniques now exist for measuring concentrations of contaminants in the environment. Statistical procedures for characterizing variability, using Monte Carlo or similar approaches, eliminate the need to select point estimates for all individual exposure parameters. These probabilistic models can more accurately characterize the full range of exposures that may potentially be encountered by a given population at a particular site, reducing the need to select highly conservative values to account for this form of uncertainty in the exposure estimate. Lastly, our awareness of the uncertainties in the exposure assessment, as well as our knowledge of how best to characterize them, will almost certainly provide evaluations that are more credible and, thereby, more useful to risk managers. If these refinements are incorporated into future exposure assessments, it is likely that our resources will be devoted to problems that, when resolved, will yield the largest improvement in public health.

5.
Biomarkers such as DNA adducts have significant potential to improve quantitative risk assessment by characterizing individual differences in metabolism of genotoxins and DNA repair and accounting for some of the factors that could affect interindividual variation in cancer risk. Inherent uncertainty in laboratory measurements and within-person variability of DNA adduct levels over time are putatively unrelated to cancer risk and should be subtracted from observed variation to better estimate interindividual variability of response to carcinogen exposure. A total of 41 volunteers, both smokers and nonsmokers, were asked to provide a peripheral blood sample every 3 weeks for several months in order to specifically assess intraindividual variability of polycyclic aromatic hydrocarbon (PAH)-DNA adduct levels. The intraindividual variance in PAH-DNA adduct levels, together with measurement uncertainty (laboratory variability and unaccounted-for differences in exposure), constituted roughly 30% of the overall variance. An estimated 70% of the total variance was contributed by interindividual variability and is probably representative of the true biologic variability of response to carcinogenic exposure in lymphocytes. The estimated interindividual variability in DNA damage after subtracting intraindividual variability and measurement uncertainty was 24-fold. Interindividual variance was higher (52-fold) in persons who constitutively lack the Glutathione S-Transferase M1 (GSTM1) gene, which is important in the detoxification pathway of PAH. Risk assessment models that do not consider the variability of susceptibility to DNA damage following carcinogen exposure may underestimate risks to the general population, especially for those people who are most vulnerable.
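The subtraction of intraindividual variance and measurement noise from total variance can be illustrated with a simple method-of-moments decomposition on repeated log-scale measurements. The balanced layout and all simulated values below are assumptions for illustration, not the study's data or its exact estimator.

```python
import numpy as np

def variance_components(log_levels_by_subject):
    """Method-of-moments split of log-scale variance into within- and
    between-subject components (balanced one-way random-effects layout)."""
    data = np.asarray(log_levels_by_subject)      # shape (subjects, repeats)
    within = data.var(axis=1, ddof=1).mean()      # intraindividual + assay noise
    between = data.mean(axis=1).var(ddof=1) - within / data.shape[1]
    return max(between, 0.0), within              # interindividual, residual

rng = np.random.default_rng(3)
true_subject = rng.normal(0.0, 0.8, size=(41, 1))          # 41 volunteers
obs = true_subject + rng.normal(0.0, 0.5, size=(41, 6))    # ~6 repeat samples each

between, within = variance_components(obs)
print(f"between-subject share of total variance = {between / (between + within):.0%}")
# A 95% interindividual "fold range" on the natural scale:
print(f"fold range = {np.exp(2 * 1.96 * np.sqrt(between)):.1f}x")
```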

6.
Currently, there is a trend away from the use of single (often conservative) estimates of risk to summarize the results of risk analyses in favor of stochastic methods, which provide a more complete characterization of risk. The use of such stochastic methods leads to a distribution of possible values of risk, taking into account both uncertainty and variability in all of the factors affecting risk. In this article, we propose a general framework for the analysis of uncertainty and variability for use in the commonly encountered case of multiplicative risk models, in which risk may be expressed as a product of two or more risk factors. Our analytical methods facilitate the evaluation of overall uncertainty and variability in risk assessment, as well as the contributions of individual risk factors to both uncertainty and variability, an evaluation that is cumbersome using Monte Carlo methods. The use of these methods is illustrated in the analysis of potential cancer risks due to the ingestion of radon in drinking water.
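For a purely multiplicative model with lognormal factors, the analytic bookkeeping is especially simple: log-variances add, so overall uncertainty and variability, and each factor's share of both, follow without any simulation. A minimal sketch (factor names and sigma values are illustrative, not the article's):

```python
import numpy as np

# Risk = A * B * C with lognormal factors; on the log scale the variances of
# the factors simply add, separately for uncertainty and for variability.
factors = {
    # name: (sigma_uncertainty, sigma_variability) on the log scale (assumed)
    "radon concentration":  (0.30, 0.50),
    "water ingestion rate": (0.10, 0.40),
    "risk per unit intake": (0.60, 0.20),
}

var_U = sum(su**2 for su, _ in factors.values())
var_V = sum(sv**2 for _, sv in factors.values())
print(f"overall GSD: uncertainty {np.exp(np.sqrt(var_U)):.2f}, "
      f"variability {np.exp(np.sqrt(var_V)):.2f}")
for name, (su, sv) in factors.items():
    print(f"{name}: {su**2/var_U:.0%} of uncertainty, {sv**2/var_V:.0%} of variability")
```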

7.
Because of their mouthing behaviors, children have a higher potential for exposure to available chemicals through the nondietary ingestion route; thus, the frequency of hand-to-mouth activity is an important variable for exposure assessments. Such data are limited and difficult to collect. Few published studies report such information, and the studies that have been conducted used different data collection approaches (e.g., videography versus real-time observation), data analysis and reporting methods, ages of children, locations, and even definitions of "mouthing." For this article, hand-to-mouth frequency data were gathered from 9 available studies representing 429 subjects and more than 2,000 hours of behavior observation. A meta-analysis was conducted to study differences in hand-to-mouth frequency based on study, age group, gender, and location (indoor vs. outdoor), to fit variability and uncertainty distributions that can be used in probabilistic exposure assessments, and to identify any data gaps. Results of this analysis indicate that age and location are important for hand-to-mouth frequency, but study and gender are not. As age increases, both indoor and outdoor hand-to-mouth frequencies decrease. Hand-to-mouth behavior is significantly greater indoors than outdoors. For both indoor and outdoor hand-to-mouth frequencies, interpersonal and intrapersonal variability are approximately 60% and 30%, respectively. Among studies, the differences in variance are much larger than the differences in means, indicating that studies with different methodologies yield similar central values. Weibull distributions best fit the observed data for the different variables considered and are presented in this article by study, age group, and location. Average indoor hand-to-mouth frequency ranged from 6.7 to 28.0 contacts/hour, with the lowest value corresponding to the 6 to <11 year olds and the highest value corresponding to the 3 to <6 month olds. Average outdoor hand-to-mouth frequency ranged from 2.9 to 14.5 contacts/hour, with the lowest value corresponding to the 6 to <11 year olds and the highest value corresponding to the 6 to <12 month olds. The analysis highlights the need for additional hand-to-mouth data for the <3 month, 3 to <6 month, and 3 to <6 year age groups, collected and analyzed with standardized methods, because of a lack of data or high uncertainty in the available data. This is the first publication to report Weibull distributions as the best-fitting distribution for hand-to-mouth frequency; using the best-fitting exposure factor distribution will help improve estimates of exposure. The analyses also represent a first comprehensive effort to fit hand-to-mouth frequency variability and uncertainty distributions by indoor/outdoor location and by age group, using the new standard set of age groups recommended by the U.S. Environmental Protection Agency for assessing childhood exposures. Thus, the data presented in this article can be used to update the U.S. EPA's Child-Specific Exposure Factors Handbook and to improve estimates of nondietary ingestion in probabilistic exposure modeling.
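Fitting and checking a Weibull distribution for such frequency data is straightforward with SciPy; the sketch below uses simulated stand-in data, not the observed hand-to-mouth dataset.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
# Stand-in for observed indoor hand-to-mouth frequencies (contacts/hour)
freq = rng.weibull(1.5, 300) * 15.0

# Fit a 2-parameter Weibull (location fixed at 0, since rates are non-negative)
shape, loc, scale = stats.weibull_min.fit(freq, floc=0)
print(f"shape={shape:.2f}, scale={scale:.2f}")

# Kolmogorov-Smirnov check of the fitted distribution against the data
ks = stats.kstest(freq, "weibull_min", args=(shape, loc, scale))
print(f"KS statistic={ks.statistic:.3f}, p={ks.pvalue:.2f}")

# Draw from the fitted distribution inside a probabilistic exposure model
samples = stats.weibull_min.rvs(shape, loc, scale, size=10_000, random_state=rng)
```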

8.
Human health risk assessments use point values to develop risk estimates and thus impart a deterministic character to risk, which, by definition, is a probability phenomenon. The risk estimates are calculated based on individuals and then, using uncertainty factors (UFs), are extrapolated to the population, which is characterized by variability. Regulatory agencies have recommended the quantification of the impact of variability in risk assessments through the application of probabilistic methods. In the present study, a framework that deals with the quantitative analysis of uncertainty (U) and variability (V) in target tissue dose in the population was developed by applying probabilistic analysis to physiologically-based toxicokinetic models. The mechanistic parameters that determine kinetics were described with probability density functions (PDFs). Since each PDF depicts the frequency of occurrence of all expected values of each parameter in the population, the combined effects of multiple sources of U/V were accounted for in the estimated distribution of tissue dose in the population, and a unified (adult and child) intraspecies toxicokinetic uncertainty factor, UF(H-TK), was determined. The results show that the proposed framework accounts effectively for U/V in population toxicokinetics. The ratio of the 95th percentile to the 50th percentile of the annual average concentration of the chemical at the target tissue (i.e., the UF(H-TK)) varies with age. The ratio is equivalent to a unified intraspecies toxicokinetic UF, and it is one of the UFs by which the NOAEL can be divided to obtain the RfC/RfD. The 10-fold intraspecies UF is intended to account for uncertainty and variability in toxicokinetics (3.2x) and toxicodynamics (3.2x); this article deals exclusively with the toxicokinetic component. The framework provides an alternative to the default methodology and is advantageous in that the evaluation of toxicokinetic variability is based on the distribution of the effective target tissue dose, rather than the applied dose. It allows for the replacement of the default adult and child intraspecies UF with toxicokinetic data-derived values and provides accurate chemical-specific estimates of their magnitude. It shows that proper application of probability and toxicokinetic theories can reduce uncertainties when establishing exposure limits for specific compounds and provide better assurance that established limits are adequately protective. It contributes to the development of a probabilistic noncancer risk assessment framework and will ultimately lead to the unification of cancer and noncancer risk assessment methodologies.
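The ratio-based UF can be sketched with a deliberately simplified one-compartment stand-in for a PBTK model: sample toxicokinetic parameters across the population, compute the steady-state tissue concentration for a fixed intake, and take the 95th/50th percentile ratio. All parameter distributions below are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(5)
n = 50_000

# Simplified one-compartment stand-in for a PBTK model (illustrative only):
# steady-state target-tissue concentration = intake rate / total clearance
bw        = rng.lognormal(np.log(70.0), 0.2, n)   # body weight, kg
cl_per_kg = rng.lognormal(np.log(0.1), 0.4, n)    # clearance, L/h/kg

intake = 1.0                                       # mg/h, fixed unit intake so the
c_tissue = intake / (cl_per_kg * bw)               # ratio reflects TK variability only

# Data-derived intraspecies toxicokinetic UF: 95th / 50th percentile
uf_h_tk = np.percentile(c_tissue, 95) / np.percentile(c_tissue, 50)
print(f"UF(H-TK) = {uf_h_tk:.2f}  (default TK component is ~3.2)")
```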

9.
Jan F. Van Impe, Risk Analysis, 2011, 31(8): 1295-1307
The aim of quantitative microbiological risk assessment is to estimate the risk of illness caused by the presence of a pathogen in a food type, and to study the impact of interventions. Because of inherent variability and uncertainty, risk assessments are generally conducted stochastically, and if possible it is advised to characterize variability separately from uncertainty. Sensitivity analysis indicates to which of the input variables the outcome of a quantitative microbiological risk assessment is most sensitive. Although a number of methods exist to apply sensitivity analysis to a risk assessment with probabilistic input variables (such as contamination, storage temperature, storage duration, etc.), it is challenging to perform sensitivity analysis when a risk assessment includes a separate characterization of the variability and uncertainty of input variables. A procedure is proposed that focuses on the relation between risk estimates obtained by Monte Carlo simulation and the location of pseudo-randomly sampled input variables within the uncertainty and variability distributions. Within this procedure, two methods, an ANOVA-like model and Sobol sensitivity indices, are used to obtain and compare the impact of the variability and of the uncertainty of all input variables, and of model uncertainty and scenario uncertainty. As a case study, this methodology is applied to a risk assessment estimating the risk of contracting listeriosis due to consumption of deli meats.
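First-order Sobol indices can be estimated with a standard pick-and-freeze scheme (a Saltelli-type estimator); the toy model and input distributions below are illustrative and do not reproduce the article's ANOVA-like model or its listeriosis case study.

```python
import numpy as np

def sobol_first_order(model, sample_cols, n=20_000, seed=6):
    """Saltelli-style pick-and-freeze estimator of first-order Sobol indices.
    `sample_cols` is a list of functions, one per input, each drawing n values."""
    rng = np.random.default_rng(seed)
    A = np.column_stack([draw(rng, n) for draw in sample_cols])
    B = np.column_stack([draw(rng, n) for draw in sample_cols])
    fA, fB = model(A), model(B)
    var = np.var(np.concatenate([fA, fB]), ddof=1)
    s1 = []
    for i in range(A.shape[1]):
        ABi = A.copy()
        ABi[:, i] = B[:, i]          # freeze all inputs but i
        s1.append(np.mean(fB * (model(ABi) - fA)) / var)  # Saltelli (2010)
    return np.array(s1)

# Toy dose model: dose ~ contamination * duration / body weight (illustrative)
model = lambda X: X[:, 0] * X[:, 1] / X[:, 2]
draws = [lambda r, n: r.lognormal(0.0, 0.5, n),   # contamination
         lambda r, n: r.uniform(1.0, 10.0, n),    # storage duration
         lambda r, n: r.normal(70.0, 10.0, n)]    # body weight
print(sobol_first_order(model, draws))
```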

10.
Biologic data on benzene metabolite doses, cytotoxicity, and genotoxicity often show that these effects do not vary directly with cumulative benzene exposure (i.e., concentration times time, or c × t). To examine the effect of an alternate exposure metric, we analyzed cell-type-specific leukemia mortality in Pliofilm workers. The work history of each Pliofilm worker was used to define each worker's maximally exposed job/department combination over time and the long-term average concentration associated with the maximally exposed job (LTA-MEJ). Using this measure, in conjunction with four job exposure estimates, we calculated SMRs for groups of workers with increasing LTA-MEJs. The analyses suggest that a critical concentration of benzene exposure must be reached in order for the risk of leukemia or, more specifically, AMML to be expressed. The minimum concentration is between 20 and 60 ppm, depending on the exposure estimate and endpoint (all leukemias or AMMLs only). We believe these analyses are a useful adjunct to previous analyses of the Pliofilm data. They suggest that (a) AMML risk is shown only above a critical concentration of benzene exposure, measured as a long-term average and experienced for years, (b) the critical concentration is between 50 and 60 ppm when using a median exposure estimate derived from three previous exposure assessments, and is between 20 and 25 ppm using the lowest exposure estimates, and (c) risks for total leukemia are driven by risks for AMML, suggesting that AMML is the cell type related to benzene exposure.
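The SMR computation itself is simple: observed deaths divided by the deaths expected from stratum-specific reference rates. A minimal sketch with hypothetical person-years and rates (not the Pliofilm data):

```python
import numpy as np

def smr(observed_deaths, person_years, reference_rates):
    """Standardized mortality ratio: observed over expected deaths, where
    expected = sum over strata of person-years x reference mortality rate."""
    expected = np.sum(np.asarray(person_years) * np.asarray(reference_rates))
    return observed_deaths / expected

# Illustrative exposure group (values hypothetical): three age strata
py    = [1200.0, 900.0, 400.0]       # person-years at an LTA-MEJ above 20 ppm
rates = [0.5e-4, 1.2e-4, 3.0e-4]     # reference leukemia mortality rates, per year
print(f"SMR = {smr(2, py, rates):.2f}")
```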

11.
The extensive data from the Blair et al.(1) epidemiology study of occupational acrylonitrile exposure among 25,460 workers in eight plants in the United States provide an excellent opportunity to update quantitative risk assessments for this widely used commodity chemical. We employ the semiparametric Cox relative risk (RR) regression model with a cumulative exposure metric to model cause-specific mortality from lung cancer and all other causes. The separately estimated cause-specific cumulative hazards are then combined to provide an overall estimate of age-specific mortality risk. Age-specific estimates of the additional risk of lung cancer mortality associated with several plausible occupational exposure scenarios are obtained. For age 70, these estimates are all markedly lower than those generated with the cancer potency estimate provided in the USEPA acrylonitrile risk assessment.(2) This result is consistent with the failure of recent occupational studies to confirm elevated lung cancer mortality among acrylonitrile-exposed workers as was originally reported by O'Berg,(3) and it calls attention to the importance of using high-quality epidemiology data in the risk assessment process.
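A Cox model with a cumulative exposure covariate can be fitted, for example, with the lifelines package; the data frame below is a hypothetical toy (column names and values assumed), and reproducing the article would require separate fits per cause of death.

```python
import pandas as pd
from lifelines import CoxPHFitter

# Hypothetical worker-level data: follow-up time (years), lung cancer death
# indicator, and cumulative acrylonitrile exposure (ppm-years) -- all assumed.
df = pd.DataFrame({
    "time":       [12.0, 30.5, 25.0, 8.2, 40.0, 33.1, 22.0, 18.0],
    "lung_death": [0,    1,    0,    0,   1,    0,    1,    0],
    "cum_exp":    [5.0,  80.0, 60.0, 2.0, 150.0, 95.0, 9.0, 30.0],
})

cph = CoxPHFitter()
cph.fit(df, duration_col="time", event_col="lung_death")
cph.print_summary()   # log hazard ratio per ppm-year of cumulative exposure
```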

12.
Monte Carlo simulations are commonplace in quantitative risk assessments (QRAs). Designed to propagate the variability and uncertainty associated with each individual exposure input parameter in a quantitative risk assessment, Monte Carlo methods statistically combine the individual parameter distributions to yield a single, overall distribution. Critical to such an assessment is the representativeness of each individual input distribution. The authors performed a literature review to collect and compare the distributions used in published QRAs for the parameters of body weight, food consumption, soil ingestion rates, breathing rates, and fluid intake. To provide a basis for comparison, all estimated exposure parameter distributions were evaluated with respect to four properties: consistency, accuracy, precision, and specificity. The results varied depending on the exposure parameter. Even where extensive, well-collected data exist, investigators used a variety of different distributional shapes to approximate these data. Where such data do not exist, investigators have collected their own data, often leading to substantial disparity in parameter estimates and subsequent choice of distribution. The present findings indicate that more attention must be paid to the data underlying these distributional choices. More emphasis should be placed on sensitivity analyses, quantifying the impact of assumptions, and on discussion of sources of variation as part of the presentation of any risk assessment results. If such practices and disclosures are followed, it is believed that Monte Carlo simulations can greatly enhance the accuracy and appropriateness of specific risk assessments. Without such disclosures, researchers will be increasing the size of the risk assessment "black box," a concern already raised by many critics of more traditional risk assessments.

13.
For noncancer effects, the degree of human interindividual variability plays a central role in determining the risk that can be expected at low exposures. This discussion reviews available data on observations of interindividual variability in (a) breathing rates, based on observations in British coal miners; (b) systemic pharmacokinetic parameters, based on studies of a number of drugs; (c) susceptibility to neurological effects from fetal exposure to methyl mercury, based on observations of the incidence of effects in relation to hair mercury levels; and (d) chronic lung function changes in relation to long-term exposure to cigarette smoke. The quantitative ranges of predictions that follow from uncertainties in estimates of interindividual variability in susceptibility are illustrated.

14.
Biomagnification of organochlorine and other persistent organic contaminants by higher trophic level organisms represents one of the most significant sources of uncertainty and variability in evaluating potential risks associated with disposal of dredged materials. While it is important to distinguish between population variability (e.g., true population heterogeneity in fish weight and lipid content) and uncertainty (e.g., measurement error), they can be operationally difficult to define separately in probabilistic estimates of human health and ecological risk. We propose a disaggregation of uncertain and variable parameters based on: (1) availability of supporting data; (2) the specific management and regulatory context (in this case, of the U.S. Army Corps of Engineers/U.S. Environmental Protection Agency tiered approach to dredged material management); and (3) professional judgment and experience in conducting probabilistic risk assessments. We describe and quantitatively evaluate several sources of uncertainty and variability in estimating risk to human health from trophic transfer of polychlorinated biphenyls (PCBs) using a case study of sediments obtained from the New York-New Jersey Harbor and being evaluated for disposal at an open water off-shore disposal site within the northeast region. The estimates of PCB concentrations in fish and dietary doses of PCBs to humans ingesting fish are expressed as distributions of values, of which the arithmetic mean or mode represents a particular fractile. The distribution of risk values is obtained using a food chain biomagnification model developed by Gobas by specifying distributions for input parameters disaggregated to represent either uncertainty or variability. Only those sources of uncertainty that could be quantified were included in the analysis. Results for several different two-dimensional Latin Hypercube analyses are provided to evaluate the influence of the uncertain versus variable disaggregation of model parameters. The analysis suggests that variability in human exposure parameters is greater than the uncertainty bounds on any particular fractile, given the described assumptions.
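The two-dimensional Latin Hypercube structure can be sketched with SciPy's qmc module: an outer design over uncertain parameters and an inner design over population variability. The PCB-like distributions and values below are assumptions for illustration, not the Gobas-model case study.

```python
import numpy as np
from scipy.stats import qmc, norm

# Two-dimensional Latin Hypercube: an outer design over *uncertain* parameters
# and, for each outer point, an inner design over *variable* parameters.
n_outer, n_inner = 64, 256
outer = qmc.LatinHypercube(d=1, seed=7).random(n_outer)
inner = qmc.LatinHypercube(d=2, seed=8).random(n_inner)

results = []
for u in outer[:, 0]:
    # Uncertainty: true mean log PCB concentration in fish (illustrative prior)
    mu_c = norm.ppf(u, loc=np.log(0.5), scale=0.2)
    # Variability: fish concentration and ingestion rate across the population
    conc = np.exp(norm.ppf(inner[:, 0], loc=mu_c, scale=0.6))           # mg/kg
    ir   = np.exp(norm.ppf(inner[:, 1], loc=np.log(30.0), scale=0.5))   # g/day
    dose = conc * ir / 1000.0 / 70.0   # mg/kg-bw/day for a 70-kg adult
    results.append(np.percentile(dose, 95))

lo, hi = np.percentile(results, [5, 95])
print(f"uncertainty about the population P95 dose: [{lo:.2e}, {hi:.2e}]")
```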

15.
We performed benchmark exposure (BME) calculations for particulate matter when multiple dichotomous outcome variables are involved using latent class modeling techniques and generated separate results for both the extra risk and additional risk. The use of latent class models in this study is advantageous because it combined several outcomes into just two classes (namely, a high-risk class and a low-risk class) and compared these two classes to obtain the BME levels. This novel approach addresses a key problem in risk estimation, namely, the multiple comparisons problem, where separate regression models are fitted for each outcome variable and the reference exposure will rely on the results of the best-fitting model. Because of the complex nature of the estimation process, the bootstrap approach was used to estimate the reference exposure level, thereby reducing uncertainty in the obtained values. The methodology developed in this article was applied to environmental data by identifying unmeasured class membership (e.g., morbidity vs. no morbidity class) among infants in utero using observed characteristics that included low birth weight, preterm birth, and small for gestational age.

16.
The application of an ISO standard procedure (the Guide to the Expression of Uncertainty in Measurement, GUM) is discussed here as a way to quantify uncertainty in human risk estimation under chronic exposure to hazardous chemical compounds. The procedure was previously applied to a simple model; in this article a much more complex model is used, i.e., multiple compounds and multiple exposure pathways. Risk was evaluated using the usual methodologies: the deterministic reasonable maximum exposure (RME) and the statistical Monte Carlo method. In both cases, the procedures to evaluate uncertainty in the risk values are detailed. Uncertainties were evaluated by different methodologies to reflect the information available for each individual variable. The GUM procedure enables the ranking of variables by their contribution to uncertainty and thus provides a criterion for choosing variables for deeper analysis. The results show that the GUM procedure is an easy and straightforward way to quantify the uncertainty and variability of risk estimates. The health risk estimation is based on literature data for a water table contaminated by three volatile organic compounds. Daily intake was considered by either ingestion of water or inhalation during showering. The results identify one of the substances as the main contaminant and provide a criterion for singling out the key component, so that treatment selection and process design can focus on it to reduce risk.
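The core GUM calculation is first-order propagation, u_c^2 = sum_i (df/dx_i)^2 u_i^2, which also yields each variable's contribution for ranking. A minimal sketch with a generic ingestion-type risk model (model form, values, and uncertainties all illustrative):

```python
import numpy as np

def gum_combined_uncertainty(f, x, u):
    """First-order GUM propagation: u_c^2 = sum_i (df/dx_i)^2 * u_i^2,
    with sensitivity coefficients taken by central finite differences."""
    x, u = np.asarray(x, float), np.asarray(u, float)
    contrib = np.empty_like(x)
    for i in range(x.size):
        h = 1e-6 * max(abs(x[i]), 1.0)
        xp, xm = x.copy(), x.copy()
        xp[i] += h
        xm[i] -= h
        contrib[i] = ((f(xp) - f(xm)) / (2 * h)) ** 2 * u[i] ** 2
    return np.sqrt(contrib.sum()), contrib / contrib.sum()

# Risk = C * IR * EF / BW (generic ingestion pathway; values illustrative)
f = lambda p: p[0] * p[1] * p[2] / p[3]
x = [0.05, 2.0, 0.8, 70.0]      # conc mg/L, intake L/day, exposure factor, bw kg
u = [0.02, 0.3, 0.1, 5.0]       # standard uncertainties of each input
uc, shares = gum_combined_uncertainty(f, x, u)
print(f"u_c = {uc:.2e}; contribution ranking (largest first): {np.argsort(shares)[::-1]}")
```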

17.
Marc Kennedy & Andy Hart, Risk Analysis, 2009, 29(10): 1427-1442
We propose new models for dealing with various sources of variability and uncertainty that influence risk assessments for dietary exposure. The uncertain or random variables involved can interact in complex ways, and the focus is on methodology for integrating their effects and on assessing the relative importance of including different uncertainty model components in the calculation of dietary exposures to contaminants, such as pesticide residues. The combined effect is reflected in the final inferences about the population of residues and subsequent exposure assessments. In particular, we show how measurement uncertainty can have a significant impact on results and discuss novel statistical options for modeling this uncertainty. The effect of measurement error is often ignored, perhaps because the laboratory process conforms to the relevant international standards, or is treated in an ad hoc way. These issues are common to many dietary risk analysis problems, and the methods could be applied to any food and chemical of interest. An example is presented using data on carbendazim in apples and consumption surveys of toddlers.
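One simple way to model multiplicative measurement error, sketched below with assumed residue and error distributions (not the carbendazim data), is an unbiased lognormal perturbation whose magnitude would in practice come from laboratory validation data:

```python
import numpy as np

rng = np.random.default_rng(9)
n = 100_000

# "True" pesticide residues across individual fruits (variability), mg/kg (assumed)
true_residue = rng.lognormal(np.log(0.05), 0.9, n)

# Analytical measurement error: multiplicative lognormal with CV ~ 25%
# (an assumed magnitude); mean -sigma^2/2 makes the error unbiased on average
cv = 0.25
sigma_m = np.sqrt(np.log(1 + cv**2))
measured = true_residue * rng.lognormal(-sigma_m**2 / 2, sigma_m, n)

# Ignoring measurement error inflates the apparent spread of residues:
for name, x in [("true", true_residue), ("measured", measured)]:
    print(f"{name}: P95 = {np.percentile(x, 95):.3f} mg/kg")
```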

18.
Exposure to chemical contaminants in various media must be estimated when performing ecological risk assessments. Exposure estimates are often based on the 95th-percentile upper confidence limit on the mean concentration of all samples, calculated without regard to critical ecological and spatial information about the relative relationship of receptors, their habitats, and contaminants. This practice produces exposure estimates that are potentially unrepresentative of the ecology of the receptor. This article proposes a habitat area- and quality-conditioned exposure estimator, E[HQ], that requires consideration of these relationships. It describes a spatially explicit ecological exposure model to facilitate calculation of E[HQ]. The model provides (1) a flexible platform for investigating the effect of changes in habitat area, habitat quality, foraging area, and population size on exposure estimates, and (2) a tool for calculating E[HQ] for use in actual risk assessments. The inner loop of a Visual Basic program randomly walks a receptor over a multicelled landscape (each cell of which contains values for cell area, habitat area, habitat quality, and concentration), accumulating an exposure estimate while the total area foraged is less than or equal to a given foraging area. An outer loop then steps through foraging areas of increasing size. This program is iterated by Monte Carlo software, with the number of iterations representing the population size. Results indicate that (1) any single estimator may over- or underestimate exposure, depending on foraging strategy and the spatial relationships of habitat and contamination, and (2) changes in exposure estimates in response to changes in foraging and habitat area are not linear.
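A heavily simplified sketch of this kind of spatially explicit estimator is given below (in Python rather than Visual Basic): a receptor random-walks over a gridded landscape, accumulating a habitat-quality-weighted concentration until its foraging area is filled. The landscape, weighting, and stopping rule are illustrative assumptions, not the article's exact model.

```python
import numpy as np

rng = np.random.default_rng(10)
N = 20                                                      # 20 x 20 cell landscape
conc    = rng.lognormal(np.log(10.0), 1.0, (N, N))          # contaminant, mg/kg
quality = rng.uniform(0.0, 1.0, (N, N))                     # habitat quality, 0-1
cell_area = 1.0                                             # ha per cell

def foraging_exposure(forage_area, n_receptors=1000):
    """Habitat-quality-weighted exposure from a random walk that stops once
    the receptor has foraged its home-range area (simplified E[HQ] sketch)."""
    out = np.empty(n_receptors)
    for k in range(n_receptors):
        r, c = rng.integers(N, size=2)                      # random start cell
        dose, weight, area = 0.0, 0.0, 0.0
        while area < forage_area:
            dose   += conc[r, c] * quality[r, c]
            weight += quality[r, c]
            area   += cell_area
            dr, dc = rng.integers(-1, 2, size=2)            # step to a neighbor
            r, c = (r + dr) % N, (c + dc) % N
        out[k] = dose / weight if weight > 0 else 0.0
    return out

for fa in (5.0, 50.0):   # increasing foraging areas (the model's outer loop)
    e = foraging_exposure(fa)
    print(f"foraging area {fa:>5} ha: mean quality-weighted conc = {e.mean():.1f} mg/kg")
```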

19.
The human toxicity potential, a weighting scheme used to evaluate toxic emissions for life cycle assessment and toxics release inventories, is based on potential dose calculations and toxicity factors. This paper evaluates the variance in potential dose calculations that can be attributed to the uncertainty in chemical-specific input parameters as well as the variability in exposure factors and landscape parameters. A knowledge of the uncertainty allows us to assess the robustness of a decision based on the toxicity potential; a knowledge of the sources of uncertainty allows us to focus our resources if we want to reduce the uncertainty. The potential dose of 236 chemicals was assessed. The chemicals were grouped by dominant exposure route, and a Monte Carlo analysis was conducted for one representative chemical in each group. The variance is typically one to two orders of magnitude. For comparison, the point estimates in potential dose for 236 chemicals span ten orders of magnitude. Most of the variance in the potential dose is due to chemical-specific input parameters, especially half-lives, although exposure factors such as fish intake and the source of drinking water can be important for chemicals whose dominant exposure is through indirect routes. Landscape characteristics are generally of minor importance.

20.
Application of Geostatistics to Risk Assessment
Geostatistics offers two fundamental contributions to environmental contaminant exposure assessment: (1) a group of methods to quantitatively describe the spatial distribution of a pollutant and (2) the ability to improve estimates of the exposure point concentration by exploiting the geospatial information present in the data. The second contribution is particularly valuable when exposure estimates must be derived from small data sets, which is often the case in environmental risk assessment. This article addresses two topics related to the use of geostatistics in human and ecological risk assessments performed at hazardous waste sites: (1) the importance of assessing model assumptions when using geostatistics and (2) the use of geostatistics to improve estimates of the exposure point concentration (EPC) in the limited data scenario. The latter topic is approached here by comparing design-based estimators that are familiar to environmental risk assessors (e.g., Land's method) with geostatistics, a model-based estimator. In this report, we summarize the basics of spatial weighting of sample data, kriging, and geostatistical simulation. We then explore the two topics identified above in a case study, using soil lead concentration data from a Superfund site (a skeet and trap range). We also describe several areas where research is needed to advance the use of geostatistics in environmental risk assessment.
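For the kriging step, a sketch using the PyKrige package (an assumption; the article does not prescribe software) on simulated soil-lead data shows how a kriged surface yields an area-weighted EPC estimate:

```python
import numpy as np
from pykrige.ok import OrdinaryKriging  # assumes the PyKrige package is installed

rng = np.random.default_rng(11)
# Sparse soil-lead samples (coordinates in m, concentration mg/kg; all simulated)
x = rng.uniform(0, 100, 30)
y = rng.uniform(0, 100, 30)
pb = rng.lognormal(np.log(400.0), 0.8, 30) * (1 + x / 100)  # with a spatial trend

ok = OrdinaryKriging(x, y, pb, variogram_model="spherical")
gridx = np.linspace(0, 100, 50)
gridy = np.linspace(0, 100, 50)
zhat, zvar = ok.execute("grid", gridx, gridy)   # kriged estimates and variances

# An EPC from the kriged surface (area-weighted mean over the exposure unit)
print(f"kriged-mean EPC: {zhat.mean():.0f} mg/kg "
      f"(vs. naive sample mean {pb.mean():.0f} mg/kg)")
```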
