Similar Articles
1.
Reference values, including an oral reference dose (RfD) and an inhalation reference concentration (RfC), were derived for propylene glycol methyl ether (PGME), and an oral RfD was derived for its acetate (PGMEA). These values were based on transient sedation observed in F344 rats and B6C3F1 mice during a two-year inhalation study. The dose-response relationship for sedation was characterized using internal dose measures as predicted by a physiologically based pharmacokinetic (PBPK) model for PGME and its acetate. PBPK modeling was used to account for changes in rodent physiology and metabolism due to aging and adaptation, based on data collected during Weeks 1, 2, 26, 52, and 78 of a chronic inhalation study. The peak concentration of PGME in richly perfused tissues (i.e., brain) was selected as the most appropriate internal dose measure based on a consideration of the mode of action for sedation and similarities in tissue partitioning between brain and other richly perfused tissues. Internal doses (peak tissue concentrations of PGME) were designated as either no-observed-adverse-effect levels (NOAELs) or lowest-observed-adverse-effect levels (LOAELs) based on the presence or absence of sedation at each time point, species, and sex in the two-year study. Distributions of the NOAEL and LOAEL values expressed in terms of internal dose were characterized using an arithmetic mean and standard deviation, and the mean internal NOAEL, divided by appropriate uncertainty factors, served as the basis for the reference values. Where data permitted, chemical-specific adjustment factors were derived to replace default uncertainty factor values of 10. Nonlinear kinetics, which the model predicted in all species at PGME concentrations exceeding 100 ppm, complicates interspecies and low-dose extrapolations.
To address this complication, reference values were derived using two approaches that differ with respect to the order in which these extrapolations were performed: (1) the default approach of interspecies extrapolation to determine the human equivalent concentration (PBPK modeling) followed by uncertainty factor application, and (2) uncertainty factor application followed by interspecies extrapolation (PBPK modeling). The resulting reference values for these two approaches are substantially different, with values from the latter approach being seven-fold higher than those from the former. Such a striking difference reveals an underlying issue that has received little attention in the literature regarding the application of uncertainty factors and interspecies extrapolations to compounds for which saturable kinetics occur in the range of the NOAEL. Until such discussions have taken place, reference values based on the former approach are recommended for risk assessments involving human exposures to PGME and PGMEA.
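The order-of-operations issue can be illustrated with a minimal sketch (hypothetical Michaelis-Menten parameters, not the study's PBPK model): because the internal-to-external dose mapping is nonlinear in the saturable range, dividing by an uncertainty factor before versus after the conversion gives different reference values.

```python
# Hypothetical saturable (Michaelis-Menten) dose relationship; vmax and km are
# illustrative values, not parameters of the published PBPK model.
def internal(c_ext, vmax=200.0, km=50.0):
    """Internal (tissue) dose as a saturable function of external concentration."""
    return vmax * c_ext / (km + c_ext)

def external(d_int, vmax=200.0, km=50.0):
    """Invert the saturable relationship to recover external concentration."""
    return km * d_int / (vmax - d_int)

uf = 10.0
int_noael = internal(300.0)        # internal dose at a NOAEL in the saturable range

ref_a = external(int_noael) / uf   # convert to external dose first, then apply the UF
ref_b = external(int_noael / uf)   # apply the UF to the internal dose, then convert

print(ref_a, ref_b)
```

With these illustrative numbers the two orderings differ by roughly a factor of six, the same kind of severalfold divergence the abstract reports.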

2.
Izadi H, Grundy JE, Bose R. Risk Analysis. 2012;32(5):830-835
Repeated-dose studies received by the New Substances Assessment and Control Bureau (NSACB) of Health Canada are used to provide hazard information toward risk calculation. These studies provide a point of departure (POD), traditionally the NOAEL or LOAEL, which is used to extrapolate the quantity of substance above which adverse effects can be expected in humans. This project explored the use of benchmark dose (BMD) modeling as an alternative to this approach for studies with few dose groups. Continuous data from oral repeated-dose studies for chemicals previously assessed by NSACB were reanalyzed using U.S. EPA benchmark dose software (BMDS) to determine the BMD and BMD 95% lower confidence limit (BMDL05) for each endpoint critical to NOAEL or LOAEL determination for each chemical. Endpoint-specific benchmark dose-response levels, indicative of adversity, were consistently applied. An overall BMD and BMDL05 were calculated for each chemical using the geometric mean. The POD obtained from benchmark analysis was then compared with the traditional toxicity thresholds originally used for risk assessment. The BMD and BMDL05 generally were higher than the NOAEL, but lower than the LOAEL. The BMDL05 was generally constant at 57% of the BMD. The benchmark approach provided a clear advantage in health risk assessment when a LOAEL was the only POD identified, or when dose groups were widely distributed. Although the benchmark method cannot always be applied, in the selected studies with few dose groups it provided a more accurate estimate of the real no-adverse-effect level of a substance.
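The pooling step described above can be sketched as follows (the per-endpoint BMD and BMDL05 values here are hypothetical; the abstract does not report the underlying numbers):

```python
import math

# Hypothetical per-endpoint BMD and BMDL05 values (mg/kg-day) for one chemical
bmds = [12.0, 20.0, 35.0]
bmdls = [7.0, 11.0, 20.0]

def geomean(values):
    """Geometric mean, as used to pool endpoint-specific estimates."""
    return math.exp(sum(math.log(v) for v in values) / len(values))

overall_bmd = geomean(bmds)    # overall BMD for the chemical
overall_bmdl = geomean(bmdls)  # overall BMDL05, the candidate POD
print(overall_bmd, overall_bmdl)
```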

3.
Slob W, Pieters MN. Risk Analysis. 1998;18(6):787-798
The use of uncertainty factors in the standard method for deriving acceptable intake or exposure limits for humans, such as the Reference Dose (RfD), may be viewed as a conservative method of taking various uncertainties into account. As an obvious alternative, the use of uncertainty distributions instead of uncertainty factors is gaining attention. This paper presents a comprehensive discussion of a general framework that quantifies both the uncertainties in the no-adverse-effect level in the animal (using a benchmark-like approach) and the uncertainties in the various extrapolation steps involved (using uncertainty distributions). This approach results in an uncertainty distribution for the no-adverse-effect level in the sensitive human subpopulation, reflecting the overall scientific uncertainty associated with that level. A lower percentile of this distribution may be regarded as an acceptable exposure limit (e.g., RfD) that takes account of the various uncertainties in a nonconservative fashion. The same methodology may also be used as a tool to derive a distribution for possible human health effects at a given exposure level. We argue that in a probabilistic approach the uncertainty in the estimated no-adverse-effect level in the animal should be explicitly taken into account. Not only is this source of uncertainty too large to be ignored, it also has repercussions for the quantification of the other uncertainty distributions.
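A minimal Monte Carlo sketch of the framework (all distribution parameters below are hypothetical): each extrapolation step is represented by a lognormal uncertainty distribution rather than a fixed factor of 10, and a lower percentile of the resulting distribution serves as the candidate exposure limit.

```python
import math
import random

random.seed(1)

n = 100_000
limits = []
for _ in range(n):
    animal_nael = random.lognormvariate(math.log(10.0), 0.3)   # animal no-adverse-effect level, mg/kg-day
    interspecies = random.lognormvariate(math.log(10.0), 0.8)  # uncertain interspecies extrapolation factor
    intraspecies = random.lognormvariate(math.log(10.0), 0.8)  # uncertain intraspecies extrapolation factor
    limits.append(animal_nael / (interspecies * intraspecies))

limits.sort()
p05 = limits[int(0.05 * n)]   # 5th percentile as a candidate RfD-like limit
print(p05)
```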

4.
Benchmark dose (BMD) analysis was used to estimate an inhalation benchmark concentration for styrene neurotoxicity. Quantal data on neuropsychologic test results from styrene-exposed workers [Mutti et al. (1984). American Journal of Industrial Medicine, 5, 275-286] were used to quantify neurotoxicity, defined as the percent of tested workers who responded abnormally to ≥1, ≥2, or ≥3 out of a battery of eight tests. Exposure was based on previously published results on mean urinary mandelic- and phenylglyoxylic acid levels in the workers, converted to air styrene levels (15, 44, 74, or 115 ppm). Nonstyrene-exposed workers from the same region served as a control group. Maximum-likelihood estimates (MLEs) and BMDs at 5 and 10% response levels of the exposed population were obtained from log-normal analysis of the quantal data. The highest MLE was 9 ppm (BMD = 4 ppm) styrene and represents abnormal responses to ≥3 tests by 10% of the exposed population. The most health-protective MLE was 2 ppm styrene (BMD = 0.3 ppm) and represents abnormal responses to ≥1 test by 5% of the exposed population. A no observed adverse effect level/lowest observed adverse effect level (NOAEL/LOAEL) analysis of the same quantal data showed workers in all styrene exposure groups responded abnormally to ≥1, ≥2, or ≥3 tests, compared to controls, and the LOAEL was 15 ppm. A comparison of the BMD and NOAEL/LOAEL analyses suggests that at air styrene levels below the LOAEL, a segment of the worker population may be adversely affected. The benchmark approach will be useful for styrene noncancer risk assessment purposes by providing a more accurate estimate of potential risk that should, in turn, help to reduce the uncertainty that is a common problem in setting exposure levels.
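The log-normal (log-probit) dose-response calculation can be sketched as follows; the fitted parameters here are hypothetical, not the values estimated from the Mutti et al. data.

```python
import math

def phi(x):
    """Standard normal CDF."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

# Hypothetical fitted log-normal tolerance distribution on the ln-ppm scale
mu, sigma = math.log(60.0), 1.2

def response(dose_ppm):
    """Fraction of workers responding abnormally at a given air styrene level."""
    return phi((math.log(dose_ppm) - mu) / sigma)

z10 = -1.2815515655446004            # 10th-percentile standard normal quantile
bmd10 = math.exp(mu + sigma * z10)   # dose at which 10% of workers respond
print(bmd10)
```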

5.
Human health risk assessments use point values to develop risk estimates and thus impart a deterministic character to risk, which, by definition, is a probability phenomenon. The risk estimates are calculated based on individuals and then, using uncertainty factors (UFs), are extrapolated to the population, which is characterized by variability. Regulatory agencies have recommended the quantification of the impact of variability in risk assessments through the application of probabilistic methods. In the present study, a framework that deals with the quantitative analysis of uncertainty (U) and variability (V) in target tissue dose in the population was developed by applying probabilistic analysis to physiologically based toxicokinetic models. The mechanistic parameters that determine kinetics were described with probability density functions (PDFs). Since each PDF depicts the frequency of occurrence of all expected values of each parameter in the population, the combined effects of multiple sources of U/V were accounted for in the estimated distribution of tissue dose in the population, and a unified (adult and child) intraspecies toxicokinetic uncertainty factor, UFH-TK, was determined. The results show that the proposed framework accounts effectively for U/V in population toxicokinetics. The ratio of the 95th percentile to the 50th percentile of the annual average concentration of the chemical at the target tissue (i.e., the UFH-TK) varies with age. The ratio is equivalent to a unified intraspecies toxicokinetic UF, and it is one of the UFs by which the NOAEL can be divided to obtain the RfC/RfD. The 10-fold intraspecies UF is intended to account for uncertainty and variability in toxicokinetics (3.2x) and toxicodynamics (3.2x); this article deals exclusively with the toxicokinetic component.
The framework provides an alternative to the default methodology and is advantageous in that the evaluation of toxicokinetic variability is based on the distribution of the effective target tissue dose, rather than applied dose. It allows for the replacement of the default adult and children intraspecies UF with toxicokinetic data-derived values and provides accurate chemical-specific estimates for their magnitude. It shows that proper application of probability and toxicokinetic theories can reduce uncertainties when establishing exposure limits for specific compounds and provide better assurance that established limits are adequately protective. It contributes to the development of a probabilistic noncancer risk assessment framework and will ultimately lead to the unification of cancer and noncancer risk assessment methodologies.
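The data-derived intraspecies factor reduces to a percentile ratio of the population dose distribution; a sketch with a hypothetical lognormal tissue-dose distribution:

```python
import math
import random

random.seed(7)

# Hypothetical lognormal distribution of annual-average target-tissue dose
doses = sorted(random.lognormvariate(math.log(1.0), 0.5) for _ in range(200_000))

# Data-derived intraspecies toxicokinetic UF: 95th / 50th percentile
uf_h_tk = doses[int(0.95 * len(doses))] / doses[int(0.50 * len(doses))]
print(uf_h_tk)   # compare with the default toxicokinetic half-factor of 3.2
```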

6.
Variability is the heterogeneity of values within a population. Uncertainty refers to lack of knowledge regarding the true value of a quantity. Mixture distributions have the potential to improve the goodness of fit to data sets not adequately described by a single parametric distribution. Uncertainty due to random sampling error in statistics of interest can be estimated based upon bootstrap simulation. In order to evaluate the robustness of using mixture distributions as a basis for estimating both variability and uncertainty, 108 synthetic data sets generated from selected population mixture lognormal distributions were investigated, and properties of variability and uncertainty estimates were evaluated with respect to variation in sample size, mixing weight, and separation between components of mixtures. Furthermore, mixture distributions were compared with single-component distributions. Findings include: (1) mixing weight influences the stability of variability and uncertainty estimates; (2) bootstrap simulation results tend to be more stable for larger sample sizes; (3) when two components are well separated, the stability of bootstrap simulation is improved; however, a larger degree of uncertainty arises regarding the percentiles coinciding with the separated region; (4) when two components are not well separated, a single distribution may often be a better choice because it has fewer parameters and better numerical stability; and (5) dependencies exist in sampling distributions of parameters of mixtures and are influenced by the amount of separation between the components. An emission factor case study based upon NOx emissions from coal-fired tangential boilers is used to illustrate the application of the approach.
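A sketch of the bootstrap procedure for a two-component lognormal mixture (component parameters, mixing weight, and sample size are illustrative, not those of the study):

```python
import random

random.seed(3)

def draw_mixture(n, w=0.4, mu1=0.0, s1=0.5, mu2=2.0, s2=0.5):
    """Sample a two-component lognormal mixture with mixing weight w."""
    return [random.lognormvariate(mu1, s1) if random.random() < w
            else random.lognormvariate(mu2, s2) for _ in range(n)]

def pct(values, p):
    ordered = sorted(values)
    return ordered[min(len(ordered) - 1, int(p * len(ordered)))]

data = draw_mixture(200)

# Bootstrap the uncertainty in the 95th percentile of variability
reps = []
for _ in range(2000):
    resample = [random.choice(data) for _ in data]
    reps.append(pct(resample, 0.95))
lo, hi = pct(reps, 0.025), pct(reps, 0.975)
print(lo, hi)   # 95% uncertainty interval for the 95th variability percentile
```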

7.
Many environmental data sets, such as for air toxic emission factors, contain several values reported only as below detection limit. Such data sets are referred to as "censored." Typical approaches to dealing with censored data sets include replacing censored values with arbitrary values of zero, one-half of the detection limit, or the detection limit. Here, an approach to quantification of the variability and uncertainty of censored data sets is demonstrated. Empirical bootstrap simulation is used to simulate censored bootstrap samples from the original data. Maximum likelihood estimation (MLE) is used to fit parametric probability distributions to each bootstrap sample, thereby specifying alternative estimates of the unknown population distribution of the censored data sets. Sampling distributions for uncertainty in statistics such as the mean, median, and percentile are calculated. The robustness of the method was tested by application to different degrees of censoring, sample sizes, coefficients of variation, and numbers of detection limits. Lognormal, gamma, and Weibull distributions were evaluated. The reliability of using this method to estimate the mean was evaluated by averaging the best estimated means of 20 cases for a small sample size of 20. The confidence intervals for distribution percentiles estimated with the bootstrap/MLE method compared favorably to results obtained with the nonparametric Kaplan-Meier method. The bootstrap/MLE method is illustrated via an application to an empirical air toxic emission factor data set.
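The censored-data likelihood is the key detail: a nondetect contributes the CDF at its detection limit, while a detected value contributes the density. A minimal lognormal MLE sketch with hypothetical data (a coarse grid search stands in for a proper optimizer):

```python
import math

detected = [2.1, 3.5, 4.0, 6.3, 8.8, 12.0]   # hypothetical detected values
n_censored, dl = 4, 1.0                       # four values reported only as "< 1.0"

def phi(x):
    """Standard normal CDF."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def loglik(mu, sigma):
    # Censored observations contribute the log CDF at the detection limit
    ll = n_censored * math.log(max(phi((math.log(dl) - mu) / sigma), 1e-300))
    # Detected observations contribute the lognormal log density
    for x in detected:
        z = (math.log(x) - mu) / sigma
        ll += -math.log(x * sigma * math.sqrt(2.0 * math.pi)) - 0.5 * z * z
    return ll

# Coarse grid search over (mu, sigma) of the underlying normal distribution
mu_hat, sigma_hat = max(
    ((m / 100.0, s / 100.0) for m in range(-100, 301) for s in range(20, 301, 2)),
    key=lambda p: loglik(*p))
print(mu_hat, sigma_hat)
```

Bootstrapping would then repeat this fit on resampled data sets to characterize uncertainty in the fitted statistics.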

8.
Brand KP, Rhomberg L, Evans JS. Risk Analysis. 1999;19(2):295-308
The prominent role of animal bioassay evidence in environmental regulatory decisions compels a careful characterization of extrapolation uncertainties. In noncancer risk assessment, uncertainty factors are incorporated to account for each of several extrapolations required to convert a bioassay outcome into a putative subthreshold dose for humans. Measures of relative toxicity taken between different dosing regimens, different endpoints, or different species serve as a reference for establishing the uncertainty factors. Ratios of no observed adverse effect levels (NOAELs) have been used for this purpose; statistical summaries of such ratios across sets of chemicals are widely used to guide the setting of uncertainty factors. Given the poor statistical properties of NOAELs, the informativeness of these summary statistics is open to question. To evaluate this, we develop an approach to calibrate the ability of NOAEL ratios to reveal true properties of a specified distribution for relative toxicity. A priority of this analysis is to account for dependencies of NOAEL ratios on experimental design and other exogenous factors. Our analysis of NOAEL ratio summary statistics finds (1) that such dependencies are complex and produce pronounced systematic errors and (2) that sampling error associated with typical sample sizes (50 chemicals) is non-negligible. These uncertainties strongly suggest that NOAEL ratio summary statistics cannot be taken at face value; conclusions based on such ratios reported in well over a dozen published papers should be reconsidered.

9.
Concern about the degree of uncertainty and potential conservatism in deterministic point estimates of risk has prompted researchers to turn increasingly to probabilistic methods for risk assessment. With Monte Carlo simulation techniques, distributions of risk reflecting uncertainty and/or variability are generated as an alternative. In this paper the compounding of conservatism(1) between the level associated with point estimate inputs selected from probability distributions and the level associated with the deterministic value of risk calculated using these inputs is explored. Two measures of compounded conservatism are compared and contrasted. The first measure considered, F, is defined as the ratio of the risk value, Rd, calculated deterministically as a function of n inputs each at the jth percentile of its probability distribution, and the risk value, Rj, that falls at the jth percentile of the simulated risk distribution (i.e., F = Rd/Rj). The percentile of the simulated risk distribution that corresponds to the deterministic value, Rd, serves as a second measure of compounded conservatism. Analytical results for simple products of lognormal distributions are presented. In addition, a numerical treatment of several complex cases is presented using five simulation analyses from the literature to illustrate. Overall, there are cases in which conservatism compounds dramatically for deterministic point estimates of risk constructed from upper percentiles of input parameters, as well as those for which the effect is less notable. The analytical and numerical techniques discussed are intended to help analysts explore the factors that influence the magnitude of compounding conservatism in specific cases.
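For a product of lognormal inputs, F can be checked by simulation; this sketch uses three identical lognormal inputs with an illustrative sigma:

```python
import math
import random

random.seed(11)

z95 = 1.6448536269514722
sigmas = [0.6, 0.6, 0.6]   # illustrative lognormal sigmas; all medians are 1

# Deterministic risk: every input set at its own 95th percentile
r_d = math.prod(math.exp(z95 * s) for s in sigmas)

# Simulated risk distribution and its 95th percentile
n = 200_000
sims = sorted(math.prod(random.lognormvariate(0.0, s) for s in sigmas)
              for _ in range(n))
r_j = sims[int(0.95 * n)]

f = r_d / r_j   # F > 1: percentile inputs compound beyond the risk 95th percentile
print(f)
```

Analytically, the product is lognormal with sigma = sqrt(3 * 0.6^2), so F = exp(z95 * (1.8 - sqrt(1.08))), about 3.5 for these inputs.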

10.
A probabilistic model (SHEDS-Wood) was developed to examine children's exposure and dose to chromated copper arsenate (CCA)-treated wood, as described in Part 1 of this two-part article. This Part 2 article discusses sensitivity and uncertainty analyses conducted to assess the key model inputs and areas of needed research for children's exposure to CCA-treated playsets and decks. The following types of analyses were conducted: (1) sensitivity analyses using a percentile scaling approach and multiple stepwise regression; and (2) uncertainty analyses using the bootstrap and two-stage Monte Carlo techniques. The five most important variables, based on both sensitivity and uncertainty analyses, were: wood surface residue-to-skin transfer efficiency; wood surface residue levels; fraction of hand surface area mouthed per mouthing event; average fraction of nonresidential outdoor time a child plays on/around CCA-treated public playsets; and frequency of hand washing. In general, uncertainty in predicted population dose estimates due to parameter uncertainty spanned a factor of 8 at the 5th and 95th percentiles and a factor of 4 at the 50th percentile. Data were available for most of the key model inputs identified with sensitivity and uncertainty analyses; however, there were few or no data for some key inputs. To evaluate and improve the accuracy of model results, future measurement studies should obtain longitudinal time-activity diary information on children, spatial and temporal measurements of residue and soil concentrations on or near CCA-treated playsets and decks, and key exposure factors. Future studies should also address other sources of uncertainty in addition to parameter uncertainty, such as scenario and model uncertainty.

11.
Estimates of the lifetime-absorbed daily dose (LADD) of acrylamide resulting from use of representative personal-care products containing polyacrylamides have been developed. All of the parameters that determine the amount of acrylamide absorbed by an individual vary from one individual to another. Moreover, for some parameters there is uncertainty as to which is the correct or representative value from a range of values. Consequently, the parameters used in the estimation of the LADD of acrylamide from usage of a particular product type (e.g., deodorant, makeup, etc.) were represented by distributions evaluated using Monte Carlo analyses.(1-4) From these data, distributions of values for key parameters, such as the amount of acrylamide in polyacrylamide, absorption fraction, etc., were defined and used to provide a distribution of LADDs for each personal-care product. The estimated total acrylamide LADDs (across all products) at the median, mean, and 95th percentile of the distribution of individual LADD values were 4.7 × 10^-8, 2.3 × 10^-7, and 7.3 × 10^-7 mg/kg/day for females and 3.6 × 10^-8, 1.7 × 10^-7, and 5.4 × 10^-7 mg/kg/day for males. The ratios of the LADDs to the risk-specific dose corresponding to a target risk level of 1 × 10^-5 (the acceptable risk level for this investigation), derived using approaches typically used by the FDA and the USEPA and proposed for use by the European Union (EU), were also calculated. All ratios were well below 1, indicating that the extra lifetime cancer risks from the use of polyacrylamide-containing personal-care products, in the manner assumed in this assessment, are well below acceptable levels. Even if it were assumed that an individual used all of the products together, the estimated LADD would still provide a dose that was well below the acceptable risk levels.
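A compressed sketch of a per-product Monte Carlo of this kind (all distributions and parameter values below are hypothetical placeholders, not the assessment's inputs):

```python
import random

random.seed(5)

n = 100_000
ladds = []
for _ in range(n):
    amount_mg = random.triangular(100.0, 2000.0, 500.0)  # product applied per use, mg
    monomer_frac = random.uniform(1e-6, 5e-6)            # residual acrylamide fraction
    absorbed_frac = random.uniform(0.001, 0.05)          # dermal absorption fraction
    uses_per_day = random.triangular(0.2, 3.0, 1.0)
    bw_kg = random.normalvariate(70.0, 12.0)
    if bw_kg <= 30.0:
        continue  # discard implausible sampled body weights
    ladds.append(amount_mg * monomer_frac * absorbed_frac * uses_per_day / bw_kg)

ladds.sort()
median = ladds[len(ladds) // 2]
p95 = ladds[int(0.95 * len(ladds))]
print(median, p95)   # LADD summary statistics, mg/kg/day
```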

12.
Variability arises due to differences in the value of a quantity among different members of a population. Uncertainty arises due to lack of knowledge regarding the true value of a quantity for a given member of a population. We describe and evaluate two methods for quantifying both variability and uncertainty. These methods, bootstrap simulation and a likelihood-based method, are applied to three datasets. The datasets include a synthetic sample of 19 values from a lognormal distribution, a sample of nine values obtained from measurements of the PCB concentration in leafy produce, and a sample of five values for the partitioning of chromium in the flue gas desulfurization system of coal-fired power plants. For each of these datasets, we employ the two methods to characterize uncertainty in the arithmetic mean and standard deviation, cumulative distribution functions based upon fitted parametric distributions, the 95th percentile of variability, and the 63rd percentile of uncertainty for the 81st percentile of variability. The latter is intended to show that it is possible to describe any point within the uncertain frequency distribution by specifying an uncertainty percentile and a variability percentile. Using the bootstrap method, we compare results based upon use of the method of matching moments and the method of maximum likelihood for fitting distributions to data. Our results indicate that with only 5-19 data points, as in the datasets we have evaluated, there is substantial uncertainty due to random sampling error. Both the bootstrap and likelihood-based approaches yield comparable uncertainty estimates in most cases.
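The "percentile of a percentile" idea reduces to nesting one sort inside another; a bootstrap sketch with a synthetic 19-point lognormal sample (freshly simulated here, not the study's values):

```python
import random

random.seed(19)

data = [random.lognormvariate(0.0, 1.0) for _ in range(19)]

def pct(values, p):
    ordered = sorted(values)
    return ordered[min(len(ordered) - 1, int(p * len(ordered)))]

# Distribution of the 81st percentile of variability across bootstrap samples
reps = sorted(pct([random.choice(data) for _ in data], 0.81) for _ in range(5000))

# 63rd percentile of uncertainty for the 81st percentile of variability
u63_v81 = reps[int(0.63 * len(reps))]
print(u63_v81)
```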

13.
Children may be more susceptible to toxicity from some environmental chemicals than adults. This susceptibility may occur during narrow age periods (windows), which can last from days to years depending on the toxicant. Breathing rates specific to narrow age periods are useful to assess inhalation dose during suspected windows of susceptibility. Because existing breathing rates used in risk assessment are typically for broad age ranges or are based on data not representative of the population, we derived daily breathing rates for narrow age ranges of children designed to be more representative of the current U.S. children's population. These rates were derived using the metabolic conversion method of Layton (1993) and energy intake data adjusted to represent the U.S. population from a relatively recent dietary survey (CSFII 1994-1996, 1998). We calculated conversion factors more specific to children than those previously used. Both nonnormalized (L/day) and normalized (L/kg-day) breathing rates were derived and found comparable to rates derived using energy estimates that are accurate for the individuals sampled but not representative of the population. Estimates of breathing rate variability within a population can be used with stochastic techniques to characterize the range of risk in the population from inhalation exposures. For each age and age-gender group, we present the mean, standard error of the mean, percentiles (50th, 90th, and 95th), geometric mean, standard deviation, 95th percentile, and best-fit parametric models of the breathing rate distributions. The standard errors characterize uncertainty in the parameter estimates, while the percentiles describe the combined interindividual and intraindividual variability of the sampled population. These breathing rates can be used for risk assessment of subchronic and chronic inhalation exposures of narrow age groups of children.
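Layton's metabolic conversion is essentially one multiplication chain; the constants below are rounded, commonly cited approximations, not the child-specific conversion factors derived in the article:

```python
# Approximate metabolic conversion (illustrative constants):
# breathing rate = energy expenditure / (energy per litre O2) x ventilatory equivalent
energy_kcal_day = 2000.0   # hypothetical daily energy intake
kcal_per_l_o2 = 4.8        # ~energy released per litre of oxygen consumed
vent_equiv = 27.0          # litres of air ventilated per litre of oxygen (approximate)

breathing_l_day = energy_kcal_day / kcal_per_l_o2 * vent_equiv
breathing_l_kg_day = breathing_l_day / 70.0   # normalized to a 70-kg adult
print(breathing_l_day, breathing_l_kg_day)
```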

14.
Uncertainty in Cancer Risk Estimates
Several existing databases compiled by Gold et al.(1-3) for carcinogenesis bioassays are examined to obtain estimates of the reproducibility of cancer rates across experiments, strains, and rodent species. A measure of carcinogenic potency is given by the TD50 (daily dose that causes a tumor type in 50% of the exposed animals that otherwise would not develop the tumor in a standard lifetime). The lognormal distribution can be used to model the uncertainty of the estimates of potency (TD50) and the ratio of TD50s between two species. For near-replicate bioassays, approximately 95% of the TD50s are estimated to be within a factor of 4 of the mean. Between strains, about 95% of the TD50s are estimated to be within a factor of 11 of their mean, and the pure genetic component of variability is accounted for by a factor of 6.8. Between rats and mice, about 95% of the TD50s are estimated to be within a factor of 32 of the mean, while between humans and experimental animals the factor is 110 for 20 chemicals reported by Allen et al.(4) The common practice of basing cancer risk estimates on the most sensitive rodent species-strain-sex combination and using interspecies dose scaling based on body surface area appears to overestimate cancer rates for these 20 human carcinogens by about one order of magnitude on average. Hence, for chemicals where the dose-response is nearly linear below experimental doses, cancer risk estimates based on animal data are not necessarily conservative and may range from a factor of 10 too low for human carcinogens up to a factor of 1000 too high for approximately 95% of the chemicals tested to date. These limits may need to be modified for specific chemicals where additional mechanistic or pharmacokinetic information may suggest alterations or where particularly sensitive subpopulations may be exposed. Supralinearity could lead to anticonservative estimates of cancer risk.
Underestimating cancer risk by a specific factor has a much larger impact on the actual number of cancer cases than overestimates of smaller risks by the same factor. This paper does not address the uncertainties in high to low dose extrapolation. If the dose-response is sufficiently nonlinear at low doses to produce cancer risks near zero, then low-dose risk estimates based on linear extrapolation are likely to overestimate risk and the limits of uncertainty cannot be established.
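The reported factors map directly onto lognormal standard deviations: if 95% of TD50 values lie within a factor k of the mean, then k = exp(1.96 sigma) on the log scale. A small sketch:

```python
import math

def sigma_from_factor(k):
    """Lognormal log-scale sigma implied by '95% within a factor k of the mean'."""
    return math.log(k) / 1.959963984540054

for label, k in [("near-replicate", 4), ("between strains", 11),
                 ("rats vs. mice", 32), ("animals vs. humans", 110)]:
    print(label, round(sigma_from_factor(k), 2))
```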

15.
Human exposure to halons and halon replacement chemicals is often regulated on the basis of cardiac sensitization potential. The dose-response data obtained from animal testing are used to determine the no observable adverse effect level (NOAEL) and lowest observable adverse effect level (LOAEL) values. This approach alone does not provide the information necessary to evaluate the cardiac sensitization potential for the chemical of interest under a variety of exposure concentrations and durations. In order to provide a tool for decision-makers and regulators tasked with setting exposure guidelines for halon replacement chemicals, a quantitative approach was established which allowed exposures to be assessed in terms of the chemical concentrations in blood during the exposure. A physiologically-based pharmacokinetic (PBPK) model was used to simulate blood concentrations of Halon 1301 (bromotrifluoromethane, CF3Br), HFC-125 (pentafluoroethane, CHF2CF3), HFC-227ea (heptafluoropropane, CF3CHFCF3), HCFC-123 (dichlorotrifluoroethane, CHCl2CF3), and CF3I (trifluoroiodomethane) during inhalation exposures. This work demonstrates a quantitative approach for use in linking chemical inhalation exposures to the levels of chemical in blood achieved during the exposure.
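The full PBPK model is multi-compartment; a one-compartment sketch with hypothetical rate constants is enough to illustrate the concentration-and-duration dependence the abstract emphasizes:

```python
import math

def blood_conc(c_inhaled_ppm, t_min, k_uptake=0.05, k_elim=0.1):
    """One-compartment approach to steady state (hypothetical rate constants):
    C(t) = (k_uptake / k_elim) * C_inhaled * (1 - exp(-k_elim * t))."""
    return (k_uptake / k_elim) * c_inhaled_ppm * (1.0 - math.exp(-k_elim * t_min))

# Blood concentration rises with exposure duration toward a plateau set by
# the inhaled concentration, so the same blood level can be reached by short,
# high exposures or longer, lower ones.
print(blood_conc(100.0, 5.0), blood_conc(100.0, 60.0))
```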

16.
Risk Analysis. 2018;38(8):1576-1584
Fault trees are used in reliability modeling to create logical models of fault combinations that can lead to undesirable events. The output of a fault tree analysis (the top event probability) is expressed in terms of the failure probabilities of basic events that are input to the model. Typically, the basic event probabilities are not known exactly, but are modeled as probability distributions; therefore, the top event probability is also represented as an uncertainty distribution. Monte Carlo methods are generally used for evaluating the uncertainty distribution, but such calculations are computationally intensive and do not readily reveal the dominant contributors to the uncertainty. In this article, a closed-form approximation for the fault tree top event uncertainty distribution is developed, which is applicable when the uncertainties in the basic events of the model are lognormally distributed. The results of the approximate method are compared with results from two sampling-based methods: namely, the Monte Carlo method and the Wilks method based on order statistics. It is shown that the closed-form expression can provide a reasonable approximation to results obtained by Monte Carlo sampling, without incurring the computational expense. The Wilks method is found to be a useful means of providing an upper bound for the percentiles of the uncertainty distribution while being computationally inexpensive compared with full Monte Carlo sampling. The lognormal approximation method and the Wilks method appear to be attractive, practical alternatives for the evaluation of uncertainty in the output of fault trees and similar multilinear models.
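A sketch contrasting full Monte Carlo with the Wilks bound for a toy two-branch fault tree (basic-event medians and error factors are hypothetical):

```python
import math
import random

random.seed(23)

def draw_p(median, error_factor):
    """Lognormal basic-event probability; error factor = 95th/50th percentile."""
    sigma = math.log(error_factor) / 1.6448536269514722
    return min(1.0, random.lognormvariate(math.log(median), sigma))

def top_event():
    # Top event: (E1 AND E2) OR E3
    p1, p2, p3 = draw_p(1e-2, 3.0), draw_p(5e-3, 3.0), draw_p(1e-4, 10.0)
    return 1.0 - (1.0 - p1 * p2) * (1.0 - p3)

# Full Monte Carlo estimate of the 95th percentile of the top event probability
sims = sorted(top_event() for _ in range(50_000))
p95_mc = sims[int(0.95 * len(sims))]

# Wilks: the maximum of 59 samples is a 95%-confidence upper bound on the
# 95th percentile, since 0.95**59 < 0.05
wilks_bound = max(top_event() for _ in range(59))
print(p95_mc, wilks_bound)
```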

17.
Information on exposure factors used in quantitative risk assessments has previously been compiled and reported for U.S. and European populations. However, due to the advancement of science and knowledge, these reports are in continuous need of updating with new data. Equally important is the change over time of many exposure factors related to both physiological characteristics and human behavior. Body weight, skin surface area, time use, and dietary habits are some of the most obvious examples covered here. A wealth of data is available from literature not primarily gathered for the purpose of risk assessment. Here we review a number of key exposure factors and compare these factors between northern Europe (represented here by Sweden) and the United States. Many previous compilations of exposure factor data focus on interindividual variability and variability between sexes and age groups, while uncertainty is mainly dealt with in a qualitative way. In this article variability is assessed along with uncertainty. As estimates of central tendency and interindividual variability, the mean, standard deviation, skewness, kurtosis, and multiple percentiles were calculated, while uncertainty was characterized using 95% confidence intervals for these parameters. The presented statistics are appropriate for use in deterministic analyses using point estimates for each input parameter as well as in probabilistic assessments.
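Summarizing variability with point statistics and uncertainty with bootstrap confidence intervals can be sketched as follows (hypothetical body-weight sample, not the compiled survey data):

```python
import random
import statistics

random.seed(29)

data = [random.normalvariate(75.0, 14.0) for _ in range(500)]  # body weight, kg

def boot_ci(values, stat, b=2000, alpha=0.05):
    """Percentile-bootstrap confidence interval for a statistic."""
    reps = sorted(stat([random.choice(values) for _ in values]) for _ in range(b))
    return reps[int(alpha / 2 * b)], reps[int((1 - alpha / 2) * b)]

mean = statistics.fmean(data)              # central tendency (variability summary)
sd = statistics.stdev(data)                # interindividual spread
ci_mean = boot_ci(data, statistics.fmean)  # uncertainty in the mean
print(mean, sd, ci_mean)
```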

18.
A tank car derailment in northern California in 1991 spilled metam sodium into the Sacramento River and released its breakdown product, methyl isothiocyanate (MITC), into the air. This paper describes the risk evaluation process used. Over 240 individuals reported symptoms such as eye and throat irritation, dizziness, and shortness of breath. One-hour reference exposure levels (RELs) were developed for MITC and compared to exposure concentrations. Ocular irritation in cats was the most sensitive endpoint reported. The no observed adverse effect level (NOAEL), divided by an uncertainty factor (UF) of 100, produced an REL of 0.5 ppb of MITC in air to prevent discomfort. An REL to prevent disability was estimated to be 40 ppb, and an REL to prevent life-threatening injury was estimated to be 150 ppb. Measured MITC levels ranged from 0.2 to 37 ppb, and estimated peak levels ranged from 140 to 1600 ppb. The usefulness of RELs for emergency planning is discussed.
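The REL arithmetic implied by the abstract (the 50-ppb NOAEL is back-calculated here from REL times UF, not stated directly in the text):

```python
noael_ppb = 50.0   # ocular-irritation NOAEL implied by REL x UF
uf = 100.0         # composite uncertainty factor applied to the NOAEL
rel_ppb = noael_ppb / uf
print(rel_ppb)     # 0.5 ppb, the discomfort-level REL
```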

19.
In Part 1 of this article we developed an approach for the calculation of cancer effect measures for life cycle assessment (LCA). In this article, we propose and evaluate a method for the screening of noncancer toxicological health effects. This approach draws on the noncancer health risk assessment concept of benchmark dose, while noting important differences from regulatory applications in the objectives of an LCA study. We adopt the central-tendency estimate of the toxicological effect dose inducing a 10% response over background, the ED10, to provide a consistent point of departure for default linear low-dose response estimates (betaED10). This explicit estimation of low-dose risks, while necessary in LCA, is in marked contrast to many traditional procedures for noncancer assessments. For pragmatic reasons, mechanistic thresholds and nonlinear low-dose response curves were not implemented in the presented framework. In essence, for the comparative needs of LCA, we propose initially screening alternative activities or products on the degree to which the associated chemical emissions erode their margins of exposure, which may or may not be manifested as increases in disease incidence. We illustrate the method here by deriving the betaED10 slope factors from bioassay data for 12 chemicals and outline some of the possibilities for extrapolation from other, more readily available measures, such as no observable adverse effect levels (NOAELs), avoiding uncertainty factors that lead to inconsistent degrees of conservatism from chemical to chemical. These extrapolations facilitated the initial calculation of slope factors for an additional 403 compounds, ranging from 10^-6 to 10^3 (risk per mg/kg-day dose).
The potential consequences of the effects are taken into account in a preliminary approach by combining the betaED10 with the severity measure disability-adjusted life years (DALY), providing a screening-level estimate of the potential consequences associated with exposures, integrated over time and space, to a given mass of chemical released into the environment for use in LCA.
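The screening chain the abstract outlines (linear slope factor through the ED10, scaled by a DALY severity weight) reduces to a few multiplications. In this sketch the ED10, dose, and DALY-per-case values are hypothetical placeholders, not values from the paper.

```python
# Screening-level sketch: linear low-dose slope through the ED10,
# combined with a DALY severity weight. All numbers are illustrative.
ed10 = 25.0             # mg/kg-day dose giving 10% extra response (hypothetical)
beta_ed10 = 0.1 / ed10  # slope factor: risk per mg/kg-day, linear through ED10
dose = 1e-3             # per-person lifetime intake, mg/kg-day (hypothetical)
daly_per_case = 2.5     # severity of the endpoint in DALYs (hypothetical)

risk = beta_ed10 * dose           # incremental individual risk
impact = risk * daly_per_case     # screening impact, DALYs per person
print(f"betaED10 = {beta_ed10:.4f} per mg/kg-day; impact = {impact:.2e} DALY/person")
```

Because the slope is forced linear from the origin through the ED10, two products can be ranked by `beta_ed10 * dose * daly_per_case` without invoking chemical-specific uncertainty factors, which is the consistency the authors argue LCA screening needs.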

20.
A central part of probabilistic public health risk assessment is the selection of probability distributions for the uncertain input variables. In this paper, we apply the first-order reliability method (FORM)(1-3) as a probabilistic tool to assess the effect of the probability distributions of the input random variables on the probability that risk exceeds a threshold level (termed the probability of failure) and on the relevant probabilistic sensitivities. The analysis was applied to a case study given by Thompson et al.(4) on cancer risk caused by the ingestion of benzene-contaminated soil. Normal, lognormal, and uniform distributions were used in the analysis. The results show that the choice of probability distribution function for the uncertain variables in this case study had a moderate impact on the probability that values would fall above a given threshold risk when the threshold was at the 50th percentile of the original risk distribution given by Thompson et al.(4) The impact was much greater when the threshold risk level was at the 95th percentile. The impact on uncertainty sensitivity, however, showed the reversed trend: it was more appreciable at the 50th percentile of the original risk distribution than at the 95th percentile. Nevertheless, the choice of distribution shape did not alter the rank order of probabilistic sensitivity of the basic uncertain variables.
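The kind of comparison the abstract describes can be illustrated with crude Monte Carlo sampling in place of FORM (FORM itself approximates the failure probability analytically at the design point; plain sampling is used here only as a simple stand-in). The input below is a single uncertain variable with the same mean and standard deviation modeled two ways; all numbers are hypothetical.

```python
import math
import random

random.seed(0)

# Crude Monte Carlo stand-in for the FORM analysis described above:
# estimate P(risk > threshold), the "probability of failure", when the
# same uncertain input is modeled with different distributions.
N = 100_000

def p_exceed(samples, threshold):
    """Fraction of sampled values above the threshold."""
    return sum(x > threshold for x in samples) / len(samples)

# One input with identical arithmetic mean and sd, modeled as normal
# vs. lognormal (parameters moment-matched to mu and sd).
mu, sd = 1.0, 0.5
normal = [random.gauss(mu, sd) for _ in range(N)]
sigma2 = math.log(1 + (sd / mu) ** 2)
lognorm = [random.lognormvariate(math.log(mu) - sigma2 / 2, math.sqrt(sigma2))
           for _ in range(N)]

for thr in (1.0, 2.0):  # a central threshold vs. an upper-tail threshold
    print(f"threshold {thr}: P_normal = {p_exceed(normal, thr):.3f}, "
          f"P_lognormal = {p_exceed(lognorm, thr):.3f}")
```

Running this shows how the distributional assumption shifts the exceedance probability differently at a central threshold than in the upper tail, which is the sensitivity-to-distribution effect the paper examines with FORM.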
