Similar Literature
20 similar articles found
1.
A Note on Compounded Conservatism
Compounded conservatism (or "creeping safety") describes the impact of using conservative, upper-bound estimates of the values of multiple input variates to obtain a conservative estimate of risk modeled as an increasing function of those variates. In a simple multiplicative model of risk, for example, if upper p-fractile (100p-th percentile) values are used for each of several statistically independent input variates, the resulting risk estimate will be the upper p'-fractile of risk predicted according to that multiplicative model, where p' > p. The amount of compounded conservatism reflected by the difference between p' and p may be substantial, depending on the number of inputs, their relative uncertainties, and the value of p selected. Particular numerical examples of compounded conservatism are often cited, but an analytic approach may better serve to conceptualize and communicate its potential quantitative impact. This note briefly outlines such an approach and illustrates its application to the case of risk modeled as a product of lognormally distributed inputs.
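The analytic result for the lognormal-product case can be sketched directly. Below is a minimal Python illustration, assuming risk is a product of three independent lognormal inputs; the log-scale SDs are hypothetical values chosen for illustration, not taken from the paper.

```python
import numpy as np
from scipy.stats import norm

# Risk = X1 * X2 * X3 with the X_i lognormal and independent, so
# ln(risk) ~ Normal(sum mu_i, sqrt(sum sigma_i^2)).
sigmas = np.array([0.5, 0.8, 1.0])  # log-scale SDs (illustrative assumption)
p = 0.95
z_p = norm.ppf(p)

# Plugging the upper p-fractile of every input into the product shifts
# ln(risk) by z_p * sum(sigma_i); the true p'-fractile shifts it by
# z_p' * sqrt(sum sigma_i^2). Equating the two gives z_p'.
z_p_prime = z_p * sigmas.sum() / np.sqrt((sigmas ** 2).sum())
p_prime = norm.cdf(z_p_prime)
print(f"p = {p:.3f} per input  ->  p' = {p_prime:.4f} for the product")
# Three "95th percentile" inputs compound here to roughly a 99.7th
# percentile risk estimate.
```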

2.
A central part of probabilistic public health risk assessment is the selection of probability distributions for the uncertain input variables. In this paper, we apply the first-order reliability method (FORM)(1-3) as a probabilistic tool to assess the effect of probability distributions of the input random variables on the probability that risk exceeds a threshold level (termed the probability of failure) and on the relevant probabilistic sensitivities. The analysis was applied to a case study given by Thompson et al.(4) on cancer risk caused by the ingestion of benzene-contaminated soil. Normal, lognormal, and uniform distributions were used in the analysis. The results show that the selection of a probability distribution function for the uncertain variables in this case study had a moderate impact on the probability that values would fall above a given threshold risk when the threshold risk is at the 50th percentile of the original distribution given by Thompson et al.,(4) but a much greater impact when the threshold risk level was at the 95th percentile. The impact on uncertainty sensitivity showed the reverse trend: it was more appreciable at the 50th percentile of the original risk distribution than at the 95th percentile. Nevertheless, the choice of distribution shape did not alter the order of probabilistic sensitivity of the basic uncertain variables.
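A compact sketch of the FORM calculation is given below: the inputs are mapped from standard-normal space by inverse CDF, and the reliability index is found as the distance from the origin to the limit-state surface. The distributions and threshold are illustrative stand-ins, not the Thompson et al. case-study values.

```python
import numpy as np
from scipy import optimize, stats

# Independent inputs of a multiplicative risk model (all assumed):
dists = [stats.lognorm(s=0.6, scale=1.0),    # e.g., soil concentration
         stats.lognorm(s=0.4, scale=0.05),   # e.g., intake rate
         stats.uniform(loc=0.5, scale=0.5)]  # e.g., exposure factor
threshold = 0.2                              # risk threshold (assumed)

def x_of_u(u):
    # Map standard-normal space U to physical space X by inverse CDF.
    return np.array([d.ppf(stats.norm.cdf(ui)) for d, ui in zip(dists, u)])

def g(u):
    # Limit-state function: "failure" (risk exceeds threshold) when g < 0.
    return threshold - np.prod(x_of_u(u))

# FORM design point: the point on g = 0 closest to the origin in U-space.
res = optimize.minimize(lambda u: u @ u, x0=np.ones(len(dists)),
                        constraints={"type": "eq", "fun": g})
beta = np.sqrt(res.fun)                      # reliability index
print(f"beta = {beta:.3f},  P(risk > threshold) ~ {stats.norm.cdf(-beta):.4f}")
```

Swapping the distribution shapes in `dists` while holding their means and variances fixed reproduces the kind of sensitivity-to-distribution comparison the study performs.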

3.
Variability arises due to differences in the value of a quantity among different members of a population. Uncertainty arises due to lack of knowledge regarding the true value of a quantity for a given member of a population. We describe and evaluate two methods for quantifying both variability and uncertainty. These methods, bootstrap simulation and a likelihood-based method, are applied to three datasets: a synthetic sample of 19 values from a lognormal distribution, a sample of nine values obtained from measurements of the PCB concentration in leafy produce, and a sample of five values for the partitioning of chromium in the flue gas desulfurization system of coal-fired power plants. For each of these datasets, we employ the two methods to characterize uncertainty in the arithmetic mean and standard deviation, cumulative distribution functions based upon fitted parametric distributions, the 95th percentile of variability, and the 63rd percentile of uncertainty for the 81st percentile of variability. The latter is intended to show that it is possible to describe any point within the uncertain frequency distribution by specifying an uncertainty percentile and a variability percentile. Using the bootstrap method, we compare results based upon use of the method of matching moments and the method of maximum likelihood for fitting distributions to data. Our results indicate that with only 5–19 data points, as in the datasets we have evaluated, there is substantial uncertainty due to random sampling error. Both the bootstrap and likelihood-based approaches yield comparable uncertainty estimates in most cases.
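The bootstrap half of the comparison is easy to sketch. The Python fragment below resamples a small dataset, refits a lognormal by maximum likelihood on each resample, and reads off confidence intervals for the mean, the standard deviation, and the 95th percentile of variability; the data are synthetic stand-ins, not the paper's datasets.

```python
import numpy as np

rng = np.random.default_rng(1)
data = rng.lognormal(mean=0.0, sigma=1.0, size=9)  # stand-in for n = 9 sample

B = 5000
boot = np.empty((B, 3))
for b in range(B):
    resample = rng.choice(data, size=data.size, replace=True)
    # MLE of a lognormal is just the mean/SD of the log data; the fitted
    # 95th percentile of variability is exp(mu + 1.645 * sd).
    mu, sd = np.log(resample).mean(), np.log(resample).std(ddof=1)
    boot[b] = (resample.mean(), resample.std(ddof=1), np.exp(mu + 1.645 * sd))

for name, col in zip(("mean", "std dev", "95th pct of variability"), boot.T):
    lo, hi = np.percentile(col, [2.5, 97.5])
    print(f"{name}: 95% confidence interval ({lo:.2f}, {hi:.2f})")
```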

4.
Risks from exposure to contaminated land are often assessed with the aid of mathematical models. The current probabilistic approach is a considerable improvement on previous deterministic risk assessment practices, in that it attempts to characterize uncertainty and variability. However, some inputs continue to be assigned as precise numbers, while others are characterized as precise probability distributions. Such precision is hard to justify, and we show in this article how rounding errors and distribution assumptions can affect an exposure assessment. The outcomes of traditional deterministic point estimates and Monte Carlo simulations were compared to probability bounds analyses. Assigning all scalars as imprecise numbers (intervals prescribed by significant digits) added uncertainty of about one order of magnitude to the deterministic point estimate. Similarly, representing probability distributions as probability boxes added several orders of magnitude to the uncertainty of the probabilistic estimate. This indicates that the uncertainty in such assessments is actually much greater than currently reported. The article suggests that full disclosure of the uncertainty may facilitate decision making by opening up a negotiation window. In the risk analysis process, it is also an ethical obligation to clarify the boundary between the scientific and social domains.
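The interval (imprecise-scalar) part of the analysis amounts to propagating significant-digit endpoints through the exposure equation. A minimal Python sketch follows; the equation form, parameter values, and intervals are invented for illustration, not the article's.

```python
# Exposure equation: dose = C * IR * EF / BW (all positive), so interval
# endpoints combine monotonically.
C  = (0.25, 0.35)       # soil concentration, mg/kg ("0.3" as an interval)
IR = (9.5e-5, 1.05e-4)  # soil ingestion rate, kg/day ("1e-4" as an interval)
EF = (0.65, 0.75)       # exposure fraction ("0.7" as an interval)
BW = (65.0, 75.0)       # body weight, kg ("70" as an interval)

lo = C[0] * IR[0] * EF[0] / BW[1]   # minimize numerator, maximize denominator
hi = C[1] * IR[1] * EF[1] / BW[0]   # maximize numerator, minimize denominator
point = 0.3 * 1e-4 * 0.7 / 70.0

print(f"point estimate: {point:.2e} mg/kg-day")
print(f"interval:       [{lo:.2e}, {hi:.2e}] mg/kg-day")
# Even four imprecise scalars roughly double the spread here; with the
# dozens of rounded inputs of a full assessment the widening compounds.
```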

5.
Roy L. Smith, Risk Analysis, 1994, 14(4): 433-439
This work presents a comparison of probabilistic and deterministic health risk estimates based on data from an industrial site in the northeastern United States. The risk assessment considered exposures to volatile solvents by drinking water ingestion and showering. Probability densities used as inputs included concentrations, contact rates, and exposure frequencies; dose-response inputs were single values. Deterministic risk estimates were calculated by the "reasonable maximum exposure" (RME) approach recommended by the EPA Superfund program. The RME non-carcinogenic risk fell between the 90th and the 95th percentile of the probability density; the RME cancer risk fell between the 95th percentile and the maximum. These results suggest that in this case (1) EPA's deterministic RME risk was reasonably protective, (2) results of probabilistic and deterministic calculations were consistent, and (3) commercially available Monte Carlo software effectively provided the multiple risk estimates recommended by recent EPA guidance.

6.
Mark Nicas, Risk Analysis, 1996, 16(4): 527-538
An adverse health impact is often treated as a binary variable (response vs. no response), in which case the risk of response is defined as a monotonically increasing function R of the dose received D. For a population of size N, specifying the forms of R(D) and of the probability density function (pdf) for D allows determination of the pdf for risk, and computation of the mean and variance of the distribution of incidence, denoted E[S_N] and Var[S_N], respectively. The distribution of S_N describes uncertainty in the future incidence value. Given variability in dose (and risk) among population members, the distribution of incidence is Poisson-binomial. However, depending on the value of E[S_N], the distribution of incidence is adequately approximated by a Poisson distribution with parameter μ = E[S_N], or by a normal distribution with mean and variance equal to E[S_N] and Var[S_N]. The general analytical framework is applied to occupational infection by Mycobacterium tuberculosis (M. tb). Tuberculosis is transmitted by inhalation of 1–5 μm particles carrying viable M. tb bacilli. Infection risk has traditionally been modeled by the expression R(D) = 1 − exp(−D), where D is the expected number of bacilli that deposit in the pulmonary region; this model assumes that the infectious dose is one bacillus. The beta pdf and the gamma pdf are shown to be reasonable and especially convenient forms for modeling the distribution of the expected cumulative dose across a large healthcare worker cohort. Use of the analytical framework is illustrated by estimating the efficacy of different respiratory protective devices in reducing healthcare worker infection risk.
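The incidence framework can be sketched in a few lines of Python: gamma-distributed expected doses across a cohort, the one-hit model R(D) = 1 − exp(−D), and the Poisson-binomial incidence distribution simulated directly. The cohort size and gamma parameters are illustrative assumptions, not the article's values.

```python
import numpy as np

rng = np.random.default_rng(7)
N = 1000                                           # cohort size (assumed)
doses = rng.gamma(shape=0.5, scale=0.02, size=N)   # expected dose per worker
risks = 1.0 - np.exp(-doses)                       # one-hit model R(D)

E_S = risks.sum()                                  # E[S_N]
Var_S = (risks * (1.0 - risks)).sum()              # Var[S_N] (Poisson-binomial)

# Simulate the incidence distribution S_N directly by Bernoulli sampling.
sims = (rng.random((5000, N)) < risks).sum(axis=1)

print(f"E[S_N] = {E_S:.2f}, Var[S_N] = {Var_S:.2f}")
print(f"simulated: mean = {sims.mean():.2f}, var = {sims.var():.2f}")
# Var[S_N] <= E[S_N] always, so for small E[S_N] a Poisson(E[S_N])
# approximation is adequate, as the abstract notes.
```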

7.
Daily soil/dust ingestion rates typically used in exposure and risk assessments are based on tracer element studies, which have a number of limitations and do not separate contributions from soil and dust. This article presents an alternate approach of modeling soil and dust ingestion via hand and object mouthing of children, using EPA's SHEDS model. Results for children 3 to <6 years old show that the mean and 95th percentile of total soil and dust ingestion are 68 and 224 mg/day, respectively; the mean contributions from soil ingestion, hand-to-mouth dust ingestion, and object-to-mouth dust ingestion are 41, 20, and 7 mg/day, respectively. In general, hand-to-mouth soil ingestion was the most important pathway, followed by hand-to-mouth dust ingestion, then object-to-mouth dust ingestion. The variability results are most sensitive to inputs on surface loadings, soil-skin adherence, hand mouthing frequency, and hand washing frequency. The predicted total soil and dust ingestion fits a lognormal distribution with geometric mean = 35.7 mg/day and geometric standard deviation = 3.3. There are two uncertainty distributions, one below the 20th percentile and one above; modeled uncertainties ranged within a factor of 3–30. Mean modeled estimates for soil and dust ingestion are consistent with past information but lower than the central values recommended in the 2008 EPA Child-Specific Exposure Factors Handbook. This new modeling approach, which predicts soil and dust ingestion by pathway, source type, population group, geographic location, and other factors, offers a better characterization of exposures relevant to health risk assessments than a single value.

8.
A probabilistic model (SHEDS-Wood) was developed to examine children's exposure and dose to chromated copper arsenate (CCA)-treated wood, as described in Part 1 of this two-part article. This Part 2 article discusses sensitivity and uncertainty analyses conducted to assess the key model inputs and areas of needed research for children's exposure to CCA-treated playsets and decks. The following types of analyses were conducted: (1) sensitivity analyses using a percentile scaling approach and multiple stepwise regression; and (2) uncertainty analyses using the bootstrap and two-stage Monte Carlo techniques. The five most important variables, based on both sensitivity and uncertainty analyses, were: wood surface residue-to-skin transfer efficiency; wood surface residue levels; fraction of hand surface area mouthed per mouthing event; average fraction of nonresidential outdoor time a child plays on/around CCA-treated public playsets; and frequency of hand washing. In general, parameter uncertainty produced a spread of a factor of 8 in predicted population dose estimates at the 5th and 95th percentiles, and a factor of 4 at the 50th percentile. Data were available for most of the key model inputs identified with sensitivity and uncertainty analyses; however, there were few or no data for some key inputs. To evaluate and improve the accuracy of model results, future measurement studies should obtain longitudinal time-activity diary information on children, spatial and temporal measurements of residue and soil concentrations on or near CCA-treated playsets and decks, and key exposure factors. Future studies should also address other sources of uncertainty in addition to parameter uncertainty, such as scenario and model uncertainty.

9.
Human health risk assessments use point values to develop risk estimates and thus impart a deterministic character to risk, which, by definition, is a probability phenomenon. The risk estimates are calculated based on individuals and then, using uncertainty factors (UFs), are extrapolated to the population, which is characterized by variability. Regulatory agencies have recommended quantifying the impact of variability in risk assessments through the application of probabilistic methods. In the present study, a framework that deals with the quantitative analysis of uncertainty (U) and variability (V) in target tissue dose in the population was developed by applying probabilistic analysis to physiologically-based toxicokinetic models. The mechanistic parameters that determine kinetics were described with probability density functions (PDFs). Since each PDF depicts the frequency of occurrence of all expected values of each parameter in the population, the combined effects of multiple sources of U/V were accounted for in the estimated distribution of tissue dose in the population, and a unified (adult and child) intraspecies toxicokinetic uncertainty factor, UF_H-TK, was determined. The results show that the proposed framework accounts effectively for U/V in population toxicokinetics. The ratio of the 95th percentile to the 50th percentile of the annual average concentration of the chemical at the target tissue organ (i.e., the UF_H-TK) varies with age. The ratio is equivalent to a unified intraspecies toxicokinetic UF, and it is one of the UFs by which the NOAEL can be divided to obtain the RfC/RfD. The 10-fold intraspecies UF is intended to account for uncertainty and variability in toxicokinetics (3.2×) and toxicodynamics (3.2×); this article deals exclusively with the toxicokinetic component. The framework provides an alternative to the default methodology and is advantageous in that the evaluation of toxicokinetic variability is based on the distribution of the effective target tissue dose, rather than the applied dose. It allows for the replacement of the default adult and child intraspecies UF with toxicokinetic data-derived values and provides accurate chemical-specific estimates of their magnitude. It shows that proper application of probability and toxicokinetic theories can reduce uncertainties when establishing exposure limits for specific compounds and provide better assurance that established limits are adequately protective. It contributes to the development of a probabilistic noncancer risk assessment framework and will ultimately lead to the unification of cancer and noncancer risk assessment methodologies.

10.
A Latin Hypercube probabilistic risk assessment methodology was employed in the assessment of health risks associated with exposures to contaminated sediment and biota in an estuary in the Tidewater region of Virginia. The primary contaminants were polychlorinated biphenyls (PCBs), polychlorinated terphenyls (PCTs), polynuclear aromatic hydrocarbons (PAHs), and metals released into the estuary from a storm sewer system. The exposure pathways associated with the highest contaminant intake and risks were dermal contact with contaminated sediment and ingestion of contaminated aquatic and terrestrial biota from the contaminated area. As expected, all of the output probability distributions of risk were highly skewed, and the ratios of the expected value (mean) to median risk estimates ranged from 1.4 to 14.8 for the various exposed populations. The 99th percentile risk estimates were as much as two orders of magnitude above the mean risk estimates. For the sediment exposure pathways, the median risk estimates were much more stable than the expected value risk estimates: the interrun variability in the median risk estimate was ±1.9% at 3000 iterations. The interrun stability of the mean risk estimates was approximately equal to that of the 95th percentile estimates at any number of iterations. The variation in neither contaminant concentrations nor any other single input variable contributed disproportionately to the overall simulation variance. The inclusion or exclusion of spatial correlations among contaminant concentrations in the simulation model did not significantly affect either the magnitude or the variance of the simulation risk estimates for sediment exposures.
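A minimal Latin Hypercube sketch for a multiplicative intake/risk model is shown below, using scipy's stratified sampler; the input distributions and the multiplicative risk equation are illustrative assumptions, not the Tidewater assessment's inputs.

```python
import numpy as np
from scipy import stats
from scipy.stats import qmc

n = 3000                                    # iterations, as in the article
sampler = qmc.LatinHypercube(d=3, seed=0)
u = sampler.random(n)                       # stratified uniforms in [0, 1)^3

conc    = stats.lognorm(s=1.0, scale=10.0).ppf(u[:, 0])  # mg/kg (assumed)
contact = stats.lognorm(s=0.5, scale=1e-4).ppf(u[:, 1])  # kg/day (assumed)
slope   = stats.lognorm(s=0.8, scale=1e-2).ppf(u[:, 2])  # per mg/kg-day (assumed)

risk = conc * contact * slope / 70.0        # 70 kg body weight (assumed)
print(f"mean   = {risk.mean():.2e}")
print(f"median = {np.median(risk):.2e}")
print(f"99th   = {np.percentile(risk, 99):.2e}")
# The skew reported in the article shows up here too: the mean sits well
# above the median, and the 99th percentile well above the mean.
```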

11.
Many environmental datasets, such as those for air toxic emission factors, contain several values reported only as below the detection limit; such datasets are referred to as "censored." Typical approaches to dealing with censored datasets include replacing censored values with arbitrary values of zero, one-half of the detection limit, or the detection limit. Here, an approach to quantification of the variability and uncertainty of censored datasets is demonstrated. Empirical bootstrap simulation is used to simulate censored bootstrap samples from the original data. Maximum likelihood estimation (MLE) is used to fit parametric probability distributions to each bootstrap sample, thereby specifying alternative estimates of the unknown population distribution of the censored datasets. Sampling distributions for uncertainty in statistics such as the mean, median, and percentiles are calculated. The robustness of the method was tested by application to different degrees of censoring, sample sizes, coefficients of variation, and numbers of detection limits. Lognormal, gamma, and Weibull distributions were evaluated. The reliability of using this method to estimate the mean is evaluated by averaging the best-estimate means of 20 cases for a small sample size of 20. The confidence intervals for distribution percentiles estimated with the bootstrap/MLE method compared favorably to results obtained with the nonparametric Kaplan-Meier method. The bootstrap/MLE method is illustrated via an application to an empirical air toxic emission factor dataset.
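The MLE step for a left-censored lognormal can be sketched as follows: detected values contribute density terms to the likelihood, and nondetects contribute CDF terms at their detection limits. The data and detection limits below are illustrative, not the emission-factor dataset.

```python
import numpy as np
from scipy import optimize, stats

detected = np.array([0.9, 1.4, 2.2, 3.1, 5.0, 8.2])  # measured values (toy)
dl = np.full(4, 0.5)              # four nondetects with detection limit 0.5

def neg_loglik(theta):
    mu, sigma = theta
    if sigma <= 0:
        return np.inf
    # Lognormal density terms: pdf(x) = normpdf(ln x; mu, sigma) / x.
    ll = (stats.norm.logpdf(np.log(detected), mu, sigma).sum()
          - np.log(detected).sum())
    # Censored terms: P(X <= DL) = Phi((ln DL - mu) / sigma).
    ll += stats.norm.logcdf((np.log(dl) - mu) / sigma).sum()
    return -ll

res = optimize.minimize(neg_loglik, x0=[0.0, 1.0], method="Nelder-Mead")
mu, sigma = res.x
print(f"MLE: mu = {mu:.3f}, sigma = {sigma:.3f}")
print(f"fitted mean = {np.exp(mu + sigma**2 / 2):.3f}")
# Wrapping this fit inside an empirical bootstrap loop (resample the
# censored dataset, refit, collect statistics) yields the sampling
# distributions for the mean and percentiles described above.
```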

12.
This article presents a general model for estimating population heterogeneity and "lack of knowledge" uncertainty in methylmercury (MeHg) exposure assessments using two-dimensional Monte Carlo analysis. Using data from fish-consuming populations in Bangladesh, Brazil, Sweden, and the United Kingdom, predictive model estimates of dietary MeHg exposures were compared against those derived from biomarkers (i.e., [Hg]_hair and [Hg]_blood). By disaggregating parameter uncertainty into components (i.e., population heterogeneity, measurement error, recall error, and sampling error), estimates were obtained of the contribution of each component to the overall uncertainty. Steady-state diet:hair and diet:blood MeHg exposure ratios were estimated for each population and were used to develop distributions useful for conducting biomarker-based probabilistic assessments of MeHg exposure. The 5th and 95th percentile modeled MeHg exposure estimates around the mean population exposure from each of the four study populations are presented to demonstrate lack-of-knowledge uncertainty about a best estimate of the true mean. Results from a U.K. study population showed that a predictive dietary model resulted in 74% lower lack-of-knowledge uncertainty around a central mean estimate relative to a hair biomarker model, and 31% lower relative to a blood biomarker model. Similar results were obtained for the Brazil and Bangladesh populations. Such analyses, used here to evaluate alternative models of dietary MeHg exposure, can be used to refine exposure instruments, improve information used in site management and remediation decision making, and identify sources of uncertainty in risk estimates.
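A minimal two-dimensional Monte Carlo sketch is shown below, separating "lack of knowledge" uncertainty (outer loop over uncertain distribution parameters) from population heterogeneity (inner loop over individuals). The parameter values are illustrative, not the MeHg study populations'.

```python
import numpy as np

rng = np.random.default_rng(3)
n_outer, n_inner = 500, 2000

pop_means = np.empty(n_outer)
for i in range(n_outer):
    # Outer loop: sample uncertain parameters of the intake distribution
    # (e.g., reflecting measurement, recall, and sampling error).
    gm  = rng.normal(0.10, 0.02)     # geometric mean, ug/kg-day (assumed)
    gsd = rng.normal(2.0, 0.2)       # geometric SD (assumed)
    # Inner loop: sample interindividual variability given those parameters.
    intakes = rng.lognormal(np.log(gm), np.log(gsd), size=n_inner)
    pop_means[i] = intakes.mean()

lo, hi = np.percentile(pop_means, [5, 95])
print(f"5th-95th percentile uncertainty band around the population "
      f"mean intake: ({lo:.3f}, {hi:.3f}) ug/kg-day")
```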

13.
Probabilistic risk assessments are enjoying increasing popularity as a tool to characterize the health hazards associated with exposure to chemicals in the environment. Because probabilistic analyses provide much more information to the risk manager than standard "point" risk estimates, this approach has generally been heralded as one that could significantly improve the conduct of health risk assessments. The primary obstacles to replacing point estimates with probabilistic techniques include a general lack of familiarity with the approach and a lack of regulatory policy and guidance. This paper discusses some of the advantages and disadvantages of the point estimate vs. probabilistic approach. Three case studies are presented that compare and contrast the results of each approach. The first addresses the risks associated with household exposure to volatile chemicals in tapwater. The second evaluates airborne dioxin emissions that can enter the food chain. The third illustrates how to derive health-based cleanup levels for dioxin in soil. It is shown that, based on the results of Monte Carlo analyses of probability density functions (PDFs), the point estimate approach required by most regulatory agencies will nearly always overpredict the risk for the 95th percentile person by a factor of up to 5. When the assessment requires consideration of 10 or more exposure variables, the point estimate approach will often predict risks representative of the 99.9th percentile person rather than the 50th or 95th percentile person (see the sketch below). This paper recommends a number of data distributions for various exposure variables that we believe are now sufficiently well understood to be used with confidence in most exposure assessments. A list of exposure variables that may require additional research before adequate data distributions can be developed is also discussed.
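A minimal Monte Carlo sketch of that compounding effect: with a multiplicative risk model of 10 lognormal exposure variables, an RME-style point estimate built from a mix of upper-bound and central inputs lands far out in the tail of the simulated risk distribution. The distributions and the choice of 6 upper-bound inputs are illustrative assumptions, not the case studies' inputs.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(11)
n_vars, n_sims = 10, 200_000
sigma = 0.6                        # common log-scale SD of inputs (assumed)

samples = rng.lognormal(mean=0.0, sigma=sigma, size=(n_sims, n_vars))
mc_risk = samples.prod(axis=1)     # Monte Carlo distribution of risk

# RME-style point estimate: 95th percentiles for 6 of the 10 variables,
# medians (= 1.0 here) for the remaining 4.
p95 = stats.lognorm(s=sigma).ppf(0.95)
point_estimate = p95 ** 6

pct = (mc_risk < point_estimate).mean() * 100
print(f"RME-style point estimate falls at the {pct:.2f}th percentile")
# Under these assumptions the point estimate lands near the 99.9th
# percentile of the simulated risk distribution.
```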

14.
The recent decision of the U.S. Supreme Court on the regulation of CO2 emissions from new motor vehicles(1) shows the need for a robust methodology to evaluate the fraction of attributable risk from such emissions. The methodology must enable decisionmakers to reach practically relevant conclusions on the basis of expert assessments the decisionmakers see as an expression of research in progress, rather than as knowledge consolidated beyond any reasonable doubt.(2-4) This article presents such a methodology and demonstrates its use for the Alpine heat wave of 2003. In a Bayesian setting, different expert assessments on temperature trends and volatility can be formalized as probability distributions, with initial weights (priors) attached to them. By Bayesian learning, these weights can be adjusted in the light of data. The fraction of heat wave risk attributable to anthropogenic climate change can then be computed from the posterior distribution. We show that very different priors consistently lead to the result that anthropogenic climate change has contributed more than 90% to the probability of the Alpine summer heat wave in 2003. The present method can be extended to a wide range of applications where conclusions must be drawn from divergent assessments under uncertainty.
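The Bayesian-learning step can be sketched compactly: competing expert assessments are formalized as probability models for the temperature data, given initial weights, and reweighted by their likelihoods. The models, data, and priors below are illustrative toys, not the article's.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
data = rng.normal(1.2, 0.8, size=30)   # summer temperature anomalies (toy)

# Each "expert" is a distribution for the anomalies (trend + volatility).
experts = [stats.norm(0.0, 0.8),       # no anthropogenic trend (assumed)
           stats.norm(0.7, 0.8),       # moderate trend (assumed)
           stats.norm(1.2, 1.0)]       # strong trend, higher volatility
priors = np.array([1/3, 1/3, 1/3])     # initial weights

log_lik = np.array([e.logpdf(data).sum() for e in experts])
post = priors * np.exp(log_lik - log_lik.max())   # Bayes' rule, stabilized
post /= post.sum()
print("posterior weights:", np.round(post, 3))
# Attributable-fraction quantities are then computed under the posterior
# mixture rather than under any single expert's assessment.
```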

15.
Exposure guidelines for potentially toxic substances are often based on a reference dose (RfD) that is determined by dividing a no-observed-adverse-effect level (NOAEL), lowest-observed-adverse-effect level (LOAEL), or benchmark dose (BD) corresponding to a low level of risk by a product of uncertainty factors. The uncertainty factors for animal-to-human extrapolation, variable sensitivities among humans, extrapolation from measured subchronic effects to unknown results for chronic exposures, and extrapolation from a LOAEL to a NOAEL can be thought of as random variables that vary from chemical to chemical. Selected databases are examined that provide distributions across chemicals of inter- and intraspecies effects, ratios of LOAELs to NOAELs, and differences in acute and chronic effects, to illustrate the determination of percentiles for uncertainty factors. The distributions of uncertainty factors tend to be approximately lognormal. The logarithm of the product of independent uncertainty factors is then approximately distributed as a sum of normally distributed variables, making it possible to estimate percentiles for the product. Hence, the size of the product of uncertainty factors can be selected to provide adequate safety for a large percentage (e.g., approximately 95%) of RfDs. For the databases used to describe the distributions of uncertainty factors, using values of 10 appears to be reasonable and conservative. For the databases examined, the following simple "Rule of 3s" is suggested, which exceeds the estimated 95th percentile of the product of uncertainty factors: if only a single uncertainty factor is required, use 33; for any two uncertainty factors, use 3 × 33 ≈ 100; for any three, use a combined factor of 3 × 100 = 300; and if all four uncertainty factors are needed, use a total factor of 3 × 300 = 900. If coverage near the 99th percentile is desired, apply another factor of 3. An additional factor may be needed for inadequate data, or a modifying factor for other uncertainties (e.g., different routes of exposure) not covered above.
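The key fact used above is that the log of a product of independent lognormal uncertainty factors is a sum of normals, so percentiles of the product come in closed form. A minimal sketch follows; the GM/GSD values are illustrative, not estimates from the cited databases.

```python
import numpy as np
from scipy.stats import norm

# (geometric mean, geometric SD) for each uncertainty factor (all assumed)
factors = [(3.0, 2.5),   # animal-to-human
           (3.0, 2.0),   # human variability
           (2.0, 2.0),   # subchronic-to-chronic
           (3.0, 1.8)]   # LOAEL-to-NOAEL

# ln(product) ~ Normal(sum ln GM, sqrt(sum (ln GSD)^2)) by independence.
mu = sum(np.log(gm) for gm, gsd in factors)
sd = np.sqrt(sum(np.log(gsd) ** 2 for gm, gsd in factors))

for p in (0.95, 0.99):
    print(f"{p:.0%}ile of the product: {np.exp(mu + norm.ppf(p) * sd):.0f}")
# Compare these closed-form percentiles with the "Rule of 3s" totals
# (e.g., 900 for four factors, another factor of 3 for the 99th percentile).
```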

16.
In environmental risk management, there are often interests in maximizing public health benefits (efficiency) and addressing inequality in the distribution of health outcomes. However, both dimensions are not generally considered within a single analytical framework. In this study, we estimate both total population health benefits and changes in quantitative indicators of health inequality for a number of alternative spatial distributions of diesel particulate filter retrofits across half of an urban bus fleet in Boston, Massachusetts. We focus on the impact of emissions controls on primary fine particulate matter (PM2.5) emissions, modeling the effect on PM2.5 concentrations and premature mortality. Given spatial heterogeneity in baseline mortality rates, we apply the Atkinson index and other inequality indicators to quantify changes in the distribution of mortality risk. Across the different spatial distributions of control strategies, the public health benefits varied by more than a factor of two, related to factors such as mileage driven per day, population density near roadways, and baseline mortality rates in exposed populations. Changes in health inequality indicators varied across control strategies, with the subset of optimal strategies considering both efficiency and equality generally robust across different parametric assumptions and inequality indicators. Our analysis demonstrates the viability of formal analytical approaches to jointly address both efficiency and equality in risk assessment, providing a tool for decisionmakers who wish to consider both issues.
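A minimal sketch of the Atkinson index applied to a distribution of mortality risk across subpopulations is given below; the risk values, strategies, and inequality-aversion parameter are illustrative toys, not the Boston bus-fleet results.

```python
import numpy as np

def atkinson(x, eps):
    """Atkinson inequality index of a nonnegative distribution x."""
    x = np.asarray(x, dtype=float)
    if eps == 1.0:
        ede = np.exp(np.mean(np.log(x)))   # equally-distributed equivalent
    else:
        ede = np.mean(x ** (1.0 - eps)) ** (1.0 / (1.0 - eps))
    return 1.0 - ede / x.mean()            # 0 = perfect equality

baseline   = np.array([4.0, 5.0, 9.0, 12.0])             # deaths/10^4/yr (toy)
strategy_a = baseline - np.array([0.5, 0.5, 0.5, 0.5])   # uniform benefit
strategy_b = baseline - np.array([0.0, 0.0, 1.0, 1.0])   # targets high-risk

for name, risks in (("baseline", baseline), ("A", strategy_a),
                    ("B", strategy_b)):
    print(f"{name}: total = {risks.sum():.1f}, "
          f"Atkinson(0.75) = {atkinson(risks, 0.75):.4f}")
# Strategies A and B deliver the same total benefit (efficiency), but B
# lowers the Atkinson index more: the efficiency/equality trade-off the
# framework is designed to quantify.
```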

17.
It has recently been suggested that "standard" data distributions for key exposure variables should be developed wherever appropriate for use in probabilistic or "Monte Carlo" exposure analyses. Soil-on-skin adherence estimates represent an ideal candidate for development of a standard data distribution: there are several readily available studies that offer a consistent pattern of reported results, and, more importantly, soil adherence to skin is likely to vary little from site to site. In this paper, we thoroughly review each of the published soil adherence studies with respect to study design, sampling and analytical methods, and level of confidence in the reported results. Based on these studies, probability density functions (PDFs) of soil adherence values were examined for different age groups and different sampling techniques. The soil adherence PDF developed from adult data was found to closely resemble the soil adherence PDF based on child data in terms of both central tendency (mean = 0.49 and 0.63 mg-soil/cm2-skin, respectively) and 95th percentile values (1.6 and 2.4 mg-soil/cm2-skin, respectively). Accordingly, a single "standard" PDF is presented based on all data collected for all age groups. This standard PDF is lognormally distributed; the arithmetic mean and standard deviation are 0.52 ± 0.9 mg-soil/cm2-skin. Since our review of the literature indicates that soil adherence under environmental conditions will be minimally influenced by age, sex, soil type, or particle size, this PDF should be considered applicable to all settings. The 50th and 95th percentile values of the standard PDF (0.25 and 1.7 mg-soil/cm2-skin, respectively) are very similar to recent U.S. EPA estimates of "average" and "upper-bound" soil adherence (0.2 and 1.0 mg-soil/cm2-skin, respectively).
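The reported parameters can be cross-checked in a few lines: converting the arithmetic mean and SD (0.52 and 0.9 mg-soil/cm2-skin) to lognormal parameters and recovering the reported percentiles. This is a consistency sketch, not part of the paper's analysis.

```python
import numpy as np
from scipy.stats import norm

am, asd = 0.52, 0.9                    # reported arithmetic mean and SD
cv2 = (asd / am) ** 2
sigma = np.sqrt(np.log(1.0 + cv2))     # log-scale SD of the lognormal
mu = np.log(am) - sigma ** 2 / 2.0     # log-scale mean

median = np.exp(mu)
p95 = np.exp(mu + norm.ppf(0.95) * sigma)
print(f"median = {median:.2f}, 95th = {p95:.2f} mg-soil/cm2-skin")
# Yields ~0.26 and ~1.8, approximately reproducing the reported 0.25 and
# 1.7 mg-soil/cm2-skin, so the stated moments and percentiles describe
# essentially the same lognormal.
```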

18.
Due to the hydrophobic nature of synthetic based fluids (SBFs), drill cuttings are not very dispersive in the water column and settle close to the disposal site. Arsenic and copper are two important toxic heavy metals, among others, found in drilling waste. In this article, the concentrations of heavy metals are determined using a steady-state "aquivalence-based" fate model in a probabilistic mode. Monte Carlo simulations are employed to determine pore water concentrations. A hypothetical case study is used to determine the water quality impacts for two discharge options: 4% and 10% attached SBFs, which correspond to the best available technology option and the current discharge practice in U.S. offshore waters, respectively. The exposure concentration (C_E) is a predicted environmental concentration adjusted for exposure probability and the bioavailable fraction of heavy metals. The response of the ecosystem (R_E) is defined by developing an empirical distribution function of the predicted no-effect concentration. The pollutants' pore water concentrations within a radius of 750 m are estimated, and cumulative distributions of the risk quotient (RQ = C_E/R_E) are developed to determine the probability of RQ exceeding 1.
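The risk-quotient step reduces to sampling an exposure concentration and a no-effect concentration and estimating P(RQ > 1). A minimal Monte Carlo sketch follows; the distributions are illustrative assumptions, not the case study's.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 100_000
c_e = rng.lognormal(np.log(5.0), 0.7, size=n)   # exposure conc., ug/L (assumed)
r_e = rng.lognormal(np.log(12.0), 0.5, size=n)  # no-effect conc., ug/L (assumed)

rq = c_e / r_e                                  # risk quotient RQ = C_E / R_E
print(f"P(RQ > 1) = {(rq > 1).mean():.3f}")
print(f"median RQ = {np.median(rq):.2f}, 95th = {np.percentile(rq, 95):.2f}")
```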

19.
The distributional approach for uncertainty analysis in cancer risk assessment is reviewed and extended. The method considers a combination of bioassay study results, targeted experiments, and expert judgment regarding biological mechanisms to predict a probability distribution for uncertain cancer risks. Probabilities are assigned to alternative model components, including the determination of human carcinogenicity, mode of action, the dosimetry measure for exposure, the mathematical form of the dose-response relationship, the experimental data set(s) used to fit the relationship, and the formula used for interspecies extrapolation. Alternative software platforms for implementing the method are considered, including Bayesian belief networks (BBNs) that facilitate assignment of prior probabilities, specification of relationships among model components, and identification of all output nodes on the probability tree. The method is demonstrated using the application of Evans, Sielken, and co-workers for predicting cancer risk from formaldehyde inhalation exposure. Uncertainty distributions are derived for maximum likelihood estimate (MLE) and 95th percentile upper confidence limit (UCL) unit cancer risk estimates, and the effects of resolving selected model uncertainties on these distributions are demonstrated, considering both perfect and partial information for these model components. A method for synthesizing the results of multiple mechanistic studies is introduced, considering the assessed sensitivities and selectivities of the studies for their targeted effects. A highly simplified example is presented illustrating assessment of genotoxicity based on studies of DNA damage response caused by naphthalene and its metabolites. The approach can provide a formal mechanism for synthesizing multiple sources of information using a transparent and replicable weight-of-evidence procedure.

20.
Estimates of the lifetime-absorbed daily dose (LADD) of acrylamide resulting from use of representative personal-care products containing polyacrylamides have been developed. All of the parameters that determine the amount of acrylamide absorbed by an individual vary from one individual to another; moreover, for some parameters there is uncertainty as to which is the correct or representative value from a range of values. Consequently, the parameters used in the estimation of the LADD of acrylamide from usage of a particular product type (e.g., deodorant, makeup, etc.) were represented by distributions evaluated using Monte Carlo analyses.(1-4) From these data, distributions of values for key parameters, such as the amount of acrylamide in polyacrylamide, absorption fraction, etc., were defined and used to provide a distribution of LADDs for each personal-care product. The estimated total acrylamide LADDs (across all products) at the median, mean, and 95th percentile of the distribution of individual LADD values were 4.7 × 10^-8, 2.3 × 10^-7, and 7.3 × 10^-7 mg/kg/day for females and 3.6 × 10^-8, 1.7 × 10^-7, and 5.4 × 10^-7 mg/kg/day for males. The ratios of the LADDs to the risk-specific dose corresponding to a target risk level of 1 × 10^-5 (the acceptable risk level for this investigation), derived using approaches typically used by the FDA and the USEPA and proposed for use by the European Union (EU), were also calculated. All ratios were well below 1, indicating that the extra lifetime cancer risks from the use of polyacrylamide-containing personal-care products, in the manner assumed in this assessment, are well below acceptable levels. Even if it were assumed that an individual used all of the products together, the estimated LADD would still correspond to a dose well below the acceptable risk levels.
