Similar Documents
20 similar documents retrieved (search time: 31 ms)
1.
Hwang  Jing-Shiang  Chen  James J. 《Risk analysis》1999,19(6):1071-1076
The estimation of health risks from exposure to a mixture of chemical carcinogens is generally based on combining information from several available single-compound studies. The current practice of directly summing the upper-bound risk estimates of the individual carcinogenic components as an upper bound on the total risk of a mixture is known to be generally too conservative. Gaylor and Chen (1996, Risk Analysis) proposed a simple procedure to compute an upper bound on the total risk using only the upper confidence limits and central risk estimates of the individual carcinogens. The Gaylor-Chen procedure was derived under the assumption that the distributions of the individual risk estimates are normal. In this paper we evaluate the Gaylor-Chen approach in terms of its coverage probability, which depends on the coverages of the upper confidence limits on the true risks of the individual carcinogens. In general, if the coverage probabilities for the individual carcinogens are all approximately equal to the nominal level, the Gaylor-Chen approach should perform well. However, the approach can be conservative or anti-conservative if some or all of the individual upper confidence limit estimates are conservative or anti-conservative.
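The coverage evaluation described above can be sketched with a small Monte Carlo check: under the normality assumption, the combined upper limit built from central estimates and the individual interval half-widths should cover the true total risk at about the nominal rate. All numbers below (true unit risks, standard errors) are hypothetical, chosen only for illustration.

```python
import math
import random

random.seed(1)
true_risk = [1e-6, 2e-6, 4e-6]    # hypothetical true risks of three carcinogens
se        = [5e-7, 8e-7, 1.5e-6]  # standard errors of the individual estimators
z = 1.645                         # one-sided 95% normal quantile

covered = 0
n_trials = 20000
for _ in range(n_trials):
    est = [random.gauss(r, s) for r, s in zip(true_risk, se)]
    # Combined upper limit: central sum plus the root of the summed
    # squared half-widths of the individual confidence intervals.
    upper = sum(est) + z * math.sqrt(sum(s * s for s in se))
    if upper >= sum(true_risk):
        covered += 1

coverage = covered / n_trials
print(round(coverage, 3))  # near the nominal 0.95 when normality holds
```

When the individual intervals are themselves too wide or too narrow (conservative or anti-conservative), the simulated coverage drifts above or below 0.95, which is exactly the behavior the abstract reports.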

2.
The excess cancer risk that might result from exposure to a mixture of chemical carcinogens usually must be estimated using data from experiments conducted with individual chemicals. In estimating such risk, it is commonly assumed that the total risk due to the mixture is the sum of the risks of the individual components, provided that the risks associated with individual chemicals at levels present in the mixture are low. This assumption, while itself not necessarily conservative, has led to the conservative practice of summing individual upper-bound risk estimates in order to obtain an upper bound on the total excess cancer risk for a mixture. Less conservative procedures are described here and are illustrated for the case of a mixture of four carcinogens.

3.
Natural or manufactured products may contain mixtures of carcinogens and the human environment certainly contains mixtures of carcinogens. Various authors have shown that the total risk of a mixture can be approximated by the sum of the risks of the individual components under a variety of conditions at low doses. Under these conditions, summing the individual estimated upper bound risks, as currently often done, is too conservative because it is unlikely that all risks for a mixture are at their maximum levels simultaneously. In the absence of synergism, a simple procedure is proposed for estimating a more appropriate upper bound of the additive risks for a mixture of carcinogens. These simple limits also apply to noncancer endpoints when the risks of the components are approximately additive.
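The kind of upper bound proposed here can be contrasted with naive summing in a few lines. Assuming approximately normal, independent risk estimates, the naive bound adds the individual interval half-widths linearly, while a less conservative bound adds them in quadrature around the summed central estimates. The central estimates and standard errors below are illustrative, not from any study.

```python
import math

# Hypothetical central risk estimates and standard errors for four carcinogens.
mu = [1e-6, 2e-6, 5e-7, 3e-6]
se = [4e-7, 8e-7, 2e-7, 1e-6]
z = 1.645  # one-sided 95% normal quantile

# Naive practice: sum the four individual 95% upper confidence limits.
sum_of_ucls = sum(m + z * s for m, s in zip(mu, se))

# Less conservative bound: summed centrals plus half-widths combined in quadrature.
ucl_of_sum = sum(mu) + z * math.sqrt(sum(s * s for s in se))

print(sum_of_ucls > ucl_of_sum)  # the naive sum is always at least as large
```

Because a linear sum of positive half-widths always dominates their root-sum-of-squares, the naive bound can only overstate the combined upper limit, and the gap grows with the number of components.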

4.
Quantitative cancer risk assessments are typically expressed as plausible upper bounds rather than estimates of central tendency. In analyses involving several carcinogens, these upper bounds are often summed to estimate overall risk. This raises the question of whether a sum of upper bounds is itself a plausible estimate of overall risk. This question can be asked in two ways: whether the sum yields an improbable estimate of overall risk (that is, is it only remotely possible for the true sum of risks to match the sum of upper bounds), or whether the sum gives a misleading estimate (that is, is the true sum of risks likely to be very different from the sum of upper bounds). Analysis of four case studies shows that as the number of risk estimates increases, their sum becomes increasingly improbable, but not misleading. Though the overall risk depends on the independence, additivity, and number of risk estimates, as well as the shapes of the underlying risk distributions, sums of upper bounds provide useful information about the overall risk and can be adjusted downward to give a more plausible, perhaps even probable, upper bound, or even a central estimate of overall risk.

5.
Two-year chronic bioassays were conducted by using B6C3F1 female mice fed several concentrations of two different mixtures of coal tars from manufactured gas waste sites or benzo(a)pyrene (BaP). The purpose of the study was to obtain estimates of cancer potency of coal tar mixtures, by using conventional regulatory methods, for use in manufactured gas waste site remediation. A secondary purpose was to investigate the validity of using the concentration of a single potent carcinogen, in this case benzo(a)pyrene, to estimate the relative risk for a coal tar mixture. The study has shown that BaP dominates the cancer risk when its concentration is greater than 6,300 ppm in the coal tar mixture. In this case the most sensitive tissue site is the forestomach. Using low-dose linear extrapolation, the lifetime cancer risk for humans is estimated to be: Risk < 1.03 × 10⁻⁴ (ppm coal tar in total diet) + 240 × 10⁻⁴ (ppm BaP in total diet), based on forestomach tumors. If the BaP concentration in the coal tar mixture is less than 6,300 ppm, the more likely case, then lung tumors provide the largest estimated upper limit of risk, Risk < 2.55 × 10⁻⁴ (ppm coal tar in total diet), with no contribution of BaP to lung tumors. The upper limit of the cancer potency (slope factor) for lifetime oral exposure to benzo(a)pyrene is 1.2 × 10⁻³ per μg per kg body weight per day from this Good Laboratory Practice (GLP) study compared with the current value of 7.3 × 10⁻³ per μg per kg body weight per day listed in the U.S. EPA Integrated Risk Information System.
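The two upper-limit risk equations reported in this abstract are simple enough to check arithmetically. The coefficients below come from the abstract itself; the dietary concentrations plugged in are hypothetical examples.

```python
def forestomach_risk(ppm_coal_tar, ppm_bap):
    """Lifetime upper-limit risk when BaP exceeds 6,300 ppm of the mixture."""
    return 1.03e-4 * ppm_coal_tar + 240e-4 * ppm_bap

def lung_risk(ppm_coal_tar):
    """Lifetime upper-limit risk when BaP is below 6,300 ppm (no BaP term)."""
    return 2.55e-4 * ppm_coal_tar

# Hypothetical exposures: 0.1 ppm coal tar in total diet.
print(lung_risk(0.1))  # 2.55e-05
```

For the same coal tar concentration, the forestomach equation is dominated by the BaP term whenever BaP is appreciable, which is the sense in which BaP "dominates the cancer risk" above the 6,300 ppm threshold.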

6.
Aggregate exposure metrics based on sums or weighted averages of component exposures are widely used in risk assessments of complex mixtures, such as asbestos-associated dusts and fibers. Allowed exposure levels based on total particle or fiber counts and estimated ambient concentrations of such mixtures may be used to make costly risk-management decisions intended to protect human health and to remediate hazardous environments. We show that, in general, aggregate exposure information alone may be inherently unable to guide rational risk-management decisions when the components of the mixture differ significantly in potency and when the percentage compositions of the mixture exposures differ significantly across locations. Under these conditions, which are not uncommon in practice, aggregate exposure metrics may be "worse than useless," in that risk-management decisions based on them are less effective than decisions that ignore the aggregate exposure information and select risk-management actions at random. The potential practical significance of these results is illustrated by a case study of 27 exposure scenarios in El Dorado Hills, California, where applying an aggregate unit risk factor (from EPA's IRIS database) to aggregate exposure metrics produces average risk estimates about 25 times greater - and of uncertain predictive validity - compared to risk estimates based on specific components of the mixture that have been hypothesized to pose risks of human lung cancer and mesothelioma.

7.
There has been considerable discussion regarding the conservativeness of low-dose cancer risk estimates based upon linear extrapolation from upper confidence limits. Various groups have expressed a need for best (point) estimates of cancer risk in order to improve risk/benefit decisions. Point estimates of carcinogenic potency obtained from maximum likelihood estimates of low-dose slope may be highly unstable, being sensitive both to the choice of the dose–response model and possibly to minimal perturbations of the data. For carcinogens that augment background carcinogenic processes and/or for mutagenic carcinogens, at low doses the tumor incidence versus target tissue dose is expected to be linear. Pharmacokinetic data may be needed to identify and adjust for exposure-dose nonlinearities. Based on the assumption that the dose response is linear over low doses, a stable point estimate for low-dose cancer risk is proposed. Since various models give similar estimates of risk down to levels of 1%, a stable estimate of the low-dose cancer slope is provided by ŝ = 0.01/ED01, where ED01 is the dose corresponding to an excess cancer risk of 1%. Thus, low-dose estimates of cancer risk are obtained by risk = ŝ × dose. The proposed procedure is similar to one which has been utilized in the past by the Center for Food Safety and Applied Nutrition, Food and Drug Administration. The upper confidence limit, s*, corresponding to this point estimate of low-dose slope is similar to the upper limit, q1*, obtained from the generalized multistage model. The advantage of the proposed procedure is that ŝ provides stable estimates of low-dose carcinogenic potency, which are not unduly influenced by small perturbations of the tumor incidence rates, unlike q1*.
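The proposed slope estimate reduces to two one-line formulas, ŝ = 0.01/ED01 and risk = ŝ × dose. A minimal sketch, with a hypothetical ED01 value (the real ED01 would be fitted from bioassay data):

```python
def low_dose_slope(ed01):
    """Point-estimate slope s_hat = 0.01 / ED01 (per unit dose)."""
    return 0.01 / ed01

def low_dose_risk(dose, ed01):
    """Linear low-dose risk: s_hat * dose."""
    return low_dose_slope(ed01) * dose

ed01 = 2.0  # hypothetical dose (mg/kg/day) giving 1% excess risk
print(low_dose_risk(0.001, ed01))  # 5e-06
```

The stability argument is that ED01 sits inside the observable response range, so small perturbations of the tumor incidence data move it far less than they move a fitted low-dose slope parameter.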

9.
Estimated Soil Ingestion Rates for Use in Risk Assessment
Assessing the risks to human health posed by contaminants present in soil requires an estimate of likely soil ingestion rates. In the past, direct measurements of soil ingestion were not available and risk assessors were forced to estimate soil ingestion rates based on observations of mouthing behavior and measurements of soil on hands. Recently, empirical data on soil ingestion rates have become available from two sources (Binder et al., 1986 and van Wijnen et al., 1986). Although preliminary, these data can be used to derive better estimates of soil ingestion rates for use in risk assessments. Estimates of average soil ingestion rates derived in this paper range from 25 to 100 mg/day, depending on the age of the individual at risk. Maximum soil ingestion rates that are unlikely to underestimate exposure range from 100 to 500 mg/day. A value of 5,000 mg/day is considered a reasonable estimate of a maximum single-day exposure for a child with habitual pica.

10.
Dermal absorption experiments form an important component in the assessment of risk from exposure to pesticides and other substances. Much dermal absorption data is gathered in rat experiments carried out using a certain standard protocol. Uncertainties in these data arise from many sources and can be quite large. For example, measurements of the systemic absorption of hexaconazole differed by more than an order of magnitude within a single experiment. Two diniconazole studies produced quite different results, due to minor differences in protocol and in chemical formulation. Limits of detection can also prevent accurate measurement when the amounts absorbed are small. These examples illustrate the need for measuring and reporting uncertainties in estimates that are based on these data. The most direct way to estimate uncertainty is to compute the sample standard deviations of replicate measurements. By pooling these estimates across dose and duration groups for which they are similar, the number of degrees of freedom is increased, and more precise confidence intervals can be obtained. In particular, the ratio of upper to lower 95% confidence limits was reduced by as much as ten-fold for hexaconazole, seven-fold for uniconazole, and nearly four-fold for propiconazole.
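The pooling step described above is the standard pooled-variance calculation: weight each group's sample variance by its degrees of freedom, then divide by the total degrees of freedom. The replicate counts and group standard deviations below are illustrative, not taken from the dermal studies.

```python
import math

# (n_i, s_i): replicates per dose/duration group and the group's sample SD.
groups = [(4, 0.42), (4, 0.55), (5, 0.38), (3, 0.61)]

num = sum((n - 1) * s * s for n, s in groups)  # df-weighted sum of variances
df  = sum(n - 1 for n, s in groups)            # pooled degrees of freedom
pooled_sd = math.sqrt(num / df)

print(df)                  # 12 df, versus only 2-4 within any single group
print(round(pooled_sd, 3))
```

The payoff is in the confidence limits: t-quantiles shrink quickly as degrees of freedom grow from 2-3 to a pooled 12, which is how the upper-to-lower limit ratios in the abstract were reduced by several-fold.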

11.
Current methods for cancer risk assessment result in single values, without any quantitative information on the uncertainties in these values. Therefore, single risk values could easily be overinterpreted. In this study, we discuss a full probabilistic cancer risk assessment approach in which all the generally recognized uncertainties in both exposure and hazard assessment are quantitatively characterized and probabilistically evaluated, resulting in a confidence interval for the final risk estimate. The methodology is applied to three example chemicals (aflatoxin, N‐nitrosodimethylamine, and methyleugenol). These examples illustrate that the uncertainty in a cancer risk estimate may be huge, making single value estimates of cancer risk meaningless. Further, a risk based on linear extrapolation tends to be lower than the upper 95% confidence limit of a probabilistic risk estimate, and in that sense it is not conservative. Our conceptual analysis showed that there are two possible basic approaches for cancer risk assessment, depending on the interpretation of the dose‐incidence data measured in animals. However, it remains unclear which of the two interpretations is the more adequate one, adding an additional uncertainty to the already huge confidence intervals for cancer risk estimates.

12.
Twenty-four-hour recall data from the Continuing Survey of Food Intake by Individuals (CSFII) are frequently used to estimate dietary exposure for risk assessment. Food frequency questionnaires are traditional instruments of epidemiological research; however, their application in dietary exposure and risk assessment has been limited. This article presents a probabilistic method of bridging the National Health and Nutrition Examination Survey (NHANES) food frequency and the CSFII data to estimate longitudinal (usual) intake, using a case study of seafood mercury exposures for two population subgroups (females 16 to 49 years and children 1 to 5 years). Two hundred forty-nine CSFII food codes were mapped into 28 NHANES fish/shellfish categories. FDA and state/local seafood mercury data were used. A uniform distribution with minimum and maximum blood-diet ratios of 0.66 to 1.07 was assumed. A probabilistic assessment was conducted to estimate distributions of individual 30-day average daily fish/shellfish intakes, methyl mercury exposure, and blood levels. The upper percentile estimates of fish and shellfish intakes based on the 30-day daily averages were lower than those based on two- and three-day daily averages. These results support previous findings that distributions of "usual" intakes based on a small number of consumption days provide overestimates in the upper percentiles. About 10% of the females (16 to 49 years) and children (1 to 5 years) may be exposed to mercury levels above the EPA's RfD. The predicted 75th and 90th percentile blood mercury levels for the females in the 16-to-49-year group were similar to those reported by NHANES. The predicted 90th percentile blood mercury levels for children in the 1-to-5-year subgroup were similar to the NHANES values, and the 75th percentile estimates were slightly above them.

13.
For diseases with more than one risk factor, the sum of probabilistic estimates of the number of cases caused by each individual factor may exceed the total number of cases observed, especially when uncertainties about exposure and dose response for some risk factors are high. In this study, we outline a method of bounding the fraction of lung cancer fatalities not due to specific well-studied causes. Such information serves as a "reality check" for estimates of the impacts of the minor risk factors, and, as such, complements the traditional risk analysis. With lung cancer as our example, we allocate portions of the observed lung cancer mortality to known causes (such as smoking, residential radon, and asbestos fibers) and describe the uncertainty surrounding those estimates. The interactions among the risk factors are also quantified, to the extent possible. We then infer an upper bound on the residual mortality due to "other" causes, using a consistency constraint on the total number of deaths, the maximum uncertainty principle, and the mathematics of imprecise probabilities.
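The core consistency constraint is simple bookkeeping: deaths attributed to "other" causes can be no more than the observed total minus the lower-bound attributions to well-studied causes. The counts below are hypothetical, chosen only to make the arithmetic concrete.

```python
# Hypothetical observed total and lower-bound attributions to known causes.
total_lung_cancer_deaths = 150_000

lower_bounds = {
    "smoking": 120_000,
    "residential radon": 8_000,
    "asbestos": 3_000,
}

# Consistency constraint: residual "other" causes cannot exceed what is left.
residual_upper_bound = total_lung_cancer_deaths - sum(lower_bounds.values())
print(residual_upper_bound)  # 19000
```

This bound is deliberately one-sided: it says nothing about how many residual deaths there actually are, only how many there could be without contradicting the observed total.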

14.
Reassessing Benzene Cancer Risks Using Internal Doses
Human cancer risks from benzene exposure have previously been estimated by regulatory agencies based primarily on epidemiological data, with supporting evidence provided by animal bioassay data. This paper reexamines the animal-based risk assessments for benzene using physiologically-based pharmacokinetic (PBPK) models of benzene metabolism in animals and humans. It demonstrates that internal doses (interpreted as total benzene metabolites formed) from oral gavage experiments in mice are well predicted by a PBPK model developed by Travis et al. Both the data and the model outputs can also be accurately described by the simple nonlinear regression model total metabolites = 76.4x/(80.75 + x), where x = administered dose in mg/kg/day. Thus, PBPK modeling validates the use of such nonlinear regression models, previously used by Bailer and Hoel. An important finding is that refitting the linearized multistage (LMS) model family to internal doses and observed responses changes the maximum-likelihood estimate (MLE) dose-response curve for mice from linear-quadratic to cubic, leading to low-dose risk estimates smaller than in previous risk assessments. This is consistent with the conclusion for mice from the Bailer and Hoel analysis. An innovation in this paper is estimation of internal doses for humans based on a PBPK model (and the regression model approximating it) rather than on interspecies dose conversions. Estimates of human risks at low doses are reduced by the use of internal dose estimates when the estimates are obtained from a PBPK model, in contrast to Bailer and Hoel's findings based on interspecies dose conversion. Sensitivity analyses and comparisons with epidemiological data and risk models suggest that our finding of a nonlinear MLE dose-response curve at low doses is robust to changes in assumptions and more consistent with epidemiological data than earlier risk models.
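The regression model quoted in the abstract, total metabolites = 76.4x/(80.75 + x), is a saturable (Michaelis-Menten-form) curve and can be evaluated directly. The two example doses below are hypothetical probes of the low- and high-dose regimes.

```python
def total_metabolites(x):
    """Total benzene metabolites formed at administered dose x (mg/kg/day),
    per the saturable regression model reported in the abstract."""
    return 76.4 * x / (80.75 + x)

# Nearly linear at low doses, saturating toward 76.4 at high doses:
low  = total_metabolites(1.0)     # close to (76.4 / 80.75) * 1.0
high = total_metabolites(1000.0)  # approaching the 76.4 plateau
print(round(low, 2), round(high, 1))
```

This shape is why refitting dose-response models to internal rather than administered dose changes the low-dose extrapolation: metabolite formation, not administered dose, is the quantity assumed to drive response.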

15.
Exposure to chemical contaminants in various media must be estimated when performing ecological risk assessments. Exposure estimates are often based on the 95th-percentile upper confidence limit on the mean concentration of all samples, calculated without regard to critical ecological and spatial information about the relative relationship of receptors, their habitats, and contaminants. This practice produces exposure estimates that are potentially unrepresentative of the ecology of the receptor. This article proposes a habitat area and quality-conditioned exposure estimator, E[HQ], that requires consideration of these relationships. It describes a spatially explicit ecological exposure model to facilitate calculation of E[HQ]. The model provides (1) a flexible platform for investigating the effect of changes in habitat area, habitat quality, foraging area, and population size on exposure estimates, and (2) a tool for calculating E[HQ] for use in actual risk assessments. The inner loop of a Visual Basic program randomly walks a receptor over a multicelled landscape--each cell of which contains values for cell area, habitat area, habitat quality, and concentration--accumulating an exposure estimate until the total area foraged is less than or equal to a given foraging area. An outer loop then steps through foraging areas of increasing size. This program is iterated by Monte Carlo software, with the number of iterations representing the population size. Results indicate that (1) any single estimator may over- or underestimate exposure, depending on foraging strategy and spatial relationships of habitat and contamination, and (2) changes in exposure estimates in response to changes in foraging and habitat area are not linear.
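The inner loop described above (a receptor visiting random cells and accumulating exposure until its foraging area is filled) can be sketched compactly. This is a simplified stand-in for the Visual Basic model, with a tiny hypothetical landscape and a quality-weighted exposure rule that is an assumption of this sketch, not a detail from the article.

```python
import random

random.seed(7)

# Each cell: (habitat_area, habitat_quality in [0, 1], concentration).
cells = [(1.0, 0.9, 5.0), (1.0, 0.2, 40.0), (1.0, 0.7, 12.0), (1.0, 0.5, 0.5)]

def foraging_exposure(foraging_area):
    """Random walk over cells, accumulating quality-weighted exposure
    until the given foraging area is used up."""
    area_used, weighted_exposure = 0.0, 0.0
    while area_used < foraging_area:
        area, quality, conc = random.choice(cells)
        area = min(area, foraging_area - area_used)  # do not overshoot
        weighted_exposure += area * quality * conc
        area_used += area
    return weighted_exposure / foraging_area  # area-weighted mean exposure

# Outer Monte Carlo loop: each iteration is one receptor in the population.
estimates = [foraging_exposure(3.0) for _ in range(1000)]
print(round(sum(estimates) / len(estimates), 1))
```

Even in this toy version, the spread of individual estimates around the population mean shows why a single aggregate estimator can over- or understate exposure for receptors whose walks concentrate in clean or contaminated cells.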

16.
Ali Mosleh 《Risk analysis》2012,32(11):1888-1900
Credit risk is the potential exposure of a creditor to an obligor's failure or refusal to repay the debt in principal or interest. The potential of exposure is measured in terms of probability of default. Many models have been developed to estimate credit risk, with rating agencies dating back to the 19th century. They provide their assessment of probability of default and transition probabilities of various firms in their annual reports. Regulatory capital requirements for credit risk outlined by the Basel Committee on Banking Supervision have made it essential for banks and financial institutions to develop sophisticated models in an attempt to measure credit risk with higher accuracy. The Bayesian framework proposed in this article uses the techniques developed in physical sciences and engineering for dealing with model uncertainty and expert accuracy to obtain improved estimates of credit risk and associated uncertainties. The approach uses estimates from one or more rating agencies and incorporates their historical accuracy (past performance data) in estimating future default risk and transition probabilities. Several examples demonstrate that the proposed methodology can assess default probability with accuracy exceeding the estimations of all the individual models. Moreover, the methodology accounts for potentially significant departures from "nominal predictions" due to "upsetting events" such as the 2008 global banking crisis.

17.
Risk assessment methodologies for passive smoking-induced lung cancer
Risk assessment methodologies have been successfully applied to control societal risk from outdoor air pollutants. They are now being applied to indoor air pollutants such as environmental tobacco smoke (ETS) and radon. Nonsmokers' exposures to ETS have been assessed based on dosimetry of nicotine and its metabolite, cotinine, and on exposure to the particulate phase of ETS. Lung cancer responses have been based on both the epidemiology of active and of passive smoking. Nine risk assessments of nonsmokers' lung cancer risk from exposure to ETS have been performed. Some have estimated risks for lifelong nonsmokers only; others have included ex-smokers; still others have estimated total deaths from all causes. To facilitate interstudy comparison, in some cases lung cancers had to be interpolated from a total, or the authors' original estimate had to be adjusted to include ex-smokers. Further, all estimates were adjusted to 1988. Excluding one study whose estimate differs from the mean of the others by two orders of magnitude, the remaining risk assessments are in remarkable agreement. The mean estimate is approximately 5000 +/- 2400 nonsmokers' lung cancer deaths (LCDs) per year. This is a 25% greater risk to nonsmokers than is indoor radon, and is about 57 times greater than the combined estimated cancer risk from all the hazardous outdoor air pollutants currently regulated by the Environmental Protection Agency: airborne radionuclides, asbestos, arsenic, benzene, coke oven emissions, and vinyl chloride.

18.
Modeling for Risk Assessment of Neurotoxic Effects
The regulation of noncancer toxicants, including neurotoxicants, has usually been based upon a reference dose (allowable daily intake). A reference dose is obtained by dividing a no-observed-effect level by uncertainty (safety) factors to account for intraspecies and interspecies sensitivities to a chemical. It is assumed that the risk at the reference dose is negligible, but no attempt generally is made to estimate the risk at the reference dose. A procedure is outlined that provides estimates of risk as a function of dose. The first step is to establish a mathematical relationship between a biological effect and the dose of a chemical. Knowledge of biological mechanisms and/or pharmacokinetics can assist in the choice of plausible mathematical models. The mathematical model provides estimates of average responses as a function of dose. Secondly, estimates of risk require selection of a distribution of individual responses about the average response given by the mathematical model. In the case of a normal or lognormal distribution, only an estimate of the standard deviation is needed. The third step is to define an adverse level for a response so that the probability (risk) of exceeding that level can be estimated as a function of dose. Because a firm response level often cannot be established at which adverse biological effects occur, it may be necessary to at least establish an abnormal response level that only a small proportion of individuals would exceed in an unexposed group. That is, if a normal range of responses can be established, then the probability (risk) of abnormal responses can be estimated. In order to illustrate this process, measures of the neurotransmitter serotonin and its metabolite 5-hydroxyindoleacetic acid in specific areas of the brain of rats and monkeys are analyzed after exposure to the neurotoxicant methylenedioxymethamphetamine. These risk estimates are compared with risk estimates from the quantal approach in which animals are classified as either abnormal or not depending upon abnormal serotonin levels.
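The three-step procedure outlined above (dose-response model for the mean, a normal distribution of individual responses around it, and an abnormal level defining risk) can be sketched directly. The dose-response function, standard deviation, and abnormal cutoff below are all hypothetical; here lower serotonin is treated as the adverse direction, so risk is the probability of falling below the cutoff.

```python
from statistics import NormalDist

def mean_serotonin(dose):
    """Step 1 (assumed model): mean serotonin declines linearly with dose."""
    return 100.0 - 15.0 * dose

sd = 10.0        # step 2 (assumed): SD of individual responses about the mean
abnormal = 80.0  # step 3: level only ~2.3% of unexposed individuals fall below

def risk(dose):
    """Probability of an abnormally low response at the given dose."""
    return NormalDist(mean_serotonin(dose), sd).cdf(abnormal)

print(round(risk(0.0), 3))  # background risk ~0.023
print(round(risk(1.0), 3))  # risk at dose 1 ~0.309
```

The background value is fixed by the cutoff choice (two standard deviations below the unexposed mean), and risk then rises smoothly with dose, which is exactly the dose-dependent risk curve the reference-dose approach does not provide.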

19.
Quantitative Cancer Risk Estimation for Formaldehyde
Of primary concern are irreversible effects, such as cancer induction, that formaldehyde exposure could have on human health. Dose-response data from human exposure situations would provide the most solid foundation for risk assessment, avoiding problematic extrapolations from the health effects seen in nonhuman species. However, epidemiologic studies of human formaldehyde exposure have provided little definitive information regarding dose-response. Reliance must consequently be placed on laboratory animal evidence. An impressive array of data points to significantly nonlinear relationships between rodent tumor incidence and administered dose, and between target tissue dose and administered dose (the latter for both rodents and Rhesus monkeys) following exposure to formaldehyde by inhalation. Disproportionately less formaldehyde binds covalently to the DNA of nasal respiratory epithelium at low than at high airborne concentrations. Use of this internal measure of delivered dose in analyses of rodent bioassay nasal tumor response yields multistage model estimates of low-dose risk, both point and upper bound, that are lower than equivalent estimates based upon airborne formaldehyde concentration. In addition, risk estimates obtained for Rhesus monkeys appear at least 10-fold lower than corresponding estimates for identically exposed Fischer-344 rats.

20.
The benchmark dose (BMD) approach has gained acceptance as a valuable risk assessment tool, but risk assessors still face significant challenges associated with selecting an appropriate BMD/BMDL estimate from the results of a set of acceptable dose‐response models. Current approaches do not explicitly address model uncertainty, and there is an existing need to more fully inform health risk assessors in this regard. In this study, a Bayesian model averaging (BMA) BMD estimation method taking model uncertainty into account is proposed as an alternative to current BMD estimation approaches for continuous data. Using the "hybrid" method proposed by Crump, two strategies of BMA, including both "maximum likelihood estimation based" and "Markov Chain Monte Carlo based" methods, are first applied as a demonstration to calculate model averaged BMD estimates from real continuous dose‐response data. The outcomes from the example data sets examined suggest that the BMA BMD estimates have higher reliability than the estimates from the individual models with highest posterior weight in terms of higher BMDL and smaller 90th percentile intervals. In addition, a simulation study is performed to evaluate the accuracy of the BMA BMD estimator. The results from the simulation study recommend that the BMA BMD estimates have smaller bias than the BMDs selected using other criteria. To further validate the BMA method, some technical issues, including the selection of models and the use of bootstrap methods for BMDL derivation, need further investigation over a more extensive, representative set of dose‐response data.
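At its simplest, model averaging of BMDs is a posterior-weighted combination across the candidate dose-response models. The model names, per-model BMDs, and posterior weights below are all hypothetical placeholders; in practice the weights would come from BIC approximations or MCMC, as the abstract describes.

```python
# Hypothetical BMDs (same dose units) fitted under three candidate models,
# and hypothetical posterior model weights summing to 1.
bmds    = {"hill": 2.1, "power": 2.8, "exponential": 2.4}
weights = {"hill": 0.55, "power": 0.15, "exponential": 0.30}

# Model-averaged BMD: weight each model's estimate by its posterior probability.
bma_bmd = sum(weights[m] * bmds[m] for m in bmds)
print(round(bma_bmd, 3))  # 2.295
```

Averaging in this way keeps information from every acceptable model instead of discarding all but the single highest-weight fit, which is the source of the reliability gain reported above.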
