Similar Literature
20 similar documents retrieved.
1.
Hwang, Jing-Shiang; Chen, James J. Risk Analysis, 1999, 19(6): 1071-1076
The estimation of health risks from exposure to a mixture of chemical carcinogens is generally based on the combination of information from several available single-compound studies. The current practice of directly summing the upper bound risk estimates of individual carcinogenic components as an upper bound on the total risk of a mixture is known to be generally too conservative. Gaylor and Chen (1996, Risk Analysis) proposed a simple procedure to compute an upper bound on the total risk using only the upper confidence limits and central risk estimates of individual carcinogens. The Gaylor-Chen procedure was derived under an assumption of normality for the distributions of the individual risk estimates. In this paper we evaluated the Gaylor-Chen approach in terms of the coverage probability. The performance of the Gaylor-Chen approach depends on the coverages of the upper confidence limits on the true risks of the individual carcinogens. In general, if the coverage probabilities for the individual carcinogens are all approximately equal to the nominal level, then the Gaylor-Chen approach should perform well. However, the Gaylor-Chen approach can be conservative or anti-conservative if some or all of the individual upper confidence limit estimates are conservative or anti-conservative.
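To make the combination rule concrete, here is a minimal numerical sketch. It assumes the normal-approximation form usually associated with Gaylor and Chen (1996): the upper bound on the total risk is the sum of the central estimates plus the root-sum-of-squares of the individual margins (upper limit minus central estimate). The function name and the risk values are hypothetical illustrations, not numbers from the paper.

```python
import math

def gaylor_chen_upper_bound(central, upper):
    """Upper bound on total mixture risk from per-chemical central estimates
    and upper confidence limits, assuming approximately normal, independent
    risk estimates (Gaylor-Chen style combination)."""
    margins = [u - c for c, u in zip(central, upper)]
    return sum(central) + math.sqrt(sum(m * m for m in margins))

# Hypothetical single-compound risk estimates (central, upper 95% limit)
central = [2e-6, 5e-6, 1e-6]
upper   = [8e-6, 2e-5, 4e-6]

naive_sum = sum(upper)                               # current practice: sum the upper limits
combined  = gaylor_chen_upper_bound(central, upper)  # combined upper bound
print(f"sum of individual upper limits: {naive_sum:.2e}")
print(f"combined upper bound:           {combined:.2e}")  # smaller, i.e. less conservative
```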

2.
The excess cancer risk that might result from exposure to a mixture of chemical carcinogens usually must be estimated using data from experiments conducted with individual chemicals. In estimating such risk, it is commonly assumed that the total risk due to the mixture is the sum of the risks of the individual components, provided that the risks associated with individual chemicals at levels present in the mixture are low. This assumption, while itself not necessarily conservative, has led to the conservative practice of summing individual upper-bound risk estimates in order to obtain an upper bound on the total excess cancer risk for a mixture. Less conservative procedures are described here and are illustrated for the case of a mixture of four carcinogens.

3.
In the absence of data from multiple-compound exposure experiments, the health risk from exposure to a mixture of chemical carcinogens is generally based on the results of the individual single-compound experiments. A procedure to obtain an upper confidence limit on the total risk is proposed under the assumption that total risk for the mixture is additive. It is shown that the current practice of simply summing the individual upper-confidence-limit risk estimates as the upper-confidence-limit estimate on the total excess risk of the mixture may overestimate the true upper bound. In general, if the individual upper-confidence-limit risk estimates are on the same order of magnitude, the proposed method gives a smaller upper-confidence-limit risk estimate than the estimate based on summing the individual upper-confidence-limit estimates; the difference increases as the number of carcinogenic components increases.

4.
Human populations are generally exposed simultaneously to a number of toxicants present in the environment, including complex mixtures of unknown and variable origin. While scientific methods for evaluating the potential carcinogenic risks of pure compounds are relatively well established, methods for assessing the risks of complex mixtures are somewhat less developed. This article provides a report of a recent workshop on carcinogenic mixtures sponsored by the Committee on Toxicology of the U.S. National Research Council, in which toxicological, epidemiological, and statistical approaches to carcinogenic risk assessment for mixtures were discussed. Complex mixtures, such as diesel emissions and tobacco smoke, have been shown to have carcinogenic potential. Bioassay-directed fractionation based on short-term screening tests for genotoxicity has also been used in identifying carcinogenic components of mixtures. Both toxicological and epidemiological studies have identified clear interactions between chemical carcinogens, including synergistic effects at moderate to high doses. To date, laboratory studies have demonstrated over 900 interactions involving nearly 200 chemical carcinogens. At lower doses, theoretical arguments suggest that risks may be near additive. Thus, additivity at low doses has been invoked as a working hypothesis by regulatory authorities in the absence of evidence to the contrary. Future studies of the joint effects of carcinogenic agents may serve to elucidate the mechanisms by which interactions occur at higher doses.

5.
6.
In light of the Armitage-Doll multistage carcinogenesis theory, this paper examines the assumption that an additive relative risk relationship is indicative of two carcinogens that affect the same stage in the cancer process. We present formulas to compute excess cancer risks for a variety of limited-duration exposure patterns to two carcinogens that affect the first and penultimate stages. Using an index of synergy proposed by Thomas (1982), we find that a number of these patterns produce additive, or nearly additive, relative risk relationships. The consistent feature of these patterns is that the two exposure periods are of short duration and occur close together.

7.
Experimental Design of Bioassays for Screening and Low Dose Extrapolation
Relatively high doses of chemicals generally are employed in animal bioassays to detect potential carcinogens with relatively small numbers of animals. The problem investigated here is the development of experimental designs which are effective for high to low dose extrapolation for tumor incidence as well as for screening (detecting) carcinogens. Several experimental designs are compared over a wide range of different dose response curves. Linear extrapolation is used below the experimental data range to establish an upper bound on carcinogenic risk at low doses. The goal is to find experimental designs which minimize the upper bound on low dose risk estimates (i.e., maximize the allowable dose for a given level of risk). The maximum tolerated dose (MTD) is employed for screening purposes. Among the designs investigated, experiments with doses at the MTD, 1/2 MTD, 1/4 MTD, and controls generally provide relatively good data for low dose extrapolation with relatively good power for detecting carcinogens. For this design, equal numbers of animals per dose level perform as well as unequal allocations.
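To illustrate the extrapolation step, the sketch below takes an upper confidence bound on extra risk at the lowest positive dose, divides by that dose to get a slope, and uses the slope to find the dose corresponding to a target risk. The bioassay counts, the background rate, and the simple normal-approximation bound are hypothetical; they stand in for, rather than reproduce, the designs compared in the paper.

```python
import math

def upper_bound_extra_risk(tumors, n, background, z=1.645):
    """Approximate one-sided 95% upper confidence bound on the extra tumor
    risk in a dose group (normal approximation to the binomial)."""
    p_hat = tumors / n
    upper = p_hat + z * math.sqrt(p_hat * (1 - p_hat) / n)
    return max(upper - background, 0.0)

# Hypothetical result at the lowest positive dose of an MTD, MTD/2, MTD/4, control design
dose = 25.0                                   # mg/kg/day, say 1/4 of the MTD
ub_risk = upper_bound_extra_risk(tumors=3, n=50, background=0.02)

slope = ub_risk / dose                        # linear extrapolation below the data range
target_risk = 1e-6
allowable_dose = target_risk / slope          # largest dose consistent with the target risk
print(f"upper-bound slope: {slope:.3e} per mg/kg/day")
print(f"allowable dose at extra risk {target_risk:.0e}: {allowable_dose:.3e} mg/kg/day")
```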

8.
Aggregate exposure metrics based on sums or weighted averages of component exposures are widely used in risk assessments of complex mixtures, such as asbestos-associated dusts and fibers. Allowed exposure levels based on total particle or fiber counts and estimated ambient concentrations of such mixtures may be used to make costly risk-management decisions intended to protect human health and to remediate hazardous environments. We show that, in general, aggregate exposure information alone may be inherently unable to guide rational risk-management decisions when the components of the mixture differ significantly in potency and when the percentage compositions of the mixture exposures differ significantly across locations. Under these conditions, which are not uncommon in practice, aggregate exposure metrics may be "worse than useless," in that risk-management decisions based on them are less effective than decisions that ignore the aggregate exposure information and select risk-management actions at random. The potential practical significance of these results is illustrated by a case study of 27 exposure scenarios in El Dorado Hills, California, where applying an aggregate unit risk factor (from EPA's IRIS database) to aggregate exposure metrics produces average risk estimates about 25 times greater, and of uncertain predictive validity, than risk estimates based on the specific components of the mixture that have been hypothesized to pose risks of human lung cancer and mesothelioma.
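The core argument, that an aggregate metric loses information when component potencies and mixture compositions both vary, can be shown with a toy calculation; the potencies, compositions, and aggregate unit risk factor below are hypothetical, not the El Dorado Hills values.

```python
# Two hypothetical fiber classes with very different potencies (risk per unit concentration)
potency = {"amphibole-like": 1e-3, "low-potency": 1e-6}

# Three hypothetical exposure scenarios with the same TOTAL concentration
# but very different percentage compositions
scenarios = {
    "site A": {"amphibole-like": 0.9, "low-potency": 0.1},
    "site B": {"amphibole-like": 0.1, "low-potency": 0.9},
    "site C": {"amphibole-like": 0.5, "low-potency": 0.5},
}
total_concentration = 10.0          # arbitrary units (e.g., fiber-concentration-years)
aggregate_urf = 5e-4                # hypothetical one-size-fits-all unit risk factor

for name, composition in scenarios.items():
    component_risk = sum(
        potency[k] * frac * total_concentration for k, frac in composition.items()
    )
    aggregate_risk = aggregate_urf * total_concentration
    print(f"{name}: component-based risk {component_risk:.2e}, "
          f"aggregate-metric risk {aggregate_risk:.2e}")
# The aggregate metric gives the same answer everywhere, so ranking or remediating
# sites by it cannot distinguish the high-potency-dominated site from the low-potency one.
```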

9.
Milvy, P. Risk Analysis, 1986, 6(1): 69-79
A simple relationship is formulated that helps to discriminate between acceptable and unacceptable individual lifetime risks (R_L) to populations that are exposed to chemical carcinogens. The relationship is an empirical one and is developed using objective risk data as well as subjective risk levels that have found substantial acceptance among those concerned with carcinogenic risk assessment issues. The expression sets acceptable levels of lifetime carcinogenic risk and is a function of the total population exposed to the carcinogen. Its use in risk assessment and risk management provides guidance in distinguishing those carcinogens that should be regulated because of the health hazard they pose from those whose regulation may not be needed.

10.
Polycyclic aromatic hydrocarbons (PAHs) have been labeled contaminants of concern due to their carcinogenic potential, insufficient toxicological data, environmental ubiquity, and inconsistencies in the composition of environmental mixtures. The Environmental Protection Agency is reevaluating current methods for assessing the toxicity of PAHs, including the assumption of toxic additivity in mixtures. This study was aimed at testing mixture interactions through in vitro cell culture experimentation, and modeling the toxicity using quantitative structure‐activity relationships (QSAR). Clone‐9 rat liver cells were used to analyze cellular proliferation, viability, and genotoxicity of 15 PAHs in single doses and binary mixtures. Tests revealed that many mixtures have nonadditive toxicity, but display varying mixture effects depending on the mixture composition. Many mixtures displayed antagonism, similar to other published studies. QSARs were then developed using the genetic function approximation algorithm to predict toxic activity both in single PAH congeners and in binary mixtures. Effective concentrations inhibiting 50% of the cell populations were modeled, with R² = 0.90, 0.99, and 0.84, respectively. The QSAR mixture algorithms were then adjusted to account for the observed mixture interactions as well as the mixture composition (ratios) to assess the feasibility of QSARs for mixtures. Based on these results, toxic additivity is improbable, and environmental PAH mixtures are therefore likely to show nonadditive responses when complex interactions occur between components. Furthermore, QSAR may be a useful tool to help bridge the data gaps surrounding the assessment of human health risks that are associated with PAH exposures.
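As a rough illustration of the modeling step (using ordinary least squares rather than the genetic function approximation algorithm actually used), the sketch below fits a QSAR-style model predicting a binary mixture's log EC50 from component descriptors and the mixture ratio; all descriptor values and responses are hypothetical.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Hypothetical binary-mixture data: each row is one mixture experiment.
# Features: a descriptor for PAH 1, a descriptor for PAH 2 (e.g., logKow-like
# values), and the fraction of PAH 1 in the mixture.
X = np.array([
    [4.5, 5.2, 0.50],
    [4.5, 6.1, 0.25],
    [5.2, 6.1, 0.75],
    [4.9, 5.8, 0.50],
    [5.6, 6.3, 0.33],
    [4.7, 5.5, 0.66],
])
# Hypothetical observed log10(EC50) values for cell viability
y = np.array([-4.1, -4.6, -4.9, -4.4, -5.1, -4.3])

model = LinearRegression().fit(X, y)
print("coefficients:", model.coef_, "intercept:", model.intercept_)
print("R^2 on training data:", model.score(X, y))

# Predict log10(EC50) for a new hypothetical 60:40 mixture
new_mixture = np.array([[5.0, 6.0, 0.60]])
print("predicted log10(EC50):", model.predict(new_mixture)[0])
```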

11.
Quantitative cancer risk assessments are typically expressed as plausible upper bounds rather than estimates of central tendency. In analyses involving several carcinogens, these upper bounds are often summed to estimate overall risk. This raises the question of whether a sum of upper bounds is itself a plausible estimate of overall risk. This question can be asked in two ways: whether the sum yields an improbable estimate of overall risk (that is, is it only remotely possible for the true sum of risks to match the sum of upper bounds), or whether the sum gives a misleading estimate (that is, is the true sum of risks likely to be very different from the sum of upper bounds). Analysis of four case studies shows that as the number of risk estimates increases, their sum becomes increasingly improbable, but not misleading. Though the overall risk depends on the independence, additivity, and number of risk estimates, as well as the shapes of the underlying risk distributions, sums of upper bounds provide useful information about the overall risk and can be adjusted downward to give a more plausible (perhaps probable) upper bound, or even a central estimate of overall risk.
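The distinction between an improbable and a misleading sum can be seen in a small Monte Carlo sketch: as more lognormally distributed risks are added, the sum of their individual 95th percentiles becomes an increasingly extreme (improbable) value of the true total, yet remains within a modest multiple of the total's own 95th percentile. The distributions below are hypothetical, not the four case studies analyzed in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n_sim = 100_000

for n_chemicals in (2, 5, 10, 20):
    # Hypothetical independent, lognormally distributed risk estimates
    risks = rng.lognormal(mean=np.log(1e-6), sigma=1.0, size=(n_sim, n_chemicals))
    true_totals = risks.sum(axis=1)

    upper_95_each = np.exp(np.log(1e-6) + 1.645 * 1.0)     # 95th percentile of one risk
    sum_of_uppers = n_chemicals * upper_95_each

    prob_exceed = (true_totals > sum_of_uppers).mean()      # how improbable is the sum?
    ratio = sum_of_uppers / np.quantile(true_totals, 0.95)  # how misleading is it?
    print(f"{n_chemicals:2d} chemicals: P(total > sum of upper bounds) = {prob_exceed:.4f}, "
          f"sum of upper bounds / 95th pct of total = {ratio:.2f}")
```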

12.
This article explores the use of an approach for setting default values for noncancer toxicity, developed as part of the Threshold of Toxicological Concern (TTC), for the evaluation of the chronic noncarcinogenic effects of certain chemical mixtures. Individuals are exposed to many mixtures for which there are little or no toxicological data on some or all of the mixture components. The approach developed in the TTC can provide a basis for conservative estimates of the toxicity of the mixture components when compound-specific data are not available. The application of this approach to multiple chemicals in a mixture, however, has implications for the statistical assumptions made in developing component-based estimates of mixtures. Specifically, conservative assumptions that are appropriate for one compound may become overly conservative when applied to all components of a mixture. This overestimation can be investigated by modeling the uncertainty in toxicity standards. In this article the approach is applied to both hypothetical and actual examples of chemical mixtures and the potential for overestimation is investigated. The results indicate that the use of the approach leads to conservative estimates of mixture toxicity and therefore its use is most appropriate for screening assessments of mixtures.
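The overestimation argument can be sketched numerically: if each component's default value is set at a conservative lower percentile of an uncertain "true" safe level, a hazard-index-style screen built from those defaults becomes protective with near certainty as components are added, i.e., jointly far more conservative than the per-compound confidence level suggests. The lognormal uncertainty model and the numbers below are hypothetical illustrations of that reasoning, not actual TTC values.

```python
import numpy as np

rng = np.random.default_rng(1)
n_sim = 100_000

for n_components in (1, 5, 20):
    # Hypothetical uncertain "true" acceptable intakes for each component,
    # lognormally distributed; the conservative default is their 5th percentile.
    mu, sigma = np.log(100.0), 0.8
    true_thresholds = rng.lognormal(mean=mu, sigma=sigma, size=(n_sim, n_components))
    default = np.exp(mu - 1.645 * sigma)

    exposure = 1.0                                   # same hypothetical intake per component
    hi_default = n_components * exposure / default   # screening hazard index built from defaults
    hi_true = (exposure / true_thresholds).sum(axis=1)

    coverage = (hi_default >= hi_true).mean()        # how often the screen bounds the true index
    print(f"{n_components:2d} component(s): screen bounds the true hazard index "
          f"{coverage:.1%} of the time")
```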

13.
Gibb, H. J.; Chen, C. W. Risk Analysis, 1986, 6(2): 167-170
Under the assumption of multistage carcinogenesis, a multiplicative carcinogenic effect would be produced by the action of different carcinogens in a mixture on different stages of the carcinogenic process. An additive effect would be produced by the effect of different carcinogens on the same stage. A mathematical argument for these hypotheses is presented here.
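A compact version of the argument can be written down from the standard Armitage-Doll k-stage approximation with dose-linear stage transition rates; the sketch below is an illustrative reconstruction, not necessarily the exact derivation given in the paper.

```latex
% Armitage--Doll $k$-stage approximation: age-specific incidence
% $I(t) \propto \lambda_1 \lambda_2 \cdots \lambda_k\, t^{k-1}$, where a carcinogen
% at dose $d_j$ multiplies the rate of the stage it acts on:
% $\lambda_i \to \lambda_i (1 + \beta_j d_j)$.
\begin{align}
  \text{different stages:}\quad
    \mathrm{RR}(d_1, d_2) &= (1 + \beta_1 d_1)(1 + \beta_2 d_2)
    && \text{(multiplicative joint effect)} \\
  \text{same stage, } \lambda_i \to \lambda_i (1 + \beta_1 d_1 + \beta_2 d_2):\quad
    \mathrm{RR}(d_1, d_2) &= 1 + \beta_1 d_1 + \beta_2 d_2
    && \text{(additive joint effect)}
\end{align}
```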

14.
Ten years ago, the National Academy of Science released its risk assessment/risk management (RA/RM) “paradigm” that served to crystallize much of the early thinking about these concepts. By defining RA as a four-step process, operationally independent from RM, the paradigm has presented society with a scheme, or a conceptually common framework, for addressing many risky situations (e.g., carcinogens, noncarcinogens, and chemical mixtures). The procedure has facilitated decision-making in a wide variety of situations and has identified the most important research needs. The past decade, however, has revealed areas where additional progress is needed. These include addressing the appropriate interaction (not isolation) between RA and RM, improving the methods for assessing risks from mixtures, dealing with “adversity of effect,” deciding whether “hazard” should imply an exposure to environmental conditions or to laboratory conditions, and evolving the concept to include both health and ecological risk. Interest in and expectations of risk assessment are increasing rapidly. The emerging concept of “comparative risk” (i.e., distinguishing between large risks and smaller risks that may be qualitatively different) is at a level comparable to that held by the concept of “risk” just 10 years ago. Comparative risk stands in need of a paradigm of its own, especially given the current economic limitations. “Times are tough; Brother, can you paradigm?”

15.
Driven by differing statutory mandates and programmatic separation of regulatory responsibilities between federal, state, and tribal agencies, distinct chemical and radiation risk management strategies have evolved. In the field this separation poses real challenges, since many of the major environmental risk management decisions we face today require the evaluation of both types of risks. Over the last decade, federal, state, and tribal agencies have continued to discuss their different approaches and explore areas where their activities could be harmonized. The current framework for managing public exposures to chemical carcinogens has been referred to as a "bottom up" approach: a risk between 10⁻⁴ and 10⁻⁶ is established as an upper-bound goal. In contrast, a "top down" approach, which sets an upper-bound dose limit coupled with the site-specific As Low As Reasonably Achievable (ALARA) principle, is in place to manage individual exposure to radiation. While radiation risks are typically managed on a cumulative basis, exposure to chemicals is generally managed on a chemical-by-chemical, medium-by-medium basis. There are also differences in the nature and size of sites where chemical and radiation contamination is found. Such differences result in divergent management concerns. In spite of these differences, there are several common and practical concerns among radiation and chemical risk managers. They include 1) the cost of site redevelopment and long-term stewardship, 2) public acceptance and involvement, and 3) the need for a flexible risk management framework to address the first two issues. This article attempts to synthesize key differences, opportunities for harmonization, and challenges ahead.

16.
Cancer risks for ethylene dibromide (EDB) were estimated by fitting several linear non-threshold additive models to data from a gavage bioassay. Risks predicted by these models were compared to the observed cancer mortality among a cohort of workers occupationally exposed to the same chemical. Models that accounted for the shortened latency period in the gavaged rats predicted upper bound risks that were within a factor of 3 of the observed cancer deaths. Data from an animal inhalation study of EDB also were compatible with the epidemiologic data. These findings contradict those of Ramsey et al. (1978), who reported that extrapolation from animal data produced highly exaggerated risk estimates for EDB-exposed workers. This paper explores the reasons for these discrepant findings.

17.
When assessing risks posed by environmental chemical mixtures, whole mixture approaches are preferred to component approaches. When toxicological data on whole mixtures as they occur in the environment are not available, Environmental Protection Agency guidance states that toxicity data from a mixture considered “sufficiently similar” to the environmental mixture can serve as a surrogate. We propose a novel method to examine whether mixtures are sufficiently similar, when exposure data and mixture toxicity study data from at least one representative mixture are available. We define sufficient similarity using equivalence testing methodology comparing the distance between benchmark dose estimates for mixtures in both data‐rich and data‐poor cases. We construct a “similar mixtures risk indicator” (SMRI) (analogous to the hazard index) on sufficiently similar mixtures linking exposure data with mixtures toxicology data. The methods are illustrated using pyrethroid mixtures occurrence data collected in child care centers (CCC) and dose‐response data examining acute neurobehavioral effects of pyrethroid mixtures in rats. Our method shows that the mixtures from 90% of the CCCs were sufficiently similar to the dose‐response study mixture. Using exposure estimates for a hypothetical child, the 95th percentile of the (weighted) SMRI for these sufficiently similar mixtures was 0.20 (i.e., where SMRI <1, less concern; >1, more concern).
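As a rough, hazard-index-style illustration of how such an indicator can link exposure estimates to the dose-response data of a sufficiently similar reference mixture, the sketch below divides an estimated total intake by a benchmark dose from the reference study, so that values below 1 indicate less concern. The numbers and the simplified ratio form are hypothetical; the published SMRI construction (including its weighting) may differ in detail.

```python
def similar_mixture_risk_indicator(component_doses, reference_bmdl):
    """Hazard-index-style indicator: total estimated intake of a mixture judged
    sufficiently similar to a tested reference mixture, divided by a benchmark
    dose (lower bound) from that reference study.
    Values < 1 suggest less concern, > 1 more concern."""
    return sum(component_doses.values()) / reference_bmdl

# Hypothetical daily intakes (mg/kg-day) of pyrethroids for one child at one CCC
component_doses = {"permethrin": 0.004, "cypermethrin": 0.001, "bifenthrin": 0.0005}
reference_bmdl = 0.03   # hypothetical BMDL for the tested reference mixture (mg/kg-day)

smri = similar_mixture_risk_indicator(component_doses, reference_bmdl)
print(f"SMRI = {smri:.2f}  ({'less' if smri < 1 else 'more'} concern)")
```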

18.
A population's long-term exposure distribution for a specified compound is typically estimated from short-term measurements of a sample of individuals from the population of interest. In this situation, estimates of a population's long-term exposure parameters contain two sources of sampling error: the typical sampling error associated with taking a sample from the population and the sampling error from estimating individual long-term exposure. These components are not separable in the data collected, i.e., the value observed is due partly to the individual sampled and partly to the time at which the individual was sampled. Hence, the distribution of the data collected is not the same as the population exposure distribution. Monte Carlo simulations are used to compare the distribution of the observed data with the population exposure distribution for a simple additive model. A simple adjustment to standard estimates of percentiles and quantiles is shown to be effective in reducing bias, particularly for the upper percentiles and quantiles of the population distribution.
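A minimal simulation of the additive model shows both the inflation of the observed distribution and a simple shrinkage-style adjustment of its upper percentiles; the variance components and the particular adjustment used below are illustrative assumptions, not necessarily the exact scheme evaluated in the paper.

```python
import numpy as np

rng = np.random.default_rng(2)
n_people, sigma_b, sigma_w = 10_000, 0.8, 1.0   # hypothetical between/within SDs (log scale)
mu = np.log(5.0)                                 # hypothetical geometric-mean exposure

person_means = rng.normal(mu, sigma_b, n_people)              # long-term (log) exposure per person
observed = person_means + rng.normal(0.0, sigma_w, n_people)  # one short-term measurement each

q = 0.95
pop_95 = np.quantile(person_means, q)   # target: 95th percentile of the population distribution
obs_95 = np.quantile(observed, q)       # naive estimate from the data as collected

# Simple adjustment: shrink the observed deviation from the mean by the fraction
# of total variance that is between-person (in practice the variance components
# would themselves be estimated, e.g., from repeated measurements).
shrink = np.sqrt(sigma_b**2 / (sigma_b**2 + sigma_w**2))
adj_95 = observed.mean() + shrink * (obs_95 - observed.mean())

print(f"true population 95th percentile (log scale): {pop_95:.3f}")
print(f"naive estimate from observed data:           {obs_95:.3f}")
print(f"adjusted estimate:                           {adj_95:.3f}")
```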

19.
Compliance Versus Risk in Assessing Occupational Exposures
Assessments of occupational exposures to chemicals are generally based upon the practice of compliance testing in which the probability of compliance is related to the exceedance [γ, the likelihood that any measurement would exceed an occupational exposure limit (OEL)] and the number of measurements obtained. On the other hand, workers’ chronic health risks generally depend upon cumulative lifetime exposures which are not directly related to the probability of compliance. In this paper we define the probability of “overexposure” (θ) as the likelihood that individual risk (a function of cumulative exposure) exceeds the risk inherent in the OEL (a function of the OEL and duration of exposure). We regard θ as a relevant measure of individual risk for chemicals, such as carcinogens, which produce chronic effects after long-term exposures but not necessarily for acutely-toxic substances which can produce effects relatively quickly. We apply a random-effects model to data from 179 groups of workers, exposed to a variety of chemical agents, and obtain parameter estimates for the group mean exposure and the within- and between-worker components of variance. These estimates are then combined with OELs to generate estimates of γ and θ. We show that compliance testing can significantly underestimate the health risk when sample sizes are small. That is, there can be large probabilities of compliance with typical sample sizes, despite the fact that large proportions of the working population have individual risks greater than the risk inherent in the OEL. We demonstrate further that, because the relationship between θ and γ depends upon the within- and between-worker components of variance, it cannot be assumed a priori that exceedance is a conservative surrogate for overexposure. Thus, we conclude that assessment practices which focus upon either compliance or exceedance are problematic and recommend that employers evaluate exposures relative to the probabilities of overexposure.
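Under a lognormal random-effects model the two quantities being contrasted can be written down directly: exceedance is the chance that a single measurement exceeds the OEL, while one simplified reading of overexposure is the chance that a worker's long-run mean exposure exceeds the OEL. The parameter values below are hypothetical, and the paper's definition of θ (framed in terms of cumulative risk relative to the risk inherent in the OEL) is more elaborate than this sketch.

```python
import numpy as np
from scipy.stats import norm

oel = 5.0                  # hypothetical occupational exposure limit
mu = np.log(1.0)           # hypothetical group geometric-mean exposure (log scale)

# Two hypothetical splits of the same overall variability between and within workers
for sigma_b, sigma_w in [(0.3, 1.5), (2.0, 1.0)]:
    # Exceedance: P(a single shift measurement exceeds the OEL)
    gamma = 1 - norm.cdf((np.log(oel) - mu) / np.hypot(sigma_b, sigma_w))
    # Overexposure (simplified reading): P(a worker's long-run mean exposure exceeds
    # the OEL); for a lognormal, an individual's mean is exp(mu_i + sigma_w**2 / 2).
    theta = 1 - norm.cdf((np.log(oel) - sigma_w**2 / 2 - mu) / sigma_b)
    print(f"sigma_b={sigma_b}, sigma_w={sigma_w}: "
          f"exceedance gamma={gamma:.3f}, overexposure theta={theta:.3f}")
# Depending on how the variance splits between and within workers, theta can be either
# smaller or larger than gamma, so exceedance cannot be assumed a priori to be a
# conservative surrogate for overexposure.
```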

20.
Bogen, Kenneth T. Risk Analysis, 2014, 34(10): 1780-1784
A 2009 report of the National Research Council (NRC) recommended that the U.S. Environmental Protection Agency (EPA) increase its estimates of increased cancer risk from exposure to environmental agents by ~7-fold, due to a typical ~25-fold ratio between the upper 95th-percentile and median persons’ cancer sensitivity, assuming approximately lognormally distributed sensitivities. EPA inaction on this issue has raised concerns that cancer risks to environmentally exposed populations remain systematically underestimated. This concern is unwarranted, however, because EPA point estimates of cancer risk have always pertained to the average, not the median, person in each modeled exposure group. Nevertheless, EPA has yet to explain clearly how its risk characterization and risk management policies concerning individual risks from environmental chemical carcinogens appropriately address the broad variability in human cancer susceptibility that has been a focus of two major NRC reports to EPA concerning its risk assessment methods.
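The arithmetic behind the ~7-fold figure can be reproduced from the lognormal assumption stated here: if the 95th-percentile person is about 25 times more sensitive than the median person, the implied log-scale spread gives a mean-to-median sensitivity ratio of roughly 7. The short check below is a sketch of that reasoning, not a calculation taken from the paper.

```python
import math
from scipy.stats import norm

ratio_95_to_median = 25.0                   # stated ~25-fold ratio, 95th percentile vs. median
z95 = norm.ppf(0.95)                        # ~1.645

sigma = math.log(ratio_95_to_median) / z95  # lognormal log-scale SD implied by that ratio
mean_to_median = math.exp(sigma**2 / 2)     # for a lognormal, mean / median = exp(sigma^2 / 2)

print(f"implied sigma:       {sigma:.3f}")
print(f"mean / median ratio: {mean_to_median:.1f}   (~7-fold, matching the NRC recommendation)")
```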

