Similar Articles (20 results)
1.
There has been considerable discussion regarding the conservativeness of low-dose cancer risk estimates based upon linear extrapolation from upper confidence limits. Various groups have expressed a need for best (point) estimates of cancer risk in order to improve risk/benefit decisions. Point estimates of carcinogenic potency obtained from maximum likelihood estimates of low-dose slope may be highly unstable, being sensitive both to the choice of the dose–response model and possibly to minimal perturbations of the data. For carcinogens that augment background carcinogenic processes and/or for mutagenic carcinogens, at low doses the tumor incidence versus target tissue dose is expected to be linear. Pharmacokinetic data may be needed to identify and adjust for exposure-dose nonlinearities. Based on the assumption that the dose response is linear over low doses, a stable point estimate for low-dose cancer risk is proposed. Since various models give similar estimates of risk down to levels of 1%, a stable estimate of the low-dose cancer slope is provided by ŝ = 0.01/ED01, where ED01 is the dose corresponding to an excess cancer risk of 1%. Thus, low-dose estimates of cancer risk are obtained by risk = ŝ × dose. The proposed procedure is similar to one utilized in the past by the Center for Food Safety and Applied Nutrition, Food and Drug Administration. The upper confidence limit, s*, corresponding to this point estimate of low-dose slope is similar to the upper limit, q1*, obtained from the generalized multistage model. The advantage of the proposed procedure is that ŝ provides stable estimates of low-dose carcinogenic potency that are not unduly influenced by small perturbations of the tumor incidence rates, unlike the maximum likelihood estimate q̂1.
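The slope and risk formulas quoted in this abstract reduce to a couple of lines. A minimal sketch, with the ED01 and exposure dose as hypothetical inputs:

```python
def low_dose_slope(ed01: float) -> float:
    """Stable point estimate of the low-dose cancer slope: s_hat = 0.01 / ED01,
    where ED01 is the dose corresponding to a 1% excess cancer risk."""
    if ed01 <= 0:
        raise ValueError("ED01 must be positive")
    return 0.01 / ed01

def low_dose_risk(ed01: float, dose: float) -> float:
    """Linear low-dose extrapolation: risk = s_hat * dose."""
    return low_dose_slope(ed01) * dose

# Hypothetical example: ED01 = 5 mg/kg/day, environmental dose = 0.001 mg/kg/day
risk = low_dose_risk(5.0, 0.001)  # 0.002 * 0.001 = 2e-06
```

Because ŝ is anchored to the ED01, a quantity that common dose–response models estimate similarly, the resulting risk estimate is far less sensitive to model choice than a maximum likelihood slope.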

2.
In the absence of data from multiple-compound exposure experiments, the health risk from exposure to a mixture of chemical carcinogens is generally based on the results of the individual single-compound experiments. A procedure to obtain an upper confidence limit on the total risk is proposed under the assumption that total risk for the mixture is additive. It is shown that the current practice of simply summing the individual upper-confidence-limit risk estimates as the upper-confidence-limit estimate on the total excess risk of the mixture may overestimate the true upper bound. In general, if the individual upper-confidence-limit risk estimates are on the same order of magnitude, the proposed method gives a smaller upper-confidence-limit risk estimate than the estimate based on summing the individual upper-confidence-limit estimates; the difference increases as the number of carcinogenic components increases.
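The abstract does not spell out the proposed formula, but a standard normal-theory way to bound a sum, in the spirit of the Gaylor–Chen procedure discussed in item 5 below, is to add the central estimates and combine the individual confidence margins in quadrature rather than summing them. A hedged sketch with invented numbers:

```python
import math

def summed_ucl(ucls):
    """Current practice: sum the individual upper-confidence-limit estimates."""
    return sum(ucls)

def combined_ucl(centrals, ucls):
    """Alternative bound under additivity, assuming approximately normal
    individual risk estimates: sum the central estimates and combine the
    margins (UCL_i - central_i) in quadrature."""
    margins = [u - c for c, u in zip(centrals, ucls)]
    return sum(centrals) + math.sqrt(sum(m * m for m in margins))

centrals = [1.0e-6, 2.0e-6, 1.5e-6]     # hypothetical central risk estimates
ucls = [3.0e-6, 5.0e-6, 4.0e-6]         # hypothetical individual UCLs
naive = summed_ucl(ucls)                # 1.2e-05
tighter = combined_ucl(centrals, ucls)  # smaller, ~8.9e-06
```

As the abstract notes, the gap between the naive sum and the combined bound widens as the number of components of similar magnitude grows.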

3.
Quantitative cancer risk assessments are typically expressed as plausible upper bounds rather than estimates of central tendency. In analyses involving several carcinogens, these upper bounds are often summed to estimate overall risk. This raises the question of whether a sum of upper bounds is itself a plausible estimate of overall risk. The question can be asked in two ways: whether the sum yields an improbable estimate of overall risk (that is, is it only remotely possible for the true sum of risks to match the sum of upper bounds), or whether the sum gives a misleading estimate (that is, is the true sum of risks likely to be very different from the sum of upper bounds). Analysis of four case studies shows that as the number of risk estimates increases, their sum becomes increasingly improbable, but not misleading. Though the overall risk depends on the independence, additivity, and number of risk estimates, as well as the shapes of the underlying risk distributions, sums of upper bounds provide useful information about the overall risk and can be adjusted downward to give a more plausible (perhaps probable) upper bound, or even a central estimate of overall risk.

4.
Risk assessments for carcinogens are being developed through an accelerated process in California as a part of the state's implementation of Proposition 65, the Safe Drinking Water and Toxic Enforcement Act. Estimates of carcinogenic potency made by the California Department of Health Services (CDHS) are generally similar to estimates made by the U.S. Environmental Protection Agency (EPA). The largest differences are due to EPA's use of the maximum likelihood estimate instead of CDHS' use of the upper 95% confidence bounds on potencies derived from human data and to procedures used to correct for studies of short duration or with early mortality. Numerical limits derived from these potency estimates constitute "no significant risk" levels, which govern exemption from Proposition 65's discharge prohibition and warning requirements. Under Proposition 65 regulations, lifetime cancer risks less than 10⁻⁵ are not significant and cumulative intake is not considered. Following these regulations, numerical limits for a number of Proposition 65 carcinogens that are applicable to the control of toxic discharges are less stringent than limits under existing federal water pollution control laws. Thus, existing federal limits will become the Proposition 65 levels for discharge. Chemicals currently not covered by federal and state controls will eventually be subject to discharge limitations under Proposition 65. "No significant risk" levels (expressed in terms of daily intake of carcinogens) also trigger warning requirements under Proposition 65 that are more extensive than existing state or federal requirements. A variety of chemical exposures from multiple sources are identified that exceed Proposition 65's "no significant risk" levels.

5.
Hwang, Jing-Shiang; Chen, James J. Risk Analysis, 1999, 19(6):1071-1076
The estimation of health risks from exposure to a mixture of chemical carcinogens is generally based on the combination of information from several available single-compound studies. The current practice of directly summing the upper-bound risk estimates of individual carcinogenic components as an upper bound on the total risk of a mixture is known to be generally too conservative. Gaylor and Chen (1996, Risk Analysis) proposed a simple procedure to compute an upper bound on the total risk using only the upper confidence limits and central risk estimates of individual carcinogens. The Gaylor-Chen procedure was derived under an assumption of normality for the distributions of the individual risk estimates. In this paper we evaluate the Gaylor-Chen approach in terms of coverage probability. Its performance depends on the coverages of the upper confidence limits on the true risks of the individual carcinogens: if the coverage probabilities for the individual carcinogens are all approximately equal to the nominal level, the Gaylor-Chen approach should perform well. However, the approach can be conservative or anti-conservative if some or all of the individual upper confidence limit estimates are conservative or anti-conservative.

6.
To quantify the health benefits of environmental policies, economists generally require estimates of the reduced probability of illness or death. For policies that reduce exposure to carcinogenic substances, these estimates traditionally have been obtained through the linear extrapolation of experimental dose-response data to low-exposure scenarios as described in the U.S. Environmental Protection Agency's Guidelines for Carcinogen Risk Assessment (1986). In response to evolving scientific knowledge, EPA proposed revisions to the guidelines in 1996. Under the proposed revisions, dose-response relationships would not be estimated for carcinogens thought to exhibit nonlinear modes of action. Such a change in cancer-risk assessment methods and outputs will likely have serious consequences for how benefit-cost analyses of policies aimed at reducing cancer risks are conducted. Any tendency for reduced quantification of effects in environmental risk assessments, such as those contemplated in the revisions to EPA's cancer-risk assessment guidelines, impedes the ability of economic analysts to respond to increasing calls for benefit-cost analysis. This article examines the implications for benefit-cost analysis of carcinogenic exposures of the proposed changes to the 1986 Guidelines and proposes an approach for bounding dose-response relationships when no biologically based models are available. In spite of the more limited quantitative information provided in a carcinogen risk assessment under the proposed revisions to the guidelines, we argue that reasonable bounds on dose-response relationships can be estimated for low-level exposures to nonlinear carcinogens. This approach yields estimates of reduced illness for use in a benefit-cost analysis while incorporating evidence of nonlinearities in the dose-response relationship. As an illustration, the bounding approach is applied to the case of chloroform exposure.

7.
Modeling for Risk Assessment of Neurotoxic Effects
The regulation of noncancer toxicants, including neurotoxicants, has usually been based upon a reference dose (allowable daily intake). A reference dose is obtained by dividing a no-observed-effect level by uncertainty (safety) factors to account for intraspecies and interspecies sensitivities to a chemical. It is assumed that the risk at the reference dose is negligible, but generally no attempt is made to estimate the risk at the reference dose. A procedure is outlined that provides estimates of risk as a function of dose. The first step is to establish a mathematical relationship between a biological effect and the dose of a chemical. Knowledge of biological mechanisms and/or pharmacokinetics can assist in the choice of plausible mathematical models. The mathematical model provides estimates of average responses as a function of dose. Second, estimates of risk require selection of a distribution of individual responses about the average response given by the mathematical model. In the case of a normal or lognormal distribution, only an estimate of the standard deviation is needed. The third step is to define an adverse level for a response so that the probability (risk) of exceeding that level can be estimated as a function of dose. Because a firm response level at which adverse biological effects occur often cannot be established, it may be necessary to at least establish an abnormal response level that only a small proportion of individuals in an unexposed group would exceed. That is, if a normal range of responses can be established, then the probability (risk) of abnormal responses can be estimated. To illustrate this process, measures of the neurotransmitter serotonin and its metabolite 5-hydroxyindoleacetic acid in specific areas of the brain of rats and monkeys are analyzed after exposure to the neurotoxicant methylenedioxymethamphetamine. These risk estimates are compared with risk estimates from the quantal approach, in which animals are classified as either abnormal or not depending upon abnormal serotonin levels.
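The three steps described here (a dose–response model for the mean, a distribution of individual responses, and an abnormal cutoff) can be sketched numerically. A minimal illustration assuming a linear decline in the mean response with dose and normally distributed individual responses; all numbers are invented:

```python
import math

def normal_cdf(x: float, mu: float, sigma: float) -> float:
    """Normal CDF via the error function (standard library only)."""
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

def risk_of_abnormal(dose, control_mean, sd, slope, cutoff):
    """Probability that an individual's response falls below an 'abnormal'
    cutoff, given a linear mean decline with dose (e.g., serotonin depletion)
    and normal variation of individuals about the mean."""
    mean_at_dose = control_mean - slope * dose
    return normal_cdf(cutoff, mean_at_dose, sd)

# Hypothetical values: control mean 100, SD 10; cutoff chosen so that
# roughly 5% of unexposed individuals fall in the abnormal range.
cutoff = 100.0 - 1.645 * 10.0
background = risk_of_abnormal(0.0, 100.0, 10.0, 2.0, cutoff)   # ~0.05
exposed = risk_of_abnormal(10.0, 100.0, 10.0, 2.0, cutoff)     # higher
```

Defining "abnormal" relative to the unexposed group's normal range, as the abstract suggests, fixes the background risk (here about 5%) and lets the excess risk be read off as a function of dose.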

8.
The excess cancer risk that might result from exposure to a mixture of chemical carcinogens usually must be estimated using data from experiments conducted with individual chemicals. In estimating such risk, it is commonly assumed that the total risk due to the mixture is the sum of the risks of the individual components, provided that the risks associated with individual chemicals at levels present in the mixture are low. This assumption, while itself not necessarily conservative, has led to the conservative practice of summing individual upper-bound risk estimates in order to obtain an upper bound on the total excess cancer risk for a mixture. Less conservative procedures are described here and are illustrated for the case of a mixture of four carcinogens.

9.
Estimates were made of the numbers of liver carcinogens in 390 long-term bioassays conducted by the National Toxicology Program (NTP). These estimates were obtained from examination of the global pattern of p-values obtained from statistical tests applied to individual bioassays. Representative estimates of the number of liver carcinogens (90% confidence interval in parentheses) obtained in our analysis compared to NTP's determination are as follows: female rats, 49 (23, 76), NTP = 30; male rats, 88 (59, 116), NTP = 35; female mice, 131 (105, 157), NTP = 81; male mice, 100 (73, 126), NTP = 61; overall, 166 (135, 197), NTP = 108. The estimator from which these estimates were obtained is biased low by an unknown amount. Consequently, this study provides persuasive evidence of the existence of more rodent liver carcinogens than were identified by the NTP.

10.
Experimental Design of Bioassays for Screening and Low Dose Extrapolation
Relatively high doses of chemicals generally are employed in animal bioassays to detect potential carcinogens with relatively small numbers of animals. The problem investigated here is the development of experimental designs which are effective for high to low dose extrapolation for tumor incidence as well as for screening (detecting) carcinogens. Several experimental designs are compared over a wide range of different dose response curves. Linear extrapolation is used below the experimental data range to establish an upper bound on carcinogenic risk at low doses. The goal is to find experimental designs which minimize the upper bound on low dose risk estimates (i.e., maximize the allowable dose for a given level of risk). The maximum tolerated dose (MTD) is employed for screening purposes. Among the designs investigated, experiments with doses at the MTD, 1/2 MTD, 1/4 MTD, and controls generally provide relatively good data for low dose extrapolation with relatively good power for detecting carcinogens. For this design, equal numbers of animals per dose level perform as well as unequal allocations.

11.
Current methods for cancer risk assessment result in single values, without any quantitative information on the uncertainties in these values. Therefore, single risk values could easily be overinterpreted. In this study, we discuss a full probabilistic cancer risk assessment approach in which all the generally recognized uncertainties in both exposure and hazard assessment are quantitatively characterized and probabilistically evaluated, resulting in a confidence interval for the final risk estimate. The methodology is applied to three example chemicals (aflatoxin, N-nitrosodimethylamine, and methyleugenol). These examples illustrate that the uncertainty in a cancer risk estimate may be huge, making single value estimates of cancer risk meaningless. Further, a risk based on linear extrapolation tends to be lower than the upper 95% confidence limit of a probabilistic risk estimate, and in that sense it is not conservative. Our conceptual analysis showed that there are two possible basic approaches for cancer risk assessment, depending on the interpretation of the dose-incidence data measured in animals. However, it remains unclear which of the two interpretations is the more adequate one, adding an additional uncertainty to the already huge confidence intervals for cancer risk estimates.

12.
Risk Analysis, 2018, 38(1):163-176
The U.S. Environmental Protection Agency (EPA) uses health risk assessment to help inform its decisions in setting national ambient air quality standards (NAAQS). EPA's standard approach is to make epidemiologically-based risk estimates based on a single statistical model selected from the scientific literature, called the "core" model. The uncertainty presented for "core" risk estimates reflects only the statistical uncertainty associated with that one model's concentration-response function parameter estimate(s). However, epidemiologically-based risk estimates are also subject to "model uncertainty," which is a lack of knowledge about which of many plausible model specifications and data sets best reflects the true relationship between health and ambient pollutant concentrations. In 2002, a National Academies of Sciences (NAS) committee recommended that model uncertainty be integrated into EPA's standard risk analysis approach. This article discusses how model uncertainty can be taken into account with an integrated uncertainty analysis (IUA) of health risk estimates. It provides an illustrative numerical example based on risk of premature death from respiratory mortality due to long-term exposures to ambient ozone, which is a health risk considered in the 2015 ozone NAAQS decision. This example demonstrates that use of IUA to quantitatively incorporate key model uncertainties into risk estimates produces a substantially altered understanding of the potential public health gain of a NAAQS policy decision, and that IUA can also produce more helpful insights to guide that decision, such as evidence of decreasing incremental health gains from progressive tightening of a NAAQS.
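An integrated uncertainty analysis of this kind can be sketched as a two-stage Monte Carlo: first draw which concentration-response model applies (model uncertainty), then draw that model's slope from its sampling distribution (statistical uncertainty). All models, weights, and numbers below are invented for illustration:

```python
import random
import statistics

random.seed(1)

# Three hypothetical concentration-response models (excess deaths per
# baseline death per ppb), each with a plausibility weight and a standard
# error on its slope. None of these values come from the source article.
models = [
    {"weight": 0.5, "beta": 0.004, "se": 0.0010},
    {"weight": 0.3, "beta": 0.002, "se": 0.0008},
    {"weight": 0.2, "beta": 0.006, "se": 0.0015},
]

def sample_excess_deaths(delta_conc: float, baseline_deaths: float) -> float:
    """One draw integrating model choice with the chosen model's
    parameter uncertainty."""
    m = random.choices(models, weights=[mm["weight"] for mm in models])[0]
    beta = random.gauss(m["beta"], m["se"])
    return baseline_deaths * beta * delta_conc

draws = sorted(sample_excess_deaths(5.0, 1000.0) for _ in range(20000))
central = statistics.median(draws)
lo, hi = draws[int(0.025 * len(draws))], draws[int(0.975 * len(draws))]
```

The resulting interval (lo, hi) is wider than any single model's confidence interval, which is the point of the IUA: the "core"-model interval understates how uncertain the health gain of a NAAQS decision really is.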

13.
Compliance Versus Risk in Assessing Occupational Exposures
Assessments of occupational exposures to chemicals are generally based upon the practice of compliance testing in which the probability of compliance is related to the exceedance [γ, the likelihood that any measurement would exceed an occupational exposure limit (OEL)] and the number of measurements obtained. On the other hand, workers' chronic health risks generally depend upon cumulative lifetime exposures which are not directly related to the probability of compliance. In this paper we define the probability of "overexposure" (θ) as the likelihood that individual risk (a function of cumulative exposure) exceeds the risk inherent in the OEL (a function of the OEL and duration of exposure). We regard θ as a relevant measure of individual risk for chemicals, such as carcinogens, which produce chronic effects after long-term exposures but not necessarily for acutely-toxic substances which can produce effects relatively quickly. We apply a random-effects model to data from 179 groups of workers, exposed to a variety of chemical agents, and obtain parameter estimates for the group mean exposure and the within- and between-worker components of variance. These estimates are then combined with OELs to generate estimates of γ and θ. We show that compliance testing can significantly underestimate the health risk when sample sizes are small. That is, there can be large probabilities of compliance with typical sample sizes, despite the fact that large proportions of the working population have individual risks greater than the risk inherent in the OEL. We demonstrate further that, because the relationship between θ and γ depends upon the within- and between-worker components of variance, it cannot be assumed a priori that exceedance is a conservative surrogate for overexposure. Thus, we conclude that assessment practices which focus upon either compliance or exceedance are problematic and recommend that employers evaluate exposures relative to the probabilities of overexposure.
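The distinction between γ (exceedance) and θ (overexposure) can be made concrete under a lognormal exposure model: γ uses the total variance (within- plus between-worker), while θ uses only the between-worker variance, since a worker's long-run mean averages out day-to-day variation. A sketch with hypothetical parameters:

```python
import math

def lognormal_exceed(ln_mean: float, ln_sd: float, oel: float) -> float:
    """P(lognormal exposure > OEL)."""
    z = (math.log(oel) - ln_mean) / ln_sd
    return 0.5 * (1.0 - math.erf(z / math.sqrt(2.0)))

def exceedance(mu, sd_between, sd_within, oel):
    """gamma: chance that any single measurement exceeds the OEL;
    total variability combines between- and within-worker components."""
    total_sd = math.sqrt(sd_between ** 2 + sd_within ** 2)
    return lognormal_exceed(mu, total_sd, oel)

def overexposure(mu, sd_between, oel):
    """theta: chance that a worker's long-run mean exposure exceeds the
    OEL; only between-worker variability matters for cumulative exposure."""
    return lognormal_exceed(mu, sd_between, oel)
```

With a group geometric mean below the OEL, γ exceeds θ; with the mean above the OEL, the ordering reverses. That reversal is why, as the abstract concludes, exceedance cannot be assumed a priori to be a conservative surrogate for overexposure.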

14.
15.
Kenneth T. Bogen. Risk Analysis, 2014, 34(10):1780-1784
A 2009 report of the National Research Council (NRC) recommended that the U.S. Environmental Protection Agency (EPA) increase its estimates of increased cancer risk from exposure to environmental agents by roughly sevenfold, due to an approximately 25-fold typical ratio between the median and upper-95th-percentile persons' cancer sensitivity, assuming approximately lognormally distributed sensitivities. EPA inaction on this issue has raised concerns that cancer risks to environmentally exposed populations remain systematically underestimated. This concern is unwarranted, however, because EPA point estimates of cancer risk have always pertained to the average, not the median, person in each modeled exposure group. Nevertheless, EPA has yet to explain clearly how its risk characterization and risk management policies concerning individual risks from environmental chemical carcinogens do appropriately address the broad variability in human cancer susceptibility that has been a focus of two major NRC reports to EPA concerning its risk assessment methods.

16.
For the vast majority of chemicals that have cancer potency estimates on IRIS, the underlying database is deficient with respect to early-life exposures. This data gap has prevented derivation of cancer potency factors that are relevant to this time period, and so assessments may not fully address children's risks. This article provides a review of juvenile animal bioassay data in comparison to adult animal data for a broad array of carcinogens. This comparison indicates that short-term exposures in early life are likely to yield a greater tumor response than short-term exposures in adults, but similar tumor response when compared to long-term exposures in adults. This evidence is brought into a risk assessment context by proposing an approach that: (1) does not prorate children's exposures over the entire life span or mix them with exposures that occur at other ages; (2) applies the cancer slope factor from adult animal or human epidemiology studies to the children's exposure dose to calculate the cancer risk associated with the early-life period; and (3) adds the cancer risk for young children to that for older children/adults to yield a total lifetime cancer risk. The proposed approach allows for the unique exposure and pharmacokinetic factors associated with young children to be fully weighted in the cancer risk assessment. It is very similar to the approach currently used by U.S. EPA for vinyl chloride. The current analysis finds that the database of early life and adult cancer bioassays supports extension of this approach from vinyl chloride to other carcinogens of diverse mode of action. This approach should be enhanced by early-life data specific to the particular carcinogen under analysis whenever possible.
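Steps (1)-(3) of the proposed approach amount to simple arithmetic: compute each life stage's risk from its own average daily dose and duration, then sum the stage risks rather than prorating the child's dose over the whole lifespan. A minimal sketch; the slope factor, doses, and durations are all hypothetical:

```python
def lifetime_cancer_risk(slope_factor, child_dose, child_years,
                         adult_dose, adult_years, lifetime_years=70.0):
    """Sum of life-stage risks: the adult slope factor is applied to each
    stage's dose, weighted by the fraction of lifetime in that stage.
    Keeping the stages separate lets child-specific exposure (higher
    intake per kg body weight) be fully weighted rather than averaged away."""
    child_risk = slope_factor * child_dose * (child_years / lifetime_years)
    adult_risk = slope_factor * adult_dose * (adult_years / lifetime_years)
    return child_risk + adult_risk

# Hypothetical: slope 0.1 per mg/kg-day; a child dose of 0.002 mg/kg-day
# for 6 years (higher intake per kg) and 0.001 mg/kg-day for 64 adult years.
total = lifetime_cancer_risk(0.1, 0.002, 6.0, 0.001, 64.0)
```

In a fuller treatment, a stage-specific potency adjustment could multiply the child term; the abstract's point is that the child period is accounted for explicitly rather than diluted into a lifetime average.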

17.
Automobile accident risks vary significantly across populations, places, and times. This study describes the time-varying pattern of societal risk. The relative risks of occupant fatality per person-mile of travel are estimated here for each hour of the week, using 1983 data. The results exhibit a strong time-of-day effect and have a highly skewed frequency distribution, implying wide variations in risk-taking behavior. Indeed, the 168 hourly estimates ranged from a low of 0.32 times the average around Sunday noon to a high of 43 times the average at 3:00 a.m. on Sunday, i.e., by a factor of 134 from bottom to top. Quantile-quantile plots or "Lorenz curves," introduced to display the unequal distribution of risks, show that approximately 34% of the vehicle occupant fatalities occur in hours representing only 5% of the travel. These findings have serious implications for risk analysis. First, when attempting to reconcile objective and subjective risk estimates, risk communicators should carefully control for when and to whom the risk in question is applicable. Second, comparisons of hazards on the basis of average risk are necessarily misleading for risks distributed so unevenly. Third, resource allocation decisions can benefit by knowing how incidence, exposure, and risk vary across time, place, and other relevant variables. Finally, certain cost-benefit analyses that use average values to estimate risk exposure can be misleading.

18.
Probabilistic risk assessments are enjoying increasing popularity as a tool to characterize the health hazards associated with exposure to chemicals in the environment. Because probabilistic analyses provide much more information to the risk manager than standard "point" risk estimates, this approach has generally been heralded as one which could significantly improve the conduct of health risk assessments. The primary obstacles to replacing point estimates with probabilistic techniques include a general lack of familiarity with the approach and a lack of regulatory policy and guidance. This paper discusses some of the advantages and disadvantages of the point estimate vs. probabilistic approach. Three case studies are presented which contrast and compare the results of each. The first addresses the risks associated with household exposure to volatile chemicals in tapwater. The second evaluates airborne dioxin emissions which can enter the food-chain. The third illustrates how to derive health-based cleanup levels for dioxin in soil. It is shown that, based on the results of Monte Carlo analyses of probability density functions (PDFs), the point estimate approach required by most regulatory agencies will nearly always overpredict the risk for the 95th percentile person by a factor of up to 5. When the assessment requires consideration of 10 or more exposure variables, the point estimate approach will often predict risks representative of the 99.9th percentile person rather than the 50th or 95th percentile person. This paper recommends a number of data distributions for various exposure variables that we believe are now sufficiently well understood to be used with confidence in most exposure assessments. A list of exposure variables that may require additional research before adequate data distributions can be developed are also discussed.
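The effect described here, that multiplying many upper-percentile point estimates lands far out in the tail of the true risk distribution, is easy to reproduce. A sketch with five hypothetical lognormal exposure variables, each entered at its 95th percentile in the point-estimate approach:

```python
import math
import random

random.seed(0)

def sample_lognormal(median: float, p95_ratio: float) -> float:
    """Draw from a lognormal with the given median and 95th/50th
    percentile ratio (1.645 is the z-score of the 95th percentile)."""
    sigma = math.log(p95_ratio) / 1.645
    return math.exp(math.log(median) + random.gauss(0.0, sigma))

medians = [1.0] * 5   # five hypothetical exposure variables
p95_ratio = 2.0       # each variable's 95th percentile is 2x its median

# Point-estimate approach: multiply each variable's 95th percentile.
point_estimate = 1.0
for m in medians:
    point_estimate *= m * p95_ratio  # 2**5 = 32

# Monte Carlo approach: sample the full distributions and take the product.
n = 20000
products = []
for _ in range(n):
    p = 1.0
    for m in medians:
        p *= sample_lognormal(m, p95_ratio)
    products.append(p)

# Where does the point estimate fall in the simulated population?
frac_below = sum(p < point_estimate for p in products) / n
```

Even with only five variables, the product of 95th percentiles sits near the 99.99th percentile of the simulated product, consistent with the abstract's observation that ten or more variables push the point estimate toward the 99.9th percentile person.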

19.
Andrea Herrmann. Risk Analysis, 2013, 33(8):1510-1531
How well can people estimate IT-related risk? Although estimating risk is a fundamental activity in software management and risk is the basis for many decisions, little is known about how well IT-related risk can be estimated at all. Therefore, we executed a risk estimation experiment with 36 participants. They estimated the probabilities of IT-related risks, and we investigated the effect of the following factors on the quality of the risk estimation: the estimator's age, work experience in computing, (self-reported) safety awareness and previous experience with this risk, the absolute value of the risk's probability, and the effect of knowing the estimates of the other participants (as in the Delphi method). Our main findings are as follows: risk probabilities are difficult to estimate. Younger and inexperienced estimators were not significantly worse than older and more experienced estimators, but the older and more experienced subjects made better use of the knowledge gained by knowing the other estimators' results. Persons with higher safety awareness tend to overestimate risk probabilities, but can better estimate ordinal ranks of risk probabilities. Previous personal experience with a risk leads to an overestimation of its probability (unlike in other fields such as medicine or disasters, where experience with a disease leads to more realistic probability estimates and nonexperience to an underestimation).

20.
This article presents a framework for using probabilistic terrorism risk modeling in regulatory analysis. We demonstrate the framework with an example application involving a regulation under consideration, the Western Hemisphere Travel Initiative for the Land Environment (WHTI-L). First, we estimate annualized loss from terrorist attacks with the Risk Management Solutions (RMS) Probabilistic Terrorism Model. We then estimate the critical risk reduction, which is the risk-reducing effectiveness of WHTI-L needed for its benefit, in terms of reduced terrorism loss in the United States, to exceed its cost. Our analysis indicates that the critical risk reduction depends strongly not only on uncertainties in the terrorism risk level, but also on uncertainty in the cost of regulation and how casualties are monetized. For a terrorism risk level based on the RMS standard risk estimate, the baseline regulatory cost estimate for WHTI-L, and a range of casualty cost estimates based on the willingness-to-pay approach, our estimate for the expected annualized loss from terrorism ranges from $2.7 billion to $5.2 billion. For this range in annualized loss, the critical risk reduction for WHTI-L ranges from 7% to 13%. Basing results on a lower risk level that results in halving the annualized terrorism loss would double the critical risk reduction (14-26%), and basing the results on a higher risk level that results in a doubling of the annualized terrorism loss would cut the critical risk reduction in half (3.5-6.6%). Ideally, decisions about terrorism security regulations and policies would be informed by true benefit-cost analyses in which the estimated benefits are compared to costs. Such analyses for terrorism security efforts face substantial impediments stemming from the great uncertainty in the terrorist threat and the very low recurrence interval for large attacks. Several approaches can be used to estimate how a terrorism security program or regulation reduces the distribution of risks it is intended to manage. But continued research to develop additional tools and data is necessary to support application of these approaches. These include refinement of models and simulations, engagement of subject matter experts, implementation of program evaluation, and estimating the costs of casualties from terrorism events.
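The critical risk reduction is simply the ratio of annualized regulatory cost to annualized terrorism loss, which is why halving the loss doubles it. A sketch; the cost figure below is an assumption chosen only so the outputs line up with the 7-13% range quoted in the abstract, not a value from the source:

```python
def critical_risk_reduction(annual_cost: float, annual_loss: float) -> float:
    """Fraction of annualized terrorism loss that must be averted for the
    regulation's benefit (avoided loss) to equal its annualized cost."""
    return annual_cost / annual_loss

cost = 0.35e9  # hypothetical annualized cost of WHTI-L, $/year (assumed)
low = critical_risk_reduction(cost, 5.2e9)   # high-loss end: ~6.7%
high = critical_risk_reduction(cost, 2.7e9)  # low-loss end: ~13%
```

Because the ratio is linear in the loss, halving the annualized loss exactly doubles the critical risk reduction and doubling it exactly halves it, matching the 14-26% and 3.5-6.6% sensitivity ranges in the abstract.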
