Similar Articles (20 results)
1.
There has been considerable discussion regarding the conservativeness of low-dose cancer risk estimates based upon linear extrapolation from upper confidence limits. Various groups have expressed a need for best (point) estimates of cancer risk in order to improve risk/benefit decisions. Point estimates of carcinogenic potency obtained from maximum likelihood estimates of the low-dose slope may be highly unstable, being sensitive both to the choice of the dose-response model and possibly to minimal perturbations of the data. For carcinogens that augment background carcinogenic processes and/or for mutagenic carcinogens, tumor incidence versus target tissue dose is expected to be linear at low doses. Pharmacokinetic data may be needed to identify and adjust for exposure-dose nonlinearities. Based on the assumption that the dose response is linear over low doses, a stable point estimate for low-dose cancer risk is proposed. Since various models give similar estimates of risk down to levels of 1%, a stable estimate of the low-dose cancer slope is provided by ŝ = 0.01/ED01, where ED01 is the dose corresponding to an excess cancer risk of 1%. Thus, low-dose estimates of cancer risk are obtained by risk = ŝ × dose. The proposed procedure is similar to one used in the past by the Center for Food Safety and Applied Nutrition, Food and Drug Administration. The upper confidence limit, s*, corresponding to this point estimate of low-dose slope is similar to the upper limit, q1*, obtained from the generalized multistage model. The advantage of the proposed procedure is that ŝ provides stable estimates of low-dose carcinogenic potency that are not unduly influenced by small perturbations of the tumor incidence rates, unlike q1*.
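The proposed point estimate reduces to two one-line formulas. A minimal sketch, with a hypothetical ED01 value for illustration:

```python
def low_dose_slope(ed01: float) -> float:
    """Point estimate of low-dose carcinogenic potency: s_hat = 0.01 / ED01."""
    return 0.01 / ed01


def low_dose_risk(s_hat: float, dose: float) -> float:
    """Excess cancer risk under the linear low-dose assumption: risk = s_hat * dose."""
    return s_hat * dose


s_hat = low_dose_slope(ed01=2.5)        # hypothetical ED01 of 2.5 mg/kg/day
risk = low_dose_risk(s_hat, dose=0.01)  # risk at an exposure of 0.01 mg/kg/day
print(s_hat, risk)                      # 0.004 4e-05
```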

2.
Natural or manufactured products may contain mixtures of carcinogens, and the human environment certainly contains mixtures of carcinogens. Various authors have shown that, under a variety of conditions, the total risk of a mixture at low doses can be approximated by the sum of the risks of its individual components. Under these conditions, summing the individual estimated upper-bound risks, as is currently often done, is too conservative because it is unlikely that all component risks are at their maximum levels simultaneously. In the absence of synergism, a simple procedure is proposed for estimating a more appropriate upper bound on the additive risks for a mixture of carcinogens. These simple limits also apply to noncancer endpoints when the risks of the components are approximately additive.

3.
Hwang, Jing-Shiang; Chen, James J. Risk Analysis 1999, 19(6):1071-1076
The estimation of health risks from exposure to a mixture of chemical carcinogens is generally based on the combination of information from several available single-compound studies. The current practice of directly summing the upper-bound risk estimates of individual carcinogenic components as an upper bound on the total risk of a mixture is known to be generally too conservative. Gaylor and Chen (1996, Risk Analysis) proposed a simple procedure to compute an upper bound on the total risk using only the upper confidence limits and central risk estimates of individual carcinogens. The Gaylor-Chen procedure was derived under an assumption of normality for the distributions of the individual risk estimates. In this paper we evaluate the Gaylor-Chen approach in terms of coverage probability: its performance depends on the coverages of the upper confidence limits on the true risks of the individual carcinogens. In general, if the coverage probabilities for the individual carcinogens are all approximately equal to the nominal level, the Gaylor-Chen approach should perform well. However, it can be conservative or anti-conservative if some or all of the individual upper-confidence-limit estimates are conservative or anti-conservative.

4.
This paper reexamines the scaling approaches used in cancer risk assessment and proposes a more precise body weight scaling factor. Two approaches are conventionally used in scaling exposure and dose from experimental animals to man: body weight scaling (used by FDA) and surface area scaling (BW^0.67, used by EPA). This paper reanalyzes the Freireich et al. (1966) study of the maximum tolerated dose (MTD) of 14 anticancer agents in mice, rats, dogs, monkeys, and humans, the dataset most commonly cited as justification for surface area extrapolation. This examination was augmented with an analysis of a similar dataset by Schein et al. (1970) of the MTD of 13 additional chemotherapy agents. The reanalysis shows that BW^0.75 is a more appropriate scaling factor for the 27 direct-acting compounds in this dataset.
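The effect of the choice of scaling power is easy to compute. A sketch assuming a 70-kg human and a hypothetical mouse dose (values illustrative, not from the paper):

```python
def human_equivalent_dose(animal_dose, bw_animal, bw_human=70.0, power=0.75):
    """Convert an animal dose (mg/kg/day) to a human-equivalent dose,
    assuming total toxic dose scales as BW**power: the per-kg dose then
    scales as (bw_animal / bw_human)**(1 - power)."""
    return animal_dose * (bw_animal / bw_human) ** (1.0 - power)


mouse_dose, mouse_bw = 10.0, 0.03  # hypothetical mouse MTD and body weight (kg)
print(human_equivalent_dose(mouse_dose, mouse_bw, power=0.75))  # BW^0.75 scaling
print(human_equivalent_dose(mouse_dose, mouse_bw, power=0.67))  # surface-area scaling
```

Surface-area scaling (power 0.67) yields a lower, i.e. more conservative, human-equivalent dose than BW^0.75.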

5.
Uncertainty in Cancer Risk Estimates
Several existing databases compiled by Gold et al.(1–3) for carcinogenesis bioassays are examined to obtain estimates of the reproducibility of cancer rates across experiments, strains, and rodent species. A measure of carcinogenic potency is given by the TD50 (daily dose that causes a tumor type in 50% of the exposed animals that otherwise would not develop the tumor in a standard lifetime). The lognormal distribution can be used to model the uncertainty of the estimates of potency (TD50) and the ratio of TD50's between two species. For near-replicate bioassays, approximately 95% of the TD50's are estimated to be within a factor of 4 of the mean. Between strains, about 95% of the TD50's are estimated to be within a factor of 11 of their mean, and the pure genetic component of variability is accounted for by a factor of 6.8. Between rats and mice, about 95% of the TD50's are estimated to be within a factor of 32 of the mean, while between humans and experimental animals the factor is 110 for 20 chemicals reported by Allen et al.(4) The common practice of basing cancer risk estimates on the most sensitive rodent species-strain-sex and using interspecies dose scaling based on body surface area appears to overestimate cancer rates for these 20 human carcinogens by about one order of magnitude on the average. Hence, for chemicals where the dose-response is nearly linear below experimental doses, cancer risk estimates based on animal data are not necessarily conservative and may range from a factor of 10 too low for human carcinogens up to a factor of 1000 too high for approximately 95% of the chemicals tested to date. These limits may need to be modified for specific chemicals where additional mechanistic or pharmacokinetic information may suggest alterations or where particularly sensitive subpopulations may be exposed. Supralinearity could lead to anticonservative estimates of cancer risk. Underestimating cancer risk by a specific factor has a much larger impact on the actual number of cancer cases than overestimates of smaller risks by the same factor. This paper does not address the uncertainties in high to low dose extrapolation. If the dose-response is sufficiently nonlinear at low doses to produce cancer risks near zero, then low-dose risk estimates based on linear extrapolation are likely to overestimate risk and the limits of uncertainty cannot be established.
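Under the lognormal model, a statement of the form "95% of TD50's fall within a factor k of the mean" pins down the log-scale spread via 1.96·σ = ln k, so the implied geometric standard deviation is exp(ln k / 1.96). A small sketch applying this to the factors quoted above:

```python
import math


def gsd_from_factor(k: float) -> float:
    """Geometric SD implied by '95% of TD50's lie within a factor k of the
    mean' under a lognormal model: 1.96 * sigma_log = ln(k)."""
    return math.exp(math.log(k) / 1.96)


# Factors reported in the abstract: near-replicate (4), between strains (11),
# rats vs. mice (32).
for k in (4, 11, 32):
    print(k, round(gsd_from_factor(k), 2))
```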

6.
For continuous data, risk is defined here as the proportion of animals with values above a large percentile, e.g., the 99th percentile or below the 1st percentile, for the distribution of values among control animals. It is known that reducing the standard deviation of measurements through improved experimental techniques will result in less stringent (higher) doses for the lower confidence limit on the benchmark dose that is estimated to produce a specified risk of animals with abnormal levels for a biological effect. Thus, a somewhat larger (less stringent) lower confidence limit is obtained that may be used as a point of departure for low-dose risk assessment. It is shown in this article that it is important for the benchmark dose to be based primarily on the standard deviation among animals, s(a), apart from the standard deviation of measurement errors, s(m), within animals. If the benchmark dose is incorrectly based on the overall standard deviation among average values for animals, which includes measurement error variation, the benchmark dose will be overestimated and the risk will be underestimated. The bias increases as s(m) increases relative to s(a). The bias is relatively small if s(m) is less than one-third of s(a), a condition achieved in most experimental designs.
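The size of the bias is driven by how much measurement error inflates the overall standard deviation relative to the between-animal component. A one-line sketch of that inflation ratio, which shows why s(m) below s(a)/3 is a reasonable cutoff:

```python
import math


def sd_inflation(s_a: float, s_m: float) -> float:
    """Ratio of the overall SD, sqrt(s_a^2 + s_m^2), to the between-animal
    SD s_a; values above 1 mean the benchmark dose is overestimated."""
    return math.sqrt(s_a**2 + s_m**2) / s_a


print(sd_inflation(1.0, 1.0 / 3.0))  # ~1.054: under 6% inflation when s_m <= s_a/3
print(sd_inflation(1.0, 1.0))        # ~1.414: substantial inflation when s_m = s_a
```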

7.
Current practice in carcinogen bioassay calls for exposure of experimental animals at doses up to and including the maximum tolerated dose (MTD). Such studies have been used to compute measures of carcinogenic potency such as the TD50 as well as unit risk factors such as q1* for predicting low-dose risks. Recent studies have indicated that these measures of carcinogenic potency are highly correlated with the MTD. Carcinogenic potency has also been shown to be correlated with indicators of mutagenicity and toxicity. Correlation of the MTDs for rats and mice implies a corresponding correlation in TD50 values for these two species. The implications of these results for cancer risk assessment are examined in light of the large variation in potency among chemicals known to induce tumors in rodents.

8.
Hormetic effects have been observed at low exposure levels based on the dose-response pattern of data from developmental toxicity studies. This indicates that there might actually be a reduced risk of exhibiting toxic effects at low exposure levels. Hormesis implies the existence of a threshold dose level, and there are dose-response models that include parameters that account for the threshold. We propose a function that introduces a parameter to account for hormesis. This function is a subset of the set of all functions that could represent a hormetic dose-response relationship at low exposure levels to toxic agents. We characterize the overall dose-response relationship with a piecewise function that consists of a hormetic U-shaped curve at low dose levels and a logistic curve at high dose levels. We apply our model to a data set from an experiment conducted at the National Toxicology Program (NTP). We also use the beta-binomial distribution to model the litter response data. It can be seen by observing the structure of these data that current experimental designs for developmental studies employ a limited number of dose groups. These designs may not be satisfactory when the goal is to illustrate the existence of hormesis. In particular, increasing the number of low-level doses improves the power for detecting hormetic effects. Therefore, we also provide the results of simulations that were done to characterize the power of current designs in detecting hormesis and to demonstrate how this power can be improved upon by altering these designs with the addition of only a few low exposure levels.
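One way to picture such a piecewise dose-response is a U-shaped dip below a threshold joined continuously to a logistic-type rise above it. The functional form and all parameter values below are illustrative assumptions, not the authors' model:

```python
import math


def hormetic_response(dose, p0=0.1, h=0.05, tau=1.0, beta=2.0):
    """Illustrative piecewise dose-response (all parameters hypothetical):
    a U-shaped dip of depth h below a threshold tau, then a saturating
    rise above tau; the two pieces agree (value p0) at dose = tau."""
    if dose < tau:
        # U-shape: background p0 reduced by up to h, with minimum at tau/2
        return p0 - h * (1.0 - (2.0 * dose / tau - 1.0) ** 2)
    # saturating segment anchored so the curve is continuous at tau
    return p0 + (1.0 - p0) * (1.0 - math.exp(-beta * (dose - tau)))


print(hormetic_response(0.0))  # background response p0
print(hormetic_response(0.5))  # hormetic minimum, p0 - h
print(hormetic_response(2.0))  # elevated response at high dose
```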

9.
Microbial food safety risk assessment models can often be simplified by eliminating the need to integrate a complex dose-response relationship across a distribution of exposure doses. This is possible if exposure pathways lead to doses that consistently have a small probability of causing illness. In this situation, the probability of illness is an approximately linear function of dose. Consequently, the predicted probability of illness per serving across all exposures is linear with respect to the expected value of dose. The majority of dose-response functions are approximately linear when the dose is low. Nevertheless, what constitutes "low" depends on the parameters of the dose-response function for a particular pathogen. In this study, a method is proposed to determine an upper bound of the exposure distribution for which the use of a linear dose-response function is acceptable. If this upper bound is substantially larger than the expected value of the exposure doses, then a linear approximation for the probability of illness is reasonable. If conditions are appropriate for using the linear dose-response approximation (for example, the expected value of the exposure doses is two to three log10 smaller than the upper bound of the linear portion of the dose-response function), then predicting the risk-reducing effectiveness of a proposed policy is trivial. Simple examples illustrate how this approximation can be used to inform policy decisions and improve an analyst's understanding of risk.
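One way to operationalize "how high can the dose be before the linear approximation breaks down" is to search for the largest dose at which the linear risk r·d stays within a tolerance of the exact model. The exponential single-hit model and the parameter value below are illustrative assumptions:

```python
import math


def linear_upper_bound(r: float, rel_tol: float = 0.05) -> float:
    """Largest dose (found by a simple downward search) at which the linear
    approximation r*d stays within rel_tol of the exponential single-hit
    risk 1 - exp(-r*d). The exponential model is one common choice."""
    d = 1.0 / r  # start well above the linear range and shrink
    while True:
        exact = 1.0 - math.exp(-r * d)
        if (r * d - exact) / exact <= rel_tol:
            return d
        d *= 0.9


r = 1e-3  # hypothetical dose-response parameter
print(linear_upper_bound(r))  # roughly 0.1/r for a 5% relative tolerance
```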

10.
Interspecies scaling factors (ISFs) are numbers used to adjust the potency factor (for example, the q1* for carcinogens or reference doses for compounds eliciting other toxic endpoints) determined in experimental animals to account for expected differences in potency between test animals and people. ISFs have been developed for both cancer and non-cancer risk assessments in response to a common issue: toxicologists often determine adverse effects of chemicals in test animals and then they, or more commonly risk assessors and risk managers, have to draw inferences about what these observations mean for the human population. This perspective briefly reviews the development of ISFs and their applications in health risk assessments over the past 20 years, examining the impact of pharmacokinetic principles in altering current perceptions of the ISFs applied in these health risk assessments, and assessing future directions in applying both pharmacokinetic and pharmacodynamic principles for developing ISFs.

11.
The BMD (benchmark dose) method that is used in risk assessment of chemical compounds was introduced by Crump (1984) and is based on dose-response modeling. To take uncertainty in the data and model fitting into account, the lower confidence bound of the BMD estimate (BMDL) is suggested to be used as a point of departure in health risk assessments. In this article, we study how to design optimum experiments for applying the BMD method for continuous data. We exemplify our approach by considering the class of Hill models. The main aim is to study whether an increased number of dose groups and at the same time a decreased number of animals in each dose group improves conditions for estimating the benchmark dose. Since Hill models are nonlinear, the optimum design depends on the values of the unknown parameters. That is why we consider Bayesian designs and assume that the parameter vector has a prior distribution. A natural design criterion is to minimize the expected variance of the BMD estimator. We present an example where we calculate the value of the design criterion for several designs and try to find out how the number of dose groups, the number of animals in the dose groups, and the choice of doses affects this value for different Hill curves. It follows from our calculations that to avoid the risk of unfavorable dose placements, it is good to use designs with more than four dose groups. We can also conclude that any additional information about the expected dose-response curve, e.g., information obtained from studies made in the past, should be taken into account when planning a study because it can improve the design.

12.
Prediction of human cancer risk from the results of rodent bioassays requires two types of extrapolation: a qualitative extrapolation from short-lived rodent species to long-lived humans, and a quantitative extrapolation from near-toxic doses in the bioassay to low-level human exposures. Experimental evidence on the accuracy of prediction between closely related species tested under similar experimental conditions (rats, mice, and hamsters) indicates that: (1) if a chemical is positive in one species, it will be positive in the second species about 75% of the time; however, since about 50% of test chemicals are positive in each species, by chance alone one would expect a predictive value between species of about 50%. (2) If a chemical induces tumors in a particular target organ in one species, it will induce tumors in the same organ in the second species about 50% of the time. Similar predictive values are obtained in an analysis of prediction from humans to rats or from humans to mice for known human carcinogens. Limitations of bioassay data for use in quantitative extrapolation are discussed, including constraints on both estimates of carcinogenic potency and of the dose-response in experiments with only two doses and a control. Quantitative extrapolation should be based on an understanding of mechanisms of carcinogenesis, particularly mitogenic effects that are present at high and not low doses.

13.
Human populations are generally exposed simultaneously to a number of toxicants present in the environment, including complex mixtures of unknown and variable origin. While scientific methods for evaluating the potential carcinogenic risks of pure compounds are relatively well established, methods for assessing the risks of complex mixtures are somewhat less developed. This article provides a report of a recent workshop on carcinogenic mixtures sponsored by the Committee on Toxicology of the U.S. National Research Council, in which toxicological, epidemiological, and statistical approaches to carcinogenic risk assessment for mixtures were discussed. Complex mixtures, such as diesel emissions and tobacco smoke, have been shown to have carcinogenic potential. Bioassay-directed fractionation based on short-term screening tests for genotoxicity has also been used in identifying carcinogenic components of mixtures. Both toxicological and epidemiological studies have identified clear interactions between chemical carcinogens, including synergistic effects at moderate to high doses. To date, laboratory studies have demonstrated over 900 interactions involving nearly 200 chemical carcinogens. At lower doses, theoretical arguments suggest that risks may be near additive. Thus, additivity at low doses has been invoked as a working hypothesis by regulatory authorities in the absence of evidence to the contrary. Future studies of the joint effects of carcinogenic agents may serve to elucidate the mechanisms by which interactions occur at higher doses.

14.
In the absence of data from multiple-compound exposure experiments, the health risk from exposure to a mixture of chemical carcinogens is generally based on the results of the individual single-compound experiments. A procedure to obtain an upper confidence limit on the total risk is proposed under the assumption that total risk for the mixture is additive. It is shown that the current practice of simply summing the individual upper-confidence-limit risk estimates as the upper-confidence-limit estimate on the total excess risk of the mixture may overestimate the true upper bound. In general, if the individual upper-confidence-limit risk estimates are on the same order of magnitude, the proposed method gives a smaller upper-confidence-limit risk estimate than the estimate based on summing the individual upper-confidence-limit estimates; the difference increases as the number of carcinogenic components increases.
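A sketch of the kind of bound described here, assuming (as in the Gaylor-Chen 1996 formulation discussed elsewhere in this list) that the mixture upper limit is the sum of the central estimates plus the root-sum-of-squares of the individual margins; all risk values are hypothetical:

```python
import math


def mixture_upper_bound(central, upper):
    """Upper confidence limit on the total additive risk of a mixture:
    sum of central estimates plus the root-sum-of-squares of each
    component's margin (UCL_i - central_i). Always at most the naive
    sum of the individual UCLs."""
    margins = [u - c for c, u in zip(central, upper)]
    return sum(central) + math.sqrt(sum(m * m for m in margins))


central = [1e-6, 2e-6, 5e-7]  # hypothetical central (MLE) risk estimates
upper = [4e-6, 6e-6, 2e-6]    # hypothetical 95% upper confidence limits
print(mixture_upper_bound(central, upper))  # smaller than sum(upper) = 1.2e-05
```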

15.
Recent advances in risk assessment have led to the development of joint dose-response models to describe prenatal death and fetal malformation rates in developmental toxicity experiments. These models can be used to estimate the effective dose corresponding to a 5% excess risk for both these toxicological endpoints, as well as for overall toxicity. In this article, we develop optimal experimental designs for the estimation of the effective dose for developmental toxicity using joint Weibull dose-response models for prenatal death and fetal malformation. Based on an extended series of developmental studies, near-optimal designs for prenatal death, malformation, and overall toxicity were found to involve three dose groups: an unexposed control group, a high dose equal to the maximum tolerated dose, and a low dose above or comparable to the effective dose. The effect on the optimal designs of changing the number of implants and the degree of intra-litter correlation is also investigated. Although the optimal design has only three dose groups in most cases, practical considerations involving model lack of fit and estimation of the shape of the dose-response curve suggest that, in practice, suboptimal designs with more than three doses will often be preferred.

16.
Increased cell proliferation increases the opportunity for transformations of normal cells to malignant cells via intermediate cells. Nongenotoxic cytotoxic carcinogens that increase cell proliferation rates to replace necrotic cells are likely to have a threshold dose for cytotoxicity below which necrosis and hence, carcinogenesis do not occur. Thus, low dose cancer risk estimates based upon nonthreshold, linear extrapolation are inappropriate for this situation. However, a threshold dose is questionable if a nongenotoxic carcinogen acts via a cell receptor. Also, a nongenotoxic carcinogen that increases the cell proliferation rate, via the cell division rate and/or cell removal rate by apoptosis, by augmenting an existing endogenous mechanism is not likely to have a threshold dose. Whether or not a threshold dose exists for nongenotoxic carcinogens, it is of interest to study the relationship between lifetime tumor incidence and the cell proliferation rate. The Moolgavkar–Venzon–Knudson biologically based stochastic two-stage clonal expansion model is used to describe a carcinogenic process. Because the variability in cell proliferation rates among animals often makes it impossible to detect changes of less than 20% in the rate, it is shown that small changes in the cell proliferation rate, that may be obscured by the background noise in rates, can produce large changes in the lifetime tumor incidence as calculated from the Moolgavkar–Venzon–Knudson model. That is, dose response curves for cell proliferation and tumor incidence do not necessarily mimic each other. This makes the use of no observed effect levels (NOELs) for cell proliferation rates often inadmissible for establishing acceptable daily intakes (ADIs) of nongenotoxic carcinogens. In those cases where low dose linearity is not likely, a potential alternative to a NOEL is a benchmark dose corresponding to a small increase in the cell proliferation rate, e.g., 1%, to which appropriate safety (uncertainty) factors can be applied to arrive at an ADI.

17.
In this paper we describe a simulation, by Monte Carlo methods, of the results of rodent carcinogenicity bioassays. Our aim is to study how the observed correlation between carcinogenic potency (beta, or ln 2/TD50) and maximum tolerated dose (MTD) arises, and whether the existence of this correlation leads to an artificial correlation between carcinogenic potencies in rats and mice. The validity of the bioassay results depends upon, among other things, certain biases in the experimental design of the bioassays. These include selection of chemicals for bioassay and details of the experimental protocol, including dose levels. We use as variables in our simulation the following factors: (1) dose group size, (2) number of dose groups, (3) tumor rate in the control (zero-dose) group, (4) distribution of the MTD values of the group of chemicals as specified by the mean and standard deviation, (5) the degree of correlation between beta and the MTD, as given by the standard deviation of the random error term in the linear regression of log beta on log (1/MTD), and (6) an upper limit on the number of animals with tumors. Monte Carlo simulation can show whether the information present in the existing rodent bioassay database is sufficient to reject the validity of the proposed interspecies correlations at a given level of stringency. We hope that such analysis will be useful for future bioassay design, and more importantly, for discussion of the whole NCI/NTP program.
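A stripped-down sketch of the simulation idea: draw log(MTD) values for a set of chemicals, generate each species' log(potency) from the regression of log beta on log(1/MTD) with independent error, and check the correlation induced between the two species purely by the shared MTD. All parameter values are hypothetical:

```python
import math
import random

random.seed(1)


def corr(x, y):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return sxy / (sx * sy)


def simulate(n=500, sd_mtd=1.0, sd_err=0.3):
    """Correlation between rat and mouse log-potencies when both are
    generated from the same log(MTD) with independent regression error."""
    log_mtd = [random.gauss(0.0, sd_mtd) for _ in range(n)]
    rat = [-m + random.gauss(0.0, sd_err) for m in log_mtd]
    mouse = [-m + random.gauss(0.0, sd_err) for m in log_mtd]
    return corr(rat, mouse)


print(round(simulate(), 2))  # high correlation induced solely by the shared MTD
```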

18.
Spatial and/or temporal clustering of pathogens will invalidate the commonly used assumption of Poisson‐distributed pathogen counts (doses) in quantitative microbial risk assessment. In this work, the theoretically predicted effect of spatial clustering in conventional “single‐hit” dose‐response models is investigated by employing the stuttering Poisson distribution, a very general family of count distributions that naturally models pathogen clustering and contains the Poisson and negative binomial distributions as special cases. The analysis is facilitated by formulating the dose‐response models in terms of probability generating functions. It is shown formally that the theoretical single‐hit risk obtained with a stuttering Poisson distribution is lower than that obtained with a Poisson distribution, assuming identical mean doses. A similar result holds for mixed Poisson distributions. Numerical examples indicate that the theoretical single‐hit risk is fairly insensitive to moderate clustering, though the effect tends to be more pronounced for low mean doses. Furthermore, using Jensen's inequality, an upper bound on risk is derived that tends to better approximate the exact theoretical single‐hit risk for highly overdispersed dose distributions. The bound holds with any dose distribution (characterized by its mean and zero inflation index) and any conditional dose‐response model that is concave in the dose variable. Its application is exemplified with published data from Norovirus feeding trials, for which some of the administered doses were prepared from an inoculum of aggregated viruses. The potential implications of clustering for dose‐response assessment as well as practical risk characterization are discussed.
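The probability-generating-function formulation makes the clustering effect concrete: the single-hit risk is 1 − G(1 − r), where G is the pgf of the dose distribution. A sketch comparing Poisson doses with negative-binomial (clustered) doses of the same mean, illustrating the lower risk under clustering; the parameter values are hypothetical:

```python
import math


def poisson_single_hit(mu, r):
    """Single-hit risk with Poisson doses: 1 - G(1 - r),
    where G(s) = exp(mu * (s - 1)) is the Poisson pgf."""
    return 1.0 - math.exp(-mu * r)


def negbin_single_hit(mu, r, k):
    """Single-hit risk with negative-binomial (clustered) doses of the same
    mean mu; k is the dispersion parameter (smaller k = stronger clustering).
    G(s) = (1 + mu * (1 - s) / k) ** (-k)."""
    return 1.0 - (1.0 + mu * r / k) ** (-k)


mu, r = 10.0, 0.1  # hypothetical mean dose and per-organism hit probability
print(poisson_single_hit(mu, r))      # ~0.632
print(negbin_single_hit(mu, r, 0.5))  # lower risk under strong clustering
```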

19.
The excess cancer risk that might result from exposure to a mixture of chemical carcinogens usually must be estimated using data from experiments conducted with individual chemicals. In estimating such risk, it is commonly assumed that the total risk due to the mixture is the sum of the risks of the individual components, provided that the risks associated with individual chemicals at levels present in the mixture are low. This assumption, while itself not necessarily conservative, has led to the conservative practice of summing individual upper-bound risk estimates in order to obtain an upper bound on the total excess cancer risk for a mixture. Less conservative procedures are described here and are illustrated for the case of a mixture of four carcinogens.

20.
Driven by differing statutory mandates and programmatic separation of regulatory responsibilities between federal, state, and tribal agencies, distinct chemical and radiation risk management strategies have evolved. In the field this separation poses real challenges, since many of the major environmental risk management decisions we face today require the evaluation of both types of risks. Over the last decade, federal, state, and tribal agencies have continued to discuss their different approaches and explore areas where their activities could be harmonized. The current framework for managing public exposures to chemical carcinogens has been referred to as a "bottom up" approach: a risk between 10^-4 and 10^-6 is established as an upper-bound goal. In contrast, a "top down" approach, which sets an upper-bound dose limit coupled with the site-specific As Low As Reasonably Achievable (ALARA) principle, is in place to manage individual exposure to radiation. While radiation risks are typically managed on a cumulative basis, exposure to chemicals is generally managed on a chemical-by-chemical, medium-by-medium basis. There are also differences in the nature and size of sites where chemical and radiation contamination is found. Such differences result in divergent management concerns. In spite of these differences, there are several common and practical concerns among radiation and chemical risk managers. They include (1) the issue of cost for site redevelopment and long-term stewardship, (2) public acceptance and involvement, and (3) the need for a flexible risk management framework to address the first two issues. This article attempts to synthesize key differences, opportunities for harmonization, and challenges ahead.
