Similar Documents (20 results)
1.
Human health risk assessments use point values to develop risk estimates and thus impart a deterministic character to risk, which, by definition, is a probabilistic phenomenon. The risk estimates are calculated based on individuals and then, using uncertainty factors (UFs), are extrapolated to the population that is characterized by variability. Regulatory agencies have recommended the quantification of the impact of variability in risk assessments through the application of probabilistic methods. In the present study, a framework that deals with the quantitative analysis of uncertainty (U) and variability (V) in target tissue dose in the population was developed by applying probabilistic analysis to physiologically-based toxicokinetic models. The mechanistic parameters that determine kinetics were described with probability density functions (PDFs). Since each PDF depicts the frequency of occurrence of all expected values of each parameter in the population, the combined effects of multiple sources of U/V were accounted for in the estimated distribution of tissue dose in the population, and a unified (adult and child) intraspecies toxicokinetic uncertainty factor UFH-TK was determined. The results show that the proposed framework accounts effectively for U/V in population toxicokinetics. The ratio of the 95th percentile to the 50th percentile of the annual average concentration of the chemical in the target tissue (i.e., the UFH-TK) varies with age. The ratio is equivalent to a unified intraspecies toxicokinetic UF, and it is one of the UFs by which the NOAEL can be divided to obtain the RfC/RfD. The 10-fold intraspecies UF is intended to account for uncertainty and variability in toxicokinetics (3.2x) and toxicodynamics (3.2x). This article deals exclusively with the toxicokinetic component of the UF.
The framework provides an alternative to the default methodology and is advantageous in that the evaluation of toxicokinetic variability is based on the distribution of the effective target tissue dose rather than the applied dose. It allows for the replacement of the default adult and child intraspecies UF with toxicokinetic data-derived values and provides accurate chemical-specific estimates of their magnitude. It shows that proper application of probability and toxicokinetic theories can reduce uncertainties when establishing exposure limits for specific compounds and provide better assurance that established limits are adequately protective. It contributes to the development of a probabilistic noncancer risk assessment framework and will ultimately lead to the unification of cancer and noncancer risk assessment methodologies.
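At its core, the population toxicokinetic framework described above reduces to Monte Carlo propagation of parameter distributions through a kinetic model, with the UFH-TK read off the resulting tissue-dose distribution as the P95/P50 ratio. A minimal sketch, using a hypothetical one-compartment clearance model and an assumed lognormal population clearance (not the paper's PBPK model or parameter values):

```python
import math
import random

random.seed(1)

def sample_tissue_conc(n=100_000, intake=1.0):
    """Crude one-compartment sketch: steady-state target-tissue
    concentration = intake / clearance, with clearance varying
    lognormally across the population (hypothetical spread)."""
    concs = []
    for _ in range(n):
        clearance = random.lognormvariate(mu=math.log(0.5), sigma=0.4)
        concs.append(intake / clearance)
    return sorted(concs)

concs = sample_tissue_conc()
p50 = concs[len(concs) // 2]
p95 = concs[int(0.95 * len(concs))]
uf_h_tk = p95 / p50  # data-derived intraspecies toxicokinetic UF
print(f"UF_H-TK = P95/P50 = {uf_h_tk:.2f}")
```

With a lognormal tissue dose of log-scale standard deviation 0.4, the theoretical ratio is exp(1.645 × 0.4) ≈ 1.9, well below the default 3.2 toxicokinetic half of the 10-fold UF.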

2.
In a series of articles and a health-risk assessment report, scientists at the CIIT Hamner Institutes developed a model (CIIT model) for estimating respiratory cancer risk due to inhaled formaldehyde within a conceptual framework incorporating extensive mechanistic information and advanced computational methods at the toxicokinetic and toxicodynamic levels. Several regulatory bodies have utilized predictions from this model; however, after detailed evaluation, the California EPA decided against doing so. In this article, we study the CIIT model to identify key biological and statistical uncertainties that need careful evaluation if such two-stage clonal expansion models are to be used for extrapolation of cancer risk from animal bioassays to human exposure. Broadly, these issues pertain to the use and interpretation of experimental labeling index and tumor data, the evaluation and biological interpretation of estimated parameters, and uncertainties in model specification, in particular that of initiated cells. We also identify key uncertainties in the scale-up of the CIIT model to humans, focusing on assumptions underlying model parameters for cell replication rates and formaldehyde-induced mutation. We discuss uncertainties in identifying parameter values in the model used to estimate and extrapolate DNA protein cross-link levels. The authors of the CIIT modeling endeavor characterized their human risk estimates as "conservative in the face of modeling uncertainties." The uncertainties discussed in this article indicate that such a claim is premature.

3.
To quantify the health benefits of environmental policies, economists generally require estimates of the reduced probability of illness or death. For policies that reduce exposure to carcinogenic substances, these estimates traditionally have been obtained through the linear extrapolation of experimental dose-response data to low-exposure scenarios as described in the U.S. Environmental Protection Agency's Guidelines for Carcinogen Risk Assessment (1986). In response to evolving scientific knowledge, EPA proposed revisions to the guidelines in 1996. Under the proposed revisions, dose-response relationships would not be estimated for carcinogens thought to exhibit nonlinear modes of action. Such a change in cancer-risk assessment methods and outputs will likely have serious consequences for how benefit-cost analyses of policies aimed at reducing cancer risks are conducted. Any tendency for reduced quantification of effects in environmental risk assessments, such as those contemplated in the revisions to EPA's cancer-risk assessment guidelines, impedes the ability of economic analysts to respond to increasing calls for benefit-cost analysis. This article examines the implications for benefit-cost analysis of carcinogenic exposures of the proposed changes to the 1986 Guidelines and proposes an approach for bounding dose-response relationships when no biologically based models are available. In spite of the more limited quantitative information provided in a carcinogen risk assessment under the proposed revisions to the guidelines, we argue that reasonable bounds on dose-response relationships can be estimated for low-level exposures to nonlinear carcinogens. This approach yields estimates of reduced illness for use in a benefit-cost analysis while incorporating evidence of nonlinearities in the dose-response relationship. As an illustration, the bounding approach is applied to the case of chloroform exposure.

4.
Formaldehyde induced squamous-cell carcinomas in the nasal passages of F344 rats in two inhalation bioassays at exposure levels of 6 ppm and above. Increases in rates of cell proliferation were measured by T. M. Monticello and colleagues at exposure levels of 0.7 ppm and above in the same tissues from which tumors arose. A risk assessment for formaldehyde was conducted at the CIIT Centers for Health Research, in collaboration with investigators from Toxicological Excellence in Risk Assessment (TERA) and the U.S. Environmental Protection Agency (U.S. EPA) in 1999. Two methods for dose-response assessment were used: a full biologically based modeling approach and a statistically oriented analysis by the benchmark dose (BMD) method. This article presents the latter approach, the purpose of which is to combine BMD and pharmacokinetic modeling to estimate human cancer risks from formaldehyde exposure. BMD analysis was used to identify points of departure (exposure levels) for low-dose extrapolation in rats for both the tumor and cell proliferation endpoints. The benchmark concentrations for induced cell proliferation were lower than for tumors. These concentrations were extrapolated to humans using two mechanistic models. One model used computational fluid dynamics (CFD) alone to determine rates of delivery of inhaled formaldehyde to the nasal lining. The second model combined the CFD method with a pharmacokinetic model to predict tissue dose with formaldehyde-induced DNA-protein cross-links (DPX) as a dose metric. Both extrapolation methods gave similar results, and the predicted cancer risk in humans at low exposure levels was found to be similar to that from a risk assessment conducted by the U.S. EPA in 1991.
Use of the mechanistically based extrapolation models lends greater certainty to these risk estimates than previous approaches. It also identifies the uncertainty in the measured dose-response relationship for cell proliferation at low exposure levels, the dose-response relationship for DPX in monkeys, and the choice between linear and nonlinear methods of extrapolation as the key remaining sources of uncertainty.
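The BMD step described above amounts to fitting a dose-response model and solving for the exposure giving a chosen benchmark response. A toy sketch with a one-stage model and hypothetical coefficients (not the formaldehyde fits), solving for the BMD at 10% extra risk by bisection:

```python
import math

def extra_risk(d, q0=0.01, q1=0.05):
    """One-stage model P(d) = 1 - exp(-(q0 + q1*d)) with hypothetical
    coefficients; returns extra risk over background at dose d."""
    p0 = 1 - math.exp(-q0)
    pd = 1 - math.exp(-(q0 + q1 * d))
    return (pd - p0) / (1 - p0)

def benchmark_dose(bmr=0.10, lo=0.0, hi=100.0):
    """Bisection for the dose giving extra risk == bmr (the BMD)."""
    for _ in range(100):
        mid = (lo + hi) / 2
        if extra_risk(mid) < bmr:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

bmd10 = benchmark_dose()
print(f"BMD10 = {bmd10:.3f}")
```

For this model the extra risk simplifies to 1 − exp(−q1·d), so the BMD10 is −ln(0.9)/q1 ≈ 2.107, which the bisection recovers.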

5.
Uncertainty in Cancer Risk Estimates
Several existing databases compiled by Gold et al.(1–3) for carcinogenesis bioassays are examined to obtain estimates of the reproducibility of cancer rates across experiments, strains, and rodent species. A measure of carcinogenic potency is given by the TD50 (daily dose that causes a tumor type in 50% of the exposed animals that otherwise would not develop the tumor in a standard lifetime). The lognormal distribution can be used to model the uncertainty of the estimates of potency (TD50) and the ratio of TD50's between two species. For near-replicate bioassays, approximately 95% of the TD50's are estimated to be within a factor of 4 of the mean. Between strains, about 95% of the TD50's are estimated to be within a factor of 11 of their mean, and the pure genetic component of variability is accounted for by a factor of 6.8. Between rats and mice, about 95% of the TD50's are estimated to be within a factor of 32 of the mean, while between humans and experimental animals the factor is 110 for 20 chemicals reported by Allen et al.(4) The common practice of basing cancer risk estimates on the most sensitive rodent species-strain-sex and using interspecies dose scaling based on body surface area appears to overestimate cancer rates for these 20 human carcinogens by about one order of magnitude on the average. Hence, for chemicals where the dose-response is nearly linear below experimental doses, cancer risk estimates based on animal data are not necessarily conservative and may range from a factor of 10 too low for human carcinogens up to a factor of 1000 too high for approximately 95% of the chemicals tested to date. These limits may need to be modified for specific chemicals where additional mechanistic or pharmacokinetic information may suggest alterations or where particularly sensitive subpopulations may be exposed. Supralinearity could lead to anticonservative estimates of cancer risk.
Underestimating cancer risk by a specific factor has a much larger impact on the actual number of cancer cases than overestimates of smaller risks by the same factor. This paper does not address the uncertainties in high to low dose extrapolation. If the dose-response is sufficiently nonlinear at low doses to produce cancer risks near zero, then low-dose risk estimates based on linear extrapolation are likely to overestimate risk and the limits of uncertainty cannot be established.
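The factor-of-N statements above follow directly from lognormal theory: if log TD50 is normal with standard deviation σ, roughly 95% of values fall within a factor of exp(1.96σ) of the geometric mean. A small sketch inverting the abstract's reported factors into the implied σ values:

```python
import math

def factor_from_sigma(sigma_ln):
    """If ln(TD50) is normal with s.d. sigma_ln, ~95% of TD50's
    fall within a factor exp(1.96*sigma_ln) of the geometric mean."""
    return math.exp(1.96 * sigma_ln)

def sigma_from_factor(factor):
    """Invert: log-scale s.d. implied by a reported '95% within factor F'."""
    return math.log(factor) / 1.96

# Factors from the abstract: 4 (near-replicate), 11 (between strains),
# 32 (rats vs. mice), 110 (humans vs. experimental animals)
for f in (4, 11, 32, 110):
    print(f"factor {f:>3} -> implied sigma_ln = {sigma_from_factor(f):.2f}")
```

The round trip is exact by construction, which makes the two helpers a convenient consistency check when reading such reproducibility factors.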

6.
The distributional approach for uncertainty analysis in cancer risk assessment is reviewed and extended. The method considers a combination of bioassay study results, targeted experiments, and expert judgment regarding biological mechanisms to predict a probability distribution for uncertain cancer risks. Probabilities are assigned to alternative model components, including the determination of human carcinogenicity, mode of action, the dosimetry measure for exposure, the mathematical form of the dose-response relationship, the experimental data set(s) used to fit the relationship, and the formula used for interspecies extrapolation. Alternative software platforms for implementing the method are considered, including Bayesian belief networks (BBNs) that facilitate assignment of prior probabilities, specification of relationships among model components, and identification of all output nodes on the probability tree. The method is demonstrated using the application of Evans, Sielken, and co-workers for predicting cancer risk from formaldehyde inhalation exposure. Uncertainty distributions are derived for maximum likelihood estimate (MLE) and 95th percentile upper confidence limit (UCL) unit cancer risk estimates, and the effects of resolving selected model uncertainties on these distributions are demonstrated, considering both perfect and partial information for these model components. A method for synthesizing the results of multiple mechanistic studies is introduced, considering the assessed sensitivities and selectivities of the studies for their targeted effects. A highly simplified example is presented illustrating assessment of genotoxicity based on studies of DNA damage response caused by naphthalene and its metabolites. The approach can provide a formal mechanism for synthesizing multiple sources of information using a transparent and replicable weight-of-evidence procedure.

7.
United States regulatory agencies use no-threshold models for estimating carcinogenic risks. Other countries use no-threshold models for carcinogens that are genotoxic and threshold models for carcinogens that are not genotoxic, such as 2,3,7,8-tetrachlorodibenzo-p-dioxin (TCDD or "dioxin"). The U.S. Environmental Protection Agency has proposed a revision of the carcinogenic potency estimate for TCDD that is based on neither a threshold nor a no-threshold model; instead, it is a compromise between risk numbers generated by the two irreconcilably different models. This paper discusses the revision and its implications.

8.
Experimental Design of Bioassays for Screening and Low Dose Extrapolation
Relatively high doses of chemicals generally are employed in animal bioassays to detect potential carcinogens with relatively small numbers of animals. The problem investigated here is the development of experimental designs which are effective for high to low dose extrapolation for tumor incidence as well as for screening (detecting) carcinogens. Several experimental designs are compared over a wide range of different dose response curves. Linear extrapolation is used below the experimental data range to establish an upper bound on carcinogenic risk at low doses. The goal is to find experimental designs which minimize the upper bound on low dose risk estimates (i.e., maximize the allowable dose for a given level of risk). The maximum tolerated dose (MTD) is employed for screening purposes. Among the designs investigated, experiments with doses at the MTD, 1/2 MTD, 1/4 MTD, and controls generally provide relatively good data for low dose extrapolation with relatively good power for detecting carcinogens. For this design, equal numbers of animals per dose level perform as well as unequal allocations.

9.
Exposure guidelines for potentially toxic substances are often based on a reference dose (RfD) that is determined by dividing a no-observed-adverse-effect-level (NOAEL), lowest-observed-adverse-effect-level (LOAEL), or benchmark dose (BD) corresponding to a low level of risk, by a product of uncertainty factors. The uncertainty factors for animal to human extrapolation, variable sensitivities among humans, extrapolation from measured subchronic effects to unknown results for chronic exposures, and extrapolation from a LOAEL to a NOAEL can be thought of as random variables that vary from chemical to chemical. Selected databases are examined that provide distributions across chemicals of inter- and intraspecies effects, ratios of LOAELs to NOAELs, and differences in acute and chronic effects, to illustrate the determination of percentiles for uncertainty factors. The distributions of uncertainty factors tend to be approximately lognormally distributed. The logarithm of the product of independent uncertainty factors is approximately distributed as the sum of normally distributed variables, making it possible to estimate percentiles for the product. Hence, the size of the products of uncertainty factors can be selected to provide adequate safety for a large percentage (e.g., approximately 95%) of RfDs. For the databases used to describe the distributions of uncertainty factors, values of 10 appear to be reasonable and conservative. For the databases examined, the following simple "Rule of 3s" is suggested that exceeds the estimated 95th percentile of the product of uncertainty factors: if only a single uncertainty factor is required, use 33; for any two uncertainty factors, use 3 × 33 ≈ 100; for any three uncertainty factors, use a combined factor of 3 × 100 = 300; and if all four uncertainty factors are needed, use a total factor of 3 × 300 = 900. If coverage near the 99th percentile is desired, apply another factor of 3.
An additional factor may be needed for inadequate data or a modifying factor for other uncertainties (e.g., different routes of exposure) not covered above.
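The key point above, that the log of a product of independent uncertainty factors is a sum of normals, means the combined 95th percentile grows far more slowly than the product of the individual 95th percentiles. A sketch with hypothetical lognormal UFs (geometric mean 2, individual 95th percentile 10; not the paper's fitted distributions):

```python
import math

def product_p95(k, gm=2.0, p95_single=10.0):
    """95th percentile of a product of k independent lognormal UFs,
    each with geometric mean gm and individual 95th percentile
    p95_single (hypothetical values). Because log-products add,
    the combined log-scale spread grows with sqrt(k), not k."""
    sigma = math.log(p95_single / gm) / 1.645
    return gm ** k * math.exp(1.645 * math.sqrt(k) * sigma)

for k in range(1, 5):
    print(f"{k} UF(s): combined P95 ~ {product_p95(k):6.0f}  vs default 10^{k} = {10**k}")
```

With these assumed parameters the combined 95th percentiles come out far below 10^k, which is the statistical intuition behind a rule like the "Rule of 3s."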

10.
Kenneth T. Bogen. Risk Analysis, 2014, 34(10): 1795-1806
The National Research Council 2009 "Silver Book" panel report included a recommendation that the U.S. Environmental Protection Agency (EPA) should increase all of its chemical carcinogen (CC) potency estimates by ~7-fold to adjust for a purported median-vs.-mean bias that I recently argued does not exist (Bogen KT. "Does EPA underestimate cancer risks by ignoring susceptibility differences?," Risk Analysis, 2014; 34(10):1780-1784). In this issue of the journal, my argument is critiqued for having flaws concerning: (1) intent, bias, and conservatism of EPA estimates of CC potency; (2) bias in potency estimates derived from epidemiology; and (3) human-animal CC-potency correlation. However, my argument remains valid, for the following reasons. (1) EPA's default approach to estimating CC risks has correctly focused on bounding average (not median) individual risk under a genotoxic mode-of-action (MOA) assumption, although pragmatically the approach leaves both inter-individual variability in CC susceptibility, and widely varying CC-specific magnitudes of fundamental MOA uncertainty, unquantified. (2) CC risk estimates based on large epidemiology studies are not systematically biased downward due to limited sampling from broad, lognormal susceptibility distributions. (3) A good, quantitative correlation is exhibited between upper-bounds on CC-specific potency estimated from human vs. animal studies (n = 24, r = 0.88, p = 2 × 10⁻⁸). It is concluded that protective upper-bound estimates of individual CC risk that account for heterogeneity in susceptibility, as well as risk comparisons informed by best predictions of average-individual and population risk that address CC-specific MOA uncertainty, should each be used as separate, complementary tools to improve regulatory decisions concerning low-level, environmental CC exposures.

11.
Historically, U.S. regulators have derived cancer slope factors by using applied dose and tumor response data from a single key bioassay or by averaging the cancer slope factors of several key bioassays. Recent changes in U.S. Environmental Protection Agency (EPA) guidelines for cancer risk assessment have acknowledged the value of better use of mechanistic data and better dose-response characterization. However, agency guidelines may benefit from additional considerations presented in this paper. An exploratory study was conducted by using rat brain tumor data for acrylonitrile (AN) to investigate the use of physiologically based pharmacokinetic (PBPK) modeling along with pooling of dose-response data across routes of exposure as a means for improving carcinogen risk assessment methods. In this study, two contrasting assessments were conducted for AN-induced brain tumors in the rat on the basis of (1) the EPA's approach, in which the dose-response relationship was characterized using administered dose/concentration for each of the key studies assessed individually; and (2) an analysis of the pooled data, in which the dose-response relationship was characterized using PBPK-derived internal dose measures for a combined database of ten bioassays. The cancer potencies predicted for AN by the contrasting assessments are remarkably different (i.e., risk-specific doses differ by as much as two to four orders of magnitude), with the pooled data assessments yielding lower values. This result suggests that current carcinogen risk assessment practices overestimate AN cancer potency. This methodology should be equally applicable to other data-rich chemicals in identifying (1) a useful dose measure, (2) an appropriate dose-response model, (3) an acceptable point of departure, and (4) an appropriate method of extrapolation from the range of observation to the range of prediction when a chemical's mode of action remains uncertain.

12.
Use of Mechanistic Models to Estimate Low-Dose Cancer Risks
Kenny S. Crump. Risk Analysis, 1994, 14(6): 1033-1038
The utility of mechanistic models of cancer for predicting cancer risks at low doses is examined. Based upon a general approximation to the dose-response that is valid at low doses, it is shown that at low doses the dose-response predicted by a mechanistic model is a linear combination of the dose-responses for each of the physiological parameters in the model that are affected by exposure. This demonstrates that, unless the mechanistic model provides a theoretical basis for determining the dose-responses for these parameters, the extrapolation of risks to low doses using a mechanistic model is basically "curve fitting," just as is the case when extrapolating using statistical models. This suggests that experiments to generate data for use in mechanistic models should emphasize measuring the dose-response for dose-related parameters as accurately as possible and at the lowest feasible doses.

14.
Sara M. Hoover. Risk Analysis, 1999, 19(4): 527-545
Exposure to persistent organochlorines in breast milk was estimated probabilistically for Canadian infants. Noncancer health effects were evaluated by comparing the predicted exposure distributions to published guidance values. For chemicals identified as potential human carcinogens, cancer risks were evaluated using standard methodology typically applied in Canada, as well as an alternative method developed under the Canadian Environmental Protection Act. Potential health risks associated with exposure to persistent organochlorines were quantitatively and qualitatively weighed against the benefits of breast-feeding. Current levels of the majority of contaminants identified in Canadian breast milk do not pose unacceptable risks to infants. Benefits of breast-feeding are well documented and qualitatively appear to outweigh potential health concerns associated with organochlorine exposure. Furthermore, the risks of mortality from not breast-feeding estimated by Rogan and colleagues exceed the theoretical cancer risks estimated for infant exposure to potential carcinogens in Canadian breast milk. Although levels of persistent compounds have been declining in Canadian breast milk, potentially significant risks were estimated for exposure to polychlorinated biphenyls, dibenzo-p-dioxins, and dibenzofurans. Follow-up work is suggested that would involve the use of a physiologically based toxicokinetic model with probabilistic inputs to predict dioxin exposure to the infant. A more detailed risk analysis could be carried out by coupling the exposure estimates with a dose-response analysis that accounts for uncertainty.

15.
We compare the regulatory implications of applying the traditional (linearized) and exact two-stage dose-response models to animal carcinogenic data. We analyze dose-response data from six studies, representing five different substances, and we determine the goodness-of-fit of each model as well as the 95% confidence lower limit of the dose corresponding to a target excess risk of 10⁻⁵ (the target risk dose TRD). For the two concave datasets, we find that the exact model gives a substantially better fit to the data than the traditional model, and that the exact model gives a TRD that is an order of magnitude lower than that given by the traditional model. In the other cases, the exact model gives a fit equivalent to or better than the traditional model. We also show that although the exact two-stage model may exhibit dose-response concavity at moderate dose levels, it is always linear or sublinear, and never supralinear, in the low-dose limit. Because regulatory concern is almost always confined to the low-dose region extrapolation, supralinear behavior seems not to be of regulatory concern in the exact two-stage model. Finally, we find that when performing this low-dose extrapolation in cases of dose-response concavity, extrapolating the model fit leads to a more conservative TRD than taking a linear extrapolation from 10% excess risk. We conclude with a set of recommendations.
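The TRD computation described above is root-finding on a fitted dose-response curve. A sketch using a generic two-term multistage stand-in with hypothetical coefficients (not the paper's exact two-stage model), comparing extrapolation of the fitted curve against linear extrapolation from the 10% excess-risk dose:

```python
import math

def excess_risk(d, q1=0.02, q2=0.5):
    """Two-term multistage stand-in for a fitted dose-response curve
    (hypothetical coefficients, for illustration only)."""
    return 1 - math.exp(-(q1 * d + q2 * d * d))

def dose_for_risk(target, lo=0.0, hi=10.0):
    """Bisection for the dose giving the target excess risk."""
    for _ in range(200):
        mid = (lo + hi) / 2
        if excess_risk(mid) < target:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

trd_model = dose_for_risk(1e-5)            # extrapolate the fitted curve
trd_linear = dose_for_risk(0.10) * 1e-4    # linear from the 10% dose
print(f"TRD (model fit):        {trd_model:.3e}")
print(f"TRD (linear from ED10): {trd_linear:.3e}")
```

With these convex (sublinear) hypothetical coefficients the linear-from-ED10 value comes out lower, i.e. more conservative; which approach is more conservative depends on the curvature of the fitted model, which is the paper's point about concave datasets.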

16.
The choice of a dose-response model is decisive for the outcome of quantitative risk assessment. Single-hit models have played a prominent role in dose-response assessment for pathogenic microorganisms since their introduction. Hit theory models are based on a few simple concepts that are attractive for their clarity and plausibility. These models, in particular the Beta Poisson model, are used for extrapolation of experimental dose-response data to low doses, as are often present in drinking water or food products. Unfortunately, the Beta Poisson model, as it is used throughout the microbial risk literature, is an approximation whose validity is not widely known. The exact functional relation is numerically complex, especially for use in optimization or uncertainty analysis. Here it is shown that although the discrepancy between the Beta Poisson formula and the exact function is not very large for many data sets, the differences are greatest at low doses, the region of interest for many risk applications. Errors may become very large, however, in the results of uncertainty analysis, or when the data contain little low-dose information. One striking property of the exact single-hit model is that it has a maximum risk curve, limiting the upper confidence level of the dose-response relation. This is due to the fact that the risk cannot exceed the probability of exposure, a property that is not retained in the Beta Poisson approximation. This maximum possible response curve is important for uncertainty analysis, and for risk assessment of pathogens with unknown properties.
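The exact single-hit Beta Poisson response involves Kummer's confluent hypergeometric function, P(d) = 1 − ₁F₁(α, α+β, −d), while the common approximation is 1 − (1 + d/β)^(−α). A sketch comparing the two, with hypothetical α and β and a simple series evaluation of ₁F₁ (adequate only for modest doses):

```python
import math

def hyp1f1(a, b, x, terms=200):
    """Kummer's confluent hypergeometric series 1F1(a; b; x),
    summed term by term (fine for small to moderate |x|)."""
    s, t = 1.0, 1.0
    for n in range(terms):
        t *= (a + n) / (b + n) * x / (n + 1)
        s += t
    return s

def p_exact(dose, alpha=0.25, beta=10.0):
    """Exact single-hit Beta Poisson: 1 - 1F1(alpha, alpha+beta, -dose)."""
    return 1 - hyp1f1(alpha, alpha + beta, -dose)

def p_approx(dose, alpha=0.25, beta=10.0):
    """Widely used Beta Poisson approximation: 1 - (1 + dose/beta)^(-alpha)."""
    return 1 - (1 + dose / beta) ** (-alpha)

for d in (0.1, 1.0, 10.0):
    print(f"dose {d:>5}: exact {p_exact(d):.5f}  approx {p_approx(d):.5f}")
```

At low doses the exact low-dose slope is α/(α+β) while the approximation's is α/β, so with these assumed parameters the approximation sits slightly above the exact response, consistent with the abstract's observation that the relative discrepancy is largest in the low-dose region.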

17.
A mechanistic model and associated procedures are proposed for cancer risk assessment of genotoxic chemicals. As previously shown for ionizing radiation, a linear multiplicative model was found to be compatible with published experimental data for ethylene oxide, acrylamide, and butadiene. The validity of this model was anticipated in view of the multiplicative interaction of mutation with inherited and acquired growth-promoting conditions. Concurrent analysis led to rejection of an additive model (i.e., the model commonly applied for cancer risk assessment). A reanalysis of data for radiogenic cancer in mouse, dog, and man shows that the relative risk coefficient is approximately the same (0.4 to 0.5 percent per rad) for tumours induced in the three species. Doses in vivo, defined as the time-integrated concentrations of ultimate mutagens and expressed in millimol × kg⁻¹ × h (mMh), are, like radiation doses given in Gy or rad, proportional to frequencies of potentially mutagenic events. The radiation dose equivalents of chemical doses are calculated by multiplying chemical doses (in mMh) by the relative genotoxic potencies (in rad × mMh⁻¹) determined in vitro. In this way the relative cancer incidence increments in rats and mice exposed to ethylene oxide were shown to be about 0.4 percent per rad-equivalent, in agreement with the data for radiogenic cancer. Our analyses suggest that values of the relative risk coefficients for genotoxic chemicals are independent of species and that relative cancer risks determined in animal tests apply also to humans. If reliable animal test data are not available, cancer risks may be estimated by the relative potency. In both cases exposure dose/target dose relationships, the latter via macromolecule adducts, should be determined.
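The rad-equivalent conversion described above is simple arithmetic: chemical dose (in mMh) times relative genotoxic potency (rad per mMh), then times the ~0.4% per rad relative risk coefficient. A sketch with hypothetical exposure and potency values (only the 0.4%/rad coefficient comes from the abstract):

```python
def rad_equivalent_dose(chem_dose_mMh, potency_rad_per_mMh):
    """Convert an in-vivo chemical dose (time-integrated concentration
    of the ultimate mutagen, in mMh) into a radiation-dose equivalent."""
    return chem_dose_mMh * potency_rad_per_mMh

def relative_risk_increment(rad_equiv, coeff_per_rad=0.004):
    """Multiplicative model: relative cancer-incidence increment,
    using the ~0.4% per rad coefficient reported in the abstract."""
    return rad_equiv * coeff_per_rad

# Hypothetical exposure: 5 mMh at a relative potency of 2 rad per mMh
req = rad_equivalent_dose(5.0, 2.0)  # 10 rad-equivalents
print(f"relative risk increment: {relative_risk_increment(req):.1%}")
```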

18.
Computational models support environmental regulatory activities by providing the regulator an ability to evaluate available knowledge, assess alternative regulations, and provide a framework to assess compliance. But all models face inherent uncertainties because human and natural systems are always more complex and heterogeneous than can be captured in a model. Here, we provide a summary discussion of the activities, findings, and recommendations of the National Research Council's Committee on Regulatory Environmental Models, a committee funded by the U.S. Environmental Protection Agency to provide guidance on the use of computational models in the regulatory process. Modeling is a difficult enterprise even outside the potentially adversarial regulatory environment. The demands grow when the regulatory requirements for accountability, transparency, public accessibility, and technical rigor are added to the challenges. Moreover, models cannot be validated (declared true) but instead should be evaluated with regard to their suitability as tools to address a specific question. The committee concluded that these characteristics make evaluation of a regulatory model more complex than simply comparing measurement data with model results. The evaluation also must balance the need for a model to be accurate with the need for a model to be reproducible, transparent, and useful for the regulatory decision at hand. Meeting these needs requires model evaluation to be applied over the "life cycle" of a regulatory model with an approach that includes different forms of peer review, uncertainty analysis, and extrapolation methods than those for nonregulatory models.

19.
There has been considerable discussion regarding the conservativeness of low-dose cancer risk estimates based upon linear extrapolation from upper confidence limits. Various groups have expressed a need for best (point) estimates of cancer risk in order to improve risk/benefit decisions. Point estimates of carcinogenic potency obtained from maximum likelihood estimates of low-dose slope may be highly unstable, being sensitive both to the choice of the dose-response model and possibly to minimal perturbations of the data. For carcinogens that augment background carcinogenic processes and/or for mutagenic carcinogens, the tumor incidence versus target tissue dose is expected to be linear at low doses. Pharmacokinetic data may be needed to identify and adjust for exposure-dose nonlinearities. Based on the assumption that the dose response is linear over low doses, a stable point estimate for low-dose cancer risk is proposed. Since various models give similar estimates of risk down to levels of 1%, a stable estimate of the low-dose cancer slope is provided by ŝ = 0.01/ED01, where ED01 is the dose corresponding to an excess cancer risk of 1%. Low-dose estimates of cancer risk are then obtained as risk = ŝ × dose. The proposed procedure is similar to one that has been utilized in the past by the Center for Food Safety and Applied Nutrition, Food and Drug Administration. The upper confidence limit, s*, corresponding to this point estimate of low-dose slope is similar to the upper limit, q1*, obtained from the generalized multistage model. The advantage of the proposed procedure is that ŝ provides stable estimates of low-dose carcinogenic potency that are not unduly influenced by small perturbations of the tumor incidence rates, unlike q1*.
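The proposed slope estimate is easy to compute once ED01 is known. A sketch using a one-stage model with a hypothetical slope, purely to illustrate ŝ = 0.01/ED01 and risk = ŝ × dose:

```python
import math

def ed01_one_stage(q1=0.05):
    """Dose giving 1% excess risk under a one-stage model
    P(d) = 1 - exp(-q1*d) (hypothetical slope, for illustration)."""
    return -math.log(1 - 0.01) / q1

ed01 = ed01_one_stage()
s_hat = 0.01 / ed01  # proposed stable low-dose slope estimate
dose = 0.001
print(f"ED01 = {ed01:.4f}, s_hat = {s_hat:.4f}")
print(f"low-dose risk at dose {dose}: {s_hat * dose:.2e}")
```

Because ED01 sits within the range where competing models agree (down to about 1% risk), ŝ changes little with the choice of fitted model, which is the stability property the abstract emphasizes.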

20.
Suresh H. Moolgavkar, E. Georg Luebeck, Jay Turim, Linda Hanna. Risk Analysis, 1999, 19(4): 599-611
We present the results of a quantitative assessment of the lung cancer risk associated with occupational exposure to refractory ceramic fibers (RCF). The primary sources of data for our risk assessment were two long-term oncogenicity studies in male Fischer rats conducted to assess the potential pathogenic effects associated with prolonged inhalation of RCF. An interesting feature of the data was the availability of the temporal profile of fiber burden in the lungs of experimental animals. Because of this information, we were able to conduct both exposure-response and dose-response analyses. Our risk assessment was conducted within the framework of a biologically based model for carcinogenesis, the two-stage clonal expansion model, which allows for the explicit incorporation of the concepts of initiation and promotion in the analyses. We found that a model positing that RCF was an initiator had the highest likelihood. We proposed an approach based on biological considerations for the extrapolation of risk to humans. This approach requires estimation of human lung burdens for specific exposure scenarios, which we did by using an extension of a model due to Yu. Our approach acknowledges that the risk associated with exposure to RCF depends on exposure to other lung carcinogens. We present estimates of risk in two populations: (1) a population of nonsmokers and (2) an occupational cohort of steelworkers not exposed to coke oven emissions, a mixed population that includes both smokers and nonsmokers.
