Similar Literature
20 similar articles found
1.
In a series of articles and a health-risk assessment report, scientists at the CIIT Hamner Institutes developed a model (CIIT model) for estimating respiratory cancer risk due to inhaled formaldehyde within a conceptual framework incorporating extensive mechanistic information and advanced computational methods at the toxicokinetic and toxicodynamic levels. Several regulatory bodies have utilized predictions from this model; however, after detailed evaluation, the California EPA decided against doing so. In this article, we study the CIIT model to identify key biological and statistical uncertainties that need careful evaluation if such two-stage clonal expansion models are to be used for extrapolation of cancer risk from animal bioassays to human exposure. Broadly, these issues pertain to the use and interpretation of experimental labeling index and tumor data, the evaluation and biological interpretation of estimated parameters, and uncertainties in model specification, in particular that of initiated cells. We also identify key uncertainties in the scale-up of the CIIT model to humans, focusing on assumptions underlying model parameters for cell replication rates and formaldehyde-induced mutation. We discuss uncertainties in identifying parameter values in the model used to estimate and extrapolate DNA protein cross-link levels. The authors of the CIIT modeling endeavor characterized their human risk estimates as "conservative in the face of modeling uncertainties." The uncertainties discussed in this article indicate that such a claim is premature.

2.
Risk Analysis, 2018, 38(1):163-176
The U.S. Environmental Protection Agency (EPA) uses health risk assessment to help inform its decisions in setting national ambient air quality standards (NAAQS). EPA's standard approach is to make epidemiologically-based risk estimates based on a single statistical model selected from the scientific literature, called the "core" model. The uncertainty presented for "core" risk estimates reflects only the statistical uncertainty associated with that one model's concentration-response function parameter estimate(s). However, epidemiologically-based risk estimates are also subject to "model uncertainty," which is a lack of knowledge about which of many plausible model specifications and data sets best reflects the true relationship between health and ambient pollutant concentrations. In 2002, a National Academies of Sciences (NAS) committee recommended that model uncertainty be integrated into EPA's standard risk analysis approach. This article discusses how model uncertainty can be taken into account with an integrated uncertainty analysis (IUA) of health risk estimates. It provides an illustrative numerical example based on risk of premature death from respiratory mortality due to long-term exposures to ambient ozone, which is a health risk considered in the 2015 ozone NAAQS decision. This example demonstrates that use of IUA to quantitatively incorporate key model uncertainties into risk estimates produces a substantially altered understanding of the potential public health gain of a NAAQS policy decision, and that IUA can also produce more helpful insights to guide that decision, such as evidence of decreasing incremental health gains from progressive tightening of a NAAQS.

3.
In the evaluation of chemical compounds for carcinogenic risk, regulatory agencies such as the U.S. Environmental Protection Agency and National Toxicology Program (NTP) have traditionally fit a dose-response model to data from rodent bioassays, and then used the fitted model to estimate a Virtually Safe Dose or the dose corresponding to a very small increase (usually 10(-6)) in risk over background. Much recent interest has been directed at incorporating additional scientific information regarding the properties of the specific chemical under investigation into the risk assessment process, including biological mechanisms of cancer induction, metabolic pathways, and chemical structure and activity. Despite the fact that regulatory agencies are currently poised to allow use of nonlinear dose-response models based on the concept of an underlying threshold for nongenotoxic chemicals, there have been few attempts to investigate the overall relationship between the shape of dose-response curves and mutagenicity. Using data from a historical database of NTP cancer bioassays, the authors conducted a repeated-measures analysis of the estimated shape from fitting extended Weibull dose-response curves. It was concluded that genotoxic chemicals have dose-response curves that are closer to linear than those for nongenotoxic chemicals, though on average, both types of compounds have dose-response curves that are convex and the effect of genotoxicity is small.
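The shape comparison described in this abstract can be illustrated with a minimal sketch. All parameter values below are invented for illustration (they are not from the NTP database): an extended Weibull dose-response function whose shape parameter k controls how close the curve is to low-dose linearity.

```python
import math

def weibull_dose_response(dose, gamma, beta, k):
    """Extended Weibull model: background risk gamma plus an
    exposure-related excess risk 1 - exp(-beta * dose**k).
    k == 1 gives a low-dose-linear curve; k > 1 gives a convex
    (sublinear at low dose) curve."""
    return gamma + (1.0 - gamma) * (1.0 - math.exp(-beta * dose ** k))

# Compare a near-linear (genotoxic-like) curve with a convex
# (nongenotoxic-like) curve over the same dose range.
doses = [0.0, 0.25, 0.5, 0.75, 1.0]
linear_like = [weibull_dose_response(d, 0.05, 1.0, 1.0) for d in doses]
convex = [weibull_dose_response(d, 0.05, 1.0, 2.5) for d in doses]
```

At low doses the convex curve sits well below the near-linear one, which is why the shape parameter matters so much for low-dose extrapolation.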

4.
The benchmark dose (BMD) is an exposure level that would induce a small risk increase (BMR level) above the background. The BMD approach to deriving a reference dose for risk assessment of noncancer effects is advantageous in that the estimate of BMD is not restricted to experimental doses and utilizes most available dose-response information. To quantify statistical uncertainty of a BMD estimate, we often calculate and report its lower confidence limit (i.e., BMDL), and may even consider it as a more conservative alternative to BMD itself. Computation of BMDL may involve normal confidence limits to BMD in conjunction with the delta method. Therefore, factors, such as small sample size and nonlinearity in model parameters, can affect the performance of the delta-method BMDL, and alternative methods are useful. In this article, we propose a bootstrap method to estimate BMDL utilizing a scheme that consists of a resampling of residuals after model fitting and a one-step formula for parameter estimation. We illustrate the method with clustered binary data from developmental toxicity experiments. Our analysis shows that with moderately elevated dose-response data, the distribution of the BMD estimator tends to be left-skewed and bootstrap BMDLs are smaller than the delta-method BMDLs on average, hence quantifying risk more conservatively. Statistically, the bootstrap BMDL quantifies the uncertainty of the true BMD more honestly than the delta-method BMDL, as its coverage probability is closer to the nominal level than that of the delta-method BMDL. We find that BMD and BMDL estimates are generally insensitive to model choices provided that the models fit the data comparably well near the region of BMD. Our analysis also suggests that, in the presence of a significant and moderately strong dose-response relationship, the developmental toxicity experiments under the standard protocol support dose-response assessment at 5% BMR for BMD and 95% confidence level for BMDL.
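A residual-resampling bootstrap BMDL can be sketched in a few lines. This is a deliberately simplified stand-in for the paper's method: it uses an ordinary linear excess-risk model and plain OLS refitting instead of clustered binary data and the one-step estimator, and all data values are invented for illustration.

```python
import random

def fit_line(x, y):
    """Ordinary least squares fit of y = a + b*x; returns (a, b)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / \
        sum((xi - mx) ** 2 for xi in x)
    return my - b * mx, b

random.seed(1)
doses = [0, 1, 2, 4, 8]
risks = [0.01, 0.06, 0.10, 0.22, 0.41]   # illustrative excess-risk data

a_hat, b_hat = fit_line(doses, risks)
fitted = [a_hat + b_hat * d for d in doses]
resid = [y - f for y, f in zip(risks, fitted)]

BMR = 0.10                     # benchmark response
bmd_hat = BMR / b_hat          # BMD for a linear excess-risk model

# Bootstrap: resample residuals, rebuild responses, refit, recompute BMD.
boot = []
for _ in range(2000):
    ystar = [f + random.choice(resid) for f in fitted]
    _, bstar = fit_line(doses, ystar)
    boot.append(BMR / bstar)
bmdl = sorted(boot)[int(0.05 * len(boot))]   # lower 5th percentile -> 95% BMDL
```

The lower percentile of the bootstrap BMD distribution plays the role of the BMDL, and with skewed bootstrap distributions it can differ noticeably from a delta-method limit.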

5.
D. Krewski, Y. Zhu. Risk Analysis, 1994, 14(4):613-627
Reproductive and developmental anomalies induced by toxic chemicals may be identified using laboratory experiments with small mammalian species such as rats, mice, and rabbits. In this paper, dose-response models for correlated multinomial data arising in studies of developmental toxicity are discussed. These models provide a joint characterization of dose-response relationships for both embryolethality and teratogenicity. Generalized estimating equations are used for model fitting, incorporating overdispersion relative to the multinomial variation due to correlation among littermates. The fitted dose-response models are used to estimate benchmark doses in a series of experiments conducted by the U.S. National Toxicology Program. Joint analysis of prenatal death and fetal malformation using an extended Dirichlet-trinomial covariance function to characterize overdispersion appears to have statistical and computational advantages over separate analysis of these two end points. Benchmark doses based on overall toxicity are below the minimum of those for prenatal death and fetal malformation and may, thus, be preferred for risk assessment purposes.

6.
The qualitative and quantitative evaluation of risk in developmental toxicology has been discussed in several recent publications.(1–3) A number of issues still are to be resolved in this area. The qualitative evaluation and interpretation of end points in developmental toxicology depends on an understanding of the biological events leading to the end points observed, the relationships among end points, and their relationship to dose and to maternal toxicity. The interpretation of these end points is also affected by the statistical power of the experiments used for detecting the various end points observed. The quantitative risk assessment attempts to estimate human risk for developmental toxicity as a function of dose. The current approach is to apply safety (uncertainty) factors to the no observed effect level (NOEL). An alternative presented and discussed here is to model the experimental data and apply a safety factor to an estimated risk level to achieve an "acceptable" level of risk. In cases where the dose-response curve curves upward, this approach provides a conservative estimate of risk. This procedure does not preclude the existence of a threshold dose. More research is needed to develop appropriate dose-response models that can provide better estimates for low-dose extrapolation of developmental effects.

7.
Reassessing Benzene Cancer Risks Using Internal Doses
Human cancer risks from benzene exposure have previously been estimated by regulatory agencies based primarily on epidemiological data, with supporting evidence provided by animal bioassay data. This paper reexamines the animal-based risk assessments for benzene using physiologically-based pharmacokinetic (PBPK) models of benzene metabolism in animals and humans. It demonstrates that internal doses (interpreted as total benzene metabolites formed) from oral gavage experiments in mice are well predicted by a PBPK model developed by Travis et al. Both the data and the model outputs can also be accurately described by the simple nonlinear regression model total metabolites = 76.4x/(80.75 + x), where x = administered dose in mg/kg/day. Thus, PBPK modeling validates the use of such nonlinear regression models, previously used by Bailer and Hoel. An important finding is that refitting the linearized multistage (LMS) model family to internal doses and observed responses changes the maximum-likelihood estimate (MLE) dose-response curve for mice from linear-quadratic to cubic, leading to low-dose risk estimates smaller than in previous risk assessments. This is consistent with the conclusion for mice from the Bailer and Hoel analysis. An innovation in this paper is estimation of internal doses for humans based on a PBPK model (and the regression model approximating it) rather than on interspecies dose conversions. Estimates of human risks at low doses are reduced by the use of internal dose estimates when the estimates are obtained from a PBPK model, in contrast to Bailer and Hoel's findings based on interspecies dose conversion. Sensitivity analyses and comparisons with epidemiological data and risk models suggest that our finding of a nonlinear MLE dose-response curve at low doses is robust to changes in assumptions and more consistent with epidemiological data than earlier risk models.
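The abstract states the saturable regression model explicitly, so it can be evaluated directly. The function name below is ours; the coefficients are the ones reported in the abstract.

```python
def total_metabolites(x):
    """Nonlinear regression reported in the abstract: total benzene
    metabolites as a function of administered dose x (mg/kg/day)."""
    return 76.4 * x / (80.75 + x)

# At low doses the curve is nearly linear, with slope ~76.4/80.75 ~ 0.95...
low_dose_slope = total_metabolites(0.1) / 0.1
# ...while at high doses metabolite formation saturates toward 76.4.
near_saturation = total_metabolites(10000.0)
```

This saturation is what makes internal dose a nonlinear function of administered dose, and hence why refitting dose-response models to internal doses changes their low-dose shape.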

8.
Historically, U.S. regulators have derived cancer slope factors by using applied dose and tumor response data from a single key bioassay or by averaging the cancer slope factors of several key bioassays. Recent changes in U.S. Environmental Protection Agency (EPA) guidelines for cancer risk assessment have acknowledged the value of better use of mechanistic data and better dose-response characterization. However, agency guidelines may benefit from additional considerations presented in this paper. An exploratory study was conducted by using rat brain tumor data for acrylonitrile (AN) to investigate the use of physiologically based pharmacokinetic (PBPK) modeling along with pooling of dose-response data across routes of exposure as a means for improving carcinogen risk assessment methods. In this study, two contrasting assessments were conducted for AN-induced brain tumors in the rat: (1) following the EPA's approach, the dose-response relationship was characterized using administered dose/concentration for each of the key studies assessed individually; and (2) in an analysis of the pooled data, the dose-response relationship was characterized using PBPK-derived internal dose measures for a combined database of ten bioassays. The cancer potencies predicted for AN by the contrasting assessments are remarkably different (i.e., risk-specific doses differ by as much as two to four orders of magnitude), with the pooled data assessments yielding lower values. This result suggests that current carcinogen risk assessment practices overestimate AN cancer potency. This methodology should be equally applicable to other data-rich chemicals in identifying (1) a useful dose measure, (2) an appropriate dose-response model, (3) an acceptable point of departure, and (4) an appropriate method of extrapolation from the range of observation to the range of prediction when a chemical's mode of action remains uncertain.

9.
Because experiments with Bacillus anthracis are costly and dangerous, the scientific, public health, and engineering communities are served by thorough collation and analysis of experiments reported in the open literature. This study identifies available dose-response data from the open literature for inhalation exposure to B. anthracis and, via dose-response modeling, characterizes the response of nonhuman animal models to challenges. Two studies involving four data sets amenable to dose-response modeling were found in the literature: two data sets of response of guinea pigs to intranasal dosing with the Vollum and ATCC-6605 strains, one set of responses of rhesus monkeys to aerosol exposure to the Vollum strain, and one data set of guinea pig response to aerosol exposure to the Vollum strain. None of the data sets exhibited overdispersion and all but one were best fit by an exponential dose-response model. The beta-Poisson dose-response model provided the best fit to the remaining data set. As indicated in prior studies, the response to aerosol challenges is a strong function of aerosol diameter. For guinea pigs, the LD50 increases with aerosol size for aerosols at and above 4.5 μm. For both rhesus monkeys and guinea pigs there is about a 15-fold increase in LD50 when aerosol size is increased from 1 μm to 12 μm. Future experimental research and dose-response modeling should be performed to quantify differences in responses of subpopulations to B. anthracis and to generate data allowing development of interspecies correction factors.
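The two model families named in this abstract have simple closed forms, sketched below. The parameter values are illustrative placeholders, not the study's fitted values; the LD50 identity for the exponential model follows directly from its definition.

```python
import math

def p_exponential(dose, k):
    """Exponential dose-response: each inhaled organism independently
    initiates infection with per-organism probability k."""
    return 1.0 - math.exp(-k * dose)

def p_beta_poisson(dose, alpha, n50):
    """Approximate beta-Poisson model, parameterized by its median
    infectious dose N50 and shape alpha."""
    return 1.0 - (1.0 + dose * (2.0 ** (1.0 / alpha) - 1.0) / n50) ** (-alpha)

# For the exponential model, LD50 = ln(2) / k.
k = 1e-5                       # illustrative per-spore probability
ld50 = math.log(2.0) / k
```

Parameterizing the beta-Poisson model by N50 makes the median response an exact property of the curve, which is convenient when comparing LD50s across species and aerosol sizes.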

10.
Since the National Food Safety Initiative of 1997, risk assessment has been an important issue in food safety areas. Microbial risk assessment is a systematic process for describing and quantifying the potential for adverse health effects associated with exposure to microorganisms. Various dose-response models for estimating microbial risks have been investigated. We have considered four two-parameter models and four three-parameter models in order to evaluate variability among the models for microbial risk assessment using infectivity and illness data from studies with human volunteers exposed to a variety of microbial pathogens. Model variability is measured in terms of estimated ED01s and ED10s, with the view that these effective dose levels correspond to the lower and upper limits of the 1% to 10% risk range generally recommended for establishing benchmark doses in risk assessment. Parameters of the statistical models are estimated using the maximum likelihood method. In this article a weighted average of effective dose estimates from eight two- and three-parameter dose-response models, with weights determined by the Kullback information criterion, is proposed to address model uncertainties in microbial risk assessment. The proposed procedures for incorporating model uncertainties and making inferences are illustrated with human infection/illness dose-response data sets.
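Information-criterion weighting of effective-dose estimates can be sketched as follows. The abstract uses the Kullback information criterion; the sketch below uses generic exponential ("Akaike-type") weights as a stand-in, and all ED01 and criterion values are invented for illustration.

```python
import math

def ic_weights(ic_values):
    """Model weights proportional to exp(-delta_i / 2), where delta_i is
    each model's information-criterion value minus the minimum."""
    ic_min = min(ic_values)
    raw = [math.exp(-(ic - ic_min) / 2.0) for ic in ic_values]
    total = sum(raw)
    return [r / total for r in raw]

# Hypothetical ED01 estimates from four dose-response models,
# with hypothetical information-criterion values.
ed01 = [120.0, 95.0, 210.0, 140.0]
ic = [412.3, 410.1, 418.7, 411.0]
w = ic_weights(ic)
ed01_averaged = sum(wi * ei for wi, ei in zip(w, ed01))
```

The averaged estimate always lies inside the range of the individual model estimates, so model uncertainty is absorbed into the weights rather than resolved by picking a single "best" model.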

11.
Risk Analysis, 2018, 38(10):2073-2086
The guidelines for setting environmental quality standards are increasingly based on probabilistic risk assessment due to a growing general awareness of the need for probabilistic procedures. One of the commonly used tools in probabilistic risk assessment is the species sensitivity distribution (SSD), which represents the proportion of species affected belonging to a biological assemblage as a function of exposure to a specific toxicant. Our focus is on the inverse use of the SSD curve with the aim of estimating the concentration, HCp, of a toxic compound that is hazardous to p% of the biological community under study. Toward this end, we propose the use of robust statistical methods in order to take into account the presence of outliers or apparent skew in the data, which may occur without any ecological basis. A robust approach exploits the full neighborhood of a parametric model, enabling the analyst to account for the typical real-world deviations from ideal models. We examine two classic HCp estimation approaches and consider robust versions of these estimators. In addition, we also use data transformations in conjunction with robust estimation methods in case of heteroscedasticity. Different scenarios using real data sets as well as simulated data are presented in order to illustrate and compare the proposed approaches. These scenarios illustrate that the use of robust estimation methods enhances HCp estimation.
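A minimal robust HCp sketch, under assumptions of our own: a lognormal SSD, with the median and the scaled MAD used as outlier-resistant replacements for the log-scale mean and standard deviation. The toxicity values are invented, with one deliberate outlier.

```python
import math
from statistics import NormalDist, median

def robust_hcp(concentrations, p):
    """HCp from a lognormal SSD, using robust location/scale estimates
    (median and MAD, scaled for consistency with the normal SD) on the
    log scale so a single outlier cannot dominate the fit."""
    logs = [math.log(c) for c in concentrations]
    mu = median(logs)
    mad = median(abs(x - mu) for x in logs)
    sigma = 1.4826 * mad            # consistency factor under normality
    z = NormalDist().inv_cdf(p / 100.0)
    return math.exp(mu + z * sigma)

# Illustrative species toxicity values (e.g., EC50s in mg/L); the last
# value is an outlier that would inflate a mean/SD-based estimate.
tox = [0.8, 1.1, 1.4, 1.9, 2.3, 3.0, 4.2, 55.0]
hc5 = robust_hcp(tox, 5)
```

Because median and MAD have high breakdown points, the outlying species barely shifts the estimated HC5, which is the practical point of the robust approach.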

12.
Quantitative Cancer Risk Estimation for Formaldehyde
Irreversible effects that formaldehyde exposure could have on human health, such as cancer induction, are of primary concern. Dose-response data from human exposure situations would provide the most solid foundation for risk assessment, avoiding problematic extrapolations from the health effects seen in nonhuman species. However, epidemiologic studies of human formaldehyde exposure have provided little definitive information regarding dose-response. Reliance must consequently be placed on laboratory animal evidence. An impressive array of data points to significantly nonlinear relationships between rodent tumor incidence and administered dose, and between target tissue dose and administered dose (the latter for both rodents and Rhesus monkeys) following exposure to formaldehyde by inhalation. Disproportionately less formaldehyde binds covalently to the DNA of nasal respiratory epithelium at low airborne concentrations than at high ones. Use of this internal measure of delivered dose in analyses of rodent bioassay nasal tumor response yields multistage model estimates of low-dose risk, both point and upper bound, that are lower than equivalent estimates based upon airborne formaldehyde concentration. In addition, risk estimates obtained for Rhesus monkeys appear at least 10-fold lower than corresponding estimates for identically exposed Fischer-344 rats.

13.
The application of the exponential model is extended by the inclusion of new nonhuman primate (NHP), rabbit, and guinea pig dose-lethality data for inhalation anthrax. Because deposition is a critical step in the initiation of inhalation anthrax, inhaled doses may not provide the most accurate cross-species comparison. For this reason, species-specific deposition factors were derived to translate inhaled dose to deposited dose. Four NHP, three rabbit, and two guinea pig data sets were utilized. Results from species-specific pooling analysis suggested all four NHP data sets could be pooled into a single NHP data set, which was also true for the rabbit and guinea pig data sets. The three species-specific pooled data sets could not be combined into a single generic mammalian data set. For inhaled dose, NHPs were the most sensitive (relative lowest LD50) species and rabbits the least. Improved inhaled LD50s proposed for use in risk assessment are 50,600, 102,600, and 70,800 inhaled spores for NHP, rabbit, and guinea pig, respectively. Lung deposition factors were estimated for each species using published deposition data from Bacillus spore exposures, particle deposition studies, and computer modeling. Deposition was estimated at 22%, 9%, and 30% of the inhaled dose for NHP, rabbit, and guinea pig, respectively. When the inhaled dose was adjusted to reflect deposited dose, the rabbit animal model appears the most sensitive with the guinea pig the least sensitive species.
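The inhaled-to-deposited conversion is simple arithmetic on the numbers given in this abstract, and a quick check reproduces the reported reversal of species ordering.

```python
# Inhaled LD50s (spores) and deposition fractions as given in the abstract.
inhaled_ld50 = {"NHP": 50600, "rabbit": 102600, "guinea pig": 70800}
deposition = {"NHP": 0.22, "rabbit": 0.09, "guinea pig": 0.30}

# Deposited dose = inhaled dose * species-specific deposition fraction.
deposited_ld50 = {s: inhaled_ld50[s] * deposition[s] for s in inhaled_ld50}
# NHP ~11,132; rabbit ~9,234; guinea pig ~21,240 deposited spores:
# on a deposited-dose basis the rabbit becomes the most sensitive
# species and the guinea pig the least, matching the abstract.
```

This is why the choice of dose metric (inhaled versus deposited) can change which animal model looks most conservative for human risk extrapolation.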

14.
Many models of exposure-related carcinogenesis, including traditional linearized multistage models and more recent two-stage clonal expansion (TSCE) models, belong to a family of models in which cells progress between successive stages (possibly undergoing proliferation at some stages) at rates that may depend (usually linearly) on biologically effective doses. Biologically effective doses, in turn, may depend nonlinearly on administered doses, due to PBPK nonlinearities. This article provides an exact mathematical analysis of the expected number of cells in the last ("malignant") stage of such a "multistage clonal expansion" (MSCE) model as a function of dose rate and age. The solution displays symmetries such that several distinct sets of parameter values provide identical fits to all epidemiological data, make identical predictions about the effects on risk of changes in exposure levels or timing, and yet make significantly different predictions about the effects on risk of changes in the composition of exposure that affect the pharmacodynamic dose-response relation. Several different predictions for the effects of such an intervention (such as reducing carcinogenic constituents of an exposure) that acts on only one or a few stages of the carcinogenic process may be equally consistent with all preintervention epidemiological data. This is an example of nonunique identifiability of model parameters and predictions from data. The new results on nonunique model identifiability presented here show that the effects of an intervention on changing age-specific cancer risks in an MSCE model can be either large or small, but that which is the case cannot be predicted from preintervention epidemiological data and knowledge of biological effects of the intervention alone. Rather, biological data that identify which rate parameters hold for which specific stages are required to obtain unambiguous predictions. From epidemiological data alone, only a set of equally likely alternative predictions can be made for the effects on risk of such interventions.

15.
Kevin P. Brand, Lorenz Rhomberg, John S. Evans. Risk Analysis, 1999, 19(2):295-308
The prominent role of animal bioassay evidence in environmental regulatory decisions compels a careful characterization of extrapolation uncertainties. In noncancer risk assessment, uncertainty factors are incorporated to account for each of several extrapolations required to convert a bioassay outcome into a putative subthreshold dose for humans. Measures of relative toxicity taken between different dosing regimens, different endpoints, or different species serve as a reference for establishing the uncertainty factors. Ratios of no observed adverse effect levels (NOAELs) have been used for this purpose; statistical summaries of such ratios across sets of chemicals are widely used to guide the setting of uncertainty factors. Given the poor statistical properties of NOAELs, the informativeness of these summary statistics is open to question. To evaluate this, we develop an approach to calibrate the ability of NOAEL ratios to reveal true properties of a specified distribution for relative toxicity. A priority of this analysis is to account for dependencies of NOAEL ratios on experimental design and other exogenous factors. Our analysis of NOAEL ratio summary statistics finds (1) that such dependencies are complex and produce pronounced systematic errors and (2) that sampling error associated with typical sample sizes (50 chemicals) is non-negligible. These uncertainties strongly suggest that NOAEL ratio summary statistics cannot be taken at face value; conclusions based on such ratios reported in well over a dozen published papers should be reconsidered.

16.
The choice of a dose-response model is decisive for the outcome of quantitative risk assessment. Single-hit models have played a prominent role in dose-response assessment for pathogenic microorganisms since their introduction. Hit theory models are based on a few simple concepts that are attractive for their clarity and plausibility. These models, in particular the Beta Poisson model, are used for extrapolation of experimental dose-response data to low doses, as are often present in drinking water or food products. Unfortunately, the Beta Poisson model, as it is used throughout the microbial risk literature, is an approximation whose validity is not widely known. The exact functional relation is numerically complex, especially for use in optimization or uncertainty analysis. Here it is shown that although the discrepancy between the Beta Poisson formula and the exact function is not very large for many data sets, the differences are greatest at low doses, the region of interest for many risk applications. Errors may become very large, however, in the results of uncertainty analysis, or when the data contain little low-dose information. One striking property of the exact single-hit model is that it has a maximum risk curve, limiting the upper confidence level of the dose-response relation. This is due to the fact that the risk cannot exceed the probability of exposure, a property that is not retained in the Beta Poisson approximation. This maximum possible response curve is important for uncertainty analysis, and for risk assessment of pathogens with unknown properties.
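The gap between the approximation and the exact function can be demonstrated numerically. The sketch below compares the common Beta Poisson formula 1 - (1 + d/beta)^(-alpha) with the exact single-hit expression 1 - 1F1(alpha; alpha + beta; -d), computing the Kummer function by a plain power series (adequate for the moderate doses used here); the parameter values are illustrative, not from any fitted data set.

```python
def kummer_1f1(a, b, z, terms=200):
    """Confluent hypergeometric function 1F1(a; b; z) via its power
    series; converges quickly for the moderate |z| used here."""
    total, term = 1.0, 1.0
    for n in range(terms):
        term *= (a + n) * z / ((b + n) * (n + 1))
        total += term
    return total

def p_exact(dose, alpha, beta):
    """Exact single-hit (beta-Poisson) response."""
    return 1.0 - kummer_1f1(alpha, alpha + beta, -dose)

def p_approx(dose, alpha, beta):
    """Common Beta Poisson approximation, valid when beta is large
    relative to both 1 and alpha."""
    return 1.0 - (1.0 + dose / beta) ** (-alpha)

alpha, beta, dose = 0.25, 16.0, 0.1
exact = p_exact(dose, alpha, beta)
approx = p_approx(dose, alpha, beta)
```

At low dose the exact response behaves like alpha*d/(alpha + beta) while the approximation behaves like alpha*d/beta, so the approximation systematically overstates low-dose risk, exactly the region the abstract flags.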

17.
Legionnaires' disease (LD), first reported in 1976, is an atypical pneumonia caused by bacteria of the genus Legionella, and most frequently by L. pneumophila (Lp). Subsequent research on exposure to the organism employed various animal models, and with quantitative microbial risk assessment (QMRA) techniques, the animal model data may provide insights on human dose-response for LD. This article focuses on the rationale for selection of the guinea pig model, comparison of the dose-response model results, comparison of projected low-dose responses for guinea pigs, and risk estimates for humans. Based on both in vivo and in vitro comparisons, the guinea pig (Cavia porcellus) dose-response data were selected for modeling human risk. We completed dose-response modeling for the beta-Poisson (approximate and exact), exponential, probit, logistic, and Weibull models for Lp inhalation, mortality, and infection (end point elevated body temperature) in guinea pigs. For mechanistic reasons, including low-dose exposure probability, further work on human risk estimates for LD employed the exponential and beta-Poisson models. With an exposure of 10 colony-forming units (CFU) (retained dose), the QMRA model predicted a mild infection risk of 0.4 (as evaluated by seroprevalence) and a clinical severity LD case (e.g., hospitalization and supportive care) risk of 0.0009. The calculated rates based on estimated human exposures for outbreaks used for the QMRA model validation are within an order of magnitude of the reported LD rates. These validation results suggest the LD QMRA animal model selection, dose-response modeling, and extension to human risk projections were appropriate.
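For an exponential model, the single risk point quoted in this abstract (infection risk 0.4 at a retained dose of 10 CFU) is enough to back-calculate the dose-response parameter. This is our own illustration of that arithmetic, not the study's fitted value.

```python
import math

# Back-calculate the exponential parameter r from the abstract's
# reported point: risk 0.4 at a retained dose of 10 CFU.
dose_ref, risk_ref = 10.0, 0.4
r = -math.log(1.0 - risk_ref) / dose_ref    # ~0.051 per retained CFU

def infection_risk(dose):
    """Exponential dose-response for mild Lp infection (illustrative)."""
    return 1.0 - math.exp(-r * dose)
```

With r fixed this way, the curve reproduces the reference point exactly and lets one read off risks at other retained doses, e.g. roughly 5% at a single CFU.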

18.
Quantitative risk assessment involves the determination of a safe level of exposure. Recent techniques use the estimated dose-response curve to estimate such a safe dose level. Although such methods have attractive features, a low-dose extrapolation is highly dependent on the model choice. Fractional polynomials, basically being a set of (generalized) linear models, are a nice extension of classical polynomials, providing the necessary flexibility to estimate the dose-response curve. Typically, one selects the best-fitting model in this set of polynomials and proceeds as if no model selection were carried out. We show that model averaging using a set of fractional polynomials reduces bias and has better precision in estimating a safe level of exposure (say, the benchmark dose), as compared to an estimator from the selected best model. To estimate a lower limit of this benchmark dose, an approximation of the variance of the model-averaged estimator, as proposed by Burnham and Anderson, can be used. However, this is a conservative method, often resulting in unrealistically low safe doses. Therefore, a bootstrap-based method to more accurately estimate the variance of the model averaged parameter is proposed.

19.
The methods currently used to evaluate the risk of developmental defects in humans from exposure to potential toxic agents do not reflect biological processes in extrapolating estimated risks to low doses and from test species to humans. We develop a mathematical model to describe aspects of the dynamic process of organogenesis, based on branching process models of cell kinetics. The biological information that can be incorporated into the model includes timing and rates of dynamic cell processes such as differentiation, migration, growth, and replication. The dose-response models produced can explain patterns of malformation rates as a function of both dose and time of exposure, resulting in improvements in risk assessment and understanding of the underlying mechanistic processes. To illustrate the use of the model, we apply it to the prediction of the effects of methylmercury on brain development in rats.

20.
Count data are pervasive in many areas of risk analysis; deaths, adverse health outcomes, infrastructure system failures, and traffic accidents are all recorded as count events, for example. Risk analysts often wish to estimate the probability distribution for the number of discrete events as part of doing a risk assessment. Traditional count data regression models of the type often used in risk assessment for this problem suffer from limitations due to the assumed variance structure. A more flexible model based on the Conway-Maxwell Poisson (COM-Poisson) distribution was recently proposed, a model that has the potential to overcome the limitations of the traditional model. However, the statistical performance of this new model has not yet been fully characterized. This article assesses the performance of a maximum likelihood estimation method for fitting the COM-Poisson generalized linear model (GLM). The objectives of this article are to (1) characterize the parameter estimation accuracy of the MLE implementation of the COM-Poisson GLM, and (2) estimate the prediction accuracy of the COM-Poisson GLM using simulated data sets. The results of the study indicate that the COM-Poisson GLM is flexible enough to model under-, equi-, and overdispersed data sets with different sample mean values. The results also show that the COM-Poisson GLM yields accurate parameter estimates. The COM-Poisson GLM provides a promising and flexible approach for performing count data regression.
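The dispersion flexibility claimed for the COM-Poisson distribution is easy to verify numerically. The sketch below implements the pmf with a truncated normalizing sum (not the GLM itself); parameter values are illustrative.

```python
import math

def com_poisson_pmf(y_max, lam, nu):
    """Truncated COM-Poisson pmf: P(Y = y) proportional to
    lam**y / (y!)**nu for y = 0..y_max. nu = 1 recovers the Poisson;
    nu > 1 gives underdispersion, nu < 1 overdispersion."""
    weights = [lam ** y / math.factorial(y) ** nu for y in range(y_max + 1)]
    z = sum(weights)                 # truncated normalizing constant
    return [w / z for w in weights]

def mean_var(pmf):
    m = sum(y * p for y, p in enumerate(pmf))
    v = sum((y - m) ** 2 * p for y, p in enumerate(pmf))
    return m, v

m_eq, v_eq = mean_var(com_poisson_pmf(60, 4.0, 1.0))   # Poisson case
m_un, v_un = mean_var(com_poisson_pmf(60, 4.0, 1.5))   # underdispersed
m_ov, v_ov = mean_var(com_poisson_pmf(60, 4.0, 0.7))   # overdispersed
```

A single extra parameter thus moves the variance below or above the mean, which is exactly the limitation of standard Poisson regression that the COM-Poisson GLM is meant to address.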


Copyright © Beijing Qinyun Technology Development Co., Ltd. (京ICP备09084417号)