Similar Literature
 20 similar documents retrieved
1.
Biologically Motivated Cancer Risk Models   (Total citations: 3; self-citations: 0; by others: 3)
A two-stage dose response model is proposed for use in cancer risk assessment. The model assumes that transformation probabilities and cellular dynamics are exposure- and time-dependent.

2.
For the vast majority of chemicals that have cancer potency estimates on IRIS, the underlying database is deficient with respect to early-life exposures. This data gap has prevented derivation of cancer potency factors that are relevant to this time period, and so assessments may not fully address children's risks. This article provides a review of juvenile animal bioassay data in comparison to adult animal data for a broad array of carcinogens. This comparison indicates that short-term exposures in early life are likely to yield a greater tumor response than short-term exposures in adults, but similar tumor response when compared to long-term exposures in adults. This evidence is brought into a risk assessment context by proposing an approach that: (1) does not prorate children's exposures over the entire life span or mix them with exposures that occur at other ages; (2) applies the cancer slope factor from adult animal or human epidemiology studies to the children's exposure dose to calculate the cancer risk associated with the early-life period; and (3) adds the cancer risk for young children to that for older children/adults to yield a total lifetime cancer risk. The proposed approach allows for the unique exposure and pharmacokinetic factors associated with young children to be fully weighted in the cancer risk assessment. It is very similar to the approach currently used by U.S. EPA for vinyl chloride. The current analysis finds that the database of early life and adult cancer bioassays supports extension of this approach from vinyl chloride to other carcinogens of diverse mode of action. This approach should be enhanced by early-life data specific to the particular carcinogen under analysis whenever possible.
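[Editor's note] A minimal numerical sketch of the bookkeeping described in points (1)-(3), not the article's own calculation: the adult-derived cancer slope factor is applied to the early-life average dose without prorating that dose over the whole lifetime, and the early-life risk is then added to the risk for later life. All numbers, and the duration-fraction weighting convention, are hypothetical.

```python
# Hypothetical early-life + adult cancer risk bookkeeping (all values made up).
CSF = 0.05            # adult-derived cancer slope factor, (mg/kg-day)^-1
dose_child = 2.0e-4   # mg/kg-day averaged over ages 0-6 only (not prorated over 70 yr)
dose_adult = 5.0e-5   # mg/kg-day averaged over ages 7-70
years_child, years_adult, lifetime = 6, 64, 70

# Each life stage uses its own average dose, weighted here by the fraction of
# the lifetime it covers (one plausible duration-adjustment convention).
risk_child = CSF * dose_child * (years_child / lifetime)
risk_adult = CSF * dose_adult * (years_adult / lifetime)

print(f"early-life risk: {risk_child:.2e}")
print(f"adult risk:      {risk_adult:.2e}")
print(f"total lifetime:  {risk_child + risk_adult:.2e}")
```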

3.
Although analysis of in vivo pharmacokinetic data necessitates use of time-dependent physiologically-based pharmacokinetic (PBPK) models, risk assessment applications are often driven primarily by steady-state and/or integrated (e.g., AUC) dosimetry. To that end, we present an analysis of steady-state solutions to a PBPK model for a generic volatile chemical metabolized in the liver. We derive an equivalent model that is much simpler and contains many fewer parameters than the full PBPK model. The state of the system can be specified by two state variables: the rate of metabolism and the rate of clearance by exhalation. For a given oral dose rate or inhalation exposure concentration, the system state only depends on the blood-air partition coefficient, metabolic constants, and the rates of blood flow to the liver and of alveolar ventilation. At exposures where metabolism is close to linear, only the effective first-order metabolic rate is needed. Furthermore, in this case, the relationship between cumulative exposure and average internal dose (e.g., AUCs) remains the same for time-varying exposures. We apply our analysis to oral-inhalation route extrapolation, showing that for any dose metric, route equivalence only depends on the parameters that determine the system state. Even if the appropriate dose metric is unknown, bounds can be placed on the route-to-route equivalence with very limited data. We illustrate this analysis by showing that it reproduces exactly the PBPK-model-based route-to-route extrapolation in EPA's 2000 risk assessment for vinyl chloride. Overall, we find that in many cases, steady-state solutions exactly reproduce or closely approximate the solutions using the full PBPK model, while being substantially more transparent. Subsequent work will examine the utility of steady-state solutions for analyzing cross-species extrapolation and intraspecies variability.
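[Editor's note] The sketch below is an editorial illustration of the kind of steady-state balance involved, not the authors' derivation. It solves the steady-state mass balance of a lumped, flow-limited model for an inhaled volatile chemical with saturable hepatic metabolism and reports the two state variables named in the abstract. All parameter values are hypothetical, and metabolism is assumed to be driven by the liver venous concentration (a simplification).

```python
# Steady-state solution of a minimal flow-limited PBPK sketch (hypothetical parameters).
from scipy.optimize import brentq

QP, QC, QL = 300.0, 350.0, 90.0   # alveolar ventilation, cardiac output, liver blood flow (L/h)
Pb = 20.0                          # blood:air partition coefficient
Vmax, Km = 40.0, 0.5               # saturable metabolism, mg/h and mg/L
C_inh = 0.1                        # inhaled concentration, mg/L

def residual(C_vl):
    """Lung mass-balance residual as a function of liver venous concentration."""
    # Liver: flow-limited delivery equals saturable metabolism.
    C_art = C_vl + Vmax * C_vl / (Km + C_vl) / QL
    # Non-metabolizing tissues return blood at arterial concentration at steady state.
    C_ven = (QL * C_vl + (QC - QL) * C_art) / QC
    # Lung: venous blood + inhaled air in = arterial blood + exhaled air out.
    return QC * C_ven + QP * C_inh - (QC + QP / Pb) * C_art

C_vl = brentq(residual, 1e-12, 1e3)
C_art = C_vl + Vmax * C_vl / (Km + C_vl) / QL
rate_metab = Vmax * C_vl / (Km + C_vl)   # mg/h, first state variable
rate_exhaled = QP * C_art / Pb           # mg/h, second state variable
print(f"metabolism: {rate_metab:.2f} mg/h, exhalation clearance: {rate_exhaled:.2f} mg/h")
```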

4.
A Distributional Approach to Characterizing Low-Dose Cancer Risk   (Total citations: 2; self-citations: 0; by others: 2)
Since cancer risk at very low doses cannot be directly measured in humans or animals, mathematical extrapolation models and scientific judgment are required. This article demonstrates a probabilistic approach to carcinogen risk assessment that employs probability trees, subjective probabilities, and standard bootstrapping procedures. The probabilistic approach is applied to the carcinogenic risk of formaldehyde in environmental and occupational settings. Sensitivity analyses illustrate conditional estimates of risk for each path in the probability tree. Fundamental mechanistic uncertainties are characterized. A strength of the analysis is the explicit treatment of alternative beliefs about pharmacokinetics and pharmacodynamics. The resulting probability distributions on cancer risk are compared with the point estimates reported by federal agencies. Limitations of the approach and future research directions are discussed.
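[Editor's note] A toy version of the probability-tree idea, not the article's formaldehyde analysis: each path pairs a subjective probability (belief about mechanism/pharmacokinetics) with a conditional risk estimate, and the paths are combined into a weighted risk. All probabilities and risks are hypothetical.

```python
# Illustrative probability tree: (subjective path probability, conditional risk per unit exposure).
branches = [
    (0.50, 1e-7),   # e.g., nonlinear pharmacodynamics dominate
    (0.30, 5e-6),   # e.g., delivered-dose-driven, sublinear
    (0.20, 2e-5),   # e.g., low-dose-linear default
]
assert abs(sum(p for p, _ in branches) - 1.0) < 1e-9

mean_risk = sum(p * r for p, r in branches)
print("path-conditional risks:", [r for _, r in branches])
print(f"probability-weighted mean risk: {mean_risk:.2e}")
```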

5.
We review approaches for characterizing "peak" exposures in epidemiologic studies and methods for incorporating peak exposure metrics in dose-response assessments that contribute to risk assessment. The focus was on potential etiologic relations between environmental chemical exposures and cancer risks. We searched the epidemiologic literature on environmental chemicals classified as carcinogens in which cancer risks were described in relation to "peak" exposures. These articles were evaluated to identify some of the challenges associated with defining and describing cancer risks in relation to peak exposures. We found that definitions of peak exposure varied considerably across studies. Of nine chemical agents included in our review of peak exposure, six had epidemiologic data used by the U.S. Environmental Protection Agency (US EPA) in dose-response assessments to derive inhalation unit risk values. These were benzene, formaldehyde, styrene, trichloroethylene, acrylonitrile, and ethylene oxide. All derived unit risks relied on cumulative exposure for dose-response estimation and none, to our knowledge, considered peak exposure metrics. This is not surprising, given the historical linear no-threshold default model (generally based on cumulative exposure) used in regulatory risk assessments. With newly proposed US EPA rule language, fuller consideration of alternative exposure and dose-response metrics will be supported. "Peak" exposure has not been consistently defined and rarely has been evaluated in epidemiologic studies of cancer risks. We recommend developing uniform definitions of "peak" exposure to facilitate fuller evaluation of dose response for environmental chemicals and cancer risks, especially where mechanistic understanding indicates that the dose response is unlikely to be linear and that short-term high-intensity exposures increase risk.

6.
To assess the maximum possible impact of further government regulation of asbestos exposure, projections were made of the use of asbestos in nine product categories for the years 1985-2000. A life table risk assessment model was then developed to estimate the excess cases of cancer and lost person-years of life likely to occur among those occupationally and nonoccupationally exposed to the nine asbestos product categories manufactured in 1985-2000. These estimates were made under the assumption that government regulation remains at its 1985 level. Use of asbestos in the nine product categories was predicted to decline in all cases except for friction products. The risk assessment results show that, although the cancer risks from future exposure to asbestos are significantly less than those from past exposures, in the absence of more stringent regulations, a health risk remains.

7.
There has been considerable discussion regarding the conservativeness of low-dose cancer risk estimates based upon linear extrapolation from upper confidence limits. Various groups have expressed a need for best (point) estimates of cancer risk in order to improve risk/benefit decisions. Point estimates of carcinogenic potency obtained from maximum likelihood estimates of low-dose slope may be highly unstable, being sensitive both to the choice of the dose-response model and possibly to minimal perturbations of the data. For carcinogens that augment background carcinogenic processes and/or for mutagenic carcinogens, the tumor incidence versus target tissue dose is expected to be linear at low doses. Pharmacokinetic data may be needed to identify and adjust for exposure-dose nonlinearities. Based on the assumption that the dose response is linear over low doses, a stable point estimate for low-dose cancer risk is proposed. Since various models give similar estimates of risk down to levels of 1%, a stable estimate of the low-dose cancer slope is provided by ŝ = 0.01/ED01, where ED01 is the dose corresponding to an excess cancer risk of 1%. Thus, low-dose estimates of cancer risk are obtained as risk = ŝ × dose. The proposed procedure is similar to one that has been utilized in the past by the Center for Food Safety and Applied Nutrition, Food and Drug Administration. The upper confidence limit, s*, corresponding to this point estimate of low-dose slope is similar to the upper limit, q1*, obtained from the generalized multistage model. The advantage of the proposed procedure is that ŝ provides stable estimates of low-dose carcinogenic potency that are not unduly influenced by small perturbations of the tumor incidence rates, unlike the maximum likelihood estimate q̂1.
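[Editor's note] A tiny worked example of the stated point-estimate rule, using a hypothetical ED01 and dose:

```python
# Stable low-dose potency estimate: s_hat = 0.01 / ED01, then risk = s_hat * dose.
ED01 = 2.5            # mg/kg-day giving 1% excess cancer risk (hypothetical)
s_hat = 0.01 / ED01   # low-dose slope, (mg/kg-day)^-1

dose = 0.003          # hypothetical environmental dose, mg/kg-day
risk = s_hat * dose
print(f"s_hat = {s_hat:.4f} per mg/kg-day, low-dose risk = {risk:.1e}")
```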

8.
Kenneth T. Bogen, Risk Analysis, 2014, 34(10): 1780-1784
A 2009 report of the National Research Council (NRC) recommended that the U.S. Environmental Protection Agency (EPA) increase its estimates of increased cancer risk from exposure to environmental agents by ~7-fold, because of an approximately 25-fold typical ratio between the median and upper 95th-percentile persons' cancer sensitivity, assuming approximately lognormally distributed sensitivities. EPA inaction on this issue has raised concerns that cancer risks to environmentally exposed populations remain systematically underestimated. This concern is unwarranted, however, because EPA point estimates of cancer risk have always pertained to the average, not the median, person in each modeled exposure group. Nevertheless, EPA has yet to explain clearly how its risk characterization and risk management policies concerning individual risks from environmental chemical carcinogens do appropriately address broad variability in human cancer susceptibility that has been a focus of two major NRC reports to EPA concerning its risk assessment methods.
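[Editor's note] The ~7-fold figure can be checked under the stated lognormal assumption; the short sketch below reproduces the arithmetic (it is an editorial check, not the article's code).

```python
# If the 95th-percentile person is ~25x as sensitive as the median person and
# sensitivities are lognormal, what is the mean-to-median sensitivity ratio?
import math

ratio_p95_to_median = 25.0
z95 = 1.645                                   # standard normal 95th percentile
sigma = math.log(ratio_p95_to_median) / z95   # lognormal shape parameter
mean_to_median = math.exp(sigma ** 2 / 2)
print(f"sigma = {sigma:.2f}, mean/median sensitivity = {mean_to_median:.1f}  (~7-fold)")
```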

9.
Physiologically-based pharmacokinetic (PBPK) models are often submitted to or selected by agencies, such as the U.S. Environmental Protection Agency (U.S. EPA) and the Agency for Toxic Substances and Disease Registry, for consideration for application in human health risk assessment (HHRA). Recently, U.S. EPA evaluated the human PBPK models for perchlorate and radioiodide for their ability to estimate the relative sensitivity of various population groups and lifestages to perchlorate inhibition of thyroidal radioiodide uptake. The most well-defined mode of action of the environmental contaminant perchlorate is competitive inhibition of thyroidal iodide uptake by the sodium-iodide symporter (NIS). In this analysis, a six-step framework for PBPK model evaluation was followed, and with a few modifications, the models were determined to be suitable for use in HHRA to evaluate relative sensitivity among human lifestages. Relative sensitivity to perchlorate was determined by comparing the PBPK model-predicted percent inhibition of thyroidal radioactive iodide uptake (RAIU) by perchlorate for different lifestages. A limited sensitivity analysis indicated that model parameters describing urinary excretion of perchlorate and iodide were particularly important in prediction of RAIU inhibition; therefore, a range of biologically plausible values available in the peer-reviewed literature was evaluated. Using the updated PBPK models, the near-term fetus (gestation week 40) was predicted to have the greatest sensitivity to RAIU inhibition compared with the average adult and other lifestages; however, when exposure factors were taken into account, newborns were identified as a population needing further evaluation and consideration in a risk assessment for perchlorate.

10.
Alleviating food-borne disease caused by microbial pathogens remains a major concern for ensuring the well-being of the general public. The relation between the ingested dose of organisms and the associated infection risk can be studied using dose-response models. Traditionally, a model selected according to a goodness-of-fit criterion has been used for making inferences. In this article, we propose a modified set of fractional polynomials as competitive dose-response models in risk assessment. The article not only shows instances where it is not obvious how to single out one best model, but also illustrates that model averaging can best circumvent this dilemma. The set of candidate models is chosen based on biological plausibility and rationale, and the risk at a dose common to all these models is estimated using the selected models and by averaging over all models using Akaike's weights. In addition to reflecting parameter estimation inaccuracy, as in the case of a single selected model, model averaging accounts for the uncertainty arising from the other competitive models. This leads to a better and more honest estimation of standard errors and construction of confidence intervals for risk estimates. The approach is illustrated for risk estimation at low dose levels based on Salmonella typhi and Campylobacter jejuni data sets in humans. Simulation studies indicate that model averaging has reduced bias, better precision, and coverage probabilities closer to the 95% nominal level compared with best-fitting models selected by the Akaike information criterion.
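[Editor's note] A minimal sketch of model averaging with Akaike's weights, using made-up AIC values and per-model risk estimates rather than the article's fitted fractional polynomials:

```python
# Akaike-weight model averaging of a risk estimate at a fixed dose.
import numpy as np

aic = np.array([412.3, 413.1, 415.8, 420.4])                 # hypothetical candidate models
risk_at_dose = np.array([2.1e-3, 3.4e-3, 1.2e-3, 6.5e-3])    # each model's risk estimate

delta = aic - aic.min()
weights = np.exp(-0.5 * delta)
weights /= weights.sum()            # Akaike weights sum to 1

averaged_risk = np.sum(weights * risk_at_dose)
print("Akaike weights:     ", np.round(weights, 3))
print(f"model-averaged risk: {averaged_risk:.2e}")
print(f"best-model risk:     {risk_at_dose[np.argmin(aic)]:.2e}")
```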

11.
Quantitative risk assessment involves the determination of a safe level of exposure. Recent techniques use the estimated dose-response curve to estimate such a safe dose level. Although such methods have attractive features, low-dose extrapolation is highly dependent on the model choice. Fractional polynomials, essentially a set of (generalized) linear models, are a useful extension of classical polynomials, providing the flexibility needed to estimate the dose-response curve. Typically, one selects the best-fitting model in this set of polynomials and proceeds as if no model selection had been carried out. We show that model averaging over a set of fractional polynomials reduces bias and gives better precision in estimating a safe level of exposure (say, the benchmark dose) than an estimator from the selected best model. To estimate a lower limit of this benchmark dose, an approximation of the variance of the model-averaged estimator, as proposed by Burnham and Anderson, can be used. However, this is a conservative method, often resulting in unrealistically low safe doses. Therefore, a bootstrap-based method to more accurately estimate the variance of the model-averaged parameter is proposed.
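[Editor's note] The sketch below illustrates the general idea of bootstrapping a model-averaged benchmark dose (BMD). It is not the article's method: two simple one-parameter quantal models stand in for the fractional-polynomial set, the data are invented, and a percentile-bootstrap lower limit stands in for the variance-based limit discussed in the abstract.

```python
# Bootstrap of a model-averaged BMD over two hypothetical quantal models.
import numpy as np
from scipy.optimize import minimize_scalar

doses = np.array([0.0, 1.0, 3.0, 10.0])
n = np.array([50, 50, 50, 50])
y = np.array([0, 4, 9, 24])          # responders per dose group (hypothetical)
BMR = 0.10                            # benchmark response: 10% extra risk

models = {
    "quantal-linear":    (lambda d, k: 1 - np.exp(-k * d),      lambda k: -np.log(1 - BMR) / k),
    "quantal-quadratic": (lambda d, k: 1 - np.exp(-k * d ** 2), lambda k: np.sqrt(-np.log(1 - BMR) / k)),
}

def neg_loglik(k, prob_fn, yy):
    p = np.clip(prob_fn(doses, k), 1e-10, 1 - 1e-10)
    return float(-np.sum(yy * np.log(p) + (n - yy) * np.log(1 - p)))

def averaged_bmd(yy):
    aics, bmds = [], []
    for prob_fn, bmd_fn in models.values():
        fit = minimize_scalar(neg_loglik, bounds=(1e-6, 10.0),
                              args=(prob_fn, yy), method="bounded")
        aics.append(2 * 1 + 2 * fit.fun)      # AIC, one parameter per model
        bmds.append(bmd_fn(fit.x))
    w = np.exp(-0.5 * (np.array(aics) - min(aics)))
    w /= w.sum()                              # Akaike weights
    return float(np.sum(w * np.array(bmds)))

# Resample binomial counts around the observed proportions and recompute the averaged BMD.
rng = np.random.default_rng(1)
boot = [averaged_bmd(rng.binomial(n, y / n)) for _ in range(500)]
print(f"model-averaged BMD:              {averaged_bmd(y):.2f}")
print(f"bootstrap BMDL (5th percentile): {np.percentile(boot, 5):.2f}")
```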

12.
For diseases with more than one risk factor, the sum of probabilistic estimates of the number of cases caused by each individual factor may exceed the total number of cases observed, especially when uncertainties about exposure and dose response for some risk factors are high. In this study, we outline a method of bounding the fraction of lung cancer fatalities not due to specific well-studied causes. Such information serves as a "reality check" for estimates of the impacts of the minor risk factors, and, as such, complements the traditional risk analysis. With lung cancer as our example, we allocate portions of the observed lung cancer mortality to known causes (such as smoking, residential radon, and asbestos fibers) and describe the uncertainty surrounding those estimates. The interactions among the risk factors are also quantified, to the extent possible. We then infer an upper bound on the residual mortality due to "other" causes, using a consistency constraint on the total number of deaths, the maximum uncertainty principle, and the mathematics of imprecise probabilities.
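[Editor's note] A back-of-the-envelope version of the bounding argument, with invented numbers and the interactions among risk factors ignored (the article quantifies them where possible): the consistency constraint says the attributions cannot exceed the observed total, which caps the residual "other" share.

```python
# Upper bound on mortality from "other" causes via the consistency constraint.
total_deaths = 150_000   # observed lung cancer deaths (hypothetical)

attributions = {         # (lower, upper) bounds on deaths attributable to each cause
    "smoking":           (120_000, 140_000),
    "residential radon": (5_000, 20_000),
    "asbestos":          (2_000, 8_000),
}

# Ignoring overlap/interaction, the residual cannot exceed the total minus the
# sum of the lower bounds for the well-studied causes.
lower_sum = sum(lo for lo, _ in attributions.values())
upper_bound_other = max(0, total_deaths - lower_sum)
print(f"upper bound on deaths from 'other' causes: {upper_bound_other}")
print(f"as a fraction of all lung cancer deaths:   {upper_bound_other / total_deaths:.2%}")
```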

13.
Reassessing Benzene Cancer Risks Using Internal Doses   (Total citations: 1; self-citations: 0; by others: 1)
Human cancer risks from benzene exposure have previously been estimated by regulatory agencies based primarily on epidemiological data, with supporting evidence provided by animal bioassay data. This paper reexamines the animal-based risk assessments for benzene using physiologically-based pharmacokinetic (PBPK) models of benzene metabolism in animals and humans. It demonstrates that internal doses (interpreted as total benzene metabolites formed) from oral gavage experiments in mice are well predicted by a PBPK model developed by Travis et al. Both the data and the model outputs can also be accurately described by the simple nonlinear regression model total metabolites = 76.4x/(80.75 + x), where x = administered dose in mg/kg/day. Thus, PBPK modeling validates the use of such nonlinear regression models, previously used by Bailer and Hoel. An important finding is that refitting the linearized multistage (LMS) model family to internal doses and observed responses changes the maximum-likelihood estimate (MLE) dose-response curve for mice from linear-quadratic to cubic, leading to low-dose risk estimates smaller than in previous risk assessments. This is consistent with the conclusion for mice from the Bailer and Hoel analysis. An innovation in this paper is estimation of internal doses for humans based on a PBPK model (and the regression model approximating it) rather than on interspecies dose conversions. Estimates of human risks at low doses are reduced by the use of internal dose estimates when the estimates are obtained from a PBPK model, in contrast to Bailer and Hoel's findings based on interspecies dose conversion. Sensitivity analyses and comparisons with epidemiological data and risk models suggest that our finding of a nonlinear MLE dose-response curve at low doses is robust to changes in assumptions and more consistent with epidemiological data than earlier risk models.
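[Editor's note] The saturable internal-dose relationship quoted in the abstract is easy to evaluate directly; the dose values below are arbitrary and chosen only to show the near-linear low-dose region and the saturation at high administered doses.

```python
# Internal dose (total metabolites) vs. administered dose, per the abstract's regression.
def total_metabolites(x_mg_per_kg_day: float) -> float:
    return 76.4 * x_mg_per_kg_day / (80.75 + x_mg_per_kg_day)

for x in (0.1, 1.0, 10.0, 100.0, 500.0):
    print(f"administered {x:7.1f} mg/kg/day -> internal dose {total_metabolites(x):6.2f}")

# At low doses the curve is nearly linear with slope 76.4 / 80.75 ~= 0.95,
# while at high administered doses the internal dose saturates toward 76.4.
```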

14.
Quantitative Approaches in Use to Assess Cancer Risk   (Total citations: 4; self-citations: 0; by others: 4)

15.
Use of Acute Toxicity to Estimate Carcinogenic Risk   (Total citations: 1; self-citations: 0; by others: 1)
Data on the effects of human exposure to carcinogens are limited, so estimates of carcinogenic risk must be obtained indirectly. Current risk estimates are generally based on lifetime animal bioassays, which are expensive and take more than two years to complete. Here we show how data on acute toxicity can be used to make a preliminary estimate of carcinogenic risk and to give an idea of the uncertainty in that risk estimate. The estimates obtained are biased upwards, and so are useful for setting interim standards and for determining whether further study is worthwhile. A general scheme that incorporates the use of such estimates is outlined, and it is shown by example how adoption of the suggested procedures could have prevented regulatory hiatus in the past.

16.
Current methods for cancer risk assessment result in single values, without any quantitative information on the uncertainties in these values. Therefore, single risk values could easily be overinterpreted. In this study, we discuss a full probabilistic cancer risk assessment approach in which all the generally recognized uncertainties in both exposure and hazard assessment are quantitatively characterized and probabilistically evaluated, resulting in a confidence interval for the final risk estimate. The methodology is applied to three example chemicals (aflatoxin, N-nitrosodimethylamine, and methyleugenol). These examples illustrate that the uncertainty in a cancer risk estimate may be huge, making single-value estimates of cancer risk meaningless. Further, a risk based on linear extrapolation tends to be lower than the upper 95% confidence limit of a probabilistic risk estimate, and in that sense it is not conservative. Our conceptual analysis showed that there are two possible basic approaches for cancer risk assessment, depending on the interpretation of the dose-incidence data measured in animals. However, it remains unclear which of the two interpretations is the more adequate one, adding further uncertainty to the already huge confidence intervals for cancer risk estimates.
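[Editor's note] A Monte Carlo sketch of the basic mechanics of a fully probabilistic risk estimate: uncertain exposure and uncertain potency are sampled and multiplied, and the spread of the product is reported as an interval. The distributions and parameters are hypothetical, not those used for aflatoxin, N-nitrosodimethylamine, or methyleugenol.

```python
# Probabilistic risk = exposure x potency, propagated by Monte Carlo simulation.
import numpy as np

rng = np.random.default_rng(42)
n = 100_000
exposure = rng.lognormal(mean=np.log(1e-4), sigma=0.8, size=n)   # mg/kg-day (hypothetical)
potency = rng.lognormal(mean=np.log(0.2), sigma=1.5, size=n)     # (mg/kg-day)^-1 (hypothetical)

risk = exposure * potency
lo, med, hi = np.percentile(risk, [2.5, 50, 97.5])
print(f"median risk:  {med:.1e}")
print(f"95% interval: {lo:.1e} - {hi:.1e}  (~{np.log10(hi / lo):.1f} orders of magnitude wide)")
```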

17.
We develop a model for bacterial cross-contamination during food preparation in the domestic kitchen and apply it to the case of Campylobacter-contaminated chicken breast. Building blocks of the model are the routines performed during food preparation, with their associated probabilities of bacterial transfer between food items and kitchen utensils. The model is used in a quantitative microbiological risk assessment (QMRA) of Campylobacter in the Netherlands. Using parameter values from the literature and performing elementary sensitivity analyses, we show that cross-contamination can contribute significantly to the risk of Campylobacter infection, and we find that the cleaning frequency of kitchen utensils and the thoroughness of rinsing of raw food items after preparation have more impact on cross-contamination than previously emphasized. Furthermore, we argue that more behavioral data on hygiene during food preparation, in particular, are needed for a comprehensive Campylobacter risk assessment.
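[Editor's note] A toy cross-contamination chain, not the calibrated model: Campylobacter on raw chicken is transferred via a cutting board to ready-to-eat salad, and the resulting dose is converted to an infection probability. Transfer fractions, the cleaning probability, and the beta-Poisson parameters are illustrative placeholders.

```python
# Toy cross-contamination chain: chicken -> cutting board -> salad -> ingestion.
chicken_cfu = 1e5            # Campylobacter on a contaminated breast fillet (hypothetical)
f_chicken_to_board = 0.10    # fraction transferred to the cutting board
f_board_to_salad = 0.05      # fraction transferred onward if the board is reused uncleaned
p_board_not_cleaned = 0.4    # probability the board is reused without cleaning

dose_if_not_cleaned = chicken_cfu * f_chicken_to_board * f_board_to_salad

# Approximate beta-Poisson dose-response with illustrative parameters.
alpha, beta = 0.145, 7.59
p_inf_given_dose = 1 - (1 + dose_if_not_cleaned / beta) ** (-alpha)
p_infection = p_board_not_cleaned * p_inf_given_dose
print(f"dose if board not cleaned: {dose_if_not_cleaned:.0f} cfu")
print(f"overall infection probability for this routine: {p_infection:.2f}")
```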

18.
Anne Chapman, Risk Analysis, 2006, 26(3): 603-616
Under current European Union legislation, action to restrict the production and use of a chemical is only justified if there is evidence that the chemical poses a risk to human health or the environment. Risk is understood as being a matter of the magnitude and probability of specifiable harms. An examination of how risks from chemicals are assessed shows the process to be fraught with uncertainty, with the result that evidence that commands agreement as to whether a chemical poses a risk or not is often not available. Hence the frequent disputes as to whether restrictions on chemicals are justified. Rather than trying to assess the risks from a chemical, I suggest that we should aim to assess how risky a chemical is in a more everyday sense, where riskiness is a matter of the possibility of harm. Risky chemicals are those that, given our state of knowledge, could possibly cause harm. I discuss four things that make a chemical more risky: (1) its capacity to cause harm; (2) its novelty; (3) its persistence; and (4) its mobility. Regulation of chemicals should aim to reduce the production and use of risky chemicals by requiring that the least risky substance or method always be used for any particular purpose. Any use of risky substances should be justifiable in terms of the public benefits of that use.

19.
Comparison of Six Dose-Response Models for Use with Food-Borne Pathogens   (Total citations: 6; self-citations: 0; by others: 6)
Food-related illness in the United States is estimated to affect over six million people per year and to cost the economy several billion dollars. These illnesses and costs could be reduced if minimum infectious doses were established and used as the basis of regulations and monitoring. However, standard methodologies for dose-response assessment have not yet been formulated for microbial risk assessment. The objective of this study was to compare dose-response models for food-borne pathogens and determine which models were most appropriate for a range of pathogens. The statistical models proposed in the literature and chosen for comparison purposes were the log-normal, log-logistic, exponential, beta-Poisson, and Weibull-gamma models. These were fit to four data sets also taken from the published literature (Shigella flexneri, Shigella dysenteriae, Campylobacter jejuni, and Salmonella typhosa) using the method of maximum likelihood. The Weibull-gamma model, the only model with three parameters, was also the only model capable of fitting all the data sets examined when fit by maximum likelihood for comparison. Infectious doses were also calculated using each model. Within any given data set, the infectious dose estimated to affect one percent of the population varied across models by one to as many as nine orders of magnitude, illustrating the differences in extrapolation of the dose-response models. More data are needed to compare models and examine extrapolation from high to low doses for food-borne pathogens.
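[Editor's note] Two of the compared dose-response forms are simple to write down and evaluate; the sketch below (with illustrative parameters, not the paper's fitted values) shows how sharply the models can differ in the low-dose region, including the dose predicted to infect 1% of the population.

```python
# Exponential and approximate beta-Poisson dose-response models (illustrative parameters).
import numpy as np

def p_exponential(d, r=1e-4):
    return 1 - np.exp(-r * d)

def p_beta_poisson(d, alpha=0.2, beta=1e3):
    return 1 - (1 + d / beta) ** (-alpha)

print("dose        exponential  beta-Poisson")
for d in (1e1, 1e3, 1e5, 1e7):
    print(f"{d:10.0f}  {p_exponential(d):11.3f}  {p_beta_poisson(d):12.3f}")

# Dose predicted to infect 1% of the population under each model:
id01_exp = -np.log(0.99) / 1e-4
id01_bp = 1e3 * (0.99 ** (-1 / 0.2) - 1)
print(f"ID01 exponential: {id01_exp:.0f} organisms, ID01 beta-Poisson: {id01_bp:.1f} organisms")
```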

20.
A cancer risk assessment methodology based upon the Armitage–Doll multistage model of cancer is applied to animal bioassay data. The method utilizes the exact time-dependent dose pattern used in a bioassay rather than some single measure of dose such as average dose rate or cumulative dose. The methodology can be used to predict risks from arbitrary exposure patterns including, for example, intermittent exposure and short-term exposure occurring at an arbitrary age. The methodology is illustrated by applying it to a National Cancer Institute bioassay of ethylene dibromide in which dose rates were modified several times during the course of the experiment.
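[Editor's note] A numerical sketch, not the article's implementation, of how a time-dependent dose pattern enters a multistage hazard: a two-stage Armitage-Doll structure with dose assumed to act linearly on the first-stage transition rate, exposure occurring only between ages 20 and 40, and purely hypothetical rate constants.

```python
# Two-stage Armitage-Doll hazard with an intermittent exposure pattern (hypothetical rates).
import numpy as np

t = np.linspace(0, 70, 7001)                       # age in years
dose = np.where((t >= 20) & (t < 40), 5.0, 0.0)    # exposure only between ages 20 and 40

a1, b1 = 1e-3, 2e-4    # first-stage transition rate: a1 + b1 * dose(t)
a2 = 1e-3              # second-stage rate (dose-independent in this sketch)

lam1 = a1 + b1 * dose
# Two-stage hazard: h(t) = lam2(t) * integral_0^t lam1(s) ds  (trapezoid cumulative integral)
cum_lam1 = np.concatenate(([0.0], np.cumsum(0.5 * (lam1[1:] + lam1[:-1]) * np.diff(t))))
hazard = a2 * cum_lam1

# Lifetime tumor probability ~ 1 - exp(-cumulative hazard over the lifetime)
H = float(np.sum(0.5 * (hazard[1:] + hazard[:-1]) * np.diff(t)))
print(f"lifetime tumor probability: {1 - np.exp(-H):.4f}")
```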
