Similar Articles
20 similar articles found (search time: 46 ms)
1.
The distributional approach for uncertainty analysis in cancer risk assessment is reviewed and extended. The method considers a combination of bioassay study results, targeted experiments, and expert judgment regarding biological mechanisms to predict a probability distribution for uncertain cancer risks. Probabilities are assigned to alternative model components, including the determination of human carcinogenicity, mode of action, the dosimetry measure for exposure, the mathematical form of the dose‐response relationship, the experimental data set(s) used to fit the relationship, and the formula used for interspecies extrapolation. Alternative software platforms for implementing the method are considered, including Bayesian belief networks (BBNs) that facilitate assignment of prior probabilities, specification of relationships among model components, and identification of all output nodes on the probability tree. The method is demonstrated using the application of Evans, Sielken, and co‐workers for predicting cancer risk from formaldehyde inhalation exposure. Uncertainty distributions are derived for maximum likelihood estimate (MLE) and 95th percentile upper confidence limit (UCL) unit cancer risk estimates, and the effects of resolving selected model uncertainties on these distributions are demonstrated, considering both perfect and partial information for these model components. A method for synthesizing the results of multiple mechanistic studies is introduced, considering the assessed sensitivities and selectivities of the studies for their targeted effects. A highly simplified example is presented illustrating assessment of genotoxicity based on studies of DNA damage response caused by naphthalene and its metabolites. The approach can provide a formal mechanism for synthesizing multiple sources of information using a transparent and replicable weight‐of‐evidence procedure.

2.
Human variability is a very important factor considered in human health risk assessment for protecting sensitive populations from chemical exposure. Traditionally, to account for this variability, an interhuman uncertainty factor is applied to lower the exposure limit. However, using a fixed uncertainty factor rather than probabilistically accounting for human variability can hardly support probabilistic risk assessment advocated by a number of researchers; new methods are needed to probabilistically quantify human population variability. We propose a Bayesian hierarchical model to quantify variability among different populations. This approach jointly characterizes the distribution of risk at background exposure and the sensitivity of response to exposure, which are commonly represented by model parameters. We demonstrate, through both an application to real data and a simulation study, that using the proposed hierarchical structure adequately characterizes variability across different populations.
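The two-level structure this abstract describes (group-specific background risk and dose sensitivity drawn from population-level distributions) can be sketched with a forward simulation; all hyperparameter values, group sizes, and the dose are illustrative assumptions, not values from the study:

```python
import random
import statistics

random.seed(1)

# Population-level (hyper)parameters: mean and SD of each group's
# background risk a_g and dose sensitivity b_g (illustrative values).
MU_A, SD_A = 0.01, 0.004
MU_B, SD_B = 0.05, 0.02

def simulate_group(n_subjects, dose):
    """Draw one group's parameters, then count its responders."""
    a = random.gauss(MU_A, SD_A)   # group-specific background risk
    b = random.gauss(MU_B, SD_B)   # group-specific sensitivity to dose
    p = min(max(a + b * dose, 0.0), 1.0)
    return sum(random.random() < p for _ in range(n_subjects))

# Variability across 50 hypothetical populations at a fixed dose.
dose = 2.0
rates = [simulate_group(1000, dose) / 1000 for _ in range(50)]
print(f"mean rate {statistics.mean(rates):.3f}, "
      f"between-group SD {statistics.stdev(rates):.3f}")
```

Fitting the hierarchy to data (rather than simulating from it, as here) would invert this generative model with MCMC or similar machinery.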

3.
Correlation Between Carcinogenic Potency of Chemicals in Animals and Humans
Twenty-three chemicals were selected for comparison of the carcinogenic potencies estimated from epidemiological data to those estimated from animal carcinogenesis bioassays. The chemicals were all those for which reasonably strong evidence of carcinogenicity could be found in humans or animals and for which suitable data could be obtained for quantifying carcinogenic potencies in both humans and animals. Many alternative methods of analyzing the bioassay data were investigated. Almost all of the methods yielded potency estimates that were highly correlated with potencies estimated from epidemiological data; correlations were highly statistically significant (p < 0.001), with the corresponding correlation coefficients ranging as high as 0.9. These findings provide support for the general use of animal data to evaluate carcinogenic potential in humans and also for the use of animal data to quantify human risk.
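The correlation computed in this abstract is a Pearson coefficient between log-scale potencies. A minimal sketch on a hypothetical 23-chemical dataset (the numbers are synthetic, constructed only to mimic a strong but imperfect animal-human relationship):

```python
import math
import random

random.seed(0)

def pearson_r(xs, ys):
    """Pearson product-moment correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical log10 potencies for 23 chemicals: each "human" value is
# the "animal" value plus scatter, standing in for real bioassay and
# epidemiological estimates.
animal = [random.uniform(-3, 3) for _ in range(23)]
human = [a + random.gauss(0, 0.8) for a in animal]
print(f"r = {pearson_r(animal, human):.2f}")
```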

4.
Monte Carlo simulations are commonplace in quantitative risk assessments (QRAs). Designed to propagate the variability and uncertainty associated with each individual exposure input parameter in a quantitative risk assessment, Monte Carlo methods statistically combine the individual parameter distributions to yield a single, overall distribution. Critical to such an assessment is the representativeness of each individual input distribution. The authors performed a literature review to collect and compare the distributions used in published QRAs for the parameters of body weight, food consumption, soil ingestion rates, breathing rates, and fluid intake. To provide a basis for comparison, all estimated exposure parameter distributions were evaluated with respect to four properties: consistency, accuracy, precision, and specificity. The results varied depending on the exposure parameter. Even where extensive, well-collected data exist, investigators used a variety of different distributional shapes to approximate these data. Where such data do not exist, investigators have collected their own data, often leading to substantial disparity in parameter estimates and subsequent choice of distribution. The present findings indicate that more attention must be paid to the data underlying these distributional choices. More emphasis should be placed on sensitivity analyses, quantifying the impact of assumptions, and on discussion of sources of variation as part of the presentation of any risk assessment results. If such practices and disclosures are followed, it is believed that Monte Carlo simulations can greatly enhance the accuracy and appropriateness of specific risk assessments. Without such disclosures, researchers will be increasing the size of the risk assessment "black box," a concern already raised by many critics of more traditional risk assessments.
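The propagation step the abstract refers to can be sketched with a toy average-daily-dose model, dose = concentration × intake / body weight, with each input sampled from a distribution rather than fixed at a point estimate. The distributional shapes and parameters below are illustrative assumptions, not the distributions surveyed in the paper:

```python
import random

random.seed(42)

def sample_dose():
    """One Monte Carlo draw of average daily dose (mg/kg-day)."""
    conc = random.lognormvariate(0.0, 0.5)      # mg/L in drinking water
    intake = random.normalvariate(2.0, 0.4)     # L/day fluid intake
    weight = random.normalvariate(70.0, 12.0)   # kg body weight
    # Clamp to physically plausible minima to guard against tail draws.
    return conc * max(intake, 0.1) / max(weight, 30.0)

doses = sorted(sample_dose() for _ in range(10_000))
p50 = doses[len(doses) // 2]
p95 = doses[int(len(doses) * 0.95)]
print(f"median {p50:.4f}, 95th percentile {p95:.4f} mg/kg-day")
```

The paper's point is that the output percentiles are only as defensible as the input distributions, which is why the choice of shape and parameters deserves explicit justification.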

5.
Measurements of intermediate end points in the carcinogenic process may reduce uncertainty in human risk assessment from bioassay data, by identifying sources of interspecies variation and dose nonlinearity. This paper describes desirable properties of such markers: persistence, predictive power, temporal relevance, and consistency across dose rate and species. We illustrate these properties by evaluating markers for squamous cell nasal carcinoma in rodents exposed to formaldehyde. We also discuss design issues for bioassays that evaluate markers and tumors simultaneously at necropsy.

6.
Kenneth T. Bogen, Risk Analysis, 2014, 34(10): 1795–1806
The National Research Council 2009 “Silver Book” panel report included a recommendation that the U.S. Environmental Protection Agency (EPA) should increase all of its chemical carcinogen (CC) potency estimates by ~7‐fold to adjust for a purported median‐vs.‐mean bias that I recently argued does not exist (Bogen KT. “Does EPA underestimate cancer risks by ignoring susceptibility differences?,” Risk Analysis, 2014; 34(10):1780–1784). In this issue of the journal, my argument is critiqued for having flaws concerning: (1) intent, bias, and conservatism of EPA estimates of CC potency; (2) bias in potency estimates derived from epidemiology; and (3) human‐animal CC‐potency correlation. However, my argument remains valid, for the following reasons. (1) EPA's default approach to estimating CC risks has correctly focused on bounding average (not median) individual risk under a genotoxic mode‐of‐action (MOA) assumption, although pragmatically the approach leaves both inter‐individual variability in CC‐susceptibility, and widely varying CC‐specific magnitudes of fundamental MOA uncertainty, unquantified. (2) CC risk estimates based on large epidemiology studies are not systematically biased downward due to limited sampling from broad, lognormal susceptibility distributions. (3) A good, quantitative correlation is exhibited between upper‐bounds on CC‐specific potency estimated from human vs. animal studies (n = 24, r = 0.88, p = 2 × 10−8). It is concluded that protective upper‐bound estimates of individual CC risk that account for heterogeneity in susceptibility, as well as risk comparisons informed by best predictions of average‐individual and population risk that address CC‐specific MOA uncertainty, should each be used as separate, complementary tools to improve regulatory decisions concerning low‐level, environmental CC exposures.

7.
A Distributional Approach to Characterizing Low-Dose Cancer Risk
Since cancer risk at very low doses cannot be directly measured in humans or animals, mathematical extrapolation models and scientific judgment are required. This article demonstrates a probabilistic approach to carcinogen risk assessment that employs probability trees, subjective probabilities, and standard bootstrapping procedures. The probabilistic approach is applied to the carcinogenic risk of formaldehyde in environmental and occupational settings. Sensitivity analyses illustrate conditional estimates of risk for each path in the probability tree. Fundamental mechanistic uncertainties are characterized. A strength of the analysis is the explicit treatment of alternative beliefs about pharmacokinetics and pharmacodynamics. The resulting probability distributions on cancer risk are compared with the point estimates reported by federal agencies. Limitations of the approach are discussed as well as future research directions.
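The probability-tree mechanics described here reduce to weighting each path's conditional risk estimate by the subjective probability of that path's model assumptions. A minimal sketch with entirely hypothetical branch probabilities and unit risks (not the paper's formaldehyde values):

```python
# Each path pairs the subjective probability of one combination of
# model choices with the conditional unit-risk estimate given those
# choices. All four entries are illustrative placeholders.
paths = [
    (0.40, 0.0),       # branch: not carcinogenic to humans
    (0.30, 1.0e-6),    # branch: linear low-dose extrapolation
    (0.20, 5.0e-6),    # branch: more conservative dose metric
    (0.10, 2.0e-5),    # branch: upper-bound mechanistic assumption
]

# Branch probabilities over a tree must sum to one.
assert abs(sum(p for p, _ in paths) - 1.0) < 1e-12

mean_risk = sum(p * r for p, r in paths)
p_zero = sum(p for p, r in paths if r == 0.0)
print(f"mean unit risk {mean_risk:.2e}; P(risk = 0) = {p_zero:.2f}")
```

The full distribution over paths, not just its mean, is what gets compared against a federal agency's single point estimate.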

8.
Risk Analysis, 2018, 38(1): 163–176
The U.S. Environmental Protection Agency (EPA) uses health risk assessment to help inform its decisions in setting national ambient air quality standards (NAAQS). EPA's standard approach is to make epidemiologically‐based risk estimates based on a single statistical model selected from the scientific literature, called the “core” model. The uncertainty presented for “core” risk estimates reflects only the statistical uncertainty associated with that one model's concentration‐response function parameter estimate(s). However, epidemiologically‐based risk estimates are also subject to “model uncertainty,” which is a lack of knowledge about which of many plausible model specifications and data sets best reflects the true relationship between health and ambient pollutant concentrations. In 2002, a National Academies of Sciences (NAS) committee recommended that model uncertainty be integrated into EPA's standard risk analysis approach. This article discusses how model uncertainty can be taken into account with an integrated uncertainty analysis (IUA) of health risk estimates. It provides an illustrative numerical example based on risk of premature death from respiratory mortality due to long‐term exposures to ambient ozone, which is a health risk considered in the 2015 ozone NAAQS decision. This example demonstrates that use of IUA to quantitatively incorporate key model uncertainties into risk estimates produces a substantially altered understanding of the potential public health gain of a NAAQS policy decision, and that IUA can also produce more helpful insights to guide that decision, such as evidence of decreasing incremental health gains from progressive tightening of a NAAQS.

9.
To quantify the health benefits of environmental policies, economists generally require estimates of the reduced probability of illness or death. For policies that reduce exposure to carcinogenic substances, these estimates traditionally have been obtained through the linear extrapolation of experimental dose-response data to low-exposure scenarios as described in the U.S. Environmental Protection Agency's Guidelines for Carcinogen Risk Assessment (1986). In response to evolving scientific knowledge, EPA proposed revisions to the guidelines in 1996. Under the proposed revisions, dose-response relationships would not be estimated for carcinogens thought to exhibit nonlinear modes of action. Such a change in cancer-risk assessment methods and outputs will likely have serious consequences for how benefit-cost analyses of policies aimed at reducing cancer risks are conducted. Any tendency for reduced quantification of effects in environmental risk assessments, such as those contemplated in the revisions to EPA's cancer-risk assessment guidelines, impedes the ability of economic analysts to respond to increasing calls for benefit-cost analysis. This article examines the implications for benefit-cost analysis of carcinogenic exposures of the proposed changes to the 1986 Guidelines and proposes an approach for bounding dose-response relationships when no biologically based models are available. In spite of the more limited quantitative information provided in a carcinogen risk assessment under the proposed revisions to the guidelines, we argue that reasonable bounds on dose-response relationships can be estimated for low-level exposures to nonlinear carcinogens. This approach yields estimates of reduced illness for use in a benefit-cost analysis while incorporating evidence of nonlinearities in the dose-response relationship. As an illustration, the bounding approach is applied to the case of chloroform exposure.

10.
The existence of correlation between the carcinogenic potency and the maximum tolerated dose has been the subject of many investigations in recent years. Several attempts have been made to quantify this correlation in different bioassay experiments. By using some distributional assumptions, Krewski et al.(1) derive an analytic expression for the coefficient of correlation between the carcinogenic potency TD50 and the maximum tolerated dose. Here, we discuss the deviation that may result in using their analytical expression. By taking a more general approach we derive an expression for the correlation coefficient which includes the result of Krewski et al.(1) as a special case, and show that their expression may overestimate the correlation in some instances and yet underestimate the correlation in other instances. The proposed method is illustrated by application to a real dataset.

11.
Uncertainty in Cancer Risk Estimates
Several existing databases compiled by Gold et al.(1–3) for carcinogenesis bioassays are examined to obtain estimates of the reproducibility of cancer rates across experiments, strains, and rodent species. A measure of carcinogenic potency is given by the TD50 (daily dose that causes a tumor type in 50% of the exposed animals that otherwise would not develop the tumor in a standard lifetime). The lognormal distribution can be used to model the uncertainty of the estimates of potency (TD50) and the ratio of TD50's between two species. For near-replicate bioassays, approximately 95% of the TD50's are estimated to be within a factor of 4 of the mean. Between strains, about 95% of the TD50's are estimated to be within a factor of 11 of their mean, and the pure genetic component of variability is accounted for by a factor of 6.8. Between rats and mice, about 95% of the TD50's are estimated to be within a factor of 32 of the mean, while between humans and experimental animals the factor is 110 for 20 chemicals reported by Allen et al.(4) The common practice of basing cancer risk estimates on the most sensitive rodent species-strain-sex and using interspecies dose scaling based on body surface area appears to overestimate cancer rates for these 20 human carcinogens by about one order of magnitude on the average. Hence, for chemicals where the dose-response is nearly linear below experimental doses, cancer risk estimates based on animal data are not necessarily conservative and may range from a factor of 10 too low for human carcinogens up to a factor of 1000 too high for approximately 95% of the chemicals tested to date. These limits may need to be modified for specific chemicals where additional mechanistic or pharmacokinetic information may suggest alterations or where particularly sensitive subpopulations may be exposed. Supralinearity could lead to anticonservative estimates of cancer risk.
Underestimating cancer risk by a specific factor has a much larger impact on the actual number of cancer cases than overestimates of smaller risks by the same factor. This paper does not address the uncertainties in high to low dose extrapolation. If the dose-response is sufficiently nonlinear at low doses to produce cancer risks near zero, then low-dose risk estimates based on linear extrapolation are likely to overestimate risk and the limits of uncertainty cannot be established.
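The "within a factor f of the mean" statements above translate directly into the standard deviation of ln(TD50) under the lognormal model: if ~95% of values lie within a factor f of the central value (taken here as the geometric mean, an assumption), then sigma_ln = ln(f)/1.96. A quick sketch using the factors quoted in the abstract:

```python
import math

def log_sd_from_factor(f, coverage_z=1.96):
    """SD of ln(TD50) implied by '~95% of values within a factor f
    of the (geometric) mean' under a lognormal model."""
    return math.log(f) / coverage_z

# Factors quoted in the abstract for each level of comparison.
for label, factor in [("near-replicate bioassays", 4),
                      ("between strains", 11),
                      ("rats vs. mice", 32),
                      ("humans vs. animals", 110)]:
    sigma = log_sd_from_factor(factor)
    print(f"{label}: factor {factor} -> sigma_ln = {sigma:.2f}")
```

The widening sigma from replicates to cross-species comparisons is the abstract's central quantitative message.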

12.
Quantitative Cancer Risk Estimation for Formaldehyde
Of primary concern are irreversible effects, such as cancer induction, that formaldehyde exposure could have on human health. Dose-response data from human exposure situations would provide the most solid foundation for risk assessment, avoiding problematic extrapolations from the health effects seen in nonhuman species. However, epidemiologic studies of human formaldehyde exposure have provided little definitive information regarding dose-response. Reliance must consequently be placed on laboratory animal evidence. An impressive array of data points to significantly nonlinear relationships between rodent tumor incidence and administered dose, and between target tissue dose and administered dose (the latter for both rodents and Rhesus monkeys) following exposure to formaldehyde by inhalation. Disproportionately less formaldehyde binds covalently to the DNA of nasal respiratory epithelium at low than at high airborne concentrations. Use of this internal measure of delivered dose in analyses of rodent bioassay nasal tumor response yields multistage model estimates of low-dose risk, both point and upper bound, that are lower than equivalent estimates based upon airborne formaldehyde concentration. In addition, risk estimates obtained for Rhesus monkeys appear at least 10-fold lower than corresponding estimates for identically exposed Fischer-344 rats.

13.
I use an analogy with the history of physical measurements, population and energy projections, and analyze the trends in several data sets to quantify the overconfidence of the experts in the reliability of their uncertainty estimates. Data sets include (i) time trends in the sequential measurements of the same physical quantity; (ii) national population projections; and (iii) projections for the U.S. energy sector. Probabilities of large deviations for the true values are parametrized by an exponential distribution with the slope determined by the data. Statistics of past errors can be used in probabilistic risk assessment to hedge against unsuspected uncertainties and to include the possibility of human error into the framework of uncertainty analysis. By means of a sample Monte Carlo simulation of cancer risk caused by ingestion of benzene in soil, I demonstrate how the upper 95th percentiles of risk are changed when unsuspected uncertainties are included. I recommend inflating the estimated uncertainties by default safety factors determined from the relevant historical data sets.

14.
For the vast majority of chemicals that have cancer potency estimates on IRIS, the underlying database is deficient with respect to early-life exposures. This data gap has prevented derivation of cancer potency factors that are relevant to this time period, and so assessments may not fully address children's risks. This article provides a review of juvenile animal bioassay data in comparison to adult animal data for a broad array of carcinogens. This comparison indicates that short-term exposures in early life are likely to yield a greater tumor response than short-term exposures in adults, but similar tumor response when compared to long-term exposures in adults. This evidence is brought into a risk assessment context by proposing an approach that: (1) does not prorate children's exposures over the entire life span or mix them with exposures that occur at other ages; (2) applies the cancer slope factor from adult animal or human epidemiology studies to the children's exposure dose to calculate the cancer risk associated with the early-life period; and (3) adds the cancer risk for young children to that for older children/adults to yield a total lifetime cancer risk. The proposed approach allows for the unique exposure and pharmacokinetic factors associated with young children to be fully weighted in the cancer risk assessment. It is very similar to the approach currently used by U.S. EPA for vinyl chloride. The current analysis finds that the database of early life and adult cancer bioassays supports extension of this approach from vinyl chloride to other carcinogens of diverse mode of action. This approach should be enhanced by early-life data specific to the particular carcinogen under analysis whenever possible.

15.
A Monte Carlo simulation is incorporated into a risk assessment for trichloroethylene (TCE) using physiologically-based pharmacokinetic (PBPK) modeling coupled with the linearized multistage model to derive human carcinogenic risk extrapolations. The Monte Carlo technique incorporates physiological parameter variability to produce a statistically derived range of risk estimates which quantifies specific uncertainties associated with PBPK risk assessment approaches. Both inhalation and ingestion exposure routes are addressed. Simulated exposure scenarios were consistent with those used by the Environmental Protection Agency (EPA) in their TCE risk assessment. Mean values of physiological parameters were gathered from the literature for both mice (carcinogenic bioassay subjects) and for humans. Realistic physiological value distributions were assumed using existing data on variability. Mouse cancer bioassay data were correlated to total TCE metabolized and area-under-the-curve (blood concentration) trichloroacetic acid (TCA) as determined by a mouse PBPK model. These internal dose metrics were used in a linearized multistage model analysis to determine dose metric values corresponding to 10-6 lifetime excess cancer risk. Using a human PBPK model, these metabolized doses were then extrapolated to equivalent human exposures (inhalation and ingestion). The Monte Carlo iterations with varying mouse and human physiological parameters produced a range of human exposure concentrations producing a 10-6 risk.

16.
Formaldehyde induced squamous-cell carcinomas in the nasal passages of F344 rats in two inhalation bioassays at exposure levels of 6 ppm and above. Increases in rates of cell proliferation were measured by T. M. Monticello and colleagues at exposure levels of 0.7 ppm and above in the same tissues from which tumors arose. A risk assessment for formaldehyde was conducted at the CIIT Centers for Health Research, in collaboration with investigators from Toxicological Excellence in Risk Assessment (TERA) and the U.S. Environmental Protection Agency (U.S. EPA) in 1999. Two methods for dose-response assessment were used: a full biologically based modeling approach and a statistically oriented analysis by benchmark dose (BMD) method. This article presents the latter approach, the purpose of which is to combine BMD and pharmacokinetic modeling to estimate human cancer risks from formaldehyde exposure. BMD analysis was used to identify points of departure (exposure levels) for low-dose extrapolation in rats for both tumor and the cell proliferation endpoints. The benchmark concentrations for induced cell proliferation were lower than for tumors. These concentrations were extrapolated to humans using two mechanistic models. One model used computational fluid dynamics (CFD) alone to determine rates of delivery of inhaled formaldehyde to the nasal lining. The second model combined the CFD method with a pharmacokinetic model to predict tissue dose with formaldehyde-induced DNA-protein cross-links (DPX) as a dose metric. Both extrapolation methods gave similar results, and the predicted cancer risk in humans at low exposure levels was found to be similar to that from a risk assessment conducted by the U.S. EPA in 1991.
Use of the mechanistically based extrapolation models lends greater certainty to these risk estimates than previous approaches and also identifies the uncertainty in the measured dose-response relationship for cell proliferation at low exposure levels, the dose-response relationship for DPX in monkeys, and the choice between linear and nonlinear methods of extrapolation as key remaining sources of uncertainty.
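The BMD step above inverts a fitted dose-response model to find the exposure giving a chosen benchmark response (e.g., 10% extra risk), which then serves as the point of departure. A minimal sketch using a one-hit model for illustration (the paper's actual models and the potency value q here are not from the study):

```python
import math

def extra_risk(dose, q):
    """One-hit dose-response model: extra risk = 1 - exp(-q * dose)."""
    return 1.0 - math.exp(-q * dose)

def bmd(bmr, q):
    """Benchmark dose: exposure at which extra risk equals bmr."""
    return -math.log(1.0 - bmr) / q

q = 0.02                    # illustrative potency per ppm
bmd10 = bmd(0.10, q)        # point of departure for a 10% response
print(f"BMD10 = {bmd10:.2f} ppm")

# Linear extrapolation below the point of departure:
slope = 0.10 / bmd10
print(f"low-dose slope = {slope:.4f} per ppm")
```

In practice a lower confidence bound on the BMD (the BMDL) is used as the point of departure rather than the central estimate shown here.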

17.
Supply chain design is a complex and relatively unstructured process that involves choosing many decision parameters and typically requires consideration of numerous sources of uncertainty. Many conventional processes of supply chain design take a deterministic approach, using point estimates, on important measures of supply chain effectiveness such as cost, quality, delivery reliability and service levels. Supply chain disruptions are often separately considered as risks, both in the research literature and in practice, meaning that a purely traditional risk management and risk minimization approach is taken. We have developed and applied an approach that combines the intellect and experience of the supply chain designer with the power of evaluation provided by a Monte Carlo simulation model, which uses decision analysis techniques to explicitly incorporate the full spectrum of uncertain quantities across the set of alternative supply chain designs being considered. After defining and setting out the general decision variables and uncertainty factors for 16 distinct supply chain design decision categories, we then apply that approach to combine the decision-makers’ heuristics with the probabilistic modeling approach, iteratively, to achieve the best of both elements of such an approach. This novel approach to fully integrating performance and risk elements of supply chain designs is then illustrated with a case study. Finally, we call for further developmental research and field work to refine this approach.

18.
Methods to Approximate Joint Uncertainty and Variability in Risk
As interest in quantitative analysis of joint uncertainty and interindividual variability (JUV) in risk grows, so does the need for related computational shortcuts. To quantify JUV in risk, Monte Carlo methods typically require nested sampling of JUV in distributed inputs, which is cumbersome and time-consuming. Two approximation methods proposed here allow simpler and more rapid analysis. The first consists of new upper-bound JUV estimators that involve only uncertainty or variability, not both, and so never require nested sampling to calculate. The second is a discrete-probability-calculus procedure that uses only the mean and one upper-tail mean for each input in order to estimate mean and upper-bound risk, which procedure is simpler and more intuitive than similar ones in use. Application of these methods is illustrated in an assessment of cancer risk from residential exposures to chloroform in Kanawha Valley, West Virginia. Because each of the multiple exposure pathways considered in this assessment had separate modeled sources of uncertainty and variability, the assessment illustrates a realistic case where a standard Monte Carlo approach to JUV analysis requires nested sampling. In the illustration, the first proposed method quantified JUV in cancer risk much more efficiently than corresponding nested Monte Carlo calculations. The second proposed method also nearly duplicated JUV-related and other estimates of risk obtained using Monte Carlo methods. Both methods were thus found adequate to obtain basic risk estimates accounting for JUV in a realistically complex risk assessment. These methods make routine JUV analysis more convenient and practical.
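The nested (two-dimensional) sampling that the shortcut methods are designed to avoid looks like this: an outer loop over uncertainty in a model parameter and an inner loop over interindividual variability. All distributions and sample sizes below are illustrative assumptions, not the chloroform assessment's inputs:

```python
import random
import statistics

random.seed(7)

def population_risks(n_people, potency):
    """Inner loop: variability — per-person dose differs lognormally."""
    return [potency * random.lognormvariate(0.0, 0.6)
            for _ in range(n_people)]

mean_risks = []
for _ in range(200):                          # outer loop: uncertainty
    potency = random.lognormvariate(-12.0, 0.5)   # uncertain potency
    risks = population_risks(500, potency)
    mean_risks.append(statistics.mean(risks))

mean_risks.sort()
median = mean_risks[len(mean_risks) // 2]
ub = mean_risks[int(0.95 * len(mean_risks))]
print(f"median population-mean risk {median:.2e}, 95th UB {ub:.2e}")
```

The paper's first shortcut replaces this 200 × 500 nested computation with closed-form upper-bound estimators over each dimension separately.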

19.
There has been considerable discussion regarding the conservativeness of low-dose cancer risk estimates based upon linear extrapolation from upper confidence limits. Various groups have expressed a need for best (point) estimates of cancer risk in order to improve risk/benefit decisions. Point estimates of carcinogenic potency obtained from maximum likelihood estimates of low-dose slope may be highly unstable, being sensitive both to the choice of the dose–response model and possibly to minimal perturbations of the data. For carcinogens that augment background carcinogenic processes and/or for mutagenic carcinogens, at low doses the tumor incidence versus target tissue dose is expected to be linear. Pharmacokinetic data may be needed to identify and adjust for exposure-dose nonlinearities. Based on the assumption that the dose response is linear over low doses, a stable point estimate for low-dose cancer risk is proposed. Since various models give similar estimates of risk down to levels of 1%, a stable estimate of the low-dose cancer slope is provided by ŝ = 0.01/ED01, where ED01 is the dose corresponding to an excess cancer risk of 1%. Thus, low-dose estimates of cancer risk are obtained by risk = ŝ × dose. The proposed procedure is similar to one which has been utilized in the past by the Center for Food Safety and Applied Nutrition, Food and Drug Administration. The upper confidence limit corresponding to this point estimate of low-dose slope is similar to the upper limit q1* obtained from the generalized multistage model. The advantage of the proposed procedure is that ŝ provides stable estimates of low-dose carcinogenic potency, which are not unduly influenced by small perturbations of the tumor incidence rates, unlike maximum likelihood estimates of the multistage low-dose slope.
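The proposed estimator is simple enough to state in two lines of code. A sketch with a hypothetical carcinogen whose ED01 is 5 mg/kg-day (an invented value for illustration):

```python
def point_slope(ed01):
    """Stable point estimate of low-dose slope: s_hat = 0.01 / ED01."""
    return 0.01 / ed01

def low_dose_risk(dose, ed01):
    """Low-dose risk under the assumed linearity: risk = s_hat * dose."""
    return point_slope(ed01) * dose

# Hypothetical carcinogen with ED01 = 5 mg/kg-day:
s_hat = point_slope(5.0)
print(f"s_hat = {s_hat:.3f} per mg/kg-day")
print(f"risk at 1e-3 mg/kg-day = {low_dose_risk(1e-3, 5.0):.1e}")
```

The stability argument is that ED01 sits at the edge of the observable response range, where competing dose-response models nearly agree, so ŝ inherits little model-choice sensitivity.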

20.
Use of Acute Toxicity to Estimate Carcinogenic Risk
Data on the effects of human exposure to carcinogens are limited, so that estimation of the risks of carcinogens must be obtained indirectly. Current risk estimates are generally based on lifetime animal bioassays which are expensive and which take more than two years to complete. We here show how data on acute toxicity can be used to make a preliminary estimate of carcinogenic risk and give an idea of the uncertainty in that risk estimate. The estimates obtained are biased upwards, and so are useful for setting interim standards and determining whether further study is worthwhile. A general scheme which incorporates the use of such estimates is outlined, and it is shown by example how adoption of the procedures suggested could have prevented regulatory hiatus in the past.
