Similar Articles (20 results)
1.
Parodi et al. (1) and Zeise et al. (2) found a surprising statistical correlation (or association) between acute toxicity and carcinogenic potency. To shed light on whether this correlation is causal or merely a statistical or tautological artifact, we compared the correlations for the NCI/NTP data set with those for chemicals not in that set. Carcinogenic potencies were taken from the Gold et al. database. We find a weak correlation, with an average value of TD50/LD50 = 0.04 for the non-NCI data set, compared with TD50/LD50 = 0.15 for the NCI data set. We conclude that it is not easy to distinguish types of carcinogens on the basis of whether or not they are acutely toxic.
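A minimal sketch of the comparison described in this abstract: the average TD50/LD50 ratio is computed separately for chemicals inside and outside the NCI/NTP set. The numeric values below are placeholders, not values from the Gold et al. database.

```python
# Compare the average TD50/LD50 ratio for two sets of chemicals.
# All numbers are hypothetical placeholders.
from statistics import mean

def mean_potency_toxicity_ratio(td50_values, ld50_values):
    """Average TD50/LD50 over a set of chemicals (paired by chemical)."""
    return mean(td50 / ld50 for td50, ld50 in zip(td50_values, ld50_values))

# Hypothetical paired carcinogenic potency (TD50) and acute toxicity (LD50) values, mg/kg/day.
nci_td50, nci_ld50 = [12.0, 40.0, 3.0], [90.0, 250.0, 20.0]
non_nci_td50, non_nci_ld50 = [2.0, 15.0, 0.8], [60.0, 300.0, 25.0]

print("NCI set mean TD50/LD50:    ", mean_potency_toxicity_ratio(nci_td50, nci_ld50))
print("non-NCI set mean TD50/LD50:", mean_potency_toxicity_ratio(non_nci_td50, non_nci_ld50))
```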

2.
Current practice in carcinogen bioassay calls for exposure of experimental animals at doses up to and including the maximum tolerated dose (MTD). Such studies have been used to compute measures of carcinogenic potency, such as the TD50, as well as unit risk factors, such as q1*, for predicting low-dose risks. Recent studies have indicated that these measures of carcinogenic potency are highly correlated with the MTD. Carcinogenic potency has also been shown to be correlated with indicators of mutagenicity and toxicity. Correlation of the MTDs for rats and mice implies a corresponding correlation in TD50 values for these two species. The implications of these results for cancer risk assessment are examined in light of the large variation in potency among chemicals known to induce tumors in rodents.

3.
Trichloroacetic acid (TCA) is a major metabolite of trichloroethylene (TRI) thought to contribute to its hepatocarcinogenic effects in mice. Recent studies have shown that peak blood concentrations of TCA in rats do not occur until approximately 12 hours after an oral dose of TRI, whereas blood concentrations of TRI reach a maximum within an hour and are nondetectable after 2 hours.(1) The results of a study that examined the enterohepatic recirculation (EHC) of the principal TRI metabolites(2) were used to develop a physiologically based pharmacokinetic (PBPK) model for TRI that includes enterohepatic recirculation of its metabolites. The model quantitatively predicts the uptake, distribution, and elimination of TRI, trichloroethanol (TCEOH), trichloroethanol-glucuronide, and TCA, and includes production of metabolites through the enterohepatic recirculation pathway. Physiologic parameters used in the model were obtained from the literature.(3,4) Parameters for TRI metabolism were taken from Fisher et al.(5) Other kinetic parameters were found in the literature or estimated from experimental data.(2) The model was calibrated to data from an earlier study in which TRI was orally administered.(2) The model was verified using data on the enterohepatic recirculation of TCEOH and TCA,(2) chloral hydrate infusion data from Merdink,(1) and TRI data from Templin(1) and Larson and Bull.(1)
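A minimal compartment sketch (not the authors' TRI/TCEOH/TCA model) of the enterohepatic recirculation idea: a conjugated metabolite is secreted in bile into the gut lumen and partly reabsorbed into blood, which delays its peak blood concentration. All rate constants are hypothetical.

```python
# Toy enterohepatic recirculation model: parent -> conjugate -> bile -> gut -> reabsorption.
# Rate constants (per hour) are hypothetical, not fitted to TRI data.
import numpy as np
from scipy.integrate import solve_ivp

k_met   = 0.8   # parent in blood -> conjugated metabolite in blood
k_bile  = 0.5   # conjugate in blood -> gut lumen (biliary secretion)
k_reabs = 0.3   # gut lumen -> conjugate back in blood (reabsorption)
k_fecal = 0.1   # gut lumen -> fecal elimination
k_elim  = 0.2   # conjugate in blood -> urinary elimination

def rhs(t, y):
    parent, conj_blood, gut = y
    d_parent = -k_met * parent
    d_conj   = k_met * parent - (k_bile + k_elim) * conj_blood + k_reabs * gut
    d_gut    = k_bile * conj_blood - (k_reabs + k_fecal) * gut
    return [d_parent, d_conj, d_gut]

sol = solve_ivp(rhs, (0.0, 48.0), [1.0, 0.0, 0.0], t_eval=np.linspace(0, 48, 97))
peak_idx = int(np.argmax(sol.y[1]))
print(f"metabolite blood peak at ~{sol.t[peak_idx]:.1f} h")  # recirculation delays the peak
```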

4.
Lifetime cancer potency of aflatoxin was assessed based on the Yeh et al. study from China, in which both aflatoxin exposure and hepatitis B prevalence were measured. This study provides the best available information for estimating the carcinogenic risk posed by aflatoxin to the U.S. population. Cancer potency of aflatoxin was estimated using a biologically motivated risk assessment model. The best estimate of aflatoxin potency was 9 (mg/kg/day)−1 for individuals negative for hepatitis B and 230 (mg/kg/day)−1 for individuals positive for hepatitis B.
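A hedged arithmetic sketch of how potencies like these translate into a population lifetime risk, assuming the common linear form risk ≈ potency × lifetime average daily dose. The two potency values come from the abstract; the dose and the hepatitis B prevalence below are hypothetical.

```python
# Population-weighted lifetime excess risk from aflatoxin potency estimates.
# Potencies are from the abstract; dose and HBV prevalence are hypothetical.
potency_hbv_neg = 9.0     # (mg/kg/day)^-1, hepatitis B negative
potency_hbv_pos = 230.0   # (mg/kg/day)^-1, hepatitis B positive

daily_dose_mg_per_kg = 1.0e-6   # hypothetical lifetime average aflatoxin dose
hbv_prevalence = 0.005          # hypothetical fraction of population HBV-positive

risk_neg = potency_hbv_neg * daily_dose_mg_per_kg
risk_pos = potency_hbv_pos * daily_dose_mg_per_kg
population_risk = (1 - hbv_prevalence) * risk_neg + hbv_prevalence * risk_pos
print(f"population-weighted lifetime excess risk ~ {population_risk:.2e}")
```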

5.
A central part of probabilistic public health risk assessment is the selection of probability distributions for the uncertain input variables. In this paper, we apply the first-order reliability method (FORM)(1–3) as a probabilistic tool to assess the effect of the probability distributions of the input random variables on the probability that risk exceeds a threshold level (termed the probability of failure) and on the relevant probabilistic sensitivities. The analysis was applied to a case study given by Thompson et al.(4) on cancer risk caused by the ingestion of benzene-contaminated soil. Normal, lognormal, and uniform distributions were used in the analysis. The results show that the selection of a probability distribution function for the uncertain variables in this case study had a moderate impact on the probability that values would fall above a given threshold risk when the threshold risk is at the 50th percentile of the original distribution given by Thompson et al.(4) The impact was much greater when the threshold risk level was at the 95th percentile. The impact on uncertainty sensitivity, however, showed a reversed trend: the impact was more appreciable for the 50th percentile of the original distribution of risk given by Thompson et al.(4) than for the 95th percentile. Nevertheless, the choice of distribution shape did not alter the order of probabilistic sensitivity of the basic uncertain variables.
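A Monte Carlo sketch of the kind of sensitivity examined here: the same mean and standard deviation are assigned to an uncertain risk factor under normal, lognormal, and uniform assumptions, and P(risk > threshold) is compared at 50th- and 95th-percentile thresholds. The one-factor "risk model" and all parameter values are hypothetical, not the Thompson et al. benzene case.

```python
# Effect of the assumed input distribution on an exceedance probability.
# All parameter values are hypothetical.
import numpy as np

rng = np.random.default_rng(0)
n = 200_000
mean, sd = 1.0e-6, 5.0e-7   # hypothetical mean/sd of the uncertain risk factor

def lognormal_from_mean_sd(mean, sd, size):
    sigma2 = np.log(1 + (sd / mean) ** 2)
    mu = np.log(mean) - sigma2 / 2
    return rng.lognormal(mu, np.sqrt(sigma2), size)

samples = {
    "normal":    np.clip(rng.normal(mean, sd, n), 0, None),   # truncated at zero
    "lognormal": lognormal_from_mean_sd(mean, sd, n),
    "uniform":   rng.uniform(mean - np.sqrt(3) * sd, mean + np.sqrt(3) * sd, n),
}

baseline = samples["lognormal"]
for pct in (50, 95):
    threshold = np.percentile(baseline, pct)
    probs = {name: float((x > threshold).mean()) for name, x in samples.items()}
    print(f"threshold at {pct}th percentile of baseline:", probs)
```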

6.
There is continuing concern about the exposure of persons to various chlorinated organics via the environment, for example, chlorinated disinfection byproducts in drinking water.(1) Some of these may be carcinogenic,(2) although the evidence is far from strong.(3) There is an accumulating body of evidence that one of the normal human immunological responses to foreign agents is the generation of hypochlorous acid. This evidence will be summarized. The possibility that this HOCl generated in vivo could result in the formation of organochlorine compounds does not appear to have been seriously considered. Based on the best available information, the amount of such byproduct formation will be estimated.

7.
What Do We Know About Making Risk Comparisons?
The risks of unfamiliar technologies are often evaluated by comparing them with the risks of more familiar ones. Such risk comparisons have been criticized for neglecting critical dimensions of risky decisions. In a guide written for the Chemical Manufacturers Association, Covello et al. (1) have summarized these critiques and developed a taxonomy that characterizes possible risk comparisons in terms of their acceptability (or objectionableness). We asked four diverse groups of subjects to judge the acceptability of 14 statements produced by Covello et al. as examples of their categories. We found no correlation between the judgments of acceptability produced by our subjects and those predicted by Covello et al.

8.
A Monte Carlo simulation is incorporated into a risk assessment for trichloroethylene (TCE) using physiologically-based pharmacokinetic (PBPK) modeling coupled with the linearized multistage model to derive human carcinogenic risk extrapolations. The Monte Carlo technique incorporates physiological parameter variability to produce a statistically derived range of risk estimates that quantifies specific uncertainties associated with PBPK risk assessment approaches. Both inhalation and ingestion exposure routes are addressed. Simulated exposure scenarios were consistent with those used by the Environmental Protection Agency (EPA) in their TCE risk assessment. Mean values of physiological parameters were gathered from the literature for both mice (the carcinogenic bioassay species) and humans. Realistic physiological value distributions were assumed using existing data on variability. Mouse cancer bioassay data were correlated to total TCE metabolized and to the area under the curve (blood concentration) of trichloroacetic acid (TCA) as determined by a mouse PBPK model. These internal dose metrics were used in a linearized multistage model analysis to determine dose metric values corresponding to a 10⁻⁶ lifetime excess cancer risk. Using a human PBPK model, these metabolized doses were then extrapolated to equivalent human exposures (inhalation and ingestion). The Monte Carlo iterations with varying mouse and human physiological parameters produced a range of human exposure concentrations corresponding to a 10⁻⁶ risk.
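A heavily simplified sketch of the final extrapolation step described in this abstract: given a target internal dose metric associated with a 10⁻⁶ risk (taken here as a fixed number), Monte Carlo sampling of human physiological parameters yields a distribution of equivalent inhaled concentrations. The one-line uptake expression and all parameter distributions are hypothetical stand-ins for a full PBPK model.

```python
# Monte Carlo propagation of physiological variability to an exposure concentration
# corresponding to a fixed internal dose. Toy model; all values hypothetical.
import numpy as np

rng = np.random.default_rng(1)
n = 50_000

target_dose = 1.0e-3   # hypothetical metabolized dose (mg/kg/day) associated with 1e-6 risk
bw   = rng.normal(70.0, 10.0, n)                       # body weight, kg
qp   = rng.normal(5400.0, 800.0, n)                    # alveolar ventilation, L/hr
fmet = np.clip(rng.normal(0.5, 0.1, n), 0.05, 0.95)    # fraction of inhaled dose metabolized

# concentration (mg/L) such that qp * conc * fmet * 24 hr / bw = target_dose
conc = target_dose * bw / (qp * fmet * 24.0)

print("exposure concentration (mg/L) at the target internal dose:")
print("  5th, 50th, 95th percentiles:", np.percentile(conc, [5, 50, 95]))
```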

9.
The recent decision of the U.S. Supreme Court on the regulation of CO2 emissions from new motor vehicles(1) shows the need for a robust methodology to evaluate the fraction of attributable risk from such emissions. The methodology must enable decisionmakers to reach practically relevant conclusions on the basis of expert assessments that the decisionmakers see as an expression of research in progress, rather than as knowledge consolidated beyond any reasonable doubt.(2,3,4) This article presents such a methodology and demonstrates its use for the Alpine heat wave of 2003. In a Bayesian setting, different expert assessments on temperature trends and volatility can be formalized as probability distributions, with initial weights (priors) attached to them. By Bayesian learning, these weights can be adjusted in the light of data. The fraction of heat wave risk attributable to anthropogenic climate change can then be computed from the posterior distribution. We show that very different priors consistently lead to the result that anthropogenic climate change has contributed more than 90% to the probability of the Alpine summer heat wave in 2003. The present method can be extended to a wide range of applications where conclusions must be drawn from divergent assessments under uncertainty.
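A toy sketch of the Bayesian step described here: competing expert assessments are treated as models with prior weights, the weights are updated on observed data, and an attributable fraction is computed as FAR = 1 − p_natural / p_with_warming. The two Gaussian "models", the observations, and the priors are all hypothetical, and the mixture-based exceedance probability is one simple way to use the posterior weights, not necessarily the authors' exact construction.

```python
# Bayesian re-weighting of two expert assessments and a fraction of attributable risk.
# Models, data, and priors are hypothetical.
import numpy as np
from scipy.stats import norm

observed = np.array([0.8, 1.1, 0.9, 1.4, 1.3])   # hypothetical summer temperature anomalies (K)

models = {
    # name: (prior weight, mean anomaly, volatility)
    "no anthropogenic trend": (0.5, 0.0, 0.5),
    "anthropogenic warming":  (0.5, 1.0, 0.5),
}

# Bayesian learning: posterior weight proportional to prior * likelihood of the data.
post = {name: w * np.prod(norm.pdf(observed, mu, sd)) for name, (w, mu, sd) in models.items()}
z = sum(post.values())
post = {name: p / z for name, p in post.items()}

# Probability of exceeding a heat-wave threshold under each model, then mix with posterior weights.
threshold = 2.0
p_exceed = {name: norm.sf(threshold, mu, sd) for name, (_, mu, sd) in models.items()}
p_mixed = sum(post[name] * p_exceed[name] for name in models)
p_natural = p_exceed["no anthropogenic trend"]
far = 1.0 - p_natural / p_mixed
print("posterior weights:", {k: round(v, 3) for k, v in post.items()})
print(f"fraction of attributable risk ~ {far:.2f}")
```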

10.
There has been considerable discussion regarding the conservativeness of low-dose cancer risk estimates based upon linear extrapolation from upper confidence limits. Various groups have expressed a need for best (point) estimates of cancer risk in order to improve risk/benefit decisions. Point estimates of carcinogenic potency obtained from maximum likelihood estimates of the low-dose slope may be highly unstable, being sensitive both to the choice of the dose–response model and possibly to minimal perturbations of the data. For carcinogens that augment background carcinogenic processes and/or for mutagenic carcinogens, the tumor incidence versus target tissue dose is expected to be linear at low doses. Pharmacokinetic data may be needed to identify and adjust for exposure–dose nonlinearities. Based on the assumption that the dose response is linear over low doses, a stable point estimate for low-dose cancer risk is proposed. Since various models give similar estimates of risk down to levels of 1%, a stable estimate of the low-dose cancer slope is provided by ŝ = 0.01/ED01, where ED01 is the dose corresponding to an excess cancer risk of 1%. Thus, low-dose estimates of cancer risk are obtained by risk = ŝ × dose. The proposed procedure is similar to one that has been utilized in the past by the Center for Food Safety and Applied Nutrition, Food and Drug Administration. The upper confidence limit corresponding to this point estimate of the low-dose slope is similar to the upper limit q1* obtained from the generalized multistage model. The advantage of the proposed procedure is that ŝ provides stable estimates of low-dose carcinogenic potency that are not unduly influenced by small perturbations of the tumor incidence rates, unlike maximum likelihood estimates of the low-dose slope.
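A worked example of the proposed point estimate, ŝ = 0.01/ED01 and risk = ŝ × dose. The ED01 and dose values below are hypothetical.

```python
# Stable low-dose slope from the dose giving 1% excess risk, then linear risk at low dose.
# ED01 and the environmental dose are hypothetical.
ed01 = 2.5           # dose giving 1% excess cancer risk (mg/kg/day), hypothetical
s_hat = 0.01 / ed01  # low-dose slope, (mg/kg/day)^-1

dose = 1.0e-4        # hypothetical low environmental dose (mg/kg/day)
risk = s_hat * dose
print(f"s_hat = {s_hat:.4f} per mg/kg/day; risk at dose {dose:g} mg/kg/day = {risk:.1e}")
```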

11.
U.S. Government agencies have adopted a linear no-threshold dose-response relationship for chemical carcinogens, and have set up a Carcinogen Assessment Group (CAG) to determine the proportionality constants in these relationships. Their results are summarized for the carcinogenic elements Be, Cr, Ni, As, and Cd. It is shown that when effects are integrated over ∼10⁵ years, an atom of these elements in the ground has a reasonable chance (10⁻⁴–10⁻¹) of being ingested orally by a human. From this it is shown that, over this time period, producing electricity by coal burning causes 320 fatalities/GWe-yr and all coal burning in the United States causes 74,000 fatalities/year. Commercial use of these carcinogenic elements causes the following numbers of fatalities per year: Be, 900; Cr, 87,000; Ni, 10,000; As, 62,000; and Cd, 230,000. Use of CdS and GaAs photovoltaics would cause 2,200 and 66 fatalities/GWe-yr, respectively, and production of construction materials for photovoltaic arrays would cause 11 fatalities/GWe-yr through the use of coal. The calculational methods are derived from those used in risk assessments of radioactive wastes, and their questionable aspects apply equally to those assessments. It is shown that, contrary to present beliefs, it is much safer to dump these elements into rivers than to bury them in the ground, and by far the safest procedure is to dump them in the oceans.

12.
We estimated benzene risk using a novel framework of risk assessment that employed the measurement of radiation dose equivalents to benzene metabolites and a PBPK model. The highest risks for lifetime exposure to benzene at 1 μg/m³ and 3.2 mg/m³, estimated with a linear regression, were 5.4 × 10⁻⁷ and 1.3 × 10⁻³, respectively. Even though these estimates were based on in vitro chromosome aberration test data, they were about one-sixth to one-fourteenth of those from other studies and represent a fairly good estimate obtained by using the radiation equivalent coefficient as an "internal standard."

13.
We present a critical assessment of the benchmark dose (BMD) method introduced by Crump(1) as an alternative method for setting a characteristic dose level for toxicant risk assessment. The no-observed-adverse-effect level (NOAEL) method has been criticized because it does not use all of the data and because the characteristic dose level obtained depends on the dose levels and the statistical precision (sample sizes) of the study design. Defining the BMD in terms of a confidence bound on a point estimate results in a characteristic dose that also varies with the statistical precision and still depends on the study dose levels.(2) Indiscriminate choice of the benchmark response level may result in a BMD that reflects little about the dose-response behavior available from using all of the data. Another concern is that the definition of the BMD for the quantal response case differs from that for the continuous response case. Specifically, defining the BMD for continuous data using a ratio of increased effect divided by the background response results in an arbitrary dependence on the natural background for the endpoint being studied, making comparisons among endpoints less meaningful and standards more arbitrary. We define a modified benchmark dose as a point estimate using the ratio of increased effect divided by the full adverse response range, which enables consistent placement of the benchmark response level and provides a BMD with a more consistent relationship to the dose-response curve shape.
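A sketch contrasting the two continuous-endpoint benchmark definitions discussed in this abstract, for an assumed Hill-type dose-response curve. The curve parameters are hypothetical; the point is only how the two definitions place the benchmark response and hence the BMD.

```python
# Two BMD definitions for a continuous endpoint under an assumed dose-response
# f(d) = bg + (max_resp - bg) * d / (d + k). Parameters are hypothetical.
from scipy.optimize import brentq

bg, max_resp, k = 5.0, 55.0, 20.0   # background, maximal response, half-maximal dose

def f(d):
    return bg + (max_resp - bg) * d / (d + k)

bmr_fraction = 0.10

# (a) conventional definition: benchmark response = 10% increase over background
target_a = bg * (1 + bmr_fraction)
bmd_a = brentq(lambda d: f(d) - target_a, 1e-9, 1e6)

# (b) modified definition: benchmark response = 10% of the full adverse response range
target_b = bg + bmr_fraction * (max_resp - bg)
bmd_b = brentq(lambda d: f(d) - target_b, 1e-9, 1e6)

print(f"BMD, fraction-of-background definition:     {bmd_a:.3f}")
print(f"BMD, fraction-of-response-range definition: {bmd_b:.3f}")
```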

14.
The relative contribution of four influenza virus exposure pathways—(1) virus-contaminated hand contact with facial membranes, (2) inhalation of respirable cough particles, (3) inhalation of inspirable cough particles, and (4) spray of cough droplets onto facial membranes—must be quantified to determine the potential efficacy of nonpharmaceutical interventions against transmission. We used a mathematical model to estimate the relative contributions of the four pathways to infection risk in the context of a person attending a bedridden family member ill with influenza. Considering the uncertainties in the sparse human subject influenza dose-response data, we assumed alternative ratios of 3,200:1 and 1:1 for the infectivity of inhaled respirable virus relative to intranasally instilled virus. For the 3,200:1 ratio, pathways (1), (2), and (4) contribute substantially to influenza risk: at a virus saliva concentration of 10⁶ mL⁻¹, pathways (1), (2), (3), and (4) contribute, respectively, 31%, 17%, 0.52%, and 52% of the infection risk. With increasing virus concentrations, pathway (2) increases in importance, while pathway (4) decreases in importance. In contrast, for the 1:1 infectivity ratio, pathway (1) is the most important overall: at a virus saliva concentration of 10⁶ mL⁻¹, pathways (1), (2), (3), and (4) contribute, respectively, 93%, 0.037%, 3.3%, and 3.7% of the infection risk. With increasing virus concentrations, pathway (3) increases in importance, while pathway (4) decreases in importance. Given the sparse knowledge concerning influenza dose and infectivity via different exposure pathways, nonpharmaceutical interventions for influenza should simultaneously address potential exposure via hand contact to the face, inhalation, and droplet spray.
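A sketch of one way to express relative pathway contributions like the percentages above: each pathway's share of the summed per-pathway risk. The per-pathway risks below are hypothetical placeholders chosen only to roughly echo the reported 3,200:1 case, and the paper's exposure and dose-response model is not reproduced here.

```python
# Relative contribution of each exposure pathway as its share of the summed risk.
# The individual risks are hypothetical placeholders.
pathway_risks = {
    "hand contact to facial membranes":    3.1e-3,
    "inhalation of respirable particles":  1.7e-3,
    "inhalation of inspirable particles":  5.2e-5,
    "droplet spray onto facial membranes": 5.2e-3,
}

total = sum(pathway_risks.values())
for name, r in pathway_risks.items():
    print(f"{name:38s} {100 * r / total:5.1f}%")
```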

15.
Pharmacokinetic models which incorporate independently measured anatomical characteristics and physiological flows have been widely used to predict the pharmacokinetic behavior of drugs, anesthetics, and other chemicals. Models appearing in the literature have included as many as 18(1) or as few as 5(2) tissue compartments. With the exception of the multiple-compartment delay trains used by Bischoff(3) to model the delays inherent in the appearance of drug metabolites in bile and segments of the intestinal lumen, very little effort has been made to incorporate the available information on gastrointestinal anatomy and physiology into more accurate gastrointestinal absorption/enterohepatic recirculation submodels. Since several authors have shown that the lymphatic system is the most significant route of absorption for highly lipophilic chemicals, we have constructed a model of gastrointestinal absorption that emphasizes chylomicron production and transport as the most significant route of absorption for nonvolatile, lipophilic chemicals. The absorption and distribution of hexachlorobenzene after intravenous vs. oral dosing are used to demonstrate features of this model.
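A minimal sketch (not the authors' hexachlorobenzene model) of oral absorption with a lymphatic (chylomicron-associated) route in parallel with portal absorption to the liver. All first-order rate constants are hypothetical.

```python
# Toy gastrointestinal absorption model with parallel lymphatic and portal routes.
# Rate constants (per hour) are hypothetical.
import numpy as np
from scipy.integrate import solve_ivp

k_lymph  = 0.15   # gut lumen -> lymph (chylomicron-associated)
k_portal = 0.05   # gut lumen -> liver via portal blood
k_thorax = 0.30   # lymph -> systemic blood (thoracic duct)
k_hep    = 0.40   # liver -> systemic blood (after first pass)
k_clear  = 0.10   # systemic blood -> elimination

def rhs(t, y):
    gut, lymph, liver, blood = y
    return [
        -(k_lymph + k_portal) * gut,
        k_lymph * gut - k_thorax * lymph,
        k_portal * gut - k_hep * liver,
        k_thorax * lymph + k_hep * liver - k_clear * blood,
    ]

sol = solve_ivp(rhs, (0, 72), [1.0, 0, 0, 0], t_eval=np.linspace(0, 72, 145))
frac_lymphatic = k_lymph / (k_lymph + k_portal)   # split of absorbed dose between routes
print(f"fraction of absorbed dose taking the lymphatic route: {frac_lymphatic:.2f}")
print(f"systemic blood amount at 24 h: {sol.y[3][48]:.3f} (fraction of oral dose)")
```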

16.
Applying a hockey stick parametric dose-response model to data on late or retarded development in Iraqi children exposed in utero to methylmercury, with mercury (Hg) exposure characterized by the peak Hg concentration in mothers' hair during pregnancy, Cox et al. calculated the "best statistical estimate" of the threshold for health effects as 10 ppm Hg in hair, with a 95% range of uncertainty of between 0 and 13.6 ppm.(1) A new application of the hockey stick model to the Iraqi data shows, however, that the statistical upper limit of the threshold based on the hockey stick model could be as high as 255 ppm. Furthermore, the maximum likelihood estimate of the threshold using a different parametric model is virtually zero. These and other analyses demonstrate that threshold estimates based on parametric models exhibit high statistical variability and model dependency, and are highly sensitive to the precise definition of an abnormal response. Consequently, they are not a reliable basis for setting a reference dose (RfD) for methylmercury. Benchmark analyses and statistical analyses useful for deriving NOAELs are also presented. We believe these latter analyses—particularly the benchmark analyses—generally form a sounder basis for determining RfDs than the type of hockey stick analysis presented by Cox et al. However, the acute nature of the exposures, as well as other limitations in the Iraqi data, suggest that other data may be more appropriate for determining acceptable human exposures to methylmercury.
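A sketch of a hockey-stick (threshold) dose-response fit of the kind discussed here: response is flat at background below the threshold and rises linearly above it. The hair-Hg "data" below are hypothetical, not the Iraqi cohort, and the least-squares fit stands in for whatever estimation procedure the cited analyses used.

```python
# Fit a hockey-stick (background + threshold + linear slope) model by least squares.
# Data are hypothetical.
import numpy as np
from scipy.optimize import curve_fit

def hockey_stick(x, background, threshold, slope):
    return background + slope * np.maximum(x - threshold, 0.0)

hg_ppm   = np.array([1, 3, 8, 15, 30, 60, 120, 240], dtype=float)
response = np.array([0.05, 0.06, 0.05, 0.08, 0.15, 0.28, 0.55, 0.90])  # hypothetical

params, cov = curve_fit(hockey_stick, hg_ppm, response, p0=[0.05, 10.0, 0.005])
background, threshold, slope = params
print(f"estimated threshold ~ {threshold:.1f} ppm (se ~ {np.sqrt(cov[1, 1]):.1f})")
```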

17.
Cross-Cultural Differences in Risk Perception: A Model-Based Approach
The present study assessed cross-cultural differences in the perception of financial risks. Students at large universities in Hong Kong, Taiwan, the Netherlands, and the U.S., as well as a group of Taiwanese security analysts, rated the riskiness of a set of monetary lotteries. Risk judgments differed with nationality, but not with occupation (students vs. security analysts), and were modeled by the Conjoint Expected Risk (CER) model.(1) Consistent with cultural differences in country uncertainty avoidance,(2) CER model parameters of respondents from the two Western countries differed from those of respondents from the two countries with Chinese cultural roots: the risk judgments of respondents from Hong Kong and Taiwan were more sensitive to the magnitude of potential losses and less mitigated by the probability of positive outcomes.

18.
This paper demonstrates a new methodology for probabilistic public health risk assessment using the first-order reliability method. The method provides the probability that incremental lifetime cancer risk exceeds a threshold level, and the probabilistic sensitivity quantifying the relative impact of the uncertainty of each random variable on the exceedance probability. The approach is applied to a case study given by Thompson et al.(1) on cancer risk caused by ingestion of benzene-contaminated soil, and the results are compared to those of the Monte Carlo method. Parametric sensitivity analyses are conducted to assess the sensitivity of the probabilistic event with respect to the distribution parameters of the basic random variables, such as the mean and standard deviation. The technique is a novel approach to probabilistic risk assessment and can be used in situations where Monte Carlo analysis is computationally expensive, such as when the simulated risk lies in the tail of the risk probability distribution.
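A minimal first-order reliability method (FORM) sketch for a toy risk model risk = c·X1·X2 with lognormal inputs, not the Thompson et al. benzene case. The design point minimizes the distance to the origin in standard normal space subject to the limit state, the reliability index is that distance, and P(risk > threshold) ≈ Φ(−β). Because the toy model is a product of lognormals, the exact answer is available as a check. All parameter values are hypothetical.

```python
# FORM for a toy exceedance problem: find the design point, beta = ||u*||, Pf ~ Phi(-beta).
# Model and parameters are hypothetical.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

c = 2.0e-7
mu = np.array([0.0, 0.5])      # log-means of X1, X2 (hypothetical)
sigma = np.array([0.6, 0.4])   # log-standard deviations (hypothetical)
threshold = 1.0e-5

def risk(u):
    x = np.exp(mu + sigma * u)   # map standard normal u to lognormal inputs
    return c * x[0] * x[1]

def limit_state(u):
    # Log formulation of g(u) = threshold - risk(u): equivalent zero set, better scaled.
    return np.log(threshold) - np.log(risk(u))

res = minimize(lambda u: 0.5 * u @ u, x0=np.array([1.0, 1.0]),
               constraints=[{"type": "eq", "fun": limit_state}], method="SLSQP")
beta = np.linalg.norm(res.x)
p_failure = norm.cdf(-beta)
print(f"reliability index beta = {beta:.3f}, FORM P(risk > threshold) ~ {p_failure:.2e}")

# Exact check: log(risk) is normal with mean log(c) + sum(mu) and sd ||sigma||.
exact = norm.sf(np.log(threshold), np.log(c) + mu.sum(), np.linalg.norm(sigma))
print(f"exact P(risk > threshold)              = {exact:.2e}")
```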

19.
In this paper we describe a simulation, by Monte Carlo methods, of the results of rodent carcinogenicity bioassays. Our aim is to study how the observed correlation between carcinogenic potency (beta, or ln2/TD50) and the maximum tolerated dose (MTD) arises, and whether the existence of this correlation leads to an artificial correlation between carcinogenic potencies in rats and mice. The validity of the bioassay results depends upon, among other things, certain biases in the experimental design of the bioassays. These include the selection of chemicals for bioassay and details of the experimental protocol, including dose levels. We use as variables in our simulation the following factors: (1) dose group size, (2) number of dose groups, (3) tumor rate in the control (zero-dose) group, (4) distribution of the MTD values of the group of chemicals, as specified by the mean and standard deviation, (5) the degree of correlation between beta and the MTD, as given by the standard deviation of the random error term in the linear regression of log beta on log(1/MTD), and (6) an upper limit on the number of animals with tumors. Monte Carlo simulation can show whether the information present in the existing rodent bioassay database is sufficient to reject the validity of the proposed interspecies correlations at a given level of stringency. We hope that such analysis will be useful for future bioassay design and, more importantly, for discussion of the whole NCI/NTP program.
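A compact sketch of the simulation idea described in this abstract: potency beta is tied to 1/MTD with random error, a bioassay at the MTD is simulated for two "species" sharing the same MTDs, and the induced cross-species correlation of estimated potencies is examined. The dose-response form, the crude potency estimator, and all distribution parameters below are hypothetical simplifications of the factors listed above.

```python
# Monte Carlo simulation of how testing at the MTD can induce a rat-mouse potency correlation.
# All distributional choices are hypothetical.
import numpy as np

rng = np.random.default_rng(2)
n_chem, group_size, background = 200, 50, 0.05

log_mtd = rng.normal(0.0, 1.5, n_chem)   # spread of MTDs across chemicals (factor 4)
mtd = 10.0 ** log_mtd

def simulate_species(sigma_err):
    # log beta = log(1/MTD) + error  ->  beta correlated with 1/MTD (factor 5)
    beta = (1.0 / mtd) * 10.0 ** rng.normal(0.0, sigma_err, n_chem)
    est = np.empty(n_chem)
    for i in range(n_chem):
        p_high = background + (1 - background) * (1 - np.exp(-beta[i] * mtd[i]))
        tumors_high = rng.binomial(group_size, p_high)   # high-dose group at the MTD
        tumors_ctrl = rng.binomial(group_size, background)
        excess = max((tumors_high - tumors_ctrl) / group_size, 1.0 / group_size)
        est[i] = -np.log(1 - min(excess, 0.98)) / mtd[i]  # crude one-dose potency estimate
    return np.log10(est)

rats, mice = simulate_species(0.5), simulate_species(0.5)
print("simulated rat-mouse correlation of log potency:", np.corrcoef(rats, mice)[0, 1])
```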

20.
Crouch and Wilson demonstrated a strong correlation between carcinogenic potencies in rats and mice, supporting the extrapolation from mouse to man. Bernstein et al., however, show that the observed correlation is mainly a statistical artifact of bioassay design, and Crouch et al. have responded. This paper will review the arguments and present some new data. The correlation is largely (but not totally) tautological, confirming the results of Bernstein et al.
