Similar Documents (20 results found)
1.
Formaldehyde induced squamous-cell carcinomas in the nasal passages of F344 rats in two inhalation bioassays at exposure levels of 6 ppm and above. Increases in rates of cell proliferation were measured by T. M. Monticello and colleagues at exposure levels of 0.7 ppm and above in the same tissues from which tumors arose. A risk assessment for formaldehyde was conducted at the CIIT Centers for Health Research, in collaboration with investigators from Toxicological Excellence in Risk Assessment (TERA) and the U.S. Environmental Protection Agency (U.S. EPA), in 1999. Two methods for dose-response assessment were used: a full biologically based modeling approach and a statistically oriented analysis by the benchmark dose (BMD) method. This article presents the latter approach, the purpose of which is to combine BMD and pharmacokinetic modeling to estimate human cancer risks from formaldehyde exposure. BMD analysis was used to identify points of departure (exposure levels) for low-dose extrapolation in rats for both the tumor and cell proliferation endpoints. The benchmark concentrations for induced cell proliferation were lower than those for tumors. These concentrations were extrapolated to humans using two mechanistic models. One model used computational fluid dynamics (CFD) alone to determine rates of delivery of inhaled formaldehyde to the nasal lining. The second model combined the CFD method with a pharmacokinetic model to predict tissue dose, with formaldehyde-induced DNA-protein cross-links (DPX) as the dose metric. Both extrapolation methods gave similar results, and the predicted cancer risk in humans at low exposure levels was found to be similar to that from a risk assessment conducted by the U.S. EPA in 1991. Use of the mechanistically based extrapolation models lends greater certainty to these risk estimates than previous approaches, and it also identifies the key remaining sources of uncertainty: the measured dose-response relationship for cell proliferation at low exposure levels, the dose-response relationship for DPX in monkeys, and the choice between linear and nonlinear methods of extrapolation.
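To make the BMD step concrete, the following is a minimal sketch of how a point of departure can be derived from quantal tumor data. The dose groups, tumor counts, quantal Weibull model, and 10% benchmark response are all illustrative assumptions, not the article's data or model choice.

```python
# Minimal benchmark-dose sketch (hypothetical data and model choice).
import numpy as np
from scipy.optimize import minimize, brentq

# Hypothetical quantal bioassay data: dose (ppm), animals, tumor-bearing animals.
dose = np.array([0.0, 0.7, 2.0, 6.0, 10.0, 15.0])
n    = np.array([90,  90,  90,  90,  90,   90])
k    = np.array([ 1,   1,   2,   4,  18,   42])

def p_tumor(d, g, b, s):
    """Quantal Weibull model: background g plus dose-driven extra risk."""
    return g + (1.0 - g) * (1.0 - np.exp(-b * d**s))

def neg_loglik(theta):
    g, b, s = theta
    p = np.clip(p_tumor(dose, g, b, s), 1e-9, 1 - 1e-9)
    return -np.sum(k * np.log(p) + (n - k) * np.log(1.0 - p))

fit = minimize(neg_loglik, x0=[0.01, 1e-4, 2.0],
               bounds=[(1e-6, 0.5), (1e-8, 10.0), (0.5, 8.0)])
g, b, s = fit.x

# Benchmark dose at 10% extra risk: solve (P(d) - P(0)) / (1 - P(0)) = 0.10.
extra = lambda d: (p_tumor(d, g, b, s) - g) / (1.0 - g) - 0.10
bmd10 = brentq(extra, 1e-6, 15.0)
print(f"BMD10 ~ {bmd10:.2f} ppm (point of departure for extrapolation)")
```

In practice a lower confidence bound on the BMD (the BMDL) is usually taken as the point of departure; that refinement is omitted here for brevity.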

2.
Scientists at the CIIT Centers for Health Research (Conolly et al., 2000, 2003; Kimbell et al., 2001a, 2001b) developed a two-stage clonal expansion model of formaldehyde-induced nasal cancers in the F344 rat that made extensive use of mechanistic information. An inference of their modeling approach was that formaldehyde-induced tumorigenicity could be optimally explained without any role for formaldehyde's mutagenic action. In this article, we examine the strength of this result and modify selected features to examine the sensitivity of the predicted dose response to key assumptions. We implement solutions to the two-stage cancer model that are valid for nonhomogeneous models (i.e., models with time-dependent parameters), thus accounting for time dependence in the variables. In this reimplementation, we examine the sensitivity of model predictions to pooling historical and concurrent control data, and to lumping sacrificed animals in which tumors were discovered incidentally with those in which death was caused by the tumors. We found that the CIIT model results were not significantly altered by the nonhomogeneous solutions. Dose-response predictions below the range of exposures where tumors occurred in the bioassays were highly sensitive to the choice of control data. In the range of exposures where tumors were observed, the model attributed up to 74% of the added tumor probability to formaldehyde's mutagenic action when our reanalysis restricted the use of the National Toxicology Program (NTP) historical control data to only those obtained from inhalation exposures. Model results were insensitive to hourly or daily temporal variations in DNA-protein cross-link (DPX) concentration, a surrogate for the dose metric linked to formaldehyde-induced mutations, prompting us to use weekly averages for this quantity. Various other biological and mathematical uncertainties in the model have been retained unmodified in this analysis. These include the model's specification of initiated-cell division and death rates, and uncertainty and variability in the dose response for cell replication rates, issues that will be considered in a future paper.
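For readers unfamiliar with the nonhomogeneous case, the sketch below integrates the expected-count equations of a two-stage clonal expansion model with a time-dependent mutation rate driven by a DPX-like dose metric. All rates, the linear DPX-to-mutation link, and the exposure profile are hypothetical and do not reproduce the CIIT parameterization.

```python
# Sketch: expected-count equations of a two-stage clonal expansion model with
# time-dependent mutation rates (the nonhomogeneous case). All rates are
# hypothetical; this is not the CIIT parameterization.
import numpy as np
from scipy.integrate import solve_ivp

X     = 1e7      # normal cells at risk (hypothetical)
alpha = 0.05     # initiated-cell division rate, per week
beta  = 0.045    # initiated-cell death/differentiation rate, per week

def dpx(t):
    """Weekly-average DPX dose metric: exposure for 80 weeks, then recovery."""
    return 0.2 if t < 80.0 else 0.0

def mu1(t):  # normal -> initiated mutation rate, assumed linear in DPX
    return 1e-7 * (1.0 + 50.0 * dpx(t))

def mu2(t):  # initiated -> malignant mutation rate, same assumed dependence
    return 1e-7 * (1.0 + 50.0 * dpx(t))

def rhs(t, y):
    I, M = y                      # expected initiated and malignant cell counts
    dI = mu1(t) * X + (alpha - beta) * I
    dM = mu2(t) * alpha * I       # conversion per initiated-cell division (one convention)
    return [dI, dM]

sol = solve_ivp(rhs, (0.0, 104.0), [0.0, 0.0], max_step=0.25)  # two years, weeks
I, M = sol.y[:, -1]
print(f"expected initiated cells: {I:.3g}, expected malignant cells: {M:.3g}")
```

Because the mutation rate is a function of time here, there is no closed-form homogeneous solution to fall back on; the numerical integration is what makes the time dependence explicit.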

3.
A Monte Carlo simulation is incorporated into a risk assessment for trichloroethylene (TCE) using physiologically-based pharmacokinetic (PBPK) modeling coupled with the linearized multistage model to derive human carcinogenic risk extrapolations. The Monte Carlo technique incorporates physiological parameter variability to produce a statistically derived range of risk estimates that quantifies specific uncertainties associated with PBPK risk assessment approaches. Both inhalation and ingestion exposure routes are addressed. Simulated exposure scenarios were consistent with those used by the Environmental Protection Agency (EPA) in its TCE risk assessment. Mean values of physiological parameters were gathered from the literature both for mice (the carcinogenic bioassay subjects) and for humans. Realistic physiological value distributions were assumed using existing data on variability. Mouse cancer bioassay data were correlated to total TCE metabolized and to the area under the blood-concentration curve of trichloroacetic acid (TCA), as determined by a mouse PBPK model. These internal dose metrics were used in a linearized multistage model analysis to determine dose metric values corresponding to a 10^-6 lifetime excess cancer risk. Using a human PBPK model, these metabolized doses were then extrapolated to equivalent human exposures (inhalation and ingestion). The Monte Carlo iterations with varying mouse and human physiological parameters produced a range of human exposure concentrations producing a 10^-6 risk.
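A minimal sketch of the Monte Carlo layer follows. A toy steady-state uptake calculation stands in for the full PBPK model, and the potency value and all parameter distributions are invented for illustration.

```python
# Monte Carlo layer of a PBPK-based extrapolation (toy steady-state model in
# place of the full PBPK model; every value and distribution is hypothetical).
import numpy as np

rng = np.random.default_rng(1)
N = 10_000

# Dose metric giving 1e-6 risk under a linearized multistage slope of 2e-3
# (mg metabolized/kg/day)^-1 -- a made-up potency for illustration.
target_metric = 1.0e-6 / 2.0e-3

# Variable human physiology (lognormal around literature-style means).
bw   = rng.lognormal(np.log(70.0), 0.15, N)   # body weight, kg
qalv = rng.lognormal(np.log(5.0),  0.20, N)   # alveolar ventilation, m3/h
fmet = rng.beta(8, 2, N)                      # fraction of absorbed dose metabolized

# Steady-state inhalation: metabolized dose (mg/kg/day) per (mg/m3) in air,
# assuming ~50% alveolar uptake over a 24-h exposure (a simplification).
dose_per_conc = qalv * 24.0 * 0.5 * fmet / bw

# Air concentration producing the target dose metric, per Monte Carlo iterate.
c_air = target_metric / dose_per_conc         # mg/m3
print("air conc. at 1e-6 risk: median {:.3g}, 5th pct {:.3g} mg/m3"
      .format(np.median(c_air), np.percentile(c_air, 5)))
```

The spread of `c_air` across iterations is exactly the "statistically derived range" the abstract describes: a distribution of exposure concentrations consistent with the fixed risk level.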

4.
Terje Aven. Risk Analysis, 2011, 31(10): 1515-1525
Few policies for risk management have created more controversy than the precautionary principle. A main problem is the extreme number of different definitions and interpretations. Almost all definitions of the precautionary principle identify "scientific uncertainties" as the trigger or criterion for its invocation; however, the meaning of this concept is not clear. For applying the precautionary principle it is not sufficient that the threats or hazards are uncertain. A stronger requirement is needed. This article provides an in-depth analysis of this issue. We question how the scientific uncertainties are linked to the interpretation of the probability concept, expected values, the results from probabilistic risk assessments, the common distinction between aleatory uncertainties and epistemic uncertainties, and the problem of establishing an accurate prediction model (cause-effect relationship). A new classification structure is suggested to define what scientific uncertainties mean.

5.
Proposed applications of increasingly sophisticated biologically-based computational models, such as physiologically-based pharmacokinetic models, raise the issue of how to evaluate whether the models are adequate for proposed uses, including safety or risk assessment. A six-step process for model evaluation is described. It relies on multidisciplinary expertise to address the biological, toxicological, mathematical, statistical, and risk assessment aspects of the modeling and its application. The first step is to have a clear definition of the purpose(s) of the model in the particular assessment; this provides critical perspectives on all subsequent steps. The second step is to evaluate the biological characterization described by the model structure based on the intended uses of the model and available information on the compound being modeled or related compounds. The next two steps review the mathematical equations used to describe the biology and their implementation in an appropriate computer program. At this point, the values selected for the model parameters (i.e., model calibration) must be evaluated. Thus, the fifth step is a combination of evaluating the model parameterization and calibration against data and evaluating the uncertainty in the model outputs. The final step is to evaluate specialized analyses that were done using the model, such as modeling of population distributions of parameters leading to population estimates for model outcomes or inclusion of early pharmacodynamic events. The process also helps to define the kinds of documentation that would be needed for a model to facilitate its evaluation and implementation.

6.
The treatment of uncertainties associated with modeling and risk assessment has recently attracted significant attention. The methodology and guidance for dealing with parameter uncertainty have been fairly well developed, and quantitative tools such as Monte Carlo modeling are often recommended. However, the issue of model uncertainty is still rarely addressed in practical applications of risk assessment. The use of several alternative models to derive a range of model outputs or risks is one of a few available techniques. This article addresses the often-overlooked issue of what we call "modeler uncertainty," i.e., differences in problem formulation, model implementation, and parameter selection that originate from subjective interpretation of the problem at hand. This study uses results from the Fruit Working Group, which was created under the International Atomic Energy Agency (IAEA) BIOMASS program (BIOsphere Modeling and ASSessment). Model-model and model-data intercomparisons reviewed in this study were conducted by the working group for a total of three different scenarios. The greatest uncertainty was found to result from modelers' interpretation of scenarios and approximations made by modelers. In scenarios that were unclear to modelers, the initial differences in model predictions were as high as seven orders of magnitude. Only after several meetings and discussions about specific assumptions did the predictions of the various models converge. Our study shows that parameter uncertainty (as evaluated by a probabilistic Monte Carlo assessment) may have contributed over one order of magnitude to the overall modeling uncertainty. The final model predictions ranged between one and three orders of magnitude, depending on the specific scenario. This study illustrates the importance of problem formulation and implementation of an analytic-deliberative process in risk characterization.

7.
Many models of exposure-related carcinogenesis, including traditional linearized multistage models and more recent two-stage clonal expansion (TSCE) models, belong to a family of models in which cells progress between successive stages, possibly undergoing proliferation at some stages, at rates that may depend (usually linearly) on biologically effective doses. Biologically effective doses, in turn, may depend nonlinearly on administered doses, due to PBPK nonlinearities. This article provides an exact mathematical analysis of the expected number of cells in the last ("malignant") stage of such a "multistage clonal expansion" (MSCE) model as a function of dose rate and age. The solution displays symmetries such that several distinct sets of parameter values provide identical fits to all epidemiological data, make identical predictions about the effects on risk of changes in exposure levels or timing, and yet make significantly different predictions about the effects on risk of changes in the composition of exposure that affect the pharmacodynamic dose-response relation. Several different predictions for the effects of such an intervention (such as reducing carcinogenic constituents of an exposure) that acts on only one or a few stages of the carcinogenic process may be equally consistent with all preintervention epidemiological data. This is an example of nonunique identifiability of model parameters and predictions from data. The new results on nonunique model identifiability presented here show that the effects of an intervention on changing age-specific cancer risks in an MSCE model can be either large or small, but which is the case cannot be predicted from preintervention epidemiological data and knowledge of biological effects of the intervention alone. Rather, biological data that identify which rate parameters hold for which specific stages are required to obtain unambiguous predictions. From epidemiological data alone, only a set of equally likely alternative predictions can be made for the effects on risk of such interventions.
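The symmetry the article describes can already be seen in the expected-count equations of a simplified, constant-parameter two-stage model (an illustration, not the article's exact solution):

```latex
% Expected counts of initiated (I) and malignant (M) cells obey linear ODEs:
\frac{d\,\mathbb{E}[I]}{dt} = \nu X + (\alpha-\beta)\,\mathbb{E}[I],
\qquad
\frac{d\,\mathbb{E}[M]}{dt} = \mu\,\mathbb{E}[I],
\qquad
\mathbb{E}[I](0) = \mathbb{E}[M](0) = 0,
% which integrate to
\mathbb{E}[M](t) \;=\; \frac{\nu\,\mu\,X}{\alpha-\beta}
\left[ \frac{e^{(\alpha-\beta)t}-1}{\alpha-\beta} \;-\; t \right].
```

Here the initiation rate ν and the malignant-conversion rate μ enter only through the product νμX, so rescaling ν by c and μ by 1/c fits incidence data identically while predicting different effects for an intervention that acts on initiation alone. This is exactly the kind of nonunique identifiability the article analyzes.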

8.
A screening approach is developed for volatile organic compounds (VOCs) to estimate exposures that correspond to levels measured in fluids and/or tissues in human biomonitoring studies. The approach makes use of a generic physiologically-based pharmacokinetic (PBPK) model coupled with exposure pattern characterization, Monte Carlo analysis, and quantitative structure property relationships (QSPRs). QSPRs are used for VOCs with minimal data to develop chemical-specific parameters needed for the PBPK model. The PBPK model is capable of simulating VOC kinetics following multiple routes of exposure, such as oral exposure via water ingestion and inhalation exposure during shower events. Using published human biomonitoring data of trichloroethylene (TCE), the generic model is evaluated to determine how well it estimates TCE concentrations in blood based on the known drinking water concentrations. In addition, Monte Carlo analysis is conducted to characterize the impact of the following factors: (1) uncertainties in the QSPR-estimated chemical-specific parameters; (2) variability in physiological parameters; and (3) variability in exposure patterns. The results indicate that uncertainty in chemical-specific parameters makes only a minor contribution to the overall variability and uncertainty in the predicted TCE concentrations in blood. The model is used in a reverse dosimetry approach to derive estimates of TCE concentrations in drinking water based on given measurements of TCE in blood, for comparison to the U.S. EPA's Maximum Contaminant Level in drinking water. This example demonstrates how a reverse dosimetry approach can be used to facilitate interpretation of human biomonitoring data in a health risk context by deriving external exposures that are consistent with a biomonitoring data set, thereby permitting comparison with health-based exposure guidelines.
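The reverse-dosimetry step can be sketched as follows: build a Monte Carlo distribution of blood concentration per unit drinking-water concentration from a kinetic model, then divide the measured biomarker by that distribution. A toy steady-state clearance model stands in for the PBPK model here, and all parameter distributions and the measured value are hypothetical; the 5 ug/L comparison value corresponds to the U.S. EPA MCL for TCE.

```python
# Reverse dosimetry sketch: convert a measured blood level into the distribution
# of drinking-water concentrations consistent with it (toy kinetics; all
# distributions hypothetical, not the article's PBPK model).
import numpy as np

rng = np.random.default_rng(7)
N = 20_000

water_intake = rng.lognormal(np.log(1.4),  0.30, N)   # L/day
bw           = rng.lognormal(np.log(70.0), 0.15, N)   # kg
cl           = rng.lognormal(np.log(0.8),  0.25, N)   # whole-body clearance, L/h/kg

# Steady-state blood conc. per unit water conc. (ug/L blood per ug/L water).
css_per_cw = water_intake / (cl * bw * 24.0)

measured_blood = 0.012                    # ug/L, a hypothetical biomonitoring value
cw = measured_blood / css_per_cw          # implied water concentration, ug/L
print(f"median implied water conc.: {np.median(cw):.1f} ug/L "
      f"(95th pct {np.percentile(cw, 95):.1f}); compare to the 5 ug/L MCL")
```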

9.
In risk analysis, the treatment of the epistemic uncertainty associated with the probability of occurrence of an event is fundamental. Traditionally, probabilistic distributions have been used to characterize the epistemic uncertainty due to imprecise knowledge of the parameters in risk models. On the other hand, it has been argued that in certain instances such uncertainty may be best accounted for by fuzzy or possibilistic distributions. This seems to be the case in particular for parameters for which the information available is scarce and of a qualitative nature. In practice, it is to be expected that a risk model contains some parameters affected by uncertainties that may be best represented by probability distributions and some other parameters that may be more properly described in terms of fuzzy or possibilistic distributions. In this article, a hybrid method that jointly propagates probabilistic and possibilistic uncertainties is considered and compared with pure probabilistic and pure fuzzy methods for uncertainty propagation. The analyses are carried out on a case study concerning the uncertainties in the probabilities of occurrence of accident sequences in an event tree analysis of a nuclear power plant.
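A minimal sketch of joint propagation in this spirit (not the article's event-tree case study): one parameter is sampled probabilistically while the other is carried through the model as the alpha-cut interval of a triangular fuzzy number, yielding belief and plausibility bounds on the output.

```python
# Hybrid propagation sketch: parameter A is probabilistic, parameter B is
# possibilistic (triangular fuzzy number). Each Monte Carlo draw of A carries
# the fuzzy interval of B through the model, giving lower/upper probability
# bounds (belief/plausibility). A generic illustration with invented values.
import numpy as np

rng = np.random.default_rng(3)
N = 5000

def model(a, b):
    return a * b            # e.g., accident-sequence probability as a product

def b_alpha_cut(alpha, lo=1e-4, mode=1e-3, hi=5e-3):
    """Alpha-cut interval of a triangular possibility distribution for B."""
    return lo + alpha * (mode - lo), hi - alpha * (hi - mode)

a_draws = rng.lognormal(np.log(2e-3), 0.5, N)   # probabilistic parameter A

# For a model monotone in B, the alpha = 0 cut gives the outer envelope; a full
# implementation repeats this over a grid of alpha levels.
b_lo, b_hi = b_alpha_cut(0.0)
low  = model(a_draws, b_lo)
high = model(a_draws, b_hi)

threshold = 1e-5
print(f"plausibility(P > {threshold:g}) = {np.mean(high > threshold):.3f}")
print(f"belief(P > {threshold:g})       = {np.mean(low  > threshold):.3f}")
```

The gap between the belief and plausibility curves is the signature of the possibilistic component; a pure Monte Carlo treatment would collapse it to a single CDF.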

10.
Experimental data from the Chemical Industry Institute of Toxicology (CIIT) are used to estimate the risk of squamous cell carcinoma of the nasal cavity in Fischer 344 (F344) rats over a range of ambient air concentrations of formaldehyde that includes current exposure guidelines for the workplace and home. These values are presented as a best-estimate envelope obtained from five mathematical dose-response formulations. The response of Sprague-Dawley (SD) rats dosed at 15 ppm in a separate study at New York University is consistent with the predicted lifetime response for F344 rats at a slightly lower concentration (13-14 ppm). A dose-related mortality effect beyond what is attributable to the occurrence of nasal carcinomas is found in F344 rats at all CIIT exposure levels (2, 6, and 15 ppm). There is no evidence of a mortality effect in B6C3F1 mice of the CIIT study, and data for SD rats of the NYU experiment are inconclusive. In the CIIT study, rats exposed to 15 ppm exhibited a high incidence of nasal cavity squamous cell carcinomas and polypoid adenomas. Polypoid adenomas were also observed with increased incidences at 2 ppm and 6 ppm. Statistical comparisons with matched controls and the low historical rate of spontaneous occurrence both suggest that polypoid adenomas may be a risk to F344 rats at exposure levels below the current Occupational Safety and Health Administration (OSHA) standard of 3 ppm. Squamous cell carcinomas were observed in two mice exposed to 15 ppm. This finding may be biologically significant since this tumor is rare and has not been previously reported in 4932 untreated B6C3F1 mice from recent National Toxicology Program (NTP) feeding studies.

11.
Security risk management is essential for ensuring effective airport operations. This article introduces AbSRiM, a novel agent-based modeling and simulation approach to perform security risk management for airport operations that uses formal sociotechnical models that include temporal and spatial aspects. The approach contains four main steps: scope selection, agent-based model definition, risk assessment, and risk mitigation. The approach is based on traditional security risk management methodologies, but uses agent-based modeling and Monte Carlo simulation at its core. Agent-based modeling is used to model threat scenarios, and Monte Carlo simulations are then performed with this model to estimate security risks. The use of the AbSRiM approach is demonstrated with an illustrative case study. This case study includes a threat scenario in which an adversary attacks an airport terminal with an improvised explosive device. The approach provides a promising way to include important elements, such as human aspects and spatiotemporal aspects, in the assessment of risk. More research is still needed to better identify the strengths and weaknesses of the AbSRiM approach in different case studies, but results demonstrate the feasibility of the approach and its potential.

12.
Louis Anthony Cox, Jr. Risk Analysis, 2006, 26(6): 1581-1599
This article introduces an approach to estimating the uncertain potential effects on lung cancer risk of removing a particular constituent, cadmium (Cd), from cigarette smoke, given the useful but incomplete scientific information available about its modes of action. The approach considers normal cell proliferation; DNA repair inhibition in normal cells affected by initiating events; proliferation, promotion, and progression of initiated cells; and death or sparing of initiated and malignant cells as they are further transformed to become fully tumorigenic. Rather than estimating unmeasured model parameters by curve fitting to epidemiological or animal experimental tumor data, we attempt rough estimates of parameters based on their biological interpretations and comparison to corresponding genetic polymorphism data. The resulting parameter estimates are admittedly uncertain and approximate, but they suggest a portfolio approach to estimating impacts of removing Cd that gives usefully robust conclusions. This approach views Cd as creating a portfolio of uncertain health impacts that can be expressed as biologically independent relative risk factors having clear mechanistic interpretations. Because Cd can act through many distinct biological mechanisms, it appears likely (subjective probability greater than 40%) that removing Cd from cigarette smoke would reduce smoker risks of lung cancer by at least 10%, although it is possible (consistent with what is known) that the true effect could be much larger or smaller. Conservative estimates and assumptions made in this calculation suggest that the true impact could be greater for some smokers. This conclusion appears to be robust to many scientific uncertainties about Cd and smoking effects.
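The portfolio arithmetic can be sketched directly: treat each mechanism as an independent uncertain relative-risk factor, multiply the factors, and tabulate how often the combined reduction reaches 10%. Every distribution below is invented for illustration; only the multiplicative structure reflects the approach described above.

```python
# Portfolio sketch: combined effect of removing a constituent that acts through
# several independent mechanisms, each an uncertain relative-risk factor.
# All distributions are invented for illustration.
import numpy as np

rng = np.random.default_rng(11)
N = 100_000

# Per-mechanism relative risks after removal (1.0 = no change, <1 = protective).
rr_proliferation = rng.triangular(0.85, 0.97, 1.00, N)
rr_dna_repair    = rng.triangular(0.80, 0.95, 1.00, N)
rr_promotion     = rng.triangular(0.75, 0.95, 1.00, N)
rr_apoptosis     = rng.triangular(0.90, 0.99, 1.05, N)  # could cut either way

combined  = rr_proliferation * rr_dna_repair * rr_promotion * rr_apoptosis
reduction = 1.0 - combined
print(f"P(risk reduction >= 10%): {np.mean(reduction >= 0.10):.2f}")
print(f"median reduction: {np.median(reduction):.1%}")
```

Because the factors multiply, modest reductions through several mechanisms compound, which is why a many-mechanism constituent can yield a robust combined effect even when each individual factor is uncertain.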

13.
Since the National Food Safety Initiative of 1997, risk assessment has been an important issue in food safety areas. Microbial risk assessment is a systematic process for describing and quantifying the potential to cause adverse health effects associated with exposure to microorganisms. Various dose-response models for estimating microbial risks have been investigated. We have considered four two-parameter models and four three-parameter models to evaluate variability among the models for microbial risk assessment, using infectivity and illness data from studies with human volunteers exposed to a variety of microbial pathogens. Model variability is measured in terms of estimated ED01s and ED10s, with the view that these effective dose levels correspond to the lower and upper limits of the 1% to 10% risk range generally recommended for establishing benchmark doses in risk assessment. Parameters of the statistical models are estimated using the maximum likelihood method. In this article, a weighted average of effective dose estimates from eight two- and three-parameter dose-response models, with weights determined by the Kullback information criterion, is proposed to address model uncertainties in microbial risk assessment. The proposed procedures for incorporating model uncertainties and making inferences are illustrated with human infection/illness dose-response data sets.
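A sketch of the weighting scheme follows, with two stand-in models and invented dose-response data. AIC is used here in place of the Kullback information criterion as a stand-in, since both produce weights of the same exponential form from fitted log-likelihoods.

```python
# Model-averaged ED10 sketch: fit two quantal dose-response models by maximum
# likelihood, then average their ED10s with information-criterion weights.
# Data and models are hypothetical stand-ins, not the article's eight models.
import numpy as np
from scipy.optimize import minimize, brentq

dose = np.array([0.0, 1.0, 3.0, 10.0, 30.0])   # scaled dose, hypothetical
n    = np.array([20,  20,  20,  20,   20])
k    = np.array([ 0,   2,   5,  12,   18])     # responders

def nll(p_fun, theta):
    p = np.clip(p_fun(dose, *theta), 1e-9, 1 - 1e-9)
    return -np.sum(k * np.log(p) + (n - k) * np.log(1 - p))

models = {
    "exponential":  (lambda d, b: 1 - np.exp(-b * d), [0.05]),
    "log-logistic": (lambda d, a, b: 1 / (1 + np.exp(-(a + b * np.log(d + 1e-9)))),
                     [-2.0, 1.0]),
}

eds, aics = [], []
for name, (p_fun, x0) in models.items():
    fit = minimize(lambda th: nll(p_fun, th), x0, method="Nelder-Mead")
    theta = fit.x
    ed10 = brentq(lambda d: p_fun(d, *theta) - 0.10, 1e-6, 1e3)
    eds.append(ed10)
    aics.append(2 * fit.fun + 2 * len(theta))   # 2*NLL + 2*(n params)

w = np.exp(-0.5 * (np.array(aics) - min(aics)))
w /= w.sum()
print(f"weighted-average ED10: {np.dot(w, eds):.2f} (weights {np.round(w, 2)})")
```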

14.
Current methods for cancer risk assessment result in single values, without any quantitative information on the uncertainties in these values. Therefore, single risk values could easily be overinterpreted. In this study, we discuss a full probabilistic cancer risk assessment approach in which all the generally recognized uncertainties in both exposure and hazard assessment are quantitatively characterized and probabilistically evaluated, resulting in a confidence interval for the final risk estimate. The methodology is applied to three example chemicals (aflatoxin, N-nitrosodimethylamine, and methyleugenol). These examples illustrate that the uncertainty in a cancer risk estimate may be huge, making single-value estimates of cancer risk meaningless. Further, a risk based on linear extrapolation tends to be lower than the upper 95% confidence limit of a probabilistic risk estimate, and in that sense it is not conservative. Our conceptual analysis showed that there are two possible basic approaches for cancer risk assessment, depending on the interpretation of the dose-incidence data measured in animals. However, it remains unclear which of the two interpretations is the more adequate one, adding an additional uncertainty to the already huge confidence intervals for cancer risk estimates.
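The contrast between a full probabilistic estimate and a single-value linear extrapolation can be sketched as follows; the point-of-departure and extrapolation-factor distributions are purely illustrative.

```python
# Sketch: probabilistic cancer-risk interval versus a single-value linear
# extrapolation from a point of departure. All distributions are illustrative.
import numpy as np

rng = np.random.default_rng(5)
N = 50_000

bmd10 = rng.lognormal(np.log(1.0), 0.4, N)   # animal BMD10, mg/kg/day
inter = rng.lognormal(np.log(4.0), 0.6, N)   # animal-to-human factor
intra = rng.lognormal(np.log(3.0), 0.6, N)   # human variability factor

exposure = 1e-4                              # fixed human exposure, mg/kg/day
# Linear-from-POD risk: 10% response scaled down proportionally with dose.
risk = 0.10 * exposure / (bmd10 / (inter * intra))

print(f"median risk {np.median(risk):.2e}; 95% CI "
      f"[{np.percentile(risk, 2.5):.2e}, {np.percentile(risk, 97.5):.2e}]")
# The corresponding single-value estimate uses point values (1.0, 4.0, 3.0):
print(f"single-value linear estimate: {0.10 * exposure * 12.0 / 1.0:.2e}")
```

Because the single-value estimate sits near the center of the Monte Carlo distribution, it falls well below the upper 95% confidence limit, which is the sense in which the abstract says linear extrapolation is not conservative.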

15.
In recent years physiologically based pharmacokinetic models have come to play an increasingly important role in risk assessment for carcinogens. The hope is that they can help open the black box between external exposure and carcinogenic effects to experimental observations, and improve both high-dose to low-dose and interspecies projections of risk. However, to date, there have been only relatively preliminary efforts to assess the uncertainties in current modeling results. In this paper we compare the physiologically based pharmacokinetic models (and model predictions of risk-related overall metabolism) that have been produced by seven different sets of authors for perchloroethylene (tetrachloroethylene). The most striking conclusion from the data is that most of the differences in risk-related model predictions are attributable to the choice of the data sets used for calibrating the metabolic parameters. Second, it is clear that the bottom-line differences among the model predictions are appreciable. Overall, the ratios of low-dose human to bioassay rodent metabolism spanned a 30-fold range for the six available human/rat comparisons, and the seven predicted ratios of low-dose human to bioassay mouse metabolism spanned a 13-fold range. (The greater range for the rat/human comparison is attributable to a structural assumption by one author group of competing linear and saturable pathways, and their conclusion that the dangerous saturable pathway constitutes a minor fraction of metabolism in rats.) It is clear that there are a number of opportunities for modelers to make different choices of model structure, interpretive assumptions, and calibrating data in the process of constructing pharmacokinetic models for use in estimating "delivered" or "biologically effective" dose for carcinogenesis risk assessments. We believe that in presenting the results of such modeling studies, it is important for researchers to explore the results of alternative, reasonably likely approaches for interpreting the available data, and either show that any conclusions they make are relatively insensitive to particular interpretive choices or acknowledge the differences in conclusions that would result from plausible alternative views of the world.

16.
Occupational exposure modeling is widely used in the context of the E.U. regulation on the registration, evaluation, authorization, and restriction of chemicals (REACH). First-tier tools, such as the European Centre for Ecotoxicology and TOxicology of Chemicals (ECETOC) targeted risk assessment (TRA) or Stoffenmanager, are used to screen a wide range of substances. Those of concern are investigated further using second-tier tools, e.g., the Advanced REACH Tool (ART). Local sensitivity analysis (SA) methods are used here to determine dominant factors for three models commonly used within the REACH framework: ECETOC TRA v3, Stoffenmanager 4.5, and ART 1.5. Based on the results of the SA, the robustness of the models is assessed. For ECETOC, the process category (PROC) is the most important factor. A failure to identify the correct PROC has severe consequences for the exposure estimate. Stoffenmanager is the most balanced model, and decision-making uncertainties in one modifying factor have less severe consequences there. ART requires a careful evaluation of the decisions in the source compartment, since it accounts for ~75% of the total exposure range, which spans 20-22 orders of magnitude. Our results indicate that there is a trade-off between the accuracy and the precision of the models. Previous studies suggested that ART may lead to more accurate results in well-documented exposure situations. However, the choice of the adequate model should ultimately be determined by the quality of the available exposure data: if the practitioner is uncertain concerning two or more decisions in the entry parameters, Stoffenmanager may be more robust than ART.
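Because these tools are essentially products of modifying factors, each factor's contribution to the output range can be read off in log space, which is the essence of the local SA described above. The sketch below uses hypothetical factors and multiplier ranges, not the actual parameterization of TRA, Stoffenmanager, or ART.

```python
# Local sensitivity sketch for a multiplicative exposure model; tier-1/2 REACH
# tools are broadly of this form, so log-range contributions simply add.
# The factors and multiplier ranges below are hypothetical.
import numpy as np

factors = {  # name: (lowest multiplier, highest multiplier)
    "substance_emission": (0.3, 10.0),
    "activity_handling":  (0.1,  3.0),
    "local_controls":     (0.1,  1.0),
    "room_dilution":      (0.3,  3.0),
}
BASE = 0.5  # mg/m3 with every factor at 1.0

def exposure(mults):
    return BASE * np.prod(mults)

spans = {k: np.log10(hi / lo) for k, (lo, hi) in factors.items()}
total = sum(spans.values())
for name, s in sorted(spans.items(), key=lambda kv: -kv[1]):
    print(f"{name:20s} {s:4.2f} orders of magnitude ({100 * s / total:3.0f}% of range)")

worst = exposure([hi for _, hi in factors.values()])
best  = exposure([lo for lo, _ in factors.values()])
print(f"full model output range: {np.log10(worst / best):.1f} orders of magnitude")
```

A factor that dominates the log-range, as the source compartment does in ART, is exactly where an input-selection error translates into the largest swing in the final estimate.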

17.
The streptogramin antimicrobial combination quinupristin-dalfopristin (QD) has been used in the United States since late 1999 to treat patients with vancomycin-resistant Enterococcus faecium (VREF) infections. Another streptogramin, virginiamycin (VM), is used as a growth promoter and therapeutic agent in farm animals in the United States and other countries. Many chickens test positive for QD-resistant E. faecium, raising concern that VM use in chickens might compromise QD effectiveness against VREF infections by promoting development of QD-resistant strains that can be transferred to human patients. Despite the potential importance of this threat to human health, quantifying the risk via traditional farm-to-fork modeling has proved extremely difficult. Enough key data (mainly on microbial loads at each stage) are lacking so that such modeling amounts to little more than choosing a set of assumptions to determine the answer. Yet, regulators cannot keep waiting for more data. Patients prescribed QD are typically severely ill, immunocompromised people for whom other treatment options have not readily been available. Thus, there is a pressing need for sound risk assessment methods to inform risk management decisions for VM/QD using currently available data. This article takes a new approach to the QD-VM risk modeling challenge. Recognizing that the usual farm-to-fork ("forward chaining") approach commonly used in antimicrobial risk assessment for food animals is unlikely to produce reliable results soon enough to be useful, we instead draw on ideas from traditional fault tree analysis ("backward chaining") to reverse the farm-to-fork process and start with readily available human data on VREF case loads and QD resistance rates. Combining these data with recent genogroup frequency data for humans, chickens, and other sources (Willems et al., 2000, 2001) allows us to quantify potential human health risks from VM in chickens in both the United States and Australia, two countries where regulatory action for VM is being considered. We present a risk simulation model, thoroughly grounded in data, that incorporates recent nosocomial transmission and genetic typing data. The model is used to estimate human QD treatment failures over the next five years with and without continued VM use in chickens. The quantitative estimates and probability distributions were implemented in a Monte Carlo simulation model for a five-year horizon beginning in the first quarter of 2002. In Australia, a Q1-2002 ban of virginiamycin would likely reduce average attributable treatment failures by 0.35 x 10^-3 cases, expected mortalities by 5.8 x 10^-5 deaths, and life years lost by 1.3 x 10^-3 for the entire population over five years. In the United States, where the number of cases of VRE is much higher, a Q1-2002 ban on VM is predicted to reduce average attributable treatment failures by 1.8 cases in the entire population over five years; expected mortalities by 0.29 cases; and life years lost by 6.3 over a five-year period. The model shows that the theoretical statistical human health benefits of a VM ban range from zero to less than one statistical life saved in both Australia and the United States over the next five years and are rapidly decreasing. Sensitivity analyses indicate that this conclusion is robust to key data gaps and uncertainties, e.g., about the extent of resistance transfer from chickens to people.
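The backward-chaining logic can be sketched compactly: start from observed human case loads and resistance fractions, attribute a share to animal sources, and accumulate treatment failures over the horizon. Every input distribution below is hypothetical rather than taken from the article.

```python
# Backward-chaining ("fault tree") sketch: start from human VREF case loads and
# attribute a share of QD resistance to animal sources, then estimate five-year
# treatment failures avoidable by a ban. All input distributions are hypothetical.
import numpy as np

rng = np.random.default_rng(13)
N = 50_000

cases_per_year = rng.normal(2500, 300, N).clip(min=0)   # QD-treated VREF cases
p_resist       = rng.beta(2, 98, N)                     # QD resistance fraction
p_from_animal  = rng.beta(2, 18, N)                     # share attributable to VM use
p_fail_given_r = rng.beta(5, 5, N)                      # failure prob. if resistant

attrib_failures = 5 * cases_per_year * p_resist * p_from_animal * p_fail_given_r
print(f"mean attributable failures over 5 years: {attrib_failures.mean():.2f}")
print(f"95th percentile: {np.percentile(attrib_failures, 95):.2f}")
```

The appeal of this direction of analysis is visible in the structure: every multiplier is anchored to human surveillance data, so no farm-level microbial-load estimates are required.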

18.
The qualitative and quantitative evaluation of risk in developmental toxicology has been discussed in several recent publications.(1-3) A number of issues are still to be resolved in this area. The qualitative evaluation and interpretation of end points in developmental toxicology depends on an understanding of the biological events leading to the end points observed, the relationships among end points, and their relationship to dose and to maternal toxicity. The interpretation of these end points is also affected by the statistical power of the experiments used for detecting the various end points observed. The quantitative risk assessment attempts to estimate human risk for developmental toxicity as a function of dose. The current approach is to apply safety (uncertainty) factors to the no-observed-effect level (NOEL). An alternative presented and discussed here is to model the experimental data and apply a safety factor to an estimated risk level to achieve an "acceptable" level of risk. In cases where the dose-response curves upward, this approach provides a conservative estimate of risk. This procedure does not preclude the existence of a threshold dose. More research is needed to develop appropriate dose-response models that can provide better estimates for low-dose extrapolation of developmental effects.
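The two approaches can be contrasted with a worked example using invented numbers: a NOEL-based reference dose versus one anchored to a modeled benchmark dose at a low risk level.

```latex
% Worked comparison with invented numbers: NOEL-plus-safety-factors versus a
% modeled point of departure (e.g., a lower bound on the dose at 5% extra risk).
\text{RfD}_{\text{NOEL}} \;=\; \frac{\text{NOEL}}{UF_{\text{inter}} \times UF_{\text{intra}}}
\;=\; \frac{10\ \text{mg/kg/day}}{10 \times 10} \;=\; 0.1\ \text{mg/kg/day},
\qquad
\text{RfD}_{\text{model}} \;=\; \frac{\text{BMDL}_{05}}{100}
\;=\; \frac{6\ \text{mg/kg/day}}{100} \;=\; 0.06\ \text{mg/kg/day}.
```

Unlike the NOEL, the modeled point of departure uses the full dose-response curve and carries an explicit risk level, so it does not depend on which experimental dose happened to show no effect.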

19.
Wildfires are a global phenomenon that in some circumstances can result in human casualties, economic loss, and ecosystem service degradation. In this article we spatially identify wildfire risk transmission pathways and locate the areas of highest exposure of human populations to wildland fires under severe, but not uncommon, weather events. We quantify varying levels of exposure in terms of the population potentially affected and tie the exposure back to the spatial source of the risk for the Front Range of Colorado, USA. We use probabilistic fire simulation modeling to address where fire ignitions are most likely to cause the highest impact to human communities, and to explore the role that various landowners play in that transmission of risk. Our results indicated that, given an ignition and the right fire weather conditions, large areas along the Front Range in Colorado could be exposed to wildfires with high potential to impact human populations, and that, overall, private ignitions have the potential to impact more people than federal ignitions. These results can be used to identify high-priority areas for wildfire risk mitigation using various mitigation tools.

20.
Noncancer risk assessment traditionally relies on applied dose measures, such as concentration in inhaled air or in drinking water, to characterize no-effect levels or low-effect levels in animal experiments. Safety factors are then incorporated to address the uncertainties associated with extrapolating across species, dose levels, and routes of exposure, as well as to account for the potential impact of variability of human response. A risk assessment for chloropentafluorobenzene (CPFB) was performed in which a physiologically based pharmacokinetic model was employed to calculate an internal measure of effective tissue dose appropriate to each toxic endpoint. The model accurately describes the kinetics of CPFB in both rodents and primates. The model calculations of internal dose at the no-effect and low-effect levels in animals were compared with those calculated for potential human exposure scenarios. These calculations were then used in place of default interspecies and route-to-route safety factors to determine safe human exposure conditions. The impact of model parameter uncertainty, estimated by a Monte Carlo technique, was also incorporated into the assessment. The approach used for CPFB is recommended as a general methodology for noncancer risk assessment whenever the necessary pharmacokinetic data can be obtained.
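The internal-dose comparison can be sketched as follows, with a toy saturable-metabolism model standing in for the CPFB PBPK model; all parameter values and distributions are invented for illustration.

```python
# Internal-dose margin sketch: compare a tissue dose metric at the animal
# no-effect exposure with the same metric at a human exposure scenario, in
# place of default interspecies/route safety factors. Toy steady-state kinetics
# with Monte Carlo on the human parameters; all values hypothetical.
import numpy as np

rng = np.random.default_rng(17)
N = 20_000

def internal_dose(c_air, vmax, km, qalv, bw):
    """Steady-state metabolized dose (mg/kg/day) for inhaled conc. c_air (mg/m3)."""
    uptake = qalv * 24.0 * c_air * 0.5 / bw    # absorbed dose, assuming 50% uptake
    return vmax * uptake / (km + uptake)       # saturable (Michaelis-Menten) metabolism

# Rat no-effect level, evaluated at point estimates.
rat_noel_metric = internal_dose(c_air=500.0, vmax=20.0, km=15.0, qalv=0.006, bw=0.25)

# Human exposure scenario with parameter uncertainty and variability.
human_metric = internal_dose(
    c_air=1.0,
    vmax=rng.lognormal(np.log(12.0), 0.30, N),
    km=rng.lognormal(np.log(15.0),   0.30, N),
    qalv=rng.lognormal(np.log(5.0),  0.20, N),
    bw=rng.lognormal(np.log(70.0),   0.15, N),
)
margin = rat_noel_metric / human_metric
print(f"median internal-dose margin: {np.median(margin):.0f}; "
      f"5th percentile: {np.percentile(margin, 5):.0f}")
```

The 5th-percentile margin plays the role that default safety factors play in the traditional approach, but it is derived from the kinetics and its quantified uncertainty rather than imposed by convention.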
