Similar Articles
20 similar articles found.
1.
Ethylene oxide is a gas produced in large quantities in the United States that is used primarily as a chemical intermediate in the production of ethylene glycol, propylene glycol, non-ionic surfactants, ethanolamines, glycol ethers, and other chemicals. It has been well established that ethylene oxide can induce cancer, genetic, reproductive and developmental, and acute health effects in animals. The U.S. Environmental Protection Agency is currently developing both a cancer potency factor and a reference concentration (RfC) for ethylene oxide. This study used the rich database on the reproductive and developmental effects of ethylene oxide to develop a probabilistic characterization of possible regulatory thresholds for ethylene oxide. This analysis was based on the standard regulatory approach for noncancer risk assessment, but involved several innovative elements, such as: (1) the use of advanced statistical methods to account for correlations in developmental outcomes among littermates and allow for simultaneous control of covariates (such as litter size); (2) the application of a probabilistic approach for characterizing the uncertainty in extrapolating the animal results to humans; and (3) the use of a quantitative approach to account for heterogeneity among the human population. This article presents several classes of results, including: (1) probabilistic characterizations of ED10s for two quantal reproductive outcomes (resorption and fetal death), (2) probabilistic characterizations of one developmental outcome (the dose expected to yield a 5% reduction in fetal (or pup) weight), (3) estimates of the RfCs that would result from using these values in the standard regulatory approach for noncancer risk assessment, and (4) a probabilistic characterization of the level of ethylene oxide exposure that would be expected to yield a 1/1,000 increase in the risk of reproductive or developmental outcomes in exposed human populations.
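As a rough illustration of the kind of probabilistic threshold characterization described in this abstract, the sketch below builds a distribution of candidate RfC-like values by dividing a sampled animal point of departure (an ED10 stand-in) by sampled interspecies and intraspecies adjustment factors. All distributions and numbers are hypothetical placeholders, not values from the study.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Hypothetical animal point of departure (e.g., an ED10 for fetal resorption), in ppm.
ed10_animal = rng.lognormal(mean=np.log(50.0), sigma=0.3, size=n)

# Animal-to-human and human-variability adjustments treated as lognormal uncertainty
# distributions instead of fixed 10x defaults (values are illustrative only).
interspecies = rng.lognormal(mean=np.log(3.0), sigma=0.5, size=n)
intraspecies = rng.lognormal(mean=np.log(3.0), sigma=0.5, size=n)

candidate_rfc = ed10_animal / (interspecies * intraspecies)

print("median candidate RfC:", np.median(candidate_rfc))
print("5th-95th percentile:", np.percentile(candidate_rfc, [5, 95]))
```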

2.
At the request of the U.S. Environmental Protection Agency (EPA), the National Research Council (NRC) recently completed a major report, Science and Decisions: Advancing Risk Assessment, that is intended to strengthen the scientific basis, credibility, and effectiveness of risk assessment practices and subsequent risk management decisions. The report describes the challenges faced by risk assessment and the need to consider improvements in both the technical analyses of risk assessments (i.e., the development and use of scientific information to improve risk characterization) and the utility of risk assessments (i.e., making assessments more relevant and useful for risk management decisions). The report tackles a number of topics relating to improvements in the process, including the design and framing of risk assessments, uncertainty and variability characterization, selection and use of defaults, unification of cancer and noncancer dose-response assessment, cumulative risk assessment, and the need to increase EPA's capacity to address these improvements. This article describes and summarizes the NRC report, with an eye toward its implications for risk assessment practices at EPA.

3.
Methods for evaluating the hazards associated with noncancer responses using epidemiologic data are considered. The methods for noncancer risk assessment have largely been developed for experimental data, and are not always suitable for the more complex structure of epidemiologic data. In epidemiology, the measurement of the response and the exposure is often either continuous or dichotomous. For a continuous noncancer response modeled with multiple regression, a variety of endpoints may be examined: (1) the concentration associated with absolute or relative decrements in response; (2) a threshold concentration associated with no change in response; and (3) the concentration associated with a particular added risk of impairment. For a dichotomous noncancer response modeled with logistic regression, concentrations associated with a specified added/extra risk or with a threshold response may be estimated. No-observed-effect concentrations may also be estimated for categorizations of exposures for both continuous and dichotomous responses, but these may depend on the arbitrary categories chosen. Respiratory function in miners exposed to coal dust is used to illustrate these methods.
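The added-risk endpoint for a dichotomous response can be illustrated with a small sketch: given a fitted logistic dose-response model, compute the risk above background and invert it to find the concentration associated with a specified added risk. The coefficients below are hypothetical, not taken from the coal-dust example.

```python
import numpy as np
from scipy.optimize import brentq

# Hypothetical fitted logistic model: logit P(impairment) = b0 + b1 * concentration
b0, b1 = -3.0, 0.04    # illustrative coefficients

def p(conc):
    return 1.0 / (1.0 + np.exp(-(b0 + b1 * conc)))

def added_risk(conc):
    return p(conc) - p(0.0)            # risk above background

# Concentration associated with a 1% added risk of impairment.
target = 0.01
bmc = brentq(lambda c: added_risk(c) - target, 0.0, 500.0)
print("added risk at 50 exposure units:", added_risk(50.0))
print("concentration giving 1% added risk:", bmc)
```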

4.
Risk characterization in a study population relies on cases of disease or death that are causally related to the exposure under study. The number of such cases, so-called "excess" cases, is not just an indicator of the impact of the risk factor in the study population, but also an important determinant of statistical power for assessing aspects of risk such as age-time trends and susceptible subgroups. In determining how large a population to study and/or how long to follow a study population to accumulate sufficient excess cases, it is necessary to predict future risk. In this study, focusing on models involving excess risk with possible effect modification, we describe a method for predicting the expected number of excess cases and assess the uncertainty in those predictions. We do this by extending Bayesian age-period-cohort (APC) models for rate projection to include exposure-related excess risk with possible effect modification by, e.g., age at exposure and attained age. The method is illustrated using the follow-up study of Japanese atomic bomb survivors, one of the primary bases for determining long-term health effects of radiation exposure and for assessing risk for radiation protection purposes. Using models selected by a predictive-performance measure obtained on test data reserved for cross-validation, we project excess counts due to radiation exposure as well as lifetime risk measures, namely the risk of exposure-induced death (REID) and the loss of life expectancy (LLE), associated with cancer and noncancer disease deaths in the A-bomb survivor cohort.
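A minimal sketch of the excess-case calculation that such projections rest on is shown below: expected excess deaths are the sum over strata of person-years times the baseline rate times an excess relative risk (ERR) that is modified by age at exposure and attained age. The ERR functional form and all parameter values are hypothetical illustrations, and the sketch omits the Bayesian APC machinery used in the actual analysis.

```python
import numpy as np

# Hypothetical excess relative risk (ERR) model with effect modification by age at
# exposure (e) and attained age (a): ERR(d, e, a) = beta * d * exp(g1*(e - 30)) * (a/70)**g2
beta, g1, g2 = 0.5, -0.03, -1.5            # illustrative parameters (per Gy)

def err(dose_gy, age_at_exposure, attained_age):
    return beta * dose_gy * np.exp(g1 * (age_at_exposure - 30)) * (attained_age / 70.0) ** g2

# Expected excess deaths = sum over strata of person-years x baseline rate x ERR.
person_years  = np.array([1.0e4, 2.0e4, 1.5e4])       # hypothetical follow-up strata
baseline_rate = np.array([5.0e-4, 2.0e-3, 8.0e-3])    # deaths per person-year
attained_age  = np.array([50.0, 65.0, 80.0])
dose_gy, age_at_exposure = 1.0, 20.0

excess = person_years * baseline_rate * err(dose_gy, age_at_exposure, attained_age)
print("expected excess deaths by stratum:", excess.round(2))
print("total expected excess deaths:", float(excess.sum().round(1)))
```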

5.
There is increasing interest in the integration of quantitative risk analysis with benefit-cost and cost-effectiveness methods to evaluate environmental health policy making and perform comparative analyses. However, the combined use of these methods has revealed methodological deficiencies, and the lack of useful analytical frameworks currently constrains the utility of comparative risk and policy analyses. A principal issue in integrating risk and economic analysis is the lack of common performance metrics, particularly when conducting comparative analyses of regulations with disparate health endpoints (e.g., cancer and noncancer effects or risk-benefit analysis) and quantitative estimation of cumulative risk, whether from exposure to single agents with multiple health impacts or from exposure to mixtures. We propose a general quantitative framework and examine assumptions required for performing analyses of health risks and policies. We review existing and proposed risk and health-impact metrics for evaluating policies designed to protect public health from environmental exposures, and identify their strengths and weaknesses with respect to their use in a general comparative risk and policy analysis framework. Case studies are presented to demonstrate applications of this framework with risk-benefit and air pollution risk analyses. Through this analysis, we hope to generate discussion regarding the data requirements, analytical approaches, and assumptions required for general models to be used in comparative risk and policy analysis.

6.
Of the 188 hazardous air pollutants (HAPs) listed in the Clean Air Act, only a handful have information on human health effects, derived primarily from animal and occupational studies. Lack of consistent monitoring data on ambient air toxics makes it difficult to assess the extent of low-level, chronic, ambient exposures to HAPs that could affect human health, and limits attempts to prioritize and evaluate policy initiatives for emissions reduction. Modeled outdoor HAP concentration estimates from the U.S. Environmental Protection Agency's Cumulative Exposure Project were used to characterize the extent of the air toxics problem in California for the base year of 1990. These air toxics concentration estimates were used with chronic toxicity data to estimate cancer and noncancer hazards for individual HAPs and the risks posed by multiple pollutants. Although hazardous air pollutants are ubiquitous in the environment, potential cancer and noncancer health hazards posed by ambient exposures are geographically concentrated in three urbanized areas and in a few rural counties. This analysis estimated a median excess individual cancer risk of 2.7E-4 for all air toxics concentrations and 8,600 excess lifetime cancer cases, 70% of which were attributable to four pollutants: polycyclic organic matter, 1,3-butadiene, formaldehyde, and benzene. For noncancer effects, the analysis estimated a total hazard index representing the combined effect of all HAPs considered, with each pollutant contributing the ratio of its estimated concentration to its reference concentration. The median value of the index across census tracts was 17, due primarily to acrolein and chromium concentration estimates. On average, HAP concentrations and cancer and noncancer health risks originate mostly from area and mobile source emissions, although there are several locations in the state where point sources account for a large portion of estimated concentrations and health risks. Risk estimates from this study can provide guidance for prioritizing research, monitoring, and regulatory intervention activities to reduce potential hazards to the general population. Improved ambient monitoring efforts can help clarify uncertainties inherent in this analysis.
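The two aggregation rules used in this analysis, a hazard index built from concentration-to-RfC ratios and an additive excess cancer risk built from concentration times unit risk, can be sketched as below. The concentrations, RfCs, and unit risk are hypothetical placeholders for a single census tract.

```python
# Hypothetical ambient concentrations (ug/m3) and chronic toxicity values for one census tract.
conc      = {"acrolein": 0.5, "chromium": 0.002, "benzene": 2.0}
rfc       = {"acrolein": 0.02, "chromium": 0.0001, "benzene": 30.0}   # reference concentrations, ug/m3
unit_risk = {"benzene": 7.8e-6}                                       # cancer unit risk per ug/m3 (illustrative)

# Noncancer hazard index: sum of concentration-to-RfC ratios across pollutants.
hazard_index = sum(conc[c] / rfc[c] for c in rfc)

# Additive excess lifetime cancer risk: concentration times inhalation unit risk.
cancer_risk = sum(conc[c] * unit_risk[c] for c in unit_risk)

print(f"hazard index = {hazard_index:.1f}")
print(f"excess lifetime cancer risk = {cancer_risk:.1e}")
```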

7.
Risk Analysis, 2018, 38(5): 1052-1069
This study investigated whether, in the absence of chronic noncancer toxicity data, short-term noncancer toxicity data can be used to predict chronic toxicity effect levels by focusing on the dose-response relationship instead of a critical effect. Data from National Toxicology Program (NTP) technical reports were extracted and modeled using the Environmental Protection Agency's Benchmark Dose Software. Best-fit and minimum benchmark doses (BMDs) and benchmark dose lower limits (BMDLs) were modeled for all NTP pathologist-identified significant nonneoplastic lesions, final mean body weight, and mean organ weight of 41 chemicals tested by NTP between 2000 and 2012. Models were then developed at the chemical level using orthogonal regression techniques to predict chronic (two-year) noncancer health effect levels from the results of the short-term (three-month) toxicity data. The findings indicate that short-term animal studies may reasonably provide a quantitative estimate of a chronic BMD or BMDL. This can allow for faster development of human health toxicity values for risk assessment for chemicals that lack chronic toxicity data.
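A minimal sketch of the chemical-level prediction step is given below: an orthogonal (errors-in-both-variables) regression of log chronic BMDLs on log subchronic BMDLs, followed by prediction of a chronic BMDL for a chemical that has only a three-month study. The paired values are fabricated for illustration; the actual study modeled many endpoints per chemical with EPA's Benchmark Dose Software.

```python
import numpy as np
from scipy import odr

# Hypothetical paired benchmark dose estimates (mg/kg-day): subchronic (3-month) vs. chronic (2-year).
bmdl_subchronic = np.array([12.0, 45.0, 3.0, 150.0, 0.8, 22.0])
bmdl_chronic    = np.array([ 4.0, 18.0, 1.1,  60.0, 0.3,  9.0])

# Orthogonal regression on the log10 scale, since both the subchronic and
# chronic BMDLs are estimated with error.
def line(beta, x):
    return beta[0] + beta[1] * x

fit = odr.ODR(odr.RealData(np.log10(bmdl_subchronic), np.log10(bmdl_chronic)),
              odr.Model(line), beta0=[0.0, 1.0]).run()
intercept, slope = fit.beta

# Predict a chronic BMDL for a chemical with only a 3-month study (hypothetical value).
new_subchronic = 30.0
predicted_chronic = 10 ** (intercept + slope * np.log10(new_subchronic))
print(f"predicted chronic BMDL ~ {predicted_chronic:.1f} mg/kg-day")
```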

8.
In Part 1 of this article we developed an approach for the calculation of cancer effect measures for life cycle assessment (LCA). In this article, we propose and evaluate the method for the screening of noncancer toxicological health effects. This approach draws on the noncancer health risk assessment concept of benchmark dose, while noting important differences with regulatory applications in the objectives of an LCA study. We adopt the central-tendency estimate of the toxicological effect dose inducing a 10% response over background, the ED10, to provide a consistent point of departure for default linear low-dose response estimates (βED10). This explicit estimation of low-dose risks, while necessary in LCA, is in marked contrast to many traditional procedures for noncancer assessments. For pragmatic reasons, mechanistic thresholds and nonlinear low-dose response curves were not implemented in the presented framework. In essence, for the comparative needs of LCA, we propose that one initially screens alternative activities or products on the degree to which the associated chemical emissions erode their margins of exposure, which may or may not be manifested as increases in disease incidence. We illustrate the method here by deriving the βED10 slope factors from bioassay data for 12 chemicals and outline some of the possibilities for extrapolation from other more readily available measures, such as no-observed-adverse-effect levels (NOAELs), avoiding uncertainty factors that lead to inconsistent degrees of conservatism from chemical to chemical. These extrapolations facilitated the initial calculation of slope factors for an additional 403 compounds, ranging from 10^-6 to 10^3 (risk per mg/kg-day dose). The potential consequences of the effects are taken into account in a preliminary approach by combining the βED10 with the severity measure disability-adjusted life years (DALYs), providing a screening-level estimate of the potential consequences associated with exposures, integrated over time and space, to a given mass of chemical released into the environment for use in LCA.
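A hedged sketch of the screening arithmetic follows, assuming the βED10 slope is taken as a linear factor anchored at the ED10 (0.1 divided by the ED10) and combined with a DALY severity weight; the dose, ED10, and DALY values are illustrative only.

```python
# Hypothetical screening calculation for one chemical.
ed10 = 25.0                      # mg/kg-day, central-tendency ED10 from bioassay modeling
beta_ed10 = 0.1 / ed10           # linear low-dose slope: 10% extra response at the ED10

lifetime_dose = 1e-4             # mg/kg-day, population-averaged dose from the emission (illustrative)
extra_risk = beta_ed10 * lifetime_dose    # expected extra incidence per exposed person

daly_per_case = 2.5              # illustrative severity weight for this endpoint
impact = extra_risk * daly_per_case       # DALYs per exposed person
print(f"slope factor = {beta_ed10:.2e} per mg/kg-day; screening impact = {impact:.2e} DALY/person")
```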

9.
Noncancer risk assessment traditionally relies on applied dose measures, such as concentration in inhaled air or in drinking water, to characterize no-effect levels or low-effect levels in animal experiments. Safety factors are then incorporated to address the uncertainties associated with extrapolating across species, dose levels, and routes of exposure, as well as to account for the potential impact of variability of human response. A risk assessment for chloropentafluorobenzene (CPFB) was performed in which a physiologically based pharmacokinetic model was employed to calculate an internal measure of effective tissue dose appropriate to each toxic endpoint. The model accurately describes the kinetics of CPFB in both rodents and primates. The model calculations of internal dose at the no-effect and low-effect levels in animals were compared with those calculated for potential human exposure scenarios. These calculations were then used in place of default interspecies and route-to-route safety factors to determine safe human exposure conditions. Estimates of the impact of model parameter uncertainty, as estimated by a Monte Carlo technique, also were incorporated into the assessment. The approach used for CPFB is recommended as a general methodology for noncancer risk assessment whenever the necessary pharmacokinetic data can be obtained.
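The sketch below illustrates the general idea of replacing default safety factors with internal-dose comparisons under parameter uncertainty, using a toy steady-state expression in place of a full PBPK model and lognormal parameter distributions sampled by Monte Carlo. All parameter values are hypothetical and unrelated to CPFB.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 50_000

def steady_state_tissue_conc(inhaled_ppm, blood_air_partition, clearance):
    """Toy stand-in for a PBPK-derived internal dose metric (steady-state tissue concentration)."""
    return inhaled_ppm * blood_air_partition / clearance

# Internal dose at the animal no-effect exposure (hypothetical fixed parameters).
animal_internal_noael = steady_state_tissue_conc(100.0, 1.5, 2.0)

# Human parameters sampled from distributions to reflect parameter uncertainty and variability.
partition = rng.lognormal(np.log(1.5), 0.2, n)
clearance = rng.lognormal(np.log(1.2), 0.3, n)
human_internal = steady_state_tissue_conc(1.0, partition, clearance)   # 1 ppm exposure scenario

margin = animal_internal_noael / human_internal
print("median margin of internal dose:", np.median(margin).round(1))
print("5th percentile margin:", np.percentile(margin, 5).round(1))
```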

10.
Six members of SRA, each with multiple decades in the Society, reflect on the 1983 Red Book in order to examine the evolving relationship between risk assessment and risk management; the diffusion of risk assessment practice to risk areas such as homeland security and transportation; the quality of chemical risk databases; challenges from other groups to elements at the core of risk assessment practice; and our collective efforts to communicate risk assessment to a diverse set of critical groups that do not understand risk, risk assessment, or many other risk-related issues. The authors reflect on the 10 recommendations in the Red Book and present several pressing challenges for risk assessment practitioners.

11.
Human health risk assessments use point values to develop risk estimates and thus impart a deterministic character to risk, which, by definition, is a probability phenomenon. The risk estimates are calculated based on individuals and then, using uncertainty factors (UFs), are extrapolated to the population that is characterized by variability. Regulatory agencies have recommended the quantification of the impact of variability in risk assessments through the application of probabilistic methods. In the present study, a framework that deals with the quantitative analysis of uncertainty (U) and variability (V) in target tissue dose in the population was developed by applying probabilistic analysis to physiologically based toxicokinetic models. The mechanistic parameters that determine kinetics were described with probability density functions (PDFs). Since each PDF depicts the frequency of occurrence of all expected values of each parameter in the population, the combined effects of multiple sources of U/V were accounted for in the estimated distribution of tissue dose in the population, and a unified (adult and child) intraspecies toxicokinetic uncertainty factor, UFH-TK, was determined. The results show that the proposed framework accounts effectively for U/V in population toxicokinetics. The ratio of the 95th percentile to the 50th percentile of the annual average concentration of the chemical at the target tissue organ (i.e., the UFH-TK) varies with age. The ratio is equivalent to a unified intraspecies toxicokinetic UF, and it is one of the UFs by which the NOAEL can be divided to obtain the RfC/RfD. The 10-fold intraspecies UF is intended to account for uncertainty and variability in toxicokinetics (3.2x) and toxicodynamics (3.2x); this article deals exclusively with the toxicokinetic component. The framework provides an alternative to the default methodology and is advantageous in that the evaluation of toxicokinetic variability is based on the distribution of the effective target tissue dose, rather than applied dose. It allows for the replacement of the default adult and child intraspecies UF with toxicokinetic data-derived values and provides accurate chemical-specific estimates of their magnitude. It shows that proper application of probability and toxicokinetic theories can reduce uncertainties when establishing exposure limits for specific compounds and provide better assurance that established limits are adequately protective. It contributes to the development of a probabilistic noncancer risk assessment framework and will ultimately lead to the unification of cancer and noncancer risk assessment methodologies.
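A compact sketch of the data-derived intraspecies toxicokinetic UF described above: simulate a population distribution of target-tissue dose from parameter PDFs and take the ratio of its 95th to 50th percentile. The simple proportional dose metric and the lognormal PDFs below are stand-ins for a full population toxicokinetic model.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 200_000

# Stand-in for a population toxicokinetic simulation: annual-average target-tissue
# concentration driven by variable physiology (values are hypothetical).
clearance   = rng.lognormal(np.log(1.0), 0.4, n)    # person-specific metabolic clearance
ventilation = rng.lognormal(np.log(1.0), 0.2, n)    # person-specific intake scaling
tissue_conc = ventilation / clearance               # simple proportional dose metric

p50, p95 = np.percentile(tissue_conc, [50, 95])
uf_h_tk = p95 / p50
print(f"data-derived intraspecies toxicokinetic UF ~ {uf_h_tk:.2f} (default half-UF is 3.2)")
```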

12.
Standard experimental designs for conducting developmental toxicity studies typically include three or four dose levels in addition to a control group. Some researchers have suggested that designs with more exposure groups would improve dose-response characterization and risk estimation. Such proposals have not, however, been supported by the results of simulation studies, which instead support the use of fewer dose levels. This discrepancy is partly due to using a known dose-response pattern to generate data, making model choice obvious. While the carcinogenicity literature has explored the implications of different study designs, little attention has been given to the role of design in developmental toxicity risk assessment (or noncancer toxicology in general). In this research, we explore the implications of various experimental designs for developmental toxicity by resampling data from a large study of 2,4,5-trichlorophenoxyacetic acid in mice. We compare the properties of benchmark dose (BMD) estimation for different design strategies by randomly selecting animals within particular dose groups from the entire 2,4,5-T database of over 77,000 birth outcomes to create smaller "pseudo-studies" that are representative of standard bioassay sample sizes. Our results show that experimental designs which include more dose levels have advantages in terms of risk characterization and estimation.
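The resampling idea can be sketched as follows: draw repeated pseudo-studies of fixed total size from a large synthetic dose-response "database" under alternative numbers of dose levels, and compare the variability of a simple fitted dose-response slope (used here as a crude stand-in for BMD estimation). The dose-response function and sample sizes are hypothetical, not the 2,4,5-T data.

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic "database": a hypothetical per-fetus malformation probability as a function of dose.
def true_p(dose):
    return 0.05 + 0.3 * (dose / 100.0)

def draw_study(dose_groups, total_animals=600):
    """Sample one pseudo-study of fixed total size; return the fitted linear slope of risk vs. dose."""
    per_group = total_animals // len(dose_groups)
    doses, rates = [], []
    for d in dose_groups:
        affected = rng.binomial(per_group, true_p(d))
        doses.append(d)
        rates.append(affected / per_group)
    return np.polyfit(doses, rates, 1)[0]

designs = {"3 dose levels": [0, 50, 100],
           "6 dose levels": [0, 20, 40, 60, 80, 100]}
for name, groups in designs.items():
    slopes = [draw_study(groups) for _ in range(2000)]
    print(f"{name}: mean slope = {np.mean(slopes):.4f}, SD = {np.std(slopes):.4f}")
```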

13.
Risk Analysis, 2018, 38(4): 724-754
A bounding risk assessment is presented that evaluates possible human health risk from a hypothetical scenario involving a 10,000-gallon release of flowback water from horizontal fracturing of Marcellus Shale. The water is assumed to be spilled on the ground, to infiltrate into groundwater that is a source of drinking water, and to be ingested by an adult and a child located downgradient. Key uncertainties in estimating risk are given explicit quantitative treatment using Monte Carlo analysis. Chemicals that contribute significantly to estimated health risks are identified, as are key uncertainties and variables to which risk estimates are sensitive. The results show that hypothetical exposure via drinking water impacted by chemicals in Marcellus Shale flowback water, assumed to be spilled onto the ground surface, results in predicted bounds between 10^-10 and 10^-6 (for both adult and child receptors) for excess lifetime cancer risk. Cumulative hazard indices (HI_cumulative) resulting from these hypothetical exposures have predicted bounds (5th to 95th percentile) between 0.02 and 35 for assumed adult receptors and between 0.1 and 146 for assumed child receptors. Predicted health risks are dominated by noncancer endpoints related to ingestion of barium and lithium in impacted groundwater. Hazard indices above unity are largely related to exposure to lithium. Salinity taste thresholds are likely to be exceeded before drinking water exposures result in adverse health effects. The findings provide focus for policy discussions concerning flowback water risk management. They also indicate ways to improve the ability to estimate health risks from drinking water impacted by a flowback water spill (i.e., by reducing uncertainty).
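A minimal Monte Carlo sketch of the cumulative hazard-index piece of such an assessment is shown below for two of the constituents named in the abstract (barium and lithium). The source concentrations, dilution/attenuation factor, intake rates, body weights, and reference doses are all hypothetical placeholders, not the study's inputs.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 100_000

# Hypothetical flowback-water source concentrations (mg/L) and oral reference doses (mg/kg-day).
source = {"barium": rng.lognormal(np.log(2000.0), 0.5, n),
          "lithium": rng.lognormal(np.log(100.0), 0.5, n)}
rfd    = {"barium": 0.2, "lithium": 0.002}

dilution  = rng.lognormal(np.log(1e-3), 1.0, n)     # spill-to-well dilution/attenuation factor
intake_L  = rng.triangular(0.5, 1.5, 3.0, n)        # drinking water intake, L/day
bodywt_kg = rng.normal(20.0, 3.0, n)                # child body weight, kg

# Cumulative hazard index: sum over constituents of (dose / RfD).
hi = sum((source[c] * dilution * intake_L / bodywt_kg) / rfd[c] for c in source)
print("cumulative hazard index, 5th/50th/95th percentiles:",
      np.percentile(hi, [5, 50, 95]).round(2))
```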

14.
Current methods for cancer risk assessment result in single values, without any quantitative information on the uncertainties in these values. Therefore, single risk values could easily be overinterpreted. In this study, we discuss a full probabilistic cancer risk assessment approach in which all the generally recognized uncertainties in both exposure and hazard assessment are quantitatively characterized and probabilistically evaluated, resulting in a confidence interval for the final risk estimate. The methodology is applied to three example chemicals (aflatoxin, N-nitrosodimethylamine, and methyleugenol). These examples illustrate that the uncertainty in a cancer risk estimate may be huge, making single-value estimates of cancer risk meaningless. Further, a risk based on linear extrapolation tends to be lower than the upper 95% confidence limit of a probabilistic risk estimate, and in that sense it is not conservative. Our conceptual analysis showed that there are two possible basic approaches for cancer risk assessment, depending on the interpretation of the dose-incidence data measured in animals. However, it remains unclear which of the two interpretations is the more adequate one, adding an additional uncertainty to the already huge confidence intervals for cancer risk estimates.
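The basic mechanics of a full probabilistic cancer risk estimate can be sketched by propagating uncertainty in a BMD-anchored potency term and in exposure, then reading off a confidence interval and comparing it with a single-value linear extrapolation. All distributions below are hypothetical and are not the ones used for the three example chemicals.

```python
import numpy as np

rng = np.random.default_rng(5)
n = 200_000

# Hypothetical uncertainty distributions for the animal BMD10 (mg/kg-day), an
# animal-to-human potency adjustment, and human lifetime exposure (mg/kg-day).
bmd10    = rng.lognormal(np.log(5.0), 0.4, n)
interspp = rng.lognormal(np.log(1.0), 0.5, n)
exposure = rng.lognormal(np.log(1e-4), 0.8, n)

risk = 0.10 / (bmd10 * interspp) * exposure          # linear-from-BMD10 slope times dose
lo, med, hi = np.percentile(risk, [5, 50, 95])
print(f"probabilistic cancer risk: median {med:.1e}, 90% CI ({lo:.1e}, {hi:.1e})")

# Deterministic linear extrapolation from point estimates, for comparison.
point = 0.10 / 5.0 * 1e-4
print(f"single-value linear extrapolation: {point:.1e}")
```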

15.
In recent years, the healthcare sector has adopted the use of operational risk assessment tools to help understand the systems issues that lead to patient safety incidents. But although these problem-focused tools have improved the ability of healthcare organizations to identify hazards, they have not translated into measurable improvements in patient safety. One possible reason for this is a lack of support for the solution-focused process of risk control. This article describes a content analysis of the risk management strategies, policies, and procedures at all acute (i.e., hospital), mental health, and ambulance trusts (health service organizations) in the East of England area of the British National Health Service. The primary goal was to determine what organizational-level guidance exists to support risk control practice. A secondary goal was to examine the risk evaluation guidance provided by these trusts. With regard to risk control, we found an almost complete lack of useful guidance to promote good practice. With regard to risk evaluation, the trusts relied exclusively on risk matrices. A number of weaknesses were found in the use of this tool, especially related to the guidance for scoring an event's likelihood. We make a number of recommendations to address these concerns. The guidance assessed provides insufficient support for risk control and risk evaluation. This may present a significant barrier to the success of risk management approaches in improving patient safety.

16.
Approaches to risk assessment have been shown to vary among regulatory agencies and across jurisdictional boundaries according to the different assumptions and justifications used. Approaches to screening-level risk assessment from six international agencies were applied to an urban case study focusing on benzo[a]pyrene (B[a]P) exposure and compared in order to provide insight into the differences between agency methods, assumptions, and justifications. Exposure estimates ranged four-fold, with most of the dose stemming from exposure to animal products (8-73%) and plant products (24-88%). Total cancer risk across agencies varied by two orders of magnitude, with exposure to air and plant and animal products contributing most to total cancer risk, while the air contribution showed the greatest variability (1-99%). Variability in cancer risk of 100-fold was attributed to choices of toxicological reference values (TRVs), either based on a combination of epidemiological and animal data, or on animal data. The contribution and importance of the urban exposure pathway for cancer risk varied according to the TRV and, ultimately, according to differences in risk assessment assumptions and guidance. While all agency risk assessment methods are predicated on science, the study results suggest that the largest impact on the differential assessment of risk by international agencies comes from policy and judgment, rather than science.

17.
18.
The extensive data from the Blair et al.(1) epidemiology study of occupational acrylonitrile exposure among 25,460 workers in eight plants in the United States provide an excellent opportunity to update quantitative risk assessments for this widely used commodity chemical. We employ the semiparametric Cox relative risk (RR) regression model with a cumulative exposure metric to model cause-specific mortality from lung cancer and all other causes. The separately estimated cause-specific cumulative hazards are then combined to provide an overall estimate of age-specific mortality risk. Age-specific estimates of the additional risk of lung cancer mortality associated with several plausible occupational exposure scenarios are obtained. For age 70, these estimates are all markedly lower than those generated with the cancer potency estimate provided in the USEPA acrylonitrile risk assessment.(2) This result is consistent with the failure of recent occupational studies to confirm elevated lung cancer mortality among acrylonitrile-exposed workers as was originally reported by O'Berg,(3) and it calls attention to the importance of using high-quality epidemiology data in the risk assessment process.
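A hedged numerical sketch of how separately estimated cause-specific hazards can be combined into an age-specific risk, and how a Cox-type relative risk in cumulative exposure shifts the lung-cancer component, is given below. The baseline hazards, the log-RR coefficient, and the exposure scenario are hypothetical, not estimates from the Blair et al. cohort.

```python
import numpy as np

ages = np.arange(40, 71)                              # follow ages 40 through 70
# Hypothetical baseline hazards (per year) for lung-cancer death and all other causes.
h_lung  = 1e-4 * np.exp(0.08 * (ages - 40))
h_other = 5e-3 * np.exp(0.07 * (ages - 40))

beta = 0.02                                           # hypothetical log relative risk per ppm-year
cum_exposure = np.clip((ages - 40) * 4.0, 0, 80)      # ppm-years under a work scenario
rr = np.exp(beta * cum_exposure)                      # Cox-type relative risk for lung cancer

def cumulative_risk(h_lung_age, h_other_age):
    """Probability of lung-cancer death by age 70 from yearly cause-specific hazards."""
    surv, risk = 1.0, 0.0
    for hl, ho in zip(h_lung_age, h_other_age):
        risk += surv * hl                             # die of lung cancer this year
        surv *= np.exp(-(hl + ho))                    # survive all causes to the next year
    return risk

baseline = cumulative_risk(h_lung, h_other)
exposed  = cumulative_risk(h_lung * rr, h_other)
print(f"added lung-cancer mortality risk to age 70: {exposed - baseline:.2e}")
```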

19.
The benchmark dose (BMD) is an exposure level that would induce a small risk increase (the BMR level) above background. The BMD approach to deriving a reference dose for risk assessment of noncancer effects is advantageous in that the estimate of the BMD is not restricted to experimental doses and utilizes most of the available dose-response information. To quantify the statistical uncertainty of a BMD estimate, we often calculate and report its lower confidence limit (i.e., the BMDL), and may even consider it as a more conservative alternative to the BMD itself. Computation of the BMDL may involve normal confidence limits to the BMD in conjunction with the delta method. Therefore, factors such as small sample size and nonlinearity in model parameters can affect the performance of the delta method BMDL, and alternative methods are useful. In this article, we propose a bootstrap method to estimate the BMDL utilizing a scheme that consists of a resampling of residuals after model fitting and a one-step formula for parameter estimation. We illustrate the method with clustered binary data from developmental toxicity experiments. Our analysis shows that with moderately elevated dose-response data, the distribution of the BMD estimator tends to be left-skewed and bootstrap BMDLs are smaller than the delta method BMDLs on average, hence quantifying risk more conservatively. Statistically, the bootstrap BMDL quantifies the uncertainty of the true BMD more honestly than the delta method BMDL, as its coverage probability is closer to the nominal level than that of the delta method BMDL. We find that BMD and BMDL estimates are generally insensitive to model choices provided that the models fit the data comparably well near the region of the BMD. Our analysis also suggests that, in the presence of a significant and moderately strong dose-response relationship, developmental toxicity experiments under the standard protocol support dose-response assessment at 5% BMR for the BMD and a 95% confidence level for the BMDL.
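The sketch below conveys the general bootstrap-BMDL idea in a deliberately simplified form: fit a logistic dose-response to grouped quantal data, define the BMD at 5% extra risk, resample group counts, refit, and take the 5th percentile of the bootstrap BMDs as the BMDL. This ignores litter clustering and uses a simple parametric bootstrap rather than the residual-resampling, one-step scheme proposed in the article; all data are fabricated.

```python
import numpy as np
from scipy.optimize import curve_fit, brentq

rng = np.random.default_rng(6)

# Hypothetical quantal developmental data: dose (mg/kg-day), affected, and group size.
dose = np.array([0.0, 25.0, 50.0, 100.0, 200.0])
aff  = np.array([2, 4, 9, 20, 45])
tot  = np.array([50, 50, 50, 50, 50])

def p_model(d, b0, b1):                       # two-parameter logistic dose-response
    return 1.0 / (1.0 + np.exp(-(b0 + b1 * d)))

def fit_bmd(affected, bmr=0.05):
    prop = affected / tot
    (b0, b1), _ = curve_fit(p_model, dose, prop, p0=[-3.0, 0.02], maxfev=10000)
    extra = lambda d: (p_model(d, b0, b1) - p_model(0, b0, b1)) / (1 - p_model(0, b0, b1)) - bmr
    return brentq(extra, 1e-6, 1e4)           # dose giving 5% extra risk

bmd = fit_bmd(aff)
boot = [fit_bmd(rng.binomial(tot, aff / tot)) for _ in range(500)]   # simple parametric bootstrap
bmdl = np.percentile(boot, 5)                 # lower one-sided 95% confidence limit
print(f"BMD = {bmd:.1f}, bootstrap BMDL = {bmdl:.1f} mg/kg-day")
```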

20.
This article develops a methodology for quantifying model risk in quantile risk estimates. The application of quantile estimates to risk assessment has become common practice in many disciplines, including hydrology, climate change, statistical process control, insurance and actuarial science, and the uncertainty surrounding these estimates has long been recognized. Our work is particularly important in finance, where quantile estimates (called Value-at-Risk) have been the cornerstone of banking risk management since the mid-1980s. A recent amendment to the Basel II Accord recommends additional market risk capital to cover all sources of "model risk" in the estimation of these quantiles. We provide a novel and elegant framework whereby quantile estimates are adjusted for model risk, relative to a benchmark which represents the state of knowledge of the authority that is responsible for model risk. A simulation experiment in which the degree of model risk is controlled illustrates how to quantify Value-at-Risk model risk and compute the required regulatory capital add-on for banks. An empirical example based on real data shows how the methodology can be put into practice, using only two time series (daily Value-at-Risk and daily profit and loss) from a large bank. We conclude with a discussion of potential applications to nonfinancial risks.
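A toy sketch of the underlying idea, comparing a model's daily VaR with realized P&L and computing the quantile shift needed to restore the nominal exceedance rate, is shown below. It is a simplification of the article's benchmark-based framework, and the P&L series and model VaR are simulated rather than bank data.

```python
import numpy as np

rng = np.random.default_rng(7)
T, alpha = 1000, 0.01                         # trading days, 99% VaR

# Simulated daily P&L with fat tails (the "benchmark" reality) and a normal-model
# VaR series that understates tail risk; both series are illustrative.
pnl = rng.standard_t(df=4, size=T) * 1.0e6
model_var = np.full(T, -2.326 * 1.0e6)        # model's 99% loss threshold (too optimistic)

hit_rate = (pnl < model_var).mean()           # should be close to alpha for an adequate model
benchmark_var = np.quantile(pnl, alpha)       # benchmark quantile from realized P&L
addon = max(0.0, model_var[0] - benchmark_var)    # extra loss buffer implied by model risk

print(f"exceedance rate under model VaR: {hit_rate:.3f} (nominal {alpha})")
print(f"model-risk add-on per day: {addon:,.0f}")
print(f"exceedance rate after adjustment: {(pnl < model_var - addon).mean():.3f}")
```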

