Similar Documents
20 similar documents found.
1.
When assessing risks posed by environmental chemical mixtures, whole mixture approaches are preferred to component approaches. When toxicological data on whole mixtures as they occur in the environment are not available, Environmental Protection Agency guidance states that toxicity data from a mixture considered “sufficiently similar” to the environmental mixture can serve as a surrogate. We propose a novel method to examine whether mixtures are sufficiently similar when exposure data and mixture toxicity study data from at least one representative mixture are available. We define sufficient similarity using equivalence testing methodology that compares the distance between benchmark dose estimates for mixtures in both data-rich and data-poor cases. We construct a “similar mixtures risk indicator” (SMRI), analogous to the hazard index, for sufficiently similar mixtures, linking exposure data with mixture toxicology data. The methods are illustrated using pyrethroid mixture occurrence data collected in child care centers (CCCs) and dose-response data on acute neurobehavioral effects of pyrethroid mixtures in rats. Our method shows that the mixtures from 90% of the CCCs were sufficiently similar to the dose-response study mixture. Using exposure estimates for a hypothetical child, the 95th percentile of the (weighted) SMRI for these sufficiently similar mixtures was 0.20 (where SMRI < 1 indicates less concern and SMRI > 1 indicates more concern).
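The abstract does not spell out the SMRI arithmetic; as a minimal, hypothetical sketch, a hazard-index-style indicator can be formed by summing exposure-to-benchmark-dose ratios for each mixture and then summarizing the 95th percentile across sufficiently similar mixtures. The component names, exposure values, benchmark doses, and variability assumptions below are invented for illustration and are not from the cited study.

```python
import numpy as np

# Hypothetical pyrethroid components with child exposure estimates (mg/kg-day)
# and benchmark doses (BMDs, mg/kg-day) from a mixture dose-response study.
exposures = {"permethrin": 1.2e-4, "cypermethrin": 4.0e-5, "bifenthrin": 2.5e-5}
bmds      = {"permethrin": 0.05,   "cypermethrin": 0.02,   "bifenthrin": 0.01}

def smri(exposure, bmd):
    """Hazard-index-style indicator: sum of exposure/benchmark-dose ratios."""
    return sum(exposure[c] / bmd[c] for c in exposure)

# One SMRI per child-care-center mixture; 100 centers are simulated here by
# perturbing the exposure profile to mimic occurrence variability.
rng = np.random.default_rng(0)
smris = []
for _ in range(100):
    perturbed = {c: v * rng.lognormal(0.0, 0.5) for c, v in exposures.items()}
    smris.append(smri(perturbed, bmds))

print(f"95th percentile SMRI: {np.percentile(smris, 95):.3f}")
# Interpretation (as in the abstract): SMRI < 1 -> less concern, > 1 -> more concern.
```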

2.
Chemical alternatives assessment is a rapidly developing method used by businesses, governments, and nongovernmental organizations seeking to substitute chemicals of concern in production processes and products. Chemical alternatives assessment is defined as a process for identifying, comparing, and selecting safer alternatives to chemicals of concern (including those in materials, processes, or technologies) on the basis of their hazards, performance, and economic viability. The process is intended to provide guidance for assuring that chemicals of concern are replaced with safer alternatives that are unlikely to be regretted later. Conceptually, the assessment methods are built on three foundational pillars and five common principles. Drawing on a number of emerging alternatives assessment initiatives, in this commentary we outline a chemical alternatives assessment blueprint structured around three broad steps: Scope, Assessment, and Selection and Implementation. Specific tasks and tools are identified for each of these three steps. While it is recognized that ongoing practice will further refine and develop the method and tools, it is important that the structure of the assessment process remain flexible, adaptive, and focused on the substitution of chemicals of concern with safer alternatives.

3.
Very little quantitative analysis is currently available on the cumulative effects of exposure to multiple hazardous agents that have either similar or different mechanisms of action. Over the past several years, efforts have been made to develop methodologies for risk assessment of chemical mixtures, but mixed exposures to two or more dissimilar agents, such as radiation and one or more chemical agents, have not yet been addressed in any substantive way. This article reviews the current understanding of the health risks arising from mixed exposures to ionizing radiation and specific chemicals. Specifically discussed is how mixed radiation/chemical exposures, when evaluated in aggregate, have been linked to chronic health endpoints such as cancer and to intermediate health outcomes such as chromosomal aberrations. Also considered is the extent to which current practices are consistent with the scientific understanding of the health risks associated with mixed-agent exposures. From this the discussion moves to the research needed for assessing the cumulative health risks from aggregate exposures to ionizing radiation and chemicals. The evaluation indicates that essentially no guidance has been provided for conducting risk assessment for two agents with different mechanisms of action (i.e., energy deposition from ionizing radiation versus DNA interactions with chemicals) but similar biological endpoints (i.e., chromosomal aberrations, mutations, and cancer). The literature review also reveals the problems caused by the absence of both the basic science and an appropriate evaluation framework for the combined effects of mixed-agent exposures. This makes it difficult to determine whether there is truly no interaction or whether the interaction is masked by the scale at which effects are observed or by inappropriate dose-response assumptions.

4.
The awareness of potential risks emerging from the use of chemicals in all parts of daily life has increased the need for risk assessments that can cover a high number of exposure situations and thereby ensure the safety of workers and consumers. In the European Union (EU), the practice of risk assessment for chemicals is laid down in a Technical Guidance Document; it is designed to consider environmental exposure as well as human occupational and residential exposure. Almost 70 EU risk assessment reports (RARs) have been finalized for high-production-volume chemicals during the last decade. In the present study, we analyze the assessment of occupational and consumer exposure to trichloroethylene and phthalates presented in six EU RARs. Exposure scenarios in these six RARs were compared to scenarios used in applications of the scenario-based risk assessment approach to the same set of chemicals. We find that the scenarios used in the selected EU RARs to represent typical exposure situations in occupational or private use of chemicals and products do not necessarily represent worst-case conditions. This can be due to the use of outdated information on technical equipment and conditions in workplaces, or to omission of pathways that can cause consumer exposure. Considering the need for exposure and risk assessments under the new chemicals legislation of the EU, we suggest that a transparent process of collecting data on exposure situations and of generating representative exposure scenarios be implemented to improve the accuracy of risk assessments. In addition, the data sets used to assess human exposure should be harmonized, summarized in a transparent fashion, and made accessible to all risk assessors and the public.

5.
Ten years ago, the National Academy of Sciences released its risk assessment/risk management (RA/RM) “paradigm,” which served to crystallize much of the early thinking about these concepts. By defining RA as a four-step process, operationally independent from RM, the paradigm has presented society with a scheme, or a conceptually common framework, for addressing many risky situations (e.g., carcinogens, noncarcinogens, and chemical mixtures). The procedure has facilitated decision making in a wide variety of situations and has identified the most important research needs. The past decade, however, has revealed that additional progress is needed. Areas needing work include addressing the appropriate interaction (not isolation) between RA and RM, improving the methods for assessing risks from mixtures, dealing with “adversity of effect,” deciding whether “hazard” should imply an exposure to environmental conditions or to laboratory conditions, and evolving the concept to include both health and ecological risk. Interest in and expectations of risk assessment are increasing rapidly. The emerging concept of “comparative risk” (i.e., distinguishing between large risks and smaller risks that may be qualitatively different) is at a level of development comparable to that of the concept of “risk” just 10 years ago. Comparative risk stands in need of a paradigm of its own, especially given the current economic limitations. “Times are tough; Brother, can you paradigm?”

6.
Polycyclic aromatic hydrocarbons (PAHs) have been labeled contaminants of concern because of their carcinogenic potential, insufficient toxicological data, environmental ubiquity, and the inconsistent composition of environmental mixtures. The Environmental Protection Agency is reevaluating current methods for assessing the toxicity of PAHs, including the assumption of toxic additivity in mixtures. This study was aimed at testing mixture interactions through in vitro cell culture experiments and modeling the toxicity using quantitative structure-activity relationships (QSARs). Clone-9 rat liver cells were used to analyze cellular proliferation, viability, and genotoxicity for 15 PAHs in single doses and binary mixtures. Tests revealed that many mixtures have nonadditive toxicity, with the mixture effect varying with mixture composition. Many mixtures displayed antagonism, consistent with other published studies. QSARs were then developed using the genetic function approximation algorithm to predict toxic activity both for single PAH congeners and for binary mixtures. Effective concentrations inhibiting 50% of the cell populations were modeled for the three endpoints, with R² = 0.90, 0.99, and 0.84, respectively. The QSAR mixture algorithms were then adjusted to account for the observed mixture interactions as well as the mixture composition (ratios) to assess the feasibility of QSARs for mixtures. Based on these results, toxic additivity is improbable, and environmental PAH mixtures are therefore likely to show nonadditive responses when complex interactions occur between components. Furthermore, QSAR may be a useful tool to help bridge the data gaps surrounding the assessment of human health risks associated with PAH exposures.
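As an illustration of the kind of single-congener effective-concentration modeling that underlies an EC50-based QSAR, the sketch below fits a four-parameter log-logistic (Hill) curve to hypothetical viability data and extracts the EC50. The concentrations and responses are invented, and this is a generic curve fit, not the genetic function approximation algorithm used in the study.

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical single-PAH viability data (fraction of control) vs. concentration (uM).
conc = np.array([0.1, 0.3, 1.0, 3.0, 10.0, 30.0, 100.0])
resp = np.array([0.98, 0.95, 0.85, 0.60, 0.35, 0.15, 0.05])

def hill(c, top, bottom, ec50, slope):
    """Four-parameter log-logistic (Hill) dose-response curve."""
    return bottom + (top - bottom) / (1.0 + (c / ec50) ** slope)

params, _ = curve_fit(hill, conc, resp, p0=[1.0, 0.0, 3.0, 1.0])
top, bottom, ec50, slope = params
print(f"Estimated EC50: {ec50:.2f} uM")

# A simple concentration-addition prediction for a binary mixture would use
# 1 / EC50_mix = f1 / EC50_1 + f2 / EC50_2 (f_i = mixture fractions); departures of
# the observed mixture EC50 from this prediction indicate the nonadditive (often
# antagonistic) behavior reported in the study.
```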

7.
One of the most dynamic and fruitful areas of current health-related research concerns the various roles of the human microbiome in disease. Evidence is accumulating that interactions between substances in the environment and the microbiome can affect risks of disease, in both beneficial and adverse ways. Although most of the research has concerned the roles of diet and certain pharmaceutical agents, there is increasing interest in the possible roles of environmental chemicals. Chemical risk assessment has, to date, not included consideration of the influence of the microbiome. We suggest that failure to consider the possible roles of the microbiome could lead to significant error in risk assessment results. Our purpose in this commentary is to summarize some of the evidence supporting our hypothesis and to urge the risk assessment community to begin considering, and influencing, how results from microbiome-related research could be incorporated into chemical risk assessments. An additional emphasis in our commentary concerns the distinct possibility that research on chemical–microbiome interactions will also reduce some of the significant uncertainties that accompany current risk assessments. Of particular interest is evidence suggesting that the microbiome influences variability in disease risk across populations and, of particular relevance to chemical risk, variability in animal and human responses to chemical exposure. The possible explanatory power of the microbiome regarding sources of variability could reduce what may be the most significant source of uncertainty in chemical risk assessment.

8.
Kevin M. Crofton, Risk Analysis, 2012, 32(10): 1784–1797
Traditional additivity models provide little flexibility in modeling the dose-response relationships of the single agents in a mixture. While the flexible single chemical required (FSCR) method allows greater flexibility, its implicit nature is an obstacle to forming the parameter covariance matrix, which underlies many statistical optimality design criteria. The goal of this effort is to develop a method for constructing the parameter covariance matrix for FSCR models, so that (local) alphabetic optimality criteria can be applied. Data from Crofton et al. are provided as motivation: in an experiment designed to determine the effect of 18 polyhalogenated aromatic hydrocarbons on serum total thyroxine (T4), the interaction among the chemicals was statistically significant. Gennings et al. fit the FSCR interaction threshold model to the data. The resulting estimate of the interaction threshold was positive and within the observed dose region, providing evidence of a dose-dependent interaction. However, the corresponding likelihood-ratio-based confidence interval was wide and included zero. In order to estimate the location of the interaction threshold more precisely, supplemental data are required. Using the available data as the first stage, the Ds-optimal second-stage design criterion was applied to minimize the variance of the hypothesized interaction threshold. Practical concerns associated with the resulting design are discussed and addressed using a penalized optimality criterion. Results demonstrate that the penalized Ds-optimal second-stage design can be used to define the interaction threshold more precisely while maintaining the characteristics deemed important in practice.
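The FSCR threshold model itself is not reproduced in the abstract. The sketch below illustrates the underlying idea of a Ds-type criterion on a much simpler, hypothetical hockey-stick model: the asymptotic variance of the threshold parameter is read off the inverse information matrix (Jacobian cross-product) for each candidate second-stage design, and the design minimizing that variance is selected. The model form, local parameter values, and dose grids are assumptions, not those of the study.

```python
import numpy as np
from itertools import combinations_with_replacement

# Hypothetical hockey-stick (threshold) model: mu(d) = b0 + b1 * max(0, d - tau).
# Local parameter guesses (e.g., from a first-stage fit); values are illustrative.
b0, b1, tau = 1.0, -0.8, 2.0

def jacobian(doses):
    """Rows = observations, columns = d(mu)/d(b0, b1, tau) at the local estimates."""
    J = np.zeros((len(doses), 3))
    for i, d in enumerate(doses):
        J[i, 0] = 1.0
        J[i, 1] = max(0.0, d - tau)
        J[i, 2] = -b1 if d > tau else 0.0
    return J

def var_tau(stage1, stage2):
    """Ds-type criterion: asymptotic variance of tau (up to sigma^2) for the combined design."""
    J = jacobian(np.concatenate([stage1, stage2]))
    return np.linalg.inv(J.T @ J)[2, 2]

stage1 = np.repeat([0.0, 1.0, 2.0, 4.0, 8.0], 4)   # existing first-stage doses
candidates = np.linspace(0.5, 6.0, 12)             # allowable second-stage dose levels

# Choose the pair of second-stage dose levels (5 replicates each) minimizing var(tau).
best = min(combinations_with_replacement(candidates, 2),
           key=lambda pair: var_tau(stage1, np.repeat(pair, 5)))
print(f"Best second-stage dose pair: {best}, "
      f"relative var(tau) = {var_tau(stage1, np.repeat(best, 5)):.4f}")
```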

9.
Richard A. Canady, Risk Analysis, 2010, 30(11): 1663–1670
A September 2008 workshop sponsored by the Society for Risk Analysis(1) on risk assessment methods for nanoscale materials explored “nanotoxicology” in risk assessment. A general conclusion of the workshop was that, while research indicates that some nanoscale materials are toxic, the information presented at the workshop does not indicate the need for a conceptually different approach to risk assessment of nanoscale materials compared with other materials. However, the toxicology discussions did identify areas of uncertainty that present a challenge for the assessment of nanoscale materials. These areas include novel metrics, characterizing multivariate dynamic mixtures, identifying toxicologically relevant properties and “impurities” among nanoscale characteristics, and characterizing persistence, toxicokinetics, and weight of evidence in light of the dynamic nature of the mixtures. The discussion also considered “nanomaterial uncertainty factors” for health risk values such as the Environmental Protection Agency's reference dose (RfD). As with risk assessment generally, participants expressed the view that completing a toxicity data set, or extrapolating between species, sensitive individuals, or durations of exposure, are not qualitatively different considerations for nanoscale materials than for chemicals in general, and that a blanket “nanomaterial uncertainty factor” for all nanomaterials therefore does not seem appropriate. However, the quantitative challenges may require new methods and approaches to integrate the information and the uncertainty.
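To make the uncertainty-factor discussion concrete: a reference dose is conventionally derived by dividing a point of departure by a product of uncertainty factors. The sketch below shows that arithmetic with illustrative values and a purely hypothetical nano-specific factor; none of the numbers are recommendations or values from the workshop.

```python
# Conventional reference-dose arithmetic: RfD = POD / (product of uncertainty factors).
# All values below are illustrative, not recommendations.
pod_mg_per_kg_day = 5.0     # point of departure (e.g., a NOAEL or BMDL)
uncertainty_factors = {
    "interspecies (animal -> human)": 10,
    "intraspecies (sensitive individuals)": 10,
    "database deficiencies": 3,
}
# A blanket "nanomaterial UF" was judged inappropriate by workshop participants;
# it is set to 1 here and shown only to illustrate how such a factor would propagate.
hypothetical_nano_uf = 1

uf_product = hypothetical_nano_uf
for factor in uncertainty_factors.values():
    uf_product *= factor

rfd = pod_mg_per_kg_day / uf_product
print(f"Composite UF = {uf_product}, RfD = {rfd:.4f} mg/kg-day")
```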

10.
A conceptual framework is presented for conducting exposure assessments under the U.S. EPA's Voluntary Children's Chemical Evaluation Program (VCCEP). The VCCEP is a voluntary program whereby companies that manufacture chemicals of potential concern are asked to conduct hazard, exposure, and risk assessments for those chemicals. The VCCEP is unique in its risk-based, tiered approach, its focus on children, and its requirement for a comprehensive consideration of all reasonably foreseeable exposure pathways for a particular chemical. The consideration of all potential exposure pathways for some commonly used chemicals presents a daunting challenge for the exposure assessor. This article presents a framework for managing this complicated process and illustrates its application with a hypothetical case study. The framework provides guidance for interpreting multiple sources of exposure information and developing a plausible list of exposure pathways for a chemical. Furthermore, the framework provides a means to process all the available information so that pathways of negligible concern can be eliminated from consideration. Finally, the framework provides guidance for using the tiered approach of the VCCEP to conduct an assessment efficiently, first applying simple, screening-level approaches and then, if necessary, more complex, refined exposure assessment methods. The case study illustrates the major concepts.
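A first-tier, screening-level pathway evaluation of the kind described typically compares a conservatively estimated average daily dose against a toxicity reference value. The sketch below applies the standard intake equation ADD = (C × IR × EF × ED) / (BW × AT) to one hypothetical child ingestion pathway; all inputs, including the reference dose, are invented for illustration.

```python
# Screening-level average daily dose for a hypothetical child ingestion pathway.
# ADD = (C * IR * EF * ED) / (BW * AT); all inputs are illustrative.
C  = 0.5       # chemical concentration in the ingested medium (mg/kg)
IR = 0.1       # ingestion rate (kg/day)
EF = 350       # exposure frequency (days/year)
ED = 6         # exposure duration (years)
BW = 15        # body weight (kg)
AT = ED * 365  # averaging time for noncancer effects (days)

add = (C * IR * EF * ED) / (BW * AT)   # mg/kg-day
reference_dose = 0.01                  # hypothetical RfD (mg/kg-day)
hazard_quotient = add / reference_dose

print(f"ADD = {add:.4f} mg/kg-day, HQ = {hazard_quotient:.2f}")
# In a tiered scheme, HQ < 1 at the screening tier would let the pathway be set
# aside; HQ >= 1 would trigger the more refined, higher-tier assessment.
```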

11.
With the growing number and diversity of hazard and risk assessment algorithms, models, databases, and frameworks for chemicals and their applications, risk assessors and managers are challenged to select the appropriate tool for a given need or decision. Some decisions require relatively simple tools to evaluate chemical hazards (e.g., toxicity), such as labeling for safe occupational handling and transport of chemicals. Others require assessment tools that provide relative comparisons among chemical properties, such as selecting the optimum chemical for a particular use from a group of candidates. Still other needs warrant full risk characterization, coupling both hazard and exposure considerations. Examples of these include new chemical evaluations for commercialization, evaluations of existing chemicals for novel uses, and assessments of the adequacy of risk management provisions. Even well-validated tools can be inappropriately applied, with consequences as severe as misguided chemical management, compromised credibility of the tool and its developers and users, and squandered resources. This article describes seven discrete categories of tools based on their information content, function, and the type of outputs produced. It proposes a systematic framework to assist users in selecting hazard and risk assessment tools for given applications. This analysis illustrates the importance of careful tool selection for responsible communication of chemical assessments and sound risk management.

12.
Old industrial landfills are important sources of environmental contamination in Europe, including Finland. In this study, we demonstrated the combination of the TRIAD procedure, multicriteria decision analysis (MCDA), and statistical Monte Carlo analysis for assessing the risks to terrestrial biota at a former landfill site contaminated by petroleum hydrocarbons (PHCs) and metals. First, we generated hazard quotients by dividing the concentrations of metals and PHCs in soil by the corresponding risk-based ecological benchmarks. We then conducted ecotoxicity tests using five plant species, earthworms, and potworms, and determined the abundance and diversity of soil invertebrates from additional samples. We aggregated the results according to the methods used in the TRIAD procedure, rated the assessment methods on their performance against specific criteria, and weighted the criteria using two alternative weighting techniques to produce performance scores for each method. We faced problems in applying the TRIAD procedure; for example, the results from the animal counts had to be excluded from the calculation of integrated risk estimates (IREs) because our reference soil sample showed the lowest biodiversity and abundance of soil animals. In addition, hormesis hampered the use of the ecotoxicity test results. The final probabilistic IREs imply significant risks at all sampling locations. Although linking MCDA with TRIAD provided a useful means to study and account for the performance of the alternative methods in predicting ecological risks, some of the uncertainties involved remained outside the quantitative analysis.
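The combination of hazard quotients with Monte Carlo analysis can be sketched as follows: soil concentrations are sampled from assumed distributions, divided by ecological benchmarks, and summed into a probabilistic hazard index. The contaminants, lognormal parameters, and benchmarks below are invented and are not the site data from the study.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 10_000

# Hypothetical soil contaminants: lognormal concentration parameters (log-space
# mean and sd, mg/kg) and risk-based ecological benchmarks (mg/kg).
contaminants = {
    "zinc":            {"mu": np.log(300.0), "sigma": 0.6, "benchmark": 200.0},
    "lead":            {"mu": np.log(120.0), "sigma": 0.5, "benchmark": 100.0},
    "petroleum (PHC)": {"mu": np.log(800.0), "sigma": 0.8, "benchmark": 500.0},
}

hazard_index = np.zeros(n)
for props in contaminants.values():
    conc = rng.lognormal(props["mu"], props["sigma"], n)
    hazard_index += conc / props["benchmark"]      # hazard quotient per contaminant

print(f"Median HI = {np.median(hazard_index):.2f}, "
      f"P(HI > 1) = {np.mean(hazard_index > 1.0):.2%}")
```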

13.
The assessment of nonresponse bias in survey-based empirical studies plays an important role in establishing the credibility of research results. Statistical methods that compare the responses of two groups (e.g., early vs. late respondents) on multiple characteristics relevant to the study are frequently used to assess nonresponse bias. We consider the concepts of individual and complete statistical power used for multiple testing and show their relevance for determining the number of statistical tests to perform when assessing nonresponse bias. Our analysis of the factors that influence both individual and complete power levels yielded recommendations that operations management (OM) empirical researchers can use to improve their assessment of nonresponse bias. A power analysis of 61 survey-based research papers published in three prestigious academic operations management journals over the last decade showed very low (<0.4) power levels in some of the statistical tests used for assessing nonresponse bias. Such low power levels can lead to erroneous conclusions about nonresponse bias and indicate the need for more rigor in the assessment of nonresponse bias in OM research.
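To illustrate the individual-versus-complete power distinction, the sketch below computes the power of a single two-sample t-test comparing early and late respondents, then the probability that all k such tests detect a true difference, under the simplifying assumption of independent tests (complete power = product of individual powers). The effect size, group sizes, alpha, and number of tests are hypothetical.

```python
from statsmodels.stats.power import TTestIndPower

# Early vs. late respondents compared on k characteristics with two-sample t-tests.
# Effect size (Cohen's d), group sizes, and alpha are illustrative.
analysis = TTestIndPower()
effect_size = 0.5          # assumed true difference worth detecting
n_early, n_late = 40, 40
alpha = 0.05
k = 5                      # number of characteristics tested

individual_power = analysis.power(effect_size=effect_size, nobs1=n_early,
                                  alpha=alpha, ratio=n_late / n_early,
                                  alternative="two-sided")

# "Complete power" = probability that all k tests detect a true difference;
# under the simplifying assumption of independent tests it is the product.
complete_power = individual_power ** k

print(f"Individual power: {individual_power:.2f}")
print(f"Complete power over {k} tests (independence assumed): {complete_power:.2f}")
```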

14.
Todd Bridges, Risk Analysis, 2011, 31(8): 1211–1225
Weight of evidence (WOE) methods are key components of ecological and human health risk assessments. Most WOE applications rely on the qualitative integration of diverse lines of evidence (LOE) representing impacts on ecological receptors and humans. Recent calls for transparency in assessments and for justifiability of management decisions are pushing the community to consider quantitative methods for integrated risk assessment and management. This article compares and contrasts the type of information required to apply individual WOE techniques and the outcomes they provide in ecological risk assessment, and proposes a multicriteria decision analysis (MCDA) framework for integrating individual LOE in support of management decisions. The use of quantitative WOE techniques is illustrated for a hypothetical but realistic case study of selecting remedial alternatives at a contaminated aquatic site. Use of formal MCDA does not necessarily eliminate the biases and judgment calls involved in selecting remedial alternatives, but it allows transparent evaluation and fusion of the individual LOE. It also provides justifiable methods for selecting remedial alternatives consistent with stakeholder and decision-maker values.
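A minimal sketch of how individual LOE could be fused in an additive MCDA is shown below: each alternative is scored on each line of evidence, the scores are multiplied by stakeholder-derived weights, and the weighted sums are compared. The LOE names, scores, weights, and alternatives are all invented, and the paper's actual MCDA framework may use a different aggregation rule.

```python
import numpy as np

# Hypothetical lines of evidence (LOE) scored 0-1 (1 = most favorable) for three
# remedial alternatives at a contaminated aquatic site; weights reflect stakeholder
# values and sum to 1. All numbers are illustrative.
loe = ["sediment chemistry", "toxicity tests", "benthic community", "cost/feasibility"]
weights = np.array([0.30, 0.30, 0.25, 0.15])

scores = {
    "monitored natural recovery": np.array([0.4, 0.5, 0.6, 0.9]),
    "capping":                    np.array([0.7, 0.7, 0.6, 0.6]),
    "dredging":                   np.array([0.9, 0.8, 0.5, 0.3]),
}

# Simple additive (weighted-sum) MCDA aggregation of the individual LOE.
for alternative, s in scores.items():
    print(f"{alternative:28s} weighted score = {float(weights @ s):.2f}")
```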

15.
This mixed-methods study investigated consumers' knowledge of chemicals in terms of basic principles of toxicology and then related this knowledge, along with other factors, to their fear of chemical substances (i.e., chemophobia). Both qualitative interviews and a large-scale online survey were conducted in the German-speaking part of Switzerland. A Mokken scale was developed to measure laypeople's toxicological knowledge. The results indicate that most laypeople are unaware of the similarities between natural and synthetic chemicals with respect to certain toxicological principles. Furthermore, their associations with the term “chemical substances,” and the self-reported affect prompted by these associations, are mostly negative. The results also suggest that knowledge of basic principles of toxicology, self-reported affect evoked by the term “chemical substances,” risk-benefit perceptions concerning synthetic chemicals, and trust in regulation processes are all negatively associated with chemophobia, while general health concerns are positively related to chemophobia. Thus, to enhance informed consumer decision making, it might be necessary to tackle the stigmatization of the term “chemical substances” as well as to address and clarify prevalent misconceptions.
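Mokken scaling rests on Loevinger's H scalability coefficient. As a rough, hypothetical sketch of that coefficient for dichotomous knowledge items (the study itself presumably used a dedicated Mokken package and its own item set), H can be computed from observed versus expected Guttman errors:

```python
import numpy as np

# Loevinger's H for dichotomous knowledge items (1 = correct answer).
# The response matrix below is invented for illustration only.
X = np.array([
    [1, 1, 1, 0],
    [1, 1, 0, 0],
    [1, 0, 0, 0],
    [1, 1, 1, 1],
    [1, 1, 0, 1],
    [0, 0, 0, 0],
    [1, 1, 1, 0],
    [1, 0, 1, 0],
])
n, k = X.shape
p = X.mean(axis=0)                     # item popularities (proportion correct)
order = np.argsort(-p)                 # order items from easiest to hardest
X, p = X[:, order], p[order]

observed_errors, expected_errors = 0.0, 0.0
for i in range(k):
    for j in range(i + 1, k):          # item i is easier than item j
        # Guttman error: failing the easier item while passing the harder one.
        observed_errors += np.sum((X[:, i] == 0) & (X[:, j] == 1))
        expected_errors += n * (1 - p[i]) * p[j]   # expectation under independence

H = 1.0 - observed_errors / expected_errors
print(f"Loevinger's H = {H:.2f}  (rule of thumb: H >= 0.3 suggests a scalable item set)")
```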

16.
Today there are more than 80,000 chemicals in commerce and the environment. The potential human health risks are unknown for the vast majority of these chemicals, as they lack human health risk assessments, toxicity reference values, and risk screening values. We aim to use computational toxicology and quantitative high-throughput screening (qHTS) technologies to fill these data gaps and begin to prioritize these chemicals for additional assessment. In this pilot, we demonstrate how we identified that benzo[k]fluoranthene may induce DNA damage and steatosis, using qHTS data and two separate adverse outcome pathways (AOPs). We also demonstrate how bootstrap natural-spline-based meta-regression can be used to integrate data across multiple assay replicates to generate a concentration-response curve. We used this analysis to calculate an in vitro point of departure of 0.751 μM and risk-specific in vitro concentrations of 0.29 μM and 0.28 μM for 1:1,000 and 1:10,000 risk, respectively, for DNA damage. Based on the available evidence, and considering that only a single HSD17B4 assay is available, we have low overall confidence in the steatosis hazard identification. This case study suggests that coupling qHTS assays with AOPs and ontologies will facilitate hazard identification. Combining this with quantitative evidence integration methods, such as bootstrap meta-regression, may allow risk assessors to identify points of departure and risk-specific internal/in vitro concentrations. These results are sufficient to prioritize the chemicals; in the longer term, however, we will need to estimate external doses for risk screening purposes, for example through margin-of-exposure methods.
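A rough sketch of the bootstrap-plus-spline idea (not the authors' exact natural-spline meta-regression) is shown below: pooled replicate data are resampled, a smoothing spline is fitted to each resample, the concentration reaching a benchmark response is solved for, and a lower percentile of the bootstrap distribution is taken as an in vitro point of departure. The data, benchmark level, and smoothing parameter are all invented.

```python
import numpy as np
from scipy.interpolate import UnivariateSpline
from scipy.optimize import brentq

rng = np.random.default_rng(1)

# Pooled qHTS concentration-response data across assay replicates (invented values):
# log10 concentration (uM) vs. normalized DNA-damage response (0 = control, 1 = max).
logc = np.tile(np.linspace(-3.0, 2.0, 11), 3)              # 3 replicates x 11 concentrations
resp = 1.0 / (1.0 + 10 ** (-logc)) + rng.normal(0, 0.05, logc.size)

benchmark = 0.10                                           # 10% response defines the POD

def benchmark_conc(x, y):
    """Average replicates per concentration, fit a cubic spline, solve for the benchmark."""
    xu = np.unique(x)
    yu = np.array([y[x == v].mean() for v in xu])
    spline = UnivariateSpline(xu, yu, k=3, s=0.01)         # smoothing level is illustrative
    return brentq(lambda lc: spline(lc) - benchmark, xu.min(), xu.max())

# Nonparametric bootstrap over the pooled observations.
idx, boot = np.arange(logc.size), []
for _ in range(500):
    s = rng.choice(idx, size=idx.size, replace=True)
    try:
        boot.append(benchmark_conc(logc[s], resp[s]))
    except ValueError:                                     # resample without a sign change
        continue

pod_um = 10 ** np.percentile(boot, 5)                      # lower 5th percentile as in vitro POD
print(f"Bootstrap in vitro point of departure: {pod_um:.3g} uM")
```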

17.
Risk Analysis, 2018, 38(10): 2073–2086
The guidelines for setting environmental quality standards are increasingly based on probabilistic risk assessment, reflecting a growing general awareness of the need for probabilistic procedures. One of the commonly used tools in probabilistic risk assessment is the species sensitivity distribution (SSD), which represents the proportion of species in a biological assemblage that are affected as a function of exposure to a specific toxicant. Our focus is on the inverse use of the SSD curve, with the aim of estimating the concentration, HCp, of a toxic compound that is hazardous to p% of the biological community under study. Toward this end, we propose the use of robust statistical methods to take into account the presence of outliers or apparent skew in the data, which may occur without any ecological basis. A robust approach exploits the full neighborhood of a parametric model, enabling the analyst to account for typical real-world deviations from ideal models. We examine two classic HCp estimation approaches and consider robust versions of these estimators. In addition, we use data transformations in conjunction with robust estimation methods in the case of heteroscedasticity. Different scenarios using real data sets as well as simulated data are presented to illustrate and compare the proposed approaches. These scenarios show that the use of robust estimation methods enhances HCp estimation.
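The sketch below contrasts a classical log-normal SSD fit with one simple robust alternative (median and MAD replacing the mean and standard deviation) when estimating an HC5; the species toxicity values, including the outlier, are invented, and the robust estimators actually studied in the paper may differ.

```python
import numpy as np
from scipy import stats

# Hypothetical species toxicity values (e.g., chronic NOECs, ug/L) for one toxicant,
# including one suspiciously low outlier that could dominate a classical fit.
noec = np.array([12.0, 18.0, 25.0, 34.0, 40.0, 55.0, 70.0, 95.0, 130.0, 0.2])
logx = np.log10(noec)
p = 0.05                                        # HC5: hazardous to 5% of species

# Classical SSD: normal distribution fitted to log10 toxicity values.
mu, sigma = logx.mean(), logx.std(ddof=1)
hc5_classical = 10 ** stats.norm.ppf(p, mu, sigma)

# Robust alternative: median and MAD-based scale replace the mean and SD,
# reducing the influence of the outlying value.
mu_rob = np.median(logx)
sigma_rob = stats.median_abs_deviation(logx, scale="normal")
hc5_robust = 10 ** stats.norm.ppf(p, mu_rob, sigma_rob)

print(f"HC5 (classical fit): {hc5_classical:.2f} ug/L")
print(f"HC5 (robust fit):    {hc5_robust:.2f} ug/L")
```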

18.
We conducted a regional-scale integrated ecological and human health risk assessment by applying the relative risk model with Bayesian networks (BN-RRM) to a case study of the South River, Virginia, mercury-contaminated site. Risk to four ecological services of the South River (human health, water quality, recreation, and the recreational fishery) was evaluated using a multiple stressor-multiple endpoint approach. These four ecological services were selected as endpoints based on stakeholder feedback and prioritized management goals for the river. The BN-RRM approach allowed the calculation of relative risk to 14 biotic, human health, recreation, and water quality endpoints from chemical and ecological stressors in five risk regions of the South River. Results indicated that water quality and the recreational fishery were the ecological services at highest risk in the South River. Human health risk for users of the South River was low relative to the risk to other endpoints. Risk to recreation in the South River was moderate, with little spatial variability among the five risk regions. Sensitivity and uncertainty analysis identified the stressors and other parameters that influence risk for each endpoint in each risk region. This research demonstrates a probabilistic approach to integrated ecological and human health risk assessment that considers the effects of chemical and ecological stressors across the landscape.
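The mechanics of a BN-RRM calculation can be illustrated with a tiny, hypothetical network fragment: an endpoint node is conditioned on two stressor nodes through a conditional probability table, and the endpoint's risk distribution is obtained by marginalizing over the stressor states, which can differ by risk region. All node definitions and probabilities below are invented; the actual South River model is far richer.

```python
import numpy as np

# Toy discrete Bayesian-network fragment: endpoint "recreational fishery condition"
# (good / fair / poor) conditioned on sediment mercury (low / med / high) and
# habitat state (intact / degraded).

# Conditional probability table P(endpoint | mercury, habitat), shape (3, 2, 3).
cpt = np.array([
    # habitat = intact        habitat = degraded
    [[0.80, 0.15, 0.05],      [0.60, 0.25, 0.15]],   # mercury = low
    [[0.55, 0.30, 0.15],      [0.35, 0.35, 0.30]],   # mercury = med
    [[0.25, 0.35, 0.40],      [0.10, 0.30, 0.60]],   # mercury = high
])

def endpoint_distribution(p_mercury, p_habitat):
    """Marginalize over the stressor nodes: sum_{m,h} P(e|m,h) P(m) P(h)."""
    return np.einsum("mhe,m,h->e", cpt, p_mercury, p_habitat)

# Two hypothetical risk regions differing in their stressor state probabilities.
regions = {
    "upstream (region 1)":   {"mercury": [0.7, 0.2, 0.1], "habitat": [0.8, 0.2]},
    "downstream (region 4)": {"mercury": [0.1, 0.3, 0.6], "habitat": [0.4, 0.6]},
}

for name, r in regions.items():
    good, fair, poor = endpoint_distribution(np.array(r["mercury"]), np.array(r["habitat"]))
    print(f"{name}: P(poor fishery condition) = {poor:.2f}")
```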

19.
Quantitative Approaches in Use to Assess Cancer Risk

20.
The selection and use of chemicals and materials with less hazardous profiles reflects a paradigm shift from reliance on risk minimization through exposure controls to hazard avoidance. This article introduces risk assessment and alternatives assessment frameworks in order to clarify the misconception that alternatives assessment is a less effective tool for guiding decision making, discusses factors promoting the use of each framework, and identifies how and when application of each framework is most effective. As part of an assessor's decision to select one framework over the other, it is critical to recognize that each framework is intended to perform different functions. Although the two frameworks share a number of similarities (such as identifying hazards and assessing exposure), an alternatives assessment provides a more realistic framework for selecting environmentally preferable chemicals because of its primary reliance on hazard assessment and only secondary reliance on exposure assessment. As with other life cycle impacts, the hazard of a chemical is inherent, and although it may be possible to minimize exposure (and thereby reduce risk), it is challenging to assess such exposures across a chemical's life cycle. Through increased use of alternatives assessments at the initial stage of material or product design, there will be less reliance on post facto risk-based assessment techniques, because the potential for harm is significantly reduced, if not avoided, negating the need to assess risk in the first place.
