Similar Literature (20 records found)
1.
This study assessed the health risks via inhalation and derived an occupational exposure limit (OEL) for the carbon nanotube (CNT) group as a whole rather than for individual CNT materials. We devised two methods: the integration of intratracheal instillation (IT) data with inhalation (IH) data, and the "biaxial approach." A four-week IH test and an IT test were performed in rats exposed to representative materials to obtain the no-observed-adverse-effect level (NOAEL), from which the OEL was derived. We used the biaxial approach to conduct a relative toxicity assessment of six types of CNTs. An OEL of 0.03 mg/m3 was selected as the criterion for the CNT group. We proposed that the OEL's period of validity be limited to 15 years, and we adopted adaptive management, in which the values are reviewed whenever new data are obtained. The toxicity level was found to be correlated with the Brunauer-Emmett-Teller (BET) specific surface area (BET-SSA) of the CNT, suggesting that the BET-SSA has potential for use in toxicity estimation. We used published exposure data and the measurement results of dustiness tests to compute the risk in relation to particle size at the workplace, and showed that controlling micron-sized respirable particles is of utmost importance. Our genotoxicity studies indicated that CNT does not directly interact with genetic material. They supported the view that, even if CNT is genotoxic, the genotoxicity is secondary, mediated by oxidative DNA damage from free radicals generated during CNT-elicited inflammation. Secondary genotoxicity appears to involve a threshold.
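As a rough illustration of the final derivation step described above (going from an animal NOAEL to an OEL), the sketch below divides a hypothetical four-week rat NOAEL by assumed uncertainty factors. The NOAEL and factor values are placeholders for demonstration, not the values used in the study.

```python
# Illustrative sketch only: deriving an OEL from a subchronic rat NOAEL by
# applying uncertainty factors. The factor values below are assumptions for
# demonstration, not those used in the study.

def derive_oel(noael_mg_m3: float, uf_interspecies: float = 10.0,
               uf_duration: float = 3.0) -> float:
    """Divide the animal NOAEL by a product of uncertainty factors."""
    return noael_mg_m3 / (uf_interspecies * uf_duration)

if __name__ == "__main__":
    # Hypothetical NOAEL of 1 mg/m3 from a 4-week rat inhalation test.
    print(f"OEL ~= {derive_oel(1.0):.3f} mg/m3")
```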

2.
Risk Analysis (2018), 38(5): 1052–1069
This study investigated whether, in the absence of chronic noncancer toxicity data, short-term noncancer toxicity data can be used to predict chronic toxicity effect levels by focusing on the dose–response relationship rather than a single critical effect. Data from National Toxicology Program (NTP) technical reports were extracted and modeled using the Environmental Protection Agency's Benchmark Dose Software. Best-fit and minimum benchmark doses (BMDs) and benchmark dose lower limits (BMDLs) were modeled for all NTP-pathologist-identified significant nonneoplastic lesions, final mean body weight, and mean organ weight for 41 chemicals tested by NTP between 2000 and 2012. Models were then developed at the chemical level using orthogonal regression techniques to predict chronic (two-year) noncancer health effect levels from the results of the short-term (three-month) toxicity data. The findings indicate that short-term animal studies may reasonably provide a quantitative estimate of a chronic BMD or BMDL. This can allow faster development of human health toxicity values for risk assessment of chemicals that lack chronic toxicity data.
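The orthogonal regression step lends itself to scipy's ODR routines. The sketch below, with synthetic log-BMD values standing in for the NTP-derived data, fits a straight line while treating both axes as error-prone.

```python
# A minimal sketch of orthogonal regression using scipy.odr: predicting
# chronic log10(BMD) from short-term log10(BMD). Data are synthetic,
# not the NTP values.
import numpy as np
from scipy import odr

rng = np.random.default_rng(0)
log_bmd_short = rng.uniform(0, 3, 41)                  # 41 chemicals (synthetic)
log_bmd_chronic = 0.9 * log_bmd_short - 0.5 + rng.normal(0, 0.2, 41)

def linear(beta, x):
    return beta[0] * x + beta[1]

model = odr.Model(linear)
data = odr.RealData(log_bmd_short, log_bmd_chronic)    # errors in both axes
fit = odr.ODR(data, model, beta0=[1.0, 0.0]).run()
slope, intercept = fit.beta
print(f"chronic ~ {slope:.2f} * short-term + {intercept:.2f}")
```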

3.
Essential elements such as copper and manganese may demonstrate U‐shaped exposure‐response relationships due to toxic responses occurring as a result of both excess and deficiency. Previous work on a copper toxicity database employed CatReg, a software program for categorical regression developed by the U.S. Environmental Protection Agency, to model copper excess and deficiency exposure‐response relationships separately. This analysis involved the use of a severity scoring system to place diverse toxic responses on a common severity scale, thereby allowing their inclusion in the same CatReg model. In this article, we present methods for simultaneously fitting excess and deficiency data in the form of a single U‐shaped exposure‐response curve, the minimum of which occurs at the exposure level that minimizes the probability of an adverse outcome due to either excess or deficiency (or both). We also present a closed‐form expression for the point at which the exposure‐response curves for excess and deficiency cross, corresponding to the exposure level at which the risk of an adverse outcome due to excess is equal to that for deficiency. The application of these methods is illustrated using the same copper toxicity database noted above. The use of these methods permits the analysis of all available exposure‐response data from multiple studies expressing multiple endpoints due to both excess and deficiency. The exposure level corresponding to the minimum of this U‐shaped curve, and the confidence limits around this exposure level, may be useful in establishing an acceptable range of exposures that minimize the overall risk associated with the agent of interest.
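A minimal sketch of the core idea, assuming illustrative logistic exposure-response forms for deficiency and excess: the combined U-shaped risk curve is minimized numerically, and the closed-form crossing point follows from equating the two linear predictors. All parameter values are invented, not fitted CatReg values.

```python
# Sketch: combine excess and deficiency exposure-response curves into one
# U-shaped risk curve; u is log-exposure. Parameters are illustrative.
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.special import expit

a_def, b_def = 2.0, 3.0    # deficiency: risk falls as log-exposure u rises
a_exc, b_exc = -8.0, 2.5   # excess: risk rises with u

def p_def(u): return expit(a_def - b_def * u)
def p_exc(u): return expit(a_exc + b_exc * u)
def p_total(u):            # adverse outcome from either cause, assuming independence
    return 1.0 - (1.0 - p_def(u)) * (1.0 - p_exc(u))

# Closed-form crossing point where excess and deficiency risks are equal:
u_cross = (a_def - a_exc) / (b_def + b_exc)
u_min = minimize_scalar(p_total, bounds=(u_cross - 3, u_cross + 3),
                        method="bounded").x
print(f"risks cross at exposure e^{u_cross:.2f}; total risk minimized at e^{u_min:.2f}")
```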

4.
Dose‐response analysis of binary developmental data (e.g., implant loss, fetal abnormalities) is best done using individual fetus data (identified to litter) or litter‐specific statistics such as number of offspring per litter and proportion abnormal. However, such data are not often available to risk assessors. Scientific articles usually present only dose‐group summaries for the number or average proportion abnormal and the total number of fetuses. Without litter‐specific data, it is not possible to estimate variances correctly (often characterized as a problem of overdispersion, intralitter correlation, or “litter effect”). However, it is possible to use group summary data when the design effect has been estimated for each dose group. Previous studies have demonstrated useful dose‐response and trend test analyses based on design effect estimates using litter‐specific data from the same study. This simplifies the analysis but does not help when litter‐specific data are unavailable. In the present study, we show that summary data on fetal malformations can be adjusted satisfactorily using estimates of the design effect based on historical data. When adjusted data are then analyzed with models designed for binomial responses, the resulting benchmark doses are similar to those obtained from analyzing litter‐level data with nested dichotomous models.
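A sketch of the two steps implied above, with synthetic numbers: estimate a design effect (DEFF) from historical litter-level data using a clustered (ratio-estimator) variance, then divide summary counts from a litter-blind study by it to obtain effective binomial data.

```python
# Sketch: design effect from historical litter data, applied to summary counts.
# All data are synthetic.
import numpy as np

rng = np.random.default_rng(6)
litter_n = rng.integers(8, 14, size=30)              # historical litter sizes
p_litter = rng.beta(2.0, 18.0, size=30)              # litter-specific risks
litter_x = rng.binomial(litter_n, p_litter)          # affected per litter

N, X, m = litter_n.sum(), litter_x.sum(), len(litter_n)
p_hat = X / N
# Clustered (ratio-estimator) variance of p_hat vs. simple binomial variance:
v_clust = m / (m - 1) * np.sum((litter_x - litter_n * p_hat) ** 2) / N**2
v_binom = p_hat * (1 - p_hat) / N
deff = v_clust / v_binom
print(f"estimated design effect: {deff:.2f}")

# Apply to a summary-only dose group from another study: the effective counts
# behave approximately binomially, so standard quantal models can be fit.
affected, total = 25.0, 240.0
print(f"effective counts: {affected / deff:.1f} / {total / deff:.1f}")
```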

5.
This study presents a tree-based logistic regression approach to assessing work zone casualty risk, defined as the probability of a vehicle occupant being killed or injured in a work zone crash. First, a decision tree approach is employed to determine the tree structure and interacting factors. Based on crash data from the Michigan M-94/I-94/I-94BL/I-94BR highway work zone, an optimal tree comprising four leaf nodes is determined, and the interacting factors are found to be airbag, occupant identity (i.e., driver or passenger), and gender. The data are then split into four groups according to the tree structure. Finally, a logistic regression analysis is conducted separately for each group. The results show that the proposed approach outperforms the pure decision tree model because it can examine the marginal effects of risk factors. Compared with the pure logistic regression method, the proposed approach avoids variable interaction effects and thus significantly improves prediction accuracy.
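A minimal sketch of the two-stage approach with synthetic data and scikit-learn: a small decision tree defines four leaves, and a separate logistic regression is then fit within each leaf.

```python
# Sketch of tree-based logistic regression: tree splits the data, then one
# logistic model per leaf. Data are synthetic stand-ins for crash records.
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
X = rng.normal(size=(2000, 5))              # e.g., airbag, occupant identity, gender, ...
y = (X[:, 0] + 0.5 * X[:, 1] * X[:, 2] + rng.normal(size=2000) > 0).astype(int)

tree = DecisionTreeClassifier(max_leaf_nodes=4, random_state=0).fit(X, y)
leaf_ids = tree.apply(X)                    # assign each record to a leaf

models = {}
for leaf in np.unique(leaf_ids):
    mask = leaf_ids == leaf
    if len(np.unique(y[mask])) == 2:        # need both classes to fit
        models[leaf] = LogisticRegression().fit(X[mask], y[mask])
print(f"fitted logistic models for leaves: {sorted(models)}")
```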

6.
Mitchell J. Small, Risk Analysis (2011), 31(10): 1561–1575
A methodology is presented for assessing the information value of an additional dosage experiment in existing bioassay studies. The analysis demonstrates the potential reduction in the uncertainty of toxicity metrics derived from expanded studies, providing insights for future studies. Bayesian methods are used to fit alternative dose‐response models using Markov chain Monte Carlo (MCMC) simulation for parameter estimation and Bayesian model averaging (BMA) is used to compare and combine the alternative models. BMA predictions for benchmark dose (BMD) are developed, with uncertainty in these predictions used to derive the lower bound BMDL. The MCMC and BMA results provide a basis for a subsequent Monte Carlo analysis that backcasts the dosage where an additional test group would have been most beneficial in reducing the uncertainty in the BMD prediction, along with the magnitude of the expected uncertainty reduction. Uncertainty reductions are measured in terms of reduced interval widths of predicted BMD values and increases in BMDL values that occur as a result of this reduced uncertainty. The methodology is illustrated using two existing data sets for TCDD carcinogenicity, fitted with two alternative dose‐response models (logistic and quantal‐linear). The example shows that an additional dose at a relatively high value would have been most effective for reducing the uncertainty in BMA BMD estimates, with predicted reductions in the widths of uncertainty intervals of approximately 30%, and expected increases in BMDL values of 5–10%. The results demonstrate that dose selection for studies that subsequently inform dose‐response models can benefit from consideration of how these models will be fit, combined, and interpreted.
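The study combines models via MCMC-based Bayesian model averaging; as a lighter-weight sketch of the model-combination step only, the code below fits the same two model forms (logistic and quantal-linear) by maximum likelihood on synthetic quantal data and approximates BMA weights from BIC. Model-specific BMD estimates would then be averaged with these weights.

```python
# BIC-weight approximation to BMA over two dose-response models.
# Data and values are synthetic, not the TCDD bioassays.
import numpy as np
from scipy.optimize import minimize
from scipy.special import expit

dose = np.array([0.0, 1.0, 3.0, 10.0, 30.0])
n = np.full(5, 50)
x = np.array([1, 3, 6, 15, 30])           # responding animals (synthetic)

def make_nll(p_fun):
    def nll(theta):
        p = np.clip(p_fun(theta, dose), 1e-9, 1 - 1e-9)
        return -np.sum(x * np.log(p) + (n - x) * np.log(1 - p))
    return nll

def logistic(th, d):
    return expit(th[0] + th[1] * d)

def quantal_linear(th, d):
    g = expit(th[0])                       # background response
    b = np.exp(np.clip(th[1], -10.0, 10.0))
    return g + (1 - g) * (1 - np.exp(-b * d))

models = {"logistic": logistic, "quantal-linear": quantal_linear}
fits = {name: minimize(make_nll(f), [0.0, 0.0], method="Nelder-Mead")
        for name, f in models.items()}
bic = {name: 2 * r.fun + 2 * np.log(n.sum()) for name, r in fits.items()}
delta = np.array(list(bic.values())) - min(bic.values())
weights = np.exp(-0.5 * delta) / np.exp(-0.5 * delta).sum()
print(dict(zip(bic.keys(), np.round(weights, 3))))   # approximate BMA weights
```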

7.
Standard experimental designs for developmental toxicity studies typically include three or four dose levels in addition to a control group. Some researchers have suggested that designs with more exposure groups would improve dose-response characterization and risk estimation. Such proposals have not, however, been supported by the results of simulation studies, which instead favor fewer dose levels. This discrepancy arises partly because simulations generate data from a known dose–response pattern, making model choice obvious. While the carcinogenicity literature has explored the implications of different study designs, little attention has been given to the role of design in developmental toxicity risk assessment (or in noncancer toxicology generally). In this research, we explore the implications of various experimental designs for developmental toxicity by resampling data from a large study of 2,4,5-trichlorophenoxyacetic acid in mice. We compare the properties of benchmark dose (BMD) estimation for different design strategies by randomly selecting animals within particular dose groups from the entire 2,4,5-T database of over 77,000 birth outcomes to create smaller "pseudo-studies" representative of standard bioassay sample sizes. Our results show that experimental designs that include more dose levels have advantages in terms of risk characterization and estimation.
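A sketch of the resampling strategy, with a synthetic parent data set standing in for the 2,4,5-T database: bioassay-sized pseudo-studies are drawn repeatedly, and the sampling variability of a simple dose-response slope is compared between a 4-group and a 7-group design with the same total number of animals.

```python
# Pseudo-study resampling to compare design stability. The parent data and
# dose-response relationship below are synthetic and illustrative.
import numpy as np

rng = np.random.default_rng(2)
doses = np.array([0., 15., 30., 45., 60., 75., 90.])   # mg/kg/day (illustrative)
p_true = 0.05 + 0.006 * doses
parent = {d: rng.binomial(1, p, 20000) for d, p in zip(doses, p_true)}

def slope_sd(design, n_per_group, reps=500):
    slopes = []
    for _ in range(reps):
        props = [parent[d][rng.integers(0, 20000, n_per_group)].mean()
                 for d in design]
        slopes.append(np.polyfit(design, props, 1)[0])  # linear trend slope
    return np.std(slopes)

print(f"4 groups x 175 animals: slope SD = {slope_sd(doses[[0, 2, 4, 6]], 175):.5f}")
print(f"7 groups x 100 animals: slope SD = {slope_sd(doses, 100):.5f}")
```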

8.
Young people are exposed to and engage in online risky activities, such as disclosing personal information and making unknown friends online. Little research has examined the psychological mechanisms underlying young people's online risk taking. Drawing on fuzzy trace theory, we examined developmental differences in adolescents' and young adults' online risk taking and assessed whether differential reliance on gist representations (based on vague, intuitive knowledge) or verbatim representations (based on specific, factual knowledge) could explain online risk taking. One hundred and twenty-two adolescents (ages 13–17) and 172 young adults (ages 18–24) were asked about their past online risk-taking behavior, their intentions to engage in future risky online behavior, and their gist and verbatim representations. Adolescents had significantly higher intentions to take online risks than young adults. Past risky online behaviors were positively associated with future intentions to take online risks for adolescents and negatively for young adults. Gist representations about risk were negatively correlated with intentions to take risks online in both age groups, while verbatim representations were positively correlated with online risk intentions, particularly among adolescents. Our results provide novel insights into the mechanisms underlying adolescents' and young adults' online risk taking, suggesting the need to tailor the representation of online risk information to different age groups.

9.
10.
We construct measures of net private and public capital flows for a large cross‐section of developing countries considering both creditor and debtor side of the international debt transactions. Using these measures, we demonstrate that sovereign‐to‐sovereign transactions account for upstream capital flows and global imbalances. Specifically, we find that (i) international net private capital flows (inflows minus outflows of private capital) are positively correlated with countries' productivity growth, (ii) net sovereign debt flows (government borrowing minus reserves) are negatively correlated with growth only if net public debt is financed by another sovereign, (iii) net public debt financed by private creditors is positively correlated with growth, (iv) public savings are strongly positively correlated with growth, whereas correlation between private savings and growth is flat and statistically insignificant. These empirical facts contradict the conventional wisdom and constitute a challenge for the existing theories on upstream capital flows and global imbalances.

11.
Developmental anomalies induced by toxic chemicals may be identified using laboratory experiments with rats, mice, or rabbits. Multinomial responses of fetuses from the same mother are often positively correlated, resulting in overdispersion relative to multinomial variation. In this article, a simple data transformation based on the concept of generalized design effects due to Rao and Scott is proposed for dose-response modeling of developmental toxicity. After scaling the original multinomial data by the average design effect, standard methods for the analysis of uncorrelated multinomial data can be applied. Benchmark doses derived using this approach are comparable to those obtained using generalized estimating equations with an extended Dirichlet-trinomial covariance function to describe the dispersion of the original data. This empirical agreement, coupled with a large-sample theoretical justification of the Rao-Scott transformation, confirms the applicability of the proposed statistical methods for developmental toxicity risk assessment.
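A minimal sketch of the transformation itself, with synthetic trinomial counts and illustrative design effects: dividing each dose group's counts by its average design effect yields effective counts to which standard uncorrelated-multinomial methods can be applied.

```python
# Rao-Scott-style scaling of trinomial developmental outcomes.
# Counts and design effects below are synthetic and illustrative.
import numpy as np

# columns: normal, malformed, dead (fetus counts per dose group)
counts = np.array([[230., 10., 10.],
                   [200., 25., 20.],
                   [150., 55., 45.]])
deff = np.array([1.6, 2.0, 2.3])    # average design effects per dose group

scaled = counts / deff[:, None]     # effective, approximately uncorrelated counts
print(np.round(scaled, 1))
```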

12.
Dose‐response models are the essential link between exposure assessment and computed risk values in quantitative microbial risk assessment, yet the uncertainty that is inherent to computed risks because the dose‐response model parameters are estimated using limited epidemiological data is rarely quantified. Second‐order risk characterization approaches incorporating uncertainty in dose‐response model parameters can provide more complete information to decisionmakers by separating variability and uncertainty to quantify the uncertainty in computed risks. Therefore, the objective of this work is to develop procedures to sample from posterior distributions describing uncertainty in the parameters of exponential and beta‐Poisson dose‐response models using Bayes's theorem and Markov chain Monte Carlo (in OpenBUGS). The theoretical origins of the beta‐Poisson dose‐response model are used to identify a decomposed version of the model that enables Bayesian analysis without the need to evaluate Kummer confluent hypergeometric functions. Herein, it is also established that the beta distribution in the beta‐Poisson dose‐response model cannot address variation among individual pathogens, criteria to validate use of the conventional approximation to the beta‐Poisson model are proposed, and simple algorithms to evaluate actual beta‐Poisson probabilities of infection are investigated. The developed MCMC procedures are applied to analysis of a case study data set, and it is demonstrated that an important region of the posterior distribution of the beta‐Poisson dose‐response model parameters is attributable to the absence of low‐dose data. This region includes beta‐Poisson models for which the conventional approximation is especially invalid and in which many beta distributions have an extreme shape with questionable plausibility.
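The decomposed Bayesian analysis itself is beyond a snippet, but the contrast the abstract draws between the exact beta-Poisson model and its conventional approximation is easy to reproduce: scipy's Kummer confluent hypergeometric function gives the exact form. Parameter values below are illustrative.

```python
# Exact beta-Poisson, P(d) = 1 - 1F1(alpha, alpha + beta, -d), versus the
# conventional approximation. Parameters are illustrative, not fitted values.
import numpy as np
from scipy.special import hyp1f1

alpha, beta = 0.25, 40.0
dose = np.logspace(-1, 3, 5)

p_exact = 1.0 - hyp1f1(alpha, alpha + beta, -dose)    # exact beta-Poisson
p_approx = 1.0 - (1.0 + dose / beta) ** (-alpha)      # conventional approximation
for d, pe, pa in zip(dose, p_exact, p_approx):
    print(f"dose {d:8.1f}: exact {pe:.4f}  approx {pa:.4f}")
```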

13.
The benchmark dose (BMD) approach is emerging as a replacement for determination of the no-observed-adverse-effect level (NOAEL) in noncancer risk assessment. This possibility raises the issue of whether current study designs for endpoints such as developmental toxicity, optimized for detecting pairwise comparisons, could be improved for the purpose of calculating BMDs. In this paper, we examine the effects of various aspects of study design (number of dose groups, dose spacing, dose placement, and sample size per dose group) on BMDs for two endpoints of developmental toxicity (the incidence of abnormalities and of reduced fetal weight). Design performance was judged by the mean-squared error (reflecting variance and bias) of the maximum likelihood estimate (MLE), from the log-logistic model, of the dose at the 5% added risk level (the likely target risk for a benchmark calculation), as well as by the length of its 95% confidence interval (the lower bound of which is the BMD). We found that of the designs evaluated, the best results were obtained when two dose levels with response rates above the background level, one of which was near the ED05, were present. This situation is more likely to occur with more, rather than fewer, dose levels per experiment. In this instance, there was virtually no advantage in increasing the sample size from 10 to 20 litters per dose group. If neither of the two dose groups with response rates above the background level was near the ED05, satisfactory results were still obtained, but the BMDs tended to be more conservative (i.e., lower). If only one dose level with a response rate above the background level was present, and it was near the ED05, reasonable results for the MLE and BMD were obtained, but here we observed benefits of larger dose group sizes. The poorest results were obtained when only a single group with an elevated response rate was present and its response rate was much greater than the ED05. The results indicate that while the benchmark dose approach is readily applicable to standard study designs and generally observed dose-responses in developmental assays, some minor design modifications would increase the accuracy and precision of the BMD.
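A sketch of the estimation target used above: fit a log-logistic quantal model with background by maximum likelihood (synthetic data) and solve in closed form for the dose giving 5% added risk.

```python
# Log-logistic quantal model: P(d) = g + (1 - g) * expit(a + b * ln d).
# Added risk P(d) - P(0) = 0.05 inverts in closed form. Data are synthetic.
import numpy as np
from scipy.optimize import minimize
from scipy.special import expit, logit

dose = np.array([0.0, 10.0, 50.0, 150.0])
n = np.full(4, 120)                         # ~10 litters x 12 fetuses per group
x = np.array([6, 10, 30, 80])               # affected fetuses (synthetic)

def prob(theta, d):
    g, a, b = expit(theta[0]), theta[1], theta[2]
    out = np.full_like(d, g, dtype=float)
    pos = d > 0
    out[pos] = g + (1 - g) * expit(a + b * np.log(d[pos]))
    return out

def nll(theta):
    p = np.clip(prob(theta, dose), 1e-9, 1 - 1e-9)
    return -np.sum(x * np.log(p) + (n - x) * np.log(1 - p))

fit = minimize(nll, [-3.0, -5.0, 1.0], method="Nelder-Mead")
g, a, b = expit(fit.x[0]), fit.x[1], fit.x[2]
# Solve (1 - g) * expit(a + b * ln d) = 0.05 for d:
bmd05 = np.exp((logit(0.05 / (1 - g)) - a) / b)
print(f"MLE of the dose at 5% added risk: {bmd05:.1f}")
```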

14.
The objectives of this study are to understand tradeoffs between forest carbon and timber values, and evaluate the impact of uncertainty in improved forest management (IFM) carbon offset projects to improve forest management decisions. The study uses probabilistic simulation of uncertainty in financial risk for three management scenarios (clearcutting in 45‐ and 65‐year rotations and no harvest) under three carbon price schemes (historic voluntary market prices, cap and trade, and carbon prices set to equal net present value (NPV) from timber‐oriented management). Uncertainty is modeled for value and amount of carbon credits and wood products, the accuracy of forest growth model forecasts, and four other variables relevant to American Carbon Registry methodology. Calculations use forest inventory data from a 1,740 ha forest in western Washington State, using the Forest Vegetation Simulator (FVS) growth model. Sensitivity analysis shows that FVS model uncertainty contributes more than 70% to overall NPV variance, followed in importance by variability in inventory sample (3–14%), and short‐term prices for timber products (8%), while variability in carbon credit price has little influence (1.1%). At regional average land‐holding costs, a no‐harvest management scenario would become revenue‐positive at a carbon credit break‐point price of $14.17/Mg carbon dioxide equivalent (CO2e). IFM carbon projects are associated with a greater chance of both large payouts and large losses to landowners. These results inform policymakers and forest owners of the carbon credit price necessary for IFM approaches to equal or better the business‐as‐usual strategy, while highlighting the magnitude of financial risk and reward through probabilistic simulation.
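A stripped-down sketch of the probabilistic comparison, with all distributions and costs as illustrative assumptions: Monte Carlo NPVs for a no-harvest carbon scenario versus timber management, and the approximate break-even carbon credit price.

```python
# Monte Carlo comparison of carbon-project vs. timber NPV. All values
# below are invented for illustration, not from the study's inventory.
import numpy as np

rng = np.random.default_rng(3)
N = 10_000
timber_npv = rng.normal(3000, 900, N)           # $/ha, timber scenario
credits = rng.normal(250, 80, N).clip(min=0)    # Mg CO2e/ha over the project
hold_cost = 500                                 # $/ha, land-holding cost

def carbon_npv(price):
    return credits * price - hold_cost

for price in (5, 10, 15, 20):
    win = (carbon_npv(price) - timber_npv > 0).mean()
    print(f"${price}/Mg CO2e: P(carbon beats timber) = {win:.2f}")

break_even = (timber_npv.mean() + hold_cost) / credits.mean()
print(f"approximate break-even price: ${break_even:.2f}/Mg CO2e")
```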

15.
This article compares two nonparametric tree-based models, quantile regression forests (QRF) and Bayesian additive regression trees (BART), for predicting storm outages on an electric distribution network in Connecticut, USA. We evaluated point estimates and prediction intervals of outage predictions for both models using high-resolution weather, infrastructure, and land use data for 89 storm events (including hurricanes, blizzards, and thunderstorms). We found that, spatially, BART predicted more accurate point estimates than QRF. However, QRF produced better prediction intervals at high spatial resolutions (2-km grid cells and towns), while BART predictions aggregated more effectively to coarser resolutions (divisions and service territory). We also found that predictive accuracy depended on the season (e.g., tree-leaf condition, storm characteristics) and that predictions were most accurate for winter storms. Given the merits of each model, we suggest that BART and QRF be implemented together to show the complete picture of a storm's potential impact on the electric distribution network, which would allow a utility to make better decisions about allocating prestorm resources.
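True quantile regression forests keep the full distributions in each leaf; the sketch below uses a cruder but related device, empirical quantiles across a random forest's per-tree predictions, to show how point estimates and prediction intervals are read off. Data are synthetic stand-ins for the storm predictors.

```python
# Rough stand-in for QRF-style prediction intervals via per-tree quantiles.
# (Meinshausen's QRF is more principled; this is a simpler approximation.)
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(4)
X = rng.normal(size=(1500, 6))                           # weather/infrastructure features
y = np.exp(0.8 * X[:, 0]) + rng.gamma(2.0, 1.0, 1500)    # skewed outage counts

rf = RandomForestRegressor(n_estimators=300, random_state=0).fit(X[:1000], y[:1000])
per_tree = np.stack([t.predict(X[1000:]) for t in rf.estimators_])
lo, hi = np.percentile(per_tree, [5, 95], axis=0)        # 90% interval per cell
point = per_tree.mean(axis=0)
print(f"first cell: point {point[0]:.1f}, interval [{lo[0]:.1f}, {hi[0]:.1f}]")
```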

16.
In order to develop a dose-response model for SARS coronavirus (SARS-CoV), pooled data sets for infection of transgenic mice susceptible to SARS-CoV and for infection of mice with murine hepatitis virus strain 1, which may be a clinically relevant model of SARS, were fit to beta-Poisson and exponential models by the maximum likelihood method. The exponential model (k = 4.1 × 10²) could describe the dose-response relationship of the pooled data sets. The beta-Poisson model did not provide a statistically significant improvement in fit. With the exponential model, the infectivity of SARS-CoV was calculated and compared with those of other coronaviruses. The doses of SARS-CoV corresponding to 10% and 50% responses (illness) were estimated at 43 and 280 PFU, respectively. Its estimated infectivity was comparable to that of HCoV-229E, a known agent of the human common cold, and similar to those of some animal coronaviruses belonging to the same genetic group. Moreover, the exponential model was applied to the analysis of epidemiological data from the SARS outbreak that occurred at an apartment complex in Hong Kong in 2003. The estimated dose of SARS-CoV for apartment residents during the outbreak, back-calculated from the reported number of cases, ranged from 16 to 160 PFU/person, depending on the floor. The exponential model developed here is at present the sole dose-response model for SARS-CoV and should help us understand the possibility of a reemergence of SARS.
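The exponential-model arithmetic in the abstract can be reproduced directly: with k = 4.1 × 10², P(d) = 1 - exp(-d/k) yields the 10% and 50% illness doses, and inverting it back-calculates dose from an attack rate. The attack rate used below is an illustrative placeholder.

```python
# Exponential dose-response model for SARS-CoV with the abstract's k value.
import numpy as np

k = 4.1e2                                  # fitted exponential parameter

def p_ill(dose):
    """Probability of response at a given dose (PFU)."""
    return 1.0 - np.exp(-dose / k)

n10 = -k * np.log(1 - 0.10)                # dose at 10% response (~43 PFU)
n50 = -k * np.log(1 - 0.50)                # dose at 50% response (~280 PFU)
print(f"N10 = {n10:.0f} PFU, N50 = {n50:.0f} PFU")

attack_rate = 0.10                         # e.g., fraction ill on one floor (illustrative)
print(f"back-calculated dose: {-k * np.log(1 - attack_rate):.0f} PFU/person")
```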

17.
Risk-benefit analyses are introduced as a new paradigm for old problems. In many cases, however, it is not necessary to perform a full, comprehensive, and expensive quantitative risk-benefit assessment to solve the problem, nor is it always possible, given the lack of required data. The choice of whether to continue from a more qualitative to a fully quantitative risk-benefit assessment can be made using a tiered approach. In this article, this tiered approach is laid out as a decision tree. The approach uses the same four steps as the risk assessment paradigm (hazard and benefit identification, hazard and benefit characterization, exposure assessment, and risk-benefit characterization), albeit in a different order: the exposure assessment is moved forward and dose-response modeling (part of hazard and benefit characterization) is moved to a later stage. The decision tree includes several stop moments, reached whenever the information gathered is sufficient to answer the initial risk-benefit question. The approach has been tested on two food ingredients. The decision tree presented in this article can assist risk-benefit assessors and policymakers, on a case-by-case basis, in making informed choices about when to stop or continue a risk-benefit assessment.
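A toy sketch of the tiered logic: walk through increasingly quantitative tiers and stop at the first "stop moment" at which the available information answers the risk-benefit question. The tier names follow the abstract; the stop criteria here are illustrative placeholders.

```python
# Toy tiered-assessment loop with stop moments. Criteria are illustrative.
from typing import Callable

def tiered_assessment(tiers: list[tuple[str, Callable[[], bool]]]) -> str:
    for name, question_answered in tiers:
        if question_answered():                  # stop moment
            return f"stopped after tier: {name}"
    return "full quantitative risk-benefit assessment required"

result = tiered_assessment([
    ("hazard and benefit identification", lambda: False),
    ("exposure assessment", lambda: True),       # e.g., exposure far below any concern
    ("dose-response modeling", lambda: False),
])
print(result)
```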

18.
The percentage of part-time workers in Italy is very low compared with most European countries. In this paper we try to contribute to an explanation. We use data on the employees of a large Italian company operating in the service sector and apply an econometric framework that allows identification of potential demand and supply. We find that demand and supply are potentially very large on average, but that they are difficult to match at the level of the individual worker and job. The firm may observe employee characteristics that are positively correlated with the employee's propensity for a part-time job but negatively correlated with that employee's profitability in the job. The firm may also use a revealed willingness to switch to a part-time job as a sign that the employee is likely to be unprofitable for the company.

19.
Risk Analysis (2018), 38(6): 1183–1201
In assessing environmental health risks, the risk characterization step synthesizes information gathered in evaluating exposures to stressors together with dose–response relationships, characteristics of the exposed population, and external environmental conditions. This article summarizes key steps of a cumulative risk assessment (CRA), followed by a discussion of considerations for characterizing cumulative risks. Cumulative risk characterizations differ considerably from single-chemical- or single-source-based risk characterizations. First, CRAs typically focus on a specific population instead of a pollutant or pollutant source and should include an evaluation of all relevant sources contributing to exposures in the population and other factors that influence dose–response relationships. Second, CRAs may include influential environmental and population-specific conditions, involving multiple chemical and nonchemical stressors. Third, a CRA may examine multiple health effects, reflecting joint toxicity and the potential for toxicological interactions. Fourth, the complexities often necessitate simplifying methods, including judgment-based and semiquantitative indices that collapse disparate data into numerical scores. Fifth, because of the higher dimensionality and potentially large number of interactions, the information needed to quantify risk is typically incomplete, necessitating an uncertainty analysis. Three approaches that could be used for characterizing risks in a CRA are presented: the multiroute hazard index, stressor grouping by exposure and toxicity, and indices for screening multiple factors and conditions. Other key roles of risk characterization in CRAs are also described, chiefly the translational aspect of including a characterization summary for lay readers (in addition to the technical analysis) and placing the results in the context of the likely risk-based decisions.
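As a small illustration of the first approach named above, a multiroute hazard index can be computed by summing hazard quotients (exposure divided by a route-specific reference value) over chemicals and exposure routes; all values below are invented for demonstration.

```python
# Multiroute hazard index: sum of hazard quotients across chemicals and
# routes. Exposure and reference values are illustrative assumptions.

exposures = {                     # chemical -> route -> daily intake (mg/kg-day)
    "chem_A": {"oral": 0.002, "inhalation": 0.0005},
    "chem_B": {"oral": 0.010, "inhalation": 0.001},
}
references = {                    # chemical -> route -> reference dose
    "chem_A": {"oral": 0.02, "inhalation": 0.005},
    "chem_B": {"oral": 0.05, "inhalation": 0.01},
}

hi = sum(exposures[c][r] / references[c][r]
         for c in exposures for r in exposures[c])
print(f"multiroute hazard index = {hi:.2f}")   # HI > 1 flags potential concern
```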

20.
In acute toxicity testing, organisms are continuously exposed to progressively increasing concentrations of a chemical, and deaths of test organisms are recorded at several selected times. The results of the test are traditionally summarized by a dose-response curve, and the time course of the effect is usually ignored for lack of a suitable model. In this paper, a model integrating the combined effects of dose and exposure duration on response is derived from the biological mechanisms of aquatic toxicity, and a statistically efficient approach for estimating acute toxicity by fitting the proposed model is developed. The procedure has been implemented as software, and a typical data set is used to illustrate the theory and procedure. The new statistical technique is also tested on a database covering a variety of chemicals and fish species.
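The paper's mechanism-derived model is not reproduced in the abstract, so the sketch below fits a generic stand-in, a logistic model in log-concentration and log-time, to synthetic mortality data to show how dose and exposure duration can be modeled jointly.

```python
# Generic dose-and-duration mortality model (a stand-in, not the paper's
# mechanistic model), fit as a logistic GLM on synthetic data.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(5)
conc = rng.uniform(1, 100, 400)                   # mg/L
time = rng.choice([24.0, 48.0, 72.0, 96.0], 400)  # hours observed
eta = -8 + 1.5 * np.log(conc) + 1.0 * np.log(time)
dead = rng.binomial(1, 1 / (1 + np.exp(-eta)))

X = sm.add_constant(np.column_stack([np.log(conc), np.log(time)]))
fit = sm.GLM(dead, X, family=sm.families.Binomial()).fit()
print(fit.params)                                 # intercept, log-conc, log-time slopes
```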
