Similar Articles (20 results)
1.
Modeling for Risk Assessment of Neurotoxic Effects   (Total citations: 2; self-citations: 0; by others: 2)
The regulation of noncancer toxicants, including neurotoxicants, has usually been based upon a reference dose (allowable daily intake). A reference dose is obtained by dividing a no-observed-effect level by uncertainty (safety) factors to account for intraspecies and interspecies sensitivities to a chemical. It is assumed that the risk at the reference dose is negligible, but generally no attempt is made to estimate the risk at the reference dose. A procedure is outlined that provides estimates of risk as a function of dose. The first step is to establish a mathematical relationship between a biological effect and the dose of a chemical. Knowledge of biological mechanisms and/or pharmacokinetics can assist in the choice of plausible mathematical models. The mathematical model provides estimates of average responses as a function of dose. Second, estimates of risk require selection of a distribution of individual responses about the average response given by the mathematical model. In the case of a normal or lognormal distribution, only an estimate of the standard deviation is needed. The third step is to define an adverse level for a response so that the probability (risk) of exceeding that level can be estimated as a function of dose. Because a firm response level at which adverse biological effects occur often cannot be established, it may be necessary to at least establish an abnormal response level that only a small proportion of individuals in an unexposed group would exceed. That is, if a normal range of responses can be established, then the probability (risk) of abnormal responses can be estimated. To illustrate this process, measures of the neurotransmitter serotonin and its metabolite 5-hydroxyindoleacetic acid in specific areas of the brain of rats and monkeys are analyzed after exposure to the neurotoxicant methylenedioxymethamphetamine. These risk estimates are compared with risk estimates from the quantal approach, in which animals are classified as either abnormal or not depending upon abnormal serotonin levels.
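The three-step procedure described above can be sketched numerically. The linear mean model, standard deviation, and cutoff below are illustrative assumptions, not values from the MDMA study:

```python
from statistics import NormalDist

SIGMA = 5.0  # assumed standard deviation of individual responses

def mean_response(dose):
    # Step 1: hypothetical linear model for the mean response
    # (e.g., serotonin level as percent of control) versus dose.
    return 100.0 - 2.0 * dose

# Step 3: "abnormal" = below the level that only 1% of an
# unexposed group would fall under.
cutoff = NormalDist(mean_response(0.0), SIGMA).inv_cdf(0.01)

def risk(dose):
    # Steps 2-3: probability of an abnormally low response at this dose,
    # assuming normally distributed individual responses.
    return NormalDist(mean_response(dose), SIGMA).cdf(cutoff)
```

By construction the risk at dose zero equals the 1% background rate, and it rises monotonically with dose as the mean response shifts toward the cutoff.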

2.
The benchmark dose (BMD) is an exposure level that would induce a small risk increase (BMR level) above the background. The BMD approach to deriving a reference dose for risk assessment of noncancer effects is advantageous in that the estimate of BMD is not restricted to experimental doses and utilizes most available dose-response information. To quantify statistical uncertainty of a BMD estimate, we often calculate and report its lower confidence limit (i.e., BMDL), and may even consider it as a more conservative alternative to BMD itself. Computation of BMDL may involve normal confidence limits to BMD in conjunction with the delta method. Therefore, factors such as small sample size and nonlinearity in model parameters can affect the performance of the delta-method BMDL, and alternative methods are useful. In this article, we propose a bootstrap method to estimate BMDL utilizing a scheme that consists of a resampling of residuals after model fitting and a one-step formula for parameter estimation. We illustrate the method with clustered binary data from developmental toxicity experiments. Our analysis shows that with moderately elevated dose-response data, the distribution of the BMD estimator tends to be left-skewed and bootstrap BMDLs are smaller than delta-method BMDLs on average, hence quantifying risk more conservatively. Statistically, the bootstrap BMDL quantifies the uncertainty of the true BMD more honestly than the delta-method BMDL, as its coverage probability is closer to the nominal level. We find that BMD and BMDL estimates are generally insensitive to model choices provided that the models fit the data comparably well near the region of BMD. Our analysis also suggests that, in the presence of a significant and moderately strong dose-response relationship, the developmental toxicity experiments under the standard protocol support dose-response assessment at 5% BMR for BMD and 95% confidence level for BMDL.
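A bare-bones illustration of the residual-resampling bootstrap idea for a BMDL. For simplicity this uses a continuous linear-through-origin model rather than the clustered binary setting of the article; the data and BMR are invented:

```python
import random

# Toy continuous dose-response data (dose, response); purely illustrative.
doses = [0, 0, 1, 1, 2, 2, 4, 4]
resp = [0.1, -0.1, 0.9, 1.1, 2.1, 1.9, 4.2, 3.8]

def fit_slope(d, y):
    # Least-squares slope through the origin: y ~ b * d.
    return sum(di * yi for di, yi in zip(d, y)) / sum(di * di for di in d)

BMR = 1.0  # benchmark response: an increase of 1 unit over background

b_hat = fit_slope(doses, resp)
bmd_hat = BMR / b_hat
residuals = [yi - b_hat * di for di, yi in zip(doses, resp)]

random.seed(1)
boot_bmds = []
for _ in range(2000):
    # Resample residuals and regenerate responses from the fitted model,
    # then refit and recompute the BMD.
    y_star = [b_hat * di + random.choice(residuals) for di in doses]
    boot_bmds.append(BMR / fit_slope(doses, y_star))

boot_bmds.sort()
bmdl = boot_bmds[int(0.05 * len(boot_bmds))]  # 5th percentile: 95% lower limit
```

The percentile of the bootstrap BMD distribution plays the role the article's BMDL does: a lower confidence bound that reflects the (possibly skewed) sampling distribution of the BMD estimator.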

3.
Upper Confidence Limits on Excess Risk for Quantitative Responses   (Total citations: 8; self-citations: 0; by others: 8)
The definition and observation of clear-cut adverse health effects for continuous (quantitative) responses, such as altered body weights or organ weights, are difficult propositions. Thus, methods of risk assessment commonly used for binary (quantal) toxic responses such as cancer are not directly applicable. In this paper, two methods for calculating upper confidence limits on excess risk for quantitative toxic effects are proposed, based on a particular definition of an adverse quantitative response. The methods are illustrated with data from a dose-response study, and their performance is evaluated with a Monte Carlo simulation study.
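One simple version of the idea, assuming a normal response and a hypothetical estimated standardized mean shift with its standard error: because excess risk is monotone in the mean shift, a one-sided upper limit on the shift maps directly to an upper limit on excess risk.

```python
from statistics import NormalDist

nd = NormalDist()
CUTOFF = nd.inv_cdf(0.99)  # adverse level: exceeded by 1% of controls

def excess_risk(shift):
    # Excess risk when dose shifts the standardized mean response by `shift`:
    # P(Y > cutoff | dose) - P(Y > cutoff | 0).
    return (1 - nd.cdf(CUTOFF - shift)) - 0.01

# Hypothetical estimate of the standardized shift at a given dose,
# with its standard error (e.g., from a fitted dose-response model).
shift_hat, se = 0.8, 0.25

point = excess_risk(shift_hat)
upper = excess_risk(shift_hat + 1.645 * se)  # one-sided 95% upper limit
```

This is only a sketch of the monotone-mapping trick; the paper's two proposed methods are more careful about how the limit on the shift is obtained.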

4.
Essential elements such as copper and manganese may demonstrate U-shaped exposure-response relationships due to toxic responses occurring as a result of both excess and deficiency. Previous work on a copper toxicity database employed CatReg, a software program for categorical regression developed by the U.S. Environmental Protection Agency, to model copper excess and deficiency exposure-response relationships separately. This analysis involved the use of a severity scoring system to place diverse toxic responses on a common severity scale, thereby allowing their inclusion in the same CatReg model. In this article, we present methods for simultaneously fitting excess and deficiency data in the form of a single U-shaped exposure-response curve, the minimum of which occurs at the exposure level that minimizes the probability of an adverse outcome due to either excess or deficiency (or both). We also present a closed-form expression for the point at which the exposure-response curves for excess and deficiency cross, corresponding to the exposure level at which the risk of an adverse outcome due to excess is equal to that for deficiency. The application of these methods is illustrated using the same copper toxicity database noted above. The use of these methods permits the analysis of all available exposure-response data from multiple studies expressing multiple endpoints due to both excess and deficiency. The exposure level corresponding to the minimum of this U-shaped curve, and the confidence limits around this exposure level, may be useful in establishing an acceptable range of exposures that minimize the overall risk associated with the agent of interest.

5.
The neurotoxic effects of chemical agents are often investigated in controlled studies on rodents, with binary and continuous multiple endpoints routinely collected. One goal is to conduct quantitative risk assessment to determine safe dose levels. Yu and Catalano (2005) describe a method for quantitative risk assessment for bivariate continuous outcomes by extending a univariate method of percentile regression. The model is likelihood based and allows for separate dose-response models for each outcome while accounting for the bivariate correlation. The approach to benchmark dose (BMD) estimation is analogous to that for quantal data without having to specify arbitrary cutoff values. In this article, we evaluate the behavior of the BMD relative to background rates, sample size, level of bivariate correlation, dose-response trend, and distributional assumptions. Using simulations, we explore the effects of these factors on the resulting BMD and BMDL distributions. In addition, we illustrate our method with data from a neurotoxicity study of parathion exposure in rats.

6.
Hormetic effects have been observed at low exposure levels based on the dose-response pattern of data from developmental toxicity studies. This indicates that there might actually be a reduced risk of exhibiting toxic effects at low exposure levels. Hormesis implies the existence of a threshold dose level, and there are dose-response models that include parameters to account for the threshold. We propose a function that introduces a parameter to account for hormesis. This function is a subset of the set of all functions that could represent a hormetic dose-response relationship at low exposure levels to toxic agents. We characterize the overall dose-response relationship with a piecewise function that consists of a hormetic U-shaped curve at low dose levels and a logistic curve at high dose levels. We apply our model to a data set from an experiment conducted at the National Toxicology Program (NTP). We also use the beta-binomial distribution to model the litter response data. It can be seen by observing the structure of these data that current experimental designs for developmental studies employ a limited number of dose groups. These designs may not be satisfactory when the goal is to demonstrate the existence of hormesis. In particular, increasing the number of low-level doses improves the power for detecting hormetic effects. Therefore, we also provide the results of simulations that were done to characterize the power of current designs in detecting hormesis and to demonstrate how this power can be improved by altering these designs with the addition of only a few low exposure levels.
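The beta-binomial litter model mentioned above can be written down compactly; the shape parameters here are illustrative, not fitted to the NTP data:

```python
from math import comb, exp, lgamma

def log_beta(a, b):
    # log of the Beta function via log-gamma, for numerical stability.
    return lgamma(a) + lgamma(b) - lgamma(a + b)

def beta_binom_pmf(k, n, a, b):
    """P(k affected pups in a litter of n) when each pup's response
    probability is drawn from Beta(a, b); this induces the extra-binomial
    (intra-litter) correlation that developmental data typically show."""
    return comb(n, k) * exp(log_beta(k + a, n - k + b) - log_beta(a, b))

# Sanity check: the pmf sums to 1 over k = 0..n.
total = sum(beta_binom_pmf(k, 10, 2.0, 5.0) for k in range(11))
```

A hormetic analysis would plug litter likelihoods of this form into the piecewise (U-shaped-then-logistic) dose-response model the abstract describes.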

7.
Estimation of benchmark doses (BMDs) in quantitative risk assessment traditionally is based upon parametric dose-response modeling. It is a well-known concern, however, that if the chosen parametric model is uncertain and/or misspecified, inaccurate and possibly unsafe low-dose inferences can result. We describe a nonparametric approach for estimating BMDs with quantal-response data based on an isotonic regression method, and also study use of corresponding, nonparametric, bootstrap-based confidence limits for the BMD. We explore the confidence limits' small-sample properties via a simulation study, and illustrate the calculations with an example from cancer risk assessment. It is seen that this nonparametric approach can provide a useful alternative for BMD estimation when faced with the problem of parametric model uncertainty.
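An isotonic (monotone nondecreasing) fit to quantal dose-response proportions can be computed with the pool-adjacent-violators algorithm; a BMD could then be read off the resulting step function by interpolation. The data below are made up:

```python
def pava(props, weights):
    """Pool-adjacent-violators: weighted isotonic (nondecreasing) fit to
    observed response proportions ordered by dose."""
    vals, wts = list(props), list(weights)
    blocks = [[i] for i in range(len(vals))]
    i = 0
    while i < len(vals) - 1:
        if vals[i] > vals[i + 1]:
            # Pool the violating pair into one weighted-average block.
            w = wts[i] + wts[i + 1]
            v = (vals[i] * wts[i] + vals[i + 1] * wts[i + 1]) / w
            vals[i:i + 2] = [v]
            wts[i:i + 2] = [w]
            blocks[i:i + 2] = [blocks[i] + blocks[i + 1]]
            i = max(i - 1, 0)
        else:
            i += 1
    out = [0.0] * len(props)
    for v, b in zip(vals, blocks):
        for idx in b:
            out[idx] = v
    return out

# Quantal data: responders/total at each dose, ordered by dose.
responders = [1, 3, 2, 6, 9]
totals = [20, 20, 20, 20, 20]
fit = pava([r / n for r, n in zip(responders, totals)], totals)
```

Here the non-monotone middle pair of proportions (0.15, 0.10) is pooled to 0.125, giving a nondecreasing fit without assuming any parametric curve.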

8.
Since the National Food Safety Initiative of 1997, risk assessment has been an important issue in food safety areas. Microbial risk assessment is a systematic process for describing and quantifying the potential for adverse health effects associated with exposure to microorganisms. Various dose-response models for estimating microbial risks have been investigated. We have considered four two-parameter models and four three-parameter models in order to evaluate variability among the models for microbial risk assessment, using infectivity and illness data from studies with human volunteers exposed to a variety of microbial pathogens. Model variability is measured in terms of estimated ED01s and ED10s, with the view that these effective dose levels correspond to the lower and upper limits of the 1% to 10% risk range generally recommended for establishing benchmark doses in risk assessment. Parameters of the statistical models are estimated using the maximum likelihood method. In this article, a weighted average of effective dose estimates from eight two- and three-parameter dose-response models, with weights determined by the Kullback information criterion, is proposed to address model uncertainties in microbial risk assessment. The proposed procedures for incorporating model uncertainties and making inferences are illustrated with human infection/illness dose-response data sets.
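Model averaging with information-criterion weights can be sketched as follows. The log-likelihoods, parameter counts, and ED10 values are invented, and an AIC-type criterion stands in for the Kullback information criterion the article uses (both penalize fit by model complexity):

```python
from math import exp

# Hypothetical maximized log-likelihoods, parameter counts, and ED10
# estimates for competing dose-response models fitted to the same data.
models = {
    "exponential":  {"loglik": -52.1, "k": 2, "ed10": 3.4},
    "beta-Poisson": {"loglik": -51.6, "k": 2, "ed10": 2.9},
    "log-logistic": {"loglik": -50.9, "k": 3, "ed10": 2.5},
    "log-probit":   {"loglik": -51.2, "k": 3, "ed10": 2.7},
}

# AIC-type penalized criterion: -2*loglik + 2*k.
for m in models.values():
    m["ic"] = -2 * m["loglik"] + 2 * m["k"]

ic_min = min(m["ic"] for m in models.values())
raw = {name: exp(-0.5 * (m["ic"] - ic_min)) for name, m in models.items()}
z = sum(raw.values())
weights = {name: r / z for name, r in raw.items()}

# Model-averaged effective dose.
ed10_avg = sum(weights[name] * models[name]["ed10"] for name in models)
```

The averaged ED necessarily lies between the smallest and largest single-model estimates, and the weights express how strongly the data prefer each model.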

9.
The exposure-response relationship for airborne hexavalent chromium exposure and lung cancer mortality is well described by a linear relative rate model. However, categorical analyses have been interpreted to suggest the presence of a threshold. This study investigates nonlinear features of the exposure response in a cohort of 2,357 chemical workers with 122 lung cancer deaths. In Poisson regression, a simple model representing a two-stage carcinogenesis process was evaluated. In a one-stage context, fractional polynomials were investigated. Cumulative exposure dose metrics were examined corresponding to cumulative exposure thresholds, exposure intensity (concentration) thresholds, dose-rate effects, and declining burden of accumulated effect on future risk. A simple two-stage model of carcinogenesis provided no improvement in fit. The best-fitting one-stage models used simple cumulative exposure with no threshold for exposure intensity and had sufficient power to rule out thresholds as large as 30 μg/m³ CrO3 (16 μg/m³ as Cr(VI)) (one-sided 95% confidence limit, likelihood ratio test). Slightly better-fitting models were observed with cumulative exposure thresholds of 0.03 and 0.5 mg-yr/m³ (as CrO3) with and without an exposure-race interaction term, respectively. With the best model, cumulative exposure thresholds as large as 0.4 mg-yr/m³ CrO3 were excluded (two-sided upper 95% confidence limit, likelihood ratio test). A small departure from dose-rate linearity was observed, corresponding to (intensity)^0.8, but was not statistically significant. Models in which risk-inducing damage burdens declined over time, based on half-lives ranging from 0.1 to 40 years, fit less well than assuming a constant burden. A half-life of 8 years or less was excluded (one-sided 95% confidence limit). Examination of nonlinear features of the hexavalent chromium-lung cancer exposure response in a population used in a recent risk assessment supports using the traditional (lagged) cumulative exposure paradigm: no intensity (concentration) threshold, linearity in intensity, and constant increment in risk following exposure.

10.
We review approaches to dose-response modeling and risk assessment for binary data from developmental toxicity studies. In particular, we focus on jointly modeling fetal death and malformation and use a continuation ratio formulation of the multinomial distribution to provide a model for risk. Generalized estimating equations are used to account for clustering of animals within litters. The fitted model is then used to calculate doses corresponding to a specified level of excess risk. Two methods of arriving at a lower confidence limit or benchmark dose are illustrated and compared. We also discuss models based on single binary end points and compare our approach to a binary analysis of whether or not the animal was 'affected' (either dead or malformed). The models are illustrated using data from four developmental toxicity studies of EG, DEHP, TGDM, and DYME conducted through the National Toxicology Program.

11.
Risk assessment is the process of estimating the likelihood that an adverse effect may result from exposure to a specific health hazard. The process traditionally involves hazard identification, dose-response assessment, exposure assessment, and risk characterization to answer “How many excess cases of disease A will occur in a population of size B due to exposure to agent C at dose level D?” For natural hazards, however, we modify the risk assessment paradigm to answer “How many excess cases of outcome Y will occur in a population of size B due to natural hazard event E of severity D?” Using a modified version involving hazard identification, risk factor characterization, exposure characterization, and risk characterization, we demonstrate that epidemiologic modeling and measures of risk can quantify the risks from natural hazard events. We further extend the paradigm to address mitigation, the equivalent of risk management, to answer “What is the risk for outcome Y in the presence of prevention intervention X relative to the risk for Y in the absence of X?” We use the preventable fraction to estimate the efficacy of mitigation, or reduction in adverse health outcomes as a result of a prevention strategy under ideal circumstances, and further estimate the effectiveness of mitigation, or reduction in adverse health outcomes under typical community-based settings. By relating socioeconomic costs of mitigation to measures of risk, we illustrate that prevention effectiveness is useful for developing cost-effective risk management options.

12.
This article develops a computationally and analytically convenient form of the profile likelihood method for obtaining one-sided confidence limits on scalar-valued functions φ = φ(ψ) of the parameters ψ in a multiparameter statistical model. We refer to this formulation as the likelihood contour method (LCM). In general, the LCM procedure requires iterative solution of a system of nonlinear equations, and good starting values are critical because the equations have at least two solutions corresponding to the upper and lower confidence limits. We replace the LCM equations by the lowest order terms in their asymptotic expansions. The resulting equations can be solved explicitly and have exactly two solutions that are used as starting values for obtaining the respective confidence limits from the LCM equations. This article also addresses the problem of obtaining upper confidence limits for the risk function in a dose-response model in which responses are normally distributed. Because of normality, considerable analytic simplification is possible and solution of the LCM equations reduces to an easy one-dimensional root-finding problem. Simulation is used to study the small-sample coverage of the resulting confidence limits.
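For a toy normal model with known σ, the likelihood-contour idea reduces to one-dimensional root finding, and the lowest-order asymptotic solution happens to be exact, which makes the two easy to compare; the data are invented:

```python
from math import sqrt

# Toy data: normal responses with known sigma; we profile the mean.
data = [1.2, 0.8, 1.5, 1.1, 0.9, 1.3]
SIGMA = 0.5
n = len(data)
mu_hat = sum(data) / n

def two_delta_ll(mu):
    # 2*(loglik(mu_hat) - loglik(mu)) for normal data with known sigma.
    return n * (mu - mu_hat) ** 2 / SIGMA ** 2

TARGET = 1.645 ** 2  # one-sided 95% likelihood-ratio cutoff

# Bisection for the upper limit, starting from mu_hat and an overshoot
# (the root-finding step that the LCM performs in general models).
lo, hi = mu_hat, mu_hat + 10 * SIGMA
for _ in range(200):
    mid = (lo + hi) / 2
    if two_delta_ll(mid) < TARGET:
        lo = mid
    else:
        hi = mid
upper = (lo + hi) / 2

# Closed-form (asymptotic, here exact) solution, mirroring the article's
# point that the lowest-order expansion supplies good starting values.
closed_form = mu_hat + 1.645 * SIGMA / sqrt(n)
```

In general models the closed-form expansion only supplies starting values and the iteration does the rest; here the two coincide.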

13.
Hwang, Jing-Shiang; Chen, James J. Risk Analysis, 1999, 19(6):1071-1076
The estimation of health risks from exposure to a mixture of chemical carcinogens is generally based on the combination of information from several available single-compound studies. The current practice of directly summing the upper bound risk estimates of individual carcinogenic components as an upper bound on the total risk of a mixture is known to be generally too conservative. Gaylor and Chen (1996, Risk Analysis) proposed a simple procedure to compute an upper bound on the total risk using only the upper confidence limits and central risk estimates of individual carcinogens. The Gaylor-Chen procedure was derived based on an underlying assumption of normality for the distributions of individual risk estimates. In this paper we evaluated the Gaylor-Chen approach in terms of the coverage probability, i.e., how well the upper confidence limits cover the true risks of the individual carcinogens. In general, if the coverage probabilities for the individual carcinogens are all approximately equal to the nominal level, then the Gaylor-Chen approach should perform well. However, the Gaylor-Chen approach can be conservative or anti-conservative if some or all individual upper confidence limit estimates are conservative or anti-conservative.
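Under the normality assumption described above, the Gaylor-Chen combination rule amounts to adding the central estimates and combining the individual margins in quadrature, which is what the sketch below implements; the risk numbers are illustrative:

```python
from math import sqrt

# Central (MLE) risk estimates and 95% upper confidence limits for the
# individual carcinogens in a mixture (illustrative numbers).
central = [1.0e-6, 4.0e-6, 2.5e-6]
upper = [3.0e-6, 9.0e-6, 6.0e-6]

def gaylor_chen_upper(central, upper):
    """Upper bound on total mixture risk: sum of central estimates plus
    the root-sum-of-squares of the individual margins (U_i - c_i),
    a sketch of the combination rule under approximate normality."""
    margin = sqrt(sum((u - c) ** 2 for u, c in zip(upper, central)))
    return sum(central) + margin

gc = gaylor_chen_upper(central, upper)
naive = sum(upper)  # the overly conservative direct sum of upper limits
```

Because the root-sum-of-squares margin is never larger than the sum of the margins, the result always falls between the sum of central estimates and the naive sum of upper limits.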

14.
In 2001, the U.S. Environmental Protection Agency derived a reference dose (RfD) for methylmercury, which is a daily intake that is likely to be without appreciable risk of deleterious effects during a lifetime. This derivation used a series of benchmark dose (BMD) analyses provided by a National Research Council (NRC) panel convened to assess the health effects of methylmercury. Analyses were performed for a number of endpoints from three large longitudinal cohort studies of the neuropsychological consequences of in utero exposure to methylmercury: the Faroe Islands, Seychelles Islands, and New Zealand studies. Adverse effects were identified in the Faroe Islands and New Zealand studies, but not in the Seychelles Islands study. The NRC also performed an integrative analysis of all three studies. The EPA applied a total uncertainty factor (UF) of 10 for intrahuman toxicokinetic and toxicodynamic variability and uncertainty. Dose conversion from cord blood mercury concentrations to maternal methylmercury intake was performed using a one-compartment model. Derivation of potential RfDs from a number of endpoints from the Faroe Islands study converged on 0.1 μg/kg/day, as did the integrative analysis of all three studies. EPA identified several areas for which further information or analyses are needed. Perhaps the most immediately relevant is the ratio of cord:maternal blood mercury concentration, as well as the variability around this ratio. EPA assumed in its dose conversion that the ratio was 1.0; however, available data suggest it is perhaps 1.5-2.0. Verification of a deviation from unity presumably would translate directly into a comparable reduction in the RfD. Other significant areas that EPA identified as requiring further attention are cardiovascular consequences of methylmercury exposure and delayed neurotoxicity during aging as a result of previous developmental or adult exposure.

15.
Public health concern over developmental abnormalities that can occur as a result of prenatal exposure to drugs, chemicals, and other environmental factors has led to a number of developmental toxicity studies and the use of the benchmark dose (BMD) for risk assessment. To characterize risk from multiple sources, more recent analytic methods involve a joint modeling approach, accounting for multiple dichotomous and continuous outcomes. For some continuous outcomes, evaluating all subjects may not be feasible, and only a subset may be evaluated due to limited resources. The subset can be selected according to a prespecified probability model, and the unobserved data can be viewed as intentionally missing in the sense that subset selection results in missingness that is experimentally planned. We describe a subset selection model that allows for sampling pups with malformations and healthy pups at different rates, and includes the well-known simple random sample (SRS) as a special case. We were interested in understanding how sampling rates that are selected beforehand influence the precision of the BMD. Using simulations, we show how improvements over the SRS can be obtained by oversampling malformations, and how some sampling rates can yield precision that is substantially worse than the SRS. We also illustrate the potential for cost saving with oversampling. Simulations are based on a joint mixed effects model and, to account for subset selection, use case weights to obtain valid dose-response estimates.

16.
The aim of this study is to estimate the reference level of lifetime cadmium intake (LCd) as benchmark doses (BMDs) and their 95% lower confidence limits (BMDLs) for various renal effects by applying a hybrid approach. The participants comprised 3,013 (1,362 men and 1,651 women) and 278 (129 men and 149 women) inhabitants of the Cd-polluted and nonpolluted areas, respectively, in the environmentally exposed Kakehashi River basin. Glucose, protein, aminonitrogen, metallothionein, and β2-microglobulin in urine were measured as indicators of renal dysfunction. The BMD and BMDL that corresponded to an additional risk of 5% were calculated with background risk at zero exposure set at 5%. The obtained BMDLs of LCd were 3.7 g (glucose), 3.2 g (protein), 3.7 g (aminonitrogen), 1.7 g (metallothionein), and 1.8 g (β2-microglobulin) in men and 2.9 g (glucose), 2.5 g (protein), 2.0 g (aminonitrogen), 1.6 g (metallothionein), and 1.3 g (β2-microglobulin) in women. The lowest BMDL was 1.7 g (metallothionein) in men and 1.3 g (β2-microglobulin) in women. The lowest BMDL of LCd (1.3 g) was somewhat lower than the representative threshold LCd (2.0 g) calculated in previous studies. The obtained BMDLs may contribute to further discussion on the health risk assessment of cadmium exposure.

17.
Ethylene oxide (EO) has been identified as a carcinogen in laboratory animals. Although the precise mechanism of action is not known, tumors in animals exposed to EO are presumed to result from its genotoxicity. The overall weight of evidence for carcinogenicity from a large body of epidemiological data in the published literature remains limited. There is some evidence for an association between EO exposure and lympho/hematopoietic cancer mortality. Of these cancers, the evidence provided by two large cohorts with the longest follow-up is most consistent for leukemia. Together with what is known about human leukemia and EO at the molecular level, there is a body of evidence that supports a plausible mode of action for EO as a potential leukemogen. Based on a consideration of the mode of action, the events leading from EO exposure to the development of leukemia (and therefore risk) are expected to be proportional to the square of the dose. In support of this hypothesis, a quadratic dose-response model provided the best overall fit to the epidemiology data in the range of observation. Cancer dose-response assessments based on human and animal data are presented using three different assumptions for extrapolating to low doses: (1) risk is linearly proportionate to dose; (2) there is no appreciable risk at low doses (margin-of-exposure or reference dose approach); and (3) risk below the point of departure continues to be proportionate to the square of the dose. The weight of evidence for EO supports the use of a nonlinear assessment. Therefore, exposures to concentrations below 37 μg/m³ are not likely to pose an appreciable risk of leukemia in human populations. However, if quantitative estimates of risk at low doses are desired and the mode of action for EO is considered, these risks are best quantified using the quadratic estimates of cancer potency, which are approximately 3.2- to 32-fold lower, using alternative points of departure, than the linear estimates of cancer potency for EO. An approach is described for linking the selection of an appropriate point of departure to the confidence in the proposed mode of action. Despite high confidence in the proposed mode of action, a small linear component for the dose-response relationship at low concentrations cannot be ruled out conclusively. Accordingly, a unit risk value of 4.5 × 10⁻⁸ (μg/m³)⁻¹ was derived for EO, with a range of unit risk values of 1.4 × 10⁻⁸ to 1.4 × 10⁻⁷ (μg/m³)⁻¹ reflecting the uncertainty associated with a theoretical linear term at low concentrations.
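The practical difference between the linear and quadratic low-dose extrapolation assumptions described above is easy to see numerically; the point of departure and risk level below are illustrative placeholders, not the actual EO assessment values:

```python
# Point of departure (POD): concentration and extra risk at the POD
# (illustrative values only).
POD_CONC = 1000.0   # concentration units, e.g. microgram per cubic meter
POD_RISK = 1.0e-2

def linear_risk(c):
    # Extrapolation assumption (1): risk proportional to concentration.
    return POD_RISK * (c / POD_CONC)

def quadratic_risk(c):
    # Extrapolation assumption (3): risk proportional to concentration squared.
    return POD_RISK * (c / POD_CONC) ** 2

# At 1% of the POD, the linear assumption gives 100x the quadratic risk.
ratio = linear_risk(10.0) / quadratic_risk(10.0)
```

Both curves agree at the point of departure by construction; the divergence below it is what drives the fold-differences in potency the abstract reports.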

18.
The benchmark dose (BMD) approach is emerging as a replacement for determination of the No Observed Adverse Effect Level (NOAEL) in noncancer risk assessment. This possibility raises the issue as to whether current study designs for endpoints such as developmental toxicity, optimized for detecting pairwise comparisons, could be improved for the purpose of calculating BMDs. In this paper, we examine the effects of various aspects of study design (number of dose groups, dose spacing, dose placement, and sample size per dose group) on BMDs for two endpoints of developmental toxicity (the incidence of abnormalities and of reduced fetal weight). Design performance was judged by the mean-squared error (reflective of the variance and bias) of the maximum likelihood estimate (MLE) from the log-logistic model of the 5% added risk level (the likely target risk for a benchmark calculation), as well as by the length of its 95% confidence interval (the lower value of which is the BMD). We found that of the designs evaluated, the best results were obtained when two dose levels with response rates above the background level, one of which was near the ED05, were present. This situation is more likely to occur with more, rather than fewer, dose levels per experiment. In this instance, there was virtually no advantage in increasing the sample size from 10 to 20 litters per dose group. If neither of the two dose groups with response rates above the background level was near the ED05, satisfactory results were also obtained, but the BMDs tended to be more conservative (i.e., lower). If only one dose level with a response rate above the background level was present, and it was near the ED05, reasonable results for the MLE and BMD were obtained, but here we observed benefits of larger dose group sizes. The poorest results were obtained when only a single group with an elevated response rate was present, and the response rate was much greater than the ED05. The results indicate that while the benchmark dose approach is readily applicable to the standard study designs and generally observed dose-responses in developmental assays, some minor design modifications would increase the accuracy and precision of the BMD.
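For a log-logistic model of the kind used above, the dose giving 5% added risk (the BMD target) has a closed form. The background, intercept, and slope parameters below are invented for illustration, not fitted to any developmental data set:

```python
from math import exp, log

# Illustrative log-logistic parameters: background rate, intercept, slope.
ALPHA, B0, B1 = 0.05, -4.0, 1.5

def p(dose):
    """Log-logistic probability of an abnormal fetus at `dose`."""
    if dose <= 0:
        return ALPHA
    return ALPHA + (1 - ALPHA) / (1 + exp(-(B0 + B1 * log(dose))))

def bmd_added_risk(bmr=0.05):
    # Solve p(d) - p(0) = bmr in closed form:
    # (1 - ALPHA) * logistic(B0 + B1*log d) = bmr.
    target = bmr / (1 - ALPHA)           # required logistic value
    logit = log(target / (1 - target))   # equals B0 + B1*log(d)
    return exp((logit - B0) / B1)

bmd = bmd_added_risk(0.05)
```

Placing a dose group near this ED05 is exactly the design feature the study found most valuable for a precise BMD.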

19.
Dose-response models in microbial risk assessment consider two steps in the process ultimately leading to illness: from exposure to (asymptomatic) infection, and from infection to (symptomatic) illness. Most data and theoretical approaches are available for the exposure-infection step; the infection-illness step has received less attention. Furthermore, current microbial risk assessment models do not account for acquired immunity. These limitations may lead to biased risk estimates. We consider the effects on risk estimates of both dose dependency of the conditional probability of illness given infection and acquired immunity, and demonstrate these effects in a case study on exposure to Campylobacter jejuni. To account for acquired immunity in risk estimates, an inflation factor is proposed. The inflation factor depends on the relative rates of loss of protection over exposure. The conditional probability of illness given infection is based on a previously published model accounting for the within-host dynamics of illness. We find that at low (average) doses, the infection-illness model has the greatest impact on risk estimates, whereas at higher (average) doses and/or increased exposure frequencies, the acquired immunity model has the greatest impact. The proposed models are strongly nonlinear, and reducing exposure is not expected to lead to a proportional decrease in risk and, under certain conditions, may even lead to an increase in risk. The impact of different dose-response models on risk estimates is particularly pronounced when introducing heterogeneity in the population exposure distribution.

20.
This report summarizes the proceedings of a conference on quantitative methods for assessing the risks of developmental toxicants. The conference was planned by a subcommittee of the National Research Council's Committee on Risk Assessment Methodology in conjunction with staff from several federal agencies, including the U.S. Environmental Protection Agency, U.S. Food and Drug Administration, U.S. Consumer Products Safety Commission, and Health and Welfare Canada. Issues discussed at the workshop included computerized techniques for hazard identification, use of human and animal data for defining risks in a clinical setting, relationships between end points in developmental toxicity testing, reference dose calculations for developmental toxicology, analysis of quantitative dose-response data, mechanisms of developmental toxicity, physiologically based pharmacokinetic models, and structure-activity relationships. Although a formal consensus was not sought, many participants favored the evolution of quantitative techniques for developmental toxicology risk assessment, including the replacement of lowest observed adverse effect levels (LOAELs) and no observed adverse effect levels (NOAELs) with the benchmark dose methodology.


Copyright © Beijing Qinyun Technology Development Co., Ltd. (京ICP备09084417号)