Similar Articles
20 similar articles found (search time: 437 ms)
1.
The BMD (benchmark dose) method that is used in risk assessment of chemical compounds was introduced by Crump (1984) and is based on dose-response modeling. To take uncertainty in the data and model fitting into account, the lower confidence bound of the BMD estimate (BMDL) is suggested to be used as a point of departure in health risk assessments. In this article, we study how to design optimum experiments for applying the BMD method to continuous data. We exemplify our approach by considering the class of Hill models. The main aim is to study whether an increased number of dose groups and at the same time a decreased number of animals in each dose group improves conditions for estimating the benchmark dose. Since Hill models are nonlinear, the optimum design depends on the values of the unknown parameters. That is why we consider Bayesian designs and assume that the parameter vector has a prior distribution. A natural design criterion is to minimize the expected variance of the BMD estimator. We present an example where we calculate the value of the design criterion for several designs and try to find out how the number of dose groups, the number of animals in the dose groups, and the choice of doses affect this value for different Hill curves. It follows from our calculations that to avoid the risk of unfavorable dose placements, it is good to use designs with more than four dose groups. We can also conclude that any additional information about the expected dose-response curve, e.g., information obtained from studies made in the past, should be taken into account when planning a study because it can improve the design.
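For a Hill curve, the BMD for a relative-change BMR has a closed form, which makes the design calculations above concrete. The sketch below is a generic illustration with made-up parameter values, not code or numbers from the study.

```python
import math

def hill(d, gamma, v, k, n):
    """Hill dose-response: background gamma, maximum change v, half-max dose k, slope n."""
    return gamma + v * d**n / (k**n + d**n)

def bmd_hill(gamma, v, k, n, bmr=0.10):
    """Dose producing a relative change of bmr over background (closed form)."""
    delta = bmr * gamma  # absolute change corresponding to the BMR
    if not 0 < delta < v:
        raise ValueError("BMR change must lie below the Hill maximum v")
    return k * (delta / (v - delta)) ** (1.0 / n)

# illustrative curve: background 100, max increase 50, half-max dose 20, slope 2
d = bmd_hill(gamma=100.0, v=50.0, k=20.0, n=2.0, bmr=0.10)
```

With these values the absolute change at the BMD is 10, and the dose works out to exactly half of k.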

2.
The benchmark dose (BMD) is an exposure level that would induce a small risk increase (BMR level) above the background. The BMD approach to deriving a reference dose for risk assessment of noncancer effects is advantageous in that the estimate of BMD is not restricted to experimental doses and utilizes most available dose-response information. To quantify statistical uncertainty of a BMD estimate, we often calculate and report its lower confidence limit (i.e., BMDL), and may even consider it as a more conservative alternative to BMD itself. Computation of BMDL may involve normal confidence limits to BMD in conjunction with the delta method. Therefore, factors such as small sample size and nonlinearity in model parameters can affect the performance of the delta method BMDL, and alternative methods are useful. In this article, we propose a bootstrap method to estimate BMDL utilizing a scheme that consists of a resampling of residuals after model fitting and a one-step formula for parameter estimation. We illustrate the method with clustered binary data from developmental toxicity experiments. Our analysis shows that with moderately elevated dose-response data, the distribution of the BMD estimator tends to be left-skewed and bootstrap BMDLs are smaller than the delta method BMDLs on average, hence quantifying risk more conservatively. Statistically, the bootstrap BMDL quantifies the uncertainty of the true BMD more honestly than the delta method BMDL, as its coverage probability is closer to the nominal level. We find that BMD and BMDL estimates are generally insensitive to model choices provided that the models fit the data comparably well near the region of BMD. Our analysis also suggests that, in the presence of a significant and moderately strong dose-response relationship, the developmental toxicity experiments under the standard protocol support dose-response assessment at 5% BMR for BMD and 95% confidence level for BMDL.
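The residual-resampling idea can be sketched in a deliberately simplified setting: a straight-line continuous model fit by least squares, rather than the clustered binary models and one-step estimation formula of the article. All data values and names below are illustrative.

```python
import random

def fit_line(doses, ys):
    """Ordinary least squares for y = a + b*d."""
    n = len(doses)
    mx = sum(doses) / n
    my = sum(ys) / n
    sxx = sum((d - mx) ** 2 for d in doses)
    b = sum((d - mx) * (y - my) for d, y in zip(doses, ys)) / sxx
    return my - b * mx, b

def bmd_from_fit(a, b, bmr=0.10):
    """Dose at which the mean response rises by bmr relative to background a."""
    return bmr * a / b

def bootstrap_bmdl(doses, ys, n_boot=2000, alpha=0.05, seed=1):
    """Lower (1 - alpha) bootstrap bound on the BMD from resampled residuals."""
    rng = random.Random(seed)
    a, b = fit_line(doses, ys)
    resid = [y - (a + b * d) for d, y in zip(doses, ys)]
    bmds = []
    for _ in range(n_boot):
        ystar = [a + b * d + rng.choice(resid) for d in doses]
        aa, bb = fit_line(doses, ystar)
        if bb > 0:  # keep only resamples with a positive dose-response slope
            bmds.append(bmd_from_fit(aa, bb))
    bmds.sort()
    return bmds[int(alpha * len(bmds))]

doses = [0, 0, 0, 1, 1, 1, 2, 2, 2, 4, 4, 4]
ys = [10.1, 9.8, 10.2, 12.3, 11.8, 12.1, 14.2, 13.7, 14.1, 18.3, 17.8, 18.1]
bmdl = bootstrap_bmdl(doses, ys)
```

The BMDL is read off as the 5th percentile of the bootstrap BMD distribution, which mirrors the way the article's bootstrap BMDLs quantify uncertainty.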

3.
Experimental animal studies often serve as the basis for predicting risk of adverse responses in humans exposed to occupational hazards. A statistical model is applied to exposure-response data, and this fitted model may be used to obtain estimates of the exposure associated with a specified level of adverse response. Unfortunately, a number of different statistical models are candidates for fitting the data and may result in wide-ranging estimates of risk. Bayesian model averaging (BMA) offers a strategy for addressing uncertainty in the selection of statistical models when generating risk estimates. This strategy is illustrated with two examples: applying the multistage model to cancer responses, and a second example where different quantal models are fit to kidney lesion data. BMA provides excess risk estimates or benchmark dose estimates that reflect model uncertainty.

4.
The benchmark dose (BMD) approach has gained acceptance as a valuable risk assessment tool, but risk assessors still face significant challenges associated with selecting an appropriate BMD/BMDL estimate from the results of a set of acceptable dose‐response models. Current approaches do not explicitly address model uncertainty, and there is an existing need to more fully inform health risk assessors in this regard. In this study, a Bayesian model averaging (BMA) BMD estimation method taking model uncertainty into account is proposed as an alternative to current BMD estimation approaches for continuous data. Using the “hybrid” method proposed by Crump, two strategies of BMA, including both “maximum likelihood estimation based” and “Markov chain Monte Carlo based” methods, are first applied as a demonstration to calculate model-averaged BMD estimates from real continuous dose‐response data. The outcomes from the example data sets examined suggest that the BMA BMD estimates have higher reliability than the estimates from the individual models with highest posterior weight, in terms of higher BMDL and smaller 90th percentile intervals. In addition, a simulation study is performed to evaluate the accuracy of the BMA BMD estimator. The results from the simulation study indicate that the BMA BMD estimates have smaller bias than the BMDs selected using other criteria. To further validate the BMA method, some technical issues, including the selection of models and the use of bootstrap methods for BMDL derivation, need further investigation over a more extensive, representative set of dose‐response data.
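A common shortcut for the posterior model weights in BMA is the BIC approximation, w_i ∝ p_i · exp(-BIC_i / 2). The fragment below is a generic sketch with made-up BIC and BMD values, not the MLE- or MCMC-based weighting evaluated in the article.

```python
import math

def bma_weights(bics, priors=None):
    """Approximate posterior model weights from BIC values (lower BIC -> higher weight)."""
    if priors is None:
        priors = [1.0 / len(bics)] * len(bics)
    best = min(bics)  # subtract the best BIC for numerical stability
    raw = [p * math.exp(-(b - best) / 2.0) for b, p in zip(bics, priors)]
    total = sum(raw)
    return [r / total for r in raw]

def bma_bmd(bmds, bics, priors=None):
    """Model-averaged BMD: weighted average of per-model BMD estimates."""
    return sum(w * b for w, b in zip(bma_weights(bics, priors), bmds))

# three hypothetical fitted models with illustrative BICs and per-model BMDs
weights = bma_weights([100.0, 102.0, 110.0])
averaged = bma_bmd([1.2, 1.5, 0.9], [100.0, 102.0, 110.0])
```

Because the weights are a convex combination, the averaged BMD always falls between the smallest and largest per-model BMDs.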

5.
Mitchell J. Small. Risk Analysis, 2011, 31(10): 1561–1575
A methodology is presented for assessing the information value of an additional dosage experiment in existing bioassay studies. The analysis demonstrates the potential reduction in the uncertainty of toxicity metrics derived from expanded studies, providing insights for future studies. Bayesian methods are used to fit alternative dose‐response models using Markov chain Monte Carlo (MCMC) simulation for parameter estimation and Bayesian model averaging (BMA) is used to compare and combine the alternative models. BMA predictions for benchmark dose (BMD) are developed, with uncertainty in these predictions used to derive the lower bound BMDL. The MCMC and BMA results provide a basis for a subsequent Monte Carlo analysis that backcasts the dosage where an additional test group would have been most beneficial in reducing the uncertainty in the BMD prediction, along with the magnitude of the expected uncertainty reduction. Uncertainty reductions are measured in terms of reduced interval widths of predicted BMD values and increases in BMDL values that occur as a result of this reduced uncertainty. The methodology is illustrated using two existing data sets for TCDD carcinogenicity, fitted with two alternative dose‐response models (logistic and quantal‐linear). The example shows that an additional dose at a relatively high value would have been most effective for reducing the uncertainty in BMA BMD estimates, with predicted reductions in the widths of uncertainty intervals of approximately 30%, and expected increases in BMDL values of 5–10%. The results demonstrate that dose selection for studies that subsequently inform dose‐response models can benefit from consideration of how these models will be fit, combined, and interpreted.

6.
A current trend in risk assessment for systemic toxicity (noncancer) endpoints is to utilize the observable range of the dose-effect curve in order to estimate the likelihood of obtaining effects at lower concentrations. Methods to accomplish this endeavor are typically based on variability in either the effects of fixed doses (benchmark approaches), or on variability in the doses producing a fixed effect (probabilistic or tolerance-distribution approaches). The latter method may be particularly desirable because it can be used to determine variability in the effect of an agent in a population, which is an important goal of risk assessment. This method of analysis, however, has typically been accomplished using dose-effect data from individual subjects, which can be impractical in toxicology. A new method is therefore presented that can use traditional groups-design data to generate a set of dose-effect functions. Population tolerances for a specific effect can then be estimated from these model dose-effect functions. It is based on the randomization test, which assesses the generality of a data set by comparing it to a data set constructed from randomized combinations of single point estimates. The present article describes an iterative line-fitting program that generates such a data set and then uses it to provide risk assessments for two pesticides, triadimefon and carbaryl. The effects of these pesticides were studied on the locomotor activity of laboratory rats, a common neurobehavioral endpoint. Triadimefon produced dose-dependent increases in activity, while carbaryl produced dose-dependent decreases in activity. Risk figures derived from the empirical distribution of individual dose-effect functions were compared to those from the iterative line-fitting program. The results indicate that the method generates comparable risk figures, although potential limitations are also described.

7.
Worker populations often provide data on adverse responses associated with exposure to potential hazards. The relationship between hazard exposure levels and adverse response can be modeled and then inverted to estimate the exposure associated with some specified response level. One concern is that this endpoint may be sensitive to the concentration metric and other variables included in the model. Further, it may be that the models yielding different risk endpoints are all providing relatively similar fits. We focus on evaluating the impact of exposure on a continuous response by constructing a model-averaged benchmark concentration from a weighted average of model-specific benchmark concentrations. A method for combining the estimates based on different models is applied to lung function in a cohort of miners exposed to coal dust. In this analysis, we see that a small number of the thousands of models considered survive a filtering criterion for use in averaging. Even after filtering, the models considered yield benchmark concentrations that differ by a factor of 2 to 9 depending on the concentration metric and covariates. The model-averaged BMC captures this uncertainty, and provides a useful strategy for addressing model uncertainty.

8.
Currently, there is a trend away from the use of single (often conservative) estimates of risk to summarize the results of risk analyses in favor of stochastic methods, which provide a more complete characterization of risk. The use of such stochastic methods leads to a distribution of possible values of risk, taking into account both uncertainty and variability in all of the factors affecting risk. In this article, we propose a general framework for the analysis of uncertainty and variability for use in the commonly encountered case of multiplicative risk models, in which risk may be expressed as a product of two or more risk factors. Our analytical methods facilitate the evaluation of overall uncertainty and variability in risk assessment, as well as the contributions of individual risk factors to both uncertainty and variability, an evaluation that is cumbersome using Monte Carlo methods. The use of these methods is illustrated in the analysis of potential cancer risks due to the ingestion of radon in drinking water.
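When each factor in a multiplicative risk model is lognormal, the product is lognormal and the propagation is analytic: log-means and squared log-GSDs add. The sketch below uses illustrative geometric means and geometric standard deviations, not the radon numbers from the analysis.

```python
import math

# risk = F1 * F2 * F3, each factor lognormal with a geometric mean (gm)
# and a geometric standard deviation (gsd); illustrative values only
factors = [(2.0, 1.5), (0.5, 2.0), (10.0, 1.2)]

# log-mean of the product is the sum of log gms;
# log-variance is the sum of squared log gsds
log_mean = sum(math.log(gm) for gm, _ in factors)
log_var = sum(math.log(gsd) ** 2 for _, gsd in factors)

risk_gm = math.exp(log_mean)                # geometric mean of overall risk
risk_gsd = math.exp(math.sqrt(log_var))     # geometric sd of overall risk

# fractional contribution of each factor to the overall log-variance
contrib = [math.log(gsd) ** 2 / log_var for _, gsd in factors]
```

The contribution vector identifies which factor dominates the overall spread; here the factor with GSD 2.0 contributes most.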

9.
The use of benchmark dose (BMD) calculations for dichotomous or continuous responses is well established in the risk assessment of cancer and noncancer endpoints. In some cases, responses to exposure are categorized in terms of ordinal severity effects such as none, mild, adverse, and severe. Such responses can be assessed using categorical regression (CATREG) analysis. However, while CATREG has been employed to compare the benchmark approach and the no‐adverse‐effect‐level (NOAEL) approach in determining a reference dose, the utility of CATREG for risk assessment remains unclear. This study proposes a CATREG model to extend the BMD approach to ordered categorical responses by modeling severity levels as censored interval limits of a standard normal distribution. The BMD is calculated as a weighted average of the BMDs obtained at dichotomous cutoffs for each adverse severity level above the critical effect, with the weights being proportional to the reciprocal of the expected loss at the cutoff under the normal probability model. This approach provides a link between the current BMD procedures for dichotomous and continuous data. We estimate the CATREG parameters using a Markov chain Monte Carlo simulation procedure. The proposed method is demonstrated using examples of aldicarb and urethane, each with several categories of severity levels. Simulation studies show that the BMD and BMDL (lower confidence bound on the BMD) obtained with the proposed method are quite compatible with the corresponding estimates from the existing methods for dichotomous and continuous data; the difference depends mainly on the choice of cutoffs for the severity levels.

10.
Model averaging for dichotomous dose–response estimation is preferred to estimating the benchmark dose (BMD) from a single model, but challenges remain in implementing these methods for general analyses before model averaging is feasible in many risk assessment applications, and there is little work on Bayesian methods that include informative prior information for both the models and the parameters of the constituent models. This article introduces a novel approach that addresses many of these challenges while providing a fully Bayesian framework. Furthermore, in contrast to methods that use Markov chain Monte Carlo, we approximate the posterior density using maximum a posteriori estimation. The approximation allows for an accurate and reproducible estimate while maintaining the speed of maximum likelihood, which is crucial in many applications such as processing massive high throughput data sets. We assess this method by applying it to empirical laboratory dose–response data and measuring the coverage of confidence limits for the BMD. We compare the coverage of this method to that of other approaches using the same set of models. Through the simulation study, the method is shown to be markedly superior to the traditional approach of selecting a single preferred model (e.g., from the U.S. EPA BMD software) for the analysis of dichotomous data and is comparable or superior to the other approaches.

11.
Dose‐response analysis of binary developmental data (e.g., implant loss, fetal abnormalities) is best done using individual fetus data (identified to litter) or litter‐specific statistics such as number of offspring per litter and proportion abnormal. However, such data are not often available to risk assessors. Scientific articles usually present only dose‐group summaries for the number or average proportion abnormal and the total number of fetuses. Without litter‐specific data, it is not possible to estimate variances correctly (often characterized as a problem of overdispersion, intralitter correlation, or “litter effect”). However, it is possible to use group summary data when the design effect has been estimated for each dose group. Previous studies have demonstrated useful dose‐response and trend test analyses based on design effect estimates using litter‐specific data from the same study. This simplifies the analysis but does not help when litter‐specific data are unavailable. In the present study, we show that summary data on fetal malformations can be adjusted satisfactorily using estimates of the design effect based on historical data. When adjusted data are then analyzed with models designed for binomial responses, the resulting benchmark doses are similar to those obtained from analyzing litter‐level data with nested dichotomous models.

12.
A benchmark dose (BMD) is the dose of a chemical that corresponds to a predetermined increase in the response (the benchmark response, BMR) of a health effect. In this article, a method (the hybrid approach) for benchmark calculations from continuous dose-response information is investigated. In the formulation of the methodology, a cut-off value for an adverse health effect has to be determined. It is shown that the influence of variance on the hybrid model depends on the choice of determination of the cut-off point. If the cut-off value is determined as corresponding to a specified tail proportion of the control distribution, P(0), the BMD becomes biased upward when the variance is biased upward. Conversely, if the cut-off value is directly determined to some level of the continuous response variable, the BMD becomes biased upward when the variance is biased downward. A simulation study was also performed in which the accuracy and precision of the BMD was compared for the two ways of determining the cut-off value. In general, considering BMRs of 1, 5, and 10% (additional risk), the precision of the BMD became higher when the cut-off value was estimated by specifying P(0), relative to the case with a direct determination. Use of the square-root of the maximum-likelihood estimator of the variance in BMD estimation may provide a bias that is reflected by the cut-off formulation (downward bias if specifying P(0), and upward bias if specifying the cut-off, c, directly). This feature may be reduced if an unbiased estimator of the standard deviation is used in the calculations.
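With a normal response, a dose-linear mean, and constant variance, the hybrid BMD for an additional-risk BMR has a closed form under the P(0) cut-off formulation. The sketch below assumes an increasing adverse direction and uses illustrative parameter values, not data from the study.

```python
from statistics import NormalDist

def hybrid_bmd(mu0, sigma, beta, p0=0.01, bmr=0.05):
    """Hybrid-approach BMD for mean mu(d) = mu0 + beta*d with constant sigma.
    The cut-off c is set so a proportion p0 of controls exceeds it, and the
    BMD is the dose giving additional risk bmr: P(X > c | BMD) = p0 + bmr."""
    nd = NormalDist()
    c = mu0 + sigma * nd.inv_cdf(1 - p0)       # cut-off from the control distribution
    z_target = nd.inv_cdf(1 - (p0 + bmr))      # standardized cut-off at the BMD
    return (c - mu0 - sigma * z_target) / beta

# illustrative parameters: standard-normal controls, slope 0.5 per unit dose
bmd = hybrid_bmd(mu0=0.0, sigma=1.0, beta=0.5)
```

Plugging the resulting dose back into the normal tail probability recovers p0 + bmr, which is the defining condition of the additional-risk BMR.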

13.
Model averaging (MA) has been proposed as a method of accounting for model uncertainty in benchmark dose (BMD) estimation. The technique has been used to average BMD estimates derived from dichotomous dose-response experiments, microbial dose-response experiments, as well as observational epidemiological studies. While MA is a promising tool for the risk assessor, a previous study suggested that the simple strategy of averaging individual models' BMD lower limits did not yield interval estimators that met nominal coverage levels in certain situations, and this performance was very sensitive to the underlying model space chosen. We present a different, more computationally intensive, approach in which the BMD is estimated using the average dose-response model and the corresponding benchmark dose lower bound (BMDL) is computed by bootstrapping. This method is illustrated with TiO2 dose-response rat lung cancer data, and then systematically studied through an extensive Monte Carlo simulation. The results of this study suggest that the MA-BMD, estimated using this technique, performs better, in terms of bias and coverage, than the previous MA methodology. Further, the MA-BMDL achieves nominal coverage in most cases, and is superior to picking the "best fitting model" when estimating the benchmark dose. Although these results show utility of MA for benchmark dose risk estimation, they continue to highlight the importance of choosing an adequate model space as well as proper model fit diagnostics.

14.
Because of the inherent complexity of biological systems, there is often a choice between a number of apparently equally applicable physiologically based models to describe uptake and metabolism processes in toxicology or risk assessment. These models may fit the particular data sets of interest equally well, but may give quite different parameter estimates or predictions under different (extrapolated) conditions. Such competing models can be discriminated by a number of methods, including potential refutation by means of strategic experiments, and their ability to suitably incorporate all relevant physiological processes. For illustration, three currently used models for steady-state hepatic elimination--the venous equilibration model, the parallel tube model, and the distributed sinusoidal perfusion model--are reviewed and compared with particular reference to their application in the area of risk assessment. The ability of each of the models to describe and incorporate such physiological processes as protein binding, precursor-metabolite relations and hepatic zones of elimination, capillary recruitment, capillary heterogeneity, and intrahepatic shunting is discussed. Differences between the models in hepatic parameter estimation, extrapolation to different conditions, and interspecies scaling are discussed, and criteria for choosing one model over the others are presented. In this case, the distributed model provides the most general framework for describing physiological processes taking place in the liver, and has so far not been experimentally refuted, as have the other two models. These simpler models may, however, provide useful bounds on parameter estimates and on extrapolations and risk assessments.

15.
The current methods for a reference dose (RfD) determination can be enhanced through the use of biologically-based dose-response analysis. The methods developed here utilize information from tetrachlorodibenzo-p-dioxin (TCDD) to focus on noncancer endpoints, specifically TCDD-mediated immune system alterations and enzyme induction. Dose-response analysis, using the Sigmoid-Emax (EMAX) function, is applied to multiple studies to determine consistency of response. Through the use of multiple studies and statistical comparison of parameter estimates, it was demonstrated that the slope estimates across studies were very consistent. This adds confidence to the subsequent effect dose estimates. This study also compares traditional methods of risk assessment, such as the NOAEL/safety factor, to a modified benchmark dose approach introduced here. Confidence in the estimation of an effect dose (ED10) was improved through the use of multiple datasets. This is key to adding confidence to the benchmark dose estimates. In addition, the Sigmoid-Emax function, when applied to dose-response data using nonlinear regression analysis, provides a significantly improved fit to the data, increasing confidence in parameter estimates, which subsequently improves effect dose estimates.

16.
Since the National Food Safety Initiative of 1997, risk assessment has been an important issue in food safety areas. Microbial risk assessment is a systematic process for describing and quantifying the potential to cause adverse health effects associated with exposure to microorganisms. Various dose-response models for estimating microbial risks have been investigated. We have considered four two-parameter models and four three-parameter models in order to evaluate variability among the models for microbial risk assessment using infectivity and illness data from studies with human volunteers exposed to a variety of microbial pathogens. Model variability is measured in terms of estimated ED01s and ED10s, with the view that these effective dose levels correspond to the lower and upper limits of the 1% to 10% risk range generally recommended for establishing benchmark doses in risk assessment. Parameters of the statistical models are estimated using the maximum likelihood method. In this article a weighted average of effective dose estimates from eight two- and three-parameter dose-response models, with weights determined by the Kullback information criterion, is proposed to address model uncertainties in microbial risk assessment. The proposed procedures for incorporating model uncertainties and making inferences are illustrated with human infection/illness dose-response data sets.

17.
Dose‐response models in microbial risk assessment consider two steps in the process ultimately leading to illness: from exposure to (asymptomatic) infection, and from infection to (symptomatic) illness. Most data and theoretical approaches are available for the exposure‐infection step; the infection‐illness step has received less attention. Furthermore, current microbial risk assessment models do not account for acquired immunity. These limitations may lead to biased risk estimates. We consider effects of both dose dependency of the conditional probability of illness given infection, and acquired immunity to risk estimates, and demonstrate their effects in a case study on exposure to Campylobacter jejuni. To account for acquired immunity in risk estimates, an inflation factor is proposed. The inflation factor depends on the relative rates of loss of protection over exposure. The conditional probability of illness given infection is based on a previously published model, accounting for the within‐host dynamics of illness. We find that at low (average) doses, the infection‐illness model has the greatest impact on risk estimates, whereas at higher (average) doses and/or increased exposure frequencies, the acquired immunity model has the greatest impact. The proposed models are strongly nonlinear, and reducing exposure is not expected to lead to a proportional decrease in risk and, under certain conditions, may even lead to an increase in risk. The impact of different dose‐response models on risk estimates is particularly pronounced when introducing heterogeneity in the population exposure distribution.

18.
Critical infrastructure systems must be both robust and resilient in order to ensure the functioning of society. To improve the performance of such systems, we often use risk and vulnerability analysis to find and address system weaknesses. A critical component of such analyses is the ability to accurately determine the negative consequences of various types of failures in the system. Numerous mathematical and simulation models exist that can be used to this end. However, there are relatively few studies comparing the implications of using different modeling approaches in the context of comprehensive risk analysis of critical infrastructures. In this article, we suggest a classification of these models, which span from simple topologically‐oriented models to advanced physical‐flow‐based models. Here, we focus on electric power systems and present a study aimed at understanding the tradeoffs between simplicity and fidelity in models used in the context of risk analysis. Specifically, the purpose of this article is to compare performance estimates achieved with a spectrum of approaches typically used for risk and vulnerability analysis of electric power systems and evaluate if more simplified topological measures can be combined using statistical methods to be used as a surrogate for physical flow models. The results of our work provide guidance as to appropriate models or combinations of models to use when analyzing large‐scale critical infrastructure systems, where simulation times quickly become insurmountable when using more advanced models, severely limiting the extent of analyses that can be performed.

19.
The neurotoxic effects of chemical agents are often investigated in controlled studies on rodents, with binary and continuous multiple endpoints routinely collected. One goal is to conduct quantitative risk assessment to determine safe dose levels. Yu and Catalano (2005) describe a method for quantitative risk assessment for bivariate continuous outcomes by extending a univariate method of percentile regression. The model is likelihood based and allows for separate dose‐response models for each outcome while accounting for the bivariate correlation. The approach to benchmark dose (BMD) estimation is analogous to that for quantal data without having to specify arbitrary cutoff values. In this article, we evaluate the behavior of the BMD relative to background rates, sample size, level of bivariate correlation, dose‐response trend, and distributional assumptions. Using simulations, we explore the effects of these factors on the resulting BMD and BMDL distributions. In addition, we illustrate our method with data from a neurotoxicity study of parathion exposure in rats.

20.
Remanufacturing practices in closed-loop supply chains (CLSCs) are often characterised by highly variable lead times due to the uncertain quality of returns. However, the impact of such variability on the dynamic benefits derived from adopting circular economy models remains largely unknown in the closed-loop literature. To fill the gap, this work analyses the Bullwhip and inventory performance of a multi-echelon CLSC with variable remanufacturing lead times under different scenarios of return rate and information transparency in the remanufacturing process. Our results reveal that ignoring such variability generally leads to an overestimation of the dynamic performance of CLSCs. We observe that enabling information transparency generally reduces order and inventory variability, but it may have negative effects on average inventory if the duration of the remanufacturing process is highly variable. Our findings result in useful and innovative recommendations for companies wishing to mitigate the negative consequences of lead time variability in CLSCs.


Copyright © 北京勤云科技发展有限公司 (Beijing Qinyun Technology Development Co., Ltd.)  京ICP备09084417号