Similar Articles
20 similar articles found (search time: 15 ms)
1.
The benchmark dose (BMD) is an exposure level that would induce a small risk increase (the BMR level) above the background. The BMD approach to deriving a reference dose for risk assessment of noncancer effects is advantageous in that the estimate of the BMD is not restricted to the experimental doses and utilizes most of the available dose-response information. To quantify the statistical uncertainty of a BMD estimate, we often calculate and report its lower confidence limit (i.e., BMDL), and may even consider it a more conservative alternative to the BMD itself. Computation of the BMDL may involve normal confidence limits to the BMD in conjunction with the delta method. Therefore, factors such as small sample size and nonlinearity in the model parameters can affect the performance of the delta method BMDL, and alternative methods are useful. In this article, we propose a bootstrap method to estimate the BMDL utilizing a scheme that consists of a resampling of residuals after model fitting and a one-step formula for parameter estimation. We illustrate the method with clustered binary data from developmental toxicity experiments. Our analysis shows that with moderately elevated dose-response data, the distribution of the BMD estimator tends to be left-skewed and bootstrap BMDLs are smaller than the delta method BMDLs on average, hence quantifying risk more conservatively. Statistically, the bootstrap BMDL quantifies the uncertainty of the true BMD more honestly than the delta method BMDL, as its coverage probability is closer to the nominal level. We find that BMD and BMDL estimates are generally insensitive to model choice provided that the models fit the data comparably well near the region of the BMD. Our analysis also suggests that, in the presence of a significant and moderately strong dose-response relationship, developmental toxicity experiments under the standard protocol support dose-response assessment at a 5% BMR for the BMD and a 95% confidence level for the BMDL.
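The bootstrap-BMDL idea can be sketched with a toy quantal data set and a one-hit model. Everything below is a simplification: the data are invented, the grid-search MLE stands in for a proper optimizer, and responses are resampled parametrically rather than via the paper's residual-resampling/one-step scheme.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical quantal data: dose, animals per group, responders
doses = np.array([0.0, 0.25, 0.5, 1.0, 2.0])
n = np.full(5, 50)
y = np.array([0, 5, 9, 16, 28])

def fit_b(y):
    """Grid-search MLE of b in the one-hit model P(d) = 1 - exp(-b*d)."""
    grid = np.linspace(1e-3, 3.0, 3000)
    p = np.clip(1 - np.exp(-np.outer(grid, doses)), 1e-9, 1 - 1e-9)
    ll = (y * np.log(p) + (n - y) * np.log(1 - p)).sum(axis=1)
    return grid[np.argmax(ll)]

def bmd_from_b(b, bmr=0.05):
    # Extra risk: P(BMD) - P(0) = bmr  =>  BMD = -ln(1 - bmr) / b
    return -np.log(1 - bmr) / b

b_hat = fit_b(y)
bmd_hat = bmd_from_b(b_hat)

# Bootstrap the responses and take the 5th percentile as the BMDL
p_hat = 1 - np.exp(-b_hat * doses)
boot_bmds = [bmd_from_b(fit_b(rng.binomial(n, p_hat))) for _ in range(300)]
bmdl = float(np.percentile(boot_bmds, 5))
```

The percentile lower limit inherits any skewness in the bootstrap distribution of the BMD estimator, which is the behavior the abstract contrasts with the symmetric delta-method limit.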

2.
Multistage models are frequently applied in carcinogenic risk assessment. In their simplest form, these models relate the probability of tumor presence to some measure of dose. These models are then used to project the excess risk of tumor occurrence at doses frequently well below the lowest experimental dose. Upper confidence limits on the excess risk associated with exposures at these doses are then determined. A likelihood-based method is commonly used to determine these limits. We compare this method to two computationally intensive "bootstrap" methods for determining the 95% upper confidence limit on extra risk. The coverage probabilities and bias of the likelihood-based and bootstrap estimates are examined in a simulation study of carcinogenicity experiments. The coverage probabilities of the nonparametric bootstrap method fell below 95% more frequently and by wider margins than those of the better-performing parametric bootstrap and likelihood-based methods. The relative bias of all estimators is seen to be affected by the amount of curvature in the true underlying dose-response function. In general, the likelihood-based method has the best coverage probability properties, while the parametric bootstrap is less biased and less variable than the likelihood-based method. Ultimately, neither method is entirely satisfactory for highly curved dose-response patterns.

3.
This paper considers the problem of choosing the number of bootstrap repetitions B for bootstrap standard errors, confidence intervals, confidence regions, hypothesis tests, p-values, and bias correction. For each of these problems, the paper provides a three-step method for choosing B to achieve a desired level of accuracy. Accuracy is measured by the percentage deviation of the bootstrap standard error estimate, confidence interval length, test's critical value, test's p-value, or bias-corrected estimate based on B bootstrap simulations from the corresponding ideal bootstrap quantities for which B = ∞. The results apply quite generally to parametric, semiparametric, and nonparametric models with independent and dependent data. The results apply to the standard nonparametric iid bootstrap, moving block bootstraps for time series data, parametric and semiparametric bootstraps, and bootstraps for regression models based on bootstrapping residuals. Monte Carlo simulations show that the proposed methods work very well.
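The idea of calibrating B against the ideal B = ∞ quantities can be illustrated with a crude adaptive rule. This is not the paper's three-step method, just a stand-in with the same flavor: keep doubling B until successive bootstrap standard-error estimates agree within a relative tolerance.

```python
import numpy as np

rng = np.random.default_rng(1)
data = rng.exponential(scale=2.0, size=100)  # toy sample

def boot_se(stat, B):
    """Bootstrap standard error of `stat` from B nonparametric resamples."""
    reps = [stat(rng.choice(data, size=data.size, replace=True))
            for _ in range(B)]
    return float(np.std(reps, ddof=1))

# Double B until two successive SE estimates differ by less than 5%;
# as B grows, both converge to the ideal (B = infinity) bootstrap SE.
B, se = 100, boot_se(np.mean, 100)
while True:
    B_next = 2 * B
    se_next = boot_se(np.mean, B_next)
    done = abs(se_next - se) / se < 0.05
    B, se = B_next, se_next
    if done:
        break
```

The paper's contribution is to replace this trial-and-error loop with an explicit formula for the B that achieves a stated percentage deviation with stated probability.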

4.
Comparing the harmful health effects related to two different tobacco products by applying common risk assessment methods to each individual compound is problematic. We developed a method that circumvents some of these problems by focusing on the change in cumulative exposure (CCE) of the compounds emitted by the two products considered. The method consists of six steps. The first three steps encompass dose-response analysis of cancer data, resulting in relative potency factors with confidence intervals. The fourth step evaluates emission data, resulting in confidence intervals for the expected emission of each compound. The fifth step calculates the change in CCE, probabilistically, resulting in an uncertainty range for the CCE. The sixth step estimates the associated health impact by combining the CCE with relevant dose-response information. As an illustrative case study, we applied the method to eight carcinogens occurring both in the emissions of heated tobacco products (HTPs), a novel class of tobacco products, and tobacco smoke. The CCE was estimated to be 10- to 25-fold lower when using HTPs instead of cigarettes. Such a change indicates a substantially smaller reduction in expected life span, based on available dose-response information in smokers. However, this is a preliminary conclusion, as only eight carcinogens were considered so far. Furthermore, an unfavorable health impact related to HTPs remains as compared to complete abstinence. Our method results in useful information that may help policy makers in better understanding the potential health impact of new tobacco and related products. A similar approach can be used to compare the carcinogenicity of other mixtures.

5.
This paper establishes the higher-order equivalence of the k-step bootstrap, introduced recently by Davidson and MacKinnon (1999), and the standard bootstrap. The k-step bootstrap is a computationally very attractive alternative to the standard bootstrap for statistics based on nonlinear extremum estimators, such as generalized method of moments and maximum likelihood estimators. The paper also extends results of Hall and Horowitz (1996) to provide new results regarding the higher-order improvements of the standard bootstrap and the k-step bootstrap for extremum estimators (compared to procedures based on first-order asymptotics). The results of the paper apply to Newton-Raphson (NR), default NR, line-search NR, and Gauss-Newton k-step bootstrap procedures. The results apply to the nonparametric iid bootstrap and nonoverlapping and overlapping block bootstraps. The results cover symmetric and equal-tailed two-sided t tests and confidence intervals, one-sided t tests and confidence intervals, Wald tests and confidence regions, and J tests of overidentifying restrictions.

6.
To quantify the health benefits of environmental policies, economists generally require estimates of the reduced probability of illness or death. For policies that reduce exposure to carcinogenic substances, these estimates traditionally have been obtained through the linear extrapolation of experimental dose-response data to low-exposure scenarios as described in the U.S. Environmental Protection Agency's Guidelines for Carcinogen Risk Assessment (1986). In response to evolving scientific knowledge, EPA proposed revisions to the guidelines in 1996. Under the proposed revisions, dose-response relationships would not be estimated for carcinogens thought to exhibit nonlinear modes of action. Such a change in cancer-risk assessment methods and outputs will likely have serious consequences for how benefit-cost analyses of policies aimed at reducing cancer risks are conducted. Any tendency for reduced quantification of effects in environmental risk assessments, such as those contemplated in the revisions to EPA's cancer-risk assessment guidelines, impedes the ability of economic analysts to respond to increasing calls for benefit-cost analysis. This article examines the implications for benefit-cost analysis of carcinogenic exposures of the proposed changes to the 1986 Guidelines and proposes an approach for bounding dose-response relationships when no biologically based models are available. In spite of the more limited quantitative information provided in a carcinogen risk assessment under the proposed revisions to the guidelines, we argue that reasonable bounds on dose-response relationships can be estimated for low-level exposures to nonlinear carcinogens. This approach yields estimates of reduced illness for use in a benefit-cost analysis while incorporating evidence of nonlinearities in the dose-response relationship. As an illustration, the bounding approach is applied to the case of chloroform exposure.

7.
The benchmark dose (BMD) approach has gained acceptance as a valuable risk assessment tool, but risk assessors still face significant challenges in selecting an appropriate BMD/BMDL estimate from the results of a set of acceptable dose-response models. Current approaches do not explicitly address model uncertainty, and there is an existing need to more fully inform health risk assessors in this regard. In this study, a Bayesian model averaging (BMA) BMD estimation method that takes model uncertainty into account is proposed as an alternative to current BMD estimation approaches for continuous data. Using the "hybrid" method proposed by Crump, two BMA strategies, a "maximum likelihood estimation based" and a "Markov chain Monte Carlo based" method, are first applied as a demonstration to calculate model-averaged BMD estimates from real continuous dose-response data. The outcomes from the example data sets examined suggest that the BMA BMD estimates are more reliable than the estimates from the individual models with the highest posterior weight, in terms of higher BMDLs and narrower 90th percentile intervals. In addition, a simulation study is performed to evaluate the accuracy of the BMA BMD estimator. The results from the simulation study indicate that the BMA BMD estimates have smaller bias than the BMDs selected using other criteria. To further validate the BMA method, some technical issues, including the selection of models and the use of bootstrap methods for BMDL derivation, need further investigation over a more extensive, representative set of dose-response data.

8.
The BMD (benchmark dose) method that is used in risk assessment of chemical compounds was introduced by Crump (1984) and is based on dose-response modeling. To take uncertainty in the data and model fitting into account, the lower confidence bound of the BMD estimate (BMDL) is suggested to be used as a point of departure in health risk assessments. In this article, we study how to design optimum experiments for applying the BMD method for continuous data. We exemplify our approach by considering the class of Hill models. The main aim is to study whether an increased number of dose groups and at the same time a decreased number of animals in each dose group improves conditions for estimating the benchmark dose. Since Hill models are nonlinear, the optimum design depends on the values of the unknown parameters. That is why we consider Bayesian designs and assume that the parameter vector has a prior distribution. A natural design criterion is to minimize the expected variance of the BMD estimator. We present an example where we calculate the value of the design criterion for several designs and try to find out how the number of dose groups, the number of animals in the dose groups, and the choice of doses affects this value for different Hill curves. It follows from our calculations that to avoid the risk of unfavorable dose placements, it is good to use designs with more than four dose groups. We can also conclude that any additional information about the expected dose-response curve, e.g., information obtained from studies made in the past, should be taken into account when planning a study because it can improve the design.
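For intuition, the Hill model and the BMD it induces can be written in closed form, and a prior on the parameters (as in the Bayesian design setting above) can be propagated into a distribution of BMDs. The priors and parameter values below are hypothetical stand-ins, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(2)

def hill(d, e0, emax, k, n):
    """Hill dose-response curve: e0 + emax * d^n / (k^n + d^n)."""
    return e0 + emax * d**n / (k**n + d**n)

def bmd(e0, emax, k, n, q=0.05):
    """Dose raising the response q*100% above background, i.e. the solution
    of hill(BMD) = e0 * (1 + q), available in closed form for the Hill model."""
    return k * (q * e0 / (emax - q * e0)) ** (1.0 / n)

# Hypothetical priors on k (ED50) and n (slope); e0 and emax held fixed.
e0, emax = 10.0, 5.0
ks = rng.uniform(0.5, 2.0, 1000)
ns = rng.uniform(1.0, 3.0, 1000)
bmds = bmd(e0, emax, ks, ns)
spread = float(bmds.std())
```

The design criterion in the abstract (expected variance of the BMD estimator) averages a quantity like this spread over the prior, for each candidate allocation of dose groups and animals.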

9.
The alleviation of food-borne diseases caused by microbial pathogens remains a great concern in ensuring the well-being of the general public. The relation between the ingested dose of organisms and the associated infection risk can be studied using dose-response models. Traditionally, a model selected according to a goodness-of-fit criterion has been used for making inferences. In this article, we propose a modified set of fractional polynomials as competitive dose-response models in risk assessment. The article not only shows instances where it is not obvious to single out one best model, but also illustrates that model averaging can best circumvent this dilemma. The set of candidate models is chosen based on biological plausibility and rationale, and the risk at a dose common to all these models is estimated using the selected models and by averaging over all models using Akaike's weights. In addition to accounting for parameter estimation inaccuracy, as in the case of a single selected model, model averaging accounts for the uncertainty arising from the other competitive models. This leads to a better and more honest estimation of standard errors and construction of confidence intervals for risk estimates. The approach is illustrated for risk estimation at low dose levels based on Salmonella typhi and Campylobacter jejuni data sets in humans. Simulation studies indicate that model averaging has reduced bias, better precision, and coverage probabilities closer to the 95% nominal level compared to best-fitting models according to the Akaike information criterion.
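The Akaike-weight averaging step is simple to state in code. The AIC values and per-model risk estimates below are hypothetical numbers for illustration, not results from the Salmonella or Campylobacter data sets.

```python
import numpy as np

def akaike_weights(aics):
    """Akaike weights: w_i proportional to exp(-delta_AIC_i / 2)."""
    aics = np.asarray(aics, dtype=float)
    delta = aics - aics.min()
    w = np.exp(-delta / 2)
    return w / w.sum()

# Hypothetical AICs and low-dose risk estimates from three candidate models
aics = [102.3, 103.1, 110.4]
risks = np.array([1.2e-3, 1.8e-3, 0.6e-3])

w = akaike_weights(aics)
avg_risk = float(w @ risks)  # model-averaged risk estimate
```

Models with nearly equal AIC share the weight, so the averaged risk reflects their disagreement instead of committing to a single "best" curve.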

10.
Food-borne infection is caused by intake of foods or beverages contaminated with microbial pathogens. Dose-response modeling is used to estimate exposure levels of pathogens associated with specific risks of infection or illness. When a single dose-response model is used and confidence limits on infectious doses are calculated, only data uncertainty is captured. We propose a method to estimate the lower confidence limit on an infectious dose by including model uncertainty and separating it from data uncertainty. The infectious dose is estimated by a weighted average of effective dose estimates from a set of dose-response models via a Kullback information criterion. The confidence interval for the infectious dose is constructed by the delta method, where data uncertainty is addressed by a bootstrap method. To evaluate the actual coverage probabilities of the lower confidence limit, a Monte Carlo simulation study is conducted under sublinear, linear, and superlinear dose-response shapes that can be commonly found in real data sets. Our model-averaging method achieves coverage close to nominal in almost all cases, thus providing a useful and efficient tool for accurate calculation of lower confidence limits on infectious doses.

11.
This paper considers inference in a broad class of nonregular models. The models considered are nonregular in the sense that standard test statistics have asymptotic distributions that are discontinuous in some parameters. It is shown in Andrews and Guggenberger (2009a) that standard fixed critical value, subsampling, and m out of n bootstrap methods often have incorrect asymptotic size in such models. This paper introduces general methods of constructing tests and confidence intervals that have correct asymptotic size. In particular, we consider a hybrid subsampling/fixed-critical-value method and size-correction methods. The paper discusses two examples in detail. They are (i) confidence intervals in an autoregressive model with a root that may be close to unity and conditional heteroskedasticity of unknown form and (ii) tests and confidence intervals based on a post-conservative model selection estimator.

12.
Current methods for reference dose (RfD) determination can be enhanced through the use of biologically based dose-response analysis. The methods developed here utilize information from tetrachlorodibenzo-p-dioxin (TCDD) to focus on noncancer endpoints, specifically TCDD-mediated immune system alterations and enzyme induction. Dose-response analysis, using the Sigmoid-Emax (EMAX) function, is applied to multiple studies to determine consistency of response. Through the use of multiple studies and statistical comparison of parameter estimates, it was demonstrated that the slope estimates across studies were very consistent, which adds confidence to the subsequent effect dose estimates. This study also compares traditional methods of risk assessment, such as the NOAEL/safety factor approach, to a modified benchmark dose approach introduced here. Confidence in the estimation of an effect dose (ED10) was improved through the use of multiple data sets, which is key to adding confidence to the benchmark dose estimates. In addition, the Sigmoid-Emax function, when applied to dose-response data using nonlinear regression analysis, provides a significantly improved fit to the data, increasing confidence in parameter estimates and thereby improving effect dose estimates.

13.
This paper is concerned with tests and confidence intervals for parameters that are not necessarily point identified and are defined by moment inequalities. In the literature, different test statistics, critical-value methods, and implementation methods (i.e., the asymptotic distribution versus the bootstrap) have been proposed. In this paper, we compare these methods. We provide a recommended test statistic, moment selection critical value, and implementation method. We provide data-dependent procedures for choosing the key moment selection tuning parameter κ and a size-correction factor η.

14.
Formaldehyde induced squamous-cell carcinomas in the nasal passages of F344 rats in two inhalation bioassays at exposure levels of 6 ppm and above. Increases in rates of cell proliferation were measured by T. M. Monticello and colleagues at exposure levels of 0.7 ppm and above in the same tissues from which tumors arose. A risk assessment for formaldehyde was conducted at the CIIT Centers for Health Research, in collaboration with investigators from Toxicological Excellence in Risk Assessment (TERA) and the U.S. Environmental Protection Agency (U.S. EPA), in 1999. Two methods for dose-response assessment were used: a full biologically based modeling approach and a statistically oriented analysis by the benchmark dose (BMD) method. This article presents the latter approach, the purpose of which is to combine BMD and pharmacokinetic modeling to estimate human cancer risks from formaldehyde exposure. BMD analysis was used to identify points of departure (exposure levels) for low-dose extrapolation in rats for both the tumor and cell proliferation endpoints. The benchmark concentrations for induced cell proliferation were lower than those for tumors. These concentrations were extrapolated to humans using two mechanistic models. One model used computational fluid dynamics (CFD) alone to determine rates of delivery of inhaled formaldehyde to the nasal lining. The second model combined the CFD method with a pharmacokinetic model to predict tissue dose, with formaldehyde-induced DNA-protein cross-links (DPX) as a dose metric. Both extrapolation methods gave similar results, and the predicted cancer risk in humans at low exposure levels was found to be similar to that from a risk assessment conducted by the U.S. EPA in 1991. Use of the mechanistically based extrapolation models lends greater certainty to these risk estimates than previous approaches, and also identifies the uncertainty in the measured dose-response relationship for cell proliferation at low exposure levels, the dose-response relationship for DPX in monkeys, and the choice between linear and nonlinear methods of extrapolation as key remaining sources of uncertainty.

15.
Identification and Review of Sensitivity Analysis Methods
Identification and qualitative comparison of sensitivity analysis methods that have been used across various disciplines, and that merit consideration for application to food-safety risk assessment models, are presented in this article. Sensitivity analysis can help in identifying critical control points, prioritizing additional data collection or research, and verifying and validating a model. Ten sensitivity analysis methods, including four mathematical methods, five statistical methods, and one graphical method, are identified. The selected methods are compared on the basis of their applicability to different types of models, computational issues such as initial data requirement and complexity of their application, representation of the sensitivity, and the specific uses of these methods. Applications of these methods are illustrated with examples from various fields. No one method is clearly best for food-safety risk models. In general, use of two or more methods, preferably with dissimilar theoretical foundations, may be needed to increase confidence in the ranking of key inputs.

16.
The choice of a dose-response model is decisive for the outcome of quantitative risk assessment. Since their introduction, single-hit models have played a prominent role in dose-response assessment for pathogenic microorganisms. Hit theory models are based on a few simple concepts that are attractive for their clarity and plausibility. These models, in particular the Beta Poisson model, are used for extrapolation of experimental dose-response data to low doses, such as those often present in drinking water or food products. Unfortunately, the Beta Poisson model, as it is used throughout the microbial risk literature, is an approximation whose validity is not widely known. The exact functional relation is numerically complex, especially for use in optimization or uncertainty analysis. Here it is shown that although the discrepancy between the Beta Poisson formula and the exact function is not very large for many data sets, the differences are greatest at low doses, the region of interest for many risk applications. Errors may become very large, however, in the results of uncertainty analysis, or when the data contain little low-dose information. One striking property of the exact single-hit model is that it has a maximum risk curve, limiting the upper confidence level of the dose-response relation. This is because the risk cannot exceed the probability of exposure, a property that is not retained in the Beta Poisson approximation. This maximum possible response curve is important for uncertainty analysis, and for risk assessment of pathogens with unknown properties.
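The contrast between the widely used approximation and the exact single-hit form can be checked numerically. The exact curve is 1 - 1F1(α, α+β, -d) (a confluent hypergeometric function, the moment generating function of the Beta distribution), while the approximation is 1 - (1 + d/β)^(-α); the parameter values below are illustrative only.

```python
import numpy as np
from scipy.special import hyp1f1

alpha, beta = 0.25, 10.0  # hypothetical Beta parameters

def p_approx(d):
    """Widely used Beta Poisson approximation."""
    return 1 - (1 + d / beta) ** (-alpha)

def p_exact(d):
    """Exact single-hit Beta Poisson: 1 - 1F1(alpha, alpha + beta, -d).
    Always bounded by the probability of exposure, 1 - exp(-d)."""
    return 1 - hyp1f1(alpha, alpha + beta, -d)

gap_at_unit_dose = abs(p_exact(1.0) - p_approx(1.0))
```

The bound p_exact(d) < 1 - exp(-d) is the "maximum risk curve" property the abstract highlights; the approximation does not respect it.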

17.
Today there are more than 80,000 chemicals in commerce and the environment. The potential human health risks are unknown for the vast majority of these chemicals as they lack human health risk assessments, toxicity reference values, and risk screening values. We aim to use computational toxicology and quantitative high-throughput screening (qHTS) technologies to fill these data gaps, and begin to prioritize these chemicals for additional assessment. In this pilot, we demonstrate how we were able to identify that benzo[k]fluoranthene may induce DNA damage and steatosis using qHTS data and two separate adverse outcome pathways (AOPs). We also demonstrate how bootstrap natural spline-based meta-regression can be used to integrate data across multiple assay replicates to generate a concentration-response curve. We used this analysis to calculate an in vitro point of departure of 0.751 μM and risk-specific in vitro concentrations of 0.29 μM and 0.28 μM for 1:1,000 and 1:10,000 risk, respectively, for DNA damage. Based on the available evidence, and considering that only a single HSD17B4 assay is available, we have low overall confidence in the steatosis hazard identification. This case study suggests that coupling qHTS assays with AOPs and ontologies will facilitate hazard identification. Combining this with quantitative evidence integration methods, such as bootstrap meta-regression, may allow risk assessors to identify points of departure and risk-specific internal/in vitro concentrations. These results are sufficient to prioritize the chemicals; however, in the longer term we will need to estimate external doses for risk screening purposes, such as through margin of exposure methods.

18.
National Statistical Agencies and other data custodian agencies hold a wealth of data regarding individuals and organizations, collected from censuses, surveys and administrative sources. In many cases, these data are made available to external researchers, for the investigation of questions of social and economic importance. To enhance access to this information, several national statistical agencies are developing remote analysis systems (RAS) designed to accept queries from a researcher, run them on data held in a secure environment, and then return the results. RAS prevent a researcher from accessing the underlying data, and most rely on manual checking to ensure the responses have acceptably low disclosure risk. However, the need for scalability and consistency will increasingly require automated methods. We propose a RAS output confidentialization procedure based on statistical bootstrapping that automates disclosure control while achieving a provably good balance between disclosure risk and usefulness of the responses. The bootstrap masking mechanism is easy to implement for most statistical queries, yet the characteristics of the bootstrap distribution assure us that it is also effective in providing both useful responses and low disclosure risk. Interestingly, our proposed bootstrap masking mechanism represents an ideal application of Efron's bootstrap—one that takes advantage of all the theoretical properties of the bootstrap, without ever having to construct the bootstrap distribution.
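A minimal sketch of the masking idea: the remote analysis system answers a query with the statistic computed on a single bootstrap resample rather than on the confidential records themselves. The data and variable names here are hypothetical, and real systems would add further safeguards.

```python
import numpy as np

def masked_query(data, stat, rng):
    """Answer a query with the statistic of one bootstrap resample,
    so the exact confidential value is never released."""
    resample = rng.choice(data, size=data.size, replace=True)
    return float(stat(resample))

rng = np.random.default_rng(3)
income = rng.lognormal(mean=10.0, sigma=1.0, size=5000)  # toy microdata

true_mean = float(income.mean())
released = masked_query(income, np.mean, rng)
```

Because the released value is a draw from the bootstrap distribution of the statistic, it stays close to the truth (usefulness) while the bootstrap noise masks the exact confidential value (disclosure control), which is the trade-off the abstract formalizes.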

19.
Many environmental data sets, such as those for air toxic emission factors, contain several values reported only as below the detection limit. Such data sets are referred to as "censored." Typical approaches to dealing with censored data sets include replacing censored values with arbitrary values of zero, one-half of the detection limit, or the detection limit. Here, an approach to quantification of the variability and uncertainty of censored data sets is demonstrated. Empirical bootstrap simulation is used to simulate censored bootstrap samples from the original data. Maximum likelihood estimation (MLE) is used to fit parametric probability distributions to each bootstrap sample, thereby specifying alternative estimates of the unknown population distribution of the censored data sets. Sampling distributions for uncertainty in statistics such as the mean, median, and percentiles are calculated. The robustness of the method was tested by application to different degrees of censoring, sample sizes, coefficients of variation, and numbers of detection limits. Lognormal, gamma, and Weibull distributions were evaluated. The reliability of using this method to estimate the mean is evaluated by averaging the best estimated means of 20 cases for a small sample size of 20. The confidence intervals for distribution percentiles estimated with the bootstrap/MLE method compared favorably to results obtained with the nonparametric Kaplan-Meier method. The bootstrap/MLE method is illustrated via an application to an empirical air toxic emission factor data set.
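A simplified version of the bootstrap/MLE scheme, assuming a lognormal distribution and a single detection limit (the paper evaluates many more configurations; the data here are simulated):

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(4)

# Toy emission-factor data: values below the detection limit are censored
true_vals = rng.lognormal(mean=0.0, sigma=1.0, size=40)
dl = 0.5
obs = np.where(true_vals < dl, dl, true_vals)  # censored values reported as DL
cens = true_vals < dl                          # censoring indicator

def neg_loglik(theta, obs, cens, dl):
    """Lognormal likelihood: CDF term for censored, density term otherwise."""
    mu, log_sig = theta
    sig = np.exp(log_sig)  # keeps sigma positive without bounds
    ll = np.where(
        cens,
        norm.logcdf((np.log(dl) - mu) / sig),
        norm.logpdf((np.log(obs) - mu) / sig) - np.log(sig * obs),
    )
    return -ll.sum()

def mle_mean(obs, cens, dl):
    res = minimize(neg_loglik, x0=[0.0, 0.0], args=(obs, cens, dl))
    mu, sig = res.x[0], np.exp(res.x[1])
    return float(np.exp(mu + sig**2 / 2))  # lognormal mean

est = mle_mean(obs, cens, dl)

# Empirical bootstrap over (value, indicator) pairs, refitting by MLE each time
boot_means = []
for _ in range(200):
    i = rng.integers(0, obs.size, obs.size)
    boot_means.append(mle_mean(obs[i], cens[i], dl))
ci = np.percentile(boot_means, [2.5, 97.5])
```

The percentiles of the bootstrap means give the uncertainty interval for the mean; the same loop yields sampling distributions for medians or other percentiles.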

20.
Reassessing Benzene Cancer Risks Using Internal Doses
Human cancer risks from benzene exposure have previously been estimated by regulatory agencies based primarily on epidemiological data, with supporting evidence provided by animal bioassay data. This paper reexamines the animal-based risk assessments for benzene using physiologically-based pharmacokinetic (PBPK) models of benzene metabolism in animals and humans. It demonstrates that internal doses (interpreted as total benzene metabolites formed) from oral gavage experiments in mice are well predicted by a PBPK model developed by Travis et al. Both the data and the model outputs can also be accurately described by the simple nonlinear regression model total metabolites = 76.4x/(80.75 + x), where x = administered dose in mg/kg/day. Thus, PBPK modeling validates the use of such nonlinear regression models, previously used by Bailer and Hoel. An important finding is that refitting the linearized multistage (LMS) model family to internal doses and observed responses changes the maximum-likelihood estimate (MLE) dose-response curve for mice from linear-quadratic to cubic, leading to low-dose risk estimates smaller than in previous risk assessments. This is consistent with the conclusion for mice from the Bailer and Hoel analysis. An innovation in this paper is estimation of internal doses for humans based on a PBPK model (and the regression model approximating it) rather than on interspecies dose conversions. Estimates of human risks at low doses are reduced by the use of internal dose estimates when the estimates are obtained from a PBPK model, in contrast to Bailer and Hoel's findings based on interspecies dose conversion. Sensitivity analyses and comparisons with epidemiological data and risk models suggest that our finding of a nonlinear MLE dose-response curve at low doses is robust to changes in assumptions and more consistent with epidemiological data than earlier risk models.
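The regression model quoted in the abstract is easy to reproduce: it is near-linear at low doses (slope about 76.4/80.75) and saturates toward 76.4 as the administered dose grows, which is what makes the internal-dose scale nonlinear in administered dose.

```python
def total_metabolites(x):
    """Nonlinear regression from the abstract: total benzene metabolites
    as a function of administered dose x (mg/kg/day)."""
    return 76.4 * x / (80.75 + x)

# Near-linear at low doses, saturating at high doses
low_dose_slope = total_metabolites(0.01) / 0.01
plateau = total_metabolites(1e6)
```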

