Similar Documents
20 similar documents found (search time: 22 ms)
1.
Experimental animal studies often serve as the basis for predicting the risk of adverse responses in humans exposed to occupational hazards. A statistical model is applied to exposure-response data, and this fitted model may be used to estimate the exposure associated with a specified level of adverse response. Unfortunately, a number of different statistical models are candidates for fitting the data and may yield widely ranging estimates of risk. Bayesian model averaging (BMA) offers a strategy for addressing uncertainty in the selection of statistical models when generating risk estimates. This strategy is illustrated with two examples: applying the multistage model to cancer responses, and fitting several quantal models to kidney lesion data. BMA provides excess risk estimates or benchmark dose estimates that reflect model uncertainty.
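As a hedged illustration of the model-averaging idea this abstract describes (not the article's actual models or data), the sketch below fits two common quantal dose-response forms by maximum likelihood to invented bioassay counts, weights them by a BIC approximation to posterior model probabilities, and averages the resulting benchmark doses:

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical quantal bioassay data (NOT from the article):
# dose, number of animals tested, number responding.
dose = np.array([0.0, 10.0, 50.0, 100.0])
n    = np.array([50, 50, 50, 50])
y    = np.array([1, 5, 20, 40])

def extra_risk(d, params, model):
    """Extra risk ER(d) = (P(d) - P(0)) / (1 - P(0)) for two quantal models."""
    if model == "quantal_linear":
        b = params[1]
        return 1.0 - np.exp(-b * d)
    b, k = params[1], params[2]          # Weibull
    return 1.0 - np.exp(-(b * d) ** k)

def nll(params, model):
    """Binomial negative log-likelihood with background rate g = params[0]."""
    g = params[0]
    p = np.clip(g + (1 - g) * extra_risk(dose, params, model), 1e-10, 1 - 1e-10)
    return -np.sum(y * np.log(p) + (n - y) * np.log(1 - p))

fits = {}
for model, x0, bounds in [
    ("quantal_linear", [0.02, 0.01], [(1e-6, 0.5), (1e-6, 1.0)]),
    ("weibull", [0.02, 0.01, 1.0], [(1e-6, 0.5), (1e-6, 1.0), (0.2, 5.0)]),
]:
    res = minimize(nll, x0, args=(model,), bounds=bounds)
    bic = 2.0 * res.fun + len(x0) * np.log(n.sum())
    fits[model] = (res.x, bic)

def bmd10(params, model):
    """Dose giving 10% extra risk (closed form for these two models)."""
    target = -np.log(0.9)
    if model == "quantal_linear":
        return target / params[1]
    return target ** (1.0 / params[2]) / params[1]

bics = np.array([fits[m][1] for m in fits])
w = np.exp(-0.5 * (bics - bics.min()))
w /= w.sum()                                   # BIC-based model weights
bmds = np.array([bmd10(fits[m][0], m) for m in fits])
bmd_avg = float(np.sum(w * bmds))              # model-averaged BMD
```

The averaged BMD inherits its uncertainty from both the data fit and the disagreement between candidate models, which is the point of BMA.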

2.
Quantitative risk assessment proceeds by first estimating a dose-response model and then inverting this model to estimate the dose that corresponds to some prespecified level of response. The parametric form of the dose-response model often plays a large role in determining this dose. Consequently, the choice of model is a major source of uncertainty when estimating such endpoints. While methods exist that attempt to incorporate this uncertainty by forming an estimate based upon all models considered, such methods may fail when the true model is on the edge of the space of models considered and cannot be formed from a weighted sum of constituent models. We propose a semiparametric model for dose-response data and derive a dose estimate associated with a particular response. In this formulation, the only restriction on the model form is that it is monotonic. We use this model to estimate the dose-response curve from a long-term cancer bioassay and compare it to methods currently used to account for model uncertainty. A small simulation study shows that the method is superior to model averaging when estimating exposure that arises from a quantal-linear dose-response mechanism, and is similar to these methods when investigating nonlinear dose-response patterns.
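The monotonicity-only constraint can be illustrated with the classical pool-adjacent-violators algorithm (isotonic least squares). This is a frequentist sketch of the idea only, not the authors' Bayesian semiparametric model, and the dose-response data are invented:

```python
import numpy as np

def pava(y, w=None):
    """Pool-adjacent-violators: weighted least-squares nondecreasing fit."""
    y = np.asarray(y, dtype=float)
    w = np.ones_like(y) if w is None else np.asarray(w, dtype=float)
    val, wt, cnt = [], [], []            # current blocks: value, weight, size
    for yi, wi in zip(y, w):
        val.append(yi); wt.append(wi); cnt.append(1)
        # merge adjacent blocks while monotonicity is violated
        while len(val) > 1 and val[-2] > val[-1]:
            v2, w2, c2 = val.pop(), wt.pop(), cnt.pop()
            v1, w1, c1 = val.pop(), wt.pop(), cnt.pop()
            val.append((w1 * v1 + w2 * v2) / (w1 + w2))
            wt.append(w1 + w2)
            cnt.append(c1 + c2)
    return np.repeat(val, cnt)

# Invented observed response proportions at five doses.
dose = np.array([0.0, 1.0, 2.0, 4.0, 8.0])
obs  = np.array([0.05, 0.03, 0.12, 0.30, 0.55])
fit  = pava(obs)

# Invert the monotone fit: dose at which response exceeds background by 0.10.
target = fit[0] + 0.10
bmd = float(np.interp(target, fit, dose))
```

Because the only assumption is monotonicity, the fitted curve is not forced to lie inside any parametric family, which is the property the abstract exploits.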

3.
Quantitative risk assessment often begins with an estimate of the exposure or dose associated with a particular risk level, from which exposure levels posing low risk to populations can be extrapolated. For continuous exposures, this value, the benchmark dose, is often defined by a specified increase (or decrease) from the median or mean response at no exposure. This method of calculating the benchmark dose does not take the response distribution into account and, consequently, cannot be interpreted in terms of probability statements about the target population. We investigate quantile regression as an alternative to median or mean regression. By defining the dose–response quantile relationship and an impairment threshold, we specify a benchmark dose as the dose associated with a specified probability that the population will have a response equal to or more extreme than the specified impairment threshold. In addition, in an effort to minimize model uncertainty, we use Bayesian monotonic semiparametric regression to define the exposure–response quantile relationship, which gives the model the flexibility to estimate the quantile dose–response function. We describe this methodology and apply it to both epidemiology and toxicology data.

4.
Formaldehyde induced squamous-cell carcinomas in the nasal passages of F344 rats in two inhalation bioassays at exposure levels of 6 ppm and above. Increases in rates of cell proliferation were measured by T. M. Monticello and colleagues at exposure levels of 0.7 ppm and above in the same tissues from which tumors arose. A risk assessment for formaldehyde was conducted at the CIIT Centers for Health Research, in collaboration with investigators from Toxicological Excellence in Risk Assessment (TERA) and the U.S. Environmental Protection Agency (U.S. EPA), in 1999. Two methods for dose-response assessment were used: a full biologically based modeling approach and a statistically oriented analysis by the benchmark dose (BMD) method. This article presents the latter approach, the purpose of which is to combine BMD and pharmacokinetic modeling to estimate human cancer risks from formaldehyde exposure. BMD analysis was used to identify points of departure (exposure levels) for low-dose extrapolation in rats for both the tumor and cell proliferation endpoints. The benchmark concentrations for induced cell proliferation were lower than those for tumors. These concentrations were extrapolated to humans using two mechanistic models. One model used computational fluid dynamics (CFD) alone to determine rates of delivery of inhaled formaldehyde to the nasal lining. The second model combined the CFD method with a pharmacokinetic model to predict tissue dose, with formaldehyde-induced DNA-protein cross-links (DPX) as a dose metric. Both extrapolation methods gave similar results, and the predicted cancer risk in humans at low exposure levels was found to be similar to that from a risk assessment conducted by the U.S. EPA in 1991.
Use of the mechanistically based extrapolation models lends greater certainty to these risk estimates than previous approaches. It also identifies the measured dose-response relationship for cell proliferation at low exposure levels, the dose-response relationship for DPX in monkeys, and the choice between linear and nonlinear methods of extrapolation as the key remaining sources of uncertainty.

5.
Cakmak, S., Burnett, R. T., & Krewski, D. (1999). Risk Analysis, 19(3), 487–496.
The association between daily fluctuations in ambient particulate matter and daily variations in nonaccidental mortality has been extensively investigated. Although it is now widely recognized that such an association exists, the form of the concentration–response model is still in question. Linear no-threshold and linear threshold models have been most commonly examined. In this paper we considered methods to detect and estimate threshold concentrations using time series data of daily mortality rates and air pollution concentrations. Because exposure is measured with error, we also considered the influence of measurement error in distinguishing between these two competing model specifications. The methods were illustrated on a 15-year daily time series of nonaccidental mortality and particulate air pollution data in Toronto, Canada. Nonparametric smoothed representations of the association between mortality and air pollution were adequate to graphically distinguish between these two forms. Weighted nonlinear regression methods for relative risk models gave nearly unbiased estimates of threshold concentrations even under conditions of extreme exposure measurement error. The uncertainty in the threshold estimates increased with the degree of exposure error. Regression models incorporating threshold concentrations could be clearly distinguished from linear relative risk models in the presence of exposure measurement error. Assuming a linear model when a threshold model was the correct form usually resulted in overestimates of the number of averted premature deaths, except for low threshold concentrations and large measurement error.

6.
Since the National Food Safety Initiative of 1997, risk assessment has been an important issue in food safety. Microbial risk assessment is a systematic process for describing and quantifying the potential for adverse health effects associated with exposure to microorganisms. Various dose-response models for estimating microbial risks have been investigated. We considered four two-parameter models and four three-parameter models in order to evaluate variability among models for microbial risk assessment, using infectivity and illness data from studies with human volunteers exposed to a variety of microbial pathogens. Model variability is measured in terms of estimated ED01s and ED10s, with the view that these effective dose levels correspond to the lower and upper limits of the 1% to 10% risk range generally recommended for establishing benchmark doses in risk assessment. Parameters of the statistical models are estimated by maximum likelihood. In this article, a weighted average of effective dose estimates from the eight two- and three-parameter dose-response models, with weights determined by the Kullback information criterion, is proposed to address model uncertainty in microbial risk assessment. The proposed procedures for incorporating model uncertainty and making inferences are illustrated with human infection/illness dose-response data sets.

7.
Acute Exposure Guideline Level (AEGL) recommendations are developed for 10-minute, 30-minute, 1-hour, 4-hour, and 8-hour exposure durations and are designated for three levels of severity: AEGL-1 represents concentrations above which acute exposure may cause noticeable discomfort, including irritation; AEGL-2 represents concentrations above which acute exposure may cause irreversible health effects or impaired ability to escape; and AEGL-3 represents concentrations above which exposure may cause life-threatening health effects or death. The default procedure for setting AEGL values across durations when applicable data are unavailable involves estimation based on Haber's rule, which assumes that cumulative exposure is the determinant of toxicity. For acute exposure to trichloroethylene (TCE), however, experimental data indicate that momentary tissue concentration, and not the cumulative amount of exposure, is important. We employed an alternative approach to duration adjustment in which a physiologically based pharmacokinetic (PBPK) model was used to predict the arterial blood concentrations [TCE(a)] associated with adverse outcomes appropriate for AEGL-1, -2, or -3-level effects. The PBPK model was then used to estimate the atmospheric concentration that produces an equivalent [TCE(a)] at each of the AEGL-specific exposure durations. This approach yielded [TCE(a)] values of 4.89 mg/l for AEGL-1, 18.7 mg/l for AEGL-2, and 310 mg/l for AEGL-3. Duration adjustments based on equivalent target tissue doses should provide similar degrees of protection against toxicity at different exposure durations.
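The duration-adjustment logic can be sketched with a one-compartment kinetic model standing in for the full PBPK model: for each duration, solve for the exposure concentration that produces the same peak internal concentration, and compare with Haber's rule. The rate constants and target concentration below are invented for illustration:

```python
import numpy as np

# One-compartment kinetics (illustrative stand-in for the PBPK model):
#   dC/dt = ka*Cair - ke*C, so after a constant exposure of duration T the
#   peak internal concentration is C(T) = (ka/ke) * Cair * (1 - exp(-ke*T)).
ka, ke = 1.0, 0.5        # hypothetical uptake / elimination rates (1/h)
target = 10.0            # hypothetical target internal concentration

def cair_for_target(T):
    """Exposure concentration producing the same peak internal dose at duration T."""
    return target * ke / (ka * (1.0 - np.exp(-ke * T)))

durations = np.array([10 / 60, 0.5, 1.0, 4.0, 8.0])   # the five AEGL durations (h)
cair = cair_for_target(durations)

# Haber's rule (C * t = constant), anchored at the 1-hour value for comparison.
haber = cair_for_target(1.0) / durations
```

At short durations the kinetic adjustment calls for a lower concentration than Haber's rule, and at long durations it levels off at the steady-state value `target * ke / ka` instead of falling without bound, mirroring the "momentary tissue concentration" argument above.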

8.
U.S. Environmental Protection Agency benchmark doses for dichotomous cancer responses are often estimated using a multistage model based on a monotonic dose-response assumption. To account for model uncertainty in the estimation process, several model averaging methods have been proposed for risk assessment. In this article, we extend the usual parameter space in the multistage model for monotonicity to allow for the possibility of a hormetic dose-response relationship. Bayesian model averaging is used to estimate the benchmark dose and to provide posterior probabilities for monotonicity versus hormesis. Simulation studies show that the newly proposed method provides robust point and interval estimation of a benchmark dose in the presence or absence of hormesis. We also apply the method to two data sets on the carcinogenic response of rats to 2,3,7,8-tetrachlorodibenzo-p-dioxin.

9.
Methods for evaluating the hazards associated with noncancer responses using epidemiologic data are considered. Methods for noncancer risk assessment have largely been developed for experimental data and are not always suitable for the more complex structure of epidemiologic data. In epidemiology, the response and exposure measurements are often either continuous or dichotomous. For a continuous noncancer response modeled with multiple regression, a variety of endpoints may be examined: (1) the concentration associated with absolute or relative decrements in response; (2) a threshold concentration associated with no change in response; and (3) the concentration associated with a particular added risk of impairment. For a dichotomous noncancer response modeled with logistic regression, concentrations associated with a specified added/extra risk or with a threshold response may be estimated. No-observed-effect concentrations may also be estimated from categorizations of exposures for both continuous and dichotomous responses, but these may depend on the arbitrary categories chosen. Respiratory function in miners exposed to coal dust is used to illustrate these methods.

10.
The benchmark dose (BMD) approach has gained acceptance as a valuable risk assessment tool, but risk assessors still face significant challenges in selecting an appropriate BMD/BMDL estimate from the results of a set of acceptable dose-response models. Current approaches do not explicitly address model uncertainty, and there is a need to more fully inform health risk assessors in this regard. In this study, a Bayesian model averaging (BMA) BMD estimation method that takes model uncertainty into account is proposed as an alternative to current BMD estimation approaches for continuous data. Using the "hybrid" method proposed by Crump, two BMA strategies, one based on maximum likelihood estimation and one based on Markov chain Monte Carlo, are first applied as a demonstration to calculate model-averaged BMD estimates from real continuous dose-response data. The outcomes from the example data sets suggest that the BMA BMD estimates are more reliable than the estimates from the individual models with highest posterior weight, in terms of higher BMDLs and narrower 90th percentile intervals. In addition, a simulation study is performed to evaluate the accuracy of the BMA BMD estimator. The results from the simulation study suggest that the BMA BMD estimates have smaller bias than BMDs selected using other criteria. To further validate the BMA method, some technical issues, including the selection of models and the use of bootstrap methods for BMDL derivation, need further investigation over a more extensive, representative set of dose-response data.

11.
Small, M. J. (2011). Risk Analysis, 31(10), 1561–1575.
A methodology is presented for assessing the information value of an additional dosage experiment in existing bioassay studies. The analysis demonstrates the potential reduction in the uncertainty of toxicity metrics derived from expanded studies, providing insights for future studies. Bayesian methods are used to fit alternative dose-response models using Markov chain Monte Carlo (MCMC) simulation for parameter estimation and Bayesian model averaging (BMA) is used to compare and combine the alternative models. BMA predictions for benchmark dose (BMD) are developed, with uncertainty in these predictions used to derive the lower bound BMDL. The MCMC and BMA results provide a basis for a subsequent Monte Carlo analysis that backcasts the dosage where an additional test group would have been most beneficial in reducing the uncertainty in the BMD prediction, along with the magnitude of the expected uncertainty reduction. Uncertainty reductions are measured in terms of reduced interval widths of predicted BMD values and increases in BMDL values that occur as a result of this reduced uncertainty. The methodology is illustrated using two existing data sets for TCDD carcinogenicity, fitted with two alternative dose-response models (logistic and quantal-linear). The example shows that an additional dose at a relatively high value would have been most effective for reducing the uncertainty in BMA BMD estimates, with predicted reductions in the widths of uncertainty intervals of approximately 30%, and expected increases in BMDL values of 5–10%. The results demonstrate that dose selection for studies that subsequently inform dose-response models can benefit from consideration of how these models will be fit, combined, and interpreted.

12.
European directives require that all veterinary medicines be assessed to determine the harmful effects that their use may have on the environment. Fundamental to this assessment is the calculation of the predicted environmental concentration (PEC), which is dependent on the type of drug, its associated treatment characteristics, and the route by which residues enter the environment. Deterministic models for the calculation of the PEC have previously been presented. In this article, the inclusion of variability and uncertainty within such models is introduced. In particular, models for the calculation of the PEC for residues excreted directly onto pasture by grazing animals are considered, and comparison of deterministic and stochastic results suggests that uncertainty and variability cannot be ignored.

13.
Park, R. M. (2020). Risk Analysis, 40(12), 2561–2571.
Uncertainty in model predictions of exposure response at low exposures is a problem for risk assessment. A particular interest is the internal concentration of an agent in biological systems as a function of external exposure concentrations. Physiologically based pharmacokinetic (PBPK) models permit estimation of internal exposure concentrations in target tissues, but most assume that model parameters are either fixed or instantaneously dose-dependent. Taking into account response times for biological regulatory mechanisms introduces new dynamic behaviors that have implications for low-dose exposure response under chronic exposure. A simple one-compartment simulation model is described in which internal concentrations summed over time exhibit significant nonlinearity and nonmonotonicity in relation to external concentrations, owing to delayed up- or downregulation of a metabolic pathway. These behaviors could be the mechanistic basis for homeostasis and for some apparent hormetic effects.
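A minimal version of the kind of simulation described can be written in a few lines. The first-order up-regulation of clearance, the parameter values, and the Euler integration below are illustrative assumptions, not the article's model; they show the qualitative point that cumulative internal dose grows sublinearly with external concentration once delayed induction of clearance catches up with exposure:

```python
import numpy as np

def internal_auc(E, t_end=200.0, dt=0.01, k_in=1.0, k0=0.1, a=0.5, tau=20.0):
    """Euler-integrate a one-compartment model with delayed clearance induction.

    dC/dt = k_in*E - k*C ;  dk/dt = (k_target(C) - k) / tau,
    where the induced target clearance k_target = k0*(1 + a*C) responds to the
    internal concentration C with time constant tau.  All parameter values are
    illustrative assumptions.  Returns the cumulative internal dose (AUC of C).
    """
    C, k, auc = 0.0, k0, 0.0
    for _ in range(int(t_end / dt)):
        k_target = k0 * (1.0 + a * C)   # induction raises clearance with dose
        C += (k_in * E - k * C) * dt
        k += (k_target - k) / tau * dt
        auc += C * dt
    return auc

# Cumulative internal dose for a range of constant external concentrations.
aucs = {E: internal_auc(E) for E in (0.5, 1.0, 2.0, 4.0)}
```

Doubling the external concentration less than doubles the cumulative internal dose in this sketch, the kind of low-dose nonlinearity the abstract attributes to delayed regulation.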

14.
Applying a hockey stick parametric dose-response model to data on late or retarded development in Iraqi children exposed in utero to methylmercury, with mercury (Hg) exposure characterized by the peak Hg concentration in mothers' hair during pregnancy, Cox et al. calculated the "best statistical estimate" of the threshold for health effects as 10 ppm Hg in hair, with a 95% range of uncertainty between 0 and 13.6 ppm.(1) A new application of the hockey stick model to the Iraqi data shows, however, that the statistical upper limit of the threshold based on the hockey stick model could be as high as 255 ppm. Furthermore, the maximum likelihood estimate of the threshold using a different parametric model is virtually zero. These and other analyses demonstrate that threshold estimates based on parametric models exhibit high statistical variability and model dependency, and are highly sensitive to the precise definition of an abnormal response. Consequently, they are not a reliable basis for setting a reference dose (RfD) for methylmercury. Benchmark analyses and statistical analyses useful for deriving NOAELs are also presented. We believe these latter analyses, particularly the benchmark analyses, generally form a sounder basis for determining RfDs than the type of hockey stick analysis presented by Cox et al. However, the acute nature of the exposures, as well as other limitations in the Iraqi data, suggests that other data may be more appropriate for determining acceptable human exposures to methylmercury.

15.
An ecological risk assessment framework for aircraft overflights has been developed, with special emphasis on military applications. This article presents the analysis of effects and risk characterization phases; the problem formulation and exposure analysis phases are presented in a companion article. The framework addresses the effects of sound, visual stressors, and collision on the abundance and production of wildlife populations. Profiles of effects, including thresholds, are highlighted for two groups of endpoint species: ungulates (hoofed mammals) and pinnipeds (seals, sea lions, walruses). Several factors complicate the analysis of effects for aircraft overflights. Studies of the effects of aircraft overflights previously have not been associated with a quantitative assessment framework; therefore no consistent relations between exposure and population-level response have been developed. Information on behavioral effects of overflights by military aircraft (or component stressors) on most wildlife species is sparse. Moreover, models that relate behavioral changes to abundance or reproduction, and those that relate behavioral or hearing effects thresholds from one population to another are generally not available. The aggregation of sound frequencies, durations, and the view of the aircraft into the single exposure metric of slant distance is not always the best predictor of effects, but effects associated with more specific exposure metrics (e.g., narrow sound spectra) may not be easily determined or added. The weight of evidence and uncertainty analyses of the risk characterization for overflights are also discussed in this article.

16.
A simple and useful characterization of many predictive models is in terms of model structure and model parameters. Accordingly, uncertainties in model predictions arise from uncertainties in the values assumed by the model parameters (parameter uncertainty) and from the uncertainties and errors associated with the structure of the model (model uncertainty). When assessing uncertainty, one is interested in identifying, at some level of confidence, the range of possible and then probable values of the unknown of interest. All sources of uncertainty and variability need to be considered. Although parameter uncertainty assessment has been extensively discussed in the literature, model uncertainty is a relatively new topic of discussion for the scientific community, despite often being the major contributor to overall uncertainty. This article describes a Bayesian methodology for the assessment of model uncertainties, in which models are treated as sources of information on the unknown of interest. The general framework is then specialized for the case where models provide point estimates about a single-valued unknown and where information about models is available in the form of homogeneous and nonhomogeneous performance data (pairs of experimental observations and model predictions). Several example applications for physical models used in fire risk analysis are also provided.

17.
Human variability is a very important factor considered in human health risk assessment for protecting sensitive populations from chemical exposure. Traditionally, to account for this variability, an interhuman uncertainty factor is applied to lower the exposure limit. However, using a fixed uncertainty factor rather than probabilistically accounting for human variability cannot adequately support the probabilistic risk assessment advocated by a number of researchers; new methods are needed to probabilistically quantify human population variability. We propose a Bayesian hierarchical model to quantify variability among different populations. This approach jointly characterizes the distribution of risk at background exposure and the sensitivity of response to exposure, which are commonly represented by model parameters. We demonstrate, through both an application to real data and a simulation study, that the proposed hierarchical structure adequately characterizes variability across different populations.

18.
In chemical and microbial risk assessments, risk assessors fit dose-response models to high-dose data and extrapolate downward to risk levels in the range of 1–10%. Although multiple dose-response models may be able to fit the data adequately in the experimental range, the estimated effective dose (ED) corresponding to an extremely small risk can be substantially different from model to model. In this respect, model averaging (MA) provides more robustness than a single dose-response model in the point and interval estimation of an ED. In MA, accounting for both data uncertainty and model uncertainty is crucial, but addressing model uncertainty is not achieved simply by increasing the number of models in a model space. A plausible set of models for MA can be characterized by goodness of fit and diversity surrounding the truth. We propose a diversity index (DI) to balance between these two characteristics in model space selection. It addresses a collective property of a model space rather than individual performance of each model. Tuning parameters in the DI control the size of the model space for MA.

19.
Benchmark dose (BMD) analysis was used to estimate an inhalation benchmark concentration for styrene neurotoxicity. Quantal data on neuropsychologic test results from styrene-exposed workers [Mutti et al. (1984). American Journal of Industrial Medicine, 5, 275-286] were used to quantify neurotoxicity, defined as the percent of tested workers who responded abnormally to ≥1, ≥2, or ≥3 of a battery of eight tests. Exposure was based on previously published results on mean urinary mandelic and phenylglyoxylic acid levels in the workers, converted to air styrene levels (15, 44, 74, or 115 ppm). Workers from the same region who were not exposed to styrene served as a control group. Maximum-likelihood estimates (MLEs) and BMDs at 5% and 10% response levels of the exposed population were obtained from log-normal analysis of the quantal data. The highest MLE was 9 ppm (BMD = 4 ppm) styrene, representing abnormal responses to ≥3 tests by 10% of the exposed population. The most health-protective MLE was 2 ppm styrene (BMD = 0.3 ppm), representing abnormal responses to ≥1 test by 5% of the exposed population. A no-observed-adverse-effect-level/lowest-observed-adverse-effect-level (NOAEL/LOAEL) analysis of the same quantal data showed that workers in all styrene exposure groups responded abnormally to ≥1, ≥2, or ≥3 tests compared to controls, and the LOAEL was 15 ppm. A comparison of the BMD and NOAEL/LOAEL analyses suggests that at air styrene levels below the LOAEL, a segment of the worker population may be adversely affected. The benchmark approach will be useful for styrene noncancer risk assessment by providing a more accurate estimate of potential risk, which should in turn help reduce the uncertainty that commonly complicates setting exposure levels.
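A rough sketch of this kind of log-normal (log-probit) quantal analysis follows. The air styrene levels are those quoted in the abstract, but the response counts are invented for illustration, so the fitted MLEs and effective doses are not those of the study:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

# Air styrene levels (ppm) quoted in the abstract; the response counts
# below are invented for illustration only.
dose = np.array([15.0, 44.0, 74.0, 115.0])
n    = np.array([20, 20, 20, 20])
y    = np.array([3, 7, 11, 15])

def nll(theta):
    """Negative log-likelihood of a log-probit model P(d) = Phi(a + b*ln d)."""
    a, b = theta
    p = np.clip(norm.cdf(a + b * np.log(dose)), 1e-10, 1 - 1e-10)
    return -np.sum(y * np.log(p) + (n - y) * np.log(1 - p))

res = minimize(nll, x0=[-3.0, 0.8], method="Nelder-Mead")
a, b = res.x

def ed(p):
    """MLE of the air level at which a fraction p of workers respond abnormally."""
    return float(np.exp((norm.ppf(p) - a) / b))

ed05, ed10 = ed(0.05), ed(0.10)   # analogues of the 5% and 10% response levels
```

Inverting the fitted curve at 5% and 10% response is what produces MLEs below the lowest tested concentration, which is the abstract's point about effects below the LOAEL.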

20.
This article presents a Listeria monocytogenes growth model in milk at the farm bulk tank stage. The main objective was to judge the feasibility and value to risk assessors of introducing a complex model, including a complete thermal model, within a microbial quantitative risk assessment scheme. Predictive microbiology models are used under varying temperature conditions to predict bacterial growth. Input distributions are estimated from data in the literature when available; otherwise, reasonable assumptions are made for the context considered. Previously published results based on a Bayesian analysis of growth parameters are used. A Monte Carlo simulation that forecasts bacterial growth is the focus of this study. Three scenarios that take account of the variability and uncertainty of growth parameters are compared. The effect of a sophisticated thermal model accounting for continuous variations in milk temperature was tested by comparison with a simplified model in which milk temperature was considered constant. Limited multiplication of bacteria within the farm bulk tank was modeled. The two principal factors influencing bacterial growth were found to be tank thermostat regulation and bacterial population growth parameters. The dilution phenomenon due to the introduction of new milk was the main factor affecting the final bacterial concentration. The results show that a model assuming constant environmental conditions at an average temperature should be acceptable for this process. This work may constitute a first step toward exposure assessment for L. monocytogenes in milk. In addition, this partly conceptual work provides guidelines for other risk assessments where continuous variation of a parameter needs to be taken into account.
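As a toy illustration of Monte Carlo growth forecasting under a varying temperature profile (not the article's model: the square-root growth model, the temperature profile, and the parameter distributions below are all assumptions made for the sketch):

```python
import numpy as np

rng = np.random.default_rng(42)

# Hourly milk temperature profile (°C): warm after milking, cooled to 4 °C
# within two hours, then held.  Profile and distributions are invented.
hours = np.arange(24)
temp = np.where(hours < 2, 15.0 - 5.5 * hours, 4.0)

def log10_growth(b, Tmin):
    """Square-root (Ratkowsky-type) model: mu(T) = [b*(T - Tmin)]^2 in ln/h.
    Integrate over the hourly profile and return the log10 increase."""
    mu = (b * np.clip(temp - Tmin, 0.0, None)) ** 2
    return mu.sum() / np.log(10)          # 1-hour steps

# Monte Carlo over uncertain growth parameters (illustrative distributions).
b_draws    = rng.normal(0.023, 0.003, 10_000)
Tmin_draws = rng.normal(-2.0, 0.5, 10_000)
growth = np.array([log10_growth(b, t) for b, t in zip(b_draws, Tmin_draws)])
```

The spread of `growth` (e.g., its 2.5th to 97.5th percentiles) then expresses the combined effect of parameter uncertainty and variability on the predicted log10 increase over 24 h of storage, the kind of output the scenarios in the article compare.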
