1.
In evaluating the risk of exposure to health hazards, characterizing the dose‐response relationship and estimating acceptable exposure levels are the primary goals. In analyses of health risks associated with exposure to ionizing radiation, while there is clear agreement that moderate to high radiation doses cause harmful effects in humans, little is known about the possible biological effects at low doses, for example, below 0.1 Gy, which is the dose range relevant to most radiation exposures of concern today. A conventional approach to radiation dose‐response estimation based on simple parametric forms, such as the linear nonthreshold model, can be misleading in evaluating the risk and, in particular, its uncertainty at low doses. As an alternative approach, we consider a Bayesian semiparametric model that has a connected piece‐wise‐linear dose‐response function with prior distributions having an autoregressive structure among the random slope coefficients defined over closely spaced dose categories. With a simulation study and an application to the analysis of cancer incidence data among Japanese atomic bomb survivors, we show that this approach can produce smooth and flexible dose‐response estimates while reasonably handling the risk uncertainty at low doses and elsewhere. With relatively few assumptions and modeling options to be made by the analyst, the method can be particularly useful in assessing risks associated with low‐dose radiation exposures.

2.
3.
Two forms of single‐hit infection dose‐response models have previously been developed to assess available data from human feeding trials and estimate the norovirus dose‐response relationship. The mechanistic interpretations of these models include strong assumptions that warrant reconsideration: the first study includes an implicit assumption that there is no immunity to Norwalk virus among the specific study population, while the recent second study includes assumptions that such immunity could exist and that the nonimmune have no defensive barriers to prevent infection from exposure to just one virus. Both models addressed unmeasured virus aggregation in administered doses. In this work, the available data are reanalyzed using a generalization of the first model to explore these previous assumptions. It was hypothesized that concurrent estimation of an unmeasured degree of virus aggregation and important dose‐response parameters could lead to structural nonidentifiability of the model (i.e., that a diverse range of alternative mechanistic interpretations yield the same optimal fit), and this is demonstrated using the profile likelihood approach and by algebraic proof. It is also demonstrated that omission of an immunity parameter can artificially inflate the estimated degree of aggregation and falsely suggest high susceptibility among the nonimmune. The currently available data support the assumption of immunity within the specific study population, but provide only weak information about the degree of aggregation and susceptibility among the nonimmune. The probability of infection at low and moderate doses may be much lower than previously asserted, but more data from strategically designed dose‐response experiments are needed to provide adequate information.

4.
Microbial food safety risk assessment models can often be simplified by eliminating the need to integrate a complex dose‐response relationship across a distribution of exposure doses. This is possible if exposure pathways lead to doses that consistently have a small probability of causing illness. In this situation, the probability of illness is an approximately linear function of dose. Consequently, the predicted probability of illness per serving across all exposures is linear with respect to the expected value of dose. The majority of dose‐response functions are approximately linear when the dose is low. Nevertheless, what constitutes "low" depends on the parameters of the dose‐response function for a particular pathogen. In this study, a method is proposed to determine an upper bound of the exposure distribution for which the use of a linear dose‐response function is acceptable. If this upper bound is substantially larger than the expected value of exposure doses, then a linear approximation for probability of illness is reasonable. If conditions are appropriate for using the linear dose‐response approximation, for example, the expected value of exposure doses is two to three log10 units smaller than the upper bound of the linear portion of the dose‐response function, then predicting the risk‐reducing effectiveness of a proposed policy is trivial. Simple examples illustrate how this approximation can be used to inform policy decisions and improve an analyst's understanding of risk.
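The low-dose linear approximation described in this abstract can be sketched numerically; the single-hit exponential form and all parameter values below are illustrative assumptions, not taken from the study:

```python
import math
import random

# Assumed single-hit exponential dose-response: P(ill | dose) = 1 - exp(-r * dose).
r = 1e-4  # assumed per-organism probability of causing illness

def p_ill(dose):
    return 1.0 - math.exp(-r * dose)

# A right-skewed exposure distribution whose mean sits well below the upper
# bound of the dose-response function's linear portion (assumed for the sketch).
random.seed(1)
doses = [random.lognormvariate(2.0, 1.0) for _ in range(100_000)]

mean_dose = sum(doses) / len(doses)
exact = sum(p_ill(d) for d in doses) / len(doses)  # integrate over the exposure distribution
linear = r * mean_dose                             # linear approximation: r * E[dose]
print(exact, linear)
```

Because 1 − exp(−x) ≤ x for all x ≥ 0, the linear term is always an upper bound, and the gap stays small as long as nearly all exposure doses fall in the linear portion of the curve.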

5.
Spatial and/or temporal clustering of pathogens will invalidate the commonly used assumption of Poisson‐distributed pathogen counts (doses) in quantitative microbial risk assessment. In this work, the theoretically predicted effect of spatial clustering in conventional "single‐hit" dose‐response models is investigated by employing the stuttering Poisson distribution, a very general family of count distributions that naturally models pathogen clustering and contains the Poisson and negative binomial distributions as special cases. The analysis is facilitated by formulating the dose‐response models in terms of probability generating functions. It is shown formally that the theoretical single‐hit risk obtained with a stuttering Poisson distribution is lower than that obtained with a Poisson distribution, assuming identical mean doses. A similar result holds for mixed Poisson distributions. Numerical examples indicate that the theoretical single‐hit risk is fairly insensitive to moderate clustering, though the effect tends to be more pronounced for low mean doses. Furthermore, using Jensen's inequality, an upper bound on risk is derived that tends to better approximate the exact theoretical single‐hit risk for highly overdispersed dose distributions. The bound holds with any dose distribution (characterized by its mean and zero inflation index) and any conditional dose‐response model that is concave in the dose variable. Its application is exemplified with published data from Norovirus feeding trials, for which some of the administered doses were prepared from an inoculum of aggregated viruses. The potential implications of clustering for dose‐response assessment as well as practical risk characterization are discussed.
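The probability-generating-function formulation lends itself to a quick numerical check of the stated ordering; here the negative binomial stands in for a clustered count distribution, and the values of r and mu are illustrative assumptions:

```python
import math

# Single-hit risk via the probability generating function (PGF) of the dose
# distribution: P(inf) = 1 - G_N(1 - r), with r the per-pathogen infection
# probability. Values of r and mu are illustrative assumptions.
r = 0.05
mu = 10.0  # mean dose (organisms per serving)

def risk_poisson(mu):
    # Poisson PGF: G(s) = exp(mu * (s - 1)), so G(1 - r) = exp(-mu * r)
    return 1.0 - math.exp(-mu * r)

def risk_negbin(mu, k):
    # Negative binomial PGF (mean mu, dispersion k): G(s) = (1 + mu*(1 - s)/k)**(-k)
    return 1.0 - (1.0 + mu * r / k) ** (-k)

p_pois = risk_poisson(mu)
p_clustered = risk_negbin(mu, k=0.5)  # strongly overdispersed (clustered) counts
print(p_pois, p_clustered)  # clustering lowers single-hit risk at the same mean dose
```

As k grows large the negative binomial risk recovers the Poisson value, consistent with the Poisson being a special case of the clustered family.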

6.
The study presents an integrated, rigorous statistical approach to define the likelihood of a threshold and point of departure (POD) based on dose–response data, using a nested family of bent‐hyperbola models. The family includes four models: the full bent‐hyperbola model, which allows for transition between two linear regimes with various levels of smoothness; a bent‐hyperbola model reduced to a spline model, where the transition is fixed at a knot; a bent‐hyperbola model with the negative asymptote slope restricted to zero, named hockey‐stick with arc (HS‐Arc); and the spline model reduced further to a hockey‐stick type model (HS), where the first linear segment has a slope of zero. A likelihood‐ratio test is used to discriminate between the models and determine whether the more flexible versions provide a significantly better fit than a hockey‐stick type model. The full bent‐hyperbola model can accommodate both threshold and nonthreshold behavior, can take on concave‐up and concave‐down shapes with various levels of curvature, can approximate the biochemically relevant Michaelis–Menten model, and can even be reduced to a straight line. With this model, the presence or absence of a threshold may therefore become irrelevant: the best fit of the full bent‐hyperbola model can be used to characterize the dose–response behavior and risk levels, with no need for mode‐of‐action (MOA) information. The POD, characterized by the exposure level at which some predetermined response is reached, can be defined using the full model or one of the better‐fitting reduced models.
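The most reduced member of the nested family, the hockey-stick model, is simple enough to sketch directly; the parameter values here are invented for illustration, not fitted to any data:

```python
# Hockey-stick (HS) dose-response: background response below the knot,
# linear increase beyond it. More flexible family members (spline, HS-Arc,
# full bent-hyperbola) add smoothness/curvature parameters, and a
# likelihood-ratio test asks whether those extra parameters earn their keep.
def hockey_stick(dose, background, knot, slope):
    return background + slope * max(0.0, dose - knot)

below = hockey_stick(0.5, background=1.0, knot=1.0, slope=2.0)  # flat segment
above = hockey_stick(2.0, background=1.0, knot=1.0, slope=2.0)  # past the knot
print(below, above)
```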

7.
Extremely low frequency electric and magnetic fields (ELF EMFs) are a common exposure for modern populations. The prevailing public‐health protection paradigm is that quantitative exposure limits are based on the established acute effects, whereas the possible chronic effects are considered too uncertain for quantitative limits, but might justify precautionary measures. The choice of precautionary measures can be informed by a health‐economics analysis (HEA). We consider four such analyses of precautionary measures that have been conducted at a national or state level in California, the Netherlands, the United Kingdom, and Israel. We describe the context of each analysis, examine how they deal with some of the more significant issues that arise, and present a comparison of the input parameters and assumptions used. The four HEAs are methodologically similar. The most significant qualitative choices that have to be made are what dose‐response relationship to assume, what allowance, if any, to make for uncertainty, and, for a cost‐benefit analysis (CBA) only, what diseases to consider; all four analyses made similar choices. These analyses suggest that, on the assumptions made, there are some low‐cost measures, such as rephasing, that can be applied to transmission lines in some circumstances and that can be justifiable in cost‐benefit terms, but that higher‐cost measures, such as undergrounding, become unjustifiable. Of the four HEAs, those in the United Kingdom and Israel were influential in determining the country's EMF policy. In California and the Netherlands, the HEA may well have informed the debate, but the policy chosen did not stem directly from the HEA.

8.
Several assumptions, defined and undefined, are used in the toxicity assessment of chemical mixtures. In scientific practice, mixture components in the low-dose region, particularly at subthreshold doses, are often assumed to behave additively (i.e., zero interaction) based on heuristic arguments. This assumption has important implications in the practice of risk assessment, but has not been experimentally tested. We have developed methodology to test for additivity in the sense of Berenbaum (Advances in Cancer Research, 1981), based on the statistical equivalence-testing literature, where the null hypothesis of interaction is rejected in favor of the alternative hypothesis of additivity when the data support that claim. The implication of this approach is that conclusions of additivity are made with a false positive rate controlled by the experimenter. The claim of additivity is based on prespecified additivity margins, which are chosen using expert biological judgment such that small deviations from additivity, which are not considered to be biologically important, are not statistically significant. This approach is in contrast to the usual hypothesis-testing framework, which assumes additivity in the null hypothesis and rejects it when there is significant evidence of interaction. In that scenario, failure to reject may be due to lack of statistical power, making the claim of additivity problematic. The proposed method is illustrated in a mixture of five organophosphorus pesticides that were experimentally evaluated alone and at relevant mixing ratios. Motor activity was assessed in adult male rats following acute exposure. Four low-dose mixture groups were evaluated. Evidence of additivity is found in three of the four low-dose mixture groups. The proposed method tests for additivity of the whole mixture and does not take into account subset interactions (e.g., synergistic, antagonistic) that may have occurred and cancelled each other out.
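The equivalence-testing logic can be illustrated with a two one-sided tests (TOST) sketch under a normal approximation; the estimate, standard error, and additivity margin below are all assumed numbers, not results from the organophosphate study:

```python
from statistics import NormalDist

# Equivalence (TOST-style) test for additivity: conclude additivity only if the
# estimated departure from the additivity prediction lies credibly inside
# prespecified margins. All numbers are illustrative assumptions.
delta = 0.04    # estimated departure of mixture response from additivity
se = 0.02       # standard error of that estimate
margin = 0.10   # additivity margin chosen by expert biological judgment

z_lower = (delta + margin) / se        # test of H0: delta <= -margin
z_upper = (delta - margin) / se        # test of H0: delta >= +margin
p_lower = 1 - NormalDist().cdf(z_lower)
p_upper = NormalDist().cdf(z_upper)
p_tost = max(p_lower, p_upper)         # both one-sided tests must reject
additive = p_tost < 0.05
print(round(p_tost, 4), additive)
```

Note the reversal relative to the usual framework: here a small p-value supports additivity, so the experimenter controls the rate of falsely claiming additivity.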

9.
Increased cell proliferation increases the opportunity for transformations of normal cells to malignant cells via intermediate cells. Nongenotoxic cytotoxic carcinogens that increase cell proliferation rates to replace necrotic cells are likely to have a threshold dose for cytotoxicity below which necrosis and hence, carcinogenesis do not occur. Thus, low dose cancer risk estimates based upon nonthreshold, linear extrapolation are inappropriate for this situation. However, a threshold dose is questionable if a nongenotoxic carcinogen acts via a cell receptor. Also, a nongenotoxic carcinogen that increases the cell proliferation rate, via the cell division rate and/or cell removal rate by apoptosis, by augmenting an existing endogenous mechanism is not likely to have a threshold dose. Whether or not a threshold dose exists for nongenotoxic carcinogens, it is of interest to study the relationship between lifetime tumor incidence and the cell proliferation rate. The Moolgavkar–Venzon–Knudson biologically based stochastic two-stage clonal expansion model is used to describe a carcinogenic process. Because the variability in cell proliferation rates among animals often makes it impossible to detect changes of less than 20% in the rate, it is shown that small changes in the cell proliferation rate, that may be obscured by the background noise in rates, can produce large changes in the lifetime tumor incidence as calculated from the Moolgavkar–Venzon–Knudson model. That is, dose response curves for cell proliferation and tumor incidence do not necessarily mimic each other. This makes the use of no observed effect levels (NOELs) for cell proliferation rates often inadmissible for establishing acceptable daily intakes (ADIs) of nongenotoxic carcinogens. In those cases where low dose linearity is not likely, a potential alternative to a NOEL is a benchmark dose corresponding to a small increase in the cell proliferation rate, e.g., 1%, to which appropriate safety (uncertainty) factors can be applied to arrive at an ADI.

10.
There is considerable debate as to the most appropriate metric for characterizing the mortality impacts of air pollution. Life expectancy has been advocated as an informative measure. Although the life‐table calculus is relatively straightforward, it becomes increasingly cumbersome when repeated over large numbers of geographic areas and for multiple causes of death. Two simplifying assumptions were evaluated: linearity of the relation between excess rate ratio and change in life expectancy, and additivity of cause‐specific life‐table calculations. We employed excess rate ratios linking PM2.5 and mortality from cerebrovascular disease, chronic obstructive pulmonary disease, ischemic heart disease, and lung cancer derived from a meta‐analysis of worldwide cohort studies. As a sensitivity analysis, we employed an integrated exposure response function based on the observed risk of PM2.5 over a wide range of concentrations from ambient exposure, indoor exposure, second‐hand smoke, and personal smoking. Impacts were estimated in relation to a change in PM2.5 from 19.5 μg/m3 estimated for Toronto to an estimated natural background concentration of 1.8 μg/m3. Estimated changes in life expectancy varied linearly with excess rate ratios, but at higher values the relationship was more accurately represented as a nonlinear function. Changes in life expectancy attributed to specific causes of death were additive with maximum error of 10%. Results were sensitive to assumptions about the air pollution concentration below which effects on mortality were not quantified. We have demonstrated valid approximations comprising expression of change in life expectancy as a function of excess mortality and summation across multiple causes of death.

11.
Dose‐response analysis of binary developmental data (e.g., implant loss, fetal abnormalities) is best done using individual fetus data (identified to litter) or litter‐specific statistics such as number of offspring per litter and proportion abnormal. However, such data are not often available to risk assessors. Scientific articles usually present only dose‐group summaries for the number or average proportion abnormal and the total number of fetuses. Without litter‐specific data, it is not possible to estimate variances correctly (often characterized as a problem of overdispersion, intralitter correlation, or "litter effect"). However, it is possible to use group summary data when the design effect has been estimated for each dose group. Previous studies have demonstrated useful dose‐response and trend test analyses based on design effect estimates using litter‐specific data from the same study. This simplifies the analysis but does not help when litter‐specific data are unavailable. In the present study, we show that summary data on fetal malformations can be adjusted satisfactorily using estimates of the design effect based on historical data. When adjusted data are then analyzed with models designed for binomial responses, the resulting benchmark doses are similar to those obtained from analyzing litter‐level data with nested dichotomous models.

12.
There is a need to advance our ability to characterize the risk of inhalational anthrax following a low‐dose exposure. The exposure scenario most often considered is a single exposure that occurs during an attack. However, long‐term daily low‐dose exposures also represent a realistic exposure scenario, such as what may be encountered by people occupying areas for longer periods. Given this, the objective of the current work was to model two rabbit inhalational anthrax dose‐response data sets. One data set was from single exposures to aerosolized Bacillus anthracis Ames spores. The second data set exposed rabbits repeatedly to aerosols of B. anthracis Ames spores. For the multiple exposure data, the cumulative dose (i.e., the sum of the individual daily doses) was used for the model. Lethality was the response in both cases. Modeling was performed using Benchmark Dose Software evaluating six models: logprobit, loglogistic, Weibull, exponential, gamma, and dichotomous‐Hill. All models produced acceptable fits to both data sets. The exponential model was identified as the best fitting model for both data sets. Statistical tests suggested there was no significant difference between the single exposure exponential model results and the multiple exposure exponential model results, which suggests the risk of disease is similar between the two data sets. The dose expected to cause 10% lethality was 15,600 inhaled spores and 18,200 inhaled spores for the single exposure and multiple exposure exponential dose‐response model, respectively, and the 95% lower confidence intervals were 9,800 inhaled spores and 9,200 inhaled spores, respectively.
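Given the 10% lethality dose reported above, the exponential model's rate constant, and any other response level, follow by simple inversion; only the 15,600-spore figure comes from the abstract, the rest is arithmetic:

```python
import math

# Exponential dose-response: P(death | dose) = 1 - exp(-k * dose).
# Invert the reported single-exposure 10% lethality dose to recover k.
ld10 = 15_600  # inhaled spores (from the abstract)
k = -math.log(1 - 0.10) / ld10

p_check = 1 - math.exp(-k * ld10)  # should recover 10% lethality at ld10
ld50 = math.log(2) / k             # implied 50% lethality dose under the same fit
print(round(p_check, 3), round(ld50))
```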

13.
Risk Analysis, 2018, 38(7): 1474–1489
Complex statistical models fitted to data from studies of atomic bomb survivors are used to estimate the human health effects of ionizing radiation exposures. We describe and illustrate an approach to estimate population risks from ionizing radiation exposure that relaxes many assumptions about radiation‐related mortality. The approach draws on developments in methods for causal inference. The results offer a different way to quantify radiation's effects and show that conventional estimates of the population burden of excess cancer at high radiation doses are driven strongly by projecting outside the range of current data. Summary results obtained using the proposed approach are similar in magnitude to those obtained using conventional methods, although estimates of radiation‐related excess cancers differ for many age, sex, and dose groups. At low doses relevant to typical exposures, the strength of evidence in data is surprisingly weak. Statements regarding human health effects at low doses rely strongly on the use of modeling assumptions.

14.
Dose‐response models in microbial risk assessment consider two steps in the process ultimately leading to illness: from exposure to (asymptomatic) infection, and from infection to (symptomatic) illness. Most data and theoretical approaches are available for the exposure‐infection step; the infection‐illness step has received less attention. Furthermore, current microbial risk assessment models do not account for acquired immunity. These limitations may lead to biased risk estimates. We consider effects of both dose dependency of the conditional probability of illness given infection, and acquired immunity to risk estimates, and demonstrate their effects in a case study on exposure to Campylobacter jejuni. To account for acquired immunity in risk estimates, an inflation factor is proposed. The inflation factor depends on the relative rates of loss of protection over exposure. The conditional probability of illness given infection is based on a previously published model, accounting for the within‐host dynamics of illness. We find that at low (average) doses, the infection‐illness model has the greatest impact on risk estimates, whereas at higher (average) doses and/or increased exposure frequencies, the acquired immunity model has the greatest impact. The proposed models are strongly nonlinear, and reducing exposure is not expected to lead to a proportional decrease in risk and, under certain conditions, may even lead to an increase in risk. The impact of different dose‐response models on risk estimates is particularly pronounced when introducing heterogeneity in the population exposure distribution.

15.
We study the implications of economies of party size in a model of party formation. We show that when the policy space is one‐dimensional, candidates form at most two parties. This result does not depend on the magnitude of the economies of party size or sensitively on the nature of the individuals' preferences. It does depend on our assumptions that the policy space is one‐dimensional and that uncertainty is absent; we study how modifications of these assumptions affect our conclusions. (JEL: D70, D72)

16.
Cryptosporidium human dose‐response data from seven species/isolates are used to investigate six models of varying complexity that estimate infection probability as a function of dose. Previous models attempt to explicitly account for virulence differences among C. parvum isolates, using three or six species/isolates. Four (two new) models assume species/isolate differences are insignificant and three of these (all but exponential) allow for variable human susceptibility. These three human‐focused models (fractional Poisson, exponential with immunity and beta‐Poisson) are relatively simple yet fit the data significantly better than the more complex isolate‐focused models. Among these three, the one‐parameter fractional Poisson model is the simplest but assumes that all Cryptosporidium oocysts used in the studies were capable of initiating infection. The exponential with immunity model does not require such an assumption and includes the fractional Poisson as a special case. The fractional Poisson model is an upper bound of the exponential with immunity model and applies when all oocysts are capable of initiating infection. The beta‐Poisson model does not allow an immune human subpopulation; thus infection probability approaches 100% as dose becomes huge. All three of these models predict significantly (>10x) greater risk at the low doses that consumers might receive if exposed through drinking water or other environmental exposure (e.g., 72% vs. 4% infection probability for a one oocyst dose) than previously predicted. This new insight into Cryptosporidium risk suggests additional inactivation and removal via treatment may be needed to meet any specified risk target, such as a suggested 10^-4 annual risk of Cryptosporidium infection.

17.
This study utilizes old and new Norovirus (NoV) human challenge data to model the dose‐response relationship for human NoV infection. The combined data set is used to update estimates from a previously published beta‐Poisson dose‐response model that includes parameters for virus aggregation and for a beta‐distribution that describes variable susceptibility among hosts. The quality of the beta‐Poisson model is examined and a simpler model is proposed. The new model (fractional Poisson) characterizes hosts as either perfectly susceptible or perfectly immune, requiring a single parameter (the fraction of perfectly susceptible hosts) in place of the two‐parameter beta‐distribution. A second parameter is included to account for virus aggregation in the same fashion as it is added to the beta‐Poisson model. Infection probability is simply the product of the probability of nonzero exposure (at least one virus or aggregate is ingested) and the fraction of susceptible hosts. The model is computationally simple and appears to be well suited to the data from the NoV human challenge studies. The model's deviance is similar to that of the beta‐Poisson, but with one parameter, rather than two. As a result, the Akaike information criterion favors the fractional Poisson over the beta‐Poisson model. At low, environmentally relevant exposure levels (<100), estimation error is small for the fractional Poisson model; however, caution is advised because no subjects were challenged at such a low dose. New low‐dose data would be of great value to further clarify the NoV dose‐response relationship and to support improved risk assessment for environmentally relevant exposures.
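The fractional Poisson structure described above (infection probability = probability of nonzero exposure × fraction susceptible) can be sketched in a few lines; the aggregation parameter is omitted here, and the value of f is an illustrative assumption:

```python
import math

# Fractional Poisson dose-response (sketch): a fraction f of hosts is perfectly
# susceptible, the rest perfectly immune. For a susceptible host, infection
# requires only nonzero exposure. With Poisson-distributed, fully disaggregated
# doses of mean mu: P(inf) = f * (1 - exp(-mu)). f is an assumed value.
f = 0.72

def p_inf(mu):
    return f * (1.0 - math.exp(-mu))

low = p_inf(1.0)   # mean dose of a single virus
high = p_inf(1e6)  # risk saturates at f, never reaching 100%
print(low, high)
```

The saturation at f, rather than 100%, is exactly what distinguishes this model from the beta-Poisson, whose infection probability approaches 1 at huge doses.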

18.
A mathematical model of receptor-mediated gene expression that includes receptor binding of natural and xenobiotic ligands, protein synthesis and degradation, and metabolism of the xenobiotic ligand was created to identify the determinants of the shape of the dose-response profile. Values of the model's parameters were varied to reflect alternative mechanisms of expression of the protein. These assumptions had dramatic effects on the computed response to a bolus dose of the xenobiotic ligand. If all processes in the model exhibit hyperbolic kinetics, the dose-response curves can appear sigmoidal but actually be linear with a positive slope at low doses. The slope of the curve only approached zero at low dose, indicative of a threshold for response, if binding of the xenobiotic ligand to the receptor exhibited positive cooperativity (ligand binding at one site increases the affinity for ligand at another binding site on the receptor). Positive cooperativity in the rate-limiting step of protein synthesis produced dose-response curves which were "U-shaped" at low doses, also indicative of a threshold. Positive cooperativity in the metabolism of the xenobiotic ligand produced dose-response curves that increased more rapidly than linearly with increasing dose. The model illustrates the fact that response cannot be predicted from qualitative mechanistic arguments alone; any assessment of risk to health from xenobiotic chemicals must be based on a detailed quantitative examination of the kinetic behavior of each chemical species individually.
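The contrast between hyperbolic and cooperative binding can be seen in a toy Hill-type occupancy function; this is not the paper's full gene-expression model, and K and the doses are arbitrary assumed values:

```python
# Receptor occupancy with and without positive cooperativity (Hill-type sketch):
#   hyperbolic (n = 1):  f(d) = d / (K + d)         -> positive slope as d -> 0
#   cooperative (n = 2): f(d) = d**2 / (K**2 + d**2) -> slope vanishes (threshold-like)
K = 1.0  # assumed binding constant

def occupancy(d, n):
    return d**n / (K**n + d**n)

d = 1e-4  # a very low dose
slope_hyperbolic = occupancy(d, 1) / d   # approx. 1/K, stays positive
slope_cooperative = occupancy(d, 2) / d  # approx. d/K**2, vanishes as d -> 0
print(slope_hyperbolic, slope_cooperative)
```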

19.
There has been considerable discussion regarding the conservativeness of low-dose cancer risk estimates based upon linear extrapolation from upper confidence limits. Various groups have expressed a need for best (point) estimates of cancer risk in order to improve risk/benefit decisions. Point estimates of carcinogenic potency obtained from maximum likelihood estimates of low-dose slope may be highly unstable, being sensitive both to the choice of the dose–response model and possibly to minimal perturbations of the data. For carcinogens that augment background carcinogenic processes and/or for mutagenic carcinogens, at low doses the tumor incidence versus target tissue dose is expected to be linear. Pharmacokinetic data may be needed to identify and adjust for exposure-dose nonlinearities. Based on the assumption that the dose response is linear over low doses, a stable point estimate for low-dose cancer risk is proposed. Since various models give similar estimates of risk down to levels of 1%, a stable estimate of the low-dose cancer slope is provided by ŝ = 0.01/ED01, where ED01 is the dose corresponding to an excess cancer risk of 1%. Thus, low-dose estimates of cancer risk are obtained as risk = ŝ × dose. The proposed procedure is similar to one that has been utilized in the past by the Center for Food Safety and Applied Nutrition, Food and Drug Administration. The upper confidence limit, s*, corresponding to this point estimate of the low-dose slope is similar to the upper limit, q1*, obtained from the generalized multistage model. The advantage of the proposed procedure is that ŝ provides stable estimates of low-dose carcinogenic potency that are not unduly influenced by small perturbations of the tumor incidence rates, unlike q1*.
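The proposed slope estimate amounts to two lines of arithmetic; the ED01 value and the exposure dose below are invented for illustration:

```python
# Proposed stable low-dose slope: s_hat = 0.01 / ED01, then risk = s_hat * dose.
ed01 = 5.0   # hypothetical dose (e.g., mg/kg-day) giving 1% excess cancer risk

s_hat = 0.01 / ed01  # low-dose slope (excess risk per unit dose)

dose = 0.002         # hypothetical environmental exposure, far below ED01
risk = s_hat * dose
print(s_hat, risk)
```

Because ED01 sits at the edge of the observable response range, where the candidate models agree, the slope inherits that stability rather than the model sensitivity of a maximum likelihood low-dose extrapolation.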

20.
One-Hit Models of Carcinogenesis: Conservative or Not?
One-hit formulas are widely believed to be "conservative" when used to analyze carcinogenesis bioassays, in the sense that they will rarely underestimate risks of cancer at low exposures. Such formulas are generally applied to the lifetime incidence of cancer at a specific site, with risks estimated from animal data at zero dose (control), and two or more additional doses that are appreciable fractions of a maximum tolerated dose. No empirical study has demonstrated that the one-hit formula is conservative in the sense described. The Carcinogenesis Bioassay Database System contains data on 1212 separate bioassays of 308 chemical substances tested at exactly three evaluable doses. These provided sufficient data to examine 8432 specific combinations of cancer site with sex, species, and chemical. For each of these we fitted a one-hit formula to the zero and maximum dose data points, then examined the relation of the fitted curve to the incidence rate observed at the mid-dose, with and without adjustment for intercurrent mortality. Both underestimates and overestimates of risk at mid-dose occurred substantially more often than expected by chance. We cannot tell whether such underestimates would occur at lower doses, but offer six biological reasons why underestimates might be expected. In a high percentage of animal bioassays, the one-hit formula is not conservative when applied in the usual way to animal data. It remains possible that the one-hit formula may indeed be conservative at sufficiently low doses (below the observational range), but the usual procedure, applied to the usual dose range, can be nonconservative in estimating the slope of the formula at such low doses. Risk assessments for regulation of carcinogens should incorporate some measure of additional uncertainty.
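The "usual way" of fitting a one-hit formula to the control and top-dose points, and the mid-dose comparison the authors performed, can be mimicked with made-up incidence numbers (none of these values come from the bioassay database):

```python
import math

# One-hit formula with background: P(d) = 1 - (1 - p0) * exp(-b * d).
# Illustrative bioassay incidences (assumed for the sketch):
p0, p_max = 0.05, 0.60   # control and top-dose incidence
d_max, d_mid = 100.0, 50.0
p_mid_obs = 0.45         # assumed observed mid-dose incidence

# Solve P(d_max) = p_max for the one-hit slope b.
b = -math.log((1 - p_max) / (1 - p0)) / d_max

p_mid_fit = 1 - (1 - p0) * math.exp(-b * d_mid)
print(round(p_mid_fit, 3), p_mid_obs)  # fitted ~0.384 < observed 0.45: an underestimate
```

With these particular numbers the fitted curve underestimates the mid-dose response, illustrating how the conventional two-point fit can be nonconservative in the observed dose range.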
