Similar Articles (20 results)
1.
Compliance Versus Risk in Assessing Occupational Exposures (cited by 1: 0 self-citations, 1 external)
Assessments of occupational exposures to chemicals are generally based upon the practice of compliance testing, in which the probability of compliance is related to the exceedance [γ, the likelihood that any measurement would exceed an occupational exposure limit (OEL)] and the number of measurements obtained. On the other hand, workers' chronic health risks generally depend upon cumulative lifetime exposures, which are not directly related to the probability of compliance. In this paper we define the probability of "overexposure" (θ) as the likelihood that individual risk (a function of cumulative exposure) exceeds the risk inherent in the OEL (a function of the OEL and duration of exposure). We regard θ as a relevant measure of individual risk for chemicals, such as carcinogens, which produce chronic effects after long-term exposures, but not necessarily for acutely toxic substances which can produce effects relatively quickly. We apply a random-effects model to data from 179 groups of workers, exposed to a variety of chemical agents, and obtain parameter estimates for the group mean exposure and the within- and between-worker components of variance. These estimates are then combined with OELs to generate estimates of γ and θ. We show that compliance testing can significantly underestimate the health risk when sample sizes are small. That is, there can be large probabilities of compliance with typical sample sizes, despite the fact that large proportions of the working population have individual risks greater than the risk inherent in the OEL. We demonstrate further that, because the relationship between θ and γ depends upon the within- and between-worker components of variance, it cannot be assumed a priori that exceedance is a conservative surrogate for overexposure. Thus, we conclude that assessment practices which focus upon either compliance or exceedance are problematic and recommend that employers evaluate exposures relative to the probabilities of overexposure.
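The distinction between exceedance and overexposure can be made concrete with a standard lognormal random-effects exposure model. The sketch below is illustrative and is not the authors' fitted model: it assumes log-exposures for worker i on day j follow ln X_ij = μ_y + b_i + e_ij with b_i ~ N(0, σ_B²) and e_ij ~ N(0, σ_W²), and all parameter values are hypothetical.

```python
# Illustrative sketch (not the authors' fitted model): exceedance vs. overexposure
# under a lognormal random-effects exposure model with hypothetical parameters.
from math import log, sqrt
from scipy.stats import norm

mu_y    = log(0.5)   # log of group geometric-mean exposure (hypothetical units)
sigma_b = 0.8        # between-worker SD of log exposure (hypothetical)
sigma_w = 1.2        # within-worker (day-to-day) SD of log exposure (hypothetical)
oel     = 1.0        # occupational exposure limit in the same units

# Exceedance: probability that any single measurement exceeds the OEL.
gamma = norm.sf((log(oel) - mu_y) / sqrt(sigma_b**2 + sigma_w**2))

# Overexposure: probability that a worker's long-run mean exposure exceeds the OEL.
# A worker's arithmetic mean is exp(mu_y + b_i + sigma_w**2 / 2), so the condition
# mean_i > OEL translates to b_i > log(oel) - mu_y - sigma_w**2 / 2.
theta = norm.sf((log(oel) - mu_y - sigma_w**2 / 2) / sigma_b)

print(f"exceedance gamma = {gamma:.3f}, overexposure theta = {theta:.3f}")
```

Because θ depends on how the total variance splits between the between- and within-worker components, γ can come out either larger or smaller than θ, which is the abstract's point that exceedance is not automatically a conservative surrogate.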

2.
Our reconstructed historical work scenarios incorporating a vintage 1950s locomotive can assist in better understanding the historical asbestos exposures associated with past maintenance and repairs and fill a literature data gap. Air sampling data collected during the exposure scenarios and analyzed by NIOSH 7400 (PCM) and 7402 (PCME) methodologies show personal breathing zone asbestiform fiber exposures were below the current OSHA eight-hour TWA permissible exposure limit (PEL) of 0.1 f/cc (range <0.007–0.064 PCME f/cc) and the 30-minute short-term excursion limit (EL) of 1.0 f/cc (range <0.045–0.32 PCME f/cc), and orders of magnitude below historic OSHA PELs and ACGIH TLVs. Bayesian decision analysis (BDA) results demonstrate that the 95th percentile point estimate falls into AIHA exposure category 3 or 4 as compared to the current PEL and category 1 when compared to the historic PEL. BDA results demonstrate that bystander exposures would be classified as category 0. Our findings were also significantly below the published calcium magnesium insulation exposure range of 2.5 to 7.5 f/cc reported for historic work activities of pipefitters, mechanics, and boilermakers. Diesel-electric locomotive pipe systems were typically insulated with a woven tape lagging that may have been chrysotile asbestos and was handled, removed, and reinstalled during repair and maintenance activities. We reconstructed historical work scenarios involving asbestos woven tape pipe lagging that have not been characterized in the published literature. The historical work scenarios were conducted by a retired railroad pipefitter with 37 years of experience working with these materials and locomotives.
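For readers unfamiliar with the exposure categories mentioned, the sketch below rates a monitoring data set by comparing a lognormal 95th-percentile point estimate to the OEL using the conventional AIHA control bands (category 0: <1% of the OEL; 1: 1–10%; 2: 10–50%; 3: 50–100%; 4: >100%). The study itself used Bayesian decision analysis; this frequentist point estimate and all sample values are hypothetical stand-ins.

```python
# Simplified sketch: AIHA exposure category from a lognormal 95th-percentile point
# estimate. The study used Bayesian decision analysis; the point-estimate approach,
# the sample results, and the OEL comparison here are hypothetical illustrations.
import numpy as np
from scipy.stats import norm

samples = np.array([0.012, 0.031, 0.007, 0.064, 0.022])  # hypothetical PCME f/cc results
oel = 0.1                                                 # current OSHA 8-h TWA PEL, f/cc

logs = np.log(samples)
p95 = np.exp(logs.mean() + norm.ppf(0.95) * logs.std(ddof=1))

ratio = p95 / oel
bands = [(0.01, 0), (0.10, 1), (0.50, 2), (1.00, 3)]
category = next((cat for cut, cat in bands if ratio <= cut), 4)
print(f"95th percentile = {p95:.3f} f/cc, {100 * ratio:.0f}% of OEL -> AIHA category {category}")
```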

3.
Quantitative risk assessment often begins with an estimate of the exposure or dose associated with a particular risk level, from which exposure levels posing low risk to populations can be extrapolated. For continuous exposures, this value, the benchmark dose, is often defined by a specified increase (or decrease) from the median or mean response at no exposure. This method of calculating the benchmark dose does not take into account the response distribution and, consequently, cannot be interpreted based upon probability statements about the target population. We investigate quantile regression as an alternative to median or mean regression. By defining the dose–response quantile relationship and an impairment threshold, we specify a benchmark dose as the dose associated with a specified probability that the population will have a response equal to or more extreme than the specified impairment threshold. In addition, in an effort to minimize model uncertainty, we use Bayesian monotonic semiparametric regression to define the exposure–response quantile relationship, which gives the model flexibility to estimate the quantile dose–response function. We describe this methodology and apply it to both epidemiology and toxicology data.
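A minimal illustration of the benchmark-dose idea described here, using ordinary linear quantile regression in place of the authors' Bayesian monotonic semiparametric model; the simulated data, the 5% quantile level, and the impairment threshold are all hypothetical.

```python
# Minimal sketch: benchmark dose defined as the dose at which a low response quantile
# crosses an impairment threshold. Linear quantile regression stands in for the authors'
# Bayesian monotonic semiparametric model; data, quantile, and threshold are hypothetical.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
dose = rng.uniform(0, 10, 200)
response = 100 - 3.0 * dose + rng.normal(0, 8, 200)   # hypothetical decreasing endpoint

X = sm.add_constant(dose)
fit = sm.QuantReg(response, X).fit(q=0.05)            # 5th percentile of the response

threshold = 70.0                                      # hypothetical impairment threshold
b0, b1 = fit.params
bmd = (threshold - b0) / b1   # dose where the 5th-percentile line hits the threshold
print(f"BMD (5% of the population at or below {threshold}): {bmd:.2f}")
```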

4.
Robert M. Park. Risk Analysis, 2020, 40(12): 2561–2571
Uncertainty in model predictions of exposure response at low exposures is a problem for risk assessment. A particular interest is the internal concentration of an agent in biological systems as a function of external exposure concentrations. Physiologically based pharmacokinetic (PBPK) models permit estimation of internal exposure concentrations in target tissues but most assume that model parameters are either fixed or instantaneously dose-dependent. Taking into account response times for biological regulatory mechanisms introduces new dynamic behaviors that have implications for low-dose exposure response in chronic exposure. A simple one-compartment simulation model is described in which internal concentrations summed over time exhibit significant nonlinearity and nonmonotonicity in relation to external concentrations due to delayed up- or downregulation of a metabolic pathway. These behaviors could be the mechanistic basis for homeostasis and for some apparent hormetic effects.
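A minimal one-compartment sketch of the kind of delayed-regulation dynamics described: metabolic clearance is induced toward a target set by the current internal concentration, but only with a first-order lag, so the cumulative internal dose scales nonlinearly with the external concentration. The regulation rule and every parameter below are hypothetical, not those of the published model.

```python
# Minimal one-compartment sketch with delayed up-regulation of metabolic clearance.
# The regulation rule and all parameter values are hypothetical illustrations of the
# idea, not the published model.
import numpy as np

def summed_internal(c_ext, days=200, dt=0.1):
    k_in, k_base, k_max, half_c, tau = 1.0, 0.05, 0.5, 2.0, 20.0  # hypothetical
    c, k_met, total = 0.0, k_base, 0.0
    for _ in range(int(days / dt)):
        target = k_base + (k_max - k_base) * c / (half_c + c)  # induction target
        k_met += (target - k_met) * dt / tau                   # delayed response
        c += (k_in * c_ext - k_met * c) * dt
        total += c * dt                                        # cumulative internal dose
    return total

for c_ext in [0.1, 0.3, 1.0, 3.0, 10.0]:
    print(c_ext, round(summed_internal(c_ext), 1))
```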

5.
Risk Analysis, 2018, 38(3): 442–453
Infections among health-care personnel (HCP) occur as a result of providing care to patients with infectious diseases, but surveillance is limited to a few diseases. The objective of this study is to determine the annual number of influenza infections acquired by HCP as a result of occupational exposures to influenza patients in hospitals and emergency departments (EDs) in the United States. A risk analysis approach was taken. A compartmental model was used to estimate the influenza dose received in a single exposure, and a dose–response function applied to calculate the probability of infection. A three-step algorithm tabulated the total number of influenza infections based on: the total number of occupational exposures (tabulated in previous work), the total number of HCP with occupational exposures, and the probability of infection in an occupational exposure. Estimated influenza infections were highly dependent upon the dose–response function. Given current compliance with infection control precautions, we estimated 151,300 and 34,150 influenza infections annually with two dose–response functions (annual incidence proportions of 9.3% and 2.1%, respectively). Greater reductions in infections were achieved by full compliance with vaccination and infection control precautions than with patient isolation. The burden of occupationally acquired influenza among HCP in hospitals and EDs in the United States is not trivial, and can be reduced through improved compliance with vaccination and preventive measures, including engineering and administrative controls.
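The three-step tabulation can be sketched as below. The exponential dose-response form and every number here are placeholders, not the study's inputs; they are chosen only to show how a per-exposure infection probability scales up to an annual burden.

```python
# Schematic of the three-step tabulation: per-exposure infection probability from a
# dose-response function, scaled up to annual infections among exposed HCP.
# The exponential dose-response form and all numbers are hypothetical placeholders.
import numpy as np

def p_infection(dose, r=0.005):          # exponential dose-response (placeholder r)
    return 1.0 - np.exp(-r * dose)

dose_per_exposure = 1.0                  # infectious dose reaching the receptor (hypothetical)
exposures_per_hcp = 20                   # occupational exposures per HCP per season (hypothetical)
n_exposed_hcp = 1_600_000                # HCP with at least one exposure (hypothetical)

p_season = 1.0 - (1.0 - p_infection(dose_per_exposure)) ** exposures_per_hcp
print(f"annual occupational infections ~ {n_exposed_hcp * p_season:,.0f} "
      f"(incidence proportion {100 * p_season:.1f}%)")
```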

6.
Quantitative Risk Assessment for Developmental Neurotoxic Effects (cited by 4: 0 self-citations, 4 external)
Developmental neurotoxicity concerns the adverse health effects of exogenous agents acting on neurodevelopment. Because human brain development is a delicate process involving many cellular events, the developing fetus is rather susceptible to compounds that can alter the structure and function of the brain. Today, there is clear evidence that early exposure to many neurotoxicants can severely damage the developing nervous system. Although in recent years there has been much attention given to model development and risk assessment procedures for developmental toxicants, the area of developmental neurotoxicity has been largely ignored. Here, we consider the problem of risk estimation for developmental neurotoxicants from animal bioassay data. Since most responses from developmental neurotoxicity experiments are nonquantal in nature, an adverse health effect will be defined as a response that occurs with very small probability in unexposed animals. Using a two-stage hierarchical normal dose-response model, upper confidence limits on the excess risk due to a given level of added exposure are derived. Equivalently, the model is used to obtain lower confidence limits on dose for a small negligible level of risk. Our method is based on the asymptotic distribution of the likelihood ratio statistic (cf. Crump, 1995). An example is used to provide further illustration.
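The definition of excess risk for a continuous (nonquantal) endpoint can be written down directly: pick a cutoff that only a small fraction p0 of unexposed animals fall beyond, then compute how much that tail probability grows with dose. The sketch uses a simple normal response with a linear mean, not the paper's two-stage hierarchical model, and all numbers are hypothetical.

```python
# Sketch of excess risk for a continuous endpoint: an "adverse" response is one beyond a
# cutoff that only a small fraction p0 of unexposed animals exceed. A simple normal model
# with a linear mean stands in for the paper's two-stage hierarchical model; numbers are
# hypothetical.
from scipy.stats import norm

p0, sigma = 0.01, 10.0                          # background abnormal fraction, response SD
mean0 = 100.0                                   # control mean
cutoff = norm.ppf(p0, loc=mean0, scale=sigma)   # low responses are treated as adverse here

def excess_risk(dose, slope=-2.0):              # hypothetical decrease in mean per unit dose
    p_dose = norm.cdf(cutoff, loc=mean0 + slope * dose, scale=sigma)
    return p_dose - p0

for d in [0.5, 1, 2, 5]:
    print(f"dose {d}: excess risk = {excess_risk(d):.4f}")
```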

7.
Typical exposures to lead often involve a mix of long-term exposures to relatively constant exposure levels (e.g., residential yard soil and indoor dust) and highly intermittent exposures at other locations (e.g., seasonal recreational visits to a park). These types of exposures can be expected to result in blood lead concentrations that vary on a temporal scale with the intermittent exposure pattern. Prediction of short-term (or seasonal) blood lead concentrations arising from highly variable intermittent exposures requires a model that can reliably simulate lead exposures and biokinetics on a temporal scale that matches that of the exposure events of interest. If exposure model averaging times (EMATs) of the model exceed the shortest exposure duration that characterizes the intermittent exposure, uncertainties will be introduced into risk estimates because the exposure concentration used as input to the model must be time averaged to account for the intermittent nature of the exposure. We have used simulation as a means of determining the potential magnitude of these uncertainties. Simulations using models having various EMATs have allowed exploration of the strengths and weaknesses of various approaches to time averaging of exposures and their impact on risk estimates associated with intermittent exposures to lead in soil. The International Commission on Radiological Protection (ICRP) model of lead pharmacokinetics in humans simulates lead intakes that can vary in intensity over time spans as small as one day, allowing for the simulation of intermittent exposures to lead as a series of discrete daily exposure events. The ICRP model was used to compare the outcomes (blood lead concentration) of various time-averaging adjustments for approximating the time-averaged intake of lead associated with various intermittent exposure patterns. Results of these analyses suggest that standard approaches to time averaging (e.g., U.S. EPA) that estimate the long-term daily exposure concentration can, in some cases, result in substantial underprediction of short-term variations in blood lead concentrations when used in models that operate with EMATs exceeding the shortest exposure duration that characterizes the intermittent exposure. Alternative time-averaging approaches recommended for use in lead risk assessment more reliably predict short-term periodic (e.g., seasonal) elevations in blood lead concentration that might result from intermittent exposures. In general, risk estimates will be improved by simulation on shorter time scales that more closely approximate the actual temporal dynamics of the exposure.
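The effect of the exposure-model averaging time can be illustrated with a toy first-order biokinetic compartment (deliberately not the ICRP model): a seasonal intermittent intake simulated day by day produces a transient peak that disappears when the same total intake is spread as a long-term average. All parameters are hypothetical.

```python
# Toy illustration (not the ICRP model): daily simulation of an intermittent lead intake
# versus the same total intake applied as a long-term average, run through a single
# first-order compartment. All parameters are hypothetical.
import numpy as np

half_life_days = 30.0                        # hypothetical effective blood half-life
k = np.log(2) / half_life_days
days = np.arange(365)

intake_daily = np.where((days >= 150) & (days < 210), 20.0, 1.0)  # ug/day, 60-day season
intake_avg = np.full_like(intake_daily, intake_daily.mean())      # EMAT = 1 year

def blood_series(intake):
    c, out = 0.0, []
    for x in intake:
        c += x - k * c        # crude daily mass balance (arbitrary concentration units)
        out.append(c)
    return np.array(out)

peak_daily = blood_series(intake_daily).max()
peak_avg = blood_series(intake_avg).max()
print(f"peak with daily EMAT: {peak_daily:.0f}; peak with annual averaging: {peak_avg:.0f}")
```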

8.
Dose-response models are essential to quantitative microbial risk assessment (QMRA), providing a link between levels of human exposure to pathogens and the probability of negative health outcomes. In drinking water studies, the class of semi-mechanistic models known as single-hit models, such as the exponential and the exact beta-Poisson, has seen widespread use. In this work, an attempt is made to carefully develop the general mathematical single-hit framework while explicitly accounting for variation in (1) host susceptibility and (2) pathogen infectivity. This allows a precise interpretation of the so-called single-hit probability and precise identification of a set of statistical independence assumptions that are sufficient to arrive at single-hit models. Further analysis of the model framework is facilitated by formulating the single-hit models compactly using probability generating and moment generating functions. Among the more practically relevant conclusions drawn are: (1) for any dose distribution, variation in host susceptibility always reduces the single-hit risk compared to a constant host susceptibility (assuming equal mean susceptibilities), (2) the model-consistent representation of complete host immunity is formally demonstrated to be a simple scaling of the response, (3) the model-consistent expression for the total risk from repeated exposures deviates (gives lower risk) from the conventional expression used in applications, and (4) a model-consistent expression for the mean per-exposure dose that produces the correct total risk from repeated exposures is developed.
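The two single-hit forms named here can be written compactly. The exact beta-Poisson uses the confluent hypergeometric (Kummer) function, and the familiar 1 − (1 + d/β)^(−α) expression is its approximation; the parameter values below are arbitrary illustrations, not fitted estimates from the paper.

```python
# Standard single-hit dose-response forms for a Poisson-distributed dose with mean d:
#   exponential:         P(d) = 1 - exp(-r d)            (constant per-organism probability r)
#   exact beta-Poisson:  P(d) = 1 - 1F1(alpha, alpha + beta, -d)
#   approx beta-Poisson: P(d) = 1 - (1 + d / beta)^(-alpha)
# Parameter values here are arbitrary illustrations.
import numpy as np
from scipy.special import hyp1f1

def p_exponential(d, r=0.05):
    return 1.0 - np.exp(-r * d)

def p_beta_poisson_exact(d, alpha=0.25, beta=16.0):
    return 1.0 - hyp1f1(alpha, alpha + beta, -d)

def p_beta_poisson_approx(d, alpha=0.25, beta=16.0):
    return 1.0 - (1.0 + d / beta) ** (-alpha)

for d in [1, 10, 100]:
    print(d, p_exponential(d), p_beta_poisson_exact(d), p_beta_poisson_approx(d))
```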

9.
The leaching of organotin (OT) heat stabilizers from polyvinyl chloride (PVC) pipes used in residential drinking water systems may affect the quality of drinking water. These OTs, principally mono- and di-substituted species of butyltins and methyltins, are a potential health concern because they belong to a broad class of compounds that may be immune, nervous, and reproductive system toxicants. In this article, we develop probability distributions of U.S. population exposures to mixtures of OTs encountered in drinking water transported by PVC pipes. We employed a family of mathematical models to estimate OT leaching rates from PVC pipe as a function of both surface area and time. We then integrated the distribution of estimated leaching rates into an exposure model that estimated the probability distribution of OT concentrations in tap waters and the resulting potential human OT exposures via tap water consumption. Our study results suggest that human OT exposures through tap water consumption are likely to be considerably lower than the World Health Organization (WHO) "safe" long-term concentration in drinking water (150 μg/L) for dibutyltin (DBT), the most toxic of the OTs considered in this article. The 90th percentile average daily dose (ADD) estimate of 0.034 ± 2.92 × 10⁻⁴ μg/kg-day is approximately 120 times lower than the WHO-based ADD for DBT (4.2 μg/kg-day).

10.
A Latin Hypercube probabilistic risk assessment methodology was employed in the assessment of health risks associated with exposures to contaminated sediment and biota in an estuary in the Tidewater region of Virginia. The primary contaminants were polychlorinated biphenyls (PCBs), polychlorinated terphenyls (PCTs), polynuclear aromatic hydrocarbons (PAHs), and metals released into the estuary from a storm sewer system. The exposure pathways associated with the highest contaminant intake and risks were dermal contact with contaminated sediment and ingestion of contaminated aquatic and terrestrial biota from the contaminated area. As expected, all of the output probability distributions of risk were highly skewed, and the ratios of the expected value (mean) to median risk estimates ranged from 1.4 to 14.8 for the various exposed populations. The 99th percentile risk estimates were as much as two orders of magnitude above the mean risk estimates. For the sediment exposure pathways, the stability of the median risk estimates was found to be much greater than the stability of the expected value risk estimates. The interrun variability in the median risk estimate was found to be ±1.9% at 3000 iterations. The interrun stability of the mean risk estimates was found to be approximately equal to that of the 95th percentile estimates at any number of iterations. Neither the variation in contaminant concentrations nor that in any other single input variable contributed disproportionately to the overall simulation variance. The inclusion or exclusion of spatial correlations among contaminant concentrations in the simulation model did not significantly affect either the magnitude or the variance of the simulation risk estimates for sediment exposures.
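A small Latin Hypercube sketch of why the mean and upper percentiles of a skewed risk distribution behave so differently from the median: a multiplicative exposure model with lognormal inputs produces heavy right skew. The input distributions and the intake-times-potency risk form are hypothetical stand-ins for the site-specific model.

```python
# Latin Hypercube sketch: a multiplicative exposure model with lognormal inputs yields a
# highly skewed risk distribution, so the mean sits well above the median and the 99th
# percentile far above both. All inputs are hypothetical stand-ins for the site model.
import numpy as np
from scipy.stats import qmc, lognorm

n = 3000
sampler = qmc.LatinHypercube(d=3, seed=7)
u = sampler.random(n)                                # uniform [0, 1) LHS design

conc    = lognorm(s=1.0, scale=10.0).ppf(u[:, 0])    # sediment concentration (mg/kg)
contact = lognorm(s=0.8, scale=0.05).ppf(u[:, 1])    # contact rate (kg/day)
slope   = lognorm(s=0.5, scale=1e-4).ppf(u[:, 2])    # potency ((mg/kg-day)^-1)

risk = conc * contact * slope / 70.0                 # 70 kg body weight
print("median:", np.median(risk))
print("mean:  ", risk.mean())
print("99th:  ", np.percentile(risk, 99))
```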

11.
There has been considerable scientific effort to understand the potential link between exposures to power-frequency electric and magnetic fields (EMF) and the occurrence of cancer and other diseases. The combination of widespread exposures, established biological effects from acute, high-level exposures, and the possibility of leukemia in children from low-level, chronic exposures has made it both necessary and difficult to develop consistent public health policies. In this article we review the basis of both numeric standards and precautionary-based approaches. While we believe that policies regarding EMF should indeed be precautionary, this does not require or imply adoption of numeric exposure standards. We argue that cutpoints from epidemiologic studies, which are arbitrarily chosen, should not be used as the basis for making exposure limits due to a number of uncertainties. Establishment of arbitrary numeric exposure limits undermines the value of both the science-based numeric EMF exposure standards for acute exposures and precautionary approaches. The World Health Organization's draft Precautionary Framework provides guidance for establishing appropriate public health policies for power-frequency EMF.

12.
Essential elements such as copper and manganese may demonstrate U-shaped exposure-response relationships due to toxic responses occurring as a result of both excess and deficiency. Previous work on a copper toxicity database employed CatReg, a software program for categorical regression developed by the U.S. Environmental Protection Agency, to model copper excess and deficiency exposure-response relationships separately. This analysis involved the use of a severity scoring system to place diverse toxic responses on a common severity scale, thereby allowing their inclusion in the same CatReg model. In this article, we present methods for simultaneously fitting excess and deficiency data in the form of a single U-shaped exposure-response curve, the minimum of which occurs at the exposure level that minimizes the probability of an adverse outcome due to either excess or deficiency (or both). We also present a closed-form expression for the point at which the exposure-response curves for excess and deficiency cross, corresponding to the exposure level at which the risk of an adverse outcome due to excess is equal to that for deficiency. The application of these methods is illustrated using the same copper toxicity database noted above. The use of these methods permits the analysis of all available exposure-response data from multiple studies expressing multiple endpoints due to both excess and deficiency. The exposure level corresponding to the minimum of this U-shaped curve, and the confidence limits around this exposure level, may be useful in establishing an acceptable range of exposures that minimize the overall risk associated with the agent of interest.
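One way to see the structure described: posit an increasing exposure-response curve for excess and a decreasing one for deficiency, combine them into a single U-shaped risk curve, and locate both its minimum and the crossing point where the two component risks are equal. The logistic forms and coefficients below are hypothetical, not the CatReg fits from the paper.

```python
# Sketch of a U-shaped exposure-response built from separate deficiency and excess curves.
# Logistic forms and coefficients are hypothetical, not the CatReg fits from the paper.
import numpy as np

def p_deficiency(log_dose, a=1.0, b=-2.5):    # decreasing with dose
    return 1.0 / (1.0 + np.exp(-(a + b * log_dose)))

def p_excess(log_dose, a=-6.0, b=2.0):        # increasing with dose
    return 1.0 / (1.0 + np.exp(-(a + b * log_dose)))

x = np.linspace(-2, 4, 2001)                  # log10 dose grid
total = p_deficiency(x) + p_excess(x) - p_deficiency(x) * p_excess(x)  # either outcome

x_min = x[np.argmin(total)]                                    # bottom of the U
x_cross = x[np.argmin(np.abs(p_deficiency(x) - p_excess(x)))]  # excess risk == deficiency risk
print(f"minimum-risk log dose: {x_min:.2f}; crossing point: {x_cross:.2f}")
```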

13.
The mesothelioma epidemic in the United States, which peaked during the 2000–2004 period, can be traced to high-level asbestos exposures experienced by males in occupational settings prior to the full recognition of the disease-causing potential of asbestos and the establishment of enforceable asbestos exposure limits by the Occupational Safety and Health Administration (OSHA) in 1971. Many individuals diagnosed with mesothelioma where asbestos has been identified as a contributing cause of the disease have filed claims seeking compensation from asbestos settlement trusts or through the court system. An individual with mesothelioma typically has been exposed to asbestos in more than one setting and from more than one asbestos product. Apportioning risk for mesothelioma among contributing factors is an ongoing problem faced by occupational disease compensation boards, juries, parties responsible for paying damages, and currently by the U.S. Senate in its efforts to formulate a bill establishing an asbestos settlement trust. In this article we address the following question: If an individual with mesothelioma where asbestos has been identified as a contributing cause were to be compensated for his or her disease, how should that compensation be apportioned among those responsible for the asbestos exposures? For the purposes of apportionment, we assume that asbestos is the only cause of mesothelioma and that every asbestos exposure contributes, albeit differentially, to the risk. We use an extension of the mesothelioma risk model initially proposed in the early 1980s to quantify the contribution to risk of each exposure as a percentage of the total risk. The percentage for each specific discrete asbestos exposure depends on the start and end dates, the intensity, and the asbestos fiber type for the exposure. We provide justification for the use of the mesothelioma risk model for apportioning risk and discuss how to assess uncertainty associated with its application.
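The apportionment arithmetic can be illustrated with a Peto-type mesothelioma risk model, in which the contribution of an exposure window scales with its intensity, a fiber-type potency factor, and roughly the third power of time since exposure. The exponent, the potency factors, and the exposure history below are hypothetical, and the published extension of the model differs in detail.

```python
# Peto-type sketch of apportioning mesothelioma risk among exposure windows: each window's
# contribution scales with intensity, a fiber-type potency factor, and ~(time since
# exposure)^k. The exponent, potencies, and the exposure history are hypothetical.
exposures = [
    # (start age, end age, intensity f/cc, relative fiber potency)
    (20, 25, 2.0, 1.0),    # hypothetical amphibole-containing product
    (25, 40, 0.5, 0.1),    # hypothetical chrysotile product
]
age_at_diagnosis, k = 70, 3.0

def window_risk(start, end, intensity, potency, t=age_at_diagnosis, k=k):
    return potency * intensity * ((t - start) ** k - (t - end) ** k)

risks = [window_risk(*e) for e in exposures]
total = sum(risks)
for e, r in zip(exposures, risks):
    print(f"ages {e[0]}-{e[1]}: {100 * r / total:.1f}% of total risk")
```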

14.
As part of its preparation to review a potential license application from the U.S. Department of Energy (DOE), the U.S. Nuclear Regulatory Commission (NRC) is examining the performance of the proposed Yucca Mountain nuclear waste repository. In this regard, we evaluated postclosure repository performance using Monte Carlo analyses with an NRC-developed system model that has 950 input parameters, of which 330 are sampled to represent system uncertainties. The quantitative compliance criterion for dose was established by NRC to protect inhabitants who might be exposed to any releases from the repository. The NRC criterion limits the peak-of-the-mean dose, which in our analysis is estimated by averaging the potential exposure at any instant in time for all Monte Carlo realizations, and then determining the maximum value of the mean curve within 10,000 years, the compliance period. This procedure contrasts in important ways with a more common measure of risk based on the mean of the ensemble of peaks from each Monte Carlo realization. The NRC chose the former (peak-of-the-mean) because it more correctly represents the risk to an exposed individual. Procedures for calculating risk in the expected case of slow repository degradation differ from those for low-probability cases of disruption by external forces such as volcanism. We also explored the possibility of risk dilution (i.e., lower calculated risk) that could result from arbitrarily defining wide probability distributions for certain parameters. Finally, our sensitivity analyses to identify influential parameters used two approaches: (1) the ensemble of doses from each Monte Carlo realization at the time of the peak risk (i.e., peak-of-the-mean) and (2) the ensemble of peak doses calculated from each realization within 10,000 years. The latter measure appears to have more discriminatory power than the former for many parameters (based on the greater magnitude of the sensitivity coefficient), but can yield different rankings, especially for parameters that influence the timing of releases.
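The two dose summaries contrasted here differ only in the order of the averaging and maximizing operations. Given a matrix of Monte Carlo dose histories, each is a one-liner, as in this sketch with synthetic trajectories (the dose histories below are invented, not repository model output).

```python
# Peak-of-the-mean vs. mean-of-the-peaks from a matrix of Monte Carlo dose histories
# (rows = realizations, columns = times). The trajectories here are synthetic.
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0, 10_000, 500)                            # years
peak_time = rng.uniform(2_000, 9_000, size=(1_000, 1))
scale = rng.lognormal(mean=0.0, sigma=1.0, size=(1_000, 1))
doses = scale * np.exp(-((t - peak_time) / 1_500) ** 2)    # synthetic dose histories

peak_of_mean = doses.mean(axis=0).max()    # compliance measure described in the abstract
mean_of_peaks = doses.max(axis=1).mean()   # ensemble-of-peaks measure
print(f"peak of the mean: {peak_of_mean:.3f}; mean of the peaks: {mean_of_peaks:.3f}")
```

Because each realization peaks at a different time, the mean of the peaks is always at least as large as the peak of the mean, which is why the two measures can rank parameters differently in sensitivity analysis.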

15.
Modeling for Risk Assessment of Neurotoxic Effects (cited by 2: 0 self-citations, 2 external)
The regulation of noncancer toxicants, including neurotoxicants, has usually been based upon a reference dose (allowable daily intake). A reference dose is obtained by dividing a no-observed-effect level by uncertainty (safety) factors to account for intraspecies and interspecies sensitivities to a chemical. It is assumed that the risk at the reference dose is negligible, but no attempt generally is made to estimate the risk at the reference dose. A procedure is outlined that provides estimates of risk as a function of dose. The first step is to establish a mathematical relationship between a biological effect and the dose of a chemical. Knowledge of biological mechanisms and/or pharmacokinetics can assist in the choice of plausible mathematical models. The mathematical model provides estimates of average responses as a function of dose. Secondly, estimates of risk require selection of a distribution of individual responses about the average response given by the mathematical model. In the case of a normal or lognormal distribution, only an estimate of the standard deviation is needed. The third step is to define an adverse level for a response so that the probability (risk) of exceeding that level can be estimated as a function of dose. Because a firm response level often cannot be established at which adverse biological effects occur, it may be necessary to at least establish an abnormal response level that only a small proportion of individuals would exceed in an unexposed group. That is, if a normal range of responses can be established, then the probability (risk) of abnormal responses can be estimated. In order to illustrate this process, measures of the neurotransmitter serotonin and its metabolite 5-hydroxyindoleacetic acid in specific areas of the brain of rats and monkeys are analyzed after exposure to the neurotoxicant methylenedioxymethamphetamine. These risk estimates are compared with risk estimates from the quantal approach in which animals are classified as either abnormal or not depending upon abnormal serotonin levels.
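The three steps translate directly into a tail-probability calculation. The sketch below uses a simple hyperbolic decline in the mean response with dose and a lognormal spread of individuals about that mean; the response model and every number are hypothetical, not the fitted serotonin data.

```python
# The three-step procedure as a tail-probability calculation: (1) a mean response model,
# (2) a lognormal spread of individuals about that mean, (3) an "abnormal" cutoff set so
# that ~1% of unexposed animals fall below it. The model form and all numbers are
# hypothetical, not the fitted serotonin data.
import numpy as np
from scipy.stats import norm

gsd = 1.3                                         # geometric SD of individual responses
sigma = np.log(gsd)
control_gm = 100.0                                # control geometric mean (arbitrary units)
cutoff = np.exp(np.log(control_gm) + norm.ppf(0.01) * sigma)   # 1% abnormal in controls

def mean_response(dose, ed50=5.0, floor=0.3):     # hypothetical decline in mean with dose
    return control_gm * (floor + (1 - floor) / (1 + dose / ed50))

def risk(dose):                                   # P(individual response below the cutoff)
    return norm.cdf((np.log(cutoff) - np.log(mean_response(dose))) / sigma)

for d in [0, 0.5, 1, 2, 5]:
    print(f"dose {d}: risk of abnormal response = {risk(d):.3f}")
```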

16.
This paper presents a method of estimating long-term exposures to point source emissions. The method consists of a Monte Carlo exposure model (PSEM or Point Source Exposure Model) that combines data on population mobility and mortality with information on daily activity patterns. The approach behind the model can be applied to a wide variety of exposure scenarios. In this paper, PSEM is used to characterize the range and distribution of lifetime equivalent doses received by inhalation of air contaminated by the emissions of a point source. The output of the model provides quantitative information on the dose, age, and gender of highly exposed individuals. The model is then used in an example risk assessment. Finally, future uses of the model's approach are discussed.

17.
Smith, Jeffrey S.; Mendeloff, John M. Risk Analysis, 1999, 19(6): 1223–1234
For carcinogens, this paper provides a quantitative examination of the roles of potency and weight-of-evidence (WOE) in setting permissible exposure limits (PELs) at the U.S. Occupational Safety and Health Administration (OSHA) and threshold limit values (TLVs) at the private American Conference of Governmental Industrial Hygienists (ACGIH). On normative grounds, both of these factors should influence choices about the acceptable level of exposures. Our major objective is to examine whether and in what ways these factors have been considered by these organizations. A lesser objective is to identify outliers, which might be candidates for further regulatory scrutiny. Our sample (N=48) includes chemicals for which EPA has estimated a unit risk as a measure of carcinogenic potency and for which OSHA or the ACGIH has a PEL or TLV. Different assessments of the strength of the evidence of carcinogenicity were obtained from EPA, ACGIH, and the International Agency for Research on Cancer. We found that potency alone explains 49% of the variation in PELs and 62% of the variation in TLVs. For the ACGIH, WOE plays a much smaller role than potency. TLVs set by the ACGIH since 1989 appear to be stricter than earlier TLVs. We suggest that this change represents evidence that the ACGIH had responded to criticisms leveled at it in the late 1980s for failing to adopt sufficiently protective standards. The models developed here identify 2-nitropropane, ethylene dibromide, and chromium as having OSHA PELs significantly higher than predicted on the basis of potency and WOE.

18.
For the vast majority of chemicals that have cancer potency estimates on IRIS, the underlying database is deficient with respect to early-life exposures. This data gap has prevented derivation of cancer potency factors that are relevant to this time period, and so assessments may not fully address children's risks. This article provides a review of juvenile animal bioassay data in comparison to adult animal data for a broad array of carcinogens. This comparison indicates that short-term exposures in early life are likely to yield a greater tumor response than short-term exposures in adults, but similar tumor response when compared to long-term exposures in adults. This evidence is brought into a risk assessment context by proposing an approach that: (1) does not prorate children's exposures over the entire life span or mix them with exposures that occur at other ages; (2) applies the cancer slope factor from adult animal or human epidemiology studies to the children's exposure dose to calculate the cancer risk associated with the early-life period; and (3) adds the cancer risk for young children to that for older children/adults to yield a total lifetime cancer risk. The proposed approach allows for the unique exposure and pharmacokinetic factors associated with young children to be fully weighted in the cancer risk assessment. It is very similar to the approach currently used by U.S. EPA for vinyl chloride. The current analysis finds that the database of early life and adult cancer bioassays supports extension of this approach from vinyl chloride to other carcinogens of diverse mode of action. This approach should be enhanced by early-life data specific to the particular carcinogen under analysis whenever possible.
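The proposed bookkeeping amounts to computing a separate risk for the early-life window at its own (unprorated) dose and adding it to the later-life risk. The sketch below applies the conventional duration/lifetime scaling for a less-than-lifetime exposure window; whether and how the original article adjusts for duration should be checked against the source, and the slope factor and doses are hypothetical placeholders.

```python
# Sketch of the proposed approach: compute the early-life risk at the child's own dose
# over the child's exposure window (not prorated over a lifetime), then add the risk for
# the older-child/adult period. Slope factor, doses, durations, and the duration/lifetime
# scaling convention are hypothetical placeholders.
csf = 0.05                 # cancer slope factor, (mg/kg-day)^-1 (hypothetical)

child_dose = 0.004         # mg/kg-day averaged over ages 0-2 only (hypothetical)
child_years = 2
adult_dose = 0.001         # mg/kg-day for the remaining years (hypothetical)
adult_years = 68
lifetime = 70

risk_child = csf * child_dose * child_years / lifetime
risk_adult = csf * adult_dose * adult_years / lifetime
print(f"early-life risk {risk_child:.2e} + later-life risk {risk_adult:.2e} "
      f"= lifetime risk {risk_child + risk_adult:.2e}")
```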

19.
This study utilizes old and new Norovirus (NoV) human challenge data to model the dose-response relationship for human NoV infection. The combined data set is used to update estimates from a previously published beta-Poisson dose-response model that includes parameters for virus aggregation and for a beta-distribution that describes variable susceptibility among hosts. The quality of the beta-Poisson model is examined and a simpler model is proposed. The new model (fractional Poisson) characterizes hosts as either perfectly susceptible or perfectly immune, requiring a single parameter (the fraction of perfectly susceptible hosts) in place of the two-parameter beta-distribution. A second parameter is included to account for virus aggregation in the same fashion as it is added to the beta-Poisson model. Infection probability is simply the product of the probability of nonzero exposure (at least one virus or aggregate is ingested) and the fraction of susceptible hosts. The model is computationally simple and appears to be well suited to the data from the NoV human challenge studies. The model's deviance is similar to that of the beta-Poisson, but with one parameter, rather than two. As a result, the Akaike information criterion favors the fractional Poisson over the beta-Poisson model. At low, environmentally relevant exposure levels (<100), estimation error is small for the fractional Poisson model; however, caution is advised because no subjects were challenged at such a low dose. New low-dose data would be of great value to further clarify the NoV dose-response relationship and to support improved risk assessment for environmentally relevant exposures.
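The fractional Poisson form stated in the abstract is literally a two-factor product, which makes it easy to place alongside the approximate beta-Poisson for comparison. The parameter values below are illustrative only, not the fitted NoV estimates.

```python
# Fractional Poisson model as stated in the abstract: infection probability is the
# fraction of fully susceptible hosts times the probability that at least one virus (or
# aggregate) is ingested. Parameter values are illustrative, not the fitted NoV estimates.
import numpy as np

def p_fractional_poisson(dose, p_susceptible=0.7, mu_aggregate=1.0):
    # mean number of aggregates ingested = dose / mean aggregate size
    return p_susceptible * (1.0 - np.exp(-dose / mu_aggregate))

def p_beta_poisson_approx(dose, alpha=0.1, beta=1.0):
    return 1.0 - (1.0 + dose / beta) ** (-alpha)

for d in [1, 10, 100, 1000]:
    print(d, round(p_fractional_poisson(d), 3), round(p_beta_poisson_approx(d), 3))
```

Note how the fractional Poisson saturates at the susceptible fraction rather than at 1, which is what allows it to fit challenge data with a single susceptibility parameter.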

20.
Epidemiology textbooks often interpret population attributable fractions based on 2 × 2 tables or logistic regression models of exposure-response associations as preventable fractions, i.e., as fractions of illnesses in a population that would be prevented if exposure were removed. In general, this causal interpretation is not correct, since statistical association need not indicate causation; moreover, it does not identify how much risk would be prevented by removing specific constituents of complex exposures. This article introduces and illustrates an approach to calculating useful bounds on preventable fractions, having valid causal interpretations, from the types of partial but useful molecular epidemiological and biological information often available in practice. The method applies probabilistic risk assessment concepts from systems reliability analysis, together with bounding constraints for the relationship between event probabilities and causation (such as that the probability that exposure X causes response Y cannot exceed the probability that exposure X precedes response Y, or the probability that both X and Y occur), to bound the contribution to causation from specific causal pathways. We illustrate the approach by estimating an upper bound on the contribution to lung cancer risk made by a specific, much-discussed causal pathway that links smoking to polycyclic aromatic hydrocarbon (PAH)-DNA adducts (specifically, benzo(a)pyrene diol epoxide-DNA adducts) at hot spot codons of p53 in lung cells. The result is a surprisingly small preventable fraction (of perhaps 7% or less) for this pathway, suggesting that it will be important to consider other mechanisms and non-PAH constituents of tobacco smoke in designing less risky tobacco-based products.
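The bounding logic can be made explicit in a few lines: a pathway can have caused a case only if every necessary event on the pathway actually occurred, so the joint fraction of cases exhibiting all of those events bounds the pathway-specific preventable fraction from above. In the sketch, each fraction is read as conditional on the preceding events so that their product equals the joint fraction; the fractions themselves are hypothetical placeholders, not the article's estimates.

```python
# Sketch of the bounding argument: P(pathway caused the case) <= P(all necessary pathway
# events occurred). Each fraction below is the share of cases exhibiting the event given
# the preceding events, so the running product is the joint fraction. All values are
# hypothetical placeholders, not the article's estimates.
necessary_event_fractions = {
    "case was a smoker (exposure preceded response)": 0.85,
    "PAH-DNA adducts detectable in lung tissue, given smoking": 0.40,
    "p53 mutation at a BPDE hot-spot codon, given adducts": 0.20,
}

upper_bound = 1.0
for event, fraction in necessary_event_fractions.items():
    upper_bound *= fraction
print(f"preventable fraction for this pathway <= {upper_bound:.1%}")
```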
