Similar Articles
20 similar articles found.
1.
Based on results reported from the NHANES II Survey (the National Health and Nutrition Examination Survey II) for people living in the United States during 1976–1980, we use exploratory data analysis, probability plots, and the method of maximum likelihood to fit lognormal distributions to percentiles of body weight for males and females as a function of age from 6 months through 74 years. The results are immediately useful in probabilistic (and deterministic) risk assessments.
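For illustration only, a minimal Python sketch of the probability-plot/MLE idea described above, fit to made-up body-weight percentiles rather than the NHANES II data (all values below are placeholders): a lognormal fit follows from the fact that ln(weight) is linear in the standard normal quantile of each reported percentile.

    import numpy as np
    from scipy import stats

    # Illustrative (not NHANES II) body-weight percentiles in kg for one age/sex group.
    percentiles = np.array([0.05, 0.10, 0.25, 0.50, 0.75, 0.90, 0.95])
    weights_kg = np.array([52.0, 55.5, 61.0, 68.0, 77.0, 87.0, 94.0])

    # Lognormal probability plot: ln(weight) is linear in the standard normal quantile.
    z = stats.norm.ppf(percentiles)
    sigma, mu = np.polyfit(z, np.log(weights_kg), 1)  # slope = sigma, intercept = mu

    fitted = stats.lognorm(s=sigma, scale=np.exp(mu))
    print(f"mu = {mu:.3f}, sigma = {sigma:.3f}, median = {fitted.median():.1f} kg")
    print("fitted 95th percentile:", round(fitted.ppf(0.95), 1), "kg")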

2.
Most public health risk assessments assume and combine a series of average, conservative, and worst-case values to derive a conservative point estimate of risk. This procedure has major limitations. This paper demonstrates a new methodology for extended uncertainty analyses in public health risk assessments using Monte Carlo techniques. The extended method begins, as do some conventional methods, with the preparation of a spreadsheet to estimate exposure and risk. This method, however, continues by modeling key inputs as random variables described by probability density functions (PDFs). Overall, the technique provides a quantitative way to estimate the probability distributions for exposure and health risks within the validity of the model used. As an example, this paper presents a simplified case study for children playing in soils contaminated with benzene and benzo(a)pyrene (BaP).
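A minimal sketch of the Monte Carlo approach the abstract describes, with assumed (not the paper's) input distributions, parameter values, and slope factor; it simply propagates the input PDFs through a soil-ingestion dose equation to obtain a distribution of risk.

    import numpy as np

    rng = np.random.default_rng(0)
    n = 100_000  # Monte Carlo iterations

    # Assumed input distributions (placeholders, not the paper's values).
    conc_mg_kg = rng.lognormal(mean=np.log(5.0), sigma=0.8, size=n)    # soil concentration, mg/kg
    ingest_mg_d = rng.triangular(20, 100, 200, size=n)                 # soil ingestion rate, mg/day
    body_wt_kg = rng.lognormal(mean=np.log(17.0), sigma=0.2, size=n)   # child body weight, kg
    ef_frac = rng.uniform(0.2, 0.6, size=n)                            # exposure frequency fraction

    # Average daily dose (mg/kg-day) and incremental risk with an assumed slope factor.
    slope_factor = 0.055  # (mg/kg-day)^-1, illustrative
    add_mg_kg_d = conc_mg_kg * 1e-6 * ingest_mg_d * ef_frac / body_wt_kg
    risk = add_mg_kg_d * slope_factor

    print("median risk:", np.percentile(risk, 50))
    print("95th percentile risk:", np.percentile(risk, 95))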

3.
Twenty-four-hour recall data from the Continuing Survey of Food Intake by Individuals (CSFII) are frequently used to estimate dietary exposure for risk assessment. Food frequency questionnaires are traditional instruments of epidemiological research; however, their application in dietary exposure and risk assessment has been limited. This article presents a probabilistic method of bridging the National Health and Nutrition Examination Survey (NHANES) food frequency and the CSFII data to estimate longitudinal (usual) intake, using a case study of seafood mercury exposures for two population subgroups (females 16 to 49 years and children 1 to 5 years). Two hundred forty-nine CSFII food codes were mapped into 28 NHANES fish/shellfish categories. FDA and state/local seafood mercury data were used. A uniform distribution with minimum and maximum blood-diet ratios of 0.66 to 1.07 was assumed. A probabilistic assessment was conducted to estimate distributions of individual 30-day average daily fish/shellfish intakes, methyl mercury exposure, and blood levels. The upper percentile estimates of fish and shellfish intakes based on the 30-day daily averages were lower than those based on two- and three-day daily averages. These results support previous findings that distributions of "usual" intakes based on a small number of consumption days provide overestimates in the upper percentiles. About 10% of the females (16 to 49 years) and children (1 to 5 years) may be exposed to mercury levels above the EPA's RfD. The predicted 75th and 90th percentile blood mercury levels for the females in the 16-to-49-year group were similar to those reported by NHANES. The predicted 90th percentile blood mercury level for children in the 1-to-5-year subgroup was similar to NHANES, and the 75th percentile estimates were slightly above the NHANES values.

4.
Monte Carlo simulations are commonplace in quantitative risk assessments (QRAs). Designed to propagate the variability and uncertainty associated with each individual exposure input parameter in a quantitative risk assessment, Monte Carlo methods statistically combine the individual parameter distributions to yield a single, overall distribution. Critical to such an assessment is the representativeness of each individual input distribution. The authors performed a literature review to collect and compare the distributions used in published QRAs for the parameters of body weight, food consumption, soil ingestion rates, breathing rates, and fluid intake. To provide a basis for comparison, all estimated exposure parameter distributions were evaluated with respect to four properties: consistency, accuracy, precision, and specificity. The results varied depending on the exposure parameter. Even where extensive, well-collected data exist, investigators used a variety of different distributional shapes to approximate these data. Where such data do not exist, investigators have collected their own data, often leading to substantial disparity in parameter estimates and subsequent choice of distribution. The present findings indicate that more attention must be paid to the data underlying these distributional choices. More emphasis should be placed on sensitivity analyses, quantifying the impact of assumptions, and on discussion of sources of variation as part of the presentation of any risk assessment results. If such practices and disclosures are followed, it is believed that Monte Carlo simulations can greatly enhance the accuracy and appropriateness of specific risk assessments. Without such disclosures, researchers will be increasing the size of the risk assessment "black box," a concern already raised by many critics of more traditional risk assessments.
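As a sketch of the sensitivity-analysis practice the review recommends, the snippet below ranks assumed input distributions by their Spearman rank correlation with the simulated output dose; the inputs, their distributions, and the dose equation are illustrative assumptions, not values from the reviewed QRAs.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)
    n = 50_000

    # Assumed exposure inputs (distribution choices are illustrative, not from the review).
    inputs = {
        "body_weight_kg": rng.lognormal(np.log(70), 0.2, n),
        "intake_L_per_day": rng.lognormal(np.log(1.4), 0.5, n),
        "conc_mg_per_L": rng.lognormal(np.log(0.01), 1.0, n),
    }
    dose = inputs["conc_mg_per_L"] * inputs["intake_L_per_day"] / inputs["body_weight_kg"]

    # Spearman rank correlation of each input with the output dose: a simple,
    # distribution-free way to report which assumptions drive the result.
    for name, values in inputs.items():
        rho, _ = stats.spearmanr(values, dose)
        print(f"{name:>18s}  rho = {rho:+.2f}")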

5.
Utility functions in the form of tables or matrices have often been used to combine discretely rated decision‐making criteria. Matrix elements are usually specified individually, so no one rule or principle can be easily stated for the utility function as a whole. A series of five matrices are presented that aggregate criteria two at a time using simple rules that express a varying degree of constraint of the lower rating over the higher. A further nine possible matrices were obtained by using a different rule on either side of the main axis of the matrix to describe situations where the criteria have a differential influence on the outcome. Uncertainties in the criteria are represented by three alternative frequency distributions from which the assessors select the most appropriate. The output of the utility function is a distribution of rating frequencies that is dependent on the distributions of the input criteria. In pest risk analysis (PRA), seven of these utility functions were required to mimic the logic by which assessors for the European and Mediterranean Plant Protection Organization arrive at an overall rating of pest risk. The framework enables the development of PRAs that are consistent and easy to understand, criticize, compare, and change. When tested in workshops, PRA practitioners thought that the approach accorded with both the logic and the level of resolution that they used in the risk assessments.
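A minimal sketch of how one such utility matrix can be combined with frequency distributions on the input ratings; the element-wise-minimum rule and the example frequencies below are illustrative choices, not the specific EPPO matrices.

    import numpy as np

    # Ratings 1 (low) to 5 (high). One illustrative aggregation rule: the lower
    # rating constrains the higher (element-wise minimum); the EPPO scheme uses
    # several such matrices, and this particular rule is only an example.
    ratings = np.arange(1, 6)
    rule = np.minimum.outer(ratings, ratings)  # rule[i, j] = min(rating_i, rating_j)

    # Frequency distributions expressing uncertainty in each input criterion.
    p_a = np.array([0.0, 0.1, 0.6, 0.3, 0.0])  # criterion A: mostly rated 3
    p_b = np.array([0.0, 0.0, 0.2, 0.6, 0.2])  # criterion B: mostly rated 4

    # Output rating distribution: accumulate joint probability onto the aggregated rating.
    p_out = np.zeros(5)
    for i in range(5):
        for j in range(5):
            p_out[rule[i, j] - 1] += p_a[i] * p_b[j]

    print(dict(zip(ratings.tolist(), np.round(p_out, 3))))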

6.
Using probability plots and Maximum Likelihood Estimation (MLE), we fit lognormal distributions to data compiled by Ershow et al. for daily intake of total water and tap water by three groups of women (controls, pregnant, and lactating; all between 15 and 49 years of age) in the United States. We also develop bivariate lognormal distributions for the joint distribution of water ingestion and body weight for these three groups. Overall, we recommend the marginal distributions for water intake as fit by MLE for use in human health risk assessments.
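A minimal sketch of sampling from a bivariate lognormal of water intake and body weight, with placeholder means, log-standard deviations, and correlation rather than the Ershow-derived parameters.

    import numpy as np

    rng = np.random.default_rng(2)

    # Placeholder parameters for ln(tap-water intake, mL/day) and ln(body weight, kg),
    # with a modest positive correlation; not the Ershow-derived values.
    mu = np.array([np.log(1200.0), np.log(65.0)])
    sigma = np.array([0.55, 0.20])
    rho = 0.3
    cov = np.array([[sigma[0] ** 2, rho * sigma[0] * sigma[1]],
                    [rho * sigma[0] * sigma[1], sigma[1] ** 2]])

    log_samples = rng.multivariate_normal(mu, cov, size=10_000)
    intake_mL, weight_kg = np.exp(log_samples).T

    # Per-body-weight intake, the quantity risk assessments usually need.
    intake_per_kg = intake_mL / weight_kg
    print("median intake (mL/kg-day):", round(float(np.median(intake_per_kg)), 1))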

7.
For the U.S. population, we fit bivariate distributions to estimated numbers of men and women aged 18-74 years in cells representing 1 in. intervals in height and 10 lb intervals in weight. For each sex separately, the marginal histogram of height is well fit by a normal distribution. For men and women, respectively, the marginal histogram of weight is well fit and satisfactorily fit by a lognormal distribution. For men, the bivariate histogram is satisfactorily fit by a normal distribution between the height and the natural logarithm of weight. For women, the bivariate histogram is satisfactorily fit by two superposed normal distributions between the height and the natural logarithm of weight. The resulting distributions are suitable for use in public health risk assessments.

8.
Nonlinear hazard models are used to examine temporal trends in the age-specific mortality risks of chronic obstructive lung diseases for the U.S. population. These hazard functions are fit to age-specific mortality rates for 1968 and 1977 for four race/sex groups. Changes in the parameters of these models are used to assess two types of differences in the age pattern of the rates between 1968 and 1977. The first measure of trend in the age-specific mortality rates is the temporal change in the proportionality constant in the function used to model their age variation. By allowing only this proportionality parameter to vary between 1968 and 1977, it is possible to determine an age-constant percentage increase or decrease. The second measure reflects the absolute displacement in terms of years of life of the fitted mortality curves for the two time points. This second index can be interpreted as the acceleration or deceleration of mortality risks over the life span, i.e., the number of years that is needed for mortality rates to achieve the same level as in the comparison group. The analysis showed that the age changes in chronic obstructive lung disease mortality rates differed by race/sex group and for both measures of change over the period. Adjustment of the fitted curves for the effects of individual variability in risk was significant for three of four groups.

9.
Quantitative risk assessments for physical, chemical, biological, occupational, or environmental agents rely on scientific studies to support their conclusions. These studies often include relatively few observations, and, as a result, models used to characterize the risk may include large amounts of uncertainty. The motivation, development, and assessment of new methods for risk assessment are facilitated by the availability of a set of experimental studies that span a range of dose‐response patterns that are observed in practice. We describe construction of such a historical database focusing on quantal data in chemical risk assessment, and we employ this database to develop priors in Bayesian analyses. The database is assembled from a variety of existing toxicological data sources and contains 733 separate quantal dose‐response data sets. As an illustration of the database's use, prior distributions for individual model parameters in Bayesian dose‐response analysis are constructed. Results indicate that including prior information based on curated historical data in quantitative risk assessments may help stabilize eventual point estimates, producing dose‐response functions that are more stable and precisely estimated. These in turn produce potency estimates that share the same benefit. We are confident that quantitative risk analysts will find many other applications and issues to explore using this database.

10.
Ethylene oxide (EO) has been identified as a carcinogen in laboratory animals. Although the precise mechanism of action is not known, tumors in animals exposed to EO are presumed to result from its genotoxicity. The overall weight of evidence for carcinogenicity from a large body of epidemiological data in the published literature remains limited. There is some evidence for an association between EO exposure and lympho/hematopoietic cancer mortality. Of these cancers, the evidence provided by two large cohorts with the longest follow-up is most consistent for leukemia. Together with what is known about human leukemia and EO at the molecular level, there is a body of evidence that supports a plausible mode of action for EO as a potential leukemogen. Based on a consideration of the mode of action, the events leading from EO exposure to the development of leukemia (and therefore risk) are expected to be proportional to the square of the dose. In support of this hypothesis, a quadratic dose-response model provided the best overall fit to the epidemiology data in the range of observation. Cancer dose-response assessments based on human and animal data are presented using three different assumptions for extrapolating to low doses: (1) risk is linearly proportionate to dose; (2) there is no appreciable risk at low doses (margin-of-exposure or reference dose approach); and (3) risk below the point of departure continues to be proportionate to the square of the dose. The weight of evidence for EO supports the use of a nonlinear assessment. Therefore, exposures to concentrations below 37 µg/m³ are not likely to pose an appreciable risk of leukemia in human populations. However, if quantitative estimates of risk at low doses are desired and the mode of action for EO is considered, these risks are best quantified using the quadratic estimates of cancer potency, which are approximately 3.2- to 32-fold lower, using alternative points of departure, than the linear estimates of cancer potency for EO. An approach is described for linking the selection of an appropriate point of departure to the confidence in the proposed mode of action. Despite high confidence in the proposed mode of action, a small linear component for the dose-response relationship at low concentrations cannot be ruled out conclusively. Accordingly, a unit risk value of 4.5 × 10⁻⁸ (µg/m³)⁻¹ was derived for EO, with a range of unit risk values of 1.4 × 10⁻⁸ to 1.4 × 10⁻⁷ (µg/m³)⁻¹ reflecting the uncertainty associated with a theoretical linear term at low concentrations.
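A worked illustration of the linear versus quadratic low-dose extrapolation contrast discussed above; the point of departure and the extra risk attached to it are placeholders, not the values used in the assessment.

    # Low-dose extrapolation below a point of departure (POD): under a linear
    # assumption extra risk scales with dose; under the quadratic mode-of-action
    # assumption it scales with the square of dose. POD and risk-at-POD are
    # placeholders, not the values derived in the assessment.
    pod_ug_m3 = 100.0    # assumed point of departure, ug/m3
    risk_at_pod = 1e-2   # assumed extra risk at the POD

    def linear_risk(dose_ug_m3):
        return risk_at_pod * (dose_ug_m3 / pod_ug_m3)

    def quadratic_risk(dose_ug_m3):
        return risk_at_pod * (dose_ug_m3 / pod_ug_m3) ** 2

    for dose in (37.0, 10.0, 1.0):
        print(f"{dose:6.1f} ug/m3   linear {linear_risk(dose):.1e}   quadratic {quadratic_risk(dose):.1e}")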

11.
Risk analysis is a widely used tool to understand problems in food safety policy, but it is seldom applied to nutrition policy. We propose that risk analysis be applied more often to inform debates on nutrition policy, and we conduct a risk assessment of the relationship of regular carbonated soft drink (RCSD) consumption in schools and body mass index (BMI) as a case study. Data for RCSD consumption in schools were drawn from three data sets: the Continuing Survey of Food Intake by Individuals 1994-1996, 1998 (CSFII), the National Health and Nutrition Examination Survey 1999-2000 (NHANES), and the National Family Opinion (NFO) WorldGroup Share of Intake Panel (SIP) study. We used the largest relationship between RCSD and BMI published in prospective observational studies to characterize the maximum plausible relationship in our study. Consumption of RCSD in schools was low in all three data sets, ranging from 15 g/day in NFO-SIP to 60 g/day in NHANES. There was no relationship between RCSD consumption from all sources and BMI in either the CSFII or the NHANES data. The risk assessment showed no impact on BMI by removing RCSD consumption in school. These findings suggest that focusing adolescent overweight prevention programs on RCSD in schools will not have a significant impact on BMI.

12.
The extensive data from the Blair et al.(1) epidemiology study of occupational acrylonitrile exposure among 25,460 workers in eight plants in the United States provide an excellent opportunity to update quantitative risk assessments for this widely used commodity chemical. We employ the semiparametric Cox relative risk (RR) regression model with a cumulative exposure metric to model cause-specific mortality from lung cancer and all other causes. The separately estimated cause-specific cumulative hazards are then combined to provide an overall estimate of age-specific mortality risk. Age-specific estimates of the additional risk of lung cancer mortality associated with several plausible occupational exposure scenarios are obtained. For age 70, these estimates are all markedly lower than those generated with the cancer potency estimate provided in the USEPA acrylonitrile risk assessment.(2) This result is consistent with the failure of recent occupational studies to confirm elevated lung cancer mortality among acrylonitrile-exposed workers as was originally reported by O'Berg,(3) and it calls attention to the importance of using high-quality epidemiology data in the risk assessment process.
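A toy sketch of a Cox relative risk regression with a cumulative exposure metric, using the third-party lifelines package and fully synthetic cohort data (the exposure distribution, follow-up, and event rates below are invented for illustration).

    import numpy as np
    import pandas as pd
    from lifelines import CoxPHFitter

    rng = np.random.default_rng(3)
    n = 2_000

    # Synthetic worker cohort: cumulative exposure (ppm-years), attained age at
    # exit, and a lung-cancer death indicator. Invented data, for illustration only.
    cum_exposure = rng.gamma(shape=2.0, scale=5.0, size=n)
    age_at_exit = rng.uniform(40, 85, size=n)
    event = rng.random(n) < 0.03 * np.exp(0.01 * cum_exposure)

    df = pd.DataFrame({"age": age_at_exit,
                       "event": event.astype(int),
                       "cum_exposure": cum_exposure})

    # Semiparametric Cox model with cumulative exposure as the covariate.
    cph = CoxPHFitter()
    cph.fit(df, duration_col="age", event_col="event")
    cph.print_summary()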

13.
Fish consumption rates play a critical role in the assessment of human health risks posed by the consumption of fish from chemically contaminated water bodies. Based on data from the 1989 Michigan Sport Anglers Fish Consumption Survey, we examined total fish consumption, consumption of self-caught fish, and consumption of Great Lakes fish for all adults, men, women, and certain higher risk subgroups such as anglers. We present average daily consumption rates as compound probability distributions consisting of a Bernoulli trial (to distinguish those who ate fish from those who did not) combined with a distribution (both empirical and parametric) for those who ate fish. We found that the average daily consumption rates for adults who ate fish are reasonably well fit by lognormal distributions. The compound distributions may be used as input variables for Monte Carlo simulations in public health risk assessments.
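A minimal sketch of the compound (Bernoulli × lognormal) consumption-rate model described above, with a placeholder eater fraction and lognormal parameters rather than the Michigan survey estimates.

    import numpy as np

    rng = np.random.default_rng(4)
    n = 100_000

    # Compound consumption-rate model: a Bernoulli trial for whether a person ate
    # fish at all, combined with a lognormal daily rate for those who did. The
    # eater fraction and lognormal parameters are placeholders, not the survey fits.
    p_eater = 0.55
    mu_log = np.log(15.0)  # median intake among eaters, g/day
    sigma_log = 1.0

    is_eater = rng.random(n) < p_eater
    intake_g_day = np.where(is_eater, rng.lognormal(mu_log, sigma_log, n), 0.0)

    print("fraction of non-eaters:", round(float(1 - is_eater.mean()), 3))
    print("population mean (g/day):", round(float(intake_g_day.mean()), 1))
    print("95th percentile (g/day):", round(float(np.percentile(intake_g_day, 95)), 1))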

14.
A general probabilistically-based approach is proposed for both cancer and noncancer risk/safety assessments. The familiar framework of the original ADI/RfD formulation is used, substituting in the numerator a benchmark dose derived from a hierarchical pharmacokinetic/pharmacodynamic model and in the denominator a unitary uncertainty factor derived from a hierarchical animal/average human/sensitive human model. The empirical probability distributions of the numerator and denominator can be combined to produce an empirical human-equivalent distribution for an animal-derived benchmark dose in external-exposure units.
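A minimal sketch of the probabilistic ADI/RfD idea: divide a sampled benchmark-dose distribution by a sampled unitary uncertainty factor to obtain an empirical human-equivalent dose distribution. Both input distributions below are illustrative placeholders.

    import numpy as np

    rng = np.random.default_rng(5)
    n = 100_000

    # Probabilistic analogue of RfD = BMD / UF: divide a sampled benchmark-dose
    # distribution (numerator) by a sampled unitary uncertainty factor (denominator).
    # Both distributions are illustrative placeholders.
    bmd_mg_kg_d = rng.lognormal(mean=np.log(10.0), sigma=0.3, size=n)
    unitary_uf = rng.lognormal(mean=np.log(30.0), sigma=0.5, size=n)

    human_equiv_dose = bmd_mg_kg_d / unitary_uf

    print("median:", round(float(np.median(human_equiv_dose)), 3), "mg/kg-day")
    print("5th percentile:", round(float(np.percentile(human_equiv_dose, 5)), 3), "mg/kg-day")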

15.
An exposure model was developed to relate seafood consumption to levels of methylmercury (reported as mercury) in blood and hair in the U.S. population, and two subpopulations defined as children aged 2-5 and women aged 18-45. Seafood consumption was initially modeled using short-term (three-day) U.S. consumption surveys that recorded the amount of fish eaten per meal. Since longer exposure periods include more eaters with a lower daily mean intake, the consumption distribution was adjusted by broadening the distribution to include more eaters and reducing the distribution mean to keep total population intake constant. The estimate for the total number of eaters was based on long-term purchase diaries. Levels of mercury in canned tuna, swordfish, and shark were based on FDA survey data. The distribution of mercury levels in other species was based on reported mean levels, with the frequency of consumption of each species based on market share. The shape distribution for the given mean was based on the range of variation encountered among shark, tuna, and swordfish. These distributions were integrated with a simulation that estimated average daily intake over a 360-day period, with 10,000 simulated individuals and 1,000 uncertainty iterations. The results of this simulation were then used as an input to a second simulation that modeled levels of mercury in blood and hair. The relationship between dietary intake and blood mercury in a population was modeled from data obtained from a 90-day study with controlled seafood intake. The relationship between blood and hair mercury in a population was modeled from data obtained from several sources. The biomarker simulation employed 2,000 simulated individuals and 1,000 uncertainty iterations. These results were then compared to the recent National Health and Nutrition Examination Survey (NHANES) that tabulated blood and hair mercury levels in a cross-section of the U.S. population. The output of the model and NHANES results were similar for both children and adult women, with predicted mercury biomarker concentrations within a factor of two or less of NHANES biomarker results. However, the model tended to underpredict blood levels for women and overpredict blood and hair levels for children.

16.
Moment‐matching discrete distributions were developed by Miller and Rice (1983) as a method to translate continuous probability distributions into discrete distributions for use in decision and risk analysis. Using Gaussian quadrature, they showed that an n‐point discrete distribution can be constructed that exactly matches the first 2n − 1 moments of the underlying distribution. These moment‐matching discrete distributions offer several theoretical advantages over the typical discrete approximations as shown in Smith (1993), but they also pose practical problems. In particular, how does the analyst estimate the moments given only the subjective assessments of the continuous probability distribution? Smith suggests that the moments can be estimated by fitting a distribution to the assessments. This research note shows that the quality of the moment estimates cannot be judged solely by how close the fitted distribution is to the true distribution. Examples are used to show that the relative errors in higher order moment estimates can be greater than 100%, even though the cumulative distribution function is estimated within a Kolmogorov‐Smirnov distance less than 1%.
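A sketch of one standard way to build such a moment-matching discretization, the Golub-Welsch construction from a Hankel matrix of raw moments (eigenvalues give the nodes, squared first eigenvector components give the weights); the lognormal example parameters are arbitrary.

    import numpy as np

    def moment_matching_discretization(moments, n):
        """n-point discrete distribution matching the first 2n - 1 raw moments.

        moments: raw moments [m0, m1, ..., m_{2n}] with m0 = 1.
        Returns (nodes, weights) via the Golub-Welsch construction.
        """
        m = np.asarray(moments, dtype=float)
        hankel = np.array([[m[i + j] for j in range(n + 1)] for i in range(n + 1)])
        r = np.linalg.cholesky(hankel).T  # upper-triangular factor of the moment matrix

        # Three-term recurrence coefficients read off the Cholesky factor.
        alpha = np.zeros(n)
        beta = np.zeros(n - 1)
        alpha[0] = r[0, 1] / r[0, 0]
        for j in range(1, n):
            alpha[j] = r[j, j + 1] / r[j, j] - r[j - 1, j] / r[j - 1, j - 1]
            beta[j - 1] = r[j, j] / r[j - 1, j - 1]

        jacobi = np.diag(alpha) + np.diag(beta, 1) + np.diag(beta, -1)
        nodes, vecs = np.linalg.eigh(jacobi)
        weights = m[0] * vecs[0, :] ** 2
        return nodes, weights

    # Example: 3-point approximation to a lognormal; raw moments are exp(k*mu + (k*sigma)^2 / 2).
    mu, sigma, n = 0.0, 0.5, 3
    moments = [np.exp(k * mu + 0.5 * (k * sigma) ** 2) for k in range(2 * n + 1)]
    nodes, weights = moment_matching_discretization(moments, n)
    print(np.round(nodes, 3), np.round(weights, 3))
    print("mean check:", round(float(nodes @ weights), 4),
          "vs", round(float(np.exp(mu + sigma ** 2 / 2)), 4))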

17.
For risk assessments, the average current residence time (time since moving into current residence) has often been used as a surrogate for the average total residence time (time between moving into and out of a residence). Since the distributions of the two quantities are not necessarily the same, neither are their averages. Housing surveys provide current residence time data; total residence times must, therefore, be inferred. By modeling the moving process, the total residence time distribution can be estimated from current residence time data. Using 1985 and 1987 U.S. housing survey data, distributions and averages for both current and total residence times were calculated for several housing categories. The average total residence time calculated for all U.S. households, 4.6 (SE = 0.6) years, is less than half the average current residence time, 10.6 (SE = 0.1) years.
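A minimal simulation sketch of why the two averages differ: a cross-sectional survey samples occupancy spells with probability proportional to their length and observes only the elapsed portion, so the average current time can exceed the average total time. The lognormal spell-length distribution below is illustrative, not fitted to the housing surveys.

    import numpy as np

    rng = np.random.default_rng(6)
    n = 200_000

    # Total residence times of completed occupancy spells (illustrative lognormal,
    # not fitted to the 1985/1987 housing survey data).
    total_yr = rng.lognormal(mean=np.log(3.0), sigma=1.2, size=n)

    # A cross-sectional survey samples spells with probability proportional to their
    # length (length-biased sampling) and observes only the elapsed portion.
    picked = rng.choice(n, size=n, p=total_yr / total_yr.sum())
    current_yr = rng.uniform(0.0, total_yr[picked])

    print("average total residence time  :", round(float(total_yr.mean()), 2), "years")
    print("average current residence time:", round(float(current_yr.mean()), 2), "years")
    print("renewal-theory check E[T^2]/(2 E[T]):",
          round(float((total_yr ** 2).mean() / (2 * total_yr.mean())), 2), "years")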

18.
Three methods (multiplicative, additive, and allometric) were developed to extrapolate physiological model parameter distributions across species, specifically from rats to humans. In the multiplicative approach, the rat model parameters are multiplied by the ratio of the mean values between humans and rats. Additive scaling of the distributions is defined by adding the difference between the average human value and the average rat value to each rat value. Finally, allometric scaling relies on established extrapolation relationships using power functions of body weight. A physiologically-based pharmacokinetic model was fitted independently to rat and human benzene disposition data. Human model parameters obtained by extrapolation and by fitting were used to predict the total bone marrow exposure to benzene and the quantity of metabolites produced in bone marrow. We found that extrapolations poorly predict the human data relative to the human model. In addition, the prediction performance depends largely on the quantity of interest. The extrapolated models underpredict bone marrow exposure to benzene relative to the human model. Yet, predictions of the quantity of metabolite produced in bone marrow are closer to the human model predictions. These results indicate that the multiplicative and allometric techniques were able to extrapolate the model parameter distributions, but also that rats do not provide a good kinetic model of benzene disposition in humans.
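A minimal sketch of the three extrapolation rules applied to one parameter distribution; the rat distribution, species means, body weights, and allometric exponent are assumed values, not those from the benzene model.

    import numpy as np

    rng = np.random.default_rng(7)

    # Assumed rat parameter distribution (e.g., a tissue blood-flow rate) and assumed
    # species means and body weights; none of these values come from the paper.
    rat_values = rng.lognormal(mean=np.log(2.0), sigma=0.3, size=10_000)
    rat_mean, human_mean = 2.0, 50.0
    bw_rat, bw_human = 0.25, 70.0      # kg
    allometric_exponent = 0.74         # a commonly assumed power for flows/clearances

    multiplicative = rat_values * (human_mean / rat_mean)
    additive = rat_values + (human_mean - rat_mean)
    allometric = rat_values * (bw_human / bw_rat) ** allometric_exponent

    for name, x in [("multiplicative", multiplicative),
                    ("additive", additive),
                    ("allometric", allometric)]:
        print(f"{name:>14s}: mean {x.mean():8.1f}, CV {x.std() / x.mean():.2f}")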

19.
This paper examines the problem of testing and confidence set construction for one‐dimensional functions of the coefficients in autoregressive (AR(p)) models with potentially persistent time series. The primary example concerns inference on impulse responses. A new asymptotic framework is suggested and some new theoretical properties of known procedures are demonstrated. I show that the likelihood ratio (LR) and LR± statistics for a linear hypothesis in an AR(p) can be uniformly approximated by a weighted average of local‐to‐unity and normal distributions. The corresponding weights depend on the weight placed on the largest root in the null hypothesis. The suggested approximation is uniform over the set of all linear hypotheses. The same family of distributions approximates the LR and LR± statistics for tests about impulse responses, and the approximation is uniform over the horizon of the impulse response. I establish the size properties of tests about impulse responses proposed by Inoue and Kilian (2002) and Gospodinov (2004), and theoretically explain some of the empirical findings of Pesavento and Rossi (2007). An adaptation of the grid bootstrap for impulse response functions is suggested and its properties are examined.

20.
Children are becoming an increasingly important focus for exposure and risk assessments because they are more sensitive than adults to environmental contaminants. A necessary step in measuring the extent of children's exposure and in calculating risk assessments is to document how and where children spend their time. This 1990-1991 survey of 1000 households was designed for this purpose, targeting children between 5 and 12 years of age, in six states in varied geographic regions. The behavior of children was sampled on both weekdays and weekends over all four seasons of the year using a retrospective time diary to allocate time to activities during the previous 24 h. Information was obtained on the kinds and locations of activities, the nature of the microenvironments of the locations, and the time spent in the different environments. Measures of variability in addition to mean hours per day are reported. Results of this study closely match those of earlier research on California children's activities done by the California Air Resources Board. One important finding of the survey was that 5- to 12-year-old children in all geographic regions spend most of their time indoors at home, indicating that risk assessments should focus on indoor, on-site hazards.
