Similar Articles
20 similar articles found (search time: 31 ms)
1.
Minimum surgical times are positive and often large. The lognormal distribution has been proposed for modeling surgical data, and the three‐parameter form of the lognormal, which includes a location parameter, should be appropriate for surgical data. We studied the goodness‐of‐fit performance, as measured by the Shapiro‐Wilk p‐value, of three estimators of the location parameter for the lognormal distribution, using a large data set of surgical times. Alternative models considered included the normal distribution and the two‐parameter lognormal model, which sets the location parameter to zero. At least for samples with n > 30, data adequately fit by the normal had significantly smaller skewness than data not well fit by the normal, and data with larger relative minima (smallest order statistic divided by the mean) were better fit by a lognormal model. The rule “If the skewness of the data is greater than 0.35, use the three‐parameter lognormal with the location parameter estimate proposed by Muralidhar & Zanakis (1992), otherwise, use the two‐parameter model” works almost as well at specifying the lognormal model as more complex guidelines formulated by linear discriminant analysis and by tree induction.
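The skewness rule above can be sketched in a few lines. This is an illustration on simulated data, and it substitutes scipy's generic maximum-likelihood location estimate for the Muralidhar & Zanakis estimator, which scipy does not implement; the simulated surgical-time parameters are invented.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# simulated surgical times (minutes) with a positive minimum near 15
times = 15.0 + rng.lognormal(mean=3.0, sigma=0.5, size=200)

if stats.skew(times) > 0.35:
    # three-parameter lognormal: location (threshold) estimated from the data
    shape, loc, scale = stats.lognorm.fit(times)
else:
    # two-parameter lognormal: location fixed at zero
    shape, loc, scale = stats.lognorm.fit(times, floc=0)

# goodness of fit via Shapiro-Wilk on the log of the shifted data
w, p = stats.shapiro(np.log(times - loc))
print(f"loc={loc:.1f}, Shapiro-Wilk p={p:.3f}")
```

A high Shapiro-Wilk p-value on the log-transformed, location-shifted data indicates an adequate lognormal fit, mirroring the criterion used in the study.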

2.
Distributions of pathogen counts in treated water over time are highly skewed, power‐law‐like, and discrete. Over long periods of record, a long tail is observed, which can strongly determine the long‐term mean pathogen count and associated health effects. Such distributions have been modeled with the Poisson lognormal (PLN) computed (not closed‐form) distribution, and a new discrete growth distribution (DGD), also computed, recently proposed and demonstrated for microbial counts in water (Risk Analysis 29, 841–856). In this article, an error in the original theoretical development of the DGD is pointed out, and the approach is shown to support the closed‐form discrete Weibull (DW). Furthermore, an information‐theoretic derivation of the DGD is presented, explaining the fit shown for it to the original nine empirical and three simulated (n = 1,000) long‐term waterborne microbial count data sets. Both developments result from a theory of multiplicative growth of outcome size from correlated, entropy‐forced cause magnitudes. The predicted DW and DGD are first borne out in simulations of continuous and discrete correlated growth processes, respectively. Then the DW and DGD are each demonstrated to fit 10 of the original 12 data sets, passing the chi‐square goodness‐of‐fit test (α = 0.05, overall p = 0.1184). The PLN was not demonstrated, fitting only 4 of 12 data sets (p = 1.6 × 10⁻⁸), explained by cause magnitude correlation. Results bear out predictions of monotonically decreasing distributions, and suggest use of the DW for inhomogeneous counts correlated in time or space. A formula for computing the DW mean is presented.
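A minimal sketch of the closed-form discrete Weibull (type I) named above, with the mean computed by summing the survival function. The parameter values are arbitrary illustrations, not fitted values from the article.

```python
import numpy as np

def dw_pmf(x, q, beta):
    """P(X = x) = q**(x**beta) - q**((x+1)**beta), for x = 0, 1, 2, ..."""
    x = np.asarray(x, dtype=float)
    return q ** (x ** beta) - q ** ((x + 1) ** beta)

def dw_mean(q, beta, tol=1e-12, max_terms=10**6):
    """E[X] = sum over x >= 1 of P(X >= x) = sum of q**(x**beta)."""
    total, x = 0.0, 1
    while x < max_terms:
        term = q ** (x ** beta)
        total += term
        if term < tol:
            break
        x += 1
    return total

q, beta = 0.9, 0.8            # illustrative parameters
xs = np.arange(0, 2000)
print(dw_pmf(xs, q, beta).sum())   # telescoping sum, approaches 1
print(dw_mean(q, beta))
```

The pmf telescopes, so summing it over a long enough range recovers probability 1, a quick check that the form is coded correctly.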

3.
This paper critiques the Environmental Protection Agency's assessment of risk for hazardous waste incineration at sea. It reviews operational and transportation risks and considers alternative approaches for managing chlorinated organic hazardous wastes. It concludes that depending on the scale of the program, ocean incineration will either contribute little to the overall management of this waste stream or else it will engender significant risks, especially in the coastal environment. Furthermore, past assessments on the part of EPA have tended to understate the risks of incineration at sea while simultaneously holding out the promise of the technology as a commercial-scale management option. Finally, this paper observes that the Western European countries that pioneered incineration in the North Sea are now finding practical alternatives. It is recommended that waste reuse, on-site treatment, and techniques of waste reduction provide viable alternatives and obviate the need for incineration at sea.

4.
Application of Geostatistics to Risk Assessment
Geostatistics offers two fundamental contributions to environmental contaminant exposure assessment: (1) a group of methods to quantitatively describe the spatial distribution of a pollutant and (2) the ability to improve estimates of the exposure point concentration by exploiting the geospatial information present in the data. The second contribution is particularly valuable when exposure estimates must be derived from small data sets, which is often the case in environmental risk assessment. This article addresses two topics related to the use of geostatistics in human and ecological risk assessments performed at hazardous waste sites: (1) the importance of assessing model assumptions when using geostatistics and (2) the use of geostatistics to improve estimates of the exposure point concentration (EPC) in the limited data scenario. The latter topic is approached here by comparing design-based estimators that are familiar to environmental risk assessors (e.g., Land's method) with geostatistics, a model-based estimator. In this report, we summarize the basics of spatial weighting of sample data, kriging, and geostatistical simulation. We then explore the two topics identified above in a case study, using soil lead concentration data from a Superfund site (a skeet and trap range). We also describe several areas where research is needed to advance the use of geostatistics in environmental risk assessment.
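The spatial weighting idea behind kriging can be illustrated with a tiny ordinary-kriging system. Everything here is a hypothetical sketch: the exponential covariance model, its sill and range, and the soil lead values are invented, not taken from the case study.

```python
import numpy as np

def exp_cov(h, sill=1.0, rng_param=50.0):
    """Assumed exponential covariance model of separation distance h."""
    return sill * np.exp(-h / rng_param)

def ordinary_krige(coords, values, target):
    """Solve the ordinary kriging system [C 1; 1' 0][w; mu] = [c0; 1]."""
    n = len(values)
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    A = np.ones((n + 1, n + 1))
    A[:n, :n] = exp_cov(d)
    A[n, n] = 0.0
    b = np.ones(n + 1)
    b[:n] = exp_cov(np.linalg.norm(coords - target, axis=1))
    w = np.linalg.solve(A, b)[:n]     # weights sum to 1 by construction
    return float(w @ values)

# made-up soil lead samples (mg/kg): a hot cluster and one distant low value
coords = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [40.0, 40.0]])
pb = np.array([400.0, 380.0, 420.0, 90.0])
est = ordinary_krige(coords, pb, np.array([5.0, 5.0]))
print(est)   # dominated by the three nearby samples
```

Because the estimate near the cluster is driven almost entirely by the three close samples, it illustrates how kriging exploits geospatial structure rather than averaging all data equally.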

5.
Massive efforts are underway to clean up hazardous and radioactive waste sites located throughout the United States. To help determine cleanup priorities, computer models are being used to characterize the source, transport, fate, and effects of hazardous chemicals and radioactive materials found at these sites. Although the U.S. Environmental Protection Agency (EPA), the U.S. Department of Energy (DOE), and the U.S. Nuclear Regulatory Commission (NRC) have provided preliminary guidance to promote the use of computer models for remediation purposes, no agency has produced directed guidance on models that must be used in these efforts. As a result, model selection is currently done on an ad hoc basis. This is administratively ineffective and costly, and can also result in technically inconsistent decision-making. To identify what models are actually being used to support decision-making at hazardous and radioactive waste sites, a project jointly funded by EPA, DOE, and NRC was initiated. The purpose of this project was to: (1) identify models being used for hazardous and radioactive waste site assessment purposes; and (2) describe and classify these models. This report presents the results of this study. A mail survey was conducted to identify models in use. The survey was sent to 550 persons engaged in the cleanup of hazardous and radioactive waste sites; 87 individuals responded. They represented organizations including federal agencies, national laboratories, and contractor organizations. The respondents identified 127 computer models that were being used to help support cleanup decision-making. There were a few models that appeared to be used across a large number of sites (e.g., RESRAD). In contrast, the survey results also suggested that most sites were using models which were not reported in use elsewhere. Information is presented on the types of models being used and the characteristics of the models in use. Also shown is a list of models available, but not identified in the survey itself.

6.
Living microbes are discrete, not homogeneously distributed in environmental media, and the form of the distribution of their counts in drinking water has not been well established. However, this count may "scale" or range over orders of magnitude over time, in which case data representing the tail of the distribution, and governing the mean, would be represented only in impractically long data records. In the absence of such data, knowledge of the general form of the full distribution could be used to estimate the true mean accounting for low-probability, high-consequence count events and provide a basis for a general environmental dose-response function. In this article, a new theoretical discrete growth distribution (DGD) is proposed for discrete counts in environmental media and other discrete growth systems. The term growth refers not to microbial growth but to a general abiotic first-order growth/decay of outcome sizes in many complex systems. The emergence and stability of the DGD in such systems, defined in simultaneous work, are also described. The DGD is then initially verified versus 12 of 12 simulated long-term drinking water and short-term treated and untreated water microbial count data sets. The alternative Poisson lognormal (PLN) distribution was rejected for 2 (17%) of the 12 data sets with 95% confidence and, like other competitive distributions, was not found stable (in simultaneous work). Sample averages are compared with means assessed from the fitted DGD, with varying results. Broader validation of the DGD for discrete counts arising as outcomes of mathematical growth systems is suggested.

7.
The purpose of this paper is to undertake a statistical analysis to specify empirical distributions and to estimate univariate parametric probability distributions for air exchange rates for residential structures in the United States. To achieve this goal, we used data compiled by the Brookhaven National Laboratory using a method known as the perfluorocarbon tracer (PFT) technique. While these data are not fully representative of all areas of the country or all housing types, they are judged to be by far the best available. The analysis is characterized by four key points: the use of data for 2,844 households; a four-region breakdown based on heating degree days, the best available measure of climatic factors affecting air exchange rates; estimation of lognormal distributions as well as provision of empirical (frequency) distributions; and provision of these distributions for all of the data, for the data segmented by the four regions, for the data segmented by the four seasons, and for the data segmented by a 16-cell region-by-season breakdown. Except in a few cases, primarily for small sample sizes, air exchange rates were found to be well fit by lognormal distributions (adjusted R² ≥ 0.95). The empirical or lognormal distributions may be used in indoor air models or as input variables for probabilistic human health risk assessments.

8.
Areas immediately adjacent to 16 of the first US National Priorities List (NPL) hazardous waste sites that also had pre-Superfund emergency actions were examined to measure local stigma. Four decades after their NPL designation, I found marked variation in these areas’ social, public health, and environmental attributes. About one-third of these small areas fit the stereotype of stressed areas with environmental injustice challenges. Yet another one-third of these sites have better measurable outcomes than a combination of their host states and counties. For example, they have elevated levels of broadband access, and their local jurisdictions are classified as safe and attractive to families. I conclude that long-term stigma around a Superfund site was limited by US EPA actions, as well as by progressive state and local governments and community groups; in other words, by contributions from parties at multiple geographical scales.

9.
The U.S. Environmental Protection Agency's cancer guidelines (USEPA, 2005) present the default approach for the cancer slope factor (denoted here as s*) as the slope of the linear extrapolation to the origin, generally drawn from the 95% lower confidence limit on dose at the lowest prescribed risk level supported by the data. In the past, the cancer slope factor has been calculated as the upper 95% confidence limit on the coefficient (q1*) of the linear term of the multistage model for the extra cancer risk over background. To what extent do the two approaches differ in practice? We addressed this issue by calculating s* and q1* for 102 data sets for 60 carcinogens, using the constrained multistage model to fit the dose‐response data. We also examined how frequently the fitted dose‐response curves departed appreciably from linearity at low dose by comparing q1, the coefficient of the linear term in the multistage polynomial, with a slope factor, sc, derived from a point of departure based on the maximum likelihood estimate of the dose‐response. Another question we addressed is the extent to which s* exceeded sc for various levels of extra risk. For the vast majority of chemicals, the prescribed default EPA methodology for the cancer slope factor provides values very similar to those obtained with the traditionally estimated q1*. At 10% extra risk, q1*/s* is greater than 0.3 for all except one data set; for 82% of the data sets, q1* is within 0.9 to 1.1 of s*. At the 10% response level, the interquartile range of the ratio s*/sc is 1.4 to 2.0.
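The q1-versus-slope-factor comparison can be made concrete with a toy multistage model. The coefficients below are invented for illustration (they are maximum likelihood point values, not the confidence-limit quantities q1* and s* from the article), but the mechanics of extrapolating linearly from a 10% extra-risk point of departure are the same.

```python
import numpy as np
from scipy.optimize import brentq

q1, q2 = 0.002, 0.0001        # hypothetical multistage coefficients (per unit dose)

def extra_risk(d):
    """Extra risk over background for a two-stage multistage model:
    1 - exp(-(q1*d + q2*d**2))."""
    return 1.0 - np.exp(-(q1 * d + q2 * d * d))

# dose giving 10% extra risk (the point of departure), found numerically
d10 = brentq(lambda d: extra_risk(d) - 0.10, 1e-9, 1e4)

s_c = 0.10 / d10              # slope of the linear extrapolation to the origin
print(q1, s_c, q1 / s_c)      # near-linear models give a ratio near 1
```

When the quadratic term matters at the point of departure, the ratio q1/s_c drops below 1, which is exactly the departure-from-linearity diagnostic the abstract describes.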

10.
Using probability plots and Maximum Likelihood Estimation (MLE), we fit lognormal distributions to data compiled by Ershow et al. for daily intake of total water and tap water by three groups of women (controls, pregnant, and lactating; all between 15–49 years of age) in the United States. We also develop bivariate lognormal distributions for the joint distribution of water ingestion and body weight for these three groups. Overall, we recommend the marginal distributions for water intake as fit by MLE for use in human health risk assessments.

11.
For the U.S. population, we fit bivariate distributions to estimated numbers of men and women aged 18-74 years in cells representing 1 in. intervals in height and 10 lb intervals in weight. For each sex separately, the marginal histogram of height is well fit by a normal distribution. For men and women, respectively, the marginal histogram of weight is well fit and satisfactorily fit by a lognormal distribution. For men, the bivariate histogram is satisfactorily fit by a normal distribution between the height and the natural logarithm of weight. For women, the bivariate histogram is satisfactorily fit by two superposed normal distributions between the height and the natural logarithm of weight. The resulting distributions are suitable for use in public health risk assessments.
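Sampling from the men's model described above (a bivariate normal between height and the natural logarithm of weight) looks like this. The means, standard deviations, and correlation here are invented placeholders, not the fitted values from the article.

```python
import numpy as np

rng = np.random.default_rng(1)
mu = np.array([69.1, np.log(172.0)])   # assumed mean height (in) and mean ln(weight in lb)
sd = np.array([2.9, 0.17])             # assumed standard deviations
rho = 0.5                              # assumed height-ln(weight) correlation
cov = np.array([[sd[0]**2,          rho * sd[0] * sd[1]],
                [rho * sd[0] * sd[1], sd[1]**2]])

draws = rng.multivariate_normal(mu, cov, size=10_000)
height = draws[:, 0]
weight = np.exp(draws[:, 1])           # exponentiating makes weight lognormal
print(height.mean(), np.median(weight))
```

Exponentiating the second coordinate yields a marginal lognormal weight whose median equals exp of the log-scale mean, so samples from this construction reproduce both marginal forms reported in the abstract.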

12.
This study presents a method to assess short term traumatic fatality risks for workers involved in hazardous waste site remediation to provide a quantitative, rather than qualitative, basis for evaluating occupational exposures in remediation feasibility studies. Occupational employment and fatality data for the years 1979–1981 and 1983 were compiled from Bureau of Labor Statistics data for 11 states. These data were analyzed for 17 occupations associated with three common remediation alternatives: excavation and landfill, capping, and capping plus slurry wall. The two occupations with the highest death rates, truck driver and laborer, contributed most to total exposure hours in each alternative. Weighted average death rates were produced for each alternative and multiplied by respective total person-years of exposure. The resultant expected number of fatalities was converted, using the Poisson distribution, to the risk of experiencing at least one fatality, as follows: 0.149 for excavation and landfill, 0.012 for capping, and 0.014 for capping plus slurry wall. These risks were discussed in light of the need to obtain more reliable and comprehensive data than are currently available on the occupational safety and health risks associated with hazardous waste site remediation and the need for a more scientific, quantitative approach to remediation decisions involving risks to workers.
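The Poisson conversion used above is one line: with an expected number of fatalities lam, the probability of at least one fatality is 1 - exp(-lam). The expected values below are back-solved from the reported risks, so treat them as illustrative rather than the study's actual person-year products.

```python
import math

def risk_at_least_one(expected_fatalities: float) -> float:
    """P(N >= 1) under a Poisson model with the given mean."""
    return 1.0 - math.exp(-expected_fatalities)

for name, lam in [("excavation and landfill", 0.161),
                  ("capping", 0.0121),
                  ("capping plus slurry wall", 0.0141)]:
    print(f"{name}: {risk_at_least_one(lam):.3f}")
```

Rounded to three decimals these reproduce the 0.149, 0.012, and 0.014 risks quoted in the abstract.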

13.
Pesticide risk assessment for food products involves combining information from consumption and concentration data sets to estimate a distribution for the pesticide intake in a human population. Using this distribution one can obtain probabilities of individuals exceeding specified levels of pesticide intake. In this article, we present a probabilistic, Bayesian approach to modeling the daily consumptions of the pesticide Iprodione through multiple food products. Modeling data on food consumption and pesticide concentration poses a variety of problems, such as the large proportions of consumptions and concentrations that are recorded as zero, and correlation between the consumptions of different foods. We consider daily food consumption data from the Netherlands National Food Consumption Survey and concentration data collected by the Netherlands Ministry of Agriculture. We develop a multivariate latent‐Gaussian model for the consumption data that allows for correlated intakes between products. For the concentration data, we propose a univariate latent‐t model. We then combine predicted consumptions and concentrations from these models to obtain a distribution for individual daily Iprodione exposure. The latent‐variable models allow for both skewness and large numbers of zeros in the consumption and concentration data. The use of a probabilistic approach is intended to yield more robust estimates of high percentiles of the exposure distribution than an empirical approach. Bayesian inference is used to facilitate the treatment of data with a complex structure.  
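A much-simplified sketch of the combination step described above: a latent Gaussian governs whether consumption occurs (producing the many zeros), and consumption draws are multiplied by concentration draws to form daily exposures. All parameter values, and the reduction to one food product with a lognormal rather than latent-t concentration model, are assumptions for illustration, not the fitted Netherlands models.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 100_000

# latent Gaussian: consumption occurs only when the latent variable exceeds 0,
# and the same latent value skews the amount consumed
z = rng.normal(loc=-0.8, scale=1.0, size=n)                 # ~21% consumption days
consumption = np.where(z > 0, np.exp(4.0 + 0.6 * z), 0.0)   # g/day when consumed

# concentration: mostly zeros (non-detects), with a lognormal tail otherwise
detect = rng.random(n) < 0.15
conc = np.where(detect, rng.lognormal(-3.0, 1.0, size=n), 0.0)  # mg/g

exposure = consumption * conc    # mg/day; zero unless both factors are nonzero
print(np.mean(exposure > 0), np.percentile(exposure, 99))
```

Because exposure is a product of two mostly-zero factors, only a few percent of simulated days carry any exposure at all, which is why the abstract emphasizes robust estimation of high percentiles.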

14.
Adaptive Spatial Sampling of Contaminated Soil
Cox, Louis Anthony. Risk Analysis, 1999, 19(6): 1059–1069

Suppose that a residential neighborhood may have been contaminated by a nearby abandoned hazardous waste site. The suspected contamination consists of elevated soil concentrations of chemicals that are also found in the absence of site-related contamination. How should a risk manager decide which residential properties to sample and which ones to clean? This paper introduces an adaptive spatial sampling approach which uses initial observations to guide subsequent search. Unlike some recent model-based spatial data analysis methods, it does not require any specific statistical model for the spatial distribution of hazards, but instead constructs an increasingly accurate nonparametric approximation to it as sampling proceeds. Possible cost-effective sampling and cleanup decision rules are described by decision parameters such as the number of randomly selected locations used to initialize the process, the number of highest-concentration locations searched around, the number of samples taken at each location, a stopping rule, and a remediation action threshold. These decision parameters are optimized by simulating the performance of each decision rule. The simulation is performed using the data collected so far to impute multiple probable values of unknown soil concentration distributions during each simulation run. This optimized adaptive spatial sampling technique has been applied to real data using error probabilities for wrongly cleaning or wrongly failing to clean each location (compared to the action that would be taken if perfect information were available) as evaluation criteria. It provides a practical approach for quantifying trade-offs between these different types of errors and expected cost. It also identifies strategies that are undominated with respect to all of these criteria.


15.
A conventional dose–response function must be refitted as additional data become available. A predictive Bayesian dose–response function, in contrast, requires no curve-fitting step, only additional data, and presents unconditional probabilities of illness that reflect the level of information it contains; it also becomes progressively less conservative as more information is included. This investigation evaluated the potential for using predictive Bayesian methods to develop a dose–response function for human infection that improves on existing models, to show how predictive Bayesian statistical methods can utilize additional data, and to expand the Bayesian methods for a broad audience, including those concerned about an oversimplification of dose–response curve use in quantitative microbial risk assessment (QMRA). This study used a dose–response relationship incorporating six separate data sets for Cryptosporidium parvum. A Pareto II distribution with known priors was applied to one of the six data sets to calibrate the model, while the others were used for subsequent updating. While epidemiological principles indicate that local variations, host susceptibility, and organism strain virulence may vary, the six data sets all appear to be well characterized using the Bayesian approach. The adaptable model was applied to an existing data set for Campylobacter jejuni for model validation purposes, which yielded results that demonstrate the ability to analyze a dose–response function with limited data and to update those relationships as new data become available. An analysis of goodness of fit compared to the beta-Poisson methods also demonstrated correlation between the predictive Bayesian model and the data.

16.
The rate of fish consumption is a critical variable in the assessment of human health risk from water bodies affected by chemical contamination and in the establishment of federal and state Ambient Water Quality Criteria (AWQC). For 1973 and 1974, the National Marine Fisheries Service (NMFS) analyzed data on the consumption of salt-water finfish, shellfish, and freshwater finfish from all sources in 10 regions of the United States for three age groups in the general population: children (ages 1 through 11 years), teenagers (ages 12 through 18 years), and adults (ages 19 through 98 years). Even though the NMFS data reported in Ref. 14 are 20 years old, they remain the most complete data on the overall consumption of all fish by the general U.S. population and they have been widely used to select point values for consumption. Using three methods, we fit lognormal distributions to the results of the survey as analyzed and published in Ref. 14. Strong lognormal fits were obtained for most of the 90 separate data sets. These results cannot necessarily be used to model the consumption of fish by sport or subsistence anglers from specific sites or from single water bodies.

17.
Mycobacterium avium subspecies paratuberculosis (MAP) causes chronic inflammation of the intestines in humans, ruminants, and other species. It is the causative agent of Johne's disease in cattle, and has been implicated as the causative agent of Crohn's disease in humans. To date, no quantitative microbial risk assessment (QMRA) for MAP utilizing a dose‐response function exists. The objective of this study is to develop a nested dose‐response model for infection from oral exposure to MAP utilizing data from the peer‐reviewed literature. Four studies amenable to dose‐response modeling were identified in the literature search and optimized to the one‐parameter exponential or two‐parameter beta‐Poisson dose‐response models. A nesting analysis was performed on all permutations of the candidate data sets to determine the acceptability of pooling data sets across host species. Three of four data sets exhibited goodness of fit to at least one model. All three data sets exhibited good fit to the beta‐Poisson model, and one data set exhibited goodness of fit, and best fit, to the exponential model. Two data sets were successfully nested using the beta‐Poisson model with parameters α = 0.0978 and N50 = 2.70 × 10² CFU. These data sets were derived from sheep and red deer host species, indicating successful interspecies nesting, and demonstrate the highly infective nature of MAP. The nested dose‐response model described should be used for future QMRA research regarding oral exposure to MAP.
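Using the nested parameters reported above (α = 0.0978, N50 = 270 CFU), the approximate beta-Poisson dose-response model evaluates as follows; the functional form is the standard N50 parameterization, applied here as an illustration.

```python
ALPHA, N50 = 0.0978, 2.70e2   # nested beta-Poisson parameters from the abstract

def p_infection(dose_cfu: float) -> float:
    """Approximate beta-Poisson:
    P(inf) = 1 - (1 + (dose/N50) * (2**(1/alpha) - 1)) ** (-alpha)."""
    return 1.0 - (1.0 + (dose_cfu / N50) * (2.0 ** (1.0 / ALPHA) - 1.0)) ** (-ALPHA)

print(p_infection(N50))   # by construction, the median infectious dose gives 0.5
print(p_infection(1.0))   # even a single CFU carries appreciable infection risk
```

The small α makes the curve very shallow, so low doses retain substantial risk, which is the "highly infective nature" the abstract notes.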

18.
David Okrent. Risk Analysis, 1999, 19(5): 877–901
This article begins with some history of the derivation of 40 CFR Part 191, the U.S. Environmental Protection Agency (EPA) standard that governs the geologic disposal of spent nuclear fuel and high-level and transuranic radioactive wastes. This is followed by criticisms of the standard that were made by a Sub-Committee of the EPA Science Advisory Board, by the staff of the U.S. Nuclear Regulatory Commission, and by a panel of the National Academies of Science and Engineering. The large disparity in the EPA approaches to regulation of disposal of radioactive wastes and disposal of hazardous, long-lived, nonradioactive chemical waste is illustrated. An examination of the intertwined matters of intergenerational equity and the discounting of future health effects follows, together with a discussion of the conflict between intergenerational equity and intragenerational equity. Finally, issues related to assumptions in the regulations concerning the future state of society and the biosphere are treated, as is the absence of any national philosophy or guiding policy for how to deal with societal activities that pose very long-term risks.

19.
Fish consumption rates play a critical role in the assessment of human health risks posed by the consumption of fish from chemically contaminated water bodies. Based on data from the 1989 Michigan Sport Anglers Fish Consumption Survey, we examined total fish consumption, consumption of self-caught fish, and consumption of Great Lakes fish for all adults, men, women, and certain higher risk subgroups such as anglers. We present average daily consumption rates as compound probability distributions consisting of a Bernoulli trial (to distinguish those who ate fish from those who did not) combined with a distribution (both empirical and parametric) for those who ate fish. We found that the average daily consumption rates for adults who ate fish are reasonably well fit by lognormal distributions. The compound distributions may be used as input variables for Monte Carlo simulations in public health risk assessments.
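The compound distribution described above (a Bernoulli trial for whether fish was eaten, combined with a lognormal intake for consumers) is straightforward to simulate for Monte Carlo use. The consumption probability and lognormal parameters below are invented, not the Michigan survey estimates.

```python
import numpy as np

rng = np.random.default_rng(3)

def daily_fish_intake(n, p_eats=0.25, log_mean=2.5, log_sd=1.0):
    """Simulate n daily intakes (g/day): zero on non-consumption days,
    lognormal on consumption days."""
    eats = rng.random(n) < p_eats                    # Bernoulli trial
    amount = rng.lognormal(log_mean, log_sd, size=n) # intake given consumption
    return np.where(eats, amount, 0.0)

intake = daily_fish_intake(100_000)
print(intake.mean(), np.mean(intake == 0))
```

The population mean is the consumption probability times the lognormal mean exp(log_mean + log_sd²/2), a useful closed-form check on the simulation.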

20.
There are a number of sources of variability in food consumption patterns and residue levels of a particular chemical (e.g., pesticide, food additive) in commodities that lead to an expected high level of variability in dietary exposures across a population. This paper focuses on examples of consumption pattern survey data for specific commodities, namely wine and grape juice, and demonstrates how such data might be analyzed in preparation for performing stochastic analyses of dietary exposure. Data from the NIAAA/NHIS wine consumption survey were subset for gender and age group and, with matched body weight data from the survey database, were used to define empirically-based percentile estimates for wine intake (μl wine/kg body weight) for the strata of interest. The data for these two subpopulations were analyzed to estimate 14-day consumption distributional statistics and distributions for only those days on which wine was consumed. Data subsets for all wine-consuming adults and for wine-consuming females ages 18 through 45 were determined to fit a lognormal distribution (R² = 0.99 for both datasets). Market share data were incorporated into estimation of chronic exposures to hypothetical chemical residues in imported table wine. As a separate example, treatment of grape juice consumption data for females, ages 18–40, as a simple lognormal distribution resulted in a significant underestimation of intake, and thus exposure, because the actual distribution is a mixture (i.e., multiple subpopulations of grape juice consumers exist in the parent distribution). Thus, deriving dietary intake statistics from food consumption survey data requires careful analysis of the underlying empirical distributions.
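The mixture caution above can be demonstrated numerically: fitting a single lognormal to data drawn from two consumer subpopulations understates the upper percentiles. All parameters are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(4)

# two hypothetical subpopulations of grape-juice consumers
# (e.g., occasional vs. regular), each lognormal on its own
occasional = rng.lognormal(mean=1.0, sigma=0.4, size=8000)
regular = rng.lognormal(mean=3.0, sigma=0.4, size=2000)
mixture = np.concatenate([occasional, regular])

# naive single-lognormal fit via moments of the log data (MLE for lognormal)
mu, sigma = np.log(mixture).mean(), np.log(mixture).std()
fit_p99 = np.exp(mu + 2.326 * sigma)   # 99th percentile of the fitted model
true_p99 = np.percentile(mixture, 99)  # empirical 99th percentile

print(true_p99, fit_p99)   # the single-lognormal fit falls short of the true tail
```

The fitted 99th percentile lands well below the empirical one because the high-intake subpopulation dominates the tail, which a single lognormal cannot represent.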


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号