Similar Literature
20 similar records found (search time: 562 ms)
1.
The Constrained Extremal Distribution Selection Method   (Cited 5 times: 0 self-citations, 5 by others)
Engineering design and policy formulation often involve the assessment of the likelihood of future events commonly expressed through a probability distribution. Determination of these distributions is based, when possible, on observational data. Unfortunately, these data are often incomplete, biased, and/or incorrect. These problems are exacerbated when policy formulation involves the risk of extreme events—situations of low likelihood and high consequences. Usually, observational data simply do not exist for such events. Therefore, determination of probabilities which characterize extreme events must utilize all available knowledge, be it subjective or observational, so as to most accurately reflect the likelihood of such events. Extending previous work on the statistics of extremes, the Constrained Extremal Distribution Selection Method is a methodology that assists in the selection of probability distributions that characterize the risk of extreme events using expert opinion to constrain the feasible values for parameters which explicitly define a distribution. An extremal distribution is then "fit" to observational data, conditional on the selected parameters not violating any constraints. Using a random search technique, genetic algorithms, parameters that minimize a measure of fit between a hypothesized distribution and observational data are estimated. The Constrained Extremal Distribution Selection Method is applied to a real-world policy problem faced by the U.S. Environmental Protection Agency. Selected distributions characterize the likelihood of extreme, fatal hazardous material accidents in the United States. These distributions are used to characterize the risk of large-scale accidents with numerous fatalities.
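The fitting procedure described above lends itself to a short illustration. The sketch below fits a Gumbel (Type I extremal) distribution to a small sample by minimizing a Kolmogorov-Smirnov statistic over an expert-bounded parameter region, using scipy's differential evolution as a population-based stand-in for the genetic-algorithm search; the sample, the bounds, and the choice of fit statistic are illustrative assumptions, not taken from the paper.

```python
# Minimal sketch: constrained fit of an extremal (Gumbel) distribution.
# Expert opinion is assumed to bound the feasible (location, scale) region;
# a population-based random search minimizes the lack of fit to the data.
import numpy as np
from scipy.optimize import differential_evolution
from scipy.stats import gumbel_r, kstest

rng = np.random.default_rng(0)
observed = gumbel_r.rvs(loc=10.0, scale=3.0, size=25, random_state=rng)  # hypothetical sample

# Expert-derived constraints on the feasible parameter values (assumed).
bounds = [(8.0, 15.0),   # location
          (1.0, 5.0)]    # scale

def lack_of_fit(params):
    loc, scale = params
    # Kolmogorov-Smirnov statistic as the measure of fit to be minimized.
    return kstest(observed, gumbel_r(loc=loc, scale=scale).cdf).statistic

# differential_evolution plays the role of the genetic-algorithm search:
# candidate parameter vectors are confined to the expert-constrained region.
result = differential_evolution(lack_of_fit, bounds, seed=1)
print("fitted location and scale:", result.x, " KS statistic:", result.fun)
```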

2.
Variability arises due to differences in the value of a quantity among different members of a population. Uncertainty arises due to lack of knowledge regarding the true value of a quantity for a given member of a population. We describe and evaluate two methods for quantifying both variability and uncertainty. These methods, bootstrap simulation and a likelihood-based method, are applied to three datasets. The datasets include a synthetic sample of 19 values from a Lognormal distribution, a sample of nine values obtained from measurements of the PCB concentration in leafy produce, and a sample of five values for the partitioning of chromium in the flue gas desulfurization system of coal-fired power plants. For each of these datasets, we employ the two methods to characterize uncertainty in the arithmetic mean and standard deviation, cumulative distribution functions based upon fitted parametric distributions, the 95th percentile of variability, and the 63rd percentile of uncertainty for the 81st percentile of variability. The latter is intended to show that it is possible to describe any point within the uncertain frequency distribution by specifying an uncertainty percentile and a variability percentile. Using the bootstrap method, we compare results based upon use of the method of matching moments and the method of maximum likelihood for fitting distributions to data. Our results indicate that with only 5–19 data points as in the datasets we have evaluated, there is substantial uncertainty based upon random sampling error. Both the bootstrap and likelihood-based approaches yield comparable uncertainty estimates in most cases.
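As a concrete illustration of the bootstrap half of this comparison, the sketch below resamples a small synthetic dataset, refits a lognormal by maximum likelihood on each resample, and summarizes the resulting uncertainty in the arithmetic mean and in the 95th percentile of variability; the dataset and the 90% uncertainty intervals are illustrative assumptions, not the paper's data.

```python
# Minimal sketch: bootstrap simulation to quantify uncertainty in statistics
# of a fitted lognormal distribution of variability.
import numpy as np

rng = np.random.default_rng(0)
data = rng.lognormal(mean=1.0, sigma=0.5, size=19)   # synthetic 19-point sample (assumed)

B = 2000
means, p95s = [], []
for _ in range(B):
    resample = rng.choice(data, size=data.size, replace=True)
    # Maximum-likelihood lognormal fit: moments of the log-transformed resample.
    mu, sigma = np.log(resample).mean(), np.log(resample).std()
    means.append(np.exp(mu + sigma**2 / 2))     # arithmetic mean of the fitted distribution
    p95s.append(np.exp(mu + 1.645 * sigma))     # 95th percentile of variability

print("90% uncertainty interval for the mean:", np.percentile(means, [5, 95]))
print("90% uncertainty interval for the 95th percentile:", np.percentile(p95s, [5, 95]))
```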

3.
The evaluation of the risk of water quality failures in a distribution network is a challenging task given that much of the available data are highly uncertain and vague, and many of the mechanisms are not fully understood. Consequently, a systematic approach is required to handle quantitative-qualitative data as well as a means to update existing information when new knowledge and data become available. Five general pathways (mechanisms) through which a water quality failure can occur in the distribution network are identified in this article. These include contaminant intrusion, leaching and corrosion, biofilm formation and microbial regrowth, permeation, and water treatment breakthrough (including disinfection byproducts formation). The proposed methodology is demonstrated using a simplified example for water quality failures in a distribution network. This article builds upon previous developments of the aggregative risk analysis approach. Each basic risk item in a hierarchical framework is expressed by a triangular fuzzy number, which is derived from the composition of the likelihood of a failure event and the associated failure consequence. An analytic hierarchy process is used to estimate weights required for grouping noncommensurate risk sources. Evidential reasoning is proposed to incorporate newly arrived data for the updating of existing risk estimates. The exponential ordered weighted averaging operators are used for defuzzification to incorporate the attitudinal dimension for risk management. It is envisaged that the proposed approach could serve as a basis to benchmark acceptable risks in water distribution networks.
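A toy version of the aggregation step can be written in a few lines. The sketch below composes likelihood and consequence as triangular fuzzy numbers, aggregates two basic risk items with AHP-style weights, and defuzzifies with a simple centroid in place of the exponential ordered weighted averaging operators used in the article; all numbers, and the endpoint arithmetic used for the fuzzy product, are illustrative assumptions.

```python
# Minimal sketch: triangular fuzzy numbers (a, b, c) for basic risk items,
# weighted aggregation, and a simple centroid defuzzification.
import numpy as np

def tfn_mul(x, y):
    """Approximate product of two triangular fuzzy numbers via their endpoints."""
    return tuple(np.array(x) * np.array(y))

def tfn_weighted_sum(tfns, weights):
    """Weighted sum of triangular fuzzy numbers (endpoint-wise)."""
    return tuple(np.tensordot(weights, np.array(tfns), axes=1))

def centroid(tfn):
    """Centroid defuzzification (a stand-in for the OWA-based defuzzification)."""
    return sum(tfn) / 3.0

# Two hypothetical basic risk items: fuzzy likelihood composed with fuzzy consequence.
intrusion = tfn_mul((0.2, 0.3, 0.5), (0.6, 0.7, 0.9))
leaching = tfn_mul((0.1, 0.2, 0.3), (0.3, 0.4, 0.6))

weights = np.array([0.6, 0.4])   # AHP-style importance weights (assumed)
aggregate = tfn_weighted_sum([intrusion, leaching], weights)
print("aggregate fuzzy risk (a, b, c):", aggregate, " defuzzified:", centroid(aggregate))
```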

4.
Twenty-four-hour recall data from the Continuing Survey of Food Intake by Individuals (CSFII) are frequently used to estimate dietary exposure for risk assessment. Food frequency questionnaires are traditional instruments of epidemiological research; however, their application in dietary exposure and risk assessment has been limited. This article presents a probabilistic method of bridging the National Health and Nutrition Examination Survey (NHANES) food frequency and the CSFII data to estimate longitudinal (usual) intake, using a case study of seafood mercury exposures for two population subgroups (females 16 to 49 years and children 1 to 5 years). Two hundred forty-nine CSFII food codes were mapped into 28 NHANES fish/shellfish categories. FDA and state/local seafood mercury data were used. A uniform distribution with minimum and maximum blood-diet ratios of 0.66 and 1.07 was assumed. A probabilistic assessment was conducted to estimate distributions of individual 30-day average daily fish/shellfish intakes, methyl mercury exposure, and blood levels. The upper percentile estimates of fish and shellfish intakes based on the 30-day daily averages were lower than those based on two- and three-day daily averages. These results support previous findings that distributions of "usual" intakes based on a small number of consumption days provide overestimates in the upper percentiles. About 10% of the females (16 to 49 years) and children (1 to 5 years) may be exposed to mercury levels above the EPA's RfD. The predicted 75th and 90th percentile blood mercury levels for the females in the 16-to-49-year group were similar to those reported by NHANES. The predicted 90th percentile blood mercury level for children in the 1-to-5-year subgroup was similar to NHANES, and the 75th percentile estimates were slightly above the NHANES values.
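The effect of the averaging window on upper-percentile intake estimates is easy to reproduce with a toy simulation. The sketch below draws hypothetical lognormal daily intakes and compares the 95th percentile of 2-day and 30-day individual averages; the distribution and its parameters are assumptions for illustration only.

```python
# Minimal sketch: upper percentiles of "usual" intake shrink as the averaging window grows.
import numpy as np

rng = np.random.default_rng(0)
n_people = 10_000

def p95_of_average(days):
    daily = rng.lognormal(mean=2.0, sigma=1.0, size=(n_people, days))  # hypothetical intakes
    return np.percentile(daily.mean(axis=1), 95)

print("95th percentile of 2-day average intakes :", p95_of_average(2))
print("95th percentile of 30-day average intakes:", p95_of_average(30))
```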

5.
The traditional EVT approach studies the properties of exceedance data from a static perspective. However, it does not also exploit the information implicit in the times at which extreme observations occur. This paper is the first in China to propose the concept of non-homogeneous spatial dynamic extreme value theory (TD-EVT), which overcomes this shortcoming of EVT: beyond the extreme observations themselves, it accounts for the time factor and introduces several explanatory variables, so that the three parameters of the extreme value distribution become time-varying. A distinctive feature of the paper is the construction of a dynamic spatial model based on a two-dimensional Poisson process. Applying TD-EVT to the estimation of value at risk under extreme conditions is of considerable theoretical and practical significance for financial risk management, asset pricing, and related problems.

6.
Probabilistic risk assessments are enjoying increasing popularity as a tool to characterize the health hazards associated with exposure to chemicals in the environment. Because probabilistic analyses provide much more information to the risk manager than standard "point" risk estimates, this approach has generally been heralded as one which could significantly improve the conduct of health risk assessments. The primary obstacles to replacing point estimates with probabilistic techniques include a general lack of familiarity with the approach and a lack of regulatory policy and guidance. This paper discusses some of the advantages and disadvantages of the point estimate vs. probabilistic approach. Three case studies are presented which contrast and compare the results of each. The first addresses the risks associated with household exposure to volatile chemicals in tapwater. The second evaluates airborne dioxin emissions which can enter the food-chain. The third illustrates how to derive health-based cleanup levels for dioxin in soil. It is shown that, based on the results of Monte Carlo analyses of probability density functions (PDFs), the point estimate approach required by most regulatory agencies will nearly always overpredict the risk for the 95th percentile person by a factor of up to 5. When the assessment requires consideration of 10 or more exposure variables, the point estimate approach will often predict risks representative of the 99.9th percentile person rather than the 50th or 95th percentile person. This paper recommends a number of data distributions for various exposure variables that we believe are now sufficiently well understood to be used with confidence in most exposure assessments. A list of exposure variables that may require additional research before adequate data distributions can be developed is also discussed.
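The compounding effect described for assessments with many exposure variables can be illustrated with a toy multiplicative risk model. In the sketch below, ten hypothetical lognormal exposure factors are each set to their 95th percentile to form a deterministic point estimate, which is then located within the Monte Carlo distribution of the same product; the number of inputs and their spreads are assumptions, and the degree of overprediction depends on both.

```python
# Minimal sketch: point estimate built from 95th-percentile inputs vs. the
# Monte Carlo distribution of a multiplicative risk model.
import numpy as np
from scipy.stats import lognorm

rng = np.random.default_rng(0)
n_vars, sigma = 10, 0.6                       # hypothetical number of inputs and spread
factors = [lognorm(s=sigma) for _ in range(n_vars)]

# Deterministic "point estimate": every input at its own 95th percentile.
point_estimate = np.prod([f.ppf(0.95) for f in factors])

# Probabilistic alternative: Monte Carlo on the same inputs.
samples = np.prod([f.rvs(size=200_000, random_state=rng) for f in factors], axis=0)

percentile_of_point = (samples < point_estimate).mean() * 100
print(f"point estimate falls at the {percentile_of_point:.2f}th percentile of simulated risk")
```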

7.
A method is developed for estimating a probability distribution using estimates of its percentiles provided by experts. The analyst's judgment concerning the credibility of these expert opinions is quantified in the likelihood function of Bayes' Theorem. The model considers explicitly the random variability of each expert estimate, the dependencies among the estimates of each expert, the dependencies among experts, and potential systematic biases. The relation between the results of the formal methods of this paper and methods used in practice is explored. A series of sensitivity studies provides insights into the significance of the parameters of the model. The methodology is applied to the problem of estimation of seismic fragility curves (i.e., the conditional probability of equipment failure given a seismically induced stress).
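A stripped-down version of the Bayesian machinery can be sketched as follows: experts report noisy estimates of two percentiles of an unknown lognormal quantity, each estimate enters the likelihood with an assumed credibility (error spread), and the posterior over the distribution parameters is computed on a grid. The expert inputs, the lognormal form, the independence of estimates, and the error spread are all illustrative assumptions; the paper's treatment of dependencies among estimates and experts and of systematic biases is omitted.

```python
# Minimal sketch: Bayesian update of distribution parameters from expert percentile estimates.
import numpy as np
from scipy.stats import norm

# Hypothetical expert estimates of the median and the 95th percentile of the quantity.
expert_p50 = np.array([4.0, 6.0, 5.0])
expert_p95 = np.array([15.0, 25.0, 18.0])
tau = 0.3   # assumed log-scale error of the expert estimates (the analyst's credibility judgment)

# Grid over the parameters (mu, sigma) of the underlying lognormal distribution.
mu_grid, sigma_grid = np.meshgrid(np.linspace(0.5, 2.5, 200), np.linspace(0.2, 1.5, 200))
log_p50 = mu_grid                        # log of the true median
log_p95 = mu_grid + 1.645 * sigma_grid   # log of the true 95th percentile

# Likelihood: each expert estimate is an independent lognormal perturbation of the truth.
loglik = (norm.logpdf(np.log(expert_p50)[:, None, None], log_p50, tau).sum(axis=0)
          + norm.logpdf(np.log(expert_p95)[:, None, None], log_p95, tau).sum(axis=0))
posterior = np.exp(loglik - loglik.max())
posterior /= posterior.sum()

idx = np.unravel_index(posterior.argmax(), posterior.shape)
print("posterior mode: mu =", mu_grid[idx], " sigma =", sigma_grid[idx])
```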

8.
The qualitative and quantitative evaluation of risk in developmental toxicology has been discussed in several recent publications.(1–3) A number of issues remain to be resolved in this area. The qualitative evaluation and interpretation of end points in developmental toxicology depends on an understanding of the biological events leading to the end points observed, the relationships among end points, and their relationship to dose and to maternal toxicity. The interpretation of these end points is also affected by the statistical power of the experiments used for detecting the various end points observed. The quantitative risk assessment attempts to estimate human risk for developmental toxicity as a function of dose. The current approach is to apply safety (uncertainty) factors to the no observed effect level (NOEL). An alternative presented and discussed here is to model the experimental data and apply a safety factor to an estimated risk level to achieve an “acceptable” level of risk. In cases where the dose-response curves upward, this approach provides a conservative estimate of risk. This procedure does not preclude the existence of a threshold dose. More research is needed to develop appropriate dose-response models that can provide better estimates for low-dose extrapolation of developmental effects.
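The alternative described in the last sentences, modeling the data and applying a safety factor to the dose at an estimated risk level, resembles a benchmark-dose calculation. The sketch below fits a logistic dose-response to hypothetical incidence counts by maximum likelihood, solves for the dose giving 1% extra risk, and divides by a safety factor; the data, the logistic form, the 1% benchmark, and the factor of 100 are all assumptions for illustration.

```python
# Minimal sketch: fit a dose-response model and apply a safety factor to the dose
# at a chosen ("acceptable") extra-risk level instead of to a NOEL.
import numpy as np
from scipy.optimize import minimize, brentq

doses = np.array([0.0, 10.0, 30.0, 100.0])
exposed = np.array([50, 50, 50, 50])
affected = np.array([1, 3, 9, 27])        # hypothetical developmental-effect counts

def prob(dose, a, b):
    return 1.0 / (1.0 + np.exp(-(a + b * dose)))

def neg_log_lik(params):
    p = np.clip(prob(doses, *params), 1e-12, 1 - 1e-12)
    return -np.sum(affected * np.log(p) + (exposed - affected) * np.log(1 - p))

a_hat, b_hat = minimize(neg_log_lik, x0=[-3.0, 0.05], method="Nelder-Mead").x

p0 = prob(0.0, a_hat, b_hat)

def extra_risk(d):
    return (prob(d, a_hat, b_hat) - p0) / (1 - p0) - 0.01

bmd = brentq(extra_risk, 1e-6, doses.max())    # dose at 1% extra risk
print("dose at 1% extra risk:", bmd, " with a safety factor of 100:", bmd / 100)
```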

9.
A central part of probabilistic public health risk assessment is the selection of probability distributions for the uncertain input variables. In this paper, we apply the first-order reliability method (FORM)(1–3) as a probabilistic tool to assess the effect of probability distributions of the input random variables on the probability that risk exceeds a threshold level (termed the probability of failure) and on the relevant probabilistic sensitivities. The analysis was applied to a case study given by Thompson et al. (4) on cancer risk caused by the ingestion of benzene-contaminated soil. Normal, lognormal, and uniform distributions were used in the analysis. The results show that the selection of a probability distribution function for the uncertain variables in this case study had a moderate impact on the probability that values would fall above a given threshold risk when the threshold risk is at the 50th percentile of the original distribution given by Thompson et al. (4) The impact was much greater when the threshold risk level was at the 95th percentile. The impact on uncertainty sensitivity, however, showed a reversed trend, where the impact was more appreciable for the 50th percentile of the original distribution of risk given by Thompson et al. (4) than for the 95th percentile. Nevertheless, the choice of distribution shape did not alter the order of probabilistic sensitivity of the basic uncertain variables.
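The question posed here, how the distributional assumption for an input shifts the probability that risk exceeds a threshold, can be illustrated with plain Monte Carlo standing in for the FORM calculation. In the sketch below, risk is a product of an intake factor and a potency factor; the intake distribution is swapped among lognormal, truncated normal, and uniform alternatives with comparable central values, and the exceedance probability is recomputed. The model, distributions, and parameters are illustrative assumptions, not the case-study inputs.

```python
# Minimal sketch: effect of the input distribution choice on P(risk > threshold),
# with Monte Carlo used as a stand-in for the first-order reliability method.
import numpy as np

rng = np.random.default_rng(0)
n = 500_000

def risk_samples(dist):
    if dist == "lognormal":
        intake = rng.lognormal(np.log(1.0), 0.5, n)
    elif dist == "normal":
        intake = np.clip(rng.normal(1.1, 0.6, n), 1e-9, None)   # truncated at zero
    else:  # uniform with a similar mean
        intake = rng.uniform(0.1, 2.1, n)
    potency = rng.lognormal(np.log(2e-3), 0.4, n)               # shared second factor
    return intake * potency

threshold = np.median(risk_samples("lognormal"))   # threshold at the base case's 50th percentile
for dist in ("lognormal", "normal", "uniform"):
    p_exceed = (risk_samples(dist) > threshold).mean()
    print(f"{dist:10s} P(risk > threshold) = {p_exceed:.3f}")
```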

10.
A call for risk assessment approaches that better characterize and quantify uncertainty has been made by the scientific and regulatory community. This paper responds to that call by demonstrating a distributional approach that draws upon human data to derive potency estimates and to identify and quantify important sources of uncertainty. The approach is rooted in the science of decision analysis and employs an influence diagram, a decision tree, probabilistic weights, and a distribution of point estimates of carcinogenic potency. Its results estimate the likelihood of different carcinogenic risks (potencies) for a chemical under a specific scenario. For this exercise, human data on formaldehyde were employed to demonstrate the approach. Sensitivity analyses were performed to determine the relative impact of specific levels and alternatives on the potency distribution. The resulting potency estimates are compared with the results of an exercise using animal data on formaldehyde. The paper demonstrates that distributional risk assessment is readily adapted to situations in which epidemiologic data serve as the basis for potency estimates. Strengths and weaknesses of the distributional approach are discussed. Areas for further application and research are recommended.

11.
In the analytic hierarchy process (AHP), interval judgments instead of precise ratios are widely accepted and can be practically used to solve decision-making problems when uncertainty exists because of scant information available or insufficient understanding of the problem. This paper presents a simple and effective method for finding the extreme points in a range of interval ratios (such as loose articulation, minimum number of interval ratios, and general interval ratios) and ultimately for establishing the dominance relations among alternatives using the identified extreme points. This is followed by an enumeration or simulation approach to manage situations in which the best or a full ranking of alternatives remains unidentified.

12.
Concern about the degree of uncertainty and potential conservatism in deterministic point estimates of risk has prompted researchers to turn increasingly to probabilistic methods for risk assessment. With Monte Carlo simulation techniques, distributions of risk reflecting uncertainty and/or variability are generated as an alternative. In this paper, the compounding of conservatism(1) between the level associated with point estimate inputs selected from probability distributions and the level associated with the deterministic value of risk calculated using these inputs is explored. Two measures of compounded conservatism are compared and contrasted. The first measure considered, F, is defined as the ratio of the risk value, Rd, calculated deterministically as a function of n inputs each at the jth percentile of its probability distribution, to the risk value, Rj, that falls at the jth percentile of the simulated risk distribution (i.e., F = Rd/Rj). The percentile of the simulated risk distribution that corresponds to the deterministic value, Rd, serves as a second measure of compounded conservatism. Analytical results for simple products of lognormal distributions are presented. In addition, a numerical treatment of several complex cases is presented using five simulation analyses from the literature to illustrate. Overall, there are cases in which conservatism compounds dramatically for deterministic point estimates of risk constructed from upper percentiles of input parameters, as well as those for which the effect is less notable. The analytical and numerical techniques discussed are intended to help analysts explore the factors that influence the magnitude of compounding conservatism in specific cases.
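For the simple product-of-lognormals case mentioned above, both measures have closed forms, which the sketch below writes out and checks by simulation. With independent lognormal inputs whose log-scale standard deviations are sigma_i (and log-means absorbed into a constant), F = exp(z_j * (sum_i sigma_i - sqrt(sum_i sigma_i^2))), where z_j is the standard normal jth percentile. The input spreads used here are assumptions for illustration.

```python
# Minimal sketch: compounded conservatism F = Rd / Rj for a product of lognormal inputs,
# closed form vs. Monte Carlo, plus the percentile at which Rd falls.
import numpy as np
from scipy.stats import norm

sigmas = np.array([0.4, 0.7, 0.5])   # hypothetical log-scale sd of each input (log-means = 0)
j = 0.95
z = norm.ppf(j)

# Closed form for independent lognormal inputs.
F_analytic = np.exp(z * (sigmas.sum() - np.sqrt((sigmas**2).sum())))

# Monte Carlo check.
rng = np.random.default_rng(0)
inputs = rng.lognormal(mean=0.0, sigma=sigmas, size=(500_000, sigmas.size))
risk = inputs.prod(axis=1)
Rd = np.exp(z * sigmas).prod()            # each input at its jth percentile
Rj = np.percentile(risk, 100 * j)         # jth percentile of the simulated risk
print("F analytic:", F_analytic, " F simulated:", Rd / Rj)
print("percentile of the simulated risk distribution at Rd:", (risk < Rd).mean() * 100)
```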

13.
We propose a method to correct for sample selection in quantile regression models. Selection is modeled via the cumulative distribution function, or copula, of the percentile error in the outcome equation and the error in the participation decision. Copula parameters are estimated by minimizing a method‐of‐moments criterion. Given these parameter estimates, the percentile levels of the outcome are readjusted to correct for selection, and quantile parameters are estimated by minimizing a rotated "check" function. We apply the method to correct wage percentiles for selection into employment, using data for the UK for the period 1978–2000. We also extend the method to account for the presence of equilibrium effects when performing counterfactual exercises.

14.
In project portfolio selection, the aim is to choose projects which are expected to offer most value and satisfy relevant risk and other constraints. In this study, we show that uncertainties about how much value the projects will offer, combined with the fact that only a subset of the proposed projects will be selected, lead to inaccurate risk estimates about the aggregate value provided by the selected project portfolio. In particular, when downside risks are measured in terms of lower percentiles of the distribution of portfolio value, these risk estimates will exhibit a systematic bias. For deriving unbiased risk estimates, we present a calibration framework in which the required calibration can be presented in closed‐form in some cases or, more generally, derived by using Monte Carlo simulation to study a large number of project selection decisions. We also show that when the decision must comply with risk constraints, the introduction of tighter (more demanding) risk constraints can counterintuitively aggravate the underestimation of risks. Finally, we present how the calibrated risk estimates can be employed to align the portfolio with the decision maker's risk preferences while eliminating systematic biases in risk estimates.
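The selection-induced bias can be reproduced with a small simulation of repeated portfolio decisions. In the sketch below, project value estimates are unbiased before selection, the top projects by estimated value are chosen, and the decision maker's naive 5th-percentile statement about portfolio value is checked against the realized value; the breach rate exceeding the nominal 5% is the systematic bias that the calibration framework is designed to remove. All distributions and parameters are assumptions for illustration.

```python
# Minimal sketch: unbiased project value estimates plus selection of the best-looking
# projects means naive lower-percentile (downside-risk) statements breach too often.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
n_projects, k = 30, 8                     # candidate projects and portfolio size
tau, est_err, val_sd = 1.0, 0.4, 0.5      # spread of true means, estimation error, value noise
n_decisions = 20_000

breaches = 0
for _ in range(n_decisions):
    true_mean = rng.normal(0.0, tau, n_projects)
    estimate = true_mean + rng.normal(0.0, est_err, n_projects)  # unbiased before selection
    chosen = np.argsort(estimate)[-k:]                           # pick the top-k by estimate
    # Naive 5th-percentile statement for the value of the selected portfolio.
    stated_p05 = estimate[chosen].sum() + norm.ppf(0.05) * np.sqrt(k) * val_sd
    realized = rng.normal(true_mean[chosen], val_sd).sum()       # realized portfolio value
    breaches += realized < stated_p05

print(f"nominal breach rate: 5.0%   observed: {100 * breaches / n_decisions:.1f}%")
```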

15.
Ozone depletion potential (ODP) represents the cumulative ozone depletion induced by a particular halocarbon relative to a reference gas (usually trichlorofluoromethane, CFC-11). We focus on ODP estimation for methyl bromide. Previous attempts at its estimation have assumed that components of the ODP equation are lognormally distributed. By considering a wide range of modeling scenarios, we show that this restriction (which is based on computational convenience rather than experimental evidence) has obscured the true uncertainty in the ODP value. Moreover, when publishing point estimates for the ODP value, previous authors have given either mean or median values. We submit that a more appropriate choice for a point estimate is the mode since the distribution of ODP is skewed and since the mode is, by definition, the most likely value. For each modeling scenario considered, modal values are given. In general, we find these ODP point estimates are considerably lower than those published elsewhere.
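The gap between mean, median, and mode for a right-skewed uncertainty distribution is easy to see numerically. The sketch below builds a skewed ODP-like quantity as a ratio of two uncertain factors (purely for illustration, not the ODP equation itself) and compares the three candidate point estimates, with the mode located by a kernel density estimate.

```python
# Minimal sketch: for a right-skewed uncertainty distribution the mode (most likely value)
# lies below the median, which lies below the mean.
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(0)
numerator = rng.gamma(shape=3.0, scale=0.1, size=20_000)        # hypothetical components
denominator = rng.lognormal(mean=0.0, sigma=0.4, size=20_000)
odp_like = numerator / denominator

kde = gaussian_kde(odp_like)
grid = np.linspace(odp_like.min(), np.percentile(odp_like, 99), 2000)
mode = grid[np.argmax(kde(grid))]
print("mean:", odp_like.mean(), " median:", np.median(odp_like), " mode:", mode)
```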

16.
Given the prevalence of uncertainty and variability in estimates of environmental health risks, it is important to know how citizens interpret information representing uncertainty in risk estimates. Ranges of risk estimates from a hypothetical industry source elicited divergent evaluations of risk assessors' honesty and competence among New Jersey residents within one mile of one or more factories. A plurality saw ranges of risk estimates as both honest and competent, but with most judging such ranges as deficient on one or both dimensions. They wanted definitive conclusions about safety, tended to believe the high end of the range was more likely to be an accurate estimate of the risk, and believed that institutions only discuss risks when they are "high." Acknowledgment of scientific, as opposed to self-interested, reasons for uncertainty and disputes among experts was low. Attitude toward local industry seemed associated with, if not a cause of, attitudes about ranges of risk estimates. These reactions by industry neighbors appear to replicate the findings of Johnson and Slovic (1995, 1998), despite the hypothetical producer of risk estimates being industry instead of government. Respondents were older and less educated on average than were the earlier samples, but more diverse. Regression analyses suggested attitude toward industry was a major factor in these reactions, although other explanations (e.g., level of scientific understanding independent of general education) were not tested in this study.

17.
Traditionally, microbial risk assessors have used point estimates to evaluate the probability that an individual will become infected. We developed a quantitative approach that shifts the risk characterization perspective from point estimate to distributional estimate, and from individual to population. To this end, we first designed and implemented a dynamic model that tracks traditional epidemiological variables such as the numbers of susceptible, infected, diseased, and immune individuals, and environmental variables such as pathogen density. Second, we used a simulation methodology that explicitly acknowledges the uncertainty and variability associated with the data. Specifically, the approach consists of assigning probability distributions to each parameter, sampling from these distributions for Monte Carlo simulations, and using a binary classification to assess the output of each simulation. A case study is presented that explores the uncertainties in assessing the risk of giardiasis when swimming in a recreational impoundment using reclaimed water. Using literature-based information to assign parameter ranges, our analysis demonstrated that the parameter describing the shedding of pathogens by infected swimmers was the factor that contributed most to the uncertainty in risk. The importance of other parameters was dependent on reducing the a priori range of this shedding parameter. When the shedding parameter was constrained to its lower subrange, treatment efficiency was the parameter most important in predicting whether a simulation resulted in prevalences above or below nonoutbreak levels, whereas parameters associated with human exposure were important when the shedding parameter was constrained to a higher subrange. This Monte Carlo simulation technique identified conditions in which outbreaks and/or nonoutbreaks are likely and identified the parameters that most contributed to the uncertainty associated with a risk prediction.

18.
Statements such as "80% of the employees do 20% of the work" or "the richest 1% of society controls 10% of its assets" are commonly used to describe the distribution or concentration of a variable characteristic within a population. Analogous statements can be constructed to reflect the relationship between probability and concentration for unvarying quantities surrounded by uncertainty. Both kinds of statements represent specific usages of a general relationship, the "mass density function," that is not widely exploited in risk analysis and management. This paper derives a simple formula for the mass density function when the uncertainty and/or the variability in a quantity is lognormally distributed; the formula gives the risk analyst an exact, "back-of-the-envelope" method for determining the fraction of the total amount of a quantity contained within any portion of its distribution. For example, if exposures to a toxicant are lognormally distributed with σln x = 2, 50% of all the exposure is borne by the 2.3% of persons most heavily exposed. Implications of this formula for various issues in risk assessment are explored, including: (1) the marginal benefits of risk reduction; (2) distributional equity and risk perception; (3) accurate confidence intervals for the population mean when a limited set of data is available; (4) the possible biases introduced by the uncritical assumption that extreme "outliers" exist; and (5) the calculation of the value of new information.
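The quoted example follows from the lognormal concentration (Lorenz-type) relation: the share of the total borne by the most heavily exposed fraction q is 1 - Φ(Φ^-1(1 - q) - σ), which equals 50% exactly when q = 1 - Φ(σ). The sketch below evaluates this for σln x = 2 and checks it by simulation; the reconstruction of the formula is an assumption here, while the numbers themselves come from the abstract.

```python
# Minimal sketch: lognormal concentration relation behind the "2.3% bear 50%" example.
import numpy as np
from scipy.stats import norm

sigma = 2.0                                    # sigma of ln(x), as in the abstract
q = 1 - norm.cdf(sigma)                        # fraction of persons most heavily exposed
share = 1 - norm.cdf(norm.ppf(1 - q) - sigma)  # share of total exposure borne by that fraction
print(f"top {q:.1%} of persons bear {share:.0%} of the total exposure")

# Monte Carlo check of the closed form.
x = np.random.default_rng(0).lognormal(mean=0.0, sigma=sigma, size=2_000_000)
cutoff = np.quantile(x, 1 - q)
print("simulated share:", x[x >= cutoff].sum() / x.sum())
```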

19.
We analyze the risk of severe fatal accidents causing five or more fatalities for nine different activities covering the entire oil chain. Included are exploration and extraction, transport by different modes, refining and final end use in power plants, heating or gas stations. The risks are quantified separately for OECD and non‐OECD countries, and trends are calculated. Risk is analyzed by employing a Bayesian hierarchical model yielding analytical functions for both frequency (Poisson) and severity distributions (Generalized Pareto) as well as frequency trends. This approach addresses a key problem in risk estimation, namely the scarcity of data, which results in high uncertainties, in particular for the risk of extreme events, where the risk is extrapolated beyond the historically most severe accidents. Bayesian data analysis allows the pooling of information from different data sets covering, for example, the different stages of the energy chains or different modes of transportation. In addition, it also inherently delivers a measure of uncertainty. This approach provides a framework that comprehensively covers risk throughout the oil chain, allowing the allocation of risk in sustainability assessments. It also permits the progressive addition of new data to refine the risk estimates. Frequency, severity, and trends show substantial differences between the activities, emphasizing the need for detailed risk analysis.
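A much-simplified, non-hierarchical version of the frequency/severity decomposition can be written down directly: yearly accident counts get a Poisson model with a conjugate Gamma posterior on the rate, and severities above the five-fatality threshold get a maximum-likelihood Generalized Pareto fit. The synthetic accident record, the Jeffreys-style prior, and the omission of the hierarchical pooling and trend components are all simplifying assumptions relative to the paper.

```python
# Minimal sketch: Poisson frequency with a Gamma posterior on the rate, and a
# Generalized Pareto fit to severities exceeding the five-fatality threshold.
import numpy as np
from scipy.stats import genpareto, gamma

rng = np.random.default_rng(0)
years = 30
counts = rng.poisson(2.5, size=years)                 # hypothetical severe accidents per year
severities = 5 + genpareto.rvs(c=0.3, scale=20, size=counts.sum(), random_state=rng)

# Posterior for the accident rate with a Jeffreys-style Gamma(0.5) prior.
rate_posterior = gamma(a=0.5 + counts.sum(), scale=1.0 / years)
print("95% credible interval for accidents per year:", rate_posterior.ppf([0.025, 0.975]))

# Maximum-likelihood Generalized Pareto fit to the exceedances over the threshold.
shape_hat, _, scale_hat = genpareto.fit(severities - 5, floc=0.0)
print("GPD shape and scale:", shape_hat, scale_hat)
print("P(severity > 100 fatalities | severe accident):",
      genpareto.sf(95, c=shape_hat, loc=0.0, scale=scale_hat))
```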

20.
Health care professionals are a major source of risk communications, but their estimation of risks may be compromised by systematic biases. We examined fuzzy-trace theory's predictions of professionals' biases in risk estimation for sexually transmitted infections (STIs) linked to: knowledge deficits (producing underestimation of STI risk, re-infection, and gender differences), gist-based mental representation of risk categories (producing overestimation of condom effectiveness for psychologically atypical but prevalent infections), retrieval failure for risk knowledge (producing greater risk underestimation when STIs are not specified), and processing interference involving combining risk estimates (producing biases in post-test estimation of infection, regardless of knowledge). One hundred seventy-four subjects (experts attending a national workshop, physicians, other health care professionals, and students) estimated the risk of teenagers contracting STIs, re-infection rates for males and females, and condom effectiveness in reducing infection risk. Retrieval was manipulated by asking estimation questions in two formats, a specific format that "unpacked" the STI category (infection types) and a global format that did not provide specific cues. Processing biases were assessed by requesting estimates of infection risk after the relevant knowledge was directly provided, isolating processing effects. As predicted, all groups of professionals underestimated the risk of STI transmission, re-infection, and gender differences, and overestimated the effectiveness of condoms, relative to published estimates. However, when questions provided better retrieval supports (specified format), estimation bias decreased. All groups of professionals also suffered from predicted processing biases. Although knowledge deficits contribute to estimation biases, the research showed that biases are also linked to fuzzy representations, retrieval failures, and processing errors. Hence, interventions that are designed to improve risk perception among professionals must incorporate more than knowledge dissemination. They should also provide support for information representation, effective retrieval, and accurate processing.

