Similar Articles
20 similar articles retrieved.
1.
A central part of probabilistic public health risk assessment is the selection of probability distributions for the uncertain input variables. In this paper, we apply the first-order reliability method (FORM)(1–3) as a probabilistic tool to assess the effect of probability distributions of the input random variables on the probability that risk exceeds a threshold level (termed the probability of failure) and on the relevant probabilistic sensitivities. The analysis was applied to a case study given by Thompson et al.(4) on cancer risk caused by the ingestion of benzene-contaminated soil. Normal, lognormal, and uniform distributions were used in the analysis. The results show that the selection of a probability distribution function for the uncertain variables in this case study had a moderate impact on the probability that values would fall above a given threshold risk when the threshold risk was at the 50th percentile of the original distribution given by Thompson et al.(4) The impact was much greater when the threshold risk level was at the 95th percentile. The impact on uncertainty sensitivity, however, showed a reversed trend: the impact was more appreciable for the 50th percentile of the original distribution of risk given by Thompson et al.(4) than for the 95th percentile. Nevertheless, the choice of distribution shape did not alter the order of probabilistic sensitivity of the basic uncertain variables.
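The FORM machinery described here can be sketched compactly: map the uncertain inputs to standard-normal space, find the design point closest to the origin on the failure surface, and read off the failure probability and sensitivities. The sketch below uses a toy risk model (risk = concentration x intake / body weight) with made-up lognormal parameters and threshold, not the benzene case-study values.

```python
# Minimal FORM sketch under a toy risk model; the limit state, lognormal
# parameters, and threshold below are hypothetical, not the case-study values.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

# (mu, sigma) of the underlying normal for each lognormal input -- hypothetical.
params = {"conc": (0.0, 0.5), "intake": (-1.0, 0.3), "bw": (4.2, 0.2)}
threshold = 0.05  # hypothetical threshold risk level

def g(u):
    # Limit state in standard-normal space: failure (risk > threshold) when g <= 0.
    conc, intake, bw = (np.exp(mu + sg * ui)
                        for (mu, sg), ui in zip(params.values(), u))
    return threshold - conc * intake / bw

# Design point: the point on g(u) = 0 closest to the origin in u-space.
res = minimize(lambda u: u @ u, x0=np.zeros(3),
               constraints={"type": "eq", "fun": g})
beta = np.sqrt(res.fun)    # reliability index
pf = norm.cdf(-beta)       # FORM estimate of P(risk > threshold)
alphas = res.x / beta      # direction cosines (probabilistic sensitivities)
print(f"beta = {beta:.3f}, Pf = {pf:.3e}, alphas = {np.round(alphas, 3)}")
```

Swapping the distribution family inside `g` (e.g., normal or uniform quantile maps instead of the lognormal transform) is how one would probe the abstract's question of how distribution choice moves the failure probability.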

2.
3.
This article develops and fits probability distributions for the variability in projected (total) job tenure for adult men and women in 31 industries and 22 occupations based on data reported by the U.S. Department of Labor's Bureau of Labor Statistics. It extends previously published results and updates those results from January 1987 to February 1996. The model provides probability distributions for the variability in projected (total) job tenures within the time range of the data, and it extrapolates the distributions beyond the time range of the data, i.e., beyond 25 years.

4.
This paper reanalyzes the dataset cited by the U.S. Environmental Protection Agency in its Exposure Factors Handbook that contains measurements of skin area, height, and body weight for 401 people spanning all stages of development. The reanalysis shows that a univariate model for total skin area as a function of body weight gives useful and practical results with little or no loss of reliability as compared to the Agency's bivariate model. This new result leads to a new method to develop lognormal distributions for total skin area as a function of body weight alone.
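As a hedged illustration of the univariate idea, the snippet below regresses log skin area on log body weight, so that skin area at any given weight follows a lognormal distribution; the 60-person dataset is synthetic, not the 401-person EPA data, and the coefficients are not those of the article.

```python
# Univariate sketch: log(SA) = a + b*log(BW) + normal error, so SA | BW is
# lognormal. The data below are synthetic placeholders.
import numpy as np

rng = np.random.default_rng(1)
bw = rng.uniform(5, 100, 60)                      # body weight, kg (synthetic)
sa = 0.1 * bw**0.67 * rng.lognormal(0, 0.05, 60)  # skin area, m^2 (synthetic)

X = np.column_stack([np.ones_like(bw), np.log(bw)])
coef, *_ = np.linalg.lstsq(X, np.log(sa), rcond=None)
resid = np.log(sa) - X @ coef
sigma = resid.std(ddof=2)  # log-scale spread of the fitted lognormal

# Lognormal for total skin area of a 70 kg adult under this model:
mu_70 = coef[0] + coef[1] * np.log(70.0)
print(f"SA(70 kg) ~ Lognormal(mu={mu_70:.3f}, sigma={sigma:.3f}); "
      f"median = {np.exp(mu_70):.3f} m^2")
```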

5.
Lognormal Distributions for Water Intake by Children and Adults
We fit lognormal distributions to data collected in a national survey for both total water intake and tap water intake by children and adults in these age groups (in years): 0 < age < 1; 1 ≤ age < 11; 11 ≤ age < 20; 20 ≤ age < 65; age ≥ 65; and all people in the survey taken as a single group. These distributions are suitable for use in public health risk assessments.
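A minimal sketch of fitting such a two-parameter lognormal with scipy follows; the intake sample is synthetic, since the survey data are not reproduced here.

```python
# Fit a two-parameter lognormal (loc fixed at 0) to an intake sample.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
intake_ml = rng.lognormal(mean=7.3, sigma=0.4, size=500)  # synthetic mL/day

shape, loc, scale = stats.lognorm.fit(intake_ml, floc=0)
print(f"geometric mean = {scale:.0f} mL/day, geometric sd = {np.exp(shape):.2f}")
print(f"95th percentile = {stats.lognorm.ppf(0.95, shape, loc, scale):.0f} mL/day")
```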

6.
The Monte Carlo method is commonly used to observe the overall distribution and to determine lower or upper bound values in statistical approaches when direct analytical calculation is unavailable. However, this method is not efficient when the tail area of a distribution is of concern. A new method, called Two-Step Tail Area Sampling, is developed; it assumes a discrete probability distribution and samples only the tail area without distorting the overall distribution. The method uses a two-step sampling procedure: first, sampling is done at points separated by large intervals; second, sampling is done at points separated by small intervals, with check points determined from the first-step sampling. Comparison with the Monte Carlo method shows that, for the same number of calculations, the results obtained from the new method converge to the analytic value faster than those of the Monte Carlo method. The new method is applied to the DNBR (Departure from Nucleate Boiling Ratio) prediction problem in the design of pressurized light water nuclear reactors.
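The coarse-then-fine idea can be illustrated as below: a wide-interval scan brackets where the response enters the tail, and only that bracket is resolved on a fine grid. The monotone response function and tail criterion are placeholders, not the DNBR model.

```python
# Two-step tail sketch: coarse scan to locate the tail, fine scan to resolve it.
import numpy as np
from scipy import stats

def response(x):             # placeholder monotone response, not DNBR
    return 1.3 - 0.1 * x

limit = 1.0                  # tail criterion: response below this limit
xs_coarse = np.linspace(-5, 5, 21)           # step 1: wide spacing
below = xs_coarse[response(xs_coarse) < limit]
lo = below.min() - 0.5                       # check point from step 1

xs_fine = np.linspace(lo, 5, 400)            # step 2: fine spacing, tail only
mask = response(xs_fine) < limit
# Tail probability from the discretized standard-normal input distribution.
w = stats.norm.pdf(xs_fine) * (xs_fine[1] - xs_fine[0])
print(f"P(response < {limit}) ≈ {w[mask].sum():.5f}")
print(f"exact: {1 - stats.norm.cdf(3.0):.5f}")  # here response<1 iff x>3
```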

7.
This note presents parameterized distributions of estimates of the amount of soil ingested by children based on data collected by Binder et al. (1986). Following discussions with Dr. Binder, we modified the Binder study data by using the actual stool weights instead of the 15 g value used in the original study. After testing the data for lognormality, we generated parameterized distributions for use in risk assessment uncertainty analyses such as Monte Carlo simulations.

8.
On Modeling Correlated Random Variables in Risk Assessment
Haas, Charles N. Risk Analysis, 1999, 19(6): 1205-1214
Monte Carlo methods in risk assessment are finding increasingly widespread application. With the recognition that inputs may be correlated, the incorporation of such correlations into the simulation has become important. Most implementations rely upon the method of Iman and Conover for generating correlated random variables. In this work, alternative methods using copulas are presented for deriving correlated random variables. It is further shown that the particular algorithm or assumption used may have a substantial effect on the output results, due to differences in higher order bivariate moments.
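For readers who want to try a copula-based alternative to Iman-Conover, the sketch below induces rank correlation between two lognormal inputs through a Gaussian copula; the correlation value and marginals are arbitrary choices for illustration.

```python
# Gaussian copula: correlated normals -> correlated uniforms -> target marginals.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
rho, n = 0.7, 100_000
z = rng.multivariate_normal([0.0, 0.0], [[1.0, rho], [rho, 1.0]], size=n)
u = stats.norm.cdf(z)                              # the copula: correlated uniforms

x1 = stats.lognorm.ppf(u[:, 0], s=0.5, scale=1.0)  # arbitrary marginal 1
x2 = stats.lognorm.ppf(u[:, 1], s=1.0, scale=2.0)  # arbitrary marginal 2
print(f"Spearman rank correlation of outputs: {stats.spearmanr(x1, x2)[0]:.3f}")
```

Other copula families (e.g., Clayton or Frank) differ precisely in the higher-order bivariate moments the abstract flags, so swapping the copula while holding the rank correlation fixed is a direct way to see the effect on outputs.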

9.
Use of probability distributions by regulatory agencies often focuses on the extreme events and scenarios that correspond to the tail of probability distributions. This paper makes the case that assessment of the tail of the distribution can and often should be performed separately from assessment of the central values. Factors to consider when developing distributions that account for tail behavior include (a) the availability of data, (b) characteristics of the tail of the distribution, and (c) the value of additional information in assessment. The integration of these elements will improve the modeling of extreme events by the tail of distributions, thereby providing policy makers with critical information on the risk of extreme events. Two examples provide insight into the theme of the paper. The first demonstrates the need for a parallel analysis that separates the extreme events from the central values. The second shows a link between the selection of the tail distribution and a decision criterion. In addition, the phenomenon of breaking records in time-series data gives insight into the information that characterizes extreme values. One methodology for treating the risk of extreme events explicitly adopts the conditional expected value as a measure of risk. Theoretical results concerning this measure are given to clarify some of the concepts of the risk of extreme events.
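The conditional expected value mentioned at the end is E[X | X > x0], the mean of outcomes given that the tail threshold x0 is exceeded. The sketch below computes it by numerical integration for an illustrative lognormal loss distribution.

```python
# Conditional expected value of the upper tail: E[X | X > x0].
import numpy as np
from scipy import stats, integrate

dist = stats.lognorm(s=0.8, scale=10.0)  # illustrative loss distribution
x0 = dist.ppf(0.95)                       # condition on the upper 5% tail

num, _ = integrate.quad(lambda x: x * dist.pdf(x), x0, np.inf)
cond_mean = num / dist.sf(x0)             # E[X | X > x0]
print(f"x0 (95th pct) = {x0:.2f}, E[X | X > x0] = {cond_mean:.2f}, "
      f"unconditional mean = {dist.mean():.2f}")
```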

10.
Regulatory agencies often perform microbial risk assessments to evaluate the change in the number of human illnesses as the result of a new policy that reduces the level of contamination in the food supply. These agencies generally have regulatory authority over the production and retail sectors of the farm-to-table continuum. Any predicted change in contamination that results from new policy that regulates production practices occurs many steps prior to consumption of the product. This study proposes a framework for conducting microbial food-safety risk assessments; this framework can be used to quantitatively assess the annual effects of national regulatory policies. Advantages of the framework are that estimates of human illnesses are consistent with national disease surveillance data (which are usually summarized on an annual basis) and some of the modeling steps that occur between production and consumption can be collapsed or eliminated. The framework leads to probabilistic models that include uncertainty and variability in critical input parameters; these models can be solved using a number of different Bayesian methods. The Bayesian synthesis method performs well for this application and generates posterior distributions of parameters that are relevant to assessing the effect of implementing a new policy. An example, based on Campylobacter and chicken, estimates the annual number of illnesses avoided by a hypothetical policy; this output could be used to assess the economic benefits of a new policy. Empirical validation of the policy effect is also examined by estimating the annual change in the numbers of illnesses observed via disease surveillance systems.

11.
Monte Carlo simulations have become a mainstream technique for environmental and technical risk assessments. Because their results are dependent on the quality of the involved input distributions, it is important to identify distributions that are flexible enough to model all relevant data yet efficient enough to allow thousands of evaluations necessary in a typical simulation analysis. It has been shown in recent years that the S-distribution provides accurate representations for frequency data that are symmetric or skewed to either side. This flexibility makes the S-distribution an ideal candidate for Monte Carlo analyses. To use the distribution effectively, methods must be available for drawing S-distributed random numbers. Such a method is proposed here. It is shown that S-distributed random numbers can be efficiently generated from a simple algebraic formula whose coefficients are tabulated. The method is shown step by step and illustrated with a detailed example. (The tables are accessible in electronic form in the FTP parent directory at http://www.musc.edu/voiteo/ftp/.)
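The article's fast sampler rests on a tabulated algebraic formula that is not reproduced here. As a generic stand-in, the sketch below draws S-distributed numbers by slow, exact inverse-transform sampling, taking the S-distribution as defined by dF/dx = alpha*(F^g - F^h) with F(x0) = 0.5; the parameter values are illustrative.

```python
# Inverse-transform sampling from an S-distribution via its quantile integral:
# x(u) = x0 + integral from 0.5 to u of dF / (alpha*(F^g - F^h)).
import numpy as np
from scipy.integrate import quad

alpha, g, h, x0 = 1.0, 0.5, 2.0, 0.0  # illustrative S-distribution parameters

def s_quantile(u):
    val, _ = quad(lambda F: 1.0 / (alpha * (F**g - F**h)), 0.5, u)
    return x0 + val

rng = np.random.default_rng(7)
# Clip uniforms away from 0 and 1, where the quantile integrand is singular.
samples = np.array([s_quantile(u) for u in rng.uniform(0.001, 0.999, 1000)])
print(f"median ≈ {np.median(samples):.3f} (should be near x0 = {x0})")
```

The tabulated-formula approach in the article replaces the per-sample quadrature above with a cheap algebraic evaluation, which is what makes it viable inside a large Monte Carlo run.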

12.
This paper is concerned with accuracy properties of simulations of approximate solutions for stochastic dynamic models. Our analysis rests upon a continuity property of invariant distributions and a generalized law of large numbers. We then show that the statistics generated by any sufficiently good numerical approximation are arbitrarily close to the set of expected values of the model's invariant distributions. Also, under a contractivity condition on the dynamics, we establish error bounds. These results are of further interest for the comparative study of stationary solutions and the estimation of structural dynamic models.

13.
The use of benchmark dose (BMD) calculations for dichotomous or continuous responses is well established in the risk assessment of cancer and noncancer endpoints. In some cases, responses to exposure are categorized in terms of ordinal severity effects such as none, mild, adverse, and severe. Such responses can be assessed using categorical regression (CATREG) analysis. However, while CATREG has been employed to compare the benchmark approach and the no-adverse-effect-level (NOAEL) approach in determining a reference dose, the utility of CATREG for risk assessment remains unclear. This study proposes a CATREG model to extend the BMD approach to ordered categorical responses by modeling severity levels as censored interval limits of a standard normal distribution. The BMD is calculated as a weighted average of the BMDs obtained at dichotomous cutoffs for each adverse severity level above the critical effect, with the weights being proportional to the reciprocal of the expected loss at the cutoff under the normal probability model. This approach provides a link between the current BMD procedures for dichotomous and continuous data. We estimate the CATREG parameters using a Markov chain Monte Carlo simulation procedure. The proposed method is demonstrated using examples of aldicarb and urethane, each with several categories of severity levels. Simulation studies show that the BMD and BMDL (lower confidence bound on the BMD) from the proposed method are quite compatible with the corresponding estimates from the existing methods for dichotomous and continuous data; the difference depends mainly on the choice of cutoffs for the severity levels.

14.
Risks from exposure to contaminated land are often assessed with the aid of mathematical models. The current probabilistic approach is a considerable improvement on previous deterministic risk assessment practices, in that it attempts to characterize uncertainty and variability. However, some inputs continue to be assigned as precise numbers, while others are characterized as precise probability distributions. Such precision is hard to justify, and we show in this article how rounding errors and distribution assumptions can affect an exposure assessment. The outcomes of traditional deterministic point estimates and Monte Carlo simulations were compared to probability bounds analyses. Assigning all scalars as imprecise numbers (intervals prescribed by significant digits) added uncertainty to the deterministic point estimate of about one order of magnitude. Similarly, representing probability distributions as probability boxes added several orders of magnitude to the uncertainty of the probabilistic estimate. This indicates that the size of the uncertainty in such assessments is actually much greater than currently reported. The article suggests that full disclosure of the uncertainty may facilitate decision making in opening up a negotiation window. In the risk analysis process, it is also an ethical obligation to clarify the boundary between the scientific and social domains.
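The significant-digit interval idea can be shown concretely: a scalar reported as 2.5 stands for the interval [2.45, 2.55], and intervals are propagated through the exposure equation. The dose equation and values below are illustrative, not those of the article.

```python
# Interval propagation of significant-digit imprecision through a toy dose model.

def sig_interval(x, decimals):
    # Interval implied by rounding to the reported number of decimals.
    half = 0.5 * 10 ** (-decimals)
    return (x - half, x + half)

def i_mul(a, b):
    prods = [a[0]*b[0], a[0]*b[1], a[1]*b[0], a[1]*b[1]]
    return (min(prods), max(prods))

def i_div(a, b):
    assert b[0] > 0, "divisor interval must exclude zero"
    return i_mul(a, (1.0 / b[1], 1.0 / b[0]))

conc = sig_interval(2.5, 1)   # soil concentration, mg/kg -> [2.45, 2.55]
ir   = sig_interval(0.1, 1)   # ingestion rate, kg/day    -> [0.05, 0.15]
bw   = sig_interval(70.0, 0)  # body weight, kg           -> [69.5, 70.5]

dose = i_div(i_mul(conc, ir), bw)  # mg/kg-day, as an interval
print(f"dose in [{dose[0]:.2e}, {dose[1]:.2e}], "
      f"span = {dose[1]/dose[0]:.1f}x")
```

Even in this three-variable toy, the single coarsely reported ingestion rate stretches the dose interval by a factor of several, which is the mechanism behind the order-of-magnitude finding in the abstract.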

15.
Estimation of Financial Market Risk VaR Based on MCMC
To address the shortcomings of the mainstream methods for computing VaR, this paper proposes a novel VaR estimation method based on Markov Chain Monte Carlo (MCMC) simulation, which overcomes the high-dimensionality and static-nature limitations of traditional Monte Carlo simulation and improves estimation accuracy. An empirical analysis of U.S. Treasury bonds demonstrates the advantages of the MCMC method.
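A hedged sketch of the MCMC-to-VaR pipeline: draw returns from a heavy-tailed target density with a random-walk Metropolis sampler, then read VaR off the simulated distribution. The Student-t target is illustrative, not the paper's Treasury-bond model.

```python
# Random-walk Metropolis sampling of returns, then VaR as an empirical quantile.
import numpy as np
from scipy import stats

target = stats.t(df=4, loc=0.0, scale=0.01)  # illustrative daily-return density
rng = np.random.default_rng(3)

x, chain = 0.0, []
for _ in range(20_000):
    prop = x + rng.normal(0, 0.02)               # random-walk proposal
    if rng.random() < target.pdf(prop) / target.pdf(x):
        x = prop                                 # Metropolis accept
    chain.append(x)

losses = -np.array(chain[2_000:])                # drop burn-in
var_99 = np.quantile(losses, 0.99)
print(f"99% one-day VaR ≈ {var_99:.4f} (exact: {-target.ppf(0.01):.4f})")
```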

16.
Stochastic effects and data uncertainties are present in any engineering calculation. Their impact may be particularly important if they concern the design of process equipment. A calculation model for the dynamic behavior of a heat exchanger and procedures to deal with the related uncertainties are presented. Their propagation through the calculation by means of a Monte Carlo approach is shown. The temperature at the heat exchanger outlet and the step response to a sudden variation in the heat exchanger inlet temperature are simulated and evaluated by way of example. It is demonstrated that the inclusion of stochastic effects and uncertainties provides a more reliable basis for design decisions and hence reduces the probability of errors.
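A minimal sketch of the propagation idea: model the outlet-temperature step response as a first-order lag, sample the uncertain effectiveness and time constant, and summarize the spread. The model form and parameter distributions are illustrative, not the article's heat-exchanger model.

```python
# Monte Carlo propagation of input uncertainty through a toy step-response model.
import numpy as np

rng = np.random.default_rng(11)
n = 10_000
eff = rng.normal(0.80, 0.03, n)  # heat-exchanger effectiveness (uncertain)
tau = rng.normal(30.0, 4.0, n)   # time constant, s (uncertain)

t_in0, t_in1, t_cold = 80.0, 90.0, 20.0  # step in hot inlet temperature, deg C
t = 60.0                                  # evaluate response at t = 60 s

# Outlet temperature relaxing from the old to the new steady state.
t_out_ss0 = t_in0 - eff * (t_in0 - t_cold)
t_out_ss1 = t_in1 - eff * (t_in1 - t_cold)
t_out = t_out_ss1 + (t_out_ss0 - t_out_ss1) * np.exp(-t / tau)

print(f"outlet T at t={t:.0f}s: mean={t_out.mean():.1f} C, "
      f"5th-95th pct=({np.quantile(t_out, 0.05):.1f}, "
      f"{np.quantile(t_out, 0.95):.1f}) C")
```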

17.
Risk Analysis, 2018, 38(8): 1576-1584
Fault trees are used in reliability modeling to create logical models of fault combinations that can lead to undesirable events. The output of a fault tree analysis (the top event probability) is expressed in terms of the failure probabilities of basic events that are input to the model. Typically, the basic event probabilities are not known exactly but are modeled as probability distributions; therefore, the top event probability is also represented as an uncertainty distribution. Monte Carlo methods are generally used for evaluating the uncertainty distribution, but such calculations are computationally intensive and do not readily reveal the dominant contributors to the uncertainty. In this article, a closed-form approximation for the fault tree top event uncertainty distribution is developed, which is applicable when the uncertainties in the basic events of the model are lognormally distributed. The results of the approximate method are compared with results from two sampling-based methods: namely, the Monte Carlo method and the Wilks method based on order statistics. It is shown that the closed-form expression can provide a reasonable approximation to results obtained by Monte Carlo sampling, without incurring the computational expense. The Wilks method is found to be a useful means of providing an upper bound for the percentiles of the uncertainty distribution while being computationally inexpensive compared with full Monte Carlo sampling. The lognormal approximation method and Wilks's method appear to be attractive, practical alternatives for the evaluation of uncertainty in the output of fault trees and similar multilinear models.
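The article's closed-form lognormal approximation is not reproduced here, but the two sampling-based comparators can be sketched for a toy two-out-of-three fault tree with lognormal basic events. The Wilks bound uses the classical order-statistic result that the largest of 59 samples bounds the 95th percentile with 95% confidence (since 1 - 0.95^59 ≥ 0.95); the tree structure and parameters below are illustrative.

```python
# Full Monte Carlo vs. Wilks 95/95 bound for a toy 2-out-of-3 top event.
import numpy as np

rng = np.random.default_rng(5)

def top_event(n):
    # Basic event probabilities: lognormal with median 1e-3, GSD ~ 2.
    a, b, c = (rng.lognormal(np.log(1e-3), np.log(2.0), n) for _ in range(3))
    return a*b + a*c + b*c - 2*a*b*c  # exact 2-out-of-3 combination

full = top_event(100_000)
p95_mc = np.quantile(full, 0.95)

# Wilks: max of 59 samples is a one-sided 95/95 bound on the 95th percentile.
wilks_bound = top_event(59).max()
print(f"MC 95th pct = {p95_mc:.3e}; Wilks 95/95 bound = {wilks_bound:.3e}")
```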

18.
Three methods (multiplicative, additive, and allometric) were developed to extrapolate physiological model parameter distributions across species, specifically from rats to humans. In the multiplicative approach, the rat model parameters are multiplied by the ratio of the mean values between humans and rats. Additive scaling of the distributions is defined by adding the difference between the average human value and the average rat value to each rat value. Finally, allometric scaling relies on established extrapolation relationships using power functions of body weight. A physiologically-based pharmacokinetic model was fitted independently to rat and human benzene disposition data. Human model parameters obtained by extrapolation and by fitting were used to predict the total bone marrow exposure to benzene and the quantity of metabolites produced in bone marrow. We found that extrapolations poorly predict the human data relative to the human model. In addition, the prediction performance depends largely on the quantity of interest. The extrapolated models underpredict bone marrow exposure to benzene relative to the human model. Yet, predictions of the quantity of metabolite produced in bone marrow are closer to the human model predictions. These results indicate that the multiplicative and allometric techniques were able to extrapolate the model parameter distributions, but also that rats do not provide a good kinetic model of benzene disposition in humans.
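The three extrapolation rules can be written in a few lines each, applied here to one hypothetical rat parameter distribution; the means, body weights, and the 0.74 allometric exponent are illustrative assumptions, not values from the study.

```python
# Multiplicative, additive, and allometric scaling of one parameter distribution.
import numpy as np

rng = np.random.default_rng(2)
rat = rng.lognormal(np.log(5.0), 0.25, 10_000)  # hypothetical rat parameter

rat_mean, human_mean = 5.0, 310.0               # illustrative mean values
bw_rat, bw_human = 0.25, 70.0                   # body weights, kg

multiplicative = rat * (human_mean / rat_mean)      # scale by ratio of means
additive       = rat + (human_mean - rat_mean)      # shift by difference of means
allometric     = rat * (bw_human / bw_rat) ** 0.74  # power-of-body-weight rule

for name, d in [("multiplicative", multiplicative),
                ("additive", additive), ("allometric", allometric)]:
    print(f"{name:>14}: mean = {d.mean():8.1f}, cv = {d.std()/d.mean():.3f}")
```

Note the design consequence: multiplicative and allometric scaling preserve the coefficient of variation of the rat distribution, while additive scaling shrinks it, so the three rules yield differently shaped human distributions even when centered similarly.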

19.
Although there has been nearly complete agreement in the scientific community that Monte Carlo techniques represent a significant improvement in the exposure assessment process, virtually all state and federal risk assessments still rely on the traditional point estimate approach. One of the rate-determining steps in a timely implementation of Monte Carlo techniques in regulatory decision making is the development of "standard" data distributions that are considered applicable to any setting. For many exposure variables, there is no need to wait any longer to adopt Monte Carlo techniques into regulatory policy, since there is a wealth of data from which a robust distribution can be developed and ample evidence to indicate that the variable is not significantly influenced by site-specific conditions. In this paper, we propose several distributions that can be considered standard and customary for most settings. Age-specific distributions for soil ingestion rates, inhalation rates, body weights, skin surface area, tap water and fish consumption, residential occupancy and occupational tenure, and soil-on-skin adherence were developed. For each distribution offered in this paper, we discuss the adequacy of the database, derivation of the distribution, and applicability of the distribution to various settings and conditions.
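In code, a library of "standard" exposure-factor distributions might look like the registry sketched below. All names and parameter values are placeholders for illustration, not the distributions derived in the paper.

```python
# Hypothetical registry of standard exposure-factor distributions.
from scipy import stats

STANDARD_FACTORS = {
    # name: (frozen distribution, units) -- all parameters are placeholders
    "adult_body_weight":    (stats.lognorm(s=0.2, scale=70.0), "kg"),
    "adult_inhalation":     (stats.lognorm(s=0.3, scale=13.0), "m3/day"),
    "soil_ingestion_child": (stats.lognorm(s=1.0, scale=30.0), "mg/day"),
}

for name, (dist, units) in STANDARD_FACTORS.items():
    print(f"{name}: median = {dist.median():.1f} {units}, "
          f"95th pct = {dist.ppf(0.95):.1f} {units}")
```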

20.
Multimodal distribution functions that result from Monte Carlo simulations can be interpreted by superimposing joint probability density functions onto the contour space of the simulated calculations. The method is demonstrated by analysis of the pathway of a radioactive groundwater contaminant using an analytical solution to the transport equation. Simulated concentrations at a fixed time and distance produce multimodal histograms, which are understood with reference to the parameter space for the two random variables: velocity and dispersivity. Numerical integration under the joint density function up to the contour of the analytical solution gives the probability of the contaminant exceeding a target concentration. This technique is potentially more efficient than Monte Carlo simulation for low-probability events. Visualization of parameter space is restricted to two random variables; nevertheless, analyzing the two most pertinent random variables in a simulation might still offer insights into the multimodal nature of output histograms.
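The contour-integration idea can be sketched with a toy model standing in for the article's transport solution: evaluate the joint density of the two random variables (velocity, dispersivity) on a grid and sum it over the region of parameter space where the computed concentration exceeds the target.

```python
# Grid integration of a joint density over the exceedance region of a toy model.
import numpy as np
from scipy import stats

c0, x, t, c_target = 100.0, 50.0, 10.0, 5.0  # illustrative constants
v_dist = stats.lognorm(s=0.4, scale=1.0)      # velocity (assumed independent
d_dist = stats.lognorm(s=0.6, scale=2.0)      # of dispersivity here)

v = np.linspace(0.05, 5, 400)
d = np.linspace(0.05, 10, 400)
V, D = np.meshgrid(v, d)

conc = c0 * np.exp(-x / (V * t * D))          # toy stand-in "analytical solution"
joint = v_dist.pdf(V) * d_dist.pdf(D)         # joint density on the grid
dv, dd = v[1] - v[0], d[1] - d[0]

p_exceed = joint[conc > c_target].sum() * dv * dd
print(f"P(concentration > {c_target}) ≈ {p_exceed:.4f}")
```

For low-probability events, this deterministic sweep replaces the many Monte Carlo draws that would otherwise be wasted outside the exceedance region, which is the efficiency argument the abstract makes.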

