Similar Literature
20 similar documents found (search time: 15 ms)
1.
The use of probabilistic approaches in exposure assessments of contaminants migrating from food packages is of increasing interest, but the lack of concentration or migration data is often cited as a limitation. Data accounting for the variability and uncertainty that can be expected in migration (for example, due to heterogeneity in the packaging system, variation of temperature along the distribution chain, and the different time of consumption of each individual package) are required for probabilistic analysis. The objective of this work was to characterize quantitatively the uncertainty and variability in estimates of migration. A Monte Carlo simulation was applied to a typical solution of Fick's law with given variability in the input parameters. The analysis was based on experimental data for a model system (migration of Irgafos 168 from polyethylene into isooctane) and illustrates how important sources of variability and uncertainty can be identified in order to refine analyses. For long migration times and controlled temperature, the affinity of the migrant for the food can be the major factor determining the variability in the migration values (more than 70% of the variance). In situations where both the time of consumption and the temperature can vary, these factors can be responsible for more than 60% and 20% of the variance in the migration estimates, respectively. The approach presented can be used with databases from consumption surveys to yield a true probabilistic estimate of exposure.
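A minimal sketch of such a Monte Carlo treatment, assuming a simplified short-time solution of Fick's law capped by a partition-limited equilibrium; the film geometry, Arrhenius parameters, and all input distributions below are hypothetical placeholders, not the values fitted for Irgafos 168 in the study.

```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
n = 100_000

# Hypothetical input distributions (illustrative, not the study's fitted values)
T = rng.normal(25.0, 5.0, n) + 273.15            # storage temperature [K]
t = rng.lognormal(np.log(30 * 86400), 0.5, n)    # time of consumption [s]
K_pf = rng.lognormal(np.log(20.0), 1.0, n)       # polymer/food partition coefficient
L_p = 50e-6                                      # film thickness [m]
C_p0 = 1000.0                                    # initial concentration in polymer [mg/kg]
mass_ratio = 0.05                                # polymer-to-food mass ratio (assumed)

# Arrhenius-type diffusion coefficient with an uncertain pre-exponential factor
D0 = rng.lognormal(np.log(1e-16), 0.3, n)        # [m^2/s] at the reference temperature
Ea = 80e3                                        # activation energy [J/mol] (assumed)
D = D0 * np.exp(-Ea / 8.314 * (1.0 / T - 1.0 / 298.15))

# Short-time Fickian migration fraction, capped by the partition-limited equilibrium
f_eq = 1.0 / (1.0 + K_pf * mass_ratio)           # fraction migrated at equilibrium
frac_kinetic = np.minimum(1.0, 2.0 / L_p * np.sqrt(D * t / np.pi))
c_food = C_p0 * mass_ratio * f_eq * frac_kinetic # migrant concentration in food [mg/kg]

print("median / 95th percentile migration [mg/kg]:",
      np.median(c_food).round(2), np.percentile(c_food, 95).round(2))
# Crude importance screen: rank correlation of each input with the migration estimate
for name, x in [("K_pf", K_pf), ("time", t), ("temperature", T), ("D0", D0)]:
    rho, _ = spearmanr(x, c_food)
    print(f"  {name:12s} Spearman rho = {rho:.2f}")
```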

2.
Biomagnification of organochlorine and other persistent organic contaminants by higher trophic level organisms represents one of the most significant sources of uncertainty and variability in evaluating potential risks associated with disposal of dredged materials. While it is important to distinguish between population variability (e.g., true population heterogeneity in fish weight and lipid content) and uncertainty (e.g., measurement error), they can be operationally difficult to define separately in probabilistic estimates of human health and ecological risk. We propose a disaggregation of uncertain and variable parameters based on: (1) availability of supporting data; (2) the specific management and regulatory context (in this case, of the U.S. Army Corps of Engineers/U.S. Environmental Protection Agency tiered approach to dredged material management); and (3) professional judgment and experience in conducting probabilistic risk assessments. We describe and quantitatively evaluate several sources of uncertainty and variability in estimating risk to human health from trophic transfer of polychlorinated biphenyls (PCBs) using a case study of sediments obtained from the New York-New Jersey Harbor and being evaluated for disposal at an open water off-shore disposal site within the northeast region. The estimates of PCB concentrations in fish and dietary doses of PCBs to humans ingesting fish are expressed as distributions of values, of which the arithmetic mean or mode represents a particular fractile. The distribution of risk values is obtained using a food chain biomagnification model developed by Gobas, with distributions specified for the input parameters and disaggregated to represent either uncertainty or variability. Only those sources of uncertainty that could be quantified were included in the analysis. Results for several different two-dimensional Latin Hypercube analyses are provided to evaluate the influence of the uncertain versus variable disaggregation of model parameters. The analysis suggests that variability in human exposure parameters is greater than the uncertainty bounds on any particular fractile, given the described assumptions.
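A toy two-dimensional Latin Hypercube sketch of the uncertainty/variability split described above; a lumped biomagnification factor stands in for the Gobas food-web model, and every distribution and parameter value is an assumption for illustration only.

```python
import numpy as np
from scipy.stats import lognorm, norm

rng = np.random.default_rng(1)

def lhs(n, ppfs):
    """Simple Latin Hypercube sample: one stratified, independently permuted
    column of quantiles per distribution, mapped through its inverse CDF."""
    u = (np.arange(n) + rng.random((len(ppfs), n))) / n      # stratified uniforms
    return [ppf(row[rng.permutation(n)]) for ppf, row in zip(ppfs, u)]

n_unc, n_var = 100, 1000   # outer (uncertainty) and inner (variability) sample sizes

# Outer loop: imperfectly known quantities (hypothetical): sediment PCB
# concentration [mg/kg] and a lumped sediment-to-fish biomagnification factor.
c_sed, bmf = lhs(n_unc, [lognorm(s=0.2, scale=0.05).ppf,
                         lognorm(s=0.4, scale=2.0).ppf])

p95_dose = []
for i in range(n_unc):
    # Inner loop: population variability in fish intake [g/day] and body weight [kg]
    intake, bw = lhs(n_var, [lognorm(s=0.8, scale=20.0).ppf,
                             norm(loc=70.0, scale=12.0).ppf])
    c_fish = c_sed[i] * bmf[i]                   # PCB in fish tissue [mg/kg]
    dose = c_fish * intake / 1000.0 / bw         # dietary dose [mg/kg-bw/day]
    p95_dose.append(np.percentile(dose, 95))

print("95th-percentile population dose, uncertainty band (5th-95th across outer loop):")
print(np.percentile(p95_dose, [5, 95]))
```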

3.
The objective of this study is the estimation of health hazards due to the inhalation of combustion products from accidental mineral oil transformer fires. Calculations of production, dispersion, and subsequent human intake of polychlorinated dibenzofurans (PCDFs) provide us with exposure estimates. PCDFs are believed to be the principal toxic products of the pyrolysis of polychlorinated biphenyls (PCBs) sometimes found as contaminants in transformer mineral oil. Cancer burdens and birth defect hazard indices are estimated from population data and exposure statistics. Monte Carlo-derived variational factors emphasize the statistics of uncertainty in the estimates of risk parameters. Community health issues are addressed and risks are found to be insignificant.

4.
Probability models incorporating a deterministic versus stochastic infectious dose are described for estimating infection risk due to airborne pathogens that infect at low doses. Such pathogens can be occupational hazards or candidate agents for bioterrorism. Inputs include parameters for the infectious dose model, distribution parameters for ambient pathogen concentrations, the breathing rate, the duration of an exposure period, the anticipated number of exposure periods, and, if a respirator device is used, distribution parameters for respirator penetration values. Application of the models is illustrated with a hypothetical scenario involving exposure to Coccidioides immitis, a fungus present in soil in areas of the southwestern United States. Inhaling C. immitis spores causes a respiratory tract infection and is a recognized occupational hazard in jobs involving soil dust exposure in endemic areas. An uncertainty analysis is applied to risk estimation in the context of selecting respiratory protection with a desired degree of efficacy.
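A hedged sketch of the basic calculation, using an exponential (single-hit) dose-response curve as a stand-in for the paper's infectious-dose models; the concentration distribution, infectivity parameter k, respirator-penetration Beta distribution, and exposure schedule are all assumed values.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 200_000

# Hypothetical inputs - not fitted values for C. immitis
conc = rng.lognormal(mean=np.log(0.05), sigma=1.0, size=n)  # spores per m^3 of air
breath = 1.2          # breathing rate [m^3/h]
hours = 8.0           # duration of one exposure period [h]
periods = 50          # number of exposure periods per year
k = 0.01              # single-hit infectivity per inhaled spore (assumed)

# Respirator penetration: Beta-distributed fraction of spores that get through
penetration = rng.beta(2.0, 18.0, size=n)         # mean ~0.1

dose = conc * breath * hours * penetration        # expected spores inhaled per period
p_period = 1.0 - np.exp(-k * dose)                # exponential dose-response
p_annual = 1.0 - (1.0 - p_period) ** periods      # risk over repeated exposure periods

print("median annual infection risk  :", np.median(p_annual))
print("95th percentile annual risk   :", np.percentile(p_annual, 95))
print("fraction of workers above 1e-3:", np.mean(p_annual > 1e-3))
```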

5.
Saltelli, Andrea; Tarantola, Stefano; Chan, Karen. Risk Analysis, 1998, 18(6): 799-803
The motivation of the present work is to provide an auxiliary tool for the decision-maker (DM) faced with predictive model uncertainty. The tool is especially suited for the allocation of R&D resources. When taking decisions under uncertainty, making use of the output from mathematical or computational models, the DM might be helped if the uncertainty in model predictions were decomposed in a quantitative, rather than qualitative, fashion, apportioning uncertainty according to source. This would allow optimal use of resources to reduce the imprecision in the prediction. For complex models, such a decomposition of the uncertainty into its constituent elements can be impractical as such, due to the large number of parameters involved. If instead parameters can be grouped into logical subsets, the analysis becomes more useful, also because the decision-maker is likely to have different perceptions (and degrees of acceptance) of different kinds of uncertainty. For instance, the decomposition into groups could involve one subset of factors for each constituent module of the model; or one set for the weights and one for the factors in a multicriteria analysis; or the phenomenological parameters of the model versus the factors driving the model configuration/structure (aggregation level, etc.); finally, one might imagine that a partition of the uncertainty could be sought between stochastic (or aleatory) and subjective (or epistemic) uncertainty. The present note shows how to compute a rigorous decomposition of the output's variance with grouped parameters, and how this approach may benefit the efficiency and transparency of the analysis.
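The grouped decomposition can be illustrated with a standard pick-freeze Monte Carlo estimator of closed (first-order) sensitivity indices for groups of inputs; the toy model and grouping below are invented and are not the authors' estimator or example.

```python
import numpy as np

rng = np.random.default_rng(3)

def model(x):
    # Toy model with 6 inputs: two "module" groups plus two weighting factors
    return (x[:, 0] + 2 * x[:, 1]) * x[:, 4] + (x[:, 2] + x[:, 3]) ** 2 * x[:, 5]

k, n = 6, 100_000
groups = {"module_A": [0, 1], "module_B": [2, 3], "weights": [4, 5]}

A = rng.random((n, k))
B = rng.random((n, k))
fA, fB = model(A), model(B)
var_y = np.var(np.concatenate([fA, fB]))

for name, idx in groups.items():
    AB = A.copy()
    AB[:, idx] = B[:, idx]        # freeze the group at B's values, keep the rest from A
    fAB = model(AB)
    # Pick-freeze estimator of the closed (first-order) index of the whole group
    S_closed = np.mean(fB * (fAB - fA)) / var_y
    print(f"{name:9s}  closed first-order index ~ {S_closed:.2f}")
```

With independent inputs the same A/B matrices are reused for every group, so the sampling cost grows with the number of groups rather than the number of individual parameters, which is what makes the grouped analysis attractive for complex models.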

6.
The human toxicity potential, a weighting scheme used to evaluate toxic emissions for life cycle assessment and toxics release inventories, is based on potential dose calculations and toxicity factors. This paper evaluates the variance in potential dose calculations that can be attributed to the uncertainty in chemical-specific input parameters as well as the variability in exposure factors and landscape parameters. A knowledge of the uncertainty allows us to assess the robustness of a decision based on the toxicity potential; a knowledge of the sources of uncertainty allows us to focus our resources if we want to reduce the uncertainty. The potential dose of 236 chemicals was assessed. The chemicals were grouped by dominant exposure route, and a Monte Carlo analysis was conducted for one representative chemical in each group. The variance typically spans one to two orders of magnitude. For comparison, the point estimates in potential dose for 236 chemicals span ten orders of magnitude. Most of the variance in the potential dose is due to chemical-specific input parameters, especially half-lives, although exposure factors such as fish intake and the source of drinking water can be important for chemicals whose dominant exposure is through indirect routes. Landscape characteristics are generally of minor importance.

7.
The decision process involved in cleaning sites contaminated with hazardous, mixed, and radioactive materials is often supported by results obtained from computer models. These results provide limits within which a decision-maker can judge the importance of individual transport and fate processes, and the likely outcome of alternative cleanup strategies. The transport of hazardous materials may occur predominantly through one particular pathway but, more often, actual or potential transport must be evaluated across several pathways and media. Multimedia models are designed to simulate the transport of contaminants from a source to a receptor through more than one environmental pathway. Three such multimedia models are reviewed here: MEPAS, MMSOILS, and PRESTO-EPA-CPG. The reviews are based on documentation provided with the software, on published reviews, on personal interviews with the model developers, and on model summaries extracted from computer databases and expert systems. The three models are reviewed within the context of specific media components: air, surface water, ground water, and food chain. Additional sections evaluate the way that these three models calculate human exposure and dose and how they report uncertainty. Special emphasis is placed on how each model handles radionuclide transport within specific media. For the purpose of simulating the transport, fate, and effects of radioactive contaminants through more than one pathway, both MEPAS and PRESTO-EPA-CPG are adequate for screening studies; MMSOILS only handles nonradioactive substances and must be modified before it can be used in these same applications. Of the three models, MEPAS is the most versatile, especially if the user needs to model the transport, fate, and effects of hazardous and radioactive contaminants.

8.
One of the main steps in an uncertainty analysis is the selection of appropriate probability distribution functions for all stochastic variables. In this paper, criteria for such selections are reviewed, the most important among them being any a priori knowledge about the nature of a stochastic variable, and the Central Limit Theorem of probability theory applied to sums and products of stochastic variables. In applications of these criteria, it is shown that many of the popular selections, such as the uniform distribution for a poorly known variable, require far more knowledge than is actually available. However, the knowledge available is usually sufficient to make use of other, more appropriate distributions. Next, functions of stochastic variables and the selection of probability distributions for their arguments as well as the use of different methods of error propagation through these functions are discussed. From these evaluations, priorities can be assigned to determine which of the stochastic variables in a function need the most care in selecting the type of distribution and its parameters. Finally, a method is proposed to assist in the assignment of an appropriate distribution which is commensurate with the total information on a particular stochastic variable, and is based on the scientific method. Two examples are given to elucidate the method for cases of little or almost no information.
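A tiny numerical illustration of the Central Limit Theorem argument: a quantity formed as a sum of many comparable stochastic variables tends toward a normal distribution, while one formed as a product tends toward a lognormal, so the structure of the quantity already suggests a distribution family.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
n_vars, n_draws = 20, 50_000

# 20 independent, clearly non-normal (uniform) stochastic variables
x = rng.uniform(0.5, 1.5, size=(n_draws, n_vars))

total = x.sum(axis=1)        # quantity built as a SUM of contributions
product = x.prod(axis=1)     # quantity built as a PRODUCT of factors

print("skewness of the sum      :", round(stats.skew(total), 3))             # ~0, near normal
print("skewness of the product  :", round(stats.skew(product), 3))           # strongly right-skewed
print("skewness of log(product) :", round(stats.skew(np.log(product)), 3))   # ~0, near lognormal
```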

9.
We consider the estimation of dynamic panel data models in the presence of incidental parameters in both dimensions: individual fixed-effects and time fixed-effects, as well as incidental parameters in the variances. We adopt the factor analytical approach by estimating the sample variance of individual effects rather than the effects themselves. In the presence of cross-sectional heteroskedasticity, the factor method estimates the average of the cross-sectional variances instead of the individual variances. The method thereby eliminates the incidental-parameter problem in the means and in the variances over the cross-sectional dimension. We further show that estimating the time effects and heteroskedasticities in the time dimension does not lead to the incidental-parameter bias even when T and N are comparable. Moreover, efficient and robust estimation is obtained by jointly estimating heteroskedasticities.

10.
Uncertainty importance measures are quantitative tools that aim to identify the contribution of uncertain inputs to output uncertainty. Their application ranges from food safety (Frey & Patil, 2002) to hurricane losses (Iman et al., 2005a, 2005b). The results and indications an analyst derives depend on the method selected for the study. In this work, we investigate the assumptions underlying various indicator families to discuss the information they convey to the analyst/decisionmaker. We start with nonparametric techniques, and then present variance-based methods. By means of an example we show that output variance does not always reflect a decisionmaker's state of knowledge of the inputs. We then examine the use of moment-independent approaches to global sensitivity analysis, i.e., techniques that look at the entire output distribution without a specific reference to its moments. Numerical results demonstrate that both moment-independent and variance-based indicators agree in identifying noninfluential parameters. However, differences in the ranking of the most relevant factors show that inputs that influence variance the most are not necessarily the ones that influence the output uncertainty distribution the most.
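A sketch of how the two families of indicators might be computed side by side on a toy model, using a simple binning estimator for the moment-independent (density-based) delta and a conditional-mean estimator for the first-order variance-based index; the model and binning choices are illustrative assumptions, not the article's.

```python
import numpy as np

rng = np.random.default_rng(5)
n, n_bins = 200_000, 40

x = {"x1": rng.normal(0, 1, n),
     "x2": rng.normal(0, 1, n),
     "x3": rng.normal(0, 1, n)}
y = x["x1"] + 0.5 * x["x2"] ** 2 + 0.1 * x["x3"]      # toy model

edges_y = np.quantile(y, np.linspace(0, 1, 101))      # common grid for densities of Y
widths = np.diff(edges_y)
f_y, _ = np.histogram(y, bins=edges_y, density=True)  # unconditional density of Y

for name, xi in x.items():
    cuts = np.quantile(xi, np.linspace(0, 1, n_bins + 1))   # equal-frequency bins of the input
    idx = np.digitize(xi, cuts[1:-1])
    delta, cond_means = 0.0, np.empty(n_bins)
    for b in range(n_bins):
        yb = y[idx == b]
        f_yb, _ = np.histogram(yb, bins=edges_y, density=True)
        # half the L1 distance between conditional and unconditional densities,
        # weighted by the bin probability (moment-independent delta)
        delta += (len(yb) / n) * 0.5 * np.sum(np.abs(f_yb - f_y) * widths)
        cond_means[b] = yb.mean()
    S1 = np.var(cond_means) / np.var(y)               # crude first-order (variance-based) index
    print(f"{name}: delta ~ {delta:.2f}   first-order S ~ {S1:.2f}")
```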

11.
The quantification of the relationship between the number of microbial organisms ingested and a specific outcome such as infection, illness, or mortality is a key aspect of quantitative risk assessment. A main problem in determining such dose-response models is the availability of appropriate data. Human feeding trials have been criticized because only young healthy volunteers are selected to participate and low doses, as often occur in real life, are typically not considered. Epidemiological outbreak data are considered to be more valuable, but are more subject to data uncertainty. In this article, we model the dose-illness relationship based on data from 20 Salmonella outbreaks, as discussed by the World Health Organization. In particular, we model the dose-illness relationship using generalized linear mixed models and fractional polynomials of dose. The fractional polynomial models are modified to satisfy the properties of different types of dose-illness models as proposed by Teunis et al. Within these models, differences in host susceptibility (susceptible versus normal population) are modeled as fixed effects, whereas differences in serovar type and food matrix are modeled as random effects. In addition, two bootstrap procedures are presented. The first procedure accounts for stochastic variability, whereas the second accounts for both stochastic variability and data uncertainty. The analyses indicate that the susceptible population has a higher probability of illness at low dose levels when the pathogen-food matrix combination is extremely virulent and at high dose levels when the combination is less virulent. Furthermore, the analyses suggest that immunity exists in the normal population but not in the susceptible population.
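A simplified sketch of a fractional-polynomial dose-illness fit with a nonparametric bootstrap over outbreaks; the outbreak data are invented, only fixed effects are fitted (the paper's random-effects GLMM structure is omitted), and the candidate powers are the usual fractional-polynomial set.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(6)

# Hypothetical outbreak-level data: mean ingested dose (CFU), exposed, ill
dose    = np.array([3e1, 1e2, 5e2, 1e3, 8e3, 4e4, 2e5, 1e6])
exposed = np.array([40,  55,  30,  80,  60,  25,  70,  45])
ill     = np.array([2,   5,   4,   18,  22,  12,  48,  39])

POWERS = [-1.0, -0.5, 0.0, 0.5, 1.0, 2.0]      # candidate fractional-polynomial powers

def fp_design(d, p):
    x = np.log(d) if p == 0.0 else d ** p      # power 0 means log, by FP convention
    return sm.add_constant(x, has_constant='add')

def fit_best_fp(dose, exposed, ill):
    best = None
    for p in POWERS:
        res = sm.GLM(np.column_stack([ill, exposed - ill]),
                     fp_design(dose, p),
                     family=sm.families.Binomial()).fit()
        if best is None or res.deviance < best[1]:
            best = (p, res.deviance, res)
    return best

p_hat, _, res = fit_best_fp(dose, exposed, ill)
print("selected power:", p_hat, " coefficients:", res.params.round(3))

# Bootstrap over outbreaks to reflect sampling variability in the fitted curve
pred_at_1e3 = []
for _ in range(500):
    i = rng.integers(0, len(dose), len(dose))
    p_b, _, res_b = fit_best_fp(dose[i], exposed[i], ill[i])
    pred_at_1e3.append(res_b.predict(fp_design(np.array([1e3]), p_b))[0])
print("P(ill | dose=1e3), bootstrap 2.5%-97.5%:",
      np.percentile(pred_at_1e3, [2.5, 97.5]).round(3))
```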

12.
The World Trade Organization introduced the concept of appropriate level of protection (ALOP) as a public health target. For this public health objective to be interpretable by the actors in the food chain, the concept of food safety objective (FSO) was proposed by the International Commission on Microbiological Specifications for Foods and later adopted by the Codex Alimentarius Food Hygiene Committee. The way to translate an ALOP into an FSO is still under debate. The purpose of this article is to develop a methodological tool to derive an FSO from an ALOP expressed as a maximal annual marginal risk. We explore the different models relating the annual marginal risk to the parameters of the FSO, depending on whether variability in the survival probability and in the concentration of the pathogen is considered or not. If it is not, determination of the FSO is straightforward. If it is, we propose to use stochastic Monte Carlo simulation models and logistic discriminant analysis in order to determine which sets of parameters are compatible with the ALOP. The logistic discriminant function was chosen such that the kappa coefficient is maximized. We illustrate this method with the example of the risks of listeriosis and salmonellosis in one type of soft cheese. We conclude that the definition of the FSO should integrate three dimensions: the prevalence of contamination, the average concentration per contaminated typical serving, and the dispersion of the concentration among those servings.
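A sketch of the proposed workflow under invented numbers: simulate the annual marginal risk for candidate FSO parameter sets (prevalence, mean and dispersion of log10 concentration), flag the sets compatible with a hypothetical ALOP, and fit a logistic discriminant to separate compatible from incompatible sets. The dose-response parameter, serving size, and consumption frequency are assumptions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(7)
n_sets, n_mc = 2000, 5000

ALOP = 1e-4          # maximal annual marginal risk (hypothetical)
servings = 50        # servings of the product per person per year (assumed)
r = 1e-6             # exponential dose-response parameter (assumed)
serving_g = 25.0     # serving size [g]

# Candidate FSO parameter sets: prevalence, mean and sd of log10(CFU/g)
prev  = rng.uniform(0.001, 0.05, n_sets)
mu    = rng.uniform(-2.0, 3.0, n_sets)
sigma = rng.uniform(0.2, 1.5, n_sets)

annual_risk = np.empty(n_sets)
for i in range(n_sets):
    logc = rng.normal(mu[i], sigma[i], n_mc)              # variability among servings
    p_serv = prev[i] * (1 - np.exp(-r * 10 ** logc * serving_g))
    annual_risk[i] = 1 - (1 - p_serv.mean()) ** servings

compatible = (annual_risk <= ALOP).astype(int)

# Logistic discriminant separating compatible from incompatible parameter sets
X = np.column_stack([prev, mu, sigma])
clf = LogisticRegression(max_iter=1000).fit(X, compatible)
print("share of compatible parameter sets:", compatible.mean())
print("discriminant coefficients (prev, mu, sigma):", clf.coef_.round(2))
```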

13.
Traditional models of capital budgeting with taxes are based on deterministic tax rates and tax bases. In reality, however, there are multiple sources of tax uncertainty. Frequent tax reforms make future taxation of investments a stochastic process. Fiscal authorities and tax courts create additional tax uncertainty by interpreting current tax laws differently. Moreover, simplified models that anticipate the actual tax base incorrectly contribute to tax uncertainty as perceived by investors. I analyze the effects of stochastic taxation on investment behavior in a real options model. The investor holds an option to invest in an irreversible project with stochastic cash flows and stochastic tax payments. Pre-tax cash flows and tax payments are assumed to be correlated. Increased tax uncertainty has an ambiguous impact on investment timing. For low tax uncertainty, high cash flow uncertainty and high correlation of cash flows and tax payments, increased tax uncertainty is likely to accelerate investment. A higher expected tax payment delays investment. A higher after-tax discount rate affects investment timing ambiguously.

14.
There have been many models for portfolio selection, but most do not explicitly include uncertainty and multiple objectives. This paper presents an approach that includes these aspects using a form of stochastic integer programming with recourse. The method involves the use of a time-based decision tree structure called a "project tree." Using this basic format, an illustrative six-project example is presented and analyzed. Various forms of objectives are discussed, ranging from the maximization of expected portfolio value to the maximization of the minimum weighted portfolio deviation from two goals. In each case, formulated numerical problems are given, and the solutions derived are presented. The approach is shown to be very flexible and capable of handling a variety of situations and objectives.
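A toy example in the same spirit, though far simpler than the project-tree formulation: six candidate projects, three payoff scenarios, and a budget constraint, with the portfolio chosen by brute-force enumeration rather than an integer-programming solver; all numbers are invented.

```python
from itertools import product
import numpy as np

# Six candidate projects: first-stage cost and payoff in each of three scenarios
cost   = np.array([4.0, 6.0, 3.0, 8.0, 5.0, 2.0])
payoff = np.array([[ 6, 10,  4, 14,  7,  3],     # scenario "low demand"
                   [ 9, 12,  5, 18,  9,  4],     # scenario "base"
                   [11, 20,  6, 30, 12,  5]],    # scenario "high demand"
                  dtype=float)
prob   = np.array([0.3, 0.5, 0.2])
budget = 15.0

best = None
for x in product([0, 1], repeat=6):              # enumerate all 2^6 candidate portfolios
    x = np.array(x)
    if cost @ x > budget:
        continue
    scenario_value = payoff @ x                  # portfolio value in each scenario
    expected = prob @ scenario_value             # expected-value objective
    worst = scenario_value.min()                 # an alternative, maximin-style objective
    if best is None or expected > best[0]:
        best = (expected, worst, x)

print("best expected value:", best[0], " worst-case value:", best[1])
print("selected projects  :", np.nonzero(best[2])[0].tolist())
```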

15.
The availability of high frequency financial data has generated a series of estimators based on intra-day data, improving the quality of large areas of financial econometrics. However, estimating the standard error of these estimators is often challenging. The root of the problem is that, traditionally, standard errors rely on estimating a theoretically derived asymptotic variance, and often this asymptotic variance involves substantially more complex quantities than the original parameter to be estimated. Standard errors are important: they are used to assess the precision of estimators in the form of confidence intervals, to create "feasible statistics" for testing, to build forecasting models based on, say, daily estimates, and also to optimize the tuning parameters. The contribution of this paper is to provide an alternative and general solution to this problem, which we call Observed Asymptotic Variance. It is a general nonparametric method for assessing asymptotic variance (AVAR). It provides consistent estimators of AVAR for a broad class of integrated parameters Θ = ∫ θ_t dt, where the spot parameter process θ can be a general semimartingale, with continuous and jump components. The observed AVAR is implemented with the help of a two-scales method. Its construction works well in the presence of microstructure noise, and when the observation times are irregular or asynchronous in the multivariate case. The methodology is valid for a wide variety of estimators, including the standard ones for variance and covariance, and also for more complex estimators, such as those for leverage effects, high-frequency betas, and semivariance.
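The two-scales idea can be illustrated with the classic two-scales realized volatility estimator on simulated noisy prices; this shows the subsampling-and-bias-correction construction rather than the paper's observed AVAR estimator itself, and the simulation settings are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(8)

# Simulate one day of log prices: constant-volatility diffusion + microstructure noise
n = 23400                       # one observation per second over 6.5 hours
sigma = 0.2 / np.sqrt(252)      # daily volatility of the efficient price
dt = 1.0 / n
x = np.cumsum(sigma * np.sqrt(dt) * rng.standard_normal(n))   # efficient log price
y = x + 0.0005 * rng.standard_normal(n)                       # observed, noisy price

def rv(p, stride=1):
    """Realized variance using every `stride`-th observation."""
    r = np.diff(p[::stride])
    return np.sum(r ** 2)

K = 300                                          # number of slow subsampling grids
rv_all = rv(y)                                   # fast scale: dominated by noise
rv_slow = np.mean([rv(y[k:], stride=K) for k in range(K)])   # averaged slow scale
n_bar = (n - K + 1) / K
tsrv = rv_slow - (n_bar / n) * rv_all            # two-scales, noise-bias-corrected estimate

print("true integrated variance :", sigma ** 2)
print("naive realized variance  :", rv_all)      # badly inflated by microstructure noise
print("two-scales estimate      :", tsrv)
```

The naive realized variance is inflated by the noise term, while the averaged slow scale combined with the bias correction recovers the integrated variance; the paper's observed AVAR applies a two-scales contrast in an analogous spirit to estimate an estimator's own asymptotic variance.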

16.
This article presents a general model for estimating population heterogeneity and "lack of knowledge" uncertainty in methylmercury (MeHg) exposure assessments using two-dimensional Monte Carlo analysis. Using data from fish-consuming populations in Bangladesh, Brazil, Sweden, and the United Kingdom, predictive model estimates of dietary MeHg exposures were compared against those derived from biomarkers (i.e., [Hg]hair and [Hg]blood). By disaggregating parameter uncertainty into components (i.e., population heterogeneity, measurement error, recall error, and sampling error), estimates were obtained of the contribution of each component to the overall uncertainty. Steady-state diet:hair and diet:blood MeHg exposure ratios were estimated for each population and were used to develop distributions useful for conducting biomarker-based probabilistic assessments of MeHg exposure. The 5th and 95th percentile modeled MeHg exposure estimates around mean population exposure from each of the four study populations are presented to demonstrate lack of knowledge uncertainty about a best estimate for a true mean. Results from a U.K. study population showed that a predictive dietary model resulted in a 74% lower lack of knowledge uncertainty around the central mean estimate relative to a hair biomarker model, and in a 31% lower lack of knowledge uncertainty around the central mean estimate relative to a blood biomarker model. Similar results were obtained for the Brazil and Bangladesh populations. Such analyses, used here to evaluate alternative models of dietary MeHg exposure, can be used to refine exposure instruments, improve information used in site management and remediation decision making, and identify sources of uncertainty in risk estimates.
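A minimal two-dimensional Monte Carlo sketch in the spirit of the article: the outer loop draws imperfectly known ("lack of knowledge") quantities, the inner loop draws population heterogeneity, and the dietary and hair-biomarker routes are compared through the spread of their estimated population means. All distributions and conversion factors are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(9)
n_unc, n_var = 200, 2000     # outer: uncertainty draws; inner: population variability

mean_diet, mean_hair = [], []
for _ in range(n_unc):
    # Outer loop - lack-of-knowledge uncertainty (hypothetical values):
    c_fish = rng.normal(0.30, 0.05)          # mean MeHg in fish [mg/kg]
    ratio = rng.normal(0.090, 0.020)         # dose [ug/kg-bw/day] per ug/g hair

    # Inner loop - population heterogeneity:
    intake = rng.lognormal(np.log(30), 0.7, n_var)   # fish intake [g/day]
    bw = rng.normal(65, 12, n_var)                   # body weight [kg]
    hair = rng.lognormal(np.log(2.0), 0.8, n_var)    # hair Hg [ug/g]

    mean_diet.append(np.mean(c_fish * intake / bw))  # dietary-model estimate of the mean
    mean_hair.append(np.mean(ratio * hair))          # hair-biomarker estimate of the mean

for name, m in [("dietary model", mean_diet), ("hair biomarker", mean_hair)]:
    lo, hi = np.percentile(m, [5, 95])
    print(f"{name:15s} mean dose {np.mean(m):.3f} ug/kg-bw/day, "
          f"lack-of-knowledge band (5th-95th) [{lo:.3f}, {hi:.3f}]")
```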

17.
This paper presents a general model for exposure to homegrown foods that is used with a Monte Carlo analysis to determine the relative contributions of variability (Type A uncertainty) and true uncertainty (Type B uncertainty) to the overall variance in prediction of the dose-to-concentration ratio. Although classification of exposure inputs as uncertain or variable is somewhat subjective, food consumption rates and exposure duration are judged to have a predicted variance that is dominated by variability among individuals by age, income, culture, and geographical region, whereas biotransfer factors and partition factors are inputs that, to a large extent, involve uncertainty. Using ingestion of fruits, vegetables, grains, dairy products, meat, and soils assumed to be contaminated by hexachlorobenzene (HCB) and benzo(a)pyrene (BaP) as case studies, a Monte Carlo analysis is used to explore the relative contribution of uncertainty and variability to overall variance in the estimated distribution of potential dose within the population that consumes homegrown foods. It is found that, when soil concentrations are specified, variances in ratios of dose-to-concentration for HCB are equally attributable to uncertainty and variability, whereas for BaP, variance in these ratios is dominated by true uncertainty.

18.
Comparative risk projects can provide broad policy guidance, but they rarely have adequate scientific foundations to support precise risk rankings. Many extant projects report rankings anyway, with limited attention to uncertainty. Stochastic uncertainty, structural uncertainty, and ignorance are types of incertitude that afflict risk comparisons. The recently completed New Jersey Comparative Risk Project was innovative in trying to acknowledge and accommodate some historically ignored uncertainties in a substantive manner. This article examines the methods used and lessons learned from the New Jersey project. Monte Carlo techniques were used to characterize stochastic uncertainty, and sensitivity analysis helped to manage structural uncertainty. A deliberative process and a sorting technique helped manage ignorance. Key findings are that stochastic rankings can be calculated but they reveal such an alarming degree of imprecision that the rankings are no longer useful, whereas sorting techniques are helpful in spite of uncertainty. A deliberative process is helpful to counter analytical overreaching.

19.
Marc Kennedy, Andy Hart. Risk Analysis, 2009, 29(10): 1427-1442
We propose new models for dealing with various sources of variability and uncertainty that influence risk assessments for dietary exposure. The uncertain or random variables involved can interact in complex ways, and the focus is on methodology for integrating their effects and on assessing the relative importance of including different uncertainty model components in the calculation of dietary exposures to contaminants, such as pesticide residues. The combined effect is reflected in the final inferences about the population of residues and subsequent exposure assessments. In particular, we show how measurement uncertainty can have a significant impact on results and discuss novel statistical options for modeling this uncertainty. The effect of measurement error is often ignored (perhaps because the laboratory process conforms to the relevant international standards) or is treated in an ad hoc way. These issues are common to many dietary risk analysis problems, and the methods could be applied to any food and chemical of interest. An example is presented using data on carbendazim in apples and consumption surveys of toddlers.

20.
A robust pricing model for reload stock options in an uncertain environment   (Cited by: 1; self-citations: 0; external citations: 1)
We study a financial market with Knightian uncertainty. Assuming that the price process of the underlying asset (a stock) follows geometric Brownian motion, we establish maximum and minimum pricing models for reload stock options over a set of probability measures. The models are then transformed using key results from the theory of backward stochastic differential equations (BSDEs) and partial differential equations (PDEs). Finally, explicit expressions for the model are derived using results from stochastic processes, and a concrete numerical analysis reveals the significant influence of Knightian uncertainty on the pricing of reload stock options.
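A rough Monte Carlo illustration of maximum/minimum pricing over a set of priors: the drift is shifted across a kappa-ignorance interval and the payoff is valued under each prior; a plain European call stands in for the reload option, and the parameters, including kappa, are assumptions rather than values from the paper.

```python
import numpy as np

rng = np.random.default_rng(10)

S0, K, r, sigma, T = 100.0, 100.0, 0.03, 0.25, 1.0
kappa = 0.3           # size of the Knightian ambiguity interval for the drift shift
n_paths = 200_000

z = rng.standard_normal(n_paths)

def price(theta):
    """Discounted expected call payoff under the prior with drift shift theta."""
    ST = S0 * np.exp((r + theta * sigma - 0.5 * sigma ** 2) * T
                     + sigma * np.sqrt(T) * z)
    return np.exp(-r * T) * np.maximum(ST - K, 0.0).mean()

thetas = np.linspace(-kappa, kappa, 21)      # priors in the kappa-ignorance set
prices = [price(th) for th in thetas]
print(f"minimum (lower) price: {min(prices):.2f}")
print(f"maximum (upper) price: {max(prices):.2f}")
print(f"reference price (theta = 0, no ambiguity): {price(0.0):.2f}")
```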
