Similar Documents
20 similar documents found (search time: 31 ms)
1.
Jan F. Van Impe 《Risk analysis》2011,31(8):1295-1307
The aim of quantitative microbiological risk assessment is to estimate the risk of illness caused by the presence of a pathogen in a food type, and to study the impact of interventions. Because of inherent variability and uncertainty, risk assessments are generally conducted stochastically, and where possible it is advised to characterize variability separately from uncertainty. Sensitivity analysis allows one to identify the input variables to which the outcome of a quantitative microbiological risk assessment is most sensitive. Although a number of methods exist to apply sensitivity analysis to a risk assessment with probabilistic input variables (such as contamination, storage temperature, storage duration, etc.), it is challenging to perform sensitivity analysis when a risk assessment includes a separate characterization of variability and uncertainty of input variables. A procedure is proposed that focuses on the relation between risk estimates obtained by Monte Carlo simulation and the location of pseudo‐randomly sampled input variables within the uncertainty and variability distributions. Within this procedure, two methods, an ANOVA‐like model and Sobol sensitivity indices, are used to obtain and compare the impact of variability and of uncertainty of all input variables, and of model uncertainty and scenario uncertainty. As a case study, this methodology is applied to a risk assessment to estimate the risk of contracting listeriosis due to consumption of deli meats.
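The Sobol indices mentioned in the abstract can be estimated with a standard pick-freeze Monte Carlo scheme. The sketch below uses an invented three-input toy model (not the article's listeriosis assessment) with independent U(0,1) inputs; for f = 4x1 + x2 + x3 the analytic first-order indices are 8/9, 1/18, and 1/18.

```python
import random

random.seed(1)

def model(x1, x2, x3):
    # Toy stand-in for a risk model; x1 dominates the output variance.
    return 4.0 * x1 + x2 + x3

def first_order_sobol(f, dim, n=20000):
    """Pick-freeze (Saltelli 2010) estimator of first-order Sobol indices
    for a model with `dim` independent U(0,1) inputs."""
    A = [[random.random() for _ in range(dim)] for _ in range(n)]
    B = [[random.random() for _ in range(dim)] for _ in range(n)]
    fA = [f(*row) for row in A]
    fB = [f(*row) for row in B]
    mean = sum(fA + fB) / (2 * n)
    var = sum((y - mean) ** 2 for y in fA + fB) / (2 * n)
    S = []
    for i in range(dim):
        # AB_i takes every column from A except column i, which comes from B.
        fABi = [f(*[b[j] if j == i else a[j] for j in range(dim)])
                for a, b in zip(A, B)]
        # V_i estimated as the mean of f(B) * (f(AB_i) - f(A)).
        Vi = sum(fb * (fab - fa) for fb, fab, fa in zip(fB, fABi, fA)) / n
        S.append(Vi / var)
    return S

S = first_order_sobol(model, 3)
```

The same estimator applies unchanged whether the sampled dimensions represent variability or uncertainty, which is what makes it usable inside a two-dimensional Monte Carlo setup.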

2.
In this paper, we present a Pairwise Aggregated Hierarchical Analysis of Ratio-Scale Preferences (PAHAP), a new method for solving discrete alternative multicriteria decision problems. Following the Analytic Hierarchy Process (AHP), PAHAP uses pairwise preference judgments to assess the relative attractiveness of the alternatives. By first aggregating the pairwise judgment ratios of the alternatives across all criteria, and then synthesizing based on these aggregate measures, PAHAP determines overall ratio scale priorities and rankings of the alternatives that are not subject to rank reversal, provided that certain weak consistency requirements are satisfied. Hence, PAHAP can serve as a useful alternative to the original AHP if rank reversal is undesirable, for instance when the system is open and criterion scarcity does not affect the relative attractiveness of the alternatives. Moreover, the single matrix of pairwise aggregated ratings constructed in PAHAP provides useful insights into the decision maker's preference structure. PAHAP requires the same preference information as the original AHP (or, alternatively, the same information as the Referenced AHP, if the criteria are compared based on average (total) value of the alternatives). As it is easier to implement and interpret than previously proposed variants of the conventional AHP which prevent rank reversal, PAHAP also appears attractive from a practitioner's viewpoint.
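As a concrete, much simplified illustration of the aggregate-then-synthesize idea, the sketch below applies a weighted geometric aggregation to invented pairwise judgment matrices and then derives priorities by row geometric means. The published PAHAP procedure differs in detail, so treat this purely as a sketch of aggregating judgment ratios across criteria before synthesizing.

```python
import math

# Hypothetical example: 3 alternatives, 2 criteria with invented weights.
weights = [0.6, 0.4]
# R[c][i][j] = preference ratio of alternative i over j under criterion c
# (reciprocal matrices, as in AHP).
R = [
    [[1, 2, 4], [1/2, 1, 2], [1/4, 1/2, 1]],   # criterion 1
    [[1, 1/3, 1], [3, 1, 3], [1, 1/3, 1]],     # criterion 2
]

def aggregate_priorities(R, weights):
    """Fuse per-criterion ratio matrices into one aggregate matrix via a
    criterion-weighted geometric mean, then synthesize priorities from the
    row geometric means of that single matrix."""
    n = len(R[0])
    agg = [[math.prod(R[c][i][j] ** weights[c] for c in range(len(R)))
            for j in range(n)] for i in range(n)]
    gm = [math.prod(agg[i]) ** (1 / n) for i in range(n)]
    total = sum(gm)
    return [g / total for g in gm]

priorities = aggregate_priorities(R, weights)
```

Because all alternatives are synthesized from one aggregate matrix, adding or removing an alternative rescales but does not reorder the remaining priorities, which is the intuition behind rank-reversal avoidance.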

3.
Wavelet analysis is a new mathematical method developed as a unified field of science over the last decade or so. As a spatially adaptive analytic tool, wavelets are useful for capturing serial correlation where the spectrum has peaks or kinks, as can arise from persistent dependence, seasonality, and other kinds of periodicity. This paper proposes a new class of generally applicable wavelet‐based tests for serial correlation of unknown form in the estimated residuals of a panel regression model, where error components can be one‐way or two‐way, individual and time effects can be fixed or random, and regressors may contain lagged dependent variables or deterministic/stochastic trending variables. Our tests are applicable to unbalanced heterogeneous panel data. They have a convenient N(0,1) limit distribution under the null. No formulation of an alternative model is required, and our tests are consistent against serial correlation of unknown form even in the presence of substantial inhomogeneity in serial correlation across individuals. This is in contrast to existing serial correlation tests for panel models, which ignore inhomogeneity in serial correlation across individuals by assuming a common alternative, and thus have no power against alternatives where the average of serial correlations among individuals is close to zero. We propose and justify a data‐driven method to choose the smoothing parameter, the finest scale in wavelet spectral estimation, making the tests completely operational in practice. The data‐driven finest scale automatically converges to zero under the null hypothesis of no serial correlation and diverges to infinity as the sample size increases under the alternative, ensuring the consistency of our tests. Simulation shows that our tests perform well in finite samples relative to some existing tests.
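The wavelet machinery underlying such tests can be illustrated with its simplest building block, the orthonormal Haar transform, which splits a series into scale-by-scale detail coefficients while preserving its energy (Parseval's identity). This is only the decomposition step, not the paper's test statistic or its data-driven scale selection.

```python
import math

def haar_step(x):
    """One level of the orthonormal Haar transform: returns
    (approximation, detail) coefficients for an even-length signal."""
    s = 1 / math.sqrt(2)
    approx = [s * (x[i] + x[i + 1]) for i in range(0, len(x), 2)]
    detail = [s * (x[i] - x[i + 1]) for i in range(0, len(x), 2)]
    return approx, detail

def haar_dwt(x):
    """Full Haar decomposition of a length-2^k signal: detail coefficients
    at every scale, followed by the coarsest approximation."""
    coeffs = []
    approx = list(x)
    while len(approx) > 1:
        approx, detail = haar_step(approx)
        coeffs.append(detail)
    coeffs.append(approx)
    return coeffs

signal = [2.0, 4.0, 6.0, 8.0, 10.0, 12.0, 14.0, 16.0]
coeffs = haar_dwt(signal)
energy_in = sum(v * v for v in signal)
energy_out = sum(v * v for level in coeffs for v in level)
```

For serially correlated residuals, energy concentrates at particular scales of such a decomposition, which is what a wavelet spectral test exploits.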

4.
This article presents research aimed at developing and testing an online, multistakeholder decision‐aiding framework for informing multiattribute risk management choices associated with energy development and climate change. The framework was designed to provide necessary background information and facilitate internally consistent choices, or choices that are in line with users’ prioritized objectives. In order to test different components of the decision‐aiding framework, a six‐part, 2 × 2 × 2 factorial experiment was conducted, yielding eight treatment scenarios. The three factors included: (1) whether or not users could construct their own alternatives; (2) the level of detail regarding the composition of alternatives users would evaluate; and (3) the way in which a final choice between users’ own constructed (or highest‐ranked) portfolio and an internally consistent portfolio was presented. Participants’ self‐reports revealed the framework was easy to use and providing an opportunity to develop one's own risk‐management alternatives (Factor 1) led to the highest knowledge gains. Empirical measures showed the internal consistency of users’ decisions across all treatments to be lower than expected and confirmed that providing information about alternatives’ composition (Factor 2) resulted in the least internally consistent choices. At the same time, those users who did not develop their own alternatives and were not shown detailed information about the composition of alternatives believed their choices to be the most internally consistent. These results raise concerns about how the amount of information provided and the ability to construct alternatives may inversely affect users’ real and perceived internal consistency.

5.
Methods of engineering risk analysis are based on a functional analysis of systems and on the probabilities (generally Bayesian) of the events and random variables that affect their performances. These methods allow identification of a system's failure modes, computation of its probability of failure or performance deterioration per time unit or operation, and of the contribution of each component to the probabilities and consequences of failures. The model has been extended to include the human decisions and actions that affect components' performances, and the management factors that affect behaviors and can thus be root causes of system failures. By computing the risk with and without proposed measures, one can then set priorities among different risk management options under resource constraints. In this article, I present briefly the engineering risk analysis method, then several illustrations of risk computations that can be used to identify a system's weaknesses and the most cost-effective way to fix them. The first example concerns the heat shield of the space shuttle orbiter and shows the relative risk contribution of the tiles in different areas of the orbiter's surface. The second application is to patient risk in anesthesia and demonstrates how the engineering risk analysis method can be used in the medical domain to rank the benefits of risk mitigation measures, in that case, mostly organizational. The third application is a model of seismic risk analysis and mitigation, with application to the San Francisco Bay area for the assessment of the costs and benefits of different seismic provisions of building codes. In all three cases, some aspects of the results were not intuitively obvious. The probabilistic risk analysis (PRA) method allowed identifying system weaknesses and the most cost-effective way to fix them.
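The "compute the risk with and without proposed measures" step can be sketched for a hypothetical series system: derive the system failure probability from (assumed independent) component failure probabilities, then recompute it under each candidate mitigation to see which buys the largest reduction. All component names and numbers below are invented for illustration; they are not values from the article's case studies.

```python
# Hypothetical series system of three independent components.
p_fail = {"tiles": 0.002, "seals": 0.001, "sensors": 0.004}

def system_failure_prob(p):
    """A series system fails if any component fails (independence assumed)."""
    surv = 1.0
    for q in p.values():
        surv *= (1.0 - q)
    return 1.0 - surv

def risk_reduction(p, component, factor):
    """Scale one component's failure probability by `factor` and report
    the resulting drop in system failure probability."""
    improved = dict(p)
    improved[component] = p[component] * factor
    return system_failure_prob(p) - system_failure_prob(improved)

baseline = system_failure_prob(p_fail)
# Which component is the most cost-effective to harden (halving its risk)?
gains = {c: risk_reduction(p_fail, c, 0.5) for c in p_fail}
best = max(gains, key=gains.get)
```

Under equal mitigation cost, the weakest component yields the largest absolute risk reduction, which is the kind of prioritization the article describes.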

6.
This paper introduces a new notion of consistency for social choice functions, called self‐selectivity, which requires that a social choice function employed by a society to make a choice from a given alternative set should choose itself from among rival such functions when it is employed by the society to make this latter choice as well. A unanimous neutral social choice function turns out to be universally self‐selective if and only if it is Paretian and satisfies independence of irrelevant alternatives. The neutral unanimous social choice functions whose domains consist of linear order profiles on nonempty sets of any finite cardinality induce a class of social welfare functions that inherit Paretianism and independence of irrelevant alternatives in case the social choice function with which one starts is universally self‐selective. Thus, a unanimous and neutral social choice function is universally self‐selective if and only if it is dictatorial. Moreover, for such functions, universal self‐selectivity is equivalent to the conjunction of strategy‐proofness and independence of irrelevant alternatives, and likewise to the conjunction of monotonicity and independence of irrelevant alternatives.

7.
Vicki L Sauter 《Omega》1985,13(4):277-284
A decision-maker's experience is thought to affect how he/she chooses the information that supports his/her selection among alternatives. Unfortunately, results from empirical studies designed to demonstrate this hypothesis are not in agreement about the existence and/or extent of the relationship. Since one possible explanation for the conflicting results is variability in the operationalization of the variable ‘experience’, this study was designed to determine whether it matters if one chooses a macro view of experience (in which the focus is on a decision-maker's overall experience) or a micro view of experience (in which the focus is on specific experience with the decision under consideration). Additional insights regarding problems in operationalizing ‘experience’ were generated as a result of these analyses to provide a basis for further research in this area.

8.
For complex and uncertain multi-attribute decision problems involving a mixture of quantitative and qualitative attribute formats, this paper proposes a modular random VIKOR method (Modular Random VlseKriterijumska Optimizacija I Kompromisno Resenje, Mo-RVIKOR), which can handle decision problems containing multiple information formats without first converting the information to a single form. Crisp numbers and random variables are used for quantitative evaluation information, and probabilistic linguistic term sets for qualitative evaluation information; attribute weights are determined by an improved maximizing-deviation method; the decision objects are then ranked with Mo-RVIKOR. Finally, a C2B customized service quality evaluation project at a company is used to verify the effectiveness of the proposed method.
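The compromise-ranking core of VIKOR (the S, R, and Q measures) can be sketched for the simplest crisp, benefit-criteria case as below. Mo-RVIKOR's handling of random variables and probabilistic linguistic terms, and its deviation-maximizing weights, are not reproduced here; the ratings and weights are invented.

```python
def vikor(scores, weights, v=0.5):
    """Crisp VIKOR for benefit criteria: group utility S, individual
    regret R, and compromise index Q (smaller Q ranks higher).
    `v` weights group utility against individual regret."""
    m, n = len(scores), len(weights)
    best = [max(row[j] for row in scores) for j in range(n)]
    worst = [min(row[j] for row in scores) for j in range(n)]
    S, R = [], []
    for row in scores:
        terms = [weights[j] * (best[j] - row[j]) / (best[j] - worst[j])
                 for j in range(n)]
        S.append(sum(terms))
        R.append(max(terms))
    s_star, s_minus = min(S), max(S)
    r_star, r_minus = min(R), max(R)
    Q = [v * (S[i] - s_star) / (s_minus - s_star)
         + (1 - v) * (R[i] - r_star) / (r_minus - r_star)
         for i in range(m)]
    return S, R, Q

scores = [[7, 8, 9], [8, 7, 6], [9, 9, 5]]   # invented ratings (rows = alternatives)
weights = [0.5, 0.3, 0.2]                    # invented attribute weights
S, R, Q = vikor(scores, weights)
```

The alternative minimizing Q is the compromise solution; in a full VIKOR workflow it is accepted only after the usual acceptable-advantage and acceptable-stability checks.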

9.
In this study, a variance‐based global sensitivity analysis method was first applied to a contamination assessment model of Listeria monocytogenes in cold smoked vacuum packed salmon at consumption. The impact of the choice of the modeling approach (populational or cellular) for the primary and secondary models, as well as the effect of their associated input factors on the final contamination level, was investigated. Results provided a subset of important factors, including the food water activity, its storage temperature, and duration in the domestic refrigerator. A refined sensitivity analysis was then performed to rank the important factors, tested over narrower ranges of variation corresponding to their current distributions, using three techniques: ANOVA, the Spearman correlation coefficient, and partial least squares regression.
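Of the three ranking techniques mentioned, the Spearman rank correlation coefficient is the simplest to sketch: correlate the rank of each input factor with the rank of the model output across Monte Carlo samples. The toy growth response and input ranges below are invented placeholders, not the article's L. monocytogenes model.

```python
import random

random.seed(7)

def ranks(xs):
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0] * len(xs)
    for rank, i in enumerate(order):
        r[i] = rank + 1
    return r

def spearman(x, y):
    """Spearman rank correlation (no-ties case): Pearson correlation
    computed on the ranks of the samples."""
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    mx = my = (n + 1) / 2
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    vx = sum((a - mx) ** 2 for a in rx)
    vy = sum((b - my) ** 2 for b in ry)
    return cov / (vx * vy) ** 0.5

n = 2000
aw = [random.uniform(0.90, 0.99) for _ in range(n)]   # water activity (made-up range)
temp = [random.uniform(2.0, 12.0) for _ in range(n)]  # fridge temperature, deg C
noise = [random.gauss(0.0, 0.1) for _ in range(n)]
# Toy contamination response: driven strongly by temperature, weakly by a_w.
growth = [0.5 * t + 2.0 * a + e for t, a, e in zip(temp, aw, noise)]

rho_temp = spearman(temp, growth)
rho_aw = spearman(aw, growth)
```

Ranking factors by the magnitude of their rank correlation with the output is exactly the kind of refined screening the abstract describes.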

10.
《Risk analysis》2018,38(4):710-723
Despite global efforts to reduce seismic risk, actual preparedness levels remain universally low. Although earthquake‐resistant building design is the most efficient way to decrease potential losses, it is not a legal requirement in all earthquake‐prone countries, and even where it is, it is often not strictly enforced. Risk communication encouraging homeowners to take precautionary measures is therefore an important means of enhancing a country's earthquake resilience. Our study illustrates that specific interactions of mood, perceived risk, and frame type significantly affect homeowners’ attitudes toward general precautionary measures for earthquakes. The interdependencies of the variables mood, risk information, and frame type were tested in an experimental 2 × 2 × 2 design (N = 156). Only in combination, and not on their own, do these variables effectively influence attitudes toward general precautionary measures for earthquakes. The control variables gender, “trait anxiety” index, and alteration of perceived risk adjust the effect. Overall, the group with the strongest attitudes toward general precautionary actions for earthquakes is homeowners with induced negative mood who process high‐risk information and gain‐framed messages. However, the conditions combining induced negative mood, low‐risk information, and loss‐framed messages, and induced positive mood, low‐risk information, and gain‐framed messages, also significantly influence homeowners’ attitudes toward general precautionary measures for earthquakes. These results mostly confirm previous findings in the field of health communication. For practitioners, our study emphasizes that carefully compiled communication measures are a powerful means to encourage precautionary attitudes among homeowners, especially those with an elevated perceived risk.

11.
Marco Percoco 《Risk analysis》2011,31(7):1038-1042
Natural and man‐made disasters are currently a source of major concern for contemporary societies. In order to understand their economic impacts, the inoperability input‐output model has recently gained recognition among scholars. In a recent paper, Percoco (2006) has proposed an extension of the model to map the technologically most important sectors through so‐called fields of influence. In the present note we aim to show that this importance measure also has a clear connection with local sensitivity analysis theory.

12.
Introduction of classical swine fever virus (CSFV) is a continuing threat to the pig production sector in the European Union. A scenario tree model was developed to obtain more insight into the main risk factors determining the probability of CSFV introduction (P(CSFV)). As this model contains many uncertain input parameters, sensitivity analysis was used to indicate which of these parameters influence model results most. Group screening combined with the statistical techniques of design of experiments and meta-modeling was applied to detect the most important uncertain input parameters among a total of 257 parameters. The response variable chosen was the annual P(CSFV) into the Netherlands. Only 128 scenario calculations were needed to specify the final meta-model. A consecutive one-at-a-time sensitivity analysis was performed with the main effects of this meta-model to explore their impact on the ranking of risk factors contributing most to the annual P(CSFV). The results indicated that model outcome is most sensitive to the uncertain input parameters concerning the expected number of classical swine fever epidemics in Germany, Belgium, and the United Kingdom and the probability that CSFV survives in an empty livestock truck traveling over a distance of 0-900 km.
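Once a linear meta-model has been fitted to the screening runs, the one-at-a-time step reduces to comparing output swings as each parameter traverses its range while the others stay fixed. The coefficients, parameter names, and ranges below are invented placeholders, not values from the study.

```python
# Hypothetical linear meta-model of annual P(CSFV): main-effect
# coefficients and parameter ranges are invented for illustration.
coeffs = {
    "epidemics_neighbours": 4.0e-3,
    "truck_virus_survival": 2.5e-3,
    "import_certificates": 0.4e-3,
}
ranges = {
    "epidemics_neighbours": (0.0, 2.0),
    "truck_virus_survival": (0.0, 1.0),
    "import_certificates": (0.0, 1.0),
}

def oat_swings(coeffs, ranges):
    """One-at-a-time sensitivity: the output swing when each parameter
    moves across its full range with the others fixed. For a linear
    meta-model this is exactly |coefficient| * range width."""
    return {k: abs(coeffs[k]) * (ranges[k][1] - ranges[k][0])
            for k in coeffs}

swings = oat_swings(coeffs, ranges)
ranking = sorted(swings, key=swings.get, reverse=True)
```

Sorting parameters by swing reproduces the kind of risk-factor ranking the abstract reports.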

13.
Applications of Monte Carlo simulation methods to quantitative risk assessment are becoming increasingly popular. With this methodology, investigators have become concerned about correlations among input variables which might affect the resulting distribution of risk. We show that the choice of input distributions in these simulations likely has a larger effect on the resultant risk distribution than does the inclusion or exclusion of correlations. Previous investigators have studied the effect of correlated input variables for the addition of variables with any underlying distribution and for the product of lognormally distributed variables. The effects in the main part of the distribution are small unless the correlation and variances are large. We extend this work by considering addition, multiplication and division of two variables with assumed normal, lognormal, uniform and triangular distributions. For all possible pairwise combinations, we find that the effects of correlated input variables are similar to those observed for lognormal distributions, and thus relatively small overall. The effect of using different distributions, however, can be large.
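The effect of input correlation can be checked directly by simulation: for two standard normal inputs with correlation rho, Var(X + Y) = 2(1 + rho), so the correlation's contribution is easy to quantify and compare against the (often larger) effect of changing the input distribution itself. A minimal seeded sketch, consistent with but not taken from the article:

```python
import math
import random

random.seed(3)

def correlated_pair(rho):
    """Draw a standard-normal pair with correlation rho
    (the two-variable Cholesky construction)."""
    z1, z2 = random.gauss(0, 1), random.gauss(0, 1)
    return z1, rho * z1 + math.sqrt(1 - rho * rho) * z2

def var_of_sum(rho, n=50000):
    """Monte Carlo estimate of Var(X + Y) for correlated normals."""
    sums = []
    for _ in range(n):
        x, y = correlated_pair(rho)
        sums.append(x + y)
    m = sum(sums) / n
    return sum((s - m) ** 2 for s in sums) / n

v_indep = var_of_sum(0.0)   # analytically 2.0
v_corr = var_of_sum(0.5)    # analytically 2(1 + 0.5) = 3.0
```

Repeating the experiment with, say, uniform or triangular inputs in place of the normals shows how the shape of the resulting distribution shifts far more with the input family than with moderate correlation.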

14.
Combining ideas from standard finance theory and behavioral finance theory, this paper seeks to characterize the mechanism by which investor sentiment is generated. Starting from the factors that drive changes in investor sentiment, such as the monetary environment, market returns, market volatility, and returns on related assets, and introducing two intermediate variables, market investment value and market expectations, it builds a conceptual model of investor sentiment generation that includes both direct and indirect influence paths. Using 667 daily observations from the Chinese stock market between July 1, 2014 and March 31, 2017, an empirical study is conducted on the basis of a VAR model. The results show that market returns have a direct positive effect on investor sentiment; market volatility and returns on related assets act negatively on investor sentiment through the mediating variable of market expectations; and the economic-cycle variable introduced after model revision has a positive effect on investor sentiment through the mediating variable of market investment value. A positive feedback loop among market returns, market investment value, and investor sentiment is also identified. The study reveals the system of factors influencing the generation of investor sentiment and the paths through which they operate, deepening research in this area to the level of mechanism analysis, and provides indirect evidence of excessive speculation in the Chinese stock market.

15.
An ongoing, important question in the operations strategy literature pertains to tradeoffs: Can manufacturers focus on multiple priorities simultaneously or achieve strength on multiple capabilities without sacrificing performance on another? In this paper, we accumulate, integrate, and examine the wide spectrum of conclusions reached in the literature concerning tradeoffs using modified meta‐analysis methods. Based on two decades of empirical research in operations strategy, we find that the evidence in the literature indicates manufacturers, on average, do not report experiencing tradeoffs among the competitive dimensions of quality, delivery, flexibility, and cost as suggested by the classical tradeoffs model. Our meta‐analysis also reveals that the way variables are operationalized, whether initiatives are implemented, and the unit of analysis are all related to the degree and nature of the evidence a paper contains with respect to the tradeoffs issue. We interpret our meta‐analysis results in the context of the prevailing model of manufacturing strategy and the theory of performance frontiers. We also discuss how the research designs used in this literature, which are predominantly cross‐sectional, affect the nature of the evidence generated and the conclusions that can be drawn. We go on to suggest research designs that more directly assess the tradeoffs issue.

16.
How much do migrants stand to gain in income from moving across borders? Answering this question is complicated by non‐random selection of migrants from the general population, which makes it hard to obtain an appropriate comparison group of non‐migrants. New Zealand allows a quota of Tongans to immigrate each year with a random ballot used to choose among the excess number of applicants. A unique survey conducted by the authors allows experimental estimates of the income gains from migration to be obtained by comparing the incomes of migrants to those who applied to migrate, but whose names were not drawn in the ballot, after allowing for the effect of non‐compliance among some of those whose names were drawn. We also conducted a survey of individuals who did not apply for the ballot. Comparing this non‐applicant group to the migrants enables assessment of the degree to which non‐experimental methods can provide an unbiased estimate of the income gains from migration. We find evidence of migrants being positively selected in terms of both observed and unobserved skills. As a result, non‐experimental methods other than instrumental variables are found to overstate the gains from migration by 20–82%, with difference‐in‐differences and bias‐adjusted matching estimators performing best among the alternatives to instrumental variables. (JEL: J61, F22, C21)
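The difference-in-differences estimator evaluated in the paper is simple to state: the treated group's before-after change minus the comparison group's before-after change. The group labels and weekly income figures below are invented for illustration, not the survey's estimates.

```python
# Hypothetical weekly incomes; all numbers invented for illustration.
income = {
    ("migrant", "before"): 60.0,      ("migrant", "after"): 350.0,
    ("ballot_loser", "before"): 58.0, ("ballot_loser", "after"): 70.0,
}

def diff_in_diff(y):
    """Difference-in-differences: the treated group's change over time
    minus the comparison group's change, which nets out common trends
    under the parallel-trends assumption."""
    treated = y[("migrant", "after")] - y[("migrant", "before")]
    control = y[("ballot_loser", "after")] - y[("ballot_loser", "before")]
    return treated - control

gain = diff_in_diff(income)
```

The paper's point is that an estimator like this, benchmarked against the ballot-based experimental estimate, still overstates the gain when migrants are positively selected on unobservables.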

17.
Moment independent methods for the sensitivity analysis of model output are attracting growing attention among both academics and practitioners. However, the lack of benchmarks against which to compare numerical strategies forces one to rely on ad hoc experiments in estimating the sensitivity measures. This article introduces a methodology that allows one to obtain moment independent sensitivity measures analytically. We illustrate the procedure by implementing four test cases with different model structures and model input distributions. Numerical experiments are performed at increasing sample size to check convergence of the sensitivity estimates to the analytical values.
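A common moment-independent measure is Borgonovo's delta: half the expected L1 distance between the unconditional output density and the output density conditional on an input. A crude histogram estimator on a toy model (a sketch of the general idea, not one of the article's test cases) looks like this:

```python
import random

random.seed(11)

def delta_estimate(xs, ys, n_slices=10, n_bins=10):
    """Histogram-based estimate of Borgonovo's delta: average, over
    equal-count slices of the input, half the L1 distance between the
    unconditional and slice-conditional output histograms."""
    n = len(ys)
    lo, hi = min(ys), max(ys)
    width = (hi - lo) / n_bins or 1.0

    def hist(vals):
        counts = [0] * n_bins
        for v in vals:
            b = min(int((v - lo) / width), n_bins - 1)
            counts[b] += 1
        return [c / len(vals) for c in counts]

    p_all = hist(ys)
    order = sorted(range(n), key=lambda i: xs[i])
    size = n // n_slices
    delta = 0.0
    for s in range(n_slices):
        idx = order[s * size:(s + 1) * size]
        p_cond = hist([ys[i] for i in idx])
        delta += 0.5 * sum(abs(a - b) for a, b in zip(p_all, p_cond)) / n_slices
    return delta

n = 5000
x1 = [random.random() for _ in range(n)]
x2 = [random.random() for _ in range(n)]
y = list(x1)          # toy model: the output IS x1, so x2 is irrelevant
d1 = delta_estimate(x1, y)
d2 = delta_estimate(x2, y)
```

The article's contribution is precisely to provide analytic values of such measures against which crude numerical estimators like this one can be benchmarked.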

18.
A multiattribute decision problem with imprecise parameters refers to one in which at least one of the parameters such as attribute weights and value scores is not represented by precise numerical values. Some well-known types of incomplete attribute weights are chosen and analyzed to find their extreme points. In doing so, we show that their coefficients matrix, by itself or by a change of variables, belongs to a class of M-matrix which enables us to find its extreme points readily due to the inverse-positive property. The knowledge of extreme points not only helps us to prioritize alternatives but also supports iterative exploration of the decision-maker’s preference by investigating modified extreme points caused by additional preference information. A wide range of eligible attribute weights, however, often fails to result in the best alternative or a complete ranking of alternatives. To address this situation, we consider an approximate weighting method, the so-called minimizing squared deviations from extreme points (MSD) method, which locates the attribute weights at the barycenter of a weight set. Accordingly, the MSD approach extends the rank order centroid (ROC) weighting method, which is known to outperform other approximate weighting methods in the case of ranked attribute weights. The evidence of the MSD’s superiority over a linear program-based weighting method is verified via simulation analysis under different forms of incomplete attribute weights.
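The ROC weights that MSD generalizes are easy to compute and, as the abstract notes, sit at the barycenter of the extreme points of the ranked-weights set {w : w1 >= ... >= wn >= 0, sum w = 1}. The sketch below verifies that coincidence for n = 3; it covers only this simplest form of incomplete weights, not the other forms the paper analyzes.

```python
def roc_weights(n):
    """Rank order centroid weights for n criteria ranked w1 >= ... >= wn:
    w_k = (1/n) * sum_{j=k}^{n} 1/j."""
    return [sum(1.0 / j for j in range(k, n + 1)) / n for k in range(1, n + 1)]

def extreme_points(n):
    """Extreme points of the ranked-weights simplex: for k = 1..n,
    e_k puts mass 1/k on the k highest-ranked criteria."""
    return [[1.0 / k if i < k else 0.0 for i in range(n)] for k in range(1, n + 1)]

w = roc_weights(3)                      # [11/18, 5/18, 2/18]
pts = extreme_points(3)
centroid = [sum(p[i] for p in pts) / len(pts) for i in range(3)]
```

For other incomplete-weight forms the extreme points change, and locating the barycenter of the resulting set is exactly what the MSD approach does.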

19.
Interest in examining both the uncertainty and variability in environmental health risk assessments has led to increased use of methods for propagating uncertainty. While a variety of approaches have been described, the advent of both powerful personal computers and commercially available simulation software has led to increased use of Monte Carlo simulation. Although most analysts and regulators are encouraged by these developments, some are concerned that Monte Carlo analysis is being applied uncritically. The validity of any analysis is contingent on the validity of the inputs to the analysis. In the propagation of uncertainty or variability, it is essential that the statistical distributions of input variables are properly specified. Furthermore, any dependencies among the input variables must be considered in the analysis. In light of the potential difficulty in specifying dependencies among input variables, it is useful to consider whether there exist rules of thumb as to when correlations can be safely ignored (i.e., when little overall precision is gained by an additional effort to improve upon an estimation of correlation). We make use of well-known error propagation formulas to develop expressions intended to aid the analyst in situations wherein normally and lognormally distributed variables are linearly correlated.
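On the log scale, the standard error-propagation formula for a product of correlated lognormals is Var(ln XY) = s1^2 + s2^2 + 2*rho*s1*s2, so the fraction of log-variance contributed by the correlation term gives a quick rule of thumb for when correlation can be ignored. A minimal sketch; the fraction-based reading is ours, not a formula quoted from the article:

```python
def log_variance_of_product(s1, s2, rho):
    """For lognormal X, Y with log-scale standard deviations s1, s2 and
    log-scale correlation rho, Var(ln XY) = s1^2 + s2^2 + 2*rho*s1*s2."""
    return s1 ** 2 + s2 ** 2 + 2.0 * rho * s1 * s2

def correlation_share(s1, s2, rho):
    """Fraction of the log-variance contributed by the correlation term:
    when this is small, ignoring the correlation costs little precision."""
    total = log_variance_of_product(s1, s2, rho)
    return 2.0 * rho * s1 * s2 / total

share_small = correlation_share(0.1, 0.1, 0.2)   # modest spread, weak correlation
share_big = correlation_share(0.5, 0.5, 0.9)     # large spread, strong correlation
```

This mirrors the abstract's observation that correlation matters in the main body of the distribution only when both the correlation and the variances are large.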

20.
Many environmental and risk management decisions are made jointly by technical experts and members of the public. Frequently, their task is to select from among management alternatives whose outcomes are subject to varying degrees of uncertainty. Although it is recognized that how this uncertainty is interpreted can significantly affect decision‐making processes and choices, little research has examined similarities and differences between expert and public understandings of uncertainty. We present results from a web‐based survey that directly compares expert and lay interpretations and understandings of different expressions of uncertainty in the context of evaluating the consequences of proposed environmental management actions. Participants responded to two hypothetical but realistic scenarios involving trade‐offs between environmental and other objectives and were asked a series of questions about their comprehension of the uncertainty information, their preferred choice among the alternatives, and the associated difficulty and amount of effort. Results demonstrate that experts and laypersons tend to use presentations of numerical ranges and evaluative labels differently; interestingly, the observed differences between the two groups were not explained by differences in numeracy or concerns for the predicted environmental losses. These findings question many of the usual presumptions about how uncertainty should be presented as part of deliberative risk‐ and environmental‐management processes.

Copyright © 北京勤云科技发展有限公司  京ICP备09084417号