Similar Documents
20 similar documents found.
1.
This paper uses “revealed probability trade‐offs” to provide a natural foundation for probability weighting in the famous von Neumann and Morgenstern axiomatic set‐up for expected utility. In particular, it shows that a rank‐dependent preference functional is obtained in this set‐up when the independence axiom is weakened to stochastic dominance and a probability trade‐off consistency condition. In contrast with the existing axiomatizations of rank‐dependent utility, the resulting axioms allow for complete flexibility regarding the outcome space. Consequently, a parameter‐free test/elicitation of rank‐dependent utility becomes possible. The probability‐oriented approach of this paper also provides theoretical foundations for probabilistic attitudes towards risk. It is shown that the preference conditions that characterize the shape of the probability weighting function can be derived from simple probability trade‐off conditions.
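For concreteness, a rank‐dependent functional evaluates a lottery by weighting utilities with increments of a probability weighting function applied to cumulative, rank‐ordered probabilities. The sketch below is a minimal, generic implementation; the inverse‐S weighting function and the square‐root utility are illustrative assumptions, not the functional forms axiomatized in the paper.

```python
import numpy as np

def rdu(outcomes, probs, u, w):
    """Rank-dependent utility of a finite lottery: each outcome's decision weight
    is the increment of w over the cumulative probability of that outcome or better."""
    order = np.argsort(outcomes)[::-1]                  # rank outcomes from best to worst
    x, p = np.asarray(outcomes, float)[order], np.asarray(probs, float)[order]
    cum = np.cumsum(p)
    weights = w(cum) - w(np.concatenate(([0.0], cum[:-1])))
    return float(np.sum(weights * u(x)))

# Illustrative (assumed) functional forms:
w = lambda p: p**0.61 / (p**0.61 + (1 - p)**0.61)**(1 / 0.61)   # inverse-S weighting
u = lambda x: np.sqrt(x)                                        # concave utility

print(rdu([100.0, 0.0], [0.1, 0.9], u, w))   # the 10% chance of the gain is overweighted
```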

2.
In the analysis of the risk associated with rare events that may lead to catastrophic consequences under large uncertainty, it is questionable whether the knowledge and information available for the analysis can be reflected properly by probabilities. Approaches other than purely probabilistic ones have been suggested, for example, using interval probabilities, possibilistic measures, or qualitative methods. In this article, we look into the problem and identify a number of issues that are foundational for its treatment. The foundational issues addressed reflect on the position that “probability is perfect” and openly consider the need for an extended framework for risk assessment that reflects the separation that exists in practice between analyst and decisionmaker.

3.
PG Moore, H Thomas. Omega, 1975, 3(6): 657-672
The decision analysis approach to problem solving is widely documented. A range of articles and books have considered ways of analysing problems under uncertainty and also methods for formalising an individual's attitude towards risk in terms of cardinal utility measures. In order to carry out such analyses, the uncertainty in any problem has to be explicitly quantified in probabilistic terms. The present article reviews procedures for assessing probabilities and proposes practical guidelines indicating those methods which, in the authors' opinion, are most fruitful for executives to use.

4.
Rex V. Brown. Decision Sciences, 1978, 9(4): 543-554
When making a current decision, such as choosing an experiment, a subject will often take into account “subsequent acts” to which he has not yet committed. Common practice requires modeling through preposterior analysis, which treats one act as certain conditional on the intervening information modeled. This is not logically necessary, since the same expected utilities could be obtained by properly conditioning utility on any selection of events (including subsequent acts). The subject could assess utility marginal on subsequent acts, or conditional on subsequent acts treated as uncertain events. The preposterior model is a special case of the latter, in which the conditioning information is modeled in sufficient detail to imply subsequent-act probabilities of zero or one. This paper argues that attempts at preposterior modeling are often unsuccessful and have critically flawed much current practice in decision analysis. Simpler approaches such as the “acts-as-events” model are intrinsically less dependent on restrictive assumptions and have been successfully applied to many real-world decisions.
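Schematically, in generic notation rather than the author's, the logical point is an application of the law of total expectation: treating the subsequent acts $a$ as uncertain events gives the same expected utility for a current decision $d$,
\[
\mathrm{EU}(d) \;=\; \mathbb{E}\left[u \mid d\right] \;=\; \sum_{a} \Pr(a \mid d)\, \mathbb{E}\left[u \mid d, a\right],
\]
and a preposterior model is the special case in which the intervening information is modeled in enough detail that each $\Pr(a \mid d,\ \text{information})$ equals zero or one.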

5.
Ali Mosleh. Risk Analysis, 2012, 32(11): 1888-1900
Credit risk is the potential exposure of a creditor to an obligor's failure or refusal to repay a debt in principal or interest. This potential exposure is measured in terms of the probability of default. Many models have been developed to estimate credit risk, with rating agencies dating back to the 19th century; they provide their assessments of default probabilities and transition probabilities for various firms in their annual reports. Regulatory capital requirements for credit risk outlined by the Basel Committee on Banking Supervision have made it essential for banks and financial institutions to develop sophisticated models in an attempt to measure credit risk with higher accuracy. The Bayesian framework proposed in this article uses techniques developed in the physical sciences and engineering for dealing with model uncertainty and expert accuracy to obtain improved estimates of credit risk and the associated uncertainties. The approach uses estimates from one or more rating agencies and incorporates their historical accuracy (past performance data) in estimating future default risk and transition probabilities. Several examples demonstrate that the proposed methodology can assess default probability with accuracy exceeding that of the individual models. Moreover, the methodology accounts for potentially significant departures from “nominal predictions” due to “upsetting events” such as the 2008 global banking crisis.
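As a minimal sketch of Bayesian aggregation in this spirit (not the article's specific model), the snippet below combines hypothetical default-probability estimates from two rating agencies, weighting each report by an error spread meant to stand in for its historical accuracy; all numbers are invented for illustration.

```python
import numpy as np
from scipy.stats import norm

# Grid of candidate "true" one-year default probabilities with a flat prior.
p_grid = np.linspace(0.001, 0.20, 400)
prior = np.ones_like(p_grid) / p_grid.size

# Hypothetical agency estimates and error spreads inferred from past performance.
agency_estimates = [0.02, 0.035]
agency_sigmas = [0.01, 0.02]      # larger sigma = historically less accurate

posterior = prior.copy()
for est, sigma in zip(agency_estimates, agency_sigmas):
    posterior *= norm.pdf(est, loc=p_grid, scale=sigma)   # likelihood of each report
posterior /= posterior.sum()

combined_pd = float(np.sum(p_grid * posterior))            # posterior-mean default probability
print(f"combined default probability estimate: {combined_pd:.4f}")
```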

6.
Event-tree analysis with imprecise probabilities
You X, Tonon F. Risk Analysis, 2012, 32(2): 330-344
Novel methods are proposed for event-tree analysis under imprecise probabilities, where one can measure chance or uncertainty without sharp numerical probabilities and express the available evidence as upper and lower previsions (or expectations) of gambles (or bounded real functions). Sets of upper and lower previsions generate a convex set of probability distributions (or measures), and any probability distribution in this convex set should be considered in the event-tree analysis. This article focuses on the calculation of upper and lower bounds on the prevision (or the probability) of some outcome at the bottom of the event tree. Three cases of given information/judgments on probabilities of outcomes are considered: (1) probabilities conditional on the occurrence of the event at the upper level; (2) total probabilities of occurrence, that is, not conditional on other events; and (3) a combination of the previous two cases. The corresponding algorithms with imprecise probabilities under the three cases are explained and illustrated by simple examples.
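A minimal sketch of the simplest situation of case (1): if each conditional branch probability along a single path is only known to lie in an interval, and the intervals constrain the branches independently, the bounds on the path probability are just the products of the corresponding endpoints. The numbers below are hypothetical; the article's algorithms handle the general convex sets induced by upper and lower previsions.

```python
from functools import reduce

def path_probability_bounds(branch_intervals):
    """Lower/upper bound on an event-tree path probability when each conditional
    branch probability is only known to lie in an interval (independent constraints)."""
    lower = reduce(lambda acc, iv: acc * iv[0], branch_intervals, 1.0)
    upper = reduce(lambda acc, iv: acc * iv[1], branch_intervals, 1.0)
    return lower, upper

# Initiating event -> safety system fails -> containment fails (hypothetical intervals)
path = [(0.01, 0.02), (0.1, 0.3), (0.05, 0.2)]
print(path_probability_bounds(path))   # (5e-05, 0.0012)
```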

7.
Mean‐deviation analysis, along with the existing theories of coherent risk measures and dual utility, is examined in the context of the theory of choice under uncertainty, which studies rational preference relations for random outcomes based on different sets of axioms such as transitivity, monotonicity, and continuity. An axiomatic foundation of the theory of coherent risk measures is obtained as a relaxation of the axioms of dual utility theory, and a further relaxation of the axioms is shown to lead to mean‐deviation analysis. Paradoxes arising from the sets of axioms corresponding to these theories and their possible resolutions are discussed, and the application of mean‐deviation analysis to optimal risk sharing and portfolio selection in the context of rational choice is considered.

8.
Risk Analysis, 2018, 38(4): 666-679
We test here the risk communication proposition that explicit expert acknowledgment of uncertainty in risk estimates can enhance trust and other reactions. We manipulated such a scientific uncertainty message, accompanied by probabilities (20%, 70%, implicit [“will occur”] 100%) and time periods (10 or 30 years) in major (≥magnitude 8) earthquake risk estimates to test potential effects on residents potentially affected by seismic activity on the San Andreas fault in the San Francisco Bay Area (n = 750). The uncertainty acknowledgment increased belief that these specific experts were more honest and open, and led to statistically (but not substantively) significant increases in trust in seismic experts generally only for the 20% probability (vs. certainty) and shorter versus longer time period. The acknowledgment did not change judged risk, preparedness intentions, or mitigation policy support. Probability effects independent of the explicit admission of expert uncertainty were also insignificant except for judged risk, which rose or fell slightly depending upon the measure of judged risk used. Overall, both qualitative expressions of uncertainty and quantitative probabilities had limited effects on public reaction. These results imply that both theoretical arguments for positive effects, and practitioners' potential concerns for negative effects, of uncertainty expression may have been overblown. There may be good reasons to still acknowledge experts' uncertainties, but those merit separate justification and their own empirical tests.

9.
In many problems of risk analysis, failure is equivalent to the event of a random risk factor exceeding a given threshold. Failure probabilities can be controlled if a decisionmaker is able to set the threshold at an appropriate level. This abstract situation applies, for example, to environmental risks with infrastructure controls; to supply chain risks with inventory controls; and to insurance solvency risks with capital controls. However, uncertainty around the distribution of the risk factor implies that parameter error will be present and that the measures taken to control failure probabilities may not be effective. We show that parameter uncertainty increases the probability (understood as expected frequency) of failures. For a large class of loss distributions, arising from increasing transformations of location‐scale families (including the log‐normal, Weibull, and Pareto distributions), the article shows that failure probabilities can be exactly calculated, as they are independent of the true (but unknown) parameters. Hence it is possible to obtain an explicit measure of the effect of parameter uncertainty on failure probability. Failure probability can be controlled in two different ways: (1) by reducing the nominal required failure probability, depending on the size of the available data set, and (2) by modifying the distribution itself that is used to calculate the risk control. Approach (1) corresponds to a frequentist/regulatory view of probability, while approach (2) is consistent with a Bayesian/personalistic view. We furthermore show that the two approaches are consistent in achieving the required failure probability. Finally, we briefly discuss the effects of data pooling and its systemic risk implications.
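A small Monte Carlo sketch (not the article's exact calculation) illustrates the effect for a log-normal risk factor: when the threshold is set at the plug-in estimate of the 99% quantile from a sample of size 30, the realized failure frequency comes out noticeably above the nominal 1% (roughly 1.5% with these settings), reflecting the extra failures induced by parameter estimation error.

```python
import numpy as np

rng = np.random.default_rng(0)
mu, sigma = 0.0, 1.0          # true (unknown to the decisionmaker) log-normal parameters
n, alpha, trials = 30, 0.01, 20_000

failures = 0
for _ in range(trials):
    sample = rng.lognormal(mu, sigma, size=n)
    m, s = np.log(sample).mean(), np.log(sample).std(ddof=1)
    threshold = np.exp(m + s * 2.3263)                  # plug-in 99% quantile (z_0.99 ≈ 2.3263)
    failures += rng.lognormal(mu, sigma) > threshold    # next period's realization exceeds it?

print(f"nominal failure probability: {alpha:.3f}, realized frequency: {failures / trials:.3f}")
```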

10.
11.
Wildfires present a complex applied risk management environment, but relatively little attention has been paid to behavioral and cognitive responses to risk among public agency wildfire managers. This study investigates responses to risk, including probability weighting and risk aversion, in a wildfire management context using a survey‐based experiment administered to federal wildfire managers. Respondents were presented with a multiattribute lottery‐choice experiment in which each lottery is defined by three outcome attributes: expenditures for fire suppression, damage to private property, and exposure of firefighters to the risk of aviation‐related fatalities. Respondents chose one of two strategies, each of which includes “good” (low cost/low damage) and “bad” (high cost/high damage) outcomes that occur with varying probabilities. The choice task also incorporates an information framing experiment to test whether information about fatality risk to firefighters alters managers' responses to risk. Results suggest that managers exhibit risk aversion and nonlinear probability weighting, which can result in choices that do not minimize expected expenditures, property damage, or firefighter exposure. Information framing tends to result in choices that reduce the risk of aviation fatalities, but exacerbates nonlinear probability weighting.

12.
Subjective probability distributions constitute an important part of the input to decision analysis and other decision aids. The long list of persistent biases associated with human judgments under uncertainty [16] suggests, however, that these biases can be translated into the elicited probabilities, which, in turn, may be reflected in the output of the decision aids, potentially leading to biased decisions. This experiment studies the effectiveness of three debiasing techniques in the elicitation of subjective probability distributions. It is hypothesized that the Socratic procedure [18] and the devil's advocate approach [6] [7] [31] [32] [33] [34] will increase subjective uncertainty and thus help assessors overcome a persistent bias called “overconfidence.” Mental encoding of the frequency of the observed instances into prespecified intervals, however, is expected to decrease subjective uncertainty and to help assessors better capture, mentally, the location and skewness of the observed distribution. The assessors' ratings of uncertainty confirm these hypotheses related to subjective uncertainty, but three other measures based on the dispersion of the elicited subjective probability distributions do not. Possible explanations are discussed. An intriguing explanation is that debiasing may affect what some have called “second order” uncertainty. While uncertainty ratings may include this second component, the measures based on the elicited distributions relate only to “first order” uncertainty.

13.
We consider the problem of managing demand risk in tactical supply chain planning for a particular global consumer electronics company. The company follows a deterministic replenishment‐and‐planning process despite considerable demand uncertainty. As a possible way to formally address uncertainty, we provide two risk measures, “demand‐at‐risk” (DaR) and “inventory‐at‐risk” (IaR), and two linear programming models to help manage demand uncertainty. The first model is deterministic and can be used to allocate the replenishment schedule from the plants among the customers as per the existing process. The other model is stochastic and can be used to determine the “ideal” replenishment request from the plants under demand uncertainty. The gap between the replenishment requests produced by the two models, together with the values of the risk measures, can be used by the company to reallocate capacity among products and thus manage demand/inventory risk.
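One plausible quantile-style reading of the two measures (an assumption for illustration; the paper defines DaR and IaR in its own planning context) treats them like value-at-risk applied to demand shortfall and excess stock under a fixed replenishment plan:

```python
import numpy as np

rng = np.random.default_rng(1)
demand = rng.normal(1000, 250, size=10_000)     # hypothetical demand scenarios
replenishment = 1100                            # planned replenishment quantity

unmet = np.maximum(demand - replenishment, 0)   # shortfall per scenario
leftover = np.maximum(replenishment - demand, 0)

dar = np.quantile(unmet, 0.95)       # "demand-at-risk": 95th percentile shortfall
iar = np.quantile(leftover, 0.95)    # "inventory-at-risk": 95th percentile excess stock
print(f"DaR(95%) = {dar:.0f}, IaR(95%) = {iar:.0f}")
```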

14.
The present study investigates U.S. Department of Agriculture inspection records in the Agricultural Quarantine Activity System database to estimate the probability of quarantine pests on propagative plant materials imported from various countries of origin and to develop a methodology for ranking the risk of country–commodity combinations based on quarantine pest interceptions. Data collected from October 2014 to January 2016 were used for developing predictive models and for a validation study. A generalized linear model with Bayesian inference and a generalized linear mixed effects model were used to compare the interception rates of quarantine pests on different country–commodity combinations. The prediction ability of the generalized linear mixed effects models was greater than that of the generalized linear models. The estimated pest interception probability and confidence interval for each country–commodity combination was categorized into one of four compliance levels (“High,” “Medium,” “Low,” and “Poor/Unacceptable”) using K‐means clustering analysis. This study presents a risk‐based categorization for each country–commodity combination based on the probability of quarantine pest interceptions and the uncertainty in that assessment.
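A toy sketch of the categorization step only (hypothetical counts, and a simple Beta posterior standing in for the paper's generalized linear models): estimate an interception probability per country–commodity combination, then cluster the estimates into four compliance levels with K-means.

```python
import numpy as np
from sklearn.cluster import KMeans

# Hypothetical (inspections, interceptions) counts per country-commodity combination.
data = {"A-orchid": (500, 40), "B-orchid": (300, 3), "A-cactus": (200, 1),
        "C-fern": (800, 90), "B-cactus": (400, 10), "C-orchid": (600, 12)}

# Posterior-mean interception probability under a Jeffreys Beta(0.5, 0.5) prior.
names = list(data)
probs = np.array([(k + 0.5) / (n + 1.0) for n, k in data.values()]).reshape(-1, 1)

labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(probs)
for name, p, lab in sorted(zip(names, probs.ravel(), labels), key=lambda t: t[1]):
    print(f"{name:10s}  p={p:.3f}  cluster={lab}")
```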

15.
Expert knowledge is an important source of input to risk analysis. In practice, experts might be reluctant to characterize their knowledge and the related (epistemic) uncertainty using precise probabilities. The theory of possibility allows for imprecision in probability assignments. The associated possibilistic representation of epistemic uncertainty can be combined with, and transformed into, a probabilistic representation; in this article, we show this with reference to a simple fault tree analysis. We apply an integrated (hybrid) probabilistic‐possibilistic computational framework for the joint propagation of the epistemic uncertainty on the values of the (limiting relative frequency) probabilities of the basic events of the fault tree, and we use possibility‐probability (probability‐possibility) transformations for propagating the epistemic uncertainty within purely probabilistic and possibilistic settings. The results of the different approaches (hybrid, probabilistic, and possibilistic) are compared with respect to the representation of uncertainty about the top event (limiting relative frequency) probability. Both the rationale underpinning the approaches and the computational efforts they require are critically examined. We conclude that the approaches relevant in a given setting depend on the purpose of the risk analysis, and that further research is required to make the possibilistic approaches operational in a risk analysis context.
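A minimal, purely possibilistic sketch (hypothetical numbers, and only one ingredient of the hybrid framework described in the article): basic-event probabilities represented as triangular possibility distributions are propagated through an OR gate by interval arithmetic on their alpha-cuts, which is valid here because the top-event expression is monotone in each basic-event probability.

```python
import numpy as np

def alpha_cut(tri, alpha):
    """Interval (alpha-cut) of a triangular possibility distribution (a, m, b)."""
    a, m, b = tri
    return a + alpha * (m - a), b - alpha * (b - m)

# Triangular possibility distributions for two basic-event probabilities (hypothetical).
p1, p2 = (1e-4, 5e-4, 1e-3), (2e-4, 8e-4, 2e-3)

print(" alpha   lower top-event prob   upper top-event prob")
for alpha in np.linspace(0.0, 1.0, 5):
    (l1, u1), (l2, u2) = alpha_cut(p1, alpha), alpha_cut(p2, alpha)
    # OR gate: P(top) = 1 - (1 - P1)(1 - P2); monotone, so interval endpoints map to bounds.
    low, high = 1 - (1 - l1) * (1 - l2), 1 - (1 - u1) * (1 - u2)
    print(f"{alpha:5.2f}   {low:.2e}               {high:.2e}")
```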

16.
This article tries to clarify the potential role to be played by uncertainty theories such as imprecise probabilities, random sets, and possibility theory in the risk analysis process. Instead of opposing an objective bounding analysis, where only statistically founded probability distributions are taken into account, to the full‐fledged probabilistic approach, exploiting expert subjective judgment, we advocate the idea that both analyses are useful and should be articulated with one another. Moreover, the idea that risk analysis under incomplete information is purely objective is misconceived. The use of uncertainty theories cannot be reduced to a choice between probability distributions and intervals. Indeed, they offer representation tools that are more expressive than each of the latter approaches and can capture expert judgments while being faithful to their limited precision. Consequences of this thesis are examined for uncertainty elicitation, propagation, and at the decision‐making step.

17.
Attitudes towards risk and uncertainty have been shown to be highly context‐dependent and to be sensitive to the measurement technique employed. We present data collected in controlled experiments with 2,939 subjects in 30 countries measuring risk and uncertainty attitudes through incentivized measures as well as survey questions. Our data show clearly that measures correlate not only within decision contexts or measurement methods, but also across contexts and methods. This points to the existence of one underlying “risk preference”, which influences attitudes independently of the measurement method or choice domain. We furthermore find that answers to a general and a financial survey question correlate with incentivized lottery choices in most countries. Incentivized and survey measures also correlate significantly between countries. This opens the possibility of conducting cultural comparisons on risk attitudes using survey instruments.

18.
In this article, we propose a new product positioning method based on the neural network methodology of a self‐organizing map. The method incorporates the concept of rings of influence, where a firm evaluates individual consumers and decides on the intensity with which to pursue a consumer, based on the probability that this consumer will purchase a competing product. The method has several advantages over earlier work. First, no limitations are imposed on the number of competing products; second, the method can position multiple products in multiple market segments. Using simulations, we compare the new product positioning method with a quasi‐Newton method and find that the new method always approaches the best solution obtained by the quasi‐Newton method. The quasi‐Newton method, however, is dependent on the initial positions of the new products, with the majority of cases ending in a local optimum. Furthermore, the computational time required by the quasi‐Newton method increases exponentially, while the time required by the new method is small and remains almost unchanged as the number of new products positioned increases. We also compute the expected utility that a firm will provide consumers by offering its products. We show that as the intensity with which a firm pursues consumers increases, the new method results in near‐optimal solutions in terms of market share, but with higher expected utility provided to consumers when compared to that obtained by a quasi‐Newton method. Thus, the new method can serve as a managerial decision‐making tool to compare the short‐term market share objective with the long‐term expected utility that a firm will provide to consumers, when it positions its products and intensifies its effort to attract consumers away from competition.
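For orientation, the snippet below is a generic one-dimensional self-organizing map trained on synthetic consumer ideal points; it shows only the basic SOM update (best-matching unit plus neighborhood pull), not the paper's rings-of-influence weighting or its treatment of competing products.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical consumer ideal points in a 2-D perceptual space.
consumers = rng.normal(size=(300, 2)) * [1.0, 0.5] + [2.0, 1.0]

n_units, epochs = 5, 40                    # units = candidate product positions
units = rng.normal(size=(n_units, 2))

for epoch in range(epochs):
    lr = 0.5 * (1 - epoch / epochs)                       # decaying learning rate
    radius = max(2.0 * (1 - epoch / epochs), 0.5)         # shrinking neighborhood
    for x in rng.permutation(consumers):
        bmu = np.argmin(np.linalg.norm(units - x, axis=1))   # best-matching unit
        dist = np.abs(np.arange(n_units) - bmu)              # grid distance to the BMU
        h = np.exp(-(dist ** 2) / (2 * radius ** 2))         # neighborhood function
        units += lr * h[:, None] * (x - units)               # pull units toward the consumer

print(np.round(units, 2))   # learned positions summarizing the consumer distribution
```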

19.
Managers at all stages of a supply chain are concerned about meeting profit targets. We study contract design for a buyer–supplier supply chain, with each party maximizing expected profit subject to a chance constraint on meeting his respective profit target. We derive the optimal contract form (across all contract types) with randomized and deterministic payments. The best contract has the property that, if the chance constraints are binding, at most one party fails to satisfy his profit target for any given demand realization. This implies that “least risk sharing,” that is, minimizing the probability of outcomes for which both parties fail to achieve their profit targets, is optimal, contrary to the usual expectations of “risk sharing.” We show that an optimal contract can possess only two of the following three properties simultaneously: (i) supply chain coordination, (ii) truth‐telling, and (iii) non‐randomized payments. We discuss methods to mitigate the consequent implementation challenges. We also derive the optimal contract form when chance constraints are incorporated into several simpler and easier‐to‐implement contracts. From a numerical study, we find that an incremental returns contract (in which the marginal rebate rate depends on the return quantity) performs quite well across a relatively broad range of conditions.
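In generic notation (placeholders, not the authors' symbols), each party maximizes its expected profit $\mathbb{E}[\pi_i(\theta, D)]$ subject to a chance constraint of the form
\[
\Pr\big\{\pi_i(\theta, D) \ge T_i\big\} \;\ge\; 1 - \varepsilon_i,
\qquad i \in \{\text{buyer},\ \text{supplier}\},
\]
where $\theta$ collects the contract parameters, $D$ is random demand, $\pi_i$ is party $i$'s profit, $T_i$ its profit target, and $\varepsilon_i$ the tolerated probability of missing that target.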

20.
Land subsidence risk assessment (LSRA) is a multi‐attribute decision analysis (MADA) problem and is often characterized by both quantitative and qualitative attributes with various types of uncertainty. Therefore, the problem needs to be modeled and analyzed using methods that can handle uncertainty. In this article, we propose an integrated assessment model based on the evidential reasoning (ER) algorithm and fuzzy set theory. The assessment model is structured as a hierarchical framework that regards land subsidence risk as a composite of two key factors: hazard and vulnerability. These factors can be described by a set of basic indicators defined by assessment grades with attributes for transforming both numerical data and subjective judgments into a belief structure. The factor‐level attributes of hazard and vulnerability are combined using the ER algorithm, which is based on the information from a belief structure calculated by the Dempster‐Shafer (D‐S) theory, and a distributed fuzzy belief structure calculated by fuzzy set theory. The results from the combined algorithms yield distributed assessment grade matrices. The application of the model to the Xixi‐Chengnan area, China, illustrates its usefulness and validity for LSRA. The model utilizes a combination of all types of evidence, including all assessment information—quantitative or qualitative, complete or incomplete, and precise or imprecise—to provide assessment grades that define risk assessment on the basis of hazard and vulnerability. The results will enable risk managers to apply different risk prevention measures and mitigation planning based on the calculated risk states.
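The ER algorithm builds on Dempster–Shafer theory; as a minimal generic illustration (not the ER recursion itself, which handles attribute weights and normalization differently), the sketch below combines two hypothetical belief structures over a toy set of assessment grades using Dempster's rule of combination.

```python
from itertools import product

def dempster_combine(m1, m2):
    """Dempster's rule of combination for mass functions with frozenset focal elements."""
    combined, conflict = {}, 0.0
    for (a, w1), (b, w2) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + w1 * w2
        else:
            conflict += w1 * w2                      # mass assigned to the empty set
    return {k: v / (1.0 - conflict) for k, v in combined.items()}

GRADES = frozenset({"low", "medium", "high"})
# Hypothetical belief structures for a hazard indicator and a vulnerability indicator.
m_hazard = {frozenset({"high"}): 0.6, frozenset({"medium", "high"}): 0.3, GRADES: 0.1}
m_vuln = {frozenset({"medium"}): 0.5, frozenset({"high"}): 0.2, GRADES: 0.3}

for focal, mass in sorted(dempster_combine(m_hazard, m_vuln).items(), key=lambda kv: -kv[1]):
    print(set(focal), round(mass, 3))
```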
