Similar Documents
20 similar documents found.
1.
Two images, “black swans” and “perfect storms,” have struck the public's imagination and are used—at times indiscriminately—to describe the unthinkable or the extremely unlikely. These metaphors have been used as excuses to wait for an accident to happen before taking risk management measures, both in industry and government. These two images represent two distinct types of uncertainties (epistemic and aleatory). Existing statistics are often insufficient to support risk management because the sample may be too small and the system may have changed. Rationality as defined by the von Neumann axioms leads to a combination of both types of uncertainties into a single probability measure—Bayesian probability—and accounts only for risk aversion. Yet, the decisionmaker may also want to be ambiguity averse. This article presents an engineering risk analysis perspective on the problem, using all available information in support of proactive risk management decisions and considering both types of uncertainty. These measures involve monitoring of signals, precursors, and near‐misses, as well as reinforcement of the system and a thoughtful response strategy. It also involves careful examination of organizational factors such as the incentive system, which shape human performance and affect the risk of errors. In all cases, including rare events, risk quantification does not allow “prediction” of accidents and catastrophes. Instead, it is meant to support effective risk management rather than simply reacting to the latest events and headlines.

2.
Terje Aven, Risk Analysis, 2011, 31(10): 1515–1525
Few policies for risk management have created more controversy than the precautionary principle. A main problem is the extreme number of different definitions and interpretations. Almost all definitions of the precautionary principle identify “scientific uncertainties” as the trigger or criterion for its invocation; however, the meaning of this concept is not clear. For applying the precautionary principle it is not sufficient that the threats or hazards are uncertain. A stronger requirement is needed. This article provides an in‐depth analysis of this issue. We question how the scientific uncertainties are linked to the interpretation of the probability concept, expected values, the results from probabilistic risk assessments, the common distinction between aleatory uncertainties and epistemic uncertainties, and the problem of establishing an accurate prediction model (cause‐effect relationship). A new classification structure is suggested to define what scientific uncertainties mean.

3.
The past forty years have seen a rapid rise in top income inequality in the United States. While there is a large number of existing theories of the Pareto tail of the long‐run income distribution, almost none of these address the fast rise in top inequality observed in the data. We show that standard theories, which build on a random growth mechanism, generate transition dynamics that are too slow relative to those observed in the data. We then suggest two parsimonious deviations from the canonical model that can explain such changes: “scale dependence” that may arise from changes in skill prices, and “type dependence,” that is, the presence of some “high‐growth types.” These deviations are consistent with theories in which the increase in top income inequality is driven by the rise of “superstar” entrepreneurs or managers.
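The contrast between pure random growth and "type dependence" can be illustrated with a stylized simulation (illustrative parameters only, not the paper's calibration): every income follows Gibrat-type random log-growth, and in a second population 1% of individuals are hypothetical "high-growth types" with an extra drift. The top-1% share fattens far faster with types present.

```python
# Stylized sketch: random (Gibrat-type) income growth vs. the same growth
# plus 1% "high-growth types". All parameters are illustrative assumptions.
import math
import random

def top_share(incomes, frac=0.01):
    """Income share of the top `frac` fraction of earners."""
    s = sorted(incomes, reverse=True)
    k = max(1, int(frac * len(s)))
    return sum(s[:k]) / sum(s)

random.seed(0)
n, years = 5000, 30
baseline = [1.0] * n                      # pure random growth
typed = [1.0] * n                         # same shocks, plus high-growth types
high_growth = set(random.sample(range(n), n // 100))  # hypothetical 1%

for _ in range(years):
    for i in range(n):
        g = random.gauss(0.02, 0.15)      # common log-growth shock
        baseline[i] *= math.exp(g)
        typed[i] *= math.exp(g + (0.15 if i in high_growth else 0.0))

print(f"top 1% share, random growth only: {top_share(baseline):.2f}")
print(f"top 1% share, with growth types:  {top_share(typed):.2f}")
```

Under these assumptions the typed population's top share pulls away within a few decades, while the baseline tail thickens only slowly.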

4.
The present study investigates U.S. Department of Agriculture inspection records in the Agricultural Quarantine Activity System database to estimate the probability of quarantine pests on propagative plant materials imported from various countries of origin and to develop a methodology ranking the risk of country–commodity combinations based on quarantine pest interceptions. Data collected from October 2014 to January 2016 were used for developing predictive models and validation study. A generalized linear model with Bayesian inference and a generalized linear mixed effects model were used to compare the interception rates of quarantine pests on different country–commodity combinations. Prediction ability of generalized linear mixed effects models was greater than that of generalized linear models. The estimated pest interception probability and confidence interval for each country–commodity combination were categorized into one of four compliance levels (“High,” “Medium,” “Low,” and “Poor/Unacceptable”) using K‐means clustering analysis. This study presents risk‐based categorization for each country–commodity combination based on the probability of quarantine pest interceptions and the uncertainty in that assessment.
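The categorization step can be sketched as one-dimensional K-means over estimated interception probabilities. This is a minimal illustration with made-up probabilities and a hand-rolled Lloyd's algorithm, not the study's data or software:

```python
# Illustrative sketch: cluster hypothetical estimated pest interception
# probabilities for country-commodity combinations into four compliance
# levels with a simple 1-D K-means (Lloyd's algorithm).
import random

def kmeans_1d(values, k, iters=100, seed=0):
    rng = random.Random(seed)
    centers = sorted(rng.sample(values, k))
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for v in values:
            i = min(range(k), key=lambda j: abs(v - centers[j]))
            clusters[i].append(v)
        new_centers = [sum(c) / len(c) if c else centers[i]
                       for i, c in enumerate(clusters)]
        if new_centers == centers:
            break
        centers = new_centers
    return sorted(centers)

# Hypothetical estimated interception probabilities per combination.
probs = [0.001, 0.002, 0.004, 0.01, 0.02, 0.05, 0.08, 0.15, 0.30]
centers = kmeans_1d(probs, k=4)
labels = ["High", "Medium", "Low", "Poor/Unacceptable"]  # best -> worst compliance
for p in probs:
    i = min(range(4), key=lambda j: abs(p - centers[j]))
    print(f"{p:.3f} -> {labels[i]}")
```

Lower interception probability maps to better compliance; in the study the cluster boundaries would additionally reflect the uncertainty intervals.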

5.
The matching of individuals in teams is a key element in the functioning of an economy. The network of social ties can potentially transmit important information on abilities and reputations and also help mitigate matching frictions by facilitating interactions among “screened” individuals. We conjecture that the probability of two individuals forming a team falls in the distance between the two individuals in the network of existing social ties. The objective of this paper is to empirically test this conjecture. We examine the formation of coauthor relations among economists over a twenty‐year period. Our principal finding is that a new collaboration emerges faster between two researchers if they are “closer” in the existing coauthor network among economists. This proximity effect on collaboration is strong: Being at a network distance of 2 instead of 3, for instance, raises the probability of initiating a collaboration by 27%. (JEL: C78, D83, D85)
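The "network distance" in this finding is the shortest path of coauthor ties between two researchers. A minimal sketch, using a toy graph (names and ties are hypothetical, not the study's data):

```python
# Sketch: network distance between two researchers in a coauthor network,
# computed by breadth-first search. The graph below is a toy example.
from collections import deque

def network_distance(graph, a, b):
    """Length of the shortest path of coauthor ties from a to b."""
    seen, frontier = {a}, deque([(a, 0)])
    while frontier:
        node, d = frontier.popleft()
        if node == b:
            return d
        for nbr in graph.get(node, ()):
            if nbr not in seen:
                seen.add(nbr)
                frontier.append((nbr, d + 1))
    return float("inf")  # no connecting path

# Hypothetical coauthor ties: an edge means "have written a paper together".
coauthors = {
    "alice": ["bob"], "bob": ["alice", "carol"],
    "carol": ["bob", "dana"], "dana": ["carol"],
}
print(network_distance(coauthors, "alice", "carol"))  # 2: alice-bob-carol
print(network_distance(coauthors, "alice", "dana"))   # 3
```

On the paper's reading, the pair at distance 2 would be noticeably more likely to start a collaboration than the pair at distance 3.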

6.
Expert knowledge is an important source of input to risk analysis. In practice, experts might be reluctant to characterize their knowledge and the related (epistemic) uncertainty using precise probabilities. The theory of possibility allows for imprecision in probability assignments. The associated possibilistic representation of epistemic uncertainty can be combined with, and transformed into, a probabilistic representation; in this article, we show this with reference to a simple fault tree analysis. We apply an integrated (hybrid) probabilistic‐possibilistic computational framework for the joint propagation of the epistemic uncertainty on the values of the (limiting relative frequency) probabilities of the basic events of the fault tree, and we use possibility‐probability (probability‐possibility) transformations for propagating the epistemic uncertainty within purely probabilistic and possibilistic settings. The results of the different approaches (hybrid, probabilistic, and possibilistic) are compared with respect to the representation of uncertainty about the top event (limiting relative frequency) probability. Both the rationale underpinning the approaches and the computational efforts they require are critically examined. We conclude that the approaches relevant in a given setting depend on the purpose of the risk analysis, and that further research is required to make the possibilistic approaches operational in a risk analysis context.
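The purely possibilistic side of such a propagation can be sketched with alpha-cuts: each basic-event probability is given a triangular possibility distribution, and each cut level yields an interval that is pushed through the gate logic. The numbers and the two-event OR gate below are illustrative assumptions, not the article's fault tree:

```python
# Sketch: propagate epistemic uncertainty, expressed as triangular
# possibility distributions for basic-event probabilities, through a
# two-event OR gate using alpha-cuts (interval arithmetic on each cut).

def alpha_cut(tri, alpha):
    """Interval cut of a triangular possibility distribution (lo, mode, hi)."""
    lo, mode, hi = tri
    return (lo + alpha * (mode - lo), hi - alpha * (hi - mode))

def or_gate(p1, p2):
    """Top-event probability for two independent basic events in an OR gate."""
    return 1 - (1 - p1) * (1 - p2)

# Hypothetical basic-event probabilities with epistemic uncertainty.
p1_tri = (1e-3, 2e-3, 5e-3)
p2_tri = (1e-4, 5e-4, 1e-3)

for alpha in (0.0, 0.5, 1.0):
    (a1, b1), (a2, b2) = alpha_cut(p1_tri, alpha), alpha_cut(p2_tri, alpha)
    # or_gate is increasing in both arguments, so interval endpoints map
    # to endpoints of the top-event interval.
    print(f"alpha={alpha}: top event in "
          f"[{or_gate(a1, a2):.2e}, {or_gate(b1, b2):.2e}]")
```

At alpha = 1 the cut collapses to the modal values; at alpha = 0 it gives the widest (support) interval, which is the nested-interval structure the hybrid framework combines with probabilistic sampling.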

7.
Humans are continuously exposed to suspected or proven endocrine‐disrupting chemicals (EDCs). Risk management of EDCs presents a major unmet challenge because the available data for adverse health effects are generated by examining one compound at a time, whereas real‐life exposures are to mixtures of chemicals. In this work, we integrate epidemiological and experimental evidence toward a whole mixture strategy for risk assessment. To illustrate, we conduct the following four steps in a case study: (1) identification of single EDCs (“bad actors”)—measured in prenatal blood/urine in the SELMA study—that are associated with a shorter anogenital distance (AGD) in baby boys; (2) definition and construction of a “typical” mixture consisting of the “bad actors” identified in Step 1; (3) experimentally testing this mixture in an in vivo animal model to estimate a dose–response relationship and determine a point of departure (i.e., reference dose [RfD]) associated with an adverse health outcome; and (4) use a statistical measure of “sufficient similarity” to compare the experimental RfD (from Step 3) to the exposure measured in the human population and generate a “similar mixture risk indicator” (SMRI). The objective of this exercise is to generate a proof of concept for the systematic integration of epidemiological and experimental evidence with mixture risk assessment strategies. Using a whole mixture approach, we found a higher rate of pregnant women at risk (13%) than with more traditional additivity models (3%) or a compound‐by‐compound strategy (1.6%).
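Why a mixture-level screen flags more people than a compound-by-compound screen can be shown with a toy hazard-quotient calculation. This is not the study's SMRI computation; the doses, reference doses, and exposure distributions are invented for illustration:

```python
# Toy sketch (invented numbers): compound-by-compound screening flags a
# person only if some single chemical exceeds its own reference dose (RfD),
# while a simple additivity-style mixture screen flags a person if the sum
# of hazard quotients exceeds 1. The mixture screen can only flag more.
import random

random.seed(0)
n_women = 1000
rfd = [1.0, 2.0, 0.5]   # hypothetical per-chemical reference doses

flag_single, flag_mixture = 0, 0
for _ in range(n_women):
    doses = [random.lognormvariate(-2.0, 1.0) * r for r in rfd]
    hq = [d / r for d, r in zip(doses, rfd)]   # hazard quotients
    if max(hq) > 1.0:
        flag_single += 1                        # compound-by-compound flag
    if sum(hq) > 1.0:
        flag_mixture += 1                       # mixture-level flag

print(f"flagged compound-by-compound: {flag_single / n_women:.1%}")
print(f"flagged at the mixture level: {flag_mixture / n_women:.1%}")
```

Because the summed hazard quotient is always at least the maximum single quotient, the mixture-level rate dominates, which is the qualitative pattern behind the 13% versus 1.6% contrast reported above.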

8.
We show that efficient bargaining is impossible for a wide class of economic settings and property rights. These settings are characterized by (i) the existence of “adverse efficient opt‐out types”, whose participation does not change the efficient allocation and who, when they opt out, are the worst type other agents can face, and (ii) non‐existence of the “marginal core”, and its multivaluedness with a positive probability. We also examine the optimal allocation of property rights within a given class that satisfies (i), such as simple property rights, liability rules, and dual‐chooser rules. We characterize property rights that minimize the expected subsidy required to implement efficiency. With two agents, simple property rights that are optimal in this way maximize the expected surplus at the status quo allocation, but this no longer holds with more agents. We also study “second‐best” budget‐balanced bargaining under a liability rule. The optimal “second‐best” liability rule may differ from, but is often close to, the expectation of the victim's harm, which would be optimal if there were no bargaining. However, liability rules that are close to a simple property right result in a lower expected surplus than the simple property right they are near.

9.
Risk Analysis, 2018, 38(1): 56–70
Feedback from industrial accidents is provided by various state, or even international, institutions, and the lessons learned can be controversial. However, there has been little research into organizational learning at the international level. This article helps to fill the gap through an in‐depth review of official reports of the Fukushima Daiichi accident published shortly after the event. We present a new method to analyze the arguments contained in these voluminous documents. Taking an intertextual perspective, the method focuses on the accident narratives, their rationale, and links between “facts,” “causes,” and “recommendations.” The aim is to evaluate how the findings of the various reports are consistent with (or contradict) “institutionalized knowledge,” and identify the social representations that underpin them. We find that although the scientific controversy surrounding the results of the various inquiries reflects different ethical perspectives, they are integrated into the same utopian ideal. The involvement of multiple actors in this controversy raises questions about the public construction of epistemic authority, and we highlight the special status given to the International Atomic Energy Agency in this regard.

10.
Much attention has been paid to the treatment of dependence and to the characterization of uncertainty and variability (including the issue of dependence among inputs) in performing risk assessments to avoid misleading results. However, with relatively little progress in communicating about the effects and implications of dependence, the effort involved in performing relatively sophisticated risk analyses (e.g., two‐dimensional Monte Carlo analyses that separate variability from uncertainty) may be largely wasted, if the implications of those analyses are not clearly understood by decisionmakers. This article emphasizes that epistemic uncertainty can introduce dependence among related risks (e.g., risks to different individuals, or at different facilities), and illustrates the potential importance of such dependence in the context of two important types of decisions—evaluations of risk acceptability for a single technology, and comparisons of the risks for two or more technologies. We also present some preliminary ideas on how to communicate the effects of dependence to decisionmakers in a clear and easily comprehensible manner, and suggest future research directions in this area.
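A two-dimensional Monte Carlo of the kind mentioned above nests two loops: an outer loop over epistemic uncertainty (parameters we do not know) and an inner loop over variability (differences across individuals). A minimal sketch with invented numbers, where the shared uncertain mean is exactly what induces dependence among individuals' risks:

```python
# Sketch (made-up parameters): two-dimensional Monte Carlo. The outer loop
# samples epistemic uncertainty (an unknown population mean); the inner
# loop samples inter-individual variability. Because every individual's
# risk depends on the same uncertain mean, those risks are dependent.
import random

random.seed(1)
outer, inner = 200, 500
risk_curves = []
for _ in range(outer):
    mu = random.gauss(1.0, 0.3)          # epistemic: uncertain log-scale mean
    exceed = sum(random.lognormvariate(mu, 0.5) > 5.0 for _ in range(inner))
    risk_curves.append(exceed / inner)   # variability: fraction at risk

risk_curves.sort()
print(f"median individual risk:      {risk_curves[outer // 2]:.3f}")
print(f"95th percentile (epistemic): {risk_curves[int(0.95 * outer)]:.3f}")
```

Reporting the epistemic spread of the risk fraction, rather than a single blended number, is what lets a decisionmaker see how much of the risk estimate could move together across individuals or facilities.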

11.
Tim Bedford, Risk Analysis, 2013, 33(10): 1884–1898
Group risk is usually represented by FN curves showing the frequency of different accident sizes for a given activity. Many governments regulate group risk through FN criterion lines, which define the tolerable location of an FN curve. However, to compare different risk reduction alternatives, one must be able to rank FN curves. The two main problems in doing this are that the FN curve contains multiple frequencies, and that there are usually large epistemic uncertainties about the curve. Since the mid‐1970s, a number of authors have used the concept of “disutility” to summarize FN curves, defining a family of disutility functions with a single parameter controlling the degree of “risk aversion.” Here, we show this family to be risk neutral, disaster averse, and insensitive to epistemic uncertainty on accident frequencies. A new approach is outlined that has a number of attractive properties. The formulation allows us to distinguish between risk aversion and disaster aversion, two concepts that have been confused in the literature until now. A two‐parameter family of disutilities generalizing the previous approach is defined, where one parameter controls risk aversion and the other disaster aversion. The family is sensitive to epistemic uncertainties. Such disutilities may, for example, be used to compare the impact of system design changes on group risks, or might form the basis for valuing reductions in group risk in a cost‐benefit analysis.  
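A common single-parameter summary of this type (illustrative, not Bedford's exact two-parameter formulation) weights each accident frequency by a power of the accident size, with the exponent expressing aversion to large outcomes:

```python
# Sketch: summarize an FN curve by an expected disutility
# sum(f_i * N_i**a), where f_i is the yearly frequency of accidents of
# size N_i and the exponent a >= 1 expresses aversion to large accidents.
# The FN data below are invented for illustration.

def expected_disutility(fn_pairs, a):
    return sum(f * n ** a for n, f in fn_pairs)

# Hypothetical FN data: (accident size N, frequency per year f).
fn_curve = [(1, 1e-2), (10, 1e-3), (100, 1e-4), (1000, 1e-5)]

for a in (1.0, 1.5, 2.0):
    print(f"a={a}: disutility = {expected_disutility(fn_curve, a):.4f}")
```

At a = 1 all four accident classes contribute equally (0.01 each), so the summary is indifferent to how fatalities are bundled; raising a makes the rare thousand-fatality class dominate, which is the "disaster aversion" dimension the abstract separates from risk aversion.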

12.
In expected utility theory, risk attitudes are modeled entirely in terms of utility. In the rank‐dependent theories, a new dimension is added: chance attitude, modeled in terms of nonadditive measures or nonlinear probability transformations that are independent of utility. Most empirical studies of chance attitude assume probabilities given and adopt parametric fitting for estimating the probability transformation. Only a few qualitative conditions have been proposed or tested as yet, usually quasi‐concavity or quasi‐convexity in the case of given probabilities. This paper presents a general method of studying qualitative properties of chance attitude such as optimism, pessimism, and the “inverse‐S shape” pattern, both for risk and for uncertainty. These qualitative properties can be characterized by permitting appropriate, relatively simple, violations of the sure‐thing principle. In particular, this paper solves a hitherto open problem: the preference axiomatization of convex (“pessimistic” or “uncertainty averse”) nonadditive measures under uncertainty. The axioms of this paper preserve the central feature of rank‐dependent theories, i.e. the separation of chance attitude and utility.
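The "inverse-S shape" mentioned here can be made concrete with the one-parameter Prelec weighting function, a standard parametric choice in the empirical literature (the parameter value below is illustrative, not fitted data):

```python
# Sketch: the Prelec probability-weighting function
# w(p) = exp(-(-ln p)**alpha). For alpha < 1 it has the inverse-S shape:
# small probabilities are overweighted, large ones underweighted.
import math

def prelec(p, alpha=0.65):
    if not 0.0 < p <= 1.0:
        raise ValueError("p must be in (0, 1]")
    return math.exp(-((-math.log(p)) ** alpha))

for p in (0.01, 0.1, 0.5, 0.9, 0.99):
    print(f"p={p}: w(p)={prelec(p):.3f}")
</imports>```

With alpha = 0.65, w(0.01) is well above 0.01 while w(0.99) sits below 0.99, crossing the diagonal in between; this is the "optimism for small chances, pessimism for large chances" pattern the paper characterizes axiomatically rather than parametrically.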

13.
The distinction between ignorance about a parameter and knowing only a probability distribution for that parameter is of fundamental importance in risk assessment. Brief dialogs between a hypothetical decisionmaker and a risk assessor illustrate this point, showing that the distinction has real consequences. These dialogs are followed by a short exposition that places risk analysis in a decision‐theoretic framework, describes the important elements of that framework, and uses these to shed light on Terje Aven's criticism of nonprobabilistic purely “objective” methods. Suggestions are offered concerning a more effective approach to evaluating those methods.

14.
Ought we to take seriously large risks predicted by “exotic” or improbable theories? We routinely assess risks on the basis of either common sense or some developed theoretical framework based on the best available scientific explanations. Recently, there has been a substantial increase of interest in the low‐probability “failure modes” of well‐established theories, which can involve global catastrophic risks. However, here I wish to discuss a partially antithetical situation: alternative, low‐probability (“small”) scientific theories predicting catastrophic outcomes with large probability. I argue that there is an important methodological issue (determining what counts as the best available explanation in cases where the theories involved describe possibilities of extremely destructive global catastrophes), which has been neglected thus far. There is no simple answer to the correct method for dealing with high‐probability high‐stakes risks following from low‐probability theories that still cannot be rejected outright, and much further work is required in this area. I further argue that cases like these are more numerous than usually assumed, for reasons including cognitive biases, sociological issues in science, and the media image of science. If that is indeed so, it might lead to a greater weight of these cases in areas such as moral deliberation and policy making.

15.
Dr. Yellman proposes to define frequency as “a time‐rate of events of a specified type over a particular time interval.” We review why no definition of frequency, including this one, can satisfy both of two conditions: (1) the definition should agree with the ordinary meaning of frequency, such as that less frequent events are less likely to occur than more frequent events, over any particular time interval for which the frequencies of both are defined; and (2) the definition should be applicable not only to exponentially distributed times between (or until) events, but also to some nonexponential (e.g., uniformly distributed) times. We make the simple point that no definition can satisfy (1) and (2) by showing that any definition that determines which of any two uniformly distributed times has the higher “frequency” (or that determines that they have the same “frequency,” if neither is higher) must assign a higher frequency number to the distribution with the lower probability of occurrence over some time intervals. Dr. Yellman's proposed phrase, “time‐rate of events … over a particular time interval” is profoundly ambiguous in such cases, as the instantaneous failure rates vary over an infinitely wide range (e.g., from one to infinity), making it unclear which value is denoted by the phrase “time‐rate of events.”
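The "one to infinity" remark can be verified directly: for a time-to-event uniform on [0, b], the instantaneous event rate is h(t) = 1/(b - t), which starts at 1/b and diverges as t approaches b, so no single "time-rate of events" number characterizes the distribution. A short numerical check:

```python
# Sketch: instantaneous event rate of a uniform time-to-event on [0, b].
# h(t) = f(t) / S(t) = (1/b) / ((b - t)/b) = 1 / (b - t) for 0 <= t < b,
# which rises from 1/b toward infinity as t approaches b.

def uniform_hazard(t, b):
    if not 0.0 <= t < b:
        raise ValueError("t must lie in [0, b)")
    return 1.0 / (b - t)

b = 1.0
for t in (0.0, 0.5, 0.9, 0.99):
    print(f"h({t}) = {uniform_hazard(t, b):.1f}")
```

The rate runs from 1 at t = 0 to roughly 100 by t = 0.99, illustrating why a phrase like "time-rate of events over a particular time interval" is ambiguous for such distributions.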

16.
Suppose that each player in a game is rational, each player thinks the other players are rational, and so on. Also, suppose that rationality is taken to incorporate an admissibility requirement—that is, the avoidance of weakly dominated strategies. Which strategies can be played? We provide an epistemic framework in which to address this question. Specifically, we formulate conditions of rationality and mth‐order assumption of rationality (RmAR) and rationality and common assumption of rationality (RCAR). We show that (i) RCAR is characterized by a solution concept we call a “self‐admissible set”; (ii) in a “complete” type structure, RmAR is characterized by the set of strategies that survive m+1 rounds of elimination of inadmissible strategies; (iii) under certain conditions, RCAR is impossible in a complete structure.
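The elimination procedure in (ii) can be sketched for finite two-player games. This minimal version considers only pure-strategy weak dominance and removes all dominated strategies simultaneously each round; the 2x2 payoffs are an invented example:

```python
# Sketch: iterated elimination of inadmissible (weakly dominated)
# strategies in a finite two-player game, pure-strategy dominance only.

def weakly_dominated(payoff, own, opp):
    """Strategies in `own` weakly dominated by another surviving strategy.
    payoff[i][j] is this player's payoff from (own strategy i, opp j)."""
    out = set()
    for s in own:
        for s2 in own:
            if s2 != s and \
               all(payoff[s2][j] >= payoff[s][j] for j in opp) and \
               any(payoff[s2][j] > payoff[s][j] for j in opp):
                out.add(s)
                break
    return out

def iterated_admissibility(pay1, pay2, rounds=10):
    """Run up to `rounds` rounds of simultaneous elimination."""
    rows = set(range(len(pay1)))
    cols = set(range(len(pay1[0])))
    for _ in range(rounds):
        # Column player's payoffs, transposed to payoff[col][row] form.
        t2 = [[pay2[r][c] for r in range(len(pay1))]
              for c in range(len(pay1[0]))]
        dr = weakly_dominated(pay1, rows, cols)
        dc = weakly_dominated(t2, cols, rows)
        if not dr and not dc:
            break
        rows -= dr
        cols -= dc
    return sorted(rows), sorted(cols)

# Invented 2x2 game: row 1 is weakly dominated; once it is removed,
# column 1 becomes dominated in the next round.
pay1 = [[1, 1], [1, 0]]   # row player's payoffs
pay2 = [[1, 0], [0, 1]]   # column player's payoffs
print(iterated_admissibility(pay1, pay2))  # -> ([0], [0])
```

In the example the survivors of two rounds are (row 0, column 0), illustrating how admissibility bites in stages; the paper's point is about what epistemic conditions (RmAR in a complete type structure) justify exactly m+1 such rounds.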

17.
We present a model of conflict in which discriminatory government policy or social intolerance is responsive to various forms of ethnic activism, including violence. It is this perceived responsiveness—captured by the probability that the government gives in and accepts a proposed change in ethnic policy—that induces individuals to mobilize, often violently, to support their cause. Yet, mobilization is costly and militants have to be compensated accordingly. The model allows for both financial and human contributions to conflict and allows for a variety of individual attitudes (“radicalism”) towards the cause. The main results concern the effects of within‐group heterogeneity in radicalism and income, as well as the correlation between radicalism and income, in precipitating conflict.

18.
We consider settings in which a revenue manager controls bookings over a sequence of flights. The revenue manager uses a buy‐up model to select booking limits and updates estimates of the model parameters as data are accumulated. The buy‐up model we consider is based upon a simple model of customer choice, wherein each low‐fare customer who is not able to purchase a low‐fare ticket will, with a fixed probability, “buy up” to the high fare, independent of everything else. We analyze the evolution of the parameter estimates (e.g., the buy‐up probability) and chosen booking limits in situations where the buy‐up model is misspecified, that is, in situations where there is no setting of its parameters for which its objective function gives an accurate representation of expected revenue as a function of the booking limit. The analysis is motivated by the common situation in which a revenue manager does not know precisely how customers behave but nevertheless uses a parametric model to make decisions. Under some assumptions, we prove that the booking limits and parameter estimates converge and we compare the actual expected revenue at the limiting values with that associated with the booking limits that would be chosen if the revenue manager knew the actual behavior of customers. The analysis shows that the buy‐up model often works reasonably well even when it is misspecified, and also reveals the importance of understanding how parameter estimates of misspecified models vary as functions of decisions.  
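The buy-up mechanism itself can be sketched with a stylized single-flight simulation: demand beyond the low-fare booking limit "buys up" to the high fare with fixed probability q. All parameters (fares, capacity, demand model) are invented for illustration and are not the paper's specification:

```python
# Stylized sketch (invented parameters): Monte Carlo estimate of expected
# revenue as a function of the low-fare booking limit under the buy-up
# model, where each spilled low-fare customer buys up with probability q.
import random

def expected_revenue(limit, capacity=100, p_low=100, p_high=250, q=0.3,
                     mean_low=90, mean_high=40, trials=2000, seed=0):
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        d_low = rng.randint(0, 2 * mean_low)    # crude uniform demand model
        d_high = rng.randint(0, 2 * mean_high)
        low_sold = min(d_low, limit)
        spilled = d_low - low_sold
        buy_ups = sum(rng.random() < q for _ in range(spilled))
        high_sold = min(d_high + buy_ups, capacity - low_sold)
        total += p_low * low_sold + p_high * high_sold
    return total / trials

best = max(range(0, 101, 10), key=expected_revenue)
print(f"best booking limit on this grid: {best}")
```

The paper's question is what happens when the manager fits q and the demand parameters from data generated by customers who do not actually behave this way, and then re-optimizes the limit against the fitted model.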

19.
Managers at all stages of a supply chain are concerned about meeting profit targets. We study contract design for a buyer–supplier supply chain, with each party maximizing expected profit subject to a chance constraint on meeting his respective profit target. We derive the optimal contract form (across all contract types) with randomized and deterministic payments. The best contract has the property that, if the chance constraints are binding, at most one party fails to satisfy his profit target for any given demand realization. This implies that “least risk sharing,” that is, minimizing the probability of outcomes for which both parties fail to achieve their profit targets, is optimal, contrary to the usual expectations of “risk sharing.” We show that an optimal contract can possess only two of the following three properties simultaneously: (i) supply chain coordination, (ii) truth‐telling, and (iii) non‐randomized payments. We discuss methods to mitigate the consequent implementation challenges. We also derive the optimal contract form when chance constraints are incorporated into several simpler and easier‐to‐implement contracts. From a numerical study, we find that an incremental returns contract (in which the marginal rebate rate depends on the return quantity) performs quite well across a relatively broad range of conditions.

20.
“Time‐to‐build” models of investment expenditures play an important role in many traditional and modern theories of the business cycle, especially for explaining the dynamic propagation of shocks. We estimate the structural parameters of a time‐to‐build model using annual firm‐level investment data on equipment and structures. For expenditures on equipment, we find no evidence of time‐to‐build effects beyond one year. For expenditures on structures, by contrast, there is clear evidence of such effects in the range of two to three years. The contrast between equipment and structures is intuitively reasonable and consistent with previous results. The estimates for structures also indicate that initial‐period expenditures are low and increase as projects near completion. These results provide empirical support for including “time‐to‐plan” effects for investment in structures. More generally, these results suggest a potential source of specification error for Q models of investment and production‐based asset pricing models that ignore the time required to plan, build, and install new capital. (JEL: D24, G31, C33, C34)

