Similar Articles
19 similar articles found
1.
Starting from the generalized notion of a vertical coordination continuum introduced by Williamson and others, the article more specifically defines the nature of the continuum, especially the array of hybrid strategies. The continuum as presented includes five distinct groups of strategies: spot markets, specification contracts, relation-based alliances, equity-based alliances, and vertical integration. The article then presents a decision-making framework that firms can use to determine which place on the continuum makes the most sense for a particular transaction. The framework suggests that five assessments are critical to adopting a specific change in coordination strategy: (1) Is the current strategy too costly? (2) Would an alternative strategy reduce the cost? (3) Is an alternative programmable? (4) Is an alternative implementable? (5) Is the risk/return tradeoff acceptable? If the answers to all five assessments are "yes," then a change in strategy would be expected to occur.
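The framework reads as a conjunctive screen: a firm changes coordination strategy only if every one of the five assessments passes. A minimal sketch of that logic; the function and argument names are illustrative, not taken from the article:

```python
def should_change_strategy(current_too_costly: bool,
                           alternative_cuts_cost: bool,
                           alternative_programmable: bool,
                           alternative_implementable: bool,
                           risk_return_acceptable: bool) -> bool:
    """Conjunctive screen: recommend a change only if all five assessments pass."""
    return all([current_too_costly,
                alternative_cuts_cost,
                alternative_programmable,
                alternative_implementable,
                risk_return_acceptable])

# Example: a costly spot-market relationship where a specification contract
# would cut costs but is judged not implementable -> stay with current strategy.
print(should_change_strategy(True, True, True, True, True))   # True  -> change
print(should_change_strategy(True, True, True, False, True))  # False -> stay
```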

2.
Cox LA. Risk Analysis, 2012, 32(7): 1244-1252
Simple risk formulas, such as risk = probability × impact, or risk = exposure × probability × consequence, or risk = threat × vulnerability × consequence, are built into many commercial risk management software products deployed in public and private organizations. These formulas, which we call risk indices, together with risk matrices, "heat maps," and other displays based on them, are widely used in applications such as enterprise risk management (ERM), terrorism risk analysis, and occupational safety. But how well do they serve to guide allocation of limited risk management resources? This article evaluates and compares different risk indices under simplifying conditions favorable to their use (statistically independent, uniformly distributed values of their components, and noninteracting risk-reduction opportunities). Compared to an optimal (nonindex) approach, simple indices produce inferior resource allocations that, for a given cost, may reduce risk by as little as 60% of what optimal decisions would provide, at least in our simple simulations. This article suggests a better index, risk reduction per unit cost, which achieves 98–100% of the maximum possible risk reduction on these problems for all budget levels except the smallest, which allow very few risks to be addressed. On our test problems, substantial gains in risk reduction per resource spent can be obtained by using this improved index instead of simpler ones that focus only on the relative sizes of risks (or of their components) when informing risk management priorities and allocating limited risk management resources. This work suggests the need for risk management tools to explicitly consider costs in prioritization activities, particularly in situations where budget restrictions make careful allocation of resources essential for achieving close-to-maximum risk-reduction benefits.
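A toy simulation in the spirit of this comparison (not Cox's exact experimental setup; the distributions, costs, and budget are invented) shows why ranking by a pure risk index can underperform ranking by risk reduction per unit cost:

```python
# Compare budgeted risk reduction when opportunities are ranked by a simple
# risk index (threat x vulnerability x consequence) versus by reduction/cost.
import numpy as np

rng = np.random.default_rng(0)
n = 50                                  # candidate risk-reduction opportunities
t, v, c = rng.uniform(size=(3, n))      # independent U(0,1) index components
risk = t * v * c                        # risk index of each opportunity
cost = rng.uniform(0.5, 1.5, size=n)    # cost to eliminate each risk (toy)
budget = 10.0

def funded_reduction(order):
    """Total risk removed by funding opportunities in `order` until budget runs out."""
    spent = removed = 0.0
    for i in order:
        if spent + cost[i] <= budget:
            spent += cost[i]
            removed += risk[i]          # funding an item removes its risk entirely
    return removed

by_index = funded_reduction(np.argsort(-risk))           # rank by risk size alone
by_ratio = funded_reduction(np.argsort(-(risk / cost)))  # rank by reduction per cost
print(f"risk removed, index ranking: {by_index:.3f}")
print(f"risk removed, ratio ranking: {by_ratio:.3f}")    # typically larger
```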

3.
A 2014 omnibus spending bill directed the Department of Energy (DOE) to analyze how effectively it identifies, programs, and executes its plans to address the public health and safety risks that remain among its environmental cleanup liabilities. A committee identified two dozen issues and associated recommendations for DOE, other federal agencies, the U.S. Congress, and other stakeholders such as states and tribal nations to consider. In regard to risk assessment, the committee described a risk review process that uses available data and expert experience, identifies major data gaps, permits input from key stakeholders, and creates an ordered set of risks based on what is known. Probabilistic risk assessments could follow from these risk reviews. In regard to risk management, the states in particular have become major drivers of how resources are allocated. States apply different laws, set different priorities, and challenge DOE's policies in different ways. Land use decisions vary, technology choices differ, and other notable variations are apparent. The cost differences associated with these variations are marked. The net result is that resources do not necessarily go to the most prominent human health and safety risks, as seen from the national level.

4.
Hypothetical fears are concepts, not quantities. Their conceptual nature makes conventional quantitative risk analysis (QRA) based on benefit/cost/risk impractical, so they become an unmeasured influence in national decision making. The decision process involves two steps: the Analytic Stage (QRA-based) and the Priority Stage (competition for resource allocation). This article suggests that a quantitative estimate of the social cost of reducing fear to acceptable levels be used as a surrogate QRA input to the Priority Stage.

5.
On the basis of a combination of the well-known knapsack problem and a widely used risk management technique in organizations (the risk matrix), an approach was developed to carry out a cost-benefit analysis for making prevention investment decisions efficiently. Using the knapsack problem as a model and combining it with a well-known technique for solving that problem, bundles of prevention measures are prioritized based on their costs and benefits within a predefined prevention budget. The bundles showing the highest efficiencies within a given budget are identified from a wide variety of possible alternatives. Hence, the approach allows for an optimal allocation of safety resources, does not require any highly specialized information, and can therefore easily be applied by any organization that uses the risk matrix as a risk-ranking tool.
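A minimal sketch of the selection step: an exact 0/1 knapsack solved by dynamic programming over an integer budget. The measure names, costs, and benefit scores are invented; in the article, the benefits would come from risk-matrix estimates:

```python
def best_bundle(measures, budget):
    """measures: list of (name, cost, benefit) with integer costs.
    Returns the benefit-maximizing bundle affordable within `budget`."""
    # dp[b] = (best total benefit achievable with budget b, chosen measure names)
    dp = [(0.0, [])] * (budget + 1)
    for name, cost, benefit in measures:
        for b in range(budget, cost - 1, -1):   # reverse: each measure used once
            cand = dp[b - cost][0] + benefit
            if cand > dp[b][0]:
                dp[b] = (cand, dp[b - cost][1] + [name])
    return dp[budget]

measures = [("machine guarding", 4, 9.0),     # (name, cost, benefit) - invented
            ("gas detection", 3, 6.0),
            ("operator training", 2, 5.0),
            ("sprinkler upgrade", 5, 8.0)]
benefit, chosen = best_bundle(measures, budget=9)
print(chosen, benefit)   # most efficient bundle within the budget of 9
```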

6.
Helicobacter pylori is a microaerophilic, gram-negative bacterium that is linked to adverse health effects including ulcers and gastrointestinal cancers. The goal of this analysis is to develop the inputs for a quantitative microbial risk assessment (QMRA) supporting a potential guideline for drinking water at the point of ingestion (e.g., a maximum contaminant level, or MCL) that would protect human health to an acceptable level of risk while accounting for sources of uncertainty. Using infection and gastric cancer as two discrete endpoints, and calculating dose-response relationships from experimental data on humans and monkeys, we perform both a forward and a reverse risk assessment to determine the risk from currently reported surface water concentrations of H. pylori and an acceptable concentration of H. pylori at the point of ingestion. This approach represents a synthesis of available information on human exposure to H. pylori via drinking water. A lifetime cancer risk model suggests that an MCL be set at <1 organism/L, given 5-log treatment removal, because we cannot exclude the possibility that current levels of H. pylori in environmental source waters pose a potential public health risk. Research gaps include pathogen occurrence in source and finished water, treatment removal rates, and determination of H. pylori risks from other water sources such as groundwater and recreational water.
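A hedged sketch of the reverse (acceptable-concentration) calculation, assuming an exponential dose-response model P = 1 − exp(−r·dose); every parameter value below (r, intake, annual risk target) is a placeholder, not the paper's fitted value:

```python
import math

r = 2e-2               # hypothetical dose-response parameter (per organism)
intake_L = 1.0         # assumed daily unboiled tap-water intake, L/day
target_annual = 1e-4   # hypothetical acceptable annual infection risk
log_removal = 5        # 5-log treatment removal, as in the abstract

# daily risk consistent with the annual target (independent daily exposures)
target_daily = 1 - (1 - target_annual) ** (1 / 365)
dose = -math.log(1 - target_daily) / r          # acceptable daily dose, organisms
conc_tap = dose / intake_L                      # at the point of ingestion
conc_source = conc_tap * 10 ** log_removal      # allowable source-water level
print(f"tap: {conc_tap:.2e} org/L, source: {conc_source:.2e} org/L")
```

The same structure run forward (from a measured source-water concentration to a risk estimate) gives the forward assessment the abstract mentions.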

7.
Moment-matching discrete distributions were developed by Miller and Rice (1983) as a method to translate continuous probability distributions into discrete distributions for use in decision and risk analysis. Using Gaussian quadrature, they showed that an n-point discrete distribution can be constructed that exactly matches the first 2n − 1 moments of the underlying distribution. These moment-matching discrete distributions offer several theoretical advantages over typical discrete approximations, as shown in Smith (1993), but they also pose practical problems. In particular, how does the analyst estimate the moments given only subjective assessments of the continuous probability distribution? Smith suggests that the moments can be estimated by fitting a distribution to the assessments. This research note shows that the quality of the moment estimates cannot be judged solely by how close the fitted distribution is to the true distribution. Examples show that the relative errors in higher-order moment estimates can exceed 100%, even though the cumulative distribution function is estimated to within a Kolmogorov-Smirnov distance of less than 1%.
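One concrete instance of the construction: for a standard normal, the n-point probabilists' Gauss-Hermite rule yields a discrete distribution that matches the first 2n − 1 raw moments exactly. The sketch below verifies the match numerically; it illustrates the quadrature idea for one convenient distribution rather than Miller and Rice's general moment-based algorithm:

```python
import numpy as np

n = 3
x, w = np.polynomial.hermite_e.hermegauss(n)   # nodes/weights for exp(-x^2/2)
p = w / np.sqrt(2 * np.pi)                     # normalize weights to probabilities

def normal_moment(k):
    """Raw moment E[Z^k] of a standard normal: 0 for odd k, (k-1)!! for even k."""
    return float(np.prod(np.arange(k - 1, 0, -2))) if k % 2 == 0 else 0.0

for k in range(2 * n):                         # moments 0 .. 2n-1 all match
    print(f"moment {k}: discrete = {np.sum(p * x**k):+.6f}, "
          f"normal = {normal_moment(k):+.6f}")
```

For n = 3 this reproduces the familiar three-point discretization with nodes 0, ±√3 and probabilities 2/3, 1/6, 1/6.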

8.
Three Conceptions of Quantified Societal Risk
In several European countries, efforts are undertaken, in particular with regard to fixed industrial installations and the transport of dangerous substances, to quantify the "societal risk" (SR) of accidents that may cause more than one victim at a time. This article explores the nature of such efforts. SR models are essentially ways to structure the distribution of the potential social costs of decisions about hazardous activities (e.g., costs of risk reduction, of land use forgone). First, the various ways to describe SR quantitatively, and to set limits to SR, are briefly presented. Next, using a scheme developed by Fischhoff and colleagues, the various approaches are placed in broad categories of reaching acceptable-risk decisions: bootstrapping, formal analysis, and professional judgment. Each of the three categories offers a particular appreciation of the risks as "external costs," which has important political implications. The discussion argues, first, that local SR limits, by the very nature of SR, should be set in a way that creates consistency with any supra-local interests involved; second, it pays particular attention to the validity of claims that SR limits should reflect a strong risk aversion.

9.
The recent decision of the U.S. Supreme Court on the regulation of CO2 emissions from new motor vehicles(1) shows the need for a robust methodology to evaluate the fraction of attributable risk from such emissions. The methodology must enable decisionmakers to reach practically relevant conclusions on the basis of expert assessments that the decisionmakers see as an expression of research in progress, rather than as knowledge consolidated beyond any reasonable doubt.(2-4) This article presents such a methodology and demonstrates its use for the Alpine heat wave of 2003. In a Bayesian setting, different expert assessments of temperature trends and volatility can be formalized as probability distributions, with initial weights (priors) attached to them. By Bayesian learning, these weights can be adjusted in the light of data. The fraction of heat wave risk attributable to anthropogenic climate change can then be computed from the posterior distribution. We show that very different priors consistently lead to the result that anthropogenic climate change has contributed more than 90% to the probability of the Alpine summer heat wave in 2003. The present method can be extended to a wide range of applications where conclusions must be drawn from divergent assessments under uncertainty.
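A toy version of the scheme with invented numbers: two expert assessments are formalized as normal distributions, the prior weights are updated by their likelihood on observed data, and the attributable fraction follows from posterior-weighted exceedance probabilities. The distributions, data, and heat-wave threshold are all illustrative, not the paper's values:

```python
import numpy as np
from scipy import stats

threshold = 22.0                      # heat-wave threshold, deg C (invented)
obs = np.array([21.8, 22.5, 23.1])    # hypothetical observed summer means

models = {"natural": stats.norm(20.0, 1.0),   # expert 1: no warming trend
          "warmed":  stats.norm(22.0, 1.0)}   # expert 2: anthropogenic shift
weights = {"natural": 0.5, "warmed": 0.5}     # initial priors

# Bayesian learning: reweight each assessment by its likelihood on the data
post = {h: weights[h] * float(np.prod(m.pdf(obs))) for h, m in models.items()}
z = sum(post.values())
post = {h: v / z for h, v in post.items()}

# attributable fraction: threshold exceedance with vs. without the warmed model
p_now = sum(post[h] * models[h].sf(threshold) for h in models)
p_natural = models["natural"].sf(threshold)
print(post, f"FAR = {1 - p_natural / p_now:.3f}")   # here > 0.9, echoing the paper
```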

10.
Since substantial bias can result from assigning some type of mean exposure to a group, risk assessments based on epidemiological data should avoid grouping data whenever possible. However, ungrouped data are frequently unavailable, and the question arises as to whether an arithmetic or a geometric mean is the more appropriate summary measure of exposure. This paper argues that one should use the type of mean for which the total risk that would result if every member of the population were exposed to the mean level is as close as possible to the actual total population risk. Using this criterion, an arithmetic mean is always preferred over a geometric mean whenever the dose response is convex. In each of several data sets examined in this paper for which the dose response was not convex, an arithmetic mean was still preferred under this criterion.
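A small numeric illustration of the criterion, with made-up exposures and a toy convex dose-response f(d) = d²: by Jensen's inequality, the risk evaluated at the arithmetic mean sits closer to (though still below) the population-average risk than the risk at the geometric mean does:

```python
import numpy as np

exposures = np.array([0.5, 1.0, 2.0, 4.0, 8.0])   # hypothetical group exposures

def f(d):
    return d ** 2                                  # convex dose-response (toy)

true_mean_risk = f(exposures).mean()               # actual average risk: 17.05
am = exposures.mean()                              # arithmetic mean: 3.1
gm = np.exp(np.log(exposures).mean())              # geometric mean: 2.0
print(f"true average risk      : {true_mean_risk:.2f}")
print(f"risk at arithmetic mean: {f(am):.2f}")     # 9.61 - closer to the truth
print(f"risk at geometric mean : {f(gm):.2f}")     # 4.00 - farther below it
```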

11.
The Color Additives Scientific Review Panel considered whether there was sufficient information to perform a carcinogenic risk assessment on the colors D&C Red No. 19 (R-19), D&C Red No. 37 (R-37), D&C Orange No. 17 (O-17), D&C Red No. 9 (R-9), D&C Red No. 8 (R-8), and FD&C Red No. 3 (R-3), and to evaluate the assessments sent to FDA as part of the petitions for drug and external uses of the colors by the Cosmetic, Toiletry and Fragrance Association (CTFA). There is a lack of human data on the colors for making a human health assessment, so the assessments are based on extrapolation of animal data. The risk assessments are determined for exposure to single chemicals. Excluded from consideration are possible effects from exposure to multiple chemicals, such as co-carcinogenesis, promotion, synergism, and antagonism. In light of recent efforts to establish a consensus in risk assessment, the Panel determined that the CTFA assessments for R-19, O-17, and R-9 are consistent with present acceptable usages, although it questions some of the assumptions used in the assessments. The Panel identified a number of general assumptions made and discussed their validity, their impact on total uncertainty, and the potential options for addressing the gaps in understanding that necessitate each assumption. The Panel also derived revised risk estimates, using more "reasonable" assumptions rather than "worst-case" situations, for 90th-percentile and average exposure. For those assumptions that are easily quantifiable, the Panel's estimates are less than an order of magnitude lower than the CTFA risk estimates, indicating that the underestimates and overestimates in the CTFA risk estimates tend to balance each other. The impact of most of the assumptions is not quantifiable. The assessment for R-3 is complicated by the fact that there is no good skin penetrance study for this color. It was assumed that its penetrance is similar to that of another water-soluble xanthene color, R-19, since the absorption of the color is not expected to exceed that of the smaller molecule, R-19. Therefore, the risk estimates are similar to the CTFA estimates, but with different reasoning. The estimates for R-8 and R-37 differ from the others in that there is a lack of any exposure or toxicological information on these colors. (ABSTRACT TRUNCATED AT 400 WORDS)

12.
Adrian Kent. Risk Analysis, 2004, 24(1): 157-168
Recent articles by Busza et al. (BJSW) and Dar et al. (DDH) argue that astrophysical data can be used to establish small bounds on the risk of a "killer strangelet" catastrophe scenario in the RHIC and ALICE collider experiments. The case for the safety of the experiments set out by BJSW does not rely solely on these bounds, but on theoretical arguments, which BJSW find sufficiently compelling to firmly exclude any possibility of catastrophe. Nonetheless, DDH and other commentators (initially including BJSW) suggested that these empirical bounds alone give sufficient reassurance. This seems unsupportable when the bounds are expressed in terms of expectation value, a good measure according to standard risk analysis arguments. For example, DDH's main bound, p(catastrophe) < 2 × 10^-8, implies only that the expectation value of the number of deaths is bounded by 120 (the probability bound multiplied by a world population of roughly 6 × 10^9); BJSW's most conservative bound implies that the expectation value of the number of deaths is bounded by 60,000. This article reappraises the DDH and BJSW risk bounds by comparing risk policy in other areas. For example, it is noted that, even if highly risk-tolerant assumptions are made and no value is placed on the lives of future generations, a catastrophe risk no higher than approximately 10^-15 per year would be required for consistency with established policy for radiation hazard risk minimization. Allowing for risk aversion and for future lives, a respectable case can be made for requiring a bound many orders of magnitude smaller. In summary, the costs of small risks of catastrophe were significantly underestimated by BJSW (initially), by DDH, and by other commentators. Future policy on catastrophe risks would be more rational, and more deserving of public trust, if acceptable risk bounds were generally agreed upon ahead of time and if serious research on whether those bounds could indeed be guaranteed was carried out well in advance of any hypothetically risky experiment, with the relevant debates involving experts with no stake in the experiments under consideration.
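The arithmetic behind the quoted expectation values is a one-liner; the world-population figure of 6 × 10^9 is an assumption consistent with the quoted numbers, not stated in the abstract:

```python
population = 6e9          # assumed world population of the era (not in the abstract)

ddh_bound = 2e-8          # DDH: p(catastrophe) < 2e-8
print(f"DDH: expected deaths < {ddh_bound * population:,.0f}")           # -> 120

bjsw_deaths = 60_000      # BJSW's most conservative bound, in expected deaths
print(f"BJSW: implied p(catastrophe) < {bjsw_deaths / population:.0e}")  # -> 1e-05
```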

13.
Criteria are proposed for both an acceptable upper bound of nuclear power plant risk and a lower bound as a design target. Recognizing that the public risk associated with a power plant can be estimated only by probabilistic analysis of the design features, the spread between the lower design target and the upper bound provides a margin for uncertainty in the probabilistic estimate. The combination of a low probabilistic design target and this margin provides a reasonable expectation that the overall performance will be in the domain of an acceptable risk level. Because the exposure to potential risk is chiefly in the locality of the nuclear station, it is also proposed that compensatory benefits should be provided locally and that these be included as a cost of operation. It is suggested that the upper bound be set at a risk level equivalent to those risks of routine living which are normally accepted, i.e., about 10^-4 deaths per year per person (100 deaths/yr/million). The proposed lower design target is 10^-8 (0.1 deaths/yr/million), about one-hundredth of the minimal risk from the natural hazards to which all people are exposed.

14.
Assuming that the stock and the futures follow driftless arithmetic Brownian motions, that investor utility takes a mean-variance form, and that price impact is linear, this paper solves, in a continuous-time framework, the problem of simultaneously liquidating a single stock position and its stock-index-futures hedge, and obtains the liquidation trajectory. Parameter analysis shows that when risk aversion is high, or the portfolio standard deviation is large, investors tend to liquidate a larger share of the position early in the liquidation window in order to reduce later risk; when ρ < 0, a higher hedge ratio pushes investors toward a faster liquidation process, and the opposite holds when ρ > 0; for a given hedge ratio, the liquidation rate varies inversely with the correlation coefficient ρ.
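The setup is close to the Almgren-Chriss optimal-execution model. As a hedged illustration only, here is the single-asset mean-variance liquidation trajectory under linear impact, x(t) = X0·sinh(κ(T − t))/sinh(κT); this is a stand-in for, not a reproduction of, the paper's two-asset stock-plus-futures solution, and all parameter values are invented:

```python
import numpy as np

X0 = 1_000_000    # initial position (shares)
T = 5.0           # liquidation horizon (days)
sigma = 0.02      # price volatility (toy value)
eta = 2.5e-6      # linear temporary-impact coefficient (toy value)
lam = 1e-3        # mean-variance risk-aversion coefficient (toy value)

kappa = np.sqrt(lam * sigma**2 / eta)                    # liquidation urgency
t = np.linspace(0.0, T, 6)
x = X0 * np.sinh(kappa * (T - t)) / np.sinh(kappa * T)   # remaining position

for ti, xi in zip(t, x):
    print(f"t = {ti:3.1f} d, remaining = {xi:12,.0f} shares")
# Raising lam or sigma raises kappa and front-loads the schedule - the same
# qualitative effect of risk aversion on early liquidation the abstract reports.
```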

15.
Historically, U.S. regulators have derived cancer slope factors by using applied dose and tumor response data from a single key bioassay or by averaging the cancer slope factors of several key bioassays. Recent changes in U.S. Environmental Protection Agency (EPA) guidelines for cancer risk assessment have acknowledged the value of better use of mechanistic data and better dose–response characterization. However, agency guidelines may benefit from additional considerations presented in this paper. An exploratory study was conducted by using rat brain tumor data for acrylonitrile (AN) to investigate the use of physiologically based pharmacokinetic (PBPK) modeling along with pooling of dose–response data across routes of exposure as a means for improving carcinogen risk assessment methods. In this study, two contrasting assessments were conducted for AN-induced brain tumors in the rat on the basis of (1) the EPA's approach, the dose–response relationship was characterized by using administered dose/concentration for each of the key studies assessed individually; and (2) an analysis of the pooled data, the dose–response relationship was characterized by using PBPK-derived internal dose measures for a combined database of ten bioassays. The cancer potencies predicted by the contrasting assessments are remarkably different (i.e., risk-specific doses differ by as much as two to four orders of magnitude), with the pooled data assessments yielding lower values. This result suggests that current carcinogen risk assessment practices overestimate AN cancer potency. This methodology should be equally applicable to other data-rich chemicals in identifying (1) a useful dose measure, (2) an appropriate dose–response model, (3) an acceptable point of departure, and (4) an appropriate method of extrapolation from the range of observation to the range of prediction when a chemical's mode of action remains uncertain.
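As a sketch of the pooling step only (not the acrylonitrile PBPK model or its data), the fragment below fits one dose-response curve to mock bioassay results from two exposure routes that share a common internal-dose metric; the model form, doses, and tumor counts are all invented:

```python
import numpy as np
from scipy.optimize import curve_fit

def dose_response(d, c, q):
    """One-hit model with background: P = c + (1 - c) * (1 - exp(-q * d))."""
    return c + (1.0 - c) * (1.0 - np.exp(-q * d))

# (internal dose, tumor-bearing animals, group size) - two mock bioassays whose
# administered doses were already converted to a common internal-dose metric
oral       = [(0.0, 1, 50), (2.0, 5, 50), (6.0, 14, 50)]
inhalation = [(0.0, 2, 50), (1.5, 4, 50), (5.0, 12, 50)]
pooled = oral + inhalation          # pooling across routes is the key step

d = np.array([r[0] for r in pooled])
p = np.array([r[1] / r[2] for r in pooled])
(c, q), _ = curve_fit(dose_response, d, p, p0=[0.02, 0.05],
                      bounds=(0, [1.0, np.inf]))
print(f"background = {c:.3f}, pooled slope q = {q:.4f} per unit internal dose")
```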

16.
Many recent contributions to risk communication research stress the importance of the element of "trust" in the process of successful communication. This paper takes up that theme in considering risk communication within the context of seeking consensus on health and environmental risk controversies through stakeholder negotiation. It suggests that there are very good reasons, based on historical experience, for the parties in such settings to mistrust each other deeply. For example, there is abundant evidence of episodes in which risk promoters have concealed or ignored relevant risk data, or simply have sought to advance their own interests by selective use of such data. These well-established practices compound the difficulties other stakeholders face in all such negotiations by virtue of the inescapable uncertainties (as well as the absence of needed data) inherent in risk assessments. These factors encourage the participants to treat such negotiations as poker games in which bluffing, raising the ante, and calling the perceived bluffs of others are matters of survival. In the end, we should recognize the genuine dilemmas citizens face in trying to figure out who and what to believe in making sensible decisions among the range of risks, benefits, and tradeoffs that confront us.

17.
This paper addresses the methodological issue of how researchers gain access and build trust in order to conduct research in organizations. It focuses, in particular, on the role of interests (what actors want or what they stand to gain or lose) in the research relationship. The analysis shows how notions of interests, stake and motive were managed during an action research study in a UK subsidiary of a multinational corporation. The study uses an approach to discourse analysis inspired by the field of discursive psychology to identify four discursive devices: stake inoculation; stake confession; stake attribution; and stake construction. The paper contributes to the understanding of research methodology by identifying the importance of interest-talk in the process of doing management research.

18.
Public opinion poll data have consistently shown that the proportion of respondents who are willing to have a nuclear power plant in their own community is smaller than the proportion who agree that more nuclear plants should be built in this country. Respondents' judgments of the minimum safe distance from each of eight hazardous facilities confirmed that this finding results from perceived risk gradients that differ by facility (e.g., nuclear vs. natural gas power plants) and social group (e.g., chemical engineers vs. environmentalists) but are relatively stable over time. Ratings of the facilities on thirteen perceived risk dimensions were used to determine whether any of the dimensions could explain the distance data. Because the rank order of the facilities with respect to acceptable distance was very similar to the rank order on a number of the perceived risk dimensions, it is difficult to determine which of the latter is the critical determinant of acceptable distance if, indeed, there is only one. There were, however, a number of reversals of rank order that indicate that the respondents had a differentiated view of technological risk. Finally, data from this and other studies were interpreted as suggesting that perceived lack of any other form of personal control over risk exposure may be an important factor in stimulating public opposition to the siting of hazardous facilities.

19.
Reliability and higher levels of safety are thought to be achieved by using systematic approaches to managing risks. Risk assessment has produced a range of approaches to these uncertainties, presenting models for how risks affect individuals or organizations. Contemporary risk assessment tools based on this approach have proven difficult for practitioners to use for tactical and operational decision making. This article presents an alternative to these assessments by adopting a resilience perspective, arguing that complex systems tend to produce variable and uncertain results and are therefore prone to systemic failures. A continuous improvement approach is a source of reliability when managing complex systems and is necessary for coping with such variety and uncertainty. For an organization to understand how risk events occur, it is necessary to define what is believed to be the equilibrium of the system in time and space. By applying a resilience engineering (RE) perspective to risk assessment, it is possible to manage this complexity by assessing the ability to respond, monitor, learn, and anticipate risks, and in so doing to move away from the flawed frequency-and-consequence approach. A research station network in the Arctic is used as an example to illustrate how an RE approach qualifies assessments by bridging risk assessments with value-creation processes. The article concludes by arguing that a resilience-based risk assessment can improve on current practice, including for organizations located outside the Arctic region.
