Similar Documents
Found 20 similar documents (search time: 31 ms)
1.
This article discusses to what extent risk analysis is scientific in view of a set of commonly used definitions and criteria. We consider scientific knowledge to be characterized by its subject matter, its success in developing the best available knowledge in its fields of study, and the epistemic norms and values that guide scientific investigations. We proceed to assess the field of risk analysis according to these criteria. For this purpose, we use a model for risk analysis in which science serves as a basis for decision making on risks. The model covers five elements (evidence, knowledge base, broad risk evaluation, managerial review and judgment, and the decision) and relates these elements both to the domains of experts and decision makers and to the fact‐based and value‐based domains. We conclude that risk analysis is a scientific field of study, when understood as consisting primarily of (i) knowledge about risk‐related phenomena, processes, events, etc., and (ii) concepts, theories, frameworks, approaches, principles, methods and models to understand, assess, characterize, communicate, and manage risk, in general and for specific applications (the instrumental part).

2.
We develop and apply a judgment‐based approach to selecting robust alternatives, which are defined here as reasonably likely to achieve objectives, over a range of uncertainties. The intent is to develop an approach that is more practical in terms of data and analysis requirements than current approaches, informed by the literature and experience with probability elicitation and judgmental forecasting. The context involves decisions about managing forest lands that have been severely affected by mountain pine beetles in British Columbia, a pest infestation that is climate‐exacerbated. A forest management decision was developed as the basis for the context, objectives, and alternatives for land management actions, to frame and condition the judgments. A wide range of climate forecasts, taken to represent the 10–90% levels on cumulative distributions for future climate, were developed to condition judgments. An elicitation instrument was developed, tested, and revised to serve as the basis for eliciting probabilistic three‐point distributions regarding the performance of selected alternatives, over a set of relevant objectives, in the short and long term. The elicitations were conducted in a workshop comprising 14 regional forest management specialists. We employed the concept of stochastic dominance to help identify robust alternatives. We used extensive sensitivity analysis to explore the patterns in the judgments, and also considered the preferred alternatives for each individual expert. The results show that two alternatives that are more flexible than the current policies are judged more likely to perform better than the current alternatives on average in terms of stochastic dominance. The results suggest judgmental approaches to robust decision making deserve greater attention and testing.
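The stochastic-dominance comparison described above can be sketched in a few lines. This is a minimal illustration, not the authors' elicitation instrument: the alternative names and CDF values below are invented, and the check implements only first-order stochastic dominance on a shared outcome grid.

```python
# Hypothetical CDF values for two management alternatives, evaluated on a
# common grid of outcome levels (all numbers are illustrative).
def fosd(cdf_a, cdf_b):
    """True if alternative A first-order stochastically dominates B:
    F_A(x) <= F_B(x) at every grid point, strictly at some point."""
    return (all(a <= b for a, b in zip(cdf_a, cdf_b))
            and any(a < b for a, b in zip(cdf_a, cdf_b)))

flexible = [0.05, 0.25, 0.60, 1.00]  # CDF of a more flexible alternative
current = [0.10, 0.40, 0.75, 1.00]   # CDF of the current policy
print(fosd(flexible, current))  # True: the flexible alternative dominates
print(fosd(current, flexible))  # False
```

A lower CDF at every point means more probability mass on better outcomes, which is the sense in which the flexible alternatives were judged more likely to perform better.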

3.
Researchers in judgment and decision making have long debunked the idea that we are economically rational optimizers. However, problematic assumptions of rationality remain common in studies of agricultural economics and climate change adaptation, especially those that involve quantitative models. Recent movement toward more complex agent‐based modeling provides an opportunity to reconsider the empirical basis for farmer decision making. Here, we reconceptualize farmer decision making from the ground up, using an in situ mental models approach to analyze weather and climate risk management. We assess how large‐scale commercial grain farmers in South Africa (n = 90) coordinate decisions about weather, climate variability, and climate change with those around other environmental, agronomic, economic, political, and personal risks that they manage every day. Contrary to common simplifying assumptions, we show that these farmers tend to satisfice rather than optimize as they face intractable and multifaceted uncertainty; they make imperfect use of limited information; they are differently averse to different risks; they make decisions on multiple time horizons; they are cautious in responding to changing conditions; and their diverse risk perceptions contribute to important differences in individual behaviors. We find that they use two important nonoptimizing strategies, which we call cognitive thresholds and hazy hedging, to make practical decisions under pervasive uncertainty. These strategies, evident in farmers' simultaneous use of conservation agriculture and livestock to manage weather risks, are the messy in situ performance of naturalistic decision‐making techniques. These results may inform continued research on such behavioral tendencies in narrower lab‐ and modeling‐based studies.

4.
5.
The mental models approach, a leading strategy for developing risk communications, involves a time‐ and labor‐intensive interview process and a lengthy questionnaire to elicit group‐level risk perceptions. We propose that a similarity ratings approach for structural knowledge elicitation can be adopted to assist the risk mental models approach. The LinkIT game, inspired by games with a purpose (GWAP) technology, is a ludic elicitation tool designed to elicit group understanding of the relations between risk factors in a more enjoyable and productive manner than traditional approaches. That is, consistent with the idea of ludic elicitation, LinkIT was designed to make the elicitation process fun and enjoyable in the hopes of increasing participation and data quality in risk studies. Like the mental models approach, the group mental model obtained via the LinkIT game can be represented in the form of an influence diagram. To examine the external validity of LinkIT, we conducted a study comparing its performance with a more conventional questionnaire‐driven approach. The data analysis shows that the group mental models elicited by the two approaches are similar to an extent. Yet, LinkIT was more productive and enjoyable than the questionnaire. However, participants commented that the current game has some usability concerns. This presentation summarizes the design and evaluation of the LinkIT game and suggests areas for future work.

6.
The development of the Association of Business Schools (ABS) list in 2007 and its rapid adoption by UK business schools has had a profound effect on the nature of business and management academics’ ways of working. Using a large‐scale survey of UK business academics, we assess the extent to which individuals use the Academic Journal Guide (AJG/ABS) list in their day‐to‐day professional activities. In particular, we explore how their perceptions of the list, the academic influence of their research, academic rank and organizational context drive the varied use. Building on prior research on the importance of univalent attitudes in predicting behaviour, we find those who have either strong positive or negative views of the list are more extensive users than those who are ambivalent. We also find that the extent of use of the AJG/ABS list is greatest among those academics who have lower academic influence, in the middle or junior ranks within business schools and in middle and low‐status universities. We explore the implications of these findings for the value of journal rankings and for the management of business schools.

7.
Cluster‐based segmentation usually involves two sets of variables: (i) the needs‐based variables (referred to as the bases variables), which are used in developing the original segments to identify the value, and (ii) the classification or background variables, which are used to profile or target the customers. The managers’ goal is to utilize these two sets of variables in the most efficient manner. Pragmatic managerial interests recognize the underlying need to start shifting from methodologies that obtain highly precise value‐based segments but may be of limited practical use as they provide less targetable segments. Consequently, the imperative is to shift toward newer segmentation approaches that provide greater focus on targetable segments while maintaining homogeneity. This requires dual objective segmentation, which is a combinatorially difficult problem. Hence, we propose and examine a new evolutionary methodology based on genetic algorithms to address this problem. We show, based on a large‐scale Monte Carlo simulation and a case study, that the proposed approach consistently outperforms the existing methods for a wide variety of problem instances. We are able to obtain statistically significant and managerially important improvements in targetability with little diminution in the identifiability of value‐based segments. Moreover, the proposed methodology provides a set of good solutions, unlike existing methodologies that provide a single solution. We also show how these good solutions can be used to plot an efficient Pareto frontier. Finally, we present useful insights that would help managers in implementing the proposed solution approach effectively.

8.
We study a joint capacity leasing and demand acceptance problem in intermodal transportation. The model features multiple sources of evolving supply and demand, and endogenizes the interplay of three levers: forecasting, leasing, and demand acceptance. We characterize the optimal policy, and show how dynamic forecasting coordinates leasing and acceptance. We find that (i) the value of dynamic forecasting depends critically on scarcity, stochasticity, and volatility; (ii) the traditional mean‐value equivalence approach performs poorly in the volatile intermodal context; and (iii) a mean‐value‐based forecast may outperform a stationary distribution‐based forecast. Our work enriches revenue management models and applications, and advances our understanding of when and how to use dynamic forecasting in intermodal revenue management.

9.
In many innovation settings, ideas are generated over time and managers face a decision about if and how to provide in‐process feedback to the idea generators about the quality of submissions. In this article, we use design contests allowing repeated entry to examine the effect of in‐process feedback on idea generation. We report on a set of field experiments using two online contest websites to compare the performance of three different feedback treatments: no feedback, random feedback, and directed feedback (i.e., in‐process feedback highly correlated with the final quality rating of the entry). We posted six logo design contests for consumer products and accepted submissions for 1 week. We provided daily feedback during the contest period using one of the three treatments. We then used a panel of target consumers to rate the quality of each idea. We find that directed feedback is associated positively with agent participation. In terms of outcomes, while directed feedback improves the average quality of submitted entries, we do not find the same relationship for the best entries; indeed, no feedback or random feedback may produce better top‐end entry quality. We also find that, under directed feedback, the variance in quality declines as the contest progresses.

10.
In a make‐to‐order product recovery environment, we consider the allocation decision for returned products under stochastic demand for a firm with three options: refurbishing to resell, parts harvesting, and recycling. We formulate the problem as a multiperiod Markov decision process (MDP) and present a linear programming (LP) approximation that provides an upper bound on the optimal objective function value of the MDP model. We then present two solution approaches to the MDP using the LP solution: a static approach that uses the LP solution directly and a dynamic approach that adopts a revenue management perspective and employs a bid‐price control technique in which the LP is re‐solved after each demand arrival. We calculate the bid prices based on the shadow price interpretation of the dual variables for the inventory constraints and accept a demand if the marginal value is higher than the bid price. Since solving the LP at each demand arrival requires a very efficient solution procedure, we present a transportation problem formulation of the LP via variable redefinitions and develop a one‐pass optimal solution procedure for it. We carry out an extensive numerical analysis to compare the two approaches and find that the dynamic approach provides better performance in all of the tested scenarios. Furthermore, the solutions obtained are within 2% of the upper bound on the optimal objective function value of the MDP model.
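The acceptance rule at the heart of a bid-price control can be sketched very compactly. This is an illustrative fragment, not the paper's model: the option names and bid-price values are hypothetical stand-ins for the shadow prices of the inventory constraints in the LP relaxation.

```python
# Hypothetical bid prices: stand-ins for the shadow prices of the inventory
# constraints obtained from the LP dual (values are invented).
bid_prices = {"refurbish": 40.0, "parts": 15.0, "recycle": 2.0}

def accept_demand(option, revenue, bid_prices):
    """Accept an arriving demand only if its marginal revenue covers the
    bid price of the inventory class it would consume."""
    return revenue >= bid_prices[option]

print(accept_demand("refurbish", 55.0, bid_prices))  # True: revenue clears the bid price
print(accept_demand("parts", 10.0, bid_prices))      # False: below the shadow price
```

In the dynamic approach described above, the bid prices would be refreshed by re-solving the LP after each arrival; here they are fixed only to keep the sketch self-contained.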

11.
We propose a framework for out‐of‐sample predictive ability testing and forecast selection designed for use in the realistic situation in which the forecasting model is possibly misspecified, due to unmodeled dynamics, unmodeled heterogeneity, incorrect functional form, or any combination of these. Relative to the existing literature (Diebold and Mariano (1995) and West (1996)), we introduce two main innovations: (i) We derive our tests in an environment where the finite sample properties of the estimators on which the forecasts may depend are preserved asymptotically. (ii) We accommodate conditional evaluation objectives (can we predict which forecast will be more accurate at a future date?), which nest unconditional objectives (which forecast was more accurate on average?), that have been the sole focus of previous literature. As a result of (i), our tests have several advantages: they capture the effect of estimation uncertainty on relative forecast performance, they can handle forecasts based on both nested and nonnested models, they allow the forecasts to be produced by general estimation methods, and they are easy to compute. Although both unconditional and conditional approaches are informative, conditioning can help fine‐tune the forecast selection to current economic conditions. To this end, we propose a two‐step decision rule that uses current information to select the best forecast for the future date of interest. We illustrate the usefulness of our approach by comparing forecasts from leading parameter‐reduction methods for macroeconomic forecasting using a large number of predictors.
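The unconditional comparison that this framework nests can be sketched as a t-statistic on the mean loss differential, in the spirit of Diebold and Mariano. This is a simplified illustration only: the loss series below are invented, and a serious implementation would use a HAC variance estimator rather than assuming serially uncorrelated loss differentials.

```python
import statistics

def loss_differential_stat(loss_a, loss_b):
    """t-statistic on the mean loss differential d_t = L(a_t) - L(b_t).
    Negative values favor forecast A. Assumes uncorrelated d_t; a real
    test would use a HAC standard error."""
    d = [a - b for a, b in zip(loss_a, loss_b)]
    return statistics.mean(d) / (statistics.stdev(d) / len(d) ** 0.5)

loss_model_a = [1.0, 0.8, 1.2, 0.9, 1.1]  # hypothetical squared forecast errors
loss_model_b = [1.4, 1.1, 1.5, 1.2, 1.6]
t = loss_differential_stat(loss_model_a, loss_model_b)
print(round(t, 2))  # negative: model A has lower average loss
```

The conditional version proposed in the paper would instead regress the loss differential on current-information instruments and test whether those instruments predict which forecast wins next period.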

12.
Quality‐related incidents involving contract manufacturers (CMs) are becoming increasingly prevalent. The quality management (QM) literature, however, has focused mostly on QM within a single firm. Thus, the need for data‐driven research on managing quality with outsourced production is evident. We investigate the use and effectiveness of external failure penalties and audits of CMs’ facilities to manage inter‐firm quality. Building on agency theory and extant QM literature, this study addresses two research questions: (i) whether the control mechanisms of quality audits and contractual external quality failure penalties are substitutes or complements in use and (ii) whether they are substitutes or complements in their effectiveness at aligning the quality interests of customers and their CMs. Our analysis uses dyadic data gathered from brand‐owning firms and their CMs representing 95 contract manufacturing relationships in Food and Drug Administration (FDA)‐regulated industries. The results indicate that more severe external failure penalties correspond to a lower use of facility audits (i.e., they are substitutes‐in‐use). We also find that both external failure penalties and facility audits have a unique positive effect on the CM's perception of relative quality importance. Finally, some evidence supports the hypothesis that each mechanism is more effective in the presence of the other (i.e., they are complements‐in‐effectiveness).

13.
Since the publication of Darwin's Origin of Species, a number of scholars have explored the possibility of expanding Darwinism beyond the domain of biology to fields of study as diverse as language, psychology, economics, behaviour and culture. In the last half century, some of these scholars have generalized Darwinian principles to study socio‐economic change, with developments being made in the study of technological innovation, organizational diversity, multi‐level co‐evolution, memetics and organizational change. However, these developments have been hampered not only by disagreement between the scholars themselves, but more broadly by criticisms from a diverse range of established scientific traditions within economics and organization science. In light of these developments, the aim of this paper is to provide a timely critical review of the use of the Generalized Darwinist approach to the study of socio‐economic change. In the process, key disagreements between the different conceptual and empirical approaches taken by scholars, and key criticisms against using a Generalized Darwinist approach are highlighted. Building on this review, the paper outlines some key challenges and opportunities facing the Generalized Darwinist approach in the study of technological innovation, organizational change and multi‐level co‐evolution. The paper concludes with outlines for future research, and in particular further conceptual and empirical developments.

14.
This article tries to clarify the potential role to be played by uncertainty theories such as imprecise probabilities, random sets, and possibility theory in the risk analysis process. Rather than setting an objective bounding analysis, in which only statistically founded probability distributions are taken into account, against a full‐fledged probabilistic approach that exploits subjective expert judgment, we advocate the idea that both analyses are useful and should be articulated with one another. Moreover, the idea that risk analysis under incomplete information is purely objective is misconceived. The use of uncertainty theories cannot be reduced to a choice between probability distributions and intervals. Indeed, they offer representation tools that are more expressive than either of these approaches and can capture expert judgments while remaining faithful to their limited precision. Consequences of this thesis are examined for uncertainty elicitation, propagation, and the decision‐making step.

15.
We study how professional players and college students play zero‐sum two‐person strategic games in a laboratory setting. We first ask professionals to play a 2 × 2 game that is formally identical to a strategic interaction situation that they face in their natural environment. Consistent with their behavior in the field, they play very close to the equilibrium of the game. In particular, (i) they equate their strategies' payoffs to the equilibrium ones and (ii) they generate sequences of choices that are serially independent. In sharp contrast, however, we find that college students play the game far from the equilibrium predictions. We then study the behavior of professional players and college students in the classic O'Neill 4 × 4 zero‐sum game, a game that none of the subjects has encountered previously, and find the same differences in the behavior of these two pools of subjects. The transfer of skills and experience from the familiar field to the unfamiliar laboratory observed for professional players is relevant to evaluate the circumstances under which behavior in a laboratory setting may be a reliable indicator of behavior in a naturally occurring setting. From a cognitive perspective, it is useful for research on recognition processes, intuition, and similarity as a basis for inductive reasoning.

16.

We begin by juxtaposing the pervasive presence of technology in organizational work with its absence from the organization studies literature. Our analysis of four leading journals in the field confirms that over 95% of the articles published in top management research outlets do not take into account the role of technology in organizational life. We then examine the research that has been done on technology, and categorize this literature into two research streams according to their view of technology: discrete entities or mutually dependent ensembles. For each stream, we discuss three existing reviews spanning the last three decades of scholarship to highlight that while there have been many studies and approaches to studying organizational interactions and implications of technology, empirical research has produced mixed and often‐conflicting results. Going forward, we suggest that further work is needed to theorize the fusion of technology and work in organizations, and that additional perspectives are needed to add to the palette of concepts in use. To this end, we identify a promising emerging genre of research that we refer to under the umbrella term: sociomateriality. Research framed according to the tenets of a sociomaterial approach challenges the deeply taken‐for‐granted assumption that technology, work, and organizations should be conceptualized separately, and advances the view that there is an inherent inseparability between the technical and the social. We discuss the intellectual motivation for proposing a sociomaterial research approach and point to some common themes evident in recent studies. We conclude by suggesting that a reconsideration of conventional views of technology may help us more effectively study and understand the multiple, emergent, and dynamic sociomaterial configurations that constitute contemporary organizational practices.

17.
In many financial markets, dealers have the advantage of observing the orders of their customers. To quantify the economic benefit that dealers derive from this advantage, we study detailed data from Canadian Treasury auctions, where dealers observe customer bids while preparing their own bids. In this setting, dealers can use information on customer bids to learn about (i) competition, that is, the distribution of competing bids in the auction, and (ii) fundamentals, that is, the ex post value of the security being auctioned. We devise formal hypothesis tests for both sources of informational advantage. In our data, we do not find evidence that dealers are learning about fundamentals. We find that the “information about competition” contained in customer bids accounts for 13–27% of dealers' expected profits.

18.
Today, chemical risk and safety assessments rely heavily on model‐based estimation of environmental fate. The key compound‐related properties in such models describe partitioning and reactivity. Uncertainty in determining these properties can be separated into random and systematic (incompleteness) components, which require different types of representation. Here, we evaluate two approaches that can also handle systematic errors: fuzzy arithmetic and probability bounds analysis. When a best estimate (mode) and a range can be computed for an input parameter, it is possible to characterize the uncertainty with a triangular fuzzy number (possibility distribution) or a corresponding probability box bounded by two uniform distributions. We use a five‐compartment Level I fugacity model and reported empirical data from the literature for three well‐known environmental pollutants (benzene, pyrene, and DDT) as illustrative cases for this evaluation. Propagation of uncertainty by discrete probability calculus or interval arithmetic can be done at a low computational cost and gives maximum flexibility in applying different approaches. Our evaluation suggests that the difference between fuzzy arithmetic and probability bounds analysis is small, at least for this specific case. The fuzzy arithmetic approach can, however, be regarded as less conservative than probability bounds analysis if the assumption of independence is removed. Both approaches are sensitive to repeated parameters that may inflate the uncertainty estimate. Uncertainty described by probability boxes was therefore also propagated through the model by Monte Carlo simulation to show how this problem can be avoided.
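Fuzzy-arithmetic propagation of the kind described above can be sketched with alpha-cut interval arithmetic on triangular fuzzy numbers. This is a minimal illustration, not the paper's fugacity model: the parameter names and (min, mode, max) values below are invented, and only multiplication of two independent quantities is shown.

```python
# A triangular fuzzy number is given as (min, mode, max). An alpha-cut is the
# interval of values with membership >= alpha; propagating alpha-cuts with
# interval arithmetic propagates the fuzzy uncertainty through the model.
def alpha_cut(tri, alpha):
    lo, mode, hi = tri
    return (lo + alpha * (mode - lo), hi - alpha * (hi - mode))

def mul_intervals(a, b):
    # For intervals, the product's bounds lie among the four corner products.
    corners = [a[0] * b[0], a[0] * b[1], a[1] * b[0], a[1] * b[1]]
    return (min(corners), max(corners))

partition = (0.8, 1.0, 1.3)   # hypothetical partitioning-related property
emission = (5.0, 10.0, 12.0)  # hypothetical emission rate
cut = mul_intervals(alpha_cut(partition, 0.5), alpha_cut(emission, 0.5))
print(cut)  # interval for the product at membership level 0.5
```

Repeating this for a grid of alpha levels reconstructs the full fuzzy result; note that if the same parameter appeared twice in the expression, naive interval arithmetic would inflate the range, which is the repeated-parameter problem the abstract mentions.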

19.
Adopting a multi-stakeholder perspective on brand management, this paper discusses different methodological approaches that allow for a cross-stakeholder evaluation of the associations a brand triggers. Our main contribution is the proposal and illustration of a Venn-diagram approach as a simple-to-implement, yet insightful methodology to visualize findings from free association questions. This approach helps brand management understand and compare the associations attached to a brand by multiple stakeholders and their degree of match with management-desired brand associations. We illustrate the managerial relevance of this approach with results from an international study comparing brand associations desired by the management of a company with brand associations elicited from customers and employees, with approximately 1,500 respondents in each group. For the particular case investigated, we find that management-desired associations may not (yet) be top-of-mind for customers, employees or both groups, while these groups hold (and partly share) associations not desired by the organization. The findings also show that specific types of associations are more likely to be top-of-mind with multiple stakeholders than others. We discuss how brand management should use the insights gained via this Venn-diagram approach in their brand-building efforts.
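The regions of such a Venn diagram are just set intersections and differences over the elicited association lists. The sketch below is illustrative only: the association labels are invented, not drawn from the study's data.

```python
# Hypothetical elicited brand associations per stakeholder group.
desired = {"innovative", "reliable", "global", "sustainable"}    # management-desired
customers = {"reliable", "expensive", "global"}
employees = {"reliable", "innovative", "bureaucratic"}

matched_by_all = desired & customers & employees    # desired and top-of-mind for both groups
undesired_held = (customers | employees) - desired  # held by stakeholders but not desired
not_yet_landed = desired - (customers | employees)  # desired but absent from both groups

print(matched_by_all)          # {'reliable'}
print(sorted(undesired_held))  # ['bureaucratic', 'expensive']
print(sorted(not_yet_landed))  # ['sustainable']
```

Each printed set corresponds to one region of the three-circle diagram, which is what makes the visualization readable at a glance for brand managers.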

20.
Yifan Zhang. Risk Analysis, 2013, 33(1): 109–120
Expert judgment (or expert elicitation) is a formal process for eliciting judgments from subject‐matter experts about the value of a decision‐relevant quantity. Judgments in the form of subjective probability distributions are obtained from several experts, raising the question of how best to combine information from multiple experts. A number of algorithmic approaches have been proposed, of which the most commonly employed is the equal‐weight combination (the average of the experts’ distributions). We evaluate the properties of five combination methods (equal‐weight, best‐expert, performance, frequentist, and copula) using simulated expert‐judgment data for which we know the process generating the experts’ distributions. We examine cases in which two well‐calibrated experts are of equal or unequal quality and their judgments are independent, positively dependent, or negatively dependent. In this setting, the copula, frequentist, and best‐expert approaches perform better and the equal‐weight combination method performs worse than the alternative approaches.
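The equal-weight combination that serves as the baseline above is simply a pointwise average of the experts' distributions. The sketch below illustrates this on densities evaluated over a shared grid; the density values are invented for illustration.

```python
# Equal-weight linear opinion pool: average the experts' density values
# pointwise on a common grid. If each input density integrates to 1,
# so does the pooled mixture.
def equal_weight_pool(densities):
    return [sum(vals) / len(vals) for vals in zip(*densities)]

expert_1 = [0.0, 0.2, 0.6, 0.2, 0.0]  # hypothetical density values on a grid
expert_2 = [0.1, 0.3, 0.3, 0.2, 0.1]
pooled = equal_weight_pool([expert_1, expert_2])
print(pooled)  # pointwise mixture of the two distributions
```

The performance-weighted and copula methods studied in the article generalize this by replacing the uniform weights with calibration-based weights or by modeling dependence between experts explicitly.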


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号