Similar Documents
20 similar documents found (search time: 500 ms)
1.
We show that a simple “reputation‐style” test can always identify which of two experts is informed about the true distribution. The test presumes no prior knowledge of the true distribution, achieves any desired degree of precision in some fixed finite time, and does not use “counterfactual” predictions. Our analysis capitalizes on a result of Fudenberg and Levine (1992) on the rate of convergence of supermartingales. We use our setup to shed some light on the apparent paradox that a strategically motivated expert can ignorantly pass any test. We point out that this paradox arises because in the single‐expert setting, any mixed strategy for Nature over distributions is reducible to a pure strategy. This eliminates any meaningful sense in which Nature can randomize. Comparative testing reverses the impossibility result because the presence of an expert who knows the realized distribution eliminates the reducibility of Nature's compound lotteries.
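As an illustration only (the paper's actual construction rests on supermartingale convergence rates and is not reproduced here), one natural comparative test of this flavor scores each expert's forecasts by cumulative log-likelihood on the realized outcomes and, after a fixed finite horizon, names the higher-scoring expert as informed. All data and parameters below are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)

def comparative_test(p_a, p_b, outcomes):
    """Score each expert's per-period probability forecasts for a binary
    event by cumulative log-likelihood; declare the higher scorer informed."""
    ll = lambda p: np.sum(np.where(outcomes == 1, np.log(p), np.log1p(-p)))
    return "A" if ll(p_a) > ll(p_b) else "B"

T = 200                                      # fixed, finite testing horizon
p_true = rng.uniform(0.2, 0.8, size=T)       # Nature's realized distribution
outcomes = (rng.random(T) < p_true).astype(int)

informed = p_true                            # expert A knows the distribution
ignorant = rng.uniform(0.05, 0.95, size=T)   # expert B forecasts blindly

print(comparative_test(informed, ignorant, outcomes))  # "A" with high probability
```

Note that this test uses only forecasts for realized periods, never counterfactual predictions, consistent with the property the abstract highlights.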

2.
The difficulties in properly anticipating key economic variables may encourage decision makers to rely on experts' forecasts. Professional forecasters, however, may not be reliable and so their forecasts must be empirically tested. This may induce experts to forecast strategically in order to pass the test. A test can be ignorantly passed if a false expert, with no knowledge of the data‐generating process, can pass the test. Many tests that are unlikely to reject correct forecasts can be ignorantly passed. Tests that cannot be ignorantly passed do exist, but these tests must make use of predictions contingent on data not yet observed at the time the forecasts are rejected. Such tests cannot be run if forecasters report only the probability of the next period's events on the basis of the actually observed data. This result shows that it is difficult to dismiss false, but strategic, experts who know how theories are tested. This result also shows an important role that can be played by predictions contingent on data not yet observed.

3.
Elicitation of expert opinion is important for risk analysis when only limited data are available. Expert opinion is often elicited in the form of subjective confidence intervals; however, these are prone to substantial overconfidence. We investigated the influence of elicitation question format, in particular the number of steps in the elicitation procedure. In a 3‐point elicitation procedure, an expert is asked for a lower limit, upper limit, and best guess, the two limits creating an interval of some assigned confidence level (e.g., 80%). In our 4‐step interval elicitation procedure, experts were also asked for a realistic lower limit, upper limit, and best guess, but no confidence level was assigned; the fourth step was to rate their anticipated confidence in the interval produced. In our three studies, experts made interval predictions of rates of infectious diseases (Study 1, n = 21 and Study 2, n = 24: epidemiologists and public health experts), or marine invertebrate populations (Study 3, n = 34: ecologists and biologists). We combined the results from our studies using meta‐analysis, which found average overconfidence of 11.9%, 95% CI [3.5, 20.3] (a hit rate of 68.1% for 80% intervals)—a substantial decrease in overconfidence compared with previous studies. Studies 2 and 3 suggest that the 4‐step procedure is more likely to reduce overconfidence than the 3‐point procedure (Cohen's d = 0.61, [0.04, 1.18]).
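For concreteness, overconfidence here is the assigned confidence level minus the observed hit rate (e.g., 80% − 68.1% = 11.9% in the meta-analysis above). A minimal sketch of that scoring, with made-up interval data:

```python
import numpy as np

def hit_rate(lower, upper, truth):
    """Fraction of elicited intervals that contain the realized value."""
    lower, upper = np.asarray(lower), np.asarray(upper)
    return float(np.mean((lower <= truth) & (truth <= upper)))

def overconfidence(lower, upper, truth, assigned=0.80):
    """Assigned confidence minus observed hit rate; positive = overconfident."""
    return assigned - hit_rate(lower, upper, truth)

# Hypothetical 80% intervals from 10 experts for a quantity realized at 12.0
lo = [8, 10, 11, 9, 11.5, 10, 13, 7, 11, 12.5]
hi = [11, 13, 14, 15, 11.9, 16, 15, 10, 13, 14]
print(overconfidence(lo, hi, truth=12.0))   # 0.80 - 0.50 = 0.30 here
```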

4.
Risk Analysis, 2018, 38(4): 666–679
We test here the risk communication proposition that explicit expert acknowledgment of uncertainty in risk estimates can enhance trust and other reactions. We manipulated such a scientific uncertainty message, accompanied by probabilities (20%, 70%, implicit [“will occur”] 100%) and time periods (10 or 30 years) in major (magnitude ≥ 8) earthquake risk estimates to test potential effects on residents potentially affected by seismic activity on the San Andreas fault in the San Francisco Bay Area (n = 750). The uncertainty acknowledgment increased belief that these specific experts were more honest and open, and led to statistically (but not substantively) significant increases in trust in seismic experts generally, but only for the 20% probability (vs. certainty) and the shorter (vs. longer) time period. The acknowledgment did not change judged risk, preparedness intentions, or mitigation policy support. Probability effects independent of the explicit admission of expert uncertainty were also insignificant except for judged risk, which rose or fell slightly depending upon the measure of judged risk used. Overall, both qualitative expressions of uncertainty and quantitative probabilities had limited effects on public reaction. These results imply that both theoretical arguments for positive effects, and practitioners’ potential concerns for negative effects, of uncertainty expression may have been overblown. There may be good reasons to still acknowledge experts’ uncertainties, but those merit separate justification and their own empirical tests.

5.
Yifan Zhang. Risk Analysis, 2013, 33(1): 109–120
Expert judgment (or expert elicitation) is a formal process for eliciting judgments from subject‐matter experts about the value of a decision‐relevant quantity. Judgments in the form of subjective probability distributions are obtained from several experts, raising the question how best to combine information from multiple experts. A number of algorithmic approaches have been proposed, of which the most commonly employed is the equal‐weight combination (the average of the experts’ distributions). We evaluate the properties of five combination methods (equal‐weight, best‐expert, performance, frequentist, and copula) using simulated expert‐judgment data for which we know the process generating the experts’ distributions. We examine cases in which two well‐calibrated experts are of equal or unequal quality and their judgments are independent, positively or negatively dependent. In this setting, the copula, frequentist, and best‐expert approaches perform better and the equal‐weight combination method performs worse than the alternative approaches.
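A minimal sketch of the equal-weight combination named above — the linear opinion pool, i.e., the pointwise average of the experts' densities — alongside a generic weighted pool of the kind performance-based methods produce. The two normal distributions are illustrative, not the paper's simulation design:

```python
import numpy as np
from scipy import stats

def equal_weight_pool(expert_dists, x):
    """Equal-weight combination (linear opinion pool): the pointwise
    average of the experts' probability densities."""
    return np.mean([d.pdf(x) for d in expert_dists], axis=0)

def weighted_pool(expert_dists, weights, x):
    """Generic weighted pool; performance-based methods differ mainly in
    how the weights are derived (e.g., from calibration scores)."""
    w = np.asarray(weights, dtype=float)
    w /= w.sum()
    return sum(wi * d.pdf(x) for wi, d in zip(w, expert_dists))

experts = [stats.norm(10, 2), stats.norm(12, 3)]   # two experts' judgments
x = np.linspace(0, 25, 2001)
pooled = equal_weight_pool(experts, x)
print(float(pooled.sum() * (x[1] - x[0])))          # ≈ 1.0: the pool is itself a density
```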

6.
This article reports on a study to quantify expert beliefs about the explosion probability of unexploded ordnance (UXO). Some 1,976 sites at closed military bases in the United States are contaminated with UXO and are slated for cleanup, at an estimated cost of $15–140 billion. Because no available technology can guarantee 100% removal of UXO, information about explosion probability is needed to assess the residual risks of civilian reuse of closed military bases and to make decisions about how much to invest in cleanup. This study elicited probability distributions for the chance of UXO explosion from 25 experts in explosive ordnance disposal, all of whom have had field experience in UXO identification and deactivation. The study considered six different scenarios: three different types of UXO handled in two different ways (one involving children and the other involving construction workers). We also asked the experts to rank by sensitivity to explosion 20 different kinds of UXO found at a case study site at Fort Ord, California. We found that the experts do not agree about the probability of UXO explosion, with significant differences among experts in their mean estimates of explosion probabilities and in the amount of uncertainty that they express in their estimates. In three of the six scenarios, the divergence was so great that the average of all the expert probability distributions was statistically indistinguishable from a uniform (0, 1) distribution—suggesting that the sum of expert opinion provides no information at all about the explosion risk. The experts' opinions on the relative sensitivity to explosion of the 20 UXO items also diverged. The average correlation between rankings of any pair of experts was 0.41, which, statistically, is barely significant (p = 0.049) at the 95% confidence level. Thus, one expert's rankings provide little predictive information about another's rankings. The lack of consensus among experts suggests that empirical studies are needed to better understand the explosion risks of UXO.
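The inter-expert agreement figure quoted above (average pairwise rank correlation of 0.41) can be computed as the mean Spearman correlation over all pairs of experts' rankings. A sketch with hypothetical rankings — the computation, not the data, is the point:

```python
from itertools import combinations

import numpy as np
from scipy.stats import spearmanr

def mean_pairwise_agreement(rankings):
    """Average Spearman rank correlation over all pairs of experts.
    `rankings` is an (n_experts, n_items) array of ranks."""
    rhos = [spearmanr(a, b)[0] for a, b in combinations(rankings, 2)]
    return float(np.mean(rhos))

rng = np.random.default_rng(1)
# Hypothetical: 25 experts each rank 20 UXO items by sensitivity to explosion
rankings = np.array([rng.permutation(20) + 1 for _ in range(25)])
print(mean_pairwise_agreement(rankings))    # near 0 for unrelated rankings
```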

7.
Expert elicitations are now frequently used to characterize uncertain future technology outcomes. However, their usefulness is limited, in part because: estimates across studies are not easily comparable; choices in survey design and expert selection may bias results; and overconfidence is a persistent problem. We provide quantitative evidence of how these choices affect experts’ estimates. We standardize data from 16 elicitations, involving 169 experts, on the 2030 costs of five energy technologies: nuclear, biofuels, bioelectricity, solar, and carbon capture. We estimate determinants of experts’ confidence using survey design, expert characteristics, and public R&D investment levels on which the elicited values are conditional. Our central finding is that when experts respond to elicitations in person (vs. online or mail) they ascribe lower confidence (larger uncertainty) to their estimates, but more optimistic assessments of best‐case (10th percentile) outcomes. The effects of expert affiliation and country of residence vary by technology, but in general: academics and public‐sector experts express lower confidence than private‐sector experts; and E.U. experts are more confident than U.S. experts. Finally, extending previous technology‐specific work, higher R&D spending increases experts’ uncertainty rather than resolves it. We discuss ways in which these findings should be seriously considered in interpreting the results of existing elicitations and in designing new ones.

8.
Listeria monocytogenes is among the foodborne pathogens with the highest death toll in the United States. Ready‐to‐eat foods contaminated at retail are an important source of infection. Environmental sites in retail deli operations can be contaminated. However, commonly contaminated sites are unlikely to come into direct contact with food and the public health relevance of environmental contamination has remained unclear. To identify environmental sites that may pose a considerable cross‐contamination risk, to elucidate potential transmission pathways, and to identify knowledge gaps, we performed a structured expert elicitation of 41 experts from state regulatory agencies and the food retail industry with practical experience in retail deli operations. Following the “Delphi” method, the elicitation was performed in three consecutive steps: questionnaire, review and discussion of results, second questionnaire. Hands and gloves were identified as important potential contamination sources. However, bacterial transfers to and from hands or gloves represented a major data gap. Experts agreed about transfer probabilities from cutting boards, scales, deli cases, and deli preparation sinks to product, and about transfer probabilities from floor drains, walk‐in cooler floors, and knife racks to food contact surfaces. Comparison of experts' opinions to observational data revealed a tendency among experts with certain demographic characteristics and professional opinions to overestimate prevalence. Experts’ votes clearly clustered into separate groups not defined by place of employment, even though industry experts may have been somewhat overrepresented in one cluster. Overall, our study demonstrates the value and caveats of expert elicitation to identify data gaps and prioritize research efforts.

9.
Good policy making should be based on available scientific knowledge. Sometimes this knowledge is well established through research, but often scientists must simply express their judgment, and this is particularly so in risk scenarios that are characterized by high levels of uncertainty. Usually in such cases, the opinions of several experts will be sought in order to pool knowledge and reduce error, raising the question of whether individual expert judgments should be given different weights. We argue—against the commonly advocated “classical method”—that no significant benefits are likely to accrue from unequal weighting in mathematical aggregation. Our argument hinges on the difficulty of constructing reliable and valid measures of substantive expertise upon which to base weights. Practical problems associated with attempts to evaluate experts are also addressed. While our discussion focuses on one specific weighting scheme that is currently gaining in popularity for expert knowledge elicitation, our general thesis applies to externally imposed unequal weighting schemes more generally.

10.
An important requisite for improving risk communication practice related to contentious environmental issues is having a better theoretical understanding of how risk perceptions function in real‐world social systems. Our study applied Scherer and Cho's social network contagion theory of risk perception (SNCTRP) to cormorant management (a contentious environmental management issue) in the Great Lakes Basin to: (1) assess contagion effects on cormorant‐related risk perceptions and individual factors believed to influence those perceptions and (2) explore the extent of social contagion in a full network (consisting of interactions between and among experts and laypeople) and three “isolated” models separating different types of interactions from the full network (i.e., expert‐to‐expert, layperson‐to‐layperson, and expert‐to‐layperson). We conducted interviews and administered questionnaires with experts (e.g., natural resource professionals) and laypeople (e.g., recreational and commercial anglers, business owners, bird enthusiasts) engaged in cormorant management in northern Lake Huron (n = 115). Our findings generally support the SNCTRP; however, the scope and scale of social contagion varied considerably based on the variables (e.g., individual risk perception factors), actors (i.e., experts or laypeople), and interactions of interest. Contagion effects were identified more frequently, and were stronger, in the models containing interactions between experts and laypeople than in those models containing only interactions among experts or laypeople.

11.
Quality issues in milk—arising primarily from deliberate adulteration by producers—have been reported in several developing countries. In the milk supply chain, a station buys raw milk from a number of producers, mixes the milk and sells it to a firm (that then sells the processed milk to end consumers). We study a non‐cooperative game between a station and a population of producers. Apart from penalties on proven low‐quality producers, two types of incentives are analyzed: confessor rewards for low‐quality producers who confess and quality rewards for producers of high‐quality milk. Contrary to our expectations, whereas (small) confessor rewards can help increase both the quality of milk and the station's profit, quality rewards can be detrimental. We examine two structures based on the ordering of individual and mixed testing of milk: pre‐mixed individual testing (first test a fraction of producers individually and then [possibly] perform a mixed test on the remaining producers) and post‐mixed individual testing (first test the mixed milk from all producers and then test a fraction of producers individually). Whereas pre‐mixed individual testing can be socially harmful, a combination of post‐mixed individual testing and other incentives achieves a desirable outcome: all producers supply high‐quality milk with only one mixed test and no further testing by the station.
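A toy simulation of the two testing orderings, assuming only that a mixed test fails whenever any contributing producer supplied low-quality milk; the paper's game-theoretic incentives and equilibrium behavior are not modeled here:

```python
import numpy as np

def pre_mixed(quality, frac, rng):
    """Pre-mixed: individually test a random fraction of producers first,
    then one mixed test on the remaining producers' milk."""
    n = len(quality)
    sample = rng.choice(n, size=int(frac * n), replace=False)
    rest = np.setdiff1d(np.arange(n), sample)
    tests = len(sample) + 1                    # individual tests + one mixed test
    caught = int((~quality[sample]).sum())
    mix_flagged = bool((~quality[rest]).any())
    return tests, caught, mix_flagged

def post_mixed(quality, frac, rng):
    """Post-mixed: one mixed test on all producers first; individually
    test a random fraction only if the mixed test fails."""
    n = len(quality)
    tests, caught = 1, 0
    if (~quality).any():                       # mixed test fails
        sample = rng.choice(n, size=int(frac * n), replace=False)
        tests += len(sample)
        caught = int((~quality[sample]).sum())
    return tests, caught

rng = np.random.default_rng(2)
quality = rng.random(100) > 0.1                # True = high quality; ~10% adulterate
print(pre_mixed(quality, 0.2, rng))
print(post_mixed(quality, 0.2, rng))           # only one test if all supply high quality
```

The desirable outcome the abstract describes corresponds to the post-mixed branch never triggering: all producers supply high quality, so the single mixed test suffices.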

12.
The choice of performance measure has long been a difficult issue facing researchers. This article investigates the comparability of four common measures of acquisition performance: cumulative abnormal returns, managers' assessments, divestment data and expert informants' assessments. Independently each of these measures indicated a mean acquisition success rate of between 44% and 56%, within a sample of British cross‐border acquisitions. However, with the exception of a positive relationship between managers' and expert informants' subjective assessments, no significant correlation was found between the performance data generated by the alternative metrics. In particular, ex‐ante capital market reactions to an acquisition announcement exhibited little relation to corporate managers' ex‐post assessment. This is seen to reflect the information asymmetry that can exist between investors and company management, particularly regarding implementation aspects. Overall, the results suggest that future acquisitions studies should consider employing multiple performance measures in order to gain a holistic view of outcome, while in the longer term, opportunities remain to identify and refine improved metrics.

13.
I recently discussed pitfalls in attempted causal inference based on reduced‐form regression models. I used as motivation a real‐world example from a paper by Dr. Sneeringer, which interpreted a reduced‐form regression analysis as implying the startling causal conclusion that “doubling of [livestock] production leads to a 7.4% increase in infant mortality.” This conclusion is based on: (A) fitting a reduced‐form regression model to aggregate (e.g., county‐level) data; and (B) (mis)interpreting a regression coefficient in this model as a causal coefficient, without performing any formal statistical tests for potential causation (such as conditional independence, Granger‐Sims, or path analysis tests). Dr. Sneeringer now adds comments that confirm and augment these deficiencies, while advocating methodological errors that, I believe, risk analysts should avoid if they want to reach logically sound, empirically valid, conclusions about cause and effect. She explains that, in addition to (A) and (B) above, she also performed other steps such as (C) manually selecting specific models and variables and (D) assuming (again, without testing) that hand‐picked surrogate variables are valid (e.g., that log‐transformed income is an adequate surrogate for poverty). In her view, these added steps imply that “critiques of A and B are not applicable” to her analysis and that therefore “a causal argument can be made” for “such a strong, robust correlation” as she believes her regression coefficient indicates. However, multiple wrongs do not create a right. Steps (C) and (D) exacerbate the problem of unjustified causal interpretation of regression coefficients, without rendering irrelevant the fact that (A) and (B) do not provide evidence of causality. This reply focuses on whether any statistical techniques can produce the silk purse of a valid causal inference from the sow's ear of a reduced‐form regression analysis of ecological data. We conclude that Dr. Sneeringer's analysis provides no valid indication that air pollution from livestock operations causes any increase in infant mortality rates. More generally, reduced‐form regression modeling of aggregate population data—no matter how it is augmented by fitting multiple models and hand‐selecting variables and transformations—is not adequate for valid causal inference about health effects caused by specific, but unmeasured, exposures.
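A small simulation (mine, not from either paper) makes the underlying point concrete: when an unmeasured confounder drives both exposure and outcome, a reduced-form regression yields a strong, “robust” slope even though the true causal effect is zero, and a conditional-independence-style check — here, residualizing both variables on the confounder — makes it vanish:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 5000
poverty = rng.normal(size=n)                    # unobserved confounder
livestock = 0.8 * poverty + rng.normal(size=n)  # exposure driven by confounder
mortality = 0.8 * poverty + rng.normal(size=n)  # outcome driven by confounder only

# Reduced-form regression of outcome on exposure: a strong "significant" slope
naive = np.polyfit(livestock, mortality, 1)[0]
print(f"naive slope: {naive:.2f}")              # ~0.39, despite zero causal effect

# Conditioning on the confounder: residualize both variables on it
r_liv = livestock - np.polyval(np.polyfit(poverty, livestock, 1), poverty)
r_mor = mortality - np.polyval(np.polyfit(poverty, mortality, 1), poverty)
adjusted = np.polyfit(r_liv, r_mor, 1)[0]
print(f"adjusted slope: {adjusted:.2f}")        # ~0: the "effect" disappears
```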

14.
Building models of expert decision-making behavior from examples of experts’ decisions continues to receive considerable research attention. In the 1960s and 1970s, linear models derived by statistical methods were studied extensively. More recently, rule-based expert systems derived by induction algorithms have been the focus of attention. Few studies compare the two approaches. This paper reports on a study that compared linear models derived by logistic regression with rule-based systems produced by two induction algorithms—ID3 and the genetic algorithm. The techniques performed comparably in modeling the experts at one task, graduate admissions, but differed significantly at a second task, bidder selection.
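In modern terms, this comparison pits a logistic regression against an entropy-based decision tree (used below as a stand-in for ID3; scikit-learn's tree is CART with an entropy criterion, not ID3 itself). A sketch on synthetic “admissions” decisions — the data, features, and weights are hypothetical:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(4)
# Hypothetical admissions data: three applicant attributes, plus the
# expert's (noisy, roughly linear) accept/reject decisions
X = rng.normal(size=(300, 3))
y = ((X @ np.array([1.0, 0.8, 0.4]) + 0.5 * rng.normal(size=300)) > 0).astype(int)

models = {
    "logistic regression": LogisticRegression(),
    "entropy tree (ID3-style)": DecisionTreeClassifier(criterion="entropy", max_depth=3),
}
for name, model in models.items():
    acc = cross_val_score(model, X, y, cv=5).mean()
    print(f"{name}: {acc:.2f} agreement with the expert's decisions")
```

On a task where the expert's policy is roughly linear, as here, the linear model tends to match or beat the tree; rule-induction methods gain ground when the policy is configural.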

15.
Many environmental and risk management decisions are made jointly by technical experts and members of the public. Frequently, their task is to select from among management alternatives whose outcomes are subject to varying degrees of uncertainty. Although it is recognized that how this uncertainty is interpreted can significantly affect decision‐making processes and choices, little research has examined similarities and differences between expert and public understandings of uncertainty. We present results from a web‐based survey that directly compares expert and lay interpretations and understandings of different expressions of uncertainty in the context of evaluating the consequences of proposed environmental management actions. Participants responded to two hypothetical but realistic scenarios involving trade‐offs between environmental and other objectives and were asked a series of questions about their comprehension of the uncertainty information, their preferred choice among the alternatives, and the associated difficulty and amount of effort. Results demonstrate that experts and laypersons tend to use presentations of numerical ranges and evaluative labels differently; interestingly, the observed differences between the two groups were not explained by differences in numeracy or concerns for the predicted environmental losses. These findings question many of the usual presumptions about how uncertainty should be presented as part of deliberative risk‐ and environmental‐management processes.

16.
Increasing evidence suggests that persistence of Listeria monocytogenes in food processing plants has been the underlying cause of a number of human listeriosis outbreaks. This study extracts criteria used by food safety experts in determining bacterial persistence in the environment, using retail delicatessen operations as a model. Using the Delphi method, we conducted an expert elicitation with 10 food safety experts from academia, industry, and government to classify L. monocytogenes persistence based on environmental sampling results collected over six months for 30 retail delicatessen stores. The results were modeled using variations of random forest, support vector machine, logistic regression, and linear regression; variable importance values of random forest and support vector machine models were consolidated to rank important variables in the experts’ classifications. The duration of subtype isolation ranked most important across all expert categories. Sampling site category also ranked high in importance and validation errors doubled when this covariate was removed. Support vector machine and random forest models successfully classified the data with average validation errors of 3.1% and 2.2% (n = 144), respectively. Our findings indicate that (i) the frequency of isolations over time and sampling site information are critical factors for experts determining subtype persistence, (ii) food safety experts from different sectors may not use the same criteria in determining persistence, and (iii) machine learning models have potential for future use in environmental surveillance and risk management programs. Future work is necessary to validate the accuracy of expert and machine classification against biological measurement of L. monocytogenes persistence.
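A sketch in the spirit of the study's modeling step: fit a random forest to expert persistence classifications and inspect variable importances. The features and labels below are synthetic stand-ins, not the study's data:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(5)
# Hypothetical features per (store, subtype) record: months the subtype was
# isolated, number of positive samples, and an encoded sampling-site category
n = 144
X = np.column_stack([
    rng.integers(1, 7, n),      # duration of subtype isolation (months)
    rng.integers(1, 10, n),     # number of positive samples
    rng.integers(0, 4, n),      # sampling-site category (encoded)
])
y = (X[:, 0] >= 4).astype(int)  # toy stand-in for experts' persistence calls

rf = RandomForestClassifier(n_estimators=500, oob_score=True, random_state=0)
rf.fit(X, y)
print("OOB accuracy:", round(rf.oob_score_, 3))
print("importances:", np.round(rf.feature_importances_, 2))  # duration dominates
```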

17.
Louis Anthony Cox, Jr. Risk Analysis, 2009, 29(8): 1062–1068
Risk analysts often analyze adversarial risks from terrorists or other intelligent attackers without mentioning game theory. Why? One reason is that many adversarial situations—those that can be represented as attacker‐defender games, in which the defender first chooses an allocation of defensive resources to protect potential targets, and the attacker, knowing what the defender has done, then decides which targets to attack—can be modeled and analyzed successfully without using most of the concepts and terminology of game theory. However, risk analysis and game theory are also deeply complementary. Game‐theoretic analyses of conflicts require modeling the probable consequences of each choice of strategies by the players and assessing the expected utilities of these probable consequences. Decision and risk analysis methods are well suited to accomplish these tasks. Conversely, game‐theoretic formulations of attack‐defense conflicts (and other adversarial risks) can greatly improve upon some current risk analyses that attempt to model attacker decisions as random variables or uncertain attributes of targets (“threats”) and that seek to elicit their values from the defender's own experts. Game theory models that clarify the nature of the interacting decisions made by attackers and defenders and that distinguish clearly between strategic choices (decision nodes in a game tree) and random variables (chance nodes, not controlled by either attacker or defender) can produce more sensible and effective risk management recommendations for allocating defensive resources than current risk scoring models. Thus, risk analysis and game theory are (or should be) mutually reinforcing.
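A minimal sketch of the sequential structure described above: the defender allocates a budget across targets, the attacker observes the allocation and strikes the target with the highest expected damage, so the defender minimizes the attacker's best response. The target values and the damage model (each defensive unit halves attack success, as a toy assumption) are illustrative only:

```python
from itertools import product

values = [10.0, 6.0, 3.0]   # attacker's payoff per target if undefended (hypothetical)
budget = 4                   # indivisible defensive units to allocate

def damage(alloc, target):
    """Expected damage at a target: each defensive unit halves success odds."""
    return values[target] * 0.5 ** alloc[target]

def best_allocation():
    """Defender's minimax: minimize the attacker's best response, since the
    attacker moves second with full knowledge of the allocation."""
    best = None
    for alloc in product(range(budget + 1), repeat=len(values)):
        if sum(alloc) != budget:
            continue
        worst = max(damage(alloc, t) for t in range(len(values)))
        if best is None or worst < best[1]:
            best = (alloc, worst)
    return best

alloc, worst = best_allocation()
print(alloc, worst)   # defense concentrates on high-value targets, e.g., (2, 1, 1)
```

Contrast this with risk-scoring approaches that fix attack probabilities in advance: here the attacker's choice is a decision node that responds to the defense, not a chance node.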

18.
Despite ambitious efforts in various fields of research over multiple decades, the goal of making academic research relevant to the practitioner remains elusive: theoretical and academic research interests do not seem to coincide with the interests of managerial practice. This challenge is more fundamental than knowledge transfer: it is one of diverging knowledge interests and means of knowledge production. In this article, we look at this fundamental challenge through the lens of design science, which is an approach aimed primarily at discovery and problem solving as opposed to accumulation of theoretical knowledge. We explore in particular the ways in which problem‐solving research and theory‐oriented academic research can complement one another. In operations management (OM) research, recognizing and building on this complementarity is especially crucial, because problem‐solving–oriented research produces the very artifacts (e.g., technologies) that empirical OM research subsequently evaluates in an attempt to build explanatory theory. It is indeed the practitioner—not the academic scientist—who engages in basic research in OM. This idiosyncrasy prompts the question: how can we enhance the cross‐fertilization between academic research and research practice to make novel theoretical insights and practical relevance complementary? This article proposes a design science approach to bridge practice to theory rather than theory to practice.

19.
We contrast two potential explanations of the substantial differences in entrepreneurial activity observed across geographical areas: entry costs and external effects. We extend the Lucas model of entrepreneurship to allow for heterogeneous entry costs and for externalities that shift the distribution of entrepreneurial talents. We show that these assumptions have opposite predictions on the relation between entrepreneurial activity and firm‐level TFP: with different entry costs, in areas with more entrepreneurs firms' average productivity should be lower; with heterogeneous external effects it should be higher. We test these implications on a sample of Italian firms and unambiguously reject the entry costs explanation in favor of the externalities explanation. We also investigate the sources of external effects, finding robust evidence that learning externalities are an important determinant of cross‐sectional differences in entrepreneurial activity.

20.
A dedicated subnetwork (DSN) refers to a subset of lanes, with associated loads, in a shipper's transportation network, for which resources—trucks, drivers, and other equipment—are exclusively assigned to accomplish shipping requirements. The resources assigned to a DSN are not shared with the rest of the shipper's network. Thus, a DSN is an autonomously operated subnetwork and, hence, can be subcontracted. We address a novel problem of extracting a DSN for outsourcing to one or more subcontractors, with the objective of maximizing the shipper's savings. In their pure form, the defining conditions of a DSN are often too restrictive to enable the extraction of a sizable subnetwork. We consider two notions—deadheading and lane‐sharing—that aid in improving the size of the DSN. We show that all the optimization problems involved are both strongly NP‐hard and APX‐hard, and demonstrate several polynomially solvable special cases arising from topological properties of the network and parametric relationships. Next, we develop a network‐flow‐based heuristic that provides near‐optimal solutions to practical instances in reasonable time. Finally, using a test bed based on data obtained from a national 3PL company, we demonstrate the substantial monetary impact of subcontracting a DSN and offer useful managerial insights.
