Similar Articles
20 similar articles found.
1.
In Study 1, different groups of female students were randomly assigned to one of four probabilistic information formats. Five different levels of probability of a genetic disease in an unborn child were presented to participants (within-subject factor). After the presentation of each probability level, participants were asked to indicate the level of pain they would accept to avoid the disease in their unborn child, their subjective evaluation of the disease risk, and their subjective evaluation of how worried they were by this risk. The results of Study 1 confirmed the hypothesis that an experience-based probability format decreases the subjective sense of worry about the disease, thus, presumably, weakening the tendency to overrate the probability of rare events. Study 2 showed that, for emotionally laden stimuli, the experience-based probability format resulted in higher sensitivity to probability variations than the other formats of probabilistic information. These advantages of the experience-based probability format are interpreted in terms of two systems of information processing (the rational-deliberative versus the affective-experiential) and the principle of stimulus-response compatibility.

2.
Communicating probability information about risks to the public is more difficult than might be expected. Many studies have examined this subject, so the resulting recommendations are scattered across various publications and diverse research fields and concern different presentation formats. An integration of empirical findings in one review would therefore be useful to describe the evidence base for communication about probability information and to present the recommendations that can be made so far. We categorized the studies by presentation format: frequencies, percentages, base rates and proportions, absolute and relative risk reduction, cumulative probabilities, verbal probability information, numerical versus verbal probability information, graphs, and risk ladders. We suggest several recommendations for these formats. Based on the results of our review, we show that the effects of presentation format depend not only on the type of format but also on the context in which the format is used. We argue that the presentation format has the strongest effect when the receiver processes probability information heuristically rather than systematically. We conclude that future research and risk communication practitioners should concentrate not only on the presentation format of the probability information but also on the situation in which the message is presented, as this may predict how people process the information and how this may influence their interpretation of the risk.

3.
The simplified Conjoint Expected Risk (CER) model by Holtgrave and Weber posits that perceived risk is a linear combination of the subjective judgments of the probabilities of harm, benefit, and status quo, and the expected harm and benefit of an activity. It modifies Luce and Weber's original CER model, which uses objective information to evaluate financial gambles, to accommodate activities such as health/technology activities where values of the model variables are subjective. If the simplified model is a valid modification of the original model, its performance should not be sensitive to the use of subjective information. However, because people may evaluate information differently when objective information is provided to them than when they generate information on their own, the performance of the simplified CER model may not be robust to the source of model-variable information. We compared the use of objective and subjective information, and results indicate that the estimates of the simplified CER model parameters and the proportion of variance in risk judgments accounted for by the model are similar under these two conditions. Thus, the simplified CER model is viable with activities for which harm and benefit information is subjective.
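As a reading aid, the linear form described in this abstract can be written out explicitly; the weights below are generic placeholders rather than the authors' notation:

```latex
R_{\text{perceived}} \;=\; w_0
  \;+\; w_1\,P(\text{harm}) \;+\; w_2\,P(\text{benefit}) \;+\; w_3\,P(\text{status quo})
  \;+\; w_4\,E[\text{harm}] \;+\; w_5\,E[\text{benefit}]
```

Here the probabilities and expectations are the respondent's subjective judgments (or, in the original CER model, objective quantities), and the weights are estimated from risk-judgment data, for example by linear regression.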

4.
Event-tree analysis with imprecise probabilities   (Total citations: 1; self-citations: 0; citations by others: 1)
You X, Tonon F. Risk Analysis, 2012, 32(2): 330-344
Novel methods are proposed for dealing with event-tree analysis under imprecise probabilities, where one can quantify chance or uncertainty without sharp numerical probabilities and express the available evidence as upper and lower previsions (or expectations) of gambles (or bounded real functions). Sets of upper and lower previsions generate a convex set of probability distributions (or measures), and any probability distribution in this convex set should be considered in the event-tree analysis. This article focuses on the calculation of upper and lower bounds of the prevision (or the probability) of some outcome at the bottom of the event tree. Three cases of given information/judgments on probabilities of outcomes are considered: (1) probabilities conditional on the occurrence of the event at the upper level; (2) total probabilities of occurrences, that is, not conditional on other events; (3) the combination of the previous two cases. Corresponding algorithms with imprecise probabilities under the three cases are explained and illustrated by simple examples.
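To make the bounding problem concrete, here is a minimal sketch, not the article's algorithms, for case (1): if each branch probability is only known to lie in an interval and the branch probabilities are assumed to vary independently within their intervals, then the probability of a bottom outcome is multilinear in those branch probabilities, so its exact bounds over the interval box are attained at the box's vertices. The two-level tree and the interval values are hypothetical.

```python
from itertools import product

# Hypothetical two-level event tree (not from the article):
# initiating event A occurs with probability p; given A, mitigation
# succeeds with probability q1, otherwise the severe outcome follows;
# given no A, a secondary path leads to the severe outcome with q2.
# P(severe) = p*(1 - q1) + (1 - p)*q2  -- multilinear in (p, q1, q2).

intervals = {
    "p":  (0.10, 0.30),   # elicited lower/upper bounds
    "q1": (0.60, 0.90),
    "q2": (0.01, 0.05),
}

def p_severe(p, q1, q2):
    return p * (1.0 - q1) + (1.0 - p) * q2

# A multilinear function over a box attains its extrema at the box's
# vertices, so enumerating endpoint combinations gives exact bounds.
values = [p_severe(*v) for v in product(*intervals.values())]
print(f"lower bound: {min(values):.4f}, upper bound: {max(values):.4f}")
```

Vertex enumeration grows exponentially with the number of interval-valued branches, and the credal set generated by general lower/upper previsions need not be a simple box, which is one motivation for the dedicated algorithms developed in the article.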

5.
Null events—not detecting a pernicious agent—are the basis for declaring the agent is absent. Repeated nulls strengthen confidence in the declaration. However, correlations between observations are difficult to assess in many situations and introduce uncertainty in interpreting repeated nulls. We quantify uncertain correlations using an info-gap model, which is an unbounded family of nested sets of possible probabilities. An info-gap model is nonprobabilistic and entails no assumption about a worst case. We then evaluate the robustness, to uncertain correlations, of estimates of the probability of a null event. This is then the basis for evaluating a nonprobabilistic robustness-based confidence interval for the probability of a null.

6.
Organizations in several domains, including national security intelligence, communicate judgments under uncertainty using verbal probabilities (e.g., likely) instead of numeric probabilities (e.g., 75% chance), despite research indicating that the former have variable meanings across individuals. In the intelligence domain, uncertainty is also communicated using terms such as low, moderate, or high to describe the analyst's confidence level. However, little research has examined how intelligence professionals interpret these terms and whether they prefer them to numeric uncertainty quantifiers. In two experiments (N = 481 and 624, respectively), uncertainty communication preferences of expert (n = 41 intelligence analysts in Experiment 1) and nonexpert intelligence consumers were elicited. We examined which format participants judged to be more informative and simpler to process. We further tested whether participants treated verbal probability and confidence terms as independent constructs and whether participants provided coherent numeric probability translations of verbal probabilities. Results showed that although most nonexperts favored the numeric format, experts were about equally split, and most participants in both samples regarded the numeric format as more informative. Experts and nonexperts consistently conflated probability and confidence. For instance, confidence intervals inferred from verbal confidence terms had a greater effect on the location of the estimate than on its width, contrary to normative expectation. Approximately one-fourth of experts and over one-half of nonexperts provided incoherent numeric probability translations for the terms likely and unlikely when the elicitation of best estimates and lower and upper bounds was briefly spaced by intervening tasks.

7.
Probability elicitation protocols are used to assess and incorporate subjective probabilities in risk and decision analysis. While most of these protocols use methods that focus on the precision of the elicited probabilities, the speed of the elicitation process has often been neglected. However, speed is also important, particularly when experts need to examine a large number of events on a recurrent basis. Furthermore, most existing elicitation methods are numerical in nature, but there are various reasons why an expert might refuse to give such precise ratio-scale estimates, even if highly numerate. This may occur, for instance, when there is a lack of sufficient hard evidence, when assessing very uncertain events (such as emergent threats), or when dealing with politicized topics (such as terrorism or disease outbreaks). In this article, we adopt an ordinal ranking approach from multicriteria decision analysis to provide a fast and nonnumerical probability elicitation process. Probabilities are subsequently approximated from the ranking by an algorithm based on the principle of maximum entropy, a rule compatible with the ordinal information provided by the expert. The method can elicit probabilities for a wide range of different event types, including new ways of eliciting probabilities for stochastically independent events and low-probability events. We use a Monte Carlo simulation to test the accuracy of the approximated probabilities and try the method in practice, applying it to a real-world risk analysis recently conducted for DEFRA (the U.K. Department for Environment, Food and Rural Affairs): the prioritization of animal health threats.
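A rough sketch of the underlying idea, not the authors' algorithm: among all probability vectors compatible with the expert's ordinal ranking, pick the one with maximum entropy. The extra judgment constraining the top-ranked event is a hypothetical illustration added here; with a pure ranking and no further information, the maximum-entropy solution is simply the uniform distribution.

```python
import numpy as np
from scipy.optimize import minimize

n = 4  # hypothetical events, ranked from most to least probable

def neg_entropy(p):
    # minimizing sum(p log p) maximizes Shannon entropy
    return np.sum(p * np.log(p))

constraints = [
    {"type": "eq", "fun": lambda p: np.sum(p) - 1.0},  # probabilities sum to one
    # ordinal information from the expert: p1 >= p2 >= p3 >= p4
    *[{"type": "ineq", "fun": lambda p, i=i: p[i] - p[i + 1]} for i in range(n - 1)],
    # illustrative extra judgment (hypothetical): the top-ranked event is at
    # least twice as likely as the second; without some such additional
    # information the maximum-entropy solution is just the uniform vector.
    {"type": "ineq", "fun": lambda p: p[0] - 2.0 * p[1]},
]

res = minimize(neg_entropy, x0=np.array([0.7, 0.1, 0.1, 0.1]),
               bounds=[(1e-9, 1.0)] * n, constraints=constraints, method="SLSQP")
print(np.round(res.x, 3))  # approximated probabilities consistent with the ranking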

8.
Despite the key role that subjective probabilities play in decisions made under conditions of uncertainty, little is known about the ability of probability assessors to develop these estimates. A literature survey is followed by a review of results from a continuing series of experiments designed to investigate the external accuracy of subjectively assessed probability distributions. Initial findings confirm that probability assessments provided by untrained assessors are of questionable value in predicting the distribution of actual outcomes of uncertain events. Particular difficulty is encountered when subjects attempt to quantify the extremes of their subjective distributions. The impact of extended assessor training and hypotheses regarding the effects of variation in the assessor's information level and the complexity of the assessment task are explored. Implications for applied decision making are drawn, and directions for future investigations are suggested.

9.
Subjects were instructed in the use of simple subjective probability and utility scales, and they were asked to actively role-play a decision maker in seven risk-dilemma situations. Each scenario provided subjects with specific subjective expected utility (SEU) information for both a certain and an uncertain decision alternative, but left out one critical SEU component. Subjects supplied either the lowest probability or the lowest utility of success that they would require before selecting the uncertain over the certain alternative in each dilemma. Three experiments examined: (a) the degree to which subjects' estimations deviated from the pattern predicted by SEU models; (b) differences in choice patterns induced by response format variations (e.g., probability vs. utility estimation); (c) the effects of sex of subject; and (d) the effects of the sex-role framing of the decision problems. Subjects generally chose in accord with SEU maximization principles and did so with decreasing deviations from theoretical values as practice over situations increased (Experiments I, II, and III). Decisions were initially more conservative on items requesting probability estimates (Experiment I), but this effect washed out over situations. Sex differences were revealed (Experiments I and III), but only in limited fashion. Rather, a replicable (Experiments I, II, and III) sex by sex-role appropriateness by response format interaction was found, in which females responded "rationally" under both probability and utility estimation conditions and under both role sets (male and female). Males, however, responded extremely conservatively under female-framed, probability estimate conditions. Subjects' choices were stable over a three-week interval (Experiment III).
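The critical value elicited from subjects corresponds to the standard SEU indifference condition, written here in generic notation (not the authors'):

```latex
p^{*}\,u(\text{success}) + \bigl(1 - p^{*}\bigr)\,u(\text{failure}) = u(\text{certain option})
\quad\Longrightarrow\quad
p^{*} = \frac{u(\text{certain option}) - u(\text{failure})}{u(\text{success}) - u(\text{failure})}
```

Stating any probability at or above p* before accepting the uncertain alternative is consistent with SEU maximization; the lowest-acceptable-utility questions solve the same equation for u(success) at a fixed probability.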

10.
Interruptions are a frequent occurrence in the work life of most decision makers. This paper investigated the influence of interruptions on different types of decision-making tasks and the ability of information presentation formats, an aspect of information systems design, to alleviate them. Results from the experimental study indicate that interruptions facilitate performance on simple tasks, while inhibiting performance on more complex tasks. Interruptions also influenced the relationship between information presentation format and the type of task performed: spatial presentation formats were able to mitigate the effects of interruptions while symbolic formats were not. The paper presents a broad conceptualization of interruptions and interprets the ramifications of the experimental findings within this conceptualization to develop a program for future research.

11.
To study people's processing of hurricane forecast advisories, we conducted a computer-based experiment that examined 11 research questions about the information seeking patterns of students assuming the role of a county emergency manager in a sequence of six hurricane forecast advisories for each of four different hurricanes. The results show that participants considered a variety of different sources of information—textual, graphic, and numeric—when tracking hurricanes. Click counts and click durations generally gave the same results but there were some significant differences. Moreover, participants' information search strategies became more efficient over forecast advisories and with increased experience tracking the four hurricanes. These changes in the search patterns from the first to the fourth hurricane suggest that the presentation of abstract principles in a training manual was not sufficient for them to learn how to track hurricanes efficiently but they were able to significantly improve their search efficiency with a modest amount (roughly an hour) of practice. Overall, these data indicate that information search patterns are complex and deserve greater attention in studies of dynamic decision tasks.

12.
In the quest to model various phenomena, the foundational importance of parameter identifiability to sound statistical modeling may be less well appreciated than goodness of fit. Identifiability concerns the quality of objective information in data to facilitate estimation of a parameter, while nonidentifiability means there are parameters in a model about which the data provide little or no information. In purely empirical models where parsimonious good fit is the chief concern, nonidentifiability (or parameter redundancy) implies overparameterization of the model. In contrast, nonidentifiability implies underinformativeness of available data in mechanistically derived models where parameters are interpreted as having strong practical meaning. This study explores illustrative examples of structural nonidentifiability and its implications using mechanistically derived models (for repeated presence/absence analyses and dose–response of Escherichia coli O157:H7 and norovirus) drawn from quantitative microbial risk assessment. Following algebraic proof of nonidentifiability in these examples, profile likelihood analysis and Bayesian Markov Chain Monte Carlo with uniform priors are illustrated as tools to help detect model parameters that are not strongly identifiable. It is shown that identifiability should be considered during experimental design and ethics approval to ensure generated data can yield strong objective information about all mechanistic parameters of interest. When Bayesian methods are applied to a nonidentifiable model, the subjective prior effectively fabricates information about any parameters about which the data carry no objective information. Finally, structural nonidentifiability can lead to spurious models that fit data well but can yield severely flawed inferences and predictions when they are interpreted or used inappropriately.
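As a toy illustration of the profile-likelihood diagnostic, not one of the article's QMRA models, consider an exponential dose-response whose rate is written as a product a·b: only the product enters the likelihood, so the profile over a is flat, flagging structural nonidentifiability. The data, doses, and parameter values below are made up.

```python
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.stats import binom

rng = np.random.default_rng(1)

# Hypothetical dose-response data: P(ill | dose) = 1 - exp(-r * dose),
# simulated with true r = 0.02 and 50 subjects per dose group.
doses = np.array([1.0, 5.0, 10.0, 50.0, 100.0])
n_subj = 50
ills = rng.binomial(n_subj, 1.0 - np.exp(-0.02 * doses))

def neg_loglik(a, b):
    # Over-parameterized model: r = a * b, so (a, b) is structurally
    # nonidentifiable -- only the product a*b enters the likelihood.
    p = np.clip(1.0 - np.exp(-a * b * doses), 1e-12, 1 - 1e-12)
    return -np.sum(binom.logpmf(ills, n_subj, p))

# Profile likelihood over a: optimize b for each fixed a.  A flat profile
# signals that the data carry no objective information about a itself.
for a in [0.5, 1.0, 2.0, 4.0]:
    prof = minimize_scalar(lambda b: neg_loglik(a, b), bounds=(1e-6, 1.0), method="bounded")
    print(f"a = {a:4.1f}  profiled -logL = {prof.fun:.3f}  (b-hat = {prof.x:.4f})")
```

Every fixed value of a reaches the same minimized negative log-likelihood, which is exactly the flat-profile signature the article uses to detect weakly or non-identifiable parameters.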

13.
The Constrained Extremal Distribution Selection Method   (Total citations: 5; self-citations: 0; citations by others: 5)
Engineering design and policy formulation often involve the assessment of the likelihood of future events, commonly expressed through a probability distribution. Determination of these distributions is based, when possible, on observational data. Unfortunately, these data are often incomplete, biased, and/or incorrect. These problems are exacerbated when policy formulation involves the risk of extreme events—situations of low likelihood and high consequences. Usually, observational data simply do not exist for such events. Therefore, determination of probabilities that characterize extreme events must utilize all available knowledge, be it subjective or observational, so as to most accurately reflect the likelihood of such events. Extending previous work on the statistics of extremes, the Constrained Extremal Distribution Selection Method assists in the selection of probability distributions that characterize the risk of extreme events, using expert opinion to constrain the feasible values of the parameters that define a distribution. An extremal distribution is then "fit" to observational data, subject to the selected parameters not violating any of these constraints. Using a random search technique (genetic algorithms), parameters are estimated that minimize a measure of fit between a hypothesized distribution and the observational data. The Constrained Extremal Distribution Selection Method is applied to a real-world policy problem faced by the U.S. Environmental Protection Agency. Selected distributions characterize the likelihood of extreme, fatal hazardous material accidents in the United States. These distributions are used to characterize the risk of large-scale accidents with numerous fatalities.
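A rough sketch of the general recipe, not the article's implementation: fit a hypothesized extremal distribution to data by minimizing a fit measure over a parameter region constrained by expert opinion. SciPy's differential evolution stands in for a genetic algorithm, a Gumbel (Type I extreme-value) distribution for the extremal family, and negative log-likelihood for the fit measure; the data and the expert bounds are hypothetical.

```python
import numpy as np
from scipy.optimize import differential_evolution
from scipy.stats import gumbel_r

rng = np.random.default_rng(7)

# Hypothetical annual-maximum consequence data (e.g., worst yearly accident sizes).
observations = gumbel_r.rvs(loc=20.0, scale=5.0, size=30, random_state=rng)

# Expert opinion constrains the feasible parameter region of the
# hypothesized extremal (Gumbel) distribution.
expert_bounds = [(15.0, 30.0),   # location
                 (2.0, 10.0)]    # scale

def fit_measure(params):
    loc, scale = params
    # Negative log-likelihood as the measure of fit to be minimized; the
    # evolutionary optimizer only proposes candidates inside the expert
    # bounds, so no constraint is ever violated.
    return -np.sum(gumbel_r.logpdf(observations, loc=loc, scale=scale))

result = differential_evolution(fit_measure, bounds=expert_bounds, seed=7)
loc_hat, scale_hat = result.x
print(f"constrained fit: loc = {loc_hat:.2f}, scale = {scale_hat:.2f}")
print(f"P(annual max > 40) = {gumbel_r.sf(40.0, loc=loc_hat, scale=scale_hat):.4f}")
```

The fitted distribution can then be used, as in the article's EPA application, to read off exceedance probabilities for consequence levels far beyond the observed record.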

14.
We examined the risk perception that is derived from hypothetical physician risk communications. Subjects (n = 217) completed a questionnaire on the Web for $3. Subjects were presented with four hypothetical cancer risk scenarios that included a physician risk communication in one of three risk communication formats: verbal only, verbal plus numeric probability as a percent, and verbal plus numeric probability as a fraction. In each scenario, subjects were asked to imagine themselves as the patient described and to state their perceived personal susceptibility to the cancer (i.e., risk perception) on a 0 to 100 scale, as well as responses to other measures. Subjects' risk perceptions were highly variable, spanning nearly the entire probability scale for each scenario, and the degree of variation was only slightly less in the risk communication formats in which a numeric statement of risk was provided. Subjects were more likely to overestimate than underestimate their risk relative to the stated risk in the numeric versions, and overestimation was associated with the belief that the physician minimized the risk so they wouldn't worry, innumeracy, and worry, as well as decisions about testing for the cancer. These results demonstrate significant gaps between the intended message and the message received in physician risk communications. Implications for medical decisions, patient distress, and future research are discussed.

15.
This article tries to clarify the potential role to be played by uncertainty theories such as imprecise probabilities, random sets, and possibility theory in the risk analysis process. Rather than opposing an objective bounding analysis, in which only statistically founded probability distributions are taken into account, to the full-fledged probabilistic approach that exploits expert subjective judgment, we advocate the idea that both analyses are useful and should be articulated with one another. Moreover, the idea that risk analysis under incomplete information is purely objective is misconceived. The use of uncertainty theories cannot be reduced to a choice between probability distributions and intervals. Indeed, they offer representation tools that are more expressive than either of these approaches and can capture expert judgments while remaining faithful to their limited precision. Consequences of this thesis are examined for uncertainty elicitation, propagation, and the decision-making step.

16.
Information format can influence the extent to which target audiences understand and respond to risk-related information. This study examined four elements of risk information presentation format. Using printed materials, we examined target audience perceptions about: (a) reading level; (b) use of diagrams vs. text; (c) commanding versus cajoling tone; and (d) use of qualitative vs. quantitative information presented in a risk ladder. We used the risk communication topic of human health concerns related to eating noncommercial Great Lakes fish affected by chemical contaminants. Results from the comparisons of specific communication formats indicated that multiple formats are required to meet the needs of a significant percent of anglers for three of the four format types examined. Advisory text should be reviewed to ensure the reading level is geared to abilities of the target audience. For many audiences, a combination of qualitative and quantitative information, and a combination of diagrams and text may be most effective. For most audiences, a cajoling rather than commanding tone better provides them with the information they need to make a decision about fish consumption. Segmenting audiences regarding information needs and communication formats may help clarify which approaches to take with each audience.

17.
In this paper, I take risk to mean a composite of the probability of an adverse event and the severity of the consequences of the event. I explore two issues in the economic valuation of changes in individual risks brought about by public policies. These are: (1) the relationship between the values of risk prevention (i.e., the lowering of the probabilities of adverse events) and risk reduction (i.e., the reduction of the severity of the consequences of adverse events); and (2) the relationship between ex ante and ex post measures of the value of changes in risk.

18.
Decision Sciences, 2017, 48(2): 307-335
A pervasive challenge for decision makers is evaluating data of varying form (e.g., quantitative vs. qualitative) and credibility in arriving at an overall risk assessment judgment. The current study tests the efficacy of a Decision Support System (DSS) for facilitating auditors' evaluation and assimilation of financial and nonfinancial information in accurately assessing the risk of material misstatements (RMM) in financial information. Utilizing the proximity compatibility principle, the DSS manipulates the display of cues in either an integral format (where pieces of information are displayed on one computer screen) or a separable format (where pieces of information are displayed on different computer screens). Based on cognitive fit theory, we expect that the integral (separable) display best supports financial (nonfinancial) information processing, leading to enhanced risk assessment performance. In addition, we predict that a consistent DSS display of financial and nonfinancial information facilitates risk assessment performance. Further, this study accentuates the importance of auditors' preference for the presentation of financial and nonfinancial information, and for consistent presentation of all the information, in strengthening the effect of DSS display format on risk assessment performance. We designed a case that includes a seeded high fraud risk. A total of 112 audit seniors participated in the experiment, in which the DSS display format was manipulated and the auditors' RMM assessments and display preferences were measured. The results support the hypotheses and highlight the value of the DSS in enhancing risk assessment performance.

19.
The hyper-Poisson distribution can handle both over- and underdispersion, and its generalized linear model formulation allows the dispersion of the distribution to be observation-specific and dependent on model covariates. This study's objective is to examine the potential applicability of a newly proposed generalized linear model framework for the hyper-Poisson distribution in analyzing motor vehicle crash count data. The hyper-Poisson generalized linear model was first fitted to intersection crash data from Toronto, characterized by overdispersion, and then to crash data from railway-highway crossings in Korea, characterized by underdispersion. The results of this study are promising. When fitted to the Toronto data set, the goodness-of-fit measures indicated that the hyper-Poisson model with a variable dispersion parameter provided a statistical fit as good as the traditional negative binomial model. The hyper-Poisson model was also successful in handling the underdispersed data from Korea; the model performed as well as the gamma probability model and the Conway-Maxwell-Poisson model previously developed for the same data set. The advantages of the hyper-Poisson model studied in this article are noteworthy. Unlike the negative binomial model, which has difficulties in handling underdispersed data, the hyper-Poisson model can handle both over- and underdispersed crash data. Although not a major issue for the Conway-Maxwell-Poisson model, the effect of each variable on the expected mean of crashes is easily interpretable in the case of this new model.
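For orientation, a minimal sketch of the hyper-Poisson probability mass function in its Bardwell-Crow form (not the article's GLM code); the parameter values below are arbitrary and only illustrate the over/underdispersion behavior described above.

```python
import numpy as np
from scipy.special import hyp1f1, poch

def hyper_poisson_pmf(y, theta, lam):
    """Hyper-Poisson pmf: P(Y=y) = theta**y / (poch(lam, y) * 1F1(1; lam; theta)).
    lam = 1 recovers the Poisson distribution; lam > 1 yields overdispersion
    and lam < 1 underdispersion (variance below the mean)."""
    return theta**y / (poch(lam, y) * hyp1f1(1.0, lam, theta))

y = np.arange(0, 60)  # truncation point large enough for these parameters
for lam in (0.5, 1.0, 3.0):
    pmf = hyper_poisson_pmf(y, theta=4.0, lam=lam)
    mean = np.sum(y * pmf)
    var = np.sum((y - mean) ** 2 * pmf)
    print(f"lam = {lam}: mean = {mean:.3f}, variance = {var:.3f}, var/mean = {var/mean:.3f}")
```

In the GLM framework described above, both theta and the dispersion parameter lam are linked to covariates, which is what lets the dispersion vary observation by observation.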

20.
The question addressed in the present research is whether in naturalistic risky decision environments people are sensitive to information about the probability parameter. In Study 1, we showed that in naturalistic scenarios participants generally revealed little interest in obtaining information about the outcomes and probabilities. Moreover, participants asked fewer questions about probabilities for scenarios containing moral considerations. In Study 2, it was shown that, when supplied with information on probabilities, people could be sensitive to this information. This sensitivity depends on two factors. People were less sensitive to probabilities in scenarios perceived as containing ethical considerations. People were also less sensitive to probabilities when they were faced with a single-choice situation than when they were faced with a series of lotteries with different probabilities. This can be accounted for in terms of the evaluability principle.
