Similar Documents
20 similar documents found (search time: 46 ms)
1.
Congress is currently considering adopting a mathematical formula to assign shares in cancer causation to specific doses of radiation, for use in establishing liability and compensation awards. The proposed formula, if it were sound, would allow difficult problems in tort law and public policy to be resolved by reference to tabulated "probabilities of causation." This article examines the statistical and conceptual bases for the proposed methodology. We find that the proposed formula is incorrect as an expression for the "probability of causation," that it implies hidden, debatable policy judgments in its treatment of factor interactions and uncertainties, and that it cannot in general be quantified with sufficient precision to be useful. Three generic sources of statistical uncertainty are identified--sampling variability, population heterogeneity, and error propagation--that prevent accurate quantification of "assigned shares." These uncertainties arise whenever aggregate epidemiological or risk data are used to draw causal inferences about individual cases.
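For concreteness, formulas of this family are usually written in terms of relative risk (RR). A minimal sketch, assuming the standard textbook form AS = (RR - 1)/RR; the article argues that expressions of this kind are not, in general, correct probabilities of causation:

```python
def assigned_share(relative_risk: float) -> float:
    """Textbook 'assigned share' / probability-of-causation estimate:
    the excess risk attributable to exposure as a fraction of total risk.
    (The article argues this is not a correct causal probability in general.)"""
    if relative_risk <= 1.0:
        return 0.0  # no excess risk to attribute
    return (relative_risk - 1.0) / relative_risk

# Hypothetical dose-specific relative risks, for illustration only.
for rr in (1.2, 2.0, 4.0):
    print(f"RR = {rr}: assigned share = {assigned_share(rr):.2f}")
```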

2.
Epidemiology textbooks often interpret population attributable fractions based on 2 x 2 tables or logistic regression models of exposure-response associations as preventable fractions, i.e., as fractions of illnesses in a population that would be prevented if exposure were removed. In general, this causal interpretation is not correct, since statistical association need not indicate causation; moreover, it does not identify how much risk would be prevented by removing specific constituents of complex exposures. This article introduces and illustrates an approach to calculating useful bounds on preventable fractions, having valid causal interpretations, from the types of partial but useful molecular epidemiological and biological information often available in practice. The method applies probabilistic risk assessment concepts from systems reliability analysis, together with bounding constraints for the relationship between event probabilities and causation (such as that the probability that exposure X causes response Y cannot exceed the probability that exposure X precedes response Y, or the probability that both X and Y occur), to bound the contribution to causation from specific causal pathways. We illustrate the approach by estimating an upper bound on the contribution to lung cancer risk made by a specific, much-discussed causal pathway that links smoking to polycyclic aromatic hydrocarbon (PAH) (specifically, benzo(a)pyrene diol epoxide)-DNA adducts at hot spot codons of p53 in lung cells. The result is a surprisingly small preventable fraction (of perhaps 7% or less) for this pathway, suggesting that it will be important to consider other mechanisms and non-PAH constituents of tobacco smoke in designing less risky tobacco-based products.
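A minimal sketch of the bounding logic, with invented probabilities; the only constraint taken from the abstract is that the probability that a pathway causes a response cannot exceed the probability that the relevant events occur:

```python
# Upper-bound the preventable fraction for one causal pathway:
# P(pathway causes response Y) <= P(every event on the pathway occurs),
# since causation requires occurrence. All numbers are hypothetical.
p_exposed = 0.25               # P(exposure X)
p_adduct_given_exposed = 0.10  # P(adduct forms | X) -- assumed
p_hotspot_given_adduct = 0.10  # P(hot-spot mutation | adduct) -- assumed
p_response = 0.05              # P(response Y) in the population -- assumed

p_pathway_events = p_exposed * p_adduct_given_exposed * p_hotspot_given_adduct

# Fraction of the response probability this pathway could account for:
upper_bound = min(p_pathway_events, p_response) / p_response
print(f"preventable fraction for this pathway <= {upper_bound:.2f}")
```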

3.
As we enter the new millennium, we are witnessing the rapid appreciation for and development of all aspects of global and international activities and issues associated with and affected by human resource management. In order to understand the internationalization of human resource management, this paper reviews three recently published works by Poole (1999), Schuler and Jackson (1999), and Storey (2000) to map out past research and emerging areas within this field of study.

4.
Risk Analysis, 2018, 38(10): 2087-2104
In the United Kingdom, dwelling fires are responsible for the majority of all fire‐related fatalities. The development of these incidents involves the interaction of a multitude of variables that combine in many different ways. Consequently, assessment of dwelling fire risk can be complex, which often results in ambiguity during fire safety planning and decision making. In this article, a three‐part Bayesian network model is proposed to study dwelling fires from ignition through to extinguishment in order to improve confidence in dwelling fire safety assessment. The model incorporates both hard and soft data, delivering posterior probabilities for selected outcomes. Case studies demonstrate how the model functions and provide evidence of its use for planning and accident investigation.
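As a toy illustration of how such a network delivers posterior probabilities (a two-variable sketch with invented numbers, not the article's three-part model):

```python
# Toy Bayesian update for a dwelling-fire outcome. All probabilities
# below are invented for illustration.
p_alarm = 0.7                   # P(working smoke alarm present)
p_fatal_given_alarm = 0.001     # P(fatality | fire, alarm)    -- assumed
p_fatal_given_no_alarm = 0.008  # P(fatality | fire, no alarm) -- assumed

# Marginal probability of a fatality, summing over alarm states:
p_fatal = (p_alarm * p_fatal_given_alarm
           + (1 - p_alarm) * p_fatal_given_no_alarm)

# Posterior probability that no alarm was present, given a fatality:
p_no_alarm_given_fatal = (1 - p_alarm) * p_fatal_given_no_alarm / p_fatal
print(f"P(fatality | fire) = {p_fatal:.4f}")
print(f"P(no alarm | fatality) = {p_no_alarm_given_fatal:.2f}")
```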

5.
Using results from the 1999 Eurobarometer survey and a parallel telephone survey done in the United States in 2000, this study explored the relationship between levels of knowledge, educational levels, and degrees of encouragement for biotechnology development across a number of medical and agricultural applications. This cross-cultural exploration found only weak relationships among these variables, calling into question the common assumption that higher science literacy produces greater acceptance (whether or not mediated by lower perceived risk). The relationship between encouragement and trust in specific social institutions was also weak. However, regression analysis based on "trust gap" variables (defined as numerical differences between trust in specific pairs of actors) did predict national levels of encouragement for several applications, suggesting an opinion formation climate in which audiences are actively choosing among competing claims. Differences between European and U.S. reactions to biotechnology appear to be a result of different trust and especially "trust gap" patterns, rather than differences in knowledge or education.

6.
Ecological risk from the development of a wetland is assessed quantitatively by means of a new risk measure, expected loss of biodiversity (ELB). ELB is defined as the weighted sum of the increments in the probabilities of extinction of the species living in the wetland due to its loss. The weighting for a particular species is calculated according to the length of the branch on the phylogenetic tree that will be lost if the species becomes extinct. The length of the branch on the phylogenetic tree is regarded as reflecting the extent of contribution of the species to the taxonomic diversity of the world of living things. The increments in the probabilities of extinction are calculated by a simulation used for making the Red List for vascular plants in Japan. The resulting ELB for the loss of Nakaikemi wetland is 9,200 years. This result is combined with the economic costs for conservation of the wetland to produce a value for the indicator of the "cost per unit of biodiversity saved." Depending on the scenario, the value is 13,000 yen per year-ELB or 110,000 to 420,000 yen per year-ELB (1 US dollar = 110 yen in 1999).
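The ELB arithmetic itself is a weighted sum; a minimal sketch with invented species, branch lengths, and extinction-probability increments (only the structure of the measure comes from the abstract):

```python
# ELB = sum over species of (phylogenetic branch length lost, in years)
#       * (increment in extinction probability due to wetland loss).
# All values below are hypothetical.
species = {
    # name: (branch_length_years, delta_extinction_prob)
    "species_A": (2.0e6, 0.002),
    "species_B": (5.0e5, 0.010),
    "species_C": (1.2e6, 0.001),
}

elb_years = sum(branch * dp for branch, dp in species.values())
print(f"ELB = {elb_years:,.0f} years")

# Combining with an assumed annual conservation cost yields the
# 'cost per unit of biodiversity saved' indicator (yen per year-ELB).
annual_cost_yen = 5.0e7
print(f"{annual_cost_yen / elb_years:,.0f} yen per year-ELB")
```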

7.
This article examines how scientists use human, animal, and bacterial evidence to develop policy recommendations about the health consequences of human exposure to modern chemicals. Human evidence is limited because many epidemiological studies are contaminated with selection effects or unobserved heterogeneity. Changes in the aggregate incidence of morbidity (such as cancer) in the population over time are not a substitute for the lack of good individual-level data because incidence data are contaminated by the medicalization of cancer. Animal tests are also problematic because the expense of conducting experiments leads researchers to use only enough animals to allow detection of large differences in cancer incidence between controls and experimental animals that can only arise if the exposure doses are large. Predictions about the cancer incidence that would result in humans at much lower exposure levels, thus, require statistical inferences that implicitly make choices between false positive and false negative inference errors. Policy recommendations about carcinogens, therefore, are as much the product of value choices as "scientific" knowledge.

8.
The aim of this article is to build a methodology allowing the study and comparison of the potential spread of BSE at the scale of countries under different routine slaughtering conditions, in order to evaluate the risk that slaughtering alone does not drive the disease extinct. We first model the evolution in discrete time of the proportion of animals in the latent period and that of infectives, assuming a very large branching population, not necessarily constant in size, with two age classes: animals less than one year old and adult animals. We analytically derive a bifurcation parameter ρ0, which allows us to predict either endemicity or extinction of the disease and has the meaning of an epidemiological reproductive rate. We show that the classical reproductive number R0 cannot be used for prediction if the size of the population, when healthy, does not remain stable throughout time. We illustrate the qualitative results by means of simulations with either the British routine slaughtering probabilities or the French ones, the other conditions being assumed identical in both countries. We show that the French probabilities lead to a higher risk of spread of the disease than the British ones, mainly because the routine slaughtering probability for adult animals is smaller in France than in Great Britain.
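The threshold behavior described is that of a multitype branching process: the infection dies out when the spectral radius of the mean one-step reproduction matrix is at most one. A generic sketch with hypothetical matrix entries, not the article's fitted model:

```python
import numpy as np

# M[i, j] = expected number of class-j individuals generated per
# class-i individual per time step, for two classes (e.g., latent and
# infective). Entries below are hypothetical.
M = np.array([
    [0.6, 0.5],
    [0.9, 0.2],
])

rho0 = np.max(np.abs(np.linalg.eigvals(M)))  # spectral radius
print(f"rho0 = {rho0:.2f}")
print("epidemic can persist" if rho0 > 1 else "epidemic dies out")
```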

9.
A question has been raised in recent years as to whether the risk field, including analysis, assessment, and management, ought to be considered a discipline on its own. As suggested by Terje Aven, unification of the risk field would require a common understanding of basic concepts, such as risk and probability; hence, more discussion is needed of what he calls “foundational issues.” In this article, we show that causation is a foundational issue of risk, and that a proper understanding of it is crucial. We propose that some old ideas about the nature of causation must be abandoned in order to overcome certain persisting challenges facing risk experts over the last decade. In particular, we discuss the challenge of including causally relevant knowledge from the local context when studying risk. Although it is uncontroversial that the receptor plays an important role for risk evaluations, we show how the implementation of receptor‐based frameworks is hindered by methodological shortcomings that can be traced back to Humean orthodoxies about causation. We argue that the first step toward the development of frameworks better suited to make realistic risk predictions is to reconceptualize causation, by examining a philosophical alternative to the Humean understanding. In this article, we show how our preferred account, causal dispositionalism, offers a different perspective on how risk is evaluated and understood.

10.
The job demand–control(–support) model is frequently used as a theoretical framework in studies on determinants of psychological well-being. Consequently, these studies are confined to the impact of job characteristics on worker outcomes. In the present study the relation between work conditions and outcomes (job satisfaction, emotional exhaustion, psychological distress, and somatic complaints) is examined from a broader organizational perspective. This paper reports on an analysis that examines both the unique and the additional contribution of organizational characteristics to well-being indicators, beyond those attributed to job characteristics. A total of 706 care staff from three public residential institutions for people with mental or physical disabilities in the Netherlands took part in this research. To assess organizational risk factors, a measurement instrument was developed, the Organizational Risk Factors Questionnaire (ORFQ), based on the safety-critical factors of the Tripod accident causation model. Factor analyses and reliability testing resulted in a 52-item scale consisting of six reliable sub-scales: staffing resources, communication, social hindrance, training opportunities, job skills, and material resources. These organizational risk factors explained important parts of the variance in each of the outcome measures, beyond that accounted for by demographic variables and job demand–control–support (JDCS) measures. Communication and training opportunities were of central importance to carers’ job satisfaction. Social hindrance, job skills, and material resources explained a substantial amount of unique variance on the negative outcomes investigated.

11.
A Flexible Count Data Regression Model for Risk Analysis
In many cases, risk and reliability analyses involve estimating the probabilities of discrete events such as hardware failures and occurrences of disease or death. There is often additional information in the form of explanatory variables that can be used to help estimate the likelihood of different numbers of events in the future through the use of an appropriate regression model, such as a generalized linear model. However, existing generalized linear models (GLMs) are limited in their ability to handle the types of variance structures often encountered in using count data in risk and reliability analysis. In particular, standard models cannot handle both underdispersed data (variance less than the mean) and overdispersed data (variance greater than the mean) in a single coherent modeling framework. This article presents a new GLM based on a reformulation of the Conway-Maxwell Poisson (COM) distribution that is useful for both underdispersed and overdispersed count data and demonstrates this model by applying it to the assessment of electric power system reliability. The results show that the proposed COM GLM can fit data as well as the commonly used existing models for overdispersed data sets while outperforming these commonly used models for underdispersed data sets.
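A minimal sketch of the standard Conway-Maxwell Poisson distribution the model builds on (not the authors' reformulated GLM): P(Y = y) is proportional to lam**y / (y!)**nu, where nu = 1 recovers the Poisson, nu > 1 gives underdispersion, and nu < 1 gives overdispersion.

```python
import math

def com_poisson_pmf(y: int, lam: float, nu: float, terms: int = 200) -> float:
    """COM-Poisson pmf: P(Y=y) = lam**y / (y!)**nu / Z(lam, nu), with
    normalizing constant Z = sum_j lam**j / (j!)**nu. Computed in log
    space to avoid overflow in the factorial powers."""
    log_terms = [j * math.log(lam) - nu * math.lgamma(j + 1) for j in range(terms)]
    m = max(log_terms)
    log_z = m + math.log(sum(math.exp(t - m) for t in log_terms))
    return math.exp(y * math.log(lam) - nu * math.lgamma(y + 1) - log_z)

# Mean and variance across dispersion regimes (lam = 3.0 is arbitrary).
for nu in (0.5, 1.0, 2.0):
    pmf = [com_poisson_pmf(y, 3.0, nu) for y in range(60)]
    mean = sum(y * p for y, p in enumerate(pmf))
    var = sum((y - mean) ** 2 * p for y, p in enumerate(pmf))
    print(f"nu={nu}: mean={mean:.2f}, variance={var:.2f}")
```

At nu = 1.0 the printout recovers the Poisson property mean = variance; the other two rows show variance above and below the mean within the same family.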

12.
The purpose of this paper is to provide theoretical justification for some existing methods for constructing confidence intervals for the sum of coefficients in autoregressive models. We show that the methods of Stock (1991), Andrews (1993), and Hansen (1999) provide asymptotically valid confidence intervals, whereas the subsampling method of Romano and Wolf (2001) does not. In addition, we generalize the three valid methods to a larger class of statistics. We also clarify the difference between uniform and pointwise asymptotic approximations, and show that a pointwise convergence of coverage probabilities for all values of the parameter does not guarantee the validity of the confidence set.

13.
This article proposes a systematic procedure for computing probabilities of operator action failure in the cognitive reliability and error analysis method (CREAM). The starting point for the quantification is a previously introduced fuzzy version of the CREAM paradigm that is here further extended to account for: (1) the ambiguity in the qualification of the conditions under which the action is performed (common performance conditions, CPCs) and (2) the fact that the effects of such conditions on human performance reliability may not all be equal.

14.
T. Oka, H. Matsuda, Y. Kadono. Risk Analysis, 2001, 21(6): 1011-1023
Ecological risk from the development of a wetland is assessed quantitatively by means of a new risk measure, expected loss of biodiversity (ELB). ELB is defined as the weighted sum of the increments in the probabilities of extinction of the species living in the wetland due to its loss. The weighting for a particular species is calculated according to the length of the branch on the phylogenetic tree that will be lost if the species becomes extinct. The length of the branch on the phylogenetic tree is regarded as reflecting the extent of contribution of the species to the taxonomic diversity of the world of living things. The increments in the probabilities of extinction are calculated by a simulation used for making the Red List for vascular plants in Japan. The resulting ELB for the loss of Nakaikemi wetland is 9,200 years. This result is combined with the economic costs for conservation of the wetland to produce a value for the indicator of the "cost per unit of biodiversity saved." Depending on the scenario, the value is 13,000 yen per year-ELB or 110,000 to 420,000 yen per year-ELB (1 US dollar = 110 yen in 1999).

15.
In a recent article in this journal, De Bodt and Van Wassenhove [1] presented analytic derivations related to lot-sizing behavior under uncertainty. Although their models appear to have been verified in the aggregate by simulation experiments, detailed justifications for several of the derivations are missing. The present paper looks at De Bodt and Van Wassenhove's analysis and provides verifications of (and corrections to) the ordering probabilities and order cycles used by the authors to estimate the cost effects of forecast errors in the particular operating environment studied. The probabilities simulated in this study also generate additional insight into the “system nervousness” caused by lot-sizing and forecast errors.

16.
Event-tree analysis with imprecise probabilities
X. You, F. Tonon. Risk Analysis, 2012, 32(2): 330-344
Novel methods are proposed for dealing with event-tree analysis under imprecise probabilities, where one could measure chance or uncertainty without sharp numerical probabilities and express available evidence as upper and lower previsions (or expectations) of gambles (or bounded real functions). Sets of upper and lower previsions generate a convex set of probability distributions (or measures). Any probability distribution in this convex set should be considered in the event-tree analysis. This article focuses on the calculation of upper and lower bounds of the prevision (or the probability) of some outcome at the bottom of the event-tree. Three cases of given information/judgments on probabilities of outcomes are considered: (1) probabilities conditional to the occurrence of the event at the upper level; (2) total probabilities of occurrences, that is, not conditional to other events; (3) the combination of the previous two cases. Corresponding algorithms with imprecise probabilities under the three cases are explained and illustrated by simple examples.
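A minimal sketch of the simplest situation (case 1, conditional probabilities along a single path), with hypothetical intervals; the article's general algorithms over convex sets of distributions are not shown:

```python
# Bounds on the probability of one event-tree outcome when each branch
# carries an interval-valued conditional probability. With each
# conditional free to vary independently in its interval, the product
# of nonnegative factors is monotone in each factor, so the bounds are
# the products of the endpoints. Intervals below are hypothetical.
path = [
    (0.10, 0.20),  # P(initiating event)
    (0.30, 0.50),  # P(barrier fails | initiating event)
    (0.60, 0.80),  # P(severe outcome | barrier fails)
]

lower, upper = 1.0, 1.0
for lo, hi in path:
    lower *= lo
    upper *= hi
print(f"P(outcome) in [{lower:.4f}, {upper:.4f}]")
```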

17.
Stakeholders making decisions in public health and world trade need improved estimations of the burden‐of‐illness of foodborne infectious diseases. In this article, we propose a Bayesian meta‐analysis, or more precisely a Bayesian evidence synthesis, to assess the burden‐of‐illness of campylobacteriosis in France. Using this case study, we investigate campylobacteriosis prevalence, as well as the probabilities of different events that guide the disease pathway, by (i) employing a Bayesian approach on French and foreign human studies (from active surveillance systems, laboratory surveys, physician surveys, epidemiological surveys, and so on) through the chain of events that occur during an episode of illness and (ii) including expert knowledge about this chain of events. We split the target population using an exhaustive and exclusive partition based on health status and the level of disease investigation. We assume an approximate multinomial model over this population partition. Thereby, each observed data set related to the partition brings information on the parameters of the multinomial model, improving burden‐of‐illness parameter estimates that can be deduced from the parameters of the basic multinomial model. This multinomial model serves as a core model to perform a Bayesian evidence synthesis. Expert knowledge is introduced by way of pseudo‐data. The result is a global estimation of the burden‐of‐illness parameters with their accompanying uncertainty.
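A minimal sketch of the conjugate multinomial-Dirichlet update at the core of such a synthesis, with a hypothetical partition, counts, and expert pseudo-counts (the article's actual model combines several data sources over the chain of events):

```python
import numpy as np

# Hypothetical partition of the population by health status and
# level of disease investigation.
cells = ["asymptomatic", "ill, no visit", "visited GP", "hospitalized"]

# Dirichlet prior: expert knowledge entered as pseudo-counts (assumed).
prior_pseudo_counts = np.array([50.0, 30.0, 15.0, 5.0])

# Observed counts from one hypothetical survey.
observed = np.array([120, 80, 40, 10])

# Conjugate update: posterior Dirichlet parameters = prior + data.
posterior = prior_pseudo_counts + observed
posterior_mean = posterior / posterior.sum()

for cell, p in zip(cells, posterior_mean):
    print(f"P({cell}) ~= {p:.3f}")
```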

18.
Variance (or standard deviation) of return is widely used as a measure of risk in financial investment risk analysis applications, where mean‐variance analysis is applied to calculate efficient frontiers and undominated portfolios. Why, then, do health, safety, and environmental (HS&E) and reliability engineering risk analysts insist on defining risk more flexibly, as being determined by probabilities and consequences, rather than simply by variances? This note suggests an answer by providing a simple proof that mean‐variance decision making violates the principle that a rational decisionmaker should prefer higher to lower probabilities of receiving a fixed gain, all else being equal. Indeed, simply hypothesizing a continuous increasing indifference curve for mean‐variance combinations at the origin is enough to imply that a decisionmaker must find unacceptable some prospects that offer a positive probability of gain and zero probability of loss. Unlike some previous analyses of limitations of variance as a risk metric, this expository note uses only simple mathematics and does not require the additional framework of von Neumann-Morgenstern utility theory.
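A numeric sketch of the kind of violation described, using a hypothetical gamble and a hypothetical mean-variance score (the article's own proof needs only an indifference curve at the origin):

```python
# Two prospects: win 100 with probability p, else 0 (no loss possible).
# Raising p from 0.1 to 0.2 can only improve the prospect, yet a
# mean-variance rule U = mean - k * variance can prefer the worse one.
def mean_variance_score(p: float, gain: float = 100.0, k: float = 0.02) -> float:
    mean = p * gain
    variance = p * (1 - p) * gain ** 2
    return mean - k * variance

for p in (0.1, 0.2):
    print(f"p={p}: U = {mean_variance_score(p):.1f}")
# Prints U = -8.0 for p=0.1 and U = -12.0 for p=0.2: the rule prefers
# the lower probability of the same fixed gain, violating dominance.
```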

19.
In decision-making under uncertainty, a decision-maker is required to specify, possibly with the help of decision analysts, point estimates of the probabilities of uncertain events. In this setting, it is often difficult to obtain very precise measurements of the decision-maker's probabilities on the states of nature, particularly when little information is available to evaluate probabilities, available information is not specific enough, or we have to model the conflict case where several information sources are available. In this paper, imprecise probabilities are considered for representing the decision-maker's perception or past experience about the states of nature; to be specific, interval probabilities, which can be further categorized as (a) intervals of individual probabilities, (b) intervals of probability differences, and (c) intervals of ratio probabilities. We present a heuristic approach to modeling a wider range of types of probabilities as well as three types of interval probabilities. In particular, the intervals of ratio probabilities, which are widely used in the uncertain AHP context, are analyzed to find extreme points by the use of change of variables, not to mention the first two types of interval probabilities. Finally, we examine how these extreme points can be used to determine an ordering or partial ordering of the expected values of strategies.
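A minimal sketch of how extreme expected values can be computed under intervals of individual probabilities (case (a)), via linear programming; payoffs and intervals are hypothetical, and the change-of-variables treatment of ratio intervals is not shown:

```python
import numpy as np
from scipy.optimize import linprog

# Payoffs of one strategy under three states of nature (hypothetical).
payoffs = np.array([100.0, 40.0, -20.0])

# Intervals of individual probabilities for the states (hypothetical).
bounds = [(0.2, 0.5), (0.3, 0.6), (0.1, 0.4)]

# Probabilities must sum to one.
A_eq = [[1.0, 1.0, 1.0]]
b_eq = [1.0]

# linprog minimizes, so minimizing payoffs gives the lower bound and
# minimizing -payoffs gives (minus) the upper bound.
lo = linprog(payoffs, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
hi = linprog(-payoffs, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
print(f"expected value in [{lo.fun:.1f}, {-hi.fun:.1f}]")
```

The optima sit at extreme points of the probability polytope, which is exactly why the extreme points determine the ordering of strategies' expected-value ranges.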

20.
Although there is ample empirical evidence that trust in risk regulation is strongly related to the perception and acceptability of risk, it is less clear what the direction of this relationship is. This article explores the nature of the relationship, using three separate data sets on perceptions of genetically modified (GM) food among the British public. The article has two discrete but closely interrelated objectives. First, it compares two models of trust. More specifically, it investigates whether trust is the cause (causal chain account) or the consequence (associationist view) of the acceptability of GM food. Second, this study explores whether the affect heuristic can be applied to a wider number of risk-relevant concepts than just perceived risk and benefit. The results suggest that, rather than a determinant, trust is an expression or indicator of the acceptability of GM food. In addition, and as predicted, "affect" accounts for a large portion of the variance between perceived risk, perceived benefit, trust in risk regulation, and acceptability. Overall, the results support the associationist view that specific risk judgments are driven by more general evaluative judgments. The implications of these results for risk communication and policy are discussed.
