Similar Documents (20 results)
1.
Setting action levels or limits for health protection is complicated by uncertainty in the dose-response relation across a range of hazards and exposures. To address this issue, we consider the classic newsboy problem. The principles used to manage uncertainty in that case are applied to two stylized exposure examples, one for high-dose, high-dose-rate radiation and the other for ammonia. Both incorporate expert judgment on uncertainty quantification in the dose-response relationship. The mathematical technique of probabilistic inversion also plays a key role. We propose a coupled approach, whereby scientists quantify the dose-response uncertainty using techniques such as structured expert judgment with performance weights and probabilistic inversion, and stakeholders quantify the associated loss rates.
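As a rough sketch of how the newsboy logic can set an action level, the snippet below computes the critical fractile from two stakeholder loss rates and reads off the corresponding quantile of an uncertain harm threshold. The loss rates and the lognormal threshold distribution are invented for illustration, not values from the study.

```python
# A minimal sketch, assuming illustrative loss rates and an assumed lognormal
# distribution for the uncertain harm threshold (e.g., as quantified by
# structured expert judgment with probabilistic inversion).
from scipy import stats

c_under = 9.0  # assumed loss rate when the limit is set too high (under-protection)
c_over = 1.0   # assumed loss rate when the limit is set too low (over-protection)

# Classic newsboy solution: the optimal limit sits at the critical fractile.
fractile = c_over / (c_under + c_over)

# Assumed uncertainty distribution for the dose threshold at which harm occurs.
threshold = stats.lognorm(s=0.8, scale=100.0)

action_level = threshold.ppf(fractile)
print(f"critical fractile = {fractile:.2f}, action level = {action_level:.1f}")
```

With under-protection assumed nine times costlier than over-protection, the limit lands at the 10th percentile of the threshold distribution, i.e., it errs on the protective side.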

2.
3.
The hyper-Poisson distribution can handle both over- and underdispersion, and its generalized linear model formulation allows the dispersion of the distribution to be observation-specific and dependent on model covariates. This study's objective is to examine the potential applicability of a newly proposed generalized linear model framework for the hyper-Poisson distribution in analyzing motor vehicle crash count data. The hyper-Poisson generalized linear model was first fitted to intersection crash data from Toronto, characterized by overdispersion, and then to crash data from railway-highway crossings in Korea, characterized by underdispersion. The results of this study are promising. When fitted to the Toronto data set, the goodness-of-fit measures indicated that the hyper-Poisson model with a variable dispersion parameter provided a statistical fit as good as the traditional negative binomial model. The hyper-Poisson model was also successful in handling the underdispersed data from Korea; the model performed as well as the gamma probability model and the Conway-Maxwell-Poisson model previously developed for the same data set. The advantages of the hyper-Poisson model studied in this article are noteworthy. Unlike the negative binomial model, which has difficulty handling underdispersed data, the hyper-Poisson model can handle both over- and underdispersed crash data. Although this is not a major issue for the Conway-Maxwell-Poisson model, the effect of each variable on the expected mean of crashes is easily interpretable in the case of this new model.
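A minimal sketch of the hyper-Poisson probability mass function follows, using the standard form with a confluent hypergeometric normalizer; the parameter values are illustrative, not estimates from the Toronto or Korean data.

```python
# Hyper-Poisson pmf: P(Y = y) = lam**y / ((beta)_y * 1F1(1; beta; lam)),
# where (beta)_y is the Pochhammer symbol; beta = 1 recovers the Poisson.
import numpy as np
from scipy.special import hyp1f1, poch

def hyper_poisson_pmf(y, lam, beta):
    return lam**y / (poch(beta, y) * hyp1f1(1.0, beta, lam))

y = np.arange(0, 60)  # truncation point; the tail beyond is negligible here
for beta in (0.5, 1.0, 3.0):
    p = hyper_poisson_pmf(y, lam=4.0, beta=beta)
    mean = np.sum(y * p)
    var = np.sum((y - mean) ** 2 * p)
    print(f"beta={beta}: mean={mean:.2f}, variance/mean={var/mean:.2f}")
```

The printed variance-to-mean ratios show the distribution moving from underdispersion through equidispersion to overdispersion as beta increases.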

4.
This article reports on a study to quantify expert beliefs about the explosion probability of unexploded ordnance (UXO). Some 1,976 sites at closed military bases in the United States are contaminated with UXO and are slated for cleanup, at an estimated cost of $15–140 billion. Because no available technology can guarantee 100% removal of UXO, information about explosion probability is needed to assess the residual risks of civilian reuse of closed military bases and to make decisions about how much to invest in cleanup. This study elicited probability distributions for the chance of UXO explosion from 25 experts in explosive ordnance disposal, all of whom have had field experience in UXO identification and deactivation. The study considered six different scenarios: three different types of UXO handled in two different ways (one scenario involving children and the other involving construction workers). We also asked the experts to rank 20 different kinds of UXO found at a case study site at Fort Ord, California, by sensitivity to explosion. We found that the experts do not agree about the probability of UXO explosion, with significant differences among experts in their mean estimates of explosion probabilities and in the amount of uncertainty that they express in their estimates. In three of the six scenarios, the divergence was so great that the average of all the expert probability distributions was statistically indistinguishable from a uniform (0, 1) distribution, suggesting that the sum of expert opinion provides no information at all about the explosion risk. The experts' opinions on the relative sensitivity to explosion of the 20 UXO items also diverged. The average correlation between the rankings of any pair of experts was 0.41, which is barely statistically significant (p = 0.049) at the 95% confidence level. Thus, one expert's rankings provide little predictive information about another's. The lack of consensus among experts suggests that empirical studies are needed to better understand the explosion risks of UXO.
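A minimal sketch of the kind of check behind the uniform-distribution finding: pool hypothetical expert distributions for the explosion probability with equal weights and test the pool against Uniform(0, 1). The Beta distributions standing in for individual experts are invented.

```python
# Equal-weight linear opinion pool of invented expert distributions, tested
# against Uniform(0, 1) with a Kolmogorov-Smirnov test.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical elicited distributions for P(explosion), one per expert.
experts = [stats.beta(a, b) for a, b in [(0.5, 2), (2, 0.5), (1, 1), (3, 3), (0.8, 0.8)]]

# Linear pool: draw the same number of samples from each expert's distribution.
pooled = np.concatenate([d.rvs(size=4000, random_state=rng) for d in experts])

ks = stats.kstest(pooled, "uniform")
print(f"KS statistic = {ks.statistic:.3f}, p-value = {ks.pvalue:.3g}")
```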

5.
Operations management methods have been applied profitably to a wide range of technology portfolio management problems, but have been slow to be adopted by governments and policy makers. We develop a framework that allows us to apply such techniques to a large and important public policy problem: energy technology R&D portfolio management under climate change. We apply a multi-model approach, implementing probabilistic data derived from expert elicitations into a novel stochastic programming version of a dynamic integrated assessment model. We note that while the unifying framework we present can be applied to a range of models and data sets, the specific results depend on the data and assumptions used and therefore may not be generalizable. Nevertheless, the results are suggestive, and we find that the optimal technology portfolio for the set of projects considered is fairly robust to different specifications of climate uncertainty, to different policy environments, and to assumptions about the opportunity cost of investing. We also conclude that policy makers would do better to over-invest in R&D rather than under-invest. Finally, we show that R&D can play different roles in different types of policy environments, sometimes leading primarily to cost reduction, other times leading to better environmental outcomes.
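A minimal sketch of the underlying portfolio logic: enumerate project subsets and pick the one minimizing expected cost across probabilistic climate scenarios. All costs, probabilities, and savings below are invented; the paper's actual model is a stochastic-programming version of a dynamic integrated assessment model, not this toy enumeration.

```python
# Toy two-stage portfolio choice: pay R&D costs now, realize scenario-dependent
# abatement savings later; pick the subset with lowest expected total cost.
import itertools

projects = {"solar": 3.0, "ccs": 4.0, "nuclear": 5.0}  # assumed R&D costs
scenarios = [  # (probability, baseline abatement cost, savings per funded project)
    (0.5, 20.0, {"solar": 4.0, "ccs": 2.0, "nuclear": 3.0}),
    (0.3, 40.0, {"solar": 6.0, "ccs": 9.0, "nuclear": 5.0}),
    (0.2, 80.0, {"solar": 8.0, "ccs": 15.0, "nuclear": 12.0}),
]

def expected_cost(portfolio):
    cost = sum(projects[p] for p in portfolio)  # first-stage R&D investment
    for prob, baseline, savings in scenarios:   # second-stage abatement cost
        cost += prob * (baseline - sum(savings[p] for p in portfolio))
    return cost

best = min(
    (combo for r in range(len(projects) + 1)
     for combo in itertools.combinations(projects, r)),
    key=expected_cost,
)
print(f"optimal portfolio: {sorted(best)}, expected cost = {expected_cost(best):.1f}")
```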

6.
A probabilistic and interdisciplinary risk-benefit assessment (RBA) model integrating microbiological, nutritional, and chemical components was developed for infant milk, with the objective of predicting the health impact of different consumption scenarios. Infant feeding is a particular area of interest for RBA, as both breast milk and powdered infant formula have been associated with risks and benefits related to chemicals, bacteria, and nutrients; hence the model considers all three facets. Cronobacter sakazakii, dioxin-like polychlorinated biphenyls (dl-PCB), and docosahexaenoic acid (DHA) were the three risk/benefit factors selected as key issues in microbiology, chemistry, and nutrition, respectively. The model was probabilistic, with variability and uncertainty separated using a second-order Monte Carlo simulation process. The advantages and limitations of undertaking probabilistic and interdisciplinary RBA are discussed. In particular, the probabilistic technique proved powerful in dealing with missing data and in translating assumptions into quantitative inputs while taking uncertainty into account. In addition, separating variability from uncertainty strengthened the interpretation of the model outputs by enabling natural heterogeneity to be better distinguished from lack of knowledge. Interdisciplinary RBA is necessary to give more structured conclusions and to avoid contradictory messages to policymakers and consumers, leading to more decisive food recommendations. This assessment provides a conceptual development of the RBA methodology and a robust basis on which to build.
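A minimal sketch of the second-order Monte Carlo idea: an outer loop samples epistemically uncertain parameters, an inner loop samples natural variability across infants, so uncertainty about any population percentile can be reported separately. All distributions and values are illustrative placeholders, not the model's actual inputs.

```python
# Two-dimensional Monte Carlo: outer loop = uncertainty, inner loop = variability.
import numpy as np

rng = np.random.default_rng(42)
n_outer, n_inner = 200, 5000   # uncertainty draws, variability draws

results = np.empty((n_outer, n_inner))
for i in range(n_outer):
    # Outer loop: epistemic uncertainty about the mean log-concentration.
    mu = rng.normal(loc=0.0, scale=0.3)
    # Inner loop: natural variability across individual infants.
    intake = rng.lognormal(mean=-1.0, sigma=0.5, size=n_inner)  # assumed L/day
    conc = rng.lognormal(mean=mu, sigma=0.8, size=n_inner)      # assumed units/L
    results[i] = intake * conc                                  # exposure

# Each outer iteration yields one variability distribution; the spread of a
# given percentile across outer iterations expresses uncertainty about it.
p95 = np.percentile(results, 95, axis=1)
print(f"P95 exposure: median = {np.median(p95):.2f}, "
      f"90% uncertainty interval = [{np.percentile(p95, 5):.2f}, "
      f"{np.percentile(p95, 95):.2f}]")
```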

7.
David M. Stieb, Risk Analysis, 2012, 32(12): 2133–2151
The monetized value of avoided premature mortality typically dominates the calculated benefits of air pollution regulations; therefore, characterization of the uncertainty surrounding these estimates is key to good policymaking. Formal expert judgment elicitation methods are one means of characterizing this uncertainty. They have been applied to characterize uncertainty in the mortality concentration-response function, but have yet to be used to characterize uncertainty in the economic values placed on avoided mortality. We report the findings of a pilot expert judgment study for Health Canada designed to elicit quantitative probabilistic judgments of uncertainties in Value-per-Statistical-Life (VSL) estimates for use in an air pollution context. The two-stage elicitation addressed uncertainties both in a base-case VSL for a reduction in mortality risk from traumatic accidents and in benefits-transfer-related adjustments to the base case for an air quality application (e.g., adjustments for age, income, and health status). Results for each expert were integrated to develop example quantitative probabilistic uncertainty distributions for VSL that could be incorporated into air quality models.
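A minimal sketch of how such elicited distributions might be used: propagate an uncertain base-case VSL through uncertain benefits-transfer adjustments by Monte Carlo. Every distribution below is an invented placeholder, not an elicited result from the study.

```python
# Monte Carlo propagation of an uncertain base-case VSL through multiplicative
# benefits-transfer adjustments (age, income, health status).
import numpy as np

rng = np.random.default_rng(1)
n = 100_000

base_vsl = rng.lognormal(mean=np.log(7e6), sigma=0.4, size=n)  # traumatic-accident base
age_adj = rng.triangular(0.6, 0.9, 1.1, size=n)                # older affected population
income_adj = rng.triangular(0.8, 1.0, 1.2, size=n)
health_adj = rng.triangular(0.7, 1.0, 1.3, size=n)

air_vsl = base_vsl * age_adj * income_adj * health_adj
print(f"median = {np.median(air_vsl)/1e6:.1f}M, 5th-95th = "
      f"[{np.percentile(air_vsl, 5)/1e6:.1f}M, {np.percentile(air_vsl, 95)/1e6:.1f}M]")
```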

8.
This article seeks to clarify the potential role of uncertainty theories such as imprecise probabilities, random sets, and possibility theory in the risk analysis process. Rather than opposing an objective bounding analysis, in which only statistically founded probability distributions are taken into account, to a full-fledged probabilistic approach exploiting expert subjective judgment, we advocate the idea that both analyses are useful and should be articulated with one another. Moreover, the idea that risk analysis under incomplete information is purely objective is misconceived. The use of uncertainty theories cannot be reduced to a choice between probability distributions and intervals. Indeed, these theories offer representation tools that are more expressive than either approach and can capture expert judgments while remaining faithful to their limited precision. Consequences of this thesis are examined for uncertainty elicitation, propagation, and the decision-making step.
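A minimal sketch of one representation tool in this family, a p-box: an imprecise judgment that a lognormal's median lies in an interval induces lower and upper bounding CDFs, so tail probabilities come out as intervals rather than points. The numbers are illustrative.

```python
# p-box from an imprecise judgment: the median of a lognormal lies in [2, 5].
import numpy as np
from scipy import stats

x = np.linspace(0.1, 30, 300)
medians = np.linspace(2.0, 5.0, 50)          # family consistent with the judgment
cdfs = np.array([stats.lognorm(s=0.5, scale=m).cdf(x) for m in medians])

lower_cdf = cdfs.min(axis=0)   # bounds enclosing every distribution in the family
upper_cdf = cdfs.max(axis=0)

# Bounds on P(X > 10): an interval, not a point value.
i = np.searchsorted(x, 10.0)
print(f"P(X > 10) in [{1 - upper_cdf[i]:.3f}, {1 - lower_cdf[i]:.3f}]")
```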

9.
Computational models for expert weights and attribute weights have become two important and closely watched research topics in recent years. For social network group decision-making problems in which the evaluation information takes the form of probabilistic linguistic trust functions, this paper proposes a probabilistic linguistic social network group decision-making model based on trust relationships and information measures. First, a probabilistic linguistic decision space based on trust relationships is constructed, a model of trust propagation between experts is developed, and expert weights are computed from the trust relationships among experts. Second, entropy and similarity concepts for probabilistic linguistic trust functions are introduced, and trigonometric functions are used to design measures of their information entropy and similarity. Finally, a probabilistic linguistic social network group decision-making model based on trust relationships and information measures is constructed, yielding reasonable and reliable decision results. The proposed model is applied to an electric vehicle supplier selection example, and comparative experiments verify its rationality and effectiveness.
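A minimal sketch of one ingredient of such a model, computing expert weights from a trust network: direct trust is completed by multiplicative propagation along the strongest path, and each expert's weight is their normalized received trust. The propagation operator and trust values are illustrative assumptions, not the paper's exact formulation.

```python
# Trust propagation (Floyd-Warshall-style max-product closure) and expert
# weights from received trust; trust[i, j] = how much expert i trusts expert j.
import numpy as np

trust = np.array([
    [0.0, 0.8, 0.0, 0.4],
    [0.6, 0.0, 0.9, 0.0],
    [0.0, 0.7, 0.0, 0.5],
    [0.3, 0.0, 0.6, 0.0],
])

# Propagate: t(i->k->j) = t(i, k) * t(k, j); keep the strongest path.
full = trust.copy()
for k in range(len(trust)):
    full = np.maximum(full, np.outer(full[:, k], full[k, :]))
np.fill_diagonal(full, 0.0)

received = full.sum(axis=0)          # total trust each expert receives
weights = received / received.sum()  # normalized expert weights
print(np.round(weights, 3))
```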

10.
Fariba Hashemi, LABOUR, 2002, 16(1): 89–102
This paper proposes a model to describe the continuous-time evolution of the density of the cross-sectional distribution of unemployment rates. The model is founded on the theory of analytical diffusion processes. The steady-state distribution as well as the dynamic behavior of the model are derived analytically. Parameters in the resulting analytical expressions are then fitted to US regional data. The empirical portion of the paper illustrates the usefulness of modeling the temporal evolution of the cross-sectional distribution of unemployment rates, rather than simply attending to the equilibrium implications of the process.
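A minimal sketch of the modeling idea, assuming a mean-reverting (Ornstein-Uhlenbeck-type) diffusion whose cross-sectional density converges to a Gaussian steady state; the parameters are illustrative, not the values fitted to US regional data.

```python
# Simulate a cross-section of regional unemployment rates under a
# mean-reverting diffusion and compare with the analytical steady state.
import numpy as np

rng = np.random.default_rng(7)
theta, mu, sigma = 0.5, 5.5, 0.8      # reversion speed, long-run mean, volatility
dt, steps, regions = 0.01, 2000, 500

x = rng.normal(8.0, 2.0, size=regions)  # initial cross-section of rates (%)
for _ in range(steps):
    x += theta * (mu - x) * dt + sigma * np.sqrt(dt) * rng.normal(size=regions)

# Steady-state density is N(mu, sigma**2 / (2 * theta)).
print(f"empirical mean = {x.mean():.2f}, sd = {x.std():.2f}; "
      f"theory mean = {mu:.2f}, sd = {sigma / np.sqrt(2 * theta):.2f}")
```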

11.
Decision Sciences, 2017, 48(3): 561–585
Inspired by recent discussions of the systematic costs that external rankings impose on academic institutions, and the undeniable shifts in the landscape of institutional data, a concerted and pragmatic re-evaluation of ranking efforts has begun. In this study, multiple administrators and researchers representing both public and private institutions across the United States weigh in on these issues. While reaffirming the social contract we hold with society, we argue that the fundamental methodological shortcomings of existing rankings, and ultimately any ordinal ranking system, limit the value of current rankings. These shortcomings emerge from the conceptualization and the architecture of comparisons, and are evident in survey designs, data collection methods, and data aggregation procedures. Our discussion continues by outlining the minimal requirements that a socially responsible, transparent, flexible, and highly representative rating (vs. ranking) approach should employ. Ultimately, we call on academic institutions and organizing bodies to take a collective stand against existing rankings and to embrace the strategic use of multidimensional alternatives that faithfully serve prospective students, parents, and other key stakeholders. We conclude with a number of suggestions and opportunities for practice-oriented research in the decision sciences aimed to support this fundamental shift in evaluative framing.

12.
Knowledge of failure events and their associated factors, gained from past construction projects, is regarded as potentially extremely useful in risk management. However, a number of circumstances constrain its wider use. Such knowledge is usually scarce, seldom documented, and often unavailable when it is required. Further, proven methods to integrate and analyze it cost-effectively are lacking. This article addresses possible options to overcome these difficulties. Focusing on a limited set of critical potential failure events, the article demonstrates how knowledge on a number of important potential failure events in tunnel works can be integrated. The problem of unavailable or incomplete information was addressed by gathering judgments from a group of experts. The elicited expert knowledge consisted of failure scenarios and associated probabilistic information. This information was integrated using models based on Bayesian belief networks, first customized to deal with the expected divergence in judgments caused by the epistemic uncertainty of the risks. The work described in the article shows that the developed models, which integrate risk-related knowledge, provide guidance on the use of specific remedial measures.
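A minimal sketch of belief-network-style inference for a failure event, with a toy ground-conditions, leakage, failure chain; the structure and all conditional probabilities are invented, not the elicited values from the study.

```python
# Inference by enumeration over a tiny ground -> leakage -> failure network.
p_ground = {"poor": 0.3, "good": 0.7}
p_leak_given_ground = {"poor": 0.4, "good": 0.1}
p_fail_given_leak = {True: 0.25, False: 0.02}

def p_failure(ground=None):
    """Marginal P(failure), optionally conditioned on observed ground quality."""
    grounds = [ground] if ground is not None else list(p_ground)
    total = 0.0
    for g in grounds:
        w = 1.0 if ground is not None else p_ground[g]
        for leak in (True, False):
            p_leak = p_leak_given_ground[g] if leak else 1 - p_leak_given_ground[g]
            total += w * p_leak * p_fail_given_leak[leak]
    return total

print(f"P(failure) = {p_failure():.3f}")
print(f"P(failure | poor ground) = {p_failure('poor'):.3f}")
```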

13.
Pesticide risk assessment for food products involves combining information from consumption and concentration data sets to estimate a distribution for pesticide intake in a human population. Using this distribution, one can obtain probabilities of individuals exceeding specified levels of pesticide intake. In this article, we present a probabilistic, Bayesian approach to modeling the daily consumption of the pesticide Iprodione through multiple food products. Modeling data on food consumption and pesticide concentration poses a variety of problems, such as the large proportions of consumptions and concentrations recorded as zero, and correlation between the consumptions of different foods. We consider daily food consumption data from the Netherlands National Food Consumption Survey and concentration data collected by the Netherlands Ministry of Agriculture. We develop a multivariate latent-Gaussian model for the consumption data that allows for correlated intakes between products. For the concentration data, we propose a univariate latent-t model. We then combine predicted consumptions and concentrations from these models to obtain a distribution for individual daily Iprodione exposure. The latent-variable models allow for both skewness and large numbers of zeros in the consumption and concentration data. The probabilistic approach is intended to yield more robust estimates of high percentiles of the exposure distribution than an empirical approach. Bayesian inference is used to facilitate the treatment of data with a complex structure.
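A minimal sketch of the latent-variable idea: correlated latent Gaussians drive consumption (negative latents map to zero consumption), a latent t-variable drives concentration with its own zeros, and exposure combines the two. All parameters and units are illustrative stand-ins for the fitted Bayesian model.

```python
# Latent-variable generation of zero-inflated, correlated consumptions and
# heavy-tailed concentrations, combined into an exposure distribution.
import numpy as np

rng = np.random.default_rng(3)
n = 50_000

# Correlated latent Gaussians for two food products; a negative latent value
# means the product is not consumed that day.
cov = [[1.0, 0.4], [0.4, 1.0]]
z = rng.multivariate_normal(mean=[-0.5, 0.2], cov=cov, size=n)
consumption = np.maximum(z, 0.0)                       # assumed units: kg/day

# Latent-t concentrations: zero residue when the latent falls below zero.
t_latent = 0.5 * rng.standard_t(df=4, size=(n, 2)) - 0.5
concentration = np.maximum(t_latent, 0.0)              # assumed units: mg/kg

exposure = (consumption * concentration).sum(axis=1)   # mg/day
print(f"P(exposure = 0) = {(exposure == 0).mean():.2f}, "
      f"P99 = {np.percentile(exposure, 99):.3f} mg/day")
```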

14.
The choice of performance measure has long been a difficult issue facing researchers. This article investigates the comparability of four common measures of acquisition performance: cumulative abnormal returns, managers' assessments, divestment data, and expert informants' assessments. Independently, each of these measures indicated a mean acquisition success rate of between 44% and 56% within a sample of British cross-border acquisitions. However, with the exception of a positive relationship between managers' and expert informants' subjective assessments, no significant correlation was found between the performance data generated by the alternative metrics. In particular, ex-ante capital market reactions to an acquisition announcement exhibited little relation to corporate managers' ex-post assessments. This is seen to reflect the information asymmetry that can exist between investors and company management, particularly regarding implementation aspects. Overall, the results suggest that future acquisition studies should consider employing multiple performance measures in order to gain a holistic view of outcomes, while in the longer term, opportunities remain to identify and refine improved metrics.

15.
Various methods exist to calculate confidence intervals for the benchmark dose in risk analysis. This study compares the performance of three such methods in fitting nonlinear dose-response models: the delta method, the likelihood-ratio method, and the bootstrap method. A data set from a developmental toxicity test with continuous, ordinal, and quantal dose-response data is used for the comparison. Nonlinear dose-response models with various shapes were fitted to these data. The results indicate that a few thousand runs are generally needed to obtain stable confidence limits with the bootstrap method. Further, the bootstrap and likelihood-ratio methods were found to give fairly similar results. The delta method, however, in some cases resulted in different (usually narrower) intervals, and appears unreliable for nonlinear dose-response models. Since the bootstrap method is more time-consuming than the likelihood-ratio method, the latter is more attractive for routine dose-response analysis. In the context of a probabilistic risk assessment, the bootstrap method has the advantage that it links directly to Monte Carlo analysis.
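A minimal sketch of the parametric bootstrap for a benchmark dose, using an assumed exponential model for quantal data and a 10% benchmark response; the data and model form are invented for illustration.

```python
# Parametric bootstrap CI for a benchmark dose (BMD) under a background +
# one-hit model, p(d) = g + (1 - g) * (1 - exp(-b * d)), fitted by binomial MLE.
import numpy as np
from scipy.optimize import minimize

doses = np.array([0.0, 25.0, 50.0, 100.0, 200.0])
n_animals = np.full(5, 50)
affected = np.array([2, 5, 10, 22, 40])          # illustrative quantal data

def prob(g, b):
    return g + (1 - g) * (1 - np.exp(-b * doses))

def negloglik(theta, k):
    p = np.clip(prob(*theta), 1e-10, 1 - 1e-10)
    return -np.sum(k * np.log(p) + (n_animals - k) * np.log(1 - p))

def fit(k):
    return minimize(negloglik, x0=[0.05, 0.005], args=(k,),
                    bounds=[(1e-6, 0.5), (1e-6, 1.0)]).x

def bmd(b, bmr=0.10):
    return -np.log(1 - bmr) / b                  # dose with extra risk = BMR

rng = np.random.default_rng(0)
g_hat, b_hat = fit(affected)
p_hat = prob(g_hat, b_hat)
boot = [bmd(fit(rng.binomial(n_animals, p_hat))[1]) for _ in range(2000)]
print(f"BMD10 = {bmd(b_hat):.1f}; bootstrap 90% CI = "
      f"[{np.percentile(boot, 5):.1f}, {np.percentile(boot, 95):.1f}]")
```

The 2,000 refits echo the paper's observation that a few thousand runs are generally needed before the bootstrap limits stabilize.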

16.
Dose-response modeling of biological agents has traditionally focused on describing laboratory-derived experimental data, with limited consideration of factors that are controlled in a laboratory but are likely to vary in real-world scenarios. In this study, a probabilistic framework is developed that extends Brookmeyer's competing-risks dose-response model to allow for variation in factors such as dose dispersion, dose deposition, and other within-host parameters. With data sets drawn from dose-response experiments on inhalational anthrax, plague, and tularemia, we illustrate how, in certain cases, models that consider only the experimental data in isolation can overestimate infection numbers.
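A minimal sketch of the effect of dose dispersion alone: because the single-hit response curve is concave, averaging over a dispersed dose (here lognormal with the same mean as the nominal dose) yields a lower infection probability than plugging in the nominal dose, illustrating the kind of overestimation the authors describe. The rate parameter and dispersion are illustrative, not fitted values.

```python
# Point-dose vs. dose-dispersed infection probability under an exponential
# (single-hit) response; dispersion chosen so the mean dose equals nominal.
import numpy as np

rng = np.random.default_rng(5)
r = 1e-4                  # assumed per-organism infection probability
nominal = 5000.0          # nominal inhaled dose
sigma = 1.0               # assumed dose-dispersion spread

p_point = 1 - np.exp(-r * nominal)

doses = rng.lognormal(mean=np.log(nominal) - sigma**2 / 2, sigma=sigma,
                      size=100_000)          # E[dose] = nominal
p_dispersed = np.mean(1 - np.exp(-r * doses))

print(f"point-dose estimate: {p_point:.3f}, with dose dispersion: {p_dispersed:.3f}")
```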

17.
Bayesian network methodology is used to model key linkages of the service-profit chain within the context of transportation service satisfaction. Bayesian networks offer some advantages for implementing managerially focused models over other statistical techniques designed primarily for evaluating theoretical models. These advantages are (1) providing a causal explanation using observable variables within a single multivariate model, (2) analysis of nonlinear relationships contained in ordinal measurements, (3) accommodation of branching patterns that occur in data collection, and (4) the ability to conduct probabilistic inference for prediction and diagnostics with an output metric that can be understood by managers and academics. Sample data from 1,101 recent transport service customers are utilized to select and validate a Bayesian network and conduct probabilistic inference.
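A minimal sketch of advantage (4), probabilistic inference in both directions: forward for prediction and via Bayes' rule for diagnostics. The conditional probability table and prior are invented, not estimates from the customer data.

```python
# Prediction (forward) and diagnostics (Bayes' rule) on a one-link network.
import numpy as np

quality_prior = np.array([0.2, 0.5, 0.3])            # P(quality): low, medium, high
p_satisfied_given_quality = np.array([0.3, 0.7, 0.9])

# Prediction: marginal P(satisfied).
p_sat = quality_prior @ p_satisfied_given_quality

# Diagnostics: P(quality | not satisfied) via Bayes' rule.
p_quality_given_unsat = quality_prior * (1 - p_satisfied_given_quality) / (1 - p_sat)

print(f"P(satisfied) = {p_sat:.2f}")
print("P(quality | unsatisfied) =", np.round(p_quality_given_unsat, 2))
```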

18.
Empowered by virtualization technology, service requests from cloud users can be honored by creating and running virtual machines. Virtual machines established for different users may be allocated to the same physical server, making the cloud vulnerable to co-residence attacks, in which a malicious attacker steals a user's data by co-residing virtual machines on the same server. To protect data against such theft, the data partition technique divides the user's data into multiple blocks, each handled by a separate virtual machine. Moreover, early warning agents (EWAs) are deployed to detect and possibly prevent co-residence attacks at a nascent stage. This article models and analyzes the attack success probability (the complement of data security) in cloud systems subject to a competing attack detection process (by EWAs) and data theft process (by co-residence attackers). Based on the suggested probabilistic model, the optimal data partition and protection policy is determined, with the objective of minimizing the user's cost subject to providing a desired level of data security. Examples illustrate the effects of different model parameters (attack rate, number of cloud servers, number of data blocks, attack detection time, and data theft time distribution parameters) on the attack success probability and on the optimization solutions.
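A minimal sketch of the optimization trade-off: with more data blocks, the attacker must win the detection-versus-theft race on every block, but each block adds cost. The exponential race times and cost coefficients are illustrative assumptions, not the article's model details.

```python
# Cost-optimal number of data blocks under a detection-vs-theft race.
import numpy as np

lam_detect, lam_theft = 1.0, 0.6   # assumed rates of EWA detection and data theft
c_block, c_breach = 1.0, 200.0     # cost per data block, cost of a full data theft

# P(theft beats detection) for one block in an exponential race.
p_block = lam_theft / (lam_theft + lam_detect)

m = np.arange(1, 15)
p_success = p_block ** m           # all blocks must be stolen before detection
total_cost = c_block * m + c_breach * p_success

best = m[np.argmin(total_cost)]
print(f"per-block success prob = {p_block:.2f}; optimal number of blocks = {best}")
```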

19.
Groundwater leakage into subsurface constructions can cause reduction of pore pressure and subsidence in clay deposits, even at large distances from the construction. The potential cost of damage is substantial, particularly in urban areas. The large scale of the process also implies heterogeneous soil conditions that cannot be described in complete detail, which creates a need to estimate the uncertainty of subsidence with probabilistic methods. In this study, the risk for subsidence is estimated by coupling two probabilistic models: a geostatistics-based soil stratification model and a subsidence model. Statistical analyses of stratification and soil properties serve as inputs to the models. The results include spatially explicit probabilistic estimates of subsidence magnitude and sensitivities of the included model parameters. From these, areas with significant risk for subsidence are distinguished from low-risk areas. The efficiency and usefulness of this modeling approach as a tool for communication to stakeholders, decision support for prioritizing risk-reducing measures, and identification of the need for further investigation and monitoring are demonstrated with a case study of a planned tunnel in Stockholm.
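A minimal sketch of the coupling idea: sample soil stratification and compressibility per location, push each realization through a (here grossly simplified, linear) subsidence relation, and map exceedance probabilities. All distributions, units, and the subsidence relation are illustrative placeholders.

```python
# Coupled stochastic soil model + subsidence model, Monte Carlo per location.
import numpy as np

rng = np.random.default_rng(11)
n_sims, n_loc = 2000, 50

# Stochastic stratification: mean clay thickness varies along a transect.
mean_thickness = 5 + 10 * np.exp(-np.linspace(0, 3, n_loc))              # meters
thickness = rng.gamma(shape=4.0, scale=mean_thickness / 4.0, size=(n_sims, n_loc))
compressibility = rng.lognormal(mean=np.log(0.01), sigma=0.3, size=(n_sims, 1))
pore_pressure_drop = 30.0                                                # kPa

# Grossly simplified linear relation; assumed units give subsidence in cm.
subsidence_cm = compressibility * thickness * pore_pressure_drop

p_exceed = (subsidence_cm > 5.0).mean(axis=0)
print(f"max P(subsidence > 5 cm) = {p_exceed.max():.2f}; "
      f"locations above 10% risk: {(p_exceed > 0.1).sum()}")
```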

20.
The Strait of Istanbul, the narrow waterway separating Europe from Asia, holds strategic importance in maritime transportation as it links the Black Sea to the Mediterranean. It is considered one of the world's most congested and difficult-to-navigate waterways. Over 55,000 transit vessels pass through the Strait annually, roughly 20% of which carry dangerous cargo. In this study, we analyze safety risks pertaining to transit vessel traffic in the Strait of Istanbul and propose ways to mitigate them. Safety risk analysis was performed by incorporating a probabilistic accident risk model into a simulation model. The mathematical risk model was developed from probabilistic arguments regarding instigators, situations, accidents, and consequences, using historical data as well as subject-matter expert opinion. Scenario analysis was carried out to study the behavior of the accident risks with respect to changes in the surrounding geographical, meteorological, and traffic conditions. Our numerical investigations suggest several significant policy implications. Local traffic density and pilotage turned out to be the two main factors affecting risk in the Strait of Istanbul. Results further indicate that scheduling changes that allow more vessels into the Strait would increase risks to extreme levels. Conversely, scheduling policy changes intended to reduce risks may cause major increases in average vessel waiting times. This in turn signifies that current operations at the Strait of Istanbul have reached a critical level beyond which both risks and vessel delays are unacceptable.
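A minimal sketch of the risk-delay trade-off reported here: a tighter entry headway lets more vessels into the Strait (higher density, higher accident risk per transit), while a looser headway cuts risk but builds up waiting time. The arrival pattern, transit time, and density-risk link are all invented, not outputs of the study's simulation model.

```python
# One-day toy simulation of entry scheduling with a minimum headway policy.
import numpy as np

rng = np.random.default_rng(9)
transit_time = 1.5                                  # assumed hours to traverse
arrivals = np.sort(rng.uniform(0, 24, size=160))    # one day of vessel arrivals

for headway in (0.12, 0.18, 0.30):                  # policy: min entry spacing (h)
    entry_t, last = np.empty_like(arrivals), -np.inf
    for i, a in enumerate(arrivals):
        last = max(a, last + headway)               # wait for the next free slot
        entry_t[i] = last
    mean_wait = (entry_t - arrivals).mean()
    throughput = min(1 / headway, len(arrivals) / 24)
    density = throughput * transit_time             # Little's law: vessels inside
    p_accident = 1 - np.exp(-0.004 * density)       # assumed density-risk link
    print(f"headway {headway:.2f} h: mean wait = {mean_wait:.2f} h, "
          f"P(accident per transit) = {p_accident:.4f}")
```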
