Similar Articles
20 similar articles found.
1.
We propose the use of signal detection theory (SDT) to evaluate the performance of both probabilistic forecasting systems and individual forecasters. The main advantage of SDT is that it provides a principled way to distinguish the response from system diagnosticity, which is defined as the ability to distinguish events that occur from those that do not. There are two challenges in applying SDT to probabilistic forecasts. First, the SDT model must handle judged probabilities rather than the conventional binary decisions. Second, the model must be able to operate in the presence of sparse data generated within the context of human forecasting systems. Our approach is to specify a model of how individual forecasts are generated from underlying representations and use Bayesian inference to estimate the underlying latent parameters. Given our estimate of the underlying representations, features of the classic SDT model, such as the receiver operating characteristic (ROC) curve and the area under the ROC curve (AUC), follow immediately. We show how our approach allows ROC curves and AUCs to be applied to individuals within a group of forecasters, estimated as a function of time, and extended to measure differences in forecastability across different domains. Among the advantages of this method is that it depends only on the ordinal properties of the probabilistic forecasts. We conclude with a brief discussion of how this approach might facilitate decision making.
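As a concrete illustration of the ordinal property noted above, the AUC can be computed directly from the ranking of the forecasts via the Mann–Whitney statistic. This is a minimal sketch with invented example data, not the authors' Bayesian latent-representation estimator:

```python
def auc(forecasts, outcomes):
    """Area under the ROC curve via the Mann-Whitney statistic: the
    probability that a randomly chosen occurring event received a higher
    forecast than a randomly chosen non-occurring one (ties count half).
    Depends only on the ordinal ranking of the forecasts."""
    pos = [f for f, y in zip(forecasts, outcomes) if y == 1]
    neg = [f for f, y in zip(forecasts, outcomes) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```

Because only comparisons between forecasts enter, any monotone recalibration of a forecaster's probabilities leaves this AUC unchanged.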

2.
The Monte Carlo (MC) simulation approach is traditionally used in food safety risk assessment to study quantitative microbial risk assessment (QMRA) models. When experimental data are available, performing Bayesian inference is a good alternative approach that allows backward calculation in a stochastic QMRA model to update the experts’ knowledge about the microbial dynamics of a given food‐borne pathogen. In this article, we propose a complex example where Bayesian inference is applied to a high‐dimensional second‐order QMRA model. The case study is a farm‐to‐fork QMRA model considering genetic diversity of Bacillus cereus in a cooked, pasteurized, and chilled courgette purée. Experimental data are Bacillus cereus concentrations measured in packages of courgette purées stored at different time‐temperature profiles after pasteurization. To perform a Bayesian inference, we first built an augmented Bayesian network by linking a second‐order QMRA model to the available contamination data. We then ran a Markov chain Monte Carlo (MCMC) algorithm to update all the unknown concentrations and unknown quantities of the augmented model. About 25% of the prior beliefs are strongly updated, leading to a reduction in uncertainty. Some updates interestingly question the QMRA model.
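The MCMC updating step can be illustrated with a minimal random-walk Metropolis sampler. This toy sketch updates a normal prior on a mean concentration from a handful of observations; the numbers and the model are invented for illustration, not the article's QMRA network:

```python
import math
import random

def metropolis(log_post, x0, steps=20000, scale=0.5, seed=1):
    """Random-walk Metropolis: propose a Gaussian step, accept with
    probability min(1, posterior ratio), and record the chain."""
    rng = random.Random(seed)
    x, lp = x0, log_post(x0)
    chain = []
    for _ in range(steps):
        cand = x + rng.gauss(0, scale)
        lp_cand = log_post(cand)
        if math.log(rng.random()) < lp_cand - lp:
            x, lp = cand, lp_cand
        chain.append(x)
    return chain

# Toy model: prior mu ~ N(0, 1); observations y_i ~ N(mu, 0.5^2)
y = [0.9, 1.1, 1.0, 1.2]
log_post = lambda mu: -mu**2 / 2 - sum((yi - mu)**2 for yi in y) / (2 * 0.25)
chain = metropolis(log_post, x0=0.0)
post_mean = sum(chain[5000:]) / len(chain[5000:])  # discard burn-in
```

For this conjugate toy model the exact posterior mean is 16.8/17 ≈ 0.99, so the chain's average should land near it, illustrating how data pull the prior belief.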

3.
Pesticide risk assessment for food products involves combining information from consumption and concentration data sets to estimate a distribution for the pesticide intake in a human population. Using this distribution one can obtain probabilities of individuals exceeding specified levels of pesticide intake. In this article, we present a probabilistic, Bayesian approach to modeling the daily consumptions of the pesticide Iprodione through multiple food products. Modeling data on food consumption and pesticide concentration poses a variety of problems, such as the large proportions of consumptions and concentrations that are recorded as zero, and correlation between the consumptions of different foods. We consider daily food consumption data from the Netherlands National Food Consumption Survey and concentration data collected by the Netherlands Ministry of Agriculture. We develop a multivariate latent‐Gaussian model for the consumption data that allows for correlated intakes between products. For the concentration data, we propose a univariate latent‐t model. We then combine predicted consumptions and concentrations from these models to obtain a distribution for individual daily Iprodione exposure. The latent‐variable models allow for both skewness and large numbers of zeros in the consumption and concentration data. The use of a probabilistic approach is intended to yield more robust estimates of high percentiles of the exposure distribution than an empirical approach. Bayesian inference is used to facilitate the treatment of data with a complex structure.
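The latent-variable idea can be sketched as a generator: a product's consumption is zero when a correlated latent normal falls below a threshold, and log-normal (hence skewed) otherwise. This is an illustrative toy with invented parameters, not the fitted Netherlands model:

```python
import math
import random

def sample_consumptions(n, rho, threshold=0.0, seed=7):
    """Toy latent-Gaussian consumption model for two food products:
    draw correlated latent normals (z1, z2) with correlation rho; a
    product is consumed only when its latent value exceeds the
    threshold, and the consumed amount is exp(latent), giving both a
    point mass at zero and a right-skewed positive part."""
    rng = random.Random(seed)
    out = []
    for _ in range(n):
        z1 = rng.gauss(0, 1)
        z2 = rho * z1 + math.sqrt(1 - rho**2) * rng.gauss(0, 1)
        c1 = math.exp(z1) if z1 > threshold else 0.0
        c2 = math.exp(z2) if z2 > threshold else 0.0
        out.append((c1, c2))
    return out
```

With rho = 0.8 the two products are consumed together far more often than independence would predict, which is the feature the multivariate latent-Gaussian model is designed to capture.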

4.
This paper makes the following original contributions to the literature. (i) We develop a simpler analytical characterization and numerical algorithm for Bayesian inference in structural vector autoregressions (VARs) that can be used for models that are overidentified, just‐identified, or underidentified. (ii) We analyze the asymptotic properties of Bayesian inference and show that in the underidentified case, the asymptotic posterior distribution of contemporaneous coefficients in an n‐variable VAR is confined to the set of values that orthogonalize the population variance–covariance matrix of ordinary least squares residuals, with the height of the posterior proportional to the height of the prior at any point within that set. For example, in a bivariate VAR for supply and demand identified solely by sign restrictions, if the population correlation between the VAR residuals is positive, then even if one has available an infinite sample of data, any inference about the demand elasticity is coming exclusively from the prior distribution. (iii) We provide analytical characterizations of the informative prior distributions for impulse‐response functions that are implicit in the traditional sign‐restriction approach to VARs, and we note, as a special case of result (ii), that the influence of these priors does not vanish asymptotically. (iv) We illustrate how Bayesian inference with informative priors can be both a strict generalization and an unambiguous improvement over frequentist inference in just‐identified models. (v) We propose that researchers need to explicitly acknowledge and defend the role of prior beliefs in influencing structural conclusions and we illustrate how this could be done using a simple model of the U.S. labor market.  
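Point (ii) can be illustrated numerically: every rotation of the Cholesky factor of the residual covariance reproduces that covariance exactly, so sign restrictions select a set of impact matrices rather than a single point. A minimal bivariate sketch with an invented covariance (price in row 0, quantity in row 1; not the paper's algorithm):

```python
import math
import random

def sign_restricted_impacts(sigma, n_draws=2000, seed=0):
    """Draw rotations of the Cholesky factor of a 2x2 residual
    covariance and keep those satisfying the sign restrictions:
    demand shock (column 0) raises price and quantity; supply shock
    (column 1) lowers price and raises quantity. Every accepted matrix
    A satisfies A A' = sigma, so the data alone cannot single one out."""
    rng = random.Random(seed)
    s11, s12, s22 = sigma[0][0], sigma[0][1], sigma[1][1]
    # Cholesky factor L of the 2x2 covariance matrix
    l11 = math.sqrt(s11)
    l21 = s12 / l11
    l22 = math.sqrt(s22 - l21**2)
    accepted = []
    for _ in range(n_draws):
        t = rng.uniform(0, 2 * math.pi)
        c, s = math.cos(t), math.sin(t)
        # A = L Q for a rotation Q, so A A' = L L' = sigma is preserved
        a = [[l11 * c, -l11 * s],
             [l21 * c + l22 * s, -l21 * s + l22 * c]]
        if a[0][0] > 0 and a[1][0] > 0 and a[0][1] < 0 and a[1][1] > 0:
            accepted.append(a)
    return accepted
```

The accepted draws span a whole interval of impact coefficients; with more data the interval does not shrink, which is why the prior over the rotation continues to matter asymptotically.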

5.
Prediction of natural disasters and their consequences is difficult due to the uncertainties and complexity of multiple related factors. This article explores the use of domain knowledge and spatial data to construct a Bayesian network (BN) that facilitates the integration of multiple factors and quantification of uncertainties within a consistent system for assessment of catastrophic risk. A BN is chosen due to its advantages such as merging multiple source data and domain knowledge in a consistent system, learning from the data set, inference with missing data, and support of decision making. A key advantage of our methodology is the combination of domain knowledge and learning from the data to construct a robust network. To improve the assessment, we employ spatial data analysis and data mining to extend the training data set, select risk factors, and fine‐tune the network. Another major advantage of our methodology is the integration of an optimal discretizer, informative feature selector, learners, search strategies for local topologies, and Bayesian model averaging. These techniques all contribute to a robust prediction of risk probability of natural disasters. In the flood disaster case study, our methodology achieved a better probability of detection of high risk, a better precision, and a better ROC area compared with other methods, using both cross‐validation and prediction of catastrophic risk based on historic data. Our results suggest that the BN is a good alternative for risk assessment and as a decision tool in the management of catastrophic risk.

6.
7.
Typically, full Bayesian estimation of correlated event rates can be computationally challenging since estimators are intractable. When estimation of event rates represents one activity within a larger modeling process, there is an incentive to develop more efficient inference than provided by a full Bayesian model. We develop a new subjective inference method for correlated event rates based on a Bayes linear Bayes model under the assumption that events are generated from a homogeneous Poisson process. To reduce the elicitation burden we introduce homogenization factors to the model and, as an alternative to a subjective prior, an empirical method using the method of moments is developed. Inference under the new method is compared against estimates obtained under a full Bayesian model, which takes a multivariate gamma prior, where the predictive and posterior distributions are derived in terms of well‐known functions. The mathematical properties of both models are presented. A simulation study shows that the Bayes linear Bayes inference method and the full Bayesian model provide equally reliable estimates. An illustrative example, motivated by a problem of estimating correlated event rates across different users in a simple supply chain, shows how ignoring the correlation leads to biased estimation of event rates.
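For a single rate, the full Bayesian comparator reduces to the standard gamma-Poisson conjugate update, sketched below together with a method-of-moments prior fit of the kind the abstract mentions. The numbers are illustrative; this is not the article's multivariate model:

```python
def gamma_poisson_update(a, b, events, exposure):
    """Gamma(a, b) prior (shape a, rate b) on a homogeneous Poisson
    process rate: observing `events` counts over `exposure` time units
    gives a Gamma(a + events, b + exposure) posterior."""
    return a + events, b + exposure

def moments_gamma_prior(rates):
    """Empirical Gamma prior fit by the method of moments to a set of
    historical rate estimates: match the sample mean m and variance v,
    giving shape m^2 / v and rate m / v."""
    m = sum(rates) / len(rates)
    v = sum((r - m) ** 2 for r in rates) / len(rates)
    return m * m / v, m / v
```

The posterior mean (a + events) / (b + exposure) interpolates between the prior mean a / b and the observed rate events / exposure, with b acting as a pseudo-exposure.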

8.
Vulnerability of human beings exposed to a catastrophic disaster is affected by multiple factors that include hazard intensity, environment, and individual characteristics. The traditional approach to vulnerability assessment, based on the aggregate‐area method and unsupervised learning, cannot incorporate spatial information; thus, vulnerability can be only roughly assessed. In this article, we propose Bayesian network (BN) and spatial analysis techniques to mine spatial data sets to evaluate the vulnerability of human beings. In our approach, spatial analysis is leveraged to preprocess the data; for example, kernel density analysis (KDA) and accumulative road cost surface modeling (ARCSM) are employed to quantify the influence of geofeatures on vulnerability and relate such influence to spatial distance. The knowledge‐ and data‐based BN provides a consistent platform to integrate a variety of factors, including those extracted by KDA and ARCSM, to model vulnerability uncertainty. We also account for model uncertainty, using Bayesian model averaging and Occam's Window to average the multiple models obtained by our approach and thereby achieve robust prediction of risk and vulnerability. We compare our approach with other probabilistic models in a case study of seismic risk and conclude that our approach is a good means of mining spatial data sets to evaluate vulnerability.

9.
We construct a Bayesian HAR latent factor model with time-varying coefficients and dynamic variance (DMA(DMS)-FAHAR) and use it to forecast the high-frequency realized volatility of Chinese financial futures, mainly stock index futures and treasury bond futures. The Bayesian dynamic latent factor model extracts the key information in predictor variables including realized volatility, jump components, and signed jump variables that capture the leverage effect. We also add a speculation variable to the model to examine the effect of market speculation on volatility forecasts for China's financial futures markets. The forecasting results show that the time-varying Bayesian latent factor model achieves the best short-, medium-, and long-term forecasting performance among all competing models. Moreover, Bayesian HAR-family models with time-varying parameters and time-varying predictors substantially improve the forecasting ability of fixed-parameter HAR-family models, and adding the speculation variable to the forecasting models for stock index futures and treasury bond futures yields further gains.

10.
Domino Effect Analysis Using Bayesian Networks
A new methodology based on Bayesian networks is introduced both to model domino effect propagation patterns and to estimate the domino effect probability at different levels. The flexible structure and the unique modeling techniques offered by Bayesian networks make it possible to analyze domino effects through a probabilistic framework, considering synergistic effects, noisy probabilities, and common cause failures. Further, the uncertainties and the complex interactions among the domino effect components are captured using the Bayesian network. The probabilities of events are updated in the light of new information, and the most probable path of the domino effect is determined on the basis of the new data gathered. This study shows how probability updating helps to update the domino effect model either qualitatively or quantitatively. The methodology is applied to a hypothetical example and also to an earlier‐studied case study. These examples accentuate the effectiveness of Bayesian networks in modeling domino effects in processing facilities.
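The probability-updating step can be sketched with a three-node chain (primary event → escalation to unit A → escalation to unit B) evaluated by exact enumeration. The conditional probabilities are invented for illustration, not taken from the case study:

```python
def joint(e, a, b, p_e=0.1, p_a=(0.01, 0.4), p_b=(0.02, 0.5)):
    """Joint probability of (primary event e, escalation to unit A,
    escalation to unit B) in a chain-structured Bayesian network:
    p_a[e] is P(A=1 | E=e) and p_b[a] is P(B=1 | A=a)."""
    pe = p_e if e else 1 - p_e
    pa = p_a[e] if a else 1 - p_a[e]
    pb = p_b[a] if b else 1 - p_b[a]
    return pe * pa * pb

def posterior_primary_given_b():
    """Update the belief in the primary event after observing damage
    at unit B, summing out the hidden node A (exact enumeration)."""
    num = sum(joint(1, a, 1) for a in (0, 1))
    den = sum(joint(e, a, 1) for e in (0, 1) for a in (0, 1))
    return num / den
```

Observing damage at the far end of the chain raises the posterior probability of the primary event well above its 0.1 prior, which is the "backward" updating the methodology exploits.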

11.
Starting from the current investment practice of China's pension insurance funds, we build a stochastic programming model for Chinese pension fund investment strategy and improve the determination of the Bayesian vector autoregression parameter distributions using the Minnesota prior. The improved Bayesian VAR model generates future capital market return scenarios, from which the optimal pension fund investment strategy is obtained, and the concrete simulation steps are given. Finally, a simulation analysis based on historical data shows that the model can optimize asset allocation according to actual conditions.

12.
It is common for service providers to collect data from customers as part of efforts to monitor quality. Often, this data is passively collected, meaning (a) any solicitation of feedback is done without direct customer interaction, and (b) the customer initiates any response given. Examples include customer comment cards, toll-free telephone numbers, and comment links on World Wide Web pages. This article compares passive data collection with active methods (e.g., interviews and mail surveys). Passive methods generally have lower response rates and are inherently biased, but have cost and sample frame advantages when used to monitor quality on a continuous basis. Despite the biased nature of passive methods, this article describes the successful validation of a common customer-response model with passively collected empirical data. The model is expanded to consider the impact of complaint and compliment solicitation on customers' evaluation of the service provider. Results show that this impact is negative, and that customers who spontaneously register complaints generally record higher ratings of the service provider than customers who complain in response to a complaint solicitation. Discussion and conclusions are given.

13.
DHL, an international air‐express courier, has been operating in Hong Kong for many years. In 1998, the new international airport located at a site considerably distant from the old location opened in Hong Kong (HK). Other airport‐related infrastructure facilities have also been developed or are being developed, resulting in major changes in transport structure as well as a shift in customer demand. In this paper a multiyear distribution network is designed for DHL(HK) using an integrated network design methodology, which consists of a macro model and a micro model. The macro model, a mixed 0–1 LP, determines in an aggregate manner the least‐cost distribution network. The micro model, a simulation, evaluates the operational viability and efficacy of the network according to its service coverage and service reliability. We also illustrate how coverage and reliability can be improved via the integrated use of the two models. Extensive discussion on relevant planning and operational issues of an air‐express courier are included. The methodology has been successfully implemented at DHL(HK). It has been used to design the network, to test strategic decisions, and to update the network.

14.
Estimation from Zero-Failure Data
When performing quantitative (or probabilistic) risk assessments, it is often the case that data for many of the potential events in question are sparse or nonexistent. Some of these events may be well-represented by the binomial probability distribution. In this paper, a model for predicting the binomial failure probability, P, from data that include no failures is examined. A review of the literature indicates that the use of this model is currently limited to risk analysis of energetic initiation in the explosives testing field. The basis for the model is discussed, and the behavior of the model relative to other models developed for the same purpose is investigated. It is found that the qualitative behavior of the model is very similar to that of the other models, and for larger values of n (the number of trials), the predicted P values vary by a factor of about eight among the five models examined. Analysis reveals that the estimator is nearly identical to the median of a Bayesian posterior distribution derived using a uniform prior. An explanation of the application of the estimator in explosives testing is provided, and comments are offered regarding the use of the estimator versus other possible techniques.
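The near-equivalence noted above has a simple closed form: with a uniform Beta(1, 1) prior and n trials with zero failures, the posterior for P is Beta(1, n + 1), whose median follows directly from the Beta(1, b) CDF. A sketch of that general fact (not the explosives-specific estimator itself):

```python
def zero_failure_posterior_median(n):
    """Median of the Beta(1, n + 1) posterior for the failure
    probability P after n trials with no failures, under a uniform
    Beta(1, 1) prior. The Beta(1, b) CDF is 1 - (1 - p)**b, so setting
    it to 0.5 with b = n + 1 gives the median below."""
    return 1 - 0.5 ** (1 / (n + 1))
```

With no data at all (n = 0) the median is 0.5, and it decreases toward zero as failure-free trials accumulate.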

15.
A. Pielaat, Risk Analysis, 2011, 31(9): 1434-1450
A novel purpose of the use of mathematical models in quantitative microbial risk assessment (QMRA) is to identify the sources of microbial contamination in a food chain (i.e., biotracing). In this article we propose a framework for the construction of a biotracing model, eventually to be used in industrial food production chains where discrete numbers of products are processed that may be contaminated by a multitude of sources. The framework consists of steps in which a Monte Carlo model, simulating sequential events in the chain following a modular process risk modeling (MPRM) approach, is converted to a Bayesian belief network (BBN). The resulting model provides a probabilistic quantification of concentrations of a pathogen throughout a production chain. A BBN allows for updating the parameters of the model based on observational data, and global parameter sensitivity analysis is readily performed in a BBN. Moreover, a BBN enables “backward reasoning” when downstream data are available and is therefore a natural framework for answering biotracing questions. The proposed framework is illustrated with a biotracing model of Salmonella in the pork slaughter chain, based on a recently published Monte Carlo simulation model. This model, implemented as a BBN, describes the dynamics of Salmonella in a Dutch slaughterhouse and enables finding the source of contamination of specific carcasses at the end of the chain.

16.
This article compares two nonparametric tree‐based models, quantile regression forests (QRF) and Bayesian additive regression trees (BART), for predicting storm outages on an electric distribution network in Connecticut, USA. We evaluated point estimates and prediction intervals of outage predictions for both models using high‐resolution weather, infrastructure, and land use data for 89 storm events (including hurricanes, blizzards, and thunderstorms). We found that spatially BART predicted more accurate point estimates than QRF. However, QRF produced better prediction intervals for high spatial resolutions (2‐km grid cells and towns), while BART predictions aggregated to coarser resolutions (divisions and service territory) more effectively. We also found that the predictive accuracy was dependent on the season (e.g., tree‐leaf condition, storm characteristics), and that the predictions were most accurate for winter storms. Given the merits of each individual model, we suggest that BART and QRF be implemented together to show the complete picture of a storm's potential impact on the electric distribution network, which would allow for a utility to make better decisions about allocating prestorm resources.
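The mechanism behind QRF prediction intervals can be sketched independently of the forest itself: a quantile regression forest keeps the raw training responses associated with each leaf, and an interval is read off their empirical quantiles. A minimal sketch with invented leaf values, not the authors' fitted models:

```python
def empirical_quantile(xs, q):
    """Empirical quantile with linear interpolation between order
    statistics -- the statistic a quantile regression forest computes
    from the training responses collected in a leaf."""
    s = sorted(xs)
    pos = q * (len(s) - 1)
    lo = int(pos)
    hi = min(lo + 1, len(s) - 1)
    return s[lo] + (pos - lo) * (s[hi] - s[lo])

def prediction_interval(leaf_values, alpha=0.1):
    """Central (1 - alpha) prediction interval for an outage count,
    taken from the empirical distribution stored in a leaf."""
    return (empirical_quantile(leaf_values, alpha / 2),
            empirical_quantile(leaf_values, 1 - alpha / 2))
```

Because the interval comes from the full conditional distribution rather than a point estimate plus symmetric error, it can be wide and asymmetric for volatile storm types, which is what makes the interval comparison in the article informative.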

17.
The current dominant conceptualization of consumer reactions to services is the SERVQUAL model. This article proposes the FAIRSERV model as an alternative or additional conceptualization of consumer reactions to services. FAIRSERV involves seeing service evaluation through the lens of organizational fairness (justice) theory applied to the relationship between the service consumer and the service provider. FAIRSERV is premised on the claim that, especially in relational service contexts, consumers are interested in service fairness as well as service quality (service favorableness) as represented by SERVQUAL. Service fairness or justice is a multidimensional construct based on equity theory. In this article, the FAIRSERV model is tested with the SERVQUAL model in the context of information system services. The two models are used to predict service satisfaction and repatronage intention. The FAIRSERV model appears to add a significant new set of predictors of service satisfaction and repatronage intention that should be considered in the future by service providers.

18.
In this article, we develop a conceptual model of adaptive versus proactive recovery behavior by self‐managing teams (SMTs) in service recovery operations. To empirically test the conceptual model a combination of bank employee, customer, and archival data is collected. The results demonstrate support for independent group‐level effects of intrateam support on adaptive and proactive recovery behavior, indicating that perceptual consensus within service teams has incremental value in explaining service recovery performance. In addition, we provide evidence that adaptive and proactive recovery behavior have differential effects on external performance measures. More specifically, higher levels of adaptive performance positively influence customer‐based parameters (i.e., service recovery satisfaction and loyalty intentions), while employee proactive recovery behavior contributes to higher share of customer rates.

19.
Understanding the nature of service failures and their impact on customer responses and designing cost‐effective recovery strategies have been recognized as important issues by both service researchers and practitioners. We first propose a conceptual framework of service failure and recovery strategies. We then transform it into a mathematical model to assist managers in deciding on appropriate resource allocations for outcome and process recovery strategies based on customer risk profiles and the firm's cost structures. Based on this mathematical model we derive optimal recovery strategies, conduct sensitivity analyses of the optimal solutions for different model parameters, and illustrate them through numerical examples. We conclude with a discussion of managerial implications and directions for future research.

20.
This article examines how customer value may be affected by deploying radio frequency identification (RFID) technologies within service environments. Business articles promote operational cost savings and improved inventory management as key benefits of deploying RFID. In response, service firms are using RFID to reengineer service transactions and customer touchpoints. Customers may view these RFID applications to offer both benefits and drawbacks. This article demonstrates that individuals will recognize far more value from RFID service applications than just cost savings and inventory availability. The article analyzes qualitative survey responses on the value gained from RFID to identify a broad list of value objectives—benefits and drawbacks—associated with RFID service applications. The article contributes to academic literature by providing salient value dimensions for return on investment models of service RFID applications and for future empirical analyses of means‐ends and value‐profit chain models. Managers can use the list of dimensions to develop rich business cases for evaluating the benefits and costs from enhancing service operations with RFID. The identified drawbacks also provide managers with a resource for understanding potential risks of RFID applications.
