Similar articles
20 similar articles found.
1.
This article compares two nonparametric tree-based models, quantile regression forests (QRF) and Bayesian additive regression trees (BART), for predicting storm outages on an electric distribution network in Connecticut, USA. We evaluated point estimates and prediction intervals of outage predictions for both models using high-resolution weather, infrastructure, and land use data for 89 storm events (including hurricanes, blizzards, and thunderstorms). We found that, spatially, BART predicted more accurate point estimates than QRF. However, QRF produced better prediction intervals at high spatial resolutions (2-km grid cells and towns), while BART predictions aggregated to coarser resolutions (divisions and service territory) more effectively. We also found that predictive accuracy depended on the season (e.g., tree-leaf condition, storm characteristics) and that the predictions were most accurate for winter storms. Given the merits of each individual model, we suggest that BART and QRF be implemented together to show the complete picture of a storm's potential impact on the electric distribution network, which would allow a utility to make better decisions about allocating prestorm resources.
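To illustrate how a quantile regression forest yields prediction intervals, here is a minimal sketch (not the authors' implementation): it pools the training responses that share a leaf with the query point in each tree and reads off empirical quantiles. The features and data are hypothetical, and the pooling is a simple approximation of the exact QRF weighting.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
# Hypothetical storm features: max wind gust, rainfall, tree density index
X = rng.uniform(0, 1, size=(500, 3))
y = 5 * X[:, 0] + 2 * X[:, 1] + rng.gamma(2.0, 1.0, size=500)   # synthetic outage counts

forest = RandomForestRegressor(n_estimators=200, min_samples_leaf=5, random_state=0)
forest.fit(X, y)

def qrf_interval(forest, X_train, y_train, x_new, quantiles=(0.05, 0.5, 0.95)):
    """Approximate QRF interval: pool training responses sharing a leaf with x_new."""
    train_leaves = forest.apply(X_train)                  # (n_train, n_trees)
    new_leaves = forest.apply(x_new.reshape(1, -1))[0]    # (n_trees,)
    pooled = np.concatenate([
        y_train[train_leaves[:, t] == new_leaves[t]] for t in range(len(new_leaves))
    ])
    return np.quantile(pooled, quantiles)

lo, med, hi = qrf_interval(forest, X, y, np.array([0.8, 0.6, 0.3]))
print(f"90% prediction interval: [{lo:.1f}, {hi:.1f}], median {med:.1f}")
```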

2.
This article proposes a novel mathematical optimization framework for identifying the vulnerabilities of electric power infrastructure systems (a paramount example of critical infrastructure) to natural hazards. In this framework, the potential impacts of a specific natural hazard on an infrastructure are first evaluated in terms of failure and recovery probabilities of system components. These are then fed into a bi-level attacker–defender interdiction model to determine the critical components whose failures lead to the largest system functionality loss. The proposed framework bridges the gap between the difficulty of accurately predicting hazard information in classical probability-based analyses and the overconservatism of pure attacker–defender interdiction models. Mathematically, the proposed model takes the form of a bi-level max–min mixed-integer linear program (MILP) that is challenging to solve. For its solution, the problem is cast into an equivalent one-level MILP that can be solved by efficient global solvers. The approach is applied to a case study concerning the vulnerability identification of the georeferenced RTS24 test system under simulated wind storms. The numerical results demonstrate the effectiveness of the proposed framework for identifying critical locations under multiple hazard events and, thus, for providing a useful tool that helps decision makers make more informed prehazard preparation decisions.
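To make the attacker–defender logic concrete, the sketch below is a much-simplified surrogate, not the paper's bi-level MILP reformulation: the attacker removes up to k lines, the defender's best response is approximated by a max-flow computation from a generator super-source to a load super-sink, and the worst case is found by brute-force enumeration. The network, capacities, and demands are invented.

```python
import itertools
import networkx as nx

# Hypothetical 4-bus system: generators G1, G2 serve loads D1, D2 over capacity-limited lines.
lines = {("G1", "D1"): 60, ("G1", "D2"): 40, ("G2", "D1"): 30, ("G2", "D2"): 50, ("D1", "D2"): 20}
gen_capacity = {"G1": 80, "G2": 60}
demand = {"D1": 70, "D2": 60}

def served_load(removed):
    """Defender's best response, approximated as a max-flow problem."""
    g = nx.DiGraph()
    for (u, v), cap in lines.items():
        if (u, v) in removed:
            continue
        g.add_edge(u, v, capacity=cap)
        g.add_edge(v, u, capacity=cap)          # a line can carry flow either way
    for gen, cap in gen_capacity.items():
        g.add_edge("SRC", gen, capacity=cap)
    for bus, dem in demand.items():
        g.add_edge(bus, "SNK", capacity=dem)
    return nx.maximum_flow_value(g, "SRC", "SNK")

total_demand = sum(demand.values())
k = 2
worst = max(
    ((total_demand - served_load(set(attack)), attack)
     for attack in itertools.combinations(lines, k)),
    key=lambda t: t[0],
)
print(f"Worst-case load shed with k={k}: {worst[0]} MW by removing {worst[1]}")
```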

3.
Steven M. Quiring. Risk Analysis, 2011, 31(12): 1897–1906
This article compares statistical methods for modeling power outage durations during hurricanes and examines the predictive accuracy of these methods. Being able to make accurate predictions of power outage durations is valuable because the information can be used by utility companies to plan their restoration efforts more efficiently. This information can also help inform customers and public agencies of the expected outage times, enabling better collective response planning and coordination of restoration efforts for other critical infrastructures that depend on electricity. In the long run, outage duration estimates for future storm scenarios may help utilities and public agencies better allocate risk management resources to balance the disruption from hurricanes with the cost of hardening power systems. We compare the out-of-sample predictive accuracy of five distinct statistical models for estimating power outage duration times caused by Hurricane Ivan in 2004. The methods compared include both regression models (accelerated failure time (AFT) and Cox proportional hazard (Cox PH) models) and data mining techniques (regression trees, Bayesian additive regression trees (BART), and multivariate adaptive regression splines). We then validate our models against two other hurricanes. Our results indicate that BART yields the best prediction accuracy and that it is possible to predict outage durations with reasonable accuracy.
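For readers unfamiliar with the regression side of this comparison, the snippet below sketches how an AFT and a Cox PH model might be fit to outage-duration data with the lifelines package; the column names and data are hypothetical, and the data-mining models (regression trees, BART, MARS) are not shown.

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter, WeibullAFTFitter

rng = np.random.default_rng(1)
n = 300
df = pd.DataFrame({
    "duration_h": rng.weibull(1.5, n) * 24,     # outage duration in hours
    "restored": np.ones(n, dtype=int),          # 1 = restoration observed (no censoring in this toy set)
    "wind_gust": rng.normal(30, 8, n),          # hypothetical covariates
    "customers": rng.poisson(40, n),
})

cox = CoxPHFitter().fit(df, duration_col="duration_h", event_col="restored")
aft = WeibullAFTFitter().fit(df, duration_col="duration_h", event_col="restored")

new_outages = df.drop(columns=["duration_h", "restored"]).head(5)
print(cox.predict_median(new_outages))    # median restoration time under Cox PH
print(aft.predict_median(new_outages))    # median restoration time under Weibull AFT
```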

4.
In this article, an agent-based framework to quantify the seismic resilience of an electric power supply system (EPSS) and the community it serves is presented. Within the framework, the loss and restoration of the EPSS power generation and delivery capacity, and of the power demand from the served community, are used to assess the electric power deficit during the damage absorption and recovery processes. Damage to the components of the EPSS and of the community-built environment is evaluated using seismic fragility functions. The restoration of the community electric power demand is evaluated using seismic recovery functions. The postearthquake EPSS recovery process is modeled using an agent-based model with two agents, the EPSS Operator and the Community Administrator. The resilience of the EPSS–community system is quantified using direct, EPSS-related, societal, and community-related indicators. Parametric studies are carried out to quantify the influence of different seismic hazard scenarios, agent characteristics, and power dispatch strategies on the EPSS–community seismic resilience. The use of the agent-based modeling framework enabled a rational formulation of the postearthquake recovery phase and highlighted the interaction between the EPSS and the community in the recovery process, which is not quantified in resilience models developed to date. Furthermore, it shows that the resilience of different community sectors can be enhanced by different power dispatch strategies. The proposed agent-based EPSS–community system resilience quantification framework can be used to develop better community and infrastructure system risk governance policies.
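A toy sketch of the two-agent recovery idea, with invented repair and recovery rates rather than the authors' model: an Operator agent restores supply capacity, a Community Administrator agent tracks recovering demand, and the power deficit is logged at each step as a crude resilience indicator.

```python
# Minimal two-agent recovery loop (hypothetical parameters, time step = 1 day).
class Operator:
    def __init__(self, capacity, repair_rate):
        self.capacity = capacity              # surviving fraction of pre-event capacity
        self.repair_rate = repair_rate
    def step(self):
        self.capacity = min(1.0, self.capacity + self.repair_rate)

class CommunityAdministrator:
    def __init__(self, demand, recovery_rate):
        self.demand = demand                  # fraction of pre-event demand currently requested
        self.recovery_rate = recovery_rate
    def step(self):
        self.demand = min(1.0, self.demand + self.recovery_rate)

operator = Operator(capacity=0.4, repair_rate=0.06)
admin = CommunityAdministrator(demand=0.6, recovery_rate=0.03)

deficits = []
for day in range(30):
    deficits.append(max(0.0, admin.demand - operator.capacity))
    operator.step()
    admin.step()

print(f"cumulative normalized power deficit: {sum(deficits):.2f}")
```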

5.
To better understand the risk of exposure to food allergens, food challenge studies are designed to slowly increase the dose of an allergen delivered to allergic individuals until an objective reaction occurs. These dose-to-failure studies are used to determine acceptable intake levels and are analyzed using parametric failure time models. Though these models can provide estimates of the survival curve and risk, their parametric form may misrepresent the survival function for doses of interest. Different models that describe the data similarly may produce different dose-to-failure estimates. Motivated by predictive inference, we developed a Bayesian approach to combine survival estimates based on posterior predictive stacking, where the weights are formed to maximize posterior predictive accuracy. The approach defines a model space that is much larger than traditional parametric failure time modeling approaches. In our case, we use the approach to include random effects accounting for frailty components. The methodology is investigated in simulation, and is used to estimate allergic population eliciting doses for multiple food allergens.
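The stacking step can be illustrated as follows: given a matrix of pointwise posterior-predictive densities for held-out observations under each candidate survival model, weights are chosen on the simplex to maximize the summed log predictive density. The densities below are simulated stand-ins, not output from the authors' models.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(2)
# pred_dens[i, k]: posterior predictive density of held-out observation i under model k
# (in practice obtained from cross-validated or leave-one-out posterior draws).
n_obs, n_models = 40, 3
pred_dens = rng.uniform(0.05, 1.0, size=(n_obs, n_models))

def neg_log_score(z):
    w = np.exp(z) / np.exp(z).sum()           # softmax keeps the weights on the simplex
    return -np.sum(np.log(pred_dens @ w))

res = minimize(neg_log_score, x0=np.zeros(n_models), method="Nelder-Mead")
weights = np.exp(res.x) / np.exp(res.x).sum()
print("stacking weights:", np.round(weights, 3))
```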

6.
Keisuke Himoto. Risk Analysis, 2020, 40(6): 1124–1138
Post-earthquake fires are high-consequence events with extensive damage potential. They are also low-frequency events, so their nature remains underinvestigated. One difficulty in modeling post-earthquake ignition probabilities is reducing the model uncertainty attributed to the scarce source data. The data scarcity problem has been resolved by pooling data collected indiscriminately from multiple earthquakes. However, this approach neglects the inter-earthquake heterogeneity in regional and seasonal characteristics, which is indispensable for risk assessment of future post-earthquake fires. Thus, the present study analyzes the post-earthquake ignition probabilities of five major earthquakes in Japan from 1995 to 2016 (the 1995 Kobe, 2003 Tokachi-oki, 2004 Niigata–Chuetsu, 2011 Tohoku, and 2016 Kumamoto earthquakes) using a hierarchical Bayesian approach. As the ignition causes of earthquakes share a certain commonality, common prior distributions were assigned to the parameters, and samples were drawn from the target posterior distribution of the parameters by a Markov chain Monte Carlo simulation. The results of the hierarchical model were comparatively analyzed with those of pooled and independent models. Although the pooled and hierarchical models were both robust in comparison with the independent model, the pooled model underestimated the ignition probabilities of earthquakes with few data samples. Among the tested models, the hierarchical model was least affected by the source-to-source variability in the data. Accounting for the heterogeneity of post-earthquake ignitions across different regional and seasonal characteristics has long been desired in the modeling of post-earthquake ignition probabilities but has not been properly considered in existing approaches. The presented hierarchical Bayesian approach provides a systematic and rational framework for coping effectively with this problem, which consequently enhances the statistical reliability and stability of estimating post-earthquake ignition probabilities.
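A schematic version of the partial-pooling idea, written with PyMC on entirely synthetic counts (the model in the article has more structure, e.g., covariates for ground-motion intensity and season): each earthquake gets its own ignition probability, drawn from a common prior whose hyperparameters are learned from all events.

```python
import numpy as np
import pymc as pm

# Synthetic data: buildings exposed and ignitions observed in five earthquakes (hypothetical numbers).
exposed   = np.array([12000, 8000, 3000, 25000, 6000])
ignitions = np.array([   60,   18,    4,    90,   10])

with pm.Model() as hier_model:
    mu = pm.Normal("mu", 0.0, 2.0)                       # common mean of the logit ignition probability
    sigma = pm.HalfNormal("sigma", 1.0)                  # between-earthquake variability
    logit_p = pm.Normal("logit_p", mu, sigma, shape=len(exposed))
    p = pm.Deterministic("p", pm.math.sigmoid(logit_p))
    pm.Binomial("obs", n=exposed, p=p, observed=ignitions)
    trace = pm.sample(1000, tune=1000, chains=2, progressbar=False)

# Partially pooled ignition probability per earthquake
print(trace.posterior["p"].mean(dim=("chain", "draw")).values)
```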

7.
In this article, we discuss an outage-forecasting model that we have developed. This model uses very few input variables to estimate hurricane-induced outages prior to landfall with great predictive accuracy. We also show the results for a series of simpler models that use only publicly available data and can still estimate outages with reasonable accuracy. The intended users of these models are emergency response planners within power utilities and related government agencies. We developed our models based on the method of random forest, using data from a power distribution system serving two states in the Gulf Coast region of the United States. We also show that estimates of system reliability based on wind speed alone are not sufficient for adequately capturing the reliability of system components. We demonstrate that a multivariate approach can produce more accurate power outage predictions.
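The wind-speed-only versus multivariate comparison can be sketched as below; predictor names and the synthetic data are hypothetical, and the point is simply that adding covariates beyond wind speed can reduce cross-validated error.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(3)
n = 400
# Hypothetical grid-cell predictors: forecast gust (m/s), rainfall (mm), customers served, tree cover
gust, rain = rng.normal(35, 10, n), rng.gamma(2, 20, n)
customers, tree_cover = rng.poisson(500, n), rng.uniform(0, 1, n)
X_all = np.column_stack([gust, rain, customers, tree_cover])
outages = rng.poisson(np.clip(0.08 * gust + 0.02 * rain + 3 * tree_cover, 0, None))

rf = RandomForestRegressor(n_estimators=300, random_state=0)
for name, X in [("wind speed only", gust.reshape(-1, 1)), ("multivariate", X_all)]:
    mae = -cross_val_score(rf, X, outages, cv=5, scoring="neg_mean_absolute_error").mean()
    print(f"{name}: cross-validated MAE = {mae:.2f} outages per grid cell")
```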

8.
A conventional dose–response function can be refitted as additional data become available. A predictive Bayesian dose–response function, in contrast, requires no curve-fitting step, only additional data, and presents unconditional probabilities of illness that reflect the level of information it contains; it becomes progressively less conservative as more information is included. This investigation evaluated the potential for using predictive Bayesian methods to develop a dose–response function for human infection that improves on existing models, to show how predictive Bayesian statistical methods can utilize additional data, and to expand the Bayesian methods for a broad audience, including those concerned about an oversimplification of dose–response curve use in quantitative microbial risk assessment (QMRA). This study used a dose–response relationship incorporating six separate data sets for Cryptosporidium parvum. A Pareto II distribution with known priors was applied to one of the six data sets to calibrate the model, while the others were used for subsequent updating. Although epidemiological principles indicate that local variations, host susceptibility, and organism strain virulence may vary, all six data sets appear to be well characterized using the Bayesian approach. The adaptable model was applied to an existing data set for Campylobacter jejuni for model validation purposes, which yielded results demonstrating the ability to analyze a dose–response function with limited data and to update those relationships with new data. An analysis of goodness of fit compared to the beta-Poisson methods also demonstrated correlation between the predictive Bayesian model and the data.
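To illustrate the predictive form referred to here: with an exponential dose–response model and a gamma prior on the infectivity parameter, the prior- or posterior-averaged probability of infection takes the Pareto II form P(d) = 1 − (1 + d/β)^(−α), and updating with new dose–response counts can be done on a simple grid. The feeding-trial numbers below are purely illustrative, not the article's data.

```python
import numpy as np
from scipy.stats import gamma, binom

# Grid-based predictive Bayesian dose-response (illustrative numbers only).
r_grid = np.linspace(1e-4, 0.2, 2000)                 # per-organism infectivity parameter
prior = gamma.pdf(r_grid, a=0.5, scale=1 / 10.0)      # gamma prior => Pareto II predictive curve

# Hypothetical feeding-trial data: (dose, subjects, infected)
trials = [(30, 10, 2), (100, 10, 4), (1000, 10, 8)]

log_post = np.log(prior)
for dose, n, k in trials:
    p_inf = 1.0 - np.exp(-r_grid * dose)              # exponential dose-response for a given r
    log_post += binom.logpmf(k, n, p_inf)

dr = r_grid[1] - r_grid[0]
post = np.exp(log_post - log_post.max())
post /= post.sum() * dr                               # normalize the posterior on the grid

dose_new = 50.0
pred_risk = np.sum((1.0 - np.exp(-r_grid * dose_new)) * post) * dr
print(f"predictive probability of infection at dose {dose_new:.0f}: {pred_risk:.3f}")
```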

9.
Eren Demir. Decision Sciences, 2014, 45(5): 849–880
The number of emergency (or unplanned) readmissions in the United Kingdom National Health Service (NHS) has been rising for many years. This trend, which is possibly related to poor patient care, places financial pressure on hospitals and on national healthcare budgets. As a result, clinicians and key decision makers (e.g., managers and commissioners) are interested in predicting patients at high risk of readmission. Logistic regression is the most popular method of predicting patient-specific probabilities. However, these studies have produced conflicting results with poor prediction accuracies. We compared the predictive accuracy of logistic regression with that of regression trees for predicting emergency readmissions within 45 days after being discharged from hospital. We also examined the predictive ability of two other types of data-driven models: generalized additive models (GAMs) and multivariate adaptive regression splines (MARS). We used data on 963 patients readmitted to hospitals with chronic obstructive pulmonary disease and asthma. We used repeated split-sample validation: the data were divided into derivation and validation samples. Predictive models were estimated using the derivation sample, and the predictive accuracy of the resultant model was assessed using a number of performance measures, such as the area under the receiver operating characteristic (ROC) curve, in the validation sample. This process was repeated 1,000 times: the initial data set was divided into derivation and validation samples 1,000 times, and the predictive accuracy of each method was assessed each time. The mean ROC curve area for the regression tree models in the 1,000 derivation samples was .928, while the mean ROC curve area of the logistic regression model was .924. Our study shows that the logistic regression model and regression trees had performance comparable to that of more flexible, data-driven models such as GAMs and MARS. Given that the models produced excellent predictive accuracies, this could be a valuable decision support tool for clinicians and other decision makers (healthcare managers, policy makers, etc.) for informed decision making in the management of these diseases, which ultimately contributes to improved measures of hospital performance management.
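The repeated split-sample comparison can be sketched as follows, on synthetic data and with only the logistic regression and a classification tree shown (not GAMs or MARS); the sample size mirrors the abstract but nothing else does.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

X, y = make_classification(n_samples=963, n_features=10, weights=[0.7], random_state=0)

aucs = {"logistic": [], "tree": []}
for rep in range(200):                      # the article repeats the split 1,000 times
    X_tr, X_va, y_tr, y_va = train_test_split(X, y, test_size=0.3, random_state=rep)
    lr = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
    tree = DecisionTreeClassifier(min_samples_leaf=20, random_state=rep).fit(X_tr, y_tr)
    aucs["logistic"].append(roc_auc_score(y_va, lr.predict_proba(X_va)[:, 1]))
    aucs["tree"].append(roc_auc_score(y_va, tree.predict_proba(X_va)[:, 1]))

for name, vals in aucs.items():
    print(f"{name}: mean ROC AUC = {np.mean(vals):.3f}")
```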

10.
Count data are pervasive in many areas of risk analysis; deaths, adverse health outcomes, infrastructure system failures, and traffic accidents are all recorded as count events, for example. Risk analysts often wish to estimate the probability distribution for the number of discrete events as part of doing a risk assessment. Traditional count data regression models of the type often used in risk assessment for this problem suffer from limitations due to the assumed variance structure. A more flexible model based on the Conway-Maxwell Poisson (COM-Poisson) distribution was recently proposed, a model that has the potential to overcome the limitations of the traditional model. However, the statistical performance of this new model has not yet been fully characterized. This article assesses the performance of a maximum likelihood estimation method for fitting the COM-Poisson generalized linear model (GLM). The objectives of this article are to (1) characterize the parameter estimation accuracy of the MLE implementation of the COM-Poisson GLM, and (2) estimate the prediction accuracy of the COM-Poisson GLM using simulated data sets. The results of the study indicate that the COM-Poisson GLM is flexible enough to model under-, equi-, and overdispersed data sets with different sample mean values. The results also show that the COM-Poisson GLM yields accurate parameter estimates. The COM-Poisson GLM provides a promising and flexible approach for performing count data regression.
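A compact sketch of the MLE machinery being assessed: the COM-Poisson pmf with a truncated normalizing constant, a log link for the rate parameter, and a generic numerical optimizer for the joint likelihood. The data are simulated and this is not the article's implementation.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import gammaln

def com_poisson_logpmf(y, lam, nu, max_count=200):
    """log pmf of COM-Poisson: lam**y / (y!)**nu / Z(lam, nu), with Z truncated at max_count."""
    j = np.arange(max_count + 1)
    log_terms = j * np.log(lam)[:, None] - nu * gammaln(j + 1)   # (n, max_count + 1)
    log_z = np.logaddexp.reduce(log_terms, axis=1)
    return y * np.log(lam) - nu * gammaln(y + 1) - log_z

rng = np.random.default_rng(4)
n = 500
X = np.column_stack([np.ones(n), rng.normal(size=n)])
y = rng.poisson(np.exp(X @ np.array([1.0, 0.5])))     # equidispersed data for simplicity

def neg_loglik(params):
    beta, log_nu = params[:-1], params[-1]
    lam = np.exp(X @ beta)                            # log link, as in a GLM
    return -np.sum(com_poisson_logpmf(y, lam, np.exp(log_nu)))

fit = minimize(neg_loglik, x0=np.zeros(3), method="Nelder-Mead")
beta_hat, nu_hat = fit.x[:2], np.exp(fit.x[2])
print("beta:", np.round(beta_hat, 3), "nu:", round(nu_hat, 3))
```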

11.
Choice models and neural networks are two approaches used in modeling selection decisions. Defining model performance as the out-of-sample prediction power of a model, we test two hypotheses: (i) choice models and neural network models are equal in performance, and (ii) hybrid models consisting of a combination of choice and neural network models perform better than each stand-alone model. We perform statistical tests for two classes of linear and nonlinear hybrid models and compute the empirical integrated rank (EIR) indices to compare the overall performances of the models. We test the above hypotheses using data on various brand and store choices for three consumer products. Extensive jackknifing and out-of-sample tests for four different model specifications are applied to increase the external validity of the results. Our results show that using neural networks has a higher probability of resulting in better performance. Our findings also indicate that hybrid models outperform stand-alone models, in that using hybrid models guarantees overall results equal to or better than those of the two stand-alone models. The improvement is particularly significant in cases where neither of the two stand-alone models is very accurate in prediction, indicating that the proposed hybrid models may capture aspects of predictive accuracy that neither stand-alone model is capable of on its own. Our results are particularly important in brand management and customer relationship management, indicating that multiple technologies, and mixtures of technologies, may yield more accurate and reliable outcomes than individual ones.

12.
Critical infrastructures provide society with services essential to its functioning, and extensive disruptions give rise to large societal consequences. Risk and vulnerability analyses of critical infrastructures generally focus narrowly on the infrastructure of interest and describe the consequences as nonsupplied commodities or the cost of unsupplied commodities; they rarely holistically consider the larger impact with respect to higher-order consequences for the society. From a societal perspective, this narrow focus may lead to severe underestimation of the negative effects of infrastructure disruptions. To explore this theory, an integrated modeling approach, combining models of critical infrastructures and economic input–output models, is proposed and applied in a case study. In the case study, a representative model of the Swedish power transmission system and a regionalized economic input–output model are utilized. This enables exploration of how a narrow infrastructure or a more holistic societal consequence perspective affects vulnerability-related mitigation decisions regarding critical infrastructures. Two decision contexts related to prioritization of different vulnerability-reducing measures are considered—identifying critical components and adding system components to increase robustness. It is concluded that higher-order societal consequences due to power supply disruptions can be up to twice as large as first-order consequences, which in turn has a significant effect on the identification of which critical components are to be protected or strengthened and a smaller effect on the ranking of improvement measures in terms of adding system components to increase system redundancy.
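The input–output side of such a coupling can be illustrated with a standard Leontief-type inoperability calculation: given an interdependency matrix A* and a vector of direct inoperability c* caused by the power disruption, total (direct plus higher-order) inoperability is q = (I − A*)^(−1) c*. The three-sector numbers below are invented for illustration and are not the Swedish case-study data.

```python
import numpy as np

# Hypothetical interdependency matrix A* for sectors [power, manufacturing, services]:
# entry (i, j) = inoperability passed to sector i per unit inoperability of sector j.
A_star = np.array([
    [0.00, 0.05, 0.02],
    [0.30, 0.00, 0.10],
    [0.25, 0.15, 0.00],
])
c_star = np.array([0.40, 0.00, 0.00])            # direct impact: 40% loss of power supply only

q = np.linalg.solve(np.eye(3) - A_star, c_star)  # total inoperability, all orders of effects
print("sector inoperability:", np.round(q, 3))
print(f"higher-order amplification factor: {q.sum() / c_star.sum():.2f}")
```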

13.
Resilient infrastructure systems are essential for cities to withstand and rapidly recover from natural and human-induced disasters, yet electric power, transportation, and other infrastructures are highly vulnerable and interdependent. New approaches for characterizing the resilience of sets of infrastructure systems are urgently needed, at community and regional scales. This article develops a practical approach for analysts to characterize a community's infrastructure vulnerability and resilience in disasters. It addresses key challenges of incomplete incentives, partial information, and few opportunities for learning. The approach is demonstrated for Metro Vancouver, Canada, in the context of earthquake and flood risk. The methodological approach is practical and focuses on potential disruptions to infrastructure services. In spirit, it resembles probability elicitation with multiple experts; however, it elicits disruption and recovery over time, rather than uncertainties regarding system function at a given point in time. It develops information on regional infrastructure risk and engages infrastructure organizations in the process. Information sharing, iteration, and learning among the participants provide the basis for more informed estimates of infrastructure system robustness and recovery that incorporate the potential for interdependent failures after an extreme event. Results demonstrate the vital importance of cross-sectoral communication to develop shared understanding of regional infrastructure disruption in disasters. For Vancouver, specific results indicate that in a hypothetical M7.3 earthquake, virtually all infrastructures would suffer severe disruption of service in the immediate aftermath, with many experiencing moderate disruption two weeks afterward. Electric power, land transportation, and telecommunications are identified as core infrastructure sectors.

14.
A simple and useful characterization of many predictive models is in terms of model structure and model parameters. Accordingly, uncertainties in model predictions arise from uncertainties in the values assumed by the model parameters (parameter uncertainty) and the uncertainties and errors associated with the structure of the model (model uncertainty). When assessing uncertainty, one is interested in identifying, at some level of confidence, the range of possible, and then probable, values of the unknown of interest. All sources of uncertainty and variability need to be considered. Although parameter uncertainty assessment has been extensively discussed in the literature, model uncertainty is a relatively new topic of discussion in the scientific community, despite often being the major contributor to the overall uncertainty. This article describes a Bayesian methodology for the assessment of model uncertainties, where models are treated as sources of information on the unknown of interest. The general framework is then specialized for the case where models provide point estimates about a single-valued unknown and where information about the models is available in the form of homogeneous and nonhomogeneous performance data (pairs of experimental observations and model predictions). Several example applications for physical models used in fire risk analysis are also provided.
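One way to read the "models as information sources" idea is sketched below: each model's additive error is characterized from past performance data (observation–prediction pairs), its new point estimate is bias-corrected, and the estimates are pooled by precision under a vague prior. The Gaussian error assumption, the model names, and all numbers are illustrative, not the article's formulation.

```python
import numpy as np

# Past performance data for two hypothetical fire models: (observed, predicted) pairs.
perf = {
    "zone_model":  np.array([(310, 290), (450, 400), (520, 500), (280, 260)], float),
    "field_model": np.array([(310, 305), (450, 460), (520, 540), (280, 275)], float),
}
new_prediction = {"zone_model": 430.0, "field_model": 465.0}   # hot-gas temperature, deg C

post_mean_num, post_prec = 0.0, 0.0
for name, pairs in perf.items():
    errors = pairs[:, 0] - pairs[:, 1]            # additive error from performance data
    bias, sd = errors.mean(), errors.std(ddof=1)
    mean_i = new_prediction[name] + bias          # bias-corrected estimate of the unknown
    post_prec += 1.0 / sd**2
    post_mean_num += mean_i / sd**2

theta_mean = post_mean_num / post_prec            # posterior mean under a vague prior
theta_sd = post_prec ** -0.5
print(f"combined estimate: {theta_mean:.0f} +/- {theta_sd:.0f} deg C")
```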

15.
Assessing within-batch and between-batch variability is of major interest for risk assessors and risk managers in the context of microbiological contamination of food. For example, the ratio between the within-batch variability and the between-batch variability has a large impact on the results of a sampling plan. Here, we designed hierarchical Bayesian models to represent such variability. Compatible priors were built mathematically to obtain sound model comparisons. A numeric criterion is proposed to assess the contamination structure by comparing the ability of the models to replicate grouped data at the batch level using a posterior predictive loss approach. The models were applied to two case studies: contamination by Listeria monocytogenes of pork breast used to produce diced bacon, and contamination by the same microorganism of cold smoked salmon at the end of the process. In the first case study, a contamination structure clearly exists and is located at the batch level, that is, between-batch variability is relatively strong, whereas in the second, a structure also exists but is less marked.

16.
The Monte Carlo (MC) simulation approach is traditionally used in food safety risk assessment to study quantitative microbial risk assessment (QMRA) models. When experimental data are available, performing Bayesian inference is a good alternative approach that allows backward calculation in a stochastic QMRA model to update the experts' knowledge about the microbial dynamics of a given food-borne pathogen. In this article, we propose a complex example in which Bayesian inference is applied to a high-dimensional second-order QMRA model. The case study is a farm-to-fork QMRA model considering the genetic diversity of Bacillus cereus in a cooked, pasteurized, and chilled courgette purée. The experimental data are Bacillus cereus concentrations measured in packages of courgette purées stored under different time–temperature profiles after pasteurization. To perform Bayesian inference, we first built an augmented Bayesian network by linking a second-order QMRA model to the available contamination data. We then ran a Markov chain Monte Carlo (MCMC) algorithm to update all the unknown concentrations and unknown quantities of the augmented model. About 25% of the prior beliefs are strongly updated, leading to a reduction in uncertainty. Interestingly, some updates call parts of the QMRA model into question.

17.
We consider the problem of estimating the probability of detection (POD) of flaws in an industrial steel component. Modeled as an increasing function of the flaw height, the POD characterizes the detection process; it is also involved in the estimation of the flaw size distribution, a key input parameter of physical models describing the behavior of the steel component when subjected to extreme thermodynamic loads. Such models are used to assess the resistance of highly reliable systems whose failures are seldom observed in practice. We develop a Bayesian method to estimate the flaw size distribution and the POD function, using flaw height measures from periodic in-service inspections conducted with an ultrasonic detection device, together with measures from destructive lab experiments. Our approach, based on approximate Bayesian computation (ABC) techniques, is applied to a real data set and compared to maximum likelihood estimation (MLE) and a more classical approach based on Markov chain Monte Carlo (MCMC) techniques. In particular, we show that the parametric model describing the POD as the cumulative distribution function (cdf) of a log-normal distribution, though often used in this context, can be invalidated by the data at hand. We propose an alternative nonparametric model, which assumes no predefined shape, and extend the ABC framework to this setting. Experimental results demonstrate the ability of this method to provide a flexible estimation of the POD function and to describe its uncertainty accurately.
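The ABC idea in this setting can be sketched with a basic rejection sampler: draw POD-curve parameters from the prior, simulate hit/miss inspection outcomes at the recorded flaw heights, and keep the draws whose binned detection rates are close to the observed ones. The lognormal-cdf POD form, the priors, the tolerance, and the data are illustrative only.

```python
import numpy as np
from scipy.stats import lognorm

rng = np.random.default_rng(5)

# Hypothetical inspection data: flaw heights (mm) and whether each flaw was detected.
heights = rng.uniform(0.5, 6.0, size=120)
true_pod = lognorm.cdf(heights, s=0.6, scale=2.0)
detected = rng.random(120) < true_pod

def simulate(mu, sigma):
    """Simulate hit/miss outcomes under a lognormal-cdf POD with parameters (mu, sigma)."""
    return rng.random(heights.size) < lognorm.cdf(heights, s=sigma, scale=np.exp(mu))

def distance(sim):
    # Summary statistic: detection rate in coarse flaw-height bins.
    bins = np.digitize(heights, [1.5, 3.0, 4.5])
    obs_rate = np.array([detected[bins == b].mean() for b in range(4)])
    sim_rate = np.array([sim[bins == b].mean() for b in range(4)])
    return np.abs(obs_rate - sim_rate).max()

accepted = []
for _ in range(20000):
    mu, sigma = rng.normal(0.5, 1.0), rng.uniform(0.1, 2.0)    # priors (illustrative)
    if distance(simulate(mu, sigma)) < 0.10:                    # ABC tolerance
        accepted.append((mu, sigma))

accepted = np.array(accepted)
print(f"accepted {len(accepted)} draws; "
      f"posterior median of the POD scale parameter = {np.exp(np.median(accepted[:, 0])):.2f} mm")
```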

18.
Shahid Suddle. Risk Analysis, 2009, 29(7): 1024–1040
Buildings above roads, railways, and existing buildings are examples of multifunctional urban locations. The construction stage of such buildings is in general extremely complicated, and safety is one of the critical issues during this stage. Because traffic on the infrastructure must continue during the construction of the building above it, falling objects due to construction activities form a major hazard for third parties, i.e., people present on or beneath the infrastructure, such as car drivers and passengers. This article outlines a systematic approach for conducting quantitative risk assessment (QRA) and risk management of falling elements for third parties during the construction stage of a building above infrastructure in multifunctional urban locations. In order to set up a QRA model, quantifiable aspects influencing the risk for third parties were determined. Subsequently, the conditional probabilities of these aspects were estimated from historical data or engineering judgment. This was followed by integrating those conditional probabilities, now used as input parameters for the QRA, into a Bayesian network representing the relations and conditional dependences between the quantified aspects. The outcome of the Bayesian network, the calculation of both the human and financial risk in quantitative terms, is compared with risk acceptance criteria as far as possible. Furthermore, the effects of some safety measures were analyzed and optimized in relation to decision making. Finally, the possibility of integrating safety measures into the functional and structural design of the building above the infrastructure is explored.
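The structure of such a quantification can be illustrated with a stripped-down event chain, a stand-in for the article's Bayesian network with invented conditional probabilities: an object is dropped during a lift, lands over the open infrastructure, penetrates any protective deck, strikes an occupied vehicle, and causes a fatality.

```python
import numpy as np

rng = np.random.default_rng(6)

# Illustrative conditional probabilities per crane lift (not calibrated values).
p_drop         = 1e-4        # object dropped during a lift
p_over_road    = 0.15        # drop location is above the open infrastructure
p_penetrates   = 0.30        # object penetrates the protective scaffolding deck
p_hits_vehicle = 0.25        # an occupied vehicle is underneath at that moment
p_fatal        = 0.20        # a struck occupant is killed

lifts_per_project = 50_000
n_sims = 100_000

p_fatal_per_lift = p_drop * p_over_road * p_penetrates * p_hits_vehicle * p_fatal
fatalities = rng.binomial(lifts_per_project, p_fatal_per_lift, size=n_sims)

print(f"expected third-party fatalities per project: {fatalities.mean():.4f}")
print(f"P(at least one fatality): {(fatalities > 0).mean():.4f}")
```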

19.
In this paper, we compare the forecasting accuracy of univariate and multivariate linear models that incorporate fundamental accounting variables (e.g., inventory, accounts receivable, and so on) with the forecast accuracy of neural network models. Unique to this study is the focus of our comparison on the multivariate models, to examine whether neural network models incorporating the fundamental accounting variables can generate more accurate forecasts of future earnings than models assuming a linear combination of these same variables. We investigate four types of models: univariate-linear, multivariate-linear, univariate-neural network, and multivariate-neural network, using a sample of 283 firms spanning 41 industries. This study shows that the application of the neural network approach incorporating fundamental accounting variables results in forecasts that are more accurate than those of linear forecasting models. The results also reveal limitations in the forecasting capacity of investors in the security market when compared to neural network models.

20.
We study decision problems in which the consequences of the various alternative actions depend on states determined by a generative mechanism representing some natural or social phenomenon. Model uncertainty arises because decision makers may not know this mechanism. Two types of uncertainty result: state uncertainty within models and model uncertainty across them. We discuss some two-stage static decision criteria proposed in the literature that address state uncertainty in the first stage and model uncertainty in the second (by considering subjective probabilities over models). We consider two approaches to the Ellsberg-type phenomena characteristic of such decision problems: a Bayesian approach based on the distinction between subjective attitudes toward the two kinds of uncertainty, and a non-Bayesian approach that permits multiple subjective probabilities. Several applications are used to illustrate concepts as they are introduced.
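A numerical sketch of a two-stage criterion of the kind discussed (a smooth-ambiguity form: expected utility within each model, then a concave transformation averaged with a subjective probability across models); the payoffs, candidate models, and ambiguity-aversion parameter are invented for illustration.

```python
import numpy as np

# Two alternative actions, two states, two candidate models of the state-generating mechanism.
payoffs = np.array([[100.0, 20.0],     # action A: payoff in state 1, state 2
                    [ 60.0, 55.0]])    # action B
model_state_probs = np.array([[0.8, 0.2],    # model 1: P(state 1), P(state 2)
                              [0.3, 0.7]])   # model 2
model_prior = np.array([0.5, 0.5])           # subjective probability over models

def utility(x):
    return np.log(x)                          # within-model risk attitude

def phi(v, theta=3.0):
    return -np.exp(-theta * v)                # concave phi => aversion to model uncertainty

def two_stage_value(action_payoffs):
    within = model_state_probs @ utility(action_payoffs)   # stage 1: EU under each model
    return model_prior @ phi(within)                        # stage 2: average phi(EU) across models

values = [two_stage_value(p) for p in payoffs]
print("action values:", np.round(values, 6), "-> choose", "A" if values[0] > values[1] else "B")
```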
