Similar Documents
20 similar documents found (search time: 93 ms)
1.
This article models flood occurrence probabilistically and assesses the associated risk. It incorporates atmospheric parameters to forecast rainfall in an area. This measure of precipitation, together with river and ground parameters, serves as input to the model to predict runoff and subsequently the inundation depth of an area. The inundation depth acts as a guide for predicting flood proneness and the associated hazard. The vulnerability owing to flood has been analyzed as social vulnerability (V_S), vulnerability to property (V_P), and vulnerability to the location in terms of awareness (V_A). The associated risk has been estimated for each area. The distribution of risk values can be used to classify every area into one of six risk zones: very low risk, low risk, moderately low risk, medium risk, high risk, and very high risk. Prioritization regarding preparedness, evacuation planning, or distribution of relief items should be guided by the range on the risk scale within which the area under study falls. The flood risk assessment framework has been tested on a real-life case study, and the flood risk indices for each of the municipalities in the area under study have been calculated. The risk indices, and hence the flood risk zone under which a municipality is expected to lie, would alter every day. The appropriate authorities can then plan ahead in terms of preparedness to combat the impending flood situation in the most critical and vulnerable areas.
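To make the zoning idea concrete, the sketch below (Python) combines a hazard score derived from a predicted inundation depth with the three vulnerability components into a single risk index and bins it into the six zones named above. The depth normalization, the weights, and the zone cut points are invented placeholders, not values from the article.

```python
import numpy as np

# Hypothetical illustration: hazard from inundation depth plus a weighted
# combination of the three vulnerability components, binned into six zones.
# Weights, depth scaling, and cut points are placeholders, not the paper's values.

ZONES = ["very low", "low", "moderately low", "medium", "high", "very high"]

def risk_index(depth_m, v_s, v_p, v_a, weights=(0.4, 0.3, 0.3)):
    hazard = min(depth_m / 3.0, 1.0)            # normalize depth to [0, 1]
    w_s, w_p, w_a = weights
    vulnerability = w_s * v_s + w_p * v_p + w_a * v_a
    return hazard * vulnerability               # index in [0, 1]

def risk_zone(index, cuts=(0.05, 0.15, 0.30, 0.50, 0.75)):
    return ZONES[int(np.searchsorted(cuts, index))]

idx = risk_index(depth_m=1.2, v_s=0.6, v_p=0.7, v_a=0.4)
print(f"risk index = {idx:.2f}, zone = {risk_zone(idx)}")
```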

2.
In recent years, there have been growing concerns regarding risks in the federal information technology (IT) supply chains that protect cyber infrastructure in the United States. A critical need faced by decisionmakers is to prioritize investment in security mitigations to maximally reduce risk in IT supply chains. We extend existing stochastic expected budgeted maximum multiple coverage models, which identify solutions that are "good" on average but may be unacceptable in certain circumstances. We propose three alternative models that use different robustness methods to hedge against worst-case risks: models that maximize the worst-case coverage, minimize the worst-case regret, and maximize the average coverage in the (1 − α) worst cases (conditional value at risk). We illustrate the solutions to the robust methods with a case study and discuss the insights their solutions provide into mitigation selection compared to an expected-value maximizer. Our study provides valuable tools and insights for decisionmakers with different risk attitudes to manage cybersecurity risks under uncertainty.
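The toy sketch below contrasts an expected-coverage maximizer with a worst-case (max-min) selection on an invented budgeted-coverage instance. The scenario coverages, costs, and budget are hypothetical, and the paper's models are solved as optimization programs rather than by enumeration.

```python
from itertools import combinations

# Invented toy instance: each mitigation has a coverage value under three
# equally likely risk scenarios and a cost. The numbers are placeholders.
coverage = {                     # mitigation -> coverage per scenario
    "A": (9, 9, 0),
    "B": (5, 5, 5),
    "C": (4, 3, 6),
    "D": (2, 2, 7),
}
cost = {"A": 4, "B": 4, "C": 3, "D": 3}
BUDGET = 7

def per_scenario_coverage(selection):
    return [sum(coverage[m][s] for m in selection) for s in range(3)]

best_expected = (-1.0, ())
best_worst_case = (-1.0, ())
for r in range(len(coverage) + 1):
    for selection in combinations(coverage, r):
        if sum(cost[m] for m in selection) > BUDGET:
            continue
        totals = per_scenario_coverage(selection)
        expected, worst = sum(totals) / 3.0, min(totals)
        best_expected = max(best_expected, (expected, selection))
        best_worst_case = max(best_worst_case, (worst, selection))

print("expected-value maximizer :", best_expected)
print("max-min (worst-case) pick:", best_worst_case)
```

On this instance the two criteria pick different portfolios, which is the behavior the robust models are designed to expose.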

3.
Hormesis refers to a nonmonotonic (biphasic) dose–response relationship in toxicology, environmental science, and related fields. In the presence of hormesis, a low dose of a toxic agent may carry a lower risk than the control dose, while the risk increases at high doses. When the sample size is small due to practical, logistic, and ethical considerations, a parametric model may provide an efficient approach to hypothesis testing, at the cost of adopting a strong assumption that is not guaranteed to be true. In this article, we first consider alternative parameterizations based on the traditional three-parameter logistic regression. The new parameterizations attempt to provide robustness to model misspecification by allowing an unspecified dose–response relationship between the control dose and the first nonzero experimental dose. We then consider experimental designs, including the uniform design (the same sample size per dose group) and the c-optimal design (which minimizes the standard error of the estimator for a parameter of interest). Our simulation studies showed that (1) the c-optimal design under the traditional three-parameter logistic regression does not help reduce the Type I error rate inflated by model misspecification, (2) it is helpful under the new parameterization with three parameters (the Type I error rate is close to a fixed significance level), and (3) the new parameterization with four parameters combined with the c-optimal design does not reduce statistical power much while preserving the Type I error rate at a fixed significance level.
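As a hedged illustration of the c-optimality criterion cᵀI(θ)⁻¹c (the asymptotic variance of the estimator of cᵀθ), the sketch below evaluates it for a uniform allocation and an alternative allocation under a simple two-parameter logistic dose-response model. That model is a stand-in: the paper's three- and four-parameter parameterizations would replace it, and the doses and group sizes are invented.

```python
import numpy as np

# Hedged illustration of c-optimality. Stand-in model:
#   p(d) = 1 / (1 + exp(-(a + b*d)))  (two parameters, not the paper's models).
# The c-criterion c' I(theta)^{-1} c is the asymptotic variance of the
# estimator of c' theta; a c-optimal design minimizes it.

def fisher_information(doses, n_per_dose, a, b):
    info = np.zeros((2, 2))
    for d, n in zip(doses, n_per_dose):
        p = 1.0 / (1.0 + np.exp(-(a + b * d)))
        grad = np.array([1.0, d]) * p * (1.0 - p)     # dp/d(a, b)
        info += n * np.outer(grad, grad) / (p * (1.0 - p))
    return info

def c_criterion(doses, n_per_dose, a, b, c):
    return c @ np.linalg.inv(fisher_information(doses, n_per_dose, a, b)) @ c

doses = np.array([0.0, 1.0, 2.0, 4.0])
c_vec = np.array([0.0, 1.0])              # interest in the slope parameter b
uniform = [25, 25, 25, 25]                # uniform design: equal group sizes
unequal = [40, 10, 10, 40]                # alternative allocation, same total

for name, n in [("uniform", uniform), ("unequal", unequal)]:
    print(name, round(c_criterion(doses, n, a=-2.0, b=1.0, c=c_vec), 4))
```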

4.
The increasing development of autonomous vehicles (AVs) is shaping the future of transportation. Beyond the potential benefits in terms of safety, efficiency, and comfort, the potential risks of novel driving technologies also need to be addressed. In this article, we explore risk perceptions toward connected and autonomous driving in comparison to conventional driving. To gain a deeper understanding of individual risk perceptions, we adopted a two-step empirical procedure. First, focus groups (N = 17) were carried out to identify relevant risk factors for autonomous and connected driving. Second, a questionnaire was developed, which was answered by 516 German participants. In the questionnaire, three driving technologies (connected, autonomous, conventional) were evaluated via a semantic differential (a rating scale used to identify the connotative meaning of technologies), and participants rated perceived risk levels (for data, traffic environment, vehicle, and passenger) as well as perceived benefits and barriers of connected/autonomous driving. Since previous experience with automated driver assistance functions can influence these evaluations, three experience groups were formed, and the effect of experience on benefit and barrier perceptions was also analyzed. Risk perceptions were significantly lower for conventional driving than for connected/autonomous driving. With increasing experience, risk perception of the novel driving technologies decreases, with one exception: the perceived risk in handling data is not influenced by experience. The findings contribute to an understanding of risk perception in autonomous driving, which helps foster a successful market introduction of AVs and the development of public information strategies.
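As one simple way to test for group differences of the kind reported here, the sketch below runs a one-way ANOVA on invented risk-perception ratings for three experience groups; the data, scale, and choice of test are illustrative only and not taken from the study.

```python
import numpy as np
from scipy import stats

# Invented ratings of perceived risk for autonomous driving in three
# experience groups (no experience / some experience / frequent users of
# driver assistance functions), on a hypothetical 6-point scale.
rng = np.random.default_rng(1)
none_ = rng.normal(4.2, 0.8, 60)
some = rng.normal(3.8, 0.8, 60)
frequent = rng.normal(3.4, 0.8, 60)

# One-way ANOVA: does mean risk perception differ across experience groups?
f_stat, p_value = stats.f_oneway(none_, some, frequent)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")
```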

6.
For dose–response analysis in quantitative microbial risk assessment (QMRA), the exact beta-Poisson model is a two-parameter mechanistic dose–response model with parameters α and β, which involves the Kummer confluent hypergeometric function. Evaluation of a hypergeometric function is a computational challenge. Denoting P_I(d) as the probability of infection at a given mean dose d, the widely used dose–response model P_I(d) = 1 − (1 + d/β)^(−α) is an approximate formula for the exact beta-Poisson model. Notwithstanding the required conditions α ≪ β and β ≫ 1, issues related to the validity and approximation accuracy of this approximate formula have remained largely ignored in practice, partly because these conditions are too general to provide clear guidance. Consequently, this study proposes a probability measure Pr(0 < r < 1 | α̂, β̂) as a validity measure (r is a random variable that follows a gamma distribution; α̂ and β̂ are the maximum likelihood estimates of α and β in the approximate model) and constraint conditions on α̂ and β̂ as a rule of thumb to ensure an accurate approximation (e.g., Pr(0 < r < 1 | α̂, β̂) > 0.99). This validity measure and rule of thumb were validated by application to all the completed beta-Poisson models (related to 85 data sets) from the QMRA community portal (QMRA Wiki). The results showed that the higher the probability Pr(0 < r < 1 | α̂, β̂), the better the approximation. The results further showed that, among the 85 models examined, 68 were identified as valid applications of the approximate model, and these all had a near-perfect match to the corresponding exact beta-Poisson dose–response curve.
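The sketch below evaluates the exact beta-Poisson model through the Kummer function, the widely used approximate formula, and the validity measure. The gamma parameterization chosen for r (shape α̂, rate β̂, consistent with the usual gamma approximation to the beta-distributed single-hit probability) and the parameter values are assumptions for illustration; the paper should be consulted for its exact definitions.

```python
import numpy as np
from scipy.special import hyp1f1
from scipy.stats import gamma

def exact_beta_poisson(d, alpha, beta):
    # Exact model: P_I(d) = 1 - 1F1(alpha, alpha + beta, -d)
    return 1.0 - hyp1f1(alpha, alpha + beta, -d)

def approx_beta_poisson(d, alpha, beta):
    # Approximate model: P_I(d) = 1 - (1 + d/beta)^(-alpha)
    return 1.0 - (1.0 + d / beta) ** (-alpha)

def validity_measure(alpha, beta):
    # Pr(0 < r < 1) assuming r ~ Gamma(shape=alpha, rate=beta)
    return gamma.cdf(1.0, a=alpha, scale=1.0 / beta)

alpha_hat, beta_hat = 0.145, 7.59          # illustrative parameter values
print("Pr(0 < r < 1) =", round(validity_measure(alpha_hat, beta_hat), 4))
for d in np.logspace(-1, 2, 4):            # doses 0.1, 1, 10, 100
    print(f"d = {d:7.1f}  exact = {exact_beta_poisson(d, alpha_hat, beta_hat):.4f}"
          f"  approx = {approx_beta_poisson(d, alpha_hat, beta_hat):.4f}")
```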

7.
Quantitative microbial risk assessment (QMRA) is widely accepted for characterizing the microbial risks associated with food, water, and wastewater. Single-hit dose–response models are the most commonly used dose–response models in QMRA. Denoting P_I(d) as the probability of infection at a given mean dose d, a three-parameter generalized QMRA beta-Poisson dose–response model is proposed in which the minimum number of organisms required to cause infection, K_min, is not fixed but is a random variable following a geometric distribution. The single-hit beta-Poisson model is a special case of the generalized model with K_min = 1 (i.e., with the geometric parameter equal to 1). The generalized beta-Poisson model is based on a conceptual model with greater detail in the dose–response mechanism. Since a maximum likelihood solution is not easily available, a likelihood-free approximate Bayesian computation (ABC) algorithm is employed for parameter estimation. By fitting the generalized model to four experimental data sets from the literature, this study reveals that the posterior median estimates fall short of meeting the condition required by the single-hit assumption (geometric parameter equal to 1). However, for three of the four data sets, the generalized model could not achieve an improvement in goodness of fit. These combined results imply that, at least in some cases, a single-hit assumption for characterizing the dose–response process may not be appropriate, but that the more complex models may be difficult to support, especially if the sample size is small. The three-parameter generalized model makes it possible to investigate the mechanism of a dose–response process in greater detail than is possible under a single-hit model.
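A minimal rejection-ABC sketch of the likelihood-free fitting idea is shown below. For brevity the simulator is the two-parameter approximate beta-Poisson model rather than the paper's generalized three-parameter model, and the dose-infection data, priors, and tolerance are invented; fitting the generalized model would simply add a prior over the extra parameter and swap the simulator.

```python
import numpy as np

rng = np.random.default_rng(7)

doses = np.array([30.0, 100.0, 300.0, 1000.0])
n_subjects = np.array([20, 20, 20, 20])
observed = np.array([3, 8, 14, 19])            # invented infection counts

def p_infect(d, alpha, beta):
    # stand-in dose-response model (approximate beta-Poisson)
    return 1.0 - (1.0 + d / beta) ** (-alpha)

N_DRAWS = 100_000
TOLERANCE = 3.0                                # distance threshold (tuning choice)
accepted = []
for _ in range(N_DRAWS):
    alpha = rng.uniform(0.01, 5.0)             # simple uniform priors
    beta = rng.uniform(1.0, 5000.0)
    simulated = rng.binomial(n_subjects, p_infect(doses, alpha, beta))
    if np.sqrt(np.sum((simulated - observed) ** 2)) <= TOLERANCE:
        accepted.append((alpha, beta))

accepted = np.array(accepted)
print("acceptance rate:", len(accepted) / N_DRAWS)
print("posterior medians (alpha, beta):", np.median(accepted, axis=0))
```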

9.
Quantitative models support investigators in several risk analysis applications. The calculation of sensitivity measures is an integral part of this analysis. However, it becomes a computationally challenging task, especially when the number of model inputs is large and the model output is spread over orders of magnitude. We introduce and test a new method for the estimation of global sensitivity measures. The new method relies on the intuition of exploiting the empirical cumulative distribution function of the simulator output. This choice allows the estimators of global sensitivity measures to be based on numbers between 0 and 1, thus fighting the curse of sparsity. For density-based sensitivity measures, we devise an approach based on moving averages that bypasses kernel-density estimation. We compare the new method to approaches for calculating popular risk analysis global sensitivity measures, as well as to approaches for computing dependence measures gathering increasing interest in the machine learning and statistics literature (the Hilbert–Schmidt independence criterion and distance covariance). The comparison also involves the number of operations needed to obtain the estimates, an aspect often neglected in global sensitivity studies. We let the estimators undergo several tests, first with the wing-weight test case, then with a computationally challenging code with up to k = 30,000 inputs, and finally with the traditional Level E benchmark code.
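The sketch below is a rough version of the core idea: the output is passed through its empirical CDF so that all quantities lie in [0, 1] (where the unconditional density becomes uniform), and the conditional densities are then estimated with a plain histogram, a cruder device than the authors' moving averages. The partition and bin counts, the toy model, and the use of Borgonovo's delta as the target measure are illustrative choices.

```python
import numpy as np

def delta_ecdf(x, y, n_classes=20, n_bins=25):
    # Map y to [0, 1] through its empirical CDF; the unconditional density of
    # the transformed output is then uniform, and only the conditional
    # densities (one per class of x) need to be estimated.
    n = len(y)
    u = (np.argsort(np.argsort(y)) + 0.5) / n
    order = np.argsort(x)
    delta = 0.0
    for idx in np.array_split(order, n_classes):        # classes of x values
        dens, edges = np.histogram(u[idx], bins=n_bins, range=(0.0, 1.0),
                                   density=True)
        l1 = np.sum(np.abs(dens - 1.0) * np.diff(edges))  # L1 distance to uniform
        delta += 0.5 * (len(idx) / n) * l1
    return delta

# Toy model: y depends strongly on x1 and not at all on x2.
rng = np.random.default_rng(0)
x1, x2 = rng.uniform(size=(2, 20_000))
y = np.sin(2 * np.pi * x1) + 0.1 * rng.normal(size=20_000)
print("delta(x1) ~", round(delta_ecdf(x1, y), 3))
print("delta(x2) ~", round(delta_ecdf(x2, y), 3))
```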

10.
The error estimate of Borgonovo's moment-independent importance measure is considered, and it is shown that the computational burden of the measure stems mainly from the probability density function (PDF) estimate, because PDF estimation is an ill-posed problem with a slow convergence rate. This suggests computing the index by other means. To avoid the PDF estimate, the index, which is defined through PDFs, is first approximately represented in terms of the cumulative distribution function (CDF). The CDF estimate is well posed, and its convergence rate is always faster than that of the PDF estimate. From this representation, a stable approach with an adaptive procedure is proposed for computing the index. Since a small-probability multidimensional integral needs to be computed in this procedure, a computational strategy named asymptotic space integration is introduced to reduce the high-dimensional integral to a one-dimensional integral, which can then be computed by adaptive numerical integration in one dimension with an improved convergence rate. Numerical error analysis of several examples shows that the proposed method is an effective approach to computing this uncertainty importance measure.
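One standard identity showing how CDFs can substitute for PDFs in this index is given below; the paper's approximate CDF representation, adaptive procedure, and asymptotic space integration go further than this, so the display is only a pointer to the underlying idea.

```latex
% Standard PDF-based definition of Borgonovo's index and its CDF form, where
% y_1 < ... < y_K are the points at which f_Y - f_{Y|X_i} changes sign
% (y_0 = -infinity, y_{K+1} = +infinity):
\[
  \delta_i \;=\; \tfrac{1}{2}\,\mathbb{E}_{X_i}\!\left[\int \bigl|f_Y(y) - f_{Y\mid X_i}(y)\bigr|\,\mathrm{d}y\right],
  \qquad
  \int \bigl|f_Y - f_{Y\mid X_i}\bigr|\,\mathrm{d}y
  \;=\; \sum_{k=0}^{K} \Bigl|\bigl[F_Y(y) - F_{Y\mid X_i}(y)\bigr]_{y_k}^{y_{k+1}}\Bigr| .
\]
```

Between consecutive sign-change points the CDF difference is monotone, so the inner integral reduces to CDF evaluations once those points are located, which is the sense in which a CDF-based computation can avoid density estimation.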

17.
Microbiological food safety is an important economic and health issue in the context of globalization and presents food business operators with new challenges in providing safe foods. The hazard analysis and critical control point (HACCP) approach involves identifying the main steps in food processing and the physical and chemical parameters that have an impact on the safety of foods. In the risk-based approach, as defined in the Codex Alimentarius, controlling these parameters so that the final products meet a food safety objective (FSO), fixed by the competent authorities, is a major challenge and of great interest to food business operators. Process risk models, developed within the quantitative microbiological risk assessment framework, provide useful tools in this respect. We propose a methodology, called multivariate factor mapping (MFM), for establishing a link between process parameters and compliance with an FSO. For a stochastic and dynamic process risk model of contamination in soft cheese made from pasteurized milk, with many uncertain inputs, multivariate sensitivity analysis and MFM are combined to (i) identify the critical control points (CCPs) throughout the food chain and (ii) compute the critical limits of the most influential process parameters, located at the CCPs, with regard to the specific process implemented in the model. Due to certain forms of interaction among parameters, the results show some new possibilities for the management of microbiological hazards when an FSO is specified.
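The following Monte Carlo sketch illustrates factor mapping against an FSO with a deliberately simplified growth model; the parameter distributions, growth kinetics, and FSO value are invented and do not reproduce the paper's soft-cheese model or its multivariate sensitivity analysis.

```python
import numpy as np

# Hypothetical factor-mapping sketch: sample uncertain process parameters,
# push them through a toy log10-growth model, and compare the distribution of
# each input between FSO-compliant and non-compliant iterations.
rng = np.random.default_rng(42)
n = 50_000

initial_log = rng.normal(-1.0, 0.5, n)          # log10 CFU/g after processing
storage_temp = rng.uniform(2.0, 12.0, n)        # deg C, consumer storage
storage_days = rng.uniform(5.0, 30.0, n)

growth_rate = 0.02 * np.maximum(storage_temp - 4.0, 0.0)   # log10/day, toy model
final_log = initial_log + growth_rate * storage_days

FSO = 2.0                                        # log10 CFU/g at consumption (illustrative)
fails = final_log > FSO
print(f"fraction of iterations exceeding the FSO: {fails.mean():.3f}")

# Factor mapping: a large shift of an input between failing and complying
# iterations flags that input as a candidate critical control point.
for name, values in [("initial contamination", initial_log),
                     ("storage temperature", storage_temp),
                     ("storage time", storage_days)]:
    shift = values[fails].mean() - values[~fails].mean()
    print(f"{name:22s} mean shift (fail - comply): {shift:+.2f}")
```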
