Similar literature (20 results)
1.
This article models flood occurrence probabilistically and assesses the associated risk. It incorporates atmospheric parameters to forecast rainfall in an area. This measure of precipitation, together with river and ground parameters, serves as input to the model to predict runoff and, subsequently, the inundation depth of an area. The inundation depth acts as a guide for predicting flood proneness and the associated hazard. Vulnerability to flood is analyzed as social vulnerability (V_S), vulnerability to property (V_P), and vulnerability of the location in terms of awareness (V_A). The associated risk is estimated for each area. The distribution of risk values can be used to classify every area into one of six risk zones—namely, very low risk, low risk, moderately low risk, medium risk, high risk, and very high risk. Prioritization regarding preparedness, evacuation planning, or distribution of relief items should be guided by the range on the risk scale within which the area under study falls. The flood risk assessment model framework has been tested on a real-life case study, and flood risk indices have been calculated for each municipality in the study area. The risk indices, and hence the flood risk zone in which a municipality is expected to lie, would alter every day. The appropriate authorities can then plan ahead in terms of preparedness to combat the impending flood situation in the most critical and vulnerable areas.
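As an illustration of the last step of such a framework, the sketch below maps a normalized risk index to one of the six zones named in the abstract; the thresholds and the [0, 1] index scale are assumptions for illustration, not values from the article.

```python
# A minimal sketch (assumed thresholds and index scale, not from the article):
# classify an area's flood risk index into one of the six zones named above.

def classify_flood_risk(risk_index, thresholds=(0.1, 0.25, 0.4, 0.6, 0.8)):
    """Map a normalized risk index in [0, 1] to one of six risk zones."""
    zones = ["very low", "low", "moderately low", "medium", "high", "very high"]
    for cutoff, zone in zip(thresholds, zones):
        if risk_index < cutoff:
            return zone
    return zones[-1]

print(classify_flood_risk(0.55))  # -> "medium"
```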

2.
3.
In recent years, there have been growing concerns regarding risks in federal information technology (IT) supply chains in the United States that protect cyber infrastructure. A critical need faced by decisionmakers is to prioritize investment in security mitigations to maximally reduce risks in IT supply chains. We extend existing stochastic expected budgeted maximum multiple coverage models, which identify solutions that are "good" on average but may be unacceptable in certain circumstances. We propose three alternative models that consider different robustness methods that hedge against worst-case risks, including models that maximize the worst-case coverage, minimize the worst-case regret, and maximize the average coverage in the (1 − α) worst cases (conditional value at risk). We illustrate the solutions to the robust methods with a case study and discuss the insights their solutions provide into mitigation selection compared to an expected-value maximizer. Our study provides valuable tools and insights for decisionmakers with different risk attitudes to manage cybersecurity risks under uncertainty.
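The conditional-value-at-risk criterion mentioned above can be made concrete with a small sketch: given the scenario coverages achieved by a fixed mitigation portfolio, it computes the average coverage over the (1 − α) worst cases. The coverage values are hypothetical, and the code only evaluates a fixed portfolio rather than optimizing one.

```python
import numpy as np

# A minimal sketch (hypothetical data): average coverage over the (1 - alpha)
# worst-case scenarios for a fixed mitigation portfolio.

def cvar_coverage(coverages, alpha=0.8):
    worst = np.sort(np.asarray(coverages, dtype=float))   # ascending: worst first
    k = max(1, int(np.ceil((1 - alpha) * len(worst))))    # number of worst cases kept
    return worst[:k].mean()

scenario_coverages = [0.62, 0.71, 0.55, 0.90, 0.48, 0.83]
print(cvar_coverage(scenario_coverages, alpha=0.8))       # mean of the ~2 worst scenarios
```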

4.
The increasing development of autonomous vehicles (AVs) influences the future of transportation. Beyond the potential benefits in terms of safety, efficiency, and comfort, the potential risks of novel driving technologies also need to be addressed. In this article, we explore risk perceptions toward connected and autonomous driving in comparison to conventional driving. In order to gain a deeper understanding of individual risk perceptions, we adopted a two-step empirical procedure. First, focus groups (N = 17) were carried out to identify relevant risk factors for autonomous and connected driving. Then a questionnaire was developed, which was answered by 516 German participants. In the questionnaire, three driving technologies (connected, autonomous, conventional) were evaluated via a semantic differential (a rating scale to identify the connotative meaning of technologies). Second, participants rated perceived risk levels (for data, traffic environment, vehicle, and passenger) and perceived benefits and barriers of connected/autonomous driving. Since previous experience with automated functions of driver assistance systems can have an impact on the evaluation, three experience groups were formed, and the effect of experience on benefit and barrier perceptions was also analyzed. Risk perceptions were significantly smaller for conventional driving compared to connected/autonomous driving. With increasing experience, risk perception decreases for novel driving technologies, with one exception: the perceived risk in handling data is not influenced by experience. The findings contribute to an understanding of risk perception in autonomous driving, which helps to foster a successful implementation of AVs on the market and to develop public information strategies.

5.
Quantitative microbial risk assessment (QMRA) is widely accepted for characterizing the microbial risks associated with food, water, and wastewater. Single-hit dose-response models are the most commonly used dose-response models in QMRA. Denoting the probability of infection at a given mean dose d as P_I(d), a three-parameter generalized QMRA beta-Poisson dose-response model is proposed in which the minimum number of organisms required to cause infection, K_min, is not fixed but is a random variable following a geometric distribution. The single-hit beta-Poisson model is a special case of the generalized model with K_min = 1 (which implies a geometric parameter equal to 1). The generalized beta-Poisson model is based on a conceptual model with greater detail in the dose-response mechanism. Since a maximum likelihood solution is not easily available, a likelihood-free approximate Bayesian computation (ABC) algorithm is employed for parameter estimation. By fitting the generalized model to four experimental data sets from the literature, this study reveals that the posterior median estimates fall short of meeting the condition (geometric parameter equal to 1) required by the single-hit assumption. However, three out of four data sets fitted by the generalized models could not achieve an improvement in goodness of fit. These combined results imply that, at least in some cases, a single-hit assumption for characterizing the dose-response process may not be appropriate, but that the more complex models may be difficult to support, especially if the sample size is small. The three-parameter generalized model provides a possibility to investigate the mechanism of a dose-response process in greater detail than is possible under a single-hit model.
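A minimal rejection-ABC sketch is given below to illustrate the likelihood-free estimation idea. It fits the simpler approximate beta-Poisson model, not the authors' three-parameter generalized model, and the dose-response data, priors, and tolerance are all invented.

```python
import numpy as np

# Rejection ABC on the approximate beta-Poisson model P(d) = 1 - (1 + d/beta)^(-alpha).
# Data, priors, and tolerance are made up for illustration only.

rng = np.random.default_rng(0)
doses    = np.array([1e2, 1e3, 1e4, 1e5])
exposed  = np.array([20, 20, 20, 20])
infected = np.array([1, 4, 9, 16])            # hypothetical infection counts

def p_inf(d, alpha, beta):
    return 1.0 - (1.0 + d / beta) ** (-alpha)

accepted = []
for _ in range(50_000):
    alpha = rng.uniform(0.01, 2.0)            # flat priors (assumed)
    beta  = 10 ** rng.uniform(0, 6)
    sim   = rng.binomial(exposed, p_inf(doses, alpha, beta))
    if np.abs(sim - infected).sum() <= 6:      # acceptance tolerance (assumed)
        accepted.append((alpha, beta))

if accepted:
    post = np.array(accepted)
    print(len(post), np.median(post, axis=0))  # posterior sample size and medians
else:
    print("no draws accepted; loosen the tolerance")
```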

6.
For dose–response analysis in quantitative microbial risk assessment (QMRA), the exact beta-Poisson model is a two-parameter mechanistic dose–response model with parameters α and β, which involves the Kummer confluent hypergeometric function. Evaluation of a hypergeometric function is a computational challenge. Denoting the probability of infection at a given mean dose d as P_I(d), the widely used dose–response model P_I(d) = 1 − (1 + d/β)^(−α) is an approximate formula for the exact beta-Poisson model. Notwithstanding the required conditions on α and β, issues related to the validity and approximation accuracy of this approximate formula have remained largely ignored in practice, partly because these conditions are too general to provide clear guidance. Consequently, this study proposes a probability measure Pr(0 < r < 1 | α̂, β̂) as a validity measure (r is a random variable that follows a gamma distribution; α̂ and β̂ are the maximum likelihood estimates of α and β in the approximate model), and constraint conditions for α̂ and β̂ as a rule of thumb to ensure an accurate approximation (e.g., Pr(0 < r < 1 | α̂, β̂) > 0.99). This validity measure and rule of thumb were validated by application to all the completed beta-Poisson models (related to 85 data sets) from the QMRA community portal (QMRA Wiki). The results showed that the higher the probability Pr(0 < r < 1 | α̂, β̂), the better the approximation. The results further showed that, among the 85 models examined, 68 were identified as valid approximate model applications, all of which had a near-perfect match to the corresponding exact beta-Poisson dose–response curve.
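The validity check can be sketched as follows, assuming r follows a gamma distribution with shape α̂ and rate β̂ (this parameterization is an assumption here, as are the numeric estimates).

```python
from scipy.stats import gamma

# Sketch of the validity measure described above, assuming r ~ Gamma(shape=alpha_hat,
# rate=beta_hat); in scipy terms scale = 1 / beta_hat. Estimates are hypothetical.

def approx_beta_poisson(d, alpha, beta):
    """Widely used approximate beta-Poisson dose-response formula."""
    return 1.0 - (1.0 + d / beta) ** (-alpha)

def validity_measure(alpha_hat, beta_hat):
    """Pr(0 < r < 1 | alpha_hat, beta_hat)."""
    return gamma.cdf(1.0, a=alpha_hat, scale=1.0 / beta_hat)

alpha_hat, beta_hat = 0.25, 42.0                 # hypothetical ML estimates
print(approx_beta_poisson(1e3, alpha_hat, beta_hat))
print(validity_measure(alpha_hat, beta_hat))     # close to 1 suggests a safe approximation
```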

7.
The error estimate of Borgonovo's moment-independent importance index δ is considered, and it is shown that the possible computational complexity of δ is mainly due to the probability density function (PDF) estimate, because the PDF estimate is an ill-posed problem whose convergence rate is quite slow. This suggests computing Borgonovo's index by other means. To avoid the PDF estimate, δ, which is defined in terms of the PDF, is first approximately represented by the cumulative distribution function (CDF). The CDF estimate is well posed and its convergence rate is always faster than that of the PDF estimate. From this representation, a stable approach is proposed to compute δ with an adaptive procedure. Since a small-probability multidimensional integral needs to be computed in this procedure, a computational strategy named asymptotic space integration is introduced to reduce the high-dimensional integral to a one-dimensional integral. The small-probability multidimensional integral can then be computed by adaptive numerical integration in one dimension with an improved convergence rate. Comparison of the numerical error analysis on several examples shows that the proposed method is an effective approach to computing the uncertainty importance measure.
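To make the definition of δ concrete, the sketch below estimates it for one input of an assumed test function using plain histogram densities; this is a crude density-based baseline, not the authors' CDF-based adaptive procedure.

```python
import numpy as np

# Crude Monte Carlo estimate of Borgonovo's delta for the first input of an assumed
# test model, via histogram densities. Baseline illustration only.

rng = np.random.default_rng(1)

def model(x):                        # assumed test function
    return x[:, 0] ** 2 + 2.0 * x[:, 1]

n, bins = 100_000, 60
x = rng.uniform(0, 1, size=(n, 2))
y = model(x)
edges = np.linspace(y.min(), y.max(), bins + 1)
f_y, _ = np.histogram(y, bins=edges, density=True)

# delta_1 = 0.5 * E_X1[ integral |f_Y - f_Y|X1| dy ], conditioning via slices of X1
shifts = []
for lo, hi in zip(np.linspace(0, 1, 21)[:-1], np.linspace(0, 1, 21)[1:]):
    mask = (x[:, 0] >= lo) & (x[:, 0] < hi)
    f_cond, _ = np.histogram(y[mask], bins=edges, density=True)
    shifts.append(0.5 * np.sum(np.abs(f_y - f_cond)) * (edges[1] - edges[0]))
print(np.mean(shifts))               # crude estimate of delta for the first input
```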

8.
Microbiological food safety is an important economic and health issue in the context of globalization and presents food business operators with new challenges in providing safe foods. The hazard analysis and critical control point approach involves identifying the main steps in food processing and the physical and chemical parameters that have an impact on the safety of foods. In the risk-based approach, as defined in the Codex Alimentarius, controlling these parameters so that the final products meet a food safety objective (FSO), fixed by the competent authorities, is a big challenge and of great interest to food business operators. Process risk models, issued from the quantitative microbiological risk assessment framework, provide useful tools in this respect. We propose a methodology, called multivariate factor mapping (MFM), for establishing a link between process parameters and compliance with an FSO. For a stochastic and dynamic process risk model of a pathogen in soft cheese made from pasteurized milk, with many uncertain inputs, multivariate sensitivity analysis and MFM are combined to (i) identify the critical control points (CCPs) for the hazard throughout the food chain and (ii) compute the critical limits of the most influential process parameters, located at the CCPs, with regard to the specific process implemented in the model. Owing to certain forms of interaction among parameters, the results show some new possibilities for the management of microbiological hazards when an FSO is specified.
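A toy Monte Carlo factor-mapping sketch is shown below; the growth model, parameter ranges, and FSO value are all assumed, and the univariate comparison of compliant versus non-compliant runs is only a stand-in for the multivariate procedure the article proposes.

```python
import numpy as np

# Toy factor mapping: simulate a hazard concentration, flag runs complying with an
# assumed FSO, and compare parameter values between compliant and non-compliant runs.

rng = np.random.default_rng(4)
n = 20_000
storage_temp = rng.uniform(2, 12, n)            # storage temperature, degrees C (assumed)
storage_days = rng.uniform(5, 30, n)            # storage duration, days (assumed)
initial_log  = rng.normal(-1.0, 0.5, n)         # initial contamination, log10 CFU/g (assumed)

growth_rate = 0.02 * np.maximum(storage_temp - 1.0, 0.0)   # toy secondary model, log10/day
final_log   = initial_log + growth_rate * storage_days
fso = 2.0                                        # FSO at consumption, log10 CFU/g (assumed)
comply = final_log <= fso

print(f"compliance rate: {comply.mean():.2%}")
for name, p in [("storage_temp", storage_temp), ("storage_days", storage_days),
                ("initial_log", initial_log)]:
    print(f"{name}: median compliant {np.median(p[comply]):.2f} "
          f"vs non-compliant {np.median(p[~comply]):.2f}")
```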

9.
Quantitative models support investigators in several risk analysis applications. The calculation of sensitivity measures is an integral part of this analysis. However, it becomes a computationally challenging task, especially when the number of model inputs is large and the model output is spread over orders of magnitude. We introduce and test a new method for the estimation of global sensitivity measures. The new method relies on the intuition of exploiting the empirical cumulative distribution function of the simulator output. This choice allows the estimators of global sensitivity measures to be based on numbers between 0 and 1, thus fighting the curse of sparsity. For density-based sensitivity measures, we devise an approach based on moving averages that bypasses kernel-density estimation. We compare the new method to approaches for calculating popular risk analysis global sensitivity measures as well as to approaches for computing dependence measures gathering increasing interest in the machine learning and statistics literature (the Hilbert–Schmidt independence criterion and distance covariance). The comparison also considers the number of operations needed to obtain the estimates, an aspect often neglected in global sensitivity studies. We let the estimators undergo several tests, first with the wing-weight test case, then with a computationally challenging code with up to k = 30,000 inputs, and finally with the traditional Level E benchmark code.
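The core intuition, working on the empirical CDF of the output, can be sketched as follows; the toy simulator and the binned first-order index are assumptions used only to show the rank transform, not the authors' estimators.

```python
import numpy as np
from scipy.stats import rankdata

# Sketch: replace the raw simulator output (possibly spread over orders of magnitude)
# by its empirical CDF values, then estimate a first-order sensitivity index on the
# transformed output with a simple binned conditional mean. Toy model assumed.

rng = np.random.default_rng(2)

def simulator(x):                                   # assumed toy model
    return np.exp(5 * x[:, 0]) + x[:, 1]

n, m_bins = 50_000, 50
x = rng.uniform(0, 1, size=(n, 3))
y = simulator(x)
u = rankdata(y) / n                                 # empirical CDF transform, values in (0, 1]

def first_order_index(xi, u, m_bins):
    """Variance of binned conditional means of u given xi, over Var(u)."""
    bins = np.minimum((xi * m_bins).astype(int), m_bins - 1)
    cond_means = np.array([u[bins == b].mean() for b in range(m_bins)])
    return cond_means.var() / u.var()

for i in range(3):
    print(f"input {i}: {first_order_index(x[:, i], u, m_bins):.3f}")
```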

10.
Few studies have focused on the different roles risk factors play in the multistate temporal natural course of breast cancer. We proposed a three-state Markov regression model to predict the risk of transition from free of breast cancer (FBC) to the preclinical screen-detectable phase (PCDP) and from the PCDP to the clinical phase (CP). We searched for initiators and promoters affecting the onset and subsequent progression of breast tumors to build a three-state temporal natural history model with state-dependent genetic and environmental covariates. This risk assessment model was applied to a cohort of one million Taiwanese women and verified by external validation with another independent data set. We identified three kinds of initiators, including the BRCA gene, seven single-nucleotide polymorphisms, and breast density. ER, Ki-67, and HER-2 were found to act as promoters. Body mass index and age at first pregnancy both played a role. Among women carrying the BRCA gene, the 10-year predicted risk for the transition from FBC to CP was 25.83%, 20.31%, and 13.84% for the high-, intermediate-, and low-risk groups, respectively. The corresponding figures were 1.55%, 1.22%, and 0.76% among noncarriers. The mean sojourn time in the PCDP ranged from 0.82 years for the highest risk group to 6.21 years for the lowest. The lack of statistical significance in the external validation supported the adequacy of the proposed model. The three-state model with state-dependent covariates of initiators and promoters was proposed for achieving individually tailored screening and also for personalized clinical surveillance of early breast cancer.
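A minimal sketch of a three-state progressive Markov model is given below; a constant-rate generator is assumed and the transition rates are hypothetical, not the article's estimates.

```python
import numpy as np
from scipy.linalg import expm

# Three-state progressive Markov model (FBC -> PCDP -> CP) with a constant-rate
# generator; rates below are hypothetical illustration values.

lam1 = 0.004    # FBC -> PCDP rate per year (assumed)
lam2 = 0.25     # PCDP -> CP rate per year (assumed); mean sojourn in PCDP = 1/lam2

Q = np.array([[-lam1,  lam1,  0.0 ],
              [ 0.0,  -lam2,  lam2],
              [ 0.0,   0.0,   0.0 ]])

P10 = expm(Q * 10.0)                 # 10-year transition probability matrix
print(f"10-year risk FBC -> CP: {P10[0, 2]:.4f}")
print(f"mean sojourn time in PCDP: {1.0 / lam2:.2f} years")
```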

11.
12.
Despite their diverse applications in many domains, the variable precision rough sets (VPRS) model lacks a feasible method to determine a precision parameter (β) value to control the choice of β-reducts. In this study we propose an effective method to find the β-reducts. First, we calculate a precision parameter value to find the subsets of the information system based on the least upper bound of the data misclassification error. Next, we measure the quality of classification and remove redundant attributes from each subset. We use a simple example to explain this method, and a real-world example is also analyzed. Comparing the implementation results of the proposed method with a neural network approach, our method demonstrates better performance.
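The notion of classification quality under a precision parameter β can be sketched with a brute-force search over attribute subsets; the data set below is made up and the enumeration is only a crude stand-in for the proposed β-reduct method.

```python
from itertools import combinations
from collections import Counter

# VPRS-style quality of classification for attribute subsets, plus a brute-force
# listing of subsets that keep the quality of the full attribute set. Toy data.

data = [                                     # (condition attribute values, decision)
    ((1, 0, 1), "yes"), ((1, 0, 0), "yes"), ((0, 1, 1), "no"),
    ((0, 1, 0), "no"),  ((1, 1, 1), "yes"), ((1, 1, 1), "no"),
    ((0, 0, 0), "no"),  ((1, 0, 1), "yes"),
]

def quality(attrs, beta):
    """Fraction of objects in blocks whose majority decision reaches precision beta."""
    blocks = {}
    for cond, dec in data:
        blocks.setdefault(tuple(cond[a] for a in attrs), []).append(dec)
    covered = sum(len(b) for b in blocks.values()
                  if Counter(b).most_common(1)[0][1] / len(b) >= beta)
    return covered / len(data)

beta = 0.75
full = quality((0, 1, 2), beta)
for r in range(1, 4):
    for subset in combinations(range(3), r):
        q = quality(subset, beta)
        if q >= full:
            print(f"beta-reduct candidate: attributes {subset}, quality {q:.2f}")
```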

13.
14.
Acceptance sampling plans are practical tools for quality control applications, which involve quality contracting on product orders between the vendor and the buyer. These sampling plans provide the vendor and the buyer with rules for lot sentencing while meeting their preset requirements on product quality. In this paper, we introduce a variables sampling plan for unilateral processes based on the one-sided process capability indices CPU (or CPL), to deal with the lot sentencing problem when the fraction of defectives is very low. The proposed sampling plan is developed based on the exact sampling distribution rather than an approximation. Practitioners can use the proposed plan to determine the accurate number of product items to be inspected and the corresponding critical acceptance value, and thus make reliable decisions. We also tabulate the required sample size n and the corresponding critical acceptance value C0 for various α-risks, β-risks, and levels of lot or process fraction of defectives corresponding to acceptable and rejecting quality levels.
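A minimal lot-sentencing sketch based on the estimated CPU index is shown below; the sample size, critical value C0, and specification limit are assumed values, not entries from the paper's tables.

```python
import numpy as np

# Variables lot sentencing with a one-sided capability index: estimate
# CPU = (USL - xbar) / (3 s) from the sample and accept the lot when it meets
# a critical acceptance value C0. n, C0, and USL below are assumed.

def sentence_lot(sample, usl, c0):
    xbar, s = np.mean(sample), np.std(sample, ddof=1)
    cpu_hat = (usl - xbar) / (3.0 * s)
    return ("accept" if cpu_hat >= c0 else "reject"), cpu_hat

rng = np.random.default_rng(3)
sample = rng.normal(loc=10.0, scale=0.5, size=80)   # n = 80 measured items (assumed)
decision, cpu_hat = sentence_lot(sample, usl=12.0, c0=1.20)
print(decision, round(cpu_hat, 3))
```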

15.
This paper deals with the optimal selection of m out of n facilities to first perform m given primary jobs in Stage-I, followed by the remaining (n − m) facilities optimally performing the (n − m) secondary jobs in Stage-II. It is assumed that in both stages the facilities work in parallel. The aim of the proposed study is to find the set of m facilities performing the primary jobs in Stage-I for which the sum of the overall completion time of the jobs in Stage-I and the corresponding optimal completion time of the secondary jobs in Stage-II by the remaining (n − m) facilities is minimal. The developed solution methodology involves solving the standard time-minimizing and cost-minimizing assignment problems alternately after forbidding some facility–job pairings, and it suggests a polynomially bounded algorithm. The proposed algorithm has been implemented and tested on a variety of test problems, and its performance is found to be quite satisfactory.
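For a tiny instance, the selection problem can be solved by brute force as sketched below; the time matrix is invented and the enumeration is not the polynomially bounded algorithm the paper proposes.

```python
from itertools import combinations, permutations

# Brute force for a tiny instance: choose m of n facilities for the primary jobs so
# that Stage-I completion time plus the best Stage-II completion time is minimal,
# where a stage's completion time is the max over its parallel facility-job pairs.
# The time matrix is made up for illustration.

t = [[4, 2, 7, 3],      # t[i][j]: time facility i needs for job j
     [6, 5, 2, 8],
     [3, 9, 4, 2],
     [5, 1, 6, 7]]
n, m = 4, 2             # jobs 0..m-1 are primary, jobs m..n-1 are secondary

def stage_time(facilities, jobs):
    """Best parallel completion time: min over assignments of the max pair time."""
    return min(max(t[f][j] for f, j in zip(facilities, perm))
               for perm in permutations(jobs))

best = min((stage_time(sel, range(m)) +
            stage_time([f for f in range(n) if f not in sel], range(m, n)), sel)
           for sel in combinations(range(n), m))
print(best)             # (total completion time, facilities chosen for the primary jobs)
```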

16.
17.
18.
19.
20.