Similar Documents (20 results)
1.
Sensitivity analysis (SA) of a numerical model, for instance one simulating physical phenomena, is useful to quantify the influence of the inputs on the model responses. This paper proposes a new sensitivity index, based upon modification of the probability density function (pdf) of the random inputs, when the quantity of interest is a failure probability (the probability that a model output exceeds a given threshold). An input is considered influential if the input pdf modification leads to a significant change in the failure probability. These sensitivity indices can be computed using the sole set of simulations that has already been used to estimate the failure probability, thus limiting the number of calls to the numerical model. In the case of a Monte Carlo sample, asymptotic properties of the indices are derived. Based on the Kullback–Leibler divergence, several types of input perturbations are introduced. The relevance of this new SA method is analysed through three case studies.
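A minimal sketch of the sample-reuse idea, in Python, assuming standard normal inputs and a hypothetical limit-state function g (both illustrative, not from the paper): the failure probability under a perturbed input pdf is re-estimated from the original Monte Carlo sample by likelihood-ratio reweighting, so no additional model calls are needed.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)

    def g(x):
        # Hypothetical limit-state function: failure when output exceeds a threshold.
        return x[:, 0] + x[:, 1]

    n = 100_000
    x = rng.standard_normal((n, 2))          # original inputs: X_j ~ N(0, 1)
    fail = g(x) > 3.0                        # indicator of the failure event
    p_hat = fail.mean()                      # baseline failure probability

    # Perturb the pdf of input 1: shift its mean by delta (e.g. a KL-calibrated shift).
    delta = 0.5
    w = stats.norm.pdf(x[:, 0], loc=delta) / stats.norm.pdf(x[:, 0])  # likelihood ratio
    p_pert = np.mean(fail * w)               # perturbed failure probability, same sample

    print(p_hat, p_pert, p_pert - p_hat)     # a large shift flags an influential input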

2.

Hazard rate functions are often used in modeling lifetime data. The Exponential Power Series (EPS) family has a monotone hazard rate function. In this article, the influence of input factors such as time and parameters on the variability of the hazard rate function is assessed by local and global sensitivity analysis. Two different indices based on local and global sensitivity indices are presented. The simulation results for two datasets show that the hazard rate functions of the EPS family are sensitive to the input parameters. The results also show that the hazard rate function of the EPS family is more sensitive to the exponential-distribution parameter than to the power series parameters.
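As an illustration (not the article's own computation), here is a sketch assuming the exponential-geometric member of the EPS family, whose survival function is commonly written S(t) = (1 − p)e^(−λt)/(1 − p e^(−λt)); local sensitivities of the hazard rate to the parameters are approximated by central finite differences.

    import numpy as np

    # Exponential-geometric member of the EPS family (assumed parametrization):
    # survival S(t) = (1 - p) * exp(-lam*t) / (1 - p*exp(-lam*t)), with 0 < p < 1.
    def hazard(t, lam, p):
        s = np.exp(-lam * t)
        surv = (1 - p) * s / (1 - p * s)
        dens = lam * (1 - p) * s / (1 - p * s) ** 2
        return dens / surv                    # simplifies to lam / (1 - p*exp(-lam*t))

    # Local sensitivity of h(t) to each parameter via central finite differences.
    def local_sens(t, lam, p, eps=1e-6):
        d_lam = (hazard(t, lam + eps, p) - hazard(t, lam - eps, p)) / (2 * eps)
        d_p = (hazard(t, lam, p + eps) - hazard(t, lam, p - eps)) / (2 * eps)
        return d_lam, d_p

    t = np.linspace(0.1, 5.0, 5)
    print(hazard(t, lam=1.0, p=0.5))          # monotone (decreasing) hazard
    print(local_sens(t, lam=1.0, p=0.5))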

3.
Global sensitivity analysis with variance-based measures suffers from several theoretical and practical limitations, since these measures focus only on the variance of the output and handle multivariate variables in a limited way. In this paper, we introduce a new class of sensitivity indices based on dependence measures which overcomes these insufficiencies. Our approach originates from the idea of comparing the output distribution with its conditional counterpart when one of the input variables is fixed. We establish that this comparison yields previously proposed indices when it is performed with Csiszár f-divergences, as well as sensitivity indices which are well-known dependence measures between random variables. This leads us to investigate completely new sensitivity indices based on recent state-of-the-art dependence measures, such as distance correlation and the Hilbert–Schmidt independence criterion. We also emphasize the potential of feature selection techniques relying on such dependence measures as alternatives to screening in high dimension.
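A hedged sketch of one such index: the Hilbert–Schmidt independence criterion between each input and the output, estimated with the standard biased V-statistic and Gaussian kernels with median-heuristic bandwidths (the test function is hypothetical).

    import numpy as np

    def rbf_gram(v, bw):
        d2 = (v[:, None] - v[None, :]) ** 2
        return np.exp(-d2 / (2 * bw ** 2))

    def hsic(x, y):
        # Biased V-statistic estimator of HSIC with Gaussian kernels and
        # median-heuristic bandwidths.
        n = len(x)
        bx = np.median(np.abs(x[:, None] - x[None, :])) + 1e-12
        by = np.median(np.abs(y[:, None] - y[None, :])) + 1e-12
        K, L = rbf_gram(x, bx), rbf_gram(y, by)
        H = np.eye(n) - np.ones((n, n)) / n   # centering matrix
        return np.trace(K @ H @ L @ H) / (n - 1) ** 2

    rng = np.random.default_rng(1)
    n = 500
    x1, x2 = rng.uniform(-1, 1, n), rng.uniform(-1, 1, n)
    y = x1 ** 2 + 0.1 * rng.standard_normal(n)   # y depends on x1 only, nonlinearly
    print(hsic(x1, y), hsic(x2, y))              # larger HSIC -> more influential input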

4.
Global sensitivity analysis (GSA) can help practitioners focus on the inputs whose uncertainties have an impact on the model output, which allows reducing the complexity of the model. Screening, the qualitative method of GSA, identifies and excludes non- or less-influential input variables in high-dimensional models. However, for non-parametric problems, finding an efficient screening procedure remains challenging, as one needs to properly handle the non-parametric high-order interactions among input variables while keeping the size of the screening experiment economically feasible. In this study, we design a novel screening approach based on the analysis-of-variance decomposition of the model. This approach combines the virtues of run-size economy and model independence. The core idea is to choose a low-level complete orthogonal array to derive the sensitivity estimates for all input factors and their interactions at low cost, and then develop a statistical process to screen out the non-influential ones without assuming effect-sparsity of the model. Simulation studies show that the proposed approach performs well in various settings.
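A toy sketch of the core idea, using a two-level full factorial (the simplest complete orthogonal array) on a hypothetical test function; orthogonality makes the main-effect and interaction contrasts uncorrelated, so large absolute effects flag influential terms. This is only a schematic stand-in for the paper's low-level OA and screening statistics.

    import numpy as np
    from itertools import product, combinations

    def model(x):
        # Hypothetical test function: one active main effect and one interaction.
        return 2.0 * x[0] + 1.5 * x[1] * x[2] + 0.01 * x[3]

    d = 4
    design = np.array(list(product([-1.0, 1.0], repeat=d)))   # 2^4 complete OA
    y = np.array([model(row) for row in design])

    # Main-effect and two-factor-interaction contrasts; on an orthogonal design
    # these estimates are uncorrelated.
    for j in range(d):
        print(f"main {j}:", design[:, j] @ y / len(y))
    for i, j in combinations(range(d), 2):
        print(f"int {i}x{j}:", (design[:, i] * design[:, j]) @ y / len(y))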

5.
Compartmental models have been widely used since 1943 for modelling systems in pharmacokinetics, engineering, biomedicine and ecology, and turn out to be very good approximations for many different real-life systems. Sensitivity analysis (SA) is commonly employed at a preliminary stage of the model development process to increase confidence in the model and its predictions by providing an understanding of how the model response variables respond to changes in the inputs, the data used to calibrate it, and the model structure. This paper concerns the application of some SA techniques to a linear, deterministic, time-invariant compartmental model of the global carbon cycle (GCC). The same approach is also illustrated with a more complex GCC model which has some nonlinear components. By focusing on these two structurally different models for estimating the atmospheric CO2 content in the year 2100, the sensitivity of model predictions to uncertainty attached to the model input factors is studied. The application and modification of SA techniques to compartmental models with a steady-state constraint is explored using the eight-compartment model, and computational methods developed to maintain the initial steady-state condition are presented. In order to adjust the values of model input factors to achieve an acceptable match between observed and predicted model conditions, windowing analysis is used.
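A minimal sketch of one-at-a-time sensitivity for a linear, time-invariant compartmental model (a hypothetical two-compartment toy, not the GCC model itself): the state evolves as dx/dt = A x, solved with a matrix exponential, and each transfer rate is perturbed by 1% to obtain a relative sensitivity of the predicted atmospheric content.

    import numpy as np
    from scipy.linalg import expm

    # Toy two-compartment linear system with hypothetical rates:
    # x = (atmosphere, ocean); columns of A sum to zero (mass conservation).
    def atmosphere_at(T, k12, k21, x0=np.array([800.0, 200.0])):
        A = np.array([[-k12, k21],
                      [k12, -k21]])
        return (expm(A * T) @ x0)[0]

    base = dict(k12=0.1, k21=0.05)
    y0 = atmosphere_at(100.0, **base)

    # One-at-a-time relative sensitivity: percent change in output per percent
    # change in each input factor.
    for name in base:
        bumped = dict(base)
        bumped[name] *= 1.01
        y1 = atmosphere_at(100.0, **bumped)
        print(name, (y1 - y0) / y0 / 0.01)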

6.
Uncertainty and sensitivity analyses for systems that involve both stochastic (i.e., aleatory) and subjective (i.e., epistemic) uncertainty are discussed. In such analyses, the dependent variable is usually a complementary cumulative distribution function (CCDF) that arises from stochastic uncertainty; uncertainty analysis involves the determination of a distribution of CCDFs that results from subjective uncertainty, and sensitivity analysis involves the determination of the effects of subjective uncertainty in individual variables on this distribution of CCDFs. Uncertainty analysis is presented as an integration problem involving probability spaces for stochastic and subjective uncertainty. Approximation procedures for the underlying integrals are described that provide an assessment of the effects of stochastic uncertainty, an assessment of the effects of subjective uncertainty, and a basis for performing sensitivity studies. Extensive use is made of Latin hypercube sampling, importance sampling and regression-based sensitivity analysis techniques. The underlying ideas, which are initially presented in an abstract form, are central to the design and performance of real analyses. To emphasize the connection between concept and computational practice, these ideas are illustrated with an analysis involving the MACCS reactor accident consequence model, a performance assessment for the Waste Isolation Pilot Plant, and a probabilistic risk assessment for a nuclear power station.
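A schematic of the two-loop structure, assuming a hypothetical consequence model: an outer Latin hypercube sample over an epistemic parameter, an inner random sample over the aleatory variable, and the resulting family of CCDFs summarized pointwise.

    import numpy as np
    from scipy.stats import qmc

    rng = np.random.default_rng(2)

    def consequence(aleatory_u, theta):
        # Hypothetical model: aleatory variability scaled by an epistemic parameter
        # (inverse-CDF sampling of an exponential with scale theta).
        return theta * (-np.log(1 - aleatory_u))

    # Outer loop: Latin hypercube sample over the epistemic parameter theta.
    n_epi, n_ale = 50, 1000
    theta = 1.0 + 4.0 * qmc.LatinHypercube(d=1, seed=3).random(n_epi).ravel()

    levels = np.linspace(0.0, 20.0, 101)
    ccdfs = np.empty((n_epi, levels.size))
    for i, th in enumerate(theta):
        y = consequence(rng.uniform(size=n_ale), th)     # inner aleatory sample
        ccdfs[i] = (y[:, None] > levels[None, :]).mean(axis=0)

    # The distribution of CCDFs: pointwise mean and 5th/95th percentile bands.
    print(ccdfs.mean(axis=0)[:5])
    print(np.percentile(ccdfs, [5, 95], axis=0)[:, :5])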

7.
In this paper, we investigate the use of the contribution to the sample mean plot (CSM plot) as a graphical tool for sensitivity analysis (SA) of computational models. We first provide an exact formula that links, for each uncertain model input Xj, the CSM plot Cj(·) with the first-order variance-based sensitivity index Sj. We then build a new estimate for Sj using polynomial regression of the CSM plot. This estimation procedure allows the computation of Sj from given data, without any SA-specific design of experiment. Numerical results show that this new Sj estimate is efficient for large sample sizes, but that at small sample sizes it does not compare well with other Sj estimation techniques based on given data, such as the effective algorithm for computing global sensitivity indices (EASI) or metamodel-based approaches.
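A short sketch of the CSM plot itself (the Sj-estimation step via polynomial regression is omitted): for each input, sort the runs by that input and plot the cumulative share of the output sum against the empirical quantile; an influential input bends the curve away from the diagonal. The model below is hypothetical.

    import numpy as np

    rng = np.random.default_rng(4)
    n = 2000
    x1, x2 = rng.uniform(size=n), rng.uniform(size=n)
    y = 5.0 + 4.0 * x1 + 0.5 * x2            # hypothetical model, positive output

    def csm(xj, y):
        # Contribution to the sample mean: fraction of sum(y) carried by the runs
        # whose xj falls below each empirical quantile.
        order = np.argsort(xj)
        return np.insert(np.cumsum(y[order]) / y.sum(), 0, 0.0)

    q = np.linspace(0, 1, n + 1)
    c1, c2 = csm(x1, y), csm(x2, y)
    # An influential input bends the curve away from the diagonal C(q) = q.
    print(np.max(np.abs(c1 - q)), np.max(np.abs(c2 - q)))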

8.
As modeling efforts expand to a broader spectrum of areas, the amount of computer time required to exercise the corresponding computer codes has become quite costly (several hours for a single run is not uncommon). This cost can be tied directly to the complexity of the modeling and to the large number of input variables (often numbering in the hundreds). Further, the complexity of the modeling (usually involving systems of differential equations) makes the relationships among the input variables not mathematically tractable. In this setting it is desired to perform sensitivity studies of the input-output relationships. Hence, a judicious selection procedure for the choice of values of input variables is required. Latin hypercube sampling has been shown to work well on this type of problem.

However, a variety of situations require that decisions and judgments be made in the face of uncertainty. The source of this uncertainty may be lack of knowledge about probability distributions associated with input variables, or about different hypothesized future conditions, or may be present as a result of different strategies associated with a decision-making process. In this paper a generalization of Latin hypercube sampling is given that allows these areas to be investigated without making additional computer runs. In particular it is shown how weights associated with Latin hypercube input vectors may be changed to reflect different probability distribution assumptions on key input variables and yet provide an unbiased estimate of the cumulative distribution function of the output variable. This allows different distribution assumptions on input variables to be studied without additional computer runs and without fitting a response surface. In addition, these same weights can be used in a modified nonparametric Friedman test to compare treatments. Sample size requirements needed to apply the results of the work are also considered. The procedures presented in this paper are illustrated using a model associated with the risk assessment of geologic disposal of radioactive waste.
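A minimal sketch of the reweighting idea, using a plain uniform sample as a stand-in for an LHS design and a hypothetical Beta(2, 2) alternative distribution for one input: the same runs are reweighted by the likelihood ratio to estimate the output CDF under the new assumption, without new model runs. (The unbiasedness result in the paper is specific to the Latin hypercube weights.)

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(5)
    n = 2000
    x = rng.uniform(size=(n, 2))             # stand-in for an LHS sample on [0,1]^2
    y = x[:, 0] ** 2 + x[:, 1]               # hypothetical model output

    # Re-weight the existing runs to reflect a new distribution on input 1:
    # Beta(2, 2) instead of the Uniform(0, 1) originally sampled (whose pdf is 1).
    w = stats.beta.pdf(x[:, 0], 2, 2)        # likelihood ratio per run
    w /= w.sum()

    def weighted_cdf(y, w, level):
        return np.sum(w * (y <= level))

    for lvl in (0.5, 1.0, 1.5):
        print(lvl, (y <= lvl).mean(), weighted_cdf(y, w, lvl))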

9.
The Fourier amplitude sensitivity test (FAST) can be used to calculate the relative variance contribution of model input parameters to the variance of predictions made with functional models. It is widely used in the analysis of complicated process modeling systems. This study provides an improved transformation procedure of FAST for the non-uniform distributions that are used to represent the input parameters. Here it is proposed that the cumulative probability be used instead of the probability density when transforming non-uniform distributions for FAST. This improvement increases the accuracy of the transformation by reducing errors, and makes the transformation more convenient to use in practice. In an evaluation, the improved procedure was demonstrated to have very high accuracy in comparison to the procedure that is currently in wide use.
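A sketch of the transformation step, assuming the standard FAST search curve (the arcsine transform of sin(ωs) is uniform on [0, 1]) followed by the inverse-CDF mapping that the cumulative-probability approach implies; the distributions are illustrative.

    import numpy as np
    from scipy import stats

    # FAST search curve for one input: 0.5 + arcsin(sin(w*s))/pi is uniformly
    # distributed on [0, 1] as s sweeps (-pi, pi).
    n, omega = 1025, 11
    s = np.linspace(-np.pi, np.pi, n, endpoint=False)
    u = 0.5 + np.arcsin(np.sin(omega * s)) / np.pi
    u = np.clip(u, 1e-9, 1 - 1e-9)           # guard the tails before inverting

    # Push the uniform values through the inverse CDF (cumulative probability)
    # of each non-uniform input distribution.
    x_lognorm = stats.lognorm.ppf(u, s=0.5)
    x_gamma = stats.gamma.ppf(u, a=2.0)
    print(x_lognorm.mean(), stats.lognorm.mean(s=0.5))  # sample vs. true mean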

10.
Missing data are a common problem in almost all areas of empirical research. Ignoring the missing data mechanism, especially when data are missing not at random (MNAR), can result in biased and/or inefficient inference. Because an MNAR mechanism is not verifiable from the observed data, sensitivity analysis is often used to assess it. Current sensitivity analysis methods primarily assume a model for the response mechanism in conjunction with a measurement model, and examine sensitivity to the missing data mechanism via the parameters of the response model. Recently, Jamshidian and Mata (Post-modelling sensitivity analysis to detect the effect of missing data mechanism, Multivariate Behav. Res. 43 (2008), pp. 432–452) introduced a new method of sensitivity analysis that does not require the difficult task of modelling the missing data mechanism. In this method, a single measurement model is fitted to all of the data and to a sub-sample of the data, and the discrepancy between the parameter estimates obtained from the two data sets is used as a measure of sensitivity to the missing data mechanism. Jamshidian and Mata describe their method mainly in the context of detecting data that are missing completely at random (MCAR), and use a bootstrap-type method, which relies on heuristic input from the researcher, to test for the discrepancy of the parameter estimates. Instead of using the bootstrap, the current article obtains confidence intervals for parameter differences on two samples based on an asymptotic approximation. Because it does not use the bootstrap, the developed procedure avoids the convergence problems likely with bootstrap methods; it does not require heuristic input from the researcher and can be readily implemented in statistical software. The article also discusses methods of obtaining sub-samples that may be used to test for data missing at random (MAR) in addition to MCAR. An application of the developed procedure to a real data set, from the first wave of an ongoing longitudinal study on aging, is presented. Simulation studies are performed as well, using two methods of missing data generation, which show promise for the proposed sensitivity method. One method of missing data generation is also new and interesting in its own right.
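A deliberately simplified sketch of the compare-two-fits idea, reduced to a single mean parameter and disjoint sub-samples so that the usual two-sample normal approximation gives the asymptotic confidence interval (the paper's procedure covers general measurement models; the MNAR mechanism below is hypothetical).

    import numpy as np

    rng = np.random.default_rng(6)
    n = 2000
    y = rng.normal(0.0, 1.0, n)
    # Hypothetical MNAR mechanism: x is more likely missing when y is large.
    miss = rng.uniform(size=n) < 1 / (1 + np.exp(-2 * y))

    # Compare the estimate of E[y] on the two disjoint sub-samples defined by the
    # missingness indicator; a normal-approximation CI replaces the bootstrap.
    a, b = y[~miss], y[miss]
    diff = a.mean() - b.mean()
    se = np.sqrt(a.var(ddof=1) / len(a) + b.var(ddof=1) / len(b))
    lo, hi = diff - 1.96 * se, diff + 1.96 * se
    print(f"diff = {diff:.3f}, 95% CI = ({lo:.3f}, {hi:.3f})")
    # Under MCAR the CI should cover 0; here it will not, flagging sensitivity.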

11.
Symmetrical global sensitivity analysis (SGSA) can help practitioners focus on the symmetrical terms of inputs whose uncertainties have an impact on the model output, which allows reducing the complexity of the model. However, there remains the challenging problem of finding an efficient method to obtain symmetrical global sensitivity indices (SGSI) when the functional form of the symmetrical terms is unknown, including numerical and non-parametric situations. In this study, we propose a novel sampling plan, called a symmetrical design, for SGSA. As a preliminary experiment for model feature extraction, such a plan offers the virtue of run-size economy due to its closure with respect to the given group. Using the design, we give estimation methods for SGSI, together with their asymptotic properties, for numerical and non-parametric models directly from the model outputs, and further propose a significance test for SGSI in the non-parametric situation. A case study on a GSA benchmark and a real data analysis show the effectiveness of the proposed design.

12.
Uncertainty and sensitivity analysis is an essential ingredient of model development and applications. For many uncertainty and sensitivity analysis techniques, sensitivity indices are calculated from a relatively large sample to measure the importance of parameters in their contributions to uncertainties in model outputs. To statistically compare their importance, it is necessary that uncertainty and sensitivity analysis techniques provide standard errors of estimated sensitivity indices. In this paper, a delta method is used to analytically approximate standard errors of estimated sensitivity indices for a popular sensitivity analysis method, the Fourier amplitude sensitivity test (FAST). Standard errors estimated with the delta method were compared with those estimated from 20 sample replicates. We found that the delta method can provide a good approximation for the standard errors of both first-order and higher-order sensitivity indices. Finally, based on the standard error approximation, we also propose a method to determine the minimum sample size needed to achieve the desired estimation precision for a specified sensitivity index. The standard error estimation method presented in this paper can make FAST analysis computationally much more efficient for complex models.
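A generic sketch of the delta-method step for a sensitivity index written as a variance ratio S = a/b (the FAST-specific variance and covariance formulas are in the paper; the numbers below are illustrative only).

    import numpy as np

    def ratio_se(a, b, var_a, var_b, cov_ab):
        # Delta-method standard error of S = a/b given the (co)variances of the
        # estimators of a (partial variance) and b (total variance):
        # gradient g = (1/b, -a/b^2); Var(S) ~ g' Sigma g.
        g = np.array([1.0 / b, -a / b ** 2])
        sigma = np.array([[var_a, cov_ab], [cov_ab, var_b]])
        return float(np.sqrt(g @ sigma @ g))

    # Hypothetical estimates from a FAST run (illustrative numbers only).
    print(ratio_se(a=0.30, b=1.00, var_a=4e-4, var_b=9e-4, cov_ab=2e-4))

Since the estimator variance shrinks roughly like 1/n, a target standard error se* can be met by scaling an initial run of size n0 with standard error se0 as n ≈ n0 (se0/se*)², which is the spirit of the paper's minimum-sample-size determination.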

13.
The global sensitivity analysis method used to quantify the influence of uncertain input variables on the variability in numerical model responses has already been applied to deterministic computer codes; deterministic means here that the same set of input variables always gives the same output value. This paper proposes a global sensitivity analysis methodology for stochastic computer codes, for which the result of each code run is itself random. The framework of joint modeling of the mean and dispersion of heteroscedastic data is used. To deal with the complexity of computer experiment outputs, nonparametric joint models are discussed and a new Gaussian process-based joint model is proposed. The relevance of these models is analysed based upon two case studies. Results show that the joint modeling approach yields accurate sensitivity index estimators even when heteroscedasticity is strong.
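A rough sketch of a joint heteroscedastic model, assuming replicated runs of a hypothetical stochastic code: one Gaussian process is fitted to the per-point replicate means and a second to the log of the replicate variances (scikit-learn is used here for brevity; the paper's joint GP model is more elaborate).

    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF, WhiteKernel

    rng = np.random.default_rng(7)

    def stochastic_code(x, reps):
        # Hypothetical stochastic simulator with heteroscedastic noise in x.
        return np.sin(3 * x) + rng.normal(0, 0.1 + 0.4 * x, size=reps)

    X = np.linspace(0, 1, 30)
    reps = np.array([stochastic_code(x, 50) for x in X])   # 30 points x 50 replicates
    means = reps.mean(axis=1)
    lvars = np.log(reps.var(axis=1, ddof=1))

    kernel = RBF(0.2) + WhiteKernel(1e-3)
    gp_mean = GaussianProcessRegressor(kernel).fit(X[:, None], means)
    gp_disp = GaussianProcessRegressor(kernel).fit(X[:, None], lvars)

    xs = np.array([[0.25], [0.75]])
    print(gp_mean.predict(xs), np.exp(gp_disp.predict(xs)))  # mean and dispersion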

14.
Complex computer codes are widely used in science to model physical systems. Sensitivity analysis aims to measure the contributions of the inputs to the variability of the code output. An efficient tool to perform such analysis is provided by variance-based methods, which have recently been investigated in the framework of dependent inputs. One of their issues is that they require a large number of runs of the complex simulator. To handle this, a Gaussian process (GP) regression model may be used to approximate the complex code. In this work, we propose to decompose a GP into a high-dimensional representation. This leads to the definition of a variance-based sensitivity measure well tailored for non-independent inputs. We give a methodology to estimate these indices and to quantify their uncertainty. Finally, the approach is illustrated on toy functions and on a river flood model.

15.
In this work it is investigated theoretically whether the length of the support of a continuous variable representing a simple health-related index affects the index's ability to diagnose a binary health outcome. This is done by studying the monotonicity of the index's sensitivity function, a measure of its diagnostic ability, in the cases where the index's distribution is either unknown or uniform. The case of a composite health-related index formed as the sum of m component variables is also presented, when the distribution of its component variables is either unknown or uniform. It is proved that, under a certain condition, a health-related index's sensitivity is a non-decreasing function of the finite length of its components' support. In addition, similar propositions are presented for the case where the health-related index is normally distributed, in terms of its distribution parameters.

16.
The hierarchically orthogonal functional decomposition of any measurable function η of a random vector X = (X1, …, Xp) consists in decomposing η(X) into a sum of functions of increasing dimension, each depending only on a subvector of X. Even when X1, …, Xp are assumed to be dependent, this decomposition is unique if the components are hierarchically orthogonal, that is, if two components are orthogonal whenever all the variables involved in one of the summands are a subset of the variables involved in the other. Setting Y = η(X), this decomposition leads to the definition of generalized sensitivity indices able to quantify the uncertainty of Y due to each dependent input in X [Chastaing G, Gamboa F, Prieur C. Generalized Hoeffding–Sobol decomposition for dependent variables – application to sensitivity analysis. Electron J Statist. 2012;6:2420–2448]. In this paper, a numerical method is developed to identify the component functions of the decomposition using the hierarchical orthogonality property. Furthermore, the asymptotic properties of the component estimators are studied, as well as the numerical estimation of the generalized sensitivity indices for a toy model. Lastly, the method is applied to a model arising from a real-world problem.
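Stated compactly (a condensed rendering of the cited Chastaing–Gamboa–Prieur framework, with the index written in one common form), the decomposition, the hierarchical-orthogonality constraint, and the resulting generalized sensitivity index are:

    \eta(X) = \eta_0 + \sum_{\emptyset \neq u \subseteq \{1,\dots,p\}} \eta_u(X_u),
    \qquad
    \mathbb{E}\!\left[\eta_u(X_u)\,\eta_v(X_v)\right] = 0 \quad \text{whenever } u \subsetneq v,

    S_u = \frac{\mathrm{Cov}\!\big(Y,\ \eta_u(X_u)\big)}{\mathrm{Var}(Y)},
    \qquad
    \sum_u S_u = 1.

Because Y is the sum of the components, Cov(Y, η_u) collects the variance of η_u together with its covariances with the other components, which is what allows the indices to sum to one even under dependence.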

17.
In an observational study in which each treated subject is matched to several untreated controls by using observed pretreatment covariates, a sensitivity analysis asks how hidden biases due to unobserved covariates might alter the conclusions. The bounds required for a sensitivity analysis are the solution to an optimization problem. In general, this optimization problem is not separable, in the sense that one cannot find the needed optimum by performing a separate optimization in each matched set and combining the results. We show, however, that this optimization problem is asymptotically separable, so that when there are many matched sets a separate optimization may be performed in each matched set and the results combined to yield the correct optimum with negligible error. This is true when the Wilcoxon rank sum test or the Hodges-Lehmann aligned rank test is applied in matching with multiple controls. Numerical calculations show that the asymptotic approximation performs well with as few as 10 matched sets. In the case of the rank sum test, a table is given containing the separable solution; with this table, only simple arithmetic is required to conduct the sensitivity analysis. The method also supplies estimates, such as the Hodges-Lehmann estimate, and confidence intervals associated with rank tests. The method is illustrated in a study of the effects of dropping out of US high schools on cognitive test scores.

18.
For structural systems with uncertainties in both the input variables and their distribution parameters, three sensitivity indices are proposed to measure the influence of the input variables, the distribution parameters and their interactive effects. With these sensitivity indices, analysts can decide whether it is worthwhile to accumulate data on a distribution parameter in order to reduce its uncertainty. Because of the large computational cost, analytical solutions are derived for quadratic polynomial output responses, whereas for complex models the state-dependent parameter (SDP) method is utilized to compute the proposed sensitivity indices efficiently.

19.
Sensitivity analysis studies the influence of small changes in the input data on the output of the analysis. Han and Huh (1995) developed a quantification method for ranked data; however, the question of stability in the analysis of ranked data has not been considered. Here, we propose a method of sensitivity analysis for ranked data. Our aim is to evaluate perturbations by using a graphical approach suggested by Han and Huh (1995). It extends the results obtained by Tanaka (1984) and Huh (1989) for sensitivity analysis in Hayashi's third method of quantification, and those of Huh and Park (1990) for the principal component reduction of the case influence derivatives in regression. A numerical example is provided to explain how to conduct sensitivity analysis based on the proposed approach.

20.
The case sensitivity function approach to influence analysis is introduced as a natural smooth extension of influence curve methodology in which both the insights of geometry and the power of (convex) analysis are available. In it, perturbation is defined as movement between probability vectors defining weighted empirical distributions. A Euclidean geometry is proposed giving such perturbations both size and direction, and the notion of the salience of a perturbation is emphasized. This approach has several benefits. A general probability case weight analysis results, and answers to a number of outstanding questions follow directly. Rescaled versions of the three usual finite-sample influence curve measures, seen now to be required for comparability across different-sized subsets of cases, are readily available. These new diagnostics directly measure the salience of the (infinitesimal) perturbations involved, and their essential unity, both within and between subsets, is evident geometrically. Finally it is shown how a relaxation strategy, in which a high-dimensional (O(nCm)) discrete problem is replaced by a low-dimensional (O(n)) continuous problem, can combine with (convex) optimization results to deliver better performance in challenging multiple-case influence problems. Further developments are briefly indicated.
