Similar Articles
20 similar articles retrieved (search time: 15 ms).
1.
ABSTRACT

Physical phenomena are commonly modelled by time-consuming numerical simulators that are functions of many uncertain parameters, whose influences can be measured via a global sensitivity analysis. The usual variance-based indices require too many simulations, especially when the inputs are numerous. To address this limitation, we consider recent advances in dependence measures, focusing on the distance correlation and the Hilbert–Schmidt independence criterion (HSIC), and study their use for screening. Numerical tests reveal differences between variance-based indices and dependence measures. Two approaches are then proposed to use the latter for screening. The first relies on independence tests, with existing asymptotic versions and spectral extensions; bootstrap versions are also proposed. The second considers a linear model with dependence measures, coupled with a bootstrap selection method or a Lasso penalization. Numerical experiments show their potential in the presence of many non-influential inputs and give successful results for a nuclear reliability application.
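As a rough illustration of how such a dependence measure can be computed from an existing design of simulations, the sketch below evaluates a biased HSIC estimate with Gaussian kernels on a toy simulator and ranks the inputs by it. The kernel choice, the median-heuristic bandwidth and the toy model are assumptions for illustration, not the authors' setup.

```python
import numpy as np

def gaussian_gram(x, bandwidth=None):
    """Gram matrix of a Gaussian kernel; bandwidth defaults to the median heuristic."""
    x = np.asarray(x, dtype=float).reshape(len(x), -1)
    sq_dists = np.sum((x[:, None, :] - x[None, :, :]) ** 2, axis=-1)
    if bandwidth is None:
        bandwidth = np.sqrt(0.5 * np.median(sq_dists[sq_dists > 0]))
    return np.exp(-sq_dists / (2 * bandwidth ** 2))

def hsic(x, y):
    """Biased V-statistic estimator of HSIC between one input sample x and the output y."""
    n = len(x)
    K, L = gaussian_gram(x), gaussian_gram(y)
    H = np.eye(n) - np.ones((n, n)) / n          # centring matrix
    return np.trace(K @ H @ L @ H) / (n - 1) ** 2

# Screening: rank the inputs of a (toy) simulator by their HSIC with the output.
rng = np.random.default_rng(0)
X = rng.uniform(-np.pi, np.pi, size=(500, 3))    # 3 uncertain inputs
Y = np.sin(X[:, 0]) + 7 * np.sin(X[:, 1]) ** 2   # toy simulator (input 3 is inert)
scores = [hsic(X[:, j], Y) for j in range(X.shape[1])]
print(scores)  # the non-influential input should receive a markedly smaller score
```

Ranking inputs by these scores, or thresholding them with an independence test as in the article, gives a cheap screening of non-influential inputs without extra simulator runs.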

2.
Measures of association between two sets of random variables have long been of interest to statisticians. The classical linear canonical correlation analysis (LCCA) can characterize, but is also limited to, linear association. This article introduces a nonlinear and nonparametric kernel method for association studies and proposes a new independence test for two sets of variables. This nonlinear kernel canonical correlation analysis (KCCA) can also be applied to nonlinear discriminant analysis. Implementation issues are discussed. We place the implementation of KCCA in the framework of classical LCCA via a sequence of independent systems in the kernel-associated Hilbert spaces. Such a placement provides an easy way to carry out the KCCA. Numerical experiments and comparisons with other nonparametric methods are presented.

3.
Global sensitivity analysis (GSA) can help practitioners focus on the inputs whose uncertainties have an impact on the model output, which allows the complexity of the model to be reduced. Screening, the qualitative branch of GSA, aims to identify and exclude non-influential or less-influential input variables in high-dimensional models. For non-parametric problems, however, finding an efficient screening procedure remains challenging, as one needs to handle the non-parametric high-order interactions among input variables properly while keeping the size of the screening experiment economically feasible. In this study, we design a novel screening approach based on the analysis-of-variance decomposition of the model. This approach combines the virtues of run-size economy and model independence. The core idea is to choose a low-level complete orthogonal array to derive the sensitivity estimates for all input factors and their interactions at low cost, and then to develop a statistical procedure to screen out the non-influential ones without assuming effect-sparsity of the model. Simulation studies show that the proposed approach performs well in various settings.

4.
Since correspondence analysis appears to be sensitive to outliers, it is important to be able to evaluate the influence of the data on the results. This article deals with measuring the influence of rows and columns on the results obtained with correspondence analysis. To establish the influence of individuals on the analysis, we use the notion of an influence curve and propose a general criterion based on the mean square error to measure the sensitivity of the correspondence analysis and its robustness. A numerical example is presented to illustrate the notions developed in this article.

5.
Symmetrical global sensitivity analysis (SGSA) can help practitioners focus on the symmetrical terms of inputs whose uncertainties have an impact on the model output, which allows the complexity of the model to be reduced. However, finding an efficient method to obtain symmetrical global sensitivity indices (SGSI) remains challenging when the functional form of the symmetrical terms is unknown, in both numerical and non-parametric situations. In this study, we propose a novel sampling plan, called a symmetrical design, for SGSA. As a preliminary experiment for extracting model features, such a plan offers the virtue of run-size economy owing to its closure with respect to the given group. Using the design, we give estimation methods for SGSI, together with their asymptotic properties, for numerical and non-parametric models directly from the model outputs, and further propose a significance test for SGSI in the non-parametric situation. A case study on a GSA benchmark and a real-data analysis show the effectiveness of the proposed design.

6.
Probabilistic sensitivity analysis (SA) allows background knowledge on the considered input variables to be incorporated more easily than many other existing SA techniques. Such knowledge is incorporated by constructing a joint density function over the input domain. However, it rarely happens that the available knowledge translates directly and uniquely into such a density function. A naturally arising question is then to what extent the choice of density function determines the values of the considered sensitivity measures. In this paper we perform simulation studies to address this question. Our empirical analysis suggests some guidelines, but also offers cautions to practitioners in the field of probabilistic SA.

7.
This article reviews symmetrical global sensitivity analysis based on the analysis of variance of the high-dimensional model representation. To overcome the computational difficulties and to explore the use of symmetrical design of experiments (SDOE), two methods are presented. If the form of the objective function f is known, we use the SDOE to estimate the symmetrical global sensitivity indices instead of Monte Carlo or quasi-Monte Carlo simulation. Otherwise, we use the observed values of the experiment to perform the symmetrical global sensitivity analysis. These methods are easy to implement and can reduce the computational cost. An example based on a symmetrical design of experiments is given.

8.
Sensitivity analysis (SA) of a numerical model, for instance one simulating physical phenomena, is useful to quantify the influence of the inputs on the model responses. This paper proposes a new sensitivity index, based upon modification of the probability density function (pdf) of the random inputs, when the quantity of interest is a failure probability (the probability that a model output exceeds a given threshold). An input is considered influential if modifying its pdf leads to a broad change in the failure probability. These sensitivity indices can be computed using the sole set of simulations that has already been used to estimate the failure probability, thus limiting the number of calls to the numerical model. In the case of a Monte Carlo sample, asymptotic properties of the indices are derived. Based on the Kullback–Leibler divergence, several types of input perturbation are introduced. The relevance of this new SA method is analysed through three case studies.
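The reuse of an already available Monte Carlo sample rests on likelihood-ratio reweighting. Below is a minimal sketch of that idea, assuming standard-normal inputs, a toy output and a simple mean-shift perturbation; the specific perturbation families and index definitions of the article are not reproduced here.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)
n = 100_000
X = rng.standard_normal((n, 2))                 # two standard-normal inputs
g = X[:, 0] + 0.2 * X[:, 1]                     # toy model output
threshold = 3.0
p_ref = np.mean(g > threshold)                  # baseline failure probability

def perturbed_failure_prob(sample, output, i, delta):
    """Re-estimate P(output > threshold) after shifting the mean of input i by delta,
    reusing the same Monte Carlo sample through likelihood-ratio weights."""
    w = norm.pdf(sample[:, i], loc=delta) / norm.pdf(sample[:, i])
    return np.mean(w * (output > threshold))

for i in range(2):
    p_pert = perturbed_failure_prob(X, g, i, delta=0.5)
    print(i, p_pert / p_ref)   # a large change flags an influential input
```

Comparing the perturbed estimate with the reference failure probability requires no new model evaluations, which is the point of the approach.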

9.
Compartmental models have been widely used to model systems in pharmacokinetics, engineering, biomedicine and ecology since 1943, and they turn out to be very good approximations for many different real-life systems. Sensitivity analysis (SA) is commonly employed at a preliminary stage of the model-development process to increase confidence in the model and its predictions by providing an understanding of how the model response variables respond to changes in the inputs, the data used to calibrate it, and the model structure. This paper concerns the application of some SA techniques to a linear, deterministic, time-invariant compartmental model of the global carbon cycle (GCC). The same approach is also illustrated with a more complex GCC model that has some nonlinear components. By focusing on these two structurally different models for estimating the atmospheric CO2 content in the year 2100, the sensitivity of model predictions to the uncertainty attached to the model input factors is studied. The application and modification of SA techniques to compartmental models with a steady-state constraint is explored using the eight-compartment model, and computational methods developed to maintain the initial steady-state condition are presented. To adjust the values of the model input factors and achieve an acceptable match between observed and predicted model conditions, windowing analysis is used.

10.
Abstract

Mutual information is a measure for investigating the dependence between two random variables. The copula-based estimation of mutual information reduces the complexity of the problem because it depends only on the copula density. We propose two estimators and discuss their asymptotic properties. To compare the performance of the estimators, a simulation study is carried out. The methods are illustrated using real data sets.
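The copula representation writes the mutual information as I(X;Y) = E[log c(U,V)], where c is the copula density and U, V are the probability-integral transforms of X and Y. The sketch below is one plug-in estimator along those lines, using rank pseudo-observations and a Gaussian kernel density estimate; it only illustrates the idea and is not one of the two estimators studied in the article (boundary bias of the KDE near 0 and 1 is ignored).

```python
import numpy as np
from scipy.stats import gaussian_kde, rankdata

def copula_mutual_information(x, y):
    """Plug-in estimate of I(X;Y) = E[log c(U,V)], with the copula density c
    estimated by a Gaussian KDE on the rank pseudo-observations."""
    n = len(x)
    u = rankdata(x) / (n + 1)          # pseudo-observations in (0, 1)
    v = rankdata(y) / (n + 1)
    kde = gaussian_kde(np.vstack([u, v]))
    c_hat = kde(np.vstack([u, v]))     # copula density evaluated at the sample
    return np.mean(np.log(c_hat))

rng = np.random.default_rng(2)
x = rng.standard_normal(2000)
y = 0.6 * x + 0.8 * rng.standard_normal(2000)    # correlated Gaussian pair, rho = 0.6
print(copula_mutual_information(x, y))            # compare with -0.5*log(1 - 0.36) ≈ 0.22
```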

11.
Complex computer codes are widely used in science to model physical systems. Sensitivity analysis aims to measure the contributions of the inputs to the variability of the code output. Efficient tools for such analysis are the variance-based methods, which have recently been investigated in the framework of dependent inputs. One of their issues is that they require a large number of runs of the complex simulators. To handle this, a Gaussian process (GP) regression model may be used to approximate the complex code. In this work, we propose to decompose a GP into a high-dimensional representation. This leads to the definition of a variance-based sensitivity measure well tailored to non-independent inputs. We give a methodology to estimate these indices and to quantify their uncertainty. Finally, the approach is illustrated on toy functions and on a river flood model.

12.
We consider the role of global robustness measures in Bayes linear analysis. We suggest two such measures, one for expectation comparisons and one for variance comparisons. Geometric interpretations of the measures are presented. The approach is illustrated by considering the robustness of certain multiplicative models to assumptions of independence, with particular application to a problem arising in an asset management model for water resources.

13.
The global sensitivity analysis method used to quantify the influence of uncertain input variables on the variability in numerical model responses has already been applied to deterministic computer codes; deterministic means here that the same set of input variables always gives the same output value. This paper proposes a global sensitivity analysis methodology for stochastic computer codes, for which the result of each code run is itself random. The framework of the joint modeling of the mean and dispersion of heteroscedastic data is used. To deal with the complexity of computer experiment outputs, nonparametric joint models are discussed and a new Gaussian process-based joint model is proposed. The relevance of these models is analyzed based upon two case studies. Results show that the joint modeling approach yields accurate sensitivity index estimators even when heteroscedasticity is strong.

14.
First- and second-order reliability algorithms (FORM and SORM) have been adapted for use in modeling uncertainty and sensitivity related to flow in porous media. They are called reliability algorithms because they were originally developed for analyzing the reliability of structures. FORM and SORM utilize a general joint probability model, the Nataf model, as a basis for transforming the original problem formulation into uncorrelated standard normal space, where a first-order or second-order estimate of the probability related to some failure criterion can easily be made. Sensitivity measures that incorporate the probabilistic nature of the uncertain variables in the problem are also evaluated, and are quite useful in indicating which uncertain variables contribute the most to the probabilistic outcome. In this paper the reliability approach is reviewed and its advantages and disadvantages are compared with those of other typical probabilistic techniques used for modeling flow and transport. Some example applications of FORM and SORM from recent research by the authors and others are reviewed. FORM and SORM have been shown to provide an attractive alternative to other probabilistic modeling techniques in some situations.

15.
Missing data are a common problem in almost all areas of empirical research. Ignoring the missing-data mechanism, especially when data are missing not at random (MNAR), can result in biased and/or inefficient inference. Because an MNAR mechanism is not verifiable from the observed data, sensitivity analysis is often used to assess it. Current sensitivity analysis methods primarily assume a model for the response mechanism in conjunction with a measurement model and examine sensitivity to the missing-data mechanism via the parameters of the response model. Recently, Jamshidian and Mata (Post-modelling sensitivity analysis to detect the effect of missing data mechanism, Multivariate Behav. Res. 43 (2008), pp. 432–452) introduced a new method of sensitivity analysis that does not require the difficult task of modelling the missing-data mechanism. In this method, a single measurement model is fitted to all of the data and to a sub-sample of the data, and the discrepancy between the parameter estimates obtained from the two data sets is used as a measure of sensitivity to the missing-data mechanism. Jamshidian and Mata describe their method mainly in the context of detecting data that are missing completely at random (MCAR), and they used a bootstrap-type method, which relies on heuristic input from the researcher, to test for discrepancy of the parameter estimates. Instead of using the bootstrap, the current article obtains confidence intervals for the parameter differences between the two samples based on an asymptotic approximation. Because it does not use the bootstrap, the developed procedure avoids the convergence problems that bootstrap methods are prone to, requires no heuristic input from the researcher, and can be readily implemented in statistical software. The article also discusses methods of obtaining sub-samples that may be used to test missing at random in addition to MCAR. An application of the developed procedure to a real data set, from the first wave of an ongoing longitudinal study on aging, is presented. Simulation studies using two methods of missing-data generation are performed as well and show promise for the proposed sensitivity method. One of the missing-data generation methods is also new and interesting in its own right.

16.
17.
Uncertainty and sensitivity analysis is an essential ingredient of model development and applications. For many uncertainty and sensitivity analysis techniques, sensitivity indices are calculated based on a relatively large sample to measure the importance of parameters in their contributions to uncertainties in model outputs. To statistically compare their importance, it is necessary that uncertainty and sensitivity analysis techniques provide standard errors of estimated sensitivity indices. In this paper, a delta method is used to analytically approximate standard errors of estimated sensitivity indices for a popular sensitivity analysis method, the Fourier amplitude sensitivity test (FAST). Standard errors estimated based on the delta method were compared with those estimated based on 20 sample replicates. We found that the delta method can provide a good approximation for the standard errors of both first-order and higher-order sensitivity indices. Finally, based on the standard error approximation, we also proposed a method to determine a minimum sample size to achieve the desired estimation precision for a specified sensitivity index. The standard error estimation method presented in this paper can make the FAST analysis computationally much more efficient for complex models.

18.
Estimating how a model output is influenced by variations of the inputs has become an important problem in reliability and sensitivity analysis. This article is concerned with estimating sensitivity indices that quantify the contribution of the inputs to the variance of the model output. A multivariate mixed kernel estimator is investigated since, until now, discrete and continuous inputs have been considered separately in kernel estimation of sensitivity indices. To illustrate the differences between the influence of mixed, discrete, and continuous inputs, analytical expressions of the Sobol sensitivity indices are derived in these three cases for the Ishigami test function. In addition, the performance of the mixed kernel estimator is illustrated through simulations in which a Bayesian procedure is applied to choose the bandwidth parameter. An application to a real example is also presented. Finally, using a kernel estimator appropriate to the type of inputs is found to affect the accuracy of the sensitivity-index estimates.
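For reference, the Ishigami test function and its first-order Sobol indices can be reproduced with a plain pick-freeze Monte Carlo estimator. The sketch below is a generic estimator under the usual independent uniform inputs on [-π, π]; it is not the article's kernel-based approach.

```python
import numpy as np

def ishigami(x, a=7.0, b=0.1):
    """Ishigami test function with the usual constants a = 7, b = 0.1."""
    return np.sin(x[:, 0]) + a * np.sin(x[:, 1]) ** 2 + b * x[:, 2] ** 4 * np.sin(x[:, 0])

def first_order_sobol(model, d, n, rng):
    """Pick-freeze Monte Carlo estimates of the first-order Sobol indices."""
    A = rng.uniform(-np.pi, np.pi, size=(n, d))
    B = rng.uniform(-np.pi, np.pi, size=(n, d))
    yA = model(A)
    var = np.var(yA)
    indices = []
    for i in range(d):
        ABi = B.copy()
        ABi[:, i] = A[:, i]           # freeze coordinate i, resample the others
        indices.append(np.cov(yA, model(ABi))[0, 1] / var)
    return indices

rng = np.random.default_rng(3)
print(first_order_sobol(ishigami, d=3, n=100_000, rng=rng))
# analytical values for the Ishigami function: S1 ≈ 0.314, S2 ≈ 0.442, S3 = 0
```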

19.
Uncertainty and sensitivity analyses for systems that involve both stochastic (i.e., aleatory) and subjective (i.e., epistemic) uncertainty are discussed. In such analyses, the dependent variable is usually a complementary cumulative distribution function (CCDF) that arises from stochastic uncertainty; uncertainty analysis involves the determination of a distribution of CCDFs that results from subjective uncertainty, and sensitivity analysis involves the determination of the effects of subjective uncertainty in individual variables on this distribution of CCDFs. Uncertainty analysis is presented as an integration problem involving probability spaces for stochastic and subjective uncertainty. Approximation procedures for the underlying integrals are described that provide an assessment of the effects of stochastic uncertainty, an assessment of the effects of subjective uncertainty, and a basis for performing sensitivity studies. Extensive use is made of Latin hypercube sampling, importance sampling and regression-based sensitivity analysis techniques. The underlying ideas, which are initially presented in an abstract form, are central to the design and performance of real analyses. To emphasize the connection between concept and computational practice, these ideas are illustrated with an analysis involving the MACCS reactor accident consequence model, a performance assessment for the Waste Isolation Pilot Plant, and a probabilistic risk assessment for a nuclear power station.
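The two-loop structure behind the distribution of CCDFs can be sketched with plain random sampling; the actual analyses use Latin hypercube sampling and real consequence models, so the toy model, distributions and sample sizes below are placeholders only.

```python
import numpy as np

rng = np.random.default_rng(4)
thresholds = np.linspace(0, 10, 50)

def model(aleatory, rate):
    # toy consequence model: an aleatory demand scaled by an epistemically uncertain rate
    return rate * aleatory

ccdfs = []
for _ in range(100):                                # outer loop: epistemic (subjective) uncertainty
    rate = rng.uniform(1.0, 3.0)                    # subjectively uncertain parameter
    demand = rng.exponential(1.0, size=5_000)       # inner loop: stochastic (aleatory) variability
    y = model(demand, rate)
    ccdfs.append([(y > t).mean() for t in thresholds])  # one CCDF per epistemic draw

ccdfs = np.array(ccdfs)
# the spread of the CCDF family across rows summarizes the effect of subjective uncertainty
print(np.percentile(ccdfs, [5, 50, 95], axis=0)[:, :5])
```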

20.
An information-theoretic test is proposed for the existence of direct links between the components of a multivariate time series. A conditional mutual information statistic is constructed to test conditional independence between the components, and the significance of the statistic is determined by a permutation test. The proposed method is applied to international stock markets to study the dependence structure of return series; the results show that the method can effectively distinguish direct from indirect links between the components.
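A simplified version of the idea can be sketched with a binned conditional mutual information estimate and a within-stratum permutation scheme; this is only an illustration on assumed toy data, not the statistic or permutation design of the article.

```python
import numpy as np

def cmi_binned(x, y, z, bins=4):
    """Conditional mutual information I(X;Y|Z) estimated from quantile-binned data."""
    def discretize(v):
        edges = np.quantile(v, np.linspace(0, 1, bins + 1)[1:-1])
        return np.digitize(v, edges)
    xs, ys, zs = discretize(x), discretize(y), discretize(z)
    cmi = 0.0
    for k in np.unique(zs):
        m = zs == k
        pz = m.mean()
        joint = np.zeros((bins, bins))
        for a, b in zip(xs[m], ys[m]):
            joint[a, b] += 1
        joint /= m.sum()
        px = joint.sum(axis=1, keepdims=True)
        py = joint.sum(axis=0, keepdims=True)
        nz = joint > 0
        cmi += pz * np.sum(joint[nz] * np.log(joint[nz] / (px @ py)[nz]))
    return cmi

def permutation_test(x, y, z, n_perm=200, rng=None):
    """Permute x within strata of (binned) z to mimic the null of conditional independence."""
    rng = rng or np.random.default_rng()
    observed = cmi_binned(x, y, z)
    zs = np.digitize(z, np.quantile(z, [0.25, 0.5, 0.75]))
    count = 0
    for _ in range(n_perm):
        xp = x.copy()
        for k in np.unique(zs):
            idx = np.where(zs == k)[0]
            xp[idx] = xp[rng.permutation(idx)]
        count += cmi_binned(xp, y, z) >= observed
    return count / n_perm          # permutation p-value

rng = np.random.default_rng(5)
z = rng.standard_normal(1000)               # e.g. a common market factor
x = 0.7 * z + rng.standard_normal(1000)     # two return series driven by z
y = 0.7 * z + rng.standard_normal(1000)     # x and y are only indirectly linked
print(permutation_test(x, y, z, rng=rng))   # large p-value: no direct link beyond z
```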
