Similar Articles
1.
As modeling efforts expand to a broader spectrum of areas, the amount of computer time required to exercise the corresponding computer codes has become quite costly (several hours for a single run is not uncommon). This cost can be tied directly to the complexity of the modeling and to the large number of input variables (often numbering in the hundreds). Further, the complexity of the modeling (usually involving systems of differential equations) makes the relationships among the input variables mathematically intractable. In this setting it is desired to perform sensitivity studies of the input-output relationships. Hence, a judicious procedure for selecting values of the input variables is required; Latin hypercube sampling has been shown to work well on this type of problem.

However, a variety of situations require that decisions and judgments be made in the face of uncertainty. The source of this uncertainty may be lack of knowledge about probability distributions associated with input variables, or about different hypothesized future conditions, or may be present as a result of different strategies associated with a decision-making process. In this paper a generalization of Latin hypercube sampling is given that allows these areas to be investigated without making additional computer runs. In particular, it is shown how weights associated with Latin hypercube input vectors may be changed to reflect different probability distribution assumptions on key input variables and yet provide an unbiased estimate of the cumulative distribution function of the output variable. This allows different distribution assumptions on input variables to be studied without additional computer runs and without fitting a response surface. In addition, these same weights can be used in a modified nonparametric Friedman test to compare treatments. Sample size requirements needed to apply the results of this work are also considered. The procedures presented in this paper are illustrated using a model associated with the risk assessment of geologic disposal of radioactive waste.
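A minimal sketch of the sampling step described above, assuming a unit hypercube and using NumPy; the reweighting comment only illustrates the general idea of attaching importance weights w ∝ g(x)/f(x) to existing runs, not the paper's specific weighting scheme, and the beta/uniform densities are hypothetical examples.

```python
import numpy as np
from scipy.stats import beta, uniform

def latin_hypercube(n_runs, n_inputs, rng=None):
    """Draw a Latin hypercube sample on [0, 1)^d: one point per stratum in each dimension."""
    rng = np.random.default_rng(rng)
    # stratified draws: row i falls in stratum i of each column before shuffling
    sample = (np.arange(n_runs)[:, None] + rng.random((n_runs, n_inputs))) / n_runs
    for j in range(n_inputs):
        sample[:, j] = rng.permutation(sample[:, j])
    return sample

X = latin_hypercube(100, 3, rng=1)

# Re-using the same runs under a different input distribution (illustrative only):
# if column 0 was sampled under density f but an alternative density g is of interest,
# weight each existing run by g(x)/f(x) and form a weighted empirical CDF of the output.
w = beta(2, 5).pdf(X[:, 0]) / uniform(0, 1).pdf(X[:, 0])
w /= w.sum()
```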

2.
Compartmental models have been widely used in modelling systems in pharmacokinetics, engineering, biomedicine and ecology since 1943 and turn out to be very good approximations for many different real-life systems. Sensitivity analysis (SA) is commonly employed at a preliminary stage of the model development process to increase confidence in the model and its predictions by providing an understanding of how the model response variables respond to changes in the inputs, the data used to calibrate it and the model structure. This paper concerns the application of some SA techniques to a linear, deterministic, time-invariant compartmental model of the global carbon cycle (GCC). The same approach is also illustrated with a more complex GCC model which has some nonlinear components. By focusing on these two structurally different models for estimating the atmospheric CO2 content in the year 2100, the sensitivity of model predictions to uncertainty attached to the model input factors is studied. The application/modification of SA techniques to compartmental models with a steady-state constraint is explored using the 8-compartment model, and computational methods developed to maintain the initial steady-state condition are presented. In order to adjust the values of model input factors to achieve an acceptable match between observed and predicted model conditions, windowing analysis is used.
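For context on the kind of model involved (not the paper's GCC model), a linear compartmental system is a set of coupled ODEs dx/dt = A x, and a one-at-a-time local sensitivity can be obtained by perturbing a transfer coefficient and re-running; a minimal sketch with hypothetical two-compartment rates:

```python
import numpy as np
from scipy.integrate import solve_ivp

def simulate(k12=0.1, k21=0.05, x0=(800.0, 200.0), t_end=100.0):
    """Two-compartment linear model, compartment 1 <-> compartment 2 (illustrative rates)."""
    A = np.array([[-k12,  k21],
                  [ k12, -k21]])
    sol = solve_ivp(lambda t, x: A @ x, (0.0, t_end), x0)
    return sol.y[:, -1]                                    # state at t_end

base = simulate()
# one-at-a-time local sensitivity of compartment 1 at t_end to a 1% change in k12
pert = simulate(k12=0.1 * 1.01)
sensitivity = (pert[0] - base[0]) / (base[0] * 0.01)       # relative change / relative change
print(base, sensitivity)
```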

3.
The evaluation of hazards from complex, large scale, technologically advanced systems often requires the construction of computer-implemented mathematical models. These models are used to evaluate the safety of the systems and the consequences of modifications to them. These evaluations, however, are normally surrounded by significant uncertainties, both those inherent in natural phenomena such as the weather and those related to the parameters and models used in the evaluation.

Another use of these models is to evaluate strategies for improving the information used in the modeling process itself. While sensitivity analysis is useful in identifying the variables in the model that are important, uncertainty analysis provides a tool for assessing the importance of uncertainty about these variables. A third, complementary technique is decision analysis: it provides a methodology for explicitly evaluating and ranking potential improvements to the model. Its use in the development of information-gathering strategies for a nuclear waste repository is discussed in this paper.

4.
Running complex computer models can be expensive in computer time, while learning about the relationships between input and output variables can be difficult. An emulator is a fast approximation to a computationally expensive model that can be used as a surrogate for the model, to quantify uncertainty or to improve process understanding. Here, we examine emulators based on singular value decompositions (SVDs) and use them to emulate global climate and vegetation fields, examining how these fields are affected by changes in the Earth's orbit. The vegetation field may be emulated directly from the orbital variables, but an appealing alternative is to relate it to emulations of the climate fields, which involves high-dimensional input and output. The SVDs radically reduce the dimensionality of the input and output spaces and are shown to clarify the relationships between them. The method could potentially be useful for any complex process with correlated, high-dimensional inputs and/or outputs.
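A minimal sketch of the general idea of an SVD-based emulator (not the paper's climate application): project the high-dimensional output fields onto a few leading singular vectors, regress the resulting scores on the inputs, and reconstruct predictions in the original space. The regression here is ordinary least squares for brevity; a Gaussian process would be a common alternative, and all data below are synthetic stand-ins.

```python
import numpy as np

rng = np.random.default_rng(0)
n_runs, n_grid, n_inputs = 40, 500, 3
X = rng.random((n_runs, n_inputs))                      # model inputs (e.g. orbital variables)
Y = np.sin(X @ rng.random((n_inputs, n_grid)) * 3)      # stand-in for high-dimensional output fields

Y_mean = Y.mean(axis=0)
U, s, Vt = np.linalg.svd(Y - Y_mean, full_matrices=False)
k = 5                                                   # retain a few leading modes
scores = U[:, :k] * s[:k]                               # low-dimensional representation of the outputs

# regress each score on the inputs (with an intercept), then emulate a new input x_new
D = np.column_stack([np.ones(n_runs), X])
coef, *_ = np.linalg.lstsq(D, scores, rcond=None)
x_new = rng.random(n_inputs)
score_pred = np.concatenate([[1.0], x_new]) @ coef
field_pred = Y_mean + score_pred @ Vt[:k]               # reconstructed output field
```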

5.
A global sensitivity analysis of complex computer codes is usually performed by calculating the Sobol indices. The indices are estimated using Monte Carlo methods. The Monte Carlo simulations are time-consuming even if the computer response is replaced by a metamodel. This paper proposes a new method for calculating sensitivity indices that avoids Monte Carlo estimation. The method assumes a discretization of the domain of simulation and uses the expansion of the computer response on an orthogonal basis of complex functions to build a metamodel. This metamodel is then used to derive an analytical estimation of the Sobol indices. This approach is successfully tested on analytical functions and is compared with two alternative methods.
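For reference, the Monte Carlo estimation that the paper seeks to avoid typically looks like the following first-order Sobol estimator (a Saltelli-style "pick-freeze" scheme, sketched here on a toy function rather than an expensive code; the analytical-expansion method itself is not shown).

```python
import numpy as np

def sobol_first_order(model, n_inputs, n_samples=2**14, rng=None):
    """Crude Monte Carlo estimate of first-order Sobol indices (pick-freeze scheme)."""
    rng = np.random.default_rng(rng)
    A = rng.random((n_samples, n_inputs))
    B = rng.random((n_samples, n_inputs))
    yA, yB = model(A), model(B)
    var = np.var(np.concatenate([yA, yB]))
    S = np.empty(n_inputs)
    for i in range(n_inputs):
        ABi = A.copy()
        ABi[:, i] = B[:, i]                 # replace only the i-th column of A by that of B
        S[i] = np.mean(yB * (model(ABi) - yA)) / var
    return S

# toy example: additive function with known, unequal contributions
print(sobol_first_order(lambda x: x[:, 0] + 2 * x[:, 1] + 0.1 * x[:, 2], 3))
```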

6.
In electrical engineering, circuit designs are now often optimized via circuit simulation computer models. Typically, many response variables characterize the circuit's performance. Each response is a function of many input variables, including factors that can be set in the engineering design and noise factors representing manufacturing conditions. We describe a modelling approach which is appropriate for the simulator's deterministic input–output relationships. Non-linearities and interactions are identified without explicit assumptions about the functional form. These models lead to predictors to guide the reduction of the ranges of the designable factors in a sequence of experiments. Ultimately, the predictors are used to optimize the engineering design. We also show how a visualization of the fitted relationships facilitates an understanding of the engineering trade-offs between responses. The example used to demonstrate these methods, the design of a buffer circuit, has multiple targets for the responses, representing different trade-offs between the key performance measures.

7.
Hazard rate functions are often used in modeling lifetime data. The Exponential Power Series (EPS) family has a monotone hazard rate function. In this article, the influence of input factors such as time and the parameters on the variability of the hazard rate function is assessed by local and global sensitivity analysis. Two different indices based on local and global sensitivity indices are presented. The simulation results for two datasets show that the hazard rate functions of the EPS family are sensitive to the input parameters. The results also show that the hazard rate function of the EPS family is more sensitive to the exponential distribution than to the power series distributions.
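A generic sketch of the kind of local sensitivity measure involved, using the hazard rate h(t) = f(t)/S(t) and a normalized finite-difference derivative; the plain exponential distribution stands in for a member of the EPS family, whose exact hazard is not reproduced here, so treat this purely as an illustration of the machinery.

```python
import numpy as np
from scipy.stats import expon

def hazard(t, rate):
    """Hazard rate h(t) = f(t) / S(t); constant for the exponential stand-in."""
    d = expon(scale=1.0 / rate)
    return d.pdf(t) / d.sf(t)

def local_sensitivity(t, rate, eps=1e-5):
    """Normalized local sensitivity of h(t) to the rate parameter: (theta / h) * dh/dtheta."""
    dh = (hazard(t, rate * (1 + eps)) - hazard(t, rate * (1 - eps))) / (2 * rate * eps)
    return rate * dh / hazard(t, rate)

# exponential hazard equals the rate, so its normalized sensitivity to the rate is 1
print(hazard(2.0, 0.5), local_sensitivity(2.0, 0.5))
```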

8.
Kriging models have been widely used in computer experiments for the analysis of time-consuming computer codes. Based on kernels, they are flexible and can be tuned to many situations. In this paper, we construct kernels that reproduce the computer code's complexity by mimicking its interaction structure. While the standard tensor-product kernel implicitly assumes that all interactions are active, the new kernels are suited to a general interaction structure and take advantage of the absence of interaction between some inputs. The methodology is twofold. First, the interaction structure is estimated from the data, using an initial standard Kriging model, and represented by a so-called FANOVA graph. New FANOVA-based sensitivity indices are introduced to detect active interactions. Then this graph is used to derive the form of the kernel, and the corresponding Kriging model is estimated by maximum likelihood. The performance of the overall procedure is illustrated by several 3-dimensional and 6-dimensional simulated and real examples. A substantial improvement is observed when the computer code has a relatively high level of complexity.
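One plausible way to encode an interaction graph in a kernel, in the spirit described above though not necessarily the paper's exact construction: sum, over the cliques of the FANOVA graph, products of one-dimensional Gaussian kernels restricted to the variables in each clique (sums and products of kernels are again valid kernels). The cliques and lengthscales below are hypothetical.

```python
import numpy as np

def clique_kernel(x, y, cliques, lengthscales):
    """k(x, y) = sum over cliques of the product of 1-d Gaussian kernels on the clique's inputs."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    total = 0.0
    for clique in cliques:
        prod = 1.0
        for i in clique:
            prod *= np.exp(-0.5 * ((x[i] - y[i]) / lengthscales[i]) ** 2)
        total += prod
    return total

# hypothetical 4-input code: inputs 0 and 1 interact, input 2 acts additively,
# input 3 is inert and therefore appears in no clique
cliques = [(0, 1), (2,)]
ls = [0.5, 0.5, 1.0, 1.0]
print(clique_kernel([0.1, 0.2, 0.3, 0.9], [0.2, 0.1, 0.4, 0.0], cliques, ls))
```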

9.
In disease mapping, health outcomes measured at the same spatial locations may be correlated, so one can consider jointly modeling the multivariate health outcomes while accounting for their dependence. The general approaches often used for joint modeling include shared component models and multivariate models. An alternative way to model the association between two health outcomes, when one outcome can naturally serve as a covariate of the other, is to use an ecological regression model. For example, in our application, preterm birth (PTB) can be treated as a predictor for low birth weight (LBW) and vice versa. Therefore, we propose to blend ideas from joint modeling and ecological regression methods to jointly model the relative risks for LBW and PTB over the health districts in Saskatchewan, Canada, in 2000–2010. This approach is helpful when proxies for areal-level contextual factors can be derived from the outcomes themselves and direct information on risk factors is not readily available. Our results indicate that the proposed approach improves the model fit when compared with the conventional joint modeling methods. Further, we show that when no strong spatial autocorrelation is present, joint outcome modeling using only independent error terms can still provide a better model fit than separate modeling.

10.
Probabilistic sensitivity analysis of complex models: a Bayesian approach
In many areas of science and technology, mathematical models are built to simulate complex real-world phenomena. Such models are typically implemented in large computer programs and are also very complex, such that the way that the model responds to changes in its inputs is not transparent. Sensitivity analysis is concerned with understanding how changes in the model inputs influence the outputs. This may be motivated simply by a wish to understand the implications of a complex model but often arises because there is uncertainty about the true values of the inputs that should be used for a particular application. A broad range of measures have been advocated in the literature to quantify and describe the sensitivity of a model's output to variation in its inputs. In practice the most commonly used measures are those that are based on formulating uncertainty in the model inputs by a joint probability distribution and then analysing the induced uncertainty in outputs, an approach which is known as probabilistic sensitivity analysis. We present a Bayesian framework which unifies the various tools of probabilistic sensitivity analysis. The Bayesian approach is computationally highly efficient. It allows effective sensitivity analysis to be achieved by using far smaller numbers of model runs than standard Monte Carlo methods. Furthermore, all measures of interest may be computed from a single set of runs.
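The computational gain comes from replacing the expensive code by an emulator fitted to a small number of runs and doing the sensitivity calculations on the emulator; a minimal sketch of that workflow using a Gaussian process from scikit-learn. The paper's framework is fully Bayesian, so treat this purely as an illustration; the toy simulator and main-effect summary below are assumptions, not the paper's measures.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def expensive_code(x):                       # stand-in for a slow simulator
    return np.sin(3 * x[:, 0]) + 0.5 * x[:, 1] ** 2

rng = np.random.default_rng(0)
X_train = rng.random((30, 2))                # a small number of model runs
y_train = expensive_code(X_train)

gp = GaussianProcessRegressor(kernel=RBF(length_scale=[0.3, 0.3]), normalize_y=True)
gp.fit(X_train, y_train)

# a main-effect summary computed cheaply on the emulator instead of the code:
# variance over a grid of the average emulator prediction with one input held fixed
grid = np.linspace(0, 1, 50)
base = rng.random((2000, 2))
main_effect_var = []
for i in range(2):
    means = []
    for g in grid:
        pts = base.copy()
        pts[:, i] = g
        means.append(gp.predict(pts).mean())
    main_effect_var.append(np.var(means))
print(main_effect_var)
```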

11.
Deterministic computer simulations are often used as replacements for complex physical experiments. Although less expensive than physical experimentation, computer codes can still be time-consuming to run. An effective strategy for exploring the response surface of the deterministic simulator is the use of an approximation to the computer code, such as a Gaussian process (GP) model, coupled with a sequential sampling strategy for choosing design points that can be used to build the GP model. The ultimate goal of such studies is often the estimation of specific features of interest of the simulator output, such as the maximum, minimum, or a level set (contour). Before approximating such features with the GP model, sufficient runs of the computer simulator must be completed. Sequential designs with an expected improvement (EI) design criterion can yield good estimates of the features with a minimal number of runs. The challenge is that the expected improvement function itself is often multimodal and difficult to maximize. We develop branch and bound algorithms for efficiently maximizing the EI function in specific problems, including the simultaneous estimation of a global maximum and minimum, and the estimation of a contour. These branch and bound algorithms outperform other optimization strategies such as genetic algorithms, and can lead to significantly more accurate estimation of the features of interest.
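For reference, the expected improvement criterion for maximization under a GP posterior with mean mu(x), standard deviation sigma(x) and current best observed value f*; a hedged sketch of the criterion itself, with the branch and bound maximization that the paper develops left out.

```python
import numpy as np
from scipy.stats import norm

def expected_improvement(mu, sigma, f_best):
    """EI for maximization: E[max(Y - f_best, 0)] for Y ~ N(mu, sigma^2)."""
    mu, sigma = np.asarray(mu, float), np.asarray(sigma, float)
    with np.errstate(divide="ignore", invalid="ignore"):
        z = (mu - f_best) / sigma
        ei = (mu - f_best) * norm.cdf(z) + sigma * norm.pdf(z)
    return np.where(sigma > 0, ei, 0.0)      # no improvement possible where the GP is certain

print(expected_improvement([1.0, 2.0], [0.5, 0.1], f_best=1.8))
```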

12.
Bayesian calibration of computer models
We consider prediction and uncertainty analysis for systems which are approximated using complex mathematical models. Such models, implemented as computer codes, are often generic in the sense that by a suitable choice of some of the model's input parameters the code can be used to predict the behaviour of the system in a variety of specific applications. However, in any specific application the values of necessary parameters may be unknown. In this case, physical observations of the system in the specific context are used to learn about the unknown parameters. The process of fitting the model to the observed data by adjusting the parameters is known as calibration. Calibration is typically effected by ad hoc fitting, and after calibration the model is used, with the fitted input values, to predict the future behaviour of the system. We present a Bayesian calibration technique which improves on this traditional approach in two respects. First, the predictions allow for all sources of uncertainty, including the remaining uncertainty over the fitted parameters. Second, they attempt to correct for any inadequacy of the model which is revealed by a discrepancy between the observed data and the model predictions from even the best-fitting parameter values. The method is illustrated by using data from a nuclear radiation release at Tomsk, and from a more complex simulated nuclear accident exercise.
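The statistical structure behind this kind of calibration is commonly written as field observation = simulator output at the true calibration value + model discrepancy + observation error. A hedged sketch of that relation (the original formulation also includes a regression scale parameter multiplying the simulator term, omitted here for brevity):

$$ z_i \;=\; \eta(x_i, \theta) \;+\; \delta(x_i) \;+\; \varepsilon_i, \qquad \varepsilon_i \sim N(0, \lambda^2), $$

where $z_i$ is the field observation at input $x_i$, $\eta$ is the computer model run at calibration value $\theta$, $\delta$ is the discrepancy function (typically given a Gaussian process prior), and $\varepsilon_i$ is observation error.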

13.
Global sensitivity analysis (GSA) can help practitioners focus on the inputs whose uncertainties have an impact on the model output, which allows reducing the complexity of the model. Screening, the qualitative branch of GSA, aims to identify and exclude non- or less-influential input variables in high-dimensional models. However, for non-parametric problems, finding an efficient screening procedure remains challenging, as one needs to properly handle the non-parametric high-order interactions among input variables and keep the size of the screening experiment economically feasible. In this study, we design a novel screening approach based on an analysis-of-variance decomposition of the model. This approach combines the virtues of run-size economy and model independence. The core idea is to choose a low-level complete orthogonal array to derive sensitivity estimates for all input factors and their interactions at low cost, and then develop a statistical process to screen out the non-influential ones without assuming effect sparsity of the model. Simulation studies show that the proposed approach performs well in various settings.
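A toy sketch of effect estimation from a designed experiment, to fix ideas only: the paper uses a low-level complete orthogonal array and a dedicated statistical screening step, whereas this uses a small two-level full factorial and plain contrast estimates, so the code is an assumption-laden stand-in rather than the proposed procedure.

```python
import numpy as np
from itertools import product, combinations

def two_level_effects(model, n_inputs):
    """Main-effect and two-factor-interaction contrasts from a 2-level full factorial (+/-1 coding)."""
    design = np.array(list(product([-1.0, 1.0], repeat=n_inputs)))
    y = model(design)
    mains = {i: np.mean(y * design[:, i]) for i in range(n_inputs)}
    twofis = {(i, j): np.mean(y * design[:, i] * design[:, j])
              for i, j in combinations(range(n_inputs), 2)}
    return mains, twofis

# toy code: inputs 0 and 1 matter (and interact), input 2 is inert
mains, twofis = two_level_effects(lambda d: d[:, 0] + 2 * d[:, 1] + d[:, 0] * d[:, 1], 3)
print(mains)
print(twofis)
```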

14.
To build a predictor, the output of a deterministic computer model or “code” is often treated as a realization of a stochastic process indexed by the code's input variables. The authors consider an asymptotic form of the Gaussian correlation function for the stochastic process where the correlation tends to unity. They show that the limiting best linear unbiased predictor involves Lagrange interpolating polynomials; linear model terms are implicitly included. The authors then develop optimal designs based on minimizing the limiting integrated mean squared error of prediction. They show through several examples that these designs lead to good prediction accuracy.
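For intuition on the limiting predictor, recall that a Lagrange interpolating polynomial passes exactly through the design points; a quick check with SciPy on a made-up one-dimensional example (illustrative only, not the paper's optimal designs).

```python
import numpy as np
from scipy.interpolate import lagrange

x_design = np.array([0.0, 0.25, 0.5, 0.75, 1.0])          # design points for the computer code
y_design = np.sin(2 * np.pi * x_design)                    # stand-in for deterministic code output

poly = lagrange(x_design, y_design)                        # degree-4 interpolating polynomial
print(poly(x_design) - y_design)                           # ~0: the predictor interpolates the runs
print(poly(0.6), np.sin(2 * np.pi * 0.6))                  # prediction vs. truth at a new point
```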

15.
We propose a general Bayesian joint modeling approach for mixed longitudinal outcomes from the exponential family that takes into account any differential misclassification that may exist among the categorical outcomes. Under this framework, outcomes observed without measurement error are related to latent trait variables through generalized linear mixed effect models. The misclassified outcomes are related to latent class variables, which represent unobserved real states, using mixed hidden Markov models (MHMMs). In addition to enabling the estimation of parameters in prevalence, transition and misclassification probabilities, MHMMs capture cluster-level heterogeneity. A transition modeling structure allows the latent trait and latent class variables to depend on observed predictors at the same time period and also on latent trait and latent class variables at previous time periods for each individual. Simulation studies are conducted to make comparisons with traditional models in order to illustrate the gains from the proposed approach. The new approach is applied to data from the Southern California Children's Health Study to jointly model questionnaire-based asthma state and multiple lung function measurements in order to gain better insight into the underlying biological mechanism that governs the inter-relationship between asthma state and lung function development.

16.
First- and second-order reliability algorithms (FORM and SORM) have been adapted for use in modeling uncertainty and sensitivity related to flow in porous media. They are called reliability algorithms because they were originally developed for analyzing the reliability of structures. FORM and SORM utilize a general joint probability model, the Nataf model, as a basis for transforming the original problem formulation into uncorrelated standard normal space, where a first-order or second-order estimate of the probability related to some failure criterion can easily be made. Sensitivity measures that incorporate the probabilistic nature of the uncertain variables in the problem are also evaluated, and are quite useful in indicating which uncertain variables contribute the most to the probabilistic outcome. In this paper the reliability approach is reviewed and its advantages and disadvantages are compared with other typical probabilistic techniques used for modeling flow and transport. Some example applications of FORM and SORM from recent research by the authors and others are reviewed. FORM and SORM have been shown to provide an attractive alternative to other probabilistic modeling techniques in some situations.
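A bare-bones sketch of the FORM step in standard normal space: find the most probable failure point by minimizing ||u|| subject to the limit state g(u) = 0, set the reliability index beta = ||u*||, and approximate the failure probability by Phi(-beta). This uses a generic SciPy optimizer and a made-up linear limit state rather than the Nataf transformation and the flow models discussed above.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def g(u):
    """Hypothetical limit state in standard normal space; failure when g(u) <= 0."""
    return 3.0 - u[0] - 0.5 * u[1]

res = minimize(lambda u: np.dot(u, u),                  # squared distance to the origin
               x0=np.zeros(2),
               constraints=[{"type": "eq", "fun": g}])  # stay on the limit-state surface
beta = np.linalg.norm(res.x)                            # reliability index
pf_form = norm.cdf(-beta)                               # first-order failure probability estimate
print(beta, pf_form)
```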

17.
Likelihood-based, mixed-effects models for repeated measures (MMRMs) are occasionally used in primary analyses for group comparisons of incomplete continuous longitudinal data. Although MMRM analysis is generally valid under missing-at-random assumptions, it is invalid under not-missing-at-random (NMAR) assumptions. We consider the possibility of bias in the estimated treatment effect from standard MMRM analysis in a motivating case, and propose simple and easily implementable pattern mixture models within the framework of mixed-effects modeling to handle NMAR data with differential missingness between treatment groups. The proposed models are a new form of pattern mixture model that employs a categorical time variable when modeling the outcome and a continuous time variable when modeling the missingness-data patterns. The models can directly provide an overall estimate of the treatment effect of interest using the average of the distribution of the missingness indicator and a categorical time variable in the same manner as MMRM analysis. Our simulation results indicate that the bias of the treatment effect for MMRM analysis was considerably larger than that for the pattern mixture model analysis under NMAR assumptions. In the case study, it would be dangerous to interpret only the results of the MMRM analysis, and the proposed pattern mixture model would be useful as a sensitivity analysis for treatment effect evaluation.

18.
The autoregressive model for cointegrated variables is analyzed with respect to the role of the constant and linear terms. Various models for I(1) variables defined by restrictions on the deterministic terms are discussed, and it is shown that statistical inference can be performed by reduced rank regression. The asymptotic distributions of the test statistics and estimators are found. A similar analysis is given for models for I(2) variables with a constant term.
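In practice this reduced rank regression is carried out via the Johansen procedure; a small sketch with simulated I(1) data using statsmodels. The call and attribute names below reflect the statsmodels API as I recall it (coint_johansen, lr1, cvt), so check them against the installed version before relying on this.

```python
import numpy as np
from statsmodels.tsa.vector_ar.vecm import coint_johansen

rng = np.random.default_rng(0)
n = 500
common = np.cumsum(rng.normal(size=n))                 # shared stochastic trend -> cointegration
y1 = common + rng.normal(scale=0.5, size=n)
y2 = 0.8 * common + rng.normal(scale=0.5, size=n)
data = np.column_stack([y1, y2])

# det_order=0: constant term; k_ar_diff: number of lagged differences included
result = coint_johansen(data, det_order=0, k_ar_diff=1)
print(result.lr1)    # trace test statistics for rank <= 0, 1
print(result.cvt)    # corresponding critical values (90/95/99%)
```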

19.
In this paper, we propose a new methodology for solving stochastic inversion problems through computer experiments, the stochasticity being driven by a functional random variable. This study is motivated by an automotive application. In this context, the simulator code takes a double set of simulation inputs: deterministic control variables and functional uncertain variables. This framework is characterized by two features. The first is the high computational cost of simulations. The second is that the probability distribution of the functional input is only known through a finite set of realizations. In our context, the inversion problem is formulated by considering the expectation over the functional random variable. We aim to solve this problem by evaluating the model on a design whose adaptive construction combines the so-called stepwise uncertainty reduction methodology with a strategy for efficient expectation estimation. Two greedy strategies are introduced to sequentially estimate the expectation over the functional uncertain variable by adaptively selecting curves from the initial set of realizations. Both strategies use functional principal component analysis (FPCA) as an intermediate dimensionality reduction step, assuming that the realizations of the functional input are independent realizations of the same continuous stochastic process. The first strategy is based on a greedy approach for functional data-driven quantization, while the second is linked to the notion of space-filling design. For each point of the design built in the reduced space, we select the corresponding curve from the sample of available curves, thus guaranteeing the robustness of the procedure to dimension reduction. The whole methodology is illustrated and calibrated on an analytical example. It is then applied to an automotive industrial test case where we aim to identify the set of control parameters leading to compliance with the pollutant emission standards of a vehicle.
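A minimal sketch of the functional PCA step used for dimensionality reduction, treating each realization of the functional input as a discretized curve and computing the principal components from the centered curve matrix via an SVD. The curves below are synthetic, and the adaptive design and quantization strategies of the paper are not shown.

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 200)                                   # common discretization grid
curves = np.array([np.sin(2 * np.pi * (t + rng.normal(0, 0.05)))   # sample of realizations of
                   + rng.normal(0, 0.1, t.size)                     # the functional uncertain input
                   for _ in range(50)])

mean_curve = curves.mean(axis=0)
U, s, Vt = np.linalg.svd(curves - mean_curve, full_matrices=False)
explained = s**2 / np.sum(s**2)
k = int(np.searchsorted(np.cumsum(explained), 0.99) + 1)     # keep enough modes for 99% variance
scores = U[:, :k] * s[:k]                                    # low-dimensional coordinates of each curve
print(k, scores.shape)
```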
