Similar Articles
20 similar articles found (search time: 15 ms)
1.
This paper deals with the convergence of the expected improvement algorithm, a popular global optimization algorithm based on a Gaussian process model of the function to be optimized. The first result is that, under some mild hypotheses on the covariance function k of the Gaussian process, the expected improvement algorithm produces a dense sequence of evaluation points in the search domain when the function to be optimized lies in the reproducing kernel Hilbert space generated by k. The second result states that the density property also holds for P-almost all continuous functions, where P is the (prior) probability distribution induced by the Gaussian process.
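The expected improvement criterion at the heart of this algorithm has a simple closed form under the Gaussian process posterior. A minimal sketch for minimization, assuming a posterior mean `mu`, posterior standard deviation `sd`, and current best observed value `f_best` at a candidate point (the function name and toy values are illustrative, not from the paper):

```python
import math

def expected_improvement(mu, sd, f_best):
    """EI for minimization at a point with GP posterior mean `mu` and
    posterior standard deviation `sd`, given the best observed value
    `f_best`.  Returns the deterministic improvement when sd == 0."""
    if sd <= 0.0:
        return max(f_best - mu, 0.0)
    z = (f_best - mu) / sd
    pdf = math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)
    cdf = 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))
    return (f_best - mu) * cdf + sd * pdf
```

Maximizing this quantity over the search domain selects the next evaluation point; the density result above concerns the sequence of points so produced.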

2.
Computer models simulating a physical process are used in many areas of science. Due to the complex nature of these codes it is often necessary to approximate the code, which is typically done using a Gaussian process. In many situations the number of code runs available to build the Gaussian process approximation is limited. When the initial design is small or the underlying response surface is complicated, this can lead to poor approximations of the code output. In order to improve the fit of the model, sequential design strategies must be employed. In this paper we introduce two simple distance-based metrics that can be used to augment an initial design in a batch sequential manner. In addition we propose a sequential updating strategy for an orthogonal array-based Latin hypercube sample. We show via various real and simulated examples that the distance metrics and the extension of the orthogonal array-based Latin hypercubes work well in practice.
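One of the simplest distance-based rules for batch augmentation is greedy maximin selection: repeatedly add the candidate point farthest from the current design, updating the design after each pick so the batch spreads out. A sketch under that assumption (the function names and the greedy rule are illustrative; the paper's metrics differ in detail):

```python
import math

def min_dist(p, design):
    """Euclidean distance from candidate p to its nearest design point."""
    return min(math.dist(p, d) for d in design)

def augment_batch(design, candidates, batch_size):
    """Greedy maximin augmentation: each pick is the candidate farthest
    from the design built so far, including earlier picks in the batch."""
    design, pool, chosen = list(design), list(candidates), []
    for _ in range(batch_size):
        best = max(pool, key=lambda p: min_dist(p, design))
        pool.remove(best)
        design.append(best)
        chosen.append(best)
    return chosen
```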

3.
In electrical engineering, circuit designs are now often optimized via circuit simulation computer models. Typically, many response variables characterize the circuit's performance. Each response is a function of many input variables, including factors that can be set in the engineering design and noise factors representing manufacturing conditions. We describe a modelling approach which is appropriate for the simulator's deterministic input–output relationships. Non-linearities and interactions are identified without explicit assumptions about the functional form. These models lead to predictors to guide the reduction of the ranges of the designable factors in a sequence of experiments. Ultimately, the predictors are used to optimize the engineering design. We also show how a visualization of the fitted relationships facilitates an understanding of the engineering trade-offs between responses. The example used to demonstrate these methods, the design of a buffer circuit, has multiple targets for the responses, representing different trade-offs between the key performance measures.

4.
We propose a method that uses a sequential design instead of a space filling design for estimating tuning parameters of a complex computer model. The goal is to bring the computer model output closer to the real system output. The method fits separate Gaussian process (GP) models to the available data from the physical experiment and the computer experiment and minimizes the discrepancy between the predictions from the GP models to obtain estimates of the tuning parameters. A criterion based on the discrepancy between the predictions from the two GP models and the standard error of prediction for the computer experiment output is then used to obtain a design point for the next run of the computer experiment. The tuning parameters are re-estimated using the augmented data set. The steps are repeated until the budget for the computer experiment data is exhausted. Simulation studies show that the proposed method performs better in bringing a computer model closer to the real system than methods that use a space filling design.
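At its core, the tuning step chooses the parameter value that minimizes the discrepancy between field observations and computer-model predictions. A drastically simplified sketch using a grid search and the raw model in place of the two GP surrogates (all names and the toy data are illustrative, not from the paper):

```python
def calibrate(field_data, computer_model, t_grid):
    """Return the tuning parameter on t_grid that minimizes the squared
    discrepancy between field observations (x, y) and model output.
    Stands in for the GP-based discrepancy criterion in the abstract."""
    def discrepancy(t):
        return sum((y - computer_model(x, t)) ** 2 for x, y in field_data)
    return min(t_grid, key=discrepancy)
```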

5.
This paper describes a computer program GTEST for designing group testing experiments for classifying each member of a population of items as “good” or “defective”. The outcome of a test on a group of items is either “negative” (if all items in the group are good) or “positive” (if at least one of the items is defective, but it is not known which). GTEST is based on a Bayesian approach. At each stage, it attempts to maximize (nearly) the expected reduction in the “entropy”, which is a quantitative measure of the amount of uncertainty about the state of the items. The user controls the procedure through specification of the prior probabilities of being defective, restrictions on the construction of the test group, and priorities that are assigned to the items. The nominal prior probabilities can be modified adaptively, to reduce the sensitivity of the procedure to the proportion of defectives in the population.
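With independent prior defect probabilities, a group test's outcome is "negative" with the product probability that every item in the group is good, and a test is most informative when its outcome entropy is maximal. A small sketch of that selection rule (function names and candidate groups are illustrative; GTEST's actual criterion and constraints are richer):

```python
import math

def outcome_entropy(probs):
    """Entropy (bits) of a single group test's outcome when each item in
    the group is independently defective with the given probability."""
    q_neg = 1.0
    for p in probs:
        q_neg *= 1.0 - p
    h = 0.0
    for q in (q_neg, 1.0 - q_neg):
        if q > 0.0:
            h -= q * math.log2(q)
    return h

def most_informative_group(priors, candidate_groups):
    """Pick the candidate group (a tuple of item indices) whose test
    outcome is closest to a 50/50 split, i.e. has maximal entropy."""
    return max(candidate_groups,
               key=lambda g: outcome_entropy([priors[i] for i in g]))
```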

6.
Deterministic computer simulations are often used as replacement for complex physical experiments. Although less expensive than physical experimentation, computer codes can still be time-consuming to run. An effective strategy for exploring the response surface of the deterministic simulator is the use of an approximation to the computer code, such as a Gaussian process (GP) model, coupled with a sequential sampling strategy for choosing design points that can be used to build the GP model. The ultimate goal of such studies is often the estimation of specific features of interest of the simulator output, such as the maximum, minimum, or a level set (contour). Before approximating such features with the GP model, sufficient runs of the computer simulator must be completed. Sequential designs with an expected improvement (EI) design criterion can yield good estimates of the features with a minimal number of runs. The challenge is that the expected improvement function itself is often multimodal and difficult to maximize. We develop branch and bound algorithms for efficiently maximizing the EI function in specific problems, including the simultaneous estimation of a global maximum and minimum, and in the estimation of a contour. These branch and bound algorithms outperform other optimization strategies such as genetic algorithms, and can lead to significantly more accurate estimation of the features of interest.
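Branch and bound for a multimodal acquisition function keeps, for each box of the search space, an upper bound on the function there and discards boxes whose bound cannot beat the incumbent. A generic 1-D sketch using a Lipschitz-constant bound in place of the paper's EI-specific bounds (all names are illustrative, not from the paper):

```python
import heapq

def branch_and_bound_max(f, lo, hi, lipschitz, tol=1e-4):
    """Maximize f on [lo, hi] by interval branch and bound.  On a box of
    width w with midpoint c, f is at most f(c) + lipschitz * w / 2."""
    best_x, best_val = lo, f(lo)
    mid0 = 0.5 * (lo + hi)
    # store (-upper_bound, a, b) so heapq pops the most promising box
    heap = [(-(f(mid0) + lipschitz * 0.5 * (hi - lo)), lo, hi)]
    while heap:
        neg_ub, a, b = heapq.heappop(heap)
        if -neg_ub <= best_val + tol:
            break  # no remaining box can beat the incumbent
        m = 0.5 * (a + b)
        fm = f(m)
        if fm > best_val:
            best_x, best_val = m, fm
        for aa, bb in ((a, m), (m, b)):
            c = 0.5 * (aa + bb)
            ub = f(c) + lipschitz * 0.5 * (bb - aa)
            if ub > best_val + tol:
                heapq.heappush(heap, (-ub, aa, bb))
    return best_x, best_val
```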

7.
This paper proposes a Bayesian integrative analysis method for linking multi-fidelity computer experiments. Instead of assuming covariance structures of multivariate Gaussian process models, we handle the outputs from different levels of accuracy as independent processes and link them via a penalization method that controls the distance between their overall trends. Based on the priors induced by the penalty, we build Bayesian prediction models for the output at the highest accuracy. Simulated and real examples show that the proposed method is better than existing methods in terms of prediction accuracy for many cases.

8.
Experiments that involve the blending of several components are known as mixture experiments. In some mixture experiments, the response depends not only on the proportions of the mixture components, but also on the processing conditions. A new combined model is proposed which is based on Taylor series approximation and is intended to be a compromise between standard mixture models and standard response surface models. Cost and/or time constraints often limit the size of industrial experiments. With this in mind, we present a new class of designs that will accommodate the fitting of the new combined model.

9.
Bayesian calibration of computer models
We consider prediction and uncertainty analysis for systems which are approximated using complex mathematical models. Such models, implemented as computer codes, are often generic in the sense that by a suitable choice of some of the model's input parameters the code can be used to predict the behaviour of the system in a variety of specific applications. However, in any specific application the values of necessary parameters may be unknown. In this case, physical observations of the system in the specific context are used to learn about the unknown parameters. The process of fitting the model to the observed data by adjusting the parameters is known as calibration. Calibration is typically effected by ad hoc fitting, and after calibration the model is used, with the fitted input values, to predict the future behaviour of the system. We present a Bayesian calibration technique which improves on this traditional approach in two respects. First, the predictions allow for all sources of uncertainty, including the remaining uncertainty over the fitted parameters. Second, they attempt to correct for any inadequacy of the model which is revealed by a discrepancy between the observed data and the model predictions from even the best-fitting parameter values. The method is illustrated by using data from a nuclear radiation release at Tomsk, and from a more complex simulated nuclear accident exercise.
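The difference from ad hoc fitting is that calibration yields a posterior distribution over the unknown parameters rather than a single point estimate. A toy sketch assuming a flat prior, i.i.d. Gaussian observation error, and no model-inadequacy term (a drastic simplification of the full approach; all names and data are illustrative):

```python
import math

def grid_posterior(field_data, model, theta_grid, sigma):
    """Normalized grid posterior over a scalar calibration parameter,
    under a flat prior and independent N(0, sigma^2) observation error.
    The model-discrepancy term of the full method is omitted."""
    log_lik = [sum(-0.5 * ((y - model(x, th)) / sigma) ** 2
                   for x, y in field_data)
               for th in theta_grid]
    m = max(log_lik)                       # stabilize the exponentials
    post = [math.exp(ll - m) for ll in log_lik]
    z = sum(post)
    return [p / z for p in post]
```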

10.
“Dispersion” effects are considered in addition to “Location” effects of factors in the inferential procedure of sequential factor screening experiments with m factors, each at two levels, under search linear models. Search designs for measuring “Dispersion” and “Location” effects of factors are presented for both stage one and stage two of factor screening experiments with 4 ≤ m ≤ 10.

11.
To reduce the dimensionality of the second-order response surface design model, variance component indices are derived both with and without restrictions imposed on the moment matrix toward orthogonality, and the results are illustrated with suitable examples in this article.

12.
A practical method is suggested for solving complicated D-optimal design problems analytically. Using this method the author has solved the problem for a quadratic log contrast model for experiments with mixtures introduced by J. Aitchison and J. Bacon-Shone. It is found that for a symmetric subspace of the finite-dimensional simplex, the vertices and the centroid of this subspace are the only possible support points for a D-optimal design. The weights that must be assigned to these support points contain irrational numbers and are constrained by a system of three simultaneous linear equations, except for the special cases of 1- and 2-dimensional simplexes where the situation is much simpler. Numerical values for the solution are given up to the 19-dimensional simplex.

13.
In this paper, a new experimental design and gradient estimation procedure is presented for Phase I response surface optimization. The design is motivated by basic principles of differential calculus, which imply that if a point in R^n has been reached by exactly minimizing a function along a given direction, then the gradient of the function at that point must be orthogonal to the search direction followed. While exact line search is not required for the new design to be effective, this principle implies that the dimension of the gradient estimation procedure may often be reduced from n to n-1 variables, and the experimenter is able to concentrate experimental effort within the most productive region around the center of the design. The new design and gradient estimation procedures are presented, and bias and variance properties are derived. The effectiveness of the new design is shown to depend on the experimenter's ability to terminate line search within a near-stationary region of the line search function. A simple heuristic is presented which indicates whether the new design should be used at a given experimental region.

14.
Nowadays it is common to reproduce physical systems using mathematical simulation models and, despite the fact that computing resources continue to increase, computer simulations are growing in complexity. This leads to the adoption of surrogate models and one of the most popular methodologies is the well-known Ordinary Kriging, which is a statistical interpolator extensively used to approximate the output of deterministic simulation. This paper deals with the problem of finding suitable experimental plans for the Ordinary Kriging with exponential correlation structure. In particular, we derive exact optimal designs for prediction, estimation and information gain approaches in the one-dimensional case, giving further theoretical justifications for the adoption of the equidistant design. Moreover, we show that in some circumstances several results related to the uncorrelated setup still hold for correlated observations.
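For the exponential kernel in one dimension, the information-gain criterion amounts to maximizing the determinant of the correlation matrix, which has a closed form thanks to the kernel's Markov property: det R is the product of (1 - rho_i^2) over consecutive gaps. A small sketch checking that equal spacing beats a clustered design (the function name and the theta value in the test are illustrative):

```python
import math

def log_det_exp_corr(points, theta):
    """log-determinant of the exponential correlation matrix
    R_ij = exp(-theta * |x_i - x_j|) for 1-D design points, via the
    closed form det R = prod_i (1 - rho_i^2), rho_i being the
    correlation between consecutive (sorted) points."""
    xs = sorted(points)
    ld = 0.0
    for a, b in zip(xs, xs[1:]):
        rho = math.exp(-theta * (b - a))
        ld += math.log(1.0 - rho * rho)
    return ld
```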

15.
To build a predictor, the output of a deterministic computer model or “code” is often treated as a realization of a stochastic process indexed by the code's input variables. The authors consider an asymptotic form of the Gaussian correlation function for the stochastic process where the correlation tends to unity. They show that the limiting best linear unbiased predictor involves Lagrange interpolating polynomials; linear model terms are implicitly included. The authors then develop optimal designs based on minimizing the limiting integrated mean squared error of prediction. They show through several examples that these designs lead to good prediction accuracy.
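In that limit the predictor reduces to ordinary Lagrange interpolation of the observed code outputs. A minimal sketch of the limiting predictor for a 1-D design (function and variable names are illustrative):

```python
def lagrange_predict(xs, ys, x):
    """Evaluate at x the Lagrange interpolating polynomial through the
    design points xs with observed code outputs ys; this is the limiting
    form of the BLUP as the Gaussian correlation tends to one."""
    total = 0.0
    for i, (xi, yi) in enumerate(zip(xs, ys)):
        w = 1.0
        for j, xj in enumerate(xs):
            if j != i:
                w *= (x - xj) / (xi - xj)  # Lagrange basis polynomial
        total += yi * w
    return total
```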

16.
Massive correlated data with many inputs are often generated from computer experiments to study complex systems. The Gaussian process (GP) model is a widely used tool for the analysis of computer experiments. Although GPs provide a simple and effective approximation to computer experiments, two critical issues remain unresolved. One is the computational issue in GP estimation and prediction where intensive manipulations of a large correlation matrix are required. For a large sample size and with a large number of variables, this task is often unstable or infeasible. The other issue is how to improve the naive plug-in predictive distribution which is known to underestimate the uncertainty. In this article, we introduce a unified framework that can tackle both issues simultaneously. It consists of a sequential split-and-conquer procedure, an information combining technique using confidence distributions (CD), and a frequentist predictive distribution based on the combined CD. It is shown that the proposed method maintains the same asymptotic efficiency as the conventional likelihood inference under mild conditions, but dramatically reduces the computation in both estimation and prediction. The predictive distribution contains comprehensive information for inference and provides a better quantification of predictive uncertainty as compared with the plug-in approach. Simulations are conducted to compare the estimation and prediction accuracy with some existing methods, and the computational advantage of the proposed method is also illustrated. The proposed method is demonstrated by a real data example based on tens of thousands of computer experiments generated from a computational fluid dynamic simulator.
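The combining step can be illustrated by its simplest special case: inverse-variance weighting of independent per-block estimates (the paper's confidence-distribution combination is more general; the function name and values are illustrative):

```python
def combine_blocks(estimates, variances):
    """Inverse-variance-weighted combination of independent block-wise
    estimates; returns the combined estimate and its variance."""
    weights = [1.0 / v for v in variances]
    total = sum(weights)
    est = sum(w * e for w, e in zip(weights, estimates)) / total
    return est, 1.0 / total
```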

17.
Clinical trials are usually designed with the implicit assumption that data analysis will occur only after the trial is completed. It is challenging, however, if the sponsor wishes to evaluate the drug's efficacy in the middle of the study without breaking the randomization codes. In this article, the randomized response model and the mixture model are introduced to analyze the data while masking the randomization codes of the crossover design. Given the probability of treatment sequence, the test of the mixture model provides higher power than the test of the randomized response model, which is inadequate in the example. The paired t-test has higher power than both models if the randomization codes are broken. The sponsor may stop the trial early to claim the effectiveness of the study drug if the mixture model yields a positive result.

18.
Many scientists believe that small experiments, guided by scientific intuition, are simpler and more efficient than design of experiments. This belief is strong and persists even in the face of data demonstrating that it is clearly wrong. In this paper, we present two powerful teaching examples illustrating the dangers of small experiments guided by scientific intuition. We describe two simple two-dimensional spaces. These two spaces give rise to, and at the same time appear to generate supporting data for, scientific intuitions that are deeply flawed or wholly incorrect. We find these spaces useful in unfreezing scientific thinking and challenging the misplaced confidence in scientific intuition. Copyright © 2015 John Wiley & Sons, Ltd.

19.
A common strategy for avoiding information overload in multi-factor paired comparison experiments is to employ pairs of options which have different levels for only some of the factors in a study. For the practically important case where the factors fall into three groups such that all factors within a group have the same number of levels and where one is only interested in estimating the main effects, a comprehensive catalogue of D-optimal approximate designs is presented. These optimal designs use at most three different types of pairs and have a block diagonal information matrix.

20.
Since errors in factor levels affect the traditional statistical properties of response surface designs, robustness of a design to such errors is an important consideration. When the actually realized design can be observed in the experimental setting, its optimality and prediction properties are of interest. Various numerical and graphical methods are useful tools for understanding the behavior of such designs. The D- and G-efficiencies and the fraction-of-design-space plot are adapted to assess second-order response surface designs whose predictor variables are disturbed by random error. Our study shows that the D-efficiencies of the competing designs are considerably low when the error variance is large, while the G-efficiencies remain quite good. Fraction-of-design-space plots display the distribution of the scaled prediction variance through the design space with and without errors in the factor levels. The robustness of experimental designs against factor errors is explored through a comparative study. The construction and use of the D- and G-efficiencies and the fraction-of-design-space plots are demonstrated with several examples of different designs with errors.
