Similar references
20 similar documents found (search time: 15 ms)
1.
In this paper we provide a broad introduction to the topic of computer experiments. We begin by briefly presenting a number of applications with different types of output or different goals. We then review modelling strategies, including the popular Gaussian process approach, as well as variations and modifications. Other strategies that are reviewed are based on polynomial regression, non-parametric regression and smoothing spline ANOVA. The issue of multi-level models, which combine simulators of different resolution in the same experiment, is also addressed. Special attention is given to modelling techniques that are suitable for functional data. To conclude the modelling section, we discuss calibration, validation and verification. We then review design strategies including Latin hypercube designs and space-filling designs and their adaptation to computer experiments. We comment on a number of special issues, such as designs for multi-level simulators, nested factors and determination of experiment size.
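The Latin hypercube designs mentioned above are simple to construct: each of the n equal-width strata along every input dimension receives exactly one design point. A minimal sketch follows; the function name and example sizes are mine, and this is the basic random LHD construction, not any particular refinement from the survey.

```python
import random

def latin_hypercube(n, d, rng=random.Random(0)):
    """Draw an n-point Latin hypercube design in [0, 1)^d.

    For every dimension, a random permutation assigns each point to a
    distinct stratum [k/n, (k+1)/n), then a uniform draw places the
    point inside its stratum -- the one-point-per-stratum property
    that gives LHDs their space-filling projections.
    """
    cols = []
    for _ in range(d):
        perm = list(range(n))
        rng.shuffle(perm)
        # one uniform draw inside each stratum
        cols.append([(p + rng.random()) / n for p in perm])
    return [[cols[j][i] for j in range(d)] for i in range(n)]

points = latin_hypercube(8, 2)
```

Projecting `points` onto either coordinate hits every one of the 8 strata exactly once, which is what distinguishes an LHD from plain Monte Carlo sampling.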

2.
This article shows how to compute the in-sample effect of exogenous inputs on the endogenous variables in any linear model written in a state–space form. Estimating this component may be either interesting by itself, or a previous step before decomposing a time series into trend, cycle, seasonal and error components. The practical application and usefulness of this method is illustrated by estimating the effect of advertising on the monthly sales of Lydia Pinkham's vegetable compound.
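Because the model is linear, superposition holds: running the state recursion from a zero initial state with only the exogenous inputs isolates their contribution to the outputs. The sketch below illustrates this idea for a generic system x[t+1] = A x[t] + B u[t], y[t] = C x[t] + D u[t]; it is a minimal illustration of the principle, not the article's exact procedure, and all names are mine.

```python
import numpy as np

def input_effect(A, B, C, D, u):
    """In-sample effect of exogenous inputs u[0..T-1] on the outputs of
    a linear state-space model, obtained by running the recursion from
    a zero initial state so only the input-driven part of y remains.
    """
    T = len(u)
    x = np.zeros(A.shape[0])
    effect = np.empty((T, C.shape[0]))
    for t in range(T):
        effect[t] = C @ x + D @ u[t]   # output due to inputs only
        x = A @ x + B @ u[t]           # propagate the input-driven state
    return effect
```

For a scalar system with A = 0.5, B = C = 1, D = 0, a unit impulse at t = 0 produces the expected geometrically decaying effect 0, 1, 0.5, 0.25, ...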

3.
In this paper, measurements from experiments and results of a finite element analysis (FEA) are combined in order to compute accurate empirical models for the temperature distribution before a thermomechanically coupled forming process. To accomplish this, Design and Analysis of Computer Experiments (DACE) is used to separately compute models for the measurements and the functional output of the FEA. Based on a hierarchical approach, a combined model of the process is computed. In this combined modelling approach, the model for the FEA is corrected by taking into account the systematic deviations from the experimental measurements. The large number of observations based on the functional output hinders the direct computation of the DACE models due to the internal inversion of the correlation matrix. Thus, different techniques for identifying a relevant subset of the observations are proposed. The application of the resulting procedure is presented, and a statistical validation of the empirical models is performed.
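The correlation-matrix inversion that motivates the subset selection above is easy to see in a minimal zero-mean kriging predictor with a Gaussian kernel, the core of a DACE model. The sketch below (names and parameter values are mine, and it omits the trend and hyperparameter estimation of a full DACE model) shows the O(n^3) solve that becomes prohibitive for large functional output.

```python
import numpy as np

def kriging_fit_predict(X, y, Xnew, theta=25.0, nugget=1e-8):
    """Zero-mean (simple) kriging with a Gaussian correlation kernel.

    The n x n correlation matrix R must be factorised -- an O(n^3)
    cost, which is why DACE models are often fitted on a relevant
    subset of the observations when n is large.
    """
    def corr(A, B):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-theta * d2)

    R = corr(X, X) + nugget * np.eye(len(X))
    alpha = np.linalg.solve(R, y)        # the O(n^3) step
    return corr(Xnew, X) @ alpha
```

At the training inputs the predictor interpolates the data (up to the small nugget), which is the defining property of kriging models for deterministic computer experiments.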

4.
In human mortality modelling, if a population consists of several subpopulations it can be desirable to model their mortality rates simultaneously while taking into account the heterogeneity among them. Mortality forecasting methods tend to produce divergent forecasts for subpopulations when independence is assumed. However, under closely related social, economic and biological backgrounds, mortality patterns of these subpopulations are expected to be non-divergent in the future. In this article, we propose a new method for coherent modelling and forecasting of mortality rates for multiple subpopulations, in the sense of non-divergent life expectancy among subpopulations. The mortality rates of subpopulations are treated as multilevel functional data and a weighted multilevel functional principal component (wMFPCA) approach is proposed to model and forecast them. The proposed model is applied to sex-specific data for nine developed countries, and the results show that, in terms of overall forecasting accuracy, the model outperforms the independent model and the Product-Ratio model as well as the unweighted multilevel functional principal component approach.

5.

Parameter reduction can enable otherwise infeasible design and uncertainty studies with modern computational science models that contain several input parameters. In statistical regression, techniques for sufficient dimension reduction (SDR) use data to reduce the predictor dimension of a regression problem. A computational scientist hoping to use SDR for parameter reduction encounters a problem: a computer prediction is best represented by a deterministic function of the inputs, so data consisting of computer simulation queries fail to satisfy the SDR assumptions. To address this problem, we interpret the SDR methods sliced inverse regression (SIR) and sliced average variance estimation (SAVE) as estimating the directions of a ridge function, which is a composition of a low-dimensional linear transformation with a nonlinear function. Within this interpretation, SIR and SAVE estimate matrices of integrals whose column spaces are contained in the ridge directions' span; we analyze and numerically verify convergence of these column spaces as the number of computer model queries increases. Moreover, we show example functions that are not ridge functions but whose inverse conditional moment matrices are low-rank. Consequently, the computational scientist should beware when using SIR and SAVE for parameter reduction, since SIR and SAVE may mistakenly suggest that truly important directions are unimportant.

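The textbook SIR procedure the abstract reinterprets can be sketched in a few lines: standardise the predictors, slice the response, average the standardised predictors within each slice, and take the top eigenvector of the covariance of those slice means. The example below applies it to a deterministic ridge function, the setting of the paper; the implementation is the generic SIR recipe, not the authors' code, and all names are mine.

```python
import numpy as np

def sir_direction(X, y, n_slices=10):
    """First sliced-inverse-regression direction for data (X, y)."""
    n, d = X.shape
    mu = X.mean(0)
    evals, evecs = np.linalg.eigh(np.cov(X.T))
    inv_sqrt = evecs @ np.diag(evals ** -0.5) @ evecs.T
    Z = (X - mu) @ inv_sqrt                 # standardised predictors
    order = np.argsort(y)
    M = np.zeros((d, d))
    for idx in np.array_split(order, n_slices):
        m = Z[idx].mean(0)                  # inverse-regression slice mean
        M += len(idx) / n * np.outer(m, m)
    top = np.linalg.eigh(M)[1][:, -1]       # leading eigenvector of M
    beta = inv_sqrt @ top                   # back to the original scale
    return beta / np.linalg.norm(beta)
```

For a ridge function y = g(aᵀx) with a monotone link, the estimated direction aligns closely with the true ridge direction a as the number of model queries grows, which is the convergence behaviour the paper analyzes.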

6.
The problem of testing the independence of a scalar and a unit vector is re-examined in the context of a specific scientific problem which is described in Section 1. Several alternatives to the Jupp-Mardia (1980) statistic are suggested. The null distributions of all the statistics are discussed under permutations.
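Obtaining a null distribution "under permutations," as above, follows one generic recipe: recompute the statistic after repeatedly permuting one sample relative to the other. The sketch below shows that recipe with a plain absolute-covariance statistic as a stand-in; the Jupp-Mardia statistic or any of the suggested alternatives would slot in the same way, and all names here are mine.

```python
import random

def permutation_p_value(stat, x, y, n_perm=999, rng=random.Random(0)):
    """Permutation p-value for independence of paired samples x and y.

    Permuting y relative to x generates the null distribution of any
    statistic that is large under dependence; the observed arrangement
    is included in the count, so the p-value is never exactly zero.
    """
    observed = stat(x, y)
    y = list(y)
    count = 1
    for _ in range(n_perm):
        rng.shuffle(y)
        if stat(x, y) >= observed:
            count += 1
    return count / (n_perm + 1)

def cov_stat(a, b):
    """Stand-in statistic: absolute sample covariance."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    return abs(sum((u - ma) * (v - mb) for u, v in zip(a, b)) / n)

x = list(range(30))
y_obs = [2 * v + 1 for v in x]          # strongly dependent pair
p = permutation_p_value(cov_stat, x, y_obs, n_perm=199)
```

With a perfectly linear pair, no shuffled arrangement (in practice) matches the observed covariance, so the p-value is at its minimum of 1/(n_perm + 1).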

7.
This is a survey article on known results about analytic solutions and numerical solutions of optimal designs for various regression models for experiments with mixtures. The regression models include polynomial models, models containing homogeneous functions, models containing inverse terms and ratios, log contrast models, models with quantitative variables, and models containing the amount of mixture. Optimality criteria considered include D-, A-, E-, φp- and Iλ-optimality. Uniform designs and uniform optimal designs for mixture components, and efficiencies of the {q,2} simplex-centroid design, are briefly discussed.

8.
Recently, van der Linde (Comput. Stat. Data Anal. 53:517–533, 2008) proposed a variational algorithm to obtain approximate Bayesian inference in functional principal components analysis (FPCA), where the functions were observed with Gaussian noise. Generalized FPCA under different noise models with sparse longitudinal data was developed by Hall et al. (J. R. Stat. Soc. B 70:703–723, 2008), but no Bayesian approach is available yet. It is demonstrated that an adapted version of the variational algorithm can be applied to obtain a Bayesian FPCA for canonical parameter functions, particularly log-intensity functions given Poisson count data or logit-probability functions given binary observations. To this end a second order Taylor expansion of the log-likelihood, that is, a working Gaussian distribution and hence another step of approximation, is used. Although the approach is conceptually straightforward, difficulties can arise in practical applications depending on the accuracy of the approximation and the information in the data. A modified algorithm is introduced generally for one-parameter exponential families and exemplified for binary and count data. Conditions for its successful application are discussed and illustrated using simulated data sets. Also an application with real data is presented.

9.
For consistency, the parameter space in the Gauss-Markov model with a singular covariance matrix is usually restricted by the observation vector. This restriction gives rise to some difficulties in the comparison of linear experiments. To avoid them, we reduce the comparison problem from the singular to the nonsingular case.

10.
A- and D-optimal designs are investigated for a log contrast model suggested by Aitchison & Bacon-Shone for experiments with mixtures. It is proved that when the number of mixture components q is an even integer, A- and D-optimal designs are identical; and when q is an odd integer, A- and D-optimal designs are different, but they share some common support points and are very close to each other in efficiency. Optimal designs with a minimum number of support points are also constructed for 3, 4, 5 and 6 mixture components.
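For intuition about the two criteria compared above: both are functionals of the information matrix M = FᵀF/n of the design's model matrix F, with D-optimality maximizing det(M) and A-optimality minimizing trace(M⁻¹). The sketch below evaluates them for a deliberately simple one-factor linear model, not the log contrast model of the paper; function names and the toy designs are mine.

```python
import numpy as np

def d_criterion(F):
    """D-criterion: det of the per-run information matrix (maximize)."""
    return float(np.linalg.det(F.T @ F / len(F)))

def a_criterion(F):
    """A-criterion: trace of the inverse information matrix (minimize)."""
    return float(np.trace(np.linalg.inv(F.T @ F / len(F))))

# model f(x) = (1, x) on [-1, 1]: points at the extremes beat
# points at +/- 0.5 on both criteria
good = np.array([[1, -1], [1, 1], [1, -1], [1, 1]], dtype=float)
poor = np.array([[1, -0.5], [1, 0.5], [1, -0.5], [1, 0.5]], dtype=float)
```

The `good` design has information matrix equal to the identity (D = 1, A = 2), while `poor` gives D = 0.25 and A = 5, so the extreme-point design wins under both criteria, consistent with classical optimal design theory for first-order models.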

11.
The following queuing system is considered: two independent recurrent input streams (streams 1 and 2) arrive at a server, and stream 1 is assumed to be of Poisson type. Three priority disciplines are studied for the case in which these customers have priority: head-of-the-line, preemptive-resume, and preemptive-repeat. Formulas are derived for the limiting distribution functions of the actual and virtual waiting times of low-priority customers, and of the number of these customers in the system, by using the independence of certain random processes as time tends to infinity.

12.
This paper provides closed form expressions for the sample size for two-level factorial experiments when the response is the number of defectives. The sample sizes are obtained by approximating the two-sided test for no effect through tests for the mean of a normal distribution, and borrowing the classical sample size solution for that problem. The proposals are appraised relative to the exact sample sizes computed numerically, without appealing to any approximation to the binomial distribution, and the use of the sample size tables provided is illustrated through an example.
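The "borrowed" normal-theory solution referred to above is, in its classical two-proportion form, n = (z_{α/2} + z_β)² (p₁q₁ + p₂q₂) / (p₁ − p₂)² per group. The sketch below implements that generic textbook formula only to illustrate the borrowing idea; the paper's expressions for two-level factorials differ in detail, and the function name is mine.

```python
import math
from statistics import NormalDist

def two_proportion_n(p1, p2, alpha=0.05, power=0.8):
    """Per-group sample size for a two-sided test of p1 vs p2 under the
    classical normal approximation to the binomial."""
    z_a = NormalDist().inv_cdf(1 - alpha / 2)   # two-sided critical value
    z_b = NormalDist().inv_cdf(power)           # power quantile
    var = p1 * (1 - p1) + p2 * (1 - p2)
    return math.ceil((z_a + z_b) ** 2 * var / (p1 - p2) ** 2)
```

For p₁ = 0.1 vs p₂ = 0.2 at 5% significance and 80% power this gives roughly 197 runs per group; the paper's point is that such normal-approximation answers should be appraised against exact binomial computations.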

13.
In the present study we compare three state-rotation methods in modelling the impact of the US economy on the Finnish economy: Schur decomposition, eigenvalue analysis and singular value decomposition. Singular value decomposition is found to provide a robust approximation of the state rotation in most of the cases studied, irrespective of whether the characteristic roots of the state transition matrix are complex. Thus, singular value decomposition seems to be a viable computational device not only for estimating the system matrices of the state space model, but also for state rotation, compared with the more involved techniques based on eigenvalue analysis or Schur decomposition.

14.
We propose a general framework for regression models with functional response containing a potentially large number of flexible effects of functional and scalar covariates. Special emphasis is put on historical functional effects, which are used when the functional response and a functional covariate are observed over the same interval and the response is influenced only by covariate values up to the current grid point, thereby accounting for chronology. Our formulation allows for flexible integration limits including, e.g., lead or lag times, and the functional responses can be observed on irregular curve-specific grids. Additionally, we introduce different parameterizations for historical effects and discuss identifiability issues. The models are estimated by a component-wise gradient boosting algorithm, which is suitable for models with a potentially high number of covariate effects, even more effects than observations, and inherently performs model selection. By minimizing corresponding loss functions, different features of the conditional response distribution can be modeled, including generalized and quantile regression models as special cases. The methods are implemented in the open-source R package FDboost. The methodological developments are motivated by biotechnological data on Escherichia coli fermentations, but cover a much broader model class.
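The component-wise boosting idea behind the estimation, in its simplest scalar-response form, works as follows: at every iteration each covariate is fitted alone to the current residuals, only the best-fitting one is kept, and a small step is taken toward it, so never-selected covariates retain zero coefficients. The sketch below is a drastically simplified L2 version of that idea (not the functional-response FDboost machinery, which is in R); all names are mine.

```python
import numpy as np

def componentwise_l2_boost(X, y, n_iter=200, nu=0.1):
    """Componentwise L2 boosting with single-covariate linear learners.

    Variables never selected keep a zero coefficient, so the procedure
    performs variable selection as a by-product.
    """
    n, d = X.shape
    coef = np.zeros(d)
    offset = y.mean()
    resid = y - offset
    for _ in range(n_iter):
        # least-squares slope of each covariate alone on the residuals
        b = X.T @ resid / (X ** 2).sum(0)
        sse = ((resid[:, None] - X * b) ** 2).sum(0)
        j = int(np.argmin(sse))          # best-fitting base learner
        coef[j] += nu * b[j]             # small step toward it
        resid -= nu * b[j] * X[:, j]
    return offset, coef
```

On toy data where only the first covariate matters, the boosted fit recovers its coefficient while leaving the irrelevant coefficients near zero, illustrating the built-in model selection the abstract refers to.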

15.
Commonly used tests to detect carcinogenic potential of a test compound make extreme assumptions about the lethality of tumors, due to their occult nature. In this paper we compare a nonparametric test, which uses interim sacrifice to avoid such assumptions, with these tests using simulation based on the ED01 data. Results indicate that in the presence of a significant difference in the mortality rate with treatment, commonly used methods could fail to maintain the nominal significance level. However, when there is no difference in the mortality rate, such procedures are robust to the underlying assumptions about the lethality of tumors and more powerful than the nonparametric test using interim sacrifice.

16.
A practical method is suggested for solving complicated D-optimal design problems analytically. Using this method the author has solved the problem for a quadratic log contrast model for experiments with mixtures introduced by J. Aitchison and J. Bacon-Shone. It is found that for a symmetric subspace of the finite dimensional simplex, the vertices and the centroid of this subspace are the only possible support points for a D-optimal design. The weights that must be assigned to these support points contain irrational numbers and are constrained by a system of three simultaneous linear equations, except for the special cases of 1- and 2-dimensional simplexes where the situation is much simpler. Numerical values for the solution are given up to the 19-dimensional simplex.

17.
We consider the problem of constructing a fixed-size confidence region for the difference of means of two multivariate normal populations. It is assumed that the variance-covariance matrices of the two populations differ only by unknown scalar multipliers. Two-stage procedures are presented to derive such a confidence region. We also discuss the asymptotic efficiency of the procedure.

18.
In software engineering, empirical comparisons of different ways of writing computer code are often made. This leads to the need for planned experimentation and has recently established a new area of application of DoE. This paper is motivated by an experiment on the production of multimedia services on the web, performed at the Telecom Research Centre in Turin, where two different ways of developing code, with or without a framework, were compared. As the experiment progresses, the programmer's performance improves through a learning process; this must be taken into account, as it may affect the outcome of the trial. In this paper we discuss statistical models and D-optimal plans for such experiments and indicate some heuristics which allow a much speedier search for the optimum. Solutions differ according to whether or not we assume that the learning process depends on the treatments.

19.
A mixture experiment is an experiment in which the response is assumed to depend on the relative proportions of the ingredients present in the mixture and not on the total amount of the mixture. In such experiments, process variables do not form any portion of the mixture, but changes in their levels can affect the blending properties of the ingredients. Mixture experiments are sometimes costly and must be conducted in a small number of runs. Here, a general method is obtained for constructing efficient mixture experiments in a minimum number of runs, by projecting an efficient response surface design onto the constrained region. Efficient designs with a small number of runs are constructed for three-, four-, and five-component mixture experiments with one process variable.

20.
For dependent Bernoulli random variables, the distribution of a sum of the random variables is obtained as a generalized binomial distribution determined by a two-state Markov chain. Asymptotic distributions of the sum are derived from the central limit theorem and the Edgeworth expansion. A numerical comparison of the exact and asymptotic distributions of the sum is also given. Further, a distribution of the sum is derived by a Bayesian approach and its asymptotic distributions are provided. Numerical results are given.
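The exact distribution of the sum under the two-state Markov chain can be computed by dynamic programming over (current state, running total). The sketch below is the straightforward DP, not the paper's generalized-binomial formula or its asymptotics; the function name and parameterization (initial probability p0, transition probabilities p01 and p11) are mine.

```python
def markov_binomial_pmf(n, p0, p01, p11):
    """Exact pmf of S = X_1 + ... + X_n for Bernoulli variables on a
    two-state Markov chain: P(X_1=1)=p0, P(X_{t+1}=1|X_t=0)=p01,
    P(X_{t+1}=1|X_t=1)=p11.

    f[state][k] tracks P(X_t = state, sum so far = k); with
    p01 = p11 = p0 the chain is i.i.d. and S ~ Binomial(n, p0).
    """
    f = {0: {0: 1 - p0}, 1: {1: p0}}
    for _ in range(n - 1):
        g = {0: {}, 1: {}}
        for s, probs in f.items():
            p_up = p11 if s == 1 else p01
            for k, pr in probs.items():
                g[0][k] = g[0].get(k, 0.0) + pr * (1 - p_up)
                g[1][k + 1] = g[1].get(k + 1, 0.0) + pr * p_up
        f = g
    return [f[0].get(k, 0.0) + f[1].get(k, 0.0) for k in range(n + 1)]
```

Setting p01 = p11 = p0 recovers the ordinary binomial pmf, which makes a convenient correctness check before exploring genuinely dependent cases (e.g. p11 > p01, where successes cluster).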


Copyright © Beijing Qinyun Technology Development Co., Ltd. (京ICP备09084417号)