20 similar documents found; search took 10 ms
1.
This article considers the problem of response surface model fit in computer experiments. We propose a new sequential adaptive design through the “maximum expected improvement” approach. The new method defines the improvement by a first-order approximation from the known design points using derivative information, and sequentially seeks points in areas with large curvature and variance. A version with a distance penalty is also considered. We demonstrate their superiority over some existing methods by simulation.
2.
《Journal of Statistical Computation and Simulation》2012,82(3):209-231
In this paper, a Bayesian two-stage D–D optimal design for mixture experimental models under model uncertainty is developed. A Bayesian D-optimality criterion is used in the first stage to minimize the determinant of the posterior variances of the parameters. The second-stage design is then generated according to an optimality procedure that incorporates the improved model from the first-stage data. The results show that a Bayesian two-stage D–D-optimal design for mixture experiments under model uncertainty is more efficient than both the Bayesian one-stage D-optimal design and the non-Bayesian one-stage D-optimal design in most situations. Furthermore, simulations are used to obtain a reasonable ratio of the sample sizes between the two stages.
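As background for the D-optimality criterion this abstract builds on, here is a minimal sketch (not the paper's Bayesian two-stage procedure): the D-criterion is the determinant of the information matrix X'X, and a design with a larger determinant yields a smaller generalized variance of the parameter estimates. The model, design points, and function names below are illustrative assumptions.

```python
def d_criterion(X):
    """D-criterion: determinant of the information matrix X'X
    (larger is better -- smaller generalized variance of the
    least-squares estimates). X is a list of design rows; the
    determinant is computed by Gaussian elimination."""
    p = len(X[0])
    n = len(X)
    # information matrix M = X'X
    M = [[sum(X[r][i] * X[r][j] for r in range(n))
          for j in range(p)] for i in range(p)]
    det = 1.0
    for c in range(p):
        # partial pivoting for numerical stability
        pivot = max(range(c, p), key=lambda r: abs(M[r][c]))
        if abs(M[pivot][c]) < 1e-12:
            return 0.0  # singular information matrix
        if pivot != c:
            M[c], M[pivot] = M[pivot], M[c]
            det = -det
        det *= M[c][c]
        for r in range(c + 1, p):
            f = M[r][c] / M[c][c]
            for j in range(c, p):
                M[r][j] -= f * M[c][j]
    return det

# Two hypothetical 4-run designs for a first-order model (1, x):
# the spread-out design beats the clustered one on the D-criterion.
spread    = [[1, -1], [1, -1], [1, 1], [1, 1]]
clustered = [[1, -0.2], [1, -0.1], [1, 0.1], [1, 0.2]]
print(d_criterion(spread) > d_criterion(clustered))  # True
```

Bayesian versions of the criterion, as in this paper, replace X'X with a posterior information matrix, but the ranking logic is the same.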
3.
Determining group size is a crucial stage before conducting experiments using group testing methods. Considering misclassification, we propose D- and A-criteria to determine a robust group size for screening multiple infections simultaneously. Extensive simulation shows the advantage of the proposed method when the goal is estimation.
4.
When estimating a proportion p by group testing, if increased precision is desired, it is sometimes impractical to obtain additional individuals, but it is possible to retest groups formed from the individuals within the groups that tested positive at the first stage. Hepworth and Watson assessed four methods of retesting, and recommended a random regrouping of individuals from the first stage. They developed an estimator of p for their proposed method, and, because of the analytic complexity, used simulation to examine its variance properties. We now provide an analytical solution to the variance of the estimator, and compare its performance with the earlier simulated results. We show that our solution gives an acceptable approximation in a reasonable range of circumstances.
5.
S. James Press 《Communications in Statistics - Theory and Methods》2013,42(3):1099-1105
The MISER criterion for pre-experiment balancing of experiments that will be analyzed by the analysis of covariance was proposed by Press (1987). The idea was to minimize, over all designs, the inflation, produced by design imbalance (inequality of cell covariate vectors), of the average standard errors of contrasts. In this note we consider the balancing problem in the case of multiple response experiments, and we show that under certain reasonable assumptions, the multivariate result is analogous to that in the univariate case.
6.
《Journal of Statistical Computation and Simulation》2012,82(3):553-568
In this paper, we present two new estimators for the entropy of absolutely continuous random variables and consider some of their properties. Consistency of the first estimator is shown by a Monte Carlo method, and consistency of the second estimator is proved theoretically. Using these estimators, two new tests for normality are presented and their powers are compared with those of other entropy-based tests. Simulation results show that the proposed estimators and test statistics perform very well. Finally, a real example is presented and analysed.
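The abstract does not define its two new estimators; as context, here is a minimal sketch of the classic spacing-based entropy estimator of Vasicek, which entropy-based normality tests of this kind refine. The window width m, the assumption of untied data, and the test-statistic form are illustrative choices, not the paper's.

```python
import math

def vasicek_entropy(x, m):
    """Vasicek's spacing-based entropy estimator:
    H = (1/n) * sum_i log( n * (X_(i+m) - X_(i-m)) / (2m) ),
    with order statistics clamped at the sample boundaries.
    Assumes the data contain no ties (each spacing > 0)."""
    xs = sorted(x)
    n = len(xs)
    total = 0.0
    for i in range(n):
        hi = xs[min(i + m, n - 1)]
        lo = xs[max(i - m, 0)]
        total += math.log(n * (hi - lo) / (2 * m))
    return total / n

def normality_statistic(x, m=2):
    """Vasicek-type test statistic exp(H)/s: among all densities with
    a given standard deviation, the normal maximizes entropy, so for
    large normal samples this ratio approaches sqrt(2*pi*e) ~ 4.13;
    markedly smaller values suggest non-normality."""
    n = len(x)
    mean = sum(x) / n
    s = math.sqrt(sum((v - mean) ** 2 for v in x) / n)
    return math.exp(vasicek_entropy(x, m)) / s

print(round(vasicek_entropy(list(range(1, 11)), 2), 4))
```

In practice the null distribution of the statistic is obtained by simulation, which is why power comparisons like those in this paper are simulation studies.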
7.
Graham Hepworth 《Australian & New Zealand Journal of Statistics》2004,46(3):391-405
Group testing has been used in many fields of study to estimate proportions. When groups are of different size, the derivation of exact confidence intervals is complicated by the lack of a unique ordering of the event space. An exact interval estimation method is described here, in which outcomes are ordered according to a likelihood ratio statistic. The method is compared with another exact method, in which outcomes are ordered by their associated MLE. Plots of the P‐value against the proportion are useful in examining the properties of the methods. Coverage provided by the intervals is assessed using several realistic group-testing procedures. The method based on the likelihood ratio, with a mid‐P correction, is shown to give very good coverage in terms of closeness to the nominal level, and is recommended for this type of problem.
8.
Teaching examples for the design of experiments: geographical sensitivity and the self‐fulfilling prophecy
Dennis W. Lendrem B. Clare Lendrem Ruth Rowland‐Jones Fabio D'Agostino Matt Linsley Martin R. Owen John D. Isaacs 《Pharmaceutical statistics》2016,15(1):90-92
Many scientists believe that small experiments, guided by scientific intuition, are simpler and more efficient than design of experiments. This belief is strong and persists even in the face of data demonstrating that it is clearly wrong. In this paper, we present two powerful teaching examples illustrating the dangers of small experiments guided by scientific intuition. We describe two, simple, two‐dimensional spaces. These two spaces give rise to, and at the same time appear to generate supporting data for, scientific intuitions that are deeply flawed or wholly incorrect. We find these spaces useful in unfreezing scientific thinking and challenging the misplaced confidence in scientific intuition. Copyright © 2015 John Wiley & Sons, Ltd.
9.
In this paper, different dissimilarity measures are investigated to construct maximin designs for compositional data. Specifically, the effect of different dissimilarity measures on the maximin design criterion for two case studies is presented. Design evaluation criteria are proposed to distinguish between the maximin designs generated. An optimization algorithm is also presented. Divergence is found to be the best dissimilarity measure to use in combination with the maximin design criterion for creating space-filling designs for mixture variables.
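A maximin design maximizes the smallest pairwise dissimilarity among its points. As a sketch of the idea (not the paper's optimization algorithm), a simple greedy heuristic repeatedly adds the candidate farthest from the current design; Euclidean distance is used here only as a placeholder dissimilarity, whereas the paper recommends divergence for mixture variables.

```python
def greedy_maximin(candidates, k, dissim):
    """Greedy approximation to a maximin (space-filling) design:
    starting from an arbitrary candidate, repeatedly add the point
    whose minimum dissimilarity to the points already chosen is
    largest. `dissim` can be any dissimilarity measure."""
    design = [candidates[0]]  # arbitrary starting point
    while len(design) < k:
        best = max((c for c in candidates if c not in design),
                   key=lambda c: min(dissim(c, d) for d in design))
        design.append(best)
    return design

def euclid(a, b):
    """Placeholder dissimilarity: Euclidean distance."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

# Hypothetical candidate grid on the unit square.
pts = [(i / 10, j / 10) for i in range(11) for j in range(11)]
print(greedy_maximin(pts, 3, euclid))
```

The greedy heuristic is not guaranteed optimal; exchange algorithms of the kind the paper presents refine such a starting design.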
10.
C. J. Brien 《Australian & New Zealand Journal of Statistics》2017,59(4):327-352
Multiphase experiments are introduced and an overview of their design and analysis as it is currently practised is given via an account of their development since 1955 and a literature survey. Methods that are available for designing and analysing them are outlined, with an emphasis on making explicit the role of the model in their design. The availability of software and its use is described in detail. Overall, while multiphase designs have been applied in areas such as plant breeding, plant pathology, greenhouse experimentation, product storage, gene expression studies, and sensory evaluation, their deployment has been limited.
11.
R. A. Bailey 《Journal of the Royal Statistical Society. Series C, Applied statistics》2007,56(4):365-394
Summary. Designs for two-colour microarray experiments can be viewed as block designs with two treatments per block. Explicit formulae for the A- and D-criteria are given for the case that the number of blocks is equal to the number of treatments. These show that the A- and D-optimality criteria conflict badly if there are 10 or more treatments. A similar analysis shows that designs with one or two extra blocks perform very much better, but again there is a conflict between the two optimality criteria for moderately large numbers of treatments. It is shown that this problem can be avoided by slightly increasing the number of blocks. The two colours that are used in each block effectively turn the block design into a row–column design. There is no need to use a design in which every treatment has each colour equally often: rather, an efficient row–column design should be used. For odd replication, it is recommended that the row–column design should be based on a bipartite graph, and it is proved that the optimal such design corresponds to an optimal block design for half the number of treatments. Efficient row–column designs are given for replications 3–6. It is shown how to adapt them for experiments in which some treatments have replication only 2.
12.
Experiments that involve the blending of several components are known as mixture experiments. In some mixture experiments, the response depends not only on the proportion of the mixture components, but also on the processing conditions. A new combined model is proposed which is based on a Taylor series approximation and is intended to be a compromise between standard mixture models and standard response surface models. Cost and/or time constraints often limit the size of industrial experiments. With this in mind, we present a new class of designs that will accommodate the fitting of the new combined model.
13.
Robert Aslett Robert J. Buck Steven G. Duvall Jerome Sacks & William J. Welch 《Journal of the Royal Statistical Society. Series C, Applied statistics》1998,47(1):31-48
In electrical engineering, circuit designs are now often optimized via circuit simulation computer models. Typically, many response variables characterize the circuit's performance. Each response is a function of many input variables, including factors that can be set in the engineering design and noise factors representing manufacturing conditions. We describe a modelling approach which is appropriate for the simulator's deterministic input–output relationships. Non-linearities and interactions are identified without explicit assumptions about the functional form. These models lead to predictors to guide the reduction of the ranges of the designable factors in a sequence of experiments. Ultimately, the predictors are used to optimize the engineering design. We also show how a visualization of the fitted relationships facilitates an understanding of the engineering trade-offs between responses. The example used to demonstrate these methods, the design of a buffer circuit, has multiple targets for the responses, representing different trade-offs between the key performance measures.
14.
Thomas J. Lorenzen Lynn T. Truss W. Scott Spangler William T. Corpus Andrew B. Parker 《Statistics and Computing》1992,2(2):47-54
DEXPERT is an expert system, built using KEE, for the design and analysis of experiments. From a mathematical model, expected mean squares are computed, tests are determined, and the power of the tests is computed. Comparisons between designs are aided by suggestions and verbal interpretations provided by DEXPERT. DEXPERT provides a layout sheet for the collection of the data and then analyzes and interprets the results using analytical and graphical methods.
15.
Group testing is a method of pooling a number of units together and performing a single test on the resulting group. Group testing is an appealing option when few individual units are thought to be infected and the cost of testing is non-negligible. Overdispersion is the phenomenon of having greater variability than predicted by the random component of the model; it is common when modeling group-testing data with a binomial distribution. The purpose of this paper is to provide a comparison of several established methods of constructing confidence intervals after adjusting for overdispersion. We evaluate and investigate each method in six different cases of group testing. A method based on the score statistic with correction for skewness is recommended. We illustrate the methods using two data sets, one from the detection of seed transmission and the other from serological testing for malaria.
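For context, here is the basic group-testing point estimator that interval methods like these build on: a group of size k tests positive iff it contains at least one infected unit, so P(group positive) = 1 − (1 − p)^k, which inverts to the MLE below. This sketch deliberately ignores the overdispersion adjustment that is the paper's subject, and the numbers are made up for illustration.

```python
def group_testing_mle(n_groups, n_positive, group_size):
    """MLE of the individual-level prevalence p from group testing
    with equal group sizes: each group of size k is positive iff it
    contains at least one infected unit, so the proportion of
    positive groups y/n estimates 1 - (1-p)^k, giving
    p_hat = 1 - (1 - y/n)^(1/k)."""
    y, n, k = n_positive, n_groups, group_size
    return 1.0 - (1.0 - y / n) ** (1.0 / k)

# Hypothetical data: 50 groups of 10 units, 8 groups test positive.
print(round(group_testing_mle(50, 8, 10), 4))  # about 0.0173
```

Overdispersion inflates the variance of y beyond the binomial value, which is why the paper's adjusted confidence intervals are wider than ones based on this naive model.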
16.
C. Albert R. Ashauer H.R. Künsch P. Reichert 《Journal of statistical planning and inference》2012,142(1):263-275
The aim of this study is to apply the Bayesian method of identifying optimal experimental designs to a toxicokinetic-toxicodynamic model that describes the response of aquatic organisms to time-dependent concentrations of toxicants. As for experimental designs, we restrict ourselves to pulses and constant concentrations. A design of an experiment is called optimal within this set of designs if it maximizes the expected gain of knowledge about the parameters. Focus is on parameters that are associated with the auxiliary damage variable of the model that can only be inferred indirectly from survival time series data. Gain of knowledge through an experiment is quantified both with the ratio of posterior to prior variances of individual parameters and with the entropy of the posterior distribution relative to the prior on the whole parameter space. The numerical methods developed to calculate expected gain of knowledge are expected to be useful beyond this case study, in particular for multinomially distributed data such as survival time series data.
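The first of the two knowledge-gain measures mentioned above, the ratio of posterior to prior variance of a parameter, can be sketched from simulated draws. The Gaussian draws below stand in for real prior and posterior samples and are purely an assumption for illustration.

```python
import random
import statistics

def variance_gain(prior_draws, posterior_draws):
    """Knowledge gain for one parameter as the ratio of posterior to
    prior variance: a smaller ratio means the experiment taught us
    more about that parameter."""
    return (statistics.variance(posterior_draws) /
            statistics.variance(prior_draws))

random.seed(1)
# Stand-in samples: a diffuse prior and a tighter posterior.
prior = [random.gauss(0.0, 2.0) for _ in range(5000)]
posterior = [random.gauss(0.3, 0.5) for _ in range(5000)]
print(variance_gain(prior, posterior) < 1.0)  # True: posterior is tighter
```

An optimal design in the paper's sense maximizes the *expected* gain, averaging such ratios over the data sets the design could produce.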
17.
A common strategy for avoiding information overload in multi-factor paired comparison experiments is to employ pairs of options which have different levels for only some of the factors in a study. For the practically important case where the factors fall into three groups such that all factors within a group have the same number of levels and where one is only interested in estimating the main effects, a comprehensive catalogue of D-optimal approximate designs is presented. These optimal designs use at most three different types of pairs and have a block diagonal information matrix.
18.
This paper describes the one‐day introduction to experimental design training course at GlaxoSmithKline. In particular, the use of paper helicopter experiments has been an effective and efficient method for teaching experimental design techniques to scientific and other staff. A good supporting strategy by which the statistics department provides back‐up following the course is essential. Copyright © 2002 John Wiley & Sons, Ltd.
19.
Non-proportional hazards (NPH) have been observed in many immuno-oncology clinical trials. Weighted log-rank tests (WLRT) with suitable weights can be used to improve the power of detecting the difference between survival curves in the presence of NPH. However, it is not easy to choose a proper WLRT in practice. A versatile max-combo test was proposed to achieve a balance of robustness and efficiency, and has received increasing attention recently. Survival trials often warrant interim analyses due to their high cost and long durations. The integration and implementation of max-combo tests in interim analyses often require extensive simulation studies. In this report, we propose a simulation-free approach for group sequential designs with the max-combo test in survival trials. The simulation results support that the proposed method can successfully control the type I error rate and offer excellent accuracy and flexibility in estimating sample sizes, with a light computational burden. Notably, our method displays strong robustness towards various model misspecifications and has been implemented in an R package.
20.
Christine M. Anderson‐Cook Heidi B. Goldfarb Connie M. Borror Douglas C. Montgomery Kelly G. Canter John N. Twist 《Pharmaceutical statistics》2004,3(4):247-260
Many experiments in research and development in the pharmaceutical industry involve mixture components. These are experiments in which the experimental factors are the ingredients of a mixture and the response variable is a function of the relative proportion of each ingredient, not its absolute amount. Thus the mixture ingredients cannot be varied independently. A common variation of the mixture experiment occurs when there are also one or more process factors that can be varied independently of each other and of the mixture components, leading to a mixture–process variable experiment. We discuss the design and analysis of these types of experiments, using tablet formulation as an example. Our objective is to encourage greater utilization of these techniques in pharmaceutical research and development. Copyright © 2004 John Wiley & Sons Ltd.