Similar Literature
20 similar documents were retrieved.
1.
A new design criterion based on the condition number of an information matrix is proposed to construct optimal designs for linear models, and the resulting designs are called K-optimal designs. The relationship between exact and asymptotic K-optimal designs is derived. Since it is usually hard to find exact optimal designs analytically, we apply a simulated annealing algorithm to compute K-optimal design points on continuous design spaces. Specific issues are addressed to make the algorithm effective. Through exact designs, we can examine some properties of the K-optimal designs such as symmetry and the number of support points. Examples and results are given for polynomial regression models and linear models for fractional factorial experiments. In addition, K-optimal designs are compared with A-optimal and D-optimal designs for polynomial regression models, showing that K-optimal designs are quite similar to A-optimal designs.
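A minimal sketch of the K-criterion and a simulated-annealing search of the kind described above, assuming a cubic polynomial regression on [-1, 1] with equally weighted support points; the move size, cooling schedule, and run length are illustrative choices, not the authors' settings.

import numpy as np

def info_matrix(x, degree=3):
    # Information matrix M = (1/n) F'F for polynomial regression with f(x) = (1, x, ..., x^degree)
    F = np.vander(x, degree + 1, increasing=True)
    return F.T @ F / len(x)

def k_criterion(x, degree=3):
    # K-criterion: condition number (largest over smallest eigenvalue) of the information matrix
    return np.linalg.cond(info_matrix(x, degree))

def anneal_k_optimal(n_points=8, degree=3, iters=5000, temp=1.0, cool=0.999, seed=1):
    rng = np.random.default_rng(seed)
    x = rng.uniform(-1, 1, n_points)              # exact design: n_points support points in [-1, 1]
    best, best_val = x.copy(), k_criterion(x, degree)
    cur, cur_val = x.copy(), best_val
    for _ in range(iters):
        cand = cur.copy()
        i = rng.integers(n_points)
        cand[i] = np.clip(cand[i] + rng.normal(0, 0.1), -1, 1)   # perturb one design point
        cand_val = k_criterion(cand, degree)
        # accept downhill moves always, uphill moves with Metropolis probability
        if cand_val < cur_val or rng.random() < np.exp((cur_val - cand_val) / temp):
            cur, cur_val = cand, cand_val
            if cur_val < best_val:
                best, best_val = cur.copy(), cur_val
        temp *= cool
    return np.sort(best), best_val

points, kappa = anneal_k_optimal()
print(points, kappa)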

2.
ABSTRACT

For experiments run in field plots or over time, the observations are often correlated due to spatial or serial correlation, which leads to correlated errors in a linear model analyzing the treatment means. Without knowing the exact correlation matrix of the errors, it is not possible to compute the generalized least-squares estimator for the treatment means and use it to construct optimal designs for the experiments. In this paper, we propose to use neighborhoods to model the covariance matrix of the errors, and apply a modified generalized least-squares estimator to construct robust designs for experiments with blocks. A minimax design criterion is investigated, and a simulated annealing algorithm is developed to find robust designs. We have derived several theoretical results, and representative examples are presented.
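To make the estimator concrete, a small sketch of generalized least squares with a neighborhood-style error covariance; the banded correlation matrix used here is a simple stand-in assumption, not the covariance model developed in the paper.

import numpy as np

def banded_cov(n, rho=0.5, bandwidth=2):
    # Simple neighborhood covariance: correlation rho^|i-j| within the bandwidth, zero outside
    V = np.eye(n)
    for k in range(1, bandwidth + 1):
        V += np.diag(np.full(n - k, rho ** k), k) + np.diag(np.full(n - k, rho ** k), -k)
    return V

def gls(X, y, V):
    # Generalized least squares: beta_hat = (X' V^{-1} X)^{-1} X' V^{-1} y
    Vinv = np.linalg.inv(V)
    return np.linalg.solve(X.T @ Vinv @ X, X.T @ Vinv @ y)

rng = np.random.default_rng(0)
n = 12
X = np.column_stack([np.ones(n), rng.integers(0, 2, n)])     # intercept plus a two-level treatment
V = banded_cov(n)
y = X @ np.array([1.0, 0.5]) + rng.multivariate_normal(np.zeros(n), V)
print(gls(X, y, V))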

3.
This paper considers optimal parametric designs, i.e. designs represented by probability measures determined by a set of parameters, for nonlinear models and illustrates their use in designs for pharmacokinetic (PK) and pharmacokinetic/pharmacodynamic (PK/PD) trials. For some practical problems, such as designs for modelling a PK/PD relationship, this is often the only feasible type of design, as the design points follow a PK model and cannot be directly controlled. Even for ordinary design problems the parametric designs have some advantages over the traditional designs, which often have too few design points for model checking and may not be robust to model and parameter misspecifications. We first describe methods and algorithms to construct the parametric design for ordinary nonlinear design problems and show that the parametric designs are robust to parameter misspecification and have good power for model discrimination. Then we extend this design method to construct optimal repeated measurement designs for nonlinear mixed models. We also use this parametric design for modelling a PK/PD relationship and propose a simulation-based algorithm. The application of parametric designs is illustrated with a three-parameter open one-compartment PK model for the ordinary design and repeated measurement design, and an Emax model for the pharmacokinetic/pharmacodynamic (PK/PD) trial design.
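For reference, a brief sketch of the two response models named in the abstract, the open one-compartment PK model with first-order absorption and the Emax model; the dose, sampling times, and parameter values are purely illustrative assumptions.

import numpy as np

def one_compartment(t, dose, ka, ke, V):
    # Open one-compartment model with first-order absorption:
    # C(t) = dose*ka / (V*(ka - ke)) * (exp(-ke*t) - exp(-ka*t))
    return dose * ka / (V * (ka - ke)) * (np.exp(-ke * t) - np.exp(-ka * t))

def emax(conc, e0, emax_, ec50):
    # Emax model linking concentration to effect: E = E0 + Emax*C / (EC50 + C)
    return e0 + emax_ * conc / (ec50 + conc)

# Illustrative values: dose 100, ka = 1.5/h, ke = 0.2/h, V = 10 L; E0 = 5, Emax = 40, EC50 = 3
t = np.array([0.25, 0.5, 1.0, 2.0, 4.0, 8.0, 12.0])          # candidate sampling times (h)
conc = one_compartment(t, dose=100, ka=1.5, ke=0.2, V=10)
effect = emax(conc, e0=5.0, emax_=40.0, ec50=3.0)
print(np.round(conc, 2), np.round(effect, 2))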

4.
It is well known that it is difficult to construct minimax optimal designs. Furthermore, since in practice we never know the true error variance, it is important to allow small deviations and construct robust optimal designs. We investigate a class of minimax optimal regression designs for models with heteroscedastic errors that are robust against possible misspecification of the error variance. Commonly used A-, c-, and I-optimality criteria are included in this class of minimax optimal designs. Several theoretical results are obtained, including a necessary condition and a reflection symmetry for these minimax optimal designs. In this article, we focus mainly on linear models and assume that an approximate error variance function is available. However, we also briefly discuss how the methodology works for nonlinear models. We then propose an effective algorithm to solve challenging nonconvex optimization problems to find minimax designs on discrete design spaces. Examples are given to illustrate minimax optimal designs and their properties.

5.
Saunders & Eccleston (1992) presented an approach to the design of 2-level factorial experiments for continuous processes. It determined sets of contrasts between the observations that could be well estimated, and then selected a design so that those contrasts estimated the parameters of interest. This paper shows that a well-estimated contrast must have a large number of changes of sign or level, and also be ‘paired’ in a particular sense. It develops an algorithm which constructs designs whose contrasts have a large number of changes of sign, evenly spread among the contrasts, and which are optimal or near optimal. When such designs exist they are often preferable to those produced by the reverse foldover algorithm of Cheng & Steinberg (1991).

6.
A D-optimal minimax design criterion is proposed to construct two-level fractional factorial designs, which can be used to estimate a linear model with main effects and some specified interactions. D-optimal minimax designs are robust against model misspecification and have small biases if the linear model contains more interaction terms. When the D-optimal minimax criterion is compared with the D-optimal design criterion, we find that the D-optimal design criterion is quite robust against model misspecification. Lower and upper bounds derived for the loss functions of optimal designs can be used to estimate the efficiencies of any design and evaluate the effectiveness of a search algorithm. Four algorithms to search for optimal designs for any run size are discussed and compared through several examples. An annealing algorithm and a sequential algorithm are particularly effective in searching for optimal designs.

7.
Designs for two-colour microarray experiments can be viewed as block designs with two treatments per block. Explicit formulae for the A- and D-criteria are given for the case that the number of blocks is equal to the number of treatments. These show that the A- and D-optimality criteria conflict badly if there are 10 or more treatments. A similar analysis shows that designs with one or two extra blocks perform very much better, but again there is a conflict between the two optimality criteria for moderately large numbers of treatments. It is shown that this problem can be avoided by slightly increasing the number of blocks. The two colours that are used in each block effectively turn the block design into a row–column design. There is no need to use a design in which every treatment has each colour equally often: rather, an efficient row–column design should be used. For odd replication, it is recommended that the row–column design should be based on a bipartite graph, and it is proved that the optimal such design corresponds to an optimal block design for half the number of treatments. Efficient row–column designs are given for replications 3–6. It is shown how to adapt them for experiments in which some treatments have replication only 2.

8.
A closer look at de-aliasing effects using an efficient foldover technique
A. M. Elsawah, Statistics, 2017, 51(3): 532-557
Foldover techniques are used to reduce confounding when some important effects (usually lower order effects) cannot be estimated independently. This article develops an efficient foldover mechanism for symmetric or asymmetric designs, whether regular or nonregular. We take uniformity criteria (UC) as the optimality measures to construct optimal combined designs (the initial design plus its corresponding foldover design), which have better capability of estimating lower order effects. The relationship between any initial design and its combined design is studied. A comparison study of the combined designs under different UC is provided. Equivalence between any combined design and its complementary combined design is investigated, which is a very useful constraint that reduces the search space. Using our results as benchmarks, we can implement a powerful algorithm for constructing optimal combined designs. Our work covers, as special cases, the results of about 20 recent articles and improves on them, so this article is a good reference for constructing effective designs.

9.
The identification of synergistic interactions between combinations of drugs is an important area within drug discovery and development. Pre-clinically, large numbers of screening studies to identify synergistic pairs of compounds can often be run, necessitating efficient and robust experimental designs. We consider experimental designs for detecting interaction between two drugs in a pre-clinical in vitro assay in the presence of uncertainty about the monotherapy response. The monotherapies are assumed to follow the Hill equation with common lower and upper asymptotes, and a common variance. The optimality criterion used is the variance of the interaction parameter. We focus on ray designs and investigate two algorithms for selecting the optimum set of dose combinations. The first is a forward algorithm in which design points are added sequentially. This is found to give useful solutions in simple cases but can lack robustness when knowledge about the monotherapy parameters is insufficient. The second algorithm is a more pragmatic approach where the design points are constrained to be distributed log-normally along the rays and monotherapy doses. We find that the pragmatic algorithm is more stable than the forward algorithm, and even when the forward algorithm has converged, the pragmatic algorithm can still out-perform it. Practically, we find that good designs for detecting an interaction have equal numbers of points on monotherapies and combination therapies, with those points typically placed in positions where a 50% response is expected. More uncertainty in monotherapy parameters leads to an optimal design with design points that are more spread out.

10.
In the optimal experimental design literature, G-optimality is defined as minimizing the maximum prediction variance over the entire experimental design space. Although G-optimality is a highly desirable property in many applications, few computer algorithms have been developed for constructing G-optimal designs. Some existing methods employ an exhaustive search over all candidate designs, which is time-consuming and inefficient. In this paper, a new algorithm for constructing G-optimal experimental designs is developed for both linear and generalized linear models. The new algorithm is based on clustering of candidate or evaluation points over the design space and is a combination of the point exchange algorithm and the coordinate exchange algorithm. In addition, a robust design algorithm is proposed for generalized linear models by modifying an existing method. The proposed algorithm is compared with the methods proposed by Rodriguez et al. [Generating and assessing exact G-optimal designs. J. Qual. Technol. 2010;42(1):3–20] and Borkowski [Using a genetic algorithm to generate small exact response surface designs. J. Prob. Stat. Sci. 2003;1(1):65–88] for linear models, and with the simulated annealing method and the genetic algorithm for generalized linear models, through several examples in terms of G-efficiency and computation time. The results show that the proposed algorithm can obtain a design with higher G-efficiency in a much shorter time. Moreover, the computation time of the proposed algorithm increases only polynomially as the size of the model increases.
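As a reminder of what the G-criterion measures, a short sketch that evaluates the scaled prediction variance d(x) = f(x)'M^{-1}f(x) of an exact design over a candidate grid and reports its G-efficiency p / max d(x); the quadratic one-factor model and the 6-run design are assumptions for illustration, not the paper's examples.

import numpy as np

def model_terms(x):
    # Regression functions for a one-factor quadratic model: f(x) = (1, x, x^2)
    return np.array([1.0, x, x * x])

def g_efficiency(design, grid):
    # Information matrix (per run) of the exact design
    F = np.array([model_terms(x) for x in design])
    M = F.T @ F / len(design)
    Minv = np.linalg.inv(M)
    # Scaled prediction variance d(x) = f(x)' M^{-1} f(x) over the candidate grid
    d = np.array([model_terms(x) @ Minv @ model_terms(x) for x in grid])
    p = F.shape[1]                        # number of model parameters
    return p / d.max(), d.max()           # G-efficiency and maximum prediction variance

grid = np.linspace(-1, 1, 201)
design = np.array([-1.0, -1.0, 0.0, 0.0, 1.0, 1.0])    # a candidate 6-run design
eff, max_var = g_efficiency(design, grid)
print(f"G-efficiency = {eff:.3f}, max scaled prediction variance = {max_var:.3f}")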

11.
Fractional factorial (FF) designs are no doubt the most widely used designs in experimental investigations due to their efficient use of experimental runs. One price we pay for using FF designs is, clearly, our inability to obtain estimates of some important effects (main effects or second order interactions) that are separate from estimates of other effects (usually higher order interactions). When the estimate of an effect also includes the influence of one or more other effects, the effects are said to be aliased. Folding over an FF design is a method for breaking the links between aliased effects in a design. The question is, how do we define the foldover structure for asymmetric FF designs, whether regular or nonregular? How do we choose the optimal foldover plan? How do we use optimal foldover plans to construct combined designs which have better capability of estimating lower order effects? The main objective of the present paper is to provide answers to these questions. Using the new results in this paper as benchmarks, we can implement a powerful and efficient algorithm for finding optimal foldover plans which can be used to break links between aliased effects.
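To illustrate the foldover operation itself (not the optimal-plan search of the paper), a sketch that folds a regular 2^(4-1) fraction on one column and verifies that the folded column is de-aliased from the three-factor interaction it was aliased with; the initial design and the foldover plan are illustrative assumptions.

import numpy as np
from itertools import product

# Regular 2^(4-1) fractional factorial with defining relation D = ABC
base = np.array(list(product([-1, 1], repeat=3)))                         # full factorial in A, B, C
design = np.column_stack([base, base[:, 0] * base[:, 1] * base[:, 2]])    # append D = ABC

def foldover(design, plan):
    # plan: indices of the columns whose signs are reversed in the second fraction
    folded = design.copy()
    folded[:, plan] *= -1
    return np.vstack([design, folded])       # combined design = initial design plus its foldover

combined = foldover(design, plan=[3])        # fold on column D only (an illustrative plan)

# In the 8-run fraction D is aliased with ABC; after folding on D, the D column
# and the ABC interaction column are orthogonal over the 16 combined runs.
abc = combined[:, 0] * combined[:, 1] * combined[:, 2]
print(int(abc @ combined[:, 3]))             # 0 -> de-aliased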

12.
Simon's two-stage designs are widely used in clinical trials to assess the activity of a new treatment. In practice, it is often the case that the second stage sample size is different from the planned one. For this reason, the critical value for the second stage is no longer valid for statistical inference. Existing approaches for making statistical inference are either based on asymptotic methods or not optimal. We propose an approach to maximize the power of a study while maintaining the type I error rate, where the type I error rate and power are calculated exactly from binomial distributions. The critical values of the proposed approach are numerically searched by an intelligent algorithm over the complete parameter space. It is guaranteed that the proposed approach is at least as powerful as the conditional power approach, which is a valid but non-optimal approach. The power gain of the proposed approach can be substantial as compared to the conditional power approach. We apply the proposed approach to a real Phase II clinical trial.
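A small sketch of the exact binomial calculation mentioned above: the rejection probability of a Simon-type two-stage design is summed over the stage-1 outcomes that pass the futility cutoff. The design parameters below are a commonly cited example chosen for illustration, and the paper's search over second-stage critical values is not reproduced.

from scipy.stats import binom

def reject_prob(p, n1, r1, n2, r):
    # P(reject H0 | response rate p) for a Simon-type two-stage design:
    # continue past stage 1 if responses x1 > r1; reject H0 at the end if x1 + x2 > r
    total = 0.0
    for x1 in range(r1 + 1, n1 + 1):
        total += binom.pmf(x1, n1, p) * binom.sf(r - x1, n2, p)
    return total

# Illustrative design: n1 = 13, r1 = 3, n2 = 30, r = 12, with p0 = 0.2 and p1 = 0.4
alpha = reject_prob(0.2, n1=13, r1=3, n2=30, r=12)   # exact type I error rate
power = reject_prob(0.4, n1=13, r1=3, n2=30, r=12)   # exact power
print(round(alpha, 4), round(power, 4))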

13.
Clinical trials are often designed to compare several treatments with a common control arm in pairwise fashion. In this paper we study optimal designs for such studies, based on minimizing the total number of patients required to achieve a given level of power. A common approach when designing studies to compare several treatments with a control is to achieve the desired power for each individual pairwise treatment comparison. However, it is often more appropriate to characterize power in terms of the family of null hypotheses being tested, and to control the probability of rejecting all, or alternatively any, of these individual hypotheses. While all approaches lead to unbalanced designs with more patients allocated to the control arm, it is found that the optimal design and required number of patients can vary substantially depending on the chosen characterization of power. The methods make allowance for both continuous and binary outcomes and are illustrated with reference to two clinical trials, one involving multiple doses compared to placebo and the other involving combination therapy compared to mono-therapies. In one example a 55% reduction in sample size is achieved through an optimal design combined with the appropriate characterization of power.
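To show the difference between the two characterizations of power, a Monte-Carlo sketch that estimates the probability of rejecting any versus all pairwise comparisons of several treatments against a common control; the one-sided z-tests, effect sizes, and unbalanced allocation are illustrative assumptions and no multiplicity adjustment is applied.

import numpy as np
from scipy.stats import norm

def power_any_all(n_ctrl, n_trt, effects, sd=1.0, alpha=0.05, reps=20000, seed=0):
    # Monte-Carlo power of rejecting ANY versus ALL one-sided pairwise comparisons
    # of several treatments against a shared control (z-tests with known sd)
    rng = np.random.default_rng(seed)
    z_crit = norm.ppf(1 - alpha)
    any_rej = all_rej = 0
    for _ in range(reps):
        ctrl_mean = rng.normal(0.0, sd, n_ctrl).mean()
        rejected = []
        for delta, n in zip(effects, n_trt):
            trt_mean = rng.normal(delta, sd, n).mean()
            se = sd * np.sqrt(1.0 / n + 1.0 / n_ctrl)
            rejected.append((trt_mean - ctrl_mean) / se > z_crit)
        any_rej += any(rejected)
        all_rej += all(rejected)
    return any_rej / reps, all_rej / reps

# Illustrative: three doses versus control, with more patients allocated to the control arm
print(power_any_all(n_ctrl=120, n_trt=[60, 60, 60], effects=[0.4, 0.4, 0.4]))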

14.
Screening is the first stage of many industrial experiments and is used to determine efficiently and effectively a small number of potential factors among a large number of factors which may affect a particular response. In a recent paper, Jones and Nachtsheim [A class of three-level designs for definitive screening in the presence of second-order effects. J. Qual. Technol. 2011;43:1–15] have given a class of three-level designs for screening in the presence of second-order effects using a variant of the coordinate exchange algorithm as it was given by Meyer and Nachtsheim [The coordinate-exchange algorithm for constructing exact optimal experimental designs. Technometrics 1995;37:60–69]. Xiao et al. [Constructing definitive screening designs using conference matrices. J. Qual. Technol. 2012;44:2–8] have used conference matrices to construct definitive screening designs with good properties. In this paper, we propose a method for the construction of efficient three-level screening designs based on weighing matrices and their complete foldover. This method can be considered as a generalization of the method proposed by Xiao et al. [Constructing definitive screening designs using conference matrices. J. Qual. Technol. 2012;44:2–8]. Many new orthogonal three-level screening designs are constructed and their properties are explored. These designs are highly D-efficient and provide uncorrelated estimates of main effects that are unbiased by any second-order effect. Our approach is relatively straightforward and no computer search is needed since our designs are constructed using known weighing matrices.
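For intuition about this type of construction, a sketch that stacks a conference matrix, its complete foldover, and a centre run into a three-level design and checks that the main-effect columns are orthogonal; the order-6 Paley conference matrix is a standard example used here as an assumption, and the weighing-matrix generalization of the paper is not reproduced.

import numpy as np

# Quadratic-character (Jacobsthal) circulant for GF(5): chi(0)=0, chi(1)=chi(4)=1, chi(2)=chi(3)=-1
first_row = [0, 1, -1, -1, 1]
Q = np.array([[first_row[(j - i) % 5] for j in range(5)] for i in range(5)])

# Order-6 Paley conference matrix C: all-ones border around Q with a zero diagonal; C C' = 5 I
C = np.zeros((6, 6), dtype=int)
C[0, 1:] = 1
C[1:, 0] = 1
C[1:, 1:] = Q

# Three-level screening design: the matrix, its complete foldover, and one centre run
D = np.vstack([C, -C, np.zeros((1, 6), dtype=int)])        # 13 runs, 6 factors, levels -1, 0, 1

print((C @ C.T == 5 * np.eye(6, dtype=int)).all())          # conference-matrix check: True
print(D.T @ D)                                               # diagonal matrix: orthogonal main effects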

15.
Exchange algorithms are popular for finding optimal or efficient designs for linear models, but there are few discussions of this type of algorithm for generalized linear models (GLMs) in the literature. A new algorithm, the generalized coordinate exchange algorithm (gCEA), is developed in this article to construct efficient designs for GLMs. We compare the performance of the proposed algorithm with other optimization algorithms, including the point exchange algorithm, the columnwise-pairwise algorithm, simulated annealing, and the genetic algorithm, and demonstrate the superior performance of this new algorithm.
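A generic coordinate-exchange sketch for a locally D-optimal design under a logistic GLM, given only to illustrate the class of algorithm; it is not the gCEA of the article, and the model, parameter guess, candidate levels, and run size are all assumptions.

import numpy as np

def info_det(X, beta):
    # Locally D-optimal criterion for a logistic GLM: |sum_i w_i f(x_i) f(x_i)'|,
    # where f(x) = (1, x1, x2) and w_i = p_i (1 - p_i) evaluated at the assumed beta
    F = np.column_stack([np.ones(len(X)), X])
    p = 1.0 / (1.0 + np.exp(-F @ beta))
    M = F.T @ (F * (p * (1 - p))[:, None])
    return np.linalg.det(M)

def coordinate_exchange(n_runs, beta, levels=np.linspace(-1, 1, 11), sweeps=20, seed=0):
    rng = np.random.default_rng(seed)
    X = rng.choice(levels, size=(n_runs, 2))            # random starting design
    best = info_det(X, beta)
    for _ in range(sweeps):
        improved = False
        for i in range(n_runs):                         # visit each run ...
            for j in range(2):                          # ... and each coordinate
                for lev in levels:                      # try every candidate level
                    old = X[i, j]
                    X[i, j] = lev
                    val = info_det(X, beta)
                    if val > best + 1e-12:
                        best, improved = val, True      # keep the improving level
                    else:
                        X[i, j] = old                   # otherwise revert
        if not improved:
            break
    return X, best

design, crit = coordinate_exchange(n_runs=8, beta=np.array([0.5, 1.0, -1.0]))
print(design, crit)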

16.
This paper presents an algorithm for the construction of optimal or near optimal change-over designs for arbitrary numbers of treatments, periods and units. Previous research on optimality has been either theoretical or has resulted in limited tabulations of small optimal designs. The algorithm consists of a number of steps: first, find an optimal direct treatment effects design, ignoring residual effects, and then optimise this class of designs with respect to residual effects. Poor designs are avoided by judicious application of the (M, S)-optimality criterion, and modifications of it, to appropriate matrices. The performance of the algorithm is illustrated by examples.

17.
We find optimal designs for linear models using a novel algorithm that iteratively combines a semidefinite programming (SDP) approach with adaptive grid techniques. The proposed algorithm is also adapted to find locally optimal designs for nonlinear models. The search space is first discretized, and SDP is applied to find the optimal design based on the initial grid. The points in the next grid set are points that maximize the dispersion function of the SDP-generated optimal design, found using nonlinear programming. The procedure is repeated until a user-specified stopping rule is reached. The proposed algorithm is broadly applicable, and we demonstrate its flexibility using (i) models with one or more variables and (ii) differentiable design criteria, such as A- and D-optimality, and a non-differentiable criterion like E-optimality, including the mathematically more challenging case when the minimum eigenvalue of the information matrix of the optimal design has geometric multiplicity larger than 1. Our algorithm is computationally efficient because it is based on mathematical programming tools and so optimality is assured at each stage; it also exploits the convexity of the problems whenever possible. Using several linear and nonlinear models with one or more factors, we show the proposed algorithm can efficiently find optimal designs.
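As a rough stand-in for the first step of such a procedure, a sketch that finds approximate D-optimal weights on a fixed grid with CVXPY's log_det formulation (a conic reformulation rather than the paper's SDP) and then evaluates the dispersion function used to choose the next grid; the quadratic model and the grid are assumptions.

import numpy as np
import cvxpy as cp

grid = np.linspace(-1, 1, 21)
F = np.vander(grid, 3, increasing=True)            # quadratic model: f(x) = (1, x, x^2)
w = cp.Variable(len(grid), nonneg=True)            # design weights on the grid points
M = sum(w[i] * np.outer(F[i], F[i]) for i in range(len(grid)))
problem = cp.Problem(cp.Maximize(cp.log_det(M)), [cp.sum(w) == 1])
problem.solve()

# Dispersion (scaled prediction variance) of the computed design over the grid; by the
# equivalence theorem its maximum should be close to p = 3 and attained at the support
# points, and the adaptive-grid idea is to add the points where the dispersion is largest.
Mval = F.T @ (F * w.value[:, None])
d = np.einsum('ij,jk,ik->i', F, np.linalg.inv(Mval), F)
print(np.round(grid[w.value > 1e-3], 2), round(d.max(), 2))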

18.
This paper describes an efficient algorithm for the construction of optimal or near-optimal resolvable incomplete block designs (IBDs) for any number of treatments v < 100. The performance of this algorithm is evaluated against known lattice designs and the 414 α-designs of Patterson & Williams [36]. For the designs under study, it appears that our algorithm is about as effective as the simulated annealing algorithm of Venables & Eccleston [42]. An example of the use of our algorithm to construct the row (or column) components of resolvable row-column designs is given.

19.
The construction of optimal designs for change-over experiments requires consideration of the two component treatment designs: one for the direct treatments and the other for the residual (carry-over) treatments. A multi-objective approach is introduced using simulated annealing, which simultaneously optimises each of the component treatment designs to produce a set of dominant designs in one run of the algorithm. The algorithm is used to demonstrate that a wide variety of change-over designs can be generated quickly on a desktop computer. These are generally better than those previously recorded in the literature.

20.
ABSTRACT

Optimal main effects plans (MEPs) and optimal foldover designs can often be performed as a series of nested optimal designs. Then, if the experiment cannot be completed due to time or budget constraints, the fraction already performed may still be an optimal design. We show that the optimal MEP for 4t factors in 4t + 4 points does not contain the optimal MEP for 4t factors in 4t + 2 points nested within it. In general, the optimal MEP for 4t factors in 4t + 4 points does not contain the optimal MEPs for 4t factors in 4t + 1, 4t + 2, or 4t + 3 points and the optimal MEP for 4t + 1 factors in 4t + 4 points does not contain the optimal MEPs for 4t + 1 factors in 4t + 2 or 4t + 3 points. We also show that the runs in an orthogonal design for 4t factors in 4t + 4 points, and the optimal foldover designs obtained by folding, should be performed in a certain sequence in order to avoid the possibility of a singular X'X matrix.
