Similar Documents
20 similar documents found (search time: 546 ms)
1.
Supersaturated designs are factorial designs in which the number of potential effects exceeds the run size. They are commonly used in screening experiments with the aim of identifying the dominant active factors at low cost. However, an important but poorly developed research area is the analysis of such designs with non-normal responses. In this article, we develop a variable selection strategy by modifying the PageRank algorithm, which is commonly used in the Google search engine for ranking webpages. The proposed method incorporates an appropriate information-theoretic measure into this algorithm, so that it can be used efficiently for factor screening. A noteworthy advantage of this procedure is that it allows supersaturated designs to be used for analyzing discrete data; a generalized linear model is therefore assumed. As demonstrated by a thorough simulation study, in which the Type I and Type II error rates are computed for a wide range of underlying models and designs, the presented approach is quite advantageous and effective.
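As background for this abstract, the PageRank step can be sketched with plain power iteration. The information-theoretic edge weights the authors plug into the algorithm are not reproduced here, so the weight matrix `W` below is an arbitrary stand-in for a factor-similarity matrix.

```python
import numpy as np

def pagerank(W, damping=0.85, tol=1e-10, max_iter=1000):
    """Power iteration on a nonnegative weight matrix W.

    In the screening context W would encode an information-theoretic
    similarity between factors; here it is an arbitrary example."""
    W = np.asarray(W, dtype=float)
    n = W.shape[0]
    col_sums = W.sum(axis=0)
    col_sums[col_sums == 0] = 1.0          # guard against dangling nodes
    P = W / col_sums                       # column-stochastic transition matrix
    r = np.full(n, 1.0 / n)                # uniform start
    for _ in range(max_iter):
        r_new = damping * P @ r + (1 - damping) / n
        if np.abs(r_new - r).sum() < tol:
            break
        r = r_new
    return r_new

# Rank 4 hypothetical factors; nodes 1 and 2 have more connections.
W = np.array([[0, 1, 1, 0],
              [1, 0, 1, 1],
              [1, 1, 0, 1],
              [0, 1, 1, 0]], dtype=float)
scores = pagerank(W)
```

Sorting `scores` in decreasing order then gives the factor ranking used for screening.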

2.
In this paper, we propose the application of group screening methods for analyzing data from E(fNOD)-optimal mixed-level supersaturated designs possessing the equal-occurrence property. Supersaturated designs are a large class of factorial designs which can be used for screening out the important factors from a large set of potentially active variables. The great advantage of these designs is that they reduce the experimental cost drastically, but their critical disadvantage is the high degree of confounding among factorial effects. Following the idea of group screening, the f factors are sub-divided into g "group-factors". The group-factors are then studied, using penalized likelihood methods, in a factorial design with orthogonal or near-orthogonal columns. All factors in groups found to have a large effect are then studied in a second stage of experiments. A comparison of the Type I and Type II error rates of various estimation methods via simulation experiments is performed, with the results presented in tables and discussed.
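The two-stage group screening idea can be illustrated with a small simulation. The design, group sizes, effect sizes, and threshold below are all hypothetical, and a simple orthogonal-design least-squares fit stands in for the penalized likelihood analysis of the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical setting: f = 12 factors in g = 4 group-factors; only factor 2 active.
f, g = 12, 4
groups = np.array_split(np.arange(f), g)      # groups of 3 factors each
beta = np.zeros(f)
beta[2] = 3.0

# Stage 1: an orthogonal 8-run two-level design on the group-factors;
# every factor in a group is set to its group's level, so each group
# effect is the sum of its members' effects.
G = np.array([[ 1,  1,  1,  1],
              [ 1,  1, -1, -1],
              [ 1, -1,  1, -1],
              [ 1, -1, -1,  1],
              [-1,  1,  1, -1],
              [-1,  1, -1,  1],
              [-1, -1,  1,  1],
              [-1, -1, -1, -1]], dtype=float)
group_effects = np.array([beta[idx].sum() for idx in groups])
y = G @ group_effects + rng.normal(scale=0.1, size=8)

# Orthogonality gives G'G = 8 I, so least squares reduces to G'y / 8.
coef = G.T @ y / 8

# Stage 2: only factors in groups with a large estimated effect go forward.
candidates = np.concatenate([idx for idx, c in zip(groups, coef) if abs(c) > 1.0])
```

Here only the first group is flagged, so the second-stage experiment studies three factors instead of twelve.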

3.
In this paper a new discrepancy measure of uniformity for uniform designs (UDs) in a unit cube is presented. Alternative measures of uniformity based on distance criteria, which can be applied in higher dimensions, are also discussed. The good lattice point (glp) method is used to construct the uniform designs. Two approaches (generator equivalence and projection) for reducing the computational cost of the glp method are proposed and discussed, and two examples are presented.
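The glp construction itself is compact. A minimal sketch follows; the run size and generating vector are arbitrary choices, and the paper's new discrepancy measure is not reproduced.

```python
import numpy as np
from math import gcd

def glp_design(n, h):
    """Good lattice point set in [0,1)^s: run k (k = 1..n), factor j gets
    ((k * h_j) mod n + 0.5) / n.  Each generator h_j must be coprime to n,
    so every column is a permutation of the n equally spaced levels."""
    assert all(gcd(hj, n) == 1 for hj in h)
    k = np.arange(1, n + 1)[:, None]
    return ((k * np.array(h)) % n + 0.5) / n

# A hypothetical 7-run, 2-factor design with generating vector (1, 3).
D = glp_design(7, [1, 3])
```

Uniformity of the one-dimensional projections is automatic; choosing the generating vector to minimize a discrepancy measure is the expensive step the paper's two approaches address.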

4.
It is shown that variance-balanced designs can be obtained from Type I orthogonal arrays for many general models with two kinds of treatment effects, including ones for interference, with general dependence structures. These designs can be used to obtain optimal and efficient designs. Some examples and design comparisons are given.

5.
The aim of this study is to apply the Bayesian method of identifying optimal experimental designs to a toxicokinetic-toxicodynamic model that describes the response of aquatic organisms to time-dependent concentrations of toxicants. We restrict ourselves to designs consisting of pulses and constant concentrations. A design within this set is called optimal if it maximizes the expected gain of knowledge about the parameters. The focus is on parameters associated with the model's auxiliary damage variable, which can only be inferred indirectly from survival time-series data. Gain of knowledge through an experiment is quantified both by the ratio of posterior to prior variances of individual parameters and by the entropy of the posterior distribution relative to the prior on the whole parameter space. The numerical methods developed to calculate expected gain of knowledge should be useful beyond this case study, in particular for multinomially distributed data such as survival time series.

6.
Supersaturated designs are a large class of factorial designs which can be used for screening out the important factors from a large set of potentially active variables. Their great advantage is that they reduce the experimental cost drastically, but their critical disadvantage is the confounding involved in the statistical analysis. In this article, we propose a method for analyzing data from several types of supersaturated designs. Modifications of widely used information criteria are given and applied to the variable selection procedure for identifying the active factors. The effectiveness of the proposed method is demonstrated via simulated experiments and comparisons.

7.
A versatile procedure is described comprising an application of statistical techniques to the analysis of the large, multi‐dimensional data arrays produced by electroencephalographic (EEG) measurements of human brain function. Previous analytical methods have been unable to identify objectively the precise times at which statistically significant experimental effects occur, owing to the large number of variables (electrodes) and small number of subjects, or have been restricted to two‐treatment experimental designs. Many time‐points are sampled in each experimental trial, making adjustment for multiple comparisons mandatory. Given the typically large number of comparisons and the clear dependence structure among time‐points, simple Bonferroni‐type adjustments are far too conservative. A three‐step approach is proposed: (i) summing univariate statistics across variables; (ii) using permutation tests for treatment effects at each time‐point; and (iii) adjusting for multiple comparisons using permutation distributions to control family‐wise error across the whole set of time‐points. Our approach provides an exact test of the individual hypotheses while asymptotically controlling family‐wise error in the strong sense, and can provide tests of interaction and main effects in factorial designs. An application to two experimental data sets from EEG studies is described, but the approach has application to the analysis of spatio‐temporal multivariate data gathered in many other contexts.
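The max-statistic permutation adjustment described in this abstract can be sketched for a single two-condition comparison. Sample sizes, the effect location and size, and the permutation count below are all hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: two conditions, n subjects each, T time-points,
# with a real effect only at time-point 3.
n, T = 20, 10
a = rng.normal(size=(n, T))
b = rng.normal(size=(n, T))
b[:, 3] += 2.5

def tstats(x, y):
    """Absolute two-sample t statistic at every time-point."""
    d = x.mean(0) - y.mean(0)
    se = np.sqrt(x.var(0, ddof=1) / len(x) + y.var(0, ddof=1) / len(y))
    return np.abs(d / se)

obs = tstats(a, b)

# Permutation null of the MAXIMUM statistic across time-points: comparing
# each observed statistic with this distribution controls family-wise
# error over the whole set of time-points.
pooled = np.vstack([a, b])
n_perm = 500
max_null = np.empty(n_perm)
for i in range(n_perm):
    idx = rng.permutation(2 * n)
    max_null[i] = tstats(pooled[idx[:n]], pooled[idx[n:]]).max()

p_adj = (1 + (max_null[None, :] >= obs[:, None]).sum(axis=1)) / (n_perm + 1)
```

Only the time-point carrying the true effect should survive the adjustment here, while the Bonferroni-free use of the maximum keeps the procedure from being overly conservative under dependence.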

8.
The identification of synergistic interactions between combinations of drugs is an important area within drug discovery and development. Pre‐clinically, large numbers of screening studies to identify synergistic pairs of compounds can often be run, necessitating efficient and robust experimental designs. We consider experimental designs for detecting interaction between two drugs in a pre‐clinical in vitro assay in the presence of uncertainty of the monotherapy response. The monotherapies are assumed to follow the Hill equation with common lower and upper asymptotes, and a common variance. The optimality criterion used is the variance of the interaction parameter. We focus on ray designs and investigate two algorithms for selecting the optimum set of dose combinations. The first is a forward algorithm in which design points are added sequentially. This is found to give useful solutions in simple cases but can lack robustness when knowledge about the monotherapy parameters is insufficient. The second algorithm is a more pragmatic approach where the design points are constrained to be distributed log‐normally along the rays and monotherapy doses. We find that the pragmatic algorithm is more stable than the forward algorithm, and even when the forward algorithm has converged, the pragmatic algorithm can still out‐perform it. Practically, we find that good designs for detecting an interaction have equal numbers of points on monotherapies and combination therapies, with those points typically placed in positions where a 50% response is expected. More uncertainty in monotherapy parameters leads to an optimal design with design points that are more spread out. Copyright © 2015 John Wiley & Sons, Ltd.
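The Hill equation underlying the monotherapy model is straightforward to encode. All parameter values and doses below are hypothetical, and the variance computation for the interaction parameter is not reproduced.

```python
import numpy as np

def hill(dose, lower, upper, ec50, h):
    """Four-parameter Hill equation for a monotherapy dose-response curve."""
    dose = np.asarray(dose, dtype=float)
    return lower + (upper - lower) * dose**h / (ec50**h + dose**h)

# A ray design fixes the mixing proportion of the two drugs and varies the
# total dose along the ray; here a 1:1 ray with log-spaced total doses.
total = np.logspace(-2, 2, 9)
doses_a, doses_b = 0.5 * total, 0.5 * total

# The abstract suggests placing points where a 50% response is expected;
# at dose == ec50 the Hill curve is exactly halfway between the asymptotes.
resp = hill(1.0, lower=0.0, upper=100.0, ec50=1.0, h=2.0)
```

This makes concrete why dose placement near the EC50 is informative: the curve is steepest there, so small parameter changes move the predicted response most.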

9.
Space-filling designs are important for deterministic computer experiments. Even a single experiment can be very time consuming and can have many input parameters, and the underlying function generating the output is often nonlinear; the computer experiment therefore has to be designed carefully. Many design criteria exist that can be optimized numerically. Here, a method is developed which needs no algorithmic optimization. A mesh of nearly regular simplices is constructed and the vertices of the simplices are used as potential design points. Extracting a design from these meshes is fast and easy to implement once the underlying mesh has been constructed. The extracted designs are highly competitive with respect to the maximin design criterion, and it is easy to extract designs for nonstandard design spaces.

10.
Recently, many supersaturated designs have been proposed. A supersaturated design is a fractional factorial design in which the number of factors is greater than the number of experimental runs. The main thrust of previous studies has been to generate more columns while avoiding large squared inner products among the design columns. Such designs are appropriate when every factor is equally likely to be active. When the factors can be partitioned into two groups, one with a high and one with a low probability of each factor being active, it is desirable to maintain orthogonality among the columns assigned to the factors in the high-probability group. We discuss supersaturated designs that include an orthogonal base and are therefore suitable for this common situation. Mathematical results on the existence of such designs are shown, their construction is presented, and some of their properties based on the squared inner products are discussed.
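The squared-inner-product measure running through this abstract is easy to compute directly. The 4-run orthogonal base and the appended column below are illustrative only.

```python
import numpy as np
from itertools import combinations

def ave_s2(X):
    """Average squared inner product over all column pairs of a two-level
    (+-1) design matrix: the usual E(s^2)-type measure for SSDs."""
    pairs = list(combinations(range(X.shape[1]), 2))
    return sum(float(X[:, i] @ X[:, j]) ** 2 for i, j in pairs) / len(pairs)

# An orthogonal base (a 4-run Hadamard-type design) has measure zero ...
H = np.array([[ 1,  1,  1,  1],
              [ 1, -1,  1, -1],
              [ 1,  1, -1, -1],
              [ 1, -1, -1,  1]], dtype=float)

# ... while appending an extra column makes some pairs non-orthogonal,
# which is the price of supersaturation.
extra = np.array([1.0, 1.0, 1.0, -1.0])
S = np.column_stack([H, extra])
```

Keeping the orthogonal base intact, as the paper proposes, confines the nonzero inner products to pairs involving the appended columns, which can then be assigned to the low-probability factors.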

11.

Supersaturated designs (SSDs) constitute a large class of fractional factorial designs which can be used for screening out the important factors from a large set of potentially active ones. A major advantage of these designs is that they reduce the experimental cost dramatically, but their crucial disadvantage is the confounding involved in the statistical analysis. The identification of active effects in SSDs has been the subject of much recent study. In this article we present a two-stage procedure for analyzing two-level SSDs assuming a main-effects-only model, without interaction terms. The method combines sure independence screening (SIS) with different penalty functions, such as the smoothly clipped absolute deviation (SCAD), the Lasso, and the MC+ penalty, achieving the down-selection and the estimation of the significant effects simultaneously. Insights on using the proposed methodology are provided through various simulation scenarios, and comparisons with existing approaches, such as stepwise selection combined with SCAD and the Dantzig selector (DS), are presented as well. Results of the numerical study and a real data analysis reveal that the proposed procedure is an advantageous tool due to its extremely good performance in identifying active factors.
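The SIS stage can be sketched with marginal correlations. The data are simulated under hypothetical settings, and the penalty stage (SCAD, Lasso, MC+) is only indicated, not implemented.

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical SSD-like data: 14 runs, 24 two-level factors,
# with only factors 0 and 5 truly active.
n, p = 14, 24
X = rng.choice([-1.0, 1.0], size=(n, p))
y = 4.0 * X[:, 0] - 3.0 * X[:, 5] + rng.normal(scale=0.2, size=n)

# SIS step: rank factors by absolute marginal (centered) correlation with
# the response and keep the top d = n - 1 of them.
corr = np.abs((X - X.mean(0)).T @ (y - y.mean()))
d = n - 1
screened = np.argsort(corr)[::-1][:d]

# A penalized fit (SCAD, Lasso, MC+) would now be run on X[:, screened]
# to do the final down-selection and estimation.
```

The point of the screen is dimension reduction from p > n to a sub-model on which penalized likelihood methods are well behaved.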

12.
For regression models with quantitative factors, it is illustrated that the E-optimal design can be extremely inefficient in the sense that it degenerates to a design which takes all observations at only one point. This phenomenon is caused by the differing sizes of the elements in the covariance matrix of the least-squares estimator for the unknown parameters. For this reason we propose replacing the E-criterion by a corresponding standardized version. The advantage of this approach is demonstrated for polynomial regression on a nonnegative interval, where the classical and standardized E-optimal designs can be found explicitly. The phenomena described are not restricted to the E-criterion but appear for nearly all optimality criteria proposed in the literature; standardization is therefore recommended for optimal experimental design in regression models with quantitative factors. The optimal designs with respect to the new standardized criteria satisfy an invariance property similar to that of the famous D-optimal designs, which allows easy calculation of standardized optimal designs on many linearly transformed design spaces.
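The E-criterion and one simple standardization can be computed directly for quadratic regression. The standardization below (rescaling to unit marginal information) is an illustrative choice, not necessarily the paper's exact criterion, and the design points are arbitrary.

```python
import numpy as np

def info_matrix(xs):
    """Information matrix M = F'F/n for quadratic regression f(x) = (1, x, x^2)."""
    F = np.vander(np.asarray(xs, dtype=float), 3, increasing=True)
    return F.T @ F / len(xs)

def e_crit(M):
    """Classical E-criterion: smallest eigenvalue of the information matrix."""
    return np.linalg.eigvalsh(M).min()

def standardized_e_crit(M):
    """An illustrative standardization: rescale so each parameter has unit
    marginal information before taking the smallest eigenvalue, removing the
    huge scale differences between 1, x and x^2 on intervals like [0, 10]."""
    d = 1.0 / np.sqrt(np.diag(M))
    return np.linalg.eigvalsh(M * np.outer(d, d)).min()

M = info_matrix([0.0, 5.0, 10.0])
raw, std = e_crit(M), standardized_e_crit(M)
```

A design with all observations at one point has a singular information matrix, so its E-value is zero for the full quadratic model; the danger the paper describes arises from comparing designs on criteria whose scales are dominated by a single parameter.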

13.
The purpose of screening experiments is to identify the dominant variables from a set of many potentially active variables which may affect some characteristic y. Edge designs were recently introduced in the literature; they are constructed using conference matrices and have been proved to be robust. We introduce a new class of edge designs constructed from skew-symmetric supplementary difference sets. These designs are particularly useful since they can be applied to experiments with an even number of factors and may exist for orders where conference matrices do not. Using this methodology, new edge designs for 6, 14, 22, 26, 38, 42, 46, 58, and 62 factors are constructed. Of special interest are the new edge designs for studying 22 and 58 factors, since conference matrices of the corresponding orders do not exist and edge designs with these parameters have not previously been constructed. The suggested new edge designs achieve the same model-robustness as the traditional edge designs. We also suggest the use of a mirror-edge method as a test for the linearity of the true underlying model. We give the details of the methodology, provide illustrative examples of the new approach, and show that the new designs have good D-efficiencies when applied to first-order models.

14.
Confirmatory bioassay experiments take place in late stages of the drug discovery process when a small number of compounds have to be compared with respect to their properties. As the cost of the observations may differ considerably, the design problem is well specified by the cost of compound used rather than by the number of observations. We show that cost-efficient designs can be constructed using useful properties of the minimum support designs. These designs are particularly suited for studies where the parameters of the model to be estimated are known with high accuracy prior to the experiment, although they prove to be robust against typical inaccuracies of these values. When the parameters of the model can only be specified with ranges of values or by a probability distribution, we use a Bayesian criterion of optimality to construct the required designs. Typically, the number of their support points depends on the prior knowledge for the model parameters. In all cases we recommend identifying a set of designs with good statistical properties but different potential costs to choose from.

15.
Supersaturated designs (SSDs) are factorial designs in which the number of experimental runs is smaller than the number of parameters to be estimated in the model. While most of the literature on SSDs has focused on balanced designs, the construction and analysis of unbalanced designs has not been developed to any great extent. Recent studies discuss the possible advantages of relaxing the balance requirement in the construction or data analysis of SSDs, showing that unbalanced designs compare favorably with balanced designs under several optimality criteria and modes of data analysis. Until now, however, the effect-analysis framework for unbalanced SSDs has been restricted to the central assumption that the experimental data come from a linear model. In this article, we consider unbalanced SSDs for data analysis under the assumption of generalized linear models (GLMs), revealing that unbalanced SSDs perform well despite their lack of balance. The examination of Type I and Type II error rates through an extensive simulation study indicates that the proposed method works satisfactorily.

16.
The problem of comparing several experimental treatments to a standard arises frequently in medical research. Various multi-stage randomized phase II/III designs have been proposed that select one or more promising experimental treatments and compare them to the standard while controlling overall Type I and Type II error rates. This paper addresses phase II/III settings where the joint goals are to increase the average time to treatment failure and control the probability of toxicity while accounting for patient heterogeneity. We are motivated by the desire to construct a feasible design for a trial of four chemotherapy combinations for treating a family of rare pediatric brain tumors. We present a hybrid two-stage design based on two-dimensional treatment effect parameters. A targeted parameter set is constructed from elicited parameter pairs considered to be equally desirable. Bayesian regression models for failure time and the probability of toxicity as functions of treatment and prognostic covariates are used to define two-dimensional covariate-adjusted treatment effect parameter sets. Decisions at each stage of the trial are based on the ratio of posterior probabilities of the alternative and null covariate-adjusted parameter sets. Design parameters are chosen to minimize expected sample size subject to frequentist error constraints. The design is illustrated by application to the brain tumor trial.

17.
Latin hypercube designs (LHDs) are widely used in many applications. As the number of design points or factors becomes large, the total number of LHDs grows exponentially. The large number of feasible designs makes the search for optimal LHDs a difficult discrete optimization problem. To tackle this problem, we propose a new population-based algorithm named LaPSO that is adapted from standard particle swarm optimization (PSO) and customized for LHDs. Moreover, we accelerate LaPSO via a graphics processing unit (GPU). According to extensive comparisons, the proposed LaPSO is more stable than existing approaches and is capable of improving known results.
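A random LHD and the maximin criterion are easy to set up. The element-swap hill climb below is a much cruder stand-in for LaPSO, included only to show the search space: swaps within a column preserve the Latin hypercube property, which is what makes the problem a discrete one.

```python
import numpy as np

rng = np.random.default_rng(3)

def random_lhd(n, k):
    """A random n-run, k-factor Latin hypercube: every column is a
    permutation of 0..n-1, so each factor is evenly stratified."""
    return np.column_stack([rng.permutation(n) for _ in range(k)])

def maximin(D):
    """Smallest pairwise Euclidean distance between design points (to maximize)."""
    diff = D[:, None, :] - D[None, :, :]
    dist = np.sqrt((diff ** 2).sum(-1))
    return dist[np.triu_indices(len(D), 1)].min()

# A simple element-swap hill climb over the LHD space.
D = random_lhd(8, 3)
best = maximin(D)
for _ in range(200):
    col = rng.integers(3)
    i, j = rng.choice(8, size=2, replace=False)
    D[[i, j], col] = D[[j, i], col]        # a swap keeps the LHD property
    score = maximin(D)
    if score >= best:
        best = score
    else:
        D[[i, j], col] = D[[j, i], col]    # undo a bad swap
```

Population-based methods such as LaPSO explore this same swap space with many particles in parallel, which is why a GPU implementation pays off.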

18.
Screening is the first stage of many industrial experiments and is used to determine, efficiently and effectively, the small number of potential factors among a large number of factors that may affect a particular response. In a recent paper, Jones and Nachtsheim [A class of three-level designs for definitive screening in the presence of second-order effects. J. Qual. Technol. 2011;43:1–15] gave a class of three-level designs for screening in the presence of second-order effects, using a variant of the coordinate-exchange algorithm of Meyer and Nachtsheim [The coordinate-exchange algorithm for constructing exact optimal experimental designs. Technometrics 1995;37:60–69]. Xiao et al. [Constructing definitive screening designs using conference matrices. J. Qual. Technol. 2012;44:2–8] used conference matrices to construct definitive screening designs with good properties. In this paper, we propose a method for constructing efficient three-level screening designs based on weighing matrices and their complete foldover, which can be considered a generalization of the method of Xiao et al. Many new orthogonal three-level screening designs are constructed and their properties explored. These designs are highly D-efficient and provide uncorrelated estimates of main effects that are unbiased by any second-order effect. Our approach is relatively straightforward, and no computer search is needed since the designs are constructed from known weighing matrices.
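For the special case of a conference matrix (the Xiao et al. construction that this paper generalizes to weighing matrices), the complete foldover plus a centre run can be verified directly. The order-6 conference matrix below comes from the standard Paley construction over GF(5).

```python
import numpy as np

# A symmetric conference matrix of order 6: zero diagonal, +-1 off-diagonal,
# satisfying C'C = 5 I (Paley construction from the quadratic residues of 5).
C = np.array([[ 0,  1,  1,  1,  1,  1],
              [ 1,  0,  1, -1, -1,  1],
              [ 1,  1,  0,  1, -1, -1],
              [ 1, -1,  1,  0,  1, -1],
              [ 1, -1, -1,  1,  0,  1],
              [ 1,  1, -1, -1,  1,  0]])

# Complete foldover plus a centre run: a 13-run definitive screening
# design for 6 three-level factors.
D = np.vstack([C, -C, np.zeros((1, 6), dtype=int)])
```

Two properties claimed for such designs can be checked numerically: main effects are mutually orthogonal (`D'D` is a multiple of the identity), and every main effect is unbiased by pure-quadratic effects because the foldover cancels them run by run.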

19.
A variety trial sometimes requires a resolvable block design in which the replicates are set out next to each other. The long blocks running through the replicates are then of interest. A t-latinized design is one in which groups of t of these long blocks are binary. In this paper examples of such designs are given. It is shown that the algorithm described by John & Whitaker (1993) can be used to construct designs with high average efficiency factors. Upper bounds on these efficiency factors are also derived.

20.
A supersaturated design is a factorial design in which the number of effects to be estimated is greater than the available number of experimental runs. It is used in many experiments for screening purposes, i.e., for studying a large number of factors and then identifying the active ones. The goal with such a design is to identify, at minimum cost, the few factors under consideration that have dominant effects. While most of the literature on supersaturated designs has focused on their construction and optimality, the data analysis of such designs remains at an early stage. In this paper, we incorporate model complexity into the analysis of supersaturated designs by assuming generalized linear models for a Bernoulli response, analyzing main-effects designs while simultaneously discovering the significant effects.
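The Bernoulli-response GLM fit at the core of such an analysis can be sketched with textbook iteratively reweighted least squares (IRLS). The simulated data and the deliberately small three-factor submodel below are hypothetical, and the paper's complexity-penalized selection procedure is not reproduced.

```python
import numpy as np

rng = np.random.default_rng(5)

# Hypothetical screening data: 60 runs, Bernoulli response driven by factor 0.
n = 60
X = rng.choice([-1.0, 1.0], size=(n, 3))
Z = np.column_stack([np.ones(n), X])            # intercept + three main effects
beta_true = np.array([0.0, 1.5, 0.0, 0.0])
y = (rng.random(n) < 1 / (1 + np.exp(-Z @ beta_true))).astype(float)

# IRLS for the logit-link Bernoulli GLM: repeat weighted least squares on a
# working response until the coefficients settle.
beta = np.zeros(4)
for _ in range(15):
    mu = 1 / (1 + np.exp(-Z @ beta))            # fitted probabilities
    W = mu * (1 - mu)                           # IRLS weights
    z = Z @ beta + (y - mu) / np.maximum(W, 1e-10)  # working response
    beta = np.linalg.solve(Z.T @ (W[:, None] * Z), Z.T @ (W * z))
```

A selection procedure would then compare such fits across candidate submodels with a complexity penalty; here the fit simply recovers a clearly nonzero coefficient for the active factor.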


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号