Similar articles
20 similar articles found (search time: 31 ms)
1.
E-optimal designs for comparing three treatments in blocks of size three are identified, where intrablock observations are correlated according to a first-order autoregressive error process with parameter ρ ∈ (0, 1). For numbers of blocks b of the form b = 3n + 1, there are two distinct optimal designs depending on the value of ρ, with the best design being unequally replicated for large ρ. For other values of b, binary, equireplicate designs with specified within-block assignment patterns are best. In many cases, the stronger majorization optimality is established.
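As a minimal sketch (not the paper's optimality computation), the within-block correlation matrix implied by a first-order autoregressive error process, and its known closed-form tridiagonal inverse, can be written in a few lines; the function names are illustrative:

```python
def ar1_cov(rho, n=3):
    """AR(1) correlation matrix Sigma with Sigma[i][j] = rho**|i-j|."""
    return [[rho ** abs(i - j) for j in range(n)] for i in range(n)]

def ar1_precision(rho, n=3):
    """Closed-form inverse of the AR(1) correlation matrix: tridiagonal,
    scaled by 1/(1 - rho**2), with interior diagonal entries 1 + rho**2."""
    c = 1.0 / (1.0 - rho ** 2)
    P = [[0.0] * n for _ in range(n)]
    for i in range(n):
        P[i][i] = c * (1 + rho ** 2 if 0 < i < n - 1 else 1)
        if i + 1 < n:
            P[i][i + 1] = P[i + 1][i] = -c * rho
    return P

def matmul(A, B):
    """Plain list-of-lists matrix product, enough to check the inverse."""
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]
```

Multiplying `ar1_cov(rho)` by `ar1_precision(rho)` recovers the identity, which is the covariance structure entering the information-matrix comparisons for blocks of size three.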

2.
Box and Behnken [1958. Some new three level second-order designs for surface fitting. Statistical Technical Research Group Technical Report No. 26. Princeton University, Princeton, NJ; 1960. Some new three level designs for the study of quantitative variables. Technometrics 2, 455–475.] introduced a class of 3-level second-order designs for fitting the second-order response surface model. These 17 Box–Behnken designs (BB designs) are available for 3–12 and 16 factors. Although BB designs were developed nearly 50 years ago, they and the central-composite designs of Box and Wilson [1951. On the experimental attainment of optimum conditions. J. Royal Statist. Soc., Ser. B 13, 1–45.] are still the most often recommended response surface designs. Of the 17 aforementioned BB designs, 10 were constructed from balanced incomplete block designs (BIBDs) and seven were constructed from partially BIBDs (PBIBDs). In this paper we show that these seven BB designs constructed from PBIBDs can be improved in terms of rotatability as well as average prediction variance, D- and G-efficiency. In addition, we also report new orthogonally blocked solutions for 5, 8, 9, 11 and 13 factors. Note that an 11-factor BB design is available but cannot be orthogonally blocked. All new designs can be found at http://www.math.montana.edu/jobo/bbd/.
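For the smallest cases the BIBD-based construction can be sketched directly: each block is a pair of factors run as a 2² factorial at ±1 with the remaining factors held at 0, plus center runs. This all-pairs scheme matches the BB designs for 3–5 factors; the larger BB and PBIBD-based designs use other block schemes, so `box_behnken` below is only an illustrative small-k sketch:

```python
from itertools import combinations, product

def box_behnken(k, n_center=3):
    """Pairwise Box-Behnken construction: for each pair of the k factors,
    run a 2**2 factorial at +/-1 with all other factors at 0, then append
    n_center center runs (all factors at 0)."""
    runs = []
    for i, j in combinations(range(k), 2):
        for si, sj in product((-1, 1), repeat=2):
            run = [0] * k
            run[i], run[j] = si, sj
            runs.append(run)
    runs += [[0] * k for _ in range(n_center)]
    return runs
```

For k = 3 this yields the classical 15-run BB design: 12 edge points plus 3 center points.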

3.
A supersaturated design (SSD) is a factorial design in which the degrees of freedom for all its main effects exceed the total number of distinct factorial level-combinations (runs) of the design. Designs with quantitative factors, in which level permutation within one or more factors could result in different geometrical structures, are very different from traditional designs with nominal factors. In this paper, a new criterion is proposed for SSDs with quantitative factors. Comparison and analysis for this new criterion are made. It is shown that the proposed criterion has a high efficiency in discriminating geometrically nonisomorphic designs and an advantage in computation.

4.
Prediction intervals concern the results of a future sample from a previously sampled population. In this article, we develop procedures for prediction intervals that contain all of a fixed number of future observations for general balanced linear random models. Two methods based on the concept of a generalized pivotal quantity (GPQ) and one based on ANOVA estimators are presented. A simulation study using the balanced one-way random model is conducted to evaluate the proposed methods. It is shown that one of the two GPQ-based methods and the ANOVA-based method are computationally more efficient and also successfully maintain the simulated coverage probabilities close to the nominal confidence level. Hence, they are recommended for practical use. In addition, one example is given to illustrate the applicability of the recommended methods.

5.
Defining equations are introduced in the context of two-level factorial designs and they are shown to provide a concise specification of both regular and nonregular designs. The equations are used to find orthogonal arrays of high strength and some optimal designs. The latter optimal designs are formed in a new way by augmenting notional orthogonal arrays which are allowed to have some runs with a negative number of replicates before augmentation. Defining equations are also shown to be useful when the factorial design is blocked.
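As a minimal sketch of a defining equation for a regular design (the paper's notion also covers nonregular designs), the half fraction of the 2³ factorial defined by I = ABC keeps exactly the runs whose level product over the word is +1, assuming the usual ±1 coding; `regular_fraction` is an illustrative helper:

```python
from itertools import product
from math import prod

def regular_fraction(k, words):
    """Runs of the 2**k factorial (levels coded -1/+1) satisfying every
    defining equation: the product of the levels in each word is +1."""
    return [x for x in product((-1, 1), repeat=k)
            if all(prod(x[i] for i in w) == 1 for w in words)]
```

With the single word ABC (factor indices 0, 1, 2) this returns the four runs of the 2^(3-1) fraction.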

6.
We present the theoretical background and the numerical procedure for calculating optimum experimental designs for non-linear model discrimination in the presence of constraints. The design support points consist of two kinds of factors: a continuous function of time and discrete levels of other quantitative factors. That is, some of the experimental conditions are allowed to continually vary during the experimental run. We implement the theory in a chemical kinetic model discrimination problem.

7.
The paper introduces DT-optimum designs that provide a specified balance between model discrimination and parameter estimation. An equivalence theorem is presented for the case of two models and extended to an arbitrary number of models and of combinations of parameters. A numerical example shows the properties of the procedure. The relationship with other design procedures for parameter estimation and model discrimination is discussed.

8.
The problem of selecting the correct subset of predictors within a linear model has received much attention in recent literature. Within the Bayesian framework, a popular choice of prior has been Zellner's g-prior, which is based on the inverse of the empirical covariance matrix of the predictors. An extension of Zellner's prior is proposed in this article which allows for a power parameter on the empirical covariance of the predictors. The power parameter helps control the degree to which correlated predictors are smoothed towards or away from one another. In addition, the empirical covariance of the predictors is used to obtain suitable priors over model space. In this manner, the power parameter also helps to determine whether models containing highly collinear predictors are preferred or avoided. The proposed power parameter can be chosen via an empirical Bayes method which leads to a data adaptive choice of prior. Simulation studies and a real data example are presented to show how the power parameter is well determined from the degree of cross-correlation within predictors. The proposed modification compares favorably to the standard use of Zellner's prior and an intrinsic prior in these examples.
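The standard g-prior shrinkage that the proposed power parameter extends can be sketched for a single predictor: with a zero prior mean and known error variance, the posterior mean of the slope is the OLS estimate scaled by g/(1+g). The function names are illustrative, and the power-parameter modification of the predictor covariance is not shown:

```python
def ols_slope(x, y):
    """Ordinary least-squares slope for a single predictor."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    return sxy / sxx

def g_prior_posterior_slope(x, y, g):
    """Posterior mean of the slope under Zellner's g-prior with prior
    mean 0: the OLS estimate shrunk by the factor g / (1 + g)."""
    return g / (1.0 + g) * ols_slope(x, y)
```

Larger g shrinks less; the article's power parameter additionally reshapes the prior covariance across correlated predictors, which a one-predictor sketch cannot exhibit.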

9.
It is often necessary to conduct a pilot study to determine the sample size required for a clinical trial. Due to differences in sampling environments, the pilot data are usually discarded after sample size calculation. This paper tries to use the pilot information to modify the subsequent testing procedure when a two-sided t-test or a regression model is used to compare two treatments. The new test maintains the required significance level regardless of the dissimilarity between the pilot and the target populations, but increases the power when the two are similar. The test is constructed based on the posterior distribution of the parameters given the pilot study information, but its properties are investigated from a frequentist's viewpoint. Due to the small likelihood of an irrelevant pilot population, the new approach is a viable alternative to the current practice.

10.
Logistic functions are used in different applications, including biological growth studies and assay data analysis. Locally D-optimal designs for logistic models with three and four parameters are investigated. It is shown that these designs are minimally supported. Efficiencies are computed for equally spaced and uniform designs.
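A local D-criterion comparison can be sketched for the simpler two-parameter logistic model: evaluate the determinant of the 2×2 information matrix at guessed local parameter values and compare candidate designs. `d_criterion` is an illustrative name, and the three- and four-parameter cases treated in the paper need larger information matrices:

```python
from math import exp

def d_criterion(points, weights, a, b):
    """Determinant of the information matrix for the two-parameter
    logistic model p(x) = 1 / (1 + exp(-(a + b*x))) at local values
    (a, b): M = sum_i w_i * p_i * (1 - p_i) * (1, x_i)(1, x_i)'."""
    m11 = m12 = m22 = 0.0
    for x, w in zip(points, weights):
        p = 1.0 / (1.0 + exp(-(a + b * x)))
        v = w * p * (1.0 - p)
        m11 += v
        m12 += v * x
        m22 += v * x * x
    return m11 * m22 - m12 ** 2
```

A minimally supported design here has two points; any one-point design is singular (zero determinant), which is why support size at least matches the parameter count.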

11.
12.
13.
The KL-optimality criterion has been recently proposed to discriminate between any two statistical models. However, designs which are optimal for model discrimination may be inadequate for parameter estimation. In this paper, the DKL-optimality criterion is proposed which is useful for the dual problem of model discrimination and parameter estimation. An equivalence theorem and a stopping rule for the corresponding iterative algorithms are provided. A pharmacokinetics application and a bioassay example are given to show the good properties of a DKL-optimum design.

14.
15.
It is shown that the Simes inequality is reversed for a broad class of negatively dependent distributions.
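The Simes global test that the inequality concerns can be sketched in a few lines (`simes_reject` is an illustrative name): it rejects the intersection null when some sorted p-value falls under its stepped threshold, and the Simes inequality bounds its size by α under independence, the bound the paper shows is reversed under a class of negative dependence.

```python
def simes_reject(pvalues, alpha=0.05):
    """Simes global test: with p-values sorted ascending as p_(1) <= ...
    <= p_(n), reject the intersection null if p_(i) <= i * alpha / n
    for at least one i."""
    n = len(pvalues)
    return any(p <= (i + 1) * alpha / n
               for i, p in enumerate(sorted(pvalues)))
```

For example, (0.01, 0.04, 0.5) is rejected at α = 0.05 because 0.01 ≤ 0.05/3, while (0.04, 0.05, 0.9) is not.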

16.
When random variables do not take discrete values, observed data are often the rounded values of continuous random variables. Errors caused by rounding of data are often neglected by classical statistical theories. While some pioneers have identified and made suggestions to rectify the problem, few suitable approaches were proposed. In this paper, we propose an approximate MLE (AMLE) procedure to estimate the parameters and discuss the consistency and asymptotic normality of the estimates. For our illustration, we shall consider the estimates of the parameters in AR(p) and MA(q) models for rounded data.
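The rounding effect can be illustrated with the exact likelihood of integer-rounded i.i.d. normal data, the basic ingredient that rounding-aware estimation builds on (the AR(p)/MA(q) setting of the paper is more involved); `rounded_loglik` is an illustrative name:

```python
from math import log
from statistics import NormalDist

def rounded_loglik(data, mu, sigma):
    """Exact log-likelihood of integer-rounded observations from
    N(mu, sigma^2): a rounded value k has probability
    Phi((k + 0.5 - mu)/sigma) - Phi((k - 0.5 - mu)/sigma)."""
    d = NormalDist(mu, sigma)
    return sum(log(d.cdf(k + 0.5) - d.cdf(k - 0.5)) for k in data)
```

Maximizing this interval likelihood, rather than treating rounded values as exact, is what corrects the neglected rounding error; here the true mean scores higher than a distant one on data rounded from N(0, 1).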

17.
Isotones are a deterministic graphical device introduced by Mudholkar et al. [1991. A graphical procedure for comparing goodness-of-fit tests. J. Roy. Statist. Soc. B 53, 221–232], in the context of comparing some tests of normality. An isotone of a test is a contour of p-values of the test applied to "ideal samples", called profiles, from a two-shape-parameter family representing the null and the alternative distributions of the parameter space. The isotone is an adaptation of Tukey's sensitivity curves, a generalization of Prescott's stylized sensitivity contours, and an alternative to the isodynes of Stephens. The purpose of this paper is twofold. One is to show that the isotones can provide useful qualitative information regarding the behavior of the tests of distributional assumptions other than normality. The other is to show that the qualitative conclusions remain the same from one two-parameter family of alternatives to another. Towards this end we construct and interpret the isotones of some tests of the composite hypothesis of exponentiality, using the profiles of two Weibull extensions, the generalized Weibull and the exponentiated Weibull families, which allow IFR, DFR, as well as unimodal and bathtub failure rate alternatives. Thus, as a by-product of the study, it is seen that a test due to Csörgő et al. [1975. Application of characterizations in the area of goodness-of-fit. In: Patil, G.P., Kotz, S., Ord, J.K. (Eds.), Statistical Distributions in Scientific Work, vol. 2. Reidel, Boston, pp. 79–90], and Gnedenko's Q(r) test [1969. Mathematical Methods of Reliability Theory. Academic Press, New York], are appropriate for detecting monotone failure rate alternatives, whereas a bivariate F test due to Lin and Mudholkar [1980. A test of exponentiality based on the bivariate F distribution. Technometrics 22, 79–82] and their entropy test [1984. On two applications of characterization theorems to goodness-of-fit. Colloq. Math. Soc. János Bolyai 45, 395–414] can detect all alternatives, but are especially suitable for nonmonotone failure rate alternatives.

18.
It is shown how to condense the information contained in a series of studies, each consisting of an objects-by-variables matrix and a pair of weight matrices, into a structure vector and a sum of sums of squares of residuals. Based on this condensation we propose to carry out ANOVA-like inference for matched series of studies associated with the level combinations of some factors. It is shown how to validate the assumptions underlying the inference. An application to the results of local elections in Portugal is given.

19.
This work introduces specific tools based on phi-divergences to select and check generalized linear models with binary data. A backward selection criterion that helps to reduce the number of explanatory variables is considered. Diagnostic methods based on divergence measures are introduced, including a new measure to detect leverage points and two indicators to detect influential points. As an illustration, the diagnostics are applied to human psychology data.

20.
Many multiple testing procedures (MTPs) are available today, and their number is growing. Also available are many type I error rates: the family-wise error rate (FWER), the false discovery rate, the proportion of false positives, and others. Most MTPs are designed to control a specific type I error rate, and it is hard to compare different procedures. We approach the problem by studying the exact level at which threshold step-down (TSD) procedures (an important class of MTPs exemplified by the classic Holm procedure) control the generalized FWER defined as the probability of k or more false rejections. We find that level explicitly for any TSD procedure and any k. No assumptions are made about the dependency structure of the p-values of the individual tests. We derive from our formula a criterion for unimprovability of a procedure in the class of TSD procedures controlling the generalized FWER at a given level. In turn, this criterion implies that for each k the number of such unimprovable procedures is finite and is greater than one if k > 1. Consequently, in this case the most rejective procedure in the above class does not exist.
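The classic Holm procedure named in the abstract is the standard example of a threshold step-down procedure, and it controls the FWER, i.e. the generalized FWER with k = 1; a minimal sketch (`holm_rejections` is an illustrative name):

```python
def holm_rejections(pvalues, alpha=0.05):
    """Holm step-down procedure: sort p-values ascending and compare the
    i-th smallest (i = 1, ..., n) against alpha / (n - i + 1), stopping
    at the first failure.  Returns the number of rejected hypotheses."""
    n = len(pvalues)
    count = 0
    for i, p in enumerate(sorted(pvalues)):
        if p <= alpha / (n - i):  # n - i = n - (i+1) + 1 with 0-based i
            count += 1
        else:
            break
    return count
```

The stepped thresholds alpha/n, alpha/(n-1), ..., alpha are exactly the kind of threshold sequence the paper's exact-level formula and unimprovability criterion are stated for.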


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号