Similar Documents
20 similar documents found.
1.
E-optimal designs for comparing three treatments in blocks of size three are identified, where intrablock observations are correlated according to a first-order autoregressive error process with parameter ρ ∈ (0,1). For numbers of blocks b of the form b = 3n + 1, there are two distinct optimal designs depending on the value of ρ, with the best design being unequally replicated for large ρ. For other values of b, binary, equireplicate designs with specified within-block assignment patterns are best. In many cases, the stronger majorization optimality is established.
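The E-criterion used in entry 1 can be evaluated numerically. The following is a minimal sketch, not taken from the paper: it computes the smallest nonzero eigenvalue of the treatment information matrix for a block design whose intrablock errors follow an AR(1) correlation structure. The example design, block size, and ρ value are illustrative assumptions.

```python
import numpy as np

def ar1_cov(k, rho):
    """AR(1) correlation matrix for a block of size k."""
    idx = np.arange(k)
    return rho ** np.abs(idx[:, None] - idx[None, :])

def e_value(blocks, v=3, rho=0.5):
    """Smallest nonzero eigenvalue of the treatment information matrix C
    for a block design with AR(1) intrablock errors.
    blocks: list of tuples of treatment labels 0..v-1, one tuple per block."""
    k = len(blocks[0])
    Vinv = np.linalg.inv(ar1_cov(k, rho))
    one = np.ones((k, 1))
    # projector removing the block effect under the V^{-1} inner product
    P = Vinv - Vinv @ one @ np.linalg.inv(one.T @ Vinv @ one) @ one.T @ Vinv
    C = np.zeros((v, v))
    for blk in blocks:
        X = np.zeros((k, v))
        X[np.arange(k), blk] = 1.0  # treatment indicator matrix for this block
        C += X.T @ P @ X
    eig = np.sort(np.linalg.eigvalsh(C))
    return eig[1]  # eig[0] is ~0 because the rows of C sum to zero

# binary, equireplicate design: 3 treatments in 3 blocks of size 3
design = [(0, 1, 2), (1, 2, 0), (2, 0, 1)]
print(e_value(design, v=3, rho=0.5))
```

Comparing `e_value` across candidate designs for a grid of ρ values is one way to see the crossover between optimal designs that the abstract describes.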

2.
Confirmatory bioassay experiments take place in late stages of the drug discovery process when a small number of compounds have to be compared with respect to their properties. As the cost of the observations may differ considerably, the design problem is well specified by the cost of compound used rather than by the number of observations. We show that cost-efficient designs can be constructed using useful properties of the minimum support designs. These designs are particularly suited for studies where the parameters of the model to be estimated are known with high accuracy prior to the experiment, although they prove to be robust against typical inaccuracies of these values. When the parameters of the model can only be specified with ranges of values or by a probability distribution, we use a Bayesian criterion of optimality to construct the required designs. Typically, the number of their support points depends on the prior knowledge for the model parameters. In all cases we recommend identifying a set of designs with good statistical properties but different potential costs to choose from.

3.
Defining equations are introduced in the context of two-level factorial designs and they are shown to provide a concise specification of both regular and nonregular designs. The equations are used to find orthogonal arrays of high strength and some optimal designs. The latter optimal designs are formed in a new way by augmenting notional orthogonal arrays which are allowed to have some runs with a negative number of replicates before augmentation. Defining equations are also shown to be useful when the factorial design is blocked.

4.
Logistic functions are used in different applications, including biological growth studies and assay data analysis. Locally D-optimal designs for logistic models with three and four parameters are investigated. It is shown that these designs are minimally supported. Efficiencies are computed for equally spaced and uniform designs.
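The D-criterion behind entry 4 can be illustrated for the simpler two-parameter logistic model. The sketch below is not from the paper: it evaluates the determinant of the Fisher information matrix for a weighted design, and the two-point support at ±1.5434 (in the canonical case α = 0, β = 1) is the classically reported locally D-optimal design, quoted here as an assumption.

```python
import numpy as np

def logistic_info(xs, ws, alpha, beta):
    """Fisher information matrix of a design {(x_i, w_i)} under the
    two-parameter logistic model p(x) = 1 / (1 + exp(-(alpha + beta*x)))."""
    M = np.zeros((2, 2))
    for x, w in zip(xs, ws):
        p = 1.0 / (1.0 + np.exp(-(alpha + beta * x)))
        f = np.array([1.0, x])          # regression vector (intercept, slope)
        M += w * p * (1.0 - p) * np.outer(f, f)
    return M

def d_criterion(xs, ws, alpha=0.0, beta=1.0):
    """D-criterion: determinant of the information matrix (larger is better)."""
    return np.linalg.det(logistic_info(xs, ws, alpha, beta))

# a minimally supported (two-point, equal-weight) design
print(d_criterion([-1.5434, 1.5434], [0.5, 0.5]))
```

Sweeping the support points over a grid shows the determinant peaking near ±1.5434, which is the sense in which such designs are "locally" optimal: the optimum depends on the assumed parameter values.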

5.
Box and Behnken [1958. Some new three level second-order designs for surface fitting. Statistical Technical Research Group Technical Report No. 26. Princeton University, Princeton, NJ; 1960. Some new three level designs for the study of quantitative variables. Technometrics 2, 455–475] introduced a class of 3-level second-order designs for fitting the second-order response surface model. These 17 Box–Behnken designs (BB designs) are available for 3–12 and 16 factors. Although BB designs were developed nearly 50 years ago, they and the central-composite designs of Box and Wilson [1951. On the experimental attainment of optimum conditions. J. Royal Statist. Soc., Ser. B 13, 1–45] are still the most often recommended response surface designs. Of the 17 aforementioned BB designs, 10 were constructed from balanced incomplete block designs (BIBDs) and seven were constructed from partially BIBDs (PBIBDs). In this paper we show that these seven BB designs constructed from PBIBDs can be improved in terms of rotatability as well as average prediction variance, and D- and G-efficiency. In addition, we also report new orthogonally blocked solutions for 5, 8, 9, 11 and 13 factors. Note that an 11-factor BB design is available but cannot be orthogonally blocked. All new designs can be found at http://www.math.montana.edu/jobo/bbd/.

6.
Prediction intervals concern the results of a future sample from a previously sampled population. In this article, we develop procedures for prediction intervals that contain all of a fixed number of future observations for general balanced linear random models. Two methods based on the concept of a generalized pivotal quantity (GPQ) and one based on ANOVA estimators are presented. A simulation study using the balanced one-way random model is conducted to evaluate the proposed methods. One of the two GPQ-based methods and the ANOVA-based method are computationally more efficient, and they maintain simulated coverage probabilities close to the nominal confidence level; hence, they are recommended for practical use. In addition, an example is given to illustrate the applicability of the recommended methods.

7.
It is shown that the Simes inequality is reversed for a broad class of negatively dependent distributions.
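The behavior described in entry 7 can be probed by Monte Carlo simulation. The sketch below is not the paper's analysis: it estimates the size of the Simes global test when one-sided p-values come from equicorrelated normal statistics, where a negative ρ is one simple way to induce negative dependence (under independence the size equals α exactly; the reversal result says negative dependence pushes it to α or above).

```python
import math
import numpy as np

def simes_reject(pvals, alpha=0.05):
    """Simes global test: reject H0 if p_(i) <= i*alpha/n for some i."""
    p = np.sort(pvals)
    n = len(p)
    return bool(np.any(p <= np.arange(1, n + 1) * alpha / n))

def simes_size(rho, n=2, alpha=0.05, reps=100_000, seed=0):
    """Monte Carlo size of the Simes test with one-sided p-values from
    equicorrelated N(0,1) statistics; rho < 0 gives negative dependence."""
    rng = np.random.default_rng(seed)
    cov = np.full((n, n), rho) + (1.0 - rho) * np.eye(n)
    z = rng.multivariate_normal(np.zeros(n), cov, size=reps)
    # one-sided upper-tail p-value: P(Z >= z) = 0.5 * erfc(z / sqrt(2))
    p = np.vectorize(lambda t: 0.5 * math.erfc(t / math.sqrt(2.0)))(z)
    return float(np.mean([simes_reject(row, alpha) for row in p]))

print(simes_size(-0.5))
```

For small n the exceedance over α is numerically tiny, so a large number of replications is needed before it separates from Monte Carlo noise.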

8.
9.
10.
It is shown how to condense the information contained in a series of studies, each constituted by an objects-by-variables matrix and a pair of weight matrices, into a structure vector and a sum of sums of squares of residuals. Based on this condensation we propose to carry out ANOVA-like inference for matched series of studies associated with the level combinations of some factors. It is shown how to validate the assumptions underlying the inference. An application to the results of local elections in Portugal is given.

11.
We present the theoretical background and the numerical procedure for calculating optimum experimental designs for non-linear model discrimination in the presence of constraints. The design support points consist of two kinds of factors: a continuous function of time and discrete levels of other quantitative factors. That is, some of the experimental conditions are allowed to continually vary during the experimental run. We implement the theory in a chemical kinetic model discrimination problem.

12.
It is often necessary to conduct a pilot study to determine the sample size required for a clinical trial. Due to differences in sampling environments, the pilot data are usually discarded after sample size calculation. This paper tries to use the pilot information to modify the subsequent testing procedure when a two-sided t-test or a regression model is used to compare two treatments. The new test maintains the required significance level regardless of the dissimilarity between the pilot and the target populations, but increases the power when the two are similar. The test is constructed based on the posterior distribution of the parameters given the pilot study information, but its properties are investigated from a frequentist's viewpoint. Due to the small likelihood of an irrelevant pilot population, the new approach is a viable alternative to the current practice.

13.
Minimisation is a method often used in clinical trials to balance the treatment groups with respect to some prognostic factors. In the case of two treatments, the predictability of this method is calculated for different numbers of factors, different numbers of levels of each factor and for different proportions of the population at each level. It is shown that if we know nothing about the previous patients except the last treatment allocation, the next treatment can be correctly guessed more than 60% of the time if no biased coin is used. If the two previous assignments are known to have been the same, the next treatment can be guessed correctly around 80% of the time. Therefore, it is suggested that a biased coin should always be used with minimisation. Different choices of biased coin are investigated in terms of the reduction in predictability and the increase in imbalance that they produce. An alternative design to minimisation which makes use of optimum design theory is also investigated, by means of simulation, and does not appear to have any clear advantages over minimisation with a biased coin.
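Entry 13 concerns minimisation with a biased coin. A minimal sketch of the standard Pocock–Simon marginal-balance rule for two treatments is given below; it is an illustration, not the paper's implementation, and the factor names are made up. The biased-coin probability `p_coin` is exactly the knob whose choice the abstract investigates: `p_coin = 1` is deterministic minimisation (highly predictable), smaller values trade imbalance for unpredictability.

```python
import random

def minimise(history, new_patient, factors, p_coin=0.75, rng=random):
    """Pocock-Simon minimisation for two treatments (labelled 0 and 1).
    history: list of (patient_levels_dict, treatment) for enrolled patients.
    new_patient: dict mapping factor name -> level for the new patient.
    Assigns, with probability p_coin, the treatment that minimises the
    total marginal imbalance over the prognostic factors."""
    imbalance = [0, 0]
    for arm in (0, 1):
        for f in factors:
            n = [0, 0]  # treatment counts at this patient's level of factor f
            for levels, trt in history:
                if levels[f] == new_patient[f]:
                    n[trt] += 1
            n[arm] += 1  # hypothetical assignment of the new patient to `arm`
            imbalance[arm] += abs(n[0] - n[1])
    if imbalance[0] == imbalance[1]:
        return rng.randint(0, 1)                 # tie: pure randomisation
    preferred = 0 if imbalance[0] < imbalance[1] else 1
    return preferred if rng.random() < p_coin else 1 - preferred

history = [({'sex': 'M', 'age': 'old'}, 0), ({'sex': 'F', 'age': 'old'}, 1)]
print(minimise(history, {'sex': 'M', 'age': 'old'}, ['sex', 'age']))
```

The predictability figures in the abstract (60%, 80%) come from an adversary guessing the `preferred` arm; simulating many allocations with this function and counting correct guesses reproduces that kind of calculation.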

14.
Non-parametric procedures are sometimes used even in cases where the corresponding parametric procedure is preferable. This is mainly because, in practical applications of statistical methods, too much attention is paid to any violation of the normality assumption; the normal distribution is, however, primarily assumed in order to derive easily the exact distribution of the statistic used within parametric approaches.

15.
Isotones are a deterministic graphical device introduced by Mudholkar et al. [1991. A graphical procedure for comparing goodness-of-fit tests. J. Roy. Statist. Soc. B 53, 221–232] in the context of comparing some tests of normality. An isotone of a test is a contour of p-values of the test applied to "ideal samples", called profiles, from a two-shape-parameter family representing the null and the alternative distributions of the parameter space. The isotone is an adaptation of Tukey's sensitivity curves, a generalization of Prescott's stylized sensitivity contours, and an alternative to the isodynes of Stephens. The purpose of this paper is twofold. One is to show that isotones can provide useful qualitative information regarding the behavior of tests of distributional assumptions other than normality. The other is to show that the qualitative conclusions remain the same from one two-parameter family of alternatives to another. Towards this end we construct and interpret the isotones of some tests of the composite hypothesis of exponentiality, using the profiles of two Weibull extensions, the generalized Weibull and the exponentiated Weibull families, which allow IFR, DFR, as well as unimodal and bathtub failure rate alternatives. Thus, as a by-product of the study, it is seen that a test due to Csörgő et al. [1975. Application of characterizations in the area of goodness-of-fit. In: Patil, G.P., Kotz, S., Ord, J.K. (Eds.), Statistical Distributions in Scientific Work, vol. 2. Reidel, Boston, pp. 79–90] and Gnedenko's Q(r) test [1969. Mathematical Methods of Reliability Theory. Academic Press, New York] are appropriate for detecting monotone failure rate alternatives, whereas a bivariate F test due to Lin and Mudholkar [1980. A test of exponentiality based on the bivariate F distribution. Technometrics 22, 79–82] and their entropy test [1984. On two applications of characterization theorems to goodness-of-fit. Colloq. Math. Soc. Janos Bolyai 45, 395–414] can detect all alternatives, but are especially suitable for nonmonotone failure rate alternatives.

16.
17.
18.
This work introduces specific tools based on phi-divergences to select and check generalized linear models with binary data. A backward selection criterion that helps to reduce the number of explanatory variables is considered. Diagnostic methods based on divergence measures such as a new measure to detect leverage points and two indicators to detect influential points are introduced. As an illustration, the diagnostics are applied to human psychology data.
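As background for entry 18, a phi-divergence compares observed proportions with model-fitted probabilities through a convex function φ with φ(1) = 0. The sketch below is a generic illustration, not the paper's diagnostics: it uses the Kullback–Leibler member of the family on grouped binary data, applied to both the success and failure cells.

```python
import math

def kl_phi(t):
    """Kullback-Leibler generator: phi(t) = t*log(t) - t + 1, phi(0) = 1."""
    return t * math.log(t) - t + 1.0 if t > 0 else 1.0

def phi_divergence(obs, fit, phi=kl_phi):
    """Phi-divergence between observed success proportions `obs` and
    model-fitted probabilities `fit` for grouped binary data:
    sum_i [ fit_i * phi(obs_i/fit_i) + (1-fit_i) * phi((1-obs_i)/(1-fit_i)) ]."""
    d = 0.0
    for o, f in zip(obs, fit):
        d += f * phi(o / f) + (1.0 - f) * phi((1.0 - o) / (1.0 - f))
    return d

# zero when the model fits exactly, positive otherwise
print(phi_divergence([0.5, 0.8], [0.5, 0.8]))
print(phi_divergence([0.8, 0.2], [0.5, 0.5]))
```

Different choices of φ (power divergences, chi-square, etc.) yield the various selection and diagnostic measures of the kind the abstract describes.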

19.
20.
For a sequence of strictly stationary random fields that are uniformly ρ′-mixing and satisfy a Lindeberg condition, a central limit theorem is obtained for sequences of "rectangular" sums from the given random fields. The "Lindeberg CLT" is then used to prove a CLT for some kernel estimators of probability density for some strictly stationary random fields satisfying ρ′-mixing, and whose probability density and joint densities are absolutely continuous.
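The kernel density estimators in entry 20 are, at their core, the usual smoothed empirical density; the paper's random-field setting is far more general. A minimal one-dimensional Gaussian-kernel sketch, purely illustrative:

```python
import numpy as np

def kde(x_grid, data, h):
    """Gaussian kernel density estimate
    f_hat(x) = (1/(n*h)) * sum_i K((x - X_i) / h),  K = standard normal pdf."""
    u = (x_grid[:, None] - data[None, :]) / h
    K = np.exp(-0.5 * u**2) / np.sqrt(2.0 * np.pi)
    return K.mean(axis=1) / h

data = np.random.default_rng(1).normal(size=500)
grid = np.linspace(-5.0, 5.0, 401)
fhat = kde(grid, data, h=0.4)
print(fhat.max())
```

The CLT results of the paper concern the asymptotic normality of such estimators when the observations are dependent (ρ′-mixing) rather than i.i.d., with the bandwidth h shrinking as the sample grows.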
