Similar Documents
20 similar documents found (search time: 958 ms)
1.
The KL-optimality criterion has recently been proposed to discriminate between any two statistical models. However, designs that are optimal for model discrimination may be inadequate for parameter estimation. In this paper, the DKL-optimality criterion is proposed, which addresses the dual problem of model discrimination and parameter estimation. An equivalence theorem and a stopping rule for the corresponding iterative algorithms are provided. A pharmacokinetics application and a bioassay example illustrate the good properties of a DKL-optimum design.

2.
The paper introduces DT-optimum designs that provide a specified balance between model discrimination and parameter estimation. An equivalence theorem is presented for the case of two models and extended to an arbitrary number of models and of combinations of parameters. A numerical example shows the properties of the procedure. The relationship with other design procedures for parameter estimation and model discrimination is discussed.

3.
4.
Confirmatory bioassay experiments take place in late stages of the drug discovery process when a small number of compounds have to be compared with respect to their properties. As the cost of the observations may differ considerably, the design problem is well specified by the cost of compound used rather than by the number of observations. We show that cost-efficient designs can be constructed using useful properties of the minimum support designs. These designs are particularly suited for studies where the parameters of the model to be estimated are known with high accuracy prior to the experiment, although they prove to be robust against typical inaccuracies of these values. When the parameters of the model can only be specified with ranges of values or by a probability distribution, we use a Bayesian criterion of optimality to construct the required designs. Typically, the number of their support points depends on the prior knowledge for the model parameters. In all cases we recommend identifying a set of designs with good statistical properties but different potential costs to choose from.

5.
A supersaturated design (SSD) is a factorial design in which the degrees of freedom for all its main effects exceed the total number of distinct factorial level-combinations (runs) of the design. Designs with quantitative factors, in which level permutation within one or more factors could result in different geometrical structures, are very different from designs with nominal ones which have been treated as traditional designs. In this paper, a new criterion is proposed for SSDs with quantitative factors. Comparison and analysis for this new criterion are made. It is shown that the proposed criterion has a high efficiency in discriminating geometrically nonisomorphic designs and an advantage in computation.

6.
Box and Behnken [1958. Some new three level second-order designs for surface fitting. Statistical Technical Research Group Technical Report No. 26. Princeton University, Princeton, NJ; 1960. Some new three level designs for the study of quantitative variables. Technometrics 2, 455–475.] introduced a class of 3-level second-order designs for fitting the second-order response surface model. These 17 Box–Behnken designs (BB designs) are available for 3–12 and 16 factors. Although BB designs were developed nearly 50 years ago, they and the central-composite designs of Box and Wilson [1951. On the experimental attainment of optimum conditions. J. Royal Statist. Soc., Ser. B 13, 1–45.] are still the most often recommended response surface designs. Of the 17 aforementioned BB designs, 10 were constructed from balanced incomplete block designs (BIBDs) and seven were constructed from partially BIBDs (PBIBDs). In this paper we show that these seven BB designs constructed from PBIBDs can be improved in terms of rotatability as well as average prediction variance, D- and G-efficiency. In addition, we also report new orthogonally blocked solutions for 5, 8, 9, 11 and 13 factors. Note that an 11-factor BB design is available but cannot be orthogonally blocked. All new designs can be found at http://www.math.montana.edu/jobo/bbd/.

7.
For a sequence of strictly stationary random fields that are uniformly ρ′-mixing and satisfy a Lindeberg condition, a central limit theorem is obtained for sequences of "rectangular" sums from the given random fields. The "Lindeberg CLT" is then used to prove a CLT for some kernel estimators of probability density for some strictly stationary random fields satisfying ρ′-mixing, and whose probability density and joint densities are absolutely continuous.

8.
Logistic functions are used in different applications, including biological growth studies and assay data analysis. Locally D-optimal designs for logistic models with three and four parameters are investigated. It is shown that these designs are minimally supported. Efficiencies are computed for equally spaced and uniform designs.
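As a hedged illustration of the D-criterion behind such designs (a sketch only: it uses the simpler two-parameter logistic model with assumed parameter values a = 0, b = 1, not the three- and four-parameter models the abstract studies), one can compare the determinant of the Fisher information matrix of a minimally supported two-point design against an equally spaced design:

```python
import numpy as np

def logistic_info(points, weights, a, b):
    """Fisher information matrix for the 2-parameter logistic model
    P(y=1|x) = 1 / (1 + exp(-(a + b*x)))."""
    M = np.zeros((2, 2))
    for x, w in zip(points, weights):
        p = 1.0 / (1.0 + np.exp(-(a + b * x)))
        f = np.array([1.0, x])
        M += w * p * (1 - p) * np.outer(f, f)
    return M

def d_criterion(points, weights, a=0.0, b=1.0):
    return np.linalg.det(logistic_info(points, weights, a, b))

# minimally supported 2-point design at the known optimal logits +-1.5434
opt = d_criterion([-1.5434, 1.5434], [0.5, 0.5])
# equally spaced 5-point design on [-3, 3] with uniform weights
uni = d_criterion(np.linspace(-3, 3, 5), [0.2] * 5)
d_eff = (uni / opt) ** (1 / 2)   # D-efficiency, exponent 1/p with p = 2
```

The exponent 1/2 normalizes the determinant ratio by the number of parameters, so `d_eff` is directly interpretable as the fraction of information the equally spaced design retains.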

9.
10.
Prediction intervals are concerned with the results of a future sample from a previously sampled population. In this article, we develop procedures for prediction intervals that contain all of a fixed number of future observations for general balanced linear random models. Two methods based on the concept of a generalized pivotal quantity (GPQ) and one based on ANOVA estimators are presented. A simulation study using the balanced one-way random model is conducted to evaluate the proposed methods. It is shown that one of the two GPQ-based methods and the ANOVA-based method are computationally more efficient, and they also successfully maintain the simulated coverage probabilities close to the nominal confidence level. Hence, they are recommended for practical use. In addition, an example is given to illustrate the applicability of the recommended methods.

11.
It is shown how to condense the information contained in a series of studies, each constituted by an objects by variables matrix and a pair of weight matrices, into a structure vector and a sum of sums of squares of residuals. Based on this condensation we propose to carry out ANOVA-like inference for matched series of studies associated with the level combinations of some factors. It is shown how to validate the assumptions underlying the inference. An application to the results of local elections in Portugal is given.

12.
When random variables do not take discrete values, observed data are often the rounded values of continuous random variables. Errors caused by rounding of data are often neglected by classical statistical theories. While some pioneers have identified the problem and made suggestions to rectify it, few suitable approaches have been proposed. In this paper, we propose an approximate MLE (AMLE) procedure to estimate the parameters and discuss the consistency and asymptotic normality of the estimates. For our illustration, we shall consider the estimates of the parameters in AR(p) and MA(q) models for rounded data.
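A minimal sketch of a rounded-data likelihood in the spirit of the AMLE idea, under the simplifying assumption of i.i.d. Gaussian data rounded to the nearest integer (the abstract treats the harder AR(p) and MA(q) cases); the simulated data and parameter values are illustrative:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(0)
x = rng.normal(loc=2.3, scale=1.0, size=2000)
r = np.round(x)          # observed data: values rounded to the nearest integer

def neg_rounded_loglik(theta):
    """Exact likelihood of the rounded observations:
    P(round(X) = r) = Phi((r+0.5-mu)/sigma) - Phi((r-0.5-mu)/sigma)."""
    mu, log_sigma = theta
    sigma = np.exp(log_sigma)        # parameterize sigma on the log scale
    p = norm.cdf((r + 0.5 - mu) / sigma) - norm.cdf((r - 0.5 - mu) / sigma)
    return -np.sum(np.log(np.clip(p, 1e-300, None)))

res = minimize(neg_rounded_loglik, x0=[r.mean(), np.log(r.std())],
               method="Nelder-Mead")
mu_hat, sigma_hat = res.x[0], np.exp(res.x[1])
```

Maximizing the rounded-interval likelihood rather than treating `r` as exact removes the variance inflation (roughly 1/12 per unit of rounding) that the naive sample standard deviation picks up.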

13.
Minimisation is a method often used in clinical trials to balance the treatment groups with respect to some prognostic factors. In the case of two treatments, the predictability of this method is calculated for different numbers of factors, different numbers of levels of each factor and for different proportions of the population at each level. It is shown that if we know nothing about the previous patients except the last treatment allocation, the next treatment can be correctly guessed more than 60% of the time if no biased coin is used. If the two previous assignments are known to have been the same, the next treatment can be guessed correctly around 80% of the time. Therefore, it is suggested that a biased coin should always be used with minimisation. Different choices of biased coin are investigated in terms of the reduction in predictability and the increase in imbalance that they produce. An alternative design to minimisation which makes use of optimum design theory is also investigated, by means of simulation, and does not appear to have any clear advantages over minimisation with a biased coin.
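The predictability being quantified can be sketched with a small simulation. The assumptions here are illustrative, not the paper's exact setup: two equally weighted binary prognostic factors, a "guess the opposite of the last allocation" strategy, and a biased coin that follows the minimising arm with probability 0.7:

```python
import random

def simulate(p_coin, n_patients=2000, n_factors=2, seed=1):
    """Run two-treatment minimisation; guess each allocation as the
    opposite of the previous one and return the correct-guess rate."""
    rng = random.Random(seed)
    # counts[factor][level][treatment]
    counts = [[[0, 0] for _ in range(2)] for _ in range(n_factors)]
    last, correct = None, 0
    for _ in range(n_patients):
        levels = [rng.randrange(2) for _ in range(n_factors)]
        # signed imbalance at this patient's factor levels
        score = sum(counts[f][levels[f]][0] - counts[f][levels[f]][1]
                    for f in range(n_factors))
        if score < 0:
            preferred = 0
        elif score > 0:
            preferred = 1
        else:
            preferred = rng.randrange(2)          # tie: randomize
        # biased coin: follow the minimising arm with probability p_coin
        t = preferred if rng.random() < p_coin else 1 - preferred
        guess = 1 - last if last is not None else rng.randrange(2)
        correct += (guess == t)
        for f in range(n_factors):
            counts[f][levels[f]][t] += 1
        last = t
    return correct / n_patients

det = simulate(p_coin=1.0)    # deterministic minimisation
bc  = simulate(p_coin=0.7)    # minimisation with a biased coin
```

Under these assumptions the deterministic rule is guessed correctly well above chance, and the biased coin trades a little imbalance for a visible drop in predictability, matching the abstract's recommendation.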

14.
It is often necessary to conduct a pilot study to determine the sample size required for a clinical trial. Due to differences in sampling environments, the pilot data are usually discarded after sample size calculation. This paper tries to use the pilot information to modify the subsequent testing procedure when a two-sided t-test or a regression model is used to compare two treatments. The new test maintains the required significance level regardless of the dissimilarity between the pilot and the target populations, but increases the power when the two are similar. The test is constructed based on the posterior distribution of the parameters given the pilot study information, but its properties are investigated from a frequentist's viewpoint. Due to the small likelihood of an irrelevant pilot population, the new approach is a viable alternative to the current practice.

15.
E-optimal designs for comparing three treatments in blocks of size three are identified, where intrablock observations are correlated according to a first-order autoregressive error process with parameter ρ ∈ (0, 1). For numbers of blocks b of the form b = 3n + 1, there are two distinct optimal designs depending on the value of ρ, with the best design being unequally replicated for large ρ. For other values of b, binary, equireplicate designs with specified within-block assignment patterns are best. In many cases, the stronger majorization optimality is established.
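A sketch of the computation behind such comparisons. The block size, number of treatments, and AR(1) within-block correlation follow the abstract, but the candidate designs, the GLS information-matrix construction, and the value ρ = 0.8 are illustrative assumptions:

```python
import numpy as np

def c_matrix(blocks, rho, v=3):
    """GLS information matrix for treatment effects with blocks of size 3,
    AR(1) within-block correlation rho, and block effects eliminated."""
    k = 3
    V = rho ** np.abs(np.subtract.outer(np.arange(k), np.arange(k)))
    Vinv = np.linalg.inv(V)
    one = np.ones((k, 1))
    # project out the block mean under generalized least squares
    Q = Vinv - Vinv @ one @ one.T @ Vinv / (one.T @ Vinv @ one)
    C = np.zeros((v, v))
    for seq in blocks:
        T = np.zeros((k, v))         # position-by-treatment indicator
        for pos, t in enumerate(seq):
            T[pos, t] = 1.0
        C += T.T @ Q @ T
    return C

def e_value(blocks, rho):
    """E-criterion: smallest eigenvalue on the contrast space
    (the smallest eigenvalue of C is 0, on the all-ones vector)."""
    return np.sort(np.linalg.eigvalsh(c_matrix(blocks, rho)))[1]

# b = 4 blocks: identical within-block ordering vs. varied orderings
same   = [(0, 1, 2)] * 4
varied = [(0, 1, 2), (1, 2, 0), (2, 0, 1), (0, 2, 1)]
```

At ρ = 0 both candidates reduce to a randomized complete block design and give identical E-values; once ρ is large the within-block assignment pattern changes the information matrix, which is exactly the effect the paper's optimal designs exploit.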

16.
The problem of selecting the correct subset of predictors within a linear model has received much attention in recent literature. Within the Bayesian framework, a popular choice of prior has been Zellner's g-prior, which is based on the inverse of the empirical covariance matrix of the predictors. An extension of Zellner's prior is proposed in this article which allows for a power parameter on the empirical covariance of the predictors. The power parameter helps control the degree to which correlated predictors are smoothed towards or away from one another. In addition, the empirical covariance of the predictors is used to obtain suitable priors over model space. In this manner, the power parameter also helps to determine whether models containing highly collinear predictors are preferred or avoided. The proposed power parameter can be chosen via an empirical Bayes method which leads to a data-adaptive choice of prior. Simulation studies and a real data example are presented to show how the power parameter is well determined from the degree of cross-correlation within predictors. The proposed modification compares favorably to the standard use of Zellner's prior and an intrinsic prior in these examples.
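For context, the closed-form marginal likelihood under the standard Zellner g-prior (the baseline that the article extends; the power-parameter modification itself is not reproduced here) can be sketched as follows, with the simulated data and the unit-information choice g = n as illustrative assumptions:

```python
import numpy as np

def log_marginal_g(y, X, g):
    """Log marginal likelihood, up to a model-independent constant, of a
    Gaussian linear model with intercept under Zellner's g-prior:
    (n-p-1)/2 * log(1+g) - (n-1)/2 * log(1 + g*(1 - R^2))."""
    n, p = X.shape
    yc = y - y.mean()                 # center out the intercept
    Xc = X - X.mean(axis=0)
    beta, *_ = np.linalg.lstsq(Xc, yc, rcond=None)
    r2 = 1.0 - np.sum((yc - Xc @ beta) ** 2) / np.sum(yc ** 2)
    return (0.5 * (n - p - 1) * np.log1p(g)
            - 0.5 * (n - 1) * np.log1p(g * (1 - r2)))

rng = np.random.default_rng(2)
n = 100
x1, x2 = rng.normal(size=n), rng.normal(size=n)
y = 2.0 * x1 + rng.normal(size=n)    # x1 is the true predictor
g = float(n)                         # unit-information g
lm_true = log_marginal_g(y, x1[:, None], g)   # model using x1
lm_null = log_marginal_g(y, x2[:, None], g)   # model using the noise x2
```

Differencing the two log marginals gives the log Bayes factor between candidate subsets, which is the quantity the empirical-covariance-based priors over model space then reweight.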

17.
We consider batch queueing systems M/MH/1 and MH/M/1 with catastrophes. The transient probability functions of these queueing systems are obtained by a lattice path combinatorics approach that utilizes randomization and dual processes. Steady-state distributions are also determined. Generalization to systems having batches of different sizes is discussed.

18.
This article develops test statistics for the homogeneity of the means of several treatment groups of count data in the presence of over-dispersion or under-dispersion when no likelihood is available. The C(α) or score-type tests based on models specified by only the first two moments of the counts are obtained using quasi-likelihood, extended quasi-likelihood, and double extended quasi-likelihood. Monte Carlo simulations are then used to study the behavior of these C(α) statistics relative to the C(α) statistic based on a parametric model, namely the negative binomial model, in terms of size, power, and robustness to departures from the data distribution as well as from dispersion homogeneity. These simulations demonstrate that the C(α) statistic based on the double extended quasi-likelihood holds the nominal size at the 5% level well in all data situations, shows an edge in power over the other statistics, and, in particular, performs much better than the commonly used statistic based on the quasi-likelihood. This C(α) statistic also shows robustness to moderate heterogeneity due to dispersion. Finally, applications to ecological, toxicological and biological data are given.

19.
It is shown that the Simes inequality is reversed for a broad class of negatively dependent distributions.
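The Simes test in question, with a quick Monte Carlo check that under independence it rejects at exactly level α (the equality case; the paper's contribution is that the inequality reverses under negative dependence, which this sketch does not attempt to reproduce):

```python
import numpy as np

def simes_reject(P, alpha):
    """Simes global test applied row-wise: reject when the i-th order
    statistic satisfies p_(i) <= i * alpha / n for some i."""
    n = P.shape[1]
    thresholds = np.arange(1, n + 1) * alpha / n
    return (np.sort(P, axis=1) <= thresholds).any(axis=1)

rng = np.random.default_rng(0)
alpha, n, sims = 0.05, 5, 200_000
# independent uniform p-values: rejection probability equals alpha exactly
rate = simes_reject(rng.uniform(size=(sims, n)), alpha).mean()
```

Replacing the independent rows with negatively dependent p-values is where, per the abstract, the rejection probability climbs above α.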

20.
Many multiple testing procedures (MTPs) are available today, and their number is growing. Also available are many type I error rates: the family-wise error rate (FWER), the false discovery rate, the proportion of false positives, and others. Most MTPs are designed to control a specific type I error rate, and it is hard to compare different procedures. We approach the problem by studying the exact level at which threshold step-down (TSD) procedures (an important class of MTPs exemplified by the classic Holm procedure) control the generalized FWER, defined as the probability of k or more false rejections. We find that level explicitly for any TSD procedure and any k. No assumptions are made about the dependency structure of the p-values of the individual tests. We derive from our formula a criterion for unimprovability of a procedure in the class of TSD procedures controlling the generalized FWER at a given level. In turn, this criterion implies that for each k the number of such unimprovable procedures is finite and is greater than one if k > 1. Consequently, in this case the most rejective procedure in the above class does not exist.
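The classic Holm procedure cited as the exemplar of the TSD class, in sketch form, with a Monte Carlo check that its FWER (the k = 1 case of the generalized FWER) stays at or below α under a global null of independent uniform p-values. The independence here is only a simulation convenience; the paper's results require no such assumption:

```python
import numpy as np

def holm_rejections(pvals, alpha):
    """Holm step-down: sort the p-values and keep rejecting while
    p_(j) <= alpha / (n - j + 1); stop at the first failure."""
    n = len(pvals)
    order = np.argsort(pvals)
    rejected = np.zeros(n, dtype=bool)
    for step, idx in enumerate(order):
        if pvals[idx] <= alpha / (n - step):
            rejected[idx] = True
        else:
            break
    return rejected

rng = np.random.default_rng(1)
alpha, n, sims = 0.05, 10, 50_000
# probability of 1 or more false rejections under the global null
fwer = np.mean([holm_rejections(rng.uniform(size=n), alpha).any()
                for _ in range(sims)])
```

Counting simulations with `rejected.sum() >= k` instead of `.any()` gives the empirical generalized FWER for larger k, the quantity whose exact level the paper characterizes for arbitrary TSD thresholds.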
