1.
E-optimal designs for comparing three treatments in blocks of size three are identified, where intrablock observations are correlated according to a first-order autoregressive error process with parameter ρ∈(0,1). For numbers of blocks b of the form b = 3n + 1, there are two distinct optimal designs depending on the value of ρ, with the best design being unequally replicated for large ρ. For other values of b, binary, equireplicate designs with specified within-block assignment patterns are best. In many cases, the stronger majorization optimality is established.
2.
Prediction intervals concern the results of a future sample from a previously sampled population. In this article, we develop procedures for prediction intervals that contain all of a fixed number of future observations for general balanced linear random models. Two methods based on the concept of a generalized pivotal quantity (GPQ) and one based on ANOVA estimators are presented. A simulation study using the balanced one-way random model is conducted to evaluate the proposed methods. It is shown that one of the two GPQ-based methods and the ANOVA-based method are computationally more efficient and also keep the simulated coverage probabilities close to the nominal confidence level. Hence, they are recommended for practical use. In addition, one example is given to illustrate the applicability of the recommended methods.
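The ANOVA-based method above rests on method-of-moments variance component estimates for the balanced one-way random model y_ij = μ + a_i + e_ij. A minimal sketch of those estimates (the function name is illustrative, and the prediction-interval construction itself is not shown):

```python
import numpy as np

def anova_variance_components(y):
    """Method-of-moments (ANOVA) variance component estimates for the
    balanced one-way random model y_ij = mu + a_i + e_ij.

    y is an (a, n) array: a groups, n observations per group.
    Returns (sigma2_a_hat, sigma2_e_hat).
    """
    a, n = y.shape
    group_means = y.mean(axis=1)
    grand_mean = y.mean()
    # Between-group and within-group mean squares
    msb = n * np.sum((group_means - grand_mean) ** 2) / (a - 1)
    mse = np.sum((y - group_means[:, None]) ** 2) / (a * (n - 1))
    # Equating mean squares to their expectations: E[msb] = n*s2_a + s2_e
    return (msb - mse) / n, mse
```

With zero within-group spread the whole between-group variation is attributed to the random effect, which gives a quick sanity check of the moment equations.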
3.
It is often necessary to conduct a pilot study to determine the sample size required for a clinical trial. Due to differences in sampling environments, the pilot data are usually discarded after sample size calculation. This paper tries to use the pilot information to modify the subsequent testing procedure when a two-sided t-test or a regression model is used to compare two treatments. The new test maintains the required significance level regardless of the dissimilarity between the pilot and the target populations, but increases the power when the two are similar. The test is constructed based on the posterior distribution of the parameters given the pilot study information, but its properties are investigated from a frequentist's viewpoint. Due to the small likelihood of an irrelevant pilot population, the new approach is a viable alternative to the current practice.
4.
The problem of selecting the correct subset of predictors within a linear model has received much attention in recent literature. Within the Bayesian framework, a popular choice of prior has been Zellner's g-prior, which is based on the inverse of the empirical covariance matrix of the predictors. An extension of Zellner's prior is proposed in this article which allows for a power parameter on the empirical covariance of the predictors. The power parameter helps control the degree to which correlated predictors are smoothed towards or away from one another. In addition, the empirical covariance of the predictors is used to obtain suitable priors over model space. In this manner, the power parameter also helps to determine whether models containing highly collinear predictors are preferred or avoided. The proposed power parameter can be chosen via an empirical Bayes method, which leads to a data-adaptive choice of prior. Simulation studies and a real data example are presented to show how the power parameter is well determined from the degree of cross-correlation within predictors. The proposed modification compares favorably to the standard use of Zellner's prior and an intrinsic prior in these examples.
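The standard Zellner g-prior places covariance proportional to (X'X)^{-1} on the regression coefficients. A minimal sketch of a powered variant, assuming the power enters as an exponent on the eigenvalues of X'X (the parameter name `lam` and this exact parameterization are illustrative, not necessarily the paper's definition):

```python
import numpy as np

def powered_gprior_cov(X, g=1.0, lam=1.0):
    """Prior covariance g * (X'X)^(-lam) via eigendecomposition.

    lam = 1 recovers the usual Zellner g-prior covariance g * (X'X)^(-1);
    other values of lam re-weight the smoothing across correlated
    predictors. `lam` is an illustrative name for the power parameter.
    """
    S = X.T @ X                       # empirical (unscaled) covariance
    vals, vecs = np.linalg.eigh(S)    # S is symmetric positive definite
    return g * vecs @ np.diag(vals ** (-lam)) @ vecs.T
```

When the columns of X are orthonormal, X'X is the identity and every choice of `lam` collapses to the ordinary g-prior covariance g·I, which makes the role of the power parameter easy to check.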
5.
6.
The asymptotic normality of the Nadaraya–Watson regression estimator is studied for α-mixing random fields. The infill-increasing setting is considered, that is when the locations of observations become dense in an increasing sequence of domains. This setting fills the gap between continuous and discrete models. In the infill-increasing case the asymptotic normality of the Nadaraya–Watson estimator holds, but with an unusual asymptotic covariance structure. It turns out that this covariance structure is a combination of the covariance structures that we observe in the discrete and in the continuous case.
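The Nadaraya–Watson estimator studied above is a kernel-weighted local average of the responses. A minimal sketch with a Gaussian kernel (the bandwidth and test function are illustrative, and the sketch uses i.i.d. noise rather than the random-field dependence the paper analyzes):

```python
import numpy as np

def nadaraya_watson(x0, x, y, h):
    """Nadaraya-Watson kernel regression estimate at the point x0.

    x, y are 1-D arrays of design points and responses; h is the
    bandwidth of the Gaussian kernel.
    """
    w = np.exp(-0.5 * ((x0 - x) / h) ** 2)  # kernel weights
    return np.sum(w * y) / np.sum(w)

# Noisy observations of y = x^2 on [0, 1]
rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 200)
y = x ** 2 + rng.normal(scale=0.05, size=x.size)
est = nadaraya_watson(0.5, x, y, h=0.05)  # should be close to 0.25
```

The estimate is a ratio of kernel sums, so for constant responses it reproduces the constant exactly regardless of bandwidth.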
7.
This work introduces specific tools based on phi-divergences to select and check generalized linear models with binary data. A backward selection criterion that helps to reduce the number of explanatory variables is considered. Diagnostic methods based on divergence measures such as a new measure to detect leverage points and two indicators to detect influential points are introduced. As an illustration, the diagnostics are applied to human psychology data.
8.
Alexander N. Donev Randy Tobias Farinaz Monadjemi 《Journal of statistical planning and inference》2008
Confirmatory bioassay experiments take place in late stages of the drug discovery process when a small number of compounds have to be compared with respect to their properties. As the cost of the observations may differ considerably, the design problem is well specified by the cost of compound used rather than by the number of observations. We show that cost-efficient designs can be constructed using useful properties of the minimum support designs. These designs are particularly suited for studies where the parameters of the model to be estimated are known with high accuracy prior to the experiment, although they prove to be robust against typical inaccuracies of these values. When the parameters of the model can only be specified with ranges of values or by a probability distribution, we use a Bayesian criterion of optimality to construct the required designs. Typically, the number of their support points depends on the prior knowledge for the model parameters. In all cases we recommend identifying a set of designs with good statistical properties but different potential costs to choose from.
9.
10.
It is shown that the Simes inequality is reversed for a broad class of negatively dependent distributions.
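For context, the Simes procedure rejects the global null hypothesis when the ordered p-values satisfy p_(i) ≤ iα/n for some i; the Simes inequality bounds the rejection probability under the global null. A minimal sketch of the test itself (the function name and default α are illustrative):

```python
import numpy as np

def simes_test(pvals, alpha=0.05):
    """Simes global test: reject the intersection null hypothesis if the
    i-th smallest p-value is at most i*alpha/n for some i."""
    p = np.sort(np.asarray(pvals, dtype=float))
    n = p.size
    thresholds = np.arange(1, n + 1) * alpha / n
    return bool(np.any(p <= thresholds))
```

Under independence the rejection probability equals α exactly; the abstract's result concerns how this probability behaves under negative dependence.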
11.
A supersaturated design (SSD) is a factorial design in which the degrees of freedom for all its main effects exceed the total number of distinct factorial level-combinations (runs) of the design. Designs with quantitative factors, in which permuting the levels within one or more factors can result in different geometrical structures, are very different from the traditionally studied designs with nominal factors. In this paper, a new criterion is proposed for SSDs with quantitative factors. Comparisons and analysis of this new criterion are given. It is shown that the proposed criterion has a high efficiency in discriminating geometrically nonisomorphic designs and an advantage in computation.
12.
We determine a credible set A that is the “best” with respect to the variation of the prior distribution in a neighborhood Γ of the starting prior π0(θ). Among the class of sets with credibility γ under π0, the “optimally robust” set will be the one which maximizes the minimum probability of including θ as the prior varies over Γ. This procedure is also Γ-minimax with respect to the risk function, probability of non-inclusion. We find the optimally robust credible set for three neighborhood classes Γ: the ε-contamination class, the density ratio class and the density bounded class. A consequence of this investigation is that the maximum likelihood set is seen to be an optimal credible set from a robustness perspective.
13.
It is shown how to condense the information contained in a series of studies, each constituted by an objects-by-variables matrix and a pair of weight matrices, into a structure vector and a sum of sums of squares of residuals. Based on this condensation we propose to carry out ANOVA-like inference for matched series of studies associated with the level combinations of some factors. It is shown how to validate the assumptions underlying the inference. An application to the results of local elections in Portugal is given.
14.
Non-parametric procedures are sometimes used even in cases where the corresponding parametric procedure is preferable. This is mainly due to the fact that in practical applications of statistical methods too much attention is paid to any violation of the normality assumption; the normal distribution is, however, primarily assumed in order to easily derive the exact distribution of the statistic used within parametric approaches.
15.
For a sequence of strictly stationary random fields that are uniformly ρ′-mixing and satisfy a Lindeberg condition, a central limit theorem is obtained for sequences of “rectangular” sums from the given random fields. The “Lindeberg CLT” is then used to prove a CLT for some kernel estimators of probability density for some strictly stationary random fields satisfying ρ′-mixing, and whose probability density and joint densities are absolutely continuous.
16.
We present the theoretical background and the numerical procedure for calculating optimum experimental designs for non-linear model discrimination in the presence of constraints. The design support points consist of two kinds of factors: a continuous function of time and discrete levels of other quantitative factors. That is, some of the experimental conditions are allowed to vary continuously during the experimental run. We implement the theory in a chemical kinetic model discrimination problem.
17.
Gregory E. Wilding Govind S. Mudholkar Georgia D. Kollia 《Journal of statistical planning and inference》2007
Isotones are a deterministic graphical device introduced by Mudholkar et al. [1991. A graphical procedure for comparing goodness-of-fit tests. J. Roy. Statist. Soc. B 53, 221–232], in the context of comparing some tests of normality. An isotone of a test is a contour of p values of the test applied to “ideal samples”, called profiles, from a two-shape-parameter family representing the null and the alternative distributions of the parameter space. The isotone is an adaptation of Tukey's sensitivity curves, a generalization of Prescott's stylized sensitivity contours, and an alternative to the isodynes of Stephens. The purpose of this paper is twofold. One is to show that the isotones can provide useful qualitative information regarding the behavior of the tests of distributional assumptions other than normality. The other is to show that the qualitative conclusions remain the same from one two-parameter family of alternatives to another. Towards this end we construct and interpret the isotones of some tests of the composite hypothesis of exponentiality, using the profiles of two Weibull extensions, the generalized Weibull and the exponentiated Weibull families, which allow IFR, DFR, as well as unimodal and bathtub failure rate alternatives. Thus, as a by-product of the study, it is seen that a test due to Csörgő et al. [1975. Application of characterizations in the area of goodness-of-fit. In: Patil, G.P., Kotz, S., Ord, J.K. (Eds.), Statistical Distributions in Scientific Work, vol. 2. Reidel, Boston, pp. 79–90], and Gnedenko's Q(r) test [1969. Mathematical Methods of Reliability Theory. Academic Press, New York], are appropriate for detecting monotone failure rate alternatives, whereas a bivariate F test due to Lin and Mudholkar [1980. A test of exponentiality based on the bivariate F distribution. Technometrics 22, 79–82] and their entropy test [1984. On two applications of characterization theorems to goodness-of-fit. Colloq. Math. Soc. János Bolyai 45, 395–414] can detect all alternatives, but are especially suitable for nonmonotone failure rate alternatives.
18.
Defining equations are introduced in the context of two-level factorial designs and they are shown to provide a concise specification of both regular and nonregular designs. The equations are used to find orthogonal arrays of high strength and some optimal designs. The latter optimal designs are formed in a new way by augmenting notional orthogonal arrays which are allowed to have some runs with a negative number of replicates before augmentation. Defining equations are also shown to be useful when the factorial design is blocked.
19.