Similar Documents
20 similar documents found.
1.
Likelihood ratio tests are considered for two testing situations: testing for the homogeneity of k normal means against the alternative restricted by a simple tree ordering trend, and testing the null hypothesis that the means satisfy the trend against all alternatives. Exact expressions are given for the power functions for k = 3 and 4 and unequal sample sizes, both for the case of known and of unknown population variances, and approximations are discussed for larger k. Also, Bartholomew's conjectures concerning minimal and maximal powers are investigated for the cases of equal and unequal sample sizes. The power formulas are used to compute powers for a numerical example.

2.
This article studies a new procedure to test for the equality of k regression curves in a fully non‐parametric context. The test is based on the comparison of empirical estimators of the characteristic functions of the regression residuals in each population. The asymptotic behaviour of the test statistic is studied in detail. It is shown that under the null hypothesis, the distribution of the test statistic converges to a finite combination of independent chi‐squared random variables with one degree of freedom. The coefficients in this linear combination can be consistently estimated. The proposed test is able to detect contiguous alternatives converging to the null at the rate n^{−1/2}. The practical performance of the test based on the asymptotic null distribution is investigated by means of simulations.
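As a rough illustration of the idea (not the authors' exact statistic or weighting), the sketch below fits each regression curve with a Nadaraya–Watson smoother, forms residuals, and measures an L2-type distance between each group's empirical characteristic function and the pooled one on a grid of t values. The bandwidth, grid and statistic form are assumptions, and calibration (via the asymptotic combination of chi-squared variables or resampling) is not shown.

# Hypothetical sketch: compare empirical characteristic functions (ECFs) of
# nonparametric regression residuals across groups (not the authors' exact statistic).
import numpy as np

def nw_fit(x, y, grid_x, h):
    """Nadaraya-Watson estimate of E[Y|X] at grid_x with a Gaussian kernel."""
    w = np.exp(-0.5 * ((grid_x[:, None] - x[None, :]) / h) ** 2)
    return (w @ y) / w.sum(axis=1)

def ecf(e, t):
    """Empirical characteristic function of residuals e on a grid t."""
    return np.exp(1j * t[:, None] * e[None, :]).mean(axis=1)

def cf_distance_stat(samples, h=0.3, t_grid=np.linspace(-3, 3, 61)):
    """L2-type distance between each group's residual ECF and the pooled ECF."""
    resid = []
    for x, y in samples:                         # one (x, y) pair per population
        resid.append(y - nw_fit(x, y, x, h))     # residuals from the fitted curve
    pooled = ecf(np.concatenate(resid), t_grid)
    dt = t_grid[1] - t_grid[0]
    stat = 0.0
    for e in resid:                              # Riemann approximation of the integral
        stat += len(e) * np.sum(np.abs(ecf(e, t_grid) - pooled) ** 2) * dt
    return stat

# toy usage: two samples sharing the same regression curve
rng = np.random.default_rng(0)
x1, x2 = rng.uniform(0, 1, 100), rng.uniform(0, 1, 120)
y1 = np.sin(2 * np.pi * x1) + rng.normal(0, 0.2, 100)
y2 = np.sin(2 * np.pi * x2) + rng.normal(0, 0.2, 120)
print(cf_distance_stat([(x1, y1), (x2, y2)]))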

3.
The problem of inference in Bayesian Normal mixture models is known to be difficult. In particular, direct Bayesian inference (via quadrature) suffers from a combinatorial explosion in having to consider every possible partition of n observations into k mixture components, resulting in a computation time which is O(k^n). This paper explores the use of discretised parameters and shows that for equal-variance mixture models, direct computation time can be reduced to O(D^k nk), where relevant continuous parameters are each divided into D regions. As a consequence, direct inference is now possible on genuine data sets for small k, where the quality of approximation is determined by the level of discretisation. For large problems, where the computational complexity is still too great in O(D^k nk) time, discretisation can provide a convergence diagnostic for a Markov chain Monte Carlo analysis.
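As a minimal illustration of why discretisation removes the k^n label enumeration: for fixed parameter values the mixture likelihood factorises over observations, so a direct posterior over a D-point grid per mean costs roughly O(D^k nk). The sketch below assumes k = 2, a known common variance, equal mixing weights and a uniform grid prior, all simplifications not stated in the abstract.

# Minimal sketch (assumptions: k = 2, known common variance, equal weights,
# uniform prior over a D-point grid for each mean): direct posterior over the
# discretised means of an equal-variance normal mixture.  For fixed parameters
# the sum over component labels factorises over observations, so each of the
# D**k grid points costs O(n*k) work instead of enumerating k**n assignments.
import numpy as np
from itertools import product

def discretised_posterior(x, grid, sigma=1.0, weights=(0.5, 0.5)):
    k = len(weights)
    log_post = {}
    for mus in product(grid, repeat=k):              # D**k grid points
        comp = np.stack([                             # n x k log densities (constants dropped)
            np.log(w) - 0.5 * ((x - mu) / sigma) ** 2 - np.log(sigma)
            for w, mu in zip(weights, mus)], axis=1)
        m = comp.max(axis=1, keepdims=True)           # log-sum-exp over labels
        log_post[mus] = np.sum(m.ravel() + np.log(np.exp(comp - m).sum(axis=1)))
    vals = np.array(list(log_post.values()))          # normalise over the grid
    probs = np.exp(vals - vals.max()); probs /= probs.sum()
    return dict(zip(log_post.keys(), probs))

# toy usage
rng = np.random.default_rng(1)
x = np.concatenate([rng.normal(-2, 1, 60), rng.normal(2, 1, 40)])
post = discretised_posterior(x, grid=np.linspace(-4, 4, 17))
print(max(post, key=post.get))    # grid point with the highest posterior mass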

4.
Two-level regular fractional factorial designs are often used in industry as screening designs to help identify early on in an experimental process those experimental or system variables which have significant effects on the process being studied. When the experimental material to be used in the experiment is heterogeneous or the experiment must be performed over several well-defined time periods, blocking is often used as a means to improve experimental efficiency by removing the possible effects of heterogeneous experimental material or possible time-period effects. In a recent article, Li and Jacroux (2007, Optimal foldover plans for blocked 2^{m−k} fractional factorial designs, J. Statist. Plann. Infer. 137:2434–2452) suggested a strategy for constructing optimal follow-up designs for blocked fractional factorial designs using the well-known foldover technique in conjunction with several optimality criteria. In this article, we consider the reverse foldover problem for blocked fractional factorial designs. In particular, given a 2^{(m+p)−(p+k)} blocked fractional factorial design D, we derive simple sufficient conditions which can be used to determine whether there exists a 2^{(m+p−1)−(p−1+k+1)} initial fractional factorial design d which yields D as a foldover combined design, as well as how to generate all such d. Such information is useful in developing an overall experimental strategy in situations where an experimenter wants an overall blocked fractional factorial design with "desirable" properties but also wants the option of analyzing the observed data at the halfway mark to determine whether the significant experimental variables are obvious (and the experiment can be terminated) or whether a different path of experimentation should be taken from that initially planned.
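For readers unfamiliar with the foldover operation the article builds on, the sketch below constructs a regular two-level fraction and appends its full foldover, i.e. the runs obtained by reversing the signs of the factor columns. The generator, factor labels and the absence of block columns are illustrative simplifications; the article's sufficient conditions for reverse foldover are not reproduced.

# Illustrative sketch of the foldover operation itself (the article's conditions
# for "reverse foldover" of blocked designs are not reproduced here).
import numpy as np
from itertools import product

def two_level_fraction(n_basic, generators):
    """Regular two-level fraction: basic factors in +/-1 coding, added factors from generators."""
    basic = np.array(list(product([-1, 1], repeat=n_basic)))
    added = [np.prod(basic[:, cols], axis=1) for cols in generators]
    return np.column_stack([basic] + added)

def foldover(design, which=None):
    """Append the runs obtained by reversing the signs of the chosen columns (all by default)."""
    flipped = design.copy()
    cols = range(design.shape[1]) if which is None else which
    for j in cols:
        flipped[:, j] *= -1
    return np.vstack([design, flipped])

# 2^(4-1) fraction with generator D = ABC, then a full foldover (all columns reversed)
d = two_level_fraction(3, generators=[(0, 1, 2)])
combined = foldover(d)
print(combined.shape)   # (16, 4): the combined design has twice the runs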

5.
It is common to test if there is an effect due to a treatment. The commonly used tests have the assumption that the observations differ in location, and that their variances are the same over the groups. Different variances can arise if the observations being analyzed are means of different numbers of observations on individuals or slopes of growth curves with missing data. This study is concerned with cases in which the unequal variances are known, or known up to a constant of proportionality. It examines the performance of the t-test, the Mann–Whitney–Wilcoxon Rank Sum test, the Median test, and the Van der Waerden test under these conditions. The t-test based on the weighted means is the likelihood ratio test under normality and has the usual optimality properties. The other tests are compared to it. One may align and scale the observations by subtracting the mean and dividing by the standard deviation of each point. This leads to other, analogous test statistics based on these adjusted observations. These statistics are also compared. Finally, the regression scores tests are compared to the other procedures.
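A minimal sketch of the weighted-means t statistic referred to above, assuming two groups whose observation variances are known up to a common constant c (standard weighted least squares theory); the toy data and variance pattern are illustrative and may not match the study's exact setup.

# Minimal sketch: weighted-means t statistic when Var(y_ij) = c * v_ij with v_ij
# known and c unknown (standard weighted least squares; illustrative only).
import numpy as np
from scipy import stats

def weighted_t_test(y1, v1, y2, v2):
    """y: observations, v: their known relative variances (Var = c*v, c unknown)."""
    w1, w2 = 1.0 / np.asarray(v1), 1.0 / np.asarray(v2)
    m1, m2 = np.sum(w1 * y1) / w1.sum(), np.sum(w2 * y2) / w2.sum()
    df = len(y1) + len(y2) - 2
    # pooled estimate of the unknown proportionality constant c
    c_hat = (np.sum(w1 * (y1 - m1) ** 2) + np.sum(w2 * (y2 - m2) ** 2)) / df
    t = (m1 - m2) / np.sqrt(c_hat * (1.0 / w1.sum() + 1.0 / w2.sum()))
    return t, 2 * stats.t.sf(abs(t), df)

# toy usage: each observation is a mean of m individuals, so its variance is proportional to 1/m
rng = np.random.default_rng(2)
m1, m2 = rng.integers(2, 10, 15), rng.integers(2, 10, 15)
y1 = rng.normal(0, 1, 15) / np.sqrt(m1)
y2 = 0.5 + rng.normal(0, 1, 15) / np.sqrt(m2)
print(weighted_t_test(y1, 1.0 / m1, y2, 1.0 / m2))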

6.
Given k (≥ 3) independent normal populations with unknown means and unknown and unequal variances, a single-stage sampling procedure to select the best t out of k populations is proposed and the procedure is completely independent of the unknown means and the unknown variances. For various combinations of k and probability requirement, tables of procedure parameters are provided for practitioners.

7.
Liu and Singh (1993, 2006) introduced a depth‐based d‐variate extension of the nonparametric two-sample scale test of Siegel and Tukey (1960). Liu and Singh (2006) generalized this depth‐based test for scale homogeneity of k ≥ 2 multivariate populations. Motivated by the work of Gastwirth (1965), we propose k-sample percentile modifications of Liu and Singh's proposals. The test statistic is shown to be asymptotically normal when k = 2, and compares favorably with Liu and Singh (2006) if the underlying distributions are either symmetric with light tails or asymmetric. In the case of skewed distributions considered in this paper the power of the proposed tests can attain twice the power of the Liu‐Singh test for d ≥ 1. Finally, in the k‐sample case, it is shown that the asymptotic distribution of the proposed percentile-modified Kruskal‐Wallis-type test is χ² with k − 1 degrees of freedom. Power properties of this k‐sample test are similar to those for the proposed two-sample one. The Canadian Journal of Statistics 39: 356–369; 2011 © 2011 Statistical Society of Canada

8.
Consider four classes of Lehmann-type alternatives: G = F^k (k > 1); G = 1 − (1 − F)^k (k < 1); G = F^k (k < 1); and G = 1 − (1 − F)^k (k > 1), where F and G are two continuous cumulative distribution functions. If an optimal precedence test (one with the maximal power) is determined for one of these four classes, the optimal tests for the other classes of alternatives can be derived. An application of this is given using the results of Lin and Sukhatme (1992), who derived the best precedence test for testing the null hypothesis that the lifetimes of two types of items on test have the same distribution. The test has maximum power for fixed k in the class of alternatives G = 1 − (1 − F)^k with k < 1. Best precedence tests for the other three classes of Lehmann-type alternatives are derived using their results. Finally, a comparison of precedence tests with Wilcoxon's two-sample test is presented.

9.
Heterogeneity of variances of treatment groups influences the validity and power of significance tests of location in two distinct ways. First, if sample sizes are unequal, the Type I error rate and power are depressed if a larger variance is associated with a larger sample size, and elevated if a larger variance is associated with a smaller sample size. This well-established effect, which occurs in t and F tests, and to a lesser degree in nonparametric rank tests, results from unequal contributions of pooled estimates of error variance in the computation of test statistics. It is observed in samples from normal distributions, as well as non-normal distributions of various shapes. Second, transformation of scores from skewed distributions with unequal variances to ranks produces differences in the means of the ranks assigned to the respective groups, even if the means of the initial groups are equal, and a subsequent inflation of Type I error rates and power. This effect occurs for all sample sizes, equal and unequal. For the t test, the discrepancy diminishes, and for the Wilcoxon–Mann–Whitney test, it becomes larger, as sample size increases. The Welch separate-variance t test overcomes the first effect but not the second. Because of interaction of these separate effects, the validity and power of both parametric and nonparametric tests performed on samples of any size from unknown distributions with possibly unequal variances can be distorted in unpredictable ways.
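A small simulation sketch of the first effect described above: the pooled-variance t test is conservative when the larger variance is paired with the larger sample and liberal when it is paired with the smaller sample. The sample sizes, standard deviations and replication count below are arbitrary choices for illustration.

# Simulated Type I error of the pooled-variance t test under unequal variances
# paired with unequal sample sizes (all settings are illustrative).
import numpy as np
from scipy import stats

def rejection_rate(n1, s1, n2, s2, reps=10000, alpha=0.05, seed=0):
    rng = np.random.default_rng(seed)
    rejections = 0
    for _ in range(reps):
        x = rng.normal(0, s1, n1)            # both groups have mean 0, so H0 is true
        y = rng.normal(0, s2, n2)
        _, p = stats.ttest_ind(x, y, equal_var=True)   # pooled-variance t test
        rejections += (p < alpha)
    return rejections / reps

print("large variance with large n:", rejection_rate(40, 3.0, 10, 1.0))  # below 0.05
print("large variance with small n:", rejection_rate(10, 3.0, 40, 1.0))  # above 0.05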

10.
We consider in this work a k-level step-stress accelerated life-test (ALT) experiment with unequal duration steps τ = (τ1, …, τk). Censoring is allowed only at the change-stress point in the final stage. An exponential failure time distribution with mean life that is a log-linear function of stress, along with a cumulative exposure model, is considered as the working model. The problem of choosing the optimal τ is addressed using the variance-optimality criterion. Under this setting, we then show that the optimal k-level step-stress ALT model with unequal duration steps reduces just to a 2-level step-stress ALT model.

11.
In this paper, we consider a k-level step-stress accelerated life-testing (ALT) experiment with unequal duration steps τ = (τ1, …, τk). Censoring is allowed only at the change-stress point in the final stage. A general log-location-scale lifetime distribution with mean life which is a linear function of stress, along with a cumulative exposure model, is considered as the working model. Under this model, the determination of the optimal choice of τ for both Weibull and lognormal distributions is addressed using the variance-optimality criterion. Numerical results show that for general log-location-scale distributions, the optimal k-level step-stress ALT model with unequal duration steps reduces just to a 2-level step-stress ALT model.

12.
The exact and asymptotic upper tail probabilities (α = .10, .05, .01, .001) of the three chi-squared goodness-of-fit statistics, Pearson's X², the likelihood ratio G², and the power-divergence statistic D²(λ) with λ = 2/3, are compared numerically for simple null hypotheses not involving parameter estimation. Three types of such hypotheses were investigated (equal cell probabilities, proportional cell probabilities, some fixed small expectations together with some increasing large expectations) for numbers of cells between 3 and 15, and for sample sizes from 10 to 40, increasing by steps of one. Rating the relative accuracy of the chi-squared approximation in terms of ±10% and ±20% intervals around α led to the following conclusions: 1. Using G² is not recommended. 2. At the more relevant significance levels α = .10 and α = .05, X² should be preferred over D²; solely in the case of unequal cell probabilities is D² the better choice at α = .01 and α = .001. 3. Yarnold's (1970; Journal of the American Statistical Association, 65, 864–886) rule for the minimum expectation when using X² ("If the number of cells k is 3 or more, and if r denotes the number of expectations less than 5, then the minimum expectation may be as small as 5r/k.") generalizes to D²; it gives a good lower limit for the expected cell frequencies, however, only when the number of cells is greater than 3. For k = 3, even sample sizes over 15 may be insufficient.
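For reference, the three statistics have the usual closed forms, and the sketch below computes X², G² and D²(2/3) for observed counts and hypothesised cell probabilities. The counts and probabilities are illustrative, and only the asymptotic χ² tail probabilities are shown, not the exact ones studied in the article.

# Usual closed forms of the three goodness-of-fit statistics (0*log 0 taken as 0);
# the counts and cell probabilities in the usage lines are only an illustration.
import numpy as np
from scipy import stats

def gof_statistics(obs, probs, lam=2/3):
    obs = np.asarray(obs, dtype=float)
    exp = obs.sum() * np.asarray(probs, dtype=float)
    x2 = np.sum((obs - exp) ** 2 / exp)                                   # Pearson X^2
    with np.errstate(divide="ignore", invalid="ignore"):
        g2 = 2 * np.nansum(np.where(obs > 0, obs * np.log(obs / exp), 0.0))  # likelihood ratio G^2
    d2 = 2 / (lam * (lam + 1)) * np.sum(obs * ((obs / exp) ** lam - 1))   # power divergence D^2(lam)
    return x2, g2, d2

obs, probs = [6, 12, 7, 5], [0.25, 0.25, 0.25, 0.25]
x2, g2, d2 = gof_statistics(obs, probs)
df = len(obs) - 1
print(x2, g2, d2, stats.chi2.sf([x2, g2, d2], df))   # asymptotic upper tail probabilities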

13.
In this paper, we consider testing the equality of two mean vectors with unequal covariance matrices. In the case of equal covariance matrices, we can use Hotelling's T² statistic, which follows the F distribution under the null hypothesis. Meanwhile, in the case of unequal covariance matrices, the T²-type test statistic does not follow the F distribution, and it is also difficult to derive its exact distribution. In this paper, we propose an approximate solution to the problem by adjusting the degrees of freedom of the F distribution. Asymptotic expansions up to the term of order N^{−2} for the first and second moments of the U statistic are given, where N is the total sample size minus two. A new approximate degrees of freedom and its bias correction are obtained. Finally, a numerical comparison is presented by means of a Monte Carlo simulation.
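A minimal sketch of the T²-type statistic for unequal covariance matrices, d′(S₁/n₁ + S₂/n₂)⁻¹d, with only the large-sample χ²_p reference shown as a baseline; the article's adjusted-degrees-of-freedom F approximation and its bias correction are not reproduced here, and the simulated data are illustrative.

# Minimal sketch: T^2-type statistic for two mean vectors with unequal covariance
# matrices; only the large-sample chi-square(p) reference is used here.
import numpy as np
from scipy import stats

def t2_unequal_cov(x, y):
    """x: (n1, p) sample, y: (n2, p) sample; returns the statistic and a chi-square p-value."""
    n1, n2, p = len(x), len(y), x.shape[1]
    d = x.mean(axis=0) - y.mean(axis=0)
    s = np.cov(x, rowvar=False) / n1 + np.cov(y, rowvar=False) / n2
    t2 = float(d @ np.linalg.solve(s, d))
    return t2, stats.chi2.sf(t2, p)

rng = np.random.default_rng(3)
x = rng.multivariate_normal([0, 0], [[1, 0.3], [0.3, 1]], size=30)
y = rng.multivariate_normal([0, 0], [[4, -0.5], [-0.5, 2]], size=50)
print(t2_unequal_cov(x, y))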

14.
It is known that for blocked 2^{n−k} designs a judicious sequencing of blocks may allow one to obtain early and insightful results regarding influential parameters in the experiment. Such findings may justify the early termination of the experiment, thereby producing cost and time savings. This paper introduces an approach for selecting the optimal sequence of blocks for regular two-level blocked fractional factorial split-plot screening experiments. An optimality criterion is developed so as to give priority to the early estimation of low-order factorial effects. This criterion is then applied to the minimum aberration blocked fractional factorial split-plot designs tabled in McLeod and Brewster [2004. The design of blocked fractional factorial split-plot experiments. Technometrics 46, 135–146]. We provide a catalog of optimal block sequences for 16- and 32-run minimum aberration blocked fractional factorial split-plot designs run in either 4 or 8 blocks.

15.
The inverse Gaussian (IG) distribution is often applied in statistical modelling, especially with lifetime data. We present tests for outlying values of the parameters (μ, λ) of this distribution when data are available from a sample of independent units, possibly with more than one event per unit. Outlier tests are constructed from likelihood ratio tests for equality of parameters. The test for an outlying value of λ is based on an F-distributed statistic that is transformed to an approximate normal statistic when there are unequal numbers of events per unit. Simulation studies are used to confirm that Bonferroni tests have accurate size and to examine the powers of the tests. The application to first hitting time models, where the IG distribution is derived from an underlying Wiener process, is described. The tests are illustrated on data concerning the strength of different lots of insulating material.
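A classical building block behind F-type comparisons of λ in the IG model is Tweedie's result that λ·Σ(1/Xᵢ − 1/X̄) follows a χ² distribution with n − 1 degrees of freedom, so two independent units yield an F ratio for equality of their λ values. The sketch below implements that two-sample comparison; it is standard IG theory rather than the article's exact outlier statistic, and the simulated data are illustrative.

# Classical two-sample F comparison of the IG shape parameter lambda, based on
# lambda * sum(1/X_i - 1/Xbar) ~ chi-square(n-1); not the article's exact outlier test.
import numpy as np
from scipy import stats

def ig_lambda_f_test(x1, x2):
    v1 = np.sum(1.0 / x1 - 1.0 / x1.mean())
    v2 = np.sum(1.0 / x2 - 1.0 / x2.mean())
    df1, df2 = len(x1) - 1, len(x2) - 1
    f = (v1 / df1) / (v2 / df2)
    p = 2 * min(stats.f.sf(f, df1, df2), stats.f.cdf(f, df1, df2))  # two-sided p-value
    return f, p

rng = np.random.default_rng(4)
# scipy's invgauss: mean = mu*scale and lambda = scale, so both units have lambda = 1 here
x1 = stats.invgauss.rvs(mu=2.0, scale=1.0, size=25, random_state=rng)
x2 = stats.invgauss.rvs(mu=2.0, scale=1.0, size=30, random_state=rng)
print(ig_lambda_f_test(x1, x2))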

16.
Reduced k‐means clustering is a method for clustering objects in a low‐dimensional subspace. The advantage of this method is that both the clustering of objects and the low‐dimensional subspace reflecting the cluster structure are obtained simultaneously. In this paper, the relationship between conventional k‐means clustering and reduced k‐means clustering is discussed. Conditions ensuring the almost sure convergence of the reduced k‐means clustering estimator as the sample size increases unboundedly are presented. Results for a more general model encompassing both conventional k‐means clustering and reduced k‐means clustering are also provided. Moreover, a consistent selection of the numbers of clusters and dimensions is described.
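A compact alternating sketch of the reduced k-means criterion min‖X − UFA′‖² over memberships U, subspace centroids F (K × q) and an orthonormal loading matrix A (p × q). The update rules, initialisation, empty-cluster handling and stopping rule below are one common alternating least squares scheme and are not taken from the paper.

# Alternating least squares sketch for reduced k-means (illustrative, simplistic).
import numpy as np

def reduced_kmeans(X, K, q, n_iter=50, seed=0):
    rng = np.random.default_rng(seed)
    labels = rng.integers(0, K, size=len(X))            # random initial memberships
    for _ in range(n_iter):
        sizes = np.bincount(labels, minlength=K).astype(float)
        if np.any(sizes == 0):                           # crude empty-cluster fix
            idx = rng.choice(len(X), size=K, replace=False)
            labels[idx] = np.arange(K)
            sizes = np.bincount(labels, minlength=K).astype(float)
        means = np.vstack([X[labels == k].mean(axis=0) for k in range(K)])
        # best rank-q fit to the size-weighted cluster means via a truncated SVD
        W = np.sqrt(sizes)[:, None]
        P, s, Qt = np.linalg.svd(W * means, full_matrices=False)
        A = Qt[:q].T                                     # p x q orthonormal loadings
        F = (P[:, :q] * s[:q]) / W                       # K x q subspace centroids
        centroids = F @ A.T                              # centroids back in the full space
        labels = np.argmin(((X[:, None, :] - centroids[None]) ** 2).sum(-1), axis=1)
    return labels, A, F

# toy usage: two clusters lying close to a 1-dimensional subspace of R^5
rng = np.random.default_rng(5)
X = np.vstack([rng.normal(0, 0.3, (50, 5)) + m for m in ([0, 0, 0, 0, 0], [2, 2, 0, 0, 0])])
labels, A, F = reduced_kmeans(X, K=2, q=1)
print(np.bincount(labels), A.ravel().round(2))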

17.
Heteroscedastic two-way ANOVA models are frequently encountered in real data analysis. In the literature, classical F-tests are often blindly employed although they can be biased even under moderate heteroscedasticity. To overcome this problem, several approximate tests have been proposed in the literature. These tests, however, are either too complicated to implement or do not work well in terms of size control. In this paper, we propose a simple and accurate approximate degrees of freedom (ADF) test. The ADF test is shown to be invariant under affine transformations, different choices of contrast matrix for the same null hypothesis, or different labeling schemes of cell means. Moreover, it can be conducted easily using the usual F-distribution with one unknown degree of freedom estimated from the data. Simulations demonstrate that the ADF test works well for various cell sizes and parameter configurations, whereas the classical F-tests work badly when the cell-variance homogeneity assumption is violated. A real data example illustrates the methodologies.

18.
Many nonparametric tests have been proposed for the hypothesis of no row (treatment) effect in a one-way layout design. Examples of such tests are the Kruskal-Wallis H-test, Bhapkar's (1961) V-test, and Deshpande's (1965) L-test. However, not many tests are available for testing the same hypothesis in a two-way layout design without interaction. Perhaps the only "established" test is the one due to Friedman (1937); however, it applies to the case of one observation per cell only. In this paper, a new distribution-free test is proposed for the hypothesis of no row effect in a two-way layout design. It applies to the case of several observations per cell, with cell counts not necessarily equal. The asymptotic efficiency of the proposed test relative to other tests is studied.

19.
In this paper, we consider experimental situations in which a regular fractional factorial design is to be used to study the effects of m two-level factors using n = 2^{m−k} experimental units arranged in 2^p blocks of size 2^{m−k−p}. In such situations, two-factor interactions are often confounded with blocks, and complete information is lost on these two-factor interactions. Here we consider the use of the foldover technique in conjunction with combining designs having different blocking schemes to produce alternative partially confounded blocked fractional factorial designs that have more estimable two-factor interactions or a higher estimation capacity, or both, than their traditional counterparts.

20.
It is well known that Yates' algorithm can be used to estimate the effects in a factorial design. We develop a modification of this algorithm, called the modified Yates' algorithm, together with its inverse. We show that the intermediate steps in our algorithm have a direct interpretation as estimated level-specific mean values and effects. We also show how Yates' algorithm or our modified algorithm can be used to construct the blocks in a 2^k factorial design and to generate the layout sheet of a 2^{k−p} fractional factorial design and the confounding pattern in such a design. In a final example we put together all these methods by generating and analysing a 2^{6−2} design with 2 blocks.
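For reference, the standard (unmodified) Yates algorithm for a full 2^k factorial with responses in standard order is sketched below: k passes of pairwise sums and differences, followed by scaling. The article's modified algorithm and its inverse are not reproduced, and the response values in the usage line are illustrative.

# Standard Yates algorithm for a full 2^k factorial with responses in standard
# (Yates) order.  After k passes, dividing the first entry by 2^k gives the grand
# mean and dividing the rest by 2^(k-1) gives the usual factorial effect estimates.
import numpy as np

def yates(y):
    y = np.asarray(y, dtype=float)
    k = int(np.log2(len(y)))
    assert len(y) == 2 ** k, "response vector must have length 2^k"
    for _ in range(k):
        pairs = y.reshape(-1, 2)                       # consecutive pairs of the current column
        y = np.concatenate([pairs.sum(axis=1), pairs[:, 1] - pairs[:, 0]])
    effects = y / (2 ** (k - 1))
    effects[0] = y[0] / 2 ** k                         # first entry is the grand mean
    return effects                                      # order: mean, A, B, AB, C, AC, BC, ABC, ...

# toy usage: 2^3 responses in standard order (1), a, b, ab, c, ac, bc, abc
print(yates([60, 72, 54, 68, 52, 83, 45, 80]))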
