Similar Literature
20 similar documents found.
1.
Real world data often fail to meet the underlying assumption of population normality. The Rank Transformation (RT) procedure has been recommended as an alternative to the parametric factorial analysis of covariance (ANCOVA). The purpose of this study was to compare the Type I error and power properties of the RT ANCOVA to the parametric procedure in the context of a completely randomized balanced 3 × 4 factorial layout with one covariate. This study was concerned with tests of homogeneity of regression coefficients and interaction under conditional (non)normality. Both procedures displayed erratic Type I error rates for the test of homogeneity of regression coefficients under conditional nonnormality. With all parametric assumptions valid, the simulation results demonstrated that the RT ANCOVA failed as a test for either homogeneity of regression coefficients or interaction due to severe Type I error inflation. The error inflation was most severe when departures from conditional normality were extreme. Also associated with the RT procedure was a loss of power. It is recommended that the RT procedure not be used as an alternative to factorial ANCOVA despite its endorsement by SAS, IMSL, and other respected sources.

2.
Preliminary tests of significance on the crucial assumptions are often done before drawing inferences of primary interest. In a factorial trial, the data may be pooled across the columns or rows for making inferences concerning the efficacy of the drugs (simple effect) in the absence of interaction. Pooling the data has an advantage of higher power due to larger sample size. On the other hand, in the presence of interaction, such pooling may seriously inflate the type I error rate in testing for the simple effect.

A preliminary test for interaction is therefore in order. If this preliminary test is not significant at some prespecified level of significance, then pool the data for testing the efficacy of the drugs at a specified α level. Otherwise, use of the corresponding cell means for testing the efficacy of the drugs at the specified α is recommended. This paper demonstrates that this adaptive procedure may seriously inflate the overall type I error rate. Such inflation happens even in the absence of interaction.

One interesting result is that the type I error rate of the adaptive procedure depends on the interaction and the square root of the sample size only through their product. One consequence of this result is as follows. No matter how small the non-zero interaction might be, the inflation of the type I error rate of the always-pool procedure will eventually become unacceptable as the sample size increases. Therefore, in a very large study, even though the interaction is suspected to be very small but non-zero, the always-pool procedure may seriously inflate the type I error rate in testing for the simple effects.

It is concluded that the 2 × 2 factorial design is not an efficient design for detecting simple effects, unless the interaction is negligible.
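The adaptive pool-or-not procedure criticized in this abstract can be sketched as follows. This is a minimal illustration only: a balanced 2 × 2 layout with known unit error variance and normal z-tests, with `adaptive_simple_effect_test` and the 5% levels being illustrative choices, not the paper's exact setup.

```python
import random
import statistics

def adaptive_simple_effect_test(cells):
    """Sketch of the adaptive procedure: in a 2x2 factorial with cell
    samples cells[(i, j)], first test the interaction; if it is not
    significant, pool across factor B and test the simple effect of
    factor A on the pooled data; otherwise use cell means within one
    level of B.  Normal z-tests with known unit error variance are
    assumed purely for illustration."""
    n = len(cells[(0, 0)])
    m = {k: statistics.fmean(v) for k, v in cells.items()}
    z_crit = 1.959964  # two-sided 5% normal critical value
    # Interaction contrast m11 - m10 - m01 + m00 has variance 4/n.
    inter_z = (m[(1, 1)] - m[(1, 0)] - m[(0, 1)] + m[(0, 0)]) / (4 / n) ** 0.5
    if abs(inter_z) <= z_crit:
        # Pool over factor B: row-mean difference has variance 1/n.
        pooled_z = ((m[(1, 0)] + m[(1, 1)]) / 2
                    - (m[(0, 0)] + m[(0, 1)]) / 2) / (1 / n) ** 0.5
        return abs(pooled_z) > z_crit
    # Otherwise test the simple effect within level 0 of B (variance 2/n).
    cell_z = (m[(1, 0)] - m[(0, 0)]) / (2 / n) ** 0.5
    return abs(cell_z) > z_crit
```

Under the full null this keeps roughly the nominal level; the abstract's point is that with a non-zero interaction the pooled branch is biased, and the inflation grows with the product of the interaction and the square root of the sample size.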

3.
In Sections 49 and 50 of the Design of Experiments, Fisher discusses an experiment designed to compare the effects of several types of manure on yield. Each type of manure is applied at three dosage levels: zero, single, and double doses. Fisher points out that the usual contrasts constructed for a factorial experiment are unsatisfactory in this setting. In particular, since the response curves necessarily meet at the zero dose, the usual notion of interaction as a lack of parallelism cannot apply. Fisher then gives an appropriate definition for interaction in this setting. This paper is concerned with a class of orthogonal polynomials that can be used as an aid in the detection of this modified definition of interaction.

4.
In this paper, we consider nonparametric multiple comparison procedures for unbalanced two-way factorial designs under a pure nonparametric framework. For multiple comparisons of treatments versus a control concerning the main effects or the simple factor effects, the limiting distribution of the associated rank statistics is proven to satisfy the multivariate totally positive of order two condition. Hence, asymptotically the proposed Hochberg procedure strongly controls the familywise type I error rate for the simultaneous testing of the individual hypotheses. In addition, we propose to employ Shaffer's modified version of Holm's stepdown procedure to perform simultaneous tests on all pairwise comparisons regarding the main or simple factor effects and to perform simultaneous tests on all interaction effects. The logical constraints in the corresponding hypothesis families are utilized to sharpen the rejective thresholds and improve the power of the tests.
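Hochberg's step-up procedure, which the limiting MTP2 property justifies applying here, can be sketched generically on a vector of p-values (a standard textbook implementation, not code from the paper):

```python
def hochberg(pvals, alpha=0.05):
    """Hochberg step-up procedure: with p-values sorted ascending as
    p_(1) <= ... <= p_(m), find the largest k with
    p_(k) <= alpha / (m - k + 1) and reject the hypotheses with the k
    smallest p-values.  Returns booleans in the original order."""
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])  # ascending
    reject = [False] * m
    for rank in range(m, 0, -1):          # step up from the largest p-value
        i = order[rank - 1]
        if pvals[i] <= alpha / (m - rank + 1):
            for r in range(rank):
                reject[order[r]] = True
            break
    return reject
```

Shaffer's modification of Holm's step-down procedure, also used in the paper, differs only in replacing the denominators m − k + 1 by the maximum number of hypotheses that can still be true given the logical constraints among them.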

5.
Several testing procedures are proposed that can detect change-points in the error distribution of non-parametric regression models. Different settings are considered where the change-point either occurs at some time point or at some value of the covariate. Fixed as well as random covariates are considered. Weak convergence of the suggested difference of sequential empirical processes based on non-parametrically estimated residuals to a Gaussian process is proved under the null hypothesis of no change-point. In the case of testing for a change in the error distribution that occurs with increasing time in a model with random covariates the test statistic is asymptotically distribution free and the asymptotic quantiles can be used for the test. This special test statistic can also detect a change in the regression function. In all other cases the asymptotic distribution depends on unknown features of the data-generating process and a bootstrap procedure is proposed in these cases. The small sample performances of the proposed tests are investigated by means of a simulation study and the tests are applied to a data example.

6.
Mixed-level designs are widely used in practical experiments. When the levels of some factors are difficult to change or control, fractional factorial split-plot (FFSP) designs are often used. It is highly desirable to know when a mixed-level FFSP design with resolution III or IV has clear effects. This paper investigates the conditions for a resolution III or IV FFSP design with both two-level and four-level factors to have various clear factorial effects, including two types of main effects and three types of two-factor interaction components. The structures of such designs are derived and illustrated with examples.

7.
We consider a 2^r factorial experiment with at least two replicates. Our aim is to find a confidence interval for θ, a specified linear combination of the regression parameters (for the model written as a regression, with factor levels coded as -1 and 1). We suppose that preliminary hypothesis tests are carried out sequentially, beginning with the rth-order interaction. After these preliminary hypothesis tests, a confidence interval for θ with nominal coverage 1 - α is constructed under the assumption that the selected model had been given to us a priori. We describe a new efficient Monte Carlo method, which employs conditioning for variance reduction, for estimating the minimum coverage probability of the resulting confidence interval. The application of this method is demonstrated in the context of a 2^3 factorial experiment with two replicates and a particular contrast θ of interest. The preliminary hypothesis tests consist of the following two-step procedure. We first test the null hypothesis that the third-order interaction is zero against the alternative hypothesis that it is non-zero. If this null hypothesis is accepted, we assume that this interaction is zero and proceed to the second step; otherwise, we stop. In the second step, for each of the second-order interactions we test the null hypothesis that the interaction is zero against the alternative hypothesis that it is non-zero. If this null hypothesis is accepted, we assume that this interaction is zero. The resulting confidence interval, with nominal coverage probability 0.95, has a minimum coverage probability that is, to a good approximation, 0.464. This shows that this confidence interval is completely inadequate.
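The effect of preliminary testing on coverage can be illustrated with a much-simplified naive Monte Carlo sketch: a 2 × 2 design with two replicates, a single preliminary test of the interaction, and no conditioning-based variance reduction (the paper's efficient method and its 2^3 example are more elaborate). The function name, the contrast θ = b0 + bA + bB + bAB, and all numeric settings here are illustrative assumptions, not the paper's.

```python
import random

T4, T5 = 2.776, 2.571   # two-sided 5% t critical values, 4 and 5 error df

def coverage_after_pretest(b_ab, reps=3000, seed=0):
    """Naive Monte Carlo estimate of the coverage of a nominal 95% CI for
    theta = b0 + bA + bB + bAB (the cell mean at A = B = +1) in a 2x2
    factorial with two replicates, when the interaction is pretested and
    dropped if not significant.  True b0 = bA = bB = 0, sigma = 1."""
    rng = random.Random(seed)
    base = [(1, a, b, a * b) for a in (-1, 1) for b in (-1, 1)]
    X = base + base                               # 8 runs, X'X = 8 I
    theta = b_ab                                  # since b0 = bA = bB = 0
    covered = 0
    for _ in range(reps):
        y = [b_ab * row[3] + rng.gauss(0, 1) for row in X]
        bhat = [sum(row[j] * yi for row, yi in zip(X, y)) / 8
                for j in range(4)]
        sse_full = sum(yi ** 2 for yi in y) - 8 * sum(b * b for b in bhat)
        s_full = (sse_full / 4) ** 0.5            # full model: 4 error df
        if abs(bhat[3]) > T4 * s_full * (1 / 8) ** 0.5:
            est, half = sum(bhat), T4 * s_full * (4 / 8) ** 0.5
        else:                                     # drop AB, pool its SS
            s_red = ((sse_full + 8 * bhat[3] ** 2) / 5) ** 0.5
            est = bhat[0] + bhat[1] + bhat[2]
            half = T5 * s_red * (3 / 8) ** 0.5
        covered += (est - half <= theta <= est + half)
    return covered / reps
```

Estimating this coverage over a grid of interaction values and taking the minimum is the naive version of the quantity that the paper's conditioning method estimates efficiently.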

8.
In this paper we consider testing that an economic time series follows a martingale difference process. The martingale difference hypothesis has typically been tested using information contained in the second moments of a process, that is, using test statistics based on the sample autocovariances or periodograms. Tests based on these statistics are inconsistent since they cannot detect nonlinear alternatives. In this paper we consider tests that detect linear and nonlinear alternatives. Given that the asymptotic distributions of the considered tests statistics depend on the data generating process, we propose to implement the tests using a modified wild bootstrap procedure. The paper theoretically justifies the proposed tests and examines their finite sample behavior by means of Monte Carlo experiments.

9.
The tutorial is concerned with two types of test for the general lack of fit of a linear regression model, as found in the Minitab software package, and which do not require replicated observations. They aim to identify non-specified curvature and interaction in predictors, by comparing fits over the predictor region divided into two parts. Minitab's regression subcommand XLOF which gives the tests is only briefly documented in the manual and, unlike virtually all other statistical procedures in the software, it is not standard and cannot be readily found in textbooks. The two types of test are described here; they concern the predictors one-at-a-time and the predictors all-at-once. An example of their use is given. A suite of macros is available which reproduces the results of the XLOF tests in much more detail than is given by the XLOF subcommand.

10.
A procedure is studied that uses rank-transformed data to perform exact and estimated exact tests, as an alternative to the commonly used F-ratio test procedure. First, a common parametric test statistic is computed from rank-transformed data, where two methods of ranking are studied: ranks taken on the original observations, and ranks taken after aligning the observations. Significance is then determined using either the exact permutation distribution of the statistic or an estimate of this distribution based on a random sample of all possible permutations. Simulation studies compare the performance of this method with the normal-theory parametric F-test and the traditional rank transform procedure. Power and nominal type I error rates are compared under conditions where normal-theory assumptions are satisfied, as well as where these assumptions are violated. The method is studied for a two-factor factorial arrangement of treatments in a completely randomized design and for a split-unit experiment. The power of the tests rivals the parametric F-test when normal-theory assumptions are satisfied, and is usually superior when they are not. Based on the evidence of this study, the exact aligned rank procedure appears to be the overall best choice for performing tests in a general factorial experiment.
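The estimated exact test described above can be sketched for the simplest case of two groups (the paper treats factorial and split-unit designs; the two-group reduction, the mean-rank-difference statistic, and all names here are illustrative):

```python
import random

def rank_transform(values):
    """Midranks of the observations (ties receive the average rank)."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        mid = (i + j) / 2 + 1          # average of ranks i+1 .. j+1
        for k in range(i, j + 1):
            ranks[order[k]] = mid
        i = j + 1
    return ranks

def estimated_exact_rank_test(x, y, n_perm=2000, seed=0):
    """Rank-transform the pooled data, use the absolute difference in
    mean ranks as the test statistic, and estimate its exact permutation
    p-value from a random sample of permutations."""
    rng = random.Random(seed)
    ranks = rank_transform(list(x) + list(y))
    n = len(x)
    def stat(r):
        return abs(sum(r[:n]) / n - sum(r[n:]) / (len(r) - n))
    observed = stat(ranks)
    hits = 0
    for _ in range(n_perm):
        rng.shuffle(ranks)
        if stat(ranks) >= observed:
            hits += 1
    return (hits + 1) / (n_perm + 1)   # estimated exact p-value
```

The aligned variant would subtract estimated nuisance effects (e.g. block or main effects) before ranking; in this two-group reduction there are none to remove.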

11.
Combinatorial extension and composition methods have been extensively used in the construction of block designs. One of the composition methods, namely the direct product or Kronecker product method, was utilized by Chakravarti [1956] to produce certain types of fractional factorial designs. The present paper shows how the direct sum operation can be utilized in obtaining, from initial fractional factorial designs for two separate symmetrical factorials, a fractional factorial design for the corresponding asymmetrical factorial. Specifically, we provide some results which are useful in the construction of non-singular fractional factorial designs via the direct sum composition method. In addition, a modified direct sum method is discussed and the consequences of imposing orthogonality are explored.

12.
Econometric Reviews, 2013, 32(4), 351-377

In this paper we consider testing that an economic time series follows a martingale difference process. The martingale difference hypothesis has typically been tested using information contained in the second moments of a process, that is, using test statistics based on the sample autocovariances or periodograms. Tests based on these statistics are inconsistent since they cannot detect nonlinear alternatives. In this paper we consider tests that detect linear and nonlinear alternatives. Given that the asymptotic distributions of the considered tests statistics depend on the data generating process, we propose to implement the tests using a modified wild bootstrap procedure. The paper theoretically justifies the proposed tests and examines their finite sample behavior by means of Monte Carlo experiments.

13.
The traditional design procedure for selecting the parameters of EWMA charts is based on the average run length (ARL). It is shown that for some types of EWMA charts, such a procedure may lead to a high probability of a false out-of-control signal. An alternative procedure based on both the ARL and the standard deviation of run length (SRL) is recommended. It is shown that, with the new procedure, the EWMA chart using its exact variance can detect moderate and large shifts of the process mean faster.
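The "exact variance" referred to above is the time-dependent variance of the EWMA statistic, which is smaller than its asymptotic value for early observations, so the early control limits are narrower. A minimal sketch using the standard EWMA formulas (the parameter defaults and function names are illustrative):

```python
def ewma_limits(mu0, sigma, lam=0.2, L=3.0, n_points=30):
    """Control limits for the EWMA statistic
    z_t = lam * x_t + (1 - lam) * z_{t-1}, z_0 = mu0.
    The exact variance of z_t is
        sigma^2 * lam / (2 - lam) * (1 - (1 - lam)**(2 t)),
    which widens towards the asymptotic value sigma^2 * lam / (2 - lam)."""
    limits = []
    for t in range(1, n_points + 1):
        var_t = sigma ** 2 * lam / (2 - lam) * (1 - (1 - lam) ** (2 * t))
        half = L * var_t ** 0.5
        limits.append((mu0 - half, mu0 + half))
    return limits

def ewma_run_length(xs, mu0, sigma, lam=0.2, L=3.0):
    """1-based index of the first EWMA point outside the exact-variance
    limits, or None if the chart never signals on this series."""
    z = mu0
    for t, x in enumerate(xs, start=1):
        z = lam * x + (1 - lam) * z
        var_t = sigma ** 2 * lam / (2 - lam) * (1 - (1 - lam) ** (2 * t))
        if abs(z - mu0) > L * var_t ** 0.5:
            return t
    return None
```

Because the early limits are narrower than the constant asymptotic limits, a shift present from the start of monitoring is signalled sooner, which is the effect the abstract exploits.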

14.
Many hypothesis tests are univariate and cannot handle multiple hypotheses without an auxiliary procedure such as the Bonferroni-Holm procedure. At the same time, there is a clear need for better multiple-testing methods: simple existing approaches such as the Bonferroni correction and the Bonferroni-Holm procedure suffer from very small local significance levels for detecting statistical effects, and they ignore logical and statistical dependencies among the test statistics, whose detection is in general NP-hard. We therefore present a multiple hypothesis test for i.i.d. random variables based on conditional differences in means, which copes with multiple hypotheses and avoids these drawbacks, with negligible computation time.

15.
A CONTINUOUSLY ADAPTIVE RANK TEST FOR SHIFT IN LOCATION
This paper considers the problem of testing for shift in location when the symmetry of the underlying distribution is in doubt. Various adaptive test procedures have been suggested in the literature; they are mainly based on a preliminary test or measure of asymmetry, and then choose between the sign and Wilcoxon tests accordingly. However, as this paper demonstrates, there are some disadvantages with such procedures. This paper develops a test that does not suffer from such disadvantages. The proposed test is based on modifying the Wilcoxon scores according to the evidence of asymmetry of the distribution present in the data, as indicated by the magnitude of the P-value from a preliminary test of symmetry. A simulation study investigates and compares the performance of the proposed test and other known adaptive procedures in terms of power and attainment of the nominal size. The performance of a suitable bootstrap procedure for the situation under consideration is also studied. In most cases under consideration, the proposed test is found to be superior to the other tests.

16.
A versatile procedure is described comprising an application of statistical techniques to the analysis of the large, multi‐dimensional data arrays produced by electroencephalographic (EEG) measurements of human brain function. Previous analytical methods have been unable to identify objectively the precise times at which statistically significant experimental effects occur, owing to the large number of variables (electrodes) and small number of subjects, or have been restricted to two‐treatment experimental designs. Many time‐points are sampled in each experimental trial, making adjustment for multiple comparisons mandatory. Given the typically large number of comparisons and the clear dependence structure among time‐points, simple Bonferroni‐type adjustments are far too conservative. A three‐step approach is proposed: (i) summing univariate statistics across variables; (ii) using permutation tests for treatment effects at each time‐point; and (iii) adjusting for multiple comparisons using permutation distributions to control family‐wise error across the whole set of time‐points. Our approach provides an exact test of the individual hypotheses while asymptotically controlling family‐wise error in the strong sense, and can provide tests of interaction and main effects in factorial designs. An application to two experimental data sets from EEG studies is described, but the approach has application to the analysis of spatio‐temporal multivariate data gathered in many other contexts.

17.
One of the main advantages of factorial experiments is the information that they can offer on interactions. When there are many factors to be studied, some or all of this information is often sacrificed to keep the size of an experiment economically feasible. Two strategies for group screening are presented for a large number of factors, over two stages of experimentation, with particular emphasis on the detection of interactions. One approach estimates only main effects at the first stage (classical group screening), whereas the other new method (interaction group screening) estimates both main effects and key two-factor interactions at the first stage. Three criteria are used to guide the choice of screening technique, and also the size of the groups of factors for study in the first-stage experiment. The criteria seek to minimize the expected total number of observations in the experiment, the probability that the size of the experiment exceeds a prespecified target and the proportion of active individual factorial effects which are not detected. To implement these criteria, results are derived on the relationship between the grouped and individual factorial effects, and the probability distributions of the numbers of grouped factors whose main effects or interactions are declared active at the first stage. Examples are used to illustrate the methodology, and some issues and open questions for the practical implementation of the results are discussed.

18.
For testing the equality of two survival functions, the weighted logrank test and the weighted Kaplan–Meier test are the two most widely used methods. Each of these tests has advantages and drawbacks against various alternatives, and the possible types of survival difference cannot be specified in advance. Hence, how to choose a single test, or combine a number of competitive tests, to detect differences between two survival functions without suffering a substantial loss in power is an important issue. Instead of directly using a particular test that performs well in some situations and poorly in others, we consider a class of tests indexed by a weighting parameter for testing the equality of two survival functions. A delete-1 jackknife method is implemented to select the weights that minimize the variance of the test. Numerical experiments are performed under various alternatives to illustrate the superiority of the proposed method. Finally, the proposed testing procedure is applied to two real-data examples.

19.
The two-way, two-level crossed factorial design is commonly used by practitioners at the exploratory phase of industrial experiments. The F-test in the usual linear model for analysis of variance (ANOVA) is a key instrument for assessing the impact of each factor, and of their interactions, on the response variable. However, if assumptions such as normality and homoscedasticity of the errors are violated, the conventional wisdom is to resort to nonparametric tests. Nonparametric methods, rank-based as well as permutation, have been the subject of recent investigations aimed at making them effective for testing the hypotheses of interest and at improving their performance in small samples. In this study, we assess the performance of several nonparametric methods and, more importantly, compare their powers. Specifically, we examine three permutation methods (Constrained Synchronized Permutations, Unconstrained Synchronized Permutations and the Wald-Type Permutation Test), a rank-based method (the Aligned Rank Transform) and a parametric method (the ANOVA-Type Test). In the simulations, we generate datasets with different configurations of error distribution, variance, factor effects and number of replicates. The objective is to provide practical advice and guidance to practitioners regarding the sensitivity of the tests in the various configurations, the conditions under which some tests cannot be used, the tradeoff between power and type I error, and the bias in the power for one main factor due to the presence of an effect of the other factor. A dataset from an industrial engineering experiment on thermoformed packaging production is used to illustrate the application of the various methods of analysis, taking into account the power of the test suggested by the objective of the experiment.

20.
One of the major unresolved problems in the area of nonparametric statistics is the need for satisfactory rank-based test procedures for non-additive models in the two-way layout, especially when there is only one observation on each combination of the levels of the experimental factors. In this paper we consider an arbitrary non-additive model for the two-way layout with n levels of each factor. We utilize both alignment and ranking of the data together with basic properties of Latin squares to develop rank tests for interaction (non-additivity). Our technique involves first aligning within one of the main effects, ranking within the other main effects (columns and rows) and then adding the resulting ranks within "interaction bands" corresponding to orthogonal partitions of the interaction for the model, as denoted by the letters of an n × n Latin square. A Friedman-type statistic is then computed on the resulting sums. This is repeated for each of (n-1) mutually orthogonal Latin squares (thus accounting for all the interaction degrees of freedom). The resulting (n-1) Friedman-type statistics are finally combined to obtain an overall test statistic. The necessary null distribution tables for applying the proposed test for non-additivity are presented and we discuss the results of a Monte Carlo simulation study of the relative powers of this new procedure and other (parametric and nonparametric) procedures designed to detect interaction in a two-way layout with one observation per cell.
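The Friedman-type statistic used as the building block above can be sketched as follows. This is the standard no-ties form on a blocks-by-treatments table; the proposed procedure applies it to the rank sums within the interaction bands of each Latin square, which is not reproduced here.

```python
def friedman_statistic(blocks):
    """Friedman chi-square statistic for b blocks (rows) by k treatments
    (columns): rank within each block, sum the ranks per treatment R_j,
    then Q = 12 / (b k (k+1)) * sum_j (R_j - b (k+1) / 2)**2.
    Assumes no ties within a block (the no-ties form)."""
    b, k = len(blocks), len(blocks[0])
    col_rank_sums = [0.0] * k
    for row in blocks:
        order = sorted(range(k), key=lambda j: row[j])
        for rank, j in enumerate(order, start=1):
            col_rank_sums[j] += rank
    expected = b * (k + 1) / 2
    return 12.0 / (b * k * (k + 1)) * sum(
        (R - expected) ** 2 for R in col_rank_sums)
```

When every block orders the treatments identically, Q attains its maximum b(k − 1); when the column rank sums are all equal, Q = 0.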
