Similar Articles (20 results)
1.
In recent years, permutation testing methods have grown both in the number of applications and in their capacity to solve complex multivariate problems. When available, permutation tests are essentially exact and nonparametric in a conditional context, where conditioning is on the pooled observed data set, which is often a set of sufficient statistics under the null hypothesis. By contrast, the reference null distribution of most parametric tests is known only asymptotically. Thus, for most sample sizes of practical interest, the possible lack of efficiency of permutation solutions may be compensated by the lack of approximation of parametric counterparts. There are many complex multivariate problems, quite common in the empirical sciences, which are difficult to solve outside the conditional framework, and in particular outside the method of nonparametric combination (NPC) of dependent permutation tests. In this paper we review this method and its main properties, along with some new results in experimental and observational situations (robust testing, multi-sided alternatives and testing for survival functions).
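As a rough illustration of the NPC idea only (not the authors' algorithm), the Python sketch below combines two dependent partial permutation tests with Fisher's combining function, recomputing both partial statistics on the same synchronized permutations; all names and settings are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def npc_fisher(x, y, n_perm=5000):
    """Nonparametric combination (Fisher) of two dependent partial tests:
    a difference-in-means test and a difference-in-variances test,
    recomputed on the SAME permutation of the pooled data each time."""
    data = np.concatenate([x, y])
    n1 = len(x)

    def partial_stats(z):
        a, b = z[:n1], z[n1:]
        return np.array([abs(a.mean() - b.mean()),
                         abs(a.var(ddof=1) - b.var(ddof=1))])

    obs = partial_stats(data)
    perm = np.empty((n_perm, 2))
    for b in range(n_perm):                     # synchronized permutations
        perm[b] = partial_stats(rng.permutation(data))

    # Partial p-values for the observed and every permuted data set,
    # all evaluated against the same permutation distribution.
    all_stats = np.vstack([obs, perm])          # row 0 = observed
    pvals = (perm[None, ...] >= all_stats[:, None, :]).mean(axis=1)
    pvals = np.maximum(pvals, 1.0 / n_perm)     # avoid log(0)

    # Fisher combining function; larger = more evidence against H0.
    t_comb = -2 * np.log(pvals).sum(axis=1)
    return (t_comb[1:] >= t_comb[0]).mean(), pvals[0]

x = rng.normal(0.0, 1.0, 30)
y = rng.normal(0.7, 2.0, 30)
p_global, p_partial = npc_fisher(x, y)
print("partial p-values:", p_partial, "combined p:", p_global)
```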

2.
This study compares the empirical type I error and power of different permutation techniques that can be used for partial correlation analysis involving three data vectors and for partial Mantel tests. The partial Mantel test is a form of first-order partial correlation analysis involving three distance matrices which is widely used in such fields as population genetics, ecology, anthropology, psychometry and sociology. The methods compared are the following: (1) permute the objects in one of the vectors (or matrices); (2) permute the residuals of a null model; (3) correlate residualized vector 1 (or matrix A) with residualized vector 2 (or matrix B), permuting one of the residualized vectors (or matrices); (4) permute the residuals of a full model. In the partial correlation study, the results were compared to those of the parametric t-test, which provides a reference under normality. Simulations were carried out to measure the type I error and power of these permutation methods, using normal and non-normal data, without and with an outlier. There were 10 000 simulations for each situation (100 000 when n = 5); 999 permutations were produced per test where permutations were used. The recommended testing procedures are the following: (a) In partial correlation analysis, most methods can be used most of the time. The parametric t-test should not be used with highly skewed data. Permutation of the raw data should be avoided only when highly skewed data are combined with outliers in the covariable. Methods implying permutation of residuals, which are known to have only asymptotically exact significance levels, should not be used when highly skewed data are combined with small sample sizes. (b) In partial Mantel tests, method 2 can always be used, except when highly skewed data are combined with small sample sizes. (c) With small sample sizes, one should carefully examine the data before partial correlation or partial Mantel analysis. For highly skewed data, permutation of the raw data has correct type I error in the absence of outliers. When highly skewed data are combined with outliers in the covariable vector or matrix, it is still recommended to use permutation of the raw data. (d) Method 3 should never be used.
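A minimal sketch of method 2 (permute the residuals of a null model, Freedman-Lane style) for the three-vector partial correlation case; the helper names and the simple linear detrending are illustrative assumptions, not the code used in the study.

```python
import numpy as np

rng = np.random.default_rng(1)

def partial_corr(a, b, c):
    """Partial correlation of a and b controlling for c (residual method)."""
    ra = a - np.polyval(np.polyfit(c, a, 1), c)
    rb = b - np.polyval(np.polyfit(c, b, 1), c)
    return np.corrcoef(ra, rb)[0, 1]

def perm_test_residuals_null(y1, y2, y3, n_perm=999):
    """Method 2: regress y1 on the covariable y3 only (the null model),
    permute those residuals, add back the fitted values, and recompute
    the partial correlation for each permuted pseudo-dataset."""
    fit = np.polyval(np.polyfit(y3, y1, 1), y3)
    res = y1 - fit
    obs = partial_corr(y1, y2, y3)
    count = 1                                  # include the observed value
    for _ in range(n_perm):
        y1_star = fit + rng.permutation(res)
        if abs(partial_corr(y1_star, y2, y3)) >= abs(obs):
            count += 1
    return obs, count / (n_perm + 1)

n = 50
y3 = rng.normal(size=n)
y1 = y3 + rng.normal(size=n)
y2 = y3 + rng.normal(size=n)                   # related to y1 only via y3
print(perm_test_residuals_null(y1, y2, y3))
```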

3.
In nonparametric statistics, a hypothesis testing problem based on the ranks of the data gives rise to two separate permutation sets corresponding to the null and to the alternative hypothesis, respectively. A modification of Critchlow's unified approach to hypothesis testing is proposed. By defining the distance between permutation sets to be the average distance between pairs of permutations, one from each set, various test statistics are derived for the multi-sample location problem and the two-way layout. The asymptotic distributions of the test statistics are computed under both the null and alternative hypotheses. Some comparisons are made on the basis of the asymptotic relative efficiency.
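To make the set-distance definition concrete, this toy sketch computes the distance between two permutation sets as the average Kendall distance over all pairs, one permutation from each set; it illustrates the definition only, not the derived test statistics or their asymptotics.

```python
import numpy as np
from itertools import combinations, permutations

def kendall_distance(p, q):
    """Number of discordant pairs between two permutations (tuples)."""
    n = len(p)
    return sum((p[i] - p[j]) * (q[i] - q[j]) < 0
               for i, j in combinations(range(n), 2))

def set_distance(S1, S2):
    """Average distance between pairs of permutations, one from each
    set -- the modified set distance described in the abstract."""
    return np.mean([kendall_distance(p, q) for p in S1 for q in S2])

# Toy example: 'null set' = all permutations of 3 items,
# 'alternative set' = the two orderings that place item 0 first.
null_set = list(permutations(range(3)))
alt_set = [p for p in null_set if p[0] == 0]
print(set_distance(null_set, alt_set))
```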

4.
Many exploratory studies such as microarray experiments require the simultaneous comparison of hundreds or thousands of genes, and in many such experiments most genes are not expected to be differentially expressed. Under such a setting, a procedure designed to control the false discovery rate (FDR) aims to identify as many potentially differentially expressed genes as possible. The usual FDR controlling procedure is constructed based on the total number of hypotheses. However, it can become very conservative when some of the alternative hypotheses are true. The power of a controlling procedure can be improved if the number of true null hypotheses (m0) rather than the total number of hypotheses is incorporated in the procedure [Y. Benjamini and Y. Hochberg, On the adaptive control of the false discovery rate in multiple testing with independent statistics, J. Educ. Behav. Statist. 25 (2000), pp. 60–83]. Nevertheless, m0 is unknown and has to be estimated. The objective of this article is to evaluate some existing estimators of m0 and discuss the feasibility of incorporating these estimators into FDR controlling procedures under various experimental settings. The results of the simulations can help the investigator choose an appropriate procedure to meet the requirements of a study.
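As a hedged sketch of the adaptive idea, the code below plugs a simple Storey-type estimate of m0 (one of several estimators of the kind the article evaluates; the choice lambda = 0.5 is illustrative) into the Benjamini-Hochberg step-up procedure.

```python
import numpy as np

def storey_m0(pvals, lam=0.5):
    """Storey-type estimator: m0 is roughly #{p > lam} / (1 - lam)."""
    m = len(pvals)
    return min(m, np.sum(pvals > lam) / (1.0 - lam))

def adaptive_bh(pvals, q=0.05):
    """Step-up BH with m replaced by the estimated number of true
    nulls, which is less conservative when many alternatives are true."""
    m = len(pvals)
    m0 = max(storey_m0(pvals), 1.0)
    order = np.argsort(pvals)
    thresh = q * (np.arange(1, m + 1) / m0)    # p_(i) <= i*q/m0_hat
    below = np.nonzero(pvals[order] <= thresh)[0]
    k = below[-1] + 1 if below.size else 0
    rejected = np.zeros(m, dtype=bool)
    rejected[order[:k]] = True
    return rejected

rng = np.random.default_rng(2)
p = np.concatenate([rng.uniform(size=900),             # true nulls
                    rng.beta(0.1, 1.0, size=100)])     # signals
print("rejections:", adaptive_bh(p).sum())
```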

5.
The authors propose a method for comparing two samples of curves. The notion of similarity between two curves is the basis of three statistics they suggest for testing the null hypothesis of no difference between the two groups. They exploit standard tools from functional data analysis to preprocess the observed curves and use the permutation distribution under the null hypothesis to obtain p-values for their tests. They explore the operating characteristics of these tests through simulations and as an application, compare the ganglioside distribution in brain tissue between old and young rats.
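A minimal sketch of the general recipe (preprocess, compute a similarity-based statistic, permute group labels): the integrated squared distance between group mean curves stands in for the authors' three statistics, and the functional-data smoothing step is omitted for brevity.

```python
import numpy as np

rng = np.random.default_rng(3)

def curve_perm_test(curves, labels, n_perm=2000):
    """curves: (n_subjects, n_gridpoints) array of (pre-smoothed) curves.
    Statistic: integrated squared difference between group mean curves."""
    def stat(lab):
        return np.sum((curves[lab == 0].mean(axis=0)
                       - curves[lab == 1].mean(axis=0)) ** 2)
    obs = stat(labels)
    perm = np.array([stat(rng.permutation(labels)) for _ in range(n_perm)])
    return obs, (np.sum(perm >= obs) + 1) / (n_perm + 1)

# Simulated example: 'young' curves shifted relative to 'old' ones.
t = np.linspace(0, 1, 100)
old = np.sin(2 * np.pi * t) + rng.normal(0, .3, (15, 100))
young = np.sin(2 * np.pi * t) + 0.4 + rng.normal(0, .3, (15, 100))
curves = np.vstack([old, young])
labels = np.repeat([0, 1], 15)
print(curve_perm_test(curves, labels))
```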

6.
Identifying differentially expressed genes is a basic objective in microarray experiments. Many statistical methods for detecting differentially expressed genes in multiple-slide experiments have been proposed. However, with limited experimental resources, sometimes only a single cDNA array or two oligonucleotide arrays can be made, or only insufficiently replicated arrays can be conducted. Many current statistical models cannot be used because of the non-availability of replicated data, and simply using fold changes is unreliable and inefficient [Chen et al. 1997. Ratio-based decisions and the quantitative analysis of cDNA microarray images. J. Biomed. Optics 2, 364–374; Newton et al. 2001. On differential variability of expression ratios: improving statistical inference about gene expression changes from microarray data. J. Comput. Biol. 8, 37–52; Pan et al. 2002. How many replicates of arrays are required to detect gene expression changes in microarray experiments? A mixture model approach. Genome Biol. 3, research0022.1-0022.10]. We propose a new method. If the log-transformed ratios for the expressed genes as well as the unexpressed genes have equal variance, we use a Hadamard matrix to construct a t-test from single-array data. Essentially, we test whether each doubtful gene is significantly differentially expressed relative to the unexpressed genes. We form new random variables corresponding to the rows of a Hadamard matrix using algebraic sums of gene expressions. A one-sample t-test is constructed and the p-value is calculated for each doubtful gene based on these random variables. By using any method for multiple testing, adjusted p-values can be obtained from the original p-values and the significance of doubtful genes can be determined. When the variance of the expressed genes differs from that of the unexpressed genes, we construct a z-statistic based on the result of applying the Hadamard matrix and derive a confidence interval for retaining the null hypothesis; using this interval, we determine the differentially expressed genes. The method is also useful for multiple microarrays, especially when sufficient replicated data are not available for a traditional t-test. We apply our methodology to the ApoAI data, with promising results: they not only confirm previously known differentially expressed genes but also indicate additional genes as differentially expressed.
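One plausible reading of the single-array construction, offered as a sketch rather than the authors' exact algorithm: the contrast rows of a Hadamard matrix turn n presumed-unexpressed log-ratios into n - 1 zero-mean pseudo-replicates of the noise, against which each doubtful gene's log-ratio can be t-tested. All names and the prediction-interval form of the t statistic are assumptions.

```python
import numpy as np
from scipy.linalg import hadamard
from scipy.stats import t as t_dist

def hadamard_single_array_test(doubtful, unexpressed):
    """doubtful: log-ratio of one gene under scrutiny.
    unexpressed: log-ratios of n reference (unexpressed) genes,
    n a power of two. Under equal noise variance, the contrast rows
    of H yield uncorrelated, zero-mean pseudo-replicates of the noise."""
    n = len(unexpressed)
    H = hadamard(n)                        # orthogonal rows of +-1 entries
    z = H[1:] @ unexpressed / np.sqrt(n)   # skip the all-ones row
    m = len(z)
    # One-sample-style t: is the doubtful log-ratio compatible with
    # the pseudo-replicate noise distribution?
    se = z.std(ddof=1) * np.sqrt(1 + 1 / m)
    t_stat = (doubtful - z.mean()) / se
    p = 2 * t_dist.sf(abs(t_stat), df=m - 1)
    return t_stat, p

rng = np.random.default_rng(4)
noise = rng.normal(0, 0.2, 16)                  # unexpressed genes
print(hadamard_single_array_test(1.1, noise))   # likely flagged
print(hadamard_single_array_test(0.1, noise))   # likely not
```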

7.
Formal inference in randomized clinical trials is based on controlling the type I error rate associated with a single pre-specified statistic. The deficiency of using just one method of analysis is that it depends on assumptions that may not be met. For robust inference, we propose pre-specifying multiple test statistics and relying on the minimum p-value for testing the null hypothesis of no treatment effect. The null hypothesis associated with the various test statistics is that the treatment groups are indistinguishable. The critical value for hypothesis testing comes from permutation distributions. Rejecting the null hypothesis when the smallest p-value is less than the critical value controls the type I error rate at its designated value. Even if one of the candidate test statistics has low power, the adverse effect on the power of the minimum p-value statistic is modest. Its use is illustrated with examples. We conclude that it is better to rely on the minimum p-value rather than on a single statistic, particularly when that single statistic is the logrank test, given the cost and complexity of many survival trials. Copyright © 2013 John Wiley & Sons, Ltd.
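A hedged sketch of the min-p construction with two illustrative candidate statistics (a mean-difference and a rank-based statistic stand in for a trial's pre-specified battery); the joint permutation distribution calibrates the minimum p-value so the type I error rate is controlled.

```python
import numpy as np
from scipy.stats import rankdata

rng = np.random.default_rng(5)

def min_p_test(x, y, n_perm=2000):
    data, n1 = np.concatenate([x, y]), len(x)

    def stats(z):
        r = rankdata(z)
        return np.array([abs(z[:n1].mean() - z[n1:].mean()),
                         abs(r[:n1].mean() - r[n1:].mean())])

    obs = stats(data)
    perm = np.array([stats(rng.permutation(data)) for _ in range(n_perm)])
    all_s = np.vstack([obs, perm])
    # p-value of every row (observed + permuted) for each statistic ...
    pvals = (perm[None, ...] >= all_s[:, None, :]).mean(axis=1)
    minp = pvals.min(axis=1)
    # ... then calibrate min-p against its own permutation distribution.
    return minp[0], (minp[1:] <= minp[0]).mean()

x, y = rng.normal(0, 1, 25), rng.normal(0.8, 1, 25)
print(min_p_test(x, y))    # (observed min-p, adjusted p-value)
```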

8.
The problem of approximating an interval null (imprecise) hypothesis test by a point null (precise) hypothesis test under a Bayesian framework is considered. In the literature, some of the methods for solving this problem have used the Bayes factor for testing a point null and justified it as an approximation to the interval null. However, many authors recommend evaluating tests through the posterior odds, a Bayesian measure of evidence against the null hypothesis. It is of interest, then, to determine whether similar results hold when using the posterior odds as the primary measure of evidence. For the prior distributions under which the approximation holds with respect to the Bayes factor, it is shown that the posterior odds for testing the point null hypothesis do not approximate the posterior odds for testing the interval null hypothesis. In fact, in order to obtain convergence of the posterior odds, a number of restrictive conditions need to be placed on the prior structure. Furthermore, under a non-symmetric prior setup, neither the Bayes factor nor the posterior odds for testing the imprecise hypothesis converges to the Bayes factor or posterior odds, respectively, for testing the precise hypothesis. To rectify this dilemma, it is shown that constraints need to be placed on the priors. In both situations, the class of priors constructed to ensure convergence of the posterior odds is not practically useful, thus questioning, from a Bayesian perspective, the appropriateness of point null testing in a problem better represented by an interval null. The theory developed is also applied to an epidemiological data set from White et al. (Can. Vet. J. 30 (1989) 147–149) in order to illustrate and study priors for which the point null hypothesis test approximates the interval null hypothesis test. AMS Classification: Primary 62F15; Secondary 62A15
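A small numeric sketch of the phenomenon under a normal model with a conjugate N(0, tau^2) prior (all settings illustrative): as the half-width eps shrinks, the interval-null Bayes factor approaches the point-null Bayes factor, while the prior odds P(|theta| <= eps)/P(|theta| > eps) vanish and the two posterior odds diverge.

```python
import numpy as np
from scipy.stats import norm

def point_vs_interval(xbar, n, sigma=1.0, tau=1.0, eps=0.05, pi0=0.5):
    s2 = sigma**2 / n
    # Point null: BF01 = N(xbar; 0, s2) / N(xbar; 0, s2 + tau^2).
    bf_point = (norm.pdf(xbar, 0, np.sqrt(s2))
                / norm.pdf(xbar, 0, np.sqrt(s2 + tau**2)))
    odds_point = (pi0 / (1 - pi0)) * bf_point
    # Interval null under the single continuous prior theta ~ N(0, tau^2).
    v = 1 / (n / sigma**2 + 1 / tau**2)          # posterior variance
    m = v * n * xbar / sigma**2                  # posterior mean
    post_in = norm.cdf(eps, m, np.sqrt(v)) - norm.cdf(-eps, m, np.sqrt(v))
    prior_in = norm.cdf(eps, 0, tau) - norm.cdf(-eps, 0, tau)
    odds_interval = post_in / (1 - post_in)
    bf_interval = odds_interval / (prior_in / (1 - prior_in))
    return bf_point, bf_interval, odds_point, odds_interval

for eps in (0.2, 0.05, 0.01):    # BFs converge; posterior odds do not
    print(eps, point_vs_interval(xbar=0.3, n=20, eps=eps))
```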

9.
Gene expression data analysis provides scientists with a wealth of information about gene relationships, particularly the identification of significantly differentially expressed genes. However, there is no consensus on the analysis technique that will solve the inherent multiplicity problem (thousands of genes to be tested) and yield a reasonable and statistically justifiable number of differentially expressed genes. We propose the Multiplicity-Adjusted Order Statistics Analysis (MAOSA) to identify differentially expressed genes while adjusting for the multiple testing. The multiplicity problem will be eased by performing a Bonferroni correction on a small number of effects, since the majority of genes are not differentially expressed.

10.
A sequential method for approximating a general permutation test (SAPT) is proposed and evaluated. Permutations are randomly generated from some set G, and a sequential probability ratio test (SPRT) is used to determine whether an observed test statistic falls sufficiently far in the tail of the permutation distribution to warrant rejecting some hypothesis. An estimate of, and bounds on, the power function of the SPRT are used to find bounds on the effective significance level of the SAPT. Guidelines are developed for choosing parameters in order to obtain a desired significance level and minimize the number of permutations needed to reach a decision. A theoretical estimate of the average number of permutations under the null hypothesis is given, along with simulation results demonstrating the power and average number of permutations for various alternatives. The sequential approximation retains the generality of the permutation test while avoiding the computational complexities that arise in attempting to compute the full permutation distribution exactly.
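A minimal sketch of the SAPT idea: each random permutation yields a Bernoulli indicator of whether the permuted statistic is at least as extreme as the observed one, and Wald's SPRT of p = p0 versus p = p1 (straddling the nominal level) decides early; the boundary parameters here are illustrative, not the paper's tuned guidelines.

```python
import numpy as np

rng = np.random.default_rng(6)

def sapt(x, y, p0=0.04, p1=0.06, a_err=0.01, b_err=0.01,
         max_perm=100_000):
    """SPRT on the stream of exceedance indicators; p0 < alpha < p1
    straddle the nominal level alpha = 0.05.
    Returns (decision, number of permutations used)."""
    data, n1 = np.concatenate([x, y]), len(x)
    obs = abs(x.mean() - y.mean())
    up, lo = np.log((1 - b_err) / a_err), np.log(b_err / (1 - a_err))
    step1 = np.log(p1 / p0)                 # LLR increment if exceeded
    step0 = np.log((1 - p1) / (1 - p0))     # LLR increment otherwise
    llr = 0.0
    for k in range(1, max_perm + 1):
        z = rng.permutation(data)
        exceed = abs(z[:n1].mean() - z[n1:].mean()) >= obs
        llr += step1 if exceed else step0
        if llr >= up:                        # many exceedances: p is large
            return "accept H0 (p > alpha)", k
        if llr <= lo:                        # few exceedances: p is small
            return "reject H0 (p < alpha)", k
    return "undecided", max_perm

x, y = rng.normal(0, 1, 30), rng.normal(0.9, 1, 30)
print(sapt(x, y))
```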

11.
We study various bootstrap and permutation methods for matched pairs whose distributions can have different shapes even under the null hypothesis of no treatment effect. Although the data may not be exchangeable under the null, we investigate different permutation approaches as valid procedures for finite sample sizes. It is shown that permutation or bootstrap schemes that neglect the dependency structure in the data are asymptotically valid. Simulation studies show that these new tests improve on the power of the t-test under non-normality.
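A minimal sketch of one such scheme for matched pairs: randomly flip the sign of each within-pair difference and studentize the statistic. This illustrates the setting under stated assumptions; the authors study several schemes, and this is not presented as their specific proposal.

```python
import numpy as np

rng = np.random.default_rng(7)

def signflip_paired_test(x, y, n_perm=5000):
    """x, y: paired measurements. Studentized mean of the differences,
    recomputed under random sign flips of the pairs."""
    d = x - y
    n = len(d)

    def t_stat(v):
        return v.mean() / (v.std(ddof=1) / np.sqrt(n))

    obs = t_stat(d)
    signs = rng.choice([-1.0, 1.0], size=(n_perm, n))
    perm = np.array([t_stat(s * d) for s in signs])
    return obs, (np.sum(np.abs(perm) >= abs(obs)) + 1) / (n_perm + 1)

# Pairs whose two components have different marginal shapes.
n = 40
base = rng.gamma(2.0, 1.0, n)                # shared pair effect
x = base + rng.normal(0, 1, n)               # normal-noise component
y = base + rng.uniform(-1, 1, n)             # flat-noise component
print(signflip_paired_test(x, y))
```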

12.
Exact k-sample permutation tests for binary data for three commonly encountered hypothesis tests are presented. The tests are derived under both the population and randomization models. The generating function for the number of cases in the null distribution is obtained, and the asymptotic distributions of the test statistics are derived. Actual significance levels are computed for the asymptotic test versions. Random sampling of the null distribution is suggested as a superior alternative to the asymptotics, and an efficient computer technique for implementing the random sampling is described. Finally, some numerical examples are presented and sample size guidelines are given for computer implementation of the exact tests.
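A hedged sketch of the recommended random sampling of the null distribution for one common hypothesis (homogeneity of k binomial proportions): the pooled binary responses are randomly permuted across groups, which samples the exact conditional null distribution of a chi-square statistic.

```python
import numpy as np

rng = np.random.default_rng(8)

def k_sample_binary_perm(groups, n_perm=9999):
    """groups: list of 0/1 arrays. Statistic: Pearson chi-square for
    homogeneity; null sampled by permuting the pooled binary responses."""
    pooled = np.concatenate(groups)
    sizes = np.array([len(g) for g in groups])
    cuts = np.cumsum(sizes)[:-1]

    def chi2(z):
        props = np.array([g.mean() for g in np.split(z, cuts)])
        p_hat = z.mean()                     # invariant under permutation
        return np.sum(sizes * (props - p_hat) ** 2) / (p_hat * (1 - p_hat))

    obs = chi2(pooled)
    perm = np.array([chi2(rng.permutation(pooled)) for _ in range(n_perm)])
    return obs, (np.sum(perm >= obs) + 1) / (n_perm + 1)

groups = [rng.binomial(1, p, 25) for p in (0.2, 0.3, 0.55)]
print(k_sample_binary_perm(groups))
```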

13.
It is well known that testing zero variance components is a non-standard problem, since the null hypothesis places the parameter on the boundary of the parameter space. Because of this boundary problem, the usual asymptotic chi-square distribution of the likelihood ratio and score statistics under the null does not necessarily hold. To circumvent this difficulty in balanced linear growth curve models, we introduce an appropriate test statistic and suggest a permutation procedure to approximate its finite-sample distribution. The proposed test removes the need for any distributional assumptions on the random effects and errors and can easily be applied to testing multiple variance components. Our simulation studies show that the proposed test has a type I error rate close to the nominal level. The power of the proposed test is also compared with that of the likelihood ratio test in the simulations. An application to data from an orthodontic study is presented and discussed.

14.
Let X1, …, Xn be random variables symmetric about θ from a common unknown distribution Fθ(x) = F(x − θ). To test the null hypothesis H0: θ = 0 against the alternative H1: θ > 0, permutation tests can be used, at the cost of computational difficulties. This paper investigates alternative tests that are computationally simpler, notably some bootstrap tests, which are compared with permutation tests. Of these, the symmetrical bootstrap-t test competes very favourably with the permutation test in terms of Bahadur asymptotic efficiency, making it a very attractive alternative.
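A rough sketch of a symmetrized bootstrap-t test of H0: θ = 0 (one standard reading of the symmetrical bootstrap test; names illustrative): resample from the sample symmetrized about 0, so the bootstrap world satisfies the null, and compare the observed t statistic with its bootstrap distribution.

```python
import numpy as np

rng = np.random.default_rng(9)

def symmetric_bootstrap_t(x, n_boot=9999):
    """Test H0: theta = 0 vs H1: theta > 0 for data symmetric about theta.
    Bootstrap from the symmetrized sample {+x_i, -x_i}, which is
    symmetric about 0 and hence satisfies the null hypothesis."""
    n = len(x)

    def t_stat(v):
        return v.mean() / (v.std(ddof=1) / np.sqrt(len(v)))

    obs = t_stat(x)
    sym = np.concatenate([x, -x])            # symmetrized empirical dist.
    boot = np.array([t_stat(rng.choice(sym, size=n, replace=True))
                     for _ in range(n_boot)])
    return obs, (np.sum(boot >= obs) + 1) / (n_boot + 1)

x = rng.normal(0.4, 1.0, 30)                 # true theta = 0.4
print(symmetric_bootstrap_t(x))
```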

15.
Abstract. We investigate resampling methodologies for testing the null hypothesis that two samples of labelled landmark data in three dimensions come from populations with a common mean reflection shape or mean reflection size-and-shape. The investigation includes comparisons between (i) two different test statistics that are functions of the projection of the data onto tangent space, namely the James statistic and an empirical likelihood statistic; (ii) bootstrap and permutation procedures; and (iii) three methods for resampling under the null hypothesis, namely translating in tangent space, resampling using weights determined by empirical likelihood, and using a novel method to transform the original sample entirely within reflection shape space. We present results of extensive numerical simulations, on the basis of which we recommend a bootstrap test procedure that we expect will work well in practice. We demonstrate the procedure using a data set of human faces, testing whether humans in different age groups have a common mean face shape.

16.
The likelihood-ratio test (LRT) is considered as a goodness-of-fit test for the null hypothesis that several distribution functions are uniformly stochastically ordered. Under the null hypothesis, H1: F1 ≽ F2 ≽ ··· ≽ FN, the asymptotic distribution of the LRT statistic is a convolution of several chi-bar-square distributions, each of which depends upon the location parameter. The least-favourable parameter configuration for the LRT is not unique; it can be of two different types, depending on the number of distributions, the number of intervals and the significance level α. The testing method is illustrated with a data set of survival times of five groups of male fruit flies.

17.

A basic graphical approach for checking normality is the Q-Q plot, which compares sample quantiles against population quantiles. In univariate analysis, the probability plot correlation coefficient test for normality has been studied extensively. We consider testing multivariate normality by using the correlation coefficient of the Q-Q plot. When multivariate normality holds, the sample squared distances should follow a chi-square distribution for large samples, and the plot should resemble a straight line. A correlation coefficient test can be constructed from the pairs of points in the probability plot. When the correlation coefficient test does not reject the null hypothesis, the sample data may come from a multivariate normal distribution or from some other distribution. We therefore use the following two steps to test multivariate normality. First, we check multivariate normality by using the probability plot correlation coefficient test. If the test does not reject the null hypothesis, we then test the symmetry of the distribution and determine whether multivariate normality holds. This test procedure is called the combination test. The size and power of this test are studied, and it is found that the combination test is, in general, more powerful than other tests for multivariate normality.
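A sketch of the first step only (the probability plot correlation coefficient for the chi-square Q-Q plot), with the critical value obtained by Monte Carlo under multivariate normality; the follow-up symmetry test of the combination procedure is not shown.

```python
import numpy as np
from scipy.stats import chi2

rng = np.random.default_rng(10)

def qq_corr(X):
    """Correlation of ordered squared Mahalanobis distances with
    chi-square(p) quantiles; close to 1 under multivariate normality."""
    n, p = X.shape
    Xc = X - X.mean(axis=0)
    S_inv = np.linalg.inv(np.cov(X, rowvar=False))
    d2 = np.sort(np.einsum('ij,jk,ik->i', Xc, S_inv, Xc))
    q = chi2.ppf((np.arange(1, n + 1) - 0.5) / n, df=p)
    return np.corrcoef(d2, q)[0, 1]

def mvn_qq_test(X, n_mc=2000, alpha=0.05):
    """Monte Carlo critical value; the statistic is affine-invariant,
    so simulating standard normal samples suffices."""
    n, p = X.shape
    obs = qq_corr(X)
    null = np.array([qq_corr(rng.standard_normal((n, p)))
                     for _ in range(n_mc)])
    crit = np.quantile(null, alpha)          # reject for small correlation
    return obs, crit, obs < crit

X = rng.standard_normal((100, 3)) ** 3       # clearly non-normal
print(mvn_qq_test(X))
```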

18.
It is generally assumed that, under some regularity conditions, the likelihood ratio statistic for testing the null hypothesis that data arise from a homoscedastic normal mixture distribution versus the alternative that data arise from a heteroscedastic normal mixture distribution has an asymptotic χ2 reference distribution with degrees of freedom equal to the difference in the number of parameters estimated under the alternative and null models. Simulations show that, when the restrictions suggested by Hathaway (Ann. Stat. 13:795–800, 1985) are imposed on the component variances to ensure that the likelihood is bounded under the alternative, the χ2 reference distribution gives a reasonable approximation for the likelihood ratio test only when the sample size is 2000 or more and the mixture components are well separated. For small and medium sample sizes, parametric bootstrap tests appear to work well for determining whether data arise from a normal mixture with equal variances or a normal mixture with unequal variances.
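A hedged sketch of the parametric bootstrap comparison using scikit-learn's GaussianMixture: covariance_type='tied' plays the homoscedastic role and 'full' the heteroscedastic one, with reg_covar as a crude stand-in for Hathaway-type variance restrictions; all settings are illustrative.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(11)

def fit_ll(X, cov_type):
    gm = GaussianMixture(n_components=2, covariance_type=cov_type,
                         reg_covar=1e-4, n_init=5, random_state=0).fit(X)
    return gm, gm.score(X) * len(X)          # total log-likelihood

def bootstrap_lrt(X, n_boot=200):
    """LRT of equal vs unequal component variances, calibrated by
    simulating from the fitted equal-variance (null) mixture."""
    gm0, ll0 = fit_ll(X, "tied")             # shared variance = null
    _, ll1 = fit_ll(X, "full")               # free variances = alternative
    lrt_obs = 2 * (ll1 - ll0)
    lrt_boot = np.empty(n_boot)
    for b in range(n_boot):
        Xb, _ = gm0.sample(len(X))           # parametric bootstrap sample
        _, l0 = fit_ll(Xb, "tied")
        _, l1 = fit_ll(Xb, "full")
        lrt_boot[b] = 2 * (l1 - l0)
    return lrt_obs, (np.sum(lrt_boot >= lrt_obs) + 1) / (n_boot + 1)

X = np.concatenate([rng.normal(0, 1, 150),
                    rng.normal(3, 2, 150)]).reshape(-1, 1)
print(bootstrap_lrt(X))
```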

19.
When thousands of tests are performed simultaneously to detect differentially expressed genes in microarray analysis, the number of type I errors can be immense if a multiplicity adjustment is not made. However, due to the large scale, traditional adjustment methods require very stringent significance levels for individual tests, which yields low power for detecting alterations. In this work, we describe how two omnibus tests can be used in conjunction with a gene filtration process to circumvent the difficulties due to the large scale of testing. These two omnibus tests, the D-test and the modified likelihood ratio test (MLRT), can be used to investigate whether a collection of p-values has arisen from the Uniform(0,1) distribution or whether the Uniform(0,1) distribution contaminated by another Beta distribution is more appropriate. In the former case, attention can be directed to a smaller part of the genome; in the latter event, parameter estimates for the contamination model provide a frame of reference for multiple comparisons. Unlike the likelihood ratio test (LRT), both the D-test and the MLRT enjoy simple limiting distributions under the null hypothesis of no contamination, so critical values can be obtained from standard tables. Simulation studies demonstrate that the D-test and MLRT are superior to the AIC, BIC, and Kolmogorov-Smirnov test. A case study illustrates omnibus testing and filtration.
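A hedged sketch of the contamination model the abstract describes, fit by maximum likelihood: p-values from pi0*Uniform(0,1) + (1 - pi0)*Beta(a, 1). The resulting likelihood ratio against the pure-uniform model is the generic LRT, not the D-test or MLRT themselves, whose simple limiting distributions are the paper's contribution.

```python
import numpy as np
from scipy.optimize import minimize

def neg_loglik(params, p):
    pi0, a = params
    # Mixture density: Uniform(0,1) has pdf 1; Beta(a,1) pdf = a*p^(a-1).
    dens = pi0 + (1 - pi0) * a * p ** (a - 1)
    return -np.sum(np.log(dens))

def fit_contamination(p):
    """MLE of the uniform + Beta(a,1) contamination model; a < 1 gives
    a density enriched near zero, as expected for true signals."""
    p = np.clip(p, 1e-12, 1.0)               # guard against p == 0
    res = minimize(neg_loglik, x0=[0.8, 0.5], args=(p,),
                   bounds=[(1e-4, 1 - 1e-4), (1e-4, 1 - 1e-4)],
                   method="L-BFGS-B")
    pi0_hat, a_hat = res.x
    # Generic LRT vs the pure-uniform model (log-likelihood 0 under U(0,1));
    # note the null lies on the boundary, so the usual chi-square fails.
    lrt = 2 * (-res.fun)
    return pi0_hat, a_hat, lrt

rng = np.random.default_rng(12)
p = np.concatenate([rng.uniform(size=1800),
                    rng.beta(0.2, 1.0, size=200)])   # contaminated block
print(fit_contamination(p))
```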

20.
In this paper, we study the multi-class differential gene expression detection for microarray data. We propose a likelihood-based approach to estimating an empirical null distribution to incorporate gene interactions and provide a more accurate false-positive control than the commonly used permutation or theoretical null distribution-based approach. We propose to rank important genes by p-values or local false discovery rate based on the estimated empirical null distribution. Through simulations and application to lung transplant microarray data, we illustrate the competitive performance of the proposed method.
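A rough sketch of the empirical-null idea using simple central matching (median and MAD of the z-values estimate the null mean and scale); the paper's likelihood-based estimator, which accounts for gene interactions, is more refined, so this is illustration only.

```python
import numpy as np
from scipy.stats import norm

def empirical_null_pvalues(z):
    """Estimate the null N(delta, sigma^2) from the central mass of the
    z-values (most genes are null), then compute two-sided p-values
    against that empirical null instead of the theoretical N(0, 1)."""
    delta = np.median(z)
    sigma = 1.4826 * np.median(np.abs(z - delta))  # MAD -> sd under normality
    return 2 * norm.sf(np.abs(z - delta) / sigma), (delta, sigma)

rng = np.random.default_rng(13)
# Correlated nulls can make the realized null wider than N(0, 1).
z = np.concatenate([1.3 * rng.standard_normal(1900) + 0.2,
                    rng.normal(4, 1, 100)])        # 100 signal genes
p, (d, s) = empirical_null_pvalues(z)
print(f"estimated null: N({d:.2f}, {s:.2f}^2); hits:", np.sum(p < 1e-4))
```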
