Similar Documents
20 similar documents found (search time: 93 ms)
1.
For animal carcinogenicity studies with multiple dose groups, a positive trend test and pairwise comparisons of the treated groups with control are generally performed using the Cochran-Armitage, Peto, or Poly-K tests. These tests are asymptotically normal. Exact versions of the Cochran-Armitage and Peto tests are available, based on permutation tests that assume fixed column and row totals. For the Poly-K test, the column totals depend on the mortality pattern of the animals and cannot be kept fixed over permutations of the animals. In this work a modification of the permutation test is suggested that yields an exact Poly-K test.
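The permutation logic behind such trend tests can be sketched in a few lines. This is a generic Monte Carlo illustration, not the authors' Poly-K modification: tumour indicators are shuffled over animals while dose-group sizes stay fixed, and the dose-weighted tumour count serves as a Cochran-Armitage-style trend statistic. All data and names here are invented for illustration.

```python
import random

def perm_trend_test(tumours, doses, n_perm=2000, seed=0):
    """Monte Carlo permutation trend test.  `tumours` holds one 0/1
    outcome per animal, `doses` the corresponding dose score.  The
    statistic is the dose-weighted tumour count (the Cochran-Armitage
    numerator); large values indicate a positive trend."""
    rng = random.Random(seed)
    obs = sum(t * d for t, d in zip(tumours, doses))
    shuffled = tumours[:]
    hits = 0
    for _ in range(n_perm):
        rng.shuffle(shuffled)               # permute outcomes over animals
        if sum(t * d for t, d in zip(shuffled, doses)) >= obs:
            hits += 1
    return (hits + 1) / (n_perm + 1)        # add-one avoids zero p-values

# 4 dose groups of 5 animals; tumour incidence rises with dose
doses = [0] * 5 + [1] * 5 + [2] * 5 + [3] * 5
tumours = [0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 1, 1, 0, 1, 1, 1, 1, 1, 0]
p = perm_trend_test(tumours, doses)
```

With fixed row and column totals, enumerating all allocations instead of sampling them would give the fully exact version; the Poly-K case requires the modified scheme described in the abstract because the column totals are not fixed.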

2.
Suppose p + 1 experimental groups correspond to increasing dose levels of a treatment and all groups are subject to right censoring. In such instances, permutation tests for trend can be performed based on statistics derived from the weighted log‐rank class. This article uses saddlepoint methods to determine the mid‐P‐values for such permutation tests for any test statistic in the weighted log‐rank class. Permutation simulations are replaced by analytical saddlepoint computations which provide extremely accurate mid‐P‐values that are exact for most practical purposes and almost always more accurate than normal approximations. The speed of mid‐P‐value computation allows for the inversion of such tests to determine confidence intervals for the percentage increase in mean (or median) survival time per unit increase in dosage. The Canadian Journal of Statistics 37: 5‐16; 2009 © 2009 Statistical Society of Canada

3.
Permutation tests are often used to analyze data because they require few distributional assumptions, needing only a random and independent sample selection. We initially considered a permutation test to assess the treatment effect on computed tomography (CT) lesion volume in the National Institute of Neurological Disorders and Stroke (NINDS) t-PA Stroke Trial, which has highly skewed data. However, we encountered difficulties in summarizing the permutation test results on the lesion volume. In this paper, we discuss some aspects of permutation tests and illustrate our findings. This experience with the NINDS t-PA Stroke Trial data emphasizes that permutation tests are useful for clinical trials and can be used to validate the assumptions of an observed test statistic. The permutation test places fewer restrictions on the underlying distribution but is not always distribution-free or an exact test, especially for ill-behaved data. Quasi-likelihood estimation using the generalized estimating equation (GEE) approach on transformed data seems to be a good choice for analyzing CT lesion data, based on both its corresponding permutation test and its clinical interpretation.
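As a minimal sketch of the kind of two-group permutation test discussed above (not the NINDS analysis itself), the following shuffles group labels and compares a difference in means on skewed, invented data:

```python
import random

def perm_mean_test(x, y, n_perm=5000, seed=1):
    """Two-sided permutation test for a difference in means.  Group
    labels are shuffled; no distributional form is assumed, only that
    observations are exchangeable under the null."""
    rng = random.Random(seed)
    obs = abs(sum(x) / len(x) - sum(y) / len(y))
    pooled = x + y
    n = len(x)
    hits = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)
        diff = abs(sum(pooled[:n]) / n - sum(pooled[n:]) / len(y))
        if diff >= obs:
            hits += 1
    return (hits + 1) / (n_perm + 1)

# skewed, lognormal-like samples with a genuine shift (invented)
treated = [0.2, 0.4, 0.5, 0.9, 1.1, 1.3, 1.8, 2.5]
control = [2.1, 2.8, 3.3, 4.0, 5.2, 6.5, 9.8, 12.0]
p = perm_mean_test(treated, control)
```

As the abstract cautions, the test is only as meaningful as the chosen statistic: with ill-behaved data the permutation p-value for a mean difference need not behave like a distribution-free test of the whole distributions.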

4.
Generalized discriminant analysis based on distances
This paper describes a method of generalized discriminant analysis based on a dissimilarity matrix to test for differences in a priori groups of multivariate observations. Use of classical multidimensional scaling produces a low‐dimensional representation of the data for which Euclidean distances approximate the original dissimilarities. The resulting scores are then analysed using discriminant analysis, giving tests based on the canonical correlations. The asymptotic distributions of these statistics under permutations of the observations are shown to be invariant to changes in the distributions of the original variables, unlike the distributions of the multi‐response permutation test statistics which have been considered by other workers for testing differences among groups. This canonical method is applied to multivariate fish assemblage data, with Monte Carlo simulations to make power comparisons and to compare theoretical results and empirical distributions. The paper proposes classification based on distances. Error rates are estimated using cross‐validation.

5.
Multivariate hypothesis testing in studies of vegetation is likely to be hindered by unrealistic assumptions when based on conventional statistical methods. This can be overcome by randomization tests. In this paper, the accuracy and power of a MANOVA randomization test are evaluated for one-factor and two-factor (with interaction) designs, with simulated data from three distributions. The randomization test is based on the partitioning of sums of squares computed from Euclidean distances. In one-factor designs, sample size and variance inequality were evaluated. The results showed a high level of accuracy. The power curve was highest with the normal distribution, lowest with the uniform, intermediate with the lognormal, and was sensitive to variance inequality. In two-factor designs, three methods of permutation and two statistics were compared. The results showed that permutation of the residuals with the pseudo-F statistic is accurate and can give good power for testing the interaction, and that restricted permutation is suitable for testing the main factors.

6.
The essence of the generalised multivariate Behrens–Fisher problem (BFP) is how to test the null hypothesis of equality of mean vectors for two or more populations when their dispersion matrices differ. Solutions to the BFP usually assume variables are multivariate normal and do not handle high‐dimensional data. In ecology, species' count data are often high‐dimensional, non‐normal and heterogeneous. Also, interest lies in analysing compositional dissimilarities among whole communities in non‐Euclidean (semi‐metric or non‐metric) multivariate space. Hence, dissimilarity‐based tests by permutation (e.g., PERMANOVA, ANOSIM) are used to detect differences among groups of multivariate samples. Such tests are not robust, however, to heterogeneity of dispersions in the space of the chosen dissimilarity measure, most conspicuously for unbalanced designs. Here, we propose a modification to the PERMANOVA test statistic, coupled with either permutation or bootstrap resampling methods, as a solution to the BFP for dissimilarity‐based tests. Empirical simulations demonstrate that the type I error remains close to nominal significance levels under classical scenarios known to cause problems for the un‐modified test. Furthermore, the permutation approach is found to be more powerful than the (more conservative) bootstrap for detecting changes in community structure for real ecological datasets. The utility of the approach is shown through analysis of 809 species of benthic soft‐sediment invertebrates from 101 sites in five areas spanning 1960 km along the Norwegian continental shelf, based on the Jaccard dissimilarity measure.
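The unmodified one-way PERMANOVA statistic that the proposed test builds on can be sketched as follows, using the sums-of-squares decomposition of a distance matrix with Euclidean distances on invented 2-D data. The Behrens-Fisher-robust modification described in the abstract is not implemented here; this only shows the baseline pseudo-F and its permutation p-value.

```python
import itertools
import random

def pseudo_F(points, labels):
    """One-way PERMANOVA pseudo-F from pairwise squared Euclidean
    distances: SS_total and SS_within come directly from inter-point
    distances, SS_between by subtraction."""
    N = len(points)
    d2 = {}
    for i, j in itertools.combinations(range(N), 2):
        d2[(i, j)] = sum((a - b) ** 2 for a, b in zip(points[i], points[j]))
    ss_total = sum(d2.values()) / N
    groups = sorted(set(labels))
    ss_within = 0.0
    for g in groups:
        idx = [i for i, lab in enumerate(labels) if lab == g]
        ss_within += sum(d2[(i, j)]
                         for i, j in itertools.combinations(idx, 2)) / len(idx)
    ss_between = ss_total - ss_within
    a = len(groups)
    return (ss_between / (a - 1)) / (ss_within / (N - a))

def permanova(points, labels, n_perm=999, seed=2):
    """P-value by permuting group labels over sample units."""
    rng = random.Random(seed)
    obs = pseudo_F(points, labels)
    perm = list(labels)
    hits = 0
    for _ in range(n_perm):
        rng.shuffle(perm)
        if pseudo_F(points, perm) >= obs:
            hits += 1
    return obs, (hits + 1) / (n_perm + 1)

# two well-separated invented clusters of 5 points each
pts = [(0, 0), (0, 1), (1, 0), (1, 1), (0.5, 0.5),
       (5, 5), (5, 6), (6, 5), (6, 6), (5.5, 5.5)]
labs = ['a'] * 5 + ['b'] * 5
F, p = permanova(pts, labs)
```

Any dissimilarity matrix (e.g., Jaccard) can replace the Euclidean `d2` here; the decomposition itself is distance-agnostic.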

7.
Formal inference in randomized clinical trials is based on controlling the type I error rate associated with a single pre‐specified statistic. The deficiency of using just one method of analysis is that it depends on assumptions that may not be met. For robust inference, we propose pre‐specifying multiple test statistics and relying on the minimum p‐value for testing the null hypothesis of no treatment effect. The null hypothesis associated with the various test statistics is that the treatment groups are indistinguishable. The critical value for hypothesis testing comes from permutation distributions. Rejection of the null hypothesis when the smallest p‐value is less than the critical value controls the type I error rate at its designated value. Even if one of the candidate test statistics has low power, the adverse effect on the power of the minimum p‐value statistic is modest. Its use is illustrated with examples. We conclude that it is better to rely on the minimum p‐value than on a single statistic, particularly when that single statistic is the logrank test, because of the cost and complexity of many survival trials. Copyright © 2013 John Wiley & Sons, Ltd.
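The min-p idea might be sketched as follows, assuming two candidate statistics (difference in means and in medians) and one shared set of label permutations used both to compute each statistic's p-value and to calibrate their minimum. The data and the choice of statistics are invented for illustration.

```python
import random

def minp_test(x, y, stats, n_perm=1000, seed=3):
    """Min-p combined permutation test: each candidate statistic is
    converted to a permutation p-value, and the smallest p-value is
    itself referred to the same permutation distribution, so the
    overall type I error stays at its nominal level."""
    rng = random.Random(seed)
    pooled = x + y
    n = len(x)
    draws = [[s(x, y) for s in stats]]      # row 0 = observed data
    for _ in range(n_perm):
        rng.shuffle(pooled)
        draws.append([s(pooled[:n], pooled[n:]) for s in stats])
    B = len(draws)
    # per-statistic p-value of every row, computed within the same draws
    pvals = []
    for row in draws:
        pvals.append([sum(r[j] >= row[j] for r in draws) / B
                      for j in range(len(stats))])
    minp = [min(pv) for pv in pvals]
    return sum(m <= minp[0] for m in minp) / B   # calibrated min-p

def mean_diff(a, b):
    return abs(sum(a) / len(a) - sum(b) / len(b))

def median_diff(a, b):
    return abs(sorted(a)[len(a) // 2] - sorted(b)[len(b) // 2])

x = [1.0, 1.2, 1.5, 1.9, 2.3, 2.8]
y = [3.1, 3.5, 3.9, 4.4, 5.0, 6.2]
p = minp_test(x, y, [mean_diff, median_diff])
```

Because the minimum is recalibrated against the permutation distribution of minima, no Bonferroni-style penalty is needed for having pre-specified several statistics.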

8.
We investigate resampling methodologies for testing the null hypothesis that two samples of labelled landmark data in three dimensions come from populations with a common mean reflection shape or mean reflection size‐and‐shape. The investigation includes comparisons between (i) two different test statistics that are functions of the projection onto tangent space of the data, namely the James statistic and an empirical likelihood statistic; (ii) bootstrap and permutation procedures; and (iii) three methods for resampling under the null hypothesis, namely translating in tangent space, resampling using weights determined by empirical likelihood and using a novel method to transform the original sample entirely within reflection shape space. We present results of extensive numerical simulations, on which basis we recommend a bootstrap test procedure that we expect will work well in practice. We demonstrate the procedure using a data set of human faces, to test whether humans in different age groups have a common mean face shape.

9.
The Lagrange Multiplier (LM) test is one of the principal tools to detect ARCH and GARCH effects in financial data analysis. However, when the underlying data are non‐normal, which is often the case in practice, the asymptotic LM test, based on the χ2‐approximation of critical values, is known to perform poorly, particularly for small and moderate sample sizes. In this paper we propose to employ two re‐sampling techniques to find critical values of the LM test, namely permutation and bootstrap. We derive the properties of exactness and asymptotic correctness for the permutation and bootstrap LM tests, respectively. Our numerical studies indicate that the proposed re‐sampled algorithms significantly improve size and power of the LM test in both skewed and heavy‐tailed processes. We also illustrate our new approaches with an application to the analysis of the Euro/USD currency exchange rates and the German stock index. The Canadian Journal of Statistics 40: 405–426; 2012 © 2012 Statistical Society of Canada

10.
The Levene test is a widely used test for detecting differences in dispersion. The modified Levene transformation using sample medians is considered in this article. After Levene's transformation the data are not normally distributed; hence, nonparametric tests may be useful. As the Wilcoxon rank sum test applied to the transformed data cannot control the type I error rate for asymmetric distributions, a permutation test based on reallocations of the original observations rather than the absolute deviations was investigated. Levene's transformation is then only an intermediate step to compute the test statistic. Such a Levene test, however, cannot control the type I error rate when the Wilcoxon statistic is used; with the Fisher–Pitman permutation test it can be extremely conservative. The Fisher–Pitman test based on reallocations of the transformed data seems to be the only acceptable nonparametric test. Simulation results indicate that this test is on average more powerful than applying the t test after Levene's transformation, even when the t test is improved by the deletion of structural zeros.
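The procedure of reallocating the original observations and recomputing the Levene transformation inside each permutation might be sketched as follows. The data are invented, and the statistic is a plain difference in mean absolute deviations, which under a fixed allocation is permutationally equivalent to the Fisher-Pitman statistic on the transformed values.

```python
import random
import statistics

def levene_perm_test(x, y, n_perm=2000, seed=4):
    """Permutation test for a difference in dispersion: the original
    observations are reallocated between groups, and the modified
    Levene transformation (absolute deviations from each group's own
    median) is recomputed inside every permutation, serving only as an
    intermediate step for the statistic."""
    rng = random.Random(seed)

    def stat(a, b):
        da = [abs(v - statistics.median(a)) for v in a]
        db = [abs(v - statistics.median(b)) for v in b]
        return abs(sum(da) / len(da) - sum(db) / len(db))

    obs = stat(x, y)
    pooled = x + y
    n = len(x)
    hits = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)                 # reallocate raw observations
        if stat(pooled[:n], pooled[n:]) >= obs:
            hits += 1
    return (hits + 1) / (n_perm + 1)

# invented groups with equal centres but very different spread
narrow = [10.0, 10.1, 9.9, 10.2, 9.8, 10.0, 10.1, 9.9]
wide = [10.0, 13.0, 7.0, 15.0, 5.0, 12.0, 8.0, 16.0]
p = levene_perm_test(narrow, wide)
```

Permuting the raw observations, rather than the fixed absolute deviations, is what distinguishes this construction from simply running a two-sample test on pre-transformed data.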

11.
We begin by describing how to find the limits of confidence intervals by using a few permutation tests of significance. Next, we demonstrate how the adaptive permutation test, which maintains its level of significance, produces confidence intervals that maintain their coverage probabilities. By inverting adaptive tests, adaptive confidence intervals can be found for any single parameter in a multiple regression model. These adaptive confidence intervals are often narrower than the traditional confidence intervals when the error distributions are long‐tailed or skewed. We show how much reduction in width can be achieved for the slopes in several multiple regression models and for the interaction effect in a two‐way design. An R function that can compute these adaptive confidence intervals is described and instructions are provided for its use with real data.
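Inverting permutation tests into a confidence interval can be sketched for a simple two-sample location shift (not the adaptive weighting of the paper): the interval's limits are the shifts at which a fixed-seed permutation p-value crosses the significance level, located by bisection from a wide bracket. Data and bracket values are invented.

```python
import random

def perm_pvalue(x, y, delta, n_perm=800, seed=5):
    """Two-sided permutation p-value for the hypothesis that y is x
    shifted by delta; the shift is removed before permuting.  A fixed
    seed makes the p-value a deterministic function of delta."""
    rng = random.Random(seed)
    y0 = [v - delta for v in y]
    obs = abs(sum(x) / len(x) - sum(y0) / len(y0))
    pooled = x + y0
    n = len(x)
    hits = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)
        diff = abs(sum(pooled[:n]) / n - sum(pooled[n:]) / len(y0))
        if diff >= obs:
            hits += 1
    return (hits + 1) / (n_perm + 1)

def invert_ci(x, y, alpha=0.05, lo=-20.0, hi=20.0):
    """Confidence limits as the shifts where the permutation p-value
    crosses alpha, found by bisection toward the point estimate."""
    def edge(a, b):                         # p(a) < alpha <= p(b)
        for _ in range(40):
            m = (a + b) / 2
            if perm_pvalue(x, y, m) < alpha:
                a = m
            else:
                b = m
        return (a + b) / 2

    est = sum(y) / len(y) - sum(x) / len(x)
    return edge(lo, est), edge(hi, est)

x = [4.1, 5.0, 5.5, 6.2, 6.8, 7.4, 8.1, 9.0]
y = [9.3, 10.1, 10.8, 11.5, 12.2, 13.0, 13.9, 15.2]
low, high = invert_ci(x, y)
```

The interval contains exactly those shift values that the permutation test fails to reject at level alpha, which is the inversion principle the abstract builds on before adding adaptive weights.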

12.
The sup LM test for structural change is embedded into a permutation test framework for a simple location model. The resulting conditional permutation distribution is compared to the usual (unconditional) asymptotic distribution, showing that the power of the test can be clearly improved in small samples. Furthermore, the permutation test is embedded into a general framework that encompasses tools for binary and multivariate dependent variables as well as model-based permutation testing for structural change. It is also demonstrated that the methods can be employed not only for analyzing structural changes in time series data but also for recursive partitioning of cross-section data. The procedures suggested are illustrated using both artificial data and empirical applications (number of youth homicides, employment discrimination data, carbon flux in tropical forests, stock returns, and demand for economics journals).

13.
14.
Supersaturated designs (SSDs) are useful for examining many factors with a restricted number of experimental units. Many analysis methods have been proposed for data from SSDs, with some performing better than others when the data are normally distributed. It is possible that data sets violate the assumptions of the standard analysis methods used for SSDs, and to date the performance of these methods has not been evaluated using nonnormally distributed data sets. We conducted a simulation study with normally and nonnormally distributed data sets to compare the identification rates, power and coverage of the true models using a permutation test, the stepwise procedure and the smoothly clipped absolute deviation (SCAD) method. Results showed that at the significance level α=0.01 the identification rates of the true models were comparable across the three methods; at α=0.05, however, both the permutation test and the stepwise procedure had considerably lower identification rates than SCAD. For most cases, the three methods produced high power and coverage. The experimentwise error rates (EER) were close to the nominal level (11.36%) for the stepwise method, while they were somewhat higher for the permutation test. The EER for the SCAD method were extremely high (84–87%) for the normal and t-distributions, as well as for data with outliers.

15.
Our paper proposes a methodological strategy to select optimal sampling designs for phenotyping studies including a cocktail of drugs. A cocktail approach is of high interest to determine the simultaneous activity of enzymes responsible for drug metabolism and pharmacokinetics, therefore useful in anticipating drug–drug interactions and in personalized medicine. Phenotyping indexes, which are area under the concentration‐time curves, can be derived from a few samples using nonlinear mixed effect models and maximum a posteriori estimation. Because of clinical constraints in phenotyping studies, the number of samples that can be collected in individuals is limited and the sampling times must be as flexible as possible. Therefore to optimize joint design for several drugs (i.e., to determine a compromise between informative times that best characterize each drug's kinetics), we proposed to use a compound optimality criterion based on the expected population Fisher information matrix in nonlinear mixed effect models. This criterion allows weighting different models, which might be useful to take into account the importance accorded to each target in a phenotyping test. We also computed windows around the optimal times based on recursive random sampling and Monte‐Carlo simulation while maintaining a reasonable level of efficiency for parameter estimation. We illustrated this strategy for two drugs often included in phenotyping cocktails, midazolam (probe for CYP3A) and digoxin (P‐glycoprotein), based on the data of a previous study, and were able to find a sparse and flexible design. The obtained design was evaluated by clinical trial simulations and shown to be efficient for the estimation of population and individual parameters. Copyright © 2015 John Wiley & Sons, Ltd.

16.
This paper describes a permutation procedure to test for the equality of selected elements of a covariance or correlation matrix across groups. It involves either centring or standardising each variable within each group before randomly permuting observations between groups. Since the assumption of exchangeability of observations between groups does not strictly hold following such transformations, Monte Carlo simulations were used to compare expected and empirical rejection levels as a function of group size, the number of groups and distribution type (Normal, mixtures of Normals and Gamma with various values of the shape parameter). The Monte Carlo study showed that the estimated probability levels are close to those that would be obtained with an exact test except at very small sample sizes (5 or 10 observations per group). The test appears robust against non-normal data, different numbers of groups or variables per group and unequal sample sizes per group. Power was increased with increasing sample size, effect size and the number of elements in the matrix and power was decreased with increasingly unequal numbers of observations per group.
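The centring-then-permuting idea might be sketched for a single covariance element of two bivariate groups. The data are invented: one group has a strong positive covariance, the other a strong negative one. Centring each variable within its group first means mean differences cannot masquerade as covariance differences once observations are permuted between groups.

```python
import random

def cov_xy(data):
    """Sample covariance of the two coordinates of bivariate points."""
    n = len(data)
    mx = sum(p[0] for p in data) / n
    my = sum(p[1] for p in data) / n
    return sum((p[0] - mx) * (p[1] - my) for p in data) / (n - 1)

def cov_perm_test(g1, g2, n_perm=2000, seed=6):
    """Permutation test for equality of a covariance element across
    two groups: centre each variable within its group, then permute
    the centred observations between groups."""
    rng = random.Random(seed)

    def centre(g):
        n = len(g)
        mx = sum(p[0] for p in g) / n
        my = sum(p[1] for p in g) / n
        return [(p[0] - mx, p[1] - my) for p in g]

    pooled = centre(g1) + centre(g2)
    n = len(g1)
    obs = abs(cov_xy(pooled[:n]) - cov_xy(pooled[n:]))
    hits = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)
        if abs(cov_xy(pooled[:n]) - cov_xy(pooled[n:])) >= obs:
            hits += 1
    return (hits + 1) / (n_perm + 1)

g1 = [(0, 0.1), (1, 0.9), (2, 2.2), (3, 2.8),
      (4, 4.1), (5, 5.0), (6, 6.2), (7, 6.9)]
g2 = [(0, 7.1), (1, 6.0), (2, 5.2), (3, 3.9),
      (4, 3.1), (5, 2.0), (6, 0.8), (7, 0.2)]
p = cov_perm_test(g1, g2)
```

As the abstract notes, exchangeability only holds approximately after such transformations, which is exactly why the paper checks rejection levels by Monte Carlo.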

17.
In this article we present a new solution for testing effects in unreplicated two-level factorial designs. The proposed test statistic, when the error components are normally distributed, follows an F distribution, though our attention is on its nonparametric permutation version. The proposed procedure does not require any transformation of the data, such as residualization, and it is exact for each effect and distribution-free. Our main aim is to discuss a permutation solution conditional on the original vector of responses. We give two versions of the same nonparametric testing procedure in order to control both the individual error rate and the experiment-wise error rate. A power comparison with Loughin and Noble's test is provided in the case of an unreplicated 2^4 full factorial design.

18.
In survival analysis, treatment effects are commonly evaluated based on survival curves and hazard ratios as causal treatment effects. In observational studies, these estimates may be biased due to confounding factors. The inverse probability of treatment weighted (IPTW) method based on the propensity score is one of the approaches utilized to adjust for confounding factors between binary treatment groups. As a generalization of this methodology, we developed an exact formula for an IPTW log‐rank test based on the generalized propensity score for survival data. This makes it possible to compare the group differences of IPTW Kaplan–Meier estimators of survival curves using an IPTW log‐rank test for multi‐valued treatments. As causal treatment effects, the hazard ratio can be estimated using the IPTW approach. If the treatments correspond to ordered levels of a treatment, the proposed method can be easily extended to the analysis of treatment effect patterns with contrast statistics. In this paper, the proposed method is illustrated with data from the Kyushu Lipid Intervention Study (KLIS), which investigated the primary preventive effects of pravastatin on coronary heart disease (CHD). The results of the proposed method suggested that pravastatin treatment reduces the risk of CHD and that compliance to pravastatin treatment is important for the prevention of CHD. Copyright © 2009 John Wiley & Sons, Ltd.

19.
When comparing the central values of two independent groups, should a t-test be performed, or should the observations be transformed into their ranks and a Wilcoxon-Mann-Whitney test performed? This paper argues that neither should automatically be chosen. Instead, provided that software for conducting randomisation tests is available, the chief concern should be with obtaining data values that are a good reflection of scientific reality and appropriate to the objective of the research; if necessary, the data values should be transformed so that this is so. The subsequent use of a randomisation (permutation) test will mean that failure of the transformed data values to satisfy assumptions such as normality and equality of variances will not be of concern.

20.
This study compares the empirical type I error and power of different permutation techniques that can be used for partial correlation analysis involving three data vectors and for partial Mantel tests. The partial Mantel test is a form of first-order partial correlation analysis involving three distance matrices which is widely used in such fields as population genetics, ecology, anthropology, psychometry and sociology. The methods compared are the following: (1) permute the objects in one of the vectors (or matrices); (2) permute the residuals of a null model; (3) correlate residualized vector 1 (or matrix A) with residualized vector 2 (or matrix B), then permute one of the residualized vectors (or matrices); (4) permute the residuals of a full model. In the partial correlation study, the results were compared to those of the parametric t-test, which provides a reference under normality. Simulations were carried out to measure the type I error and power of these permutation methods, using normal and non-normal data, without and with an outlier. There were 10 000 simulations for each situation (100 000 when n = 5); 999 permutations were produced per test where permutations were used. The recommended testing procedures are the following: (a) In partial correlation analysis, most methods can be used most of the time. The parametric t-test should not be used with highly skewed data. Permutation of the raw data should be avoided only when highly skewed data are combined with outliers in the covariable. Methods implying permutation of residuals, which are known to have only asymptotically exact significance levels, should not be used when highly skewed data are combined with small sample size. (b) In partial Mantel tests, method 2 can always be used, except when highly skewed data are combined with small sample size. (c) With small sample sizes, one should carefully examine the data before partial correlation or partial Mantel analysis. For highly skewed data, permutation of the raw data has correct type I error in the absence of outliers; when highly skewed data are combined with outliers in the covariable vector or matrix, permutation of the raw data is still recommended. (d) Method 3 should never be used.
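Methods 1 and 2 from the comparison above might be sketched for partial correlation of three data vectors. The data are invented and constructed so that x and y share variation beyond the covariable z; the residualization uses simple linear regression.

```python
import math
import random

def residuals(v, z):
    """Residuals of v after simple linear regression on covariable z."""
    n = len(v)
    mz = sum(z) / n
    mv = sum(v) / n
    b = (sum((zi - mz) * (vi - mv) for zi, vi in zip(z, v))
         / sum((zi - mz) ** 2 for zi in z))
    return [vi - mv - b * (zi - mz) for zi, vi in zip(z, v)]

def corr(a, b):
    """Pearson correlation of two equal-length vectors."""
    n = len(a)
    ma = sum(a) / n
    mb = sum(b) / n
    sa = math.sqrt(sum((v - ma) ** 2 for v in a))
    sb = math.sqrt(sum((v - mb) ** 2 for v in b))
    return sum((u - ma) * (v - mb) for u, v in zip(a, b)) / (sa * sb)

def partial_corr_perm(x, y, z, method="raw", n_perm=2000, seed=7):
    """Method 1 ('raw'): permute the objects in x and re-residualize.
    Method 2 ('residuals'): permute the null-model residuals of x on z.
    The statistic is the correlation of the x- and y-residuals, i.e.
    the partial correlation of x and y given z."""
    rng = random.Random(seed)
    ry = residuals(y, z)
    obs = abs(corr(residuals(x, z), ry))
    base = x[:] if method == "raw" else residuals(x, z)
    hits = 0
    for _ in range(n_perm):
        rng.shuffle(base)
        rx = residuals(base, z) if method == "raw" else base
        if abs(corr(rx, ry)) >= obs:
            hits += 1
    return (hits + 1) / (n_perm + 1)

z = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
e = [0.5, -0.3, 0.8, -0.6, 0.4, -0.2, 0.7, -0.5, 0.3, -0.4]
x = [zi + ei for zi, ei in zip(z, e)]          # shares e with y
y = [zi + 2 * ei for zi, ei in zip(z, e)]
p_raw = partial_corr_perm(x, y, z, "raw")
p_res = partial_corr_perm(x, y, z, "residuals")
```

Extending this sketch to partial Mantel tests replaces the vectors with unfolded distance matrices, with permutations applied to the objects (rows and columns simultaneously) rather than to individual entries.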
