Similar Documents
20 similar documents found
1.
Research involving a clinical intervention is normally aimed at testing the treatment effects on a dependent variable, which is assumed to be a relevant indicator of health or quality-of-life status. In much clinical research, large-n trials are impractical because the availability of individuals within well-defined categories is limited in this application field. This makes single-case experiments increasingly important. Their goal is to detect a difference in the effects of the treatments considered in the study. In this setting, valid inference generally cannot be made using the parametric statistical procedures that are typically used for the analysis of clinical trials and other large-n designs. Hence, nonparametric tools can be a valid alternative for analyzing this kind of data. We propose a permutation solution to assess treatment effects in single-case experiments within alternation designs. An extension to the case of more than two treatments is also presented. A simulation study shows that the approach is both reliable under the null hypothesis and powerful under the alternative, and that it improves on the performance of a considered competitor. Finally, we present the results of a real-case application.
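The sketch below illustrates the basic idea of a permutation test for a two-treatment alternation design: the treatment labels attached to the session outcomes are reshuffled to build the null distribution of a difference-in-means statistic. It is a minimal illustration, not the authors' procedure; in particular, freely permuting the labels (rather than restricting to admissible alternation sequences) and the choice of statistic are simplifying assumptions.

```python
import numpy as np

def alternation_permutation_test(y, labels, n_perm=10000, seed=0):
    """Permutation test for a single-case, two-treatment alternation design.

    y      : outcome measured at each session
    labels : 0/1 treatment label of each session
    Returns an approximate two-sided p-value for the difference in treatment means.
    """
    rng = np.random.default_rng(seed)
    y, labels = np.asarray(y, float), np.asarray(labels)
    observed = abs(y[labels == 1].mean() - y[labels == 0].mean())

    count = 0
    for _ in range(n_perm):
        perm = rng.permutation(labels)                      # reassign treatment labels
        count += abs(y[perm == 1].mean() - y[perm == 0].mean()) >= observed
    return (count + 1) / (n_perm + 1)                       # add-one Monte Carlo correction

# illustrative alternating sequence over 12 sessions
y = [4.1, 5.0, 3.9, 5.4, 4.3, 5.1, 4.0, 5.6, 4.2, 5.2, 4.4, 5.5]
print(alternation_permutation_test(y, labels=[0, 1] * 6))
```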

2.
Sunset Salvo     
The Wilcoxon–Mann–Whitney test enjoys great popularity among scientists comparing two groups of observations, especially when measurements made on a continuous scale are non-normally distributed. Triggered by different results for the procedure from two statistics programs, we compared the outcomes from 11 PC-based statistics packages. The p values delivered ranged from significant to nonsignificant at the 5% level, depending on whether a large-sample approximation or an exact permutation form of the test was used and, in the former case, on whether a correction for continuity and a correction for ties were applied. Some packages also produced pseudo-exact p values, based on the null distribution under the assumption of no ties. A further crucial point is that the variant of the algorithm a package uses for the computation is rarely indicated in the output or documented in the Help facility and the manuals. We conclude that the only accurate form of the Wilcoxon–Mann–Whitney procedure is one in which the exact permutation null distribution is compiled for the actual data.
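As a concrete illustration of that recommendation, the sketch below compiles the exact permutation null distribution of the rank-sum statistic for the data at hand, with ties handled by midranks. It enumerates every split of the pooled sample, so it is feasible only for small groups; the midrank convention and the two-sided definition of the p-value are assumptions of this sketch.

```python
from itertools import combinations
import numpy as np

def exact_wmw_pvalue(x, y):
    """Exact permutation p-value for the Wilcoxon-Mann-Whitney rank-sum statistic.

    Ties are handled with midranks, and the null distribution is compiled by
    enumerating every split of the pooled data, so this is feasible only for
    small samples."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    pooled = np.concatenate([x, y])
    n, N = len(x), len(pooled)

    # midranks of the pooled sample
    order = np.argsort(pooled, kind="mergesort")
    sorted_vals = pooled[order]
    ranks = np.empty(N)
    i = 0
    while i < N:
        j = i
        while j + 1 < N and sorted_vals[j + 1] == sorted_vals[i]:
            j += 1
        ranks[order[i:j + 1]] = (i + j) / 2 + 1        # average rank for tied values
        i = j + 1

    observed = ranks[:n].sum()                          # rank sum of the first sample
    null = np.array([ranks[list(idx)].sum() for idx in combinations(range(N), n)])
    center = null.mean()                                # null mean of the rank sum
    return np.mean(np.abs(null - center) >= abs(observed - center))

print(exact_wmw_pvalue([1.1, 2.3, 2.3, 4.0], [2.3, 5.1, 6.2]))
```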

3.
We discuss findings regarding the permutation distributions of treatment effect estimators in the proportional hazards model. For a fixed sample size n, we prove that all configurations of uncensored and untied event times yield the same permutation distribution of the treatment effect estimator in the proportional hazards model; in other words, this distribution does not depend on the actual event times. We show several uniqueness properties under different conditions. These properties are useful for small-sample permutation tests and also helpful in large-sample cases.

4.
Two analysis-of-means-type randomization tests for testing the equality of I variances in unbalanced designs are presented. Randomization techniques for testing statistical hypotheses can be used when parametric tests are inappropriate. Suppose that I independent samples have been collected. Randomization tests are based on shuffles, or rearrangements, of the combined sample. Putting each of the I samples ‘in a bowl’ forms the combined sample. Drawing samples ‘from the bowl’ forms a shuffle. Shuffles can be made with replacement (bootstrap shuffling) or without replacement (permutation shuffling). The tests presented offer two advantages: they are robust to non-normality, and they allow the user to present the results graphically via a decision chart similar to a Shewhart control chart. A Monte Carlo study verifies that the permutation version of the tests exhibits excellent power when compared with other robust tests. The Monte Carlo study also identifies circumstances under which the popular Levene's test fails.
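A minimal sketch of a permutation-shuffling test for the equality of I variances is given below. The statistic used here (the largest absolute deviation of a group log-variance from the mean log-variance, in the spirit of an analysis-of-means decision chart) is an illustrative choice and not necessarily the one used in the article.

```python
import numpy as np

def permutation_variance_test(groups, n_perm=5000, seed=0):
    """Permutation-shuffling test for equality of I variances.

    The statistic is the largest absolute deviation of a group log-variance from
    the mean log-variance, in the spirit of an analysis-of-means decision chart."""
    rng = np.random.default_rng(seed)
    groups = [np.asarray(g, float) for g in groups]
    sizes = [len(g) for g in groups]
    pooled = np.concatenate(groups)

    def statistic(sample_list):
        logs = np.array([np.log(g.var(ddof=1)) for g in sample_list])
        return np.max(np.abs(logs - logs.mean()))

    observed = statistic(groups)
    count = 0
    for _ in range(n_perm):
        shuffled = rng.permutation(pooled)           # draw "from the bowl" without replacement
        pieces = np.split(shuffled, np.cumsum(sizes)[:-1])
        count += statistic(pieces) >= observed
    return observed, (count + 1) / (n_perm + 1)

rng = np.random.default_rng(1)
groups = [rng.normal(0.0, s, n) for s, n in [(1.0, 8), (1.0, 12), (2.0, 10)]]
print(permutation_variance_test(groups))
```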

5.
In this article we present a new solution to test for effects in unreplicated two-level factorial designs. The proposed test statistic, when the error components are normally distributed, follows an F distribution, though our attention is on its nonparametric permutation version. The proposed procedure does not require any transformation of the data, such as residualization, and it is exact and distribution-free for each effect. Our main aim is to discuss a permutation solution conditional on the original vector of responses. We give two versions of the same nonparametric testing procedure in order to control both the individual error rate and the experiment-wise error rate. A power comparison with Loughin and Noble's test is provided for an unreplicated 2^4 full factorial design.
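The sketch below shows the mechanics of permutation testing for effects in an unreplicated 2^k design: effect estimates are contrasts of the response vector, and a reference distribution is obtained by recomputing them on permuted responses. Freely permuting the full response vector is a simplification used only for illustration; it is not the conditional, exact procedure developed in the article.

```python
import itertools
import numpy as np

def two_level_effects(y, k):
    """Effect estimates (main effects and all interactions) for a full 2^k design
    whose runs are listed in standard (Yates) order."""
    levels = np.array(list(itertools.product([-1, 1], repeat=k)))[:, ::-1]
    columns = {}
    for r in range(1, k + 1):
        for combo in itertools.combinations(range(k), r):
            name = "".join("ABCDEFG"[i] for i in combo)
            columns[name] = np.prod(levels[:, combo], axis=1)
    X = np.column_stack(list(columns.values()))
    return list(columns), X.T @ np.asarray(y, float) / (2 ** (k - 1))

def permutation_effect_pvalues(y, k, n_perm=5000, seed=0):
    """Per-effect permutation p-values obtained by recomputing all effect
    estimates on randomly shuffled response vectors."""
    rng = np.random.default_rng(seed)
    names, observed = two_level_effects(y, k)
    exceed = np.zeros(len(names))
    for _ in range(n_perm):
        _, eff = two_level_effects(rng.permutation(y), k)
        exceed += np.abs(eff) >= np.abs(observed)
    return dict(zip(names, (exceed + 1) / (n_perm + 1)))

rng = np.random.default_rng(1)
design = np.array(list(itertools.product([-1, 1], repeat=4)))[:, ::-1]
y = 1.5 * design[:, 0] + rng.normal(size=16)          # effect of factor A plus noise
print(permutation_effect_pvalues(y, k=4))
```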

6.
Under a randomization model for a completely randomized design, permutation tests are considered based on the usual F statistic and on a multi-response permutation procedure statistic. For the first statistic, the first two moments are obtained so that a comparison with the distribution under the normal-theory model can be made. The second statistic is shown to converge in distribution to an infinite weighted sum of chi-squared variates, the weights being the limits of the eigenvalues of a matrix that depends on the distance measure used and the order statistics of the observations.
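For the first of the two statistics, the following sketch shows how the randomization (permutation) distribution of the usual one-way F statistic can be generated by reshuffling the pooled observations across the treatment groups; Monte Carlo sampling of the permutations is an assumption of this illustration.

```python
import numpy as np

def permutation_f_test(groups, n_perm=5000, seed=0):
    """Randomization test for a completely randomized design using the usual
    one-way ANOVA F statistic, with observations reshuffled across groups."""
    rng = np.random.default_rng(seed)
    groups = [np.asarray(g, float) for g in groups]
    sizes = [len(g) for g in groups]
    pooled = np.concatenate(groups)
    N, k = len(pooled), len(groups)

    def f_stat(sample_list):
        grand = pooled.mean()                      # grand mean is permutation-invariant
        ss_between = sum(len(g) * (g.mean() - grand) ** 2 for g in sample_list)
        ss_within = sum(((g - g.mean()) ** 2).sum() for g in sample_list)
        return (ss_between / (k - 1)) / (ss_within / (N - k))

    observed = f_stat(groups)
    count = 0
    for _ in range(n_perm):
        pieces = np.split(rng.permutation(pooled), np.cumsum(sizes)[:-1])
        count += f_stat(pieces) >= observed
    return observed, (count + 1) / (n_perm + 1)

rng = np.random.default_rng(4)
groups = [rng.normal(mu, 1.0, 10) for mu in (0.0, 0.0, 0.5)]
print(permutation_f_test(groups))
```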

7.
In this paper, we investigate different procedures for testing the equality of two mean survival times in paired lifetime studies. In the comparative study we consider Owen's M-test and Q-test, a likelihood ratio test, and the paired t-test, the Wilcoxon signed-rank test and a permutation test based on log-transformed survival times. For the sake of comparison, we also consider the paired t-test, the Wilcoxon signed-rank test and a permutation test based on the original survival times. The size and power characteristics of these tests are studied by means of Monte Carlo simulations under a frailty Weibull model. For less skewed marginal distributions, the Wilcoxon signed-rank test based on the original survival times is found to be desirable. Otherwise, the M-test and the likelihood ratio test are the best choices in terms of power. In general, one can choose a test procedure based on information about the correlation between the two survival times and the skewness of the marginal survival distributions.
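One of the compared procedures, a permutation test on log-transformed survival times, can be sketched as a within-pair sign-flip test: under the null the within-pair difference of log times is symmetric about zero, so the signs of the differences are exchangeable. The mean-difference statistic and the assumption of uncensored pairs are simplifications of this sketch.

```python
import numpy as np

def paired_log_permutation_test(t1, t2, n_perm=10000, seed=0):
    """Sign-flip permutation test for equal mean survival times in paired data,
    applied to log-transformed (uncensored) survival times.

    Under the null, each within-pair difference of log times is symmetric about
    zero, so its sign is exchangeable."""
    rng = np.random.default_rng(seed)
    d = np.log(np.asarray(t1, float)) - np.log(np.asarray(t2, float))
    observed = abs(d.mean())
    signs = rng.choice([-1.0, 1.0], size=(n_perm, d.size))
    null = np.abs((signs * d).mean(axis=1))
    return (np.sum(null >= observed) + 1) / (n_perm + 1)

t1 = [12.0, 5.3, 8.1, 20.4, 3.2, 9.9, 14.7, 6.8]     # illustrative paired survival times
t2 = [10.1, 6.0, 7.4, 15.2, 2.9, 8.3, 11.0, 6.1]
print(paired_log_permutation_test(t1, t2))
```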

8.
We study various bootstrap and permutation methods for matched pairs, whose distributions can have different shapes even under the null hypothesis of no treatment effect. Although the data may not be exchangeable under the null, we investigate different permutation approaches as valid procedures for finite sample sizes. It is shown that permutation or bootstrap schemes that neglect the dependency structure in the data are asymptotically valid. Simulation studies show that these new tests improve on the power of the t-test under non-normality.

9.
The k nearest neighbors (k-NN) classifier is one of the most popular methods for statistical pattern recognition and machine learning. In practice, the size k, i.e., the number of neighbors used for classification, is usually set arbitrarily to one or some other small number, or chosen by cross-validation. In this study, we propose a novel alternative approach to choosing k. Based on a k-NN multivariate multi-sample test, we assign each candidate k a permutation-test-based Z-score. The number of neighbors is set to the k with the highest Z-score. This approach is computationally efficient since we have derived formulas for the mean and variance of the test statistic under the permutation distribution for multiple sample groups. Several simulated and real-world data sets are analyzed to investigate the performance of our approach. Its usefulness is demonstrated through the evaluation of prediction accuracies when the Z-score is used as the criterion to select k. We also compare our approach to the widely used cross-validation approaches. The results show that the k selected by our approach yields high prediction accuracies when informative features are used for classification, whereas the cross-validation approach may fail in some cases.
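The sketch below mimics the selection rule: each candidate k is scored by how extreme a k-NN class-agreement statistic is relative to its permutation null, and the k with the largest Z-score is chosen. Here the null mean and variance are estimated by Monte Carlo permutations, whereas the article derives them analytically; the agreement statistic itself is an illustrative choice.

```python
import numpy as np
from scipy.spatial.distance import cdist

def knn_agreement(dist, labels, k):
    """Average fraction of the k nearest neighbours (excluding the point itself)
    that share each point's label: a k-NN multi-sample test statistic."""
    nn = np.argsort(dist, axis=1)[:, 1:k + 1]     # column 0 is the point itself
    return np.mean(labels[nn] == labels[:, None])

def choose_k_by_zscore(X, y, k_grid, n_perm=200, seed=0):
    """Pick the k whose statistic is most extreme relative to its permutation null,
    using a Monte Carlo Z-score (the article derives the moments analytically)."""
    rng = np.random.default_rng(seed)
    y = np.asarray(y)
    dist = cdist(X, X)
    z_scores = {}
    for k in k_grid:
        observed = knn_agreement(dist, y, k)
        null = np.array([knn_agreement(dist, rng.permutation(y), k) for _ in range(n_perm)])
        z_scores[k] = (observed - null.mean()) / null.std(ddof=1)
    return max(z_scores, key=z_scores.get), z_scores

rng = np.random.default_rng(2)
X = np.vstack([rng.normal(0.0, 1.0, (30, 2)), rng.normal(1.5, 1.0, (30, 2))])
y = np.repeat([0, 1], 30)
best_k, scores = choose_k_by_zscore(X, y, k_grid=range(1, 16))
print(best_k)
```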

10.
Multivariate combination-based permutation tests have been widely used in many complex problems. In this paper we focus on the equipower property, derived directly from the finite-sample consistency property, and we analyze the impact of the dependency structure on the combined tests. First, we consider the finite-sample consistency property, which assumes that the sample sizes are fixed (and possibly small) while a large number of informative variables is observed on each subject. Moreover, since permutation test statistics do not need to be standardized, we need not assume that the data are homoscedastic under the alternative. The equipower property is then derived from these two notions: consider the unconditional permutation power of a test statistic T for fixed sample sizes, with V ≥ 2 independent and identically distributed variables and fixed effect δ, calculated in two ways: (i) by considering two V-dimensional samples of sizes m1 and m2, respectively; (ii) by considering two unidimensional samples of sizes n1 = V·m1 and n2 = V·m2, respectively. Since the unconditional power essentially depends on the noncentrality induced by T, and the two settings have exactly the same likelihood and the same noncentrality, we show that they have, at least approximately, the same power function. To investigate both the equipower property and the power behavior in the presence of correlation, we performed an extensive simulation study.

11.
In this article, we develop new bootstrap-based inference for noncausal autoregressions with heavy-tailed innovations. This class of models is widely used for modeling bubbles and explosive dynamics in economic and financial time series. In the noncausal, heavy-tailed framework, a major drawback of asymptotic inference is that it is not feasible in practice, as the relevant limiting distributions depend crucially on the (unknown) decay rate of the tails of the innovation distribution. In addition, even in the unrealistic case where the tail behavior is known, asymptotic inference may suffer from small-sample issues. To overcome these difficulties, we propose bootstrap inference procedures using parameter estimates obtained with the null hypothesis imposed (the so-called restricted bootstrap). We discuss three different choices of bootstrap innovations: the wild bootstrap, based on Rademacher errors; the permutation bootstrap; and a combination of the two (the "permutation wild bootstrap"). Crucially, implementation of these bootstraps does not require any a priori knowledge about the distribution of the innovations, such as the tail index or the convergence rates of the estimators. We establish sufficient conditions ensuring that, under the null hypothesis, the bootstrap statistics consistently estimate particular conditional distributions of the original statistics. In particular, we show that validity of the permutation bootstrap holds without any restrictions on the distribution of the innovations, while the permutation wild and the standard wild bootstraps require further assumptions, such as symmetry of the innovation distribution. Extensive Monte Carlo simulations show that the finite-sample performance of the proposed bootstrap tests is exceptionally good, both in terms of size and of empirical rejection probabilities under the alternative hypothesis. We conclude by applying the proposed bootstrap inference to Bitcoin/USD exchange rates and to crude oil price data. We find that noncausal models with heavy-tailed innovations are indeed able to fit the data, including in periods of bubble dynamics. Supplementary materials for this article are available online.
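The three choices of bootstrap innovations can be sketched as follows. This shows only the innovation-generation step, under the assumption that restricted residuals are already available; the noncausal-AR estimation and the recomputation of the test statistic are omitted.

```python
import numpy as np

def bootstrap_innovations(residuals, scheme, rng):
    """Generate bootstrap innovations from (restricted) residuals.

    scheme: 'wild'             -> Rademacher sign flips
            'permutation'      -> random reshuffling of the residuals
            'permutation_wild' -> reshuffle, then apply Rademacher signs
    """
    e = np.asarray(residuals, float)
    if scheme == "wild":
        return e * rng.choice([-1.0, 1.0], size=e.size)
    if scheme == "permutation":
        return rng.permutation(e)
    if scheme == "permutation_wild":
        return rng.permutation(e) * rng.choice([-1.0, 1.0], size=e.size)
    raise ValueError(f"unknown scheme: {scheme}")

rng = np.random.default_rng(0)
resid = rng.standard_t(df=2, size=10)          # heavy-tailed residuals for illustration
for s in ("wild", "permutation", "permutation_wild"):
    print(s, bootstrap_innovations(resid, s, rng)[:3])
```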

12.
To carry out a permutation test we have to examine the n! permutations of the observations. In order to make the permutation test feasible, Dwass (1957) proposed examining only a sample of these permutations. With the help of sequential methods, we obtain a test that is never less efficient than the one proposed by Dwass or the permutation test itself, in the sense that it is as powerful and never requires more permutations to reach a decision. In practice, we can expect a substantial gain in efficiency.
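The sketch below illustrates the general idea of sampling permutations sequentially and stopping early: random permutations of a two-sample difference-in-means statistic are drawn one at a time, and sampling stops as soon as a fixed number h of them exceed the observed value, since the estimated p-value is then already large. This is one common sequential stopping rule and is not claimed to be the authors' exact procedure.

```python
import numpy as np

def sequential_permutation_pvalue(x, y, h=10, max_perm=9999, seed=0):
    """Sampled permutation test for a difference in means with sequential stopping.

    Random permutations are drawn one at a time; sampling stops once h of them
    exceed the observed statistic, at which point the p-value estimate h / b is
    already too large for significance at the usual levels."""
    rng = np.random.default_rng(seed)
    pooled = np.concatenate([np.asarray(x, float), np.asarray(y, float)])
    n = len(x)
    observed = abs(pooled[:n].mean() - pooled[n:].mean())

    exceed = 0
    for b in range(1, max_perm + 1):
        perm = rng.permutation(pooled)
        if abs(perm[:n].mean() - perm[n:].mean()) >= observed:
            exceed += 1
            if exceed == h:                         # early stop
                return h / b, b
    return (exceed + 1) / (max_perm + 1), max_perm

rng = np.random.default_rng(6)
x, y = rng.normal(0.0, 1.0, 15), rng.normal(0.2, 1.0, 15)
print(sequential_permutation_pvalue(x, y))          # (p-value, permutations used)
```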

13.
The Lagrange Multiplier (LM) test is one of the principal tools used to detect ARCH and GARCH effects in financial data analysis. However, when the underlying data are non-normal, which is often the case in practice, the asymptotic LM test, based on the χ²-approximation of critical values, is known to perform poorly, particularly for small and moderate sample sizes. In this paper we propose to employ two resampling techniques to find critical values of the LM test, namely permutation and bootstrap. We derive exactness and asymptotic correctness properties for the permutation and bootstrap LM tests, respectively. Our numerical studies indicate that the proposed resampling algorithms significantly improve the size and power of the LM test for both skewed and heavy-tailed processes. We also illustrate the new approaches with an application to the analysis of the Euro/USD currency exchange rates and the German stock index. The Canadian Journal of Statistics 40: 405–426; 2012 © 2012 Statistical Society of Canada
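A permutation version of the ARCH LM test can be sketched as below: the LM statistic is T·R² from regressing squared residuals on their first q lags and, because the residuals are exchangeable under the null of no ARCH effects, the reference distribution is built by reshuffling them. The choice q = 5 and the number of Monte Carlo permutations are illustrative assumptions.

```python
import numpy as np

def arch_lm_stat(resid, q):
    """Engle-type LM statistic for ARCH(q): T * R^2 from regressing squared
    residuals on their first q lags."""
    e2 = np.asarray(resid, float) ** 2
    y = e2[q:]
    X = np.column_stack([np.ones_like(y)] + [e2[q - j - 1:-j - 1] for j in range(q)])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    r2 = 1 - np.sum((y - X @ beta) ** 2) / np.sum((y - y.mean()) ** 2)
    return len(y) * r2

def permutation_lm_test(resid, q=5, n_perm=2000, seed=0):
    """Permutation version of the ARCH LM test: under the null of no ARCH effects
    the residuals are exchangeable, so the null distribution is built by
    reshuffling them."""
    rng = np.random.default_rng(seed)
    observed = arch_lm_stat(resid, q)
    null = np.array([arch_lm_stat(rng.permutation(resid), q) for _ in range(n_perm)])
    return observed, (np.sum(null >= observed) + 1) / (n_perm + 1)

rng = np.random.default_rng(3)
resid = rng.standard_t(df=5, size=500)        # heavy-tailed i.i.d. residuals (null holds)
print(permutation_lm_test(resid))
```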

14.
Permutation tests are often used to analyze data because they may require no assumptions about the form of the distribution beyond random and independent sample selection. We initially considered a permutation test to assess the treatment effect on computed tomography (CT) lesion volume in the National Institute of Neurological Disorders and Stroke (NINDS) t-PA Stroke Trial, which has highly skewed data. However, we encountered difficulties in summarizing the permutation test results on the lesion volume. In this paper, we discuss some aspects of permutation tests and illustrate our findings. This experience with the NINDS t-PA Stroke Trial data emphasizes that permutation tests are useful for clinical trials and can be used to validate the assumptions of an observed test statistic. The permutation test places fewer restrictions on the underlying distribution but is not always distribution-free or an exact test, especially for ill-behaved data. Quasi-likelihood estimation using the generalized estimating equation (GEE) approach on transformed data seems to be a good choice for analyzing the CT lesion data, based on both its corresponding permutation test and its clinical interpretation.

15.
In this paper, we propose several tests for detecting differences in means and variances simultaneously between two populations under normality. First, we propose a likelihood ratio test. We then express the likelihood ratio statistic as a product of two functions of random quantities, which can be used to test the two individual partial hypotheses of differences in means and in variances. With those individual partial tests, we propose a union-intersection test. We also consider two optimal tests obtained by combining the p-values of the two individual partial tests. To obtain null distributions, we apply the permutation principle with a Monte Carlo approach. We then compare the efficiency of the proposed tests with that of well-known ones through a simulation study. Finally, we discuss some interesting features of the simultaneous tests and resampling methods as concluding remarks.
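The sketch below illustrates one way to combine the p-values of the two partial permutation tests (one for means, one for variances) with Fisher's rule, taking the null distribution of the combined statistic from the same set of permutations. The partial statistics (absolute mean difference and absolute log-variance ratio) and Fisher's combining function are illustrative choices; the article's optimal combinations may differ.

```python
import numpy as np

def combined_mean_variance_test(x, y, n_perm=5000, seed=0):
    """Permutation test for simultaneous differences in means and variances of two
    samples, combining the two partial p-values with Fisher's rule
    T = -2 (log p_mean + log p_var)."""
    rng = np.random.default_rng(seed)
    x, y = np.asarray(x, float), np.asarray(y, float)
    pooled, n = np.concatenate([x, y]), len(x)

    def partial_stats(a, b):
        return abs(a.mean() - b.mean()), abs(np.log(a.var(ddof=1) / b.var(ddof=1)))

    obs_mean, obs_var = partial_stats(x, y)
    perm_mean, perm_var = np.empty(n_perm), np.empty(n_perm)
    for b in range(n_perm):
        p = rng.permutation(pooled)
        perm_mean[b], perm_var[b] = partial_stats(p[:n], p[n:])

    # partial permutation p-values for the observed data
    p_mean = (np.sum(perm_mean >= obs_mean) + 1) / (n_perm + 1)
    p_var = (np.sum(perm_var >= obs_var) + 1) / (n_perm + 1)

    # Fisher combination; its null distribution comes from the same permutations,
    # using within-permutation ranks as the permuted partial p-values
    def fisher(pm, pv):
        return -2 * (np.log(pm) + np.log(pv))
    pm_b = (np.argsort(np.argsort(-perm_mean)) + 1) / (n_perm + 1)
    pv_b = (np.argsort(np.argsort(-perm_var)) + 1) / (n_perm + 1)
    comb_null = fisher(pm_b, pv_b)
    p_comb = (np.sum(comb_null >= fisher(p_mean, p_var)) + 1) / (n_perm + 1)
    return p_mean, p_var, p_comb

rng = np.random.default_rng(5)
x, y = rng.normal(0.0, 1.0, 20), rng.normal(0.3, 1.5, 20)
print(combined_mean_variance_test(x, y))
```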

16.
Using Monte Carlo simulation, we compare the performance of five asymptotic test procedures and a randomized permutation test procedure for testing the homogeneity of odds ratios under the stratified matched-pair design. We note that the weighted least-squares test procedure is liberal, while Pearson's goodness-of-fit (PGF) test procedure with the continuity correction is conservative. The PGF procedure without the continuity correction, the conditional likelihood ratio test procedure, and the randomized permutation test procedure generally perform well with respect to Type I error. We use data from a previously published case–control study of endometrial cancer incidence to illustrate the use of these test procedures.

17.
In this article, procedures are proposed to test the hypothesis of equality of two or more regression functions. The tests are based on p-values, first derived under a homoscedastic regression model using a fiducial method based on cubic spline interpolation. We then construct a test for the heteroscedastic case based on Fisher's method of combining independent tests. We study the behavior of the tests through simulation experiments, in which comparisons with other tests are also given; the proposed tests perform well. Finally, an application to a data set is given to illustrate the usefulness of the proposed tests in practice.

18.
The randomization design used to collect the data provides the basis for the exact distributions of permutation tests. The truncated binomial design is one of the designs commonly used to force balance in clinical trials and thereby eliminate experimental bias. In this article, we consider the exact distribution of the weighted log-rank class of tests for censored data under the truncated binomial design. A double saddlepoint approximation for p-values of this class is derived under the truncated binomial design. The speed and accuracy of the saddlepoint approximation, compared with the normal asymptotic approximation, facilitate the inversion of the weighted log-rank tests to determine nominal 95% confidence intervals for the treatment effect with right-censored data.

19.
This paper deals with the asymptotics of permutation tests based on a rather general class of measures of association for R × C contingency tables, given the marginal totals. This class includes the classical chi-square test, the τb and γ indices of Goodman and Kruskal (1954) and the popular Rand (1971) index. The asymptotic distribution of this class of permutation tests for association is a weighted sum of (generally speaking) non-central chi-squares. Formulae for the asymptotic moments of such tests are also given. If non-centrality holds under the null hypothesis of independence, the distribution in question converges to the normal distribution. The efficacies of such measures of association are obtained. Several applications are analysed in detail, including the above-mentioned indices.

20.
Exact permutation testing of effects in unreplicated two-level multifactorial designs is developed based on the notion of realigning observations and on paired permutations. This approach preserves the exchangeability of the error components for testing up to k effects. Advantages and limitations of exact permutation procedures for unreplicated factorials are discussed, and a simulation study on paired permutation testing is presented.
