Similar documents
20 similar records found (search time: 31 ms)
1.
Shaffer's extensions and generalization of Dunnett's procedure are shown to be applicable in several nonparametric data analyses. Applications are considered within the context of the Kruskal-Wallis one-way analysis of variance (ANOVA) test for ranked data, Friedman's two-way ANOVA test for ranked data, and Cochran's test of change for dichotomous data.
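The omnibus-plus-pairwise workflow above can be sketched as follows. Shaffer's logically constrained step-down procedure itself is not in SciPy, so a plain Holm-style step-down over pairwise Mann-Whitney tests is used here as a simpler stand-in; the group data are simulated.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
groups = [rng.normal(loc, 1.0, size=20) for loc in (0.0, 0.0, 1.5)]

# Omnibus Kruskal-Wallis test on the three groups.
H, p_kw = stats.kruskal(*groups)

# Step-down pairwise comparisons; Holm-style Bonferroni multipliers
# (without monotonicity enforcement) stand in for Shaffer's procedure.
pairs = [(0, 1), (0, 2), (1, 2)]
raw = [stats.mannwhitneyu(groups[i], groups[j],
                          alternative='two-sided').pvalue for i, j in pairs]
order = np.argsort(raw)
adj = {}
for rank, idx in enumerate(order):
    adj[pairs[idx]] = min(1.0, raw[idx] * (len(pairs) - rank))
```

Shaffer's refinement would shrink some of the multipliers further by exploiting logical implications among the pairwise hypotheses.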

2.
This article analyzes a small censored data set to demonstrate the potential dangers of using statistical computing packages without understanding the details of statistical methods. The data, consisting of censored response times with heavy ties at one time point, were analyzed with a Cox regression model using the SAS PHREG and BMDP2L procedures. The p values reported by the two procedures for testing the equality of two treatments vary considerably. This article illustrates that (1) the Breslow likelihood used in both BMDP2L and SAS PHREG is too conservative and can have a critical effect on an extreme data set, (2) Wald's test in the SAS PHREG procedure may yield absurd results under most likelihood models, and (3) BMDP2L needs to offer more than just the Breslow likelihood in future development.
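The conservatism of the Breslow approximation for tied data can be seen directly from the partial-likelihood contributions. A minimal sketch, assuming a single covariate and one block of tied events, with Efron's correction shown for contrast (all values illustrative):

```python
import numpy as np

def tied_contribution(beta, z_events, z_risk, method):
    """Partial-likelihood contribution of d tied events.
    z_events: covariates of the d tied failures; z_risk: the full risk set."""
    num = np.exp(beta * np.sum(z_events))
    s_risk = np.sum(np.exp(beta * np.asarray(z_risk)))
    s_tied = np.sum(np.exp(beta * np.asarray(z_events)))
    d = len(z_events)
    if method == "breslow":
        # Breslow: the full risk-set sum appears d times in the denominator.
        return num / s_risk ** d
    # Efron: the denominator shrinks as tied events are "used up".
    denom = np.prod([s_risk - k / d * s_tied for k in range(d)])
    return num / denom

z_events, z_risk = [1.0, 0.0], [1.0, 0.0, 0.0, 1.0, 0.0]
lb = tied_contribution(0.5, z_events, z_risk, "breslow")
le = tied_contribution(0.5, z_events, z_risk, "efron")
# Breslow's larger denominator gives a smaller contribution, which is
# the source of its conservatism under heavy ties.
```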

3.
Recent research on finding appropriate composite endpoints for preclinical Alzheimer's disease has focused considerable effort on finding “optimized” weights in the construction of a weighted composite score. In this paper, several proposed methods are reviewed. Our results indicate no evidence that these methods will increase the power of the test statistics, and some of these weights will introduce biases to the study. Our recommendation is to focus on identifying more sensitive items from clinical practice and appropriate statistical analyses of a large Alzheimer's data set. Once a set of items has been selected, there is no evidence that adding weights will generate more sensitive composite endpoints.

4.
The problem of testing whether two samples of possibly right-censored survival data come from the same distribution is considered. The aim is to develop a test which is capable of detection of a wide spectrum of alternatives. A new class of tests based on Neyman's embedding idea is proposed. The null hypothesis is tested against a model where the hazard ratio of the two survival distributions is expressed by several smooth functions. A data-driven approach to the selection of these functions is studied. Asymptotic properties of the proposed procedures are investigated under fixed and local alternatives. Small-sample performance is explored via simulations which show that the power of the proposed tests appears to be more robust than the power of some versatile tests previously proposed in the literature (such as combinations of weighted logrank tests, or Kolmogorov–Smirnov tests).

5.
Testing for periodicity in microarray time series encounters the challenges of short series length, missing values, and the presence of non-Fourier frequencies. In this article, a test method for such series is proposed. The method is entirely simulation based and obtains p-values for the test of periodicity by fitting a Pearson Type VI distribution. Simulation results show the superiority of this method over Fisher's g test for varying series lengths, frequencies, and error variances. The approach is applied to Caulobacter crescentus cell cycle data to demonstrate its practical performance.
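Fisher's g test, the benchmark the proposed method is compared against, can be sketched for a complete series (no missing values) using its exact null distribution; the test series below is illustrative.

```python
import numpy as np
from math import comb, floor

def fishers_g(x):
    """Fisher's g test for a single periodicity in a complete series."""
    n = len(x)
    x = np.asarray(x, float) - np.mean(x)
    # Periodogram at the Fourier frequencies j/n, j = 1..(n-1)//2.
    I = (np.abs(np.fft.rfft(x)) ** 2 / n)[1:(n - 1) // 2 + 1]
    m = len(I)
    g = I.max() / I.sum()
    # Exact null distribution of g (Fisher, 1929).
    p = sum((-1) ** (j - 1) * comb(m, j) * (1 - j * g) ** (m - 1)
            for j in range(1, floor(1 / g) + 1))
    return g, min(max(p, 0.0), 1.0)

rng = np.random.default_rng(1)
t = np.arange(48)
x = 2.0 * np.sin(2 * np.pi * 5 * t / 48) + rng.normal(0, 0.5, 48)
g, p = fishers_g(x)   # strong sinusoid: g near 1, p very small
```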

6.
Nonparametric estimation of copula-based measures of multivariate association in a continuous random vector X=(X1, …, Xd) is usually based on complete continuous data. In many practical applications, however, these types of data are not readily available; instead, aggregated ordinal observations are given, for example, ordinal ratings based on a latent continuous scale. This article introduces a purely nonparametric and data-driven estimator of the unknown copula density, and of the corresponding copula, based on multivariate contingency tables. Estimators for multivariate Spearman's rho and Kendall's tau are based thereon. The properties of these estimators in samples of medium and large size are evaluated in a simulation study. An increasing bias can be observed along with an increasing degree of association between the components. As is to be expected, the bias is severely influenced by the amount of information available, whereas the influence of sample size is only marginal. We further give an empirical illustration based on daily returns of five German stocks.
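A minimal sketch of the starting point: expanding a hypothetical ordinal contingency table back into the paired observations it aggregates and computing Spearman's rho and Kendall's tau with midrank handling of the heavy ties. The article's copula-density estimator itself is more involved and is not reproduced here.

```python
import numpy as np
from scipy import stats

# Hypothetical 3x3 ordinal contingency table (e.g. paired ratings).
table = np.array([[20,  8,  2],
                  [ 6, 25,  9],
                  [ 1,  7, 22]])

# Expand the table into the paired ordinal observations it aggregates;
# spearmanr/kendalltau then handle the ties via midranks.
rows, cols = np.indices(table.shape)
x = np.repeat(rows.ravel(), table.ravel())
y = np.repeat(cols.ravel(), table.ravel())

rho, _ = stats.spearmanr(x, y)
tau, _ = stats.kendalltau(x, y)
```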

7.
We estimate the sib–sib correlation by maximizing the log-likelihood of a Kotz-type distribution. Using extensive simulations, we conclude that estimating the sib–sib correlation by the proposed method has many advantages. Results are illustrated on a real-life data set due to Galton. Testing of hypotheses about this correlation is also discussed using the three likelihood-based tests and a test based on Srivastava's estimator. It is concluded that the score test derived using the Kotz-type density performs best.

8.
A simple test based on Gini's mean difference is proposed for testing the hypothesis of equality of population variances. Using 2000 replicated samples and empirical distributions, we show that the test compares favourably with Bartlett's and Levene's tests for the normal population. It is also more powerful than Bartlett's and Levene's tests for some alternative hypotheses under some non-normal distributions, and more robust than the other two tests for large sample sizes under some alternative hypotheses. We also give an approximate distribution for the test statistic to enable calculation of nominal levels and P-values.
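A hedged sketch of a Gini-mean-difference-based comparison of dispersion: the article's exact statistic and its approximate distribution are not reproduced here, so a two-sided GMD ratio with a permutation null is used purely for illustration.

```python
import numpy as np

def gini_mean_difference(x):
    """Gini's mean difference: average absolute difference over all pairs,
    computed in O(n log n) via order statistics."""
    x = np.sort(np.asarray(x, float))
    n = len(x)
    i = np.arange(1, n + 1)
    return 2.0 * np.sum((2 * i - n - 1) * x) / (n * (n - 1))

def gmd_ratio_perm_test(x, y, n_perm=2000, seed=0):
    """Permutation test of equal dispersion based on the GMD ratio."""
    rng = np.random.default_rng(seed)
    obs = gini_mean_difference(x) / gini_mean_difference(y)
    obs = max(obs, 1 / obs)                     # two-sided ratio
    pooled, nx = np.concatenate([x, y]), len(x)
    count = 0
    for _ in range(n_perm):
        p = rng.permutation(pooled)
        r = gini_mean_difference(p[:nx]) / gini_mean_difference(p[nx:])
        count += max(r, 1 / r) >= obs
    return (count + 1) / (n_perm + 1)

rng = np.random.default_rng(42)
x = rng.normal(0, 1, 40)
y = rng.normal(0, 3, 40)     # clearly larger dispersion
p_val = gmd_ratio_perm_test(x, y)
```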

9.
This study was motivated by the question of which type of confidence interval (CI) one should use to summarize the sample variance of Goodman and Kruskal's coefficient gamma. In a Monte Carlo study, we investigated the coverage and computation time of the Goodman–Kruskal CI, the Cliff-consistent CI, the profile likelihood CI, and the score CI for Goodman and Kruskal's gamma under several conditions. The choice of Goodman and Kruskal's gamma was based on results of Woods [Consistent small-sample variances for six gamma-family measures of ordinal association. Multivar Behav Res. 2009;44:525–551], who found relatively poor coverage for gamma for very small samples compared to other ordinal association measures. The profile likelihood CI and the score CI had the best coverage, close to the nominal value, but these CIs could often not be computed for sparse tables. The coverage of the Goodman–Kruskal CI and the Cliff-consistent CI was often poor. Computation time was fast to reasonably fast for all types of CI.
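For reference, gamma and the classical Goodman–Kruskal CI (the first of the four intervals compared) can be sketched from concordant/discordant pair counts; the example table is hypothetical.

```python
import numpy as np
from scipy.stats import norm

def gk_gamma_ci(table, level=0.95):
    """Goodman-Kruskal gamma with its classical asymptotic CI."""
    t = np.asarray(table, float)
    r, c = t.shape
    A = np.zeros_like(t)   # pairs concordant with each cell
    D = np.zeros_like(t)   # pairs discordant with each cell
    for i in range(r):
        for j in range(c):
            A[i, j] = t[:i, :j].sum() + t[i + 1:, j + 1:].sum()
            D[i, j] = t[:i, j + 1:].sum() + t[i + 1:, :j].sum()
    P, Q = (t * A).sum(), (t * D).sum()   # 2x concordant, 2x discordant
    gamma = (P - Q) / (P + Q)
    # Classical Goodman-Kruskal variance estimate.
    var = 16.0 * (t * (Q * A - P * D) ** 2).sum() / (P + Q) ** 4
    half = norm.ppf(0.5 + level / 2) * np.sqrt(var)
    return gamma, (gamma - half, gamma + half)

g_hat, ci = gk_gamma_ci([[30, 5, 1], [4, 28, 6], [1, 5, 29]])
```

The Woods results cited above concern exactly how intervals like this one behave in very small samples.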

10.
In this paper we consider the problem of comparing several means under heteroscedasticity and nonnormality. By combining Huber's M-estimators with the Brown-Forsythe test, several robust procedures were developed; these procedures were compared through computer simulation studies with the Tan-Tabatabai procedure, which was developed by combining Tiku's MML estimators with the Brown-Forsythe test. The numerical results indicate clearly that the Tan-Tabatabai procedure is considerably more powerful than tests based on Huber's M-estimators over a wide range of nonnormal distributions.
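The classical Brown-Forsythe F* test for means, which both procedures modify by plugging in robust estimators, can be sketched as follows; this version uses plain sample means and variances, not the M- or MML-estimator variants studied in the paper.

```python
import numpy as np
from scipy import stats

def brown_forsythe_means(*groups):
    """Brown-Forsythe F* test for equal means under unequal variances."""
    ns = np.array([len(g) for g in groups], float)
    means = np.array([np.mean(g) for g in groups])
    variances = np.array([np.var(g, ddof=1) for g in groups])
    N = ns.sum()
    grand = np.sum(ns * means) / N
    F = np.sum(ns * (means - grand) ** 2) / np.sum((1 - ns / N) * variances)
    # Satterthwaite-type degrees of freedom for the denominator.
    c = (1 - ns / N) * variances / np.sum((1 - ns / N) * variances)
    df2 = 1.0 / np.sum(c ** 2 / (ns - 1))
    p = stats.f.sf(F, len(groups) - 1, df2)
    return F, p

rng = np.random.default_rng(3)
a = rng.normal(0, 1, 30)
b = rng.normal(0, 2, 30)
c = rng.normal(3, 3, 30)   # shifted mean
F, p = brown_forsythe_means(a, b, c)
```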

11.
The kappa coefficient is a widely used measure for assessing agreement on a nominal scale. Weighted kappa is an extension of Cohen's kappa that is commonly used for measuring agreement on an ordinal scale. In this article, it is shown that weighted kappa can be computed as a function of unweighted kappas. The latter coefficients are kappa coefficients that correspond to smaller contingency tables that are obtained by merging categories.
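Weighted kappa itself (with linear or quadratic disagreement weights) is straightforward to compute directly; the article's decomposition into unweighted kappas of merged tables is not reproduced here.

```python
import numpy as np

def weighted_kappa(table, weights="linear"):
    """Cohen's weighted kappa for a square agreement table on ordinal categories."""
    O = np.asarray(table, float)
    O = O / O.sum()                              # observed proportions
    k = O.shape[0]
    i, j = np.indices((k, k))
    d = np.abs(i - j)
    w = d if weights == "linear" else d ** 2     # disagreement weights
    E = np.outer(O.sum(axis=1), O.sum(axis=0))   # expected under independence
    return 1.0 - (w * O).sum() / (w * E).sum()

kappa = weighted_kappa([[20, 5], [10, 15]])      # hypothetical 2x2 table
```

For a 2x2 table the weighted and unweighted coefficients coincide, which is the degenerate case of the merging result described above.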

12.
The efficacy and the asymptotic relative efficiency (ARE) of a weighted sum of Kendall's taus, a weighted sum of Spearman's rhos, a weighted sum of Pearson's r's, and a weighted sum of z-transformations of the Fisher–Yates correlation coefficients, in the presence of a blocking variable, are discussed. A method of selecting the weighting constants that maximize the efficacy of these four correlation coefficients is proposed. Estimates, test statistics, and confidence intervals for the four weighted correlation coefficients are also developed. To compare the small-sample properties of the four tests, a simulation study is performed. The theoretical and simulated results both favour the weighted sum of the Pearson correlation coefficients with the optimal weights, as well as the weighted sum of z-transformations of the Fisher–Yates correlation coefficients with the optimal weights.
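A familiar special case can be sketched directly: combining per-block Pearson correlations on the Fisher z scale with weights n_k - 3, which are the optimal weights under normality. The per-block correlations and block sizes below are illustrative.

```python
import numpy as np
from scipy import stats

def combined_correlation(rs, ns):
    """Combine per-block correlations via Fisher's z with weights n_k - 3."""
    rs, ns = np.asarray(rs, float), np.asarray(ns, float)
    z = np.arctanh(rs)                 # Fisher z-transform
    w = ns - 3.0
    z_bar = np.sum(w * z) / np.sum(w)
    se = 1.0 / np.sqrt(np.sum(w))
    p = 2 * stats.norm.sf(abs(z_bar / se))     # test of zero correlation
    lo, hi = np.tanh(z_bar - 1.96 * se), np.tanh(z_bar + 1.96 * se)
    return np.tanh(z_bar), (lo, hi), p

r_hat, ci, p = combined_correlation([0.45, 0.52, 0.38], [30, 25, 40])
```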

13.
Mood's test, the oldest non-parametric test in its class for detecting heterogeneity of variance, is still widely used in areas such as biometry, biostatistics and medicine. Although it is a popular test, it is not suitable for use on a two-way factorial design. In this paper, Mood's test is generalised to the 2 × 2 factorial design setting and its performance is compared with that of Klotz's test. The power and robustness of these tests are examined in detail by means of a simulation study with 10,000 replications. Based on the simulation results, the generalised Mood's and Klotz's tests can be recommended especially in settings in which the parent distribution is symmetric. As an example application, we analyse data from a multi-factor agricultural system that involves chilli peppers, nematodes and yellow nutsedge. This example dataset suggests that the performance of the generalised Mood's test is in agreement with that of the generalised Klotz's test.
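The two-sample building block being generalised here is available directly in SciPy; the data below are simulated with a clear scale difference.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
x = rng.normal(0, 1, 100)
y = rng.normal(0, 4, 100)   # same location, much larger scale

# Mood's two-sample test for equality of scale parameters.
z, p = stats.mood(x, y)
```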

14.
In this note we propose two procedures for testing homogeneity of covariance matrices that are both extensions of Hartley's (1940) test for equality of variances. The first is a two-stage procedure whose first step is a simple test for equality of the largest eigenvalues, and corresponding eigenvectors, of the covariance matrices. The second is based on projection pursuit and appears harder to apply in practice.

15.
There are many methods for analyzing longitudinal ordinal response data with random dropout. These include maximum likelihood (ML), weighted estimating equations (WEE), and multiple imputation (MI). In this article, using a Markov model in which the effect of the previous response on the current response is investigated as an ordinal variable, the likelihood is partitioned to simplify the use of existing software. Simulated data, generated to represent a three-period longitudinal study with random dropout, are used to compare the performance of the ML, WEE, and MI methods in terms of standardized bias and coverage probabilities. These estimation methods are then applied to a real medical data set.

16.
Sophisticated statistical analyses of incidence frequencies are often required for various epidemiologic and biomedical applications. Among the most commonly applied methods is Pearson's χ2 test, which is structured to detect nonspecific anomalous patterns of frequencies and is useful for testing the significance of incidence heterogeneity. However, Pearson's χ2 test is not efficient for assessing whether the frequency in a particular cell (or class) can be attributed to chance alone. We recently developed statistical tests for detecting temporal anomalies of disease cases based on maximum and minimum frequencies; these tests are designed to test the significance of a particular high or low frequency. The purpose of this article is to demonstrate the merits of these tests in epidemiologic and biomedical studies. We show that our proposed methods are more sensitive and powerful for testing extreme cell counts than is Pearson's χ2 test. This feature can provide important and valuable information in epidemiologic or biomedical studies. We elucidate and illustrate the differences in sensitivity among our tests and Pearson's χ2 test by analyzing a data set of Langerhans cell histiocytosis cases and hypothetical variants of it. We also compute and compare the statistical power of these methods using various sets of cell numbers and alternative frequencies. The investigation of statistical sensitivity and power presented in this work provides investigators with useful guidelines for selecting the appropriate tests for their studies.
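A simulation-based version of a maximum-frequency test can be sketched as follows; the spiked-count example is hypothetical, and the authors' exact tests are not reproduced here.

```python
import numpy as np
from scipy import stats

def max_cell_mc_test(counts, n_sim=5000, seed=0):
    """Monte Carlo p-value for an unusually large maximum cell count
    under a uniform multinomial null."""
    rng = np.random.default_rng(seed)
    counts = np.asarray(counts)
    n, k = counts.sum(), len(counts)
    obs = counts.max()
    sims = rng.multinomial(n, np.full(k, 1 / k), size=n_sim).max(axis=1)
    return (np.sum(sims >= obs) + 1) / (n_sim + 1)

counts = np.array([3, 2, 4, 2, 3, 14, 3, 2, 3, 4])   # one spiked cell
p_max = max_cell_mc_test(counts)
_, p_chi2 = stats.chisquare(counts)   # omnibus Pearson chi2 for comparison
```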

17.
Taguchi's statistic has long been known to be a more appropriate measure of association for ordinal variables than the Pearson chi-squared statistic. Therefore, there is some advantage in using Taguchi's statistic for performing correspondence analysis when a two-way contingency table consists of one ordinal categorical variable. This article will explore the development of correspondence analysis using a decomposition of Taguchi's statistic.

18.
In event time data analysis, comparisons between distributions are made with the logrank test. When the data appear to contain crossing-hazards phenomena, nonparametric weighted logrank statistics are usually suggested, accommodating different weight functions to increase power. However, the gain in power from imposing different weights has its limits, since differences before and after the crossing point may balance each other out. In contrast to the weighted logrank tests, we propose a score-type statistic based on the semiparametric heteroscedastic-hazards regression model of Hsieh [2001. On heteroscedastic hazards regression models: theory and application. J. Roy. Statist. Soc. Ser. B 63, 63–79], in which the nonproportionality is explicitly modeled. Our score test is based on estimating functions derived from the partial likelihood under this heteroscedastic model. Simulation results show the benefit of modeling the heteroscedasticity and compare the power of the proposed test with two classes of weighted logrank tests (including Fleming–Harrington's test and Moreau's locally most powerful test), a Rényi-type test, and Breslow's test for acceleration. We also demonstrate the application of this test by analyzing actual data from clinical trials.

19.
The authors extend Fisher's method of combining two independent test statistics to test homogeneity of several two‐parameter populations. They explore two procedures combining asymptotically independent test statistics: the first pools two likelihood ratio statistics and the other, score test statistics. They then give specific results to test homogeneity of several normal, negative binomial or beta‐binomial populations. Their simulations provide evidence that in this context, Fisher's method performs generally well, even when the statistics to be combined are only asymptotically independent. They are led to recommend Fisher's test based on score statistics, since the latter have simple forms, are easy to calculate, and have uniformly good level properties.
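Fisher's combination of independent p-values, the core of the procedure, works as follows; the per-population p-values below are illustrative.

```python
import numpy as np
from scipy import stats

# p-values from independent per-population homogeneity tests (illustrative).
pvals = [0.08, 0.12, 0.30]

# Fisher's method: -2 * sum(log p) ~ chi-squared with 2k df under H0.
stat = -2.0 * np.sum(np.log(pvals))
p_comb = stats.chi2.sf(stat, df=2 * len(pvals))

# SciPy provides the same combination directly.
stat2, p2 = stats.combine_pvalues(pvals, method='fisher')
```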

20.
An omnibus test of uniformity based upon ratios of sample moments to population moments is introduced. Results of a Monte Carlo power study show that, for the two types of alternatives considered, the proposed test has good power in comparison with Neyman's test N2, Greenwood's test, the Kolmogorov-Smirnov test, and the Chi-squared test.
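The article's exact statistic is not specified here, so the following is a hypothetical moment-ratio statistic for a Uniform(0,1) null with a Monte Carlo reference distribution, purely to illustrate the idea of comparing sample moments with population moments.

```python
import numpy as np

def moment_ratio_stat(u, k_max=4):
    """Sum of squared deviations of sample/population moment ratios from 1;
    the k-th population moment of Uniform(0,1) is 1/(k+1)."""
    u = np.asarray(u, float)
    ratios = [np.mean(u ** k) * (k + 1) for k in range(1, k_max + 1)]
    return float(np.sum((np.array(ratios) - 1.0) ** 2))

def mc_pvalue(u, n_sim=2000, seed=0):
    """Monte Carlo p-value under the uniform null."""
    rng = np.random.default_rng(seed)
    obs = moment_ratio_stat(u)
    sims = [moment_ratio_stat(rng.uniform(size=len(u))) for _ in range(n_sim)]
    return (np.sum(np.array(sims) >= obs) + 1) / (n_sim + 1)

rng = np.random.default_rng(5)
p_null = mc_pvalue(rng.uniform(size=100))      # uniform data
p_alt = mc_pvalue(rng.beta(3, 1, size=100))    # clearly non-uniform data
```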


Copyright © Beijing Qinyun Technology Development Co., Ltd. (北京勤云科技发展有限公司)  京ICP备09084417号