Similar Literature
20 similar documents found
1.
Nonparametric tests are proposed for comparing different treatments based on current status data. Most methods proposed in the literature for this problem require that the observation times for all subjects follow the same distribution; in other words, that the censoring distributions are identical across treatment groups. In this paper, we focus on the situation where the censoring distributions may differ between treatment groups and give a test that takes this unequal censoring into account. The asymptotic distribution of the proposed test is derived, and the method is applied to data arising from a tumorigenicity experiment.

2.
In many case-control studies, it is common to utilize paired data when treatments are being evaluated. In this article, we propose and examine an efficient distribution-free test to compare two independent samples, each based on paired observations. We extend and modify the density-based empirical likelihood ratio test presented by Gurevich and Vexler [7] to formulate a parametric likelihood ratio test statistic appropriate to the hypothesis of interest, and then approximate the test statistic nonparametrically. An extensive Monte Carlo study demonstrates the robustness of the proposed test with respect to the values of its test parameters. Furthermore, an extensive power analysis via Monte Carlo simulations confirms that the proposed method outperforms classical and general procedures in most cases across a wide class of alternatives. An application to a real paired data study illustrates that the proposed test can be efficiently implemented in practice.

3.
Paired binary data arise naturally when paired body parts are investigated in clinical trials. One of the widely used models for dealing with this kind of data is the equal correlation coefficients model. Before using this model, it is necessary to test whether the correlation coefficients in each group are actually equal. In this paper, three test statistics (likelihood ratio, Wald-type, and Score) are derived for this purpose. Simulation results show that the Score test statistic maintains the type I error rate and has satisfactory power, and it is therefore recommended among the three methods. The likelihood ratio test is overly conservative in most cases, and the Wald-type statistic is not robust in terms of empirical type I error. Three real examples, including a multi-centre Phase II double-blind placebo-controlled randomized trial, illustrate the three proposed test statistics.

4.
Sampling cost is a crucial factor in sample size planning, particularly when the treatment group is more expensive than the control group. Using the distribution-free Wilcoxon–Mann–Whitney test for two independent samples and the van Elteren test for randomized block designs, we develop approximate sample size formulas for cases where the distribution of the data is non-normal and/or unknown. For a given statistical power, we derive the optimal sample size allocation ratio under cost constraints, so that the resulting sample sizes minimize either the total cost or the total sample size; conversely, for a given total cost, the optimal allocation maximizes the statistical power of the test. The proposed formulas are not only novel but also quick and easy to apply. Real data from a clinical trial illustrate how to choose sample sizes for a randomized two-block design. For nonparametric methods, no existing commercial software for sample size planning considers the cost factor, so the proposed methods provide important insights into the impact of cost constraints.
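The cost-optimal allocation idea can be sketched concretely. The snippet below is an illustration under two stated assumptions, not the authors' own formulas: it pairs Noether's large-sample approximation for Wilcoxon–Mann–Whitney sample sizes with the classical square-root rule that, for fixed power, total cost is minimized when n_treat/n_ctrl = sqrt(c_ctrl/c_treat). All names and defaults are hypothetical.

```python
import math
from statistics import NormalDist

def wmw_cost_optimal_sizes(p1, alpha=0.05, beta=0.20, c_treat=4.0, c_ctrl=1.0):
    """Cost-optimal sample sizes for a two-sided Wilcoxon-Mann-Whitney test.

    Assumptions (a sketch, not the paper's derivation):
      * Noether's approximation: power depends on the effect size
        p1 = P(X > Y) through the quantity 1/n_treat + 1/n_ctrl;
      * the square-root rule: total cost c_treat*n_treat + c_ctrl*n_ctrl
        is minimized, for fixed power, at n_treat/n_ctrl = sqrt(c_ctrl/c_treat).
    """
    z = NormalDist().inv_cdf
    za, zb = z(1 - alpha / 2), z(1 - beta)
    # Required value of 1/n_treat + 1/n_ctrl from Noether's formula.
    v = 12.0 * (p1 - 0.5) ** 2 / (za + zb) ** 2
    r = math.sqrt(c_ctrl / c_treat)           # optimal n_treat / n_ctrl
    n_treat = math.ceil((r + 1.0) / v)        # solve 1/n_t + 1/n_c = v with n_t = r*n_c
    n_ctrl = math.ceil((r + 1.0) / (r * v))
    return n_treat, n_ctrl
```

With p1 = 0.65, a 5% two-sided level, 80% power, and a treatment arm four times as costly as the control, the rule assigns roughly twice as many controls as treated subjects.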

5.
In ophthalmologic or otolaryngologic studies, each subject may contribute measurements on paired organs to the analysis, and a number of statistical methods have been proposed for such bilateral correlated data. In practice, it is important to account for confounding, since ignoring a confounder may lead to unreliable conclusions; stratified analysis can therefore be used to adjust for the confounder's effect on statistical inference. In this article, we derive three test procedures for testing the homogeneity of the difference of two proportions for stratified correlated paired binary data under the equal correlation model assumption. The performance of the proposed procedures is examined through Monte Carlo simulation. The results show that the Score test usually controls the type I error rate well and has high power, and it is therefore recommended among the three methods. An example from an otolaryngologic study illustrates the three test procedures.

6.
Rank tests are considered that compare t treatments in repeated measures designs. A statistic is given that contains as special cases several that have been proposed for this problem, including one that corresponds to the randomized block ANOVA statistic applied to the rank-transformed data. Another statistic is proposed, with a null distribution that holds under more general conditions: the rank transform of the Hotelling statistic for repeated measures. A statistic of this type is also given for data that are ordered categorical rather than fully ranked. Unlike the Friedman statistic, the statistics discussed in this article utilize a single ranking of the entire sample. Power calculations for an underlying normal distribution indicate that the rank-transformed ANOVA test can be substantially more powerful than the Friedman test.
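The single-ranking construction can be made concrete: pool all n·t observations, rank them once (midranks for ties), and apply the ordinary randomized-block ANOVA F statistic to the ranks. The sketch below is an illustrative stdlib implementation of that recipe, not code from the article.

```python
def midranks(values):
    """Ranks 1..n with tied values replaced by their average (mid) rank."""
    order = sorted(range(len(values)), key=lambda k: values[k])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(values):
        j = i
        while j + 1 < len(values) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2.0 + 1.0
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def rank_transform_anova_F(data):
    """data[i][j]: response for block (subject) i under treatment j.
    Rank the ENTIRE sample once (unlike Friedman's within-block ranking),
    then compute the randomized-block ANOVA F statistic on the ranks."""
    n, t = len(data), len(data[0])
    flat = [x for row in data for x in row]
    r = midranks(flat)
    R = [r[i * t:(i + 1) * t] for i in range(n)]
    grand = sum(r) / (n * t)
    treat_means = [sum(R[i][j] for i in range(n)) / n for j in range(t)]
    block_means = [sum(R[i]) / t for i in range(n)]
    ss_treat = n * sum((m - grand) ** 2 for m in treat_means)
    ss_block = t * sum((m - grand) ** 2 for m in block_means)
    ss_err = sum((x - grand) ** 2 for x in r) - ss_treat - ss_block
    return (ss_treat / (t - 1)) / (ss_err / ((n - 1) * (t - 1)))
```

Referring the statistic to an F distribution with (t-1, (n-1)(t-1)) degrees of freedom gives the rank-transformed ANOVA test that the power calculations above compare against Friedman's test.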

7.
Robust tests for comparing scale parameters, based on deviances (absolute deviations from the median), are examined. Higgins (2004) proposed a permutation test for comparing two treatments based on the ratio of deviances, but the performance of this procedure has not been investigated. A simulation study examines the performance of Higgins' test relative to other deviance-based tests of scale that have been shown in the literature to have good properties. An extension of Higgins' procedure to three or more treatments is proposed, and a second simulation study compares its performance to other omnibus tests for comparing scale. While no procedure emerged as the preferred choice in every scenario, Higgins' tests are found to perform well overall with respect to Type I error rate and power.
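The deviance-ratio statistic is simple enough to sketch with a permutation loop. The implementation below is a hedged illustration of the idea, not Higgins' own procedure; in particular, permuting the precomputed deviances (rather than re-deriving medians on each shuffle) is an implementation choice made here for brevity.

```python
import random
import statistics

def deviances(sample):
    """Absolute deviations from the sample median."""
    m = statistics.median(sample)
    return [abs(x - m) for x in sample]

def higgins_scale_test(x, y, n_perm=9999, seed=1):
    """Two-sided permutation test for scale based on the ratio of mean
    deviances.  Implementation choice (an assumption, not from the
    paper): deviances are computed once and then shuffled between groups."""
    rng = random.Random(seed)
    dx, dy = deviances(x), deviances(y)
    obs = statistics.fmean(dx) / statistics.fmean(dy)
    pooled = dx + dy
    n, count = len(dx), 0
    for _ in range(n_perm):
        rng.shuffle(pooled)
        ratio = statistics.fmean(pooled[:n]) / statistics.fmean(pooled[n:])
        # Two-sided: a ratio counts as extreme on either side of 1.
        if max(ratio, 1.0 / ratio) >= max(obs, 1.0 / obs):
            count += 1
    return (count + 1) / (n_perm + 1)
```

When the two samples are copies of each other the observed ratio is 1, so every permutation is at least as extreme and the p-value is exactly 1.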

8.
In the last few years, two adaptive tests for paired data have been proposed. One, proposed by Freidlin et al. [On the use of the Shapiro–Wilk test in two-stage adaptive inference for paired data from moderate to very heavy tailed distributions, Biom. J. 45 (2003), pp. 887–900], is a two-stage procedure that uses a selection statistic to determine which of three rank scores to use in computing the test statistic. The other, proposed by O'Gorman [Applied Adaptive Statistical Methods: Tests of Significance and Confidence Intervals, Society for Industrial and Applied Mathematics, Philadelphia, 2004], uses a weighted t-test with weights determined by the data. These two methods, and an earlier rank-based adaptive test proposed by Randles and Hogg [Adaptive Distribution-free Tests, Commun. Stat. 2 (1973), pp. 337–356], are compared with the t-test and with Wilcoxon's signed-rank test. For sample sizes between 15 and 50, the results show that the adaptive tests of Freidlin et al. and of O'Gorman have higher power than the other tests over a range of moderate to long-tailed symmetric distributions, and that O'Gorman's test has greater power than the others for short-tailed distributions. For sample sizes greater than 50, and for small sample sizes, O'Gorman's adaptive test has the highest power for most distributions.
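The two-stage logic (compute a selector statistic from the data, then let it pick the test) can be illustrated in a few lines. The selector below, sample excess kurtosis of the differences with a cut-off of 1, is a stand-in chosen only to make the sketch self-contained; it is neither the Shapiro–Wilk selector of Freidlin et al. nor O'Gorman's weighting scheme.

```python
import math
import statistics

def adaptive_paired_test(diff):
    """Two-stage adaptive test on paired differences, as a sketch:
    stage 1 computes a selector statistic, stage 2 applies the test it
    selects.  The selector used here (sample excess kurtosis with a
    cut-off of 1) is a hypothetical stand-in for the Shapiro-Wilk
    selector of Freidlin et al.; the cut-off is arbitrary."""
    n = len(diff)
    m = statistics.fmean(diff)
    s = statistics.stdev(diff)
    kurt = sum(((d - m) / s) ** 4 for d in diff) / n - 3.0
    if kurt < 1.0:
        # Near-normal or short tails: use the paired t statistic.
        return "t", m / (s / math.sqrt(n))
    # Heavy tails: use the (normal-approximated) sign test statistic.
    pos = sum(d > 0 for d in diff)
    return "sign", (pos - n / 2.0) / math.sqrt(n / 4.0)
```

A short-tailed sample of differences is routed to the t statistic, while a sample containing a gross outlier is routed to the more robust sign test.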

9.
Several methods exist for testing the equality of several treatments and a control against the one-sided alternative that the treatments are better than the control. These include Dunnett's test, Bartholomew's likelihood-ratio test, the Abelson-Tukey-Schaafsma-Smid optimal-contrast test, and the multiple-contrast test of Mukerjee, Robertson, and Wright. A new test is proposed based on an approximation of Bartholomew's likelihood-ratio test, using a circular cone in place of the alternative-hypothesis cone. The circular-cone test has excellent power characteristics similar to those of Bartholomew's test. Moreover, it is simpler to compute and may be used with unequal sample sizes.

10.
An adaptive test is proposed for the one-way layout. This procedure uses the order statistics of the combined data to obtain estimates of percentiles, which are used to select an appropriate set of rank scores for the one-way test statistic; it is designed to have reasonably high power over a range of distributions and generalizes an existing two-sample adaptive test procedure. In a Monte Carlo study, the power and significance level of the F-test, the Kruskal-Wallis test, the normal scores test, and the adaptive test were evaluated for the one-way layout. All tests maintained their significance level for data sets having at least 24 observations. The simulation results show that the adaptive test is more powerful than the other tests for skewed distributions if the total number of observations equals or exceeds 24. For data sets having at least 60 observations, the adaptive test is also more powerful than the F-test for some symmetric distributions.

11.
A disease prevalence can be estimated by classifying subjects according to whether they have the disease. When gold-standard tests are too expensive to be applied to all subjects, partially validated data can be obtained by double-sampling, in which all individuals are classified by a fallible classifier and some of the individuals are validated by the gold-standard classifier. In practice, however, such an infallible classifier may not be available. In this article, we consider two models in which both classifiers are fallible and propose four asymptotic test procedures for comparing disease prevalence in two groups. Corresponding sample size formulae and the validation ratio given the total sample size are also derived and evaluated. Simulation results show that (i) the Score test performs well and its sample size formula is accurate in terms of empirical power and size under both models; (ii) the Wald test based on the variance estimator with parameters estimated under the null hypothesis outperforms the others even for small sample sizes under Model II, and the sample size estimated by this test is also accurate; (iii) the estimated validation ratios based on all tests are accurate. Malaria data are used to illustrate the proposed methodologies.

12.
A new family of rank tests is proposed to test the equality of two multivariate failure time distributions with censored observations. The tests are very simple: they are based on a transformation of the multivariate rank vectors to a univariate rank score, and the resulting statistics belong to the familiar class of weighted logrank test statistics. The new procedure is also applicable to multivariate observations in general, such as repeated measures, some of which may be missing. To investigate the performance of the proposed tests, a simulation study was conducted with bivariate exponential models for various censoring rates. The size and power of these tests against Lehmann alternatives were compared to those of two other tests (Wei and Lachin, 1984; Wei and Knuiman, 1987). In all simulations the new procedures provide relatively good power and accurate control of the size of the test. A real example from the National Cooperative Gallstone Study is given.

13.
We propose 'Dunnett-type' test procedures to test for simple tree order restrictions on the means of p independent normal populations. The new tests are based on the estimation procedures introduced by Hwang and Peddada and later by Dunbar, Conaway and Peddada. The proposed procedures are also extended to test for 'two-sided' simple tree order restrictions, and for non-normal data, nonparametric versions based on ranked data are suggested. Using computer simulations, we compare the proposed test procedures with some existing procedures in terms of size and power. Our simulation study suggests that the procedures compete well with the existing ones for both one-sided and two-sided simple tree alternatives. In some instances, especially for two-sided alternatives or non-normally distributed data, the gains in power from the proposed procedures can be substantial.

14.
As a nonparametric randomness test, the positive and negative runs test is widely used in practice because of the simplicity of its procedure. The test can lose efficiency, however, if the alternative distribution is symmetric about 0.5, and it can only be applied to test the randomness of a sequence from the uniform distribution. In this paper, we introduce an adaptive positive and negative runs test that maximizes the power function by choosing the optimal cut point, and we extend the test to check the randomness of a sequence generated from any other given distribution. Furthermore, we derive the exact distribution and obtain asymptotic critical values of the proposed test statistics. A simulation study shows that the efficiency of the proposed adaptive positive and negative runs test is competitive with the existing test.
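A non-adaptive version of the runs test, with the cut point exposed as the parameter the adaptive method optimizes over, can be sketched as follows. The Wald-Wolfowitz normal approximation used here is standard, but the function names are this sketch's own.

```python
import math

def runs_test_z(seq, cut=0.5):
    """Positive/negative runs test: mark each value as above or below
    the cut point, count runs of equal marks, and standardize the count
    with the usual runs-test moments (normal approximation)."""
    signs = [x > cut for x in seq]
    n1 = sum(signs)
    n2 = len(signs) - n1
    n = n1 + n2
    runs = 1 + sum(signs[i] != signs[i - 1] for i in range(1, n))
    mu = 1.0 + 2.0 * n1 * n2 / n
    var = 2.0 * n1 * n2 * (2.0 * n1 * n2 - n) / (n * n * (n - 1.0))
    return (runs - mu) / math.sqrt(var)
```

A strongly alternating sequence yields a large positive z (too many runs), while a clumped sequence yields a large negative z; the adaptive proposal above amounts to searching over `cut` for the most powerful choice.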

15.
A Gaussian random function is a functional version of the normal distribution. This paper proposes a statistical hypothesis test of whether or not a random function is Gaussian. A parameter that equals 0 for a Gaussian random function is considered, and an unbiased estimator of it is given. The asymptotic distribution of the estimator is studied and used to construct a test statistic and discuss its asymptotic power. The performance of the proposed test is investigated through several numerical simulations, and an illustrative example is presented.

16.
The score test and the goodness-of-fit (GOF) test for the inverse Gaussian distribution, in particular the latter, are known to have large size distortion and hence unreliable power when asymptotic critical values are used. We show in this paper that with appropriately bootstrapped critical values these tests become second-order accurate, with size distortion essentially eliminated and power more reliable. Two major generalizations of the score test are made: one allows the data to be right-censored, and the other allows covariate effects. A data-mapping method is introduced so that the bootstrap can produce censored data conformable with the null model. Monte Carlo results clearly favour the proposed bootstrap tests. Real data illustrations are given.

17.
In the two-sample location-shift problem, Student's t test or Wilcoxon's rank-sum test is commonly applied; the latter can be more powerful for non-normal data. Here, we propose to combine the two tests within a maximum test. We show that the constructed maximum test controls the type I error rate and has good power characteristics for a variety of distributions: its power is close to that of the more powerful of the two tests. Thus, irrespective of the distribution, the maximum test stabilizes the power, and carrying out the maximum test is a more powerful strategy than selecting one of the single tests. The proposed test is applied to data from a clinical trial.
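The maximum-test idea can be sketched with a permutation calibration: compute both standardized statistics, take the larger absolute value, and obtain the p-value by permuting group labels. This is an illustrative stdlib implementation under the assumption of no ties, not the authors' exact procedure.

```python
import math
import random
import statistics

def t_stat(x, y):
    """Welch two-sample t statistic."""
    se = math.sqrt(statistics.variance(x) / len(x) +
                   statistics.variance(y) / len(y))
    return (statistics.fmean(x) - statistics.fmean(y)) / se

def wilcoxon_z(x, y):
    """Standardized Wilcoxon rank-sum statistic (assumes no ties)."""
    nx, ny = len(x), len(y)
    rank = {v: i + 1 for i, v in enumerate(sorted(x + y))}
    w = sum(rank[v] for v in x)
    mu = nx * (nx + ny + 1) / 2.0
    sd = math.sqrt(nx * ny * (nx + ny + 1) / 12.0)
    return (w - mu) / sd

def max_test_pvalue(x, y, n_perm=2000, seed=7):
    """Maximum test: the statistic is max(|t|, |Wilcoxon z|); its null
    distribution is approximated by random permutations of the labels."""
    rng = random.Random(seed)
    stat = lambda a, b: max(abs(t_stat(a, b)), abs(wilcoxon_z(a, b)))
    obs, pooled, n = stat(x, y), x + y, len(x)
    count = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)
        if stat(pooled[:n], pooled[n:]) >= obs:
            count += 1
    return (count + 1) / (n_perm + 1)
```

Because both component statistics are standardized, taking the maximum of their absolute values is meaningful, and the permutation step calibrates the combined statistic without any distributional assumption beyond exchangeability.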

18.
A sequential method for approximating a general permutation test (SAPT) is proposed and evaluated. Permutations are randomly generated from some set G, and a sequential probability ratio test (SPRT) is used to determine whether an observed test statistic falls sufficiently far in the tail of the permutation distribution to warrant rejecting some hypothesis. An estimate and bounds on the power function of the SPRT are used to find bounds on the effective significance level of the SAPT. Guidelines are developed for choosing parameters so as to obtain a desired significance level and minimize the number of permutations needed to reach a decision. A theoretical estimate of the average number of permutations under the null hypothesis is given, along with simulation results demonstrating the power and average number of permutations for various alternatives. The sequential approximation retains the generality of the permutation test while avoiding the computational complexities that arise in attempting to compute the full permutation distribution exactly.
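The SAPT recipe can be sketched as a Wald SPRT on the Bernoulli exceedance indicators generated by successive random permutations. Everything below (the interface taking a caller-supplied `perm_stat` function, the indifference band alpha ± eps, and the error rates) is a hypothetical parameterization for illustration, not the paper's guidelines.

```python
import math
import random

def sapt(obs_stat, perm_stat, alpha=0.05, eps=0.01,
         a_err=0.001, b_err=0.001, max_perm=100000, seed=3):
    """Sequential approximation to a permutation test: each random
    permutation yields the indicator I(T_perm >= T_obs), and a Wald
    SPRT decides between p <= alpha - eps (reject the hypothesis) and
    p >= alpha + eps (do not reject), where p is the true permutation
    p-value.  perm_stat(rng) must return the statistic for one fresh
    random permutation (a hypothetical interface for this sketch)."""
    rng = random.Random(seed)
    p0, p1 = alpha + eps, alpha - eps
    upper = math.log((1 - b_err) / a_err)   # evidence for p0: do not reject
    lower = math.log(b_err / (1 - a_err))   # evidence for p1: reject
    llr = 0.0                               # log LR of p0 against p1
    for n in range(1, max_perm + 1):
        if perm_stat(rng) >= obs_stat:
            llr += math.log(p0 / p1)
        else:
            llr += math.log((1 - p0) / (1 - p1))
        if llr >= upper:
            return "do not reject", n
        if llr <= lower:
            return "reject", n
    return "inconclusive", max_perm
```

With `perm_stat` drawing uniform variates, an observed statistic far in the tail triggers an early rejection, while a central one is dismissed after a modest number of permutations instead of a full enumeration.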

19.
This paper explores how to test the equality of two location vectors in high-dimensional settings. We introduce a rank-based projection test under elliptical symmetry. The optimal projection direction is derived according to an asymptotically and locally best power criterion, and a data-splitting strategy is used to estimate the optimal projection and construct the test statistics. The limiting null distribution and power function of the proposed statistics are thoroughly investigated under mild assumptions. The test is shown to control type I error rates well and outperforms several existing methods in a broad range of settings, especially in the presence of strong correlation structures. Simulation studies confirm the asymptotic results, and a real data example demonstrates the advantage of the proposed procedure.

20.
Student's t test as well as Wilcoxon's rank-sum test may be inefficient in situations where treatments bring about changes in both location and scale. To rectify this, O'Brien (1988, Journal of the American Statistical Association 83, 52-61) proposed two new statistics, the generalized t and generalized rank-sum procedures, which may be much more powerful than their traditional counterparts in such situations. Recently, however, Blair and Morel (1991, Statistics in Medicine, in press) have shown that referencing these new statistics to standard F tables, as recommended by O'Brien, results in inflation of Type I error rates. This paper provides tables of critical values that do not produce such inflation. Use of these new critical values results in Type I error rates near nominal levels for the generalized t statistic and slightly conservative rates for the generalized rank-sum test. In addition to the critical values, some new power results are given for the generalized tests.


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号