Similar articles
A total of 20 similar articles were retrieved.
1.
We consider multiple comparison test procedures among treatment effects in a randomized block design. We propose closed testing procedures based on maximum values of some two-sample t test statistics and based on F test statistics. It is shown that the proposed procedures are more powerful than single-step procedures and the REGW (Ryan/Einot–Gabriel/Welsch)-type tests. Next, we consider the randomized block design under simple ordered restrictions of treatment effects. We propose closed testing procedures based on maximum values of two-sample one-sided t test statistics and based on Bartholomew's statistics for all pairwise comparisons of treatment effects. Although single-step multiple comparison procedures are widely used, their power is low for a large number of groups. The closed testing procedures stated in the present article are more powerful than the single-step procedures. Simulation studies are performed under the null hypothesis and some alternative hypotheses. In these studies, the proposed procedures show good performance.
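The closed testing principle underlying such procedures is mechanical enough to sketch. The Python snippet below is a minimal, generic illustration of closed testing, not the max-t or F local tests of the paper above: an elementary hypothesis is rejected only when every intersection hypothesis containing it is rejected by a user-supplied level-α local test; the Bonferroni local test and the p-values in the example are placeholders.

```python
from itertools import combinations

def closed_testing(p_values, local_test, alpha=0.05):
    """Generic closed testing principle.

    p_values   : dict {hypothesis label: p-value of the elementary test}
    local_test : callable(subset_pvals, alpha) -> bool, a level-alpha test
                 of the intersection hypothesis for that subset
    Returns the set of elementary hypotheses rejected with FWER <= alpha.
    """
    labels = list(p_values)
    # Test every non-empty intersection hypothesis with the local test.
    rejected_intersections = set()
    for k in range(1, len(labels) + 1):
        for subset in combinations(labels, k):
            pvals = [p_values[h] for h in subset]
            if local_test(pvals, alpha):
                rejected_intersections.add(frozenset(subset))
    # An elementary hypothesis is rejected iff every intersection
    # hypothesis containing it was rejected.
    rejected = set()
    for h in labels:
        if all(frozenset(s) in rejected_intersections
               for k in range(1, len(labels) + 1)
               for s in combinations(labels, k) if h in s):
            rejected.add(h)
    return rejected

# Example: Bonferroni as a (conservative) local intersection test.
bonferroni = lambda pvals, alpha: min(pvals) <= alpha / len(pvals)
print(closed_testing({"A-B": 0.001, "A-C": 0.012, "B-C": 0.30}, bonferroni))
```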

2.
Location-scale invariant Bickel–Rosenblatt goodness-of-fit tests (IBR tests) are considered in this paper to test the hypothesis that f, the common density function of the observed independent d-dimensional random vectors, belongs to a null location-scale family of density functions. The asymptotic behaviour of the test procedures for fixed and non-fixed bandwidths is studied using a unifying approach. We establish the limiting null distribution of the test statistics and the consistency of the associated tests, and we derive their asymptotic power against sequences of local alternatives. These results show the asymptotic superiority, for fixed and local alternatives, of IBR tests with fixed bandwidth over IBR tests with non-fixed bandwidth.

3.
Multiple Hypotheses Testing with Weights
In this paper we offer a multiplicity of approaches and procedures for multiple testing problems with weights. Some rationales for incorporating weights in multiple hypotheses testing are discussed. Various type-I error rates and different possible formulations are considered, for both the intersection hypothesis testing and the multiple hypotheses testing problems. An optimal per-family weighted error-rate controlling procedure à la Spjøtvoll (1972) is obtained. This model serves as a vehicle for demonstrating the different implications of the approaches to weighting. Alternative approaches to that of Holm (1979) for family-wise error-rate control with weights are discussed, one involving an alternative procedure for family-wise error-rate control, and the other involving the control of a weighted family-wise error rate. Extensions and modifications of the procedures based on Simes (1986) are given. These include a test of the overall intersection hypothesis with general weights, and weighted sequentially rejective procedures for testing the individual hypotheses. The false discovery rate controlling approach and procedure of Benjamini & Hochberg (1995) are extended to allow for different weights.
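As a concrete illustration of family-wise error-rate control with weights, here is a sketch of a weighted Holm step-down procedure in Python. The rule used (order by p_i/w_i and compare each p-value against α times its weight divided by the total weight of the hypotheses not yet rejected) follows the standard weighted sequentially rejective scheme; the p-values and weights in the example are made up.

```python
import numpy as np

def weighted_holm(pvals, weights, alpha=0.05):
    """Weighted Holm step-down procedure (sketch).

    Hypotheses are ordered by p_i / w_i; at each step the current p-value
    is compared with alpha * w_i / (sum of weights of the hypotheses not
    yet rejected).  Stops at the first non-rejection.
    Returns a boolean rejection vector.
    """
    pvals = np.asarray(pvals, dtype=float)
    weights = np.asarray(weights, dtype=float)
    order = np.argsort(pvals / weights)          # most significant first
    reject = np.zeros(len(pvals), dtype=bool)
    remaining = weights[order].sum()
    for idx in order:
        if pvals[idx] <= alpha * weights[idx] / remaining:
            reject[idx] = True
            remaining -= weights[idx]
        else:
            break                                 # step-down: stop here
    return reject

# Example: a heavily weighted primary hypothesis and two secondary ones.
print(weighted_holm([0.030, 0.020, 0.200], weights=[2.0, 0.5, 0.5]))
```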

4.
In this paper, we consider testing the location parameter with multilevel (or hierarchical) data. A general family of weighted test statistics is introduced. This family includes extensions to the case of multilevel data of familiar procedures like the t, the sign and the Wilcoxon signed-rank tests. Under mild assumptions, the test statistics have a null limiting normal distribution, which facilitates their use. An investigation of the relative merits of selected members of the family of tests is achieved theoretically by deriving their asymptotic relative efficiency (ARE) and empirically via a simulation study. It is shown that the performance of a test depends on the cluster configurations and on the intracluster correlations. Explicit formulas for optimal weights and a discussion of the impact of omitting a level are provided for 2- and 3-level data. It is shown that using appropriate weights can greatly improve the performance of the tests. Finally, the use of the new tests is illustrated with a real data example.

5.
For testing the equality of two survival functions, the weighted logrank test and the weighted Kaplan–Meier test are the two most widely used methods. Each of these tests has advantages and drawbacks against various alternatives, and the possible types of survival differences cannot be specified in advance. Hence, how to choose a single test, or combine a number of competitive tests, to detect differences between two survival functions without suffering a substantial loss in power is an important issue. Instead of directly using a particular test that generally performs well in some situations and poorly in others, we consider a class of tests indexed by a weight parameter for testing the equality of two survival functions. A delete-1 jackknife method is implemented to select the weight that minimizes the variance of the test statistic. Numerical experiments are performed under various alternatives to illustrate the superiority of the proposed method. Finally, the proposed testing procedure is applied to two real-data examples.
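The weight-selection idea (pick, from a family of statistics indexed by a weight, the one whose delete-1 jackknife variance is smallest) can be sketched generically. The snippet below is not the paper's survival test: `stat_family`, the data, and the weight grid are hypothetical placeholders that only illustrate the jackknife-based selection step.

```python
import numpy as np

def jackknife_variance(data, stat):
    """Delete-1 jackknife estimate of the variance of stat(data)."""
    n = len(data)
    loo = np.array([stat(np.delete(data, i, axis=0)) for i in range(n)])
    return (n - 1) / n * np.sum((loo - loo.mean()) ** 2)

def select_weight(data, stat_family, candidate_weights):
    """Pick the weight whose statistic has the smallest jackknife variance,
    mimicking the weight-selection idea described in the abstract above."""
    variances = [jackknife_variance(data, lambda d, w=w: stat_family(d, w))
                 for w in candidate_weights]
    return candidate_weights[int(np.argmin(variances))]

# Toy illustration: a statistic that blends two summaries of the sample;
# the data, the statistic family and the weight grid are all hypothetical.
rng = np.random.default_rng(0)
data = rng.exponential(scale=1.0, size=40)
stat_family = lambda d, w: w * d.mean() + (1 - w) * np.median(d)
print(select_weight(data, stat_family, candidate_weights=np.linspace(0, 1, 11)))
```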

6.
Specification tests for conditional heteroskedasticity that are derived under the assumption that the density of the innovation is Gaussian may not be powerful in light of the recent empirical results that the density is not Gaussian. We obtain specification tests for conditional heteroskedasticity under the assumption that the innovation density is a member of a general family of densities. Our test statistics maximize asymptotic local power and weighted average power criteria for the general family of densities. We establish both first-order and second-order theory for our procedures. Simulations indicate that asymptotic power gains are achievable in finite samples.

7.
The problem of testing whether two samples of possibly right-censored survival data come from the same distribution is considered. The aim is to develop a test capable of detecting a wide spectrum of alternatives. A new class of tests based on Neyman's embedding idea is proposed. The null hypothesis is tested against a model where the hazard ratio of the two survival distributions is expressed by several smooth functions. A data-driven approach to the selection of these functions is studied. Asymptotic properties of the proposed procedures are investigated under fixed and local alternatives. Small-sample performance is explored via simulations, which show that the power of the proposed tests appears to be more robust than the power of some versatile tests previously proposed in the literature (such as combinations of weighted logrank tests, or Kolmogorov–Smirnov tests).

8.
We propose a class of flexible non-parametric tests for the presence of dependence between components of a random vector based on weighted Cramér–von Mises functionals of the empirical copula process. The weights act as a tuning parameter and are shown to significantly influence the power of the test, making it more sensitive to different types of dependence. Asymptotic properties of the test are stated in the general case, for an arbitrary bounded and integrable weighting function, and computational formulas for a number of weighted statistics are provided. Several issues relating to the choice of the weights are discussed, and a simulation study is conducted to investigate the power of the test under a variety of dependence alternatives. The greatest gain in power is found to occur when the weights are set proportional to the true deviations from the independence copula.
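A minimal sketch of a weighted Cramér–von Mises-type copula statistic is given below, assuming the common computational shortcut of evaluating the discrepancy between the empirical and independence copulas at the pseudo-observations; calibration of the test (e.g., by a bootstrap) is omitted, and the tail-emphasising weight function in the example is hypothetical.

```python
import numpy as np

def weighted_cvm_copula_stat(x, weight=lambda u: 1.0):
    """Weighted Cramer-von-Mises-type distance between the empirical
    copula and the independence copula at the pseudo-observations.

    x      : (n, d) data matrix
    weight : weight function on [0,1]^d (default: unweighted)
    """
    n, d = x.shape
    # Pseudo-observations: componentwise ranks rescaled to (0, 1).
    u = np.argsort(np.argsort(x, axis=0), axis=0) + 1.0
    u /= (n + 1)
    # Empirical copula evaluated at each pseudo-observation.
    C_n = np.array([np.mean(np.all(u <= u[i], axis=1)) for i in range(n)])
    # Independence copula Pi(u) = u_1 * ... * u_d.
    Pi = np.prod(u, axis=1)
    w = np.array([weight(u[i]) for i in range(n)])
    return n * np.mean(w * (C_n - Pi) ** 2)

# Example: emphasise the lower tail with a hypothetical weight function.
rng = np.random.default_rng(1)
z = rng.normal(size=(200, 2))
x = np.column_stack([z[:, 0], 0.6 * z[:, 0] + 0.8 * z[:, 1]])  # dependent pair
print(weighted_cvm_copula_stat(x, weight=lambda u: np.exp(-u.sum())))
```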

9.
Methods for a sequential test of a dose-response effect in pre-clinical studies are investigated. The objective of the test procedure is to compare several dose groups with a zero-dose control. The sequential testing is conducted within a closed family of one-sided tests. The procedures investigated are based on a monotonicity assumption. These closed procedures strongly control the familywise error rate while providing information about the shape of the dose-response relationship. The performance of the sequential testing procedures is compared via a Monte Carlo simulation study. We illustrate the procedures by application to a real data set.

10.
Partition testing in dose-response studies with multiple endpoints
Dose-response studies with multiple endpoints can be formulated as closed testing or partition testing problems. When the endpoints are primary and secondary, whether the order in which the doses are to be tested is pre-determined or sample-determined leads to different partitionings of the parameter space corresponding to the null hypotheses to be tested. We use the case of two doses and two endpoints to illustrate how to apply the partitioning principle to construct multiple tests that control the appropriate error rate. Graphical representation can be useful in visualizing the decision process.

11.
This paper discusses a class of tests of lack-of-fit of a parametric regression model when the design is non-random and uniform on [0,1]. These tests are based on certain minimized distances between a nonparametric regression function estimator and the parametric model being fitted. We investigate the asymptotic null distributions of the proposed tests, their consistency, and their asymptotic power against a large class of fixed alternatives and sequences of local nonparametric alternatives, respectively. The best fitted parameter estimate is seen to be n^{1/2}-consistent and asymptotically normal. A crucial result needed for proving these results is a central limit lemma for weighted degenerate U-statistics where the weights are arrays of some non-random real numbers. This result is of independent interest and an extension of a result of Hall for non-weighted degenerate U-statistics.

12.
This paper reviews global and multiple tests for the combination of n hypotheses using the ordered p-values of the n individual tests. In 1987, Röhmel and Streitberg presented a general method to construct global level α tests based on ordered p-values when there exists no prior knowledge regarding the joint distribution of the corresponding test statistics. In the case of independent test statistics, construction of global tests is available by means of recursive formulae presented by Bicher (1989), Kornatz (1994) and Finner and Roters (1994). Multiple test procedures can be developed by applying the closed test principle using these global tests as building blocks. Liu (1996) proposed representing closed tests by means of "critical matrices" which contain the critical values of the global tests. Within the framework of these theoretical concepts, well-known global tests and multiple test procedures are classified and the relationships between the different tests are characterised.
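As a small worked example of a global test built from ordered p-values, the following sketch implements the Simes (1986) test of the intersection hypothesis; the p-values in the example are arbitrary.

```python
import numpy as np

def simes_global_test(pvals, alpha=0.05):
    """Simes (1986) level-alpha test of the global intersection hypothesis:
    reject if p_(i) <= i * alpha / n for at least one ordered p-value."""
    p = np.sort(np.asarray(pvals, dtype=float))
    n = len(p)
    thresholds = alpha * np.arange(1, n + 1) / n
    return bool(np.any(p <= thresholds))

print(simes_global_test([0.008, 0.04, 0.21, 0.50, 0.70]))  # True
```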

13.
Multiple testing procedures defined by directed, weighted graphs have recently been proposed as an intuitive visual tool for constructing multiple testing strategies that reflect the often complex contextual relations between hypotheses in clinical trials. Many well-known sequentially rejective tests, such as (parallel) gatekeeping tests or hierarchical testing procedures, are special cases of the graph-based tests. We generalize these graph-based multiple testing procedures to adaptive trial designs with an interim analysis. These designs permit mid-trial design modifications based on unblinded interim data as well as external information, while providing strong familywise error rate control. To maintain the familywise error rate, it is not required to prespecify the adaptation rule in detail. Because the adaptive test does not require knowledge of the multivariate distribution of test statistics, it is applicable in a wide range of scenarios including trials with multiple treatment comparisons, endpoints or subgroups, or combinations thereof. Examples of adaptations are dropping of treatment arms, selection of subpopulations, and sample size reassessment. If, in the interim analysis, it is decided to continue the trial as planned, the adaptive test reduces to the originally planned multiple testing procedure. Only if adaptations are actually implemented does an adjusted test need to be applied. The procedure is illustrated with a case study and its operating characteristics are investigated by simulations.
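A sketch of the (non-adaptive) Bonferroni-based graphical procedure that such adaptive tests generalize is shown below: hypotheses carry local weights, and upon each rejection the rejected hypothesis' level is propagated along the graph and the transition weights are rewired. The example graph encodes a weighted-Holm-type procedure with two hypotheses; all numbers are illustrative.

```python
import numpy as np

def graph_based_test(pvals, w, G, alpha=0.025):
    """Sequentially rejective graph-based multiple test (Bonferroni-based
    graphical approach, sketch).  w are the initial local weights (sum <= 1)
    and G[i, j] is the fraction of H_i's level passed on to H_j when H_i is
    rejected.  Returns the set of rejected hypothesis indices."""
    p = np.asarray(pvals, dtype=float)
    w = np.asarray(w, dtype=float).copy()
    G = np.asarray(G, dtype=float).copy()
    active = set(range(len(p)))
    rejected = set()
    while True:
        # Find an active hypothesis that can be rejected at its local level.
        cand = [i for i in active if p[i] <= w[i] * alpha]
        if not cand:
            return rejected
        i = cand[0]
        rejected.add(i)
        active.remove(i)
        # Redistribute the level of H_i and rewire the remaining graph.
        new_G = G.copy()
        for j in active:
            w[j] = w[j] + w[i] * G[i, j]
            for k in active:
                if j == k:
                    continue
                denom = 1.0 - G[j, i] * G[i, j]
                new_G[j, k] = (G[j, k] + G[j, i] * G[i, k]) / denom if denom > 0 else 0.0
        G = new_G
        w[i] = 0.0

# Example: two hypotheses splitting alpha and passing their level to each
# other upon rejection (a weighted Holm procedure expressed as a graph).
w = [0.5, 0.5]
G = [[0.0, 1.0], [1.0, 0.0]]
print(graph_based_test([0.01, 0.02], w, G, alpha=0.05))
```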

14.
The rank envelope test (Myllymäki et al. in J R Stat Soc B, doi: 10.1111/rssb.12172, 2016) is proposed as a solution to the multiple testing problem for Monte Carlo tests. Three different situations are recognized: (1) a few univariate Monte Carlo tests, (2) a Monte Carlo test with a function as the test statistic, (3) several Monte Carlo tests with functions as test statistics. The rank test has correct (global) type I error in each case and is accompanied by a p-value and by a graphical interpretation which identifies the subtests, and the distances of the test function(s) used, that lead to rejection at the prescribed significance level. Examples of null hypotheses from point process and random set statistics are used to demonstrate the strength of the rank envelope test. The examples include a goodness-of-fit test with several test functions, a goodness-of-fit test for a group of point patterns, a test of dependence of components in a multi-type point pattern, and a test of the Boolean assumption for random closed sets. A power comparison to classical multiple testing procedures is given.
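A simplified sketch of the extreme-rank idea behind such functional Monte Carlo tests is given below, assuming test functions evaluated on a common grid: each curve receives the most extreme pointwise rank it attains, and the Monte Carlo p-value compares the observed curve's extreme rank with those of the simulated curves. Ties and the envelope (graphical) output of the actual method are not handled here; the data are synthetic.

```python
import numpy as np

def extreme_rank_pvalue(obs, sims):
    """Monte Carlo p-value based on extreme (pointwise) ranks of functions.

    obs  : (m,) observed test function evaluated on a grid of m arguments
    sims : (s, m) test functions from s simulations under the null model
    A curve's extreme rank is the most extreme (two-sided) pointwise rank
    it attains anywhere on the grid.
    """
    all_curves = np.vstack([obs, sims])            # (s+1, m)
    n = all_curves.shape[0]
    # Pointwise ranks from below and from above (1 = most extreme).
    rank_low = np.argsort(np.argsort(all_curves, axis=0), axis=0) + 1
    rank_high = n + 1 - rank_low
    pointwise = np.minimum(rank_low, rank_high)    # two-sided extremeness
    extreme_rank = pointwise.min(axis=1)           # one rank per curve
    # Conservative Monte Carlo p-value: proportion of curves at least as
    # extreme as the observed one.
    return np.mean(extreme_rank <= extreme_rank[0])

# Toy example: the observed curve is shifted upwards relative to the null.
rng = np.random.default_rng(2)
grid = np.linspace(0, 1, 50)
sims = rng.normal(size=(999, grid.size))
obs = rng.normal(size=grid.size) + 2.5
print(extreme_rank_pvalue(obs, sims))
```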

15.
In this paper, we introduce a precedence-type test based on the Kaplan–Meier estimator of the cumulative distribution function (CDF) for testing the hypothesis that two distribution functions are equal against a stochastically ordered alternative. This test is an alternative to the precedence life-test first proposed by Nelson (1963). After deriving the null distribution of the test statistic, we present its exact power function under the Lehmann alternative, and compare the exact power as well as the simulated power (under location-shift) of the proposed test with other precedence-type tests. Next, we extend this test to the case of progressively Type-II censored data. Critical values for some combinations of sample sizes and progressive censoring schemes are presented. We then examine the power properties of this test procedure and compare them to those of the weighted precedence and weighted maximal precedence tests under a location-shift alternative by means of Monte Carlo simulations. Finally, we present two examples to illustrate all the test procedures discussed here, and then make some concluding remarks.

16.
A regression model can be made more complex to improve its goodness of fit to the observations; however, this can reduce statistical power when the added complexity does not lead to a significantly improved model. In the context of two-phase (segmented) logistic regressions, model evaluation needs to include testing for a simple (one-phase) versus a two-phase logistic regression model. In this article, we propose and examine a class of likelihood-ratio-type tests for detecting a change in logistic regression parameters that splits the model into two phases. We show that the proposed tests, based on Shiryayev–Roberts type statistics, are on average the most powerful. The article argues in favor of a new approach for fixing Type I errors of tests when the parameters of the null hypotheses are unknown. Although the suggested approach is partly based on Bayes-factor-type testing procedures, the classical significance levels of the proposed tests are under control. We demonstrate applications of the average most powerful tests to an epidemiologic study entitled "Time to pregnancy and multiple births."

17.
Robust test procedures are developed for testing the intercept of a simple regression model when the slope is (i) completely unspecified, (ii) specified to a fixed value, or (iii) suspected to be a fixed value. Defining (i) unrestricted (UT), (ii) restricted (RT), and (iii) pre-test test (PTT) functions for the intercept parameter under the three choices of the slope, tests are formulated using the M-estimation methodology. The asymptotic distributions of the test statistics and their asymptotic power functions are derived. The analytical and graphical comparisons of the tests reveal that the PTT achieves a reasonable dominance over the other tests.

18.
Multiple hypothesis testing is widely used to evaluate scientific studies involving statistical tests. However, for many of these tests, p values are not available and are thus often approximated using Monte Carlo tests such as permutation tests or bootstrap tests. This article presents a simple algorithm based on Thompson Sampling to test multiple hypotheses. It works with arbitrary multiple testing procedures, in particular with step-up and step-down procedures. Its main feature is to sequentially allocate Monte Carlo effort, generating more Monte Carlo samples for tests whose decisions are so far less certain. A simulation study demonstrates that for a low computational effort, the new approach yields a higher power and a higher degree of reproducibility of its results than previously suggested methods.
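The article's algorithm is not reproduced here, but the following sketch shows one way a Thompson-sampling-style allocation of permutation draws might look: each hypothesis keeps a Beta posterior over its p-value, and each batch of permutations is spent on the hypothesis whose sampled p-value lies closest to its rejection threshold. The `perm_draw` oracle, the threshold and the toy data are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(3)

def thompson_mc_pvalues(perm_draw, n_hyp, threshold, rounds=2000, batch=10):
    """Sketch of Thompson-sampling-style allocation of permutation draws.

    perm_draw(i, k) : returns k Bernoulli indicators 1{permuted statistic
                      >= observed statistic} for hypothesis i (user supplied)
    threshold       : per-hypothesis rejection threshold (e.g. Bonferroni)
    Each round, a p-value is sampled from the Beta posterior of every
    hypothesis; the next batch of permutations goes to the hypothesis whose
    sampled p-value falls closest to its threshold, i.e. whose decision is
    currently the least certain.
    """
    exceed = np.ones(n_hyp)          # Beta(1, 1) priors on the true p-value
    total = 2 * np.ones(n_hyp)
    for _ in range(rounds):
        sampled = rng.beta(exceed, total - exceed)
        i = int(np.argmin(np.abs(sampled - threshold)))
        draws = perm_draw(i, batch)
        exceed[i] += draws.sum()
        total[i] += batch
    return exceed / total            # crude p-value estimates

# Toy example: hypothesis 0 has true p ~ 0.001, the others ~ 0.3 (hypothetical).
true_p = np.array([0.001, 0.3, 0.3, 0.3])
perm_draw = lambda i, k: rng.random(k) < true_p[i]
print(thompson_mc_pvalues(perm_draw, 4, threshold=0.05 / 4))
```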

19.
We consider the problem of accounting for multiplicity for two correlated endpoints in the comparison of two treatments using weighted hypothesis tests. Various weighted testing procedures are reviewed, and a more powerful method (a variant of the weighted Simes test) is evaluated for the general bivariate normal case and for a particular clinical trial example. Results from these evaluations are summarized and indicate that the weighted methods perform in a manner similar to unweighted methods.

20.