Similar Articles
20 similar articles found (search time: 46 ms)
1.
Tests that combine p-values, such as Fisher's product test, are popular to test the global null hypothesis H0 that each of n component null hypotheses, H1,…,Hn, is true versus the alternative that at least one of H1,…,Hn is false, since they are more powerful than classical multiple tests such as the Bonferroni test and the Simes test. Recent modifications of Fisher's product test, popular in the analysis of large-scale genetic studies, include the truncated product method (TPM) of Zaykin et al. (2002), the rank truncated product (RTP) test of Dudbridge and Koeleman (2003), and, more recently, a permutation-based test, the adaptive rank truncated product (ARTP) method of Yu et al. (2009). The TPM and RTP methods require user specification of a truncation point. The ARTP method improves the performance of the RTP method by optimizing the selection of the truncation point over a set of pre-specified candidate points. In this paper we extend the ARTP by proposing to use all possible truncation points {1,…,n} as the candidate truncation points. Furthermore, we derive the theoretical probability distribution of the test statistic under the global null hypothesis H0. Simulations are conducted to compare the performance of the proposed test with the Bonferroni test, the Simes test, the RTP test, and Fisher's product test. The simulation results show that the proposed test has higher power than the Bonferroni test and the Simes test, as well as the RTP method. It is also significantly more powerful than Fisher's product test when the number of truly false hypotheses is small relative to the total number of hypotheses, and has comparable power to Fisher's product test otherwise.
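As a rough illustration of the truncated product idea described above (a sketch of the general TPM principle, not the authors' exact implementation or the ARTP extension), the statistic multiplies only those p-values at or below a truncation point τ, and its null distribution can be approximated by simulating independent Uniform(0, 1) p-values:

```python
import math
import random

def tpm_stat(pvals, tau=0.05):
    """Truncated product statistic: product of the p-values at or below tau
    (defined as 1.0 when no p-value falls below the truncation point)."""
    kept = [p for p in pvals if p <= tau]
    return math.prod(kept) if kept else 1.0

def tpm_pvalue(pvals, tau=0.05, n_sim=20_000, seed=0):
    """Monte Carlo null p-value, assuming the component p-values are
    independent Uniform(0, 1) under the global null hypothesis H0."""
    rng = random.Random(seed)
    obs = tpm_stat(pvals, tau)
    n = len(pvals)
    hits = sum(
        tpm_stat([rng.random() for _ in range(n)], tau) <= obs
        for _ in range(n_sim)
    )
    return hits / n_sim
```

Smaller statistics are more extreme, so the p-value is the null probability of a product at least as small as the observed one.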

2.
An important problem commonly arises in multi-dimensional hypothesis-testing settings when variables are correlated. In this framework the non-parametric combination (NPC) of a finite number of dependent permutation tests is suitable for almost all real situations of practical interest, since the dependence relations among partial tests are implicitly captured by the combining procedure itself, without the need to specify them [Pesarin F, Salmaso L. Permutation tests for complex data: theory, applications and software. Chichester: Wiley; 2010a]. An open problem related to NPC-based tests is the impact of the dependency structure on combined tests, especially in the presence of categorical variables. This paper's goal is first to investigate, using Monte Carlo simulations, the impact of the dependency structure on the possible significance of combined tests for ordered categorical responses, and then to propose specific procedures aimed at improving the power of multivariate combination-based permutation tests. The results show that an increasing level of correlation/association among responses negatively affects the power of combination-based multivariate permutation tests. The application of special forms of combination functions based on the truncated product method [Zaykin DV, Zhivotovsky LA, Westfall PH, Weir BS. Truncated product method for combining p-values. Genet Epidemiol. 2002;22:170–185; Dudbridge F, Koeleman BPC. Rank truncated product of p-values, with application to genomewide association scans. Genet Epidemiol. 2003;25:360–366] or on the Liptak combination allowed us, using Monte Carlo simulations, to demonstrate that the negative effect of increasing correlation/association among responses on the power of combination-based multivariate permutation tests can be mitigated.

3.
Combining p-values from statistical tests across different studies is the most commonly used approach in meta-analysis for evolutionary biology. The most commonly used p-value combination methods mainly comprise the z-transform tests (e.g., the un-weighted z-test and the weighted z-test) and the gamma-transform tests (e.g., the CZ method [Z. Chen, W. Yang, Q. Liu, J.Y. Yang, J. Li, and M.Q. Yang, A new statistical approach to combining p-values using gamma distribution and its application to genomewide association study, Bioinformatics 15 (2014), p. S3]). However, among these existing p-value combination methods, none is uniformly most powerful in all situations [Chen et al. 2014]. In this paper, we propose a meta-analysis method based on the gamma distribution, MAGD, which pools the p-values from independent studies. The newly proposed test, MAGD, flexibly accommodates different levels of heterogeneity of effect sizes across individual studies. MAGD simultaneously retains all the characteristics of the z-transform tests and the gamma-transform tests. We also propose an easy-to-implement resampling approach for estimating the empirical p-values of MAGD for finite sample sizes. Simulation studies and two data applications show that the proposed method MAGD is essentially as powerful as the z-transform tests (the gamma-transform tests) when effect sizes are homogeneous (heterogeneous) across studies.
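For reference, the weighted z-test mentioned above can be sketched in a few lines of standard-library Python (the equal default weights and the one-sided convention are illustrative assumptions, not fixed by the abstract):

```python
import math
from statistics import NormalDist

def weighted_z(pvals, weights=None):
    """Weighted z-test (Stouffer-type): convert one-sided p-values to
    z-scores and combine as Z = sum(w_i z_i) / sqrt(sum w_i^2)."""
    nd = NormalDist()
    w = weights if weights is not None else [1.0] * len(pvals)
    z = [nd.inv_cdf(1.0 - p) for p in pvals]  # one-sided z-scores
    Z = sum(wi * zi for wi, zi in zip(w, z)) / math.sqrt(sum(wi * wi for wi in w))
    return 1.0 - nd.cdf(Z)  # combined one-sided p-value
```

With all weights equal this reduces to the un-weighted z-test; study-specific weights (e.g., by sample size) give the weighted variant.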

4.
A new class of probability distributions, the so-called connected double truncated gamma distribution, is introduced. We show that using this class as the error distribution of a linear model leads to a generalized quantile regression model that combines desirable properties of both least-squares and quantile regression methods: robustness to outliers and differentiable loss function.

5.
Software packages usually report the results of statistical tests using p-values. Users often interpret these values by comparing them with standard thresholds, for example, 0.1, 1, and 5%, which is sometimes reinforced by a star rating (***, **, and *, respectively). We consider an arbitrary statistical test whose p-value p is not available explicitly, but can be approximated by Monte Carlo samples, for example, by bootstrap or permutation tests. The standard implementation of such tests usually draws a fixed number of samples to approximate p. However, the probability that the exact and the approximated p-value lie on different sides of a threshold (the resampling risk) can be high, particularly for p-values close to a threshold. We present a method to overcome this. We consider a finite set of user-specified intervals that cover [0, 1] and that can be overlapping. We call these p-value buckets. We present algorithms that, with arbitrarily high probability, return a p-value bucket containing p. We prove that for both a bounded resampling risk and a finite runtime, overlapping buckets need to be employed, and that our methods both bound the resampling risk and guarantee a finite runtime for such overlapping buckets. To interpret decisions with overlapping buckets, we propose an extension of the star rating system. We demonstrate that our methods are suitable for use in standard software, including for low p-value thresholds occurring in multiple testing settings, and that they can be computationally more efficient than standard implementations.
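As a simplified illustration of the underlying issue (this is a fixed-sample Wilson interval check, not the authors' sequential algorithm, which actually bounds the resampling risk), one can ask which of the standard buckets are compatible with a Monte Carlo estimate of p; when the interval straddles a threshold, more than one bucket remains plausible:

```python
import math

def mc_pvalue_buckets(exceed, n, thresholds=(0.001, 0.01, 0.05, 0.1, 1.0),
                      z=2.576):
    """Given `exceed` of `n` resampled statistics at least as extreme as the
    observed one, return the buckets overlapping a 99% Wilson interval for p."""
    phat = exceed / n
    denom = 1.0 + z * z / n
    center = (phat + z * z / (2 * n)) / denom
    half = z * math.sqrt(phat * (1 - phat) / n + z * z / (4 * n * n)) / denom
    lo, hi = max(0.0, center - half), min(1.0, center + half)
    edges = (0.0,) + thresholds
    return [(edges[i], edges[i + 1]) for i in range(len(thresholds))
            if lo < edges[i + 1] and hi > edges[i]]
```

An estimate of 500/10000 (phat = 0.05) is compatible with both the (0.01, 0.05] and (0.05, 0.1] buckets, which is exactly the ambiguity the paper's overlapping buckets and extended star rating are designed to report honestly.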

6.
We propose a new method for dimension reduction in regression using the first two inverse moments. We develop corresponding weighted chi-squared tests for the dimension of the regression. The proposed method considers linear combinations of sliced inverse regression (SIR) and a method using a new candidate matrix designed to recover the entire inverse second-moment subspace. The optimal combination may be selected based on the p-values derived from the dimension tests. Theoretically, the proposed method, as well as sliced average variance estimation (SAVE), is more capable of recovering the complete central dimension-reduction subspace than SIR and principal Hessian directions (pHd). Therefore it can substitute for SIR, pHd, SAVE, or any linear combination of them at a theoretical level. Simulation studies indicate that the proposed method may have consistently greater power than SIR, pHd, and SAVE.

7.
Many nonparametric tests for the one-sample problem, matched pairs, and competing risks under censoring have the same underlying permutation distribution. This article proposes a saddlepoint approximation to the exact p-values of these tests instead of the asymptotic approximations. The performance of the saddlepoint approximation is assessed by simulation studies that show the superiority of the saddlepoint methods over the asymptotic approximations in several settings. The use of the saddlepoint approximation for the p-values of a class of two-sample tests under a completely randomized design is also discussed.

8.
Fisher's inverse chi-square method for combining independent significance tests is extended to cover cases of dependence among the individual tests. A weighted version of the method and its approximate null distribution are presented. To illustrate the use of the proposed method, two tests for the overall treatment efficacy are combined, with the resulting test procedure exhibiting good control of the type I error probability. Two examples from clinical trials are given to illustrate the applicability of the procedures to real-life situations.
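As background, the classical unweighted version of Fisher's inverse chi-square method for n independent tests can be sketched with only the standard library, since the chi-square survival function with even degrees of freedom 2n has a closed (Erlang-tail) form; the dependence-adjusted, weighted extension the paper develops is not reproduced here:

```python
import math

def fisher_combine(pvals):
    """Fisher's method: X = -2 * sum(log p_i) ~ chi^2 with 2n df under H0.
    Returns the combined p-value P(chi^2_{2n} > X)."""
    x = -2.0 * sum(math.log(p) for p in pvals)
    n = len(pvals)
    # Closed-form survival function of chi^2 with 2n degrees of freedom:
    # exp(-x/2) * sum_{k=0}^{n-1} (x/2)^k / k!
    term, total = 1.0, 1.0
    for k in range(1, n):
        term *= (x / 2.0) / k
        total += term
    return math.exp(-x / 2.0) * total
```

For a single p-value the method returns that p-value unchanged, which is a handy sanity check.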

9.
Wang & Wells [J. Amer. Statist. Assoc. 95 (2000) 62] describe a non-parametric approach for checking whether the dependence structure of a random sample of censored bivariate data is appropriately modelled by a given family of Archimedean copulas. Their procedure is based on a truncated version of the Kendall process introduced by Genest & Rivest [J. Amer. Statist. Assoc. 88 (1993) 1034] and later studied by Barbe et al. [J. Multivariate Anal. 58 (1996) 197]. Although Wang & Wells (2000) determine the asymptotic behaviour of their truncated process, their model selection method is based exclusively on the observed value of its L2-norm. This paper shows how to compute asymptotic p-values for various goodness-of-fit test statistics based on a non-truncated version of Kendall's process. Conditions for weak convergence are met in the most common copula models, whether Archimedean or not. The empirical behaviour of the proposed goodness-of-fit tests is studied by simulation, and power comparisons are made with a test proposed by Shih [Biometrika 85 (1998) 189] for the gamma frailty family.

10.
Two types of state-switching models for U.S. real output have been proposed: models that switch randomly between states and models that switch states deterministically, as in the threshold autoregressive model of Potter. These models have been justified primarily on how well they fit the sample data, yielding statistically significant estimates of the model coefficients. Here we propose a new approach to the evaluation of an estimated nonlinear time series model that provides a complement to existing methods based on in-sample fit or on out-of-sample forecasting. In this new approach, a battery of distinct nonlinearity tests is applied to the sample data, resulting in a set of p-values for rejecting the null hypothesis of a linear generating mechanism. This set of p-values is taken to be a “stylized fact” characterizing the nonlinear serial dependence in the generating mechanism of the time series. The effectiveness of an estimated nonlinear model for this time series is then evaluated in terms of the congruence between this stylized fact and a set of nonlinearity test results obtained from data simulated using the estimated model. In particular, we derive a portmanteau statistic based on this set of nonlinearity test p-values that allows us to test the proposition that a given model adequately captures the nonlinear serial dependence in the sample data. We apply the method to several estimated state-switching models of U.S. real output.

11.
The inverse Gaussian family of nonnegative, skewed random variables is analytically simple, and its inference theory is well known to be analogous to normal theory in numerous ways. Hence, it is widely used for modeling nonnegative, positively skewed data. In this note, we consider the problem of testing homogeneity of order-restricted means of several inverse Gaussian populations with a common unknown scale parameter, using an approach based on classical methods, such as Fisher's, for combining independent tests. Unlike the likelihood approach, which can only be readily applied to a limited number of restrictions and to settings with equal sample sizes, this approach is applicable to problems involving a broad variety of order restrictions and arbitrary sample-size settings, and, most importantly, no new null distributions are needed. An empirical power study shows that, in the case of the simple order, the test based on Fisher's combination method compares reasonably with the corresponding likelihood ratio procedure.

12.
This article introduces graphical procedures for assessing the fit of the gamma distribution. The procedures are based on a standardized version of the cumulant generating function. Plots with bands of 95% simultaneous confidence level are developed by utilizing asymptotic and finite-sample results. The plots have linear scales and do not rely on the use of tables or values of special functions. Further, it is found through simulation that the goodness-of-fit test implied by these plots compares favorably with respect to power to other known tests for the gamma distribution in samples drawn from lognormal and inverse Gaussian distributions.

13.
This article presents a Bayesian approach to the regression analysis of truncated data, with a focus on zero-truncated counts from the Poisson distribution. The approach provides inference not only on the regression coefficients but also on the total sample size and the parameters of the covariate distribution. The theory is applied to some illegal immigrant data from The Netherlands. Several models are fitted with the aid of Markov chain Monte Carlo methods and assessed via posterior predictive p-values. Inferences are compared with those obtained elsewhere using other approaches.
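To make the zero-truncated Poisson model concrete, here is a minimal sketch of its log-pmf and of a regression log-likelihood under a log link (the link function and parameterization are illustrative assumptions, not taken from the paper, which works in a Bayesian MCMC framework):

```python
import math

def ztp_logpmf(y, mu):
    """Zero-truncated Poisson: P(Y = y | Y > 0) = pois(y; mu) / (1 - exp(-mu)),
    supported on y = 1, 2, ..."""
    if y < 1:
        raise ValueError("zero-truncated support is y >= 1")
    return (y * math.log(mu) - mu - math.lgamma(y + 1)
            - math.log1p(-math.exp(-mu)))

def ztp_loglik(beta0, beta1, xs, ys):
    """Log-likelihood for a single covariate with log link mu_i = exp(b0 + b1*x_i)
    (hypothetical names; one would maximize this or sample it in MCMC)."""
    return sum(ztp_logpmf(y, math.exp(beta0 + beta1 * x))
               for x, y in zip(xs, ys))
```

The renormalizing term log1p(-exp(-mu)) is what distinguishes the truncated model from ordinary Poisson regression.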

14.
Correlated bilateral data arise from stratified studies involving paired body organs in a subject. When it is desirable to conduct inference on the scale of the risk difference, one first needs to assess the assumption of homogeneity of the risk differences across strata. For testing homogeneity of risk differences, we propose eight methods, derived respectively from weighted least squares (WLS), the Mantel-Haenszel (MH) estimator, the WLS method in combination with the inverse hyperbolic tangent transformation, the test statistics based on their log-transformations, the modified score test statistic, and the likelihood ratio test statistic. Simulation results showed that four of the tests perform well in general, with the tests based on the WLS method and the inverse hyperbolic tangent transformation always performing satisfactorily, even under small-sample designs. The methods are illustrated with a dataset.
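A generic WLS-type homogeneity statistic (a common textbook form, shown as an assumption-laden sketch rather than any of the paper's eight specific tests, which must also account for the within-subject correlation of paired organs) weights each stratum's estimated risk difference by its inverse variance:

```python
def wls_homogeneity(diffs, ses):
    """WLS chi-square statistic for homogeneity of K stratum risk differences:
    X2 = sum_i w_i * (d_i - d_bar)^2 with w_i = 1/se_i^2, referred to a
    chi-square distribution with K - 1 degrees of freedom (illustrative)."""
    w = [1.0 / s ** 2 for s in ses]
    dbar = sum(wi * di for wi, di in zip(w, diffs)) / sum(w)
    x2 = sum(wi * (di - dbar) ** 2 for wi, di in zip(w, diffs))
    return x2, len(diffs) - 1
```

Identical stratum differences give a statistic of exactly zero, and increasingly discrepant differences inflate it.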

15.
Many applications of nonparametric tests based on curve estimation involve selecting a smoothing parameter. The author proposes an adaptive test that combines several generalized likelihood ratio tests in order to get power performance nearly equal to whichever of the component tests is best. She derives the asymptotic joint distribution of the component tests and that of the proposed test under the null hypothesis. She also develops a simple method of selecting the smoothing parameters for the proposed test and presents two approximate methods for obtaining its p-value. Finally, she evaluates the proposed test through simulations and illustrates its application to a set of real data.

16.
Our main interest is parameter estimation using maximum entropy methods in the prediction of future events for homogeneous Poisson processes when the distribution governing the parameters is unknown. We intend to use empirical Bayes techniques and the maximum entropy principle to model the prior information. This approach has also been motivated by the success of the gamma prior for this problem, since it is well known that the gamma maximizes Shannon entropy under appropriately chosen constraints. As an alternative, however, we propose to apply one of the commonly used methods to estimate the parameters of the maximum entropy prior: moment matching, that is, maximizing the entropy subject to the constraint that the first two moments equal the empirical ones, which yields the truncated normal distribution (truncated below at the origin) as a solution. We also use maximum likelihood estimation (MLE) to estimate the parameters of the truncated normal distribution in this case. These two solutions, the gamma and the truncated normal, which maximize the entropy under different constraints, are tested for their effectiveness in predicting future events for homogeneous Poisson processes by measuring their coverage probabilities, the suitably normalized lengths of their prediction intervals, and their goodness-of-fit as measured by the Kullback–Leibler criterion and a discrepancy measure. The estimators obtained by these methods are compared in an extensive simulation study to each other, as well as to the estimators obtained using the completely noninformative Jeffreys' prior and the usual frequency methods. We also consider the problem of choosing between the two maximum entropy priors proposed here, the gamma and the truncated normal, estimated both by matching of the first two moments and by maximum likelihood, when faced with data, and we advocate the use of the sample skewness and kurtosis.
The methods are also illustrated on two examples: one concerning the occurrence of mammary tumors in laboratory animals taking part in a carcinogenicity experiment, and the other a warranty dataset from the automobile industry.

17.
We consider estimation and goodness-of-fit tests in GARCH models with innovations following a heavy-tailed and possibly asymmetric distribution. Although the method is fairly general and applies to GARCH models with arbitrary innovation distribution, we consider as special instances the stable Paretian, the variance gamma, and the normal inverse Gaussian distribution. Exploiting the simple structure of the characteristic function of these distributions, we propose minimum distance estimation based on the empirical characteristic function of properly standardized GARCH-residuals. The finite-sample results presented facilitate comparison with existing methods, while the new procedures are also applied to real data from the financial market.

18.
Test statistics from the class of two-sample linear rank tests are commonly used to compare a treatment group with a control group. Two independent random samples of sizes m and n are drawn from two populations. As a result, N = m + n observations in total are obtained. The aim is to test the null hypothesis of identical distributions. The alternative hypothesis is that the populations are of the same form but with a different measure of central tendency. This article examines mid p-values from the null permutation distributions of tests based on the class of two-sample linear rank statistics. The results obtained indicate that normal approximation-based computations are very close to the permutation simulations, and they provide p-values that are close to the exact mid p-values for all practical purposes.
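As a concrete illustration of the mid p-value idea for a rank-sum statistic (exhaustive enumeration of the exact permutation distribution, feasible only for very small samples; the one-sided "greater" convention is an assumption made here for brevity):

```python
from itertools import combinations

def rank_sum_mid_p(x, y):
    """One-sided (greater) mid p-value for the rank sum of sample x, from the
    exact permutation distribution over all assignments of the pooled
    observations; mid p counts ties with the observed statistic at half weight.
    Midranks handle tied values."""
    pooled = sorted(list(x) + list(y))
    rank = {v: sum(i + 1 for i, u in enumerate(pooled) if u == v)
               / pooled.count(v)
            for v in set(pooled)}
    obs = sum(rank[v] for v in x)
    m, N = len(x), len(pooled)
    stats = [sum(rank[pooled[i]] for i in idx)
             for idx in combinations(range(N), m)]
    greater = sum(s > obs for s in stats)
    equal = sum(s == obs for s in stats)
    return (greater + 0.5 * equal) / len(stats)
```

The half-weight on ties is exactly what distinguishes the mid p-value from the conventional exact p-value, and it is this quantity that the normal approximation tracks so closely in the article.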

19.
In this article, we derive the likelihood ratio tests (LRTs) for simultaneously testing interval hypotheses for normal means with known and unknown variances, and also with unknown but equal variance. Special cases when the interval hypotheses boil down to a point hypothesis are also discussed. Remarks regarding comparison of the LRT with tests based on combination of p-values are made, and several applications based on real data are mentioned.

20.
This article considers the situation where, in the most general case, each observation in a sample has been “truncated” below at a different, but known, value. Each observation is truncated in the sense that, had it been less than the truncation point, it would not have appeared in the sample. A goodness-of-fit test based on Gnedenko's F statistic is developed to test the hypothesis that the underlying distribution is Pareto against the alternative of lognormality. The chi-square and Kolmogorov tests are adapted to test the hypothesis of lognormality with an unspecified alternative. The application of these techniques to the analysis of insurance claim data is discussed.
