Similar literature
1.
The asymptotic distributions of many classical test statistics are normal. The resulting approximations are often accurate for commonly used significance levels, 0.05 or 0.01. In genome‐wide association studies (GWAS), however, the significance level can be as low as 1×10−7, and the accuracy of the corresponding p‐values becomes a real concern. We study the accuracy of these small p‐values using two‐term Edgeworth expansions for three commonly used test statistics in GWAS. These tests involve nuisance parameters that are not defined under the null hypothesis but remain estimable. We derive results for this general form of test statistic using Edgeworth expansions, and find that the commonly used score test, the maximin efficiency robust test and the chi‐squared test are second-order accurate in the presence of the nuisance parameter, justifying the use of the p‐values obtained from these tests in genome‐wide association studies.
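The flavour of an Edgeworth correction can be sketched in a few lines. The snippet below implements a one-term Edgeworth expansion for the standardized mean of i.i.d. variables; it only illustrates the idea, not the paper's two-term expansions with nuisance parameters, and the function name and interface are ours.

```python
import math
from statistics import NormalDist

def edgeworth_cdf(x, skew, n):
    """One-term Edgeworth approximation to P(S_n <= x) for the
    standardized mean of n i.i.d. variables with skewness `skew`.
    Illustrative sketch only; the paper's expansions carry a
    second term and handle nuisance parameters."""
    nd = NormalDist()
    # Correction term: -phi(x) * skew * (x^2 - 1) / (6 * sqrt(n)).
    correction = -nd.pdf(x) * skew * (x * x - 1) / (6 * math.sqrt(n))
    return nd.cdf(x) + correction
```

With zero skewness the correction vanishes and the plain normal approximation is recovered; with positive skewness the right-tail p-value is inflated relative to the normal approximation, which is the effect at stake for very small GWAS p-values.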

2.
In this article, we consider the class of censored exponential regression models, which is very useful for modeling lifetime data. Under a sequence of Pitman alternatives, the asymptotic expansions up to order n^{-1/2} of the non-null distribution functions of the likelihood ratio, Wald, Rao score, and gradient statistics are derived in this class of models. The non-null asymptotic distribution functions of these statistics are obtained for testing a composite null hypothesis in the presence of nuisance parameters. The powers of all four tests, which are equivalent to first order, are compared based on these non-null asymptotic expansions. Furthermore, in order to compare the finite-sample performance of these tests in this class of models, we consider Monte Carlo simulations. We also present an empirical application for illustrative purposes.

3.
Effective implementation of likelihood inference in models for high‐dimensional data often requires a simplified treatment of nuisance parameters, with these having to be replaced by handy estimates. In addition, the likelihood function may have been simplified by means of a partial specification of the model, as is the case when composite likelihood is used. In such circumstances tests and confidence regions for the parameter of interest may be constructed using Wald type and score type statistics, defined so as to account for nuisance parameter estimation or partial specification of the likelihood. In this paper a general analytical expression for the required asymptotic covariance matrices is derived, and suggestions for obtaining Monte Carlo approximations are presented. The same matrices are involved in a rescaling adjustment of the log likelihood ratio type statistic that we propose. This adjustment restores the usual chi‐squared asymptotic distribution, which is generally invalid after the simplifications considered. The practical implication is that, for a wide variety of likelihoods and nuisance parameter estimates, confidence regions for the parameters of interest are readily computable from the rescaled log likelihood ratio type statistic as well as from the Wald type and score type statistics. Two examples, a measurement error model with full likelihood and a spatial correlation model with pairwise likelihood, illustrate and compare the procedures. Wald type and score type statistics may give rise to confidence regions with unsatisfactory shape in small and moderate samples. In addition to having satisfactory shape, regions based on the rescaled log likelihood ratio type statistic show empirical coverage in reasonable agreement with nominal confidence levels.

4.
In statistical modelling, it is often of interest to evaluate non‐negative quantities that capture heterogeneity in the population such as variances, mixing proportions and dispersion parameters. In instances of covariate‐dependent heterogeneity, the implied homogeneity hypotheses are nonstandard and existing inferential techniques are not applicable. In this paper, we develop a quasi‐score test statistic to evaluate homogeneity against heterogeneity that varies with a covariate profile through a regression model. We establish the limiting null distribution of the proposed test as a functional of mixtures of chi‐square processes. The methodology does not require the full distribution of the data to be entirely specified. Instead, a general estimating function for the finite-dimensional component of the model that is of interest is assumed, but other characteristics of the population are left completely unspecified. We apply the methodology to evaluate the excess zero proportion in zero‐inflated models for count data. Our numerical simulations show that the proposed test can greatly improve efficiency over tests of homogeneity that neglect covariate information under the alternative hypothesis. An empirical application to dental caries indices demonstrates the importance and practical utility of the methodology in detecting excess zeros in the data.

5.
Abstract. We investigate resampling methodologies for testing the null hypothesis that two samples of labelled landmark data in three dimensions come from populations with a common mean reflection shape or mean reflection size‐and‐shape. The investigation includes comparisons between (i) two different test statistics that are functions of the projection onto tangent space of the data, namely the James statistic and an empirical likelihood statistic; (ii) bootstrap and permutation procedures; and (iii) three methods for resampling under the null hypothesis, namely translating in tangent space, resampling using weights determined by empirical likelihood and using a novel method to transform the original sample entirely within reflection shape space. We present results of extensive numerical simulations, on which basis we recommend a bootstrap test procedure that we expect will work well in practice. We demonstrate the procedure using a data set of human faces, to test whether humans in different age groups have a common mean face shape.

6.
Abstract. We propose a non‐parametric change‐point test for long‐range dependent data, which is based on the Wilcoxon two‐sample test. We derive the asymptotic distribution of the test statistic under the null hypothesis that no change occurred. In a simulation study, we compare the power of our test with the power of a test which is based on differences of means. The results of the simulation study show that in the case of Gaussian data, our test has only slightly smaller power than the ‘difference‐of‐means’ test. For heavy‐tailed data, our test outperforms the ‘difference‐of‐means’ test.
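The core of a Wilcoxon-type change-point statistic can be sketched directly. The naive O(n²) function below scans all split points and returns the split maximizing the centred two-sample Wilcoxon count; the normalization required under long-range dependence (the setting of the paper) is omitted, and the function name is ours.

```python
def wilcoxon_changepoint(x):
    """Wilcoxon-type change-point scan: for each split k, count
    pairs (i <= k < j) with x[i] <= x[j], centred at 1/2 per pair,
    and return the split with the largest absolute centred count
    together with that count. Illustrative O(n^2) sketch."""
    n = len(x)
    best_k, best_w = None, -1.0
    for k in range(1, n):
        # Centred Wilcoxon count for the split {0..k-1} vs {k..n-1}.
        w = sum((x[i] <= x[j]) - 0.5 for i in range(k) for j in range(k, n))
        if abs(w) > best_w:
            best_k, best_w = k, abs(w)
    return best_k, best_w

# A clear level shift in the middle of the series is picked up at
# the true split point.
```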

7.
In this paper, we study the problem of testing the hypothesis on whether the density f of a random variable on a sphere belongs to a given parametric class of densities. We propose two test statistics based on the L2 and L1 distances between a non‐parametric density estimator adapted to circular data and a smoothed version of the specified density. The asymptotic distribution of the L2 test statistic is provided under the null hypothesis and contiguous alternatives. We also consider a bootstrap method to approximate the distribution of both test statistics. Through a simulation study, we explore the performance of the proposed tests in moderate samples under the null hypothesis and under different alternatives. Finally, the procedure is illustrated by analysing a real data set based on wind direction measurements.
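A density estimator adapted to circular data of the kind alluded to can be sketched with a von Mises kernel, a standard adaptation of kernel smoothing to the circle. The choice of kernel and the use of `kappa` as an inverse bandwidth are illustrative assumptions, not the paper's exact estimator.

```python
import numpy as np

def vonmises_kde(theta, data, kappa=20.0):
    """Von Mises kernel density estimate on the circle: each
    observation contributes a von Mises bump of concentration
    `kappa` (large kappa = small bandwidth). Illustrative sketch."""
    theta = np.atleast_1d(theta)
    # Kernel matrix: one column per observation, one row per grid point.
    k = np.exp(kappa * np.cos(theta[:, None] - data[None, :]))
    k /= 2 * np.pi * np.i0(kappa)  # von Mises normalizing constant
    return k.mean(axis=1)

def l2_distance(f, g, grid):
    """Approximate L2 distance between two densities evaluated on
    an equally spaced grid over [0, 2*pi)."""
    d = grid[1] - grid[0]
    return np.sqrt(np.sum((f - g) ** 2) * d)
```

In a test of the kind described, `l2_distance` would compare the estimate not with the raw parametric density but with its similarly smoothed version, so that both arguments carry the same smoothing bias.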

8.
Rao's score test normally replaces nuisance parameters by their maximum likelihood estimates under the null hypothesis about the parameter of interest. In some models, however, a nuisance parameter is not identified under the null, so that this approach cannot be followed. This paper suggests replacing the nuisance parameter by its maximum likelihood estimate from the unrestricted model and making the appropriate adjustment to the variance of the estimated score. This leads to a rather natural modification of Rao's test, which is examined in detail for a regression-type model. It is compared with the approach, which has featured most frequently in the literature on this problem, where a test statistic appropriate to a known value of the nuisance parameter is treated as a function of that parameter and maximised over its range. It is argued that the modified score test has considerable advantages, including robustness to a crucial assumption required by the rival approach.

9.
The authors propose two methods based on the signed root of the likelihood ratio statistic for one‐sided testing of a simple null hypothesis about a scalar parameter in the presence of nuisance parameters. Both methods are third‐order accurate and utilise simulation to avoid the need for onerous analytical calculations characteristic of competing saddlepoint procedures. Moreover, the new methods do not require specification of ancillary statistics. The methods respect the conditioning associated with similar tests up to an error of third order, and conditioning on ancillary statistics to an error of second order.

10.
New statistical procedures are introduced to analyse typical microRNA expression data sets. For each separate microRNA expression, the null hypothesis to be tested is that there is no difference between the distributions of the expression in different groups. The test statistics are then constructed with certain types of alternatives in mind. To avoid strong (parametric) distributional assumptions, the alternatives are formulated using probabilities of different orders of pairs or triples of observations coming from different groups, and the test statistics are then constructed using corresponding several‐sample U‐statistics, natural estimates of these probabilities. Classical several‐sample rank test statistics, such as the Kruskal–Wallis and Jonckheere–Terpstra tests, are special cases in our approach. Also, as the number of variables (microRNAs) is huge, we confront a serious simultaneous testing problem. Different approaches to control the family‐wise error rate or the false discovery rate are briefly discussed, and it is shown how the Chen–Stein theorem can be used to show that the family‐wise error rate can be controlled for cluster‐dependent microRNAs under weak assumptions. The theory is illustrated with an analysis of real data, a microRNA expression data set on Finnish (aggressive and non‐aggressive) prostate cancer patients and their controls.
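One of the classical special cases mentioned, the Jonckheere–Terpstra statistic, can be written directly as a several-sample U-statistic over pairs of observations from different groups. A brute-force sketch (function name ours):

```python
def jonckheere_terpstra(groups):
    """Jonckheere-Terpstra statistic as a several-sample U-statistic:
    for every ordered pair of groups g < h, count pairs (x in group g,
    y in group h) with x < y, scoring ties as 1/2. Large values
    support an increasing trend across the ordered groups."""
    jt = 0.0
    for g in range(len(groups)):
        for h in range(g + 1, len(groups)):
            for x in groups[g]:
                for y in groups[h]:
                    jt += 1.0 if x < y else (0.5 if x == y else 0.0)
    return jt
```

With three groups of two observations each, a perfectly increasing arrangement attains the maximum of 12 between-group pairs, and a perfectly decreasing one attains 0.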

11.
A consistent approach to the problem of testing non‐correlation between two univariate infinite‐order autoregressive models was proposed by Hong (1996). His test is based on a weighted sum of squares of residual cross‐correlations, with weights depending on a kernel function. In this paper, the author follows Hong's approach to test non‐correlation of two cointegrated (or partially non‐stationary) ARMA time series. The test of Pham, Roy & Cédras (2003) may be seen as a special case of his approach, as it corresponds to the choice of a truncated uniform kernel. The proposed procedure remains valid for testing non‐correlation between two stationary invertible multivariate ARMA time series. The author derives the asymptotic distribution of his test statistics under the null hypothesis and proves that his procedures are consistent. He also studies the level and power of his proposed tests in finite samples through simulation. Finally, he presents an illustration based on real data.

12.
For normally distributed data, the asymptotic bias and skewness of the pivotal statistic Studentized by the asymptotically distribution-free standard error are shown to be the same as those given by the normal theory in structural equation modeling. This gives the same asymptotic null distributions of the two pivotal statistics up to the next order beyond the usual normal approximation under normality. Under an alternative hypothesis, the asymptotic variances of the two statistics under normality/non-normality are also derived. It is, however, shown that the asymptotic variances of the non-null distributions of the statistics are generally different even under normality.

13.
Abstract. Frailty models with a non‐parametric baseline hazard are widely used for the analysis of survival data. However, their maximum likelihood estimators can be substantially biased in finite samples, because the number of nuisance parameters associated with the baseline hazard increases with the sample size. The penalized partial likelihood based on a first‐order Laplace approximation still has non‐negligible bias. However, the second‐order Laplace approximation to a modified marginal likelihood for a bias reduction is infeasible because of the presence of too many complicated terms. In this article, we find adequate modifications of these likelihood‐based methods by using the hierarchical likelihood.

14.
The Pearson chi‐squared statistic for testing the equality of two multinomial populations when the categories are nominal is much less appropriate for ordinal categories. Test statistics typically used in this context are based on scorings of the ordinal levels, but the results of these tests are highly dependent on the choice of scores. The authors propose a test which naturally modifies the Pearson chi‐squared statistic to incorporate the ordinal information. The proposed test statistic does not depend on the scores and under the null hypothesis of equality of populations, it is asymptotically equivalent to the likelihood ratio test against the alternative of two‐sided likelihood ratio ordering.
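For reference, the classical Pearson chi-squared statistic for two multinomial samples, which the abstract takes as its starting point, can be computed as follows. The ordinal modification proposed in the paper is not reproduced here; this is only the nominal-category baseline.

```python
def pearson_two_sample(counts1, counts2):
    """Pearson chi-squared statistic for testing equality of two
    multinomial populations with nominal categories. Expected
    counts come from the pooled category proportions."""
    n1, n2 = sum(counts1), sum(counts2)
    total = n1 + n2
    chi2 = 0.0
    for o1, o2 in zip(counts1, counts2):
        p = (o1 + o2) / total          # pooled category proportion
        for o, n in ((o1, n1), (o2, n2)):
            e = n * p                  # expected count under equality
            if e > 0:
                chi2 += (o - e) ** 2 / e
    return chi2
```

The statistic is invariant to permuting the categories, which is exactly why it ignores ordinal information and motivates the modification studied in the paper.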

15.
Abstract. A substantive problem in neuroscience is the lack of valid statistical methods for non‐Gaussian random fields. In the present study, we develop a flexible, yet tractable model for a random field based on kernel smoothing of a so‐called Lévy basis. The resulting field may be Gaussian, but there are many other possibilities, including random fields based on Gamma, inverse Gaussian and normal inverse Gaussian (NIG) Lévy bases. It is easy to estimate the parameters of the model and accordingly to assess by simulation the quantiles of test statistics commonly used in neuroscience. We give a concrete example of magnetic resonance imaging scans that are non‐Gaussian. For these data, simulations under the fitted models show that traditional methods based on Gaussian random field theory may leave small, but significant changes in signal level undetected, while these changes are detectable under a non‐Gaussian Lévy model.

16.
We consider the blinded sample size re‐estimation based on the simple one‐sample variance estimator at an interim analysis. We characterize the exact distribution of the standard two‐sample t‐test statistic at the final analysis. We describe a simulation algorithm for the evaluation of the probability of rejecting the null hypothesis at a given treatment effect. We compare the blinded sample size re‐estimation method with two unblinded methods with respect to the empirical type I error, the empirical power, and the empirical distribution of the standard deviation estimator and final sample size. We characterize the type I error inflation across the range of standardized non‐inferiority margins for non‐inferiority trials, and derive the adjusted significance level to ensure type I error control for a given sample size of the internal pilot study. We show that the adjusted significance level increases as the sample size of the internal pilot study increases. Copyright © 2016 John Wiley & Sons, Ltd.
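The re-estimation step can be illustrated with the textbook normal-approximation formula n = 2(z_{1-α/2} + z_{1-β})² s²/δ² per arm, with s the blinded one-sample SD at interim. This is a sketch of the general idea only, not the paper's exact procedure or its adjusted significance level; the function name is ours.

```python
from math import ceil
from statistics import NormalDist

def reestimated_n_per_arm(blinded_sd, delta, alpha=0.05, power=0.9):
    """Re-estimate the per-arm sample size of a two-arm trial from
    the blinded (one-sample, pooled) SD at interim, using the
    standard normal-approximation formula
        n = 2 * (z_{1-alpha/2} + z_{1-beta})^2 * sd^2 / delta^2.
    Textbook sketch of the re-estimation step only."""
    nd = NormalDist()
    z = nd.inv_cdf(1 - alpha / 2) + nd.inv_cdf(power)
    return ceil(2 * (z * blinded_sd / delta) ** 2)
```

For a standardized effect of 0.5 at two-sided 5% and 90% power this reproduces the familiar 85 per arm; a larger interim SD estimate drives the re-estimated size up accordingly.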

17.
Detecting parameter shift in GARCH models
This paper applies recent theories of testing for parameter constancy to the conditional variance in a GARCH model. The supremum Lagrange multiplier test for conditional Gaussian GARCH models and its robustified variants are discussed. The asymptotic null distribution of the test statistics is derived from the weak convergence of the scores, and the critical values from the hitting probability of a squared Bessel process.

Monte Carlo studies on the finite sample size and power performance of the supremum LM tests are conducted. Applications of these tests to S&P 500 data indicate that the hypothesis of stable conditional variance parameters can be rejected.


19.
In the analysis of semi‐competing risks data, interest lies in estimation and inference with respect to a so‐called non‐terminal event, the observation of which is subject to a terminal event. Multi‐state models are commonly used to analyse such data, with covariate effects on the transition/intensity functions typically specified via the Cox model and dependence between the non‐terminal and terminal events specified, in part, by a unit‐specific shared frailty term. To ensure identifiability, the frailties are typically assumed to arise from a parametric distribution, specifically a Gamma distribution with mean 1.0 and variance, say, σ². When the frailty distribution is misspecified, however, the resulting estimator is not guaranteed to be consistent, with the extent of asymptotic bias depending on the discrepancy between the assumed and true frailty distributions. In this paper, we propose a novel class of transformation models for semi‐competing risks analysis that permit the non‐parametric specification of the frailty distribution. To ensure identifiability, the class restricts to parametric specifications of the transformation and the error distribution; the latter are flexible, however, and cover a broad range of possible specifications. We also derive the semi‐parametric efficient score under the complete‐data setting and propose a non‐parametric score imputation method to handle right censoring; consistency and asymptotic normality of the resulting estimators are derived, and small‐sample operating characteristics are evaluated via simulation. Although the proposed semi‐parametric transformation model and non‐parametric score imputation method are motivated by the analysis of semi‐competing risks data, they are broadly applicable to any analysis of multivariate time‐to‐event outcomes in which a unit‐specific shared frailty is used to account for correlation. Finally, the proposed model and estimation procedures are applied to a study of hospital readmission among patients diagnosed with pancreatic cancer.

20.
A goodness‐of‐fit procedure is proposed for parametric families of copulas. The new test statistics are functionals of an empirical process based on the theoretical and sample versions of Spearman's dependence function. Conditions under which this empirical process converges weakly are seen to hold for many families including the Gaussian, Frank, and generalized Farlie–Gumbel–Morgenstern systems of distributions, as well as the models with singular components described by Durante [Durante (2007) Comptes Rendus Mathématique. Académie des Sciences. Paris, 344, 195–198]. Thanks to a parametric bootstrap method that allows valid P‐values to be computed, it is shown empirically that tests based on Cramér–von Mises distances keep their size under the null hypothesis. Simulations attesting to the power of the newly proposed tests, comparisons with competing procedures and complete analyses of real hydrological and financial data sets are presented. The Canadian Journal of Statistics 37: 80–101; 2009 © 2009 Statistical Society of Canada
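The parametric bootstrap scheme used to obtain valid p-values follows a generic recipe: fit the model, simulate samples from the fit, recompute the statistic, and report the exceedance proportion. The sketch below applies that recipe to a much simpler setting, a Cramér–von Mises-type test of exponentiality, purely to show the mechanics; all names are ours and nothing here is specific to copulas.

```python
import math
import random

def cvm_exponential(x):
    """Cramer-von Mises-type distance between the empirical CDF and
    the fitted exponential CDF (rate = 1/mean)."""
    n = len(x)
    rate = n / sum(x)
    u = sorted(1 - math.exp(-rate * xi) for xi in x)  # fitted PIT values
    return 1 / (12 * n) + sum((u[i] - (2 * i + 1) / (2 * n)) ** 2
                              for i in range(n))

def bootstrap_pvalue(x, stat=cvm_exponential, n_boot=500, seed=1):
    """Generic parametric bootstrap p-value: refit the model,
    simulate from the fit, recompute the statistic, and return the
    proportion of bootstrap statistics >= the observed one."""
    rng = random.Random(seed)
    t_obs = stat(x)
    rate = len(x) / sum(x)  # refit under the null family
    exceed = 0
    for _ in range(n_boot):
        xb = [rng.expovariate(rate) for _ in x]
        if stat(xb) >= t_obs:
            exceed += 1
    return exceed / n_boot
```

In the paper's setting the null family is a parametric copula and the statistic is a functional of the Spearman-dependence empirical process, but the bootstrap loop has this same shape.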
