Similar Literature
 20 similar documents found (search time: 93 ms)
1.
This article generalizes Neyman's smooth test for the goodness-of-fit hypothesis using orthogonal polynomials of the density function under the null hypothesis, and derives a Lagrange multiplier (LM) statistic based on the generalized form of the smooth test. Under the null hypothesis, using the joint limiting normality of the orthogonal functions embedded in the smooth alternative density function and the restricted parameter estimators, the covariance matrix of the LM statistic can be estimated. A procedure for constructing monic orthogonal polynomials from a given moment function is developed and applied to examples of testing for the normal, Poisson, and gamma distributions.
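Not the generalized LM version described above, but the classical Neyman smooth test for uniformity (to which many goodness-of-fit problems reduce after a probability integral transform) can be sketched as follows; the function name and the choice of k = 4 components are illustrative only:

```python
import numpy as np

def neyman_smooth_test(u, k=4):
    """Classical Neyman smooth test of uniformity on (0,1).

    Uses orthonormal (shifted) Legendre polynomials; under the null the
    statistic is asymptotically chi-squared with k degrees of freedom."""
    u = np.asarray(u)
    n = len(u)
    x = 2.0 * u - 1.0                           # map (0,1) -> (-1,1)
    P = [np.ones_like(x), x]                    # Legendre recurrence
    for j in range(2, k + 1):
        P.append(((2 * j - 1) * x * P[j - 1] - (j - 1) * P[j - 2]) / j)
    stat = 0.0
    for j in range(1, k + 1):
        h = np.sqrt(2 * j + 1) * P[j]           # orthonormal w.r.t. U(0,1)
        stat += (h.sum() / np.sqrt(n)) ** 2
    return stat

rng = np.random.default_rng(0)
print(neyman_smooth_test(rng.uniform(size=500)))  # compare with chi2(4): 9.49 at 5%
```

Large values of the statistic relative to the chi-squared quantile indicate departure from uniformity.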

2.
It is proposed that baseline measurements be obtained prior to each period in a two-period crossover design. These measurements are used in a preliminary test for determining the validity of a test for treatment comparison, and also for testing the hypothesis of equal treatment effects. The null hypothesis in this preliminary test consists of the following three hypotheses: that there is no difference in disease conditions prior to the two periods, no difference in residual effects of the drugs, and no treatment × period interaction. A numerical example is given and the efficiencies of several methods are computed.

3.
A joint LM test for serial correlation and random effects is constructed for the two-way error component panel data regression model. The joint LM statistic turns out to be the sum of the Baltagi-Li LM statistic for the joint hypothesis H0: σμ² = λ = 0 and the Breusch-Pagan LM statistic for the hypothesis H0: σν² = λ = 0. When the number of individuals N in the panel is sufficiently large, the joint LM statistic is asymptotically χ²(3)-distributed. The joint LM test is the same whether the remainder error of the two-way error component model follows an AR(1) or an MA(1) process; that is, the joint LM test for random effects and first-order serial correlation is independent of the form of the serial correlation.

4.
Robust tests for the common principal components model
When dealing with several populations, the common principal components (CPC) model assumes equal principal axes but different variances along them. In this paper, a robust log-likelihood ratio statistic is introduced for testing the null hypothesis of a CPC model versus no restrictions on the scatter matrices. The proposal plugs robust scatter estimators into the classical log-likelihood ratio statistic. Using the same idea, a robust log-likelihood ratio statistic and a robust Wald-type statistic for testing proportionality against a CPC model are considered. Their asymptotic distributions under the null hypothesis and their partial influence functions are derived. A small simulation study compares the behavior of the classical and robust tests under normal and contaminated data.

5.
This paper characterizes the asymptotic behaviour of the likelihood ratio test statistic (LRTS) for testing homogeneity (i.e. no mixture) against gamma mixture alternatives. Under the null hypothesis, the LRTS is shown to be asymptotically equivalent to the square of Davies's Gaussian process test statistic and to diverge to infinity in probability at a log n rate. Based on the asymptotic analysis, we propose and demonstrate a computationally efficient method to simulate the null distributions of the LRTS for small to moderate sample sizes.

6.
The negative binomial (NB) distribution is frequently used to model overdispersed Poisson count data. To study the effect of a continuous covariate of interest in an NB model, a flexible procedure models the covariate effect by fixed-knot cubic B-splines with a second-order difference penalty on the adjacent B-spline coefficients to avoid undersmoothing. A penalized likelihood is used to estimate the parameters of the model, and a penalized likelihood ratio test statistic is constructed for the null hypothesis of linearity of the continuous covariate effect. When the number of knots is fixed, its limiting null distribution is that of a linear combination of independent chi-squared random variables, each with one degree of freedom. The smoothing parameter value is determined by setting the asymptotic expectation of the test statistic under the null hypothesis equal to a specified value. The power performance of the proposed test is studied with simulation experiments.
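The penalized estimation ingredient used above (fixed-knot cubic B-splines with a second-order difference penalty on adjacent coefficients, i.e. a P-spline) can be sketched in a least-squares setting. This is a simplified Gaussian analogue of the penalized likelihood in the abstract, not the paper's NB procedure; the function name and tuning values are illustrative:

```python
import numpy as np
from scipy.interpolate import BSpline

def pspline_fit(x, y, n_interior=10, degree=3, lam=1.0):
    """Penalized least squares with a cubic B-spline basis and a
    second-order difference penalty on adjacent coefficients (a P-spline).
    Solves (B'B + lam * D2'D2) beta = B'y."""
    eps = 1e-6 * (x.max() - x.min())
    lo, hi = x.min() - eps, x.max() + eps
    # knots: interior knots plus boundary knots repeated degree+1 times
    t = np.concatenate([
        np.repeat(lo, degree + 1),
        np.linspace(lo, hi, n_interior + 2)[1:-1],
        np.repeat(hi, degree + 1),
    ])
    nb = len(t) - degree - 1                       # number of basis functions
    B = np.column_stack([BSpline(t, np.eye(nb)[j], degree)(x) for j in range(nb)])
    D2 = np.diff(np.eye(nb), n=2, axis=0)          # second-order differences
    beta = np.linalg.solve(B.T @ B + lam * (D2.T @ D2), B.T @ y)
    return B @ beta

rng = np.random.default_rng(0)
x = np.sort(rng.uniform(0.0, 1.0, 200))
y = np.sin(2 * np.pi * x) + rng.normal(scale=0.3, size=200)
fit = pspline_fit(x, y)
print(np.mean((fit - np.sin(2 * np.pi * x)) ** 2))  # small in-sample error
```

Increasing `lam` shrinks the fit toward a straight line, which is exactly the null model the linearity test above compares against.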

7.
Using a minimum p-value principle, a new two-sample test, MIN3, is proposed. The cumulative distribution function of the MIN3 test statistic is studied and approximated by the Beta distribution of the third kind, and lower percentage points of its null distribution are computed. The power of the test is studied for many types of alternative hypotheses (with 0, 1 or 2 intersection points of the survival functions), and the MIN3 test is found to be a preferred strategy under the Wald and Savage decision-making criteria for risk and uncertainty. The application of the MIN3 test is illustrated on two examples from lifetime data analysis.

8.
We propose a new method to test the order between two high-dimensional mean curves. The new statistic extends the approach of Follmann (1996) to high-dimensional data by adapting the strategy of Bai and Saranadasa (1996). The proposed procedure is an alternative to the non-negative basis matrix factorization (NBMF) based test of Lee et al. (2008) for the same hypothesis, but it is much easier to implement. We derive the asymptotic mean and variance of the proposed test statistic under the null hypothesis of equal mean curves. Based on theoretical results, we put forward a permutation procedure to approximate the null distribution of the new test statistic. We compare the power of the proposed test with that of the NBMF-based test via simulations. We illustrate the approach by an application to tidal volume traces.
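The permutation step mentioned above can be sketched generically. This is not the paper's statistic (the statistic below is just a squared distance between the two sample mean curves) — only a minimal add-one Monte Carlo permutation p-value:

```python
import numpy as np

def permutation_pvalue(x, y, stat, n_perm=999, seed=0):
    """Add-one Monte Carlo p-value obtained by permuting group labels.

    x, y: (n_i, p) arrays of curves; stat: callable measuring discrepancy."""
    rng = np.random.default_rng(seed)
    pooled = np.vstack([x, y])
    n = len(x)
    observed = stat(x, y)
    count = sum(
        stat(pooled[idx[:n]], pooled[idx[n:]]) >= observed
        for idx in (rng.permutation(len(pooled)) for _ in range(n_perm))
    )
    return (count + 1) / (n_perm + 1)

# squared Euclidean distance between the two sample mean curves
diff_norm = lambda a, b: float(np.sum((a.mean(axis=0) - b.mean(axis=0)) ** 2))
rng = np.random.default_rng(1)
x = rng.normal(size=(30, 100))
y = rng.normal(size=(30, 100))                  # equal mean curves: null true
print(permutation_pvalue(x, y, diff_norm))
```

Because the labels are exchangeable under the null of equal mean curves, the permuted statistics approximate the null distribution without any asymptotic argument.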

9.
The nonparametric component in a partially linear model is estimated by a linear combination of fixed-knot cubic B-splines with a second-order difference penalty on the adjacent B-spline coefficients. The resulting penalized least-squares estimator is used to construct two Wald-type spline-based test statistics for the null hypothesis of the linearity of the nonparametric function. When the number of knots is fixed, the first test statistic asymptotically has the distribution of a linear combination of independent chi-squared random variables, each with one degree of freedom, under the null hypothesis. The smoothing parameter is determined by specifying a value for the asymptotically expected value of the test statistic under the null hypothesis. When the number of knots is fixed and under the null hypothesis, the second test statistic asymptotically has a chi-squared distribution with K=q+2 degrees of freedom, where q is the number of knots used for estimation. The power performances of the two proposed tests are investigated via simulation experiments, and the practicality of the proposed methodology is illustrated using a real-life data set.

10.
Improved James-Stein type estimation of the mean vector μ of a multivariate Student-t population of dimension p with ν degrees of freedom is considered. In addition to the sample data, uncertain prior information on the value of the mean vector, in the form of a null hypothesis, is used for the estimation. The usual maximum likelihood estimator (MLE) of μ is obtained and a test statistic for testing H0: μ = μ0 is derived. Based on the MLE of μ and the test statistic, the preliminary test estimator (PTE), Stein-type shrinkage estimator (SE) and positive-rule shrinkage estimator (PRSE) are defined. The bias and the quadratic risk of the estimators are evaluated. The relative performances of the estimators are investigated by analyzing the risks under different conditions. It is observed that the PRSE dominates the other three estimators, regardless of the validity of the null hypothesis and the value of ν.

11.
It is well known that in a sequential study the probability that the likelihood ratio for a simple alternative hypothesis H1 versus a simple null hypothesis H0 will ever be greater than a positive constant c will not exceed 1/c under H0. However, for a composite alternative hypothesis, this bound of 1/c no longer holds when a generalized likelihood ratio statistic is used. We consider a stepwise likelihood ratio statistic which, for each new observation, is updated by cumulatively multiplying the ratio of the conditional likelihood under the composite alternative hypothesis, evaluated at an estimate of the parameter obtained from the preceding observations, versus that under the simple null hypothesis. We show that, under the null hypothesis, the probability that this stepwise likelihood ratio will ever be greater than c will not exceed 1/c. In contrast, under the composite alternative hypothesis, this ratio will generally converge in probability to ∞. These results suggest that a stepwise likelihood ratio statistic can be useful in a sequential study for testing a composite alternative versus a simple null hypothesis. For illustration, we conduct two simulation studies, one for a normal response and one for an exponential response, to compare the performance of a sequential test based on a stepwise likelihood ratio statistic with a constant boundary versus some existing approaches.
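The 1/c bound for a simple-vs-simple likelihood ratio (Ville's inequality for the likelihood-ratio martingale) is easy to check by simulation. The sketch below uses N(θ, 1) observations with θ1 = 0.5 as an illustrative alternative; it is not the paper's stepwise statistic:

```python
import numpy as np

def ever_exceeds(c, n_steps, rng, theta1=0.5):
    """Track the simple-vs-simple likelihood ratio for H1: N(theta1, 1)
    against H0: N(0, 1) on data generated under H0; report whether the
    ratio ever exceeds c within n_steps observations."""
    log_lr = 0.0
    for _ in range(n_steps):
        x = rng.normal()                        # observation drawn under H0
        log_lr += theta1 * x - theta1 ** 2 / 2  # log of the LR increment
        if log_lr > np.log(c):
            return True
    return False

rng = np.random.default_rng(0)
c = 10.0
freq = sum(ever_exceeds(c, 200, rng) for _ in range(2000)) / 2000
print(freq)                                     # should stay below 1/c = 0.1
```

The empirical crossing frequency stays below 1/c because the likelihood ratio is a non-negative martingale with mean 1 under H0.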

12.
There are many hypothesis testing settings in which one can calculate a "reasonable" test statistic but in which the null distribution of the statistic is unknown or completely intractable. Fortunately, in many such situations it is possible to simulate values of the test statistic under the null hypothesis, in which case one can conduct a Monte Carlo test. A difficulty arises, however, in that Monte Carlo tests, as they are currently structured, are applicable only if ties cannot occur among the values of the test statistics. There is a frequently occurring scenario with many ties, namely that in which the null distribution of the test statistic has a (single) point mass. It turns out that the current form of Monte Carlo tests can be modified to accommodate such settings. Developing this modification leads to an intriguing identity involving the binomial probability function and its derivatives. In this article, we briefly explain the modified procedure, discuss simulation studies that demonstrate its efficacy, and provide a proof of the identity referred to above.
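The paper's modification is not reproduced here, but a standard way to handle a point mass is to break ties at random when ranking the observed statistic among the simulated ones; a minimal sketch, with the point-mass-at-zero null chosen purely for illustration:

```python
import numpy as np

def mc_pvalue(observed, simulated, rng):
    """Monte Carlo p-value with randomized tie-breaking: the observed
    statistic is placed uniformly at random among simulated values tied
    with it, making the p-value exactly discrete-uniform under the null."""
    sim = np.asarray(simulated)
    greater = int(np.sum(sim > observed))
    ties = int(np.sum(sim == observed))
    rank = greater + rng.integers(0, ties + 1)  # break ties at random
    return (rank + 1) / (len(sim) + 1)

rng = np.random.default_rng(0)
# toy null statistic with a point mass at zero: max(0, Z), Z ~ N(0, 1)
sims = np.maximum(0.0, rng.normal(size=999))
print(mc_pvalue(0.0, sims, rng))                # observed value sits on the point mass
```

Without the randomization, every observed value on the point mass would receive the same conservative p-value, which is exactly the problem the abstract describes.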

13.
We investigate resampling methodologies for testing the null hypothesis that two samples of labelled landmark data in three dimensions come from populations with a common mean reflection shape or mean reflection size-and-shape. The investigation includes comparisons between (i) two different test statistics that are functions of the projection of the data onto tangent space, namely the James statistic and an empirical likelihood statistic; (ii) bootstrap and permutation procedures; and (iii) three methods for resampling under the null hypothesis, namely translating in tangent space, resampling using weights determined by empirical likelihood, and using a novel method to transform the original sample entirely within reflection shape space. We present results of extensive numerical simulations, on the basis of which we recommend a bootstrap test procedure that we expect to work well in practice. We demonstrate the procedure using a data set of human faces, to test whether humans in different age groups have a common mean face shape.

14.
The nonparametric component in a partially linear model is approximated via cubic B-splines with a second-order difference penalty on the adjacent B-spline coefficients to avoid undersmoothing. A Wald-type spline-based test statistic is constructed for the null hypothesis of no effect of a continuous covariate. When the number of knots is fixed, the limiting null distribution of the test statistic is the distribution of a linear combination of independent chi-squared random variables, each with one degree of freedom. A real-life dataset is provided to illustrate the practical use of the test statistic.

15.
In this paper we evaluate the performance of three methods for testing the existence of a unit root in a time series when the models under the null hypothesis do not display autocorrelation in the error term. In such cases, simple versions of the Dickey-Fuller test are the most appropriate, rather than the well-known augmented Dickey-Fuller or Phillips-Perron tests. Through Monte Carlo simulations we show that, apart from a few cases, testing for a unit root yields actual type I error and power very close to their nominal levels. Additionally, when the random walk null hypothesis is true, gradually increasing the sample size shows that p-values for the drift in the unrestricted model fluctuate at low levels with small variance, and the Durbin-Watson (DW) statistic approaches 2 in both the unrestricted and restricted models. If, however, the null hypothesis of a random walk is false, then with a larger sample the DW statistic in the restricted model starts to deviate from 2 while in the unrestricted model it continues to approach 2. It is also shown that the probability of not rejecting that the errors are uncorrelated, when they are indeed uncorrelated, is higher when the DW test is applied at the 1% nominal level of significance.
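A simple (non-augmented) Dickey-Fuller regression with drift of the kind evaluated above can be sketched as follows; the 5% critical value quoted in the docstring is the familiar large-sample value for the drift case, and the simulated series are illustrative:

```python
import numpy as np

def dickey_fuller_t(y):
    """t-statistic from the simple (non-augmented) Dickey-Fuller regression
    with drift: diff(y)_t = a + rho * y_{t-1} + e_t.  Under the random-walk
    null it follows the Dickey-Fuller (not Student-t) distribution; the
    large-sample 5% critical value for this drift case is about -2.86."""
    dy = np.diff(y)
    X = np.column_stack([np.ones(len(dy)), y[:-1]])
    beta, *_ = np.linalg.lstsq(X, dy, rcond=None)
    resid = dy - X @ beta
    s2 = resid @ resid / (len(dy) - 2)
    se = np.sqrt(s2 * np.linalg.inv(X.T @ X)[1, 1])
    return beta[1] / se

rng = np.random.default_rng(0)
rw = np.cumsum(rng.normal(size=500))            # random walk: null is true
ar = np.zeros(500)                              # stationary AR(1): null is false
for t in range(1, 500):
    ar[t] = 0.5 * ar[t - 1] + rng.normal()
print(dickey_fuller_t(rw), dickey_fuller_t(ar))
```

The stationary AR(1) series produces a strongly negative t-statistic, rejecting the unit root, while the random walk typically does not cross the Dickey-Fuller critical value.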

16.
We derive the influence function of the likelihood ratio test statistic for a multivariate normal sample. The derived influence function does not depend on the influence functions of the parameters under the null hypothesis, so the empirical influence function can be obtained directly from only the maximum likelihood estimators under the null hypothesis. Since the derived formula is general, it can be applied to influence analysis in many statistical testing problems.

17.
Two goodness-of-fit statistics with asymmetric weight functions are derived from a decomposition of the Anderson-Darling statistic. For each one, the asymptotic null distribution is found for a simple null hypothesis and some upper percentiles are calculated. The asymptotic power of the tests is obtained for some contiguous alternatives around a normal null hypothesis. The tests allow the user to choose which tail to give more weight to, and they are intended to be used for that purpose; they should therefore not be considered competitors of the classical goodness-of-fit tests.

18.
The main purpose of this paper is to introduce a new family of empirical test statistics for testing a simple null hypothesis when the vector of parameters of interest is defined through a specific set of unbiased estimating functions. This family of test statistics is based on a distance between two probability vectors: the first obtained by maximizing the empirical likelihood (EL) over the vector of parameters, and the second defined from the fixed vector of parameters under the simple null hypothesis. The distance considered for this purpose is the phi-divergence measure. The asymptotic distribution is then derived for this family of test statistics. The proposed methodology is illustrated through the well-known data of Newcomb's measurements of the passage time of light. A simulation study compares its performance with that of the EL ratio test when confidence intervals are constructed from the respective statistics for small sample sizes. The results suggest that the 'empirical modified likelihood ratio test statistic' provides a competitive alternative to the EL ratio test statistic, and is also more robust in the presence of contamination in the data. Finally, we propose empirical phi-divergence test statistics for testing a composite null hypothesis and present some asymptotic as well as simulation results evaluating the performance of these test procedures.

19.
In an informal way, some dilemmas connected with hypothesis testing in contingency tables are discussed. The body of the article concerns the numerical evaluation of Cochran's rule about the minimum expected value in r × c contingency tables with fixed margins when testing independence with Pearson's X² statistic using the χ² distribution.
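Pearson's X² statistic and the minimum expected count that Cochran-type rules inspect can be computed directly; the 2 × 3 table below is made up for illustration:

```python
import numpy as np
from scipy.stats import chi2

def pearson_chi2(table):
    """Pearson X^2 test of independence for an r x c table with fixed
    margins.  Returns the statistic, its degrees of freedom and the
    minimum expected cell count, the quantity Cochran-type rules compare
    against thresholds such as 5 (or 1)."""
    table = np.asarray(table, dtype=float)
    expected = np.outer(table.sum(axis=1), table.sum(axis=0)) / table.sum()
    x2 = float(((table - expected) ** 2 / expected).sum())
    df = (table.shape[0] - 1) * (table.shape[1] - 1)
    return x2, df, float(expected.min())

x2, df, emin = pearson_chi2([[12, 7, 9], [8, 11, 13]])
print(x2, df, emin, chi2.sf(x2, df))            # p-value from the chi2(df) tail
```

When the minimum expected count is small, the χ²(df) reference distribution can be a poor approximation, which is precisely the setting Cochran's rule addresses.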

20.
In data collection in environmental science and bioassay, left censoring due to nondetects is a problem; similarly, in reliability and life data analysis right censoring frequently occurs. There is a need for goodness-of-fit tests that can adapt to left- or right-censored data and check important distributional assumptions without becoming too difficult to implement regularly in practice. A new test statistic is derived from a plot of the standardized spacings between the order statistics versus their ranks; any linear or curvilinear pattern is evidence against the null distribution. When testing the Weibull or extreme value null hypothesis, this statistic has a null distribution that is approximately F for most combinations of sample size and censoring of practical interest. Our statistic is compared to the Mann-Scheuer-Fertig statistic, which also uses the standardized spacings between the order statistics. The results of a simulation study show the two tests are competitive in terms of power. Although the Mann-Scheuer-Fertig statistic is somewhat easier to compute, our test enjoys advantages in the accuracy of the F approximation and the availability of a graphical diagnostic.
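The standardized-spacings idea can be illustrated in the simplest uncensored exponential case, where the Rényi representation makes the normalized spacings i.i.d. exponential under the null (the Weibull/extreme-value setting of the abstract reduces to this after a log transform); the function name is illustrative:

```python
import numpy as np

def normalized_spacings(x):
    """Renyi representation: for order statistics X_(1) <= ... <= X_(n)
    of an exponential sample, the normalized spacings
    D_i = (n - i + 1) * (X_(i) - X_(i-1)), with X_(0) = 0,
    are again i.i.d. exponential, so a trend in D_i versus the rank i is
    evidence against the exponential null."""
    x = np.sort(np.asarray(x))
    n = len(x)
    prev = np.concatenate([[0.0], x[:-1]])
    return (n - np.arange(n)) * (x - prev)

rng = np.random.default_rng(0)
d = normalized_spacings(rng.exponential(scale=2.0, size=1000))
print(round(d.mean(), 3))                       # close to the true scale, 2.0
```

Plotting these spacings against their ranks is exactly the kind of graphical diagnostic the abstract describes: under the null the plot is patternless noise around the scale parameter.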
