Similar Articles (20 results)
1.
When the error terms are autocorrelated, the conventional t-tests for individual regression coefficients lead to over-rejection of the null hypothesis. We examine, by Monte Carlo experiments, the small sample properties of the unrestricted estimator of ρ and of the estimator of ρ restricted by the null hypothesis. We compare the small sample properties of the Wald, likelihood ratio and Lagrange multiplier test statistics for individual regression coefficients. It is shown that when the null hypothesis is true, the unrestricted estimator of ρ is biased. It is also shown that the Lagrange multiplier test using the maximum likelihood estimator of ρ performs better than the Wald and likelihood ratio tests.
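The over-rejection described above can be reproduced with a short Monte Carlo sketch. This is an illustration, not the authors' experiment: it tests a zero mean (the simplest regression, intercept only) with AR(1) errors at a hypothetical ρ = 0.8, using the ordinary t-statistic that wrongly assumes independent errors.

```python
import numpy as np

def ar1_errors(n, rho, rng):
    """Simulate a stationary AR(1) error series with unit innovation variance."""
    e = np.empty(n)
    e[0] = rng.normal(scale=1.0 / np.sqrt(1.0 - rho**2))
    for t in range(1, n):
        e[t] = rho * e[t - 1] + rng.normal()
    return e

def rejection_rate(n=50, rho=0.8, reps=2000, seed=42):
    """Empirical size of the nominal 5% two-sided t-test of H0: mean = 0
    when H0 is true but the errors are AR(1)-autocorrelated."""
    rng = np.random.default_rng(seed)
    crit = 2.009  # approx. 97.5% point of t with 49 df
    rejections = 0
    for _ in range(reps):
        y = ar1_errors(n, rho, rng)  # H0 holds: the true mean is 0
        t_stat = y.mean() / (y.std(ddof=1) / np.sqrt(n))
        rejections += abs(t_stat) > crit
    return rejections / reps

print(rejection_rate())  # far above the nominal 0.05
```

With positive autocorrelation the naive standard error understates the variance of the sample mean, so the empirical size lands well above 5%, which is the over-rejection the abstract refers to.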

2.
Nonparametric regression models are often used to check or suggest a parametric model. Several methods have been proposed to test the hypothesis of a parametric regression function against an alternative smoothing spline model. Some tests, such as the locally most powerful (LMP) test by Cox et al. (Cox, D., Koh, E., Wahba, G. and Yandell, B. (1988). Testing the (parametric) null model hypothesis in (semiparametric) partial and generalized spline models. Ann. Stat., 16, 113–119), the generalized maximum likelihood (GML) ratio test and the generalized cross validation (GCV) test by Wahba (Wahba, G. (1990). Spline models for observational data. CBMS-NSF Regional Conference Series in Applied Mathematics, SIAM), were developed from the corresponding Bayesian models. Their frequentist properties have not been studied. We conduct simulations to evaluate and compare their finite sample performances. Simulation results show that the performances of these tests depend on the shape of the true function. The LMP and GML tests are more powerful for low frequency functions, while the GCV test is more powerful for high frequency functions. For all test statistics, distributions under the null hypothesis are complicated. Computationally intensive Monte Carlo methods can be used to calculate null distributions. We also propose approximations to these null distributions and evaluate their performances by simulations.

3.
ABSTRACT

This article examines the evidence contained in t statistics that are marginally significant in 5% tests. The bases for evaluating evidence are likelihood ratios and integrated likelihood ratios, computed under a variety of assumptions regarding the alternative hypotheses in null hypothesis significance tests. Likelihood ratios and integrated likelihood ratios provide a useful measure of the evidence in favor of competing hypotheses because they can be interpreted as representing the ratio of the probabilities that each hypothesis assigns to observed data. When they are either very large or very small, they suggest that one hypothesis is much better than the other in predicting observed data. If they are close to 1.0, then both hypotheses provide approximately equally valid explanations for observed data. I find that p-values that are close to 0.05 (i.e., that are “marginally significant”) correspond to integrated likelihood ratios that are bounded by approximately 7 in two-sided tests, and by approximately 4 in one-sided tests.

The modest magnitude of integrated likelihood ratios corresponding to p-values close to 0.05 clearly suggests that higher standards of evidence are needed to support claims of novel discoveries and new effects.
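A quick back-of-the-envelope check is consistent with the bound of roughly 7 quoted above. This is my illustration, assuming a normal likelihood with known variance: for a z-statistic exactly at the two-sided 5% boundary, the likelihood ratio of the best-supported alternative (mean equal to the observed value) to the null is exp(z²/2).

```python
import math

# For X̄ ~ N(mu, 1/n), the maximized likelihood ratio of H1 (mu free)
# to H0 (mu = 0), given observed z = sqrt(n) * x̄, is exp(z^2 / 2).
z = 1.96  # two-sided p-value ≈ 0.05
max_lr = math.exp(z * z / 2.0)
print(round(max_lr, 2))  # ≈ 6.83
```

Even this maximized (hence most generous) ratio is below 7, so any integrated likelihood ratio, which averages over alternatives rather than picking the best one, must be smaller still.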

4.
Assume that we have a sequence of n independent and identically distributed random variables with a continuous distribution function F, which is specified up to a few unknown parameters. In this paper, tests based on sum-functions of sample spacings are proposed, and large sample theory for the tests is presented under simple null hypotheses as well as under close alternatives. Tests that are optimal within this class are constructed, and it is noted that these tests have properties that closely parallel those of the likelihood ratio test in regular parametric models. Some examples are given, which show that the proposed tests also work in situations where the likelihood ratio test breaks down. Extensions to more general hypotheses are discussed.

5.
The exponential family structure of the joint distribution of generalized order statistics is utilized to establish multivariate tests on the model parameters. For simple and composite null hypotheses, the likelihood ratio (LR) test, Wald's test, and Rao's score test are derived and turn out to have simple representations. The asymptotic distribution of the corresponding test statistics under the null hypothesis is stated, and, in the case of a simple null hypothesis, asymptotic optimality of the LR test is addressed. Applications of the tests are presented; in particular, we discuss their use in reliability and in deciding whether a Poisson process is homogeneous. Finally, a power study is performed to measure and compare the quality of the tests for both simple and composite null hypotheses.

6.
We introduce a family of Rényi statistics of orders r ∈ ℝ for testing composite hypotheses in general exponential models, as alternatives to the previously considered generalized likelihood ratio (GLR) statistic and generalized Wald statistic. If appropriately normalized exponential models converge in a specific sense when the sample size (observation window) tends to infinity, and if the hypothesis is regular, then these statistics are shown to be χ2-distributed under the hypothesis. The corresponding Rényi tests are shown to be consistent. The exact sizes and powers of asymptotically α-size Rényi, GLR and generalized Wald tests are evaluated for a concrete hypothesis about a bivariate Lévy process and moderate observation windows. In this concrete situation the exact sizes of the Rényi test of order r = 2 practically coincide with those of the GLR and generalized Wald tests, but the exact powers of the Rényi test are on average somewhat better.

7.
In this paper, we propose several tests for simultaneously detecting differences in means and variances between two populations under normality. First, we propose a likelihood ratio test. We then express the likelihood ratio statistic as a product of two functions of random quantities, which can be used to test the two individual partial hypotheses of differences in means and in variances. With these individual partial tests, we propose a union-intersection test. We also consider two optimal tests obtained by combining the p-values of the two individual partial tests. To obtain null distributions, we apply the permutation principle with a Monte Carlo approach. We then compare the efficiency of the proposed tests with well-known ones through a simulation study. Finally, we discuss some interesting features of the simultaneous tests and resampling methods as concluding remarks.
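The permutation approach can be sketched in a few lines. This is a minimal illustration, not the paper's procedure: the function name and the two partial statistics (absolute mean difference and absolute log variance ratio) are my choices, and a simple min-p rule stands in for the union-intersection combination.

```python
import numpy as np

def perm_pvalues(x, y, reps=2000, seed=0):
    """Permutation p-values for the two partial hypotheses (equal means,
    equal variances) and a min-p union-intersection combination."""
    rng = np.random.default_rng(seed)
    pooled = np.concatenate([x, y])
    n = len(x)
    obs_mean = abs(x.mean() - y.mean())
    obs_var = abs(np.log(x.var(ddof=1) / y.var(ddof=1)))
    ge_mean = ge_var = 0
    for _ in range(reps):
        rng.shuffle(pooled)  # relabel observations under exchangeability
        a, b = pooled[:n], pooled[n:]
        ge_mean += abs(a.mean() - b.mean()) >= obs_mean
        ge_var += abs(np.log(a.var(ddof=1) / b.var(ddof=1))) >= obs_var
    p_mean = (ge_mean + 1) / (reps + 1)
    p_var = (ge_var + 1) / (reps + 1)
    return p_mean, p_var, min(p_mean, p_var)  # reject if the smallest p is small
```

The "+1" in numerator and denominator is the usual Monte Carlo correction that keeps permutation p-values strictly positive and valid.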

8.
Teresa Ledwina, Statistics, 2013, 47(1), 105–118
We state some necessary and sufficient conditions for admissibility of tests of a simple and a composite null hypothesis against "one-sided" alternatives in multivariate exponential distributions with discrete support.

Admissibility of the maximum likelihood test for "one-sided" alternatives, and of the χ2 test for the independence hypothesis in r × s contingency tables, is deduced, among others.

9.
Generalized variance is a measure of the dispersion of multivariate data. Comparing the dispersion of multivariate data is a central issue in multivariate quality control, generalized homogeneity of multidimensional scatter, etc. In this article, Bartlett's modified likelihood ratio test (BMLRT) is proposed for testing the equality of the generalized variances of k multivariate normal populations. Simulations comparing the Type I error rate and power of the BMLRT and likelihood ratio test (LRT) methods are performed. These simulations show that the BMLRT method has a better chi-square approximation under the null hypothesis. Finally, a practical example is given.

10.
This paper gives an exposition of the use of the posterior likelihood ratio for testing point null hypotheses in a fully Bayesian framework. Connections between the frequentist P-value and the posterior distribution of the likelihood ratio are used to interpret and calibrate P-values in a Bayesian context, and examples are given to show the use of simple posterior simulation methods to provide Bayesian tests of common hypotheses.

11.
We consider estimation and testing problems for some semiparametric two-sample density ratio models. The profile empirical likelihood (EL) poses an irregularity problem under the null hypothesis that the laws of the two samples are equal. We show that a dual form of the profile EL is well defined even under the null hypothesis. A statistical test, based on the dual form of the EL ratio statistic (ELRS), is then proposed. We give an interpretation of the dual form of the ELRS through φ-divergences and duality techniques. The asymptotic properties of the test statistic are presented both under the null and the alternative hypotheses, and an approximation of the power function of the test is deduced.

12.
ABSTRACT

This article argues that researchers do not need to completely abandon the p-value, the best-known significance index, but should instead stop using significance levels that do not depend on sample sizes. A testing procedure is developed using a mixture of frequentist and Bayesian tools, with a significance level that is a function of sample size, obtained from a generalized form of the Neyman–Pearson Lemma that minimizes a linear combination of α, the probability of rejecting a true null hypothesis, and β, the probability of failing to reject a false null, instead of fixing α and minimizing β. The resulting hypothesis tests do not violate the Likelihood Principle and do not require any constraints on the dimensionalities of the sample space and parameter space. The procedure includes an ordering of the entire sample space and uses predictive probability (density) functions, allowing for testing of both simple and compound hypotheses. Accessible examples are presented to highlight specific characteristics of the new tests.
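The idea of a significance level that shrinks with sample size can be seen in the simplest possible setting. This is my sketch, not the article's general procedure: for N(μ, 1) data and simple hypotheses H0: μ = 0 versus H1: μ = μ₁, the test minimizing α + k·β rejects when the likelihood ratio exceeds k, and the size α this induces falls automatically as n grows.

```python
from math import log, sqrt
from statistics import NormalDist

def alpha_n(n, mu1=0.5, k=1.0):
    """Size of the test minimizing alpha + k*beta for H0: mu = 0 vs
    H1: mu = mu1 (mu1 > 0) with N(mu, 1) data.  Rejecting when the
    likelihood ratio exceeds k is equivalent to rejecting when the
    sample mean exceeds log(k)/(n*mu1) + mu1/2."""
    cutoff = log(k) / (n * mu1) + mu1 / 2.0
    return 1.0 - NormalDist().cdf(sqrt(n) * cutoff)

# The implied significance level is not fixed: it decreases with n.
print(round(alpha_n(16), 4))   # ≈ 0.1587
print(round(alpha_n(100), 4))  # ≈ 0.0062
```

With equal weights (k = 1) the cutoff for the sample mean is simply μ₁/2, so larger samples make crossing it under H0 ever less likely, which is exactly the sample-size-dependent level the article advocates.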

13.
This paper introduces a general framework for testing hypotheses about the structure of the mean function of complex functional processes. Important particular cases of the proposed framework are as follows: (1) testing the null hypothesis that the mean of a functional process is parametric against a general alternative modelled by penalized splines; and (2) testing the null hypothesis that the means of two possibly correlated functional processes are equal or differ by only a simple parametric function. A global pseudo-likelihood ratio test is proposed, and its asymptotic distribution is derived. The size and power properties of the test are confirmed in realistic simulation scenarios. Finite-sample power results indicate that the proposed test is much more powerful than competing alternatives. Methods are applied to testing the equality between the means of normalized δ-power of sleep electroencephalograms of subjects with sleep-disordered breathing and matched controls.

14.
ABSTRACT

We derive the influence function of the likelihood ratio test statistic for a multivariate normal sample. The derived influence function does not depend on the influence functions of the parameters under the null hypothesis, so the empirical influence function can be obtained directly from only the maximum likelihood estimators under the null hypothesis. Since the derived formula is general, it can be applied to influence analysis in many statistical testing problems.

15.
We derive likelihood ratio (LR) tests for the null hypothesis of equivalence, namely that the normal means fall into a practical indifference zone. The LR test can easily be constructed and applied to k ≥ 2 treatments. Simulation results indicate that the LR test might be slightly anticonservative, but when the sample sizes are large, it attains the nominal level for mean configurations under the null hypothesis. The LR test is more powerful than the studentized range test and is straightforward to apply, requiring only existing statistical tables and no complicated computations.

16.
The restricted minimum φ-divergence estimator, [Pardo, J.A., Pardo, L. and Zografos, K., 2002, Minimum φ-divergence estimators with constraints in multinomial populations. Journal of Statistical Planning and Inference, 104, 221–237], is employed to obtain estimates of the cell frequencies of an I×I contingency table under hypotheses of symmetry, marginal homogeneity or quasi-symmetry. The associated φ-divergence statistics are distributed asymptotically as chi-squared distributions under the null hypothesis. The new estimators and test statistics contain, as particular cases, the classical estimators and test statistics previously presented in the literature for the cited problems. A simulation study is presented, for the symmetry problem, to choose the best function φ2 for estimation and the best function φ1 for testing.

17.
Statistics, 2012, 46(6), 1187–1209
ABSTRACT

According to the general law of likelihood, the strength of statistical evidence for a hypothesis as opposed to its alternative is the ratio of their likelihoods, each maximized over the parameter of interest. Consider the problem of assessing the weight of evidence for each of several hypotheses. Under a realistic model with a free parameter for each alternative hypothesis, this leads to weighing evidence without any shrinkage toward a presumption of the truth of each null hypothesis. That lack of shrinkage can lead to many false positives in settings with large numbers of hypotheses. A related problem is that point hypotheses cannot have more support than their alternatives. Both problems may be solved by fusing the realistic model with a model of a more restricted parameter space for use with the general law of likelihood. Applying the proposed framework of model fusion to data sets from genomics and education yields intuitively reasonable weights of evidence.

18.
It is generally assumed that, under some regularity conditions, the likelihood ratio statistic for testing the null hypothesis that data arise from a homoscedastic normal mixture distribution, versus the alternative that data arise from a heteroscedastic normal mixture distribution, has an asymptotic χ2 reference distribution with degrees of freedom equal to the difference in the number of parameters estimated under the alternative and null models. Simulations show that the χ2 reference distribution gives a reasonable approximation for the likelihood ratio test only when the sample size is 2000 or more and the mixture components are well separated, provided the restrictions suggested by Hathaway (Ann. Stat. 13:795–800, 1985) are imposed on the component variances to ensure that the likelihood is bounded under the alternative. For small and medium sample sizes, parametric bootstrap tests appear to work well for determining whether data arise from a normal mixture with equal variances or one with unequal variances.
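The parametric bootstrap recipe can be sketched in a stripped-down analogue. This is my illustration, not the mixture setting of the paper: two observed normal samples take the place of EM-fitted mixture components, so there are no component labels to estimate, but the logic is the same: fit the null (equal-variance) model, simulate from the fit, and compare the observed −2 log LR with its simulated null distribution instead of a χ2 table.

```python
import numpy as np

def neg2_loglr_equal_var(x, y):
    """-2 log LR for H0: common variance vs H1: separate variances,
    two independent normal samples (means unrestricted)."""
    n, m = len(x), len(y)
    v0 = (np.sum((x - x.mean())**2) + np.sum((y - y.mean())**2)) / (n + m)
    vx, vy = x.var(), y.var()  # MLEs (ddof=0)
    return (n + m) * np.log(v0) - n * np.log(vx) - m * np.log(vy)

def bootstrap_pvalue(x, y, reps=500, seed=0):
    """Parametric bootstrap: refit under H0, simulate from the fitted
    null model, and compare the observed statistic with its simulated
    null distribution."""
    rng = np.random.default_rng(seed)
    n, m = len(x), len(y)
    observed = neg2_loglr_equal_var(x, y)
    v0 = (np.sum((x - x.mean())**2) + np.sum((y - y.mean())**2)) / (n + m)
    exceed = 0
    for _ in range(reps):
        xb = rng.normal(x.mean(), np.sqrt(v0), n)
        yb = rng.normal(y.mean(), np.sqrt(v0), m)
        exceed += neg2_loglr_equal_var(xb, yb) >= observed
    return (exceed + 1) / (reps + 1)
```

In the mixture problem the "refit under H0" step would be an EM fit of the equal-variance mixture, and the simulation step would draw from that fitted mixture; the bootstrap comparison itself is unchanged.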

19.

We present correction formulae to improve likelihood ratio and score tests for testing simple and composite hypotheses on the parameters of the beta distribution. As a special case of our results, we obtain improved tests for the hypothesis that a sample is drawn from a uniform distribution on (0, 1). We present some Monte Carlo investigations showing that both corrected tests perform better than the classical likelihood ratio and score tests, at least for small sample sizes.

20.
Summary: In this paper the seasonal unit root test of Hylleberg et al. (1990) is generalized to cover a heterogeneous panel. The procedure follows the work of Im, Pesaran and Shin (2002) and was independently proposed by Otero et al. (2004). Test statistics are given and critical values are obtained by simulation. Moreover, the properties of the tests are analyzed for different deterministic and dynamic specifications. Evidence is presented that for a small time series dimension the power is low even for an increasing cross section dimension. It therefore seems necessary to have a larger time series dimension than cross section dimension. The test is applied to unemployment data in industrialized countries. In some cases seasonal unit roots are detected; however, the null hypotheses of panel seasonal unit roots are rejected. The null hypothesis of a unit root at the zero frequency is not rejected, supporting the presence of hysteresis effects. * The research of this paper was supported by the Deutsche Forschungsgemeinschaft. The paper was presented at the workshop "Unit roots and cointegration in panel data" in Frankfurt, October 2004 and in the poster session at the EC2 meeting in Marseille, December 2004. We are grateful to the participants of the workshops and an anonymous referee for their helpful comments.
