20 similar documents found.
1.
Journal of Statistical Computation and Simulation, 2012, 82(5): 982-998
The class of inflated beta regression models generalizes that of beta regressions [S.L.P. Ferrari and F. Cribari-Neto, Beta regression for modelling rates and proportions, J. Appl. Stat. 31 (2004), pp. 799–815] by incorporating a discrete component that allows practitioners to model data on rates and proportions with observations that equal an interval limit. For instance, one can model responses that assume values in (0, 1]. The likelihood ratio test tends to be quite oversized (liberal, anticonservative) in inflated beta regressions estimated with a small number of observations. Indeed, our numerical results show that its null rejection rate can be almost twice the nominal level. It is thus important to develop alternative testing strategies. This paper develops small-sample adjustments to the likelihood ratio and signed likelihood ratio test statistics in inflated beta regression models. The adjustments do not require orthogonality between the parameters of interest and the nuisance parameters, and they are fairly simple since they only require first- and second-order log-likelihood cumulants. Simulation results show that the modified likelihood ratio tests deliver much more accurate inference in small samples. An empirical application is presented and discussed.
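As a point of reference for the model class (not the paper's cumulant-based adjustments), the sketch below writes down the log-likelihood of a one-inflated beta response: a point mass p at 1 together with a beta density on (0, 1) in the mean-precision parameterization (mu, phi) of Ferrari and Cribari-Neto. The parameter values and simulated data are illustrative assumptions.

```python
import numpy as np
from scipy.stats import beta

def one_inflated_beta_loglik(y, p, mu, phi):
    """Log-likelihood of a one-inflated beta model:
    P(Y = 1) = p; for y in (0, 1), density (1 - p) * Beta(mu*phi, (1 - mu)*phi)."""
    y = np.asarray(y)
    at_one = (y == 1.0)
    ll = np.sum(at_one) * np.log(p)
    ll += np.sum(np.log1p(-p) + beta.logpdf(y[~at_one], mu * phi, (1.0 - mu) * phi))
    return ll

# toy illustration: evaluate the log-likelihood on simulated responses in (0, 1]
rng = np.random.default_rng(0)
y = np.where(rng.random(100) < 0.2, 1.0, rng.beta(2.0, 3.0, 100))
print(one_inflated_beta_loglik(y, p=0.2, mu=0.4, phi=5.0))
```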
2.
Pao-Sheng Shen. Journal of Applied Statistics, 2011, 38(10): 2345-2353
Doubly truncated data appear in a number of applications, including astronomy and survival analysis. For doubly truncated data, the lifetime T is observable only when U ≤ T ≤ V, where U and V are the left- and right-truncation times, respectively. Based on the empirical likelihood approach of Zhou [21], we propose a modification of the EM algorithm of Turnbull [19] to construct an interval estimator of the distribution function of T. Simulation results indicate that the empirical likelihood method can be more efficient than the bootstrap method.
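For context, here is a minimal sketch of the Efron-Petrosian/Turnbull-type self-consistency iteration for the nonparametric MLE of the distribution of T under double truncation (T observed only when U ≤ T ≤ V). It is an illustrative fixed-point implementation on simulated data and does not reproduce the paper's empirical-likelihood interval estimator.

```python
import numpy as np

def npmle_doubly_truncated(t, u, v, n_iter=1000, tol=1e-8):
    """Self-consistency iteration for the NPMLE of P(T = t_j) when T is
    observed only if u <= T <= v (Efron-Petrosian/Turnbull-type)."""
    t, u, v = map(np.asarray, (t, u, v))
    grid = np.unique(t)                               # support points of the NPMLE
    J = (u[:, None] <= grid) & (grid <= v[:, None])   # J[i, j] = 1{u_i <= t_j <= v_i}
    counts = (t[:, None] == grid).sum(axis=0)
    f = np.full(grid.size, 1.0 / grid.size)           # start from uniform weights
    for _ in range(n_iter):
        F = J @ f                                     # prob. of the i-th observation window
        f_new = counts / (J / F[:, None]).sum(axis=0)
        f_new /= f_new.sum()
        if np.max(np.abs(f_new - f)) < tol:
            return grid, f_new
        f = f_new
    return grid, f

# toy illustration with Exp(1) lifetimes and simulated truncation windows
rng = np.random.default_rng(0)
T = rng.exponential(1.0, 1000)
U = rng.uniform(0.0, 1.0, 1000)
V = U + rng.uniform(2.0, 6.0, 1000)
obs = (U <= T) & (T <= V)
grid, f = npmle_doubly_truncated(T[obs], U[obs], V[obs])
print("estimated F(1):", f[grid <= 1.0].sum(), " (compare with 1 - exp(-1) = 0.632)")
```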
3.
Wenzheng Huang. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 2004, 66(2): 401-409
It is well known that in a sequential study the probability that the likelihood ratio for a simple alternative hypothesis H1 versus a simple null hypothesis H0 will ever be greater than a positive constant c will not exceed 1/c under H0. However, for a composite alternative hypothesis, this bound of 1/c will no longer hold when a generalized likelihood ratio statistic is used. We consider a stepwise likelihood ratio statistic which, for each new observation, is updated by cumulatively multiplying the ratio of the conditional likelihoods for the composite alternative hypothesis evaluated at an estimate of the parameter obtained from the preceding observations versus the simple null hypothesis. We show that, under the null hypothesis, the probability that this stepwise likelihood ratio will ever be greater than c will not exceed 1/c. In contrast, under the composite alternative hypothesis, this ratio will generally converge in probability to ∞. These results suggest that a stepwise likelihood ratio statistic can be useful in a sequential study for testing a composite alternative versus a simple null hypothesis. For illustration, we conduct two simulation studies, one for a normal response and one for an exponential response, to compare the performance of a sequential test based on a stepwise likelihood ratio statistic with a constant boundary versus some existing approaches.
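A minimal Monte Carlo sketch of the stepwise construction, assuming N(theta, 1) observations with H0: theta = 0 and the alternative likelihood evaluated at the mean of the preceding observations (the first factor is taken as 1). The boundary c, sample sizes and effect size are illustrative choices, not the paper's simulation settings; the crossing probability under H0 is compared with the 1/c bound.

```python
import numpy as np

rng = np.random.default_rng(0)

def crossing_rate(theta_true, n_obs=200, c=20.0, n_rep=10000):
    """Monte Carlo estimate of P(stepwise likelihood ratio ever exceeds c)
    for N(theta, 1) data, testing H0: theta = 0."""
    x = rng.normal(theta_true, 1.0, size=(n_rep, n_obs))
    # estimate of theta from the observations preceding each x_i (0 before the first)
    prior_mean = np.cumsum(x, axis=1)[:, :-1] / np.arange(1, n_obs)
    theta_hat = np.hstack([np.zeros((n_rep, 1)), prior_mean])
    # log of N(x; theta_hat, 1)/N(x; 0, 1) simplifies to theta_hat*x - theta_hat**2/2
    log_lr = np.cumsum(theta_hat * x - 0.5 * theta_hat**2, axis=1)
    return np.mean(log_lr.max(axis=1) > np.log(c))

print("under H0:", crossing_rate(0.0), "  bound 1/c =", 1 / 20.0)
print("under H1 (theta = 0.5):", crossing_rate(0.5))
```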
4.
5.
Junshan Xie. Communications in Statistics - Theory and Methods, 2017, 46(17): 8479-8492
The paper considers a significance test of the regression variables in the high-dimensional linear regression model when the dimension p of the regression variables, together with the sample size n, tends to infinity. Under two slightly different settings, we prove that the likelihood ratio test statistic converges in distribution to a Gaussian random variable, and explicit expressions for the asymptotic mean and variance are obtained. Simulations demonstrate that the proposed high-dimensional likelihood ratio test outperforms traditional methods in analyzing high-dimensional data.
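A small simulation, under assumed Gaussian errors, of why a high-dimensional correction is needed: the classical chi-square(p) calibration of the likelihood ratio statistic for H0: beta = 0 in y = X beta + eps is close to the nominal level when p is small but fails completely when p grows proportionally with n. The paper's corrected normal calibration is not reproduced here.

```python
import numpy as np
from scipy.stats import chi2

rng = np.random.default_rng(0)

def lr_rejection_rate(n, p, n_rep=2000, level=0.05):
    """Null rejection rate of the LR test of beta = 0 in a Gaussian linear model,
    calibrated with the classical chi-square(p) approximation."""
    crit = chi2.ppf(1 - level, df=p)
    rejections = 0
    for _ in range(n_rep):
        X = rng.normal(size=(n, p))
        y = rng.normal(size=n)                  # data generated under H0
        beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
        resid = y - X @ beta_hat
        lr = n * np.log((y @ y) / (resid @ resid))   # likelihood ratio statistic
        rejections += lr > crit
    return rejections / n_rep

print("p fixed (n=200, p=5):   ", lr_rejection_rate(200, 5))
print("p ~ n/2 (n=200, p=100): ", lr_rejection_rate(200, 100))
```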
6.
In this article, we show that the log empirical likelihood ratio statistic for the population mean converges in distribution to χ2(1) as n → ∞ when the population is in the domain of attraction of the normal law but has infinite variance. Simulation results show that the empirical likelihood ratio method is applicable under this infinite second-moment condition.
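A minimal sketch of the log empirical likelihood ratio for a population mean (Owen's construction, with the Lagrange multiplier found by root-finding). The t(2) sample, which has infinite variance yet lies in the domain of attraction of the normal law, is an illustrative choice.

```python
import numpy as np
from scipy.optimize import brentq
from scipy.stats import chi2

def neg2_log_el_ratio(x, mu):
    """-2 log empirical likelihood ratio for the mean mu (Owen's construction)."""
    z = np.asarray(x) - mu
    if z.min() >= 0 or z.max() <= 0:
        return np.inf                          # mu lies outside the convex hull of the data
    # the Lagrange multiplier must keep 1 + lam*z_i > 0 for every observation
    lo = (-1.0 + 1e-10) / z.max()
    hi = (-1.0 + 1e-10) / z.min()
    lam = brentq(lambda l: np.sum(z / (1.0 + l * z)), lo, hi)
    return 2.0 * np.sum(np.log1p(lam * z))

rng = np.random.default_rng(0)
x = rng.standard_t(df=2.0, size=200)           # infinite variance, normal domain of attraction
stat = neg2_log_el_ratio(x, mu=0.0)
print("statistic:", round(stat, 3), " chi2(1) 95% critical value:", round(chi2.ppf(0.95, 1), 3))
```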
7.
Gabriela Ciuperca. Journal of Statistical Computation and Simulation, 2013, 83(4): 739-758
In this paper, a nonlinear model with response variables missing at random is studied. In order to improve the coverage accuracy for the model parameters, the empirical likelihood (EL) ratio method is considered. On the complete data, the EL statistic for the parameters and its approximation have a χ2 asymptotic distribution. When the responses are reconstituted using a semi-parametric method, the empirical log-likelihood on the response variables associated with the imputed data is also asymptotically χ2. Wilks' theorem for the EL on the parameters, based on the reconstituted data, is also satisfied. These results can be used to construct confidence regions for the model parameters and the response variables. Monte Carlo simulations show that the EL methods outperform the normal approximation-based method in terms of coverage probability for the unknown parameters, including on the reconstituted data. The advantages of the proposed method are illustrated on real data.
8.
For a specified decision rule, a general class of likelihood ratio based repeated significance tests is considered. An invariance principle for the likelihood ratio statistics is established and incorporated in the study of the asymptotic theory of the proposed tests. For comparing these tests with the conventional likelihood ratio tests, based solely on the target sample size, some Bahadur efficiency results are presented. The theory is then adapted to the study of some multiple comparison procedures.
9.
The main purpose of this paper is first to introduce a new family of empirical test statistics for testing a simple null hypothesis when the vector of parameters of interest is defined through a specific set of unbiased estimating functions. This family of test statistics is based on a distance between two probability vectors: the first probability vector is obtained by maximizing the empirical likelihood (EL) on the vector of parameters, and the second is defined from the fixed vector of parameters under the simple null hypothesis. The distance considered for this purpose is the phi-divergence measure. The asymptotic distribution is then derived for this family of test statistics. The proposed methodology is illustrated using Newcomb's well-known measurements of the passage time of light. A simulation study is carried out to compare its performance with that of the EL ratio test when confidence intervals are constructed from the respective statistics for small sample sizes. The results suggest that the ‘empirical modified likelihood ratio test statistic’ provides a competitive alternative to the EL ratio test statistic and is more robust in the presence of contamination in the data. Finally, we propose empirical phi-divergence test statistics for testing a composite null hypothesis and present some asymptotic as well as simulation results evaluating the performance of these test procedures.
10.
A more accurate Bartlett correction factor is proposed for a likelihood ratio statistic in multivariate bioassay.
11.
Albert Vexler, Guogen Shan, Seongeun Kim, Wan-Min Tsai, Lili Tian, Alan D. Hutson. Journal of Statistical Planning and Inference, 2011, 141(6): 2128-2140
The inverse Gaussian (IG) distribution is commonly used to model right-skewed data with positive support. When applying the IG model, it is critical to have efficient goodness-of-fit tests. In this article, we propose a new test statistic for examining IG goodness of fit based on approximating parametric likelihood ratios. The parametric likelihood ratio methodology is well known to provide powerful tests. In the nonparametric context, the classical empirical likelihood (EL) ratio method is often applied to approximate properties of parametric likelihoods efficiently, using an approach based on substituting empirical distribution functions for their population counterparts. The optimal parametric likelihood ratio approach is, however, based on density functions. We develop and analyze an EL ratio approach based on densities in order to test the IG model fit. We show that the proposed test improves on the entropy-based goodness-of-fit test for the IG distribution presented by Mudholkar and Tian (2002). Theoretical support is obtained by proving consistency of the new test and an asymptotic proposition regarding the null distribution of the proposed test statistic. Monte Carlo simulations confirm the powerful properties of the proposed method, and real data examples demonstrate the applicability of the density-based EL ratio goodness-of-fit test for an IG assumption in practice.
12.
In Hong Chang. Statistics, 2013, 47(2): 294-305
We consider likelihood ratio statistics based on the usual profile likelihood and on the standard adjustments thereof proposed in the literature in the presence of nuisance parameters. The role of data-dependent priors in ensuring approximate frequentist validity of posterior credible regions based on the inversion of these statistics is investigated. Unlike with data-free priors, the resulting probability matching conditions readily admit solutions, and these also entail approximate frequentist validity of the highest posterior density region.
13.
Several adjustments of the profile likelihood have the common effect of reducing the bias of the associated score function. Hence expansions for the adjusted score functions differ by a term, Dξ, that has small asymptotic order, O(n^{-1/2}). The effect of Dξ on other quantities of interest is studied. In particular, we find the bias and variance of the adjusted maximum-likelihood estimate to be relatively unaffected, while differences in the Bartlett correction depend on Dξ in a simple way.
14.
Density ratio models (DRMs) are commonly used semiparametric models for linking related populations. Empirical likelihood (EL) under a DRM has proved to be a flexible and useful platform for semiparametric inference. Since the DRM-based EL has the same maximum point and maximum likelihood value as its dual form (the dual EL), EL-based inferences under a DRM are usually made through the latter. A natural question arises: is there any efficiency loss in doing so? We make a careful comparison of the dual EL and DRM-based EL estimation methods, both theoretically and through numerical simulations. We find that their point estimators for any parameter are exactly the same, while they may perform differently in interval estimation. In terms of coverage accuracy, the two intervals are comparable for non-skewed or moderately skewed populations, and the DRM-based EL interval can be much superior for severely skewed populations. A real data example is analysed for illustration.
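To fix ideas, a minimal sketch of the dual empirical log-likelihood for a two-sample density ratio model dF1/dF0(x) = exp(alpha + beta*x), maximized numerically. The normal samples and the basis function r(x) = x are illustrative assumptions; the interval-estimation comparison studied in the paper is not reproduced.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
sample0 = rng.normal(0.0, 1.0, 300)      # baseline sample from F0
sample1 = rng.normal(1.0, 1.0, 300)      # sample from F1; here dF1/dF0(x) = exp(-0.5 + x)
pooled = np.concatenate([sample0, sample1])
rho = sample1.size / sample0.size

def neg_dual_el(theta):
    """Negative dual empirical log-likelihood of the two-sample DRM (up to a constant)."""
    alpha, beta = theta
    s = alpha + beta * pooled
    # log(1 + rho*exp(s)) computed stably via logaddexp
    return -(np.sum(alpha + beta * sample1) - np.sum(np.logaddexp(0.0, np.log(rho) + s)))

fit = minimize(neg_dual_el, x0=np.zeros(2), method="BFGS")
print("(alpha_hat, beta_hat):", fit.x, " true values in this toy setup: (-0.5, 1.0)")
```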
15.
Li Chen. Communications in Statistics - Theory and Methods, 2013, 42(20): 5933-5945
In this article, empirical likelihood is applied to the linear regression model with inequality constraints. We prove that the asymptotic distribution of the adjusted empirical likelihood ratio test statistic is a weighted mixture of chi-square distributions.
16.
Distribution function estimation plays a foundational role in statistics, since the population distribution is involved in most statistical inference yet is usually unknown. In this paper, we consider the estimation of the distribution function of a response variable Y with missing responses in the regression setting. It is proved that the augmented inverse probability weighted estimator converges weakly to a zero-mean Gaussian process. An augmented inverse probability weighted empirical log-likelihood function is also defined, and it is shown to converge weakly to the square of a Gaussian process with mean zero and variance one. We apply these results to construct confidence bands for the distribution function of Y based on the Gaussian process approximation and on the empirical likelihood. A simulation is conducted to evaluate the confidence bands.
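A minimal sketch of the augmented inverse probability weighted (AIPW) estimate of F_Y(y) with responses missing at random. The simulated design, the use of the true missingness probabilities, and the normal-linear working model for P(Y <= y | X) fitted on the complete cases are illustrative assumptions; in practice both nuisance components would be estimated, and the paper's confidence bands are not reproduced.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
n = 2000
x = rng.uniform(0.0, 1.0, n)
y = 1.0 + x + rng.normal(0.0, 0.5, n)
pi = 1.0 / (1.0 + np.exp(-(0.2 + 1.5 * x)))     # P(response observed | x), missing at random
delta = rng.random(n) < pi                       # response indicator

def aipw_cdf(y0):
    """AIPW estimate of F_Y(y0): IPW term plus augmentation from a working model
    for P(Y <= y0 | X) fitted on the complete cases (normal-linear regression)."""
    xc, yc = x[delta], y[delta]
    b1, b0 = np.polyfit(xc, yc, 1)               # complete-case regression of Y on X
    sigma = np.std(yc - (b0 + b1 * xc))
    m = norm.cdf((y0 - (b0 + b1 * x)) / sigma)   # working model for P(Y <= y0 | X)
    d = delta.astype(float)
    ipw = d * (y <= y0) / pi
    return np.mean(ipw + (1.0 - d / pi) * m)

y0 = 2.0
print("AIPW estimate:", aipw_cdf(y0))
print("full-data empirical CDF:", np.mean(y <= y0))
```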
17.
Journal of Statistical Computation and Simulation, 2012, 82(10): 2233-2247
In this paper, we develop modified versions of the likelihood ratio test for multivariate heteroskedastic errors-in-variables regression models. The error terms are allowed to follow a multivariate distribution in the elliptical class, which has the normal distribution as a special case. We derive the Skovgaard-adjusted likelihood ratio statistics, which follow a chi-squared distribution with a high degree of accuracy. We conduct a simulation study and show that the proposed tests display superior finite-sample behaviour compared with the standard likelihood ratio test. We illustrate the usefulness of our results in applied settings using a data set from the WHO MONICA Project on cardiovascular disease.
18.
In this paper, we investigate empirical-likelihood-based inference for the construction of confidence intervals and regions for the parameters of interest in single-index models with covariates missing at random. An augmented inverse probability weighted empirical likelihood ratio for the parameters of interest is defined such that the ratio is asymptotically standard chi-squared. Our approach directly calibrates the empirical log-likelihood ratio and does not require multiplying the original ratio by an adjustment factor. The bias-corrected empirical likelihood is self-scale invariant, and no plug-in estimator of the limiting variance is needed. Simulation studies are carried out to assess the performance of the proposed method.
19.
We consider the problem of detecting a ‘bump’ in the intensity of a Poisson process or in a density. We analyze two types of likelihood ratio-based statistics, which allow for exact finite-sample inference and asymptotically optimal detection: the maximum of the penalized square root of the log likelihood ratios (‘penalized scan’) evaluated over a certain sparse set of intervals, and a certain average of the log likelihood ratios (‘condensed average likelihood ratio’). We show that penalizing the square root of the log likelihood ratio, rather than the log likelihood ratio itself, leads to a simple penalty term that yields optimal power. The penalty derived in this way may prove useful for other problems that involve a Brownian bridge in the limit. The second key tool is an approximating set of intervals that is rich enough to allow for optimal detection but sparse enough that the validity of the penalization scheme can be justified simply via the union bound. This results in a considerable simplification of the theoretical treatment compared with the usual approach to this type of penalization technique, which requires establishing an exponential inequality for the variation of the test statistic. Another advantage of using the sparse approximating set is that it allows fast computation in nearly linear time. We present a simulation study that illustrates the superior performance of the penalized scan and of the condensed average likelihood ratio compared with the standard scan statistic.
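A simplified sketch of the scan idea over a sparse interval collection (dyadic lengths, coarsely gridded start points) for a Gaussian sequence with an elevated-mean bump: for each interval the standardized partial sum S_I/|I|^{1/2} is the signed square root of twice the log likelihood ratio. The paper's length-dependent penalty, the Poisson and density settings, and the condensed average likelihood ratio are not implemented; the data and interval grid are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2048
signal = np.zeros(n)
signal[900:1000] = 0.8                     # a "bump" of elevated mean
x = signal + rng.normal(size=n)
csum = np.concatenate([[0.0], np.cumsum(x)])

# sparse approximating set: dyadic lengths, start points on a grid of step length/4
best = (-np.inf, None)
length = 4
while length <= n:
    step = max(1, length // 4)
    for start in range(0, n - length + 1, step):
        s = csum[start + length] - csum[start]
        t = s / np.sqrt(length)            # standardized partial sum over the interval
        if t > best[0]:
            best = (t, (start, start + length))
    length *= 2

print("max standardized sum:", round(best[0], 2), "on interval", best[1])
```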