Similar Articles
20 similar articles found (search time: 31 ms)
1.
The minimum disparity estimators proposed by Lindsay (1994) for discrete models form an attractive subclass of minimum distance estimators which achieve their robustness without sacrificing first-order efficiency at the model. Similarly, disparity test statistics are useful robust alternatives to the likelihood ratio test for testing hypotheses in parametric models; they are asymptotically equivalent to the likelihood ratio test statistics under the null hypothesis and under contiguous alternatives. Despite these asymptotic optimality properties, the small-sample performance of many of the minimum disparity estimators and disparity tests can be considerably worse than that of the maximum likelihood estimator and the likelihood ratio test, respectively. In this paper we focus on the class of blended weight Hellinger distances, a general subfamily of disparities, study the effects of combining two different distances within this class to generate the family of “combined” blended weight Hellinger distances, and identify the members of this family which generally perform well. More generally, we investigate the class of “combined and penalized” blended weight Hellinger distances; the penalty is based on reweighting the empty cells, following Harris and Basu (1994). It is shown that some members of the combined and penalized family have rather attractive properties.
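As a rough illustration of the disparity idea above, the sketch below computes one common parameterization of a blended weight Hellinger distance between empirical proportions and a Poisson model and minimizes it over the mean. The function names, the support truncation, and the exact weighting are my own choices for illustration, not Lindsay's formulation, and no empty-cell penalty is applied:

```python
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.stats import poisson

def bwhd(alpha, d, f):
    """One common parameterization of a blended weight Hellinger distance
    between empirical proportions d and model probabilities f.
    alpha = 0.5 gives a multiple of the ordinary squared Hellinger
    distance; alpha = 0 recovers (half) the Pearson chi-square weighting."""
    denom = alpha * np.sqrt(d) + (1.0 - alpha) * np.sqrt(f)
    mask = denom > 0  # skip cells where both d and f vanish
    return np.sum((d[mask] - f[mask]) ** 2 / (2.0 * denom[mask] ** 2))

def min_bwhd_poisson(counts, alpha=0.5, max_support=60):
    """Minimum-BWHD estimate of a Poisson mean from observed counts
    (support truncated for the numerical sum)."""
    xs = np.arange(max(counts.max() + 20, max_support))
    d = np.bincount(counts, minlength=xs.size)[: xs.size] / counts.size
    objective = lambda lam: bwhd(alpha, d, poisson.pmf(xs, lam))
    return minimize_scalar(objective, bounds=(1e-6, 50.0), method="bounded").x

rng = np.random.default_rng(1)
sample = rng.poisson(4.0, 2000)
print(min_bwhd_poisson(sample))  # close to the true mean 4
```

With a clean sample the minimizer tracks the MLE closely, which is the first-order efficiency the abstract refers to; the robustness shows up when some counts are grossly contaminated.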

2.
It is well known that financial data frequently contain outlying observations. Almost all methods and techniques used to estimate GARCH models are likelihood-based and thus generally non-robust against outliers. The minimum distance method, an important tool for statistical inference and a competitive route to robustness, has surprisingly not been well explored for GARCH models. In this paper, we propose a minimum Hellinger distance estimator (MHDE) and a minimum profile Hellinger distance estimator (MPHDE), depending on whether or not the innovation distribution is specified, for estimating the parameters of GARCH models. The construction and investigation of the two estimators are quite involved owing to the non-i.i.d. nature of the data. We prove that the MHDE is a consistent estimator and derive an explicit expression for its bias. For both proposed estimators, we demonstrate finite-sample performance through simulation studies and compare them with well-established methods, including the MLE, Gaussian quasi-MLE, non-Gaussian quasi-MLE, and least absolute deviation estimators. Our numerical results show that the MHDE and MPHDE perform much better than the MLE-based methods when the data are contaminated, while remaining very competitive when the data are clean, which attests to both the robustness and the efficiency of the two proposed MHD-type estimators.

3.
Bayesian analysis often requires the researcher to employ Markov chain Monte Carlo (MCMC) techniques to draw samples from a posterior distribution, which in turn are used to make inferences. Several approaches have been developed to assess convergence of the chain as well as the sensitivity of the resulting inferences. This work develops a Hellinger distance approach to MCMC diagnostics. An approximation to the Hellinger distance between two distributions f and g based on sampling is introduced, and its accuracy is studied via simulation. A criterion based on this Hellinger distance is proposed for determining chain convergence, along with a criterion for sensitivity studies. These criteria are illustrated using a dataset concerning Anguilla australis, an eel native to New Zealand.
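One simple way to approximate a Hellinger distance from two batches of samples (e.g., two segments of an MCMC chain) is to form kernel density estimates and integrate numerically. This is a minimal sketch of that idea, not necessarily the approximation used in the paper:

```python
import numpy as np
from scipy.stats import gaussian_kde

def hellinger_distance(x, y, grid_size=512):
    """Approximate Hellinger distance between the densities underlying two
    samples, via kernel density estimates evaluated on a common grid.
    H(f, g) = sqrt(1 - int sqrt(f * g)) lies in [0, 1]."""
    f_hat, g_hat = gaussian_kde(x), gaussian_kde(y)
    lo, hi = min(x.min(), y.min()), max(x.max(), y.max())
    t, dt = np.linspace(lo, hi, grid_size, retstep=True)
    # Bhattacharyya coefficient by a simple Riemann sum
    bc = np.sum(np.sqrt(f_hat(t) * g_hat(t))) * dt
    return np.sqrt(max(0.0, 1.0 - bc))

rng = np.random.default_rng(0)
a = rng.normal(0.0, 1.0, 5000)
b = rng.normal(0.0, 1.0, 5000)  # same distribution: distance near 0
c = rng.normal(3.0, 1.0, 5000)  # well-separated: distance near 1
print(hellinger_distance(a, b))
print(hellinger_distance(a, c))
```

As a convergence diagnostic, one would compare successive chain segments and declare convergence once the distance falls below a chosen threshold; the bounded [0, 1] range is what makes a single threshold workable.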

4.
5.
We show that for a class of penalty functions, finding the global optimizer in penalized least-squares estimation is equivalent to the ‘exact cover by 3-sets’ problem, which belongs to a class of NP-hard problems. The NP-hardness result is then extended to penalized least absolute deviations regression and a special class of penalized support vector machines. We discuss its implications in statistics. To the best of our knowledge, this is the first formal documentation of the complexity of this type of problem.

6.
Jingjing Wu, Statistics, 2015, 49(4): 711–740
The successful application of the Hellinger distance approach to fully parametric models is well known. The corresponding optimal estimators, known as minimum Hellinger distance (MHD) estimators, are efficient and have excellent robustness properties [Beran R. Minimum Hellinger distance estimators for parametric models. Ann Statist. 1977;5:445–463]. This combination of efficiency and robustness makes MHD estimators appealing in practice. However, their application to semiparametric statistical models, which have a nuisance parameter (typically of infinite dimension), has not been fully studied. In this paper, we investigate a methodology to extend the MHD approach to general semiparametric models. We introduce the profile Hellinger distance and use it to construct a minimum profile Hellinger distance estimator of the finite-dimensional parameter of interest. This approach is analogous, in some sense, to the profile likelihood approach. We investigate the asymptotic properties of the proposed estimator, such as asymptotic normality, efficiency, and adaptivity, as well as its robustness properties. We examine its small-sample properties via a Monte Carlo study.

7.
In this paper, we study minimum Hellinger distance estimators (MHDEs) for multivariate distributions from the Johnson system. We prove some properties of these estimators, such as consistency and asymptotic normality, and show that they represent a robust alternative to other existing estimators.

8.
The investigation of multi-parameter likelihood functions is simplified if the log likelihood is quadratic near the maximum, as then normal approximations to the likelihood can be accurately used to obtain quantities such as likelihood regions. This paper proposes that data-based transformations of the parameters can be employed to make the log likelihood more quadratic, and illustrates the method with one of the simplest bivariate likelihoods, the normal two-parameter likelihood.
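A toy numerical illustration of this idea (my own example, not the paper's): for a normal sample, the profile log likelihood of σ is noticeably asymmetric about its maximum, while reparameterizing to log σ makes it much closer to quadratic, so a fixed step either side of the maximum produces nearly equal drops:

```python
import numpy as np

rng = np.random.default_rng(2)
x = rng.normal(10.0, 2.0, 30)
n = x.size
s2 = np.mean((x - x.mean()) ** 2)  # MLE of the variance
sig_hat = np.sqrt(s2)              # MLE of sigma

def prof_ll(sig):
    """Profile log likelihood of sigma (mu profiled out at x-bar),
    additive constants dropped."""
    return -n * np.log(sig) - n * s2 / (2.0 * sig ** 2)

# A quadratic log likelihood drops symmetrically a fixed step either side
# of the maximum; compare the asymmetry on the sigma scale vs. log-sigma.
h = 0.2
asym_sigma = abs((prof_ll(sig_hat) - prof_ll(sig_hat * (1 + h)))
                 - (prof_ll(sig_hat) - prof_ll(sig_hat * (1 - h))))
asym_log = abs((prof_ll(sig_hat) - prof_ll(sig_hat * np.exp(h)))
               - (prof_ll(sig_hat) - prof_ll(sig_hat * np.exp(-h))))
print(asym_sigma, asym_log)  # the log-sigma scale is more nearly quadratic
```

The same comparison on the log-σ scale drops by roughly a third of the original asymmetry here, which is why normal approximations to likelihood regions are typically more accurate after such a transformation.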

9.
10.
In the present paper, minimum Hellinger distance estimates for the parameters of a bilinear time series model are presented. The probabilistic properties of the model, such as stationarity, existence of moments of the stationary distribution, and the strong mixing property, are well known (see for instance [J. Liu, A note on causality and invertibility of a general bilinear time series model, Adv. Appl. Probab. 22 (1990) 247–250; J. Liu, P.J. Brockwell, On the general bilinear time series model, J. Appl. Probab. 25 (1988) 553–564; D.T. Pham, The mixing property of bilinear and generalised random coefficients autoregressive models, Stoch. Process Appl. 23 (1986) 291–300]). We establish, under some mild conditions, the consistency and asymptotic normality of the minimum Hellinger distance estimates of the parameters of the model.

11.
12.
We consider the problem of detecting a ‘bump’ in the intensity of a Poisson process or in a density. We analyze two types of likelihood ratio-based statistics, which allow for exact finite sample inference and asymptotically optimal detection: the maximum of the penalized square root of log likelihood ratios (‘penalized scan’) evaluated over a certain sparse set of intervals, and a certain average of log likelihood ratios (‘condensed average likelihood ratio’). We show that penalizing the square root of the log likelihood ratio, rather than the log likelihood ratio itself, leads to a simple penalty term that yields optimal power. The penalty derived in this way may prove useful for other problems that involve a Brownian bridge in the limit. The second key tool is an approximating set of intervals that is rich enough to allow for optimal detection, but also sparse enough that the validity of the penalization scheme can be justified simply via the union bound. This results in a considerable simplification of the theoretical treatment compared with the usual approach for this type of penalization technique, which requires establishing an exponential inequality for the variation of the test statistic. Another advantage of using the sparse approximating set is that it allows fast computation in nearly linear time. We present a simulation study that illustrates the superior performance of the penalized scan and of the condensed average likelihood ratio compared with the standard scan statistic.

13.
Efficiency and robustness are two fundamental concepts in parametric estimation problems. It was long thought that there was an inherent contradiction between the aims of achieving robustness and efficiency; that is, a robust estimator could not be efficient and vice versa. It is now known that the minimum Hellinger distance approach introduced by Beran [R. Beran, Annals of Statistics 1977;5:445–463] is one way of reconciling these conflicting concepts. For parametric models, it has been shown that minimum Hellinger distance estimators achieve efficiency at the model density and simultaneously have excellent robustness properties. In this article, we examine the application of this approach in two semiparametric models. In particular, we consider a two-component mixture model and a two-sample semiparametric model. In each case, we investigate minimum Hellinger distance estimators of finite-dimensional Euclidean parameters of particular interest and study their basic asymptotic properties. Small sample properties of the proposed estimators are examined using a Monte Carlo study. The results can be extended to semiparametric models of general form as well. The Canadian Journal of Statistics 37: 514–533; 2009 © 2009 Statistical Society of Canada

14.
It is important that the proportion of true null hypotheses be estimated accurately in a multiple hypothesis testing context. Current estimation methods, however, are not suitable for high-dimensional data such as microarray data. First, they do not account for the (strong) dependence between hypotheses (or genes), resulting in inaccurate estimation. Second, the unknown distribution of the false null hypotheses cannot be estimated properly by these methods. Third, the estimation is strongly affected by outliers. In this paper, we derive an optimal procedure for estimating the proportion of true null hypotheses under (strong) dependence based on the Dirichlet process prior. In addition, by using the minimum Hellinger distance, the estimator is robust to model misspecification as well as to outliers while maintaining efficiency. The results are confirmed by a simulation study, and the newly developed methodology is illustrated on a real microarray dataset.

15.
It is shown that linear transformations of the logarithm are the only functions of the likelihood whose expected values discriminate between correct and incorrect likelihoods by a simple ordering property, assuming the correct probability density function is continuous. An extension of this result is also given for the predictive densities considered by Akaike.

16.
The purpose of this article is to validate the approximate algebraic propagation algorithms that accommodate non-Gaussian dynamic processes. These algorithms were developed to carry out Bayesian analysis based on conjugate forms and were presented with detailed examples for response distributions such as the Poisson and lognormal. The validity of the approximation algorithms is checked by introducing a metric (the Hellinger divergence measure) over the distribution of the states (parameters) and using it to judge the approximation. Theoretical bounds on the efficacy of such a procedure are discussed.

17.
This article introduces a novel nonparametric penalized likelihood hazard estimation method for the case where the censoring time is dependent on the failure time for each subject under observation. More specifically, we model this dependence using a copula, and the method of maximum penalized likelihood (MPL) is adopted to estimate the hazard function. We do not consider covariates in this article. The nonnegatively constrained MPL hazard estimate is obtained using a multiplicative iterative algorithm. Consistency results and the asymptotic properties of the proposed hazard estimator are derived. Simulation studies show that our MPL estimator under dependent censoring with an assumed copula model provides better accuracy than the MPL estimator under independent censoring, provided the sign of the dependence is correctly specified in the copula function. The proposed method is applied to a real dataset, with a sensitivity analysis performed over various values of the correlation between failure and censoring times.

18.
Various models have previously been proposed for data comprising m repeated measurements on each of N subjects. Log likelihood ratio tests may be used to help choose between candidate models, but these tests are based on distributions which in theory apply only asymptotically. With small N, the log likelihood ratio approximation is unreliable, tending to reject the simpler of two models more often than it should. This is shown by reference to three datasets and analogous simulated data. For two of the three datasets, subjects fall into two groups. Log likelihood ratio tests confirm that for each of these two datasets the group means over time differ; the tests suggest that the group covariance structures also differ.

19.
A robust estimator introduced by Beran (1977a, 1977b), which is based on the minimum Hellinger distance between a projection model density and a nonparametric sample density, is studied empirically. An extensive simulation provides an estimate of the small sample distribution and supplies empirical evidence of the estimator's performance for a normal location-scale model. While the performance of the minimum Hellinger distance estimator is competitive with the maximum likelihood estimator at the true model, its robustness to deviations from normality is shown to be competitive in this setting with that of the M-estimator and the Cramér-von Mises minimum distance estimator. Beran also introduced a goodness-of-fit statistic H², based on the minimized Hellinger distance between a member of a parametric family of densities and a nonparametric density estimate. We investigate the statistic H (the square root of H²) as a test for normality when both location and scale are unspecified. Empirically derived critical values are given which do not require extensive tables. The power of the statistic H compares favorably with four other widely used tests for normality.
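A minimal sketch of the construction described above for the normal location-scale model, assuming a Gaussian kernel density estimate and a grid-based squared Hellinger distance. The grid, optimizer, and starting values are my own choices, not Beran's:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import gaussian_kde, norm

def mhd_normal(x, grid_size=400):
    """Minimum Hellinger distance fit of a normal location-scale model:
    minimize the squared Hellinger distance between N(mu, sigma) and a
    kernel density estimate of the sample."""
    kde = gaussian_kde(x)
    t = np.linspace(x.min() - 3 * x.std(), x.max() + 3 * x.std(), grid_size)
    dt = t[1] - t[0]
    root_kde = np.sqrt(kde(t))

    def hd2(theta):
        mu, log_sig = theta  # optimize log sigma to keep sigma positive
        root_model = np.sqrt(norm.pdf(t, mu, np.exp(log_sig)))
        return np.sum((root_model - root_kde) ** 2) * dt

    res = minimize(hd2, x0=[np.median(x), np.log(x.std())],
                   method="Nelder-Mead")
    mu, log_sig = res.x
    return mu, np.exp(log_sig)

rng = np.random.default_rng(3)
clean = rng.normal(5.0, 1.0, 500)
dirty = np.concatenate([clean, np.full(25, 30.0)])  # ~5% gross outliers
print(mhd_normal(clean))  # close to (5, 1)
print(mhd_normal(dirty))  # mu stays near 5; sigma far less inflated than
                          # the contaminated sample sd
```

Because the squared-root-density residuals bound each cell's contribution, the distant outliers contribute little to the objective; this is the robustness that the abstract reports as competitive with M-estimation.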

20.
It has recently been observed that, given the mean-variance relation, one can improve on the accuracy of the quasi-likelihood estimator with an adaptive estimator based on the estimation of higher moments. The estimation of such moments is usually unstable, however, and consequently the improvement becomes evident only in large samples. The author proposes a nonparametric estimating equation that does not depend on the estimation of such moments, but instead on the penalized minimization of the asymptotic variance. His method provides a substantial improvement over the quasi-likelihood estimator and the adaptive estimators for a wide range of sample sizes.


Copyright © Beijing Qinyun Technology Development Co., Ltd. 京ICP备09084417号