Similar Literature
20 similar documents found (search time: 890 ms)
1.
The aim of this paper is to investigate the robustness properties of likelihood inference with respect to rounding effects. Attention is focused on exponential families and on inference about a scalar parameter of interest, also in the presence of nuisance parameters. A summary value of the influence function of a given statistic, the local-shift sensitivity, is considered. It accounts for small fluctuations in the observations. The main result is that the local-shift sensitivity is bounded for the usual likelihood-based statistics, i.e. the directed likelihood, the Wald and score statistics. It is also bounded for the modified directed likelihood, which is a higher-order adjustment of the directed likelihood. The practical implication is that likelihood inference is expected to be robust with respect to rounding effects. Theoretical analysis is supplemented and confirmed by a number of Monte Carlo studies, performed to assess the coverage probabilities of confidence intervals based on likelihood procedures when data are rounded. In addition, simulations indicate that the directed likelihood is less sensitive to rounding effects than the Wald and score statistics. This provides another criterion for choosing among first-order equivalent likelihood procedures. The modified directed likelihood shows the same robustness as the directed likelihood, so that its gain in inferential accuracy does not come at the price of an increase in instability with respect to rounding.
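A rough sketch of the kind of Monte Carlo check described above (not the authors' code; the exponential-mean model, rounding grid and sample size are illustrative assumptions) compares the coverage of a nominal 95% Wald interval on exact and rounded data:

    import numpy as np

    rng = np.random.default_rng(0)
    n, mu, delta, B = 30, 2.0, 0.5, 5000   # sample size, true mean, rounding grid, replications
    hits = {"exact": 0, "rounded": 0}
    for _ in range(B):
        y = rng.exponential(mu, n)
        for label, data in (("exact", y), ("rounded", delta * np.round(y / delta))):
            mhat = data.mean()              # MLE of the exponential mean
            se = mhat / np.sqrt(n)          # Wald standard error
            hits[label] += abs(mhat - mu) <= 1.96 * se
    for label, h in hits.items():
        print(label, h / B)                 # empirical coverage, rounded vs. exact data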

2.
Penalized Maximum Likelihood Estimator for Normal Mixtures
The estimation of the parameters of a mixture of Gaussian densities is considered, within the framework of maximum likelihood. Due to the unboundedness of the likelihood function, the maximum likelihood estimator fails to exist. We adopt a solution to likelihood function degeneracy which consists in penalizing the likelihood function. The resulting penalized likelihood function is then bounded over the parameter space, and the existence of the penalized maximum likelihood estimator is guaranteed. As an original contribution, we provide asymptotic properties, and in particular a consistency proof, for the penalized maximum likelihood estimator. Numerical examples are provided for the finite-data case, showing the performance of the penalized estimator compared with the standard one.
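A minimal sketch of a penalized likelihood of this type, assuming a two-component Gaussian mixture and an inverse-variance penalty (one common choice for preventing degeneracy; the paper's exact penalty may differ):

    import numpy as np
    from scipy.stats import norm
    from scipy.optimize import minimize

    def neg_pen_loglik(params, y, s2=1.0):
        # Two-component normal mixture; the penalty diverges as a variance
        # tends to 0, so the penalized likelihood stays bounded.
        p = 1 / (1 + np.exp(-params[0]))                 # mixing weight in (0, 1)
        m1, m2 = params[1], params[2]
        v1, v2 = np.exp(params[3]), np.exp(params[4])    # variances > 0
        f = (p * norm.pdf(y, m1, np.sqrt(v1))
             + (1 - p) * norm.pdf(y, m2, np.sqrt(v2)))
        penalty = (s2 / v1 + np.log(v1)) + (s2 / v2 + np.log(v2))
        return -(np.sum(np.log(f)) - penalty)

    rng = np.random.default_rng(0)
    y = np.concatenate([rng.normal(0, 1, 100), rng.normal(3, 1, 100)])
    fit = minimize(neg_pen_loglik, x0=[0.0, -0.5, 3.5, 0.0, 0.0], args=(y,))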

3.
The empirical likelihood method is proposed to construct confidence regions for the difference between the coefficients of a two-sample linear regression model. Unlike in existing empirical likelihood procedures for one-sample linear regression models, the empirical likelihood ratio function here is not concave, so the usual maximum empirical likelihood estimator cannot be obtained directly. To overcome this problem, we propose to incorporate a natural and well-motivated restriction into the likelihood function, yielding a restricted empirical likelihood ratio statistic (RELR). It is shown that RELR has an asymptotic chi-squared distribution. Furthermore, to improve the coverage accuracy of the confidence regions, a Bartlett correction is applied. The effectiveness of the proposed approach is demonstrated by a simulation study.
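For orientation, the one-sample empirical likelihood ratio for a regression coefficient vector (a sketch; the paper's two-sample RELR adds a further restriction to this construction) is

\[
R(\beta)=\sup\Big\{\prod_{i=1}^{n} n w_i \,:\, w_i\ge 0,\ \sum_{i=1}^{n} w_i=1,\ \sum_{i=1}^{n} w_i x_i (y_i-x_i^{\top}\beta)=0\Big\},
\]

and Wilks' theorem gives \(-2\log R(\beta_0)\) an asymptotic \(\chi^2_p\) distribution at the true value \(\beta_0\).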

4.
Approximate Bayesian computation (ABC) is a popular technique for analysing data for complex models where the likelihood function is intractable. It involves using simulation from the model to approximate the likelihood, with this approximate likelihood then being used to construct an approximate posterior. In this paper, we consider methods that estimate the parameters by maximizing the approximate likelihood used in ABC. We give a theoretical analysis of the asymptotic properties of the resulting estimator. In particular, we derive results analogous to those of consistency and asymptotic normality for standard maximum likelihood estimation. We also discuss how sequential Monte Carlo methods provide a natural method for implementing our likelihood-based ABC procedures.
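A minimal sketch of maximizing an ABC-style approximate likelihood over a grid, assuming a toy normal-mean model with a single summary statistic (model, summary, kernel bandwidth and grid are illustrative choices, not the paper's):

    import numpy as np

    def abc_loglik(theta, y_obs, rng, m=2000, h=0.2):
        # Kernel estimate of the likelihood of the observed summary:
        # simulate m datasets at theta and smooth with a Gaussian kernel.
        s_obs = y_obs.mean()
        s_sim = rng.normal(theta, 1.0, size=(m, y_obs.size)).mean(axis=1)
        k = np.exp(-0.5 * ((s_sim - s_obs) / h) ** 2)
        return np.log(k.mean() + 1e-300)

    rng = np.random.default_rng(1)
    y_obs = rng.normal(0.7, 1.0, 50)
    grid = np.linspace(-1.0, 2.0, 61)
    theta_hat = grid[np.argmax([abc_loglik(t, y_obs, rng) for t in grid])]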

5.
We propose a penalized empirical likelihood method, via a bridge estimator, for parameter estimation and variable selection in Cox's proportional hazards model. Under reasonable conditions, we show that the penalized empirical likelihood in Cox's proportional hazards model has the oracle property. A penalized empirical likelihood ratio for the vector of regression coefficients is defined, and its limiting distribution is shown to be chi-squared. The advantage of penalized empirical likelihood as a nonparametric likelihood approach is illustrated in testing hypotheses and constructing confidence sets. The method is further examined through extensive simulation studies and a real example.
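Schematically, writing \(\log R(\beta)\) for the empirical log-likelihood ratio built from the Cox partial-score estimating equations, the penalized criterion has the generic bridge form (a sketch; the paper's exact weighting and tuning sequence may differ):

\[
\ell_P(\beta)=\log R(\beta)-n\lambda_n\sum_{j=1}^{p}|\beta_j|^{\gamma},\qquad 0<\gamma<1,
\]

where the concave power penalty with \(\gamma<1\) is what drives the variable selection behind the oracle property.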

6.
A maximum likelihood methodology for the parameters of models with an intractable likelihood is introduced. We produce a likelihood-free version of the stochastic approximation expectation-maximization (SAEM) algorithm to maximize the likelihood function of model parameters. While SAEM is best suited for models having a tractable “complete likelihood” function, its application to moderately complex models is a difficult or even impossible task. We show how to construct a likelihood-free version of SAEM by using the “synthetic likelihood” paradigm. Our method is completely plug-and-play, requires almost no tuning and can be applied to both static and dynamic models.
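A minimal sketch of the Gaussian synthetic likelihood (Wood, 2010) that such a likelihood-free step can call; the toy simulator and its summaries are assumptions for illustration, not the paper's models:

    import numpy as np

    def synthetic_loglik(theta, s_obs, simulate, rng, m=500):
        # Simulate m summary vectors at theta, fit a multivariate normal,
        # and evaluate the observed summaries under it.
        S = np.array([simulate(theta, rng) for _ in range(m)])
        mu = S.mean(axis=0)
        cov = np.cov(S, rowvar=False) + 1e-8 * np.eye(S.shape[1])
        d = s_obs - mu
        _, logdet = np.linalg.slogdet(cov)
        return -0.5 * (logdet + d @ np.linalg.solve(cov, d))

    def simulate(theta, rng, n=100):
        # Toy model: summaries are the mean and log-variance of n normal draws.
        y = rng.normal(theta, 1.0, n)
        return np.array([y.mean(), np.log(y.var())])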

7.
Under a generalized linear model for a binary variable, an approximate bias of the maximum likelihood estimator of the coefficient, which is a special case of the linear parameter in Cordeiro and McCullagh (1991), is derived without calculating the third-order derivative of the log-likelihood function. Using this approximate bias, a bias-corrected maximum likelihood estimator is defined. Through a simulation study, we show that the bias-corrected maximum likelihood estimator and its variance estimator perform better than the maximum likelihood estimator and its variance estimator.
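As a generic stand-in for the analytic correction (a parametric-bootstrap bias estimate rather than the closed-form approximation the paper derives), a sketch for logistic regression:

    import numpy as np
    from scipy.optimize import minimize

    def logit_mle(X, y):
        # Maximum likelihood for logistic regression by direct optimization.
        nll = lambda b: np.sum(np.logaddexp(0, X @ b) - y * (X @ b))
        return minimize(nll, np.zeros(X.shape[1]), method="BFGS").x

    rng = np.random.default_rng(2)
    n, beta = 50, np.array([0.5, 1.0])
    X = np.column_stack([np.ones(n), rng.normal(size=n)])
    y = rng.binomial(1, 1 / (1 + np.exp(-X @ beta)))
    b_hat = logit_mle(X, y)
    p_hat = 1 / (1 + np.exp(-X @ b_hat))
    boot = np.array([logit_mle(X, rng.binomial(1, p_hat)) for _ in range(200)])
    b_bc = b_hat - (boot.mean(axis=0) - b_hat)   # bias-corrected estimate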

8.
Network meta-analysis can be implemented by using arm-based or contrast-based models. Here we focus on arm-based models and fit them using generalized linear mixed model procedures. Full maximum likelihood (ML) estimation leads to biased trial-by-treatment interaction variance estimates for heterogeneity. Thus, our objective is to investigate alternative approaches to variance estimation that reduce bias compared with full ML. Specifically, we use penalized quasi-likelihood/pseudo-likelihood and hierarchical (h) likelihood approaches. In addition, we consider a novel model modification that yields estimators akin to the residual maximum likelihood estimator for linear mixed models. The proposed methods are compared by simulation, and two real datasets are used for illustration. Simulations show that penalized quasi-likelihood/pseudo-likelihood and h-likelihood reduce bias and yield satisfactory coverage rates. Sum-to-zero restrictions and baseline contrasts for random trial-by-treatment interaction effects, as well as a residual-ML-like adjustment, also reduce bias compared with an unconstrained model when ML is used, but coverage rates are not quite as good. Penalized quasi-likelihood/pseudo-likelihood and h-likelihood are therefore recommended.

9.
In this article, a naive empirical likelihood ratio is constructed for a nonparametric regression model with clustered data, by combining the empirical likelihood method with local polynomial fitting. The maximum empirical likelihood estimates for the regression functions and their derivatives are obtained. The asymptotic distributions of the proposed ratio and estimators are established. A bias-corrected empirical likelihood approach to inference for the parameters of interest is developed, and the residual-adjusted empirical log-likelihood ratio is shown to be asymptotically chi-squared. These results can be used to construct a class of approximate pointwise confidence intervals and simultaneous bands for the regression functions and their derivatives. Owing to the bias correction of the empirical likelihood ratio, not only is the accuracy of the resulting confidence regions improved, but a data-driven algorithm can also be used to select an optimal bandwidth for estimating the regression functions and their derivatives. A simulation study is conducted to compare the empirical likelihood method with the normal-approximation-based method in terms of coverage accuracies and average widths of the confidence intervals/bands. An application of the method is illustrated using a real data set.

10.
Effective implementation of likelihood inference in models for high-dimensional data often requires a simplified treatment of nuisance parameters, which must then be replaced by convenient estimates. In addition, the likelihood function may have been simplified by means of a partial specification of the model, as is the case when composite likelihood is used. In such circumstances, tests and confidence regions for the parameter of interest may be constructed using Wald-type and score-type statistics, defined so as to account for nuisance parameter estimation or partial specification of the likelihood. In this paper a general analytical expression for the required asymptotic covariance matrices is derived, and suggestions for obtaining Monte Carlo approximations are presented. The same matrices are involved in a rescaling adjustment that we propose for the log-likelihood-ratio-type statistic. This adjustment restores the usual chi-squared asymptotic distribution, which is generally invalid after the simplifications considered. The practical implication is that, for a wide variety of likelihoods and nuisance parameter estimates, confidence regions for the parameters of interest are readily computable from the rescaled log-likelihood-ratio-type statistic as well as from the Wald-type and score-type statistics. Two examples, a measurement error model with full likelihood and a spatial correlation model with pairwise likelihood, illustrate and compare the procedures. Wald-type and score-type statistics may give rise to confidence regions with unsatisfactory shape in small and moderate samples. In addition to having satisfactory shape, regions based on the rescaled log-likelihood-ratio-type statistic show empirical coverage in reasonable agreement with nominal confidence levels.
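To fix notation (a sketch; the paper's rescaling is more refined than this): with sensitivity \(H=-\mathrm{E}[\nabla^2\ell]\) and variability \(J=\mathrm{var}[\nabla\ell]\), the Godambe information is \(G=HJ^{-1}H\), and the unadjusted statistic \(W=2\{\ell(\hat\theta)-\ell(\theta_0)\}\) converges to \(\sum_i\lambda_i Z_i^2\), where the \(\lambda_i\) are the eigenvalues of \(H^{-1}J\). A simple first-moment-matching rescaling for a \(p\)-dimensional parameter is

\[
W^{*}=\frac{p}{\operatorname{tr}(H^{-1}J)}\,W\;\approx\;\chi^2_p .
\]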

11.
The strength of statistical evidence is measured by the likelihood ratio. Two key performance properties of this measure are the probability of observing strong misleading evidence and the probability of observing weak evidence. For the likelihood function associated with a parametric statistical model, these probabilities have a simple large-sample structure when the model is correct. Here we examine how that structure changes when the model fails. This leads to criteria for determining whether a given likelihood function is robust (continuing to perform satisfactorily when the model fails), and to a simple technique for adjusting both likelihoods and profile likelihoods to make them robust. We prove that the expected information in the robust adjusted likelihood cannot exceed the expected information in the likelihood function from a true model. We note that the robust adjusted likelihood is asymptotically fully efficient when the working model is correct, and we show that in some important examples this efficiency is retained even when the working model fails. In such cases the Bayes posterior probability distribution based on the adjusted likelihood is robust, remaining correct asymptotically even when the model for the observable random variable does not include the true distribution. Finally, we note a link to standard frequentist methodology: in large samples the adjusted likelihood functions provide robust likelihood-based confidence intervals.
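In the scalar-parameter case, such an adjustment takes a simple form (a sketch under standard sandwich notation; see the paper for the general construction): with \(\hat H\) the average negative score derivative and \(\hat J\) the average squared score, both evaluated at \(\hat\theta\),

\[
L_A(\theta)=L(\theta)^{\hat H/\hat J},
\]

so that the curvature of \(\log L_A\) at \(\hat\theta\) matches the inverse of the sandwich variance even when the working model fails.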

12.
The profile likelihood function is often criticized for giving strange or unintuitive results. In the cases discussed here, these are due to the use of density functions that have singularities, which are naturally inherited by the profile likelihood function. It is therefore important to remember that likelihood functions are proportional to probability functions, and so cannot have singularities. Once this issue is addressed, the profile likelihood poses no problems of this sort. This is of particular importance since the profile likelihood is a commonly used method for dealing with the separate estimation of parameters.
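A standard illustration of the point: with a single observation \(y\) from \(N(\mu,\sigma^2)\), the density-based likelihood \(\sigma^{-1}\phi\{(y-\mu)/\sigma\}\) diverges as \(\sigma\to 0\) at \(\mu=y\), whereas the likelihood of the data recorded to within \(\pm\delta\),

\[
L(\mu,\sigma)=\Phi\Big(\frac{y+\delta-\mu}{\sigma}\Big)-\Phi\Big(\frac{y-\delta-\mu}{\sigma}\Big)\le 1,
\]

is a probability and hence bounded.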

13.
A vector-valued estimating function, such as the quasi-score, is typically not the gradient of any objective function. Consequently, an analogue of the likelihood function cannot be unambiguously defined by integrating the estimating function. This paper studies an analogue of likelihood inference in the framework of optimal estimating functions. We propose a quadratic artificial likelihood function for an optimal estimating function. The objective function is uniquely identified as the potential function from the vector field decomposition, by imposing a natural restriction on the divergence-free part. The artificial likelihood function is shown to resemble a genuine likelihood function in a number of respects. A bootstrap version of the artificial likelihood function is also studied, which may be used for selecting a root as an estimate from among multiple roots to an estimating equation.

14.
Pairwise likelihood functions are convenient surrogates for the ordinary likelihood, useful when the latter is too difficult or even impractical to compute. One drawback of pairwise likelihood inference is that, for a multidimensional parameter of interest, the pairwise likelihood analogue of the likelihood ratio statistic does not have the standard chi-squared asymptotic distribution. Invoking the theory of unbiased estimating functions, this paper proposes and discusses a computationally and theoretically attractive approach based on the derivation of empirical likelihood functions from the pairwise scores. This approach produces alternatives to the pairwise likelihood ratio statistic which allow reference to the usual asymptotic chi-squared distribution, and which are useful when the elements of the Godambe information are troublesome to evaluate or in the presence of large data sets with relatively small sample sizes. Two Monte Carlo studies are performed in order to assess the finite-sample performance of the proposed empirical pairwise likelihoods.
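A minimal sketch of a pairwise log-likelihood, assuming an equicorrelated normal toy model (the paper's applications differ); the empirical likelihoods above are then built from the scores of such pairwise terms:

    import numpy as np
    from scipy.stats import multivariate_normal
    from itertools import combinations

    def pairwise_loglik(params, y):
        # Sum bivariate normal log-densities over all pairs of components.
        mu, rho = params                 # common mean, common correlation (|rho| < 1)
        cov = np.array([[1.0, rho], [rho, 1.0]])
        ll = 0.0
        for i, j in combinations(range(y.shape[1]), 2):
            ll += multivariate_normal.logpdf(y[:, [i, j]], mean=[mu, mu], cov=cov).sum()
        return ll

    rng = np.random.default_rng(4)
    y = rng.multivariate_normal(np.zeros(5), 0.5 * np.ones((5, 5)) + 0.5 * np.eye(5), size=200)
    print(pairwise_loglik((0.0, 0.5), y))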

15.
16.
In this article, empirical likelihood inference for semiparametric varying-coefficient partially linear models with longitudinal data is investigated. We propose a groupwise empirical likelihood procedure to handle the inter-series dependence of the longitudinal data. By using residual adjustment, an empirical likelihood ratio function for the nonparametric component is constructed, and a nonparametric version of the Wilks phenomenon is proved. Compared with methods based on normal approximations, the empirical likelihood does not require consistent estimators of the asymptotic variance and bias. A simulation study is undertaken to assess the finite-sample performance of the proposed confidence regions.

17.
Consider the Lehmann model with time-dependent covariates, which is different from Cox's model. We find that (1) the parameter space for β under the Lehmann model is restricted, and the maximum point of the parametric likelihood for β may lie outside the parameter space; and (2) for some particular time-dependent covariates, the semiparametric maximum likelihood estimator (SMLE) under the standard generalized likelihood is inconsistent, and we propose a modified generalized likelihood which leads to a consistent SMLE.

18.
It is common practice to compare the fit of non-nested models using the Akaike (AIC) or Bayesian (BIC) information criteria. The basis of these criteria is the log-likelihood evaluated at the maximum likelihood estimates of the unknown parameters. For the general linear model (and the linear mixed model, which is a special case), estimation is usually carried out using residual or restricted maximum likelihood (REML). However, for models with different fixed effects, the residual likelihoods are not comparable, and hence information criteria based on the residual likelihood cannot be used. For model selection, it is often suggested that the models be refitted using maximum likelihood to enable the criteria to be used. The first aim of this paper is to highlight that both the AIC and BIC can be used for the general linear model by using the full log-likelihood evaluated at the REML estimates. The second aim is to provide a derivation of the criteria under REML estimation. This aim is achieved by noting that the full likelihood can be decomposed into a marginal (residual) and a conditional likelihood, a decomposition that incorporates aspects of both the fixed effects and the variance parameters. Using this decomposition, the appropriate information criteria for selecting among models which differ in their fixed-effects specification can be derived. An example is presented to illustrate the results, and code is available for analyses using the ASReml-R package.
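Schematically, the decomposition underlying the derivation is

\[
\ell_{\text{full}}(\beta,\sigma)=\ell_{R}(\sigma)+\ell_{c}(\beta\mid\sigma),
\]

so that, with \(p\) fixed-effect and \(q\) variance parameters, an AIC of the form \(-2\,\ell_{\text{full}}(\hat\beta_{\text{REML}},\hat\sigma_{\text{REML}})+2(p+q)\) can be used to compare models that differ in their fixed effects (a sketch of the idea; see the paper for the exact penalty terms).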

19.
One important type of question in statistical inference is how to interpret data as evidence. The law of likelihood provides a satisfactory answer when interpreting data as evidence for simple hypotheses, but remains silent for composite hypotheses. This article examines how the law of likelihood can be extended to composite hypotheses within the scope of the likelihood principle. From a system of axioms, we conclude that the strength of evidence for composite hypotheses should be represented by an interval between the lower and upper profile likelihoods. This article is intended to reveal the connection between profile likelihoods and the law of likelihood under the likelihood principle, rather than to argue in favor of the use of profile likelihoods in addressing general questions of statistical inference. The interpretation of the result is also discussed.

20.
Kendall and Gehan estimating functions are commonly used to estimate the regression parameter in the accelerated failure time model with censored observations in survival analysis. In this paper, we apply the jackknife empirical likelihood method to overcome the computational difficulty of interval estimation. A Wilks theorem of the jackknife empirical likelihood for U-statistic-type estimating equations is established and used to construct confidence intervals for the regression parameter. We carry out an extensive simulation study to compare the Wald-type procedure, the empirical likelihood method, and the jackknife empirical likelihood method. The proposed jackknife empirical likelihood method performs better than the existing methods. We also use a real data set to compare the proposed methods.
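A minimal sketch of the jackknife empirical likelihood recipe, assuming the Gini mean difference kernel as a toy U-statistic (the Kendall and Gehan estimating functions above are analogous but survival-specific):

    import numpy as np
    from itertools import combinations
    from scipy.optimize import brentq

    def jackknife_pseudo_values(x, kernel):
        # U-statistic and its leave-one-out versions -> jackknife pseudo-values.
        n = len(x)
        def U(idx):
            return np.mean([kernel(x[i], x[j]) for i, j in combinations(idx, 2)])
        U_n = U(range(n))
        return np.array([n * U_n - (n - 1) * U([j for j in range(n) if j != i])
                         for i in range(n)])

    def jel_log_ratio(V, theta):
        # Standard EL for a mean applied to the pseudo-values: solve for the
        # Lagrange multiplier, return -2 log R(theta), asymptotically chi2_1.
        z = V - theta
        g = lambda lam: np.mean(z / (1 + lam * z))
        lo = -1.0 / z.max() + 1e-8      # keep all weights 1 + lam * z_i positive
        hi = -1.0 / z.min() - 1e-8
        lam = brentq(g, lo, hi)
        return 2.0 * np.sum(np.log(1 + lam * z))

    x = np.random.default_rng(3).normal(size=40)
    V = jackknife_pseudo_values(x, kernel=lambda a, b: abs(a - b))
    print(jel_log_ratio(V, theta=V.mean()))   # ~0 at the point estimate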

