Similar documents
 20 similar documents found (search time: 515 ms)
1.
The non-Gaussian maximum likelihood estimator is frequently used in GARCH models with the intention of capturing heavy-tailed returns. However, unless the parametric likelihood family contains the true likelihood, the estimator is inconsistent due to density misspecification. To correct this bias, we identify an unknown scale parameter ηf that is critical for identification and consistency, and propose a three-step quasi-maximum likelihood procedure with non-Gaussian likelihood functions. This novel approach is consistent and asymptotically normal under weak moment conditions. Moreover, it achieves better efficiency than the Gaussian alternative, particularly when the innovation error has heavy tails. We also summarize and compare the values of the scale parameter and the asymptotic efficiency of estimators based on different choices of likelihood functions as the innovation tails grow heavier. Numerical studies confirm the advantages of the proposed approach.
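
The three-step non-Gaussian procedure is specific to the paper, but the Gaussian QMLE it is compared against is compact enough to sketch. A minimal illustration for GARCH(1,1) (function names, starting values and the simulated data are ours, not the authors'):

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

def simulate_garch11(n, omega, alpha, beta):
    """Simulate a GARCH(1,1) series with standard-normal innovations."""
    y = np.empty(n)
    sig2 = omega / (1 - alpha - beta)          # start at unconditional variance
    for t in range(n):
        y[t] = np.sqrt(sig2) * rng.standard_normal()
        sig2 = omega + alpha * y[t] ** 2 + beta * sig2
    return y

def gaussian_qmle_nll(params, y):
    """Negative Gaussian quasi-log-likelihood of a GARCH(1,1) model."""
    omega, alpha, beta = params
    if omega <= 0 or alpha < 0 or beta < 0 or alpha + beta >= 1:
        return np.inf                          # keep the optimiser in the stationary region
    sig2 = np.empty_like(y)
    sig2[0] = y.var()
    for t in range(1, len(y)):
        sig2[t] = omega + alpha * y[t - 1] ** 2 + beta * sig2[t - 1]
    return 0.5 * np.sum(np.log(sig2) + y ** 2 / sig2)

y = simulate_garch11(3000, omega=0.1, alpha=0.1, beta=0.8)
fit = minimize(gaussian_qmle_nll, x0=[0.05, 0.05, 0.9], args=(y,),
               method="Nelder-Mead")
omega_hat, alpha_hat, beta_hat = fit.x
```

With a non-Gaussian likelihood in place of the normal density, the paper's point is that an extra scale parameter must be estimated before this optimisation is consistent.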

2.
Various nonparametric approaches for Bayesian spectral density estimation of stationary time series have been suggested in the literature, mostly based on the Whittle likelihood approximation. A generalization of this approximation involving a nonparametric correction of a parametric likelihood has been proposed in the literature with a proof of posterior consistency for spectral density estimation in combination with the Bernstein–Dirichlet process prior for Gaussian time series. In this article, we will extend the posterior consistency result to non-Gaussian time series by employing a general consistency theorem for dependent data and misspecified models. As a special case, posterior consistency for the spectral density under the Whittle likelihood is also extended to non-Gaussian time series. Small sample properties of this approach are illustrated with several examples of non-Gaussian time series.
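
The Whittle approximation underlying these results is concrete: it replaces the exact Gaussian likelihood by a sum over the periodogram at the Fourier frequencies. A minimal numpy sketch (the AR(1) example and the function names are ours):

```python
import numpy as np

def whittle_loglik(y, spec_fun):
    """Whittle log-likelihood: -sum_j [log f(w_j) + I(w_j)/f(w_j)] over the
    positive Fourier frequencies w_j = 2*pi*j/n (0 and pi excluded)."""
    n = len(y)
    m = (n - 1) // 2
    freqs = 2 * np.pi * np.arange(1, m + 1) / n
    I = np.abs(np.fft.fft(y)[1:m + 1]) ** 2 / (2 * np.pi * n)   # periodogram
    f = spec_fun(freqs)
    return -np.sum(np.log(f) + I / f)

def ar1_spectrum(w, phi):
    """Spectral density of an AR(1) process with unit innovation variance."""
    return 1.0 / (2 * np.pi * (1 - 2 * phi * np.cos(w) + phi ** 2))

# maximise the Whittle likelihood over a grid of phi for a simulated AR(1) path
rng = np.random.default_rng(0)
n, phi_true = 4000, 0.6
y = np.empty(n)
y[0] = rng.standard_normal()
for t in range(1, n):
    y[t] = phi_true * y[t - 1] + rng.standard_normal()
grid = np.linspace(-0.9, 0.9, 181)
phi_hat = grid[np.argmax([whittle_loglik(y, lambda w: ar1_spectrum(w, p))
                          for p in grid])]
```

The nonparametric correction discussed in the abstract multiplies a parametric f by a flexible correction factor; the likelihood evaluation itself stays in this form.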

3.
Sequential regression multiple imputation has emerged as a popular approach for handling incomplete data with complex features. In this approach, imputations for each missing variable are produced from a regression model that uses the other variables as predictors, cycling through the variables. A normality assumption is frequently imposed on the error distributions of the conditional regression models for continuous variables, even though it rarely holds in real scenarios. We use a simulation study to investigate the performance of several sequential regression imputation methods when the error distribution is flat or heavy-tailed. The methods evaluated include sequential normal imputation and several extensions that adjust for non-normal error terms. The results show that all methods perform well for estimating the marginal mean and proportion, as well as the regression coefficient, when the error distribution is flat or moderately heavy-tailed. When the error distribution is strongly heavy-tailed, all methods retain good performance for the mean, and the adjusted methods perform robustly for the proportion; but all methods can perform poorly for the regression coefficient because they cannot accommodate the extreme values well. We caution against the mechanical use of sequential regression imputation without model checking and diagnostics.
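
The cyclic scheme is easy to sketch for the all-continuous, normal-error case; the non-normal adjustments studied in the paper are not included, and the toy data and names below are ours:

```python
import numpy as np

rng = np.random.default_rng(1)

def sequential_impute(X, n_iter=10):
    """Cycle through columns; refill each column's missing entries with
    predictions from a linear regression on the other columns, plus a
    normal draw scaled by the residual standard deviation."""
    X = X.copy()
    miss = np.isnan(X)
    col_means = np.nanmean(X, axis=0)
    for j in range(X.shape[1]):                 # crude initial fill
        X[miss[:, j], j] = col_means[j]
    for _ in range(n_iter):
        for j in range(X.shape[1]):
            if not miss[:, j].any():
                continue
            obs = ~miss[:, j]
            A = np.column_stack([np.ones(len(X)), np.delete(X, j, axis=1)])
            beta, *_ = np.linalg.lstsq(A[obs], X[obs, j], rcond=None)
            resid_sd = np.std(X[obs, j] - A[obs] @ beta)
            pred = A[miss[:, j]] @ beta
            X[miss[:, j], j] = pred + resid_sd * rng.standard_normal(pred.size)
    return X

# two correlated columns, about 20% of the second missing at random
n = 500
x1 = rng.standard_normal(n)
x2 = 2 * x1 + rng.standard_normal(n)
data = np.column_stack([x1, x2])
data[rng.random(n) < 0.2, 1] = np.nan
completed = sequential_impute(data)
```

Replacing the normal draw with, e.g., a draw from the empirical residual distribution is one way to adjust for non-normal errors in the spirit of the extensions evaluated here.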

4.
Nonparametric models with jump points have been considered by many researchers. However, most existing methods based on least squares or likelihood are sensitive to outliers or heavy-tailed error distributions. In this article, a local piecewise-modal method is proposed to estimate the regression function with jump points in nonparametric models, and a piecewise-modal EM algorithm is introduced to compute the proposed estimator. Under some regularity conditions, large-sample theory is established for the proposed estimators. Several simulations are presented to evaluate the performance of the proposed method, which show that the proposed estimator is more efficient than the local piecewise-polynomial regression estimator in the presence of outliers or heavy-tailed errors. Moreover, the proposed procedure is asymptotically equivalent to the local piecewise-polynomial regression estimator when the error distribution is Gaussian. The proposed method is further illustrated with sea-level pressure data.
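
The modal-EM idea is easiest to see in the global linear case rather than the local piecewise setting of the paper: the E-step downweights points with large residuals through a kernel, and the M-step is a weighted least squares fit. A toy sketch (bandwidth, data and names are our illustration, not the authors' estimator):

```python
import numpy as np

def modal_linear_regression(x, y, h=0.5, n_iter=100):
    """Modal EM for linear modal regression: the E-step computes Gaussian
    kernel weights on the residuals; the M-step solves weighted least squares."""
    A = np.column_stack([np.ones_like(x), x])
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)        # start from OLS
    for _ in range(n_iter):
        r = y - A @ beta
        w = np.exp(-0.5 * (r / h) ** 2)                 # kernel weights
        Aw = A * w[:, None]
        beta = np.linalg.solve(A.T @ Aw, Aw.T @ y)      # weighted LS step
    return beta

rng = np.random.default_rng(2)
n = 500
x = rng.uniform(-2, 2, n)
y = 1.0 + 2.0 * x + 0.3 * rng.standard_normal(n)
y[:50] += 10.0                                          # 10% gross outliers
intercept, slope = modal_linear_regression(x, y)
```

Because the kernel weights die off for large residuals, the outliers barely influence the fitted line, which is the robustness property the abstract describes.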

5.
The quasi-likelihood function proposed by Wedderburn [Quasi-likelihood functions, generalized linear models, and the Gauss–Newton method. Biometrika. 1974;61:439–447] broadened the application scope of generalized linear models (GLM) by specifying only the mean and variance functions instead of the entire distribution. However, in many situations, complete specification of the variance function in the quasi-likelihood approach may not be realistic. Following Fahrmeir's [Maximum likelihood estimation in misspecified generalized linear models. Statistics. 1990;21:487–502] treatment of misspecified GLMs, we define a quasi-likelihood nonlinear model (QLNM) with misspecified variance function by replacing the unknown variance function with a known function. In this paper, we propose some mild regularity conditions under which the existence and asymptotic normality of the maximum quasi-likelihood estimator (MQLE) are obtained in the QLNM with misspecified variance function. We suggest computing the MQLE of the unknown parameter by the Gauss–Newton iteration procedure and show that it works well in a simulation study.

6.
During recent years, analysts have been relying on approximate methods of inference to estimate multilevel models for binary or count data. In an earlier study of random-intercept models for binary outcomes we used simulated data to demonstrate that one such approximation, known as marginal quasi-likelihood, leads to a substantial attenuation bias in the estimates of both fixed and random effects whenever the random effects are non-trivial. In this paper, we fit three-level random-intercept models to actual data for two binary outcomes, to assess whether refined approximation procedures, namely penalized quasi-likelihood and second-order improvements to marginal and penalized quasi-likelihood, also underestimate the underlying parameters. The extent of the bias is assessed by two standards of comparison: exact maximum likelihood estimates, based on a Gauss–Hermite numerical quadrature procedure, and a set of Bayesian estimates, obtained from Gibbs sampling with diffuse priors. We also examine the effectiveness of a parametric bootstrap procedure for reducing the bias. The results indicate that second-order penalized quasi-likelihood estimates provide a considerable improvement over the other approximations, but all the methods of approximate inference result in a substantial underestimation of the fixed and random effects when the random effects are sizable. We also find that the parametric bootstrap method can eliminate the bias but is computationally very intensive.
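
The "exact" benchmark in this comparison, Gauss–Hermite quadrature, amounts to one-dimensional numerical integration of the random intercept out of each cluster's likelihood. A sketch for one cluster of a random-intercept logit (function names and values are illustrative):

```python
import numpy as np
from numpy.polynomial.hermite import hermgauss

def cluster_loglik(y, eta_fixed, sigma, n_nodes=20):
    """Log marginal likelihood of one cluster in a random-intercept logit.
    For U ~ N(0, sigma^2): E[g(U)] ~ (1/sqrt(pi)) * sum_k w_k g(sqrt(2)*sigma*z_k),
    with (z_k, w_k) the Gauss-Hermite nodes and weights."""
    z, w = hermgauss(n_nodes)
    u = np.sqrt(2.0) * sigma * z                        # nodes mapped to N(0, sigma^2)
    eta = np.asarray(eta_fixed)[:, None] + u[None, :]   # (n_obs, n_nodes)
    p = 1.0 / (1.0 + np.exp(-eta))
    lik_given_u = np.prod(np.where(np.asarray(y)[:, None] == 1, p, 1 - p), axis=0)
    return np.log(np.sum(w * lik_given_u) / np.sqrt(np.pi))
```

With sigma = 0 this collapses to the ordinary logit log-likelihood, a convenient sanity check; summing over clusters and maximising over the fixed effects and sigma gives the maximum likelihood fit used as the gold standard above.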

7.
For linear regression models with non-normally distributed errors, the least squares estimate (LSE) loses some efficiency compared to the maximum likelihood estimate (MLE). In this article, we propose a kernel density-based regression estimate (KDRE) that is adaptive to the unknown error distribution. The key idea is to approximate the likelihood function using a nonparametric kernel density estimate of the error density based on some initial parameter estimate. The proposed estimate is shown to be asymptotically as efficient as the oracle MLE, which assumes the error density is known. In addition, we propose an EM-type algorithm to maximize the estimated likelihood function and show that the KDRE can be viewed as an iterated weighted least squares estimate, which provides some insight into the adaptiveness of the KDRE to the unknown error distribution. Our Monte Carlo simulation studies show that, while comparable to the traditional LSE for normal errors, the proposed estimation procedure can have substantial efficiency gains for non-normal errors. Moreover, the efficiency gain can be achieved even for a small sample size.
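
A condensed sketch of the two ingredients, a KDE of residuals from a pilot fit and maximisation of the resulting estimated log-likelihood (data, bandwidth choice and names are ours, and we use a generic optimiser rather than the authors' EM-type algorithm):

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import gaussian_kde

def kdre(x, y, n_steps=3):
    """Kernel-density-based regression: start from OLS, estimate the error
    density from current residuals with a Gaussian KDE, then maximise the
    estimated log-likelihood in beta; repeat a few times."""
    A = np.column_stack([np.ones_like(x), x])
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)           # pilot OLS fit
    for _ in range(n_steps):
        kde = gaussian_kde(y - A @ beta)                    # f_hat from residuals
        nll = lambda b: -np.sum(np.log(kde(y - A @ b) + 1e-300))
        beta = minimize(nll, beta, method="Nelder-Mead").x
    return beta

rng = np.random.default_rng(3)
n = 400
x = rng.uniform(-2, 2, n)
y = 1.0 + 2.0 * x + rng.laplace(0.0, 1.0, n)               # non-normal errors
beta_kdre = kdre(x, y)
```

For Laplace-type errors the estimated likelihood behaves like an absolute-error criterion, which is where the efficiency gain over OLS comes from.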

8.
Estimating the parameters of multivariate mixed Poisson models is an important problem in image processing applications, especially for active imaging or astronomy. The classical maximum likelihood approach cannot be used for these models since the corresponding masses cannot be expressed in a simple closed form. This paper studies a maximum pairwise likelihood approach to estimate the parameters of multivariate mixed Poisson models when the mixing distribution is a multivariate Gamma distribution. The consistency and asymptotic normality of this estimator are derived. Simulations conducted on synthetic data illustrate these results and show that the proposed estimator outperforms classical estimators based on the method of moments. An application to change detection in low-flux images is also investigated.

9.
Over the last decade, various simulation-based nonlinear and non-Gaussian filters and smoothers have been proposed. When unknown parameters are included in the nonlinear and non-Gaussian system, however, it is very difficult to estimate the parameters together with the state variables, because the state-space model generally includes many parameters and the simulation-based procedures are subject to simulation or sampling errors. As a result, precise estimates of the parameters cannot be obtained (i.e., the obtained estimates may not be the global optima). In this paper, an attempt is made to estimate the state variables and the unknown parameters simultaneously, adopting a Monte Carlo optimization procedure for maximization of the likelihood function.

10.
When measurement error is present in covariates, it is well known that naïvely fitting a generalized linear model results in inconsistent inferences. Several methods have been proposed to adjust for measurement error without making undue distributional assumptions about the unobserved true covariates. Stefanski and Carroll focused on an unbiased estimating function rather than a likelihood approach. Their estimating function, known as the conditional score, exists for logistic regression models but has two problems: a poorly behaved Wald test and multiple solutions. They suggested a heuristic procedure to identify the best solution that works well in practice but has little theoretical support compared with maximum likelihood estimation. To help to resolve these problems, we propose a conditional quasi-likelihood to accompany the conditional score that provides an alternative to Wald's test and successfully identifies the consistent solution in large samples.

11.
It is well known that one or more outlying points in the data may adversely affect the consistency of quasi-likelihood or likelihood estimators for the regression effects. Similar to the quasi-likelihood approach, the existing outlier-resistant Mallows-type quasi-likelihood (MQL) estimation approach may also produce biased regression estimators. As a remedy, by using a fully standardized score function in the MQL estimating equation, we demonstrate in this paper that the fully standardized MQL estimators are almost unbiased, ensuring better consistency. Both count and binary responses subject to one or more outliers are used in the study. Small-sample as well as asymptotic results for the competing estimators are discussed.

12.
Using Monte Carlo methods, the properties of systemwise generalisations of the Breusch–Godfrey test for autocorrelated errors are studied in situations where the error terms follow either normal or non-normal distributions, and where these errors follow either AR(1) or MA(1) processes. Edgerton and Shukur (1999) studied the properties of the test with normally distributed error terms following an AR(1) process. When the errors follow a non-normal distribution, the performance of the tests deteriorates, especially when the tails are very heavy. The performance of the tests improves, approaching the normally distributed case, when the errors are less heavy-tailed.
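
For reference, the single-equation version of the test that these systemwise generalisations extend is short to write down: regress the OLS residuals on the original regressors plus lagged residuals and refer n·R² to a χ²(p) distribution. A sketch (simulation setup and names are ours):

```python
import numpy as np
from scipy import stats

def breusch_godfrey(y, X, p=1):
    """Single-equation Breusch-Godfrey LM test for order-p error
    autocorrelation: n * R^2 from the auxiliary regression of the OLS
    residuals on [1, X, lagged residuals], referred to chi2(p)."""
    n = len(y)
    A = np.column_stack([np.ones(n), X])
    e = y - A @ np.linalg.lstsq(A, y, rcond=None)[0]
    lags = np.column_stack([np.r_[np.zeros(k), e[:-k]] for k in range(1, p + 1)])
    Z = np.column_stack([A, lags])
    e_fit = Z @ np.linalg.lstsq(Z, e, rcond=None)[0]
    r2 = 1.0 - np.sum((e - e_fit) ** 2) / np.sum(e ** 2)
    lm = n * r2
    return lm, stats.chi2.sf(lm, p)

# strong AR(1) errors should be detected decisively
rng = np.random.default_rng(4)
n = 500
x = rng.standard_normal(n)
u = np.empty(n)
u[0] = rng.standard_normal()
for t in range(1, n):
    u[t] = 0.8 * u[t - 1] + rng.standard_normal()
y = 1.0 + 0.5 * x + u
lm_stat, p_value = breusch_godfrey(y, x, p=1)
```

The same auxiliary regression detects MA(1) errors as well, since the test is against general order-p serial dependence; the systemwise versions stack the equations of a multivariate model.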

13.
A commonly used procedure in a wide class of empirical applications is to impute unobserved regressors, such as expectations, from an auxiliary econometric model. This two-step (T-S) procedure fails to account for the fact that imputed regressors are measured with sampling error, so hypothesis tests based on the estimated covariance matrix of the second-step estimator are biased, even in large samples. We present a simple yet general method of calculating asymptotically correct standard errors in T-S models. The procedure may be applied even when joint estimation methods, such as full information maximum likelihood, are inappropriate or computationally infeasible. We present two examples from recent empirical literature in which these corrections have a major impact on hypothesis testing.

14.
The Kalman filter gives a recursive procedure for estimating state vectors. The recursion is determined by a matrix known as the gain matrix, which varies with the system to which the Kalman filter is applied. Traditionally the gain matrix is derived through the maximum likelihood approach when the probability structure of the underlying system is known. As an alternative, the quasi-likelihood method is considered in this paper. This method is used to derive the gain matrix without full knowledge of the probability structure of the underlying system. Two models are considered: the simple state-space model and a model with correlated measurement and transition equation disturbances. The purposes of this paper are (i) to show a simple way to derive the gain matrix, and (ii) to give an alternative approach for obtaining optimal estimates of the state vector when the underlying system is relatively complex.
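
As a reference point for the quasi-likelihood derivation discussed here, the standard recursion and its gain matrix look like this (our minimal version for the simple state-space model, not the paper's derivation):

```python
import numpy as np

def kalman_filter(y, F, H, Q, R, x0, P0):
    """Kalman recursion for x_t = F x_{t-1} + w_t, y_t = H x_t + v_t,
    with w ~ N(0, Q), v ~ N(0, R). The gain K = P H'(H P H' + R)^{-1}
    sets how strongly each innovation corrects the prediction."""
    x, P = x0.copy(), P0.copy()
    I = np.eye(len(x0))
    filtered = []
    for obs in y:
        x = F @ x                           # predict
        P = F @ P @ F.T + Q
        S = H @ P @ H.T + R                 # innovation covariance
        K = P @ H.T @ np.linalg.inv(S)      # gain matrix
        x = x + K @ (obs - H @ x)           # update
        P = (I - K @ H) @ P
        filtered.append(x.copy())
    return np.array(filtered)

# local-level model: a random-walk state observed in noise
rng = np.random.default_rng(5)
n = 300
state = np.cumsum(np.sqrt(0.1) * rng.standard_normal(n))
obs = state + rng.standard_normal(n)
xf = kalman_filter(obs.reshape(-1, 1), F=np.eye(1), H=np.eye(1),
                   Q=0.1 * np.eye(1), R=np.eye(1),
                   x0=np.zeros(1), P0=np.eye(1))
```

The quasi-likelihood approach of the paper recovers a gain of this form using only first and second moments of the disturbances, which is why no full probability model is needed.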

15.
Coefficient estimation in linear regression models with missing data is routinely carried out in the mean regression framework. However, mean regression theory breaks down if the error variance is infinite. In addition, correct specification of the likelihood function for existing imputation approaches is often challenging in practice, especially for skewed data. In this paper, we develop a novel composite quantile regression and a weighted quantile average estimation procedure for parameter estimation in linear regression models when some responses are missing at random. Instead of imputing a missing response by randomly drawing from its conditional distribution, we propose to impute both missing and observed responses by their estimated conditional quantiles given the observed data and to use parametrically estimated propensity scores to weight the check functions that define a regression parameter. Both estimation procedures are resistant to heavy-tailed errors or outliers in the response and can achieve good robustness and efficiency. Moreover, we propose adaptive penalization methods to simultaneously select significant variables and estimate unknown parameters. Asymptotic properties of the proposed estimators are carefully investigated. An efficient algorithm is developed for fast implementation of the proposed methodologies. We also discuss a model selection criterion, based on an ICQ-type statistic, for selecting the penalty parameters. The performance of the proposed methods is illustrated via simulated and real data sets.
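
Composite quantile regression itself is a linear programme: a common slope, one intercept per quantile level τ_k, and residuals split into positive and negative parts weighted τ_k and 1−τ_k. A minimal complete-data sketch (our own formulation and toy data, without the paper's weighting and imputation machinery):

```python
import numpy as np
from scipy.optimize import linprog

def cqr(x, y, taus=(0.25, 0.5, 0.75)):
    """Composite quantile regression via LP: variables are the K intercepts,
    the common slope, and split residuals u+ >= 0, u- >= 0 with
    y_i - b_k - beta*x_i = u+_{ki} - u-_{ki}; minimise sum of check losses."""
    n, K = len(y), len(taus)
    nv = K + 1 + 2 * K * n                        # [b_1..b_K, beta, u+, u-]
    c = np.zeros(nv)
    A_eq = np.zeros((K * n, nv))
    b_eq = np.zeros(K * n)
    for k, tau in enumerate(taus):
        rows = slice(k * n, (k + 1) * n)
        c[K + 1 + k * n:K + 1 + (k + 1) * n] = tau                   # u+ cost
        c[K + 1 + K * n + k * n:K + 1 + K * n + (k + 1) * n] = 1 - tau  # u- cost
        A_eq[rows, k] = 1.0                                          # intercept b_k
        A_eq[rows, K] = x                                            # common slope
        A_eq[np.arange(k * n, (k + 1) * n),
             K + 1 + k * n + np.arange(n)] = 1.0
        A_eq[np.arange(k * n, (k + 1) * n),
             K + 1 + K * n + k * n + np.arange(n)] = -1.0
        b_eq[rows] = y
    bounds = [(None, None)] * (K + 1) + [(0, None)] * (2 * K * n)
    res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=bounds, method="highs")
    return res.x[:K], res.x[K]                    # intercepts, common slope

rng = np.random.default_rng(6)
n_obs = 200
x = rng.uniform(-1, 1, n_obs)
y = 1.0 + 2.0 * x + 0.5 * rng.standard_t(2, n_obs)   # heavy-tailed errors
intercepts, slope = cqr(x, y)
```

The heavy-tailed t(2) errors have infinite variance, the setting in which mean regression fails but the check-loss criterion above remains well behaved.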

16.
The authors provide a rigorous large sample theory for linear models whose response variable has been subjected to the Box‐Cox transformation. They provide a continuous asymptotic approximation to the distribution of estimators of natural parameters of the model. They show, in particular, that the maximum likelihood estimator of the ratio of slope to residual standard deviation is consistent and relatively stable. The authors further show the importance for inference of normality of the errors and give tests for normality based on the estimated residuals. For non‐normal errors, they give adjustments to the log‐likelihood and to asymptotic standard errors.
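
The likelihood machinery behind the model is concise: for each λ, the profile log-likelihood maximises over the mean and variance analytically and adds the Jacobian of the transformation. A sketch (the grid search and data are ours; scipy.stats.boxcox performs the same maximisation internally):

```python
import numpy as np
from scipy import stats

def boxcox_profile_loglik(y, lam):
    """Profile log-likelihood of the Box-Cox parameter lambda for the model
    y^(lambda) ~ N(mu, sigma^2), up to an additive constant:
    -n/2 * log(sigma_hat^2(lambda)) + (lambda - 1) * sum(log y)."""
    z = np.log(y) if lam == 0 else np.expm1(lam * np.log(y)) / lam
    return -0.5 * len(y) * np.log(z.var()) + (lam - 1.0) * np.sum(np.log(y))

rng = np.random.default_rng(7)
y = np.exp(rng.normal(1.0, 0.4, size=500))      # lognormal data: true lambda = 0
grid = np.linspace(-1.0, 1.0, 201)
lam_hat = grid[np.argmax([boxcox_profile_loglik(y, l) for l in grid])]
```

The grid maximiser should agree closely with scipy.stats.boxcox, which optimises the same criterion continuously; the expm1 form keeps the transform numerically stable for λ near zero.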

17.
The consistency of the model selection criterion BIC has been well and widely studied for many nonlinear regression models. However, few studies have considered models with lagged variables as regressors and autocorrelated errors in time series settings, which are common in both linear and nonlinear time series modelling. This paper studies a dynamic semi-varying coefficient model with ARMA errors, using an approach based on spectral analysis of time series. The consistency of the proposed model selection criteria is established, and an implementation procedure is proposed for practitioners. Simulation studies have also been conducted to demonstrate the consistency property numerically.

18.
Continuous non-Gaussian stationary processes of the OU type are becoming increasingly popular given their flexibility in modelling stylized features of financial series such as asymmetry, heavy tails and jumps. The use of non-Gaussian marginal distributions makes likelihood analysis of these processes infeasible for virtually all cases of interest. This paper exploits the self-decomposability of the marginal laws of OU processes to provide explicit expressions for the characteristic function, which can be applied to several models and used to develop efficient estimation techniques based on the empirical characteristic function. Extensions to OU-based stochastic volatility models are provided.

19.
In applications of Gaussian processes (GPs) where quantification of uncertainty is a strict requirement, it is necessary to accurately characterize the posterior distribution over Gaussian process covariance parameters. This is normally done by means of standard Markov chain Monte Carlo (MCMC) algorithms, which require repeated expensive calculations involving the marginal likelihood. Motivated by the desire to avoid the inefficiencies of MCMC algorithms rejecting a considerable amount of expensive proposals, this paper develops an alternative inference framework based on adaptive multiple importance sampling (AMIS). In particular, this paper studies the application of AMIS for GPs in the case of a Gaussian likelihood, and proposes a novel pseudo-marginal-based AMIS algorithm for non-Gaussian likelihoods, where the marginal likelihood is unbiasedly estimated. The results suggest that the proposed framework outperforms MCMC-based inference of covariance parameters in a wide range of scenarios.

20.
We study the asymptotic properties of the reduced-rank estimator of error correction models for vector processes observed with measurement errors. Although it is well known that there is no asymptotic measurement error bias when predictor variables are integrated processes in regression models [Phillips PCB, Durlauf SN. Multiple time series regression with integrated processes. Rev Econom Stud. 1986;53:473–495], we systematically investigate the effects of measurement errors (in the dependent as well as the predictor variables) on the estimation of both the cointegrating vectors and the speed-of-adjustment matrix. Furthermore, we present the asymptotic properties of the estimators. We also obtain the asymptotic distribution of the likelihood ratio test for the cointegrating ranks. We investigate the effects of the measurement errors on estimation and testing through a Monte Carlo simulation study.
