Similar Literature
1.
The literature on multivariate stochastic volatility (MSV) models has developed significantly over the last few years. This paper reviews the substantial literature on specification, estimation, and evaluation of MSV models. A wide range of MSV models is presented according to various categories, namely, (i) asymmetric models, (ii) factor models, (iii) time-varying correlation models, and (iv) alternative MSV specifications, including models based on the matrix exponential transformation, the Cholesky decomposition, and the Wishart autoregressive process. Alternative methods of estimation, including quasi-maximum likelihood, simulated maximum likelihood, and Markov chain Monte Carlo methods, are discussed and compared. Various methods of diagnostic checking and model comparison are also reviewed.

2.
Modified Profile Likelihood for Fixed-Effects Panel Data Models
We show how modified profile likelihood methods, developed in the statistical literature, may be effectively applied to estimate the structural parameters of econometric models for panel data, with a remarkable reduction of bias with respect to ordinary likelihood methods. The implementation of these methods is first illustrated for general panel data models including individual-specific fixed effects and then, in more detail, for the truncated linear regression model and for dynamic regression models for binary data under different specifications. Simulation studies show the good performance of inference based on the modified profile likelihood, even when compared to an ideal, although infeasible, procedure (in which the fixed effects are known) and to alternative estimators existing in the econometric literature. The proposed estimation methods are implemented in an R package that we make available to the reader.
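
The incidental-parameters bias that the modified profile likelihood corrects is easy to reproduce in the classic Neyman-Scott setting. Below is a minimal sketch (an illustrative toy model, not the paper's R package): y_it = alpha_i + eps_it, with the error variance as the structural parameter.

    import numpy as np

    rng = np.random.default_rng(1)
    N, T, sigma2 = 500, 4, 2.0

    # Panel with individual-specific fixed effects alpha_i.
    alpha = rng.normal(0.0, 1.0, size=(N, 1))
    y = alpha + rng.normal(0.0, np.sqrt(sigma2), size=(N, T))

    # Profiling out alpha_i (alpha_i_hat = row mean) and maximising the
    # ordinary profile likelihood gives a variance estimate biased by (T-1)/T.
    resid2 = ((y - y.mean(axis=1, keepdims=True)) ** 2).sum()
    sigma2_ml = resid2 / (N * T)          # inconsistent: -> sigma2 * (T-1)/T
    sigma2_mod = resid2 / (N * (T - 1))   # modified profile likelihood maximiser

    print(f"true {sigma2:.3f}  ML {sigma2_ml:.3f}  modified {sigma2_mod:.3f}")

In this toy model the modification removes the bias exactly; in the truncated and dynamic binary models of the paper the correction is approximate but still substantial.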

3.
Estimating parameters in a stochastic volatility (SV) model is a challenging task. Among other estimation methods and approaches, efficient simulation methods based on importance sampling have been developed for the Monte Carlo maximum likelihood estimation of univariate SV models. This paper shows that importance sampling methods can be used in a general multivariate SV setting. The sampling methods are computationally efficient. To illustrate the versatility of this approach, three different multivariate stochastic volatility models are estimated for a standard data set. The empirical results are compared to those from earlier studies in the literature. Monte Carlo simulation experiments, based on parameter estimates from the standard data set, are used to show the effectiveness of the importance sampling methods.
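
As a rough illustration of Monte Carlo likelihood evaluation for an SV model, the sketch below estimates log p(y) by averaging p(y | h) over draws of the latent log-volatility path, i.e. importance sampling with the latent AR(1) process itself as the (inefficient) importance density. The efficient samplers discussed in the paper instead fit a Gaussian proposal to p(h | y); all parameter values here are illustrative.

    import numpy as np
    from scipy.stats import norm

    rng = np.random.default_rng(2)

    def simulate_sv(T, mu=-1.0, phi=0.95, sig_eta=0.2):
        """Simulate a univariate SV model: y_t = exp(h_t / 2) * eps_t."""
        h = np.empty(T)
        h[0] = mu + sig_eta / np.sqrt(1 - phi**2) * rng.standard_normal()
        for t in range(1, T):
            h[t] = mu + phi * (h[t-1] - mu) + sig_eta * rng.standard_normal()
        return np.exp(h / 2) * rng.standard_normal(T)

    def loglik_is(y, mu, phi, sig_eta, S=5000):
        """Monte Carlo log-likelihood, latent AR(1) as importance density."""
        T = len(y)
        h = np.empty((S, T))
        h[:, 0] = mu + sig_eta / np.sqrt(1 - phi**2) * rng.standard_normal(S)
        for t in range(1, T):
            h[:, t] = mu + phi * (h[:, t-1] - mu) + sig_eta * rng.standard_normal(S)
        logw = norm.logpdf(y, scale=np.exp(h / 2)).sum(axis=1)  # log p(y | h)
        m = logw.max()
        return m + np.log(np.mean(np.exp(logw - m)))            # log-sum-exp

    y = simulate_sv(100)
    print(loglik_is(y, mu=-1.0, phi=0.95, sig_eta=0.2))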

4.
Time-varying coefficient models with autoregressive and moving-average–generalized autoregressive conditional heteroscedasticity structure are proposed for examining the time-varying effects of risk factors in longitudinal studies. Compared with existing models in the literature, the proposed models give explicit patterns for the time-varying coefficients. Maximum likelihood and marginal likelihood (based on a Laplace approximation) are used to estimate the parameters in the proposed models. Simulation studies are conducted to evaluate the performance of these two estimation methods, measured in terms of the Kullback–Leibler divergence and the root mean square error. The marginal likelihood approach leads to more accurate parameter estimates, although it is more computationally intensive. The proposed models are applied to the Framingham Heart Study to investigate the time-varying effects of covariates on coronary heart disease incidence. The Bayesian information criterion is used to specify the time series structures of the coefficients of the risk factors.
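
To make the Laplace-approximated marginal likelihood concrete, here is a minimal sketch for a toy random-intercept Poisson likelihood (an assumed stand-in, not the paper's coefficient model): the integral over the random effect is replaced by a second-order expansion around its mode, then checked against adaptive quadrature.

    import numpy as np
    from scipy.integrate import quad
    from scipy.optimize import minimize_scalar

    # Toy marginal likelihood for one cluster: Poisson counts with a
    # N(0, tau2) random intercept b integrated out. Normalising constants
    # common to both calculations (y! terms, sqrt(2*pi*tau2)) are omitted.
    y = np.array([2, 0, 3, 1])
    x_beta, tau2 = 0.5, 1.0

    def log_integrand(b):
        eta = x_beta + b
        return (y * eta - np.exp(eta)).sum() - 0.5 * b**2 / tau2

    # Laplace: expand the log-integrand to second order around its mode.
    b_hat = minimize_scalar(lambda b: -log_integrand(b)).x
    hess = -(np.exp(x_beta + b_hat) * len(y)) - 1.0 / tau2   # curvature
    laplace = log_integrand(b_hat) + 0.5 * np.log(2 * np.pi / -hess)

    # Gold standard: adaptive quadrature of the same integrand.
    val, _ = quad(lambda b: np.exp(log_integrand(b)), -10, 10)
    print(f"Laplace {laplace:.4f}  quadrature {np.log(val):.4f}")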

5.
Parametric incomplete data models defined by ordinary differential equations (ODEs) are widely used in biostatistics to describe biological processes accurately. Their parameters are estimated on approximate models, whose regression functions are evaluated by a numerical integration method. Accurate and efficient estimation of these parameters is therefore a critical issue. This paper proposes parameter estimation methods involving either a stochastic approximation EM algorithm (SAEM) for maximum likelihood estimation, or a Gibbs sampler for the Bayesian approach. Both algorithms involve the simulation of the non-observed data from conditional distributions using Hastings–Metropolis (H–M) algorithms. A modified H–M algorithm, including an original local linearization scheme to solve the ODEs, is proposed to reduce the computational time significantly. The convergence of all these algorithms on the approximate model is proved. The errors induced by the numerical solving method on the conditional distribution, the likelihood, and the posterior distribution are bounded. The Bayesian and maximum likelihood estimation methods are illustrated on a simulated pharmacokinetic nonlinear mixed-effects model defined by an ODE. Simulation results illustrate the ability of these algorithms to provide accurate estimates.
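
A stripped-down version of the Bayesian route: a random-walk Hastings–Metropolis sampler whose likelihood evaluates the regression function by numerically integrating a one-compartment elimination ODE. The model, prior, and tuning constants are illustrative assumptions; the paper's local linearization scheme for speeding up the ODE solves is not reproduced.

    import numpy as np
    from scipy.integrate import solve_ivp

    rng = np.random.default_rng(3)

    t_obs = np.linspace(0.5, 8, 10)
    k_true, c0, sigma = 0.4, 10.0, 0.3

    def concentration(k):
        """Regression function evaluated by numerical ODE integration."""
        sol = solve_ivp(lambda t, c: -k * c, (0, t_obs[-1]), [c0],
                        t_eval=t_obs, rtol=1e-8)
        return sol.y[0]

    y = concentration(k_true) + sigma * rng.standard_normal(t_obs.size)

    def log_post(k):
        if k <= 0:
            return -np.inf
        resid = y - concentration(k)
        # Gaussian likelihood plus an illustrative weak prior on log k
        return -0.5 * (resid**2).sum() / sigma**2 - 0.5 * np.log(k)**2

    # Random-walk Hastings-Metropolis over the elimination rate k.
    k, lp = 1.0, log_post(1.0)
    draws = []
    for _ in range(4000):
        prop = k + 0.1 * rng.standard_normal()
        lp_prop = log_post(prop)
        if np.log(rng.uniform()) < lp_prop - lp:
            k, lp = prop, lp_prop
        draws.append(k)
    print("posterior mean of k:", np.mean(draws[1000:]))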

6.
Bayesian methods have proved effective for quantile estimation, including for financial Value-at-Risk forecasting. Expected shortfall (ES) is a competing tail risk measure, favoured by the Basel Committee, that can be semi-parametrically estimated via asymmetric least squares. An asymmetric Gaussian density is proposed, allowing a likelihood to be developed that facilitates both pseudo-maximum likelihood and Bayesian semi-parametric estimation, and leads to forecasts of quantiles, expectiles, and ES. Further, the conditional autoregressive expectile class of models is generalised to two fully nonlinear families. Adaptive Markov chain Monte Carlo sampling schemes are developed for the Bayesian estimation. The proposed models are favoured in an empirical study forecasting eight financial return series: evidence of more accurate ES forecasting, compared to a range of competing methods, is found, while Bayesian-estimated models tend to be more accurate. During a financial crisis period, however, most models perform badly, while two existing models perform best.
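
Asymmetric least squares is easy to state in code: the tau-expectile minimizes a squared loss whose weight flips at the fitted value. The sketch below computes a sample expectile that way, together with an empirical ES below the alpha-quantile, on simulated returns (illustrative only; the paper's models are conditional and estimated by MCMC).

    import numpy as np
    from scipy.optimize import minimize_scalar

    rng = np.random.default_rng(4)
    r = rng.standard_t(df=5, size=5000) * 0.01   # simulated daily returns

    def expectile(y, tau):
        """Asymmetric least squares: weight tau above, 1 - tau below."""
        def loss(e):
            w = np.where(y < e, 1 - tau, tau)
            return np.sum(w * (y - e) ** 2)
        return minimize_scalar(loss, bounds=(y.min(), y.max()),
                               method="bounded").x

    alpha = 0.01
    var_alpha = np.quantile(r, alpha)        # Value-at-Risk (lower tail)
    es_alpha = r[r <= var_alpha].mean()      # empirical expected shortfall
    print(f"tau=0.01 expectile {expectile(r, 0.01):.4f}  "
          f"VaR {var_alpha:.4f}  ES {es_alpha:.4f}")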

7.
Estimation is considered for a class of models which are simple extensions of the generalized extreme value (GEV) distribution, suitable for introducing time dependence into models which are otherwise only spatially dependent. Maximum likelihood estimation and the method of probability-weighted moments are identified as the most useful for fitting these models. The relative merits of these methods, and others, are discussed in the context of estimation for the GEV distribution, with particular reference to the non-regularity of the GEV distribution for particular parameter values. In the case of maximum likelihood estimation, first and second derivatives of the log-likelihood are evaluated for the models.
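
Both estimation methods are simple to sketch for an i.i.d. GEV sample: ML via scipy's genextreme, and probability-weighted moments via the sample b_r moments with the approximation of Hosking, Wallis and Wood (1985). Sign conventions for the shape parameter differ across references (scipy's c is minus the usual xi), so treat this as a sketch.

    import numpy as np
    from scipy.stats import genextreme
    from scipy.special import gamma as gamma_fn

    rng = np.random.default_rng(5)
    c_true, loc_true, scale_true = -0.2, 10.0, 2.0   # xi = 0.2, heavy tail
    x = genextreme.rvs(c_true, loc=loc_true, scale=scale_true,
                       size=200, random_state=rng)

    # Maximum likelihood fit
    c_ml, loc_ml, scale_ml = genextreme.fit(x)

    # Probability-weighted moments b0, b1, b2 and the Hosking et al.
    # estimator: k = 7.8590*z + 2.9554*z^2, z = (2b1-b0)/(3b2-b0) - ln2/ln3
    xs = np.sort(x)
    n = len(xs)
    i = np.arange(1, n + 1)
    b0 = xs.mean()
    b1 = np.sum((i - 1) / (n - 1) * xs) / n
    b2 = np.sum((i - 1) * (i - 2) / ((n - 1) * (n - 2)) * xs) / n
    z = (2 * b1 - b0) / (3 * b2 - b0) - np.log(2) / np.log(3)
    k = 7.8590 * z + 2.9554 * z**2                    # k = -xi (scipy's c)
    scale_pwm = (2 * b1 - b0) * k / (gamma_fn(1 + k) * (1 - 2.0**-k))
    loc_pwm = b0 + scale_pwm * (gamma_fn(1 + k) - 1) / k

    print("ML :", c_ml, loc_ml, scale_ml)
    print("PWM:", k, loc_pwm, scale_pwm)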

8.
Covariance tapering for multivariate Gaussian random fields estimation
In the recent literature there has been growing interest in the construction of covariance models for multivariate Gaussian random fields. However, effective estimation methods for these models remain largely unexplored. The maximum likelihood method has attractive features, but for large data sets this solution becomes impractical, so computationally efficient alternatives have to be devised. In this paper we explore the use of the covariance tapering method for the estimation of multivariate covariance models. In particular, through a simulation study, we compare simple separable tapers with more flexible multivariate tapers recently proposed in the literature, and we discuss the asymptotic properties of the method under increasing-domain asymptotics.
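
The computational payoff of tapering is that multiplying a covariance by a compactly supported correlation function yields an exactly sparse matrix, so likelihood evaluations can use sparse factorizations. A univariate sketch with an assumed exponential covariance and a Wendland-type taper (the paper's interest is in the multivariate extensions):

    import numpy as np
    from scipy.sparse import csc_matrix
    from scipy.sparse.linalg import splu

    rng = np.random.default_rng(6)
    n = 2000
    pts = rng.uniform(0, 50, size=(n, 2))
    d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)

    # Exponential covariance times a compactly supported Wendland taper:
    # the tapered matrix is exactly zero beyond range theta, hence sparse.
    cov = np.exp(-d / 5.0)
    theta = 3.0
    taper = np.clip(1 - d / theta, 0, None) ** 4 * (1 + 4 * d / theta)
    C_tap = csc_matrix(cov * taper)
    print(f"nonzeros: {C_tap.nnz / n**2:.1%} of the full matrix")

    # Gaussian log-likelihood pieces via a sparse LU factorisation
    z = rng.standard_normal(n)            # stand-in for an observed field
    lu = splu(C_tap)
    quad = z @ lu.solve(z)                # z' C^{-1} z
    logdet = np.sum(np.log(np.abs(lu.U.diagonal())))
    loglik = -0.5 * (logdet + quad + n * np.log(2 * np.pi))
    print("tapered log-likelihood:", loglik)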

9.
Prediction error is critical to assessing model fit and evaluating model prediction. We propose cross-validation (CV) and approximated CV methods for estimating prediction error under the Bregman divergence (BD), which embeds nearly all of the commonly used loss functions in the regression, classification, and machine learning literature. The approximated CV formulas are analytically derived, which facilitates fast estimation of prediction error under BD. We then study a data-driven optimal bandwidth selector for local-likelihood estimation that minimizes the overall prediction error or, equivalently, the covariance penalty. It is shown that the covariance penalty and CV methods converge to the same mean prediction error criterion. We also propose a lower-bound scheme for computing the local logistic regression estimates and demonstrate that the algorithm monotonically increases the target local likelihood and converges. The idea and methods are extended to generalized varying-coefficient models and additive models.
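
A plain K-fold CV estimate of prediction error under one particular Bregman divergence, the Bernoulli deviance, looks as follows; a parametric logistic fit stands in for the paper's local-likelihood estimator, and the approximated CV formulas are not reproduced.

    import numpy as np

    rng = np.random.default_rng(7)
    n, p = 400, 3
    X = rng.standard_normal((n, p))
    beta = np.array([1.0, -0.5, 0.25])
    y = rng.binomial(1, 1 / (1 + np.exp(-X @ beta)))

    def fit_logistic(X, y, iters=50):
        """Newton-Raphson for logistic regression (no intercept, for brevity)."""
        b = np.zeros(X.shape[1])
        for _ in range(iters):
            mu = 1 / (1 + np.exp(-X @ b))
            W = mu * (1 - mu)
            b += np.linalg.solve(X.T @ (W[:, None] * X), X.T @ (y - mu))
        return b

    def bernoulli_deviance(y, mu, eps=1e-9):
        """A Bregman divergence: the deviance loss for binary outcomes."""
        mu = np.clip(mu, eps, 1 - eps)
        return -2 * (y * np.log(mu) + (1 - y) * np.log(1 - mu))

    # K-fold cross-validation estimate of prediction error under this BD
    K = 5
    folds = np.array_split(rng.permutation(n), K)
    err = []
    for k in range(K):
        test = folds[k]
        train = np.concatenate([folds[j] for j in range(K) if j != k])
        b = fit_logistic(X[train], y[train])
        mu = 1 / (1 + np.exp(-X[test] @ b))
        err.append(bernoulli_deviance(y[test], mu).mean())
    print("CV prediction error (Bernoulli deviance):", np.mean(err))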

10.
The Tweedie compound Poisson distribution is a subclass of the exponential dispersion family with a power variance function, in which the value of the power index lies in the interval (1,2). It is well known that the Tweedie compound Poisson density function is not analytically tractable, and numerical procedures that allow the density to be accurately and quickly evaluated did not appear until fairly recently. Unsurprisingly, there has been little statistical literature devoted to full maximum likelihood inference for Tweedie compound Poisson mixed models. To date, the focus has been on estimation methods in the quasi-likelihood framework. Further, Tweedie compound Poisson mixed models involve an unknown variance function, which has a significant impact on hypothesis tests and predictive uncertainty measures. The estimation of the unknown variance function is thus of independent interest in many applications. However, quasi-likelihood-based methods are not well suited to this task. This paper presents several likelihood-based inferential methods for the Tweedie compound Poisson mixed model that enable estimation of the variance function from the data. These algorithms include the likelihood approximation method, in which both the integral over the random effects and the compound Poisson density function are evaluated numerically, and the latent variable approach, in which maximum likelihood estimation is carried out via the Monte Carlo EM algorithm, without the need to approximate the density function. In addition, we derive the corresponding Markov chain Monte Carlo algorithm for a Bayesian formulation of the mixed model. We demonstrate the use of the various methods through a numerical example, and conduct an array of simulation studies to evaluate the statistical properties of the proposed estimators.
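
The compound Poisson construction itself is straightforward to simulate, which is useful for checking any density approximation: draw a Poisson number of gamma jumps and sum them. The parameter values below are illustrative; the power index is p = (shape + 2) / (shape + 1), which lies in (1, 2).

    import numpy as np

    rng = np.random.default_rng(8)

    def rtweedie_cp(size, lam=2.0, shape=0.8, scale=1.5):
        """Compound Poisson-gamma draws: Poisson(lam) many gamma jumps.

        The distribution lies in the Tweedie family with power index
        p = (shape + 2) / (shape + 1) and has an exact point mass at
        zero, P(Y = 0) = exp(-lam)."""
        n_jumps = rng.poisson(lam, size=size)
        # sum of n iid Gamma(shape, scale) variables is Gamma(n*shape, scale)
        return np.array([rng.gamma(shape * n, scale) if n > 0 else 0.0
                         for n in n_jumps])

    y = rtweedie_cp(100000)
    lam, shape, scale = 2.0, 0.8, 1.5
    print("P(Y=0):", np.mean(y == 0), "vs", np.exp(-lam))
    print("mean  :", y.mean(), "vs", lam * shape * scale)
    print("var   :", y.var(), "vs", lam * shape * (shape + 1) * scale**2)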

11.
Missing data are common in many experiments, including surveys, clinical trials, epidemiological studies, and environmental studies. Unconstrained likelihood inference for generalized linear models (GLMs) with nonignorable missing covariates has been studied extensively in the literature. However, parameter orderings or constraints may occur naturally in practice, and thus the efficiency of a statistical method may be improved by incorporating parameter constraints into the likelihood function. In this paper, we consider constrained inference for analysing GLMs with nonignorable missing covariates under linear inequality constraints on the model parameters. Specifically, constrained maximum likelihood (ML) estimation is based on the gradient projection expectation-maximization approach. Further, we investigate the asymptotic null distribution of the constrained likelihood ratio test (LRT). Simulation studies examine the empirical properties of the constrained ML estimators and LRTs, and demonstrate the improved precision of these constrained techniques. An application to contaminant levels in an environmental study is also presented.
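
A generic illustration of ML under a linear inequality constraint, using an off-the-shelf SLSQP solver on a fully observed logistic model rather than the paper's gradient projection EM for missing covariates (the constraint beta1 <= beta2 is an assumed example):

    import numpy as np
    from scipy.optimize import minimize

    rng = np.random.default_rng(9)
    n = 300
    X = np.column_stack([np.ones(n), rng.standard_normal((n, 2))])
    beta_true = np.array([-0.5, 0.4, 1.0])        # satisfies beta1 <= beta2
    y = rng.binomial(1, 1 / (1 + np.exp(-X @ beta_true)))

    def negloglik(b):
        eta = X @ b
        return np.sum(np.log1p(np.exp(eta)) - y * eta)

    # Unconstrained ML versus ML under the linear inequality beta1 <= beta2.
    # scipy enforces g(b) >= 0, so set g(b) = b[2] - b[1].
    b0 = np.zeros(3)
    fit_u = minimize(negloglik, b0, method="BFGS")
    fit_c = minimize(negloglik, b0, method="SLSQP",
                     constraints=[{"type": "ineq",
                                   "fun": lambda b: b[2] - b[1]}])
    print("unconstrained:", fit_u.x)
    print("constrained  :", fit_c.x)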

12.
Maximum pseudolikelihood (MPL) estimators are useful alternatives to maximum likelihood (ML) estimators when likelihood functions are more difficult to manipulate than their marginal and conditional components. Furthermore, MPL estimators subsume a large number of estimation techniques including ML estimators, maximum composite marginal likelihood estimators, and maximum pairwise likelihood estimators. When considering only the estimation of discrete models (on a possibly countably infinite support), we show that a simple finiteness assumption on an entropy-based measure is sufficient for assessing the consistency of the MPL estimator. As a consequence, we demonstrate that the MPL estimator of any discrete model on a bounded support will be consistent. Our result is valid in parametric, semiparametric, and nonparametric settings.

13.
Network meta-analysis can be implemented by using arm-based or contrast-based models. Here we focus on arm-based models and fit them using generalized linear mixed model procedures. Full maximum likelihood (ML) estimation leads to biased trial-by-treatment interaction variance estimates for heterogeneity. Thus, our objective is to investigate alternative approaches to variance estimation that reduce bias compared with full ML. Specifically, we use penalized quasi-likelihood/pseudo-likelihood and hierarchical (h) likelihood approaches. In addition, we consider a novel model modification that yields estimators akin to the residual maximum likelihood estimator for linear mixed models. The proposed methods are compared by simulation, and two real datasets are used for illustration. Simulations show that penalized quasi-likelihood/pseudo-likelihood and h-likelihood reduce bias and yield satisfactory coverage rates. Sum-to-zero restrictions and baseline contrasts for random trial-by-treatment interaction effects, as well as a residual-ML-like adjustment, also reduce bias compared with an unconstrained model when ML is used, but coverage rates are not quite as good. Penalized quasi-likelihood/pseudo-likelihood and h-likelihood are therefore recommended.
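
The ML-versus-residual-ML contrast in variance estimation can be seen already in a univariate random-effects meta-analysis, where the REML adjustment is a single extra term in the profile log-likelihood. A sketch with simulated trial effects (a stand-in for the arm-based GLMMs in the paper):

    import numpy as np
    from scipy.optimize import minimize_scalar

    rng = np.random.default_rng(10)
    k, tau2_true = 8, 0.04
    v = rng.uniform(0.01, 0.05, size=k)                 # within-trial variances
    y = 0.3 + rng.normal(0, np.sqrt(tau2_true + v))     # trial effect estimates

    def neg_profile_ll(tau2, reml):
        w = 1.0 / (v + tau2)
        mu = np.sum(w * y) / np.sum(w)                  # profiled-out mean
        ll = -0.5 * np.sum(np.log(v + tau2) + w * (y - mu) ** 2)
        if reml:
            ll -= 0.5 * np.log(np.sum(w))               # REML adjustment term
        return -ll

    for reml in (False, True):
        t2 = minimize_scalar(lambda t: neg_profile_ll(t, reml),
                             bounds=(0, 1), method="bounded").x
        print(("REML" if reml else "ML  "), "tau2 =", round(t2, 4))

With few trials, the full-ML estimate of tau2 is systematically smaller than the REML-adjusted one, which is the bias pattern the paper targets.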

14.
Mixture cure models are widely used when a proportion of patients are cured. The proportional hazards mixture cure model and the accelerated failure time mixture cure model are the most popular models in practice. Usually the expectation-maximisation (EM) algorithm is applied to both models for parameter estimation. Bootstrap methods are used for variance estimation. In this paper we propose a smooth semi-nonparametric (SNP) approach in which maximum likelihood is applied directly to mixture cure models for parameter estimation. The variance can be estimated by the inverse of the second derivative of the SNP likelihood. A comprehensive simulation study indicates good performance of the proposed method. We investigate stage effects in breast cancer by applying the proposed method to breast cancer data from the South Carolina Cancer Registry.
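
The EM algorithm for a mixture cure model has a particularly clean form when the latency distribution is exponential: the E-step computes each censored patient's probability of being uncured, and both M-step updates are closed-form. A minimal sketch under these simplifying assumptions (the paper's PH/AFT cure models and SNP likelihood are more general):

    import numpy as np

    rng = np.random.default_rng(11)
    n, pi_true, lam_true, cens = 2000, 0.6, 0.5, 3.0

    # Mixture cure data: with prob 1 - pi a patient is cured (never fails);
    # uncured survival is exponential(lam); administrative censoring at cens.
    uncured = rng.uniform(size=n) < pi_true
    t_lat = np.where(uncured, rng.exponential(1 / lam_true, n), np.inf)
    t = np.minimum(t_lat, cens)
    d = (t_lat <= cens).astype(float)       # event indicator

    pi_, lam = 0.5, 1.0
    for _ in range(200):
        # E-step: posterior probability of being uncured given censoring
        s_u = np.exp(-lam * t)              # uncured survival at time t
        w = np.where(d == 1, 1.0, pi_ * s_u / (1 - pi_ + pi_ * s_u))
        # M-step: closed forms for the incidence and the exponential rate
        pi_ = w.mean()
        lam = d.sum() / np.sum(w * t)
    print(f"pi {pi_:.3f} (true {pi_true})  lambda {lam:.3f} (true {lam_true})")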

15.
The nonparametric maximum likelihood estimation (NPMLE) of the distribution function from interval-censored (IC) data has been extensively studied in the extant literature. The NPMLE has also been developed for the subdistribution functions in an IC competing risks model and in an illness-death model under various interval-censoring scenarios. But the important problem of estimating the cumulative intensities (CIs) in interval-censored models has not been considered previously. We develop the NPMLE of the CI in a simple alive/dead model and of the CIs in a competing risks model. Assuming that the data are generated by a discrete and finite mixed-case interval-censoring mechanism, we provide a discussion and a simulation study of the asymptotic properties of the NPMLEs of the CIs. In particular, we show that they are asymptotically unbiased; in contrast, the ad hoc estimators presented in the extant literature are substantially biased. We illustrate our methods with data from a prospective cohort study on the longevity of dental veneers.
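
For the underlying distribution function, the NPMLE from interval-censored data can be computed by a self-consistency (EM) iteration over candidate support points. The sketch below uses a simplified candidate grid of observed right endpoints; a full Turnbull implementation would restrict mass to the innermost intervals. Data-generating values are illustrative.

    import numpy as np

    rng = np.random.default_rng(12)
    n = 500
    T = rng.weibull(1.5, n) * 2.0                     # latent event times
    # Mixed-case interval censoring: each subject inspected a few times
    L, R = np.zeros(n), np.full(n, np.inf)
    for i in range(n):
        for a in np.sort(rng.uniform(0, 4, size=3)):
            if T[i] <= a:
                R[i] = a
                break
            L[i] = a

    # Self-consistency (EM) iteration for masses p_j on a candidate grid s_j
    s = np.unique(np.concatenate([R[np.isfinite(R)], [np.inf]]))
    A = (s[None, :] > L[:, None]) & (s[None, :] <= R[:, None])
    p = np.full(len(s), 1.0 / len(s))
    for _ in range(500):
        num = A * p                       # E-step: allocate each subject's
        p = (num / num.sum(axis=1, keepdims=True)).mean(axis=0)  # mass

    F = np.cumsum(p)                      # NPMLE of the distribution function
    for j in range(0, len(s), max(1, len(s) // 5)):
        print(f"F({s[j]:.2f}) ~= {F[j]:.3f}")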

16.
This article proposes a mixture double autoregressive model by introducing the flexibility of mixture models into the double autoregressive model, a novel conditional heteroscedastic model recently proposed in the literature. To make it more flexible, the mixing proportions are further assumed to be time-varying, and probabilistic properties including strict stationarity and higher-order moments are derived. Inference tools, including maximum likelihood estimation, an expectation–maximization (EM) algorithm for locating the estimator, and an information criterion for model selection, are carefully studied for the logistic mixture double autoregressive model, which has two components and is encountered more frequently in practice. Monte Carlo experiments give further support to the new models, and the analysis of an empirical example is also reported.
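
A direct way to see the model is to write down its conditional likelihood: each observation's density is a two-component mixture of DAR(1) normals with a logistic, time-varying weight. The mixing specification below (driven by the lagged absolute value) is an illustrative assumption, and crude numerical optimization stands in for the paper's EM algorithm.

    import numpy as np
    from scipy.optimize import minimize
    from scipy.stats import norm

    rng = np.random.default_rng(13)

    def sigmoid(x):
        return 1 / (1 + np.exp(-x))

    # Simulate a two-component mixture DAR(1): within component k,
    #   y_t = phi_k * y_{t-1} + sqrt(w_k + a_k * y_{t-1}^2) * eps_t.
    phi, w, a, g = (0.6, -0.4), (0.5, 1.5), (0.2, 0.6), (0.3, 0.5)
    T = 800
    y = np.zeros(T)
    for t in range(1, T):
        p = sigmoid(g[0] + g[1] * abs(y[t-1]))
        k = 0 if rng.uniform() < p else 1
        y[t] = (phi[k] * y[t-1]
                + np.sqrt(w[k] + a[k] * y[t-1]**2) * rng.standard_normal())

    def negloglik(th):
        phi1, phi2, lw1, lw2, la1, la2, g0, g1 = th
        yl, yc = y[:-1], y[1:]
        p = sigmoid(g0 + g1 * np.abs(yl))
        f1 = norm.pdf(yc, phi1 * yl,
                      np.sqrt(np.exp(lw1) + np.exp(la1) * yl**2))
        f2 = norm.pdf(yc, phi2 * yl,
                      np.sqrt(np.exp(lw2) + np.exp(la2) * yl**2))
        return -np.sum(np.log(p * f1 + (1 - p) * f2 + 1e-300))

    th0 = np.array([0.5, -0.5, 0.0, 0.0, -1.0, -1.0, 0.0, 0.0])
    fit = minimize(negloglik, th0, method="Nelder-Mead",
                   options={"maxiter": 20000})
    print("phi estimates:", fit.x[:2])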

17.
We propose a new type of multivariate statistical model that permits non-Gaussian distributions as well as the inclusion of conditional independence assumptions specified by a directed acyclic graph. These models feature a specific factorisation of the likelihood that is based on pair-copula constructions and hence involves only univariate distributions and bivariate copulas, of which some may be conditional. We demonstrate maximum-likelihood estimation of the parameters of such models and compare them to various competing models from the literature. A simulation study investigates the effects of model misspecification and highlights the need for non-Gaussian conditional independence models. The proposed methods are finally applied to modeling financial return data. The Canadian Journal of Statistics 40: 86–109; 2012 © 2012 Statistical Society of Canada
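
The building block of a pair-copula construction is a single bivariate copula fitted to pseudo-observations. A sketch of maximum-likelihood estimation for one Gaussian pair (the paper chains such pairs along a directed acyclic graph, with some pairs conditional):

    import numpy as np
    from scipy.optimize import minimize_scalar
    from scipy.stats import norm, rankdata

    rng = np.random.default_rng(14)

    # Simulated bivariate returns; pseudo-observations via ranks
    n, rho_true = 1000, 0.6
    z = rng.multivariate_normal([0, 0], [[1, rho_true], [rho_true, 1]], n)
    u = rankdata(z[:, 0]) / (n + 1)
    v = rankdata(z[:, 1]) / (n + 1)

    def neg_ll(rho):
        """Gaussian copula log-density c(u, v; rho) on the Phi^{-1} scale."""
        x, y = norm.ppf(u), norm.ppf(v)
        num = 2 * rho * x * y - rho**2 * (x**2 + y**2)
        return -np.sum(-0.5 * np.log(1 - rho**2) + num / (2 * (1 - rho**2)))

    rho_hat = minimize_scalar(neg_ll, bounds=(-0.99, 0.99),
                              method="bounded").x
    print(f"pair-copula ML: rho_hat = {rho_hat:.3f} (true {rho_true})")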

18.
This paper develops the asymptotic theory for the estimation of smooth semiparametric generalized estimating equations models with weakly dependent data. The paper proposes new estimation methods based on smoothed two-step versions of the generalised method of moments and generalised empirical likelihood methods. An important aspect of the paper is that it allows the first-step estimation to have an effect on the asymptotic variances of the second-step estimators and explicitly characterises this effect for the empirically relevant case of the so-called generated regressors. The results of the paper are illustrated with a partially linear model that has not been previously considered in the literature. The proofs of the results utilise a new uniform strong law of large numbers and a new central limit theorem for U-statistics with varying kernels that are of independent interest.

19.
Multivariate logit models are convenient for describing multivariate correlated binary choices, as they provide closed-form likelihood functions. However, the computation time required for calculating choice probabilities increases exponentially with the number of choices, which makes maximum likelihood estimation infeasible when many choices are considered. To solve this, we propose three novel estimation methods: (i) stratified importance sampling, (ii) composite conditional likelihood (CCL), and (iii) the generalized method of moments, which yield consistent estimates and still have small-sample bias similar to that of maximum likelihood. Our simulation study shows that computation times for CCL are much smaller and that its efficiency loss is small.
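
Why composite conditional likelihood helps: the full likelihood of a multivariate binary choice model needs a normalizing sum over all 2^K choice patterns, while the conditional of one choice given the others is a plain logit with no normalizer. The sketch below uses an exchangeable quadratic-exponential binary model as a stand-in for the multivariate logit specification (enumeration is feasible here only because K is small); the paper's stratified importance sampling and GMM estimators are not shown.

    import numpy as np
    from itertools import product
    from scipy.optimize import minimize

    rng = np.random.default_rng(15)
    K = 4                                # small enough to enumerate 2^K
    theta_true = np.array([-0.5, 0.2, 0.0, 0.4])
    lam_true = 0.6                       # common pairwise association

    patterns = np.array(list(product([0, 1], repeat=K)))

    def logits_joint(theta, lam):
        s = patterns.sum(1)
        return patterns @ theta + lam * s * (s - 1) / 2

    # Exact sampling by enumeration (infeasible for large K -- exactly the
    # 2^K cost that composite conditional likelihood avoids).
    pr = np.exp(logits_joint(theta_true, lam_true))
    pr /= pr.sum()
    Y = patterns[rng.choice(len(patterns), size=2000, p=pr)]

    def neg_ccl(par):
        theta, lam = par[:K], par[K]
        # conditional logit of y_k given the other choices: no normaliser
        eta = theta + lam * (Y.sum(1, keepdims=True) - Y)
        return np.sum(np.log1p(np.exp(eta)) - Y * eta)

    fit = minimize(neg_ccl, np.zeros(K + 1), method="BFGS")
    print("theta_hat:", fit.x[:K].round(2), " lambda_hat:", round(fit.x[K], 2))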
