Similar Documents
20 similar documents found
1.
This article focuses on simulation-based inference for time-deformation models directed by a duration process. To better capture the heavy-tailed property of financial asset return series, the innovation of the observation equation is further assumed to follow a Student-t distribution. Suitable Markov chain Monte Carlo (MCMC) algorithms, which are hybrids of Gibbs and slice samplers, are proposed for estimating the parameters of these models. In the algorithms, the model parameters can be sampled either directly from known distributions or through an efficient slice sampler. The states are simulated one at a time using a Metropolis-Hastings method whose proposal distributions are drawn through a slice sampler. Simulation studies suggest that the extended models and accompanying MCMC algorithms work well in terms of parameter estimation and volatility forecasting.
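The slice-sampling step central to these algorithms can be illustrated with a minimal univariate sketch (not the authors' code; the heavy-tailed Student-t target with 5 degrees of freedom below is an illustrative stand-in):

```python
import math
import random

def slice_sample(logf, x0, n, w=1.0):
    """Univariate slice sampler with stepping-out and shrinkage (Neal, 2003)."""
    random.seed(1)
    xs, x = [], x0
    for _ in range(n):
        # draw the auxiliary "height" uniformly under the (unnormalised) density
        logy = logf(x) + math.log(random.random())
        # step out an interval that brackets the slice {x : logf(x) > logy}
        u = random.random()
        lo, hi = x - u * w, x + (1.0 - u) * w
        while logf(lo) > logy:
            lo -= w
        while logf(hi) > logy:
            hi += w
        # shrink the interval until a point inside the slice is accepted
        while True:
            x1 = random.uniform(lo, hi)
            if logf(x1) > logy:
                x = x1
                break
            if x1 < x:
                lo = x1
            else:
                hi = x1
        xs.append(x)
    return xs

# unnormalised Student-t log-density, df = 5, as a heavy-tailed target
df = 5.0
logt = lambda x: -0.5 * (df + 1.0) * math.log(1.0 + x * x / df)
samples = slice_sample(logt, 0.0, 5000)
```

Only log-density evaluations are needed, which is what makes the slice sampler a convenient building block inside larger Gibbs schemes.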

2.
Sun Yan, He Jianmin & Zhou Wei. Statistical Research (《统计研究》), 2011, 28(8): 103-110
The stochastic conditional duration (SCD) model effectively captures the dynamics of durations in ultra-high-frequency time series, but it assumes a fixed generating mechanism for the expected duration, and estimating its parameters is difficult. Without assuming a functional form for the conditional mean or a distribution for the innovation term, this paper combines kernel estimation methods to propose a nonparametric SCD model together with an iterative procedure for fitting it. Using simulated data generated from a TEACD(1,1) model, the nonparametric SCD model is then compared with a parametric SCD model estimated by quasi-maximum likelihood (QML) via the Kalman filter and with a parametric SCD model estimated by Markov chain Monte Carlo (MCMC) via Gibbs sampling. The results show that, in large samples, the fit of the nonparametric SCD model differs little from that of the MCMC-estimated parametric model but is clearly better than that of the QML-estimated parametric model; moreover, the nonparametric SCD model can inform the parameter specification of parametric SCD models.

3.
This paper conducts a simulation-based comparison of several stochastic volatility models with leverage effects. Two new variants of asymmetric stochastic volatility models, which apply a logarithmic transformation to the squared asset returns, are proposed. The leverage effect is introduced into the model through correlation either between the innovations of the observation equation and the latent process, or between the logarithm of squared asset returns and the latent process. Suitable Markov chain Monte Carlo algorithms are developed for parameter estimation and model comparison. Simulation results show that the proposed formulation of the leverage effect and the accompanying inference methods give rise to reasonable parameter estimates. Applications to two data sets uncover a negative correlation (which can be interpreted as a leverage effect) between the observed returns and volatilities, and a negative correlation between the logarithm of squared returns and volatilities.
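The logarithmic transformation of squared returns underlying these models can be checked with a quick generic simulation (not the paper's code): writing y_t = σ_t ε_t gives log y_t² = h_t + log ε_t², and for Gaussian ε the transformed noise log ε² has mean ψ(1/2) + log 2 ≈ -1.2704, the offset a linearised state-space form must absorb.

```python
import math
import random

random.seed(0)

# y_t = sigma_t * eps_t  implies  log y_t^2 = h_t + log eps_t^2,
# so the transformed observation noise is log eps_t^2 (non-Gaussian, left-skewed)
n = 50000
log_eps_sq = [math.log(random.gauss(0.0, 1.0) ** 2) for _ in range(n)]
mean_offset = sum(log_eps_sq) / n  # theory: digamma(1/2) + log 2 = -1.2704...
```

This is why the transformed model has a non-Gaussian observation equation, motivating the MCMC treatment described in the abstract.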

4.
Factor models, structural equation models (SEMs) and random-effect models share the common feature that they assume latent or unobserved random variables. Factor models and SEMs offer well-developed procedures for a rich class of covariance models with many parameters, while random-effect models offer well-developed procedures for non-normal models, including heavy-tailed distributions for responses and random effects. In this paper, we show how these two developments can be combined to yield an extremely rich class of models, which can be beneficial to both areas. A new fitting procedure for binary factor models and a robust estimation approach for continuous factor models are proposed.

5.
Spatial data and nonparametric methods arise frequently in studies across many fields, and it is common practice to analyze such data with semi-parametric spatial autoregressive (SPSAR) models. We propose estimation of SPSAR models based on maximum likelihood estimation (MLE) and kernel estimation. The spatial regression coefficient ρ is estimated by optimizing the concentrated log-likelihood function with respect to ρ. Furthermore, under appropriate conditions, we derive the limiting distributions of our estimators for both the parametric and nonparametric components of the model.

6.
In this paper we study estimation of the joint conditional distributions of multivariate longitudinal outcomes using regression models and copulas. For the marginal models, we consider a class of time-varying transformation models and combine the two marginal models using nonparametric empirical copulas. Our models and estimation method can be applied in many situations where conditional mean-based models are not adequate. Empirical copulas combined with time-varying transformation models allow quite flexible modelling of the joint conditional distributions of multivariate longitudinal data. We derive the asymptotic properties of the copula-based estimators of the joint conditional distribution functions. For illustration, we apply our estimation method to an epidemiological study of childhood growth and blood pressure.
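The empirical copula used to join the marginal models is computed from ranks; a minimal unconditional sketch (our own illustration, without the covariate conditioning the paper develops):

```python
import random

random.seed(8)
n = 2000
# a toy bivariate sample with positive dependence: y = x + noise
x = [random.gauss(0.0, 1.0) for _ in range(n)]
y = [xi + random.gauss(0.0, 1.0) for xi in x]

def pseudo_obs(v):
    """Ranks rescaled to (0, 1): the empirical marginal transform."""
    order = sorted(range(len(v)), key=lambda i: v[i])
    r = [0.0] * len(v)
    for rank, i in enumerate(order, start=1):
        r[i] = rank / (len(v) + 1.0)
    return r

u, w = pseudo_obs(x), pseudo_obs(y)

def emp_copula(a, b):
    # empirical copula: proportion of pairs with both pseudo-observations below (a, b)
    return sum(1 for ui, wi in zip(u, w) if ui <= a and wi <= b) / n

c55 = emp_copula(0.5, 0.5)  # exceeds the independence value 0.25 here
```

Because only ranks enter, the copula estimate is invariant to the monotone marginal transformations that the time-varying transformation models supply.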

7.
M-quantile models with application to poverty mapping
Over the last decade there has been growing demand for estimates of population characteristics at small area level. Unfortunately, cost constraints in the design of sample surveys lead to small sample sizes within these areas, and as a result direct estimation, using only the survey data, is inappropriate since it yields estimates with unacceptable levels of precision. Small area models are designed to tackle the small sample size problem. The most popular class of models for small area estimation is random effects models that include random area effects to account for between-area variation. However, such models also depend on strong distributional assumptions, require a formal specification of the random part of the model and do not easily allow for outlier-robust inference. An alternative approach to small area estimation based on the use of M-quantile models was recently proposed by Chambers and Tzavidis (Biometrika 93(2):255–268, 2006) and Tzavidis and Chambers (Robust prediction of small area means and distributions. Working paper, 2007). Unlike traditional random effects models, M-quantile models do not depend on strong distributional assumptions and automatically provide outlier-robust inference. In this paper we illustrate for the first time how M-quantile models can be practically employed for deriving small area estimates of poverty and inequality. The methodology we propose improves on traditional poverty mapping methods in the following ways: (a) it enables the estimation of the distribution function of the study variable within the small area of interest under both an M-quantile and a random effects model, (b) it provides analytical, instead of empirical, estimation of the mean squared error of the M-quantile small area mean estimates and (c) it employs an outlier-robust estimation method.
The methodology is applied to data from the 2002 Living Standards Measurement Survey (LSMS) in Albania for estimating (a) district-level estimates of the incidence of poverty in Albania, (b) district-level inequality measures and (c) the distribution function of household per-capita consumption expenditure in each district. Small area estimates of poverty and inequality show that the poorest Albanian districts are in the mountainous regions (north and north east), with the wealthiest districts, which are also linked with high levels of inequality, in the coastal (south west) and southern parts of the country. We discuss the practical advantages of our methodology and note the consistency of our results with results from previous studies. We further demonstrate the usefulness of the M-quantile estimation framework through design-based simulations based on two realistic survey data sets containing small area information, and show that the M-quantile approach may be preferable when the aim is to estimate the small area distribution function.

8.
Shared frailty models allow for unobserved heterogeneity or statistical dependence between observed survival data. The most commonly used estimation procedure in frailty models is the EM algorithm, but this approach yields a discrete estimator of the distribution and consequently does not allow direct estimation of the hazard function. We show how maximum penalized likelihood estimation can be applied to nonparametric estimation of a continuous hazard function in a shared gamma-frailty model with right-censored and left-truncated data. We examine the problem of obtaining variance estimators for regression coefficients, the frailty parameter and baseline hazard functions. Some simulations of the proposed estimation procedure are presented. A prospective cohort (Paquid) with grouped survival data serves to illustrate the method, which was used to analyze the relationship between environmental factors and the risk of dementia.

9.
Standard econometric methods can overlook individual heterogeneity in empirical work, generating inconsistent parameter estimates in panel data models. We propose methods that allow researchers to easily identify, quantify, and address estimation issues arising from individual slope heterogeneity. We first characterize the bias in the standard fixed effects estimator when the true econometric model allows for heterogeneous slope coefficients. We then introduce a new test to check whether fixed effects estimation is subject to heterogeneity bias. The procedure tests the population moment conditions required for fixed effects to consistently estimate the relevant parameters in the model. We establish the limiting distribution of the test and show that it is very simple to implement in practice. Examining firm investment models to showcase our approach, we show that heterogeneity-bias-robust methods identify cash flow as a more important driver of investment than previously reported. Our study demonstrates analytically, via simulations, and empirically the importance of carefully accounting for individual-specific slope heterogeneity in drawing conclusions about economic behavior.

10.
We propose a Bayesian semiparametric methodology for quantile regression modelling. In particular, working with parametric quantile regression functions, we develop Dirichlet process mixture models for the error distribution in an additive quantile regression formulation. The proposed non-parametric prior probability models allow the shape of the error density to adapt to the data and thus provide more reliable predictive inference than models based on parametric error distributions. We consider extensions to quantile regression for data sets that include censored observations. Moreover, we employ dependent Dirichlet processes to develop quantile regression models that allow the error distribution to change non-parametrically with the covariates. Posterior inference is implemented using Markov chain Monte Carlo methods. We assess and compare the performance of our models using both simulated and real data sets.

11.
We display pseudo-likelihood as a special case of a general estimation technique based on proper scoring rules. Such a rule supplies an unbiased estimating equation for any statistical model, and this can be extended to allow for missing data. When the scoring rule has a simple local structure, as in many spatial models, the need to compute problematic normalising constants is avoided. We illustrate the approach through an analysis of data on disease in bell pepper plants.
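A minimal sketch of the pseudo-likelihood idea for a spatial model (our own illustrative example using an Ising model on a small torus, not the bell-pepper analysis): the product of full conditionals involves no intractable normalising constant, so the interaction parameter can be estimated by direct maximisation.

```python
import math
import random

random.seed(2)
L = 20            # lattice side; 400 sites on a torus
beta_true = 0.3   # true interaction strength (subcritical)

def neighbours_sum(s, i, j):
    return (s[(i - 1) % L][j] + s[(i + 1) % L][j]
            + s[i][(j - 1) % L] + s[i][(j + 1) % L])

# simulate one Ising configuration by Gibbs sampling from the full conditionals
s = [[random.choice([-1, 1]) for _ in range(L)] for _ in range(L)]
for _ in range(300):  # sweeps
    for i in range(L):
        for j in range(L):
            h = beta_true * neighbours_sum(s, i, j)
            p_up = 1.0 / (1.0 + math.exp(-2.0 * h))  # P(s_ij = +1 | rest)
            s[i][j] = 1 if random.random() < p_up else -1

def pseudo_loglik(beta):
    # sum of log full-conditionals: local structure, no partition function
    ll = 0.0
    for i in range(L):
        for j in range(L):
            h = beta * neighbours_sum(s, i, j)
            ll += s[i][j] * h - math.log(2.0 * math.cosh(h))
    return ll

# maximum pseudo-likelihood estimate by grid search over beta
beta_hat = max((b / 100.0 for b in range(0, 81)), key=pseudo_loglik)
```

Each term in `pseudo_loglik` depends only on a site and its four neighbours, which is exactly the "simple local structure" that makes scoring-rule estimation tractable for spatial models.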

12.
Left-truncated and right-censored (LTRC) data are encountered frequently in follow-up studies that use prevalent cohort sampling. Because the distribution of survival time is typically skewed, quantile regression is a useful alternative to the Cox proportional hazards model and the accelerated failure time model for survival analysis. In this paper, we apply the quantile regression model to LTRC data and develop an unbiased estimating equation for the regression coefficients. The proposed estimation method uses an inverse-probability-of-truncation-and-censoring weighting technique. The resulting estimator is uniformly consistent and asymptotically normal. The finite-sample performance of the proposed estimation method is evaluated using extensive simulation studies. Finally, an analysis of real data is presented to illustrate the proposed method.
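The inverse-probability weighting that drives the estimating equation can be shown in a toy form (our own illustration: a known censoring distribution and a marginal median rather than regression coefficients, and no truncation): uncensored observations are up-weighted by the inverse censoring survival probability, which unbiases the empirical distribution function.

```python
import math
import random

random.seed(3)
n = 5000
lam_c = 0.3  # censoring hazard, treated as known only for this sketch

data = []
for _ in range(n):
    t = random.expovariate(1.0)    # true event time, Exp(1)
    c = random.expovariate(lam_c)  # right-censoring time
    data.append((min(t, c), 1 if t <= c else 0))  # (observed time, event indicator)

# IPCW estimating equation for the median: find the smallest y with
# (1/n) * sum_i delta_i * 1{Y_i <= y} / G(Y_i) >= 0.5, where G(y) = exp(-lam_c * y)
data.sort(key=lambda p: p[0])
cum, q_hat = 0.0, None
for y, d in data:
    cum += d * math.exp(lam_c * y)  # weight 1/G(y) for uncensored points
    if cum / n >= 0.5:
        q_hat = y
        break
# true median of Exp(1) is log 2 ~ 0.693
```

In the paper's setting, the censoring (and truncation) probabilities are estimated rather than known, and the same weighting enters a regression estimating equation instead of a marginal quantile.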

13.
Differential equations have been used in statistics to define functions such as probability densities. But the idea of using differential equation formulations of stochastic models has a much wider scope. The author gives several examples, including simultaneous estimation of a regression model and residual density, monotone smoothing, specification of a link function, differential equation models of data, and smoothing over complicated multidimensional domains. This paper aims to stimulate interest in this approach to functional estimation problems, rather than provide carefully worked out methods.

14.
We construct an integer-valued stationary symmetric AR(1) process which can have either a positive or a negative lag-one autocorrelation. Nearly all integer-valued time series models are designed for observations which are non-negative integers or counts; their innovations are distributed on the non-negative integers and are therefore non-symmetric. We build our model using innovations given by the difference of two independent identically distributed Poisson random variables. These innovations have a symmetric distribution, which has many advantages; in particular, it allows us to model negative correlations. For our AR(1) process, we examine its basic properties and consider estimation via conditional least squares.
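The innovation distribution described here, the difference of two i.i.d. Poisson variables (a Skellam distribution), can be simulated directly. The standalone sketch below (not the authors' code, which also specifies the AR(1) recursion) checks the two properties the abstract relies on: mean 0 and variance 2λ.

```python
import math
import random

random.seed(4)

def poisson(lam):
    """Knuth's multiplicative inversion sampler for small rates."""
    limit = math.exp(-lam)
    k, p = 0, 1.0
    while p > limit:
        k += 1
        p *= random.random()
    return k - 1

lam = 2.0
n = 20000
# symmetric integer-valued innovations: eps = N1 - N2 with N1, N2 ~ Poisson(lam)
eps = [poisson(lam) - poisson(lam) for _ in range(n)]
mean = sum(eps) / n                          # theory: 0
var = sum((e - mean) ** 2 for e in eps) / n  # theory: 2 * lam = 4
```

Because the innovation support covers all integers symmetrically, the resulting AR(1) process can take negative values and carry negative autocorrelation, which count-valued innovations cannot deliver.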

15.
We develop a hierarchical Gaussian process model for forecasting and inference of functional time series data. Unlike existing methods, our approach is especially suited for sparsely or irregularly sampled curves and for curves sampled with nonnegligible measurement error. The latent process is dynamically modeled as a functional autoregression (FAR) with Gaussian process innovations. We propose a fully nonparametric dynamic functional factor model for the dynamic innovation process, with broader applicability and improved computational efficiency over standard Gaussian process models. We prove finite-sample forecasting and interpolation optimality properties of the proposed model, which remain valid with the Gaussian assumption relaxed. An efficient Gibbs sampling algorithm is developed for estimation, inference, and forecasting, with extensions for FAR(p) models with model averaging over the lag p. Extensive simulations demonstrate substantial improvements in forecasting performance and recovery of the autoregressive surface over competing methods, especially under sparse designs. We apply the proposed methods to forecast nominal and real yield curves using daily U.S. data. Real yields are observed more sparsely than nominal yields, yet the proposed methods are highly competitive in both settings. Supplementary materials, including R code and the yield curve data, are available online.

16.
Multivariate Poisson regression with covariance structure
In recent years applications of multivariate Poisson models have increased, mainly because of the gradual increase in computer performance. The multivariate Poisson model used in practice is based on a common covariance term for all pairs of variables. This is rather restrictive and does not allow modelling the covariance structure of the data in a flexible way. In this paper we propose inference for a multivariate Poisson model with a richer structure, i.e. a different covariance term for each pair of variables. Both maximum likelihood and Bayesian estimation methods are proposed. Both are based on a data augmentation scheme that reflects the multivariate-reduction derivation of the joint probability function. To enlarge the applicability of the model we allow for covariates in the specification of both the mean and the covariance parameters. An extension to models with complete structure, with many multi-way covariance terms, is discussed. The method is demonstrated by analyzing a real-life data set.
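The multivariate-reduction construction behind the data augmentation can be sketched in two dimensions (a generic illustration with hypothetical rates, not the paper's code): setting X1 = Y1 + Y12 and X2 = Y2 + Y12 with independent Poisson components makes the shared term Y12 carry the covariance, Cov(X1, X2) = λ12.

```python
import math
import random

random.seed(5)

def poisson(lam):
    """Knuth's multiplicative inversion sampler for small rates."""
    limit = math.exp(-lam)
    k, p = 0, 1.0
    while p > limit:
        k += 1
        p *= random.random()
    return k - 1

lam1, lam2, lam12 = 1.5, 2.0, 1.0  # hypothetical rates for the sketch
n = 20000
xs = []
for _ in range(n):
    y12 = poisson(lam12)  # shared component: the source of the covariance
    xs.append((poisson(lam1) + y12, poisson(lam2) + y12))

m1 = sum(x1 for x1, _ in xs) / n  # theory: lam1 + lam12 = 2.5
m2 = sum(x2 for _, x2 in xs) / n  # theory: lam2 + lam12 = 3.0
cov = sum((x1 - m1) * (x2 - m2) for x1, x2 in xs) / n  # theory: lam12 = 1.0
```

The paper's richer model assigns a separate shared term to each pair of variables; augmenting the data with these latent components is what makes both the ML and Bayesian schemes tractable.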

17.
A. Baccini, M. Fekri & J. Fine. Statistics, 2013, 47(4): 267-300
Different sorts of bilinear models (models with bilinear interaction terms) are currently used when analyzing contingency tables: association models, correlation models, and so on. All of these can be included in a general family of bilinear models: power models. In this framework, maximum likelihood (ML) estimation is not always possible, as explained in an introductory example; thus, generalized least squares (GLS) estimation is sometimes needed to estimate parameters. A subclass of power models is then considered in this paper: separable reduced-rank (SRR) models. They allow an optimal choice of weights for GLS estimation and simplify asymptotic studies of GLS estimators. Power-2 models belong to the subclass of SRR models, and the asymptotic properties of their GLS estimators are established. Similar results are also established for association models that are not SRR models, although these results are more difficult to prove. Finally, two examples are considered to illustrate our results.

18.
Increasingly complex generative models are being used across disciplines as they allow for realistic characterization of data, but a common difficulty with them is the prohibitively large computational cost to evaluate the likelihood function and thus to perform likelihood-based statistical inference. A likelihood-free inference framework has emerged where the parameters are identified by finding values that yield simulated data resembling the observed data. While widely applicable, a major difficulty in this framework is how to measure the discrepancy between the simulated and observed data. Transforming the original problem into a problem of classifying the data into simulated versus observed, we find that classification accuracy can be used to assess the discrepancy. The complete arsenal of classification methods becomes thereby available for inference of intractable generative models. We validate our approach using theory and simulations for both point estimation and Bayesian inference, and demonstrate its use on real data by inferring an individual-based epidemiological model for bacterial infections in child care centers.
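The classifier-based discrepancy can be sketched with a toy example (our own construction using a trivial nearest-class-mean classifier, far simpler than the classifiers the framework allows): when the simulator parameter matches the data-generating one, held-out accuracy for separating observed from simulated data stays near chance (0.5); a mismatched parameter pushes it higher.

```python
import random

random.seed(6)

def accuracy_discrepancy(obs, sim):
    """Held-out accuracy of a nearest-class-mean classifier on obs-vs-sim labels."""
    half = len(obs) // 2
    tr_o, te_o = obs[:half], obs[half:]
    tr_s, te_s = sim[:half], sim[half:]
    mo = sum(tr_o) / len(tr_o)  # class mean learned from "observed" training half
    ms = sum(tr_s) / len(tr_s)  # class mean learned from "simulated" training half
    correct = sum(1 for x in te_o if abs(x - mo) < abs(x - ms))
    correct += sum(1 for x in te_s if abs(x - ms) < abs(x - mo))
    return correct / (len(te_o) + len(te_s))

n = 2000
observed = [random.gauss(0.0, 1.0) for _ in range(n)]  # "real" data, theta = 0

sim_good = [random.gauss(0.0, 1.0) for _ in range(n)]  # simulator at the true theta
sim_bad = [random.gauss(2.0, 1.0) for _ in range(n)]   # simulator at a wrong theta

acc_good = accuracy_discrepancy(observed, sim_good)  # near 0.5: indistinguishable
acc_bad = accuracy_discrepancy(observed, sim_bad)    # well above chance
```

Minimising this accuracy over candidate parameter values then plays the role of minimising a conventional distance between summary statistics.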

19.
Quantile regression methods are emerging as a popular technique in econometrics and biometrics for exploring the distribution of duration data. This paper discusses quantile regression for duration analysis allowing for a flexible specification of the functional relationship and of the error distribution. Censored quantile regression addresses the issue of right censoring of the response variable, which is common in duration analysis. We compare quantile regression to standard duration models. Quantile regression does not impose a proportional effect of the covariates on the hazard over the duration time. However, the method cannot accommodate time-varying covariates and has not so far been extended to allow for unobserved heterogeneity and competing risks. We also discuss how hazard rates can be estimated using quantile regression methods. Due to space constraints, we had to omit the details of the empirical application; these can be found in the long version of this paper, Fitzenberger and Wilke (2005).

20.
Stochastic models are of fundamental importance in many scientific and engineering applications. For example, stochastic models provide valuable insights into the causes and consequences of intra-cellular fluctuations and inter-cellular heterogeneity in molecular biology. The chemical master equation can be used to model intra-cellular stochasticity in living cells, but analytical solutions are rare and numerical simulations are computationally expensive. Inference of system trajectories and estimation of model parameters from observed data are important tasks and are even more challenging. Here, we consider the case where the observed data are aggregated over time. Aggregation of data over time is required in studies of single cell gene expression using a luciferase reporter, where the emitted light can be very faint and is therefore collected for several minutes for each observation. We show how an existing approach to inference based on the linear noise approximation (LNA) can be generalised to the case of temporally aggregated data. We provide a Kalman filter (KF) algorithm which can be combined with the LNA to carry out inference of system variable trajectories and estimation of model parameters. We apply and evaluate our method on both synthetic and real data scenarios and show that it is able to accurately infer the posterior distribution of model parameters in these examples. We demonstrate how applying standard KF inference to aggregated data without accounting for aggregation will tend to underestimate the process noise and can lead to biased parameter estimates.
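The Kalman filter recursion that the aggregated-data extension builds on can be written compactly for a scalar AR(1) state (a generic sketch of the standard filter with illustrative parameter values, without the aggregation correction the paper derives):

```python
import random

random.seed(7)
phi, q, r = 0.9, 0.5, 1.0  # state AR coefficient, state and observation variances

# simulate a latent AR(1) state observed with additive Gaussian noise
n = 500
x, xs, ys = 0.0, [], []
for _ in range(n):
    x = phi * x + random.gauss(0.0, q ** 0.5)
    xs.append(x)
    ys.append(x + random.gauss(0.0, r ** 0.5))

# standard scalar Kalman filter: predict, then update with the new observation
m, p = 0.0, 1.0  # filtering mean and variance
filtered = []
for y in ys:
    m, p = phi * m, phi * phi * p + q          # predict step
    k = p / (p + r)                            # Kalman gain
    m, p = m + k * (y - m), (1.0 - k) * p      # update step
    filtered.append(m)

mse_raw = sum((y - x) ** 2 for y, x in zip(ys, xs)) / n
mse_filt = sum((m - x) ** 2 for m, x in zip(filtered, xs)) / n
# the filter estimates should beat the raw observations: mse_filt < mse_raw
```

In the paper's setting, each observation is instead an average of the state over a collection window; the point of the warning in the abstract is that running exactly this standard recursion on such averaged data understates the process noise q.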


Copyright © Beijing Qinyun Technology Development Co., Ltd. (北京勤云科技发展有限公司)  京ICP备09084417号