Similar Documents
20 similar documents found (search time: 296 ms)
1.
This paper complements a recently published study (Janczura and Weron in AStA-Adv Stat Anal 96(3):385–407, 2012) on efficient estimation of Markov regime-switching models. Here, we propose a new goodness-of-fit testing scheme for the marginal distribution of such models. We consider models with an observable (like threshold autoregressions) as well as a latent state process (like Markov regime-switching). The test is based on the Kolmogorov–Smirnov supremum-distance statistic and the concept of the weighted empirical distribution function. The motivation for this research comes from a recent stream of literature in energy economics concerning electricity spot price models. While the existence of distinct regimes in such data is generally unquestionable (due to the supply stack structure), the actual goodness-of-fit of the models requires statistical validation. We illustrate the proposed scheme by testing whether commonly used Markov regime-switching models fit deseasonalized electricity prices from the NEPOOL (US) day-ahead market.
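The test statistic described above combines the Kolmogorov–Smirnov supremum distance with a weighted empirical distribution function. A minimal sketch follows; it uses simulated data, a standard normal reference CDF, and equal weights as stand-ins, so it is an illustration of the statistic only, not the paper's estimation pipeline (there, the weights would come from, e.g., smoothed regime probabilities of a regime-switching fit).

```python
import numpy as np
from scipy.stats import norm

def weighted_ks_statistic(x, weights, cdf):
    """Sup-distance between a weighted EDF and a reference CDF.

    weights: per-observation weights (e.g. probabilities of belonging to
    the tested regime); they are normalized to sum to 1 internally.
    """
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()
    order = np.argsort(x)
    x_sorted = np.asarray(x)[order]
    edf = np.cumsum(w[order])          # weighted EDF value at each sorted point
    f = cdf(x_sorted)                  # reference CDF at the same points
    # the sup is attained at a jump point, on one side of an EDF step
    edf_left = np.concatenate(([0.0], edf[:-1]))
    return np.max(np.maximum(np.abs(edf - f), np.abs(edf_left - f)))

rng = np.random.default_rng(0)
x = rng.standard_normal(500)
w = np.ones_like(x)                    # equal weights reduce to the classical EDF
d = weighted_ks_statistic(x, w, norm.cdf)
```

With equal weights this reproduces the classical two-sided KS statistic; unequal weights downweight observations unlikely to belong to the regime under test.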

2.
We propose a new estimator for the spot covariance matrix of a multi-dimensional continuous semimartingale log asset price process, which is subject to noise and nonsynchronous observations. The estimator is constructed based on a local average of block-wise parametric spectral covariance estimates. The latter originate from a local method of moments (LMM), which was recently introduced by Bibinger et al. We prove consistency and a point-wise stable central limit theorem for the proposed spot covariance estimator in a very general setup with stochastic volatility, leverage effects, and general noise distributions. Moreover, we extend the LMM estimator to be robust against autocorrelated noise and propose a method to adaptively infer the autocorrelations from the data. Based on simulations, we provide empirical guidance on the effective implementation of the estimator and apply it to high-frequency data of a cross-section of Nasdaq blue chip stocks. Employing the estimator to estimate spot covariances, correlations, and volatilities in normal but also unusual periods yields novel insights into intraday covariance and correlation dynamics. We show that intraday (co-)variations (i) follow underlying periodicity patterns, (ii) reveal substantial intraday variability associated with (co-)variation risk, and (iii) can increase strongly and nearly instantaneously if new information arrives. Supplementary materials for this article are available online.

3.
Fairly general rational expectation (RE) models are proved to have linear vector autoregressive moving average (VARMA) models as their reduced forms, using Muth's method of undetermined coefficients (MUC). An advantage of using VARMAs is that RE estimation can benefit from the well-developed theory of solutions of stochastic difference equations and from computer packages for VARMA and state space (SS) models. For example, we report theoretical derivations of explicit dynamics associated with the RE structure for Muth's and Lucas-Sargent-Wallace's models, suggesting generalizations and new solutions. We illustrate RE structure estimation using energy prices and Fackler and Krieger's (J. Bus. Econom. Statist. 4 (1986), 71–80) study of five major macroeconomic variables.

4.
We use several classical and Bayesian models to forecast employment for eight sectors of the US economy. In addition to standard vector autoregressive and Bayesian vector autoregressive models, we augment some models with the information content of 143 additional monthly series. Several approaches exist for incorporating information from a large number of series. We consider two multivariate approaches: extracting common factors (principal components) and Bayesian shrinkage. After extracting the common factors, we use Bayesian factor-augmented vector autoregressive and vector error-correction models, as well as Bayesian shrinkage in a large-scale Bayesian vector autoregressive model. For an in-sample period of January 1972 to December 1989 and an out-of-sample period of January 1990 to March 2010, we compare the forecast performance of the alternative models. More specifically, we perform ex-post and ex-ante out-of-sample forecasts from January 1990 through March 2009 and from April 2009 through March 2010, respectively. We find that factor-augmented models, especially the error-correction versions, generally prove best in out-of-sample forecast performance, implying that, in addition to macroeconomic variables, incorporating long-run relationships along with short-run dynamics plays an important role in forecasting employment. Forecast combination models, however, based on the simple average of the forecasts of the various models used, outperform the best-performing individual models for six of the eight sectoral employment series.
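The factor-augmentation step described above can be sketched in a few lines: extract principal-component factors from a large panel and append them to the target variables of a VAR estimated by OLS. All data below are simulated stand-ins (the panel does not reflect the 143 monthly series used in the study), and the helper names are illustrative.

```python
import numpy as np

def extract_factors(panel, k):
    """First k principal-component factor estimates from a (T x N) panel,
    computed via the SVD of the standardized data."""
    z = (panel - panel.mean(0)) / panel.std(0)
    u, s, vt = np.linalg.svd(z, full_matrices=False)
    return u[:, :k] * s[:k]                          # T x k factors

def var_ols(y, p):
    """Equation-by-equation OLS for a VAR(p): regress y_t on a constant
    and p lags of all variables."""
    T, n = y.shape
    X = np.array([np.concatenate([[1.0]] + [y[t - j] for j in range(1, p + 1)])
                  for t in range(p, T)])
    B, *_ = np.linalg.lstsq(X, y[p:], rcond=None)
    return B                                          # (1 + n*p) x n coefficients

rng = np.random.default_rng(1)
big_panel = rng.standard_normal((200, 40))            # stand-in for the large panel
factors = extract_factors(big_panel, 3)
target = rng.standard_normal((200, 2))                # stand-in for sectoral employment
favar_data = np.hstack([target, factors])             # factor-augmented system
B = var_ols(favar_data, 2)
```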

5.
Summary: In this paper the complexity of high dimensional data with cyclical variation is reduced using analysis of variance and factor analysis. It is shown that predicting a small number of main cyclical factors is more useful than forecasting all the time points separately, as is usually done by seasonal time series models. As an example of this approach, we analyze the quarter-hourly electricity demand of industrial customers in Germany. The necessity of such predictions results from the liberalization of the German electricity market in 1998, following the legal requirements of the EC in 1996.

6.
In this paper, we present a Bayesian analysis of double seasonal autoregressive moving average models. We first consider the problem of estimating the unknown lagged errors in the moving average part using a nonlinear least squares method, and then, using natural conjugate and Jeffreys’ priors, we approximate the marginal posterior distributions of the model coefficients and precision by multivariate t and gamma distributions, respectively. We evaluate the proposed Bayesian methodology in a simulation study and apply it to real-world hourly electricity load data sets.

7.

Two-piece location-scale models are used for modeling data exhibiting departures from symmetry. In this paper, we propose an objective Bayesian methodology for the tail parameter of two particular distributions of the above family: the skewed exponential power distribution and the skewed generalised logistic distribution. We apply the proposed objective approach to time series models and linear regression models where the error terms follow the distributions under study. The performance of the proposed approach is illustrated through simulation experiments and real data analysis. The methodology yields improvements in density forecasts, as shown by our analysis of electricity prices in Nordpool markets.


8.
The regression model with an ordinal outcome is widely used in many fields because of its practical effectiveness. Predictors measured with error and multicollinearity are long-standing problems that often arise in regression analysis. However, there are few studies on measurement error models with an ordinal response, and even fewer when multicollinearity is also present. The purpose of this article is to estimate the parameters of ordinal probit models with measurement error and multicollinearity. First, we propose using regression calibration and refined regression calibration to estimate parameters in ordinal probit models with measurement error. Second, we develop new methods to obtain estimators of the parameters in the presence of both multicollinearity and measurement error in the ordinal probit model. Furthermore, we extend all the methods to quadratic ordinal probit models and discuss the corresponding situation for ordinal logistic models. These estimators are consistent and asymptotically normally distributed under general conditions. They are easy to compute, perform well, and are robust against the normality assumption for the predictor variables in our simulation studies. The proposed methods are applied to several real datasets.

9.
In this paper, we study M-estimators of regression parameters in semiparametric linear models for censored data. A class of consistent and asymptotically normal M-estimators is constructed. A resampling method is developed for the estimation of the asymptotic covariance matrix of the estimators.

10.
In this article, we define a new method (Si-GARCH) for signal segmentation based on a class of models from econometrics. We use these models not to perform prediction but to characterize portions of signals. This enables us to compare these portions in order to determine whether there is a change in the signal’s dynamics, and to define breaking points with the aim of segmenting the signal according to its dynamics. We then extend these models by defining a new coefficient to improve their accuracy. The Si-GARCH method was tested on several thousand hours of biomedical signals from intensive care units.

11.
Calibration in macroeconomics involves choosing free parameters by matching certain moments of simulated models with those of data. We formally examine this method by treating the process of calibration as an econometric estimator. A numerical version of the Mehra-Prescott (1985) economy is the setting for an evaluation of calibration estimators via Monte Carlo methods. While these estimators sometimes have reasonable finite-sample properties, they are not robust to mistakes in setting non-free parameters. In contrast, generalized method-of-moments (GMM) estimators have satisfactory finite-sample characteristics, quick convergence, and less stringent informational requirements than calibration estimators. For dynamic equilibrium models in which GMM is infeasible, we offer some suggestions for improving estimates based on the calibration methodology.
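As a toy contrast with calibration, the GMM idea of matching model moments to data moments can be sketched for the simplest case: an i.i.d. normal sample with two moment conditions. This is a just-identified illustration with an identity weighting matrix, not the Mehra-Prescott exercise from the paper, and all names are illustrative.

```python
import numpy as np
from scipy.optimize import minimize

def gmm_normal(data):
    """GMM for (mu, sigma) of an i.i.d. normal sample: minimize g'g,
    where g stacks the first two moment conditions. The scale is
    parameterized as log(sigma) to keep it positive."""
    def objective(theta):
        mu, log_sigma = theta
        s2 = np.exp(2.0 * log_sigma)
        g = np.array([np.mean(data) - mu,
                      np.mean((data - mu) ** 2) - s2])
        return g @ g                       # identity weighting matrix
    res = minimize(objective, x0=np.array([1.0, 1.0]), method="Nelder-Mead",
                   options={"xatol": 1e-8, "fatol": 1e-12, "maxiter": 5000})
    return res.x[0], float(np.exp(res.x[1]))

rng = np.random.default_rng(3)
sample = rng.normal(1.5, 2.0, size=4000)
mu_hat, sigma_hat = gmm_normal(sample)
```

In the just-identified case GMM reduces to the method of moments, so the minimized objective is essentially zero; over-identified settings additionally require a choice of weighting matrix.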

12.
In this paper, we consider a multivariate linear model with complete/incomplete data, where the regression coefficients are subject to a set of linear inequality restrictions. We first develop an expectation/conditional maximization (ECM) algorithm for calculating restricted maximum likelihood estimates of parameters of interest. We then establish the corresponding convergence properties for the proposed ECM algorithm. Applications to growth curve models and linear mixed models are presented. Confidence interval construction via the double-bootstrap method is provided. Some simulation studies are performed and a real example is used to illustrate the proposed methods.

14.
Empirical likelihood ratio confidence regions based on the chi-square calibration suffer from an undercoverage problem: their actual coverage levels tend to be lower than the nominal levels. The finite-sample distribution of the empirical log-likelihood ratio is recognized to have a mixture structure, with a continuous component on [0, +∞) and a point mass at +∞. The undercoverage problem of the chi-square calibration is partly due to its use of the continuous chi-square distribution to approximate this mixture distribution. In this article, we propose two new methods of calibration that take advantage of the mixture structure; we construct two new mixture distributions using the F and chi-square distributions and use them to approximate the mixture distribution of the empirical log-likelihood ratio. The new methods of calibration are asymptotically equivalent to the chi-square calibration, but they, and in particular the F-mixture-based method, can be substantially more accurate for small and moderately large sample sizes. The new methods are also as easy to use as the chi-square calibration.
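To make the mixture idea concrete, here is a hedged sketch of a mixture-calibrated critical value under the simplest structural assumption: the log-likelihood ratio is chi-square distributed with weight 1 − q and sits at +∞ with weight q (the event that the parameter falls outside the convex hull of the data). The function name and the use of a chi-square (rather than F) continuous component are illustrative choices, not the paper's exact construction.

```python
from scipy.stats import chi2

def mixture_calibrated_cutoff(alpha, q, df=1):
    """Critical value c for a mixture of chi2(df) (weight 1-q) and a
    point mass at +infinity (weight q), chosen so P(R > c) = alpha.

    Requires q < alpha: if the point mass alone exceeds the target
    level, no finite cutoff achieves it.
    """
    if q >= alpha:
        raise ValueError("point mass alone exceeds the target level")
    # P(R <= c) = (1-q) * F_chi2(c) = 1 - alpha  =>  F_chi2(c) = (1-alpha)/(1-q)
    return chi2.ppf((1.0 - alpha) / (1.0 - q), df)

c_plain = chi2.ppf(0.95, 1)                      # classical chi-square calibration
c_mixture = mixture_calibrated_cutoff(0.05, q=0.01, df=1)
```

The mixture-calibrated cutoff is always larger than the plain chi-square cutoff, which is the direction needed to correct undercoverage.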

15.
The problem of statistical calibration of a measuring instrument can be framed in both a statistical and an engineering context. In the statistical context, the problem is dealt with by distinguishing between the ‘classical’ approach and the ‘inverse’ regression approach. Both of these models are static and are used to estimate exact measurements from measurements that are affected by error. In the engineering context, the variables of interest are treated as evolving over the time at which they are observed. The Bayesian time series method of dynamic linear models can be used to monitor the evolution of the measurements, thus introducing a dynamic approach to statistical calibration. The research presented here employs this new approach to performing statistical calibration. A simulation study in the context of microwave radiometry compares the dynamic model to traditional static frequentist and Bayesian approaches. The focus of the study is to understand how well the dynamic statistical calibration method performs under various signal-to-noise ratios r.

16.
This paper presents a method for estimating likelihood ratios for stochastic compartment models when only times of removals from a population are observed. The technique operates by embedding the models in a composite model parameterised by an integer k which identifies a switching time when dynamics change from one model to the other. Likelihood ratios can then be estimated from the posterior density of k using Markov chain methods. The techniques are illustrated by a simulation study involving an immigration-death model and validated using analytic results derived for this case. They are also applied to compare the fit of stochastic epidemic models to historical data on a smallpox epidemic. In addition to estimating likelihood ratios, the method can be used for direct estimation of likelihoods by selecting one of the models in the comparison to have a known likelihood for the observations. Some general properties of the likelihoods typically arising in this scenario, and their implications for inference, are illustrated and discussed.

17.
In the calibration of a near-infrared (NIR) instrument, we regress chemical compositions of interest as a function of their NIR spectra. In this process, we face two immediate challenges: first, the number of variables exceeds the number of observations and, second, the multicollinearity between variables is extremely high. To deal with these challenges, prediction models that produce sparse solutions have recently been proposed. The term ‘sparse’ means that some model parameters are estimated as exactly zero and the others are estimated naturally away from zero. In effect, a variable selection is embedded in the model to potentially achieve better prediction. Many studies have investigated sparse solutions for latent variable models, such as partial least squares and principal component regression, and for direct regression models such as ridge regression (RR); the latter mainly involve adding an L1-norm penalty to the objective function, as in lasso regression. In this study, we investigate new sparse alternatives to RR within a random effects model framework, where we consider Cauchy and mixture-of-normals distributions on the random effects. The results indicate that the mixture-of-normals model produces a sparse solution with good prediction and better interpretation. We illustrate the methods using NIR spectra datasets from milk and corn specimens.
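For reference, plain RR has the closed form sketched below; the sparse alternatives discussed above replace the normal prior implicit in this penalty with Cauchy or mixture-of-normals priors on the random effects. The simulated p ≫ n design mimics the NIR setting only in shape; all names and sizes are illustrative.

```python
import numpy as np

def ridge(X, y, lam):
    """Closed-form ridge estimator: solve (X'X + lam*I) beta = X'y.
    The penalty makes the system invertible even when p > n."""
    p = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(p), X.T @ y)

rng = np.random.default_rng(4)
X = rng.standard_normal((50, 200))            # p >> n, as with NIR spectra
beta_true = np.zeros(200)
beta_true[:3] = [2.0, -1.0, 0.5]              # only a few informative wavelengths
y = X @ beta_true + 0.1 * rng.standard_normal(50)
beta_hat = ridge(X, y, lam=1.0)
```

Note that ridge shrinks all coefficients toward zero but sets none exactly to zero; that is precisely the behavior the sparse priors are designed to change.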

18.
In this paper we review existing work on robust estimation for simultaneous equations models. Then we sketch three strategies for obtaining estimators with a high breakdown point and a controllable efficiency: (a) robustifying three-stage least squares, (b) robustifying the full information maximum likelihood method by minimizing the determinant of a robust covariance matrix of residuals, and (c) generalizing multivariate tau-estimators (Lopuhaä, 1992, Can. J. Statist., 19, 307–321) to these models. They have the same order of computational complexity as high breakdown point multivariate estimators. The latter seems the most promising approach.

19.
In this paper, we consider the problem of model robust design for simultaneous parameter estimation among a class of polynomial regression models with degree up to k. A generalized D-optimality criterion, the Ψα-optimality criterion, first introduced by Läuter (1974), is considered for this problem. By applying the theory of canonical moments and the maximin principle, we derive a model robust optimal design in the sense of having the highest minimum Ψα-efficiency. Numerical comparison indicates that the proposed design performs remarkably well for parameter estimation in all of the considered rival models.

20.
Regression calibration is a simple method for estimating regression models when covariate data are missing for some study subjects. It consists of replacing an unobserved covariate by an estimate of its conditional expectation given the available covariates. Regression calibration has recently been investigated in various regression models, such as the linear, generalized linear, and proportional hazards models. The aim of this paper is to investigate the appropriateness of this method for estimating the stratified Cox regression model with missing values of the covariate defining the strata. Despite its practical relevance, this problem has not yet been discussed in the literature. Asymptotic distribution theory is developed for the regression calibration estimator in this setting. A simulation study is also conducted to investigate the properties of this estimator.
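A minimal sketch of the regression calibration step itself (not the stratified Cox estimation studied in the paper): the conditional expectation of the missing covariate is estimated by a linear regression on the always-observed covariate, fitted to the complete cases, and then substituted for the missing values. The linear-conditional-mean assumption and all variable names are illustrative.

```python
import numpy as np

def regression_calibration(x, z, observed):
    """Replace missing values of covariate x with fitted values from a
    linear regression of x on the observed covariate z, estimated on
    the complete cases (observed is a boolean mask)."""
    z1 = np.column_stack([np.ones(len(z)), z])
    beta, *_ = np.linalg.lstsq(z1[observed], x[observed], rcond=None)
    x_cal = x.copy()
    x_cal[~observed] = z1[~observed] @ beta    # impute E[x | z] for missing cases
    return x_cal

rng = np.random.default_rng(2)
z = rng.standard_normal(300)
x = 0.8 * z + 0.5 * rng.standard_normal(300)
observed = rng.random(300) > 0.3               # roughly 30% of x missing at random
x_filled = regression_calibration(x, z, observed)
```

Observed values are left untouched; only the missing entries are replaced by their estimated conditional expectations.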
