Similar articles
1.
Temporal aggregation of cyclical models with business cycle applications
This paper focuses on temporal aggregation of the cyclical component model introduced by Harvey (1989). More specifically, it provides the properties of the aggregate process for any generic period of aggregation. As a consequence, the exact link between aggregate and disaggregate parameters can be easily derived. The cyclical model is important due to its relevance in the analysis of the business cycle. Given this, two empirical applications are presented in order to compare the estimated parameters of the quarterly models for German and US gross domestic products with those of the corresponding models aggregated to annual frequency.
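The sketch below is a minimal simulation of the setting rather than the paper's analytical mapping: it assumes Harvey's trigonometric stochastic-cycle recursion with illustrative values (rho = 0.9, lambda = 2*pi/20 chosen here) and only shows how flow aggregation, summing k = 4 consecutive quarters, changes the sample autocorrelations of the cycle.

```python
import numpy as np

def simulate_cycle(n, rho=0.9, lam=2.0 * np.pi / 20.0, sigma=1.0, seed=0):
    """Simulate Harvey's trigonometric stochastic cycle psi_t at the basic (quarterly) frequency."""
    rng = np.random.default_rng(seed)
    psi, psi_star = 0.0, 0.0
    out = np.empty(n)
    for t in range(n):
        k1, k2 = rng.normal(0.0, sigma, size=2)
        psi_new = rho * (np.cos(lam) * psi + np.sin(lam) * psi_star) + k1
        psi_star = rho * (-np.sin(lam) * psi + np.cos(lam) * psi_star) + k2
        psi = psi_new
        out[t] = psi
    return out

def aggregate_flow(x, k=4):
    """Temporal aggregation of a flow variable: non-overlapping sums of k observations."""
    n = len(x) // k
    return x[: n * k].reshape(n, k).sum(axis=1)

def sample_acf(x, nlags=4):
    x = x - x.mean()
    denom = np.dot(x, x)
    return np.array([np.dot(x[:-h], x[h:]) / denom for h in range(1, nlags + 1)])

quarterly = simulate_cycle(4000)
annual = aggregate_flow(quarterly, k=4)
print("quarterly ACF:", np.round(sample_acf(quarterly), 3))
print("annual ACF:   ", np.round(sample_acf(annual), 3))
```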

2.
Short-run household electricity demand has been estimated with conditional demand models by a variety of authors using both aggregate data and disaggregate data. Disaggregate data are most desirable for estimating these models. However, in many cases, available disaggregate data may be inappropriate. Furthermore, disaggregate data may be unavailable altogether. In these cases, readily available aggregate data may be more appropriate. This article develops and evaluates an econometric technique to generate unbiased estimates of household electricity demand using such aggregate data.

3.
Sufficiency is a widely used concept for reducing the dimensionality of a data set. Collecting data for a sufficient statistic is generally much easier and less expensive than collecting all of the available data. When the posterior distributions of a quantity of interest given the aggregate and disaggregate data are identical, perfect aggregation is said to hold, and in this case the aggregate data is a sufficient statistic for the quantity of interest. In this paper, the conditions for perfect aggregation are shown to depend on the functional form of the prior distribution. When the quantity of interest is the sum of some parameters in a vector having either a generalized Dirichlet or a Liouville distribution for analyzing compositional data, necessary and sufficient conditions for perfect aggregation are also established.
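As a minimal numerical illustration of perfect aggregation (for the ordinary Dirichlet prior rather than the generalized Dirichlet or Liouville families analysed in the paper): with theta ~ Dirichlet(a) and multinomial counts n, the posterior of theta_1 + theta_2 is Beta(a_1 + a_2 + n_1 + n_2, a_3 + n_3), so it depends on the disaggregate counts only through their sum. The hyperparameters and counts below are invented for the demonstration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Prior: theta ~ Dirichlet(a); data: multinomial counts n, so the posterior is Dirichlet(a + n).
a = np.array([2.0, 3.0, 4.0])

def posterior_sum_first_two(counts, size=200_000):
    """Monte Carlo draws of theta_1 + theta_2 from the Dirichlet posterior."""
    draws = rng.dirichlet(a + counts, size=size)
    return draws[:, 0] + draws[:, 1]

# Two disaggregate data sets sharing the same aggregate count n_1 + n_2 = 30 (and n_3 = 15).
s1 = posterior_sum_first_two(np.array([10.0, 20.0, 15.0]))
s2 = posterior_sum_first_two(np.array([25.0, 5.0, 15.0]))

# Perfect aggregation: both posteriors are the same Beta(2 + 3 + 30, 4 + 15) distribution.
print("posterior mean / variance, split (10, 20):", s1.mean().round(4), s1.var().round(5))
print("posterior mean / variance, split (25, 5): ", s2.mean().round(4), s2.var().round(5))
```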

4.
This article develops a new approach for testing aggregation restrictions in estimated production-function and cost-function models. Rather than using the well-known separability conditions for the existence of an aggregate, this approach focuses on testing whether a particular aggregate is valid and develops empirically testable necessary and sufficient conditions for the validity of some known aggregation scheme. An empirical section examines the power of this test in the context of a simple production-function model.

5.
This research provides a generalized framework to disaggregate lower-frequency time series and evaluate the disaggregation performance. The proposed framework combines two models in separate stages: a linear regression model to exploit related independent variables in the first stage and a state–space model to disaggregate the residual from the regression in the second stage. For the purpose of providing a set of practical criteria for assessing the disaggregation performance, we measure the information loss that occurs during temporal aggregation while examining what effects take place when aggregating data. To validate the proposed framework, we implement Monte Carlo simulations and provide two empirical studies. Supplementary materials for this article are available online.
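A rough sketch of the two-stage idea follows: a regression at the low frequency, then a distribution of its residual across the high-frequency periods. The smoothing step here is a simple Denton-type quadratic-minimisation stand-in for the paper's state-space stage, and all series and coefficients are simulated purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy data (all simulated): a quarterly indicator x_q and an annual target y_a.
n_years, k = 30, 4
m = n_years * k
x_q = np.cumsum(rng.normal(0.2, 1.0, m))                 # quarterly indicator
C = np.kron(np.eye(n_years), np.ones(k))                 # aggregation matrix: sums the 4 quarters
y_a = 5.0 + 2.0 * (C @ x_q) + rng.normal(0, 3.0, n_years)

# Stage 1: linear regression of the annual series on the aggregated indicator.
X_a = np.column_stack([np.ones(n_years), C @ x_q])
beta, *_ = np.linalg.lstsq(X_a, y_a, rcond=None)
resid_a = y_a - X_a @ beta

# Stage 2 (stand-in for the state-space step): spread each annual residual over its quarters
# as smoothly as possible, i.e. minimise the squared first differences of the quarterly
# residuals subject to their summing to the annual residual (a Denton-type step).
D = np.diff(np.eye(m), axis=0)                           # first-difference operator
kkt = np.block([[D.T @ D, C.T], [C, np.zeros((n_years, n_years))]])
rhs = np.concatenate([np.zeros(m), resid_a])
q = np.linalg.solve(kkt, rhs)[:m]

# Disaggregated series: quarterly share of the regression part plus the smoothed residual.
y_q_hat = beta[0] / k + beta[1] * x_q + q
print("annual sums reproduced:", np.allclose(C @ y_q_hat, y_a))
```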

6.
The choice of an analytical framework for systemic financial risk is one of the focal points of debate between theorists and practitioners studying systemic financial risk. Building a sound analytical framework must rest on a reasonable mode of macro-level aggregation, and the aggregation schemes used to construct systemic financial risk frameworks mainly include simple summation, neoclassical macro aggregation, and the new aggregation schemes developed under the macroprudential principle. A longitudinal review and horizontal comparison of the research on systemic financial risk under these different aggregation schemes shows that current research should aim, on the basis of monetary-value aggregation, to form an integrated analytical framework with a solid theoretical foundation.

7.
This article analyzes the importance of exact aggregation restrictions and the modeling of demographic effects in Jorgenson, Lau, and Stoker's (1982) model of aggregate consumer behavior. These issues are examined at the household level, using Canadian cross-sectional microdata. Exact aggregation restrictions and some implicit restrictions on household demographic effects are strongly rejected by our data. These results do not preclude pooling aggregate time series data with cross-sectional microdata to estimate a model of aggregate consumer behavior. They do suggest, however, an alternative basis for the aggregate model.

8.
Autoregressive Moving Average (ARMA) time series model fitting is a procedure often based on aggregate data, where parameter estimation plays a key role. Therefore, we analyze the effect of temporal aggregation on the accuracy of parameter estimation of mixed ARMA and MA models. We derive the expressions required to compute the parameter values of the aggregate models as functions of the basic model parameters in order to compare their estimation accuracy. A simulation experiment shows that aggregation causes a severe accuracy loss that increases with the order of aggregation.
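Here is a small self-contained Monte Carlo in the spirit of this experiment, with simplifications of my own: an AR(1) basic process instead of the mixed ARMA/MA designs, simple moment estimators instead of maximum likelihood, and flow aggregation of order k = 3. It relies on the standard result that the k-period sum of an AR(1) with coefficient phi follows an ARMA(1,1) whose AR coefficient is phi**k.

```python
import numpy as np

rng = np.random.default_rng(3)

def simulate_ar1(n, phi, sigma=1.0):
    e = rng.normal(0, sigma, n)
    y = np.empty(n)
    y[0] = e[0] / np.sqrt(1.0 - phi**2)
    for t in range(1, n):
        y[t] = phi * y[t - 1] + e[t]
    return y

def acf(x, h):
    x = x - x.mean()
    return np.dot(x[:-h], x[h:]) / np.dot(x, x)

phi, k, n_basic, reps = 0.8, 3, 3000, 200
est_basic, est_agg = [], []
for _ in range(reps):
    y = simulate_ar1(n_basic, phi)
    # Basic-frequency moment estimate: the lag-1 autocorrelation of an AR(1) is phi.
    est_basic.append(acf(y, 1))
    # Flow aggregation (sums of k consecutive observations). The aggregate of an AR(1)
    # is ARMA(1,1) with AR coefficient phi**k; for an ARMA(1,1), rho(2)/rho(1) recovers
    # that AR coefficient, and its k-th root maps back to the basic parameter.
    ya = y.reshape(-1, k).sum(axis=1)
    phi_k_hat = max(acf(ya, 2) / acf(ya, 1), 1e-6)
    est_agg.append(phi_k_hat ** (1.0 / k))

for name, est in [("basic data    ", est_basic), ("aggregate data", est_agg)]:
    e = np.array(est)
    print(f"{name}: bias = {e.mean() - phi:+.4f}, std = {e.std():.4f}")
```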

9.
This paper is concerned with the volatility modeling of a set of South African Rand (ZAR) exchange rates. We investigate the quasi-maximum-likelihood (QML) estimator based on the Kalman filter and explore how well a choice of stochastic volatility (SV) models fits the data. We note that a data set from a developing country is used. The main results are: (1) the SV model parameter estimates are in line with those reported from the analysis of high-frequency data for developed countries; (2) the SV models we considered, along with their corresponding QML estimators, fit the data well; (3) using the range return instead of the absolute return as a volatility proxy produces QML estimates that are both less biased and less variable; (4) although the log range of the ZAR exchange rates has a distribution that is quite far from normal, the corresponding QML estimator has a superior performance when compared with the log absolute return.
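The following is a compact sketch of the kind of Kalman-filter QML estimator referred to here, assuming the basic SV specification h_t = mu + phi*(h_{t-1} - mu) + eta_t and the usual linearisation of log squared returns, with the log-chi-square observation noise treated as Gaussian with mean -1.27 and variance pi^2/2. The log-range proxy compared in the paper would simply enter through a different observation equation; everything below, including the simulated data, is illustrative rather than the authors' implementation.

```python
import numpy as np
from scipy.optimize import minimize

LOGCHI2_MEAN, LOGCHI2_VAR = -1.2704, np.pi**2 / 2   # moments of log(chi-squared(1)) noise

def sv_qml_negloglik(params, y):
    """Gaussian quasi log-likelihood of y_t = log r_t^2 under the linearised SV model."""
    mu, phi_raw, ln_sig = params
    phi = np.tanh(phi_raw)                          # keeps |phi| < 1
    s2_eta = np.exp(2.0 * ln_sig)
    a, P = mu, s2_eta / (1.0 - phi**2)              # stationary initialisation
    ll = 0.0
    for yt in y:                                    # scalar Kalman filter recursion
        a_pred = mu * (1.0 - phi) + phi * a
        P_pred = phi**2 * P + s2_eta
        v = yt - (LOGCHI2_MEAN + a_pred)            # prediction error
        F = P_pred + LOGCHI2_VAR
        ll += -0.5 * (np.log(2.0 * np.pi) + np.log(F) + v**2 / F)
        K = P_pred / F
        a, P = a_pred + K * v, P_pred * (1.0 - K)
    return -ll

# Simulate returns from a basic SV model, then recover the parameters by QML.
rng = np.random.default_rng(4)
mu, phi, sig_eta, n = -1.0, 0.95, 0.2, 3000
h = np.empty(n); h[0] = mu
for t in range(1, n):
    h[t] = mu * (1.0 - phi) + phi * h[t - 1] + sig_eta * rng.normal()
r = np.exp(h / 2.0) * rng.normal(size=n)

y = np.log(r**2 + 1e-12)                            # log-squared-return volatility proxy
start = [y.mean() - LOGCHI2_MEAN, 1.5, np.log(0.3)]
res = minimize(sv_qml_negloglik, x0=start, args=(y,), method="Nelder-Mead")
mu_hat, phi_hat, sig_hat = res.x[0], np.tanh(res.x[1]), np.exp(res.x[2])
print(f"mu = {mu_hat:.3f}, phi = {phi_hat:.3f}, sigma_eta = {sig_hat:.3f}")
```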

10.
In econometrics and finance, variables are collected at different frequencies. One straightforward regression model is to aggregate the higher frequency variable to match the lower frequency with a fixed weight function. However, aggregation with fixed weight functions may overlook useful information in the higher frequency variable. On the other hand, keeping all higher frequencies may result in overly complicated models. In the literature, mixed data sampling (MIDAS) regression models have been proposed to balance between the two. In this article, a new model specification test is proposed that can help decide between the simple aggregation and the MIDAS model.
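What follows is not the specification test itself, only a minimal sketch of the two competing specifications it chooses between, using the exponential Almon weighting commonly used in MIDAS regressions; the variable names, sample sizes and data-generating weights are all invented for illustration.

```python
import numpy as np
from scipy.optimize import minimize

def exp_almon_weights(theta1, theta2, m):
    """Exponential Almon lag weights over m high-frequency lags (flat when both thetas are 0)."""
    j = np.arange(1, m + 1)
    w = np.exp(theta1 * j + theta2 * j**2)
    return w / w.sum()

def midas_sse(params, y, X_hf):
    """Sum of squared errors of y_t = b0 + b1 * sum_j w_j * x_{t,j} + e_t."""
    b0, b1, t1, t2 = params
    w = exp_almon_weights(t1, t2, X_hf.shape[1])
    return np.sum((y - b0 - b1 * (X_hf @ w)) ** 2)

# Toy data: row t of X_hf holds the m = 12 most recent high-frequency lags explaining y_t.
rng = np.random.default_rng(5)
n, m = 300, 12
X_hf = rng.normal(size=(n, m))
w_true = exp_almon_weights(0.2, -0.05, m)            # decaying data-generating weights
y = 1.0 + 2.0 * (X_hf @ w_true) + rng.normal(0, 0.5, n)

midas = minimize(midas_sse, x0=[0.0, 1.0, 0.0, 0.0], args=(y, X_hf), method="Nelder-Mead")
flat = minimize(lambda p: midas_sse([p[0], p[1], 0.0, 0.0], y, X_hf),
                x0=[0.0, 1.0], method="Nelder-Mead")
print("MIDAS SSE:", round(midas.fun, 2), " fixed-weight SSE:", round(flat.fun, 2))
```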

11.
Empirical estimates of source statistical economic data such as trade flows, greenhouse gas emissions, or employment figures are always subject to uncertainty (stemming from measurement errors or confidentiality) but information concerning that uncertainty is often missing. This article uses concepts from Bayesian inference and the maximum entropy principle to estimate the prior probability distribution, uncertainty, and correlations of source data when such information is not explicitly provided. In the absence of additional information, an isolated datum is described by a truncated Gaussian distribution, and if an uncertainty estimate is missing, its prior equals the best guess. When the sum of a set of disaggregate data is constrained to match an aggregate datum, it is possible to determine the prior correlations among disaggregate data. If aggregate uncertainty is missing, all prior correlations are positive. If aggregate uncertainty is available, prior correlations can be either all positive, all negative, or a mix of both. An empirical example is presented, which reports relative uncertainties and correlation priors for the County Business Patterns database. In this example, relative uncertainties range from 1% to 80% and 20% of data pairs exhibit correlations below -0.9 or above 0.9. Supplementary materials for this article are available online.

12.
Macro-level logistics cost is an important indicator of the quality of a country's economic development, so measuring it objectively and scientifically matters a great deal. Building on a systematic introduction to the disaggregation and aggregation methods used in South Africa's logistics-cost statistical model, this paper focuses on the aggregation method. The aggregation method compiles statistics separately according to the production volume of each product, which matches actual logistics operations more closely and can provide a useful reference for improving China's logistics-cost statistics.

13.
Investigations of the forecasting power of econometric models of exchange rates by Meese and Rogoff (1983 and 1988) have led to the conclusion that these models can predict no better than the no-change forecast rule implied by the random walk model. This has often been interpreted as a confirmation of foreign exchange market efficiency. The present paper builds on models of real interest rate determination of the exchange rate. Estimates of the Dollar-DM exchange rate given here are stable in the face of changes of the data base, give predictions superior to the random walk model, and lead to the conclusion of foreign exchange market inefficiency.
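Below is a bare-bones version of this kind of out-of-sample horse race against the no-change rule, using simulated series and a simple predictive regression in place of the paper's real-interest-rate model; it shows the mechanics of a Meese-Rogoff style comparison, not the paper's result.

```python
import numpy as np

rng = np.random.default_rng(6)

# Simulated monthly data: a fundamentals-based predictor f and a log exchange rate s.
n = 240
f = np.cumsum(rng.normal(0, 0.01, n))
s = f + np.cumsum(rng.normal(0, 0.02, n))

def rolling_forecasts(s, f, window=120):
    """One-step-ahead forecasts from a predictive regression versus the no-change rule."""
    model_fc, rw_fc, actual = [], [], []
    for t in range(window, len(s) - 1):
        X = np.column_stack([np.ones(t), f[:t]])     # information available at time t
        b, *_ = np.linalg.lstsq(X, s[1:t + 1], rcond=None)
        model_fc.append(b[0] + b[1] * f[t])          # regression forecast of s[t+1]
        rw_fc.append(s[t])                           # random-walk (no-change) forecast
        actual.append(s[t + 1])
    return (np.asarray(v) for v in (model_fc, rw_fc, actual))

model_fc, rw_fc, actual = rolling_forecasts(s, f)
rmse = lambda a, b: np.sqrt(np.mean((a - b) ** 2))
print("regression RMSE: ", round(rmse(actual, model_fc), 4))
print("random-walk RMSE:", round(rmse(actual, rw_fc), 4))
```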

14.
In time series analysis, Autoregressive Moving Average (ARMA) models play a central role. Because of the importance of parameter estimation in ARMA modeling and since it is based on aggregate time series so often, we analyze the effect of temporal aggregation on estimation accuracy. We derive the relationships between the aggregate and the basic parameters and compute the actual values of the former from those of the latter in order to measure and compare their estimation accuracy. We run a simulation experiment that shows that aggregation seriously worsens estimation accuracy and that the impact increases with the order of aggregation.

15.
This paper investigates the interaction between aggregation and nonlinearity through a Monte Carlo study. Various tests for neglected nonlinearity are used to compare the power of the tests for different nonlinear models at different levels of aggregation. Three types of aggregation, namely cross-sectional aggregation, temporal aggregation and systematic sampling, are considered. Aggregation tends to simplify nonlinearity. The degree to which nonlinearity is reduced depends on the importance of the common factor and the extent of the aggregation. The effect is larger when the size of the common factor is smaller and when the extent of the aggregation is larger.
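The toy experiment below covers only one of the three schemes, cross-sectional aggregation of independent nonlinear series (i.e. no common factor), with a RESET-type auxiliary regression standing in for the battery of neglected-nonlinearity tests used in the paper; the design and numbers are my own.

```python
import numpy as np
from scipy.stats import f as f_dist

rng = np.random.default_rng(7)

def simulate_tar(noise):
    """A simple threshold AR(1): the AR coefficient switches with the sign of the lagged level."""
    y = np.zeros(len(noise))
    for t in range(1, len(noise)):
        phi = 0.7 if y[t - 1] > 0 else -0.5
        y[t] = phi * y[t - 1] + noise[t]
    return y

def reset_stat(y):
    """RESET-type F statistic: add squared and cubed AR(1) fitted values and test them."""
    Y, X1 = y[1:], np.column_stack([np.ones(len(y) - 1), y[:-1]])
    b1, *_ = np.linalg.lstsq(X1, Y, rcond=None)
    fit = X1 @ b1
    ssr_r = np.sum((Y - fit) ** 2)
    X2 = np.column_stack([X1, fit**2, fit**3])
    b2, *_ = np.linalg.lstsq(X2, Y, rcond=None)
    ssr_u = np.sum((Y - X2 @ b2) ** 2)
    q, dof = 2, len(Y) - X2.shape[1]
    return ((ssr_r - ssr_u) / q) / (ssr_u / dof), q, dof

n_obs, n_series, reps = 400, 20, 200
rej_single = rej_agg = 0
for _ in range(reps):
    panel = np.array([simulate_tar(rng.normal(0, 1, n_obs)) for _ in range(n_series)])
    F1, q, dof = reset_stat(panel[0])               # one disaggregate (nonlinear) series
    Fa, _, _ = reset_stat(panel.sum(axis=0))        # cross-sectional aggregate
    crit = f_dist.ppf(0.95, q, dof)
    rej_single += F1 > crit
    rej_agg += Fa > crit
print(f"rejection rate, single series: {rej_single / reps:.2f}; aggregate: {rej_agg / reps:.2f}")
```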

16.
In modern financial economics, continuous-time models describe the dynamics of key economic variables, such as stock prices, exchange rates and interest rates, more conveniently. This paper proposes a two-stage, high-frequency-data-driven estimation method for continuous-time models, which improves the flexibility and tractability of extended continuous-time models. Using the Vasicek model as an example, the method first estimates the diffusion parameter with realized volatility, and then estimates the drift parameters from the forward equation of the stationary distribution of the observed data. The method depends little on the initial model specification and the optimization algorithm, and its results are relatively stable and reliable.
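Here is a minimal sketch of the two-stage logic for the Vasicek example: the diffusion coefficient is taken from the realized volatility of high-frequency increments, and the drift parameters are then backed out from the Gaussian stationary distribution N(theta, sigma^2/(2*kappa)) matched to sample moments. The moment matching is a simplification of the stationary-density (forward-equation) step described in the abstract, and all simulation settings are illustrative.

```python
import numpy as np

rng = np.random.default_rng(8)

# Simulate a Vasicek short rate dr = kappa*(theta - r)*dt + sigma*dW on a fine time grid.
kappa, theta, sigma = 1.5, 0.03, 0.02
years, obs_per_year = 20, 252 * 48                    # "high-frequency": 48 observations per day
dt = 1.0 / obs_per_year
n = years * obs_per_year
r = np.empty(n); r[0] = theta
for t in range(1, n):
    r[t] = r[t - 1] + kappa * (theta - r[t - 1]) * dt + sigma * np.sqrt(dt) * rng.normal()

# Stage 1: diffusion parameter from realized volatility (quadratic variation over the time span).
sigma_hat = np.sqrt(np.sum(np.diff(r) ** 2) / (n * dt))

# Stage 2: drift parameters from the stationary distribution N(theta, sigma^2 / (2*kappa)),
# matched to the sample mean and variance of the observed path.
theta_hat = r.mean()
kappa_hat = sigma_hat**2 / (2.0 * r.var())

print(f"sigma_hat = {sigma_hat:.4f} (true {sigma}), theta_hat = {theta_hat:.4f} (true {theta}), "
      f"kappa_hat = {kappa_hat:.2f} (true {kappa})")
```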

17.
If unit‐level data are available, small area estimation (SAE) is usually based on models formulated at the unit level, but they are ultimately used to produce estimates at the area level and thus involve area‐level inferences. This paper investigates the circumstances under which using an area‐level model may be more effective. Linear mixed models (LMMs) fitted using different levels of data are applied in SAE to calculate synthetic estimators and empirical best linear unbiased predictors (EBLUPs). The performance of area‐level models is compared with unit‐level models when both individual and aggregate data are available. A key factor is whether there are substantial contextual effects. Ignoring these effects in unit‐level working models can cause biased estimates of regression parameters. The contextual effects can be automatically accounted for in the area‐level models. Using synthetic and EBLUP techniques, small area estimates based on different levels of LMMs are investigated in this paper by means of a simulation study.

18.
This paper proposes a new approach, based on the recent developments of the wavelet theory, to model the dynamic of the exchange rate. First, we consider the maximum overlap discrete wavelet transform (MODWT) to decompose the level exchange rates into several scales. Second, we focus on modelling the conditional mean of the detrended series as well as their volatilities. In particular, we consider the generalized fractional, one-factor, Gegenbauer process (GARMA) to model the conditional mean and the fractionally integrated generalized autoregressive conditional heteroskedasticity process (FIGARCH) to model the conditional variance. Moreover, we estimate the GARMA-FIGARCH model using the wavelet-based maximum likelihood estimator (Whitcher in Technometrics 46:225–238, 2004). To illustrate the usefulness of our methodology, we carry out an empirical application using the daily Tunisian exchange rates relative to the American Dollar, the Euro and the Japanese Yen. The empirical results show the relevance of the selected modelling approach which contributes to a better forecasting performance of the exchange rate series.

19.
Many economic and financial time series exhibit heteroskedasticity, where the variability changes are often based on recent past shocks, which cause large or small fluctuations to cluster together. Classical ways of modelling the changing variance include the use of Generalized Autoregressive Conditional Heteroskedasticity (GARCH) models and Neural Network models. The paper starts with a comparative study of these two models, both in terms of capturing the non-linear or heteroskedastic structure and forecasting performance. Monthly and daily exchange rates for three different countries are used. The paper continues with different methods for combining forecasts of the volatility from the competing models in order to improve forecasting accuracy. Traditional methods for combining the predicted values from different models using various weighting schemes are considered, such as the simple average or methods that find the best weights in terms of minimizing the squared forecast error. The main purpose of the paper is, however, to propose an alternative methodology for combining forecasts effectively. The proposed non-linear, non-parametric, kernel-based method is shown to have the basic advantage of not being affected by outliers, structural breaks or shocks to the system, and it does not require a specific functional form for the combination.
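The paper's kernel-based combiner is not reproduced here; the sketch below only shows the two traditional benchmarks it is compared against, a simple average and least-squares (MSE-minimising) combination weights, applied to made-up volatility forecasts.

```python
import numpy as np

rng = np.random.default_rng(9)

# Made-up target volatility and two imperfect competing forecasts of it.
n = 500
true_vol = 0.5 + 0.3 * np.abs(np.sin(np.linspace(0.0, 20.0, n)))
fc_a = true_vol + rng.normal(0, 0.10, n)               # stand-in for a GARCH-type forecast
fc_b = 0.9 * true_vol + 0.05 + rng.normal(0, 0.08, n)  # stand-in for a neural-network forecast

# Estimate MSE-minimising combination weights on the first half of the sample.
split = n // 2
F_train = np.column_stack([np.ones(split), fc_a[:split], fc_b[:split]])
w, *_ = np.linalg.lstsq(F_train, true_vol[:split], rcond=None)

# Evaluate the combinations out of sample on the second half.
F_test = np.column_stack([np.ones(n - split), fc_a[split:], fc_b[split:]])
combos = {
    "simple average       ": 0.5 * (fc_a[split:] + fc_b[split:]),
    "least-squares weights": F_test @ w,
}
for name, c in combos.items():
    print(name, "out-of-sample MSE:", round(float(np.mean((true_vol[split:] - c) ** 2)), 5))
```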
