Similar Literature

20 similar records found.
1.
Abstract

We develop and exemplify application of new classes of dynamic models for time series of nonnegative counts. Our novel univariate models combine dynamic generalized linear models for binary and conditionally Poisson time series, with dynamic random effects for over-dispersion. These models estimate dynamic regression coefficients in both binary and nonzero count components. Sequential Bayesian analysis allows fast, parallel analysis of sets of decoupled time series. New multivariate models then enable information sharing in contexts where data at a more highly aggregated level provide more incisive inferences on shared patterns such as trends and seasonality. A novel multiscale approach—one new example of the concept of decouple/recouple in time series—enables information sharing across series. This incorporates cross-series linkages while insulating parallel estimation of univariate models, and hence enables scalability in the number of series. The major motivating context is supermarket sales forecasting. Detailed examples drawn from a case study in multistep forecasting of sales of a number of related items showcase forecasting of multiple series, with discussion of forecast accuracy metrics, comparisons with existing methods, and broader questions of probabilistic forecast assessment.

2.
We discuss maximum likelihood and estimating equations methods for combining results from multiple studies in pooling projects and data consortia using a meta-analysis model, when the multivariate estimates with their covariance matrices are available. The estimates to be combined are typically regression slopes, often from relative risk models in biomedical and epidemiologic applications. We generalize the existing univariate meta-analysis model and investigate the efficiency advantages of the multivariate methods, relative to the univariate ones. We generalize a popular univariate test for between-studies homogeneity to a multivariate test. The methods are applied to a pooled analysis of types of carotenoids in relation to lung cancer incidence from seven prospective studies. In these data, the expected gain in efficiency was evident, sometimes to a large extent. Finally, we study the finite sample properties of the estimators and compare the multivariate ones to their univariate counterparts.
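The core of the fixed-effect multivariate meta-analysis described above is a generalized least squares combination: each study's coefficient vector is weighted by its inverse covariance matrix. A minimal sketch (not the authors' code; the two study estimates and covariances below are made up for illustration):

```python
import numpy as np

def combine_estimates(betas, covs):
    """Fixed-effect multivariate meta-analysis: inverse-variance-weighted
    (generalized least squares) combination of per-study estimates.

    betas: list of (p,) study-specific coefficient estimates
    covs:  list of (p, p) covariance matrices of those estimates
    Returns the pooled estimate and its covariance matrix.
    """
    precision = sum(np.linalg.inv(S) for S in covs)   # sum of inverse covariances
    pooled_cov = np.linalg.inv(precision)
    weighted_sum = sum(np.linalg.inv(S) @ b for b, S in zip(betas, covs))
    return pooled_cov @ weighted_sum, pooled_cov

# Two hypothetical studies estimating the same two regression slopes.
b1, S1 = np.array([0.5, 1.2]), np.array([[0.04, 0.01], [0.01, 0.09]])
b2, S2 = np.array([0.7, 1.0]), np.array([[0.02, 0.00], [0.00, 0.05]])
pooled, V = combine_estimates([b1, b2], [S1, S2])
```

Relative to pooling each coefficient separately, the matrix weights exploit the within-study correlations between coefficients, which is the source of the efficiency gain this entry reports.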

3.
4.
ABSTRACT

A long-standing puzzle in macroeconomic forecasting has been that a wide variety of multivariate models have struggled to out-predict univariate models consistently. We seek an explanation for this puzzle in terms of population properties. We derive bounds for the predictive R2 of the true, but unknown, multivariate model from univariate ARMA parameters alone. These bounds can be quite tight, implying little forecasting gain even if we knew the true multivariate model. We illustrate using CPI inflation data. Supplementary materials for this article are available online.

5.
Functional time series, whose sample elements are recorded sequentially over time, are encountered increasingly often as data-collection technology advances. Recent studies have shown that analysis and forecasting of functional time series can be performed easily using functional principal component analysis together with existing univariate/multivariate time series models. However, the forecasting performance of such functional time series models may be affected by the presence of outlying observations, which are very common in many scientific fields. Outliers may distort the functional time series model structure, and thus the underlying model may produce high forecast errors. We introduce a robust forecasting technique based on weighted likelihood methodology to obtain point and interval forecasts in functional time series in the presence of outliers. The finite sample performance of the proposed method is illustrated by Monte Carlo simulations and four real-data examples. Numerical results reveal that the proposed method exhibits superior performance compared with the existing method(s).
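The standard decouple-then-forecast recipe this entry builds on (without its robustness weighting) is: extract functional principal component scores, forecast each score series with a univariate model, and reconstruct the curve. A minimal non-robust sketch on simulated curves; all data and model choices here are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical functional time series: 40 daily curves observed at 50 grid points.
t = np.linspace(0, 1, 50)
n = 40
amp = np.cumsum(rng.normal(size=(n, 1)), axis=0) * 0.1   # slowly drifting amplitude
curves = (np.sin(2 * np.pi * t) + amp * np.cos(2 * np.pi * t)
          + rng.normal(scale=0.05, size=(n, 50)))

# Step 1: functional PCA via SVD of the centred data matrix.
mean_curve = curves.mean(axis=0)
U, s, Vt = np.linalg.svd(curves - mean_curve, full_matrices=False)
k = 2                                  # number of retained components
scores = U[:, :k] * s[:k]              # principal component score series

# Step 2: forecast each score series with a univariate AR(1) fitted by OLS.
next_scores = np.empty(k)
for j in range(k):
    y, x = scores[1:, j], scores[:-1, j]
    phi = (x @ y) / (x @ x)            # least-squares AR(1) coefficient
    next_scores[j] = phi * scores[-1, j]

# Step 3: reconstruct the one-step-ahead curve forecast.
forecast = mean_curve + next_scores @ Vt[:k]
```

The robust method in the entry would replace the plain SVD/OLS steps with weighted-likelihood estimates that downweight outlying curves.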

6.
Instantaneous dependence among several asset returns is the main reason for the computational and statistical complexities in working with full multivariate GARCH models. Using the Cholesky decomposition of the covariance matrix of such returns, we introduce a broad class of multivariate models where univariate GARCH models are used for variances of individual assets and parsimonious models for the time-varying unit lower triangular matrices. This approach, while reducing the number of parameters and the severity of the positive-definiteness constraint, has several advantages compared to the traditional orthogonal and related GARCH models. Its major drawback is the potential need for an a priori ordering or grouping of the stocks in a portfolio; through a case study we show that this ordering can be exploited to reduce both the forecast error of the volatilities and the dimension of the parameter space. Moreover, the Cholesky decomposition, unlike its competitors, decomposes the normal likelihood function as a product of univariate normal likelihoods with independent parameters, resulting in fast estimation algorithms. Gaussian maximum likelihood methods of estimation of the parameters are developed. The methodology is implemented for a real financial dataset with seven assets, and its forecasting power is compared with that of other existing models.
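The decomposition at the heart of this approach is the modified Cholesky factorization Σ = LDL′, with L unit lower triangular and D diagonal, so each diagonal entry of D can be handed to a univariate GARCH model. A minimal sketch with a made-up three-asset covariance matrix:

```python
import numpy as np

# Hypothetical 3-asset covariance matrix of returns (illustrative values).
Sigma = np.array([[0.040, 0.012, 0.006],
                  [0.012, 0.090, 0.020],
                  [0.006, 0.020, 0.050]])

C = np.linalg.cholesky(Sigma)          # Sigma = C C'
d = np.diag(C)
L = C / d                              # unit lower triangular factor (column scaling)
D = np.diag(d ** 2)                    # diagonal of conditional variances

# Sigma = L D L': the variances in D can each get a univariate GARCH model,
# while parsimonious models handle the below-diagonal entries of L.
assert np.allclose(L @ D @ L.T, Sigma)
```

Note that permuting the assets changes L and D, which is exactly the ordering issue the abstract raises as the approach's major drawback.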

7.
8.
We discuss the development of dynamic factor models for multivariate financial time series, and the incorporation of stochastic volatility components for latent factor processes. Bayesian inference and computation is developed and explored in a study of the dynamic factor structure of daily spot exchange rates for a selection of international currencies. The models are direct generalizations of univariate stochastic volatility models and represent specific varieties of models recently discussed in the growing multivariate stochastic volatility literature. We discuss model fitting based on retrospective data and sequential analysis for forward filtering and short-term forecasting. Analyses are compared with results from the much simpler method of dynamic variance-matrix discounting that, for over a decade, has been a standard approach in applied financial econometrics. We study these models in analysis, forecasting, and sequential portfolio allocation for a selected set of international exchange-rate-return time series. Our goals are to understand a range of modeling questions arising in using these factor models and to explore empirical performance in portfolio construction relative to discount approaches. We report on our experiences and conclude with comments about the practical utility of structured factor models and on future potential model extensions.

9.
Stock & Watson (1999) consider the relative quality of different univariate forecasting techniques. This paper extends their study on forecasting practice, comparing the forecasting performance of two popular model selection procedures, the Akaike information criterion (AIC) and the Bayesian information criterion (BIC). This paper considers several topics: how AIC and BIC choose lags in autoregressive models on actual series, how models so selected forecast relative to an AR(4) model, the effect of using a maximum lag on model selection, and the forecasting performance of combining AR(4), AIC, and BIC models with an equal weight.
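The lag-selection comparison can be reproduced in miniature: fit AR(p) by OLS over a common estimation sample for each candidate p, then pick the minimizers of AIC and BIC. A sketch on a simulated AR(2) series; the simulation and the Gaussian-likelihood information-criterion formulas are illustrative assumptions, not the paper's exact setup:

```python
import numpy as np

def ar_ic(y, max_lag):
    """Fit AR(p) by OLS for p = 1..max_lag on a common sample and
    return the lags minimizing AIC and BIC (Gaussian likelihood)."""
    n = len(y) - max_lag                 # common effective sample size
    aic, bic = {}, {}
    for p in range(1, max_lag + 1):
        X = np.column_stack([np.ones(n)] +
                            [y[max_lag - j: len(y) - j] for j in range(1, p + 1)])
        Y = y[max_lag:]
        beta, *_ = np.linalg.lstsq(X, Y, rcond=None)
        sigma2 = np.mean((Y - X @ beta) ** 2)
        k = p + 1                        # slope(s) plus intercept
        aic[p] = n * np.log(sigma2) + 2 * k
        bic[p] = n * np.log(sigma2) + k * np.log(n)
    return min(aic, key=aic.get), min(bic, key=bic.get)

rng = np.random.default_rng(1)
y = np.zeros(500)
for i in range(2, 500):                  # simulate a stationary AR(2) process
    y[i] = 0.5 * y[i - 1] - 0.3 * y[i - 2] + rng.normal()
p_aic, p_bic = ar_ic(y, max_lag=8)
```

Because BIC's per-parameter penalty log n exceeds AIC's 2 at any reasonable sample size, the BIC-selected lag never exceeds the AIC-selected one, which drives the parsimony differences the paper studies.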

10.
Forecasting in economic data analysis is dominated by linear prediction methods, where predicted values are calculated from a fitted linear regression model. With multiple predictor variables, multivariate nonparametric models have been proposed in the literature. However, empirical studies indicate that the prediction performance of multi-dimensional nonparametric models may be unsatisfactory. We propose a new semiparametric model average prediction (SMAP) approach to analyse panel data and investigate its prediction performance with numerical examples. Estimation of each individual covariate effect requires only univariate smoothing and thus may be more stable than previous multivariate smoothing approaches. The estimation of the optimal weight parameters incorporates the longitudinal correlation, and the asymptotic properties of the estimators are carefully studied in this paper.

11.
A set of longitudinal binary, partially incomplete, data on obesity among children in the USA is reanalysed. The multivariate Bernoulli distribution is parameterized by the univariate marginal probabilities and dependence ratios of all orders, which together support maximum likelihood inference. The temporal association of obesity is strong and complex but stationary. We fit a saturated model for the distribution of response patterns and find that non-response is missing completely at random for boys but that the probability of obesity is consistently higher among girls who provided incomplete records than among girls who provided complete records. We discuss the statistical and substantive features of, respectively, pattern mixture and selection models for this data set.

12.
In this paper a semi-parametric approach is developed to model non-linear relationships in time series data using polynomial splines. Polynomial splines require very few assumptions about the functional form of the underlying relationship, so they are very flexible and can be used to model highly non-linear relationships; they are also computationally very efficient. The serial correlation in the data is accounted for by modelling the noise as an autoregressive integrated moving average (ARIMA) process; by doing so, the efficiency of the nonparametric estimation is improved and correct inferences can be obtained. The explicit structure of the ARIMA model allows the correlation information to be used to improve forecasting performance. An algorithm is developed to automatically select and estimate the polynomial spline model and the ARIMA model through backfitting. The method is applied to a real-life data set to forecast hourly electricity usage, where the non-linear effect of temperature on hourly electricity usage is allowed to differ across hours of the day and days of the week. The forecasting performance of the developed method is evaluated in post-sample forecasting and compared with several well-accepted models. The results show that the performance of the proposed model is comparable with that of a long short-term memory deep learning model.
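The spline component alone can be sketched with a truncated power basis: global polynomial terms plus one truncated power per interior knot, fitted by least squares. This ignores the ARIMA error model and the backfitting step, and the data, knots, and temperature-load relationship below are invented for illustration:

```python
import numpy as np

def spline_basis(x, knots, degree=3):
    """Truncated-power-basis design matrix for a polynomial spline:
    global polynomial terms plus one truncated power per interior knot."""
    cols = [x ** d for d in range(degree + 1)]
    cols += [np.clip(x - k, 0, None) ** degree for k in knots]
    return np.column_stack(cols)

rng = np.random.default_rng(4)
temp = np.sort(rng.uniform(-5, 35, 300))             # hypothetical temperatures (C)
load = 50 + 0.02 * (temp - 18) ** 2 + rng.normal(scale=0.5, size=300)

X = spline_basis(temp, knots=[5, 15, 25])            # illustrative knot placement
coef, *_ = np.linalg.lstsq(X, load, rcond=None)
fitted = X @ coef                                    # smooth temperature effect
```

In the paper's full method, the residuals of such a fit would in turn be modelled as an ARIMA process, and the two components refined alternately by backfitting.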

13.
A Local Weighted Least Squares Solution for Variable-Weight Combination Forecasting Models
With continuing scientific and technological progress, forecasting methods have developed considerably, and dozens of methods are now in common use. Combination forecasting joins different forecasting methods so as to exploit the information each one provides; its performance is often superior to that of any single method, and it has therefore been widely applied. Building on the idea of varying-coefficient models, we study a combination forecasting model in which finding the variable weights is recast as estimating the coefficient functions of a varying-coefficient model. The weights can then be obtained by local weighted least squares, with the smoothing parameter chosen by cross-validation. The results show that the proposed method achieves high forecasting accuracy and outperforms the alternative methods.
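Entry 13 recasts variable-weight combination forecasting as a varying-coefficient problem: at each time point the combination weights are estimated by kernel-weighted (local) least squares of the actuals on the individual forecasts. A minimal sketch with a fixed bandwidth rather than cross-validation; all data are simulated, with two forecasters of different accuracy:

```python
import numpy as np

def varying_weights(y, f, t0, h):
    """Local weighted least squares estimate of combination weights at time t0:
    regress actuals on the individual forecasts with Gaussian kernel weights
    centred at t0 (bandwidth h), as in varying-coefficient models."""
    n = len(y)
    t = np.arange(n)
    w = np.exp(-0.5 * ((t - t0) / h) ** 2)           # kernel weights around t0
    W = np.diag(w)
    return np.linalg.solve(f.T @ W @ f, f.T @ W @ y)

rng = np.random.default_rng(2)
n = 200
truth = np.sin(np.linspace(0, 4, n))
f1 = truth + rng.normal(scale=0.1, size=n)           # accurate forecaster
f2 = truth + rng.normal(scale=0.3, size=n)           # noisier forecaster
F = np.column_stack([f1, f2])

w_hat = varying_weights(truth, F, t0=100, h=20.0)    # weights near the midpoint
combo = F @ w_hat                                    # combined forecast
```

As one would hope, the less noisy forecaster receives the larger weight; in the paper the bandwidth h is chosen by cross-validation rather than fixed.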

14.
We consider the forecasting of cointegrated variables, and we show that at long horizons nothing is lost by ignoring cointegration when forecasts are evaluated using standard multivariate forecast accuracy measures. In fact, simple univariate Box–Jenkins forecasts are just as accurate. Our results highlight a potentially important deficiency of standard forecast accuracy measures—they fail to value the maintenance of cointegrating relationships among variables—and we suggest alternatives that explicitly do so.

15.
Summary.  The literature on multivariate linear regression includes multivariate normal models, models that are used in survival analysis and a variety of models that are used in other areas such as econometrics. The paper considers the class of location–scale models, which includes a large proportion of the preceding models. It is shown that, for complete data, the maximum likelihood estimators for regression coefficients in a linear location–scale framework are consistent even when the joint distribution is misspecified. In addition, gains in efficiency arising from the use of a bivariate model, as opposed to separate univariate models, are studied. A major area of application for multivariate regression models is to clustered, 'parallel' lifetime data, so we also study the case of censored responses. Estimators of regression coefficients are no longer consistent under model misspecification, but we give simulation results that show that the bias is small in many practical situations. Gains in efficiency from bivariate models are also examined in the censored data setting. The methodology in the paper is illustrated by using lifetime data from the Diabetic Retinopathy Study.

16.
Nonparametric regression methods have been widely studied in functional regression analysis in the context of functional covariates and a univariate response, but this is not the case for functional covariates with a multivariate response. In this paper, we present two new solutions for the latter problem: the first directly extends the nonparametric method for a univariate response to a multivariate response, and the second incorporates the correlation among the different responses into the model. The asymptotic properties of the estimators are studied, and the effectiveness of the proposed methods is demonstrated through several simulation studies and a real data example.

17.
Quantitative model validation plays an increasingly important role in performance and reliability assessment of complex systems whenever computer modelling and simulation are involved. The foci of this paper are to pursue a Bayesian probabilistic approach to quantitative model validation with non-normal data, accounting for data uncertainty, and to investigate the impact of the normality assumption on validation accuracy. The Box–Cox transformation method is employed to convert the non-normal data, with the purpose of facilitating the overall validation assessment of computational models with higher accuracy. Explicit expressions for the interval-hypothesis-testing-based Bayes factor are derived for the transformed data in both the univariate and multivariate cases. A Bayesian confidence measure is presented based on the Bayes factor metric. A generalized procedure is proposed to implement the probabilistic methodology for model validation of complicated systems. The classical hypothesis testing method is employed to conduct a comparison study. The impact of the data normality assumption and of decision threshold variation on model assessment accuracy is investigated using both classical and Bayesian approaches. The proposed methodology and procedure are demonstrated with a univariate stochastic damage accumulation model, a multivariate heat conduction problem and a multivariate dynamic system.
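The Box–Cox step can be sketched directly: transform positive data by (x^λ − 1)/λ (log at λ = 0) and choose λ by maximizing the profile log-likelihood. A minimal grid-search version; the grid and data are illustrative, and the entry's Bayes-factor machinery is not reproduced:

```python
import numpy as np

def boxcox(x, lam):
    """Box-Cox transform of positive data: (x**lam - 1)/lam, or log(x) at lam = 0."""
    return np.log(x) if lam == 0 else (x ** lam - 1) / lam

def boxcox_mle(x, grid=None):
    """Pick lambda maximizing the Box-Cox profile log-likelihood over a grid."""
    if grid is None:
        grid = np.arange(-20, 21) * 0.1          # lambda in [-2, 2], step 0.1
    n = len(x)
    def loglik(lam):
        z = boxcox(x, lam)
        # Profile Gaussian log-likelihood plus the transform's Jacobian term.
        return -0.5 * n * np.log(np.var(z)) + (lam - 1) * np.log(x).sum()
    return max(grid, key=loglik)

rng = np.random.default_rng(3)
x = np.exp(rng.normal(size=2000))                # lognormal data: log normalizes it
lam_hat = boxcox_mle(x)
z = boxcox(x, lam_hat)                           # approximately normal after transform
```

For lognormal data the selected λ lands near 0, i.e. close to the log transform, after which normal-theory validation metrics become applicable.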

18.
Regression models with random effects are proposed for the joint analysis of negative binomial and ordinal longitudinal data with nonignorable missing values under a fully parametric framework. The presented model simultaneously considers a multivariate probit regression model for the missing mechanisms, which makes it possible to examine the missing data assumptions, and a multivariate mixed model for the responses. Random effects are used to take into account the correlation between longitudinal responses of the same individual. A full likelihood-based approach that yields maximum likelihood estimates of the model parameters is used. The model is applied to medical data obtained from an observational study on women, where the correlated responses are the ordinal response of osteoporosis of the spine and the negative binomial response counting instances of joint damage. The sensitivity of the results to the assumptions is also investigated, and the effects of several covariates on all responses are examined simultaneously.

19.
This paper analyzes the forecasting performance of an open economy dynamic stochastic general equilibrium (DSGE) model, estimated with Bayesian methods, for the Euro area during 1994Q1–2002Q4. We compare the DSGE model and a few variants of this model to various reduced-form forecasting models such as vector autoregressions (VARs) and vector error correction models (VECM), estimated both by maximum likelihood and two different Bayesian approaches, and traditional benchmark models, e.g., the random walk. The accuracy of point forecasts, interval forecasts and the predictive distribution as a whole are assessed in an out-of-sample rolling event evaluation using several univariate and multivariate measures. The results show that the open economy DSGE model compares well with more empirical models and thus that the tension between rigor and fit in older generations of DSGE models is no longer present. We also critically examine the role of Bayesian model probabilities and other frequently used low-dimensional summaries, e.g., the log determinant statistic, as measures of overall forecasting performance.

20.
Forecasting Performance of an Open Economy DSGE Model
Econometric Reviews, 2007, 26(2): 289–328
