Similar Documents
20 similar documents found
1.
In this study, we use wavelet analysis to construct a test statistic for the existence of a trend in a series. We also propose a new approach for testing the presence of a trend based on the periodogram of the data. Since we are also interested in the presence of a long-memory process in the data, we study the properties of our test statistics under different degrees of dependence. We compare the results of the band periodogram test and the wavelet test with those obtained by applying the ordinary least squares (OLS) method under the same conditions.
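A deterministic trend concentrates power at the lowest periodogram ordinates, which is the intuition behind the band periodogram test. Below is a minimal, hedged sketch of that idea in Python on simulated data; the cutoff k and the ratio form are illustrative choices, not the authors' exact statistic.

```python
# Illustrative band-periodogram trend check: compare average power in the
# lowest-frequency band against the rest of the spectrum.
import numpy as np
from scipy.signal import periodogram

rng = np.random.default_rng(0)
n = 512
y = 0.01 * np.arange(n) + rng.standard_normal(n)  # linear trend + white noise

freqs, pxx = periodogram(y, detrend=False)
k = 5                                  # width of the low-frequency "trend band"
ratio = pxx[1:1 + k].mean() / pxx[1 + k:].mean()
print(f"band power ratio: {ratio:.1f}")  # large values point to a trend
```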

2.
The heterogeneity of error variance often causes serious interpretive problems in linear regression analysis. Before taking any remedial measures we first need to detect this problem. A large number of diagnostic plots are now available in the literature for detecting heteroscedasticity of error variances. Among them the 'residuals' versus 'fits' (R–F) plot is very popular and commonly used. In the R–F plot residuals are plotted against the fitted responses, where both components are obtained using the ordinary least squares (OLS) method. It is now evident that OLS fits and residuals can be badly distorted by unusual observations, so the R–F plot may not exhibit the real scenario. Deletion residuals, based on a data set free from all unusual cases, should estimate the true errors better than the OLS residuals. In this paper we propose the 'deletion residuals' versus 'deletion fits' (DR–DF) plot for detecting heterogeneity of error variances in a linear regression model, giving a more convincing and reliable graphical display. Examples show that this plot locates unusual observations more clearly than the R–F plot. The advantage of using deletion residuals for detecting heteroscedasticity of error variance is investigated through Monte Carlo simulations under a variety of situations.
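For intuition, deletion residuals and deletion fits follow from standard leave-one-out identities. The sketch below (simulated heteroscedastic data) is a minimal illustration of a DR-DF style display, not the paper's exact construction, which works with a data set cleaned of all unusual cases jointly.

```python
# Minimal DR-DF style plot using the usual leave-one-out (PRESS) identities.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(1)
n = 100
X = np.column_stack([np.ones(n), rng.uniform(0, 10, n)])
y = X @ np.array([1.0, 2.0]) + rng.standard_normal(n) * (1 + 0.3 * X[:, 1])

H = X @ np.linalg.solve(X.T @ X, X.T)   # hat matrix
h = np.diag(H)
e = y - H @ y                           # OLS residuals
d = e / (1.0 - h)                       # deletion (PRESS) residuals
yfit_del = y - d                        # deletion fits: x_i' beta_(i)

plt.scatter(yfit_del, d, s=12)
plt.xlabel("deletion fits")
plt.ylabel("deletion residuals")
plt.title("DR-DF plot (a fan shape suggests heteroscedasticity)")
plt.show()
```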

3.
An exact maximum likelihood method is developed for the estimation of parameters in a non-Gaussian nonlinear density function that depends on a latent Gaussian dynamic process with long-memory properties. Our method relies on importance sampling and on a linear Gaussian approximating model from which the latent process can be simulated. Given the presence of a latent long-memory process, we require a modification of the importance sampling technique. In particular, the long-memory process needs to be approximated by a finite dynamic linear process. Two possible approximations are discussed and compared with each other. We show that an autoregression obtained by minimizing mean squared prediction errors leads to an effective and feasible method. In our empirical study, we analyze ten daily log-return series from the S&P 500 stock index using univariate and multivariate long-memory stochastic volatility models. We compare the in-sample and out-of-sample performance of a number of models within the class of long-memory stochastic volatility models.

4.
In the presence of collinearity, certain biased estimation procedures such as ridge regression, the generalized inverse estimator, principal component regression, the Liu estimator, or improved ridge and Liu estimators are used to improve on ordinary least squares (OLS) estimates in the linear regression model. In this paper the biased Liu estimator and the almost unbiased (improved) Liu estimator, together with their residuals, are analyzed and compared with the OLS residuals in terms of mean squared error.
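For reference, the Liu estimator has the closed form b_d = (X'X + I)^(-1)(X'X + dI) b_OLS with shrinkage parameter d in (0, 1). A minimal sketch on simulated collinear data follows; the choice d = 0.5 is arbitrary.

```python
# Sketch of the Liu estimator under collinearity (d = 0.5 chosen arbitrarily).
import numpy as np

rng = np.random.default_rng(2)
n, p, d = 50, 3, 0.5
X = rng.standard_normal((n, p))
X[:, 2] = X[:, 1] + 0.01 * rng.standard_normal(n)   # induce collinearity
y = X @ np.ones(p) + rng.standard_normal(n)

XtX = X.T @ X
beta_ols = np.linalg.solve(XtX, X.T @ y)
I = np.eye(p)
beta_liu = np.linalg.solve(XtX + I, (XtX + d * I) @ beta_ols)

print("OLS residual SS:", np.sum((y - X @ beta_ols) ** 2).round(2))
print("Liu residual SS:", np.sum((y - X @ beta_liu) ** 2).round(2))
```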

5.
Control charts for residuals, based on the regression model, require a robust fitting technique that minimizes the error of the fitted model. In the multivariate case, however, when the number of variables is high and the data become complex, traditional fitting techniques such as ordinary least squares (OLS) lose efficiency. In this paper, support vector regression (SVR) is used to construct robust control charts for residuals, called SVR-charts. This choice rests on the fact that SVR is designed to minimize the structural error, whereas other techniques minimize the empirical error. An application shows that the SVR method gives competitive results compared with OLS and the partial least squares method, in terms of the standard deviation of the prediction error and the standard error of performance. A sensitivity study evaluating SVR-chart performance via the average run length (ARL) shows that the SVR-chart has the best ARL behaviour among the residual control charts considered.
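A hedged sketch of the general idea follows: an SVR fit whose residuals are monitored with Shewhart-style 3-sigma limits. The paper's chart design and tuning may differ; the kernel and hyperparameters below are assumptions.

```python
# Sketch of an SVR-based residual control chart with 3-sigma limits.
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(3)
X = rng.uniform(-2, 2, size=(200, 4))
y = X @ np.array([1.0, -0.5, 0.3, 0.8]) + 0.2 * rng.standard_normal(200)

model = SVR(kernel="rbf", C=10.0, epsilon=0.05).fit(X, y)
resid = y - model.predict(X)

center, sigma = resid.mean(), resid.std(ddof=1)
ucl, lcl = center + 3 * sigma, center - 3 * sigma
out_of_control = np.where((resid > ucl) | (resid < lcl))[0]
print("points outside control limits:", out_of_control)
```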

6.
We formulate and evaluate weighted least squares (WLS) and ordinary least squares (OLS) procedures for estimating the parametric mean-value function of a nonhomogeneous Poisson process. We focus the development on processes having an exponential rate function, where the exponent may include a polynomial component or some trigonometric components. Unanticipated problems with the WLS procedure are explained by an analysis of the associated residuals. The OLS procedure is based on a square root transformation of the 'detrended' event (arrival) times, that is, the fitted mean-value function evaluated at the observed event times; under appropriate conditions, the corresponding residuals are proved to converge weakly to a normal distribution with mean 0 and variance 0.25. The results of a Monte Carlo study indicate the advantages of the OLS procedure with respect to estimation accuracy and computational efficiency.
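As a rough illustration (a simplified, assumed form, not the authors' exact formulation), one can exploit the fact that the fitted mean-value function evaluated at the i-th event time should be close to i, and fit the rate parameters by least squares after the square root transformation:

```python
# Hedged sketch: OLS-type fit of an NHPP with rate lambda(t) = exp(a + b*t),
# matching sqrt(i) to sqrt(mu(t_i)) at the observed event times t_i.
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(4)
a0, b0, T = 0.5, 0.02, 100.0
lam_max = np.exp(a0 + b0 * T)
cand = np.sort(rng.uniform(0, T, rng.poisson(lam_max * T)))   # thinning step
t = cand[rng.uniform(size=cand.size) < np.exp(a0 + b0 * cand) / lam_max]
i = np.arange(1, t.size + 1)

def resid(theta):
    a, b = theta
    mu = (np.exp(a) / b) * (np.exp(b * t) - 1.0)  # integral of exp(a + b*s)
    return np.sqrt(i) - np.sqrt(mu)

fit = least_squares(resid, x0=[0.0, 0.01], bounds=([-5, 1e-6], [5, 1.0]))
print("estimated (a, b):", fit.x.round(3))        # should be near (0.5, 0.02)
```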

7.
This paper considers residuals for time series regression. Despite much literature on visual diagnostics for uncorrelated data, there is little on the autocorrelated case. To examine various aspects of the fitted time series regression model, three residuals are considered. The fitted regression model can be checked using orthogonal residuals; the time series error model can be analysed using marginal residuals; and the white noise error component can be tested using conditional residuals. Used together, these residuals allow identification of outliers, model mis-specification and mean shifts. Because conditional residuals are sensitive to model mis-specification, it is suggested that the orthogonal and marginal residuals be examined first.

8.
In heteroskedastic regression models, the least squares (OLS) covariance matrix estimator is inconsistent and inference is not reliable. To deal with the inconsistency one can estimate the regression coefficients by OLS and then implement a heteroskedasticity consistent covariance matrix (HCCM) estimator. Unfortunately the HCCM estimator is biased. The bias is reduced by implementing a robust regression and using the robust residuals to compute the HCCM estimator (RHCCM). A Monte Carlo study analyzes the behavior of RHCCM and of other HCCM estimators in the presence of systematic and random heteroskedasticity and of outliers in the explanatory variables.
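The sketch below shows one way such an RHCCM-type estimator can be assembled: an assumed HC0-style sandwich form fed with Huber-robust residuals. The paper's exact variant may differ.

```python
# Hedged RHCCM sketch: robust (Huber) residuals inside an HC0 sandwich.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(5)
n = 200
x = rng.standard_normal(n)
X = sm.add_constant(x)
y = 1.0 + 2.0 * x + rng.standard_normal(n) * (1 + np.abs(x))  # heteroskedastic

rlm = sm.RLM(y, X, M=sm.robust.norms.HuberT()).fit()
u = y - X @ rlm.params                        # robust residuals
XtX_inv = np.linalg.inv(X.T @ X)
meat = X.T @ (X * (u ** 2)[:, None])
rhccm = XtX_inv @ meat @ XtX_inv              # HC0 with robust residuals
print("robust std. errors:", np.sqrt(np.diag(rhccm)).round(4))
```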

9.
The paper considers the fitting of polynomial trends to data when the residuals are autocorrelated. Although OLS is asymptotically efficient, it can be quite inefficient in small samples. Hence it is suggested that a test for autocorrelation be carried out, and to this end we present a table of exact critical values of the Durbin-Watson test for this model.
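For illustration, the statistic itself is easy to compute on the residuals of a polynomial trend fit. Note that the exact critical values tabulated in the paper are specific to this model; the sketch below, on simulated AR(1) errors, only reports the statistic.

```python
# Polynomial trend fit by OLS, then the Durbin-Watson statistic on residuals.
import numpy as np
from statsmodels.stats.stattools import durbin_watson

rng = np.random.default_rng(6)
n = 100
t = np.arange(n, dtype=float)
e = np.zeros(n)
for k in range(1, n):                       # AR(1) errors, rho = 0.6
    e[k] = 0.6 * e[k - 1] + rng.standard_normal()
y = 1.0 + 0.5 * t + 0.01 * t ** 2 + e

X = np.column_stack([np.ones(n), t, t ** 2])
resid = y - X @ np.linalg.lstsq(X, y, rcond=None)[0]
# Values well below 2 flag positive autocorrelation.
print("Durbin-Watson:", round(durbin_watson(resid), 3))
```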

10.
In measuring financial risk, the choice of fitted distribution directly affects the precision of the risk measure. To capture the dynamic behavior of financial return series, we introduce the generalized hyperbolic skew Student's t distribution into the stochastic volatility model (SV-GHSKt) to fit features of financial returns such as heavy tails, asymmetry, and leverage effects. Using Markov chain Monte Carlo simulation, the return series is transformed into a series of standardized residuals, whose tail distribution is then fitted with the peaks-over-threshold (POT) model of extreme value theory, yielding a new financial risk measurement model: a dynamic VaR model based on SV-GHSKt-POT. An empirical study of the Shanghai Composite Index shows that the SV-GHSKt-POT dynamic VaR model captures the heavy tails, volatility clustering, and leverage effects of financial return series well and effectively improves the precision of risk measurement, especially at high confidence levels.
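For illustration, the POT step alone can be sketched as follows. Simulated Student-t draws stand in for the standardized residuals produced by the SV-GHSKt model's MCMC step; the VaR formula is the standard GPD tail estimator.

```python
# POT sketch: fit a generalized Pareto distribution to threshold exceedances
# of the loss tail and read off a high-confidence VaR.
import numpy as np
from scipy.stats import genpareto

rng = np.random.default_rng(7)
z = rng.standard_t(df=4, size=5000)      # stand-in for standardized residuals
losses = -z                              # work with the loss tail
u = np.quantile(losses, 0.95)            # POT threshold
exc = losses[losses > u] - u

xi, _, beta = genpareto.fit(exc, floc=0)  # shape xi, scale beta
p, n_u, n = 0.99, exc.size, losses.size
var_p = u + (beta / xi) * (((n / n_u) * (1 - p)) ** (-xi) - 1)
print(f"POT VaR at {p:.0%}: {var_p:.3f}")
```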

11.
We consider a sieve bootstrap procedure to quantify the estimation uncertainty of long-memory parameters in stationary functional time series. We use a semiparametric local Whittle estimator to estimate the long-memory parameter. In the local Whittle estimator, the discrete Fourier transform and periodogram are constructed from the first set of principal component scores obtained via functional principal component analysis. The sieve bootstrap procedure uses a general vector autoregressive representation of the estimated principal component scores and generates bootstrap replicates that adequately mimic the dependence structure of the underlying stationary process. We first compute the estimated first set of principal component scores for each bootstrap replicate and then apply the semiparametric local Whittle estimator to estimate the memory parameter. By taking quantiles of the estimated memory parameters from these bootstrap replicates, we can nonparametrically construct confidence intervals for the long-memory parameter. As measured by the differences between empirical and nominal coverage probabilities at three levels of significance, we demonstrate the advantage of the sieve bootstrap over asymptotic confidence intervals based on normality.
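A minimal sketch of the semiparametric local Whittle step, using the standard Robinson-type objective, follows; the functional PCA and sieve bootstrap layers of the paper are omitted here.

```python
# Local Whittle estimator of the memory parameter d from the first m
# periodogram ordinates.
import numpy as np
from scipy.optimize import minimize_scalar

def local_whittle(x, m):
    """Estimate d by minimizing the local Whittle objective."""
    n = x.size
    lam = 2 * np.pi * np.arange(1, m + 1) / n
    I = np.abs(np.fft.fft(x - x.mean())[1:m + 1]) ** 2 / (2 * np.pi * n)
    def R(d):
        return np.log(np.mean(I * lam ** (2 * d))) - 2 * d * np.mean(np.log(lam))
    return minimize_scalar(R, bounds=(-0.49, 0.99), method="bounded").x

rng = np.random.default_rng(8)
x = rng.standard_normal(1024)                     # white noise: true d = 0
print("local Whittle d:", round(local_whittle(x, m=64), 3))
```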

12.
It is well known that prior application of GLS detrending, as advocated by Elliott et al. [Elliott, G., Rothenberg, T., Stock, J. (1996). Efficient tests for an autoregressive unit root. Econometrica 64:813–836], can produce a significant increase in power to reject the unit root null over that obtained from a conventional OLS-based Dickey and Fuller [Dickey, D., Fuller, W. (1979). Distribution of the estimators for autoregressive time series with a unit root. J. Am. Statist. Assoc. 74:427–431] testing equation. However, this paper employs Monte Carlo simulation to demonstrate that this increase in power is not necessarily obtained when breaks occur in either level or trend. It is found that neither OLS- nor GLS-based tests are robust to level or trend breaks, their size and power properties both deteriorating as the break size increases.
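For reference, the GLS detrending step for the constant-only case can be sketched as below; Elliott et al. use the non-centrality value c-bar = -7 for this case (and -13.5 when a trend is included). The subsequent DF-GLS regression is omitted.

```python
# ERS-style GLS detrending, constant-only case (c-bar = -7).
import numpy as np

def gls_detrend(y, cbar=-7.0):
    n = y.size
    a = 1.0 + cbar / n
    # quasi-difference the series and the deterministic term (a constant here)
    yq = np.concatenate([[y[0]], y[1:] - a * y[:-1]])
    zq = np.concatenate([[1.0], np.full(n - 1, 1.0 - a)])
    beta = (zq @ yq) / (zq @ zq)               # GLS estimate of the mean
    return y - beta                            # detrended series

rng = np.random.default_rng(9)
y = 5.0 + np.cumsum(rng.standard_normal(200))  # random walk around level 5
yd = gls_detrend(y)                            # feed yd into a DF regression
print("detrended mean:", yd.mean().round(3))
```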

13.
It is common in linear regression models that the error variances differ across observations and that some high-leverage data points are present. In such situations, the available literature advocates the use of heteroscedasticity consistent covariance matrix estimators (HCCME) for testing the regression coefficients. Such estimators are primarily based on residuals derived from the ordinary least squares (OLS) estimator, which can itself be seriously inefficient in the presence of heteroscedasticity. Many efficient estimators, notably the adaptive estimators, are available, but their performance has not yet been evaluated when heteroscedasticity is accompanied by high-leverage data. In this article, the presence of high-leverage data is taken into account to evaluate the performance of the adaptive estimator in terms of efficiency. Furthermore, our numerical work evaluates the performance of robust standard errors based on this efficient estimator in terms of interval estimation and null rejection rate (NRR).

14.
We study the persistence of intertrade durations, counts (numbers of transactions in equally spaced intervals of clock time), squared returns and realized volatility in 10 stocks trading on the New York Stock Exchange. A semiparametric analysis reveals the presence of long memory in all of these series, with potentially the same memory parameter. We introduce a parametric latent-variable long-memory stochastic duration (LMSD) model, which is shown to fit the data better than the autoregressive conditional duration (ACD) model in a variety of ways. The empirical evidence we present here is in agreement with theoretical results on the propagation of memory from durations to counts and realized volatility presented in Deo et al. (2009).

15.
In statistical and econometric practice it is not uncommon to find that regression parameter estimates obtained using estimated generalized least squares (EGLS) do not differ much from those obtained through ordinary least squares (OLS), even when the assumption of spherical errors is violated. To investigate if one could ignore non-spherical errors, and legitimately continue with OLS estimation under the non-spherical disturbance setting, Banerjee and Magnus (1999) developed statistics to measure the sensitivity of the OLS estimator to covariance misspecification. Wan et al. (2007) generalized this work by allowing for linear restrictions on the regression parameters. This paper extends the aforementioned studies by exploring the sensitivity of the equality restrictions pre-test estimator to covariance misspecification. We find that the pre-test estimators can be very sensitive to covariance misspecification, and the degree of sensitivity of the pre-test estimator often lies between that of its unrestricted and restricted components. In addition, robustness to non-normality is investigated. It is found that existing results remain valid if elliptically symmetric, instead of normal, errors are assumed.

16.
We consider a generalized exponential (GEXP) model in the frequency domain for modeling seasonal long-memory time series. This model generalizes the fractional exponential (FEXP) model [Beran, J., 1993. Fitting long-memory models by generalized linear regression. Biometrika 80, 817–822] to allow the singularity in the spectral density occurring at an arbitrary frequency for modeling persistent seasonality and business cycles. Moreover, the short-memory structure of this model is characterized by the Bloomfield [1973. An exponential model for the spectrum of a scalar time series. Biometrika 60, 217–226] model, which has a fairly flexible semiparametric form. The proposed model includes fractionally integrated processes, Bloomfield models, FEXP models as well as GARMA models [Gray, H.L., Zhang, N.-F., Woodward, W.A., 1989. On generalized fractional processes. J. Time Ser. Anal. 10, 233–257] as special cases. We develop a simple regression method for estimating the seasonal long-memory parameter. The asymptotic bias and variance of the corresponding long-memory estimator are derived. Our methodology is applied to a sunspot data set and an Internet traffic data set for illustration.

17.
18.
The classical growth curve model is considered when one continuous characteristic is measured at q time points. The covariance-adjusted estimator of the growth curve parameters is the OLS estimator adjusted using analysis of covariance, with covariates obtained from functions of within-individual error contrasts. REML estimators, on the other hand, emerge from maximizing the likelihood of the OLS residuals. We compare the efficiency of growth curve parameter estimators obtained by REML with that of covariance-adjusted least squares estimators with covariates selected via CAIC.

19.
We propose a bivariate integer-valued fractionally integrated moving average (BINFIMA) model to account for the long-memory property and apply the model to high-frequency stock transaction data. The BINFIMA model allows for both positive and negative correlations between the counts. The unconditional and conditional first- and second-order moments are given. The model is capable of capturing the covariance between and within intra-day time series of high-frequency transaction data due to macroeconomic news and news related to a specific stock. Empirically, Ericsson B is found to follow a mean-reverting process while AstraZeneca exhibits the long-memory property.

20.
Nonstationary time series are frequently detrended in empirical investigations by regressing the series on time or a function of time. The effects of detrending on tests for causal relationships in the sense of Granger are investigated using quarterly U.S. data. The causal relationships between nominal or real GNP and M1, inferred from the Granger–Sims tests, are shown to depend very much on, among other factors, whether or not the series are detrended. Detrending tends to remove or weaken causal relationships; conversely, failure to detrend tends to introduce or enhance causal relationships. The study suggests that we need a more robust test or a better definition of causality.
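The mechanism is easy to reproduce on simulated data: two independent series sharing a deterministic trend can appear causally linked until detrended. A hedged sketch follows (assumed data, not the GNP/M1 series used in the paper).

```python
# Granger tests on trending series versus the same series after detrending.
import numpy as np
import statsmodels.api as sm
from statsmodels.tsa.stattools import grangercausalitytests

rng = np.random.default_rng(10)
n = 200
t = np.arange(n, dtype=float)
x = 0.05 * t + rng.standard_normal(n)          # independent series sharing
y = 0.05 * t + rng.standard_normal(n)          # a common deterministic trend

def detrend(z):
    T = sm.add_constant(t)
    return z - T @ np.linalg.lstsq(T, z, rcond=None)[0]

for label, data in [("raw", np.column_stack([y, x])),
                    ("detrended", np.column_stack([detrend(y), detrend(x)]))]:
    res = grangercausalitytests(data, maxlag=4, verbose=False)
    p = res[4][0]["ssr_ftest"][1]               # F-test p-value at 4 lags
    print(f"{label}: p-value (x -> y) = {p:.3f}")
```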
