Similar articles
 20 similar articles found.
1.
Traditionally, time series analysis involves building an appropriate model and using either parametric or nonparametric methods to make inference about the model parameters. Motivated by recent developments for dimension reduction in time series, this article presents an empirical application of sufficient dimension reduction (SDR) to nonlinear time series modelling. We use the time series central subspace as a tool for SDR and estimate it with a mutual information index. In particular, to reduce the computational burden in time series, we propose an efficient method for estimating the minimal dimension and lag via a modified Schwarz–Bayesian criterion when either the dimension or the lag is unknown. Simulations and real data analysis show that the proposed approach performs well in autoregression and volatility estimation.
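A minimal sketch of the lag-selection component only, assuming a leave-one-out Nadaraya–Watson fit of y_t on its last p values and a Schwarz/BIC-style penalty; the mutual-information estimate of the central subspace and the paper's exact criterion are not reproduced, and the bandwidth rule, penalty form, and simulated model below are illustrative choices (Python/NumPy):

import numpy as np

def loo_nw_rss(X, y, h):
    """Leave-one-out Nadaraya-Watson residual sum of squares (Gaussian kernel)."""
    D = ((X[:, None, :] - X[None, :, :]) / h) ** 2
    K = np.exp(-0.5 * D.sum(-1))
    np.fill_diagonal(K, 0.0)                       # leave-one-out
    return np.sum((y - K @ y / K.sum(1)) ** 2)

def select_lag(y, max_p=6):
    """Grid search over the lag p with an SBC-style penalised fit criterion."""
    scores = {}
    for p in range(1, max_p + 1):
        X = np.column_stack([y[p - j - 1:len(y) - j - 1] for j in range(p)])
        yt = y[p:]
        n = len(yt)
        h = 1.06 * yt.std() * n ** (-1.0 / (4 + p))   # rough rule-of-thumb bandwidth
        scores[p] = n * np.log(loo_nw_rss(X, yt, h) / n) + p * np.log(n)
    return min(scores, key=scores.get), scores

rng = np.random.default_rng(0)
y = np.zeros(500)
for t in range(2, 500):                            # simulated nonlinear AR(2)
    y[t] = 0.6 * np.tanh(y[t - 1]) - 0.3 * y[t - 2] + rng.normal(scale=0.5)
print(select_lag(y)[0])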

2.
Dimension reduction in regression is an efficient way of overcoming the curse of dimensionality in nonparametric regression. Motivated by recent developments for dimension reduction in time series, this paper presents an empirical extension of the central mean subspace for time series to a single-input transfer function model. We use the central mean subspace as a dimension reduction tool for bivariate time series when the dimension and lag are known, and estimate it with the Nadaraya–Watson kernel smoother. We further develop a data-dependent approach, based on a modified Schwarz Bayesian criterion, to estimate the unknown dimension and lag. Finally, we show that the approach works well for bivariate time series through an expository demonstration, two simulations, and a real data analysis of the El Niño and fish population series.
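As a rough illustration of the Nadaraya–Watson step, the sketch below kernel-smooths the output series on the projection of the lagged input vector onto a central-mean-subspace direction; the direction is taken as known here, whereas the paper estimates it, and all model choices are illustrative:

import numpy as np

def nw_predict(U_train, y_train, U_new, h):
    """Gaussian-kernel Nadaraya-Watson regression on projected predictors."""
    D = ((U_new[:, None, :] - U_train[None, :, :]) / h) ** 2
    K = np.exp(-0.5 * D.sum(-1))
    return K @ y_train / K.sum(1)

rng = np.random.default_rng(1)
n, p = 500, 3
x = rng.normal(size=n + p)                                  # exogenous input series
lags = np.column_stack([x[p - j - 1:n + p - j - 1] for j in range(p)])
beta = np.array([1.0, 0.5, 0.0]); beta /= np.linalg.norm(beta)
y = np.sin(lags @ beta) + 0.1 * rng.normal(size=n)          # single-index transfer model
U = (lags @ beta)[:, None]                                  # projection onto the subspace
print(nw_predict(U[:400], y[:400], U[400:], h=0.3)[:5])     # fitted conditional means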

3.
Time series are often affected by interventions such as strikes, earthquakes, or policy changes. In this paper, we build a practical nonparametric intervention model using the central mean subspace in time series. We estimate the central mean subspace for time series subject to known interventions using the Nadaraya–Watson kernel estimator, and we use a modified Bayesian information criterion to estimate the unknown lag and dimension. Finally, we demonstrate that this nonparametric approach for intervened time series performs well in simulations and in a real data analysis of monthly average oxidant levels.

4.
We propose a new method for dimension reduction in regression using the first two inverse moments. We develop corresponding weighted chi-squared tests for the dimension of the regression. The proposed method considers linear combinations of sliced inverse regression (SIR) and a method based on a new candidate matrix designed to recover the entire inverse second-moment subspace. The optimal combination may be selected on the basis of the p-values derived from the dimension tests. Theoretically, the proposed method, like sliced average variance estimation (SAVE), is more capable of recovering the complete central dimension reduction subspace than SIR and principal Hessian directions (pHd), and can therefore substitute for SIR, pHd, SAVE, or any linear combination of them at a theoretical level. A simulation study indicates that the proposed method may have consistently greater power than SIR, pHd, and SAVE.
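For reference, the sketch below implements plain sliced inverse regression together with the classical sequential chi-squared dimension test; the article's second-inverse-moment candidate matrix and the weighted chi-squared combination test are not reproduced, so this shows only the first-moment ingredient the method builds on:

import numpy as np
from scipy.stats import chi2

def sir(X, y, n_slices=10):
    """Classical SIR: eigenvalues and directions of the sliced-mean matrix."""
    n, p = X.shape
    Xc = X - X.mean(0)
    Sigma = Xc.T @ Xc / n
    evals, evecs = np.linalg.eigh(Sigma)
    Sinv_half = evecs @ np.diag(evals ** -0.5) @ evecs.T
    Z = Xc @ Sinv_half                               # standardised predictors
    order = np.argsort(y)
    M = np.zeros((p, p))
    for s in np.array_split(order, n_slices):
        m = Z[s].mean(0)
        M += len(s) / n * np.outer(m, m)
    lam, V = np.linalg.eigh(M)
    lam, V = lam[::-1], V[:, ::-1]                   # descending order
    return lam, Sinv_half @ V                        # directions in the X scale

def sir_dimension_test(X, y, n_slices=10, alpha=0.05):
    """Sequential chi-squared test for the SIR dimension (cf. Li, 1991)."""
    n, p = X.shape
    lam, _ = sir(X, y, n_slices)
    for d in range(p):
        stat = n * lam[d:].sum()
        df = (p - d) * (n_slices - d - 1)
        if chi2.sf(stat, df) > alpha:                # cannot reject dimension d
            return d
    return p

rng = np.random.default_rng(0)
n = 600
X = rng.normal(size=(n, 6))
y = X[:, 0] + 0.5 * (X[:, 1] + X[:, 2]) ** 2 + 0.3 * rng.normal(size=n)
print(sir_dimension_test(X, y))    # SIR typically detects only the linear direction here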

5.
A new method is proposed for estimating the dimension of a regression at the outset of an analysis. A linear subspace spanned by projections of the regressor vector X, which contains part or all of the modelling information for the regression of a vector Y on X, and its dimension are estimated by means of parametric inverse regression. Smooth parametric curves are fitted to the p inverse regressions via a multivariate linear model, and no restrictions are placed on the distribution of the regressors. The estimate of the dimension of the regression is based on optimal estimation procedures. A simulation study shows the method to be more powerful than sliced inverse regression in some situations.
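A rough sketch of the parametric inverse regression idea, assuming a cubic polynomial basis in the response and omitting the paper's optimal weighting and dimension test; the span of the fitted coefficient matrix from the multivariate linear model estimates the reduction subspace:

import numpy as np

def parametric_inverse_regression(X, y, degree=3):
    n, p = X.shape
    Xc = X - X.mean(0)
    Sigma = Xc.T @ Xc / n
    evals, evecs = np.linalg.eigh(Sigma)
    Sinv_half = evecs @ np.diag(evals ** -0.5) @ evecs.T
    Z = Xc @ Sinv_half                                   # standardised predictors
    ys = (y - y.mean()) / y.std()
    F = np.column_stack([ys ** k for k in range(1, degree + 1)])
    F = F - F.mean(0)                                    # polynomial basis in the response
    B_hat, *_ = np.linalg.lstsq(F, Z, rcond=None)        # multivariate linear inverse fit
    U, s, Vt = np.linalg.svd(B_hat, full_matrices=False)
    return s, Sinv_half @ Vt.T                           # singular values, directions in X scale

rng = np.random.default_rng(2)
n = 500
X = rng.normal(size=(n, 5))
y = np.exp(X[:, 0]) + 0.5 * rng.normal(size=n)
s, dirs = parametric_inverse_regression(X, y)
print(np.round(s, 2))                                    # one dominant singular value expected
print(np.round(dirs[:, 0] / np.linalg.norm(dirs[:, 0]), 2))   # ~ e_1 up to sign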

6.
SAVE and PHD are effective methods for dimension reduction problems. Both methods rest on two assumptions: the linearity condition and the constant covariance condition. When the constant covariance condition fails, however, even if the linearity condition holds, SAVE and PHD often pick directions that lie outside the central subspace (CS) or the central mean subspace (CMS). In this article, we generalize SAVE and PHD under weaker conditions. This generalization makes it possible to obtain correct estimates of the CS and the CMS.
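The sketch below implements response-based principal Hessian directions in its standard form, as a point of reference; the weaker-condition generalizations proposed in the article are not reproduced:

import numpy as np

def phd(X, y):
    """Response-based principal Hessian directions."""
    n, p = X.shape
    Xc = X - X.mean(0)
    Sigma = Xc.T @ Xc / n
    evals, evecs = np.linalg.eigh(Sigma)
    Sinv_half = evecs @ np.diag(evals ** -0.5) @ evecs.T
    Z = Xc @ Sinv_half
    yc = y - y.mean()
    H = (Z * yc[:, None]).T @ Z / n              # response-weighted second-moment matrix
    lam, V = np.linalg.eigh(H)
    order = np.argsort(-np.abs(lam))             # rank directions by absolute eigenvalue
    return lam[order], Sinv_half @ V[:, order]

rng = np.random.default_rng(3)
n = 800
X = rng.normal(size=(n, 4))
y = (X[:, 0] + X[:, 1]) ** 2 + 0.2 * rng.normal(size=n)   # purely quadratic signal
lam, B = phd(X, y)
print(np.round(lam, 2))
print(np.round(B[:, 0] / np.linalg.norm(B[:, 0]), 2))     # ~ (1,1,0,0)/sqrt(2) up to sign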

7.
This paper deals with the nonparametric estimation of the mean and variance functions of univariate time series data. We propose a nonparametric dimension reduction technique for both the mean and the variance functions of a time series. The method does not require any model specification; instead, we seek directions in both the mean and variance functions such that the conditional distribution of the current observation given the vector of past observations is the same as that given a few linear combinations of the past observations, without loss of inferential information. The directions of the mean and variance functions are estimated by maximizing the Kullback–Leibler distance function, and the consistency of the proposed estimators is established. A computational procedure is introduced to detect the lags of the conditional mean and variance functions in practice. Numerical examples and simulation studies illustrate and evaluate the performance of the proposed estimators.

8.
In this article, we propose sparse sufficient dimension reduction as a novel method for discovering the Markov blanket of a target variable, without imposing any distributional assumption on the variables. Assuming sparsity of the basis of the central subspace, we develop a penalized loss-function estimator based on the high-dimensional covariance matrix. A coordinate descent algorithm based on an inverse regression is used to obtain the sparse basis of the central subspace. The finite-sample behavior of the proposed method is explored through a simulation study and real data examples.
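A heavily simplified sketch of the idea rather than the paper's algorithm: a dense first-pass SIR direction is re-expressed sparsely by an L1-penalized refit (scikit-learn's Lasso, which uses coordinate descent), and the surviving variables are read as candidate members of the Markov blanket; the penalty level is an illustrative choice:

import numpy as np
from sklearn.linear_model import Lasso

def sir_first_direction(X, y, n_slices=10):
    """Leading sliced-inverse-regression direction (dense first pass)."""
    n, p = X.shape
    Xc = X - X.mean(0)
    Sigma = Xc.T @ Xc / n
    evals, evecs = np.linalg.eigh(Sigma)
    Sinv_half = evecs @ np.diag(evals ** -0.5) @ evecs.T
    Z = Xc @ Sinv_half
    order = np.argsort(y)
    M = np.zeros((p, p))
    for s in np.array_split(order, n_slices):
        m = Z[s].mean(0)
        M += len(s) / n * np.outer(m, m)
    _, V = np.linalg.eigh(M)
    return Sinv_half @ V[:, -1]

rng = np.random.default_rng(4)
n, p = 400, 20
X = rng.normal(size=(n, p))
y = np.tanh(X[:, 0] - X[:, 3]) + 0.2 * rng.normal(size=n)

beta = sir_first_direction(X, y)
score = X @ beta                                    # dense first-pass index
sparse = Lasso(alpha=0.2).fit(X, score)             # L1 refit via coordinate descent
print(np.nonzero(sparse.coef_)[0])                  # typically only variables 0 and 3 remain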

9.
In the field of financial time series, threshold-asymmetric conditional variance models can be used to explain asymmetric volatilities [C.W. Li and W.K. Li, On a double-threshold autoregressive heteroscedastic time series model, J. Appl. Econometrics 11 (1996), pp. 253–274]. In this paper, we consider a broad class of threshold-asymmetric GARCH processes (TAGARCH, hereafter), including standard ARCH and GARCH models as special cases. Since the sample autocorrelation function provides useful information for identifying an appropriate time series model for the data, we derive asymptotic distributions of the sample autocorrelations for both the original process and the squared process. It is verified that the standard errors of the sample autocorrelations for TAGARCH models differ significantly from unity at lower lags and converge exponentially to unity at higher lags. Furthermore, they are shown to be asymptotically dependent, while being independent for standard GARCH models. These results are interesting in light of the fact that TAGARCH processes are serially uncorrelated. A simulation study is reported to illustrate our results.
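The sketch below simulates a threshold-asymmetric GARCH(1,1) of GJR type as a stand-in for the TAGARCH class and computes the sample autocorrelations of the original and squared processes; the asymptotic standard errors derived in the paper are not reproduced:

import numpy as np

def simulate_tagarch(n, omega=0.05, alpha=0.05, gamma=0.10, beta=0.85, burn=500, seed=0):
    """GJR-type threshold-asymmetric GARCH(1,1) with Gaussian innovations."""
    rng = np.random.default_rng(seed)
    e = np.zeros(n + burn)
    sig2 = np.full(n + burn, omega / (1 - alpha - gamma / 2 - beta))  # unconditional variance
    for t in range(1, n + burn):
        asym = alpha + gamma * (e[t - 1] < 0)            # extra weight after negative shocks
        sig2[t] = omega + asym * e[t - 1] ** 2 + beta * sig2[t - 1]
        e[t] = np.sqrt(sig2[t]) * rng.standard_normal()
    return e[burn:]

def sample_acf(x, max_lag=20):
    x = x - x.mean()
    denom = np.dot(x, x)
    return np.array([np.dot(x[:-k], x[k:]) / denom for k in range(1, max_lag + 1)])

e = simulate_tagarch(5000)
print(np.round(sample_acf(e, 5), 3))        # near zero: the process is serially uncorrelated
print(np.round(sample_acf(e ** 2, 5), 3))   # clearly positive: volatility clustering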

10.
Sliced average variance estimation (SAVE) is one of many methods for estimating the central subspace. It has been shown to be more comprehensive than sliced inverse regression in the sense that it consistently estimates the central subspace under mild conditions, whereas sliced inverse regression may estimate only a proper subset of the central subspace. In this paper we extend the method to regressions with qualitative predictors. We also provide tests of dimension and a marginal coordinate hypothesis test. We apply the method to a data set concerning lakes infested by Eurasian watermilfoil, and compare the new method with the partial inverse regression estimator.
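For reference, the sketch below implements plain SAVE; the article's extension to qualitative predictors and its dimension and marginal coordinate tests are not reproduced:

import numpy as np

def save(X, y, n_slices=8):
    """Sliced average variance estimation on a continuous response."""
    n, p = X.shape
    Xc = X - X.mean(0)
    Sigma = Xc.T @ Xc / n
    evals, evecs = np.linalg.eigh(Sigma)
    Sinv_half = evecs @ np.diag(evals ** -0.5) @ evecs.T
    Z = Xc @ Sinv_half
    order = np.argsort(y)
    M = np.zeros((p, p))
    for s in np.array_split(order, n_slices):
        Zs = Z[s] - Z[s].mean(0)
        Vs = Zs.T @ Zs / len(s)                     # within-slice covariance
        A = np.eye(p) - Vs
        M += len(s) / n * A @ A
    lam, V = np.linalg.eigh(M)
    return lam[::-1], Sinv_half @ V[:, ::-1]

rng = np.random.default_rng(5)
n = 1000
X = rng.normal(size=(n, 5))
y = X[:, 0] ** 2 + 0.3 * rng.normal(size=n)         # symmetric signal that SIR would miss
lam, B = save(X, y)
print(np.round(lam, 2))
print(np.round(B[:, 0] / np.linalg.norm(B[:, 0]), 2))   # ~ e_1 up to sign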

11.
Because highly correlated data arise in many scientific fields, we investigate parameter estimation in a semiparametric regression model with a diverging number of highly correlated predictors. We first develop a distribution-weighted least squares estimator that can recover directions in the central subspace, and then use this estimator as a seed vector and project it onto a Krylov space by partial least squares to avoid computing the inverse of the covariance of the predictors. Distribution-weighted partial least squares can therefore handle cases with high-dimensional and highly correlated predictors. We also suggest an iterative algorithm for obtaining a better initial value before implementing partial least squares. On the theoretical side, we obtain strong consistency and asymptotic normality when the dimension p of the predictors grows at rate O(n^{1/2}/log n) and o(n^{1/3}), respectively, where n is the sample size. When there are no other constraints on the covariance of the predictors, the rates n^{1/2} and n^{1/3} are optimal. We also propose a Bayesian information criterion type of criterion to estimate the dimension of the Krylov space in the partial least squares procedure. Illustrative examples with a real data set and comprehensive simulations demonstrate that the method is robust to non-ellipticity and works well even in 'small n, large p' problems.
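A speculative sketch of the two ingredients described, under the assumption that a Krylov-restricted solve plays the role of the partial least squares projection: a distribution-weighted seed vector is computed from empirical distribution weights, and the direction is obtained by solving the normal equations restricted to a small Krylov space so that no covariance inverse is needed; the iteration for a better initial value and the BIC-type choice of the Krylov dimension are omitted, and the Krylov dimension m is fixed here:

import numpy as np

def dwpls_direction(X, y, m=3):
    n, p = X.shape
    Xc = X - X.mean(0)
    Sigma = Xc.T @ Xc / n
    ranks = np.argsort(np.argsort(y)) + 1
    w = 2.0 * ranks / n - 1.0                       # empirical distribution weights 2*F_n(y)-1
    s = Xc.T @ w / n                                # distribution-weighted seed vector
    K = np.empty((p, m))                            # Krylov basis {s, Sigma s, ..., Sigma^{m-1} s}
    v = s.copy()
    for j in range(m):
        K[:, j] = v
        v = Sigma @ v
    Q, _ = np.linalg.qr(K)
    coef = np.linalg.solve(Q.T @ Sigma @ Q, Q.T @ s)   # solve Sigma*beta = s within the Krylov space
    beta = Q @ coef
    return beta / np.linalg.norm(beta)

rng = np.random.default_rng(6)
n, p = 300, 50
A = rng.normal(size=(p, p))
Sigma_true = A @ A.T / p + np.eye(p)                # correlated but well-conditioned predictors
X = rng.multivariate_normal(np.zeros(p), Sigma_true, size=n)
beta0 = np.zeros(p); beta0[:3] = [1.0, -1.0, 0.5]; beta0 /= np.linalg.norm(beta0)
y = np.exp(X @ beta0) + 0.5 * rng.normal(size=n)
b = dwpls_direction(X, y)
print(np.round(abs(b @ beta0), 2))                  # closer to 1 means better direction recovery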

12.
In this paper, we propose several dimension reduction methods for the case in which the covariates are measured with additive distortion measurement errors. These distortions are modelled as unknown functions of a commonly observable confounding variable. To estimate the central subspace, we propose residual-based dimension reduction estimation methods as well as direct estimation methods. The consistency and asymptotic normality of the proposed estimators are investigated. Furthermore, we conduct simulations to evaluate the performance of the proposed methods and compare them with existing methods, and a real data set is analysed for illustration.
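A minimal sketch of the residual-based idea, assuming a scalar confounder and a Gaussian kernel: each distorted covariate is kernel-regressed on the confounder and replaced by its residual, after which a plain least-squares index (standing in for the paper's central-subspace estimators) recovers the target direction:

import numpy as np

def nw_fit(u, v, h):
    """Nadaraya-Watson fit of v on the scalar confounder u."""
    K = np.exp(-0.5 * ((u[:, None] - u[None, :]) / h) ** 2)
    return K @ v / K.sum(1)

rng = np.random.default_rng(7)
n, p = 600, 4
u = rng.uniform(-1, 1, size=n)                       # observable confounder
X = rng.normal(size=(n, p))                          # unobserved true covariates
y = X[:, 0] + X[:, 1] + 0.2 * rng.normal(size=n)
distort = np.column_stack([3 * np.cos(np.pi * u), 2 * u ** 2, np.exp(u), u ** 3])
X_obs = X + distort                                  # additively distorted covariates

h = 1.06 * u.std() * n ** -0.2                       # rule-of-thumb bandwidth
X_res = X_obs - np.column_stack([nw_fit(u, X_obs[:, j], h) for j in range(p)])

for name, Xd in [("raw", X_obs), ("residual-adjusted", X_res)]:
    b, *_ = np.linalg.lstsq(Xd - Xd.mean(0), y - y.mean(), rcond=None)
    print(name, np.round(b / np.linalg.norm(b), 2))  # target direction ~ (1,1,0,0)/sqrt(2)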

13.
Sliced regression is an effective dimension reduction method that replaces the original high-dimensional predictors with an appropriate low-dimensional projection. It is free from any probabilistic assumption and can exhaustively estimate the central subspace. In this article, we propose to incorporate shrinkage estimation into sliced regression so that variable selection can be achieved simultaneously with dimension reduction. The new method improves estimation accuracy and achieves better interpretability for the reduced variables. The efficacy of the proposed method is shown through both simulation and real data analysis.

14.
To characterize the dependence of a response on covariates of interest, a monotonic structure is linked to a multivariate polynomial transformation of the central subspace (CS) directions with unknown structural degree and dimension. Under a very general semiparametric model formulation, such a sufficient dimension reduction (SDR) score is shown to enjoy existence, optimality, and uniqueness up to scale and location in the defined concordance probability function. In light of these properties and its single-index representation, two types of concordance-based generalized Bayesian information criteria are constructed to estimate the optimal SDR score and the maximum concordance index, and the estimation criteria are carried out by effective computational procedures. Generally speaking, the outer-product-of-gradients estimation in the first approach has an advantage in computational efficiency, while the parameterization system in the second approach greatly reduces the number of parameters to be estimated. Unlike most existing SDR approaches, only one CS direction is required to be continuous in the proposals. Moreover, the consistency of the structural degree and dimension estimators and the asymptotic normality of the optimal SDR score and maximum concordance index estimators are established under suitable conditions. The performance and practicality of the methodology are investigated through simulations and empirical illustrations.

15.
In the time series literature, recent interest has focused on the so-called subspace methods. These techniques use canonical correlations and linear regressions to estimate the system matrices of an ARMAX model expressed in state space form. In this article, we use subspace methods to forecast two series with the help of exogenous variables related to them. We compare the results with those obtained using traditional transfer function models and find that the forecasts obtained with both methods are similar. This result is encouraging because, in contrast to transfer function models, subspace methods can be considered almost automatic.

16.
The variable selection problem is one of the most important tasks in regression analysis, especially in a high-dimensional setting. In this paper, we study this problem in the context of the scalar-response functional regression model, a linear model with scalar response and functional regressors. The functional model can be represented by a multiple linear regression model via basis expansions of the functional variables. Based on this representation and the random subspace method of Mielniczuk and Teisseyre (Comput Stat Data Anal 71:725–742, 2014), two simple variable selection procedures for the scalar-response functional regression model are proposed, with the final functional model selected using generalized information criteria. Monte Carlo simulation studies and a real data example show very satisfactory finite-sample performance of the new variable selection methods. Moreover, they suggest that the proposed procedures outperform existing solutions in terms of correct model selection, false discovery rate control, and prediction error.

17.
Given a noisy time series (or signal), one may wish to remove the noise from the observed series. Assuming that the noise-free series lies in some low-dimensional subspace of rank r, a common approach is to embed the noisy time series into a Hankel trajectory matrix. The singular value decomposition is then used to decompose the Hankel matrix into a sum of rank-one components. We demonstrate that difference-based methods applied to the observed series may provide guidance on separating the noise from the signal and on estimating the rank of the low-dimensional subspace in which the true signal is assumed to lie.
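A minimal sketch of the pipeline described, assuming Gaussian noise: embed the series in a Hankel trajectory matrix, truncate its SVD at rank r, and average anti-diagonals back into a series; the difference-based variance estimate at the end is only one simple way such differencing can guide the choice of r, not necessarily the paper's procedure:

import numpy as np

def hankel(x, L):
    K = len(x) - L + 1
    return np.column_stack([x[i:i + L] for i in range(K)])

def hankel_denoise(x, L, r):
    H = hankel(x, L)
    U, s, Vt = np.linalg.svd(H, full_matrices=False)
    Hr = U[:, :r] @ np.diag(s[:r]) @ Vt[:r]          # rank-r approximation
    n = len(x)
    out, cnt = np.zeros(n), np.zeros(n)
    for i in range(L):                               # diagonal averaging back to a series
        for j in range(H.shape[1]):
            out[i + j] += Hr[i, j]
            cnt[i + j] += 1
    return out / cnt, s

rng = np.random.default_rng(8)
t = np.arange(300)
signal = np.sin(2 * np.pi * t / 25) + 0.5 * np.cos(2 * np.pi * t / 60)   # rank-4 Hankel signal
x = signal + 0.3 * rng.normal(size=len(t))

# first differences remove most of the smooth signal, so their variance gives
# a rough estimate of 2*sigma^2 for the noise
sigma2_hat = np.var(np.diff(x)) / 2.0
xhat, s = hankel_denoise(x, L=60, r=4)
print(round(sigma2_hat, 3))                          # roughly 0.09 = 0.3**2, plus some signal leakage
print(round(np.mean((xhat - signal) ** 2), 4))       # reconstruction error well below the noise variance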

18.
For prediction models that contain near-integrated time series, endogeneity distorts the Scheffe test: the size of the parameter tests becomes overly conservative, which in turn lowers their power. We correct for this with a dynamic adjustment that adds leads and lags of the differenced explanatory variable to the regression, and we compare the finite-sample properties of the test statistics before and after the correction by simulation. The results show that this correction effectively reduces the impact of endogeneity on the Scheffe test. In small samples, the corrected Scheffe test not only has higher power but also exhibits markedly less size distortion.
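A minimal sketch of the leads-and-lags augmentation in a predictive regression with a near-integrated, endogenous regressor; the Scheffe test itself and the size/power comparison are not reproduced, and all parameter values are illustrative:

import numpy as np

def simulate(n=300, rho=0.98, beta=0.0, corr=-0.8, seed=0):
    rng = np.random.default_rng(seed)
    cov = np.array([[1.0, corr], [corr, 1.0]])       # correlated innovations create endogeneity
    e = rng.multivariate_normal(np.zeros(2), cov, size=n + 1)
    x = np.zeros(n + 1)
    for t in range(1, n + 1):
        x[t] = rho * x[t - 1] + e[t, 0]              # near-integrated regressor
    y = beta * x[:n] + e[1:, 1]                      # y_t = beta * x_{t-1} + u_t
    return x, y

def tstat(y, X, j=1):
    """OLS coefficient j and its t statistic (intercept included)."""
    Z = np.column_stack([np.ones(len(y)), X])
    b, *_ = np.linalg.lstsq(Z, y, rcond=None)
    resid = y - Z @ b
    s2 = resid @ resid / (len(y) - Z.shape[1])
    se = np.sqrt(s2 * np.linalg.inv(Z.T @ Z)[j, j])
    return b[j], b[j] / se

x, y = simulate()
n, k = len(y), 2                                     # k leads and k lags of the differenced regressor
dx = np.diff(x)                                      # dx[m] = x_{m+1} - x_m
idx = np.arange(k, n - k)

plain = tstat(y[idx], x[idx][:, None])               # y_t on x_{t-1} only
aug_cols = [x[idx][:, None]] + [dx[idx + j][:, None] for j in range(-k, k + 1)]
augmented = tstat(y[idx], np.column_stack(aug_cols)) # adds current, lead and lag differences
print("plain     beta_hat=%.3f  t=%.2f" % plain)
print("augmented beta_hat=%.3f  t=%.2f" % augmented)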

19.
Sufficient dimension reduction methods aim to reduce the dimensionality of predictors while preserving the regression information relevant to the response. In this article, we develop Minimum Average Deviance Estimation (MADE) methodology for sufficient dimension reduction. The purpose of MADE is to generalize Minimum Average Variance Estimation (MAVE) beyond its assumption of additive errors to settings where the outcome follows an exponential family distribution. As in MAVE, a local likelihood approach is used to learn the form of the regression function from the data, and the main parameter of interest is a dimension reduction subspace. To estimate this parameter within its natural space, we propose an iterative algorithm in which one step utilizes optimization on the Stiefel manifold. MAVE is a special case of MADE for Gaussian outcomes with a common variance. Several procedures are considered to estimate the reduced dimension and to predict the outcome for an arbitrary covariate value. Initial simulations and data analysis examples yield encouraging results and invite further exploration of the methodology.
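The MADE objective is not reproduced here; the sketch below only illustrates the Stiefel-manifold ingredient, namely a projected-gradient update whose iterates are retracted onto the manifold {B : B'B = I} via the polar decomposition, applied to a toy objective whose minimizer is known so the result can be checked:

import numpy as np

def polar_retract(M):
    """Closest matrix with orthonormal columns (polar factor via the SVD)."""
    U, _, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ Vt

def stiefel_gradient_descent(A, d, steps=500, lr=0.05, seed=0):
    """Minimise tr(B'AB) over the Stiefel manifold by projected gradient descent."""
    rng = np.random.default_rng(seed)
    B = polar_retract(rng.normal(size=(A.shape[0], d)))   # random start on the manifold
    for _ in range(steps):
        grad = 2.0 * A @ B                                # Euclidean gradient of tr(B'AB)
        B = polar_retract(B - lr * grad)                  # descend, then retract
    return B

rng = np.random.default_rng(9)
p, d = 8, 2
Q = polar_retract(rng.normal(size=(p, p)))
A = Q @ np.diag(np.arange(1, p + 1, dtype=float)) @ Q.T   # known eigenstructure
B = stiefel_gradient_descent(A, d)
target = Q[:, :2]                                         # eigenvectors of the two smallest eigenvalues
# cosines of the principal angles between estimated and true subspaces (should be ~1)
print(np.round(np.linalg.svd(target.T @ B, compute_uv=False), 4))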

20.
Panel data models with factor structures in both the errors and the regressors have received considerable attention recently. In these models the errors and the regressors are correlated, and the standard estimators are inconsistent. This paper shows that, for such models, a modified first-difference estimator (in which the time and cross-sectional dimensions are interchanged) is consistent as the cross-sectional dimension grows while the time dimension remains small. Although the estimator has a nonstandard asymptotic distribution, t and F tests have standard asymptotic distributions under the null hypothesis.
