Similar literature
20 similar records found (search time: 31 ms)
1.
Dimension reduction in regression is an efficient method of overcoming the curse of dimensionality in non-parametric regression. Motivated by recent developments for dimension reduction in time series, this paper performs an empirical extension of the central mean subspace in time series to a single-input transfer function model. Here, we use the central mean subspace as a tool of dimension reduction for bivariate time series when the dimension and lag are known, and estimate the central mean subspace through the Nadaraya–Watson kernel smoother. Furthermore, we develop a data-dependent approach based on a modified Schwarz Bayesian criterion to estimate the unknown dimension and lag. Finally, we show that the approach works well for bivariate time series using an expository demonstration, two simulations, and a real data analysis of the El Niño and fish population series.
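A minimal numerical sketch of the Nadaraya–Watson kernel smoother mentioned above (in Python, with function and variable names of our own choosing; a generic illustration of the estimator, not the authors' implementation):

```python
import numpy as np

def nadaraya_watson(x_train, y_train, x_eval, bandwidth):
    """Nadaraya-Watson kernel regression with a Gaussian kernel.

    Estimates E[Y | X = x] as a locally weighted average of y_train,
    with weight K((x - x_i) / h) attached to each training point.
    """
    x_train = np.asarray(x_train, dtype=float)
    y_train = np.asarray(y_train, dtype=float)
    estimates = []
    for x in np.atleast_1d(x_eval):
        u = (x - x_train) / bandwidth
        weights = np.exp(-0.5 * u ** 2)          # Gaussian kernel
        estimates.append(np.sum(weights * y_train) / np.sum(weights))
    return np.array(estimates)

# Toy check: the smoother should approximately recover a smooth mean function.
rng = np.random.default_rng(0)
x = rng.uniform(-2, 2, 500)
y = np.sin(x) + 0.1 * rng.standard_normal(500)
fit = nadaraya_watson(x, y, [0.0, 1.0], bandwidth=0.2)
```

In the paper's setting the scalar predictor would be replaced by the estimated index, i.e. a linear combination of lagged inputs spanning the central mean subspace.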

2.
Time series with more than one time-dependent variable require building an appropriate model in which the variables not only have relationships with each other, but also depend on their own previous values. Based on developments in sufficient dimension reduction, we investigate a new class of multiple time series models without parametric assumptions. First, for the dependent and independent time series, we use a univariate time series central subspace to estimate the autoregressive lags of the series. Secondly, we extract successive directions to estimate the time series central subspace for regressors, which include past lags of the dependent and independent series, in a mutual information multiple-index time series. Lastly, we estimate a multiple time series model for the reduced directions. In this article, we propose a unified estimation method of minimal dimension using an Akaike information criterion for situations in which the dimension for multiple regressors is unknown. We present an analysis using real data from the housing price index showing that our approach is an alternative for multiple time series modelling. In addition, we check the accuracy of the multiple time series central subspace method using three simulated data sets.

3.
Sufficient dimension reduction (SDR) is a popular supervised machine learning technique that reduces the predictor dimension and facilitates subsequent data analysis in practice. In this article, we propose principal weighted logistic regression (PWLR), an efficient SDR method in binary classification where inverse-regression-based SDR methods often suffer. We first develop linear PWLR for linear SDR and study its asymptotic properties. We then extend it to nonlinear SDR and propose the kernel PWLR. Evaluations with both simulated and real data show the promising performance of the PWLR for SDR in binary classification.
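A rough sketch of the linear step behind a weighted-logistic SDR direction (in Python; the weighting scheme shown, simple class-balance weights, and all names are our own assumptions and only illustrate the general idea, not the PWLR estimator itself):

```python
import numpy as np

def weighted_logistic_direction(X, y, weights, n_iter=2000, lr=0.1):
    """Fit logistic regression with per-observation weights by gradient
    descent and return the unit-norm coefficient vector, which serves as
    a one-dimensional linear dimension-reduction direction estimate."""
    n, p = X.shape
    beta = np.zeros(p)
    for _ in range(n_iter):
        prob = 1.0 / (1.0 + np.exp(-X @ beta))
        grad = X.T @ (weights * (prob - y)) / n   # weighted logistic gradient
        beta -= lr * grad
    return beta / np.linalg.norm(beta)

# Toy check: a single-index binary model Y ~ Bernoulli(sigmoid(3 * X @ b)).
rng = np.random.default_rng(1)
X = rng.standard_normal((2000, 3))
b = np.array([1.0, -1.0, 0.0]) / np.sqrt(2)
y = (rng.uniform(size=2000) < 1.0 / (1.0 + np.exp(-3 * X @ b))).astype(float)
w = np.where(y == 1, y.size / (2 * y.sum()), y.size / (2 * (y.size - y.sum())))
direction = weighted_logistic_direction(X, y, w)
```

The recovered direction should be close (up to sign) to the true index vector in this single-index setting.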

4.
In this paper we present an indirect estimation procedure for fractional (ARFIMA) time series models. The estimation method is based on an ‘incorrect’ criterion which does not directly provide a consistent estimator of the parameters of interest, but leads to correct inference by using simulations.

The main steps are the following. First, we consider an auxiliary model which can be easily estimated. Specifically, we choose the finite-lag autoregressive model. Then, this model is estimated on the observations and on simulated values drawn from the ARFIMA model associated with a given value of the parameters of interest. Finally, the latter is calibrated in order to obtain close values of the two estimators of the auxiliary parameters.

In this article, we describe the estimation procedure and compare the performance of the indirect estimator with some alternative estimators based on the likelihood function by a Monte Carlo study.
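The calibration loop described above can be sketched numerically. Below is a deliberately simplified Python illustration for ARFIMA(0, d, 0): the auxiliary statistic is just a lag-1 autocorrelation (an AR(1) fit), and d is chosen by grid search so that the simulated auxiliary statistic matches the observed one. All function names, the grid, and the truncation length are our own assumptions, not the paper's procedure:

```python
import numpy as np

def simulate_arfima0d0(d, n, rng, n_trunc=200):
    """Simulate ARFIMA(0, d, 0) via a truncated MA(inf) representation,
    with coefficients psi_0 = 1, psi_j = psi_{j-1} * (j - 1 + d) / j."""
    psi = np.ones(n_trunc)
    for j in range(1, n_trunc):
        psi[j] = psi[j - 1] * (j - 1 + d) / j
    eps = rng.standard_normal(n + n_trunc)
    return np.convolve(eps, psi, mode="full")[n_trunc - 1 : n_trunc - 1 + n]

def ar1_coefficient(x):
    """Auxiliary estimator: lag-1 sample autocorrelation (AR(1) fit)."""
    x = x - x.mean()
    return (x[1:] @ x[:-1]) / (x @ x)

def indirect_estimate_d(x_obs, rng, grid=np.linspace(0.05, 0.45, 41), n_sim=5):
    """Pick the d whose simulated auxiliary statistic is closest to the
    observed one; this is the calibration step of indirect inference."""
    target = ar1_coefficient(x_obs)
    best_d, best_gap = None, np.inf
    for d in grid:
        sims = [ar1_coefficient(simulate_arfima0d0(d, len(x_obs), rng))
                for _ in range(n_sim)]
        gap = abs(np.mean(sims) - target)
        if gap < best_gap:
            best_d, best_gap = d, gap
    return best_d

rng = np.random.default_rng(2)
x = simulate_arfima0d0(0.3, 2000, rng)
d_hat = indirect_estimate_d(x, rng)
```

Note that the auxiliary AR(1) estimator is biased for long-memory data, but the indirect procedure corrects for this automatically because the same estimator is applied to the simulated paths.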

5.
Multivariate (or, interchangeably, multichannel) autoregressive (MCAR) modelling of stationary and nonstationary time series data is achieved one channel at a time, using only scalar computations on instantaneous data. The one-channel-at-a-time modelling takes the form of an instantaneous-response multichannel autoregressive model with orthogonal innovations variance. Conventional MCAR models are expressible as linear algebraic transformations of the instantaneous-response orthogonal-innovations models. By modelling multichannel time series one channel at a time, the problems of modelling multichannel time series are reduced to problems in the modelling of scalar autoregressive time series. The three longstanding time series modelling problems of achieving a relatively parsimonious MCAR representation, of spectral estimation for multichannel stationary time series, and of modelling nonstationary covariance time series are addressed using this paradigm.

6.
Time series are often affected by interventions such as strikes, earthquakes, or policy changes. In this paper, we build a practical nonparametric intervention model using the central mean subspace in time series. We estimate the central mean subspace for time series, taking known interventions into account, by using the Nadaraya–Watson kernel estimator. We use the modified Bayesian information criterion to estimate the unknown lag and dimension. Finally, we demonstrate that this nonparametric approach for intervened time series performs well in simulations and in a real data analysis of the monthly average oxidant level.

7.
This paper deals with the nonparametric estimation of the mean and variance functions of univariate time series data. We propose a nonparametric dimension reduction technique for both the mean and variance functions of a time series. The method does not require any model specification; instead, we seek directions in both the mean and variance functions such that the conditional distribution of the current observation given the vector of past observations is the same as that of the current observation given a few linear combinations of the past observations, without loss of inferential information. The directions of the mean and variance functions are estimated by maximizing the Kullback–Leibler distance function. The consistency of the proposed estimators is established. A computational procedure is introduced to detect the lags of the conditional mean and variance functions in practice. Numerical examples and simulation studies illustrate and evaluate the performance of the proposed estimators.

8.
To characterize the dependence of a response on covariates of interest, a monotonic structure is linked to a multivariate polynomial transformation of the central subspace (CS) directions with unknown structural degree and dimension. Under a very general semiparametric model formulation, such a sufficient dimension reduction (SDR) score is shown to enjoy the existence, optimality, and uniqueness up to scale and location in the defined concordance probability function. In light of these properties and its single-index representation, two types of concordance-based generalized Bayesian information criteria are constructed to estimate the optimal SDR score and the maximum concordance index. The estimation criteria are further carried out by effective computational procedures. Generally speaking, the outer product of gradients estimation in the first approach has an advantage in computational efficiency and the parameterization system in the second approach greatly reduces the number of parameters in estimation. Different from most existing SDR approaches, only one CS direction is required to be continuous in the proposals. Moreover, the consistency of structural degree and dimension estimators and the asymptotic normality of the optimal SDR score and maximum concordance index estimators are established under some suitable conditions. The performance and practicality of our methodology are also investigated through simulations and empirical illustrations.

9.
Various nonparametric approaches for Bayesian spectral density estimation of stationary time series have been suggested in the literature, mostly based on the Whittle likelihood approximation. A generalization of this approximation involving a nonparametric correction of a parametric likelihood has been proposed in the literature with a proof of posterior consistency for spectral density estimation in combination with the Bernstein–Dirichlet process prior for Gaussian time series. In this article, we will extend the posterior consistency result to non-Gaussian time series by employing a general consistency theorem for dependent data and misspecified models. As a special case, posterior consistency for the spectral density under the Whittle likelihood is also extended to non-Gaussian time series. Small sample properties of this approach are illustrated with several examples of non-Gaussian time series.
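For readers unfamiliar with the Whittle likelihood approximation underlying the abstract above, here is a minimal Python sketch for an AR(1) model: the log-likelihood is approximated by a sum over Fourier frequencies of -log f(w) - I(w)/f(w), where I is the periodogram and f the model spectral density (function names and normalization conventions are our own):

```python
import numpy as np

def whittle_loglik_ar1(x, phi, sigma2):
    """Whittle log-likelihood of an AR(1) model: sum over the positive
    Fourier frequencies of -log f(w_j) - I(w_j) / f(w_j), where I is the
    periodogram and f(w) = sigma2 / (2*pi*(1 - 2*phi*cos(w) + phi**2))."""
    n = len(x)
    freqs = 2 * np.pi * np.arange(1, (n - 1) // 2 + 1) / n
    dft = np.fft.fft(x - x.mean())[1 : (n - 1) // 2 + 1]
    periodogram = np.abs(dft) ** 2 / (2 * np.pi * n)
    spec = sigma2 / (2 * np.pi * (1 - 2 * phi * np.cos(freqs) + phi ** 2))
    return -np.sum(np.log(spec) + periodogram / spec)

# Toy check: the Whittle likelihood should peak near the true phi.
rng = np.random.default_rng(3)
n = 4000
x = np.zeros(n)
for t in range(1, n):
    x[t] = 0.6 * x[t - 1] + rng.standard_normal()
grid = np.linspace(-0.9, 0.9, 181)
phi_hat = grid[np.argmax([whittle_loglik_ar1(x, p, 1.0) for p in grid])]
```

The nonparametric approaches discussed in the abstract replace, or correct, the parametric spectral density f in this criterion.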

10.
In this article we consider the problem of detecting changes in level and trend in a time series model in which the number of change-points is unknown. The approach of Bayesian stochastic search model selection is introduced to detect the configuration of changes in a time series. The number and positions of change-points are determined by a sequence of change-dependent parameters. The sequence is estimated from its posterior distribution via maximum a posteriori (MAP) estimation. A Markov chain Monte Carlo (MCMC) method is used to estimate the posterior distributions of the parameters. Several real data examples, including a time series of traffic accidents and two hydrological time series, are analysed.

11.
In this article, we develop a series estimation method for unknown time-inhomogeneous functionals of Lévy processes involved in econometric time series models. To obtain an asymptotic distribution for the proposed estimators, we establish a general asymptotic theory for partial sums of bivariate functionals of time and nonstationary variables. These results show that the proposed estimators in different situations converge to quite different random variables. In addition, the rates of convergence depend on various factors rather than just the sample size. Finite sample simulations are provided to evaluate the finite sample performance of the proposed model and estimation method.

12.
In recent years, modelling count data has become one of the most important and popular topics in time series analysis. At the same time, variable selection methods have become widely used in many fields as an effective statistical modelling tool. In this paper, we consider using a variable selection method to solve a modelling problem for the first-order Poisson integer-valued autoregressive (PINAR(1)) model with covariables. The PINAR(1) model with covariables is widely used in many areas because of its practicality. When using this model in practice, multiple covariables are added because it is impossible to know in advance which of them will affect the results, so the inclusion of some insignificant covariables is almost impossible to avoid. Unfortunately, the usual estimation method cannot delete the insignificant covariables, which biases the resulting statistical inference. To overcome this defect, we propose a penalised conditional least squares (PCLS) method, which can consistently select the true model. The PCLS estimator is also provided and its asymptotic properties are established. Simulation studies demonstrate that the PCLS method is effective for estimation and variable selection. A practical example is also presented to illustrate the applicability of the PCLS method.
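A toy Python sketch of a penalised conditional least squares idea for an INAR(1)-type model with covariates: the conditional mean is taken as a * X_{t-1} + Z_t @ beta, an L1 penalty is put on the covariate coefficients, and the fit is done by coordinate descent with soft-thresholding. The specific conditional-mean form, the penalty choice, and all names are our own simplifying assumptions, not the paper's PCLS estimator:

```python
import numpy as np

def soft_threshold(z, gamma):
    return np.sign(z) * np.maximum(np.abs(z) - gamma, 0.0)

def pcls_pinar1(x, Z, lam, n_sweeps=200):
    """Penalised conditional least squares for a PINAR(1)-type model with
    covariates, assuming E[X_t | X_{t-1}] = a * X_{t-1} + Z_t @ beta and
    an L1 penalty on beta, fitted by coordinate descent.
    The thinning coefficient a is left unpenalised."""
    y, x_lag, Z = x[1:].astype(float), x[:-1].astype(float), Z[1:]
    n, p = Z.shape
    a, beta = 0.0, np.zeros(p)
    for _ in range(n_sweeps):
        r = y - Z @ beta
        a = (x_lag @ r) / (x_lag @ x_lag)          # unpenalised update for a
        for j in range(p):
            r_j = y - a * x_lag - Z @ beta + Z[:, j] * beta[j]
            beta[j] = soft_threshold(Z[:, j] @ r_j / n, lam) / (Z[:, j] @ Z[:, j] / n)
    return a, beta

# Toy data: binomial thinning with Poisson innovations whose mean depends
# on the first covariate only; the second covariate is irrelevant.
rng = np.random.default_rng(4)
n = 3000
Z = np.column_stack([rng.uniform(0.5, 1.5, n), rng.standard_normal(n)])
x = np.zeros(n, dtype=int)
for t in range(1, n):
    x[t] = rng.binomial(x[t - 1], 0.5) + rng.poisson(Z[t, 0])
a_hat, beta_hat = pcls_pinar1(x, Z, lam=0.05)
```

The penalty should shrink the coefficient of the irrelevant covariate to (near) zero while leaving the thinning coefficient estimate essentially unbiased.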

13.
Searching for an effective dimension reduction space is an important problem in regression, especially for high dimensional data. We propose an adaptive approach based on semiparametric models, which we call the (conditional) minimum average variance estimation (MAVE) method, within quite a general setting. The MAVE method has the following advantages. Most existing methods must undersmooth the nonparametric link function estimator to achieve a faster rate of consistency for the estimator of the parameters (than for that of the nonparametric function). In contrast, a faster consistency rate can be achieved by the MAVE method even without undersmoothing the nonparametric link function estimator. The MAVE method is applicable to a wide range of models, with fewer restrictions on the distribution of the covariates, to the extent that even time series can be included. Because of the faster rate of consistency for the parameter estimators, it is possible for us to estimate the dimension of the space consistently. The relationship of the MAVE method with other methods is also investigated. In particular, a simple outer product gradient estimator is proposed as an initial estimator. In addition to theoretical results, we demonstrate the efficacy of the MAVE method for high dimensional data sets through simulation. Two real data sets are analysed by using the MAVE approach.
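The outer product of gradients (OPG) initial estimator mentioned above admits a short sketch: estimate the gradient of E[y|x] at each sample point by a kernel-weighted local linear fit, average the outer products of the gradients, and take the leading eigenvectors. This Python version (names and bandwidth choice are ours) illustrates the generic OPG idea rather than the paper's exact algorithm:

```python
import numpy as np

def opg_directions(X, y, n_dir=1, bandwidth=1.0):
    """Outer product of gradients: local linear fits give gradient
    estimates; the leading eigenvectors of their average outer product
    span the estimated effective dimension reduction space."""
    n, p = X.shape
    M = np.zeros((p, p))
    for i in range(n):
        diff = X - X[i]                            # centred local design
        w = np.exp(-0.5 * np.sum(diff ** 2, axis=1) / bandwidth ** 2)
        D = np.column_stack([np.ones(n), diff])    # intercept + slope
        coef = np.linalg.solve(D.T @ (w[:, None] * D) + 1e-8 * np.eye(p + 1),
                               D.T @ (w * y))
        grad = coef[1:]                            # local gradient estimate
        M += np.outer(grad, grad)
    eigvals, eigvecs = np.linalg.eigh(M / n)
    return eigvecs[:, ::-1][:, :n_dir]             # leading eigenvectors

# Toy check: single-index model y = (X @ b)^2 + noise.
rng = np.random.default_rng(5)
X = rng.standard_normal((400, 3))
b = np.array([1.0, 1.0, 0.0]) / np.sqrt(2)
y = (X @ b) ** 2 + 0.1 * rng.standard_normal(400)
B_hat = opg_directions(X, y, n_dir=1)
```

Note that OPG handles symmetric links such as the square above, a case in which inverse-regression methods like SIR fail.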

14.
In this paper, we introduce a new non-negative integer-valued autoregressive time series model based on a new thinning operator, the so-called generalized zero-modified geometric (GZMG) thinning operator. The first part of the paper is devoted to the GZMG distribution, which is obtained as the convolution of zero-modified geometric (ZMG) distributed random variables, and some of its properties are derived. Then, we construct a thinning operator based on counting processes with the ZMG distribution. Finally, an INAR(1) time series model is introduced and its properties, including estimation issues, are derived and discussed. A small Monte Carlo experiment is conducted to evaluate the performance of maximum likelihood estimators in finite samples. At the end of the paper, we consider an empirical illustration of the introduced INAR(1) model.
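To make the thinning-operator construction concrete, here is a Python sketch of the classical INAR(1) model with binomial thinning and a conditional least squares fit. The paper's GZMG operator replaces the binomial counting series; we show only the standard special case, and the function names are our own:

```python
import numpy as np

def simulate_inar1(alpha, lam, n, rng):
    """INAR(1) with binomial thinning: X_t = alpha o X_{t-1} + eps_t,
    where alpha o X ~ Binomial(X, alpha) and eps_t ~ Poisson(lam).
    (GZMG thinning would replace the binomial counting series.)"""
    x = np.zeros(n, dtype=int)
    for t in range(1, n):
        x[t] = rng.binomial(x[t - 1], alpha) + rng.poisson(lam)
    return x

def cls_inar1(x):
    """Conditional least squares: regress X_t on X_{t-1} with intercept;
    the slope estimates alpha and the intercept estimates lam."""
    y, x_lag = x[1:], x[:-1]
    alpha_hat = np.cov(y, x_lag)[0, 1] / np.var(x_lag)
    lam_hat = y.mean() - alpha_hat * x_lag.mean()
    return alpha_hat, lam_hat

rng = np.random.default_rng(6)
x = simulate_inar1(alpha=0.4, lam=2.0, n=5000, rng=rng)
alpha_hat, lam_hat = cls_inar1(x)
```

The abstract's maximum likelihood estimators would replace the simple CLS fit shown here.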

15.
In this paper we discuss recursive (or online) estimation in (i) regression and (ii) autoregressive integrated moving average (ARIMA) time series models. The adopted approach uses Kalman filtering techniques to calculate the estimates recursively, and applies to the estimation of constant as well as time-varying parameters. In the first section of the paper we consider the linear regression model and discuss recursive estimation for both constant and time-varying parameters. For constant parameters, Kalman filtering specializes to recursive least squares. In general, we allow the parameters to vary according to an autoregressive integrated moving average process and update the parameter estimates recursively. Since the stochastic model for the parameter changes will rarely be known, simplifying assumptions have to be made. In particular, we assume a random walk model for the time-varying parameters and show how to determine whether the parameters are changing over time. This is illustrated with an example.
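The constant-parameter special case, Kalman filtering reducing to recursive least squares, can be sketched in a few lines of Python (names and the unit observation variance are our own simplifications):

```python
import numpy as np

def recursive_least_squares(X, y, delta=1000.0):
    """Recursive least squares as a degenerate Kalman filter: for constant
    parameters the state equation is beta_t = beta_{t-1} with no state
    noise, and each observation updates the estimate and its covariance."""
    n, p = X.shape
    beta = np.zeros(p)
    P = delta * np.eye(p)                  # large initial uncertainty
    for t in range(n):
        h = X[t]
        k = P @ h / (1.0 + h @ P @ h)      # Kalman gain (obs. variance = 1)
        beta = beta + k * (y[t] - h @ beta)
        P = P - np.outer(k, h @ P)
    return beta

rng = np.random.default_rng(7)
X = rng.standard_normal((1000, 2))
beta_true = np.array([2.0, -1.0])
y = X @ beta_true + 0.1 * rng.standard_normal(1000)
beta_hat = recursive_least_squares(X, y)
```

For the random-walk time-varying case discussed in the abstract, one would add a state-noise covariance Q to P at each step (P = P + Q before the update), which keeps the filter responsive to parameter drift.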

16.
In the field of chaotic time series analysis, there is a lack of a distributional theory for the main quantities used to characterize the underlying data generating process (DGP). In this paper a method for resampling time series generated by a chaotic dynamical system is proposed. The basic idea is to develop an algorithm for building trajectories which lie on the same attractor of the true DGP, that is with the same dynamical and geometrical properties of the original data. We performed some numerical experiments on some short noise-free and high-noise series confirming that we are able to correctly reproduce the distribution of the largest finite-time Lyapunov exponent and of the correlation dimension.

17.
A common practice in time series analysis is to fit a centered model to the mean-corrected data set. For stationary autoregressive moving-average (ARMA) processes, as far as the parameter estimation is concerned, fitting an ARMA model without intercepts to the mean-corrected series is asymptotically equivalent to fitting an ARMA model with intercepts to the observed series. We show that, related to the parameter least squares estimation of periodic ARMA models, the second approach can be arbitrarily more efficient than the mean-corrected counterpart. This property is illustrated by means of a periodic first-order autoregressive model. The asymptotic variance of the estimators for both approaches is derived. Moreover, empirical experiments based on simulations investigate the finite sample properties of the estimators.

18.

Parameter reduction can enable otherwise infeasible design and uncertainty studies with modern computational science models that contain several input parameters. In statistical regression, techniques for sufficient dimension reduction (SDR) use data to reduce the predictor dimension of a regression problem. A computational scientist hoping to use SDR for parameter reduction encounters a problem: a computer prediction is best represented by a deterministic function of the inputs, so data comprised of computer simulation queries fail to satisfy the SDR assumptions. To address this problem, we interpret SDR methods sliced inverse regression (SIR) and sliced average variance estimation (SAVE) as estimating the directions of a ridge function, which is a composition of a low-dimensional linear transformation with a nonlinear function. Within this interpretation, SIR and SAVE estimate matrices of integrals whose column spaces are contained in the ridge directions’ span; we analyze and numerically verify convergence of these column spaces as the number of computer model queries increases. Moreover, we show example functions that are not ridge functions but whose inverse conditional moment matrices are low-rank. Consequently, the computational scientist should beware when using SIR and SAVE for parameter reduction, since SIR and SAVE may mistakenly suggest that truly important directions are unimportant.
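For reference, a minimal Python sketch of the SIR algorithm analyzed above: standardise the predictors, slice the response, average the standardised predictors within each slice, and take the leading eigenvectors of the weighted covariance of the slice means (names and slice count are our own choices):

```python
import numpy as np

def sliced_inverse_regression(X, y, n_slices=10, n_dir=1):
    """Sliced inverse regression: whiten X, slice the response by
    quantiles, average the whitened predictors within slices, and take
    the leading eigenvectors of the slice-mean covariance, mapped back
    to the original predictor scale."""
    n, p = X.shape
    mu, cov = X.mean(axis=0), np.cov(X.T)
    evals, evecs = np.linalg.eigh(cov)
    cov_inv_sqrt = evecs @ np.diag(evals ** -0.5) @ evecs.T
    Z = (X - mu) @ cov_inv_sqrt
    order = np.argsort(y)
    M = np.zeros((p, p))
    for chunk in np.array_split(order, n_slices):
        m = Z[chunk].mean(axis=0)
        M += len(chunk) / n * np.outer(m, m)       # weighted slice-mean cov
    _, vecs = np.linalg.eigh(M)
    directions = cov_inv_sqrt @ vecs[:, ::-1][:, :n_dir]
    return directions / np.linalg.norm(directions, axis=0)

# Toy check: a deterministic-plus-noise single-index ridge function.
rng = np.random.default_rng(8)
X = rng.standard_normal((2000, 4))
b = np.array([1.0, 0.0, -1.0, 0.0]) / np.sqrt(2)
y = np.tanh(X @ b) + 0.1 * rng.standard_normal(2000)
B_hat = sliced_inverse_regression(X, y)
```

In the deterministic computer-experiment setting of the abstract, y would be a noise-free ridge function of the inputs, and the slice means estimate integrals whose column space lies in the span of the ridge directions.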


19.

Sufficient dimension reduction (SDR) provides a framework for reducing the predictor space dimension in statistical regression problems. We consider SDR in the context of dimension reduction for deterministic functions of several variables such as those arising in computer experiments. In this context, SDR can reveal low-dimensional ridge structure in functions. Two algorithms for SDR—sliced inverse regression (SIR) and sliced average variance estimation (SAVE)—approximate matrices of integrals using a sliced mapping of the response. We interpret this sliced approach as a Riemann sum approximation of the particular integrals arising in each algorithm. We employ the well-known tools from numerical analysis—namely, multivariate numerical integration and orthogonal polynomials—to produce new algorithms that improve upon the Riemann sum-based numerical integration in SIR and SAVE. We call the new algorithms Lanczos–Stieltjes inverse regression (LSIR) and Lanczos–Stieltjes average variance estimation (LSAVE) due to their connection with Stieltjes’ method—and Lanczos’ related discretization—for generating a sequence of polynomials that are orthogonal with respect to a given measure. We show that this approach approximates the desired integrals, and we study the behavior of LSIR and LSAVE with two numerical examples. The quadrature-based LSIR and LSAVE eliminate the first-order algebraic convergence rate bottleneck resulting from the Riemann sum approximation, thus enabling high-order numerical approximations of the integrals when appropriate. Moreover, LSIR and LSAVE perform as well as the best-case SIR and SAVE implementations (e.g., adaptive partitioning of the response space) when low-order numerical integration methods (e.g., simple Monte Carlo) are used.


20.

Singular spectrum analysis (SSA) is a relatively new method for time series analysis that serves as a non-parametric alternative to the classical methods. The methodology has proven effective in analysing non-stationary and complex time series, since it does not require the classical assumptions of stationarity or of normality of the residuals. Although SSA has proved to provide advantages over traditional methods, the challenges that arise when long time series are considered make standard SSA computationally demanding and often unsuitable. In this paper we propose randomized SSA, an alternative to SSA for long time series that does not sacrifice the quality of the analysis. SSA and randomized SSA are compared in terms of quality of model fit, forecasting, and computational time, using Monte Carlo simulations and real data on the daily prices of five major world commodities.
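A minimal Python sketch of basic SSA (not the randomized variant): embed the series into a Hankel trajectory matrix, take its SVD, keep selected elementary components, and reconstruct a series by diagonal averaging. Window length and names are our own choices:

```python
import numpy as np

def ssa_reconstruct(x, window, components):
    """Basic SSA: build the trajectory (Hankel) matrix, take its SVD,
    keep the requested elementary components, and reconstruct a series
    by diagonal averaging (Hankelisation)."""
    n = len(x)
    k = n - window + 1
    traj = np.column_stack([x[i:i + window] for i in range(k)])
    U, s, Vt = np.linalg.svd(traj, full_matrices=False)
    approx = sum(s[i] * np.outer(U[:, i], Vt[i]) for i in components)
    recon = np.zeros(n)
    counts = np.zeros(n)
    for j in range(k):                      # diagonal averaging
        recon[j:j + window] += approx[:, j]
        counts[j:j + window] += 1
    return recon / counts

# Toy check: separate a sinusoid from noise using the leading pair.
t = np.arange(400)
signal = np.sin(2 * np.pi * t / 25)
rng = np.random.default_rng(9)
x = signal + 0.3 * rng.standard_normal(400)
trend_hat = ssa_reconstruct(x, window=100, components=[0, 1])
```

The full SVD of the trajectory matrix is the step that dominates the cost for long series; randomized SSA, as proposed in the paper, replaces it with a randomized low-rank approximation.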
