Similar Articles
20 similar articles found.
1.
Time series are often affected by interventions such as strikes, earthquakes, or policy changes. In this paper, we build a practical nonparametric intervention model using the central mean subspace in time series. We estimate the central mean subspace for time series, taking known interventions into account, by using the Nadaraya–Watson kernel estimator. We use the modified Bayesian information criterion to estimate the unknown lag and dimension. Finally, we demonstrate that this nonparametric approach for intervened time series performs well in simulations and in a real-data analysis of monthly average oxidant levels.
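As a concrete illustration of the Nadaraya–Watson kernel estimator named in the abstract, here is a minimal sketch of estimating a conditional mean in a lag-1 autoregression. This is not the authors' intervention model; the AR(1) setup, Gaussian kernel, and bandwidth are illustrative assumptions.

```python
import numpy as np

def nadaraya_watson(x_grid, x, y, h):
    """Nadaraya-Watson estimate of E[Y | X = x0] at each x0 in x_grid,
    using a Gaussian kernel with bandwidth h."""
    x_grid = np.asarray(x_grid, dtype=float)
    x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
    # kernel weights K((x0 - x_i) / h) for every grid point / sample pair
    w = np.exp(-0.5 * ((x_grid[:, None] - x[None, :]) / h) ** 2)
    return (w @ y) / w.sum(axis=1)

# lag-1 autoregression y_t = 0.6 y_{t-1} + e_t: estimate m(x) = E[y_t | y_{t-1} = x]
rng = np.random.default_rng(0)
e = rng.normal(size=500)
y = np.zeros(500)
for t in range(1, 500):
    y[t] = 0.6 * y[t - 1] + e[t]
m_hat = nadaraya_watson([0.0, 1.0], y[:-1], y[1:], h=0.3)
```

Regressing the current observation on the lagged one this way recovers the (here linear) conditional mean without specifying a model, which is the building block the intervention approach smooths with.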

2.
This paper deals with nonparametric estimation of the mean and variance functions of univariate time series data. We propose a nonparametric dimension-reduction technique for both the mean and variance functions of a time series. The method requires no model specification; instead, we seek directions in both the mean and variance functions such that the conditional distribution of the current observation given the vector of past observations is the same as that given a few linear combinations of the past observations, with no loss of inferential information. The directions of the mean and variance functions are estimated by maximizing a Kullback–Leibler distance function. The consistency of the proposed estimators is established. A computational procedure is introduced to detect the lags of the conditional mean and variance functions in practice. Numerical examples and simulation studies illustrate and evaluate the performance of the proposed estimators.

3.
Traditionally, time series analysis involves building an appropriate model and using either parametric or nonparametric methods to make inference about the model parameters. Motivated by recent developments in dimension reduction for time series, this article presents an empirical application of sufficient dimension reduction (SDR) to nonlinear time series modelling. We use the time series central subspace as a tool for SDR and estimate it using a mutual information index. In particular, to reduce the computational complexity in time series, we propose an efficient method for estimating the minimal dimension and lag using a modified Schwarz–Bayesian criterion when either the dimension or the lag is unknown. Through simulations and real data analysis, the approach presented in this article performs well in autoregression and volatility estimation.

4.
Jae Keun Yoo, Statistics (2016), 50(5), 1086–1099
The purpose of this paper is to define the central informative predictor subspace, which contains the central subspace, and to develop methods for estimating it. A potential advantage of the proposed methods is that their development requires no linearity, constant variance, or coverage conditions. The central informative predictor subspace therefore offers the benefit of exhaustively restoring the central subspace even when those conditions fail. Numerical studies confirm the theory, and real data analyses are presented.

5.
In this article, we propose a new method for sufficient dimension reduction when both the response and the predictor are vectors. The new method, based on distance covariance, keeps the model-free advantage and can fully recover the central subspace even when many predictors are discrete. We then extend the method to the dual central subspace, which includes canonical correlation analysis as a special case. We illustrate the estimators through extensive simulations and real data sets, and compare them with some existing methods, showing that our estimators are competitive and robust.
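The distance covariance behind this method has a simple sample form (Székely–Rizzo): double-center the pairwise distance matrices of the two samples and average their elementwise product. A minimal one-dimensional sketch, not the authors' SDR estimator, with illustrative simulated data:

```python
import numpy as np

def dcov2(x, y):
    """Squared sample distance covariance (V-statistic) for 1-D samples."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    a = np.abs(x[:, None] - x[None, :])                  # pairwise distances in x
    b = np.abs(y[:, None] - y[None, :])                  # pairwise distances in y
    A = a - a.mean(0) - a.mean(1)[:, None] + a.mean()    # double centering
    B = b - b.mean(0) - b.mean(1)[:, None] + b.mean()
    return (A * B).mean()

rng = np.random.default_rng(1)
x = rng.normal(size=300)
dep = dcov2(x, x ** 2)                 # nonlinear dependence, zero correlation
ind = dcov2(x, rng.normal(size=300))   # independent noise for comparison
```

Unlike Pearson correlation, which is zero for the x-versus-x² pair, the distance covariance detects this purely nonlinear dependence, which is why it suits a model-free recovery of the central subspace.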

6.
We investigate bandwidth estimation in a functional nonparametric regression model with function-valued, continuous real-valued, and discrete-valued regressors under the framework of an unknown error density. Extending the recent work of Shang (2013) ['Bayesian Bandwidth Estimation for a Nonparametric Functional Regression Model with Unknown Error Density', Computational Statistics & Data Analysis, 67, 185–198], we approximate the unknown error density by a kernel density estimator of the residuals, where the regression function is estimated by the functional Nadaraya–Watson estimator, which admits mixed types of regressors. We derive a likelihood and posterior density for the bandwidth parameters under the kernel-form error density, and put forward a Bayesian bandwidth estimation approach that estimates all the bandwidths simultaneously. Simulation studies demonstrate the estimation accuracy of the regression function and error density under the proposed Bayesian approach. Illustrated by a spectroscopy data set from food quality control, we apply the proposed Bayesian approach to select the optimal bandwidths in a functional nonparametric regression model with mixed types of regressors.
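The kernel-form error density at the heart of this approach is just a kernel density estimate built from regression residuals. A minimal sketch of that ingredient alone, with standard-normal draws standing in for residuals and a Gaussian kernel with an illustrative bandwidth (not the functional regression or the Bayesian sampler from the paper):

```python
import numpy as np

def resid_kde(eps, grid, b):
    """Kernel density estimate of the error density from residuals eps,
    Gaussian kernel, bandwidth b (the 'kernel-form error density')."""
    u = (np.asarray(grid, float)[:, None] - np.asarray(eps, float)[None, :]) / b
    return np.exp(-0.5 * u ** 2).sum(axis=1) / (len(eps) * b * np.sqrt(2 * np.pi))

rng = np.random.default_rng(2)
eps = rng.normal(size=1000)          # stand-in for regression residuals
grid = np.linspace(-4, 4, 81)
f_hat = resid_kde(eps, grid, b=0.4)
```

In the Bayesian approach, this estimated density replaces an assumed parametric error law when forming the likelihood of the bandwidth parameters.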

7.
To estimate parameters defined by estimating equations with covariates missing at random, we consider three bias-corrected nonparametric approaches based on inverse probability weighting, regression, and augmented inverse probability weighting. When the dimension of the covariates is not low, however, estimation efficiency suffers from the curse of dimensionality. To address this issue, we propose a two-stage estimation procedure that uses dimension-reduced kernel estimation in conjunction with bias-corrected estimating equations. We show that the resulting three estimators are asymptotically equivalent and achieve the desired properties. The impact of dimension reduction on the nonparametric estimation of parameters is also investigated. The finite-sample performance of the proposed estimators is studied through simulation, and an application to an automobile data set is also presented.
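The inverse-probability-weighting idea can be shown in its simplest form: estimating a mean when the response is missing at random given a covariate. In this sketch the observation probability is taken as known, whereas the paper estimates it nonparametrically; the data-generating model is purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 5000
x = rng.uniform(size=n)
y = 2.0 * x + rng.normal(scale=0.5, size=n)   # E[Y] = 1.0
pi = 0.3 + 0.6 * x                            # P(observed | x), assumed known here
r = rng.uniform(size=n) < pi                  # observation indicator

naive = y[r].mean()                           # complete-case mean: biased upward,
                                              # since large-x (large-y) cases are
                                              # observed more often
ipw = np.mean(r * y / pi)                     # solves the IPW estimating equation
```

Reweighting each observed case by the inverse of its observation probability restores an unbiased estimating equation, which is the first of the three bias corrections the abstract lists.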

8.
This article shows how a non-decimated wavelet packet transform (NWPT) can be used to model a response time series, Y_t, in terms of an explanatory time series, X_t. The proposed computational technique transforms the explanatory time series into an NWPT representation and then uses standard statistical modelling methods to identify which wavelet packets are useful for modelling the response time series. We exhibit S-Plus functions from the freeware WaveThresh package that implement our methodology. The proposed modelling methodology is applied to an important problem from the wind energy industry: how to model wind speed at a target location using wind speed and direction from a reference location. Our method improves on existing target-site wind speed predictions produced by widely used industry-standard techniques. More importantly, our NWPT representation produces models to which we can attach physical and scientific interpretations; in the wind example, this enables us to understand more about the transfer of wind energy from site to site.

9.
We propose a new algorithm for simultaneous variable selection and parameter estimation in the single-index quantile regression (SIQR) model. The proposed algorithm, which is non-iterative, consists of two steps. Step 1 performs an initial variable selection. Step 2 uses the results of Step 1 to obtain better estimates of the conditional quantiles and, using them, to perform simultaneous variable selection and estimation of the parametric component of the SIQR model. It is shown that the initial variable selection method consistently identifies the relevant variables, and that the estimated parametric component derived in Step 2 satisfies the oracle property.

10.
To characterize the dependence of a response on covariates of interest, a monotonic structure is linked to a multivariate polynomial transformation of the central subspace (CS) directions with unknown structural degree and dimension. Under a very general semiparametric model formulation, such a sufficient dimension reduction (SDR) score is shown to exist and to be optimal and unique up to scale and location in the defined concordance probability function. In light of these properties and its single-index representation, two types of concordance-based generalized Bayesian information criteria are constructed to estimate the optimal SDR score and the maximum concordance index. The estimation criteria are implemented through effective computational procedures. Generally speaking, the outer-product-of-gradients estimation in the first approach has an advantage in computational efficiency, while the parameterization system in the second approach greatly reduces the number of parameters to estimate. Unlike most existing SDR approaches, only one CS direction is required to be continuous in our proposals. Moreover, the consistency of the structural degree and dimension estimators and the asymptotic normality of the optimal SDR score and maximum concordance index estimators are established under suitable conditions. The performance and practicality of our methodology are investigated through simulations and empirical illustrations.

11.
In this paper, we consider the estimation of partially linear additive quantile regression models, in which the conditional quantile function comprises a linear parametric component and a nonparametric additive component. We propose a two-step estimation approach: in the first step, we approximate the conditional quantile function using a series estimation method; in the second step, the nonparametric additive component is recovered using either a local polynomial estimator or a weighted Nadaraya–Watson estimator. Both consistency and asymptotic normality of the proposed estimators are established. In particular, we show that the first-stage estimator of the finite-dimensional parameters attains the semiparametric efficiency bound under homoskedasticity, and that the second-stage estimators of the nonparametric additive component have an oracle efficiency property. Monte Carlo experiments are conducted to assess the finite-sample performance of the proposed estimators. An application to a real data set is also presented.

12.
In this paper, a semiparametric approach is developed to model nonlinear relationships in time series data using polynomial splines. Polynomial splines require few assumptions about the functional form of the underlying relationship, so they are very flexible and can model highly nonlinear relationships; they are also computationally efficient. The serial correlation in the data is accounted for by modelling the noise as an autoregressive integrated moving average (ARIMA) process; doing so improves the efficiency of the nonparametric estimation and yields correct inferences. The explicit structure of the ARIMA model allows the correlation information to be used to improve forecasting performance. An algorithm is developed to automatically select and estimate the polynomial spline model and the ARIMA model through backfitting. The method is applied to a real-life data set to forecast hourly electricity usage, allowing the nonlinear effect of temperature on usage to differ across hours of the day and days of the week. The forecasting performance of the developed method is evaluated in post-sample forecasting and compared with several well-accepted models. The results show that the performance of the proposed model is comparable with that of a long short-term memory deep learning model.

13.
The concept of causality is naturally defined in terms of conditional distributions; however, almost all empirical work focuses on causality in mean. This paper proposes a nonparametric statistic to test conditional independence and Granger non-causality between two variables, conditional on a third. The test statistic is based on comparing conditional distribution functions using an L2 metric, with the conditional distribution functions estimated by the Nadaraya–Watson method. We establish the asymptotic size and power properties of the test statistic and motivate the validity of the local bootstrap. A simulation experiment investigates the finite-sample properties of the test, and we illustrate its practical relevance by examining Granger non-causality between S&P 500 Index returns and the VIX volatility index. Contrary to the conventional t-test, which is based on a linear mean regression, we find that the VIX index predicts excess returns at both short and long horizons.
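The Nadaraya–Watson estimator of a conditional distribution function, which the comparison above is built on, replaces the response in the usual kernel regression with the indicator 1{Y ≤ y0}. A minimal sketch with an illustrative bivariate normal example (not the paper's test statistic or bootstrap):

```python
import numpy as np

def cond_cdf(y0, x0, x, y, h):
    """Nadaraya-Watson estimate of F(y0 | X = x0): kernel-weighted
    empirical CDF of y among observations with x near x0."""
    w = np.exp(-0.5 * ((x - x0) / h) ** 2)
    return np.sum(w * (y <= y0)) / np.sum(w)

rng = np.random.default_rng(4)
x = rng.normal(size=2000)
y = x + rng.normal(size=2000)            # Y | X = x  ~  N(x, 1)
F00 = cond_cdf(0.0, 0.0, x, y, h=0.3)    # true value: 0.5
F10 = cond_cdf(0.0, 1.0, x, y, h=0.3)    # true value: Phi(-1), about 0.159
```

The test statistic then aggregates squared differences between two such estimated conditional CDFs (with and without the conditioning variable of interest) over a grid, which is the L2 comparison the abstract describes.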

14.
With rapid developments in the technology for measuring disease characteristics at the molecular or genetic level, it is possible to collect large amounts of data on potential predictors of a clinical outcome of interest in medical research, and it is often desirable to use the information in these many predictors to predict that outcome. Various statistical tools have been developed to overcome the difficulties caused by the high dimensionality of the covariate space in the linear regression setting. This paper focuses on the situation where the outcomes of interest are subject to right censoring. We apply the extended partial least squares method, along with other commonly used approaches for analysing high-dimensional covariates, to the ACTG333 data set, and compare the predictive performance of the different approaches with extensive cross-validation studies. The results show that Buckley–James-based partial least squares, stepwise subset model selection, and principal components regression have similarly promising predictive power, and that the partial least squares method has several advantages in interpretability and numerical computation.
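Plain partial least squares (without the Buckley–James adjustment for censoring) can be sketched in a few lines of NIPALS: each component is a predictor direction of maximal covariance with the response, followed by deflation. This is an illustrative uncensored sketch, not the extended method applied to ACTG333; the simulated data and component count are assumptions.

```python
import numpy as np

def pls1(X, y, k):
    """Plain NIPALS partial least squares with k components (uncensored y);
    returns regression coefficients in the original (centered) predictor space."""
    X = X - X.mean(0)
    y = y - y.mean()
    Xk, W, P, q = X.copy(), [], [], []
    for _ in range(k):
        w = Xk.T @ y
        w /= np.linalg.norm(w)              # direction maximizing covariance with y
        t = Xk @ w
        tt = t @ t
        P.append(Xk.T @ t / tt)             # predictor loadings
        q.append(t @ y / tt)                # response loading
        Xk = Xk - np.outer(t, P[-1])        # deflate the predictors
        W.append(w)
    W, P, q = np.array(W).T, np.array(P).T, np.array(q)
    return W @ np.linalg.solve(P.T @ W, q)  # coefficients for centered X

rng = np.random.default_rng(5)
X = rng.normal(size=(1000, 10))
beta = np.zeros(10)
beta[:2] = [1.0, -2.0]
y = X @ beta + 0.1 * rng.normal(size=1000)
b_hat = pls1(X, y, k=3)
```

Because each coefficient vector is a combination of a few interpretable latent directions, PLS retains the interpretability advantage the abstract attributes to it even when the predictors are numerous.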

15.
In the estimation of rational transfer function models, it has been recommended that the starting values of a transfer function component be set to zero (or a constant) in the recursive computation of the transfer function response. It is demonstrated that such algorithms may lead to serious bias in the estimation of moving average parameters. This paper discusses several other algorithms that may rectify the problem. The starting-value-free (SVF) method is found to be a more reliable algorithm. For computer programs using the traditional algorithm, i.e., the zero-starting-value (ZSV) method, the bias problem can be easily remedied with a short-cut method that omits an appropriate number of values at the beginning of the residual series.
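The start-up distortion behind the ZSV bias is easy to see in a first-order transfer function recursion: starting at zero instead of the true (unobserved) value injects an error that decays geometrically. The parameter values and series below are illustrative assumptions, not taken from the paper.

```python
import numpy as np

# Transfer function response: v_t = delta * v_{t-1} + omega * x_t.
delta, omega = 0.8, 1.0
rng = np.random.default_rng(6)
x = rng.normal(size=100)

def response(x, v0):
    """Recursive transfer function response from starting value v0."""
    v = np.empty(len(x))
    prev = v0
    for t, xt in enumerate(x):
        prev = delta * prev + omega * xt
        v[t] = prev
    return v

v_true = response(x, v0=5.0)    # recursion from the true starting value
v_zsv = response(x, v0=0.0)     # zero-starting-value (ZSV) recursion
err = np.abs(v_true - v_zsv)    # start-up error: exactly 5 * delta**(t+1)
```

The short-cut remedy in the abstract corresponds to discarding the first several residuals, i.e. exactly the stretch where this geometric start-up error is still non-negligible.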

16.
This paper investigates a biased-regression approach to the preliminary estimation of the Box–Jenkins transfer function weights. Using statistical simulation to generate time series, 14 estimators (various OLS, ridge, and principal components estimators) are compared in terms of the MSE and standard error of the weight estimators. The estimators are investigated at different levels of multicollinearity, signal-to-noise ratio, number of independent variables, length of time series, and number of lags included in the estimation. The results show that the ridge estimators nearly always give lower MSE than the OLS estimator, and much lower MSE in the computationally difficult cases. The principal components estimators can give lower MSE than OLS, but also higher values. When estimating the weights, all the biased estimators nearly always give much lower estimated standard errors than OLS.
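The ridge-versus-OLS comparison under multicollinearity can be reproduced with a tiny fixed-design Monte Carlo: two nearly collinear regressors (mimicking adjacent lags of the same input) make OLS coefficients unstable, while a ridge penalty trades a little bias for a large variance reduction. The design, penalty value, and replication count below are illustrative assumptions, not the paper's simulation design.

```python
import numpy as np

def fit(X, y, lam):
    # lam = 0 gives OLS; lam > 0 gives the ridge estimator
    return np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)

rng = np.random.default_rng(7)
n, beta = 100, np.array([1.0, 1.0])
z = rng.normal(size=n)
X = np.column_stack([z + 0.05 * rng.normal(size=n),
                     z + 0.05 * rng.normal(size=n)])  # nearly collinear columns

mse = {"ols": 0.0, "ridge": 0.0}
for _ in range(200):                                  # fixed-X Monte Carlo
    y = X @ beta + rng.normal(size=n)
    mse["ols"] += np.sum((fit(X, y, 0.0) - beta) ** 2) / 200
    mse["ridge"] += np.sum((fit(X, y, 5.0) - beta) ** 2) / 200
```

This is the "computationally difficult" regime of the abstract: the near-collinear design has one tiny eigenvalue, which inflates OLS variance, and the ridge penalty suppresses exactly that direction.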

17.
In past decades, the number of variables explaining observations in practical applications has increased steadily. This has led to heavy computational burdens, despite the wide use of preliminary variable selection methods in data processing. Consequently, more methodological techniques have appeared that reduce the number of explanatory variables without losing much information. Among these techniques, two distinct approaches are apparent: 'shrinkage regression' and 'sufficient dimension reduction'. Surprisingly, there has been little communication or comparison between these two methodological categories, and it is not clear when each approach is appropriate. In this paper, we fill some of this gap by first briefly reviewing each category, paying special attention to its most commonly used methods. We then compare commonly used methods from both categories in terms of accuracy, computation time, and ability to select effective variables. A simulation study of the performance of the methods in each category is presented as well. The selected methods are also tested on two real data sets, which allows us to recommend conditions under which one approach is more appropriate for high-dimensional data.

18.
Joint modelling of recurrent and terminal events has attracted considerable interest and extensive investigation by many authors. Existing studies usually assume low-dimensional covariates, an assumption that is inapplicable in many practical situations. In this paper, we consider a partial sufficient dimension reduction approach for a joint model with high-dimensional covariates. Simulations and three real data applications are presented to confirm and assess the performance of the proposed model and approach.

19.
The problem of estimating a smooth distribution function F at a point t is treated under the proportional hazards model of random censorship. It is shown that a properly chosen class of kernel-type estimators of F asymptotically performs better than the maximum likelihood estimator: the relative deficiency of the maximum likelihood estimator of F under the proportional hazards model, with respect to the properly chosen kernel-type estimator, tends to infinity as the sample size tends to infinity.
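A kernel-type distribution function estimator smooths the empirical CDF by averaging kernel CDFs centered at the observations. The sketch below shows the uncensored version with a Gaussian kernel; the censored, proportional-hazards version studied in the abstract weights the observations differently. Sample, bandwidth, and evaluation points are illustrative assumptions.

```python
import math
import numpy as np

def smooth_cdf(t, x, h):
    """Kernel-smoothed distribution function estimate at t: the average
    of Gaussian-kernel CDFs Phi((t - x_i) / h). No censoring in this sketch."""
    u = (t - np.asarray(x, float)) / h
    return float(np.mean([0.5 * (1.0 + math.erf(v / math.sqrt(2))) for v in u]))

rng = np.random.default_rng(8)
x = rng.normal(size=2000)
F0 = smooth_cdf(0.0, x, h=0.2)   # true F(0) = 0.5
F1 = smooth_cdf(1.0, x, h=0.2)   # true F(1) is about 0.841
```

Smoothing removes the jumps of the step-function estimator, which is the source of the asymptotic advantage over the maximum likelihood estimator that the abstract quantifies via relative deficiency.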

20.
We define a nonlinear autoregressive time series model based on the generalized hyperbolic distribution in an attempt to capture non-Gaussian features such as skewness and heavy tails. We show that the resulting process has a simple condition for stationarity and is also ergodic. An empirical example with a forecasting experiment illustrates the features of the proposed model.

