Similar Articles
1.
ABSTRACT

SiZer (significant zero crossings of derivatives) is an effective tool for exploring significant features in curves from the viewpoint of scale space theory. In this paper, a SiZer approach is developed for generalized varying coefficient models (GVCMs) in order to understand the dynamic characteristics of the regression relationship at multiple scales. The proposed SiZer method is based on the local-linear maximum likelihood estimation of GVCMs, and a one-step estimation procedure is employed to alleviate the computational cost of estimating the coefficients and their derivatives at different scales. Simulation studies are performed to assess the performance of the SiZer inference, and two real-world examples are given to demonstrate its applications.
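The core computation behind a SiZer map, local-linear derivative estimates over a grid of locations and bandwidths, can be sketched as follows. This is a minimal illustration of the scale-space idea only, not the authors' GVCM estimator; the confidence bands that make the zero crossings significant are omitted, and all function names here are ours.

```python
import math

def local_linear_slope(x, y, x0, h):
    """Gaussian-kernel weighted least-squares line through (x, y) near x0;
    the fitted slope estimates the first derivative of the regression at x0."""
    w = [math.exp(-0.5 * ((xi - x0) / h) ** 2) for xi in x]
    sw = sum(w)
    xb = sum(wi * xi for wi, xi in zip(w, x)) / sw
    yb = sum(wi * yi for wi, yi in zip(w, y)) / sw
    sxx = sum(wi * (xi - xb) ** 2 for wi, xi in zip(w, x))
    sxy = sum(wi * (xi - xb) * (yi - yb) for wi, xi, yi in zip(w, x, y))
    return sxy / sxx

def sizer_slopes(x, y, grid, bandwidths):
    """One row of slope estimates per bandwidth: the raw material of a SiZer
    map, which would then colour each cell by the sign of a significant slope."""
    return {h: [local_linear_slope(x, y, g, h) for g in grid]
            for h in bandwidths}
```

Because the estimator fits a straight line locally, it recovers a linear trend exactly at every bandwidth; curvature is what makes the rows of the map differ.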

2.
In this paper, we extend SiZer (SIgnificant ZERo crossing of the derivatives) to dependent data for the purpose of goodness-of-fit tests for time series models. Dependent SiZer compares the observed data with a specific null model being tested by adjusting the statistical inference using an assumed autocovariance function. This new approach uses a SiZer type visualization to flag statistically significant differences between the data and a given null model. The power of this approach is demonstrated through some examples of time series of Internet traffic data. It is seen that such time series can have even more burstiness than is predicted by the popular, long-range dependent, Fractional Gaussian Noise model.

3.
SiZer Map is proposed as a graphical tool for assistance in nonparametric additive regression testing problems. Four problems are analyzed using SiZer Map: testing for additivity, testing the significance of the components, testing parametric models for the components, and testing for interactions. The simplicity and flexibility of SiZer Map for these purposes are highlighted by an empirical study with several real datasets. With these data, we compare the conclusions derived from the SiZer analysis with the global results derived from standard tests previously proposed in the literature.

4.
In a nonparametric regression setting, we consider the kernel estimation of the logarithm of the error variance function, which might be assumed to be homogeneous or heterogeneous. The objective of the present study is to discover important features in the variation of the data at multiple locations and scales based on a nonparametric kernel smoothing technique. Traditional kernel approaches estimate the function by selecting an optimal bandwidth, but this often turns out to be unsatisfactory in practice. In this paper, we develop a SiZer (SIgnificant ZERo crossings of derivatives) tool based on a scale-space approach that provides a more flexible way of finding meaningful features in the variation. The proposed approach utilizes local polynomial estimators of a log-variance function using a wide range of bandwidths. We derive the theoretical quantile of confidence intervals in SiZer inference and also study the asymptotic properties of the proposed approach in scale-space. A numerical study via simulated and real examples demonstrates the usefulness of the proposed SiZer tool.
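The log-variance target of this SiZer variant can be estimated in a plain two-stage way: smooth the data, form log squared residuals, and smooth those. The following is a hypothetical sketch with a Nadaraya-Watson smoother standing in for the paper's local polynomial estimator; the SiZer machinery itself (derivatives across bandwidths, confidence quantiles) is not shown.

```python
import math

def nw_smooth(x, y, x0, h):
    """Nadaraya-Watson (Gaussian-kernel weighted mean) estimate at x0."""
    w = [math.exp(-0.5 * ((xi - x0) / h) ** 2) for xi in x]
    return sum(wi * yi for wi, yi in zip(w, y)) / sum(w)

def log_variance_curve(x, y, grid, h):
    """Estimate log sigma^2(.): smooth y, take log squared residuals,
    then smooth those -- the curve whose features a variance SiZer probes."""
    resid2 = [(yi - nw_smooth(x, y, xi, h)) ** 2 for xi, yi in zip(x, y)]
    logs = [math.log(r + 1e-12) for r in resid2]  # guard exact zero residuals
    return [nw_smooth(x, logs, g, h) for g in grid]
```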

5.
SiZer (SIgnificant ZERo crossing of the derivatives) is a graphical scale-space visualization tool that allows for statistical inferences. In this paper we develop a spatial SiZer for finding significant features and conducting goodness-of-fit tests for spatially dependent images. The spatial SiZer utilizes a family of kernel estimates of the image and provides not only exploratory data analysis but also statistical inference with spatial correlation taken into account. It is also capable of comparing the observed image with a specific null model being tested by adjusting the statistical inference using an assumed covariance structure. Pixel locations having statistically significant differences between the image and a given null model are highlighted by arrows. The spatial SiZer is compared with the existing independent SiZer via the analysis of simulated data with and without signal on both planar and spherical domains. We apply the spatial SiZer method to the decadal temperature change over some regions of the Earth.

6.
Varying coefficient models are a useful statistical tool to explore dynamic patterns of a regression relationship, in which the variation features of the regression coefficients are taken as the main evidence to reflect the dynamic relationship between the response and the explanatory variables. In this study, we propose a SiZer approach as a visual diagnostic device to uncover the statistically significant features of the coefficients. This method can highlight the significant structures of the coefficients at different scales and can therefore extract relatively complete information from the data. The simulation studies and real-world data analysis show that the SiZer approach performs satisfactorily in mining the significant features of the coefficients.

7.
A periodically stationary time series has seasonal variances. A local linear trend estimation is proposed to accommodate unequal variances. A comparison of this proposed estimator with the estimator commonly used for a stationary time series is provided. The optimal bandwidth selection for this new trend estimator is discussed.
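A local linear trend estimator that down-weights observations from high-variance seasons can be sketched as below. The inverse-variance weighting is our guess at how unequal seasonal variances might be accommodated; the paper's actual estimator and its bandwidth selector may differ.

```python
import math

def local_linear_trend(y, t0, h, season_var=None, period=1):
    """Local linear trend estimate at time t0 for an equally spaced series y.
    If season_var (one entry per season) is given, each observation's kernel
    weight is divided by its seasonal variance."""
    w = []
    for t in range(len(y)):
        k = math.exp(-0.5 * ((t - t0) / h) ** 2)
        if season_var is not None:
            k /= season_var[t % period]
        w.append(k)
    # weighted least-squares line a + b*(t - t0); the intercept a is the trend
    sw = sum(w)
    st = sum(wi * (t - t0) for t, wi in enumerate(w))
    stt = sum(wi * (t - t0) ** 2 for t, wi in enumerate(w))
    sy = sum(wi * yi for wi, yi in zip(w, y))
    sty = sum(wi * (t - t0) * yi for t, (wi, yi) in enumerate(zip(w, y)))
    return (stt * sy - st * sty) / (sw * stt - st * st)
```

Whatever the weights, the local linear fit reproduces an exactly linear trend, so the seasonal weighting only matters when noise or curvature is present.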

8.
Abrupt changes often occur in environmental and financial time series, most often due to human intervention. Change point analysis is a statistical tool for analyzing sudden changes in observations along a time series. In this paper, we propose a Bayesian model for extreme values in environmental and economic datasets that exhibit typical change point behavior. The model addresses the situation in which more than one change point can occur in a time series. Since maxima are analyzed, the distribution of each regime is a generalized extreme value distribution. In this model, the change points are unknown and treated as parameters to be estimated. Simulations of extremes with two change points showed that the proposed algorithm can recover the true values of the parameters, in addition to detecting the true change points in different configurations. The number of change points is also treated as unknown, and the Bayesian estimation correctly identifies it in each application. Environmental and financial data were analyzed, and the results showed the importance of accounting for change points in the data, revealing that these regime changes brought about an increase in return levels, increasing the number of floods in cities along the rivers. Stock market levels showed the necessity of a model with three different regimes.
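The basic idea of locating a regime change can be illustrated with a much simpler, frequentist analogue: scan all split points and keep the one minimizing the within-segment error. This sketch is ours and stands in for none of the paper's Bayesian GEV machinery (priors, MCMC, multiple change points); it only shows what "estimating a change point" means.

```python
def best_changepoint(y):
    """Return the split index k (1 <= k < n) minimizing the total squared
    error when each segment is fitted by its own mean."""
    n = len(y)
    best_k, best_sse = None, float("inf")
    for k in range(1, n):
        left, right = y[:k], y[k:]
        ml = sum(left) / len(left)
        mr = sum(right) / len(right)
        sse = (sum((v - ml) ** 2 for v in left)
               + sum((v - mr) ** 2 for v in right))
        if sse < best_sse:
            best_k, best_sse = k, sse
    return best_k
```

A Bayesian treatment replaces this point estimate with a posterior distribution over k, which is what allows the number of change points itself to be inferred.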

9.
We propose two preprocessing algorithms suitable for climate time series. The first algorithm detects outliers based on an autoregressive cost update mechanism. The second is based on the wavelet transform, a method from pattern recognition. To benchmark the algorithms' performance, we compare them to existing methods on a synthetic data set. Finally, for illustration, the proposed methods are applied to a data set of high-frequency temperature measurements from Novi Sad, Serbia. The results show that both methods together form a powerful tool for signal preprocessing: in the case of solitary outliers the autoregressive cost update mechanism prevails, whereas the wavelet-based mechanism is the method of choice in the presence of multiple consecutive outliers.

10.
Comparing two time series is an important problem in many applications. In this paper, a computational bootstrap procedure is proposed to test whether two dependent stationary time series have the same autocovariance structures. The blocks-of-blocks bootstrap on bivariate time series is employed to estimate the covariance matrix needed to construct the proposed test statistic. Without much additional effort, the bootstrap critical values can also be computed as a byproduct of the same bootstrap procedure. The asymptotic distribution of the test statistic under the null hypothesis is obtained. A simulation study is conducted to examine the finite sample performance of the test. The simulation results show that the proposed procedure with the bootstrap critical values performs well empirically and is especially useful when time series are short and non-normal. The proposed test is applied to a real data set to understand the relationship between the input and output signals of a chemical process.
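The resampling step underlying such tests can be illustrated with a plain moving-block bootstrap; the blocks-of-blocks variant applies the same idea at a second level, to blocks of already-blocked statistics. A sketch with our own function names, not the paper's procedure:

```python
import random

def block_bootstrap(series, block_len, rng=None):
    """One moving-block bootstrap resample: concatenate randomly chosen
    overlapping blocks until the original length is reached, preserving
    short-range dependence within each block."""
    rng = rng or random.Random(0)
    n = len(series)
    out = []
    while len(out) < n:
        start = rng.randrange(0, n - block_len + 1)
        out.extend(series[start:start + block_len])
    return out[:n]
```

Repeating this many times and recomputing the test statistic on each resample yields both the covariance estimate and the bootstrap critical values the abstract mentions.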

11.
This paper presents a simple diagnostic tool for time series. Based on a coefficient α that varies between 1 and 0, the tool measures how closely a time series approximates an arithmetic progression (i.e., a linear function of time). The proposed α is based on the ratio of the average squared second difference to the average squared first difference of the given series. As such, α reduces to the Von Neumann ratio η of the series of first differences, namely α = 1 − η/4. For an arithmetic progression α = 1, and deviations therefrom cause it to decrease. Unlike the correlation coefficient (between the entries and the indices), α is sensitive to local, or piecewise, linearity. Here α is evaluated for an assortment of simple time series models such as random walk, AR(1) and MA(1). Large-sample distributions are obtained for a number of commonly used stochastic models, including non-normal processes. For most standard deterministic and stochastic models, α stabilizes as n approaches infinity, and provides a statistic that is capable of distinguishing between many different standard random and deterministic models. A further measure τ, which together with α distinguishes between random walks and deterministic trend plus i.i.d. noise, is also suggested. Some examples based on empirical data are also studied.
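Under the reading that α = 1 − r/4, with r the ratio of the average squared second difference to the average squared first difference (our interpretation of the abstract, consistent with α = 1 for an arithmetic progression), the coefficient is a few lines of code:

```python
def alpha_linearity(x):
    """alpha = 1 - r/4 where r = mean squared second difference divided by
    mean squared first difference; equals 1 for an arithmetic progression
    and decreases as the series departs from local linearity."""
    d1 = [b - a for a, b in zip(x, x[1:])]
    d2 = [b - a for a, b in zip(d1, d1[1:])]
    r = (sum(v * v for v in d2) / len(d2)) / (sum(v * v for v in d1) / len(d1))
    return 1 - r / 4
```

For an exactly linear series the second differences vanish, giving α = 1, while a maximally oscillating series such as 0, 1, 0, 1, ... drives α to 0.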

12.
This article is concerned with inference for the parameter vector in stationary time series models based on the frequency domain maximum likelihood estimator. The traditional method consistently estimates the asymptotic covariance matrix of the parameter estimator and usually assumes the independence of the innovation process. For dependent innovations, the asymptotic covariance matrix of the estimator depends on the fourth‐order cumulants of the unobserved innovation process, a consistent estimation of which is a difficult task. In this article, we propose a novel self‐normalization‐based approach to constructing a confidence region for the parameter vector in such models. The proposed procedure involves no smoothing parameter, and is widely applicable to a large class of long/short memory time series models with weakly dependent innovations. In simulation studies, we demonstrate favourable finite sample performance of our method in comparison with the traditional method and a residual block bootstrap approach.

13.
ABSTRACT

This paper proposes a hysteretic autoregressive model with GARCH specification and a skew Student's t-error distribution for financial time series. With an integrated hysteresis zone, this model allows both the conditional mean and conditional volatility switching in a regime to be delayed when the hysteresis variable lies in a hysteresis zone. We perform Bayesian estimation via an adaptive Markov chain Monte Carlo sampling scheme. The proposed Bayesian method allows simultaneous inferences for all unknown parameters, including threshold values and a delay parameter. To implement model selection, we propose a numerical approximation of the marginal likelihoods to obtain posterior odds. The proposed methodology is illustrated using simulation studies and two major Asian stock basis series. We conduct a model comparison for variant hysteresis and threshold GARCH models based on the posterior odds ratios, finding strong evidence of the hysteretic effect and some asymmetric heavy-tailedness. Compared with multi-regime threshold GARCH models, this new collection of models is more suitable for describing the real data sets. Finally, we employ Bayesian forecasting methods in a Value-at-Risk study of the return series.

14.
Periodically integrated time series require a periodic differencing filter to remove the stochastic trend. A non-periodic integrated time series needs the first-difference filter for similar reasons. When the changing seasonal fluctuations of the non-periodic integrated series can be described by seasonal dummy variables whose corresponding parameters are not constant within the sample, such a series may not be easily distinguished from a periodically integrated time series. In this paper, nested and non-nested testing procedures are proposed to distinguish between these two alternative stochastic and non-stochastic seasonal processes, when it is assumed there is a single unknown structural break in the seasonal dummy parameters. Several empirical examples using quarterly real macroeconomic time series for the United Kingdom illustrate the nested and non-nested approaches.

15.
In statistical data analysis it is often important to compare, classify, and cluster different time series. For these purposes various methods have been proposed in the literature, but they usually assume time series with the same sample size. In this article, we propose a spectral domain method for handling time series of unequal length. The method makes the spectral estimates comparable by producing statistics at the same frequency. The procedure is compared with other methods proposed in the literature in a Monte Carlo simulation study. As an illustrative example, the proposed spectral method is applied to cluster industrial production series of some developed countries.
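Evaluating periodograms on a common frequency grid, rather than at each series' own Fourier frequencies, is one way to make spectra of unequal-length series comparable. A sketch under that assumption (normalization conventions vary, and this need not be the paper's exact statistic):

```python
import math

def periodogram(x, freqs):
    """Periodogram ordinates I(f) = |sum_t x_t e^{-2*pi*i*f*t}|^2 / n,
    evaluated on an arbitrary frequency grid so that series of different
    lengths yield spectral estimates at identical frequencies."""
    n = len(x)
    out = []
    for f in freqs:
        re = sum(v * math.cos(2 * math.pi * f * t) for t, v in enumerate(x))
        im = -sum(v * math.sin(2 * math.pi * f * t) for t, v in enumerate(x))
        out.append((re * re + im * im) / n)
    return out
```

Two series of lengths m and n can then be compared, or fed to a clustering distance, through their ordinates on the same `freqs` grid.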

16.
Inference, quantile forecasting and model comparison for an asymmetric double smooth transition heteroskedastic model are investigated. A Bayesian framework is employed and an adaptive Markov chain Monte Carlo scheme is designed. A mixture prior is proposed that alleviates the usual identifiability problem as the speed-of-transition parameter tends to zero, and an informative prior for this parameter is suggested that allows for reliable inference and a proper posterior, despite the non-integrability of the likelihood function. A formal Bayesian posterior model comparison procedure is employed to compare the proposed model with its two limiting cases: the double threshold GARCH and symmetric ARX GARCH models. The proposed methods are illustrated using both simulated and international stock market return series. Some illustrations of the advantages of an adaptive sampling scheme for these models are also provided. Finally, Bayesian forecasting methods are employed in a Value-at-Risk study of the international return series. The results generally favour the proposed smooth transition model and highlight explosive and smooth nonlinear behaviour in financial markets.

17.
The relative performance of a component of a series system in two different environments is considered. The conditional probability of the failure of the system due to the failure of the specified component given that the system failed before time t is regarded as a measure of relative importance of the component to the system. A U-statistic test for checking the equality of the relative importance of the component to the system in two different environments against the alternative that the relative importance is smaller in one of the environments, is proposed. Some simulation results for estimating the power of the test are reported. The proposed test is applied to one real data set and it is seen that a different aspect of the data is brought out by this comparison than that by the comparisons of the absolute importance functions such as the subsurvival functions, considered in earlier studies.

18.
Nonlinear time series analysis plays an important role in recent econometric literature, especially the bilinear model. In this paper, we cast the bilinear time series model in a Bayesian framework and make inference by using the Gibbs sampler, a Monte Carlo method. The methodology proposed is illustrated by using generated examples, two real data sets, as well as a simulation study. The results show that the Gibbs sampler provides a very encouraging option in analyzing bilinear time series.

19.
Dimension reduction in regression is an efficient method of overcoming the curse of dimensionality in non-parametric regression. Motivated by recent developments for dimension reduction in time series, an empirical extension of the central mean subspace in time series to a single-input transfer function model is performed in this paper. Here, we use the central mean subspace as a tool of dimension reduction for bivariate time series in the case when the dimension and lag are known, and estimate the central mean subspace through the Nadaraya–Watson kernel smoother. Furthermore, we develop a data-dependent approach based on a modified Schwarz Bayesian criterion to estimate the unknown dimension and lag. Finally, we show that the approach works well for bivariate time series using an expository demonstration, two simulations, and a real data analysis involving El Niño and a fish population.

20.

In the traditional Box-Jenkins procedure for fitting ARMA time series models to data, the first step is order identification. The sample autocorrelation function can be used to identify pure moving average behavior. In this paper we consider using the autocovariation function to identify the order of a univariate Gaussian time series. Simulation evidence indicates the suggested method may be a superior order identification tool when at least 100 observations are taken.
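For contrast, the familiar sample autocorrelation route mentioned in the abstract looks like this; the paper's autocovariation function itself is not reproduced here.

```python
def sample_acf(x, max_lag):
    """Sample autocorrelations r_0..r_max_lag. For an MA(q) process the
    values should be negligible beyond lag q, which suggests the order."""
    n = len(x)
    m = sum(x) / n
    c0 = sum((v - m) ** 2 for v in x) / n
    acf = []
    for k in range(max_lag + 1):
        ck = sum((x[t] - m) * (x[t + k] - m) for t in range(n - k)) / n
        acf.append(ck / c0)
    return acf
```

Order identification then amounts to finding the lag after which the estimated autocorrelations fall inside a sampling band around zero.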

