Similar Articles
20 similar articles found (search time: 15 ms)
1.
The choice of the model framework in a regression setting depends on the nature of the data. The focus of this study is on changepoint data exhibiting three phases: incoming and outgoing phases, both linear, joined by a curved transition. Bent-cable regression is an appealing statistical tool for characterizing such trajectories, quantifying the nature of the transition between the two linear phases by modeling it as a quadratic phase of unknown width. We demonstrate that a quadratic function may not be adequate to describe many changepoint data sets. We then propose a generalization of the bent-cable model that relaxes the assumption of a quadratic bend. The properties of the generalized model are discussed, and a Bayesian approach to inference is proposed. The generalized model is demonstrated with applications to three data sets taken from environmental science and economics. We also compare the quadratic bent-cable, generalized bent-cable and piecewise linear models in terms of goodness of fit on both real-world and simulated data. This study suggests that the proposed generalization of the bent-cable model can be valuable for adequately describing changepoint data that exhibit either an abrupt or a gradual transition over time.
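As a concrete reference for the bend construction described above, here is a minimal sketch of the bent-cable mean function, assuming the standard parameterization with changepoint tau and bend half-width gamma (all names illustrative):

    import numpy as np

    def bent_cable(t, b0, b1, b2, tau, gamma):
        """Bent-cable mean function: two linear phases joined by a
        quadratic bend of half-width gamma centred at tau."""
        t = np.asarray(t, dtype=float)
        q = np.where(
            t <= tau - gamma, 0.0,
            np.where(t >= tau + gamma,
                     t - tau,
                     (t - tau + gamma) ** 2 / (4.0 * gamma)))
        return b0 + b1 * t + b2 * q  # incoming slope b1, outgoing slope b1 + b2

Letting gamma shrink toward zero recovers an abrupt broken-stick transition, which is the limiting case the generalized model is designed to handle alongside gradual bends.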

2.
3.
Long memory versus structural breaks: An overview
We discuss the growing literature on misspecifying structural breaks or more general trends as long-range dependence. We consider tests for structural breaks in the long-memory regression model, as well as the behaviour of estimators of the memory parameter when structural breaks or trends are present in the data but long memory is not. Methods for distinguishing the two phenomena are proposed. The financial support of Volkswagenstiftung is gratefully acknowledged.
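To see why the two phenomena are easily confused, consider a toy illustration (not from the paper): a short-memory series with a single level shift produces sample autocorrelations that decay very slowly, which is the empirical fingerprint of long-range dependence:

    import numpy as np

    rng = np.random.default_rng(0)
    n = 1000
    x = rng.standard_normal(n)
    x[n // 2:] += 1.5  # level shift at mid-sample: a structural break, no long memory

    def acf(x, lags):
        xc = x - x.mean()
        return np.array([np.dot(xc[:-k], xc[k:]) / np.dot(xc, xc) for k in lags])

    lags = range(1, 51)
    print(np.round(acf(x, lags)[::10], 2))  # autocorrelations remain sizeable at
                                            # long lags, mimicking long memory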

4.
We propose a simulation-based Bayesian approach to the analysis of long memory stochastic volatility models, stationary and nonstationary. The main tool used to reduce the likelihood function to a tractable form is an approximate state-space representation of the model. A data set of stock market returns is analyzed with the proposed method. The approach taken here allows a quantitative assessment of the empirical evidence in favor of the stationarity, or nonstationarity, of the instantaneous volatility of the data.
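The paper's state-space construction is not reproduced here, but the underlying idea of approximating a long-memory process by a finite recursion can be sketched via the standard AR(∞) expansion of the fractional-differencing operator (a textbook expansion, not the authors' exact representation):

    import numpy as np

    def frac_diff_weights(d, m):
        """AR coefficients of the truncated expansion (1-B)^d ≈ sum_k pi_k B^k."""
        pi = np.empty(m + 1)
        pi[0] = 1.0
        for k in range(1, m + 1):
            pi[k] = pi[k - 1] * (k - 1 - d) / k
        return pi

    # e.g. approximate fractional noise with memory d = 0.3 by a finite AR
    print(np.round(frac_diff_weights(0.3, 5), 4))

Truncating these weights at a moderate lag yields a finite autoregression that can be cast in state-space form.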

5.
We study Bayesian dynamic models for detecting changepoints in count time series that present structural breaks. As the inferential approach, we develop a parameter-learning version of the algorithm proposed by Chopin [Chopin N. Dynamic detection of changepoints in long time series. Annals of the Institute of Statistical Mathematics 2007;59:349–366], called the Chopin filter with parameter learning, which allows us to estimate the static parameters of the model. In this extension, the static parameters are handled via the kernel-smoothing approximations proposed by Liu and West [Liu J, West M. Combined parameter and state estimation in simulation-based filtering. In: Doucet A, de Freitas N, Gordon N, editors. Sequential Monte Carlo methods in practice. New York: Springer-Verlag; 2001]. The proposed methodology is then applied to both simulated and real data sets, using time series models whose distributions allow for overdispersion and/or zero inflation. Because the particle filter approach does not require restrictive specifications to ensure its validity and effectiveness, our procedure is general, robust and naturally adaptive, and we believe it is a valuable alternative for detecting changepoints in count time series. The proposed methodology is also suitable for count time series with no changepoints and for independent count data.
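For reference, a minimal sketch of the Liu–West kernel-smoothing refresh for a single scalar static parameter (illustrative only; in the paper this step runs inside the Chopin filter over the full parameter vector):

    import numpy as np

    def liu_west_refresh(theta, weights, delta=0.98, rng=None):
        """Kernel-smoothing refresh of static-parameter particles (Liu & West 2001).
        Shrinks each particle toward the weighted mean, then adds a small Gaussian
        jitter so the overall mean and variance are preserved."""
        rng = rng or np.random.default_rng()
        a = (3.0 * delta - 1.0) / (2.0 * delta)   # shrinkage factor from discount delta
        h2 = 1.0 - a ** 2                          # jitter variance scale
        mean = np.average(theta, weights=weights)
        var = np.average((theta - mean) ** 2, weights=weights)
        loc = a * theta + (1.0 - a) * mean
        return rng.normal(loc, np.sqrt(h2 * var))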

6.
Statistics and Computing - This article focuses on the challenging problem of efficiently detecting changes in mean within multivariate data sequences. Multivariate changepoints can be detected by...

7.
AStA Advances in Statistical Analysis - Notions of data depth have motivated nonparametric multivariate analysis, especially in supervised learning. Maximum depth classifiers, classifiers based on...

8.
This paper investigates persistence in financial time series at three different frequencies (daily, weekly and monthly). The analysis is carried out for various financial markets (stock markets, FOREX, commodity markets) over the period 2000–2016, using two different long-memory approaches (R/S analysis and fractional integration) for robustness. The results indicate that persistence is higher at lower frequencies, for both returns and their volatility. This holds for the stock markets (both developed and emerging) and partially for the FOREX and commodity markets examined. Such evidence against random walk behaviour implies predictability and is inconsistent with the Efficient Market Hypothesis (EMH), since abnormal profits can be made using trading strategies based on trend analysis.
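A compact sketch of the R/S side of the analysis, assuming the classical rescaled-range recipe (the window sizes and simple log-log regression are illustrative choices, not the paper's exact settings):

    import numpy as np

    def rescaled_range(x):
        """R/S statistic of a single series segment."""
        z = np.cumsum(x - x.mean())
        return (z.max() - z.min()) / x.std(ddof=1)

    def hurst_rs(x, sizes=(16, 32, 64, 128, 256)):
        """Crude Hurst-exponent estimate: average R/S over non-overlapping
        windows of each size, then regress log(R/S) on log(size)."""
        rs = []
        for n in sizes:
            chunks = [x[i:i + n] for i in range(0, len(x) - n + 1, n)]
            rs.append(np.mean([rescaled_range(c) for c in chunks]))
        slope, _ = np.polyfit(np.log(sizes), np.log(rs), 1)
        return slope  # H > 0.5 indicates persistence

    rng = np.random.default_rng(1)
    print(round(hurst_rs(rng.standard_normal(4096)), 2))  # ≈ 0.5 for white noise,
                                                          # up to small-sample bias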

9.
This article proposes a new procedure for obtaining one-sided tolerance limits in unbalanced random effects models. The procedure is a generalization of that proposed by Mee and Owen for the balanced situation, and can be easily implemented because it only requires a noncentral-t table. Two simulation studies are carried out to assess the performance of the new procedure and to compare it with one of the other procedures laid out in previous statistical literature. The findings show that the new procedure is much simpler to compute and performs better than the previous ones, yielding lower values of the gamma bias across a wide range of situations representative of many industrial applications, and also behaving reasonably well in more extreme sampling situations. The use of the new limits is illustrated with an example from the steel industry.
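For orientation, the classical single-sample version of the noncentral-t construction that the paper generalizes to unbalanced random effects models can be sketched as follows (a textbook formula, not the authors' new procedure):

    import numpy as np
    from scipy import stats

    def upper_tolerance_limit(x, p=0.95, conf=0.95):
        """One-sided (upper) normal tolerance limit covering a proportion p
        of the population with confidence conf, via the noncentral t."""
        n = len(x)
        delta = stats.norm.ppf(p) * np.sqrt(n)            # noncentrality parameter
        k = stats.nct.ppf(conf, df=n - 1, nc=delta) / np.sqrt(n)
        return np.mean(x) + k * np.std(x, ddof=1)

    rng = np.random.default_rng(2)
    print(round(upper_tolerance_limit(rng.normal(10, 2, 30)), 2))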

10.
This paper proposes a statistical procedure for the automatic classification and segmentation of volumetric primitives in 3D objects surveyed with high-density laser scanning range measurements. The procedure is carried out in three main phases. First, a nonparametric model based on a Taylor expansion is applied to study the differential local properties of the surface, so as to classify and identify homogeneous point clusters. Classification is based on the surface's Gaussian and mean curvature, computed for each point from the estimated differential parameters of the Taylor formula extended to second-order terms. The geometric primitives are classified into the following basic types: elliptic, hyperbolic, parabolic and planar. The last phase is a parametric regression applied to perform a robust segmentation of the various primitives. A Simultaneous AutoRegressive model is applied to define the trend surface for each geometric feature, and a Forward Search procedure highlights outliers or clusters of non-stationary data. An erratum to this article can be found at
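A minimal sketch of the curvature-based classification step, assuming a local quadric (second-order Taylor) fit has already been estimated at each point; the coefficient layout is illustrative:

    import numpy as np

    def classify_point(coef, eps=1e-6):
        """Classify a surface point from local quadric coefficients
        z ≈ a + b1*x + b2*y + c1*x**2 + c2*x*y + c3*y**2."""
        a, b1, b2, c1, c2, c3 = coef
        fx, fy = b1, b2
        fxx, fxy, fyy = 2 * c1, c2, 2 * c3
        g = 1 + fx**2 + fy**2
        K = (fxx * fyy - fxy**2) / g**2                    # Gaussian curvature
        H = ((1 + fx**2) * fyy - 2 * fx * fy * fxy
             + (1 + fy**2) * fxx) / (2 * g**1.5)           # mean curvature
        if abs(K) < eps:
            return "planar" if abs(H) < eps else "parabolic"
        return "elliptic" if K > 0 else "hyperbolic"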

11.
12.
A version of the multiple decision problem is studied in which the procedure is based only on the current observation and the previous decision. A necessary and sufficient condition for inconsistency of the stepwise maximum likelihood procedure is shown to be the boundedness of the likelihood ratios. In the case of consistency, the (typically slow) rate of convergence to zero of the error probabilities is determined.

13.
This article provides a procedure for the detection and identification of outliers in the spectral domain, implementing the Whittle maximum likelihood estimator of the panel data model proposed by Chen [W.D. Chen, Testing for spurious regression in a panel data model with the individual number and time length growing, J. Appl. Stat. 33(8) (2006b), pp. 759–772]. We extend the approach of Chang and co-workers [I. Chang, G.C. Tiao, and C. Chen, Estimation of time series parameters in the presence of outliers, Technometrics 30(2) (1988), pp. 193–204] to the spectral domain; through the Whittle approach we can quickly detect and identify the type of outliers. A fixed-effects panel data model is used, in which the remainder disturbance is assumed to be a fractional autoregressive integrated moving-average (ARFIMA) process, and the likelihood ratio criterion is obtained directly through the modified inverse Fourier transform. This saves considerable time, especially when the model is estimated on a huge data set.

Through Monte Carlo experiments, the consistency of the estimator is examined as the number of individuals N and the time length T grow, with the long-memory remainder disturbances contaminated by two types of outliers: additive outliers and innovation outliers. The power tests show that the estimators are quite successful and powerful.

In the empirical study, we apply the model to Taiwan's computer motherboard industry, using weekly data from 1 January 2000 to 31 October 2006 for nine well-known companies. The proposed model has a smaller mean square error and shows more distinctive aggressive properties than the raw-data model does.
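As a rough illustration of the Whittle approach in its simplest univariate form (an ARFIMA(0, d, 0) remainder; the paper's panel version with fixed effects and outlier detection is considerably richer):

    import numpy as np
    from scipy.optimize import minimize_scalar

    def whittle_d(x):
        """Concentrated Whittle estimate of the memory parameter d, using the
        fractional-noise spectral shape f(lambda) ∝ |2 sin(lambda/2)|**(-2d)."""
        n = len(x)
        lam = 2 * np.pi * np.arange(1, (n - 1) // 2 + 1) / n
        I = np.abs(np.fft.rfft(x - x.mean())[1:len(lam) + 1]) ** 2 / (2 * np.pi * n)
        def nll(d):
            g = np.abs(2 * np.sin(lam / 2)) ** (-2 * d)   # spectral shape at lam
            return np.log(np.mean(I / g)) + np.mean(np.log(g))
        return minimize_scalar(nll, bounds=(-0.49, 0.49), method="bounded").x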


14.
15.
This work presents advanced computational aspects of a new method for changepoint detection on spatio-temporal point process data. We summarize the methodology, based on building a Bayesian hierarchical model for the data and placing prior conjectures on the number and positions of the changepoints, and show how to decide whether to accept potential changepoints. The focus of this work is on choosing an approach that detects the correct changepoint and delivers smooth, reliable estimates in a feasible computational time; we propose Bayesian P-splines as a suitable tool for managing spatial variation, from both a computational and a model-fitting performance perspective. The main computational challenges are outlined, and a solution involving parallel computing in R is proposed and tested in a simulation study. An application is also presented to a data set of seismic events in Italy over the last 20 years.
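The Bayesian P-spline machinery is not reproduced here, but its deterministic core, a rich B-spline basis with a difference penalty on the coefficients, can be sketched as follows (assuming SciPy ≥ 1.8 for BSpline.design_matrix; in the paper the penalty enters as a prior within a hierarchical model):

    import numpy as np
    from scipy.interpolate import BSpline

    def pspline_fit(x, y, n_knots=20, degree=3, lam=1.0):
        """Penalized B-spline (P-spline) fit: least squares on a rich B-spline
        basis plus a second-order difference penalty on the coefficients."""
        lo, hi = x.min(), x.max()
        inner = np.linspace(lo, hi, n_knots)
        t = np.r_[[lo] * degree, inner, [hi] * degree]    # clamped knot vector
        B = BSpline.design_matrix(x, t, degree).toarray()
        D = np.diff(np.eye(B.shape[1]), n=2, axis=0)      # 2nd-order differences
        theta = np.linalg.solve(B.T @ B + lam * D.T @ D, B.T @ y)
        return BSpline(t, theta, degree)                  # fitted smooth, callable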

16.
This paper considers the problem of testing a sub-hypothesis in homoscedastic linear regression models where the errors form long-memory moving average processes and the designs are non-random. Unlike in the random design case, the asymptotic null distribution of the likelihood ratio type test based on the Whittle quadratic form is shown to be non-standard and non-chi-square. Moreover, the rate of consistency of the minimum Whittle dispersion estimator of the slope parameter vector is shown to be n^{-(1-α)/2}, different from the rate n^{-1/2} obtained in the random design case, where α is the rate at which the error spectral density explodes at the origin. The proposed test is shown to be consistent against fixed alternatives and to have non-trivial asymptotic power against local alternatives that converge to the null hypothesis at the rate n^{-(1-α)/2}.

17.
We study the persistence of intertrade durations, counts (numbers of transactions in equally spaced intervals of clock time), squared returns and realized volatility in 10 stocks trading on the New York Stock Exchange. A semiparametric analysis reveals the presence of long memory in all of these series, with potentially the same memory parameter. We introduce a parametric latent-variable long-memory stochastic duration (LMSD) model, which is shown to fit the data better than the autoregressive conditional duration (ACD) model in a variety of ways. The empirical evidence presented here agrees with theoretical results on the propagation of memory from durations to counts and realized volatility presented in Deo et al. (2009).

18.
A new sampling-based Bayesian approach to the long-memory stochastic volatility (LMSV) process is presented; the method is motivated by the GPH estimator for fractionally integrated autoregressive moving average (ARFIMA) processes, originally proposed by J. Geweke and S. Porter-Hudak [The estimation and application of long memory time series models, Journal of Time Series Analysis 4 (1983) 221–238]. In this work, we estimate the memory parameter in the Bayesian framework; an estimator is obtained by maximizing the posterior density of the memory parameter. Finally, we compare the GPH estimator and the Bayes estimator by means of a simulation study, and our new approach is illustrated using several stock market indices; the new estimator proves relatively stable across the various choices of frequencies used in the regression.
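The GPH log-periodogram regression that motivates the method is simple enough to sketch (the bandwidth m = sqrt(n) is a common illustrative choice; the paper's Bayes estimator replaces the least-squares step with posterior maximization):

    import numpy as np

    def gph_estimate(x, m=None):
        """GPH estimator of the memory parameter d: regress log I(lambda_j)
        on -log(4*sin(lambda_j/2)**2) over the first m Fourier frequencies."""
        n = len(x)
        m = m or int(np.sqrt(n))
        lam = 2 * np.pi * np.arange(1, m + 1) / n
        I = np.abs(np.fft.rfft(x - x.mean())[1:m + 1]) ** 2 / (2 * np.pi * n)
        reg = -np.log(4 * np.sin(lam / 2) ** 2)
        d_hat, _ = np.polyfit(reg, np.log(I), 1)   # slope of the regression is d
        return d_hat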

19.
Mixed effects models with two variance components are often used to analyze longitudinal data. For these models, we compare two approaches to estimating the variance components: the analysis of variance approach and the spectral decomposition approach. We establish a necessary and sufficient condition for the two approaches to yield identical estimates, and some sufficient conditions for the superiority of one approach over the other under the mean squared error criterion. Applications of the methods to circular models and longitudinal data are discussed. Furthermore, simulation results indicate that better estimates of variance components do not necessarily imply higher power of the tests or shorter confidence intervals.

20.
This paper presents and illustrates a new nonsequential design procedure for simultaneous parameter estimation and model discrimination for a collection of nonlinear regression models. This design criterion is extended to make it robust to initial parameter choices by using a Bayesian design approach, and is also extended to yield efficient estimation-discrimination designs which take account of curvature.
