Similar Articles
1.
In this paper we provide a comprehensive Bayesian posterior analysis of trend determination in general autoregressive models. Multiple-lag autoregressive models with fitted drifts and time trends, as well as models that allow for certain types of structural change in the deterministic components, are considered. We utilize a modified information-matrix-based prior that accommodates stochastic nonstationarity, takes into account the interactions between long-run and short-run dynamics, and controls the degree of stochastic nonstationarity permitted. We derive analytic posterior densities for all of the trend-determining parameters via the Laplace approximation to multivariate integrals. We also address the sampling properties of our posteriors under alternative data-generating processes by simulation methods. We apply our Bayesian techniques to the Nelson–Plosser macroeconomic data and various stock price and dividend data. Contrary to DeJong and Whiteman (1989a,b,c), we do not find that the data overwhelmingly favor the existence of deterministic trends over stochastic trends. In addition, we find evidence supporting Perron's (1989) view that some of the Nelson and Plosser data are best construed as trend stationary with a change in the trend function occurring in 1929.
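To make the Laplace device concrete, here is a minimal sketch (not the authors' prior or model) that Laplace-approximates the posterior of a single AR(1) coefficient under a flat prior and known unit error variance; all settings are illustrative assumptions.

```python
# Toy Laplace approximation for the posterior of the AR(1) coefficient rho
# under a flat prior and known unit error variance -- a minimal sketch of the
# general device (the paper's information-matrix prior and multi-lag models
# are more elaborate).
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)

# Simulate a near-unit-root AR(1): y_t = rho * y_{t-1} + e_t
rho_true, n = 0.95, 200
y = np.zeros(n)
for t in range(1, n):
    y[t] = rho_true * y[t - 1] + rng.standard_normal()

y_lag, y_cur = y[:-1], y[1:]

# Flat prior + Gaussian likelihood => log-posterior is -0.5 * sum (y_t - rho*y_{t-1})^2.
# Its mode is the OLS estimate; the negative Hessian at the mode is sum(y_{t-1}^2).
rho_hat = y_lag @ y_cur / (y_lag @ y_lag)   # posterior mode
laplace_sd = 1.0 / np.sqrt(y_lag @ y_lag)   # from the curvature at the mode

# Laplace-approximate posterior probability of stochastic nonstationarity (rho >= 1)
p_unit_root = 1.0 - norm.cdf(1.0, loc=rho_hat, scale=laplace_sd)
print(f"mode={rho_hat:.3f}, sd={laplace_sd:.3f}, P(rho >= 1)={p_unit_root:.3f}")
```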

2.
In a large variety of applications, the data for a variable we wish to explain are ordered and categorical. In this paper, we present a new similarity-based model for this scenario and investigate its properties. We establish that the process is ψ-mixing and strictly stationary and derive the explicit form of the autocorrelation function in some special cases. Consistency and asymptotic normality of the maximum likelihood estimator of the model's parameters are proven. A simulation study supports our findings. The results are applied to the Netflix data set, which comprises survey data on users' ratings of movies.

3.
Recently, Perron has carried out tests of the unit-root hypothesis against the alternative hypothesis of trend stationarity with a break in the trend occurring at the Great Crash of 1929 or at the 1973 oil-price shock. His analysis covers the Nelson–Plosser macroeconomic data series as well as a postwar quarterly real gross national product (GNP) series. His tests reject the unit-root null hypothesis for most of the series. This article takes issue with the assumption used by Perron that the Great Crash and the oil-price shock can be treated as exogenous events. A variation of Perron's test is considered in which the breakpoint is estimated rather than fixed. We argue that this test is more appropriate than Perron's because it circumvents the problem of data mining. The asymptotic distribution of the estimated-breakpoint test statistic is determined. The data series considered by Perron are reanalyzed using this test statistic. The empirical results make use of the asymptotics developed for the test statistic as well as extensive finite-sample corrections obtained by simulation. The effect of fat-tailed and temporally dependent innovations on the empirical results is also investigated. In brief, by treating the breakpoint as endogenous, we find that there is less evidence against the unit-root hypothesis than Perron finds for many of the data series, but stronger evidence against it for several of the series, including the Nelson–Plosser industrial-production, nominal-GNP, and real-GNP series.
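Recent versions of statsmodels ship an implementation of this endogenous-breakpoint unit-root test; the sketch below applies it to a simulated trend-stationary series with a level break (the simulated series and settings are assumptions for illustration).

```python
# Endogenous-breakpoint unit-root test in the spirit described above, using
# statsmodels' zivot_andrews (available in recent statsmodels versions).
import numpy as np
from statsmodels.tsa.stattools import zivot_andrews

rng = np.random.default_rng(1)

# Trend-stationary series with a level break at t=120 (mimicking a
# "Great Crash"-style shift) plus noise.
n = 240
t = np.arange(n)
y = 0.02 * t - 1.5 * (t >= 120) + rng.standard_normal(n)

# regression='c' allows a one-time shift in the level; the breakpoint is
# estimated from the data rather than fixed a priori.
stat, pvalue, crit, baselag, bpidx = zivot_andrews(y, regression='c')
print(f"ZA stat={stat:.2f}, p={pvalue:.3f}, estimated break index={bpidx}")
```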

4.
Stationarity testing is a key issue in time-series regression analysis, but existing testing methods struggle with massive time-series data and their accuracy leaves room for improvement. This paper develops a new stationarity-testing approach based on classification techniques that can handle massive time-series data effectively. First, the autocorrelation function of the time series is computed and a sufficient (but not necessary) decision criterion is constructed; next, a quantitative analysis of the series' convergence is established, the optimal values of the convergence parameters are studied, and stationarity feature vectors are extracted; finally, k-means clustering is used to build a classification-based stationarity identification method. Analyses of a set of simulated data and of stock data, with the ADF, PP, and KPSS tests as benchmarks, show that the new method attains higher accuracy.
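A minimal sketch of the idea, with ACF values at low lags standing in for the paper's convergence-based feature construction (an assumption here), and k-means separating fast-decaying (stationary) from slowly decaying (nonstationary) series:

```python
# Summarize each series by autocorrelation-decay features and cluster the
# feature vectors with k-means -- an illustrative stand-in for the paper's
# exact feature construction.
import numpy as np
from statsmodels.tsa.stattools import acf
from sklearn.cluster import KMeans

rng = np.random.default_rng(2)

def simulate(kind, n=500):
    e = rng.standard_normal(n)
    if kind == "stationary":          # AR(1) with |phi| < 1: ACF decays fast
        y = np.zeros(n)
        for t in range(1, n):
            y[t] = 0.5 * y[t - 1] + e[t]
        return y
    return np.cumsum(e)               # random walk: ACF decays very slowly

series = [simulate("stationary") for _ in range(50)] + \
         [simulate("walk") for _ in range(50)]
labels = np.array([0] * 50 + [1] * 50)

# Feature vector: ACF at lags 1..20 -- slow decay signals nonstationarity.
X = np.array([acf(y, nlags=20)[1:] for y in series])

pred = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
# Align cluster ids with true labels before scoring accuracy.
acc = max(np.mean(pred == labels), np.mean(pred == 1 - labels))
print(f"clustering accuracy on simulated data: {acc:.2f}")
```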

5.
The detection of (structural) breaks, the so-called change-point problem, has drawn increasing attention in theoretical, applied economic, and financial fields. Much of the existing research concentrates on the detection of change points and the asymptotic properties of their estimators in panels when N, the number of panels, as well as T, the number of observations in each panel, are large. In this paper we pursue a different approach, i.e., we consider the asymptotic properties when N→∞ while keeping T fixed. This situation is typically related to large (firm-level) data sets containing financial information about an immense number of firms/stocks across a limited number of years/quarters/months. We propose a general approach for testing for break(s) in this setup. In particular, we obtain the asymptotic behavior of the test statistics. We also propose a wild bootstrap procedure that can be used to generate the critical values of the test statistics. The theoretical approach is supplemented by numerous simulations and by an empirical illustration. We demonstrate that the testing procedure works well in the framework of the four-factor CAPM. In particular, we estimate the breaks in the monthly returns of US mutual funds during the period January 2006 to February 2010, which covers the subprime crisis.
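As a rough illustration of the wild bootstrap step, the sketch below tests for a common break in means across many short panels; the break statistic is a simplified stand-in, not the paper's exact construction.

```python
# Illustrative wild bootstrap for a common-break-in-means test with many
# panels (N large) and a short, fixed time dimension T.
import numpy as np

rng = np.random.default_rng(3)
N, T = 2000, 8
data = rng.standard_normal((N, T))          # H0: no break

def break_stat(x):
    # Max over candidate break dates of the absolute cross-panel mean shift.
    return max(abs(x[:, :k].mean() - x[:, k:].mean()) for k in range(1, T))

stat = break_stat(data)

# Wild bootstrap: perturb each panel's demeaned observations with a single
# Rademacher weight per panel, preserving within-panel dependence.
demeaned = data - data.mean(axis=1, keepdims=True)
boot = []
for _ in range(499):
    w = rng.choice([-1.0, 1.0], size=(N, 1))
    boot.append(break_stat(w * demeaned))

crit = np.quantile(boot, 0.95)
print(f"stat={stat:.3f}, 5% bootstrap critical value={crit:.3f}")
```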

6.
The failure rate function commonly has a bathtub shape in practice. In this paper we discuss a regression model based on the Weibull extended distribution developed by Xie et al. (2002), which can be used to model this type of failure rate function. Assuming censored data, we discuss parameter estimation: the maximum likelihood method and a Bayesian approach in which Gibbs algorithms with Metropolis steps are used to obtain the posterior summaries of interest. We derive the appropriate matrices for assessing the local influence on the parameter estimates under different perturbation schemes, and we also present some ways to assess global influence. In addition, some case-deletion influence diagnostics are developed for the joint posterior distribution based on the Kullback–Leibler divergence. Furthermore, for different parameter settings, sample sizes, and censoring percentages, we perform various simulations and compare the empirical distribution of the martingale-type residual with the standard normal distribution. These studies suggest that the residual analysis usually performed in normal linear regression models can be straightforwardly extended to the martingale-type residual in log-Weibull extended models with censored data. Finally, we analyze a real data set under a log-Weibull extended regression model. We perform diagnostic analysis and model checking based on the martingale-type residual to select an appropriate model.
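For intuition about the bathtub shape, the sketch below evaluates the hazard of the Weibull extended distribution in one common parameterization attributed to Xie et al. (2002); treat the exact functional form and the parameter values as assumptions.

```python
# Bathtub-shaped hazard of the Weibull extended distribution, in one common
# parameterization (an assumption here):
#   h(t) = lam * beta * (t/alpha)**(beta - 1) * exp((t/alpha)**beta)
import numpy as np

def hazard(t, lam=2.0, alpha=100.0, beta=0.5):
    z = t / alpha
    return lam * beta * z ** (beta - 1.0) * np.exp(z ** beta)

t = np.linspace(0.5, 400.0, 800)
h = hazard(t)
t_min = t[np.argmin(h)]
# For beta < 1 the hazard first decreases (infant mortality), reaches a
# minimum, then increases (wear-out) -- the classic bathtub shape.
print(f"hazard is minimized near t={t_min:.1f}; "
      f"h(1)={hazard(1.0):.3f}, h(t_min)={h.min():.3f}, h(400)={hazard(400.0):.3f}")
```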

7.
Tree-structured methods for exploratory data analysis have previously been extended to right-censored survival data. We further extend these methods to allow for truncation and time-dependent covariates. We apply the new methods to a data set on incubation times of acquired immunodeficiency syndrome (AIDS), using calendar time as a time-dependent covariate. Contrary to expectation, we find that rates of progression to AIDS appear to be faster after August 1989 than before.

8.
We consider the problem of modelling a long-memory time series using piecewise fractional autoregressive integrated moving average processes. The number as well as the locations of structural break points (BPs) and the parameters of each regime are assumed to be unknown. A four-step procedure is proposed to locate the BPs and to estimate the parameters of each regime. Its effectiveness is shown by Monte Carlo simulations, and an application to real traffic data modelling is considered.

9.
This article proposes a bivariate integer-valued autoregressive time-series model of order 1 (BINAR(1)) with COM–Poisson marginals to analyze a pair of nonstationary time series of counts. The interrelation between the series is induced by correlated innovations, while the nonstationarity is captured through a common set of time-dependent covariates that influence the count responses. The regression and dependence effects are estimated using a generalized quasi-likelihood (GQL) approach. Simulation experiments are performed to assess the performance of the estimation algorithms. The proposed BINAR(1) process is applied to analyze a real-life series of day and night accidents in Mauritius.
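The thinning operation underlying INAR-type models is easy to illustrate; the sketch below simulates a univariate INAR(1) with Poisson innovations, a deliberately simplified stand-in for the paper's bivariate COM–Poisson specification.

```python
# The building block of INAR-type count models: binomial thinning.
import numpy as np

rng = np.random.default_rng(4)

def simulate_inar1(n, alpha=0.6, lam=2.0):
    """X_t = alpha o X_{t-1} + eps_t, where 'o' is binomial thinning:
    alpha o X = Binomial(X, alpha), and eps_t ~ Poisson(lam)."""
    x = np.zeros(n, dtype=int)
    x[0] = rng.poisson(lam / (1 - alpha))        # start near the stationary mean
    for t in range(1, n):
        survivors = rng.binomial(x[t - 1], alpha)  # thinning of the last count
        x[t] = survivors + rng.poisson(lam)        # plus new arrivals
    return x

x = simulate_inar1(5000)
# For a stationary Poisson INAR(1), the lag-1 autocorrelation equals alpha
# and the mean equals lam / (1 - alpha).
r1 = np.corrcoef(x[:-1], x[1:])[0, 1]
print(f"sample mean={x.mean():.2f} (theory {2.0/0.4:.2f}), lag-1 acf={r1:.2f}")
```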

10.
In response to a Congressional directive, the Interstate Commerce Commission (ICC) has created a railroad costing system that includes as key components ratios designed to estimate variable costs associated with freight transportation. The estimated variability ratios are used to determine freight surcharges, jurisdictional threshold rates, and basic rail rates in administrative law and federal court proceedings. In this article we assess the quality and reliability of the estimated variability ratios and their components against standards from economic theory and from statistical theory and practice. Our work includes reproduction of the naive ICC regressions, updated naive regressions for the latest data set, estimation based on more secure econometric foundations, and sensitivity analyses comparing alternative estimation procedures. Fundamental questions arise concerning the scientific and evidentiary standards that are required of econometric methodology in policy making and regulatory activities.

11.
The analysis of time-indexed categorical data is important in many fields, e.g., telecommunication network monitoring, manufacturing process control, and ecology. Primary interest is in detecting and measuring serial associations and dependencies in such data. For cardinal time series analysis, autocorrelation is a convenient and informative measure of serial association. Yet for categorical time series analysis an analogous convenient measure and corresponding concepts of weak stationarity have not been provided. For two categorical variables, several ways of measuring association have been suggested. This paper reviews such measures and investigates their properties in a serial context. We discuss concepts of weak stationarity of a categorical time series, in particular stationarity in association measures. Serial association and weak stationarity are studied in the class of discrete ARMA processes introduced by Jacobs and Lewis (J. Time Ser. Anal. 4(1):19–36, 1983). "An intrinsic feature of a time series is that, typically, adjacent observations are dependent. The nature of this dependence among observations of a time series is of considerable practical interest. Time series analysis is concerned with techniques for the analysis of this dependence." (Box et al. 1994, p. 1)
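One concrete instance of such a measure is Cramér's V applied to the lagged contingency table of a categorical series; the sketch below (one illustrative choice among the measures reviewed, on a simulated sticky Markov chain) computes it at several lags.

```python
# A serial-association measure for a categorical time series: Cramer's V
# computed from the contingency table of (x_t, x_{t+k}).
import numpy as np
import pandas as pd
from scipy.stats import chi2_contingency

rng = np.random.default_rng(5)

# A sticky 3-state Markov chain => noticeable serial association at small lags.
P = np.array([[0.8, 0.1, 0.1],
              [0.1, 0.8, 0.1],
              [0.1, 0.1, 0.8]])
x = [0]
for _ in range(2000):
    x.append(rng.choice(3, p=P[x[-1]]))
x = np.array(x)

def lagged_cramers_v(x, k):
    table = pd.crosstab(x[:-k], x[k:])
    chi2 = chi2_contingency(table, correction=False)[0]
    n = table.values.sum()
    r = min(table.shape) - 1
    return np.sqrt(chi2 / (n * r))

for k in (1, 2, 5, 10):
    print(f"lag {k:2d}: Cramer's V = {lagged_cramers_v(x, k):.3f}")
```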

12.
Quantifying driver crash risks has been difficult because exposure data are often incompatible with crash frequency data. Induced-exposure methods offer a promising alternative: a relative measure of driver crash risk can be derived solely from crash frequency data. This paper describes an application of the extended Bradley–Terry model for paired preferences to estimating driver crash risks. We estimate the crash risk for driver groups defined by driver–vehicle characteristics from log-linear models, in terms of a set of relative risk scores, by using only crash frequency data. Illustrative examples using police-reported crash data from Hawaii are presented.
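A minimal sketch of a Bradley–Terry fit via the classical minorization-maximization iteration, applied to hypothetical pairwise at-fault counts for three driver groups (the counts and groups are invented for illustration; the paper's extended log-linear formulation is richer).

```python
# Bradley-Terry relative risk scores via the classical MM iteration.
import numpy as np

# wins[i, j] = number of two-vehicle crashes in which a group-i driver was
# judged at fault against a group-j driver (hypothetical data).
wins = np.array([[0, 30, 45],
                 [20, 0, 35],
                 [15, 18, 0]], dtype=float)
n_pairs = wins + wins.T            # total encounters between groups i and j
g = wins.shape[0]

p = np.ones(g)                     # relative risk scores, identified up to scale
for _ in range(200):               # MM updates converge monotonically
    total_wins = wins.sum(axis=1)
    denom = np.array([sum(n_pairs[i, j] / (p[i] + p[j])
                          for j in range(g) if j != i) for i in range(g)])
    p = total_wins / denom
    p /= p.sum()                   # normalize for identifiability

print("estimated relative crash-risk scores:", np.round(p, 3))
```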

13.
The Buckley–James estimator (BJE) [J. Buckley and I. James, Linear regression with censored data, Biometrika 66 (1979), pp. 429–436] has been extended from right-censored (RC) data to interval-censored (IC) data by Rabinowitz et al. [D. Rabinowitz, A. Tsiatis, and J. Aragon, Regression with interval-censored data, Biometrika 82 (1995), pp. 501–513]. The BJE is defined to be a zero-crossing of a modified score function H(b), a point at which H(·) changes its sign. We discuss several approaches (for finding a BJE with IC data) which are extensions of the existing algorithms for RC data. However, these extensions may not be appropriate for some data, in particular, they are not appropriate for a cancer data set that we are analysing. In this note, we present a feasible iterative algorithm for obtaining a BJE. We apply the method to our data.
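Since H(b) is typically a step function of b, one simple way to locate a BJE is to scan a grid for a sign change; the sketch below does exactly that with a toy placeholder H, not the estimator's actual score.

```python
# Locating a zero-crossing of a step-valued score function H(b), in the
# spirit of the Buckley-James setting. H below is a toy placeholder.
import numpy as np

def find_zero_crossing(H, b_grid):
    """Return a point where H changes sign, or None if it never does."""
    values = np.array([H(b) for b in b_grid])
    signs = np.sign(values)
    for i in range(len(b_grid) - 1):
        if signs[i] != 0 and signs[i + 1] != 0 and signs[i] != signs[i + 1]:
            # H is piecewise constant in b, so any point in the bracketing
            # interval is a legitimate zero-crossing; take the midpoint.
            return 0.5 * (b_grid[i] + b_grid[i + 1])
        if signs[i] == 0:
            return b_grid[i]
    return None

# Toy step-valued H with a sign change near b = 1.3.
H = lambda b: np.sign(1.3 - b) * (1 + np.floor(abs(1.3 - b) * 4))
bje = find_zero_crossing(H, np.linspace(0.0, 3.0, 301))
print(f"zero-crossing (toy BJE) at b = {bje:.3f}")
```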

14.
The volatility pattern of financial time series is often characterized by several peaks and abrupt changes, consistent with time-varying coefficients of the underlying data-generating process. As a consequence, the model-based classification of the volatility of a set of assets can vary over time. We propose a procedure to classify the unconditional volatility obtained from an extended family of Multiplicative Error Models with time-varying coefficients, in order to verify whether it changes in correspondence with different regimes or particular dates. The proposed procedure is tested on 15 stock indices.

15.
A nonparametric inference algorithm developed by Davis and Geman (1983) is extended and applied to a medical prediction problem. The algorithm employs an estimation procedure for acquiring pairwise statistics among variables of a binary data set, allows for the data-driven creation of interaction terms among the variables, and employs a decision rule which asymptotically gives the minimum expected error. The inference procedure was designed for large data sets but has been extended via the method of cross-validation to encompass smaller data sets.

16.
In this article, we extend a previously formulated threshold dose-response model with random litter effects that was applied to a data set from a developmental toxicity study. The dose-response pattern of the data indicates that a threshold dose level may exist. Additionally, there is noticeable variation between the responses across the dose levels. With threshold estimation being critical, the assumed variability structure should adequately model the variation while not taking away from the estimation of the threshold as well as the other parameters directly involved in the dose-response relationship. In the prior formulation, the random effect was modeled assuming identical variation in the interlitter response probabilities across all dose levels, that is, the model had a single parameter to account for the interlitter variability. In this new model, the random effect is modeled as having different response variability across dose levels, that is, multiple interlitter variability parameters. We performed the likelihood ratio test (LRT) to compare our extended model to the previous model. We conducted a simulation study to compare the bias of each model when fit to data generated with the underlying parametric structure of the opposing model. The extended threshold dose-response model with multiple response variation was less biased.

17.
Transition models are an important framework for modelling longitudinal categorical data. A relevant issue in applying these models is the condition of stationarity, i.e., homogeneity of the transition probabilities over time. We propose two tests to assess stationarity in transition models: Wald and likelihood-ratio tests which, in contrast to the classical test available in the literature, do not make use of the transition probabilities themselves but only of the estimated parameters of the models. In this paper, we present two motivating studies with ordinal longitudinal data, to which proportional-odds transition models are fitted and the two proposed tests as well as the classical test are applied. Additionally, their performance is assessed through simulation studies. The results show that the proposed tests perform well, being better at controlling type-I error, and that their power functions are asymptotically equivalent. Also, the correlations between the Wald, likelihood-ratio, and classical test statistics are positive and large, an indication of general concordance. Both of the proposed tests are also more flexible and can be applied in studies with qualitative and quantitative covariates.
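For reference, the classical benchmark test can be written down directly: a likelihood-ratio comparison of period-specific transition matrices against a single pooled matrix. The sketch below uses hypothetical counts for a 2-state chain over two periods.

```python
# Classical likelihood-ratio test for homogeneity of transition probabilities
# over time: period-specific transition matrices vs. one pooled matrix.
import numpy as np
from scipy.stats import chi2

# counts[p][i, j] = observed i -> j transitions in period p (hypothetical data).
counts = [np.array([[120., 30.], [25., 80.]]),
          np.array([[90., 60.], [50., 55.]])]

def loglik(c, P):
    mask = c > 0
    return (c[mask] * np.log(P[mask])).sum()

pooled = sum(counts)
P_pool = pooled / pooled.sum(axis=1, keepdims=True)

lr = 0.0
for c in counts:
    P_t = c / c.sum(axis=1, keepdims=True)   # period-specific MLE
    lr += 2.0 * (loglik(c, P_t) - loglik(c, P_pool))

k, n_periods = 2, 2
df = (n_periods - 1) * k * (k - 1)           # extra free parameters under H1
print(f"LR = {lr:.2f}, df = {df}, p-value = {chi2.sf(lr, df):.4f}")
```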

18.
We introduce a technique for extending the classical method of linear discriminant analysis (LDA) to data sets where the predictor variables are curves or functions. This procedure, which we call functional linear discriminant analysis (FLDA), is particularly useful when only fragments of the curves are observed. All the techniques associated with LDA can be extended for use with FLDA. In particular, FLDA can be used to produce classifications of new (test) curves, give an estimate of the discriminant function between classes, and provide a one- or two-dimensional pictorial representation of a set of curves. We also extend this procedure to provide generalizations of quadratic and regularized discriminant analysis.
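A minimal stand-in for the idea: represent each curve by its coefficients in a common basis and run ordinary LDA on the coefficients. (The article's method additionally handles curve fragments via a filtering step; complete curves and this particular basis are assumptions here.)

```python
# Curves -> basis coefficients -> ordinary LDA, a simplified FLDA-style sketch.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(6)
t = np.linspace(0.0, 1.0, 50)

def make_curves(n, shift):
    # Two classes differ in the amplitude of a smooth component.
    return np.array([np.sin(2 * np.pi * t) * (1.0 + shift)
                     + 0.3 * rng.standard_normal(t.size) for _ in range(n)])

X_curves = np.vstack([make_curves(40, 0.0), make_curves(40, 0.6)])
y = np.array([0] * 40 + [1] * 40)

# Project each curve onto a cubic polynomial + sinusoid basis evaluated on t.
basis = np.column_stack([t**0, t, t**2, t**3,
                         np.sin(2 * np.pi * t), np.cos(2 * np.pi * t)])
coefs = np.linalg.lstsq(basis, X_curves.T, rcond=None)[0].T   # one row per curve

lda = LinearDiscriminantAnalysis().fit(coefs, y)
print(f"training accuracy on basis coefficients: {lda.score(coefs, y):.2f}")
```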

19.
Stationary long-memory processes have been extensively studied over the past decades. When we deal with financial, economic, or environmental data, seasonality and time-varying long-range dependence can often be observed, and thus some kind of nonstationarity exists. To take this phenomenon into account, we propose a new class of stochastic processes: the locally stationary k-factor Gegenbauer process. We present a procedure for consistently estimating the time-varying parameters by applying the discrete wavelet packet transform. The robustness of the algorithm is investigated through a simulation study, and we apply our method to the Nikkei Stock Average 225 (NSA 225) index series.
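One ingredient of such a procedure, sketched with PyWavelets: a discrete wavelet packet decomposition whose node energies localize persistence by frequency band (the toy seasonal series and settings are assumptions; the paper's full estimator is more elaborate).

```python
# Discrete wavelet packet decomposition of a seasonal series, with per-node
# energies as a frequency-localized summary.
import numpy as np
import pywt

rng = np.random.default_rng(7)

# A toy series with a seasonal (Gegenbauer-like) component plus noise.
n = 1024
t = np.arange(n)
x = np.cos(2 * np.pi * t / 12) + 0.5 * rng.standard_normal(n)

wp = pywt.WaveletPacket(data=x, wavelet='db4', mode='symmetric', maxlevel=4)
for node in wp.get_level(4, order='freq'):
    energy = np.sum(np.asarray(node.data) ** 2)
    print(f"packet {node.path}: energy = {energy:.1f}")
```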
