Similar Articles
1.
《Econometric Reviews》2013,32(4):425-443
The integer-valued AR(1) model is generalized to encompass some of the more likely features of economic time series of count data. The generalizations come at the price of losing exact distributional properties. For most specifications, the first- and second-order moments, both conditional and unconditional, can be obtained. Hence estimation, testing, and forecasting are feasible and can be based on least squares or GMM techniques. An illustration based on the number of plants within an industrial sector is considered.
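For readers unfamiliar with the INAR(1) baseline that this paper generalizes, here is a minimal sketch (not from the paper; all names and parameter values are illustrative) of simulating an INAR(1) process with binomial thinning and Poisson innovations, and estimating it by conditional least squares:

```python
import numpy as np

def simulate_inar1(alpha, lam, n, rng):
    """Simulate X_t = alpha o X_{t-1} + e_t, with binomial thinning 'o'
    and Poisson(lam) innovations e_t."""
    x = np.empty(n, dtype=int)
    x[0] = rng.poisson(lam / (1 - alpha))      # draw near the stationary mean
    for t in range(1, n):
        survivors = rng.binomial(x[t - 1], alpha)  # thinning step
        x[t] = survivors + rng.poisson(lam)        # new arrivals
    return x

def cls_estimates(x):
    """Conditional least squares: regress X_t on X_{t-1}."""
    y, z = x[1:], x[:-1]
    alpha_hat = np.cov(y, z, bias=True)[0, 1] / np.var(z)
    lam_hat = y.mean() - alpha_hat * z.mean()
    return alpha_hat, lam_hat

rng = np.random.default_rng(0)
x = simulate_inar1(alpha=0.6, lam=2.0, n=500, rng=rng)
print(cls_estimates(x))  # should land near (0.6, 2.0)
```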

2.
In this paper, we consider statistical inference for the success probability in start-up demonstration tests in which a unit is rejected when a pre-fixed number of failures is observed before the required number of consecutive successes is achieved for acceptance. Since the expected value of the stopping time is not a monotone function of the unknown parameter, the method of moments is not useful in this situation. We therefore discuss two estimation methods for the success probability: (1) maximum likelihood estimation (MLE) via the expectation-maximization (EM) algorithm and (2) Bayesian estimation with a beta prior. We examine the small-sample properties of the MLE and the Bayesian estimator. Finally, we present an example to illustrate the methods of inference discussed here.
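The paper's EM algorithm is not reproduced here; as a hedged illustration of the same inference problem, the sketch below computes the exact stopping-time distribution of the accept/reject rule by dynamic programming and maximizes the resulting likelihood over a grid. The data and the design constants k (required consecutive successes) and m (allowed failures) are made up:

```python
import numpy as np

def stop_probs(p, k, m, tmax):
    """Distribution of the stopping time for the rule: accept at k
    consecutive successes, reject at the m-th failure. Returns
    P(accept at t) and P(reject at t) for t = 0..tmax."""
    acc = np.zeros(tmax + 1)
    rej = np.zeros(tmax + 1)
    prob = np.zeros((k, m))       # prob[s, f]: s-run of successes, f failures
    prob[0, 0] = 1.0
    for t in range(1, tmax + 1):
        new = np.zeros((k, m))
        for s in range(k):
            for f in range(m):
                q = prob[s, f]
                if q == 0.0:
                    continue
                if s + 1 == k:
                    acc[t] += q * p               # k-th consecutive success
                else:
                    new[s + 1, f] += q * p
                if f + 1 == m:
                    rej[t] += q * (1.0 - p)       # m-th failure
                else:
                    new[0, f + 1] += q * (1.0 - p)
        prob = new
    return acc, rej

def loglik(p, data, k, m):
    tmax = max(t for t, _ in data)
    acc, rej = stop_probs(p, k, m, tmax)
    return sum(np.log(acc[t] if accepted else rej[t]) for t, accepted in data)

# hypothetical data: (stopping time, accepted?) for four tested units
data = [(7, True), (5, True), (4, False), (9, True)]
grid = np.linspace(0.01, 0.99, 99)
p_hat = grid[np.argmax([loglik(p, data, k=3, m=3) for p in grid])]
print(round(float(p_hat), 2))
```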

3.
ABSTRACT

The bootstrap is typically less reliable in the context of time-series models with serial correlation of unknown form than when the regularity conditions for the conventional IID bootstrap apply. It is therefore useful to have diagnostic techniques capable of evaluating bootstrap performance in specific cases. Those suggested in this paper are closely related to the fast double bootstrap (FDB) and are not computationally intensive. They can also be used to gauge the performance of the FDB itself. Examples of bootstrapping time series are presented that illustrate the diagnostic procedures and show how the results can cast light on bootstrap performance.
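For concreteness, here is a generic sketch of the fast double bootstrap p-value (in the spirit of Davidson and MacKinnon) that the paper's diagnostics relate to; `statistic` and `resample` are user-supplied placeholders and an upper-tail test is assumed:

```python
import numpy as np

def fdb_pvalue(tau_hat, data, statistic, resample, B, rng):
    """Fast double bootstrap p-value for an upper-tail test.

    statistic(d) -> test statistic from dataset d
    resample(d, rng) -> one bootstrap dataset drawn from d
    Each first-level sample spawns exactly ONE second-level sample,
    which is what makes the FDB cheap relative to the full double
    bootstrap (B + B draws instead of B + B*B1)."""
    tau1 = np.empty(B)   # first-level statistics
    tau2 = np.empty(B)   # second-level statistics
    for j in range(B):
        d1 = resample(data, rng)
        tau1[j] = statistic(d1)
        d2 = resample(d1, rng)
        tau2[j] = statistic(d2)
    p1 = np.mean(tau1 > tau_hat)          # ordinary bootstrap p-value
    q = np.quantile(tau2, 1.0 - p1)       # (1 - p1) quantile, second level
    return np.mean(tau1 > q)              # FDB p-value

# toy usage: test H0: mean = 0 with a recentred nonparametric bootstrap
rng = np.random.default_rng(1)
x = rng.normal(0.2, 1.0, size=100)
stat = lambda d: np.sqrt(len(d)) * d.mean() / d.std(ddof=1)
resamp = lambda d, r: r.choice(d - d.mean(), size=len(d), replace=True)
print(fdb_pvalue(stat(x), x, stat, resamp, B=999, rng=rng))
```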

4.
This paper is motivated by the pioneering work of Emanuel Parzen, who advanced the estimation of (spectral) densities via kernel smoothing and established the role of reproducing kernel Hilbert spaces (RKHS) in the field of time series analysis. Here, we consider analysis of power (ANOPOW) for replicated time series collected in an experimental design where the main goals are to estimate, and to detect differences among, group spectra. To accomplish these goals, we obtain smooth estimators of the group spectra by assuming that each spectral density lies in some RKHS; we then apply penalized least squares in a smoothing spline ANOPOW. For inference, we obtain simultaneous confidence intervals for the estimated group spectra via bootstrapping.

5.
Kadilar and Cingi [Ratio estimators in simple random sampling, Appl. Math. Comput. 151(3) (2004), pp. 893–902] introduced some ratio-type estimators of the finite population mean under simple random sampling. Later, Kadilar and Cingi [New ratio estimators using correlation coefficient, Interstat 4 (2006), pp. 1–11] suggested another form of ratio-type estimator by modifying the estimator developed by Singh and Tailor [Use of known correlation coefficient in estimating the finite population mean, Stat. Transit. 6 (2003), pp. 555–560]. Kadilar and Cingi [Improvement in estimating the population mean in simple random sampling, Appl. Math. Lett. 19(1) (2006), pp. 75–79] suggested yet another class of ratio-type estimators by taking a weighted average of the two known classes of estimators referenced above. In this article, we propose an alternative form of ratio-type estimators that is better than the competing ratio, regression, and other ratio-type estimators considered here. The results are also supported by the analysis of three real data sets that were considered by Kadilar and Cingi.
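As background, the classical ratio estimator underlying all of these papers is ȳ_R = ȳ(X̄/x̄). The sketch below (an illustrative simulation design, not the authors' data) compares its mean squared error with the sample mean and the regression estimator under simple random sampling:

```python
import numpy as np

rng = np.random.default_rng(2)

# Finite population with a strong positive x-y relationship
N = 5000
x_pop = rng.gamma(4.0, 2.0, size=N)
y_pop = 3.0 * x_pop + rng.normal(0.0, 2.0, size=N)
X_bar = x_pop.mean()                     # known population mean of x
Y_bar = y_pop.mean()                     # estimation target

def one_srs(n=50):
    idx = rng.choice(N, size=n, replace=False)   # simple random sample
    x, y = x_pop[idx], y_pop[idx]
    mean_est = y.mean()
    ratio_est = y.mean() * X_bar / x.mean()      # classical ratio estimator
    b = np.cov(x, y)[0, 1] / np.var(x, ddof=1)
    reg_est = y.mean() + b * (X_bar - x.mean())  # regression estimator
    return mean_est, ratio_est, reg_est

sims = np.array([one_srs() for _ in range(5000)])
mse = ((sims - Y_bar) ** 2).mean(axis=0)
print(dict(zip(["mean", "ratio", "regression"], mse.round(3))))
```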

6.
A regular supply of applicants to Queen's University in Kingston, Ontario is provided by 65 high schools. Each high school can be characterized by a series of grading standards that change from year to year. To aid admissions decisions, it is desirable to forecast the current year's grading standards for all 65 high schools using grading standards estimated from past years' data. We develop and apply a Bayesian break-point time-series model that generates forecasts involving smoothing across time for each school and smoothing across schools. "Break point" refers to a point in time that divides the past into the "old past" and the "recent past", where the yearly observations in the recent past are exchangeable with the observations in the year to be forecast. We show that this model works fairly well when applied to 11 years of Queen's University data. The model can be applied to other data sets with a parallel time-series structure and short history, and can be extended in several ways to more complicated structures.

7.
Zero-inflated Poisson (ZIP) and zero-inflated negative binomial (ZINB) models are recommended for handling excessive zeros in count data. For various reasons, researchers may not address zero inflation. This paper helps educate researchers on (1) the importance of accounting for zero inflation and (2) the consequences of misspecifying the statistical model. Using simulations, we found that when the zero inflation in the data was ignored, estimation was poor and statistically significant findings were missed. When overdispersion within the zero-inflated data was ignored, poor estimation and inflated Type I errors resulted. Recommendations on when to use the ZINB and ZIP models are provided. In an illustration using a two-step model selection procedure (the likelihood ratio test and the Vuong test), the procedure correctly identified the ZIP model only when the distributions had moderate means and sample sizes; it did not correctly identify the ZINB model or the zero inflation in the ZIP and ZINB distributions.
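As a small illustration of the kind of comparison described above (not the authors' simulation design; parameter values are arbitrary), the following sketch simulates zero-inflated Poisson counts and contrasts a plain Poisson fit with a ZIP fit in statsmodels:

```python
import numpy as np
import statsmodels.api as sm
from statsmodels.discrete.count_model import ZeroInflatedPoisson

rng = np.random.default_rng(3)
n = 2000
x = rng.normal(size=n)
mu = np.exp(0.5 + 0.8 * x)                     # Poisson mean
pi = 0.3                                       # structural-zero probability
counts = np.where(rng.uniform(size=n) < pi, 0, rng.poisson(mu))

X = sm.add_constant(x)
poisson_fit = sm.Poisson(counts, X).fit(disp=False)
zip_fit = ZeroInflatedPoisson(counts, X, inflation="logit").fit(disp=False)

# Ignoring the zero inflation typically worsens fit noticeably
print("Poisson AIC:", round(poisson_fit.aic, 1))
print("ZIP AIC:    ", round(zip_fit.aic, 1))
```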

8.
To estimate causal relationships, time series econometricians must be aware of spurious correlation, a problem first mentioned by Yule (1926). To deal with this problem, one can work either with differenced series or with multivariate models: VAR and VEC (or VECM) models. These models usually include at least one cointegration relation. Although the Bayesian literature on VAR/VEC models is quite advanced, Bauwens et al. (1999) highlighted that “the topic of selecting the cointegrating rank has not yet given very useful and convincing results”.

The present article applies the Full Bayesian Significance Test (FBST), which is especially designed to deal with sharp hypotheses, to cointegration rank selection tests in VECM time-series models. It shows the FBST implementation using both simulated data sets and data sets available in the literature. As an illustration, standard noninformative priors are used.

9.
In this paper, we propose a methodology to analyze longitudinal data through distances between pairs of observations (or individuals) with regard to the explanatory variables used to fit continuous response variables. Restricted maximum likelihood and generalized least squares are used to estimate the parameters in the model. We applied this new approach to study the effect of gender and exposure on a deviant behavior variable with respect to tolerance for a group of youths studied over a period of 5 years. We performed simulations comparing our distance-based method with classical longitudinal analysis under both AR(1) and compound symmetry correlation structures, evaluating the models by the Akaike and Bayesian information criteria and by the relative efficiency of the generalized variance of the errors of each model. We found small gains in fit for the proposed model relative to the classical methodology, particularly in small samples, regardless of the variance, correlation, autocorrelation structure, and number of time measurements.

10.
Recursive partitioning algorithms separate a feature space into a set of disjoint rectangles; then, usually, a constant is fitted in every partition. While this is a simple and intuitive approach, it may still lack interpretability as to how a specific relationship between dependent and independent variables may look. Or a certain model may be assumed or of interest, and there may be a number of candidate variables that non-linearly give rise to different model parameter values. We present an approach that combines generalized linear models (GLM) with recursive partitioning, offering enhanced interpretability relative to classical trees as well as an explorative way to assess a candidate variable's influence on a parametric model. The method conducts recursive partitioning of a GLM by (1) fitting the model to the data set, (2) testing for parameter instability over a set of partitioning variables, and (3) splitting the data set with respect to the variable associated with the highest instability. The outcome is a tree in which each terminal node is associated with a GLM. We show the method's versatility and suitability for gaining additional insight into the relationship of dependent and independent variables through two examples, modelling voting behaviour and a failure model for debt amortization, and compare it with alternative approaches.
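The sketch below conveys steps (1)-(3) in a deliberately simplified form: instead of the formal parameter-instability test, it uses a crude likelihood-ratio comparison of median splits, so it illustrates the idea rather than the authors' algorithm; all names and thresholds are hypothetical:

```python
import numpy as np
import statsmodels.api as sm
from scipy.stats import chi2

def glm_tree(y, X, Z, depth=0, max_depth=2, alpha=0.05):
    """Toy GLM tree: fit a Gaussian GLM of y on X, then try a median
    split on each partitioning variable in Z and keep the split whose
    child models raise the log-likelihood most, judged by a crude
    likelihood-ratio chi-square test (NOT the paper's instability test)."""
    Xc = sm.add_constant(X)
    parent = sm.GLM(y, Xc).fit()
    best = None
    for j in range(Z.shape[1]):
        mask = Z[:, j] <= np.median(Z[:, j])
        if mask.sum() < 20 or (~mask).sum() < 20:        # minimum node size
            continue
        ll = (sm.GLM(y[mask], Xc[mask]).fit().llf
              + sm.GLM(y[~mask], Xc[~mask]).fit().llf)
        lr = 2.0 * (ll - parent.llf)                     # LR statistic
        if best is None or lr > best[0]:
            best = (lr, j, mask)
    extra_df = Xc.shape[1] + 1               # parameters gained by splitting
    if depth < max_depth and best and chi2.sf(best[0], extra_df) < alpha:
        _, j, mask = best
        return {"split_var": j,
                "left": glm_tree(y[mask], X[mask], Z[mask], depth + 1),
                "right": glm_tree(y[~mask], X[~mask], Z[~mask], depth + 1)}
    return {"leaf": parent.params.round(2)}

# toy data: the slope of y on X changes with the first partitioning variable
rng = np.random.default_rng(8)
n = 600
X = rng.normal(size=(n, 1))
Z = rng.uniform(size=(n, 2))
slope = np.where(Z[:, 0] > 0.5, 2.0, -1.0)
y = 1.0 + slope * X[:, 0] + rng.normal(size=n)
print(glm_tree(y, X, Z))
```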

11.
The monitoring of web servers through statistical frameworks is of utmost importance for detecting suspicious anomalies in network traffic or misuse actions that compromise the integrity, confidentiality, and availability of information. In this paper, using the Plackett copula function, we propose a bivariate beta-autoregressive moving average time-series model for proportion data over time, such as the error rates arising in web server monitoring. To illustrate the proposed methodology, we monitor a Brazilian web server's rates of connection synchronization and rejection errors in a web system, with error rates logged over the past 10 min. In essence, the entire methodology may be generalized to any number of time series of error rates.
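Since the Plackett copula is the paper's dependence device, here is a minimal sketch of its closed-form CDF and the joint tail probabilities it induces; the parameter values and the exceedance level are illustrative:

```python
import numpy as np

def plackett_cdf(u, v, theta):
    """Plackett copula C(u, v; theta); theta > 0. theta = 1 gives
    independence, large theta gives strong positive dependence."""
    if np.isclose(theta, 1.0):
        return u * v
    s = 1.0 + (theta - 1.0) * (u + v)
    return (s - np.sqrt(s * s - 4.0 * u * v * theta * (theta - 1.0))) \
        / (2.0 * (theta - 1.0))

def joint_tail(u, v, theta):
    """P(U > u, V > v) by inclusion-exclusion on the copula."""
    return 1.0 - u - v + plackett_cdf(u, v, theta)

# probability that both error rates exceed their 90th percentile
for theta in (1.0, 5.0, 20.0):
    print(theta, round(joint_tail(0.9, 0.9, theta), 4))
# independence gives 0.01; dependence inflates the joint exceedance
```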

12.
This paper is about vector autoregressive-moving average models with time-dependent coefficients to represent non-stationary time series. Contrary to other papers in the univariate case, the coefficients depend on time but not on the series' length n. Under appropriate assumptions, it is shown that a Gaussian quasi-maximum likelihood estimator is almost surely consistent and asymptotically normal. The theoretical results are illustrated by means of two examples of bivariate processes, for which it is shown that the assumptions underlying the theoretical results apply. In the second example, the innovations are marginally heteroscedastic with a correlation ranging from −0.8 to 0.8. In the two examples, the asymptotic information matrix is obtained in the Gaussian case. Finally, the finite-sample behaviour is checked via a Monte Carlo simulation study for n from 25 to 400. The results confirm the validity of the asymptotic properties even for short series and corroborate the asymptotic information matrix deduced from the theory.

13.
Bivariate integer-valued time series occur in many areas, such as finance, epidemiology, and business. In this article, we present bivariate autoregressive integer-valued time-series models based on the signed thinning operator. Compared with classical bivariate INAR models, the new processes have the advantage of allowing negative values for both the time series and the autocorrelation functions. Strict stationarity and ergodicity of the processes are established. The moments and the autocovariance functions are determined. The conditional least squares estimator of the model parameters is considered and the asymptotic properties of the obtained estimators are derived. An analysis of a real data set from finance and a simulation study are carried out to assess the performance of the model.

14.
Compared with Archimedean copulas, hierarchical Archimedean copulas (HACs) have a more general structure, and compared with elliptical copulas they require fewer parameters to be estimated. A two-stage maximum likelihood method is used to estimate the HAC: the marginal distribution of each component is estimated first, and the copula function is then estimated on that basis. In the empirical analysis, Clayton- and Gumbel-type HACs are used to study the dependence among four stock price series. Before determining the structure of the HAC and estimating its parameters, ARMA-GARCH processes are applied to remove autocorrelation and conditional heteroscedasticity from the series. A comparison of Akaike information criterion values indicates that the fully nested Gumbel-type HAC best captures this dependence.
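The two-stage estimation idea can be sketched generically; the toy below uses a bivariate Clayton copula with Gaussian margins for brevity, whereas the article itself fits nested Clayton/Gumbel HACs after ARMA-GARCH filtering:

```python
import numpy as np
from scipy import stats, optimize

def clayton_loglik(theta, u, v):
    """Log-density of the bivariate Clayton copula, theta > 0."""
    s = u ** (-theta) + v ** (-theta) - 1.0
    return np.sum(np.log1p(theta)
                  - (1.0 + theta) * (np.log(u) + np.log(v))
                  - (2.0 + 1.0 / theta) * np.log(s))

def two_stage_fit(x, y):
    """Stage 1: fit each margin; stage 2: fit the copula on the
    probability-integral transforms of the fitted margins."""
    mx = stats.norm.fit(x)            # illustrative Gaussian margins
    my = stats.norm.fit(y)
    u = stats.norm.cdf(x, *mx)
    v = stats.norm.cdf(y, *my)
    res = optimize.minimize_scalar(lambda t: -clayton_loglik(t, u, v),
                                   bounds=(1e-4, 30.0), method="bounded")
    return mx, my, res.x

# toy data with positive dependence
rng = np.random.default_rng(4)
z = rng.multivariate_normal([0, 0], [[1, 0.6], [0.6, 1]], size=1000)
print(two_stage_fit(z[:, 0], z[:, 1])[2])   # fitted Clayton theta
```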

15.
We present a time-domain goodness-of-fit (gof) diagnostic test that is based on signal-extraction variances for nonstationary time series. This diagnostic test extends the time-domain gof statistic of Maravall (2003) by taking into account the effects of model parameter uncertainty, utilizing theoretical results of McElroy and Holan (2009). We demonstrate that omitting this correction results in a severely undersized statistic. Adequate size and power are obtained in Monte Carlo studies for fairly short time series (10 to 15 years of monthly data). Our Monte Carlo studies of finite-sample size and power consider different combinations of signal and noise components using seasonal, trend, and irregular component models obtained via canonical decomposition. Details of the implementation appropriate for SARIMA models are given. We apply the gof diagnostic test statistics to several U.S. Census Bureau time series. The results generally corroborate the output of the automatic model selection procedure of the X-12-ARIMA software, which, in contrast to our diagnostic test statistic, does not involve hypothesis testing. We conclude that these diagnostic test statistics are a useful supplementary model-checking tool for practitioners engaged in model-based seasonal adjustment.

16.
This paper tackles the issue of economic time-series modeling from a joint time- and frequency-domain standpoint, with the objective of estimating the latent trend-cycle component. Since time-series records are data strings over a finite time span, they read as samples of contiguous data drawn from realizations of stochastic processes aligned with the time arrow. This accounts for the interpretation of time series as time-limited signals. Economic time series (up to a disturbance term) result from latent components known as trend, cycle, and seasonality, whose generating stochastic processes are harmonizable on a finite average-power argument. In addition, since trend is associated with long-run regular movements, and cycle with medium-term economic fluctuations, both of these turn out to be band-limited components. Recognizing such a frequency-domain location permits a filter-based approach to component estimation. This is accomplished through a Toeplitz matrix operator with sinc functions as entries, mirroring the ideal low-pass filter impulse response. The notion of a virtual transfer function is developed and its closed-form expression derived in order to evaluate the filter's features. The paper is completed by applying this filter to quarterly data on Italian industrial production, thus shedding light on the performance of the estimation procedure.
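The filtering device described above, a Toeplitz matrix with sinc entries mirroring the ideal low-pass impulse response, can be sketched as follows; the cutoff frequency and the toy series are illustrative, not the paper's:

```python
import numpy as np
from scipy.linalg import toeplitz

def sinc_lowpass_matrix(n, cutoff):
    """Toeplitz operator whose rows carry the ideal low-pass impulse
    response h[k] = sin(cutoff*k) / (pi*k); `cutoff` in (0, pi)."""
    k = np.arange(n)
    h = (cutoff / np.pi) * np.sinc(cutoff * k / np.pi)  # np.sinc(x)=sin(pi x)/(pi x)
    return toeplitz(h)            # symmetric: same first row and column

# trend-plus-cycle toy series: slow sinusoid + fast sinusoid + noise
rng = np.random.default_rng(5)
t = np.arange(240)                        # e.g. 20 years of monthly data
slow = np.sin(2 * np.pi * t / 120)        # period 120 -> freq ~0.052 rad
fast = 0.5 * np.sin(2 * np.pi * t / 6)    # period 6   -> freq ~1.047 rad
y = slow + fast + 0.2 * rng.normal(size=len(t))

F = sinc_lowpass_matrix(len(t), cutoff=0.3)   # pass only low frequencies
trend_cycle = F @ y
print(np.round(np.corrcoef(trend_cycle, slow)[0, 1], 3))  # close to 1
```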

17.
This article describes a new Monte Carlo method for the evaluation of orthant probabilities by sampling first passage times of a non-singular Gaussian discrete time series across an absorbing boundary. The procedure simulates several time-series sample paths and records their first crossing instants, so the computation of the orthant probabilities is traced back to the accurate simulation of a non-singular Gaussian discrete time series. Moreover, if the simulation is also efficient, this method is shown to be speedier than the others proposed in the literature. As an example, we make use of the Davies–Harte algorithm in the evaluation of the orthant probabilities associated with the ARFIMA(0, d, 0) model. Test results are presented that compare this method with currently available software.
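A stripped-down version of the idea (plain Cholesky simulation rather than the Davies–Harte algorithm used in the article) looks like this; the AR(1)-type covariance is illustrative:

```python
import numpy as np

def orthant_prob_mc(cov, n_paths, rng):
    """P(X_1 > 0, ..., X_n > 0) for X ~ N(0, cov), estimated by
    simulating paths and recording first passages below zero:
    the orthant probability is the share of paths that never cross."""
    L = np.linalg.cholesky(cov)
    n = cov.shape[0]
    x = rng.standard_normal((n_paths, n)) @ L.T
    crossed = x <= 0.0
    first_passage = np.where(crossed.any(axis=1),
                             crossed.argmax(axis=1), n)  # n = no crossing
    return np.mean(first_passage == n)

# AR(1)-type covariance: cov[i, j] = rho**|i - j|
rng = np.random.default_rng(6)
n, rho = 8, 0.5
cov = rho ** np.abs(np.subtract.outer(np.arange(n), np.arange(n)))
print(orthant_prob_mc(cov, 200_000, rng))
# sanity check: with rho = 0 the answer would be 2**-8 = 0.0039
```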

18.
Variation of marine temperature at different time scales is a central environmental factor in the life cycle of marine organisms, and may have particular importance for various life stages of anadromous species such as Atlantic salmon. To understand the salient features of temperature variation, we employ scale space multiresolution analysis, which uses differences of smooths of a time series to decompose it into a sum of scale-dependent components. The number of resolved components can be determined either automatically or by exploring a map that visualizes the structure of the time series. The statistical credibility of the features of the components is established with Bayesian inference. The method was applied to analyze a marine temperature time series measured in the Barents Sea and its correlation with the abundance of Atlantic salmon in three Barents Sea rivers. Besides the annual seasonal variation and a linear trend, the method revealed mid-time-scale (~10 years) and long-time-scale (~30 years) variation. The 10-year quasi-cyclical component of the temperature time series appears to be connected with a similar feature in Atlantic salmon abundance. These findings can provide information about the environmental factors affecting seasonal and periodic variation in the survival and migrations of Atlantic salmon and other migratory fish.
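The differences-of-smooths decomposition at the heart of scale space multiresolution analysis can be sketched as follows (Gaussian smoothing and the bandwidths are illustrative; the article's component selection and Bayesian credibility analysis are not reproduced):

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

def multiresolution(y, bandwidths):
    """Decompose y into scale components as differences of smooths:
    with s_0 = y and s_i = smooth(y, bw_i), the components are
    s_0 - s_1, s_1 - s_2, ..., plus the final smooth, and they
    sum back to y exactly."""
    smooths = [y] + [gaussian_filter1d(y, bw) for bw in bandwidths]
    comps = [smooths[i] - smooths[i + 1] for i in range(len(bandwidths))]
    comps.append(smooths[-1])      # the residual long-run component
    return comps

# toy "temperature" series: trend + slow cycle + seasonality + noise
rng = np.random.default_rng(7)
t = np.arange(480)                                 # 40 years, monthly
y = (0.002 * t + np.sin(2 * np.pi * t / 120)       # trend + ~10y cycle
     + 0.8 * np.sin(2 * np.pi * t / 12)            # seasonal
     + 0.3 * rng.normal(size=len(t)))

comps = multiresolution(y, bandwidths=[3, 30, 90])  # fine -> coarse
print(np.allclose(sum(comps), y))                   # True: exact additivity
```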

19.
I. Introduction. Owing to violent disturbances in the structure of the world economy, such as financial crises and policy changes, structural breaks frequently occur in economic time series. Structural breaks in the economic process affect the results of cointegration analysis and deprive many representative tests in the cointegration methodology of their original power. For example, in unit root tests the unit root may drift (the value of the characteristic root is unstable), and the unit root test statistic may also…

20.
ABSTRACT

This article proposes a procedure for detecting patches of additive outliers in autoregressive time series models. The procedure improves on existing detection methods via Gibbs sampling. We combine the Bayesian method and the Kalman smoother to generate candidate models of outlier patches, and the best model, with the minimum Bayesian information criterion (BIC), is selected among them. We show that this combined Bayesian and Kalman method (CBK) can reduce the masking and swamping effects in detecting patches of additive outliers. The method is illustrated with simulated data and then with a real set of observations.
