Similar literature
20 similar records found (search time: 31 ms)
1.
In statistical data analysis it is often important to compare, classify, and cluster different time series. Various methods have been proposed in the literature for these purposes, but they usually assume time series of the same sample size. In this article, we propose a spectral-domain method for handling time series of unequal length. The method makes the spectral estimates comparable by producing statistics at the same frequencies. The procedure is compared with other methods proposed in the literature in a Monte Carlo simulation study. As an illustrative example, the proposed spectral method is applied to cluster industrial production series of several developed countries.
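As a rough illustration of the idea (a sketch, not the authors' exact statistic), one can compute periodogram ordinates for each series and interpolate them onto a common frequency grid, so that series of unequal length yield comparable spectral statistics. All function names and the grid choice below are this sketch's own assumptions:

```python
import cmath
import math

def periodogram(x):
    """Periodogram ordinates I(f_j) at the Fourier frequencies f_j = j/n."""
    n = len(x)
    mean = sum(x) / n
    ords = []
    for j in range(1, n // 2 + 1):
        s = sum((x[t] - mean) * cmath.exp(-2j * math.pi * j * t / n)
                for t in range(n))
        ords.append((j / n, abs(s) ** 2 / n))
    return ords

def interpolate_at(ords, freqs):
    """Linearly interpolate periodogram ordinates onto a common frequency grid,
    making series of different lengths comparable frequency by frequency."""
    out = []
    for f in freqs:
        for (f0, p0), (f1, p1) in zip(ords, ords[1:]):
            if f0 <= f <= f1:
                w = (f - f0) / (f1 - f0)
                out.append((1 - w) * p0 + w * p1)
                break
        else:  # f outside the Fourier-frequency range: clamp to the edge
            out.append(ords[-1][1] if f > ords[-1][0] else ords[0][1])
    return out
```

Two series of lengths 100 and 60, say, then produce equal-length vectors of ordinates at the shared grid, to which a distance or clustering method can be applied.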

2.
The paper has its origin in the finding that frequency-domain estimation of ARMA models can produce estimates that may be remarkably biased. Both of the frequency-domain estimation methods considered in the paper are based on the frequency-domain likelihood function, which depends on the periodogram ordinates of the time series. It is found that, as estimates of the spectrum ordinates, the corresponding periodogram ordinates may themselves carry a considerable bias, which in turn biases the parameter estimates produced by frequency-domain estimation of an ARMA model. The bias is reduced by tapering the observed time series. An example is given of estimation experiments for time series simulated from a pure autoregressive process of order two.
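Tapering here means down-weighting the ends of the series before computing the periodogram, which reduces spectral leakage and hence periodogram bias. A minimal sketch of a standard split-cosine-bell (Tukey) taper follows; the function name and the default taper proportion are assumptions of this sketch, not taken from the paper:

```python
import math

def cosine_taper(x, p=0.1):
    """Apply a split-cosine-bell (Tukey) taper to a series: the first and last
    p/2 of the observations are smoothly shrunk toward zero, the middle is
    left unchanged. Returns the tapered series."""
    n = len(x)
    m = int(p * n / 2)  # number of points tapered at each end
    w = []
    for t in range(n):
        if t < m:
            w.append(0.5 * (1 - math.cos(math.pi * (t + 0.5) / m)))
        elif t >= n - m:
            w.append(0.5 * (1 - math.cos(math.pi * (n - t - 0.5) / m)))
        else:
            w.append(1.0)
    return [wi * xi for wi, xi in zip(w, x)]
```

The periodogram is then computed from the tapered series (with a variance correction for the taper weights, omitted here for brevity).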

3.
Comparing two time series is an important problem in many applications. In this paper, a computational bootstrap procedure is proposed to test whether two dependent stationary time series have the same autocovariance structure. The blocks-of-blocks bootstrap on bivariate time series is employed to estimate the covariance matrix needed to construct the proposed test statistic. Without much additional effort, the bootstrap critical values can also be computed as a by-product of the same bootstrap procedure. The asymptotic distribution of the test statistic under the null hypothesis is obtained. A simulation study is conducted to examine the finite-sample performance of the test. The simulation results show that the proposed procedure with the bootstrap critical values performs well empirically and is especially useful when the time series are short and non-normal. The proposed test is applied to a real data set to understand the relationship between the input and output signals of a chemical process.
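The ingredients of such a test can be sketched in simplified form (a plain moving-block bootstrap and a naive distance between estimated autocovariances, not the blocks-of-blocks procedure or the studentized statistic of the paper; all names are this sketch's assumptions):

```python
import random

def autocov(x, lag):
    """Sample autocovariance at the given lag (divisor n convention)."""
    n = len(x)
    m = sum(x) / n
    return sum((x[t] - m) * (x[t + lag] - m) for t in range(n - lag)) / n

def block_bootstrap(x, block_len, rng):
    """Moving-block bootstrap: concatenate randomly chosen overlapping
    blocks until the original length is reached."""
    n = len(x)
    starts = list(range(n - block_len + 1))
    out = []
    while len(out) < n:
        s = rng.choice(starts)
        out.extend(x[s:s + block_len])
    return out[:n]

def diff_stat(x, y, max_lag=3):
    """Naive squared distance between the first few autocovariances of x and y."""
    return sum((autocov(x, k) - autocov(y, k)) ** 2 for k in range(max_lag + 1))
```

Bootstrap critical values would come from recomputing `diff_stat` over many block-bootstrap resamples drawn under the null.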

4.
Summary In recent years, the bootstrap method has been extended to time series analysis, where the observations are serially correlated. Contributions have focused on the autoregressive model, producing alternative resampling procedures. In contrast, apart from some empirical applications, very little attention has been paid to extending the bootstrap method to pure moving average (MA) or mixed ARMA models. In this paper, we present a new bootstrap procedure which can be applied to assess the distributional properties of moving average parameter estimates obtained by a least squares approach. We discuss the methodology and the limits of its use. Finally, the performance of the bootstrap approach is compared with that of the competing alternative given by Monte Carlo simulation. Research partially supported by CNR and MURST.

5.
We investigate and develop methods for structural break detection, considering time series from thermal spraying process monitoring. Since engineers induce technical malfunctions during the processes, the time series exhibit structural breaks at known time points, giving us valuable information for the investigation. First, we consider a recently developed robust online (real-time) filtering (i.e. smoothing) procedure that includes a test for local linearity. This test rejects when jumps and trend changes are present, so it can also be used to detect such structural breaks online. Second, based on the filtering procedure, we develop a robust method for the online detection of ongoing trends. We investigate these two methods for the online detection of structural breaks by simulations and by applications to the time series from the manipulated spraying processes. Third, we consider a recently developed fluctuation test for constant variances that can be applied offline, i.e. after the whole time series has been observed, to control the spraying results. Since this test is not reliable when jumps are present in the time series, we suggest a data transformation based on filtering and demonstrate that this transformation makes the test applicable.

6.
This article addresses the problem of estimating the time of apparent death in a binary stochastic process. We show that, when only censored data are available, a fitted logistic regression model may estimate the time of death incorrectly. We improve this estimation by utilizing discrete-event simulation to produce simulated complete time series data. The proposed methodology may be applied in situations where the time of death cannot be formally determined and has to be estimated based on prolonged inactivity. As an illustration, we use observed monthly activity patterns from 300 real Open Source Software development projects sampled from Sourceforge.net.

7.
We investigate the power-law scaling behaviors of returns for a financial price process driven by a voter interacting dynamic system, in comparison with a real financial market index (the Shanghai Composite Index). The voter system is a continuous-time Markov process which originally represents voters' attitudes on a particular topic; that is, voters reconsider their opinions at times distributed according to independent exponential random variables. In this paper, the detrended fluctuation analysis method is employed to explore the long-range power-law correlations of return time series for different values of the parameters in the financial model. The findings show no or only very weak long-range power-law correlations for the simulated returns, but strong long-range dependence for the absolute returns. The multiplier distribution is studied to demonstrate directly the existence of scale invariance, by comparing the actual data of the Shanghai Stock Exchange with the simulation data of the model. Moreover, Zipf analysis is applied to investigate the statistical behaviors of the frequency functions and the distributions of the returns. In this comparative study, the simulation data from our price model exhibit behaviors very similar to those of the real stock index, which to some extent supports the applicability of the model to the market.
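Detrended fluctuation analysis (DFA) itself is mechanical: integrate the mean-centered series into a profile, fit and remove a linear trend in windows of size s, and read the scaling exponent alpha off the slope of log F(s) versus log s (alpha near 0.5 indicates no long-range correlation). A minimal pure-Python sketch, with first-order detrending and window sizes chosen by the caller (the function names are this sketch's assumptions):

```python
import math

def _linfit(xs, ys):
    """Least-squares slope and intercept of ys on xs."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return b, my - b * mx

def dfa(x, scales):
    """First-order DFA: return the estimated scaling exponent alpha."""
    mean = sum(x) / len(x)
    profile, s = [], 0.0
    for v in x:                       # integrated, mean-centered profile
        s += v - mean
        profile.append(s)
    logs, logF = [], []
    for w in scales:
        resid = []
        for start in range(0, len(profile) - w + 1, w):
            seg = profile[start:start + w]
            ts = list(range(w))
            b, a = _linfit(ts, seg)   # local linear trend in this window
            resid.extend((y - (a + b * t)) ** 2 for t, y in zip(ts, seg))
        logs.append(math.log(w))
        logF.append(0.5 * math.log(sum(resid) / len(resid)))
    alpha, _ = _linfit(logs, logF)    # slope of log F(s) vs log s
    return alpha
```

For uncorrelated returns one expects alpha close to 0.5; for the absolute returns of the model, the abstract reports strong long-range dependence, i.e. alpha well above 0.5.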

8.
In this paper, we introduce a new probability model known as the Marshall–Olkin q-Weibull distribution. Various properties of the distribution and its hazard rate function are considered. The distribution is applied to model a biostatistical data set. The corresponding time series models are developed to illustrate its application in time series modeling. We also develop different types of autoregressive processes with minification structure and max–min structure, which can be applied in a rich variety of real-life contexts. Sample path properties are examined and generalizations to higher orders are also made. The model is applied to time series data on the daily discharge of the Neyyar river in Kerala, India.

9.
10.
Time-to-event data have been extensively studied in many areas. Although multiple time scales are often observed, commonly used methods are based on a single time scale. Analysing time-to-event data on two time scales can offer more extensive insight into the phenomenon. We introduce a non-parametric Bayesian intensity model to analyse two-dimensional point processes on Lexis diagrams. After a simple discretization of the two-dimensional process, we model the intensity by one-dimensional piecewise constant hazard functions parametrized by the change points and corresponding hazard levels. Our prior distribution incorporates a built-in smoothing feature in two dimensions. We implement posterior simulation using the reversible jump Metropolis–Hastings algorithm and demonstrate the applicability of the method using both simulated and empirical survival data. Our approach outperforms commonly applied models by borrowing strength in two dimensions.

11.
Bayesian dynamic linear models (DLMs) are useful in time series modelling because of the flexibility that they offer for obtaining good forecasts. They are based on a decomposition of the relevant factors which explain the behaviour of the series through a set of state parameters. Nevertheless, the DLM as developed by West and Harrison depends on additional quantities, such as the variance of the system disturbances, which in practice are unknown. These are referred to here as 'hyper-parameters' of the model. In this paper, DLMs with autoregressive components are used to describe time series that show cyclic behaviour. The marginal posterior distribution of the state parameters can be obtained by weighting the conditional distribution of the state parameters by the marginal distribution of the hyper-parameters. In most cases, the joint distribution of the hyper-parameters can be obtained analytically but the marginal distributions of its components cannot, thus requiring numerical integration. We propose to obtain samples of the hyper-parameters by a variant of the sampling importance resampling method. A few applications are shown with simulated and real data sets.
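The core of sampling importance resampling (SIR) is simple: draw candidates from an initial distribution, weight each by its likelihood, and resample with probability proportional to the weights. A generic sketch (not the paper's specific variant; the function name and log-weight stabilization are this sketch's assumptions):

```python
import math

def sir_sample(draws, logliks, k, rng):
    """Sampling importance resampling: return k values resampled from
    `draws` with probability proportional to exp(loglik)."""
    mx = max(logliks)
    w = [math.exp(l - mx) for l in logliks]   # stabilized importance weights
    tot = sum(w)
    probs = [wi / tot for wi in w]
    out = []
    for _ in range(k):
        u, c = rng.random(), 0.0
        pick = draws[-1]                      # guard against rounding at c ~ 1
        for v, p in zip(draws, probs):
            c += p
            if u <= c:
                pick = v
                break
        out.append(pick)
    return out
```

In the DLM setting, `draws` would be candidate hyper-parameter values and `logliks` their marginal log-likelihoods; the resampled values approximate the hyper-parameter posterior.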

12.
Even though integer-valued time series are common in practice, methods for their analysis have been developed only in the recent past. Several models for stationary processes with discrete marginal distributions have been proposed in the literature. Such processes assume the parameters of the model to remain constant throughout the time period. However, this need not be true in practice. In this paper, we introduce non-stationary integer-valued autoregressive (INAR) models with structural breaks to model situations where the parameters of the INAR process do not remain constant over time. Such models are useful for modelling count data time series with structural breaks. Bayesian and Markov chain Monte Carlo (MCMC) procedures for the estimation of the parameters and break points of such models are discussed. We illustrate the model and estimation procedure with the help of a simulation study. The proposed model is applied to two real biometrical data sets.
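An INAR(1) process is defined by binomial thinning, X_t = alpha ∘ X_{t-1} + eps_t, where each of the X_{t-1} previous counts survives with probability alpha and eps_t is a count-valued innovation. A simulation sketch with a single structural break (Poisson innovations assumed; the function name and regime parameterization are this sketch's assumptions, not the paper's):

```python
import math

def simulate_inar1(n, alphas, lams, tau, rng):
    """Simulate an INAR(1) count series with a structural break at time tau:
    X_t = alpha o X_{t-1} + eps_t, with binomial thinning 'o' and Poisson(lam)
    innovations. Regime (alphas[0], lams[0]) holds before tau, and
    (alphas[1], lams[1]) from tau onwards."""
    def poisson(lam):
        # Knuth's inversion-by-multiplication method
        limit, k, p = math.exp(-lam), 0, 1.0
        while True:
            p *= rng.random()
            if p <= limit:
                return k
            k += 1
    x = [poisson(lams[0])]
    for t in range(1, n):
        a, lam = (alphas[0], lams[0]) if t < tau else (alphas[1], lams[1])
        survivors = sum(1 for _ in range(x[-1]) if rng.random() < a)  # thinning
        x.append(survivors + poisson(lam))
    return x
```

Since the stationary mean of a Poisson INAR(1) is lam / (1 - alpha), a break in lam shows up as a level shift in the simulated counts, which is what the break-point estimation targets.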

13.
The empirical Bayes (EB) method is commonly used by transportation safety analysts for conducting different types of safety analyses, such as before–after studies and hotspot analyses. To date, most implementations of the EB method have been applied using a negative binomial (NB) model, as it can easily accommodate the overdispersion commonly observed in crash data. Recent studies have shown that a generalized finite mixture of NB models with K mixture components (GFMNB-K) can also be used to model crash data subject to overdispersion and generally offers better statistical performance than the traditional NB model. So far, however, no one has shown how the EB method could be used with finite mixtures of NB models. The main objective of this study is therefore to use a GFMNB-K model in the calculation of EB estimates. Specifically, GFMNB-K models with varying weight parameters are developed to analyze crash data from Indiana and Texas. The main finding is that the rankings produced by the NB and GFMNB-2 models for hotspot identification are often quite different, and this was especially noticeable with the Texas dataset. Finally, a simulation study designed to examine which model formulation better identifies hotspots is recommended as future research.

14.
Interval-censored data arise when a failure time, say T, cannot be observed directly but can only be determined to lie in an interval obtained from a series of inspection times. The frequentist approach for analysing interval-censored data has been developed for some time now. In biological, medical and reliability studies it is very common, owing to the unavailability of software, to simplify the interval-censoring structure of the data into the more standard right-censoring situation by imputing the midpoints of the censoring intervals. In this paper, we apply the Bayesian approach by employing Lindley's (1980) and Tierney and Kadane's (1986) numerical approximation procedures when the survival data under consideration are interval-censored. The Bayesian approach to interval-censored data has barely been discussed in the literature. The essence of this study is to explore and promote Bayesian methods when the survival data being analysed are interval-censored. We consider only a parametric approach, assuming that the survival data follow a log-logistic distribution. We illustrate the proposed methods with two real data sets. A simulation study is also carried out to compare the performance of the methods.
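The midpoint-imputation shortcut mentioned above is trivial to state in code: each censoring interval (L, R] is replaced by its midpoint and treated as an exact event time, while right-censored observations keep their last inspection time. A minimal sketch (the function name and the (lo, hi)-tuple convention with `None` for right censoring are this sketch's assumptions):

```python
def midpoint_impute(intervals):
    """Convert interval-censored data to right-censored form: replace each
    finite interval (lo, hi] by its midpoint with event indicator 1; keep
    right-censored observations (hi is None) at lo with indicator 0."""
    times, events = [], []
    for lo, hi in intervals:
        if hi is None:                 # right-censored: event not yet seen
            times.append(lo)
            events.append(0)
        else:
            times.append((lo + hi) / 2.0)
            events.append(1)
    return times, events
```

The paper's point is that a fully Bayesian treatment of the original intervals avoids the information loss this imputation introduces.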

15.
The Polya posterior method is a noninformative Bayesian estimation method for finite-population sampling: from the observed sample, a series of simulated populations is constructed, and statistical inference is then carried out on them. We study some characteristics of Polya posterior estimation by statistical simulation and compare it with the bootstrap method. The simulation results show that the Polya posterior method estimates the population mean well, and as the sample size increases, the gap between the estimate and the true value shrinks. The confidence intervals constructed by the Polya posterior method are comparatively short and cover the true value well.
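The simulated-population step can be sketched directly as Polya-urn resampling: starting from the observed sample, each new population unit is drawn uniformly from the units generated so far and returned together with a copy, until the population size is reached. One draw of the population mean (the function name is this sketch's assumption):

```python
def polya_posterior_mean(sample, total_n, rng):
    """One draw from the Polya posterior of the finite-population mean:
    extend the observed sample to a simulated population of size total_n by
    Polya-urn resampling (each new unit is a uniform draw from the urn of
    units generated so far, which is then enlarged by that copy)."""
    urn = list(sample)
    while len(urn) < total_n:
        urn.append(rng.choice(urn))
    return sum(urn) / total_n
```

Repeating this many times yields a sample from the posterior distribution of the population mean; quantiles of that sample give the interval estimates discussed in the abstract.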

16.
In this article we present a simple bootstrap method for time series. The proposed method is model-free, which enables us to avoid situations where the bootstrap samples may contain impossible values due to resampling from the residuals. The method is easy to implement and can be applied to stationary and nonstationary time series. The simulation results and an application to real time series data show that the method works very well.

17.
In statistics, Fourier series have been used extensively in areas such as time series and stochastic processes. These series, however, have to a large degree been neglected with regard to their use in statistical distribution theory. This omission appears quite striking when one considers that, after the elementary functions, the trigonometric functions are the most important functions in applied mathematics. In this paper a procedure is developed for utilizing Fourier series to represent distribution functions of finite-range random variables as Fourier series whose coefficients are easily expressible (using Chebyshev polynomials) in terms of the moments of the distribution. This method allows the evaluation of probabilities for a wide class of distributions. It is applied to the …

18.
The Cook's distance formula is widely used for outlier diagnosis in regression models, but the sample variance in the formula is sensitive to outliers, so the formula lacks robustness and the diagnostic results are unsatisfactory. To address this, this article takes the median absolute deviation as a robust estimator of the sample standard deviation, obtains from it a robust estimator of the sample variance, and thereby constructs a robust Cook's distance formula. Drawing on the outlier-diagnosis theory for regression models based on the traditional Cook's distance, the robust Cook's distance is applied to outlier diagnosis in time series, extending the domain of application of the traditional formula. Simulations with ARMA(1,1) series and a financial time series, using sample sizes of 50, 100 and 200 and contamination rates of 0, 1%, 5% and 10%, show that: (1) with no contamination, both the robust and the conventional Cook's distance achieve a 100% correct-diagnosis rate, with no false positives; (2) as the sample size and contamination rate increase simultaneously, the correct-diagnosis rate of the conventional Cook's distance drops sharply, losing essentially all diagnostic power once the contamination rate reaches 5% or more, while the robust Cook's distance maintains high diagnostic power. The robust Cook's distance can be applied to outlier diagnosis in time series as well as in regression analysis.
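The idea can be sketched for simple linear regression: compute the usual Cook's distance, but replace the residual variance estimate with the MAD-based robust estimate (1.4826 × MAD)², which is consistent for the standard deviation under normality. This is an illustrative sketch, not the article's exact formula; the function name and the simple-regression restriction are assumptions of the sketch (MAD must be nonzero for the division to be defined):

```python
import statistics

def robust_cooks_distance(x, y):
    """Cook's distance for simple linear regression y ~ a + b*x, with the
    usual residual variance replaced by the MAD-based robust estimate
    (1.4826 * MAD)^2. Returns one distance per observation."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    b = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / sxx
    a = my - b * mx
    resid = [yi - (a + b * xi) for xi, yi in zip(x, y)]
    med = statistics.median(resid)
    mad = statistics.median([abs(r - med) for r in resid])
    s2 = (1.4826 * mad) ** 2          # robust residual variance
    p = 2                             # number of regression coefficients
    ds = []
    for xi, ri in zip(x, resid):
        h = 1 / n + (xi - mx) ** 2 / sxx          # leverage of observation i
        ds.append(ri ** 2 / (p * s2) * h / (1 - h) ** 2)
    return ds
```

Because the MAD is barely moved by a few outliers, a contaminated point gets a large distance instead of deflating the variance estimate and masking itself, which matches the behavior reported in the abstract.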

19.
This article makes the following contributions. First, it develops a new criterion for identifying whether or not a particular time series variable is a common factor in the conventional approximate factor model. Second, by modeling observed factors as a set of potential factors to be identified, it shows how to easily pin down the factor without performing a large number of estimations. This allows the researcher to check whether each individual in the panel is the underlying common factor and, from there, to identify which individuals best represent the factor space by using a new clustering mechanism. Asymptotically, the developed procedure correctly identifies the factor as N and T jointly approach infinity. The procedure is shown to be quite effective in finite samples by means of Monte Carlo simulation. The procedure is then applied to an empirical example, demonstrating that the newly developed method identifies the unknown common factors accurately.

20.
This paper presents a method for estimating likelihood ratios for stochastic compartment models when only the times of removals from a population are observed. The technique operates by embedding the models in a composite model parameterised by an integer k which identifies a switching time at which the dynamics change from one model to the other. Likelihood ratios can then be estimated from the posterior density of k using Markov chain methods. The techniques are illustrated by a simulation study involving an immigration-death model and validated using analytic results derived for this case. They are also applied to compare the fit of stochastic epidemic models to historical data on a smallpox epidemic. In addition to estimating likelihood ratios, the method can be used for direct estimation of likelihoods by selecting one of the models in the comparison to have a known likelihood for the observations. Some general properties of the likelihoods typically arising in this scenario, and their implications for inference, are illustrated and discussed.

