Similar Literature
20 similar articles found.
1.
In this article, we develop a series estimation method for unknown time-inhomogeneous functionals of Lévy processes involved in econometric time series models. To obtain an asymptotic distribution for the proposed estimators, we establish a general asymptotic theory for partial sums of bivariate functionals of time and nonstationary variables. These results show that the proposed estimators converge to quite different random variables in different situations. In addition, the rates of convergence depend on various factors rather than just the sample size. Simulations are provided to evaluate the finite-sample performance of the proposed model and estimation method.

2.
Time series arising in practice often have an inherently irregular sampling structure or missing values, which can arise, for example, from a faulty measuring device or the complex time-dependent nature of the phenomenon. Spectral decomposition of time series is a traditionally useful tool for analysing data variability. However, existing methods for spectral estimation often assume a regularly-sampled time series, or require modifications to cope with irregular or ‘gappy’ data. Additionally, many techniques assume that the time series is stationary, which in the majority of cases is demonstrably not appropriate. This article addresses the spectral estimation of a non-stationary time series sampled with missing data. The time series is modelled as a locally stationary wavelet process in the sense introduced by Nason et al. (J. R. Stat. Soc. B 62(2):271–292, 2000), and its realization is assumed to feature missing observations. We propose an estimator (the periodogram) for the process wavelet spectrum which copes with the missing data whilst relaxing the strong assumption of stationarity. At the centre of our construction are second-generation wavelets built by means of the lifting scheme (Sweldens, Wavelet Applications in Signal and Image Processing III, Proc. SPIE, vol. 2569, pp. 68–79, 1995), designed to cope with irregular data. We investigate the theoretical properties of the proposed periodogram and show that it can be smoothed to produce a bias-corrected spectral estimate by adopting a penalized least squares criterion. We demonstrate our method with real data and simulated examples.

3.
Simulation of stationary Gaussian vector fields
Chan, G. and Wood, A. T. A., Statistics and Computing, 1999, 9(4):265–268
In earlier work we described a circulant embedding approach for simulating scalar-valued stationary Gaussian random fields on a finite rectangular grid, with the covariance function prescribed. Here, we explain how the circulant embedding approach can be used to simulate Gaussian vector fields. As in the scalar case, the simulation procedure is theoretically exact if a certain non-negativity condition is satisfied. In the vector setting, this exactness condition takes the form of a nonnegative definiteness condition on a certain set of Hermitian matrices. The main computational tool used is the Fast Fourier Transform. Consequently, when implemented appropriately, the procedure is highly efficient, in terms of both CPU time and storage.
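The scalar, one-dimensional version of the construction is compact enough to sketch. The following is a minimal NumPy illustration of circulant embedding, not the authors' implementation; the covariance function, grid size, and tolerance are illustrative assumptions, and the vector-field extension of the paper replaces the scalar eigenvalue check by a nonnegative definiteness check on Hermitian matrices.

```python
import numpy as np

def circulant_embedding_sample(cov, n, rng=None):
    """Minimal 1-D, scalar sketch of circulant embedding.

    cov : callable giving the stationary covariance c(k) at integer lags.
    Returns two independent N(0, C) samples of length n, where C is the
    n x n Toeplitz matrix with entries c(|i - j|). The draw is exact
    whenever all eigenvalues of the circulant embedding are nonnegative.
    """
    rng = np.random.default_rng(rng)
    m = 2 * (n - 1)
    # symmetric first row of the circulant matrix that embeds C
    row = np.array([cov(k) for k in range(n)] +
                   [cov(k) for k in range(n - 2, 0, -1)])
    lam = np.fft.fft(row).real            # circulant eigenvalues via FFT
    if lam.min() < -1e-10:
        raise ValueError("embedding not nonnegative definite; pad further")
    lam = np.clip(lam, 0.0, None)
    xi = np.sqrt(lam / m) * (rng.standard_normal(m)
                             + 1j * rng.standard_normal(m))
    x = np.fft.fft(xi)                    # one FFT yields both samples
    return x.real[:n], x.imag[:n]

# e.g. an exponential covariance c(k) = exp(-|k|/10)
y1, y2 = circulant_embedding_sample(lambda k: np.exp(-k / 10), 512, rng=1)
```

Because the sampling reduces to FFTs of length 2(n − 1), cost and storage grow as O(n log n) rather than the O(n³) of a Cholesky factorization.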

4.
This article proposes an algorithm to generate vector moving average (VMA) processes with a variable spectrum having a fixed condition number across frequencies. The method is based on the theory of the multivariate linear spectrum for VMA processes and is developed as a two-step procedure. Specific examples are provided, and the precision of the generated time series is discussed. Such an algorithm is a useful tool for assessing the performance of selected multivariate spectral estimators, and it turns out to be particularly appropriate in the Kolmogorov asymptotic estimation framework.

5.
6.
A common practice in time series analysis is to fit a centered model to the mean-corrected data set. For stationary autoregressive moving-average (ARMA) processes, as far as parameter estimation is concerned, fitting an ARMA model without intercepts to the mean-corrected series is asymptotically equivalent to fitting an ARMA model with intercepts to the observed series. We show that, for least squares estimation of the parameters of periodic ARMA models, the second approach can be arbitrarily more efficient than the mean-corrected counterpart. This property is illustrated by means of a periodic first-order autoregressive model. The asymptotic variance of the estimators under both approaches is derived. Moreover, empirical experiments based on simulations investigate the finite sample properties of the estimators.
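As a hypothetical illustration of the comparison (the period-2 model, parameter values, and ordinary least squares fitting below are assumptions for the sketch, not the article's exact design), one can simulate a periodic AR(1) and compare per-season mean squared errors of the two strategies:

```python
import numpy as np

rng = np.random.default_rng(0)
S, phi, mu = 2, np.array([0.9, -0.5]), np.array([1.0, 3.0])

def simulate_par1(n, burn=200):
    """Periodic AR(1): X_t = mu_s + phi_s X_{t-1} + e_t, season s = t mod S."""
    x = np.zeros(n + burn)
    for t in range(1, n + burn):
        s = t % S
        x[t] = mu[s] + phi[s] * x[t - 1] + rng.standard_normal()
    return x[burn:]

def fit_phi(x, mean_correct):
    """Season-by-season least squares for phi_s: either with seasonal
    intercepts (mean_correct=False) or, after subtracting seasonal means,
    without intercepts (mean_correct=True)."""
    seasons = np.arange(len(x)) % S
    if mean_correct:
        x = x - np.array([x[seasons == s].mean() for s in range(S)])[seasons]
    est = np.empty(S)
    for s in range(S):
        t = np.nonzero(seasons == s)[0]
        t = t[t >= 1]
        A = (x[t - 1, None] if mean_correct
             else np.column_stack([np.ones(len(t)), x[t - 1]]))
        est[s] = np.linalg.lstsq(A, x[t], rcond=None)[0][-1]
    return est

reps = [simulate_par1(400) for _ in range(500)]
mse_int = np.mean([(fit_phi(x, False) - phi) ** 2 for x in reps], axis=0)
mse_mc = np.mean([(fit_phi(x, True) - phi) ** 2 for x in reps], axis=0)
print(mse_int, mse_mc)   # per-season MSEs of the two approaches
```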

7.
The correlation curve is a measure of local correlation between two random variables X and Y at a point X = x in the support of X. This article studies this local measure using the theory of time series for bivariate and univariate stationary stochastic processes. We propose local polynomial estimators for time series and verify their consistency both theoretically and through simulations, using different series lengths, bandwidths, and kernels, as well as different lags and model configurations. Applications are also presented using the daily returns of two financial series.

8.
The correct and efficient estimation of the memory parameters of a stationary Gaussian process is an important issue, since otherwise forecasts based on the resulting time series model would be misleading. On the other hand, if the memory parameters are suspected to fall in a smaller subspace through some hypothesis restrictions, it becomes a hard decision whether to use estimators based on the restricted space or unrestricted estimators over the full parameter space. In this article, we propose James-Stein-type estimators of the memory parameters of a stationary Gaussian time series process, which can efficiently incorporate the hypothetical restrictions. We show theoretically that the proposed estimators are more efficient than the usual unrestricted maximum likelihood estimators over the entire parameter space.
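The abstract does not give the estimator's exact form, but James-Stein-type proposals of this kind typically combine the restricted and unrestricted estimates through a data-driven shrinkage factor. A generic positive-part sketch (an assumption for illustration, not the authors' estimator) is:

```python
import numpy as np

def positive_part_stein(theta_u, theta_r, wald, p):
    """Generic positive-part James-Stein-type combination of an
    unrestricted estimate theta_u and a restricted estimate theta_r,
    where `wald` is a Wald statistic for the restriction and p >= 3
    is the parameter dimension:

        theta_js = theta_r + max(0, 1 - (p - 2)/wald) * (theta_u - theta_r)

    Large wald (restriction implausible) keeps theta_u; small wald
    shrinks toward theta_r.
    """
    shrink = max(0.0, 1.0 - (p - 2) / wald)
    return theta_r + shrink * (np.asarray(theta_u) - np.asarray(theta_r))
```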

9.
This paper is motivated by the pioneering work of Emanuel Parzen, wherein he advanced the estimation of (spectral) densities via kernel smoothing and established the role of reproducing kernel Hilbert spaces (RKHS) in the field of time series analysis. Here, we consider analysis of power (ANOPOW) for replicated time series collected in an experimental design where the main goals are to estimate, and to detect differences among, group spectra. To accomplish these goals, we obtain smooth estimators of the group spectra by assuming that each spectral density lies in some RKHS; we then apply penalized least squares in a smoothing spline ANOPOW. For inference, we obtain simultaneous confidence intervals for the estimated group spectra via bootstrapping.

10.
Asymptotic theory for the Cox semi-Markov illness-death model
Irreversible illness-death models are used to model disease processes and, in cancer studies, to model disease recovery. In most applications, a Markov assumption is imposed on the multistate model. When there are covariates, a Cox (1972, J Roy Stat Soc Ser B 34:187–220) model is used to model their effect on each transition intensity. Andersen et al. (2000, Stat Med 19:587–599) proposed a Cox semi-Markov model for this problem. In this paper, we study the large sample theory for that model and provide the asymptotic variances of various probabilities of interest. A Monte Carlo study is conducted to investigate the robustness and efficiency of the Markov and semi-Markov estimators. A real data example from the PROVA (1991, Hepatology 14:1016–1024) trial is used to illustrate the theory.

11.
The cumulative incidence function provides intuitive summary information about competing risks data. Via a mixture decomposition of this function, Chang and Wang (Statist Sinica 19:391–408, 2009) study how covariates affect the cumulative incidence probability of a particular failure type at a chosen time point. Without specifying the corresponding failure time distribution, they proposed two estimators and derived their large sample properties. The first estimator utilizes weighting to adjust for the censoring bias, and can be considered an extension of Fine's method (J R Stat Soc Ser B 61:817–830, 1999). The second uses imputation and extends the idea of Wang (J R Stat Soc Ser B 65:921–935, 2003) from a nonparametric setting to the current regression framework. In this article, when covariates take only discrete values, we extend both approaches of Chang and Wang (Statist Sinica 19:391–408, 2009) by allowing left truncation. Large sample properties of the proposed estimators are derived, and their finite sample performance is investigated through a simulation study. We also apply our methods to heart transplant survival data.

12.
GARCH models capture most of the stylized facts of financial time series and have been widely used to analyse discrete-time financial series. In recent years, continuous-time models built on discrete GARCH models have also been proposed to deal with unequally spaced observations, such as the COGARCH model driven by a Lévy process. In this paper, we propose to use the data cloning methodology to obtain estimators of GARCH and COGARCH model parameters. Data cloning uses a Bayesian approach to obtain approximate maximum likelihood estimators, avoiding direct numerical maximization of the pseudo-likelihood function. After a simulation study for both GARCH and COGARCH models using data cloning, we apply this technique to model the behaviour of some NASDAQ time series.
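For reference, here is a minimal sketch of the discrete-time GARCH(1,1) recursion and of the Gaussian pseudo-likelihood whose direct numerical maximization data cloning is designed to avoid; the parameter values and initialization are illustrative assumptions, and the data cloning step itself is not shown.

```python
import numpy as np

def simulate_garch11(n, omega=0.1, alpha=0.05, beta=0.9, rng=None):
    """Simulate r_t = sigma_t z_t with
    sigma_t^2 = omega + alpha r_{t-1}^2 + beta sigma_{t-1}^2
    (requires alpha + beta < 1 for a finite unconditional variance)."""
    rng = np.random.default_rng(rng)
    r, sig2 = np.empty(n), np.empty(n)
    sig2[0] = omega / (1 - alpha - beta)     # unconditional variance
    r[0] = np.sqrt(sig2[0]) * rng.standard_normal()
    for t in range(1, n):
        sig2[t] = omega + alpha * r[t - 1] ** 2 + beta * sig2[t - 1]
        r[t] = np.sqrt(sig2[t]) * rng.standard_normal()
    return r

def neg_pseudo_loglik(params, r):
    """Gaussian pseudo-likelihood (up to constants) for GARCH(1,1)."""
    omega, alpha, beta = params
    sig2 = np.empty(len(r))
    sig2[0] = r.var()                        # crude initialization
    for t in range(1, len(r)):
        sig2[t] = omega + alpha * r[t - 1] ** 2 + beta * sig2[t - 1]
    return 0.5 * np.sum(np.log(sig2) + r ** 2 / sig2)
```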

13.
The approximate likelihood function introduced by Whittle has been used to estimate the spectral density and certain parameters of a variety of time series models. In this note we attempt to empirically quantify the loss of efficiency of Whittle's method in nonstandard settings. A recently developed representation of some first-order non-Gaussian stationary autoregressive processes allows a direct comparison of the true likelihood function with Whittle's. The conclusion is that Whittle's likelihood can produce unreliable estimates in the non-Gaussian case, even for moderate sample sizes. Moreover, for small samples with high autocorrelation, Whittle's approximation is not efficient even in the Gaussian case. While these facts are known to some extent, the present study sheds more light on the degree of efficiency loss incurred by using Whittle's likelihood in both Gaussian and non-Gaussian cases.
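A minimal sketch of Whittle's approximation for the Gaussian AR(1) case discussed here; the simulated series, bandwidth-free setup, and profiling of the innovation variance are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def whittle_neg_loglik(phi, x):
    """Whittle's approximate negative log-likelihood for an AR(1):
    sum over Fourier frequencies of log f(lam_j) + I(lam_j)/f(lam_j),
    with the innovation variance sigma^2 profiled out."""
    n = len(x)
    j = np.arange(1, (n - 1) // 2 + 1)
    lam = 2 * np.pi * j / n
    I = np.abs(np.fft.fft(x - x.mean())[j]) ** 2 / (2 * np.pi * n)
    f = 1.0 / (2 * np.pi * (1 - 2 * phi * np.cos(lam) + phi ** 2))
    sigma2 = np.mean(I / f)                  # profiled innovation variance
    return np.sum(np.log(sigma2 * f) + I / (sigma2 * f))

rng = np.random.default_rng(0)
e = rng.standard_normal(300)
x = np.empty(300)
x[0] = e[0]
for t in range(1, 300):                      # simulate AR(1) with phi = 0.8
    x[t] = 0.8 * x[t - 1] + e[t]
phi_hat = minimize_scalar(whittle_neg_loglik, args=(x,),
                          bounds=(-0.99, 0.99), method="bounded").x
```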

14.
The threshold diffusion model assumes a piecewise linear drift term and a piecewise smooth diffusion term, which constitutes a rich model for analyzing nonlinear continuous-time processes. We consider the problem of testing for threshold nonlinearity in the drift term. We do this by developing a quasi-likelihood test derived under the working assumption of a constant diffusion term, which circumvents the problem of the generally unknown functional form of the diffusion term. The test is first developed for a single threshold at which the drift term breaks into two linear functions. We show that under some mild regularity conditions, the asymptotic null distribution of the proposed test statistic is given by the distribution of a certain functional of a centered Gaussian process. We develop a computationally efficient method for calibrating the p-value of the test statistic by bootstrapping its asymptotic null distribution. The local power function is also derived, which establishes the consistency of the proposed test. The test is then extended to testing for multiple thresholds. We demonstrate the efficacy of the proposed test by simulations. Using the proposed test, we examine the evidence for nonlinearity in the term structure of a long time series of U.S. interest rates.

15.
This article deals with the efficiency of fractional integration parameter estimators. The study is based on Monte Carlo experiments involving simulated stochastic processes with integration orders in the open interval (-1, 1). The evaluated estimation methods were classified into two groups: heuristic methods and semiparametric/maximum likelihood (ML) methods. The study revealed that the comparative efficiency of the estimators, measured by mean squared error, depends on whether the series is stationary or non-stationary and persistent or anti-persistent. The ML estimator was superior for stationary persistent processes; the wavelet spectrum-based estimators were better for non-stationary, mean-reverting, and invertible anti-persistent processes; and the weighted periodogram-based estimator was superior for non-invertible anti-persistent processes.
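As one concrete example of the semiparametric group, the Geweke–Porter-Hudak (GPH) log-periodogram regression estimator of the integration order d can be sketched as follows; the bandwidth choice m = √n is a common convention assumed here for illustration.

```python
import numpy as np

def gph_estimate(x, m=None):
    """GPH log-periodogram regression: regress log I(lam_j) on
    log(4 sin^2(lam_j / 2)) over the first m Fourier frequencies;
    the slope is -d."""
    n = len(x)
    if m is None:
        m = int(n ** 0.5)                    # common bandwidth choice
    j = np.arange(1, m + 1)
    lam = 2 * np.pi * j / n
    I = np.abs(np.fft.fft(x - x.mean())[j]) ** 2 / (2 * np.pi * n)
    X = np.log(4 * np.sin(lam / 2) ** 2)     # regressor
    Y = np.log(I)
    Xc = X - X.mean()
    return -np.sum(Xc * (Y - Y.mean())) / np.sum(Xc ** 2)   # d_hat
```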

16.
In a breakthrough paper, Benjamini and Hochberg (J Roy Stat Soc Ser B 57:289–300, 1995) proposed a new error measure for multiple testing, the false discovery rate (FDR), and developed a distribution-free procedure to control it under independence among the test statistics. In this paper we argue, by extensive simulation and theoretical considerations, that the assumption of independence is not needed. Along the lines of (Ann Stat 32:1035–1061, 2004b), we moreover provide a more powerful method that exploits an estimator of the number of false nulls among the tests. We propose a whole family of iterative estimators that prove robust under both dependence and independence between the test statistics. These estimators can also be used to improve classical multiple testing procedures and, in general, to estimate the weight of a known component in a mixture distribution. The innovations are illustrated by simulations.
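For reference, the original Benjamini–Hochberg step-up procedure that the paper builds on is short enough to state in code; this is a plain NumPy sketch, and the target level q is illustrative.

```python
import numpy as np

def benjamini_hochberg(pvals, q=0.05):
    """Benjamini-Hochberg step-up procedure: with ordered p-values
    p_(1) <= ... <= p_(m), reject the hypotheses with the k smallest
    p-values, where k = max{ i : p_(i) <= i*q/m }."""
    p = np.asarray(pvals)
    m = len(p)
    order = np.argsort(p)
    thresholds = np.arange(1, m + 1) * q / m
    below = p[order] <= thresholds
    reject = np.zeros(m, dtype=bool)
    if below.any():
        k = np.max(np.nonzero(below)[0])     # largest i with p_(i) <= i*q/m
        reject[order[:k + 1]] = True
    return reject
```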

17.
A frequency domain bootstrap (FDB) is a common technique for applying Efron's independent and identically distributed resampling technique (Efron, 1979) to periodogram ordinates – especially normalized periodogram ordinates – by using spectral density estimates. The FDB method is applicable to several classes of statistics, such as estimators of the normalized spectral mean, the autocorrelation (but not the autocovariance), the normalized spectral density function, and Whittle parameters. While the FDB method has been extensively studied for short-range dependent processes, there is a dearth of research on its use with long-range dependent processes. We therefore propose an FDB methodology for ratio statistics under long-range dependence, using semi- and nonparametric spectral density estimates as a normalizing factor. It is shown that the FDB approximation yields valid distribution estimation for a broad class of stationary, long-range (or short-range) dependent linear processes, without any stringent assumptions on the distribution of the underlying process. The results of a large simulation study show that the FDB approximation using a semi- or nonparametric spectral density estimator is often robust across values of the long-memory parameter, which reflects the magnitude of dependence. We apply the proposed procedure to two data examples.
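One common FDB variant is sketched below under illustrative assumptions: a crude moving-average spectral smoother and i.i.d. resampling of rescaled periodogram ratios, not the authors' exact semi- or nonparametric estimator.

```python
import numpy as np

def fdb_resample(x, bandwidth=0.1, B=500, rng=None):
    """Frequency domain bootstrap sketch: divide the periodogram by a
    smoothed spectral estimate, resample the rescaled ratios i.i.d.,
    and rebuild bootstrap periodograms I*_j = f_hat(lam_j) * eps*_j."""
    rng = np.random.default_rng(rng)
    n = len(x)
    j = np.arange(1, (n - 1) // 2 + 1)
    lam = 2 * np.pi * j / n
    I = np.abs(np.fft.fft(x - x.mean())[j]) ** 2 / (2 * np.pi * n)
    # nonparametric spectral estimate: simple moving-average smoother
    h = max(1, int(bandwidth * len(j)))
    kernel = np.ones(2 * h + 1) / (2 * h + 1)
    f_hat = np.convolve(I, kernel, mode="same")
    eps = I / f_hat
    eps /= eps.mean()                        # rescale residuals to mean one
    boot = [f_hat * rng.choice(eps, size=len(j), replace=True)
            for _ in range(B)]
    return lam, I, f_hat, np.array(boot)
```

A statistic of interest (e.g. a ratio statistic) is then recomputed from each bootstrap periodogram to approximate its sampling distribution.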

18.
We study non-Markov multistage models under dependent censoring with regard to estimation of stage occupation probabilities. The individual transition and censoring mechanisms are linked through covariate processes that affect both the transition intensities and the censoring hazard for the corresponding subjects. To adjust for the dependent censoring, an additive hazard regression model is applied to the censoring times, and all observed counting and “at risk” processes are subsequently given an inverse probability of censoring weighted form. We examine the bias of the Datta–Satten and Aalen–Johansen estimators of stage occupation probability, and also consider the variability of these estimators by studying their estimated standard errors and mean squared errors. Results from different simulation studies of frailty models indicate that the Datta–Satten estimator is approximately unbiased, whereas the Aalen–Johansen estimator either under- or overestimates the stage occupation probability due to the dependent nature of the censoring process. However, in our simulations, the mean squared error of the latter estimator tends to be slightly smaller than that of the former. Studies on the development of nephropathy among diabetics and on blood platelet recovery among bone marrow transplant patients demonstrate how the two estimation methods work in practice. Our analyses show that the Datta–Satten estimator performs well in estimating stage occupation probability, but that the censoring mechanism has to be quite selective before the deviation from the Aalen–Johansen estimator is of practical importance. N. Gunnes was supported by a grant from the Norwegian Cancer Society.

19.
The present study deals with estimation of the parameters of a k-component load-sharing parallel system model in which each component's failure time distribution is assumed to be geometric. The maximum likelihood estimates of the load-share parameters, with their standard errors, are obtained. (1 − γ)100% joint, Bonferroni simultaneous, and two bootstrap confidence intervals for the parameters are constructed. Further, recognizing that life testing experiments are time consuming, it seems realistic to treat the load-share parameters as random variables. Therefore, Bayes estimates, along with their standard errors, are obtained by assuming Jeffreys' invariant and gamma priors for the unknown parameters. Since the Bayes estimators cannot be found in closed form, Tierney and Kadane's approximation method is used to compute the Bayes estimates and standard errors of the parameters. Markov chain Monte Carlo techniques such as the Gibbs sampler are also used to obtain Bayes estimates and highest posterior density credible intervals of the load-share parameters, with the Metropolis–Hastings algorithm used to generate samples from the posterior distributions of the unknown parameters.

20.
Quasi-life tables, in which the data arise from many concurrent, independent, discrete-time renewal processes, were defined by Baxter (1994, Biometrika 81:567–577), who outlined some methods for estimation. The processes are not observed individually; only the total numbers of renewals at each time point are observed. Crowder and Stephens (2003, Lifetime Data Anal 9:345–355) implemented a formal estimating-equation approach that invokes large-sample theory. However, these asymptotic methods fail to yield sensible estimates for smaller samples. In this paper, we implement a Bayesian analysis based on MCMC computation that works equally well for large and small sample sizes. We give three simulated examples, studying the Bayesian results, the impact of changing the prior specification, and the empirical properties of the Bayesian estimators of the lifetime distribution parameters. We also study the Baxter (1994, Biometrika 81:567–577) data and uncover structure that has not been commented upon previously.
