Similar Literature
20 similar records found (search time: 31 ms)
1.
A novel framework is proposed for the estimation of multiple sinusoids from irregularly sampled time series. This spectral analysis problem is addressed as an under-determined inverse problem, where the spectrum is discretized on an arbitrarily thin frequency grid. As we focus on line spectra estimation, the solution must be sparse, i.e. the amplitude of the spectrum must be zero almost everywhere. Such prior information is taken into account within the Bayesian framework. Two models are used to account for the prior sparseness of the solution, namely a Laplace prior and a Bernoulli–Gaussian prior, associated with optimization and stochastic sampling algorithms, respectively. Such approaches are efficient alternatives to the usual sequential prewhitening methods, especially in the case of strong sampling aliases perturbing the Fourier spectrum. Both methods should be tested intensively on real data sets by physicists.
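Under the Laplace prior, the MAP estimate reduces to an L1-penalized least-squares problem on the frequency grid. The following is a minimal numerical sketch of that idea only: the grid, the penalty weight, and the plain ISTA solver are illustrative assumptions, not the algorithms of the paper.

```python
import numpy as np

rng = np.random.default_rng(5)
# Irregular sampling times and a single noisy sinusoid at f0 = 0.1 (hypothetical setup)
t = np.sort(rng.uniform(0, 100, size=120))
f0 = 0.1
y = np.cos(2 * np.pi * f0 * t) + 0.05 * rng.normal(size=t.size)

# Dictionary of cosines and sines on a fine frequency grid
freqs = np.linspace(0.01, 0.5, 200)
A = np.hstack([np.cos(2 * np.pi * np.outer(t, freqs)),
               np.sin(2 * np.pi * np.outer(t, freqs))])

# ISTA for the Laplace-prior MAP problem: min_x 0.5*||y - A x||^2 + lam*||x||_1
lam = 2.0
L = np.linalg.norm(A, 2) ** 2            # Lipschitz constant of the gradient
x = np.zeros(A.shape[1])
for _ in range(500):
    v = x - A.T @ (A @ x - y) / L        # gradient step
    x = np.sign(v) * np.maximum(np.abs(v) - lam / L, 0.0)  # soft threshold

amp = np.hypot(x[:200], x[200:])         # amplitude spectrum on the grid
print(round(freqs[np.argmax(amp)], 3))   # peak should fall near f0 = 0.1
```

The recovered amplitude spectrum peaks at the grid frequency nearest the true sinusoid even though the samples are irregular, which is the point of formulating the problem on an explicit grid rather than through the Fourier spectrum.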

2.
Time series regression models have been widely studied in the literature by several authors. However, statistical analysis of replicated time series regression models has received little attention. In this paper, we study the application of the quasi-least squares method to estimate the parameters in a replicated time series model with errors that follow an autoregressive process of order p. We also discuss two other established methods for estimating the parameters: maximum likelihood assuming normality and the Yule-Walker method. When the number of repeated measurements is bounded and the number of replications n goes to infinity, the regression and the autocorrelation parameters are consistent and asymptotically normal for all three methods of estimation. Basically, the three methods estimate the regression parameter efficiently and differ in how they estimate the autocorrelation. When p=2, we use simulations with normal data to show that the quasi-least squares estimate of the autocorrelation is markedly better than the Yule-Walker estimate, and is as good as the maximum likelihood estimate over almost the entire parameter space.
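For p = 2, the Yule-Walker estimate referred to above solves a 2x2 linear system in the first two sample autocorrelations. A toy sketch with simulated normal data (the coefficient values are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(0)
phi1, phi2 = 0.5, -0.3        # true AR(2) coefficients (hypothetical)
n = 5000
e = rng.normal(size=n + 200)
x = np.zeros(n + 200)
for t in range(2, n + 200):
    x[t] = phi1 * x[t - 1] + phi2 * x[t - 2] + e[t]
x = x[200:]                   # drop burn-in

def acf(x, k):
    """Sample autocorrelation at lag k."""
    xc = x - x.mean()
    return (xc[:-k] @ xc[k:]) / (xc @ xc)

r1, r2 = acf(x, 1), acf(x, 2)
# Yule-Walker equations for p = 2:  r1 = phi1 + phi2*r1,  r2 = phi1*r1 + phi2
phi_hat = np.linalg.solve(np.array([[1.0, r1], [r1, 1.0]]), np.array([r1, r2]))
print(np.round(phi_hat, 2))   # close to [0.5, -0.3]
```

The same two sample autocorrelations feed the quasi-least squares and maximum likelihood estimators compared in the paper; only the estimating equations differ.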

3.
Penalized regression methods have for quite some time been a popular choice for addressing challenges in high dimensional data analysis. Despite their popularity, their application to time series data has been limited. This paper concerns bridge penalized methods in a linear regression time series model. We first prove consistency, sparsity and asymptotic normality of bridge estimators under a general mixing model. Next, as a special case of mixing errors, we consider bridge regression with autoregressive and moving average (ARMA) error models and develop a computational algorithm that can simultaneously select important predictors and the orders of ARMA models. Simulated and real data examples demonstrate the effective performance of the proposed algorithm and the improvement over ordinary bridge regression.

4.
New approaches to prior specification and structuring in autoregressive time series models are introduced and developed. We focus on defining classes of prior distributions for parameters and latent variables related to latent components of an autoregressive model for an observed time series. These new priors naturally permit the incorporation of both qualitative and quantitative prior information about the number and relative importance of physically meaningful components that represent low frequency trends, quasi-periodic subprocesses and high frequency residual noise components of observed series. The class of priors also naturally incorporates uncertainty about model order and hence leads in posterior analysis to model order assessment and resulting posterior and predictive inferences that incorporate full uncertainties about model order as well as model parameters. Analysis also formally incorporates uncertainty and leads to inferences about unknown initial values of the time series, as it does for predictions of future values. Posterior analysis involves easily implemented iterative simulation methods, developed and described here. One motivating field of application is climatology, where the evaluation of latent structure, especially quasi-periodic structure, is of critical importance in connection with issues of global climatic variability. We explore the analysis of data from the southern oscillation index, one of several series that have been central in recent high-profile debates in the atmospheric sciences about apparent trends in climatic indicators.
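The latent components described above correspond to the roots of the fitted AR characteristic polynomial: each complex-conjugate pair of reciprocal roots r e^{±iω} defines a quasi-periodic component with period 2π/ω and persistence |r|. A sketch of that decomposition for a simulated AR(2) quasi-cycle (all numerical values are hypothetical, and a plain least-squares fit stands in for the paper's Bayesian analysis):

```python
import numpy as np

rng = np.random.default_rng(1)
r, omega = 0.95, 2 * np.pi / 12            # latent quasi-cycle: modulus and frequency
phi1, phi2 = 2 * r * np.cos(omega), -r**2  # implied AR(2) coefficients
n = 5000
x = np.zeros(n + 200)
e = rng.normal(size=n + 200)
for t in range(2, n + 200):
    x[t] = phi1 * x[t - 1] + phi2 * x[t - 2] + e[t]
x = x[200:]                                # drop burn-in

# Least-squares AR(2) fit
X = np.column_stack([x[1:-1], x[:-2]])
phi_hat = np.linalg.lstsq(X, x[2:], rcond=None)[0]

# Reciprocal roots of the AR polynomial: a complex pair <-> one quasi-periodic component
roots = np.roots([1.0, -phi_hat[0], -phi_hat[1]])
period = 2 * np.pi / abs(np.angle(roots[0]))
print(round(abs(roots[0]), 3), round(period, 2))  # modulus near 0.95, period near 12
```

In the Bayesian treatment the same root decomposition is applied draw by draw within the posterior simulation, which is how posterior uncertainty about the number and periods of quasi-periodic components is obtained.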

5.
We provide methods to robustly estimate the parameters of stationary ergodic short-memory time series models in the potential presence of additive low-frequency contamination. The types of contamination covered include level shifts (changes in mean) and monotone or smooth time trends, both of which have been shown to bias parameter estimates toward regions of persistence in a variety of contexts. The estimators presented here minimize trimmed frequency domain quasi-maximum likelihood (FDQML) objective functions without requiring specification of the low-frequency contaminating component. When proper sample size-dependent trimmings are used, the FDQML estimators are consistent and asymptotically normal, asymptotically eliminating the presence of any spurious persistence. These asymptotic results also hold in the absence of additive low-frequency contamination, enabling the practitioner to robustly estimate model parameters without prior knowledge of whether contamination is present. Popular time series models that fit into the framework of this article include autoregressive moving average (ARMA), stochastic volatility, generalized autoregressive conditional heteroscedasticity (GARCH), and autoregressive conditional heteroscedasticity (ARCH) models. We explore the finite sample properties of the trimmed FDQML estimators of the parameters of some of these models, providing practical guidance on trimming choice. Empirical estimation results suggest that a large portion of the apparent persistence in certain volatility time series may indeed be spurious. Supplementary materials for this article are available online.

6.
We consider a general class of prior distributions for nonparametric Bayesian estimation which uses finite random series with a random number of terms. A prior is constructed through distributions on the number of basis functions and the associated coefficients. We derive a general result on adaptive posterior contraction rates for all smoothness levels of the target function in the true model by constructing an appropriate ‘sieve’ and applying the general theory of posterior contraction rates. We apply this general result on several statistical problems such as density estimation, various nonparametric regressions, classification, spectral density estimation and functional regression. The prior can be viewed as an alternative to the commonly used Gaussian process prior, but properties of the posterior distribution can be analysed by relatively simpler techniques. An interesting approximation property of B‐spline basis expansion established in this paper allows a canonical choice of prior on coefficients in a random series and allows a simple computational approach without using Markov chain Monte Carlo methods. A simulation study is conducted to show that the accuracies of the Bayesian estimators based on the random series prior and the Gaussian process prior are comparable. We apply the method on Tecator data using functional regression models.

7.
Due to computational challenges and non-availability of conjugate prior distributions, Bayesian variable selection in quantile regression models is often a difficult task. In this paper, we address these two issues for quantile regression models. In particular, we develop an informative stochastic search variable selection (ISSVS) for quantile regression models that introduces an informative prior distribution. We adopt prior structures which incorporate historical data into the current data by quantifying them with a suitable prior distribution on the model parameters. This allows ISSVS to search more efficiently in the model space and choose the more likely models. In addition, a Gibbs sampler is derived to facilitate the computation of the posterior probabilities. A major advantage of ISSVS is that it avoids instability in the posterior estimates for the Gibbs sampler as well as convergence problems that may arise from choosing vague priors. Finally, the proposed methods are illustrated with both simulation and real data.

8.
We consider a generalized exponential (GEXP) model in the frequency domain for modeling seasonal long-memory time series. This model generalizes the fractional exponential (FEXP) model [Beran, J., 1993. Fitting long-memory models by generalized linear regression. Biometrika 80, 817–822] to allow the singularity in the spectral density occurring at an arbitrary frequency for modeling persistent seasonality and business cycles. Moreover, the short-memory structure of this model is characterized by the Bloomfield [1973. An exponential model for the spectrum of a scalar time series. Biometrika 60, 217–226] model, which has a fairly flexible semiparametric form. The proposed model includes fractionally integrated processes, Bloomfield models, FEXP models as well as GARMA models [Gray, H.L., Zhang, N.-F., Woodward, W.A., 1989. On generalized fractional processes. J. Time Ser. Anal. 10, 233–257] as special cases. We develop a simple regression method for estimating the seasonal long-memory parameter. The asymptotic bias and variance of the corresponding long-memory estimator are derived. Our methodology is applied to a sunspot data set and an Internet traffic data set for illustration.

9.
In a cocaine dependence treatment study, we use linear and nonlinear regression models to model posttreatment cocaine craving scores and first cocaine relapse time. A subset of the covariates are summary statistics derived from baseline daily cocaine use trajectories, such as baseline cocaine use frequency and average daily use amount. These summary statistics are subject to estimation error and can therefore cause biased estimators for the regression coefficients. Unlike classical measurement error problems, the error we encounter here is heteroscedastic with an unknown distribution, and there are no replicates for the error-prone variables or instrumental variables. We propose two robust methods to correct for the bias: a computationally efficient method-of-moments-based method for linear regression models and a subsampling extrapolation method that is generally applicable to both linear and nonlinear regression models. Simulations and an application to the cocaine dependence treatment data are used to illustrate the efficacy of the proposed methods. Asymptotic theory and variance estimation for the proposed subsampling extrapolation method and some additional simulation results are described in the online supplementary material.

10.
In the present study we have evaluated two competing methods for estimation of the impulse response weights used in the identification of transfer function models: a time domain method involving biased regression techniques and a frequency domain method utilizing a discrete Fourier transform of the cross-covariance system of the transfer function model. The algorithms were implemented on a VAX-8800 computer at the Computing Center at Åbo Akademi. The evaluation of the competing methods was carried out by simulations of different transfer function noise model structures. The models are essentially the same as those of Edlund, but we have used a far greater number of replications in the cases tested. Furthermore, we have used actually identified and estimated autoregressive integrated moving-average models of the residuals in the identification procedure of impulse response weights, in contrast with Edlund, who only used theoretical noise models in filtering the input and output series. After a short discussion of the underlying theory, we present the procedures and results of the empirical testing.
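As a simplified illustration of the time-domain approach, when the input is white noise the impulse response weights can be estimated directly by regressing the output on lagged inputs, with no prewhitening filter needed in this special case. The weights and noise level below are hypothetical, and this sketch omits the biased-regression refinements studied in the paper:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 3000
x = rng.normal(size=n)                 # white-noise input (simplifies prewhitening away)
v = np.array([0.0, 2.0, 1.0, 0.5])     # true impulse response weights (hypothetical)
y = np.convolve(x, v)[:n] + 0.1 * rng.normal(size=n)

# Time-domain estimate: regress y_t on x_t, x_{t-1}, ..., x_{t-K}
K = 6
X = np.column_stack([np.concatenate([np.zeros(k), x[:n - k]]) for k in range(K + 1)])
v_hat = np.linalg.lstsq(X, y, rcond=None)[0]
print(np.round(v_hat[:4], 2))          # close to [0, 2, 1, 0.5]
```

With autocorrelated inputs, which is the situation the paper addresses, the lagged regressors become collinear and this plain regression degrades; that is what motivates both the biased regression and the frequency domain alternatives.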

11.
We use a class of parametric counting process regression models that are commonly employed in the analysis of failure time data to formulate the subject-specific capture probabilities for removal and recapture studies conducted in continuous time. We estimate the regression parameters by modifying the conventional likelihood score function for left-truncated and right-censored data to accommodate an unknown population size and missing covariates on uncaptured subjects, and we subsequently estimate the population size by a martingale-based estimating function. The resultant estimators for the regression parameters and population size are consistent and asymptotically normal under appropriate regularity conditions. We assess the small sample properties of the proposed estimators through Monte Carlo simulation and we present an application to a bird banding exercise.

12.
Many of the popular nonlinear time series models require a priori the choice of parametric functions which are assumed to be appropriate in specific applications. This approach is mainly used in financial applications, when sufficient knowledge is available about the nonlinear structure between the covariates and the response. One principal strategy to investigate a broader class of nonlinear time series is the Nonlinear Additive AutoRegressive (NAAR) model. The NAAR model estimates the lags of a time series as flexible functions in order to detect non-monotone relationships between current and past observations. We consider linear and additive models for identifying nonlinear relationships. A componentwise boosting algorithm is applied for simultaneous model fitting, variable selection, and model choice. Thus, with the application of boosting for fitting potentially nonlinear models we address the major issues in time series modelling: lag selection and nonlinearity. By means of simulation we compare boosting to alternative nonparametric methods. Boosting shows a strong overall performance in terms of precise estimations of highly nonlinear lag functions. The forecasting potential of boosting is examined on the German industrial production (IP); to improve the model’s forecasting quality we include additional exogenous variables. This addresses the second major aspect of this paper, the issue of high dimensionality in models. Allowing additional inputs extends the NAAR model to a broader class of models, namely the NAARX model. We show that boosting can cope with large models which have many covariates compared to the number of observations.

13.
We develop reference analysis for matrix-variate dynamic models with unknown observation covariance matrices. Bayesian algorithms for forecasting, estimation, and filtering are derived. This work extends the existing theory of reference analysis for univariate dynamic linear models, and thus it proposes a solution to the specification of the prior distributions for a very wide class of time series models. Subclasses of our models include the widely used multivariate and matrix-variate regression models.

14.
This paper examines prior choice in probit regression through a predictive cross-validation criterion. In particular, we focus on situations where the number of potential covariates is far larger than the number of observations, such as in gene expression data. Cross-validation avoids the tendency of such models to fit perfectly. We choose the scale parameter c in the standard variable selection prior as the minimizer of the log predictive score. Naive evaluation of the log predictive score requires substantial computational effort, and we investigate computationally cheaper methods using importance sampling. We find that K-fold importance densities perform best, in combination with either mixing over different values of c or with integrating over c through an auxiliary distribution.

15.
The recently developed rolling year GEKS procedure makes maximum use of all matches in the data to construct nonrevisable price indexes that are approximately free from chain drift. A potential weakness is that unmatched items are ignored. In this article we use imputation Törnqvist price indexes as inputs into the rolling year GEKS procedure. These indexes account for quality changes by imputing the “missing prices” associated with new and disappearing items. Three imputation methods are discussed. The first method makes explicit imputations using a hedonic regression model which is estimated for each time period. The other two methods make implicit imputations; they are based on time dummy hedonic and time-product dummy regression models and are estimated on bilateral pooled data. We present empirical evidence for New Zealand from scanner data on eight consumer electronics products and find that accounting for quality change can make a substantial difference.
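The two building blocks, bilateral Törnqvist indexes and the GEKS geometric-mean construction, can be sketched on a toy matched panel. All prices and quantities below are hypothetical, and the imputation step for unmatched items that the paper adds is omitted:

```python
import numpy as np

# Toy matched panel: two items observed in three periods (hypothetical data)
p = np.array([[1.0, 1.1, 1.2],     # prices, rows = items, columns = periods
              [2.0, 1.9, 2.1]])
q = np.array([[10.0, 10.0, 11.0],  # matching quantities
              [ 5.0,  6.0,  5.0]])
T = p.shape[1]

def tornqvist(s, t):
    """Bilateral Tornqvist price index between periods s and t."""
    w_s = p[:, s] * q[:, s] / (p[:, s] @ q[:, s])  # expenditure shares in s
    w_t = p[:, t] * q[:, t] / (p[:, t] @ q[:, t])
    return np.exp(0.5 * (w_s + w_t) @ np.log(p[:, t] / p[:, s]))

def geks(s, t):
    """GEKS index: geometric mean of indirect comparisons through every link period."""
    return np.prod([tornqvist(s, l) * tornqvist(l, t) for l in range(T)]) ** (1.0 / T)

print(round(geks(0, 2), 4))  # transitive price index for period 0 -> 2
```

Because the Törnqvist index is time-reversible, the GEKS construction is transitive (geks(0, 2) equals geks(0, 1) times geks(1, 2)), which is exactly the property that makes the resulting series free from chain drift.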

16.
We consider Markov-switching regression models, i.e. models for time series regression analyses where the functional relationship between covariates and response is subject to regime switching controlled by an unobservable Markov chain. Building on the powerful hidden Markov model machinery and the methods for penalized B-splines routinely used in regression analyses, we develop a framework for nonparametrically estimating the functional form of the effect of the covariates in such a regression model, assuming an additive structure of the predictor. The resulting class of Markov-switching generalized additive models is immensely flexible, and contains as special cases the common parametric Markov-switching regression models and also generalized additive and generalized linear models. The feasibility of the suggested maximum penalized likelihood approach is demonstrated by simulation. We further illustrate the approach using two real data applications, modelling (i) how sales data depend on advertising spending and (ii) how energy price in Spain depends on the Euro/Dollar exchange rate.

17.
In statistical analysis, one of the most important tasks is to select the relevant explanatory variables that best explain the dependent variable. Variable selection is usually performed within regression analysis, by choosing the model that minimizes an information criterion (IC). Information criteria directly affect the predictive power and the estimation of the selected models. Numerous information criteria exist in the literature, such as the Akaike Information Criterion (AIC) and the Bayesian Information Criterion (BIC). These criteria have been modified to improve the performance of the selected models; in particular, BIC has been extended with alternative modifications that exploit the prior and the information matrix. The information matrix-based BIC (IBIC) and the scaled unit information prior BIC (SPBIC) are efficient criteria of this kind. In this article, we propose performing variable selection via a differential evolution (DE) algorithm that minimizes IBIC and SPBIC in linear regression analysis. We conclude that these alternative criteria are very useful for variable selection, and we illustrate the efficiency of the combination with various simulation and application studies.
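As a stand-in for the IBIC/SPBIC criteria and the differential-evolution search of the paper, the following sketch performs exhaustive subset selection under the standard BIC; the data, coefficients, and sample size are all hypothetical:

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(4)
n, p = 200, 5
X = rng.normal(size=(n, p))
beta = np.array([1.5, 0.0, -2.0, 0.0, 0.0])  # only columns 0 and 2 are relevant
y = X @ beta + rng.normal(size=n)

def bic(idx):
    """Standard BIC of the OLS fit using predictor columns idx plus an intercept."""
    Z = np.column_stack([np.ones(n), X[:, list(idx)]])
    resid = y - Z @ np.linalg.lstsq(Z, y, rcond=None)[0]
    return n * np.log(resid @ resid / n) + Z.shape[1] * np.log(n)

subsets = [s for r in range(1, p + 1) for s in combinations(range(p), r)]
best = min(subsets, key=bic)
print(best)  # should contain the truly relevant columns 0 and 2
```

Exhaustive search over 2^p - 1 subsets is only feasible for small p; a stochastic optimizer such as differential evolution, as used in the paper, is what makes IC minimization practical for larger predictor sets.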

18.
Shi, Y., Laud, P., & Neuner, J. (2021). Lifetime Data Analysis, 27(1), 156–176.

In this paper, we first propose a dependent Dirichlet process (DDP) model using a mixture of Weibull models with each mixture component resembling a Cox model for survival data. We then build a Dirichlet process mixture model for competing risks data without regression covariates. Next we extend this model to a DDP model for competing risks regression data by using a multiplicative covariate effect on subdistribution hazards in the mixture components. Though built on proportional hazards (or subdistribution hazards) models, the proposed nonparametric Bayesian regression models do not require the assumption of constant hazard (or subdistribution hazard) ratio. An external time-dependent covariate is also considered in the survival model. After describing the model, we discuss how both cause-specific and subdistribution hazard ratios can be estimated from the same nonparametric Bayesian model for competing risks regression. For use with the regression models proposed, we introduce an omnibus prior that is suitable when little external information is available about covariate effects. Finally we compare the models’ performance with existing methods through simulations. We also illustrate the proposed competing risks regression model with data from a breast cancer study. An R package “DPWeibull” implementing all of the proposed methods is available at CRAN.


19.
In this article we propose a method called GLLS for the fitting of bilinear time series models. The GLLS procedure is the combination of the LASSO method, the generalized cross-validation method, the least angle regression method, and the stepwise regression method. Compared with the traditional methods such as the repeated residual method and the genetic algorithm, GLLS has the advantage of shrinking the coefficients of the models and saving the computational time. The Monte Carlo simulation studies and a real data example are reported to assess the performance of the proposed GLLS method.
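The LASSO component of such a procedure can be illustrated on a simpler problem: selecting the active lags of a linear autoregression by cyclic coordinate descent. The penalty level and the simulated model are illustrative assumptions, and this is not the GLLS algorithm itself:

```python
import numpy as np

rng = np.random.default_rng(6)
n, p = 1000, 8
N = n + 200
x = np.zeros(N)
e = rng.normal(size=N)
for t in range(3, N):
    x[t] = 0.6 * x[t - 1] - 0.4 * x[t - 3] + e[t]  # only lags 1 and 3 are active
x = x[200 - p:]                  # keep p presample lags plus n observations

# Candidate-lag design matrix: column k holds x_{t-(k+1)}
X = np.column_stack([x[p - k - 1:p - k - 1 + n] for k in range(p)])
y = x[p:]
X = (X - X.mean(0)) / X.std(0)   # standardize (the series is mean zero)

# LASSO by cyclic coordinate descent with soft thresholding
lam = 50.0
beta = np.zeros(p)
for _ in range(200):
    for j in range(p):
        r = y - X @ beta + X[:, j] * beta[j]   # partial residual for coordinate j
        z = X[:, j] @ r
        beta[j] = np.sign(z) * max(abs(z) - lam, 0.0) / (X[:, j] @ X[:, j])

selected = set(np.nonzero(beta)[0] + 1)
print(sorted(selected))          # should include lags 1 and 3
```

The shrinkage that zeroes out inactive lags is the property the abstract credits to GLLS; in the bilinear setting the design matrix simply contains the bilinear product terms as well.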

20.

The most common measure of dependence between two time series is the cross-correlation function. This measure gives a complete characterization of dependence for two linear and jointly Gaussian time series, but it often fails for nonlinear and non-Gaussian time series models, such as the ARCH-type models used in finance. The cross-correlation function is a global measure of dependence. In this article, we apply to bivariate time series the nonlinear local measure of dependence called local Gaussian correlation. It generally works well also for nonlinear models, and it can distinguish between positive and negative local dependence. We construct confidence intervals for the local Gaussian correlation and develop a test based on this measure of dependence. Asymptotic properties are derived for the parameter estimates, for the test functional and for a block bootstrap procedure. For both simulated and financial index data, we construct confidence intervals and we compare the proposed test with one based on the ordinary correlation and with one based on the Brownian distance correlation. Financial indexes are examined over a long time period and their local joint behavior, including tail behavior, is analyzed prior to, during and after the financial crisis. Supplementary material for this article is available online.
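For reference, the global measure the paper improves on, the sample cross-correlation function, takes only a few lines; the lag structure and noise level in this sketch are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 4000
x = rng.normal(size=n)
y = np.concatenate([np.zeros(2), 0.8 * x[:-2]]) + 0.6 * rng.normal(size=n)  # y lags x by 2

def ccf(x, y, k):
    """Sample cross-correlation between x_t and y_{t+k}."""
    xs = (x - x.mean()) / x.std()
    ys = (y - y.mean()) / y.std()
    if k >= 0:
        return float(np.mean(xs[:n - k] * ys[k:]))
    return float(np.mean(xs[-k:] * ys[:n + k]))

peak = max(range(-3, 4), key=lambda k: abs(ccf(x, y, k)))
print(peak, round(ccf(x, y, peak), 2))  # strongest dependence at lag 2
```

Being a single global number per lag, this statistic cannot reveal dependence that changes sign or strength across the joint distribution, for example in the tails, which is the gap the local Gaussian correlation is designed to fill.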
