Similar Literature
20 similar documents found.
1.
In this paper, we extend SiZer (SIgnificant ZERo crossing of the derivatives) to dependent data for the purpose of goodness-of-fit tests for time series models. Dependent SiZer compares the observed data with a specific null model being tested by adjusting the statistical inference using an assumed autocovariance function. This new approach uses a SiZer-type visualization to flag statistically significant differences between the data and a given null model. The power of this approach is demonstrated through some examples of time series of Internet traffic data. It is seen that such time series can have even more burstiness than is predicted by the popular, long-range dependent, Fractional Gaussian Noise model.
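As a rough illustration of the core idea, here is a minimal Python sketch of a dependent-SiZer-style significance map at a single bandwidth: the series is smoothed with a Gaussian kernel and compared against the null-model mean, with the smoother's standard error computed from the assumed autocovariance function. The function name and the white-noise example are hypothetical; the paper's actual procedure scans a family of bandwidths and examines derivatives.

```python
import numpy as np

def dependent_sizer_flags(y, acov, h, null_mean=0.0, z=1.96):
    """Flag locations where a kernel smooth of y departs significantly
    from a null-model mean, with variance adjusted for dependence.

    y         : 1-d series
    acov      : assumed autocovariance function, acov(lag) -> float
    h         : kernel bandwidth (in index units)
    null_mean : mean of the null model being tested
    """
    n = len(y)
    t = np.arange(n)
    # autocovariance matrix implied by the assumed null model
    Sigma = np.array([[acov(abs(i - j)) for j in range(n)] for i in range(n)])
    flags = np.zeros(n, dtype=int)
    for i in range(n):
        w = np.exp(-0.5 * ((t - i) / h) ** 2)  # Gaussian kernel weights
        w /= w.sum()
        est = w @ y                            # local average
        se = np.sqrt(w @ Sigma @ w)            # dependence-adjusted s.e.
        if est - null_mean > z * se:
            flags[i] = 1                       # significantly above null
        elif est - null_mean < -z * se:
            flags[i] = -1                      # significantly below null
    return flags

# Example: test a white-noise null against data containing a bump
rng = np.random.default_rng(0)
y = rng.normal(size=300)
y[120:180] += 1.0
print(dependent_sizer_flags(y, acov=lambda k: 1.0 if k == 0 else 0.0, h=10.0))
```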

2.
The consistency of the model selection criterion BIC has been well and widely studied for many nonlinear regression models. However, few of these studies have considered models with lag variables as regressors and auto-correlated errors in time series settings, which are common in both linear and nonlinear time series modelling. This paper studies a dynamic semi-varying coefficient model with ARMA errors, using an approach based on spectrum analysis of time series. The consistency property of the proposed model selection criteria is established, and an implementation procedure of model selection is proposed for practitioners. Simulation studies have also been conducted to numerically demonstrate the consistency property.

3.
Autoregressive models are widely employed for predictions and other inferences in many scientific fields. While the determination of their order is in general a difficult and critical step, this task becomes more complicated and crucial when the time series under investigation is a realization of a stochastic process characterized by sparsity. In this paper we present a method for order determination of a stationary AR model with a sparse structure, given a set of observations, based upon a bootstrapped version of the MAICE procedure [Akaike H. Prediction and entropy. Springer; 1998], in conjunction with a LASSO-type constraining procedure for the suppression of insignificant lags. Empirical results are obtained via Monte Carlo simulations. The quality of our method is assessed by comparison with the commonly adopted cross-validation approach and with the non-bootstrap counterpart of the presented procedure.
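The lag-suppression step can be illustrated with a small sketch: build the lagged design matrix up to a maximum order and let a cross-validated LASSO zero out the insignificant lags. This shows only the LASSO component under assumed details (scikit-learn's `LassoCV`, a hypothetical `sparse_ar_lags` helper), not the paper's full bootstrapped-MAICE procedure.

```python
import numpy as np
from sklearn.linear_model import LassoCV

def sparse_ar_lags(y, p_max, cv=5):
    """Identify the active lags of a sparse AR model via a LASSO fit
    on the lagged design matrix (a sketch of the lag-suppression idea)."""
    y = np.asarray(y, dtype=float)
    n = len(y)
    # column k holds y lagged by k, aligned with the target y[p_max:]
    X = np.column_stack([y[p_max - k:n - k] for k in range(1, p_max + 1)])
    model = LassoCV(cv=cv).fit(X, y[p_max:])
    active = np.flatnonzero(np.abs(model.coef_) > 1e-8) + 1  # 1-based lags
    return active, model.coef_

# Example: a sparse AR process with active lags 1 and 4
rng = np.random.default_rng(1)
y = np.zeros(600)
for t in range(4, 600):
    y[t] = 0.5 * y[t - 1] + 0.3 * y[t - 4] + rng.normal()
print(sparse_ar_lags(y, p_max=10)[0])
```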

4.
An excess of zeros is not a rare feature in count data. Statisticians advocate the Poisson-type hurdle model (among other techniques) as an interesting approach to handle this data peculiarity. However, the frequency of gross errors and the complexity intrinsic to some of the phenomena considered may render this classical model unreliable and too limiting. In this paper, we develop a robust version of the Poisson hurdle model by extending the robust procedure for GLMs of Cantoni and Ronchetti (2001) to the truncated Poisson regression model. The performance of the new robust approach is then investigated via a simulation study, a real data application and a sensitivity analysis. The results show the reliability of the new technique in the neighborhood of the truncated Poisson model. This robust modelling approach is therefore a valuable complement to the classical one, providing a tool for reliable statistical conclusions and more effective decisions.

5.
In this paper we present an indirect estimation procedure for fractional (ARFIMA) time series models. The estimation method is based on an ‘incorrect’ criterion which does not directly provide a consistent estimator of the parameters of interest, but leads to correct inference through the use of simulations.

The main steps are the following. First, we consider an auxiliary model which can be easily estimated; specifically, we choose the finite-lag autoregressive model. Then, this auxiliary model is estimated both on the observations and on simulated values drawn from the ARFIMA model associated with a given value of the parameters of interest. Finally, that value is calibrated so that the two estimates of the auxiliary parameters are close.

In this article, we describe the estimation procedure and compare the performance of the indirect estimator with some alternative estimators based on the likelihood function in a Monte Carlo study.
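A minimal sketch of the three steps, under simplifying assumptions (an ARFIMA(0, d, 0) model, a truncated moving-average representation for simulation, least-squares estimation of the auxiliary AR(p), and a grid search with common random numbers for the calibration step), might look as follows; all function names are hypothetical.

```python
import numpy as np

def fit_ar(y, p):
    """Auxiliary model: least-squares AR(p) coefficients."""
    X = np.column_stack([y[p - k:len(y) - k] for k in range(1, p + 1)])
    return np.linalg.lstsq(X, y[p:], rcond=None)[0]

def frac_noise(d, n, burn=500, seed=0):
    """ARFIMA(0, d, 0) path via a truncated MA representation of (1-B)^{-d}."""
    rng = np.random.default_rng(seed)
    m = n + burn
    psi = np.ones(m)
    for j in range(1, m):                      # psi_j = Gamma(j+d)/(Gamma(d)Gamma(j+1))
        psi[j] = psi[j - 1] * (j - 1 + d) / j
    eps = rng.normal(size=m)
    return np.convolve(eps, psi)[:m][burn:]    # causal filter output

def indirect_d(y, p=5, grid=None):
    """Calibrate d so the AR(p) fit on simulated data matches the AR(p)
    fit on the observed data (common random numbers across the grid)."""
    grid = np.linspace(0.05, 0.45, 41) if grid is None else grid
    beta_obs = fit_ar(y, p)
    dists = [np.linalg.norm(beta_obs - fit_ar(frac_noise(d, len(y)), p))
             for d in grid]
    return grid[int(np.argmin(dists))]

y = frac_noise(0.3, 2000, seed=42)   # "observed" data with d = 0.3
print(indirect_d(y))                 # calibrated value should be near 0.3
```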

6.
Dimension reduction in regression is an efficient method of overcoming the curse of dimensionality in non-parametric regression. Motivated by recent developments in dimension reduction for time series, this paper performs an empirical extension of the central mean subspace in time series to a single-input transfer function model. Here, we use the central mean subspace as a tool for dimension reduction of bivariate time series in the case when the dimension and lag are known, and we estimate the central mean subspace through the Nadaraya–Watson kernel smoother. Furthermore, we develop a data-dependent approach based on a modified Schwarz Bayesian criterion to estimate the unknown dimension and lag. Finally, we show that the approach works well for bivariate time series using an expository demonstration, two simulations, and a real data analysis of El Niño and fish population series.
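For reference, the Nadaraya–Watson smoother used above is simply a kernel-weighted local average; a minimal sketch with a Gaussian kernel (hypothetical function name, synthetic data):

```python
import numpy as np

def nadaraya_watson(x_train, y_train, x_eval, h):
    """Nadaraya-Watson kernel regression with a Gaussian kernel."""
    x_train = np.asarray(x_train, dtype=float)
    out = np.empty(len(x_eval))
    for i, x0 in enumerate(x_eval):
        w = np.exp(-0.5 * ((x_train - x0) / h) ** 2)  # kernel weights
        out[i] = w @ y_train / w.sum()                # weighted average
    return out

rng = np.random.default_rng(2)
x = rng.uniform(-2, 2, 200)
y = np.sin(np.pi * x) + 0.3 * rng.normal(size=200)
grid = np.linspace(-2, 2, 9)
print(np.round(nadaraya_watson(x, y, grid, h=0.2), 2))
```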

7.
Winters are a difficult period for the National Health Service (NHS) in the United Kingdom (UK), due to the combination of cold weather and the increased likelihood of respiratory infections, especially influenza. In this article we present a proper statistical time series approach for modelling and analysing weekly hospital admissions in the West Midlands in the UK during the period week 15/1990 to week 14/1999. We consider three variables, namely, hospital admissions, general practitioner consultations, and minimum temperature. The autocorrelations of each series are shown to decay hyperbolically. The correlations of hospital admissions with lags of the other series also decay hyperbolically, but at different speeds and in different directions. One of the main objectives of this paper is to show that each of the three series can be represented by a fractionally differenced autoregressive integrated moving average (FDA) model. Further, the hospital admission winter and summer residuals show significant interdependency, which may be interpreted as hidden periodicities within the 10-year time interval. The short-range (8-week) forecasts of hospital admissions from the FDA model and a fourth-order autoregressive AR(4) model are quite similar. However, our results reveal that the long-range forecasts of the FDA model are more realistic. This implies that, using the FDA approach, the respective authority can plan properly for winter pressure.

8.
Many of the popular nonlinear time series models require the a priori choice of parametric functions which are assumed to be appropriate in specific applications. This approach is mainly used in financial applications, when sufficient knowledge is available about the nonlinear structure between the covariates and the response. One principal strategy for investigating a broader class of nonlinear time series is the Nonlinear Additive AutoRegressive (NAAR) model. The NAAR model estimates the lags of a time series as flexible functions in order to detect non-monotone relationships between current and past observations. We consider linear and additive models for identifying nonlinear relationships. A componentwise boosting algorithm is applied for simultaneous model fitting, variable selection, and model choice; thus, by applying boosting to fit potentially nonlinear models we address the two major issues in time series modelling: lag selection and nonlinearity. By means of simulation we compare boosting to alternative nonparametric methods. Boosting shows a strong overall performance in terms of precise estimation of highly nonlinear lag functions. The forecasting potential of boosting is examined on German industrial production (IP); to improve the model's forecasting quality we include additional exogenous variables. This addresses the second major aspect of this paper, the issue of high dimensionality in models. Allowing additional inputs extends the NAAR model to a broader class of models, namely the NAARX model. We show that boosting can cope with large models which have many covariates relative to the number of observations.
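A stripped-down sketch of componentwise L2 boosting for lag selection: at each iteration, every lag's base learner (here a simple polynomial rather than the spline learners typically used) is fitted to the current residuals, only the best-fitting component is kept, and the additive fit is updated with a shrinkage factor. Names and data are hypothetical.

```python
import numpy as np

def componentwise_boost(X, y, n_iter=200, nu=0.1, degree=3):
    """Componentwise L2 boosting: fit each column's base learner to the
    residuals, keep the best, and update the fit with shrinkage nu."""
    n, p = X.shape
    fit = np.full(n, y.mean())
    coefs = [np.zeros(degree + 1) for _ in range(p)]
    selected = []
    for _ in range(n_iter):
        r = y - fit
        best_j, best_err, best_beta = None, np.inf, None
        for j in range(p):
            B = np.vander(X[:, j], degree + 1)   # polynomial basis for lag j
            beta = np.linalg.lstsq(B, r, rcond=None)[0]
            err = np.sum((r - B @ beta) ** 2)
            if err < best_err:
                best_j, best_err, best_beta = j, err, beta
        B = np.vander(X[:, best_j], degree + 1)
        fit += nu * (B @ best_beta)              # shrunken update
        coefs[best_j] += nu * best_beta
        selected.append(best_j)
    return fit, coefs, selected

# Example: only lags 1 and 2 matter, lag 2 enters nonlinearly
rng = np.random.default_rng(3)
y_full = rng.normal(size=505)
for t in range(2, 505):
    y_full[t] = 0.4 * y_full[t - 1] - 0.5 * np.tanh(y_full[t - 2]) + rng.normal()
p = 5
X = np.column_stack([y_full[p - k:-k] for k in range(1, p + 1)])
fit, coefs, sel = componentwise_boost(X, y_full[p:])
print(sorted(set(sel)))   # lags selected by boosting (0-based columns)
```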

9.
The yield spread, measured as the difference between long- and short-term interest rates, is widely regarded as one of the strongest predictors of economic recessions. In this paper, we propose an enhanced recession prediction model that incorporates trends in the value of the yield spread. We expect our model to generate stronger recession signals because a steadily declining yield spread typically indicates growing pessimism associated with reduced future business activity. We capture trends in the yield spread by considering both the level of the yield spread at a lag of 12 months and its value at each of the two quarters leading up to the forecast origin, and we evaluate its predictive abilities using both logit and artificial neural network models. Our results indicate that models incorporating information from the time series of the yield spread predict future recession periods much better than models considering only the spread value as of the forecast origin. Furthermore, the results are strongest for our artificial neural network model and for the logistic regression model that includes interaction terms, which we confirm using both a blocked cross-validation technique and an expanding estimation window approach.
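A sketch of the logit variant on synthetic data: the design matrix holds the spread 12 months before the target month plus its values one and two quarters before that forecast origin. The data-generating process below is purely hypothetical, standing in for real recession indicators and Treasury spread series.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def recession_features(spread, lags=(12, 15, 18)):
    """Design matrix: spread 12 months before the target month plus
    its values one and two quarters before that forecast origin."""
    m = max(lags)
    X = np.column_stack([spread[m - L:len(spread) - L] for L in lags])
    return X, m

# Synthetic illustration (hypothetical data, not real NBER/Treasury series)
rng = np.random.default_rng(4)
n = 480
spread = np.cumsum(rng.normal(0, 0.15, n)) + 1.0      # random-walk spread
X, m = recession_features(spread)
# recessions loosely follow a low and declining spread a year earlier
logit = -1.5 - 2.0 * X[:, 0] - 1.0 * (X[:, 0] - X[:, 2])
y = rng.binomial(1, 1 / (1 + np.exp(-logit)))
model = LogisticRegression().fit(X, y)
print(model.coef_, model.score(X, y))
```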

10.
A large body of literature exists on the techniques for selecting the important variables in linear regression analysis. Many of these techniques are ad hoc in nature and have not been studied from a theoretical viewpoint. In this paper we discuss some of the more commonly used techniques and propose a selection procedure based on the statistical selection and ranking approach. This procedure is easy to compute and apply. The procedure depends on the goodness of fit of the model and the total error associated with it.

11.
A bivariate model of claim frequencies and severities
Bivariate claim data come from a population that consists of insureds who may claim either one, both or none of the two types of benefits covered by a policy. In the present paper, we develop a statistical procedure to fit bivariate distributions of claims in the presence of covariates. This allows for a more accurate study of the insureds' claiming behaviour, in both the frequency and the severity of the two types of claims. A generalised logistic model is employed to examine the frequency probabilities, whilst the three-parameter Burr distribution is suggested for modelling the underlying severity distributions. The bivariate copula model is exploited in such a way that it allows us to adjust for a range of frequency dependence structures; a method for assessing the adequacy of the fitted severity model is outlined. A health claims dataset illustrates the methods; we describe the use of orthogonal polynomials for characterising the relationship between age and the frequency and severity models.
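The two marginal building blocks can be sketched separately (the copula step that ties them together is omitted here): a logistic model for whether a claim occurs and a three-parameter Burr (Burr XII) fit to the positive claim amounts. The covariate effect and data below are hypothetical; `scipy.stats.burr12` stands in for the paper's severity distribution.

```python
import numpy as np
from scipy import stats
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(5)

# --- frequency: did the insured claim this benefit type? (logistic) ---
age = rng.uniform(20, 70, 1000)
p_claim = 1 / (1 + np.exp(-(-3.0 + 0.05 * age)))   # hypothetical age effect
claimed = rng.binomial(1, p_claim)
freq_model = LogisticRegression().fit(age.reshape(-1, 1), claimed)

# --- severity: three-parameter Burr (Burr XII) fit to positive claims ---
sev = stats.burr12.rvs(c=2.0, d=1.5, scale=800.0, size=claimed.sum(),
                       random_state=rng)
c, d, loc, scale = stats.burr12.fit(sev, floc=0)   # fix location at 0
print(freq_model.coef_, (c, d, scale))
```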

12.
In this paper, we extend the structural probit measurement error model by considering that the unobserved covariate follows a skew-normal distribution. The new model is termed the structural skew-normal probit model. As in the normal case, the likelihood function is obtained analytically and can be maximized by using existing statistical software. A Bayesian approach using Markov chain Monte Carlo techniques to sample from the posterior distributions is also developed. A simulation study demonstrates the usefulness of the approach in avoiding the attenuation that arises with the naive procedure, and the new model appears to be more efficient than the structural probit model when the distribution of the covariate (predictor) is skew.

13.
In this paper, we present a statistical inference procedure for the step-stress accelerated life testing (SSALT) model with Weibull failure time distribution and interval censoring, via the formulation of a generalized linear model (GLM). The likelihood function of an interval-censored SSALT is in general too complicated to yield analytical results. However, by transforming the failure time to an exponential distribution and using a binomial random variable for the failure counts occurring in inspection intervals, a GLM formulation with a complementary log-log link function can be constructed. The estimates of the regression coefficients for the Weibull scale parameter are obtained through the iterative weighted least squares (IWLS) method, and the shape parameter is updated by direct maximum likelihood (ML) estimation. The confidence intervals for these parameters are estimated through bootstrapping. The application of the proposed GLM approach is demonstrated by an industrial example.
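Since the binomial GLM with a complementary log-log link is standard, the IWLS step can be written out directly; below is a self-contained sketch, with hypothetical data standing in for interval-censored step-stress failure counts.

```python
import numpy as np

def iwls_cloglog(X, y, n_trials, n_iter=25, tol=1e-8):
    """IWLS for a binomial GLM with complementary log-log link:
    log(-log(1 - p_i)) = x_i' beta, with y_i ~ Binomial(n_i, p_i)."""
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        eta = X @ beta
        p = np.clip(1.0 - np.exp(-np.exp(eta)), 1e-10, 1 - 1e-10)
        dp = np.exp(eta - np.exp(eta))           # dp/deta for cloglog
        z = eta + (y / n_trials - p) / dp        # working response
        w = n_trials * dp ** 2 / (p * (1 - p))   # IWLS weights
        WX = X * w[:, None]
        beta_new = np.linalg.solve(X.T @ WX, WX.T @ z)
        if np.max(np.abs(beta_new - beta)) < tol:
            return beta_new
        beta = beta_new
    return beta

# Hypothetical step-stress data: failure counts per inspection interval
rng = np.random.default_rng(6)
stress = np.repeat([1.0, 1.5, 2.0], 40)          # step-stress levels
logt = np.tile(np.log(np.arange(1, 41)), 3)      # log inspection time
X = np.column_stack([np.ones(120), stress, logt])
p_true = 1 - np.exp(-np.exp(X @ np.array([-4.0, 1.2, 0.8])))
n_trials = np.full(120, 50)
y = rng.binomial(n_trials, p_true)
print(iwls_cloglog(X, y, n_trials))              # approx [-4.0, 1.2, 0.8]
```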

14.
Clustered failure time data are commonly encountered in biomedical research, where the study subjects from the same cluster (e.g., family) share common genetic and/or environmental factors such that the failure times within the same cluster are correlated. Two approaches that are commonly used to account for the intra-cluster association are frailty models and marginal models. In this paper, we study the marginal proportional hazards model, where the structure of dependence between individuals within a cluster is unspecified. An estimation procedure is developed based on a pseudo-likelihood approach, and a risk set sampling method is proposed for the formulation of the pseudo-likelihood. The asymptotic properties of the proposed estimators are studied, and related issues regarding statistical efficiency are discussed. The performance of the proposed estimator is demonstrated through simulation studies. A data example from a child vitamin A supplementation trial in Nepal (Nepal Nutrition Intervention Project-Sarlahi, or NNIPS) is used to illustrate the methodology.

15.
In this paper we extend the structural probit measurement error model by considering the unobserved covariate to follow a skew-normal distribution. The new model is termed the structural skew-normal probit model. As in the normal case, the likelihood function is obtained analytically and can be maximized by using existing statistical software. A Bayesian approach using Markov chain Monte Carlo techniques for sampling from the posterior distributions is also developed. A simulation study demonstrates the usefulness of the approach in avoiding the attenuation which arises with the naive procedure. Moreover, a comparison of predicted and true success probabilities indicates that it is more efficient to use the skew probit model when the distribution of the covariate (predictor) is skew. An application to a real data set is also provided.

16.
Complex biological processes are usually studied over time in a collection of individuals, so that longitudinal data are available. The statistical challenge is to better understand the underlying biological mechanisms. A standard statistical approach is the mixed-effects model, in which the regression function is elaborate enough to describe the biological processes precisely (for example, solutions of multi-dimensional ordinary differential equations or of a partial differential equation). A classical estimation method relies on coupling a stochastic version of the EM algorithm with a Markov chain Monte Carlo algorithm. This algorithm requires many evaluations of the regression function, which is clearly prohibitive when the solution is numerically approximated with a time-consuming solver. In this paper a meta-model relying on a Gaussian process emulator is proposed to approximate the regression function, leading to what is called a mixed meta-model. The uncertainty of the meta-model approximation can be incorporated in the model. A control on the distance between the maximum likelihood estimates of the mixed meta-model and the maximum likelihood estimates of the exact mixed model is guaranteed. Finally, numerical simulations are performed to illustrate the efficiency of this approach.
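A minimal sketch of the emulation idea, with a cheap closed-form function standing in for the time-consuming solver: evaluate the solver on a small design of parameter values, fit one Gaussian process per output time point, and call the cheap emulator wherever the algorithm would call the solver. All names and the toy model are hypothetical; this uses scikit-learn's `GaussianProcessRegressor` rather than the paper's specific emulator.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

def expensive_solver(theta, t):
    """Stand-in for a costly ODE/PDE solver: response curve at times t."""
    return theta[0] * (1 - np.exp(-theta[1] * t))

t_obs = np.linspace(0, 10, 20)
rng = np.random.default_rng(7)

# Design: evaluate the solver on a small grid of parameter values
thetas = np.column_stack([rng.uniform(0.5, 2.0, 60),
                          rng.uniform(0.1, 1.0, 60)])
Y = np.array([expensive_solver(th, t_obs) for th in thetas])

# One GP emulator per output time point (a simple multi-output strategy)
kernel = ConstantKernel(1.0) * RBF(length_scale=[0.5, 0.2])
emulators = [GaussianProcessRegressor(kernel=kernel, normalize_y=True)
             .fit(thetas, Y[:, j]) for j in range(len(t_obs))]

def emulated_curve(theta):
    x = np.asarray(theta).reshape(1, -1)
    return np.array([em.predict(x)[0] for em in emulators])

theta_test = np.array([1.3, 0.4])
print(np.max(np.abs(emulated_curve(theta_test)
                    - expensive_solver(theta_test, t_obs))))
```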

17.
18.
A partially linear model is a semiparametric regression model that consists of parametric and nonparametric regression components in an additive form. In this article, we propose a partially linear model using a Gaussian process regression approach and consider statistical inference for the proposed model. Based on the proposed model, the estimation procedure is described by the posterior distributions of the unknown parameters, and model comparisons between the parametric representation and the semi- and nonparametric representations are explored. Empirical analysis of the proposed model is performed with synthetic data and real data applications.

19.
This paper develops a space-time statistical model for local forecasting of surface-level wind fields in a coastal region with complex topography. The statistical model makes use of output from deterministic numerical weather prediction models which are able to produce forecasts of surface wind fields on a spatial grid. When predicting surface winds at observing stations, errors can arise due to sub-grid scale processes not adequately captured by the numerical weather prediction model, and the statistical model attempts to correct for these influences. In particular, it uses information from observing stations within the study region as well as topographic information to account for local bias. Bayesian methods for inference are used in the model, with computations carried out using Markov chain Monte Carlo algorithms. Empirical performance of the model is described, illustrating that a structured Bayesian approach to complicated space-time models of the type considered in this paper can be readily implemented and can lead to improvements in forecasting over traditional methods.

20.
Autocorrelation testing and parameter estimation in the presence of autocorrelation are important topics in introductory econometrics, and when autocorrelation is present the original model transforms into an autoregressive distributed lag (ADL) model. This paper discusses autocorrelation testing and parameter estimation under autocorrelation, proposes an autocorrelation test based on the autoregressive distributed lag model, and also gives the corresponding parameter estimates.
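A sketch of the transformation and a Durbin-style test (the paper's exact test is only summarized above, so the version below is an assumption): a regression y_t = a + b·x_t + u_t with AR(1) errors u_t = ρ·u_{t−1} + e_t can be rewritten as the ADL(1,1) model y_t = a(1−ρ) + ρ·y_{t−1} + b·x_t − ρb·x_{t−1} + e_t, so a t-test on the coefficient of y_{t−1} serves as an autocorrelation test, and the ADL fit supplies the parameter estimates.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(8)
n = 300
x = rng.normal(size=n)
u = np.zeros(n)
for t in range(1, n):                  # AR(1) errors with rho = 0.6
    u[t] = 0.6 * u[t - 1] + rng.normal()
y = 1.0 + 2.0 * x + u

# ADL(1,1) form implied by AR(1) errors:
#   y_t = a(1-rho) + rho*y_{t-1} + b*x_t + c*x_{t-1} + e_t, with c = -rho*b
X = sm.add_constant(np.column_stack([y[:-1], x[1:], x[:-1]]))
fit = sm.OLS(y[1:], X).fit()
print(fit.params)        # rho near 0.6, b near 2.0
print(fit.tvalues[1])    # t-statistic on lagged y: the autocorrelation test
```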
