Similar Articles
20 similar articles found.
1.
The sales promotion data resulting from multiple marketing strategies are usually autocorrelated. Consequently, the characteristics of those data sets can be analyzed using time-series and/or intervention analysis. Traditional time-series intervention analysis focuses on the effects of a single intervention or a few interventions, and forecasts can be obtained only if the future interventions are known. This study departs from traditional approaches and considers cases in which multiple interventions exist and future interventions are uncertain. In addition, a set of real sales promotion data is used to demonstrate the effectiveness of the proposed approach.
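The core of such an analysis can be sketched as a regression with ARMA errors in which an intervention enters as a step dummy. Below is a minimal illustration, assuming the statsmodels package; the promotion date, effect size, and ARIMA order are all hypothetical.

```python
import numpy as np
from statsmodels.tsa.statespace.sarimax import SARIMAX

rng = np.random.default_rng(0)
n = 120
promo = np.zeros(n)
promo[60:] = 1.0                      # hypothetical promotion active from t = 60
e = np.zeros(n)
for t in range(1, n):                 # AR(1) disturbance
    e[t] = 0.6 * e[t - 1] + rng.normal()
y = 10 + 3 * promo + e                # level shift of +3 during the promotion

res = SARIMAX(y, exog=promo, order=(1, 0, 0), trend="c").fit(disp=False)
print(res.params)                     # the exog coefficient recovers the effect (~3)
# Forecasts require an assumed scenario for future interventions:
print(res.forecast(steps=12, exog=np.ones(12)))   # promotion assumed to stay active
```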

2.
Time series are often affected by interventions such as strikes, earthquakes, or policy changes. In the current paper, we build a practical nonparametric intervention model using the central mean subspace in time series. We estimate the central mean subspace for time series, taking into account known interventions, by using the Nadaraya–Watson kernel estimator. We use the modified Bayesian information criterion to estimate the unknown lag and dimension. Finally, we demonstrate that this nonparametric approach for intervened time series performs well in simulations and in a real data analysis of monthly average oxidant levels.
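The Nadaraya–Watson estimator mentioned above is the standard kernel-regression building block; a minimal sketch for a nonlinear autoregression follows. This illustrates only the kernel smoother, not the paper's central-mean-subspace estimation, and the bandwidth and model are illustrative.

```python
import numpy as np

def nadaraya_watson(x_train, y_train, x_eval, h):
    """Gaussian-kernel Nadaraya-Watson estimate of E[Y | X = x]."""
    w = np.exp(-0.5 * ((x_eval[:, None] - x_train[None, :]) / h) ** 2)
    return (w @ y_train) / w.sum(axis=1)

rng = np.random.default_rng(1)
n = 500
y = np.zeros(n)
for t in range(1, n):                 # nonlinear AR(1): y_t = sin(y_{t-1}) + noise
    y[t] = np.sin(y[t - 1]) + 0.3 * rng.normal()

x_lag, resp = y[:-1], y[1:]           # regress y_t on y_{t-1}
grid = np.linspace(x_lag.min(), x_lag.max(), 50)
m_hat = nadaraya_watson(x_lag, resp, grid, h=0.3)
```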

3.
Time series sometimes consist of count data in which the number of events occurring in a given time interval is recorded. Such data are necessarily nonnegative integers, and an assumption of a Poisson or negative binomial distribution is often appropriate. This article sets up a model in which the level of the process generating the observations changes over time. A recursion analogous to the Kalman filter is used to construct the likelihood function and to make predictions of future observations. Qualitative variables, based on a binomial or multinomial distribution, may be handled in a similar way. The model for count data may be extended to include explanatory variables. This enables nonstochastic slope and seasonal components to be included in the model, as well as permitting intervention analysis. The techniques are illustrated with a number of applications, and an attempt is made to develop a model-selection strategy along the lines of that used for Gaussian structural time series models. The applications include an analysis of the results of international football matches played between England and Scotland and an assessment of the effect of the British seat-belt law on the drivers of light-goods vehicles.
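A well-known recursion of this kind is the discounted gamma–Poisson filter: a conjugate gamma distribution for the level is discounted each period and then updated with the new count, giving a negative-binomial-type one-step-ahead predictive distribution. The sketch below shows only that recursion; the discount factor and prior values are illustrative.

```python
import numpy as np

def gamma_poisson_filter(y, omega=0.9, a0=1.0, b0=1.0):
    """Discounted gamma-Poisson recursion: discount the gamma prior, report the
    predictive mean a/b, then do the conjugate update with the new count."""
    a, b = a0, b0
    preds = []
    for yt in y:
        a, b = omega * a, omega * b      # discount past information
        preds.append(a / b)              # one-step-ahead predictive mean
        a, b = a + yt, b + 1.0           # conjugate Poisson-gamma update
    return np.array(preds)

counts = np.array([3, 5, 4, 6, 8, 7, 9, 12, 10, 11])
print(gamma_poisson_filter(counts))
```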

4.
This is the first study that employs the propensity score matching framework to examine the average treatment effect of exchange rate regimes on economic growth. Previous studies examining the effects of different exchange rate regimes on growth typically apply time series or panel data techniques and provide mixed results. This study employs a variety of non-parametric matching methods to address the self-selection problem, which potentially biases traditional linear regressions. We evaluate the average treatment effect of the floating exchange rate regime on economic growth in 164 countries. The time period of the quasi-experiment starts in 1970, capturing the collapse of the Bretton Woods fixed exchange rate commitment system. Results show that the average treatment effect of floating exchange rate regimes on economic growth is statistically insignificant. Verified with Rosenbaum's bounds, our findings are strong and robust: there is no evidence that adopting a floating exchange rate regime rather than a fixed one leads to higher economic growth.
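A bare-bones version of the matching pipeline, with a simulated stand-in for the country panel: fit a propensity model for regime choice, match each treated unit to its nearest-score control, and average the outcome differences. All data and coefficients here are hypothetical.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
n = 500
x = rng.normal(size=(n, 3))                           # country covariates (made up)
treat = rng.binomial(1, 1 / (1 + np.exp(-x[:, 0])))   # self-selection into floating
growth = 0.5 * x[:, 0] + rng.normal(size=n)           # outcome: no true regime effect

ps = LogisticRegression().fit(x, treat).predict_proba(x)[:, 1]
treated, control = np.where(treat == 1)[0], np.where(treat == 0)[0]
# Nearest-neighbour matching on the propensity score (with replacement):
matches = control[np.abs(ps[treated][:, None] - ps[control][None, :]).argmin(axis=1)]
print(f"matching estimate: {np.mean(growth[treated] - growth[matches]):.3f}")
# should be near zero here, mirroring the paper's insignificant effect
```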

5.
The analysis of time-indexed categorical data is important in many fields, e.g., telecommunication network monitoring, manufacturing process control, and ecology. Primary interest is in detecting and measuring serial associations and dependencies in such data. For cardinal time series analysis, autocorrelation is a convenient and informative measure of serial association; yet for categorical time series analysis, an analogously convenient measure and corresponding concepts of weak stationarity have not been provided. For two categorical variables, several ways of measuring association have been suggested. This paper reviews such measures and investigates their properties in a serial context. We discuss concepts of weak stationarity of a categorical time series, in particular stationarity in association measures. Serial association and weak stationarity are studied in the class of discrete ARMA processes introduced by Jacobs and Lewis (J. Time Ser. Anal. 4(1):19–36, 1983). "An intrinsic feature of a time series is that, typically, adjacent observations are dependent. The nature of this dependence among observations of a time series is of considerable practical interest. Time series analysis is concerned with techniques for the analysis of this dependence." (Box et al. 1994, p. 1)
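One widely used association measure that adapts naturally to the serial setting is Cramér's V computed between a categorical series and its own lagged copy; a sketch follows. The simulated series and lag choices are illustrative, and this is only one of the measures reviewed in the paper.

```python
import numpy as np
from scipy.stats import chi2_contingency

def lagged_cramers_v(series, lag):
    """Cramer's V between x_t and x_{t-lag} as a measure of serial association."""
    a, b = series[lag:], series[:-lag]
    cats = sorted(set(series))
    table = np.array([[np.sum((a == i) & (b == j)) for j in cats] for i in cats])
    chi2 = chi2_contingency(table, correction=False)[0]
    n, k = table.sum(), min(table.shape)
    return np.sqrt(chi2 / (n * (k - 1)))

rng = np.random.default_rng(3)
x = rng.choice(["a", "b", "c"], size=300)
x = np.where(rng.random(300) < 0.5, np.roll(x, 1), x)   # induce serial dependence
print(lagged_cramers_v(x, 1), lagged_cramers_v(x, 5))   # association decays with lag
```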

6.
Time‐varying coefficient models are widely used in longitudinal data analysis. These models allow the effects of predictors on response to vary over time. In this article, we consider a mixed‐effects time‐varying coefficient model to account for the within subject correlation for longitudinal data. We show that when kernel smoothing is used to estimate the smooth functions in time‐varying coefficient models for sparse or dense longitudinal data, the asymptotic results of these two situations are essentially different. Therefore, a subjective choice between the sparse and dense cases might lead to erroneous conclusions for statistical inference. In order to solve this problem, we establish a unified self‐normalized central limit theorem, based on which a unified inference is proposed without deciding whether the data are sparse or dense. The effectiveness of the proposed unified inference is demonstrated through a simulation study and an analysis of Baltimore MACS data.
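The kernel-smoothing step can be illustrated with a local (kernel-weighted) least squares estimate of a coefficient function beta(t). The sketch below ignores the mixed-effects, within-subject correlation structure that the paper actually addresses, and the design is made up.

```python
import numpy as np

def local_beta(t_obs, x, y, t0, h):
    """Kernel-weighted least squares estimate of beta(t0) in y = beta(t) * x + e."""
    w = np.exp(-0.5 * ((t_obs - t0) / h) ** 2)
    return np.sum(w * x * y) / np.sum(w * x * x)

rng = np.random.default_rng(4)
t = np.sort(rng.random(400))
x = rng.normal(size=400)
beta_true = np.sin(2 * np.pi * t)                 # smoothly varying effect
y = beta_true * x + 0.2 * rng.normal(size=400)

grid = np.linspace(0.05, 0.95, 30)
beta_hat = np.array([local_beta(t, x, y, g, h=0.05) for g in grid])
```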

7.
In models for predicting financial distress, ranging from traditional statistical models to artificial intelligence models, scholars have primarily paid attention to improving predictive accuracy and the sophistication of the prediction methods. However, the extant models use static or short-term data, rather than time-series data, to draw inferences on future financial distress. If financial distress occurs at the end of a progressive process, then omitting the time series of historical financial ratios from the analysis ignores the cumulative effect of previous financial ratios on the current consequences. This study incorporated the cumulative characteristics of financial distress by using a state space model capable of long-term forecasting to dynamically predict an enterprise's financial distress. Kalman filtering is used to estimate the model parameters. Thus, the model constructed in this paper is a dynamic financial prediction model with the benefit of long-term forecasting. Additionally, current data are used to forecast the future annual financial position and to judge whether the enterprise will be in financial distress.
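The estimation engine referred to is the Kalman filter; for a local-level state space model the recursion reduces to a few lines. This is only the generic filter, not the paper's financial-distress model, and the noise variances are placeholders.

```python
import numpy as np

def kalman_local_level(y, q, r, m0=0.0, p0=1e3):
    """Kalman filter for the local-level model: x_t = x_{t-1} + w_t (var q),
    y_t = x_t + v_t (var r). Returns the filtered state means."""
    m, p = m0, p0
    filt = []
    for yt in y:
        p = p + q                 # predict: state variance grows by q
        k = p / (p + r)           # Kalman gain
        m = m + k * (yt - m)      # update with the new observation
        p = (1 - k) * p
        filt.append(m)
    return np.array(filt)

y = np.cumsum(np.random.default_rng(5).normal(size=100)) + 50
print(kalman_local_level(y, q=1.0, r=4.0)[-5:])
```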

8.
The Hilbert–Huang transform uses the empirical mode decomposition (EMD) method to analyze nonlinear and nonstationary data. This method breaks a time series of data into several orthogonal sequences based on differences in frequency. These data components include the intrinsic mode functions (IMFs) and the final residue. Although IMFs have been used in the past as predictors for other variables, very little effort has been devoted to identifying the most effective predictors among IMFs. As lasso is a widely used method for feature selection within complex datasets, the main objective of this article is to present a lasso regression based on the EMD method for choosing decomposed components that exhibit the strongest effects. Both numerical experiments and empirical results show that the proposed modeling process can use time-frequency structure within data to reveal interactions between two variables. This allows for more accurate predictions concerning future events.
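A sketch of the proposed pipeline: decompose with EMD, then let the lasso pick the informative components. This assumes the third-party PyEMD package and scikit-learn, and the simulated signals are illustrative.

```python
import numpy as np
from PyEMD import EMD                # assumes the PyEMD package is installed
from sklearn.linear_model import LassoCV

rng = np.random.default_rng(5)
t = np.linspace(0, 10, 500)
x = np.sin(2 * np.pi * t) + 0.5 * np.sin(9 * np.pi * t) + 0.2 * rng.normal(size=500)
y = 2.0 * np.sin(2 * np.pi * t) + 0.3 * rng.normal(size=500)  # driven by one component

imfs = EMD().emd(x)                  # rows: decomposed components, high frequency first
lasso = LassoCV(cv=5).fit(imfs.T, y)
print(lasso.coef_)                   # nonzero coefficients flag the effective components
```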

9.
Singular spectrum analysis (SSA) is a relatively new method for time series analysis that serves as a non-parametric alternative to the classical methods. The methodology has proven effective in analysing non-stationary and complex time series, since it does not require the classical assumptions of stationarity or of normality of the residuals. Although SSA has been shown to provide advantages over traditional methods, the challenges that arise when long time series are considered make standard SSA computationally very demanding and often unsuitable. In this paper we propose randomized SSA, an alternative to SSA for long time series that does not sacrifice the quality of the analysis. SSA and randomized SSA are compared in terms of quality of model fit, forecasting accuracy, and computational time, using Monte Carlo simulations and real data on the daily prices of five major world commodities.
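Basic SSA consists of embedding the series into a trajectory matrix, taking an SVD, and reconstructing selected components by diagonal averaging; the randomized variant essentially swaps the exact SVD for a randomized one (e.g., scikit-learn's randomized_svd). A minimal numpy sketch of the standard algorithm, with an illustrative window and rank:

```python
import numpy as np

def ssa_reconstruct(y, window, rank):
    """Basic SSA: embed, SVD the trajectory matrix, keep `rank` components,
    and reconstruct by diagonal (Hankel) averaging."""
    n = len(y)
    k = n - window + 1
    traj = np.column_stack([y[i:i + window] for i in range(k)])
    u, s, vt = np.linalg.svd(traj, full_matrices=False)
    approx = (u[:, :rank] * s[:rank]) @ vt[:rank]
    rec, counts = np.zeros(n), np.zeros(n)
    for j in range(k):                    # average over the anti-diagonals
        rec[j:j + window] += approx[:, j]
        counts[j:j + window] += 1
    return rec / counts

rng = np.random.default_rng(6)
y = np.sin(np.arange(200) / 10) + 0.3 * rng.normal(size=200)
trend = ssa_reconstruct(y, window=50, rank=2)     # smooth oscillatory component
```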

10.
We analyze publicly available data to estimate the causal effects of military interventions on the homicide rates in certain problematic regions in Mexico. We use the Rubin causal model to compare the post-intervention homicide rate in each intervened region to the hypothetical homicide rate for that same year had the military intervention not taken place. Because the effect of a military intervention is not confined to the municipality subject to the intervention, a nonstandard definition of units is necessary to estimate the causal effect of the intervention under the standard no-interference condition of the stable unit treatment value assumption (SUTVA). Donor pools are created for each missing potential outcome under no intervention, thereby allowing for the estimation of unit-level causal effects. A multiple imputation approach accounts for uncertainty about the missing potential outcomes.
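A deliberately crude sketch of the donor-pool idea: impute the intervened region's untreated potential outcome from untreated regions with similar pre-intervention histories. This omits the paper's redefined units and multiple imputation, and all data are simulated.

```python
import numpy as np

rng = np.random.default_rng(7)
# Hypothetical homicide rates: rows = regions, columns = years (last = post-treatment)
rates = rng.gamma(5, 2, size=(30, 6))
intervened = 0                                  # region that received the intervention
pre, post = rates[:, :-1], rates[:, -1]

# Donor pool: untreated regions with the most similar pre-intervention paths
dist = np.linalg.norm(pre[1:] - pre[intervened], axis=1)
donors = 1 + np.argsort(dist)[:5]
y0_hat = post[donors].mean()                    # imputed untreated potential outcome
print("estimated unit-level causal effect:", post[intervened] - y0_hat)
```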

11.
Time series data observed at unequal time intervals (irregular data) occur quite often, and this usually poses problems for analysis. A recursive form of the exponentially smoothed estimate is proposed here for a nonlinear model with irregularly observed data, and its asymptotic properties are discussed. An alternative smoother to that of Wright (1985) is also derived. Numerical comparisons are made between the resulting estimates and other smoothed estimates.
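The essential idea can be sketched with an exponentially weighted average whose weight decays with the elapsed time between observations, so unequal gaps are handled naturally. This simple time-aware EWMA is neither Wright's recursion nor the paper's nonlinear smoother, and the half-life is illustrative.

```python
import numpy as np

def irregular_ewma(times, values, halflife):
    """Exponential smoothing whose weight decays in *elapsed time*, so
    irregularly spaced observations are discounted by their actual age."""
    s = values[0]
    out = [s]
    for i in range(1, len(values)):
        alpha = 1 - 0.5 ** ((times[i] - times[i - 1]) / halflife)
        s = alpha * values[i] + (1 - alpha) * s
        out.append(s)
    return np.array(out)

t = np.array([0.0, 1.0, 1.5, 4.0, 4.2, 7.0])   # irregular observation times
y = np.array([1.0, 2.0, 1.5, 3.0, 2.8, 4.0])
print(irregular_ewma(t, y, halflife=2.0))
```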

12.
The inclusion of linear deterministic effects in a time series model is important for obtaining an appropriate specification. Such effects may be due to calendar variation, outlying observations, or interventions. This article proposes a two-step method for simultaneously estimating an adjusted time series and the parameters of its linear deterministic effects. Although the main goal of applying this method in practice might only be to estimate the adjusted series, an important by-product is a substantial increase in the efficiency of the estimates of the deterministic effects. Some theoretical examples are presented to demonstrate the intuitive appeal of this proposal. The methodology is then applied to two real datasets; one of these applications investigates the impact of the 1995 economic crisis on Mexico's industrial production index.
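A Cochrane–Orcutt-style two-step sketch of the idea, with made-up data: first estimate the deterministic effects by OLS, then estimate the AR error structure from the residuals and re-estimate via quasi-differenced (GLS) regression. This is in the spirit of, but not identical to, the paper's estimator.

```python
import numpy as np

rng = np.random.default_rng(8)
n = 150
pulse = np.zeros(n); pulse[100] = 1.0           # hypothetical additive-outlier dummy
e = np.zeros(n)
for t in range(1, n):
    e[t] = 0.5 * e[t - 1] + rng.normal()
y = 20 + 8 * pulse + e

# Step 1: OLS estimate of the deterministic effects
X = np.column_stack([np.ones(n), pulse])
resid = y - X @ np.linalg.lstsq(X, y, rcond=None)[0]

# Step 2: estimate the AR(1) error, then re-estimate after quasi-differencing
phi = np.sum(resid[1:] * resid[:-1]) / np.sum(resid[:-1] ** 2)
ys, Xs = y[1:] - phi * y[:-1], X[1:] - phi * X[:-1]
beta_gls = np.linalg.lstsq(Xs, ys, rcond=None)[0]
adjusted = y - beta_gls[1] * pulse              # the adjusted series
print(beta_gls)                                 # effect estimate should be near 8
```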

13.
A supersaturated design (SSD) is a design whose run size is not large enough to estimate all the main effects. The goal in conducting such a design is to identify the (presumably few) relatively dominant active effects at as low a cost as possible. However, data analysis of such designs remains underdeveloped: traditional approaches are not appropriate in this situation, and the methods proposed in the literature in recent years are effective only for analyzing two-level SSDs. In this paper, we introduce a variable selection procedure, called the PLSVS method, to screen active effects in mixed-level SSDs. It is based on the variable importance in projection, an important concept in partial least-squares regression. Simulation studies show that this procedure is effective.
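Variable importance in projection (VIP) scores can be computed from any fitted partial least-squares model; the sketch below uses scikit-learn's PLSRegression with a simulated design. This shows the VIP building block only, not the PLSVS screening rule itself.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

def vip_scores(pls):
    """Variable importance in projection from a fitted sklearn PLSRegression."""
    t, w, q = pls.x_scores_, pls.x_weights_, pls.y_loadings_
    p = w.shape[0]
    ssy = np.sum(t ** 2, axis=0) * q.ravel() ** 2     # y-variance per component
    wnorm = (w / np.linalg.norm(w, axis=0)) ** 2
    return np.sqrt(p * (wnorm @ ssy) / ssy.sum())

rng = np.random.default_rng(9)
x = rng.normal(size=(30, 10))                # e.g., columns of a supersaturated design
y = 3 * x[:, 0] - 2 * x[:, 3] + rng.normal(size=30)
pls = PLSRegression(n_components=3).fit(x, y)
print(vip_scores(pls))                       # large VIPs (> 1) flag active effects
```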

14.
Time series arising in practice often have an inherently irregular sampling structure or missing values, which can arise, for example, from a faulty measuring device or from a complex time-dependent sampling mechanism. Spectral decomposition of time series is a traditionally useful tool for analyzing data variability. However, existing methods for spectral estimation often assume a regularly sampled time series, or require modifications to cope with irregular or 'gappy' data. Additionally, many techniques assume that the time series is stationary, which in the majority of cases is demonstrably not appropriate. This article addresses the topic of spectral estimation for a non-stationary time series sampled with missing data. The time series is modelled as a locally stationary wavelet process in the sense introduced by Nason et al. (J. R. Stat. Soc. B 62(2):271–292, 2000), and its realization is assumed to feature missing observations. We propose an estimator (the periodogram) for the process wavelet spectrum which copes with the missing data whilst relaxing the strong assumption of stationarity. At the centre of our construction are second-generation wavelets built by means of the lifting scheme (Sweldens, Wavelet Applications in Signal and Image Processing III, Proc. SPIE, vol. 2569, pp. 68–79, 1995), designed to cope with irregular data. We investigate the theoretical properties of the proposed periodogram and show that it can be smoothed to produce a bias-corrected spectral estimate by adopting a penalized least squares criterion. We demonstrate the method with real data and simulated examples.
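The lifting-based periodogram itself is involved. For contrast, the classical tool for spectral estimation under irregular or gappy sampling, which assumes the stationarity this paper explicitly relaxes, is the Lomb–Scargle periodogram; a sketch with simulated gappy data:

```python
import numpy as np
from scipy.signal import lombscargle

rng = np.random.default_rng(10)
t_full = np.arange(512, dtype=float)
y_full = np.sin(2 * np.pi * 0.05 * t_full) + 0.5 * rng.normal(size=512)
keep = rng.random(512) > 0.2                   # drop ~20% of observations at random
t, y = t_full[keep], y_full[keep] - y_full[keep].mean()

freqs = np.linspace(0.01, np.pi, 200)          # angular frequencies to scan
power = lombscargle(t, y, freqs)
print(freqs[np.argmax(power)] / (2 * np.pi))   # recovers ~0.05 cycles per step
```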

15.
Noninferiority trials intend to show that a new treatment is 'not worse' than a standard-of-care active control, and can be used as an alternative when the new treatment is likely to cause fewer side effects than the active control. For time-to-event endpoints, existing methods of sample size calculation assume either proportional hazards between the two study arms or exponentially distributed lifetimes. In scenarios where these assumptions do not hold, there are few reliable methods for calculating the sample size of a time-to-event noninferiority trial. Additionally, the noninferiority margin is obtained either from a meta-analysis of prior studies, from strongly justifiable 'expert opinion', or from a 'well conducted' definitive large-sample study. Thus, when historical data do not support the traditional assumptions, it would not be appropriate to use these methods to design a noninferiority trial. For such scenarios, an alternative method of sample size calculation based on the assumption of proportional time is proposed. This method uses the generalized gamma ratio distribution to perform the sample size calculations. A practical example is discussed, followed by insights on the choice of the noninferiority margin and on the indirect testing of superiority of the treatment compared to placebo.
Keywords: generalized gamma; noninferiority; non-proportional hazards; proportional time; relative time; sample size
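Absent the paper's closed-form generalized gamma ratio calculation, the required sample size can always be approximated by brute-force simulation under the proportional-time assumption. The sketch below assumes no censoring and tests the log time ratio against the margin with a one-sided z-test; the shape parameters, margin, and test construction are illustrative assumptions, not the paper's method.

```python
import numpy as np
from scipy import stats

def power_at_n(n, ratio=1.0, margin=0.8, reps=2000, seed=0):
    """Simulated power of a one-sided test of H0: time ratio <= margin, assuming
    proportional time with generalized gamma lifetimes and no censoring."""
    rng = np.random.default_rng(seed)
    rejections = 0
    for _ in range(reps):
        control = stats.gengamma.rvs(2.0, 1.5, size=n, random_state=rng)
        new = ratio * stats.gengamma.rvs(2.0, 1.5, size=n, random_state=rng)
        d = np.mean(np.log(new)) - np.mean(np.log(control))
        se = np.sqrt(np.var(np.log(new), ddof=1) / n
                     + np.var(np.log(control), ddof=1) / n)
        if (d - np.log(margin)) / se > 1.645:    # one-sided 5% level
            rejections += 1
    return rejections / reps

for n in (50, 100, 200, 400):                    # pick the smallest n with power >= 0.8
    print(n, power_at_n(n))
```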

16.
Autoregressive Forecasting of Some Functional Climatic Variations
Many variations, such as the annual cycle in sea surface temperatures, can be considered smooth functions and are appropriately described using methods from functional data analysis. This study defines a class of functional autoregressive (FAR) models which can be used as robust predictors for making forecasts of entire smooth functions in the future. The methods are illustrated and compared with pointwise predictors such as SARIMA by applying them to forecasting the entire annual cycle of the climatological El Niño–Southern Oscillation (ENSO) time series one year ahead. Forecasts for the period 1987–1996 suggest that the FAR functional predictors show some promising skill, compared to traditional scalar SARIMA forecasts, which perform poorly.
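One common way to realize a functional autoregression is to reduce each annual curve to a few principal component scores, fit an AR(1) to each score series, and map the forecast scores back to a curve. The numpy sketch below uses simulated cycles and is only in the spirit of FAR models, not the paper's exact estimator.

```python
import numpy as np

rng = np.random.default_rng(11)
years, months = 40, 12
scores_true = np.zeros(years)
for i in range(1, years):                       # AR(1) year-to-year dynamics
    scores_true[i] = 0.7 * scores_true[i - 1] + rng.normal()
curves = np.outer(scores_true, np.sin(2 * np.pi * np.arange(months) / months))
curves += 0.1 * rng.normal(size=(years, months))

# Reduce curves to principal component scores, fit AR(1) per score, forecast
mean_curve = curves.mean(axis=0)
u, s, vt = np.linalg.svd(curves - mean_curve, full_matrices=False)
k = 2
scores = u[:, :k] * s[:k]                       # year-by-year score series
phis = [np.polyfit(scores[:-1, j], scores[1:, j], 1)[0] for j in range(k)]
next_scores = np.array([phis[j] * scores[-1, j] for j in range(k)])
forecast_curve = mean_curve + next_scores @ vt[:k]   # whole-cycle forecast
```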

17.
Abrupt changes often occur in environmental and financial time series, most often due to human intervention. Change point analysis is a statistical tool for analyzing sudden changes in observations along a time series. In this paper, we propose a Bayesian model for extreme values in environmental and economic datasets that exhibit typical change point behavior. The proposed model addresses the situation in which more than one change point can occur in a time series. Because maxima are analyzed, the distribution within each regime is a generalized extreme value distribution. In this model, the change points are unknown and are treated as parameters to be estimated. Simulations of extremes with two change points showed that the proposed algorithm can recover the true values of the parameters and detect the true change points under different configurations. The number of change points must also be determined, and the Bayesian estimation correctly identifies it in each application. Analyses of environmental and financial data showed the importance of accounting for change points and revealed that these regime changes brought about an increase in return levels, increasing the number of floods in cities along the rivers; the stock market series required a model with three different regimes.
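A frequentist cousin of the model, useful for intuition, profiles the likelihood over a single change point with a separate GEV fit in each regime. This is not the paper's Bayesian multiple-change-point sampler, and the simulated regimes are illustrative.

```python
import numpy as np
from scipy.stats import genextreme

def profile_changepoint(maxima, min_seg=10):
    """Profile-likelihood search for one change point, fitting a separate
    GEV distribution to each regime (a frequentist sketch only)."""
    best = (None, -np.inf)
    for tau in range(min_seg, len(maxima) - min_seg):
        ll = 0.0
        for seg in (maxima[:tau], maxima[tau:]):
            c, loc, scale = genextreme.fit(seg)
            ll += genextreme.logpdf(seg, c, loc, scale).sum()
        if ll > best[1]:
            best = (tau, ll)
    return best[0]

rng = np.random.default_rng(12)
x = np.concatenate([genextreme.rvs(-0.1, loc=10, scale=2, size=60, random_state=rng),
                    genextreme.rvs(-0.1, loc=16, scale=3, size=60, random_state=rng)])
print(profile_changepoint(x))        # should land near the true change point, 60
```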

18.
Time series smoothers estimate the level of a time series at time t as its conditional expectation given present, past and future observations, with the smoothed value depending on the estimated time series model. Alternatively, local polynomial regressions on time can be used to estimate the level, with the implied smoothed value depending on the weight function and the bandwidth in the local linear least squares fit. In this article we compare the two smoothing approaches and describe their similarities. Through simulations, we assess the increase in the mean square error that results when approximating the estimated optimal time series smoother with the local regression estimate of the level.
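The local polynomial competitor is easy to write down: at each time point, fit a weighted straight line in time and take its intercept as the level estimate. The Gaussian weight function and bandwidth below are illustrative.

```python
import numpy as np

def local_linear(t, y, t0, h):
    """Local linear least squares fit at t0 with a Gaussian weight function."""
    w = np.exp(-0.5 * ((t - t0) / h) ** 2)
    X = np.column_stack([np.ones_like(t), t - t0])
    beta = np.linalg.solve(X.T @ (w[:, None] * X), X.T @ (w * y))
    return beta[0]                              # intercept = estimated level at t0

rng = np.random.default_rng(13)
t = np.arange(200, dtype=float)
y = np.sin(t / 20) + 0.3 * rng.normal(size=200)
level = np.array([local_linear(t, y, t0, h=5.0) for t0 in t])
```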

19.
Time series of counts occur in many different contexts, the counts usually being of certain events or objects in specified time intervals. In this paper we introduce a parameter-driven state-space model to analyse integer-valued time series data. A key property of such a model is that the observed counts are independent conditional on the latent process, although they are marginally correlated. Our simulations show that the Monte Carlo Expectation Maximization (MCEM) algorithm and the particle method are useful for parameter estimation in the proposed model. In an application to Malaysian dengue data, our model fits better than several other models, including that of Yang et al. (2015).
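For a parameter-driven model with a latent AR(1) log-intensity, the particle method amounts to a bootstrap particle filter: propagate particles through the state equation, weight them by the Poisson likelihood, and resample. A self-contained sketch with fixed, illustrative parameters:

```python
import numpy as np

def particle_filter_poisson(y, phi=0.8, sigma=0.5, n_part=1000, seed=0):
    """Bootstrap particle filter for a parameter-driven model:
    latent AR(1) state x_t, observed counts y_t ~ Poisson(exp(x_t))."""
    rng = np.random.default_rng(seed)
    x = rng.normal(0, sigma / np.sqrt(1 - phi ** 2), n_part)  # stationary start
    means = []
    for yt in y:
        x = phi * x + sigma * rng.normal(size=n_part)         # propagate
        logw = yt * x - np.exp(x)          # Poisson log-likelihood (constants dropped)
        w = np.exp(logw - logw.max()); w /= w.sum()
        means.append(np.sum(w * np.exp(x)))                   # filtered mean count
        x = rng.choice(x, size=n_part, p=w)                   # resample
    return np.array(means)

counts = np.array([2, 3, 1, 4, 6, 8, 5, 7, 9, 6])
print(particle_filter_poisson(counts))
```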

20.
Time series analysis is a vast research area in statistics and econometrics. In a previous review, the author identified 15 key areas of research interest in time series analysis. The aim of the present review, however, is not to cover a wide range of somewhat unrelated topics, but to begin with a core issue, the 'curse of dimensionality' in nonparametric time series analysis, and to explore, in a metaphorical domino-effect fashion, other closely related areas of semiparametric methods in nonlinear time series analysis.
