1.
Eduardo Rossi 《Econometric Reviews》2014,33(7):785-814
A stylized fact is that realized variance has long memory. We show that, when the instantaneous volatility is a long memory process of order d, the integrated variance is characterized by the same long-range dependence. We prove that the spectral density of realized variance is the sum of the spectral density of the integrated variance and that of a measurement error, due to the sparse sampling and market microstructure noise. Hence, the realized volatility has the same degree of long memory as the integrated variance. The additional term in the spectral density induces a finite-sample bias in the semiparametric estimates of the long memory. A Monte Carlo simulation provides evidence that the corrected local Whittle estimator of Hurvich et al. (2005) is much less biased than the standard local Whittle estimator, and the empirical application shows that it is robust to the choice of the sampling frequency used to compute the realized variance. Finally, the empirical results suggest that the volatility series are more likely to be generated by a nonstationary fractional process.
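As a rough illustration of the semiparametric estimation discussed in this abstract, the sketch below computes the standard (uncorrected) local Whittle estimate of the memory parameter d from the periodogram of a series. The function name and the bandwidth choice `m = n**0.65` are illustrative assumptions; the corrected estimator of Hurvich et al. (2005) further modifies the objective to account for the measurement-error term and is not reproduced here.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def local_whittle_d(x, m=None):
    """Standard local Whittle estimate of the memory parameter d.

    Illustrative sketch only: minimizes the local Whittle objective
    over the first m Fourier frequencies of the periodogram.
    """
    n = len(x)
    if m is None:
        m = int(n ** 0.65)  # common, but arbitrary, bandwidth choice
    freqs = 2 * np.pi * np.arange(1, m + 1) / n  # Fourier frequencies
    # Periodogram ordinates at the first m Fourier frequencies
    dft = np.fft.fft(x - np.mean(x))[1:m + 1]
    I = np.abs(dft) ** 2 / (2 * np.pi * n)

    def objective(d):
        return np.log(np.mean(freqs ** (2 * d) * I)) - 2 * d * np.mean(np.log(freqs))

    return minimize_scalar(objective, bounds=(-0.49, 0.99), method="bounded").x
```

For white noise the estimate should be near zero, while a fractionally integrated series would yield an estimate near its true d, subject to the finite-sample bias the article analyzes.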
2.
In this article, we propose a weighted simulated integrated conditional moment (WSICM) test of the validity of parametric specifications of conditional distribution models for stationary time series data, by combining the weighted integrated conditional moment (ICM) test of Bierens (1984) for time series regression models with the simulated ICM test of Bierens and Wang (2012) of conditional distribution models for cross-section data. To the best of our knowledge, no other consistent test for parametric conditional time series distributions has been proposed yet in the literature, despite consistency claims made by some authors.
3.
Siti Haslinda Mohd Din Marek Molas Jolanda Luime Emmanuel Lesaffre 《Journal of applied statistics》2014,41(8):1627-1644
A variety of statistical approaches have been suggested in the literature for the analysis of bounded outcome scores (BOS). In this paper, we suggest a statistical approach for the case where BOSs are repeatedly measured over time and used as predictors in a regression model. Instead of directly using the BOS as a predictor, we propose to extend the approaches suggested in [16,21,28] to a joint modeling setting. Our approach is illustrated on longitudinal profiles of multiple patient-reported outcomes to predict the current clinical status of rheumatoid arthritis patients by the Disease Activity Score of 28 joints (DAS28). Both a maximum likelihood and a Bayesian approach are developed.
4.
Noting that many economic variables display occasional shifts in their second order moments, we investigate the performance of homogenous panel unit root tests in the presence of permanent volatility shifts. It is shown that in this case the test statistic proposed by Herwartz and Siedenburg (2008) is asymptotically standard Gaussian. By means of a simulation study we illustrate the performance of first and second generation panel unit root tests and undertake a more detailed comparison of the test in Herwartz and Siedenburg (2008) and its heteroskedasticity consistent Cauchy counterpart introduced in Demetrescu and Hanck (2012a). As an empirical illustration, we reassess evidence on the Fisher hypothesis with data from nine countries over the period 1961Q2–2011Q2. Empirical evidence supports panel stationarity of the real interest rate for the entire subperiod. With regard to the most recent two decades, the test results cast doubts on market integration, since the real interest rate is diagnosed nonstationary.
5.
Trias Wahyuni Rakhmawati Geert Molenberghs Geert Verbeke Christel Faes 《Journal of applied statistics》2017,44(4):620-641
Since the seminal paper by Cook and Weisberg [9], local influence, next to case deletion, has gained popularity as a tool to detect influential subjects and measurements for a variety of statistical models. For the linear mixed model the approach leads to easily interpretable and computationally convenient expressions, not only highlighting influential subjects, but also which aspect of their profile leads to undue influence on the model's fit [17]. Ouwens et al. [24] applied the method to the Poisson-normal generalized linear mixed model (GLMM). Given the model's nonlinear structure, these authors did not derive interpretable components but rather focused on a graphical depiction of influence. In this paper, we consider GLMMs for binary, count, and time-to-event data, with the additional feature of accommodating overdispersion whenever necessary. For each situation, three approaches are considered, based on: (1) purely numerical derivations; (2) using a closed-form expression of the marginal likelihood function; and (3) using an integral representation of this likelihood. Unlike when case deletion is used, this leads to interpretable components, allowing not only to identify influential subjects, but also to study the cause thereof. The methodology is illustrated in case studies that range over the three data types mentioned.
6.
This article considers constructing confidence intervals for the date of a structural break in linear regression models. Using extensive simulations, we compare the performance of various procedures in terms of exact coverage rates and lengths of the confidence intervals. These include the procedures of Bai (1997) based on the asymptotic distribution under a shrinking shift framework, Elliott and Müller (2007) based on inverting a test locally invariant to the magnitude of break, Eo and Morley (2015) based on inverting a likelihood ratio test, and various bootstrap procedures. On the basis of achieving an exact coverage rate that is closest to the nominal level, Elliott and Müller's (2007) approach is by far the best one. However, this comes with a very high cost in terms of the length of the confidence intervals. When the errors are serially correlated and dealing with a change in intercept or a change in the coefficient of a stationary regressor with a high signal-to-noise ratio, the length of the confidence interval increases and approaches the whole sample as the magnitude of the change increases. The same problem occurs in models with a lagged dependent variable, a common case in practice. This drawback is not present for the other methods, which have similar properties. Theoretical results are provided to explain the drawbacks of Elliott and Müller's (2007) method.
7.
This paper presents a new variable weight method, called the singular value decomposition (SVD) approach, for Kohonen competitive learning (KCL) algorithms based on the concept of Varshavsky et al. [18]. Integrating the weighted fuzzy c-means (FCM) algorithm with KCL, in this paper, we propose a weighted fuzzy KCL (WFKCL) algorithm. The goal of the proposed WFKCL algorithm is to reduce the clustering error rate when data contain some noise variables. Compared with the k-means, FCM and KCL with existing variable-weight methods, the proposed WFKCL algorithm with the proposed SVD's weight method provides a better clustering performance based on the error rate criterion. Furthermore, the complexity of the proposed SVD's approach is less than Pal et al. [17], Wang et al. [19] and Hung et al. [9].
8.
In this article, a generalized Lévy model is proposed and its parameters are estimated in high-frequency data settings. An infinitesimal generator of Lévy processes is used to study the asymptotic properties of the drift and volatility estimators. They are asymptotically consistent and independent of the other parameters, which makes them preferable to those in Chen et al. (2010). The estimators proposed here also have fast convergence rates and are simple to implement.
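For intuition about high-frequency drift and volatility estimation, the following sketch shows the naive moment-based estimators for a simple diffusion observed on an equally spaced grid. This is an assumption-laden simplification: the article's estimators are built from the infinitesimal generator of a generalized Lévy process, and the function name below is illustrative.

```python
import numpy as np

def drift_vol_estimates(x, dt):
    """Naive high-frequency estimators for a diffusion-type model.

    x  : observed path on an equally spaced grid
    dt : spacing of the grid
    Returns (drift estimate, variance-rate estimate). Illustrative only;
    not the generator-based estimators of the article.
    """
    dx = np.diff(x)
    mu_hat = dx.mean() / dt                        # mean increment per unit time
    sigma2_hat = (dx ** 2).sum() / (len(dx) * dt)  # realized variance rate
    return mu_hat, sigma2_hat
```

On a simulated arithmetic Brownian motion the variance-rate estimate converges quickly, while the drift estimate only improves with the total time span, which mirrors the usual distinction between in-fill and long-span asymptotics.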
9.
Yan Fan 《Journal of applied statistics》2016,43(14):2595-2607
Competing models arise naturally in many research fields, such as survival analysis and economics, when the same phenomenon of interest is explained by different researchers using different theories or according to different experiences. The model selection problem is therefore of considerable importance for subsequent inference: inference under a misspecified or inappropriate model is risky. Existing model selection tests such as Vuong's test [26] and Shi's non-degenerate test [21] suffer from variance estimation issues and from departures of the likelihood ratios from normality. To circumvent these difficulties, we propose in this paper an empirical likelihood ratio (ELR) test for model selection. Following Shi [21], a bias correction method is proposed for the ELR test to enhance its performance. A simulation study and a real-data analysis are provided to illustrate the performance of the proposed ELR test.
10.
Guangyu Mao 《Econometric Reviews》2018,37(5):491-506
This article is concerned with the sphericity test for the two-way error components panel data model. It is found that the John statistic and the bias-corrected LM statistic recently developed by Baltagi et al. (2011, 2012), which are based on the within residuals, are not helpful under the present circumstances, even though they are helpful in the one-way fixed effects model. However, we prove that when the within residuals are properly transformed, the resulting residuals can serve to construct useful statistics that are similar to those of Baltagi et al. (2011, 2012). Simulation results show that the newly proposed statistics perform well under the null hypothesis and several typical alternatives.
11.
This article suggests random and fixed effects spatial two-stage least squares estimators for the generalized mixed regressive spatial autoregressive panel data model. This extends the generalized spatial panel model of Baltagi et al. (2013) by the inclusion of a spatial lag term. The estimation method utilizes the Generalized Moments method suggested by Kapoor et al. (2007) for a spatial autoregressive panel data model. We derive the asymptotic distributions of these estimators and suggest a Hausman test à la Mutl and Pfaffermayr (2011) based on the difference between these estimators. Monte Carlo experiments are performed to investigate the performance of these estimators as well as the corresponding Hausman test.
12.
This article presents a new class of realized stochastic volatility model based on realized volatilities and returns jointly. We generalize the traditionally used logarithm transformation of realized volatility to the Box–Cox transformation, a more flexible parametric family of transformations. A two-step maximum likelihood estimation procedure is introduced to estimate this model on the basis of Koopman and Scharth (2013). Simulation results show that the two-step estimator performs well, and the misspecified log transformation may lead to inaccurate parameter estimation and certain excessive skewness and kurtosis. Finally, an empirical investigation on realized volatility measures and daily returns is carried out for several stock indices.
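The Box–Cox family mentioned in this abstract nests the log transform as its limiting case when the power parameter tends to zero; a minimal sketch (function name illustrative):

```python
import numpy as np

def box_cox(x, lam):
    """Box-Cox transform of positive data.

    lam = 0 gives the log transform (as a limit); lam = 1 is the identity
    up to a shift. Treating |lam| below a small threshold as the log case
    avoids division by a near-zero power parameter.
    """
    x = np.asarray(x, dtype=float)
    if abs(lam) < 1e-8:
        return np.log(x)
    return (x ** lam - 1.0) / lam
```

Estimating lam jointly with the other parameters, as in the article's two-step procedure, lets the data choose between the log specification and other members of the family.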
13.
Ye Li 《Econometric Reviews》2017,36(1-3):289-353
We consider issues related to inference about locally ordered breaks in a system of equations, as originally proposed by Qu and Perron (2007). These apply when break dates in different equations within the system are not separated by a positive fraction of the sample size. This allows constructing joint confidence intervals of all such locally ordered break dates. We extend the results of Qu and Perron (2007) in several directions. First, we allow the covariates to be any mix of trends and stationary or integrated regressors. Second, we allow for breaks in the variance-covariance matrix of the errors. Third, we allow for multiple locally ordered breaks, each occurring in a different equation within a subset of equations in the system. Via some simulation experiments, we show first that the limit distributions derived provide good approximations to the finite sample distributions. Second, we show that forming confidence intervals in such a joint fashion allows more precision (tighter intervals) compared to the standard approach of forming confidence intervals using the method of Bai and Perron (1998) applied to a single equation. Simulations also indicate that using the locally ordered break confidence intervals yields better coverage rates than using the framework for globally distinct breaks when the break dates are separated by roughly 10% of the total sample size.
14.
Coppi et al. [7] applied Yang and Wu's [20] idea to propose a possibilistic k-means (PkM) clustering algorithm for LR-type fuzzy numbers. The memberships in the objective function of PkM no longer need to satisfy the fuzzy k-means constraint that the memberships of a data point across classes sum to one. However, the clustering performance of PkM depends on the initializations and the weighting exponent. In this paper, we propose a robust clustering method based on a self-updating procedure. The proposed algorithm not only solves the initialization problems but also obtains a good clustering result. Several numerical examples demonstrate the effectiveness and accuracy of the proposed clustering method, especially its robustness to initial values and noise. Finally, three real fuzzy data sets are used to illustrate the superiority of the proposed algorithm.
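The possibilistic membership mentioned above can be sketched for ordinary numeric vectors as follows. This is a simplification under stated assumptions: Coppi et al. formulate the algorithm for LR-type fuzzy numbers, and the scale parameter `eta` and the function name here are illustrative.

```python
import numpy as np

def pkm_memberships(X, centers, eta, m=2.0):
    """Possibilistic memberships (typicalities) of points to clusters.

    Unlike fuzzy k-means, the memberships of a point across clusters
    need not sum to one; a point far from every center gets uniformly
    low typicality. X and centers are (n, d) and (k, d) arrays, eta is
    a cluster scale parameter, m the weighting exponent.
    """
    # Squared Euclidean distances, shape (n, k)
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    return 1.0 / (1.0 + (d2 / eta) ** (1.0 / (m - 1.0)))
```

A point located exactly at a center receives membership 1 in that cluster regardless of its memberships elsewhere, which is the feature that relaxes the sum-to-one constraint of fuzzy k-means.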
15.
Breitung and Candelon (2006) in Journal of Econometrics proposed a simple statistical testing procedure for the noncausality hypothesis at a given frequency. In their paper, however, they reported some theoretical results indicating that their test severely suffers from quite low power when the noncausality hypothesis is tested at a frequency close to 0 or pi. This paper examines whether or not these results indicate their procedure is useless at such frequencies.
16.
Cibele Queiroz da-Silva Eduardo G. Martins Vinícius Bonato Sérgio Furtado dos Reis 《Communications in Statistics - Simulation and Computation》2013,42(4):816-828
We develop a series of Bayesian statistical models for estimating survival of a neotropic didelphid marsupial, the Brazilian gracile mouse opossum (Gracilinanus microtarsus). These models are based on the Cormack–Jolly–Seber model (Cormack, 1964; Jolly, 1965; Seber, 1965), with both survival and recapture rates expressed as a function of covariates using a logit link. The proposed models allow taking into account heterogeneity in capture probability caused by the existence of different groups of individuals in the population. The models were applied to two cohorts (2000 and 2001), with the first one including 14 sampling occasions and the second one 15. The best models for each of the cohorts indicate that G. microtarsus is best described as partially semelparous, a condition in which mortality after the first mating is high but graded over time, with a fraction of males surviving for a second breeding season (Boonstra, 2005).
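The Cormack–Jolly–Seber likelihood underlying these models can be sketched for a single capture history, conditional on first capture. In the article, the survival probabilities phi and recapture probabilities p would themselves be inverse-logit functions of covariates; here they are passed in directly, and the function name is illustrative.

```python
def cjs_history_prob(h, phi, p):
    """Likelihood contribution of one capture history under the
    Cormack-Jolly-Seber model, conditional on first capture.

    h   : 0/1 capture history of length T (a list)
    phi : survival probabilities phi[t] from occasion t to t+1 (length T-1)
    p   : recapture probabilities p[t] at occasion t (length T)
    """
    T = len(h)
    f = h.index(1)                 # first capture occasion
    l = T - 1 - h[::-1].index(1)   # last capture occasion
    # chi[t]: probability of never being seen again after occasion t
    chi = [0.0] * T
    chi[T - 1] = 1.0
    for t in range(T - 2, -1, -1):
        chi[t] = 1 - phi[t] + phi[t] * (1 - p[t + 1]) * chi[t + 1]
    prob = 1.0
    for t in range(f, l):
        # Survive from t to t+1, then seen or missed at t+1
        prob *= phi[t] * (p[t + 1] if h[t + 1] else 1 - p[t + 1])
    return prob * chi[l]
```

Because the contributions over all possible continuations of a history sum to one, this decomposition is what allows mortality and detection to be separated, which is the basis for the partial-semelparity inference described above.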
17.
This article proposes an asymptotic expansion for the Studentized linear discriminant function using two-step monotone missing samples under multivariate normality. Asymptotic expansions related to the discriminant function have previously been obtained for complete data under multivariate normality. The result derived by Anderson (1973) plays an important role in deciding the cut-off point that controls the probabilities of misclassification. This article provides an extension of Anderson's (1973) result to the case of two-step monotone missing samples under multivariate normality. Finally, numerical evaluations by Monte Carlo simulations are also presented.
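For reference, the complete-data linear discriminant function whose Studentized version the article studies can be sketched as follows; the function name and the equal-priors cut-off at zero are illustrative assumptions.

```python
import numpy as np

def fisher_ldf(X1, X2):
    """Sample linear discriminant function for two groups assumed
    multivariate normal with a common covariance matrix (complete-data
    case only; the article's contribution concerns the two-step
    monotone missing-data case).
    """
    X1, X2 = np.asarray(X1, float), np.asarray(X2, float)
    m1, m2 = X1.mean(axis=0), X2.mean(axis=0)
    n1, n2 = len(X1), len(X2)
    # Pooled (common) covariance estimate
    S = ((n1 - 1) * np.cov(X1, rowvar=False)
         + (n2 - 1) * np.cov(X2, rowvar=False)) / (n1 + n2 - 2)
    a = np.linalg.solve(S, m1 - m2)

    def score(x):
        # Positive score -> classify into group 1; the cut-off 0
        # corresponds to equal priors and equal misclassification costs
        return float(a @ (np.asarray(x, float) - 0.5 * (m1 + m2)))

    return score
```

The choice of cut-off point is exactly where Anderson's (1973) expansion enters: it corrects the naive cut-off so that the misclassification probabilities are controlled in finite samples.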
18.
Soltani and Mohammadpour (2006) observed that, for multivariate stationary processes, the backward and forward moving average coefficients are in general different, unlike in the univariate case. This has stimulated research on deriving the forward moving average coefficients in terms of the backward moving average coefficients. In this article we develop a practical procedure for the case where the underlying process is a multivariate moving average (or univariate periodically correlated) process of finite order. Our procedure is based on two key observations: order reduction (Li, 2005) and first-order analysis (Mohammadpour and Soltani, 2010).
19.
The problem of finding D-optimal designs in the presence of a number of covariates has been considered in the one-way set-up. This extends Dey and Mukerjee (2006) in the sense that, for fixed replication numbers of each treatment, an alternative upper bound to the determinant of the information matrix has been found through completely symmetric C-matrices for the regression coefficients; this upper bound includes the one given in Dey and Mukerjee (2006), obtained through diagonal C-matrices. Because a smaller class of C-matrices was used at the intermediate stage where the replication numbers were fixed, some optimal designs previously remained unidentified. These designs have been identified here, and thereby the conjecture made in Dey and Mukerjee (2006) has been settled.
20.
This article proposes a new likelihood-based panel cointegration rank test which extends the test of Örsal and Droge (2014) (henceforth panel SL test) to dependent panels. The dependence is modelled by unobserved common factors which affect the variables in each cross-section through heterogeneous loadings. The data are defactored following the panel analysis of nonstationarity in idiosyncratic and common components (PANIC) approach of Bai and Ng (2004) and the cointegrating rank of the defactored data is then tested by the panel SL test. A Monte Carlo study demonstrates that the proposed testing procedure has reasonable size and power properties in finite samples.