Similar Documents
20 similar documents found (search time: 31 ms)
1.
To monitor a time series and detect stationarity as soon as possible, we introduce monitoring procedures based on kernel-weighted sequential Dickey–Fuller (DF) processes, and related stopping times, which may be called weighted DF control charts. Under rather weak assumptions, (functional) central limit theorems are established under the unit root null hypothesis and local-to-unity alternatives. For general dependent and heterogeneous innovation sequences the limit processes depend on a nuisance parameter. In this case of practical interest, one can use estimated control limits obtained from the estimated asymptotic law. Another easy-to-use approach is to transform the DF processes to obtain limit laws which are invariant with respect to the nuisance parameter. We provide asymptotic theory for both approaches and compare their statistical behavior in finite samples by simulation.
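As a point of reference for such monitoring schemes, the plain (unweighted, no-constant) Dickey–Fuller t-statistic on which they build can be sketched as follows. This is a minimal illustration, not the paper's kernel-weighted sequential procedure; the function and data names are our own.

```python
import numpy as np

def df_stat(y):
    """t-statistic from the no-constant Dickey-Fuller regression
    dy_t = rho * y_{t-1} + e_t; large negative values reject a unit root."""
    dy = np.diff(y)
    ylag = y[:-1]
    rho = np.sum(ylag * dy) / np.sum(ylag ** 2)   # OLS slope estimate
    resid = dy - rho * ylag
    s2 = np.sum(resid ** 2) / (len(dy) - 1)       # residual variance
    se = np.sqrt(s2 / np.sum(ylag ** 2))          # standard error of rho
    return rho / se

# Simulated stationary AR(1) data (phi = 0.2), for which the statistic
# should be strongly negative.
rng = np.random.default_rng(0)
e = rng.standard_normal(500)
y_stat = np.empty(500)
y_stat[0] = e[0]
for t in range(1, 500):
    y_stat[t] = 0.2 * y_stat[t - 1] + e[t]
```

A sequential version in the spirit of the abstract would evaluate `df_stat(y[:n])` on growing subsamples and stop monitoring once the resulting path crosses a control limit.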

2.
Extending previous work on hedge fund return predictability, this paper introduces the idea of modelling the conditional distribution of hedge fund returns using Student's t full-factor multivariate GARCH models. This class of models takes into account the stylized facts of hedge fund return series, that is, heteroskedasticity, fat tails and deviations from normality. For the proposed class of multivariate predictive regression models, we derive analytic expressions for the score and the Hessian matrix, which can be used within classical and Bayesian inferential procedures to estimate the model parameters, as well as to compare different predictive regression models. We propose a Bayesian approach to model comparison which provides posterior probabilities for various predictive models that can be used for model averaging. Our empirical application indicates that accounting for fat tails and time-varying covariances/correlations provides a more appropriate modelling approach to the underlying dynamics of financial series and improves our ability to predict hedge fund returns.

3.
One of the most well-known facts about unit root testing in time series is that the Dickey–Fuller (DF) test based on ordinary least squares (OLS) demeaned data suffers from low power, and that the use of generalized least squares (GLS) demeaning can lead to substantial power gains. Of course, this development has not gone unnoticed in the panel unit root literature. However, while the potential of using GLS demeaning is widely recognized, oddly enough, there are still no theoretical results available to facilitate a formal analysis of such demeaning in the panel data context. The present article can be seen as a reaction to this. The purpose is to evaluate the effect of GLS demeaning when used in conjunction with the pooled OLS t-test for a unit root, resulting in a panel analog of the time series DF–GLS test. A key finding is that the success of GLS depends critically on the order in which the dependent variable is demeaned and first-differenced. If the variable is demeaned prior to taking first-differences, power is maximized by using GLS demeaning, whereas if the differencing is done first, then OLS demeaning is preferred. Furthermore, even if the former demeaning approach is used, such that GLS is preferred, the asymptotic distribution of the resulting test is independent of the tuning parameters that characterize the local alternative under which the demeaning is performed. Hence, the demeaning can just as well be performed under the unit root null hypothesis. In this sense, GLS demeaning under the local alternative is redundant.

4.
Principal component analysis (PCA) is a widely used statistical technique for determining subscales in questionnaire data. As in any other statistical technique, missing data may complicate both its execution and its interpretation. In this study, six methods for dealing with missing data in the context of PCA are reviewed and compared: listwise deletion (LD), pairwise deletion, the missing data passive approach, regularized PCA, the expectation-maximization algorithm, and multiple imputation. Simulations show that except for LD, all methods give about equally good results for realistic percentages of missing data. Therefore, the choice of a procedure can be based on the ease of application or the availability of a technique.
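To illustrate the flavor of the iterative approaches mentioned (regularized PCA, EM), here is a minimal sketch of EM-style PCA imputation: missing cells are filled with column means, then repeatedly replaced by a low-rank SVD reconstruction. This is our own simplified illustration, not the study's implementation, and it omits the regularization those methods use.

```python
import numpy as np

def pca_impute(X, n_comp=1, n_iter=50):
    """Iteratively fill missing entries with a rank-n_comp PCA reconstruction
    (EM-style imputation; starts from column means)."""
    X = X.astype(float).copy()
    miss = np.isnan(X)
    col_means = np.nanmean(X, axis=0)
    X[miss] = np.take(col_means, np.where(miss)[1])  # initial fill
    for _ in range(n_iter):
        mu = X.mean(axis=0)
        U, s, Vt = np.linalg.svd(X - mu, full_matrices=False)
        recon = (U[:, :n_comp] * s[:n_comp]) @ Vt[:n_comp] + mu
        X[miss] = recon[miss]                        # update only missing cells
    return X

# Toy questionnaire-like data: one underlying factor, three items,
# with 10% of the first item missing.
rng = np.random.default_rng(1)
scores = rng.standard_normal((100, 1))
X = scores @ np.array([[1.0, 0.8, 0.6]]) + 0.05 * rng.standard_normal((100, 3))
X_miss = X.copy()
X_miss[::10, 0] = np.nan
X_hat = pca_impute(X_miss)
```

Because the toy data are essentially rank one, the imputed values land close to the held-out truth; with more components or regularization the same loop covers richer structures.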

5.
6.
We demonstrate the use of auxiliary (or latent) variables for sampling non-standard densities which arise in the context of the Bayesian analysis of non-conjugate and hierarchical models by using a Gibbs sampler. Their strategic use can result in a Gibbs sampler having easily sampled full conditionals. We propose such a procedure to simplify or speed up the Markov chain Monte Carlo algorithm. The strength of this approach lies in its generality and its ease of implementation. The aim of the paper, therefore, is to provide an alternative sampling algorithm to rejection-based methods and other sampling approaches such as the Metropolis–Hastings algorithm.
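A classic instance of the auxiliary-variable idea is slice sampling: the target f(x) is augmented with u ~ U(0, f(x)), and both conditionals become uniform draws. The sketch below (Neal-style stepping-out and shrinkage) is a generic illustration of the principle, not the paper's specific construction.

```python
import math
import random

def slice_sample(logf, x0, n, w=1.0):
    """Univariate slice sampler: augment the target f(x) with an auxiliary
    u ~ U(0, f(x)), then draw x uniformly from the slice {x : f(x) > u}."""
    xs, x = [], x0
    for _ in range(n):
        logu = logf(x) + math.log(random.random())  # auxiliary variable (log scale)
        left = x - w * random.random()              # random initial bracket
        right = left + w
        while logf(left) > logu:                    # step out until outside slice
            left -= w
        while logf(right) > logu:
            right += w
        while True:                                 # shrinkage sampling
            xp = random.uniform(left, right)
            if logf(xp) > logu:
                x = xp
                break
            if xp < x:                              # shrink bracket toward x
                left = xp
            else:
                right = xp
        xs.append(x)
    return xs

random.seed(2)
draws = slice_sample(lambda x: -0.5 * x * x, 0.0, 5000)  # standard normal target
```

Each full conditional here is uniform and trivially sampled, which is exactly the property the abstract highlights for well-chosen auxiliary variables.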

7.
Liu Tian. Statistical Research (《统计研究》), 2013, 30(7): 89–96
Through theoretical analysis and Monte Carlo simulation, this paper studies whether the nonlinear ESTAR and LSTAR tests and the linear DF test can reach correct conclusions when the statistic chosen for a stationarity test does not match the data-generating process. The results show that second-order LSTAR and ESTAR models admit the same testing method, but the former exhibits stronger nonlinearity. When the data-generating process is a linear AR model, or a nonlinear ESTAR or second-order LSTAR model, the DF and ESTAR tests reach roughly correct conclusions, while the LSTAR test fails completely. The stronger the nonlinearity of the data-generating process, the greater the power gain of the ESTAR test over the DF test; the stronger the linearity, the greater the power gain of DF. When a large θ in the transition function F(θ, c, z_t) makes the first-order Taylor approximation error large, or when c is nonzero, the standard ESTAR and LSTAR nonlinearity tests no longer apply. When θ is large or c is far from zero, the linear component of the data-generating process strengthens, and the linear DF test yields better results.

8.
The folded normal distribution arises as the distribution of the modulus (absolute value) of a normally distributed variable. In the present article, we have formulated the cumulative distribution function (cdf) of a folded normal distribution in terms of the standard normal cdf and the parameters of the mother normal distribution. Although cdf values of the folded normal distribution were earlier tabulated in the literature, we have shown that those values are valid only in very particular situations. We have also provided a simple approach to obtain the values of the parameters of the mother normal distribution from those of the folded normal distribution. These results find ample application in practice, for example, in obtaining the so-called upper and lower α-points of the folded normal distribution, which, in turn, is useful in testing hypotheses relating to the folded normal distribution and in designing control charts for some process capability indices. A thorough study has been made to compare the performance of the newly developed theory with the existing ones. Some simulated as well as real-life examples are discussed to supplement the theory developed in this article. Codes (generated with R software) for the theory developed in this article are also presented for ease of application.
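The cdf in question has a well-known closed form: for Y = |X| with X ~ N(μ, σ²), F_Y(y) = Φ((y − μ)/σ) − Φ((−y − μ)/σ) for y ≥ 0, where Φ is the standard normal cdf. A minimal sketch (the article provides R code; this Python version is ours):

```python
import math

def phi(z):
    """Standard normal cdf via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def folded_normal_cdf(y, mu, sigma):
    """cdf of |X| for X ~ N(mu, sigma^2):
    F(y) = Phi((y - mu)/sigma) - Phi((-y - mu)/sigma), y >= 0."""
    if y < 0:
        return 0.0
    return phi((y - mu) / sigma) - phi((-y - mu) / sigma)

# With mu = 0 the folded normal reduces to the half-normal,
# so F(1) = 2*Phi(1) - 1.
print(folded_normal_cdf(1.0, 0.0, 1.0))  # ≈ 0.6827
```

Inverting this cdf numerically gives the upper and lower α-points the abstract refers to.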

9.
In this paper we discuss the calibration of models built on mean-reverting processes combined with Markov regime-switching (MRS). We propose a method that greatly reduces the computational burden induced by the introduction of independent regimes and perform a simulation study to test its efficiency. Our method allows for calibration 100 to over 1000 times faster than a competing approach utilizing probabilities of the last 10 observations. It is also more general and admits any value of γ in the base regime dynamics. Since the motivation for this research comes from a recent stream of literature in energy economics, we apply the new method to sample series of electricity spot prices from the German EEX and Australian NSW markets. The proposed MRS models fit these datasets well and replicate the major stylized facts of electricity spot price dynamics.

10.
The Integrated Nested Laplace Approximation (INLA) has established itself as a widely used method for approximate inference on Bayesian hierarchical models which can be represented as a latent Gaussian model (LGM). INLA is based on producing an accurate approximation to the posterior marginal distributions of the parameters in the model and some other quantities of interest by using repeated approximations to intermediate distributions and integrals that appear in the computation of the posterior marginals. INLA focuses on models whose latent effects are a Gaussian Markov random field. For this reason, we have explored alternative ways of expanding the number of possible models that can be fitted using the INLA methodology. In this paper, we present a novel approach that combines INLA and Markov chain Monte Carlo (MCMC). The aim is to consider a wider range of models that can be fitted with INLA only when some of the parameters of the model have been fixed. We show how new values of these parameters can be drawn from their posterior by using conditional models fitted with INLA and standard MCMC algorithms, such as Metropolis–Hastings. Hence, this will extend the use of INLA to fit models that can be expressed as a conditional LGM. Also, this new approach can be used to build simpler MCMC samplers for complex models as it allows sampling only on a limited number of parameters in the model. We will demonstrate how our approach can extend the class of models that could benefit from INLA, and how the R-INLA package will ease its implementation. We will go through simple examples of this new approach before we discuss more advanced applications with datasets taken from the relevant literature. 
In particular, INLA within MCMC will be used to fit a Bayesian lasso model with Laplace priors, to impute missing covariates in linear models, to fit spatial econometrics models with complex nonlinear terms in the linear predictor, and to classify data with mixture models. Furthermore, in some of the examples we can exploit INLA within MCMC to make joint inference on an ensemble of model parameters.

11.
The development of control charts for monitoring processes associated with very low rates of nonconformities is becoming increasingly important as manufacturing processes become more capable. Since the rate of nonconformities can typically be modeled by a simple homogeneous Poisson process, monitoring the interarrival times using the exponential distribution becomes an attractive alternative. Gan (1994) developed a CUSUM-based approach for monitoring the exponential mean. In this paper, we propose an alternative CUSUM-based approach chosen for its ease of implementation. We also provide a study of the relative performance of the two approaches.
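A generic lower-sided CUSUM on exponential interarrival times can be sketched as follows: a shrinking time between nonconformities (i.e., a rising defect rate) pushes the statistic up until it crosses the limit. The reference value `k` and limit `h` here are illustrative placeholders, not the tuned constants of either approach in the paper.

```python
def exponential_cusum(times, k=1.0, h=3.0):
    """Lower-sided CUSUM for exponential interarrival times:
    S_t = max(0, S_{t-1} + k - x_t); a signal at S_t > h suggests the mean
    time between nonconformities has decreased (rate has increased)."""
    s, path = 0.0, []
    for t, x in enumerate(times, start=1):
        s = max(0.0, s + k - x)
        path.append(s)
        if s > h:
            return t, path          # first signal time and statistic path
    return None, path               # no signal within the sample

# Interarrival times that have shrunk to half the reference value:
# the statistic climbs by 0.5 per observation and signals at t = 7.
signal_at, path = exponential_cusum([0.5] * 20)
print(signal_at)  # → 7
```

In practice `k` and `h` are chosen to achieve a target in-control average run length, typically via the kind of performance study the abstract describes.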

12.
Some control charts have been proposed to monitor the mean of a Weibull process with type-I censoring. One approach is to monitor changes in the scale parameter, since these indicate changes in the mean. With this approach, we compare different control charts: Shewhart-type and exponentially weighted moving average (EWMA) charts based on the conditional expected value (CEV), and a cumulative sum (CUSUM) chart based on the likelihood ratio. A simulation approach is employed to compute control limits and average run lengths. The results show that the CUSUM chart has the best performance. However, the EWMA-CEV chart is recommendable for practitioners, given its competitive performance and ease of use. An illustrative example is also provided.

13.
A Study of the Factors Influencing the Intention to Use Internet Banking
Internet banking has become an important channel through which China's banking industry serves its customers, but many factors constrain its wide adoption and deep application. Based on the actual development and application of internet banking in China, we build a conceptual multi-factor path model of the intention to use internet banking, and test its hypotheses through a questionnaire survey and empirical analysis. The results show that trust, perceived ease of use, and perceived usefulness are the main factors affecting the intention to use; concerns about trust and risk hold customers back from using internet banking services and constitute the main obstacle to its development.

14.
Summary.  Functional magnetic resonance imaging has become a standard technology in human brain mapping. Analyses of the massive spatiotemporal functional magnetic resonance imaging data sets often focus on parametric or non-parametric modelling of the temporal component, whereas spatial smoothing is based on Gaussian kernels or random fields. A weakness of Gaussian spatial smoothing is underestimation of activation peaks or blurring of high curvature transitions between activated and non-activated regions of the brain. To improve spatial adaptivity, we introduce a class of inhomogeneous Markov random fields with stochastic interaction weights in a space-varying coefficient model. For given weights, the random field is conditionally Gaussian, but marginally it is non-Gaussian. Fully Bayesian inference, including estimation of weights and variance parameters, can be carried out through efficient Markov chain Monte Carlo simulation. Although motivated by the analysis of functional magnetic resonance imaging data, the methodological development is general and can also be used for spatial smoothing and regression analysis of areal data on irregular lattices. An application to stylized artificial data and to real functional magnetic resonance imaging data from a visual stimulation experiment demonstrates the performance of our approach in comparison with Gaussian and robustified non-Gaussian Markov random-field models.

15.
Abstract. We investigate simulation methodology for Bayesian inference in Lévy-driven stochastic volatility (SV) models. Typically, Bayesian inference from such models is performed using Markov chain Monte Carlo (MCMC); this is often a challenging task. Sequential Monte Carlo (SMC) samplers are methods that can improve over MCMC; however, there are many user-set parameters to specify. We develop a fully automated SMC algorithm, which substantially improves over the standard MCMC methods in the literature. To illustrate our methodology, we look at a model comprising a Heston model with an independent, additive, variance gamma process in the returns equation. The driving gamma process can capture the stylized behaviour of many financial time series, and a discretized version, fit in a Bayesian manner, has been found to be very useful for modelling equity data. We demonstrate that it is possible to draw exact inference, in the sense of no time-discretization error, from the Bayesian SV model.

16.
Profile monitoring is applied when the quality of a product or a process can be determined by the relationship between a response variable and one or more independent variables. Most Phase II monitoring approaches assume that the process parameters are known. However, this assumption is not valid in many real-world applications; in fact, the process parameters should be estimated from the in-control Phase I samples. In this study, the effect of parameter estimation on the performance of four Phase II control charts for monitoring multivariate multiple linear profiles is evaluated. In addition, since the accuracy of parameter estimation has a significant impact on the performance of Phase II control charts, a new cluster-based approach is developed to address this effect. Moreover, we evaluate and compare the performance of the proposed approach with a previous approach in terms of two metrics, the average of the average run length and its standard deviation, which are used to account for practitioner-to-practitioner variability. In this approach, it is not necessary to know the distribution of the chart statistic; therefore, in addition to ease of use, the proposed approach can be applied to other types of profiles. The superior performance of the proposed method compared with the competing one is shown in terms of both metrics. Based on the results obtained, our method yields less bias with small-variance Phase I estimates compared with the competing approach.

17.
The paper provides a novel application of the probabilistic reduction (PR) approach to the analysis of multi-categorical outcomes. The PR approach, which systematically takes account of heterogeneity and functional form concerns, can improve the specification of binary regression models. However, its utility for systematically enriching the specification of and inference from models of multi-categorical outcomes has not been examined, while multinomial logistic regression models are commonly used for inference and, increasingly, prediction. Following a theoretical derivation of the PR-based multinomial logistic model (MLM), we compare functional specification and marginal effects from a traditional specification and a PR-based specification in a model of post-stroke hospital discharge disposition and find that the traditional MLM is misspecified. Results suggest that the impact on the reliability of substantive inferences from a misspecified model may be significant, even when model fit statistics do not suggest a strong lack of fit compared with a properly specified model using the PR approach. We identify situations under which a PR-based MLM specification can be advantageous to the applied researcher.

18.
Inspired by reliability issues in electric transmission networks, we use a probabilistic approach to study the occurrence of large failures in a stylized cascading line failure model. Such models capture the phenomenon where an initial line failure potentially triggers massive knock-on effects. Under certain critical conditions, the probability that the number of line failures exceeds a large threshold obeys a power-law distribution, a distinctive property observed in empiric blackout data. In this paper, we examine the robustness of the power-law behavior by exploring under which conditions this behavior prevails.
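The heavy tail at criticality can be illustrated with a Galton–Watson-style cascade, in which each failed line triggers a Poisson number of knock-on failures; at offspring mean 1 the total cascade size is heavy-tailed. This is a stylized sketch of the general phenomenon, not the paper's specific line failure model.

```python
import math
import random

def cascade_size(mean_offspring, cap=10_000, rng=random):
    """Total number of failures in a branching cascade: one initial line
    failure, and each failure independently triggers Poisson(mean_offspring)
    further failures. At mean_offspring = 1 (the critical condition) the
    cascade size distribution is heavy-tailed, mimicking blackout data."""
    def poisson(lam):
        # Knuth's inversion method; adequate for small lam
        L, k, p = math.exp(-lam), 0, 1.0
        while True:
            p *= rng.random()
            if p <= L:
                return k
            k += 1
    active, total = 1, 1
    while active and total < cap:                  # cap guards against runaways
        new = sum(poisson(mean_offspring) for _ in range(active))
        total += new
        active = new
    return total

random.seed(3)
sizes = [cascade_size(1.0) for _ in range(2000)]          # critical regime
frac_large = sum(s > 100 for s in sizes) / len(sizes)     # tail frequency
```

In the subcritical regime (mean offspring below 1) cascades die out quickly, while at criticality a non-negligible fraction of runs exceeds any fixed large threshold, which is the power-law signature the abstract examines.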

19.
Information from multiple informants is frequently used to assess psychopathology. We consider marginal regression models with multiple informants as discrete predictors and a time to event outcome. We fit these models to data from the Stirling County Study; specifically, the models predict mortality from self report of psychiatric disorders and also predict mortality from physician report of psychiatric disorders. Previously, Horton et al. found little relationship between self and physician reports of psychopathology, but that the relationship of self report of psychopathology with mortality was similar to that of physician report of psychopathology with mortality. Generalized estimating equations (GEE) have been used to fit marginal models with multiple informant covariates; here we develop a maximum likelihood (ML) approach and show how it relates to the GEE approach. In a simple setting using a saturated model, the ML approach can be constructed to provide estimates that match those found using GEE. We extend the ML technique to consider multiple informant predictors with missingness and compare the method to using inverse probability weighted (IPW) GEE. Our simulation study illustrates that IPW GEE loses little efficiency compared with ML in the presence of monotone missingness. Our example data has non-monotone missingness; in this case, ML offers a modest decrease in variance compared with IPW GEE, particularly for estimating covariates in the marginal models. In more general settings, e.g., categorical predictors and piecewise exponential models, the likelihood parameters from the ML technique do not have the same interpretation as the GEE. Thus, the GEE is recommended to fit marginal models for its flexibility, ease of interpretation and comparable efficiency to ML in the presence of missing data.

20.
ARCH/GARCH representations of financial series usually attempt to model the serial correlation structure of squared returns. Although it is undoubtedly true that squared returns are correlated, there is increasing empirical evidence of stronger correlation in the absolute returns than in squared returns. Rather than assuming an explicit form for volatility, we adopt an approximation approach; we approximate the γth power of volatility by an asymmetric GARCH function with the power index γ chosen so that the approximation is optimum. Asymptotic normality is established for both the quasi-maximum likelihood estimator (qMLE) and the least absolute deviations estimator (LADE) in our approximation setting. A consequence of our approach is a relaxation of the usual stationarity condition for GARCH models. In an application to real financial datasets, the estimated values for γ are found to be close to one, consistent with the stylized fact that the strongest autocorrelation is found in the absolute returns. A simulation study illustrates that the qMLE is inefficient for models with heavy-tailed errors, whereas the LADE is more robust.
