Similar Documents
20 similar documents retrieved.
1.
Financial crises have occurred frequently in recent years, making the dynamic linkages between international financial markets an important research topic. Most previous studies examine the correlation between financial markets directly, ignoring the influence of exogenous financial variables on that correlation. This paper addresses that gap: borrowing the idea of the STCC model of Silvennoinen and Terasvirta (2015), we assume that the Copula parameter is driven by an exogenous variable and build a time-varying dynamic Copula model, the ST-VCopula model. Based on this model we investigate the effect of market volatility (the VIX index) on the correlation between stock markets, and we carry out an empirical analysis of stock index data from several countries. The results show that the VIX index has a significant impact on the linkages between stock markets. Because the VIX index is easy to obtain and directly observable, it offers an alternative route for studying dynamic market linkages and can provide guidance to investors in financial activities such as portfolio diversification.
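
The abstract does not spell out the ST-VCopula's functional form. As a minimal sketch of the underlying idea, assuming an STCC-style logistic transition that moves a Gaussian-copula correlation between two regimes as VIX rises; all parameter names here (rho_low, rho_high, gamma, c) are hypothetical:

```python
import numpy as np

def logistic_transition(v, gamma, c):
    """Smooth transition function G(v) in (0, 1), as in STCC-type models."""
    return 1.0 / (1.0 + np.exp(-gamma * (v - c)))

def st_copula_corr(vix, rho_low, rho_high, gamma, c):
    """Time-varying Gaussian-copula correlation driven by an exogenous
    variable (here VIX): rho_t moves smoothly between two regimes."""
    g = logistic_transition(vix, gamma, c)
    return (1.0 - g) * rho_low + g * rho_high

# Example: dependence rises from 0.3 toward 0.8 as VIX crosses 25.
vix_path = np.array([12.0, 18.0, 25.0, 40.0, 60.0])
print(st_copula_corr(vix_path, rho_low=0.3, rho_high=0.8, gamma=0.3, c=25.0))
```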

2.
In this article, a two-parameter generalized inverse Lindley distribution capable of modeling an upside-down bathtub-shaped hazard rate function is introduced. Some statistical properties of the proposed distribution are derived explicitly. The methods of maximum likelihood, least squares, and maximum product spacings are used to estimate the unknown model parameters and are compared through a simulation study. Approximate confidence intervals, based on normal and log-normal approximations, are also computed. Two algorithms are proposed for generating a random sample from the proposed distribution. A real data set is modeled to illustrate the distribution's applicability, and it is shown that our distribution fits much better than some other existing inverse distributions.
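
The two sampling algorithms themselves are not given in the abstract. One standard route, assuming the construction X = Y^{-1/alpha} with Y ~ Lindley(theta), exploits the classical mixture representation of the Lindley distribution (Exp(theta) with weight theta/(1+theta), Gamma(2, theta) otherwise); a sketch under that assumption:

```python
import numpy as np

rng = np.random.default_rng(42)

def rgen_inv_lindley(n, theta, alpha):
    """Draw from a generalized inverse Lindley law, assuming
    X = Y**(-1/alpha) with Y ~ Lindley(theta), via the standard
    two-component mixture representation of the Lindley distribution."""
    use_exp = rng.random(n) < theta / (1.0 + theta)
    y = np.where(use_exp,
                 rng.exponential(scale=1.0 / theta, size=n),
                 rng.gamma(shape=2.0, scale=1.0 / theta, size=n))
    return y ** (-1.0 / alpha)

sample = rgen_inv_lindley(10_000, theta=1.5, alpha=2.0)
print(sample.mean(), np.median(sample))
```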

3.
Latent variable modeling is commonly used in behavioral, social, and medical science research. The models used in such analyses relate all observed variables to latent common factors. In many applications the observations are highly non-normal or discrete, e.g., polytomous responses or counts. The existing approaches for non-normal observations are lacking in several respects, especially in multi-group sample situations. We propose a generalized linear model approach for multi-sample latent variable analysis that can handle a broad class of non-normal and discrete observations, and that furnishes meaningful interpretation and inference in multi-group studies through maximum likelihood analysis. A Monte Carlo EM algorithm is proposed for parameter estimation; convergence assessment and standard error estimation are also addressed. Simulation studies are reported to show the usefulness of our approach. An example from a substance abuse prevention study is also presented.
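
The paper's actual latent variable model is not specified in the abstract. The sketch below only illustrates the Monte Carlo EM mechanics it relies on, using a toy count model with one latent normal effect per observation; the model and every name in it are stand-ins:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: counts with extra-Poisson variation from a latent normal effect.
true_mu, true_sigma = 1.0, 0.7
z_true = rng.normal(0.0, true_sigma, size=200)
y = rng.poisson(np.exp(true_mu + z_true))

mu, sigma = 0.0, 1.0          # initial values
M = 2000                      # Monte Carlo size per E-step

for it in range(50):
    # E-step: importance-sample z_i | y_i using the current prior
    # N(0, sigma^2) as proposal; weights are the Poisson likelihoods.
    z = rng.normal(0.0, sigma, size=(M, y.size))
    lam = np.exp(mu + z)
    logw = y * np.log(lam) - lam            # Poisson log-likelihood (no constant)
    w = np.exp(logw - logw.max(axis=0))
    w /= w.sum(axis=0)
    e_z2 = (w * z**2).sum(axis=0)           # E[z_i^2 | y_i]
    e_ez = (w * np.exp(z)).sum(axis=0)      # E[exp(z_i) | y_i]
    # M-step: closed-form updates for this toy model.
    mu = np.log(y.sum() / e_ez.sum())
    sigma = np.sqrt(e_z2.mean())

print(mu, sigma)   # should land near (1.0, 0.7)
```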

4.
The Heston-STAR model is a new class of stochastic volatility models defined by generalizing the Heston model to allow the volatility of the volatility process, as well as the correlation between asset log-returns and variance shocks, to change across regimes via smooth transition autoregressive (STAR) functions. The form of the STAR functions is very flexible, much more so than the functions introduced in Jones (J Econom 116:181–224, 2003), and provides the framework for a wide range of stochastic volatility models. A Bayesian inference approach using data augmentation techniques is used to estimate the parameters of our model. We also assess the goodness of fit of the Heston-STAR model. Our analysis of the S&P 500 and the VIX index demonstrates that the Heston-STAR model copes with large market fluctuations (such as those in 2008) better than the standard Heston model.
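
The abstract does not give the exact STAR specification or transition variable. A minimal sketch, assuming a logistic STAR function of the current variance that blends the vol-of-vol between two regimes inside an Euler discretization of a Heston-type variance process (all parameter names are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(1)

def star(v, gamma, c):
    """Logistic smooth transition autoregressive (STAR) weight in (0, 1)."""
    return 1.0 / (1.0 + np.exp(-gamma * (v - c)))

def simulate_variance(n, kappa, theta, xi_low, xi_high, gamma, c,
                      v0=0.04, dt=1 / 252):
    """Euler scheme for a Heston-type variance process whose vol-of-vol
    switches smoothly between two regimes via a STAR function of the
    lagged variance (one plausible choice of transition variable)."""
    v = np.empty(n); v[0] = v0
    for t in range(1, n):
        g = star(v[t - 1], gamma, c)
        xi = (1.0 - g) * xi_low + g * xi_high
        dv = (kappa * (theta - v[t - 1]) * dt
              + xi * np.sqrt(max(v[t - 1], 0.0) * dt) * rng.normal())
        v[t] = max(v[t - 1] + dv, 1e-8)
    return v

v = simulate_variance(500, kappa=3.0, theta=0.04,
                      xi_low=0.2, xi_high=0.6, gamma=200.0, c=0.06)
print(v[:5])
```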

5.
The varying-coefficient single-index model (VCSIM) is a very general and flexible tool for exploring the relationship between a response variable and a set of predictors. Popular special cases include single-index models and varying-coefficient models. To estimate the index coefficient and the nonparametric varying coefficients in the VCSIM, we propose a two-stage composite quantile regression estimation procedure that integrates the local linear smoothing method with information from quantile regressions at a number of conditional quantiles of the response variable. We establish the asymptotic properties of the proposed estimators for the index coefficient and the varying coefficients when the error is heterogeneous. Compared with the existing mean-regression-based estimation method, our simulation results indicate that the proposed method performs comparably for normal errors and is more robust for errors with outliers or heavy tails. We illustrate our methodology with a real example.
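
The paper's two-stage local-linear procedure is more involved than can be shown here. A minimal sketch of the composite quantile regression idea itself, for a plain linear model with one common slope and one intercept per quantile level (the function names are ours):

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(2)

def check_loss(u, tau):
    """Quantile regression check function rho_tau(u)."""
    return u * (tau - (u < 0))

def cqr_fit(x, y, taus):
    """Composite quantile regression: a common slope beta shared across
    quantile levels, plus one intercept per level."""
    K = len(taus)
    def objective(params):
        beta, intercepts = params[0], params[1:]
        return sum(check_loss(y - b - beta * x, tau).sum()
                   for b, tau in zip(intercepts, taus))
    res = minimize(objective, np.zeros(1 + K), method="Nelder-Mead",
                   options={"maxiter": 20000, "xatol": 1e-6, "fatol": 1e-6})
    return res.x[0], res.x[1:]

# Heavy-tailed errors: the CQR slope stays stable where least squares suffers.
x = rng.normal(size=300)
y = 2.0 * x + rng.standard_t(df=2, size=300)
beta, intercepts = cqr_fit(x, y, taus=[0.25, 0.5, 0.75])
print(beta)
```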

6.
In this paper, we propose an outlier-detection approach based on the properties of an intercept estimator in a difference-based regression model (DBRM), which we introduce here. The DBRM is built on multiple linear regression and is designed specifically to detect outliers in that setting. Our approach uses only the intercept; it does not require estimates of the other parameters in the DBRM. We first employ a difference-based intercept estimator to study the outlier-detection problem in a multiple regression model. We compare our approach with several existing methods in a simulation study, and the results suggest that it outperforms the others. We also demonstrate its advantages in a real data application. Our approach can be extended to nonparametric regression models for outlier detection.

7.
In this paper, we develop a semiparametric regression model for longitudinal skewed data. In the new model, we allow both the transformation function and the baseline function to be unknown, so the proposed model provides a much broader class of models than the existing additive and multiplicative models. Our estimators for the regression parameters, the transformation function, and the baseline function are asymptotically normal. In particular, the estimator of the transformation function converges to its true value at the rate n^{-1/2}, the convergence rate one would expect for a parametric model. In simulation studies, we demonstrate that the proposed semiparametric method is robust with little loss of efficiency. Finally, we apply the new method to a study of longitudinal health care costs.

8.
Cluster analysis is the automated search for groups of homogeneous observations in a data set. A popular modeling approach for clustering is based on finite normal mixture models, which assume that each cluster follows a multivariate normal distribution. However, the normality assumption, under which each component is symmetric, is often unrealistic. Furthermore, normal mixture models are not robust against outliers; they often require extra components for modeling outliers and/or give a poor representation of the data. To address these issues, we propose a new class of distributions for mixture modeling: multivariate t distributions with the Box-Cox transformation. This class generalizes the normal distribution with the heavier-tailed t distribution and introduces skewness via the Box-Cox transformation, providing a unified framework that simultaneously handles the two interrelated issues of outlier identification and data transformation. We describe an Expectation-Maximization algorithm for parameter estimation along with transformation selection. We demonstrate the proposed methodology with three real data sets and simulation studies. Compared with a wealth of approaches, including the skew-t mixture model, the proposed t mixture model with the Box-Cox transformation performs favorably in terms of accuracy in the assignment of observations, robustness against model misspecification, and selection of the number of components.
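
As a hedged illustration of the building block, not the authors' full mixture EM: the log-likelihood of a single Box-Cox-transformed multivariate t component, including the Jacobian term contributed by the transformation (requires scipy >= 1.6 for multivariate_t):

```python
import numpy as np
from scipy.stats import multivariate_t

def box_cox(x, lam):
    """Box-Cox transform, applied elementwise to positive data."""
    return np.log(x) if lam == 0 else (x**lam - 1.0) / lam

def trans_t_loglik(x, lam, mean, shape, df):
    """Log-likelihood of positive data x under a multivariate t model
    for the Box-Cox transformed data, with the log-Jacobian term
    (lam - 1) * sum(log x) of the transformation included."""
    z = box_cox(x, lam)
    jac = (lam - 1.0) * np.log(x).sum()
    return multivariate_t(loc=mean, shape=shape, df=df).logpdf(z).sum() + jac

rng = np.random.default_rng(3)
x = rng.lognormal(mean=0.0, sigma=0.4, size=(200, 2))   # skewed positive data
print(trans_t_loglik(x, lam=0.3, mean=np.zeros(2), shape=np.eye(2), df=5.0))
```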

9.
We consider semiparametric inference in the partially linear single-index model (PLSIM). The generalized likelihood ratio (GLR) test is proposed to examine whether a family of new semiparametric models adequately fits the given data in the PLSIM. A new GLR statistic is established to handle testing of the index parameter α0 in the PLSIM. The newly proposed statistic is shown to asymptotically follow a χ2-distribution, with the scale constant and the degrees of freedom independent of the nuisance parameters or functions. Some finite sample simulations and a real example are used to illustrate the proposed methodology.

10.
We propose a simple procedure based on an existing "debiased" l1-regularized method for inference on the average partial effects (APEs) in approximately sparse probit and fractional probit models with panel data, where the number of time periods is fixed and small relative to the number of cross-sectional observations. Our method is computationally simple and does not suffer from the incidental parameters problem that arises from attempting to estimate the unobserved heterogeneity of each cross-sectional unit as a parameter. Furthermore, it is robust to arbitrary serial dependence in the underlying idiosyncratic errors. Our theoretical results illustrate that inference on APEs is more challenging than inference on fixed, low-dimensional parameters: the former requires deriving the asymptotic normality of sample averages of linear functions of a potentially large set of components in our estimator when a series approximation for the conditional mean of the unobserved heterogeneity is considered. Insights on the applicability and implications of other existing Lasso-based inference procedures for our problem are provided. We apply the debiasing method to estimate the effects of spending on test pass rates. Our results show that spending has a positive and statistically significant average partial effect; moreover, the effect is comparable to that found using standard parametric methods.

11.
In our previous research, we proposed a speedy double bootstrap method for assessing the reliability of statistical models with a maximum log-likelihood criterion; it provides third-order accurate probabilities. In this study, our focus shifts to the mathematical proof. We propose an alternative proof of the third-order accuracy in the context of the multivariate normal model. Our proof is based on the tube-formula methodology of differential geometry and a Taylor series approach to the asymptotic analysis of the bootstrap method.
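
The speedy variant's shortcut is not reproduced here. For orientation, a sketch of the generic double bootstrap it accelerates, calibrating a bootstrap p-value for a mean with an inner resampling level (B1, B2, and the test itself are illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(4)

def t_stat(x, mu0):
    return (x.mean() - mu0) / (x.std(ddof=1) / np.sqrt(x.size))

def double_bootstrap_p(x, mu0, B1=500, B2=200):
    """Double-bootstrap p-value for H0: mean = mu0.  The inner level
    recalibrates the outer bootstrap p-value (this is the generic
    scheme; the speedy variant avoids the full inner resampling cost)."""
    t_obs = t_stat(x, mu0)
    x0 = x - x.mean() + mu0                 # impose H0 on the resampling law
    p_inner = np.empty(B1)
    t_outer = np.empty(B1)
    for b in range(B1):
        xb = rng.choice(x0, size=x0.size, replace=True)
        t_outer[b] = t_stat(xb, mu0)
        xb0 = xb - xb.mean() + mu0
        t_in = np.array([t_stat(rng.choice(xb0, size=x0.size, replace=True), mu0)
                         for _ in range(B2)])
        p_inner[b] = np.mean(np.abs(t_in) >= np.abs(t_outer[b]))
    p_naive = np.mean(np.abs(t_outer) >= np.abs(t_obs))
    return np.mean(p_inner <= p_naive)      # calibrated p-value

x = rng.normal(loc=0.3, scale=1.0, size=30)
print(double_bootstrap_p(x, mu0=0.0))
```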

12.
Occupancy models are used in statistical ecology to estimate species dispersion. The two components of an occupancy model are the detection and occupancy probabilities, with the main interest being in the occupancy probabilities. We show that for the homogeneous occupancy model there is an orthogonal transformation of the parameters that yields a natural two-stage inference procedure based on a conditional likelihood. We then extend this to a partial likelihood that gives explicit estimators of the model parameters. By allowing separate modeling of the detection and occupancy probabilities, the extension of the two-stage approach to more general models has the potential to simplify the computational routines used there.
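
The orthogonal transformation and partial likelihood are not given in the abstract. As context, a sketch of the standard homogeneous occupancy likelihood the paper starts from, with occupancy probability psi and per-visit detection probability p fitted by maximum likelihood:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import expit

rng = np.random.default_rng(5)

# Simulate a homogeneous occupancy model: S sites, J visits per site.
S, J, psi_true, p_true = 400, 5, 0.6, 0.3
occupied = rng.random(S) < psi_true
detections = (rng.random((S, J)) < p_true) & occupied[:, None]
d = detections.sum(axis=1)                     # detections per site

def neg_loglik(params):
    """A site with >= 1 detection is surely occupied; an all-zero
    history mixes 'occupied but missed' with 'unoccupied'.
    (Binomial coefficients omitted: constant in the parameters.)"""
    psi, p = expit(params)                     # keep both in (0, 1)
    ll_pos = np.log(psi) + d * np.log(p) + (J - d) * np.log(1 - p)
    ll_zero = np.log(psi * (1 - p) ** J + (1 - psi))
    return -np.where(d > 0, ll_pos, ll_zero).sum()

res = minimize(neg_loglik, np.zeros(2), method="Nelder-Mead")
print(expit(res.x))    # estimates of (psi, p)
```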

13.
We present a Bayesian analysis framework for matrix-variate normal data with dependency structures induced by rows and columns. This framework of matrix normal models includes prior specifications, posterior computation using Markov chain Monte Carlo methods, evaluation of prediction uncertainty, model structure search, and extensions to multidimensional arrays. Whereas Bayesian probabilistic matrix factorization integrates a Gaussian prior for a single row of the data matrix, our proposed model, Bayesian hierarchical kernelized probabilistic matrix factorization, imposes Gaussian process priors over multiple rows of the matrix. Hence, the learned model explicitly captures the underlying correlation among the rows and the columns. In addition, our method requires no specific assumptions, such as independence of the latent factors for rows and columns, which allows more flexibility for modeling real data than existing approaches. Finally, the proposed framework can be adapted to a wide range of applications, including multivariate analysis, time series, and spatial modeling. Experiments highlight the superiority of the proposed model in handling model uncertainty and model optimization.
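
As a small illustration of the data model underneath this framework (not the authors' kernelized factorization): sampling a matrix-variate normal whose rows and columns carry separate covariance structures:

```python
import numpy as np

rng = np.random.default_rng(6)

def sample_matrix_normal(M, U, V):
    """Draw X ~ MN(M, U, V): rows correlated through U, columns through
    V, via X = M + A Z B' with U = A A' and V = B B' (Cholesky factors)."""
    A = np.linalg.cholesky(U)
    B = np.linalg.cholesky(V)
    Z = rng.standard_normal(M.shape)
    return M + A @ Z @ B.T

n, p = 4, 3
U = 0.5 * np.eye(n) + 0.5          # row covariance (equicorrelated)
V = np.eye(p)                      # column covariance
X = sample_matrix_normal(np.zeros((n, p)), U, V)
print(X)
```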

14.
A simple and efficient goodness-of-fit test for exponentiality is developed by exploiting a characterization of the exponential distribution through the probability integral transformation. We adopt the empirical likelihood methodology in constructing the test statistic, which has a chi-square limiting distribution. For small to moderate sample sizes, Monte Carlo simulations reveal that our proposed tests are markedly superior under increasing failure rate (IFR) and bathtub (decreasing-increasing) failure rate (BFR) alternatives. Real data examples are used to demonstrate the robustness and applicability of our proposed tests in practice.
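
The empirical-likelihood statistic itself is not given in the abstract. As a sketch of the characterization being exploited: if X ~ Exp(lambda), the probability integral transform 1 - exp(-lambda*X) is Uniform(0, 1), so uniformity of the transformed sample can be checked. Here a KS statistic merely stands in for the paper's test, and note that plugging in the estimated lambda affects the null distribution:

```python
import numpy as np
from scipy.stats import kstest

rng = np.random.default_rng(7)

def pit_exponentiality_check(x):
    """Transform with the MLE lam_hat = 1/mean(x) and test the result
    for uniformity (KS used as a simple stand-in statistic)."""
    lam_hat = 1.0 / x.mean()
    u = 1.0 - np.exp(-lam_hat * x)
    return kstest(u, "uniform")

print(pit_exponentiality_check(rng.exponential(2.0, size=200)))   # exponential data
print(pit_exponentiality_check(rng.weibull(3.0, size=200)))       # IFR alternative
```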

15.
Markov chain Monte Carlo techniques have revolutionized the field of Bayesian statistics. Their power is so great that they can even accommodate situations in which the structure of the statistical model itself is uncertain. However, the analysis of such trans-dimensional (TD) models is not easy and available software may lack the flexibility required for dealing with the complexities of real data, often because it does not allow the TD model to be simply part of some bigger model. In this paper we describe a class of widely applicable TD models that can be represented by a generic graphical model, which may be incorporated into arbitrary other graphical structures without significantly affecting the mechanism of inference. We also present a decomposition of the reversible jump algorithm into abstract and problem-specific components, which provides infrastructure for applying the method to all models in the class considered. These developments represent a first step towards a context-free method for implementing TD models that will facilitate their use by applied scientists for the practical exploration of model uncertainty. Our approach makes use of the popular WinBUGS framework as a sampling engine and we illustrate its use via two simple examples in which model uncertainty is a key feature.

16.
We propose a method to determine the order q of a model in a general class of time series models. For the subset of linear moving average models (MA(q)), our method is compared with the sample autocorrelation approach. Since the sample autocorrelation is designed to detect linear dependence between random variables, it turns out to be more suitable in the linear case; even there, however, our method remains competitive, and for nonlinear models (NLMA(q)) it is shown to work better. The main advantage of our approach is that it makes no assumptions about the existence of moments or about the distribution of the noise driving the moving average models. We also include an example with real data corresponding to the daily returns of the Mexican peso-US dollar exchange rate.
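
The authors' new criterion is not described in the abstract; the sketch below implements only the classical benchmark they compare against, picking q as the last lag at which the sample ACF leaves an approximate 95% band (for an MA(q) process the ACF cuts off after lag q):

```python
import numpy as np

rng = np.random.default_rng(8)

def sample_acf(x, max_lag):
    """Sample autocorrelations r_1 .. r_max_lag."""
    x = x - x.mean()
    denom = (x**2).sum()
    return np.array([(x[k:] * x[:-k]).sum() / denom
                     for k in range(1, max_lag + 1)])

def ma_order_from_acf(x, max_lag=20):
    """Largest lag whose sample autocorrelation exceeds the rough
    95% band 1.96/sqrt(n); 0 if none does."""
    r = sample_acf(x, max_lag)
    band = 1.96 / np.sqrt(x.size)
    significant = np.nonzero(np.abs(r) > band)[0]
    return 0 if significant.size == 0 else int(significant[-1] + 1)

# MA(2): x_t = e_t + 0.6 e_{t-1} + 0.4 e_{t-2}
e = rng.normal(size=3000)
x = e[2:] + 0.6 * e[1:-1] + 0.4 * e[:-2]
print(ma_order_from_acf(x))    # typically 2
```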

17.
This article focuses on reducing the additional variance introduced by randomizing the responses. The idea of additive scrambling and its inverse is used along with (i) a split-sample approach and (ii) a double-response approach. Specifically, our proposal builds on the randomized response model of Gupta et al. (2006). We selected this model for improvement because it provides estimators of both the mean and the sensitivity level of a sensitive variable and outperforms all of its earlier competitors; indeed, the Gupta et al. (2006) sensitivity estimator is better than that of Gupta et al. (2010). Our suggested estimators are unbiased and perform better than the Gupta et al. (2006) estimator. The issue of privacy protection is also discussed.
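
A minimal sketch of plain additive scrambling, the ingredient being refined (the split-sample and double-response refinements are not shown, and the distribution choices here are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(9)

# Additive scrambling: respondents report Z = X + S, where S is noise
# drawn from a distribution known to the researcher, so E[X] = E[Z] - E[S].
n = 2000
x = rng.gamma(shape=2.0, scale=5.0, size=n)     # true sensitive values (hidden)
s = rng.normal(loc=0.0, scale=3.0, size=n)      # known scrambling distribution
z = x + s                                       # what the interviewer sees

mu_hat = z.mean() - 0.0                         # unbiased: subtract known E[S]
var_hat = z.var(ddof=1) - 3.0**2                # Var(X) = Var(Z) - Var(S)
print(mu_hat, var_hat)
```

The variance paid for privacy is visible here: Var(Z) exceeds Var(X) by the scrambling variance, which is exactly the inflation the article works to reduce.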

18.
We provide general conditions that ensure valid Laplace approximations to marginal likelihoods under model misspecification, and we derive Bayesian information criteria that include all terms of order Op(1). Under the conditions in Theorem 1 of Lv and Liu [J. R. Statist. Soc. B, 76 (2014), 141–167] and a continuity condition on the prior densities, asymptotic expansions with error terms of order op(1) are derived for the log-marginal likelihoods of possibly misspecified generalized linear models. We present numerical examples to illustrate the finite-sample performance of the proposed information criteria in misspecified models.
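
As a hedged illustration of the object being expanded: the one-parameter Laplace approximation to a log-marginal likelihood, checked on a conjugate normal model where the exact value is available (and where, the model being Gaussian, the approximation happens to be exact):

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(10)

n, tau = 50, 2.0
y = rng.normal(loc=1.0, scale=1.0, size=n)      # y_i ~ N(theta, 1), theta ~ N(0, tau^2)

def log_joint(theta):
    return norm.logpdf(y, theta, 1.0).sum() + norm.logpdf(theta, 0.0, tau)

theta_hat = y.sum() * tau**2 / (n * tau**2 + 1.0)   # posterior mode
hess = n + 1.0 / tau**2                             # -d^2 log_joint / d theta^2
laplace = log_joint(theta_hat) + 0.5 * np.log(2 * np.pi) - 0.5 * np.log(hess)

# Exact log-marginal, from integrating theta out analytically.
exact = (norm.logpdf(y, 0.0, 1.0).sum()
         + 0.5 * (y.sum() * tau)**2 / (n * tau**2 + 1.0)
         - 0.5 * np.log(n * tau**2 + 1.0))
print(laplace, exact)    # agree to machine precision
```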

19.
Structural change in any time series is practically unavoidable, so correctly detecting breakpoints plays a pivotal role in statistical modelling. This research considers segmented autoregressive models with exogenous variables and asymmetric GARCH errors, under GJR-GARCH and exponential-GARCH specifications, which use the leverage phenomenon to capture asymmetric responses to positive and negative shocks. The proposed models incorporate the skew Student-t distribution and demonstrate the advantages of this fat-tailed, skewed distribution over other distributions when structural changes appear in financial time series. We employ Bayesian Markov chain Monte Carlo methods to infer the locations of the structural change points and the model parameters, and we use the deviance information criterion to determine the optimal number of breakpoints via a sequential approach. Our models can accurately detect the number and locations of structural change points in simulation studies. For real data analysis, we examine the impacts of daily gold returns and the VIX on S&P 500 returns during 2007–2019. The proposed methods are able to integrate structural changes through the model parameters and to capture the variability of a financial market more efficiently.
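
The abstract names the GJR-GARCH specification; as a sketch of that recursion alone (skew Student-t innovations and the segmentation machinery are omitted, and all parameter values are illustrative):

```python
import numpy as np

rng = np.random.default_rng(11)

def gjr_garch_variance(eps, omega, alpha, gamma, beta):
    """GJR-GARCH(1,1) conditional variance: negative shocks receive the
    extra leverage coefficient gamma.
        sigma2_t = omega + (alpha + gamma * 1[eps_{t-1} < 0]) * eps_{t-1}^2
                   + beta * sigma2_{t-1}
    """
    sigma2 = np.empty(eps.size)
    # Rough unconditional level under symmetric shocks.
    sigma2[0] = omega / (1.0 - alpha - 0.5 * gamma - beta)
    for t in range(1, eps.size):
        leverage = alpha + (gamma if eps[t - 1] < 0 else 0.0)
        sigma2[t] = omega + leverage * eps[t - 1] ** 2 + beta * sigma2[t - 1]
    return sigma2

eps = rng.standard_t(df=8, size=1000) * 0.01     # stand-in return shocks
sigma2 = gjr_garch_variance(eps, omega=1e-6, alpha=0.05, gamma=0.10, beta=0.85)
print(np.sqrt(sigma2[-5:]))
```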

20.
We consider the classification of high-dimensional data under the strongly spiked eigenvalue (SSE) model. We create a new classification procedure based on the high-dimensional eigenstructure in the high-dimension, low-sample-size context, and we propose a distance-based classification procedure that uses a data transformation. We also prove that our proposed classification procedure has the consistency property for misclassification rates. We examine the performance of our classification procedure in simulations and in real data analyses using microarray data sets.
