Similar Documents
20 similar documents found (search time: 15 ms)
1.

This note discusses the approach of specifying a Gaussian Markov random field (GMRF) by the Cholesky triangle of the precision matrix. Such a representation can be made extremely sparse using numerical techniques for incomplete sparse Cholesky factorization, and it provides a very computationally efficient representation for simulating from the GMRF. However, we provide theoretical and empirical justification showing that the sparse Cholesky triangle representation is fragile when conditioning a GMRF on a subset of the variables or on observed data, meaning that the computational cost increases.
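The core mechanism behind Cholesky-based GMRF simulation can be sketched as follows; the small tridiagonal precision matrix is an illustrative stand-in, not an example from the paper. If Q = LLᵀ, then solving Lᵀx = z with z ~ N(0, I) yields x ~ N(0, Q⁻¹).

```python
import numpy as np

rng = np.random.default_rng(0)

n = 4
# Toy sparse-style precision matrix: tridiagonal and diagonally dominant,
# hence symmetric positive definite.
Q = (np.diag(np.full(n, 2.0))
     + np.diag(np.full(n - 1, -0.9), 1)
     + np.diag(np.full(n - 1, -0.9), -1))

L = np.linalg.cholesky(Q)      # lower Cholesky triangle of the precision
z = rng.standard_normal(n)
x = np.linalg.solve(L.T, z)    # back-substitution gives x ~ N(0, Q^{-1})

# The implied covariance of x is L^{-T} L^{-1} = Q^{-1}.
cov_implied = np.linalg.inv(L.T) @ np.linalg.inv(L)
```

For genuinely sparse high-dimensional Q one would use a sparse Cholesky routine rather than the dense `numpy` factorization shown here.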

2.
Efficient estimation of the regression coefficients in longitudinal data analysis requires a correct specification of the covariance structure. If misspecification occurs, it may lead to inefficient or biased estimators of parameters in the mean. One of the most commonly used methods for handling the covariance matrix is based on simultaneous modeling of its Cholesky decomposition. In this paper, we therefore reparameterize covariance structures in longitudinal data analysis through their modified Cholesky decomposition. Based on this decomposition, the within-subject covariance matrix is factored into a unit lower triangular matrix involving moving average coefficients and a diagonal matrix involving innovation variances, which are modeled as linear functions of covariates. We then propose a fully Bayesian inference for joint mean and covariance models based on this decomposition. A computationally efficient Markov chain Monte Carlo method, combining the Gibbs sampler and the Metropolis–Hastings algorithm, is implemented to simultaneously obtain the Bayesian estimates of the unknown parameters as well as their standard deviation estimates. Finally, several simulation studies and a real example are presented to illustrate the proposed methodology.
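The modified Cholesky decomposition referred to above, Σ = TDTᵀ with T unit lower triangular and D diagonal (the innovation variances), can be obtained from the ordinary Cholesky factor by rescaling its columns. This is only the algebraic core; the covariate regressions and MCMC machinery of the paper are omitted, and the toy Σ is illustrative.

```python
import numpy as np

# A small positive definite covariance matrix (illustrative).
Sigma = np.array([[4.0, 2.0, 0.6],
                  [2.0, 5.0, 1.5],
                  [0.6, 1.5, 3.0]])

C = np.linalg.cholesky(Sigma)  # ordinary lower Cholesky factor, Sigma = C C^T
d = np.diag(C)                 # positive diagonal of C
T = C / d                      # divide column j by d[j] -> unit lower triangular
D = np.diag(d ** 2)            # innovation variances
```

The entries of T play the role of the moving-average coefficients and the diagonal of D the innovation variances that the paper models as linear functions of covariates.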

3.
The solution of the generalized symmetric eigenproblem Ax = λBx is required in many multivariate statistical models, viz. canonical correlation, discriminant analysis, the multivariate linear model, and limited-information maximum likelihood. The problem can be solved by two efficient numerical algorithms: the Cholesky decomposition and the singular value decomposition. Practical considerations for implementation are also discussed.
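The Cholesky route for Ax = λBx can be sketched in a few lines: with B = LLᵀ, the substitution x = L⁻ᵀy reduces the problem to the standard symmetric eigenproblem Cy = λy with C = L⁻¹AL⁻ᵀ. The matrices below are toy inputs, not from the paper.

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
B = np.array([[4.0, 1.0],
              [1.0, 2.0]])   # symmetric positive definite

L = np.linalg.cholesky(B)
Linv = np.linalg.inv(L)
C = Linv @ A @ Linv.T          # symmetric standard-form matrix
lam, Y = np.linalg.eigh(C)     # standard symmetric eigenproblem
X = np.linalg.solve(L.T, Y)    # back-transform: x = L^{-T} y
```

In practice `scipy.linalg.eigh(A, B)` performs this reduction internally; the explicit version above just makes the role of the Cholesky factor visible.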

4.
In this work, the asymptotic unbiasedness and asymptotic uncorrelatedness of periodograms for periodically correlated spatial processes are established. This is done using the time-dependent spectral representation of periodically correlated spatial processes and the Cholesky factorization of the spectral density. A graphical method is also proposed to detect the period of periodically correlated spatial processes. To support the theory, a simulation study and a real data example are presented.

5.
Many applications require efficient sampling from Gaussian distributions. The method of choice depends on the dimension of the problem as well as the structure of the covariance matrix (Σ) or precision matrix (Q). The most common black-box routine for computing a sample is based on Cholesky factorization. In high dimensions, computing the Cholesky factor of Σ or Q may be prohibitive because the factor accumulates more non-zero entries than can be stored in memory. We compare different methods for computing the samples iteratively, adapting ideas from numerical linear algebra. These methods assume that matrix-vector products, Qv, are fast to compute. We show that some of the methods are competitive with and faster than Cholesky sampling, and that a parallel version of one method on a Graphical Processing Unit (GPU) using CUDA can achieve a speed-up of up to 30x. Moreover, one method is used to sample from the posterior distribution of petroleum reservoir parameters in a North Sea field, given seismic reflection data on a large 3D grid.
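The black-box Cholesky sampler mentioned above, written for the covariance parameterization: with Σ = LLᵀ and z ~ N(0, I), x = μ + Lz is distributed N(μ, Σ). The mean and covariance here are illustrative inputs.

```python
import numpy as np

rng = np.random.default_rng(1)

mu = np.array([1.0, -2.0])
Sigma = np.array([[2.0, 0.8],
                  [0.8, 1.5]])

L = np.linalg.cholesky(Sigma)
z = rng.standard_normal((2, 100_000))
x = mu[:, None] + L @ z        # 100k samples, one per column

sample_mean = x.mean(axis=1)   # should be close to mu
sample_cov = np.cov(x)         # should be close to Sigma
```

This is exactly the step that becomes prohibitive when the factor L no longer fits in memory, which is what motivates the iterative, matrix-vector-product-based samplers the paper compares.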

6.
Single index models are natural extensions of linear models and overcome the so-called curse of dimensionality. They are very useful for longitudinal data analysis. In this paper, we develop a new efficient estimation procedure for single index models with longitudinal data, based on the Cholesky decomposition and the local linear smoothing method. Asymptotic normality of the proposed estimators for both the parametric and nonparametric parts is established. Monte Carlo simulation studies show excellent finite-sample performance, and we illustrate our methods with a real data example.

7.
In longitudinal data analysis, efficient estimation of the regression coefficients requires a correct specification of the covariance structure, and efficient estimation of the covariance matrix requires a correct specification of the mean regression model. In this article, we propose a general semiparametric model for the mean and the covariance simultaneously using the modified Cholesky decomposition. A regression-spline-based approach within the framework of generalized estimating equations is proposed to estimate the parameters in the mean and the covariance. Under regularity conditions, asymptotic properties of the resulting estimators are established. Extensive simulations are conducted to investigate the performance of the proposed estimator, and finally a real data set is analysed using the proposed approach.

8.
Instantaneous dependence among several asset returns is the main reason for the computational and statistical complexity of working with full multivariate GARCH models. Using the Cholesky decomposition of the covariance matrix of such returns, we introduce a broad class of multivariate models in which univariate GARCH models are used for the variances of individual assets and parsimonious models for the time-varying unit lower triangular matrices. This approach, while reducing the number of parameters and the severity of the positive-definiteness constraint, has several advantages compared to the traditional orthogonal and related GARCH models. Its major drawback is the potential need for an a priori ordering or grouping of the stocks in a portfolio, which, as we show through a case study, can be exploited to reduce both the forecast error of the volatilities and the dimension of the parameter space. Moreover, the Cholesky decomposition, unlike its competitors, decomposes the normal likelihood function into a product of univariate normal likelihoods with independent parameters, resulting in fast estimation algorithms. Gaussian maximum likelihood methods for estimating the parameters are developed. The methodology is implemented for a real financial dataset with seven assets, and its forecasting power is compared with other existing models.

9.
The Cholesky decomposition is given for the inverse of a variance matrix occurring in repeated measures problems where observations have a correlation structure both within and between experimental units. The use of this decomposition is outlined for ML and REML estimation procedures.

10.
We consider a non-centered parameterization of the standard random-effects model, which is based on the Cholesky decomposition of the variance-covariance matrix. The regression type structure of the non-centered parameterization allows us to use Bayesian variable selection methods for covariance selection. We search for a parsimonious variance-covariance matrix by identifying the non-zero elements of the Cholesky factors. With this method we are able to learn from the data for each effect whether it is random or not, and whether covariances among random effects are zero. An application in marketing shows a substantial reduction of the number of free elements in the variance-covariance matrix.

11.
Riemann manifold Hamiltonian Monte Carlo (RMHMC) has the potential to produce high-quality Markov chain Monte Carlo output even for very challenging target distributions. To this end, a symmetric positive definite scaling matrix for RMHMC is proposed. The scaling matrix is obtained by applying a modified Cholesky factorization to the potentially indefinite negative Hessian of the target log-density. The methodology is able to exploit the sparsity of the Hessian, stemming from conditional independence modeling assumptions, and thus admits fast implementation of RMHMC even for high-dimensional target distributions. Moreover, the methodology can exploit log-concave conditional target densities, often encountered in Bayesian hierarchical models, for faster sampling and more straightforward tuning. The proposed methodology is compared to alternatives for some challenging targets and is illustrated by applying a state-space model to real data.
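A minimal stand-in for the idea of turning an indefinite negative Hessian into a positive definite metric: repeatedly add τI (growing τ) until an ordinary Cholesky factorization succeeds. This simple diagonal-shift strategy is an assumption for illustration, not the paper's modified Cholesky factorization, and the function name and toy matrix are hypothetical.

```python
import numpy as np

def make_positive_definite(H, beta=1e-3, max_tries=60):
    """Shift H by tau*I until np.linalg.cholesky succeeds (simple sketch)."""
    n = H.shape[0]
    tau = 0.0 if np.all(np.diag(H) > 0) else beta
    for _ in range(max_tries):
        try:
            shifted = H + tau * np.eye(n)
            L = np.linalg.cholesky(shifted)   # raises LinAlgError if indefinite
            return shifted, L
        except np.linalg.LinAlgError:
            tau = max(2.0 * tau, beta)        # grow the shift and retry
    raise ValueError("could not regularize H")

H = np.array([[1.0, 2.0],
              [2.0, 1.0]])                    # indefinite: eigenvalues 3 and -1
H_pd, L = make_positive_definite(H)
eigmin = np.linalg.eigvalsh(H_pd).min()
```

True modified Cholesky algorithms (e.g. Gill–Murray type) perturb only as much as needed per pivot and preserve sparsity, which is what makes the paper's approach viable in high dimensions.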

12.
In the present study we compare three state rotation methods in modelling the impact of the US economy on the Finnish economy, i.e. Schur decomposition, eigenvalue analysis and singular value decomposition. Singular value decomposition is seen to provide a robust approximation of the state rotation in most cases studied, irrespective of whether the characteristic roots of the state transition matrix are complex. Thus, singular value decomposition seems to be a viable computational device not only in estimating the system matrices of the state space model, but also in state rotation, as compared to the more involved techniques based on eigenvalue analysis or Schur decomposition.

13.
In this paper, we propose a multivariate t regression model with its mean and scale covariance modeled jointly for the analysis of longitudinal data. A modified Cholesky decomposition is adopted to factorize the dependence structure in terms of unconstrained autoregressive and scale innovation parameters. We present three distinct representations of the log-likelihood function of the model and study the associated properties. A computationally efficient Fisher scoring algorithm is developed for carrying out maximum likelihood estimation. The technique for the prediction of future responses in this context is also investigated. The implementation of the proposed methodology is illustrated through two real-life examples and extensive simulation studies.

14.
Justice (1977) has presented a Levinson-type solution for the two-dimensional Wiener filtering problem. Since the solution is based on the Szego polynomials, it is necessary to calculate the bivariate Szego polynomials to obtain it. In this paper, an alternative solution of the problem is proposed, which is based on the block Cholesky decomposition of the inverse of a Hermitian block Toeplitz matrix. Since the block Cholesky decomposition can be accomplished through the Whittle algorithm, the new solution is easy to implement in a computer program.

15.
Varying-coefficient models are very useful for longitudinal data analysis. In this paper, we focus on varying-coefficient models for longitudinal data. We develop a new estimation procedure using Cholesky decomposition and profile least squares techniques. Asymptotic normality for the proposed estimators of varying-coefficient functions has been established. Monte Carlo simulation studies show excellent finite-sample performance. We illustrate our methods with a real data example.

16.
We propose a new algorithm for carrying out all possible subset regressions using the triangular decomposition. This algorithm is simple but computationally very efficient compared with the existing algorithms based on sweeping or the QR decomposition.
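A brute-force reference for the all-possible-subsets task: fit OLS on every nonempty subset of columns and record the residual sum of squares. This naive version refits from scratch for each subset, which is exactly the cost the paper's triangular-decomposition algorithm avoids; the simulated design is illustrative.

```python
import itertools
import numpy as np

rng = np.random.default_rng(2)

n, p = 50, 3
X = rng.standard_normal((n, p))
# True coefficients: column 1 is irrelevant (coefficient 0).
y = X @ np.array([1.0, 0.0, -2.0]) + 0.1 * rng.standard_normal(n)

rss = {}
for k in range(1, p + 1):
    for subset in itertools.combinations(range(p), k):
        Xs = X[:, subset]
        beta, *_ = np.linalg.lstsq(Xs, y, rcond=None)
        rss[subset] = float(np.sum((y - Xs @ beta) ** 2))

best = min(rss, key=rss.get)   # subset with the smallest RSS
```

With p predictors there are 2^p − 1 nonempty subsets, so clever updating schemes (sweeping, QR updates, or the triangular decomposition above) matter as soon as p grows.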

17.
Missing data in longitudinal studies can create enormous challenges in data analysis when coupled with the positive-definiteness constraint on a covariance matrix. For complete balanced data, the Cholesky decomposition of a covariance matrix makes it possible to remove the positive-definiteness constraint and use a generalized linear model setup to jointly model the mean and covariance using covariates (Pourahmadi, 2000). However, this approach may not be directly applicable when the longitudinal data are unbalanced, as coherent regression models for the dependence across all times and subjects may not exist. Within the existing generalized linear model framework, we show how to overcome this and other challenges by embedding the covariance matrix of the observed data for each subject in a larger covariance matrix and employing the familiar EM algorithm to compute the maximum likelihood estimates of the parameters and their standard errors. We illustrate and assess the methodology using real data sets and simulations.

18.
The authors address the problem of likelihood-based inference for correlated diffusions. Such a task presents two issues: the positive-definiteness constraint on the diffusion matrix and the intractability of the likelihood. The first issue is handled by using the Cholesky factorization of the diffusion matrix. To deal with the unavailability of the likelihood, a generalization of the data augmentation framework of Roberts and Stramer [Roberts and Stramer (2001) Biometrika 88(3), 603–621] to d-dimensional correlated diffusions, including multivariate stochastic volatility models, is given. The methodology is illustrated through simulated and real data sets. The Canadian Journal of Statistics 39: 52–72; 2011 © 2011 Statistical Society of Canada

19.
This paper presents a fully Bayesian approach to multivariate t regression models whose mean vector and scale covariance matrix are modelled jointly for analyzing longitudinal data. The scale covariance structure is factorized in terms of unconstrained autoregressive and scale innovation parameters through a modified Cholesky decomposition. A computationally flexible data augmentation sampler coupled with the Metropolis-within-Gibbs scheme is developed for computing the posterior distributions of parameters. The Bayesian predictive inference for the future response vector is also investigated. The proposed methodologies are illustrated through a real example from a sleep dose–response study.

20.
This article proposes a dynamic framework for modeling and forecasting realized covariance matrices using vine copulas to allow for more flexible dependencies between assets. Our model automatically guarantees positive definiteness of the forecast through the use of a Cholesky decomposition of the realized covariance matrix. We explicitly account for long-memory behavior by using fractionally integrated autoregressive moving average (ARFIMA) and heterogeneous autoregressive (HAR) models for the individual elements of the decomposition. Furthermore, our model incorporates non-Gaussian innovations and GARCH effects, accounting for volatility clustering and unconditional kurtosis. The dependence structure between assets is studied using vine copula constructions, which allow for nonlinearity and asymmetry without suffering from the inflexible tail behavior or symmetry restrictions of conventional multivariate models. Moreover, the copulas have a direct impact on the point forecasts of the realized covariance matrices, since these are computed as a nonlinear transformation of the forecasts for the Cholesky matrix. Besides studying in-sample properties, we assess the usefulness of our method in a one-day-ahead forecasting framework, comparing recent types of models for the realized covariance matrix based on a model confidence set approach. Additionally, we find that in Value-at-Risk (VaR) forecasting, vine models lead to lower capital requirements owing to smoother and more accurate forecasts.
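The HAR regression applied above to each element of the Cholesky decomposition can be sketched as follows: tomorrow's value is regressed on today's value and on weekly (5-day) and monthly (22-day) averages of the series. The simulated series is an illustrative stand-in for one Cholesky element, and the lag lengths are the conventional HAR choices, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(3)
x = np.abs(rng.standard_normal(300))   # stand-in for one Cholesky-element series

# Build HAR regressors for targets x[22], ..., x[299].
daily = x[21:-1]
weekly = np.array([x[t - 4:t + 1].mean() for t in range(21, len(x) - 1)])
monthly = np.array([x[t - 21:t + 1].mean() for t in range(21, len(x) - 1)])
target = x[22:]

Z = np.column_stack([np.ones_like(daily), daily, weekly, monthly])
beta, *_ = np.linalg.lstsq(Z, target, rcond=None)   # OLS fit of the HAR model
fitted = Z @ beta
```

The paper replaces this plain OLS step with ARFIMA/HAR dynamics plus GARCH innovations and ties the elements together through vine copulas; the sketch only shows the marginal regression structure.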

