Similar Literature
1.
We consider the problem of learning a Gaussian variational approximation to the posterior distribution for a high-dimensional parameter, where we impose sparsity in the precision matrix to reflect appropriate conditional independence structure in the model. Incorporating sparsity in the precision matrix allows the Gaussian variational distribution to be both flexible and parsimonious, and the sparsity is achieved through parameterization in terms of the Cholesky factor. Efficient stochastic gradient methods that make appropriate use of gradient information for the target distribution are developed for the optimization. We consider alternative estimators of the stochastic gradients, which have lower variance and are more stable. Our approach is illustrated using generalized linear mixed models and state-space models for time series.
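The Cholesky-factor parameterization described above can be sketched as follows. Assuming a lower triangular factor C with precision matrix Λ = C Cᵀ (zeros in C encoding conditional independence), a draw from N(μ, Λ⁻¹) needs only one triangular solve; the function name and numbers are illustrative, not taken from the paper.

```python
import numpy as np
from scipy.linalg import solve_triangular

def sample_gaussian_precision(mu, C, rng):
    """Draw theta ~ N(mu, (C C^T)^{-1}) given the lower-triangular
    Cholesky factor C of the precision matrix."""
    z = rng.standard_normal(len(mu))
    # Solve C^T x = z; then Cov(x) = C^{-T} C^{-1} = (C C^T)^{-1}.
    x = solve_triangular(C, z, trans='T', lower=True)
    return mu + x

rng = np.random.default_rng(0)
mu = np.zeros(3)
C = np.array([[2.0, 0.0, 0.0],
              [0.5, 1.5, 0.0],
              [0.0, 0.3, 1.0]])   # zero (3,1) entry: imposed sparsity
draws = np.array([sample_gaussian_precision(mu, C, rng) for _ in range(20000)])
emp_cov = np.cov(draws.T)
target_cov = np.linalg.inv(C @ C.T)
```

The empirical covariance of the draws matches the inverse of the sparse precision matrix, which is the point of the parameterization.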

2.
In this paper the problem of selecting the best of several normal populations in terms of Mahalanobis distance (MD), when the population variance-covariance matrices are equal and unknown, is discussed. The selection rule enunciated is shown to approximately satisfy the usual requirement of a minimum guaranteed probability of correct selection. Methods of computing the tables required for application of the rule are discussed.
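A minimal sketch of selection by Mahalanobis distance, assuming for simplicity that the common covariance matrix is known (the paper treats the harder case where it must be estimated); all names and numbers here are hypothetical.

```python
import numpy as np

def mahalanobis_sq(xbar, mu0, S):
    """Squared Mahalanobis distance of a sample mean from a reference
    point, under a common variance-covariance matrix S."""
    d = xbar - mu0
    return float(d @ np.linalg.solve(S, d))

# Hypothetical setup: select the population whose mean is farthest
# (in MD) from the origin.
S = np.array([[2.0, 0.3], [0.3, 1.0]])
means = [np.array([1.0, 0.0]), np.array([0.0, 2.5]), np.array([1.0, 1.0])]
d2 = [mahalanobis_sq(m, np.zeros(2), S) for m in means]
best = int(np.argmax(d2))
```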

3.
This paper is concerned with obtaining an expression for the conditional variance-covariance matrix when the random vector follows a gamma-scaled multivariate normal distribution. We show that the conditional variance is not degenerate, as it is in the multivariate normal distribution, but depends upon a positive function for which various asymptotic properties are derived. A discussion section is included commenting on the usefulness of these results.

4.
In this paper, we discuss the selection of random effects within the framework of generalized linear mixed models (GLMMs). Based on a reparametrization of the covariance matrix of random effects in terms of the modified Cholesky decomposition, we propose to add a shrinkage penalty term to the penalized quasi-likelihood (PQL) function of the variance components for selecting effective random effects. The shrinkage penalty term is taken as a function of the variance of the random effects, motivated by the fact that if the variance is zero then the corresponding variable is no longer random (with probability one). The proposed method takes advantage of the convenient computation of the PQL estimation and the appealing properties of certain shrinkage penalty functions such as LASSO and SCAD. We propose a backfitting algorithm to estimate the fixed effects and variance components in GLMMs, which simultaneously selects the effective random effects. Simulation studies show that the proposed approach performs quite well in selecting effective random effects in GLMMs, and a real data analysis illustrates the method.
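The zero-variance idea above can be illustrated with one common modified-Cholesky-type parameterization, Ψ = Λ Γ Γᵀ Λ with Λ diagonal (standard-deviation-like terms) and Γ unit lower triangular; this is a sketch of the general device, not necessarily the exact parameterization used in the paper.

```python
import numpy as np

def re_covariance(lam, Gamma):
    """Random-effects covariance Psi = Lam @ Gamma @ Gamma.T @ Lam.
    Setting lam[j] = 0 makes the j-th row and column of Psi exactly
    zero, i.e. the j-th random effect is removed from the model."""
    Lam = np.diag(lam)
    return Lam @ Gamma @ Gamma.T @ Lam

Gamma = np.array([[1.0, 0.0],
                  [0.4, 1.0]])                      # unit lower triangular
Psi_full   = re_covariance(np.array([1.2, 0.8]), Gamma)
Psi_shrunk = re_covariance(np.array([1.2, 0.0]), Gamma)  # 2nd effect dropped
```

Shrinking a single Λ entry to zero is what allows a LASSO/SCAD-type penalty on the variance scale to perform random-effect selection.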

5.
The estimation of variance components of a heteroscedastic random model is discussed in this paper. Maximum likelihood (ML) estimation is described for one-way heteroscedastic random models. The proportionality condition, that the cell variance is proportional to the cell sample size, is used to eliminate the effect of heteroscedasticity. Algebraic expressions for the estimators are obtained for the model; these depend mainly on the inverse of the variance-covariance matrix of the observation vector, so the variance-covariance matrix is obtained and formulae for its inversion are given. A Monte Carlo study is conducted in which five variance patterns with different numbers of cells are considered. For each variance pattern, 1000 Monte Carlo samples are drawn, and the Monte Carlo biases and MSEs of the variance-component estimators are calculated. With respect to both bias and MSE, the ML estimators of the variance components are found to perform well.

6.
As researchers increasingly rely on linear mixed models to characterize longitudinal data, there is a need for improved techniques for selecting among this class of models, which requires specification of both fixed and random effects via a mean model and a variance-covariance structure. The process is further complicated when the fixed and/or random effects are non-nested between models. This paper explores the development of a hypothesis test to compare non-nested linear mixed models based on extensions of the work begun by Sir David Cox. We assess the robustness of this approach for comparing models containing correlated measures of body fat for predicting longitudinal cardiometabolic risk.

7.
In this article we present a technique for implementing large-scale optimal portfolio selection. We use high-frequency daily data to capture valuable statistical information in asset returns. We describe several statistical issues involved in quantitative approaches to portfolio selection. Our methodology applies to large-scale portfolio-selection problems in which the number of possible holdings is large relative to the estimation period provided by historical data. We illustrate our approach on an equity database that consists of stocks from the Standard and Poor's index, and we compare our portfolios to this benchmark index. Our methodology differs from the usual quadratic programming approach to portfolio selection in three ways: (1) We employ informative priors on the expected returns and variance-covariance matrices, (2) we use daily data for estimation purposes, with upper and lower holding limits for individual securities, and (3) we use a dynamic asset-allocation approach that is based on reestimating and then rebalancing the portfolio weights on a prespecified time window. The key inputs to the optimization process are the predictive distributions of expected returns and the predictive variance-covariance matrix. We describe the statistical issues involved in modeling these inputs for high-dimensional portfolio problems in which our data frequency is daily. In our application, we find that our optimal portfolio outperforms the underlying benchmark.
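As a rough illustration of how holding limits enter, here is a heavily simplified stand-in for the constrained optimization described above (the paper's full method uses informative priors and predictive distributions; the clip-and-renormalize step below is only a crude heuristic, and all inputs are hypothetical).

```python
import numpy as np

def mv_weights(mu, Sigma, lower, upper):
    """Unconstrained mean-variance weights w proportional to
    Sigma^{-1} mu, then clipped to per-security holding limits and
    renormalized to sum to one."""
    w = np.linalg.solve(Sigma, mu)
    w = w / w.sum()
    w = np.clip(w, lower, upper)
    return w / w.sum()

# Hypothetical predictive inputs for three securities.
mu = np.array([0.05, 0.03, 0.04])
Sigma = np.array([[0.10, 0.01, 0.02],
                  [0.01, 0.08, 0.01],
                  [0.02, 0.01, 0.12]])
w = mv_weights(mu, Sigma, lower=0.0, upper=0.6)
```

In the dynamic version described in the abstract, this computation would be repeated on a rolling window, with mu and Sigma replaced by the reestimated predictive moments at each rebalancing date.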

8.
This note discusses the approach of specifying a Gaussian Markov random field (GMRF) by the Cholesky triangle of the precision matrix. Such a representation can be made extremely sparse using numerical techniques for incomplete sparse Cholesky factorization, and provides a very computationally efficient representation for simulating from the GMRF. However, we provide theoretical and empirical justification showing that the sparse Cholesky triangle representation is fragile when conditioning a GMRF on a subset of the variables or on observed data, in the sense that the computational cost increases.

9.
We consider the problem of estimating the two parameters of the discrete Good distribution. We first show that the sufficient statistics for the parameters are the arithmetic and geometric means. The maximum likelihood estimators (MLE's) of the parameters are obtained by numerically solving a system of equations involving the Lerch zeta function and the sufficient statistics. We find an expression for the asymptotic variance-covariance matrix of the MLE's, which can be evaluated numerically. We show that the probability mass function satisfies a simple recurrence equation linear in the two parameters, and propose the quadratic distance estimator (QDE), which can be computed with an iteratively reweighted least-squares algorithm. The QDE is easy to calculate and admits a simple expression for its asymptotic variance-covariance matrix. We compute this matrix for the MLE's and the QDE for various values of the parameters and observe that the QDE has very high asymptotic efficiency. Finally, we present a numerical example.

10.
In this paper, we introduce non-centered and partially non-centered MCMC algorithms for stochastic epidemic models. Centered algorithms previously considered in the literature perform adequately well for small data sets. However, due to the high dependence inherent in the models between the missing data and the parameters, the performance of the centered algorithms gets appreciably worse when larger data sets are considered. Therefore non-centered and partially non-centered algorithms are introduced and are shown to outperform the existing centered algorithms.

11.
This paper demonstrates how certain statistics, computed from a sample of size n (from almost any distribution), may be simulated by using a sequence of substantially fewer than n random normal variates. Many statistics, θ, including almost all maximum likelihood estimates, can be expressed in terms of the sample trigonometric moments (STM). The STM are asymptotically multivariate normal with a mean vector and variance-covariance matrix easily expressible in terms of equally spaced characteristic function evaluations. Thus one only needs to know the Fourier transform, or equivalently the characteristic function, associated with the elements of any moderate to large i.i.d. sample, and have access to a normal random number generator, to generate a sequence of STM with distributional properties almost identical to those of STM computed from that sample. These STM can in turn be used to compute the desired statistic θ.
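Restricting to the cosine part for brevity, the mean and covariance of the STM follow from the characteristic function via the product-to-sum identity; below is a sketch for a standard normal sample (symmetric, so the characteristic function is real). The full construction in the paper also carries the sine moments.

```python
import numpy as np

def cos_moment_mean_cov(phi, ts, n):
    """Mean vector and covariance matrix of the cosine sample
    trigonometric moments c_j = (1/n) * sum_i cos(t_j X_i), for a
    symmetric distribution with real characteristic function phi."""
    ts = np.asarray(ts, float)
    m = phi(ts)
    cov = np.empty((len(ts), len(ts)))
    for j, tj in enumerate(ts):
        for k, tk in enumerate(ts):
            # cos(a)cos(b) = [cos(a-b) + cos(a+b)] / 2
            cov[j, k] = 0.5 * (phi(tj - tk) + phi(tj + tk)) - m[j] * m[k]
    return m, cov / n

phi_normal = lambda t: np.exp(-0.5 * np.asarray(t, float) ** 2)  # N(0,1) cf
m, V = cos_moment_mean_cov(phi_normal, [0.5, 1.0], n=500)
rng = np.random.default_rng(1)
stm = rng.multivariate_normal(m, V)   # simulated STM: 2 normals, not 500 draws
```

The point of the construction is the last line: two normal variates stand in for the moments of a sample of 500.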

12.
An unknown graph is partially observed by selecting a vertex sample and observing the edges in the subgraph induced by the sample. The sample is selected by either simple random sampling or Bernoulli sampling. We consider the problem of estimating the numbers of vertices of different degrees in the unknown graph by using the sample information. Unbiased estimators are given and their variance-covariance matrix is shown to depend on a set of intrinsic graph parameters which can hardly be satisfactorily estimated from the sample information without further assumptions. In particular, the problem of estimating the number of isolates (vertices of degree zero) is considered in some detail.

13.
This article proposes a dynamic framework for modeling and forecasting realized covariance matrices using vine copulas to allow for more flexible dependencies between assets. Our model automatically guarantees positive definiteness of the forecast through the use of a Cholesky decomposition of the realized covariance matrix. We explicitly account for long-memory behavior by using fractionally integrated autoregressive moving average (ARFIMA) and heterogeneous autoregressive (HAR) models for the individual elements of the decomposition. Furthermore, our model incorporates non-Gaussian innovations and GARCH effects, accounting for volatility clustering and unconditional kurtosis. The dependence structure between assets is studied using vine copula constructions, which allow for nonlinearity and asymmetry without suffering from the inflexible tail behavior or symmetry restrictions of conventional multivariate models. Further, the copulas have a direct impact on the point forecasts of the realized covariance matrices, since these are computed as a nonlinear transformation of the forecasts for the Cholesky matrix. Besides studying in-sample properties, we assess the usefulness of our method in a one-day-ahead forecasting framework, comparing recent types of models for the realized covariance matrix based on a model confidence set approach. Additionally, we find that in Value-at-Risk (VaR) forecasting, vine models entail lower capital requirements due to smoother and more accurate forecasts.
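The positive-definiteness guarantee mentioned above is purely algebraic: whatever values the element-wise forecasts take, L Lᵀ is a valid covariance matrix. A sketch with hypothetical forecasted Cholesky entries:

```python
import numpy as np

def reconstruct_cov(chol_elements, d):
    """Rebuild a covariance forecast from d*(d+1)/2 forecasted
    lower-triangular Cholesky entries (each forecast, e.g., by an
    ARFIMA/HAR model).  L @ L.T is positive semi-definite by
    construction, and positive definite whenever all diagonal
    entries are nonzero."""
    L = np.zeros((d, d))
    L[np.tril_indices(d)] = chol_elements
    return L @ L.T

# Hypothetical forecasted Cholesky entries for d = 3 assets,
# in row-major lower-triangular order.
fc = np.array([0.9, 0.2, 1.1, -0.1, 0.3, 0.8])
Sigma_fc = reconstruct_cov(fc, 3)
eigs = np.linalg.eigvalsh(Sigma_fc)
```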

14.
We propose a Bayesian stochastic search approach to selecting restrictions on multivariate regression models where the errors exhibit deterministic or stochastic conditional volatilities. We develop a Markov chain Monte Carlo (MCMC) algorithm that generates posterior restrictions on the regression coefficients and Cholesky decompositions of the covariance matrix of the errors. Numerical simulations with artificially generated data show that the proposed method is effective in selecting the data-generating model restrictions and improving the forecasting performance of the model. Applying the method to daily foreign exchange rate data, we conduct stochastic search on a VAR model with stochastic conditional volatilities.

15.
The paper considers estimation of the p (> 3)-variate normal mean when the variance-covariance matrix is diagonal with unknown diagonal elements. A class of James-Stein estimators is developed and compared with the sample mean under an empirical minimax stopping rule. Asymptotic risk expansions are provided for both the sequential sample mean and the sequential James-Stein estimators. It is shown that the James-Stein estimators dominate the sample mean in a certain asymptotic sense.
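For orientation, here is a textbook-style (non-sequential) positive-part James-Stein estimator for the diagonal-covariance setting, with each coordinate standardized by the estimated variance of its sample mean; this is only a sketch of the class of estimators, not the paper's sequential rule.

```python
import numpy as np

def james_stein(xbar, var_hat, n):
    """Positive-part James-Stein shrinkage of a p-variate normal mean
    toward zero, coordinates standardized by the estimated variances
    of the sample means (diagonal covariance, unknown elements)."""
    p = len(xbar)
    se2 = var_hat / n                       # variance of each sample mean
    z2 = np.sum(xbar ** 2 / se2)            # standardized squared norm
    shrink = max(0.0, 1.0 - (p - 2) / z2)   # positive-part factor in [0, 1]
    return shrink * xbar

rng = np.random.default_rng(2)
p, n = 6, 40
scales = np.array([1.0, 2.0, 0.5, 1.5, 1.0, 3.0])   # unequal variances
X = rng.standard_normal((n, p)) * scales
est = james_stein(X.mean(axis=0), X.var(axis=0, ddof=1), n)
```

The shrinkage factor never exceeds one, so the estimator always pulls the sample mean toward the origin, which is the mechanism behind its risk dominance.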

16.
We consider the linear feature selection problem of obtaining a nonzero 1 × n matrix B which minimizes the probability of misclassification based on the Bayes decision rule applied to the random variable Y = BX, where X is a random n-vector arising from one of m Gaussian populations with equal covariances and equal a priori probabilities. It is shown that the optimal B satisfies a fixed-point equation B = F(B), which can be solved by successive substitution.

17.
We consider fast lattice approximation methods for a solution of a certain stochastic non-local pseudodifferential operator equation. This equation defines a Matérn class random field. We approximate the pseudodifferential operator with truncated Taylor expansion, spectral domain error functional minimization and rounding approximations. This allows us to construct Gaussian Markov random field approximations. We construct lattice approximations with finite-difference methods. We show that the solutions can be constructed with overdetermined systems of stochastic matrix equations with sparse matrices, and we solve the system of equations with a sparse Cholesky decomposition. We consider convergence of the truncated Taylor approximation by studying band-limited Matérn fields. We consider the convergence of the discrete approximations to the continuous limits. Finally, we study numerically the accuracy of different approximation methods with an interpolation problem.

18.
Efficient estimation of the regression coefficients in longitudinal data analysis requires a correct specification of the covariance structure. If misspecification occurs, it may lead to inefficient or biased estimators of parameters in the mean. One of the most commonly used methods for handling the covariance matrix is based on simultaneous modeling via the Cholesky decomposition. Therefore, in this paper, we reparameterize covariance structures in longitudinal data analysis through the modified Cholesky decomposition. Based on this decomposition, the within-subject covariance matrix is decomposed into a unit lower triangular matrix involving moving-average coefficients and a diagonal matrix involving innovation variances, which are modeled as linear functions of covariates. We then propose a fully Bayesian inference for the joint mean and covariance models based on this decomposition. A computationally efficient Markov chain Monte Carlo method, combining the Gibbs sampler and the Metropolis–Hastings algorithm, is implemented to simultaneously obtain the Bayesian estimates of the unknown parameters and their standard deviation estimates. Finally, several simulation studies and a real example are presented to illustrate the proposed methodology.
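The modified Cholesky decomposition used above can be computed directly from the ordinary Cholesky factor: if Σ = C Cᵀ, then L = C diag(C)⁻¹ is unit lower triangular and the squared diagonal of C collects the innovation variances. A minimal sketch:

```python
import numpy as np

def modified_cholesky(Sigma):
    """Modified Cholesky decomposition Sigma = L D L.T, with L unit
    lower triangular (moving-average coefficients) and D diagonal
    (innovation variances), derived from the ordinary Cholesky factor."""
    C = np.linalg.cholesky(Sigma)
    d = np.diag(C)
    L = C / d               # divide each column j by its diagonal entry d[j]
    return L, d ** 2        # innovation variances are the squared diagonal

# Hypothetical within-subject covariance for 3 repeated measurements.
Sigma = np.array([[4.0, 1.0, 0.5],
                  [1.0, 3.0, 0.8],
                  [0.5, 0.8, 2.0]])
L, innov = modified_cholesky(Sigma)
```

The entries of L and log(innov) are the unconstrained quantities that the abstract models as linear functions of covariates, which is what makes this reparameterization convenient for regression-style covariance modeling.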

19.
In this study, an effective change point model for the generalized variance control chart is proposed using maximum likelihood estimation, and the required statistics are derived together with their distributional properties. The procedure, when used with generalized variance control charts, helps practitioners both to control the multivariate process dispersion and to detect the time of the change in the variance-covariance matrix of a process. The procedure starts after the chart issues a signal. Several structural changes for the variance-covariance matrix are considered, and the precision and accuracy of the proposed method are discussed.

20.
A useful parameterization of the exponential failure model with imperfect signalling, under a random censoring scheme, is considered to accommodate covariates. Simple sufficient conditions for the existence, uniqueness, consistency, and asymptotic normality of maximum likelihood estimators for the parameters in these models are given. The results are then applied to derive the asymptotic properties of the likelihood ratio test for a difference between failure signalling proportions between groups in a 'one-way' classification.
