Similar Literature
20 similar documents found.
1.
The estimation of the covariance matrix is important in the analysis of bivariate longitudinal data. A good estimator of the covariance matrix can improve the efficiency of the estimators of the mean regression coefficients. The covariance estimation is also of interest in its own right, but modelling the covariance matrix of bivariate longitudinal data is challenging because of its complex structure and the positive-definiteness constraint. In addition, most existing approaches are based on maximum likelihood, which is very sensitive to outliers and heavy-tailed error distributions. In this article, an adaptive robust estimation method is proposed for bivariate longitudinal data. Unlike the existing likelihood-based methods, the proposed method can adapt to different error distributions. Specifically, we first use the modified Cholesky block decomposition to parameterize the covariance matrices. Second, we apply the bounded Huber score function to develop a set of robust generalized estimating equations that estimate the parameters of the mean and covariance models simultaneously. A data-driven approach is presented to select the tuning parameter c in the Huber score function, ensuring that the proposed method is both robust and efficient. A simulation study and a real data analysis illustrate the robustness and efficiency of the proposed approach.
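As a rough, self-contained illustration of the robustification ingredient (not the authors' full GEE implementation), the sketch below shows Huber's bounded score function together with one possible data-driven choice of the cut-off c based on MAD-standardized residuals; the function names and the quantile-based tuning rule are assumptions made for illustration.

```python
import numpy as np

def huber_score(r, c=1.345):
    """Huber's bounded score (psi) function applied to standardized residuals."""
    return np.clip(r, -c, c)

def choose_c(residuals, quantile=0.95):
    """Data-driven cut-off: standardize residuals by a robust scale (MAD) and
    pick c so that roughly `quantile` of them remain unbounded. This tuning
    rule is an illustrative stand-in for the paper's selector."""
    mad = np.median(np.abs(residuals - np.median(residuals))) / 0.6745
    return np.quantile(np.abs(residuals / mad), quantile)

# Toy example: residuals with a few gross outliers.
rng = np.random.default_rng(0)
res = np.concatenate([rng.normal(size=200), [8.0, -10.0, 12.0]])
c = choose_c(res)
mad = np.median(np.abs(res - np.median(res))) / 0.6745
psi = huber_score(res / mad, c)
print("chosen c:", round(c, 3), "max |psi|:", round(float(np.abs(psi).max()), 3))
```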

2.
宋鹏 et al., 《统计研究》 2020, 37(7): 116-128
The estimation of high-dimensional covariance matrices has become a fundamental problem in the statistical analysis of big data. Traditional methods require the data to satisfy a normality assumption and ignore the influence of outliers, so they no longer meet the needs of applications; more robust estimation methods are urgently needed. For high-dimensional covariance matrices, a robust and easily implemented mean-median estimation method based on subsample grouping has been proposed, but the matrix it estimates is neither positive definite nor sparse. To address this problem, this paper introduces a centered regularization algorithm that remedies the shortcoming of the original method: by imposing an L1-norm penalty on the off-diagonal elements of the estimated matrix during the optimization, the resulting estimate becomes positive definite and sparse, which substantially increases its practical value. In numerical simulations, the proposed centered-regularized robust estimator achieves higher estimation accuracy and more closely matches the sparsity structure of the true matrix. In a subsequent empirical portfolio analysis, the minimum-variance portfolio constructed from the centered-regularized robust estimator exhibits lower return volatility than portfolios based on the traditional sample covariance estimator, the mean-median estimator, and the RA-LASSO method.
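A minimal sketch of the two ingredients described above: a subsample-grouped, element-wise-median covariance estimate, followed by soft-thresholding of the off-diagonal entries and an eigenvalue floor to keep the estimate positive definite. The grouping scheme, penalty level and eigenvalue floor are illustrative assumptions, not the paper's exact centered-regularization algorithm.

```python
import numpy as np

def grouped_median_cov(X, n_groups=10):
    """Robust covariance: split the sample into groups, compute each group's
    covariance, then take the element-wise median across groups."""
    groups = np.array_split(X, n_groups)
    covs = np.stack([np.cov(g, rowvar=False) for g in groups])
    return np.median(covs, axis=0)

def sparsify_pd(S, lam=0.05, eps=1e-4):
    """Soft-threshold the off-diagonal entries (an L1-type shrinkage) and
    clip eigenvalues from below so the result is positive definite."""
    off = np.sign(S) * np.maximum(np.abs(S) - lam, 0.0)
    np.fill_diagonal(off, np.diag(S))
    w, V = np.linalg.eigh(off)
    return V @ np.diag(np.maximum(w, eps)) @ V.T

rng = np.random.default_rng(1)
X = rng.standard_t(df=3, size=(500, 20))       # heavy-tailed data
Sigma_hat = sparsify_pd(grouped_median_cov(X))
print("min eigenvalue:", np.linalg.eigvalsh(Sigma_hat).min())
```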

3.
王斌会 《统计研究》 2007, 24(8): 72-76
Traditional multivariate statistical methods, such as principal component analysis and factor analysis, share a common feature: they compute the sample mean vector and covariance matrix and derive other statistics from these two quantities. When the sample contains no outliers, these methods give good results; but when outliers are present, the results are easily distorted, because the classical mean vector and covariance matrix are not robust statistics. This paper studies the algorithm of the currently popular FAST-MCD method, constructs a robust mean vector and a robust covariance matrix, applies them to principal component analysis, and proposes improvements to address the method's shortcomings. Simulation and empirical results show that the improved method and the new robust estimators do resist the influence of outliers and greatly reduce their impact on the results.
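For readers who want to experiment with the general recipe, the sketch below plugs an MCD covariance estimate (scikit-learn's FAST-MCD implementation) into an eigendecomposition to obtain robust principal axes; it illustrates the basic MCD-then-PCA idea rather than the specific improvements proposed in the paper.

```python
import numpy as np
from sklearn.covariance import MinCovDet

rng = np.random.default_rng(2)
X = rng.normal(size=(200, 5))
X[:10] += 10.0                           # contaminate 5% of the rows with outliers

mcd = MinCovDet(random_state=0).fit(X)   # robust location and scatter via FAST-MCD
eigval, eigvec = np.linalg.eigh(mcd.covariance_)
order = np.argsort(eigval)[::-1]
loadings = eigvec[:, order]              # robust principal axes
scores = (X - mcd.location_) @ loadings  # robust principal component scores
print("robust explained variances:", np.round(eigval[order], 3))
```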

4.
This paper considers the problem of selecting a robust threshold for wavelet shrinkage. Previous approaches reported in the literature for handling outliers mainly focus on developing a robust procedure for a given threshold, which amounts to solving a nontrivial optimization problem. The drawback of this approach is that the selection of a robust threshold, which is crucial for the resulting fit, is ignored. This paper points out that the best fit can be achieved by robust wavelet shrinkage with a robust threshold. We propose data-driven selection methods for a robust threshold. These approaches are based on coupling classical wavelet thresholding rules with pseudo data. The concept of pseudo data has shaped the implementation of the proposed methods and yields a fast and efficient algorithm. Results from a simulation study and a real example demonstrate the promising empirical properties of the proposed approaches.
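A simplified, hedged illustration of robust wavelet shrinkage is given below: it uses a MAD-based noise scale and the universal threshold with soft thresholding, as a stand-in for the paper's pseudo-data-based threshold selectors.

```python
import numpy as np
import pywt

rng = np.random.default_rng(3)
n = 1024
t = np.linspace(0, 1, n)
signal = np.sin(8 * np.pi * t)
noisy = signal + 0.3 * rng.normal(size=n)
noisy[::97] += 4.0                                  # sprinkle in some outliers

coeffs = pywt.wavedec(noisy, 'db4', level=5)
sigma = np.median(np.abs(coeffs[-1])) / 0.6745      # robust (MAD) noise estimate
thr = sigma * np.sqrt(2 * np.log(n))                # universal threshold
denoised = [coeffs[0]] + [pywt.threshold(c, thr, mode='soft') for c in coeffs[1:]]
fit = pywt.waverec(denoised, 'db4')[:n]
print("RMSE against the true signal:", np.sqrt(np.mean((fit - signal) ** 2)))
```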

5.
In this paper, we propose a novel robust principal component analysis (PCA) for high-dimensional data in the presence of various heterogeneities, in particular heavy tails and outliers. A transformation motivated by the characteristic function is constructed to improve the robustness of the classical PCA. The suggested method has the distinct advantage of handling heavy-tail-distributed data, whose covariances may not exist (may be infinite, for instance), in addition to the usual outliers. The proposed approach can also be viewed as a kernel principal component analysis (KPCA) and achieves robustness and non-linearity through a bounded, non-linear kernel function. The merits of the new method are illustrated by some statistical properties, including an upper bound on the excess error and the behaviour of the large eigenvalues under a spiked covariance model. Additionally, using a variety of simulations, we demonstrate the benefits of our approach over the classical PCA. Finally, using data on protein expression in mice of various genotypes in a biological study, we apply the novel robust PCA to categorise the mice and find that our approach is more effective at identifying abnormal mice than the classical PCA.
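The bounded-kernel idea can be tried with standard tooling; the sketch below uses scikit-learn's KernelPCA with a Gaussian (RBF) kernel, which is bounded, as a stand-in for the characteristic-function-motivated transformation of the paper. The gamma value and the toy contamination are illustrative assumptions.

```python
import numpy as np
from sklearn.decomposition import KernelPCA, PCA

rng = np.random.default_rng(4)
X = rng.standard_t(df=2, size=(300, 10))      # heavy tails: the variance barely exists
X[:5] *= 50.0                                 # plus a handful of gross outliers

kpca = KernelPCA(n_components=2, kernel='rbf', gamma=0.05)
scores_robust = kpca.fit_transform(X)         # the bounded kernel dampens the outliers
scores_classic = PCA(n_components=2).fit_transform(X)

print("spread of classical scores:", np.round(np.std(scores_classic, axis=0), 2))
print("spread of kernel scores:   ", np.round(np.std(scores_robust, axis=0), 2))
```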

6.
Bien and Tibshirani (Biometrika, 98(4):807–820, 2011) proposed a covariance graphical lasso method that applies a lasso penalty to the elements of the covariance matrix. This method is useful because it not only produces sparse and positive definite estimates of the covariance matrix but also discovers marginal independence structures by generating exact zeros in the estimated covariance matrix. However, the objective function is not convex, which makes the optimization challenging. Bien and Tibshirani (2011) described a majorize-minimize approach to optimize it. We develop a new optimization method based on coordinate descent and discuss the convergence properties of the algorithm. Through simulation experiments, we show that the new algorithm has a number of advantages over the majorize-minimize approach, including its simplicity, computing speed and numerical stability. Finally, we show that the cyclic version of the coordinate descent algorithm is more efficient than the greedy version.
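For concreteness, the non-convex objective that both the majorize-minimize and the coordinate-descent algorithms attempt to minimize can be written down and evaluated directly; the helper below only illustrates the criterion, not either optimization routine, and penalizes all entries for simplicity.

```python
import numpy as np

def cov_glasso_objective(Sigma, S, rho):
    """Negative log-likelihood plus an L1 penalty on the covariance itself:
    log det(Sigma) + tr(S Sigma^{-1}) + rho * sum |Sigma_ij|.  (Some variants
    penalize only the off-diagonal entries; here every entry is penalized.)"""
    sign, logdet = np.linalg.slogdet(Sigma)
    if sign <= 0:
        return np.inf                       # outside the positive definite cone
    return logdet + np.trace(S @ np.linalg.inv(Sigma)) + rho * np.abs(Sigma).sum()

rng = np.random.default_rng(5)
X = rng.normal(size=(100, 4))
S = np.cov(X, rowvar=False)
print(cov_glasso_objective(S, S, rho=0.1))          # value at the sample covariance
print(cov_glasso_objective(np.eye(4), S, rho=0.1))  # value at the identity matrix
```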

7.
In this paper, nonparametric estimation of the conditional quantiles of a nonlinear time series model is formulated as a nonsmooth optimization problem involving an asymmetric loss function. This asymmetric loss function is nonsmooth and has the same structure as the so-called lopsided absolute value function. Using an effective smoothing approximation introduced for this lopsided absolute value function, we obtain a sequence of approximate smooth optimization problems, and some important convergence properties of the approximation are established. Each smooth approximate optimization problem is solved by an algorithm based on sequential quadratic programming with an active set strategy. Within the framework of locally linear conditional quantiles, the proposed approach is compared with three other approaches, namely the approach proposed by Yao and Tong (1996), the iteratively reweighted least squares method and the interior-point method, through empirical numerical studies using simulated data and the classic lynx pelt series. In particular, the empirical performance of the proposed approach is almost identical to that of the interior-point method, with both methods slightly better than iteratively reweighted least squares. The Yao-Tong approach is comparable with the other methods in cases that are ideal for it, but is otherwise outperformed. An important merit of the proposed approach is that it is conceptually simple and can readily be applied to parametrically nonlinear conditional quantile estimation.
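A small sketch of the central trick, smoothing the asymmetric (check) loss so that a standard smooth optimizer applies, is given below for a simple linear conditional quantile; the particular log-sum-exp style smoothing and the use of SciPy's BFGS solver are illustrative assumptions, not the sequential quadratic programming active-set solver used in the paper.

```python
import numpy as np
from scipy.optimize import minimize

def smoothed_check_loss(u, tau, eps=1e-3):
    """Smooth approximation of the asymmetric 'check' loss rho_tau(u):
    tau*u + eps*log(1 + exp(-u/eps)); it tends to the exact loss as eps -> 0."""
    return tau * u + eps * np.logaddexp(0.0, -u / eps)

def fit_linear_quantile(x, y, tau, eps=1e-3):
    X = np.column_stack([np.ones_like(x), x])
    obj = lambda beta: smoothed_check_loss(y - X @ beta, tau, eps).mean()
    return minimize(obj, x0=np.zeros(2), method='BFGS').x

rng = np.random.default_rng(6)
x = rng.uniform(0, 10, 400)
y = 1.0 + 0.5 * x + rng.standard_t(df=3, size=400)   # heavy-tailed noise
for tau in (0.25, 0.5, 0.75):
    print(tau, np.round(fit_linear_quantile(x, y, tau), 3))
```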

8.
This paper introduces a Markov model for Phase II profile monitoring with an autocorrelated binary response variable. In the proposed approach, a logistic regression model is extended to describe the within-profile autocorrelation. The likelihood function is constructed, and a particle swarm optimization (PSO) algorithm is tuned and used to estimate the model parameters. Furthermore, two control charts are extended, in which the covariance matrix is derived from the Fisher information matrix. Simulation studies are conducted to evaluate the detection capability of the proposed control charts, and a numerical example illustrates the application of the proposed method.

9.
In this article, we employ a regression formulation to estimate the high-dimensional covariance matrix for a given network structure. Using prior information contained in the network relationships, we model the covariance as a polynomial function of the symmetric adjacency matrix. Accordingly, the problem of estimating a high-dimensional covariance matrix is converted to one of estimating the low-dimensional coefficients of the polynomial regression function, which we can accomplish using ordinary least squares or maximum likelihood. The resulting covariance matrix estimator based on the maximum likelihood approach is guaranteed to be positive definite even in finite samples. Under mild conditions, we obtain the theoretical properties of the resulting estimators. A Bayesian information criterion is also developed to select the order of the polynomial function. Simulation studies and empirical examples illustrate the usefulness of the proposed methods.
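The regression formulation can be made concrete in a few lines: stack vec(I), vec(A), vec(A^2), ... as regressors and fit the vectorized sample covariance by ordinary least squares. The simulated network, polynomial order and symmetrization step below are illustrative assumptions.

```python
import numpy as np

def poly_adjacency_cov(S, A, order=2):
    """Fit vec(S) ~ sum_k b_k vec(A^k) (with A^0 = I) by ordinary least squares
    and return the fitted covariance matrix and the coefficients."""
    p = A.shape[0]
    powers = [np.linalg.matrix_power(A, k) for k in range(order + 1)]
    Z = np.column_stack([M.ravel() for M in powers])
    beta, *_ = np.linalg.lstsq(Z, S.ravel(), rcond=None)
    fitted = (Z @ beta).reshape(p, p)
    return (fitted + fitted.T) / 2, beta          # symmetrize for numerical safety

rng = np.random.default_rng(7)
p = 15
A = np.triu((rng.uniform(size=(p, p)) < 0.2).astype(float), 1)
A = A + A.T                                       # symmetric adjacency matrix
Sigma = np.eye(p) + 0.4 * A + 0.1 * A @ A         # true polynomial covariance structure
X = rng.multivariate_normal(np.zeros(p), Sigma, size=300)
Sigma_hat, coef = poly_adjacency_cov(np.cov(X, rowvar=False), A, order=2)
print("estimated polynomial coefficients:", np.round(coef, 3))
```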

10.
In this paper, we investigate empirical likelihood (EL) inference via weighted composite quantile regression for nonlinear models. Under regularity conditions, we establish that the proposed empirical log-likelihood ratio is asymptotically chi-squared, and confidence intervals for the regression coefficients are then constructed. The proposed method avoids estimating the unknown error density function involved in the asymptotic covariance matrix of the estimators. Simulations suggest that the proposed EL procedure is more efficient and robust, and a real data analysis is used to illustrate its performance.

11.
In this article, we consider a robust method of estimating a realized covariance matrix, calculated as the sum of cross products of intraday high-frequency returns. According to recent articles in financial econometrics, the realized covariance matrix is essentially contaminated with market microstructure noise. Although techniques for removing noise from the matrix have been studied since the early 2000s, they have primarily addressed a low-dimensional covariance matrix with a sufficiently large sample size. We focus on noise-robust covariance estimation under the converse circumstances, that is, a high-dimensional covariance matrix, possibly with a small sample size. For the estimation, we utilize a statistical hypothesis test based on the fact that the largest eigenvalue of the covariance matrix asymptotically follows a Tracy–Widom distribution. The null hypothesis is that the log returns are not pure noise; if a sample eigenvalue is larger than the relevant critical value, we fail to reject this null hypothesis. The simulation results show that the estimator studied here outperforms the alternatives in terms of mean squared error, and the empirical analysis shows that the proposed estimator can be adopted to forecast future covariance matrices using real data.
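A generic version of the eigenvalue test can be sketched as follows; the centering and scaling constants follow Johnstone's result for white Wishart matrices and the critical value is an approximate tabulated Tracy–Widom quantile, both assumed here for illustration rather than taken from the paper.

```python
import numpy as np

def tw_standardized_top_eig(X):
    """Standardize the largest eigenvalue of X'X (X: n x p with i.i.d. N(0,1)
    entries under a pure-noise benchmark) using Johnstone's centering and
    scaling constants, so it is approximately Tracy-Widom distributed."""
    n, p = X.shape
    l1 = np.linalg.eigvalsh(X.T @ X).max()
    mu = (np.sqrt(n - 1) + np.sqrt(p)) ** 2
    sigma = (np.sqrt(n - 1) + np.sqrt(p)) * (1 / np.sqrt(n - 1) + 1 / np.sqrt(p)) ** (1 / 3)
    return (l1 - mu) / sigma

TW1_95 = 0.98   # approximate 95% quantile of the TW1 law (tabulated value, assumed)

rng = np.random.default_rng(8)
noise = rng.normal(size=(200, 60))                 # pure noise: no common signal
spiked = noise.copy()
spiked[:, 0] += 3.0 * rng.normal(size=200)         # inject one strong factor
for name, X in [("noise ", noise), ("spiked", spiked)]:
    stat = tw_standardized_top_eig(X)
    print(name, round(stat, 2), "exceeds critical value:", stat > TW1_95)
```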

12.
This work is concerned with robustness in principal component analysis (PCA). The approach we adopt here is to replace the least squares criterion by another criterion based on a convex and sufficiently differentiable loss function ρ. Using this criterion, we propose a robust estimate of the location vector and introduce a notion of orthogonality with respect to ρ in order to define the different steps of a PCA. The influence functions of the mean vector and the principal vectors are developed in order to provide a method for obtaining a robust PCA. The practical procedure is based on an alternating-steps algorithm.
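One simple instance of this general recipe, assuming a Huber-type loss and an iteratively reweighted alternating scheme (details not taken from the paper), is sketched below.

```python
import numpy as np

def huber_weights(d, c):
    """IRLS weights implied by a Huber-type loss: 1 inside, c/d beyond the cut-off."""
    return np.where(d <= c, 1.0, c / np.maximum(d, 1e-12))

def m_location_pca(X, c=2.5, n_iter=50):
    """Alternate between (i) an M-estimate of location under a Huber-type loss
    and (ii) a reweighted covariance whose eigenvectors give the principal axes."""
    mu = np.median(X, axis=0)
    for _ in range(n_iter):
        d = np.linalg.norm(X - mu, axis=1)
        w = huber_weights(d, c * np.median(d))
        mu = np.average(X, axis=0, weights=w)
    Xc = X - mu
    C = (w[:, None] * Xc).T @ Xc / w.sum()
    eigval, eigvec = np.linalg.eigh(C)
    return mu, eigvec[:, ::-1], eigval[::-1]

rng = np.random.default_rng(12)
X = rng.normal(size=(200, 3)) @ np.diag([3.0, 1.0, 0.3])
X[:8] += 15.0                                    # a few gross outliers
mu, axes, var = m_location_pca(X)
print("robust location:", np.round(mu, 2))
print("robust component variances:", np.round(var, 2))
```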

13.
We develop a new principal component analysis (PCA) type dimension reduction method for binary data. Unlike standard PCA, which is defined on the observed data, the proposed PCA is defined on the logit transform of the success probabilities of the binary observations. Sparsity is introduced into the principal component (PC) loading vectors for enhanced interpretability and more stable extraction of the principal components. Our sparse PCA is formulated as an optimization problem whose criterion function is motivated by a penalized Bernoulli likelihood, and a majorization-minimization algorithm is developed to solve it efficiently. The effectiveness of the proposed sparse logistic PCA method is illustrated by an application to a single nucleotide polymorphism data set and by a simulation study.

14.
In this paper, an unstructured principal fitted response reduction approach is proposed. The new approach differs from two existing model-based approaches mainly in that the required condition is imposed on the covariance matrix of the responses rather than on that of the random error. It is also invariant under a popular way of standardizing the responses so that their sample covariance equals the identity matrix. According to numerical studies, the proposed approach yields more robust estimation than the two existing methods, in the sense that its asymptotic performance is not severely sensitive to the situation at hand. The proposed method can therefore be recommended as a default model-based method.

15.
We propose a method that integrates the bootstrap into the forward search algorithm to construct robust confidence intervals for elements of the eigenvectors of the correlation matrix in the presence of outliers. The coverage probability of the bootstrap simultaneous confidence intervals was compared, through a simulation study, with the coverage probabilities of the regular asymptotic confidence region and of the asymptotic confidence region based on the minimum covariance determinant (MCD) approach. The method produced more stable coverage probabilities than the approaches based on asymptotic confidence regions, for data sets with or without outliers and across several sample sizes.
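A plain percentile bootstrap for the elements of the leading eigenvector of the correlation matrix, without the forward-search filtering the paper adds and giving element-wise rather than simultaneous intervals, can be sketched as follows; the sign-alignment step is needed because eigenvectors are identified only up to sign.

```python
import numpy as np

def leading_eigvec(R):
    """Leading eigenvector of a correlation matrix, with its sign fixed."""
    w, V = np.linalg.eigh(R)
    v = V[:, -1]
    return v if v[np.argmax(np.abs(v))] > 0 else -v

def bootstrap_eigvec_ci(X, B=2000, level=0.95, seed=0):
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    boot = np.empty((B, X.shape[1]))
    for b in range(B):
        idx = rng.integers(0, n, n)                  # resample rows with replacement
        boot[b] = leading_eigvec(np.corrcoef(X[idx], rowvar=False))
    alpha = (1 - level) / 2
    return np.quantile(boot, [alpha, 1 - alpha], axis=0)

rng = np.random.default_rng(9)
X = rng.multivariate_normal(np.zeros(4), 0.5 * np.eye(4) + 0.5, size=150)
lo, hi = bootstrap_eigvec_ci(X)
print("point estimate:", np.round(leading_eigvec(np.corrcoef(X, rowvar=False)), 2))
print("95% CI lower:  ", np.round(lo, 2))
print("95% CI upper:  ", np.round(hi, 2))
```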

16.
In this paper, we study the estimation of linear models in the framework of longitudinal data with dropouts. Under the assumptions that the random errors follow an elliptical distribution and that all subjects share the same within-subject covariance matrix, which does not depend on covariates, we develop a robust method for the simultaneous estimation of the mean and covariance. The proposed method is robust against outliers and does not require modelling the covariance structure or the missing-data process. Theoretical properties of the proposed estimator are established, and simulation studies show its good performance. Finally, the proposed method is applied to a real data analysis for illustration.

17.
A criterion for robust estimation of the location vector and covariance matrix is considered, and its application to outlier labeling is discussed. This method, unlike methods based on the MVE and MCD, is applicable to large, high-dimensional data sets. The method proposed here is also robust and has the same breakdown point as the MVE- and MCD-based methods. Furthermore, its computational complexity is significantly smaller than that of the other methods.

18.
We review the Fisher scoring and EM algorithms for incomplete multivariate data from an estimating-function point of view and examine the corresponding quasi-score functions under second-moment assumptions. A bias-corrected REML-type estimator for the covariance matrix is derived, and the Fisher, Godambe and empirical sandwich information matrices are compared. We investigate the two algorithms numerically and compare them with a hybrid algorithm in which Fisher scoring is used for the mean vector and the EM algorithm for the covariance matrix.
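The EM iteration for an incomplete multivariate normal sample is short enough to write out; the sketch below is a bare-bones version (no REML bias correction and no Fisher scoring or hybrid variant) intended only to make the algorithm concrete.

```python
import numpy as np

def em_mvn(X, n_iter=100):
    """EM for the mean and covariance of a multivariate normal with values
    missing at random; X contains np.nan where observations are missing."""
    X = np.asarray(X, dtype=float)
    n, p = X.shape
    mu = np.nanmean(X, axis=0)
    Sigma = np.diag(np.nanvar(X, axis=0))
    for _ in range(n_iter):
        S_acc = np.zeros((p, p))
        X_hat = X.copy()
        for i in range(n):
            m = np.isnan(X[i])
            if m.any():
                o = ~m
                So_inv = np.linalg.inv(Sigma[np.ix_(o, o)])
                # E-step: conditional mean of the missing block given the observed part
                X_hat[i, m] = mu[m] + Sigma[np.ix_(m, o)] @ So_inv @ (X[i, o] - mu[o])
                # accumulate the conditional covariance of the missing block
                C = np.zeros((p, p))
                C[np.ix_(m, m)] = Sigma[np.ix_(m, m)] - Sigma[np.ix_(m, o)] @ So_inv @ Sigma[np.ix_(o, m)]
                S_acc += C
        # M-step: update mean and covariance from the completed data
        mu = X_hat.mean(axis=0)
        Sigma = (X_hat - mu).T @ (X_hat - mu) / n + S_acc / n
    return mu, Sigma

rng = np.random.default_rng(10)
X = rng.multivariate_normal([0, 1, 2], [[1, .5, .2], [.5, 1, .3], [.2, .3, 1]], size=300)
mask = rng.uniform(size=X.shape) < 0.2
mask[mask.all(axis=1), 0] = False        # keep at least one observed value per row
X[mask] = np.nan
mu_hat, Sigma_hat = em_mvn(X)
print(np.round(mu_hat, 2))
print(np.round(Sigma_hat, 2))
```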

19.
We propose an ℓ1-regularized likelihood method for estimating the inverse covariance matrix in the high-dimensional multivariate normal model in the presence of missing data. Our method is based on the assumption that the data are missing at random (MAR), which also covers the missing-completely-at-random case. The implementation of the method is non-trivial, as the observed negative log-likelihood is generally a complicated and non-convex function. We propose an efficient EM algorithm for the optimization with provable numerical convergence properties. Furthermore, we extend the methodology to handle missing values in a sparse regression context. We demonstrate both methods on simulated and real data.
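scikit-learn ships a graphical lasso solver for complete data; a crude, non-EM way to try the sparse inverse covariance idea on data with missing values is to impute first and then run that solver, as sketched below. This is a simplification of the paper's EM approach, and the penalty level alpha is an arbitrary illustrative choice.

```python
import numpy as np
from sklearn.covariance import graphical_lasso

rng = np.random.default_rng(11)
Theta = np.eye(6) + np.diag(np.full(5, 0.4), 1) + np.diag(np.full(5, 0.4), -1)  # tridiagonal precision
X = rng.multivariate_normal(np.zeros(6), np.linalg.inv(Theta), size=400)
X[rng.uniform(size=X.shape) < 0.1] = np.nan                 # 10% of the values missing

X_imp = np.where(np.isnan(X), np.nanmean(X, axis=0), X)     # naive mean imputation
emp_cov = np.cov(X_imp, rowvar=False)
cov_hat, prec_hat = graphical_lasso(emp_cov, alpha=0.1)
print("nonzero off-diagonals in the estimated precision matrix:",
      int((np.abs(prec_hat[np.triu_indices(6, 1)]) > 1e-8).sum()))
```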

20.
In this article, we propose sparse sufficient dimension reduction as a novel method for discovering the Markov blanket of a target variable, without making any distributional assumption on the variables. By assuming sparsity of the basis of the central subspace, we develop a penalized loss-function estimator involving the high-dimensional covariance matrix. A coordinate descent algorithm based on inverse regression is used to obtain the sparse basis of the central subspace. The finite-sample behavior of the proposed method is explored through a simulation study and real data examples.
