Similar literature
20 matching records retrieved (search time: 46 ms)
1.
In this paper, we investigate the objective function and deflation process for sparse Partial Least Squares (PLS) regression with multiple components. While many have considered variations on the objective for sparse PLS, the deflation process for sparse PLS has not received as much attention. Our work highlights a flaw in the Statistically Inspired Modification of Partial Least Squares (SIMPLS) deflation method when applied in sparse PLS regression. We also consider the Nonlinear Iterative Partial Least Squares (NIPALS) deflation in sparse PLS regression. To remedy the flaw in the SIMPLS method, we propose a new sparse PLS method wherein the direction vectors are constrained to be sparse and lie in a chosen subspace. We give insight into this new PLS procedure and show through examples and simulation studies that the proposed technique can outperform alternative sparse PLS techniques in coefficient estimation. Moreover, our analysis reveals a simple renormalization step that can be used to improve the estimation of sparse PLS direction vectors generated using any convex relaxation method.
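The renormalization idea is easy to illustrate for a single component. The sketch below is a hedged simplification, not the authors' full multi-component procedure: it computes one sparse PLS direction by soft-thresholding the covariance X'y (a common convex relaxation) and then renormalizes the thresholded vector to unit length. The function names and the tuning value `lam` are illustrative choices.

```python
import numpy as np

def soft_threshold(z, lam):
    # Elementwise soft-thresholding: sign(z) * max(|z| - lam, 0)
    return np.sign(z) * np.maximum(np.abs(z) - lam, 0.0)

def sparse_pls_direction(X, y, lam):
    # One sparse direction from the convex relaxation: threshold X'y,
    # then renormalize to unit length (the renormalization step).
    w = soft_threshold(X.T @ y, lam)
    nrm = np.linalg.norm(w)
    return w if nrm == 0 else w / nrm

rng = np.random.default_rng(0)
n, p = 100, 20
X = rng.standard_normal((n, p))
beta = np.zeros(p)
beta[:3] = [3.0, -2.0, 1.5]                    # three active predictors
y = X @ beta + 0.5 * rng.standard_normal(n)

w = sparse_pls_direction(X, y, lam=70.0)
```

Soft-thresholding shrinks the surviving entries, so the vector loses its scale; dividing by the norm restores a unit-length direction before deflation, which is the kind of renormalization the abstract refers to.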

2.
Many sparse linear discriminant analysis (LDA) methods have been proposed to overcome the major problems of the classic LDA in high‐dimensional settings. However, the asymptotic optimality results are limited to the case with only two classes. When there are more than two classes, the classification boundary is complicated and no explicit formulas for the classification errors exist. We consider the asymptotic optimality in high‐dimensional settings for a large family of linear classification rules with an arbitrary number of classes. Our main theorem provides easy‐to‐check criteria for the asymptotic optimality of a general classification rule in this family as dimensionality and sample size both go to infinity and the number of classes is arbitrary. We establish the corresponding convergence rates. The general theory is applied to the classic LDA and to extensions of two recently proposed sparse LDA methods to obtain their asymptotic optimality.

3.
In this paper, we propose the hard thresholding regression (HTR) for estimating high‐dimensional sparse linear regression models. HTR uses a two‐stage convex algorithm to approximate the ℓ0‐penalized regression: The first stage calculates a coarse initial estimator, and the second stage identifies the oracle estimator by borrowing information from the first one. Theoretically, the HTR estimator achieves the strong oracle property over a wide range of regularization parameters. Numerical examples and a real data example lend further support to our proposed methodology.
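A minimal sketch of the two-stage idea, assuming a coordinate-descent lasso as the stage-one coarse estimator and an OLS refit on the hard-thresholded support as stage two. This is an illustrative approximation of ℓ0-penalized regression, not the paper's exact algorithm; all function names and tuning values are made up for the example.

```python
import numpy as np

def lasso_cd(X, y, lam, n_iter=200):
    # Stage 1: coarse initial estimator via coordinate-descent lasso.
    n, p = X.shape
    beta = np.zeros(p)
    col_ss = (X ** 2).sum(axis=0)
    for _ in range(n_iter):
        for j in range(p):
            r = y - X @ beta + X[:, j] * beta[j]      # partial residual
            z = X[:, j] @ r
            beta[j] = np.sign(z) * max(abs(z) - lam, 0.0) / col_ss[j]
    return beta

def hard_threshold_refit(X, y, beta_init, tau):
    # Stage 2: keep coefficients above tau, refit OLS on that support.
    support = np.abs(beta_init) > tau
    beta = np.zeros_like(beta_init)
    if support.any():
        beta[support], *_ = np.linalg.lstsq(X[:, support], y, rcond=None)
    return beta

rng = np.random.default_rng(1)
n, p = 120, 30
X = rng.standard_normal((n, p))
beta_true = np.zeros(p)
beta_true[[0, 3, 7]] = [2.0, -1.5, 1.0]
y = X @ beta_true + 0.3 * rng.standard_normal(n)

b0 = lasso_cd(X, y, lam=20.0)                  # biased but sparse initial fit
b = hard_threshold_refit(X, y, b0, tau=0.1)    # unbiased refit on the support
```

The refit step removes the shrinkage bias of the initial lasso estimate on the selected coordinates, which is the intuition behind "borrowing information from the first stage".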

4.
Censored median regression has proved useful for analyzing survival data in complicated situations, say, when the variance is heteroscedastic or the data contain outliers. In this paper, we study sparse estimation for censored median regression models, which is an important problem for high dimensional survival data analysis. In particular, a new procedure is proposed to minimize an inverse-censoring-probability weighted least absolute deviation loss subject to the adaptive LASSO penalty, resulting in a sparse and robust median estimator. We show that, with a proper choice of the tuning parameter, the procedure can identify the underlying sparse model consistently and has desirable large-sample properties, including root-n consistency and asymptotic normality. The procedure also enjoys great advantages in computation, since its entire solution path can be obtained efficiently. Furthermore, we propose a resampling method to estimate the variance of the estimator. The performance of the procedure is illustrated by extensive simulations and two real data applications, including a microarray gene expression survival dataset.

5.
We propose two new procedures based on multiple hypothesis testing for correct support estimation in high‐dimensional sparse linear models. We prove that both procedures are powerful and do not require the sample size to be large. The first procedure tackles the atypical setting of ordered variable selection through an extension of a testing procedure previously developed in the context of a linear hypothesis. The second procedure is the main contribution of this paper. It enables data analysts to perform support estimation in the general high‐dimensional framework of non‐ordered variable selection. A thorough simulation study and applications to real datasets using the R package mht show that our non‐ordered variable procedure produces excellent results in terms of correct support estimation as well as mean squared error and false discovery rate, when compared to common methods such as the Lasso, the SCAD penalty, forward regression, or the false discovery rate (FDR) procedure.

6.
We consider the supervised classification setting, in which the data consist of p features measured on n observations, each of which belongs to one of K classes. Linear discriminant analysis (LDA) is a classical method for this problem. However, in the high-dimensional setting where p ≫ n, LDA is not appropriate for two reasons. First, the standard estimate for the within-class covariance matrix is singular, and so the usual discriminant rule cannot be applied. Second, when p is large, it is difficult to interpret the classification rule obtained from LDA, since it involves all p features. We propose penalized LDA, a general approach for penalizing the discriminant vectors in Fisher's discriminant problem in a way that leads to greater interpretability. The discriminant problem is not convex, so we use a minorization-maximization approach in order to efficiently optimize it when convex penalties are applied to the discriminant vectors. In particular, we consider the use of L1 and fused lasso penalties. Our proposal is equivalent to recasting Fisher's discriminant problem as a biconvex problem. We evaluate the performance of the resulting methods in a simulation study and on three gene expression data sets. We also survey past methods for extending LDA to the high-dimensional setting, and explore their relationships with our proposal.

7.
We present a new class of methods for high dimensional non-parametric regression and classification called sparse additive models. Our methods combine ideas from sparse linear modelling and additive non-parametric regression. We derive an algorithm for fitting the models that is practical and effective even when the number of covariates is larger than the sample size. Sparse additive models are essentially a functional version of the grouped lasso of Yuan and Lin. They are also closely related to the COSSO model of Lin and Zhang but decouple smoothing and sparsity, enabling the use of arbitrary non-parametric smoothers. We give an analysis of the theoretical properties of sparse additive models and present empirical results on synthetic and real data, showing that they can be effective in fitting sparse non-parametric models in high dimensional data.
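The decoupling of smoothing and sparsity can be sketched as a backfitting loop in which each component is first fit with an arbitrary smoother and then scaled down by a group soft-thresholding factor that can zero out the whole component. The sketch below is a hedged toy version: the boxcar Nadaraya–Watson smoother stands in for any smoother, and all tuning values are illustrative.

```python
import numpy as np

def smooth(x, r, bw=0.2):
    # Simple Nadaraya-Watson smoother with a boxcar kernel; any
    # nonparametric smoother could be plugged in here instead.
    out = np.empty_like(r)
    for i, xi in enumerate(x):
        w = np.abs(x - xi) < bw
        out[i] = r[w].mean()
    return out

def spam_backfit(X, y, lam, n_iter=20):
    # Backfitting: smooth the partial residual for each coordinate, then
    # apply the group soft-thresholding scaling that enforces sparsity.
    n, p = X.shape
    F = np.zeros((n, p))                           # fitted component functions
    for _ in range(n_iter):
        for j in range(p):
            r = y - y.mean() - F.sum(axis=1) + F[:, j]   # partial residual
            g = smooth(X[:, j], r)
            norm = np.sqrt((g ** 2).mean())
            scale = max(1.0 - lam / norm, 0.0) if norm > 0 else 0.0
            F[:, j] = scale * g
            F[:, j] -= F[:, j].mean()              # keep components centered
    return F

rng = np.random.default_rng(2)
n, p = 300, 6
X = rng.uniform(-1, 1, size=(n, p))
y = np.sin(np.pi * X[:, 0]) + X[:, 1] ** 2 + 0.1 * rng.standard_normal(n)

F = spam_backfit(X, y, lam=0.15)
active = [j for j in range(p) if np.sqrt((F[:, j] ** 2).mean()) > 1e-8]
```

The `scale` factor is the functional analogue of the grouped-lasso shrinkage: components whose smoothed fit has small norm are set exactly to zero, while the smoother itself is unconstrained.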

8.
We review and extend some statistical tools that have proved useful for analysing functional data. Functional data analysis is primarily designed for the analysis of random trajectories and infinite‐dimensional data, and there exists a need for the development of adequate statistical estimation and inference techniques. While this field is in flux, some methods have proven useful. These include warping methods, functional principal component analysis, and conditioning under Gaussian assumptions for the case of sparse data. The latter is a recent development that may provide a bridge between functional and more classical longitudinal data analysis. Besides presenting a brief review of functional principal components and functional regression, we develop some concepts for estimating functional principal component scores in the sparse situation. An extension of the so‐called generalized functional linear model to the case of sparse longitudinal predictors is proposed. This extension includes functional binary regression models for longitudinal data and is illustrated with data on primary biliary cirrhosis.
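The conditioning step for sparse data has a closed form under Gaussian assumptions: the best linear predictor of the k-th score given a subject's sparse, noisy measurements is E[ξ_k | Y] = λ_k φ_k(t)' Σ_Y⁻¹ (Y − μ(t)). The sketch below evaluates this on a discrete grid; the two-component model, basis functions, eigenvalues, and noise level are all illustrative assumptions, not from the article.

```python
import numpy as np

rng = np.random.default_rng(3)
grid = np.linspace(0.0, 1.0, 50)
# Illustrative two-component Karhunen-Loeve model with mean zero
phi = np.vstack([np.sqrt(2) * np.sin(np.pi * grid),
                 np.sqrt(2) * np.sin(2 * np.pi * grid)])
lam = np.array([4.0, 1.0])            # eigenvalues
sigma2 = 0.25                         # measurement-error variance

def conditional_scores(t_idx, y):
    # Gaussian conditioning: E[xi | Y] = diag(lam) Phi Sigma_Y^{-1} Y,
    # where Sigma_Y = Phi' diag(lam) Phi + sigma2 * I and mu = 0 here.
    Phi = phi[:, t_idx]                                   # 2 x m
    Sigma_Y = (Phi.T * lam) @ Phi + sigma2 * np.eye(len(t_idx))
    return lam * (Phi @ np.linalg.solve(Sigma_Y, y))

true_s, est_s = [], []
for _ in range(300):
    xi = np.sqrt(lam) * rng.standard_normal(2)            # true scores
    t_idx = np.sort(rng.choice(grid.size, size=6, replace=False))
    y = xi @ phi[:, t_idx] + np.sqrt(sigma2) * rng.standard_normal(6)
    true_s.append(xi[0])
    est_s.append(conditional_scores(t_idx, y)[0])

corr = np.corrcoef(true_s, est_s)[0, 1]
```

With only six noisy observations per trajectory, the conditional (shrunken) scores still track the true leading scores closely, which is why this conditioning step is useful for sparse longitudinal designs.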

9.
The asymptotic variance of the maximum likelihood estimate is proved to decrease when the maximization is restricted to a subspace that contains the true parameter value. Maximum likelihood estimation allows a systematic fitting of covariance models to the sample, which is important in data assimilation. The hierarchical maximum likelihood approach is applied to the spectral diagonal covariance model with different parameterizations of eigenvalue decay, and to the sparse inverse covariance model with specified parameter values on different sets of nonzero entries. It is shown computationally that using smaller sets of parameters can substantially decrease the sampling noise in high dimensions.
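A toy version of fitting a parameterized eigenvalue decay by maximum likelihood, assuming the data are already expressed in the spectral basis so that the covariance is diagonal with λ_k = c·k^(−a). Under this (illustrative) parameterization, the scale c can be profiled out in closed form, leaving a one-dimensional search over the decay rate a; the values below are made up for the example.

```python
import numpy as np

rng = np.random.default_rng(4)
p, N = 200, 10                                   # high dimension, few samples
k = np.arange(1, p + 1)
lam_true = 5.0 * k ** -1.2                       # true spectral eigenvalues
X = rng.standard_normal((N, p)) * np.sqrt(lam_true)

s = (X ** 2).mean(axis=0)                        # per-mode sample variances

def neg_loglik(a):
    # Profile likelihood: for lambda_k = c * k^(-a), the ML scale is
    # c_hat = mean(s_k * k^a); plug it back into the Gaussian likelihood.
    g = k ** -a
    c = np.mean(s / g)
    lam = c * g
    return 0.5 * np.sum(np.log(lam) + s / lam)

a_grid = np.linspace(0.5, 2.0, 151)
a_hat = a_grid[np.argmin([neg_loglik(a) for a in a_grid])]
```

Even with only N = 10 samples in dimension p = 200, the two-parameter decay model pins down the spectrum far better than the p raw sample variances would, which is the noise-reduction effect the abstract describes.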

10.
This paper proposes a high dimensional factor multivariate stochastic volatility (MSV) model in which factor covariance matrices are driven by Wishart random processes. The framework allows for unrestricted specification of intertemporal sensitivities, which can capture the persistence in volatilities, kurtosis in returns, and correlation breakdowns and contagion effects in volatilities. The factor structure allows addressing high dimensional setups used in portfolio analysis and risk management, as well as modeling conditional means and conditional variances within the model framework. Owing to the complexity of the model, we perform inference using Markov chain Monte Carlo simulation from the posterior distribution. A simulation study is carried out to demonstrate the efficiency of the estimation algorithm. We illustrate our model on a data set that includes 88 individual equity returns and the two Fama–French size and value factors. With this application, we demonstrate the ability of the model to address high dimensional applications suitable for asset allocation, risk management, and asset pricing.

12.
Pharmacokinetic studies are commonly performed using the two-stage approach. The first stage involves estimation of pharmacokinetic parameters, such as the area under the concentration versus time curve (AUC), for each analysis subject separately, and the second stage uses the individual parameter estimates for statistical inference. This two-stage approach is not applicable in sparse sampling situations where only one sample is available per analysis subject, as in non-clinical in vivo studies. In such a serial sampling design, only one sample is taken from each analysis subject. A simulation study was carried out to assess the coverage, power, and type I error of seven methods for constructing two-sided 90% confidence intervals for ratios of two AUCs assessed in a serial sampling design, which can be used to assess bioequivalence in this parameter.
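For a serial sampling design, a Bailer-type construction gives the AUC and its variance from the per-time-point means and variances, and Fieller's theorem then yields a confidence interval for the ratio of two AUCs. This is only one of several interval constructions a study like this could compare; the sketch below is hedged, with an illustrative sampling schedule, mean curve, and noise model.

```python
import math
import numpy as np

def auc_serial(times, conc_by_time):
    # Bailer-type AUC for serial sampling: trapezoidal weights applied to
    # per-time-point means; variance from per-time-point sample variances.
    t = np.asarray(times, dtype=float)
    w = np.empty_like(t)
    w[0] = (t[1] - t[0]) / 2
    w[-1] = (t[-1] - t[-2]) / 2
    w[1:-1] = (t[2:] - t[:-2]) / 2
    means = np.array([np.mean(c) for c in conc_by_time])
    se2 = np.array([np.var(c, ddof=1) / len(c) for c in conc_by_time])
    return float(w @ means), float(w ** 2 @ se2)

def fieller_ratio_ci(a, va, b, vb, z=1.645):
    # Two-sided 90% Fieller interval for a/b (independent groups, cov = 0)
    denom = b ** 2 - z ** 2 * vb
    if denom <= 0:
        raise ValueError("denominator AUC too imprecise for a Fieller interval")
    half = z * math.sqrt(b ** 2 * va + a ** 2 * vb - z ** 2 * va * vb)
    return (a * b - half) / denom, (a * b + half) / denom

rng = np.random.default_rng(8)
times = [0.0, 1.0, 2.0, 4.0, 8.0]
profile = 10.0 * np.exp(-0.5 * np.asarray(times))   # reference mean curve
# Five animals per time point, one sample each (serial sampling)
ref = [c * (1 + 0.1 * rng.standard_normal(5)) for c in profile]
tst = [1.2 * c * (1 + 0.1 * rng.standard_normal(5)) for c in profile]

auc_t, var_t = auc_serial(times, tst)
auc_r, var_r = auc_serial(times, ref)
lo, hi = fieller_ratio_ci(auc_t, var_t, auc_r, var_r)
```

The Fieller interval handles the randomness of the denominator AUC directly instead of relying on a delta-method expansion of the ratio.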

13.
Maximum likelihood (ML) estimation of relative risks via log-binomial regression requires a restricted parameter space. Computation via nonlinear programming is simple to implement and has a high convergence rate. We show that the optimization problem is well posed (convex domain and convex objective) and provide a variance formula, along with a methodology for obtaining standard errors and prediction intervals that accounts for estimates on the boundary of the parameter space. We performed simulations under several scenarios already used in the literature in order to assess the performance of ML and of two other common estimation methods.
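A minimal sketch of the constrained ML problem: gradient ascent on the log-binomial log-likelihood with a simple feasibility repair that shifts the intercept whenever a linear predictor leaves the restricted parameter space {β : x_i'β ≤ 0}. This plain projected-gradient scheme is a stand-in for the nonlinear-programming approach in the abstract; the step size, iteration count, and data are illustrative.

```python
import numpy as np

def log_binomial_fit(X, y, lr=0.02, n_iter=15000, eps=1e-3):
    # Gradient ascent on the log-binomial log-likelihood
    #   ll = sum_i [ y_i eta_i + (1 - y_i) log(1 - exp(eta_i)) ],  eta = X beta,
    # with a feasibility repair: shift the intercept (column 0 of X, assumed
    # to be all ones) so that max_i eta_i <= -eps after every step.
    n, p = X.shape
    beta = np.zeros(p)
    beta[0] = -1.0                        # feasible starting point
    for _ in range(n_iter):
        mu = np.exp(X @ beta)             # fitted risks, all < 1 by construction
        grad = X.T @ (y - (1 - y) * mu / (1 - mu))
        beta += lr * grad / n
        m = (X @ beta).max()
        if m > -eps:
            beta[0] -= m + eps            # project back into the domain
    return beta

rng = np.random.default_rng(9)
n = 2000
x = np.repeat([0.0, 1.0], n // 2)         # binary exposure
X = np.column_stack([np.ones(n), x])
p_true = np.where(x == 1, 0.4, 0.2)       # true relative risk = 2.0
y = (rng.random(n) < p_true).astype(float)

beta = log_binomial_fit(X, y)
rr = np.exp(beta[1])                      # estimated relative risk
```

Because the link is the log, exp(β₁) is directly the relative risk, which is the practical appeal of log-binomial over logistic regression; the restricted domain is what makes the computation delicate.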

14.
Sample covariance matrices play a central role in numerous popular statistical methodologies, for example principal component analysis, Kalman filtering, and independent component analysis. However, modern random matrix theory indicates that, when the dimension of a random vector is not negligible with respect to the sample size, the sample covariance matrix demonstrates significant deviations from the underlying population covariance matrix. There is an urgent need to develop new estimation tools in such cases with high‐dimensional data to recover the characteristics of the population covariance matrix from the observed sample covariance matrix. We propose a novel solution to this problem based on the method of moments. When the parametric dimension of the population spectrum is finite and known, we prove that the proposed estimator is strongly consistent and asymptotically Gaussian. Otherwise, we combine the first estimation method with a cross‐validation procedure to select the unknown model dimension. Simulation experiments demonstrate the consistency of the proposed procedure. We also indicate possible extensions of the proposed estimator to the case where the population spectrum has a density.
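The method-of-moments flavor can be seen in the first two spectral moments: writing α_j = tr(Σ^j)/p and β_j = tr(S^j)/p with y = p/n, one has (to leading order) β₁ ≈ α₁ but β₂ ≈ α₂ + y·α₁², so the second moment of the sample spectrum is inflated and can be corrected. The sketch below is only this two-moment illustration under an assumed two-point spectrum; the paper's full estimator handles general finite spectra and model-dimension selection.

```python
import numpy as np

rng = np.random.default_rng(5)
p, n = 200, 400
y_ratio = p / n
pop_eigs = np.repeat([1.0, 3.0], p // 2)         # two-point population spectrum
# Gaussian data with diagonal covariance diag(pop_eigs)
X = rng.standard_normal((n, p)) * np.sqrt(pop_eigs)
S = X.T @ X / n                                  # sample covariance

b1 = np.trace(S) / p                             # first sample spectral moment
b2 = np.trace(S @ S) / p                         # second sample spectral moment
a1_hat = b1                                      # first moment is unbiased
a2_hat = b2 - y_ratio * b1 ** 2                  # remove the y * a1^2 inflation
```

Here the true moments are α₁ = 2 and α₂ = 5; the raw β₂ sits near 7 because of the dimension-to-sample-size ratio, and the corrected estimate recovers the population value.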

15.
This paper considers variable and factor selection in factor analysis. We treat the factor loadings for each observable variable as a group, and introduce a weighted sparse group lasso penalty to the complete log-likelihood. The proposal simultaneously selects observable variables and latent factors of a factor analysis model in a data-driven fashion; it produces a more flexible and sparse factor loading structure than existing methods. For parameter estimation, we derive an expectation-maximization algorithm that optimizes the penalized log-likelihood. The tuning parameters of the procedure are selected by a likelihood cross-validation criterion that yields satisfactory results in various simulation settings. Simulation results reveal that the proposed method can better identify the possibly sparse structure of the true factor loading matrix with higher estimation accuracy than existing methods. A real data example is also presented to demonstrate its performance in practice.

16.
In this article, we propose sparse sufficient dimension reduction as a novel method for Markov blanket discovery of a target variable, without making any distributional assumption on the variables. By assuming sparsity of the basis of the central subspace, we develop a penalized loss-function estimate based on the high-dimensional covariance matrix. A coordinate descent algorithm based on inverse regression is used to obtain the sparse basis of the central subspace. The finite sample behavior of the proposed method is explored through a simulation study and real data examples.
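A hedged stand-in for the sparse central-subspace estimate: plain sliced inverse regression (SIR) followed by elementwise soft-thresholding of the leading direction, instead of the paper's penalized coordinate-descent estimator. It assumes standardized, roughly uncorrelated predictors, and all tuning values are illustrative.

```python
import numpy as np

def sparse_sir_direction(X, y, n_slices=10, lam=0.15):
    # SIR: covariance of the slice means of the standardized predictors;
    # the leading eigenvector estimates a central-subspace direction.
    Xs = (X - X.mean(axis=0)) / X.std(axis=0)
    order = np.argsort(y)
    slice_means = np.array([Xs[idx].mean(axis=0)
                            for idx in np.array_split(order, n_slices)])
    M = slice_means.T @ slice_means / n_slices    # inverse-regression kernel
    b = np.linalg.eigh(M)[1][:, -1]               # leading direction
    # Sparsify by soft-thresholding, then renormalize to unit length
    b = np.sign(b) * np.maximum(np.abs(b) - lam, 0.0)
    return b / np.linalg.norm(b)

rng = np.random.default_rng(10)
n, p = 800, 10
beta_true = np.zeros(p)
beta_true[[0, 1]] = 1.0 / np.sqrt(2)              # sparse true direction
X = rng.standard_normal((n, p))
y = (X @ beta_true) ** 3 + 0.5 * rng.standard_normal(n)

b = sparse_sir_direction(X, y)
```

Because SIR only uses slice means of the inverse regression E[X | y], it recovers the direction without modeling the (here cubic) link, which is the "no distributional assumption on the link" appeal of sufficient dimension reduction.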

17.
Decision making is often supported by decision models. This study suggests that the negative impact of poor data quality (DQ) on decision making is often mediated by biased model estimation. To highlight this perspective, we develop an analytical framework that links three quality levels – data, model, and decision. The general framework is first developed at a high level, and then extended toward understanding the effect of incomplete datasets on Linear Discriminant Analysis (LDA) classifiers. The interplay between the three quality levels is evaluated analytically – initially for a one-dimensional case, and then for multiple dimensions. The impact is then further analyzed through several simulation experiments with artificial and real-world datasets. The experimental results support the analytical development and reveal a nearly exponential decline in the decision error as the completeness level increases. To conclude, we discuss the framework and the empirical findings, and elaborate on the implications of our model for data quality management and for the use of data in decision-model estimation.

18.
The family of inverse regression estimators that was recently proposed by Cook and Ni has proven effective in dimension reduction by transforming the high dimensional predictor vector to its low dimensional projections. We propose a general shrinkage estimation strategy for the entire inverse regression estimation family that is capable of simultaneous dimension reduction and variable selection. We demonstrate that the new estimators achieve consistency in variable selection without requiring any traditional model, while retaining the root-n estimation consistency of the dimension reduction basis. We also show the effectiveness of the new estimators through both simulation and real data analysis.

19.
In this paper, we propose a two-stage functional principal component analysis method in age–period–cohort (APC) analysis. The first stage of the method considers the age–period effect with the fitted values treated as an offset; and the second stage of the method considers the residual age–cohort effect conditional on the already estimated age–period effect. An APC version of the model in functional data analysis provides an improved fit to the data, especially when the data are sparse and irregularly spaced. We demonstrate the effectiveness of the proposed method using body mass index data stratified by gender and ethnicity.

20.
This article presents a novel estimation procedure for high‐dimensional Archimedean copulas. In contrast to maximum likelihood estimation, the method presented here does not require derivatives of the Archimedean generator. This is computationally advantageous for high‐dimensional Archimedean copulas, for which higher‐order derivatives are needed but are often difficult to obtain. Our procedure is based on a parameter‐dependent transformation of the underlying random variables to a one‐dimensional distribution, to which a minimum‐distance method is applied. We show strong consistency of the resulting minimum‐distance estimators for the case of known margins as well as for the case of unknown margins when pseudo‐observations are used. Moreover, we conduct a simulation comparing the performance of the proposed estimation procedure with the well‐known maximum likelihood approach in terms of bias and standard deviation.
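A related derivative-free, one-dimensional minimum-distance idea can be illustrated with the Kendall distribution function: map bivariate Clayton observations to the scalar level W = C(U₁, U₂) via the empirical copula, then match the empirical CDF of W to K_θ by a Cramér–von Mises distance. To be clear, this Kendall-function fit is a stand-in, not the article's specific parameter-dependent transformation; the parameter grid and sample size are illustrative.

```python
import numpy as np

rng = np.random.default_rng(12)

def rclayton(n, theta):
    # Marshall-Olkin sampler for the bivariate Clayton copula:
    # U_j = psi(E_j / V) with psi(t) = (1 + t)^(-1/theta), V ~ Gamma(1/theta)
    V = rng.gamma(1.0 / theta, 1.0, size=n)
    E = rng.exponential(size=(n, 2))
    return (1.0 + E / V[:, None]) ** (-1.0 / theta)

def kendall_K(w, theta):
    # Kendall distribution function of the bivariate Clayton copula
    return w + w * (1.0 - w ** theta) / theta

def min_dist_theta(U, grid=np.linspace(0.2, 6.0, 117)):
    # One-dimensional transform: W_i = C(U_i1, U_i2), estimated by the
    # empirical copula; pick theta minimizing the Cramer-von-Mises distance
    # between the empirical CDF of W and K_theta.
    n = len(U)
    W = np.array([np.mean((U[:, 0] <= u) & (U[:, 1] <= v)) for u, v in U])
    W.sort()
    ecdf = (np.arange(1, n + 1) - 0.5) / n
    return grid[np.argmin([np.sum((ecdf - kendall_K(W, th)) ** 2)
                           for th in grid])]

U = rclayton(1500, theta=2.0)
theta_hat = min_dist_theta(U)
```

Nothing in the fitting step differentiates the generator: only the copula level W and the closed-form Kendall function are used, which mirrors the derivative-free appeal of the article's approach.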
