Similar Articles

20 similar articles found.
1.
We derive an identity for nonparametric maximum likelihood estimators (NPMLE) and regularized MLEs in censored data models which expresses the standardized maximum likelihood estimator in terms of the standardized empirical process. This identity provides an effective starting point in proving both consistency and efficiency of NPMLE and regularized MLE. The identity and corresponding method for proving efficiency is illustrated for the NPMLE in the univariate right-censored data model, the regularized MLE in the current status data model and for an implicit NPMLE based on a mixture of right-censored and current status data. Furthermore, a general algorithm for estimation of the limiting variance of the NPMLE is provided. This revised version was published online in July 2006 with corrections to the Cover Date.

2.
In this paper, we develop an efficient wavelet-based regularized linear quantile regression framework for coefficient estimation, where the responses are scalars and the predictors include both scalars and functions. The framework consists of two parts: a wavelet transformation and regularized linear quantile regression. The wavelet transform approximates functional data by representing it with finitely many wavelet coefficients, effectively capturing its local features. Quantile regression is robust to response outliers and heavy-tailed errors; in addition, compared with other methods it provides a more complete picture of how responses change conditional on covariates. Meanwhile, regularization removes small wavelet coefficients to achieve sparsity and efficiency. A novel algorithm based on the alternating direction method of multipliers (ADMM) is derived to solve the optimization problems. We conduct numerical studies to investigate the finite-sample performance of our method and apply it to real data from ADHD studies.
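Two of the building blocks named in this abstract can be sketched directly: the quantile check loss, which supplies the robustness to outliers, and the soft-thresholding operator that zeroes small wavelet coefficients (the proximal step that typically appears inside an ADMM iteration for an L1 penalty). A minimal sketch; the stand-alone function names are ours, not the paper's.

```python
import numpy as np

def check_loss(u, tau):
    """Quantile (check) loss: rho_tau(u) = u * (tau - 1{u < 0})."""
    return u * (tau - (u < 0))

def soft_threshold(w, lam):
    """Soft-thresholding: shrinks coefficients towards zero and sets
    those with magnitude below lam exactly to zero, which is how the
    L1 regularization induces sparsity in the wavelet coefficients."""
    return np.sign(w) * np.maximum(np.abs(w) - lam, 0.0)
```

For example, `check_loss(-2.0, 0.5)` and `check_loss(2.0, 0.5)` both equal 1.0 (the symmetric median case), while `check_loss(-1.0, 0.9)` equals 0.1, so under-predictions are penalized far less at the 0.9 quantile.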

3.
Quantile regression (QR) is a natural alternative for depicting the impact of covariates on the conditional distribution of an outcome variable rather than on its mean. In this paper, we investigate Bayesian regularized QR for linear models with autoregressive errors. LASSO-type penalty priors are placed on the regression coefficients and the autoregressive parameters of the model. A Gibbs sampling algorithm is employed to draw from the full posterior distributions of the unknown parameters. Finally, the proposed procedures are illustrated by simulation studies and applied to a real data analysis of electricity consumption.

4.
This article considers the adaptive elastic net estimator for regularized mean regression from a Bayesian perspective. Representing the Laplace distribution as a mixture of Bartlett–Fejer kernels with a Gamma mixing density, a Gibbs sampling algorithm for the adaptive elastic net is developed. By introducing slice variables, it is shown that the mixture representation yields a Gibbs sampler in which every step samples from either a truncated normal or a truncated Gamma distribution. The proposed method is illustrated through several simulation studies and the analysis of a real dataset; both indicate that the approach performs well.

5.
Principal component analysis (PCA) is a widely used statistical technique for determining subscales in questionnaire data. As with any other statistical technique, missing data may complicate both its execution and its interpretation. In this study, six methods for dealing with missing data in the context of PCA are reviewed and compared: listwise deletion (LD), pairwise deletion, the missing-data passive approach, regularized PCA, the expectation-maximization algorithm, and multiple imputation. Simulations show that, except for LD, all methods give about equally good results for realistic percentages of missing data. The choice of a procedure can therefore be based on ease of application or simply on the availability of a technique.
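Of the six methods, the expectation-maximization approach is the easiest to sketch: missing cells are filled in, a low-rank PCA fit is computed, and the missing cells are replaced by the fitted values, iterating to convergence. Below is a minimal iterative-SVD variant of this idea in numpy; the function name is ours and the study's exact implementation may differ.

```python
import numpy as np

def em_pca_impute(X, n_comp, n_iter=50):
    """EM-style imputation for PCA with missing data (a sketch).

    Missing entries (NaN) are initialized at the column means, then the
    centered data are repeatedly approximated by a rank-n_comp SVD and
    the missing cells replaced by the low-rank reconstruction.
    """
    X = np.asarray(X, dtype=float)
    miss = np.isnan(X)
    col_means = np.nanmean(X, axis=0)
    Xf = np.where(miss, col_means, X)          # initial mean imputation
    for _ in range(n_iter):
        mu = Xf.mean(axis=0)
        U, s, Vt = np.linalg.svd(Xf - mu, full_matrices=False)
        recon = mu + (U[:, :n_comp] * s[:n_comp]) @ Vt[:n_comp]
        Xf[miss] = recon[miss]                 # update only missing cells
    return Xf
```

On exactly low-rank data the fixed point of this iteration is the exact completion, which is why the method behaves well when the PCA model fits.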

6.
We extend our previous work on identifying the collection of significant factors that determine a (generally nonbinary) random response variable. Such identification is important, e.g., in biological and medical studies. Our approach is to examine the quality of prediction of the response variable by functions of (a subset of) the factors. Estimating the prediction error requires a cross-validation procedure, a prediction algorithm, and estimation of the penalty function. Using simulated data, we demonstrate the efficiency of our method. We also prove a new central limit theorem for the introduced regularized estimates under natural conditions on arrays of exchangeable random variables.

7.
In this paper we introduce the continuous tree mixture model, a mixture of undirected graphical models with tree-structured graphs, which can be viewed as a nonparametric approach to multivariate analysis. We estimate its parameters, the component edge sets and mixture proportions, through a regularized maximum likelihood procedure. Our new algorithm, which combines the expectation-maximization algorithm with a modified version of Kruskal's algorithm, simultaneously estimates and prunes the mixture component trees. Simulation studies indicate that this method performs better than the alternative Gaussian graphical mixture model. The proposed method is also applied to a water-level data set and compared with the results of a Gaussian mixture model.

8.
9.
Variable selection in finite mixture of regression (FMR) models is frequently used in statistical modeling. Most applications of variable selection in FMR models assume a normal distribution for the regression error. Such an assumption is unsuitable for data containing groups of observations with heavy tails and outliers. In this paper, we introduce a robust variable selection procedure for FMR models using the t distribution. With an appropriate choice of the tuning parameters, the consistency and the oracle property of the regularized estimators are established. To estimate the parameters of the model, we develop an EM algorithm for numerical computation and a method for selecting the tuning parameters adaptively. The parameter estimation performance of the proposed model is evaluated through simulation studies, and its application is illustrated by analyzing a real data set.

10.
This paper addresses the problem of simultaneous variable selection and estimation in the random-intercepts model with a first-order lagged response. This type of model is commonly used for analyzing longitudinal data obtained through repeated measurements on individuals over time. It uses random effects to capture the intra-class correlation and the first lagged response to address the serial correlation, the two common sources of dependency in longitudinal data. We demonstrate that a conditional likelihood approach that ignores the correlation between the random effects and the initial responses can lead to biased regularized estimates. Furthermore, we demonstrate that joint modeling of the initial responses and subsequent observations in the structure of dynamic random-intercepts models yields both consistency and the oracle property for the regularized estimators. We present theoretical results in both low- and high-dimensional settings and evaluate the regularized estimators' performance through simulation studies and the analysis of a real dataset. Supporting information is available online.

11.
The common principal components (CPC) model provides a way to model the population covariance matrices of several groups by assuming a common eigenvector structure. When appropriate, this model can provide covariance matrix estimators of which the elements have smaller standard errors than when using either the pooled covariance matrix or the per group unbiased sample covariance matrix estimators. In this article, a regularized CPC estimator under the assumption of a common (or partially common) eigenvector structure in the populations is proposed. After estimation of the common eigenvectors using the Flury–Gautschi (or other) algorithm, the off-diagonal elements of the nearly diagonalized covariance matrices are shrunk towards zero and multiplied with the orthogonal common eigenvector matrix to obtain the regularized CPC covariance matrix estimates. The optimal shrinkage intensity per group can be estimated using cross-validation. The efficiency of these estimators compared to the pooled and unbiased estimators is investigated in a Monte Carlo simulation study, and the regularized CPC estimator is applied to a real dataset to demonstrate the utility of the method.
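The shrinkage step described in this abstract (rotate each group covariance to the common eigenvector basis, shrink the off-diagonal elements towards zero, rotate back) can be sketched in a few lines. This assumes the common eigenvector matrix has already been estimated, e.g. by the Flury–Gautschi algorithm, which is not shown; the function name is ours.

```python
import numpy as np

def regularized_cpc_estimate(S, F, lam):
    """Regularized CPC covariance estimate for one group (a sketch).

    S   : the group's sample covariance matrix
    F   : orthogonal matrix of estimated common eigenvectors
          (e.g. from the Flury-Gautschi algorithm, not shown here)
    lam : shrinkage intensity in [0, 1]; chosen per group by
          cross-validation in the article
    """
    A = F.T @ S @ F                      # nearly diagonal if CPC holds
    # shrink off-diagonal elements of A towards zero by factor (1 - lam)
    A_shrunk = lam * np.diag(np.diag(A)) + (1 - lam) * A
    return F @ A_shrunk @ F.T            # rotate back to original basis
```

With `lam = 0` the sample covariance is returned unchanged, and with `lam = 1` the estimate is fully constrained to the common eigenvector structure; intermediate values interpolate between the two, while the total variance (trace) is preserved for every `lam`.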

12.
Regularization methods for simultaneous variable selection and coefficient estimation have been shown to be effective in quantile regression in improving prediction accuracy. In this article, we propose the Bayesian bridge for variable selection and coefficient estimation in quantile regression. A simple and efficient Gibbs sampling algorithm is developed for posterior inference using a scale mixture of uniforms representation of the Bayesian bridge prior. This is the first work to discuss regularized quantile regression with the bridge penalty. Both simulated and real data examples show that the proposed method often outperforms quantile regression without regularization, lasso quantile regression, and Bayesian lasso quantile regression.

13.
Learning the kernel function has recently received considerable attention in machine learning. In this paper, we consider the multi-kernel regularized regression (MKRR) algorithm associated with least square loss over reproducing kernel Hilbert spaces. We provide an error analysis for the MKRR algorithm based on the Rademacher chaos complexity and iteration techniques. The main result is an explicit learning rate for the MKRR algorithm. Two examples are given to illustrate that the learning rates are much improved compared to those in the literature.

14.
We propose a flexible generalized auto-regressive conditional heteroscedasticity type of model for the prediction of volatility in financial time series. The approach relies on the idea of using multivariate B-splines of lagged observations and volatilities. Estimation of such a B-spline basis expansion is carried out within the likelihood framework for non-Gaussian observations. As the dimension of the B-spline basis is large, i.e. there are many parameters, we use regularized and sparse model fitting with a boosting algorithm. Our method is computationally attractive and feasible for large dimensions. We demonstrate its strong predictive potential for financial volatility on simulated and real data, also in comparison with other approaches, and we present supporting asymptotic arguments.

15.
We address the issue of recovering the structure of large sparse directed acyclic graphs from noisy observations of the system. We propose a novel procedure based on a specific formulation of the \(\ell _1\)-norm regularized maximum likelihood, which decomposes the graph estimation into two optimization sub-problems: topological structure and node order learning. We provide convergence inequalities for the graph estimator, as well as an algorithm to solve the induced optimization problem, in the form of a convex program embedded in a genetic algorithm. We apply our method to various data sets (including data from the DREAM4 challenge) and show that it compares favorably to state-of-the-art methods. This algorithm is available on CRAN as the R package GADAG.

16.
A method of regularized discriminant analysis for discrete data, denoted DRDA, is proposed. The method is related to the regularized discriminant analysis conceived by Friedman (1989) in a Gaussian framework for continuous data. Here we are concerned with discrete data and consider the classification problem using the multinomial distribution. DRDA is designed for the small-sample, high-dimensional setting and occupies an intermediate position between full multinomial discrimination, the first-order independence model, and kernel discrimination. DRDA is characterized by two parameters, the values of which are calculated by minimizing a cross-validated, sample-based estimate of future misclassification risk. The first is a complexity parameter, which defines the class-conditional probabilities as a convex combination of those derived from the full multinomial model and the first-order independence model. The second is a smoothing parameter associated with the discrete kernel of Aitchison and Aitken (1976). The optimal complexity parameter is calculated first; then, holding it fixed, the optimal smoothing parameter is determined. A modified approach, in which the smoothing parameter is chosen first, is also discussed. The efficiency of the method is examined in comparison with other classical methods through applications to data.
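The complexity parameter's convex combination can be sketched for the simplest case of two discrete features, where the first-order independence estimate is the product of the marginal frequencies. A minimal sketch with a hypothetical function name; the smoothing step with the Aitchison-Aitken kernel is omitted.

```python
import numpy as np

def drda_cell_probs(counts, alpha):
    """Convex combination behind DRDA's complexity parameter (a sketch).

    counts : contingency table of one class's training data
             (here a 2-D table of two discrete features, for brevity)
    alpha  : complexity parameter in [0, 1]

    Returns alpha * (full multinomial MLE) + (1 - alpha) *
    (first-order independence estimate, i.e. product of the margins).
    """
    counts = np.asarray(counts, dtype=float)
    n = counts.sum()
    full = counts / n                            # full multinomial MLE
    row = counts.sum(axis=1, keepdims=True) / n  # marginal of feature 1
    col = counts.sum(axis=0, keepdims=True) / n  # marginal of feature 2
    indep = row * col                            # independence model
    return alpha * full + (1 - alpha) * indep
```

Because both components are probability distributions over the cells, the combination sums to one for any `alpha`; `alpha = 1` recovers the full multinomial fit and `alpha = 0` the independence fit.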

17.
Regularized variable selection is a powerful tool for identifying the true regression model from a large number of candidates by applying penalties to the objective functions. The penalty functions typically involve a tuning parameter that controls the complexity of the selected model. The ability of the regularized variable selection methods to identify the true model critically depends on the correct choice of the tuning parameter. In this study, we develop a consistent tuning parameter selection method for regularized Cox's proportional hazards model with a diverging number of parameters. The tuning parameter is selected by minimizing the generalized information criterion. We prove that, for any penalty that possesses the oracle property, the proposed tuning parameter selection method identifies the true model with probability approaching one as sample size increases. Its finite sample performance is evaluated by simulations. Its practical use is demonstrated in The Cancer Genome Atlas breast cancer data.

18.
A tutorial on spectral clustering
In recent years, spectral clustering has become one of the most popular modern clustering algorithms. It is simple to implement, can be solved efficiently by standard linear algebra software, and very often outperforms traditional clustering algorithms such as the k-means algorithm. At first glance spectral clustering appears slightly mysterious, and it is not obvious why it works at all or what it really does. The goal of this tutorial is to give some intuition on these questions. We describe different graph Laplacians and their basic properties, present the most common spectral clustering algorithms, and derive those algorithms from scratch by several different approaches. Advantages and disadvantages of the different spectral clustering algorithms are discussed.
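The simplest recipe from the tutorial can be sketched for the two-cluster case: build a Gaussian similarity graph, form the unnormalized Laplacian L = D - W, and split the points by the sign pattern of the Fiedler vector (the eigenvector of the second-smallest eigenvalue). A minimal sketch with a function name of our choosing; the tutorial's general k-cluster algorithms run k-means on several eigenvectors instead.

```python
import numpy as np

def spectral_bipartition(X, sigma=1.0):
    """Split points into two clusters via the unnormalized graph Laplacian."""
    # Gaussian (RBF) similarity matrix
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    W = np.exp(-sq / (2 * sigma ** 2))
    np.fill_diagonal(W, 0.0)
    D = np.diag(W.sum(axis=1))
    L = D - W                       # unnormalized graph Laplacian
    vals, vecs = np.linalg.eigh(L)  # eigenvalues in ascending order
    fiedler = vecs[:, 1]            # second-smallest eigenvector
    return (fiedler > 0).astype(int)

# Two well-separated blobs: the Fiedler vector's signs recover them.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 0.1, (10, 2)),
               rng.normal(5, 0.1, (10, 2))])
labels = spectral_bipartition(X)
```

For a graph that is nearly disconnected into two components, the Fiedler vector is approximately constant on each component with opposite signs, which is why the sign cut works here.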

19.
Order selection is an important step in the application of finite mixture models. Classical methods such as AIC and BIC discourage complex models with a penalty directly proportional to the number of mixing components. In contrast, Chen and Khalili propose to link the penalty to two types of overfitting. In particular, they introduce a regularization penalty to merge similar subpopulations in a mixture model, where the shrinkage idea of regularized regression is seamlessly employed. However, the new method requires an effective and efficient algorithm. When the popular expectation-maximization (EM)-algorithm is used, we need to maximize a nonsmooth and nonconcave objective function in the M-step, which is computationally challenging. In this article, we show that such an objective function can be transformed into a sum of univariate auxiliary functions. We then design an iterative thresholding descent algorithm (ITD) to efficiently solve the associated optimization problem. Unlike many existing numerical approaches, the new algorithm leads to sparse solutions and thereby avoids undesirable ad hoc steps. We establish the convergence of the ITD and further assess its empirical performance using both simulations and real data examples.

20.
Shi Xingjie et al. Statistical Research (《统计研究》), 2020, 37(9): 95-105
For the binary classification problems frequently encountered in empirical research, where the variables are high-dimensional and outliers are present, exploring robust high-dimensional binary classification methods is particularly important. This paper proposes a binary classification method based on a Lasso-penalized smoothed 0-1 loss function and uses the Fabs algorithm to solve the variable selection and parameter estimation problems efficiently. Simulation results show that the method remains robust under different proportions of outliers. Using CHIP 2013 data, the method is applied to an empirical study of the factors influencing the high school enrollment decisions of migrant workers' children. The analysis finds that the parents' education level, the interaction between education level and household economic status, the child's gender, and the interaction between gender and ethnicity all have important effects on the enrollment decision.
