61.
The least absolute shrinkage and selection operator (LASSO) is a prominent estimator which selects significant (in some sense) features and discards insignificant ones; indeed, the LASSO shrinks to zero features that fall below a noise level. In this article, we shrink the LASSO further by proposing a Stein-type shrinkage estimator emanating from it, namely the Stein-type LASSO. The newly proposed estimator shows good numerical performance in the risk sense. Compared with the LASSO, variants of this estimator have smaller relative MSE and prediction error in the analysis of a prostate cancer dataset.
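A minimal numerical sketch of the idea, assuming a James-Stein-style multiplier applied to the fitted LASSO vector; the constant c and the exact form of the shrinkage factor are illustrative assumptions rather than the paper's definition (Python, using scikit-learn):

import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
n, p = 100, 20
X = rng.standard_normal((n, p))
beta = np.r_[np.ones(5), np.zeros(p - 5)]          # 5 active, 15 inactive features
y = X @ beta + rng.standard_normal(n)

lasso = Lasso(alpha=0.1).fit(X, y)
b_lasso = lasso.coef_

# Stein-type multiplier: shrink the whole LASSO vector further toward zero.
# 'c' is a hypothetical tuning constant, not taken from the paper.
c = 1.0
factor = max(0.0, 1.0 - c / max(float(np.sum(b_lasso ** 2)), 1e-12))
b_stein_lasso = factor * b_lasso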
62.
This paper concerns model selection for autoregressive time series when the observations are contaminated with trend. We propose an adaptive least absolute shrinkage and selection operator (LASSO) type model selection method, in which the trend is estimated by B-splines, the detrended residuals are calculated, and then the residuals are used as if they were observations to optimize an adaptive LASSO type objective function. The oracle properties of such an adaptive LASSO model selection procedure are established; that is, the proposed method can identify the true model with probability approaching one as the sample size increases, and the asymptotic properties of the estimators are not affected by the replacement of observations with detrended residuals. Intensive simulation studies of several constrained and unconstrained autoregressive models also confirm the theoretical results. The method is illustrated by two time series data sets: the annual U.S. tobacco production and annual tree ring width measurements.
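A hedged sketch of the two-stage procedure described above: estimate the trend with a B-spline, detrend, then run an adaptive-LASSO-type selection on lagged detrended residuals. The smoothing factor, penalty level, maximum AR order, and the OLS-based adaptive weights are illustrative assumptions (Python, using SciPy and scikit-learn):

import numpy as np
from scipy.interpolate import splrep, splev
from sklearn.linear_model import Lasso, LinearRegression

rng = np.random.default_rng(1)
n = 300
t = np.arange(n, dtype=float)
trend = 0.02 * t + np.sin(t / 25.0)
e = np.zeros(n)
for i in range(2, n):                          # AR(2) noise around the trend
    e[i] = 0.5 * e[i - 1] - 0.3 * e[i - 2] + rng.standard_normal()
y = trend + e

tck = splrep(t, y, s=n)                        # B-spline trend estimate
resid = y - splev(t, tck)                      # detrended residuals

p_max = 6                                      # candidate maximum AR order
Z = np.column_stack([resid[p_max - k - 1:n - k - 1] for k in range(p_max)])
r = resid[p_max:]

ols = LinearRegression(fit_intercept=False).fit(Z, r)
w = 1.0 / (np.abs(ols.coef_) + 1e-8)           # adaptive weights from the OLS fit
fit = Lasso(alpha=0.05, fit_intercept=False).fit(Z / w, r)   # reweighting trick
ar_coefs = fit.coef_ / w                       # back-transform to the original scale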
63.
Linear regression models with autoregressive moving average (ARMA) errors (REGARMA models) are often considered in order to reflect serial correlation among observations. In this article, we focus on an adaptive least absolute shrinkage and selection operator (ALASSO) method for variable selection in REGARMA models and extend it to linear regression models with ARMA-generalized autoregressive conditional heteroskedasticity (ARMA-GARCH) errors (REGARMA-GARCH models). This is an extension of the existing ALASSO method for linear regression models with AR errors (REGAR models) proposed by Wang, Li, and Tsai (2007, "Regression coefficient and autoregressive order shrinkage and selection via the lasso," Journal of the Royal Statistical Society: Series B, 69:63-78). New ALASSO algorithms are proposed to determine important predictors for the REGARMA and REGARMA-GARCH models. Finally, we provide simulation results and a real data analysis to illustrate our findings.
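A minimal sketch of the adaptive-LASSO reweighting idea for a regression with ARMA(1,1) errors, under the simplifying assumption that an unpenalized SARIMAX fit supplies the adaptive weights and that the ARMA structure is then ignored in the penalized step; this two-step scheme is an illustration, not the REGARMA/REGARMA-GARCH algorithms of the paper (Python, using statsmodels and scikit-learn):

import numpy as np
from sklearn.linear_model import Lasso
from statsmodels.tsa.statespace.sarimax import SARIMAX

rng = np.random.default_rng(2)
n, p = 200, 8
X = rng.standard_normal((n, p))
beta = np.r_[1.5, -1.0, 0.8, np.zeros(p - 3)]
u = np.zeros(n)
eps = rng.standard_normal(n)
for i in range(1, n):                          # ARMA(1,1) errors
    u[i] = 0.6 * u[i - 1] + eps[i] + 0.3 * eps[i - 1]
y = X @ beta + u

full = SARIMAX(y, exog=X, order=(1, 0, 1)).fit(disp=False)
b_init = np.asarray(full.params)[:p]           # unpenalized regression coefficients
w = 1.0 / (np.abs(b_init) + 1e-8)              # adaptive weights

# Weighted LASSO on the regression part (ARMA structure ignored at this step
# purely to keep the sketch short).
fit = Lasso(alpha=0.05, fit_intercept=False).fit(X / w, y)
b_alasso = fit.coef_ / w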
64.
In this article, we consider the problem of selecting functional variables using L1 regularization in a functional linear regression model with a scalar response and functional predictors, in the presence of outliers. Since the LASSO is a special case of penalized least-squares regression with an L1 penalty function, it suffers from heavy-tailed errors and/or outliers in the data. Recently, the least absolute deviation (LAD) and LASSO methods have been combined (the LAD-LASSO regression method) to carry out robust parameter estimation and variable selection simultaneously for a multiple linear regression model. However, LASSO-based selection of individual coefficients fails for functional predictors, since each functional predictor is represented by multiple parameters; the group LASSO is therefore used, as it selects grouped variables rather than individual ones. In this study, we propose a robust functional predictor selection method, the LAD-group LASSO, for a functional linear regression model with a scalar response and functional predictors. We illustrate the performance of the LAD-group LASSO on both simulated and real data.
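A hedged sketch of the LAD-group-LASSO objective: each functional predictor is reduced to a block of basis coefficients (one group per predictor), the LAD loss is smoothed with a Huber approximation so that proximal gradient applies, and a block soft-threshold enforces the group penalty. The smoothing parameter, penalty level, and solver are illustrative assumptions, not the estimation algorithm of the paper (Python):

import numpy as np

rng = np.random.default_rng(3)
n, n_groups, gsize = 150, 6, 4                 # 6 functional predictors, 4 basis scores each
p = n_groups * gsize
Z = rng.standard_normal((n, p))                # stacked basis-expansion scores
beta = np.zeros(p)
beta[:gsize] = 1.0                             # only the first functional predictor is active
noise = rng.standard_normal(n) * np.where(rng.random(n) < 0.1, 10.0, 1.0)  # 10% outliers
y = Z @ beta + noise

delta = 0.5                                    # Huber smoothing parameter (assumption)
lam = 40.0                                     # group-penalty level (assumption)
step = delta / np.linalg.norm(Z, 2) ** 2       # 1 / Lipschitz constant of the smoothed loss

b = np.zeros(p)
for _ in range(2000):
    r = y - Z @ b
    # Gradient of the Huber-smoothed absolute loss.
    g = -Z.T @ np.where(np.abs(r) <= delta, r / delta, np.sign(r))
    b = b - step * g
    for k in range(n_groups):                  # block soft-thresholding (group prox)
        blk = slice(k * gsize, (k + 1) * gsize)
        nrm = np.linalg.norm(b[blk])
        b[blk] = 0.0 if nrm <= step * lam else (1.0 - step * lam / nrm) * b[blk]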
65.
In this article we propose a method called GLLS for fitting bilinear time series models. The GLLS procedure combines the LASSO, generalized cross-validation, least angle regression, and stepwise regression methods. Compared with traditional methods such as the repeated residual method and the genetic algorithm, GLLS has the advantage of shrinking the model coefficients and saving computational time. Monte Carlo simulation studies and a real data example are reported to assess the performance of the proposed method.
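One ingredient of the GLLS combination can be illustrated as follows: compute a LASSO path by least angle regression and pick the step minimizing a generalized cross-validation (GCV) score, with the number of nonzero coefficients used as the degrees of freedom. The generic synthetic design below stands in for the bilinear-model regressors, which are not reproduced here (Python, using scikit-learn):

import numpy as np
from sklearn.linear_model import lars_path

rng = np.random.default_rng(7)
n, p = 200, 15
X = rng.standard_normal((n, p))
beta = np.r_[1.2, -0.9, 0.7, np.zeros(p - 3)]
y = X @ beta + rng.standard_normal(n)
X = X - X.mean(axis=0)                         # lars_path assumes centred data
y = y - y.mean()

alphas, _, coefs = lars_path(X, y, method='lasso')   # coefs has shape (p, n_steps)
gcv = np.empty(coefs.shape[1])
for j in range(coefs.shape[1]):
    resid = y - X @ coefs[:, j]
    df = np.count_nonzero(coefs[:, j])
    gcv[j] = n * np.sum(resid ** 2) / (n - df) ** 2
best_coefs = coefs[:, np.argmin(gcv)]          # GCV-selected coefficients along the path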
66.
Selecting the important variables is one of the most important model selection problems in statistical applications. In this article, we address variable selection in finite mixtures of generalized semiparametric models. To overcome the computational burden, we introduce a class of penalized variable selection procedures for these models. Estimation of the nonparametric component is carried out via multivariate kernel regression. The new method is shown to be consistent for variable selection, and its performance is assessed via simulation.
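A minimal sketch of the multivariate kernel regression used for the nonparametric component; the mixture modelling and the penalized selection steps are not shown, and the Gaussian product kernel and bandwidth are illustrative choices (Python):

import numpy as np

def nw_kernel_regression(X, y, x0, h=0.5):
    # Nadaraya-Watson estimate at x0 with a Gaussian product kernel of bandwidth h.
    d2 = np.sum((X - x0) ** 2, axis=1)
    w = np.exp(-0.5 * d2 / h ** 2)
    return np.sum(w * y) / np.sum(w)

rng = np.random.default_rng(6)
X = rng.uniform(-1, 1, size=(200, 2))
y = np.sin(np.pi * X[:, 0]) + X[:, 1] ** 2 + 0.1 * rng.standard_normal(200)
m_hat = nw_kernel_regression(X, y, x0=np.array([0.2, -0.3]))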
67.
Molecular markers combined with powerful statistical tools have made it possible to detect and analyze multiple loci on the genome that are responsible for the phenotypic variation in quantitative traits. The objectives of the study presented in this paper are to identify a subset of single nucleotide polymorphism (SNP) markers that are associated with a particular trait and to construct a model that can best predict the value of the trait given the genotypic information of the SNPs, using a three-step strategy. In the first step, a genome-wide association test is performed to screen SNPs that are associated with the quantitative trait of interest; SNPs with p-values of less than 5% are then analyzed in the second step. In the second step, a large number of randomly selected models, each consisting of a fixed number of randomly selected SNPs, are analyzed using the least angle regression method. This step further removes redundant SNPs due to the complicated association among SNPs. A subset of SNPs that are shown to have a significant effect on the response trait more often than by chance is considered for the third step. In the third step, two alternative methods are considered: the least absolute shrinkage and selection operator (LASSO) and sparse partial least squares regression. For both methods, the predictive ability of the fitted model is evaluated on an independent test set. The performance of the proposed method is illustrated by the analysis of a real data set on Canadian Holstein cattle.
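A hedged sketch of the three-step strategy on synthetic SNP data: marginal screening at the 5% level, repeated least angle regression fits on random SNP subsets to count how often each SNP is retained, and a final LASSO fit assessed on an independent test set. All thresholds, subset sizes, and replicate counts are illustrative assumptions, and sparse partial least squares (the paper's alternative in step three) is not shown (Python, using scikit-learn):

import numpy as np
from sklearn.feature_selection import f_regression
from sklearn.linear_model import Lars, LassoCV
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(4)
n, p = 300, 500
X = rng.binomial(2, 0.3, size=(n, p)).astype(float)   # SNP genotypes coded 0/1/2
beta = np.zeros(p)
beta[:10] = 0.5                                        # 10 causal SNPs
y = X @ beta + rng.standard_normal(n)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

# Step 1: marginal association screening at the 5% level.
_, pvals = f_regression(X_tr, y_tr)
screened = np.flatnonzero(pvals < 0.05)

# Step 2: repeated least angle regression fits on random subsets of the screened SNPs;
# count how often each SNP enters the model.
counts = np.zeros(p)
for _ in range(200):
    subset = rng.choice(screened, size=min(30, screened.size), replace=False)
    lar = Lars(n_nonzero_coefs=10).fit(X_tr[:, subset], y_tr)
    counts[subset[np.abs(lar.coef_) > 0]] += 1
stable = np.flatnonzero(counts >= 100)                 # retained in at least half the replicates

# Step 3: final LASSO on the stable SNPs, assessed on the independent test set.
final = LassoCV(cv=5).fit(X_tr[:, stable], y_tr)
test_r2 = final.score(X_te[:, stable], y_te)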
68.
In this paper we address the problem of estimating a vector of regression parameters in the Weibull censored regression model. Our main objective is to provide natural adaptive estimators that significantly improve upon the classical procedures in the situation where some of the predictors may or may not be associated with the response. In the context of two competing Weibull censored regression models (full model and candidate submodel), we consider an adaptive shrinkage estimation strategy that shrinks the full model maximum likelihood estimate in the direction of the submodel maximum likelihood estimate. We develop the properties of these estimators using the notion of asymptotic distributional risk. The shrinkage estimators are shown to have higher efficiency than the classical estimators for a wide class of models. Further, we consider a LASSO type estimation strategy and compare its relative performance with the shrinkage estimators. Monte Carlo simulations reveal that when the true model is close to the candidate submodel, the shrinkage strategy performs better than the LASSO strategy when, and only when, there are many inactive predictors in the model. The shrinkage and LASSO strategies are applied to a real data set from the Veterans' Administration (VA) lung cancer study to illustrate the usefulness of the procedures in practice.
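A minimal sketch of the shrinkage combination of full-model and submodel estimates. Fitting the Weibull censored-data likelihood is not shown; b_full and b_sub stand in for the two maximum likelihood estimates, and the (p2 - 2)/T form of the shrinkage factor is the usual one in this literature, used here as an assumption about the paper's exact definition (Python):

import numpy as np

def stein_shrinkage(b_full, b_sub, test_stat, p2, positive_part=True):
    # test_stat: a likelihood-ratio or Wald statistic for the p2 restricted coefficients.
    factor = 1.0 - (p2 - 2) / test_stat
    if positive_part:
        factor = max(factor, 0.0)              # positive-part shrinkage variant
    return b_sub + factor * (b_full - b_sub)

# Hypothetical numbers, purely to show how the estimates are combined:
b_full = np.array([0.9, -0.4, 0.2, 0.05, -0.03])
b_sub = np.array([0.9, -0.4, 0.0, 0.0, 0.0])   # submodel sets the last three to zero
b_shrunk = stein_shrinkage(b_full, b_sub, test_stat=6.5, p2=3)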
70.
There is an emerging need to advance linear mixed model technology to include variable selection methods that can simultaneously choose and estimate important effects from a potentially large number of covariates. However, the complex nature of variable selection has made it difficult to incorporate into mixed models. In this paper we extend the well-known class of penalties and show that they can be integrated succinctly into a linear mixed model setting. Under mild conditions, the estimator obtained from this mixed model penalised likelihood is shown to be consistent and asymptotically normally distributed. A simulation study reveals that the extended family of penalties achieves varying degrees of estimator shrinkage depending on the value of one of its parameters. The simulation study also shows there is a link between the number of false positives detected and the number of true coefficients when using the same penalty. This new mixed model variable selection (MMVS) technology was applied to a complex wheat quality data set to determine significant quantitative trait loci (QTL).
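A hedged sketch of penalised selection of fixed effects in a linear mixed model: with the random-effect covariance treated as known (in practice it would be estimated, for example by REML), the data are whitened and an L1 penalty is applied to the fixed effects. The plain L1 penalty stands in for the paper's extended penalty family (Python, using SciPy and scikit-learn):

import numpy as np
from scipy.linalg import cholesky, solve_triangular
from sklearn.linear_model import Lasso

rng = np.random.default_rng(5)
n_groups, n_per, p = 30, 5, 12
n = n_groups * n_per
X = rng.standard_normal((n, p))
beta = np.r_[1.0, -0.8, 0.6, np.zeros(p - 3)]
groups = np.repeat(np.arange(n_groups), n_per)
b = rng.standard_normal(n_groups) * 0.7                # random intercepts
y = X @ beta + b[groups] + rng.standard_normal(n) * 0.5

Z = np.zeros((n, n_groups))
Z[np.arange(n), groups] = 1.0
V = 0.7 ** 2 * Z @ Z.T + 0.5 ** 2 * np.eye(n)          # marginal covariance, taken as known here

L = cholesky(V, lower=True)
X_w = solve_triangular(L, X, lower=True)               # whiten: L^{-1} X
y_w = solve_triangular(L, y, lower=True)               # whiten: L^{-1} y

fit = Lasso(alpha=0.05, fit_intercept=False).fit(X_w, y_w)
beta_hat = fit.coef_                                   # penalised fixed-effect estimates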