51.
This article considers Bayesian variable selection for binary responses via stochastic search variable selection and the Bayesian Lasso. To avoid matrix inversion in the corresponding Markov chain Monte Carlo implementations, the componentwise Gibbs sampler (CGS) idea is adopted. We also propose automatic hyperparameter tuning rules for the proposed approaches. Simulation studies and a real example demonstrate their performance. The results show that the CGS approaches not only perform well in variable selection but also yield lower batch-mean standard errors than the original methods, especially when the number of covariates is large.
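The componentwise idea can be illustrated with a minimal sketch for a Gaussian linear model with independent normal priors (not the binary-response models of the article): each coefficient is drawn from its scalar full conditional, so no matrix inversion is needed. The data, prior variances, and iteration counts below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: y = X beta + noise, p = 5 covariates, sparse true beta.
n, p = 100, 5
X = rng.standard_normal((n, p))
beta_true = np.array([2.0, 0.0, -1.5, 0.0, 0.0])
y = X @ beta_true + 0.5 * rng.standard_normal(n)

sigma2, tau2 = 0.25, 10.0   # error and prior variances (assumed known here)
beta = np.zeros(p)
draws = []
xtx = (X ** 2).sum(axis=0)  # only scalar quantities are ever inverted

for it in range(2000):
    for j in range(p):
        # Partial residual excluding covariate j.
        r = y - X @ beta + X[:, j] * beta[j]
        v = 1.0 / (xtx[j] / sigma2 + 1.0 / tau2)   # scalar posterior variance
        m = v * (X[:, j] @ r) / sigma2             # scalar posterior mean
        beta[j] = m + np.sqrt(v) * rng.standard_normal()
    if it >= 500:                                  # discard burn-in
        draws.append(beta.copy())

beta_hat = np.mean(draws, axis=0)
```

The posterior mean recovers the nonzero coefficients; the same componentwise updates extend to binary responses via latent-variable augmentation.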
52.
We study confidence intervals based on hard-thresholding, soft-thresholding, and adaptive soft-thresholding in a linear regression model where the number of regressors k may depend on and diverge with sample size n. In addition to the case of known error variance, we define and study versions of the estimators when the error variance is unknown. In the known-variance case, we provide an exact analysis of the coverage properties of such intervals in finite samples. We show that these intervals are always larger than the standard interval based on the least-squares estimator. Asymptotically, the intervals based on the thresholding estimators are larger even by an order of magnitude when the estimators are tuned to perform consistent variable selection. For the unknown-variance case, we provide nontrivial lower bounds and a small numerical study for the coverage probabilities in finite samples. We also conduct an asymptotic analysis where the results from the known-variance case can be shown to carry over asymptotically if the number of degrees of freedom n − k tends to infinity fast enough in relation to the thresholding parameter.
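The three thresholding estimators have simple closed forms; the sketch below applies them componentwise to a vector of least-squares estimates, with an illustrative threshold t.

```python
import numpy as np

def hard_threshold(b, t):
    """Keep components whose magnitude exceeds t; zero out the rest."""
    return np.where(np.abs(b) > t, b, 0.0)

def soft_threshold(b, t):
    """Shrink every component toward zero by t."""
    return np.sign(b) * np.maximum(np.abs(b) - t, 0.0)

def adaptive_soft_threshold(b, t):
    """Soft-thresholding with data-dependent shrinkage t**2 / |b|,
    so large components are shrunk less than under soft-thresholding."""
    with np.errstate(divide="ignore", invalid="ignore"):
        out = np.where(np.abs(b) > t, b - t**2 / b, 0.0)
    return out

b = np.array([3.0, 0.5, -2.0])   # illustrative least-squares estimates
```

All three set the small middle component to zero; they differ in how much the surviving components are shrunk.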
53.
To improve the out-of-sample performance of a portfolio, Lasso regularization is incorporated into the mean absolute deviation (MAD)-based portfolio selection method. We show that the resulting portfolio selection problem can be reformulated as a constrained least absolute deviation problem with linear equality constraints. Moreover, we propose a new descent algorithm based on the ideas of nonsmooth optimality conditions and a basis descent direction set. The resulting MAD-Lasso method enjoys at least two advantages. First, it does not involve estimating the covariance matrix, which is difficult, particularly in high-dimensional settings. Second, sparsity is encouraged: assets whose weights in the Markowitz portfolio are close to zero are driven exactly to zero, which reduces the management cost of the portfolio. Extensive simulation and real-data examples indicate that incorporating Lasso regularization consistently improves the out-of-sample performance of the MAD portfolio selection method, as measured by the Sharpe ratio and sparsity. Moreover, simulation results suggest that the proposed descent algorithm is more time-efficient than the interior point method and the ADMM algorithm.
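The reformulation can be sketched as a linear program: auxiliary variables bound the absolute deviations of the portfolio return and the absolute weights, turning the MAD-plus-Lasso objective into an LP with a budget constraint. This is a generic LP formulation under simulated returns, not the authors' descent algorithm; the penalty weight lam is an illustrative choice.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(1)
T, p = 60, 4                       # 60 return observations, 4 assets
R = 0.01 * rng.standard_normal((T, p)) + 0.001
D = R - R.mean(axis=0)             # deviations from mean returns
lam = 0.001                        # Lasso penalty weight (illustrative)

# Variables x = [w (p), u (T), v (p)], with u_t >= |D_t w| and v_j >= |w_j|.
c = np.concatenate([np.zeros(p), np.full(T, 1.0 / T), np.full(p, lam)])

A_ub = np.vstack([
    np.hstack([D, -np.eye(T), np.zeros((T, p))]),           #  D w - u <= 0
    np.hstack([-D, -np.eye(T), np.zeros((T, p))]),          # -D w - u <= 0
    np.hstack([np.eye(p), np.zeros((p, T)), -np.eye(p)]),   #  w - v <= 0
    np.hstack([-np.eye(p), np.zeros((p, T)), -np.eye(p)]),  # -w - v <= 0
])
b_ub = np.zeros(2 * T + 2 * p)
A_eq = np.hstack([np.ones((1, p)), np.zeros((1, T + p))])   # weights sum to 1
b_eq = np.array([1.0])
bounds = [(None, None)] * p + [(0, None)] * (T + p)

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
w = res.x[:p]                       # optimal portfolio weights
```

No covariance matrix appears anywhere in the formulation, which is the first advantage noted above.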
54.
Feature selection often constitutes one of the central aspects of many scientific investigations. Among penalized regression methods for feature selection, the smoothly clipped absolute deviation (SCAD) penalty is particularly useful because it satisfies the oracle property. However, its estimation algorithms, such as the local quadratic approximation and the concave-convex procedure, are not computationally efficient. In this paper, we propose an efficient penalization path algorithm. Numerical examples on real and simulated data illustrate that our path algorithm can be useful for feature selection in regression problems.
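For reference, the SCAD penalty of Fan and Li can be written down directly. The sketch below evaluates it componentwise, with the conventional choice a = 3.7: linear near zero (like the Lasso), then quadratic, then constant, so large coefficients are not shrunk.

```python
import numpy as np

def scad_penalty(beta, lam, a=3.7):
    """SCAD penalty, evaluated componentwise: linear for |b| <= lam,
    quadratic blend up to a*lam, constant beyond (no shrinkage of large b)."""
    b = np.abs(beta)
    small = b <= lam
    mid = (b > lam) & (b <= a * lam)
    return np.where(
        small, lam * b,
        np.where(mid, (2 * a * lam * b - b**2 - lam**2) / (2 * (a - 1)),
                 lam**2 * (a + 1) / 2))
```

The three pieces join continuously at |b| = lam and |b| = a*lam, which is what makes path-following over the penalty level feasible.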
55.
A multiple graphical model represents the dependence relations among the same set of random variables drawn from different classes: nodes represent random variables, edges represent direct associations between variables, and the graphical model of each class reflects both its own dependence structure and the information shared across classes. Using a joint estimation method for multiple graphical models, data from different individuals are grouped into classes by their characteristics; assuming that the dependence structure among the variables within each class follows a common Gaussian graphical model, the group Lasso and graphical Lasso methods are applied to jointly estimate the graphical structure of each class. Numerical simulations verify the effectiveness of the joint estimation method. The method is then used to analyze the dependence structure of 13 macroeconomic indicators across 15 Chinese provinces. The results show that common associations exist among the macroeconomic variables of provinces at different levels of economic development, reflecting the characteristics of China's current stage of economic development, while the dependence structure within each class reflects features specific to the economic development of the provinces in that class.
56.
A new class of probability distributions, the so-called connected double truncated gamma distribution, is introduced. We show that using this class as the error distribution of a linear model leads to a generalized quantile regression model that combines desirable properties of both least-squares and quantile regression: robustness to outliers and a differentiable loss function.
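The contrast between the two kinds of loss can be sketched generically: the standard check (pinball) loss has a kink at zero, while a quadratically smoothed version is differentiable everywhere. The smoothing below is a generic Huber-type construction for illustration, not the loss induced by the connected double truncated gamma distribution.

```python
import numpy as np

def check_loss(u, tau):
    """Standard quantile (pinball) loss: non-differentiable at 0."""
    return np.where(u >= 0, tau * u, (tau - 1) * u)

def smooth_check_loss(u, tau, eps=0.1):
    """Differentiable approximation: quadratic on |u| < eps, matching the
    pinball loss in value and slope at the boundary points +/- eps."""
    quad = u**2 / (4 * eps) + (tau - 0.5) * u + eps / 4
    return np.where(np.abs(u) < eps, quad, check_loss(u, tau))
```

For tau = 0.5 the check loss is half the absolute error, and the smoothed version replaces the kink at zero with a small parabola of height eps / 4.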
57.
In this paper, we investigate the objective function and deflation process for sparse Partial Least Squares (PLS) regression with multiple components. While many have considered variations on the objective for sparse PLS, the deflation process for sparse PLS has not received as much attention. Our work highlights a flaw in the Statistically Inspired Modification of Partial Least Squares (SIMPLS) deflation method when applied in sparse PLS regression. We also consider the Nonlinear Iterative Partial Least Squares (NIPALS) deflation in sparse PLS regression. To remedy the flaw in the SIMPLS method, we propose a new sparse PLS method wherein the direction vectors are constrained to be sparse and lie in a chosen subspace. We give insight into this new PLS procedure and show through examples and simulation studies that the proposed technique can outperform alternative sparse PLS techniques in coefficient estimation. Moreover, our analysis reveals a simple renormalization step that can be used to improve the estimation of sparse PLS direction vectors generated using any convex relaxation method.
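A renormalization step of this kind can be illustrated with the simplest convex relaxation: soft-threshold the first PLS direction X'y and rescale it to unit length before forming the component. The data and threshold level are illustrative assumptions.

```python
import numpy as np

def soft(v, t):
    """Componentwise soft-thresholding."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

rng = np.random.default_rng(2)
n, p = 200, 10
X = rng.standard_normal((n, p))
y = X[:, 0] - 2 * X[:, 1] + 0.1 * rng.standard_normal(n)

w = X.T @ y / n                             # first PLS direction (covariances)
w_sparse = soft(w, 0.3 * np.abs(w).max())   # zero out small loadings
w_sparse /= np.linalg.norm(w_sparse)        # renormalize to unit length
t_score = X @ w_sparse                      # first sparse PLS component
```

Without the rescaling, thresholding shrinks the direction vector's length and distorts the scale of every subsequent regression coefficient.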
58.
In this work, we present a computational method to approximate the locations of change-points in a time series of independent, normally distributed observations with a common mean and two possible variance values. Such series arise in the study of electrical signals associated with rhythmic activity patterns of nerves and muscles in animals, where the change-points mark the moments at which the electrical activity passes from a phase of silence to one of activity, or vice versa. We test the null hypothesis that the series contains no change-point against the alternative that it contains at least one, using the corresponding likelihood ratio as the test statistic; a computational implementation of the quadratic penalization technique is employed to approximate the log-likelihood ratio associated with this pair of hypotheses. When the null hypothesis is rejected, the method provides estimates of the locations of the change-points. Moreover, the method applies a posteriori processing to avoid generating relatively short periods of silence or activity. The method is applied to change-point detection in both experimental and synthetic data sets; in both cases, the results are more than satisfactory.
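A bare-bones version of the likelihood-ratio scan (without the quadratic penalization or the a posteriori processing described above) can be sketched for a single variance change in zero-mean Gaussian data; the segment lengths and variances below are illustrative.

```python
import numpy as np

def variance_changepoint_lr(x):
    """Scan all split points; return (max 2*log-likelihood-ratio, argmax).
    Assumes zero-mean Gaussian data with at most one variance change."""
    n = len(x)
    s2_full = np.mean(x**2)                 # MLE of variance under the null
    best, best_k = -np.inf, None
    for k in range(5, n - 5):               # keep both segments non-trivial
        s2_1 = np.mean(x[:k]**2)            # variance MLE before the split
        s2_2 = np.mean(x[k:]**2)            # variance MLE after the split
        stat = n * np.log(s2_full) - k * np.log(s2_1) - (n - k) * np.log(s2_2)
        if stat > best:
            best, best_k = stat, k
    return best, best_k

rng = np.random.default_rng(3)
x = np.concatenate([0.2 * rng.standard_normal(150),   # silence phase
                    2.0 * rng.standard_normal(150)])  # activity phase
stat, khat = variance_changepoint_lr(x)
```

A large statistic rejects the no-change null, and the maximizing index estimates the moment of transition between silence and activity.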
59.
As an important safe-haven asset, gold calls for quantitative description and forecasting of its price volatility, which matters greatly for the risk-management decisions of all types of investors. Building on the standard regression forecasting model, new volatility forecasting models are constructed using principal component analysis, forecast combination, and two mainstream model-shrinkage methods (the Elastic net and the Lasso), in order to investigate which method exploits the information in multiple predictors most effectively. Three evaluation methods, namely the model confidence set (MCS), out-of-sample R2, and the Direction-of-Change (DoC) test, are then used to assess the out-of-sample forecasting accuracy of the new models. The empirical results show that, under all three evaluation criteria, the two shrinkage models achieve the best out-of-sample forecasting accuracy among the competing models and thus provide a reliable basis for forecasting the volatility of Chinese gold futures prices.
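The shrinkage idea can be sketched with a plain coordinate-descent Lasso applied to simulated predictors of a volatility proxy. The data-generating process and penalty level are illustrative assumptions, not the paper's realized-volatility specification for gold futures.

```python
import numpy as np

def lasso_cd(X, y, lam, n_iter=200):
    """Coordinate-descent Lasso for (1/2n)||y - X b||^2 + lam * ||b||_1."""
    n, p = X.shape
    beta = np.zeros(p)
    xtx = (X**2).sum(axis=0)
    for _ in range(n_iter):
        for j in range(p):
            r = y - X @ beta + X[:, j] * beta[j]   # partial residual
            z = X[:, j] @ r
            beta[j] = np.sign(z) * max(abs(z) - n * lam, 0.0) / xtx[j]
    return beta

rng = np.random.default_rng(4)
n, p = 200, 8                        # 8 candidate volatility predictors
X = rng.standard_normal((n, p))
vol = 0.8 * X[:, 0] + 0.5 * X[:, 1] + 0.3 * rng.standard_normal(n)
beta = lasso_cd(X, vol, lam=0.1)
```

Only the two informative predictors receive nonzero coefficients; the remaining candidates are shrunk exactly to zero, which is what lets the shrinkage models exploit many predictors without overfitting.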
60.
Summary. The lasso penalizes a least squares regression by the sum of the absolute values (L1-norm) of the coefficients. The form of this penalty encourages sparse solutions (with many coefficients equal to 0). We propose the 'fused lasso', a generalization that is designed for problems with features that can be ordered in some meaningful way. The fused lasso penalizes the L1-norm of both the coefficients and their successive differences. Thus it encourages sparsity of the coefficients and also sparsity of their differences, i.e. local constancy of the coefficient profile. The fused lasso is especially useful when the number of features p is much greater than N, the sample size. The technique is also extended to the 'hinge' loss function that underlies the support vector classifier. We illustrate the methods on examples from protein mass spectroscopy and gene expression data.
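The fused lasso penalty itself is easy to state: with lam1 weighting the coefficients and lam2 their successive differences, a locally constant coefficient profile incurs a smaller penalty than a jagged one with the same L1-norm. The two example vectors below are illustrative.

```python
import numpy as np

def fused_lasso_penalty(beta, lam1, lam2):
    """L1 penalty on the coefficients plus L1 penalty on their
    successive differences (the fused lasso penalty)."""
    return lam1 * np.abs(beta).sum() + lam2 * np.abs(np.diff(beta)).sum()

flat = np.array([0.0, 0.0, 1.0, 1.0, 1.0, 0.0])    # locally constant profile
jagged = np.array([0.0, 1.0, 0.0, 1.0, 1.0, 0.0])  # same L1-norm, more jumps
```

Both vectors have L1-norm 3, but the jagged one pays more for its extra jumps, so minimizing this penalty favors piecewise-constant coefficient profiles.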