Sort order: 137 query results found; search took 281 ms
121.
The penalized likelihood approach of Fan and Li (2001. Variable selection via nonconcave penalized likelihood and its oracle properties. Journal of the American Statistical Association 96:1348–1360; 2002. Variable selection for Cox's proportional hazards model and frailty model. The Annals of Statistics 30:74–99) differs from traditional variable selection procedures in that it deletes non-significant variables by estimating their coefficients as zero. Nevertheless, the desirable performance of this shrinkage methodology relies heavily on an appropriate choice of the tuning parameter involved in the penalty functions. In this work, new estimates of the norm of the error are first derived through the use of Kantorovich inequalities and then applied in the frailty-model framework. These estimates are used to derive a tuning parameter selection procedure for penalized frailty models with clustered data. In contrast with the standard methods, the proposed approach does not depend on resampling and therefore yields a considerable gain in computational time, while also producing improved results. Simulation studies are presented to support the theoretical findings, and two real medical data sets are analyzed.
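As an illustrative aside (ours, not taken from the paper): the SCAD penalty of Fan and Li (2001) and its univariate thresholding rule can be sketched in a few lines. `a = 3.7` is the value the authors recommend, and `scad_threshold` gives the penalized least-squares solution for a single coefficient whose ordinary least-squares estimate is `z` — small coefficients are estimated as exactly zero, which is the deletion mechanism described above.

```python
import numpy as np

def scad_penalty(theta, lam, a=3.7):
    """SCAD penalty p_lam(|theta|) of Fan and Li (2001)."""
    t = np.abs(theta)
    return np.where(
        t <= lam,
        lam * t,                                           # linear near zero
        np.where(
            t <= a * lam,
            (2 * a * lam * t - t**2 - lam**2) / (2 * (a - 1)),  # quadratic middle
            lam**2 * (a + 1) / 2,                          # constant tail
        ),
    )

def scad_threshold(z, lam, a=3.7):
    """Univariate SCAD solution: small coefficients are set exactly to
    zero, large coefficients are left unshrunk (near-unbiasedness)."""
    az = np.abs(z)
    if az <= 2 * lam:                      # soft-thresholding region
        return np.sign(z) * max(az - lam, 0.0)
    if az <= a * lam:                      # linearly interpolated region
        return ((a - 1) * z - np.sign(z) * a * lam) / (a - 2)
    return z                               # no shrinkage for large |z|
```

The tuning parameter `lam` is exactly the quantity whose data-driven selection the paper addresses; the sketch above only fixes the penalty family.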
122.
The problem of ill-conditioning in generalized linear regression is investigated. Besides collinearity among the explanatory variables, we define another type of ill-conditioning, ML-collinearity, which has similarly detrimental effects on the covariance matrix, e.g. inflation of some of the estimated standard errors of the regression coefficients. In either situation there is collinearity among the columns of the matrix of weighted variables. We present methods to detect, and practical examples to illustrate, the difference between these two types of ill-conditioning. We also review the applicability of alternative regression methods.
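As a rough sketch of the kind of diagnostic involved (our own illustration, not the paper's procedure): collinearity among the columns of the weighted-variable matrix W^{1/2}X can be flagged through condition indices computed from its column-scaled singular values, where `w` would hold the iterative (IRLS) weights from the GLM fit.

```python
import numpy as np

def condition_indices(X, w=None):
    """Condition indices of the (optionally weighted) design matrix.

    Columns are scaled to unit length first, so the indices reflect
    collinearity rather than differences in units.  Indices above
    roughly 30 are conventionally taken to signal harmful collinearity.
    """
    Xw = X if w is None else np.sqrt(w)[:, None] * X
    Xs = Xw / np.linalg.norm(Xw, axis=0)   # unit-length columns
    s = np.linalg.svd(Xs, compute_uv=False)
    return s[0] / s                        # largest / each singular value
```

Because the weighting enters only through `w`, the same function covers ordinary collinearity (`w=None`) and collinearity in the weighted matrix, which is where both types of ill-conditioning described above manifest themselves.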
123.
A penalized likelihood method has been developed previously for hazard function estimation using standard left-truncated, right-censored lifetime data with covariates, and the functional ANOVA structure built into the log hazard allows for versatile nonparametric modeling in this setting. In the presence of continuous covariates, however, the computation can be time-consuming due to the repeated numerical integrations involved. Adapting a device developed by Jeon and Lin (An effective method for high-dimensional log-density ANOVA estimation, with application to nonparametric graphical model building. Statist. Sinica 16:353–374) for penalized likelihood density estimation, we explore an alternative approach to hazard estimation in which the log likelihood is replaced by a computationally less demanding pseudo-likelihood. An assortment of issues concerning the practical implementation of the approach is addressed, including the selection of smoothing parameters, and extensive simulations are presented to assess the inferential efficiency of the “pseudo” method relative to the “real” one. An asymptotic theory concerning the convergence rates of the estimates, parallel to that for the original penalized likelihood estimation, is also noted.
124.
We summarize, review and comment upon three papers which discuss the use of discrete, noisy, incomplete, scattered pairwise dissimilarity data in statistical model building. Convex cone optimization codes are used to embed the objects into a Euclidean space which respects the dissimilarity information while controlling the dimension of the space. A “newbie” algorithm is provided for embedding new objects into this space. This allows the dissimilarity information to be incorporated into a smoothing spline ANOVA penalized likelihood model, a support vector machine, or any model that admits reproducing kernel Hilbert space components, for nonparametric regression, supervised learning, or semi-supervised learning. Future work and open questions are discussed. The papers are:
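For intuition only: classical (Torgerson) multidimensional scaling, sketched below, is a simplified noise-free relative of the convex-cone embedding the papers describe — it double-centers the squared dissimilarities and keeps the leading non-negative eigenpairs, which is how the dimension of the embedding space is controlled. This is our own sketch of the general idea, not the authors' algorithm.

```python
import numpy as np

def classical_mds(D, dim=2):
    """Embed objects with pairwise dissimilarity matrix D into R^dim.

    B = -0.5 * J D^2 J (double-centered squared dissimilarities);
    coordinates come from the top eigenpairs with positive eigenvalues.
    """
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n    # centering matrix
    B = -0.5 * J @ (D**2) @ J
    vals, vecs = np.linalg.eigh(B)         # ascending order
    vals, vecs = vals[::-1], vecs[:, ::-1]
    keep = np.clip(vals[:dim], 0.0, None)  # guard against tiny negatives
    return vecs[:, :dim] * np.sqrt(keep)
```

When the dissimilarities are exactly Euclidean, the embedding reproduces them; the convex-cone formulation in the papers handles the noisy, incomplete case this sketch does not.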
125.
The penalized quasi-likelihood (PQL) approach is the most common estimation procedure for the generalized linear mixed model (GLMM). However, it has been noted in the literature that PQL tends to underestimate variance components as well as regression coefficients. In this article, we show numerically that the biases of the variance component estimates by PQL are systematically related to the biases of the regression coefficient estimates by PQL, and that the biases of the variance component estimates by PQL increase as the random effects become more heterogeneous.
126.
Supersaturated designs are a large class of factorial designs which can be used for screening out the important factors from a large set of potentially active variables. The great advantage of these designs is that they reduce the experimental cost drastically, but their critical disadvantage is the confounding involved in the statistical analysis. In this article, we propose a method for analyzing data from a specific type of supersaturated design. The method heavily exploits the special block-orthogonal structure of the supersaturated designs given by Tang and Wu (1997. A method for constructing supersaturated designs and its Es²-optimality. Canadian J. Statist. 25:191–201). We also compare our method with several known statistical analysis methods using some of the existing supersaturated designs. The comparison is performed via simulation experiments, and the Type I and Type II error rates are calculated. The results are presented in tables, followed by a discussion.
127.
Cubic B-splines are used to estimate the nonparametric component of a semiparametric generalized linear model. A penalized log-likelihood ratio test statistic is constructed for the null hypothesis of linearity of the nonparametric function. When the number of knots is fixed, its limiting null distribution is that of a linear combination of independent chi-squared random variables, each with one degree of freedom. The smoothing parameter is determined by specifying a value for its asymptotic expectation under the null hypothesis. A simulation study is conducted to evaluate the power of the test, and a real-life dataset is used to illustrate its practical use.
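The limiting null distribution mentioned above — a weighted sum of independent χ²(1) variables — has no closed form in general, but its quantiles are straightforward to simulate. The sketch below (ours, with illustrative weights; in practice the weights would be the eigenvalues arising from the penalized test statistic) computes a Monte Carlo critical value.

```python
import numpy as np

def chi2_mixture_quantile(weights, q=0.95, n_sim=200_000, seed=0):
    """Monte Carlo q-quantile of sum_i w_i * X_i with the X_i
    independent chi-squared(1) random variables."""
    rng = np.random.default_rng(seed)
    w = np.asarray(weights, dtype=float)
    draws = rng.chisquare(1, size=(n_sim, w.size)) @ w  # one mixture draw per row
    return np.quantile(draws, q)
```

With a single unit weight this reduces to the ordinary χ²(1) critical value of about 3.84; with equal weights 0.5 and 0.5 it matches half the χ²(2) critical value, about 3.00.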
128.
Quadratic programming is a versatile tool for computing estimates in penalized regression. It can be used to produce estimates based on L1 roughness penalties, as in total variation denoising. In particular, it can compute estimates when the roughness penalty is the total variation of a derivative of the estimate. Combining two roughness penalties, the total variation and the total variation of the third derivative, yields an estimate with a continuous second derivative while controlling the number of spurious local extreme values. A multiresolution criterion may be included in the quadratic program to achieve local smoothing without having to specify smoothing parameters.
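As a self-contained sketch (not the paper's own implementation, and using SciPy's bound-constrained least-squares solver rather than a general QP code): 1-D total variation denoising, min over x of ½‖x−y‖² + λ‖Dx‖₁, can be solved through its dual, which is exactly a box-constrained quadratic program.

```python
import numpy as np
from scipy.optimize import lsq_linear

def tv_denoise(y, lam):
    """1-D total variation denoising via the dual QP:
    minimize 0.5*||D^T z - y||^2 subject to |z_i| <= lam,
    with the primal solution recovered as x = y - D^T z."""
    y = np.asarray(y, dtype=float)
    n = y.size
    D = np.diff(np.eye(n), axis=0)          # (n-1) x n first-difference matrix
    res = lsq_linear(D.T, y, bounds=(-lam, lam))
    return y - D.T @ res.x
```

Replacing `D` with a higher-order difference matrix penalizes the total variation of a derivative of the estimate instead, and stacking several such blocks in the dual accommodates combined penalties of the kind described above.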
129.
We study partial linear models in which the linear covariates are endogenous and give rise to an over-identified model. We propose combining the profile principle with local linear approximation and the generalized method of moments (GMM) to estimate the parameters of interest. We show that the profiled GMM estimators are root-n consistent and asymptotically normally distributed. By appropriately choosing the weight matrix, the estimators can attain the efficiency bound. We further consider variable selection using the moment restrictions imposed on the endogenous variables when the dimension of the covariates may diverge with the sample size, and propose a penalized GMM procedure, which is shown to have the sparsity property. We establish asymptotic normality of the resulting estimators of the nonzero parameters. Simulation studies are presented to assess the finite-sample performance of the proposed procedure.
130.
The skew-normal and skew-t distributions are parametric families currently under intense investigation, since they provide a more flexible formulation than the classical normal and t distributions by introducing a parameter which regulates skewness. While these families enjoy attractive formal properties from the probability viewpoint, a practical problem with their use in applications is the possibility that the maximum likelihood estimate of the parameter which regulates skewness diverges. This situation has vanishing probability as the sample size increases, but for finite samples it occurs with non-negligible probability, and its occurrence has unpleasant effects on the inferential process. Methods for overcoming this problem have been put forward both in the classical and in the Bayesian formulation, but their applicability is restricted to simple situations. We formulate a proposal based on the idea of penalized likelihood, which has connections with some of the existing methods but applies more generally, including to the multivariate case.