Access: 2,741 paid full-text articles; 76 free; 34 free domestically.
By discipline: Management 293; Ethnology 13; Demography 49; Collected works/series 151; Theory and methodology 55; Multidisciplinary 966; Sociology 129; Statistics 1,195.
By year: 2024: 2; 2023: 19; 2022: 36; 2021: 39; 2020: 50; 2019: 81; 2018: 107; 2017: 112; 2016: 102; 2015: 91; 2014: 133; 2013: 466; 2012: 226; 2011: 156; 2010: 139; 2009: 112; 2008: 117; 2007: 142; 2006: 107; 2005: 100; 2004: 98; 2003: 94; 2002: 76; 2001: 51; 2000: 45; 1999: 30; 1998: 22; 1997: 23; 1996: 10; 1995: 7; 1994: 8; 1993: 6; 1992: 12; 1991: 6; 1990: 4; 1989: 6; 1988: 3; 1987: 1; 1986: 2; 1985: 2; 1984: 3; 1983: 1; 1982: 2; 1977: 2.
A total of 2,851 results were returned; entries 71–80 are shown below.
71.
Summary.  We propose covariance-regularized regression, a family of methods for prediction in high dimensional settings that uses a shrunken estimate of the inverse covariance matrix of the features to achieve superior prediction. An estimate of the inverse covariance matrix is obtained by maximizing the log-likelihood of the data, under a multivariate normal model, subject to a penalty; it is then used to estimate coefficients for the regression of the response onto the features. We show that ridge regression, the lasso and the elastic net are special cases of covariance-regularized regression, and we demonstrate that certain previously unexplored forms of covariance-regularized regression can outperform existing methods in a range of situations. The covariance-regularized regression framework is extended to generalized linear models and linear discriminant analysis, and is used to analyse gene expression data sets with multiple class and survival outcomes.
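As a rough illustration of the two-step idea described in this abstract (not the authors' exact estimator), the sketch below obtains an L1-penalized estimate of the inverse covariance matrix with scikit-learn's GraphicalLasso and then plugs it into the usual least-squares formula. The simulated data, the penalty level and the choice of GraphicalLasso are assumptions made only for the example.

```python
import numpy as np
from sklearn.covariance import GraphicalLasso
from sklearn.datasets import make_regression

# Simulated data with standardized features (illustrative only).
X, y = make_regression(n_samples=100, n_features=20, noise=5.0, random_state=0)
X = (X - X.mean(axis=0)) / X.std(axis=0)
y = y - y.mean()

# Step 1: shrunken estimate of the inverse covariance (precision) matrix,
# obtained by maximizing an L1-penalized Gaussian log-likelihood.
theta_hat = GraphicalLasso(alpha=0.1).fit(X).precision_

# Step 2: plug the regularized precision matrix into the least-squares
# formula beta = Sigma^{-1} Cov(X, y) to get regression coefficients.
beta_hat = theta_hat @ (X.T @ y / X.shape[0])
print(np.round(beta_hat[:5], 3))
```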
72.
Abstract.  This paper considers covariate selection for the additive hazards model. This model is particularly simple to study theoretically, and its practical implementation has several major advantages over the corresponding methodology for the proportional hazards model. One complication compared with the proportional model, however, is that there is no simple likelihood to work with. We study a least squares criterion with desirable properties and show how this criterion can be interpreted as a prediction error. Given this criterion, we define ridge and Lasso estimators as well as an adaptive Lasso and study their large sample properties for the situation where the number of covariates p is smaller than the number of observations. We also show that the adaptive Lasso has the oracle property. In many practical situations, it is more relevant to tackle the case where p is large relative to the number of observations. We do this by studying the properties of the so-called Dantzig selector in the setting of the additive risk model. Specifically, we establish a bound on how close the solution is to a true sparse signal when the number of covariates is large. In a simulation study, we also compare the Dantzig and adaptive Lasso for a moderate to small number of covariates. The methods are applied to a breast cancer data set with gene expression recordings and to the primary biliary cirrhosis clinical data.
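The adaptive Lasso mentioned here can be illustrated with a generic sketch on a plain linear model (not the paper's additive hazards least-squares criterion): compute an initial consistent estimate, turn it into covariate-specific weights, and run an ordinary lasso on rescaled covariates. Data, penalty level and weight exponent are all assumptions for the illustration.

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.linear_model import Lasso, LinearRegression

# Toy data with a sparse signal (illustrative; not survival data).
X, y = make_regression(n_samples=200, n_features=10, n_informative=3,
                       noise=1.0, random_state=1)

# Adaptive lasso via reparametrization: weight each covariate by an initial
# consistent estimate, run an ordinary lasso, then rescale the coefficients.
beta_init = LinearRegression().fit(X, y).coef_       # initial estimate
w = 1.0 / (np.abs(beta_init) + 1e-8)                 # adaptive weights (gamma = 1)
beta_adaptive = Lasso(alpha=0.1).fit(X / w, y).coef_ / w
print(np.round(beta_adaptive, 2))
```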
73.
74.
We deal with parametric inference and selection problems for jump components in discretely observed diffusion processes with jumps. We prepare several competing parametric models for the Lévy measure, any of which might be misspecified, and select the best model on the basis of information criteria. We construct quasi-information criteria (QIC), which approximate the information criteria based on continuous observations.
75.
When VAR models are used to predict future outcomes, the forecast error can be substantial. Through imposition of restrictions on the off-diagonal elements of the parameter matrix, however, the information in the process may be condensed to the marginal processes. In particular, if the cross-autocorrelations in the system are small and only a small sample is available, then such a restriction may reduce the forecast mean squared error considerably.

In this paper, we propose three different techniques to decide whether to use the restricted or unrestricted model, i.e. the full VAR(1) model or only marginal AR(1) models. In a Monte Carlo simulation study, all three proposed tests have been found to behave quite differently depending on the parameter setting. One of the proposed tests stands out, however, as the preferred one and is shown to outperform other estimators for a wide range of parameter settings.
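A minimal sketch of the trade-off described above, under assumed simulation settings: fit both the unrestricted VAR(1) and the diagonal restriction (separate marginal AR(1) fits) by least squares and compare their one-step forecast errors. This illustrates the restricted/unrestricted choice itself, not any of the paper's three proposed tests.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate a bivariate VAR(1) with small cross-autocorrelations (assumption).
A_true = np.array([[0.5, 0.05],
                   [0.05, 0.5]])
T = 60
y = np.zeros((T, 2))
for t in range(1, T):
    y[t] = A_true @ y[t - 1] + rng.normal(size=2)

train, holdout = y[:-1], y[-1]          # keep the last observation for forecasting

# Unrestricted VAR(1): least-squares regression of y_t on y_{t-1}.
B, *_ = np.linalg.lstsq(train[:-1], train[1:], rcond=None)
A_full = B.T

# Restricted model: independent marginal AR(1) fits (off-diagonals set to zero).
A_marg = np.diag([
    np.linalg.lstsq(train[:-1, [i]], train[1:, i], rcond=None)[0][0]
    for i in range(2)
])

for name, A_hat in [("full VAR(1)", A_full), ("marginal AR(1)", A_marg)]:
    err = holdout - A_hat @ train[-1]
    print(f"{name:>15}: squared one-step forecast error = {err @ err:.3f}")
```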

76.
77.
In the context of an objective Bayesian approach to the multinomial model, Dirichlet(a, …, a) priors with a < 1 have previously been shown to be inadequate in the presence of zero counts, suggesting that the uniform prior (a = 1) is the preferred candidate. In the presence of many zero counts, however, this prior may not be satisfactory either. A model selection approach is proposed, allowing for the possibility of zero parameters corresponding to zero count categories. This approach results in a posterior mixture of Dirichlet distributions and marginal mixtures of beta distributions, which seem to avoid the problems that potentially result from the various proposed Dirichlet priors, in particular in the context of extreme data with zero counts.
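The problem motivating this entry can be shown numerically: under a symmetric Dirichlet(a, …, a) prior, the posterior mean allocates noticeable mass to categories that were never observed, and more so as a grows. The counts below are assumed purely for illustration; the sketch does not implement the proposed mixture model.

```python
import numpy as np

# Illustrative multinomial counts with many empty categories (assumed data).
counts = np.array([40, 25, 15, 0, 0, 0, 0, 0, 0, 0])
n, k = counts.sum(), counts.size

# Posterior mean under a symmetric Dirichlet(a, ..., a) prior.
for a in (1 / k, 0.5, 1.0):
    post_mean = (counts + a) / (n + k * a)
    zero_mass = post_mean[counts == 0].sum()
    print(f"a = {a:.2f}: posterior mass on zero-count categories = {zero_mass:.3f}")
```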
78.
郑丽萍 (Zheng Liping), 《兰州学刊》 (Lanzhou Academic Journal), 2009, (10): 198–202
In the Song dynasty, the nature of muzhi (epitaphs) shifted from mourning literature to biographical literature, recording more of the personal deeds of the deceased. Drawing on epitaph materials, this paper examines the spouse-selection behavior of Song literati families, shedding further light on how elders chose spouses for their children and on the spouse-selection values prevalent in society at the time.
79.
Abstract.  In this article, we propose a penalized local log-likelihood method to locally select the number of components in nonparametric finite mixtures of regression models via a proportion shrinkage method. Mean functions and variance functions are estimated simultaneously. We show that the number of components can be estimated consistently, and we further establish the asymptotic normality of the functional estimates. We use a modified EM algorithm to estimate the unknown functions. Simulations are conducted to demonstrate the performance of the proposed method. We illustrate our method via an empirical analysis of housing price index data for the United States.
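As background for the modified EM algorithm mentioned above, the sketch below runs a basic EM for a two-component parametric mixture of linear regressions on assumed toy data. It is global and unpenalized, so it shows only the kind of E-step/M-step updates involved, not the paper's local, nonparametric, proportion-shrinkage procedure.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy data from a two-component mixture of linear regressions (assumed model).
n = 300
x = rng.uniform(0.0, 1.0, n)
z = rng.integers(0, 2, n)                                  # latent labels
y = np.where(z == 0, 1 + 2 * x, 4 - 3 * x) + rng.normal(scale=0.3, size=n)
X = np.column_stack([np.ones(n), x])

pi = np.array([0.5, 0.5])                                  # mixing proportions
beta = np.array([[0.0, 1.0], [1.0, -1.0]])                 # per-component coefficients
sigma = np.array([1.0, 1.0])                               # per-component std devs

for _ in range(100):
    # E-step: posterior responsibility of each component for each observation.
    dens = np.stack([
        pi[j] / sigma[j] * np.exp(-0.5 * ((y - X @ beta[j]) / sigma[j]) ** 2)
        for j in range(2)
    ])
    resp = dens / dens.sum(axis=0)

    # M-step: weighted least squares and weighted variances per component.
    for j in range(2):
        w = resp[j]
        beta[j] = np.linalg.solve(X.T @ (w[:, None] * X), X.T @ (w * y))
        sigma[j] = np.sqrt(np.sum(w * (y - X @ beta[j]) ** 2) / w.sum())
    pi = resp.mean(axis=1)

print("mixing proportions:", np.round(pi, 2))
print("coefficients (intercept, slope):", np.round(beta, 2))
```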
80.
We update a previous approach to the estimation of the size of an open population when there are multiple lists at each time point. Our motivation is 35 years of longitudinal data on the detection of drug users by the Central Registry of Drug Abuse in Hong Kong. We develop a two-stage smoothing spline approach. This gives a flexible and easily implemented alternative to the previous method, which was based on kernel smoothing. The new method retains the property of reducing the variability of the individual estimates at each time point. We evaluate the new method by means of a simulation study that includes an examination of the effects of variable selection. The new method is then applied to data collected by the Central Registry of Drug Abuse. The parameter estimates obtained are compared with the well-known Jolly–Seber estimates based on single-capture methods.
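To indicate what the second smoothing stage might look like, here is a minimal sketch that smooths a sequence of hypothetical noisy yearly abundance estimates with a smoothing spline. The data are invented stand-ins (not the Central Registry data), and the first-stage multi-list estimation is not reproduced.

```python
import numpy as np
from scipy.interpolate import UnivariateSpline

rng = np.random.default_rng(3)

# Hypothetical noisy yearly abundance estimates (stand-ins for the first-stage
# point estimates; not the Central Registry of Drug Abuse data).
years = np.arange(1980, 2015)
point_estimates = (5000 + 1500 * np.sin((years - 1980) / 6.0)
                   + rng.normal(0.0, 300.0, years.size))

# Second stage: a smoothing spline through the yearly estimates reduces the
# variability of the individual time-point estimates.
spline = UnivariateSpline(years.astype(float), point_estimates,
                          s=years.size * 300.0**2)
smoothed = spline(years)
print(np.round(smoothed[:5]).astype(int))
```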