Search results: 5,031 records.
111.
In this paper, we present ClusterKDE, a clustering algorithm based on univariate kernel density estimation. It is an iterative procedure in which, at each step, a new cluster is obtained by minimizing a smooth kernel function. Although our applications use the univariate Gaussian kernel, any smooth kernel function can be used. The proposed algorithm has the advantage of not requiring the number of clusters a priori. Furthermore, ClusterKDE is simple, easy to implement, well defined, and stops in a finite number of steps; that is, it always converges regardless of the initial point. We illustrate our findings with numerical experiments obtained by implementing the algorithm in Matlab and applying it to practical problems. The results indicate that ClusterKDE is competitive and fast compared with the well-known Clusterdata and K-means algorithms that Matlab uses for clustering data.
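As a rough illustration of the general idea (not necessarily the authors' exact procedure), the following Python sketch clusters univariate data with a Gaussian KDE by splitting the sample at local minima of the estimated density, so the number of clusters emerges from the data rather than being fixed in advance; the function name and grid size are arbitrary choices.

```python
import numpy as np
from scipy.stats import gaussian_kde

def kde_valley_clusters(x, grid_size=512):
    """Cluster 1-D data by splitting at local minima (valleys) of a Gaussian KDE.

    Generic KDE-based clustering sketch: the number of clusters emerges
    from the estimated density rather than being fixed in advance.
    """
    x = np.asarray(x, dtype=float)
    kde = gaussian_kde(x)                      # bandwidth via Scott's rule
    grid = np.linspace(x.min(), x.max(), grid_size)
    dens = kde(grid)

    # Interior local minima of the density act as cluster boundaries.
    is_valley = (dens[1:-1] < dens[:-2]) & (dens[1:-1] < dens[2:])
    cut_points = grid[1:-1][is_valley]

    # Assign each observation to the interval between consecutive valleys.
    labels = np.searchsorted(cut_points, x)
    return labels, cut_points

# Example: two well-separated univariate groups.
rng = np.random.default_rng(0)
data = np.concatenate([rng.normal(0, 1, 200), rng.normal(6, 1, 200)])
labels, cuts = kde_valley_clusters(data)
print("estimated number of clusters:", labels.max() + 1)
```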
112.
Estimation in the multivariate context when the number of available observations is smaller than the number of variables is a classical theoretical problem. To ensure estimability, one has to impose certain constraints on the parameters. A method for maximum likelihood estimation under constraints is proposed to solve this problem. Even in the extreme case where only a single multivariate observation is available, this may provide a feasible solution. It also provides a simple, straightforward way to allow for specific structures within and between the covariance matrices of several populations. This methodology yields exact maximum likelihood estimates.
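The abstract does not name a specific constraint, but the following Python sketch illustrates the basic point with a generic example: when n < p the unconstrained covariance MLE is singular, whereas imposing a structural constraint (here simply a diagonal covariance, chosen only for illustration) makes the maximum likelihood estimate well defined.

```python
import numpy as np

rng = np.random.default_rng(1)
p, n = 10, 4                      # more variables than observations
X = rng.normal(size=(n, p))

# Unconstrained MLE of the covariance matrix is singular when n <= p.
S = np.cov(X, rowvar=False, bias=True)
print("rank of unconstrained MLE:", np.linalg.matrix_rank(S))       # at most n - 1

# Constrained MLE under an assumed diagonal structure: only p variances
# need to be estimated, so the estimate is positive definite and usable
# even though n < p.
sigma2_hat = X.var(axis=0, ddof=0)            # per-variable MLE variances
Sigma_diag = np.diag(sigma2_hat)
print("rank of constrained MLE:", np.linalg.matrix_rank(Sigma_diag))  # full rank p
```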
113.
114.
Recent research indicates that CEOs' temporal focus (the degree to which individuals attend to the past, present, and future) is a critical predictor of strategic outcomes. Building on paradox theory and the attention-based view, we examine the implications of CEOs' past and future focus for strategic change. Results from polynomial regression analysis reveal that CEOs who cognitively embrace both the past and the future at the same time engage more in strategic change. Our results also reveal that the positive strategic change–firm performance relationship is enhanced when CEOs' past focus is high, whereas CEOs' future focus weakens the translation of strategic change into firm performance when their past focus is simultaneously low. Supplemental analyses indicate that the impact of CEOs' temporal focus differs between stable and dynamic environments. Our study thus extends the literature on both individuals' temporal focus and strategic change.
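For readers unfamiliar with the method, the sketch below shows the kind of second-order polynomial (response-surface) regression referred to above, using simulated data and hypothetical variable names (past, future, change); it is not the authors' specification or data.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical CEO-level data: past/future temporal focus scores and a
# strategic-change measure (simulated here purely for illustration).
rng = np.random.default_rng(2)
n = 300
df = pd.DataFrame({
    "past": rng.normal(size=n),
    "future": rng.normal(size=n),
})
df["change"] = (0.3 * df["past"] + 0.3 * df["future"]
                + 0.2 * df["past"] * df["future"]
                + rng.normal(scale=0.5, size=n))

# Second-order polynomial (response-surface) specification:
# change ~ past + future + past^2 + past*future + future^2
model = smf.ols(
    "change ~ past + future + I(past**2) + past:future + I(future**2)",
    data=df,
).fit()
print(model.params)
```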
115.
In a recent issue of this journal, Holgersson et al. [Dummy variables vs. category-wise models, J. Appl. Stat. 41(2) (2014), pp. 233–241, doi:10.1080/02664763.2013.838665] compared the use of dummy coding in regression analysis with the use of category-wise models (i.e. estimating separate regression models for each group) for estimating and testing group differences in intercept and in slope. They presented three objections to the use of dummy variables in a single regression equation, which could be overcome by the category-wise approach. In this note, I first comment on each of these three objections and then draw attention to some other issues in comparing the two approaches. This commentary further clarifies the differences and similarities between the dummy-variable and category-wise approaches.
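A minimal sketch of the two approaches being compared, on simulated data with hypothetical variable names: a single regression with a group dummy and a dummy-by-x interaction reproduces the category-wise point estimates, while the standard errors differ because the dummy model pools the error variance across groups.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
n = 200
df = pd.DataFrame({
    "x": rng.normal(size=n),
    "group": rng.choice(["A", "B"], size=n),
})
df["y"] = np.where(df["group"] == "A",
                   1.0 + 0.5 * df["x"],
                   2.0 + 1.5 * df["x"]) + rng.normal(scale=0.3, size=n)

# Dummy-variable approach: one equation with a group main effect and a
# group-by-x interaction.
dummy_fit = smf.ols("y ~ x * C(group)", data=df).fit()

# Category-wise approach: a separate regression per group.
fit_A = smf.ols("y ~ x", data=df[df.group == "A"]).fit()
fit_B = smf.ols("y ~ x", data=df[df.group == "B"]).fit()

# Group B's intercept equals Intercept + C(group)[T.B]; its slope equals
# x + x:C(group)[T.B] -- the same point estimates as the separate fits.
print(dummy_fit.params)
print(fit_A.params, fit_B.params)
```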
116.
Estimation of mixtures of regression models is usually based on the assumption of normally distributed components, and maximum likelihood estimation of the normal components is sensitive to noise, outliers, and high-leverage points. Missing values are inevitable in many situations, and parameter estimates can be biased if they are not handled properly. In this article, we propose mixtures of regression models for contaminated, incomplete, heterogeneous data. The proposed models provide robust estimates of regression coefficients that vary across latent subgroups, even in the presence of missing values. The methodology is illustrated through simulation studies and a real data analysis.
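As background, the sketch below implements the standard EM algorithm for a two-component mixture of simple linear regressions with normal errors; it is the non-robust, complete-data baseline only and does not implement the contaminated or missing-data extensions proposed in the article.

```python
import numpy as np
from scipy.stats import norm

def mixreg_em(x, y, K=2, n_iter=200, seed=0):
    """EM for a K-component mixture of simple linear regressions with
    normal errors (standard baseline, not the robust extension)."""
    rng = np.random.default_rng(seed)
    n = len(y)
    X = np.column_stack([np.ones(n), x])
    resp = rng.dirichlet(np.ones(K), size=n)   # random initial responsibilities
    beta = np.zeros((K, 2))
    sigma = np.ones(K)
    pi = np.full(K, 1.0 / K)
    for _ in range(n_iter):
        # M-step: weighted least squares per component.
        for k in range(K):
            w = resp[:, k]
            W = np.diag(w)
            beta[k] = np.linalg.solve(X.T @ W @ X, X.T @ (w * y))
            res = y - X @ beta[k]
            sigma[k] = np.sqrt(np.sum(w * res ** 2) / np.sum(w))
            pi[k] = w.mean()
        # E-step: posterior component probabilities.
        dens = np.column_stack([
            pi[k] * norm.pdf(y, loc=X @ beta[k], scale=sigma[k]) for k in range(K)
        ])
        resp = dens / dens.sum(axis=1, keepdims=True)
    return beta, sigma, pi
```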
117.
Focusing on model selection problems in the family of Poisson mixture models (including the Poisson mixture regression model with random effects and the zero-inflated Poisson regression model with random effects), this paper derives two conditional Akaike information criteria. The criteria are unbiased estimators of the conditional Akaike information based on the conditional log-likelihood and on the joint log-likelihood, respectively. The derivation is free of specific parametric assumptions about the conditional mean of the true data-generating model, applies to different types of estimation methods, and does not rely on asymptotic arguments. Simulations show that the proposed criteria have promising estimation accuracy. Moreover, the criterion based on the conditional log-likelihood demonstrates good model selection performance under different scenarios. Two real data sets are used to illustrate the proposed method.
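A heavily simplified sketch of how a plug-in conditional AIC can be computed once conditional means and an effective-degrees-of-freedom estimate are available; the paper's contribution is precisely the unbiased, asymptotics-free estimation of such a criterion, which this sketch does not reproduce.

```python
import numpy as np
from scipy.stats import poisson

def conditional_aic(y, mu_conditional, effective_dof):
    """Generic plug-in conditional AIC for a Poisson mixed model:
    -2 * conditional log-likelihood (evaluated at the predicted random
    effects) + 2 * a user-supplied effective-degrees-of-freedom estimate."""
    cll = poisson.logpmf(y, mu_conditional).sum()
    return -2.0 * cll + 2.0 * effective_dof
```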
118.
Mixtures of factor analyzers is a useful model-based clustering method that can avoid the curse of dimensionality in high-dimensional clustering. However, this approach is sensitive both to diverse non-normalities of the marginal variables and to outliers, which are commonly observed in multivariate experiments. We propose mixtures of Gaussian copula factor analyzers (MGCFA) for clustering high-dimensional data. The model has two advantages: (1) it allows different marginal distributions, giving the mixture model greater fitting flexibility; (2) it avoids the curse of dimensionality by embedding the factor-analytic structure in the component-correlation matrices of the mixture distribution. An EM algorithm is developed for fitting MGCFA. The proposed method is free of the curse of dimensionality and allows any parametric marginal distribution that best fits the data. It is applied to both synthetic data and microarray gene expression data for clustering and shows better performance than several existing methods.
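As a rough, simplified surrogate (not the MGCFA model itself), the sketch below applies a rank-based Gaussian copula transform to each marginal and then fits an ordinary Gaussian mixture with scikit-learn; the factor-analytic constraint on the component covariances is omitted.

```python
import numpy as np
from scipy.stats import norm, rankdata
from sklearn.mixture import GaussianMixture

def copula_normal_scores(X):
    """Rank-based Gaussian copula transform: map each marginal to normal
    scores so that clustering no longer depends on the marginal shapes."""
    n = X.shape[0]
    U = rankdata(X, axis=0) / (n + 1.0)        # empirical CDF values in (0, 1)
    return norm.ppf(U)

# Two groups with skewed (lognormal) marginals, simulated for illustration.
rng = np.random.default_rng(4)
X = np.vstack([rng.lognormal(0.0, 0.5, size=(150, 5)),
               rng.lognormal(1.0, 0.5, size=(150, 5))])
Z = copula_normal_scores(X)
labels = GaussianMixture(n_components=2, random_state=0).fit_predict(Z)
```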
119.
In this article, we calibrate the Vasicek interest rate model under the risk-neutral measure by learning the model parameters with Gaussian process regression. The calibration is done by maximizing the likelihood of zero-coupon bond log prices, using mean and covariance functions computed analytically, as well as likelihood derivatives with respect to the parameters. The maximization uses the conjugate gradient method. The only prices needed for calibration are zero-coupon bond prices, and the parameters are obtained directly in the arbitrage-free risk-neutral measure.
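For reference, the analytic Vasicek zero-coupon bond log price that enters such a likelihood is shown below; the Gaussian-process mean/covariance functions and their derivatives used in the actual calibration are beyond this sketch, and the parameter values in the example are arbitrary.

```python
import numpy as np

def vasicek_zcb_log_price(r_t, tau, a, b, sigma):
    """Log price of a zero-coupon bond with time to maturity tau under the
    Vasicek model dr = a (b - r) dt + sigma dW (risk-neutral parameters):
        log P(t, T) = log A(tau) - B(tau) * r_t
    """
    B = (1.0 - np.exp(-a * tau)) / a
    log_A = ((B - tau) * (a ** 2 * b - sigma ** 2 / 2.0) / a ** 2
             - sigma ** 2 * B ** 2 / (4.0 * a))
    return log_A - B * r_t

# Example: log price of a 5-year bond at a short rate of 2%.
print(vasicek_zcb_log_price(r_t=0.02, tau=5.0, a=0.5, b=0.03, sigma=0.01))
```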
120.
It is well known that when there is a break in the variance (unconditional heteroskedasticity) of the error term in a linear regression model, a routine application of the Lagrange multiplier (LM) test for autocorrelation can suffer potentially significant size distortions. We propose a new test for autocorrelation that is robust in the presence of a break in variance. The proposed test is a modified LM test based on a generalized least squares regression. Monte Carlo simulations show that the new test performs well in finite samples: it is comparable to existing heteroskedasticity-robust tests in terms of size and much better in terms of power.
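A simplified illustration of the underlying idea (not the paper's exact statistic): rescale OLS residuals by segment-specific variance estimates before running a Breusch-Godfrey-type auxiliary regression. The known break point, the lag order, and the assumption that X already contains a constant column are all choices made for this sketch.

```python
import numpy as np
import statsmodels.api as sm

def lm_autocorr_variance_robust(y, X, break_idx, lags=1):
    """LM-type test for autocorrelation with a known variance break:
    residuals are rescaled within each variance regime (a GLS-style
    correction) before the auxiliary regression.  X is assumed to
    already include a constant column."""
    n = len(y)
    e = sm.OLS(y, X).fit().resid.copy()
    # Rescale residuals within each variance regime.
    for seg in (slice(0, break_idx), slice(break_idx, n)):
        e[seg] = e[seg] / e[seg].std(ddof=0)
    # Auxiliary regression of rescaled residuals on X and lagged residuals.
    lagged = np.column_stack(
        [np.concatenate([np.zeros(j), e[:-j]]) for j in range(1, lags + 1)]
    )
    aux = sm.OLS(e, np.column_stack([X, lagged])).fit()
    return n * aux.rsquared                    # asymptotically chi2(lags)
```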