991.
The widespread application of big data has had a profound impact on society and is also an important driver of change in government governance: big data will spur innovation in governance concepts, governance systems and governance methods. The goal of government governance is to use big data to build a law-based, innovative, clean and service-oriented government. In applying big data to drive governance innovation, the government should optimize its governance structure through data sharing, adjust governance relationships by making administrative work information-based, reshape government processes through technology-enabled administrative platforms, draw on big data to enhance its capacity for innovation, and raise the level of law-based administration by putting data applications on a legal footing. The government must therefore actively adapt to the broad trend toward information disclosure and data sharing, use it to drive the reform and innovation of government governance, and further improve its governance capacity.
992.
The development of social science has passed through three stages, from analysis to synthesis and then to complexity science, accompanied by a continuous evolution of scientific methodology from reductionism to holism and on to complex adaptive systems theory. Complex adaptive systems theory brings back into scientific view the diversity, disorder and individuality that classical scientific rationality had simplified away, deepens social scientists' understanding of the nature of their research objects, and points the way toward innovation and breakthroughs in the methodological system of social science. Yet, lacking the data and technical means that complex-adaptive-systems research requires, its actual achievements have fallen far short of expectations. The unprecedented wealth of data and advanced data-processing technology of the big data era open a new path for empirical social research. Computational social science has already formed three methodological pillars, namely big data acquisition and analysis, multi-agent social simulation, and Internet-based social science experiments, adding entirely new content to social surveys, statistical analysis and social experiments. The technical realization of complex adaptive systems research is becoming possible, and social science stands on the eve of a breakthrough. In the new wave of the information technology revolution, China has joined the world's leading ranks, which creates highly favourable conditions for Chinese social science to "overtake on the curve". The Chinese social science community should keenly perceive and seize this major opportunity and actively promote the development of computational social science so that it plays an important role in reshaping the international academic discourse system.
993.
As a profound social transformation, big data has had an enormous impact. The prominence of correlation and messiness, together with massive information and diversified modes of exchange, has displaced causal reasoning, weakened precision, objectivity and the uniqueness of truth, changed people's modes of cognition and interaction, and eroded identification with the mainstream ideology. Moreover, the application of big data has promoted the openness of the Internet and the growth of self-media of every kind, providing greater space for the development of and competition among diverse intellectual currents, cultures and ideologies, and posing a serious challenge to China's mainstream ideology and its security. It is therefore an essential task to study the effects of big data carefully, strengthen ideological awareness, and use big data, emerging media and the Internet to advance the building of China's mainstream ideology: integrating and guiding diverse currents, cultures and ideologies with Marxist ideology and the core socialist values, enhancing the appeal and guiding role of the mainstream ideology, and providing a firm ideological foundation and political guarantee for the socialist cause in Xi Jinping's new era.
994.
995.
Two-phase case–control studies cope with the problem of confounding by obtaining required additional information for a subset (phase 2) of all individuals (phase 1). Nowadays, studies with rich phase 1 data are available in which only a few unmeasured confounders need to be obtained in phase 2. The extended conditional maximum likelihood (ECML) approach in two-phase logistic regression is a novel method for analysing such data. Alternatively, two-phase case–control studies can be analysed by multiple imputation (MI), where the phase 2 information for individuals included only in phase 1 is treated as missing. We conducted a simulation of two-phase studies in which we compared the performance of ECML and MI in typical scenarios with rich phase 1 data. Regarding the exposure effect, MI was less biased and more precise than ECML. Furthermore, ECML was sensitive to misspecification of the participation model. We therefore recommend MI for analysing two-phase case–control studies in situations with rich phase 1 data.
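A minimal illustration of the MI strategy described above, assuming simulated data and a hypothetical participation model (no variable names or settings come from the paper): the phase-2 confounder is treated as missing outside the subsample, imputed repeatedly from a simple regression model, and the exposure estimates are pooled with Rubin's rules.

```python
# Minimal sketch (not the authors' code): multiple imputation for a two-phase
# case-control study where a confounder z is measured only in the phase-2 subset.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 5000

# Simulate phase-1 data: exposure x, confounder z, binary outcome y.
z = rng.normal(size=n)                       # confounder, later partly unobserved
x = rng.binomial(1, 1 / (1 + np.exp(-z)))    # exposure associated with z
lin = -2.0 + 1.0 * x + 0.8 * z
y = rng.binomial(1, 1 / (1 + np.exp(-lin)))

# Phase 2: z observed only for a case-enriched subsample (hypothetical design).
p_sel = np.where(y == 1, 0.8, 0.2)
in_phase2 = rng.binomial(1, p_sel).astype(bool)
z_obs = np.where(in_phase2, z, np.nan)

# Multiple imputation of z from (x, y), then pooling by Rubin's rules.
M = 20
obs = ~np.isnan(z_obs)
X_imp = sm.add_constant(np.column_stack([x, y]))
imp_fit = sm.OLS(z_obs[obs], X_imp[obs]).fit()       # imputation model
sigma = np.sqrt(imp_fit.scale)

betas, variances = [], []
for _ in range(M):
    z_fill = z_obs.copy()
    mu = X_imp[~obs] @ imp_fit.params
    z_fill[~obs] = mu + sigma * rng.normal(size=(~obs).sum())  # draw missing z
    design = sm.add_constant(np.column_stack([x, z_fill]))
    fit = sm.Logit(y, design).fit(disp=False)                  # analysis model
    betas.append(fit.params[1])                                # exposure effect
    variances.append(fit.cov_params()[1, 1])

betas, variances = np.array(betas), np.array(variances)
pooled = betas.mean()
total_var = variances.mean() + (1 + 1 / M) * betas.var(ddof=1)  # Rubin's rules
print(f"pooled exposure log-OR: {pooled:.3f} (SE {np.sqrt(total_var):.3f})")
```

A fully proper MI would also draw the imputation-model parameters from their approximate posterior; the loop above keeps them fixed for brevity.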
996.
We often rely on the likelihood to obtain estimates of regression parameters, but it is not readily available for generalized linear mixed models (GLMMs). Inference for the regression coefficients and the covariance parameters is key in these models. We present alternative approaches for analyzing binary data from a hierarchical structure that do not rely on any distributional assumptions: a generalized quasi-likelihood (GQL) approach and a generalized method of moments (GMM) approach. These are alternatives to the usual maximum-likelihood approximation approach in the Statistical Analysis System (SAS), such as the Laplace approximation (LAP). We examined and compared the performance of the GQL and GMM approaches with multiple random effects to that of the LAP approach as implemented in PROC GLIMMIX, SAS. The GQL approach tends to produce unbiased estimates, whereas the LAP approach can lead to highly biased estimates in certain scenarios. The GQL approach produces more accurate estimates of both the regression coefficients and the covariance parameters, with smaller standard errors, than the GMM approach. We found that both the GQL and GMM approaches are less likely to result in non-convergence than the LAP approach. A simulation study was conducted and a numerical example is presented for illustrative purposes.
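The GQL and GMM estimators of the paper are not available in standard Python libraries. Under that caveat, the sketch below only illustrates the general idea of moment-based estimation for clustered binary data, using a GEE fit in statsmodels on simulated random-intercept data; GEE is a different estimator and targets marginal rather than subject-specific effects.

```python
# Minimal sketch: clustered binary data with a random intercept, analysed with a
# moment-based (GEE) estimator rather than a likelihood approximation. This is
# not the authors' GQL/GMM implementation, only an illustration of the idea.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n_clusters, m = 100, 10                      # 100 clusters of size 10

cluster = np.repeat(np.arange(n_clusters), m)
u = rng.normal(scale=1.0, size=n_clusters)   # cluster random intercepts
x = rng.normal(size=n_clusters * m)
eta = -0.5 + 1.0 * x + u[cluster]
y = rng.binomial(1, 1 / (1 + np.exp(-eta)))

df = pd.DataFrame({"y": y, "x": x, "cluster": cluster})

# GEE with an exchangeable working correlation: coefficients are estimated from
# moment conditions, with no distributional assumption on the random effects.
# Note that GEE estimates marginal effects, which are attenuated relative to the
# conditional (subject-specific) coefficients of a GLMM.
gee = smf.gee("y ~ x", groups="cluster", data=df,
              family=sm.families.Binomial(),
              cov_struct=sm.cov_struct.Exchangeable())
res = gee.fit()
print(res.summary())
```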
997.
This paper investigates quantile residual life regression based on semi-competing risks data. Because the terminal event time dependently censors the non-terminal event time, inference on the non-terminal event time is not available without additional assumptions. We therefore assume that the non-terminal and terminal event times follow an Archimedean copula, and we apply the inverse probability weighting technique to construct an estimating equation for the quantile residual life regression coefficients. The estimating equation, however, may not be continuous in the coefficients, so we apply the generalized solution approach to overcome this problem. Since the variance of the proposed estimator is difficult to estimate directly, we use the bootstrap resampling method. Simulations show that the proposed method performs well. Finally, we analyze the Bone Marrow Transplant data for illustration.
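As a hedged illustration of the data structure assumed above, the sketch below simulates semi-competing risks times whose dependence follows a Clayton copula, one member of the Archimedean family; the exponential margins, rates and censoring distribution are arbitrary choices, not taken from the paper.

```python
# Minimal sketch: simulate semi-competing risks data where the non-terminal and
# terminal event times are linked by a Clayton copula (an Archimedean copula).
import numpy as np

rng = np.random.default_rng(2)
n, theta = 2000, 2.0            # Clayton parameter; Kendall's tau = theta/(theta+2)

# Conditional inversion sampling from the Clayton copula C(u1, u2).
u1 = rng.uniform(size=n)
v = rng.uniform(size=n)
u2 = ((v ** (-theta / (theta + 1)) - 1.0) * u1 ** (-theta) + 1.0) ** (-1.0 / theta)

# Exponential margins (rates are arbitrary, for illustration only).
t_nonterminal = -np.log(1 - u1) / 0.5                 # e.g. disease progression
t_terminal = -np.log(1 - u2) / 0.3                    # e.g. death
censor = rng.exponential(scale=8.0, size=n)           # independent censoring

# Semi-competing structure: the terminal event censors the non-terminal one,
# but not vice versa.
x_obs = np.minimum.reduce([t_nonterminal, t_terminal, censor])
y_obs = np.minimum(t_terminal, censor)
delta_x = (t_nonterminal <= np.minimum(t_terminal, censor)).astype(int)
delta_y = (t_terminal <= censor).astype(int)

print(f"implied Kendall's tau: {theta / (theta + 2):.2f}")
print("observed non-terminal events:", delta_x.sum(),
      "| observed terminal events:", delta_y.sum())
```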
998.
The logratio methodology is not applicable when rounded zeros occur in compositional data. Many methods exist to deal with rounded zeros, but some are not suitable for data sets of high dimensionality. Related methods have recently been developed, yet they do not balance calculation time and accuracy well. As a further improvement, we propose a method based on regression imputation with Q-mode clustering. The method forms groups of parts and builds partial least squares regressions on these groups using centered logratio coordinates. We also prove that using centered logratio coordinates or isometric logratio coordinates as the response of the partial least squares regression gives equivalent results for the replacement of rounded zeros. A simulation study and a real-data example are conducted to analyze the performance of the proposed method. The results show that the proposed method reduces the calculation time in higher dimensions and improves the quality of the results.
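The Q-mode clustering and partial least squares steps are not reproduced here. The sketch below, on invented data with an invented detection limit, only shows the centered logratio coordinates the method works in, together with a simple multiplicative replacement of rounded zeros of the kind the proposed method is meant to improve on.

```python
# Minimal sketch: centered logratio (clr) coordinates and a simple multiplicative
# replacement of rounded zeros (a baseline, not the paper's PLS-based method).
import numpy as np

def clr(x):
    """Centered logratio coordinates of compositions given row-wise."""
    logx = np.log(x)
    return logx - logx.mean(axis=1, keepdims=True)

rng = np.random.default_rng(3)
comp = rng.dirichlet(alpha=[4, 3, 2, 1], size=6)      # toy 4-part compositions

# Rounded zeros: values below a detection limit are reported as 0.
detection_limit = 0.05
observed = np.where(comp < detection_limit, 0.0, comp)

# Multiplicative replacement: set zeros to 65% of the detection limit and
# rescale the remaining parts so each row still sums to one.
replaced = observed.copy()
zero_mask = observed == 0.0
replaced[zero_mask] = 0.65 * detection_limit
for i in range(replaced.shape[0]):
    nz = ~zero_mask[i]
    replaced[i, nz] *= (1.0 - replaced[i, ~nz].sum()) / replaced[i, nz].sum()

print("clr coordinates of the first composition:", np.round(clr(replaced)[0], 3))
print("row sums after replacement:", np.round(replaced.sum(axis=1), 6))
```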
999.
In this paper, we propose a new semiparametric heteroscedastic regression model that allows for positive and negative skewness and bimodal shapes, using a B-spline basis for nonlinear effects. The proposed distribution is embedded in the generalized additive models for location, scale and shape (GAMLSS) framework, so that any or all parameters of the distribution can be modeled by parametric linear and/or nonparametric smooth functions of explanatory variables. We motivate the new model by means of Monte Carlo simulations showing that ignoring the skewness and bimodality of the random errors in semiparametric regression models may bias the parameter estimates and/or the estimates of the associated variability measures. An iterative estimation process and some diagnostic methods are investigated. Applications to two real data sets are presented, and the method is compared with the usual regression methods.
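A minimal sketch of the B-spline ingredient only, assuming simulated data: a cubic B-spline basis for a nonlinear covariate effect, fitted by ordinary least squares through patsy and statsmodels. The paper's model additionally lets the scale and shape of the error distribution depend on covariates, which this sketch does not attempt.

```python
# Minimal sketch: a B-spline basis for a nonlinear covariate effect, fitted by
# ordinary least squares. The paper's model also models scale and shape
# (GAMLSS-style); this shows only the spline part of the mean.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(4)
n = 300
x = rng.uniform(0, 10, size=n)
y = np.sin(x) + 0.1 * x + rng.normal(scale=0.3, size=n)   # nonlinear signal
df = pd.DataFrame({"x": x, "y": y})

# patsy's bs() builds a cubic B-spline basis inside the formula.
fit = smf.ols("y ~ bs(x, df=6, degree=3)", data=df).fit()
print(f"R-squared: {fit.rsquared:.3f}")

# Fitted curve on a grid inside the observed range, for inspection or plotting.
grid = pd.DataFrame({"x": np.linspace(df["x"].min(), df["x"].max(), 50)})
print(np.asarray(fit.predict(grid))[:5])
```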
1000.
This article studies the computational problem of estimating the parameters of a linear mixed model for massive data. Our algorithms combine the factored spectrally transformed linear mixed model method with a sequential singular value decomposition algorithm. This combination overcomes the operational limitation of the method and makes the algorithm feasible for big data sets, especially when the data have a tall and thin design matrix. Our simulation studies show that the algorithms make linear mixed model computation feasible for massive data on an ordinary desktop while achieving the same estimation accuracy as the method based on the whole data.
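A minimal sketch of the sequential SVD ingredient, assuming a simulated tall-and-thin matrix with arbitrary dimensions and block size: row blocks are folded into a running QR factor whose singular values and right singular vectors match those of the full matrix, so the spectral decomposition never needs the whole matrix in memory at once. The mixed-model side of the paper's method is not reproduced.

```python
# Minimal sketch: sequential SVD for a tall, thin design matrix processed in row
# blocks. Only the singular values and right singular vectors are kept, which is
# what spectrally transformed linear mixed model computations need.
import numpy as np

def sequential_svd(blocks, p):
    """Accumulate an SVD of a tall-thin matrix delivered as an iterator of row blocks."""
    r = np.zeros((0, p))                     # running p-column summary
    for block in blocks:
        stacked = np.vstack([r, block])
        # The R factor of a QR decomposition has the same singular values and
        # right singular vectors as the stacked matrix itself.
        r = np.linalg.qr(stacked, mode="r")
    u_r, s, vt = np.linalg.svd(r, full_matrices=False)
    return s, vt

rng = np.random.default_rng(5)
n, p, block_size = 100_000, 20, 10_000
x = rng.normal(size=(n, p))                  # stand-in for a massive design matrix

blocks = (x[i:i + block_size] for i in range(0, n, block_size))
s_seq, vt_seq = sequential_svd(blocks, p)

# Check against the full in-memory SVD: the singular values agree.
s_full = np.linalg.svd(x, compute_uv=False)
print("max singular-value difference:", np.abs(s_seq - s_full).max())
```

In practice the blocks would be read from disk one at a time, so the memory footprint stays at one block plus a small p-by-p factor regardless of how many rows the data set has.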