991.
Mini-batch algorithms have become increasingly popular due to the need to solve optimization problems based on large-scale data sets. Using an existing online expectation–maximization (EM) algorithm framework, we demonstrate how mini-batch (MB) algorithms may be constructed, and propose a scheme for the stochastic stabilization of the constructed mini-batch algorithms. Theoretical results regarding the convergence of the mini-batch EM algorithms are presented. We then demonstrate how the mini-batch framework may be applied to conduct maximum likelihood (ML) estimation of mixtures of exponential family distributions, with emphasis on ML estimation for mixtures of normal distributions. Via a simulation study, we demonstrate that the mini-batch algorithm for mixtures of normal distributions can outperform the standard EM algorithm. Further evidence of the performance of the mini-batch framework is provided via an application to the famous MNIST data set.
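As a rough illustration of the idea behind mini-batch EM, the following sketch fits a two-component univariate normal mixture by blending each mini-batch's sufficient statistics into running averages with a decaying step size. This is a minimal stochastic-approximation-style sketch, not the authors' algorithm; the data, batch size, and step-size schedule are all invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated data from a two-component normal mixture (illustrative only):
# weight 0.4 on N(-2, 1), weight 0.6 on N(3, 1.5^2).
n = 5000
z = rng.random(n) < 0.4
x = np.where(z, rng.normal(-2.0, 1.0, n), rng.normal(3.0, 1.5, n))

# Parameters: mixing weight of component 0, component means, variances.
p, mu, var = 0.5, np.array([-1.0, 1.0]), np.array([1.0, 1.0])

def e_step(xb, p, mu, var):
    # Component responsibilities for a mini-batch xb.
    dens = np.exp(-0.5 * (xb[:, None] - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)
    w = dens * np.array([p, 1 - p])
    return w / w.sum(axis=1, keepdims=True)

# Mini-batch EM: update running sufficient statistics with step size t^-0.6.
s0 = np.array([0.5, 0.5]); s1 = mu.copy(); s2 = var + mu ** 2
for t in range(1, 201):
    xb = rng.choice(x, size=100, replace=False)
    r = e_step(xb, p, mu, var)
    gamma = t ** -0.6                                  # decaying step size
    s0 = (1 - gamma) * s0 + gamma * r.mean(axis=0)
    s1 = (1 - gamma) * s1 + gamma * (r * xb[:, None]).mean(axis=0)
    s2 = (1 - gamma) * s2 + gamma * (r * xb[:, None] ** 2).mean(axis=0)
    p = s0[0]
    mu = s1 / s0
    var = np.maximum(s2 / s0 - mu ** 2, 1e-6)

print(p, mu, var)
```

With well-separated components the running statistics settle near the simulating values after a few hundred small batches, which is the behavior the abstract's stabilization scheme is designed to guarantee more generally.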
992.
This research develops a model of relationships among the components of Total-JIT, including JIT-information, JIT-manufacturing, JIT-purchasing, and JIT-selling, to establish an implementation hierarchy based on relative importance. The data collected relate to the relationships among JIT components and two performance measures, supply chain competency and organizational performance. Two groups are used in the research: one group of five operations management academics and another of 30 practicing operations managers working in U.S. manufacturing firms. An interpretive structural modelling methodology is used to develop alternative structural models. The academics’ data show JIT-information emerging as the lynchpin of relationships, directly impacting all other JIT practices and both performance measures. The practitioners’ data indicate that all JIT practices and performance measures are interactive as components and outcomes. This study is the first to apply interpretive structural modelling to investigate the interplay among Total-JIT components and the performance measures of supply chain competency and organizational performance.
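Interpretive structural modelling derives its hierarchy from a reachability matrix, i.e., the transitive closure of a binary direct-influence matrix. A minimal sketch of that closure step, using a hypothetical influence matrix for the four JIT components (the entries are invented, not the study's data):

```python
import numpy as np

# Hypothetical direct-influence matrix for four JIT components:
# rows/cols = information, manufacturing, purchasing, selling.
# A[i, j] = True means component i directly influences component j.
A = np.array([
    [1, 1, 1, 1],   # information influences all practices
    [0, 1, 0, 1],   # manufacturing influences selling
    [0, 1, 1, 0],   # purchasing influences manufacturing
    [0, 0, 0, 1],
], dtype=bool)

# Reachability matrix: transitive closure via Warshall's algorithm.
R = A.copy()
n = R.shape[0]
for k in range(n):
    R |= np.outer(R[:, k], R[k, :])

print(R.astype(int))
```

In the closure, purchasing reaches selling indirectly through manufacturing; partitioning rows of `R` by their reachability and antecedent sets is what produces the ISM level hierarchy.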
993.
This paper analyses how network embeddedness affects the performance of exploration and exploitation R&D projects. By developing joint projects, partners and projects are linked to one another and form a network that generates social capital. We examine how a project's location in this network, which determines its access to information and knowledge, affects project performance. We consider this question for both exploration and exploitation projects, using a database built from an EU framework programme. We find that each of the structural embeddedness dimensions (degree, betweenness, and eigenvector centrality) has a different impact on exploration and exploitation project performance. Our empirical analysis contributes to the project management literature and social capital theory by including the effect that the acquisition of external knowledge has on project performance.
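Two of the structural embeddedness dimensions named above can be computed directly from an adjacency matrix. A minimal sketch on a hypothetical five-node partner network (betweenness centrality is omitted here, since it requires shortest-path enumeration):

```python
import numpy as np

# Adjacency matrix of a small, hypothetical collaboration network:
# nodes are project partners, edges are joint projects.
A = np.array([
    [0, 1, 1, 1, 0],
    [1, 0, 1, 0, 0],
    [1, 1, 0, 0, 0],
    [1, 0, 0, 0, 1],
    [0, 0, 0, 1, 0],
], dtype=float)
n = A.shape[0]

# Degree centrality: share of possible ties each node realizes.
degree = A.sum(axis=1) / (n - 1)

# Eigenvector centrality via power iteration: a node is central
# if it is connected to other central nodes.
v = np.ones(n)
for _ in range(200):
    v = A @ v
    v = v / np.linalg.norm(v)

print(degree)
print(v)
```

Node 0 scores highest on both measures here, but the two need not agree in general, which is why the paper can find different performance effects for each dimension.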
994.
Public Organization Review - The purpose of this study is to explore how servant leadership affects public sector employee engagement, organisational ethical climate, and public sector reform, of...
995.
Leisure has become an indispensable part of modern life and work. As a special social group in the period of economic transition, the leisure life of new-generation female migrant workers deserves attention. Taking Dingxi, Gansu Province as a case, this study compares the leisure lives of new-generation female migrant workers in the region with those of male migrant workers and urban young women of the same age across three dimensions: leisure time, leisure consumption, and leisure cognition. It concludes that new-generation female migrant workers in poor areas suffer a severe shortage of leisure time, and that their main forms of leisure consumption remain at a passive level. Compared with the previous generation of migrant workers, they have come to recognize the importance of leisure, but constrained by closed leisure spaces and tight finances, their leisure motivation remains at the first two (basic) levels of Maslow's hierarchy of five needs. The article proposes corresponding countermeasures and suggestions in response.
996.
This article describes how a frequentist model averaging approach can be used for concentration–QT analyses in the context of thorough QTc studies. Based on simulations, we have concluded that, starting from three candidate model families (linear, exponential, and Emax), the model averaging approach leads to treatment effect estimates that are quite robust with respect to control of the type I error in nearly all simulated scenarios; in particular, with the model averaging approach, the type I error appears less sensitive to model misspecification than with the widely used linear model. We also noticed few differences in performance between the model averaging approach and the more classical model selection approach, but we believe that, although both can be recommended in practice, the model averaging approach can be more appealing because of some deficiencies of the model selection approach pointed out in the literature. We think that a model averaging or model selection approach should be systematically considered when conducting concentration–QT analyses. Copyright © 2016 John Wiley & Sons, Ltd.
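One common frequentist way to combine candidate models is via Akaike weights. The abstract does not specify the weighting scheme actually used, so the following is only a hedged sketch, with invented AIC values and per-model effect estimates:

```python
import numpy as np

# Hypothetical per-model results from a concentration-QT analysis:
# (AIC, estimated QTc effect in ms at a reference concentration).
# All numbers are made up for illustration.
models = {
    "linear":      (412.3, 7.1),
    "exponential": (410.8, 6.4),
    "emax":        (415.0, 5.9),
}
aic = np.array([v[0] for v in models.values()])
effect = np.array([v[1] for v in models.values()])

# Akaike weights: proportional to exp(-0.5 * delta_AIC).
delta = aic - aic.min()
w = np.exp(-0.5 * delta)
w = w / w.sum()

# Model-averaged treatment effect estimate.
avg_effect = float(w @ effect)
print(w, avg_effect)
```

The averaged estimate lies between the per-model estimates and leans toward the best-fitting family, which is why averaging tends to be more robust to misspecification than committing to a single selected model.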
997.
998.
Data envelopment analysis (DEA) and free disposal hull (FDH) estimators are widely used to estimate efficiency of production. Practitioners use DEA estimators far more frequently than FDH estimators, implicitly assuming that production sets are convex. Moreover, use of the constant returns to scale (CRS) version of the DEA estimator requires an assumption of CRS. Although bootstrap methods have been developed for making inference about the efficiencies of individual units, until now no methods have existed for making consistent inference about differences in mean efficiency across groups of producers, or for testing hypotheses about model structure such as returns to scale or convexity of the production set. We use central limit theorem results from our previous work to develop additional theoretical results permitting consistent tests of model structure, and provide Monte Carlo evidence on the performance of the tests in terms of size and power. In addition, the variable returns to scale version of the DEA estimator is proved to attain the faster convergence rate of the CRS-DEA estimator under CRS. Using a sample of U.S. commercial banks, we test and reject convexity of the production set, calling into question results from numerous banking studies that have imposed convexity assumptions. Supplementary materials for this article are available online.
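The FDH estimator mentioned above is simple to compute precisely because it drops the convexity assumption: each unit is compared only against observed units that produce at least as much output. A minimal sketch of the input-oriented FDH score on invented data (DEA, by contrast, would require solving a linear program per unit):

```python
import numpy as np

def fdh_input_efficiency(X, Y):
    """Input-oriented FDH efficiency scores.

    X: (n, p) inputs, Y: (n, q) outputs. For each unit i, the score is the
    smallest proportional scaling of its inputs such that some observed
    unit produces at least as much output with no more (scaled) input.
    """
    n = X.shape[0]
    theta = np.empty(n)
    for i in range(n):
        dominates = (Y >= Y[i]).all(axis=1)          # peers with >= output
        ratios = (X[dominates] / X[i]).max(axis=1)   # input scaling per peer
        theta[i] = ratios.min()
    return theta

# Tiny illustrative data set: one input, one output, three units.
X = np.array([[2.0], [4.0], [3.0]])
Y = np.array([[1.0], [1.0], [2.0]])
theta = fdh_input_efficiency(X, Y)
print(theta)
```

Here the second unit uses twice the input of the first for the same output, so its score is 0.5; the other two units lie on the (non-convex) FDH frontier with score 1.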
999.
This paper introduces a finite mixture of canonical fundamental skew \(t\) (CFUST) distributions for a model-based approach to clustering where the clusters are asymmetric and possibly long-tailed (in: Lee and McLachlan, arXiv:1401.8182 [statME], 2014b). The family of CFUST distributions includes the restricted multivariate skew \(t\) and unrestricted multivariate skew \(t\) distributions as special cases. In recent years, a few versions of the multivariate skew \(t\) (MST) mixture model have been put forward, together with various EM-type algorithms for parameter estimation. These formulations adopted either a restricted or unrestricted characterization for their MST densities. In this paper, we examine a natural generalization of these developments, employing the CFUST distribution as the parametric family for the component distributions, and point out that the restricted and unrestricted characterizations can be unified under this general formulation. We show that an exact implementation of the EM algorithm can be achieved for the CFUST distribution and mixtures of this distribution, and present some new analytical results for a conditional expectation involved in the E-step.
1000.
Accelerated failure time (AFT) models have proved useful in many contexts, though heavy censoring (as, for example, in cancer survival) and high dimensionality (as, for example, in microarray data) cause difficulties for model fitting and model selection. We propose new approaches to variable selection for censored data, based on AFT models optimized using regularized weighted least squares. The regularized technique uses a mixture of \(\ell _1\) and \(\ell _2\) norm penalties under two proposed elastic net type approaches: one is the adaptive elastic net and the other is the weighted elastic net. These extend the approaches originally proposed by Ghosh (Adaptive elastic net: an improvement of elastic net to achieve oracle properties, Technical Report, 2007) and Hong and Zhang (Math Model Nat Phenom 5(3):115–133, 2010), respectively. We further extend both approaches by adding censored observations as constraints in their model optimization frameworks. The approaches are evaluated on microarray data and by simulation. We compare their performance with that of six other variable selection techniques: three generally used for censored data and three correlation-based greedy methods used for high-dimensional data.
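The elastic net penalty underlying both proposed approaches combines \(\ell _1\) and \(\ell _2\) terms and can be minimized by proximal gradient descent with soft-thresholding. The sketch below implements only the plain (unweighted, uncensored) elastic net on simulated data, not the adaptive or weighted variants with censoring constraints described above:

```python
import numpy as np

def elastic_net(X, y, lam1=0.1, lam2=0.1, n_iter=2000):
    """Elastic-net least squares via proximal gradient (ISTA):
    minimize (0.5/n)||y - Xb||^2 + lam1*||b||_1 + (lam2/2)*||b||^2.
    """
    n, p = X.shape
    b = np.zeros(p)
    # Step size from the Lipschitz constant of the smooth part.
    L = np.linalg.norm(X, 2) ** 2 / n + lam2
    step = 1.0 / L
    for _ in range(n_iter):
        grad = X.T @ (X @ b - y) / n + lam2 * b
        z = b - step * grad
        b = np.sign(z) * np.maximum(np.abs(z) - step * lam1, 0.0)  # soft-threshold
    return b

# Simulated sparse regression problem (illustrative only).
rng = np.random.default_rng(1)
n, p = 200, 10
X = rng.normal(size=(n, p))
beta = np.zeros(p); beta[:3] = [3.0, -2.0, 1.5]   # sparse ground truth
y = X @ beta + 0.1 * rng.normal(size=n)
b = elastic_net(X, y, lam1=0.05, lam2=0.01)
print(np.round(b, 2))
```

The \(\ell _1\) term zeroes out the irrelevant coefficients while the \(\ell _2\) term stabilizes correlated predictors; the adaptive and weighted variants in the paper reweight these penalties per coefficient.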