921.
Shadow Insurance     
Life insurers use reinsurance to move liabilities from the regulated and rated companies that sell policies to shadow reinsurers, which are less regulated and unrated off-balance-sheet entities within the same insurance group. U.S. life insurance and annuity liabilities ceded to shadow reinsurers grew from $11 billion in 2002 to $364 billion in 2012. Life insurers using shadow insurance, which account for half of the market share, ceded 25 cents of every dollar insured to shadow reinsurers in 2012, up from 2 cents in 2002. By relaxing capital requirements, shadow insurance could reduce the marginal cost of issuing policies and thereby improve retail market efficiency. However, it could also reduce risk-based capital and increase expected loss for the industry. We model and quantify these effects based on publicly available data and plausible assumptions.
922.
This paper makes the following original contributions to the literature. (i) We develop a simpler analytical characterization and numerical algorithm for Bayesian inference in structural vector autoregressions (VARs) that can be used for models that are overidentified, just-identified, or underidentified. (ii) We analyze the asymptotic properties of Bayesian inference and show that in the underidentified case, the asymptotic posterior distribution of contemporaneous coefficients in an n-variable VAR is confined to the set of values that orthogonalize the population variance–covariance matrix of ordinary least squares residuals, with the height of the posterior proportional to the height of the prior at any point within that set. For example, in a bivariate VAR for supply and demand identified solely by sign restrictions, if the population correlation between the VAR residuals is positive, then even with an infinite sample of data, any inference about the demand elasticity comes exclusively from the prior distribution. (iii) We provide analytical characterizations of the informative prior distributions for impulse-response functions that are implicit in the traditional sign-restriction approach to VARs, and we note, as a special case of result (ii), that the influence of these priors does not vanish asymptotically. (iv) We illustrate how Bayesian inference with informative priors can be both a strict generalization and an unambiguous improvement over frequentist inference in just-identified models. (v) We propose that researchers need to explicitly acknowledge and defend the role of prior beliefs in influencing structural conclusions, and we illustrate how this could be done using a simple model of the U.S. labor market.
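For intuition, the unidentified-set phenomenon in (ii) can be sketched numerically for a bivariate supply-and-demand VAR identified only by sign restrictions. Everything here is hypothetical (the residual correlation of 0.5, the variable ordering, and the sign conventions are ours, not the authors'):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical population covariance of the VAR residuals (price, quantity),
# with the positive residual correlation discussed in the abstract.
Sigma = np.array([[1.0, 0.5],
                  [0.5, 1.0]])
P = np.linalg.cholesky(Sigma)

# Traditional sign-restriction approach: draw Haar-distributed orthogonal Q
# and keep impact matrices A = P @ Q whose shocks satisfy the restrictions.
elasticities = []
for _ in range(20000):
    M = rng.standard_normal((2, 2))
    Q, _ = np.linalg.qr(M)
    A = P @ Q
    for j in (0, 1):             # shocks are identified only up to sign:
        if A[1, j] < 0:          # normalize quantity responses to be positive
            A[:, j] = -A[:, j]
    # Column 0 = supply shock (price down, quantity up),
    # column 1 = demand shock (price up, quantity up).
    if A[0, 0] < 0 < A[0, 1]:
        # Demand elasticity traced out by the supply shock.
        elasticities.append(A[1, 0] / A[0, 0])

# The accepted elasticities are all negative but otherwise unbounded:
# sign restrictions alone never pin the elasticity down, so any sharper
# posterior statement must come from the prior.
print(len(elasticities) > 0, max(elasticities) < 0)
```

In this toy setup the identified set for the demand elasticity is the entire negative half-line, which is exactly why inference within the set is prior-driven.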
923.
This study uses a service operations management (SOM) strategy lens to investigate chain store retailers' strategic design responsiveness (SDR)—a term that captures the degree to which retailers dynamically coordinate investments in human and structural capital with the complexity of their service and product offerings. Labor force and physical capital are respectively used as proxies for investments in human capital and structural capital, whereas gross margins are proxies for product/service offering complexity. Consequently, SDR broadly reflects three salient complementary choices of SOM design strategy. We test the effects of “brick and mortar” chain store retailers' SDR on current and future firm performance using publicly available panel data collected from Compustat and the University of Michigan American Customer Satisfaction Index databases for the period 1996–2011. We find that retailers that fail to keep pace with investments in both structural and human capital exhibit short-term financial benefits, but have worse ongoing operational performance. These findings corroborate the importance of managers strategically maintaining the complementarity of design-related choices for improving and maintaining business performance.
924.
In this paper, we analyze the ethical issues of using honesty and integrity tests in employment screening. Our focus is on the United States context: legal requirements related to applicant privacy differ in other countries, but we posit that our proposed balancing test is broadly applicable. We start by discussing why companies have ethical and legal obligations, based on a stakeholder analysis, to assess the integrity of potential employees. We then move to a consideration of how companies currently use background checks as a pre-employment screening tool, noting their limitations. Next, we take up honesty and integrity testing, focusing particularly on the problems of false positives and due process. We offer a balancing test for the use of honesty and integrity testing that weighs three factors: (1) the potential harm posed by a dishonest employee in a particular job, (2) the linkage between the test and the assessment process, and (3) the accuracy and validity of the honesty and integrity test. We conclude with implications for practice and future research.
925.
926.
Mini-batch algorithms have become increasingly popular due to the need to solve optimization problems based on large-scale data sets. Using an existing online expectation–maximization (EM) algorithm framework, we demonstrate how mini-batch (MB) algorithms may be constructed, and propose a scheme for the stochastic stabilization of the constructed mini-batch algorithms. Theoretical results regarding the convergence of the mini-batch EM algorithms are presented. We then demonstrate how the mini-batch framework may be applied to conduct maximum likelihood (ML) estimation of mixtures of exponential family distributions, with emphasis on ML estimation for mixtures of normal distributions. Via a simulation study, we demonstrate that the mini-batch algorithm for mixtures of normal distributions can outperform the standard EM algorithm. Further evidence of the performance of the mini-batch framework is provided via an application to the well-known MNIST data set.
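The mini-batch EM idea can be sketched for a two-component normal mixture. This is an illustrative stochastic-approximation variant with made-up data and tuning constants, not the authors' exact online-EM construction or stabilization scheme:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy data: a two-component normal mixture in one dimension.
n = 20000
z = rng.random(n) < 0.4
x = np.where(z, rng.normal(-2.0, 1.0, n), rng.normal(3.0, 1.0, n))

# Mini-batch EM: stochastically update the expected sufficient statistics
# on small batches, then recover parameters via the usual M-step mapping.
K, batch, steps = 2, 256, 400
w = np.full(K, 1.0 / K)                 # mixing weights
mu = np.array([-1.0, 1.0])              # component means (crude start)
var = np.ones(K)                        # component variances
s0 = w.copy()                           # running E[responsibility]
s1 = w * mu                             # running E[responsibility * x]
s2 = w * (var + mu**2)                  # running E[responsibility * x^2]

for t in range(1, steps + 1):
    xb = rng.choice(x, batch, replace=False)
    # E-step on the mini-batch: posterior responsibilities.
    dens = np.exp(-0.5 * (xb[:, None] - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)
    r = w * dens
    r /= r.sum(axis=1, keepdims=True)
    gamma = 1.0 / (t + 1) ** 0.6        # Robbins–Monro step size
    # Stochastic-approximation update of the sufficient statistics.
    s0 += gamma * (r.mean(axis=0) - s0)
    s1 += gamma * ((r * xb[:, None]).mean(axis=0) - s1)
    s2 += gamma * ((r * xb[:, None] ** 2).mean(axis=0) - s2)
    # M-step: map statistics back to parameters.
    w = s0 / s0.sum()
    mu = s1 / s0
    var = np.maximum(s2 / s0 - mu**2, 1e-6)

print(np.round(np.sort(mu), 1))         # estimated means, near (-2, 3)
```

Each iteration touches only 256 observations, which is the source of the scalability advantage over full-data EM.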
927.
This paper analyses how network embeddedness affects the performance of exploration and exploitation R&D projects. By developing joint projects, partners and projects are linked to one another and form a network that generates social capital. We examine how a project's location in this network, which determines its access to information and knowledge, affects project performance. We consider this question in the setting of exploration and exploitation projects, using a database built from an EU framework. We find that each of the structural embeddedness dimensions (degree, betweenness, and eigenvector centrality) has a different impact on exploration and exploitation project performance. Our empirical analysis contributes to the project management literature and social capital theory by including the effect that the acquisition of external knowledge has on project performance.
928.
Public Organization Review - The purpose of this study is to explore how servant leadership affects public sector employee engagement, organisational ethical climate, and public sector reform, of...
929.
This article describes how a frequentist model averaging approach can be used for concentration–QT analyses in the context of thorough QTc studies. Based on simulations, we conclude that, starting from three candidate model families (linear, exponential, and Emax), the model averaging approach leads to treatment effect estimates that are quite robust with respect to control of the type I error in nearly all simulated scenarios; in particular, with the model averaging approach, the type I error appears less sensitive to model misspecification than with the widely used linear model. We also noticed few differences in performance between the model averaging approach and the more classical model selection approach, but we believe that, although both can be recommended in practice, the model averaging approach is more appealing because of deficiencies of the model selection approach pointed out in the literature. We think that a model averaging or model selection approach should be systematically considered when conducting concentration–QT analyses. Copyright © 2016 John Wiley & Sons, Ltd.
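A minimal sketch of one such frequentist model-averaging scheme, using Akaike (AIC) weights over the three candidate families on simulated concentration–ΔQTc data; the paper's actual weighting rule and estimands may differ:

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(2)

# Hypothetical concentration–ΔQTc data generated from an Emax relationship.
conc = rng.uniform(0, 10, 120)
dqtc = 8 * conc / (2 + conc) + rng.normal(0, 2, 120)

# Three candidate mean models, mirroring the three families in the abstract.
models = {
    "linear": (lambda c, a, b: a + b * c, [0.0, 1.0]),
    "emax": (lambda c, e0, emax, ec50: e0 + emax * c / (ec50 + c), [0.0, 5.0, 1.0]),
    "exponential": (lambda c, a, b, k: a + b * (1 - np.exp(-k * c)), [0.0, 5.0, 0.5]),
}

# Fit each model by nonlinear least squares, compute AIC, form Akaike weights.
n, cref = len(conc), 5.0            # cref: concentration at which to predict
aics, preds = {}, {}
for name, (f, p0) in models.items():
    p, _ = curve_fit(f, conc, dqtc, p0=p0, maxfev=20000)
    rss = np.sum((dqtc - f(conc, *p)) ** 2)
    k = len(p) + 1                  # +1 for the residual variance
    aics[name] = n * np.log(rss / n) + 2 * k
    preds[name] = float(f(cref, *p))

a = np.array(list(aics.values()))
wts = np.exp(-0.5 * (a - a.min()))
wts /= wts.sum()                    # Akaike weights over the candidate set
averaged = float(np.dot(wts, list(preds.values())))
print(round(averaged, 1))           # model-averaged ΔQTc prediction at cref
```

Averaging over all three fits, rather than committing to the single best-AIC model, is what buys the robustness to model misspecification discussed above.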
930.
Two new nonparametric common principal component model selection procedures based on bootstrap distributions of the vector correlations of all combinations of the eigenvectors from two groups are proposed. The performance of these methods is compared in a simulation study to the two parametric methods previously suggested by Flury in 1988, as well as modified versions of two nonparametric methods proposed by Klingenberg in 1996 and then by Klingenberg and McIntyre in 1998. The proposed bootstrap vector correlation distribution (BVD) method is shown to outperform all of the existing methods in most of the simulated situations considered.
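One ingredient of such a procedure can be sketched as follows: bootstrapping the absolute inner product ("correlation") between the leading sample eigenvectors of two groups. The data, the restriction to the leading eigenvector, and the use of a plain inner product are our simplifications for illustration, not the full BVD method, which compares all eigenvector combinations:

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical two groups sharing principal axes (the CPC hypothesis holds
# here by construction: both covariances are diagonal in the same basis).
A = rng.standard_normal((200, 3)) * np.array([3.0, 1.5, 0.5])
B = rng.standard_normal((200, 3)) * np.array([2.5, 1.0, 0.4])

def leading_eigvec(X):
    # Eigenvector of the sample covariance with the largest eigenvalue
    # (np.linalg.eigh returns eigenvalues in ascending order).
    vals, vecs = np.linalg.eigh(np.cov(X, rowvar=False))
    return vecs[:, -1]

# Bootstrap distribution of the absolute correlation between the two groups'
# leading eigenvectors: resample each group with replacement and recompute.
corrs = []
for _ in range(500):
    Ab = A[rng.integers(0, len(A), len(A))]
    Bb = B[rng.integers(0, len(B), len(B))]
    corrs.append(abs(leading_eigvec(Ab) @ leading_eigvec(Bb)))

print(round(float(np.mean(corrs)), 2))  # near 1 when the axes are shared
```

When the groups truly share principal axes, the bootstrap distribution concentrates near 1; departures from the CPC model pull it toward 0, which is what a selection procedure built on such distributions exploits.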