72.
The importance of client beliefs in career counseling depends on their ability to add unique information about the client over and above that contributed by aptitudes and interests. The Career Beliefs Inventory was administered to 200 Australian students in grade 10, together with measures of Holland's RIASEC interest themes and scholastic aptitudes. The correlations between scales from the three domains showed clearly that beliefs made a contribution distinct from that provided by aptitudes and interests. Although the results may partly reflect sampling or method variance, career beliefs in this sample added unique information to that traditionally used in career counseling.
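A small illustration of the "over and above" logic described above, using invented data rather than the study's: beliefs add unique information if a regression that includes a belief score explains a criterion better than one using aptitude and interest scores alone (an incremental-validity check). All variable names and coefficients below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 200
aptitude = rng.normal(size=n)
interest = rng.normal(size=n)
beliefs = 0.3 * aptitude + rng.normal(size=n)          # partly overlapping, partly unique
criterion = 0.5 * aptitude + 0.4 * interest + 0.3 * beliefs + rng.normal(size=n)

def r_squared(predictors, y):
    """R^2 of an ordinary least-squares fit with an intercept."""
    X = np.column_stack([np.ones(len(y))] + list(predictors))
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1 - resid.var() / y.var()

base = r_squared([aptitude, interest], criterion)
full = r_squared([aptitude, interest, beliefs], criterion)
print(f"R^2 without beliefs: {base:.3f}, with beliefs: {full:.3f}, gain: {full - base:.3f}")
```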
73.
A theory of policy differentiation in single issue electoral politics (total citations: 1; self-citations: 0; citations by others: 1)
Voter preferences are characterized by a parameter s (say, income) distributed on a set S according to a probability measure F. There is a single issue (say, a tax rate) whose level, b, is to be politically decided. There are two parties, each of which is a perfect agent of some constituency of voters, namely voters with a given value of s. An equilibrium of the electoral game is a pair of policies, b_1 and b_2, proposed by the two parties, such that b_i maximizes the expected utility of the voters whom party i represents, given the policy proposed by the opposition. Under reasonable assumptions, the unique electoral equilibrium consists in both parties proposing the favorite policy of the median voter. What theory can explain why, historically, we observe electoral equilibria where the 'right' and 'left' parties propose different policies? Uncertainty concerning the distribution of voters is introduced. Let {F(t)}_{t ∈ T} be a class of probability measures on S; all voters and parties share a common prior that the distribution of t is described by a probability measure H on T. If H has finite support, there is in general no electoral equilibrium. However, if H is continuous, then an electoral equilibrium generally exists, and in equilibrium the parties propose different policies. Convergence of equilibrium to median-voter politics is proved as uncertainty about the distribution of voter traits becomes small.
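The convergence claim in the abstract can be checked numerically in a toy setting. The sketch below is not the paper's model: it assumes single-peaked quadratic preferences, a known (simulated) trait distribution, majority rule with ties broken by a fair coin, and arbitrary constituency ideal points of 0.2 and 0.8. It verifies that both parties proposing the median voter's favourite policy is an equilibrium, while a differentiated pair of proposals is not.

```python
import numpy as np

rng = np.random.default_rng(0)
ideal_points = rng.beta(2, 2, size=10001)   # stand-in for the trait distribution F
m = np.median(ideal_points)                 # the median voter's favourite policy

IDEAL = {1: 0.2, 2: 0.8}                    # each party's constituency ideal (arbitrary)

def utility(party, policy):
    return -(policy - IDEAL[party]) ** 2    # single-peaked quadratic preferences

def payoff(party, b1, b2):
    """Expected constituency utility: the proposal closer to the median wins;
    an exact tie is decided by a fair coin."""
    d1, d2 = abs(b1 - m), abs(b2 - m)
    if np.isclose(d1, d2):
        return 0.5 * utility(party, b1) + 0.5 * utility(party, b2)
    return utility(party, b1) if d1 < d2 else utility(party, b2)

def is_equilibrium(b1, b2, grid):
    """No party can strictly gain by a unilateral deviation on the policy grid."""
    no_dev_1 = all(payoff(1, b, b2) <= payoff(1, b1, b2) + 1e-12 for b in grid)
    no_dev_2 = all(payoff(2, b1, b) <= payoff(2, b1, b2) + 1e-12 for b in grid)
    return no_dev_1 and no_dev_2

grid = np.union1d(np.linspace(0, 1, 401), [m])
print("both parties at the median:", is_equilibrium(m, m, grid))        # True
print("differentiated pair (0.3, 0.7):", is_equilibrium(0.3, 0.7, grid))  # False
```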
74.
The latest advances in artificial intelligence software (neural networking) have finally made it possible for qualitative researchers to apply the grounded theory method to the study of complex quantitative databases in a manner consistent with the postpositivistic, neopragmatic assumptions of most symbolic interactionists. The strength of neural networking for the study of quantitative data is twofold: it blurs the boundaries between qualitative and quantitative analysis, and it allows grounded theorists to embrace the complexity of quantitative data. The specific technique most useful to grounded theory is the Self-Organizing Map (SOM). To demonstrate the utility of the SOM, we (1) provide a brief review of grounded theory, focusing on how it was originally intended as a comparative method applicable to both quantitative and qualitative data; (2) examine how the SOM is compatible with the traditional techniques of grounded theory; and (3) demonstrate how the SOM assists grounded theory by applying it to an example based on our research.
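As a rough indication of how a Self-Organizing Map works, the following is a minimal, self-contained sketch (not the authors' implementation, and far simpler than production SOM libraries): a small two-dimensional grid of prototype vectors is trained on invented numeric data so that similar cases land on nearby map cells, which is the kind of structure a grounded theorist would then inspect and interpret.

```python
import numpy as np

def train_som(data, rows=6, cols=6, epochs=30, lr0=0.5, sigma0=2.0, seed=0):
    """Train a rows x cols SOM on data of shape (n_cases, n_variables)."""
    rng = np.random.default_rng(seed)
    n, dim = data.shape
    weights = rng.normal(size=(rows, cols, dim))          # prototype vectors
    grid = np.stack(np.meshgrid(np.arange(rows), np.arange(cols), indexing="ij"), axis=-1)
    steps, t = epochs * n, 0
    for _ in range(epochs):
        for x in data[rng.permutation(n)]:
            # best matching unit: the prototype closest to this case
            dists = np.linalg.norm(weights - x, axis=-1)
            bmu = np.unravel_index(np.argmin(dists), (rows, cols))
            # learning rate and neighbourhood radius decay over training
            frac = t / steps
            lr = lr0 * (1 - frac)
            sigma = sigma0 * (1 - frac) + 0.5
            # pull the BMU and its grid neighbours toward the case
            grid_d2 = ((grid - np.array(bmu)) ** 2).sum(axis=-1)
            h = np.exp(-grid_d2 / (2 * sigma ** 2))[..., None]
            weights += lr * h * (x - weights)
            t += 1
    return weights

def map_cases(data, weights):
    """Assign each case to its best matching map cell."""
    d = np.linalg.norm(weights[None] - data[:, None, None, :], axis=-1)
    flat = d.reshape(len(data), -1).argmin(axis=1)
    return np.column_stack(np.unravel_index(flat, weights.shape[:2]))

# toy usage: two loose clusters of "respondents" in a 5-variable space
rng = np.random.default_rng(1)
data = np.vstack([rng.normal(0, 1, (100, 5)), rng.normal(3, 1, (100, 5))])
som = train_som(data)
cells = map_cases(data, som)
print(cells[:5], cells[-5:])   # cases from each cluster occupy different map regions
```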
75.
Despite a high prevalence of poverty among minorities in nonmetropolitan areas, research and policy concerns regarding poverty have continued to focus on metropolitan minorities. This study uses a model integrating individual, household, and structural factors to examine poverty among Latinos, blacks, and Anglos in nonmetropolitan and, for comparative purposes, metropolitan areas, using data from the 1985 Special Texas Census (TDHS 1987). The findings show that minorities in nonmetropolitan areas tend to have the highest poverty rates. In addition, both consistent and divergent patterns exist among the six ethnic-residence groups with respect to the relationships among the various individual, household, and structural factors and poverty.
76.
We present a case study, based on a depression study, that illustrates the use of Bayesian statistics in the economic evaluation of cost-effectiveness data, demonstrates the benefits of the Bayesian approach (whilst honestly recognizing any deficiencies) with respect to frequentist methods, and provides details of using the methods, including computer code where appropriate. Copyright © 2003 John Wiley & Sons, Ltd.
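A hedged sketch of the kind of Bayesian cost-effectiveness calculation the abstract refers to; the trial arms, priors and numbers below are invented, not taken from the paper. Posterior draws for the incremental effect and incremental cost feed a cost-effectiveness acceptability curve, i.e. the probability that the new treatment is cost-effective at each willingness-to-pay threshold.

```python
import numpy as np

rng = np.random.default_rng(42)

# invented summary data: (mean, sd, n) of effects and costs per arm
effect = {"new": (0.62, 0.30, 110), "usual": (0.48, 0.32, 108)}        # e.g. QALYs
cost = {"new": (1450.0, 900.0, 110), "usual": (1100.0, 850.0, 108)}    # e.g. GBP

def posterior_mean_draws(mean, sd, n, size=20000):
    """Approximate posterior for a group mean under a vague prior:
    Normal(mean, sd/sqrt(n)) -- a deliberately simple large-sample stand-in."""
    return rng.normal(mean, sd / np.sqrt(n), size)

d_effect = posterior_mean_draws(*effect["new"]) - posterior_mean_draws(*effect["usual"])
d_cost = posterior_mean_draws(*cost["new"]) - posterior_mean_draws(*cost["usual"])

for wtp in (1000, 5000, 10000, 20000, 30000):      # willingness to pay per unit effect
    p_ce = np.mean(wtp * d_effect - d_cost > 0)    # prob. the new treatment is cost-effective
    print(f"WTP {wtp:>6}: P(cost-effective) = {p_ce:.2f}")
```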
78.
Finding optimal, or at least good, maintenance and repair policies is crucial in reliability engineering. Likewise, describing the life phases of human mortality is important when determining social policy or insurance premiums. In these tasks, one searches for distributions to fit data and then makes inferences about the population(s). In the present paper, we focus on bathtub-type distributions and provide a view of certain problems, methods and solutions, and a few challenges that can be encountered in reliability engineering, survival analysis, demography and actuarial science.
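For readers unfamiliar with the term, the sketch below (parameters invented, not from the paper) builds a bathtub-shaped hazard additively from a decreasing Weibull hazard (early failures), a constant hazard (random failures) and an increasing Weibull hazard (wear-out). Because a sum of hazards is the hazard of a series system of independent components, the same decomposition also gives a direct way to simulate lifetimes.

```python
import numpy as np

# invented parameters
a, b_early = 0.02, 0.5     # decreasing component: hazard a*b*t**(b-1) with b < 1
lam = 0.01                 # constant component
c, b_wear = 1e-4, 3.0      # increasing component with b > 1

def hazard(t):
    return a * b_early * t ** (b_early - 1) + lam + c * b_wear * t ** (b_wear - 1)

t = np.linspace(0.01, 20, 500)
h = hazard(t)
print("hazard minimum (useful-life phase) near t =", round(float(t[np.argmin(h)]), 2))

# simulate lifetimes as the minimum of the three independent components;
# each Weibull is drawn by inverting its cumulative hazard at an Exp(1) variate
rng = np.random.default_rng(0)
n = 100_000
early = (rng.exponential(1, n) / a) ** (1 / b_early)
const = rng.exponential(1 / lam, n)
wear = (rng.exponential(1, n) / c) ** (1 / b_wear)
lifetime = np.minimum.reduce([early, const, wear])
print("mean simulated lifetime:", round(float(lifetime.mean()), 2))
```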
79.
We consider the problem of density estimation when the data are in the form of a continuous stream with no fixed length. In this setting, implementations of the usual methods of density estimation, such as kernel density estimation, are problematic. We propose a method of density estimation for massive datasets that is based upon taking the derivative of a smooth curve fit through a set of quantile estimates. To achieve this, a low-storage, single-pass, sequential method is proposed for the simultaneous estimation of multiple quantiles in massive datasets; these quantile estimates form the basis of the density estimation method. For comparison, we also consider a sequential kernel density estimator. The proposed methods are shown through simulation study to perform well and to have several distinct advantages over existing methods.
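The sketch below is in the spirit of the abstract but is not the authors' algorithm: it tracks a bank of quantiles over a stream in a single pass with constant memory per quantile, using a simple Robbins-Monro style update, and then recovers a rough density as a finite-difference derivative of the estimated quantile function. The step-size rule and the probability grid are arbitrary choices.

```python
import numpy as np

class StreamingQuantiles:
    def __init__(self, probs):
        self.probs = np.asarray(probs)
        self.q = None          # current quantile estimates
        self.n = 0

    def update(self, x):
        self.n += 1
        if self.q is None:     # initialise every tracker at the first observation
            self.q = np.full(self.probs.shape, float(x))
            return
        step = 1.0 / np.sqrt(self.n)            # decaying step size (a simple choice)
        self.q += step * (self.probs - (x <= self.q))

    def density(self):
        """Finite-difference derivative of p -> Q(p), evaluated at midpoints."""
        q = np.sort(self.q)                     # enforce monotonicity before differencing
        mid = 0.5 * (q[1:] + q[:-1])
        dens = np.diff(self.probs) / np.maximum(np.diff(q), 1e-12)
        return mid, dens

# usage on a simulated stream: standard normal data, never stored
rng = np.random.default_rng(0)
sq = StreamingQuantiles(np.linspace(0.05, 0.95, 19))
for _ in range(100_000):
    sq.update(rng.normal())

mid, dens = sq.density()
print(np.round(mid[::4], 2))     # grid points
print(np.round(dens[::4], 2))    # rough density values, peaking near 0
```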
80.
In conjunction with TIMET at Waunarlwydd (Swansea, UK), a model has been developed that will optimise the scheduling of various blooms to their eight furnaces so as to minimise the time taken to roll these blooms into the finished mill products. This production scheduling model requires reliable data on the times taken for the various furnaces that heat the slabs and blooms to reach the temperatures required for rolling. These times to temperature are stochastic in nature, and this paper identifies the distributional form for these times using the generalised F distribution as a modelling framework. The times to temperature were found to be similarly distributed over all furnaces. The identified distributional forms were incorporated into the scheduling model to optimise a particular campaign that was run at TIMET Swansea. Amongst other conclusions, it was found that, compared to the actual campaign, the model produced a schedule that reduced the makespan by some 35%.
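The following sketch illustrates the two ingredients the abstract describes, with invented data and none of TIMET's numbers: fitting a heavy-tailed positive distribution to observed times-to-temperature (here a log-logistic, one special case of the generalised F family named in the abstract, via scipy.stats.fisk), and feeding sampled times into a toy Monte Carlo makespan calculation for a candidate assignment of blooms to furnaces.

```python
import numpy as np
from scipy import stats

# invented historical times-to-temperature (minutes) for one furnace type
observed = stats.fisk.rvs(3.0, scale=95.0, size=300, random_state=7)

# fit the log-logistic; loc is fixed at 0 because the times are strictly positive
c, loc, scale = stats.fisk.fit(observed, floc=0)
print(f"fitted shape = {c:.2f}, scale = {scale:.1f}")

def expected_makespan(assignment, n_furnaces=8, n_reps=2000, seed=11):
    """Monte Carlo makespan for a bloom-to-furnace assignment: each furnace
    heats its blooms one after another, and the makespan is the time at
    which the last furnace finishes."""
    assignment = np.asarray(assignment)
    times = stats.fisk.rvs(c, loc=0, scale=scale,
                           size=(n_reps, len(assignment)), random_state=seed)
    makespans = [
        np.bincount(assignment, weights=times[r], minlength=n_furnaces).max()
        for r in range(n_reps)
    ]
    return float(np.mean(makespans))

blooms = 40
round_robin = [i % 8 for i in range(blooms)]   # a naive assignment for comparison
print("expected makespan (round-robin):", round(expected_makespan(round_robin), 1))
```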