91.
Ranked set sampling is a sampling approach that leads to improved statistical inference in situations where the units to be sampled can be ranked relative to each other prior to formal measurement. This ranking may be done either by subjective judgment or according to an auxiliary variable, and it need not be completely accurate. In fact, results in the literature have shown that no matter how poor the quality of the ranking, procedures based on ranked set sampling tend to be at least as efficient as procedures based on simple random sampling. However, efforts to quantify the gains in efficiency for ranked set sampling procedures have been hampered by a shortage of available models for imperfect rankings. In this paper, we introduce a new class of models for imperfect rankings, and we provide a rigorous proof that essentially any reasonable model for imperfect rankings is a limit of models in this class. We then describe a specific, easily applied method for selecting an appropriate imperfect rankings model from the class.
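The basic ranked-set sampling scheme described above can be sketched as follows. This is a minimal illustration, not the paper's imperfect-rankings model: the function name, parameters, and the use of a `rank_key` callable (standing in for judgment or auxiliary-variable ranking) are all assumptions for the example.

```python
import random

def ranked_set_sample(population, m, rank_key=None, seed=None):
    """Draw one ranked-set sample of size m.

    For each of m candidate sets, draw m units at random, rank them by
    rank_key (a stand-in for judgment or auxiliary-variable ranking),
    and formally "measure" only the i-th ranked unit of the i-th set.
    """
    rng = random.Random(seed)
    rank_key = rank_key or (lambda x: x)
    sample = []
    for i in range(m):
        candidates = rng.sample(population, m)  # one candidate set
        candidates.sort(key=rank_key)           # rank within the set
        sample.append(candidates[i])            # keep the i-th order statistic
    return sample
```

With a perfect `rank_key`, each measured unit is an order statistic of its own set, which is the source of the efficiency gain over simple random sampling; an imperfect `rank_key` degrades but, as the abstract notes, does not reverse that gain.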
92.
93.
Finding optimal, or at least good, maintenance and repair policies is crucial in reliability engineering. Likewise, describing life phases of human mortality is important when determining social policy or insurance premiums. In these tasks, one searches for distributions to fit data and then makes inferences about the population(s). In the present paper, we focus on bathtub‐type distributions and provide a view of certain problems, methods and solutions, and a few challenges, that can be encountered in reliability engineering, survival analysis, demography and actuarial science.
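A bathtub-type hazard, as referenced above, combines a decreasing early-life term, a constant random-failure term, and an increasing wear-out term. The sketch below is a generic illustration of that shape; the parameter names and the particular additive form are assumptions for the example, not a distribution from the paper.

```python
def bathtub_hazard(t, a=2.0, b=0.5, c=0.02, k=3.0):
    """Toy bathtub-shaped hazard rate h(t) for t > 0."""
    infant = a * t ** (b - 1)        # b < 1: decreasing early-life hazard
    random_failures = c              # constant useful-life hazard
    wearout = k * t ** 2 / 1000.0    # increasing wear-out hazard
    return infant + random_failures + wearout
```

The hazard is high for small `t`, dips through a flat useful-life phase, then rises again, which is the "bathtub" profile that motivates the distributions surveyed in the paper.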
94.
We consider the problem of density estimation when the data is in the form of a continuous stream with no fixed length. In this setting, implementations of the usual methods of density estimation such as kernel density estimation are problematic. We propose a method of density estimation for massive datasets that is based upon taking the derivative of a smooth curve that has been fit through a set of quantile estimates. To achieve this, a low-storage, single-pass, sequential method is proposed for simultaneous estimation of multiple quantiles for massive datasets that form the basis of this method of density estimation. For comparison, we also consider a sequential kernel density estimator. The proposed methods are shown through simulation study to perform well and to have several distinct advantages over existing methods.
95.
In conjunction with TIMET at Waunarlwydd (Swansea, UK) a model has been developed that will optimise the scheduling of various blooms to their eight furnaces so as to minimise the time taken to roll these blooms into the finished mill products. This production scheduling model requires reliable data on times taken for the various furnaces that heat the slabs and blooms to reach the temperatures required for rolling. These times to temperature are stochastic in nature and this paper identifies the distributional form for these times using the generalised F distribution as a modelling framework. The times to temperature were found to be similarly distributed over all furnaces. The identified distributional forms were incorporated into the scheduling model to optimise a particular campaign that was run at TIMET Swansea. Amongst other conclusions it was found that, compared to the actual campaign, the model produced a schedule that reduced the makespan by some 35%.
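The makespan objective mentioned above can be made concrete with a toy list-scheduling sketch: given heating times for a sequence of blooms, assign each to the furnace that frees up earliest and report when the last job finishes. This is a generic heuristic for illustration only, not the paper's optimisation model, and all names here are assumptions.

```python
import heapq

def list_schedule_makespan(heat_times, n_furnaces):
    """Assign each job to the earliest-free furnace; return the makespan."""
    free_at = [0.0] * n_furnaces  # time each furnace next becomes free
    heapq.heapify(free_at)
    makespan = 0.0
    for t in heat_times:
        start = heapq.heappop(free_at)  # earliest-available furnace
        end = start + t
        makespan = max(makespan, end)
        heapq.heappush(free_at, end)
    return makespan
```

In the paper's setting the heating times are random draws from the fitted generalised F distributions, so the makespan itself becomes a random quantity to be minimised in expectation.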
96.
97.
98.
It is well known that the unimodal maximum likelihood estimator of a density is consistent everywhere but at the mode. The authors review various ways to solve this problem and propose a new estimator that is concave over an interval containing the mode; this interval may be chosen by the user or through an algorithm. The authors show how to implement their solution and compare it to other approaches through simulations. They show that the new estimator is consistent everywhere and determine its rate of convergence in the Hellinger metric.
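The building block behind shape-constrained MLEs of this kind is the Grenander estimator for a decreasing density on (0, ∞): the slopes of the least concave majorant of the empirical CDF. The sketch below computes it with an upper-convex-hull pass; it illustrates the related decreasing-density case, not the authors' new concave-near-the-mode estimator, and the function name is an assumption.

```python
def grenander_density(data):
    """Grenander decreasing-density MLE for positive data.

    Returns (breaks, slopes): a piecewise-constant density equal to
    slopes[j] on the interval (breaks[j], breaks[j+1]].
    """
    xs = sorted(data)
    n = len(xs)
    # ECDF knots, anchored at the origin: (0, 0), (x_(i), i/n).
    pts = [(0.0, 0.0)] + [(x, (i + 1) / n) for i, x in enumerate(xs)]
    hull = []  # upper (concave) hull of the ECDF knots
    for p in pts:
        while len(hull) >= 2:
            (x1, y1), (x2, y2) = hull[-2], hull[-1]
            # Pop the last knot while the turn is not strictly concave.
            if (x2 - x1) * (p[1] - y1) >= (p[0] - x1) * (y2 - y1):
                hull.pop()
            else:
                break
        hull.append(p)
    breaks = [x for x, _ in hull]
    slopes = [(hull[i + 1][1] - hull[i][1]) / (hull[i + 1][0] - hull[i][0])
              for i in range(len(hull) - 1)]
    return breaks, slopes
```

Because the majorant is concave, the slopes are non-increasing, so the estimate is a valid decreasing density; the inconsistency of such shape-constrained MLEs at the mode (here, at 0) is exactly the problem the abstract's new estimator addresses.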
99.
Most discussions of popular music on the Internet focus on the utopian potential of new file-sharing technologies, yet applications that reproduce existing inequalities also deserve attention. Webcasting is the streaming (transmission) of digital video to multiple recipients in cyberspace. Paul McCartney's Webcast from the Cavern was a landmark case. It was a digital package staged for reproduction, and yet it felt live; in this article I explore why and offer a two-part explanation. The first part is that "live"-ness is based on an increasingly false opposition to recording, but, because that opposition still remains, Little Big Gig could seem live by adopting some trappings associated with it. The second part is that Internet use is mediated by daily life and computer users brought their own desires to the Webcast, in particular desires to see a Beatle play in the Cavern. Webcasting is an unanticipated use of the Internet that is being used to support corporate interests. With its widespread publicity, Little Big Gig helped naturalize that process.
100.
To investigate the variability in energy output from a network of photovoltaic cells, solar radiation was recorded at 10 sites every 10 min in the Pentland Hills to the south of Edinburgh. We identify spatiotemporal auto-regressive moving average models as the most appropriate to address this problem. Although previously considered computationally prohibitive to work with, we show that by approximating using toroidal space and fitting by matching auto-correlations, calculations can be substantially reduced. We find that a first-order spatiotemporal auto-regressive (STAR(1)) process with a first-order neighbourhood structure and a Matérn noise process provide an adequate fit to the data, and we demonstrate its use in simulating realizations of energy output.
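A STAR(1) process with a first-order neighbourhood can be simulated with a few lines of code. The sketch below uses a ring of sites (a one-dimensional toroidal layout) and plain Gaussian white noise as a stand-in for the Matérn noise process; the function name, parameters, and coefficient values are assumptions for the example, not the paper's fitted model.

```python
import random

def simulate_star1(n_sites, T, phi_self=0.5, phi_nbr=0.2, sigma=1.0, seed=0):
    """Simulate a STAR(1) process on a ring of n_sites sites.

    Each site's value depends on its own previous value and on the mean
    of its two neighbours' previous values, plus Gaussian noise.
    Returns a list of T + 1 snapshots (including the zero initial state).
    """
    rng = random.Random(seed)
    x = [0.0] * n_sites
    series = [x[:]]
    for _ in range(T):
        new = []
        for i in range(n_sites):
            # x[i - 1] wraps around in Python, giving the toroidal edge.
            nbr_mean = 0.5 * (x[i - 1] + x[(i + 1) % n_sites])
            new.append(phi_self * x[i] + phi_nbr * nbr_mean
                       + rng.gauss(0.0, sigma))
        x = new
        series.append(x[:])
    return series
```

With `phi_self + phi_nbr < 1` the process is stable, so simulated realizations like these can be used, as in the paper, to study variability in aggregate energy output across the network.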