51.
52.
Egmar Rödel 《Statistics》2013,47(4):573-585
Normed bivariate density functions were introduced by Hoeffding (1940/41). In the present paper, estimators for normed bivariate density functions are presented that are based on normed bivariate ranks and on a Fourier series expansion in Legendre polynomials. The estimation of normed bivariate density functions under positive dependence is also described.
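The listing does not reproduce the estimator itself, but the general recipe behind rank-based orthogonal-series density estimation is easy to sketch: map the observations to normalized ranks on the unit square, estimate Fourier coefficients with respect to shifted Legendre polynomials, and sum a truncated series. The Python sketch below is illustrative only; the truncation order `order` is an assumed tuning parameter, and the paper's treatment of positive dependence is not reproduced.

```python
import numpy as np
from scipy.special import eval_legendre
from scipy.stats import rankdata

def legendre_rank_density(x, y, order=4):
    """Illustrative orthogonal-series estimate of a normed bivariate density
    on the unit square, built from normalized ranks and shifted Legendre
    polynomials.  A sketch of the general technique, not Rödel's estimator."""
    n = len(x)
    u = rankdata(x) / (n + 1)          # normalized ranks in (0, 1)
    v = rankdata(y) / (n + 1)

    # Orthonormal shifted Legendre basis on [0, 1]: phi_j(t) = sqrt(2j+1) P_j(2t-1)
    def phi(j, t):
        return np.sqrt(2 * j + 1) * eval_legendre(j, 2 * t - 1)

    # Sample Fourier coefficients c_{jk} = (1/n) * sum_i phi_j(u_i) phi_k(v_i)
    c = np.array([[np.mean(phi(j, u) * phi(k, v)) for k in range(order + 1)]
                  for j in range(order + 1)])

    # Truncated series estimate h(u, v) = sum_{j,k} c_{jk} phi_j(u) phi_k(v)
    def h(uu, vv):
        return sum(c[j, k] * phi(j, uu) * phi(k, vv)
                   for j in range(order + 1) for k in range(order + 1))
    return h
```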
53.
This article develops a new cumulative sum (CUSUM) statistic to identify aberrant behavior in a sequentially administered multiple-choice standardized examination. The examination responses can be described as finite Poisson trials, and the statistic can be used for other applications that fit this framework. The standardized-examination setting uses a maximum likelihood estimate of examinee ability and an item response theory (IRT) model. Aberrant and non-aberrant probabilities enter through an odds ratio, analogous to risk-adjusted CUSUM schemes. The significance level of a hypothesis test, where the null hypothesis is non-aberrant examinee behavior, is computed with Markov chains; a smoothing process is used to spread probabilities across the Markov states. The practicality of the approach for detecting aberrant examinee behavior is demonstrated with results from both simulated and empirical data.
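The exact statistic is not given in the abstract, but a generic risk-adjusted CUSUM over Bernoulli (finite Poisson) trials conveys the mechanics: each response contributes the log odds ratio between an "aberrant" alternative and the non-aberrant IRT-based model, and the cumulative sum is clipped at zero. In the sketch below, `odds_ratio` and the threshold `h` are illustrative placeholders, and the Markov-chain significance computation described in the article is not reproduced.

```python
import math

def risk_adjusted_cusum(responses, p_model, odds_ratio=2.0, h=3.0):
    """Toy risk-adjusted CUSUM for a sequence of 0/1 item responses.

    responses  : list of 0/1 outcomes (correct/incorrect)
    p_model    : model probabilities of a correct response under non-aberrant
                 behavior (e.g. from an IRT model at the estimated ability)
    odds_ratio : odds multiplier defining the aberrant alternative (placeholder)
    h          : decision threshold (placeholder)

    Returns the CUSUM path and the first index at which it exceeds h, if any.
    """
    w, path, signal = 0.0, [], None
    for t, (x, p0) in enumerate(zip(responses, p_model)):
        # Correct-response probability under the aberrant alternative,
        # obtained by multiplying the non-aberrant odds by `odds_ratio`.
        p1 = odds_ratio * p0 / (1.0 - p0 + odds_ratio * p0)
        # Log-likelihood-ratio increment for this Bernoulli trial.
        llr = math.log(p1 / p0) if x == 1 else math.log((1 - p1) / (1 - p0))
        w = max(0.0, w + llr)
        path.append(w)
        if signal is None and w > h:
            signal = t
    return path, signal
```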
54.
The cost and duration of many industrial experiments can be reduced by using supersaturated designs, which screen out the important factors from a large set of potentially active variables. A supersaturated design is a design with fewer runs than effects to be estimated. Although construction methods for supersaturated designs have been studied extensively, their analysis methods are still at an early research stage. In this article, we propose a method for analyzing data using a correlation-based measure called symmetrical uncertainty. This measure comes from information theory and underlies variable-selection algorithms developed in data mining; here it is used from a different viewpoint in order to identify the important factors more directly. The method enables supersaturated designs to be used for analyzing data from generalized linear models with a Bernoulli response. We evaluate the method on existing supersaturated designs obtained by the methods of Tang and Wu (1997) and Koukouvinos et al. (2008). The comparison is performed through simulation experiments in which Type I and Type II error rates are calculated, and Receiver Operating Characteristic (ROC) curves are applied as an additional statistical tool for performance evaluation.
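Symmetrical uncertainty itself has a simple closed form, SU(X, Y) = 2 I(X; Y) / (H(X) + H(Y)), ranging from 0 (independence) to 1. A plain empirical computation for a design column against a Bernoulli response might look like the sketch below; this is only the measure, not the authors' full screening procedure.

```python
import numpy as np

def entropy(labels):
    """Empirical Shannon entropy (in bits) of a discrete sample."""
    _, counts = np.unique(np.asarray(labels), return_counts=True)
    p = counts / counts.sum()
    return -np.sum(p * np.log2(p))

def symmetrical_uncertainty(x, y):
    """SU(X, Y) = 2 * I(X; Y) / (H(X) + H(Y)), estimated from paired samples."""
    hx, hy = entropy(x), entropy(y)
    # Joint entropy H(X, Y) from the observed pairs.
    pairs = np.stack([np.asarray(x), np.asarray(y)], axis=1)
    _, counts = np.unique(pairs, axis=0, return_counts=True)
    p = counts / counts.sum()
    hxy = -np.sum(p * np.log2(p))
    mutual_info = hx + hy - hxy            # I(X; Y)
    return 2.0 * mutual_info / (hx + hy) if hx + hy > 0 else 0.0
```

Columns with SU near 1 would be flagged as active; in practice the cut-off has to be calibrated, which is what the article's simulations, error rates, and ROC curves address.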
55.
We demonstrate a multidimensional approach for combining several indicators of well-being, including the traditional money-income indicators. This methodology avoids the difficult and much-criticized task of computing imputed incomes for indicators such as net worth and schooling. Inequality in the proposed composite measures is computed using relative inequality indexes that permit simple analysis both of the contribution of each welfare indicator (and its factor components) and of the within- and between-group components of total inequality when the population is grouped by income level, age, gender, or any other criterion. The analysis is performed on U.S. data from the Michigan Survey of Income Dynamics.
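As one concrete example of a relative inequality index that decomposes by welfare component, the Gini coefficient can be written as G = 2 cov(y, F(y)) / mean(y), and the "natural" contribution of component k is 2 cov(y_k, F(y)) / mean(y) (the Lerman-Yitzhaki covariance decomposition). The equal-weight composite in the sketch below is an illustrative assumption, not the weighting used in the article.

```python
import numpy as np

def gini_with_contributions(components):
    """Gini of a composite welfare measure plus the 'natural' contribution
    of each component (Lerman-Yitzhaki covariance decomposition).

    components : 2-D array, shape (n_people, n_indicators); the composite is
                 the row sum (equal weights, an illustrative assumption).
    """
    comp = np.asarray(components, dtype=float)
    y = comp.sum(axis=1)                       # composite welfare measure
    n = len(y)
    ranks = np.argsort(np.argsort(y)) + 1      # ranks of the composite
    f = (ranks - 0.5) / n                      # empirical CDF positions F(y)

    def two_cov(a):                            # 2 * cov(a, F(y)) / mean(y)
        return 2.0 * np.cov(a, f, bias=True)[0, 1] / y.mean()

    gini = two_cov(y)
    contributions = {k: two_cov(comp[:, k]) for k in range(comp.shape[1])}
    return gini, contributions                 # contributions sum to gini
```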
56.
In this article, statistical inference is viewed as information processing involving input information and output information. After introducing information measures for the input and output information, an information criterion functional is formulated and optimized to obtain an optimal information processing rule (IPR). For the particular information measures and criterion functional adopted, it is shown that Bayes's theorem is the optimal IPR. This optimal IPR is 100% efficient in the sense that its use leads to the output information being exactly equal to the given input information. The analysis also links Bayes's theorem to maximum-entropy considerations.
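The criterion functional is not displayed in the abstract. In Zellner's well-known formalization of this setup, which this abstract appears to describe (details may differ from the article), output information is measured from a post-data density g(θ|y) and the marginal density h(y), and input information from the prior π(θ) and the likelihood f(y|θ); the difference is then minimized over g:

\[
\Delta[g] \;=\; \underbrace{\int g(\theta\mid y)\,\ln g(\theta\mid y)\,d\theta \;+\; \ln h(y)}_{\text{output information}}
\;-\;
\underbrace{\int g(\theta\mid y)\,\ln\!\big[\pi(\theta)\,f(y\mid\theta)\big]\,d\theta}_{\text{input information}} .
\]

Since \(\Delta[g]\) equals the Kullback-Leibler divergence of \(g\) from \(\pi(\theta)f(y\mid\theta)/h(y)\), it is minimized, with \(\Delta = 0\), exactly when \(g\) is the Bayes posterior; this is the "100% efficiency" statement that output information equals input information.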
57.
Group testing procedures, in which groups containing several units are tested without testing each unit individually, are widely used as cost-effective procedures for estimating the proportion of defective units in a population. A problem arises when these procedures are applied to the detection of genetically modified organisms (GMOs), because the analytical instrument for detecting GMOs has a detection threshold. If the group size (the number of units within a group) is large, GMOs in a group may go undetected because of dilution even if the group contains one unit of GMOs. Most practitioners therefore use a small group size (which we call the conventional group size) so that the presence of defective units is reliably detected whenever at least one unit of GMOs is included in the group. However, we show that the proportion of defective units can be estimated for any group size even when a detection threshold exists; the estimate is easily obtained using functions implemented in a spreadsheet. We then show that the conventional group size is not always optimal for controlling the consumer's risk, because such a group size requires a larger number of groups for testing.
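For reference, the baseline (threshold-free) estimator behind such schemes is the standard group-testing MLE: if d of n groups of size k test positive, then p̂ = 1 − (1 − d/n)^(1/k). The sketch below implements only this baseline; the article's adjustment for the instrument's detection threshold is not reproduced.

```python
def estimate_defective_proportion(n_groups, n_positive_groups, group_size):
    """Standard group-testing MLE of the proportion p of defective units:

        p_hat = 1 - (1 - d/n) ** (1/k)

    for d positive groups among n groups of k units each.  Assumes a group
    tests positive iff it contains at least one defective unit, i.e. it
    ignores any detection threshold, unlike the article's method."""
    if not 0 <= n_positive_groups <= n_groups:
        raise ValueError("positive groups must be between 0 and n_groups")
    negative_fraction = 1.0 - n_positive_groups / n_groups
    return 1.0 - negative_fraction ** (1.0 / group_size)

# Example: 3 positive groups out of 50 groups of 10 units each.
# estimate_defective_proportion(50, 3, 10)  ->  about 0.0062
```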
58.
59.
We present two new estimators for estimating the entropy of absolutely continuous random variables. Some of their properties are considered; in particular, the consistency of the first estimator is proved. The introduced estimators are compared with existing entropy estimators. We also propose two new tests for normality based on the introduced entropy estimators and compare their power with that of other tests for normality. The results show that the proposed estimators and test statistics perform very well in estimating entropy and testing normality. A real example is presented and analyzed.
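The two new estimators are not specified in the abstract; for orientation, the classical spacing-based estimator of Vasicek (1976), which entropy-based normality tests of this kind typically build on, is sketched below. The default window m = round(sqrt(n)) is an illustrative choice.

```python
import numpy as np

def vasicek_entropy(sample, m=None):
    """Vasicek's (1976) spacing-based estimator of differential entropy:

        H_{m,n} = (1/n) * sum_i log( n/(2m) * (X_(i+m) - X_(i-m)) )

    where X_(.) are order statistics, out-of-range indices are clamped to
    the sample extremes, and m is the window size.
    """
    x = np.sort(np.asarray(sample, dtype=float))
    n = len(x)
    if m is None:
        m = max(1, int(round(np.sqrt(n))))   # illustrative default window
    upper = x[np.minimum(np.arange(n) + m, n - 1)]   # X_(i+m), clamped
    lower = x[np.maximum(np.arange(n) - m, 0)]       # X_(i-m), clamped
    spacings = np.maximum(upper - lower, 1e-12)      # guard against ties
    return np.mean(np.log(n / (2.0 * m) * spacings))
```

A Vasicek-type normality test then rejects normality when exp(H_{m,n}) / s, with s the sample standard deviation, falls well below sqrt(2*pi*e), the value attained by the normal distribution, which maximizes entropy for a given variance.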
60.
We provide a fractional generalization of Poisson renewal processes by replacing the first-order time derivative in the relaxation equation of the survival probability with a fractional derivative of order α (0 < α ≤ 1). A generalized Laplacian model associated with the Mittag-Leffler distribution is examined. We also discuss some properties of this new model and its relevance to time series. The distribution of gliding sums, regression behavior, and sample path properties are studied. Finally, we introduce the q-Mittag-Leffler process associated with the q-Mittag-Leffler distribution.
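For context, the fractional relaxation equation referred to, written here with a Caputo-type derivative (an assumption about the exact operator used), and its Mittag-Leffler solution are

\[
D_t^{\alpha}\,\psi(t) = -\lambda\,\psi(t), \qquad \psi(0)=1,\quad 0<\alpha\le 1,
\qquad\Longrightarrow\qquad
\psi(t) = E_{\alpha}(-\lambda t^{\alpha}),
\qquad
E_{\alpha}(z)=\sum_{k=0}^{\infty}\frac{z^{k}}{\Gamma(\alpha k+1)}.
\]

For α = 1 this reduces to ψ(t) = e^{-λt}, recovering the exponential waiting times of the ordinary Poisson renewal process.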