Full-text access type

| Access type | Articles |
|---|---|
| Paid full text | 15966 |
| Free | 461 |
| Free (domestic) | 207 |
Subject classification

| Subject | Articles |
|---|---|
| Management | 999 |
| Labor science | 2 |
| Ethnology | 202 |
| Talent studies | 1 |
| Demography | 189 |
| Book series and collected works | 1914 |
| Theory and methodology | 813 |
| General | 10596 |
| Sociology | 1397 |
| Statistics | 521 |
Publication year

| Year | Articles |
|---|---|
| 2024 | 33 |
| 2023 | 136 |
| 2022 | 155 |
| 2021 | 172 |
| 2020 | 240 |
| 2019 | 299 |
| 2018 | 288 |
| 2017 | 322 |
| 2016 | 306 |
| 2015 | 400 |
| 2014 | 808 |
| 2013 | 1113 |
| 2012 | 980 |
| 2011 | 1117 |
| 2010 | 874 |
| 2009 | 897 |
| 2008 | 1004 |
| 2007 | 1164 |
| 2006 | 1047 |
| 2005 | 1026 |
| 2004 | 996 |
| 2003 | 901 |
| 2002 | 791 |
| 2001 | 617 |
| 2000 | 365 |
| 1999 | 156 |
| 1998 | 78 |
| 1997 | 59 |
| 1996 | 46 |
| 1995 | 41 |
| 1994 | 32 |
| 1993 | 27 |
| 1992 | 28 |
| 1991 | 27 |
| 1990 | 19 |
| 1989 | 22 |
| 1988 | 12 |
| 1987 | 11 |
| 1986 | 3 |
| 1985 | 6 |
| 1984 | 2 |
| 1983 | 5 |
| 1982 | 2 |
| 1981 | 2 |
| 1979 | 1 |
| 1978 | 1 |
| 1975 | 3 |
61.
We propose an efficient group sequential monitoring rule for clinical trials. At each interim analysis, both efficacy and futility are evaluated through a specified loss structure together with the predicted power. The proposed design is robust to a wide range of priors and achieves the specified power with a smaller sample size than existing adaptive designs. A method is also proposed to obtain a reduced-bias estimator of the treatment difference under the proposed design. The new approaches hold great potential for efficiently selecting the more effective treatment in comparative trials. Operating characteristics are evaluated and compared with those of other group sequential designs in empirical studies. An example illustrates the application of the method.
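For orientation, the "predicted power" evaluated at an interim look can be computed with the standard Brownian-motion approximation. The sketch below assumes a flat prior on the drift and is not the authors' loss-based rule; the function name and defaults are illustrative.

```python
from math import sqrt
from scipy.stats import norm

def conditional_and_predictive_power(z_interim, info_frac, alpha=0.025):
    """Standard interim-monitoring quantities on the Brownian-motion scale.

    z_interim : interim z-statistic for the treatment difference
    info_frac : information fraction t in (0, 1) at the interim look
    alpha     : one-sided significance level of the final test
    Returns (conditional power under the current trend,
             predictive power under a noninformative prior on the drift).
    """
    z_alpha = norm.ppf(1 - alpha)          # final critical value
    t = info_frac
    b_t = z_interim * sqrt(t)              # B(t) = Z_t * sqrt(t)
    theta_hat = b_t / t                    # drift estimated from the current trend
    # Conditional power: P(B(1) > z_alpha | B(t), drift = theta_hat)
    cp = norm.cdf((b_t + theta_hat * (1 - t) - z_alpha) / sqrt(1 - t))
    # Predictive power: drift integrated over its flat-prior posterior N(theta_hat, 1/t)
    pp = norm.cdf((theta_hat - z_alpha) / sqrt((1 - t) / t))
    return cp, pp

# Example: halfway through the trial with an interim z-statistic of 1.8
print(conditional_and_predictive_power(z_interim=1.8, info_frac=0.5))
```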
62.
Actuarial science is the foundation of insurance development and the technical support for insurance operations. Abroad it has a four-hundred-year history, yet it was introduced to China only twenty years ago. For actuarial techniques to develop and innovate in China and to serve social needs, one must understand the historical background from which actuarial thought arose, trace the development of actuarial theory, and grasp the essence of actuarial thinking. On this basis, the paper introduces the main representative figures and academic ideas of each period in the development of actuarial science, discusses the influence of actuarial techniques on insurance development in each period, and reviews the historical process by which actuarial science has intersected and merged with compound interest theory, mathematics, statistics, computing technology, and financial economics.
63.
64.
One of the major aims of one-dimensional extreme-value theory is to estimate quantiles outside the sample or at the boundary of the sample. The underlying idea of any method for doing this is to estimate a quantile well inside the sample, but near its boundary, and then to shift it to the right place. The choice of this "anchor quantile" plays a major role in the accuracy of the method. We present a bootstrap method for choosing the optimal sample fraction in the estimation of either high quantiles or the endpoint, extending earlier results of Hall and Weissman (1997) on high quantile estimation. We give detailed results for the estimators used by Dekkers et al. (1989). An alternative way of attacking problems of this kind is given by Drees and Kaufmann (1998).
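The extrapolation idea behind such high-quantile estimators can be sketched with the classical Hill estimator and Weissman's formula rather than the moment estimator of Dekkers et al. (1989); the code below is only an illustration of that general scheme, with hypothetical names.

```python
import numpy as np

def weissman_quantile(sample, k, p):
    """Estimate the (1 - p) quantile beyond the data range of a heavy-tailed sample.

    sample : 1-D array of observations
    k      : number of upper order statistics used (the "sample fraction")
    p      : exceedance probability, typically smaller than 1/len(sample)
    """
    x = np.sort(sample)
    n = len(x)
    top = x[n - k:]                     # the k largest observations
    anchor = x[n - k - 1]               # the (n - k)-th order statistic: the "anchor quantile"
    gamma_hat = np.mean(np.log(top) - np.log(anchor))   # Hill estimator of the tail index
    # Weissman extrapolation: shift the anchor quantile out to the desired tail level
    return anchor * (k / (n * p)) ** gamma_hat

rng = np.random.default_rng(0)
pareto_sample = rng.pareto(a=2.0, size=1000) + 1.0   # classical Pareto(1, 2): true 99.99% quantile = 100
print(weissman_quantile(pareto_sample, k=100, p=1e-4))
```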
65.
Egmar Rödel, Statistics, 2013, 47(4): 573-585
Normed bivariate density functions were introduced by Hoeffding (1940/41). In the present paper, estimators for normed bivariate density functions are proposed; they are based on normed bivariate ranks and on a Fourier series expansion in Legendre polynomials. The estimation of normed bivariate density functions under positive dependence is also described.
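A generic orthogonal-series estimator of this kind can be written down as follows; this is a sketch of the standard construction from normed ranks, not necessarily the exact estimator of the paper.

```latex
% Legendre-series sketch for a normed bivariate density, built from the
% normed ranks U_k = R_k/(n+1), V_k = S_k/(n+1).
\[
  \hat{c}_m(u,v) \;=\; \sum_{i=0}^{m}\sum_{j=0}^{m} \hat{a}_{ij}\,\phi_i(u)\,\phi_j(v),
  \qquad
  \hat{a}_{ij} \;=\; \frac{1}{n}\sum_{k=1}^{n} \phi_i(U_k)\,\phi_j(V_k),
\]
% where \phi_i(u) = \sqrt{2i+1}\, P_i(2u-1) is the shifted Legendre polynomial,
% normalized to be orthonormal on [0,1].
```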
66.
Ronald D. Armstrong, Communications in Statistics - Simulation and Computation, 2013, 42(7): 1057-1073
This article develops a new cumulative sum (CUSUM) statistic to identify aberrant behavior in a sequentially administered multiple-choice standardized examination. The examination responses can be described as finite Poisson trials, and the statistic can be used for other applications that fit this framework. The standardized examination setting uses a maximum likelihood estimate of examinee ability and an item response theory model. Aberrant and non-aberrant probabilities are linked through an odds ratio, analogously to risk-adjusted CUSUM schemes. The significance level of a hypothesis test, where the null hypothesis is non-aberrant examinee behavior, is computed with Markov chains; a smoothing process spreads probabilities across the Markov states. The practicality of the approach for detecting aberrant examinee behavior is demonstrated with results from both simulated and empirical data.
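A risk-adjusted CUSUM of the kind described accumulates log-likelihood-ratio scores over the sequence of item responses. The sketch below assumes a two-parameter logistic item response model and a fixed aberrance odds ratio; it is a generic illustration, and all names and defaults are hypothetical rather than the article's scheme.

```python
import math

def item_prob(theta, a, b):
    """2PL item response model: probability of a correct response."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

def risk_adjusted_cusum(responses, items, theta_hat, odds_ratio=2.0, threshold=3.0):
    """Accumulate log-likelihood-ratio scores over finite Poisson trials.

    responses  : sequence of 0/1 item scores in administration order
    items      : sequence of (a, b) item parameters
    theta_hat  : maximum likelihood estimate of examinee ability
    odds_ratio : odds multiplier under the aberrant (alternative) hypothesis
    threshold  : signal aberrant behavior when the CUSUM exceeds this value
    Returns the CUSUM path and whether it ever crossed the threshold.
    """
    w, path = 0.0, []
    for y, (a, b) in zip(responses, items):
        p0 = item_prob(theta_hat, a, b)                      # non-aberrant probability
        p1 = odds_ratio * p0 / (1 - p0 + odds_ratio * p0)    # aberrant probability via the odds ratio
        # log-likelihood-ratio score for this trial
        score = math.log(p1 / p0) if y == 1 else math.log((1 - p1) / (1 - p0))
        w = max(0.0, w + score)                              # one-sided CUSUM recursion
        path.append(w)
    return path, any(v > threshold for v in path)
```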
67.
The cost and duration of many industrial experiments can be reduced using supersaturated designs, which screen the important factors out of a large set of potentially active variables. A supersaturated design is a design with fewer runs than effects to be estimated. Although construction methods for supersaturated designs have been studied extensively, their analysis methods are still at an early research stage. In this article, we propose a method for analyzing data using a correlation-based measure named symmetrical uncertainty. This measure comes from information theory and is the main idea behind variable selection algorithms developed in data mining. In this work, symmetrical uncertainty is used from another viewpoint, to determine the important factors more directly. The method enables us to use supersaturated designs for analyzing data from generalized linear models with a Bernoulli response. We evaluate the method on existing supersaturated designs constructed by the methods of Tang and Wu (1997) and of Koukouvinos et al. (2008). The comparison is performed through simulation experiments, and Type I and Type II error rates are calculated. Additionally, Receiver Operating Characteristic (ROC) curve methodology is applied as a further tool for performance evaluation.
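Symmetrical uncertainty itself has a simple closed form, SU(X, Y) = 2 I(X; Y) / (H(X) + H(Y)). A minimal sketch for discrete data follows; the function names are ours, and ranking factors of a supersaturated design by SU against the response is only one plausible use, not a description of the article's full procedure.

```python
import numpy as np
from collections import Counter

def entropy(values):
    """Shannon entropy (in nats) of a discrete sample."""
    counts = np.array(list(Counter(values).values()), dtype=float)
    p = counts / counts.sum()
    return -np.sum(p * np.log(p))

def symmetrical_uncertainty(x, y):
    """SU(X, Y) = 2 * I(X; Y) / (H(X) + H(Y)), a correlation-like measure in [0, 1]."""
    hx, hy = entropy(x), entropy(y)
    hxy = entropy(list(zip(x, y)))   # joint entropy H(X, Y)
    mutual_info = hx + hy - hxy      # I(X; Y) = H(X) + H(Y) - H(X, Y)
    return 0.0 if hx + hy == 0 else 2.0 * mutual_info / (hx + hy)

# Hypothetical usage: rank the columns of a supersaturated design matrix (entries -1/+1)
# by their symmetrical uncertainty with a 0/1 response vector:
# ranking = sorted(range(design.shape[1]),
#                  key=lambda j: symmetrical_uncertainty(design[:, j], response),
#                  reverse=True)
```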
68.
We demonstrate a multidimensional approach for combining several indicators of well-being, including the traditional money-income indicators. This methodology avoids the difficult and much-criticized task of computing imputed incomes for indicators such as net worth and schooling. Inequality in the proposed composite measures is computed using relative inequality indexes that permit simple analysis of both the contribution of each welfare indicator (and its factor components) and the within- and between-group components of total inequality when the population is grouped by income level, age, gender, or any other criterion. The analysis is performed on U.S. data from the Michigan Survey of Income Dynamics.
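The within/between decomposition mentioned here is easiest to see with an additively decomposable index such as Theil's T. The sketch below is a generic illustration of that decomposition, not the paper's composite measure; the example data are invented.

```python
import numpy as np

def theil_decomposition(values, groups):
    """Decompose Theil's T inequality index into within- and between-group parts.

    values : array of a positive welfare indicator (e.g. income or net worth)
    groups : array of group labels of the same length (income class, age, gender, ...)
    Returns (total, within, between), with total = within + between.
    """
    values = np.asarray(values, dtype=float)
    groups = np.asarray(groups)
    mu = values.mean()
    total = np.mean(values / mu * np.log(values / mu))
    within = between = 0.0
    for g in np.unique(groups):
        v = values[groups == g]
        share_pop = len(v) / len(values)       # population share of group g
        share_val = v.sum() / values.sum()     # indicator share of group g
        within += share_val * np.mean(v / v.mean() * np.log(v / v.mean()))
        between += share_val * np.log(share_val / share_pop)
    return total, within, between

# Invented example: six observations grouped by gender
incomes = np.array([12.0, 20.0, 35.0, 60.0, 18.0, 75.0])
gender = np.array(["f", "f", "f", "m", "m", "m"])
print(theil_decomposition(incomes, gender))
```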
69.
Arnold Zellner, The American Statistician, 2013, 67(4): 278-280
In this article, statistical inference is viewed as information processing involving input information and output information. After introducing information measures for the input and output information, an information criterion functional is formulated and optimized to obtain an optimal information processing rule (IPR). For the particular information measures and criterion functional adopted, it is shown that Bayes's theorem is the optimal IPR. This optimal IPR is shown to be 100% efficient in the sense that its use leads to output information exactly equal to the given input information. The analysis also links Bayes's theorem to maximum-entropy considerations.
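One way to see why Bayes's theorem emerges as the optimal rule is the following sketch; the notation and the exact form of the criterion are our reconstruction of the standard argument, not a quotation of the article.

```latex
% Input information: prior pi(theta) and likelihood f(y|theta).
% Output: a post-data density g(theta|y) and the marginal h(y).
\[
  \Delta[g] \;=\;
  \underbrace{\int g(\theta\mid y)\ln g(\theta\mid y)\,d\theta + \ln h(y)}_{\text{output information}}
  \;-\;
  \underbrace{\int g(\theta\mid y)\ln\bigl[\pi(\theta)\,f(y\mid\theta)\bigr]\,d\theta}_{\text{input information}}
  \;=\;
  \int g(\theta\mid y)\ln\frac{g(\theta\mid y)\,h(y)}{\pi(\theta)\,f(y\mid\theta)}\,d\theta \;\ge\; 0 .
\]
% The criterion is minimized, with Delta = 0 (100% efficiency), exactly when
% g(theta|y) = pi(theta) f(y|theta) / h(y), i.e. when the IPR is Bayes's theorem.
```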
70.
Akram Kohansal, Communications in Statistics - Theory and Methods, 2013, 42(18): 5392-5411
We present two new estimators of the entropy of absolutely continuous random variables. Some of their properties are studied; in particular, the consistency of the first estimator is proved. The introduced estimators are compared with existing entropy estimators. We also propose two new tests for normality based on the introduced entropy estimators and compare their powers with those of other normality tests. The results show that the proposed estimators and test statistics perform very well in estimating entropy and testing normality. A real example is presented and analyzed.
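For context, the classical spacing-based estimator of Vasicek (1976), against which new entropy estimators are usually benchmarked, looks as follows; this is not one of the two new estimators proposed in the article, and the default window choice is only a common convention.

```python
import numpy as np

def vasicek_entropy(sample, m=None):
    """Vasicek's spacing estimator of the differential entropy of a continuous variable.

    sample : 1-D array of observations
    m      : window (spacing) parameter, 1 <= m < n/2; a common default is round(sqrt(n))
    """
    x = np.sort(np.asarray(sample, dtype=float))
    n = len(x)
    if m is None:
        m = max(1, int(round(np.sqrt(n))))
    upper = x[np.minimum(np.arange(n) + m, n - 1)]   # X_(i+m), truncated at the sample maximum
    lower = x[np.maximum(np.arange(n) - m, 0)]       # X_(i-m), truncated at the sample minimum
    return np.mean(np.log(n / (2.0 * m) * (upper - lower)))

# Sanity check: the entropy of a standard normal is 0.5*log(2*pi*e) ~ 1.419
rng = np.random.default_rng(1)
print(vasicek_entropy(rng.normal(size=5000)))
```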