841.
This paper addresses the largest and smallest observations at the times when a new record of either kind (upper or lower) occurs; these are called the current upper and lower records, respectively. We examine the entropy properties of these statistics, especially the difference between the entropies of the upper and lower bounds of the record coverage. The results are presented for some common parametric families of distributions. Several upper and lower bounds for the entropy of current records are obtained in terms of the entropy of the parent distribution. It is shown that the mutual information, the Kullback–Leibler distance between the endpoints of the record coverage, and the Kullback–Leibler distance between the data distribution and the current records are all distribution-free.
842.
In this article, we present a goodness-of-fit test based on a comparison between the empirical characteristic function cn(t) and the characteristic function c0(t) of a random variable under the simple null hypothesis. We do this by introducing a suitable distance measure. Empirical critical values of the new test statistic for testing normality are computed. In addition, the new test is compared via simulation with other omnibus tests for normality and is shown to be more powerful.
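The comparison described above can be sketched numerically. The following is an illustration only, not the paper's actual statistic: it studentizes the sample, evaluates the empirical characteristic function on a grid, and accumulates the squared distance to the standard-normal characteristic function exp(-t^2/2). The grid and the unweighted L2 distance are assumptions of this sketch.

```python
import numpy as np

def ecf_distance(x, t_grid=np.linspace(-3, 3, 121)):
    """Squared L2 distance between the empirical characteristic function
    cn(t) and the standard-normal cf c0(t) = exp(-t^2/2), on a fixed grid.
    Illustrative only: the paper's distance measure and weighting differ."""
    x = np.asarray(x, dtype=float)
    x = (x - x.mean()) / x.std(ddof=1)            # studentize: shape only
    cn = np.exp(1j * np.outer(t_grid, x)).mean(axis=1)
    c0 = np.exp(-t_grid ** 2 / 2)
    dt = t_grid[1] - t_grid[0]
    return float((np.abs(cn - c0) ** 2).sum() * dt)

rng = np.random.default_rng(0)
d_normal = ecf_distance(rng.normal(size=500))       # data matching H0
d_skewed = ecf_distance(rng.exponential(size=500))  # clearly non-normal
```

With a fixed seed, the non-normal sample produces the visibly larger distance, which is the behavior a test statistic of this kind exploits.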
843.
Egbert A. van der Meulen, Communications in Statistics: Theory and Methods, 2013, 42(5): 699-708
The Fisher exact test has been unjustly dismissed by some as 'only conditional,' whereas it is unconditionally the uniformly most powerful test among all unbiased tests of size α, with power greater than its nominal significance level α. The problem with this truly optimal test is that it requires randomization at the critical value(s) to be of size α. Obviously, in practice, one does not want to conclude that 'with probability x we have a statistically significant result.' Usually, the hypothesis is rejected only if the test statistic's outcome is more extreme than the critical value, which reduces the actual size considerably. The randomized unconditional Fisher exact test is constructed (using Neyman-structure arguments) by deriving a conditional randomized test that randomizes at critical values c(t) with probabilities γ(t), both of which depend on the total number of successes T (the complete sufficient statistic for the nuisance parameter, the common success probability) conditioned upon. In this paper, the Fisher exact test is approximated by deriving nonrandomized conditional tests whose critical region includes the critical value only if γ(t) > γ0, for a fixed threshold value γ0, such that the size of the unconditional modified test is, for all values of the nuisance parameter, smaller than but as close as possible to α. It will be seen that this greatly improves the size of the test compared with the conservative nonrandomized Fisher exact test. Size, power, and p-value comparisons with the (virtual) randomized Fisher exact test, the conservative nonrandomized Fisher exact test, Pearson's chi-square test, the more competitive mid-p value, McDonald's modification, and Boschloo's modification are performed under the assumption of two binomial samples.
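The conditional p-value and the mid-p value discussed above follow directly from the hypergeometric distribution of one cell count given the margins. A minimal one-sided sketch (the randomized and γ0-threshold constructions of the paper are not reproduced here):

```python
from math import comb

def fisher_one_sided(a, b, c, d):
    """One-sided Fisher exact p-value and mid-p value for the 2x2 table
    [[a, b], [c, d]], conditioning on the margins: given T = a + c total
    successes, the cell count A is hypergeometric."""
    n1, n2, t = a + b, c + d, a + c
    denom = comb(n1 + n2, t)

    def pr(k):
        # hypergeometric P(A = k) given the margins
        return comb(n1, k) * comb(n2, t - k) / denom

    hi = min(t, n1)
    p = sum(pr(k) for k in range(a, hi + 1))   # P(A >= a)
    mid_p = p - 0.5 * pr(a)                    # mid-p: half-weight on A = a
    return p, mid_p
```

For Fisher's classic 4+4 design with table [[3, 1], [1, 3]], this gives p = 17/70 and mid-p = 9/70, showing how the mid-p correction sharpens the conservative exact p-value.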
844.
In this article, a family of distributions, namely the exponentiated family of distributions, is defined, and different point estimates of the unknown parameters are derived based on record statistics. Prediction of future record values is presented from a Bayesian viewpoint. Two numerical examples and a Monte Carlo simulation study are presented to illustrate the results.
845.
In this article, we consider a sampling scheme for the record-breaking data set-up, called record ranked set sampling. We compare the proposed scheme with the well-known record-value sampling scheme known as inverse sampling when the underlying distribution follows the proportional hazard rate model. Various point estimators are obtained under each sampling scheme and compared with respect to mean squared error and the Pitman measure of closeness. In most situations, the new sampling scheme provides more efficient estimators than its counterparts. Finally, one data set is analyzed for illustrative purposes.
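Record-breaking data of the kind used above arise by scanning a sequence and keeping each new maximum. A minimal sketch of that extraction step (the record ranked set sampling and inverse sampling schemes themselves are not implemented here):

```python
def upper_records(seq):
    """Return the upper record values of a sequence: each element that
    strictly exceeds every element observed before it."""
    records = []
    current_max = float("-inf")
    for v in seq:
        if v > current_max:
            records.append(v)
            current_max = v
    return records
```

For example, `upper_records([3, 1, 4, 1, 5, 9, 2, 6])` yields `[3, 4, 5, 9]`; both sampling schemes in the article operate on record values of this kind.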
846.
All existing location-scale rank tests use equal weights for the components. We advocate the use of weighted combinations of statistics. This approach can be partly substantiated by the theory of locally most powerful tests. We specifically investigate a Wilcoxon-Mood combination. We give exact critical values for a range of weights. The asymptotic normality of the test statistic is proved under a general hypothesis and Chernoff-Savage conditions. The asymptotic relative efficiency of this test with respect to unweighted combinations shows that a careful choice of weights results in a gain in efficiency.
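A weighted combination of the two components can be sketched by standardizing each statistic by its null mean and variance and combining the squared standardized values with weight w on the Wilcoxon part. This is a Lepage-style illustration under our own conventions, not the paper's exact statistic or its exact critical values, and it assumes no ties:

```python
import numpy as np

def weighted_wilcoxon_mood(x, y, w=0.5):
    """Weighted location-scale rank statistic: w * Z_W^2 + (1 - w) * Z_M^2,
    where Z_W standardizes the Wilcoxon rank sum (location) and Z_M the
    Mood squared-rank statistic (scale). Assumes no ties across samples."""
    m, n = len(x), len(y)
    N = m + n
    ranks = np.argsort(np.argsort(np.concatenate([x, y]))) + 1
    r = ranks[:m].astype(float)               # ranks of the x-sample
    W = r.sum()                               # Wilcoxon rank sum
    M = ((r - (N + 1) / 2) ** 2).sum()        # Mood squared-rank statistic
    z_w = (W - m * (N + 1) / 2) / np.sqrt(m * n * (N + 1) / 12)
    z_m = (M - m * (N * N - 1) / 12) / np.sqrt(
        m * n * (N + 1) * (N * N - 4) / 180)
    return w * z_w ** 2 + (1 - w) * z_m ** 2
```

Setting w = 0.5 recovers the usual equal-weight (Lepage-type) combination; moving w toward 0 or 1 emphasizes the scale or the location component, which is the choice the article's efficiency comparison addresses.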
847.
By modifying the direct method for solving the overdetermined linear system, we are able to present an algorithm for L1 estimation that appears to be computationally superior to any other known algorithm for the simple linear regression problem.
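The paper's direct method itself is not shown, but the L1 problem it solves can be illustrated with a brute-force check: for simple linear regression, some optimal least-absolute-deviations line passes through two of the data points, so trying all pairs finds an exact solution (an O(n^3) illustration, not the paper's algorithm):

```python
import numpy as np
from itertools import combinations

def lad_line(x, y):
    """Exhaustive L1 (least absolute deviations) fit of y ~ a + b*x.
    Relies on the fact that some optimal L1 line interpolates two data
    points; checks every pair. Illustration only -- far slower than any
    serious algorithm for this problem."""
    best_a, best_b, best_sae = 0.0, 0.0, np.inf
    for i, j in combinations(range(len(x)), 2):
        if x[i] == x[j]:
            continue
        b = (y[j] - y[i]) / (x[j] - x[i])
        a = y[i] - b * x[i]
        sae = np.abs(y - (a + b * x)).sum()   # sum of absolute errors
        if sae < best_sae:
            best_a, best_b, best_sae = a, b, sae
    return best_a, best_b, best_sae
```

On data with one gross outlier, e.g. y = (0, 1, 2, 3, 100) at x = (0, 1, 2, 3, 4), the L1 fit recovers the line y = x, illustrating the robustness that motivates L1 estimation.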
848.
Multiple comparison procedures are extended to designs consisting of several groups, where the treatment means are to be compared within each group. This may arise in two-factor experiments with a significant interaction term, when one is interested in comparing the levels of one factor at each level of the other factor. A general approach is presented for deriving the distributions and calculating critical points, following three papers that dealt with two specific procedures. These points are used to construct simultaneous confidence intervals over a restricted set of contrasts among the treatment means in each of the groups. Tables of critical values are provided for two procedures, and an application is demonstrated. Some extensions are presented for the case of possibly different sets of contrasts and for unequal variances across the groups.
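Simultaneous intervals of the kind described above can be approximated with a crude Bonferroni critical point; the paper's exact critical values would be tighter. A sketch under simplifying assumptions of our own (known common standard deviation `sigma` and a common replicate count `n`):

```python
from math import sqrt
from statistics import NormalDist

def groupwise_pairwise_cis(group_means, n, sigma, alpha=0.05):
    """Simultaneous CIs for all pairwise treatment-mean differences within
    each group. Uses a Bonferroni critical point over the total number of
    comparisons as a stand-in for the paper's exact critical points;
    assumes known common sd `sigma` and common replicate count `n`."""
    n_comp = sum(len(g) * (len(g) - 1) // 2 for g in group_means)
    z = NormalDist().inv_cdf(1 - alpha / (2 * n_comp))
    half = z * sigma * sqrt(2 / n)            # common half-width
    cis = []
    for g in group_means:
        for i in range(len(g)):
            for j in range(i + 1, len(g)):
                d = g[i] - g[j]
                cis.append((d - half, d + half))
    return cis
```

The Bonferroni point is conservative precisely because it ignores the dependence among contrasts within a group, which is what the exact critical-point tables in the article exploit.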
849.
850.
Tests of significance are often made in situations where the standard assumptions underlying the probability calculations do not hold. As a result, the reported significance levels become difficult to interpret. This article sketches an alternative interpretation of a reported significance level, valid in considerable generality. This level locates the given data set within the spectrum of other data sets derived from the given one by an appropriate class of transformations. If the null hypothesis being tested holds, the derived data sets should be equivalent to the original one. Thus, a small reported significance level indicates an unusual data set. This development parallels that of randomization tests, but with a crucial technical difference: our approach involves permuting observed residuals, whereas the classical randomization approach involves permuting unobservable, or perhaps nonexistent, stochastic disturbance terms.
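The residual-permutation idea can be sketched for the slope of a simple regression: under the null of no slope, the residuals from the intercept-only fit are exchangeable, so permuting them and refitting yields a reference distribution for the observed slope. A minimal sketch under our own conventions (the article's class of transformations is more general):

```python
import numpy as np

def residual_permutation_pvalue(x, y, n_perm=999, seed=0):
    """Significance level for the slope of y ~ x obtained by permuting the
    observed residuals from the null (intercept-only) model -- not any
    unobservable disturbance terms."""
    rng = np.random.default_rng(seed)

    def slope(xv, yv):
        xc = xv - xv.mean()
        return (xc * (yv - yv.mean())).sum() / (xc ** 2).sum()

    t_obs = abs(slope(x, y))
    resid = y - y.mean()                  # residuals under H0: no slope
    count = 0
    for _ in range(n_perm):
        y_star = y.mean() + rng.permutation(resid)
        if abs(slope(x, y_star)) >= t_obs:
            count += 1
    return (count + 1) / (n_perm + 1)     # include the observed data set
```

A small returned value locates the observed data set in the extreme tail of the derived data sets, which is exactly the interpretation the article proposes.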