2,464 results in total (search time: 359 ms)
271.
This paper addresses the largest and smallest observations at the times when a new record of either kind (upper or lower) occurs; these are called the current upper and lower records, respectively. We examine the entropy properties of these statistics, especially the difference between the entropies of the upper and lower bounds of the record coverage. Results are presented for some common parametric families of distributions. Several upper and lower bounds on the entropy of current records, in terms of the entropy of the parent distribution, are obtained. It is shown that the mutual information between the endpoints of the record coverage, the Kullback–Leibler distance between those endpoints, and the Kullback–Leibler distance between the data distribution and the current records are all distribution-free.
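As a concrete illustration of the statistics studied here, the current upper and lower records of a sequence can be extracted with a short sketch (the function name and interface are illustrative, not taken from the paper):

```python
def current_records(xs):
    """Track the current upper and lower records of a sequence.

    Each time a new record of either kind (upper or lower) occurs,
    append the pair (current lower record, current upper record)
    observed at that time.
    """
    lower = upper = xs[0]
    pairs = [(lower, upper)]
    for x in xs[1:]:
        if x > upper:          # new upper record
            upper = x
            pairs.append((lower, upper))
        elif x < lower:        # new lower record
            lower = x
            pairs.append((lower, upper))
    return pairs
```

For the sequence 3, 1, 4, 1, 5 this yields the record-coverage intervals (3, 3), (1, 3), (1, 4), (1, 5).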
272.
In this article, we present a goodness-of-fit test for a distribution based on comparisons between the empirical characteristic function cn(t) and the characteristic function c0(t) of the random variable under the simple null hypothesis. We do this by introducing a suitable distance measure. Empirical critical values of the new test statistic for testing normality are computed. In addition, the new test is compared via simulation with other omnibus tests for normality, and it is shown to be more powerful than its competitors.
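The article's exact distance measure is not reproduced in the abstract; as a hypothetical sketch, a discretized L2-type distance between the empirical characteristic function of standardized data and the standard normal characteristic function could look like this:

```python
import numpy as np

def ecf_distance(x, t_grid):
    """Discretized L2-type distance between the empirical characteristic
    function of standardized data and the standard normal CF exp(-t^2/2).

    Illustrative only; the article's distance measure may differ.
    Assumes a uniformly spaced t_grid.
    """
    x = np.asarray(x, float)
    z = (x - x.mean()) / x.std(ddof=1)                   # standardize
    cn = np.exp(1j * np.outer(t_grid, z)).mean(axis=1)   # empirical CF c_n(t)
    c0 = np.exp(-t_grid ** 2 / 2)                        # null CF c_0(t)
    dt = t_grid[1] - t_grid[0]
    return float(np.sum(np.abs(cn - c0) ** 2) * dt)
```

A large distance suggests departure from normality; in a simulation, a skewed sample produces a visibly larger distance than a normal sample of the same size.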
273.
The Fisher exact test has been unjustly dismissed by some as 'only conditional,' whereas it is unconditionally the uniformly most powerful test among all unbiased tests of size α, that is, tests whose power never falls below the nominal significance level α. The problem with this truly optimal test is that it requires randomization at the critical value(s) to attain size α. Obviously, in practice, one does not want to conclude that 'with probability x we have a statistically significant result.' Usually, the hypothesis is rejected only if the test statistic's outcome is more extreme than the critical value, which reduces the actual size considerably.

The randomized unconditional Fisher exact test is constructed (using Neyman-structure arguments) by deriving a conditional randomized test that randomizes at critical values c(t) with probabilities γ(t), both of which depend on the total number of successes T (the complete sufficient statistic for the nuisance parameter, the common success probability) conditioned upon.

In this paper, the Fisher exact test is approximated by deriving nonrandomized conditional tests whose critical region includes the critical value only if γ(t) > γ0, for a fixed threshold value γ0, such that the size of the unconditional modified test is, for all values of the nuisance parameter (the common success probability), smaller than but as close as possible to α. It will be seen that this greatly improves the size of the test compared with the conservative nonrandomized Fisher exact test.

Size, power, and p-value comparisons with the (virtual) randomized Fisher exact test, the conservative nonrandomized Fisher exact test, Pearson's chi-square test, the more competitive mid-p value, McDonald's modification, and Boschloo's modification are performed under the assumption of two binomial samples.
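For reference, the conditional one-sided Fisher p-value and its mid-p counterpart for two binomial samples can be sketched as follows; this is the standard conditional test, not the unconditional modification developed in the paper, and the function name is illustrative:

```python
from math import comb

def fisher_one_sided(x1, n1, x2, n2):
    """One-sided conditional Fisher exact p-value and mid-p value for two
    binomial samples, conditioning on the total number of successes
    t = x1 + x2 (hypergeometric null distribution). Tests H1: p1 > p2.
    """
    t = x1 + x2
    lo, hi = max(0, t - n2), min(n1, t)            # support of X1 given T = t
    denom = comb(n1 + n2, t)
    pmf = {k: comb(n1, k) * comb(n2, t - k) / denom for k in range(lo, hi + 1)}
    p = sum(v for k, v in pmf.items() if k >= x1)  # P(X1 >= x1 | T = t)
    mid_p = p - 0.5 * pmf[x1]                      # mid-p correction
    return p, mid_p
```

The mid-p value halves the probability of the observed table, which is one standard way to reduce the conservatism of the nonrandomized test.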
274.
In this article, a family of distributions, namely the exponentiated family of distributions, is defined, and different point estimates of the unknown parameters are derived based on record statistics. Prediction of future record values is presented from a Bayesian viewpoint. Two numerical examples and a Monte Carlo simulation study are presented to illustrate the results.
275.
In this article, we consider a sampling scheme in the record-breaking data set-up, called record ranked set sampling. We compare the proposed scheme with the well-known sampling scheme for record values, the inverse sampling scheme, when the underlying distribution follows the proportional hazard rate model. Various point estimators are obtained under each sampling scheme and compared with respect to mean squared error and the Pitman measure of closeness. In most situations, the new sampling scheme is observed to provide more efficient estimators than its counterparts. Finally, a data set is analyzed for illustrative purposes.
276.
All existing location-scale rank tests use equal weights for their components. We advocate the use of weighted combinations of statistics. This approach can partly be substantiated by the theory of locally most powerful tests. We specifically investigate a Wilcoxon–Mood combination. We give exact critical values for a range of weights. The asymptotic normality of the test statistic is proved under a general hypothesis and Chernoff–Savage conditions. The asymptotic relative efficiency of this test with respect to unweighted combinations shows that a careful choice of weights results in a gain in efficiency.
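One plausible form of such a weighted combination, with both components standardized under the null hypothesis, is sketched below; the exact weighting and standardization used in the article may differ:

```python
import numpy as np

def weighted_wilcoxon_mood(x, y, lam=0.5):
    """Weighted combination of the Wilcoxon (location) and Mood (scale)
    rank statistics for two samples, each standardized under H0.

    Illustrative sketch: T = lam * Z_Wilcoxon + (1 - lam) * Z_Mood.
    Assumes no ties.
    """
    m, n = len(x), len(y)
    N = m + n
    ranks = np.argsort(np.argsort(np.concatenate([x, y]))) + 1
    rx = ranks[:m].astype(float)          # ranks of the first sample
    # Wilcoxon rank-sum statistic for x, standardized
    W = rx.sum()
    zW = (W - m * (N + 1) / 2) / np.sqrt(m * n * (N + 1) / 12)
    # Mood scale statistic for x, standardized
    M = ((rx - (N + 1) / 2) ** 2).sum()
    muM = m * (N * N - 1) / 12
    varM = m * n * (N + 1) * (N * N - 4) / 180
    zM = (M - muM) / np.sqrt(varM)
    return lam * zW + (1 - lam) * zM
```

Setting lam = 1 recovers a pure location test and lam = 0 a pure scale test; intermediate weights trade off power against the two alternatives.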
277.
By modifying the direct method for solving the overdetermined linear system, we are able to present an algorithm for L1 estimation that appears to be computationally superior to any other known algorithm for the simple linear regression problem.
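The paper's direct-method algorithm is not reproduced in the abstract; a generic L1 (least absolute deviations) fit for the simple linear regression problem can be sketched via iteratively reweighted least squares:

```python
import numpy as np

def l1_line(x, y, iters=100, eps=1e-8):
    """Least-absolute-deviations fit of y ~ a + b*x via iteratively
    reweighted least squares (IRLS). A generic L1 sketch, not the
    direct-method algorithm of the paper.
    """
    x, y = np.asarray(x, float), np.asarray(y, float)
    X = np.column_stack([np.ones_like(x), x])
    beta = np.linalg.lstsq(X, y, rcond=None)[0]          # L2 starting point
    for _ in range(iters):
        w = 1.0 / np.maximum(np.abs(y - X @ beta), eps)  # weights ~ 1/|residual|
        sw = np.sqrt(w)
        beta = np.linalg.lstsq(sw[:, None] * X, sw * y, rcond=None)[0]
    return beta  # (intercept, slope)
```

Unlike least squares, the L1 fit is robust to a single gross outlier, which is easy to verify on a mostly linear data set with one corrupted response.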
278.
Multiple comparison procedures are extended to designs consisting of several groups, where the treatment means are to be compared within each group. This may arise in two-factor experiments with a significant interaction term, when one is interested in comparing the levels of one factor at each level of the other factor. A general approach is presented for deriving the distributions and calculating critical points, following three papers which dealt with two specific procedures. These points are used for constructing simultaneous confidence intervals over some restricted set of contrasts among treatment means in each of the groups. Tables of critical values are provided for two procedures, and an application is demonstrated. Some extensions are presented for the case of possibly different sets of contrasts and also for unequal variances in the various groups.
279.
280.
Tests of significance are often made in situations where the standard assumptions underlying the probability calculations do not hold. As a result, the reported significance levels become difficult to interpret. This article sketches an alternative interpretation of a reported significance level, valid in considerable generality. This level locates the given data set within the spectrum of other data sets derived from the given one by an appropriate class of transformations. If the null hypothesis being tested holds, the derived data sets should be equivalent to the original one. Thus, a small reported significance level indicates an unusual data set. This development parallels that of randomization tests, but there is a crucial technical difference: our approach involves permuting observed residuals; the classical randomization approach involves permuting unobservable, or perhaps nonexistent, stochastic disturbance terms.
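A minimal sketch of the residual-permutation idea for a simple regression slope (the function name and details are illustrative, not taken from the article):

```python
import numpy as np

def residual_permutation_p(x, y, n_perm=2000, seed=0):
    """Approximate significance level for the slope of a simple linear
    regression, obtained by permuting the observed residuals from the
    null (intercept-only) fit and recomputing the slope each time.
    """
    rng = np.random.default_rng(seed)
    x = np.asarray(x, float)
    y = np.asarray(y, float)
    xc = x - x.mean()

    def slope(yy):
        return xc @ (yy - yy.mean()) / (xc @ xc)

    b_obs = abs(slope(y))
    resid = y - y.mean()                 # residuals under H0: slope = 0
    hits = sum(
        abs(slope(y.mean() + rng.permutation(resid))) >= b_obs
        for _ in range(n_perm)
    )
    return (hits + 1) / (n_perm + 1)     # add-one to avoid zero p-values
```

A small returned value indicates that the observed data set is unusual within the spectrum of data sets obtained by permuting its residuals, which is exactly the interpretation advocated above.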