21.
Degrees of freedom is an important concept in statistics that long lacked a satisfactory explanation. From the perspective of the history of statistics, this paper closely examines the original literature of the Pearson–Fisher controversy over degrees of freedom, thoroughly clarifies the meaning of the concept and the statistical ideas surrounding it, and remedies shortcomings in the existing accounts of Fienberg, Stigler, and Chen Xiru. The study shows that Pearson's erroneous judgment that, in the chi-square test, the statistic has the same distribution whether the population distribution is fully known or is inferred from the sample introduced a bias into the accuracy of the chi-square test; a few contemporary statisticians noticed this bias but could not explain its source. Fisher's introduction of the degrees-of-freedom concept, argued through n-dimensional geometry, hypothesis testing, and maximum likelihood, not only corrected Pearson's error but also completed the mathematical logic of estimating population parameters from sample statistics.
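The core of the Pearson–Fisher dispute can be made concrete with a short simulation. Below is a minimal Monte Carlo sketch (table dimensions, cell probabilities, and sample sizes are illustrative choices, not taken from the paper): for an r × c contingency table whose margins are estimated from the data, Pearson's X² statistic behaves like a chi-square variable with (r−1)(c−1) degrees of freedom, as Fisher argued, rather than rc−1, as Pearson maintained.

```python
# Monte Carlo sketch of Fisher's degrees-of-freedom correction for
# contingency tables; all settings here are illustrative.
import numpy as np

rng = np.random.default_rng(0)
r, c, n, reps = 3, 4, 400, 5000
p = np.outer([0.2, 0.3, 0.5], [0.1, 0.2, 0.3, 0.4])  # independent margins

x2 = np.empty(reps)
for i in range(reps):
    counts = rng.multinomial(n, p.ravel()).reshape(r, c)
    expected = np.outer(counts.sum(1), counts.sum(0)) / n  # margins estimated
    x2[i] = ((counts - expected) ** 2 / expected).sum()

# The mean of a chi-square variable equals its degrees of freedom.
print("mean of X^2        :", x2.mean())          # close to 6
print("Pearson's proposal :", r * c - 1)          # 11 -- too large
print("Fisher's correction:", (r - 1) * (c - 1))  # 6  -- matches
```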
22.
Ryszard Zieliński. Statistics, 2013, 47(4): 453–462.
According to Pitman's measure of closeness, if $T_1$ and $T_2$ are two estimators of a real parameter $\theta$, then $T_1$ is better than $T_2$ if $P_\theta\{|T_1-\theta| < |T_2-\theta|\} > 1/2$ for all $\theta$. It may however happen that while $T_1$ is better than $T_2$ and $T_2$ is better than $T_3$, $T_3$ is better than $T_1$. Given $q \in (0,1)$ and a sample $X_1, X_2, \ldots, X_n$ from an unknown $F \in \mathcal{F}$, an estimator $T^* = T^*(X_1, X_2, \ldots, X_n)$ of the $q$-th quantile of the distribution $F$ is constructed such that $P_F\{|F(T^*)-q| \le |F(T)-q|\} \ge 1/2$ for all $F \in \mathcal{F}$ and for all $T \in \mathcal{T}$, where $\mathcal{F}$ is a nonparametric family of distributions and $\mathcal{T}$ is a class of estimators. It is shown that $T^* = X_{j:n}$ for a suitably chosen $j$-th order statistic.
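The defining probability can be checked directly by simulation. A hedged sketch (the choice F = Exponential(1), q = 1/2, and the candidate indices j are illustrative, not the paper's optimal j): compare two order statistics X_{j:n} as estimators of the q-th quantile by the probability that one lands closer in |F(T) − q|.

```python
# Simulation sketch of the Pitman-closeness comparison between two order
# statistics as quantile estimators; F, q, and j-values are illustrative.
import numpy as np

rng = np.random.default_rng(1)
n, q, reps = 25, 0.5, 20000
j1, j2 = 13, 12  # candidate order statistics X_{j:n} (1-based indices)

F = lambda t: 1.0 - np.exp(-t)  # Exponential(1) cdf; true q-quantile is ln 2
wins = 0
for _ in range(reps):
    x = np.sort(rng.exponential(1.0, n))
    wins += abs(F(x[j1 - 1]) - q) < abs(F(x[j2 - 1]) - q)

print(f"P(|F(X_{j1}:n) - q| < |F(X_{j2}:n) - q|) ≈ {wins / reps:.3f}")
```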
23.
A multivariate modified histogram density estimate depending on a reference density g and a partition P has been proved to have good consistency properties according to several information theoretic criteria. Given an i.i.d. sample, we show how to select automatically both g and P so that the expected $L_1$ error of the corresponding selected estimate is within a given constant multiple of the best possible error plus an additive term which tends to zero under mild assumptions. Our method is inspired by the combinatorial tools developed by Devroye and Lugosi [Devroye, L. and Lugosi, G., 2001, Combinatorial Methods in Density Estimation (New York, NY: Springer-Verlag)] and it includes a wide range of reference density and partition models. Results of simulations are also presented.
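To make the selection criterion concrete, the sketch below evaluates the quantity being optimized, the $L_1$ error of a histogram density estimate, against a known density for a few partitions. The paper's automatic, combinatorial (Devroye–Lugosi style) selection of g and P is not reproduced; all settings are illustrative.

```python
# Sketch: L1 error of histogram density estimates for several partitions,
# evaluated against a known true density; settings are illustrative.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
x = rng.normal(0.0, 1.0, 1000)
grid = np.linspace(-5, 5, 4001)
dx = grid[1] - grid[0]
true_pdf = stats.norm.pdf(grid)

for bins in (5, 20, 80):
    dens, edges = np.histogram(x, bins=bins, range=(-5, 5), density=True)
    idx = np.clip(np.searchsorted(edges, grid, side="right") - 1, 0, bins - 1)
    est = dens[idx]  # piecewise-constant histogram density on the grid
    l1 = np.abs(est - true_pdf).sum() * dx
    print(f"{bins:3d} bins: L1 error ≈ {l1:.3f}")
```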
24.
This article examines a semiparametric test for checking the constancy of serial dependence via copula models for Markov time series. A semiparametric score test is proposed for testing the constancy of the copula parameter against a stochastically varying copula parameter. The asymptotic null distribution of the test is established. A semiparametric bootstrap procedure is employed for the estimation of the variance of the proposed score test. Illustrations are given based on simulated series and historical interest rate data.
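A small sketch of the modelling setup being tested, not of the score test itself: a stationary Markov series whose serial dependence is driven by a Gaussian copula with parameter ρ, with ρ then re-estimated from consecutive pairs by pseudo-maximum likelihood. The AR(1) construction, sample size, and use of known margins are illustrative stand-ins.

```python
# Sketch of a copula-driven Markov series and pseudo-MLE of the copula
# parameter from consecutive pairs; the paper's score test for a constant
# versus stochastically varying parameter is not reproduced here.
import numpy as np
from scipy import stats, optimize

rng = np.random.default_rng(3)
T, rho = 2000, 0.5

# A Gaussian AR(1) gives U_t = Phi(Z_t) Gaussian-copula Markov dependence.
z = np.empty(T)
z[0] = rng.normal()
for t in range(1, T):
    z[t] = rho * z[t - 1] + np.sqrt(1 - rho**2) * rng.normal()
u = stats.norm.cdf(z)

# Copula log-density of a consecutive pair: bivariate normal log-density at
# the normal scores minus the two marginal normal log-densities.
a, b = stats.norm.ppf(u[:-1]), stats.norm.ppf(u[1:])
pairs = np.column_stack([a, b])

def neg_loglik(r):
    joint = stats.multivariate_normal.logpdf(pairs, cov=[[1.0, r], [r, 1.0]])
    return -np.sum(joint - stats.norm.logpdf(a) - stats.norm.logpdf(b))

res = optimize.minimize_scalar(neg_loglik, bounds=(-0.99, 0.99), method="bounded")
print("true rho =", rho, "; pseudo-MLE ≈", round(float(res.x), 3))
```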
25.
Determining the influence of a traffic accident on the road helps in analyzing the characteristics of traffic flow and in taking reasonable and effective control measures. Here, the detrended fluctuation analysis method is applied to investigate the complexity of time series in mixed traffic flow with a blockage induced by an accident. The scaling exponent is analyzed as a parameter depicting the long-term evolutionary behavior of the traffic-flow time series. According to the scaling exponent, the traffic-flow time series can display long-range correlations, short-range correlations, or a non-power-law relation within the long-range correlation regime, depending strongly on the entering probability of vehicles, the ratio of slow vehicles, and the blockage duration.
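Detrended fluctuation analysis itself is compact enough to sketch: integrate the mean-removed series, remove a local linear trend in windows of size s, and read the scaling exponent off the log–log slope of the fluctuation function F(s). The white-noise test signal (expected exponent near 0.5) and the scale grid are illustrative choices, not the paper's traffic data.

```python
# Minimal DFA-1 sketch; the white-noise input and scale grid are
# illustrative (white noise should give a scaling exponent near 0.5).
import numpy as np

def dfa_exponent(x, scales):
    y = np.cumsum(x - np.mean(x))  # integrated profile
    fluct = []
    for s in scales:
        n_win = len(y) // s
        ms = []
        for w in range(n_win):
            seg = y[w * s:(w + 1) * s]
            t = np.arange(s)
            trend = np.polyval(np.polyfit(t, seg, 1), t)  # local linear trend
            ms.append(np.mean((seg - trend) ** 2))
        fluct.append(np.sqrt(np.mean(ms)))
    slope, _ = np.polyfit(np.log(scales), np.log(fluct), 1)
    return slope

rng = np.random.default_rng(4)
scales = np.array([16, 32, 64, 128, 256])
alpha = dfa_exponent(rng.normal(size=8192), scales)
print("white-noise DFA exponent ≈", round(alpha, 2))
```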
26.
Various programs in statistical packages for analysis of variance with unequal cell sizes give different results for the same data because of the nonorthogonality of the main effects and interactions. This paper explains how these programs treat the problem of analysis of variance of unbalanced data.
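The phenomenon is easy to reproduce. A sketch assuming the pandas and statsmodels packages (cell sizes and effects are invented for illustration): with unequal cell sizes, sequential (Type I) and partial (Type III) sums of squares yield different ANOVA tables for the same two-factor data, precisely because the main effects and interaction are non-orthogonal.

```python
# Unbalanced two-way ANOVA: Type I and Type III sums of squares disagree.
# Assumes pandas and statsmodels; data are invented for illustration.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(5)
cells = [("a1", "b1", 8), ("a1", "b2", 3), ("a2", "b1", 4), ("a2", "b2", 9)]
df = pd.DataFrame([(a, b) for a, b, k in cells for _ in range(k)],
                  columns=["A", "B"])
df["y"] = (rng.normal(size=len(df))
           + (df["A"] == "a2") * 1.0
           + (df["B"] == "b2") * 0.5)

# Sum-to-zero contrasts so that the Type III tests are meaningful.
model = smf.ols("y ~ C(A, Sum) * C(B, Sum)", data=df).fit()
print(sm.stats.anova_lm(model, typ=1))  # sequential SS
print(sm.stats.anova_lm(model, typ=3))  # partial SS -- differs here
```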
27.
Since the squared ranks test was first proposed by Taha in 1964, it has been mentioned by several authors as a test that is easy to use, with good power in many situations. It is almost as easy to use as the Wilcoxon rank sum test, and has greater power when two populations differ in their scale parameters rather than in their location parameters. This paper discusses the versatility of the squared ranks test, introduces a test which uses squared ranks, and presents some exact tables.
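A hedged sketch of the statistic itself: rank the pooled sample, take T as the sum of squared ranks of the first sample, and normalize by the exact finite-population mean and variance of a size-m draw without replacement from {1², ..., N²}. This relies on the large-sample normal approximation; the paper's exact tables are not reproduced.

```python
# Squared ranks test (normal approximation); the moments are computed
# exactly from the finite population of squared pooled ranks.
import numpy as np
from scipy import stats

def squared_ranks_test(x, y):
    m, N = len(x), len(x) + len(y)
    r2 = stats.rankdata(np.concatenate([x, y])) ** 2  # squared pooled ranks
    T = r2[:m].sum()
    mu = m * r2.mean()                           # E[T] under H0
    var = m * (N - m) / (N - 1) * r2.var()       # sampling w/o replacement
    z = (T - mu) / np.sqrt(var)
    return T, z, 2 * stats.norm.sf(abs(z))

rng = np.random.default_rng(6)
x = rng.normal(0.0, 3.0, 30)  # same location, larger scale
y = rng.normal(0.0, 1.0, 30)
print("T = %.0f, z = %.2f, p = %.4f" % squared_ranks_test(x, y))
```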
28.
Conditional confidence intervals for the location parameter of the double exponential distribution, based on maximum likelihood estimators conditioned on a set of ancillary statistics, are compared in two ways with the corresponding unconditional confidence intervals based on the maximum likelihood estimators alone. Monte Carlo techniques are used, and the conditional approach appears to give slightly better results, although agreement as n becomes larger is noted.
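The unconditional side of the comparison admits a short Monte Carlo sketch: the MLE of the double exponential (Laplace) location is the sample median, so a symmetric interval median ± c can be calibrated by simulation and its coverage checked. The conditional intervals in the paper additionally condition on the ancillary configuration statistics, which this sketch omits; the sample size and known scale are illustrative.

```python
# Monte Carlo calibration and coverage check of an unconditional interval
# for the Laplace location (MLE = sample median); scale assumed known = 1.
import numpy as np

rng = np.random.default_rng(7)
n, reps, level = 15, 20000, 0.95

# Calibrate c from the null distribution of |median| under Laplace(0, 1).
med0 = np.median(rng.laplace(0.0, 1.0, size=(reps, n)), axis=1)
c = np.quantile(np.abs(med0), level)

# Coverage of median +/- c at an arbitrary true location (equivariance).
theta = 2.0
med1 = np.median(rng.laplace(theta, 1.0, size=(reps, n)), axis=1)
print("coverage ≈", np.mean(np.abs(med1 - theta) <= c))  # about 0.95
```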
29.
Correlation-Type Goodness of Fit Test for Extreme Value Distribution Based on Simultaneous Closeness
In reliability studies, one typically assumes a lifetime distribution for the units under study and then carries out the required analysis. One popular choice for the lifetime distribution is the family of two-parameter Weibull distributions (with scale and shape parameters) which, through a logarithmic transformation, can be transformed to the family of two-parameter extreme value distributions (with location and scale parameters). In carrying out a parametric analysis of this type, it is highly desirable to be able to test the validity of such a model assumption. A basic tool that is useful for this purpose is a quantile–quantile (QQ) plot, but in its use the issue of the choice of plotting position arises. Here, by adopting the optimal plotting points based on the Pitman closeness criterion proposed recently by Balakrishnan et al. (2010b), referred to as simultaneous closeness probability (SCP) plotting points, we propose a correlation-type goodness-of-fit test for the extreme value distribution. We compute the SCP plotting points for various sample sizes and use them to determine the mean, standard deviation, and critical values for the proposed correlation-type test statistic. Using these critical values, we carry out a power study, similar to the one carried out by Kinnison (1989), through which we demonstrate that the use of SCP plotting points results in better power than the use of mean ranks as plotting points and nearly the same power as the use of median ranks. We then demonstrate the use of the SCP plotting points and the associated correlation-type test for Weibull analysis with an illustrative example. Finally, for the sake of comparison, we also adapt two statistics proposed by Gan and Koehler (1990), in the context of probability–probability (PP) plots, based on SCP plotting points and compare their performance to those based on mean ranks. The empirical study also reveals that the tests from the QQ plot have better power than those from the PP plot.
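The correlation-type statistic is simple to sketch: correlate the ordered (log) sample with standard extreme value quantiles at chosen plotting positions, and reject for small correlation. In the sketch below, Bernard's median-rank approximation stands in for the paper's SCP plotting points, which are tabulated in the paper; critical values would come from simulation, as in the paper's power study.

```python
# Correlation-type goodness-of-fit sketch for the extreme value
# distribution; median ranks stand in for the SCP plotting points.
import numpy as np

def ev_corr_stat(x):
    n = len(x)
    i = np.arange(1, n + 1)
    p = (i - 0.3) / (n + 0.4)      # Bernard's median-rank positions
    q = np.log(-np.log(1.0 - p))   # standard smallest-extreme-value quantiles
    return np.corrcoef(np.sort(x), q)[0, 1]

rng = np.random.default_rng(8)
weib = 3.0 * rng.weibull(2.0, 50)  # Weibull(shape=2, scale=3) sample
print("log-Weibull (EV model) r ≈", round(ev_corr_stat(np.log(weib)), 3))
print("normal (wrong model)   r ≈", round(ev_corr_stat(rng.normal(size=50)), 3))
```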
30.
In this article, it is shown that in panel data models the Hausman test (HT) statistic can be considerably refined using the bootstrap technique. An Edgeworth expansion shows that the coverage of the bootstrapped HT is second-order correct. The asymptotic and bootstrapped HT are also compared by Monte Carlo simulations. Under the null hypothesis and at a nominal size of 0.05, the bootstrapped HT reduces the coverage error of the asymptotic HT by 10–40% of nominal size; for nominal sizes less than or equal to 0.025, the coverage error reduction is between 30% and 80% of nominal size. For nonnull alternatives, the power of the asymptotic HT spuriously increases by over 70% of the correct power for nominal sizes less than or equal to 0.025; the bootstrapped HT reduces this overrejection to less than one fourth of its value. The advantages of the bootstrapped HT increase with the number of explanatory variables. Heteroscedasticity or serial correlation in the idiosyncratic part of the error does not diminish the advantages of the bootstrapped HT, provided a heteroscedasticity-robust version of the HT and the wild bootstrap are used. The power penalty, however, is not negligible if the heteroscedasticity-robust approach is used in a homoscedastic panel data model.
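The recipe lends itself to a compact sketch. Below, the contrast q = b_FE − b_pooled is formed (pooled OLS stands in for the efficient random effects GLS estimator, to keep the code short), and its null distribution is approximated by a recentred cluster (pairs) bootstrap over individuals. The data-generating process, which violates the null by correlating the regressor with the individual effect, is invented for illustration; the paper works with the standard RE estimator, Edgeworth expansions, and a wild bootstrap under heteroscedasticity.

```python
# Bootstrapped Hausman-type contrast in a one-regressor panel; pooled OLS
# stands in for the RE estimator, and everything here is illustrative.
import numpy as np

rng = np.random.default_rng(9)
N, T = 60, 5
alpha = rng.normal(size=N)                          # individual effects
x = rng.normal(size=(N, T)) + 0.8 * alpha[:, None]  # correlated with alpha
y = 1.0 * x + alpha[:, None] + rng.normal(size=(N, T))

def contrast(xm, ym):
    xd = xm - xm.mean(axis=1, keepdims=True)        # within transformation
    yd = ym - ym.mean(axis=1, keepdims=True)
    b_fe = (xd * yd).sum() / (xd ** 2).sum()        # fixed effects slope
    b_po = (xm * ym).sum() / (xm ** 2).sum()        # pooled OLS slope
    return b_fe - b_po

q = contrast(x, y)
B = 999
qs = np.empty(B)
for b in range(B):
    idx = rng.integers(0, N, N)                     # resample individuals
    qs[b] = contrast(x[idx], y[idx])

p = np.mean(np.abs(qs - q) >= abs(q))               # recentred bootstrap p
print(f"contrast q = {q:.3f}, bootstrap p ≈ {p:.3f}")  # small p: reject H0
```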