81.
In this paper, we study the robustness properties of several procedures for the joint estimation of shape and scale in a generalized Pareto model. The estimators we primarily focus upon, the most bias-robust estimator (MBRE) and the optimal MSE-robust estimator (OMSE), are one-step estimators distinguished as optimally robust in the shrinking-neighbourhood setting; that is, on such a neighbourhood they minimize the maximal bias and the maximal mean squared error (MSE), respectively. For their initialization, we propose a particular location–dispersion estimator, MedkMAD, which matches the population median and kMAD (an asymmetric variant of the median of absolute deviations) against their empirical counterparts. These optimally robust estimators are compared to the maximum-likelihood, skipped maximum-likelihood, Cramér–von Mises minimum-distance, method-of-medians, and Pickands estimators. To quantify their deviation from robust optimality, for each of these suboptimal estimators we determine the finite-sample breakdown point and the influence function, as well as the statistical accuracy measured by asymptotic bias, variance, and MSE, all evaluated uniformly on shrinking neighbourhoods. These asymptotic findings are complemented by an extensive simulation study assessing the finite-sample behaviour of the considered procedures. The applicability of the procedures and their stability against outliers are illustrated for the Danish fire insurance data set from the R package evir.
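The MedkMAD idea, matching a location and a dispersion functional of the model against their empirical counterparts, can be sketched in a few lines. The illustration below substitutes the ordinary MAD for the asymmetric kMAD and solves the two matching equations numerically; the function name and numerical details are illustrative, not the authors' implementation.

```python
import numpy as np
from scipy.optimize import brentq, fsolve
from scipy.stats import genpareto

def gpd_medmad_init(x):
    """Initial (shape, scale) for a generalized Pareto fit obtained by
    matching the population median and MAD to their empirical values
    (a simplified stand-in for the MedkMAD initialization)."""
    med_emp = np.median(x)
    mad_emp = np.median(np.abs(x - med_emp))

    def equations(params):
        xi, log_beta = params
        beta = np.exp(log_beta)              # keeps the scale positive
        med = genpareto.ppf(0.5, xi, scale=beta)
        # Population MAD: the t > 0 solving F(med + t) - F(med - t) = 1/2.
        g = lambda t: (genpareto.cdf(med + t, xi, scale=beta)
                       - genpareto.cdf(med - t, xi, scale=beta) - 0.5)
        mad = brentq(g, 1e-10, 1e3 * beta)
        return [med - med_emp, mad - mad_emp]

    xi, log_beta = fsolve(equations, x0=[0.1, np.log(med_emp)])
    return xi, np.exp(log_beta)

x = genpareto.rvs(0.7, scale=2.0, size=500, random_state=1)
print(gpd_medmad_init(x))
```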
82.
Peter C. Austin, Communications in Statistics - Simulation and Computation, 2013, 42(6): 1228-1234
Researchers are increasingly using the standardized difference to compare the distribution of baseline covariates between treatment groups in observational studies. Standardized differences were initially developed in the context of comparing the means of continuous variables between two groups. However, in medical research, many baseline covariates are dichotomous. In this article, we explore the utility and interpretation of the standardized difference for comparing the prevalence of dichotomous variables between two groups. We examined the relationship between the standardized difference and (i) the maximal difference in the prevalence of the binary variable between the two groups, (ii) the relative risk relating the prevalence of the binary variable in one group to that in the other, and (iii) the phi coefficient measuring the correlation between treatment group and the binary variable. We found that a standardized difference of 10% (or 0.1) is equivalent to a phi coefficient of 0.05 (indicating negligible correlation) for the correlation between treatment group and the binary variable.
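For the binary case both quantities have closed forms, so the reported correspondence is easy to check numerically. The formulas below are standard; the phi coefficient is derived assuming equal group sizes, and the function names are mine.

```python
import numpy as np

def std_diff_binary(p1, p2):
    """Standardized difference between two prevalences."""
    return (p1 - p2) / np.sqrt((p1 * (1 - p1) + p2 * (1 - p2)) / 2)

def phi_equal_groups(p1, p2):
    """Phi coefficient for treatment group vs. binary covariate,
    assuming two groups of equal size (derived from the 2x2 table)."""
    return (p1 - p2) / np.sqrt((p1 + p2) * (2 - p1 - p2))

# Near p = 0.5, a standardized difference of 0.1 corresponds to phi ~ 0.05.
p1, p2 = 0.525, 0.475
print(std_diff_binary(p1, p2))   # ~0.100
print(phi_equal_groups(p1, p2))  # ~0.050
```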
83.
Probability plots are often used to estimate the parameters of distributions. Using large-sample properties of the empirical distribution function and of order statistics, variance-stabilizing weights for weighted least-squares regression are derived. Weighted least-squares regression is then applied to the estimation of the parameters of the Weibull and Gumbel distributions. The weights are independent of the parameters of the distributions considered. Monte Carlo simulation shows that the weighted least-squares estimators consistently outperform the ordinary least-squares estimators, especially in small samples.
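As a sketch of the approach for the Weibull case: linearize the CDF, attach delta-method variance-stabilizing weights (which, as stated above, contain no unknown parameters), and run weighted least squares. The plotting positions and weight formula below are one standard choice, not necessarily the paper's exact ones.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 30
shape_true, scale_true = 2.0, 5.0
x = np.sort(scale_true * rng.weibull(shape_true, size=n))

# Weibull linearization: ln(-ln(1 - F(x))) = k*ln(x) - k*ln(lambda).
p = (np.arange(1, n + 1) - 0.5) / n          # plotting positions
y = np.log(-np.log(1.0 - p))

# Delta method: Var(y_i) ~ p_i / (n*(1-p_i)*ln(1-p_i)^2); the weights are
# the reciprocal and involve only the plotting positions.
w = (1.0 - p) * np.log(1.0 - p) ** 2 / p

# np.polyfit minimizes sum (w_i*(y_i - fit_i))^2, hence pass sqrt weights.
slope, intercept = np.polyfit(np.log(x), y, deg=1, w=np.sqrt(w))
shape_hat, scale_hat = slope, np.exp(-intercept / slope)
print(shape_hat, scale_hat)
```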
84.
In this article, it is shown that in panel data models the Hausman test (HT) statistic can be considerably refined using the bootstrap. An Edgeworth expansion shows that the coverage of the bootstrapped HT is second-order correct. The asymptotic and bootstrapped versions of the HT are also compared by Monte Carlo simulation. At the null hypothesis and a nominal size of 0.05, the bootstrapped HT reduces the coverage error of the asymptotic HT by 10–40% of nominal size; for nominal sizes less than or equal to 0.025, the coverage error reduction is between 30% and 80% of nominal size. For nonnull alternatives, the power of the asymptotic HT spuriously increases by over 70% of the correct power for nominal sizes less than or equal to 0.025; the bootstrapped HT reduces this overrejection to less than one fourth of its value. The advantages of the bootstrapped HT increase with the number of explanatory variables. Heteroscedasticity or serial correlation in the idiosyncratic part of the error does not hamper the advantages of the bootstrapped HT, provided a heteroscedasticity-robust version of the HT and the wild bootstrap are used. However, the power penalty is not negligible if a heteroscedasticity-robust approach is used in the homoscedastic panel data model.
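A schematic of the procedure, with a cluster wild bootstrap imposed under the null; `hausman_stat` and `fit_null` are placeholders for the user's (possibly heteroscedasticity-robust) HT statistic and random-effects fit, since the abstract does not pin down a single implementation.

```python
import numpy as np

def bootstrap_hausman(y, X, ids, hausman_stat, fit_null, B=999, seed=0):
    """Wild-bootstrap p-value for a panel Hausman test (HT).

    hausman_stat(y, X, ids) -> HT statistic (placeholder).
    fit_null(y, X, ids)     -> (fitted, resid) of the null (random-effects)
                               model (placeholder).
    ids is an integer array assigning each row to an individual 0..N-1.
    """
    rng = np.random.default_rng(seed)
    ht_obs = hausman_stat(y, X, ids)
    fitted, resid = fit_null(y, X, ids)

    ht_boot = np.empty(B)
    for b in range(B):
        # One Rademacher draw per individual keeps within-unit dependence
        # intact (a cluster wild bootstrap).
        signs = rng.choice([-1.0, 1.0], size=ids.max() + 1)
        y_star = fitted + signs[ids] * resid
        ht_boot[b] = hausman_stat(y_star, X, ids)

    p_value = (1 + np.sum(ht_boot >= ht_obs)) / (B + 1)
    return ht_obs, p_value
```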
85.
In this paper we provide a comprehensive Bayesian posterior analysis of trend determination in general autoregressive models. Multiple lag autoregressive models with fitted drifts and time trends as well as models that allow for certain types of structural change in the deterministic components are considered. We utilize a modified information matrix-based prior that accommodates stochastic nonstationarity, takes into account the interactions between long-run and short-run dynamics and controls the degree of stochastic nonstationarity permitted. We derive analytic posterior densities for all of the trend determining parameters via the Laplace approximation to multivariate integrals. We also address the sampling properties of our posteriors under alternative data generating processes by simulation methods. We apply our Bayesian techniques to the Nelson-Plosser macroeconomic data and various stock price and dividend data. Contrary to DeJong and Whiteman (1989a,b,c), we do not find that the data overwhelmingly favor the existence of deterministic trends over stochastic trends. In addition, we find evidence supporting Perron's (1989) view that some of the Nelson and Plosser data are best construed as trend stationary with a change in the trend function occurring at 1929.
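The Laplace step replaces a posterior integral by the Gaussian integral fitted at the posterior mode. A generic one-dimensional sketch of the device (not the authors' multivariate autoregressive setup):

```python
import numpy as np
from scipy.optimize import minimize_scalar

def laplace_log_integral(log_f, bounds=(-10.0, 10.0), eps=1e-5):
    """Laplace approximation to log( integral exp(log_f(t)) dt ):
    locate the mode, estimate the curvature by central differences,
    and integrate the fitted Gaussian analytically."""
    res = minimize_scalar(lambda t: -log_f(t), bounds=bounds, method='bounded')
    mode = res.x
    # Second derivative of log_f at the mode (central finite difference).
    h2 = (log_f(mode + eps) - 2 * log_f(mode) + log_f(mode - eps)) / eps ** 2
    return log_f(mode) + 0.5 * np.log(2 * np.pi / -h2)

# Sanity check on a standard normal log-density: integral is 1, log is 0.
log_norm = lambda t: -0.5 * t ** 2 - 0.5 * np.log(2 * np.pi)
print(laplace_log_integral(log_norm))   # ~0.0
```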
86.
John Haslett, Ronan Bradley, Peter Craig, Antony Unwin, Graham Wills, The American Statistician, 2013, 67(3): 234-242
We explore the application of dynamic graphics to the exploratory analysis of spatial data. We introduce a number of new tools and illustrate their use with prototype software developed at Trinity College, Dublin. These tools are used to examine local variability—anomalies—through plots of the data that display its marginal and multivariate distributions, through interactive smoothers, and through plots motivated by the spatial auto-covariance ideas implicit in the variogram. We regard these as alternative and linked views of the data. We conclude that the most important single view of the data is the Map View: all other views must be cross-referred to it, and the software must encourage this. The Map View can be enriched by overlaying other pertinent spatial information. We draw attention to the possibilities of one-to-many linking, and to the use of line objects to link pairs of data points. We also draw attention to parallels with work on Geographical Information Systems.
87.
Peter J. Smith, The American Statistician, 2013, 67(2): 217-218
It seems difficult to find a formula in the literature that relates moments to cumulants (and vice versa) and is useful in computational work rather than in an algebraic approach. Hence I present four very simple recursive formulas that translate moments to cumulants and vice versa in the univariate and multivariate situations.
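The univariate pair of recursions is short enough to state in code: with raw moments m_n and cumulants k_n (and m_0 = 1), m_n = sum_{j=1}^{n} C(n-1, j-1) k_j m_{n-j}, inverted term by term for the opposite direction. The implementation below is mine, checked against a normal distribution, whose cumulants beyond the second vanish.

```python
from math import comb

def cumulants_to_moments(kappa):
    """Raw moments m_1..m_n from cumulants k_1..k_n via
       m_n = sum_{j=1}^{n} C(n-1, j-1) * k_j * m_{n-j},  m_0 = 1."""
    n = len(kappa)
    m = [1.0]                                    # m[0] = m_0
    for r in range(1, n + 1):
        m.append(sum(comb(r - 1, j - 1) * kappa[j - 1] * m[r - j]
                     for j in range(1, r + 1)))
    return m[1:]

def moments_to_cumulants(m):
    """Inverse recursion: k_n = m_n - sum_{j=1}^{n-1} C(n-1, j-1) k_j m_{n-j}."""
    mm = [1.0] + list(m)
    kappa = []
    for r in range(1, len(m) + 1):
        s = sum(comb(r - 1, j - 1) * kappa[j - 1] * mm[r - j]
                for j in range(1, r))
        kappa.append(mm[r] - s)
    return kappa

# N(mu=1, sigma^2=2): cumulants (1, 2, 0, 0); e.g. the third raw moment
# is mu^3 + 3*mu*sigma^2 = 7.
print(cumulants_to_moments([1.0, 2.0, 0.0, 0.0]))  # [1.0, 3.0, 7.0, 25.0]
print(moments_to_cumulants([1.0, 3.0, 7.0, 25.0])) # [1.0, 2.0, 0.0, 0.0]
```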
88.
The quadratic discriminant function (QDF) is commonly used for the two-group classification problem when the covariance matrices in the two populations are substantially unequal. This procedure is optimal when both populations are multivariate normal with known means and covariance matrices. This study examined the robustness of the QDF to non-normality. Sampling experiments were conducted to estimate expected actual error rates for the QDF when sampling from a variety of non-normal distributions. Results indicated that the QDF was robust to non-normality except when the distributions were highly skewed, in which case relatively large deviations from optimality were observed. In all cases studied the average probabilities of misclassification were relatively stable, while the individual population error rates exhibited considerable variability.
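For reference, the QDF in its standard known-parameter form (the sampling experiments above plug estimated means and covariances into the same expression); a minimal sketch:

```python
import numpy as np

def qdf(x, mu1, cov1, mu2, cov2, prior1=0.5):
    """Quadratic discriminant score; classify to population 1 if positive.

    Q(x) = -0.5*ln(|S1|/|S2|)
           -0.5*[(x-mu1)' S1^{-1} (x-mu1) - (x-mu2)' S2^{-1} (x-mu2)]
           + ln(prior1 / (1 - prior1))
    """
    d1, d2 = x - mu1, x - mu2
    _, logdet1 = np.linalg.slogdet(cov1)
    _, logdet2 = np.linalg.slogdet(cov2)
    return (-0.5 * (logdet1 - logdet2)
            - 0.5 * (d1 @ np.linalg.solve(cov1, d1)
                     - d2 @ np.linalg.solve(cov2, d2))
            + np.log(prior1 / (1 - prior1)))

mu1, mu2 = np.array([0.0, 0.0]), np.array([2.0, 1.0])
cov1 = np.eye(2)
cov2 = np.array([[4.0, 1.0], [1.0, 2.0]])   # substantially unequal covariances
print(qdf(np.array([0.2, -0.1]), mu1, cov1, mu2, cov2) > 0)  # True -> group 1
```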
89.
Egbert A. van der Meulen, Communications in Statistics - Theory and Methods, 2013, 42(5): 699-708
The Fisher exact test has been unjustly dismissed by some as 'only conditional,' whereas it is unconditionally the uniformly most powerful test among all unbiased size-α tests (tests whose power never falls below the nominal level of significance α). The problem with this truly optimal test is that it requires randomization at the critical value(s) to be of size α. Obviously, in practice, one does not want to conclude that 'with probability x we have a statistically significant result.' Usually, the hypothesis is rejected only if the test statistic's outcome is more extreme than the critical value, which reduces the actual size considerably. The randomized unconditional Fisher exact test is constructed (using Neyman-structure arguments) by deriving a conditional randomized test that randomizes at critical values c(t) with probabilities γ(t), both of which depend on the total number of successes T (the complete sufficient statistic for the nuisance parameter, the common success probability) conditioned upon. In this paper, the Fisher exact test is approximated by deriving nonrandomized conditional tests whose critical region includes the critical value only if γ(t) > γ0, for a fixed threshold value γ0, such that the size of the unconditional modified test is, for all values of the nuisance parameter, smaller than but as close as possible to α. It will be seen that this greatly improves the size of the test compared with the conservative nonrandomized Fisher exact test. Size, power, and p-value comparisons with the (virtual) randomized Fisher exact test, the conservative nonrandomized Fisher exact test, Pearson's chi-square test, the more competitive mid-p value, McDonald's modification, and Boschloo's modification are performed under the assumption of two binomial samples.
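The conservatism under discussion is easy to exhibit: conditional on both margins, the first cell of the 2×2 table is hypergeometric, and the mid-p value counts the observed outcome with weight one half. A small illustration using standard definitions (scipy's conditional test is shown for comparison; the helper name is mine):

```python
from scipy.stats import hypergeom, fisher_exact

def fisher_midp_greater(table):
    """One-sided (greater) Fisher p-value and mid-p for a 2x2 table.

    Conditional on both margins the first cell is hypergeometric; the
    mid-p counts the observed outcome with weight 1/2.
    """
    (a, b), (c, d) = table
    M, n, N = a + b + c + d, a + b, a + c     # population, successes, draws
    dist = hypergeom(M, n, N)
    p_exact = dist.sf(a - 1)                  # P(X >= a)
    p_mid = dist.sf(a) + 0.5 * dist.pmf(a)    # P(X > a) + 0.5*P(X = a)
    return p_exact, p_mid

table = [[7, 3], [2, 8]]
print(fisher_midp_greater(table))
print(fisher_exact(table, alternative='greater'))  # scipy's conditional test
```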
90.
Peter W.M. John, Communications in Statistics - Theory and Methods, 2013, 42(6): 1995-2001
Kageyama and Mohan (1984) presented three methods of constructing new incomplete block designs from balanced incomplete block designs. They raise questions about the designs that arise from each of their methods; these questions are answered here. Another series of group-divisible designs is derived as a special case of their second method.