Search results: 10,000 records found (search time: 0 ms).
91.
Fang Li, Communications in Statistics: Theory and Methods, 2013, 42(9): 1404-1421
This article discusses the problem of testing the equality of two nonparametric autoregressive functions against one-sided alternatives. The heteroscedastic errors and stationary densities of the two independent, strong-mixing, strictly stationary time series may differ. The article adapts the idea of using the sum of quasi-residuals to construct the test and derives its asymptotic null distribution. It also shows that the test is consistent against general alternatives and obtains its limiting distribution under a sequence of local alternatives. A Monte Carlo simulation is then conducted to study the finite-sample level and power of the test at selected alternatives. We also compare the test with an existing lag-matched test, both theoretically and by Monte Carlo experiments.
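The quasi-residual idea can be sketched as follows: fit the autoregression function of one series with a kernel smoother and evaluate the other series' lagged residuals against it. This is a minimal illustration of the ingredient, not the article's test statistic; the smoother, the simulated model, the bandwidth, and all names here are assumptions for the sketch.

```python
import numpy as np

rng = np.random.default_rng(5)

def nw_autoreg(x_eval, series, h):
    """Nadaraya-Watson estimate of the autoregression function
    m(x) = E[X_t | X_{t-1} = x], fitted from one observed series."""
    lagged, resp = series[:-1], series[1:]
    w = np.exp(-0.5 * ((x_eval[:, None] - lagged[None, :]) / h) ** 2)
    return (w * resp).sum(axis=1) / w.sum(axis=1)

def simulate(n, f, sd=0.3):
    """Simulate X_t = f(X_{t-1}) + noise, a stationary nonlinear AR(1) series."""
    x = np.zeros(n)
    for t in range(1, n):
        x[t] = f(x[t - 1]) + sd * rng.normal()
    return x

f = lambda u: 0.5 * np.tanh(u)
x1, x2 = simulate(1000, f), simulate(1000, f)   # equal AR functions: null holds

# Quasi-residuals: residuals of series 1 against the function fitted on series 2.
quasi = x1[1:] - nw_autoreg(x1[:-1], x2, h=0.2)
print(quasi.mean())                              # near zero under the null
```

Under the alternative (different autoregression functions), the quasi-residuals pick up the gap between the two functions, which is what a sum-based statistic detects.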
92.
Despite the simplicity of the Bernoulli process, developing good confidence interval procedures for its parameter, the probability of success p, is deceptively difficult. The binary data yield a discrete number of successes from a discrete number of trials, n. This discreteness results in actual coverage probabilities that oscillate with n for fixed values of p (and with p for fixed n). Moreover, this oscillation necessitates a large sample size to guarantee a good coverage probability when p is close to 0 or 1. It is well known that the Wilson procedure is superior to many existing procedures because it is less sensitive to p than the other procedures, and is therefore less costly. The procedures proposed in this article work as well as the Wilson procedure when 0.1 ≤ p ≤ 0.9, and are even less sensitive (i.e., more robust) than the Wilson procedure when p is close to 0 or 1. Specifically, when the nominal coverage probability is 0.95, the Wilson procedure requires a sample size of 1,021 to guarantee that the coverage probabilities stay above 0.92 for any 0.001 ≤ min{p, 1 − p} < 0.01. By contrast, our procedures guarantee the same coverage probabilities but need a sample size of only 177, without increasing either the expected interval width or the standard deviation of the interval width.
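The exact coverage computation behind such comparisons can be sketched with the standard Wilson score interval; the particular n and p values below are illustrative, not the article's.

```python
import math

def wilson_interval(k, n, z=1.96):
    """Wilson score interval for a binomial proportion from k successes in n trials."""
    phat = k / n
    denom = 1 + z * z / n
    center = (phat + z * z / (2 * n)) / denom
    half = (z / denom) * math.sqrt(phat * (1 - phat) / n + z * z / (4 * n * n))
    return center - half, center + half

def coverage(n, p, z=1.96):
    """Exact coverage: total binomial mass on the k whose interval contains p."""
    total = 0.0
    for k in range(n + 1):
        lo, hi = wilson_interval(k, n, z)
        if lo <= p <= hi:
            total += math.comb(n, k) * p**k * (1 - p) ** (n - k)
    return total

# Coverage oscillates with n for fixed p (and with p for fixed n).
for n in (20, 21, 50):
    print(n, round(coverage(n, 0.5), 4))
```

Sweeping p toward 0 or 1 in this loop reproduces the sensitivity that the proposed procedures are designed to dampen.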
93.
Mingliang Li, Econometric Reviews, 2013, 32(5): 529-556
In this paper, I study the timing of high school dropout decisions using data from High School and Beyond. I propose a Bayesian proportional hazard analysis framework that accounts for a piecewise-constant baseline hazard, the time-varying covariate of dropout eligibility, and individual-, school-, and state-level random effects in the dropout hazard. I find that students who have reached their state's compulsory school attendance age are more likely to drop out of high school than those who have not. Regarding school quality effects, a student is more likely to drop out if the school she attends has a higher pupil-teacher ratio or lower district expenditure per pupil. An interesting finding that accompanies the empirical results is that failure to account for time-varying heterogeneity in the hazard results, in this application, in upward biases in the duration dependence estimates. Moreover, these upward biases are comparable in magnitude to the well-known downward biases in the duration dependence estimates that arise when time-invariant heterogeneity in the hazard is left unmodeled.
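The piecewise-constant baseline hazard ingredient can be sketched directly: the survival function is the exponential of minus the accumulated hazard over the pieces. The cut points and hazard values below are illustrative, not estimates from the paper.

```python
import numpy as np

def survival(t, cuts, hazards):
    """S(t) = exp(-H(t)) for a piecewise-constant hazard taking the value
    hazards[j] on [cuts[j], cuts[j+1]); the last piece extends to infinity."""
    edges = list(cuts) + [np.inf]
    H = sum(h * max(0.0, min(t, edges[j + 1]) - edges[j])
            for j, h in enumerate(hazards))
    return np.exp(-H)

# Illustrative baseline: hazard 0.1 on [0,1), 0.2 on [1,2), 0.3 on [2, inf).
s = survival(1.5, cuts=[0.0, 1.0, 2.0], hazards=[0.1, 0.2, 0.3])
print(s)   # exp(-(0.1*1 + 0.2*0.5)) = exp(-0.2)
```

Time-varying covariates such as dropout eligibility enter this setup by letting the hazard value on each piece depend on the covariate's value over that interval.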
94.
The article discusses alternative Research Assessment Measures (RAM), with an emphasis on the Thomson Reuters ISI Web of Science database (hereafter ISI); some analysis and comparisons are also made with data from the SciVerse Scopus database. The various RAM that are calculated annually or updated daily are defined and analyzed, including the classic 2-year impact factor (2YIF), 2YIF without journal self-citations (2YIF*), 5-year impact factor (5YIF), Immediacy (or zero-year impact factor, 0YIF), Impact Factor Inflation (IFI), Self-citation Threshold Approval Rating (STAR), Eigenfactor score, Article Influence, C3PO (Citation Performance Per Paper Online), h-index, Zinfluence, and PI-BETA (Papers Ignored - By Even The Authors). The RAM are analyzed for 10 leading econometrics journals and 4 leading statistics journals. The application to econometrics can serve as a template for other areas of economics and other scientific disciplines, and as a benchmark for newer journals in a range of disciplines. In addition to evaluating high-quality research in leading econometrics journals, the paper compares econometrics and statistics through the alternative RAM, highlighting their similarities and differences. Several RAM are found to capture similar performance characteristics for the leading econometrics and statistics journals, whereas the new PI-BETA criterion is not highly correlated with any of the other RAM and hence conveys additional information. The paper also highlights major research areas in leading econometrics journals, discusses likely future uses of RAM, and shows that the harmonic mean of the 13 RAM provides more robust journal rankings than reliance solely on 2YIF.
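Why a harmonic mean gives more robust rankings than a single measure can be seen in a toy example: the harmonic mean is pulled down by any one weak score, so a journal cannot ride a single inflated 2YIF to the top. The journal names and scores below are hypothetical, not the article's data.

```python
from statistics import harmonic_mean

# Hypothetical scores on three RAM for three journals (illustrative only):
ram = {
    "Journal A": [2.1, 1.8, 0.9],
    "Journal B": [3.5, 0.4, 0.7],   # a high 2YIF alone cannot rescue weak scores
    "Journal C": [1.9, 1.7, 1.6],   # consistently solid across all RAM
}
ranked = sorted(ram, key=lambda j: harmonic_mean(ram[j]), reverse=True)
print(ranked)   # ['Journal C', 'Journal A', 'Journal B']
```

Ranking on the first score alone would put Journal B first; the harmonic mean demotes it, which is the robustness property the article exploits with its 13 RAM.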
95.
This paper investigates nonparametric estimation of a density on [0, 1]. The kernel estimator of a density on [0, 1] is known to be sensitive to both the bandwidth and the kernel. This paper proposes a unified Bayesian framework for choosing both the bandwidth and the kernel function. In a simulation study, the Bayesian bandwidth estimator performed better than its competitors, and kernel estimators were sensitive to the choice of kernel and to the shapes of the population densities on [0, 1]. The simulation and empirical results demonstrate that the proposed methods can improve the way probability densities on [0, 1] are presently estimated.
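As a baseline for what such bandwidth/kernel selection improves upon, here is a minimal kernel estimator on [0, 1] with boundary reflection; the Gaussian kernel, the Beta(2, 5) example density, and the bandwidth h = 0.05 are assumptions for the sketch, not the paper's choices.

```python
import numpy as np

def kde01(x, data, h):
    """Gaussian kernel density estimate on [0, 1] with boundary reflection.
    Reflecting the sample about 0 and 1 reduces the boundary bias that makes
    the plain kernel estimator unreliable near the endpoints."""
    pts = np.concatenate([data, -data, 2.0 - data])   # reflect about both ends
    u = (x[:, None] - pts[None, :]) / h
    return np.exp(-0.5 * u**2).sum(axis=1) / (len(data) * h * np.sqrt(2 * np.pi))

rng = np.random.default_rng(0)
sample = rng.beta(2, 5, size=500)             # a density supported on [0, 1]
grid = np.linspace(0.0, 1.0, 201)
fhat = kde01(grid, sample, h=0.05)
mass = np.sum((fhat[:-1] + fhat[1:]) / 2) * (grid[1] - grid[0])
print(round(mass, 3))                          # close to 1 on [0, 1]
```

Re-running this with different h and kernels makes the sensitivity the paper addresses visible, which is exactly the tuning problem a Bayesian selection framework automates.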
96.
A quantile autoregressive model is a useful extension of classical autoregressive models, as it can capture the influence of conditioning variables on the location, scale, and shape of the response distribution. However, at the extreme tails, the standard quantile autoregression estimator is often unstable due to data sparsity. In this article, assuming quantile autoregressive models, we develop a new estimator for extreme conditional quantiles of time series data based on extreme value theory. We build the connection between the second-order conditions for the autoregression coefficients and those for the conditional quantile functions, and establish the asymptotic properties of the proposed estimator. The finite-sample performance of the proposed method is illustrated through a simulation study and an analysis of U.S. retail gasoline prices.
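The extreme value theory step can be sketched in its unconditional form: estimate the tail index from intermediate order statistics, then extrapolate an intermediate quantile out to the extreme level. This Hill/Weissman-style sketch is an assumption about the EVT ingredient only; the article combines such extrapolation with quantile autoregression, which is not reproduced here.

```python
import numpy as np

def extreme_quantile(x, tau, k):
    """Weissman-type extrapolation: estimate the tail index with the Hill
    estimator from the k largest observations, then push the intermediate
    sample quantile at level 1 - k/n out to the extreme level tau."""
    xs = np.sort(x)
    n = len(xs)
    gamma = np.mean(np.log(xs[n - k:]) - np.log(xs[n - k - 1]))  # Hill estimator
    return xs[n - k - 1] * (k / (n * (1.0 - tau))) ** gamma

rng = np.random.default_rng(1)
x = rng.pareto(3.0, size=5000) + 1.0     # standard Pareto(3): true q(0.999) = 10
q999 = extreme_quantile(x, 0.999, k=200)
print(round(q999, 2))                    # roughly 10, despite ~5 exceedances
```

The payoff is stability: the plain 0.999 sample quantile rests on about five observations here, while the extrapolation borrows strength from the 200 largest.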
97.
98.
Let γ(t) be the residual life at time t of the renewal process {A(t), t > 0}, where F is the common distribution function of the inter-arrival times. In this article we prove that if Var(γ(t)) is constant in t, then F must be exponential when F is continuous and geometric when F is discrete. An application and a related example are also given.
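The forward direction of this characterization is easy to see by simulation: with exponential inter-arrivals, memorylessness makes γ(t) ~ Exp(λ) at every t, so Var(γ(t)) is constant. The rate and the time points below are illustrative choices, assuming λ = 1.

```python
import numpy as np

rng = np.random.default_rng(2)

def residual_life(t, rate=1.0):
    """Draw gamma(t) for a renewal process with Exp(rate) inter-arrival times:
    run arrivals past time t and return the overshoot."""
    s = 0.0
    while s <= t:
        s += rng.exponential(1.0 / rate)
    return s - t

# By memorylessness, gamma(t) ~ Exp(1) for every t, so Var(gamma(t)) = 1.
for t in (0.5, 2.0, 5.0):
    g = np.array([residual_life(t) for _ in range(10000)])
    print(t, round(g.var(), 3))
```

The article's contribution is the converse: constancy of this variance forces F to be exponential (continuous case) or geometric (discrete case).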
99.
Howard D. Bondell, Lexin Li, Journal of the Royal Statistical Society: Series B (Statistical Methodology), 2009, 71(1): 287-299
Summary. The family of inverse regression estimators recently proposed by Cook and Ni has proven effective in dimension reduction by transforming the high-dimensional predictor vector to its low-dimensional projections. We propose a general shrinkage estimation strategy for the entire inverse regression estimation family that is capable of simultaneous dimension reduction and variable selection. We demonstrate that the new estimators achieve consistency in variable selection without requiring any traditional model, while retaining root-n estimation consistency of the dimension reduction basis. We also show the effectiveness of the new estimators through both simulation and real data analysis. 
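A minimal sketch of the inverse regression family's prototypical member, sliced inverse regression, shows what the shrinkage strategy builds on: slice on the response, average the whitened predictors within slices, and eigen-decompose the slice-mean covariance. The simulated single-index model and all tuning values are illustrative assumptions; the Bondell-Li shrinkage layer is not reproduced here.

```python
import numpy as np

def sir_direction(X, y, n_slices=10):
    """Basic sliced inverse regression: returns a unit vector spanning the
    estimated one-dimensional dimension reduction subspace."""
    n, p = X.shape
    Xc = X - X.mean(axis=0)
    w, V = np.linalg.eigh(Xc.T @ Xc / n)
    root_inv = V @ np.diag(w ** -0.5) @ V.T        # Sigma^{-1/2}
    Z = Xc @ root_inv                               # whitened predictors
    M = np.zeros((p, p))
    for s in np.array_split(np.argsort(y), n_slices):
        m = Z[s].mean(axis=0)                       # slice mean of Z
        M += (len(s) / n) * np.outer(m, m)
    top = np.linalg.eigh(M)[1][:, -1]               # leading eigenvector
    b = root_inv @ top                              # back to the original scale
    return b / np.linalg.norm(b)

rng = np.random.default_rng(3)
X = rng.normal(size=(2000, 5))
beta = np.array([1.0, 1.0, 0.0, 0.0, 0.0]) / np.sqrt(2.0)
y = (X @ beta) ** 3 + 0.1 * rng.normal(size=2000)
b = sir_direction(X, y)
print(round(abs(b @ beta), 3))   # close to 1: the direction is recovered
```

Note that all five coordinates of `b` are typically nonzero even though three predictors are irrelevant; driving those loadings exactly to zero is what the proposed shrinkage adds.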
100.
In many situations the diagnostic decision is not limited to a binary choice. Binary statistical tools such as the receiver operating characteristic (ROC) curve and the area under the ROC curve (AUC) need to be extended to address the three-category classification problem. Previous authors have suggested various ways to model an extension of the AUC, but not the ROC surface itself, and only simple parametric approaches have been proposed, under the assumption that the test results in all categories follow normal distributions. We study estimation methods for three-dimensional ROC surfaces with nonparametric and semiparametric estimators. Asymptotic results are provided as a basis for statistical inference. Simulation studies are performed to assess the validity of the proposed methods in finite samples. As an illustration, we consider an Alzheimer's disease example from a clinical study in the US. The nonparametric and semiparametric modeling approaches for three-way ROC analysis readily generalize to diagnostic problems with more than three classes.
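A common summary of the three-dimensional ROC surface is its volume (VUS), which has a simple nonparametric estimate: the proportion of correctly ordered score triples, one from each class. This sketch uses that standard estimator with simulated normal scores loosely echoing a three-stage diagnosis; the class labels, means, and sample sizes are illustrative assumptions, not the clinical study's data.

```python
import numpy as np

def vus(x1, x2, x3):
    """Nonparametric volume under the ROC surface: the proportion of
    class-1/class-2/class-3 score triples in increasing order, estimating
    P(X1 < X2 < X3). Chance level for three classes is 1/6."""
    lt12 = x1[:, None, None] < x2[None, :, None]
    lt23 = x2[None, :, None] < x3[None, None, :]
    return float(np.mean(lt12 & lt23))

rng = np.random.default_rng(4)
x1 = rng.normal(0.0, 1.0, 200)   # e.g. cognitively normal
x2 = rng.normal(1.5, 1.0, 200)   # mild impairment
x3 = rng.normal(3.0, 1.0, 200)   # Alzheimer's disease
v = vus(x1, x2, x3)
print(round(v, 3))               # well above the chance level 1/6
```

The same triple-comparison idea extends to k classes by counting fully ordered k-tuples, which is why the three-way analysis generalizes readily.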