Full-text access type
Paid full text | 8501 articles |
Free | 704 articles |
Free within China | 25 articles |
Subject classification
Management | 1257 articles |
Ethnology | 12 articles |
Talent studies | 1 article |
Demography | 64 articles |
Collected works | 120 articles |
Theory and methodology | 869 articles |
General | 1313 articles |
Sociology | 1659 articles |
Statistics | 3935 articles |
Publication year
2024 | 1 article |
2023 | 22 articles |
2022 | 24 articles |
2021 | 120 articles |
2020 | 227 articles |
2019 | 436 articles |
2018 | 342 articles |
2017 | 599 articles |
2016 | 463 articles |
2015 | 460 articles |
2014 | 505 articles |
2013 | 1706 articles |
2012 | 767 articles |
2011 | 448 articles |
2010 | 425 articles |
2009 | 320 articles |
2008 | 371 articles |
2007 | 276 articles |
2006 | 241 articles |
2005 | 239 articles |
2004 | 215 articles |
2003 | 172 articles |
2002 | 151 articles |
2001 | 184 articles |
2000 | 138 articles |
1999 | 62 articles |
1998 | 40 articles |
1997 | 56 articles |
1996 | 31 articles |
1995 | 28 articles |
1994 | 20 articles |
1993 | 20 articles |
1992 | 19 articles |
1991 | 9 articles |
1990 | 13 articles |
1989 | 7 articles |
1988 | 11 articles |
1987 | 6 articles |
1986 | 6 articles |
1985 | 7 articles |
1984 | 10 articles |
1983 | 9 articles |
1982 | 6 articles |
1980 | 5 articles |
1979 | 3 articles |
1978 | 2 articles |
1977 | 3 articles |
1976 | 1 article |
1975 | 4 articles |
Sort order: 9230 results found; search took 13 ms
41.
1. Introduction. Owing to severe upheavals in the world economic structure, such as financial crises and policy changes, structural breaks occur from time to time in economic time series. Structural breaks in an economic process affect the results of cointegration analysis and cause many representative tests in the cointegration methodology to lose their original power, e.g. the unit root test [the unit root may drift (the value of the characteristic root is unstable), and the unit root test statistic may also…
42.
An empirical study of the effectiveness of the monetary policy transmission mechanism. Cited: 1 (self-citations: 2, citations by others: 1)
The paper takes the money supply and the year-end loan balance of financial institutions as intermediate indicator variables of monetary policy, and economic growth as the final target variable. Applying dynamic econometric theory, and after unit root, cointegration, and causality tests on the variables, an error correction model is built with the Engle-Granger two-step method on data for 1978-2004. The study shows that monetary policy transmission exhibits a clear time lag, and that the growth rate of the money supply has a more significant effect on the economic growth rate than the growth rate of the loan balance.
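As an illustration of the Engle-Granger two-step procedure mentioned in this abstract, here is a hedged numpy sketch on synthetic cointegrated data (the simulated series and coefficient values are illustrative, not from the paper; the unit-root pretests are omitted):

```python
import numpy as np

def eg_two_step(y, x):
    """Engle-Granger two-step: (1) OLS cointegrating regression,
    (2) error-correction model on first differences with the lagged residual."""
    # Step 1: cointegrating regression  y_t = a + b*x_t + u_t
    X = np.column_stack([np.ones_like(x), x])
    a, b = np.linalg.lstsq(X, y, rcond=None)[0]
    u = y - (a + b * x)                          # equilibrium error
    # Step 2: ECM  dy_t = c + gamma*dx_t + alpha*u_{t-1} + e_t
    dy, dx, ulag = np.diff(y), np.diff(x), u[:-1]
    Z = np.column_stack([np.ones_like(dx), dx, ulag])
    _, gamma, alpha = np.linalg.lstsq(Z, dy, rcond=None)[0]
    return b, gamma, alpha                       # alpha < 0 => error correction

rng = np.random.default_rng(0)
x = np.cumsum(rng.normal(size=500))              # an I(1) driving series
y = 2.0 * x + rng.normal(size=500)               # cointegrated with x
b, gamma, alpha = eg_two_step(y, x)
```

A negative `alpha` is the error-correction term: deviations from the long-run relation are pulled back toward equilibrium, which is the mechanism behind the time lag the abstract describes.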
43.
WEIGHTED SUMS OF NEGATIVELY ASSOCIATED RANDOM VARIABLES. Cited: 2 (self-citations: 0, citations by others: 2)
In this paper, we establish strong laws for weighted sums of negatively associated (NA) random variables which have a higher-order moment condition. Some results of Bai Z.D. & Cheng P.E. (2000) [Marcinkiewicz strong laws for linear statistics. Statist. and Probab. Lett. 43, 105–112] and Sung S.K. (2001) [Strong laws for weighted sums of i.i.d. random variables. Statist. and Probab. Lett. 52, 413–419] are sharpened and extended from the independent identically distributed case to the NA setting. Also, one of the results of Li D.L. et al. (1995) [Complete convergence and almost sure convergence of weighted sums of random variables. J. Theoret. Probab. 8, 49–76] is complemented and extended.
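As a quick numerical illustration of a strong law for weighted sums of NA variables (not the paper's sharpened results), one can use the fact that antithetic pairs (U, 1−U) of a uniform variable are negatively associated:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000
u = rng.random(n // 2)
x = np.concatenate([u, 1.0 - u])   # antithetic pairs (U, 1-U) are NA
w = rng.random(n)                  # bounded positive weights
wavg = np.sum(w * x) / np.sum(w)   # weighted average converges to E[X] = 0.5
```

Despite the dependence within pairs, the weighted average still settles at the common mean, which is the qualitative content of such strong laws.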
44.
Four classes of alternatives are considered: G = F^k (k > 1); G = 1 − (1−F)^k (k < 1); G = F^k (k < 1); and G = 1 − (1−F)^k (k > 1), where F and G are two continuous cumulative distribution functions. If an optimal precedence test (one with the maximal power) is determined for one of these four classes, the optimal tests for the other classes of alternatives can be derived. An application is given using the results of Lin and Sukhatme (1992), who derived the best precedence test for testing the null hypothesis that the lifetimes of two types of items on test have the same distribution. The test has maximum power for fixed k in the class of alternatives G = 1 − (1−F)^k, with k < 1. Best precedence tests for the other three classes of Lehmann-type alternatives are derived using their results. Finally, a comparison of precedence tests with Wilcoxon's two-sample test is presented.
Received: February 22, 1999; revised version: June 7, 2000
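A precedence test of the kind discussed above can be sketched as follows (a generic count statistic, not Lin and Sukhatme's optimal version; the Monte Carlo check uses the null expectation E[W] = n·r/(m+1) for m X-items and n Y-items):

```python
import numpy as np

def precedence_statistic(x, y, r):
    """Count of Y failures observed before the r-th X failure."""
    threshold = np.sort(x)[r - 1]        # r-th order statistic of X
    return int(np.sum(y < threshold))

rng = np.random.default_rng(2)
m = n = 20
r = 10
# Under H0 both samples share the same lifetime distribution
sims = np.array([
    precedence_statistic(rng.exponential(1.0, m), rng.exponential(1.0, n), r)
    for _ in range(2000)
])
mean_w = sims.mean()                     # near n*r/(m+1) = 200/21 under H0
```

In practice one rejects when the count is extreme relative to its null distribution; the optimal choice of r is what the paper's power results address.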
45.
The authors consider dimensionality reduction methods used for prediction, such as reduced rank regression, principal component regression and partial least squares. They show how it is possible to obtain intermediate solutions by estimating simultaneously the latent variables for the predictors and for the responses. They obtain a continuum of solutions that goes from reduced rank regression to principal component regression via maximum likelihood and least squares estimation. Different solutions are compared using simulated and real data.
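Two of the methods on that continuum can be sketched in plain numpy (an illustrative PCR and PLS1 implementation, not the authors' continuum estimator); with the full number of components both collapse to ordinary least squares:

```python
import numpy as np

def pcr_coef(X, y, k):
    """Principal component regression with k components (centered data)."""
    Xc, yc = X - X.mean(0), y - y.mean()
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    V = Vt[:k].T                          # top-k right singular vectors
    gamma = np.linalg.lstsq(Xc @ V, yc, rcond=None)[0]
    return V @ gamma                      # coefficients in the original space

def pls_coef(X, y, k):
    """PLS1 regression coefficients via the classical deflation algorithm."""
    Xc, yc = X - X.mean(0), y - y.mean()
    Xd, yd = Xc.copy(), yc.copy()
    W, P, q = [], [], []
    for _ in range(k):
        w = Xd.T @ yd
        w = w / np.linalg.norm(w)
        t = Xd @ w
        tt = t @ t
        p = Xd.T @ t / tt
        qh = (yd @ t) / tt
        Xd -= np.outer(t, p)              # deflate predictors
        yd -= qh * t                      # deflate response
        W.append(w); P.append(p); q.append(qh)
    W, P, q = np.array(W).T, np.array(P).T, np.array(q)
    return W @ np.linalg.solve(P.T @ W, q)

rng = np.random.default_rng(6)
X = rng.normal(size=(100, 4))
X[:, 3] = X[:, 0] + 0.5 * rng.normal(size=100)   # a near-collinear column
y = X @ np.array([1.0, 2.0, -1.0, 0.5]) + rng.normal(size=100)
b_ols = np.linalg.lstsq(X - X.mean(0), y - y.mean(), rcond=None)[0]
b_pcr = pcr_coef(X, y, 4)
b_pls = pls_coef(X, y, 4)
```

With fewer components than the rank, the two methods choose their latent variables differently (PCR from X alone, PLS from the X–y covariance), which is exactly the gap the paper's intermediate solutions interpolate.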
46.
John Whitehead, Susan Todd & W. J. Hall. Journal of the Royal Statistical Society, Series B (Statistical Methodology), 2000, 62(4): 731–745
In sequential studies, formal interim analyses are usually restricted to a consideration of a single null hypothesis concerning a single parameter of interest. Valid frequentist methods of hypothesis testing and of point and interval estimation for the primary parameter have already been devised for use at the end of such a study. However, the completed data set may warrant a more detailed analysis, involving the estimation of parameters corresponding to effects that were not used to determine when to stop, and yet correlated with those that were. This paper describes methods for setting confidence intervals for secondary parameters in a way which provides the correct coverage probability in repeated frequentist realizations of the sequential design used. The method assumes that information accumulates on the primary and secondary parameters at proportional rates. This requirement will be valid in many potential applications, but only in limited situations in survival analysis.
47.
Bootstrap tests: how many bootstraps? Cited: 3 (self-citations: 0, citations by others: 3)
In practice, bootstrap tests must use a finite number of bootstrap samples. This means that the outcome of the test will depend on the sequence of random numbers used to generate the bootstrap samples, and it necessarily results in some loss of power. We examine the extent of this power loss and propose a simple pretest procedure for choosing the number of bootstrap samples so as to minimize experimental randomness. Simulation experiments suggest that this procedure will work very well in practice.
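A minimal version of such a finite-B bootstrap test (a generic one-sample t-type statistic, not the paper's pretest procedure; choosing B so that α(B+1) is an integer, e.g. B = 199 or 999 at α = 0.05, is common practice in this literature):

```python
import numpy as np

def boot_pvalue(data, stat, B, rng):
    """Bootstrap p-value with a finite number B of bootstrap samples:
    p = (1 + #{tau*_b >= tau_hat}) / (B + 1)."""
    tau_hat = stat(data)
    centered = data - data.mean()      # impose H0: mean = 0
    count = 0
    for _ in range(B):
        sample = rng.choice(centered, size=data.size, replace=True)
        if stat(sample) >= tau_hat:
            count += 1
    return (1 + count) / (B + 1)

# |t|-statistic for H0: mean = 0
stat = lambda d: abs(d.mean()) / (d.std(ddof=1) / np.sqrt(d.size))
rng = np.random.default_rng(3)
data = rng.normal(0.0, 1.0, size=30)   # H0 is true here
p = boot_pvalue(data, stat, B=199, rng=rng)
```

Rerunning this with a different random seed changes `p` slightly; that seed-dependence is exactly the experimental randomness the paper quantifies and seeks to minimize.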
48.
We give a critical synopsis of classical and recent tests for Poissonity, our emphasis being on procedures which are consistent against general alternatives. Two classes of weighted Cramér–von Mises type test statistics, based on the empirical probability generating function process, are studied in more detail. Both of them generalize already known test statistics by introducing a weighting parameter, thus providing more flexibility with regard to power against specific alternatives. In both cases, we prove convergence in distribution of the statistics under the null hypothesis in the setting of a triangular array of rowwise independent and identically distributed random variables as well as consistency of the corresponding test against general alternatives. Therefore, a sound theoretical basis is provided for the parametric bootstrap procedure, which is applied to obtain critical values in a large-scale simulation study. Each of the tests considered in this study, when implemented via the parametric bootstrap method, maintains a nominal level of significance very closely, even for small sample sizes. The procedures are applied to four well-known data sets.
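The general shape of such a test can be sketched as follows (a simplified Riemann-sum version of a weighted Cramér–von Mises statistic on the empirical pgf, with an illustrative weight exponent `a`; not the authors' exact statistics), together with the parametric bootstrap for the p-value:

```python
import numpy as np

def pgf_statistic(x, a=0.0, grid=201):
    """T = n * int_0^1 (psi_n(t) - exp(lam*(t-1)))^2 * t^a dt, where
    psi_n is the empirical pgf and lam the sample mean (Riemann sum)."""
    n, lam = x.size, x.mean()
    t = np.linspace(0.0, 1.0, grid)
    psi_n = np.mean(t[:, None] ** x[None, :], axis=1)   # empirical pgf
    psi_0 = np.exp(lam * (t - 1.0))                     # fitted Poisson pgf
    dt = t[1] - t[0]
    return n * np.sum((psi_n - psi_0) ** 2 * t ** a) * dt

def poisson_bootstrap_pvalue(x, B=200, rng=None):
    """Parametric bootstrap: resample from Poisson(lam-hat)."""
    rng = np.random.default_rng() if rng is None else rng
    t_obs, lam = pgf_statistic(x), x.mean()
    t_boot = np.array([pgf_statistic(rng.poisson(lam, x.size)) for _ in range(B)])
    return float(np.mean(t_boot >= t_obs))

rng = np.random.default_rng(7)
p_null = poisson_bootstrap_pvalue(rng.poisson(2.0, 100), rng=rng)
p_alt = poisson_bootstrap_pvalue(rng.geometric(0.3, 100), rng=rng)  # overdispersed
```

Because the null distribution of T depends on the estimated Poisson mean, critical values come from the parametric bootstrap rather than from tables, mirroring the procedure the abstract justifies.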
49.
This paper considers the stratified proportional hazards model with a focus on the assessment of stratum effects. The assessment of such effects is often of interest, for example, in clinical trials. In this case, two relevant tests are the test of stratum interaction with covariates and the test of stratum interaction with baseline hazard functions. For the test of stratum interaction with covariates, one can use the partial likelihood method (Kalbfleisch and Prentice, 1980; Lin, 1994). For the test of stratum interaction with baseline hazard functions, however, there seems to be no formal test available. We consider this problem and propose a class of nonparametric tests. The asymptotic distributions of the tests are derived using the martingale theory. The proposed tests can also be used for survival comparisons which need to be adjusted for covariate effects. The method is illustrated with data from a lung cancer clinical trial.
50.
Partial least squares regression (PLS) is one method to estimate parameters in a linear model when predictor variables are nearly collinear. One way to characterize PLS is in terms of the scaling (shrinkage or expansion) along each eigenvector of the predictor correlation matrix. This characterization is useful in providing a link between PLS and other shrinkage estimators, such as principal components regression (PCR) and ridge regression (RR), thus facilitating a direct comparison of PLS with these methods. This paper gives a detailed analysis of the shrinkage structure of PLS, and several new results are presented regarding the nature and extent of shrinkage. 
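The eigenvector-wise scaling can be made concrete for one-component PLS, which has the closed form beta_1 = s (s's)/(s' X'X s) with s = X'y (a hedged numpy sketch, not the paper's general analysis): the factors relative to OLS exceed 1 along large eigenvalues (expansion) and fall below 1 along small ones (shrinkage).

```python
import numpy as np

rng = np.random.default_rng(5)
n, p = 200, 4
X = rng.normal(size=(n, p)) @ np.diag([3.0, 2.0, 1.0, 0.5])  # spread eigenvalues
y = X @ np.array([1.0, -1.0, 0.5, 2.0]) + rng.normal(size=n)
Xc, yc = X - X.mean(0), y - y.mean()

# One-component PLS estimator: beta = s * (s's)/(s' X'X s), with s = X'y
s = Xc.T @ yc
A = Xc.T @ Xc
beta_pls = s * (s @ s) / (s @ A @ s)
beta_ols = np.linalg.solve(A, s)

# Express both in the eigenbasis of X'X and take component-wise ratios:
# f_i = lam_i * (s's)/(s'As), a scaling factor along each eigenvector
lam, V = np.linalg.eigh(A)
f = (V.T @ beta_pls) / (V.T @ beta_ols)
```

Since s'As/s's is a weighted average of the eigenvalues, eigendirections above that average get factors greater than 1, which is the expansion behavior that distinguishes PLS from the pure shrinkage of PCR and ridge regression.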