141.
Experiments in which very few units are measured many times sometimes present particular difficulties. Interest often centers on simple location shifts between two treatment groups, but appropriate modeling of the error distribution can be challenging. For example, normality may be difficult to verify, or a single transformation stabilizing variance or improving normality for all units and all measurements may not exist. We propose an analysis of two-sample repeated measures data based on the permutation distribution of units. This provides a distribution-free alternative to standard analyses. The analysis includes testing, estimation and confidence intervals. By assuming a certain structure in the location shift model, the dimension of the problem is reduced by analyzing linear combinations of the marginal statistics. Recently proposed algorithms for computing two-sample permutation distributions require only a few seconds for experiments having as many as 100 units and any number of repeated measures. The test has high asymptotic efficiency and good power relative to tests based on the normal distribution. Since the computational burden is minimal, approximation of the permutation distribution is unnecessary.
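The idea of permuting unit labels after reducing each unit's repeated measures to a single summary can be sketched as follows. This is a minimal illustration, assuming each unit is summarised by the mean of its repeated measures and the test statistic is the difference in group means; it is not the authors' exact algorithm.

```python
import itertools
import statistics

def permutation_pvalue(group_a, group_b):
    """Exact two-sample permutation test on per-unit summaries.

    group_a, group_b: lists of per-unit scores (e.g., each unit's mean
    over its repeated measures -- a linear combination of the marginal
    statistics, as in the location-shift reduction described above).
    Returns the two-sided p-value of the difference in group means
    under the permutation distribution of unit labels.
    """
    pooled = group_a + group_b
    n_a = len(group_a)
    observed = abs(statistics.mean(group_a) - statistics.mean(group_b))
    count = total = 0
    for idx in itertools.combinations(range(len(pooled)), n_a):
        a = [pooled[i] for i in idx]
        b = [pooled[i] for i in range(len(pooled)) if i not in idx]
        diff = abs(statistics.mean(a) - statistics.mean(b))
        if diff >= observed - 1e-12:   # tolerance guards float ties
            count += 1
        total += 1
    return count / total

# Four units per group, each summarised by its mean repeated measure;
# the groups are fully separated, so p equals 2 / C(8, 4).
p = permutation_pvalue([4.1, 4.5, 3.9, 4.3], [5.2, 5.6, 5.0, 5.4])
```

With only a handful of units per group the full permutation distribution is enumerable directly, which matches the abstract's point that approximation is unnecessary when the computational burden is small.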
142.
In order for predictive regression tests to deliver asymptotically valid inference, account has to be taken of the degree of persistence of the predictors under test. There is also a maintained assumption that any predictability in the variable of interest is purely attributable to the predictors under test. Violation of this assumption by the omission of relevant persistent predictors renders the predictive regression invalid, and potentially also spurious, as both the finite sample and asymptotic size of the predictability tests can be significantly inflated. In response, we propose a predictive regression invalidity test based on a stationarity testing approach. To allow for an unknown degree of persistence in the putative predictors, and for heteroscedasticity in the data, we implement our proposed test using a fixed regressor wild bootstrap procedure. We demonstrate the asymptotic validity of the proposed bootstrap test by proving that the limit distribution of the bootstrap statistic, conditional on the data, is the same as the limit null distribution of the statistic computed on the original data, conditional on the predictor. This corrects a long-standing error in the bootstrap literature whereby it is incorrectly argued that for strongly persistent regressors and test statistics akin to ours the validity of the fixed regressor bootstrap obtains through equivalence to an unconditional limit distribution. Our bootstrap results are therefore of interest in their own right and are likely to have applications beyond the present context. An illustration is given by reexamining the results relating to U.S. stock returns data in Campbell and Yogo (2006, Journal of Financial Economics, 81, 27–60). Supplementary materials for this article are available online.
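The mechanics of a fixed regressor wild bootstrap can be sketched as follows. This is only an illustrative skeleton: the regressor is held fixed across draws and the bootstrap data are built under the null with Rademacher-weighted residuals, but the statistic used here is an ordinary OLS t-ratio, a placeholder standing in for the paper's stationarity-test statistic.

```python
import numpy as np

def fixed_regressor_wild_bootstrap(y, x, n_boot=999, seed=0):
    """Sketch of a fixed regressor wild bootstrap p-value.

    Bootstrap samples are y* = y_bar + e_t * w_t, where e_t are the
    null (demeaned) residuals and w_t are i.i.d. Rademacher weights;
    the regressor x is held fixed across draws, so the bootstrap data
    are generated under the null of no predictability. The statistic
    is the absolute OLS t-ratio on x -- a placeholder, not the
    stationarity-test statistic of the paper.
    """
    rng = np.random.default_rng(seed)
    X = np.column_stack([np.ones_like(x), x])

    def t_stat(yy):
        beta, *_ = np.linalg.lstsq(X, yy, rcond=None)
        resid = yy - X @ beta
        sigma2 = resid @ resid / (len(yy) - 2)
        var_slope = sigma2 * np.linalg.inv(X.T @ X)[1, 1]
        return abs(beta[1]) / np.sqrt(var_slope)

    t_obs = t_stat(y)
    e = y - y.mean()                      # residuals under the null
    count = 0
    for _ in range(n_boot):
        w = rng.choice([-1.0, 1.0], size=len(y))
        y_star = y.mean() + e * w         # regressor x stays fixed
        if t_stat(y_star) >= t_obs:
            count += 1
    return (1 + count) / (1 + n_boot)
```

The Rademacher weights preserve each residual's magnitude while randomizing its sign, which is what lets the scheme accommodate heteroscedasticity of unknown form.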
143.
This paper presents a simple, unified framework that brings together various concepts of regression, prediction, and principal components. Several new concepts related to prediction are introduced, and the interrelationships of these concepts are established. The generalizations are examined in detail and are illustrated in the context of a well-known data set.
144.
145.
Summary.  Meta-analysis in the presence of unexplained heterogeneity is frequently undertaken by using a random-effects model, in which the effects underlying different studies are assumed to be drawn from a normal distribution. Here we discuss the justification and interpretation of such models, by addressing in turn the aims of estimation, prediction and hypothesis testing. A particular issue that we consider is the distinction between inference on the mean of the random-effects distribution and inference on the whole distribution. We suggest that random-effects meta-analyses as currently conducted often fail to provide the key results, and we investigate the extent to which distribution-free, classical and Bayesian approaches can provide satisfactory methods. We conclude that the Bayesian approach has the advantage of naturally allowing for full uncertainty, especially for prediction. However, it is not without problems, including computational intensity and sensitivity to a priori judgements. We propose a simple prediction interval for classical meta-analysis and offer extensions to standard practice of Bayesian meta-analysis, making use of an example of studies of 'set shifting' ability in people with eating disorders.
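A simple classical prediction interval of the kind proposed above can be sketched as follows. This is a minimal implementation assuming a DerSimonian–Laird estimate of the between-study variance and a user-supplied t quantile; the Bayesian extensions discussed in the abstract are not shown.

```python
import math

def prediction_interval(effects, variances, t_crit):
    """Approximate prediction interval for a new study's true effect.

    Uses the DerSimonian-Laird tau^2 and the t-based interval
    mu_hat +/- t_crit * sqrt(tau^2 + se(mu_hat)^2), where t_crit is
    a quantile on k - 2 degrees of freedom supplied by the caller
    (e.g. from a t table). A sketch of the simple classical interval,
    distinct from a confidence interval for the mean effect alone.
    """
    k = len(effects)
    w = [1.0 / v for v in variances]
    mu_fixed = sum(wi * yi for wi, yi in zip(w, effects)) / sum(w)
    q = sum(wi * (yi - mu_fixed) ** 2 for wi, yi in zip(w, effects))
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - (k - 1)) / c)            # DerSimonian-Laird
    w_star = [1.0 / (v + tau2) for v in variances]
    mu = sum(wi * yi for wi, yi in zip(w_star, effects)) / sum(w_star)
    se2 = 1.0 / sum(w_star)
    half = t_crit * math.sqrt(tau2 + se2)         # predicts a NEW effect
    return mu - half, mu + half
```

The key distinction the abstract draws is visible in the last lines: the interval widens by the full between-study variance tau^2, not just the standard error of the pooled mean.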
146.
Summary.  A general method for exploring multivariate data by comparing different estimates of multivariate scatter is presented. The method is based on the eigenvalue–eigenvector decomposition of one scatter matrix relative to another. In particular, it is shown that the eigenvectors can be used to generate an affine invariant co-ordinate system for the multivariate data. Consequently, we view this method as a method for invariant co-ordinate selection. By plotting the data with respect to this new invariant co-ordinate system, various data structures can be revealed. For example, under certain independent components models, it is shown that the invariant co-ordinates correspond to the independent components. Another example pertains to mixtures of elliptical distributions. In this case, it is shown that a subset of the invariant co-ordinates corresponds to Fisher's linear discriminant subspace, even though the class identifications of the data points are unknown. Some illustrative examples are given.
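The core computation, an eigendecomposition of one scatter matrix relative to another, can be sketched in a few lines. This is a minimal illustration assuming the common pairing of the sample covariance with a fourth-moment scatter; it is not the paper's full treatment.

```python
import numpy as np

def invariant_coordinates(X):
    """Sketch of invariant co-ordinate selection via two scatters.

    Compares the covariance S1 with a fourth-moment scatter S2 (each
    observation weighted by its squared Mahalanobis distance) through
    the eigendecomposition of S1^{-1} S2. The eigenvectors define a
    co-ordinate system that is affine invariant up to sign and scale;
    at the multivariate normal the eigenvalues are all close to 1.
    """
    Xc = X - X.mean(axis=0)
    n, p = Xc.shape
    S1 = Xc.T @ Xc / n
    S1_inv = np.linalg.inv(S1)
    # Squared Mahalanobis distances weight the fourth-moment scatter.
    d2 = np.einsum('ij,jk,ik->i', Xc, S1_inv, Xc)
    S2 = (Xc * d2[:, None]).T @ Xc / (n * (p + 2))
    eigvals, B = np.linalg.eig(S1_inv @ S2)
    eigvals = eigvals.real
    order = np.argsort(eigvals)[::-1]     # descending generalized kurtosis
    return eigvals[order], B[:, order].real
```

Plotting `X @ B` then displays the data in the invariant co-ordinate system; departures of the eigenvalues from 1 flag co-ordinates worth inspecting for structure such as mixtures or independent components.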
147.
David T. Palmer, Serials Review, 2009, 35(3): 138–141
The Pacific Rim Library (PRL) is an initiative of the Pacific Rim Digital Library Association (PRDLA). The project began in 2006 using the OAI-PMH paradigm and now holds over 300,000 records harvested from OAI data provider libraries around the Pacific. PRL's goal is to enable the sharing of digital collections amongst PRDLA members and the world, but greater, unexpected benefits have been discovered. By mirroring their metadata, PRL increases the chance that member data will be discovered in Google and other general search engines. With its many disparate collections, PRL is not a repository for traditional information discovery and retrieval. Typically, users will bounce from a Google hit to the PRL metadata record in Hong Kong and then begin an intensive search on the original site which hosts the full digital object, in Vancouver, Honolulu, Wuhan, Singapore, or another PRDLA member location.
148.
In response surface methodology, one is usually interested in estimating the optimal conditions based on a small number of experimental runs which are designed to optimally sample the experimental space. Typically, regression models are constructed from the experimental data and interrogated in order to provide a point estimate of the independent variable settings predicted to optimize the response. Unfortunately, these point estimates are rarely accompanied by uncertainty intervals. Though classical frequentist confidence intervals can be constructed for unconstrained quadratic models, higher-order, constrained, or nonlinear models are often encountered in practice. Existing techniques for constructing uncertainty estimates in such situations have not been implemented widely, due in part to the need to set adjustable parameters or because of limited or difficult applicability to constrained or nonlinear problems. To address these limitations, a Bayesian method of determining credible intervals for response surface optima was developed. The approach shows good coverage probabilities on two test problems, is straightforward to implement and is readily applicable to the kind of constrained and/or nonlinear problems that frequently appear in practice.
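To make the underlying problem concrete, the sketch below fits a one-dimensional quadratic response surface, locates its stationary point, and attaches a percentile-bootstrap interval to it. The bootstrap is a simple frequentist stand-in used for illustration only; the abstract's own proposal is a Bayesian credible interval.

```python
import numpy as np

def optimum_interval(x, y, n_boot=500, seed=0):
    """Percentile-bootstrap interval for the optimum of a quadratic fit.

    Fits y = b0 + b1*x + b2*x**2 and locates the stationary point
    x* = -b1 / (2*b2), then resamples (x, y) pairs to quantify the
    uncertainty in x*. Illustrative only: the hypothetical quadratic,
    unconstrained setting is the easy case the abstract contrasts with
    constrained/nonlinear models.
    """
    rng = np.random.default_rng(seed)

    def fit_opt(xx, yy):
        b2, b1, _ = np.polyfit(xx, yy, 2)   # highest degree first
        return -b1 / (2.0 * b2)

    point = fit_opt(x, y)
    n = len(x)
    boots = []
    for _ in range(n_boot):
        idx = rng.integers(0, n, size=n)    # resample runs with replacement
        boots.append(fit_opt(x[idx], y[idx]))
    lo, hi = np.percentile(boots, [2.5, 97.5])
    return point, lo, hi
```

Even in this easy case the interval, not the point estimate alone, is what tells the experimenter how firmly the optimal setting is pinned down by a small number of runs.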
149.
In this note we provide a counterexample which resolves conjectures about Hadamard matrices made in this journal. Beder [1998. Conjectures about Hadamard matrices. Journal of Statistical Planning and Inference 72, 7–14] conjectured that if H is a maximal m×n row-Hadamard matrix then m is a multiple of 4; and that if n is a power of 2 then every row-Hadamard matrix can be extended to a Hadamard matrix. Using binary integer programming we obtain a maximal 13×32 row-Hadamard matrix, which disproves both conjectures. Additionally, for n being a multiple of 4 up to 64, we tabulate values of m for which we have found a maximal row-Hadamard matrix. Based on the tabulated results we conjecture that an m×n row-Hadamard matrix with m ≥ n−7 can be extended to a Hadamard matrix.
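The row-Hadamard property itself is easy to verify by machine. The sketch below assumes the standard definition (entries ±1, rows pairwise orthogonal) and a brute-force notion of maximality that is feasible only for tiny n; the paper's binary integer programming search is far more capable.

```python
from itertools import product

def is_row_hadamard(rows):
    """Check the row-Hadamard property: +/-1 entries, orthogonal rows."""
    n = len(rows[0])
    if any(len(r) != n or any(v not in (1, -1) for v in r) for r in rows):
        return False
    m = len(rows)
    return all(
        sum(a * b for a, b in zip(rows[i], rows[j])) == 0
        for i in range(m) for j in range(i + 1, m)
    )

def is_maximal(rows):
    """Brute-force maximality: no +/-1 row extends the set (tiny n only).

    Enumerates all 2**n candidate rows, so this is exponential in n --
    an illustration of the definition, not a practical search method.
    """
    n = len(rows[0])
    return not any(
        is_row_hadamard(rows + [list(cand)])
        for cand in product((1, -1), repeat=n)
    )
```

A full n×n Hadamard matrix is maximal automatically (its rows span the space), which is why the interesting question is when a shorter row-Hadamard matrix gets stuck before reaching n rows.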
150.
For many diseases, logistic constraints render large incidence studies difficult to carry out. This becomes a drawback, particularly when a new study is needed each time the incidence rate is investigated in a new population. By carrying out a prevalent cohort study with follow-up it is possible to estimate the incidence rate if it is constant. The authors derive the maximum likelihood estimator (MLE) of the overall incidence rate, λ, as well as age-specific incidence rates, by exploiting the epidemiologic relationship, (prevalence odds) = (incidence rate) × (mean duration), i.e., P/(1 − P) = λ × µ. The authors establish the asymptotic distributions of the MLEs and provide approximate confidence intervals for the parameters. Moreover, the MLE of λ is asymptotically most efficient and is the natural estimator obtained by substituting the marginal maximum likelihood estimators for P and µ into P/(1 − P) = λ × µ. Following up the subjects allows the authors to develop these widely applicable procedures. The authors apply their methods to data collected as part of the Canadian Study of Health and Ageing to estimate the incidence rate of dementia amongst elderly Canadians. The Canadian Journal of Statistics © 2009 Statistical Society of Canada
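The plug-in form of the estimator follows directly from the identity P/(1 − P) = λ × µ. A minimal sketch, using illustrative numbers rather than the Canadian Study of Health and Ageing data:

```python
def incidence_rate(prevalence, mean_duration):
    """Point estimate of the incidence rate from P/(1 - P) = lambda * mu.

    Substitutes estimates of the prevalence P and the mean disease
    duration mu into the identity above -- the plug-in form of the MLE
    described in the abstract (valid for a constant incidence rate).
    """
    if not 0 < prevalence < 1 or mean_duration <= 0:
        raise ValueError("need 0 < P < 1 and mu > 0")
    return (prevalence / (1.0 - prevalence)) / mean_duration

# Hypothetical example: 8% prevalence, mean duration 5 years
# gives an incidence rate in cases per person-year.
rate = incidence_rate(0.08, 5.0)
```

The abstract's efficiency result says this simple substitution loses nothing asymptotically relative to maximizing the full likelihood directly.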