[Search-results listing: 5,343 matches (5,217 paywalled full text, 126 free), published 1974–2023; search time 0 ms. Subject facets: Sociology 2,620; Management 920; Statistics 651; Theory & Methodology 614; Population Studies 406; General 68; Ethnology 41; Collected Works 22; Talent Studies 1. Per-year facet counts omitted.]
81.
The Dirichlet process prior allows flexible nonparametric mixture modeling. The number of mixture components is not specified in advance and can grow as new data arrive. However, analyses based on the Dirichlet process prior are sensitive to the choice of its parameters, including an infinite-dimensional distributional parameter G0. Most previous applications have either fixed G0 as a member of a parametric family or treated G0 in a Bayesian fashion, using parametric prior specifications. In contrast, we have developed an adaptive nonparametric method for constructing smooth estimates of G0. We combine this method with a technique for estimating α, the other Dirichlet process parameter, that is inspired by an existing characterization of its maximum-likelihood estimator. Together, these estimation procedures yield a flexible empirical Bayes treatment of Dirichlet process mixtures. Such a treatment is useful in situations where smooth point estimates of G0 are of intrinsic interest, or where the structure of G0 cannot be conveniently modeled with the usual parametric prior families. Analysis of simulated and real-world datasets illustrates the robustness of this approach.
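The growth of the number of components described in this abstract can be illustrated by the Chinese restaurant process, the clustering distribution induced by a Dirichlet process. A minimal simulation sketch, assuming a fixed concentration parameter α (illustrative only; this is not the authors' empirical Bayes procedure, and the function name is hypothetical):

```python
import random

def crp_partition(n, alpha, seed=0):
    """Sample one partition of n items from the Chinese restaurant process
    with concentration parameter alpha; returns the table (cluster) sizes."""
    rng = random.Random(seed)
    counts = []  # counts[k] = number of items at table k
    for i in range(n):
        # item i joins table k with prob counts[k]/(i+alpha),
        # or opens a new table with prob alpha/(i+alpha)
        r = rng.uniform(0, i + alpha)
        acc = 0.0
        for k, c in enumerate(counts):
            acc += c
            if r < acc:
                counts[k] += 1
                break
        else:
            counts.append(1)  # new mixture component appears

    return counts

tables = crp_partition(100, alpha=1.0)
print(len(tables), sum(tables))  # occupied components; total items (= 100)
```

Larger α tends to open more tables, which is why inference is sensitive to how α (and the base measure G0) is specified.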
82.
Partial least squares regression has been widely adopted within some areas as a useful alternative to ordinary least squares regression in the manner of other shrinkage methods such as principal components regression and ridge regression. In this paper we examine the nature of this shrinkage and demonstrate that partial least squares regression exhibits some undesirable properties.
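The non-uniform nature of PLS "shrinkage" can be seen on a toy problem: with one PLS component, some coefficients shrink relative to OLS while others can grow. A minimal sketch, assuming centered data and hypothetical numbers (not from the paper; `pls1_coef` and `ols_coef2` are illustrative names):

```python
def pls1_coef(X, y):
    """One-component (NIPALS-style) PLS regression coefficients, centered data."""
    p = len(X[0])
    # weight vector w proportional to X^T y, normalized
    w = [sum(X[i][j] * y[i] for i in range(len(y))) for j in range(p)]
    norm = sum(v * v for v in w) ** 0.5
    w = [v / norm for v in w]
    t = [sum(X[i][j] * w[j] for j in range(p)) for i in range(len(y))]  # scores
    b_t = sum(ti * yi for ti, yi in zip(t, y)) / sum(ti * ti for ti in t)
    return [b_t * wj for wj in w]

def ols_coef2(X, y):
    """OLS for exactly two centered predictors via the 2x2 normal equations."""
    s11 = sum(x[0] * x[0] for x in X)
    s12 = sum(x[0] * x[1] for x in X)
    s22 = sum(x[1] * x[1] for x in X)
    c1 = sum(x[0] * yi for x, yi in zip(X, y))
    c2 = sum(x[1] * yi for x, yi in zip(X, y))
    det = s11 * s22 - s12 * s12
    return [(s22 * c1 - s12 * c2) / det, (s11 * c2 - s12 * c1) / det]

X = [[1, 0], [-1, 0], [0, 2], [0, -2]]  # centered toy predictors (hypothetical)
y = [1, -1, 1, -1]                      # centered response
print(ols_coef2(X, y), pls1_coef(X, y))
```

On this data OLS gives (1, 0.5) while one-component PLS gives (2/6.8, 4/6.8) ≈ (0.29, 0.59): the first coefficient is shrunk but the second is inflated, the kind of behavior the abstract alludes to.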
83.
In 1960 Levene suggested a potentially robust test of homogeneity of variance based on an ordinary least squares analysis of variance of the absolute values of mean-based residuals. Levene's test has since been shown to have inflated levels of significance when based on the F-distribution, and tests a hypothesis other than homogeneity of variance when treatments are unequally replicated, but the incorrect formulation is now standard output in several statistical packages. This paper develops a weighted least squares analysis of variance of the absolute values of both mean-based and median-based residuals. It shows how to adjust the residuals so that tests using the F-statistic focus on homogeneity of variance for both balanced and unbalanced designs. It shows how to modify the F-statistics currently produced by statistical packages so that the distribution of the resultant test statistic is closer to an F-distribution than is currently the case. The weighted least squares approach also produces component mean squares that are unbiased irrespective of which variable is used in Levene's test. To complete this aspect of the investigation the paper derives exact second-order moments of the component sums of squares used in the calculation of the mean-based test statistic. It shows that, for large samples, both ordinary and weighted least squares test statistics are equivalent; however they are over-dispersed compared to an F variable.
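The classical, unweighted statistic that this paper refines is a one-way ANOVA F computed on absolute deviations from a group center (mean for Levene's original test, median for the Brown–Forsythe variant). A minimal sketch of that baseline; the paper's weighted least squares adjustments are not reproduced here:

```python
def levene_F(groups, center="median"):
    """Levene/Brown-Forsythe statistic: one-way ANOVA F on the absolute
    deviations of each observation from its group mean or median."""
    def med(xs):
        s = sorted(xs)
        m = len(s) // 2
        return s[m] if len(s) % 2 else 0.5 * (s[m - 1] + s[m])

    centers = [med(g) if center == "median" else sum(g) / len(g) for g in groups]
    z = [[abs(x - c) for x in g] for g, c in zip(groups, centers)]
    k = len(groups)
    N = sum(len(g) for g in z)
    zbar_i = [sum(g) / len(g) for g in z]          # per-group mean deviation
    zbar = sum(sum(g) for g in z) / N              # grand mean deviation
    ss_between = sum(len(g) * (zi - zbar) ** 2 for g, zi in zip(z, zbar_i))
    ss_within = sum(sum((x - zi) ** 2 for x in g) for g, zi in zip(z, zbar_i))
    return ((N - k) / (k - 1)) * ss_between / ss_within

print(levene_F([[0, 1, 0, 1], [-5, 5, -4, 4]]))  # large F: very unequal spreads
```

Referring this statistic to an F(k−1, N−k) distribution is exactly the approximation the abstract says is inflated, which motivates the paper's weighted adjustments.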
84.
Michael Robinson, Serials Review, 2009, 35(3): 133–137
Established in 1994 through the amalgamation of several teacher training colleges, the Hong Kong Institute of Education (HKIEd) is the major multidisciplinary teacher education provider in the Hong Kong SAR. Despite this, the Institute does not have a particularly high research profile compared with its peer institutions in Hong Kong and around the world, and its research publishing achieves only modest exposure and impact in the international educational research literature. The Institute aims to attain the title of a “university of education” and has identified improving its research output and profile as critical to achieving this. In this context, the HKIEd Library embarked on redeveloping its institutional repository, changing its direction from an archive of institutional publications to one that brings together and offers access to the entire published output of the Institute since its foundation, in a deliberate effort to promote Institute research. This paper explores the Library's approach to developing the institutional repository, how the repository contributes to and aligns with the research strategies of the Institute, and the impact the repository has had so far on raising the profile of research at HKIEd.
85.
86.
This installment of "Serials Spoken Here" covers events that transpired between late September and late October 2008. Reported herein are two webinars, one on ONIX for Serials and the other on SUSHI, and two conferences: the eighty-fourth Annual Meeting of the Potomac Technical Processing Librarians and the New England Library Association's Annual Conference.
87.
The analysis of data using a stable probability distribution with tail parameter α<2 (sometimes called a Pareto–Lévy distribution) seems to have been avoided in the past in part because of the lack of a significance test for the mean, even though it appears to be the correct distribution to use for describing returns in the financial markets. A z test for the significance of the mean of a stable distribution with tail parameter 1<α≤2 is defined. Tables are calculated and displayed for the 5% and 1% significance levels over a range of tail and skew parameters α and β. Through the use of maximum likelihood estimates, the test becomes a practical tool even when α and β are not that accurately determined. As an example, the z test is applied to the daily closing prices of the Dow Jones Industrial Average from 2 January 1940 to 19 March 2010.
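The scaling behind such a test comes from the stability property: the mean of n iid stable draws with scale c is again stable, with scale c·n^(1/α−1). A minimal sketch of a standardized mean statistic built on that fact (an assumption on my part: the paper's exact standardization and its tabulated critical values may differ, so this only illustrates the scaling):

```python
def stable_z(xbar, mu0, scale, n, alpha):
    """Standardized sample-mean statistic for iid stable data, 1 < alpha <= 2.
    The mean of n draws with scale c is stable with scale c * n**(1/alpha - 1),
    which motivates this standardization; for alpha = 2 (Gaussian case) it
    reduces to the familiar sqrt(n) * (xbar - mu0) / scale."""
    if not 1 < alpha <= 2:
        raise ValueError("mean test requires 1 < alpha <= 2")
    return (xbar - mu0) / (scale * n ** (1.0 / alpha - 1.0))

print(stable_z(1.0, 0.0, 1.0, 100, 2.0))  # Gaussian-limit case
print(stable_z(1.0, 0.0, 1.0, 100, 1.5))  # heavier tail: slower scaling
```

Note how the denominator shrinks more slowly in n as α decreases toward 1, so a heavy-tailed series needs a much larger deviation of the mean to reach significance.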
88.
We consider the maximum likelihood estimator $\hat{F}_n$ of a distribution function in a class of deconvolution models where the known density of the noise variable is of bounded variation. This class of noise densities contains in particular bounded, decreasing densities. The estimator $\hat{F}_n$ is defined, characterized in terms of Fenchel optimality conditions and computed. Under appropriate conditions, various consistency results for $\hat{F}_n$ are derived, including uniform strong consistency. The Canadian Journal of Statistics 41: 98–110; 2013 © 2012 Statistical Society of Canada
89.
Interpreting data and communicating effectively through graphs and tables are requisite skills for statisticians and non-statisticians in the pharmaceutical industry. However, the quality of visual displays of data in the medical and pharmaceutical literature and at scientific conferences is severely lacking. We describe an interactive, workshop-driven, 2-day short course that we constructed for pharmaceutical research personnel to learn these skills. The examples in the course and the workshop datasets are drawn from our professional experience, the scientific literature, and the mass media. During the course, participants gain hands-on experience with the principles of visual and graphical perception and with the design and construction of both graphic and tabular displays of quantitative and qualitative information. After completing the course, participants are able to construct, revise, critique, and interpret graphic and tabular displays with a critical eye, according to an extensive set of guidelines. Copyright © 2013 John Wiley & Sons, Ltd.
90.
The area under the receiver operating characteristic curve (AUC) is the most commonly reported measure of discrimination for prediction models with binary outcomes. However, recently it has been criticized for its inability to increase when important risk factors are added to a baseline model with good discrimination. This has led to the claim that the reliance on the AUC as a measure of discrimination may miss important improvements in clinical performance of risk prediction rules derived from a baseline model. In this paper we investigate this claim by relating the AUC to measures of clinical performance based on sensitivity and specificity under the assumption of multivariate normality. The behavior of the AUC is contrasted with that of discrimination slope. We show that unless rules with very good specificity are desired, the change in the AUC does an adequate job as a predictor of the change in measures of clinical performance. However, stronger or more numerous predictors are needed to achieve the same increment in the AUC for baseline models with good versus poor discrimination. When excellent specificity is desired, our results suggest that the discrimination slope might be a better measure of model improvement than AUC. The theoretical results are illustrated using a Framingham Heart Study example of a model for predicting the 10-year incidence of atrial fibrillation.
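The two discrimination measures contrasted in this abstract have simple textbook definitions: the AUC equals the probability that a randomly chosen event outranks a randomly chosen non-event, and the discrimination slope is the difference in mean predicted risk between events and non-events. A minimal sketch of both (standard definitions, not the paper's multivariate-normal analysis):

```python
def auc(scores, labels):
    """Rank-based AUC: fraction of (event, non-event) pairs where the event
    scores higher; ties count one half."""
    pos = [s for s, l in zip(scores, labels) if l == 1]
    neg = [s for s, l in zip(scores, labels) if l == 0]
    wins = sum((p > q) + 0.5 * (p == q) for p in pos for q in neg)
    return wins / (len(pos) * len(neg))

def discrimination_slope(probs, labels):
    """Mean predicted risk among events minus mean predicted risk among
    non-events."""
    pos = [p for p, l in zip(probs, labels) if l == 1]
    neg = [p for p, l in zip(probs, labels) if l == 0]
    return sum(pos) / len(pos) - sum(neg) / len(neg)

probs = [0.9, 0.8, 0.3, 0.2]   # hypothetical predicted risks
labels = [1, 1, 0, 0]          # observed outcomes
print(auc(probs, labels), discrimination_slope(probs, labels))
```

The contrast in the abstract arises because AUC depends only on the ranking of predictions, while the discrimination slope also responds to how far apart the predicted risks are.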
Copyright © Beijing Qinyun Technology Development Co., Ltd. 京ICP备09084417号