971.
A multivariate modified histogram density estimate depending on a reference density g and a partition P has been proved to have good consistency properties according to several information theoretic criteria. Given an i.i.d. sample, we show how to select automatically both g and P so that the expected L1 error of the corresponding selected estimate is within a given constant multiple of the best possible error plus an additive term which tends to zero under mild assumptions. Our method is inspired by the combinatorial tools developed by Devroye and Lugosi [Devroye, L. and Lugosi, G., 2001, Combinatorial Methods in Density Estimation (New York, NY: Springer-Verlag)] and it includes a wide range of reference density and partition models. Results of simulations are also presented.
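The quantities involved can be made concrete with a toy computation: a histogram estimate on a partition P, a reference density g averaged over the same bins, and the L1 distance between the two. This is only an illustrative sketch in Python/NumPy, not the authors' selection algorithm; the standard-normal reference and the 16-bin partition are arbitrary choices.

```python
import math
import numpy as np

def histogram_density(sample, edges):
    """Histogram density estimate on the partition given by bin edges."""
    counts, _ = np.histogram(sample, bins=edges)
    return counts / (len(sample) * np.diff(edges))

def l1_error(f, g, edges):
    """L1 distance between two piecewise-constant densities on one partition."""
    return float(np.sum(np.abs(f - g) * np.diff(edges)))

def normal_bin_density(edges):
    """Standard-normal reference density g averaged over each bin."""
    cdf = lambda x: 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))
    probs = np.array([cdf(b) - cdf(a) for a, b in zip(edges[:-1], edges[1:])])
    return probs / np.diff(edges)

rng = np.random.default_rng(0)
sample = np.clip(rng.normal(size=1000), -3.99, 3.99)  # keep points inside the partition
edges = np.linspace(-4.0, 4.0, 17)                    # one candidate partition P
f_hat = histogram_density(sample, edges)
err = l1_error(f_hat, normal_bin_density(edges), edges)
```

The selection problem of the paper is then to pick g and P so that `err` against the unknown true density is near-optimal.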
972.
G.J.S. Ross, Statistics, 2013, 47(3): 445-453
This is the first application of a new method for testing stationary random point processes. Consider the class of all stationary ergodic point processes on the real line with arbitrary dependences among the inter-point distances (spacings). The hypothesis is: the observed process φ is a homogeneous Poisson process, or is more (resp. less) regular than a Poisson process. The sample is the vector of the first n points t1, …, tn. There is a close relation between our method for testing and queueing theory: to find an appropriate test statistic, we observe the behaviour of a single-server queue with input φ. A table of critical values is given.
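To fix ideas, the null model can be simulated directly: a homogeneous Poisson process has i.i.d. exponential spacings. The sketch below (Python/NumPy) computes a crude regularity diagnostic, the coefficient of variation of the spacings; it is not the paper's queue-based statistic, only an illustration of how "more/less regular than Poisson" shows up in the spacings.

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulate the first n points of a homogeneous Poisson process on the line:
# its inter-point spacings are i.i.d. Exponential(rate).
rate, n = 2.0, 5000
spacings = rng.exponential(1.0 / rate, size=n)
points = np.cumsum(spacings)

# Crude regularity diagnostic (NOT the paper's queue-based statistic):
# exponential spacings have coefficient of variation 1; cv < 1 suggests a
# more regular process, cv > 1 a less regular (more clustered) one.
cv = spacings.std(ddof=1) / spacings.mean()
```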
973.
The paper reconsiders certain estimators proposed by Cohen and Sackrowitz [Ann. Statist. (1974) 2, 1274-1282; Ann. Statist. 4, 1294] for the common mean of two normal distributions on the basis of independent samples of equal size from the two populations. It derives the necessary and sufficient condition for improvement over the first sample mean, under squared error loss, for any member of a class containing these estimators. It shows that the estimator proposed by them for simultaneous improvement over both sample means has the desired property if and only if the common size of the samples is at least nine. This requirement is milder than that for any other estimator at the present state of knowledge, and may be contrasted with their result, which implies the desired property of the estimator only if the common size of the samples is at least fifteen. Upper bounds for variances of the estimators derived by them are also improved.
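For orientation, a minimal sketch of a combined common-mean estimator is shown below; it uses the classical Graybill-Deal inverse-variance weighting, purely as an illustration of the setting, and is not the Cohen-Sackrowitz estimator analysed in the paper.

```python
import numpy as np

def graybill_deal(x, y):
    """Classical Graybill-Deal combined estimator of a common mean:
    a convex combination of the two sample means, each weighted by the
    reciprocal of its sample variance. (Illustrative only; not the
    Cohen-Sackrowitz estimator discussed in the paper.)"""
    wx, wy = 1.0 / x.var(ddof=1), 1.0 / y.var(ddof=1)
    return (wx * x.mean() + wy * y.mean()) / (wx + wy)

rng = np.random.default_rng(2)
x = rng.normal(5.0, 1.0, size=50)  # two independent samples sharing the
y = rng.normal(5.0, 3.0, size=50)  # same mean but different variances
est = graybill_deal(x, y)
```

Because the weights are positive, the estimate always lies between the two sample means.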
974.
A new method for constructing interpretable principal components is proposed. The method first clusters the variables, and then interpretable (sparse) components are constructed from the correlation matrices of the clustered variables. For the first step of the method, a new weighted-variances method for clustering variables is proposed. It reflects the nature of the problem that the interpretable components should maximize the explained variance and thus provide sparse dimension reduction. An important feature of the new clustering procedure is that the optimal number of clusters (and components) can be determined in a non-subjective manner. The new method is illustrated using well-known simulated and real data sets. It clearly outperforms many existing methods for sparse principal component analysis in terms of both explained variance and sparseness.
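A toy version of the two-step idea can be sketched as follows, with a greedy correlation-threshold clustering standing in for the paper's weighted-variances method; the threshold and the simulated block structure are arbitrary choices for illustration.

```python
import numpy as np

def greedy_corr_clusters(corr, threshold=0.5):
    """Crude stand-in for the paper's clustering step: repeatedly seed a
    cluster and absorb every unassigned variable whose absolute correlation
    with the seed exceeds the threshold."""
    unassigned = set(range(corr.shape[0]))
    clusters = []
    while unassigned:
        seed = min(unassigned)
        cluster = sorted(j for j in unassigned if abs(corr[seed, j]) >= threshold)
        clusters.append(cluster)
        unassigned -= set(cluster)
    return clusters

def sparse_components(X, clusters):
    """First principal component within each cluster of variables; loadings
    are zero outside the cluster, so each component is sparse by construction."""
    p = X.shape[1]
    comps = []
    for cl in clusters:
        sub = X[:, cl] - X[:, cl].mean(axis=0)
        _, _, vt = np.linalg.svd(sub, full_matrices=False)
        v = np.zeros(p)
        v[cl] = vt[0]
        comps.append(v)
    return np.array(comps)

# two blocks of strongly correlated variables
rng = np.random.default_rng(3)
z1, z2 = rng.normal(size=(200, 1)), rng.normal(size=(200, 1))
X = np.hstack([z1 + 0.1 * rng.normal(size=(200, 3)),
               z2 + 0.1 * rng.normal(size=(200, 3))])
clusters = greedy_corr_clusters(np.corrcoef(X, rowvar=False))
comps = sparse_components(X, clusters)
```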
975.
The paper introduces a new method for flexible spline fitting for copula density estimation. Spline coefficients are penalized to achieve a smooth fit. To weaken the curse of dimensionality, instead of a full tensor spline basis, a reduced tensor product based on so-called sparse grids (Notes Numer. Fluid Mech. Multidiscip. Des., 31, 1991, 241-251) is used. To achieve uniform margins of the copula density, linear constraints are placed on the spline coefficients, and quadratic programming is used to fit the model. Simulations and practical examples accompany the presentation.
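A small sketch of the standard preprocessing shared by copula density estimators, spline-based or otherwise: each margin is rank-transformed to pseudo-observations with (approximately) uniform margins on (0, 1). The spline fitting and quadratic programming themselves are not reproduced here.

```python
import numpy as np

def pseudo_observations(X):
    """Rank-transform each column to (0, 1): the usual first step before
    fitting a copula density, giving approximately uniform margins."""
    n = X.shape[0]
    ranks = X.argsort(axis=0).argsort(axis=0) + 1  # ranks 1..n per column
    return ranks / (n + 1.0)

rng = np.random.default_rng(5)
z = rng.normal(size=(500, 1))
X = np.hstack([z + 0.5 * rng.normal(size=(500, 1)),
               z + 0.5 * rng.normal(size=(500, 1))])  # two dependent margins
U = pseudo_observations(X)  # copula estimators are fitted to U, not X
```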
979.
k-POD: A Method for k-Means Clustering of Missing Data
The k-means algorithm is often used in clustering applications, but it requires a complete data matrix, and missing data are common in many applications. Mainstream approaches to clustering missing data reduce the problem to a complete-data formulation through either deletion or imputation, but these solutions may incur significant costs. Our k-POD method is a simple extension of k-means clustering for missing data that works even when the missingness mechanism is unknown, when external information is unavailable, and when there is significant missingness in the data.

[Received November 2014. Revised August 2015.]
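The idea admits a compact sketch: alternate ordinary k-means on a completed copy of the data with re-imputation of the missing cells from the assigned centroids. This is a simplified reading of the k-POD approach, not the published implementation; the farthest-point initialization and the fixed iteration count are arbitrary choices.

```python
import numpy as np

def kpod(X, k, n_iter=25):
    """k-POD-style sketch: Lloyd's k-means on a completed data matrix,
    re-imputing each missing cell from its row's assigned centroid."""
    miss = np.isnan(X)
    Xf = np.where(miss, np.nanmean(X, axis=0), X)  # start from column means

    # deterministic farthest-point initialization of the k centroids
    centers = [Xf[0]]
    for _ in range(1, k):
        d2 = np.min([((Xf - c) ** 2).sum(axis=1) for c in centers], axis=0)
        centers.append(Xf[int(d2.argmax())])
    centers = np.array(centers)

    for _ in range(n_iter):
        d2 = ((Xf[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
        labels = d2.argmin(axis=1)              # assignment step
        for j in range(k):
            if np.any(labels == j):
                centers[j] = Xf[labels == j].mean(axis=0)  # update step
        Xf = np.where(miss, centers[labels], X)  # re-impute missing cells
    return labels, Xf

# two well-separated clusters; hide one coordinate in ~30% of the rows
rng = np.random.default_rng(4)
X = np.vstack([rng.normal(0.0, 0.3, size=(30, 2)),
               rng.normal(5.0, 0.3, size=(30, 2))])
X[rng.random(60) < 0.3, 0] = np.nan
labels, completed = kpod(X, 2)
```

Note that no deletion or external imputation model is needed: the clustering itself supplies the fill-in values.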
980.
In this paper, a new test statistic is presented for testing the null hypothesis of equal multinomial cell probabilities against various trend alternatives. Exact asymptotic critical values are obtained. The power of the test is compared with that of several other statistics considered by Choulakian et al. (1995). The test is shown to have better power for certain trend alternatives.
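For comparison, the classical Pearson chi-square statistic for the same null hypothesis can be computed as below; this is the textbook omnibus statistic, not the paper's new statistic, which is tailored to trend alternatives.

```python
import numpy as np

def pearson_chi2_equal_probs(counts):
    """Pearson chi-square statistic for H0: all multinomial cell
    probabilities are equal (classical omnibus test, shown for
    orientation; not the statistic proposed in the paper)."""
    counts = np.asarray(counts, dtype=float)
    expected = counts.sum() / counts.size
    return float(((counts - expected) ** 2 / expected).sum())

stat_null = pearson_chi2_equal_probs([25, 23, 26, 26])   # consistent with H0
stat_trend = pearson_chi2_equal_probs([10, 20, 30, 40])  # increasing trend

# chi-square critical value with k - 1 = 3 degrees of freedom at the 5% level
CRIT_3DF_5PCT = 7.815
```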