161.
There has been much recent interest in supersaturated designs and their application in factor screening experiments. Supersaturated designs have mainly been constructed using the E(s²)-optimality criterion originally proposed by Booth and Cox in 1962. Until now, however, E(s²)-optimal designs have been established with certainty for n experimental runs only when the number of factors m is a multiple of n − 1, and in the adjacent cases m = q(n − 1) + r with |r| ≤ 2 and q an integer. A method of constructing E(s²)-optimal designs is presented which allows a reasonably complete solution to be found for various numbers of runs n, including n = 8, 12, 16, 20, 24, 32, 40, 48 and 64.
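The E(s²) criterion itself is straightforward to compute: for an n × m two-level design matrix X with entries ±1, s_ij is the (i, j) off-diagonal entry of XᵀX, and E(s²) is the average of s_ij² over all column pairs. A minimal sketch in Python; the design matrix below is an arbitrary illustration, not one of the optimal designs constructed in the paper:

```python
from itertools import combinations

def e_s2(design):
    """Average squared off-diagonal entry of X'X for a +/-1 design matrix,
    i.e. the E(s^2) value that the optimality criterion minimizes."""
    m = len(design[0])  # number of factors (columns)
    pairs = list(combinations(range(m), 2))
    total = 0
    for i, j in pairs:
        s_ij = sum(row[i] * row[j] for row in design)  # column inner product
        total += s_ij ** 2
    return total / len(pairs)

# hypothetical n = 4 run, m = 5 factor supersaturated layout (illustration only)
X = [
    [ 1,  1,  1, -1,  1],
    [ 1, -1, -1,  1,  1],
    [-1,  1, -1,  1, -1],
    [-1, -1,  1, -1, -1],
]
print(e_s2(X))  # -> 3.2
```

Smaller values of E(s²) mean the factor columns are closer to mutually orthogonal, which is the goal in factor screening.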
162.
Many companies are trying to get to the bottom of what their main objectives are and what their business should be doing. The new Six Sigma approach concentrates on clarifying business strategy and making sure that everything relates to company objectives. It is vital to clarify each part of the business in such a way that everyone can understand the causes of variation that can lead to improvements in processes and performance. This paper describes a situation where the full implementation of SPC methodology has made possible a visual and widely appreciated summary of the performance of one important aspect of the business. The major part of the work was identifying the core objectives and deciding how to encapsulate each of them in one or more suitable measurements. The next step was to review the practicalities of obtaining the measurements, together with their reliability and representativeness. Finally, the measurements were presented in chart form and the more traditional steps of SPC analysis were begun. Data from fast-changing business environments are prone to many problems, such as the short span of available historical data, unusual distributions and other uncertainties. The paper discusses these issues and the eventual extraction of a meaningful set of information. The measurement framework has proved very useful and, from an initial circulation of a handful of people, it now forms an important part of an information process that provides responsible managers with valuable control information. The framework is kept fresh and vital by constant review and modification. Improved electronic data collection and dissemination of the report have also proved very important.
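The "chart form" step can be sketched with the standard Shewhart individuals-chart calculation; the measurements below are invented for illustration, and the constant 1.128 is the usual d₂ bias factor for moving ranges of size two:

```python
def individuals_limits(data):
    """Centre line and 3-sigma control limits for an individuals chart,
    with sigma estimated from the average moving range (d2 = 1.128)."""
    xbar = sum(data) / len(data)
    moving_ranges = [abs(b - a) for a, b in zip(data, data[1:])]
    sigma = (sum(moving_ranges) / len(moving_ranges)) / 1.128
    return xbar - 3 * sigma, xbar, xbar + 3 * sigma

# illustrative periodic performance measurements (hypothetical)
values = [10.2, 9.8, 10.5, 10.1, 9.9, 10.4, 10.0, 10.3]
lcl, centre, ucl = individuals_limits(values)
print(lcl, centre, ucl)
```

Points falling outside the computed limits would then be investigated as potential special causes of variation, in line with traditional SPC practice.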
163.
Ordinal regression is used for modelling an ordinal response variable as a function of explanatory variables. The classical technique for estimating the unknown parameters of this model is maximum likelihood (ML). The lack of robustness of this estimator is shown formally by deriving its breakdown point and its influence function. To robustify the procedure, a weighting step is added to the maximum likelihood estimator, yielding an estimator with a bounded influence function. We also show that the loss in efficiency due to the weighting step remains limited. A diagnostic plot based on the weighted maximum likelihood estimator allows outliers of different types to be detected in a single plot.
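In the common cumulative-logit (proportional-odds) formulation, P(Y ≤ k | x) = F(θₖ − βx) with F the logistic cdf, and a weighted likelihood simply multiplies each observation's log-probability by its weight, which is how downweighting bounds an observation's influence. A minimal sketch, with hypothetical thresholds, coefficient, data and weights, and with no optimization step shown:

```python
import math

def logistic_cdf(z):
    return 1.0 / (1.0 + math.exp(-z))

def category_probs(thresholds, beta, x):
    """P(Y = k | x), k = 0..K, under a cumulative-logit model with
    ordered thresholds theta_0 < ... < theta_{K-1} and one covariate."""
    cdf = [logistic_cdf(t - beta * x) for t in thresholds] + [1.0]
    return [cdf[0]] + [cdf[k] - cdf[k - 1] for k in range(1, len(cdf))]

def weighted_loglik(thresholds, beta, data, weights):
    """Weighted log-likelihood: a small weight caps how much one
    observation can pull the fitted parameters."""
    return sum(w * math.log(category_probs(thresholds, beta, x)[y])
               for (x, y), w in zip(data, weights))

data = [(0.5, 0), (1.2, 1), (-0.3, 2), (2.0, 1)]   # (covariate, category)
full = weighted_loglik([-1.0, 1.0], 0.8, data, [1, 1, 1, 1])
robust = weighted_loglik([-1.0, 1.0], 0.8, data, [1, 1, 0.2, 1])  # downweight a suspect point
print(full, robust)
```

A robust procedure of this flavour would choose the weights from the data; here they are fixed by hand purely to show the mechanics.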
164.
This article reviews Bayesian inference from the perspective that the designated model is misspecified. Misspecification has implications for the interpretation of objects such as the prior distribution, and has prompted recent questioning of the appropriateness of Bayesian inference in this scenario. The main focus of the article is to establish the suitability of applying the Bayes update to a misspecified model. The argument relies on representation theorems for sequences of symmetric distributions, the identification of the parameter values of interest, and the construction of sequences of distributions which act as guesses as to where the next observation is coming from. The article concludes with a clear identification of the fundamental starting point for the Bayesian.
165.
A new method for constructing interpretable principal components is proposed. The method first clusters the variables, and then constructs interpretable (sparse) components from the correlation matrices of the clustered variables. For the first step, a new weighted-variances method for clustering variables is proposed. It reflects the nature of the problem: the interpretable components should maximize the explained variance and thus provide sparse dimension reduction. An important feature of the new clustering procedure is that the optimal number of clusters (and components) can be determined in a non-subjective manner. The new method is illustrated on well-known simulated and real data sets, and clearly outperforms many existing methods for sparse principal component analysis in terms of both explained variance and sparseness.
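The two-step idea, cluster the variables first and then build a sparse component within each cluster, can be caricatured in a few lines. The greedy single-pass clustering and the equal within-cluster loadings below are simplifications for illustration, not the paper's weighted-variances procedure:

```python
def corr(a, b):
    """Pearson correlation of two equal-length sequences."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    va = sum((x - ma) ** 2 for x in a)
    vb = sum((y - mb) ** 2 for y in b)
    return cov / (va * vb) ** 0.5

def greedy_cluster(variables, threshold=0.9):
    """Assign each variable to the first cluster whose seed variable it
    correlates with (in absolute value) above the threshold; otherwise
    start a new cluster."""
    clusters = []
    for i in range(len(variables)):
        for cl in clusters:
            if abs(corr(variables[i], variables[cl[0]])) >= threshold:
                cl.append(i)
                break
        else:
            clusters.append([i])
    return clusters

def sparse_loadings(clusters, n_vars):
    """One component per cluster: equal loadings inside the cluster,
    exact zeros outside, so each component is trivially interpretable."""
    comps = []
    for cl in clusters:
        w = 1.0 / len(cl) ** 0.5
        comps.append([w if j in cl else 0.0 for j in range(n_vars)])
    return comps

x1 = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]
x2 = [1.1, 2.0, 3.1, 3.9, 5.1, 6.0]    # nearly collinear with x1
x3 = [2.0, -1.0, 3.0, 0.0, -2.0, 1.0]  # roughly unrelated
clusters = greedy_cluster([x1, x2, x3])
print(clusters, sparse_loadings(clusters, 3))
```

Exact zero loadings outside each cluster are what make such components sparse; the explained-variance trade-off the paper optimizes is not modelled here.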
167.
The paper introduces a new method for flexible spline fitting for copula density estimation. Spline coefficients are penalized to achieve a smooth fit. To weaken the curse of dimensionality, a reduced tensor product based on so-called sparse grids (Notes Numer. Fluid Mech. Multidiscip. Des., 31, 1991, 241–251) is used instead of a full tensor-product spline basis. To achieve uniform margins of the copula density, linear constraints are placed on the spline coefficients, and quadratic programming is used to fit the model. Simulations and practical examples accompany the presentation.
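The coefficient-penalization idea can be illustrated in one dimension with its discrete analogue, a Whittaker-style smoother: minimize ||y − c||² + λ||Dc||², with D the second-difference matrix, by solving the linear system (I + λDᵀD)c = y. This is a simplified stand-in for the paper's penalized tensor-product spline fit (no basis functions, no margin constraints), and the solver below is a plain Gaussian elimination written out for self-containment:

```python
def solve(A, b):
    """Solve A x = b by Gaussian elimination with partial pivoting."""
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def smooth(y, lam):
    """Penalized fit: minimize ||y - c||^2 + lam * ||D c||^2,
    where D takes second differences of the coefficient vector."""
    n = len(y)
    D = [[0.0] * n for _ in range(n - 2)]
    for i in range(n - 2):
        D[i][i], D[i][i + 1], D[i][i + 2] = 1.0, -2.0, 1.0
    # system matrix I + lam * D'D
    A = [[(1.0 if i == j else 0.0)
          + lam * sum(Dk[i] * Dk[j] for Dk in D)
          for j in range(n)] for i in range(n)]
    return solve(A, y)

y = [1.0, 2.6, 2.4, 4.2, 4.8, 6.1]
print(smooth(y, 0.0))    # no penalty: reproduces y
print(smooth(y, 100.0))  # heavy penalty: nearly linear fit
```

Increasing λ trades fidelity to the data for smoothness of the coefficients, exactly the role the penalty plays for the spline coefficients in the paper.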
169.
A multivariate modified histogram density estimate depending on a reference density g and a partition P has been proved to have good consistency properties according to several information-theoretic criteria. Given an i.i.d. sample, we show how to select both g and P automatically so that the expected L1 error of the corresponding selected estimate is within a given constant multiple of the best possible error, plus an additive term which tends to zero under mild assumptions. Our method is inspired by the combinatorial tools developed by Devroye and Lugosi [Devroye, L. and Lugosi, G., 2001, Combinatorial Methods in Density Estimation (New York, NY: Springer-Verlag)] and includes a wide range of reference density and partition models. Results of simulations are also presented.
Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号