A total of 5978 query results were found (search time: 0 ms).
121.
The estimated effect of any factor can depend strongly on both the model and the data used in the analysis. This article examines the estimated effect of a single factor in two different data sets under three different forms of the standard linear model, using the effect of track placement on achievement as the running example. Some relative advantages and disadvantages of each model are considered. The analyses demonstrate that, given collinearity among the predictor variables, a model with a poorer statistical fit may be useful for some interpretive purposes.
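As a minimal illustration of this point (simulated data and hypothetical variable names 'track' and 'ses', not the article's data or models), the sketch below shows how collinearity between the factor of interest and a covariate inflates the standard error of the factor's coefficient, so that a reduced model with poorer fit can give a more stable, though biased, estimate:

    import numpy as np

    rng = np.random.default_rng(0)
    n = 200
    # 'track' is the factor of interest; 'ses' is a strongly collinear covariate
    track = rng.integers(0, 2, n).astype(float)
    ses = 0.9 * track + 0.1 * rng.standard_normal(n)
    y = 1.0 * track + 1.0 * ses + rng.standard_normal(n)   # achievement

    X_full = np.column_stack([np.ones(n), track, ses])      # full model
    X_small = np.column_stack([np.ones(n), track])          # drops the collinear covariate

    for name, X in [("full", X_full), ("reduced", X_small)]:
        beta = np.linalg.lstsq(X, y, rcond=None)[0]
        # standard error of the 'track' coefficient from sigma^2 * (X'X)^{-1}
        sigma2 = np.sum((y - X @ beta) ** 2) / (n - X.shape[1])
        se = np.sqrt(sigma2 * np.linalg.inv(X.T @ X)[1, 1])
        print(name, "track effect:", round(beta[1], 3), "SE:", round(se, 3))

The full model's track coefficient is nearly unbiased but has a large standard error; the reduced model fits worse and absorbs the covariate's effect, yet estimates the combined effect far more precisely.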
122.
The importance of statistically designed experiments in industry is well recognized. However, the use of design of experiments is still not pervasive, owing in part to the inefficient learning process experienced by many non-statisticians. In this paper, the nature of design of experiments, in contrast to the usual statistical process control techniques, is discussed. It is then pointed out that for design of experiments to be appreciated and applied, appropriate approaches should be taken in training, learning and application. Perspectives based on the concepts of objective setting and design under constraints can be used to facilitate the experimenters' formulation of plans for the collection, analysis and interpretation of empirical information. The expanding role of design of experiments over the past several decades is reviewed, with comparisons of the various formats and contexts of experimental design applications, such as Taguchi methods and Six Sigma. The trend of development shows that, from the realm of scientific research to business improvement, the competitive advantage offered by design of experiments is being felt increasingly.
123.
Boosting is a new, powerful method for classification. It is an iterative procedure which successively classifies a weighted version of the sample, and then reweights the sample depending on how successful the classification was. In this paper we review some of the commonly used methods for boosting and show how they can be fitted into a Bayesian setup at each iteration of the algorithm. We demonstrate how this formulation gives rise to a new splitting criterion when using a domain-partitioning classification method such as a decision tree. Furthermore, we can improve the predictive performance of simple decision trees, known as stumps, by using a posterior-weighted average of them to classify at each step of the algorithm, rather than just a single stump. The main advantage of this approach is to reduce the number of boosting iterations required to produce a good classifier, with only a minimal increase in the computational complexity of the algorithm.
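A minimal sketch of the reweighting loop the abstract describes, using plain AdaBoost with decision stumps; the paper's Bayesian reformulation and posterior-weighted averaging of stumps are not reproduced here:

    import numpy as np

    def fit_stump(X, y, w):
        """Weighted decision stump: best single-feature threshold split (y in {-1, +1})."""
        best = (np.inf, 0, 0.0, 1)
        for j in range(X.shape[1]):
            for t in np.unique(X[:, j]):
                for sign in (1, -1):
                    pred = sign * np.where(X[:, j] <= t, -1, 1)
                    err = np.sum(w * (pred != y))
                    if err < best[0]:
                        best = (err, j, t, sign)
        return best

    def adaboost(X, y, n_rounds=20):
        n = len(y)
        w = np.full(n, 1.0 / n)                      # uniform initial weights
        stumps = []
        for _ in range(n_rounds):
            err, j, t, sign = fit_stump(X, y, w)
            err = max(err, 1e-10)
            alpha = 0.5 * np.log((1 - err) / err)    # stump's vote weight
            pred = sign * np.where(X[:, j] <= t, -1, 1)
            w *= np.exp(-alpha * y * pred)           # upweight misclassified points
            w /= w.sum()
            stumps.append((alpha, j, t, sign))
        return stumps

    def predict(stumps, X):
        score = sum(a * s * np.where(X[:, j] <= t, -1, 1) for a, j, t, s in stumps)
        return np.sign(score)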
124.
In this paper we discuss the analysis of heteroscedastic regression models and their applications to off-line quality control problems. It is well known that the method of pseudo-likelihood is usually preferred to full maximum likelihood, since the resulting estimators of the parameters in the regression function are more robust to misspecification of the variance function. Despite its popularity, however, existing theoretical results are difficult to apply and are of limited use in many applications. Using more recent results on estimating equations, we obtain an efficient algorithm for computing the pseudo-likelihood estimator with desirable convergence properties, and also derive simple, explicit and easy-to-apply asymptotic results. These results are used to examine in detail variance minimization in off-line quality control, yielding inference techniques for the optimized design parameter. In applications of some existing approaches to off-line quality control, such as the dual-response methodology, rigorous statistical inference techniques are scarce and difficult to obtain. An off-line quality control example is presented to discuss the practical aspects involved in applying the results obtained, and to address issues such as data transformation, model building and the optimization of design parameters. The analysis gives very encouraging results, and is able to unveil some important information not found in previous analyses.
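A sketch of the iterative pseudo-likelihood idea: alternate weighted least squares for the mean parameters with estimation of a variance function from squared residuals. The log-linear variance specification below is an assumption made for illustration, not necessarily the paper's model or its algorithm:

    import numpy as np

    def pseudo_likelihood_fit(X, y, n_iter=10):
        """Sketch: alternate WLS for the mean with a log-linear variance model
        Var(y_i) = exp(X_i @ gamma). The variance form is assumed here."""
        n, p = X.shape
        beta = np.linalg.lstsq(X, y, rcond=None)[0]   # OLS starting values
        gamma = np.zeros(p)
        for _ in range(n_iter):
            # 1) update variance parameters from the log of squared residuals
            r2 = (y - X @ beta) ** 2
            gamma = np.linalg.lstsq(X, np.log(r2 + 1e-12), rcond=None)[0]
            # 2) weighted least squares with estimated weights 1 / variance
            w = np.exp(-X @ gamma)
            Xw = X * w[:, None]
            beta = np.linalg.solve(Xw.T @ X, Xw.T @ y)
        return beta, gamma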
125.
Kernel-based density estimation algorithms are inefficient in the presence of discontinuities at support endpoints, largely because classic kernel density estimators yield positive estimates beyond the endpoints. If a nonparametric estimate of a density functional is required to determine the bandwidth, the problem also affects the bandwidth selection procedure. In this paper, algorithms for bandwidth selection and kernel density estimation are proposed for non-negative random variables. The proposed methods are then compared with some of the principal solutions in the literature through a simulation study.
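One standard remedy for this boundary problem is reflection at the support endpoint. The sketch below shows a reflected Gaussian kernel estimator with a rule-of-thumb bandwidth, as a stand-in for, not a reproduction of, the estimators proposed in the paper:

    import numpy as np

    def reflected_kde(x, data, h):
        """Gaussian KDE with reflection at 0: no probability mass leaks below the
        support endpoint, unlike a classic kernel estimator on non-negative data."""
        x = np.asarray(x, dtype=float)[:, None]
        d = np.asarray(data, dtype=float)[None, :]
        k = lambda u: np.exp(-0.5 * u ** 2) / np.sqrt(2 * np.pi)
        # reflect the sample across the endpoint 0 and add both contributions
        dens = (k((x - d) / h) + k((x + d) / h)).mean(axis=1) / h
        return np.where(x[:, 0] >= 0, dens, 0.0)

    rng = np.random.default_rng(1)
    sample = rng.exponential(size=500)
    h = 1.06 * sample.std() * len(sample) ** (-1 / 5)   # rule-of-thumb bandwidth
    print(reflected_kde([0.0, 0.5, 1.0], sample, h))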
126.
A new method for constructing interpretable principal components is proposed. The method first clusters the variables, and then constructs interpretable (sparse) components from the correlation matrices of the clustered variables. For the first step, a new weighted-variances method for clustering variables is proposed. It reflects the nature of the problem: the interpretable components should maximize the explained variance and thus provide sparse dimension reduction. An important feature of the new clustering procedure is that the optimal number of clusters (and components) can be determined in a non-subjective manner. The new method is illustrated on well-known simulated and real data sets. It clearly outperforms many existing methods for sparse principal component analysis in terms of both explained variance and sparseness.
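A sketch of the two-step construction: variables are clustered by correlation, and each component loads only on one cluster's variables, so sparseness holds by construction. The paper's weighted-variances clustering criterion and its non-subjective choice of the number of clusters are not reproduced here:

    import numpy as np
    from scipy.cluster.hierarchy import linkage, fcluster

    def clustered_sparse_components(X, n_clusters):
        """Cluster variables by correlation, then take the leading principal
        component within each cluster (a generic stand-in for the paper's method)."""
        R = np.corrcoef(X, rowvar=False)
        # distance between variables: 1 - |correlation|, condensed form
        d = 1 - np.abs(R[np.triu_indices_from(R, k=1)])
        labels = fcluster(linkage(d, method="average"), n_clusters, criterion="maxclust")
        p = X.shape[1]
        loadings = np.zeros((p, n_clusters))
        for c in range(1, n_clusters + 1):
            idx = np.where(labels == c)[0]
            if len(idx) == 1:
                loadings[idx[0], c - 1] = 1.0
            else:
                w, v = np.linalg.eigh(np.corrcoef(X[:, idx], rowvar=False))
                loadings[idx, c - 1] = v[:, -1]      # leading eigenvector of the block
        return loadings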
127.
Exploratory Factor Analysis (EFA) and Principal Component Analysis (PCA) are popular techniques for simplifying the presentation of, and investigating the structure of, an (n×p) data matrix. However, these fundamentally different techniques are frequently confused, and the differences between them are obscured, because they give similar results in some practical cases. We therefore investigate conditions under which they are expected to be close to each other, by considering EFA as a matrix decomposition so that it can be compared directly with the data matrix decomposition underlying PCA. Correspondingly, we propose an extended version of PCA, called the EFA-like PCA, which mimics the EFA matrix decomposition in the sense that the two contain the same unknowns. We provide iterative algorithms for estimating the EFA-like PCA parameters, and derive conditions that must be satisfied for the two techniques to give similar results. Throughout, we consider separately the cases n > p and p ≥ n. All derived algorithms and matrix conditions are illustrated on two data sets, one for each of these two cases.
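The contrast can be made concrete through the two matrix decompositions: PCA truncates the SVD of the centered data matrix, while EFA models the correlation matrix as common-factor loadings plus a diagonal uniqueness term. The sketch below uses a textbook iterated principal-factor algorithm for the EFA side, not the paper's EFA-like PCA estimation:

    import numpy as np

    def pca_loadings(X, k):
        """Loadings from the data-matrix SVD underlying PCA: X ~ U_k S_k V_k'."""
        Xc = X - X.mean(axis=0)
        U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
        return Vt[:k].T * s[:k] / np.sqrt(len(X) - 1)

    def principal_factor_efa(X, k, n_iter=50):
        """Iterated principal-factor EFA: R ~ L L' + Psi with diagonal Psi.
        A textbook algorithm, shown only to contrast the two decompositions."""
        R = np.corrcoef(X, rowvar=False)
        psi = np.full(R.shape[0], 0.5)               # initial uniquenesses
        for _ in range(n_iter):
            w, v = np.linalg.eigh(R - np.diag(psi))
            L = v[:, -k:] * np.sqrt(np.maximum(w[-k:], 0))
            psi = np.clip(1 - (L ** 2).sum(axis=1), 1e-6, 1)   # update uniquenesses
        return L, psi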
128.
Troutt (1991, 1993) proposed the idea of the vertical density representation (VDR), based on the Box-Muller method. Kotz, Fang and Liang (1997) provided a systematic study of the multivariate vertical density representation (MVDR). Suppose that we want to generate a random vector X in R^n that has a density function f(x). The key point of using the MVDR is to generate the uniform distribution on the set {x : f(x) = v} for any v > 0, which is a surface in R^n. In this paper we use the conditional distribution method to generate the uniform distribution on a domain or on a surface, and based on it we propose an alternative version of the MVDR (the type-2 MVDR), by which one can transfer the problem of generating a random vector X with a given density f to that of generating (X, X_{n+1}), which follows the uniform distribution on a region in R^{n+1} defined by f. Several examples indicate that the proposed method is quite practical.
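The region-based idea rests on the classical result that if (X, V) is uniform on the set {(x, v) : 0 <= v <= f(x)} in R^{n+1}, then X has density f. The sketch below generates such uniform points by rejection from a bounding box; the paper's conditional distribution construction is not reproduced:

    import numpy as np

    def uniform_under_density(f, lo, hi, f_max, n, dim, seed=2):
        """Draw points uniformly from {(x, v) : 0 <= v <= f(x)} by rejection from
        the box [lo, hi]^dim x [0, f_max]; the x-coordinates then follow f."""
        rng = np.random.default_rng(seed)
        out = []
        while len(out) < n:
            x = rng.uniform(lo, hi, size=dim)
            v = rng.uniform(0, f_max)
            if v <= f(x):                      # keep points under the density surface
                out.append(np.append(x, v))
        return np.array(out)

    # example: bivariate standard normal density on the box [-5, 5]^2
    f = lambda x: np.exp(-0.5 * x @ x) / (2 * np.pi)
    pts = uniform_under_density(f, -5.0, 5.0, f(np.zeros(2)), 1000, 2)
    print(pts[:, :2].mean(axis=0))   # x-coordinates are approximately N(0, I) draws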
129.
130.
DIMITROV, RACHEV and YAKOVLEV (1985) obtained the isotonic maximum likelihood estimator for the bimodal failure rate function. The authors considered only complete failure time data. A generalization of this estimator to the case of censored and tied observations is now proposed.
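The monotone building block of such isotonic estimators is the pool-adjacent-violators algorithm (PAVA), sketched below for the weighted least-squares case; handling bimodality, censoring and ties as proposed in the paper requires additional machinery and is not reproduced here:

    import numpy as np

    def pava(y, w):
        """Pool Adjacent Violators: weighted fit that is non-decreasing. Isotonic
        failure-rate MLEs apply steps like this on each monotone segment."""
        y = list(map(float, y)); w = list(map(float, w))
        vals, wts, cnt = [], [], []
        for i in range(len(y)):
            vals.append(y[i]); wts.append(w[i]); cnt.append(1)
            while len(vals) > 1 and vals[-2] > vals[-1]:   # violator: pool blocks
                v = (wts[-2] * vals[-2] + wts[-1] * vals[-1]) / (wts[-2] + wts[-1])
                vals[-2] = v
                wts[-2] += wts[-1]; cnt[-2] += cnt[-1]
                vals.pop(); wts.pop(); cnt.pop()
        return np.repeat(vals, cnt)

    print(pava([1, 3, 2, 4, 3, 5], [1, 1, 1, 1, 1, 1]))   # -> non-decreasing fit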