151.
ABSTRACT

An important problem commonly arises in multi-dimensional hypothesis testing when the variables are correlated. In this framework, the non-parametric combination (NPC) of a finite number of dependent permutation tests is suitable for almost all real situations of practical interest, since the dependence relations among the partial tests are implicitly captured by the combining procedure itself, without the need to specify them [Pesarin F, Salmaso L. Permutation tests for complex data: theory, applications and software. Chichester: Wiley; 2010a]. An open problem for NPC-based tests is the impact of the dependency structure on the combined tests, especially in the presence of categorical variables. This paper first investigates, via Monte Carlo simulation, the impact of the dependency structure on the significance of combined tests for ordered categorical responses, and then proposes specific procedures aimed at improving the power of multivariate combination-based permutation tests. The results show that an increasing level of correlation/association among responses negatively affects the power of combination-based multivariate permutation tests. Applying special combination functions based on the truncated product method [Zaykin DV, Zhivotovsky LA, Westfall PH, Weir BS. Truncated product method for combining p-values. Genet Epidemiol. 2002;22:170–185; Dudbridge F, Koeleman BPC. Rank truncated product of p-values, with application to genomewide association scans. Genet Epidemiol. 2003;25:360–366] or on the Liptak combination allowed us to demonstrate, again by Monte Carlo simulation, that this negative effect of correlation/association on power can be mitigated.
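The truncated product method cited in this abstract can be illustrated compactly. The sketch below is not the authors' NPC software: it computes the Zaykin et al. statistic (the product of the p-values at or below a truncation point τ) and calibrates it by Monte Carlo under the simplifying assumption of independent uniform p-values (the paper's setting has dependent tests, handled there by permutation).

```python
import numpy as np

def truncated_product(pvals, tau=0.05):
    """Truncated product statistic (Zaykin et al., 2002): the product of
    the p-values that fall at or below the truncation point tau."""
    p = np.asarray(pvals, float)
    kept = p[p <= tau]
    return float(np.prod(kept)) if kept.size else 1.0

def tpm_pvalue(pvals, tau=0.05, n_sim=100_000, seed=0):
    """Monte Carlo p-value for the truncated product statistic, assuming
    independent Uniform(0,1) p-values under the global null."""
    rng = np.random.default_rng(seed)
    w_obs = truncated_product(pvals, tau)
    u = rng.uniform(size=(n_sim, len(pvals)))
    # p-values above tau contribute a factor of 1 to the product
    w_null = np.where(u <= tau, u, 1.0).prod(axis=1)
    return float((w_null <= w_obs).mean())
```

Small p-values below τ drive the statistic toward zero, so small observed products yield small combined p-values.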
152.
The construction and enumeration of (0, 1)-matrices with given line-sums is described for the rectangular cases often encountered in applications. Improved approximations are provided for the number of such matrices. Some new enumeration results for semi-regular bipartite graphs are included, and the related category of the quasi-semiregular bipartite graphs is recognized. The range of certain elements of products of a (0, 1)-matrix is considered as a function of the line-sums. This, in turn, is related to the range in the numbers of interchanges available. Improvements in statistical practice that come from these constructions and enumerations are described.
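A standard greedy, Gale–Ryser-style construction of a (0, 1)-matrix with prescribed line-sums can serve as a concrete illustration of the constructions discussed; this sketch is not taken from the paper itself.

```python
def binary_matrix(row_sums, col_sums):
    """Greedy Gale-Ryser-style construction of a (0,1)-matrix with the
    given row and column sums; returns None if no such matrix exists."""
    if sum(row_sums) != sum(col_sums):
        return None
    m, n = len(row_sums), len(col_sums)
    cols = list(col_sums)
    M = [[0] * n for _ in range(m)]
    # Fill rows in decreasing order of row sum, always placing the 1s
    # in the columns with the largest remaining column sums.
    for i in sorted(range(m), key=lambda r: -row_sums[r]):
        for j in sorted(range(n), key=lambda c: -cols[c])[:row_sums[i]]:
            if cols[j] == 0:          # column demand exhausted: infeasible
                return None
            M[i][j] = 1
            cols[j] -= 1
    return M if all(c == 0 for c in cols) else None
```

Once one matrix is found, others with the same line-sums are reachable by the "interchanges" mentioned in the abstract (swapping a 2×2 submatrix between the patterns 10/01 and 01/10).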
153.
Concept mapping is now a commonly used technique for articulating and evaluating programmatic outcomes. However, research regarding the validity of knowledge and outcomes produced with concept mapping is sparse. The current study describes quantitative validity analyses using a concept mapping dataset. We sought to increase the validity of concept mapping evaluation results by running multiple cluster analysis methods and then using several metrics to choose among the solutions. We present four different clustering methods based on analyses using the R statistical software package: partitioning around medoids (PAM), fuzzy analysis (FANNY), agglomerative nesting (AGNES), and divisive analysis (DIANA). We then used the Dunn and Davies–Bouldin indices to assist in choosing a valid cluster solution for a concept mapping outcomes evaluation. We conclude that the validity of the outcomes map is high, based on the analyses described. Finally, we discuss areas for further concept mapping methods research.
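As a small illustration of the internal validity indices mentioned (the paper's analyses used R), the Dunn index can be computed in plain NumPy; this is a generic sketch, not the study's code.

```python
import numpy as np

def dunn_index(X, labels):
    """Dunn index: smallest between-cluster distance divided by the
    largest within-cluster diameter; higher values indicate compact,
    well-separated clusters."""
    X = np.asarray(X, float)
    labels = np.asarray(labels)
    clusters = [X[labels == k] for k in np.unique(labels)]
    # largest pairwise distance inside any single cluster
    diam = max(np.linalg.norm(c[:, None] - c[None, :], axis=-1).max()
               for c in clusters)
    # smallest pairwise distance between points of different clusters
    sep = min(np.linalg.norm(a[:, None] - b[None, :], axis=-1).min()
              for i, a in enumerate(clusters) for b in clusters[i + 1:])
    return sep / diam
```

The Davies–Bouldin index is available as `sklearn.metrics.davies_bouldin_score`; for it, lower values are better, the reverse of the Dunn index.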
154.
Tiered electricity pricing not only guides residents toward reasonable, economical electricity use, but also reduces cross-subsidies in electricity prices among users. In practice, however, it involves several sources of uncertainty, such as fluctuations in residential electricity demand and the ranges chosen for each pricing tier. To address demand fluctuation, this paper proposes a Bayesian-estimation model of electricity demand under tiered pricing. We first specify a demand function based on the tiered tariff; we then carry out a Bayesian analysis of this demand function, covering four aspects: the statistical model, the likelihood function, the posterior distribution, and convergence acceleration. Finally, we estimate the model on consumption data for 1250 households, substitute the influencing factors into the model to obtain each household's electricity demand, and confirm the suitability of Bayesian estimation for building the demand model.
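The paper's tiered-pricing demand model is not reproduced here. As a toy illustration of the Bayesian estimation step, the sketch below fits a conjugate Bayesian linear regression with a known noise variance and independent normal priors on the coefficients; the prior/noise parameters are assumptions for the example.

```python
import numpy as np

def bayes_linreg(X, y, sigma2=1.0, tau2=10.0):
    """Conjugate Bayesian linear regression with known noise variance
    sigma2 and independent N(0, tau2) priors on each coefficient.
    Returns the posterior mean and covariance of the coefficients."""
    X = np.asarray(X, float)
    y = np.asarray(y, float)
    # posterior precision = prior precision + data precision
    prec = X.T @ X / sigma2 + np.eye(X.shape[1]) / tau2
    cov = np.linalg.inv(prec)
    mean = cov @ (X.T @ y / sigma2)
    return mean, cov
```

With enough data the posterior mean approaches the least-squares fit, while the prior keeps estimates stable for households with few observations.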
155.
Zhao Bingxiang. 《社会》 (Society). 2019;39(1):37-70
In field ethnography and historiography, the biographical method is an important mode of narrative. Starting from an analysis of Lin Yaohua's works, this paper situates them within the sociological and anthropological currents of thought since the 1920s, and on that basis explores the potential of the biographical method for the study of Chinese history and society. The paper synthesizes the genealogical method, individual life history, and the theory of social life into a "biographical triangle." Building on the (extended) genealogical method and theories of social structure, the life-biographical method can present the life histories of one or more persons as fragments of a larger history. The phenomenological approach, however, has its limits: it cannot dispense with an examination of the central state itself and of the total structure. Only by presenting life biographies within the changing course of "the times" (时势) and "social structure" can they become more than individual stories and offer a powerful way of narrating the "total" social life of China.
156.
In the statistical process control literature, there exist several improved quality control charts based on cost-effective sampling schemes, including ranked set sampling (RSS) and median RSS (MRSS). A generalized cost-effective RSS scheme, varied L RSS (VLRSS), has recently been introduced for efficiently estimating the population mean. In this article, we propose a new exponentially weighted moving average (EWMA) control chart for monitoring the process mean using VLRSS, named the EWMA-VLRSS chart, under both perfect and imperfect rankings. The EWMA-VLRSS chart encompasses the existing EWMA charts based on RSS and MRSS (named the EWMA-RSS and EWMA-MRSS charts). We use extensive Monte Carlo simulations to compute the run length characteristics of the EWMA-VLRSS chart. The proposed chart is then compared with the existing EWMA charts. It is found that, with either perfect or imperfect rankings, the EWMA-VLRSS chart is more sensitive than the EWMA-RSS and EWMA-MRSS charts in detecting small to large shifts in the process mean. A real dataset is also used to explain the working of the EWMA-VLRSS chart.
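The EWMA statistic and its exact (time-varying) control limits follow standard formulas; the sketch below shows a plain EWMA chart, not the VLRSS-specific variant proposed in the paper.

```python
import numpy as np

def ewma_chart(x, mu0, sigma, lam=0.2, L=3.0):
    """EWMA control chart: z_t = lam*x_t + (1-lam)*z_{t-1}, z_0 = mu0,
    with exact limits mu0 +/- L*sigma*sqrt(lam/(2-lam)*(1-(1-lam)**(2t))).
    Returns a list of (z_t, lower, upper, out_of_control) tuples."""
    z, out = mu0, []
    for t, xt in enumerate(x, start=1):
        z = lam * xt + (1 - lam) * z
        w = L * sigma * np.sqrt(lam / (2 - lam) * (1 - (1 - lam) ** (2 * t)))
        out.append((z, mu0 - w, mu0 + w, not (mu0 - w <= z <= mu0 + w)))
    return out
```

The VLRSS refinement enters through the sampling of each `x_t` (and hence `sigma`), not through the charting recursion itself.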
157.
In this paper, we present ClusterKDE, a clustering algorithm based on univariate kernel density estimation. It is an iterative procedure in which, at each step, a new cluster is obtained by minimizing a smooth kernel function. Although our applications use the univariate Gaussian kernel, any smooth kernel function can be used. The proposed algorithm has the advantage of not requiring the number of clusters a priori. Furthermore, the ClusterKDE algorithm is very simple, easy to implement, and well defined, and it stops in a finite number of steps; that is, it always converges independently of the initial point. We also illustrate our findings with numerical experiments obtained by implementing the algorithm in Matlab and applying it to practical problems. The results indicate that ClusterKDE is competitive and fast when compared with the well-known clusterdata and k-means algorithms used by Matlab for clustering data.
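To convey the flavor of density-based univariate clustering, here is a deliberately simple toy (not the paper's ClusterKDE, which works by iteratively minimizing a smooth kernel function): it evaluates a Gaussian kernel density on a grid and splits the data at the local minima of the density.

```python
import numpy as np

def kde_clusters(x, h=0.5, grid_n=512):
    """Toy 1-D kernel-density clustering (illustrative only): evaluate a
    Gaussian kernel density on a grid and cut the data at the strict
    local minima of the density."""
    x = np.sort(np.asarray(x, float))
    g = np.linspace(x[0] - 3 * h, x[-1] + 3 * h, grid_n)
    dens = np.exp(-0.5 * ((g[:, None] - x[None, :]) / h) ** 2).sum(axis=1)
    # grid points that are strict local minima of the density
    cuts = g[1:-1][(dens[1:-1] < dens[:-2]) & (dens[1:-1] < dens[2:])]
    return np.searchsorted(cuts, x)  # cluster label for each sorted point
```

Like ClusterKDE, this needs no pre-specified number of clusters; the bandwidth `h` controls how many modes, and hence clusters, the density exhibits.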
158.
When sampling from a continuous population (or distribution), we often want a rather small sample due to some cost attached to processing the sample or to collecting information in the field. Moreover, a probability sample that allows for design‐based statistical inference is often desired. Given these requirements, we want to reduce the sampling variance of the Horvitz–Thompson estimator as much as possible. To achieve this, we introduce different approaches to using the local pivotal method for selecting well‐spread samples from multidimensional continuous populations. The results of a simulation study clearly indicate that we succeed in selecting spatially balanced samples and improve the efficiency of the Horvitz–Thompson estimator.
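The local pivotal method itself admits a short sketch: two nearby units repeatedly "fight" over their inclusion probabilities until every probability is 0 or 1, which pushes selected units apart in space. This is a generic illustration of the method (for finite point sets), not the paper's extensions to continuous populations.

```python
import numpy as np

def local_pivotal(coords, p, seed=0):
    """Local pivotal method sketch: pairwise updates between a random
    active unit and its nearest active neighbour preserve each unit's
    inclusion probability in expectation and yield a spatially
    balanced 0/1 sample."""
    rng = np.random.default_rng(seed)
    coords = np.asarray(coords, float)
    p = np.asarray(p, float).copy()
    eps = 1e-9
    active = lambda: np.flatnonzero((p > eps) & (p < 1 - eps))
    idx = active()
    while len(idx) > 1:
        i = rng.choice(idx)
        others = idx[idx != i]
        j = others[np.argmin(np.linalg.norm(coords[others] - coords[i], axis=1))]
        s = p[i] + p[j]
        if s < 1:   # one unit takes the combined mass, the other drops out
            if rng.random() < p[j] / s:
                p[i], p[j] = 0.0, s
            else:
                p[i], p[j] = s, 0.0
        else:       # one unit is selected, the other keeps the remainder
            if rng.random() < (1 - p[j]) / (2 - s):
                p[i], p[j] = 1.0, s - 1
            else:
                p[i], p[j] = s - 1, 1.0
        idx = active()
    return np.round(p).astype(int)
```

Because the pairwise update preserves the sum of the two probabilities, the realized sample size equals the sum of the initial inclusion probabilities whenever that sum is an integer.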
159.
This study develops a robust automatic algorithm for clustering probability density functions, building on previous research. Unlike other existing methods, which often pre-determine the number of clusters, this method can self-organize data groups based on the original data structure. The proposed clustering method is also robust to noise. Three synthetic-data examples and a real-world COREL dataset are used to illustrate the accuracy and effectiveness of the proposed approach.
160.
The Hodrick–Prescott (HP) filter is frequently used in macroeconometrics to decompose time series, such as real gross domestic product, into trend and cyclical components. Because the HP filter is a basic econometric tool, a precise understanding of its nature is necessary. This article contributes to the literature by listing several (penalized) least-squares problems related to the HP filter, three of which are newly introduced in the article, and by showing their properties. We also remark on their generalization.
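The penalized least-squares problem behind the HP filter has a closed-form solution: the trend is tau = (I + lambda * D'D)^{-1} y, where D is the second-difference matrix. A dense-matrix sketch (illustrative; production code would exploit the banded structure):

```python
import numpy as np

def hp_filter(y, lam=1600.0):
    """Hodrick-Prescott decomposition: the trend minimizes
    sum (y_t - tau_t)^2 + lam * sum (second differences of tau)^2,
    solved via (I + lam * D'D) tau = y with D the (n-2) x n
    second-difference matrix. Returns (trend, cycle)."""
    y = np.asarray(y, float)
    n = len(y)
    D = np.zeros((n - 2, n))
    for i in range(n - 2):
        D[i, i:i + 3] = [1.0, -2.0, 1.0]
    trend = np.linalg.solve(np.eye(n) + lam * (D.T @ D), y)
    return trend, y - trend
```

Since the penalty vanishes on linear series (their second differences are zero), a linear input is returned unchanged as trend, with a zero cycle.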