181.
ABSTRACT

A statistical test can be seen as a procedure for producing a decision based on observed data, where some decisions consist of rejecting a hypothesis (yielding a significant result) and some do not, and where one controls the probability of making a wrong rejection at some prespecified significance level. Whereas traditional hypothesis testing involves only two possible decisions (to reject or not reject a null hypothesis), Kaiser's directional two-sided test, as well as the more recently introduced testing procedure of Jones and Tukey, each equivalent to running two one-sided tests, involves three possible decisions when inferring the value of a unidimensional parameter. The latter procedure assumes that a point null hypothesis is impossible (e.g., that two treatments cannot have exactly the same effect), allowing a gain in statistical power. There are, however, situations where a point hypothesis is indeed plausible, for example, when considering hypotheses derived from Einstein's theories. In this article, we introduce a five-decision testing procedure, equivalent to running a traditional two-sided test in addition to two one-sided tests, which combines the advantages of the procedures of Kaiser (no assumption that a point hypothesis is impossible) and of Jones and Tukey (higher power). Compared with the traditional approach, it allows a nonnegligible (typically 20%) reduction of the sample size needed to reach a given statistical power for obtaining a significant result.
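Since the procedure is described as equivalent to a traditional two-sided test run alongside two one-sided tests, here is a minimal Python sketch of those three ingredient tests on simulated data, using one-sample t-tests from scipy. How the three outcomes map onto the paper's five decisions, and the levels assigned to each test, follow the article; the decision labels and the alpha/2 splits below are only an illustrative reading.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
x = rng.normal(0.3, 1.0, 40)   # sample to be tested against theta0 = 0
alpha = 0.05

# the three ingredient tests (one-sample t-tests here)
p_two  = stats.ttest_1samp(x, 0.0, alternative="two-sided").pvalue
p_less = stats.ttest_1samp(x, 0.0, alternative="less").pvalue
p_more = stats.ttest_1samp(x, 0.0, alternative="greater").pvalue

# one illustrative way of reporting the outcomes; the exact mapping onto the
# paper's five decisions (and the levels used for each test) follows the article
if p_more < alpha / 2:
    print("conclude theta > theta0")
elif p_less < alpha / 2:
    print("conclude theta < theta0")
elif p_two < alpha:
    print("conclude theta != theta0, direction not established")
else:
    print("no significant result at level", alpha)
```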
182.
In this work, we discuss the class of bilinear GARCH (BL-GARCH) models, which are capable of capturing simultaneously two key properties of non-linear time series: volatility clustering and leverage effects. It has often been observed that the marginal distributions of such time series have heavy tails; we therefore examine the BL-GARCH model in a general setting under several non-normal distributions. We investigate some probabilistic properties of this model and conduct a Monte Carlo experiment to evaluate the small-sample performance of the maximum likelihood estimation (MLE) methodology for various models. Finally, within-sample estimation properties are studied using S&P 500 daily returns, where the features of interest manifest as volatility clustering and leverage effects. The main results suggest that the Student-t BL-GARCH model is highly appropriate for describing the S&P 500 daily returns.
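As a rough illustration of the MLE step, the sketch below fits a plain GARCH(1,1) with standardized Student-t innovations to simulated returns by numerical maximum likelihood. It is only a stand-in under stated assumptions: it omits the bilinear leverage terms that define the BL-GARCH specification, and the simulated series is a placeholder for the S&P 500 data.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import gammaln

def garch_t_negloglik(params, r):
    """Negative log-likelihood of a plain GARCH(1,1) with standardized Student-t errors."""
    omega, alpha, beta, nu = params
    h = np.empty(len(r))
    h[0] = np.var(r)
    for t in range(1, len(r)):
        h[t] = omega + alpha * r[t - 1] ** 2 + beta * h[t - 1]
    const = gammaln((nu + 1) / 2) - gammaln(nu / 2) - 0.5 * np.log(np.pi * (nu - 2))
    ll = const - 0.5 * np.log(h) - (nu + 1) / 2 * np.log(1 + r ** 2 / ((nu - 2) * h))
    return -np.sum(ll)

# simulated heavy-tailed returns as a placeholder for the S&P 500 series
rng = np.random.default_rng(0)
r = 0.01 * rng.standard_t(df=8, size=2000)

res = minimize(garch_t_negloglik, x0=[1e-6, 0.05, 0.90, 8.0], args=(r,),
               bounds=[(1e-12, None), (0.0, 1.0), (0.0, 1.0), (2.05, 50.0)],
               method="L-BFGS-B")
print("omega, alpha, beta, nu:", res.x)
```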
183.
The test rig is the key apparatus, the basic equipment, and a necessary means for solving the problem of borehole cleaning in air-mist (atomized-air) drilling. This paper analyzes in depth the development of an air-mist drilling test rig, covering the basic principles to be followed in developing the rig, its basic structure and main parameters, a reliability analysis of the test results, the progress of the experimental research, and the lessons learned during development, reflecting the distinctive features of the test rig development work.
184.
ABSTRACT

An important problem commonly arises in multi-dimensional hypothesis testing when the variables are correlated. In this framework, the non-parametric combination (NPC) of a finite number of dependent permutation tests is suitable for covering almost all real situations of practical interest, since the dependence relations among partial tests are implicitly captured by the combining procedure itself, without the need to specify them [Pesarin F, Salmaso L. Permutation tests for complex data: theory, applications and software. Chichester: Wiley; 2010a]. An open problem related to NPC-based tests is the impact of the dependency structure on the combined tests, especially in the presence of categorical variables. This paper's goal is, first, to investigate the impact of the dependency structure on the possible significance of combined tests in the case of ordered categorical responses using Monte Carlo simulations, and then to propose some specific procedures aimed at improving the power of multivariate combination-based permutation tests. The results show that an increasing level of correlation/association among responses negatively affects the power of combination-based multivariate permutation tests. Using Monte Carlo simulations, we show that special forms of combination functions based on the truncated product method [Zaykin DV, Zhivotovsky LA, Westfall PH, Weir BS. Truncated product method for combining p-values. Genet Epidemiol. 2002;22:170–185; Dudbridge F, Koeleman BPC. Rank truncated product of p-values, with application to genomewide association scans. Genet Epidemiol. 2003;25:360–366] or on the Liptak combination can mitigate this negative effect of correlation/association among responses on the power of combination-based multivariate permutation tests.
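A minimal sketch of the nonparametric combination idea with two of the combining functions named above (Liptak and the truncated product). The two-sample layout, the mean-difference partial statistics, and all numerical settings are illustrative assumptions rather than the paper's simulation design; dependence among the partial tests is captured implicitly by permuting whole observation vectors.

```python
import numpy as np
from scipy.stats import norm

def liptak(p):
    """Liptak combining function: sum of inverse-normal transforms of 1 - p."""
    p = np.clip(p, 1e-12, 1 - 1e-12)
    return norm.ppf(1 - p).sum(axis=-1)

def truncated_product(p, tau=0.05):
    """Truncated product combining function: -sum of log p-values not exceeding tau."""
    p = np.clip(p, 1e-12, 1.0)
    return -(np.log(p) * (p <= tau)).sum(axis=-1)

def npc_two_sample(x, y, combine=liptak, n_perm=1000, seed=None):
    """NPC of K partial two-sample tests (difference in means per variable).
    Dependence among the partial tests is handled implicitly because whole
    observation vectors are permuted between the two groups."""
    rng = np.random.default_rng(seed)
    data = np.vstack([x, y])
    n_x, n = len(x), len(data)

    def stat(d):
        return d[:n_x].mean(axis=0) - d[n_x:].mean(axis=0)   # K partial statistics

    obs = stat(data)
    perm = np.array([stat(data[rng.permutation(n)]) for _ in range(n_perm)])
    p_obs = (perm >= obs).mean(axis=0)                       # partial p-values (one-sided)
    p_perm = (perm[:, None, :] >= perm[None, :, :]).mean(axis=0)
    return (combine(p_perm) >= combine(p_obs)).mean()        # combined permutation p-value

# example: K = 3 correlated responses, group means shifted by 0.4
rng = np.random.default_rng(1)
cov = np.full((3, 3), 0.6) + 0.4 * np.eye(3)
x = rng.multivariate_normal([0.4, 0.4, 0.4], cov, size=30)
y = rng.multivariate_normal([0.0, 0.0, 0.0], cov, size=30)
print("Liptak NPC p-value:", npc_two_sample(x, y, combine=liptak, seed=2))
print("Truncated product NPC p-value:", npc_two_sample(x, y, combine=truncated_product, seed=2))
```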
185.
In this study, we develop a test based on a computational approach for the equality of variances of several normal populations. The proposed method is numerically compared with existing methods. The numerical results demonstrate that the proposed method performs very well in terms of type I error rate and power. Furthermore, we study the robustness of the tests through a simulation study in which the underlying data come from t, exponential, and uniform distributions. Finally, we apply the proposed test to a real dataset that motivated our study.
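The paper's computational approach test is not reproduced here; as a sketch of the general idea, the code below builds a parametric-bootstrap test of equal variances around Bartlett's statistic, resimulating data from normal distributions fitted under the null, and compares its p-value with scipy's standard Bartlett test.

```python
import numpy as np
from scipy import stats

def bartlett_stat(samples):
    """Bartlett's statistic for homogeneity of variances of k samples."""
    k = len(samples)
    n = np.array([len(s) for s in samples])
    s2 = np.array([np.var(s, ddof=1) for s in samples])
    N = n.sum()
    sp2 = ((n - 1) * s2).sum() / (N - k)
    num = (N - k) * np.log(sp2) - ((n - 1) * np.log(s2)).sum()
    den = 1 + (np.sum(1 / (n - 1)) - 1 / (N - k)) / (3 * (k - 1))
    return num / den

def bootstrap_variance_test(samples, n_sim=5000, seed=None):
    """Parametric-bootstrap p-value for H0: equal variances. Data are resimulated
    from normal distributions fitted under the null (common pooled variance)."""
    rng = np.random.default_rng(seed)
    t_obs = bartlett_stat(samples)
    n = [len(s) for s in samples]
    mu = [np.mean(s) for s in samples]
    sp2 = sum((ni - 1) * np.var(s, ddof=1) for ni, s in zip(n, samples)) / (sum(n) - len(samples))
    t_sim = np.array([bartlett_stat([rng.normal(m, np.sqrt(sp2), size=ni)
                                     for m, ni in zip(mu, n)]) for _ in range(n_sim)])
    return (t_sim >= t_obs).mean()

# example: three normal samples with a common variance
rng = np.random.default_rng(3)
groups = [rng.normal(0, 2, 20), rng.normal(1, 2, 25), rng.normal(-1, 2, 30)]
print("bootstrap p-value:", bootstrap_variance_test(groups, seed=4))
print("Bartlett p-value :", stats.bartlett(*groups).pvalue)
```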
186.
Many studies have compared the power of several goodness-of-fit (GOF) tests under simple random sampling (SRS) and ranked set sampling (RSS). In our study, a different design procedure and ranking process for RSS are thoroughly investigated. A simulation study is conducted to compare the power of the Kolmogorov–Smirnov test under SRS and RSS with different set and cycle sizes for several distributions. A level-2 sampling design and partially rank-ordered sets are used, and auxiliary variables are exploited in the ranking process. The results are presented in tables and figures. Under these conditions, we show that RSS outperforms SRS in finite populations.
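A minimal sketch of the mechanics only: it draws a balanced ranked set sample from a standard normal under perfect ranking and applies the Kolmogorov–Smirnov test from scipy to the RSS and to an SRS of the same size. The level-2 design, partially rank-ordered sets, ranking through an auxiliary variable, and the power comparison itself are left to the paper.

```python
import numpy as np
from scipy import stats

def ranked_set_sample(rng, set_size, cycles):
    """Balanced RSS under perfect ranking: in each cycle draw set_size sets of
    set_size units and keep the i-th order statistic of the i-th set."""
    out = []
    for _ in range(cycles):
        for i in range(set_size):
            out.append(np.sort(rng.standard_normal(set_size))[i])
    return np.array(out)

rng = np.random.default_rng(5)
k, m = 4, 10                          # set size and number of cycles, n = k * m
rss = ranked_set_sample(rng, k, m)
srs = rng.standard_normal(k * m)

# Kolmogorov-Smirnov GOF test against the (here correctly specified) N(0, 1)
print("KS on RSS:", stats.kstest(rss, "norm"))
print("KS on SRS:", stats.kstest(srs, "norm"))
```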
187.
Tiered electricity pricing not only guides residents toward reasonable and economical electricity use, but also reduces cross-subsidies between users. In practice, however, it involves several sources of uncertainty, such as fluctuations in residential electricity demand and the ranges set for each pricing tier. To address fluctuations in residential demand, this paper proposes an electricity demand model under tiered pricing based on Bayesian estimation. We first specify a demand function under tiered pricing; we then carry out a Bayesian analysis of this demand function from four angles: the statistical model, the likelihood function, the posterior distribution, and convergence acceleration. Finally, the model is estimated on consumption data from 1,250 users; feeding the influencing factors into the model yields each user's electricity demand and confirms the suitability of Bayesian estimation for building electricity demand models.
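The paper's demand function and user data are not reproduced here. As a purely illustrative sketch of the Bayesian estimation step, the code below fits a toy log-linear demand model to simulated usage-versus-tier-price data with a random-walk Metropolis sampler; the model form, priors, variable names, and tuning constants are all assumptions.

```python
import numpy as np

# toy data: monthly usage vs. marginal tier price (purely illustrative)
rng = np.random.default_rng(6)
price = rng.uniform(0.5, 1.2, 200)                              # yuan per kWh
usage = np.exp(5.0 - 0.8 * price + rng.normal(0, 0.2, 200))     # kWh per month
y = np.log(usage)

def log_post(theta):
    """Log-posterior of a log-linear demand model with weak normal priors."""
    a, b, log_sigma = theta
    sigma = np.exp(log_sigma)
    resid = y - (a + b * price)
    loglik = -len(y) * log_sigma - 0.5 * np.sum(resid ** 2) / sigma ** 2
    logprior = -0.5 * (a ** 2 + b ** 2 + log_sigma ** 2) / 100
    return loglik + logprior

# random-walk Metropolis sampler, started at the least-squares fit
X = np.column_stack([np.ones_like(price), price])
theta = np.append(np.linalg.lstsq(X, y, rcond=None)[0], np.log(y.std()))
draws = []
for _ in range(20000):
    prop = theta + rng.normal(0, 0.05, 3)
    if np.log(rng.uniform()) < log_post(prop) - log_post(theta):
        theta = prop
    draws.append(theta)
draws = np.array(draws)[10000:]                                 # discard burn-in
print("posterior means (intercept, price effect, log sigma):", draws.mean(axis=0))
```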
188.
赵丙祥 (Zhao Bingxiang), 《社会》 (Society), 2019, 39(1): 37–70
In both field ethnography and historiography, the biographical method is an important mode of narration. Starting from an analysis of the works of Lin Yaohua (林耀华), this article places them within the currents of sociological and anthropological thought since the 1920s, and on that basis explores the potential of the biographical method for the study of Chinese history and society. The article brings the genealogical method, personal life history, and the theory of social life together into a "biographical triangle". Building on the (extended) genealogical method and theories of social structure, the life-biographical method can present the life histories of one or more persons as fragments within a larger history. The phenomenological approach, however, has its own limits and cannot dispense with an examination of the central state itself and of the total structure. Only by presenting life biographies within the changing course of "the times" and "social structure" can they become more than individual stories and offer a powerful way of narrating the "total" social life of China.
189.
In this paper, we present an algorithm for clustering based on univariate kernel density estimation, named ClusterKDE. It consists of an iterative procedure in which, at each step, a new cluster is obtained by minimizing a smooth kernel function. Although our applications use the univariate Gaussian kernel, any smooth kernel function can be used. The proposed algorithm has the advantage of not requiring the number of clusters a priori. Furthermore, the ClusterKDE algorithm is very simple, easy to implement, and well defined, and it stops in a finite number of steps; that is, it always converges regardless of the initial point. We also illustrate our findings with numerical experiments obtained by implementing the algorithm in Matlab and applying it to practical problems. The results indicate that the ClusterKDE algorithm is competitive and fast compared with the well-known Clusterdata and K-means algorithms used by Matlab for clustering data.
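As a rough stand-in for univariate KDE-based clustering (not the authors' ClusterKDE algorithm itself, and in Python rather than Matlab), the sketch below estimates a Gaussian KDE and cuts the data at the local minima of the estimated density; the example mixture and grid size are arbitrary choices.

```python
import numpy as np
from scipy.stats import gaussian_kde

def kde_valley_clusters(x, grid_size=512):
    """1-D clustering by cutting the data at local minima (valleys) of a Gaussian KDE."""
    x = np.asarray(x, dtype=float)
    dens = gaussian_kde(x)
    grid = np.linspace(x.min(), x.max(), grid_size)
    d = dens(grid)
    i = np.arange(1, grid_size - 1)
    valleys = grid[i[(d[i] < d[i - 1]) & (d[i] < d[i + 1])]]   # density minima = cut points
    labels = np.searchsorted(valleys, x)                       # points between cuts share a label
    return labels, valleys

# example: a mixture of three well-separated normal components
rng = np.random.default_rng(7)
x = np.concatenate([rng.normal(-5, 1, 100), rng.normal(0, 1, 100), rng.normal(6, 1, 100)])
labels, valleys = kde_valley_clusters(x)
print("cut points:", np.round(valleys, 2))
print("cluster sizes:", np.bincount(labels))
```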
190.
In this paper, we propose a multiple deferred state repetitive group sampling plan, a new plan developed by combining the features of the multiple deferred state sampling plan and the repetitive group sampling plan, for assuring the mean life of products whose lifetimes follow a Weibull or gamma distribution. The quality of the product is represented by the ratio of the true mean life to the specified mean life. The two-points-on-the-operating-characteristic-curve approach is used to determine the optimal parameters of the proposed plan. The plan parameters are determined by formulating an optimization problem for various combinations of producer's risk and consumer's risk under both distributions. A sensitivity analysis of the proposed plan is discussed, and its implementation is explained using real-life and simulated data. The proposed plan under the Weibull distribution is compared with existing sampling plans. The average sample number (ASN) of the proposed plan and the failure probability of the product are obtained under the Weibull, gamma, and Birnbaum–Saunders distributions for a specified value of the shape parameter and compared with each other. In addition, a comparative study is made between the ASN of the proposed plan under the Weibull and gamma distributions.
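A small helper illustrating the quantity that typically drives such life-test plans: the probability that an item fails before the censoring time when its lifetime is Weibull, written in terms of the ratio of true to specified mean life. The shape parameter and termination ratio below are arbitrary assumptions; the search for plan parameters meeting the producer's and consumer's risks is the paper's optimization and is not attempted here.

```python
import numpy as np
from scipy.special import gamma

def weibull_failure_prob(mean_ratio, shape, term_ratio=0.5):
    """P(lifetime < t0) for a Weibull lifetime, where t0 = term_ratio * specified
    mean life and mean_ratio = true mean life / specified mean life."""
    t0_over_scale = term_ratio * gamma(1 + 1 / shape) / mean_ratio
    return 1 - np.exp(-t0_over_scale ** shape)

# the failure probability falls as the true mean life exceeds the specified one
for r in [1.0, 1.5, 2.0, 3.0]:
    print(f"mean ratio {r}: p = {weibull_failure_prob(r, shape=2.0):.4f}")
```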