141.
ABSTRACT

An important problem commonly arises in multi-dimensional hypothesis testing when the variables are correlated. In this framework, the non-parametric combination (NPC) of a finite number of dependent permutation tests is suitable for almost all real situations of practical interest, since the dependence relations among the partial tests are implicitly captured by the combining procedure itself, without the need to specify them [Pesarin F, Salmaso L. Permutation tests for complex data: theory, applications and software. Chichester: Wiley; 2010a]. An open problem for NPC-based tests is the impact of the dependency structure on the combined tests, especially in the presence of categorical variables. This paper first investigates, using Monte Carlo simulations, how the dependency structure affects the significance of combined tests for ordered categorical responses, and then proposes specific procedures aimed at improving the power of multivariate combination-based permutation tests. The results show that an increasing level of correlation/association among responses negatively affects the power of combination-based multivariate permutation tests. Applying special combination functions based on the truncated product method [Zaykin DV, Zhivotovsky LA, Westfall PH, Weir BS. Truncated product method for combining p-values. Genet Epidemiol. 2002;22:170–185; Dudbridge F, Koeleman BPC. Rank truncated product of p-values, with application to genomewide association scans. Genet Epidemiol. 2003;25:360–366] or on the Liptak combination allowed us to demonstrate, again via Monte Carlo simulation, that this negative effect on power can be mitigated.
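As a concrete illustration of the truncated product method cited above, the sketch below computes the Zaykin et al. statistic (the product of all p-values at or below a truncation point tau) and calibrates it by Monte Carlo under independence. Function names and the default tau = 0.05 are illustrative; in the paper's NPC setting the partial tests are dependent, so the null distribution would instead come from the same permutations that produced the partial p-values.

```python
import numpy as np

def truncated_product_stat(pvalues, tau=0.05):
    """Truncated product statistic of Zaykin et al. (2002): the product
    of all p-values that fall at or below the truncation point tau."""
    p = np.asarray(pvalues, float)
    kept = p[p <= tau]
    return kept.prod() if kept.size else 1.0   # empty product is 1

def truncated_product_pvalue(pvalues, tau=0.05, n_sim=20_000, seed=0):
    """Monte Carlo p-value for the statistic under the global null,
    assuming independent U(0,1) partial p-values (an assumption made
    here only to keep the sketch self-contained)."""
    rng = np.random.default_rng(seed)
    w_obs = truncated_product_stat(pvalues, tau)
    sims = rng.uniform(size=(n_sim, len(pvalues)))
    # Replace entries above tau by 1 so they drop out of the product.
    w_sim = np.where(sims <= tau, sims, 1.0).prod(axis=1)
    return (np.count_nonzero(w_sim <= w_obs) + 1) / (n_sim + 1)
```

Because only small p-values enter the product, a few strong partial tests can drive a significant combined test even when the remaining p-values are unremarkable, which is what makes this combining function attractive under correlation.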
142.
Tiered electricity pricing not only encourages residents to use electricity rationally and economically, but also reduces cross-subsidies between users. In practice, however, it is subject to several sources of uncertainty, such as fluctuations in residential electricity demand and the choice of the consumption range for each tier. To address demand fluctuations, this paper proposes an electricity demand model under tiered pricing based on Bayesian estimation. We first formulate a demand function under tiered pricing; we then carry out a Bayesian analysis of this demand function, covering four aspects: the statistical model, the likelihood function, the posterior distribution, and accelerated convergence. Finally, we fit the model to consumption data from 1,250 users, deriving each user's electricity demand from the influencing factors and confirming the suitability of Bayesian estimation for building electricity demand models.
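The abstract does not give the demand function, tariff schedule, or priors, so the sketch below is only a hypothetical stand-in: a three-tier increasing block tariff with made-up thresholds and prices, and a random-walk Metropolis sampler for a log-linear demand model with flat priors and unit error variance. Every name and number here is an assumption, not the paper's specification.

```python
import numpy as np

def tiered_price(q, thresholds=(240.0, 400.0), prices=(0.49, 0.54, 0.79)):
    """Average price per kWh under a three-tier increasing block tariff.
    Thresholds (kWh/month) and prices are invented for illustration."""
    t1, t2 = thresholds
    p1, p2, p3 = prices
    bill = p1 * min(q, t1)
    if q > t1:
        bill += p2 * (min(q, t2) - t1)
    if q > t2:
        bill += p3 * (q - t2)
    return bill / q

def metropolis_demand(log_q, log_p, n_iter=8000, step=0.1, seed=0):
    """Random-walk Metropolis for a hypothetical log-linear demand model
    log q = a - b*log p + eps, eps ~ N(0,1), with flat priors on (a, b).
    Returns the chain of (a, b) draws."""
    rng = np.random.default_rng(seed)
    log_q, log_p = np.asarray(log_q, float), np.asarray(log_p, float)
    theta = np.zeros(2)                       # start at (a, b) = (0, 0)

    def loglik(th):
        resid = log_q - (th[0] - th[1] * log_p)
        return -0.5 * np.sum(resid ** 2)

    ll = loglik(theta)
    draws = np.empty((n_iter, 2))
    for it in range(n_iter):
        prop = theta + step * rng.standard_normal(2)
        ll_prop = loglik(prop)
        if np.log(rng.uniform()) < ll_prop - ll:   # Metropolis accept step
            theta, ll = prop, ll_prop
        draws[it] = theta
    return draws
```

In a tiered-pricing study the marginal price faced by a household itself depends on its consumption, which is one reason the paper needs the full Bayesian machinery rather than a simple regression.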
143.
Zhao Bingxiang. Society (《社会》). 2019, 39(1): 37-70
In ethnographic fieldwork and historiography, the biographical method is an important mode of narrative. Starting from an analysis of Lin Yaohua's works, this paper situates them within the sociological and anthropological currents of thought since the 1920s, and on that basis explores the potential of the biographical method for the study of Chinese history and society. The paper condenses the genealogical method, the personal life history, and the theory of social life into a "biographical triangle." Building on the (extended) genealogical method and theories of social structure, the life-biographical method can present the life histories of one or more persons as fragments of a larger history. The phenomenological approach, however, has its limits: it cannot dispense with an examination of the central state and of the total structure. Only when life biographies are presented within the changing course of "the times" and "social structure" can they rise above individual stories and become a powerful way of narrating the "total" social life of China.
144.
In this paper, we present a clustering algorithm based on univariate kernel density estimation, named ClusterKDE. It is an iterative procedure in which each step obtains a new cluster by minimizing a smooth kernel function. Although our applications use the univariate Gaussian kernel, any smooth kernel function can be used. The proposed algorithm has the advantage of not requiring the number of clusters a priori. Furthermore, ClusterKDE is simple, easy to implement, well defined, and stops in a finite number of steps; that is, it always converges regardless of the initial point. We illustrate our findings with numerical experiments obtained by implementing the algorithm in the software Matlab and applying it to practical problems. The results indicate that ClusterKDE is competitive and fast compared with the well-known Clusterdata and K-means algorithms used by Matlab to cluster data.
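The sketch below is not the ClusterKDE algorithm itself (which peels off one cluster per iteration by minimizing a smoothed kernel function) but a simplified relative: it cuts one-dimensional data at the valleys of a univariate Gaussian KDE. Both share the key property the abstract emphasizes, namely that the number of clusters emerges from the density shape rather than being fixed in advance. Function names and the bandwidth rule are assumptions.

```python
import numpy as np

def kde_grid(x, grid, h):
    """Univariate Gaussian kernel density estimate evaluated on a grid."""
    z = (grid[:, None] - x[None, :]) / h
    return np.exp(-0.5 * z ** 2).sum(axis=1) / (x.size * h * np.sqrt(2 * np.pi))

def kde_clusters(x, h=None, n_grid=512):
    """Cluster 1-D data by cutting at the local minima (valleys) of a
    Gaussian KDE; each point's label is the number of valleys to its left."""
    x = np.asarray(x, float)
    if h is None:                              # Silverman's rule of thumb
        h = 1.06 * x.std() * x.size ** (-0.2)
    grid = np.linspace(x.min() - 3 * h, x.max() + 3 * h, n_grid)
    dens = kde_grid(x, grid, h)
    interior = dens[1:-1]
    valleys = grid[1:-1][(interior < dens[:-2]) & (interior < dens[2:])]
    return np.searchsorted(valleys, x)         # cluster label per point
```

With a unimodal density no valley exists and every point receives the same label, which mirrors the "no a priori number of clusters" behaviour claimed for ClusterKDE.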
145.
When sampling from a continuous population (or distribution), we often want a rather small sample due to some cost attached to processing the sample or to collecting information in the field. Moreover, a probability sample that allows for design‐based statistical inference is often desired. Given these requirements, we want to reduce the sampling variance of the Horvitz–Thompson estimator as much as possible. To achieve this, we introduce different approaches to using the local pivotal method for selecting well‐spread samples from multidimensional continuous populations. The results of a simulation study clearly indicate that we succeed in selecting spatially balanced samples and improve the efficiency of the Horvitz–Thompson estimator.
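The local pivotal method the abstract builds on can be sketched compactly: a randomly chosen undecided unit and its nearest undecided neighbour repeatedly "fight" over their inclusion probabilities until every probability is 0 or 1, so that nearby units tend to exclude each other and the selected sample spreads out in space. This is a minimal LPM1-style sketch, not the paper's full set of approaches.

```python
import numpy as np

def local_pivotal(pi, coords, seed=0):
    """Local pivotal method (LPM1 style): resolve inclusion probabilities
    pairwise between spatial nearest neighbours until all are 0 or 1.
    The sum of the probabilities (expected sample size) is preserved."""
    rng = np.random.default_rng(seed)
    pi = np.asarray(pi, float).copy()
    coords = np.asarray(coords, float)
    eps = 1e-12
    while True:
        idx = np.flatnonzero((pi > eps) & (pi < 1 - eps))
        if idx.size < 2:
            break
        i = rng.choice(idx)
        others = idx[idx != i]
        j = others[np.argmin(np.sum((coords[others] - coords[i]) ** 2, axis=1))]
        s = pi[i] + pi[j]
        if s < 1:                      # one of the pair drops to 0
            if rng.uniform() < pi[j] / s:
                pi[i], pi[j] = 0.0, s
            else:
                pi[i], pi[j] = s, 0.0
        else:                          # one of the pair jumps to 1
            if rng.uniform() < (1 - pi[j]) / (2 - s):
                pi[i], pi[j] = 1.0, s - 1.0
            else:
                pi[i], pi[j] = s - 1.0, 1.0
    return pi.round().astype(int)      # 0/1 sampling indicators
```

Each fight decides at least one unit, so the loop finishes in at most n steps; because each update keeps the individual inclusion probabilities intact in expectation, the Horvitz-Thompson estimator remains design-unbiased.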
146.
This study develops a robust automatic algorithm for clustering probability density functions, building on previous research. Unlike existing methods, which often pre-determine the number of clusters, the proposed method self-organizes the data into groups based on the original data structure. It is also robust to noise. Three synthetic datasets and the real-world COREL dataset are used to illustrate the accuracy and effectiveness of the proposed approach.
147.
The Hodrick–Prescott (HP) filter is frequently used in macroeconometrics to decompose time series, such as real gross domestic product, into trend and cyclical components. Because the HP filter is a basic econometric tool, a precise understanding of its nature is necessary. This article contributes to the literature by listing several (penalized) least-squares problems related to the HP filter, three of which are newly introduced here, and by showing their properties. We also remark on their generalization.
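The basic penalized least-squares problem behind the HP filter can be solved in closed form, which the sketch below does directly; the article's newly introduced related problems are variations on this formulation, not reproduced here.

```python
import numpy as np

def hp_filter(y, lam=1600.0):
    """Hodrick-Prescott decomposition as a penalized least-squares problem:
    the trend tau minimizes  sum (y_t - tau_t)^2 + lam * sum (D2 tau_t)^2,
    where D2 is the second difference. The first-order condition is the
    linear system (I + lam * D'D) tau = y, with D the (n-2) x n
    second-difference matrix; lam = 1600 is the usual quarterly choice."""
    y = np.asarray(y, float)
    n = y.size
    D = np.diff(np.eye(n), n=2, axis=0)        # rows apply the [1, -2, 1] stencil
    trend = np.linalg.solve(np.eye(n) + lam * (D.T @ D), y)
    return trend, y - trend                    # (trend, cycle)
```

On an exactly linear series the second-difference penalty vanishes, so the trend reproduces the series and the cyclical component is zero, a convenient sanity check on any HP implementation.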
148.
It is known that the normal approximation applies to sums of non-negative random variables, W, under the commonly employed couplings. In this work, we use Stein's method to obtain a general theorem giving a non-uniform exponential bound on the normal approximation based on monotone size-biased couplings of W. Applications of the main result are provided, giving bounds on the normal approximation for a binomial random variable, for the number of bulbs on at the terminal time in the lightbulb process, and for the number of m-runs.
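A monotone size-biased coupling of the kind the theorem requires is easy to exhibit for the binomial example: pick a coordinate uniformly at random and force it to 1. The sketch below constructs that coupling; it illustrates the coupling itself, not the Stein-method bound, and the Monte Carlo check of the size-bias identity E[W g(W)] = E[W] E[g(W^s)] is an illustration added here.

```python
import numpy as np

def size_bias_coupling(n, p, n_sim=200_000, seed=0):
    """Monotone size-biased coupling for W ~ Binomial(n, p): choose a
    coordinate I uniformly at random and force X_I = 1, which gives
    W^s = W - X_I + 1 >= W on the same probability space."""
    rng = np.random.default_rng(seed)
    x = (rng.uniform(size=(n_sim, n)) < p).astype(np.int64)
    w = x.sum(axis=1)                          # the original sum W
    i = rng.integers(0, n, size=n_sim)         # uniformly chosen coordinate
    ws = w - x[np.arange(n_sim), i] + 1        # size-biased version W^s
    return w, ws
```

Monotonicity (W^s is never below W) is exactly the structural property that lets Stein's method turn the coupling into a non-uniform exponential bound.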
149.
The seasonal fractional ARIMA (ARFISMA) model with infinite-variance innovations is used in the analysis of seasonal long-memory time series with large fluctuations (heavy-tailed distributions). Two methods are proposed to estimate the parameters of the stable ARFISMA model: the empirical characteristic function (ECF) procedure developed by Knight and Yu [The empirical characteristic function in time series estimation. Econometric Theory. 2002;18:691–721], and a two-step method (TSM). The ECF method estimates all the parameters simultaneously, while the TSM first applies the Markov chain Monte Carlo–Whittle approach introduced by Ndongo et al. [Estimation of long-memory parameters for seasonal fractional ARIMA with stable innovations. Stat Methodol. 2010;7:141–151], and then, in a second step, the maximum likelihood estimation method developed by Alvarez and Olivares [Méthodes d'estimation pour des lois stables avec des applications en finance. Journal de la Société Française de Statistique. 2005;1(4):23–54]. Monte Carlo simulations are used to evaluate the finite-sample performance of these estimation techniques.
150.
Before unblinding, information about the success of a confirmatory clinical trial is highly uncertain. Current techniques that use point estimates of auxiliary parameters to estimate the expected blinded sample size (i) fail to describe the range of likely sample sizes obtained after the anticipated data are observed, and (ii) fail to adjust to a changing patient population. We implement sequential MCMC-based algorithms for sample-size adjustment. The uncertainty arising from the trial is characterized by filtering later auxiliary parameters through their earlier counterparts and employing posterior distributions to estimate sample size and power. Using approximate expected-power estimates to determine the required additional sample size is closely related to techniques employing simple adjustments or the EM algorithm. By contrast, our proposed methodology provides intervals for the expected sample size based on the posterior distribution of the auxiliary parameters. Future decisions about additional subjects are better informed because we can account for subject-response heterogeneity over time. We apply the proposed methodologies to a depression trial. Our proposed blinded procedures should be considered for most studies owing to their ease of implementation.
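For contrast with the posterior-based intervals the abstract proposes, the sketch below shows blinded sample-size re-estimation in its most basic point-estimate form: estimate the variance from the pooled interim data (no treatment labels needed) and plug it into the standard two-arm normal-theory formula. Function names and defaults are assumptions added here.

```python
import numpy as np
from statistics import NormalDist

def blinded_n_per_arm(pooled_obs, delta, alpha=0.05, power=0.9):
    """Point-estimate blinded sample-size re-estimation: estimate sigma^2
    from the pooled interim observations and plug it into
        n = 2 * sigma^2 * (z_{1-alpha/2} + z_{power})^2 / delta^2,
    the usual per-arm formula for a two-arm comparison of means, where
    delta is the clinically relevant difference."""
    z = NormalDist().inv_cdf
    sigma2 = np.var(np.asarray(pooled_obs, float), ddof=1)  # blinded variance
    n = 2.0 * sigma2 * (z(1 - alpha / 2) + z(power)) ** 2 / delta ** 2
    return int(np.ceil(n))
```

When a true treatment effect exists, the pooled (blinded) variance is inflated by roughly delta^2 / 4, which is one source of the bias that the abstract's posterior-filtering approach, unlike this point estimate, can quantify as an interval of plausible sample sizes.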
Copyright©北京勤云科技发展有限公司  京ICP备09084417号