7,256 results found (search time: 15 ms)
131.
This paper presents a method for using end-to-end available bandwidth measurements to estimate available bandwidth on individual internal links. The basic approach is to apply a power transform to the observed end-to-end measurements, model the result as a mixture of spatially correlated exponential random variables, carry out estimation by moment methods, and then transform back to the original variables to obtain estimates and confidence intervals for the expected available bandwidth on each link. Because spatial dependence leads to certain parameter confounding, only upper bounds can be found reliably. Simulations with ns2 show that the method can work well and that the assumptions are approximately valid in the examples.
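As a rough illustration of the moment step alone (not the paper's full spatially correlated mixture model), the sketch below fits an exponential model to simulated end-to-end measurements by the method of moments and forms a crude large-sample confidence interval for the expected bandwidth; all data and parameters here are synthetic assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical end-to-end available-bandwidth measurements (Mbps),
# generated as exponential draws purely for illustration.
true_mean = 40.0
measurements = rng.exponential(true_mean, size=500)

# Method-of-moments fit for an exponential model: the rate is the
# reciprocal of the sample mean, so the expected available bandwidth
# is estimated by the sample mean itself.
est_mean = measurements.mean()
rate_hat = 1.0 / est_mean

# A rough large-sample 95% confidence interval for the exponential
# mean (its standard deviation equals the mean, so se = mean/sqrt(n)).
n = len(measurements)
se = est_mean / np.sqrt(n)
ci = (est_mean - 1.96 * se, est_mean + 1.96 * se)
```

The paper's transform-back step and the per-link decomposition are omitted; this only shows the single-sample moment calculation.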
132.
Interval-valued variables have become very common in data analysis. Until now, symbolic regression has mostly approached this type of data from an optimization point of view, considering neither the probabilistic aspects of the models nor the nonlinear relationships between the interval response and the interval predictors. In this article, we formulate interval-valued variables as bivariate random vectors and introduce a bivariate symbolic regression model based on generalized linear models theory, which provides much-needed flexibility in practice. Important inferential aspects are investigated. Applications to synthetic and real data illustrate the usefulness of the proposed approach.
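A minimal sketch of one common recoding for interval regression, assuming each interval is represented as a (midpoint, half-range) pair and each coordinate is fitted with a separate identity-link Gaussian linear model; the paper's bivariate GLM would instead model the two coordinates jointly, and all data below are simulated.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200

# Hypothetical interval-valued predictor: lower and upper bounds.
x_lo = rng.uniform(0.0, 5.0, n)
x_hi = x_lo + rng.uniform(0.5, 2.0, n)

# Represent each interval by its midpoint and half-range.
x_mid = (x_lo + x_hi) / 2
x_rng = (x_hi - x_lo) / 2

# Simulated interval response: its midpoint and half-range depend
# linearly on the predictor's midpoint and half-range.
y_mid = 1.0 + 2.0 * x_mid + rng.normal(0.0, 0.1, n)
y_rng = 0.5 + 0.8 * x_rng + np.abs(rng.normal(0.0, 0.05, n))

# Fit one linear model per coordinate (identity-link Gaussian GLM).
A_mid = np.column_stack([np.ones(n), x_mid])
A_rng = np.column_stack([np.ones(n), x_rng])
beta_mid, *_ = np.linalg.lstsq(A_mid, y_mid, rcond=None)
beta_rng, *_ = np.linalg.lstsq(A_rng, y_rng, rcond=None)
```

Fitting the coordinates independently ignores their correlation, which is exactly the gap the joint bivariate formulation is meant to close.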
133.
Recently Beh and Farver investigated and evaluated three non-iterative procedures for estimating the linear-by-linear parameter of an ordinal log-linear model. The study demonstrated that these non-iterative techniques provide estimates that are, for most types of contingency tables, statistically indistinguishable from estimates from Newton's unidimensional algorithm. Here we show how two of these techniques are related using the Box–Cox transformation. We also show that, by using this transformation, accurate non-iterative estimates are achievable even when a contingency table contains sampling zeros.
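For readers unfamiliar with the transformation the two estimators are related through, here is a small sketch of the Box–Cox family itself (illustrative only; the abstract's derivation linking the two non-iterative estimators is not reproduced here).

```python
import numpy as np

def box_cox(x, lam):
    """Box-Cox transform: (x^lam - 1)/lam, reducing to log(x) as lam -> 0."""
    x = np.asarray(x, dtype=float)
    if lam == 0:
        return np.log(x)
    return (x ** lam - 1.0) / lam

x = np.array([1.0, 2.0, 4.0, 8.0])

# At lam = 1 the transform is just a shift; for small lam it
# approaches the natural logarithm.
shifted = box_cox(x, 1.0)
near_log = box_cox(x, 1e-8)
```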
134.
In this work, we discuss the class of bilinear GARCH (BL-GARCH) models, which are capable of capturing simultaneously two key properties of non-linear time series: volatility clustering and leverage effects. It has often been observed that the marginal distributions of such time series have heavy tails; we therefore examine the BL-GARCH model in a general setting under some non-normal distributions. We investigate some probabilistic properties of this model and conduct a Monte Carlo experiment to evaluate the small-sample performance of the maximum likelihood estimation (MLE) methodology for various models. Finally, within-sample estimation properties are studied using S&P 500 daily returns, which exhibit the features of interest: volatility clustering and leverage effects. The main results suggest that the Student-t BL-GARCH model is highly appropriate for describing the S&P 500 daily returns.
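To make "volatility clustering" concrete, the sketch below simulates a plain GARCH(1,1) process (not the BL-GARCH of the paper, which adds a bilinear leverage term); the parameter values are illustrative assumptions, not estimates from the S&P 500 data.

```python
import numpy as np

rng = np.random.default_rng(2)

# Illustrative GARCH(1,1) parameters (alpha + beta < 1 for stationarity).
omega, alpha, beta = 0.05, 0.1, 0.85
n = 2000
h = np.empty(n)   # conditional variances
r = np.empty(n)   # returns

# Start at the unconditional variance omega / (1 - alpha - beta).
h[0] = omega / (1.0 - alpha - beta)
r[0] = np.sqrt(h[0]) * rng.standard_normal()
for t in range(1, n):
    h[t] = omega + alpha * r[t - 1] ** 2 + beta * h[t - 1]
    r[t] = np.sqrt(h[t]) * rng.standard_normal()

# Volatility clustering shows up as positive autocorrelation in the
# squared returns, even though the returns themselves are uncorrelated.
sq = r ** 2
autocorr = np.corrcoef(sq[:-1], sq[1:])[0, 1]
```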
135.
Quite an important problem usually occurs in multi-dimensional hypothesis testing when variables are correlated. In this framework the nonparametric combination (NPC) of a finite number of dependent permutation tests is suitable for covering almost all real situations of practical interest, since the dependence relations among partial tests are implicitly captured by the combining procedure itself, without the need to specify them [Pesarin F, Salmaso L. Permutation tests for complex data: theory, applications and software. Chichester: Wiley; 2010a]. An open problem for NPC-based tests is the impact of the dependency structure on the combined tests, especially in the presence of categorical variables. The first goal of this paper is to investigate, using Monte Carlo simulations, the impact of the dependency structure on the significance of combined tests for ordered categorical responses; the second is to propose specific procedures aimed at improving the power of multivariate combination-based permutation tests. The results show that an increasing level of correlation/association among responses negatively affects the power of combination-based multivariate permutation tests. Applying special forms of combination functions based on the truncated product method [Zaykin DV, Zhivotovsky LA, Westfall PH, Weir BS. Truncated product method for combining p-values. Genet Epidemiol. 2002;22:170–185; Dudbridge F, Koeleman BPC. Rank truncated product of p-values, with application to genomewide association scans. Genet Epidemiol. 2003;25:360–366] or on the Liptak combination allowed us to demonstrate, via Monte Carlo simulations, that this negative effect of correlation/association on power can be mitigated.
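A small sketch of the truncated product statistic cited above: the product of all p-values at or below a threshold tau, with its null distribution approximated here by Monte Carlo under independence. (In the NPC setting the null distribution would instead come from joint permutation, which preserves the dependence among partial tests; the four p-values below are made up for illustration.)

```python
import numpy as np

def truncated_product_stat(pvals, tau=0.05):
    """Truncated product statistic (Zaykin et al., 2002): product of
    the p-values that are at or below the threshold tau (1.0 if none)."""
    pvals = np.asarray(pvals, dtype=float)
    kept = pvals[pvals <= tau]
    return kept.prod() if kept.size else 1.0

# Observed (hypothetical) partial-test p-values.
obs = truncated_product_stat([0.001, 0.02, 0.3, 0.7])

# Null distribution by Monte Carlo, assuming independent uniforms;
# small statistics are evidence against the global null.
rng = np.random.default_rng(3)
null = np.array([truncated_product_stat(rng.uniform(size=4))
                 for _ in range(5000)])
p_combined = (1 + np.sum(null <= obs)) / (1 + len(null))
```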
136.
Tiered electricity pricing not only guides residents toward reasonable, economical electricity use but also reduces price cross-subsidies between users. In practice, however, it involves several sources of uncertainty, such as fluctuations in residential electricity demand and the ranges chosen for each pricing tier. Addressing fluctuations in residential demand, this paper proposes an electricity-demand model under tiered pricing based on Bayesian estimation. We first specify a demand function under tiered pricing; we then carry out a Bayesian analysis of this demand function, covering four aspects: the statistical model, the likelihood function, the posterior distribution, and accelerated convergence. Finally, we estimate the model on consumption data from 1,250 users, feeding the influencing factors into the model to obtain each user's electricity demand, and thereby confirm the suitability of Bayesian estimation for building electricity-demand models.
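As a much-simplified sketch of the Bayesian estimation idea (not the paper's tiered-pricing demand function), the code below fits a linear demand-versus-price relationship by conjugate Bayesian linear regression with a vague normal prior and known noise variance; all data and coefficients are simulated assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical linear demand: usage falls as the marginal (tier)
# price rises; the coefficients below are illustrative only.
true_intercept, true_slope = 300.0, -50.0
price = rng.uniform(0.4, 1.2, 200)            # marginal tier price
usage = true_intercept + true_slope * price + rng.normal(0.0, 10.0, 200)

# Conjugate Bayesian linear regression: with a Gaussian likelihood of
# known variance and a Gaussian prior, the posterior precision and
# mean have closed forms.
sigma2 = 100.0                                # assumed noise variance
X = np.column_stack([np.ones_like(price), price])
prior_prec = np.eye(2) * 1e-6                 # nearly flat prior
post_prec = prior_prec + X.T @ X / sigma2
post_mean = np.linalg.solve(post_prec, X.T @ usage / sigma2)
```

The posterior mean should recover a negative price coefficient; the paper's full model additionally handles the kinked tier structure and uses acceleration techniques for convergence.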
137.
Zhao Bingxiang (赵丙祥). Society (社会), 2019, 39(1): 37-70
In field ethnography and historiography, the biographical method is an important mode of narrative. Starting from an analysis of Lin Yaohua's work, this article situates it within the lineage of sociological and anthropological thought since the 1920s and, on that basis, explores the potential of the biographical method for the study of Chinese history and society. The article synthesizes the genealogical method, personal life history, and the theory of social life into a "biographical triangle." Building on the (extended) genealogical method and theories of social structure, the life-biographical method can present the life histories of one or more persons as fragments of a larger history. The phenomenological approach, however, has its limits: it cannot dispense with an examination of the central state itself and of the total structure. Only by presenting life biographies within the changing course of "the times" and "social structure" can they become more than individual stories and hold the promise of a powerful way of narrating Chinese social life as a "totality."
138.
In this paper, we present an algorithm for clustering based on univariate kernel density estimation, named ClusterKDE. It consists of an iterative procedure in which, at each step, a new cluster is obtained by minimizing a smooth kernel function. Although in our applications we have used the univariate Gaussian kernel, any smooth kernel function can be used. The proposed algorithm has the advantage of not requiring the number of clusters a priori. Furthermore, the ClusterKDE algorithm is very simple, easy to implement, and well defined, and it stops in a finite number of steps; that is, it always converges regardless of the initial point. We also illustrate our findings with numerical experiments obtained by implementing the algorithm in Matlab and applying it to practical problems. The results indicate that ClusterKDE is competitive and fast compared with the well-known Clusterdata and K-means algorithms used by Matlab for clustering data.
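A hedged sketch of the underlying idea, using mode-seeking on a univariate Gaussian KDE rather than ClusterKDE's exact step of minimizing a smoothed kernel function per iteration: local maxima of the density act as cluster centres and points are assigned to the nearest mode, so the number of clusters emerges from the data. The data, bandwidth, and grid are illustrative choices.

```python
import numpy as np

def gaussian_kde(x, grid, h):
    """Univariate Gaussian kernel density estimate evaluated on a grid."""
    z = (grid[:, None] - x[None, :]) / h
    return np.exp(-0.5 * z ** 2).sum(axis=1) / (len(x) * h * np.sqrt(2 * np.pi))

# Two well-separated hypothetical groups.
rng = np.random.default_rng(5)
x = np.concatenate([rng.normal(0.0, 0.3, 100), rng.normal(5.0, 0.3, 100)])

grid = np.linspace(x.min(), x.max(), 400)
dens = gaussian_kde(x, grid, h=0.3)

# Interior local maxima of the KDE serve as cluster centres.
is_mode = (dens[1:-1] > dens[:-2]) & (dens[1:-1] > dens[2:])
modes = grid[1:-1][is_mode]

# Assign each point to its nearest mode.
labels = np.argmin(np.abs(x[:, None] - modes[None, :]), axis=1)
```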
139.
When sampling from a continuous population (or distribution), we often want a rather small sample due to some cost attached to processing the sample or to collecting information in the field. Moreover, a probability sample that allows for design-based statistical inference is often desired. Given these requirements, we want to reduce the sampling variance of the Horvitz–Thompson estimator as much as possible. To achieve this, we introduce different approaches to using the local pivotal method for selecting well-spread samples from multidimensional continuous populations. The results of a simulation study clearly indicate that we succeed in selecting spatially balanced samples and improve the efficiency of the Horvitz–Thompson estimator.
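The estimator whose variance the abstract seeks to reduce can be sketched simply. The code below uses Poisson sampling (each unit included independently) rather than the local pivotal method, so it shows the Horvitz–Thompson weighting itself, not the spatial balancing; population values and inclusion probabilities are synthetic.

```python
import numpy as np

rng = np.random.default_rng(6)

# Hypothetical finite population with unequal inclusion probabilities.
N = 1000
y = rng.uniform(10.0, 20.0, N)        # study variable
pi = rng.uniform(0.05, 0.2, N)        # first-order inclusion probs

# Poisson sampling: include unit i independently with probability pi_i.
sample = rng.uniform(size=N) < pi

# Horvitz-Thompson estimator of the population total: each sampled
# value is weighted by the inverse of its inclusion probability.
ht_total = np.sum(y[sample] / pi[sample])
true_total = y.sum()
```

Well-spread (spatially balanced) designs such as the local pivotal method keep the same weighting but choose which units enter the sample so that nearby units rarely co-occur, which is what lowers the variance of `ht_total`.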
140.
This study develops a robust automatic algorithm for clustering probability density functions, building on previous research. Unlike other existing methods, which often pre-determine the number of clusters, this method can self-organize data groups based on the original data structure. The proposed clustering method is also robust to noise. Three synthetic data examples and the real-world COREL dataset are used to illustrate the accuracy and effectiveness of the proposed approach.
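Clustering density functions presupposes a dissimilarity between densities. The sketch below uses a Riemann-sum L2 distance between densities on a shared grid, a common simple choice that is not necessarily the measure used in this study; the two normal densities are illustrative.

```python
import numpy as np

grid = np.linspace(-5.0, 5.0, 1001)
dx = grid[1] - grid[0]

def normal_pdf(x, mu, sd):
    """Normal density evaluated pointwise."""
    return np.exp(-0.5 * ((x - mu) / sd) ** 2) / (sd * np.sqrt(2.0 * np.pi))

def l2_distance(f, g, dx):
    """Riemann-sum approximation of the L2 distance between two
    densities sampled on the same equally spaced grid."""
    return np.sqrt(np.sum((f - g) ** 2) * dx)

f = normal_pdf(grid, 0.0, 1.0)
g = normal_pdf(grid, 0.5, 1.0)
d_fg = l2_distance(f, g, dx)
d_ff = l2_distance(f, f, dx)
```

Any standard clustering routine (hierarchical, k-medoids, etc.) can then run on the resulting pairwise distance matrix.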