131.
The Best Worst Method (BWM) is a multi-criteria decision-making method that uses two vectors of pairwise comparisons to determine the weights of criteria. First, the best (e.g. most desirable, most important) and the worst (e.g. least desirable, least important) criteria are identified by the decision-maker, after which the best criterion is compared to the other criteria, and the other criteria to the worst criterion. A non-linear minmax model is then used to identify the weights such that the maximum absolute difference between the weight ratios and their corresponding comparisons is minimized. The minmax model may result in multiple optimal solutions. Although in some cases decision-makers prefer to have multiple optimal solutions, in other cases they prefer a unique solution. The aim of this paper is twofold: firstly, we propose using interval analysis for the case of multiple optimal solutions, showing how the criteria can be weighted and ranked; secondly, we propose a linear model for BWM which is based on the same philosophy but yields a unique solution.
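The linear variant described in the abstract can be sketched as a small linear program. The sketch below (an illustration, not the authors' code; the function name and argument layout are hypothetical) minimizes the maximum absolute deviation ξ between the weights and the two comparison vectors, subject to the weights being non-negative and summing to one, using `scipy.optimize.linprog`:

```python
import numpy as np
from scipy.optimize import linprog

def bwm_linear(best_to_others, others_to_worst, best, worst):
    """Sketch of a linear BWM model: minimize xi subject to
    |w_best - a_Bj * w_j| <= xi  and  |w_j - a_jW * w_worst| <= xi,
    with the weights non-negative and summing to one."""
    n = len(best_to_others)
    c = np.zeros(n + 1)
    c[-1] = 1.0                      # objective: minimize xi (last variable)
    A_ub = []
    for j in range(n):
        if j != best:                # |w_best - a_Bj * w_j| <= xi
            r = np.zeros(n + 1)
            r[best], r[j], r[-1] = 1.0, -best_to_others[j], -1.0
            A_ub.append(r)
            r2 = -r
            r2[-1] = -1.0
            A_ub.append(r2)
        if j != worst:               # |w_j - a_jW * w_worst| <= xi
            r = np.zeros(n + 1)
            r[j], r[worst], r[-1] = 1.0, -others_to_worst[j], -1.0
            A_ub.append(r)
            r2 = -r
            r2[-1] = -1.0
            A_ub.append(r2)
    A_eq = np.ones((1, n + 1))
    A_eq[0, -1] = 0.0                # sum of the weights equals 1
    res = linprog(c, A_ub=np.array(A_ub), b_ub=np.zeros(len(A_ub)),
                  A_eq=A_eq, b_eq=[1.0])
    return res.x[:n], res.x[-1]      # weights and the consistency value xi
```

For a fully consistent comparison pair such as best-to-others (1, 2, 4) and others-to-worst (4, 2, 1), the program returns weights (4/7, 2/7, 1/7) with ξ = 0 and, being an LP, a unique solution.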
132.
The Theil, Pietra, and Éltető and Frigyes measures of income inequality associated with the Pareto distribution are expressed in terms of the parameters defining that distribution. Inference procedures based on the generalized variable method, the large-sample method, and the Bayesian method for testing, and constructing confidence intervals for, these measures are discussed. The results of a Monte Carlo study are used to compare the performance of the suggested inference procedures for a population characterized by a Pareto distribution.
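Two of these measures have simple closed forms in the Pareto shape parameter α (for α > 1), which follow from the Pareto mean and Lorenz curve. The sketch below covers only the Theil and Pietra indices (the Éltető and Frigyes measure is omitted):

```python
import math

def pareto_theil(alpha):
    """Theil index of a Pareto distribution with shape alpha > 1:
    T = 1/(alpha - 1) - ln(alpha / (alpha - 1)); free of the scale parameter."""
    return 1.0 / (alpha - 1.0) - math.log(alpha / (alpha - 1.0))

def pareto_pietra(alpha):
    """Pietra (Robin Hood) index: the maximal vertical gap between the CDF
    and the Lorenz curve, equal to (alpha - 1)^(alpha - 1) / alpha^alpha."""
    return (alpha - 1.0) ** (alpha - 1.0) / alpha ** alpha
```

For α = 2, the Pietra index is exactly 1/4 and the Theil index is 1 − ln 2 ≈ 0.307; both decrease toward 0 as α grows, reflecting decreasing inequality.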
133.
This paper presents a method for using end-to-end available-bandwidth measurements to estimate the available bandwidth on individual internal links. The basic approach is to apply a power transform to the observed end-to-end measurements, model the result as a mixture of spatially correlated exponential random variables, carry out estimation by the method of moments, and then transform back to the original variables to obtain estimates and confidence intervals for the expected available bandwidth on each link. Because spatial dependence leads to certain parameter confounding, only upper bounds can be found reliably. Simulations with ns2 show that the method can work well and that the assumptions are approximately valid in the examples.
134.
Interval-valued variables have become very common in data analysis. Until now, symbolic regression has mostly approached this type of data from an optimization point of view, considering neither the probabilistic aspects of the models nor the nonlinear relationships between the interval response and the interval predictors. In this article, we formulate interval-valued variables as bivariate random vectors and introduce a bivariate symbolic regression model based on generalized linear models theory, which provides much-needed flexibility in practice. Important inferential aspects are investigated. Applications to synthetic and real data illustrate the usefulness of the proposed approach.
135.
Recently, Beh and Farver investigated and evaluated three non-iterative procedures for estimating the linear-by-linear parameter of an ordinal log-linear model. The study demonstrated that these non-iterative techniques provide estimates that are, for most types of contingency tables, statistically indistinguishable from those of Newton's unidimensional algorithm. Here we show how two of these techniques are related via the Box–Cox transformation. We also show that, by using this transformation, accurate non-iterative estimates are achievable even when a contingency table contains sampling zeros.
136.
In this work, we discuss the class of bilinear GARCH (BL-GARCH) models, which can capture simultaneously two key properties of non-linear time series: volatility clustering and leverage effects. Since the marginal distributions of such time series are often observed to have heavy tails, we examine the BL-GARCH model in a general setting under some non-normal distributions. We investigate some probabilistic properties of this model and conduct a Monte Carlo experiment to evaluate the small-sample performance of maximum likelihood estimation (MLE) for various models. Finally, within-sample estimation properties are studied using S&P 500 daily returns, whose features of interest manifest as volatility clustering and leverage effects. The main results suggest that the Student-t BL-GARCH is highly appropriate for describing the S&P 500 daily returns.
137.
A common difficulty in many multi-dimensional hypothesis-testing problems arises when the variables are correlated. In this framework, the non-parametric combination (NPC) of a finite number of dependent permutation tests is suitable for almost all real situations of practical interest, since the dependence relations among the partial tests are implicitly captured by the combining procedure itself, without the need to specify them [Pesarin F, Salmaso L. Permutation tests for complex data: theory, applications and software. Chichester: Wiley; 2010a]. An open problem for NPC-based tests is the impact of the dependency structure on the combined tests, especially in the presence of categorical variables. This paper's goal is first to investigate, using Monte Carlo simulations, the impact of the dependency structure on the significance of combined tests with ordered categorical responses, and then to propose specific procedures aimed at improving the power of multivariate combination-based permutation tests. The results show that an increasing level of correlation/association among responses negatively affects the power of combination-based multivariate permutation tests. Applying special forms of combination functions based on the truncated product method [Zaykin DV, Zhivotovsky LA, Westfall PH, Weir BS. Truncated product method for combining p-values. Genet Epidemiol. 2002;22:170–185; Dudbridge F, Koeleman BPC. Rank truncated product of p-values, with application to genomewide association scans. Genet Epidemiol. 2003;25:360–366] or on the Liptak combination allowed us to demonstrate, by Monte Carlo simulation, that this negative effect on power can be mitigated.
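The truncated product combination referred to above can be sketched in a few lines. This is a minimal illustration, not the NPC machinery itself: the Monte Carlo null below assumes independent p-values, whereas in the NPC framework the null distribution would come from permutations that preserve the dependence among partial tests.

```python
import numpy as np

def truncated_product_stat(pvals, tau=0.05):
    """Zaykin et al.'s truncated product statistic: the product of the
    partial-test p-values that fall at or below the truncation point tau."""
    p = np.asarray(pvals, dtype=float)
    kept = p[p <= tau]
    return float(np.prod(kept)) if kept.size else 1.0

def tpm_pvalue(pvals, tau=0.05, n_sim=20000, seed=0):
    """Monte Carlo p-value for the truncated product statistic under an
    independence null; under dependence one would resample by permutation
    instead of drawing independent uniforms."""
    rng = np.random.default_rng(seed)
    obs = truncated_product_stat(pvals, tau)
    null = rng.uniform(size=(n_sim, len(pvals)))
    # p-values above tau contribute a factor of 1 (they are truncated away)
    stats = np.where(null <= tau, null, 1.0).prod(axis=1)
    return float(np.mean(stats <= obs))
```

A vector with a couple of very small partial p-values yields a small combined p-value, while a vector with no p-value below the truncation point is never declared significant.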
138.
Tiered electricity pricing not only guides residents to use electricity rationally and economically, but also reduces cross-subsidies in electricity prices between users. In practice, however, it involves several sources of uncertainty, such as variation in residential electricity demand and the ranges used to set each tier. To address variation in residential demand, this paper proposes an electricity-demand model under tiered pricing based on Bayesian estimation. We first formulate a demand function under tiered pricing; we then carry out a Bayesian analysis of this demand function from four aspects: the statistical model, the likelihood function, the posterior distribution, and convergence acceleration. Finally, we estimate the model on consumption data for 1,250 households, substitute the influencing factors into the model to obtain each household's electricity demand, and confirm the suitability of Bayesian estimation for building the demand model.
139.
赵丙祥 《社会》2019,39(1):37-70
In field ethnography and historiography, the biographical method is an important mode of narrative. Starting from an analysis of the works of Lin Yaohua (林耀华), this article situates them in the intellectual lineage of sociology and anthropology since the 1920s, and on that basis explores the potential of the biographical method for the study of Chinese history and society. The article draws the genealogical method, individual life history, and the theory of social life together into a "biographical triangle." Building on the (extended) genealogical method and theories of social structure, the life-biographical method can present the life histories of one or more persons as fragments of a larger history. The phenomenological approach, however, has its limits: it cannot dispense with an examination of the central state itself and of the total structure. Only by presenting life biographies within the changing course of "the times" and "social structure" can they become not merely individual stories, but a powerful way of narrating Chinese social life as a "totality."
140.
In this paper, we present an algorithm for clustering based on univariate kernel density estimation, named ClusterKDE. It consists of an iterative procedure in which, at each step, a new cluster is obtained by minimizing a smooth kernel function. Although in our applications we have used the univariate Gaussian kernel, any smooth kernel function can be used. The proposed algorithm has the advantage of not requiring the number of clusters a priori. Furthermore, the ClusterKDE algorithm is very simple, easy to implement, well defined, and stops in a finite number of steps; that is, it always converges, independently of the initial point. We also illustrate our findings with numerical experiments obtained by implementing the algorithm in Matlab and applying it to practical problems. The results indicate that ClusterKDE is competitive and fast compared with the well-known Clusterdata and K-means algorithms used by Matlab for clustering data.
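The iteration described in the abstract might be sketched as follows for one-dimensional data. This is a simplified stand-in, not the authors' implementation: a grid search replaces the smooth minimization step, Silverman's rule supplies a default bandwidth, and points within one bandwidth of the located KDE mode are taken as each new cluster.

```python
import numpy as np

def cluster_kde(x, bandwidth=None, tol=None, grid_size=512):
    """Simplified sketch of a ClusterKDE-style iteration for 1-D data:
    repeatedly locate a mode of the Gaussian KDE of the not-yet-assigned
    points, make the points within `tol` of that mode a new cluster, and
    stop when every point has been assigned."""
    x = np.asarray(x, dtype=float)
    h = bandwidth or 1.06 * x.std() * len(x) ** -0.2   # Silverman's rule of thumb
    tol = tol or h
    labels = np.full(len(x), -1)
    remaining = np.arange(len(x))
    k = 0
    while remaining.size:
        pts = x[remaining]
        # grid search stands in for the paper's smooth minimization step
        grid = np.linspace(pts.min(), pts.max(), grid_size)
        kde = np.exp(-0.5 * ((grid[:, None] - pts[None, :]) / h) ** 2).sum(axis=1)
        mode = grid[np.argmax(kde)]
        near = np.abs(pts - mode) <= tol
        if not near.any():                  # safeguard: always assign something
            near = np.abs(pts - mode) == np.abs(pts - mode).min()
        labels[remaining[near]] = k
        remaining = remaining[~near]
        k += 1
    return labels
```

Because each pass assigns at least one point, the loop terminates in finitely many steps regardless of the data, mirroring the convergence property claimed for ClusterKDE.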