1.
Multinomial logit (also termed multi-logit) models permit the analysis of the statistical relation between a categorical response variable and a set of explicative variables (called covariates or regressors). Although the multinomial logit is widely used in both the social and economic sciences, the interpretation of its regression coefficients can be tricky, as the effect of covariates on the probability distribution of the response variable is nonconstant and difficult to quantify. The ternary plots illustrated in this article aim to facilitate the interpretation of regression coefficients and permit the effect of covariates (considered singly or jointly) on the probability distribution of the dependent variable to be quantified. Ternary plots can be drawn for both ordered and unordered categorical dependent variables when the number of possible outcomes equals three (trinomial response variable); these plots not only represent the covariate effects over the whole parameter space of the dependent variable but also allow the covariate effects of any given individual profile to be compared. The method is illustrated and discussed through the analysis of a dataset concerning the transition of master's graduates of the University of Trento (Italy) from university to employment.
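As a minimal illustration of why the coefficient effects are nonconstant, the following pure-Python sketch evaluates trinomial probabilities under a multinomial logit with made-up coefficients (not estimated from the Trento data); the three probabilities are exactly the coordinates one would plot on a ternary diagram:

```python
import math

def trinomial_probs(x, betas):
    """Multinomial logit probabilities for a 3-category response.
    Category 0 is the baseline; betas[k] = (intercept, slope) for k = 1, 2."""
    etas = [0.0] + [b0 + b1 * x for (b0, b1) in betas]
    denom = sum(math.exp(e) for e in etas)
    return [math.exp(e) / denom for e in etas]

# Hypothetical coefficients (illustration only).
betas = [(-0.5, 0.8), (0.3, -0.4)]

# The same +1 change in x shifts the three probabilities by different
# amounts depending on the starting point -- the "nonconstant effect".
p_low = trinomial_probs(0.0, betas)
p_high = trinomial_probs(1.0, betas)
print(p_low, p_high)
```

Plotting each probability triple as a point in a ternary diagram traces how a covariate moves an individual profile across the simplex.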
2.
Although field experiments have documented the contemporary relevance of discrimination in employment, theories developed to explain the dynamics of differential treatment cannot account for differences across organizational and institutional contexts. In this article, I address this shortcoming by presenting the main empirical findings from a multi-method research project in which a field experiment on ethnic discrimination in the Norwegian labour market was complemented with forty-two in-depth interviews with employers who were observed in the first stage of the study. While the experimental data support earlier findings in documenting that ethnic discrimination indeed takes place, the qualitative material suggests that theorizing in the field-experiment literature has been too concerned with individual and intra-psychic explanations. Discriminatory outcomes in employment processes seem to depend more on contextual factors such as the number of applications received, whether requirements are specified, and the degree to which recruitment procedures are formalized. I argue that different contexts of employment provide different opportunity structures for discrimination, a finding with important theoretical and methodological implications.
3.
贺建风  李宏煜 《统计研究》2021,38(4):131-144
In the digital economy era, social networks, as an important carrier of the digital platform economy, have attracted wide attention from scholars at home and abroad. Against the big-data background, social networks have enormous commercial application value, but because their scale is unprecedentedly large, traditional network analysis methods are no longer applicable owing to excessive computational cost. Obtaining a sample network through a network sampling algorithm and then inferring the whole network saves computing resources, so the quality of the sampling algorithm directly affects the accuracy of social network analysis conclusions. Existing social network sampling algorithms suffer from defects such as ignoring the internal topological structure of the network, easily becoming trapped in local subnetworks, and low sampling efficiency. To remedy these defects, this paper combines the community characteristics of big-data social networks and proposes a clustered random walk sampling algorithm. The method first partitions the nodes of the original network into communities using a community clustering algorithm, yielding multiple community networks, and then performs random walk sampling within each community to obtain the sample network. Results from both numerical simulations and a case application show that the clustered random walk sampling algorithm overcomes the shortcomings of traditional network sampling algorithms and can reduce the network scale while preserving the structural characteristics of the original network well. Moreover, the algorithm can run in parallel, effectively improving sampling efficiency, which is of great practical significance for sampling large-scale social networks in the big-data context.
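A minimal sketch of the clustered random-walk idea, assuming the community partition has already been produced by some clustering step (the specific clustering algorithm is not fixed here); the toy network and walk lengths are illustrative:

```python
import random

def random_walk_sample(adj, start, steps, rng):
    """Collect the set of nodes visited by a simple random walk."""
    visited, node = {start}, start
    for _ in range(steps):
        nbrs = adj.get(node, [])
        if not nbrs:
            break
        node = rng.choice(nbrs)
        visited.add(node)
    return visited

def clustered_rw_sample(adj, communities, steps_per_comm, seed=0):
    """Run one random walk inside each pre-computed community and pool
    the visited nodes -- a sketch of the paper's two-stage sampler."""
    rng = random.Random(seed)
    sample = set()
    for comm in communities:
        comm_set = set(comm)
        # Restrict the walk to edges that stay inside the community,
        # so no walk can wander off and get stuck in a distant region.
        sub_adj = {u: [v for v in adj.get(u, []) if v in comm_set]
                   for u in comm}
        sample |= random_walk_sample(sub_adj, rng.choice(list(comm)),
                                     steps_per_comm, rng)
    return sample

# Toy two-community network (the community-detection step assumed done).
adj = {1: [2, 3], 2: [1, 3], 3: [1, 2, 4], 4: [3, 5, 6], 5: [4, 6], 6: [4, 5]}
communities = [[1, 2, 3], [4, 5, 6]]
print(sorted(clustered_rw_sample(adj, communities, steps_per_comm=10)))
```

Because each community's walk is independent, the per-community loop is exactly the part that can be parallelized, as the abstract notes.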
4.
Summary.  We detail a general method for measuring agreement between two statistics. An application is two ratios of directly standardized rates which differ only by the choice of the standard. If the statistics have a high value for the coefficient of agreement then the expected squared difference between the statistics is small relative to the variance of the average of the two statistics, and inferences vary little by changing statistics. The estimation of a coefficient of agreement between two statistics is not straightforward because there is only one pair of observed values, each statistic calculated from the data. We introduce estimators of the coefficient of agreement for two statistics and discuss their use, especially as applied to functions of standardized rates.
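One plausible bootstrap formalization of the abstract's description, a coefficient that is high when the expected squared difference between the two statistics is small relative to the variance of their average, is sketched below. This is a reading of the abstract, not necessarily the authors' estimator, and the mean/trimmed-mean pair stands in for the two standardized-rate ratios:

```python
import random
import statistics

def trimmed_mean(xs, frac=0.1):
    """10% symmetric trimmed mean (stand-in second statistic)."""
    xs = sorted(xs)
    k = int(len(xs) * frac)
    return statistics.mean(xs[k:len(xs) - k] or xs)

def agreement(data, stat1, stat2, n_boot=2000, seed=0):
    """Bootstrap sketch: 1 - E[(T1 - T2)^2] / Var((T1 + T2)/2).
    A plausible formalization of the abstract, not the paper's estimator."""
    rng = random.Random(seed)
    d2, avgs = [], []
    for _ in range(n_boot):
        resample = [rng.choice(data) for _ in data]
        t1, t2 = stat1(resample), stat2(resample)
        d2.append((t1 - t2) ** 2)
        avgs.append((t1 + t2) / 2)
    return 1 - statistics.mean(d2) / statistics.variance(avgs)

rng = random.Random(1)
data = [rng.gauss(10, 2) for _ in range(200)]
print(agreement(data, statistics.mean, trimmed_mean))
```

By construction the coefficient equals 1 when the two statistics always coincide and falls as their disagreement grows relative to the variability of their average.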
5.
FPGA Implementation of a High-Speed SDRAM Controller
A synchronous dynamic RAM (SDRAM) controller is usually implemented as a finite state machine. With conventional design methods, the large number of states means that state transitions involve large blocks of combinational logic, which limits the operating speed; the controller's speed therefore limits the access speed of the SDRAM itself. This paper optimizes the design from the structural side: using the idea of state machine decomposition, the large SDRAM control state machine is implemented as several small sub-state-machines, simplifying the logic. This not only raises the operating speed but also saves resources, and offers a useful reference for implementing this class of large SDRAM controllers.
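The decomposition idea can be illustrated outside an HDL. The Python sketch below (state and command names are illustrative, not taken from the paper) shows a top-level dispatcher handing control to small sub-state-machines, so that each transition function stays small instead of one large FSM mixing every command sequence:

```python
class SubFSM:
    """A small linear sub-state-machine (a few states each)."""
    def __init__(self, states):
        self.states, self.i = states, 0

    def step(self):
        """Advance one state; return True when the sub-sequence is done."""
        self.i += 1
        return self.i >= len(self.states)

    def reset(self):
        self.i = 0

    @property
    def state(self):
        return self.states[min(self.i, len(self.states) - 1)]

class SDRAMController:
    """Top-level dispatcher: instead of one large FSM whose next-state
    logic mixes every command, it activates one small sub-FSM at a time."""
    def __init__(self):
        self.subs = {
            "INIT":    SubFSM(["PRECHARGE_ALL", "AUTO_REFRESH", "LOAD_MODE"]),
            "REFRESH": SubFSM(["AUTO_REFRESH", "NOP"]),
            "READ":    SubFSM(["ACTIVE", "READ", "PRECHARGE"]),
        }
        self.active = "INIT"
        self.trace = []

    def tick(self, request="READ"):
        sub = self.subs[self.active]
        self.trace.append((self.active, sub.state))
        if sub.step():
            sub.reset()
            self.active = request  # dispatcher picks the next sub-FSM

ctrl = SDRAMController()
for _ in range(6):
    ctrl.tick("READ")
print(ctrl.trace)
```

In hardware the same partition shortens the combinational next-state logic of each machine, which is what raises the achievable clock rate.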
6.
The concepts of the "triple helix field" and "triple helix circulation", proposed on the basis of a nonlinear networked innovation model, further advance theoretical research on the triple helix innovation model. The triple helix field concept explains the essentially independent yet mutually interacting nature of the three helices of university, industry and government, describing the triple helix's generative principle, static form and dynamic evolutionary characteristics. The generative principle lies in the nonlinear nature and multi-actor character of the innovation process; the static form is the "core-and-field model"; and the dynamic evolution consists of vertical evolution and horizontal circulation. The triple helix circulation occurring among the three helices reveals the interaction and operating mechanism among universities, industry and government, characterized by flows of people, information and products.
7.
Summary Meta-analyses of sets of clinical trials often combine risk differences from several 2×2 tables according to a random-effects model. The DerSimonian-Laird random-effects procedure, widely used for estimating the population mean risk difference, weights the risk difference from each primary study inversely proportional to an estimate of its variance (the sum of the between-study variance and the conditional within-study variance). Because those weights are not independent of the risk differences, however, the procedure sometimes exhibits bias and unnatural behavior. The present paper proposes a modified weighting scheme that uses the unconditional within-study variance to avoid this source of bias. The modified procedure has variance closer to that available from weighting by ideal weights when such weights are known. We studied the modified procedure in extensive simulation experiments using situations whose parameters resemble those of actual studies in medical research. For comparison we also included two unbiased procedures, the unweighted mean and a sample-size-weighted mean; their relative variability depends on the extent of heterogeneity among the primary studies. An example illustrates the application of the procedures to actual data and the differences among the results. This research was supported by Grant HS 05936 from the Agency for Health Care Policy and Research to Harvard University.
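For context, here is a sketch of the classical DerSimonian-Laird procedure that the paper modifies, applied to hypothetical 2×2 tables; the paper's modification would replace the conditional within-study variances computed below with unconditional ones:

```python
def dl_pool(studies):
    """Classical DerSimonian-Laird random-effects pooling of risk
    differences. studies: list of (events_t, n_t, events_c, n_c)."""
    rd, v = [], []
    for a, n1, c, n2 in studies:
        p1, p2 = a / n1, c / n2
        rd.append(p1 - p2)
        # Conditional within-study variance of the risk difference.
        v.append(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
    w = [1 / vi for vi in v]                      # fixed-effect weights
    sw = sum(w)
    mean_fe = sum(wi * di for wi, di in zip(w, rd)) / sw
    # Cochran's Q and the method-of-moments between-study variance.
    q = sum(wi * (di - mean_fe) ** 2 for wi, di in zip(w, rd))
    k = len(studies)
    tau2 = max(0.0, (q - (k - 1)) / (sw - sum(wi ** 2 for wi in w) / sw))
    w_re = [1 / (vi + tau2) for vi in v]          # random-effects weights
    est = sum(wi * di for wi, di in zip(w_re, rd)) / sum(w_re)
    return est, tau2

# Hypothetical trials: (treatment events, n, control events, n).
studies = [(12, 100, 20, 100), (30, 250, 45, 240), (8, 60, 15, 70)]
est, tau2 = dl_pool(studies)
print(round(est, 4), round(tau2, 5))
```

The bias the paper targets arises because the weights `w` depend on the same observed proportions as the risk differences `rd` they weight.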
8.
Necessary and sufficient conditions are given for a square matrix over the field of real numbers or over the real quaternion division ring to have a square root.
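For one concrete real instance (an illustration of a matrix square root existing, not the paper's criterion): A = [[5, 4], [4, 5]] has eigenvalues 9 and 1, so X = P·diag(3, 1)·P⁻¹ = [[2, 1], [1, 2]] satisfies X·X = A, which a tiny pure-Python check confirms:

```python
def matmul2(x, y):
    """2x2 matrix product."""
    return [[sum(x[i][k] * y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

A = [[5, 4], [4, 5]]
X = [[2, 1], [1, 2]]  # square root obtained via the eigendecomposition
print(matmul2(X, X))
```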
9.
Generalized additive models for location, scale and shape
Summary.  A general class of statistical models for a univariate response variable is presented which we call the generalized additive model for location, scale and shape (GAMLSS). The model assumes independent observations of the response variable y given the parameters, the explanatory variables and the values of the random effects. The distribution for the response variable in the GAMLSS can be selected from a very general family of distributions including highly skew or kurtotic continuous and discrete distributions. The systematic part of the model is expanded to allow modelling not only of the mean (or location) but also of the other parameters of the distribution of y, as parametric and/or additive nonparametric (smooth) functions of explanatory variables and/or random-effects terms. Maximum (penalized) likelihood estimation is used to fit the (non)parametric models. A Newton–Raphson or Fisher scoring algorithm is used to maximize the (penalized) likelihood. The additive terms in the model are fitted by using a backfitting algorithm. Censored data are easily incorporated into the framework. Five data sets from different fields of application are analysed to emphasize the generality of the GAMLSS class of models.
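A deliberately tiny GAMLSS-flavoured sketch: assume a normal response whose mean and log standard deviation are both linear in a single covariate, and fit by plain gradient descent on the negative log-likelihood. The real GAMLSS machinery is far more general and uses penalized likelihood with Newton-Raphson/Fisher scoring and backfitting; this only shows what "modelling parameters beyond the mean" means:

```python
import math
import random

def nll(params, xs, ys):
    """Negative log-likelihood: y ~ N(a + b*x, exp(c + d*x)^2)."""
    a, b, c, d = params
    total = 0.0
    for x, y in zip(xs, ys):
        mu, sigma = a + b * x, math.exp(c + d * x)
        total += math.log(sigma) + (y - mu) ** 2 / (2 * sigma ** 2)
    return total

def grad(params, xs, ys):
    """Analytic gradient of nll, averaged over observations."""
    a, b, c, d = params
    g = [0.0, 0.0, 0.0, 0.0]
    for x, y in zip(xs, ys):
        mu, sigma = a + b * x, math.exp(c + d * x)
        r = y - mu
        g[0] += -r / sigma ** 2
        g[1] += -r * x / sigma ** 2
        g[2] += 1 - r ** 2 / sigma ** 2
        g[3] += (1 - r ** 2 / sigma ** 2) * x
    n = len(xs)
    return [gi / n for gi in g]

# Simulated data: mean 1 + 2x, log-sd 0.2 + 0.3x (heteroscedastic).
rng = random.Random(0)
xs = [i / 50 for i in range(200)]
ys = [1 + 2 * x + rng.gauss(0, math.exp(0.2 + 0.3 * x)) for x in xs]

params = [0.0, 0.0, 0.0, 0.0]
for _ in range(3000):
    params = [p - 0.01 * gi for p, gi in zip(params, grad(params, xs, ys))]
print([round(p, 2) for p in params])
```

Replacing the linear terms with smoothers and the descent loop with scored, penalized updates per distribution parameter recovers the shape of the full algorithm.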
10.
Projecting losses associated with hurricanes is a complex and difficult undertaking that is wrought with uncertainties. Hurricane Charley, which struck southwest Florida on August 13, 2004, illustrates the uncertainty of forecasting damages from these storms. Due to shifts in the track and the rapid intensification of the storm, real-time estimates grew from 2 to 3 billion dollars in losses late on August 12 to a peak of 50 billion dollars for a brief time as the storm appeared to be headed for the Tampa Bay area. The storm hit the resort areas of Charlotte Harbor near Punta Gorda and then went on to Orlando in the central part of the state, with early poststorm estimates converging on a damage estimate in the 28 to 31 billion dollars range. Comparable damage to central Florida had not been seen since Hurricane Donna in 1960. The Florida Commission on Hurricane Loss Projection Methodology (FCHLPM) has recognized the role of computer models in projecting losses from hurricanes. The FCHLPM established a professional team to perform onsite (confidential) audits of computer models developed by several different companies in the United States that seek to have their models approved for use in insurance rate filings in Florida. The team's members represent the fields of actuarial science, computer science, meteorology, statistics, and wind and structural engineering. An important part of the auditing process requires uncertainty and sensitivity analyses to be performed with the applicant's proprietary model. To influence future such analyses, an uncertainty and sensitivity analysis has been completed for loss projections arising from use of a Holland B parameter hurricane wind field model. Uncertainty analysis quantifies the expected percentage reduction in the uncertainty of wind speed and loss that is attributable to each of the input variables.
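A sketch of how uncertainty in the Holland B parameter might be propagated by Monte Carlo; the wind-field formula below is the simplified gradient-wind form with the Coriolis term dropped, and all storm parameters are hypothetical, not taken from the audited models:

```python
import math
import random

RHO = 1.15        # air density, kg/m^3
DP = 5000.0       # central pressure deficit, Pa (hypothetical storm)
RMAX = 30000.0    # radius of maximum winds, m

def holland_wind(r, B):
    """Simplified Holland gradient wind speed at radius r (Coriolis
    term dropped): V = sqrt(B*DP/RHO * (RMAX/r)^B * exp(-(RMAX/r)^B))."""
    a = (RMAX / r) ** B
    return math.sqrt(B * DP / RHO * a * math.exp(-a))

# Monte Carlo over the B shape parameter to see how uncertainty in B
# alone propagates into wind speed at a fixed radius from the center.
rng = random.Random(0)
speeds = [holland_wind(50000.0, rng.uniform(1.0, 2.5)) for _ in range(5000)]
mean = sum(speeds) / len(speeds)
sd = math.sqrt(sum((s - mean) ** 2 for s in speeds) / len(speeds))
print(round(mean, 1), round(sd, 1))
```

Repeating the exercise for each input in turn (and comparing the resulting spreads) is the essence of the sensitivity analysis the abstract describes, since losses are driven by powers of the wind speed.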