1.
The capital creation (CC) model neglects the effect of factor mobility on the spatial distribution of industry. The extended capital creation model developed here holds that the process of capital agglomeration is necessarily accompanied by the migration of industrial labor, and that it is the real return on capital, not the nominal return, that determines whether capital is created. The results show that greater trade freeness, a larger expenditure share on manufactured goods, and a higher capital discount rate, together with a smaller elasticity of substitution and a lower capital depreciation rate, reduce the stability of the symmetric structure and strengthen the stability of the core-periphery structure; the industrial equilibrium in economic-geographic space is the outcome of the interplay between agglomeration and dispersion forces. When increasing returns to scale in manufacturing are sufficiently pronounced, or the expenditure share on manufactured goods is high enough, the market-crowding effect vanishes entirely and turns into a force promoting industrial agglomeration. The break point and the sustain point can stand in different orderings, implying that as trade freeness varies, the capital creation model developed in this paper can exhibit diverse dynamics of spatial industrial evolution.
2.
With the development of information technology, the digital economy has become a "new engine" of economic growth. Lacking an authoritative industrial statistical classification standard, however, researchers have long faced the awkward situation of "digital economy research without digital evidence." Based on the classification standard in the Statistical Classification of the Digital Economy and Its Core Industries (2021) issued and implemented by the National Bureau of Statistics, this paper reorganizes data from provincial statistical yearbooks, constructs a digital economy development index using the entropy weight method, measures the level of digital economy development in 30 Chinese provinces, and analyzes interprovincial differences and their spatio-temporal characteristics. The findings show that China's digital economy industries grew rapidly over 2009-2019, with substantial progress in every sub-industry; by comparison, the digital-factor-driven industry grew somewhat more slowly than the other three sub-industries. Digital economy development is markedly unbalanced across regions: the eastern and central regions clearly outperform the west, the south outperforms the north, and the regional imbalance shows a continuing widening trend.
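The entropy weight method used to build the index can be sketched in a few lines. The provincial indicator matrix below is purely hypothetical (the actual study draws sub-industry indicators from the 2021 classification); the mechanics, however, are the standard ones: indicators whose values vary more across provinces carry lower entropy and therefore larger weights.

```python
import numpy as np

def entropy_weights(X):
    """Entropy-weight method: columns (indicators) with more
    cross-sample variation (lower entropy) get larger weights."""
    X = np.asarray(X, dtype=float)
    m, _ = X.shape
    # min-max normalise each benefit-type indicator column to [0, 1]
    X = (X - X.min(axis=0)) / (X.max(axis=0) - X.min(axis=0))
    # column-wise proportions; a small epsilon avoids log(0)
    P = (X + 1e-12) / (X + 1e-12).sum(axis=0)
    e = -(P * np.log(P)).sum(axis=0) / np.log(m)   # entropy per indicator
    d = 1.0 - e                                    # degree of divergence
    return d / d.sum()                             # weights sum to 1

# hypothetical data: 5 provinces x 3 digital-economy indicators
X = np.array([[1., 10., 3.],
              [2., 10., 5.],
              [3., 10., 2.],
              [4., 11., 8.],
              [5., 10., 1.]])
w = entropy_weights(X)
index = X @ w   # composite development index per province
```

The composite index is then simply the weighted sum of the (normalized) indicators for each province, which is what allows the interprovincial comparisons described above.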
3.
The partially linear model is an important class of semiparametric regression models; because it contains both a parametric and a nonparametric component, it is more flexible and has greater explanatory power than the conventional linear model. This paper studies statistical inference for fixed-effects partially linear panel data models with locally stationary covariates. A two-stage estimation method is first proposed to estimate the unknown parameters and the nonparametric function, and the asymptotic properties of the estimators are established; a uniform confidence band for the nonparametric function is then constructed using the invariance principle. Simulation studies and an empirical application confirm the effectiveness of the method.
4.
From the perspective of farmers' willingness, this study takes the contracted farmland of rural households in Fujian Province as its subject and analyzes the factors influencing the transfer of contracted land. Sample data were collected through field surveys and questionnaires, and a binary logistic model was built to test the significance of each factor. The empirical results show that the household head's education level and occupation, household size, the main source of household income, contracted land area, and the rent level for transferred land are significant factors: education level, main income source, land area, and rent level have positive effects, while the household head's occupation and household size have negative effects. Policy recommendations are offered based on these conclusions.
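A binary logistic model of the kind used here can be fitted by Newton-Raphson (iteratively reweighted least squares). The sketch below uses a single simulated education covariate rather than the Fujian survey data, so the data and coefficients are illustrative only.

```python
import numpy as np

def logit_newton(X, y, iters=25):
    """Fit a binary logistic regression by Newton-Raphson (IRLS).
    X: (n, p) design matrix with an intercept column; y in {0, 1}."""
    beta = np.zeros(X.shape[1])
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-X @ beta))        # predicted probabilities
        W = p * (1.0 - p)                          # diagonal IRLS weights
        H = X.T @ (X * W[:, None])                 # Hessian of log-likelihood
        beta += np.linalg.solve(H, X.T @ (y - p))  # Newton step
    return beta

# hypothetical survey: does education (years) raise transfer probability?
rng = np.random.default_rng(0)
edu = rng.uniform(0, 16, 200)
true_prob = 1.0 / (1.0 + np.exp(-(-2.0 + 0.3 * edu)))
y = rng.binomial(1, true_prob)
X = np.column_stack([np.ones(200), edu])
beta = logit_newton(X, y)   # beta[1] should be near the true slope 0.3
```

A positive fitted slope corresponds to the "positive influence" reported for education level in the abstract; significance would be assessed from the inverse Hessian (the estimated covariance matrix), which the sketch omits.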
5.
Financial stress index (FSI) is considered to be an important risk management tool to quantify financial vulnerabilities. This paper proposes a new framework based on a hybrid classifier model that integrates rough set theory (RST), FSI, support vector regression (SVR) and a control chart to identify stressed periods. First, the RST method is applied to select variables. The outputs are used as input data for FSI–SVR computation. Empirical analysis is conducted based on monthly FSI of the Federal Reserve Bank of Saint Louis from January 1992 to June 2011. A comparison study is performed between FSI based on the principal component analysis and FSI–SVR. A control chart based on FSI–SVR and extreme value theory is proposed to identify the extremely stressed periods. Our approach identified different stressed periods including internet bubble, subprime crisis and actual financial stress episodes, along with the calmest periods, agreeing with those given by Federal Reserve System reports.
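The final step, flagging stressed months against a control limit, can be illustrated with a deliberately simplified stand-in: the sketch below uses an empirical high quantile of the index as the limit, whereas the paper derives its limit from extreme value theory. The monthly series is synthetic, not the St. Louis Fed FSI.

```python
import numpy as np

def stressed_periods(fsi, q=0.90):
    """Flag observations whose stress index exceeds the empirical
    q-quantile. A simplified stand-in for an EVT-based control limit."""
    limit = np.quantile(fsi, q)
    return np.flatnonzero(fsi > limit), limit

# hypothetical monthly index: calm noise plus a six-month crisis episode
rng = np.random.default_rng(1)
fsi = rng.normal(0, 1, 120)
fsi[60:66] += 5.0                 # injected stress episode
flags, limit = stressed_periods(fsi)
```

On real data the flagged indices would map back to calendar months, which is how episodes such as the internet bubble and the subprime crisis are identified.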
6.
Accessibility of library electronic resources is a must. Its importance derives from professional ethics of librarianship, rising total costs of acquisition, and mounting legal challenges to colleges and universities that fail to provide resources accessible to users with disabilities. Library staff are responsible for ensuring the accessibility of vendor-licensed e-resources. This column reviews the accessibility clauses of nine model license agreements for electronic resources. It describes terms that should go into an optimal accessibility clause and creates a composite model clause. It also provides guidance for library staff seeking to negotiate stronger accessibility language into vendor license agreements. Finally, it addresses the impact of accommodation requests on the total cost of acquiring library e-resources, concluding with a call to redouble efforts to advocate for greater accessibility and educate both vendors and library staff about its importance.
7.
This article presents a flood risk analysis model that considers the spatially heterogeneous nature of flood events. The basic concept of this approach is to generate a large sample of flood events that can be regarded as temporal extrapolation of flood events. These are combined with cumulative flood impact indicators, such as building damages, to finally derive time series of damages for risk estimation. Therefore, a multivariate modeling procedure that is able to take into account the spatial characteristics of flooding, the regionalization method top‐kriging, and three different impact indicators are combined in a model chain. Eventually, the expected annual flood impact (e.g., expected annual damages) and the flood impact associated with a low probability of occurrence are determined for a study area. The risk model has the potential to augment the understanding of flood risk in a region and thereby contribute to enhanced risk management of, for example, risk analysts and policymakers or insurance companies. The modeling framework was successfully applied in a proof‐of‐concept exercise in Vorarlberg (Austria). The results of the case study show that risk analysis has to be based on spatially heterogeneous flood events in order to estimate flood risk adequately.
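The final risk quantities, expected annual damage and the damage associated with a low exceedance probability, reduce to simple statistics once a long synthetic damage series exists. The sketch below assumes a made-up 10,000-year series (most years flood-free, occasional heavy-tailed losses) in place of the model chain described above.

```python
import numpy as np

def flood_risk_summary(annual_damages, p_exceed=0.01):
    """Expected annual damage, and the damage exceeded with
    probability p_exceed, from a simulated annual damage series."""
    ead = annual_damages.mean()
    extreme = np.quantile(annual_damages, 1.0 - p_exceed)
    return ead, extreme

# hypothetical synthetic series: flood in ~5% of years,
# lognormal (heavy-tailed) losses when one occurs
rng = np.random.default_rng(2)
occurs = rng.random(10_000) < 0.05
losses = rng.lognormal(mean=2.0, sigma=1.0, size=10_000)
damages = np.where(occurs, losses, 0.0)
ead, d100 = flood_risk_summary(damages)   # d100 ~ 1-in-100-year damage
```

In the study itself the `damages` series comes from the spatially heterogeneous event generator and the impact indicators, but the aggregation to EAD and low-probability damage works the same way.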
8.
In this paper, we consider the deterministic trend model where the error process is allowed to be weakly or strongly correlated and subject to non‐stationary volatility. Extant estimators of the trend coefficient are analysed. We find that under heteroskedasticity, the Cochrane–Orcutt‐type estimator (with some initial condition) could be less efficient than Ordinary Least Squares (OLS) when the process is highly persistent, whereas it is asymptotically equivalent to OLS when the process is less persistent. An efficient non‐parametrically weighted Cochrane–Orcutt‐type estimator is then proposed. The efficiency is uniform over weak or strong serial correlation and non‐stationary volatility of unknown form. The feasible estimator relies on non‐parametric estimation of the volatility function, and the asymptotic theory is provided. We use the data‐dependent smoothing bandwidth that can automatically adjust for the strength of non‐stationarity in volatilities. The implementation does not require pretesting persistence of the process or specification of non‐stationary volatility. Finite‐sample evaluation via simulations and an empirical application demonstrates the good performance of proposed estimators.
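For reference, the classical (unweighted) Cochrane–Orcutt step that the paper builds on can be sketched as follows; the paper's contribution, the nonparametric volatility weighting, is deliberately omitted here, and the AR(1) trend data are simulated.

```python
import numpy as np

def cochrane_orcutt_trend(y):
    """One Cochrane-Orcutt iteration for y_t = a + b*t + u_t with AR(1)
    errors: OLS, estimate rho from residuals, OLS on quasi-differences."""
    n = len(y)
    t = np.arange(1, n + 1, dtype=float)
    X = np.column_stack([np.ones(n), t])
    b_ols, *_ = np.linalg.lstsq(X, y, rcond=None)
    u = y - X @ b_ols
    rho = (u[:-1] @ u[1:]) / (u[:-1] @ u[:-1])   # AR(1) coefficient
    y_star = y[1:] - rho * y[:-1]                # quasi-differencing
    X_star = X[1:] - rho * X[:-1]
    b_co, *_ = np.linalg.lstsq(X_star, y_star, rcond=None)
    return b_co, rho

# hypothetical trend with persistent AR(1) noise
rng = np.random.default_rng(3)
n, rho_true = 300, 0.7
e = rng.normal(0, 1, n)
u = np.zeros(n)
for i in range(1, n):
    u[i] = rho_true * u[i - 1] + e[i]
y = 1.0 + 0.5 * np.arange(1, n + 1) + u
beta, rho_hat = cochrane_orcutt_trend(y)   # beta[1] near the true slope 0.5
```

Under non-stationary volatility the paper's estimator would additionally reweight the quasi-differenced observations by a nonparametric volatility estimate, which is what delivers the uniform efficiency claimed above.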
9.
A conformance proportion is an important and useful index to assess industrial quality improvement. Statistical confidence limits for a conformance proportion are usually required not only to perform statistical significance tests, but also to provide useful information for determining practical significance. In this article, we propose approaches for constructing statistical confidence limits for a conformance proportion of multiple quality characteristics. Under the assumption that the variables of interest are distributed with a multivariate normal distribution, we develop an approach based on the concept of a fiducial generalized pivotal quantity (FGPQ). Without any distribution assumption on the variables, we apply some confidence interval construction methods for the conformance proportion by treating it as the probability of a success in a binomial distribution. The performance of the proposed methods is evaluated through detailed simulation studies. The results reveal that the simulated coverage probability (cp) for the FGPQ-based method is generally larger than the claimed value. On the other hand, one of the binomial distribution-based methods, that is, the standard method suggested in classical textbooks, appears to have smaller simulated cps than the nominal level. Two alternatives to the standard method are found to maintain their simulated cps sufficiently close to the claimed level, and hence their performances are judged to be satisfactory. In addition, three examples are given to illustrate the application of the proposed methods.  
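The binomial-based route can be illustrated concretely. The "standard method suggested in classical textbooks" is the Wald interval; the sketch contrasts it with the Wilson score interval, one commonly used alternative with better coverage near 0 or 1 (the article does not name its two alternatives, so Wilson here is an assumption for illustration).

```python
import math

def wald_interval(x, n, z=1.96):
    """Standard (Wald) interval for a binomial proportion,
    clipped to [0, 1]."""
    p = x / n
    half = z * math.sqrt(p * (1 - p) / n)
    return max(0.0, p - half), min(1.0, p + half)

def wilson_interval(x, n, z=1.96):
    """Wilson score interval; coverage stays closer to nominal
    for proportions near 0 or 1."""
    p = x / n
    denom = 1 + z * z / n
    centre = (p + z * z / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z * z / (4 * n * n))
    return centre - half, centre + half

# a high conformance proportion: 98 conforming units out of 100
wald = wald_interval(98, 100)
wilson = wilson_interval(98, 100)
```

At p-hat = 0.98 the Wald upper limit spills past 1 before clipping, which is symptomatic of the under-coverage the simulations report; the Wilson limit stays strictly inside (0, 1).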
10.
Researchers have been developing various extensions and modified forms of the Weibull distribution to enhance its capability for modeling and fitting different data sets. In this note, we investigate the potential usefulness of a new modification of the standard Weibull distribution, called the odd Weibull distribution, in studies of income and economic inequality. Some mathematical and statistical properties of this model are derived. We obtain explicit expressions for the first incomplete moment, quantile function, Lorenz and Zenga curves and related inequality indices. In addition to the well-known stochastic order based on the Lorenz curve, the stochastic order based on the Zenga curve is considered. Since the new generalized Weibull distribution seems to be suitable for modeling wealth, financial, actuarial and especially income distributions, these findings are fundamental to understanding how parameter values are related to inequality. Estimation of the parameters by maximum likelihood and the method of moments is also discussed. Finally, the distribution is fitted to United States and Austrian income data sets and is found to fit remarkably well compared with other widely used income models.
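The empirical side of such an analysis, computing the Lorenz curve and a summary inequality index from an income sample, can be sketched as follows. As a stand-in the incomes are drawn from a standard Weibull (not the odd Weibull of the note), so the resulting Gini value is illustrative only.

```python
import numpy as np

def lorenz_gini(income):
    """Empirical Lorenz curve ordinates and Gini coefficient
    (trapezoidal approximation of the area under the curve)."""
    x = np.sort(np.asarray(income, dtype=float))
    n = len(x)
    cum = np.cumsum(x) / x.sum()          # cumulative income shares
    L = np.insert(cum, 0, 0.0)            # Lorenz ordinates at i/n, i = 0..n
    gini = 1.0 - 2.0 * cum.sum() / n + 1.0 / n
    return L, gini

# hypothetical incomes from a standard Weibull with shape 1.5, a rough
# stand-in for the odd Weibull income model discussed above
rng = np.random.default_rng(4)
income = rng.weibull(1.5, 5000) * 30_000
L, gini = lorenz_gini(income)
```

For the standard Weibull the Gini has the closed form 1 - 2^(-1/k), about 0.37 at shape k = 1.5, which is the kind of parameter-to-inequality mapping the note derives analytically for the odd Weibull.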
Copyright © 北京勤云科技发展有限公司  京ICP备09084417号