Full-Text Access Type
Paid full text | 8350 articles |
Free | 269 articles |
Domestic free | 91 articles |
Subject Classification
Management | 758 articles |
Labor Science | 2 articles |
Ethnology | 36 articles |
Talent Studies | 1 article |
Demography | 133 articles |
Collected Works | 536 articles |
Theory and Methodology | 238 articles |
General | 4371 articles |
Sociology | 347 articles |
Statistics | 2288 articles |
Publication Year
2024 | 7 articles |
2023 | 54 articles |
2022 | 56 articles |
2021 | 102 articles |
2020 | 127 articles |
2019 | 188 articles |
2018 | 213 articles |
2017 | 311 articles |
2016 | 219 articles |
2015 | 239 articles |
2014 | 468 articles |
2013 | 1104 articles |
2012 | 561 articles |
2011 | 596 articles |
2010 | 451 articles |
2009 | 444 articles |
2008 | 494 articles |
2007 | 524 articles |
2006 | 435 articles |
2005 | 430 articles |
2004 | 346 articles |
2003 | 324 articles |
2002 | 213 articles |
2001 | 213 articles |
2000 | 164 articles |
1999 | 77 articles |
1998 | 60 articles |
1997 | 53 articles |
1996 | 44 articles |
1995 | 39 articles |
1994 | 19 articles |
1993 | 20 articles |
1992 | 25 articles |
1991 | 19 articles |
1990 | 9 articles |
1989 | 10 articles |
1988 | 10 articles |
1987 | 4 articles |
1986 | 5 articles |
1985 | 7 articles |
1984 | 1 article |
1983 | 8 articles |
1982 | 4 articles |
1981 | 3 articles |
1980 | 3 articles |
1979 | 2 articles |
1978 | 2 articles |
1977 | 2 articles |
1975 | 1 article |
Sort by: 8710 results found, search time 15 ms
1.
With the development of information technology, the digital economy has become a "new engine" of economic growth. However, for lack of an authoritative industrial statistical classification standard, scholars have long faced the awkward situation of "digital economy research without digital evidence." Based on the classification standard in the Statistical Classification of the Digital Economy and Its Core Industries (2021) published and implemented by the National Bureau of Statistics, this paper reorganizes data from provincial statistical yearbooks, constructs a digital economy development index using the entropy weight method, measures the digital economy development level of 30 Chinese provinces, and analyzes interprovincial differences and spatio-temporal characteristics. The study finds that China's digital economy industries developed rapidly over 2009-2019, with substantial progress in every sub-industry. By comparison, the digital-factor-driven industry grew slightly more slowly than the other three sub-industries, and digital economy development shows marked regional imbalance: the eastern and central regions clearly outperform the western region, the south outperforms the north, and the regional imbalance shows a continuing tendency to widen.
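The entropy weight method named in the abstract above can be sketched in a few lines. The data below (4 provinces, 3 indicators) are hypothetical, and all indicators are assumed positively oriented; this is a minimal illustration, not the paper's actual index.

```python
import numpy as np

def entropy_weights(X):
    """Entropy-weight method: rows = provinces, columns = indicators.
    Assumes every indicator is positively oriented (larger = better)."""
    X = np.asarray(X, dtype=float)
    # Min-max normalize each indicator column to [0, 1]
    Z = (X - X.min(axis=0)) / (X.max(axis=0) - X.min(axis=0))
    # Share of each province under each indicator (epsilon avoids log(0))
    P = (Z + 1e-12) / (Z + 1e-12).sum(axis=0)
    n = X.shape[0]
    e = -(P * np.log(P)).sum(axis=0) / np.log(n)   # entropy per indicator
    d = 1.0 - e                                    # degree of divergence
    return d / d.sum()                             # weights sum to 1

# Hypothetical data: 4 provinces x 3 indicators
X = np.array([[1.0, 200, 3.2],
              [2.5, 180, 4.1],
              [0.8, 150, 2.9],
              [3.0, 400, 5.0]])
w = entropy_weights(X)
# Composite development index: normalized indicators times the weights
index = ((X - X.min(0)) / (X.max(0) - X.min(0))) @ w
```

Indicators that spread provinces apart carry more information (lower entropy) and therefore receive larger weights.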
2.
Recently, Kambo and his co-researchers (2012) proposed a method of approximation for evaluating the one-dimensional renewal function based on the first three moments. Their method is simple and elegant, and gives exact values for well-known distributions. In this article, we propose an analogous method for the evaluation of the bivariate renewal function based on the first two moments of the variables and their joint moment. The proposed method yields exact results for certain widely used bivariate distributions such as the bivariate exponential, bivariate Weibull, and bivariate Pareto distributions. An illustrative example in the form of a two-dimensional warranty problem is considered, and comparisons of our method are made with the results of other models.
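The moment-based approximation itself is not reproduced here, but the quantity being approximated, the renewal function M(t) = E[N(t)], can always be benchmarked by plain Monte Carlo simulation. The sampler and parameters below are illustrative; for exponential inter-arrival times with rate λ the exact value M(t) = λt provides a check.

```python
import numpy as np

rng = np.random.default_rng(0)

def renewal_function_mc(sampler, t, n_paths=20000):
    """Monte Carlo estimate of the renewal function M(t) = E[N(t)],
    where N(t) counts renewals with i.i.d. inter-arrival times
    drawn from `sampler`."""
    counts = np.zeros(n_paths)
    for i in range(n_paths):
        s, n = 0.0, 0
        while True:
            s += sampler()          # next inter-arrival time
            if s > t:
                break
            n += 1
        counts[i] = n
    return counts.mean()

# Exponential(rate=lam) inter-arrivals: exact M(t) = lam * t
lam = 2.0
est = renewal_function_mc(lambda: rng.exponential(1 / lam), t=3.0)
```

A closed-form approximation like the one proposed in the paper is valuable precisely because such simulation is slow for bivariate problems.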
3.
Financial stress index (FSI) is considered to be an important risk management tool to quantify financial vulnerabilities. This paper proposes a new framework based on a hybrid classifier model that integrates rough set theory (RST), FSI, support vector regression (SVR) and a control chart to identify stressed periods. First, the RST method is applied to select variables. The outputs are used as input data for FSI–SVR computation. Empirical analysis is conducted based on the monthly FSI of the Federal Reserve Bank of Saint Louis from January 1992 to June 2011. A comparison study is performed between FSI based on principal component analysis and FSI–SVR. A control chart based on FSI–SVR and extreme value theory is proposed to identify extremely stressed periods. Our approach identified different stressed periods, including the Internet bubble, the subprime crisis and other actual financial stress episodes, along with the calmest periods, agreeing with those given by Federal Reserve System reports.
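The control-chart step of such a pipeline can be illustrated with a plain Shewhart-style upper limit. The paper derives its limit from extreme value theory; a 3-sigma rule and synthetic monthly data stand in here purely for illustration.

```python
import numpy as np

def stressed_periods(fsi, k=3.0):
    """Flag months where the stress index exceeds mean + k * std
    (a Shewhart-style upper control limit, not the paper's EVT limit)."""
    fsi = np.asarray(fsi, dtype=float)
    ucl = fsi.mean() + k * fsi.std(ddof=1)
    return np.flatnonzero(fsi > ucl), ucl

rng = np.random.default_rng(1)
fsi = rng.normal(0, 1, 234)   # placeholder monthly index, 1992-2011 length
fsi[200:205] += 6             # injected synthetic stress episode
flagged, ucl = stressed_periods(fsi)
```

The flagged indices should coincide with the injected episode; with real data they would mark the stressed months.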
4.
Towards Uniformly Efficient Trend Estimation Under Weak/Strong Correlation and Non‐stationary Volatility
In this paper, we consider the deterministic trend model where the error process is allowed to be weakly or strongly correlated and subject to non‐stationary volatility. Extant estimators of the trend coefficient are analysed. We find that under heteroskedasticity, the Cochrane–Orcutt‐type estimator (with some initial condition) could be less efficient than Ordinary Least Squares (OLS) when the process is highly persistent, whereas it is asymptotically equivalent to OLS when the process is less persistent. An efficient non‐parametrically weighted Cochrane–Orcutt‐type estimator is then proposed. The efficiency is uniform over weak or strong serial correlation and non‐stationary volatility of unknown form. The feasible estimator relies on non‐parametric estimation of the volatility function, and the asymptotic theory is provided. We use a data‐dependent smoothing bandwidth that can automatically adjust for the strength of non‐stationarity in volatilities. The implementation does not require pretesting persistence of the process or specification of non‐stationary volatility. Finite‐sample evaluation via simulations and an empirical application demonstrates the good performance of the proposed estimators.
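For reference, the baseline iterated Cochrane–Orcutt procedure for a linear trend with AR(1) errors can be sketched as below. This is the textbook estimator being compared against, not the paper's nonparametrically weighted version; the simulated data are illustrative.

```python
import numpy as np

def cochrane_orcutt_trend(y, n_iter=10):
    """Iterated Cochrane-Orcutt for y_t = a + b*t + u_t with
    AR(1) errors u_t = rho*u_{t-1} + e_t."""
    n = len(y)
    t = np.arange(1, n + 1, dtype=float)
    X = np.column_stack([np.ones(n), t])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)    # OLS starting values
    for _ in range(n_iter):
        u = y - X @ beta
        rho = (u[:-1] @ u[1:]) / (u[:-1] @ u[:-1])  # AR(1) coefficient
        ys = y[1:] - rho * y[:-1]                   # quasi-differenced data
        Xs = X[1:] - rho * X[:-1]
        beta, *_ = np.linalg.lstsq(Xs, ys, rcond=None)
    return beta, rho

# Simulated trend with moderately persistent AR(1) errors
rng = np.random.default_rng(2)
n, rho_true = 500, 0.6
e = rng.normal(0, 1, n)
u = np.zeros(n)
for i in range(1, n):
    u[i] = rho_true * u[i - 1] + e[i]
y = 1.0 + 0.5 * np.arange(1, n + 1) + u
beta, rho_hat = cochrane_orcutt_trend(y)
```

The paper's point is that under non-stationary volatility and high persistence this estimator can lose efficiency relative to OLS, motivating the weighted version.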
5.
A conformance proportion is an important and useful index to assess industrial quality improvement. Statistical confidence limits for a conformance proportion are usually required not only to perform statistical significance tests, but also to provide useful information for determining practical significance. In this article, we propose approaches for constructing statistical confidence limits for a conformance proportion of multiple quality characteristics. Under the assumption that the variables of interest are distributed with a multivariate normal distribution, we develop an approach based on the concept of a fiducial generalized pivotal quantity (FGPQ). Without any distribution assumption on the variables, we apply some confidence interval construction methods for the conformance proportion by treating it as the probability of a success in a binomial distribution. The performance of the proposed methods is evaluated through detailed simulation studies. The results reveal that the simulated coverage probability (cp) for the FGPQ-based method is generally larger than the claimed value. On the other hand, one of the binomial distribution-based methods, that is, the standard method suggested in classical textbooks, appears to have smaller simulated cps than the nominal level. Two alternatives to the standard method are found to maintain their simulated cps sufficiently close to the claimed level, and hence their performances are judged to be satisfactory. In addition, three examples are given to illustrate the application of the proposed methods.
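The "standard method suggested in classical textbooks" for a binomial proportion is the Wald interval, whose undercoverage is well documented; a common better-covering alternative is the Wilson score interval (whether that is one of the article's two alternatives is not stated in the abstract). The counts below are hypothetical.

```python
import math

def wald_ci(x, n, z=1.959964):
    """Standard (Wald) interval: p_hat +/- z * sqrt(p_hat(1-p_hat)/n)."""
    p = x / n
    h = z * math.sqrt(p * (1 - p) / n)
    return p - h, p + h

def wilson_ci(x, n, z=1.959964):
    """Wilson score interval; respects the [0, 1] parameter space."""
    p = x / n
    denom = 1 + z * z / n
    centre = (p + z * z / (2 * n)) / denom
    h = z * math.sqrt(p * (1 - p) / n + z * z / (4 * n * n)) / denom
    return centre - h, centre + h

# Hypothetical sample: 47 of 50 units conforming
lo_w, hi_w = wald_ci(47, 50)     # upper limit exceeds 1 -- a known defect
lo_s, hi_s = wilson_ci(47, 50)   # stays inside [0, 1]
```

Near p = 1, exactly the region relevant for conformance proportions, the Wald interval both overshoots the parameter space and undercovers, which matches the simulation finding quoted in the abstract.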
6.
Science and technology enterprises are the drivers of technological innovation, important carriers for commercializing research results, and key participants in R&D; scientifically evaluating their innovation capability helps such enterprises grow and develop. Building on domestic and international theoretical research on the innovation-driven development of science and technology enterprises in new areas, this paper applies the analytic hierarchy process (AHP) to construct an evaluation system for enterprise innovation capability from four perspectives: R&D input, R&D base, R&D returns, and modern technology, and demonstrates the system's reliability through empirical analysis. The system is then used to measure the current innovation capability of science and technology enterprises in the Xiong'an New Area; comparison with enterprises in mature new areas reveals their shortcomings and gaps, with the aim of providing a useful reference for decision-makers in scientifically evaluating and managing the innovative development of science and technology enterprises.
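The AHP weighting step described above can be sketched as follows. The pairwise-comparison matrix over the four criteria (R&D input, R&D base, R&D returns, modern technology) is hypothetical Saaty-scale judgement, not the study's actual matrix; weights come from the principal eigenvector, and a consistency ratio checks the judgements.

```python
import numpy as np

# Hypothetical 4x4 reciprocal pairwise-comparison matrix (Saaty 1-9 scale)
A = np.array([[1,     3,     5,   2  ],
              [1 / 3, 1,     3,   1 / 2],
              [1 / 5, 1 / 3, 1,   1 / 4],
              [1 / 2, 2,     4,   1  ]], dtype=float)

# Priority weights = normalized principal (Perron) eigenvector
eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
w = np.abs(eigvecs[:, k].real)
w = w / w.sum()

# Consistency index and ratio (random index RI = 0.90 for n = 4);
# CR below 0.1 is the usual acceptance threshold
n = A.shape[0]
ci = (eigvals.real[k] - n) / (n - 1)
cr = ci / 0.90
```

The resulting weights then aggregate the criterion-level scores into a single innovation-capability score per enterprise.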
7.
Researchers have been developing various extensions and modified forms of the Weibull distribution to enhance its capability for modeling and fitting different data sets. In this note, we investigate the potential usefulness of a new modification of the standard Weibull distribution, called the odd Weibull distribution, in studies of income inequality. Some mathematical and statistical properties of this model are derived. We obtain explicit expressions for the first incomplete moment, quantile function, Lorenz and Zenga curves, and related inequality indices. In addition to the well-known stochastic order based on the Lorenz curve, the stochastic order based on the Zenga curve is considered. Since the new generalized Weibull distribution seems suitable for modeling wealth, financial, actuarial and especially income distributions, these findings are fundamental to understanding how parameter values relate to inequality. The estimation of parameters by maximum likelihood and moment methods is also discussed. Finally, the distribution is fitted to United States and Austrian income data sets and is found to fit remarkably well compared with other widely used income models.
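The Lorenz curve and the Gini index it induces, central objects in the abstract above, can be computed empirically from any income sample. This sketch uses the sample versions, not the paper's closed-form odd Weibull expressions; the incomes are hypothetical.

```python
import numpy as np

def lorenz_gini(income):
    """Empirical Lorenz ordinates L(i/n) and Gini index for an income sample.
    Gini is computed by the trapezoid rule: G = 1 - (2*sum(L_i) - 1) / n."""
    x = np.sort(np.asarray(income, dtype=float))
    n = x.size
    cum = np.cumsum(x) / x.sum()          # Lorenz curve at i/n, i = 1..n
    gini = 1 - (2 * cum.sum() - 1) / n
    return cum, gini

cum, gini = lorenz_gini([12, 30, 5, 18, 35])   # hypothetical incomes; gini ~ 0.31
```

Perfect equality gives a Gini of 0; concentrating all income in one unit of an n-sample gives (n-1)/n, approaching 1 as n grows.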
8.
Modeling spatial overdispersion requires point process models with finite‐dimensional distributions that are overdispersed relative to the Poisson distribution. Fitting such models usually relies heavily on the properties of stationarity, ergodicity, and orderliness. In addition, although processes based on negative binomial finite‐dimensional distributions have been widely considered, they typically fail to satisfy all three required properties simultaneously. Indeed, it has been conjectured by Diggle and Milne that no negative binomial model can satisfy all three properties. In light of this, we change perspective and construct a new process based on a different overdispersed count model, namely the generalized Waring (GW) distribution. While comparably tractable and flexible to negative binomial processes, the GW process is shown to possess all required properties and additionally to span the negative binomial and Poisson processes as limiting cases. In this sense, the GW process provides an approximate resolution to the conundrum highlighted by Diggle and Milne.
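The overdispersion contrast underlying this construction, Poisson counts with variance equal to the mean versus negative binomial counts with variance exceeding it, can be checked numerically. Parameters below are illustrative; the GW distribution itself is not sampled here.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 200_000

# Poisson counts: dispersion index Var/Mean is 1
pois = rng.poisson(lam=4.0, size=n)

# Negative binomial with the same mean mu but variance mu + mu^2 / r;
# numpy parameterizes by (r, p) with mean r(1-p)/p, so p = r / (r + mu)
r, mu = 2.0, 4.0
p = r / (r + mu)
nb = rng.negative_binomial(r, p, size=n)

disp_pois = pois.var() / pois.mean()   # close to 1
disp_nb = nb.var() / nb.mean()         # close to 1 + mu/r = 3
```

As r grows the negative binomial dispersion index tends to 1, recovering the Poisson case; the GW process plays the analogous limiting role at the process level.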
9.
10.
Juan Kalemkerian, Communications in Statistics - Theory and Methods, 2019, 48(16): 3956-3975