Article Search
  Paid full text   12246 articles
  Free   1272 articles
  Free (domestic)   149 articles
Management   1650 articles
Labor Science   2 articles
Ethnology   30 articles
Talent Studies   1 article
Demography   241 articles
Collected Works and Book Series   414 articles
Theory and Methodology   994 articles
General (Interdisciplinary)   3321 articles
Sociology   2026 articles
Statistics   4988 articles
  2024   22 articles
  2023   108 articles
  2022   146 articles
  2021   261 articles
  2020   368 articles
  2019   640 articles
  2018   573 articles
  2017   816 articles
  2016   679 articles
  2015   646 articles
  2014   845 articles
  2013   1850 articles
  2012   951 articles
  2011   726 articles
  2010   641 articles
  2009   491 articles
  2008   606 articles
  2007   532 articles
  2006   454 articles
  2005   424 articles
  2004   379 articles
  2003   317 articles
  2002   244 articles
  2001   258 articles
  2000   214 articles
  1999   88 articles
  1998   79 articles
  1997   58 articles
  1996   42 articles
  1995   48 articles
  1994   34 articles
  1993   26 articles
  1992   23 articles
  1991   15 articles
  1990   14 articles
  1989   7 articles
  1988   10 articles
  1987   7 articles
  1986   6 articles
  1985   6 articles
  1984   6 articles
  1983   5 articles
  1980   1 article
  1977   1 article
A total of 10,000 query results were returned (search time: 31 ms).
1.
With the development of information technology, the digital economy has become a "new engine" of economic growth. However, owing to the lack of an authoritative industrial statistical classification standard, scholars have long faced the awkward situation of "digital economy research without digital evidence". Based on the classification standard in the Statistical Classification of the Digital Economy and Its Core Industries (2021) issued and implemented by the National Bureau of Statistics, this paper reorganizes data from provincial statistical yearbooks, constructs a digital economy development index using the entropy weight method, measures the level of digital economy development in 30 Chinese provinces, and analyses inter-provincial differences as well as spatial and temporal characteristics. The study finds that China's digital economy industries grew rapidly from 2009 to 2019, with substantial progress in every sub-industry; by comparison, the digital-factor-driven sector grew slightly more slowly than the other three sub-industries. Digital economy development also shows marked regional imbalance: the eastern and central regions clearly outperform the western region, the south outperforms the north, and the imbalance shows a continuing widening trend.
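The index construction above relies on the entropy weight method. As a rough illustration only (not the authors' exact procedure), the sketch below computes entropy weights and a composite index from a hypothetical province-by-indicator matrix; the function name, data dimensions, and simulated inputs are illustrative assumptions.

```python
import numpy as np

def entropy_weight_index(X):
    """Entropy-weight composite index.

    X : (n_provinces, n_indicators) array of positive-oriented indicators.
    Returns (indicator weights, composite index scores).
    """
    X = np.asarray(X, dtype=float)
    # 1. Min-max normalise each indicator (small offset avoids log(0)).
    Z = (X - X.min(axis=0)) / (X.max(axis=0) - X.min(axis=0) + 1e-12) + 1e-12
    # 2. Share of each province in each indicator.
    P = Z / Z.sum(axis=0)
    # 3. Information entropy of each indicator.
    n = X.shape[0]
    e = -(P * np.log(P)).sum(axis=0) / np.log(n)
    # 4. Entropy weights: indicators with more dispersion receive larger weights.
    d = 1.0 - e
    w = d / d.sum()
    # 5. Weighted sum gives the composite development index.
    return w, Z @ w

# Hypothetical data: 30 provinces, 4 sub-industry indicators.
rng = np.random.default_rng(0)
weights, index = entropy_weight_index(rng.gamma(2.0, 1.0, size=(30, 4)))
```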
2.
Partially linear models are an important class of semiparametric regression models: because they contain both a parametric and a nonparametric component, they are more flexible and more interpretable than ordinary linear models. This paper studies statistical inference for fixed-effects partially linear panel data models with locally stationary covariates. A two-stage estimation method is first proposed to obtain estimates of the unknown parameters and the nonparametric function, and the asymptotic properties of the estimators are established. An invariance principle is then used to construct uniform confidence bands for the nonparametric function. Finally, simulation studies and an empirical application demonstrate the effectiveness of the method.
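The paper's two-stage estimator handles fixed effects and locally stationary covariates; as background only, the sketch below implements the classical double-residual (two-stage) idea for a cross-sectional partially linear model with a simple kernel smoother and simulated data. The bandwidth, function names, and data-generating process are illustrative assumptions, not the authors' procedure.

```python
import numpy as np

def nw_smooth(z, target, h):
    """Nadaraya-Watson kernel regression of `target` on the scalar covariate `z`."""
    z = np.asarray(z, float)
    d = (z[:, None] - z[None, :]) / h
    K = np.exp(-0.5 * d**2)                  # Gaussian kernel weights
    W = K / K.sum(axis=1, keepdims=True)
    return W @ target

def partially_linear_fit(y, X, z, h=0.3):
    """Two-stage (double-residual) estimator for y = X @ beta + g(z) + e."""
    # Stage 1: remove the nonparametric component by conditioning on z.
    y_res = y - nw_smooth(z, y, h)
    X_res = X - nw_smooth(z, X, h)
    # Stage 2: OLS on the residuals gives the parametric part beta.
    beta, *_ = np.linalg.lstsq(X_res, y_res, rcond=None)
    # Recover g(.) by smoothing what is left after the linear part.
    g_hat = nw_smooth(z, y - X @ beta, h)
    return beta, g_hat

# Hypothetical simulated data.
rng = np.random.default_rng(1)
n = 300
z = rng.uniform(0, 1, n)
X = rng.normal(size=(n, 2))
y = X @ np.array([1.0, -0.5]) + np.sin(2 * np.pi * z) + 0.2 * rng.normal(size=n)
beta_hat, g_hat = partially_linear_fit(y, X, z)
```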
3.
This article highlights three dimensions to understanding children's well‐being during and after parental imprisonment which have not been fully explored in current research. A consideration of ‘time’ reveals the importance of children's past experiences and their anticipated futures. A focus on ‘space’ highlights the impact of new or altered environmental dynamics. A study of ‘agency’ illuminates how children cope within structural, material and social confines which intensify vulnerability and dependency. This integrated perspective reveals important differences in individual children's experiences and commonalities in broader systemic and social constraints on prisoners’ children. The paper analyses data from a prospective longitudinal study of 35 prisoners’ children during and after their (step) father's imprisonment to illustrate the arguments.
4.
Financial stress index (FSI) is considered to be an important risk management tool to quantify financial vulnerabilities. This paper proposes a new framework based on a hybrid classifier model that integrates rough set theory (RST), FSI, support vector regression (SVR) and a control chart to identify stressed periods. First, the RST method is applied to select variables. The outputs are used as input data for FSI–SVR computation. Empirical analysis is conducted based on monthly FSI of the Federal Reserve Bank of Saint Louis from January 1992 to June 2011. A comparison study is performed between FSI based on the principal component analysis and FSI–SVR. A control chart based on FSI–SVR and extreme value theory is proposed to identify the extremely stressed periods. Our approach identified different stressed periods including internet bubble, subprime crisis and actual financial stress episodes, along with the calmest periods, agreeing with those given by Federal Reserve System reports.
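As a rough sketch of the FSI–SVR idea (not the paper's implementation, which also uses rough-set variable selection and an extreme-value-theory threshold), the code below fits a support vector regression to a hypothetical stress index and flags months exceeding a simple quantile-based control limit. The data and tuning parameters are assumptions for illustration.

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

# Hypothetical monthly data: a few stress indicators and a reference FSI series.
rng = np.random.default_rng(2)
n = 234                                    # e.g. Jan 1992 - Jun 2011
X = rng.normal(size=(n, 5))                # indicators pre-selected (e.g. by rough sets)
fsi = X @ np.array([0.6, 0.3, -0.2, 0.4, 0.1]) + 0.3 * rng.normal(size=n)

# Stage 1: support vector regression approximates the stress index from the indicators.
model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=0.05))
model.fit(X, fsi)
fsi_svr = model.predict(X)

# Stage 2: a simple control-chart rule flags "stressed" months.  A high empirical
# quantile is used as the control limit here; the paper instead derives the
# threshold from extreme value theory.
upper_limit = np.quantile(fsi_svr, 0.95)
stressed_months = np.flatnonzero(fsi_svr > upper_limit)
```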
5.
The problem of testing equality of two multivariate normal covariance matrices is considered. Assuming that the incomplete data are of monotone pattern, a quantity similar to the Likelihood Ratio Test Statistic is proposed. A satisfactory approximation to the distribution of the quantity is derived. Hypothesis testing based on the approximate distribution is outlined. The merits of the test are investigated using Monte Carlo simulation. Monte Carlo studies indicate that the test is very satisfactory even for moderately small samples. The proposed methods are illustrated using an example.
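The paper's statistic is a likelihood-ratio-type quantity adapted to monotone incomplete data. For context only, the sketch below shows the classical complete-data analogue (Box's M test with its chi-square approximation); it is not the proposed test, and the simulated data are hypothetical.

```python
import numpy as np
from scipy.stats import chi2

def box_m_test(samples):
    """Classical (complete-data) Box's M / likelihood-ratio-type test for
    equality of covariance matrices across groups.

    samples : list of (n_i, p) arrays.
    Returns the corrected statistic, degrees of freedom and p-value.
    """
    k = len(samples)
    p = samples[0].shape[1]
    nu = np.array([s.shape[0] - 1 for s in samples], dtype=float)
    covs = [np.cov(s, rowvar=False) for s in samples]
    S_pooled = sum(v * S for v, S in zip(nu, covs)) / nu.sum()

    # Likelihood-ratio-type quantity on the log scale.
    M = nu.sum() * np.log(np.linalg.det(S_pooled)) \
        - sum(v * np.log(np.linalg.det(S)) for v, S in zip(nu, covs))

    # Box's chi-square correction factor.
    c = (np.sum(1.0 / nu) - 1.0 / nu.sum()) * (2 * p**2 + 3 * p - 1) \
        / (6.0 * (p + 1) * (k - 1))
    stat = (1.0 - c) * M
    df = p * (p + 1) * (k - 1) / 2.0
    return stat, df, chi2.sf(stat, df)

# Hypothetical example with two trivariate normal samples.
rng = np.random.default_rng(5)
x1 = rng.multivariate_normal(np.zeros(3), np.eye(3), size=80)
x2 = rng.multivariate_normal(np.zeros(3), 1.5 * np.eye(3), size=60)
print(box_m_test([x1, x2]))
```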
6.
In this paper, we consider the deterministic trend model where the error process is allowed to be weakly or strongly correlated and subject to non‐stationary volatility. Extant estimators of the trend coefficient are analysed. We find that under heteroskedasticity, the Cochrane–Orcutt‐type estimator (with some initial condition) could be less efficient than Ordinary Least Squares (OLS) when the process is highly persistent, whereas it is asymptotically equivalent to OLS when the process is less persistent. An efficient non‐parametrically weighted Cochrane–Orcutt‐type estimator is then proposed. The efficiency is uniform over weak or strong serial correlation and non‐stationary volatility of unknown form. The feasible estimator relies on non‐parametric estimation of the volatility function, and the asymptotic theory is provided. We use the data‐dependent smoothing bandwidth that can automatically adjust for the strength of non‐stationarity in volatilities. The implementation does not require pretesting persistence of the process or specification of non‐stationary volatility. Finite‐sample evaluation via simulations and an empirical application demonstrates the good performance of proposed estimators.
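For orientation, the sketch below contrasts plain OLS with an iterated Cochrane–Orcutt-type estimator of a linear trend under AR(1) errors with changing volatility. It omits the paper's non-parametric volatility weighting and its treatment of the initial condition; the simulated process and tuning choices are illustrative assumptions.

```python
import numpy as np

def trend_ols(y):
    """OLS estimates of (intercept, slope) in y_t = a + b*t + u_t."""
    t = np.arange(len(y), dtype=float)
    X = np.column_stack([np.ones_like(t), t])
    return np.linalg.lstsq(X, y, rcond=None)[0]

def trend_cochrane_orcutt(y, n_iter=5):
    """Cochrane-Orcutt-type estimator: quasi-difference with an AR(1) rho
    estimated from the residuals, then re-estimate the trend."""
    t = np.arange(len(y), dtype=float)
    X = np.column_stack([np.ones_like(t), t])
    beta = np.linalg.lstsq(X, y, rcond=None)[0]
    for _ in range(n_iter):
        u = y - X @ beta
        rho = (u[:-1] @ u[1:]) / (u[:-1] @ u[:-1])
        # Quasi-differenced regression; the first observation is dropped here,
        # and the treatment of that initial condition matters for efficiency.
        y_star = y[1:] - rho * y[:-1]
        X_star = X[1:] - rho * X[:-1]
        beta = np.linalg.lstsq(X_star, y_star, rcond=None)[0]
    return beta, rho

# Hypothetical data: linear trend with AR(1) errors and non-stationary volatility.
rng = np.random.default_rng(3)
n = 400
e = rng.normal(size=n) * np.linspace(0.5, 2.0, n)
u = np.zeros(n)
for i in range(1, n):
    u[i] = 0.7 * u[i - 1] + e[i]
y = 1.0 + 0.05 * np.arange(n) + u
print(trend_ols(y)[1], trend_cochrane_orcutt(y)[0][1])
```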
7.
A conformance proportion is an important and useful index to assess industrial quality improvement. Statistical confidence limits for a conformance proportion are usually required not only to perform statistical significance tests, but also to provide useful information for determining practical significance. In this article, we propose approaches for constructing statistical confidence limits for a conformance proportion of multiple quality characteristics. Under the assumption that the variables of interest are distributed with a multivariate normal distribution, we develop an approach based on the concept of a fiducial generalized pivotal quantity (FGPQ). Without any distribution assumption on the variables, we apply some confidence interval construction methods for the conformance proportion by treating it as the probability of a success in a binomial distribution. The performance of the proposed methods is evaluated through detailed simulation studies. The results reveal that the simulated coverage probability (cp) for the FGPQ-based method is generally larger than the claimed value. On the other hand, one of the binomial distribution-based methods, that is, the standard method suggested in classical textbooks, appears to have smaller simulated cps than the nominal level. Two alternatives to the standard method are found to maintain their simulated cps sufficiently close to the claimed level, and hence their performances are judged to be satisfactory. In addition, three examples are given to illustrate the application of the proposed methods.
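The "standard method suggested in classical textbooks" corresponds to the Wald interval for a binomial proportion. As a minimal sketch, the code below contrasts it with the Wilson score interval, shown here as one common alternative with better coverage (the paper's specific alternatives are not named in the abstract). The example counts are hypothetical.

```python
import numpy as np
from scipy.stats import norm

def wald_interval(x, n, conf=0.95):
    """'Standard' textbook (Wald) interval for a binomial proportion."""
    z = norm.ppf(0.5 + conf / 2)
    p = x / n
    half = z * np.sqrt(p * (1 - p) / n)
    return p - half, p + half

def wilson_interval(x, n, conf=0.95):
    """Wilson score interval, a common alternative with better coverage."""
    z = norm.ppf(0.5 + conf / 2)
    p = x / n
    centre = (p + z**2 / (2 * n)) / (1 + z**2 / n)
    half = z * np.sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / (1 + z**2 / n)
    return centre - half, centre + half

# e.g. 468 of 500 sampled items conform to all quality specifications.
print(wald_interval(468, 500))
print(wilson_interval(468, 500))
```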
8.
Researchers have been developing various extensions and modified forms of the Weibull distribution to enhance its capability for modeling and fitting different data sets. In this note, we investigate the potential usefulness of a new modification of the standard Weibull distribution, called the odd Weibull distribution, in studies of income and economic inequality. Some mathematical and statistical properties of this model are presented. We obtain explicit expressions for the first incomplete moment, quantile function, Lorenz and Zenga curves and related inequality indices. In addition to the well-known stochastic order based on the Lorenz curve, the stochastic order based on the Zenga curve is considered. Since this generalized Weibull distribution appears suitable for modeling wealth, financial, actuarial and especially income distributions, these findings are fundamental to understanding how parameter values relate to inequality. Estimation of the parameters by maximum likelihood and the method of moments is also discussed. Finally, the distribution is fitted to United States and Austrian income data sets and is found to fit remarkably well in comparison with other widely used income models.
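The paper derives closed-form Lorenz and Zenga curves for the odd Weibull model. As a minimal empirical counterpart only, the sketch below computes a Lorenz curve and Gini index from simulated income data drawn, for illustration, from a standard Weibull distribution (not the odd Weibull, and not the paper's data).

```python
import numpy as np

def lorenz_curve(income):
    """Empirical Lorenz curve: cumulative population share vs. cumulative income share."""
    x = np.sort(np.asarray(income, float))
    cum_income = np.cumsum(x) / x.sum()
    pop_share = np.arange(1, len(x) + 1) / len(x)
    return np.insert(pop_share, 0, 0.0), np.insert(cum_income, 0, 0.0)

def gini_index(income):
    """Gini coefficient: twice the area between the Lorenz curve and the diagonal."""
    p, L = lorenz_curve(income)
    return 1.0 - 2.0 * np.trapz(L, p)

# Hypothetical incomes drawn from a (standard) Weibull distribution for illustration.
rng = np.random.default_rng(4)
income = rng.weibull(1.5, size=10_000) * 40_000
print(round(gini_index(income), 3))
```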
9.
10.
Strong orthogonal arrays (SOAs) were recently introduced and studied as a class of space‐filling designs for computer experiments. An important problem that has not been addressed in the literature is that of design selection for such arrays. In this article, we conduct a systematic investigation into this problem, and we focus on the most useful SOA(n, m, 4, 2+)s and SOA(n, m, 4, 2)s. This article first addresses the problem of design selection for SOAs of strength 2+ by examining their three‐dimensional projections. Both theoretical and computational results are presented. When SOAs of strength 2+ do not exist, we formulate a general framework for the selection of SOAs of strength 2 by looking at their two‐dimensional projections. The approach is fruitful, as it is applicable when SOAs of strength 2+ do not exist and it gives rise to them when they do. The Canadian Journal of Statistics 47: 302–314; 2019 © 2019 Statistical Society of Canada