Full-text access type
Paid full text | 8,783 articles |
Free | 228 articles |
Free (domestic) | 69 articles |
Subject classification
Management | 377 articles |
Ethnology | 28 articles |
Talent studies | 4 articles |
Demography | 66 articles |
Book series and collected works | 619 articles |
Theory and methodology | 140 articles |
General | 4,979 articles |
Sociology | 151 articles |
Statistics | 2,716 articles |
Publication year
2024 | 16 articles |
2023 | 45 articles |
2022 | 52 articles |
2021 | 65 articles |
2020 | 105 articles |
2019 | 188 articles |
2018 | 190 articles |
2017 | 306 articles |
2016 | 187 articles |
2015 | 233 articles |
2014 | 377 articles |
2013 | 1,047 articles |
2012 | 655 articles |
2011 | 430 articles |
2010 | 415 articles |
2009 | 417 articles |
2008 | 446 articles |
2007 | 508 articles |
2006 | 533 articles |
2005 | 472 articles |
2004 | 407 articles |
2003 | 371 articles |
2002 | 316 articles |
2001 | 313 articles |
2000 | 214 articles |
1999 | 119 articles |
1998 | 112 articles |
1997 | 79 articles |
1996 | 65 articles |
1995 | 72 articles |
1994 | 66 articles |
1993 | 55 articles |
1992 | 49 articles |
1991 | 31 articles |
1990 | 29 articles |
1989 | 37 articles |
1988 | 20 articles |
1987 | 12 articles |
1986 | 6 articles |
1985 | 2 articles |
1984 | 2 articles |
1983 | 4 articles |
1981 | 4 articles |
1980 | 5 articles |
1979 | 1 article |
1978 | 2 articles |
9,080 results found (search time: 328 ms)
1.
With the development of information technology, the digital economy has become a "new engine" of economic growth. However, for lack of an authoritative industry statistical classification standard, scholars have long faced the awkward situation of "digital economy research lacking digital evidence." Based on the classification standard in the Statistical Classification of the Digital Economy and Its Core Industries (2021) issued and implemented by the National Bureau of Statistics, this paper reorganizes data from provincial statistical yearbooks, constructs a digital economy development index using the entropy weight method, measures the digital economy development level of 30 Chinese provinces, and analyzes interprovincial differences and their spatiotemporal characteristics. The study finds that China's digital economy industries developed rapidly from 2009 to 2019, with considerable progress in every sub-industry; by comparison, the digital-factor-driven sector grew slightly more slowly than the other three sub-industries. Digital economy development also shows marked regional imbalance: the eastern and central regions clearly outperform the western region, the south outperforms the north, and the imbalance shows a continuing tendency to widen.
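The entropy weight method mentioned in the abstract can be sketched in a few lines. The data here are hypothetical, standing in for the provincial indicator matrix the authors build from statistical yearbooks:

```python
import numpy as np

def entropy_weights(X):
    """Entropy weight method: rows are provinces, columns are indicators.
    Indicators whose values are more dispersed across provinces carry
    more information and therefore receive larger weights."""
    X = np.asarray(X, dtype=float)
    # Min-max normalize each indicator to [0, 1]
    Xn = (X - X.min(axis=0)) / (X.max(axis=0) - X.min(axis=0))
    # Share of each province in each indicator (epsilon avoids log(0))
    P = (Xn + 1e-12) / (Xn + 1e-12).sum(axis=0)
    n = X.shape[0]
    # Shannon entropy per indicator, scaled to [0, 1] by log(n)
    e = -(P * np.log(P)).sum(axis=0) / np.log(n)
    d = 1.0 - e            # degree of divergence
    return d / d.sum()     # weights sum to 1

def development_index(X):
    """Composite index per province: weighted sum of normalized indicators."""
    X = np.asarray(X, dtype=float)
    Xn = (X - X.min(axis=0)) / (X.max(axis=0) - X.min(axis=0))
    return Xn @ entropy_weights(X)
```

This is only a generic sketch of the technique; the paper's actual indicator system and any sub-industry aggregation are not reproduced here.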
2.
Characterizations via the Rényi entropy of m-generalized order statistics are considered, along with examples and related stochastic orderings. Previous results for common order statistics are included.
3.
A conformance proportion is an important and useful index to assess industrial quality improvement. Statistical confidence limits for a conformance proportion are usually required not only to perform statistical significance tests, but also to provide useful information for determining practical significance. In this article, we propose approaches for constructing statistical confidence limits for a conformance proportion of multiple quality characteristics. Under the assumption that the variables of interest are distributed with a multivariate normal distribution, we develop an approach based on the concept of a fiducial generalized pivotal quantity (FGPQ). Without any distribution assumption on the variables, we apply some confidence interval construction methods for the conformance proportion by treating it as the probability of a success in a binomial distribution. The performance of the proposed methods is evaluated through detailed simulation studies. The results reveal that the simulated coverage probability (cp) for the FGPQ-based method is generally larger than the claimed value. On the other hand, one of the binomial distribution-based methods, that is, the standard method suggested in classical textbooks, appears to have smaller simulated cps than the nominal level. Two alternatives to the standard method are found to maintain their simulated cps sufficiently close to the claimed level, and hence their performances are judged to be satisfactory. In addition, three examples are given to illustrate the application of the proposed methods.
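The abstract does not name the two alternatives to the standard (Wald) binomial interval that it finds satisfactory; the Wilson score interval below is one well-known such alternative, shown purely for illustration:

```python
import math
from statistics import NormalDist

def wilson_interval(x, n, alpha=0.05):
    """Wilson score interval for a binomial proportion x/n.
    Its coverage tends to stay closer to the nominal level than the
    Wald interval suggested as the standard method in textbooks."""
    z = NormalDist().inv_cdf(1 - alpha / 2)   # normal quantile
    p = x / n
    denom = 1 + z * z / n
    center = (p + z * z / (2 * n)) / denom    # shrinks p toward 1/2
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z * z / (4 * n * n))
    return center - half, center + half
```

Unlike the Wald interval, this remains well-behaved at the boundary (x = 0 or x = n), which matters for high conformance proportions.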
4.
Modeling spatial overdispersion requires point process models with finite-dimensional distributions that are overdispersed relative to the Poisson distribution. Fitting such models usually relies heavily on the properties of stationarity, ergodicity, and orderliness. Although processes based on negative binomial finite-dimensional distributions have been widely considered, they typically fail to satisfy all three required properties simultaneously; indeed, Diggle and Milne conjectured that no negative binomial model can satisfy all three. In light of this, we change perspective and construct a new process based on a different overdispersed count model, namely the generalized Waring (GW) distribution. While comparably tractable and flexible relative to negative binomial processes, the GW process is shown to possess all required properties and additionally to span the negative binomial and Poisson processes as limiting cases. In this sense, the GW process provides an approximate resolution to the conundrum highlighted by Diggle and Milne.
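The negative binomial limiting case mentioned in the abstract can be illustrated by the classical gamma-mixed Poisson construction of overdispersed counts. This is a generic sketch, not the GW process itself:

```python
import numpy as np

def gamma_mixed_poisson(size, r, p, rng=None):
    """Overdispersed counts via a gamma-mixed Poisson: if the Poisson
    rate is drawn from Gamma(r, scale=(1-p)/p), the resulting counts
    are negative binomial with mean r(1-p)/p and variance r(1-p)/p**2,
    so the variance strictly exceeds the mean (unlike pure Poisson)."""
    rng = np.random.default_rng(rng)
    lam = rng.gamma(shape=r, scale=(1.0 - p) / p, size=size)
    return rng.poisson(lam)
```

The GW distribution arises from a further (beta) mixing layer on top of this hierarchy; the sketch only shows the first mixing step, which already produces overdispersion.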
5.
6.
This paper focuses on inference for suitable, generally nonlinear, functions in stochastic volatility models. In this context, a moving block bootstrap (MBB) approach is suggested and discussed for estimating the variance of the proposed estimators. Under mild assumptions, we show that the MBB procedure is weakly consistent. Moreover, a methodology for choosing the optimal block length in the MBB is proposed. Examples and simulations on the model illustrate the performance of the proposed procedure.
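A minimal sketch of a moving block bootstrap for a stationary series is given below. It is generic: the paper's specific estimators and its optimal block-length selection rule are not reproduced here.

```python
import numpy as np

def moving_block_bootstrap(x, block_len, n_boot=1000, stat=np.mean, rng=None):
    """Moving block bootstrap: resample overlapping blocks of length
    block_len (preserving short-range dependence within blocks), glue
    them together to the original length, and recompute `stat`. The
    spread of the replicates estimates the sampling variance of the
    statistic under dependence."""
    rng = np.random.default_rng(rng)
    x = np.asarray(x, dtype=float)
    n = len(x)
    n_blocks = -(-n // block_len)      # ceil(n / block_len)
    max_start = n - block_len + 1      # number of admissible block starts
    reps = np.empty(n_boot)
    for b in range(n_boot):
        starts = rng.integers(0, max_start, size=n_blocks)
        sample = np.concatenate([x[s:s + block_len] for s in starts])[:n]
        reps[b] = stat(sample)
    return reps
```

The variance estimate is then simply `reps.var(ddof=1)`; choosing `block_len` well is exactly the problem the paper's selection methodology addresses.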
7.
A. M. Abd El-Raheem, Journal of Statistical Computation and Simulation, 2019, 89(16): 3075-3104
The generalized half-normal (GHN) distribution and progressive type-II censoring are considered in this article for statistical inference in constant-stress accelerated life testing. The EM algorithm is used to compute the maximum likelihood estimates. The Fisher information matrix is obtained via the missing-information principle and used to construct asymptotic confidence intervals. Interval estimation is further discussed through bootstrap intervals. The Tierney and Kadane method, an importance sampling procedure, and the Metropolis-Hastings algorithm are used to compute Bayesian estimates. Furthermore, predictive estimates for censored data and the corresponding prediction intervals are obtained. Three optimality criteria are considered for finding the optimal stress level. A real data set illustrates the usefulness of the GHN distribution as an alternative lifetime model to well-known distributions. Finally, a simulation study is provided with discussion.
8.
Random effects regression mixture models are a way to classify longitudinal data (or trajectories) having possibly varying lengths. The mixture structure of the traditional random effects regression mixture model arises through the distribution of the random regression coefficients, which is assumed to be a mixture of multivariate normals. An extension of this standard model is presented that accounts for various levels of heterogeneity among the trajectories, depending on their assumed error structure. A standard likelihood ratio test is presented for testing this error structure assumption. Full details of an expectation-conditional maximization algorithm for maximum likelihood estimation are also presented. This model is used to analyze data from an infant habituation experiment, where it is desirable to assess whether infants comprise different populations in terms of their habituation time.
9.
Simulation results are reported on methods that allow both within group and between group heteroscedasticity when testing the hypothesis that independent groups have identical regression parameters. The methods are based on a combination of extant techniques, but their finite-sample properties have not been studied. Included are results on the impact of removing all leverage points or just bad leverage points. The method used to identify leverage points can be important and can improve control over the Type I error probability. Results are illustrated using data from the Well Elderly II study.
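For context, the classical hat-matrix diagnostic for leverage points is sketched below. The study itself likely uses a robust detection method (robust methods are what distinguish "bad" leverage points); this is only the textbook rule of thumb:

```python
import numpy as np

def leverage_points(x, threshold=None):
    """Flag high-leverage observations in a regression design via the
    hat matrix H = X (X'X)^{-1} X'. Classical rule of thumb: flag
    observation i when h_ii > 2p/n, where p is the number of columns
    of X (intercept included) and n the number of observations."""
    X = np.column_stack([np.ones(len(x)), np.asarray(x, dtype=float)])
    H = X @ np.linalg.solve(X.T @ X, X.T)
    h = np.diag(H)                     # leverage of each observation
    n, p = X.shape
    if threshold is None:
        threshold = 2.0 * p / n
    return h, h > threshold
```

Note that `h.sum()` always equals `p` (the trace of a projection matrix equals its rank), which gives a quick sanity check on the computation.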
10.
Luis Villar-Fidalgo, María del Mar Espinosa Escudero, Manuel Domínguez Somonte, Production Planning & Control, 2019, 30(8): 624-638
Not only ETO (Engineering to Order) firms but also serial production industries need to manage projects and their schedules: plant commissioning, shutdowns, new product introductions, and similar undertakings call for adequate planning and resource allocation techniques that yield a schedule useful for decision making during execution. Today's complex reality, with evolving technologies and fierce pressure to reach the market as soon as possible, pushes project managers toward techniques more advanced than waterfall planning, such as agile or lean, and requires them to take a holistic view and manage concurrent tasks in complex projects. This paper makes two contributions. The first is a proposal to control specific parallel groups of waterfall activities under uncertain environments, which can lead to iterations and rework, as a single concurrent Activity Managed by Kanban Methods (AMKM); this activity can then be embedded into traditional scheduling approaches such as CPM-PERT. The second is the feasibility of applying the approach in industrial environments, owing to the affordability of simulation software; two use cases are shown as evidence. It is not a disruptive proposal but a kaizen action based on very mature technologies. Finally, some improvements to project management software arising from this 'kaizen' proposal are suggested.