1.
The partially linear model is an important class of semiparametric regression models: because it contains both a parametric and a nonparametric component, it is more flexible and has greater explanatory power than the ordinary linear model. This article studies statistical inference for fixed-effects partially linear panel data models with locally stationary covariates. We first propose a two-stage estimation method to obtain estimates of the unknown parameters and the nonparametric function, and establish the asymptotic properties of the estimators. We then construct uniform confidence bands for the nonparametric function using an invariance principle. Finally, simulation studies and a real-data analysis confirm the effectiveness of the proposed method.
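A minimal sketch of the two-stage (Robinson-type) idea behind such estimators, with simulated data and a Gaussian kernel smoother; the data, bandwidth, and function names are illustrative, not from the article, which additionally handles fixed effects and locally stationary covariates:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500
z = rng.uniform(0, 1, n)
x = rng.normal(size=n) + z          # parametric covariate, correlated with z
g = np.sin(2 * np.pi * z)           # unknown smooth function
beta_true = 2.0
y = beta_true * x + g + rng.normal(scale=0.3, size=n)

def nw_smooth(z, v, h=0.05):
    """Nadaraya-Watson kernel regression of v on z (Gaussian kernel)."""
    w = np.exp(-0.5 * ((z[:, None] - z[None, :]) / h) ** 2)
    return (w @ v) / w.sum(axis=1)

# Stage 1: remove the nonparametric component by smoothing y and x on z.
y_res = y - nw_smooth(z, y)
x_res = x - nw_smooth(z, x)

# Stage 2: OLS on the partial residuals recovers the parametric coefficient.
beta_hat = (x_res @ y_res) / (x_res @ x_res)

# Plug back in to estimate the nonparametric function g(z).
g_hat = nw_smooth(z, y - beta_hat * x)
```

In practice the estimated `g_hat` would then be paired with a uniform confidence band, which is the step the article derives via an invariance principle.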
2.
This article considers statistical inference for the heteroscedastic partially linear varying coefficient models. We construct an efficient estimator for the parametric component by applying the weighted profile least-squares approach, and show that it is semiparametrically efficient in the sense that the inverse of the asymptotic variance of the estimator reaches the semiparametric efficiency bound. Simulation studies are conducted to illustrate the performance of the proposed method.
3.
The cost and time of pharmaceutical drug development continue to grow at rates that many say are unsustainable. These trends have enormous impact on what treatments get to patients, when they get them and how they are used. The statistical framework for supporting decisions in regulated clinical development of new medicines has followed a traditional path of frequentist methodology. Trials using hypothesis tests of “no treatment effect” are done routinely, and the p-value < 0.05 is often the determinant of what constitutes a “successful” trial. Many drugs fail in clinical development, adding to the cost of new medicines, and some evidence points blame at the deficiencies of the frequentist paradigm. An unknown number of effective medicines may have been abandoned because trials were declared “unsuccessful” due to a p-value exceeding 0.05. Recently, the Bayesian paradigm has shown utility in the clinical drug development process for its probability-based inference. We argue for a Bayesian approach that employs data from other trials as a “prior” for Phase 3 trials so that synthesized evidence across trials can be utilized to compute probability statements that are valuable for understanding the magnitude of treatment effect. Such a Bayesian paradigm provides a promising framework for improving statistical inference and regulatory decision making.
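A minimal sketch of the kind of probability statement the authors advocate, using a conjugate-normal update in which evidence synthesized from earlier trials serves as the prior for a Phase 3 estimate; all numbers and function names are hypothetical, not from the article:

```python
import numpy as np
from math import erf, sqrt

def posterior_effect(prior_mean, prior_se, est, se):
    """Conjugate-normal update: combine a prior (e.g. synthesized from
    earlier trials) with a new trial's estimated treatment effect."""
    w_prior, w_data = 1 / prior_se**2, 1 / se**2
    post_var = 1 / (w_prior + w_data)
    post_mean = post_var * (w_prior * prior_mean + w_data * est)
    return post_mean, sqrt(post_var)

def prob_positive(mean, sd):
    """P(treatment effect > 0) under a normal posterior."""
    return 0.5 * (1 + erf(mean / (sd * sqrt(2))))

# Hypothetical numbers: earlier trials suggest an effect of 1.5 (SE 1.0);
# the Phase 3 trial observes 1.2 (SE 0.6).
m, s = posterior_effect(1.5, 1.0, 1.2, 0.6)
p_pos = prob_positive(m, s)
```

The output is a direct probability that the effect is positive, rather than a binary pass/fail verdict at p < 0.05.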
4.
Multinomial logit (also termed multi-logit) models permit the analysis of the statistical relation between a categorical response variable and a set of explicative variables (called covariates or regressors). Although multinomial logit is widely used in both the social and economic sciences, the interpretation of regression coefficients may be tricky, as the effect of covariates on the probability distribution of the response variable is nonconstant and difficult to quantify. The ternary plots illustrated in this article aim at facilitating the interpretation of regression coefficients and permit the effect of covariates (either singularly or jointly considered) on the probability distribution of the dependent variable to be quantified. Ternary plots can be drawn both for ordered and for unordered categorical dependent variables, when the number of possible outcomes equals three (trinomial response variable); these plots not only represent the covariate effects over the whole parameter space of the dependent variable but also allow the covariate effects of any given individual profile to be compared. The method is illustrated and discussed through analysis of a dataset concerning the transition of master’s graduates of the University of Trento (Italy) from university to employment.
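A minimal sketch of why the covariate effect is nonconstant in a trinomial logit, with made-up coefficients (baseline-category parameterization); each probability vector sums to one, so it corresponds to a point in the ternary plot the article describes:

```python
import numpy as np

def trinomial_probs(x, B):
    """Probabilities of a 3-category multinomial logit with the first
    category as baseline. B holds (intercept, slope) for categories 2, 3."""
    eta = np.array([0.0] + [b0 + b1 * x for b0, b1 in B])
    e = np.exp(eta - eta.max())          # stabilized softmax
    return e / e.sum()

# Hypothetical coefficients for one covariate.
B = [(-0.5, 1.0), (0.2, -0.8)]
p_low = trinomial_probs(0.0, B)
p_high = trinomial_probs(1.0, B)

# The same unit change in x shifts the three probabilities by different
# amounts, and the shifts themselves depend on where x starts -- which is
# exactly what makes raw coefficients hard to interpret.
```

Plotting the curve traced by `trinomial_probs(x, B)` as `x` varies gives the path in the probability simplex that a ternary plot visualizes.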
6.
Modern analytical models for anti-monopoly laws are a core element of the application of those laws. Since the Anti-Monopoly Law of the People’s Republic of China was promulgated in 2008, law enforcement and judicial authorities have applied different analytical models, leading to divergent legal and regulatory outcomes as similar cases receive different verdicts. To select a suitable analytical model for China’s Anti-Monopoly Law, we need to consider the possible contribution of both economic analysis and legal formalism and to learn from the mature systems and experience of foreign countries. It is also necessary to take into account such binding constraints as the current composition of China’s anti-monopoly legal system, the ability of implementing agencies and the supply of economic analysis, in order to ensure complementarity between the analytical model chosen and the complexity of economic analysis and between the professionalism of implementing agencies and the cost of compliance for participants in economic activities. In terms of institutional design, the models should provide a considered explanation of the legislative aims of the law’s provisions. It is necessary, therefore, to establish a processing model of behavioral classification that is based on China’s national conditions, applies analytical models using normative comprehensive analysis, makes use of the distribution rule of burden of proof, improves supporting systems related to analytical models and enhances the ability of public authorities to implement the law.
7.
The generalized half-normal (GHN) distribution and progressive type-II censoring are considered in this article for studying some statistical inferences of constant-stress accelerated life testing. The EM algorithm is used to calculate the maximum likelihood estimates. The Fisher information matrix is obtained from the missing-information principle and is used to construct asymptotic confidence intervals. Further, interval estimation is discussed through bootstrap intervals. The Tierney and Kadane method, importance sampling procedure and Metropolis-Hastings algorithm are utilized to compute Bayesian estimates. Furthermore, predictive estimates for censored data and the related prediction intervals are obtained. We consider three optimality criteria to determine the optimal stress level. A real data set is used to illustrate the importance of the GHN distribution as an alternative lifetime model to well-known distributions. Finally, a simulation study is provided with discussion.
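A minimal sketch of one of the interval methods mentioned above, the nonparametric percentile bootstrap; the lifetime data here are simulated (Weibull, not the article's GHN model) and the statistic is just the sample mean, so everything below is illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)
# Hypothetical lifetimes; the article models lifetimes with the
# generalized half-normal distribution under progressive censoring.
data = rng.weibull(1.5, size=200) * 10

def percentile_ci(sample, stat, n_boot=2000, alpha=0.05, rng=rng):
    """Nonparametric percentile bootstrap interval for a statistic:
    resample with replacement, recompute the statistic, take quantiles."""
    reps = np.array([
        stat(rng.choice(sample, size=sample.size, replace=True))
        for _ in range(n_boot)
    ])
    return np.quantile(reps, [alpha / 2, 1 - alpha / 2])

lo, hi = percentile_ci(data, np.mean)
```

The same resampling scheme extends to maximum likelihood estimates of distribution parameters by replacing `np.mean` with the fitting routine.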
8.
This paper uses a deep gated recurrent unit (GRU) neural network to investigate nonlinear cointegration relationships in three monetary models of exchange rates (the flexible-price, forward-looking, and real interest rate differential models). Within deep learning, the GRU offers advantages such as smart memory, autonomous learning, and strong approximation capability. We apply the technique to nonlinear Johansen cointegration tests on data from six countries with typical floating exchange rate regimes. The results show that a nonlinear cointegration relationship exists between exchange rates and macroeconomic fundamentals, demonstrating the validity of the monetary models under nonlinear conditions and the advantage of advanced deep learning tools in testing economic theory.
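A minimal sketch of the GRU cell itself, the building block whose gating mechanism provides the "smart memory" the abstract refers to; the weights and sequence here are random toy values, not the paper's fitted model:

```python
import numpy as np

def gru_step(x, h, W, U, b):
    """One GRU step: the update gate z decides how much old memory to
    keep, the reset gate r decides how much of it feeds the candidate."""
    def sig(a):
        return 1 / (1 + np.exp(-a))
    z = sig(W["z"] @ x + U["z"] @ h + b["z"])        # update gate
    r = sig(W["r"] @ x + U["r"] @ h + b["r"])        # reset gate
    h_tilde = np.tanh(W["h"] @ x + U["h"] @ (r * h) + b["h"])
    return (1 - z) * h + z * h_tilde                 # blended memory

rng = np.random.default_rng(0)
d_in, d_h = 3, 4                                     # toy dimensions
W = {k: rng.normal(scale=0.5, size=(d_h, d_in)) for k in "zrh"}
U = {k: rng.normal(scale=0.5, size=(d_h, d_h)) for k in "zrh"}
b = {k: np.zeros(d_h) for k in "zrh"}

h = np.zeros(d_h)
for x in rng.normal(size=(5, d_in)):                 # a length-5 sequence
    h = gru_step(x, h, W, U, b)
```

Because the new state is a convex combination of the old state and a bounded candidate, the hidden state stays in (-1, 1), which is part of what makes training stable on long macroeconomic series.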
9.
In this article we extend the minimax estimation approach for general linear models to semiparametric linear models in which the parameters are simultaneously constrained by an ellipsoid and linear restrictions. Combining sample information with the prior constraints, the minimax estimator is obtained and compared with the partial least-squares estimator by theoretical and simulation methods.
10.
Keisuke Himoto, Risk Analysis, 2020, 40(6): 1124–1138
Post-earthquake fires are high-consequence events with extensive damage potential. They are also low-frequency events, so their nature remains underinvestigated. One difficulty in modeling post-earthquake ignition probabilities is reducing the model uncertainty attributed to the scarce source data. The data scarcity problem has been resolved by pooling the data indiscriminately collected from multiple earthquakes. However, this approach neglects the inter-earthquake heterogeneity in the regional and seasonal characteristics, which is indispensable for risk assessment of future post-earthquake fires. Thus, the present study analyzes the post-earthquake ignition probabilities of five major earthquakes in Japan from 1995 to 2016 (1995 Kobe, 2003 Tokachi-oki, 2004 Niigata–Chuetsu, 2011 Tohoku, and 2016 Kumamoto earthquakes) by a hierarchical Bayesian approach. As the ignition causes of earthquakes share a certain commonality, common prior distributions were assigned to the parameters, and samples were drawn from the target posterior distribution of the parameters by a Markov chain Monte Carlo simulation. The results of the hierarchical model were comparatively analyzed with those of pooled and independent models. Although the pooled and hierarchical models were both robust in comparison with the independent model, the pooled model underestimated the ignition probabilities of earthquakes with few data samples. Among the tested models, the hierarchical model was least affected by the source-to-source variability in the data. Accounting for the regional and seasonal heterogeneity of post-earthquake ignitions has long been desired in the modeling of post-earthquake ignition probabilities but has not been properly considered in existing approaches. The presented hierarchical Bayesian approach provides a systematic and rational framework to effectively cope with this problem, which consequently enhances the statistical reliability and stability of estimating post-earthquake ignition probabilities.
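A minimal sketch of the independent-versus-pooled-versus-partial-pooling contrast the study draws, using a crude empirical-Bayes shrinkage estimator as a stand-in for the full hierarchical MCMC model; the ignition counts and prior strength below are invented for illustration:

```python
import numpy as np

# Hypothetical per-earthquake data: (ignition count, exposed buildings).
events = [(60, 50_000), (3, 4_000), (7, 9_000), (40, 80_000), (2, 1_500)]
k = np.array([e[0] for e in events], dtype=float)
n = np.array([e[1] for e in events], dtype=float)

# Independent model: each earthquake estimated on its own (noisy when
# data are scarce). Pooled model: one rate for all (ignores heterogeneity).
p_indep = k / n
p_pool = k.sum() / n.sum()

# Partial pooling: shrink each event toward the pooled rate via a common
# Beta prior whose pseudo-sample size (an assumed tuning value) controls
# how strongly small-sample events borrow strength from the others.
prior_strength = 5_000
alpha0 = p_pool * prior_strength
beta0 = (1 - p_pool) * prior_strength
p_shrunk = (k + alpha0) / (n + alpha0 + beta0)
```

Each shrunken estimate is an exact convex combination of that event's own rate and the pooled rate, so data-rich earthquakes keep their individual character while data-poor ones are stabilized, which is the qualitative behavior the study attributes to its hierarchical model.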