102.
韩勇  干胜道  张伊 《统计研究》2013,30(5):71-75
This paper studies the investment behavior of institutional investors and their participation in the governance of listed companies from the perspective of cash dividend policy. It first reviews the relevant literature and, drawing on signaling theory, analyzes the possible relationships between institutional investors and cash dividend policy. It then summarizes the basic characteristics of institutional investors and the general state and features of institutional investors and listed companies' cash dividend policies in China. Finally, using institutional shareholding data and listed companies' cash dividend data for 2007-2010, the paper performs descriptive and regression analyses to empirically test the relationship between institutional investor heterogeneity and listed companies' dividend policies.
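The regression step described in this abstract can be illustrated with a minimal sketch. Everything below is a hypothetical placeholder rather than the paper's actual data or specification: the CSV file, the ownership measure, and the control variables are invented for illustration.

```python
# A hedged sketch of a dividend-policy regression: cash dividends regressed
# on institutional ownership plus firm-level controls. The file name and all
# column names are hypothetical, not the paper's data set.
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("holdings_dividends_2007_2010.csv")  # hypothetical file

X = sm.add_constant(df[["inst_ownership", "firm_size", "leverage", "roa"]])
y = df["cash_dividend_per_share"]

# OLS with heteroskedasticity-robust standard errors.
model = sm.OLS(y, X, missing="drop").fit(cov_type="HC1")
print(model.summary())
```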
103.
Consumption growth matters more than consumption smoothing for rural residents   Total citations: 1; self-citations: 0; citations by others: 1
This paper introduces habit formation into the Lucas (1987) model and runs numerical simulations on per capita consumption data for rural households in five income quintiles. The results show that consumption growth matters more than consumption smoothing, and that the higher the income quintile, the more pronounced this relative importance becomes. Holding the coefficient of relative risk aversion fixed, the ratio of the two welfare costs traces an inverted-U path as habit strength varies; holding habit strength fixed, the ratio declines as relative risk aversion increases. Compared with other income groups, policies that promote consumption growth bring relatively more welfare to high-income households, while policies that smooth consumption fluctuations bring more welfare to low-income households. The government should therefore also attend to the welfare cost that consumption volatility imposes on low-income groups while promoting consumption growth among rural residents.
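The trade-off between the two welfare costs can be sketched numerically. The snippet below is a minimal illustration under plain CRRA utility, omitting the habit-formation term the paper adds to the Lucas (1987) model; all parameter values are illustrative, not the paper's calibration.

```python
# A hedged numeric sketch of Lucas (1987)-style welfare comparisons under
# CRRA utility u(c) = c**(1 - gamma) / (1 - gamma). Habit formation is
# omitted and every parameter value is illustrative.
import numpy as np
from scipy.optimize import brentq

gamma = 2.0    # relative risk aversion (illustrative)
sigma = 0.04   # std. dev. of log consumption around trend (illustrative)
beta = 0.98    # discount factor
g = 0.02       # baseline consumption growth rate
dg = 0.001     # hypothetical increase in the growth rate

# Welfare cost of volatility: the fraction of consumption a household would
# give up to remove all fluctuations, approximately 0.5 * gamma * sigma**2.
cost_volatility = 0.5 * gamma * sigma**2

# Welfare value of faster growth: the proportional consumption supplement
# lam that makes the g-path as good as the (g + dg)-path.
T = 500
t = np.arange(T)

def lifetime_utility(growth, scale=1.0):
    c = scale * (1.0 + growth) ** t
    return np.sum(beta**t * c ** (1 - gamma) / (1 - gamma))

gain_growth = brentq(
    lambda lam: lifetime_utility(g, 1 + lam) - lifetime_utility(g + dg), 0.0, 1.0
)

print(f"welfare cost of volatility ~ {cost_volatility:.4%} of consumption")
print(f"welfare value of +0.1pp growth ~ {gain_growth:.4%} of consumption")
```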
104.
The Tweedie compound Poisson distribution is a subclass of the exponential dispersion family with a power variance function, in which the value of the power index lies in the interval (1,2). It is well known that the Tweedie compound Poisson density function is not analytically tractable, and numerical procedures that allow the density to be evaluated accurately and quickly appeared only fairly recently. Unsurprisingly, there has been little statistical literature devoted to full maximum likelihood inference for Tweedie compound Poisson mixed models; to date, the focus has been on estimation methods in the quasi-likelihood framework. Further, Tweedie compound Poisson mixed models involve an unknown variance function, which has a significant impact on hypothesis tests and predictive uncertainty measures. The estimation of the unknown variance function is thus of independent interest in many applications, but quasi-likelihood-based methods are not well suited to this task. This paper presents several likelihood-based inferential methods for the Tweedie compound Poisson mixed model that enable estimation of the variance function from the data. These include the likelihood approximation method, in which both the integral over the random effects and the compound Poisson density function are evaluated numerically, and the latent variable approach, in which maximum likelihood estimation is carried out via the Monte Carlo EM algorithm, without the need to approximate the density function. In addition, we derive the corresponding Markov chain Monte Carlo algorithm for a Bayesian formulation of the mixed model. We demonstrate the use of the various methods through a numerical example, and conduct an array of simulation studies to evaluate the statistical properties of the proposed estimators.
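The series evaluation of the compound Poisson density mentioned above can be sketched from the standard representation Y = X_1 + ... + X_N, where N is Poisson and the jumps X_i are Gamma distributed. This is a minimal illustration of that expansion, not the paper's algorithm; the fixed truncation at n_terms is a naive simplification.

```python
# A hedged sketch of the Tweedie compound Poisson log-density for 1 < p < 2,
# evaluated by a truncated series over the Poisson jump count.
import numpy as np
from scipy.special import gammaln

def tweedie_logpdf(y, mu, phi, p, n_terms=200):
    """Log-density of a Tweedie compound Poisson variable, 1 < p < 2."""
    lam = mu ** (2 - p) / (phi * (2 - p))    # Poisson rate for the jump count N
    alpha = (2 - p) / (p - 1)                # Gamma shape of each jump
    gam = phi * (p - 1) * mu ** (p - 1)      # Gamma scale of each jump
    if y == 0:
        return -lam                          # point mass: P(Y = 0) = exp(-lam)
    n = np.arange(1, n_terms + 1)
    log_terms = (
        n * np.log(lam) - gammaln(n + 1)             # Poisson weight for N = n
        + (n * alpha - 1) * np.log(y) - y / gam      # Gamma(n * alpha, gam) density
        - n * alpha * np.log(gam) - gammaln(n * alpha)
    )
    m = log_terms.max()                      # log-sum-exp for numerical stability
    return -lam + m + np.log(np.exp(log_terms - m).sum())

print(tweedie_logpdf(1.5, mu=1.0, phi=1.0, p=1.5))
```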
105.
Jin Zhang 《Statistics》2013,47(4):792-799
The Pareto distribution is an important distribution in statistics that has been widely used in finance, physics, hydrology, geology, astronomy, and other fields. Although parameter estimation for the Pareto distribution is well established in the literature, the estimation problem becomes more complex for the truncated Pareto distribution. This article investigates the bias and mean-squared error of maximum-likelihood estimation for the truncated Pareto distribution, and some useful results are obtained.
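As a concrete illustration of the estimation problem, here is a minimal sketch of maximum-likelihood estimation of the shape parameter of a truncated Pareto distribution, with the truncation endpoints treated as known; the simulated data and all settings are illustrative only.

```python
# A hedged sketch of shape-parameter MLE for a truncated Pareto on [lo, hi];
# treating the endpoints as known is an illustrative simplification.
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(0)
lo, hi, alpha_true = 1.0, 10.0, 1.5

# Simulate truncated Pareto draws by inverse-CDF sampling.
u = rng.uniform(size=2000)
x = lo * (1 - u * (1 - (lo / hi) ** alpha_true)) ** (-1 / alpha_true)

def negloglik(a):
    # pdf: a * lo**a * x**(-a - 1) / (1 - (lo / hi)**a) on [lo, hi]
    logf = (np.log(a) + a * np.log(lo) - (a + 1) * np.log(x)
            - np.log(1 - (lo / hi) ** a))
    return -logf.sum()

res = minimize_scalar(negloglik, bounds=(1e-6, 20), method="bounded")
print("alpha_hat =", res.x)
```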
106.
M-estimation is a widely used technique for robust statistical inference. In this paper, we study model selection and model averaging for M-estimation to simultaneously improve the coverage probability of confidence intervals for the parameters of interest and reduce the impact of heavy-tailed errors or outliers in the response. Under general conditions, we develop robust versions of the focused information criterion and a frequentist model average estimator for M-estimation, and we examine their theoretical properties. In addition, we carry out extensive simulation studies and two real-data examples to assess the performance of the new procedure, and find that the proposed method produces satisfactory results.
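For readers unfamiliar with the base technique, here is a minimal sketch of a regression M-estimator with the Huber loss, fitted by iteratively reweighted least squares. It illustrates M-estimation only; the paper's focused information criterion and model-averaging procedure are not implemented here, and the tuning constant and simulated data are illustrative.

```python
# A hedged sketch of a Huber-loss M-estimator for linear regression,
# fitted by iteratively reweighted least squares (IRLS).
import numpy as np

def huber_irls(X, y, c=1.345, n_iter=50):
    """Huber M-estimate of regression coefficients (c in robust-scale units)."""
    beta = np.linalg.lstsq(X, y, rcond=None)[0]        # OLS starting value
    for _ in range(n_iter):
        r = y - X @ beta
        s = np.median(np.abs(r)) / 0.6745 + 1e-12      # robust scale via MAD
        u = r / s
        w = np.where(np.abs(u) <= c, 1.0, c / np.abs(u))  # Huber weights
        WX = X * w[:, None]
        beta = np.linalg.solve(X.T @ WX, WX.T @ y)     # weighted least squares
    return beta

rng = np.random.default_rng(1)
X = np.column_stack([np.ones(200), rng.normal(size=200)])
y = X @ np.array([1.0, 2.0]) + rng.standard_t(df=2, size=200)  # heavy-tailed errors
print(huber_irls(X, y))
```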
107.
This article proposes a multivariate synthetic control chart for skewed populations based on the weighted standard deviation method. The proposed chart incorporates the weighted standard deviation method into the standard multivariate synthetic control chart, which consists of a Hotelling's T² sub-chart and a conforming run length sub-chart. The weighted standard deviation method adjusts the variance-covariance matrix of the quality characteristics and approximates the probability density function using several multivariate normal distributions. The proposed chart reduces to the standard multivariate synthetic chart when the underlying distribution is symmetric. In general, the simulation results show that the proposed chart performs better than the existing multivariate charts for skewed populations and the standard T² chart, in terms of false alarm rates as well as moderate and large mean-shift detection rates, across various degrees of skewness.
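The synthetic-chart logic that the proposal builds on can be sketched as follows: a Hotelling's T² sub-chart flags nonconforming observations, and a signal is raised when the conforming run length between nonconforming points is short. The weighted standard deviation adjustment itself is omitted, and the control limits ucl and crl_limit are illustrative, not design values from the article.

```python
# A hedged sketch of a standard multivariate synthetic chart: a Hotelling
# T^2 sub-chart feeding a conforming run length (CRL) rule. Limits are
# illustrative placeholders, not properly designed chart constants.
import numpy as np

def synthetic_t2_signals(X, mu, Sigma_inv, ucl, crl_limit):
    """Return indices at which the synthetic chart signals."""
    d = X - mu
    t2 = np.einsum("ij,jk,ik->i", d, Sigma_inv, d)   # T^2 for each observation
    signals, last_nc = [], None
    for i, nonconforming in enumerate(t2 > ucl):
        if nonconforming:
            # CRL: samples since the previous nonconforming point
            crl = i - last_nc if last_nc is not None else i + 1
            if crl <= crl_limit:                     # short run => signal
                signals.append(i)
            last_nc = i
    return signals

rng = np.random.default_rng(2)
X = rng.normal(size=(500, 2))
X[300:] += 1.0                                       # mean shift after obs 300
print(synthetic_t2_signals(X, mu=np.zeros(2),
                           Sigma_inv=np.eye(2), ucl=9.21, crl_limit=5))
```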
108.
Brownian motion has been used to derive stopping boundaries for group sequential trials; however, when the data exhibit dependent increments, fractional Brownian motion is an alternative model to consider. In this article we compare expected sample sizes and stopping times for different stopping boundaries based on the power family alpha spending function under various values of the Hurst coefficient. The results show that expected sample sizes and stopping times decrease, and power increases, as the Hurst coefficient increases. For a given Hurst coefficient, the closer the boundaries are to those of O'Brien-Fleming, the higher the expected sample sizes and stopping times; power, however, shows a decreasing trend beginning at H = 0.6 for early analyses, 0.7 for equally spaced analyses, and 0.8 for late analyses. We also illustrate study design changes using results from the BHAT study.
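The two ingredients the article combines can be sketched briefly: the power family alpha spending function alpha(t) = alpha * t^rho, and fractional Brownian motion with Hurst coefficient H, simulated here from the Cholesky factor of its covariance. The look times and parameter values are illustrative, not those of the article's designs.

```python
# A hedged sketch of power-family alpha spending plus a fractional Brownian
# motion path at the interim looks. All settings are illustrative.
import numpy as np

alpha, rho = 0.05, 3.0                       # rho = 3 gives O'Brien-Fleming-like spending
looks = np.array([0.25, 0.5, 0.75, 1.0])     # information fractions at each look

spent = alpha * looks**rho                   # cumulative alpha spent, alpha * t**rho
print("incremental alpha per look:", np.diff(np.concatenate([[0.0], spent])))

def fbm_path(times, H, rng):
    """One fBm sample path: cov(B_s, B_t) = 0.5 * (s^2H + t^2H - |t - s|^2H)."""
    s, t = np.meshgrid(times, times)
    cov = 0.5 * (s ** (2 * H) + t ** (2 * H) - np.abs(t - s) ** (2 * H))
    L = np.linalg.cholesky(cov + 1e-12 * np.eye(len(times)))  # jitter for stability
    return L @ rng.normal(size=len(times))

rng = np.random.default_rng(3)
print("fBm at looks (H = 0.7):", fbm_path(looks, H=0.7, rng=rng))
```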
109.
SubBag is a technique that combines bagging and the random subspace method to generate ensemble classifiers with good generalization capability. In practice, a hyperparameter K of SubBag, the number of randomly selected features used to create each base classifier, must be specified beforehand. In this article, we propose to employ the out-of-bag instances to determine the optimal value of K in SubBag. Experiments conducted on several UCI real-world data sets show that the proposed method makes SubBag achieve optimal performance in nearly all the cases considered, while requiring fewer computational resources than a cross-validation procedure.
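A SubBag-style ensemble, with K chosen by out-of-bag accuracy as the article proposes, can be sketched with scikit-learn's BaggingClassifier, whose bootstrap and max_features options give bagging plus random subspaces (the default base learner is a decision tree). The data set below is a stand-in for the UCI sets used in the article, and the grid of candidate K values is arbitrary.

```python
# A hedged sketch of SubBag with out-of-bag selection of the subspace size K.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import BaggingClassifier

X, y = load_breast_cancer(return_X_y=True)   # stand-in for a UCI data set
n_features = X.shape[1]

best_k, best_oob = None, -1.0
for k in range(2, n_features + 1, 4):        # arbitrary grid of candidate K values
    clf = BaggingClassifier(
        n_estimators=100,
        max_features=k,                      # random feature subspace per base tree
        bootstrap=True,                      # bagging of training instances
        oob_score=True,                      # OOB accuracy used to pick K
        random_state=0,
    ).fit(X, y)
    if clf.oob_score_ > best_oob:
        best_k, best_oob = k, clf.oob_score_

print(f"OOB-selected K = {best_k} (OOB accuracy {best_oob:.3f})")
```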
110.
The authors derive analytic expressions for the mean and variance of the log-likelihood ratio for testing equality of k (k ≥ 2) normal populations, and suggest a chi-square approximation and a gamma approximation to the exact null distribution. Numerical comparisons show that the two approximations and the original beta approximation of Neyman and Pearson (1931) are all accurate, and that the gamma approximation is the most accurate.
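The test statistic under study can be sketched as follows. This minimal example computes -2 log Lambda for equality of k normal populations and refers it to the ordinary Wilks chi-square distribution with 2(k - 1) degrees of freedom; the article's own chi-square and gamma approximations, built from the derived mean and variance, are not implemented here, and the simulated data are placeholders.

```python
# A hedged sketch of the likelihood ratio test for H0: k normal samples
# share one common N(mu, sigma^2), using the plain Wilks chi-square reference.
import numpy as np
from scipy.stats import chi2

def lrt_equal_normals(samples):
    """Return -2 log Lambda and its Wilks chi-square p-value."""
    all_x = np.concatenate(samples)
    n_total = len(all_x)
    s2_pooled = all_x.var()                  # MLE variance under H0 (ddof = 0)
    stat = n_total * np.log(s2_pooled)
    for x in samples:
        stat -= len(x) * np.log(x.var())     # per-sample MLE variances under H1
    df = 2 * (len(samples) - 1)              # 2 free parameters per extra group
    return stat, chi2.sf(stat, df)

rng = np.random.default_rng(4)
samples = [rng.normal(0, 1, 40), rng.normal(0, 1, 55), rng.normal(0.8, 1, 50)]
stat, pval = lrt_equal_normals(samples)
print(f"-2 log Lambda = {stat:.2f}, chi-square p-value = {pval:.4f}")
```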