91.
We study genotype calling algorithms for high-throughput single-nucleotide polymorphism (SNP) arrays. Building upon the novel SNP-robust multi-chip average preprocessing approach and the state-of-the-art corrected robust linear model with Mahalanobis distance (CRLMM) approach for genotype calling, we propose a simple modification that better models and combines the information across multiple SNPs with empirical Bayes modeling, which can often significantly improve the genotype calling of CRLMM. Through applications to the HapMap Trio data set and a non-HapMap test set of high-quality SNP chips, we illustrate the competitive performance of the proposed method.
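The sketch below is not the authors' CRLMM modification; it only illustrates, under an assumed normal-normal model with hypothetical variable names, the empirical-Bayes idea of borrowing strength across SNPs by shrinking per-SNP genotype-cluster means toward a prior estimated from all SNPs.

```python
import numpy as np

def eb_shrink_cluster_means(snp_means, snp_vars, n_obs):
    """Shrink per-SNP genotype-cluster means toward the across-SNP average.

    snp_means : per-SNP cluster mean intensities (one genotype cluster)
    snp_vars  : per-SNP within-cluster variances
    n_obs     : number of samples contributing to each SNP's estimate
    """
    snp_means = np.asarray(snp_means, float)
    snp_vars = np.asarray(snp_vars, float)
    n_obs = np.asarray(n_obs, float)

    # Empirical prior: mean and between-SNP variance of the cluster centers.
    prior_mean = snp_means.mean()
    prior_var = max(snp_means.var(ddof=1) - np.mean(snp_vars / n_obs), 1e-8)

    # Posterior (shrunken) mean under a normal-normal model.
    sampling_var = snp_vars / n_obs
    weight = prior_var / (prior_var + sampling_var)
    return weight * snp_means + (1 - weight) * prior_mean

# SNPs estimated from few samples are pulled more strongly toward the prior.
print(eb_shrink_cluster_means([1.8, 2.3, 0.9, 2.0],
                              [0.2, 0.5, 0.3, 0.1],
                              [50, 4, 6, 80]))
```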
92.
93.
韩勇  干胜道  张伊 《统计研究》2013,30(5):71-75
This paper studies the investment behavior of institutional investors and their participation in the governance of listed companies from the perspective of cash dividend policy. It first reviews the relevant literature and, on the basis of signaling theory, analyzes the possible relationship between institutional investors and cash dividend policy. It then summarizes the basic characteristics of institutional investors and the general features of institutional investors and listed companies' cash dividend policies in China. Finally, using institutional shareholding data and listed companies' cash dividend policy data for 2007-2010, the paper carries out statistical and regression analyses to empirically test the relationship between institutional investor heterogeneity and the dividend policies of listed companies.
94.
The Tweedie compound Poisson distribution is a subclass of the exponential dispersion family with a power variance function, in which the value of the power index lies in the interval (1,2). It is well known that the Tweedie compound Poisson density function is not analytically tractable, and numerical procedures that allow the density to be evaluated accurately and quickly did not appear until fairly recently. Unsurprisingly, there has been little statistical literature devoted to full maximum likelihood inference for Tweedie compound Poisson mixed models. To date, the focus has been on estimation methods in the quasi-likelihood framework. Further, Tweedie compound Poisson mixed models involve an unknown variance function, which has a significant impact on hypothesis tests and predictive uncertainty measures. The estimation of the unknown variance function is thus of independent interest in many applications. However, quasi-likelihood-based methods are not well suited to this task. This paper presents several likelihood-based inferential methods for the Tweedie compound Poisson mixed model that enable estimation of the variance function from the data. These algorithms include the likelihood approximation method, in which both the integral over the random effects and the compound Poisson density function are evaluated numerically, and the latent variable approach, in which maximum likelihood estimation is carried out via the Monte Carlo EM algorithm, without the need for approximating the density function. In addition, we derive the corresponding Markov chain Monte Carlo algorithm for a Bayesian formulation of the mixed model. We demonstrate the use of the various methods through a numerical example, and conduct an array of simulation studies to evaluate the statistical properties of the proposed estimators.
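As a rough sketch of one way the compound Poisson density can be evaluated numerically (a truncated Poisson-gamma mixture sum, not the authors' algorithms), assuming the standard parameterization with mean mu, dispersion phi, and power index p in (1,2):

```python
import numpy as np
from scipy import stats
from scipy.integrate import quad

def tweedie_cp_density(y, mu, phi, p, n_max=200):
    """Tweedie compound Poisson density for 1 < p < 2, evaluated as a point
    mass at zero plus a truncated Poisson-gamma mixture series for y > 0."""
    lam = mu ** (2 - p) / (phi * (2 - p))       # Poisson rate of the jump count
    shape = (2 - p) / (p - 1)                   # gamma shape of each jump
    scale = phi * (p - 1) * mu ** (p - 1)       # gamma scale of each jump
    if y == 0:
        return np.exp(-lam)
    n = np.arange(1, n_max + 1)
    return np.sum(stats.poisson.pmf(n, lam) *
                  stats.gamma.pdf(y, a=n * shape, scale=scale))

# Sanity check: zero mass plus the integral over y > 0 should be close to 1.
mu, phi, p = 2.0, 1.0, 1.5
mass0 = tweedie_cp_density(0.0, mu, phi, p)
integral, _ = quad(lambda y: tweedie_cp_density(y, mu, phi, p), 1e-8, 60)
print(mass0 + integral)
```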
95.
Jin Zhang 《Statistics》2013,47(4):792-799
The Pareto distribution is an important distribution in statistics, which has been widely used in finance, physics, hydrology, geology, astronomy, and so on. Even though parameter estimation for the Pareto distribution is well established in the literature, the estimation problem for the truncated Pareto distribution is more complex. This article investigates the bias and mean-squared error of maximum-likelihood estimation for the truncated Pareto distribution, and some useful results are obtained.
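A hedged sketch of the setting, assuming the standard truncated Pareto density f(x) = alpha * gamma^alpha * x^(-alpha-1) / (1 - (gamma/nu)^alpha) on [gamma, nu] with the truncation points estimated by the sample extremes: the tail index is estimated by numerically maximizing the log-likelihood, and the finite-sample bias is then examined by simulation.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def truncated_pareto_mle(x):
    """MLE of the tail index alpha for a truncated Pareto sample, with the
    lower and upper truncation points estimated by min(x) and max(x)."""
    x = np.asarray(x, float)
    gamma, nu = x.min(), x.max()
    n = x.size
    log_sum = np.log(x).sum()

    def neg_loglik(alpha):
        return -(n * np.log(alpha) + n * alpha * np.log(gamma)
                 - (alpha + 1) * log_sum
                 - n * np.log1p(-(gamma / nu) ** alpha))

    return minimize_scalar(neg_loglik, bounds=(1e-6, 50), method="bounded").x

# Simulate truncated Pareto samples by inverse transform and look at the
# average estimation error, illustrating the finite-sample bias of the MLE.
rng = np.random.default_rng(0)
alpha, gamma, nu, n = 1.5, 1.0, 10.0, 100
u = rng.uniform(size=(500, n))
samples = (gamma ** -alpha - u * (gamma ** -alpha - nu ** -alpha)) ** (-1 / alpha)
estimates = [truncated_pareto_mle(s) for s in samples]
print(np.mean(estimates) - alpha)   # average bias over 500 replications
```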
96.
In this paper, Rosenthal-type maximal inequalities and a Kolmogorov-type exponential inequality for negatively superadditive-dependent (NSD) random variables are presented. By using these inequalities, we study the complete convergence for arrays of rowwise NSD random variables. As applications, the Baum–Katz-type result for arrays of rowwise NSD random variables and the complete consistency of the estimator in a nonparametric regression model with NSD errors are obtained. Our results extend and improve the corresponding ones of Chen et al. [On complete convergence for arrays of rowwise negatively associated random variables. Theory Probab Appl. 2007;52(2):393–397] for arrays of rowwise negatively associated random variables to the case of arrays of rowwise NSD random variables.
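For orientation only (the exact constants and moment conditions are those stated in the paper), a Rosenthal-type maximal inequality for mean-zero NSD random variables with finite $p$th moments, $p \ge 2$, typically takes the familiar form

\[
E\Big(\max_{1\le k\le n}\Big|\sum_{i=1}^{k}X_i\Big|^{p}\Big)
\le C_p\Big\{\sum_{i=1}^{n}E|X_i|^{p}+\Big(\sum_{i=1}^{n}EX_i^{2}\Big)^{p/2}\Big\},
\]

where $C_p$ is a positive constant depending only on $p$.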
97.
M-estimation is a widely used technique for robust statistical inference. In this paper, we study model selection and model averaging for M-estimation to simultaneously improve the coverage probability of confidence intervals of the parameters of interest and reduce the impact of heavy-tailed errors or outliers in the response. Under general conditions, we develop robust versions of the focused information criterion and a frequentist model average estimator for M-estimation, and we examine their theoretical properties. In addition, we carry out extensive simulation studies and analyze two real examples to assess the performance of the new procedure, and find that the proposed method produces satisfactory results.
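This is not the authors' focused-information-criterion procedure, only a minimal reminder of the M-estimation step it builds on: a Huber-type regression M-estimator fitted by iteratively reweighted least squares (the tuning constant, tolerance, and simulated data are arbitrary choices for this sketch).

```python
import numpy as np

def huber_m_regression(x, y, c=1.345, tol=1e-8, max_iter=100):
    """Huber-type regression M-estimator via iteratively reweighted least squares."""
    X = np.column_stack([np.ones(len(y)), x])        # design matrix with intercept
    beta = np.linalg.lstsq(X, y, rcond=None)[0]      # ordinary least squares start
    for _ in range(max_iter):
        r = y - X @ beta
        scale = np.median(np.abs(r - np.median(r))) / 0.6745   # MAD scale estimate
        u = r / max(scale, 1e-12)
        w = np.minimum(1.0, c / np.maximum(np.abs(u), 1e-12))  # Huber weights
        sw = np.sqrt(w)
        beta_new = np.linalg.lstsq(X * sw[:, None], y * sw, rcond=None)[0]
        if np.max(np.abs(beta_new - beta)) < tol:
            break
        beta = beta_new
    return beta_new

# Heavy-tailed errors plus a few gross outliers in the response.
rng = np.random.default_rng(1)
x = rng.normal(size=200)
y = 1.0 + 2.0 * x + rng.standard_t(df=2, size=200)
y[:5] += 30
print(huber_m_regression(x, y))   # roughly [1, 2] despite the outliers
```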
98.
This article proposes a multivariate synthetic control chart for skewed populations based on the weighted standard deviation method. The proposed chart incorporates the weighted standard deviation method into the standard multivariate synthetic control chart. The standard multivariate synthetic chart consists of the Hotelling's T² chart and the conforming run length chart. The weighted standard deviation method adjusts the variance–covariance matrix of the quality characteristics and approximates the probability density function using several multivariate normal distributions. The proposed chart reduces to the standard multivariate synthetic chart when the underlying distribution is symmetric. In general, the simulation results show that the proposed chart performs better than the existing multivariate charts for skewed populations and the standard T² chart, in terms of false alarm rates as well as moderate and large mean shift detection rates, under various degrees of skewness.
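For reference, and not the proposed weighted-standard-deviation chart itself, a minimal computation of the Hotelling T² statistic that the synthetic chart monitors; the in-control mean and covariance are estimated from a reference sample, and the chi-square control limit used here is an assumption of this sketch.

```python
import numpy as np
from scipy import stats

def hotelling_t2(x_new, reference):
    """Hotelling T^2 statistic of a new observation vector against an
    in-control reference sample (rows = observations, columns = variables)."""
    mu = reference.mean(axis=0)
    S = np.cov(reference, rowvar=False)
    d = x_new - mu
    return float(d @ np.linalg.solve(S, d))

rng = np.random.default_rng(2)
reference = rng.multivariate_normal([0, 0, 0], np.eye(3), size=500)
ucl = stats.chi2.ppf(0.995, df=3)               # approximate upper control limit
t2 = hotelling_t2(np.array([2.5, -2.0, 1.5]), reference)
print(t2, t2 > ucl)                             # flags a shifted observation
```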
99.
Brownian motion has been used to derive stopping boundaries for group sequential trials; however, when the data exhibit dependent increments, fractional Brownian motion is an alternative model to consider. In this article we compare expected sample sizes and stopping times for different stopping boundaries based on the power-family alpha spending function under various values of the Hurst coefficient. Results show that the expected sample sizes and stopping times decrease and power increases as the Hurst coefficient increases. For a given Hurst coefficient, the closer the boundaries are to those of O'Brien-Fleming, the larger the expected sample sizes and stopping times; however, power shows a decreasing trend starting from H = 0.6 (early analyses), 0.7 (equally spaced analyses), and 0.8 (late analyses). We also illustrate study design changes using results from the BHAT study.
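A small sketch of the power-family alpha spending function referred to above, alpha(t) = alpha * t^rho on the information fraction t in [0, 1]; it returns the cumulative type-I error spent at each interim look (the Brownian versus fractional Brownian boundary computations themselves are beyond this snippet).

```python
import numpy as np

def power_family_spending(alpha, rho, info_fractions):
    """Cumulative and incremental type-I error spent at each information fraction,
    alpha(t) = alpha * t**rho (rho = 1 is Pocock-like, rho = 3 is close to
    O'Brien-Fleming)."""
    t = np.asarray(info_fractions, float)
    cumulative = alpha * t ** rho
    increments = np.diff(np.concatenate(([0.0], cumulative)))
    return cumulative, increments

looks = [0.25, 0.5, 0.75, 1.0]          # equally spaced interim analyses
cum, inc = power_family_spending(0.05, 3, looks)
print(cum)   # cumulative alpha spent at each look
print(inc)   # alpha spent at each individual look
```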
100.
SubBag is a technique that combines bagging and random subspace methods to generate ensemble classifiers with good generalization capability. In practice, a hyperparameter K of SubBag—the number of randomly selected features used to create each base classifier—must be specified beforehand. In this article, we propose to employ the out-of-bag instances to determine the optimal value of K in SubBag. Experiments conducted on several UCI real-world data sets show that the proposed method enables SubBag to achieve optimal performance in nearly all the considered cases, while requiring fewer computational resources than a cross-validation procedure.
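A hedged sketch of the out-of-bag selection idea, using scikit-learn's BaggingClassifier as a stand-in for SubBag (bootstrap samples of the instances plus a random feature subset of size K per base tree); the candidate grid, dataset, and number of trees are placeholders, not the article's experimental setup.

```python
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import BaggingClassifier
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)
n_features = X.shape[1]

best_k, best_oob = None, -np.inf
for k in range(1, n_features + 1):
    clf = BaggingClassifier(
        DecisionTreeClassifier(),
        n_estimators=100,
        max_features=k,        # size of the random feature subset per base tree
        bootstrap=True,        # bagging: bootstrap samples of the instances
        oob_score=True,        # score each tree on its out-of-bag instances
        random_state=0,
    ).fit(X, y)
    if clf.oob_score_ > best_oob:
        best_k, best_oob = k, clf.oob_score_

print(best_k, best_oob)        # K chosen by the out-of-bag estimate
```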