61.
Despite the simplicity of the Bernoulli process, developing good confidence interval procedures for its parameter, the probability of success p, is deceptively difficult. The binary data yield a discrete number of successes from a discrete number of trials, n. This discreteness results in actual coverage probabilities that oscillate with n for fixed values of p (and with p for fixed n). Moreover, this oscillation necessitates a large sample size to guarantee good coverage probability when p is close to 0 or 1.

It is well known that the Wilson procedure is superior to many existing procedures because it is less sensitive to p than the others, and it is therefore less costly. The procedures proposed in this article work as well as the Wilson procedure when 0.1 ≤ p ≤ 0.9, and are even less sensitive (i.e., more robust) than the Wilson procedure when p is close to 0 or 1. Specifically, when the nominal coverage probability is 0.95, the Wilson procedure requires a sample size of 1,021 to guarantee that the coverage probability stays above 0.92 for any 0.001 ≤ min{p, 1 − p} < 0.01. By contrast, our procedures guarantee the same coverage with a sample size of only 177, without increasing either the expected interval width or the standard deviation of the interval width.
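A minimal sketch (not the article's proposed procedures, which the abstract does not specify): the standard Wilson interval for a binomial proportion, together with an exact computation of its coverage probability, which exhibits the oscillation in n and p described above.

```python
from scipy.stats import norm, binom

def wilson_interval(x, n, level=0.95):
    """Wilson score interval for p given x successes in n trials."""
    z = norm.ppf(1 - (1 - level) / 2)
    p_hat = x / n
    center = (p_hat + z**2 / (2 * n)) / (1 + z**2 / n)
    half = (z / (1 + z**2 / n)) * (p_hat * (1 - p_hat) / n + z**2 / (4 * n**2)) ** 0.5
    return center - half, center + half

def exact_coverage(p, n, level=0.95):
    """P(interval covers p), summed exactly over the binomial distribution."""
    total = 0.0
    for x in range(n + 1):
        lo, hi = wilson_interval(x, n, level)
        if lo <= p <= hi:
            total += binom.pmf(x, n, p)
    return total

# Coverage oscillates with n for fixed p, and dips when p is near 0 or 1:
for n in (50, 51, 52, 53):
    print(n, round(exact_coverage(0.05, n), 4))
```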
62.
The expectation-maximization (EM) method facilitates computation of maximum likelihood (ML) and maximum penalized likelihood (MPL) solutions. The procedure requires specification of unobservable complete data which augment the measured or incomplete data. This specification defines a conditional expectation of the complete-data log-likelihood function, which is computed in the E-step. The EM algorithm is most effective when maximizing the function Q(θ) defined in the E-step is easier than maximizing the likelihood function directly.

The Monte Carlo EM (MCEM) algorithm of Wei & Tanner (1990) was introduced for problems where computation of Q is difficult or intractable. However, Monte Carlo can be computationally expensive, e.g. in signal processing applications involving large numbers of parameters. We provide another approach: a modification of the standard EM algorithm that avoids computation of conditional expectations.
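A minimal sketch of the standard EM algorithm (a two-component Gaussian mixture) illustrating the E-step/M-step structure the abstract refers to; it is not the modified, expectation-free algorithm the paper proposes.

```python
import numpy as np

def em_two_gaussians(y, n_iter=100):
    """Standard EM for a mixture of two univariate Gaussians."""
    pi, mu = 0.5, np.array([y.min(), y.max()])   # crude initial values
    sigma = np.array([y.std(), y.std()])
    for _ in range(n_iter):
        # E-step: posterior responsibilities, the weights that define Q(theta)
        d0 = pi * np.exp(-0.5 * ((y - mu[0]) / sigma[0])**2) / sigma[0]
        d1 = (1 - pi) * np.exp(-0.5 * ((y - mu[1]) / sigma[1])**2) / sigma[1]
        r = d0 / (d0 + d1)
        # M-step: maximize Q in closed form (weighted means and variances)
        pi = r.mean()
        mu = np.array([np.average(y, weights=r), np.average(y, weights=1 - r)])
        sigma = np.array([
            np.sqrt(np.average((y - mu[0])**2, weights=r)),
            np.sqrt(np.average((y - mu[1])**2, weights=1 - r)),
        ])
    return pi, mu, sigma
```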
63.
ABSTRACT

One main challenge for statistical prediction with data from multiple sources is that not all of the associated covariate data are available for many sampled subjects. Consequently, new statistical methodology is needed to handle this type of "fragmentary data," which has become increasingly common in recent years. In this article, we propose a novel method based on frequentist model averaging that fits candidate models using all available covariate data. The weights in model averaging are selected by delete-one cross-validation based on the data from complete cases. The optimality of the selected weights is rigorously proved under some conditions. The finite-sample performance of the proposed method is confirmed by simulation studies. An example of personal income prediction, based on real data from a leading e-community of wealth management in China, is also presented for illustration.
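A hypothetical sketch of frequentist model averaging with delete-one cross-validation weights, in the spirit described above; the paper's exact construction of candidate models from fragmentary data is not given in the abstract, so the candidate models here are simply linear models on covariate subsets.

```python
import numpy as np
from scipy.optimize import minimize

def cv_model_average_weights(y, X, subsets):
    """y, X are the complete cases; each subset is a list of column indices
    defining one candidate model. Returns simplex weights minimizing the
    delete-one cross-validation criterion."""
    n = len(y)
    loo = np.empty((n, len(subsets)))   # leave-one-out predictions per model
    for m, cols in enumerate(subsets):
        Xm = X[:, cols]
        for i in range(n):
            keep = np.arange(n) != i
            beta, *_ = np.linalg.lstsq(Xm[keep], y[keep], rcond=None)
            loo[i, m] = Xm[i] @ beta
    obj = lambda w: np.sum((y - loo @ w) ** 2)
    cons = ({'type': 'eq', 'fun': lambda w: w.sum() - 1},)
    w0 = np.full(len(subsets), 1 / len(subsets))
    res = minimize(obj, w0, bounds=[(0, 1)] * len(subsets), constraints=cons)
    return res.x
```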
64.
Simultaneous estimation of parameters with p (≥ 2) components, where each component has a generalized life distribution, is considered under a sum of squared-error loss function. Improved estimators are obtained which dominate the maximum likelihood and the minimum mean square estimators. Robustness of the improved estimators is shown even when the component distributions are dependent. The result is extended to the estimation of system reliability when the components are connected in series. Several numerical studies are performed to demonstrate the risk improvement and the Pitman closeness of the new estimators.
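The abstract does not specify the improved estimators, so as a classical stand-in illustration of simultaneous estimation dominating the MLE under summed squared-error loss, here is the positive-part James–Stein shrinkage estimator for a p-variate normal mean (which requires p ≥ 3); this is a swapped-in technique, not the paper's method for life distributions.

```python
import numpy as np

def james_stein(x, sigma2=1.0):
    """Shrink the MLE x toward 0; dominates the MLE when dimension p >= 3."""
    p = x.size
    shrink = max(0.0, 1 - (p - 2) * sigma2 / np.dot(x, x))  # positive part
    return shrink * x

# Monte Carlo risk comparison under sum-of-squared-error loss
rng = np.random.default_rng(1)
theta = np.ones(10)
draws = rng.normal(theta, 1.0, size=(20000, 10))
risk_mle = np.mean(np.sum((draws - theta) ** 2, axis=1))
risk_js = np.mean([np.sum((james_stein(x) - theta) ** 2) for x in draws])
print(risk_mle, risk_js)   # risk_js < risk_mle
```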
65.
When the X̄ control chart is used to monitor a process, three parameters must be determined: the sample size, the sampling interval between successive samples, and the control limits of the chart. Duncan presented a cost model to determine the three parameters for an X̄ chart. Alexander et al. combined Duncan's cost model with the Taguchi loss function to present a loss model for determining the three parameters. In this paper, the Burr distribution is employed to conduct the economic-statistical design of X̄ charts for non-normal data. Alexander's loss model is used as the objective function, and the cumulative distribution function of the Burr distribution is applied to derive the statistical constraints of the design. An example is presented to illustrate the solution procedure. From the results of the sensitivity analyses, we find that small values of the skewness coefficient have no significant effect on the optimal design; however, a larger skewness coefficient leads to a slightly larger sample size and sampling interval, as well as wider control limits. Meanwhile, an increase in the kurtosis coefficient results in a larger sample size and wider control limits.
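A minimal sketch, assuming the Burr XII distribution with CDF F(x) = 1 − (1 + x^c)^(−k): control limits at a chosen false-alarm rate are the corresponding Burr quantiles, obtained by inverting F in closed form. This illustrates only the statistical constraint, not the full economic-statistical optimization; the parameter values are those often cited as approximating a standard normal shape after a location-scale adjustment.

```python
def burr_cdf(x, c, k):
    """Burr XII CDF, F(x) = 1 - (1 + x**c)**(-k) for x > 0."""
    return 1.0 - (1.0 + x**c) ** (-k) if x > 0 else 0.0

def burr_quantile(q, c, k):
    """Invert the Burr XII CDF in closed form."""
    return ((1.0 - q) ** (-1.0 / k) - 1.0) ** (1.0 / c)

alpha = 0.0027           # false-alarm rate matching the usual 3-sigma convention
c, k = 4.874, 6.158      # assumed near-normal Burr shape parameters
lcl = burr_quantile(alpha / 2, c, k)
ucl = burr_quantile(1 - alpha / 2, c, k)
print(lcl, ucl)
```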
66.
Summary.  We present a new class of methods for high dimensional non-parametric regression and classification called sparse additive models. Our methods combine ideas from sparse linear modelling and additive non-parametric regression. We derive an algorithm for fitting the models that is practical and effective even when the number of covariates is larger than the sample size. Sparse additive models are essentially a functional version of the grouped lasso of Yuan and Lin. They are also closely related to the COSSO model of Lin and Zhang but decouple smoothing and sparsity, enabling the use of arbitrary non-parametric smoothers. We give an analysis of the theoretical properties of sparse additive models and present empirical results on synthetic and real data, showing that they can be effective in fitting sparse non-parametric models in high dimensional data.
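A minimal sketch of the sparse backfitting idea behind sparse additive models: smooth the partial residual for each covariate, then soft-threshold the norm of the estimated component, zeroing out irrelevant covariates. The smoother here is a simple nearest-neighbour average; as noted above, the approach allows any non-parametric smoother.

```python
import numpy as np

def knn_smooth(x, r, k=15):
    """Average r over the k nearest neighbours of each x (a crude smoother)."""
    order = np.argsort(x)
    out = np.empty_like(r)
    for pos, i in enumerate(order):
        nbrs = order[max(0, pos - k // 2): pos + k // 2 + 1]
        out[i] = r[nbrs].mean()
    return out

def spam_backfit(X, y, lam, n_iter=20):
    """Sparse backfitting: smooth, soft-threshold, center each component."""
    n, p = X.shape
    f = np.zeros((n, p))
    for _ in range(n_iter):
        for j in range(p):
            resid = y - y.mean() - f.sum(axis=1) + f[:, j]   # partial residual
            pj = knn_smooth(X[:, j], resid)
            norm = np.sqrt(np.mean(pj**2))
            f[:, j] = max(0.0, 1 - lam / norm) * pj if norm > 0 else 0.0
            f[:, j] -= f[:, j].mean()
    return f
```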
67.
In this paper, we consider a Bayesian analysis of the unbalanced (general) growth curve model with AR(1) autoregressive dependence, while applying the Box-Cox power transformations. We propose exact, simple, and Markov chain Monte Carlo (approximate) methods for parameter estimation and for prediction of future values. Numerical results are illustrated with real and simulated data.
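A minimal sketch of two ingredients named above: the Box-Cox power transformation and the AR(1) correlation matrix that captures the within-subject dependence; the Bayesian estimation machinery itself is not shown.

```python
import numpy as np

def box_cox(y, lam):
    """Box-Cox power transform; reduces to the log transform at lam = 0."""
    return np.log(y) if lam == 0 else (y**lam - 1) / lam

def ar1_corr(t, rho):
    """t x t AR(1) correlation matrix with entries rho**|i - j|."""
    idx = np.arange(t)
    return rho ** np.abs(idx[:, None] - idx[None, :])
```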
68.
On the eve of the 80th anniversary of the founding of the Communist Party of China, we gather together to solemnly and warmly convene this theoretical symposium on Party building and ideological and political work. We do so to review our Party's glorious history; to extol our Party's brilliant achievements in always representing the development requirements of China's advanced productive forces, the orientation of China's advanced culture, and the fundamental interests of the overwhelming majority of the Chinese people; to showcase the accomplishments of the faculty and staff of Zigong Teachers College (自贡师专) who, under the Party's leadership, have continually rectified their educational thinking, transformed their educational concepts, fully implemented the Party's educational policy, and pursued the goal of cultivating generation after generation of well-rounded talent possessing a correct worldview, outlook on life, and values, together with a creative spirit and practical ability; and to show that, under the Party's leadership, we are striving to explore new ideas and new features of Party building and ideological and political work in institutions of higher education under the new circumstances…
69.
Drawing on the theories of modern pedagogy and modern economics, this paper discusses whether and how education can be industrialized. It argues that education services are a quasi-public good and that education is an industry, albeit one with particularities that distinguish it from ordinary industries; given China's actual conditions, only partial industrialization of education can be advanced gradually at present.
70.
The "China model" plays an important role in the historical development and theoretical achievements of the Sinicization of Marxism. Specifically: the constituent elements of the "China model" are an important component of the theoretical achievements of the Sinicization of Marxism; the harmonious operation of these elements is an important condition for the development of those achievements, especially the system of theories of socialism with Chinese characteristics; and the success of the "China model" is powerful proof of the correctness of the Sinicization of Marxism, that is, of combining the basic principles of Marxist theory with China's actual conditions.