31.
A control procedure is presented in this article that is based on jointly using two separate control statistics in the detection and interpretation of signals in a multivariate normal process. The procedure detects the following three situations: (i) a mean vector shift without a shift in the covariance matrix; (ii) a shift in process variation (covariance matrix) without a mean vector shift; and (iii) a simultaneous shift in both the mean vector and the covariance matrix as the result of a change in the parameters of some key process variables. It is shown that, following the occurrence of a signal on either of the separate control charts, the values of both of the corresponding signaling statistics can be decomposed into interpretable elements. Viewing the two decompositions together helps one to identify the specific components and associated variables that are being affected. These components may include individual means or variances of the process variables as well as the correlations between or among variables. An industrial data set is used to illustrate the procedure.
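As an illustrative sketch only (not the authors' exact two-chart procedure), a Hotelling-type T² signal and the unconditional terms of an MYT-style decomposition can be computed as follows; the in-control parameters `mu` and `sigma` and the shifted observation `x` are made-up values:

```python
import numpy as np

def hotelling_t2(x, mu, sigma):
    """Hotelling T^2 statistic for one observation against the in-control
    mean vector mu and covariance matrix sigma."""
    d = x - mu
    return float(d @ np.linalg.solve(sigma, d))

def t2_unconditional_terms(x, mu, sigma):
    """Unconditional terms of an MYT-style decomposition: the squared
    standardized deviation of each variable considered on its own."""
    return (x - mu) ** 2 / np.diag(sigma)

mu = np.array([0.0, 0.0])
sigma = np.array([[1.0, 0.8], [0.8, 1.0]])
x = np.array([2.5, 0.0])          # a shift in the first variable only

t2 = hotelling_t2(x, mu, sigma)
terms = t2_unconditional_terms(x, mu, sigma)
```

Comparing `t2` with the individual terms indicates which variable drives the signal; with strong correlation, the joint statistic exceeds the largest univariate term.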
32.
We consider semiparametric profile likelihood inference for the distribution function under doubly censored data. To support further development of statistical inference based on the profile likelihood ratio, as well as alternative tools such as score or Wald-type inference, we discuss the structure of the profile likelihood estimators and of their derivatives appearing in the score function and the Fisher information of the profile likelihood, and establish the consistency of these estimators.
33.
A gamma regression model with an exponential link function for the means is considered. Moment properties of the deviance statistics based on maximum likelihood and weighted least squares fits are used to define modified deviance statistics which provide alternative global goodness-of-fit tests. The null distribution properties of the deviances and modified deviances are compared with those of the approximating chi-square distribution, and it is shown that the use of the modified deviances gives much better control over the significance levels of the tests.
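For reference, the (unmodified) gamma deviance that such statistics start from can be sketched as below, with the dispersion fixed at 1; this is the standard formula, not the authors' modified statistic:

```python
import numpy as np

def gamma_deviance(y, mu):
    """Deviance of a gamma fit with fitted means mu (dispersion fixed at 1):
    D = 2 * sum(-log(y/mu) + (y - mu)/mu)."""
    y, mu = np.asarray(y, float), np.asarray(mu, float)
    return float(2.0 * np.sum(-np.log(y / mu) + (y - mu) / mu))

# a perfect fit has zero deviance; any mismatch gives a positive value
d_perfect = gamma_deviance([1.0, 2.0, 3.0], [1.0, 2.0, 3.0])
d_off = gamma_deviance([2.0, 1.0], [1.0, 2.0])
```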
37.
The expectation-maximization (EM) method facilitates computation of maximum likelihood (ML) and maximum penalized likelihood (MPL) solutions. The procedure requires specification of unobservable complete data which augment the measured or incomplete data. This specification defines a conditional expectation of the complete-data log-likelihood function, which is computed in the E-step. The EM algorithm is most effective when maximizing the function Q(θ) defined in the E-step is easier than maximizing the likelihood function.

The Monte Carlo EM (MCEM) algorithm of Wei & Tanner (1990) was introduced for problems where computation of Q is difficult or intractable. However, Monte Carlo can be computationally expensive, e.g. in signal processing applications involving large numbers of parameters. We provide another approach: a modification of the standard EM algorithm that avoids computation of conditional expectations.
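A minimal sketch of the standard EM iteration that this entry modifies (a two-component univariate Gaussian mixture on made-up data; this is the textbook algorithm, not the authors' variant):

```python
import numpy as np

def em_two_gaussians(x, n_iter=50):
    """Minimal EM for a two-component univariate Gaussian mixture.
    E-step: responsibilities (conditional expectations of memberships).
    M-step: closed-form maximizers of the resulting Q function."""
    x = np.asarray(x, float)
    pi = 0.5
    mu = np.array([x.min(), x.max()])          # crude initialization
    var = np.array([x.var(), x.var()]) + 1e-6
    for _ in range(n_iter):
        # E-step (the 1/sqrt(2*pi) constant cancels in the ratio)
        p0 = pi * np.exp(-(x - mu[0]) ** 2 / (2 * var[0])) / np.sqrt(var[0])
        p1 = (1 - pi) * np.exp(-(x - mu[1]) ** 2 / (2 * var[1])) / np.sqrt(var[1])
        r = p0 / (p0 + p1)
        # M-step
        pi = r.mean()
        mu = np.array([np.sum(r * x) / np.sum(r),
                       np.sum((1 - r) * x) / np.sum(1 - r)])
        var = np.array([np.sum(r * (x - mu[0]) ** 2) / np.sum(r),
                        np.sum((1 - r) * (x - mu[1]) ** 2) / np.sum(1 - r)]) + 1e-9
    return pi, mu, var

rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(-3.0, 1.0, 200), rng.normal(3.0, 1.0, 200)])
pi, mu, var = em_two_gaussians(x)
```

The E-step here computes the conditional expectations whose evaluation MCEM replaces with simulation when no closed form exists.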
38.
ABSTRACT

One main challenge for statistical prediction with data from multiple sources is that not all the associated covariate data are available for many sampled subjects. Consequently, new statistical methodology is needed to handle this type of “fragmentary data,” which has become increasingly common in recent years. In this article, we propose a novel method based on frequentist model averaging that fits a set of candidate models using all available covariate data. The weights in model averaging are selected by delete-one cross-validation based on the data from complete cases. The optimality of the selected weights is rigorously proved under some conditions. The finite-sample performance of the proposed method is confirmed by simulation studies. An example of personal income prediction based on real data from a leading e-community of wealth management in China is also presented for illustration.
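A toy sketch of the general idea, assuming two hypothetical candidate linear models and a grid search for the delete-one cross-validation weight (the paper's actual estimator and its optimality theory are more general):

```python
import numpy as np

def loo_preds(X, y):
    """Leave-one-out OLS predictions (plain refitting, for clarity only)."""
    n = len(y)
    out = np.empty(n)
    for i in range(n):
        m = np.ones(n, bool)
        m[i] = False
        beta, *_ = np.linalg.lstsq(X[m], y[m], rcond=None)
        out[i] = X[i] @ beta
    return out

rng = np.random.default_rng(1)
n = 80
X = rng.normal(size=(n, 2))
y = X @ np.array([1.0, 0.5]) + rng.normal(scale=0.3, size=n)

# two hypothetical candidates: covariate 1 only (usable even when
# covariate 2 is missing for a subject) versus both covariates
p1 = loo_preds(X[:, :1], y)
p2 = loo_preds(X, y)

# delete-one CV choice of the averaging weight over a grid
grid = np.linspace(0.0, 1.0, 101)
cv = [np.mean((y - (w * p1 + (1 - w) * p2)) ** 2) for w in grid]
w_hat = grid[int(np.argmin(cv))]
```

Here both covariates matter, so the cross-validated weight leans toward the full model; with genuinely fragmentary data, the smaller model would still produce predictions for subjects missing covariate 2.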
39.
In this paper we assess the sensitivity of the multivariate extreme deviate test for a single multivariate outlier to non-normality in the form of heavy tails. We find that the empirical significance levels can be markedly affected by even modest departures from multivariate normality. The effects are particularly severe when the sample size is large relative to the dimension. Finally, by way of example we demonstrate that certain graphical techniques may prove useful in identifying the source of rejection for the multivariate extreme deviate test.
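The multivariate extreme deviate statistic itself can be sketched as the largest squared Mahalanobis distance from the sample mean, as below (illustrative data with one planted outlier):

```python
import numpy as np

def extreme_deviate(X):
    """Single-outlier test statistic: the largest squared Mahalanobis
    distance from the sample mean, and the index of the observation
    achieving it."""
    mu = X.mean(axis=0)
    S = np.cov(X, rowvar=False)
    D = X - mu
    d2 = np.einsum('ij,ij->i', D @ np.linalg.inv(S), D)
    return float(d2.max()), int(d2.argmax())

rng = np.random.default_rng(2)
X = rng.normal(size=(50, 3))
X[7] += 6.0                       # plant one gross outlier
stat, idx = extreme_deviate(X)
```

Under heavy-tailed alternatives, extreme values of this statistic arise even without a true outlier, which is the sensitivity the paper quantifies.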
40.
Given a linear time series, e.g. an autoregression of infinite order, we may construct a finite-order approximation and use that as the basis for confidence regions. The sieve or autoregressive bootstrap, as this method is often called, is generally seen as a competitor with the better-understood block bootstrap approach. However, in the present paper we argue that, for linear time series, the sieve bootstrap has significantly better performance than blocking methods and offers a wider range of opportunities. In particular, since it does not corrupt second-order properties, it may be used in a double-bootstrap form, with the second bootstrap application employed to calibrate a basic percentile-method confidence interval. This approach confers second-order accuracy without the need to estimate variance. That offers substantial benefits, since variances of statistics based on time series can be difficult to estimate reliably, and, partly because of the relatively small amount of information contained in a dependent process, are notorious for causing problems when used to Studentize. Other advantages of the sieve bootstrap include considerably greater robustness against variations in the choice of the tuning parameter, here equal to the autoregressive order, and the fact that, in contradistinction to the block bootstrap, the percentile-t version of the sieve bootstrap may be based on the 'raw' estimator of standard error. In the process of establishing these properties we show that the sieve bootstrap is second-order correct.
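A minimal sketch of one sieve-bootstrap replicate, assuming a least-squares AR(p) fit and i.i.d. resampling of the centred residuals (order selection, the double bootstrap, and interval calibration are omitted):

```python
import numpy as np

def fit_ar(x, p):
    """Least-squares AR(p) fit; returns coefficients and residuals."""
    X = np.column_stack([x[p - k - 1 : len(x) - k - 1] for k in range(p)])
    y = x[p:]
    phi, *_ = np.linalg.lstsq(X, y, rcond=None)
    return phi, y - X @ phi

def sieve_bootstrap(x, p, rng):
    """One sieve-bootstrap replicate: regenerate the series from the fitted
    AR(p) recursion with i.i.d. draws from the centred residuals."""
    phi, resid = fit_ar(x, p)
    resid = resid - resid.mean()
    out = list(x[:p])                     # start from observed values
    for _ in range(len(x) - p):
        out.append(np.dot(phi, out[-1:-p - 1:-1]) + rng.choice(resid))
    return np.array(out)

# demo on a simulated AR(1) series with coefficient 0.6
rng = np.random.default_rng(3)
n = 300
x = np.empty(n)
x[0] = 0.0
for t in range(1, n):
    x[t] = 0.6 * x[t - 1] + rng.normal()

phi, resid = fit_ar(x, 1)
xb = sieve_bootstrap(x, 1, rng)
```

Repeating `sieve_bootstrap` and recomputing the statistic of interest on each replicate yields the bootstrap distribution; because the replicates preserve the fitted linear dependence, second-order properties survive, which is what makes the double-bootstrap calibration described above possible.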