Full text (paid): 2,527 articles; free: 118; free within China: 12.
By discipline: Management (139), Labor Science (1), Ethnology (10), Talent Studies (1), Demography (78), Collected Works and Series (71), Theory and Methodology (45), General (896), Sociology (98), Statistics (1,318).
By year: 2024 (15), 2023 (51), 2022 (46), 2021 (54), 2020 (84), 2019 (113), 2018 (114), 2017 (134), 2016 (116), 2015 (72), 2014 (138), 2013 (354), 2012 (186), 2011 (160), 2010 (117), 2009 (92), 2008 (93), 2007 (109), 2006 (82), 2005 (76), 2004 (68), 2003 (55), 2002 (48), 2001 (39), 2000 (38), 1999 (28), 1998 (28), 1997 (29), 1996 (12), 1995 (18), 1994 (9), 1993 (8), 1992 (12), 1991 (14), 1990 (6), 1989 (2), 1988 (8), 1987 (5), 1986 (2), 1985 (4), 1984 (2), 1983 (6), 1982 (5), 1981 (2), 1980 (1), 1979 (1), 1978 (1).
Sorted results: 2,657 matches in total (search time: 0 ms).
31.
In this article statistical inference is viewed as information processing involving input information and output information. After introducing information measures for the input and output information, an information criterion functional is formulated and optimized to obtain an optimal information processing rule (IPR). For the particular information measures and criterion functional adopted, it is shown that Bayes's theorem is the optimal IPR. This optimal IPR is shown to be 100% efficient in the sense that its use leads to the output information being exactly equal to the given input information. Also, the analysis links Bayes's theorem to maximum-entropy considerations.
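For reference, Bayes's theorem (identified above as the optimal IPR) in its standard form, with the prior and the likelihood playing the role of the input information and the posterior playing the role of the output:

```latex
p(\theta \mid x) = \frac{p(x \mid \theta)\, p(\theta)}{\int p(x \mid \theta)\, p(\theta)\, \mathrm{d}\theta}
```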
32.
A Bayesian estimator based on Franklin's randomized response procedure is proposed for proportion estimation in surveys dealing with a sensitive character. The method is simple to implement and avoids the usual drawbacks of Franklin's estimator, i.e., the occurrence of negative estimates when the population proportion is small. A simulation study is considered in order to assess the performance of the proposed estimator as well as the corresponding credible interval.
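As a rough illustration of Bayesian proportion estimation under randomized response, the sketch below uses a simple Warner-type design with a grid-approximated posterior. It is not Franklin's procedure or the estimator proposed above; the design probability, sample size, and counts are invented for the example.

```python
import numpy as np

# Warner-type randomized response (illustrative only, not Franklin's procedure):
# with probability p the respondent answers the sensitive question truthfully,
# otherwise he or she answers its negation.
p = 0.7                      # known randomization probability (assumed)
n, n_yes = 500, 230          # sample size and observed "yes" answers (made up)

def prob_yes(pi):
    """Probability of a 'yes' answer given the true sensitive proportion pi."""
    return p * pi + (1 - p) * (1 - pi)

# Grid approximation of the posterior of pi under a uniform Beta(1, 1) prior.
grid = np.linspace(1e-6, 1 - 1e-6, 10_000)
log_post = n_yes * np.log(prob_yes(grid)) + (n - n_yes) * np.log(1 - prob_yes(grid))
post = np.exp(log_post - log_post.max())
post /= post.sum()

post_mean = float(np.sum(grid * post))
print(post_mean)   # posterior mean stays in [0, 1], unlike the naive moment estimator
```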
33.
In queuing theory, a major interest of researchers is studying the formation and behavior of queues and analyzing their performance characteristics, particularly the traffic intensity, defined as the ratio between the arrival rate and the service rate. How these parameters can be estimated using statistical inference is the problem treated here. This article aims to obtain better Bayesian estimates for the traffic intensity of M/M/1 queues, which, in Kendall notation, denotes Markovian single-server queues with infinite capacity. The Jeffreys prior is proposed to obtain the posterior and predictive distributions of some parameters of interest. Samples are obtained through simulation and some performance characteristics are analyzed. It is observed from the Bayes factor that the Jeffreys prior is competitive among informative and non-informative prior distributions, and presents the best performance in many of the cases tested.
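The sketch below illustrates posterior inference for the traffic intensity ρ = λ/μ of an M/M/1 queue from simulated interarrival and service times. It uses conjugate Gamma priors rather than the Jeffreys prior derived in the article, and all parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(42)

# Simulated data: n interarrival times (rate lam) and n service times (rate mu).
lam_true, mu_true, n = 3.0, 5.0, 200
arrivals = rng.exponential(1.0 / lam_true, n)
services = rng.exponential(1.0 / mu_true, n)

# Independent Gamma(a, b) priors on lam and mu (conjugate for exponential data);
# small a, b gives a vague prior, not the Jeffreys prior used in the article.
a, b = 0.01, 0.01
lam_post = rng.gamma(a + n, 1.0 / (b + arrivals.sum()), 50_000)
mu_post = rng.gamma(a + n, 1.0 / (b + services.sum()), 50_000)

rho_post = lam_post / mu_post   # posterior draws of the traffic intensity
print(rho_post.mean(), np.percentile(rho_post, [2.5, 97.5]))
```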
34.
We propose an extension of parametric product partition models (PPMs). We name our proposal nonparametric product partition models because we associate a random measure, instead of a parametric kernel, with each set within a random partition. Our methodology does not impose any specific form on the marginal distribution of the observations, allowing us to detect shifts of behaviour even when dealing with heavy-tailed or skewed distributions. We propose a suitable loss function and find the partition of the data having minimum expected loss. We then apply our nonparametric procedure to multiple change-point analysis and compare it with PPMs and with other methodologies that have recently appeared in the literature. Also, in the context of missing data, we exploit the product partition structure in order to estimate the distribution function of each missing value, allowing us to detect change points using the loss function mentioned above. Finally, we present applications to financial as well as genetic data.
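For intuition about how a product partition structure yields change points, here is a minimal parametric sketch (a conjugate Normal kernel with known variance, closer to the PPMs the article extends than to its nonparametric proposal). A geometric prior on block lengths plays the role of the cohesion, and a dynamic program finds the maximum a posteriori partition; all settings are made up.

```python
import numpy as np

# Toy data with a single mean shift (all values made up for illustration).
rng = np.random.default_rng(0)
y = np.concatenate([rng.normal(0.0, 1.0, 60), rng.normal(2.0, 1.0, 40)])

sigma2, mu0, tau2 = 1.0, 0.0, 10.0   # known noise variance; N(mu0, tau2) prior on block means
p_change = 0.05                      # prior probability of a change after each observation

def log_block_marginal(block):
    """log marginal likelihood of one block: y_i ~ N(mu, sigma2), mu ~ N(mu0, tau2)."""
    m = len(block)
    ybar = block.mean()
    ss = np.sum((block - ybar) ** 2)
    return (-0.5 * m * np.log(2 * np.pi * sigma2) - 0.5 * ss / sigma2
            - 0.5 * np.log(1 + m * tau2 / sigma2)
            - 0.5 * m * (ybar - mu0) ** 2 / (sigma2 + m * tau2))

# Dynamic program over partitions: F[j] is the best log score of y[:j].
n = len(y)
F = np.full(n + 1, -np.inf)
F[0] = 0.0
back = np.zeros(n + 1, dtype=int)
for j in range(1, n + 1):
    for i in range(j):
        length = j - i
        score = (F[i] + log_block_marginal(y[i:j])
                 + (length - 1) * np.log(1 - p_change) + np.log(p_change))
        if score > F[j]:
            F[j], back[j] = score, i

# Backtrack to recover the MAP change points (block boundaries).
cps, j = [], n
while back[j] > 0:
    cps.append(int(back[j]))
    j = back[j]
print(sorted(cps))   # should land near the true change at index 60
```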
35.
The Studentized maximum root (SMR) distribution is useful for constructing simultaneous confidence intervals around product interaction contrasts in replicated two-way ANOVA. A three-moment approximation to the SMR distribution is proposed. The approximation requires the first three moments of the maximum root of a central Wishart matrix. These values are obtained by means of numerical integration. The accuracy of the approximation is compared to the accuracy of a two-moment approximation for selected two-way table sizes. Both approximations are reasonably accurate. The three-moment approximation is generally superior.
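The article obtains the first three moments of the largest root of a central Wishart matrix by numerical integration. The sketch below estimates those moments by plain Monte Carlo instead (identity scale matrix, arbitrarily chosen dimension and degrees of freedom), which is a convenient way to sanity-check such values.

```python
import numpy as np

def max_root_moments(p, n, reps=10_000, seed=0):
    """Monte Carlo estimates of the first three raw moments of the largest
    eigenvalue of a central Wishart_p(n, I) matrix."""
    rng = np.random.default_rng(seed)
    roots = np.empty(reps)
    for r in range(reps):
        X = rng.standard_normal((n, p))       # n independent N(0, I_p) rows
        W = X.T @ X                           # central Wishart_p(n, I)
        roots[r] = np.linalg.eigvalsh(W)[-1]  # largest eigenvalue
    return [float(np.mean(roots ** k)) for k in (1, 2, 3)]

print(max_root_moments(p=3, n=10))
```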
36.
We consider a test for the equality of k population medians θi, i = 1, 2, …, k, when the θi are believed a priori to be ordered. The observations are subject to right censorship. The distributions of the censoring variables for each population are assumed to be equal. This test is compared with the general k-sample test proposed by Breslow.
37.
38.
39.
A generalization of the Poisson distribution was defined by Consul and Jain (Ann. Math. Statist., 41, 1970) and was obtained as a particular family of Lagrange distributions by Consul and Shenton (SIAM J. Appl. Math., 23, 1972). The distribution is subsequently named the generalized Poisson distribution (GPD). This GPD reduces to the Poisson distribution for λ = 0. When the data have a one-way layout structure, the asymptotically locally optimal Neyman's C(α) test is constructed and compared with the conditional test of the hypothesis H0: λ = 0. Within the framework of generalized linear models an appropriate link function is given, and the asymptotic distributions of the estimated parameters are derived.
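For concreteness, the generalized Poisson pmf in the usual Consul-Jain parameterization, together with a quick check that it collapses to the ordinary Poisson pmf when λ = 0 (the parameter values below are arbitrary):

```python
import math

def gpd_pmf(x, theta, lam):
    """Generalized Poisson pmf (Consul-Jain parameterization):
    P(X = x) = theta * (theta + x*lam)**(x - 1) * exp(-(theta + x*lam)) / x!
    With lam = 0 this is the ordinary Poisson(theta) pmf."""
    if theta + x * lam <= 0:
        return 0.0
    return theta * (theta + x * lam) ** (x - 1) * math.exp(-(theta + x * lam)) / math.factorial(x)

theta = 2.5
poisson = [theta ** x * math.exp(-theta) / math.factorial(x) for x in range(6)]
gpd_at_zero = [gpd_pmf(x, theta, 0.0) for x in range(6)]
print(all(abs(a - b) < 1e-12 for a, b in zip(poisson, gpd_at_zero)))   # True
```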
40.
In many applications, decisions are made on the basis of a function of the parameters, g(θ). When the value of g(θ) is calculated using estimated values of the parameters, it is important to have a measure of the uncertainty associated with that value. Likelihood-ratio approaches to finding likelihood intervals for functions of parameters have been shown to be more reliable, in terms of coverage probability, than the linearization approach. Two approaches to generalizing the profiling algorithm so that likelihood intervals can be constructed for a function of parameters have been proposed in the literature (Chen and Jennrich, 1996; Bates and Watts, 1988). In this paper we show the equivalence of these two methods. We also provide an analysis of cases in which neither profiling algorithm is appropriate. For one of these cases an alternative approach is suggested: whereas generalized profiling is based on maximizing the likelihood function given a constraint on the value of g(θ), the alternative algorithm is based on optimizing g(θ) given a constraint on the value of the likelihood function.
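A minimal sketch of the alternative formulation mentioned at the end of the abstract: optimize g(θ) subject to a bound on the negative log-likelihood, so that the extreme values of g over the likelihood region give the interval endpoints. The normal model and the ratio g(θ) = μ/σ below are hypothetical choices for illustration, not the paper's example.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import chi2

# Hypothetical example: y_i ~ N(mu, sigma^2); interest is in g(theta) = mu / sigma.
rng = np.random.default_rng(1)
y = rng.normal(2.0, 1.5, 50)

def negloglik(theta):
    mu, log_sigma = theta            # parameterize by log(sigma) to keep sigma > 0
    sigma = np.exp(log_sigma)
    return 0.5 * np.sum(((y - mu) / sigma) ** 2) + len(y) * log_sigma

def g(theta):
    return theta[0] / np.exp(theta[1])

mle = minimize(negloglik, x0=np.array([0.0, 0.0]))    # unconstrained MLE
cutoff = mle.fun + 0.5 * chi2.ppf(0.95, df=1)         # likelihood-ratio bound

# Interval endpoints: minimize and maximize g(theta) over the likelihood region
# {theta : negloglik(theta) <= cutoff}.
con = {"type": "ineq", "fun": lambda t: cutoff - negloglik(t)}
lower = minimize(g, mle.x, method="SLSQP", constraints=[con]).fun
upper = -minimize(lambda t: -g(t), mle.x, method="SLSQP", constraints=[con]).fun
print(g(mle.x), (lower, upper))
```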