31.
In this paper, we address the problem of simulating from a data-generating process for which the observed data do not follow a regular probability distribution. One existing method for doing this is bootstrapping, but it is incapable of interpolating between observed data. For univariate or bivariate data, in which a mixture structure can easily be identified, we could instead simulate from a Gaussian mixture model. In general, though, we would have the problem of identifying and estimating the mixture model. Instead of these, we introduce a non-parametric method for simulating datasets like this: Kernel Carlo Simulation. Our algorithm begins by using kernel density estimation to build a target probability distribution. Then, an envelope function that is guaranteed to be higher than the target distribution is created. We then use simple accept–reject sampling. Our approach is more flexible than others, can simulate intelligently across gaps in the data, and requires no subjective modelling decisions. With several univariate and multivariate examples, we show that our method returns simulated datasets that, compared with the observed data, retain the covariance structures and have distributional characteristics that are remarkably similar.
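The pipeline the abstract describes (kernel density estimate as target, an envelope above it, then accept–reject) can be sketched in a few lines. This is not the authors' Kernel Carlo implementation, just a minimal univariate illustration of the same three steps; the data, padding, and the 1.1 envelope margin are assumptions for the example.

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(0)
data = rng.standard_normal(200)  # stand-in for the observed data

# Step 1: kernel density estimate as the target distribution.
kde = gaussian_kde(data)

# Step 2: an envelope guaranteed above the target -- here a flat box over a
# padded data range, scaled 10% above the KDE's peak on a fine grid.
lo = data.min() - 3 * data.std() * kde.factor
hi = data.max() + 3 * data.std() * kde.factor
grid = np.linspace(lo, hi, 512)
M = 1.1 * kde(grid).max()

# Step 3: simple accept-reject sampling against the envelope.
def kernel_carlo(n):
    out = []
    while len(out) < n:
        x = rng.uniform(lo, hi)
        if rng.uniform(0, M) < kde(x)[0]:
            out.append(x)
    return np.array(out)

sim = kernel_carlo(1000)
```

Because candidates are drawn continuously from the box, the simulated values interpolate across gaps between observed points, which is exactly what resampling-based bootstrapping cannot do.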
32.
Two approaches to the problem of goodness-of-fit with nuisance parameters are presented in this paper, both based on modifications of the Kolmogorov-Smirnov statistics. Improved tables of critical values originally computed by Lilliefors and Srinivasan are presented in the normal and exponential cases. Also given are tables for the uniform case, normal with known mean and normal with known variance. All tables were computed using Monte Carlo simulation with sample size n = 20000.
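The Monte Carlo recipe behind such tables is standard: repeatedly draw samples from the null family, plug the estimated nuisance parameters into the Kolmogorov-Smirnov statistic, and take an upper quantile. A sketch for the normal case with both parameters estimated (the Lilliefors setting), using far fewer replications than the tables in the paper:

```python
import numpy as np
from scipy.stats import kstest, norm

rng = np.random.default_rng(1)

def lilliefors_critical_value(n, alpha=0.05, reps=5000):
    """Monte Carlo critical value for the KS statistic when the normal
    mean and variance are estimated from the sample itself."""
    stats = np.empty(reps)
    for i in range(reps):
        x = rng.standard_normal(n)
        # Plug the estimated nuisance parameters into the null CDF.
        fitted_cdf = norm(loc=x.mean(), scale=x.std(ddof=1)).cdf
        stats[i] = kstest(x, fitted_cdf).statistic
    return np.quantile(stats, 1 - alpha)

cv = lilliefors_critical_value(20)
```

For n = 20 at the 5% level this lands near the familiar tabled value of about 0.19, well below the 0.294 critical value of the ordinary KS test, which is why ignoring parameter estimation makes the test badly conservative.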
33.
A three-parameter F approximation to the distribution of a positive linear combination of central chi-squared variables is described. It is about as easy to implement as the Satterthwaite-Welch and Hall-Buckley-Eagleson approximations. Some reassuring properties of the F approximation are derived, and numerical results are presented. The numerical results indicate that the new approximation is superior to the Satterthwaite approximation and, for some purposes, better than the Hall-Buckley-Eagleson approximation. It is not quite as good as the Gamma-Weibull approximation due to Solomon and Stephens, but is easier to implement because iterative methods are not required.
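For context, the Satterthwaite baseline that the paper improves on matches the first two moments of the linear combination to a scaled chi-squared, g·χ²(h). This sketch implements that baseline (not the paper's F approximation) and checks it against simulation; the weights and degrees of freedom are arbitrary choices for the example.

```python
import numpy as np
from scipy.stats import chi2

def satterthwaite(a, df):
    """Match the mean and variance of sum_i a_i * chi2(df_i)
    to those of g * chi2(h): g*h = m, 2*g**2*h = v."""
    a, df = np.asarray(a, float), np.asarray(df, float)
    m = (a * df).sum()           # mean of the linear combination
    v = 2 * (a * a * df).sum()   # variance of the linear combination
    return v / (2 * m), 2 * m * m / v   # (g, h)

def approx_sf(q, a, df):
    g, h = satterthwaite(a, df)
    return chi2.sf(q / g, h)

# Compare with simulation for Q = chi2(3) + 2 * chi2(5).
rng = np.random.default_rng(2)
sim = rng.chisquare(3, 100_000) + 2 * rng.chisquare(5, 100_000)
q = 20.0
p_approx = approx_sf(q, [1.0, 2.0], [3, 5])
p_sim = (sim > q).mean()
g, h = satterthwaite([1.0, 2.0], [3, 5])
```

The paper's point is that a three-parameter F family fits the tail better than this two-moment match, while still avoiding the iterations the Solomon-Stephens approximation needs.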
34.
We consider the problem of estimating the two parameters of the discrete Good distribution. We first show that the sufficient statistics for the parameters are the arithmetic and the geometric means. The maximum likelihood estimators (MLE's) of the parameters are obtained by solving numerically a system of equations involving the Lerch zeta function and the sufficient statistics. We find an expression for the asymptotic variance-covariance matrix of the MLE's, which can be evaluated numerically. We show that the probability mass function satisfies a simple recurrence equation linear in the two parameters, and propose the quadratic distance estimator (QDE) which can be computed with an iteratively reweighted least-squares algorithm. The QDE is easy to calculate and admits a simple expression for its asymptotic variance-covariance matrix. We compute this matrix for the MLE's and the QDE for various values of the parameters and see that the QDE has very high asymptotic efficiency. Finally, we present a numerical example.
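A rough numerical sketch of the MLE step, assuming the Good distribution in the form P(X = x) ∝ q^x / x^s on x = 1, 2, …, with the polylogarithm as normalizing constant (a special case of the Lerch zeta function the abstract mentions). The data here are made up for illustration, the series is truncated, and a generic optimizer stands in for the paper's score-equation solution.

```python
import numpy as np
from scipy.optimize import minimize

def polylog(s, q, terms=5000):
    """Li_s(q) = sum_{x>=1} q**x / x**s, truncated; valid for 0 < q < 1."""
    x = np.arange(1, terms + 1, dtype=float)
    return (q ** x / x ** s).sum()

def good_nll(params, data):
    """Negative log-likelihood; note it depends on the data only through
    sum(data) and sum(log(data)) -- the arithmetic and geometric means."""
    q, s = params
    if not 0 < q < 1:
        return np.inf
    n = len(data)
    return -(data.sum() * np.log(q)
             - s * np.log(data).sum()
             - n * np.log(polylog(s, q)))

# Hypothetical count data; the MLE is found numerically.
data = np.array([1, 1, 2, 1, 3, 2, 1, 4, 2, 1], dtype=float)
res = minimize(good_nll, x0=[0.5, 1.0], args=(data,), method="Nelder-Mead")
q_hat, s_hat = res.x
```

That the likelihood factors through the two means is visible directly in `good_nll`, which is the sufficiency result the paper proves.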
35.
In this paper, we discuss how to model the mean and covariance structures in linear mixed models (LMMs) simultaneously. We propose a data-driven method to model covariance structures of the random effects and random errors in the LMMs. Parameter estimation in the mean and covariances is carried out with the EM algorithm, and standard errors of the parameter estimates are calculated through Louis' (1982) information principle. Kenward's (1987) cattle data sets are analyzed for illustration, and comparison to existing work is made through simulation studies. Our numerical analysis confirms the superiority of the proposed method to existing approaches in terms of the Akaike information criterion.
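As a baseline for the class of models the paper extends, here is an ordinary linear mixed model fitted to synthetic longitudinal data (a stand-in for the cattle data, which is not reproduced here). The joint data-driven modelling of the covariance structures and the Louis-information standard errors are the paper's contributions and are not shown; this only illustrates the LMM setup itself, using statsmodels.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)

# Synthetic repeated measures: 20 subjects, 5 time points,
# random intercept (sd 1.0) plus measurement error (sd 0.5).
n_subj, n_time = 20, 5
subj = np.repeat(np.arange(n_subj), n_time)
time = np.tile(np.arange(n_time), n_subj)
b = rng.normal(0.0, 1.0, n_subj)                 # random intercepts
y = 2.0 + 0.5 * time + b[subj] + rng.normal(0.0, 0.5, subj.size)
df = pd.DataFrame({"y": y, "time": time, "subj": subj})

# Random-intercept LMM; the likelihood maximized here is the one the
# paper's EM iterations target, with a fixed covariance structure.
fit = smf.mixedlm("y ~ time", df, groups=df["subj"]).fit()
```

The fixed-effect slope estimate should recover the true value 0.5 closely, while the variance components split the spread between subjects and residual noise.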
36.
The bounds of Soms (1980a) for the tail area of the t-distribution with integral degrees of freedom are extended to arbitrary positive degrees of freedom. Comparisons are made with the bounds of Shenton and Carpenter (1965) and some numerical examples are provided.
37.
We develop a methodology for examining savings behavior in rural areas of developing countries that explicitly incorporates the sequential decision process in agriculture. The approach is used to examine the relative importance of alternative forms of savings in the presence and absence of formal financial intermediaries. Our results, based on stage-specific panel data from Pakistan, provide evidence that the presence of financial intermediaries importantly influences the use of formal savings and transfers for income smoothing. We also find significant biases in evaluations of the savings-income relationship that ignore the within-year dynamics of agricultural production.
38.
Several important economic time series are recorded on a particular day every week. Seasonal adjustment of such series is difficult because the number of weeks varies between 52 and 53 and the position of the recording day changes from year to year. In addition, certain festivals, most notably Easter, take place at different times according to the year. This article presents a solution to problems of this kind by setting up a structural time series model that allows the seasonal pattern to evolve over time and enables trend extraction and seasonal adjustment to be carried out by means of state-space filtering and smoothing algorithms. The method is illustrated with a Bank of England series on the money supply.
39.
The results of Hoeffding (1956), Pledger and Proschan (1971), Gleser (1975) and Boland and Proschan (1983) are used to obtain Buehler (1957) 1-α lower confidence limits for the reliability of k of n systems of independent components when the subsystem data have equal sample sizes and the observed failures satisfy certain conditions. To the best of our knowledge, for k ≠ 1 or n, this is the first time the exact optimal lower confidence limits for system reliability have been given. The observed failure vectors are a generalization of key test results for k of n systems, k ≠ n (Soms (1984) and Winterbottom (1974)). Two examples applying the above theory are also given.
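For readers unfamiliar with the setting: a k of n system works when at least k of its n components work, so with independent identical components its reliability is a binomial tail probability. The paper's hard problem is the converse one (exact lower confidence limits on this quantity from component test data), but the point reliability itself is simple:

```python
from math import comb

def k_of_n_reliability(k, n, p):
    """P(at least k of n independent components, each working with
    probability p, are functioning) -- a binomial upper tail."""
    return sum(comb(n, j) * p**j * (1 - p)**(n - j)
               for j in range(k, n + 1))

# A 2-of-3 system with component reliability 0.9:
r = k_of_n_reliability(2, 3, 0.9)   # 3*0.81*0.1 + 0.729 = 0.972
```

The special cases k = 1 (parallel) and k = n (series) are the ones for which exact limits were previously known; the paper covers the intermediate k.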
40.
Traditional multiple hypothesis testing procedures fix an error rate and determine the corresponding rejection region. In 2002 Storey proposed a fixed rejection region procedure and showed numerically that it can gain more power than the fixed error rate procedure of Benjamini and Hochberg while controlling the same false discovery rate (FDR). In this paper it is proved that when the number of alternatives is small compared to the total number of hypotheses, Storey's method can be less powerful than that of Benjamini and Hochberg. Moreover, the two procedures are compared by setting them to produce the same FDR. The difference in power between Storey's procedure and that of Benjamini and Hochberg is near zero when the distance between the null and alternative distributions is large, but Benjamini and Hochberg's procedure becomes more powerful as the distance decreases. It is shown that modifying the Benjamini and Hochberg procedure to incorporate an estimate of the proportion of true null hypotheses as proposed by Black gives a procedure with superior power.
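The two ingredients being compared are easy to state in code: the Benjamini-Hochberg step-up rule, and Storey's estimate of the proportion of true nulls (the quantity used to sharpen the procedure). This sketch applies both to simulated p-values with 10% alternatives; the helper names and the λ = 0.5 tuning value are illustrative choices, not from the paper.

```python
import numpy as np
from scipy.stats import norm

def bh_rejections(pvals, alpha=0.05):
    """Benjamini-Hochberg step-up: reject the k smallest p-values,
    where k is the largest index with p_(k) <= k * alpha / m."""
    m = len(pvals)
    order = np.argsort(pvals)
    below = pvals[order] <= alpha * np.arange(1, m + 1) / m
    k = np.max(np.nonzero(below)[0]) + 1 if below.any() else 0
    reject = np.zeros(m, dtype=bool)
    reject[order[:k]] = True
    return reject

def storey_pi0(pvals, lam=0.5):
    """Storey's estimator of the proportion of true nulls: p-values
    above lambda are (mostly) nulls, and nulls are uniform."""
    return min(1.0, np.mean(pvals > lam) / (1 - lam))

rng = np.random.default_rng(5)
m, m1 = 1000, 100        # 1000 tests, 100 true alternatives
z = np.concatenate([rng.normal(3, 1, m1),       # alternatives
                    rng.normal(0, 1, m - m1)])  # true nulls
p = norm.sf(z)           # one-sided p-values
rej = bh_rejections(p)
pi0 = storey_pi0(p)      # should land near the true 0.9
```

When the true π0 is close to 1, as here, the 1/π0 power gain from plugging the estimate into BH is small, which is the regime where the paper shows Storey's fixed-rejection-region method can actually lose power.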