981.
Maximum likelihood estimation under constraints is considered for the Wishart class of distributions. This provides a unified approach to estimation in a variety of problems concerning covariance matrices: virtually all covariance structures can be translated into constraints on the covariances, including covariance matrices with a given structure such as linearly patterned covariance matrices, covariance matrices with zeros, independent covariance matrices, and structurally dependent covariance matrices. The methodology followed in this paper provides a useful and simple way to obtain the exact maximum likelihood estimates directly, via an estimation procedure for the exponential class under constraints.
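As a toy illustration of one special case mentioned in the abstract (independent, i.e. block-diagonal, covariance structure), and not the paper's general procedure: the constrained Gaussian MLE is then just the sample covariance with the cross-block entries set to zero. All names below are illustrative.

```python
# Toy sketch: Gaussian MLE of a covariance matrix under an independence
# (block-diagonal) constraint. For this special case the constrained MLE
# is the sample covariance (divisor n) with cross-block entries zeroed.
import random

def sample_cov(data):
    """MLE sample covariance (divisor n) of a list of p-dimensional rows."""
    n, p = len(data), len(data[0])
    mean = [sum(row[j] for row in data) / n for j in range(p)]
    return [[sum((row[i] - mean[i]) * (row[j] - mean[j]) for row in data) / n
             for j in range(p)] for i in range(p)]

def block_diag_mle(data, blocks):
    """Zero every covariance between variables lying in different blocks."""
    p = len(data[0])
    group = {}
    for g, block in enumerate(blocks):
        for j in block:
            group[j] = g
    S = sample_cov(data)
    return [[S[i][j] if group[i] == group[j] else 0.0 for j in range(p)]
            for i in range(p)]

random.seed(0)
data = [[random.gauss(0, 1) for _ in range(4)] for _ in range(200)]
sigma_hat = block_diag_mle(data, blocks=[(0, 1), (2, 3)])
```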
982.
We wish to test the null hypothesis that the means of N panels remain the same during an observation period of length T. A quasi-likelihood argument leads to self-normalized statistics whose limit distribution under the null hypothesis is double exponential. The main results are derived assuming that each panel consists of independent observations, and are then extended to linear processes. The proofs rest on an approximation of the sum of squared CUSUM processes via the Skorokhod embedding scheme. A simulation study illustrates that our results can be used for small and moderate N and T. We apply our results to detect a change in the “corruption index”.
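A rough sketch of the building block (a simplified statistic, not the paper's exact self-normalized version): per panel, form the CUSUM process, scale by the panel's sample variance, sum the squared processes across panels, and take the maximum.

```python
# Illustrative panel change-point statistic: maximum over time of the
# variance-scaled sum of squared CUSUM processes. This is a simplified
# stand-in for the paper's self-normalized statistic.
import random

def panel_cusum_stat(panels):
    """panels: list of N series, each of length T."""
    T = len(panels[0])
    total = [0.0] * T
    for x in panels:
        s = sum(x)
        var = sum((v - s / T) ** 2 for v in x) / T
        run, csum = 0.0, []
        for k, v in enumerate(x, 1):
            run += v
            csum.append(run - k * s / T)   # CUSUM at time k
        for k in range(T):
            total[k] += csum[k] ** 2 / (var * T)
    return max(total)

random.seed(1)
T, N = 100, 5
no_change = [[random.gauss(0, 1) for _ in range(T)] for _ in range(N)]
with_change = [[random.gauss(0 if t < T // 2 else 2, 1) for t in range(T)]
               for _ in range(N)]
stat_null = panel_cusum_stat(no_change)
stat_alt = panel_cusum_stat(with_change)
```

With a mean shift in every panel the statistic is much larger than under the null, which is what a critical value from the double-exponential limit would be compared against.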
983.
This article proposes a method for constructing confidence intervals for the impulse response function of a univariate time series with a near unit root. These confidence intervals control coverage, whereas the existing techniques can all have coverage far below the nominal level. I apply the proposed method to several measures of U.S. aggregate output.
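For intuition (my notation, not the article's): in the workhorse AR(1) case y_t = ρ·y_{t-1} + ε_t, the impulse response at horizon h is simply ρ^h, so long-horizon responses are extremely sensitive to ρ near the unit root, which is what makes interval construction delicate there.

```python
# Minimal sketch: impulse responses of an AR(1) y_t = rho * y_{t-1} + eps_t.
# Near a unit root (rho close to 1), small changes in rho move
# long-horizon responses a lot.
def ar1_irf(rho, horizons):
    return [rho ** h for h in horizons]

print(ar1_irf(0.95, [1, 10, 40]))
print(ar1_irf(1.00, [1, 10, 40]))
```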
984.
Let {X_n, n ≥ 1} be a sequence of pairwise negatively quadrant dependent (NQD) random variables. In this study, we prove almost sure limit theorems for weighted sums of these random variables. From these results, we obtain a version of the Glivenko–Cantelli lemma for pairwise NQD random variables under mild conditions. Moreover, a simulation study is carried out to compare the convergence rates with those of Azarnoosh (Pak J Statist 19(1):15–23, 2003) and Li et al. (Bull Inst Math 1:281–305, 2006).
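A quick numerical reminder of what a Glivenko–Cantelli statement asserts, shown here for plain i.i.d. draws (the paper's contribution is extending this to the weaker pairwise-NQD setting): the sup distance between the empirical CDF and the true CDF shrinks as the sample grows.

```python
# Toy Glivenko-Cantelli demonstration for i.i.d. Uniform(0,1) draws:
# sup_x |F_n(x) - x| decreases with n (it is attained at the jumps of F_n).
import random

def sup_dist_uniform(n, rng):
    xs = sorted(rng.random() for _ in range(n))
    d = 0.0
    for i, x in enumerate(xs):
        d = max(d, abs((i + 1) / n - x), abs(i / n - x))
    return d

rng = random.Random(42)
d_small = sup_dist_uniform(50, rng)
d_large = sup_dist_uniform(50000, rng)
```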
985.
986.
We investigate methods for the design of sample surveys, and address the traditional resistance of survey samplers to model-based methods by incorporating model robustness at the design stage. The designs are intended to be sufficiently flexible and robust that the resulting estimates, based on the designer's best guess at an appropriate model, remain reasonably accurate in a neighbourhood of this central model. Thus, consider a finite population of N units in which a survey variable Y is related to a q-dimensional auxiliary variable x. We assume that the values of x are known for all N population units, and that we will select a sample of n ≤ N population units and then observe the n corresponding values of Y. The objective is to predict the population total $T=\sum_{i=1}^{N}Y_{i}$. The design problem we consider is to specify a selection rule, using only the values of the auxiliary variable, for choosing the n sample units so that the predictor has optimal robustness properties. We suppose that T will be predicted by methods based on a linear relationship between Y (possibly transformed) and given functions of x. We maximise the mean squared error of the prediction of T over realistic neighbourhoods of the fitted linear relationship and of the assumed variance and correlation structures. This maximised mean squared error is then minimised over the class of possible samples, yielding an optimally robust ('minimax') design. To carry out the minimisation step we introduce a genetic algorithm and discuss its tuning for maximal efficiency.
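The minimisation-over-samples step can be sketched with a toy genetic algorithm. As a stand-in for the paper's maximised-MSE objective, this sketch minimises a simple coverage criterion over the auxiliary values (the largest distance from any unit's x to its nearest sampled x); the objective, operators, and all tuning constants here are illustrative only.

```python
# Toy genetic algorithm for choosing n of N sample units by a scalar
# auxiliary variable x. The coverage criterion is a stand-in objective,
# not the paper's maximised mean squared error.
import random

def criterion(sample, xs):
    return max(min(abs(x - xs[i]) for i in sample) for x in xs)

def ga_design(xs, n, generations=60, pop_size=30, seed=3):
    rng = random.Random(seed)
    N = len(xs)
    pop = [rng.sample(range(N), n) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda s: criterion(s, xs))
        elite = pop[:pop_size // 2]            # elitism keeps the best designs
        children = []
        while len(elite) + len(children) < pop_size:
            a, b = rng.sample(elite, 2)
            child = list(set(a) | set(b))      # crossover: pool two parents...
            rng.shuffle(child)
            child = child[:n]                  # ...then repair to size n
            if rng.random() < 0.3:             # mutation: swap one unit out
                spot = rng.randrange(n)
                child[spot] = rng.choice([j for j in range(N) if j not in child])
            children.append(child)
        pop = elite + children
    return min(pop, key=lambda s: criterion(s, xs))

xs = sorted(random.Random(0).uniform(0, 1) for _ in range(60))
best = ga_design(xs, n=8)
```

Because the elite always survives, the best criterion value found is monotone non-increasing over generations, which is the property that makes such a search usable for the minimax step.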
987.
Programs for analysis of variance in different statistical packages can give different results for the same unbalanced data (unequal cell sizes), because the main effects and interactions are then non-orthogonal. This paper explains how these programs treat the analysis of variance of unbalanced data.
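The non-orthogonality can be shown directly (a minimal example I constructed, not from the paper): with unequal cell sizes in a two-way layout, the sequential (Type I) sum of squares for factor A depends on whether A is entered before or after B, which is exactly why packages that default to different decompositions disagree.

```python
# Unbalanced two-way layout: sequential SS for factor A differs depending
# on entry order. OLS is fitted via the normal equations with a tiny
# Gaussian-elimination solver; the data are made up for illustration.
def rss(X, y):
    """Residual sum of squares of OLS fit of y on columns of X."""
    p, n = len(X[0]), len(X)
    A = [[sum(X[r][i] * X[r][j] for r in range(n)) for j in range(p)]
         for i in range(p)]
    b = [sum(X[r][i] * y[r] for r in range(n)) for i in range(p)]
    for col in range(p):                      # elimination w/ partial pivoting
        piv = max(range(col, p), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, p):
            f = A[r][col] / A[col][col]
            for c in range(col, p):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    beta = [0.0] * p
    for i in reversed(range(p)):
        beta[i] = (b[i] - sum(A[i][j] * beta[j] for j in range(i + 1, p))) / A[i][i]
    return sum((y[r] - sum(X[r][j] * beta[j] for j in range(p))) ** 2
               for r in range(n))

# unbalanced 2x2 layout, rows of (a, b, y)
rows = [(0, 0, 1.0), (0, 0, 2.0), (0, 1, 5.0), (1, 0, 3.0),
        (1, 1, 7.0), (1, 1, 8.0), (1, 1, 9.0)]
y = [r[2] for r in rows]
X0 = [[1.0] for r in rows]                  # intercept only
XA = [[1.0, r[0]] for r in rows]            # intercept + A
XB = [[1.0, r[1]] for r in rows]            # intercept + B
XAB = [[1.0, r[0], r[1]] for r in rows]     # intercept + A + B
ss_a_first = rss(X0, y) - rss(XA, y)        # Type I SS, A entered first
ss_a_after_b = rss(XB, y) - rss(XAB, y)     # Type I SS, A entered after B
```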
988.
The two well-known and widely used multinomial selection procedures, Bechhofer, Elmaghraby, and Morse (BEM) and all-vector comparison (AVC), are critically compared in applications related to simulation optimization problems.

Two configurations of population probability distributions were studied, in which the best system has the greatest probability p_i of yielding the largest value of the performance measure and either does or does not have the largest expected performance measure.

Our simulation results clearly show that neither of the studied procedures outperforms the other in all situations. The user must take into consideration the complexity of the simulations and the properties of the performance-measure probability distribution when deciding which procedure to employ.

An important finding was that AVC does not work in populations in which the best system has the greatest probability p_i of yielding the largest value of the performance measure but does not have the largest expected performance measure.
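The failure mode in the last paragraph is easy to reproduce with a toy simulation (illustrative only, not the actual BEM or AVC procedures): a win-count rule, which tracks how often each system yields the round's largest value, disagrees with a sample-mean rule when the most-often-winning system does not have the largest mean.

```python
# Toy reproduction of the described failure mode. System 0 most often
# yields the round's largest value (p = 0.8), yet system 1 has the larger
# expected value (0.9 vs 0.8), so a mean-based rule and a win-count rule
# select different systems.
import random

rng = random.Random(7)
n_rounds = 1000
wins = [0, 0]
sums = [0.0, 0.0]
for _ in range(n_rounds):
    draws = [1.0 if rng.random() < 0.8 else 0.0,  # system 0: mean 0.8
             0.9]                                 # system 1: constant 0.9
    wins[max(range(2), key=lambda i: draws[i])] += 1
    for i in (0, 1):
        sums[i] += draws[i]

pick_by_wins = max(range(2), key=lambda i: wins[i])
pick_by_mean = max(range(2), key=lambda i: sums[i])
```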
989.
In this article, we present the problem of selecting a good stochastic system with high probability and minimum total simulation cost when the number of alternatives is very large. We propose a sequential approach that starts with an Ordinal Optimization procedure to select a subset that overlaps with the set of the actual best m% of systems with high probability. We then use Optimal Computing Budget Allocation to allocate the available computing budget so as to maximize the Probability of Correct Selection. This is followed by a Subset Selection procedure to obtain a smaller subset that contains the best system among those previously selected. Finally, an Indifference-Zone procedure is used to select the best system among the survivors of the previous stage. Numerical tests involving all these procedures show that a good stochastic system can be selected with high probability and a minimum number of simulation samples when the number of alternatives is large, and that the proposed approach identifies a good system in a very short simulation time.
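The screen-then-refine idea behind such a pipeline can be sketched in miniature (a two-stage toy, not the actual OO/OCBA/Subset-Selection/Indifference-Zone procedures): spend a tiny per-system budget to keep the top m%, then concentrate the remaining budget on the survivors. All constants below are made up.

```python
# Toy two-stage screen-then-refine selection among many noisy alternatives.
# Stage 1 keeps the top 10% by cheap estimates; stage 2 re-estimates only
# the survivors with a much larger budget and picks the apparent best.
import random

rng = random.Random(11)
K = 200                                      # number of alternatives
true_means = [rng.uniform(0, 1) for _ in range(K)]

def noisy_mean(i, n):
    """Average of n noisy simulation outputs of system i."""
    return sum(rng.gauss(true_means[i], 0.5) for _ in range(n)) / n

# Stage 1: tiny budget per system, keep the top 10%
stage1 = {i: noisy_mean(i, 5) for i in range(K)}
survivors = sorted(stage1, key=stage1.get, reverse=True)[:K // 10]

# Stage 2: concentrate the budget on the survivors
stage2 = {i: noisy_mean(i, 200) for i in survivors}
chosen = max(stage2, key=stage2.get)
```

The total budget here is 200·5 + 20·200 = 5,000 samples, versus 200·200 = 40,000 to estimate every system accurately, which is the cost saving the staged approach is after.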
990.