943.
We deal with the double sampling plans by variables proposed by Bowker and Goode (Sampling Inspection by Variables, McGraw–Hill, New York, 1952) for the case in which the standard deviation is unknown. Using the procedure for calculating the OC given by Krumbholz and Rohr (Allg. Stat. Arch. 90:233–251, 2006), we present an optimization algorithm that determines the ASN minimax plan. Among all double plans satisfying the classical two-point condition on the OC, this plan has the smallest maximum ASN.
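The quantity being minimized has a simple structure: the ASN of a double plan is n1 plus n2 times the probability that the first stage reaches no decision. As a rough illustration only (not the exact Krumbholz–Rohr OC computation), that probability for a hypothetical two-stage variables plan with unknown sigma can be estimated by Monte Carlo; the acceptance constants `ka`, `kr` and the upper specification limit `U` are assumed for this sketch:

```python
import numpy as np

def asn_monte_carlo(mu, sigma, U, n1, n2, ka, kr, n_sim=20_000, seed=0):
    """Estimate ASN = n1 + n2 * P(no decision at stage 1) by simulation.

    Stage 1 computes Q1 = (U - xbar)/s from the first sample of size n1,
    accepts if Q1 >= ka, rejects if Q1 < kr, and otherwise draws the
    second sample of size n2 (hypothetical plan constants ka > kr).
    """
    rng = np.random.default_rng(seed)
    x = rng.normal(mu, sigma, (n_sim, n1))
    q1 = (U - x.mean(axis=1)) / x.std(axis=1, ddof=1)
    p_continue = ((q1 >= kr) & (q1 < ka)).mean()
    return n1 + n2 * p_continue
```

An ASN minimax search would then minimize the maximum of this function over the process level mu, subject to the two-point condition on the OC.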
944.
In some statistical problems a degree of explicit, prior information is available about the value taken by the parameter of interest, θ say, although the information is much less than would be needed to place a prior density on the parameter's distribution. Often the prior information takes the form of a simple bound, ‘θ > θ1’ or ‘θ < θ1’, where θ1 is determined by physical considerations or mathematical theory, such as positivity of a variance. A conventional approach to accommodating the requirement that θ > θ1 is to replace an estimator, θ̂, of θ by the maximum of θ̂ and θ1. However, this technique is generally inadequate. For one thing, it does not respect the strictness of the inequality θ > θ1, which can be critical in interpreting results. For another, it produces an estimator that does not respond in a natural way to perturbations of the data. In this paper we suggest an alternative approach, in which bootstrap aggregation, or bagging, is used to overcome these difficulties. Bagging gives estimators that, when subjected to the constraint θ > θ1, strictly exceed θ1 except in extreme settings in which the empirical evidence strongly contradicts the constraint. Bagging also reduces estimator variability in the important case for which θ̂ is close to θ1, and more generally produces estimators that respect the constraint in a smooth, realistic fashion.  相似文献
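A minimal sketch of the bagging idea, taking the sample mean as the unconstrained estimator θ̂ (an illustrative choice, not the paper's general setting): instead of truncating the point estimate once, the constraint max(·, θ1) is applied to each bootstrap replicate and the results are averaged, which smooths the hard cut-off at θ1.

```python
import numpy as np

def bagged_constrained_mean(x, theta1, n_boot=1000, seed=0):
    """Bagged version of the constrained estimator max(theta_hat, theta1).

    Each bootstrap resample yields a mean; the constraint is applied per
    replicate and the constrained replicates are averaged.
    """
    rng = np.random.default_rng(seed)
    x = np.asarray(x, float)
    n = len(x)
    idx = rng.integers(0, n, size=(n_boot, n))   # bootstrap resample indices
    boot_means = x[idx].mean(axis=1)
    return np.maximum(boot_means, theta1).mean()
```

When θ̂ is close to θ1, some bootstrap means fall above θ1, so the bagged estimate strictly exceeds θ1 and moves smoothly with the data; only when the data overwhelmingly contradict the constraint does it collapse towards θ1.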
945.
We consider two related aspects of the study of old‐age mortality. One is the estimation of a parameterized hazard function from grouped data, and the other is its possible deceleration at extreme old age owing to heterogeneity described by a mixture of distinct sub‐populations. The first is treated by half of a logistic transform, which is known to be free of discretization bias at older ages, and also preserves the increasing slope of the log hazard in the Gompertz case. It is assumed that data are available in the form published by official statistical agencies, that is, as aggregated frequencies in discrete time. Local polynomial modelling and weighted least squares are applied to cause‐of‐death mortality counts. The second, related, problem is to discover what conditions are necessary for population mortality to exhibit deceleration for a mixture of Gompertz sub‐populations. The general problem remains open but, in the case of three groups, we demonstrate that heterogeneity may be such that it is possible for a population to show decelerating mortality and then return to a Gompertz‐like increase at a later age. This implies that there are situations, depending on the extent of heterogeneity, in which there is at least one age interval in which the hazard function decreases before increasing again.  相似文献
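The mixture mechanism can be made concrete. For Gompertz sub-populations with hazards a_i·exp(b_i·t) and survival functions S_i(t) = exp(−(a_i/b_i)(e^{b_i t} − 1)), the population hazard is the survival-weighted average of the sub-population hazards. A sketch (parameter values purely illustrative):

```python
import numpy as np

def gompertz_surv(t, a, b):
    """Gompertz survival function S(t) = exp(-(a/b)(e^{bt} - 1))."""
    return np.exp(-(a / b) * np.expm1(b * t))

def gompertz_haz(t, a, b):
    """Gompertz hazard h(t) = a * e^{bt}."""
    return a * np.exp(b * t)

def mixture_hazard(t, weights, params):
    """Population hazard of a mixture of Gompertz sub-populations:
    h(t) = sum_i w_i f_i(t) / sum_i w_i S_i(t)."""
    t = np.asarray(t, float)
    dens = sum(w * gompertz_haz(t, a, b) * gompertz_surv(t, a, b)
               for w, (a, b) in zip(weights, params))
    surv = sum(w * gompertz_surv(t, a, b) for w, (a, b) in zip(weights, params))
    return dens / surv
```

Because frailer sub-populations die out first, the mixture hazard drifts towards the hazard of the most robust group, which is what makes deceleration (and, with three groups, a later return to Gompertz-like increase) possible.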
946.
Data envelopment analysis (DEA) is a deterministic econometric model for calculating efficiency by using data from an observed set of decision-making units (DMUs). We propose a method for calculating the distribution of efficiency scores. Our framework relies on estimating data from an unobserved set of DMUs. The model provides posterior predictive data for the unobserved DMUs to augment the frontier in the DEA that provides a posterior predictive distribution for the efficiency scores. We explore the method on a multiple-input and multiple-output DEA model. The data for the example are from a comprehensive examination of how nursing homes complete a standardized mandatory assessment of residents.  相似文献
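The efficiency scores themselves come from one linear program per DMU. A minimal input-oriented, constant-returns (CCR) envelopment sketch using `scipy.optimize.linprog` (the Bayesian augmentation of the frontier described above is not reproduced here):

```python
import numpy as np
from scipy.optimize import linprog

def dea_ccr_input(X, Y):
    """Input-oriented CCR efficiency scores.

    X: (n_dmus, n_inputs), Y: (n_dmus, n_outputs).
    For each DMU k: minimize theta subject to
      sum_j lambda_j x_j <= theta * x_k   (inputs)
      sum_j lambda_j y_j >= y_k           (outputs)
      lambda >= 0.
    """
    X, Y = np.asarray(X, float), np.asarray(Y, float)
    n, m = X.shape
    s = Y.shape[1]
    scores = []
    for k in range(n):
        # decision variables: [theta, lambda_1 .. lambda_n]
        c = np.r_[1.0, np.zeros(n)]
        A_ub = np.zeros((m + s, n + 1))
        b_ub = np.zeros(m + s)
        A_ub[:m, 0] = -X[k]      # -theta * x_k ...
        A_ub[:m, 1:] = X.T       # ... + sum_j lambda_j x_j <= 0
        A_ub[m:, 1:] = -Y.T      # sum_j lambda_j y_j >= y_k
        b_ub[m:] = -Y[k]
        res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * (n + 1))
        scores.append(res.x[0])
    return np.array(scores)
```

The proposed method would rerun such a program with posterior predictive DMUs added to `X` and `Y`, so each posterior draw of the augmented frontier yields a draw of the efficiency scores.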
947.
An approach to the analysis of time-dependent ordinal quality-score data from robust design experiments is developed and applied to an experiment from commercial horticultural research, using concepts of product robustness and longevity that are familiar to analysts in engineering research. A two-stage analysis is used to develop models describing the effects of a number of experimental treatments on the rate of post-sales product quality decline. The first stage approximates the quality decline for an individual experimental unit with a polynomial function on a transformed scale, yielding derived coefficients; the second stage uses a joint mean and dispersion model to investigate the effects of the experimental treatments on these derived coefficients. The approach, developed specifically for an application in horticulture, is exemplified with data from a trial testing ornamental plants subjected to a range of treatments during production and home-life. The results of the analysis show how a number of control and noise factors affect the rate of post-production quality decline. Although the model is used to analyse quality data from a trial on ornamental plants, the approach is expected to be applicable to a wide range of other complex production systems.  相似文献
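The two-stage idea can be sketched as follows: stage one fits a low-order polynomial to each unit's (transformed) quality scores over time, and stage two regresses the derived decline coefficients on the treatment factors. The names and the plain least-squares second stage are illustrative simplifications; the paper itself uses a joint mean and dispersion model.

```python
import numpy as np

def two_stage_decline(times, scores_by_unit, treatments, degree=1):
    """Stage 1: per-unit polynomial fit of quality over time.
    Stage 2: least-squares regression of the fitted decline slopes
    on treatment indicators (mean model only)."""
    slopes = np.array([np.polyfit(times, y, degree)[0] for y in scores_by_unit])
    X = np.column_stack([np.ones(len(treatments)), np.asarray(treatments, float)])
    beta, *_ = np.linalg.lstsq(X, slopes, rcond=None)
    return slopes, beta
```

A dispersion model would add a second regression, of the squared stage-two residuals (or a log-variance link) on the same factors, so that noise factors affecting variability of the decline rate can be identified as well.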
948.
Plotting log(−log) survival functions against time for different categories, or combinations of categories, of covariates is perhaps the easiest and most commonly used graphical tool for checking the proportional hazards (PH) assumption. One problem with the technique is that the covariates must be categorical, or be made categorical by suitably grouping continuous covariates. Other limitations include the subjectivity of decisions based on visual judgement of the plots and the frequent inconclusiveness that arises as the number of categories and/or covariates grows. This paper proposes a non-graphical (numerical) test of the PH assumption that makes use of the log(−log) survival function. The test enables checking proportionality for categorical as well as continuous covariates and overcomes the other limitations of the graphical method. The observed power and size of the test are compared with those of similar tests through simulation experiments. Simulations demonstrate that the proposed test is more powerful than some of the most sensitive tests in the literature across a wide range of survival situations. The test is illustrated using the widely used gastric cancer data.  相似文献
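The transform behind both the plots and the proposed test is log(−log Ŝ(t)): under proportional hazards, the curves for two groups differ by a constant vertical shift of log of the hazard ratio. A sketch that computes this shift numerically from Kaplan–Meier estimates (the paper's actual test statistic is not reproduced; the grid must lie within the observed event range of both groups):

```python
import numpy as np

def km_survival(times, events):
    """Kaplan-Meier survival estimate at each distinct event time."""
    order = np.argsort(times)
    t, e = np.asarray(times)[order], np.asarray(events)[order]
    uniq = np.unique(t[e == 1])
    surv, s = [], 1.0
    for u in uniq:
        n_risk = np.sum(t >= u)
        d = np.sum((t == u) & (e == 1))
        s *= 1 - d / n_risk
        surv.append(s)
    return uniq, np.array(surv)

def loglog_shift(times1, ev1, times2, ev2, grid):
    """Difference of log(-log S) curves on a common time grid; roughly
    constant under proportional hazards."""
    def eval_km(times, events):
        ts, ss = km_survival(times, events)
        idx = np.searchsorted(ts, grid, side="right") - 1  # step function
        return np.where(idx < 0, 1.0, ss[np.clip(idx, 0, len(ss) - 1)])
    s1, s2 = eval_km(times1, ev1), eval_km(times2, ev2)
    return np.log(-np.log(s1)) - np.log(-np.log(s2))
```

A numerical check can then summarize how far the shift departs from constancy (e.g. its variance over the grid), replacing eye-judgement of near-parallel curves with a number.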
949.
In many sciences, researchers often face the problem of establishing whether the distribution of a categorical variable is more concentrated, or less heterogeneous, in population P 1 than in population P 2. An approximate nonparametric solution to this problem is discussed within the permutation context. The solution has similarities to testing for stochastic dominance, that is, testing under order restrictions, for ordered categorical variables. The main properties of the solution are examined, together with a Monte Carlo simulation evaluating its degree of approximation and its power behaviour. Two application examples are also discussed.  相似文献
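A sketch of such a permutation solution, using the Gini heterogeneity index 1 − Σ p_k² as the measure of heterogeneity (an illustrative choice of statistic, not necessarily the one studied in the paper):

```python
import numpy as np

def gini_heterogeneity(sample, categories):
    """Gini heterogeneity index 1 - sum(p_k^2); 0 = fully concentrated."""
    p = np.array([(sample == c).mean() for c in categories])
    return 1.0 - np.sum(p ** 2)

def perm_test_heterogeneity(x1, x2, n_perm=2000, seed=None):
    """One-sided permutation p-value for H1: population 1 is less
    heterogeneous (more concentrated) than population 2."""
    rng = np.random.default_rng(seed)
    cats = np.unique(np.concatenate([x1, x2]))
    obs = gini_heterogeneity(x2, cats) - gini_heterogeneity(x1, cats)
    pooled = np.concatenate([x1, x2])
    n1, count = len(x1), 0
    for _ in range(n_perm):
        perm = rng.permutation(pooled)
        stat = (gini_heterogeneity(perm[n1:], cats)
                - gini_heterogeneity(perm[:n1], cats))
        count += stat >= obs
    return (count + 1) / (n_perm + 1)
```

Randomly relabelling units under the null of equal distributions gives the reference distribution of the heterogeneity difference, so no parametric assumption on the category probabilities is needed.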
950.
The maximum likelihood estimator (MLE) and the likelihood ratio test (LRT) are considered for making inference about the scale parameter of the exponential distribution under moving extreme ranked set sampling (MERSS). Neither the MLE nor the LRT can be written in closed form. Therefore, a modification of the MLE using the technique suggested by Mehrotra and Nanda (Biometrika 61:601–606, 1974) is considered, and this modified estimator is used to modify the LRT, giving a closed-form test of a simple hypothesis against one-sided alternatives. The same idea is used to modify the most powerful test (MPT) for a simple hypothesis versus a simple hypothesis, again giving a closed-form test against one-sided alternatives. The modified estimator turns out to be a good competitor of the MLE, and the modified tests are good competitors of the LRT, under both MERSS and simple random sampling (SRS).  相似文献
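As an illustration of why numerical work is needed, consider one common MERSS scheme in which the i-th measured unit is the maximum of i independent exponentials, so its density is (i/θ)e^{−x/θ}(1 − e^{−x/θ})^{i−1} (this scheme is an assumption for the sketch; the paper's modified estimator is not reproduced). The log-likelihood has no closed-form maximiser but is easy to optimise numerically:

```python
import numpy as np
from scipy.optimize import minimize_scalar

def merss_exponential_mle(x):
    """Numerical MLE of the exponential scale theta under a MERSS scheme
    where observation i (i = 1..n) is the maximum of i iid Exp(theta)."""
    x = np.asarray(x, float)
    ranks = np.arange(1, len(x) + 1)

    def negloglik(theta):
        z = x / theta
        return -np.sum(np.log(ranks) - np.log(theta) - z
                       + (ranks - 1) * np.log1p(-np.exp(-z)))

    res = minimize_scalar(negloglik, bounds=(1e-3, 100.0), method="bounded")
    return res.x
```

A closed-form modification in the spirit of the abstract would replace this iterative maximisation by an explicit estimator that can then be plugged into the test statistics.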