951.
There are now three essentially separate literatures on the topics of multiple systems estimation, record linkage, and missing data, but in practice the three are intimately intertwined. For example, record linkage involving multiple data sources for human populations is often carried out with the express goal of developing a merged database for multiple systems estimation (MSE). Similarly, one way to view both the record linkage and MSE problems is as problems of estimating missing data. This presentation highlights the technical nature of these interrelationships and provides a preliminary effort at their integration.
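As a minimal concrete instance of the MSE side of this picture, the sketch below applies the classical two-list (Lincoln-Petersen/Chapman) estimator to counts that a record-linkage step might produce; the counts are made-up and the estimator is standard textbook material, not anything specific to this presentation.

```python
# counts that a record-linkage step might yield for two lists covering the same
# population (made-up numbers): n1 on list A, n2 on list B, m matched on both
n1, n2, m = 420, 510, 130

# Chapman's nearly unbiased variant of the Lincoln-Petersen two-list estimator
N_hat = (n1 + 1) * (n2 + 1) / (m + 1) - 1
var_hat = (n1 + 1) * (n2 + 1) * (n1 - m) * (n2 - m) / ((m + 1) ** 2 * (m + 2))
print(f"estimated population size: {N_hat:.0f} (s.e. {var_hat ** 0.5:.0f})")
```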
952.
953.
954.
We deal with the double sampling plans by variables proposed by Bowker and Goode (Sampling Inspection by Variables, McGraw-Hill, New York, 1952) for the case in which the standard deviation is unknown. Using the procedure for calculating the OC given by Krumbholz and Rohr (Allg. Stat. Arch. 90:233–251, 2006), we present an optimization algorithm that determines the ASN minimax plan: among all double plans satisfying the classical two-point condition on the OC, this plan minimizes the maximum of the ASN.
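The sketch below is a Monte Carlo approximation of the OC and ASN of a double sampling plan by variables with unknown standard deviation, not the exact OC computation of Krumbholz and Rohr. The assumed plan structure (first-stage acceptance and rejection constants ka1 and kr1, a pooled second-stage constant ka2) and all numerical values are illustrative assumptions.

```python
import numpy as np
from scipy.stats import norm

def oc_asn_mc(p, n1, n2, ka1, kr1, ka2, reps=20000, rng=None):
    """Monte Carlo estimate of OC (acceptance probability) and ASN for a
    double sampling plan by variables with unknown sigma and a single
    upper specification limit U.  WLOG mu = 0, sigma = 1, so U = z_{1-p}
    corresponds to fraction defective p."""
    rng = np.random.default_rng(rng)
    U = norm.ppf(1.0 - p)
    accept, total_n = 0, 0
    for _ in range(reps):
        x1 = rng.normal(size=n1)
        t1 = (U - x1.mean()) / x1.std(ddof=1)
        n_used = n1
        if t1 >= ka1:                       # accept on the first sample
            accept += 1
        elif t1 < kr1:                      # reject on the first sample
            pass
        else:                               # take the second sample and pool
            x = np.concatenate([x1, rng.normal(size=n2)])
            n_used += n2
            if (U - x.mean()) / x.std(ddof=1) >= ka2:
                accept += 1
        total_n += n_used
    return accept / reps, total_n / reps    # (OC value, ASN) at this p

# example: OC and ASN at two quality levels for an illustrative plan
for p in (0.01, 0.05):
    oc, asn = oc_asn_mc(p, n1=20, n2=20, ka1=2.0, kr1=1.5, ka2=1.8, rng=1)
    print(f"p={p:.2f}  OC~{oc:.3f}  ASN~{asn:.1f}")
```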
955.
In some statistical problems a degree of explicit, prior information is available about the value taken by the parameter of interest, θ say, although the information is much less than would be needed to place a prior density on the parameter's distribution. Often the prior information takes the form of a simple bound, 'θ > θ1' or 'θ < θ1', where θ1 is determined by physical considerations or mathematical theory, such as positivity of a variance. A conventional approach to accommodating the requirement that θ > θ1 is to replace an estimator, θ̂ say, of θ by the maximum of θ̂ and θ1. However, this technique is generally inadequate. For one thing, it does not respect the strictness of the inequality θ > θ1, which can be critical in interpreting results. For another, it produces an estimator that does not respond in a natural way to perturbations of the data. In this paper we suggest an alternative approach, in which bootstrap aggregation, or bagging, is used to overcome these difficulties. Bagging gives estimators that, when subjected to the constraint θ > θ1, strictly exceed θ1 except in extreme settings in which the empirical evidence strongly contradicts the constraint. Bagging also reduces estimator variability in the important case for which θ̂ is close to θ1, and more generally produces estimators that respect the constraint in a smooth, realistic fashion.
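A minimal sketch of the bagging idea for a lower-bounded parameter follows; the choice of the sample mean as the estimator and of θ1 = 0 as the bound are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def bagged_constrained_estimate(x, estimator, theta1, B=2000, rng=None):
    """Bagging for a parameter subject to the constraint theta > theta1.
    Instead of truncating the original estimate once at theta1, the
    constrained estimator max(., theta1) is applied to each bootstrap
    resample and the B constrained values are averaged.  The result
    exceeds theta1 strictly unless essentially every resample estimate
    falls at or below the bound."""
    rng = np.random.default_rng(rng)
    x = np.asarray(x)
    n = len(x)
    vals = np.empty(B)
    for b in range(B):
        xb = x[rng.integers(0, n, size=n)]       # bootstrap resample
        vals[b] = max(estimator(xb), theta1)     # constrained estimate
    return vals.mean()

# illustrative use: estimating a mean known on outside grounds to exceed 0
rng = np.random.default_rng(0)
x = rng.normal(loc=0.1, scale=1.0, size=50)      # true mean barely above the bound
naive = max(np.mean(x), 0.0)                     # hard truncation at theta1 = 0
bagged = bagged_constrained_estimate(x, np.mean, theta1=0.0, rng=1)
print(f"truncated: {naive:.4f}   bagged: {bagged:.4f}")
```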
956.
We consider two related aspects of the study of old-age mortality. One is the estimation of a parameterized hazard function from grouped data, and the other is its possible deceleration at extreme old age owing to heterogeneity described by a mixture of distinct sub-populations. The first is treated by half of a logistic transform, which is known to be free of discretization bias at older ages, and also preserves the increasing slope of the log hazard in the Gompertz case. It is assumed that data are available in the form published by official statistical agencies, that is, as aggregated frequencies in discrete time. Local polynomial modelling and weighted least squares are applied to cause-of-death mortality counts. The second, related, problem is to discover what conditions are necessary for population mortality to exhibit deceleration for a mixture of Gompertz sub-populations. The general problem remains open but, in the case of three groups, we demonstrate that heterogeneity may be such that it is possible for a population to show decelerating mortality and then return to a Gompertz-like increase at a later age. This implies that there are situations, depending on the extent of heterogeneity, in which there is at least one age interval in which the hazard function decreases before increasing again.
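The sketch below illustrates only the second point: it evaluates the marginal hazard of a three-component Gompertz mixture and checks whether the slope of the log hazard dips and then recovers. The mixture weights and Gompertz parameters are assumptions chosen for illustration, not values from the paper; with these values the deceleration shows up as a dip in the log-hazard slope, whereas an actual decrease of the hazard itself, as in the paper, requires a more extreme separation of the sub-populations.

```python
import numpy as np

def gompertz_hazard(t, a, b):
    return a * np.exp(b * t)

def gompertz_survival(t, a, b):
    return np.exp(-(a / b) * (np.exp(b * t) - 1.0))

def mixture_hazard(t, weights, params):
    """Marginal hazard of a finite mixture of Gompertz sub-populations:
    h(t) = sum_i w_i S_i(t) h_i(t) / sum_i w_i S_i(t)."""
    num = sum(w * gompertz_survival(t, a, b) * gompertz_hazard(t, a, b)
              for w, (a, b) in zip(weights, params))
    den = sum(w * gompertz_survival(t, a, b)
              for w, (a, b) in zip(weights, params))
    return num / den

# three illustrative sub-populations (frail, intermediate, robust);
# weights and parameters are assumptions, not taken from the paper
weights = [0.3, 0.4, 0.3]
params = [(1e-5, 0.120), (1e-5, 0.100), (1e-5, 0.085)]

ages = np.arange(60, 111).astype(float)
h = mixture_hazard(ages, weights, params)
slope = np.diff(np.log(h))            # local slope of the log hazard

imin = int(np.argmin(slope))
print("minimum log-hazard slope %.4f around age %d" % (slope[imin], ages[imin]))
print("slope decelerates then re-increases:",
      bool(imin > 0 and imin + 1 < len(slope)
           and slope[imin + 1:].max() > slope[imin]))
```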
957.
Data envelopment analysis (DEA) is a deterministic econometric model for calculating efficiency by using data from an observed set of decision-making units (DMUs). We propose a method for calculating the distribution of efficiency scores. Our framework relies on estimating data from an unobserved set of DMUs. The model provides posterior predictive data for the unobserved DMUs to augment the frontier in the DEA that provides a posterior predictive distribution for the efficiency scores. We explore the method on a multiple-input and multiple-output DEA model. The data for the example are from a comprehensive examination of how nursing homes complete a standardized mandatory assessment of residents.
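For reference, the sketch below implements only the deterministic DEA core (an input-oriented CCR envelopment LP solved per DMU with scipy), not the posterior predictive augmentation described in the abstract; the small two-input, one-output data set is made up. DMUs scoring 1 lie on the frontier and are the ones a frontier-augmentation scheme would perturb.

```python
import numpy as np
from scipy.optimize import linprog

def dea_ccr_input(X, Y):
    """Input-oriented CCR efficiency scores via the envelopment LP:
    for each DMU o, minimise theta subject to
        X @ lam <= theta * x_o,   Y @ lam >= y_o,   lam >= 0.
    X is (m inputs x n DMUs), Y is (s outputs x n DMUs)."""
    m, n = X.shape
    s, _ = Y.shape
    scores = np.empty(n)
    for o in range(n):
        c = np.r_[1.0, np.zeros(n)]                  # minimise theta
        A_in = np.hstack([-X[:, [o]], X])            # X lam - theta x_o <= 0
        A_out = np.hstack([np.zeros((s, 1)), -Y])    # -Y lam <= -y_o
        res = linprog(c,
                      A_ub=np.vstack([A_in, A_out]),
                      b_ub=np.r_[np.zeros(m), -Y[:, o]],
                      bounds=[(None, None)] + [(0, None)] * n,
                      method="highs")
        scores[o] = res.x[0]
    return scores

# illustrative data (made-up numbers): 2 inputs, 1 output, 5 DMUs
X = np.array([[4., 7., 8., 4., 2.],
              [3., 3., 1., 2., 4.]])
Y = np.array([[1., 1., 1., 1., 1.]])
print(np.round(dea_ccr_input(X, Y), 3))
```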
958.
An approach to the analysis of time-dependent ordinal quality score data from robust design experiments is developed and applied to an experiment from commercial horticultural research, using concepts of product robustness and longevity that are familiar to analysts in engineering research. A two-stage analysis is used to develop models describing the effects of a number of experimental treatments on the rate of post-sales product quality decline. The first stage uses a polynomial function on a transformed scale to approximate the quality decline for an individual experimental unit using derived coefficients and the second stage uses a joint mean and dispersion model to investigate the effects of the experimental treatments on these derived coefficients. The approach, developed specifically for an application in horticulture, is exemplified with data from a trial testing ornamental plants that are subjected to a range of treatments during production and home-life. The results of the analysis show how a number of control and noise factors affect the rate of post-production quality decline. Although the model is used to analyse quality data from a trial on ornamental plants, the approach developed is expected to be more generally applicable to a wide range of other complex production systems.
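A simplified sketch of the two-stage idea is given below, under assumed data and model forms: simulated 1-9 quality scores, a logit-type transform with a linear per-unit decline in stage one, and two OLS fits standing in for the joint mean and dispersion model in stage two. None of these choices are taken from the paper.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
weeks = np.arange(8.0)

# stage 0: made-up trial -- 2 control x 2 noise levels, 10 replicate units each,
# scored weekly on a 1-9 ordinal quality scale
units = []
for temp in (0, 1):                       # control factor
    for transport in (0, 1):              # noise factor
        for _ in range(10):
            rate = 0.6 + 0.4 * temp + 0.2 * transport \
                   + rng.normal(0, 0.05 + 0.10 * transport)
            score = np.clip(np.round(9 - rate * weeks
                                     + rng.normal(0, 0.3, weeks.size)), 1, 9)
            units.append((temp, transport, score))

# stage 1: per unit, approximate the quality decline on a transformed scale
# by a low-order polynomial and keep the derived slope coefficient
records = []
for temp, transport, score in units:
    z = np.log(score / (10.0 - score))          # logit-type transform of the score
    slope = np.polyfit(weeks, z, deg=1)[0]      # rate of quality decline
    records.append({"temp": temp, "transport": transport, "slope": slope})
df = pd.DataFrame(records)

# stage 2: joint mean and dispersion modelling of the derived coefficients,
# approximated here by two OLS fits (mean model, then log squared residuals)
mean_fit = smf.ols("slope ~ C(temp) * C(transport)", data=df).fit()
df["log_sq_resid"] = np.log(mean_fit.resid ** 2)
disp_fit = smf.ols("log_sq_resid ~ C(temp) + C(transport)", data=df).fit()
print(mean_fit.params, disp_fit.params, sep="\n\n")
```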
959.
Plotting of log-log survival functions against time for different categories or combinations of categories of covariates is perhaps the easiest and most commonly used graphical tool for checking the proportional hazards (PH) assumption. One problem in the utilization of the technique is that the covariates need to be categorical or made categorical through appropriate grouping of the continuous covariates. Subjectivity in the decision making on the basis of eye-judgment of the plots and frequent inconclusiveness arising in situations where the number of categories and/or covariates gets larger are among other limitations of this technique. This paper proposes a non-graphical (numerical) test of the PH assumption that makes use of the log-log survival function. The test enables checking proportionality for categorical as well as continuous covariates and overcomes the other limitations of the graphical method. Observed power and size of the test are compared to some other tests of its kind through simulation experiments. Simulations demonstrate that the proposed test is more powerful than some of the most sensitive tests in the literature in a wide range of survival situations. An example of the test is given using the widely used gastric cancer data.
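A rough numerical analogue of this idea, not the test proposed in the paper, is sketched below: Kaplan-Meier curves are computed for the two levels of a binary covariate, the difference of their log(-log S(t)) values is evaluated on a grid of event-time quantiles, and a drift of that difference with log time is read as evidence against proportional hazards. The simulated data and the drift summary are illustrative assumptions.

```python
import numpy as np

def kaplan_meier(time, event, grid):
    """Kaplan-Meier survival estimate evaluated at the time points in grid."""
    order = np.argsort(time)
    time, event = np.asarray(time)[order], np.asarray(event)[order]
    n = len(time)
    surv, step_t, step_s = 1.0, [0.0], [1.0]
    for i, (t, d) in enumerate(zip(time, event)):
        if d:                                 # observed event: n - i still at risk
            surv *= 1.0 - 1.0 / (n - i)
            step_t.append(t)
            step_s.append(surv)
    step_t, step_s = np.array(step_t), np.array(step_s)
    return step_s[np.searchsorted(step_t, grid, side="right") - 1]

# simulate data in which PH holds by construction (log hazard ratio 0.7)
rng = np.random.default_rng(0)
n = 200
group = rng.integers(0, 2, n)
t = rng.exponential(1.0 / np.exp(0.7 * group))
c = rng.exponential(2.0, n)                       # independent censoring
time, event = np.minimum(t, c), (t <= c).astype(int)

grid = np.quantile(time[event == 1], np.linspace(0.2, 0.8, 13))
s0 = kaplan_meier(time[group == 0], event[group == 0], grid)
s1 = kaplan_meier(time[group == 1], event[group == 1], grid)
diff = np.log(-np.log(s1)) - np.log(-np.log(s0))

# under PH the difference hovers around the log hazard ratio (0.7 here);
# systematic drift of diff with log(t) points to a PH violation
drift = np.polyfit(np.log(grid), diff, 1)[0]
print("mean difference %.3f, drift with log(t) %.3f" % (diff.mean(), drift))
```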
960.
In many sciences researchers often meet the problem of establishing whether the distribution of a categorical variable is more concentrated, or less heterogeneous, in population P1 than in population P2. An approximate nonparametric solution to this problem is discussed within the permutation context. Such a solution has similarities to that of testing for stochastic dominance, that is, of testing under order restrictions, for ordered categorical variables. The main properties of the proposed solution are examined, and a Monte Carlo simulation is used to evaluate its degree of approximation and its power behaviour. Two application examples are also discussed.
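A sketch of one such permutation solution is given below, using the Gini heterogeneity index 1 - sum_k p_k^2 as the test statistic; the abstract does not specify the statistic actually used, so this choice, and the simulated data, are assumptions.

```python
import numpy as np

def gini_heterogeneity(x, categories):
    """Gini heterogeneity index 1 - sum_k p_k^2 of a categorical sample."""
    p = np.array([np.mean(x == c) for c in categories])
    return 1.0 - np.sum(p ** 2)

def permutation_heterogeneity_test(x1, x2, n_perm=5000, rng=None):
    """Permutation p-value for H1: the distribution in population 1 is less
    heterogeneous (more concentrated) than in population 2, using the
    difference of Gini indices as the test statistic."""
    rng = np.random.default_rng(rng)
    pooled = np.concatenate([x1, x2])
    cats = np.unique(pooled)
    n1 = len(x1)
    obs = gini_heterogeneity(x2, cats) - gini_heterogeneity(x1, cats)
    count = 0
    for _ in range(n_perm):
        perm = rng.permutation(pooled)          # relabel under the null
        stat = gini_heterogeneity(perm[n1:], cats) \
               - gini_heterogeneity(perm[:n1], cats)
        count += stat >= obs
    return (count + 1) / (n_perm + 1)

# illustrative data: sample 1 is concentrated on one category, sample 2 is spread out
rng = np.random.default_rng(0)
x1 = rng.choice(["a", "b", "c", "d"], size=120, p=[0.70, 0.15, 0.10, 0.05])
x2 = rng.choice(["a", "b", "c", "d"], size=150, p=[0.30, 0.30, 0.20, 0.20])
print("p-value:", permutation_heterogeneity_test(x1, x2, rng=1))
```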