71.
In many applications, decisions are made on the basis of a function of the parameters, g(θ). When the value of g(θ) is calculated using estimated values for the parameters, it is important to have a measure of the uncertainty associated with that value of g(θ). Likelihood ratio approaches to finding likelihood intervals for functions of parameters have been shown to be more reliable, in terms of coverage probability, than the linearization approach. Two generalizations of the profiling algorithm have been proposed in the literature to enable construction of likelihood intervals for a function of parameters (Chen and Jennrich, 1996; Bates and Watts, 1988). In this paper we show the equivalence of these two methods. We also provide an analysis of cases in which neither profiling algorithm is appropriate; for one of these cases an alternative approach is suggested. Whereas generalized profiling is based on maximizing the likelihood function given a constraint on the value of g(θ), the alternative algorithm is based on optimizing g(θ) given a constraint on the value of the likelihood function.
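The generalized-profiling idea above can be illustrated with a minimal numerical sketch. The model, data, grids, and function g(θ) = μ/σ below are illustrative assumptions, not taken from the paper: for each candidate value g, the likelihood is maximized subject to the constraint μ = g·σ, and the 95% likelihood interval collects the g values whose profile deviance stays below the χ²₁ cutoff.

```python
import numpy as np

# Illustrative data: normal sample with unknown mean and sd (assumed model)
rng = np.random.default_rng(0)
x = rng.normal(2.0, 1.0, size=200)

def loglik(mu, sigma):
    # Normal log-likelihood, up to an additive constant
    return -len(x) * np.log(sigma) - 0.5 * np.sum((x - mu) ** 2) / sigma ** 2

mu_hat, sig_hat = x.mean(), x.std()   # unconstrained MLEs
lmax = loglik(mu_hat, sig_hat)

def profile(g, sigmas=np.linspace(0.5, 2.0, 400)):
    # Profile log-likelihood of g = mu/sigma: maximize over sigma
    # with mu constrained to g * sigma (a crude grid search)
    return max(loglik(g * s, s) for s in sigmas)

# 95% likelihood interval: g values whose profile deviance
# stays below the chi-square(1) 0.95 quantile, 3.84
grid = np.linspace(1.0, 3.5, 250)
inside = [g for g in grid if 2 * (lmax - profile(g)) <= 3.84]
lo_g, hi_g = min(inside), max(inside)
print(round(lo_g, 2), round(hi_g, 2))
```

The alternative algorithm described in the abstract would instead optimize g(θ) directly subject to the deviance constraint; on a grid like this the two searches trace out the same interval endpoints.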
72.
A measure of multicollinearity is defined that is useful in evaluating maintained hypotheses and in aiding estimator selection, as it suggests when a non-traditional estimator proposed by Bock (1975) is minimax and dominates ordinary least squares. An example is used to illustrate the presented methodology.
73.
The smooth goodness-of-fit tests are generalized to singly censored data and applied to the problem of testing Weibull (or extreme-value) fit. Smooth tests, Pearson-type tests, and the spacings tests proposed by Mann, Scheuer, and Fertig (1973) are compared on the basis of local asymptotic relative efficiency with respect to the asymptotically best test against generalized gamma alternatives. The smooth test of order one is found to be most efficient for the generalized gamma alternatives.
74.
Two nonparametric estimators of the survival distribution are discussed. The estimators were proposed by Kaplan and Meier (1958) and Breslow (1972) and are applicable when dealing with censored data. It is known that they are asymptotically unbiased and uniformly strongly consistent, and that, when properly normalized, they converge weakly to the same Gaussian process. In this paper, the properties of the estimators are carefully inspected in small or moderate samples. The Breslow estimator, a shrinkage version of the Kaplan-Meier, nearly always has the smaller mean square error (MSE) whenever the true survival probability is at least 0.20, but has considerably larger MSE than the Kaplan-Meier estimator when the survival probability is near zero.
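The two estimators compared above are straightforward to compute side by side. This is a minimal sketch on invented censored data (not the paper's data): the Kaplan-Meier estimator is the product-limit over event times, while the Breslow estimator exponentiates the negative Nelson-Aalen cumulative hazard, which is what makes it a shrinkage version of the Kaplan-Meier (it is never smaller, and never reaches zero while anyone remains at risk).

```python
import numpy as np

# Illustrative data: observed times and event indicators (1 = event, 0 = censored)
t = np.array([1, 2, 2, 3, 4, 5, 6, 7])
d = np.array([1, 1, 0, 1, 1, 0, 1, 1])

def survival_estimates(t, d):
    km, cum_haz = 1.0, 0.0
    out = []
    for u in np.unique(t[d == 1]):          # distinct event times
        at_risk = np.sum(t >= u)
        deaths = np.sum((t == u) & (d == 1))
        km *= 1 - deaths / at_risk          # Kaplan-Meier product-limit step
        cum_haz += deaths / at_risk         # Nelson-Aalen cumulative hazard
        out.append((u, km, np.exp(-cum_haz)))  # Breslow = exp(-cum. hazard)
    return out

for u, s_km, s_br in survival_estimates(t, d):
    print(u, round(s_km, 3), round(s_br, 3))
```

Since exp(-h) ≥ 1 - h, the Breslow column is at least as large as the Kaplan-Meier column at every event time, which is visible even on this tiny example.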
75.
Certain recurrence relations for the moments of different orders of the largest order statistic from a gamma distribution with shape parameter p are obtained. Using these, it is shown that to obtain the moment of any order of each order statistic of a sample of size n from the gamma distribution, one has to evaluate at most n-2 single integrals.
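The recurrence relations themselves are in the paper; as a sanity-check sketch under assumed parameters (integer shape p, so the gamma CDF has a closed form), the first moment of the largest order statistic, E[X₍ₙ₎] = ∫ x · n F(x)ⁿ⁻¹ f(x) dx, can be evaluated as a single integral and checked by simulation.

```python
import math
import numpy as np

p, n = 3, 5  # assumed gamma shape (integer -> Erlang) and sample size

def pdf(x):  # Erlang(p, rate = 1) density
    return x ** (p - 1) * math.exp(-x) / math.factorial(p - 1)

def cdf(x):  # closed-form CDF for integer shape
    return 1 - math.exp(-x) * sum(x ** k / math.factorial(k) for k in range(p))

# E[X_(n)] = integral of x * n F(x)^{n-1} f(x), by the trapezoid rule
xs = np.linspace(0.0, 60.0, 100001)
fx = np.array([x * n * cdf(x) ** (n - 1) * pdf(x) for x in xs])
mean_max = np.sum((fx[:-1] + fx[1:]) / 2) * (xs[1] - xs[0])

# Monte Carlo check on the same quantity
rng = np.random.default_rng(1)
mc = rng.gamma(p, size=(200000, n)).max(axis=1).mean()
print(round(mean_max, 3), round(mc, 3))
```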
76.
Various mathematical and statistical models for estimation of automobile insurance pricing are reviewed. The methods are compared on their predictive ability based on two sets of automobile insurance data for two different states collected over two different periods. The issue of model complexity versus data availability is resolved through a comparison of the accuracy of prediction. The models reviewed range from the use of simple cell means to various multiplicative-additive schemes to the empirical-Bayes approach. The empirical-Bayes approach, with prediction based on both model-based and individual cell estimates, seems to yield the best forecast.
77.
United States statistical agencies use data from administrative record systems to develop program statistics, to establish statistical databases, and to enhance and evaluate census and survey data. Such uses of administrative records are likely to increase as efforts to control costs and respondent burden of statistical programs continue. This review article proposes six goals for enhanced statistical uses of administrative records in the next 10 years and describes elements of an activist strategy to achieve them. The discussants, representing three agencies that make important statistical uses of administrative records, give their reactions to the proposed goals and strategy.
78.
Many of the available methods for estimating small-area parameters are model-based approaches in which auxiliary variables are used to predict the variable of interest. For models that are nonlinear, prediction is not straightforward. MacGibbon and Tomberlin, and Farrell, MacGibbon, and Tomberlin, have proposed approaches that require microdata for all individuals in a small area. In this article, we develop a method, based on a second-order Taylor-series expansion to obtain model-based predictions, that requires only local-area summary statistics for both continuous and categorical auxiliary variables. The methodology is evaluated using data based on a U.S. Census.
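The summary-statistics idea above can be sketched for a logistic model. Everything below (coefficients, covariate distributions) is an invented illustration, not the article's method or data: the microdata prediction averages individual fitted probabilities, while the second-order Taylor approximation needs only the area mean and covariance of the linear predictor, expanding expit(η) around its area mean η̄.

```python
import numpy as np

rng = np.random.default_rng(2)
beta = np.array([-1.0, 0.8, 0.5])  # assumed fitted coefficients

# Hypothetical individual-level auxiliary data for one small area
X = np.column_stack([np.ones(500),
                     rng.normal(0, 1, 500),
                     rng.normal(1, 0.5, 500)])

def expit(t):
    return 1 / (1 + np.exp(-t))

# Microdata prediction: average of the individual fitted probabilities
micro = expit(X @ beta).mean()

# Summary-statistic prediction: uses only the area mean and variance of x'beta
eta_bar = X.mean(axis=0) @ beta
var_eta = beta @ np.cov(X, rowvar=False) @ beta
p = expit(eta_bar)
# second-order term: (1/2) * expit''(eta_bar) * Var(eta),
# where expit'' = p (1 - p)(1 - 2p)
taylor2 = p + 0.5 * p * (1 - p) * (1 - 2 * p) * var_eta

print(round(micro, 4), round(taylor2, 4))
```

The correction term pulls the naive plug-in estimate expit(η̄) toward the microdata average, which is the gain the abstract attributes to the second-order expansion.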
79.
This article investigates the comprehensive effects of unemployment insurance (UI) policies on the amount of time and unemployment that individuals report between jobs. The econometric model jointly determines the effects of UI on the lengths of nonemployment spells, the classification of these spells as unemployment, and the likelihood of collecting program benefits. The model carefully attempts to isolate variation in UI benefits attributable to differences in generosity across programs to avoid biases in estimating policy effects induced by other contaminating sources of benefit variation. Using data on men from the National Longitudinal Survey of Youth, the empirical results find (a) UI recipients typically experience longer spells between jobs, at least up to the exhaustion of UI benefits, and report substantially larger fractions of these spells as unemployment; (b) weekly benefit amounts exert no significant influence on the likelihood of UI recipiency, on the length of spells between jobs, or on the fraction of these spells classified as unemployment; and (c) increases in weeks of UI eligibility raise the likelihood of UI collection and lengthen the number of weeks of unemployment between jobs by inducing long spells to become longer and not by altering short-duration behavior.
80.
I consider the design of multistage sampling schemes for epidemiologic studies involving latent variable models, with surrogate measurements of the latent variables on a subset of subjects. Such models arise in various situations: when detailed exposure measurements are combined with variables that can be used to assign exposures to unmeasured subjects; when biomarkers are obtained to assess an unobserved pathophysiologic process; or when additional information is to be obtained on confounding or modifying variables. In such situations, it may be possible to stratify the subsample on data available for all subjects in the main study, such as outcomes, exposure predictors, or geographic locations. Three circumstances where analytic calculations of the optimal design are possible are considered: (i) when all variables are binary; (ii) when all are normally distributed; and (iii) when the latent variable and its measurement are normally distributed, but the outcome is binary. In each of these cases, it is often possible to considerably improve the cost efficiency of the design by appropriate selection of the sampling fractions. More complex situations arise when the data are spatially distributed: the spatial correlation can be exploited to improve exposure assignment for unmeasured locations using available measurements on neighboring locations; some approaches for informative selection of the measurement sample using location and/or exposure predictor data are considered.