31.
We show that, in the context of double-bootstrap confidence intervals, linear interpolation at the second level of the double bootstrap can reduce the simulation error component of coverage error by an order of magnitude. Intervals that are indistinguishable, in terms of coverage error, from theoretical (infinite-simulation) double-bootstrap confidence intervals may be obtained at substantially less computational expense than with the standard Monte Carlo approximation method. The intervals retain the simplicity of uniform bootstrap sampling and require no special analysis or computational techniques. Interpolation at the first level of the double bootstrap is shown to have a relatively minor effect on the simulation error.
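A minimal sketch of the interpolation idea in Python with NumPy (the function names and the percentile-calibration variant shown are illustrative assumptions, not the authors' exact algorithm): at the second level, the position of the original statistic in the inner bootstrap distribution is read off a linearly interpolated empirical CDF rather than a step-function count.

```python
import numpy as np

def interp_ecdf(values, x):
    """Empirical CDF of `values` at `x`, linearly interpolated between
    adjacent order statistics instead of evaluated as a step function."""
    v = np.sort(values)
    probs = np.arange(1, len(v) + 1) / len(v)
    return np.interp(x, v, probs, left=0.0, right=1.0)

def double_bootstrap_u(x, stat, B1=500, B2=50, seed=None):
    """For each first-level resample, estimate where the original statistic
    falls within its second-level bootstrap distribution; the resulting
    u-values are used to calibrate the nominal level of the interval."""
    rng = np.random.default_rng(seed)
    n, t0 = len(x), stat(x)
    u = np.empty(B1)
    for b in range(B1):
        xb = rng.choice(x, size=n, replace=True)                      # first level
        t2 = np.array([stat(rng.choice(xb, size=n, replace=True))
                       for _ in range(B2)])                           # second level
        u[b] = interp_ecdf(t2, t0)  # interpolated, not a step-function count
    return u
```

The calibrated interval then uses quantiles of the u-values in place of the nominal level; the interpolation is what allows a small second-level size B2 to behave like a much larger one.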
32.
Two ways of modelling overdispersion in non-normal data
For non-normal data assumed to follow a distribution, such as the Poisson, whose dispersion parameter is fixed a priori, there are two ways of modelling overdispersion: a quasi-likelihood approach or a random-effect model. The two approaches yield different variance functions for the response, which may be distinguishable if adequate data are available. The epilepsy data of Thall and Vail and the fabric data of Bissell are used to exemplify the ideas.
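A hedged illustration of the two variance functions (assuming the statsmodels package; the simulated data and variable names are hypothetical): the quasi-likelihood fit gives Var(y) = φμ, while the gamma random-effect (negative binomial) fit gives Var(y) = μ + αμ².

```python
import numpy as np
import statsmodels.api as sm

# Hypothetical overdispersed count data: Poisson counts with a gamma frailty.
rng = np.random.default_rng(1)
X = sm.add_constant(rng.normal(size=(200, 1)))
y = rng.poisson(np.exp(0.5 + 0.8 * X[:, 1]) * rng.gamma(2.0, 0.5, 200))

# Quasi-likelihood: Poisson mean structure with a Pearson-based dispersion phi,
# so Var(y) = phi * mu.
quasi = sm.GLM(y, X, family=sm.families.Poisson()).fit(scale='X2')

# Random-effect (gamma) model: negative binomial, so Var(y) = mu + alpha * mu^2.
nb = sm.NegativeBinomial(y, X).fit(disp=False)

print(quasi.scale)   # estimated phi
print(nb.params)     # regression coefficients and alpha
```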
33.
A case–control study of lung cancer mortality in U.S. railroad workers in jobs with and without diesel exhaust exposure is reanalyzed using a new threshold regression methodology. The study included 1256 workers who died of lung cancer and 2385 controls who died primarily of circulatory system diseases. Diesel exhaust exposure was assessed using railroad job history from the US Railroad Retirement Board and an industrial hygiene survey. Smoking habits were available from next-of-kin and potential asbestos exposure was assessed by job history review. The new analysis reassesses lung cancer mortality and examines circulatory system disease mortality. Jobs with regular exposure to diesel exhaust had a survival pattern characterized by an initial delay in mortality, followed by a rapid deterioration of health prior to death. The pattern is seen in subjects dying of lung cancer, circulatory system diseases, and other causes. The unique pattern is illustrated using a new type of Kaplan–Meier survival plot in which the time scale represents a measure of disease progression rather than calendar time. The disease progression scale accounts for a healthy-worker effect when describing the effects of cumulative exposures on mortality.
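A small sketch of the plotting idea (assuming the lifelines package is available; the progression scores, event indicators, and exposure flags below are hypothetical placeholders for quantities that, in the paper, come from the fitted threshold-regression model):

```python
import numpy as np
from lifelines import KaplanMeierFitter

def km_by_exposure(progression, died, exposed):
    """Kaplan-Meier curves drawn against a disease-progression scale
    rather than calendar time, split by exposure group."""
    kmf, ax = KaplanMeierFitter(), None
    for flag, label in [(1, "diesel-exposed"), (0, "unexposed")]:
        mask = exposed == flag
        kmf.fit(progression[mask], event_observed=died[mask], label=label)
        ax = kmf.plot_survival_function(ax=ax)
    ax.set_xlabel("disease progression scale")
    return ax
```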
34.
Data envelopment analysis (DEA) is a deterministic econometric model for calculating efficiency by using data from an observed set of decision-making units (DMUs). We propose a method for calculating the distribution of efficiency scores. Our framework relies on estimating data from an unobserved set of DMUs. The model provides posterior predictive data for the unobserved DMUs to augment the frontier in the DEA that provides a posterior predictive distribution for the efficiency scores. We explore the method on a multiple-input and multiple-output DEA model. The data for the example are from a comprehensive examination of how nursing homes complete a standardized mandatory assessment of residents.
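A minimal input-oriented CCR envelopment program in Python with SciPy, shown only to fix ideas about how a single DMU's efficiency score is computed; the Bayesian augmentation described above would amount to adding posterior predictive rows to X and Y before re-solving (a sketch, not the authors' implementation).

```python
import numpy as np
from scipy.optimize import linprog

def dea_ccr_input(X, Y, o):
    """Input-oriented CCR efficiency of DMU `o`.
    X: (n_dmu, n_inputs) input matrix, Y: (n_dmu, n_outputs) output matrix."""
    n, m = X.shape
    s = Y.shape[1]
    # Decision variables: [theta, lambda_1, ..., lambda_n]; minimize theta.
    c = np.r_[1.0, np.zeros(n)]
    # Inputs:  sum_j lambda_j x_j <= theta * x_o   ->  X^T lambda - theta x_o <= 0
    A_in = np.c_[-X[o].reshape(m, 1), X.T]
    # Outputs: sum_j lambda_j y_j >= y_o           ->  -Y^T lambda <= -y_o
    A_out = np.c_[np.zeros((s, 1)), -Y.T]
    res = linprog(c,
                  A_ub=np.vstack([A_in, A_out]),
                  b_ub=np.r_[np.zeros(m), -Y[o]],
                  bounds=[(0, None)] * (n + 1))
    return res.fun   # efficiency score theta* in (0, 1]
```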
35.
In this paper, we consider some noninformative priors for the common mean in a bivariate normal population. We develop the first-order and second-order matching priors and reference priors. We find that the second-order matching prior is also an HPD matching prior, and matches the alternative coverage probabilities up to the second order. It turns out that the derived reference priors do not satisfy the second-order matching criterion. Our simulation study indicates that the second-order matching prior performs better than the reference priors in terms of matching the target coverage probabilities in a frequentist sense. We also illustrate our results using real data.
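For reference, the generic probability-matching requirement is written below in its standard form (a textbook statement, not the paper's specific derivation): a prior π is matching when the frequentist coverage of the posterior (1 − α)-quantile of the interest parameter θ agrees with 1 − α to the stated order.

```latex
% theta^{1-alpha}(pi, X^{(n)}) denotes the (1-alpha) posterior quantile of theta
P_\theta\!\left\{\, \theta \le \theta^{1-\alpha}\!\left(\pi, X^{(n)}\right) \right\}
   = 1-\alpha + o\!\left(n^{-1/2}\right) \quad \text{(first-order matching)},

P_\theta\!\left\{\, \theta \le \theta^{1-\alpha}\!\left(\pi, X^{(n)}\right) \right\}
   = 1-\alpha + o\!\left(n^{-1}\right) \quad \text{(second-order matching)}.
```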
36.
Most of the long memory estimators for stationary fractionally integrated time series models are known to experience non-negligible bias in small and finite samples. Simple moment estimators are also vulnerable to such bias, but can easily be corrected. In this article, the authors propose bias reduction methods for a lag-one sample autocorrelation-based moment estimator. In order to reduce the bias of the moment estimator, the authors explicitly obtain the exact bias of the lag-one sample autocorrelation up to order n^{-1}. An example where the exact first-order bias can be noticeably more accurate than its asymptotic counterpart, even for large samples, is presented. The authors show via a simulation study that the proposed methods are promising and effective in reducing the bias of the moment estimator with minimal variance inflation. The proposed methods are applied to the Northern Hemisphere data. The Canadian Journal of Statistics 37: 476–493; 2009 © 2009 Statistical Society of Canada
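A hedged sketch of the uncorrected moment estimator in Python (for an ARFIMA(0, d, 0) series the lag-one autocorrelation is ρ(1) = d/(1 − d), so inverting the sample value gives d; the bias correction itself requires the exact order-n^{-1} bias expression derived in the article and is not reproduced here).

```python
import numpy as np

def d_moment(x):
    """Moment estimator of the long-memory parameter d from the lag-one
    sample autocorrelation, using rho(1) = d / (1 - d) for ARFIMA(0, d, 0)."""
    x = np.asarray(x, dtype=float) - np.mean(x)
    r1 = np.dot(x[:-1], x[1:]) / np.dot(x, x)   # lag-one sample autocorrelation
    return r1 / (1.0 + r1)
```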
37.
In this paper, we consider noninformative priors for the ratio of variances in two normal populations. We develop first-order and second-order matching priors. We find that the second-order matching prior matches the alternative coverage probabilities up to the second order and is also an HPD matching prior. It turns out that among the reference priors, only the one-at-a-time reference prior satisfies the second-order matching criterion. Our simulation study indicates that the one-at-a-time reference prior performs better than the other reference priors in terms of matching the target coverage probabilities in a frequentist sense. This work is supported by Korea Research Foundation Grant (KRF-2004-002-C00041).
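A rough coverage check of the kind described, in Python (this sketch uses the standard noninformative prior π(σ₁², σ₂²) ∝ 1/(σ₁²σ₂²) purely for illustration; it is not the paper's one-at-a-time reference prior or matching prior):

```python
import numpy as np

rng = np.random.default_rng(0)

def coverage(n1=10, n2=15, sigma1=1.0, sigma2=2.0, reps=2000, draws=4000, alpha=0.05):
    """Monte Carlo frequentist coverage of the equal-tailed (1 - alpha) posterior
    interval for sigma1^2 / sigma2^2 under pi(s1^2, s2^2) ∝ 1/(s1^2 s2^2)."""
    true_ratio = sigma1**2 / sigma2**2
    hits = 0
    for _ in range(reps):
        x1 = rng.normal(0.0, sigma1, n1)
        x2 = rng.normal(0.0, sigma2, n2)
        ss1 = ((x1 - x1.mean())**2).sum()
        ss2 = ((x2 - x2.mean())**2).sum()
        # Posterior draws: sigma_i^2 | data  ~  ss_i / chi2_{n_i - 1}
        post = (ss1 / rng.chisquare(n1 - 1, draws)) / (ss2 / rng.chisquare(n2 - 1, draws))
        lower, upper = np.quantile(post, [alpha / 2, 1 - alpha / 2])
        hits += (lower <= true_ratio <= upper)
    return hits / reps
```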
38.
We show that smoothing splines, intrinsic autoregressions (IAR) and state-space models can be formulated as partially specified random-effect models with singular precision (SP). Various fitting methods have been suggested for the aforementioned models, and this paper investigates the relationships among them once the models have been placed under a single framework. Some methods have been previously shown to give the best linear unbiased predictors (BLUPs) under some random-effect models, and here we show that they are in fact uniformly BLUPs (UBLUPs) under a class of models that are generated by the SP of random effects. We offer some new interpretations of the UBLUPs under models of SP and define BLUE and BLUP in these partially specified models without having to specify the covariance. We also show how full likelihood inferences for random-effect models can be made for these models, so that the maximum likelihood (ML) and restricted maximum likelihood (REML) estimators can be used for the smoothing parameters in splines, etc.
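A toy version of the singular-precision formulation (an illustrative sketch, not the paper's general framework): the smoother solves a penalized least-squares problem whose penalty matrix K = D₂ᵀD₂ is the singular precision of the random effect, the common core shared by smoothing splines, intrinsic autoregressions, and random-walk state-space models; λ plays the role of the variance ratio that ML or REML would estimate.

```python
import numpy as np

def smooth(y, lam):
    """Fit u minimizing ||y - u||^2 + lam * ||D2 u||^2, where D2 is the
    second-difference operator and K = D2' D2 is a singular precision matrix."""
    n = len(y)
    D2 = np.diff(np.eye(n), 2, axis=0)   # (n-2, n) second-difference operator
    K = D2.T @ D2                        # singular precision (rank n-2)
    return np.linalg.solve(np.eye(n) + lam * K, np.asarray(y, dtype=float))
```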
39.
The Box–Cox transformation, together with our newly proposed transformation, was applied to three different real-world empirical problems to alleviate their noise and volatility effects. Consequently, a new domain was constructed. Subsequently, the universe of discourse for the transformed data was established, and an approach for calculating the effective length of the intervals was proposed. Following these steps, initial forecasts were produced by applying frequently used fuzzy time series (FTS) methods to the transformed data. Final forecasts were recovered from the initial forecast values by the appropriate inverse transformation. Comparisons of the results demonstrate that the proposed method produces more accurate forecasts than existing FTS methods applied to the original data.
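A minimal first step of such a procedure in Python with SciPy (a generic sketch: the equal-length partition below stands in for the paper's effective-length rule, which is not reproduced, and the proposed transformation itself is not shown):

```python
import numpy as np
from scipy.stats import boxcox

def fuzzy_intervals(series, n_intervals=7, pad=0.1):
    """Box-Cox transform a positive series, then partition its universe of
    discourse into equal-length intervals as the first stage of an FTS forecast."""
    z, lam = boxcox(np.asarray(series, dtype=float))   # requires positive data
    lo, hi = z.min() - pad, z.max() + pad              # universe of discourse
    edges = np.linspace(lo, hi, n_intervals + 1)
    labels = np.digitize(z, edges[1:-1])               # fuzzy-set index per point
    return z, lam, edges, labels
```

Forecasts produced on the transformed scale would then be mapped back with the inverse Box-Cox transform using the stored λ.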
40.
In this paper, within the framework of a Bayesian model, we consider the problem of sequentially estimating the intensity parameter of a homogeneous Poisson process under a linear exponential (LINEX) loss function and a fixed cost per unit time. An asymptotically pointwise optimal (APO) rule is proposed. It is shown to be asymptotically optimal for arbitrary priors and asymptotically non-deficient for conjugate priors, in a sense similar to that of Bickel and Yahav [Asymptotically pointwise optimal procedures in sequential analysis, in Proceedings of the Fifth Berkeley Symposium on Mathematical Statistics and Probability, Vol. 1, University of California Press, Berkeley, CA, 1967, pp. 401–413; Asymptotically optimal Bayes and minimax procedures in sequential estimation, Ann. Math. Statist. 39 (1968), pp. 442–456] and Woodroofe [A.P.O. rules are asymptotically non-deficient for estimation with squared error loss, Z. Wahrsch. verw. Gebiete 58 (1981), pp. 331–341], respectively. The proposed APO rule is illustrated using a real data set.
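For concreteness, the Bayes estimate of the intensity under LINEX loss with a conjugate gamma prior has a closed form, sketched below in Python (the sequential stopping rule itself, which trades the LINEX posterior risk against the sampling cost per unit time, is not shown; parameter names are hypothetical):

```python
import numpy as np

def linex_bayes_poisson(total_count, exposure, a0=1.0, b0=1.0, c=0.5):
    """Bayes estimate of a Poisson intensity theta under LINEX loss
    L(theta, d) = exp(c*(d - theta)) - c*(d - theta) - 1, with a conjugate
    Gamma(a0, b0) prior (shape, rate). The Bayes rule is
    d = -(1/c) * log E[exp(-c*theta) | data], which for a Gamma(a, b)
    posterior equals (a / c) * log(1 + c / b)."""
    a = a0 + total_count   # posterior shape: prior shape plus observed events
    b = b0 + exposure      # posterior rate: prior rate plus observation time
    return (a / c) * np.log(1.0 + c / b)
```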