991.
In conditional logspline modelling, the logarithm of the conditional density function, log f(y|x), is modelled by using polynomial splines and their tensor products. The parameters of the model (coefficients of the spline functions) are estimated by maximizing the conditional log-likelihood function. The resulting estimate is a density function (positive and integrating to one) and is twice continuously differentiable. The estimate is used further to obtain estimates of regression and quantile functions in a natural way. An automatic procedure for selecting the number of knots and knot locations based on minimizing a variant of the AIC is developed. An example with real data is given. Finally, extensions and further applications of conditional logspline models are discussed.
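As a rough illustration of the idea (not the authors' procedure), the sketch below fits a log-density by maximum likelihood, using a tiny polynomial basis as a stand-in for the spline basis. The normalizing constant is computed numerically on a grid, so the estimate is positive and integrates to one by construction. The data, basis, and grid are all illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
y = rng.normal(0.0, 1.0, size=300)      # sample from an "unknown" density

grid = np.linspace(-6.0, 6.0, 1201)     # grid for the normalizing integral
dx = grid[1] - grid[0]

def basis(t):
    # small polynomial basis standing in for a spline / tensor-product basis
    t = np.asarray(t, dtype=float)
    return np.column_stack([t, t ** 2])

def log_density(theta, t):
    # log f(t) = B(t) @ theta - log( integral of exp(B(u) @ theta) du )
    log_norm = np.log(np.sum(np.exp(basis(grid) @ theta)) * dx)
    return basis(t) @ theta - log_norm

def neg_loglik(theta):
    return -np.sum(log_density(theta, y))

theta_hat = minimize(neg_loglik, x0=np.zeros(2), method="BFGS").x

dens = np.exp(log_density(theta_hat, grid))
total_mass = np.sum(dens) * dx          # = 1 by construction of log_norm
```

Because the same grid normalizes the estimate, `total_mass` is one up to floating-point error; with this quadratic basis the fit is effectively a Gaussian, so the coefficient on t² comes out negative.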
992.
The generalized likelihood plays an important role in parametric inference for prediction and in empirical Bayesian models. This paper emphasizes the utility of the generalized likelihood as a summarization procedure in general prediction models. Its properties when used in this setting are described, and its use as a data-analytic tool is illustrated in a series of numerical examples.
993.
We obtain approximate Bayes confidence intervals for a scalar parameter based on the directed likelihood. The posterior probabilities of these intervals agree with their unconditional coverage probabilities to fourth order, and with their conditional coverage probabilities to third order. The intervals are constructed for arbitrary smooth prior distributions. A key feature of the construction is that log-likelihood derivatives beyond second order are not required, unlike in the asymptotic expansions of Severini.
994.
The complex Watson distribution is an important simple distribution on the complex sphere which is used in statistical shape analysis. We describe the density, obtain the integrating constant and provide large sample approximations. Maximum likelihood estimation and hypothesis testing procedures for one and two samples are described. The particular connection with shape analysis is discussed and we consider an application examining shape differences between normal and schizophrenic brains. We make some observations about Bayesian shape inference and finally we describe a more general rotationally symmetric family of distributions.
995.
On the use of corrections for overdispersion
In studying fluctuations in the size of a black grouse (Tetrao tetrix) population, an autoregressive model using climatic conditions appears to follow the changes quite well. However, the deviance of the model is considerably larger than its number of degrees of freedom. A widely used statistical rule of thumb holds that overdispersion is present in such situations, but model selection based on a direct likelihood approach can produce the opposite conclusion. In two further examples, of binomial and of Poisson data, the models have deviances that are almost twice the degrees of freedom, and yet various overdispersion models fit no better than the standard model for independent data. This can arise because the rule of thumb considers only a point estimate of dispersion, without regard for any measure of its precision. A reasonable criterion for detecting overdispersion is that the deviance be at least twice the number of degrees of freedom, in the spirit of the familiar Akaike information criterion, but the actual presence of overdispersion should then be checked by an appropriate modelling procedure.
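The deviance-to-degrees-of-freedom check discussed above can be sketched for an intercept-only Poisson model; the simulated data and the threshold of 2 are taken as illustrative assumptions, not reproduced from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
y = rng.poisson(lam=5.0, size=200)   # equidispersed Poisson data
mu_hat = y.mean()                    # MLE of the mean, intercept-only model

# Poisson residual deviance: 2 * sum[ y*log(y/mu) - (y - mu) ], with 0*log0 = 0
safe_y = np.where(y > 0, y, 1.0)
term = np.where(y > 0, y * np.log(safe_y / mu_hat), 0.0)
deviance = 2.0 * np.sum(term - (y - mu_hat))
df = y.size - 1

ratio = deviance / df        # roughly 1 when there is no overdispersion
flagged = ratio >= 2.0       # the stricter criterion the abstract argues for
```

For genuinely Poisson data the ratio hovers near one, so the stricter "at least twice the degrees of freedom" criterion correctly declines to flag overdispersion here.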
996.
In designed experiments and in particular longitudinal studies, the aim may be to assess the effect of a quantitative variable such as time on treatment effects. Modelling treatment effects can be complex in the presence of other sources of variation. Three examples are presented to illustrate an approach to analysis in such cases. The first example is a longitudinal experiment on the growth of cows under a factorial treatment structure where serial correlation and variance heterogeneity complicate the analysis. The second example involves the calibration of optical density and the concentration of a protein DNase in the presence of sampling variation and variance heterogeneity. The final example is a multienvironment agricultural field experiment in which a yield–seeding rate relationship is required for several varieties of lupins. Spatial variation within environments, heterogeneity between environments and variation between varieties all need to be incorporated in the analysis. In this paper, the cubic smoothing spline is used in conjunction with fixed and random effects, random coefficients and variance modelling to provide simultaneous modelling of trends and covariance structure. The key result that allows coherent and flexible empirical model building in complex situations is the linear mixed model representation of the cubic smoothing spline. An extension is proposed in which trend is partitioned into smooth and non-smooth components. Estimation and inference, the analysis of the three examples and a discussion of extensions and unresolved issues are also presented.
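The paper's mixed-model machinery is beyond a short sketch, but its basic building block, a cubic smoothing spline fitted to a noisy trend, can be illustrated as follows. The data, noise level, and smoothing parameter are invented for the example.

```python
import numpy as np
from scipy.interpolate import UnivariateSpline

rng = np.random.default_rng(2)
x = np.linspace(0.0, 10.0, 80)
truth = np.sin(x) + 0.1 * x                   # smooth underlying trend
y = truth + rng.normal(0.0, 0.2, x.size)      # noisy observations

# k=3 gives a cubic spline; s caps the residual sum of squares,
# set here to roughly n * sigma^2 for noise sd sigma = 0.2
spline = UnivariateSpline(x, y, k=3, s=x.size * 0.2 ** 2)
fitted = spline(x)
mse = float(np.mean((fitted - truth) ** 2))   # error against the true trend
```

In the linear-mixed-model view emphasized by the paper, the smoothing parameter plays the role of a variance-component ratio and can be estimated by REML rather than chosen by hand as it is here.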
997.
We consider the use of Monte Carlo methods to obtain maximum likelihood estimates for random effects models and distinguish between the pointwise and functional approaches. We explore the relationship between the two approaches and compare them with the EM algorithm. The functional approach is more ambitious, but the approximation is local in nature, which we demonstrate graphically using two simple examples. A remedy is to obtain successively better approximations of the relative likelihood function near the true maximum likelihood estimate. To save computing time, we use only one Newton iteration to approximate the maximiser of each Monte Carlo likelihood and show that this is equivalent to the pointwise approach. The procedure is applied to fit a latent process model to a set of polio incidence data. The paper ends with a comparison between the marginal likelihood and the recently proposed hierarchical likelihood, which avoids integration altogether.
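A pointwise flavour of the Monte Carlo approach can be sketched on a toy random-intercept model: the marginal likelihood is approximated by averaging the conditional likelihood over simulated random effects, with common random numbers reused at every parameter value so that the approximated likelihood is smooth in the parameter. The model, sample sizes, and optimizer here are all illustrative assumptions, not the paper's setup.

```python
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(3)
n_groups, n_per, tau_true = 40, 5, 1.0
b = rng.normal(0.0, tau_true, n_groups)                   # random intercepts
y = b[:, None] + rng.normal(0.0, 1.0, (n_groups, n_per))  # unit error variance

z = rng.normal(0.0, 1.0, 2000)   # common random numbers, reused for every tau

def neg_marginal_loglik(log_tau):
    tau = np.exp(log_tau)
    ll = 0.0
    for yi in y:
        # Monte Carlo average of the conditional likelihood over b ~ N(0, tau^2)
        resid = yi[None, :] - tau * z[:, None]
        cond = np.exp(-0.5 * np.sum(resid ** 2, axis=1))  # up to a constant
        ll += np.log(np.mean(cond))
    return -ll

opt = minimize_scalar(neg_marginal_loglik, bounds=(-2.0, 2.0), method="bounded")
tau_hat = float(np.exp(opt.x))   # Monte Carlo MLE of the random-effect sd
```

Reusing `z` across calls is what makes the objective a fixed, smooth function that a standard optimizer can handle; redrawing it each call would add Monte Carlo jitter to every evaluation.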
998.
The kth (1 < k ≤ 2) power expectile regression (ER) balances the robustness of ordinary quantile regression against the efficiency of ER. Motivated by longitudinal ACTG 193A data with nonignorable dropouts, we propose a two-stage estimation procedure and statistical inference methods based on the kth power ER and empirical likelihood to accommodate both within-subject correlations and nonignorable dropouts. First, we construct bias-corrected generalized estimating equations by combining the kth power ER with inverse probability weighting; the generalized method of moments is then used to estimate the parameters of the nonignorable dropout propensity from sufficient instrumental estimating equations. Second, to incorporate the within-subject correlations under an informative working correlation structure, we borrow the idea of the quadratic inference function to obtain improved empirical likelihood procedures. The asymptotic properties of the corresponding estimators and their confidence regions are derived. The finite-sample performance of the proposed estimators is studied through simulation, and an application to the ACTG 193A data is also presented.
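The asymmetric kth-power loss underlying expectile regression can be written down in a few lines for a cross-sectional toy problem, ignoring the paper's dropout and within-subject correlation machinery entirely. The data, the level τ = 0.7, and the choice k = 2 are invented for the illustration.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(4)
x = rng.uniform(0.0, 1.0, 300)
y = 1.0 + 2.0 * x + rng.normal(0.0, 0.5, x.size)
X = np.column_stack([np.ones_like(x), x])   # intercept + slope design

def kth_power_expectile_loss(beta, tau=0.7, k=2.0):
    # weight tau on positive residuals, 1 - tau on negative ones;
    # k = 2 recovers ordinary (asymmetric least squares) expectile regression,
    # while k -> 1 approaches the quantile check loss
    r = y - X @ beta
    w = np.where(r >= 0.0, tau, 1.0 - tau)
    return np.sum(w * np.abs(r) ** k)

beta_hat = minimize(kth_power_expectile_loss, x0=np.zeros(2), method="BFGS").x
```

With homoscedastic errors the fitted slope stays near the true value 2, while the intercept shifts upward relative to the mean-regression intercept because τ = 0.7 targets an upper expectile.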
999.
1000.
Real-world time series are typically noisy and highly volatile. For such nonlinear series, forecasts from traditional statistical and econometric models are often unsatisfactory. This article proposes an improved method based on empirical mode decomposition (EMD) and artificial neural networks. The core idea is "decompose first, then recombine": EMD first decomposes the nonlinear series into a set of independent sub-series that are easier to analyse; a feedforward neural network (FNN) then models and forecasts each sub-series; finally, an adaptive linear neural network (ALNN) integrates the component forecasts into the final result. An application to a real house-price time series demonstrates the advantages of the method.
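A genuine EMD sifting loop is too long to reproduce here, but the "decompose first, then recombine" shape can be sketched with crude stand-ins: a moving-average trend split instead of EMD, and least-squares AR(p) predictors instead of the FNN/ALNN pair. The series, window width, and AR order are all invented for the example.

```python
import numpy as np

rng = np.random.default_rng(5)
t = np.arange(300)
# synthetic "house price"-style series: trend + cycle + noise
series = 0.05 * t + 2.0 * np.sin(2 * np.pi * t / 25) + rng.normal(0, 0.3, t.size)

def moving_average(x, w):
    # centered moving average used as a crude smooth-component extractor
    kernel = np.ones(w) / w
    pad = np.pad(x, (w // 2, w - 1 - w // 2), mode="edge")
    return np.convolve(pad, kernel, mode="valid")

# "decompose": smooth component plus oscillatory remainder (stand-in for EMD)
trend = moving_average(series, 25)
remainder = series - trend

def ar_forecast(x, p=10):
    # least-squares AR(p) one-step forecast (stand-in for the per-component FNN)
    X = np.column_stack([x[i : len(x) - p + i] for i in range(p)])
    coef, *_ = np.linalg.lstsq(X, x[p:], rcond=None)
    return float(x[-p:] @ coef)

# forecast each component, then recombine (stand-in for the ALNN integration)
forecast = ar_forecast(trend) + ar_forecast(remainder)
```

The point of the decomposition is that each component is simpler than the raw series, so even a linear predictor per component can track it; the article's version replaces each stand-in with its stronger counterpart (EMD, FNN, ALNN).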