941.
We consider an adaptive importance sampling approach to estimating the marginal likelihood, a quantity that is fundamental in Bayesian model comparison and Bayesian model averaging. This approach is motivated by the difficulty of obtaining an accurate estimate through existing algorithms that use Markov chain Monte Carlo (MCMC) draws, which are typically costly to obtain and highly correlated in high-dimensional settings. In contrast, we use the cross-entropy (CE) method, a versatile adaptive Monte Carlo algorithm originally developed for rare-event simulation. The main advantage of the importance sampling approach is that random samples can be obtained from some convenient density at little additional cost. Because we generate independent draws rather than correlated MCMC draws, far less additional simulation effort is needed should one wish to reduce the numerical standard error of the estimator. Moreover, the importance density derived via the CE method is grounded in information theory and is therefore optimal in a well-defined sense. We demonstrate the utility of the proposed approach with two empirical applications involving women's labor market participation and U.S. macroeconomic time series. In both applications, the proposed CE method compares favorably to existing estimators.
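As a toy illustration of the idea (not the authors' algorithm or applications), the sketch below estimates the marginal likelihood of a conjugate normal model by importance sampling, with a Gaussian importance density fitted by CE-style iterative moment matching. All model settings and variable names are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy model: y_i ~ N(theta, 1) with prior theta ~ N(0, 1).
y = rng.normal(0.5, 1.0, size=20)

def log_joint(theta):
    """log p(y | theta) + log p(theta), vectorised over a theta array."""
    ll = -0.5 * ((y[None, :] - theta[:, None]) ** 2).sum(axis=1) \
         - 0.5 * y.size * np.log(2 * np.pi)
    lp = -0.5 * theta ** 2 - 0.5 * np.log(2 * np.pi)
    return ll + lp

def log_normal_pdf(theta, m, s):
    return -0.5 * ((theta - m) / s) ** 2 - np.log(s * np.sqrt(2 * np.pi))

# CE-style step: starting from the prior, iteratively refit a Gaussian
# importance density by matching the weighted moments of its own draws.
m, s = 0.0, 1.0
for _ in range(3):
    th = rng.normal(m, s, size=2000)
    logw = log_joint(th) - log_normal_pdf(th, m, s)
    w = np.exp(logw - logw.max())
    w /= w.sum()
    m = (w * th).sum()
    s = np.sqrt((w * (th - m) ** 2).sum())

# Final importance-sampling estimate of the log marginal likelihood,
# computed in log space for numerical stability.
th = rng.normal(m, s, size=20000)
logw = log_joint(th) - log_normal_pdf(th, m, s)
log_ml = np.log(np.mean(np.exp(logw - logw.max()))) + logw.max()
print(log_ml)
```

Because the draws are independent, the numerical standard error shrinks at the usual Monte Carlo rate with no MCMC autocorrelation penalty.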
942.
In this paper we prove a consistency result for sieved maximum likelihood estimators of the density in general random censoring models with covariates. The proof is based on the method of functional estimation: the estimation error is decomposed into a deterministic approximation error and a stochastic estimation error. The main part of the proof establishes a uniform law of large numbers for the conditional log-likelihood functional, using results and techniques from empirical process theory.
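The approximation/estimation error decomposition can be seen numerically in a minimal histogram-sieve example (a stand-in sieve on uncensored data, not the censoring models of the paper; the density and sieve sizes are illustrative): growing the sample and the sieve dimension together drives the overall error down.

```python
import numpy as np

rng = np.random.default_rng(0)

def sieve_mle_density(x, m):
    """Histogram sieve on [0, 1]: over densities constant on m equal bins,
    the maximum likelihood estimator is counts / (n * binwidth)."""
    counts, edges = np.histogram(x, bins=m, range=(0.0, 1.0))
    return counts / (x.size * (1.0 / m)), edges

# True density on [0, 1]: Beta(2, 2), f(u) = 6 u (1 - u).
grid = np.linspace(0.0, 1.0, 1000, endpoint=False) + 0.0005
f_true = 6 * grid * (1 - grid)

errs = []
for n, m in [(200, 8), (2000, 16), (20000, 32)]:
    x = rng.beta(2, 2, size=n)
    f_hat, _ = sieve_mle_density(x, m)
    # Evaluate the step-function estimate on the grid; report mean L1 error.
    vals = f_hat[np.minimum((grid * m).astype(int), m - 1)]
    errs.append(np.abs(vals - f_true).mean())
print(errs)
```

The bin count m controls the deterministic approximation error (order 1/m), while the per-bin counts control the stochastic error (order sqrt(m/n)), mirroring the decomposition in the proof.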
943.
It is quite common for raters to classify a sample of subjects on a categorical scale. Perfect agreement can rarely be observed, partly because raters perceive the meanings of the category labels differently and partly because of factors such as intra-rater variability. Category indistinguishability usually occurs between adjacent categories. In this article, we propose a simple log-linear model that combines ordinal scale information with category distinguishability between ordinal categories for modelling agreement between two raters. The proposed model requires no score assignment for the ordinal categories. An algorithm and statistical properties are provided.
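For background only (this computes chance-corrected agreement, not the proposed log-linear model), here is a small sketch on a hypothetical two-rater cross-classification whose disagreements concentrate on adjacent categories, as the abstract describes:

```python
import numpy as np

# Hypothetical 4x4 cross-classification of two raters on an ordinal scale;
# counts cluster on and next to the diagonal (adjacent-category confusion).
table = np.array([
    [40,  6,  1,  0],
    [ 5, 32,  7,  1],
    [ 1,  6, 28,  5],
    [ 0,  1,  4, 23],
], dtype=float)

n = table.sum()
p = table / n
po = np.trace(p)                         # observed agreement proportion
pe = p.sum(axis=1) @ p.sum(axis=0)       # agreement expected under chance
kappa = (po - pe) / (1 - pe)             # Cohen's kappa
print(round(po, 3), round(kappa, 3))
```

A log-linear treatment would instead model the expected cell counts directly, which is what allows the article's model to encode adjacent-category indistinguishability without assigning scores.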
944.
Model selection aims to find the best model. Most of the usual criteria are based on goodness of fit and parsimony and aim to maximize a transformed version of the likelihood. The situation is less clear when two models are equivalent: are they close to the unknown true model, or are they far from it? Based on simulations, we study the results of Vuong's test, Cox's test, AIC and BIC, and the ability of these four procedures to discriminate between models.
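A minimal sketch of the two information criteria on simulated data (Vuong's and Cox's tests are omitted; the data-generating process and candidate models are illustrative only):

```python
import numpy as np

def aic_bic(y, X):
    """Gaussian AIC and BIC for an OLS fit y ~ X (X includes the intercept)."""
    n, p = X.shape
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    sigma2 = (resid ** 2).mean()          # ML estimate of the error variance
    loglik = -0.5 * n * (np.log(2 * np.pi * sigma2) + 1)
    k = p + 1                             # regression coefficients + variance
    return 2 * k - 2 * loglik, k * np.log(n) - 2 * loglik

rng = np.random.default_rng(0)
n = 200
x = rng.normal(size=n)
y = 1.0 + 2.0 * x + rng.normal(size=n)               # truly linear DGP

X1 = np.column_stack([np.ones(n), x])                # correct model
X2 = np.column_stack([np.ones(n), x, x**2, x**3])    # overparameterized model
ab1, ab2 = aic_bic(y, X1), aic_bic(y, X2)
print(ab1, ab2)
```

Both criteria trade fit against parsimony; BIC's log(n) penalty is the harsher one here, which is why it discriminates more sharply against the overparameterized model in large samples.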
945.
Nonparametric and parametric estimators are combined to minimize the mean squared error among their linear combinations. The combined estimator is consistent, and for large sample sizes it has a smaller mean squared error than the nonparametric estimator when the parametric assumption is violated. If the parametric assumption holds, the combined estimator has a smaller MSE than the parametric estimator. Our simulation examples focus on mean estimation when the data may follow a lognormal distribution or a mixture involving an exponential or a uniform distribution. Motivating examples illustrate possible application areas.
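One plausible way to realize such a combination (a sketch under our own assumptions, not the authors' construction) is to bootstrap both estimators and pick the convex-combination weight with the smallest estimated MSE, using the full-sample nonparametric estimate as a stand-in for the unknown true mean:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.lognormal(mean=0.0, sigma=1.0, size=300)

def estimators(sample):
    """Nonparametric (sample mean) and parametric (lognormal plug-in) mean estimators."""
    mu = np.log(sample).mean()
    sig2 = np.log(sample).var(ddof=1)
    return sample.mean(), np.exp(mu + sig2 / 2)

t_np, t_par = estimators(x)

# Bootstrap the joint distribution of the two estimators, then choose the
# convex-combination weight minimising the estimated MSE.
B = 1000
boot = np.array([estimators(rng.choice(x, size=x.size, replace=True))
                 for _ in range(B)])
ws = np.linspace(0.0, 1.0, 101)
mse = [np.mean((w * boot[:, 1] + (1 - w) * boot[:, 0] - t_np) ** 2) for w in ws]
w_star = ws[int(np.argmin(mse))]
t_comb = w_star * t_par + (1 - w_star) * t_np
print(w_star, t_comb)
```

When the lognormal assumption holds, as in this simulation, the weight tends toward the more efficient parametric estimator; under misspecification the bootstrap MSE pushes the weight back toward the robust sample mean.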
946.
A. K. Basu & S. Sen Roy 《Statistics》2013,47(3):461-470
This paper explores the rate at which the estimates of the unknown parameters in an autoregressive process converge in distribution to the normal variate.
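A quick simulation (illustrative only, not the paper's setting) showing the normal limit in the AR(1) case, where sqrt(n)(rho_hat − rho) has asymptotic variance 1 − rho²:

```python
import numpy as np

rng = np.random.default_rng(0)

def ar1_ols(n, rho):
    """Least-squares estimate of rho from one simulated AR(1) path."""
    e = rng.normal(size=n)
    x = np.empty(n)
    x[0] = e[0]
    for t in range(1, n):
        x[t] = rho * x[t - 1] + e[t]
    return (x[1:] * x[:-1]).sum() / (x[:-1] ** 2).sum()

n, reps, rho = 400, 2000, 0.6
z = np.sqrt(n) * (np.array([ar1_ols(n, rho) for _ in range(reps)]) - rho)
# The sample standard deviation of z should be close to sqrt(1 - rho^2) = 0.8.
print(z.mean(), z.std())
```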
947.
We consider the use of minimax shrinkage estimators for the linear regression model under several loss functions when severe multicollinearity is present. The examples considered illustrate that little or no departure from the least squares estimates is permitted in many cases when the data are highly multicollinear and/or shrinkage is toward a point in the parameter space that does not closely agree with the sample data.
948.
Penalized likelihood methods provide a range of practical modelling tools, including spline smoothing, generalized additive models and variants of ridge regression. Selecting the correct weights for penalties is a critical part of using these methods, and in the single-penalty case the analyst has several well-founded techniques to choose from. However, many modelling problems suggest a formulation employing multiple penalties, and here general methodology is lacking. A wide family of models with multiple penalties can be fitted to data by iterative solution of the generalized ridge regression problem: minimize ||W^{1/2}(Xp − y)||² ρ + Σ_{i=1}^{m} θ_i p′S_i p, where p is a parameter vector, X a design matrix, S_i a non-negative definite coefficient matrix defining the i-th penalty with associated smoothing parameter θ_i, W a diagonal weight matrix, y a vector of data or pseudodata, and ρ an 'overall' smoothing parameter included for computational efficiency. This paper shows how smoothing parameter selection can be performed efficiently by applying generalized cross-validation to this problem, and how this allows non-linear, generalized linear and linear models to be fitted using multiple penalties, substantially increasing the scope of penalized modelling methods. Examples of non-linear modelling, generalized additive modelling and anisotropic smoothing are given.
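A single-penalty sketch of GCV-based smoothing parameter selection (the article's contribution is the efficient multiple-penalty case; here S is simply the identity, the basis is polynomial, and W = I, all illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)
n_obs = 100
x = np.sort(rng.uniform(0.0, 1.0, n_obs))
y = np.sin(2 * np.pi * x) + rng.normal(scale=0.3, size=n_obs)

# Polynomial basis as a stand-in smoother; S = I gives plain ridge shrinkage.
X = np.vander(x, 10, increasing=True)
S = np.eye(X.shape[1])

def gcv(theta):
    """GCV score n * RSS / (n - tr(A))^2 for the influence matrix
    A(theta) = X (X'X + theta * S)^{-1} X'."""
    A = X @ np.linalg.solve(X.T @ X + theta * S, X.T)
    resid = y - A @ y
    return n_obs * (resid @ resid) / (n_obs - np.trace(A)) ** 2

thetas = 10.0 ** np.arange(-8, 3)
scores = [gcv(t) for t in thetas]
best = thetas[int(np.argmin(scores))]
print(best, min(scores))
```

With multiple penalties, θ becomes a vector and a naive grid search explodes combinatorially; making this selection efficient is precisely the problem the paper addresses.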
949.
Wavelet shrinkage estimation is an increasingly popular method for signal denoising and compression. Although Bayes estimators can provide excellent mean-squared error (MSE) properties, the selection of an effective prior is a difficult task. To address this problem, we propose empirical Bayes (EB) prior selection methods for various error distributions, including the normal and the heavier-tailed Student t-distributions. Under such EB prior distributions, we obtain threshold shrinkage estimators based on model selection, and multiple-shrinkage estimators based on model averaging. These EB estimators are computationally competitive with standard classical thresholding methods and robust to outliers in both the data and wavelet domains. Simulated and real examples are used to illustrate the flexibility and improved MSE performance of these methods in a wide variety of settings.
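A minimal shrinkage sketch using a one-level Haar transform and the classical universal soft threshold (one of the "standard classical thresholding methods" the abstract benchmarks against, not the EB priors themselves; signal and noise level are invented):

```python
import numpy as np

def soft_threshold(c, t):
    """Soft-shrinkage rule applied to detail coefficients."""
    return np.sign(c) * np.maximum(np.abs(c) - t, 0.0)

def haar_denoise(y, sigma):
    """One-level Haar transform, universal-threshold shrinkage, inverse transform."""
    n = y.size
    approx = (y[0::2] + y[1::2]) / np.sqrt(2)
    detail = (y[0::2] - y[1::2]) / np.sqrt(2)
    t = sigma * np.sqrt(2 * np.log(n))        # universal threshold
    detail = soft_threshold(detail, t)
    out = np.empty(n)
    out[0::2] = (approx + detail) / np.sqrt(2)
    out[1::2] = (approx - detail) / np.sqrt(2)
    return out

rng = np.random.default_rng(0)
n = 512
signal = np.sin(np.linspace(0.0, 4 * np.pi, n))
noisy = signal + rng.normal(scale=0.5, size=n)
denoised = haar_denoise(noisy, sigma=0.5)
print(np.mean((noisy - signal) ** 2), np.mean((denoised - signal) ** 2))
```

For a smooth signal, nearly all detail coefficients are noise and get thresholded away, which is where the MSE gain comes from; EB prior selection replaces the fixed universal threshold with a data-driven shrinkage rule.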
950.
English phrasal verbs are widely considered difficult to master because the combinations of verb and particle seem to follow no discernible rule. They were long regarded as arbitrary pairings that could only be learned by rote. Cognitive linguistics, however, holds that phrasal-verb combinations are motivated, and can be analysed and systematised. Drawing on several classic notions from cognitive linguistics, namely prototype categories, idealised cognitive models and cognitive domains, image schemas, and conceptual metaphor, this article examines the cognitive motivation behind the semantic extension of phrasal verbs, clarifies the role particles play in the construction of phrasal verbs, and thereby provides a cognitive basis for understanding them.
Copyright©北京勤云科技发展有限公司  京ICP备09084417号