Sorted results: 4,189 records matched (search time 15 ms)
961.
Narrowing the income gap within rural areas is key to alleviating relative poverty and achieving common prosperity. Using 2009-2020 data from the National Fixed-Point Rural Observation Survey, this paper applies recentered influence function (RIF) regression and decomposition to test empirically how agricultural machinery socialized services affect the within-rural income gap. The results show that such services raise the incomes of low- and middle-income households, with a larger income-raising effect for low-income households, thereby narrowing the within-rural income gap. The mitigating effect is regionally heterogeneous: larger in the eastern region and smaller in the northeast. Further decomposition shows that the coefficient effect is the main channel through which these services reduce the gap, explaining 124.24% of the income-gap change. Mechanism tests indicate that the services narrow the gap mainly by promoting the transfer of rural labor. Policy should therefore actively develop agricultural machinery socialized services, improve support policies targeted at low-income groups, and give full play to the services' role in mitigating the within-rural income gap.
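The RIF-regression step described in this abstract can be sketched as follows. This is a minimal illustration on simulated data; the variable names, the simulated incomes, and the kernel density estimate are my assumptions, not taken from the paper:

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(0)
y = rng.lognormal(size=500)        # simulated household incomes (illustrative)
x = rng.normal(size=(500, 2))      # simulated covariates, e.g. service use

def rif_quantile(y, tau):
    """Recentered influence function of the tau-th quantile:
    RIF(y; q_tau) = q_tau + (tau - 1{y <= q_tau}) / f(q_tau)."""
    q = np.quantile(y, tau)
    f_q = gaussian_kde(y)(q)[0]    # kernel density estimate at the quantile
    return q + (tau - (y <= q)) / f_q

# RIF regression: ordinary least squares of the transformed outcome
# on covariates; coefficients approximate unconditional quantile effects
rif = rif_quantile(y, 0.5)
X = np.column_stack([np.ones(len(y)), x])
beta, *_ = np.linalg.lstsq(X, rif, rcond=None)
```

Running the same regression at several quantiles and comparing the coefficients is what lets one speak of effects on the income *gap* rather than on mean income.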
962.
When spatial data are correlated, currently available data-driven smoothing parameter selection methods for nonparametric regression will often fail to provide useful results. The authors propose a method that adjusts the generalized cross-validation criterion for the effect of spatial correlation in the case of bivariate local polynomial regression. Their approach uses a pilot fit to the data and the estimation of a parametric covariance model. The method is easy to implement and leads to improved smoothing parameter selection, even when the covariance model is misspecified. The methodology is illustrated using water chemistry data collected in a survey of lakes in the Northeastern United States.
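One common way to adjust generalized cross-validation for correlated errors is to replace tr(H) in the denominator with tr(HR), where R is an estimated error correlation matrix. The sketch below is a one-dimensional analogue under that assumption, not the authors' exact bivariate criterion; the kernel, the AR(1) correlation model, and the bandwidth grid are all illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100
x = np.sort(rng.uniform(0, 1, n))
# AR(1)-correlated errors stand in for spatial dependence
rho = 0.5
R = rho ** np.abs(np.subtract.outer(np.arange(n), np.arange(n)))
L = np.linalg.cholesky(R)
y = np.sin(2 * np.pi * x) + 0.3 * (L @ rng.normal(size=n))

def hat_matrix(x, h):
    """Nadaraya-Watson smoother matrix with a Gaussian kernel."""
    W = np.exp(-0.5 * (np.subtract.outer(x, x) / h) ** 2)
    return W / W.sum(axis=1, keepdims=True)

def adjusted_gcv(y, H, R):
    """GCV with tr(H) replaced by tr(H R) to account for correlation."""
    n = len(y)
    rss = np.mean((y - H @ y) ** 2)
    return rss / (1 - np.trace(H @ R) / n) ** 2

bandwidths = np.linspace(0.02, 0.3, 15)
scores = [adjusted_gcv(y, hat_matrix(x, h), R) for h in bandwidths]
h_best = float(bandwidths[np.argmin(scores)])
```

Without the tr(HR) correction, positively correlated errors make ordinary GCV favour bandwidths that are far too small.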
963.
Over forty years ago, Grenander derived the MLE of a monotone decreasing density f with known mode. Prakasa Rao obtained the asymptotic distribution of this estimator at a fixed point x where f'(x) < 0. Here, we obtain the asymptotic distribution of this estimator at a fixed point x when f is constant and nonzero in some open neighborhood of x. This limiting distribution is expressible as the convolution of a closed-form density and a rescaled standard normal density. Groeneboom (1983) derived the aforementioned closed-form density, and we provide an alternative, more direct derivation.
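The Grenander MLE itself is simple to compute: it is the left derivative of the least concave majorant of the empirical CDF, obtainable with a pool-adjacent-violators pass over the ECDF slopes. A minimal sketch, assuming a decreasing density on [0, inf) with mode at 0:

```python
import numpy as np

def grenander(x):
    """Grenander MLE of a decreasing density on [0, inf): slopes of the
    least concave majorant of the empirical CDF, via pool adjacent
    violators on the gap-weighted ECDF slopes."""
    x = np.sort(np.asarray(x, dtype=float))
    n = len(x)
    xs = np.concatenate([[0.0], x])
    gaps = np.diff(xs)                 # block widths between order statistics
    slopes = (1.0 / n) / gaps          # raw ECDF slope on each gap
    vals, wts = [], []
    for s, w in zip(slopes, gaps):
        vals.append(s)
        wts.append(w)
        # merge blocks while the non-increasing constraint is violated
        while len(vals) > 1 and vals[-2] < vals[-1]:
            w_new = wts[-2] + wts[-1]
            v_new = (vals[-2] * wts[-2] + vals[-1] * wts[-1]) / w_new
            vals[-2:] = [v_new]
            wts[-2:] = [w_new]
    return np.array(vals), np.array(wts)   # step heights and step widths

rng = np.random.default_rng(2)
dens, widths = grenander(rng.exponential(size=200))
```

The returned step function is non-increasing by construction and integrates to one, since pooling preserves the total ECDF mass.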
964.
Samuel M. Mwalili, Emmanuel Lesaffre & Dominique Declerck, Journal of the Royal Statistical Society, Series C (Applied Statistics), 2005, 54(1): 77-93
Summary. We present an approach for correcting for interobserver measurement error in an ordinal logistic regression model, taking into account the variability of the estimated correction terms as well. The different scoring behaviour of the 16 examiners complicated the identification of a geographical trend in a recent study on caries experience in Flemish children (Belgium) who were 7 years old. Since the measurement error is on the response, the factor 'examiner' could be included in the regression model to correct for its confounding effect. However, controlling for examiner largely removed the geographical east-west trend. Instead, we suggest a (Bayesian) ordinal logistic model which corrects for the scoring error (compared with a gold standard) using a calibration data set. The marginal posterior distribution of the regression parameters of interest is obtained by integrating out the correction terms pertaining to the calibration data set. This is done by processing two Markov chains sequentially, whereby one Markov chain samples the correction terms. The sampled correction term is imputed in the Markov chain pertaining to the regression parameters. The model was fitted to the oral health data of the Signal-Tandmobiel® study. A WinBUGS program was written to perform the analysis.
965.
F. Abramovich, T. Sapatinas & B. W. Silverman, Journal of the Royal Statistical Society, Series B (Statistical Methodology), 1998, 60(4): 725-749
We discuss a Bayesian formalism which gives rise to a type of wavelet threshold estimation in nonparametric regression. A prior distribution is imposed on the wavelet coefficients of the unknown response function, designed to capture the sparseness of wavelet expansion that is common to most applications. For the prior specified, the posterior median yields a thresholding procedure. Our prior model for the underlying function can be adjusted to give functions falling in any specific Besov space. We establish a relationship between the hyperparameters of the prior model and the parameters of those Besov spaces within which realizations from the prior will fall. Such a relationship gives insight into the meaning of the Besov space parameters. Moreover, the relationship established makes it possible in principle to incorporate prior knowledge about the function's regularity properties into the prior model for its wavelet coefficients. However, prior knowledge about a function's regularity properties might be difficult to elicit; with this in mind, we propose a standard choice of prior hyperparameters that works well in our examples. Several simulated examples are used to illustrate our method, and comparisons are made with other thresholding methods. We also present an application to a data set that was collected in an anaesthesiological study.
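The posterior-median rule under such a sparse mixture prior behaves like a thresholding operator: small coefficients are set exactly to zero and large ones are shrunk. As a stand-in illustration (using a hand-rolled one-level Haar transform and soft thresholding rather than the paper's posterior median), one might write:

```python
import numpy as np

def haar_1level(y):
    """One level of the orthonormal Haar transform."""
    a = (y[0::2] + y[1::2]) / np.sqrt(2)   # approximation coefficients
    d = (y[0::2] - y[1::2]) / np.sqrt(2)   # detail coefficients
    return a, d

def inv_haar_1level(a, d):
    y = np.empty(2 * len(a))
    y[0::2] = (a + d) / np.sqrt(2)
    y[1::2] = (a - d) / np.sqrt(2)
    return y

def soft(d, lam):
    """Soft thresholding: kills small coefficients, shrinks large ones,
    mimicking the qualitative behaviour of the posterior-median rule."""
    return np.sign(d) * np.maximum(np.abs(d) - lam, 0.0)

rng = np.random.default_rng(3)
n = 256
t = np.linspace(0, 1, n)
signal = np.where(t < 0.5, 0.0, 1.0)       # blocky test function
y = signal + 0.1 * rng.normal(size=n)
a, d = haar_1level(y)
lam = 0.1 * np.sqrt(2 * np.log(n))          # universal threshold (sigma = 0.1)
denoised = inv_haar_1level(a, soft(d, lam))
```

A full implementation would apply the rule across all resolution levels of the transform, with level-dependent hyperparameters as the paper's Besov-space calibration suggests.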
966.
Ricardo Cao, José A. Vilar & Juan M. Vilar, Australian & New Zealand Journal of Statistics, 2012, 54(3): 301-324
Generalised variance function (GVF) models are data-analysis techniques often used in large-scale sample surveys to approximate the design variance of point estimators for population means and proportions. Potential advantages of the GVF approach include operational simplicity, more stable sampling error estimates, and a convenient way of summarising results when a large number of survey variables is considered. In this paper, several parametric and nonparametric methods for GVF estimation with binary variables are proposed and compared. The behaviour of these estimators is analysed under heteroscedasticity and in the presence of outliers and influential observations. An empirical study based on the annual survey of living conditions in Galicia (a region in the northwest of Spain) illustrates the behaviour of the proposed estimators.
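A classic parametric GVF for survey totals models the relvariance as a + b/X and fits (a, b) by least squares across survey items. The sketch below uses that textbook form on simulated data; it is not necessarily one of the estimators proposed in the paper, and the simulated variances are illustrative:

```python
import numpy as np

rng = np.random.default_rng(4)
# simulated survey estimates: totals X and their design variances v
X = rng.uniform(1e3, 1e5, size=40)
v = (0.02 + 50.0 / X) * X**2 * rng.uniform(0.8, 1.2, size=40)

# classic GVF: relvariance(X) = a + b / X, fitted by OLS across items
relvar = v / X**2
A = np.column_stack([np.ones_like(X), 1.0 / X])
(a, b), *_ = np.linalg.lstsq(A, relvar, rcond=None)

def gvf_variance(x_new):
    """Predicted design variance for a new estimated total."""
    return (a + b / x_new) * x_new**2
```

The fitted curve then supplies a smoothed variance for any published estimate, which is the "operational simplicity" the abstract refers to: one small table of (a, b) replaces item-by-item variance computations.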
967.
Communications in Statistics - Simulation and Computation, 2012, 41(6): 833-851
In linear and nonparametric regression models, the problem of testing for symmetry of the distribution of errors is considered. We propose a test statistic which utilizes the empirical characteristic function of the corresponding residuals. The asymptotic null distribution of the test statistic as well as its behavior under alternatives is investigated. A simulation study compares bootstrap versions of the proposed test to other more standard procedures.
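Under symmetry of the residual distribution about zero, the imaginary part of the characteristic function vanishes, so a natural statistic integrates the squared imaginary part of the empirical characteristic function against a weight function. A rough sketch of that idea (the grid, the Gaussian weight, and the simulated residuals are illustrative; critical values would come from a bootstrap, which is omitted here):

```python
import numpy as np

def ecf_symmetry_stat(res, t_grid):
    """n times the integral of Im(ecf)^2 over a weighted grid:
    large values indicate asymmetry of the residual distribution."""
    n = len(res)
    im = np.array([np.mean(np.sin(t * res)) for t in t_grid])
    dt = t_grid[1] - t_grid[0]
    return n * np.sum(im**2 * np.exp(-t_grid**2)) * dt   # Gaussian weight

rng = np.random.default_rng(5)
t = np.linspace(0.05, 3.0, 60)
sym = rng.normal(size=500)                # symmetric residuals
skew = rng.exponential(size=500) - 1.0    # asymmetric, mean-centered
stat_sym = ecf_symmetry_stat(sym, t)
stat_skew = ecf_symmetry_stat(skew, t)
```

In a regression setting `res` would be the fitted residuals, and the bootstrap would resample symmetrized residuals to approximate the null distribution.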
968.
Journal of the Korean Statistical Society, 2014, 43(4): 513-530
This paper considers a problem of variable selection in quantile regression with autoregressive errors. Recently, Wu and Liu (2009) investigated the oracle properties of the SCAD and adaptive-LASSO penalized quantile regressions under a non-identical but independent error assumption. We further relax the error assumptions so that the regression model can accommodate autoregressive errors, and then investigate theoretical properties of our proposed penalized quantile estimators under the relaxed assumption. Optimizing the objective function is often challenging because both the quantile loss and the penalty functions may be non-differentiable and/or non-concave. We adopt the concept of pseudo data by Oh et al. (2007) to implement a practical algorithm for the quantile estimate. In addition, we discuss the convergence property of the proposed algorithm. The performance of the proposed method is compared with those of the majorization-minimization algorithm (Hunter and Li, 2005) and the difference convex algorithm (Wu and Liu, 2009) through numerical and real examples.
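The non-differentiability of the check loss can also be smoothed away with iteratively reweighted least squares, which gives a feel for the computational issue the pseudo-data device addresses. This is not the algorithm of Oh et al. and omits the penalty term; the data and tolerance are illustrative:

```python
import numpy as np

def quantile_irls(X, y, tau, eps=1e-6, iters=200):
    """Unpenalized quantile regression via iteratively reweighted least
    squares: the check loss rho_tau(r) is approximated by w * r^2 with
    w = rho_tau(r) / r^2, re-solved until the weights stabilize."""
    beta = np.linalg.lstsq(X, y, rcond=None)[0]   # OLS start
    for _ in range(iters):
        r = y - X @ beta
        # check-loss weights: tau for positive residuals, 1-tau otherwise
        w = np.where(r >= 0, tau, 1 - tau) / np.maximum(np.abs(r), eps)
        WX = X * w[:, None]
        beta = np.linalg.solve(X.T @ WX, WX.T @ y)
    return beta

rng = np.random.default_rng(6)
n = 300
X = np.column_stack([np.ones(n), rng.normal(size=n)])
y = X @ np.array([1.0, 2.0]) + rng.standard_t(df=3, size=n)
beta_med = quantile_irls(X, y, 0.5)   # median regression fit
```

Adding a SCAD or adaptive-LASSO penalty on top of this loss is what makes the objective non-concave and motivates the pseudo-data reformulation in the paper.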
969.
Many areas of statistical modeling are plagued by the "curse of dimensionality," in which there are more variables than observations. This is especially true when developing functional regression models where the independent variables are some type of spectral decomposition, such as data from near-infrared spectroscopy. While we could develop a very complex model simply by taking enough samples (such that n > p), this could prove impossible or prohibitively expensive. In addition, a regression model developed this way could turn out to be highly inefficient, as spectral data usually exhibit high multicollinearity. In this article, we propose a two-part algorithm for selecting an effective and efficient functional regression model. Our algorithm begins by evaluating a subset of discrete wavelet transformations, allowing for variation in both wavelet and filter number. Next, we perform an intermediate processing step to remove variables with low correlation to the response data. Finally, we use the genetic algorithm to perform a stochastic search through the subset regression model space, driven by an information-theoretic objective function. We allow our algorithm to develop the regression model for each response variable independently, so as to optimally model each variable. We demonstrate our method on the familiar biscuit dough dataset, which has been used in a similar context by several researchers. Our results demonstrate both the flexibility and the power of our algorithm. For each response variable, a different subset model is selected, and different wavelet transformations are used. The models developed by our algorithm show an improvement, as measured by lower mean error, over results in the published literature.
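The intermediate processing step, dropping predictors that correlate weakly with the response, can be sketched in a few lines. The cutoff of 0.3 and the simulated n < p design are illustrative choices, not the paper's settings:

```python
import numpy as np

rng = np.random.default_rng(7)
n, p = 40, 200                     # fewer samples than variables (n < p)
X = rng.normal(size=(n, p))        # stand-in for wavelet coefficients
y = X[:, :3] @ np.array([2.0, -1.5, 1.0]) + 0.1 * rng.normal(size=n)

# screening step: keep only predictors whose absolute correlation
# with the response exceeds a cutoff, shrinking the search space
# before the genetic-algorithm stage
corrs = np.array([np.corrcoef(X[:, j], y)[0, 1] for j in range(p)])
keep = np.flatnonzero(np.abs(corrs) > 0.3)
X_reduced = X[:, keep]
```

Shrinking p this way is what makes the subsequent stochastic subset search tractable, since the genetic algorithm then explores a far smaller model space.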
970.
A. C. Davison, Statistical Methods and Applications, 2008, 17(2): 167-181
The paper gives a highly personal sketch of some current trends in statistical inference. After an account of the challenges that new forms of data bring, there is a brief overview of some topics in stochastic modelling. The paper then turns to sparsity, illustrated using Bayesian wavelet analysis based on a mixture model and metabolite profiling. Modern likelihood methods including higher order approximation and composite likelihood inference are then discussed, followed by some thoughts on statistical education.