221.
Standard methods of estimation for autoregressive models are known to be biased in finite samples, which has implications for estimation, hypothesis testing, confidence interval construction and forecasting. Three methods of bias reduction are considered here: first-order bias correction (FOBC), where the total bias is approximated by the O(T⁻¹) bias; bootstrapping; and recursive mean adjustment (RMA). In addition, we show how first-order bias correction is related to linear bias correction. The practically important case where the AR model includes an unknown linear trend is considered in detail. The fidelity of nominal to actual coverage of confidence intervals is also assessed. A simulation study covers the AR(1) model and a number of extensions based on the empirical AR(p) models fitted by Nelson & Plosser (1982). Overall, which method dominates depends on the criterion adopted: bootstrapping tends to be best at reducing bias, recursive mean adjustment is best at reducing mean squared error, whilst FOBC does particularly well in maintaining the fidelity of confidence intervals.
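For the AR(1) case, the first-order correction can be illustrated with a short simulation. The sketch below is a minimal illustration under stated assumptions, not the paper's implementation: it relies on the well-known Kendall approximation E(ρ̂ − ρ) ≈ −(1 + 3ρ)/T for the OLS estimator of an AR(1) model with an intercept, and applies the plug-in correction ρ̃ = ρ̂ + (1 + 3ρ̂)/T; the sample size, true coefficient and number of replications are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)
T, rho, n_rep = 50, 0.8, 5000        # illustrative choices, not taken from the paper

def ols_ar1(y):
    """OLS estimate of rho in y_t = c + rho * y_{t-1} + e_t (intercept included)."""
    X = np.column_stack([np.ones(len(y) - 1), y[:-1]])
    beta, *_ = np.linalg.lstsq(X, y[1:], rcond=None)
    return beta[1]

raw, corrected = [], []
for _ in range(n_rep):
    e = rng.standard_normal(T + 100)
    y = np.zeros(T + 100)
    for t in range(1, T + 100):      # simulate the AR(1), discarding a burn-in of 100
        y[t] = rho * y[t - 1] + e[t]
    y = y[100:]
    r = ols_ar1(y)
    raw.append(r)
    corrected.append(r + (1 + 3 * r) / T)   # plug-in first-order (Kendall) correction

print(f"true rho  : {rho}")
print(f"mean OLS  : {np.mean(raw):.4f}  (bias {np.mean(raw) - rho:+.4f})")
print(f"mean FOBC : {np.mean(corrected):.4f}  (bias {np.mean(corrected) - rho:+.4f})")
```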
222.
The accuracy of a binary diagnostic test is usually measured in terms of its sensitivity and specificity, or through its positive and negative predictive values. Another way to describe the validity of a binary diagnostic test is through the risk of error and the kappa coefficient of the risk of error. The risk of error is the average loss caused by incorrectly classifying a non-diseased or a diseased patient, and the kappa coefficient of the risk of error measures the agreement between the diagnostic test and the gold standard. In the presence of partial verification of the disease, the disease status of some patients is unknown, and therefore the evaluation of a diagnostic test cannot be carried out by the traditional method. In this paper, we derive the maximum likelihood estimators and variances of the risk of error and of the kappa coefficient of the risk of error in the presence of partial disease verification. Simulation experiments study the effect of the verification probabilities on the coverage of the confidence interval for the kappa coefficient.
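As a concrete, fully verified illustration of the two summary measures (the paper's estimators for the partially verified case are not reproduced here), the sketch below computes the risk of error as a loss-weighted misclassification probability and a chance-corrected, kappa-type version of it; the 2x2 counts, the losses L_FN and L_FP, and the particular chance correction used are hypothetical choices, not definitions taken from the paper.

```python
import numpy as np

# Hypothetical fully verified 2x2 cross-classification (rows: gold standard).
#                  test +  test -
counts = np.array([[  90,     10],     # diseased
                   [  30,    170]])    # non-diseased
L_FN, L_FP = 1.0, 0.5                  # hypothetical losses for false negatives / positives

n = counts.sum()
p_fn = counts[0, 1] / n                # P(diseased and test negative)
p_fp = counts[1, 0] / n                # P(non-diseased and test positive)

# Risk of error: average loss caused by misclassifying diseased / non-diseased patients.
risk = L_FN * p_fn + L_FP * p_fp

# Risk expected if the test classified patients independently of true status.
p_disease  = counts[0].sum() / n
p_test_pos = counts[:, 0].sum() / n
risk_chance = (L_FN * p_disease * (1 - p_test_pos)
               + L_FP * (1 - p_disease) * p_test_pos)

# Chance-corrected (kappa-type) coefficient: 1 = no loss, 0 = chance-level agreement.
kappa_risk = (risk_chance - risk) / risk_chance

print(f"risk of error          : {risk:.4f}")
print(f"chance-level risk      : {risk_chance:.4f}")
print(f"kappa of risk of error : {kappa_risk:.4f}")
```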
223.
In this paper we examine maximum likelihood estimation procedures in multilevel models with two-level nesting structures. For the estimation of fixed effects and variance components, the level-one error terms and the random effects are usually assumed to be normally distributed. In some circumstances, however, this assumption may not be realistic, especially for the random effects. We therefore assume that the random effects follow a multivariate exponential power (MEP) distribution and, by means of Monte Carlo simulation, study the robustness of the maximum likelihood estimators obtained under the normality assumption when the random effects are in fact MEP distributed.
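A minimal Monte Carlo sketch of this kind of robustness check, with scipy's univariate generalized normal (exponential power) distribution standing in for the multivariate MEP family and with arbitrary parameter values: non-normal random intercepts are generated and a two-level random-intercept model is fitted with statsmodels' MixedLM, which assumes normal random effects.

```python
import numpy as np
from scipy.stats import gennorm        # univariate exponential power (generalized normal)
import statsmodels.api as sm

rng = np.random.default_rng(1)
n_groups, n_per = 100, 10
beta0, beta1 = 1.0, 0.5                # true fixed effects (illustrative values)
shape = 1.0                            # exponential power shape; 1 gives Laplace-like tails

biases = []
for _ in range(100):                   # small Monte Carlo run, for illustration only
    # Random intercepts drawn from an exponential power distribution, not a normal.
    u = gennorm.rvs(shape, scale=1.0, size=n_groups, random_state=rng)
    groups = np.repeat(np.arange(n_groups), n_per)
    x = rng.standard_normal(n_groups * n_per)
    y = beta0 + beta1 * x + u[groups] + rng.standard_normal(n_groups * n_per)

    # Fit the two-level random-intercept model under the usual normality assumption.
    res = sm.MixedLM(y, sm.add_constant(x), groups=groups).fit(reml=True)
    biases.append(res.fe_params.values - np.array([beta0, beta1]))

print("mean bias of (intercept, slope) estimates:", np.mean(biases, axis=0))
```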
224.
We establish weak and strong posterior consistency of Gaussian process priors studied by Lenk [1988. The logistic normal distribution for Bayesian, nonparametric, predictive densities. J. Amer. Statist. Assoc. 83 (402), 509–516] for density estimation. Weak consistency is related to the support of a Gaussian process in the sup-norm topology, which is explicitly identified for many covariance kernels. In fact we show that this support is the space of all continuous functions when the usual covariance kernels are chosen and an appropriate prior is used on the smoothing parameters of the covariance kernel. We then show that a large class of Gaussian process priors achieve weak as well as strong posterior consistency (under some regularity conditions) at true densities that are either continuous or piecewise continuous.
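The logistic normal construction referred to here is easy to visualise: a Gaussian process path W is exponentiated and normalised, giving the random density exp(W(x)) / ∫ exp(W(t)) dt. The sketch below draws one such density on a grid; the squared-exponential kernel, its hyperparameters and the grid are illustrative choices, not the priors analysed in the paper.

```python
import numpy as np

rng = np.random.default_rng(2)
x = np.linspace(0.0, 1.0, 200)                  # evaluation grid on [0, 1]

# Squared-exponential covariance kernel with illustrative hyperparameters.
tau, ell = 1.0, 0.2
K = tau**2 * np.exp(-0.5 * (x[:, None] - x[None, :])**2 / ell**2)
K += 1e-8 * np.eye(len(x))                      # jitter for numerical stability

# One draw W from the Gaussian process, then the logistic normal transform
# f(x) = exp(W(x)) / integral of exp(W).
w = rng.multivariate_normal(np.zeros(len(x)), K)
f = np.exp(w)
f /= f.sum() * (x[1] - x[0])                    # normalise on the grid

print("integral of the sampled density:", f.sum() * (x[1] - x[0]))   # = 1 by construction
```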
225.
We consider estimation of the upper boundary point F⁻¹(1) of a distribution function F with finite upper boundary or 'frontier' in deconvolution problems, focusing primarily on deconvolution models where the noise density is decreasing on the positive half-line. Our estimates are based on the (non-parametric) maximum likelihood estimator (MLE) F̂ₙ of F. We show that F̂ₙ⁻¹(1) is asymptotically never too small. If the convolution kernel has bounded support, the estimator F̂ₙ⁻¹(1) can generally be expected to be consistent. In this case, we establish a relation between the extreme value index of F and the rate of convergence of F̂ₙ⁻¹(1) to the upper support point for the 'boxcar' deconvolution model. If the convolution density has unbounded support, F̂ₙ⁻¹(1) can be expected to overestimate the upper support point. We define consistent estimators F̂ₙ⁻¹(1 − βₙ), for appropriately chosen vanishing sequences (βₙ), and study these in a particular case.
226.
Suppose we have n observations of X = Y + Z, where Z is a noise component with known distribution and Y has an unknown density f. When the characteristic function of Z is non-zero almost everywhere, we show that it is possible to construct a density estimate fₙ such that, for every f, the L1 error ∫|fₙ − f| converges to 0.
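A generic deconvolution kernel estimator, not necessarily the construction used in the paper, illustrates why the non-vanishing characteristic function of Z matters: the empirical characteristic function of the observations is divided by the known characteristic function of the noise and smoothed by a band-limited kernel before Fourier inversion. The Laplace noise, the bandwidth and the sinc-type cut-off below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 2000

# Simulated data: Y ~ N(0, 1) contaminated by Laplace noise Z with known distribution.
y = rng.standard_normal(n)
x_obs = y + rng.laplace(scale=0.5, size=n)

def phi_laplace(t, b=0.5):
    """Characteristic function of Laplace(0, b) noise: 1 / (1 + b^2 t^2), never zero."""
    return 1.0 / (1.0 + (b * t) ** 2)

def deconv_density(grid, data, h=0.3):
    """Deconvolution estimate of the density of Y at the points in `grid`.

    The empirical characteristic function of X is divided by the (known, non-vanishing)
    characteristic function of Z, and the inversion integral is band-limited to |t| <= 1/h,
    which corresponds to smoothing with a sinc-type kernel.
    """
    t = np.linspace(-1.0 / h, 1.0 / h, 2001)
    ecf = np.exp(1j * np.outer(t, data)).mean(axis=1)      # empirical cf of X
    ratio = ecf / phi_laplace(t)                           # estimated cf of Y
    integrand = np.exp(-1j * np.outer(grid, t)) * ratio    # Fourier inversion
    est = np.real(integrand @ np.gradient(t)) / (2 * np.pi)
    return np.clip(est, 0.0, None)                         # clip small negative wiggles

grid = np.linspace(-4.0, 4.0, 9)
print(np.round(deconv_density(grid, x_obs), 3))            # should resemble the N(0, 1) density
```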
227.
Electronic Access to Algorithms
Applied Statistics algorithms are available on Statlib at Carnegie Mellon University and on the UK mirror of Statlib at the University of Kent. They may be accessed either by anonymous file transfer protocol (FTP) or over the World Wide Web using Mosaic.
228.
We consider the problem of minimax-variance, robust estimation of a location parameter through the use of L- and R-estimators. We derive an easily checked necessary condition for L-estimation to be minimax, and a related sufficient condition for R-estimation to be minimax. Those cases in the literature in which L-estimation is known not to be minimax, and those in which R-estimation is minimax, are derived as consequences of these conditions; new classes of examples are given in each case. We also answer a question of Scholz (1974), who showed essentially that the asymptotic variance of an R-estimator never exceeds that of an L-estimator if both are efficient at the same strongly unimodal distribution. Scholz raised the question of whether the assumption of strong unimodality could be dropped. We answer this question in the negative, both theoretically and by examples. In the examples, the minimax property fails both for L-estimation and for R-estimation, but the variance of the L-estimator remains bounded as the distribution of the observations varies over the given neighbourhood, whereas that of the R-estimator is unbounded.
229.
Given that much domestic research on cross-cultural communication concentrates on purely linguistic questions and on the analysis of surface-level culture, this paper starts from the perspective of behavioural awareness and analyses the barriers in cross-cultural communication in terms of three factors: stereotyped thinking, prejudice, and value systems, so that learners can overcome these barriers and benefit from cross-cultural exchange.
230.
Attentional bias toward emotional information under the task-switching paradigm
Compared with other kinds of information, emotional information tends to attract more attention from patients with emotional disorders and to receive priority in cognitive processing. In attention experiments, emotionally significant stimuli capture attention or occupy attentional resources more readily than neutral stimuli, giving rise to an attentional bias. Using two indices, the switch cost and the task-rule congruency effect, this paper describes the processing mechanism of attentional bias toward emotional information in patients with emotional disorders under the task-switching paradigm, and argues that the two indices reflect two distinct cognitive control processes involved in attentional bias, providing a necessary basis for emotion regulation in these patients. Finally, it points out that newer techniques such as ERPs and fMRI offer a promising direction for future research on the relationship between emotion and attention under the task-switching paradigm.