Search results: 6,325 matches (31 ms); items 991–1000 shown below.
991.
This paper is concerned with Bayesian estimation and prediction in the context of start-up demonstration tests, in which a unit is rejected when a pre-specified number of failures is observed before the number of consecutive successes required for acceptance is reached. A method for Bayesian inference on the probability of success is developed for the case where the result of each individual start-up is not reported or even recorded, and only the number of trials until termination of testing is available. Some errors in the related literature on the Bayesian analysis of start-up demonstration tests are corrected. The proposed method is a Markov chain Monte Carlo (MCMC) method incorporating data augmentation. It additionally enables Bayesian posterior inference on the number of failures given the number of start-up trials until termination, along with Bayesian predictive inference on the number of start-up trials and the number of failures until termination in any future run of the test. An illustrative example is included.
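A minimal sketch of the inference problem under invented settings (a hypothetical test specification of k = 5 consecutive successes to accept and d = 3 failures to reject, and a single observed trial count t_obs): it computes P(T = t | p) by forward propagation over the states (current success run, failures so far) and samples the posterior of p with a random-walk Metropolis step under a uniform prior, rather than the paper's data-augmentation scheme.

```python
import numpy as np

def trial_count_pmf(t, p, k, d):
    """P(T = t | p) for a test accepting at k straight successes and
    rejecting at d total failures; state = (success run c, failures f)."""
    probs = {(0, 0): 1.0}
    absorbed = 0.0
    for step in range(1, t + 1):
        nxt, absorbed = {}, 0.0
        for (c, f), pr in probs.items():
            if c + 1 == k:                      # success ends the test (accept)
                if step == t:
                    absorbed += pr * p
            else:
                nxt[(c + 1, f)] = nxt.get((c + 1, f), 0.0) + pr * p
            if f + 1 == d:                      # failure ends the test (reject)
                if step == t:
                    absorbed += pr * (1 - p)
            else:
                nxt[(0, f + 1)] = nxt.get((0, f + 1), 0.0) + pr * (1 - p)
        probs = nxt
    return absorbed

def log_post(p, t_obs, k, d):
    """Log posterior of p under a uniform prior."""
    if not 0.0 < p < 1.0:
        return -np.inf
    lik = trial_count_pmf(t_obs, p, k, d)
    return np.log(lik) if lik > 0 else -np.inf

rng = np.random.default_rng(0)
k, d, t_obs = 5, 3, 12              # hypothetical spec and observed trial count
p, draws = 0.5, []
lp = log_post(p, t_obs, k, d)
for _ in range(5000):               # random-walk Metropolis on p
    prop = p + 0.1 * rng.standard_normal()
    lp_new = log_post(prop, t_obs, k, d)
    if np.log(rng.uniform()) < lp_new - lp:
        p, lp = prop, lp_new
    draws.append(p)
print("posterior mean of p:", np.mean(draws[1000:]))
```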
992.
Variable and model selection problems are fundamental to high-dimensional statistical modeling in diverse fields of science. In health studies especially, many potential factors are introduced to explain an outcome variable. This paper addresses high-dimensional statistical modeling through an analysis of the 2005 annual trauma data for Greece. The data set, divided into an experiment set and a control set, consists of 6334 observations on 112 factors covering demographic, transport and intrahospital data, and is used to detect possible risk factors for death. Different model selection techniques are applied to the experiment set, and the deviance on the control set is used to assess the fit of the selected model. The statistical methods employed were non-concave penalized likelihood methods (the smoothly clipped absolute deviation (SCAD), the least absolute shrinkage and selection operator (LASSO), and hard thresholding penalties), generalized linear logistic regression, and best-subset variable selection. The identification of significant variables in large medical data sets, along with the performance and the pros and cons of the various statistical techniques, is discussed. The analysis reveals distinct advantages of the non-concave penalized likelihood methods over traditional model selection techniques.
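As a simple illustration of penalized-likelihood variable selection for a binary outcome, the sketch below fits an L1-penalized (LASSO) logistic regression to simulated data; scikit-learn does not implement the SCAD or hard penalties, so the lasso stands in for the non-concave methods here, and the sample sizes are made up rather than the paper's 6334 × 112.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
n, p = 1000, 30                          # hypothetical sizes
X = rng.standard_normal((n, p))
beta = np.zeros(p)
beta[:4] = [1.5, -1.0, 0.8, -0.6]        # only 4 true risk factors
y = rng.binomial(1, 1 / (1 + np.exp(-(X @ beta))))

Xs = StandardScaler().fit_transform(X)   # penalties need a common scale
fit = LogisticRegression(penalty="l1", solver="liblinear", C=0.05).fit(Xs, y)
selected = np.flatnonzero(fit.coef_.ravel() != 0)
print("selected variables:", selected)   # should recover roughly the first 4
```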
993.
A multivariate non-linear time series model for road safety data is presented and applied in a case study of yearly numbers of fatal accidents (inside and outside urban areas) and of kilometres driven by motor vehicles in the Netherlands between 1961 and 2000. The model accounts for missing entries in the disaggregated numbers of kilometres driven, although the aggregated numbers are observed throughout. It consists of dynamic unobserved factors for exposure and risk that are related in a non-linear way to the number of fatal accidents; its multivariate dimension arises from the inclusion of separate series for inside and outside urban areas. Approximate maximum likelihood methods based on the extended Kalman filter are used to estimate the unknown parameters, and the latent factors are estimated by extended smoothing methods. It is concluded that the salient features of the observed time series are captured by the model in a satisfactory way.
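The extended Kalman filter linearizes a non-linear observation equation around the current state prediction. Below is a minimal one-dimensional sketch with a random-walk latent state and an exponential observation equation, loosely echoing the exposure/risk structure described above; all values are invented, and this is not the authors' multivariate model.

```python
import numpy as np

def ekf(y, q, r, x0, p0):
    """1-D EKF for x_t = x_{t-1} + w_t, y_t = exp(x_t) + v_t."""
    x, P = x0, p0
    filtered = []
    for obs in y:
        P = P + q                       # predict (random-walk state)
        H = np.exp(x)                   # linearize h(x) = exp(x): dh/dx
        S = H * P * H + r               # innovation variance
        K = P * H / S                   # Kalman gain
        x = x + K * (obs - np.exp(x))   # measurement update
        P = (1 - K * H) * P
        filtered.append(x)
    return np.array(filtered)

rng = np.random.default_rng(2)
true_x = np.cumsum(0.05 * rng.standard_normal(40)) + 3.0
y = np.exp(true_x) + rng.normal(0, 2.0, size=40)
print(ekf(y, q=0.05**2, r=4.0, x0=3.0, p0=1.0)[-5:])
```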
994.
The analysis of word frequency count data can be very useful in authorship attribution problems. Zero-truncated generalized inverse Gaussian-Poisson (GIG-Poisson) mixture models are helpful for such data because their estimated mixing densities can serve as estimates of the density of the word frequencies of the vocabulary. This model is found to provide excellent fits for the word frequency counts of very long texts, where the truncated inverse Gaussian-Poisson special case fails because it does not allow for the large degree of over-dispersion in the data. The roles played by the three parameters of the truncated GIG-Poisson model are also explored. A second goal is to compare the fit of the truncated GIG-Poisson mixture model with that of the model obtained by switching the order of the mixing and truncation stages; a heuristic interpretation of the mixing distribution estimates obtained under this alternative GIG-truncated-Poisson mixture model is also provided.
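To make the zero-truncation step concrete, here is a sketch fitting a zero-truncated mixed-Poisson model to invented word frequency counts by maximum likelihood. For simplicity it uses the gamma-Poisson (negative binomial) mixture rather than the three-parameter GIG-Poisson of the paper; the truncation correction, dividing out the probability of the unseen zero count, is the same idea.

```python
import numpy as np
from scipy import optimize, stats

# invented word frequency counts: many hapaxes, a long right tail
counts = np.array([1]*2000 + [2]*700 + [3]*300 + [4]*150 + [5]*80 + [10]*20)

def neg_loglik(theta):
    r = np.exp(theta[0])                    # keep r > 0
    p = 1 / (1 + np.exp(-theta[1]))        # keep 0 < p < 1
    logpmf = stats.nbinom.logpmf(counts, r, p)
    log_trunc = np.log1p(-stats.nbinom.pmf(0, r, p))  # drop the unseen zeros
    return -(logpmf - log_trunc).sum()

res = optimize.minimize(neg_loglik, x0=[0.0, 0.0], method="Nelder-Mead")
r_hat = np.exp(res.x[0])
p_hat = 1 / (1 + np.exp(-res.x[1]))
print("fitted r, p:", r_hat, p_hat)
```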
995.
In this paper we consider the estimation of a density function on the basis of a random stratified sample from weighted distributions. We propose a linear wavelet density estimator and prove its consistency. The behaviour of the proposed estimator and its smoothed versions is illustrated by simulated examples and by a case study involving blood alcohol level in driving-under-the-influence (DUI) cases.
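A linear wavelet density estimator projects the empirical measure onto a scaling-function space: f̂(x) = Σ_k ĉ_{j,k} φ_{j,k}(x) with ĉ_{j,k} = n⁻¹ Σ_i φ_{j,k}(X_i). The sketch below uses the Haar scaling function, for which the estimator reduces to a dyadic histogram; the paper's stratified, weighted-distribution setting is not reproduced here.

```python
import numpy as np

def haar_linear_density(x_grid, sample, level=4):
    """Linear wavelet density estimate with Haar phi_{j,k}(x) = 2^{j/2} on
    [k/2^j, (k+1)/2^j); reduces to a dyadic histogram at resolution level j."""
    scale = 2.0 ** level
    k_sample = np.floor(scale * sample)          # dyadic bin of each X_i
    est = np.zeros_like(x_grid)
    for k in np.unique(k_sample):
        c_jk = np.mean(k_sample == k) * np.sqrt(scale)   # empirical coefficient
        in_bin = (x_grid >= k / scale) & (x_grid < (k + 1) / scale)
        est[in_bin] += c_jk * np.sqrt(scale)     # c_{j,k} * phi_{j,k}(x)
    return est

rng = np.random.default_rng(3)
sample = rng.beta(2, 5, size=500)                # data supported on [0, 1]
grid = np.linspace(0, 1, 256, endpoint=False)
print(haar_linear_density(grid, sample).round(2)[:8])
```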
996.
First, to test for the existence of random effects in semiparametric mixed models (SMMs) under only moment conditions on the random effects and errors, we propose a very simple, easily implemented non-parametric test based on the difference between two estimators of the error variance: one estimator is consistent only under the null, while the other is consistent under both the null and the alternatives. Instead of erroneously treating this as a non-standard two-sided testing problem, as in most papers in the literature, we solve it correctly and prove that the asymptotic distribution of our test statistic is standard normal. This avoids the Monte Carlo approximation of p-values required by many existing methods, and the test can detect local alternatives approaching the null at rates up to root-n. Second, since the standardizing constant involves higher moments of the error, which must therefore be estimated, we propose a general method for conveniently estimating any moments of the error. Finally, a simulation study and a real data analysis are conducted to investigate the properties of our procedures.
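A toy illustration of the variance-difference idea (not the authors' semiparametric construction): in a one-way layout, the error variance estimated from within-group spread is unaffected by the random effects, whereas the pooled variance around the grand mean is inflated by them, so a positive gap between the two estimates signals the presence of random effects.

```python
import numpy as np

rng = np.random.default_rng(4)
m, n = 200, 5                                    # groups, obs per group
b = rng.normal(0, 0.5, size=m)                   # random effects; set scale 0 for the null
y = b[:, None] + rng.normal(0, 1.0, size=(m, n)) # y_ij = b_i + e_ij

var_within = np.mean(np.var(y, axis=1, ddof=1))  # consistent for var(e) always
var_pooled = np.var(y.ravel(), ddof=1)           # consistent only if var(b) = 0
print("within-group estimate:", var_within)
print("pooled estimate:      ", var_pooled)
print("gap (close to var of random effect):", var_pooled - var_within)
```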
997.
We consider the functional non-parametric regression model Y = r(χ) + ε, where the response Y is univariate, χ is a functional covariate (i.e. valued in some infinite-dimensional space), and the error ε satisfies E(ε | χ) = 0. For this model, the pointwise asymptotic normality of a kernel estimator r̂(·) of r(·) has been proved in the literature. To use this result to build pointwise confidence intervals for r(·), the asymptotic variance and bias of r̂(·) need to be estimated; however, the functional covariate setting makes this task very hard. To circumvent the estimation of these quantities, we propose a bootstrap procedure to approximate the distribution of r̂(·). Both a naive and a wild bootstrap procedure are studied, and their asymptotic validity is proved. The consistency results obtained are discussed from a practical point of view via a simulation study. Finally, the wild bootstrap procedure is applied to a food-industry quality problem to compute pointwise confidence intervals.
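A minimal wild-bootstrap sketch for pointwise confidence intervals around a Nadaraya-Watson kernel regression estimate; a scalar covariate stands in for the functional χ, but the resampling logic, perturbing residuals by Rademacher multipliers, is the same.

```python
import numpy as np

rng = np.random.default_rng(5)
n = 200
x = rng.uniform(0, 1, n)
y = np.sin(2 * np.pi * x) + rng.normal(0, 0.3, n)

def nw(x0, x, y, h=0.08):
    """Nadaraya-Watson estimate at x0 with a Gaussian kernel."""
    w = np.exp(-0.5 * ((x - x0) / h) ** 2)
    return np.sum(w * y) / np.sum(w)

x0 = 0.5
r_hat = nw(x0, x, y)
resid = y - np.array([nw(xi, x, y) for xi in x])   # residuals at the data

boot = []
for _ in range(999):
    v = rng.choice([-1.0, 1.0], size=n)            # Rademacher multipliers
    y_star = (y - resid) + resid * v               # wild-bootstrap responses
    boot.append(nw(x0, x, y_star))
ci = np.percentile(boot, [2.5, 97.5])
print(f"r_hat({x0}) = {r_hat:.3f}, 95% CI = [{ci[0]:.3f}, {ci[1]:.3f}]")
```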
998.
In recent decades, econometric models for financial time series have come to play a very important role internationally in analysing the two central themes of modern finance: the handling of risk and the optimization of returns. Linear time series models such as AR(p), MA(q) and ARMA(p, q) are well known, and when no data are missing their parameters can be estimated by classical methods such as least squares and maximum likelihood. When observations are missing in the middle of the series, however, these methods break down. This paper discusses in detail the estimation of the parameters of the ARMA(1,1) model, Z_t = αZ_{t-1} + ε_t − βε_{t-1}, when some data are missing.
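One standard route to ARMA estimation with internal gaps is the state-space form, whose Kalman-filter likelihood simply skips the update step at missing time points. The sketch below takes that route via statsmodels (whose state-space models accept NaN entries natively); it illustrates the problem setting, not necessarily the estimator derived in the paper.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(6)
e = rng.standard_normal(500)
z = np.zeros(500)
for t in range(1, 500):                 # Z_t = 0.6 Z_{t-1} + e_t - 0.3 e_{t-1}
    z[t] = 0.6 * z[t - 1] + e[t] - 0.3 * e[t - 1]

z_missing = z.copy()
z_missing[rng.choice(500, size=50, replace=False)] = np.nan  # 10% gaps

fit = sm.tsa.statespace.SARIMAX(z_missing, order=(1, 0, 1)).fit(disp=False)
print(fit.params)                       # AR and MA estimates despite the gaps
```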
999.
Traditional DEA models treat the input and output weights as fixed variables. To evaluate the efficiency of decision-making units more reasonably, this paper studies how to incorporate richer information about the input and output weights into the efficiency evaluation model. It proposes the concept of variable weights and presents a DEA efficiency evaluation model based on them. The model generalizes the CCR model and, when information on the weights is available, evaluates efficiency more reasonably than CCR does. The difficulty in applying the method lies in accurately measuring the input and output weights and expressing them as functions. A numerical example illustrates the model.
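For reference, here is a minimal sketch of the classical CCR efficiency score (the model this paper generalizes) for each decision-making unit, computed as a linear program in multiplier form; the variable-weight extension, which would constrain the weights further, is not shown, and all input/output figures are invented.

```python
import numpy as np
from scipy.optimize import linprog

X = np.array([[2.0, 3.0, 4.0],       # inputs: one DMU per column
              [3.0, 2.0, 5.0]])
Y = np.array([[1.0, 1.5, 2.0]])      # single output

def ccr_score(j):
    """Input-oriented CCR: max u'y_j  s.t.  v'x_j = 1, u'y_k - v'x_k <= 0."""
    m, s = X.shape[0], Y.shape[0]
    c = np.concatenate([-Y[:, j], np.zeros(m)])      # minimize -u'y_j
    A_ub = np.hstack([Y.T, -X.T])                    # u'y_k - v'x_k <= 0
    b_ub = np.zeros(X.shape[1])
    A_eq = np.concatenate([np.zeros(s), X[:, j]])[None, :]   # v'x_j = 1
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0],
                  bounds=[(0, None)] * (s + m))
    return -res.fun

for j in range(X.shape[1]):
    print(f"DMU {j}: efficiency = {ccr_score(j):.3f}")
```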
1000.
The wide application of big data has had a profound impact on society and is an important driver of change in government governance: big data will spur innovation in governance concepts, governance systems and governance methods. The goal of such governance is to use big data to build a law-based, innovative, clean and service-oriented government. In applying big data to governance innovation, government should optimize its governance structure through data sharing, adjust governance relationships by informatizing administrative services, reshape government processes through technology-enabled service platforms, draw on big data to strengthen its capacity for innovation, and raise the level of law-based governance by putting data applications on a legal footing. Government must therefore actively adapt to the broad trend toward information disclosure and data sharing, and use it to drive reform and innovation in governance and to further improve governance capacity.