61.
We consider the maximum likelihood estimator $\hat{F}_n$ of a distribution function in a class of deconvolution models where the known density of the noise variable is of bounded variation. This class of noise densities contains in particular bounded, decreasing densities. The estimator $\hat{F}_n$ is defined, characterized in terms of Fenchel optimality conditions and computed. Under appropriate conditions, various consistency results for $\hat{F}_n$ are derived, including uniform strong consistency. The Canadian Journal of Statistics 41: 98–110; 2013 © 2012 Statistical Society of Canada
62.
Research on predictive modelling methods for dual heterogeneous data sequences based on the kernel and degree of greyness
By building a DGM(1,1) model for the "kernel" sequence of grey heterogeneous data, the "kernel" of a dual heterogeneous data sequence is predicted. Taking the "kernel" as the basis, and taking the larger of the interval grey numbers' information fields in the dual heterogeneous data sequence as the information field of the prediction, a grey prediction model is constructed for dual heterogeneous data sequences consisting of interval grey numbers and real numbers. This effectively extends the modelling objects of grey prediction models from "homogeneous data" to "dual heterogeneous data". The results help enrich the theoretical system of grey prediction models.
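The abstract does not spell out the construction of the interval-grey-number model; as a rough, generic sketch of the DGM(1,1) step it builds on (not the authors' dual-heterogeneous extension; `dgm11_forecast` is an illustrative name), one might write:

```python
import numpy as np

def dgm11_forecast(x0, steps=1):
    """Fit a DGM(1,1) model to the sequence x0 and forecast future values.

    The model is fitted on the 1-AGO (first-order accumulated) series x1:
        x1(k+1) = b1 * x1(k) + b2
    and forecasts are recovered by first-order differencing.
    """
    x0 = np.asarray(x0, dtype=float)
    x1 = np.cumsum(x0)                       # 1-AGO accumulation
    # Least-squares fit of x1(k+1) = b1 * x1(k) + b2
    B = np.column_stack([x1[:-1], np.ones(len(x1) - 1)])
    b1, b2 = np.linalg.lstsq(B, x1[1:], rcond=None)[0]
    # Extend the accumulated series, then difference back to the original scale
    x1_ext = list(x1)
    for _ in range(steps):
        x1_ext.append(b1 * x1_ext[-1] + b2)
    return np.diff(x1_ext[len(x1) - 1:])
```

For an exactly geometric series such as 1, 2, 4, 8 the fitted recursion is exact and the one-step forecast is 16.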
63.
The most popular approach in extreme value statistics is the modelling of threshold exceedances using the asymptotically motivated generalised Pareto distribution. This approach involves the selection of a high threshold above which the model fits the data well. Sometimes, few observations of a measurement process might be recorded in applications and so selecting a high quantile of the sample as the threshold leads to almost no exceedances. In this paper we propose extensions of the generalised Pareto distribution that incorporate an additional shape parameter while keeping the tail behaviour unaffected. The inclusion of this parameter offers additional structure for the main body of the distribution, improves the stability of the modified scale, tail index and return level estimates to threshold choice and allows a lower threshold to be selected. We illustrate the benefits of the proposed models with a simulation study and two case studies.
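The paper's extended distributions are not reproduced here, but the standard peaks-over-threshold workflow they generalise can be sketched with SciPy (`u`, `exceedances` and `tail_quantile` are illustrative names; the simulated Pareto sample stands in for real data):

```python
import numpy as np
from scipy.stats import genpareto

rng = np.random.default_rng(0)
data = rng.pareto(2.0, size=5000) + 1.0    # heavy-tailed sample, true tail index 0.5

u = np.quantile(data, 0.95)                # high threshold
exceedances = data[data > u] - u           # peaks over threshold

# Fit the GPD to the exceedances, with the location fixed at 0
xi, _, sigma = genpareto.fit(exceedances, floc=0)

p_u = exceedances.size / data.size         # empirical exceedance probability

def tail_quantile(p):
    """Tail quantile at exceedance probability p (requires p < p_u)."""
    return u + genpareto.ppf(1.0 - p / p_u, xi, loc=0, scale=sigma)
```

The instability to threshold choice mentioned in the abstract shows up here as the sensitivity of `xi` and `sigma` to the quantile used for `u`.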
64.
We propose an efficient group sequential monitoring rule for clinical trials. At each interim analysis both efficacy and futility are evaluated through a specified loss structure together with the predicted power. The proposed design is robust to a wide range of priors, and achieves the specified power with a saving of sample size compared to existing adaptive designs. A method is also proposed to obtain a reduced-bias estimator of treatment difference for the proposed design. The new approaches hold great potential for efficiently selecting a more effective treatment in comparative trials. Operating characteristics are evaluated and compared with other group sequential designs in empirical studies. An example is provided to illustrate the application of the method.
65.
Actuarial science is the foundation of insurance development and the technical support of insurance operations. It has a four-hundred-year history abroad but was introduced to China only twenty years ago. For actuarial techniques to develop, innovate, and serve society's needs in China, one must understand the historical background in which actuarial thought arose, trace the threads of actuarial theory's development, and truly grasp the essence of actuarial thinking. On this basis, this paper introduces the leading figures and academic ideas of each period in the development of actuarial science, describes the influence of actuarial techniques on insurance in each period, and analyses and reviews the historical process by which actuarial science intersected and merged with compound-interest theory, mathematics, statistics, computing technology, and financial economics.
66.
In view of the characteristics of science-and-technology award evaluation, an improved Dempster–Shafer (D–S) evidence combination rule is used to convert the experts' indicator ratings into composite indicator scores that form a decision matrix; the TOPSIS model is then used to obtain the ideal and negative-ideal solutions, compute distances and closeness coefficients, and rank the evaluated projects. Empirical results show that the model handles the uncertainty in the award evaluation process well and offers a new approach to the comprehensive evaluation of science-and-technology awards.
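The TOPSIS ranking step described above (ideal and negative-ideal solutions, distances, closeness coefficients) can be sketched generically as follows; the D–S evidence-combination stage that produces the decision matrix is omitted, and `topsis` is an illustrative name:

```python
import numpy as np

def topsis(decision_matrix, weights, benefit=None):
    """Rank alternatives by TOPSIS closeness to the ideal solution.

    Rows are alternatives, columns are criteria; `benefit` marks
    larger-is-better criteria (default: all of them).
    """
    X = np.asarray(decision_matrix, dtype=float)
    w = np.asarray(weights, dtype=float)
    if benefit is None:
        benefit = np.ones(X.shape[1], dtype=bool)
    # Vector-normalise each criterion column, then apply the weights
    V = w * X / np.linalg.norm(X, axis=0)
    # Ideal and negative-ideal solutions, criterion by criterion
    ideal = np.where(benefit, V.max(axis=0), V.min(axis=0))
    neg_ideal = np.where(benefit, V.min(axis=0), V.max(axis=0))
    d_pos = np.linalg.norm(V - ideal, axis=1)
    d_neg = np.linalg.norm(V - neg_ideal, axis=1)
    # Closeness coefficient in [0, 1]; larger means closer to the ideal
    return d_neg / (d_pos + d_neg)
```

An alternative that coincides with the ideal solution gets closeness 1, one that coincides with the negative-ideal gets 0.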
67.
A. Ferreira, L. de Haan & L. Peng 《Statistics》2013, 47(5): 401–434
One of the major aims of one-dimensional extreme-value theory is to estimate quantiles outside the sample or at the boundary of the sample. The underlying idea of any method to do this is to estimate a quantile well inside the sample but near the boundary and then to shift it somehow to the right place. The choice of this "anchor quantile" plays a major role in the accuracy of the method. We present a bootstrap method to achieve the optimal choice of sample fraction in either high-quantile or endpoint estimation, which extends earlier results by Hall and Weissman (1997) in the case of high-quantile estimation. We give detailed results for the estimators used by Dekkers et al. (1989). An alternative way of attacking problems like this one is given in a paper by Drees and Kaufmann (1998).
68.
Egmar Rödel 《Statistics》2013, 47(4): 573–585
Normed bivariate density functions were introduced by Hoeffding (1940/41). In the present paper, estimators for normed bivariate density functions are constructed, based on ranks and on a Fourier series expansion in Legendre polynomials. The estimation of normed bivariate density functions under positive dependence is also described.
69.
This article develops a new cumulative sum statistic to identify aberrant behavior in a sequentially administered multiple-choice standardized examination. The examination responses can be described as finite Poisson trials, and the statistic can be used for other applications that fit this framework. The standardized examination setting uses a maximum likelihood estimate of examinee ability and an item response theory model. Aberrant and non-aberrant probabilities are computed by an odds ratio, analogous to risk-adjusted CUSUM schemes. The significance level of a hypothesis test, where the null hypothesis is non-aberrant examinee behavior, is computed with Markov chains. A smoothing process is used to spread probabilities across the Markov states. The practicality of the approach for detecting aberrant examinee behavior is demonstrated with results from both simulated and empirical data.
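The odds-ratio CUSUM referred to above resembles the standard risk-adjusted (Steiner-type) scheme for Bernoulli trials, sketched below; the IRT ability estimation and the Markov-chain significance computation are omitted, and `risk_adjusted_cusum` is an illustrative name:

```python
import math

def risk_adjusted_cusum(outcomes, probs, odds_ratio=2.0):
    """Risk-adjusted CUSUM path for Bernoulli trials.

    For each trial with outcome y in {0, 1} and null success probability p,
    the log-likelihood-ratio weight against the alternative odds ratio R is
        w = log( R**y / (1 - p + R * p) ),
    and the statistic accumulates as C_t = max(0, C_{t-1} + w).
    An alarm would be raised when the path exceeds a control limit h
    (chosen here by the Markov-chain computation the abstract mentions).
    """
    c, path = 0.0, []
    for y, p in zip(outcomes, probs):
        w = math.log(odds_ratio**y / (1.0 - p + odds_ratio * p))
        c = max(0.0, c + w)
        path.append(c)
    return path
```

A run of successes at null probability 0.5 drives the path upward; a run of failures keeps it pinned at zero.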
70.
The cost and time consumption of many industrial experiments can be reduced using the class of supersaturated designs, which can be used for screening out the important factors from a large set of potentially active variables. A supersaturated design is a design for which there are fewer runs than effects to be estimated. Although construction methods for supersaturated designs have been widely studied, their analysis methods are still at an early research stage. In this article, we propose a method for analyzing data using a correlation-based measure, named symmetrical uncertainty. This measure comes from the information theory field and is the main idea behind variable selection algorithms developed in data mining. In this work, symmetrical uncertainty is used from another viewpoint in order to determine the important factors more directly. The method enables us to use supersaturated designs for analyzing data of generalized linear models for a Bernoulli response. We evaluate our method by using some of the existing supersaturated designs obtained according to the methods proposed by Tang and Wu (1997) as well as by Koukouvinos et al. (2008). The comparison is performed by simulation experiments, and the Type I and Type II error rates are calculated. Additionally, Receiver Operating Characteristic (ROC) curve methodology is applied as an additional statistical tool for performance evaluation.
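Symmetrical uncertainty itself is a standard information-theoretic measure, SU(X, Y) = 2 I(X; Y) / (H(X) + H(Y)), normalised to [0, 1]. A minimal sketch for discrete data (plug-in entropy estimates; not the paper's full screening procedure):

```python
import numpy as np
from collections import Counter

def entropy(values):
    """Plug-in Shannon entropy (bits) of a discrete sample."""
    n = len(values)
    return -sum((c / n) * np.log2(c / n) for c in Counter(values).values())

def symmetrical_uncertainty(x, y):
    """SU(X, Y) = 2 * I(X; Y) / (H(X) + H(Y)), in [0, 1]."""
    hx, hy = entropy(x), entropy(y)
    hxy = entropy(list(zip(x, y)))   # joint entropy via paired symbols
    mi = hx + hy - hxy               # mutual information
    denom = hx + hy
    return 2.0 * mi / denom if denom > 0 else 0.0
```

SU equals 1 when the two variables determine each other and 0 when they are independent in the sample, which is what makes it usable as a factor-screening score.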