Search results: 6,938 records in total (search time: 15 ms)
31.
1. Introduction. Data mining is an emerging discipline that has arisen in recent years alongside advances in artificial intelligence and database technology. It is a high-level process of extracting implicit, credible, novel, and useful information from large volumes of data. Association rules (Association Rule) are an important research topic within it and one of the principal techniques of data mining, and also, in unsupervised learning systems…
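The support/confidence framework behind association rules can be sketched in a few lines (a toy illustration with invented transactions and thresholds, not code from the article):

```python
from itertools import combinations

# Toy transactional database; each transaction is a set of purchased items.
transactions = [
    {"bread", "milk"},
    {"bread", "butter", "milk"},
    {"bread", "butter"},
    {"milk", "butter"},
    {"bread", "milk", "butter"},
]

def support(itemset, db):
    """Fraction of transactions that contain every item in `itemset`."""
    return sum(itemset <= t for t in db) / len(db)

def confidence(antecedent, consequent, db):
    """Estimated P(consequent | antecedent)."""
    return support(antecedent | consequent, db) / support(antecedent, db)

# Enumerate single-item rules that clear both thresholds.
min_sup, min_conf = 0.4, 0.6
items = sorted(set().union(*transactions))
rules = []
for a, c in combinations(items, 2):
    for ant, con in ((frozenset({a}), frozenset({c})), (frozenset({c}), frozenset({a}))):
        if (support(ant | con, transactions) >= min_sup
                and confidence(ant, con, transactions) >= min_conf):
            rules.append((set(ant), set(con)))
```

A full Apriori implementation would prune candidate itemsets by support before generating rules; this sketch simply enumerates single-item antecedents.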
32.
This article combines statistical thinking with rough set theory and proposes several effective methods for the problem of compressing attribute items in transactional databases: importance-based attribute compression, dependency-based attribute compression, generalized linear analysis and compression of attribute items, and attribute compression based on multiple correlation, thereby achieving the goal of database compression.
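The dependency-based attribute compression mentioned above can be illustrated with the rough-set dependency degree gamma (a toy decision table invented for illustration; the article's actual procedures are more elaborate):

```python
from itertools import combinations

# Toy decision table: rows are objects, `a`/`b`/`c` are condition attributes,
# `d` is the decision attribute.
table = [
    {"a": 1, "b": 0, "c": 1, "d": "yes"},
    {"a": 1, "b": 1, "c": 1, "d": "yes"},
    {"a": 0, "b": 0, "c": 1, "d": "no"},
    {"a": 0, "b": 1, "c": 0, "d": "no"},
    {"a": 1, "b": 0, "c": 0, "d": "yes"},
]

def partition(rows, attrs):
    """Group row indices into equivalence classes under the chosen attributes."""
    blocks = {}
    for i, r in enumerate(rows):
        blocks.setdefault(tuple(r[a] for a in attrs), []).append(i)
    return list(blocks.values())

def dependency(rows, attrs, decision="d"):
    """gamma(attrs -> decision): fraction of objects whose class attrs determine."""
    pos = 0
    for block in partition(rows, attrs):
        if len({rows[i][decision] for i in block}) == 1:
            pos += len(block)
    return pos / len(rows)

# Compression: keep the smallest attribute subset with the full dependency degree.
full = dependency(table, ("a", "b", "c"))
reduct = None
for k in range(1, 4):
    for subset in combinations(("a", "b", "c"), k):
        if dependency(table, subset) == full:
            reduct = subset
            break
    if reduct:
        break
```

Attributes outside the reduct can be dropped without losing any classification power on this table.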
33.
In the history of mathematical statistics, perhaps the most important event was the discovery of the distribution of observational errors. In the search for this distribution, statisticians created many useful statistical theories and methods, but priority for the discovery ultimately belongs to the great German scientist Gauss. This article presents Gauss's line of reasoning in discovering the distribution of observational errors, in the hope that readers will draw useful inspiration from the thinking of this genius.
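A standard reconstruction of Gauss's argument (the classical textbook version, not taken from the article itself): postulating that the arithmetic mean of repeated observations is the most probable value of the measured quantity forces the error density to satisfy a simple differential equation whose only integrable solution is the normal law.

```latex
% Gauss's postulate: for observations x_1,\dots,x_n of a location p, the
% arithmetic mean \bar{x} maximizes the likelihood \prod_i \varphi(x_i - p).
% Differentiating the log-likelihood at p = \bar{x} forces, for every error \Delta,
\frac{\varphi'(\Delta)}{\varphi(\Delta)} = k\,\Delta
% Integrating, then requiring integrability (k = -2h^2 < 0) and total mass one:
\varphi(\Delta) = \frac{h}{\sqrt{\pi}}\, e^{-h^{2}\Delta^{2}}
```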
34.
Bayesian palaeoclimate reconstruction   Cited by: 1 (self-citations: 0, citations by others: 1)
Summary.  We consider the problem of reconstructing prehistoric climates by using fossil data that have been extracted from lake sediment cores. Such reconstructions promise to provide one of the few ways to validate modern models of climate change. A hierarchical Bayesian modelling approach is presented and its use, inversely, is demonstrated in a relatively small but statistically challenging exercise: the reconstruction of prehistoric climate at Glendalough in Ireland from fossil pollen. This computationally intensive method extends current approaches by explicitly modelling uncertainty and reconstructing entire climate histories. The statistical issues that are raised relate to the use of compositional data (pollen) with covariates (climate) which are available at many modern sites but are missing for the fossil data. The compositional data arise as mixtures and the missing covariates have a temporal structure. Novel aspects of the analysis include a spatial process model for compositional data, local modelling of lattice data, the use, as a prior, of a random walk with long-tailed increments, a two-stage implementation of the Markov chain Monte Carlo approach and a fast approximate procedure for cross-validation in inverse problems. We present some details, contrasting its reconstructions with those generated by a method currently used in the palaeoclimatology literature. We suggest that the method provides a basis for resolving important challenging issues in palaeoclimate research. We draw attention to several challenging statistical issues that need to be overcome.
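The "random walk with long-tailed increments" used as a prior can be sketched by drawing increments from a Student-t distribution (an illustrative simulation only, not the paper's model; the scale and degrees of freedom are invented):

```python
import math
import random

random.seed(1)

def student_t(df):
    """Sample a Student-t variate as N(0,1) / sqrt(chi^2_df / df)."""
    z = random.gauss(0.0, 1.0)
    chi2 = random.gammavariate(df / 2.0, 2.0)  # chi-square with df degrees of freedom
    return z / math.sqrt(chi2 / df)

def random_walk_prior(n, scale=0.5, df=3):
    """One prior draw of a climate history from a heavy-tailed random walk.

    Heavy-tailed (t-distributed) increments keep the path mostly smooth
    while still permitting the occasional abrupt climate shift, which a
    Gaussian random walk would heavily penalize.
    """
    path = [0.0]
    for _ in range(n - 1):
        path.append(path[-1] + scale * student_t(df))
    return path

history = random_walk_prior(200)
```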
35.
This is a comparative study of various clustering and classification algorithms as applied to differentiate cancer and non-cancer protein samples using mass spectrometry data. Our study demonstrates the usefulness of a feature selection step prior to applying a machine learning tool. A natural and common choice of a feature selection tool is the collection of marginal p-values obtained from t-tests for testing the intensity differences at each m/z ratio in the cancer versus non-cancer samples. We study the effect of selecting a cutoff in terms of the overall Type 1 error rate control on the performance of the clustering and classification algorithms using the significant features. For the classification problem, we also considered m/z selection using the importance measures computed by the Random Forest algorithm of Breiman. Using a data set of proteomic analysis of serum from ovarian cancer patients and serum from cancer-free individuals in the Food and Drug Administration and National Cancer Institute Clinical Proteomics Database, we undertake a comparative study of the net effect of the machine learning algorithm–feature selection tool–cutoff criteria combination on the performance as measured by an appropriate error rate measure.
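The marginal t-test feature selection step can be sketched as follows (synthetic data and a top-k rule stand in for the paper's Type 1 error cutoff; the Welch statistic is computed by hand to keep the example self-contained):

```python
import math
import random

random.seed(0)

# Synthetic "spectra": 15 samples per group, 50 m/z features; only features
# 0-4 truly differ between the groups (shifted mean).
n_per_group, n_feat = 15, 50

def sample(shifted):
    return [random.gauss(2.0 if shifted and j < 5 else 0.0, 1.0) for j in range(n_feat)]

cancer = [sample(True) for _ in range(n_per_group)]
control = [sample(False) for _ in range(n_per_group)]

def welch_t(x, y):
    """Welch two-sample t statistic (unequal variances)."""
    nx, ny = len(x), len(y)
    mx, my = sum(x) / nx, sum(y) / ny
    vx = sum((v - mx) ** 2 for v in x) / (nx - 1)
    vy = sum((v - my) ** 2 for v in y) / (ny - 1)
    return (mx - my) / math.sqrt(vx / nx + vy / ny)

# Marginal test at every m/z position, then keep the most discriminative features.
stats = []
for j in range(n_feat):
    t = welch_t([row[j] for row in cancer], [row[j] for row in control])
    stats.append((abs(t), j))
selected = [j for _, j in sorted(stats, reverse=True)[:5]]
```

In the paper the retained set is determined by a p-value cutoff controlling the overall Type 1 error rate rather than a fixed top-k, and the selected features then feed the downstream clustering or classification algorithm.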
36.
The commonly used survey technique of clustering introduces dependence into sample data. Such data are frequently used in economic analysis, though the dependence induced by the sample structure is often ignored. In this paper, the effect of clustering on the non-parametric kernel estimate of the density, f(x), is examined. The window width commonly used for density estimation with i.i.d. data is shown to no longer be optimal. A new optimal bandwidth using a higher-order kernel is proposed and is shown to give a smaller integrated mean squared error than two window widths widely used for i.i.d. data. Several illustrations from simulation are provided.
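The i.i.d. baseline the paper improves on can be sketched as a Gaussian kernel density estimate with a common rule-of-thumb window width (Silverman's rule is used here as one standard choice; the paper's clustered-data bandwidth is different):

```python
import math
import random

random.seed(42)

# i.i.d. sample from N(0, 1); with clustered survey data the window width
# below is no longer optimal, which is the paper's point of departure.
data = [random.gauss(0.0, 1.0) for _ in range(500)]

def silverman_bandwidth(x):
    """Rule-of-thumb window width h = 1.06 * sd * n^(-1/5) for i.i.d. data."""
    n = len(x)
    m = sum(x) / n
    sd = math.sqrt(sum((v - m) ** 2 for v in x) / (n - 1))
    return 1.06 * sd * n ** (-0.2)

def kde(x, x0, h):
    """Gaussian-kernel density estimate f_hat(x0) with window width h."""
    n = len(x)
    return sum(math.exp(-0.5 * ((x0 - v) / h) ** 2) for v in x) / (n * h * math.sqrt(2 * math.pi))

h = silverman_bandwidth(data)
f0 = kde(data, 0.0, h)  # should land near the true N(0,1) density at 0 (about 0.399)
```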
37.
Manufacturers want to assess the quality and reliability of their products. Specifically, they want to know the exact number of failures from the sales transacted during a particular month. Information available today is sometimes incomplete, as many companies analyze their failure data simply by comparing sales for a total month from a particular department with the total number of claims registered for that given month. This information, called marginal count data, is thus incomplete, as it does not give the exact number of failures of the specific products that were sold in a particular month. In this paper we discuss nonparametric estimation of the mean numbers of failures for repairable products and the failure probabilities for nonrepairable products. We present a nonhomogeneous Poisson process model for repairable products and a multinomial model and its Poisson approximation for nonrepairable products. A numerical example is given and a simulation is carried out to evaluate the proposed methods of estimating failure probabilities under a number of possible situations.
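The marginal count structure can be illustrated with a simple moment-based unfolding (an illustrative identity under noiseless counts, not the authors' multinomial or Poisson estimator; the sales figures and failure probabilities are invented):

```python
# Monthly sales and total warranty claims (marginal counts): we never observe
# which month's sales a claim came from, only the monthly totals.
sales = [100, 120, 90, 110]          # units sold in months 0..3
true_p = [0.05, 0.03, 0.01, 0.005]   # prob. of failing d months after sale

# Expected marginal claim totals: E[claims[t]] = sum_d sales[t - d] * p[d]
claims = [sum(sales[t - d] * true_p[d] for d in range(t + 1)) for t in range(4)]

def unfold(sales, claims):
    """Recover age-specific failure probabilities p[d] from marginal counts.

    The d = t term of the convolution has coefficient sales[0], so the
    system is triangular and can be solved forward month by month.
    """
    p = []
    for t in range(len(claims)):
        residual = claims[t] - sum(sales[t - d] * p[d] for d in range(t))
        p.append(residual / sales[0])
    return p
```

With noisy real claims the residuals need smoothing or constrained estimation, which is where the paper's multinomial model and its Poisson approximation come in.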
38.
Pan, Wei; Connett, John E. Lifetime Data Analysis, 2001, 7(2): 111-123
We extend Wei and Tanner's (1991) multiple imputation approach in semi-parametric linear regression for univariate censored data to clustered censored data. The main idea is to iterate the following two steps: 1) use data augmentation to impute censored failure times; 2) fit a linear model to the imputed complete data, taking into account the clustering among failure times. In particular, we propose using generalized estimating equations (GEE) or a linear mixed-effects model to implement the second step. Through simulation studies our proposal compares favorably to the independence approach (Lee et al., 1993), which ignores the within-cluster correlation in estimating the regression coefficient. Our proposal is easy to implement using existing software.
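The impute-then-refit iteration can be sketched for a single cluster, with deterministic conditional-mean imputation standing in for the paper's data augmentation draws (the data, residual scale, and plain OLS refit are all invented simplifications; the paper samples imputed times and uses GEE or a mixed model at the refit step):

```python
import math

def phi(z):
    """Standard normal density."""
    return math.exp(-0.5 * z * z) / math.sqrt(2 * math.pi)

def Phi(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

# Toy data: (covariate, observed time, censored?). A censored record only
# tells us the true failure time exceeds the recorded time.
data = [(0.0, 1.0, False), (1.0, 2.1, False), (2.0, 2.5, True),
        (3.0, 4.2, False), (4.0, 4.0, True)]

def ols(pairs):
    """Least-squares fit y = a + b*x; returns (a, b)."""
    n = len(pairs)
    mx = sum(x for x, _ in pairs) / n
    my = sum(y for _, y in pairs) / n
    sxx = sum((x - mx) ** 2 for x, _ in pairs)
    sxy = sum((x - mx) * (y - my) for x, y in pairs)
    b = sxy / sxx
    return my - b * mx, b

sigma = 0.5  # assumed residual scale for the imputation step
a, b = ols([(x, t) for x, t, _ in data])  # initial fit ignoring censoring
for _ in range(20):  # iterate impute -> refit until the fit stabilizes
    imputed = []
    for x, t, cens in data:
        if cens:
            z = (t - (a + b * x)) / sigma
            # conditional mean of a normal residual beyond the censoring point
            t = (a + b * x) + sigma * phi(z) / (1 - Phi(z))
        imputed.append((x, t))
    a, b = ols(imputed)
```

Each imputed time necessarily exceeds its recorded censoring time, and the fit settles after a few iterations.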
39.
Most regression problems in practice require flexible semiparametric forms of the predictor for modelling the dependence of responses on covariates. Moreover, it is often necessary to add random effects accounting for overdispersion caused by unobserved heterogeneity or for correlation in longitudinal or spatial data. We present a unified approach for Bayesian inference via Markov chain Monte Carlo simulation in generalized additive and semiparametric mixed models. Different types of covariates, such as the usual covariates with fixed effects, metrical covariates with non-linear effects, unstructured random effects, trend and seasonal components in longitudinal data and spatial covariates, are all treated within the same general framework by assigning appropriate Markov random field priors with different forms and degrees of smoothness. We applied the approach in several case-studies and consulting cases, showing that the methods are also computationally feasible in problems with many covariates and large data sets. In this paper, we choose two typical applications.
40.
The use of complex sampling designs in population-based case–control studies is becoming more common, particularly for sampling the control population. This is prompted by all the usual cost and logistical benefits that are conferred by multistage sampling. Complex sampling has often been ignored in analysis but, with the advent of packages like SUDAAN, survey-weighted analyses that take account of the sample design can be carried out routinely. This paper explores this approach and more efficient alternatives, which can also be implemented by using readily available software.
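The survey-weighted idea can be illustrated with a Horvitz-Thompson style weighted proportion (the strata, exposure indicators, and design weights are invented; packages like SUDAAN additionally produce design-based variances):

```python
# Stratified control sample: stratum A is heavily oversampled relative to B.
# Each record: (stratum, exposed?, design weight = 1 / inclusion probability).
controls = [
    ("A", 1, 2.0), ("A", 1, 2.0), ("A", 1, 2.0), ("A", 0, 2.0),
    ("B", 0, 10.0), ("B", 0, 10.0),
]

def weighted_proportion(records):
    """Survey-weighted (Horvitz-Thompson) estimate of the exposure proportion."""
    total_weight = sum(w for _, _, w in records)
    return sum(w * y for _, y, w in records) / total_weight

unweighted = sum(y for _, y, _ in controls) / len(controls)  # ignores the design
weighted = weighted_proportion(controls)                      # accounts for it
```

Because the oversampled stratum is also the highly exposed one, the unweighted proportion badly overstates population exposure here; the weighted estimate corrects for the design.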
Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号