991.
We introduce and study the so-called Kumaraswamy generalized gamma distribution, which is capable of modeling bathtub-shaped hazard rate functions. The beauty and importance of this distribution lie in its ability to model both monotone and non-monotone failure rate functions, which are quite common in lifetime data analysis and reliability. The new distribution has a large number of well-known lifetime distributions as special sub-models, such as the exponentiated generalized gamma, exponentiated Weibull, exponentiated generalized half-normal, exponentiated gamma and generalized Rayleigh, among others. Some structural properties of the new distribution are studied. We obtain two infinite sum representations for the moments and an expansion for the generating function. We calculate the density function of the order statistics and an expansion for their moments. The method of maximum likelihood and a Bayesian procedure are adopted for estimating the model parameters. The usefulness of the new distribution is illustrated with two real data sets.
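A minimal sketch of the construction, assuming the standard Kumaraswamy-G family applied to a generalized gamma baseline G (the paper's exact parameterization may differ): with shape parameters a, b > 0,
\[
F(x) = 1 - \bigl\{1 - G(x)^{a}\bigr\}^{b}, \qquad
f(x) = a\,b\,g(x)\,G(x)^{a-1}\bigl\{1 - G(x)^{a}\bigr\}^{b-1},
\]
where g and G denote the density and distribution function of the generalized gamma baseline. The special sub-models listed in the abstract are recovered for particular choices of a, b and the baseline parameters.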
992.
A.R. Montazemi, K.M. Gupta. Omega, 1997, 25(6): 643-658
The objective of this study was to determine the impact of task information (TI) provided by an interface agent during the idea evaluation and integration step of the problem formulation stage of the problem-solving process. The effectiveness assessment was based on solving diagnostic decision problems in the domain of complex industrial machinery. Ten domain experts participated in this study. Decision support was provided by a case-based reasoning system. Findings suggest that TI provided by the interface agent had no effect on the decision makers' performance or on the associated cognitive effort. However, a verbal protocol analysis revealed that the ten subjects used the interface agent to verify their decision processes. The results and their implications are discussed with respect to current findings in the area of decision support systems.
993.
This paper proposes and empirically validates a stages of growth model for the evolution of Information Systems Planning (ISP). A questionnaire survey of senior IS executives is used to gather information pertaining to the stages of growth model, which includes measurement of the nature and level of integration between business planning (BP) and ISP. The del test is used to empirically validate benchmark variables for each stage of BP-ISP integration. The results support the stages of growth model of BP-ISP integration, and the benchmark variables are generally found to be successful in predicting the stage of integration.
994.
Mass spectrometry-based proteomics has become the tool of choice for identifying and quantifying the proteome of an organism. Though recent years have seen a tremendous improvement in instrument performance and the computational tools used, significant challenges remain, and there are many opportunities for statisticians to make important contributions. In the most widely used "bottom-up" approach to proteomics, complex mixtures of proteins are first subjected to enzymatic cleavage; the resulting peptide products are separated based on chemical or physical properties and analyzed using a mass spectrometer. The two fundamental challenges in the analysis of bottom-up MS-based proteomics are: (1) identifying the proteins that are present in a sample, and (2) quantifying the abundance levels of the identified proteins. Both of these challenges require knowledge of the biological and technological context that gives rise to observed data, as well as the application of sound statistical principles for estimation and inference. We present an overview of bottom-up proteomics and outline the key statistical issues that arise in protein identification and quantification.
995.
We estimate individual potential income with stochastic earnings frontiers and measure overqualification as the ratio between actual income and potential income. To do this, we remove a drawback of the IAB employment sample, the censoring of the income data, by multiple imputation. Measuring overqualification by the income ratio is also a valuable addition to the overeducation literature, because the well-established objective and subjective overeducation measures focus on ordinal matching aspects and ignore the metric income and efficiency aspects of overqualification.
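A hedged sketch of the measure under a textbook stochastic earnings frontier; the notation is assumed here, not taken from the paper:
\[
\ln y_i = x_i'\beta + v_i - u_i, \qquad v_i \sim N(0,\sigma_v^2),\; u_i \ge 0,
\]
so that potential income is \( y_i^{*} = \exp(x_i'\beta + v_i) \) and overqualification is measured by the ratio \( y_i / y_i^{*} = \exp(-u_i) \in (0,1] \), with smaller values indicating a larger shortfall of actual income below the individual frontier.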
996.
In this paper, we introduce a new probability model known as the Marshall–Olkin q-Weibull distribution. Various properties of the distribution and its hazard rate function are considered. The distribution is applied to model a biostatistical data set. The corresponding time series models are developed to illustrate its application in time series modeling. We also develop different types of autoregressive processes with minification structure and max–min structure, which can be applied to a rich variety of contexts in real life. Sample path properties are examined and generalizations to higher orders are also made. The model is applied to a time series of daily discharge of the Neyyar river in Kerala, India.
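As a sketch under standard definitions (the paper's exact q-Weibull parameterization and recursion may differ): the Marshall–Olkin family transforms a baseline survival function \(\bar G\) into
\[
\bar F(x) = \frac{\alpha\,\bar G(x)}{1 - (1-\alpha)\,\bar G(x)}, \qquad \alpha > 0,
\]
here with \(\bar G\) the q-Weibull survival function. A common first-order minification structure of the kind mentioned above sets \( X_n = \varepsilon_n \) with probability \( p \) and \( X_n = \min(X_{n-1}, \varepsilon_n) \) with probability \( 1-p \), the innovations \( \varepsilon_n \) being chosen so that the stationary marginal of \( X_n \) is the Marshall–Olkin q-Weibull distribution.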
997.
Non-randomized trials can give a biased impression of the effectiveness of any intervention. We consider trials in which incidence rates are compared in two areas over two periods. Typically, one area receives an intervention, whereas the other does not. We outline and illustrate a method to estimate the bias in such trials under two different bivariate models. The illustrations use data in which no particular intervention is operating. The purpose is to illustrate the size of the bias that could be observed purely due to regression towards the mean (RTM). The illustrations show that the bias can be appreciably different from zero, and even when centred on zero, the variance of the bias can be large. We conclude that the results of non-randomized trials should be treated with caution, as interventions which show small effects could be explained as artefacts of RTM.
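A hedged illustration of the mechanism under a bivariate normal working model (notation assumed here, not the authors'): if baseline and follow-up rates in an area satisfy
\[
\operatorname{E}[\,Y_2 \mid Y_1 = y_1\,] = \mu + \rho\,(y_1 - \mu),
\qquad \text{so} \qquad
\operatorname{E}[\,Y_2 - Y_1 \mid Y_1 = y_1\,] = -(1-\rho)\,(y_1 - \mu),
\]
then an area selected for intervention because its baseline rate \( y_1 \) is high is expected to improve by \( (1-\rho)(y_1-\mu) \) even with no intervention; this expected spontaneous change is the regression-towards-the-mean effect at issue.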
998.
In numerous applications, data are observed at random times and an estimated graph of the spectral density may be relevant for characterizing and explaining phenomena. By using a wavelet analysis, one derives a non-parametric estimator of the spectral density of a Gaussian process with stationary increments (or of a stationary Gaussian process) from the observation of one path at random discrete times. For every positive frequency, this estimator is proved to satisfy a central limit theorem with a convergence rate depending on the roughness of the process and on the moments of the random durations between successive observations. In the case of stationary Gaussian processes, one can compare this estimator with estimators based on the empirical periodogram. Both estimators reach the same optimal rate of convergence, but the estimator based on wavelet analysis converges for a different class of random times. Simulation examples and an application to biological data are also provided.
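As a hedged sketch of the underlying idea, up to Fourier-normalization constants and with notation assumed here rather than taken from the paper: for a stationary Gaussian process \(X\) with spectral density \(f\) and an analyzing wavelet \(\psi\) whose Fourier transform \(\hat\psi\) is concentrated near a frequency \(\xi_0\),
\[
d(a,b) = \frac{1}{\sqrt{a}} \int \psi\!\Bigl(\frac{t-b}{a}\Bigr)\,X(t)\,dt,
\qquad
\operatorname{E}\,d(a,b)^2 = a \int |\hat\psi(a\xi)|^{2}\, f(\xi)\,d\xi \;\approx\; C_\psi\, f(\xi_0/a),
\]
so averaging squared empirical wavelet coefficients over locations \(b\) at a fixed scale \(a\) estimates \(f\) near \(\xi_0/a\). The paper's contribution is to construct such an estimator from one path observed at random times and to establish its central limit theorem.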
999.
The success of interventions designed to address important issues in social and medical science is best assessed by randomized experiments. With human beings, however, there are often complications such as noncompliance and missing data. Such complications are often addressed by statistically invalid methods of analysis, in particular intention-to-treat and per-protocol analyses. Here we address these two complications using a statistically valid approach based on principal stratification with a fully Bayesian analysis. This analysis is applied to a randomized trial of a potentially important intervention designed to reduce the transmission of bacterial colonization between mothers and their infants through vaginal delivery in South Africa: the Prevention of Perinatal Sepsis (PoPs) trial.
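A hedged sketch of the principal stratification framework assumed here, using standard notation rather than the authors': with binary assignment \(Z\) and potential uptake \(D(z)\), each subject's principal stratum \( S = (D(0), D(1)) \) (complier, never-taker, always-taker, defier) is unaffected by assignment, so stratum-specific estimands such as the complier average causal effect
\[
\mathrm{CACE} = \operatorname{E}\bigl[\,Y(1) - Y(0) \mid S = (0,1)\,\bigr]
\]
are well defined; a fully Bayesian analysis then models outcomes within each stratum and integrates over the latent stratum memberships and missing outcomes in the posterior.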