861.
Twenty-one volunteers tested the usability of revisions to the Texas A&M University Libraries' SFX® OpenURL link resolver menus, including the addition of Ex Libris' new bX™ recommendation service and a plug-in that pulls additional information about the journal into the menu. The volunteers also evaluated the quality and desirability of the bX recommendations and discussed their preferences for help options and full-text format. Results of the usability testing are reported along with the resulting menu changes. This study will be of interest to librarians implementing or redesigning OpenURL menus, as well as to those interested in the user experience.
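As background for readers unfamiliar with OpenURL, the sketch below builds a Z39.88-2004 key/encoded-value request of the kind an SFX services menu is generated from. It is a generic illustration only: the resolver base URL and all article metadata are hypothetical placeholders, not Texas A&M's actual endpoint or any data from the study.

```python
from urllib.parse import urlencode

# Hypothetical SFX resolver base URL -- the real endpoint is not given in the abstract.
RESOLVER_BASE = "https://example-university.hosted.exlibrisgroup.com/sfx_local"

# OpenURL 1.0 (Z39.88-2004) key/encoded-value metadata for a journal article.
params = {
    "url_ver": "Z39.88-2004",
    "ctx_ver": "Z39.88-2004",
    "rft_val_fmt": "info:ofi/fmt:kev:mtx:journal",
    "rft.genre": "article",
    "rft.jtitle": "Journal of Example Studies",   # placeholder metadata
    "rft.atitle": "An example article title",
    "rft.issn": "0000-0000",
    "rft.volume": "12",
    "rft.issue": "3",
    "rft.spage": "45",
    "rfr_id": "info:sid/example.org:database",    # the referring source
}

openurl = RESOLVER_BASE + "?" + urlencode(params)
print(openurl)  # the link resolver parses this request and builds the services menu
```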
862.
Longitudinal studies suffer from patient dropout. The dropout process may be informative if there is an association between dropout patterns and the rate of change in the response over time. Multiple patterns are plausible, in that different causes of dropout may contribute to different patterns. These patterns can be dichotomized into two groups: quantitative and qualitative interaction. Quantitative interaction indicates that each of the multiple sources biases the estimate of the rate of change in the same direction, although with differing magnitudes. Qualitative interaction, by contrast, arises when the multiple sources bias the estimate of the rate of change in opposing directions. Qualitative interaction is of special concern, since it is less likely to be detected by conventional methods and can lead to highly misleading slope estimates. We explore a test for qualitative interaction based on simultaneous confidence intervals. The test accommodates the realistic situation in which the reasons for dropout are not fully understood, or even entirely unknown, and it allows for an additional level of clustering among participating subjects. We apply these methods to a study of tumor growth rates in mice and to a longitudinal study of rates of change in cognitive functioning for Alzheimer's patients.
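The paper's test is built on simultaneous confidence intervals; the sketch below is only a crude Bonferroni-style analogue, assuming pattern-specific slope estimates and standard errors are already in hand. It flags qualitative interaction when some simultaneous intervals lie entirely above zero and others entirely below; the function name and example numbers are invented for illustration.

```python
import numpy as np
from scipy import stats

def qualitative_interaction_flag(slopes, std_errors, alpha=0.05):
    """Crude screen for qualitative interaction across dropout patterns.

    slopes, std_errors: pattern-specific slope estimates and their standard errors.
    Builds Bonferroni-adjusted simultaneous confidence intervals and flags qualitative
    interaction when some intervals sit entirely above zero and others entirely below.
    (The paper's actual test is more refined; this is only an illustrative analogue.)
    """
    slopes = np.asarray(slopes, dtype=float)
    std_errors = np.asarray(std_errors, dtype=float)
    k = len(slopes)
    z = stats.norm.ppf(1 - alpha / (2 * k))      # Bonferroni critical value
    lower, upper = slopes - z * std_errors, slopes + z * std_errors
    return bool(np.any(lower > 0) and np.any(upper < 0)), list(zip(lower, upper))

# Example: three dropout patterns whose slopes are biased in opposing directions.
flag, intervals = qualitative_interaction_flag([-0.8, 0.1, 0.9], [0.2, 0.3, 0.25])
print(flag, intervals)
```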
863.
We introduce and study the so-called Kumaraswamy generalized gamma distribution, which is capable of modeling bathtub-shaped hazard rate functions. The beauty and importance of this distribution lie in its ability to model monotone and non-monotone failure rate functions, which are quite common in lifetime data analysis and reliability. The new distribution has a large number of well-known lifetime distributions as special sub-models, such as the exponentiated generalized gamma, exponentiated Weibull, exponentiated generalized half-normal, exponentiated gamma, and generalized Rayleigh, among others. Some structural properties of the new distribution are studied. We obtain two infinite sum representations for the moments and an expansion for the generating function. We also derive the density function of the order statistics and an expansion for their moments. The method of maximum likelihood and a Bayesian procedure are adopted for estimating the model parameters. The usefulness of the new distribution is illustrated with two real data sets.
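The Kumaraswamy-G construction underlying this kind of model takes a baseline density g and CDF G and forms f(x) = a·b·g(x)·G(x)^(a-1)·[1 − G(x)^a]^(b-1). The sketch below applies that construction with SciPy's generalized gamma as the baseline; SciPy's parameterization of the generalized gamma may differ from the paper's, so treat this as an illustration of the family rather than a reimplementation of it.

```python
import numpy as np
from scipy.stats import gengamma

def kw_g_pdf(x, a, b, base=gengamma(a=2.0, c=1.5)):
    """Kumaraswamy-G density: a*b*g(x)*G(x)**(a-1)*(1 - G(x)**a)**(b-1),
    here with a frozen generalized gamma as the baseline G."""
    g, G = base.pdf(x), base.cdf(x)
    return a * b * g * G**(a - 1) * (1.0 - G**a)**(b - 1)

x = np.linspace(0.01, 6.0, 600)
density = kw_g_pdf(x, a=0.5, b=2.0)
print(float(np.sum(density) * (x[1] - x[0])))  # crude integral, should be roughly 1
```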
864.
Heavy-tailed, peaked, and skewed uncertainty phenomena are widely cited in the literature on economic, physical, and engineering data. This fact has invigorated the search for continuous distributions of this nature. In this paper we generalize the two-sided framework presented in Kotz and van Dorp (Beyond Beta: Other Continuous Families of Distributions with Bounded Support and Applications. World Scientific Press, Singapore, 2004) for the construction of families of distributions with bounded support, via a mixture technique utilizing two generating densities instead of one. The family of Elevated Two-Sided Power (ETSP) distributions is studied as an instance of this generalized framework. Through a moment-ratio diagram comparison, we demonstrate that the ETSP family allows for remarkable flexibility when modeling heavy-tailed and peaked, yet skewed, uncertainty phenomena. We demonstrate its applicability via an illustrative example utilizing 2008 US income data.
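For orientation, the sketch below codes the standard two-sided power (TSP) density from the Kotz and van Dorp framework and mixes two such generating densities. The actual ETSP construction in the paper differs in its details; the parameter values here are arbitrary and purely illustrative.

```python
import numpy as np

def tsp_pdf(x, theta, n):
    """Standard two-sided power density on [0, 1] with mode theta and power n
    (van Dorp and Kotz). Vectorized over x."""
    x = np.asarray(x, dtype=float)
    left = n * (x / theta) ** (n - 1)
    right = n * ((1 - x) / (1 - theta)) ** (n - 1)
    return np.where(x <= theta, left, right)

def mixture_pdf(x, w, theta, n1, n2):
    """Illustrative mixture of two TSP-type generating densities, echoing (but not
    reproducing) the two-generating-density construction behind the ETSP family."""
    return w * tsp_pdf(x, theta, n1) + (1 - w) * tsp_pdf(x, theta, n2)

x = np.linspace(0.001, 0.999, 999)
density = mixture_pdf(x, w=0.3, theta=0.35, n1=0.8, n2=6.0)
print(float(np.sum(density) * (x[1] - x[0])))  # crude integral, roughly 1
```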
865.
The purpose of this paper is to highlight some classic issues in the measurement of change and to show how contemporary solutions can be used to deal with them. Five classic issues are raised here: (1) separating individual changes from group differences; (2) options for incomplete longitudinal data over time; (3) options for nonlinear changes over time; (4) measurement invariance in studies of change over time; and (5) new opportunities for modeling dynamic changes. For each issue we describe the problem and then review some contemporary solutions based on Structural Equation Models (SEMs). We fit these SEMs to existing panel data on cognitive variables from the Health and Retirement Study (HRS). This is not intended as an overly technical treatment, so only a few basic equations are presented, examples are displayed graphically, and more complete references to the contemporary solutions are given throughout.
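The models in the paper are SEMs fitted to HRS cognitive variables; as a rough analogue of issue (1), separating individual changes from group differences, the sketch below simulates a linear growth process and fits a random-intercept, random-slope mixed model with statsmodels. The variable names and simulation parameters are made up and do not reflect the HRS data or the authors' specifications.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n_subjects, n_waves = 200, 5

# Simulate a linear latent growth process: person-specific intercepts and slopes.
subject = np.repeat(np.arange(n_subjects), n_waves)
time = np.tile(np.arange(n_waves), n_subjects)
intercepts = rng.normal(50, 5, n_subjects)
slopes = rng.normal(-0.5, 0.4, n_subjects)          # average cognitive decline
score = intercepts[subject] + slopes[subject] * time + rng.normal(0, 2, n_subjects * n_waves)
df = pd.DataFrame({"subject": subject, "time": time, "score": score})

# Random-intercept, random-slope model: a mixed-model analogue of a latent growth SEM.
model = smf.mixedlm("score ~ time", df, groups=df["subject"], re_formula="~time")
result = model.fit()
print(result.summary())
```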
866.
The authors "consider the problem of adjusting provisional time series using a bivariate structural model with correlated measurement errors. Maximum likelihood estimators and a minimum mean squared error adjustment procedure are derived for a provisional and final series containing common trend and seasonal components. The model also includes measurement errors common to both series and errors that are specific to the provisional series. [The authors] illustrate the technique by using provisional data to forecast ischemic heart disease mortality."
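The paper derives its minimum mean squared error adjustment within a bivariate structural model; the sketch below shows only the core shrinkage idea in a deliberately simplified static setting, with made-up numbers and assumed error variances, and should not be read as the authors' procedure.

```python
import numpy as np

def mmse_adjust(provisional, expected_final, var_specific, var_signal):
    """Minimum-MSE (best linear) adjustment of a provisional figure in a toy static
    setting: weight the provisional observation against a model-based expectation of
    the final figure, with the weight shrinking toward the expectation as the
    provisional-specific error variance grows. A simplified stand-in for the
    bivariate structural-model adjustment derived in the paper."""
    w = var_signal / (var_signal + var_specific)   # weight on the provisional value
    return w * provisional + (1 - w) * expected_final

# Example: a provisional mortality rate of 130 per 100,000, a trend-based
# expectation of 122, and variances assumed purely for illustration.
print(mmse_adjust(provisional=130.0, expected_final=122.0,
                  var_specific=25.0, var_signal=16.0))
```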
867.
Mass spectrometry-based proteomics has become the tool of choice for identifying and quantifying the proteome of an organism. Though recent years have seen tremendous improvements in instrument performance and in the computational tools used, significant challenges remain, and there are many opportunities for statisticians to make important contributions. In the most widely used "bottom-up" approach to proteomics, complex mixtures of proteins are first subjected to enzymatic cleavage; the resulting peptide products are then separated based on chemical or physical properties and analyzed using a mass spectrometer. The two fundamental challenges in the analysis of bottom-up MS-based proteomics are (1) identifying the proteins that are present in a sample, and (2) quantifying the abundance levels of the identified proteins. Both of these challenges require knowledge of the biological and technological context that gives rise to the observed data, as well as the application of sound statistical principles for estimation and inference. We present an overview of bottom-up proteomics and outline the key statistical issues that arise in protein identification and quantification.
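One recurring quantification step around which these statistical issues arise is rolling peptide-level intensities up to protein-level summaries. The sketch below shows a minimal log-transform-and-median rollup on a toy table; the column names and intensities are invented, and real pipelines use considerably more careful summarization than a plain median.

```python
import numpy as np
import pandas as pd

# Toy peptide-level intensity table; names and values are illustrative only.
peptides = pd.DataFrame({
    "protein":  ["P1", "P1", "P1", "P2", "P2"],
    "peptide":  ["AAK", "VLR", "GGF", "TTE", "QQW"],
    "intensity_ctrl":    [1.2e6, 9.5e5, 1.1e6, 3.0e5, 2.6e5],
    "intensity_treated": [2.4e6, 2.0e6, 2.1e6, 3.1e5, 2.5e5],
})

# Log2-transform, then summarize each protein by the median of its peptides --
# a common but simplistic rollup of peptide evidence to protein abundance.
log_int = peptides.assign(
    log_ctrl=np.log2(peptides["intensity_ctrl"]),
    log_treated=np.log2(peptides["intensity_treated"]),
)
protein_level = log_int.groupby("protein")[["log_ctrl", "log_treated"]].median()
protein_level["log2_fold_change"] = protein_level["log_treated"] - protein_level["log_ctrl"]
print(protein_level)
```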
868.
869.
Non-randomized trials can give a biased impression of the effectiveness of an intervention. We consider trials in which incidence rates are compared across two areas over two periods; typically, one area receives an intervention whereas the other does not. We outline and illustrate a method for estimating the bias in such trials under two different bivariate models. The illustrations use data in which no particular intervention is operating; the purpose is to illustrate the size of the bias that could be observed purely due to regression towards the mean (RTM). The illustrations show that the bias can be appreciably different from zero and that, even when centred on zero, its variance can be large. We conclude that the results of non-randomized trials should be treated with caution, as interventions that show small effects could be explained as artefacts of RTM.
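A small simulation makes the RTM mechanism concrete: if the "intervention" area is chosen because its first-period rate looked high, its second-period rate will tend to fall even when nothing has changed. The distributional parameters and selection rule below are assumptions chosen for illustration, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(42)
n_sim = 10_000

# True underlying incidence rates per area, plus independent period-specific noise.
true_rate = rng.normal(100, 10, size=n_sim)          # assumed between-area variation
period1 = true_rate + rng.normal(0, 15, size=n_sim)  # observed rate in period 1
period2 = true_rate + rng.normal(0, 15, size=n_sim)  # observed rate in period 2 (no intervention)

# Suppose the "intervention" area is selected because its period-1 rate looked high.
selected = period1 > np.percentile(period1, 80)
apparent_change = period2[selected] - period1[selected]
print(apparent_change.mean())  # negative on average: pure regression towards the mean
```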
870.
We estimate individual potential income with stochastic earnings frontiers and measure overqualification as the ratio of actual income to potential income. To do this, we remove a drawback of the IAB employment sample, the censoring of the income data, by multiple imputation. Measuring overqualification by this income ratio is also a valuable addition to the overeducation literature, because the well-established objective and subjective overeducation measures focus on ordinal matching aspects and ignore the metric income and efficiency aspects of overqualification.
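A common way to estimate such a frontier is maximum likelihood under a normal/half-normal composed error; the sketch below fits that specification to simulated data and then forms a naive actual-to-potential income ratio. The data, variable names, and starting values are invented, and the paper's handling of censoring via multiple imputation (as well as any refinement such as a JLMS-type decomposition of the composed error) is not reproduced here.

```python
import numpy as np
from scipy import stats, optimize

rng = np.random.default_rng(1)
n = 500

# Simulated worker data (illustrative only -- not the IAB employment sample).
education = rng.normal(13, 2, n)
experience = rng.normal(15, 8, n)
X = np.column_stack([np.ones(n), education, experience])
beta_true = np.array([1.0, 0.08, 0.01])
v = rng.normal(0, 0.2, n)                 # symmetric noise
u = np.abs(rng.normal(0, 0.3, n))         # one-sided shortfall from potential income
log_income = X @ beta_true + v - u

def neg_loglik(params):
    """Normal/half-normal stochastic frontier likelihood for log income (eps = v - u)."""
    beta, log_sv, log_su = params[:3], params[3], params[4]
    sv, su = np.exp(log_sv), np.exp(log_su)
    sigma = np.sqrt(sv**2 + su**2)
    lam = su / sv
    eps = log_income - X @ beta
    ll = (np.log(2) - np.log(sigma) + stats.norm.logpdf(eps / sigma)
          + stats.norm.logcdf(-eps * lam / sigma))
    return -ll.sum()

start = np.array([0.5, 0.05, 0.0, np.log(0.2), np.log(0.2)])
fit = optimize.minimize(neg_loglik, start, method="BFGS")
beta_hat = fit.x[:3]

# Naive overqualification measure: actual income relative to frontier (potential) income.
ratio = np.exp(log_income) / np.exp(X @ beta_hat)
print(fit.success, ratio.mean())
```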