21.
Likelihood ratios (LRs) are used to characterize the efficiency of diagnostic tests. In this paper, we use the classical weighted least squares (CWLS) test procedure, originally developed for testing the homogeneity of relative risks, to compare the LRs of two or more binary diagnostic tests. We compare the performance of this method with the relative diagnostic likelihood ratio (rDLR) method and the diagnostic likelihood ratio regression (DLRReg) approach in terms of size and power, and we observe that CWLS and rDLR perform identically when comparing two diagnostic tests, while the DLRReg method has higher type I error rates and higher power. We also examine the performance of the CWLS and DLRReg methods for comparing three diagnostic tests across various combinations of sample size and prevalence. On the basis of Monte Carlo simulations, we conclude that all of the tests are generally conservative and have low power, especially in settings with small sample size and low prevalence.
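The quantities being compared are the standard positive and negative likelihood ratios of a binary test. As a point of reference, they can be computed from a 2x2 table as follows (a minimal sketch using the textbook definitions; the CWLS comparison test itself is not reproduced, and the function name is illustrative):

```python
def likelihood_ratios(tp, fp, fn, tn):
    """Positive and negative likelihood ratios of a binary diagnostic test,
    from a 2x2 table: tp/fn are diseased subjects testing +/-, and
    fp/tn are non-diseased subjects testing +/-."""
    sensitivity = tp / (tp + fn)               # P(test+ | diseased)
    specificity = tn / (tn + fp)               # P(test- | non-diseased)
    lr_pos = sensitivity / (1 - specificity)   # LR+ = Se / (1 - Sp)
    lr_neg = (1 - sensitivity) / specificity   # LR- = (1 - Se) / Sp
    return lr_pos, lr_neg

# Example: Se = 0.90, Sp = 0.80 -> LR+ = 4.5, LR- = 0.125
lr_pos, lr_neg = likelihood_ratios(tp=90, fp=20, fn=10, tn=80)
```

A larger LR+ and a smaller LR- both indicate a more informative test, which is what makes LRs a natural target for homogeneity testing across tests.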
22.
This paper extends the univariate time series smoothing approach provided by penalized least squares to a multivariate setting, thus allowing for joint estimation of several time series trends. The theoretical results are valid for the general multivariate case, but particular emphasis is placed on the bivariate situation from an applied point of view. The proposal is based on a vector signal-plus-noise representation of the observed data that requires only the first two sample moments and the specification of a single smoothing constant. A measure of the amount of smoothness of an estimated trend is introduced so that an analyst can set in advance a desired percentage of smoothness to be achieved by the trend estimate; the required smoothing constant is then determined by the chosen percentage. Closed-form expressions for the smoothed estimated vector and its variance-covariance matrix are derived from a straightforward application of generalized least squares, thus providing best linear unbiased estimates of the trends. A detailed algorithm for estimating bivariate time series trends is also presented and justified. The theoretical results are supported by a simulation study and two real applications: one to Mexican and US macroeconomic data in the context of business cycle analysis, and the other to environmental data from a monitored site in Scotland.
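The univariate building block being extended here has a well-known closed form: the penalized least-squares trend solves min ||y - t||^2 + lam * ||D2 t||^2, giving t = (I + lam * D2'D2)^{-1} y, where D2 is the second-difference operator. The sketch below implements that closed form, plus an illustrative smoothness proxy; note the paper's own percentage-of-smoothness index and its bivariate GLS machinery are not reproduced here:

```python
import numpy as np

def pls_trend(y, lam):
    """Univariate penalized least-squares trend:
    minimize ||y - t||^2 + lam * ||D2 t||^2, with D2 the
    second-difference operator; closed form t = (I + lam*D2'D2)^{-1} y."""
    n = len(y)
    D2 = np.diff(np.eye(n), n=2, axis=0)   # (n-2) x n second differences
    return np.linalg.solve(np.eye(n) + lam * D2.T @ D2, y)

def smoothness_proxy(y, trend):
    """Illustrative stand-in for a smoothness index: share of the raw
    second-difference variation removed by the trend
    (1 = perfectly smooth, 0 = no smoothing)."""
    d2 = lambda x: np.diff(x, n=2)
    return 1.0 - np.sum(d2(trend) ** 2) / np.sum(d2(y) ** 2)
```

Because the smoother matrix shrinks every second-difference component, the proxy always lies in [0, 1], which is what makes calibrating lam to a target smoothness percentage feasible.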
23.
Varying-coefficient models are very useful for longitudinal data analysis. In this paper, we develop a new estimation procedure for such models using Cholesky decomposition and profile least squares techniques. Asymptotic normality of the proposed estimators of the varying-coefficient functions is established. Monte Carlo simulation studies show excellent finite-sample performance, and we illustrate the methods with a real data example.
24.
This paper discusses the regression analysis of current status failure time data arising from the additive hazards model with auxiliary covariates. As often occurs in practice, it is impossible or impractical to measure the exact magnitude of the covariates for all subjects in a study. To compensate for the missing information, auxiliary covariates are used instead. We propose two easy-to-implement procedures for estimating the regression parameters that make use of this auxiliary information. The asymptotic properties of the resulting estimators are established, and extensive numerical studies indicate that both procedures work well in practice.
25.
There is currently much discussion of lasso-type regularized regression, a useful tool for simultaneous estimation and variable selection. Although lasso-type regularization has several advantages in regression modelling owing to its sparsity, it suffers from outliers because it relies on penalized least squares. To overcome this issue, we propose a robust lasso-type estimation procedure that uses a robust criterion as the loss function together with an elastic-net penalty. We also propose using efficient bootstrap information criteria to choose the optimal regularization parameters and a constant used in outlier detection. Simulation studies and a real data analysis examine the efficiency of the proposed robust sparse regression modelling, and we observe that the strategy performs well in the presence of outliers.
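The general recipe — replace the least-squares loss with a robust criterion while keeping an elastic-net penalty — can be sketched as below. This is a generic illustration using the Huber loss, not the paper's exact criterion, and its tuning constants (lam, alpha, delta) would in practice be chosen by the bootstrap information criteria the abstract mentions:

```python
import numpy as np
from scipy.optimize import minimize

def robust_enet(X, y, lam=1.0, alpha=0.5, delta=1.345):
    """Huber loss plus an elastic-net penalty: a generic sketch of
    robust lasso-type estimation. alpha mixes the L1 and L2 parts;
    delta is the usual Huber tuning constant."""
    def huber(r):
        a = np.abs(r)
        return np.where(a <= delta, 0.5 * r ** 2, delta * (a - 0.5 * delta))

    def objective(b):
        r = y - X @ b
        return (huber(r).sum()
                + lam * (alpha * np.abs(b).sum()          # L1 part
                         + 0.5 * (1 - alpha) * b @ b))    # L2 part

    b0 = np.zeros(X.shape[1])
    # Powell is derivative-free, so the non-smooth L1 term is not a problem
    return minimize(objective, b0, method="Powell").x
```

Because the Huber loss grows only linearly beyond delta, a gross outlier in y contributes a bounded gradient, which is exactly the robustness the squared-error loss lacks.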
26.
27.
The additive Cox model is flexible and powerful for modelling dynamic changes of regression coefficients in survival analysis. This paper is concerned with feature screening for the additive Cox model with ultrahigh-dimensional covariates. The proposed screening procedure can effectively identify active predictors: with probability tending to one, the selected variable set includes the actual active predictors. To carry out the procedure, we propose an effective algorithm and establish its ascent property. We further prove that the procedure possesses the sure screening property. Finally, we examine the finite-sample performance of the procedure via Monte Carlo simulations and illustrate it with a real data example.
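The sure screening idea can be illustrated with the simplest marginal-utility version: rank each predictor by a marginal association measure and keep the top d. The sketch below uses absolute marginal correlation as the utility; the paper's procedure for the additive Cox model uses a different, survival-specific utility, so this is only an illustration of the screening principle:

```python
import numpy as np

def sis_screen(X, y, d):
    """Generic sure-independence-screening sketch: rank predictors by
    absolute marginal correlation with the response and keep the top d.
    (The additive Cox model's marginal utility differs; this merely
    illustrates the screening idea.)"""
    Xs = (X - X.mean(axis=0)) / X.std(axis=0)
    ys = (y - y.mean()) / y.std()
    omega = np.abs(Xs.T @ ys) / len(y)   # |marginal correlations|
    return np.argsort(omega)[::-1][:d]
```

The sure screening property says that for a suitable d, the kept set contains all active predictors with probability tending to one, after which a refined method can be run on the much smaller retained set.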
28.
The Hodrick–Prescott (HP) filter is frequently used in macroeconometrics to decompose time series, such as real gross domestic product, into trend and cyclical components. Because the HP filter is a basic econometric tool, a precise understanding of its nature is necessary. This article contributes to the literature by listing several (penalized) least-squares problems related to the HP filter, three of which are newly introduced here, and by showing their properties. We also remark on their generalization.
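The baseline penalized least-squares problem behind the HP filter is min ||y - t||^2 + lam * ||D2 t||^2, whose solution is the trend; the cycle is the remainder. A minimal dense-matrix sketch (lam = 1600 is the conventional quarterly setting; production code would use sparse matrices):

```python
import numpy as np

def hp_filter(y, lam=1600.0):
    """HP decomposition: the trend solves
    min ||y - t||^2 + lam * ||D2 t||^2, i.e.
    t = (I + lam * D2'D2)^{-1} y; the cycle is y - t."""
    n = len(y)
    D2 = np.diff(np.eye(n), n=2, axis=0)   # second-difference matrix
    trend = np.linalg.solve(np.eye(n) + lam * D2.T @ D2, y)
    return trend, y - trend
```

A purely linear series has zero second differences, so it passes through the filter unchanged — one of the basic properties that variants of the underlying least-squares problem preserve or relax.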
29.
30.