161.
Classical nondecimated wavelet transforms are attractive for many applications. When the data come from complex or irregular designs, the use of second-generation wavelets in nonparametric regression has proved superior to that of classical wavelets. However, the construction of a nondecimated second-generation wavelet transform is not obvious. In this paper we propose a new ‘nondecimated’ lifting transform, based on the lifting algorithm which removes one coefficient at a time, and explore its behavior. Our approach also allows for embedding adaptivity in the transform, i.e. wavelet functions can be constructed so that their smoothness adjusts to the local properties of the signal. We address the problem of nonparametric regression and propose an (averaged) estimator obtained by using our nondecimated lifting technique teamed with empirical Bayes shrinkage. Simulations show that the proposed method outperforms competing techniques that can handle irregular data. Our construction also opens avenues for generating a ‘best’ representation, which we shall explore.
162.
A Factor Analysis of the Constituent Elements of China's Ethnic Minorities   Cited: 2 (self-citations: 0, other citations: 2)
黄行 (Huang Xing), 《民族研究》 (Ethnic Studies), 2001, 4(1): 45-51
Using ten indicators (total population, ethnic population in autonomous areas, minority population in autonomous areas, per-capita GDP, agricultural population, literate population, school enrolment, mother-tongue speakers, monolinguals, and bilinguals) as the regional, economic, educational, and linguistic constituent elements of an ethnic group, this paper carries out descriptive, correlation, and factor-extraction analyses of how these elements are distributed across China's ethnic minorities. The aim is to characterize the basic social situation of minority-group constitution and to gauge how much each element contributes to, or how much information it carries about, the constitution of the minorities both collectively and individually. The statistical results are also interpreted in light of existing empirical knowledge. The conclusion is that the features currently distinguishing China's ethnic minorities from one another lie mainly in social differences in economic development level, educational attainment, area of residence, and language use; judging from the trend of development, however, the role of the traditional constituent elements will gradually weaken and the differences between ethnic groups will gradually narrow.
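As a rough illustration of the factor-extraction step described above, the sketch below runs a principal-axis-style extraction on a synthetic indicator matrix. The data, the 55-by-10 dimensions, and the Kaiser eigenvalue-greater-than-one cut-off are all stand-ins for illustration, not the paper's census figures:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for the paper's ten indicators (population, per-capita
# GDP, literacy, mother-tongue speakers, ...): 55 groups x 10 indicators,
# generated so that a few latent factors drive most of the variance.
latent = rng.normal(size=(55, 3))
mixing = rng.normal(size=(3, 10))
X = latent @ mixing + 0.3 * rng.normal(size=(55, 10))

# Standardize, then extract factors from the correlation matrix via
# eigendecomposition (principal-axis-style extraction).
Z = (X - X.mean(axis=0)) / X.std(axis=0)
R = np.corrcoef(Z, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(R)
order = np.argsort(eigvals)[::-1]            # largest eigenvalue first
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

k = int(np.sum(eigvals > 1.0))               # Kaiser criterion: eigenvalue > 1
loadings = eigvecs[:, :k] * np.sqrt(eigvals[:k])
explained = eigvals[:k].sum() / eigvals.sum()
print(k, round(explained, 3))
```

The share of total variance carried by the retained factors (`explained`) is one way to quantify "how much information" a set of elements represents.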
163.
The Hodrick–Prescott (HP) filtering is widely applied to decompose macroeconomic time series, such as real Gross Domestic Product, into cyclical and trend components. This paper presents a small but practically useful modification of this approach. The modified filtering is of practical use because it provides not only trend estimates identical to those of the HP filtering but also extrapolations of the trend. We provide a proof based on a ridge regression representation of the modified HP filtering; this representation is used mainly because it enhances our understanding of the approach.
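The ridge regression representation mentioned above can be made concrete: the HP trend solves a penalised least-squares problem with a closed-form ridge-type linear solve. The sketch below is the generic textbook formulation, not the paper's modified filter; it also checks the known property that a linear series passes through the filter unchanged:

```python
import numpy as np

def hp_trend(y, lam=1600.0):
    """Trend component of the HP filter in its ridge form:
    trend = argmin_t ||y - t||^2 + lam * ||D2 t||^2 = (I + lam * D2'D2)^{-1} y,
    where D2 is the second-difference operator."""
    n = len(y)
    D2 = np.diff(np.eye(n), n=2, axis=0)   # (n-2) x n second-difference matrix
    return np.linalg.solve(np.eye(n) + lam * D2.T @ D2, np.asarray(y, float))

# A linear series has zero second differences, so the penalty vanishes and
# the filter returns the series unchanged.
y = 2.0 + 0.5 * np.arange(40)
trend = hp_trend(y)
print(np.allclose(trend, y))   # True
```

lam = 1600 is the conventional smoothing value for quarterly macroeconomic data.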
164.
165.
We focus on the construction of confidence corridors for multivariate nonparametric generalized quantile regression functions. This construction is based on asymptotic results for the maximal deviation between a suitable nonparametric estimator and the true function of interest, which follow after a series of approximation steps including a Bahadur representation, a new strong approximation theorem, and exponential tail inequalities for Gaussian random fields. As a byproduct we also obtain multivariate confidence corridors for the regression function in the classical mean regression. To deal with the problem of slowly decreasing error in coverage probability of the asymptotic confidence corridors, which results in meager coverage for small sample sizes, a simple bootstrap procedure is designed based on the leading term of the Bahadur representation. The finite-sample properties of both procedures are investigated by means of a simulation study and it is demonstrated that the bootstrap procedure considerably outperforms the asymptotic bands in terms of coverage accuracy. Finally, the bootstrap confidence corridors are used to study the efficacy of the National Supported Work Demonstration, which is a randomized employment enhancement program launched in the 1970s. This article has supplementary materials online.
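A heavily simplified sketch of the general idea: calibrate a simultaneous band by bootstrapping the maximal deviation of a kernel estimator. This is a generic residual bootstrap for mean regression, not the paper's Bahadur-representation-based procedure, and all settings (bandwidth, grid, number of replicates) are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)

def nw(x_eval, x, y, h):
    # Nadaraya-Watson kernel regression with a Gaussian kernel.
    w = np.exp(-0.5 * ((x_eval[:, None] - x[None, :]) / h) ** 2)
    return (w @ y) / w.sum(axis=1)

n = 300
x = rng.uniform(0, 1, n)
y = np.sin(2 * np.pi * x) + 0.3 * rng.normal(size=n)
grid = np.linspace(0.05, 0.95, 50)
h = 0.08
fit = nw(grid, x, y, h)
resid = y - nw(x, x, y, h)

# Bootstrap the maximal deviation over the grid to get a simultaneous
# (rather than pointwise) corridor half-width.
B = 200
max_dev = np.empty(B)
for b in range(B):
    y_b = nw(x, x, y, h) + rng.choice(resid, size=n, replace=True)
    max_dev[b] = np.max(np.abs(nw(grid, x, y_b, h) - fit))
width = np.quantile(max_dev, 0.95)
lower, upper = fit - width, fit + width
print(round(width, 3))
```

Bootstrapping the maximum over the grid, rather than the deviation at each point, is what makes the band simultaneous.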
166.
A variety of primary endpoints are used in clinical trials treating patients with severe infectious diseases, and existing guidelines do not provide a consistent recommendation. We propose to study two primary endpoints, cure and death, simultaneously in a comprehensive multistate cure‐death model as the starting point for a treatment comparison. This technique enables us to study the temporal dynamic of the patient‐relevant probability of being cured and alive. We describe and compare traditional and innovative methods suitable for a treatment comparison based on this model. Traditional analyses using risk differences focus on one prespecified time point only. A restricted logrank‐based test of treatment effect is sensitive to ordered categories of responses and integrates information on duration of response. The pseudo‐value regression provides a direct regression model for examining treatment effect via differences in transition probabilities. Using a topical real data example and simulation scenarios, we demonstrate advantages and limitations and provide insight into how these methods handle different kinds of treatment imbalance. The cure‐death model provides a suitable framework for better understanding how a new treatment influences the time‐dynamic cure and death process. This might help the future planning of randomised clinical trials, sample size calculations, and data analyses.
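To see what "the temporal dynamic of the probability of being cured and alive" means, a toy discrete-time three-state version of a cure-death model can be iterated forward. The transition matrix here is invented purely for illustration and has nothing to do with the paper's data or its continuous-time estimators:

```python
import numpy as np

# Toy discrete-time three-state cure-death model
# (states: 0 = infected, 1 = cured and alive, 2 = dead; cured can still die).
# Transition probabilities are made up for illustration only.
P = np.array([
    [0.90, 0.07, 0.03],   # infected -> infected / cured / dead
    [0.00, 0.98, 0.02],   # cured    -> cured / dead
    [0.00, 0.00, 1.00],   # dead is absorbing
])

state = np.array([1.0, 0.0, 0.0])   # everyone starts infected
for t in range(1, 31):
    state = state @ P                # one-step Markov update
print(round(state[1], 3))            # P(cured and alive) at day 30
```

Running two such chains, one per treatment arm, and comparing `state[1]` over time is the discrete analogue of the transition-probability comparison described above.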
167.
In this paper, we investigate the commonality of nonparametric component functions among different quantile levels in additive regression models. We propose two fused adaptive group Least Absolute Shrinkage and Selection Operator penalties to shrink the difference of functions between neighbouring quantile levels. The proposed methodology is able to simultaneously estimate the nonparametric functions and identify the quantile regions where functions are unvarying, and thus is expected to perform better than standard additive quantile regression when there exists a region of quantile levels on which the functions are unvarying. Under some regularity conditions, the proposed penalised estimators can theoretically achieve the optimal rate of convergence and identify the true varying/unvarying regions consistently. Simulation studies and a real data application show that the proposed methods yield good numerical results.
168.
The paper investigates various nonparametric models, including regression, conditional distribution, conditional density, and conditional hazard function, when the covariates are infinite dimensional. The main contribution is to prove uniform-in-bandwidth asymptotic results for kernel estimators of these functional operators. Application issues, including data-driven bandwidth selection, are then discussed.
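One ingredient mentioned above, data-driven bandwidth selection for a kernel estimator, can be sketched with leave-one-out cross-validation. This uses a scalar covariate as a stand-in; in the paper's functional setting the kernel would instead act on a semi-metric between infinite-dimensional covariates:

```python
import numpy as np

rng = np.random.default_rng(2)

def loo_cv_score(x, y, h):
    # Leave-one-out CV for the Nadaraya-Watson estimator: zero out each
    # observation's own kernel weight before predicting it.
    w = np.exp(-0.5 * ((x[:, None] - x[None, :]) / h) ** 2)
    np.fill_diagonal(w, 0.0)
    pred = (w @ y) / w.sum(axis=1)
    return np.mean((y - pred) ** 2)

x = rng.uniform(0, 1, 200)
y = np.cos(3 * x) + 0.2 * rng.normal(size=200)

# Search a grid of bandwidths and keep the CV minimizer.
bandwidths = np.geomspace(0.01, 0.5, 20)
scores = [loo_cv_score(x, y, h) for h in bandwidths]
h_star = bandwidths[int(np.argmin(scores))]
print(round(h_star, 3))
```

Uniform-in-bandwidth results are what justify this kind of selection: they guarantee the estimator behaves well simultaneously over the whole grid, not just at one fixed bandwidth.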
169.
Generalised estimating equations (GEE) for regression problems with vector‐valued responses are examined. When the response vectors are of mixed type (e.g. continuous–binary response pairs), the GEE approach is a semiparametric alternative to full‐likelihood copula methods, and is closely related to Prentice & Zhao's mean‐covariance estimation equations approach. When the response vectors are of the same type (e.g. measurements on left and right eyes), the GEE approach can be viewed as a ‘plug‐in’ to existing methods, such as the vglm function from the state‐of‐the‐art VGAM package in R. In either scenario, the GEE approach offers asymptotically correct inferences on model parameters regardless of whether the working variance–covariance model is correctly or incorrectly specified. The finite‐sample performance of the method is assessed using simulation studies based on a burn injury dataset and a sorbinil eye trial dataset. The method is applied to data analysis examples using the same two datasets, as well as to a trivariate binary dataset on three plant species in the Hunua ranges of Auckland.
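The "asymptotically correct inference regardless of the working covariance model" comes from the sandwich variance estimator that GEE relies on. A minimal numpy sketch for the linear, working-independence case follows; the cluster structure and effect sizes are invented for illustration, and real analyses would use a GEE implementation such as the R packages cited above:

```python
import numpy as np

rng = np.random.default_rng(3)

# Clustered data: a shared cluster effect induces within-cluster correlation
# that the working-independence model deliberately ignores.
n_clusters, m = 100, 4
cluster = np.repeat(np.arange(n_clusters), m)
x = rng.normal(size=n_clusters * m)
u = np.repeat(rng.normal(size=n_clusters), m)        # shared cluster effect
y = 1.0 + 2.0 * x + u + rng.normal(size=n_clusters * m)

X = np.column_stack([np.ones_like(x), x])
beta = np.linalg.lstsq(X, y, rcond=None)[0]          # GEE under working independence = OLS
resid = y - X @ beta

# Sandwich covariance bread^{-1} * meat * bread^{-1}: sum score contributions
# cluster by cluster, so the ignored within-cluster correlation is still
# reflected in the standard errors.
bread = X.T @ X
meat = np.zeros((2, 2))
for g in range(n_clusters):
    idx = cluster == g
    s = X[idx].T @ resid[idx]
    meat += np.outer(s, s)
cov_robust = np.linalg.solve(bread, np.linalg.solve(bread, meat).T)
print(beta.round(2), np.sqrt(np.diag(cov_robust)).round(3))
```

The point estimate is the same as under the naive model; only the standard errors change, which is exactly the robustness property the abstract describes.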
170.
Linear mixed models have been widely used to analyze the repeated measures data which arise in many studies. In most applications, it is assumed that both the random effects and the within-subject errors are normally distributed. This can be extremely restrictive, obscuring important features of within- and among-subject variation. Here, quantile regression in the Bayesian framework for linear mixed models is described to carry out robust inference. We also relax the normality assumption for the random effects by using a multivariate skew-normal distribution, which includes the normal distribution as a special case and provides robust estimation in linear mixed models. For posterior inference, we propose a Gibbs sampling algorithm based on a mixture representation of the asymmetric Laplace distribution and the multivariate skew-normal distribution. The procedures are demonstrated by both simulated and real data examples.
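The asymmetric Laplace device behind such Gibbs samplers rests on a standard fact: maximizing an asymmetric-Laplace likelihood is equivalent to minimizing the pinball (check) loss of quantile regression. The toy check below (not the paper's sampler) verifies numerically that the pinball minimizer recovers the empirical quantile:

```python
import numpy as np

rng = np.random.default_rng(4)

def pinball(u, tau):
    # Check (pinball) loss; its minimizer over a location parameter is the
    # tau-th sample quantile, equivalently the asymmetric-Laplace MLE.
    return np.where(u >= 0, tau * u, (tau - 1) * u)

tau = 0.25
y = rng.normal(size=2001)

# Minimize the summed pinball loss over a fine grid of candidate locations.
grid = np.linspace(-3, 3, 4001)
loss = np.array([pinball(y - g, tau).sum() for g in grid])
g_star = grid[int(np.argmin(loss))]
print(round(g_star, 2), round(np.quantile(y, tau), 2))
```

The two printed values agree up to the grid resolution, which is the equivalence the mixture-representation Gibbs sampler exploits.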