51.
Summary. Standard goodness-of-fit tests for a parametric regression model against a series of nonparametric alternatives are based on residuals arising from a fitted model. When a parametric regression model is compared with a nonparametric model, goodness-of-fit testing can be naturally approached by evaluating the likelihood of the parametric model within a nonparametric framework. We employ the empirical likelihood for an α-mixing process to formulate a test statistic that measures the goodness of fit of a parametric regression model. The technique is based on a comparison with kernel smoothing estimators. The empirical likelihood formulation of the test has two attractive features. One is its automatic consideration of the variation associated with the nonparametric fit, due to empirical likelihood's ability to Studentize internally. The other is that the asymptotic distribution of the test statistic is free of unknown parameters, avoiding plug-in estimation. We apply the test to a discretized diffusion model which has recently been considered in financial market analysis.
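As a rough illustration of the comparison the test is built on, the sketch below fits a linear model and a Nadaraya-Watson kernel smoother to the same serially dependent sample and computes a plain L2 distance between the two fits. The AR(1) data, bandwidth choice, and the unstudentized statistic T are illustrative assumptions only; the paper's actual statistic is Studentized internally via empirical likelihood.

```python
import numpy as np

def nadaraya_watson(x_eval, x, y, h):
    """Nadaraya-Watson kernel regression with a Gaussian kernel."""
    w = np.exp(-0.5 * ((x_eval[:, None] - x[None, :]) / h) ** 2)
    return (w @ y) / w.sum(axis=1)

# Hypothetical alpha-mixing sample: an AR(1) covariate with a linear response
rng = np.random.default_rng(0)
n = 500
x = np.zeros(n)
for t in range(1, n):
    x[t] = 0.5 * x[t - 1] + rng.normal()
y = 1.0 + 2.0 * x + rng.normal(size=n)

# Parametric (linear) fit vs. kernel smoother on a common grid
beta = np.polyfit(x, y, 1)
grid = np.linspace(x.min(), x.max(), 100)
m_param = np.polyval(beta, grid)
m_kernel = nadaraya_watson(grid, x, y, h=1.06 * x.std() * n ** -0.2)

# Crude L2 discrepancy between the two fits; large values suggest lack of fit
T = np.mean((m_param - m_kernel) ** 2)
print(T)
```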
52.
The authors consider the issue of map positional error, or the difference between location as represented in a spatial database (i.e., a map) and the corresponding unobservable true location. They propose a fully model-based approach that incorporates aspects of the map registration process commonly performed by users of geographic information systems, including rubber-sheeting. They explain how estimates of positional error can be obtained, hence estimates of true location. They show that with multiple maps of varying accuracy along with ground truthing data, suitable model averaging offers a strategy for using all of the maps to learn about true location.
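The abstract's fully model-based approach is hierarchical and Bayesian; as a much cruder stand-in, the sketch below combines the same point as reported by several maps of differing accuracy via precision weighting, just to show how maps of varying accuracy can jointly inform the true location. The coordinates and accuracy values are hypothetical.

```python
import numpy as np

# Hypothetical: three maps report the same landmark with different accuracies
locations = np.array([[10.2, 5.1],
                      [10.5, 4.9],
                      [ 9.9, 5.3]])      # (x, y) reported by each map
sigmas = np.array([0.1, 0.5, 0.3])       # per-map positional error (sd)

# Precision-weighted average: more accurate maps get more weight
w = 1.0 / sigmas ** 2
estimate = (w[:, None] * locations).sum(axis=0) / w.sum()
print(estimate)
```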
53.
This paper documents situations where the variance inflation model for outliers has undesirable properties. The model is commonly used to accommodate outliers in a Bayesian analysis of regression and time series models. The alternative approach provided here does not suffer from these undesirable properties but gives inferences similar to those of the variance inflation model when this is appropriate. It can be used with regression, time series, and regression with correlated errors in a unified way, and adheres to the scientific principle that inference should be based on the data after obvious outliers have been discarded. Only one parameter is required for outliers; it is interpretable as the a priori willingness to remove observations from the analysis.
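For reference, the variance-inflation model the abstract critiques is usually written as a two-component scale mixture; the parametrization below is the common textbook one (an assumption, since the abstract fixes no notation):

```latex
% Each observation is an outlier with prior probability p; if so,
% its error variance is inflated by a factor k > 1.
\[
y_i \mid \beta, \sigma^2 \sim
\begin{cases}
N(x_i^{\top}\beta,\ \sigma^2)   & \text{with probability } 1 - p,\\
N(x_i^{\top}\beta,\ k\sigma^2)  & \text{with probability } p .
\end{cases}
\]
```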
54.
Recent studies demonstrating a concentration dependence of elimination of 2,3,7,8-tetrachlorodibenzo-p-dioxin (TCDD) suggest that previous estimates of exposure for occupationally exposed cohorts may have underestimated actual exposure, resulting in a potential overestimate of the carcinogenic potency of TCDD in humans based on the mortality data for these cohorts. Using a database on U.S. chemical manufacturing workers potentially exposed to TCDD compiled by the National Institute for Occupational Safety and Health (NIOSH), we evaluated the impact of using a concentration- and age-dependent elimination model (CADM) (Aylward et al., 2005) on estimates of serum lipid area under the curve (AUC) for the NIOSH cohort. These data were used previously by Steenland et al. (2001), in combination with a first-order elimination model with an 8.7-year half-life, to estimate cumulative serum lipid concentration (equivalent to AUC) for these workers for use in cancer dose-response assessment. Serum lipid TCDD measurements taken in 1988 for a subset of the cohort were combined with the NIOSH job exposure matrix and work histories to estimate dose rates per unit of exposure score. We evaluated the effect of choices in regression model (regression on untransformed vs. ln-transformed data, and inclusion of a nonzero regression intercept) as well as the impact of choices of elimination models and parameters on estimated AUCs for the cohort. Central estimates for dose rate parameters derived from the serum-sampled subcohort were applied with the elimination models to time-specific exposure scores for the entire cohort to generate AUC estimates for all cohort members. Use of the CADM resulted in improved model fits to the serum sampling data compared to the first-order models. Dose rates varied by a factor of 50 among different combinations of elimination model, parameter sets, and regression models. Use of the CADM increased AUC estimates by up to five-fold for the more highly exposed members of the cohort compared to estimates obtained using the first-order model with an 8.7-year half-life. This degree of variation in the AUC estimates would substantially affect the cancer potency estimates derived from the cohort's mortality data. Such variability and uncertainty in the reconstructed serum lipid AUC estimates, depending on elimination model, parameter set, and regression model, have not been described previously and are critical components in evaluating dose-response data from occupationally exposed populations.
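A minimal sketch of the first-order baseline described in the abstract: serum concentration decays exponentially with the 8.7-year half-life, and the AUC is the integral of the concentration history. The exposure history and dose increments below are hypothetical, and the CADM itself (elimination rate depending on concentration and age) is not implemented here.

```python
import numpy as np

HALF_LIFE_YR = 8.7                     # first-order half-life from the abstract
k = np.log(2) / HALF_LIFE_YR           # elimination rate constant (1/yr)

def concentration(times, dose_times, dose_increments):
    """Serum lipid TCDD under first-order elimination: each intake adds a
    concentration jump that then decays exponentially."""
    c = np.zeros_like(times)
    for t0, d in zip(dose_times, dose_increments):
        c += d * np.exp(-k * (times - t0)) * (times >= t0)
    return c

# Hypothetical exposure: equal yearly increments over 10 working years
times = np.linspace(0.0, 40.0, 4001)
dose_times = np.arange(10)
doses = np.full(10, 50.0)              # ppt per year, illustrative only

c = concentration(times, dose_times, doses)
auc = float(np.sum(0.5 * (c[1:] + c[:-1]) * np.diff(times)))  # trapezoid rule
print(auc)                             # cumulative concentration, ppt-years
```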
55.
The author characterizes the copula associated with the bivariate survival model of Clayton (1978) as the only absolutely continuous copula that is preserved under bivariate truncation.
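For reference, the Clayton (1978) copula in question has the familiar closed form (standard notation assumed):

```latex
\[
C_{\theta}(u, v) \;=\; \bigl(u^{-\theta} + v^{-\theta} - 1\bigr)^{-1/\theta},
\qquad \theta > 0 .
\]
% The characterization: among absolutely continuous copulas, this is the
% only one preserved under bivariate truncation.
```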
56.
Current status data arise when the death of every subject in a study cannot be determined precisely, but is known only to have occurred before or after a random monitoring time. The authors discuss the analysis of such data under semiparametric linear transformation models for which they propose a general inference procedure based on estimating functions. They determine the properties of the estimates they propose for the regression parameters of the model and illustrate their technique using tumorigenicity data.
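The class of semiparametric linear transformation models the authors work with is conventionally written as follows (notation assumed, since the abstract displays none):

```latex
\[
H(T) \;=\; -\beta^{\top} Z + \varepsilon ,
\]
% H is an unspecified, strictly increasing function, beta the regression
% parameters, and epsilon has a known distribution: the extreme-value and
% logistic choices recover the proportional hazards and proportional odds
% models. Under current status data one observes only the monitoring time C
% and the indicator \delta = 1\{T \le C\}, never T itself.
```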
57.
This paper analyzes the errors inherent in the modeling process of the traditional FAGM(1,1) model and proposes an improved FAGM(1,1) model based on Simpson's formula. First, a fractional-order FAGM(1,1) model is built from the fractional-order accumulation and fractional-order difference operators. Second, Simpson's integration formula is used to improve the background value of the FAGM(1,1) model, yielding the SFAGM(1,1) model. Further, a genetic algorithm is applied to determine the optimal order of the SFAGM(1,1) model so as to improve its forecasting accuracy. Finally, taking China's per capita GDP as an example, the simulation results of the GM(1,1) model, the Simpson-improved GM(1,1) model (SGM(1,1)), the FAGM(1,1) model, and the SFAGM(1,1) model are compared, and per capita GDP over the 13th Five-Year Plan period is forecast. The results show that SFAGM(1,1) is more accurate than GM(1,1), SGM(1,1), and FAGM(1,1) for forecasting per capita GDP; the average annual growth rate of per capita GDP over the 13th Five-Year Plan period is 10.64%, reaching 83,146.97 yuan by 2020, which is 2.69 times the 2010 level, so the goal of doubling 2010 per capita GDP by 2020 will be achieved.
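A sketch of two ingredients named in the abstract: fractional-order accumulation and least-squares estimation of the grey parameters. The trapezoidal background value z used below is the classical GM(1,1) choice; the paper's contribution is precisely to replace that integral approximation with Simpson's rule (and to tune the order r by a genetic algorithm), neither of which is implemented here. The demo series and the order r = 0.5 are hypothetical.

```python
import numpy as np
from math import gamma

def frac_accumulate(x0, r):
    """Fractional-order accumulation (r-AGO) used in FAGM(1,1):
    x1[k] = sum_i Gamma(r + k - i) / (Gamma(k - i + 1) * Gamma(r)) * x0[i]."""
    n = len(x0)
    x1 = np.zeros(n)
    for k in range(n):
        for i in range(k + 1):
            x1[k] += gamma(r + k - i) / (gamma(k - i + 1) * gamma(r)) * x0[i]
    return x1

def fit_grey(x1):
    """Least-squares estimate of (a, b) in the grey equation
    x0[k] + a * z[k] = b, where x0[k] = x1[k] - x1[k-1].
    z is the background value: classical trapezoidal form here; the
    SFAGM(1,1) model replaces it with a Simpson-rule approximation."""
    z = 0.5 * (x1[1:] + x1[:-1])
    B = np.column_stack([-z, np.ones_like(z)])
    y = np.diff(x1)
    (a, b), *_ = np.linalg.lstsq(B, y, rcond=None)
    return a, b

# Hypothetical, roughly exponential series standing in for per capita GDP
x0 = np.array([100.0, 112.0, 126.0, 141.0, 159.0, 178.0])
x1 = frac_accumulate(x0, r=0.5)   # r = 0.5 is an arbitrary illustrative order
a, b = fit_grey(x1)
print(a, b)
```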
58.
59.
Several authors have contributed to what can now be considered a rather complete theory for analysis of variance in cases with orthogonal factors. By using this theory on an assumed basic reference population, the orthogonality concept gives a natural definition of independence between factors in the population. By regarding the treated units in designed experiments as a formal sample from a future population about which we want to make inference, a natural parametrization of the expectations and variances connected to such experiments arises. This approach seems to throw light upon several controversial questions in the theory of mixed models. It also gives a framework for discussing the choice of conditioning in models.
60.
Generalized Leverage and its Applications
The generalized leverage of an estimator is defined in regression models as a measure of the importance of individual observations. We derive a simple but powerful result, developing an explicit expression for leverage in a general M-estimation problem, of which maximum likelihood problems are special cases. A variety of applications are considered, most notably to exponential family non-linear models. The relationship between leverage and local influence is also discussed. Numerical examples are given to illustrate our results.
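One standard way to state the definition and the explicit expression mentioned in the abstract is the following (notation assumed, since the abstract displays no formulas):

```latex
% Generalized leverage: sensitivity of fitted values to observed responses.
\[
\mathrm{GL}(\hat{\theta})
  \;=\; \frac{\partial \hat{y}}{\partial y^{\top}}
  \;=\; D_{\theta}\,\bigl(-\ddot{L}_{\theta\theta}\bigr)^{-1}\ddot{L}_{\theta y},
\qquad
D_{\theta} = \frac{\partial \mu(\theta)}{\partial \theta^{\top}} ,
\]
% where L is the objective being maximized, with second derivatives
% \ddot{L}_{\theta\theta} and \ddot{L}_{\theta y} evaluated at \hat{\theta}.
% In the normal linear model this reduces to the hat matrix
% X (X^\top X)^{-1} X^\top, whose diagonal gives the classical leverages.
```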