291.
292.
Directional data such as wind directions can be collected extremely easily, so experiments typically yield huge numbers of sequentially collected observations. With such big data, traditional nonparametric techniques quickly become too expensive to compute and are therefore useless in practice when real-time or online forecasts are expected. In this paper, we propose a recursive kernel density estimator for directional data which (i) can be updated extremely easily when a new set of observations is available and (ii) asymptotically retains the nice features of the traditional kernel density estimator. Our methodology is based on Robbins–Monro stochastic approximation ideas. We show that our estimator outperforms the traditional techniques in terms of computational time while remaining extremely competitive in terms of efficiency with respect to its competitors in the sequential context considered here. We obtain expressions for its asymptotic bias and variance, together with an almost sure convergence rate and an asymptotic normality result. Our technique is illustrated on a wind dataset collected in Spain. A Monte Carlo study confirms the nice properties of our recursive estimator with respect to its non-recursive counterpart.
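The sketch below illustrates, in Python, one way such a recursive update could look: a stochastic-approximation (Robbins–Monro style) recursion with a von Mises kernel on the circle. The step-size and concentration schedules are illustrative assumptions, not the authors' exact estimator.

```python
import numpy as np
from scipy.special import i0  # modified Bessel function of order 0

def vonmises_kernel(theta_grid, x, kappa):
    """von Mises kernel centred at observation x with concentration kappa."""
    return np.exp(kappa * np.cos(theta_grid - x)) / (2.0 * np.pi * i0(kappa))

def recursive_directional_kde(stream, grid_size=360, kappa0=5.0):
    """Recursively updated density estimate on [0, 2*pi).

    f_n = (1 - gamma_n) * f_{n-1} + gamma_n * K_{kappa_n}(., X_n);
    gamma_n = 1/n and kappa_n = kappa0 * n^(2/5) are illustrative
    smoothing schedules only.
    """
    grid = np.linspace(0.0, 2.0 * np.pi, grid_size, endpoint=False)
    f = np.full(grid_size, 1.0 / (2.0 * np.pi))  # start from the uniform density
    for n, x in enumerate(stream, start=1):
        gamma_n = 1.0 / n                  # Robbins-Monro step size
        kappa_n = kappa0 * n ** 0.4        # concentration grows with n (bandwidth shrinks)
        # Each update costs O(grid_size): past observations never need to be revisited.
        f = (1.0 - gamma_n) * f + gamma_n * vonmises_kernel(grid, x, kappa_n)
    return grid, f

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    wind = rng.vonmises(mu=np.pi / 3, kappa=2.0, size=5000) % (2.0 * np.pi)
    grid, density = recursive_directional_kde(wind)
    print(density.sum() * (2.0 * np.pi / len(grid)))  # should be close to 1
```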
293.
Time‐varying coefficient models are widely used in longitudinal data analysis. These models allow the effects of predictors on response to vary over time. In this article, we consider a mixed‐effects time‐varying coefficient model to account for the within subject correlation for longitudinal data. We show that when kernel smoothing is used to estimate the smooth functions in time‐varying coefficient models for sparse or dense longitudinal data, the asymptotic results of these two situations are essentially different. Therefore, a subjective choice between the sparse and dense cases might lead to erroneous conclusions for statistical inference. In order to solve this problem, we establish a unified self‐normalized central limit theorem, based on which a unified inference is proposed without deciding whether the data are sparse or dense. The effectiveness of the proposed unified inference is demonstrated through a simulation study and an analysis of Baltimore MACS data.  相似文献   
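As a minimal illustration of the kernel-smoothing step, the Python sketch below estimates a time-varying coefficient by kernel-weighted least squares, pooling all longitudinal observations. The Epanechnikov kernel, bandwidth, and the crude random-intercept correlation in the simulated data are illustrative; the paper's mixed-effects estimation and self-normalized inference are not reproduced.

```python
import numpy as np

def epanechnikov(u):
    """Epanechnikov kernel, supported on [-1, 1]."""
    return 0.75 * np.clip(1.0 - u ** 2, 0.0, None)

def local_constant_beta(t0, times, X, y, h):
    """Kernel-weighted least squares estimate of beta(t0) in
    E[y_ij | X_ij, t_ij] = X_ij' beta(t_ij), pooling all (i, j) pairs.

    times : (N,) observation times, X : (N, p) covariates, y : (N,) responses.
    """
    w = epanechnikov((times - t0) / h) / h
    XtW = X.T * w                      # p x N weighted design
    return np.linalg.solve(XtW @ X, XtW @ y)

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    n_subj, n_obs = 100, 5
    times = rng.uniform(0, 1, size=(n_subj, n_obs))
    beta_true = lambda t: np.stack([np.sin(2 * np.pi * t), 1 + t], axis=-1)
    X = np.dstack([np.ones_like(times), rng.normal(size=times.shape)])  # intercept + 1 covariate
    subj_re = rng.normal(scale=0.5, size=(n_subj, 1))                   # crude within-subject correlation
    y = np.einsum('ijk,ijk->ij', X, beta_true(times)) + subj_re + rng.normal(scale=0.3, size=times.shape)

    T, Xf, yf = times.ravel(), X.reshape(-1, 2), y.ravel()
    print(local_constant_beta(0.5, T, Xf, yf, h=0.1))  # roughly [0, 1.5] = beta_true(0.5)
```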
294.
The quadratic inference function (QIF) is an alternative methodology to the popular generalized estimating equations (GEE) approach; it does not involve direct estimation of the correlation parameter and thus remains optimal even if the working correlation structure is misspecified. The idea is to represent the inverse of the working correlation matrix by a linear combination of basis matrices. In this article, we present a modification of QIF with a robust variance estimator of the extended score function. Theoretical and numerical results show that the modified QIF attains better efficiency and better small-sample performance than the original QIF method.
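The following Python sketch shows the basic QIF building blocks for an identity-link marginal model: the inverse working correlation is represented through the basis matrices M1 = I and M2 (ones off the diagonal, an exchangeable-type structure), and the extended scores are combined through the quadratic form Q(beta) = N * gbar' C^{-1} gbar, which is then minimized. Unit marginal variances are an illustrative simplification, and the robust variance modification proposed in the article is not reproduced.

```python
import numpy as np
from scipy.optimize import minimize

def extended_scores(beta, X_list, y_list):
    """Per-cluster extended score g_i(beta) built from the basis matrices
    M1 = I and M2 = all-ones off the diagonal (identity link, unit variances)."""
    g = []
    for X, y in zip(X_list, y_list):
        m = len(y)
        M1 = np.eye(m)
        M2 = np.ones((m, m)) - np.eye(m)
        r = y - X @ beta
        g.append(np.concatenate([X.T @ M1 @ r, X.T @ M2 @ r]))
    return np.array(g)                       # shape (N, 2p)

def qif_objective(beta, X_list, y_list):
    """Q(beta) = N * gbar' C^{-1} gbar with C the empirical covariance of the g_i."""
    g = extended_scores(beta, X_list, y_list)
    gbar = g.mean(axis=0)
    C = (g.T @ g) / len(g)
    return len(g) * gbar @ np.linalg.pinv(C) @ gbar

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    N, m, beta_true = 200, 4, np.array([1.0, -0.5])
    X_list, y_list = [], []
    for _ in range(N):
        X = np.column_stack([np.ones(m), rng.normal(size=m)])
        eps = rng.normal(size=m) + rng.normal()   # shared cluster effect -> exchangeable correlation
        X_list.append(X)
        y_list.append(X @ beta_true + eps)
    fit = minimize(qif_objective, x0=np.zeros(2), args=(X_list, y_list), method="BFGS")
    print(fit.x)   # should be close to beta_true
```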
295.
296.
This article deals with the estimation of R = P{X < Y}, where X and Y are independent random variables following geometric and exponential distributions, respectively. For complete samples, the MLE of R, its asymptotic distribution, and a confidence interval based on it are obtained. A procedure for deriving a bootstrap-p confidence interval is presented. The UMVUE of R and the UMVUE of its variance are derived. The Bayes estimator of R is investigated and its Lindley approximation is obtained. A simulation study is performed to compare these estimators. Finally, all point estimators are obtained for a right-censored sample from the exponential distribution.
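A minimal numerical sketch of the plug-in MLE of R follows, assuming X is geometric on {0, 1, 2, ...} with success probability p and Y is exponential with rate lambda, in which case R = p / (1 - (1 - p)e^{-lambda}). The paper's exact parameterization may differ, and its UMVUE, bootstrap, and Bayes estimators are not reproduced here.

```python
import numpy as np

def r_geo_exp(p, lam):
    """R = P{X < Y} for X ~ Geometric(p) on {0,1,2,...} and Y ~ Exp(rate=lam):
    R = sum_k p(1-p)^k exp(-lam*k) = p / (1 - (1-p)*exp(-lam))."""
    return p / (1.0 - (1.0 - p) * np.exp(-lam))

def r_mle(x_sample, y_sample):
    """Plug-in MLE of R from complete samples of X and Y."""
    p_hat = 1.0 / (1.0 + np.mean(x_sample))   # MLE of the geometric success probability
    lam_hat = 1.0 / np.mean(y_sample)         # MLE of the exponential rate
    return r_geo_exp(p_hat, lam_hat)

if __name__ == "__main__":
    rng = np.random.default_rng(3)
    p_true, lam_true = 0.4, 0.7
    x = rng.geometric(p_true, size=200) - 1   # numpy's geometric starts at 1; shift to {0,1,...}
    y = rng.exponential(scale=1.0 / lam_true, size=200)
    print("true R  :", r_geo_exp(p_true, lam_true))
    print("MLE of R:", r_mle(x, y))
    # Monte Carlo sanity check of R itself
    print("MC check:", np.mean(rng.geometric(p_true, 100000) - 1 < rng.exponential(1 / lam_true, 100000)))
```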
297.
Frailty models can be fit as mixed-effects Poisson models after transforming the time-to-event data to the Poisson model framework. We assess, through simulations, the robustness of Poisson likelihood estimation for Cox proportional hazards models with log-normal frailties when the frailty distribution is misspecified. The log-gamma and Laplace distributions were used as the true frailty distributions on the natural log scale. Factors such as the magnitude of heterogeneity, the censoring rate, and the number and sizes of groups were explored. In the simulations, the Poisson modeling approach that assumes log-normally distributed frailties provided accurate estimates of within- and between-group fixed effects even under a misspecified frailty distribution. Non-robust estimation of the variance components was observed in situations with substantial heterogeneity, large event rates, or high data dimensions.
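The numpy sketch below illustrates the device behind this transformation in the simplest (constant-hazard) case: the exponential survival log-likelihood of (t_i, delta_i) equals the Poisson log-likelihood of the event indicator delta_i with mean t_i * exp(eta_i), up to a term not involving the parameters, so the model can be handed to mixed-effects Poisson software with log(t_i) as an offset. The piecewise-exponential extension and the actual log-normal random-intercept fit used in the article are not shown.

```python
import numpy as np

def exp_survival_loglik(eta, time, status):
    """Exponential (constant hazard) log-likelihood with hazard = exp(eta)."""
    return np.sum(status * eta - time * np.exp(eta))

def poisson_loglik(eta, time, status):
    """Poisson log-likelihood of the event indicator with mean time*exp(eta),
    i.e. a Poisson GLM with offset log(time)."""
    mu = time * np.exp(eta)
    return np.sum(status * np.log(mu) - mu)   # log(status!) = 0 since status is 0/1

if __name__ == "__main__":
    rng = np.random.default_rng(4)
    n = 1000
    x = rng.normal(size=n)
    frailty = rng.normal(scale=0.5, size=n)         # log-normal frailty on the log-hazard scale
    eta = 0.2 + 0.8 * x + frailty
    t_event = rng.exponential(1.0 / np.exp(eta))
    censor = rng.exponential(2.0, size=n)
    time = np.minimum(t_event, censor)
    status = (t_event <= censor).astype(float)

    # The two log-likelihoods differ only by sum(status * log(time)),
    # which does not involve eta -- so the same parameters maximise both.
    diff = poisson_loglik(eta, time, status) - exp_survival_loglik(eta, time, status)
    print(np.allclose(diff, np.sum(status * np.log(time))))  # True
```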
298.
In this article, dichotomous variables are used to compare linear and nonlinear Bayesian structural equation models. The Gibbs sampling method is applied for estimation and model comparison. Statistical inference, involving estimation of the parameters and their standard deviations as well as residual analysis for checking the selected model, is discussed. A hidden continuous normal distribution (censored normal distribution) is used to handle the dichotomous variables. The proposed procedure is illustrated with simulated data generated in R, and the analyses are carried out with the R2WinBUGS package in R.
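The Python sketch below shows only the underlying-normal device for a dichotomous indicator: given y in {0, 1}, the hidden continuous variable z is drawn from a normal distribution truncated to (0, inf) when y = 1 and to (-inf, 0] when y = 0, which is the standard Gibbs data-augmentation step. The full structural equation model, its priors, and the WinBUGS implementation are not reproduced.

```python
import numpy as np
from scipy.stats import truncnorm

def sample_hidden_normal(y, mu, sigma=1.0, rng=None):
    """One Gibbs data-augmentation step: draw the latent z_i ~ N(mu_i, sigma^2)
    truncated to z_i > 0 if y_i = 1 and z_i <= 0 if y_i = 0."""
    if rng is None:
        rng = np.random.default_rng()
    lower = np.where(y == 1, (0.0 - mu) / sigma, -np.inf)   # standardized truncation bounds
    upper = np.where(y == 1, np.inf, (0.0 - mu) / sigma)
    return truncnorm.rvs(lower, upper, loc=mu, scale=sigma, random_state=rng)

if __name__ == "__main__":
    rng = np.random.default_rng(5)
    mu = rng.normal(size=10)          # e.g. the structural model's current linear predictor
    y = (mu + rng.normal(size=10) > 0).astype(int)
    z = sample_hidden_normal(y, mu, rng=rng)
    print(np.all((z > 0) == (y == 1)))   # latent draws respect the observed dichotomy: True
```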
299.
In this article, properties of the Bennett test and the Miller test are analyzed. Assuming equal sample sizes and considering the null hypothesis that the coefficients of variation of k populations are equal against the alternative that k - 1 coefficients of variation are the same but differ from the coefficient of variation of the kth population, the empirical significance level and the power of the tests are studied. Moreover, the dependence of the test statistic and of the power on the ratio of the coefficients of variation is considered. The analyses are performed on simulated data.
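The Python sketch below is a minimal Monte Carlo skeleton for estimating the empirical significance level and power of a test for equal coefficients of variation across k normal populations with equal sample sizes. For brevity it uses a simple delta-method Wald-type homogeneity statistic (with the approximation Var(cv_hat) ~ cv^2(0.5 + cv^2)/n) rather than the exact Bennett or Miller statistics studied in the article.

```python
import numpy as np
from scipy.stats import chi2

def cv_wald_stat(samples):
    """Wald-type homogeneity statistic for H0: equal coefficients of variation."""
    cvs = np.array([s.std(ddof=1) / s.mean() for s in samples])
    ns = np.array([len(s) for s in samples])
    v = cvs ** 2 * (0.5 + cvs ** 2) / ns          # approximate variances of the sample CVs
    w = 1.0 / v
    cv_pooled = np.sum(w * cvs) / np.sum(w)       # inverse-variance weighted mean
    return np.sum(w * (cvs - cv_pooled) ** 2)     # ~ chi2 with k-1 df under H0

def rejection_rate(k, n, cv_last, cv_common=0.2, n_rep=2000, alpha=0.05, seed=0):
    """Empirical rejection rate when k-1 populations share cv_common and the
    k-th has cv_last (equal sample sizes, as in the article's setting)."""
    rng = np.random.default_rng(seed)
    crit = chi2.ppf(1.0 - alpha, df=k - 1)
    cvs = [cv_common] * (k - 1) + [cv_last]
    rejections = 0
    for _ in range(n_rep):
        samples = [rng.normal(loc=1.0, scale=c, size=n) for c in cvs]  # mean 1, so sd equals CV
        rejections += cv_wald_stat(samples) > crit
    return rejections / n_rep

if __name__ == "__main__":
    print("empirical size :", rejection_rate(k=4, n=50, cv_last=0.2))   # near the nominal 0.05
    print("empirical power:", rejection_rate(k=4, n=50, cv_last=0.4))
```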
300.
This article explores the calculation of tolerance limits for the Poisson regression model based on the profile-likelihood methodology and on small-sample asymptotic corrections that improve the coverage probability. The data consist of n counts whose mean, or expected rate, depends upon covariates through the log regression function. Upper tolerance limits are evaluated as a function of the covariates; they are obtained from upper confidence limits for the mean. To compute the upper confidence limits, three methodologies are considered: likelihood-based asymptotic methods, small-sample asymptotic methods that improve the likelihood-based methodology, and the delta method. Two applications are discussed: one concerning defects in semiconductor wafers due to plasma etching and the other examining the number of surface faults in the upper seams of coal mines. All three methodologies are illustrated for both applications.
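A minimal Python sketch of the delta-method route only: fit a Poisson log-linear model, form an upper confidence limit for the mean at a given covariate value from the estimated covariance of the coefficients on the log scale, and take the Poisson quantile at that upper mean as an upper tolerance limit. The likelihood-based and small-sample-corrected limits of the article are not reproduced, and availability of statsmodels is assumed.

```python
import numpy as np
import statsmodels.api as sm
from scipy.stats import norm, poisson

def upper_tolerance_limit(results, x0, confidence=0.95, content=0.90):
    """Delta-method upper tolerance limit for a Poisson GLM with log link.

    Upper confidence limit for the mean at x0:
        exp(x0' beta_hat + z_conf * sqrt(x0' Cov(beta_hat) x0)),
    then the tolerance limit is the Poisson `content` quantile at that mean."""
    x0 = np.asarray(x0, dtype=float)
    eta_hat = x0 @ results.params
    se_eta = np.sqrt(x0 @ results.cov_params() @ x0)
    mu_upper = np.exp(eta_hat + norm.ppf(confidence) * se_eta)
    return poisson.ppf(content, mu_upper)

if __name__ == "__main__":
    rng = np.random.default_rng(6)
    n = 300
    x = rng.uniform(0, 2, size=n)                     # a single covariate
    X = sm.add_constant(x)
    y = rng.poisson(np.exp(0.5 + 0.8 * x))            # counts from a log-linear mean
    fit = sm.GLM(y, X, family=sm.families.Poisson()).fit()
    print(upper_tolerance_limit(fit, x0=[1.0, 1.0]))  # limit at covariate value 1.0
```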