861.
Many directional data, such as wind directions, can be collected extremely easily, so experiments typically yield a huge number of sequentially collected data points. To deal with such big data, traditional nonparametric techniques rapidly become too expensive to compute and therefore useless in practice when real-time or online forecasts are expected. In this paper, we propose a recursive kernel density estimator for directional data which (i) can be updated extremely easily when a new set of observations is available and (ii) asymptotically keeps the nice features of the traditional kernel density estimator. Our methodology is based on Robbins–Monro stochastic approximation ideas. We show that our estimator outperforms the traditional techniques in terms of computational time while remaining extremely competitive in terms of efficiency in the sequential context considered here. We obtain expressions for its asymptotic bias and variance, together with an almost sure convergence rate and an asymptotic normality result. Our technique is illustrated on a wind dataset collected in Spain. A Monte Carlo study confirms the nice properties of our recursive estimator with respect to its non-recursive counterpart.
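The recursive update at the heart of such an estimator can be sketched as follows. This is a minimal illustration, not the authors' estimator: it assumes a von Mises smoothing kernel with a fixed concentration `kappa` (the paper would let the bandwidth shrink with n) and plain Robbins–Monro step sizes γ_n = 1/n.

```python
import numpy as np

def vonmises_kernel(grid, x, kappa):
    """von Mises kernel on the circle with concentration kappa."""
    return np.exp(kappa * np.cos(grid - x)) / (2.0 * np.pi * np.i0(kappa))

class RecursiveCircularKDE:
    """Online kernel density estimate on a fixed grid of angles.

    Each new observation costs O(len(grid)); past data need not be stored,
    which is the point of the recursive construction.
    """
    def __init__(self, grid, kappa):
        self.grid = grid
        self.kappa = kappa
        self.n = 0
        self.f = np.zeros_like(grid)

    def update(self, x):
        self.n += 1
        gamma = 1.0 / self.n                     # Robbins-Monro step size
        self.f = (1.0 - gamma) * self.f + gamma * vonmises_kernel(self.grid, x, self.kappa)
        return self.f
```

Because each update is a convex combination of the current estimate and one kernel evaluation, the cost per observation is constant in n, unlike the non-recursive estimator, which must revisit all stored data.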
862.
Time-varying coefficient models are widely used in longitudinal data analysis. These models allow the effects of predictors on the response to vary over time. In this article, we consider a mixed-effects time-varying coefficient model to account for the within-subject correlation in longitudinal data. We show that when kernel smoothing is used to estimate the smooth functions in time-varying coefficient models for sparse or dense longitudinal data, the asymptotic results for these two situations are essentially different, so a subjective choice between the sparse and dense cases might lead to erroneous conclusions in statistical inference. To solve this problem, we establish a unified self-normalized central limit theorem, based on which a unified inference is proposed without deciding whether the data are sparse or dense. The effectiveness of the proposed unified inference is demonstrated through a simulation study and an analysis of the Baltimore MACS data.
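As a toy illustration of the kernel smoothing step (ignoring the within-subject random effects that the article's mixed-effects model includes), a local-constant weighted least-squares estimate of β(t₀) in the varying-coefficient model y = x·β(t) + ε can be written as:

```python
import numpy as np

def local_beta(t0, t, x, y, h):
    """Local-constant kernel estimate of beta(t0) in y = x * beta(t) + noise.

    Minimizes sum_i K_h(t_i - t0) * (y_i - x_i * b)^2 over b, which has the
    closed form below.
    """
    w = np.exp(-0.5 * ((t - t0) / h) ** 2)   # Gaussian kernel weights
    return np.sum(w * x * y) / np.sum(w * x * x)
```

The bandwidth h governs the bias-variance trade-off, and, as the article stresses, the appropriate asymptotic framework depends on how many within-subject observations fall into each such window (sparse versus dense sampling).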
863.
One of the topics in morphometry that has received much attention recently is the longitudinal analysis of shape variation. According to Kendall's definition of shape, the shape of an object lies in a non-Euclidean space, which makes the longitudinal study of configurations difficult. To simplify this task, this paper pursues a triangulation of the objects followed by the construction of a non-parametric regression-type model on the unit sphere. The configurations at some time instances are predicted using both properties of the triangulation and the sizes of great baselines. Moreover, minimizing a Euclidean risk function is proposed to select feasible weights when constructing smoothers in a non-parametric manner. Together, these provide proper shape growth models for analyzing objects that vary over time. The proposed models are applied to the analysis of two real-life data sets.
864.
The quadratic inference function (QIF) is an alternative methodology to the popular generalized estimating equations (GEE) approach. It does not involve direct estimation of the correlation parameter, and thus remains optimal even if the working correlation structure is misspecified. The idea is to represent the inverse of the working correlation matrix by a linear combination of some basis matrices. In this article, we present a modification of QIF with a robust variance estimator of the extended score function. Theoretical and numerical results show that the modified QIF attains better efficiency and achieves better small-sample performance than the original QIF method.
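The basis-matrix representation underlying QIF is easy to verify numerically. A small sketch (using the standard exchangeable working correlation as an example): the inverse of an exchangeable correlation matrix is exactly a linear combination of the identity matrix and the matrix of ones, which are the two basis matrices one would use in this case.

```python
import numpy as np

def exchangeable(m, rho):
    """Exchangeable (compound-symmetric) working correlation matrix."""
    return (1.0 - rho) * np.eye(m) + rho * np.ones((m, m))

def inverse_basis_coeffs(m, rho):
    """Express inv(R) as a*I + b*J (J = matrix of ones); return (a, b)."""
    r_inv = np.linalg.inv(exchangeable(m, rho))
    a = r_inv[0, 0] - r_inv[0, 1]   # diagonal minus off-diagonal entry
    b = r_inv[0, 1]                 # common off-diagonal entry
    return a, b
```

For this structure the coefficient a equals 1/(1 − ρ) in closed form; representing inv(R) through such basis matrices is what lets QIF avoid estimating the correlation parameter directly.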
865.
866.
This article deals with the estimation of R = P{X &lt; Y}, where X and Y are independent random variables from the geometric and exponential distributions, respectively. For complete samples, the MLE of R, its asymptotic distribution, and a confidence interval based on it are obtained. The procedure for deriving a bootstrap-p confidence interval is presented. The UMVUE of R and the UMVUE of its variance are derived. The Bayes estimator of R is investigated and its Lindley approximation is obtained. A simulation study is performed in order to compare these estimators. Finally, all point estimators are obtained for a right-censored sample from the exponential distribution.
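A quick numerical sanity check of R = P{X &lt; Y} can be done by Monte Carlo. This is a sketch under assumed conventions, not the article's: it takes the geometric variable supported on {0, 1, 2, …} and the exponential with rate λ; the article's parameterization may differ, which changes the closed form.

```python
import numpy as np

rng = np.random.default_rng(1)
p, lam, n = 0.5, 1.0, 200_000

# X ~ Geometric(p) on {0, 1, ...}; numpy's geometric counts trials in {1, 2, ...}
x = rng.geometric(p, size=n) - 1
y = rng.exponential(scale=1.0 / lam, size=n)   # numpy uses scale = 1 / rate
r_mc = np.mean(x < y)

# Closed form under these conventions:
# R = sum_k p (1-p)^k P(Y > k) = p / (1 - (1-p) * exp(-lam))
r_exact = p / (1.0 - (1.0 - p) * np.exp(-lam))
print(r_mc, r_exact)
```

With p = 0.5 and λ = 1 both values come out near 0.61; the Monte Carlo standard error at this sample size is about 0.001.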
867.
The gist of the quickest change-point detection problem is to detect a change in the statistical behavior of a series of sequentially made observations, and to do so in an optimal detection-speed-versus-"false-positive"-risk manner. When optimality is understood either in the generalized Bayesian sense or as defined in Shiryaev's multi-cyclic setup, the so-called Shiryaev–Roberts (SR) detection procedure is known to be the "best one can do", provided, however, that the observations' pre- and post-change distributions are both fully specified. We consider a more realistic setup, viz. one where the post-change distribution is assumed known only up to a parameter, so that the latter may be misspecified. The question of interest is the sensitivity (or robustness) of the otherwise "best" SR procedure with respect to a possible misspecification of the post-change distribution parameter. To answer this question, we provide a case study where, in a specific Gaussian scenario, we allow the SR procedure to be "out of tune" in the way of the post-change distribution parameter, and numerically assess the effect of the "mistuning" on Shiryaev's (multi-cyclic) Stationary Average Detection Delay delivered by the SR procedure. The comprehensive quantitative robustness characterization of the SR procedure obtained in the study can be used to develop the respective theory as well as to provide a rationale for the practical design of the SR procedure. The overall qualitative conclusion of the study is an expected one: the SR procedure is less (more) robust for less (more) contrast changes and for lower (higher) levels of the false alarm risk.
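The SR statistic itself follows a simple recursion, R_n = (1 + R_{n−1})·Λ_n with R_0 = 0, where Λ_n is the per-observation likelihood ratio; in the Gaussian mean-shift scenario from N(0, 1) to N(θ, 1), Λ_n = exp(θX_n − θ²/2). A minimal sketch (the threshold A controls the false-alarm risk; running it with a θ different from the true post-change mean is exactly the "mistuning" studied in the article):

```python
import numpy as np

def sr_stopping_time(x, theta, A):
    """Shiryaev-Roberts procedure for a N(0,1) -> N(theta,1) mean shift.

    Returns the first n with R_n >= A, or None if no alarm is raised.
    """
    R = 0.0
    for n, xn in enumerate(x, start=1):
        lam = np.exp(theta * xn - 0.5 * theta ** 2)  # likelihood ratio of x_n
        R = (1.0 + R) * lam
        if R >= A:
            return n
    return None
```

Under pre-change data the statistic stays small (for x ≡ 0 and θ = 1 it converges to e^{−1/2}/(1 − e^{−1/2}) ≈ 1.54), while after a genuine shift log R_n drifts upward at rate θ²/2 per observation, so alarms follow quickly.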
868.
Frailty models can be fit as mixed-effects Poisson models after transforming time-to-event data to the Poisson model framework. We assess, through simulations, the robustness of Poisson likelihood estimation for Cox proportional hazards models with log-normal frailties under a misspecified frailty distribution. The log-gamma and Laplace distributions were used as the true distributions for frailties on the natural log scale. Factors such as the magnitude of heterogeneity, the censoring rate, and the number and sizes of groups were explored. In the simulations, the Poisson modeling approach that assumes log-normally distributed frailties provided accurate estimates of within- and between-group fixed effects even under a misspecified frailty distribution. Non-robust estimation of variance components was observed in situations of substantial heterogeneity, large event rates, or high data dimensions.
869.
In this article, dichotomous variables are used to compare linear and nonlinear Bayesian structural equation models. Gibbs sampling is applied for estimation and model comparison. Statistical inference, involving the estimation of parameters and their standard deviations, and residual analysis for testing the selected model, is discussed. A hidden continuous normal (censored normal) distribution is used to handle the dichotomous variables. The proposed procedure is illustrated with simulated data generated in R. Analyses are carried out using the R2WinBUGS package in R.
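The "hidden continuous normal" device can be sketched with an intercept-only probit model, in the style of Albert and Chib's data augmentation. This is a deliberate simplification of the article's structural equation setting; the flat prior and the model itself are illustrative assumptions.

```python
import numpy as np
from scipy.stats import norm

def probit_gibbs(y, iters=500, seed=0):
    """Gibbs sampler for an intercept-only probit model via a latent
    (censored) normal: z_i ~ N(beta, 1), observed y_i = 1{z_i > 0},
    with a flat prior on beta."""
    rng = np.random.default_rng(seed)
    y = np.asarray(y)
    n = len(y)
    beta, draws = 0.0, []
    for _ in range(iters):
        # 1) z | beta, y: N(beta, 1) truncated to (0, inf) if y=1, (-inf, 0] if y=0,
        #    drawn by inverse-CDF on the appropriate sub-interval of (0, 1)
        p0 = norm.cdf(-beta)                       # P(z <= 0 | beta)
        u = rng.uniform(size=n)
        u = np.where(y == 1, p0 + u * (1.0 - p0), u * p0)
        z = beta + norm.ppf(u)
        # 2) beta | z ~ N(mean(z), 1/n) under the flat prior
        beta = z.mean() + rng.standard_normal() / np.sqrt(n)
        draws.append(beta)
    return np.array(draws), z
```

The latent z plays the role of the censored normal variable: conditioning on it turns the dichotomous-data likelihood back into an ordinary normal model, which is what makes Gibbs sampling tractable.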
870.
In this article, properties of the Bennett test and the Miller test are analyzed. Assuming that the sample size is the same for each sample, and considering the null hypothesis that the coefficients of variation of k populations are equal against the alternative that k − 1 coefficients of variation are the same but differ from the coefficient of variation of the kth population, the empirical significance level and the power of the tests are studied. Moreover, the dependence of the test statistic and the power on the ratio of the coefficients of variation is considered. The analyses are performed on simulated data.