21.
We study the least-squares regression learning algorithm generated by regularization schemes in reproducing kernel Hilbert spaces. A non-i.i.d. setting is considered: the sequence of probability measures for sampling is not identical and the sampling may be dependent. When the sequence of marginal distributions for sampling converges exponentially fast in the dual of a Hölder space and the sampling process satisfies a polynomial strong mixing condition, we derive learning rates for the learning algorithm.
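The regularization scheme above is the familiar kernel ridge regression. As a rough illustration only (i.i.d. toy data, a Gaussian kernel, and hand-picked values of the bandwidth sigma and regularization parameter lam are assumptions of this sketch, not part of the paper's non-i.i.d. analysis), a minimal implementation might look like this:

```python
import numpy as np

def gaussian_kernel(X, Y, sigma=0.1):
    """Gram matrix K[i, j] = exp(-||x_i - y_j||^2 / (2 sigma^2))."""
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def krr_fit(X, y, lam=1e-3, sigma=0.1):
    """Regularized least squares in the RKHS:
    minimize (1/m) sum (f(x_i) - y_i)^2 + lam * ||f||_K^2,
    whose representer-theorem solution solves (K + lam*m*I) alpha = y."""
    m = len(y)
    K = gaussian_kernel(X, X, sigma)
    alpha = np.linalg.solve(K + lam * m * np.eye(m), y)
    return lambda Xnew: gaussian_kernel(Xnew, X, sigma) @ alpha

# Toy usage: regression of a noisy sine curve.
rng = np.random.default_rng(0)
X = rng.uniform(0, 1, size=(100, 1))
y = np.sin(2 * np.pi * X[:, 0]) + 0.1 * rng.standard_normal(100)
f_hat = krr_fit(X, y)
print(f_hat(np.array([[0.25], [0.75]])))  # roughly +1 and -1
```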
22.
23.
Consider a process satisfying a stochastic differential equation with an unknown drift parameter, and suppose that discrete observations are given. It is known that a simple least squares estimator (LSE) can be consistent but numerically unstable, in the sense of large standard deviations in finite samples, when the noise process has jumps. We propose a filter that cuts large shocks from the data and construct the same LSE from the data selected by the filter. The proposed estimator can be asymptotically equivalent to the usual LSE, whose asymptotic distribution strongly depends on the noise process. In a numerical study, however, it appeared asymptotically normal in an example where the filter was chosen suitably and the noise was a Lévy process. We try to justify this phenomenon mathematically under certain restricted assumptions.
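As a toy illustration of screening out jump-like increments before computing the drift LSE, here is a sketch for an Ornstein–Uhlenbeck-type process with compound-Poisson jumps; the model, the threshold rule, and all parameter values are assumptions of the sketch, not the paper's filter:

```python
import numpy as np

def simulate_ou_with_jumps(theta=1.0, dt=0.01, n=5000, jump_rate=1.0,
                           jump_scale=2.0, seed=0):
    """Euler scheme for dX = -theta*X dt + dW + compound-Poisson jumps."""
    rng = np.random.default_rng(seed)
    X = np.zeros(n + 1)
    for i in range(n):
        jump = rng.normal(0.0, jump_scale) if rng.random() < jump_rate * dt else 0.0
        X[i + 1] = X[i] - theta * X[i] * dt + np.sqrt(dt) * rng.standard_normal() + jump
    return X

def filtered_lse(X, dt, threshold):
    """LSE of the drift parameter, using only increments below the threshold."""
    dX = np.diff(X)
    keep = np.abs(dX) <= threshold          # the jump filter
    num = -np.sum(X[:-1][keep] * dX[keep])
    den = dt * np.sum(X[:-1][keep] ** 2)
    return num / den

X = simulate_ou_with_jumps()
print("plain LSE:   ", filtered_lse(X, 0.01, np.inf))
print("filtered LSE:", filtered_lse(X, 0.01, threshold=4 * np.sqrt(0.01)))
```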
24.
We propose an efficient and robust method for variance function estimation in semiparametric longitudinal data analysis. The method uses a local log-linear approximation for the variance function and adopts a generalized estimating equation approach to account for within-subject correlations. We show theoretically and empirically that our method outperforms estimators using working independence, which ignores the correlations. The Canadian Journal of Statistics 39: 656–670; 2011. © 2011 Statistical Society of Canada
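For intuition, a working-independence version of a local log-linear variance estimate (the kind of baseline the paper improves on with its GEE correction) can be sketched as follows; the Gaussian kernel, the bandwidth, and the Gaussian-error bias-correction constant are assumptions of this illustration:

```python
import numpy as np

def local_loglinear_variance(t, resid, t0, bandwidth):
    """Local log-linear variance estimate at t0 from residuals resid observed at t.

    Working-independence sketch: weighted least squares of log(resid^2) on
    (t - t0), then exponentiate the intercept.  The additive constant 1.2704
    (= -E[log chi^2_1]) corrects the bias of log(resid^2) when the residuals
    are roughly Gaussian.
    """
    u = t - t0
    w = np.exp(-0.5 * (u / bandwidth) ** 2)          # Gaussian kernel weights
    z = np.log(resid ** 2 + 1e-12)
    Xd = np.column_stack([np.ones_like(u), u])
    Xw = Xd * w[:, None]                             # weight each row
    beta = np.linalg.solve(Xd.T @ Xw, Xw.T @ z)      # weighted normal equations
    return np.exp(beta[0] + 1.2704)

# Toy check: true variance function sigma^2(t) = (1 + t)^2, so sigma^2(0.5) = 2.25.
rng = np.random.default_rng(1)
t = np.sort(rng.uniform(0, 1, 2000))
resid = (1 + t) * rng.standard_normal(2000)
print(local_loglinear_variance(t, resid, t0=0.5, bandwidth=0.15))  # roughly 2.25
```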
25.
For normal populations with unequal variances, we develop matching priors and reference priors for a linear combination of the means. We find three second-order matching priors: a highest posterior density (HPD) matching prior, a cumulative distribution function (CDF) matching prior, and a likelihood ratio (LR) matching prior. Furthermore, we show that the reference priors are all first-order matching priors but do not satisfy the second-order matching criterion, and we establish the symmetry and unimodality of the posterior under the developed priors. The results of a simulation indicate that the second-order matching prior outperforms the reference priors in terms of matching the target coverage probabilities in a frequentist sense. Finally, we compare the Bayesian credible intervals based on the developed priors with confidence intervals using real data.
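The frequentist-matching idea can be checked by simulation: draw data, build an equal-tailed posterior credible interval for the linear combination of means, and record how often it covers the truth. The sketch below uses independent Jeffreys priors pi(mu_i, sigma_i^2) proportional to 1/sigma_i^2 and Monte Carlo posterior draws as a stand-in for the paper's matching and reference priors; the populations, sample sizes, and coefficient vector c are illustrative assumptions:

```python
import numpy as np

def credible_interval(samples, c, n_draw=4000, level=0.95, rng=None):
    """Equal-tailed credible interval for sum_i c_i * mu_i under independent
    Jeffreys priors pi(mu_i, sigma_i^2) ~ 1/sigma_i^2 (illustrative choice,
    not the paper's second-order matching prior)."""
    rng = rng or np.random.default_rng()
    draws = np.zeros(n_draw)
    for ci, x in zip(c, samples):
        n, xbar, s2 = len(x), x.mean(), x.var(ddof=1)
        sigma2 = (n - 1) * s2 / rng.chisquare(n - 1, size=n_draw)  # scaled inv-chi^2
        mu = rng.normal(xbar, np.sqrt(sigma2 / n))                 # mu | sigma^2, data
        draws += ci * mu
    a = (1 - level) / 2
    return np.quantile(draws, [a, 1 - a])

# Frequentist coverage check: two normal populations with unequal variances.
rng = np.random.default_rng(2)
c, mu, sd, n = np.array([1.0, -1.0]), np.array([1.0, 0.0]), np.array([1.0, 3.0]), (10, 15)
target = c @ mu
hits = 0
for _ in range(2000):
    data = [rng.normal(m, s, size=k) for m, s, k in zip(mu, sd, n)]
    lo, hi = credible_interval(data, c, rng=rng)
    hits += (lo <= target <= hi)
print("empirical coverage:", hits / 2000)   # should be near 0.95
```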
26.
27.
In this paper, three competing survival function estimators are compared under the assumptions of the so-called Koziol–Green model, which is a simple model of informative random censoring. It is shown that the model-specific estimators of Ebrahimi and Abdushukurov, Cheng, and Lin are asymptotically equivalent. Further, exact expressions for the (noncentral) moments of these estimators are given, and their biases are analytically compared with the bias of the familiar Kaplan–Meier estimator. Finally, MSE comparisons of the three estimators are given for some selected rates of censoring.
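Under the Koziol–Green model the Abdushukurov–Cheng–Lin-type estimator takes the simple form S_hat(t) = H_bar_n(t)^p_hat, the empirical survival function of the observed times raised to the proportion of uncensored observations. A small simulation comparing it with the Kaplan–Meier estimator is sketched below; the exponential lifetimes and censoring times (chosen so that the Koziol–Green proportionality holds) are assumptions of the sketch:

```python
import numpy as np

def acl_survival(z, delta, t):
    """Abdushukurov-Cheng-Lin estimator under the Koziol-Green model:
    S_hat(t) = (empirical survival of Z at t) ** (proportion uncensored)."""
    p_hat = delta.mean()
    h_bar = (z > t).mean()
    return h_bar ** p_hat

def kaplan_meier(z, delta, t):
    """Product-limit estimate of P(X > t) (no ties assumed in this sketch)."""
    order = np.argsort(z)
    z, delta = z[order], delta[order]
    surv, n = 1.0, len(z)
    for i in range(n):
        if z[i] > t:
            break
        if delta[i] == 1:
            surv *= 1.0 - 1.0 / (n - i)   # one event among n - i at risk
    return surv

# Koziol-Green simulation: X ~ Exp(1), C ~ Exp(beta), so G_bar = F_bar^beta.
rng = np.random.default_rng(3)
beta, n = 0.5, 400
x, c = rng.exponential(1.0, n), rng.exponential(1.0 / beta, n)
z, delta = np.minimum(x, c), (x <= c).astype(float)
t0 = 1.0
print("true S(1)    :", np.exp(-t0))
print("ACL estimate :", acl_survival(z, delta, t0))
print("Kaplan-Meier :", kaplan_meier(z, delta, t0))
```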
28.
Josef Kozák, Statistics, 2013, 47(3): 363–371
Working with the linear regression model (1.1) and extraneous information (1.2) about the regression coefficients, the problem arises of how to build estimators (1.3) with risk (1.4) that utilize the known information in order to reduce their risk compared with the risk (1.6) of the LSE (1.5). A solution of this problem is known for a positive definite matrix T, namely in the form of the estimators (1.8) and (1.10). First, it is shown that the proposed estimators (2.6), (2.9), and (2.16), based on pseudoinverses of the matrix L, solve the problem for a positive semidefinite matrix T = L'L. Further, there is the question of the interpretability of the estimators in the sense of inequality (3.1); it is shown that all of the mentioned estimators are at least partially interpretable in the sense of requirements (3.2) or (3.10).
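The numbered equations refer to the paper itself, so the specific estimators (2.6), (2.9), and (2.16) cannot be reconstructed from the abstract. As general background only, the textbook equality-restricted least squares estimator, with a Moore–Penrose pseudoinverse in place of the middle inverse so that a rank-deficient restriction matrix is tolerated, can be sketched as follows:

```python
import numpy as np

def restricted_lse(X, y, R, r):
    """Equality-restricted least squares under R @ beta = r.

    Standard textbook form of the restricted LSE; np.linalg.pinv is used for
    the middle inverse so that a rank-deficient restriction matrix R (the
    positive-semidefinite case T = R'R) is still handled.
    """
    XtX_inv = np.linalg.inv(X.T @ X)
    beta_ols = XtX_inv @ X.T @ y
    A = R @ XtX_inv @ R.T
    correction = XtX_inv @ R.T @ np.linalg.pinv(A) @ (R @ beta_ols - r)
    return beta_ols, beta_ols - correction

# Toy usage: impose beta_1 + beta_2 = 1 on a two-predictor model.
rng = np.random.default_rng(4)
X = rng.standard_normal((200, 2))
y = X @ np.array([0.7, 0.3]) + 0.5 * rng.standard_normal(200)
R, r = np.array([[1.0, 1.0]]), np.array([1.0])
beta_ols, beta_restricted = restricted_lse(X, y, R, r)
print("OLS:       ", beta_ols)
print("restricted:", beta_restricted)   # satisfies beta_1 + beta_2 = 1 exactly
```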
29.
J. Anděl, I. Netuka, Statistics, 2013, 47(4): 279–287
The article deals with methods for computing the stationary marginal distribution in linear models of time series. Two approaches are described. First, an algorithm based on approximating the solution of the corresponding integral equation is briefly reviewed. Then we study the limit behaviour of the partial sums c_1η_1 + c_2η_2 + ··· + c_nη_n, where the η_i are i.i.d. random variables and the c_i are real constants. We generalize the procedure of Haiman (1998) [Haiman, G., 1998, Upper and lower bounds for the tail of the invariant distribution of some AR(1) processes. Asymptotic Methods in Probability and Statistics, 45, 723–730.] to an arbitrary causal linear process and relax the assumptions of his result significantly. This is achieved by investigating the properties of convolutions of densities.
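The partial-sum representation suggests a direct Monte Carlo approximation of the stationary marginal law: truncate the series, draw the innovations, and compare with a known benchmark. In the sketch below the linear process is an AR(1) with coefficient phi and Gaussian innovations, so the exact stationary law is normal and serves as a check; both choices are assumptions of the illustration, not of the article:

```python
import numpy as np
from scipy.stats import gaussian_kde

phi = 0.7          # AR(1) coefficient: X_t = sum_j phi^j eta_{t-j}, a causal linear process
n_terms = 200      # truncation point of the partial sum c_1*eta_1 + ... + c_n*eta_n
n_draws = 50_000

rng = np.random.default_rng(5)

# Monte Carlo draws from the truncated partial sum: each row is one realization.
coeffs = phi ** np.arange(n_terms)                      # c_j = phi^j
eta = rng.standard_normal((n_draws, n_terms))           # i.i.d. innovations
partial_sums = eta @ coeffs                             # approx. stationary marginal

# Reference: with Gaussian innovations the exact stationary law is N(0, 1/(1 - phi^2)).
grid = np.linspace(-4, 4, 9)
kde = gaussian_kde(partial_sums)
exact_sd = 1.0 / np.sqrt(1.0 - phi ** 2)
exact = np.exp(-0.5 * (grid / exact_sd) ** 2) / (exact_sd * np.sqrt(2 * np.pi))
print(np.round(kde(grid), 4))
print(np.round(exact, 4))
```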
30.
In this article, we develop regression models with cross-classified responses. Conditional independence structures can be explored and exploited through the selective inclusion or exclusion of terms in a certain functional ANOVA decomposition, and the estimation is done nonparametrically via the penalized likelihood method. A cohort of computational and data-analytical tools is presented, including cross-validation for smoothing parameter selection, Kullback–Leibler projection for model selection, and Bayesian confidence intervals for odds ratios. Random effects are introduced to model possible correlations such as those found in longitudinal and clustered data. Empirical performance of the methods is explored in simulation studies of limited scale, and a real data example is presented using eye-tracking data from linguistic studies. The techniques are implemented in a suite of R functions, whose usage is briefly described in the appendix. The Canadian Journal of Statistics 39: 591–609; 2011. © 2011 Statistical Society of Canada
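As a much-simplified stand-in for the cross-validated smoothing-parameter selection mentioned above (ridge-penalized logistic regression via scikit-learn rather than the paper's penalized-likelihood functional ANOVA in R), choosing the penalty level by cross-validated log-likelihood can be sketched like this; the data-generating model and the penalty grid are assumptions of the illustration:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import log_loss
from sklearn.model_selection import KFold

def cv_neg_loglik(X, y, C, n_splits=5, seed=0):
    """Cross-validated negative log-likelihood of a ridge-penalized logistic fit."""
    kf = KFold(n_splits=n_splits, shuffle=True, random_state=seed)
    losses = []
    for train, test in kf.split(X):
        model = LogisticRegression(C=C, penalty="l2", solver="lbfgs", max_iter=1000)
        model.fit(X[train], y[train])
        losses.append(log_loss(y[test], model.predict_proba(X[test])))
    return np.mean(losses)

# Toy binary response depending on two covariates.
rng = np.random.default_rng(6)
X = rng.standard_normal((500, 2))
p = 1.0 / (1.0 + np.exp(-(1.5 * X[:, 0] - X[:, 1])))
y = rng.binomial(1, p)

grid = [0.01, 0.1, 1.0, 10.0, 100.0]              # candidate penalty levels
scores = {C: cv_neg_loglik(X, y, C) for C in grid}
best_C = min(scores, key=scores.get)
print(scores, "->", best_C)
```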