21.
This paper continues earlier work of the authors in carrying out the program, discussed in Kiefer (1975), of comparing the performance of designs under various optimality criteria. Designs for extrapolation problems are also obtained. The setting is that in which the controllable variable takes values in the q-dimensional unit ball and the regression is cubic. Thus, the ideas of comparison are tested for a model more complex than the quadratic models discussed previously. The E-optimum design performs well in terms of other criteria, as well as for extrapolation to larger balls. A method of simplifying the calculations to obtain approximately optimum designs is illustrated.
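The E-optimality criterion mentioned above maximizes the smallest eigenvalue of the design's information matrix. A minimal sketch for one-dimensional cubic regression on [-1, 1] (the specific design points and weights below are illustrative assumptions, not the paper's optimal designs):

```python
import numpy as np

def information_matrix(points, weights):
    # M(xi) = sum_i w_i f(x_i) f(x_i)^T for cubic regression,
    # with regressor vector f(x) = (1, x, x^2, x^3)
    M = np.zeros((4, 4))
    for x, w in zip(points, weights):
        f = np.array([1.0, x, x ** 2, x ** 3])
        M += w * np.outer(f, f)
    return M

def e_criterion(points, weights):
    # E-criterion value: the smallest eigenvalue of M (to be maximized)
    return np.linalg.eigvalsh(information_matrix(points, weights))[0]

# compare an illustrative 4-point design against a uniform 7-point design
d4 = e_criterion([-1.0, -0.5, 0.5, 1.0], [0.25] * 4)
d7 = e_criterion(np.linspace(-1, 1, 7), [1 / 7] * 7)
```

Both designs support at least four distinct points, so their information matrices are nonsingular and the criterion values are strictly positive; comparing such values across candidate designs is the basic computation behind the criterion comparisons in the paper.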
22.
23.
24.
Maximum penalized likelihood estimation is applied in non/semi-parametric regression problems and enables exploratory identification and diagnostics of nonlinear regression relationships. The smoothing parameter λ controls the trade-off between the smoothness and the goodness-of-fit of a function. The method of cross-validation is used for selecting λ, but generalized cross-validation, which is based on the squared-error criterion, behaves badly under non-normal distributions and often cannot select a reasonable λ. The purpose of this study is to propose a method that gives a more suitable λ and to evaluate its performance.

A method of simple calculation for the delete-one estimates in the likelihood-based cross-validation (LCV) score is described. A score of similar form to the Akaike information criterion (AIC) is also derived. The proposed scores are compared with those of standard procedures using data sets from the literature. Simulations are performed to compare the patterns of selecting λ and the overall goodness-of-fit, and to evaluate the effects of some factors. The LCV scores obtained by the simple calculation provide a good approximation to the exact ones if λ is not extremely small. Furthermore, the LCV scores by the simple calculation make it possible to select λ adaptively. They have the effect of reducing the bias of estimates and provide better performance in the sense of overall goodness-of-fit. These scores are useful especially in the case of small sample size and in the case of binary logistic regression.
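The delete-one idea behind likelihood-based cross-validation can be sketched with an exact (not simplified) leave-one-out computation for a simple kernel smoother with Gaussian errors; the smoother, the sine test function, and the bandwidth grid are illustrative assumptions, not the paper's penalized-likelihood setting:

```python
import numpy as np

def loo_lcv_score(x, y, h, sigma):
    # Leave-one-out likelihood cross-validation score: the sum of
    # Gaussian log densities of each held-out observation under the
    # Nadaraya-Watson fit computed from the remaining points.
    n = len(x)
    score = 0.0
    for i in range(n):
        mask = np.arange(n) != i
        w = np.exp(-0.5 * ((x[i] - x[mask]) / h) ** 2)
        mu = np.sum(w * y[mask]) / np.sum(w)
        score += (-0.5 * np.log(2 * np.pi * sigma ** 2)
                  - 0.5 * (y[i] - mu) ** 2 / sigma ** 2)
    return score

rng = np.random.default_rng(0)
x = np.sort(rng.uniform(0, 1, 80))
y = np.sin(2 * np.pi * x) + 0.3 * rng.standard_normal(80)

# select the bandwidth (the analogue of lambda) maximizing the LCV score
scores = {h: loo_lcv_score(x, y, h, sigma=0.3) for h in (0.02, 0.05, 0.1, 0.3)}
best_h = max(scores, key=scores.get)
```

A very large bandwidth oversmooths the sine curve and is penalized by the held-out log likelihood, which is the behavior the criterion relies on; the "simple calculation" in the abstract avoids refitting for each deleted point.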
25.
This article introduces a semiparametric autoregressive conditional heteroscedasticity (ARCH) model that has conditional first and second moments given by autoregressive moving average and ARCH parametric formulations but a conditional density that is assumed only to be sufficiently smooth to be approximated by a nonparametric density estimator. For several particular conditional densities, the relative efficiency of the quasi-maximum likelihood estimator is compared with maximum likelihood under correct specification. These potential efficiency gains for a fully adaptive procedure are compared in a Monte Carlo experiment with the observed gains from using the proposed semiparametric procedure, and it is found that the estimator captures a substantial proportion of the potential. The estimator is applied to daily stock returns from small firms that are found to exhibit conditional skewness and kurtosis and to the British pound to dollar exchange rate.
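The quasi-maximum likelihood baseline against which the semiparametric estimator is compared can be sketched for a plain ARCH(1) model; this is a generic Gaussian QML fit, not the paper's adaptive procedure, and the simulated parameter values are illustrative assumptions:

```python
import numpy as np
from scipy.optimize import minimize

def arch1_qml(r):
    # Gaussian quasi-maximum likelihood for r_t = e_t, e_t ~ (0, h_t),
    # h_t = omega + alpha * r_{t-1}^2; the Gaussian likelihood is used
    # even if the true conditional density is non-normal (hence "quasi")
    def negloglik(theta):
        omega, alpha = theta
        h = omega + alpha * r[:-1] ** 2
        return 0.5 * np.sum(np.log(h) + r[1:] ** 2 / h)
    res = minimize(negloglik, x0=[np.var(r), 0.2],
                   bounds=[(1e-6, None), (0.0, 0.999)])
    return res.x

# simulate an ARCH(1) path and recover the parameters
rng = np.random.default_rng(1)
omega_true, alpha_true = 0.5, 0.4
n = 4000
r = np.zeros(n)
for t in range(1, n):
    h = omega_true + alpha_true * r[t - 1] ** 2
    r[t] = np.sqrt(h) * rng.standard_normal()
omega_hat, alpha_hat = arch1_qml(r)
```

QML remains consistent under a misspecified conditional density; the efficiency loss relative to the true maximum likelihood is the quantity the semiparametric estimator is designed to recover.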
26.
Several important economic time series are recorded on a particular day every week. Seasonal adjustment of such series is difficult because the number of weeks varies between 52 and 53 and the position of the recording day changes from year to year. In addition, certain festivals, most notably Easter, take place at different times according to the year. This article presents a solution to problems of this kind by setting up a structural time series model that allows the seasonal pattern to evolve over time and enables trend extraction and seasonal adjustment to be carried out by means of state-space filtering and smoothing algorithms. The method is illustrated with a Bank of England series on the money supply.
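The state-space filtering step can be sketched with the simplest structural model, the local level model; the full weekly-seasonal model of the paper adds evolving seasonal states to the same Kalman recursion, and the variance settings below are illustrative assumptions:

```python
import numpy as np

def local_level_filter(y, sigma_eps2, sigma_eta2, a0=0.0, p0=1e7):
    # Kalman filter for the local level model:
    #   y_t = mu_t + eps_t,   eps_t ~ N(0, sigma_eps2)
    #   mu_{t+1} = mu_t + eta_t,  eta_t ~ N(0, sigma_eta2)
    # p0 is a diffuse prior variance for the initial level.
    a, p = a0, p0
    level = np.empty(len(y))
    for t, obs in enumerate(y):
        f = p + sigma_eps2            # innovation variance
        k = p / f                     # Kalman gain
        a = a + k * (obs - a)         # filtered level estimate
        p = p * (1 - k) + sigma_eta2  # variance carried to next period
        level[t] = a
    return level

# simulate a slowly drifting level observed with noise, then extract it
rng = np.random.default_rng(2)
mu = np.cumsum(0.1 * rng.standard_normal(300))
y = mu + 0.5 * rng.standard_normal(300)
trend = local_level_filter(y, sigma_eps2=0.25, sigma_eta2=0.01)
```

After a burn-in period the filtered level tracks the true trend with markedly lower error than the raw observations, which is the basis of the trend-extraction step described in the abstract.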
27.
A fast and accurate method of confidence interval construction for the smoothing parameter in penalised spline and partially linear models is proposed. The method is akin to a parametric percentile bootstrap where Monte Carlo simulation is replaced by saddlepoint approximation, and can therefore be viewed as an approximate bootstrap. It is applicable in a quite general setting, requiring only that the underlying estimator be the root of an estimating equation that is a quadratic form in normal random variables. This is the case under a variety of optimality criteria such as those commonly denoted by maximum likelihood (ML), restricted ML (REML), generalized cross validation (GCV) and Akaike's information criteria (AIC). Simulation studies reveal that under the ML and REML criteria, the method delivers a near-exact performance with computational speeds that are an order of magnitude faster than existing exact methods, and two orders of magnitude faster than a classical bootstrap. Perhaps most importantly, the proposed method also offers a computationally feasible alternative when no known exact or asymptotic methods exist, e.g. GCV and AIC. An application is illustrated by applying the methodology to well-known fossil data. Giving a range of plausible smoothed values in this instance can help answer questions about the statistical significance of apparent features in the data.
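The parametric percentile bootstrap that the saddlepoint approximation replaces can be sketched in its plain Monte Carlo form for a simple variance parameter; the target parameter, sample size, and B are illustrative assumptions, not the paper's smoothing-parameter setting:

```python
import numpy as np

def parametric_percentile_ci(x, B=2000, level=0.95, seed=0):
    # Parametric percentile bootstrap CI for the variance of a normal
    # sample: fit the model, resample from the fitted model, and take
    # percentiles of the re-estimated parameter.
    rng = np.random.default_rng(seed)
    n = len(x)
    mu_hat, s2_hat = np.mean(x), np.var(x, ddof=1)
    boot = np.empty(B)
    for b in range(B):
        xb = rng.normal(mu_hat, np.sqrt(s2_hat), n)
        boot[b] = np.var(xb, ddof=1)
    lo, hi = np.quantile(boot, [(1 - level) / 2, (1 + level) / 2])
    return lo, hi

rng = np.random.default_rng(3)
x = rng.normal(0.0, 2.0, 200)
lo, hi = parametric_percentile_ci(x)
```

The B refits in the loop are the expensive step; the method in the abstract obtains the same percentile endpoints analytically via a saddlepoint approximation to the distribution of the estimator, avoiding the resampling loop entirely.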
28.
In this article, approaches for exploiting mixtures of mixtures are extended by using the Multiresolution family of probability density functions (MR pdf). The flexibility of the MR pdf and its capacity for local analysis facilitate the location of subpopulations within a given population. Two algorithms are provided for this purpose.

The MR model is more flexible in adapting to the different subpopulations than traditional mixtures. In addition, the problems of identifiability of mixture distributions and of label switching do not arise in the MR pdf context.
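For contrast with the MR approach, the traditional mixture fit that suffers from label switching can be sketched with textbook EM for a two-component Gaussian mixture; this is the standard algorithm, not the MR pdf method, and the simulated component parameters are illustrative assumptions:

```python
import numpy as np

def em_gauss_mixture(x, iters=200):
    # Standard EM for a two-component univariate Gaussian mixture.
    # Initializing the means at the sample extremes breaks the symmetry
    # that otherwise allows the component labels to switch.
    pi, mu1, mu2 = 0.5, np.min(x), np.max(x)
    s1 = s2 = np.std(x)
    for _ in range(iters):
        # E-step: posterior responsibility of component 1 for each point
        d1 = pi * np.exp(-0.5 * ((x - mu1) / s1) ** 2) / s1
        d2 = (1 - pi) * np.exp(-0.5 * ((x - mu2) / s2) ** 2) / s2
        g = d1 / (d1 + d2)
        # M-step: weighted parameter updates
        pi = g.mean()
        mu1 = np.sum(g * x) / np.sum(g)
        mu2 = np.sum((1 - g) * x) / np.sum(1 - g)
        s1 = np.sqrt(np.sum(g * (x - mu1) ** 2) / np.sum(g))
        s2 = np.sqrt(np.sum((1 - g) * (x - mu2) ** 2) / np.sum(1 - g))
    return pi, mu1, mu2

rng = np.random.default_rng(4)
x = np.concatenate([rng.normal(-2, 1, 300), rng.normal(3, 1, 700)])
pi_hat, mu1_hat, mu2_hat = em_gauss_mixture(x)
```

Because any permutation of the component labels leaves the likelihood unchanged, nothing in the standard mixture itself pins down which fitted component is which; the abstract's claim is that the MR pdf construction avoids this identification problem by design.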
29.
Regular smoothing splines are known to have a type of boundary bias problem that can reduce their estimation efficiency. In this paper, a boundary-corrected smoothing spline of general order is designed so that the risk decays at an optimal rate. An O(n) algorithm is also developed to compute the resulting estimator efficiently.
30.