20 similar documents retrieved.
1.
Zheng Wang, Journal of Applied Statistics, 2000, 27(4): 495-507
In this paper, an algorithm for Generalized Monotonic Smoothing (GMS) is developed as an extension to exponential family models of the monotonic smoothing techniques proposed by Ramsay (1988, 1998a,b). A two-step algorithm is used to estimate the coefficients of the bases and the linear term. We show that the algorithm can be embedded in the iteratively re-weighted least squares algorithm that is typically used to estimate the coefficients in Generalized Linear Models. Thus, the GMS estimator can be computed using existing routines in S-plus and other statistical software. We apply the GMS model to the Down's syndrome data set and compare the results with those from Generalized Additive Model estimation. The choice of smoothing parameter and testing of monotonicity are also discussed.
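To make the embedding idea concrete, the sketch below runs a binomial IRLS loop in which each working least-squares step is replaced by a weighted isotonic fit (pool-adjacent-violators), used here as a crude stand-in for Ramsay's monotone spline basis. The function names wpava and monotone_irls and the simulated data are illustrative assumptions, not the authors' implementation.

```r
# Weighted pool-adjacent-violators: monotone-increasing fit of y with weights w
# (x is assumed already sorted in increasing order).
wpava <- function(y, w) {
  n <- length(y)
  ybar <- numeric(n); wsum <- numeric(n); size <- integer(n); k <- 0L
  for (i in seq_len(n)) {
    k <- k + 1L
    ybar[k] <- y[i]; wsum[k] <- w[i]; size[k] <- 1L
    while (k > 1L && ybar[k - 1L] > ybar[k]) {           # pool adjacent violators
      ybar[k - 1L] <- (wsum[k - 1L] * ybar[k - 1L] + wsum[k] * ybar[k]) /
                      (wsum[k - 1L] + wsum[k])
      wsum[k - 1L] <- wsum[k - 1L] + wsum[k]
      size[k - 1L] <- size[k - 1L] + size[k]
      k <- k - 1L
    }
  }
  rep(ybar[seq_len(k)], size[seq_len(k)])
}

# Monotone logistic smoothing: IRLS with a monotone fit of the working response.
monotone_irls <- function(x, y, n_iter = 50, tol = 1e-6) {
  o <- order(x); x <- x[o]; y <- y[o]
  eta <- qlogis(pmin(pmax(y, 0.05), 0.95))               # crude starting values
  for (it in seq_len(n_iter)) {
    mu <- plogis(eta)
    w  <- mu * (1 - mu)                                   # binomial working weights
    z  <- eta + (y - mu) / w                              # working response
    eta_new <- pmin(pmax(wpava(z, w), -15), 15)           # monotone smoothing step (clamped)
    if (max(abs(eta_new - eta)) < tol) { eta <- eta_new; break }
    eta <- eta_new
  }
  list(x = x, prob = plogis(eta))
}

# Illustration on simulated data with a monotone true risk curve.
set.seed(1)
age <- runif(500, 20, 45)
y   <- rbinom(500, 1, plogis(-10 + 0.25 * age))
fit <- monotone_irls(age, y)
plot(fit$x, fit$prob, type = "s", xlab = "age", ylab = "fitted probability")
```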
2.
Journal of Statistical Planning and Inference, 2005, 127(1-2): 53-69
In nonparametric regression the smoothing parameter can be selected by minimizing a Mean Squared Error (MSE) based criterion. For spline smoothing one can also rewrite the smooth estimation as a Linear Mixed Model in which the smoothing parameter appears as the a priori variance of the spline basis coefficients. This allows one to employ Maximum Likelihood (ML) theory to estimate the smoothing parameter as a variance component. In this paper the relation between the two approaches is illuminated for penalized spline smoothing (P-splines) as suggested by Eilers and Marx (Statistical Science 11(2) (1996) 89). Theoretical and empirical arguments are given showing that the ML approach is biased towards undersmoothing, i.e. it chooses a model that is too complex compared with the MSE-based choice. The result is in line with classical spline smoothing, even though the asymptotic arguments are different; this is because in P-spline smoothing a finite-dimensional basis is employed, while in classical spline smoothing the basis grows with the sample size.
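The contrast between the two selection criteria can be looked at empirically with off-the-shelf P-spline software. The snippet below is a minimal sketch, assuming the mgcv package as the fitting engine (the paper itself does not use it); the simulated curve and basis dimension are illustrative.

```r
# Mixed-model (ML) versus MSE-type (GCV/Mallows' Cp) smoothing parameter
# selection for a P-spline fit, compared via effective degrees of freedom.
library(mgcv)

set.seed(2)
n <- 200
x <- runif(n)
y <- sin(2 * pi * x) + rnorm(n, sd = 0.3)

fit_ml  <- gam(y ~ s(x, bs = "ps", k = 20), method = "ML")      # variance-component view
fit_gcv <- gam(y ~ s(x, bs = "ps", k = 20), method = "GCV.Cp")  # prediction-error view

# A larger effective degrees of freedom under ML would be in line with the
# paper's claim that the ML criterion leans towards less smoothing.
c(edf_ML = sum(fit_ml$edf), edf_GCV = sum(fit_gcv$edf))
```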
3.
We consider the problem of computing smoothed additive functionals, which are integrals with respect to the joint smoothing distribution. This is a key issue in inference for general state-space models, as these quantities appear naturally in maximum likelihood parameter inference. The computation of smoothed additive functionals is very challenging, as exact computations are not possible for non-linear, non-Gaussian state-space models. It becomes even more difficult when the hidden state lies in a high-dimensional space, because traditional numerical methods suffer from the curse of dimensionality. We propose a new algorithm, the Space-Time Forward Smoothing (STFS) algorithm, to efficiently calculate smoothed additive functionals in an online manner for a specific family of high-dimensional discrete-time state-space models. The cost of this algorithm is polynomial in the number of particles, and it also depends on the number of time steps and on the dimension of the hidden state space. Its superior performance over other existing methods is illustrated by various simulation studies. Moreover, the STFS algorithm is successfully applied to perform Maximum Likelihood estimation of static model parameters both in an online and an offline manner.
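For readers unfamiliar with smoothed additive functionals, the sketch below implements the generic forward-only particle smoothing recursion for such functionals on a toy one-dimensional linear-Gaussian model. It is not the authors' STFS algorithm, and the model, parameter values and functional (the smoothed sum of states) are assumptions made only for illustration.

```r
# Forward-only particle smoothing of the additive functional S_T = sum_t x_t for
# a simple linear-Gaussian state-space model: x_t = phi*x_{t-1} + N(0, sv^2),
# y_t = x_t + N(0, sw^2). The forward-smoothing update costs O(N^2) per step.
set.seed(3)
phi <- 0.8; sv <- 1; sw <- 0.5; T_len <- 100; N <- 200

# Simulate data from the model.
x_true <- numeric(T_len)
x_true[1] <- rnorm(1, 0, sv / sqrt(1 - phi^2))
for (t in 2:T_len) x_true[t] <- phi * x_true[t - 1] + rnorm(1, 0, sv)
y <- x_true + rnorm(T_len, 0, sw)

# Initialise the bootstrap particle filter and the per-particle statistics T_t^i.
xp   <- rnorm(N, 0, sv / sqrt(1 - phi^2))
logw <- dnorm(y[1], xp, sw, log = TRUE)
W    <- exp(logw - max(logw)); W <- W / sum(W)
Tstat <- xp                                   # s_1(x_1) = x_1

for (t in 2:T_len) {
  # Propagate: resample the weighted particles, then move through the dynamics.
  anc  <- sample.int(N, N, replace = TRUE, prob = W)
  xnew <- phi * xp[anc] + rnorm(N, 0, sv)

  # Forward-smoothing update of T_t^i using the weighted particles at time t-1.
  Tnew <- numeric(N)
  for (i in 1:N) {
    f <- W * dnorm(xnew[i], phi * xp, sv)     # backward weights
    Tnew[i] <- xnew[i] + sum(f * Tstat) / sum(f)
  }

  # Reweight with the new observation.
  logw <- dnorm(y[t], xnew, sw, log = TRUE)
  W <- exp(logw - max(logw)); W <- W / sum(W)
  xp <- xnew; Tstat <- Tnew
}

# Online estimate of E[ sum_t x_t | y_{1:T} ].
sum(W * Tstat)
```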
4.
This article considers nonparametric regression problems and develops a model-averaging procedure for smoothing spline regression. Unlike most smoothing parameter selection studies, which determine a single optimum smoothing parameter, our focus here is on prediction accuracy for the true conditional mean of Y given a predictor X. Our method consists of two steps. The first step is to construct a class of smoothing spline regression models based on nonparametric bootstrap samples, each with an appropriate smoothing parameter. The second step is to average the bootstrap smoothing spline estimates of different smoothness to form a final improved estimate. To minimize the prediction error, we estimate the model weights using a delete-one cross-validation procedure. A simulation study has been performed using a program written in R; it compares the well-known cross-validation (CV) and generalized cross-validation (GCV) criteria with the proposed method. This new method is straightforward to implement and gives reliable performance in simulations.
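The two-step structure is easy to prototype. The sketch below is a simplified version: smoothing splines of differing smoothness are fitted to bootstrap resamples and their predictions are averaged with non-negative weights summing to one. The weights here are obtained by least squares on the original data through a softmax reparameterisation, a crude stand-in for the paper's delete-one cross-validation weights; all names and settings are illustrative.

```r
# Step 1: fit smoothing splines on nonparametric bootstrap samples.
set.seed(4)
n <- 150
x <- sort(runif(n)); y <- sin(3 * pi * x) * exp(-2 * x) + rnorm(n, sd = 0.15)

B <- 20
pred <- matrix(NA_real_, n, B)
for (b in 1:B) {
  idx <- sample.int(n, n, replace = TRUE)                # bootstrap sample
  fit <- smooth.spline(x[idx], y[idx], cv = FALSE)       # its own GCV smoothing parameter
  pred[, b] <- predict(fit, x)$y
}

# Step 2: average with estimated weights on the simplex.
obj <- function(theta) {
  w <- exp(theta) / sum(exp(theta))                      # non-negative, sum to one
  sum((y - pred %*% w)^2)
}
opt   <- optim(rep(0, B), obj, method = "BFGS")
w_hat <- exp(opt$par) / sum(exp(opt$par))

y_avg <- as.vector(pred %*% w_hat)                       # model-averaged estimate
plot(x, y, col = "grey"); lines(x, y_avg, lwd = 2)
```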
5.
H. Linhart, Statistical Papers, 1989, 30(1): 197-211
The simplification of complex models which were originally envisaged to explain some data is considered a discrete form of smoothing. In this sense, data-based model selection techniques lead to a minimal and unavoidable initial smoothing. The same techniques may also be used for further smoothing if this seems necessary. For deterministic data, parametric models, which are usually used for stochastic data, also provide convenient notches in the process of smoothing. The usual discrepancies can be used to measure the degree of smoothing. The methods for tables of means and tables of frequencies are described in more detail and examples of applications are given.
6.
Franz Lehner, Journal of Statistical Planning and Inference, 2011, 141(4): 1448-1454
A formula expressing cumulants in terms of iterated integrals of the distribution function is derived. It generalizes results of Jones and Balakrishnan, who computed such expressions for cumulants up to order 4.
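The order-2 case already shows the flavour of such formulas: for a random variable X with distribution function F and finite variance, a classical Hoeffding-type identity writes the second cumulant as an iterated integral of F.

```latex
% Second cumulant (variance) as an iterated integral of the distribution
% function F; this classical identity is the order-2 instance of the kind of
% formula the paper generalises to higher-order cumulants.
\kappa_2 = \operatorname{Var}(X)
         = 2 \int_{-\infty}^{\infty} \int_{-\infty}^{y} F(x)\,\bigl(1 - F(y)\bigr)\, dx\, dy .
```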
7.
Ronald J. Bosch, Communications in Statistics - Theory and Methods, 2013, 42(11): 3075-3083
When cubic smoothing splines are used to estimate the conditional quantile function, thereby balancing fidelity to the data with a smoothness requirement, the resulting curve is the solution to a quadratic program. Using this quadratic characterization and through comparison with the sample conditional quantiles, we show strong consistency and asymptotic normality for the quantile smoothing spline.
8.
Robust automatic selection techniques for the smoothing parameter of a smoothing spline are introduced. They are based on a robust predictive error criterion and can be viewed as robust versions of C_p and cross-validation. They lead to smoothing splines which are stable and reliable in terms of mean squared error over a large spectrum of model distributions.
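One simple way to robustify a cross-validation criterion is to replace the squared leave-one-out residual by Huber's rho. The sketch below does exactly that over a grid of spar values, using the leverages returned by smooth.spline to approximate the leave-one-out residuals; the grid, the Huber constant and the outlier-contaminated example are illustrative and this is not the authors' exact robust C_p construction.

```r
# Robust cross-validation for smooth.spline via Huber's rho on approximate
# leave-one-out residuals.
huber_rho <- function(r, k = 1.345) ifelse(abs(r) <= k, 0.5 * r^2, k * abs(r) - 0.5 * k^2)

robust_cv <- function(x, y, spar_grid = seq(0.2, 1.2, by = 0.05)) {
  crit <- sapply(spar_grid, function(s) {
    fit <- smooth.spline(x, y, spar = s)
    res <- y - predict(fit, x)$y
    loo <- res / (1 - fit$lev)          # approximate leave-one-out residuals
    scl <- mad(res)                     # robust residual scale
    sum(huber_rho(loo / scl))
  })
  spar_grid[which.min(crit)]            # spar minimising the robust criterion
}

# Example with a few gross outliers in the response.
set.seed(5)
x <- seq(0, 1, length.out = 120)
y <- sin(2 * pi * x) + rnorm(120, sd = 0.2)
y[sample(120, 5)] <- y[sample(120, 5)] + 4
robust_cv(x, y)
```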
9.
Cross-validation as a means of choosing the smoothing parameter in spline regression has achieved wide popularity. Its appeal is that of an automatic method based on an attractive criterion, and, along with many other methods, it has been shown to minimize predictive mean squared error asymptotically. However, in practice there may be a substantial proportion of applications where a cross-validation style choice leads to drastic undersmoothing, often as far as interpolation. Furthermore, because the criterion is so appealing, the user may be misled by an inappropriate, automatically chosen value. In this paper we investigate the nature of cross-validatory methods in spline smoothing regression and suggest variants which provide small-sample protection against undersmoothing.
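A quick way to see the issue on small samples is to compare the choices made by ordinary leave-one-out cross-validation and by GCV in smooth.spline. The snippet below is only an illustration of the phenomenon; it does not implement the protected variants proposed in the paper, and the simulated example is an assumption.

```r
# Equivalent degrees of freedom chosen by ordinary CV versus GCV on small,
# noisy samples; the CV choice can come out markedly less smooth.
set.seed(6)
compare_df <- function(n = 40) {
  x <- runif(n); y <- sin(2 * pi * x) + rnorm(n, sd = 0.4)
  c(df_CV  = smooth.spline(x, y, cv = TRUE)$df,   # ordinary leave-one-out CV
    df_GCV = smooth.spline(x, y, cv = FALSE)$df)  # generalised cross-validation
}
round(t(replicate(20, compare_df())), 1)
```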
10.
Journal of Statistical Computation and Simulation, 2012, 82(3-4): 291-312
Methods for estimating probabilities on sample spaces of ordered-categorical variables are surveyed. The methods all involve smoothing the relative frequencies in ways that recognise the ordering among categories. Approaches of this type include convex smoothing, weighting-function and kernel-based methods, near-neighbour methods, Bayes-based methods and penalized minimum-distance methods. The relationships among the methods are brought out, an application is made to a medical example, and a simulation study is reported which compares the methods on univariate and bivariate examples. Links with smoothing procedures in other contexts are indicated.
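A minimal member of this family is a weighting-function smoother: each smoothed cell probability is a kernel-weighted average of the raw relative frequencies of nearby categories, renormalised to sum to one. The sketch below is only that simple baseline, with an invented sparse count vector; it does not reproduce any specific method in the survey.

```r
# Weighting-function smoothing of relative frequencies over ordered categories.
smooth_ordered <- function(counts, h = 1) {
  K <- length(counts)
  p_raw <- counts / sum(counts)
  W <- outer(1:K, 1:K, function(j, k) dnorm(j - k, sd = h))  # discrete Gaussian weights
  W <- W / rowSums(W)
  p_smooth <- as.vector(W %*% p_raw)
  p_smooth / sum(p_smooth)                                   # renormalise
}

# Example: sparse counts over 9 ordered categories.
counts <- c(0, 2, 5, 9, 4, 0, 1, 0, 0)
rbind(raw = counts / sum(counts),
      smoothed = round(smooth_ordered(counts, h = 1), 3))
```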
11.
Logistic-normal models can be applied to the analysis of longitudinal binary data. The aim of this article is to propose a goodness-of-fit test, using nonparametric smoothing techniques, for checking the adequacy of logistic-normal models. Moreover, a leave-one-out cross-validation method for selecting a suitable bandwidth is developed. The quadratic form of the proposed test statistic, based on smoothed residuals, provides a global measure for checking the model with categorical and continuous covariates. The formulae for the expectation and variance of the proposed statistic are derived, and its asymptotic distribution is approximated by a scaled chi-squared distribution. The power of the proposed test for detecting an interaction term or a squared term in continuous covariates is examined by simulation studies. A longitudinal dataset is used to illustrate the application of the proposed test.
12.
A new procedure is proposed for deriving variable bandwidths in univariate kernel density estimation, based upon likelihood cross-validation and an analysis of a Bayesian graphical model. The procedure admits bandwidth selection which is flexible in terms of the amount of smoothing required. In addition, the basic model can be extended to incorporate local smoothing of the density estimate. The method is shown to perform well in both theoretical and practical situations, and we compare our method with those of Abramson (The Annals of Statistics 10: 1217–1223) and Sain and Scott (Journal of the American Statistical Association 91: 1525–1534). In particular, we note that in certain cases the Sain and Scott method performs poorly even with relatively large sample sizes. We compare various bandwidth selection methods using standard mean integrated squared error criteria to assess the quality of the density estimates. We study situations where the underlying density is assumed both known and unknown, and note that in practice our method performs well when sample sizes are small. In addition, we also apply the methods to real data, and again we believe our methods perform at least as well as existing methods.
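For context, the sketch below implements the Abramson-type square-root-law adaptive estimator that the abstract uses as a comparator; it is not the Bayesian graphical-model procedure the paper proposes, and the grid, pilot bandwidth and mixture example are illustrative choices.

```r
# Abramson-type variable-bandwidth kernel density estimate: a fixed-bandwidth
# pilot estimate supplies local bandwidths proportional to pilot^(-1/2).
adaptive_kde <- function(x, grid = seq(min(x) - 1, max(x) + 1, length.out = 400)) {
  h0    <- bw.nrd0(x)                                  # global pilot bandwidth
  pilot <- approx(density(x, bw = h0), xout = x)$y     # pilot density at the data points
  g     <- exp(mean(log(pilot)))                       # geometric mean normalisation
  h_i   <- h0 * sqrt(g / pilot)                        # local bandwidths, square-root law
  f     <- sapply(grid, function(t) mean(dnorm(t, mean = x, sd = h_i)))
  list(x = grid, y = f)
}

# Example on a sharply peaked mixture, where adaptive bandwidths tend to help.
set.seed(7)
x   <- c(rnorm(200, 0, 0.1), rnorm(100, 3, 1))
est <- adaptive_kde(x)
plot(est$x, est$y, type = "l", ylab = "density")
lines(density(x), lty = 2)                             # fixed-bandwidth estimate for comparison
```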
13.
The authors present a consistent lack-of-fit test for nonlinear regression models. The proposed procedure possesses some nice properties of Zheng's test, such as consistency and the ability to detect local alternatives approaching the null at rates slower than the parametric rate. Moreover, for a predetermined kernel function, the proposed test is more powerful than Zheng's test; the validity of these findings is confirmed by simulation studies and a real data example. In addition, the authors find a close connection between the choice of normal kernel functions and the bandwidths. The Canadian Journal of Statistics 39: 108–125; 2011 © 2011 Statistical Society of Canada
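The statistic at the heart of Zheng-type lack-of-fit tests is a kernel-weighted U-statistic of the fitted residuals, V_n = 1/(n(n-1)) * sum over i != j of K_h(x_i - x_j) e_i e_j. The sketch below computes it for a linear null model only to show how it reacts to a misspecified mean; calibration of the test (asymptotic normality or a bootstrap) is omitted, and the bandwidth and simulated models are assumptions.

```r
# Kernel-weighted U-statistic underlying Zheng-type lack-of-fit tests.
zheng_stat <- function(x, e, h) {
  n <- length(x)
  K <- dnorm(outer(x, x, "-") / h) / h     # Gaussian kernel weights K_h(x_i - x_j)
  diag(K) <- 0                             # exclude i = j terms
  sum(K * tcrossprod(e)) / (n * (n - 1))
}

set.seed(8)
n <- 200; x <- runif(n)
y_lin  <- 1 + 2 * x + rnorm(n, sd = 0.3)                      # null model is true
y_quad <- 1 + 2 * x + 2 * (x - 0.5)^2 + rnorm(n, sd = 0.3)    # quadratic departure
h <- 0.1
c(null_true = zheng_stat(x, resid(lm(y_lin ~ x)), h),
  departure = zheng_stat(x, resid(lm(y_quad ~ x)), h))
```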
14.
15.
Consider a sequence of negatively associated (NA), identically distributed random variables whose underlying distribution is in the domain of attraction of the normal distribution. This paper proves that the law of the iterated logarithm holds for such sequences of NA random variables.
16.
We formulate sufficient conditions for the existence of the expectation of the iterated generalized least squares estimator, which consequently guarantee its unbiasedness. The analysis is applied to the maximum likelihood estimator in the general linear model with normal disturbances, where a set of assumptions ensures convergence of the iteration as well as unbiasedness.
17.
Standard algorithms for the construction of iterated bootstrap confidence intervals are computationally very demanding, requiring nested levels of bootstrap resampling. We propose an alternative approach to constructing double bootstrap confidence intervals that involves replacing the inner level of resampling by an analytical approximation. This approximation is based on saddlepoint methods and a tail probability approximation of DiCiccio and Martin (1991). Our technique significantly reduces the computational expense of iterated bootstrap calculations. A formal algorithm for the construction of our approximate iterated bootstrap confidence intervals is presented, and some crucial practical issues arising in its implementation are discussed. Our procedure is illustrated in the case of constructing confidence intervals for ratios of means using both real and simulated data. We repeat an experiment of Schenker (1985) involving the construction of bootstrap confidence intervals for a variance and demonstrate that our technique makes feasible the construction of accurate bootstrap confidence intervals in that context. Finally, we investigate the use of our technique in a more complex setting, that of constructing confidence intervals for a correlation coefficient.
18.
Journal of Statistical Planning and Inference, 1997, 57(1): 29-38
In this paper we show that versions of statistical functionals obtained by smoothing the corresponding empirical d.f. with an appropriate kernel can reduce the variance and the mean squared error of the statistic. This is shown by studying the influence function of the functional. The smaller variance is achieved when the influence function is either discontinuous or piecewise linear with convexity towards the x-axis. Examples for M- and L-estimators are given.
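A small Monte Carlo check of this effect can be made with the sample median, whose influence function is discontinuous: the sketch below compares it with a median computed from a kernel-smoothed empirical d.f. The bandwidth rule, sample size and Gaussian data are ad hoc assumptions for illustration only.

```r
# Ordinary sample median versus a median computed from a kernel-smoothed
# empirical distribution function, compared by Monte Carlo variance.
smoothed_median <- function(x, h = 0.5 * bw.nrd0(x)) {
  Fhat <- function(t) mean(pnorm((t - x) / h))           # smoothed e.d.f.
  uniroot(function(t) Fhat(t) - 0.5, range(x))$root
}

set.seed(9)
sims <- replicate(2000, {
  x <- rnorm(50)
  c(plain = median(x), smoothed = smoothed_median(x))
})
apply(sims, 1, var)                                      # compare Monte Carlo variances
```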
19.
We identify a role for smooth curve provision in the finite population context. The performance of kernel density estimates in this scenario is explored, and they are tailored to the finite population situation, in particular by developing a method of data-based selection of the smoothing parameter appropriate to this problem. Simulated examples are given, including some from the particular context of permutation distributions which first motivated this investigation.
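The motivating situation is easy to reproduce: a permutation distribution is a finite population of statistic values, and one may want a smooth curve for it. The sketch below smooths the permutation distribution of a difference in group means with an off-the-shelf kernel density estimate and bandwidth selector, not the data-based rule developed in the paper; the data are simulated for illustration.

```r
# Kernel-smoothed permutation distribution of a difference in group means.
set.seed(10)
y   <- c(rnorm(12, 0), rnorm(12, 0.8))
grp <- rep(1:2, each = 12)
obs <- diff(tapply(y, grp, mean))             # observed statistic

perm_stats <- replicate(5000, {
  g <- sample(grp)                            # random relabelling
  diff(tapply(y, g, mean))
})

plot(density(perm_stats, bw = "SJ"), main = "Smoothed permutation distribution")
abline(v = obs, lty = 2)                      # observed statistic
```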
20.
Journal of Statistical Computation and Simulation, 2012, 82(6): 915-926
In models using categorical data, one may use adjacency relations to justify smoothing to improve upon simple histogram approximations of the probabilities. This is particularly convenient for sparsely observed or rather peaked distributions. Moreover, in a few models, prior knowledge of a marginal distribution is available. We adapt local polynomial estimators to include this partial information about the underlying distribution and give explicit representations for the proposed estimators. An application to a set of anthropological data is included.
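The baseline step the paper adapts is plain local polynomial smoothing of the cell proportions over the category scores. The sketch below implements only that baseline (local-linear fits, truncated at zero and renormalised) with an invented count vector; the paper's adjustment for a known marginal distribution is not included.

```r
# Local-linear smoothing of cell proportions over ordered category scores.
local_linear_probs <- function(counts, h = 1.5) {
  K <- length(counts)
  p_raw <- counts / sum(counts)
  idx <- 1:K
  p_fit <- sapply(idx, function(j) {
    w <- dnorm(idx - j, sd = h)                       # kernel weights around category j
    coef(lm(p_raw ~ I(idx - j), weights = w))[1]      # local-linear intercept = fitted value at j
  })
  p_fit <- pmax(p_fit, 0)                             # truncate at zero
  p_fit / sum(p_fit)                                  # renormalise to a probability vector
}

counts <- c(1, 0, 4, 12, 7, 2, 0, 0, 1)
rbind(raw = round(counts / sum(counts), 3),
      smoothed = round(local_linear_probs(counts), 3))
```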