Similar Documents
20 similar documents found (search time: 31 ms)
1.
Abstract. While it is a popular selection criterion for spline smoothing, generalized cross‐validation (GCV) occasionally yields severely undersmoothed estimates. Two extensions of GCV called robust GCV (RGCV) and modified GCV have been proposed as more stable criteria. Each involves a parameter that must be chosen, but the only guidance has come from simulation results. We investigate the performance of the criteria analytically. In most studies, the mean square prediction error is the only loss function considered. Here, we use both the prediction error and a stronger Sobolev norm error, which provides a better measure of the quality of the estimate. A geometric approach is used to analyse the superior small‐sample stability of RGCV compared to GCV. In addition, by deriving the asymptotic inefficiency for both the prediction error and the Sobolev error, we find intervals for the parameters of RGCV and modified GCV for which the criteria have optimal performance.
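The criteria compared in this abstract are easy to state for any linear smoother with influence matrix A(λ): GCV is V(λ) = (n⁻¹‖(I − A)y‖²) / (n⁻¹tr(I − A))², and robust GCV multiplies V(λ) by γ + (1 − γ)tr(A²)/n. A minimal Python sketch, assuming a Whittaker-type second-difference-penalty smoother and γ = 0.3 (both illustrative choices, not details from the paper):

```python
import numpy as np

def whittaker_smoother_matrix(n, lam):
    # A(lam) = (I + lam * D'D)^{-1}, with D the second-difference matrix
    D = np.diff(np.eye(n), n=2, axis=0)
    return np.linalg.inv(np.eye(n) + lam * D.T @ D)

def gcv(y, A):
    # V(lam) = (RSS/n) / (tr(I - A)/n)^2
    n = len(y)
    resid = y - A @ y
    return (resid @ resid / n) / (np.trace(np.eye(n) - A) / n) ** 2

def rgcv(y, A, gamma=0.3):
    # robust GCV: multiply V(lam) by gamma + (1 - gamma) * tr(A^2)/n
    n = len(y)
    return (gamma + (1 - gamma) * np.trace(A @ A) / n) * gcv(y, A)

rng = np.random.default_rng(0)
x = np.linspace(0, 1, 100)
y = np.sin(2 * np.pi * x) + rng.normal(0, 0.3, 100)

lams = 10.0 ** np.arange(-4, 5)
gcv_best = min(lams, key=lambda l: gcv(y, whittaker_smoother_matrix(100, l)))
rgcv_best = min(lams, key=lambda l: rgcv(y, whittaker_smoother_matrix(100, l)))
print(gcv_best, rgcv_best)
```

Because the eigenvalues of A lie in (0, 1], tr(A²) ≤ n, so the RGCV multiplier is at most 1 and is largest for rough (low-λ) fits; penalizing those fits relatively more is what stabilizes the criterion against undersmoothing.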

2.
Many different methods have been proposed to construct nonparametric estimates of a smooth regression function, including local polynomial, (convolution) kernel and smoothing spline estimators. Each of these estimators uses a smoothing parameter to control the amount of smoothing performed on a given data set. In this paper an improved version of a criterion based on the Akaike information criterion (AIC), termed AICC, is derived and examined as a way to choose the smoothing parameter. Unlike plug-in methods, AICC can be used to choose smoothing parameters for any linear smoother, including local quadratic and smoothing spline estimators. The use of AICC avoids the large variability and tendency to undersmooth (compared with the actual minimizer of average squared error) seen when other 'classical' approaches (such as generalized cross-validation (GCV) or the AIC) are used to choose the smoothing parameter. Monte Carlo simulations demonstrate that the AICC-based smoothing parameter is competitive with a plug-in method (assuming that one exists) when the plug-in method works well but also performs well when the plug-in approach fails or is unavailable.
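The AICC criterion applies to any linear smoother with hat matrix H, in the form AICC = log σ̂² + (1 + tr(H)/n)/(1 − (tr(H) + 2)/n), where σ̂² = ‖(I − H)y‖²/n. A short sketch, using a Nadaraya–Watson kernel smoother purely as an illustrative linear smoother (the data and bandwidth grid are assumptions of the sketch):

```python
import numpy as np

def nw_hat_matrix(x, h):
    # Nadaraya-Watson (Gaussian kernel) smoother matrix H: yhat = H @ y
    W = np.exp(-0.5 * ((x[:, None] - x[None, :]) / h) ** 2)
    return W / W.sum(axis=1, keepdims=True)

def aicc(y, H):
    # improved AIC for linear smoothers
    n = len(y)
    trH = np.trace(H)
    sigma2 = np.sum((y - H @ y) ** 2) / n
    return np.log(sigma2) + (1 + trH / n) / (1 - (trH + 2) / n)

rng = np.random.default_rng(1)
x = np.sort(rng.uniform(0, 1, 80))
y = np.sin(4 * x) + rng.normal(0, 0.2, 80)

# pick the bandwidth minimizing AICC over a log-spaced grid
bandwidths = np.geomspace(0.01, 0.5, 20)
h_best = min(bandwidths, key=lambda h: aicc(y, nw_hat_matrix(x, h)))
print(h_best)
```

The second term grows steeply as tr(H) approaches n, which is the built-in guard against the undersmoothing that plain GCV or AIC can produce.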

3.
Typically, an optimal smoothing parameter in penalized spline regression is determined by minimizing an information criterion, such as the Cp, CV or GCV criterion. Since an explicit solution to the minimization problem for an information criterion cannot be obtained, an iterative search for the optimal smoothing parameter is necessary. To avoid this extra calculation, a non-iterative optimization method for smoothness in penalized spline regression is proposed using the formulation of generalized ridge regression. Numerical simulations verify that our method performs better than methods which optimize the number of basis functions and a single smoothing parameter by means of the CV or GCV criteria.

4.
ABSTRACT

This article considers nonparametric regression problems and develops a model-averaging procedure for smoothing spline regression. Unlike most smoothing parameter selection studies, which determine a single optimum smoothing parameter, our focus here is on prediction accuracy for the true conditional mean of Y given a predictor X. Our method consists of two steps. The first step is to construct a class of smoothing spline regression models based on nonparametric bootstrap samples, each with an appropriate smoothing parameter. The second step is to average the bootstrap smoothing spline estimates of different smoothness to form a final improved estimate. To minimize the prediction error, we estimate the model weights using a delete-one cross-validation procedure. A simulation study, performed with a program written in R, compares the well-known cross-validation (CV) and generalized cross-validation (GCV) criteria with the proposed method. The new method is straightforward to implement and gives reliable performance in simulations.
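The weighting step of such a model-averaging scheme can be sketched as follows. For brevity, this sketch averages linear smoothers of different fixed smoothness fitted to the original sample (rather than to bootstrap resamples) and picks convex weights by delete-one cross-validation over a coarse grid on the simplex; the smoother family and the grid are assumptions of the sketch:

```python
import numpy as np

def smoother_matrix(n, lam):
    # Whittaker-type smoother: (I + lam * D'D)^{-1}, D = second differences
    D = np.diff(np.eye(n), n=2, axis=0)
    return np.linalg.inv(np.eye(n) + lam * D.T @ D)

def loo_cv(y, A):
    # delete-one CV for a linear smoother via the hat-diagonal identity
    resid = (y - A @ y) / (1 - np.diag(A))
    return np.mean(resid ** 2)

rng = np.random.default_rng(2)
n = 120
x = np.linspace(0, 1, n)
y = np.sin(6 * np.pi * x**2) + rng.normal(0, 0.3, n)

# candidate fits of different smoothness
lams = [0.1, 1.0, 10.0, 100.0]
mats = [smoother_matrix(n, l) for l in lams]

# choose convex weights over the candidates minimizing delete-one CV
best = None
grid = np.linspace(0, 1, 11)
for w0 in grid:
    for w1 in grid:
        for w2 in grid:
            w3 = 1 - w0 - w1 - w2
            if w3 < -1e-9:
                continue  # outside the simplex
            A = w0*mats[0] + w1*mats[1] + w2*mats[2] + max(w3, 0.0)*mats[3]
            score = loo_cv(y, A)
            if best is None or score < best[0]:
                best = (score, (w0, w1, w2, max(w3, 0.0)))
print(best)
```

Since each single smoother is a corner of the simplex, the averaged fit can never do worse in delete-one CV than the best individual candidate.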

5.
We consider a Cox-type regression model with change-points in the covariates. A change-point specifies the unknown threshold at which the influence of a covariate shifts smoothly, i.e., the regression parameter may change over the range of a covariate and the underlying regression function is continuous but not differentiable. The model can be used to describe change-points in different covariates but also to model more than one change-point in a single covariate. Estimates of the change-points and of the regression parameters are derived and their properties are investigated. It is shown that not only the estimates of the regression parameters but also the estimates of the change-points are √n-consistent, in contrast to the conjecture of other authors. Asymptotic normality is shown by using results developed for M-estimators. At the end of this paper we apply our model to an actuarial dataset, to the PBC dataset of Fleming and Harrington (Counting Processes and Survival Analysis, 1991) and to a dataset of electric motors.

6.
Summary. The objective is to estimate the period and the light curve (or periodic function) of a variable star. Several methods have previously been proposed to estimate the period of a variable star, but they are inaccurate, especially when a data set contains outliers. We use a smoothing spline regression to estimate the light curve given a period and then find the period which minimizes the generalized cross-validation (GCV) score. The GCV method works well, matching an intensive visual examination of a few hundred stars, but the GCV score is still sensitive to outliers. Handling outliers in an automatic way is important when this method is applied in a 'data mining' context to a very large star survey. Therefore, we suggest a robust method which minimizes a robust cross-validation criterion induced by a robust smoothing spline regression. Once the period has been determined, a nonparametric method is used to estimate the light curve. A real example and a simulation study suggest that the robust cross-validation and GCV methods are superior to existing methods.
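The period-search idea can be sketched directly: fold the observation times at each trial period, smooth brightness against phase with a linear smoother, and keep the period with the smallest GCV score. This sketch uses a wrapped Gaussian kernel smoother on synthetic data (the paper uses smoothing splines; the kernel, bandwidth, and period grid here are illustrative assumptions):

```python
import numpy as np

def gcv_score(phase, y, h=0.05):
    # smooth brightness vs. phase with a wrapped Gaussian kernel; return GCV
    d = phase[:, None] - phase[None, :]
    d = (d + 0.5) % 1.0 - 0.5          # wrap phase distance to [-0.5, 0.5)
    W = np.exp(-0.5 * (d / h) ** 2)
    A = W / W.sum(axis=1, keepdims=True)
    n = len(y)
    resid = y - A @ y
    return (resid @ resid / n) / (1 - np.trace(A) / n) ** 2

rng = np.random.default_rng(3)
true_period = 0.7
t = np.sort(rng.uniform(0, 30, 200))
y = np.sin(2 * np.pi * t / true_period) + rng.normal(0, 0.2, 200)

# the correct period gives a clean folded curve and hence a low GCV score
periods = np.linspace(0.5, 1.0, 501)
scores = [gcv_score((t / p) % 1.0, y) for p in periods]
p_hat = periods[int(np.argmin(scores))]
print(p_hat)
```

Replacing the squared residuals in `gcv_score` with a bounded loss is the kind of change that yields the robust criterion the paper advocates.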

7.
This paper develops a new Bayesian approach to change-point modeling that allows the number of change-points in the observed autocorrelated time series to be unknown. The model we develop assumes that the number of change-points has a truncated Poisson distribution. A genetic algorithm is used to estimate the change-point model, which allows for structural changes with autocorrelated errors. We focus considerable attention on the construction of the autocorrelated structure of each regime and on the parameters that characterize each regime. Our techniques are found to work well in simulations with a few change-points. An empirical analysis of the annual flow of the Nile River and the monthly total energy production in South Korea yields good estimates of the structural change-points.

8.
Global optimization of the generalized cross-validation criterion
Generalized cross-validation is a method for choosing the smoothing parameter in smoothing splines and related regularization problems. This method requires the global minimization of the generalized cross-validation function. In this paper an algorithm based on interval analysis is presented to find the globally optimal value for the smoothing parameter, and a numerical example illustrates the performance of the algorithm.
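The need for global rather than local minimization arises because the GCV function V(λ) can have several local minima. A simple, if much cruder, safeguard than the interval-analysis algorithm of the paper is a wide logarithmic grid scan; this sketch does that for ridge regression, where the SVD gives V(λ) in closed form (the data and grid are made up for illustration):

```python
import numpy as np

def gcv_ridge(lam, X, y):
    # GCV for ridge regression via the SVD: the influence matrix
    # A = X (X'X + lam I)^{-1} X' has eigenvalues s^2 / (s^2 + lam)
    U, s, _ = np.linalg.svd(X, full_matrices=False)
    d = s**2 / (s**2 + lam)
    fit = U @ (d * (U.T @ y))
    n = len(y)
    rss = np.sum((y - fit) ** 2)
    return (rss / n) / (1 - d.sum() / n) ** 2

rng = np.random.default_rng(4)
n, p = 60, 15
X = rng.normal(size=(n, p))
beta = rng.normal(size=p)
y = X @ beta + rng.normal(0, 1.0, n)

# global scan on a wide log-grid: a purely local search can stall
# at the wrong one of several local minima
grid = 10.0 ** np.linspace(-6, 6, 241)
scores = np.array([gcv_ridge(l, X, y) for l in grid])
lam_star = grid[int(np.argmin(scores))]
print(lam_star)
```

As λ → ∞ the fit shrinks to zero and V(λ) tends to the raw mean square of y, which provides a quick sanity check on the implementation.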

9.
Spatially-adaptive Penalties for Spline Fitting
The paper studies spline fitting with a roughness penalty that adapts to spatial heterogeneity in the regression function. The estimates are pth-degree piecewise polynomials with p − 1 continuous derivatives. A large and fixed number of knots is used and smoothing is achieved by putting a quadratic penalty on the jumps of the pth derivative at the knots. To be spatially adaptive, the logarithm of the penalty is itself a linear spline but with relatively few knots and with values at the knots chosen to minimize the generalized cross validation (GCV) criterion. This locally-adaptive spline estimator is compared with other spline estimators in the literature such as cubic smoothing splines and knot-selection techniques for least squares regression. Our estimator can be interpreted as an empirical Bayes estimate for a prior allowing spatial heterogeneity. In cases of spatially heterogeneous regression functions, empirical Bayes confidence intervals using this prior achieve better pointwise coverage probabilities than confidence intervals based on a global-penalty parameter. The method is developed first for univariate models and then extended to additive models.

10.
Generalized additive models represented using low rank penalized regression splines, estimated by penalized likelihood maximisation and with smoothness selected by generalized cross validation or similar criteria, provide a computationally efficient general framework for practical smooth modelling. Various authors have proposed approximate Bayesian interval estimates for such models, based on extensions of the work of Wahba, G. (1983) [Bayesian confidence intervals for the cross validated smoothing spline. J. R. Statist. Soc. B 45, 133–150] and Silverman, B.W. (1985) [Some aspects of the spline smoothing approach to nonparametric regression curve fitting. J. R. Statist. Soc. B 47, 1–52] on smoothing spline models of Gaussian data, but testing of such intervals has been rather limited and there is little supporting theory for the approximations used in the generalized case. This paper aims to improve this situation by providing simulation tests and obtaining asymptotic results supporting the approximations employed for the generalized case. The simulation results suggest that while across‐the‐model performance is good, component‐wise coverage probabilities are not as reliable. Since this is likely to result from the neglect of smoothing parameter variability, a simple and efficient simulation method is proposed to account for smoothing parameter uncertainty: this is demonstrated to substantially improve the performance of component‐wise intervals.

11.
In this article, we develop a Bayesian variable selection method that concerns selection of covariates in the Poisson change-point regression model with both discrete and continuous candidate covariates. Ranging from a null model with no selected covariates to a full model including all covariates, the Bayesian variable selection method searches the entire model space, estimates posterior inclusion probabilities of covariates, and obtains model-averaged estimates of the covariate coefficients, while simultaneously estimating a time-varying baseline rate due to change-points. For posterior computation, a Metropolis-Hastings within partially collapsed Gibbs sampler is developed to efficiently fit the Poisson change-point regression model with variable selection. We illustrate the proposed method using simulated and real datasets.

12.
These Fortran-77 subroutines provide building blocks for Generalized Cross-Validation (GCV) (Craven and Wahba, 1979) calculations in data analysis and data smoothing, including ridge regression (Golub, Heath, and Wahba, 1979), thin plate smoothing splines (Wahba and Wendelberger, 1980), deconvolution (Wahba, 1982d), smoothing of generalized linear models (O'Sullivan, Yandell and Raynor, 1986; Green, 1984; Green and Yandell, 1985), and ill-posed problems (Nychka et al., 1984; O'Sullivan and Wahba, 1985). We present some of the types of problems for which GCV is a useful method of choosing a smoothing or regularization parameter, and we describe the structure of the subroutines. Ridge regression: a familiar example of a smoothing parameter is the ridge parameter λ in the ridge regression problem.

13.
In this work, we present a computational method to approximate the occurrence of change-points in a temporal series consisting of independent and normally distributed observations with equal mean and two possible variance values. This type of temporal series occurs in the investigation of electric signals associated with rhythmic activity patterns of nerves and muscles of animals, in which the change-points represent the actual moments when the electrical activity passes from a phase of silence to one of activity, or vice versa. We test the hypothesis that there is no change-point in the temporal series against the alternative hypothesis that there exists at least one change-point, employing the corresponding likelihood ratio as the test statistic; a computational implementation of the technique of quadratic penalization is employed to approximate the log-likelihood ratio associated with the two hypotheses. When the null hypothesis is rejected, the method provides estimates of the locations of the change-points in the temporal series. Moreover, the proposed method employs a posteriori processing to avoid generating spuriously short periods of silence or activity. The method is applied to the determination of change-points in both experimental and synthetic data sets; in either case, the results of our computations are more than satisfactory.
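Stripped of the quadratic-penalization machinery and the a posteriori processing, the underlying likelihood-ratio test can be sketched as a direct scan over candidate split points for a single variance change, with the common mean taken as known (all numbers below are synthetic assumptions):

```python
import numpy as np

def lr_statistic(x):
    # scan all split points k; two-variance alternative vs one-variance null
    # (mean assumed known and equal to 0, as in the equal-mean setting)
    n = len(x)
    s_all = np.mean(x**2)
    best_stat, best_k = -np.inf, None
    for k in range(5, n - 5):          # keep a few points in each segment
        s1 = np.mean(x[:k]**2)
        s2 = np.mean(x[k:]**2)
        stat = n*np.log(s_all) - k*np.log(s1) - (n - k)*np.log(s2)
        if stat > best_stat:
            best_stat, best_k = stat, k
    return best_stat, best_k

rng = np.random.default_rng(5)
# silence (sd 0.5) then activity (sd 2.0), with a change-point at 150
x = np.concatenate([rng.normal(0, 0.5, 150), rng.normal(0, 2.0, 100)])
stat, k_hat = lr_statistic(x)
print(stat, k_hat)
```

When there is no change, the statistic stays small; a large value both rejects the null and localizes the change-point at the maximizing split.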

14.
Maximum penalized likelihood estimation is applied in non- and semi-parametric regression problems, and enables exploratory identification and diagnostics of nonlinear regression relationships. The smoothing parameter λ controls the trade-off between the smoothness and the goodness-of-fit of a function. The method of cross-validation is used for selecting λ, but generalized cross-validation, which is based on the squared-error criterion, behaves badly under non-normal distributions and often cannot select a reasonable λ. The purpose of this study is to propose a method which gives a more suitable λ and to evaluate its performance.

A method of simple calculation for the delete-one estimates in the likelihood-based cross-validation (LCV) score is described. A score of similar form to the Akaike information criterion (AIC) is also derived. The proposed scores are compared with those of standard procedures using data sets from the literature. Simulations are performed to compare the patterns of selecting λ and the overall goodness-of-fit, and to evaluate the effects of some factors. The LCV scores obtained by the simple calculation provide a good approximation to the exact one if λ is not extremely small. Furthermore, the LCV scores by the simple calculation make it possible to select λ adaptively. They have the effect of reducing the bias of estimates and provide better performance in the sense of overall goodness-of-fit. These scores are useful especially in the case of small sample sizes and in the case of binary logistic regression.

15.
We consider a random regression model with several-fold change-points. The results for one change-point are generalized. The maximum likelihood estimator of the parameters is shown to be consistent, and the asymptotic distribution of the estimators of the coefficients is shown to be Gaussian. The estimators of the change-points converge, at the rate n⁻¹, to the vector whose components are the left end points of the maximizing interval with respect to each change-point. The likelihood process is asymptotically equivalent to the sum of independent compound Poisson processes.

16.
Suppose the observations (ti, yi), i = 1,…, n, follow an additive model whose component functions gj are unknown. The estimation of the additive components can be done by approximating each gj with a function made up of the sum of a linear fit and a truncated Fourier series of cosines, and minimizing a penalized least-squares loss function over the coefficients. This finite-dimensional basis approximation, when fitting an additive model with r predictors, has the advantage of reducing the computations drastically, since it does not require the use of the backfitting algorithm. The cross-validation (CV) [or generalized cross-validation (GCV)] score for the additive fit is calculated in a further O(n) operations. A search path in the r-dimensional space of degrees of freedom is proposed along which the CV (GCV) score continuously decreases. The path ends when an increase in the degrees of freedom of any of the predictors yields an increase in CV (GCV). This procedure is illustrated on a meteorological data set.
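The finite-dimensional approximation described above can be sketched directly: build one joint design matrix from a linear term plus a truncated cosine series for each predictor and solve a single penalized least-squares problem, with no backfitting. The k⁴ penalty weights and the value of λ are illustrative assumptions of this sketch:

```python
import numpy as np

def cosine_design(t, m):
    # linear term plus truncated cosine series on [0, 1]
    cols = [t] + [np.cos(np.pi * k * t) for k in range(1, m + 1)]
    return np.column_stack(cols)

rng = np.random.default_rng(6)
n, m = 200, 8
t1 = rng.uniform(0, 1, n)
t2 = rng.uniform(0, 1, n)
y = np.sin(2*np.pi*t1) + (t2 - 0.5)**2 + rng.normal(0, 0.2, n)

# joint design: intercept plus the basis for each additive component
X = np.column_stack([np.ones(n), cosine_design(t1, m), cosine_design(t2, m)])

# penalize only the cosine coefficients; k^4 weights mimic a roughness penalty
w = np.array([0.0] + [float(k**4) for k in range(1, m + 1)])
pen = np.zeros(X.shape[1])
pen[1:2 + m] = w        # component 1: linear term + m cosines
pen[2 + m:] = w         # component 2
lam = 1e-4
beta = np.linalg.solve(X.T @ X + lam * np.diag(pen), X.T @ y)
fitted = X @ beta
print(np.mean((y - fitted) ** 2))
```

Because the whole fit is one ridge-type solve, varying λ per component and tracking CV or GCV along a path of degrees of freedom costs little beyond this single factorization.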

17.
A fast and accurate method of confidence interval construction for the smoothing parameter in penalised spline and partially linear models is proposed. The method is akin to a parametric percentile bootstrap where Monte Carlo simulation is replaced by saddlepoint approximation, and can therefore be viewed as an approximate bootstrap. It is applicable in a quite general setting, requiring only that the underlying estimator be the root of an estimating equation that is a quadratic form in normal random variables. This is the case under a variety of optimality criteria such as those commonly denoted by maximum likelihood (ML), restricted ML (REML), generalized cross validation (GCV) and Akaike's information criteria (AIC). Simulation studies reveal that under the ML and REML criteria, the method delivers a near‐exact performance with computational speeds that are an order of magnitude faster than existing exact methods, and two orders of magnitude faster than a classical bootstrap. Perhaps most importantly, the proposed method also offers a computationally feasible alternative when no known exact or asymptotic methods exist, e.g. GCV and AIC. An application is illustrated by applying the methodology to well‐known fossil data. Giving a range of plausible smoothed values in this instance can help answer questions about the statistical significance of apparent features in the data.

18.
Abstract. We study the coverage properties of Bayesian confidence intervals for the smooth component functions of generalized additive models (GAMs) represented using any penalized regression spline approach. The intervals are the usual generalization of the intervals first proposed by Wahba and Silverman in 1983 and 1985, respectively, to the GAM component context. We present simulation evidence showing these intervals have close to nominal ‘across‐the‐function’ frequentist coverage probabilities, except when the truth is close to a straight line/plane function. We extend the argument introduced by Nychka in 1988 for univariate smoothing splines to explain these results. The theoretical argument suggests that close to nominal coverage probabilities can be achieved, provided that heavy oversmoothing is avoided, so that the bias is not too large a proportion of the sampling variability. The theoretical results allow us to derive alternative intervals from a purely frequentist point of view, and to explain the impact that the neglect of smoothing parameter variability has on confidence interval performance. They also suggest switching the target of inference for component‐wise intervals away from smooth components in the space of the GAM identifiability constraints.

19.
This paper considers a nonlinear quantile model with change-points. The quantile estimation method, which includes the median model as a particular case, is more robust than other traditional methods when the model errors contain outliers. Under relatively weak assumptions, the convergence rates and asymptotic distributions of the change-point and regression parameter estimators are obtained. A numerical study by Monte Carlo simulations shows the performance of the proposed method for nonlinear models with change-points.
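The robustness of quantile criteria to outliers can be illustrated with a much simpler cousin of the paper's model: a piecewise-constant median change-point fitted by minimizing total absolute deviation around segment medians. This is only a sketch of the idea, not the paper's nonlinear quantile estimator; the data are synthetic with heavy-tailed errors:

```python
import numpy as np

def median_changepoint(y):
    # split point minimizing total absolute deviation around segment medians
    # (a piecewise-constant median model; robust to heavy-tailed errors)
    n = len(y)
    best_cost, best_k = np.inf, None
    for k in range(3, n - 3):
        cost = (np.sum(np.abs(y[:k] - np.median(y[:k]))) +
                np.sum(np.abs(y[k:] - np.median(y[k:]))))
        if cost < best_cost:
            best_cost, best_k = cost, k
    return best_k

rng = np.random.default_rng(7)
# level shift at observation 120; heavy-tailed t(2) errors produce outliers
y = np.concatenate([0 + rng.standard_t(2, 120), 4 + rng.standard_t(2, 80)])
k_hat = median_changepoint(y)
print(k_hat)
```

A least-squares version of the same scan can be dragged far off target by a single extreme error, whereas the absolute-deviation criterion barely moves.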

20.
This paper evaluates the ability of a Markov regime-switching log-normal (RSLN) model to capture the time-varying features of stock return and volatility. The model displays a better ability to depict a fat-tailed distribution than a log-normal model, which means that the RSLN model can describe observed market behavior better. Our major objective is to explore the capability of the model to capture stock market behavior over time. By analyzing the behavior of the calibrated regime-switching parameters over different lengths of time intervals, the change-point concept is introduced and an algorithm is proposed for identifying the change-points in the series, corresponding to the times when there are changes in the parameter estimates. This algorithm is tested on Standard and Poor's 500 monthly index data from 1971 to 2008, and on Nikkei 225 monthly index data from 1984 to 2008. The change-points we identify match the big events observed in the US and Japanese stock markets (e.g., the October 1987 stock market crash), and the segmentations of the stock index series, defined as the periods between change-points, match the observed bear–bull market phases.
