Query returned 20 similar documents (search time: 906 ms)
1.
2.
3.
This paper studies the nonparametric model for longitudinal data y = f(t) + ε, where f(t) is an unknown smooth function and ε is a zero-mean random error term. We approximate f(t) by an expansion in a set of basis functions, construct a quadratic inference function in the basis coefficients, and obtain estimates of the coefficients by Newton-Raphson iteration, which yields a fitted estimate of the unknown smooth function f(t). Theoretical results show that the resulting basis-coefficient estimators are consistent and asymptotically normal. Finally, numerical experiments produce good simulation results.
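The full quadratic-inference-function estimation is not reproduced here; the sketch below shows only the basis-expansion step the abstract describes, fitting f(t) by ordinary least squares on a truncated power spline basis. The knot placement, basis degree, and simulated data are assumptions for illustration.

```python
import numpy as np

def spline_basis(t, knots, degree=3):
    """Truncated power basis: 1, t, ..., t^degree, plus (t - k)_+^degree per knot."""
    cols = [t**d for d in range(degree + 1)]
    cols += [np.maximum(t - k, 0.0)**degree for k in knots]
    return np.column_stack(cols)

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 200)
y = np.sin(2 * np.pi * t) + rng.normal(scale=0.2, size=t.size)  # y = f(t) + eps

B = spline_basis(t, knots=np.linspace(0.1, 0.9, 9))
coef, *_ = np.linalg.lstsq(B, y, rcond=None)   # basis-coefficient estimates
f_hat = B @ coef                               # fitted estimate of f(t)
print(round(float(np.mean((f_hat - np.sin(2 * np.pi * t))**2)), 4))
```

The QIF approach replaces the least-squares step with estimating equations built from a working correlation structure, but the basis approximation of f(t) is the same.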
4.
This paper studies the nonparametric model for longitudinal data y = f(t) + ε, where f(t) is an unknown smooth function and ε is a zero-mean random error term. We approximate f(t) by an expansion in a set of basis functions, construct a modified quadratic inference function in the basis coefficients, and obtain estimates of the coefficients by the secant method, which yields a fitted estimate of the unknown smooth function f(t). Finally, consistency and asymptotic normality of the basis-coefficient estimators are established, and numerical experiments produce good simulation results.
5.
Holt's exponential smoothing is an advanced exponential smoothing method with two smoothing parameters to be determined. When forecasting with Holt's method, the most important and most difficult task is choosing the values of the smoothing parameters α and β. Using Excel's data-table (what-if) feature, the author easily estimates the parameters under the criterion of minimal sum of squared deviations (or minimal sum of absolute deviations), providing an effective route to wider practical application of Holt's method.
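The Excel data-table procedure described above amounts to a grid search over (α, β) minimizing the in-sample sum of squared one-step-ahead errors. A minimal sketch of that idea, with a made-up series:

```python
import numpy as np

def holt_sse(y, alpha, beta):
    """Sum of squared one-step-ahead errors for Holt's linear-trend smoothing."""
    level, trend = y[0], y[1] - y[0]          # common initialization choice
    sse = 0.0
    for obs in y[1:]:
        forecast = level + trend
        sse += (obs - forecast) ** 2
        new_level = alpha * obs + (1 - alpha) * (level + trend)
        trend = beta * (new_level - level) + (1 - beta) * trend
        level = new_level
    return sse

y = np.array([112, 118, 132, 129, 121, 135, 148, 148, 136, 119, 104, 118], float)
grid = np.linspace(0.05, 0.95, 19)            # candidate values for alpha and beta
best = min(((holt_sse(y, a, b), a, b) for a in grid for b in grid))
sse, alpha, beta = best
print(f"alpha={alpha:.2f}, beta={beta:.2f}, SSE={sse:.1f}")
```

Replacing the squared-error objective with the sum of absolute deviations gives the alternative criterion the abstract mentions.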
6.
7.
Exponential smoothing forecasting formulas and smoothing coefficients (Cited by 6: 0 self-citations, 6 by others)
This paper examines how different smoothing forecast formulas lead to different optimal smoothing coefficients. It proposes an exponential smoothing formula for short- and medium-term forecasting that has smaller prediction error, wider applicability, and greater simplicity; introduces the concepts of direct and indirect smoothing coefficients; and derives an approximate formula for computing the optimal indirect smoothing coefficient.
8.
A combined forecasting algorithm of FAR(p) and exponential smoothing (Cited by 1: 1 self-citation, 0 by others)
I. Introduction
In "A dynamic forecasting model for copper matte grade in blister copper smelting" (Journal of Central South University of Technology, 2000, 31(1): 34-36), Mei Chi, Yao Junfeng et al., and Shao Yiyuan in another paper (Journal of Ezhou University, 2002, 9(4): 38-39), proposed a method for forecasting copper matte grade: based on collected field data, system identification is used to build an AR(p) model and a triple exponential smoothing model dynamically. The two models are then combined by the least-squares principle, taking the sum of squared combined forecast errors as the objective function; minimizing this sum determines the optimal weights of the two forecasting methods and yields a new combined model with minimal forecast error. The results showed that, on the data available at the time, the combined AR(p)/exponential smoothing model was more accurate than either AR(p) or exponential smoothing used alone. Building on this work, the present paper improves the combined AR(p)/exponential smoothing forecasting model: the time series in the AR(p) model is fuzzified into a fuzzy time series, giving a fuzzy-time-series AR(p) model, i.e. the FAR(p) model, and thus a new combined forecasting model is proposed, combining FAR(p) with exponential smoothing. Finally, both combined models are applied to forecasting oilfield oil production; the results show that the FAR(p)/exponential smoothing combination achieves higher forecast accuracy than the AR(p)/exponential smoothing combination.
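The weighting step described above, choosing weights summing to one that minimize the sum of squared combined errors, has a closed form for two methods. A minimal sketch with made-up forecast errors:

```python
import numpy as np

def optimal_weight(e1, e2):
    """Weight w on method 1 (1 - w on method 2) minimizing sum((w*e1 + (1-w)*e2)^2).
    Setting the derivative of S(w) = sum((e2 + w*(e1-e2))^2) to zero gives
    w = -sum(e2*(e1-e2)) / sum((e1-e2)^2)."""
    d = e1 - e2
    return float(-np.dot(e2, d) / np.dot(d, d))

# Hypothetical one-step forecast errors from two competing methods
e1 = np.array([1.2, -0.8, 0.5, -1.1, 0.9])
e2 = np.array([-0.6, 0.7, -0.4, 0.8, -0.5])

w = optimal_weight(e1, e2)
combined = w * e1 + (1 - w) * e2
print(round(w, 3), round(float(np.sum(combined**2)), 3))
```

Because w = 0 and w = 1 are feasible choices, the combined sum of squared errors can never exceed that of either individual method, which is the motivation for combined forecasting.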
9.
Excel variables and a triple-exponential-smoothing simulation forecasting method (Cited by 3: 0 self-citations, 3 by others)
Exponential smoothing forecasts by taking a weighted average over the entire historical data series of the forecast target. In exponential smoothing, one typically tries different values of the weighting coefficient α, runs the simulation repeatedly, compares forecast errors, and selects an appropriate result. However, with double or triple exponential smoothing models the formulas are relatively complex, so changing parameters during repeated simulation runs becomes very tedious. Taking triple-exponential-smoothing forecasts of China's meat production as an example, this paper combines Excel variables with worksheets and links data and charts, thereby achieving convenient and fast simulation-based forecast analysis.
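For reference, a minimal sketch of Brown's one-parameter triple exponential smoothing, the kind of model whose repeated recalculation the Excel setup above automates. The series and α are made up for illustration:

```python
def triple_exp_smooth_forecast(y, alpha, m=1):
    """Brown's triple exponential smoothing: m-step-ahead forecast a + b*m + c*m^2/2."""
    s1 = s2 = s3 = y[0]                       # common initialization choice
    for obs in y:
        s1 = alpha * obs + (1 - alpha) * s1   # single smoothing
        s2 = alpha * s1 + (1 - alpha) * s2    # double smoothing
        s3 = alpha * s2 + (1 - alpha) * s3    # triple smoothing
    a = 3 * s1 - 3 * s2 + s3
    b = alpha / (2 * (1 - alpha) ** 2) * (
        (6 - 5 * alpha) * s1 - 2 * (5 - 4 * alpha) * s2 + (4 - 3 * alpha) * s3)
    c = alpha ** 2 / (1 - alpha) ** 2 * (s1 - 2 * s2 + s3)
    return a + b * m + 0.5 * c * m ** 2

y = [5.3, 5.6, 6.0, 6.4, 6.9, 7.5, 8.1, 8.8]  # made-up, steadily growing series
print(round(triple_exp_smooth_forecast(y, alpha=0.3, m=1), 3))
```

Because the smoothed states and forecast coefficients must all be recomputed each time α changes, linking them in a spreadsheet (or a function like the one above) makes the trial-and-error search over α practical.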
10.
To improve the sensitivity of quality control charts in detecting small-to-moderate process shifts, this paper proposes a new universal exponentially weighted moving average (UEWMA) control chart for monitoring the process mean. The chart is a general extension of the EWMA chart: smoothing coefficients λ1, λ2, …, λs are chosen according to the characteristics of the data so as to optimize control performance. Methods for computing the mean and control limits of the UEWMA chart are given, and the average run length (ARL) and the standard deviation of the run length (SDRL) are derived. Finally, the effect of the smoothing coefficients on chart performance is studied, and the chart's sensitivity to small-to-moderate process shifts is compared with that of existing charts. The results show that, by designing the smoothing coefficients from the data characteristics, the UEWMA chart offers good flexibility, high sensitivity, strong extensibility, and good control performance.
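For context, a sketch of the standard single-coefficient EWMA chart that the UEWMA chart generalizes: the statistic is z_t = λx_t + (1-λ)z_{t-1} with time-varying limits μ0 ± Lσ√(λ/(2-λ)·(1-(1-λ)^{2t})). The data are made up, with an idealized mean shift from 0 to 2 at t = 21.

```python
import numpy as np

def ewma_chart(x, mu0, sigma, lam=0.2, L=3.0):
    """Return (z, lower limit, upper limit) for each observation."""
    z, out = mu0, []
    for t, obs in enumerate(x, start=1):
        z = lam * obs + (1 - lam) * z
        width = L * sigma * np.sqrt(lam / (2 - lam) * (1 - (1 - lam) ** (2 * t)))
        out.append((z, mu0 - width, mu0 + width))
    return out

x = np.array([0.0] * 20 + [2.0] * 20)      # idealized 2-sigma mean shift at t = 21
points = ewma_chart(x, mu0=0.0, sigma=1.0)
signals = [t for t, (z, lo, hi) in enumerate(points, 1) if not lo <= z <= hi]
print("first out-of-control signal at t =", signals[0] if signals else None)
```

The UEWMA chart replaces the single λ with data-driven coefficients λ1, …, λs; the chart mechanics, statistic plus control limits, follow the same pattern.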
11.
Generalized cross-validation is a method for choosing the smoothing parameter in smoothing splines and related regularization problems. This method requires the global minimization of the generalized cross-validation function. In this paper an algorithm based on interval analysis is presented to find the globally optimal value for the smoothing parameter, and a numerical example illustrates the performance of the algorithm.
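The GCV criterion being minimized can be sketched concretely for ridge regression, a linear smoother with hat matrix S(λ) = X(XᵀX + λI)⁻¹Xᵀ, where GCV(λ) = n·RSS(λ)/(n - tr S(λ))². The grid search below is a simple stand-in for the paper's interval-analysis global minimization; data and grid are assumptions.

```python
import numpy as np

def gcv(X, y, lam):
    """GCV score for ridge regression with penalty lam."""
    n, p = X.shape
    S = X @ np.linalg.solve(X.T @ X + lam * np.eye(p), X.T)  # hat matrix
    resid = y - S @ y
    return n * float(resid @ resid) / (n - np.trace(S)) ** 2

rng = np.random.default_rng(2)
X = rng.normal(size=(60, 8))
beta = np.array([2.0, -1.0, 0.0, 0.0, 1.5, 0.0, 0.0, 0.5])
y = X @ beta + rng.normal(scale=1.0, size=60)

grid = np.logspace(-3, 3, 25)
lam_best = min(grid, key=lambda lam: gcv(X, y, lam))
print("GCV-selected lambda:", lam_best)
```

A grid search can miss the global minimum of a multimodal GCV function, which is precisely the problem the interval-analysis algorithm addresses.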
12.
This article considers nonparametric regression problems and develops a model-averaging procedure for smoothing spline regression. Unlike most smoothing parameter selection studies, which determine a single optimum smoothing parameter, our focus here is on the prediction accuracy for the true conditional mean of Y given a predictor X. Our method consists of two steps. The first step is to construct a class of smoothing spline regression models based on nonparametric bootstrap samples, each with an appropriate smoothing parameter. The second step is to average the bootstrap smoothing spline estimates of different smoothness to form a final improved estimate. To minimize the prediction error, we estimate the model weights using a delete-one-out cross-validation procedure. A simulation study, performed with a program written in R, compares the well-known cross-validation (CV) and generalized cross-validation (GCV) methods with the proposed method. The new method is straightforward to implement and gives reliable performance in simulations.
13.
Smoothing parameter selection in nonparametric regression using an improved Akaike information criterion (Cited by 1: 0 self-citations, 1 by others)
Clifford M. Hurvich, Jeffrey S. Simonoff & Chih-Ling Tsai 《Journal of the Royal Statistical Society. Series B, Statistical methodology》1998,60(2):271-293
Many different methods have been proposed to construct nonparametric estimates of a smooth regression function, including local polynomial, (convolution) kernel and smoothing spline estimators. Each of these estimators uses a smoothing parameter to control the amount of smoothing performed on a given data set. In this paper an improved version of a criterion based on the Akaike information criterion (AIC), termed AICC, is derived and examined as a way to choose the smoothing parameter. Unlike plug-in methods, AICC can be used to choose smoothing parameters for any linear smoother, including local quadratic and smoothing spline estimators. The use of AICC avoids the large variability and tendency to undersmooth (compared with the actual minimizer of average squared error) seen when other 'classical' approaches (such as generalized cross-validation (GCV) or the AIC) are used to choose the smoothing parameter. Monte Carlo simulations demonstrate that the AICC-based smoothing parameter is competitive with a plug-in method (assuming that one exists) when the plug-in method works well but also performs well when the plug-in approach fails or is unavailable.
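The AICC criterion for a linear smoother ŷ = Hy is AICC = log(σ̂²) + 1 + 2(tr H + 1)/(n − tr H − 2), with σ̂² = RSS/n. The sketch below applies it to a Nadaraya-Watson (Gaussian kernel) smoother; the bandwidth grid and simulated data are assumptions for illustration.

```python
import numpy as np

def nw_hat_matrix(t, h):
    """Hat matrix of a Nadaraya-Watson smoother with Gaussian kernel, bandwidth h."""
    W = np.exp(-0.5 * ((t[:, None] - t[None, :]) / h) ** 2)
    return W / W.sum(axis=1, keepdims=True)

def aicc(y, H):
    """Improved AIC for a linear smoother y_hat = H y."""
    n = len(y)
    resid = y - H @ y
    sigma2 = float(resid @ resid) / n
    trH = float(np.trace(H))
    return np.log(sigma2) + 1 + 2 * (trH + 1) / (n - trH - 2)

rng = np.random.default_rng(3)
t = np.linspace(0, 1, 100)
y = np.cos(3 * np.pi * t) + rng.normal(scale=0.3, size=t.size)

bandwidths = np.linspace(0.01, 0.3, 30)
h_best = min(bandwidths, key=lambda h: aicc(y, nw_hat_matrix(t, h)))
print("AICC-selected bandwidth:", round(float(h_best), 3))
```

The penalty term 2(tr H + 1)/(n − tr H − 2) grows sharply as the effective degrees of freedom tr H approach n, which is what counteracts the undersmoothing tendency of GCV and AIC.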
14.
In a smoothing spline model with unknown change-points, the choice of the smoothing parameter strongly influences the estimation of the change-point locations and the function at the change-points. In a tumor biology example, where change-points in blood flow in response to treatment were of interest, choosing the smoothing parameter based on minimizing generalized cross-validation (GCV) gave unsatisfactory estimates of the change-points. We propose a new method, aGCV, that re-weights the residual sum of squares and generalized degrees of freedom terms from GCV. The weight is chosen to maximize the decrease in the generalized degrees of freedom as a function of the weight value, while simultaneously minimizing aGCV as a function of the smoothing parameter and the change-points. Compared with GCV, simulation studies suggest that the aGCV method yields improved estimates of the change-point and the value of the function at the change-point.
15.
These Fortran-77 subroutines provide building blocks for Generalized Cross-Validation (GCV) (Craven and Wahba, 1979) calculations in data analysis and data smoothing, including ridge regression (Golub, Heath, and Wahba, 1979), thin plate smoothing splines (Wahba and Wendelberger, 1980), deconvolution (Wahba, 1982d), smoothing of generalized linear models (O'Sullivan, Yandell and Raynor 1986, Green 1984 and Green and Yandell 1985), and ill-posed problems (Nychka et al., 1984, O'Sullivan and Wahba, 1985). We present some of the types of problems for which GCV is a useful method of choosing a smoothing or regularization parameter, and we describe the structure of the subroutines. Ridge regression: a familiar example of a smoothing parameter is the ridge parameter λ in the ridge regression problem which we write.
16.
For the semiparametric model for longitudinal data E(y|x,t) = XTβ + f(t), a penalized quadratic inference function method is used to estimate the regression parameter β and the unknown smooth function f(t) simultaneously. First, the unknown smooth function is approximated by an expansion in a truncated power basis; then, following the penalized-spline idea, a penalized quadratic inference function in the regression parameters and basis coefficients is constructed, and minimizing it yields the penalized quadratic inference function estimates of the regression parameters and basis coefficients. Theoretical results show that the estimators are consistent and asymptotically normal, and numerical experiments also produce good simulation results.
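The penalized-spline component of the method above can be sketched in isolation: approximate f(t) in a truncated power basis and shrink only the knot coefficients with a ridge-type penalty. The joint penalized-QIF estimation of β and f is not shown; the design, knots, and penalty λ below are assumptions for illustration.

```python
import numpy as np

def penalized_spline_fit(t, y, knots, lam, degree=2):
    """Ridge-penalized truncated power basis fit; only knot terms are penalized."""
    B = np.column_stack([t**d for d in range(degree + 1)] +
                        [np.maximum(t - k, 0.0)**degree for k in knots])
    D = np.diag([0.0] * (degree + 1) + [1.0] * len(knots))  # penalty matrix
    coef = np.linalg.solve(B.T @ B + lam * D, B.T @ y)
    return B @ coef

rng = np.random.default_rng(4)
t = np.linspace(0, 1, 150)
f_true = np.exp(-t) * np.sin(4 * np.pi * t)
y = f_true + rng.normal(scale=0.15, size=t.size)

f_hat = penalized_spline_fit(t, y, knots=np.linspace(0.1, 0.9, 15), lam=1e-3)
print(round(float(np.mean((f_hat - f_true)**2)), 4))
```

Leaving the polynomial terms unpenalized while shrinking the knot coefficients is the standard penalized-spline device for controlling the wiggliness of f(t) without biasing its overall trend.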
17.
Hirokazu Yanagihara 《Statistics and Computing》2012,22(2):527-544
Typically, an optimal smoothing parameter in a penalized spline regression is determined by minimizing an information criterion, such as one of the C_p, CV and GCV criteria. Since an explicit solution to the minimization problem for an information criterion cannot be obtained, it is necessary to carry out an iterative procedure to search for the optimal smoothing parameter. In order to avoid such extra calculation, a non-iterative optimization method for smoothness in penalized spline regression is proposed using the formulation of generalized ridge regression. By conducting numerical simulations, we verify that our method has better performance than other methods which optimize the number of basis functions and the single smoothing parameter by means of the CV or GCV criteria.
18.
Shujie Ma & Lijian Yang 《Journal of statistical planning and inference》2011,141(1):204-219
A spline-backfitted kernel smoothing method is proposed for the partially linear additive model. Under assumptions of stationarity and geometric mixing, the proposed function and parameter estimators are oracally efficient and fast to compute. These superior properties are achieved by applying spline smoothing and then kernel smoothing to the data consecutively. Simulation experiments with both moderate and large numbers of variables confirm the asymptotic results. An application to the Boston housing data serves as a practical illustration of the method.
19.
《Communications in Statistics: Theory and Methods》2013,42(10):2033-2044
Smoothing parameter selection by the one-sided cross-validation (OSCV) method is completely automatic in that it does not require extra parameter estimation. It also reduces variability to a level comparable to that of plug-in rules. In this paper we derive analytically the asymptotic variance of the smoothing parameter selected by OSCV. The result shows how stability depends on the one-sided kernel and points to the possibility of an optimal one-sided kernel that minimizes the asymptotic variability.
20.
A method of regularized discriminant analysis for discrete data, denoted DRDA, is proposed. This method is related to the regularized discriminant analysis conceived by Friedman (1989) in a Gaussian framework for continuous data. Here, we are concerned with discrete data and consider the classification problem using the multinomial distribution. DRDA has been conceived for the small-sample, high-dimensional setting. The method occupies an intermediate position between multinomial discrimination, the first-order independence model, and kernel discrimination. DRDA is characterized by two parameters, whose values are calculated by minimizing a sample-based estimate of future misclassification risk by cross-validation. The first parameter is a complexity parameter which provides class-conditional probabilities as a convex combination of those derived from the full multinomial model and the first-order independence model. The second parameter is a smoothing parameter associated with the discrete kernel of Aitchison and Aitken (1976). The optimal complexity parameter is calculated first; then, holding this parameter fixed, the optimal smoothing parameter is determined. A modified approach, in which the smoothing parameter is chosen first, is also discussed. The efficiency of the method is examined against other classical methods through application to data.
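The complexity parameter's convex combination can be sketched directly for one class: blend the full multinomial estimate of the joint cell probabilities with the product-of-marginals (first-order independence) estimate. The two-feature table of counts below is made up for illustration; the smoothing-parameter (discrete kernel) step is not shown.

```python
import numpy as np

def drda_probs(cells, alpha):
    """alpha * full-multinomial estimate + (1 - alpha) * independence estimate.
    cells: observed joint counts over two discrete features for one class."""
    cells = np.asarray(cells, float)
    full = cells / cells.sum()                      # full multinomial model
    p_row = cells.sum(axis=1) / cells.sum()         # marginal of feature 1
    p_col = cells.sum(axis=0) / cells.sum()         # marginal of feature 2
    indep = np.outer(p_row, p_col)                  # first-order independence model
    return alpha * full + (1 - alpha) * indep

counts = [[30, 10], [5, 15]]                        # hypothetical joint counts
for a in (0.0, 0.5, 1.0):
    p = drda_probs(counts, a)
    print(a, np.round(p, 3), round(float(p.sum()), 6))
```

At α = 1 the estimate is the (high-variance) full multinomial; at α = 0 it is the (high-bias) independence model; DRDA picks α by cross-validated misclassification risk.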