Similar Documents
20 similar documents retrieved
1.
Abstract. Similar to variable selection in the linear model, selecting significant components in the additive model is of great interest. However, such components are unknown, unobservable functions of independent variables. Some approximation is needed. We suggest a combination of penalized regression spline approximation and group variable selection, called the group‐bridge‐type spline method (GBSM), to handle this component selection problem with a diverging number of correlated variables in each group. The proposed method can select significant components and estimate non‐parametric additive function components simultaneously. To make the GBSM stable in computation and adaptive to the level of smoothness of the component functions, weighted power spline bases and projected weighted power spline bases are proposed. Their performance is examined by simulation studies. The proposed method is extended to a partial linear regression model analysis with real data, and gives reliable results.
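The power spline bases mentioned above can be illustrated with a minimal sketch. This shows only a plain truncated power basis; the weighting and projection steps of the GBSM are omitted, and the knot placement in the usage line is hypothetical:

```python
import numpy as np

def power_spline_basis(x, knots, degree=2):
    """Truncated power basis: 1, x, ..., x^degree, plus (x - k)_+^degree
    for each knot k. One such basis approximates each additive component."""
    cols = [x ** j for j in range(degree + 1)]
    cols += [np.clip(x - k, 0.0, None) ** degree for k in knots]
    return np.column_stack(cols)

# 5 design points, one interior knot at 0.5 (illustrative values)
x = np.linspace(0.0, 1.0, 5)
B = power_spline_basis(x, knots=[0.5], degree=2)
```

Penalizing the coefficients of the knot columns groupwise (one group per component) is what turns this basis into a component-selection device.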

2.
A Bayesian approach is presented for nonparametric estimation of an additive regression model with autocorrelated errors. Each of the potentially non-linear components is modelled as a regression spline using many knots, while the errors are modelled by a high order stationary autoregressive process parameterized in terms of its autocorrelations. The distribution of significant knots and partial autocorrelations is accounted for using subset selection. Our approach also allows the selection of a suitable transformation of the dependent variable. All aspects of the model are estimated simultaneously by using the Markov chain Monte Carlo method. It is shown empirically that the approach proposed works well on several simulated and real examples.

3.
Summary.  We propose a lag selection method for non-linear additive autoregressive models that is based on spline estimation and the Bayes information criterion. The additive structure of the autoregression function is used to overcome the 'curse of dimensionality', whereas the spline estimators effectively take into account such a structure in estimation. A stepwise procedure is suggested to implement the method proposed. A comprehensive Monte Carlo study demonstrates good performance of the method proposed and a substantial computational advantage over existing local-polynomial-based methods. Consistency of the lag selection method based on the Bayes information criterion is established under the assumption that the observations are from a stochastic process that is strictly stationary and strongly mixing, which provides the first theoretical result of this kind for spline smoothing of weakly dependent data.
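The shape of BIC-based lag selection can be sketched as follows. A plain linear AR regression stands in here for the paper's spline estimates of the additive lag components, so this is only an illustration of the criterion, not of the method itself:

```python
import numpy as np

def bic_lag_selection(y, max_lag=5):
    """Select an autoregressive lag order by BIC.

    For each candidate order p, regress y_t on its first p lags by
    least squares and score BIC = n*log(RSS/n) + (p+1)*log(n).
    """
    bics = {}
    for p in range(1, max_lag + 1):
        X = np.column_stack(
            [np.ones(len(y) - p)]
            + [y[p - j:len(y) - j] for j in range(1, p + 1)])  # lag-j column
        target = y[p:]
        beta, *_ = np.linalg.lstsq(X, target, rcond=None)
        rss = float(np.sum((target - X @ beta) ** 2))
        n = len(target)
        bics[p] = n * np.log(rss / n) + (p + 1) * np.log(n)
    return min(bics, key=bics.get), bics
```

The stepwise procedure of the paper would add or drop individual lags rather than scan nested orders as done above.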

4.
In this paper, we consider partially linear additive models with an unknown link function, which include single‐index models and additive models as special cases. We use the polynomial spline method for estimating the unknown link function as well as the component functions in the additive part. We establish that the convergence rates for all nonparametric functions are the same as in one‐dimensional nonparametric regression. For a faster rate of the parametric part, we need to define an appropriate 'projection' that is more complicated than that defined previously for partially linear additive models. Compared to previous approaches, a distinct advantage of our approach in implementation is that estimation directly reduces to estimation in the single‐index model, and it can thus deal with much larger dimensional problems than previous approaches for additive models with unknown link functions. Simulations and a real dataset are used to illustrate the proposed model.

5.
A spline-backfitted kernel smoothing method is proposed for the partially linear additive model. Under assumptions of stationarity and geometric mixing, the proposed function and parameter estimators are oracle-efficient and fast to compute. Such superior properties are achieved by applying spline smoothing and kernel smoothing to the data consecutively. Simulation experiments with both moderate and large numbers of variables confirm the asymptotic results. An application to the Boston housing data serves as a practical illustration of the method.

6.
As a flexible alternative to the Cox model, the accelerated failure time (AFT) model assumes that the event time of interest depends on the covariates through a regression function. The AFT model with non‐parametric covariate effects is investigated, when variable selection is desired along with estimation. Formulated in the framework of the smoothing spline analysis of variance model, the proposed method based on the Stute estimate ( Stute, 1993 [Consistent estimation under random censorship when covariables are present, J. Multivariate Anal. 45 , 89–103]) can achieve a sparse representation of the functional decomposition, by utilizing a reproducing kernel Hilbert norm penalty. Computational algorithms and theoretical properties of the proposed method are investigated. The finite sample size performance of the proposed approach is assessed via simulation studies. The primary biliary cirrhosis data is analyzed for demonstration.

7.
In some applications of statistical quality control, the quality of a process or a product is best characterized by a functional relationship between a response variable and one or more explanatory variables. This relationship is referred to as a profile. In certain cases, the quality of a process or a product is better described by a non-linear profile which does not follow a specific parametric model. In these circumstances, nonparametric approaches with greater flexibility in modeling complicated profiles are adopted. In this research, the spline smoothing method is used to model a complicated non-linear profile, and a Hotelling T² control chart based on the spline coefficients is used to monitor the process. After receiving an out-of-control signal, a maximum likelihood estimator is employed for change-point estimation. The simulation studies, which include both global and local shifts, provide an appropriate evaluation of the performance of the proposed estimation and monitoring procedure. The results indicate that the proposed method detects large global shifts effectively and is also very sensitive in detecting local shifts.
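The monitoring statistic itself is standard: with the fitted spline coefficient vector b of a new profile and an in-control mean mu0 and covariance S0 (which would come from a Phase I analysis; the values below are toy inputs), the chart plots T² = (b − mu0)ᵀ S0⁻¹ (b − mu0). A minimal sketch:

```python
import numpy as np

def hotelling_t2(b, mu0, S0):
    """Hotelling T^2 statistic for one profile's spline coefficient vector."""
    d = np.asarray(b, dtype=float) - np.asarray(mu0, dtype=float)
    return float(d @ np.linalg.solve(S0, d))

# toy example: identity covariance, unit shift in one coefficient -> T^2 = 1
t2 = hotelling_t2([1.0, 0.0], [0.0, 0.0], np.eye(2))
```

A signal is raised when T² exceeds a control limit calibrated to the desired in-control run length.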

8.
An algorithm for sampling from non-log-concave multivariate distributions is proposed, which improves the adaptive rejection Metropolis sampling (ARMS) algorithm by incorporating hit-and-run sampling. It is not rare for ARMS to be trapped away from some subspace with significant probability in the support of the multivariate distribution. While ARMS updates samples only in directions parallel to the coordinate axes, our proposed method, hit-and-run ARMS (HARARMS), updates samples in arbitrary directions determined by the hit-and-run algorithm, which makes it nearly impossible for the sampler to be trapped in any isolated subspace. HARARMS performs the same as ARMS in a single dimension while being more reliable in multidimensional spaces. Its performance is illustrated by a Bayesian free-knot spline regression example. We show that it overcomes the well-known 'lethargy' property and decisively finds the globally optimal number and locations of the knots of the spline function.
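The hit-and-run idea can be sketched in a few lines. Note the line sampler here is a plain Metropolis step along the random direction, standing in for the adaptive-rejection line sampler that HARARMS actually uses:

```python
import numpy as np

def hit_and_run_step(x, log_density, rng, step=1.0):
    """One hit-and-run update: draw a uniformly random direction on the
    sphere, propose a Gaussian-length move along it, accept or reject
    by the Metropolis rule."""
    d = rng.standard_normal(x.shape[0])
    d /= np.linalg.norm(d)                      # random direction
    prop = x + step * rng.standard_normal() * d  # move along the line
    if np.log(rng.uniform()) < log_density(prop) - log_density(x):
        return prop
    return x
```

Because the direction is arbitrary rather than axis-parallel, the chain can escape regions that trap a coordinate-wise sampler.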

9.
Residual life (RL) estimation plays an important role in prognostics and health management. Under operating conditions, components usually experience stresses that vary continuously over time, which have an impact on the degradation processes. This paper investigates a Wiener process model to track and predict the RL under time-varying conditions. The item-to-item variation is captured by the drift parameter, and the degradation characteristic of the whole population is described by the diffusion parameter. The bootstrap method and the Bayesian theorem are employed to estimate and update the distribution parameters of 'a' and 'b', the coefficients of the linear drifting process in the degradation model. Once new degradation information becomes available, the RL distributions accounting for the future operating condition are derived. The proposed method is tested on lithium-ion battery devices under three levels of charging/discharging rates. The results are further validated by a simulation method.
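For a linear-drift Wiener degradation path X(t) = x + a·t + σ·W(t), the remaining life to a fixed failure threshold D follows an inverse Gaussian distribution. A sketch of its density (the parameter values in the test are hypothetical; the paper's time-varying drift and Bayesian updating are not reproduced here):

```python
import numpy as np

def rl_density(t, x, D, drift, sigma):
    """Inverse Gaussian first-passage density of X(t) = x + drift*t +
    sigma*W(t) hitting the threshold D (requires drift > 0, t > 0)."""
    m = D - x  # remaining degradation margin
    return m / np.sqrt(2.0 * np.pi * sigma ** 2 * t ** 3) * np.exp(
        -(m - drift * t) ** 2 / (2.0 * sigma ** 2 * t))
```

The mean residual life under this density is (D − x)/drift, which is what Bayesian updating of the drift coefficients refines as new degradation data arrive.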

10.
This article deals with semisupervised learning based on the naive Bayes assumption. A univariate Gaussian mixture density is used for continuous input variables, whereas a histogram-type density is adopted for discrete input variables. The EM algorithm is used to compute maximum likelihood estimators of the model parameters when the number of mixing components for each continuous input variable is fixed. We carry out model selection, choosing a parsimonious model among various fitted models on the basis of an information criterion. A common density method is proposed for the selection of significant input variables. Simulated and real datasets are used to illustrate the performance of the proposed method.
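A bare-bones version of the per-variable density step is EM for a two-component univariate Gaussian mixture (the number of components is fixed at two here, and the semisupervised label handling of the paper is omitted):

```python
import numpy as np

def em_gmm1d(x, n_iter=50):
    """EM for a two-component univariate Gaussian mixture.
    Returns mixing weights, means, and standard deviations."""
    mu = np.array([x.min(), x.max()])      # spread-out initial means
    sd = np.array([x.std(), x.std()])
    w = np.array([0.5, 0.5])
    for _ in range(n_iter):
        # E-step: responsibilities of each component for each point
        dens = w * np.exp(-(x[:, None] - mu) ** 2 / (2 * sd ** 2)) \
               / (sd * np.sqrt(2 * np.pi))
        r = dens / dens.sum(axis=1, keepdims=True)
        # M-step: weighted updates of weights, means, and sds
        nk = r.sum(axis=0)
        w = nk / len(x)
        mu = (r * x[:, None]).sum(axis=0) / nk
        sd = np.sqrt((r * (x[:, None] - mu) ** 2).sum(axis=0) / nk)
    return w, mu, sd
```

An information criterion such as BIC would then compare fits with different numbers of components, as the abstract describes.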

11.
Motivated by the need to analyze the National Longitudinal Surveys data, we propose a new semiparametric longitudinal mean‐covariance model in which the effects of some explanatory variables on the dependent variable are linear and others are non‐linear, while the within‐subject correlations are modelled by a non‐stationary autoregressive error structure. We develop an estimation machinery based on the least squares technique, approximating the non‐parametric functions via B‐spline expansions, and establish the asymptotic normality of the parametric estimators as well as the rate of convergence of the non‐parametric estimators. We further advocate a new model selection strategy in the varying‐coefficient model framework for distinguishing whether a component is significant and, subsequently, whether it is linear or non‐linear. In addition, the proposed method can also be employed to identify the true order of lagged terms consistently. Monte Carlo studies are conducted to examine the finite sample performance of our approach, and an application to real data is also presented.

12.
The penalized spline is a popular method for function estimation when the assumption of “smoothness” is valid. In this paper, methods for estimation and inference are proposed using penalized splines under additional constraints of shape, such as monotonicity or convexity. The constrained penalized spline estimator is shown to have the same convergence rates as the corresponding unconstrained penalized spline, although in practice the squared error loss is typically smaller for the constrained versions. The penalty parameter may be chosen with generalized cross‐validation, which also provides a method for determining if the shape restrictions hold. The method is not a formal hypothesis test, but is shown to have nice large‐sample properties, and simulations show that it compares well with existing tests for monotonicity. Extensions to the partial linear model, the generalized regression model, and the varying coefficient model are given, and examples demonstrate the utility of the methods. The Canadian Journal of Statistics 40: 190–206; 2012 © 2012 Statistical Society of Canada

13.
Generalized additive models represented using low rank penalized regression splines, estimated by penalized likelihood maximisation and with smoothness selected by generalized cross validation or similar criteria, provide a computationally efficient general framework for practical smooth modelling. Various authors have proposed approximate Bayesian interval estimates for such models, based on extensions of the work of Wahba, G. (1983) [Bayesian confidence intervals for the cross validated smoothing spline. J. R. Statist. Soc. B 45 , 133–150] and Silverman, B.W. (1985) [Some aspects of the spline smoothing approach to nonparametric regression curve fitting. J. R. Statist. Soc. B 47 , 1–52] on smoothing spline models of Gaussian data, but testing of such intervals has been rather limited and there is little supporting theory for the approximations used in the generalized case. This paper aims to improve this situation by providing simulation tests and obtaining asymptotic results supporting the approximations employed for the generalized case. The simulation results suggest that while across‐the‐model performance is good, component‐wise coverage probabilities are not as reliable. Since this is likely to result from the neglect of smoothing parameter variability, a simple and efficient simulation method is proposed to account for smoothing parameter uncertainty: this is demonstrated to substantially improve the performance of component‐wise intervals.

14.
In order to study developmental variables, for example, the neuromotor development of children and adolescents, monotone fitting is typically needed. Most methods for estimating a monotone regression function non-parametrically, however, are not straightforward to implement, a difficult issue being the choice of smoothing parameters. In this paper, a convenient implementation of the monotone B-spline estimates of Ramsay [Monotone regression splines in action (with discussion), Stat. Sci. 3 (1988), pp. 425–461] and Kelly and Rice [Monotone smoothing with application to dose-response curves and the assessment of synergism, Biometrics 46 (1990), pp. 1071–1085] is proposed and applied to neuromotor data. Knots are selected adaptively using ideas found in Friedman and Silverman [Flexible parsimonious smoothing and additive modelling (with discussion), Technometrics 31 (1989), pp. 3–39], yielding a flexible algorithm to automatically and accurately estimate a monotone regression function. Using splines also simultaneously allows one to include other aspects in the estimation problem, such as modeling a constant difference between two groups or a known jump in the regression function. Finally, an estimate which is not only monotone but also has a 'levelling-off' (i.e. becomes constant after some point) is derived. This is useful when the developmental variable is known to attain a maximum/minimum within the interval of observation.
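A basis-free way to see the monotonicity constraint is the pool-adjacent-violators algorithm, which returns the closest non-decreasing sequence in least squares. It is a stand-in for intuition only, not the monotone B-spline estimator of the paper:

```python
import numpy as np

def pava(y):
    """Pool-adjacent-violators: the isotonic (non-decreasing)
    least-squares fit. Each block holds [running mean, count]."""
    blocks = []
    for v in map(float, y):
        blocks.append([v, 1])
        # merge backwards while the monotone order is violated
        while len(blocks) > 1 and blocks[-2][0] > blocks[-1][0]:
            v1, c1 = blocks.pop()
            v0, c0 = blocks.pop()
            blocks.append([(v0 * c0 + v1 * c1) / (c0 + c1), c0 + c1])
    return np.concatenate([[v] * c for v, c in blocks])
```

A monotone spline achieves the same kind of constraint while also giving a smooth curve, which is why the spline formulation can additionally absorb group offsets, known jumps, or a levelling-off.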

15.
In this article, we study a nonparametric approach based on a general nonlinear reduced-form equation to achieve a better approximation of the optimal instrument. Accordingly, we propose the nonparametric additive instrumental variable estimator (NAIVE) with the adaptive group Lasso. We theoretically demonstrate that the proposed estimator is root-n consistent and asymptotically normal. The adaptive group Lasso helps us select the valid instruments while the dimensionality of potential instrumental variables is allowed to be greater than the sample size. In practice, the degree and knots of the B-spline series are selected by minimizing the BIC or EBIC criteria for each nonparametric additive component in the reduced-form equation. In Monte Carlo simulations, we show that the NAIVE has the same performance as the linear instrumental variable (IV) estimator for a truly linear reduced-form equation. On the other hand, the NAIVE performs much better in terms of bias and mean squared error compared to other alternative estimators under a high-dimensional nonlinear reduced-form equation. We further illustrate our method in an empirical study of international trade and growth. Our findings provide stronger evidence that international trade has a significant positive effect on economic growth.

16.
A method based on the principle of unbiased risk estimation is used to select the splined variables in an exploratory partial spline model proposed by Wahba (1985). The probability of correct selection based on the proposed procedure is discussed under regularity conditions. Furthermore, the resulting estimate of the regression function achieves the optimal rates of convergence over a general class of smooth regression functions (Stone 1982) when its underlying smoothness condition is not known.

17.
VARIABLE SELECTION IN NONPARAMETRIC ADDITIVE MODELS
We consider a nonparametric additive model of a conditional mean function in which the number of variables and additive components may be larger than the sample size but the number of nonzero additive components is "small" relative to the sample size. The statistical problem is to determine which additive components are nonzero. The additive components are approximated by truncated series expansions with B-spline bases. With this approximation, the problem of component selection becomes that of selecting the groups of coefficients in the expansion. We apply the adaptive group Lasso to select nonzero components, using the group Lasso to obtain an initial estimator and reduce the dimension of the problem. We give conditions under which the group Lasso selects a model whose number of components is comparable with the underlying model, and the adaptive group Lasso selects the nonzero components correctly with probability approaching one as the sample size increases and achieves the optimal rate of convergence. The results of Monte Carlo experiments show that the adaptive group Lasso procedure works well with samples of moderate size. A data example is used to illustrate the application of the proposed method.  相似文献   
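The computational heart of the group Lasso is groupwise soft-thresholding: the whole coefficient group of one component's B-spline expansion is shrunk toward zero and dropped in a single piece when its norm falls below the threshold. A sketch of that one operation (the full coordinate-descent loop and the adaptive weights are omitted):

```python
import numpy as np

def group_soft_threshold(beta, lam):
    """Shrink a coefficient group toward zero; return the zero vector
    when the group's Euclidean norm is at most lam, which is how an
    entire additive component gets deselected at once."""
    beta = np.asarray(beta, dtype=float)
    norm = np.linalg.norm(beta)
    if norm <= lam:
        return np.zeros_like(beta)
    return (1.0 - lam / norm) * beta
```

The adaptive variant simply rescales lam per group using an initial estimate, so groups that look null in the first stage face a heavier penalty.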

18.
The paper proposes a cross-validation method to address the question of specification search in a multiple nonlinear quantile regression framework. Linear parametric, spline-based partially linear and kernel-based fully nonparametric specifications are contrasted as competitors using cross-validated weighted L1-norm based goodness-of-fit and prediction error criteria. The aim is to provide a fair comparison with respect to estimation accuracy and/or predictive ability for different semi- and nonparametric specification paradigms. This is challenging as the model dimension cannot be estimated for all competitors and the meta-parameters such as kernel bandwidths, spline knot numbers and polynomial degrees are difficult to compare. General issues of specification comparability and automated data-driven meta-parameter selection are discussed. The proposed method further allows us to assess the balance between fit and model complexity. An extensive Monte Carlo study and an application to a well-known data set provide empirical illustration of the method.
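The L1-norm criteria above rest on the quantile check loss rho_tau(u) = u·(tau − 1{u < 0}), which the cross-validation aggregates over held-out residuals. A one-line sketch:

```python
def check_loss(u, tau):
    """Quantile-regression check loss rho_tau(u) = u * (tau - 1{u < 0});
    u is a residual and tau the target quantile level in (0, 1)."""
    return u * (tau - (u < 0))
```

At tau = 0.5 this halves the absolute error, so the criteria reduce to the familiar L1 goodness of fit for the median.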

19.
This paper presents a method of fitting factorial models to recidivism data consisting of the (possibly censored) time to ‘fail’ of individuals, in order to test for differences between groups. Here ‘failure’ means rearrest, reconviction or reincarceration, etc. A proportion P of the sample is assumed to be ‘susceptible’ to failure, i.e. to fail eventually, while the remaining 1-P are ‘immune’, and never fail. Thus failure may be described in two ways: by the probability P that an individual ever fails again (‘probability of recidivism’), and by the rate of failure Λ for the susceptibles. Related analyses have been proposed previously: this paper argues that a factorial approach, as opposed to regression approaches advocated previously, offers simplified analysis and interpretation of these kinds of data. The methods proposed, which are also applicable in medical statistics and reliability analyses, are demonstrated on data sets in which the factors are Parole Type (released to freedom or on parole), Age group (≤ 20 years, 20–40 years, > 40 years), and Marital Status. The outcome (failure) is a return to prison following first or second release.
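Under an exponential failure-time assumption for the susceptibles (made here purely for illustration; the abstract does not fix the susceptible distribution), the improper survival function of this split-population model is S(t) = (1 − P) + P·exp(−Λt), which levels off at the immune fraction 1 − P:

```python
import numpy as np

def cure_model_survival(t, P, lam):
    """Split-population (cure) model survival function: a fraction 1-P
    never fails, while susceptibles fail at exponential rate lam."""
    return (1.0 - P) + P * np.exp(-lam * t)
```

In the factorial analysis, P and Λ are allowed to differ across the levels of factors such as Parole Type or Age group, and group differences are tested on those two scales.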

20.
ABSTRACT

In this article, we propose a more general criterion, called the Sp-criterion, for subset selection in the multiple linear regression model. Many subset selection methods are based on the least squares (LS) estimator of β, but whenever the data contain an influential observation or the distribution of the error variable deviates from normality, the LS estimator performs ‘poorly’ and hence a method based on this estimator (for example, Mallows’ Cp-criterion) tends to select a ‘wrong’ subset. The proposed method overcomes this drawback, and its main feature is that it can be used with any type of estimator (either the LS estimator or any robust estimator) of β without any modification of the proposed criterion. Moreover, this technique is operationally simple to implement compared to other existing criteria. The method is illustrated with examples.
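For reference, the LS-based baseline that Sp generalizes is Mallows' Cp = RSS_p/σ̂² − n + 2p, where σ̂² comes from the full model; the Sp formula itself is paper-specific and not reproduced here:

```python
def mallows_cp(rss_p, sigma2_full, n, p):
    """Mallows' Cp for a candidate subset with p parameters, residual
    sum of squares rss_p, and full-model error variance sigma2_full.
    A well-specified subset should have Cp close to p."""
    return rss_p / sigma2_full - n + 2 * p
```

Because every ingredient here is an LS quantity, one influential observation can distort both rss_p and sigma2_full, which is exactly the weakness the abstract attributes to Cp.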
