Similar Articles
1.
A penalized likelihood method has been developed previously for hazard function estimation using standard left-truncated, right-censored lifetime data with covariates, and the functional ANOVA structures built into the log hazard allow for versatile nonparametric modeling in this setting. The computation of the method can be time-consuming in the presence of continuous covariates, however, owing to the repeated numerical integrations involved. Adapting a device developed by Jeon and Lin [An effective method for high dimensional log-density ANOVA estimation, with application to nonparametric graphical model building. Statist. Sinica 16, 353–374] for penalized likelihood density estimation, we explore an alternative approach to hazard estimation in which the log likelihood is replaced by a computationally less demanding pseudo-likelihood. An assortment of issues concerning the practical implementation of the approach is addressed, including the selection of smoothing parameters, and extensive simulations are presented to assess the inferential efficiency of the “pseudo” method as compared to the “real” one. Also noted is an asymptotic theory concerning the convergence rates of the estimates, parallel to that for the original penalized likelihood estimation.

2.
We propose a general family of nonparametric mixed effects models. Smoothing splines are used to model the fixed effects and are estimated by maximizing the penalized likelihood function. The random effects are generic and are modelled parametrically by assuming that the covariance function depends on a parsimonious set of parameters. These parameters and the smoothing parameter are estimated simultaneously by the generalized maximum likelihood method. We derive a connection between a nonparametric mixed effects model and a linear mixed effects model. This connection suggests a way of fitting a nonparametric mixed effects model by using existing programs. The classical two-way mixed models and growth curve models are used as examples to demonstrate how to use smoothing spline analysis-of-variance decompositions to build nonparametric mixed effects models. As in the classical analysis of variance, components of these nonparametric mixed effects models can be interpreted as main effects and interactions. The penalized likelihood estimates of the fixed effects in a two-way mixed model are extensions of James–Stein shrinkage estimates to correlated observations. In an example, three nested nonparametric mixed effects models are fitted to a longitudinal data set.
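The connection between a quadratic penalty and a mixed model noted above can be seen in its simplest form: a ridge-type penalized least-squares solution coincides with the BLUP of the corresponding linear mixed model y = Zb + e with b ~ N(0, (sigma2/lam) I) and e ~ N(0, sigma2 I). The following is a minimal numerical sketch of that equivalence, our own illustration rather than code from the paper:

```python
import numpy as np

rng = np.random.default_rng(42)
n, q, lam, sigma2 = 50, 8, 2.5, 1.0
Z = rng.normal(size=(n, q))
y = rng.normal(size=n)

# Penalized least squares: minimize ||y - Z b||^2 + lam * ||b||^2
b_pen = np.linalg.solve(Z.T @ Z + lam * np.eye(q), Z.T @ y)

# Mixed-model BLUP: E[b | y] = G Z' (Z G Z' + R)^{-1} y
# with G = (sigma2 / lam) I (random-effect covariance) and R = sigma2 I
G = (sigma2 / lam) * np.eye(q)
R = sigma2 * np.eye(n)
b_blup = G @ Z.T @ np.linalg.solve(Z @ G @ Z.T + R, y)

assert np.allclose(b_pen, b_blup)  # the two solutions agree
```

The same identity is what lets a nonparametric mixed effects model be fitted with existing mixed-model software, with the smoothing parameter playing the role of a variance ratio.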

3.
In this article we propose a penalized likelihood approach for the semiparametric density model with parametric and nonparametric components. An efficient iterative procedure is proposed for estimation. An approximate generalized maximum likelihood criterion, derived from a Bayesian point of view, is used for selecting the smoothing parameter. The finite sample performance of the proposed estimation approach is evaluated through simulation. Two real data examples, suicide study data and Old Faithful geyser data, are analyzed to demonstrate the use of the proposed method.

4.
Cox’s proportional hazards model is the most common way to analyze survival data. The model can be extended to include a ridge penalty in the presence of collinearity, or in cases where a very large number of coefficients (e.g. with microarray data) have to be estimated. To maximize the penalized likelihood, optimal weights of the ridge penalty have to be obtained. However, there is no definite rule for choosing the penalty weight. One approach suggests choosing the weights by maximizing the leave-one-out cross-validated partial likelihood; however, this is time-consuming and computationally expensive, especially in large datasets. We suggest modelling survival data through a Poisson model. Using this approach, the log-likelihood of a Poisson model is maximized by standard iteratively weighted least squares. We illustrate this simple approach, which includes smoothing of the hazard function, and move on to include a ridge term in the likelihood. We then maximize the likelihood by considering tools from generalized linear mixed models. We show that the optimal value of the penalty is found simply by computing the hat matrix of the system of linear equations and dividing its trace by a product of the estimated coefficients.
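The Poisson route described above can be sketched in a few lines: a ridge-penalized Poisson log-likelihood maximized by iteratively weighted least squares, with the trace of the hat matrix of the penalized system giving the effective degrees of freedom. This is a simplified illustration under our own assumptions (a plain ridge penalty with fixed weight, no hazard smoothing); the function name and defaults are ours, not the paper's.

```python
import numpy as np

def poisson_ridge_irls(X, y, lam, n_iter=50):
    """Ridge-penalized Poisson regression fitted by IRLS.

    Minimizes -loglik(beta) + (lam / 2) * ||beta||^2. Returns the
    coefficient estimate and the trace of the hat matrix (effective
    degrees of freedom of the penalized fit).
    """
    n, p = X.shape
    beta = np.zeros(p)
    for _ in range(n_iter):
        eta = X @ beta
        mu = np.exp(eta)
        W = mu                        # Poisson IRLS weights
        z = eta + (y - mu) / mu       # working response
        A = X.T * W @ X + lam * np.eye(p)
        beta = np.linalg.solve(A, X.T @ (W * z))
    # hat matrix of the final penalized weighted least-squares system
    H = X @ np.linalg.solve(A, X.T * W)
    return beta, np.trace(H)
```

Increasing `lam` shrinks the coefficients toward zero and drives the effective degrees of freedom toward zero, which is the quantity the penalty-selection rule above operates on.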

5.
Generalized additive models represented using low rank penalized regression splines, estimated by penalized likelihood maximisation and with smoothness selected by generalized cross validation or similar criteria, provide a computationally efficient general framework for practical smooth modelling. Various authors have proposed approximate Bayesian interval estimates for such models, based on extensions of the work of Wahba, G. (1983) [Bayesian confidence intervals for the cross validated smoothing spline. J. R. Statist. Soc. B 45, 133–150] and Silverman, B.W. (1985) [Some aspects of the spline smoothing approach to nonparametric regression curve fitting. J. R. Statist. Soc. B 47, 1–52] on smoothing spline models of Gaussian data, but testing of such intervals has been rather limited and there is little supporting theory for the approximations used in the generalized case. This paper aims to improve this situation by providing simulation tests and obtaining asymptotic results supporting the approximations employed for the generalized case. The simulation results suggest that while across-the-model performance is good, component-wise coverage probabilities are not as reliable. Since this is likely to result from the neglect of smoothing parameter variability, a simple and efficient simulation method is proposed to account for smoothing parameter uncertainty: this is demonstrated to substantially improve the performance of component-wise intervals.

6.
We propose a penalized empirical likelihood method via the bridge estimator in Cox's proportional hazards model for parameter estimation and variable selection. Under reasonable conditions, we show that the penalized empirical likelihood in Cox's proportional hazards model has the oracle property. A penalized empirical likelihood ratio for the vector of regression coefficients is defined, and its limiting distribution is a chi-squared distribution. The advantage of penalized empirical likelihood as a nonparametric likelihood approach is illustrated in hypothesis testing and in constructing confidence sets. The method is illustrated by extensive simulation studies and a real example.

7.
This paper surveys the different uses of Kalman filtering in the estimation of statistical (econometric) models. The Kalman filter will be portrayed as (i) a natural generalization of exponential smoothing with a time-dependent smoothing factor; (ii) a recursive estimation technique for a variety of econometric models amenable to a state-space formulation, in particular econometric models with time-varying coefficients; (iii) an instrument for the recursive calculation of the likelihood of the (constant) state-space coefficients; (iv) a means of helping to implement the scoring and EM methods for iteratively maximizing this likelihood; and (v) an analytical tool in asymptotic estimation theory. The concluding section points to the importance of Kalman filtering for alternatives to maximum likelihood estimation of state space parameters.

8.
Shared frailty models allow for unobserved heterogeneity or for statistical dependence among observed survival data. The most commonly used estimation procedure in frailty models is the EM algorithm, but this approach yields a discrete estimator of the distribution and consequently does not allow direct estimation of the hazard function. We show how maximum penalized likelihood estimation can be applied to nonparametric estimation of a continuous hazard function in a shared gamma-frailty model with right-censored and left-truncated data. We examine the problem of obtaining variance estimators for the regression coefficients, the frailty parameter and the baseline hazard function. Some simulations for the proposed estimation procedure are presented. A prospective cohort (Paquid) with grouped survival data serves to illustrate the method, which was used to analyze the relationship between environmental factors and the risk of dementia.

9.
The negative binomial (NB) is frequently used to model overdispersed Poisson count data. To study the effect of a continuous covariate of interest in an NB model, a flexible procedure is used to model the covariate effect by fixed-knot cubic basis-splines or B-splines with a second-order difference penalty on the adjacent B-spline coefficients to avoid undersmoothing. A penalized likelihood is used to estimate parameters of the model. A penalized likelihood ratio test statistic is constructed for the null hypothesis of the linearity of the continuous covariate effect. When the number of knots is fixed, its limiting null distribution is the distribution of a linear combination of independent chi-squared random variables, each with one degree of freedom. The smoothing parameter value is determined by setting a specified value equal to the asymptotic expectation of the test statistic under the null hypothesis. The power performance of the proposed test is studied with simulation experiments.
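The penalty construction above, a second-order difference penalty on adjacent basis coefficients, can be sketched directly. For simplicity the sketch below uses a Gaussian working model and Gaussian bump basis functions as a stand-in for cubic B-splines and the NB likelihood; both substitutions are ours, not the paper's.

```python
import numpy as np

def difference_penalty(K, order=2):
    """S = D'D, where D takes order-th differences of adjacent coefficients."""
    D = np.diff(np.eye(K), n=order, axis=0)
    return D.T @ D

def penalized_fit(B, y, lam):
    """Minimize ||y - B c||^2 + lam * c' S c over the coefficients c."""
    S = difference_penalty(B.shape[1])
    return np.linalg.solve(B.T @ B + lam * S, B.T @ y)

# usage: smooth a noisy curve with 15 bump basis functions
x = np.linspace(0, 1, 200)
centers = np.linspace(0, 1, 15)
B = np.exp(-0.5 * ((x[:, None] - centers) / 0.08) ** 2)  # bump basis matrix
y = np.sin(2 * np.pi * x) + 0.1 * np.random.default_rng(1).normal(size=200)
c = penalized_fit(B, y, lam=1.0)
fit = B @ c
```

As the smoothing parameter grows, the penalty forces the second differences of the coefficients toward zero, which is exactly what makes a large-penalty fit approach the linear null hypothesis tested in the paper.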

10.
This article introduces principal component analysis for multidimensional sparse functional data, utilizing Gaussian basis functions. Our multidimensional model is estimated by maximizing a penalized log-likelihood function, whereas previous mixed-type models were estimated by maximum likelihood methods for one-dimensional data. The penalized estimation performs well for our multidimensional model, while maximum likelihood methods yield unstable parameter estimates, some of which are infinite. Numerical experiments are conducted to investigate the effectiveness of our method for some types of missing data. The proposed method is applied to handwriting data, which consist of XY coordinate values recorded during handwriting.

11.
Sun Yan (孙燕), 《统计研究》 (Statistical Research), 2013, 30(4): 92–98
In the much-debated study of the relationship between income inequality and health, this paper proposes a random-effects semiparametric logit model in order to reduce possible model-specification and omitted-variable bias; the nonparametric component can also be used for exploratory analysis of the data. Estimation methods for the nonparametric and parametric parts of the model are then developed. The difficulty is that, because of the random effects, the integral in the likelihood function has no closed form, and the presence of the nonparametric component makes estimation harder still. Building on penalized-spline nonparametric estimation and a fourth-order Laplace approximation, the paper constructs a penalized log-likelihood, which is maximized by a Newton-Raphson scheme. A criterion for selecting the key smoothing parameter of the penalized spline is also established. Applying the model to income-inequality and health data, the estimates support the weak income-inequality hypothesis, and the nonparametric estimate exhibits a U shape; comparison with the empirical results indicates that the proposed estimation method is fairly accurate.

12.
Recurrent event data arise in many biomedical and engineering studies when failure events can occur repeatedly over time for each study subject. In this article, we are interested in nonparametric estimation of the hazard function for gap time. A penalized likelihood model is proposed to estimate the hazard as a function of both gap time and covariate. A method for smoothing parameter selection is developed from subject-wise cross-validation. Confidence intervals for the hazard function are derived using the Bayes model of the penalized likelihood. An eigenvalue analysis establishes the asymptotic convergence rates of the relevant estimates. Empirical studies are performed to evaluate various aspects of the method. The proposed technique is demonstrated through an application to the well-known bladder tumor data.

13.
For the semiparametric longitudinal-data model E(y|x,t) = X^T β + f(t), a penalized quadratic inference function method is used to estimate the regression parameter β and the unknown smooth function f(t) simultaneously. The unknown smooth function is first approximated by a basis expansion in truncated power functions; in the spirit of penalized splines, a penalized quadratic inference function in the regression parameters and basis coefficients is then constructed, and minimizing it yields the penalized quadratic inference function estimates of both. Theoretical results show that the estimators are consistent and asymptotically normal, and numerical studies also give good simulation results.

14.
Functional regression models that relate functional covariates to a scalar response are becoming more common due to the availability of functional data and computational advances. We introduce a functional nonlinear model with a scalar response where the true parameter curve is monotone. Using the Newton-Raphson method within a backfitting procedure, we discuss a penalized least squares criterion for fitting the functional nonlinear model with the smoothing parameter selected using generalized cross validation. Connections between a nonlinear mixed effects model and our functional nonlinear model are discussed, thereby providing an additional model fitting procedure using restricted maximum likelihood for smoothing parameter selection. Simulated relative efficiency gains provided by a monotone parameter curve estimator relative to an unconstrained parameter curve estimator are presented. In addition, we provide an application of our model with data from ozonesonde measurements of stratospheric ozone in which the measurements are biased as a function of altitude.

15.
Varying-coefficient models are useful extensions of classical linear models. They arise from multivariate nonparametric regression, nonlinear time series modeling and forecasting, longitudinal data analysis, and others. This article proposes penalized spline estimation for varying-coefficient models. Assuming a fixed but potentially large number of knots, the penalized spline estimators are shown to be strongly consistent and asymptotically normal. A systematic optimization algorithm for the selection of multiple smoothing parameters is developed. One of the advantages of penalized spline estimation is that it can accommodate varying degrees of smoothness among the coefficient functions, because multiple smoothing parameters are used. Some simulation studies are presented to illustrate the proposed methods.

16.
High-dimensional sparse modeling with censored survival data is of great practical importance, as exemplified by applications in high-throughput genomic data analysis. In this paper, we propose a class of regularization methods, integrating both the penalized empirical likelihood and pseudoscore approaches, for variable selection and estimation in sparse and high-dimensional additive hazards regression models. When the number of covariates grows with the sample size, we establish asymptotic properties of the resulting estimator and the oracle property of the proposed method. It is shown that the proposed estimator is more efficient than that obtained from the non-concave penalized likelihood approach in the literature. Based on a penalized empirical likelihood ratio statistic, we further develop a nonparametric likelihood approach for testing the linear hypothesis of regression coefficients and consequently constructing confidence regions. Simulation studies are carried out to evaluate the performance of the proposed methodology, and two real data sets are analyzed.

17.
Thin plate regression splines
Summary. I discuss the production of low rank smoothers for d ≥ 1 dimensional data, which can be fitted by regression or penalized regression methods. The smoothers are constructed by a simple transformation and truncation of the basis that arises from the solution of the thin plate spline smoothing problem and are optimal in the sense that the truncation is designed to result in the minimum possible perturbation of the thin plate spline smoothing problem given the dimension of the basis used to construct the smoother. By making use of Lanczos iteration the basis change and truncation are computationally efficient. The smoothers allow the use of approximate thin plate spline models with large data sets, avoid the problems that are associated with 'knot placement' that usually complicate modelling with regression splines or penalized regression splines, provide a sensible way of modelling interaction terms in generalized additive models, provide low rank approximations to generalized smoothing spline models, appropriate for use with large data sets, provide a means for incorporating smooth functions of more than one variable into non-linear models and improve the computational efficiency of penalized likelihood models incorporating thin plate splines. Given that the approach produces spline-like models with a sparse basis, it also provides a natural way of incorporating unpenalized spline-like terms in linear and generalized linear models, and these can be treated just like any other model terms from the point of view of model selection, inference and diagnostics.

18.
In this paper, we propose a penalized likelihood method to simultaneously select covariates and mixing components and to obtain parameter estimates in localized mixture of experts models. We develop an expectation-maximization algorithm to solve the proposed penalized likelihood procedure, and introduce a data-driven procedure to select the tuning parameters. Extensive numerical studies are carried out to compare the finite sample performance of our proposed method and other existing methods. Finally, we apply the proposed methodology to analyze the Boston housing price data set and the baseball salaries data set.

19.
Penalized likelihood methods provide a range of practical modelling tools, including spline smoothing, generalized additive models and variants of ridge regression. Selecting the correct weights for penalties is a critical part of using these methods, and in the single-penalty case the analyst has several well-founded techniques to choose from. However, many modelling problems suggest a formulation employing multiple penalties, and here general methodology is lacking. A wide family of models with multiple penalties can be fitted to data by iterative solution of the generalized ridge regression problem: minimize ||W^(1/2)(Xp − y)||² ρ + Σ_{i=1}^m θ_i p'S_i p, where p is a parameter vector, X a design matrix, S_i a non-negative definite coefficient matrix defining the ith penalty with associated smoothing parameter θ_i, W a diagonal weight matrix, y a vector of data or pseudodata, and ρ an 'overall' smoothing parameter included for computational efficiency. This paper shows how smoothing parameter selection can be performed efficiently by applying generalized cross-validation to this problem, and how this allows non-linear, generalized linear and linear models to be fitted using multiple penalties, substantially increasing the scope of penalized modelling methods. Examples of non-linear modelling, generalized additive modelling and anisotropic smoothing are given.
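The generalized ridge regression problem above can be illustrated with a brute-force version of the smoothing-parameter search: solve (X'X + Σ_i θ_i S_i) p = X'y for each candidate (θ_1, ..., θ_m) and pick the combination minimizing the GCV score n·RSS/(n − tr A)², where A is the influence (hat) matrix. The paper's contribution is an efficient way to do this minimization; the sketch below (taking W = I and ρ = 1, our simplification) is only a reference implementation of the criterion.

```python
import numpy as np

def fit(X, y, thetas, Ss):
    """Solve the generalized ridge problem for fixed penalty weights."""
    A = X.T @ X + sum(t * S for t, S in zip(thetas, Ss))
    p = np.linalg.solve(A, X.T @ y)
    edf = np.trace(X @ np.linalg.solve(A, X.T))  # tr of influence matrix
    return p, edf

def gcv(X, y, thetas, Ss):
    """Generalized cross-validation score for the penalized fit."""
    n = len(y)
    p, edf = fit(X, y, thetas, Ss)
    rss = np.sum((y - X @ p) ** 2)
    return n * rss / (n - edf) ** 2

# usage: two penalties acting on two blocks of coefficients
rng = np.random.default_rng(3)
X = rng.normal(size=(60, 10))
y = X[:, 0] + rng.normal(size=60)
S1 = np.zeros((10, 10)); S1[:5, :5] = np.eye(5)
S2 = np.zeros((10, 10)); S2[5:, 5:] = np.eye(5)
grid = [10.0 ** k for k in range(-3, 4)]
best = min(((t1, t2) for t1 in grid for t2 in grid),
           key=lambda th: gcv(X, y, th, [S1, S2]))
```

Grid search scales exponentially in the number of penalties, which is exactly why an efficient multi-penalty GCV method matters in practice.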

20.
Penalized Maximum Likelihood Estimator for Normal Mixtures
The estimation of the parameters of a mixture of Gaussian densities is considered, within the framework of maximum likelihood. Due to the unboundedness of the likelihood function, the maximum likelihood estimator fails to exist. We adopt a solution to likelihood function degeneracy which consists in penalizing the likelihood function. The resulting penalized likelihood function is then bounded over the parameter space, and the existence of the penalized maximum likelihood estimator is guaranteed. As an original contribution, we provide asymptotic properties, and in particular a consistency proof, for the penalized maximum likelihood estimator. Numerical examples are provided in the finite data case, showing the performance of the penalized estimator compared to the standard one.
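One standard way to penalize the likelihood so that it becomes bounded is to add a term of the form −a(s²/σ_k² + log σ_k²) for each component variance, where s² is the sample variance. The sketch below uses that penalty (a Chen-Tan-Zhang-style choice; the paper's exact penalty may differ) inside a two-component EM algorithm: the penalized M-step update σ_k² = (Σ_i r_ik (x_i − μ_k)² + 2a s²) / (n_k + 2a) keeps every variance bounded away from zero, so degenerate solutions are ruled out.

```python
import numpy as np

def penalized_em_2comp(x, a=1.0, n_iter=200):
    """Penalized EM for a two-component normal mixture (illustrative sketch)."""
    n = len(x)
    s2 = x.var()
    pi = np.array([0.5, 0.5])
    mu = np.array([x.min(), x.max()])   # crude but separating initialization
    var = np.array([s2, s2])
    for _ in range(n_iter):
        # E-step: responsibilities of each component for each point
        dens = pi / np.sqrt(2 * np.pi * var) * \
            np.exp(-0.5 * (x[:, None] - mu) ** 2 / var)
        r = dens / dens.sum(axis=1, keepdims=True)
        nk = r.sum(axis=0)
        # M-step: weights and means as usual, variances with the penalty,
        # which adds 2*a*s2 to the weighted sum of squares and 2*a to nk
        pi = nk / n
        mu = (r * x[:, None]).sum(axis=0) / nk
        ss = (r * (x[:, None] - mu) ** 2).sum(axis=0)
        var = (ss + 2 * a * s2) / (nk + 2 * a)
    return pi, mu, var
```

Note that the variance update is bounded below by 2a·s²/(n + 2a) > 0 regardless of the data, which is the mechanism that makes the penalized likelihood bounded.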
