Similar Documents
20 similar documents were retrieved.
1.
There are relatively few discussions of measurement error in the accelerated failure time (AFT) model, particularly the semiparametric AFT model. In this article, we propose an adjusted estimation procedure for the semiparametric AFT model with covariates subject to measurement error, based on the profile likelihood approach and the simulation-extrapolation (SIMEX) method. Simulation studies show that the proposed semiparametric SIMEX approach performs well. The approach is applied to a coronary heart disease dataset from the Busselton Health Study for illustration.
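For orientation only (not the authors' exact formulation), the SIMEX idea for an AFT model with an error-prone covariate can be sketched as follows, assuming additive normal measurement error with known variance $\sigma_u^2$:

$$ \log T = \beta_0 + \beta_x X + \beta_z^\top Z + \varepsilon, \qquad W = X + U, \quad U \sim N(0, \sigma_u^2). $$

For each $\lambda \ge 0$, generate $W_b(\lambda) = W + \sqrt{\lambda}\, U_b^*$ with $U_b^* \sim N(0, \sigma_u^2)$, refit the semiparametric AFT model to obtain $\hat\beta_b(\lambda)$, average over $b = 1, \dots, B$, and extrapolate the fitted curve $\lambda \mapsto \hat\beta(\lambda)$ back to $\lambda = -1$ to obtain the SIMEX estimator.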

2.
The accelerated failure time (AFT) model is an important regression tool to study the association between failure time and covariates. In this paper, we propose a robust weighted generalized M (GM) estimation for the AFT model with right-censored data by appropriately using the Kaplan–Meier weights in the GM-type objective function to estimate the regression coefficients and scale parameter simultaneously. This estimation method is computationally simple and can be implemented with existing software. Asymptotic properties including the root-n consistency and asymptotic normality are established for the resulting estimator under suitable conditions. We further show that the method can be readily extended to handle a class of nonlinear AFT models. Simulation results demonstrate satisfactory finite sample performance of the proposed estimator. The practical utility of the method is illustrated by a real data example.
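As a hedged sketch of the type of objective described (the exact weights and $\rho$-function are the authors' choices and are not reproduced here), a Kaplan–Meier-weighted GM-type estimator solves something of the form

$$ \min_{\beta,\ \sigma}\ \sum_{i=1}^{n} \hat w_i\, \eta(x_i)\, \rho\!\left(\frac{\log T_i - x_i^\top \beta}{\sigma}\right), $$

where the $\hat w_i$ are Kaplan–Meier (Stute-type) weights attached to the uncensored observations, $\eta(\cdot)$ downweights high-leverage covariate values, and $\rho$ is a bounded-influence loss such as Huber's.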

3.
Smoothed Gehan rank estimation methods are widely used in accelerated failure time (AFT) models with or without clusters. However, most such methods are sensitive to outliers in the covariates. To address this problem, we propose robust approaches based on the smoothed Gehan rank estimation methods for the AFT model, allowing for clusters by employing two different weight functions. Simulation studies show that the proposed methods outperform existing smoothed rank estimation methods in terms of bias and standard deviation when there are outliers in the covariates. The proposed methods are also applied to a real dataset from the “Major cardiovascular interventions” study.
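For orientation, and up to normalising and sign conventions, the unweighted Gehan rank estimating function for the AFT model, with residuals $e_i(\beta) = \log T_i - x_i^\top\beta$ and censoring indicators $\delta_i$, is

$$ U_G(\beta) = \frac{1}{n^2} \sum_{i=1}^{n} \sum_{j=1}^{n} \delta_i\, (x_i - x_j)\, I\{ e_i(\beta) \le e_j(\beta) \}, $$

and smoothed versions replace the indicator by $\Phi\{(e_j(\beta) - e_i(\beta))/h\}$ for a small bandwidth $h$. The robust proposals in the article can be thought of as multiplying each pair by a covariate-based weight so that outlying design points have bounded influence; the exact weight functions are the paper's.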

4.
For right-censored data, the accelerated failure time (AFT) model is an alternative to the commonly used proportional hazards regression model. It is a linear model for the (log-transformed) outcome of interest, and is particularly useful for censored outcomes that are not time-to-event, such as laboratory measurements. We provide a general and easily computable definition of the R² measure of explained variation under the AFT model for right-censored data. We study its behavior under different censoring scenarios and different error distributions; in particular, we also study its robustness when the parametric error distribution is misspecified. Based on Monte Carlo results, we recommend the log-normal distribution as a robust error distribution to use in practice for the parametric AFT model when the R² measure is of interest. We apply our methodology to a data set on alcohol consumption during pregnancy from Ukraine.
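One natural way to define explained variation in a parametric AFT model, written here only to fix ideas (the paper's exact definition may differ), is the ratio of the regression variance to the total variance on the log-time scale:

$$ \log T = \beta_0 + x^\top\beta + \sigma\varepsilon \quad\Longrightarrow\quad R^2 = \frac{\operatorname{Var}(x^\top\beta)}{\operatorname{Var}(x^\top\beta) + \sigma^2 \operatorname{Var}(\varepsilon)}, $$

estimated by plugging in the fitted parametric AFT model, with censoring entering through the estimation of $\beta$ and $\sigma$.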

5.
The varying coefficient model (VCM) is an important generalization of the linear regression model, and many existing estimation procedures for the VCM are built on the L2 loss, which is popular for its mathematical elegance but is not robust to non-normal errors and outliers. In this paper, we address both robustness and efficiency of estimation and variable selection by using a convex combination of the L1 and L2 losses, instead of the quadratic loss alone, for the VCM. Using local linear modeling, the asymptotic normality of the estimator is derived, and a practical method is proposed for selecting the weight of the composite L1 and L2 losses. The variable selection procedure is then given by combining local kernel smoothing with the adaptive group LASSO. With appropriate selection of the tuning parameters by the Bayesian information criterion (BIC), the theoretical properties of the new procedure, including consistency in variable selection and the oracle property in estimation, are established. The finite sample performance of the new method is investigated through simulation studies and an analysis of body fat data. The numerical studies show that the new method is better than, or at least as good as, the least squares-based method in terms of both robustness and efficiency for variable selection.
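A minimal sketch of the combined loss (the weight notation is illustrative): with a mixing weight $\omega \in [0, 1]$, the local linear fit of the coefficient functions at a point $u$ minimises

$$ \sum_{i=1}^{n} \rho_\omega\!\big( y_i - x_i^\top \{ a + b\,(u_i - u) \} \big)\, K_h(u_i - u), \qquad \rho_\omega(r) = \omega\,|r| + (1 - \omega)\, r^2, $$

so that $\omega = 0$ recovers the least squares fit and $\omega = 1$ the least absolute deviation fit; the paper chooses $\omega$ data-adaptively and adds an adaptive group LASSO penalty for variable selection.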

6.
Stute (1993, Consistent estimation under random censorship when covariables are present. Journal of Multivariate Analysis 45, 89–103) proposed a method to estimate regression models with a censored response variable using least squares and showed the consistency and asymptotic normality of his estimator. This article proposes a new bootstrap-based methodology that improves the performance of the asymptotic interval estimation for small sample sizes. We therefore compare the behavior of Stute's asymptotic confidence interval with that of several confidence intervals based on resampling bootstrap techniques. To build these confidence intervals, we propose a new bootstrap resampling method adapted to the case of censored regression models. We use simulations to study how much the proposed bootstrap-based confidence intervals improve on the asymptotic proposal. Simulation results indicate that, for the new proposals, coverage percentages are closer to the nominal values and, in addition, the intervals are narrower.
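The estimator being bootstrapped is Stute's Kaplan–Meier-weighted least squares fit; a compact sketch, with $Y_{(1)} \le \cdots \le Y_{(n)}$ the ordered observed responses, $x_{[i]}$ the covariate attached to $Y_{(i)}$, and $W_{in}$ the jumps of the Kaplan–Meier estimator, is

$$ \hat\beta = \arg\min_{\beta} \sum_{i=1}^{n} W_{in}\, \big( Y_{(i)} - x_{[i]}^\top \beta \big)^2. $$

The bootstrap proposals then resample in a way adapted to this censored-regression structure and build percentile-type intervals around $\hat\beta$.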

7.
We propose a data-driven method to select significant variables in the additive model via spline estimation. The additive structure of the regression model is imposed to overcome the ‘curse of dimensionality’, while the spline estimators provide a good approximation to the additive components of the model. The additive components are ordered according to their empirical strengths, and the significant variables are chosen at the first crossing of a predetermined threshold by the CUmulative Ratios of Empirical Strengths Total of the components. Consistency of the proposed method is established when the number of variables is allowed to diverge with the sample size, while an extensive Monte Carlo study demonstrates the superior performance of the proposed method and its advantages over the BIC method of Huang and Yang [(2004), ‘Identification of Nonlinear Additive Autoregressive Models’, Journal of the Royal Statistical Society Series B, 66, 463–477] in terms of speed and accuracy.
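A schematic reading of the selection rule (names and the threshold are illustrative): estimate each additive component $m_k$ by splines, measure its empirical strength, order the strengths, and keep components until the cumulative ratio first crosses a threshold,

$$ \hat s_k = \frac{1}{n} \sum_{i=1}^{n} \hat m_k(X_{ik})^2, \qquad \hat d = \min\Big\{ d :\ \frac{\sum_{k \le d} \hat s_{(k)}}{\sum_{k} \hat s_{(k)}} \ge 1 - \alpha_n \Big\}, $$

where $\hat s_{(1)} \ge \hat s_{(2)} \ge \cdots$ are the ordered strengths and the variables belonging to the $\hat d$ strongest components are declared significant.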

8.
Failures of highly reliable units are rare, and it may not be possible to gather the failure time data needed for reliability estimation. One way of obtaining failures during the time allotted for experiments is to apply methods of accelerated life testing (ALT). In ALT, units are tested at higher than usual (design) stress conditions. The purpose is to obtain estimators of the main reliability characteristics of units functioning under the usual stress using data from accelerated experiments. To treat such data, accelerated life models are used. Here we consider special plans of experiments and the statistical analysis of ALT data by numerical methods and simulation using the changing shape and scale (CHSS) model proposed by Bagdonavičius and Nikulin (1999). The CHSS model is a natural extension of the standard accelerated failure time (AFT) model. We give parametric and semiparametric estimation procedures for the CHSS model and a goodness-of-fit test for the AFT model.
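For context, the standard AFT model under a constant stress $x$ can be written $S_x(t) = S_0\{t\, r(x)\}$. A common way to write a changing shape and scale model is to let the stress act on the shape as well:

$$ S_x(t) = S_0\!\big[ \{ t\, r(x) \}^{\nu(x)} \big], $$

with $r(\cdot)$ and $\nu(\cdot)$ positive functions of the stress (typically log-linear in unknown parameters); when $\nu(x) \equiv 1$ this reduces to the AFT model, which is why a goodness-of-fit test for the AFT model arises naturally within the CHSS framework. This is a sketch of the general structure, not necessarily the exact parameterisation used in the article.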

9.
In practice one is often interested in the situation where the lifetime data are censored. Censoring is a common phenomenon when analyzing lifetime data, frequently caused by time constraints. In this paper, the flexible Weibull distribution proposed in Bebbington et al. [A flexible Weibull extension, Reliab. Eng. Syst. Safety 92 (2007), pp. 719–726] is studied using maximum likelihood techniques based on three different algorithms: Newton–Raphson, Levenberg–Marquardt and trust-region reflective. The proposed parameter estimation method is introduced and shown to work from both a theoretical and a practical point of view. On one hand, we apply the maximum likelihood estimation method to complete simulated and real data. On the other hand, we study for the first time the model using simulated and real data for type I censored samples. The estimation results are validated by a statistical test.
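The flexible Weibull extension has a closed-form cumulative hazard, so the type I censored log-likelihood is easy to write down; with observed times $t_i$ (failure or censoring) and failure indicators $\delta_i$,

$$ H(t) = e^{\alpha t - \beta / t}, \qquad h(t) = \Big( \alpha + \frac{\beta}{t^2} \Big) e^{\alpha t - \beta / t}, \qquad \ell(\alpha, \beta) = \sum_{i=1}^{n} \big[ \delta_i \log h(t_i) - H(t_i) \big], $$

and Newton–Raphson, Levenberg–Marquardt or trust-region iterations can be used to maximise $\ell$. The likelihood shown is the generic right-censored form, not a transcription of the article's derivation.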

10.
The two-component mixture cure rate model is popular in cure rate data analysis, with the proportional hazards and accelerated failure time (AFT) models being the major competitors for modelling the latency component. [Wang, L., Du, P., and Liang, H. (2012), ‘Two-Component Mixture Cure Rate Model with Spline Estimated Nonparametric Components’, Biometrics, 68, 726–735] first proposed a nonparametric mixture cure rate model where the latency component assumes proportional hazards with nonparametric covariate effects in the relative risk. Here we consider a mixture cure rate model where the latency component assumes an AFT with nonparametric covariate effects in the acceleration factor. Besides having a more direct physical interpretation than proportional hazards, our model has an additional scalar parameter, which adds complication to the computational algorithm as well as the asymptotic theory. We develop a penalised EM algorithm for estimation, together with confidence intervals derived from the Louis formula. Asymptotic convergence rates of the parameter estimates are established. Simulations and an application to a melanoma study show the advantages of our new method.
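A sketch of the two-component structure being described (notation illustrative): with uncured probability $\pi(z)$ and the latency of the uncured following an AFT with a nonparametric covariate effect,

$$ S_{\mathrm{pop}}(t \mid x, z) = 1 - \pi(z) + \pi(z)\, S_u(t \mid x), \qquad \log T_u = g(x) + \sigma\, \varepsilon \ \ \text{for the uncured}, $$

where $\pi(z)$ is typically logistic in $z$, $g(\cdot)$ is the nonparametric effect entering the acceleration factor, and $\sigma$ is a scale parameter (plausibly the additional scalar parameter the abstract refers to).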

11.
Coefficient estimation in linear regression models with missing data is routinely carried out in the mean regression framework. However, mean regression theory breaks down if the error variance is infinite. In addition, correct specification of the likelihood function for existing imputation approaches is often challenging in practice, especially for skewed data. In this paper, we develop a novel composite quantile regression and a weighted quantile average estimation procedure for parameter estimation in linear regression models when some responses are missing at random. Instead of imputing a missing response by randomly drawing from its conditional distribution, we propose to impute both missing and observed responses by their estimated conditional quantiles given the observed data, and to use parametrically estimated propensity scores to weight the check functions that define the regression parameter. Both estimation procedures are resistant to heavy-tailed errors or outliers in the response and achieve good robustness and efficiency. Moreover, we propose adaptive penalization methods to simultaneously select significant variables and estimate unknown parameters. Asymptotic properties of the proposed estimators are carefully investigated. An efficient algorithm is developed for fast implementation of the proposed methodologies. We also discuss a model selection criterion, based on an ICQ-type statistic, to select the penalty parameters. The performance of the proposed methods is illustrated via simulated and real data sets.
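To fix notation, composite quantile regression combines several check-function losses with a common slope vector; in the missing-response setting described, the losses are additionally weighted, e.g. by the estimated propensity scores (a sketch only, details differ from the paper):

$$ \min_{b_1, \dots, b_K,\ \beta}\ \sum_{k=1}^{K} \sum_{i=1}^{n} \hat w_i\, \rho_{\tau_k}\!\big( y_i^{*} - b_k - x_i^\top \beta \big), \qquad \rho_\tau(u) = u\, \{ \tau - I(u < 0) \}, $$

with quantile levels $\tau_k = k/(K+1)$, level-specific intercepts $b_k$, a single slope $\beta$, responses $y_i^{*}$ imputed by their estimated conditional quantiles, and weights $\hat w_i$ built from the estimated propensity scores.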

12.
The analysis of survival endpoints subject to right-censoring is an important research area in statistics, particularly among econometricians and biostatisticians. The two most popular semiparametric models are the proportional hazards model and the accelerated failure time (AFT) model. Rank-based estimation in the AFT model is computationally challenging because it requires optimization of a non-smooth loss function. Previous work has shown that rank-based estimators may be written as solutions to linear programming (LP) problems. However, the size of the LP problem is O(n² + p) subject to n² linear constraints, where n denotes the sample size and p the dimension of the parameters. As n and/or p increases, the feasibility of such a solution in practice becomes questionable. Among data mining and statistical learning enthusiasts, there is interest in extending ordinary regression coefficient estimators for low dimensions into high-dimensional data mining tools through regularization. Applying this recipe to rank-based coefficient estimators leads to formidable optimization problems, which may be avoided through smooth approximations to non-smooth functions. We review smooth approximations and quasi-Newton methods for rank-based estimation in AFT models. The computational cost of our method is substantially smaller than that of the corresponding LP problem, and the method can be applied to small- and large-scale problems alike. The algorithm described here allows one to couple rank-based estimation for censored data with virtually any regularization, and is exemplified through four case studies.
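To illustrate the computational point (a smooth surrogate minimised by a quasi-Newton method instead of a large LP), here is a minimal, hypothetical Python sketch of a smoothed Gehan loss with an optional ridge penalty. It is not the authors' code; the normal-kernel smoothing and the penalty choice are assumptions made for the example.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def smoothed_gehan_loss(beta, logt, x, delta, h=0.1, lam=0.0):
    """Smooth surrogate of the Gehan loss for the AFT model.

    The exact loss sums delta_i * max(e_j - e_i, 0) over all pairs (i, j);
    here the kink is smoothed with a normal kernel of bandwidth h, and an
    optional ridge penalty lam * ||beta||^2 stands in for regularization.
    """
    e = logt - x @ beta                       # residuals on the log-time scale
    d = e[None, :] - e[:, None]               # d[i, j] = e_j - e_i
    # smooth approximation of max(d, 0): d * Phi(d/h) + h * phi(d/h)
    smooth_pos = d * norm.cdf(d / h) + h * norm.pdf(d / h)
    loss = np.sum(delta[:, None] * smooth_pos) / len(e) ** 2
    return loss + lam * np.sum(beta ** 2)

# toy usage with simulated right-censored data
rng = np.random.default_rng(0)
n, p = 200, 3
x = rng.normal(size=(n, p))
true_beta = np.array([1.0, -0.5, 0.0])
log_event = x @ true_beta + rng.normal(scale=0.5, size=n)
log_cens = rng.normal(loc=1.0, scale=1.0, size=n)
logt = np.minimum(log_event, log_cens)         # observed log time
delta = (log_event <= log_cens).astype(float)  # 1 = event observed, 0 = censored

fit = minimize(smoothed_gehan_loss, x0=np.zeros(p),
               args=(logt, x, delta), method="BFGS")
print(fit.x)   # should land roughly in the neighbourhood of true_beta
```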

13.
Length-biased sampling data are often encountered in studies of economics, industrial reliability, epidemiology, genetics and cancer screening. The complication with this type of data is that the observed lifetimes suffer from left truncation and right censoring, where the left truncation variable has a uniform distribution. For the Cox proportional hazards model, Huang & Qin (Journal of the American Statistical Association, 107, 2012, p. 107) proposed a composite partial likelihood method which not only has the simplicity of the popular partial likelihood estimator, but can also be easily implemented in standard statistical software. The accelerated failure time model has become a useful alternative to the Cox proportional hazards model. In this paper, using the composite partial likelihood technique, we study this model with length-biased sampling data. The proposed method has a very simple form and is robust when the assumption that the censoring time is independent of the covariate is violated. To ease the difficulty of calculations when solving the non-smooth estimating equation, we use a kernel smoothed estimation method (Heller; Journal of the American Statistical Association, 102, 2007, p. 552). Large sample results and a resampling method for variance estimation are discussed. Simulation studies are conducted to compare the performance of the proposed method with other existing methods. A real data set is used for illustration.

14.
Recurrent event data are commonly encountered in longitudinal studies when events occur repeatedly over time for each study subject. An accelerated failure time (AFT) model for the sojourn time between recurrent events is considered in this article. This model assumes that the covariate effect and the subject-specific frailty are additive on the logarithm of the sojourn time, and that the covariate effect remains the same over distinct episodes, while the distributions of the frailty and the random error in the model are unspecified. Exploiting the ordinal nature of recurrent events, two scale transformations of the sojourn times are derived to construct semiparametric log-rank-type methods for estimating the marginal covariate effects in the model. The proposed estimation approaches and inference procedures can also be extended to bivariate events, which alternate over time. Examples and comparisons are presented to illustrate the performance of the proposed methods.
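The model structure described can be written compactly; for subject $i$ and episode $j$ (illustrative notation),

$$ \log T_{ij} = x_{ij}^\top \beta + \nu_i + \varepsilon_{ij}, $$

where $T_{ij}$ is the $j$-th sojourn time, $\nu_i$ is the subject-specific frailty, the distributions of $\nu_i$ and $\varepsilon_{ij}$ are left unspecified, and the same $\beta$ applies across episodes; the proposed log-rank-type procedures estimate this marginal $\beta$ after suitable rescalings of the sojourn times.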

15.
The semiparametric accelerated failure time (AFT) model is not as widely used as the Cox relative risk model due to computational difficulties. Recent developments in least squares estimation and induced smoothing estimating equations for censored data provide promising tools to make AFT models more attractive in practice. For multivariate AFT models, we propose a generalized estimating equations (GEE) approach, extending the GEE to censored data. The consistency of the regression coefficient estimator is robust to misspecification of the working covariance, and the efficiency is higher when the working covariance structure is closer to the truth. The marginal error distributions and regression coefficients are allowed to be unique for each margin or partially shared across margins, as needed. The initial estimator is a rank-based estimator with Gehan's weight, but is obtained from an induced smoothing approach for computational ease. The resulting estimator is consistent and asymptotically normal, with variance estimated through a multiplier resampling method. In a large-scale simulation study, our estimator was up to three times as efficient as the estimator that ignores the within-cluster dependence, especially when the within-cluster dependence was strong. The methods were applied to bivariate failure time data from a diabetic retinopathy study.

16.
We introduce a general class of semiparametric hazard regression models, called extended hazard (EH) models, designed to accommodate various survival schemes with time-dependent covariates. The EH model contains both the Cox model and the accelerated failure time (AFT) model as subclasses, so this nested structure can be used to perform model selection between the Cox model and the AFT model. A class of estimating equations using counting process and martingale techniques is developed to estimate the regression parameters of the proposed model. The performance of the estimating procedure and the impact of model misspecification are assessed through simulation studies. Two data examples, the Stanford heart transplant data and Mediterranean fruit fly egg-laying data, are used to demonstrate the usefulness of the EH model.
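The nesting the abstract exploits is easiest to see from the hazard. One standard way to write an extended hazard model (shown here for time-fixed covariates; the paper's notation may differ) is

$$ \lambda(t \mid x) = \lambda_0\!\big( t\, e^{\beta^\top x} \big)\, e^{\gamma^\top x}, $$

which reduces to the Cox model when $\beta = 0$ and to the AFT model when $\gamma = \beta$, so model selection between the two amounts to testing these parameter restrictions.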

17.
In this article, we propose a new penalized-likelihood method to conduct model selection for finite mixture of regression models. The penalties are imposed on mixing proportions and regression coefficients, and hence order selection of the mixture and the variable selection in each component can be simultaneously conducted. The consistency of order selection and the consistency of variable selection are investigated. A modified EM algorithm is proposed to maximize the penalized log-likelihood function. Numerical simulations are conducted to demonstrate the finite sample performance of the estimation procedure. The proposed methodology is further illustrated via real data analysis.
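To make the double penalisation concrete (a sketch; the specific penalty functions are the authors' choices), for a $K$-component normal mixture of regressions the criterion has the shape

$$ \ell_p(\theta) = \sum_{i=1}^{n} \log\Big\{ \sum_{k=1}^{K} \pi_k\, \phi\big( y_i;\ x_i^\top \beta_k,\ \sigma_k^2 \big) \Big\} - \sum_{k=1}^{K} p_{\lambda_1}(\pi_k) - \sum_{k=1}^{K} \sum_{j} p_{\lambda_2}(|\beta_{kj}|), $$

where penalising the mixing proportions $\pi_k$ shrinks redundant components away (order selection) and penalising the coefficients $\beta_{kj}$ performs variable selection within each retained component.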

18.
We propose an alternative estimation method for the semiparametric accelerated failure time mixture cure model by incorporating the profile likelihood into the M-step of the EM algorithm. In simulation studies, the proposed method performs as well as existing methods when censoring is light and better than existing methods when censoring is moderate. In terms of computational time, the proposed method runs faster than the existing methods.

19.
In this article, we study variable selection and estimation for linear regression models with missing covariates. The proposed estimation method is almost as efficient as the popular least squares-based estimation method for normal random errors, and is empirically shown to be much more efficient and robust with respect to heavy-tailed errors or outliers in the responses and covariates. To achieve sparsity, a variable selection procedure based on SCAD is proposed to conduct estimation and variable selection simultaneously. The procedure is shown to possess the oracle property. To deal with missing covariates, we consider inverse probability weighted estimators for the linear model when the selection probability is known or unknown. It is shown that the estimator using the estimated selection probability has a smaller asymptotic variance than the one using the true selection probability, and is thus more efficient. Therefore, the important Horvitz–Thompson property is verified for the penalized rank estimator with missing covariates in the linear model. Some numerical examples are provided to demonstrate the performance of the estimators.
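Reading the abstract, the estimator appears to combine a robust loss, inverse probability weighting for the incomplete covariates, and a SCAD penalty; purely schematically, with $\delta_i$ indicating a complete case and $\hat\pi_i$ the estimated selection probability,

$$ \hat\beta = \arg\min_{\beta}\ \sum_{i=1}^{n} \frac{\delta_i}{\hat\pi_i}\, L\big( y_i - x_i^\top \beta \big) + \sum_{j=1}^{p} p^{\mathrm{SCAD}}_{\lambda}(|\beta_j|), $$

where $L$ stands in for the paper's robust (rank-based) criterion; the per-observation form shown is an illustrative simplification, and the Horvitz–Thompson point is that using $\hat\pi_i$ rather than the true $\pi_i$ gives a smaller asymptotic variance.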

20.
A Bayesian approach is proposed for coefficient estimation in the Tobit quantile regression model. The proposed approach is based on placing a g-prior distribution, which depends on the quantile level, on the regression coefficients. The prior is generalized by introducing a ridge parameter to address important challenges that may arise with censored data, such as multicollinearity and overfitting. A stochastic search variable selection approach is then proposed for the Tobit quantile regression model based on the g-prior. An expression for the hyperparameter g is proposed to calibrate the modified g-prior, with its ridge parameter, to the corresponding g-prior. Some possible extensions of the proposed approach are discussed, including continuous and binary responses in quantile regression. The methods are illustrated using several simulation studies and a microarray study. The simulation studies and the microarray study indicate that the proposed approach performs well.
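A compact sketch of the ingredients (the article's exact expression for the hyperparameter $g$ is not reproduced): Tobit quantile regression models the censored response through a latent variable, and the modified g-prior augments $X^\top X$ with a ridge term,

$$ y_i = \max(y_i^{*}, 0), \qquad Q_{y_i^{*}}(\tau \mid x_i) = x_i^\top \beta_\tau, \qquad \beta_\tau \mid g, \lambda \sim N\!\big( 0,\ g\, (X^\top X + \lambda I_p)^{-1} \big), $$

where the dependence on the quantile level $\tau$ enters through $\beta_\tau$ (fitted with an asymmetric Laplace working likelihood at level $\tau$) and the ridge parameter $\lambda$ guards against multicollinearity and overfitting under censoring.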
