Similar literature
1.
ABSTRACT

We present here an extension of Pan's multiple imputation approach to Cox regression in the setting of interval-censored competing risks data. The idea is to convert interval-censored data into multiple sets of complete or right-censored data and to use partial likelihood methods to analyse them. The process is iterated, and at each step, the coefficient of interest, its variance–covariance matrix, and the baseline cumulative incidence function are updated from multiple posterior estimates derived from the Fine and Gray sub-distribution hazards regression given augmented data. Through simulation of patients at risks of failure from two causes, and following a prescheduled programme allowing for informative interval-censoring mechanisms, we show that the proposed method results in more accurate coefficient estimates as compared to the simple imputation approach. We have implemented the method in the MIICD R package, available on the CRAN website.  相似文献   
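The combination step behind any such multiple imputation scheme follows Rubin's rules: pool the per-imputation point estimates and inflate the average within-imputation variance by the between-imputation spread. A minimal numpy sketch of that pooling step (the function name `pool_rubin` and the toy numbers are our own illustration, not taken from the MIICD package):

```python
import numpy as np

def pool_rubin(estimates, variances):
    """Combine M point estimates and their variances from multiply
    imputed data sets via Rubin's rules; returns (pooled estimate,
    total variance)."""
    estimates = np.asarray(estimates, dtype=float)
    variances = np.asarray(variances, dtype=float)
    m = len(estimates)
    qbar = estimates.mean()            # pooled point estimate
    ubar = variances.mean()            # within-imputation variance
    b = estimates.var(ddof=1)          # between-imputation variance
    total = ubar + (1.0 + 1.0 / m) * b # total variance
    return qbar, total

# Hypothetical coefficients from M = 4 imputed data sets
est = [0.52, 0.48, 0.55, 0.50]
var = [0.010, 0.012, 0.011, 0.009]
qbar, tvar = pool_rubin(est, var)
```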

2.
The study of a linear regression model with an interval-censored covariate, which was motivated by an acquired immunodeficiency syndrome (AIDS) clinical trial, was first proposed by Gómez et al. They developed a likelihood approach, together with a two-step conditional algorithm, to estimate the regression coefficients in the model. However, their method is inapplicable when the interval-censored covariate is continuous. In this article, we propose a novel and fast method to treat the continuous interval-censored covariate. By using logspline density estimation, we impute the interval-censored covariate with a conditional expectation. Then, the ordinary least-squares method is applied to the linear regression model with the imputed covariate. To assess the performance of the proposed method, we compare our imputation with the midpoint imputation and the semiparametric hierarchical method via simulations. Furthermore, an application to the AIDS clinical trial is presented.
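The midpoint-imputation baseline that this abstract compares against is straightforward to sketch: replace each interval-censored covariate by its interval midpoint and run ordinary least squares. The data below are hypothetical, and the logspline conditional-expectation step of the proposed method is deliberately not shown:

```python
import numpy as np

def midpoint_impute_ols(left, right, y):
    """Baseline comparator: replace each interval-censored covariate
    [left_i, right_i] by its midpoint, then fit y = a + b*x by OLS."""
    x = (np.asarray(left, float) + np.asarray(right, float)) / 2.0
    X = np.column_stack([np.ones_like(x), x])
    # least-squares solve for (intercept, slope)
    coef, *_ = np.linalg.lstsq(X, np.asarray(y, float), rcond=None)
    return coef

# Covariate observed only up to an interval; response fully observed
left  = [0.0, 1.0, 2.0, 3.0]
right = [2.0, 3.0, 4.0, 5.0]
y     = [1.1, 2.0, 3.1, 3.9]
intercept, slope = midpoint_impute_ols(left, right, y)
```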

3.
We consider the semiparametric proportional hazards model for the cause-specific hazard function in analysis of competing risks data with missing cause of failure. The inverse probability weighted equation and augmented inverse probability weighted equation are proposed for estimating the regression parameters in the model, and their theoretical properties are established for inference. Simulation studies demonstrate that the augmented inverse probability weighted estimator is doubly robust and the proposed method is appropriate for practical use. The simulations also compare the proposed estimators with the multiple imputation estimator of Lu and Tsiatis (2001). The application of the proposed method is illustrated using data from a bone marrow transplant study.

4.
This article develops three empirical likelihood (EL) approaches to estimate parameters in nonlinear regression models in the presence of nonignorable missing responses. These are based on the inverse probability weighted (IPW) method, the augmented IPW (AIPW) method and the imputation technique. A logistic regression model is adopted to specify the propensity score. Maximum likelihood estimation is used to estimate parameters in the propensity score by combining the idea of importance sampling and imputing estimating equations. Under some regularity conditions, we obtain the asymptotic properties of the maximum EL estimators of these unknown parameters. Simulation studies are conducted to investigate the finite sample performance of our proposed estimation procedures. Empirical results provide evidence that the AIPW procedure exhibits better performance than the other two procedures. Data from a survey conducted in 2002 are used to illustrate the proposed estimation procedure. The Canadian Journal of Statistics 48: 386–416; 2020 © 2020 Statistical Society of Canada

5.
Doubly robust (DR) estimators of the mean with missing data are compared. An estimator is DR if either the regression of the missing variable on the observed variables or the missing data mechanism is correctly specified. One method is to include the inverse of the propensity score as a linear term in the imputation model [D. Firth and K.E. Bennett, Robust models in probability sampling, J. R. Statist. Soc. Ser. B. 60 (1998), pp. 3–21; D.O. Scharfstein, A. Rotnitzky, and J.M. Robins, Adjusting for nonignorable drop-out using semiparametric nonresponse models (with discussion), J. Am. Statist. Assoc. 94 (1999), pp. 1096–1146; H. Bang and J.M. Robins, Doubly robust estimation in missing data and causal inference models, Biometrics 61 (2005), pp. 962–972]. Another method is to calibrate the predictions from a parametric model by adding a mean of the weighted residuals [J.M. Robins, A. Rotnitzky, and L.P. Zhao, Estimation of regression coefficients when some regressors are not always observed, J. Am. Statist. Assoc. 89 (1994), pp. 846–866; D.O. Scharfstein, A. Rotnitzky, and J.M. Robins, Adjusting for nonignorable drop-out using semiparametric nonresponse models (with discussion), J. Am. Statist. Assoc. 94 (1999), pp. 1096–1146]. The penalized spline propensity prediction (PSPP) model includes the propensity score into the model non-parametrically [R.J.A. Little and H. An, Robust likelihood-based analysis of multivariate data with missing values, Statist. Sin. 14 (2004), pp. 949–968; G. Zhang and R.J. Little, Extensions of the penalized spline propensity prediction method of imputation, Biometrics, 65(3) (2008), pp. 911–918]. All these methods have consistency properties under misspecification of regression models, but their comparative efficiency and confidence coverage in finite samples have received little attention.
In this paper, we compare the root mean square error (RMSE), width of confidence interval and non-coverage rate of these methods under various mean and response propensity functions. We study the effects of sample size and robustness to model misspecification. The PSPP method yields estimates with smaller RMSE and width of confidence interval compared with other methods under most situations. It also yields estimates with confidence coverage close to the 95% nominal level, provided the sample size is not too small.
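The AIPW construction that several of these abstracts compare against fits in a few lines: weight observed outcomes by the inverse propensity and augment with the outcome-regression prediction. A hedged numpy sketch (`aipw_mean` and the toy inputs are our own illustration, not any author's implementation); when either the propensity model or the outcome regression is correct, the estimator is consistent for E[Y]:

```python
import numpy as np

def aipw_mean(y, r, pi_hat, m_hat):
    """Augmented IPW (doubly robust) estimate of E[Y] with responses
    missing at random. y: outcomes (arbitrary where r == 0);
    r: 1 if y observed; pi_hat: estimated propensity P(R=1 | X);
    m_hat: outcome-regression prediction E[Y | X]."""
    y = np.asarray(y, float); r = np.asarray(r, float)
    pi = np.asarray(pi_hat, float); m = np.asarray(m_hat, float)
    # IPW term plus augmentation term (which has mean zero when
    # the propensity model is correct)
    return np.mean(r * y / pi - (r - pi) / pi * m)

# Sanity check: with fully observed data (r = 1, pi = 1) the estimator
# reduces to the sample mean, whatever m_hat is.
y = np.array([1.0, 2.0, 3.0, 4.0])
est = aipw_mean(y, np.ones(4), np.ones(4), np.zeros(4))
```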

6.
In this article, the authors consider a semiparametric additive hazards regression model for right‐censored data that allows some censoring indicators to be missing at random. They develop a class of estimating equations and use an inverse probability weighted approach to estimate the regression parameters. Nonparametric smoothing techniques are employed to estimate the probability of non‐missingness and the conditional probability of an uncensored observation. The asymptotic properties of the resulting estimators are derived. Simulation studies show that the proposed estimators perform well. They motivate and illustrate their methods with data from a brain cancer clinical trial. The Canadian Journal of Statistics 38: 333–351; 2010 © 2010 Statistical Society of Canada

7.
Interval-censored survival data arise often in medical applications and clinical trials [Wang L, Sun J, Tong X. Regression analysis of case II interval-censored failure time data with the additive hazards model. Statistica Sinica. 2010;20:1709–1723]. However, most existing interval-censored survival analysis techniques suffer from challenges such as heavy computational cost or non-proportionality of hazard rates due to complicated data structure [Wang L, Lin X. A Bayesian approach for analyzing case 2 interval-censored data under the semiparametric proportional odds model. Statistics & Probability Letters. 2011;81:876–883; Banerjee T, Chen M-H, Dey DK, et al. Bayesian analysis of generalized odds-rate hazards models for survival data. Lifetime Data Analysis. 2007;13:241–260]. To address these challenges, in this paper, we introduce a flexible Bayesian non-parametric procedure for the estimation of the odds under interval censoring, case II. We use Bernstein polynomials to introduce a prior for modeling the odds and propose a novel and easy-to-implement sampling manner based on the Markov chain Monte Carlo algorithms to study the posterior distributions. We also give general results on asymptotic properties of the posterior distributions. The simulated examples show that the proposed approach is quite satisfactory in the cases considered. The use of the proposed method is further illustrated by analyzing the hemophilia study data [McMahan CS, Wang L. A package for semiparametric regression analysis of interval-censored data; 2015. http://CRAN.R-project.org/package=ICsurv].

8.
The additive hazards model is one of the most commonly used regression models in the analysis of failure time data and many methods have been developed for its inference in various situations. However, no established estimation procedure exists when there are covariates with missing values and the observed responses are interval-censored; both types of complications arise in various settings including demographic, epidemiological, financial, medical and sociological studies. To address this deficiency, we propose several inverse probability weight-based and reweighting-based estimation procedures for the situation where covariate values are missing at random. The resulting estimators of regression model parameters are shown to be consistent and asymptotically normal. The numerical results that we report from a simulation study suggest that the proposed methods work well in practical situations. An application to a childhood cancer survival study is provided. The Canadian Journal of Statistics 48: 499–517; 2020 © 2020 Statistical Society of Canada

9.
Quantile regression methods have been used to estimate upper and lower quantile reference curves as a function of several covariates. In this article, it is demonstrated that the estimating equation of Zhou [A weighted quantile regression for randomly truncated data, Comput. Stat. Data Anal. 55 (2011), pp. 554–566] can be extended to analyse left-truncated and right-censored data. We evaluate the finite sample performance of the proposed estimators through simulation studies. The proposed estimator β̂(q) is applied to the Veteran's Administration lung cancer data reported by Prentice [Exponential survival with censoring and explanatory variables, Biometrika 60 (1973), pp. 279–288].

10.
The Buckley–James estimator (BJE) [J. Buckley and I. James, Linear regression with censored data, Biometrika 66 (1979), pp. 429–436] has been extended from right-censored (RC) data to interval-censored (IC) data by Rabinowitz et al. [D. Rabinowitz, A. Tsiatis, and J. Aragon, Regression with interval-censored data, Biometrika 82 (1995), pp. 501–513]. The BJE is defined to be a zero-crossing of a modified score function H(b), a point at which H(·) changes its sign. We discuss several approaches (for finding a BJE with IC data) which are extensions of the existing algorithms for RC data. However, these extensions may not be appropriate for some data, in particular, they are not appropriate for a cancer data set that we are analysing. In this note, we present a feasible iterative algorithm for obtaining a BJE. We apply the method to our data.
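The zero-crossing definition can be made concrete with a generic grid scan that brackets sign changes of a score function. The stand-in function `h` below is purely illustrative; the actual H(b) for IC data requires the machinery of the paper, and because such score functions are typically step functions, a bracketing cell is reported rather than a root-solver result:

```python
import numpy as np

def zero_crossings(h, grid):
    """Locate sign changes of h over a grid: returns midpoints of the
    grid cells [b_i, b_{i+1}] with h(b_i) * h(b_{i+1}) <= 0. h need not
    be continuous, so we bracket rather than solve."""
    vals = np.array([h(b) for b in grid])
    cells = np.where(vals[:-1] * vals[1:] <= 0)[0]
    return [(grid[i] + grid[i + 1]) / 2.0 for i in cells]

# Stand-in score function with a single sign change at b = 1.5
h = lambda b: 1.5 - b
roots = zero_crossings(h, np.linspace(0.0, 3.0, 300))
```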

11.
The prevalence of interval-censored data is increasing in medical studies due to the growing use of biomarkers to define a disease progression endpoint. Interval censoring results from periodic monitoring of the progression status. For example, disease progression is established in the interval between the clinic visit where progression is recorded and the prior clinic visit where there was no evidence of disease progression. A methodology is proposed for estimation and inference on the regression coefficients in the Cox proportional hazards model with interval-censored data. The methodology is based on estimating equations and uses an inverse probability weight to select event time pairs where the ordering is unambiguous. Simulations are performed to examine the finite sample properties of the estimate and a colon cancer data set is used to demonstrate its performance relative to the conventional partial likelihood estimate that ignores the interval censoring.

12.
We propose a multiple imputation method to deal with incomplete categorical data. This method imputes the missing entries using the principal component method dedicated to categorical data: multiple correspondence analysis (MCA). The uncertainty concerning the parameters of the imputation model is reflected using a non-parametric bootstrap. Multiple imputation using MCA (MIMCA) requires estimating a small number of parameters due to the dimensionality reduction property of MCA. It allows the user to impute a large range of data sets. In particular, a high number of categories per variable, a high number of variables or a small number of individuals are not an issue for MIMCA. Through a simulation study based on real data sets, the method is assessed and compared to the reference methods (multiple imputation using the loglinear model, multiple imputation by logistic regressions) as well as to the latest work on the topic (multiple imputation by random forests or by the Dirichlet process mixture of products of multinomial distributions model). The proposed method provides a good point estimate of the parameters of the analysis model considered, such as the coefficients of a main effects logistic regression model, and a reliable estimate of the variability of the estimators. In addition, MIMCA has the great advantage that it is substantially less time consuming on data sets of high dimensions than the other multiple imputation methods.

13.
In this paper, we study linear regression analysis when some of the censoring indicators are missing at random. We define regression calibration, imputation and inverse probability weighted estimates for the regression coefficient vector based on the weighted least squares approach due to Stute (1993), and prove that all the estimators are asymptotically normal. A simulation study was conducted to evaluate the finite-sample properties of the proposed estimators, and a real data example is provided to illustrate our methods.

14.
In this study, classical and Bayesian inference methods are introduced to analyze lifetime data sets in the presence of left censoring considering two generalizations of the Lindley distribution: a first generalization proposed by Ghitany et al. [Power Lindley distribution and associated inference, Comput. Statist. Data Anal. 64 (2013), pp. 20–33], denoted as a power Lindley distribution, and a second generalization proposed by Sharma et al. [The inverse Lindley distribution: A stress–strength reliability model with application to head and neck cancer data, J. Ind. Prod. Eng. 32 (2015), pp. 162–173], denoted as an inverse Lindley distribution. In our approach, we have used a distribution obtained from these two generalizations, denoted as an inverse power Lindley distribution. A numerical illustration is presented considering a dataset of thyroglobulin levels present in a group of individuals with differentiated thyroid cancer.

15.
Process regression methodology is underdeveloped relative to the frequency with which pertinent data arise. In this article, the response is a binary indicator process representing the joint event of being alive and remaining in a specific state. The process is indexed by time (e.g., time since diagnosis) and observed continuously. Data of this sort occur frequently in the study of chronic disease. A general area of application involves a recurrent event with non-negligible duration (e.g., hospitalization and associated length of hospital stay) and subject to a terminating event (e.g., death). We propose a semiparametric multiplicative model for the process version of the probability of being alive and in the (transient) state of interest. Under the proposed methods, the regression parameter is estimated through a procedure that does not require estimating the baseline probability. Unlike the majority of process regression methods, the proposed methods accommodate multiple sources of censoring. In particular, we derive a computationally convenient variant of inverse probability of censoring weighting based on the additive hazards model. We show that the regression parameter estimator is asymptotically normal, and that the baseline probability function estimator converges to a Gaussian process. Simulations demonstrate that our estimators have good finite sample performance. We apply our method to national end-stage liver disease data. The Canadian Journal of Statistics 48: 222–237; 2020 © 2019 Statistical Society of Canada

16.
Scheike and Zhang [An additive-multiplicative Cox-Aalen regression model. Scand J Stat. 2002;29:75–88] proposed a flexible additive-multiplicative hazard model, called the Cox-Aalen model, by replacing the baseline hazard function in the well-known Cox model with a covariate-dependent Aalen model, which allows for both fixed and dynamic covariate effects. In this paper, based on left-truncated and mixed interval-censored (LT-MIC) data, we consider maximum likelihood estimation for the Cox-Aalen model with fixed covariates. We propose expectation-maximization (EM) algorithms for obtaining the conditional maximum likelihood estimators (cMLE) of the regression coefficients for the Cox-Aalen model. We establish the consistency of the cMLE. Numerical studies show that estimation via the EM algorithms performs well.

17.
A general nonparametric imputation procedure, based on kernel regression, is proposed to estimate points as well as set- and function-indexed parameters when the data are missing at random (MAR). The proposed method works by imputing a specific function of a missing value (and not the missing value itself), where the form of this specific function is dictated by the parameter of interest. Both single and multiple imputations are considered. The associated empirical processes provide the right tool to study the uniform convergence properties of the resulting estimators. Our estimators include, as special cases, the imputation estimator of the mean, the estimator of the distribution function proposed by Cheng and Chu [1996. Kernel estimation of distribution functions and quantiles with missing data. Statist. Sinica 6, 63–78], imputation estimators of a marginal density, and imputation estimators of regression functions.
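A minimal sketch of kernel-regression imputation under MAR, using a Gaussian kernel on hypothetical data. The abstract's method imputes a parameter-specific function of the missing value; shown here is only the simplest special case, imputing the value itself by the Nadaraya-Watson estimate of E[Y | X] fitted on the complete cases:

```python
import numpy as np

def kernel_impute(x, y, r, bandwidth):
    """Fill missing responses with a Nadaraya-Watson kernel-regression
    estimate built from the complete cases. x: covariate; y: response
    (ignored where r == 0); r: 1 if y observed."""
    x = np.asarray(x, float); y = np.asarray(y, float)
    obs = np.asarray(r) == 1
    def m_hat(x0):
        # Gaussian kernel weights centred at x0
        w = np.exp(-0.5 * ((x[obs] - x0) / bandwidth) ** 2)
        return np.sum(w * y[obs]) / np.sum(w)
    out = y.copy()
    out[~obs] = [m_hat(x0) for x0 in x[~obs]]
    return out

x = np.array([0.0, 1.0, 2.0, 3.0, 1.5])
y = np.array([0.0, 1.0, 2.0, 3.0, np.nan])  # last response missing
r = np.array([1, 1, 1, 1, 0])
filled = kernel_impute(x, y, r, bandwidth=0.5)
```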

18.
A cure rate model is a survival model incorporating the cure rate with the assumption that the population contains both uncured and cured individuals. It is a powerful statistical tool for prognostic studies, especially in cancer. The cure rate is important for making treatment decisions in clinical practice. The proportional hazards (PH) cure model can predict the cure rate for each patient. The model contains a logistic regression component for the cure rate and a Cox regression component to estimate the hazard for uncured patients. A measure for quantifying the predictive accuracy of the cure rate estimated by the Cox PH cure model is required, as there has been a lack of previous research in this area. We used the Cox PH cure model for the breast cancer data; however, the area under the receiver operating characteristic curve (AUC) could not be estimated because many patients were censored. In this study, we used imputation‐based AUCs to assess the predictive accuracy of the cure rate from the PH cure model. We examined the precision of these AUCs using simulation studies. The results demonstrated that the imputation‐based AUCs were estimable and their biases were negligibly small in many cases, although the ordinary AUC could not be estimated. Additionally, we introduced the bias‐correction method of imputation‐based AUCs and found that the bias‐corrected estimate successfully compensated for the overestimation in the simulation studies. We also illustrated the estimation of the imputation‐based AUCs using breast cancer data. Copyright © 2014 John Wiley & Sons, Ltd.

19.
Wong et al. [(2018), ‘Piece-wise Proportional Hazards Models with Interval-censored Data’, Journal of Statistical Computation and Simulation, 88, 140–155] studied the piecewise proportional hazards (PWPH) model with interval-censored (IC) data under the distribution-free set-up. It is well known that the partial likelihood approach is not applicable for IC data, and Wong et al. (2018) showed that the standard generalised likelihood approach does not work either. They proposed the maximum modified generalised likelihood estimator (MMGLE), and the simulation results suggest that the MMGLE is consistent. We establish the consistency and asymptotic normality of the MMGLE.

20.
In order to study developmental variables, for example, neuromotor development of children and adolescents, monotone fitting is typically needed. Most methods for estimating a monotone regression function non-parametrically, however, are not straightforward to implement, a difficult issue being the choice of smoothing parameters. In this paper, a convenient implementation of the monotone B-spline estimates of Ramsay [Monotone regression splines in action (with discussion), Stat. Sci. 3 (1988), pp. 425–461] and Kelly and Rice [Monotone smoothing with application to dose-response curves and the assessment of synergism, Biometrics 46 (1990), pp. 1071–1085] is proposed and applied to neuromotor data. Knots are selected adaptively using ideas found in Friedman and Silverman [Flexible parsimonious smoothing and additive modelling (with discussion), Technometrics 31 (1989), pp. 3–39], yielding a flexible algorithm to automatically and accurately estimate a monotone regression function. Using splines also allows one simultaneously to include other aspects in the estimation problem, such as modeling a constant difference between two groups or a known jump in the regression function. Finally, an estimate which is not only monotone but also has a ‘levelling-off’ (i.e. becomes constant after some point) is derived. This is useful when the developmental variable is known to attain a maximum/minimum within the interval of observation.
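The monotonicity constraint itself can be made concrete with the pool-adjacent-violators algorithm (PAVA), which underlies isotonic regression. This is a cruder tool than the monotone B-splines the abstract proposes (it yields a step function, not a smooth curve), and is shown only as an illustration of monotone least-squares fitting:

```python
def pava(y):
    """Pool-adjacent-violators: least-squares monotone (non-decreasing)
    fit to the sequence y. Returns the fitted values."""
    # Each block holds [weight, mean]; violating adjacent blocks are
    # merged into their weighted average until the fit is monotone.
    merged = []
    for v in y:
        merged.append([1.0, float(v)])
        while len(merged) > 1 and merged[-2][1] > merged[-1][1]:
            w2, m2 = merged.pop()
            w1, m1 = merged.pop()
            merged.append([w1 + w2, (w1 * m1 + w2 * m2) / (w1 + w2)])
    out = []
    for w, m in merged:
        out.extend([m] * int(w))
    return out

fit = pava([1.0, 3.0, 2.0, 4.0])  # the 3, 2 violation pools to 2.5
```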
