Similar Articles
 20 similar articles found
1.
This paper considers the estimation of the regression coefficients in the Cox proportional hazards model with left-truncated and interval-censored data. Using the approaches of Pan [A multiple imputation approach to Cox regression with interval-censored data, Biometrics 56 (2000), pp. 199–203] and Heller [Proportional hazards regression with interval censored data using an inverse probability weight, Lifetime Data Anal. 17 (2011), pp. 373–385], we propose two estimates of the regression coefficients. The first estimate is based on a multiple imputation methodology. The second estimate uses an inverse probability weight to select event time pairs where the ordering is unambiguous. A simulation study is conducted to investigate the performance of the proposed estimators. The proposed methods are illustrated using the Centers for Disease Control and Prevention (CDC) acquired immunodeficiency syndrome (AIDS) Blood Transfusion Data.

2.
ABSTRACT

We propose a multiple imputation method based on principal component analysis (PCA) to deal with incomplete continuous data. To reflect the uncertainty of the parameters from one imputation to the next, we use a Bayesian treatment of the PCA model. Using a simulation study and real data sets, the method is compared to two classical approaches: multiple imputation based on joint modelling and on fully conditional modelling. Unlike these approaches, the proposed method can easily be used on data sets where the number of individuals is less than the number of variables and where the variables are highly correlated. In addition, it provides unbiased point estimates of quantities of interest, such as an expectation, a regression coefficient or a correlation coefficient, with a smaller mean squared error. Furthermore, the confidence intervals built for the quantities of interest are often narrower while still ensuring valid coverage.

3.
The study of a linear regression model with an interval-censored covariate, which was motivated by an acquired immunodeficiency syndrome (AIDS) clinical trial, was first proposed by Gómez et al. They developed a likelihood approach, together with a two-step conditional algorithm, to estimate the regression coefficients in the model. However, their method is inapplicable when the interval-censored covariate is continuous. In this article, we propose a novel and fast method to treat the continuous interval-censored covariate. By using logspline density estimation, we impute the interval-censored covariate with a conditional expectation. Then, the ordinary least-squares method is applied to the linear regression model with the imputed covariate. To assess the performance of the proposed method, we compare our imputation with the midpoint imputation and the semiparametric hierarchical method via simulations. Furthermore, an application to the AIDS clinical trial is presented.
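The imputation step described above can be sketched in a few lines of NumPy. This is a toy illustration, not the authors' method: an assumed exponential density stands in for the logspline density estimate, and the names, the binning scheme, and the simulated data are made up for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def cond_exp_imputation(lo, hi, rate=1.0, grid=200):
    """Impute an interval-censored covariate by its conditional expectation
    E[W | lo < W <= hi] under an assumed exponential(rate) density."""
    out = np.empty(len(lo))
    for i, (a, b) in enumerate(zip(lo, hi)):
        w_grid = np.linspace(a, b, grid)
        dens = np.exp(-rate * w_grid)          # assumed density on the interval
        out[i] = (w_grid * dens).sum() / dens.sum()
    return out

# simulate: true covariate W, response Y, and W observed only up to a bin
n = 500
w = rng.exponential(1.0, n)
y = 2.0 + 1.5 * w + rng.normal(0, 0.5, n)
lo = np.floor(w / 0.5) * 0.5                   # binned to width-0.5 intervals
hi = lo + 0.5

w_mid = (lo + hi) / 2                          # midpoint imputation, for contrast
w_ce = cond_exp_imputation(lo, hi)             # conditional-expectation imputation

# ordinary least squares with the conditional-expectation imputed covariate
X = np.column_stack([np.ones(n), w_ce])
beta = np.linalg.lstsq(X, y, rcond=None)[0]
print(beta)   # close to the true values (2.0, 1.5)
```

Because the imputed value is a conditional expectation of the covariate, the residual within-interval variation is uncorrelated with the regressor, so the slope estimate is not attenuated the way naive error-prone covariates would be.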

4.
ABSTRACT

Missing data are commonly encountered in self-reported measurements and questionnaires. It is crucial to treat missing values with an appropriate method to avoid bias and loss of power. Various types of imputation methods exist, but it is not always clear which method is preferred for imputing data with non-normal variables. In this paper, we compared four imputation methods: mean imputation, quantile imputation, multiple imputation, and quantile regression multiple imputation (QRMI), using both simulated and real data investigating factors affecting self-efficacy in breast cancer survivors. The results showed an advantage of multiple imputation, especially QRMI, when the data are not normal.

5.
Competing risks occur when subjects may fail from one of several mutually exclusive causes. For example, a patient with cancer may die from another cause, while we are interested in the effect of a certain covariate on the probability of dying of cancer by a certain time. Several approaches have been suggested for analysing competing risks data when the failure cause is completely observed. In this paper, our interest is in the occurrence of missing causes combined with interval-censored failure times; no existing method addresses this problem. We apply the pseudo-value approach of Klein and Andersen [Klein JP, Andersen PK. Regression modeling of competing risks data based on pseudovalues of the cumulative incidence function. Biometrics. 2005;61:223–229] based on the estimated cumulative incidence function, and the regression coefficients are estimated through multiple imputation. We evaluate the suggested method by comparing it with a complete-case analysis in several simulation settings.

6.
In this article, an iterative single-point imputation (SPI) algorithm for the analysis of interval-censored data, called the quantile-filling algorithm, is studied. This approach combines the simplicity of SPI with the iterative idea of multiple imputation. The virtual complete data are imputed by conditional quantiles on the intervals, and convergence of the algorithm is based on the convergence of the moment estimates from the virtual complete data. Simulation studies have been carried out and the results are shown for interval-censored data generated from the Weibull distribution, for which the complete procedure of the algorithm is given in closed form. The algorithm is also applicable to parameter inference with other distributions. The simulation studies show that the algorithm is feasible and stable, and that the estimation accuracy is satisfactory.
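A minimal sketch of such a quantile-filling iteration for Weibull interval-censored data might look like the following. The conditional median is used as the filling quantile, and the closed-form moment estimator on the log scale (log T is Gumbel-distributed for a Weibull T) stands in for the paper's exact estimating step; the binning scheme and starting values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
EULER = 0.5772156649015329  # Euler-Mascheroni constant

def weibull_cdf(t, shape, scale):
    return 1.0 - np.exp(-(t / scale) ** shape)

def weibull_icdf(p, shape, scale):
    return scale * (-np.log1p(-p)) ** (1.0 / shape)

def logmoment_estimate(t):
    """Closed-form moment estimates from log-transformed data:
    log T ~ Gumbel(min), so shape and scale follow from mean/sd of log T."""
    m, s = np.log(t).mean(), np.log(t).std()
    shape = np.pi / (np.sqrt(6.0) * s)
    scale = np.exp(m + EULER / shape)
    return shape, scale

# interval-censored Weibull(shape=2, scale=1) sample: bins of width 0.25
t_true = rng.weibull(2.0, 1000)
lo = np.floor(t_true / 0.25) * 0.25
hi = lo + 0.25

# quantile-filling: impute conditional medians, re-estimate, iterate
shape, scale = 1.0, 1.0                      # crude starting values
for _ in range(50):
    pa, pb = weibull_cdf(lo, shape, scale), weibull_cdf(hi, shape, scale)
    t_virtual = weibull_icdf(pa + 0.5 * (pb - pa), shape, scale)  # conditional median
    shape, scale = logmoment_estimate(t_virtual)

print(round(shape, 2), round(scale, 2))      # near the true values (2.0, 1.0)
```

Each pass replaces every interval with the median of the fitted distribution restricted to that interval, so the "virtual complete data" and the parameter estimates are mutually self-consistent at convergence.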

7.
Missing data methods, maximum likelihood estimation (MLE) and multiple imputation (MI), for longitudinal questionnaire data were investigated via simulation. Predictive mean matching (PMM) was applied at both the item and scale levels, logistic regression at the item level, and multivariate normal imputation at the scale level. We also investigated a hybrid approach combining MLE and MI, in which scales from the imputed data are eliminated if all underlying items were originally missing. Bias and mean squared error (MSE) of the parameter estimates were examined. MLE occasionally provided the best results in terms of bias, but hardly ever in terms of MSE. The imputation methods at the scale level and logistic regression at the item level hardly ever showed the best performance. The hybrid approach performed similarly to or better than its original MI counterpart. The PMM hybrid approach at the item level showed the best MSE in most settings and, in some cases, also the smallest bias.

8.
The Buckley–James estimator (BJE) [J. Buckley and I. James, Linear regression with censored data, Biometrika 66 (1979), pp. 429–436] has been extended from right-censored (RC) data to interval-censored (IC) data by Rabinowitz et al. [D. Rabinowitz, A. Tsiatis, and J. Aragon, Regression with interval-censored data, Biometrika 82 (1995), pp. 501–513]. The BJE is defined to be a zero-crossing of a modified score function H(b), a point at which H(·) changes its sign. We discuss several approaches (for finding a BJE with IC data) which are extensions of the existing algorithms for RC data. However, these extensions may not be appropriate for some data, in particular, they are not appropriate for a cancer data set that we are analysing. In this note, we present a feasible iterative algorithm for obtaining a BJE. We apply the method to our data.
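The zero-crossing idea can be illustrated with a generic bisection-style search. This is a toy sketch, not the authors' algorithm: score functions from censored data are typically step functions, so the search locates the point where H changes sign rather than a point where H is exactly zero, and the H below is a made-up step function standing in for the modified score function.

```python
import numpy as np

def zero_crossing(H, lo, hi, tol=1e-8):
    """Locate a sign change of H on [lo, hi] by bisection.
    H need not be continuous, so the returned point is where the sign
    flips, not necessarily a root in the usual sense."""
    flo = H(lo)
    if flo == 0:
        return lo
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if hi - lo < tol:
            return mid
        if np.sign(H(mid)) == np.sign(flo):
            lo = mid          # sign unchanged: crossing is to the right
        else:
            hi = mid          # sign changed: crossing is to the left
    return 0.5 * (lo + hi)

# toy score function with a jump: the sign changes at b = 1.25
H = lambda b: np.where(b < 1.25, 1.0, -1.0) * (1.0 + abs(b))
b_hat = zero_crossing(H, 0.0, 3.0)
print(round(b_hat, 4))   # 1.25
```

Bisection only needs a bracketing interval on which the signs at the two ends differ, which is why it remains usable when H(·) jumps across zero without attaining it.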

9.
Various methods have been suggested in the literature to handle a missing covariate in the presence of surrogate covariates. These methods belong to one of two paradigms. In the imputation paradigm, Pepe and Fleming (1991) and Reilly and Pepe (1995) suggested filling in missing covariates using the empirical distribution of the covariate obtained from the observed data. One can go a step further by imputing the missing covariate using the nonparametric maximum likelihood estimate (NPMLE) of the density of the covariate. Recently, Murphy and Van der Vaart (1998a) showed that such an approach yields a consistent, asymptotically normal, and semiparametric efficient estimate of the logistic regression coefficient. In the weighting paradigm, Zhao and Lipsitz (1992) suggested an estimating function using completely observed records, weighted inversely by the probability of observation. An extension of this weighting approach designed to achieve the semiparametric efficiency bound was considered by Robins, Hsieh and Newey (RHN) (1995). The two ends of each paradigm (NPMLE and RHN) attain the efficiency bound and are asymptotically equivalent. However, both require a substantial amount of computation, which raises the question of whether and when, in practical situations, this extensive computation is worthwhile. In this paper we investigate the performance of single and multiple imputation estimates, weighting estimates, semiparametric efficient estimates, and two new imputation estimates. Simulation studies suggest that the sample size should be substantially large (e.g. n = 2000) for the NPMLE and RHN estimates to be more efficient than simpler imputation estimates. When the sample size is moderately large (n ≤ 1500), simpler imputation estimates have as small a variance as semiparametric efficient estimates.

10.
Randomly censored covariates arise frequently in epidemiologic studies. The most commonly used methods, including complete-case analysis and single imputation or substitution, suffer from inefficiency and bias: they make strong parametric assumptions or handle limit-of-detection censoring only. We employ multiple imputation, in conjunction with semi-parametric modelling of the censored covariate, to overcome these shortcomings and to facilitate robust estimation. We develop a multiple imputation approach for randomly censored covariates within the framework of a logistic regression model, using the non-parametric estimate of the covariate distribution or, in the presence of additional covariates in the model, the semi-parametric Cox model estimate. We evaluate this procedure in simulations and compare its operating characteristics to those of the complete-case analysis and a survival regression approach. We apply the procedures to an Alzheimer's study of the association between amyloid positivity and maternal age of onset of dementia. Multiple imputation achieves lower standard errors and higher power than the complete-case approach under heavy and moderate censoring, and is comparable under light censoring. The survival regression approach achieves the highest power among all procedures, but does not produce interpretable estimates of association. Multiple imputation thus offers a favorable alternative to complete-case analysis and ad hoc substitution methods in the presence of randomly censored covariates within the framework of logistic regression.
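The flavour of the approach can be conveyed with a deliberately simplified NumPy example: estimating the mean of a right-censored covariate by multiply imputing each censored value from the observed values beyond its censoring point, then pooling with Rubin's rules. The donor scheme here ignores that the donors themselves were subject to censoring, so it is cruder (and slightly biased) relative to the Kaplan-Meier-based draws a careful implementation would use; all names and the simulated setup are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)

# covariate X, randomly right-censored at C: we observe Z = min(X, C)
n = 400
x = rng.exponential(1.0, n)
c = rng.exponential(2.0, n)
z, delta = np.minimum(x, c), (x <= c)       # delta = 1 if uncensored

M = 20                                      # number of imputed data sets
means, variances = [], []
for _ in range(M):
    xi = z.copy()
    for i in np.flatnonzero(~delta):
        donors = z[delta & (z > z[i])]      # observed values beyond the censoring point
        xi[i] = rng.choice(donors) if donors.size else z[i]
    means.append(xi.mean())
    variances.append(xi.var(ddof=1) / n)

# Rubin's rules: total variance = within-imputation + (1 + 1/M) * between-imputation
qbar = np.mean(means)
total_var = np.mean(variances) + (1 + 1 / M) * np.var(means, ddof=1)
print(round(qbar, 2), round(np.sqrt(total_var), 3))
```

The pooling step is the part that carries over unchanged to the logistic regression setting of the paper: fit the analysis model on each completed data set, then combine the coefficient estimates and their variances by the same within-plus-between formula.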

11.
Conditional expectation imputation and local-likelihood methods are contrasted with a midpoint imputation method for bivariate regression involving interval-censored responses. Although the methods can be extended in principle to higher order polynomials, our focus is on the local constant case. Comparisons are based on simulations of data scattered about three target functions with normally distributed errors. Two censoring mechanisms are considered: the first is analogous to current-status data in which monitoring times occur according to a homogeneous Poisson process; the second is analogous to a coarsening mechanism such as would arise when the response values are binned. We find that, according to a pointwise MSE criterion, no method dominates any other when interval sizes are fixed, but when the intervals have a variable width, the local-likelihood method often performs better than the other methods, and midpoint imputation performs the worst. Several illustrative examples are presented.

12.
In this paper we propose a latent-class-based multiple imputation approach for analyzing missing categorical covariate data in a highly stratified data model. In this approach, we impute the missing data under a latent class imputation model and use likelihood methods to analyze the imputed data. Via extensive simulations, we study its statistical properties and make comparisons with complete-case analysis, multiple imputation, saturated log-linear multiple imputation and the Expectation–Maximization approach under seven missing-data mechanisms (including missing completely at random, missing at random and not missing at random). The methods are compared with respect to bias, asymptotic standard error, type I error, and 95% coverage probabilities of the parameter estimates. Simulations show that, under many missingness scenarios, latent class multiple imputation performs favorably when these criteria are considered jointly. A data example from a matched case–control study of the association between multiple myeloma and polymorphisms of the Interleukin-6 gene is considered.

13.
Sequential regression multiple imputation has emerged as a popular approach for handling incomplete data with complex features. In this approach, imputations for each missing variable are produced from a regression model using the other variables as predictors, in a cyclic manner. A normality assumption is frequently imposed on the error distributions of the conditional regression models for continuous variables, even though it rarely holds in real scenarios. We use a simulation study to investigate the performance of several sequential regression imputation methods when the error distribution is flat or heavy-tailed. The methods evaluated include sequential normal imputation and several extensions that adjust for non-normal error terms. The results show that all methods perform well for estimating the marginal mean and proportion, as well as the regression coefficient, when the error distribution is flat or moderately heavy-tailed. When the error distribution is strongly heavy-tailed, all methods retain their good performance for the mean, and the adjusted methods perform robustly for the proportion; but all methods can perform poorly for the regression coefficient because they cannot accommodate the extreme values well. We caution against the mechanical use of sequential regression imputation without model checking and diagnostics.
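One cycle of sequential normal imputation for a single incomplete continuous variable might be sketched as follows. The parameter draws approximate a Bayesian linear model so that imputation uncertainty is propagated; the data-generating setup with t-distributed (heavy-tailed) errors and the names are illustrative, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(3)

def normal_regression_impute(y, X, miss, rng):
    """One step of sequential (normal) regression imputation: fit OLS on the
    observed cases, perturb the coefficients and error variance to reflect
    parameter uncertainty, then draw the missing values."""
    Xo, yo = X[~miss], y[~miss]
    beta = np.linalg.lstsq(Xo, yo, rcond=None)[0]
    resid = yo - Xo @ beta
    dof = len(yo) - X.shape[1]
    sigma2 = resid @ resid / rng.chisquare(dof)          # draw of error variance
    cov = sigma2 * np.linalg.inv(Xo.T @ Xo)
    beta_draw = rng.multivariate_normal(beta, cov)       # draw of coefficients
    y = y.copy()
    y[miss] = X[miss] @ beta_draw + rng.normal(0, np.sqrt(sigma2), miss.sum())
    return y

# data with heavy-tailed errors (t with 3 df) and ~30% MCAR missingness in y
n = 600
x = rng.normal(0, 1, n)
y = 1.0 + 2.0 * x + rng.standard_t(3, n)
miss = rng.random(n) < 0.3
X = np.column_stack([np.ones(n), x])

completed = normal_regression_impute(np.where(miss, np.nan, y), X, miss, rng)
b = np.linalg.lstsq(X, completed, rcond=None)[0]
print(b)   # intercept and slope near the true (1.0, 2.0)
```

Under MCAR the mean structure is recovered well even with t-errors, which matches the abstract's finding that the regression coefficient, not the mean, is where the normality assumption eventually breaks down under strong tails.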

14.
Abstract

In this paper, we propose an outlier-detection approach that uses the properties of an intercept estimator in a difference-based regression model (DBRM), which we first introduce. The DBRM builds on multiple linear regression, and we developed it to detect outliers in multiple linear regression models. Our approach uses only the intercept; it does not require estimates of the other parameters in the DBRM. To our knowledge, this is the first use of a difference-based intercept estimator to study the outlier-detection problem in a multiple regression model. We compared our approach with several existing methods in a simulation study, and the results suggest that it outperforms them. We also demonstrate the advantage of our approach in a real-data application. The approach extends to nonparametric regression models for outlier detection.

15.
We study parameter estimation for linear regression models with missing skewed data. To overcome the distortion of the sample distribution and to improve estimation of the model's regression coefficients, scale parameter and skewness parameter, we propose a modified regression imputation method suited to missing data in linear regression models with skewed data. Through simulation studies and a real-data example, and by comparison with mean imputation, regression imputation and stochastic regression imputation, the results show that the proposed modified regression imputation method is effective and feasible.

16.
Composite scores are useful in providing insights and trends about complex and multidimensional quality-of-care processes. However, missing data in subcomponents may hinder the overall reliability of a composite measure. In this study, strategies for handling missing data in the Paediatric Admission Quality of Care (PAQC) score, an ordinal composite outcome, were explored through a simulation study. Specifically, we assessed the implications of the conventional method of addressing missing PAQC score subcomponents, which scores missing components as zero, and of a multiple imputation (MI)-based strategy using the latent normal joint modelling MI approach. Across simulation scenarios, MI of missing PAQC score elements at the item level produced minimally biased estimates compared to the conventional method. Moreover, regression coefficients were more prone to bias than standard errors, and the magnitude of the bias depended on the proportion of missingness and the missing-data generating mechanism. Incomplete composite outcome subcomponents should therefore be handled carefully to avoid biased estimates and misleading inferences. Further research is needed on other strategies for imputing at the component and composite outcome level, and on imputing compatibly with the substantive model in this setting.

Keywords: composite outcome, multiple imputation, paediatrics, PAQC score, pneumonia

17.
ABSTRACT

In this article, a finite mixture model of hurdle Poisson distributions with missing outcomes is proposed, and a stochastic EM algorithm is developed for obtaining the maximum likelihood estimates of the model parameters and mixing proportions. Specifically, missing data are assumed to be missing not at random (MNAR), i.e. non-ignorably missing, and the corresponding missingness mechanism is modeled through probit regression. To improve the efficiency of the algorithm, a stochastic step based on data augmentation is incorporated into the E-step, whereas the M-step is solved by the method of conditional maximization. A variant of the Bayesian information criterion (BIC) is also proposed to compare models with different numbers of components in the presence of missing values. The considered model is a general framework that captures the important characteristics of count data analysis, such as zero inflation/deflation, heterogeneity and missingness, providing more insight into the data features and allowing dispersion to be investigated more fully and correctly. Since the stochastic step only involves simulating samples from standard distributions, the computational burden is alleviated. Once the missing responses and latent variables are imputed in place of the conditional expectation, our approach works as part of a multiple imputation procedure. A simulation study and a real example illustrate the usefulness and effectiveness of our methodology.

18.
The article focuses on a conditional quantile-filling imputation algorithm for analyzing a new kind of censored data: mixed interval-censored and complete data related to an interval-censored sample. With the algorithm, the imputed failure times, which are conditional quantiles, are obtained within the censoring intervals, some of which contain exact failure times. The algorithm is viable and feasible for parameter estimation with general distributions, for instance the Weibull distribution, which has a closed-form moment estimator after log-transformation. Furthermore, an interval-censored sample is a special case of the new censored sample, and the conditional imputation algorithm can also be used to handle interval-censored failure data. Comparing the interval-censored data with the new censored data under the imputation algorithm, in terms of estimation bias, we find that the new censored data perform better than the interval-censored data.

19.
Non‐likelihood‐based methods for repeated measures analysis of binary data in clinical trials can result in biased estimates of treatment effects and associated standard errors when the dropout process is not completely at random. We tested the utility of a multiple imputation approach in reducing these biases. Simulations were used to compare performance of multiple imputation with generalized estimating equations and restricted pseudo‐likelihood in five representative clinical trial profiles for estimating (a) overall treatment effects and (b) treatment differences at the last scheduled visit. In clinical trials with moderate to high (40–60%) dropout rates with dropouts missing at random, multiple imputation led to less biased and more precise estimates of treatment differences for binary outcomes based on underlying continuous scores. Copyright © 2005 John Wiley & Sons, Ltd.

20.
Multiple imputation is a common approach for dealing with missing values in statistical databases. The imputer fills in missing values with draws from predictive models estimated from the observed data, resulting in multiple, completed versions of the database. Researchers have developed a variety of default routines to implement multiple imputation; however, there has been limited research comparing the performance of these methods, particularly for categorical data. We use simulation studies to compare repeated sampling properties of three default multiple imputation methods for categorical data, including chained equations using generalized linear models, chained equations using classification and regression trees, and a fully Bayesian joint distribution based on Dirichlet process mixture models. We base the simulations on categorical data from the American Community Survey. In the circumstances of this study, the results suggest that default chained equations approaches based on generalized linear models are dominated by the default regression tree and Bayesian mixture model approaches. They also suggest competing advantages for the regression tree and Bayesian mixture model approaches, making both reasonable default engines for multiple imputation of categorical data. Supplementary material for this article is available online.
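The chained-equations idea for categorical data can be illustrated with a deliberately crude NumPy stand-in: each missing category is drawn from the empirical conditional distribution of the variable given an observed covariate's cell, in place of the GLM, CART, or Dirichlet-process engines compared in the paper. All names and the simulated two-variable setup are made up for the sketch.

```python
import numpy as np

rng = np.random.default_rng(4)

def impute_from_conditional_freq(a, b, miss, rng):
    """Fill missing entries of `a` by drawing from the empirical distribution
    of `a` within the observed cells of `b` (a crude categorical 'regression'
    standing in for a GLM or CART imputation engine)."""
    a = a.copy()
    for i in np.flatnonzero(miss):
        donors = a[(~miss) & (b == b[i])]          # observed a-values in the same b-cell
        a[i] = rng.choice(donors) if donors.size else rng.choice(a[~miss])
    return a

# two associated categorical variables; ~20% MCAR missingness in `a`
n = 1000
b = rng.integers(0, 3, n)
a = np.where(rng.random(n) < 0.7, b, rng.integers(0, 3, n))   # a tends to equal b
miss = rng.random(n) < 0.2

a_imp = impute_from_conditional_freq(np.where(miss, -1, a), b, miss, rng)

agree = (a_imp == b).mean()
print(round(agree, 2))   # close to the agreement rate in the complete data
```

In a full chained-equations run, each incomplete variable would take its turn as the response in such a conditional model, cycling until the imputations stabilise, and the whole cycle would be repeated to produce multiple completed databases.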
