Similar articles
20 similar articles retrieved
1.
Frailty models can be fit as mixed-effects Poisson models after transforming time-to-event data to the Poisson model framework. We assess, through simulations, the robustness of Poisson likelihood estimation for Cox proportional hazards models with log-normal frailties under a misspecified frailty distribution. The log-gamma and Laplace distributions were used as true distributions for frailties on a natural log scale. Factors such as the magnitude of heterogeneity, the censoring rate, and the number and sizes of groups were explored. In the simulations, the Poisson modeling approach that assumes log-normally distributed frailties provided accurate estimates of within- and between-group fixed effects even under a misspecified frailty distribution. Non-robust estimation of variance components was observed in situations with substantial heterogeneity, large event rates, or high data dimensions.
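The transformation referred to above is usually implemented through a piecewise-exponential (Poisson) representation; a minimal sketch, in generic notation rather than the paper's, is as follows. Splitting each subject's follow-up into intervals on which the baseline hazard is constant, the event indicator of subject $i$ in interval $j$ contributes to the likelihood, up to a factor not involving the parameters, as a Poisson observation:
\[
  d_{ij}\mid b_{g(i)} \sim \mathrm{Poisson}(\mu_{ij}),\qquad
  \log\mu_{ij} = \log y_{ij} + \log\lambda_j + x_i^{\top}\beta + b_{g(i)},\qquad
  b_{g(i)} \sim N(0,\sigma^2),
\]
where $y_{ij}$ is the time at risk in interval $j$, $\lambda_j$ is the constant baseline hazard on that interval, and $b_{g(i)}$ is the log-normal frailty (on the natural-log scale) of the subject's group, so any mixed-effects Poisson routine with an offset can be used for estimation.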

2.
Estimation in Semiparametric Marginal Shared Gamma Frailty Models
Semiparametric marginal shared frailty models in survival analysis have nonparametric hazard functions multiplied by a random frailty in each cluster, and the survival times conditional on the frailties are assumed to be independent. In addition, the marginal hazard functions have the same form as in the usual Cox proportional hazards models. In this paper, an approach based on maximum likelihood and expectation-maximization is applied to semiparametric marginal shared gamma frailty models, where the frailties are assumed to be gamma distributed with mean 1 and variance θ. The estimates of the fixed-effect parameters and their standard errors obtained using this approach are compared, in terms of both bias and efficiency, with those obtained using the extended marginal approach. Similarly, the standard errors of our frailty variance estimates are found to compare favourably with those obtained using other methods. The asymptotic distribution of the frailty variance estimates is shown to be a 50–50 mixture of a point mass at zero and a truncated normal random variable on the positive axis when θ0 = 0. Simulations demonstrate that, for θ0 > 0, it is approximately an x% – (100 − x)% mixture, 0 ≤ x ≤ 50, between a point mass at zero and a truncated normal random variable on the positive axis for small samples and small values of θ0; otherwise, it is approximately normal.
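For concreteness, the shared gamma frailty model referred to above can be written, in generic notation, as
\[
  \lambda_{ij}(t\mid Z_i) = Z_i\,\lambda_0(t)\,\exp(x_{ij}^{\top}\beta),\qquad
  Z_i \sim \mathrm{Gamma}\!\left(\tfrac{1}{\theta},\tfrac{1}{\theta}\right),\quad
  \mathrm{E}(Z_i)=1,\ \mathrm{Var}(Z_i)=\theta,
\]
with the survival times within cluster $i$ assumed conditionally independent given $Z_i$. Since θ = 0 corresponds to no within-cluster dependence, the null value sits on the boundary of the parameter space, which is what produces the mixture-type asymptotics described above.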

3.
Shared frailty models are of interest when one has clustered survival data and when the focus is on comparing the lifetimes within clusters and, further, on estimating the correlation between lifetimes from the same cluster. It is well known that the positive stable model should be preferred to the gamma model in situations where the correlated survival data show a decreasing association with time. In this paper, we devise a likelihood-based estimation procedure for the positive stable shared frailty Cox model, which is expected to achieve high efficiency. Large-sample properties of the proposed estimator are provided, and a consistent estimator of its standard errors is given. Simulation studies show that the estimation procedure is appropriate for practical use, and that it is much more efficient than a recently suggested procedure. The suggested methodology is applied to a dataset concerning time to blindness for patients with diabetic retinopathy.

4.
Using some logarithmic and integral transformation we transform a continuous covariate frailty model into a polynomial regression model with a random effect. The responses of this mixed model can be ‘estimated’ via conditional hazard function estimation. The random error in this model does not have zero mean and its variance is not constant along the covariate and, consequently, these two quantities have to be estimated. Since the asymptotic expression for the bias is complicated, the two-large-bandwidth trick is proposed to estimate the bias. The proposed transformation is very useful for clustered incomplete data subject to left truncation and right censoring (and for complex clustered data in general). Indeed, in this case no standard software is available to fit the frailty model, whereas for the transformed model standard software for mixed models can be used for estimating the unknown parameters in the original frailty model. A small simulation study illustrates the good behavior of the proposed method. This method is applied to a bladder cancer data set.

5.
In the analysis of semi-competing risks data, interest lies in estimation and inference with respect to a so-called non-terminal event, the observation of which is subject to a terminal event. Multi-state models are commonly used to analyse such data, with covariate effects on the transition/intensity functions typically specified via the Cox model and dependence between the non-terminal and terminal events specified, in part, by a unit-specific shared frailty term. To ensure identifiability, the frailties are typically assumed to arise from a parametric distribution, specifically a Gamma distribution with mean 1.0 and variance, say, σ². When the frailty distribution is misspecified, however, the resulting estimator is not guaranteed to be consistent, with the extent of asymptotic bias depending on the discrepancy between the assumed and true frailty distributions. In this paper, we propose a novel class of transformation models for semi-competing risks analysis that permit the non-parametric specification of the frailty distribution. To ensure identifiability, the class restricts to parametric specifications of the transformation and the error distribution; the latter are flexible, however, and cover a broad range of possible specifications. We also derive the semi-parametric efficient score under the complete data setting and propose a non-parametric score imputation method to handle right censoring; consistency and asymptotic normality of the resulting estimators are derived and small-sample operating characteristics are evaluated via simulation. Although the proposed semi-parametric transformation model and non-parametric score imputation method are motivated by the analysis of semi-competing risks data, they are broadly applicable to any analysis of multivariate time-to-event outcomes in which a unit-specific shared frailty is used to account for correlation. Finally, the proposed model and estimation procedures are applied to a study of hospital readmission among patients diagnosed with pancreatic cancer.
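The shared-frailty illness–death specification that serves as the starting point here is commonly written, in generic notation, as
\begin{align*}
 \lambda_1(t_1\mid\gamma,x) &= \gamma\,\lambda_{01}(t_1)\,e^{x^{\top}\beta_1}, &&\text{healthy} \to \text{non-terminal},\\
 \lambda_2(t_2\mid\gamma,x) &= \gamma\,\lambda_{02}(t_2)\,e^{x^{\top}\beta_2}, &&\text{healthy} \to \text{terminal},\\
 \lambda_3(t_2\mid t_1,\gamma,x) &= \gamma\,\lambda_{03}(t_2)\,e^{x^{\top}\beta_3},\quad t_2>t_1, &&\text{non-terminal} \to \text{terminal},
\end{align*}
with γ ∼ Gamma(1/σ², 1/σ²), so that E(γ) = 1 and Var(γ) = σ². It is this parametric Gamma assumption that the proposed transformation models replace with a nonparametric frailty specification.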

6.
The frailty approach is commonly used in reliability theory and survival analysis to model the dependence between lifetimes of individuals or components subject to common risk factors; according to this model the frailty (an unobservable random vector that describes environmental conditions) acts simultaneously on the hazard functions of the lifetimes. Some interesting conditions for stochastic comparisons between random vectors defined in accordance with these models have been described in the literature; in particular, comparisons between frailty models have been studied by assuming independence for the baseline survival functions and the corresponding environmental parameters. In this paper, a generalization of these models is developed, which assumes conditional dependence between the components of the random vector, and some conditions for stochastic comparisons are provided. Some examples of frailty models satisfying these conditions are also described.

7.
Unexplained heterogeneity in univariate survival data and association in multivariate survival can both be modelled by the inclusion of frailty effects. This paper investigates the consequences of ignoring frailty in analysis, fitting misspecified Cox proportional hazards models to the marginal distributions. Regression coefficients are biased towards 0 by an amount which depends in magnitude on the variability of the frailty terms and the form of frailty distribution. The bias is reduced when censoring is present. Fitted marginal survival curves can also differ substantially from the true marginals.
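The attenuation mechanism can be illustrated with the gamma-frailty case (a standard calculation, not specific to this paper): if the conditional hazard is $\lambda(t\mid Z,x)=Z\,\lambda_0(t)\,e^{x^{\top}\beta}$ with $Z\sim\mathrm{Gamma}(1/\theta,1/\theta)$, integrating out $Z$ gives the marginal hazard
\[
  \lambda_{\mathrm{marg}}(t\mid x)
  = \frac{\lambda_0(t)\,e^{x^{\top}\beta}}{1+\theta\,\Lambda_0(t)\,e^{x^{\top}\beta}},
\]
which is no longer proportional in $x$ and attenuates the apparent covariate effect increasingly as $\Lambda_0(t)$ grows, the more so the larger the frailty variance θ. Censoring limits how far along the time axis this attenuation is observed, which is consistent with the reduced bias noted above.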

8.
Randomly censored covariates arise frequently in epidemiologic studies. The most commonly used methods, including complete-case analysis and single imputation or substitution, suffer from inefficiency and bias: they make strong parametric assumptions or handle limit-of-detection censoring only. We employ multiple imputation, in conjunction with semi-parametric modeling of the censored covariate, to overcome these shortcomings and to facilitate robust estimation. We develop a multiple imputation approach for randomly censored covariates within the framework of a logistic regression model. We use the non-parametric estimate of the covariate distribution or the semi-parametric Cox model estimate in the presence of additional covariates in the model. We evaluate this procedure in simulations, and compare its operating characteristics to those from the complete-case analysis and a survival regression approach. We apply the procedures to an Alzheimer's study of the association between amyloid positivity and maternal age of onset of dementia. Multiple imputation achieves lower standard errors and higher power than the complete-case approach under heavy and moderate censoring and is comparable under light censoring. The survival regression approach achieves the highest power among all procedures, but does not produce interpretable estimates of association. Multiple imputation offers a favorable alternative to complete-case analysis and ad hoc substitution methods in the presence of randomly censored covariates within the framework of logistic regression.
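Multiple-imputation analyses of this kind are typically pooled across the M imputed data sets with Rubin's combining rules (a generic sketch, not taken from the paper itself):
\[
  \bar\beta=\frac{1}{M}\sum_{m=1}^{M}\hat\beta_m,\qquad
  \widehat{\mathrm{Var}}(\bar\beta)=\bar W+\Bigl(1+\tfrac{1}{M}\Bigr)B,\qquad
  \bar W=\frac{1}{M}\sum_{m=1}^{M}\widehat{\mathrm{Var}}(\hat\beta_m),\quad
  B=\frac{1}{M-1}\sum_{m=1}^{M}(\hat\beta_m-\bar\beta)^2,
\]
so that the between-imputation component $B$ reflects the extra uncertainty induced by the censored covariate.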

9.
The Kaplan–Meier estimator of a survival function requires that the censoring indicator is always observed. A method of survival function estimation is developed when the censoring indicators are missing completely at random (MCAR). The resulting estimator is a smooth functional of the Nelson–Aalen estimators of certain cumulative transition intensities. The asymptotic properties of this estimator are derived. A simulation study shows that the proposed estimator has greater efficiency than competing MCAR-based estimators. The approach is extended to the Cox model setting for the estimation of a conditional survival function given a covariate.

10.
The shared frailty models allow for unobserved heterogeneity or for statistical dependence between observed survival data. The most commonly used estimation procedure in frailty models is the EM algorithm, but this approach yields a discrete estimator of the distribution and consequently does not allow direct estimation of the hazard function. We show how maximum penalized likelihood estimation can be applied to nonparametric estimation of a continuous hazard function in a shared gamma-frailty model with right-censored and left-truncated data. We examine the problem of obtaining variance estimators for regression coefficients, the frailty parameter and baseline hazard functions. Some simulations for the proposed estimation procedure are presented. A prospective cohort (Paquid) with grouped survival data serves to illustrate the method which was used to analyze the relationship between environmental factors and the risk of dementia.
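A penalized full log-likelihood of this type generically has the form (notation ours, with a positive smoothing parameter κ; the abstract does not state the exact penalty)
\[
  pl(\beta,\theta,\lambda_0)=l(\beta,\theta,\lambda_0)-\kappa\int_0^{\infty}\bigl\{\lambda_0''(u)\bigr\}^2\,du,
\]
so that the roughness penalty yields a smooth, continuous estimate of the baseline hazard $\lambda_0$ rather than the discrete estimate delivered by the EM algorithm.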

11.
The gamma frailty model is a natural extension of the Cox proportional hazards model in survival analysis. Because the frailties are unobserved, an E-M approach is often used for estimation. Such an approach is shown to lead to finite-sample underestimation of the frailty variance, with the corresponding regression parameters also being underestimated as a result. For the univariate case, we investigate the source of the bias with simulation studies and a complete enumeration. The rank-based E-M approach, we note, only identifies frailty through the order in which failures occur; additional frailty which is evident in the survival times is ignored, and as a result the frailty variance is underestimated. An adaptation of the standard E-M approach is suggested, whereby the non-parametric Breslow estimate is replaced by a local likelihood formulation for the baseline hazard which allows the survival times themselves to enter the model. Simulations demonstrate that this approach substantially reduces the bias, even at small sample sizes. The method developed is applied to survival data from the North West Regional Leukaemia Register.
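In the gamma frailty model the E-step has a well-known closed form (a standard result, stated here in generic notation): given the current parameter values, the conditional distribution of the frailty $Z_i$ is again gamma, so
\[
  \widehat Z_i=\mathrm{E}(Z_i\mid \text{data})
  =\frac{1/\theta+D_i}{\,1/\theta+\sum_{j}\Lambda_0(t_{ij})\,e^{x_{ij}^{\top}\beta}\,},
\]
where $D_i$ is the number of observed events in cluster $i$ (a single subject in the univariate case). When the Breslow estimate is plugged in for $\Lambda_0$, this expectation uses the failure times essentially only through their order, which connects to the rank-based feature the paper identifies as the source of the underestimation.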

12.
Papers dealing with measures of predictive power in survival analysis have regarded independence of censoring, or unbiasedness of the estimates under censoring, as the most important property. We argue that this property has been wrongly understood. Discussing the so-called measure of information gain, we point out that we cannot have unbiased estimates if all values greater than a given time τ are censored. This is due to the fact that censoring before τ has a different effect than censoring after τ. Such a τ is often introduced by the design of a study. Independence can only be achieved under the assumption that the model remains valid after τ, which is impossible to verify. But if one is willing to make such an assumption, we suggest using multiple imputation to obtain a consistent estimate. We further show that censoring has different effects on the estimation of the measure for the Cox model than for parametric models, and we discuss them separately. We also give some warnings about the usage of the measure, especially when it comes to comparing essentially different models.

13.
Recurrent events models have received considerable attention recently. The majority of approaches show the consistency of parameter estimates under the assumption that censoring is independent of the recurrent events process of interest, conditional on the covariates that are included in the model. We provide an overview of available recurrent events analysis methods and present an inverse probability of censoring weighted estimator for the regression parameters in the Andersen–Gill model that is commonly used for recurrent event analysis. This estimator remains consistent under informative censoring if the censoring mechanism is estimated consistently, and it generally improves on the naïve estimator for the Andersen–Gill model in the case of independent censoring. We illustrate the bias of ad hoc estimators in the presence of informative censoring with a simulation study and provide a data analysis of recurrent lung exacerbations in cystic fibrosis patients when some patients are lost to follow-up.
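The weighting scheme can be sketched generically (notation ours): each subject's at-risk indicator and counting-process increment at time $t$ in the Andersen–Gill partial-likelihood score is multiplied by
\[
  \hat w_i(t)=\frac{I(C_i\ge t)}{\hat K(t\mid X_i)},
\]
where $\hat K(t\mid X_i)$ estimates the conditional probability of remaining uncensored up to $t$ given the covariates driving the censoring mechanism. If that censoring model is correctly specified and consistently estimated, the weighted estimating equation remains unbiased even when censoring depends on the recurrent event history, which is the basis for the consistency claim above.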

14.
A generalized Cox regression model is studied for the covariance analysis of competing risks data subject to independent random censoring. The information of the maximum partial likelihood estimates is compared with that of the maximum likelihood estimates assuming a log-linear hazard function. The method of generalized variance is used to define the efficiency of estimation between the two models. This is then applied to two-sample problems with two exponential censoring rates. Numerical results are summarized and presented graphically. The detailed results indicate that the semi-parametric model works well for a higher rate of censoring. A method of generalizing the result to Type I censoring and the efficiency of estimating the coefficient of the covariate are discussed. A brief account of using the results to help design experiments is also given.

15.
Regression parameter estimation in the Cox failure time model is considered when regression variables are subject to measurement error. Assuming that repeat regression vector measurements adhere to a classical measurement model, we can consider an ordinary regression calibration approach in which the unobserved covariates are replaced by an estimate of their conditional expectation given available covariate measurements. However, since the rate of withdrawal from the risk set across the time axis, due to failure or censoring, will typically depend on covariates, we may improve the regression parameter estimator by recalibrating within each risk set. The asymptotic and small sample properties of such a risk set regression calibration estimator are studied. A simple estimator based on a least squares calibration in each risk set appears able to eliminate much of the bias that attends the ordinary regression calibration estimator under extreme measurement error circumstances. Corresponding asymptotic distribution theory is developed, small sample properties are studied using computer simulations and an illustration is provided.
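Under the classical measurement model $W_{ij}=X_i+U_{ij}$ with $k$ replicate measurements per subject, the ordinary regression-calibration step replaces the unobserved covariate by the standard best linear predictor (generic formula, notation ours),
\[
  \widehat E(X_i\mid\bar W_i)
  =\hat\mu_x+\hat\Sigma_x\bigl(\hat\Sigma_x+\hat\Sigma_u/k\bigr)^{-1}(\bar W_i-\hat\mu_x),
\]
and the risk-set version recomputes $\hat\mu_x$ and the covariance components at each failure time from the subjects still at risk, which is what corrects for the covariate-dependent depletion of the risk set described above.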

16.
In studies that involve censored time-to-event data, stratification is frequently encountered for different reasons, such as stratified sampling or model adjustment due to violation of model assumptions. Often, the main interest is not in the clustering variables, and the cluster-related parameters are treated as nuisance parameters. When inference is about a parameter of interest in the presence of many nuisance parameters, standard likelihood methods often perform very poorly and may lead to severe bias. This problem is particularly evident in models for clustered data with cluster-specific nuisance parameters, when the number of clusters is relatively high with respect to the within-cluster size. However, it is still unclear how the presence of censoring would affect this issue. We consider clustered failure time data with independent censoring, and propose frequentist inference based on an integrated likelihood. We then apply the proposed approach to a stratified Weibull model. Simulation studies show that appropriately defined integrated likelihoods provide very accurate inferential results in all circumstances, such as for highly clustered data or heavy censoring, even in extreme settings where standard likelihood procedures lead to strongly misleading results. We show that the proposed method performs generally as well as the frailty model, but it is superior when the frailty distribution is seriously misspecified. An application, which concerns treatments for a frequent disease in late-stage HIV-infected people, illustrates the proposed inferential method in Weibull regression models, and compares different inferential conclusions from alternative methods.

17.
In this paper, we propose a bias-corrected estimate of the regression coefficient for the generalized probit regression model when the covariates are subject to measurement error and the responses are subject to interval censoring. The main improvement of our method is that it removes most of the bias of the naive estimates. The great advantage of our method is that it is baseline and censoring distribution free, in the sense that the investigator does not need to calculate the baseline or the censoring distribution to obtain the estimator of the regression coefficient, an important property of the Cox regression model. A sandwich estimator for the variance is also proposed. Our procedure can be generalized to general measurement error distributions as long as the first four moments of the measurement error are known. The results of extensive simulations show that our approach is very effective in eliminating the bias when the measurement error is not too large relative to the error term of the regression model.

18.
We propose a profile conditional likelihood approach to handle missing covariates in the general semiparametric transformation regression model. The method estimates the marginal survival function by the Kaplan–Meier estimator, and then estimates the parameters of the survival model and the covariate distribution from a conditional likelihood, substituting the Kaplan–Meier estimator for the marginal survival function in the conditional likelihood. This method is simpler than full maximum likelihood approaches, and yields a consistent and asymptotically normally distributed estimator of the regression parameter when censoring is independent of the covariates. The estimator demonstrates very high relative efficiency in simulations. When compared with complete-case analysis, the proposed estimator can be more efficient when the missing data are missing completely at random and can correct bias when the missing data are missing at random. The potential application of the proposed method to the generalized probit model with missing continuous covariates is also outlined.

19.
The unknown or unobservable risk factors in survival analysis cause heterogeneity between individuals. Frailty models are used in survival analysis to account for the unobserved heterogeneity in individual risks of disease and death. To analyze bivariate data on related survival times, shared frailty models have been suggested. The most common shared frailty model is one in which the frailty acts multiplicatively on the hazard function. In this paper, we introduce the shared gamma frailty model and the inverse Gaussian frailty model with the reversed hazard rate. We introduce a Bayesian estimation procedure using the Markov chain Monte Carlo (MCMC) technique to estimate the parameters involved in the models. We present a simulation study to compare the true values of the parameters with the estimated values. We also apply the proposed models to the Australian twin data set, and a better model is suggested.
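For reference, the reversed hazard rate of a lifetime $T$ with density $f$ and distribution function $F$, and the multiplicative frailty model built on it, can be written in generic notation as
\[
  m(t)=\frac{f(t)}{F(t)}=\frac{d}{dt}\log F(t),\qquad
  m(t\mid Z)=Z\,m_0(t)
  \;\Longleftrightarrow\;
  F(t\mid Z)=\bigl\{F_0(t)\bigr\}^{Z},
\]
so that the frailty $Z$ (gamma or inverse Gaussian in the models proposed above) acts as a random power on the baseline distribution function rather than on the survival function.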

20.
We study non-Markov multistage models under dependent censoring regarding estimation of stage occupation probabilities. The individual transition and censoring mechanisms are linked together through covariate processes that affect both the transition intensities and the censoring hazard for the corresponding subjects. In order to adjust for the dependent censoring, an additive hazard regression model is applied to the censoring times, and all observed counting and “at risk” processes are subsequently given an inverse probability of censoring weighted form. We examine the bias of the Datta–Satten and Aalen–Johansen estimators of stage occupation probability, and also consider the variability of these estimators by studying their estimated standard errors and mean squared errors. Results from different simulation studies of frailty models indicate that the Datta–Satten estimator is approximately unbiased, whereas the Aalen–Johansen estimator either under- or overestimates the stage occupation probability due to the dependent nature of the censoring process. However, in our simulations, the mean squared error of the latter estimator tends to be slightly smaller than that of the former estimator. Studies on development of nephropathy among diabetics and on blood platelet recovery among bone marrow transplant patients are used as demonstrations on how the two estimation methods work in practice. Our analyses show that the Datta–Satten estimator performs well in estimating stage occupation probability, but that the censoring mechanism has to be quite selective before a deviation from the Aalen–Johansen estimator is of practical importance.
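The reweighting underlying the Datta–Satten type estimator can be sketched generically (notation ours): with $\hat K_i(t)$ the estimated probability, obtained here from the additive hazard regression for the censoring times, that subject $i$ is still uncensored just before $t$, the observed counting and at-risk processes are replaced by
\[
  \hat N_i^{w}(t)=\int_0^{t}\frac{dN_i(u)}{\hat K_i(u-)},\qquad
  \hat Y_i^{w}(t)=\frac{Y_i(t)}{\hat K_i(t-)},
\]
and the stage occupation probabilities are then obtained from the Aalen–Johansen product-limit formula applied to these weighted processes. The unweighted Aalen–Johansen estimator omits this adjustment, which is why it can be biased when censoring is selective, as the simulations above indicate.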
