20 similar documents retrieved.
1.
Because of limitations of the univariate frailty model in the analysis of multivariate survival data, a bivariate frailty model is introduced for the analysis of bivariate survival data. This provides tremendous flexibility, especially in allowing negative associations between subjects within the same cluster. The approach involves incorporating into the model two possibly correlated frailties for each cluster, with the bivariate lognormal distribution used as the frailty distribution. The model is then generalized to multivariate survival data with two distinguished groups and also to alternating-process data. A modified EM algorithm is developed that does not require specification of the baseline hazards. The estimators are generalized maximum likelihood estimators with a subject-specific interpretation. The model is applied to a mental health study evaluating the effects of health policy on inpatient psychiatric care.
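A minimal simulation sketch of the frailty structure described above (not the paper's estimation procedure): each cluster receives two correlated lognormal frailties, so a negative correlation on the log scale induces a negative association between the paired survival times. The hazard form, parameter values and variable names are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n_clusters = 1000

# Covariance of the log-frailties; the negative off-diagonal term is what
# allows a negative association within a cluster (illustrative values).
mu = np.zeros(2)
sigma = np.array([[0.5, -0.3],
                  [-0.3, 0.5]])

# Bivariate lognormal frailties: exponentiate correlated normals.
w = np.exp(rng.multivariate_normal(mu, sigma, size=n_clusters))

# Conditional hazard for member j of cluster i: w_ij * lambda0 * exp(beta * z_ij).
lambda0, beta = 0.1, 0.5
z = rng.binomial(1, 0.5, size=(n_clusters, 2))
rate = w * lambda0 * np.exp(beta * z)

# Exponential baseline for simplicity; one survival time per cluster member.
t = rng.exponential(1.0 / rate)

# Crude check of the induced within-cluster association.
print(np.corrcoef(t[:, 0], t[:, 1])[0, 1])
```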
2.
Christian Bressen Pipper, Torben Martinussen. Journal of the Royal Statistical Society, Series B (Statistical Methodology), 2004, 66(1): 207-220
Multivariate failure time data arise when data consist of clusters in which the failure times may be dependent. A popular approach to such data is the marginal proportional hazards model with estimation under the working independence assumption. In some contexts, however, it may be more reasonable to use the marginal additive hazards model. We derive asymptotic properties of the Lin and Ying estimators for the marginal additive hazards model for multivariate failure time data. Furthermore, we suggest estimating equations for the regression parameters and association parameters in parametric shared frailty models with marginal additive hazards, using the Lin and Ying estimators. We give the large-sample properties of the estimators arising from these estimating equations and investigate their small-sample properties by Monte Carlo simulation. A real example is provided for illustration.
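For reference, the closed-form Lin-Ying estimator for the marginal additive hazards model λ_i(t) = λ_0(t) + β'Z_i(t), stated here in standard counting-process notation from the general literature rather than copied from the paper, is

$$\hat\beta=\Bigl\{\sum_{i=1}^n\int_0^\tau Y_i(t)\{Z_i(t)-\bar Z(t)\}^{\otimes 2}\,dt\Bigr\}^{-1}\sum_{i=1}^n\int_0^\tau\{Z_i(t)-\bar Z(t)\}\,dN_i(t),$$

where N_i is the counting process, Y_i the at-risk indicator, Z̄(t) the at-risk average of the covariates, and a^{⊗2} = aa'.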
3.
Kung-Yee Liang, Steven G. Self, Karen J. Bandeen-Roche, Scott L. Zeger. Lifetime Data Analysis, 1995, 1(4): 403-415
Cox's seminal 1972 paper on regression methods for possibly censored failure time data popularized the use of time to an event as a primary response in prospective studies. One key assumption of this and other regression methods, however, is that observations are independent of one another. In many problems, failure times are clustered into small groups where outcomes within a group are correlated. Examples include failure times for the two eyes of one person or for members of the same family. This paper presents a survey of models for multivariate failure time data. Two distinct classes of models are considered: frailty and marginal models. In a frailty model, the correlation is assumed to derive from latent variables (frailties) common to observations from the same cluster, and regression models are formulated for the conditional failure time distribution given the frailties. Alternatively, marginal models describe the marginal failure time distribution of each response while separately modelling the association among responses from the same cluster. We focus on recent extensions of the proportional hazards model for multivariate failure time data. Model formulation, parameter interpretation and estimation procedures are considered.
4.
We consider tied survival data under the Cox proportional hazards regression model. The standard approaches are the Breslow and Efron approximations and various so-called exact methods. All of these methods lead to biased estimates when the true underlying model is in fact a Cox model. In this paper we review the methods and suggest a new method, based on the missing-data principle and the EM algorithm, that leads to a score equation which can be solved directly. This score has mean zero. We also show that all the considered methods have the same asymptotic properties and that there is no loss of asymptotic efficiency when the tie sizes are bounded or even converge to infinity at a given rate. A simulation study is conducted to compare the finite-sample properties of the methods.
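A small numerical illustration of the two standard approximations mentioned above (not the paper's new EM-based method): the contribution of a single tied failure time to the log partial likelihood under the Breslow and Efron schemes, with made-up data and coefficient.

```python
import numpy as np

beta = 0.3
z = np.array([0.0, 1.0, 2.0, 0.5, 1.5])   # covariates of the subjects at risk
risk = np.exp(beta * z)                    # exp(beta * z_k) over the risk set
D = np.array([1, 3])                       # indices of the d = 2 tied deaths
d = len(D)

# Breslow: keep the full risk-set denominator for every tied death.
breslow = beta * z[D].sum() - d * np.log(risk.sum())

# Efron: successively down-weight the tied deaths in the denominator.
efron = beta * z[D].sum() - sum(
    np.log(risk.sum() - (l / d) * risk[D].sum()) for l in range(d)
)

print(f"Breslow contribution: {breslow:.4f}")
print(f"Efron contribution:   {efron:.4f}")
```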
5.
6.
We study the effect of additive and multiplicative Berkson measurement error in the Cox proportional hazards model. By plotting the true and the observed survivor functions, and the true and the observed hazard functions, as functions of the exposure, one can get an idea of the effect of this type of error on the estimation of the slope parameter corresponding to the variable measured with error. As an example, we analyze the measurement error in the German Uranium Miners Cohort Study, both with graphical methods and with a simulation study. We do not see a substantial bias in the presence of small measurement error and in the rare-disease case. Even the effect of a Berkson measurement error with high variance, which is not unrealistic in our example, is a negligible attenuation of the observed effect. This attenuation is, however, more pronounced for multiplicative measurement error.
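A rough simulation sketch of the additive Berkson setting described above, using the lifelines package to fit the Cox model (assumed to be available); the distributions, variances and variable names are illustrative and not those of the uranium miners study.

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(1)
n = 5000

# Berkson structure: the assigned (observed) exposure z is fixed and the true
# exposure scatters around it, x = z + u.
z = rng.uniform(0.0, 2.0, n)
u = rng.normal(0.0, 0.5, n)          # Berkson error with illustrative variance
x = z + u

# Survival times generated from the TRUE exposure; independent censoring.
beta = 0.5
t = rng.exponential(1.0 / (0.1 * np.exp(beta * x)))
c = rng.exponential(10.0, n)
df = pd.DataFrame({
    "time": np.minimum(t, c),
    "event": (t <= c).astype(int),
    "z": z,                          # only the error-prone exposure is observed
})

cph = CoxPHFitter()
cph.fit(df, duration_col="time", event_col="event")
print(cph.params_)                   # compare the fitted slope for z with beta = 0.5
```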
7.
We consider a Cox-type regression model with change-points in the covariates. A change-point specifies the unknown threshold at which the influence of a covariate shifts smoothly, i.e., the regression parameter may change over the range of a covariate while the underlying regression function remains continuous but not differentiable. The model can be used to describe change-points in different covariates, but also to model more than one change-point in a single covariate. Estimates of the change-points and of the regression parameters are derived and their properties are investigated. It is shown that not only the estimates of the regression parameters but also the estimates of the change-points are √n-consistent, in contrast to the conjecture of other authors. Asymptotic normality is shown by using results developed for M-estimators. At the end of this paper we apply our model to an actuarial dataset, to the PBC dataset of Fleming and Harrington (Counting Processes and Survival Analysis, 1991), and to a dataset of electric motors.
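A minimal sketch of the covariate construction implied by such a change-point model: with a threshold c, the effect β1·x + β2·max(x - c, 0) is continuous in x but not differentiable at c. Here c is treated as known purely for illustration, whereas in the paper it is estimated; names and values are assumptions.

```python
import numpy as np

def bent_line_terms(x, c):
    """Design columns for an effect that changes slope at threshold c while
    remaining continuous: beta1 * x + beta2 * max(x - c, 0)."""
    return np.column_stack([x, np.maximum(x - c, 0.0)])

x = np.linspace(0.0, 10.0, 6)
X = bent_line_terms(x, c=4.0)

beta1, beta2 = 0.2, 0.7              # illustrative slopes before/after the threshold
eta = X @ np.array([beta1, beta2])   # linear predictor entering the Cox model
print(np.round(eta, 3))              # continuous in x, with a kink at x = 4
```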
8.
We consider a method-of-moments approach for dealing with censoring at zero for data expressed in levels when researchers would like to take logarithms. A Box-Cox transformation is employed. We explore this approach in the context of linear regression where both dependent and independent variables are censored. We contrast this method with two others: (1) dropping records containing censored values, and (2) assuming normality for censored observations and the residuals in the model. Across the methods considered, where researchers are interested primarily in the slope parameter, estimation bias is consistently reduced using the method-of-moments approach.
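For reference, the Box-Cox transformation referred to above is the standard one (general definition, not the paper's specific moment conditions):

$$y^{(\lambda)}=\begin{cases}\dfrac{y^{\lambda}-1}{\lambda}, & \lambda\neq 0,\\[4pt] \log y, & \lambda=0,\end{cases}$$

which tends to the log transform as λ → 0 yet remains finite at y = 0 whenever λ > 0, the property that makes it attractive when observations are censored at zero.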
9.
Gwangsu Kim. Journal of the Korean Statistical Society, 2019, 48(1): 146-168
The estimation of random effects in frailty models is an important problem in survival analysis, and testing for the presence of random effects can be essential to improving model efficiency. Posterior consistency for the dispersion parameters and coefficients of the frailty model is demonstrated, in theory and in simulations, using the posterior induced by Cox's partial likelihood together with simple priors. Simulation studies of tests for the presence of random effects are also conducted, and the proposed method performs well in several settings. A data analysis is also presented. The proposed method is easily tractable and can be used to develop various methods for Bayesian inference in frailty models.
10.
Connections are established between the theories of weighted logrank tests and of frailty models. These connections arise because omission of a balanced covariate from a proportional hazards model generally leads to a model with non-proportional hazards, for which the simple logrank test is no longer optimal. The optimal weighting function, and the asymptotic relative efficiencies of the simple logrank test and of the optimally weighted logrank test relative to the adjusted test that would be used if the covariate values were known, are expressible in terms of the Laplace transform of the hazard ratio for the distribution of the omitted covariate. For example, if this hazard ratio has a gamma distribution, the optimal test is a member of the G^ρ class introduced by Harrington and Fleming (1982). We also consider positive stable, inverse Gaussian, displaced Poisson and two-point frailty distributions. Results are obtained for parametric and nonparametric tests and are extended to include random censoring. We show that the loss of efficiency from omitting a covariate is generally more important than the additional loss due to misspecification of the resulting non-proportional hazards model as a proportional hazards model. However, two-point frailty distributions can provide exceptions to this rule. Censoring generally increases the efficiency of the simple logrank test relative to the adjusted logrank test.
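For context, the Harrington-Fleming G^ρ family referred to above weights the logrank increments by a power of the (left-continuous) pooled Kaplan-Meier estimate; this is the standard definition, with ρ = 0 giving the ordinary logrank test and ρ = 1 a Prentice-Wilcoxon-type test:

$$w_{\rho}(t)=\{\hat S(t-)\}^{\rho},\qquad \rho\ge 0.$$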
11.
Papers dealing with measures of predictive power in survival analysis have regarded independence of censoring, or unbiasedness of the estimates under censoring, as the most important property. We argue that this property has been wrongly understood. Discussing the so-called measure of information gain, we point out that unbiased estimates cannot be obtained if all values greater than a given time τ are censored. This is because censoring before τ has a different effect than censoring after τ. Such a τ is often introduced by the design of a study. Independence can only be achieved under the assumption that the model remains valid after τ, which is impossible to verify. But if one is willing to make such an assumption, we suggest using multiple imputation to obtain a consistent estimate. We further show that censoring affects the estimation of the measure differently for the Cox model than for parametric models, and we discuss the two cases separately. We also give some warnings about the use of the measure, especially when it comes to comparing essentially different models.
12.
In this work we present a simple estimation procedure for a general frailty model for analysis of prospective correlated failure times. Earlier work showed this method to perform well in a simulation study. Here we provide rigorous large-sample theory for the proposed estimators of both the regression coefficient vector and the dependence parameter, including consistent variance estimators.
13.
A polar coordinate transformation for estimating bivariate survival functions with randomly censored and truncated data
This paper proposes a new estimator for bivariate distribution functions under random truncation and random censoring. The new method is based on a polar coordinate transformation, which enables us to transform a bivariate survival function into a univariate survival function. A consistent estimator for the transformed univariate function is proposed, and the univariate estimator is then transformed back into a bivariate estimator. The estimator converges weakly to a zero-mean Gaussian process with an easily estimated covariance function. A consistent estimate of the truncation probability is also provided. Numerical studies show that the distribution estimator and the truncation probability estimator perform remarkably well.
14.
Plotting log−log survival functions against time for different categories, or combinations of categories, of covariates is perhaps the easiest and most commonly used graphical tool for checking the proportional hazards (PH) assumption. One problem with the technique is that the covariates need to be categorical, or be made categorical through appropriate grouping of continuous covariates. Other limitations include the subjectivity of decisions based on visual judgement of the plots and the frequent inconclusiveness that arises when the number of categories and/or covariates grows. This paper proposes a non-graphical (numerical) test of the PH assumption that makes use of the log−log survival function. The test enables proportionality to be checked for categorical as well as continuous covariates and overcomes the other limitations of the graphical method. Observed power and size of the test are compared with those of some other tests of its kind through simulation experiments. The simulations demonstrate that the proposed test is more powerful than some of the most sensitive tests in the literature across a wide range of survival situations. An example of the test is given using the widely used gastric cancer data.
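A brief sketch of the graphical check that the proposed numerical test is meant to replace: Kaplan-Meier curves are computed per group and log(−log S(t)) is plotted (here against log time); under proportional hazards the curves should be roughly parallel. The data generation and group labels are illustrative.

```python
import numpy as np
import matplotlib.pyplot as plt

def kaplan_meier(time, event):
    """Plain Kaplan-Meier estimate evaluated at the observed event times."""
    uniq = np.unique(time[event == 1])
    surv, s = [], 1.0
    for t in uniq:
        at_risk = np.sum(time >= t)
        deaths = np.sum((time == t) & (event == 1))
        s *= 1.0 - deaths / at_risk
        surv.append(s)
    return uniq, np.array(surv)

rng = np.random.default_rng(2)
for label, rate in [("group A", 0.10), ("group B", 0.25)]:
    t = rng.exponential(1.0 / rate, 300)
    c = rng.uniform(0.0, 30.0, 300)
    obs, ev = np.minimum(t, c), (t <= c).astype(int)
    grid, s = kaplan_meier(obs, ev)
    keep = (s > 0) & (s < 1)
    plt.plot(np.log(grid[keep]), np.log(-np.log(s[keep])),
             drawstyle="steps-post", label=label)

plt.xlabel("log(time)")
plt.ylabel("log(-log S(t))")
plt.legend()
plt.show()
```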
15.
A fundamental problem with the latent-time framework in competing risks is the lack of identifiability of the joint distribution. Given observed covariates, along with assumptions as to the form of their effect, identifiability may obtain. However, it is difficult to check any assumptions about form, since a more general model may lose identifiability. This paper considers a general framework for modelling the effect of covariates, with the single assumption that the copula dependency structure of the latent times is invariant to the covariates. This framework consists of a set of functions: the covariate-time transformations. The main result produces bounds on these functions, which are derived solely from the crude incidence functions. These bounds are a useful model-checking tool when considering the covariate-time transformation resulting from any particular set of further assumptions. An example is given in which the widely used assumption of independent competing risks is checked.
16.
Samuel O. M. Manda. Communications in Statistics - Theory and Methods, 2013, 42(4): 769-782
Large-scale sample surveys often collect survival times that are clustered at a number of hierarchical levels. Only the case where three levels are nested is considered here: individual response times (level-1) are grouped into larger units (level-2), which in turn are grouped into much larger units (level-3). It is assumed that individuals in a unit share a common, unobservable and unit-specific random frailty, which induces an association between survival times within the unit. A Bayesian hierarchical analysis of the data is examined by modelling the survival times (level-1) with a semiparametric Cox proportional hazards model; the level-2 and level-3 random frailty effects are assumed independent and are modelled with gamma distributions. The complete posterior distribution of all the model parameters is estimated using the Gibbs sampler, a Monte Carlo method.
17.
Kang FangYuan. Communications in Statistics - Theory and Methods, 2018, 47(8): 1901-1912
In this article, an additive rate model is proposed for clustered recurrent event data with a terminal event. The subjects are grouped into clusters by some shared characteristic, and for the clustered subjects the recurrent event process is precluded by death. An estimating equation is developed for the model parameters and the baseline rate function. The asymptotic properties of the resulting estimators are established. In addition, a goodness-of-fit test is presented to assess the adequacy of the model. The finite-sample behavior of the proposed estimators is evaluated through simulation studies, and an application to bladder cancer data is presented for illustration.
18.
Clustered survival data arise often in clinical trial design, where correlated subunits from the same cluster are randomized to different treatment groups. Under such a design, we consider the problem of constructing a confidence interval for the difference between two median survival times given the covariates. We use a Cox gamma frailty model to account for the within-cluster correlation. Based on the conditional confidence intervals, we can identify the possible range of covariates over which the two groups would provide different median survival times. The associated coverage probability and the expected length of the proposed interval are investigated via a simulation study. The implementation of the confidence intervals is illustrated using a real data set.
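For context, a standard Cox model with a shared gamma frailty (the general form from the literature, not necessarily the exact parametrization of the paper) specifies, for subunit j of cluster i,

$$\lambda_{ij}(t\mid\omega_i)=\omega_i\,\lambda_0(t)\exp(\beta^{\mathsf T}Z_{ij}),\qquad \omega_i\sim\mathrm{Gamma}(1/\theta,\,1/\theta),$$

so that E(ω_i) = 1 and Var(ω_i) = θ, with θ quantifying the within-cluster dependence that the confidence intervals must account for.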
19.
In this paper, we compare three residuals to assess departures from the error assumptions as well as to detect outlying observations in log-Burr XII regression models with censored observations. These residuals can also be used for the log-logistic regression model, which is a special case of the log-Burr XII regression model. For different parameter settings, sample sizes and censoring percentages, various simulation studies are performed and the empirical distribution of each residual is displayed and compared with the standard normal distribution. These studies suggest that the residual analysis usually performed in normal linear regression models can be straightforwardly extended to the modified martingale-type residual in log-Burr XII regression models with censored data.
20.
In medical studies, there is interest in inferring the marginal distribution of a survival time subject to competing risks. The Kyushu Lipid Intervention Study (KLIS) was a clinical study of hypercholesterolemia in which pravastatin treatment was compared with conventional treatment. The primary endpoint was time to coronary heart disease (CHD) events. In this study, however, some subjects died from causes other than CHD or were censored due to loss to follow-up. Because the treatments were targeted at reducing CHD events, the investigators were interested in the effect of treatment on CHD events in the absence of deaths or events from causes other than CHD. In this paper, we present a method for estimating treatment-group-specific marginal survival curves of time-to-event data in the presence of dependent competing risks. The proposed method is a straightforward extension of the Inverse Probability of Censoring Weighted (IPCW) method to settings with more than one reason for censoring. The results of our analysis show that the IPCW marginal incidence for CHD was almost the same as the lower bound obtained by assuming that subjects with competing events were censored at the end of all follow-up. This result provides reassurance that the results of KLIS are robust to competing risks.
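A minimal sketch of the IPCW mechanics described above, using a covariate-free ("reverse" Kaplan-Meier) censoring model for brevity; in that special case the weighted estimate essentially reduces to the ordinary Kaplan-Meier estimate, and the point of the method is to extend the weights to covariate-dependent censoring and to more than one reason for censoring. All names, distributions and values are illustrative.

```python
import numpy as np

def km_curve(time, event):
    """Kaplan-Meier curve: unique event times and survival just after each."""
    uniq = np.unique(time[event == 1])
    surv, s = [], 1.0
    for t in uniq:
        at_risk = np.sum(time >= t)
        d = np.sum((time == t) & (event == 1))
        s *= 1.0 - d / at_risk
        surv.append(s)
    return uniq, np.array(surv)

def surv_just_before(grid_t, grid_s, x):
    """Value of the step curve just before each x (left limit)."""
    idx = np.searchsorted(grid_t, x, side="left")   # jumps strictly before x
    return np.concatenate(([1.0], grid_s))[idx]

rng = np.random.default_rng(3)
n = 2000
t_event = rng.exponential(10.0, n)          # time to the event of interest
t_cens = rng.exponential(15.0, n)           # pooled censoring / competing causes
obs = np.minimum(t_event, t_cens)
delta = (t_event <= t_cens).astype(int)

# Censoring survival G from the reverse Kaplan-Meier (censoring as the "event").
gt, gs = km_curve(obs, 1 - delta)
g_minus = surv_just_before(gt, gs, obs)

# IPCW weights for observed events; marginal cumulative incidence on a grid.
w = np.zeros(n)
w[delta == 1] = 1.0 / g_minus[delta == 1]
grid = np.linspace(0.0, 20.0, 5)
incidence = np.array([(w * (obs <= s)).mean() for s in grid])
print(np.round(incidence, 3))
```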