Similar Literature
1.
Sun W, Li H. Lifetime Data Analysis 2004, 10(3): 229–245
The additive genetic gamma frailty model has been proposed for genetic linkage analysis of complex diseases to account for variable age of onset and possible covariate effects. To avoid ascertainment bias in parameter estimates, retrospective likelihood ratio tests are often used, which may lose efficiency due to conditioning. This paper considers the setting in which sibships are ascertained through having at least two sibs affected with the disease before a given age, and provides two approaches for estimating the parameters of the additive gamma frailty model. One approach is based on the likelihood function conditional on the ascertainment event; the other maximizes a full ascertainment-adjusted likelihood. Explicit forms for both likelihood functions are derived. Simulation studies indicate that when the baseline hazard function can be correctly pre-specified, both approaches give accurate estimates of the model parameters. However, when the baseline hazard function must be estimated simultaneously, only the ascertainment-adjusted likelihood method gives unbiased parameter estimates. These results imply that the ascertainment-adjusted likelihood ratio test under the additive genetic gamma frailty model may be used for genetic linkage analysis.
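As a rough illustration of the ascertainment scheme described above, the sketch below simulates sib pairs under a shared gamma frailty with a constant baseline hazard and keeps only sibships with at least two sibs affected before a cutoff age. All parameter values (frailty shape `k`, baseline rate `lam0`, cutoff age) are illustrative and not taken from the paper.

```python
import random

random.seed(1)

def simulate_sibship(k=0.5, lam0=0.02, n_sibs=2):
    # Shared gamma frailty Z (mean 1) multiplies a constant baseline
    # hazard lam0; conditional on Z, onset age T ~ Exponential(Z * lam0).
    z = random.gammavariate(k, 1.0 / k)
    return [random.expovariate(z * lam0) for _ in range(n_sibs)]

def ascertained(ages, cutoff=60.0):
    # The sibship enters the sample only if at least two sibs are
    # affected before the cutoff age -- the ascertainment event.
    return sum(a < cutoff for a in ages) >= 2

sample = [s for s in (simulate_sibship() for _ in range(20000)) if ascertained(s)]
# Ascertainment selects high-frailty sibships, so onset ages in the
# sample are systematically early -- the source of ascertainment bias.
mean_age = sum(a for s in sample for a in s) / (2 * len(sample))
```

Because ascertainment selects high-frailty sibships, fitting a model to `sample` without a correction would overstate the baseline risk; this is the bias the two likelihood approaches in the abstract are designed to remove.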

2.
During follow-up, patients with cancer can experience several types of recurrent events and can also die. Over the last decades, several joint models have been proposed to handle recurrent events with a dependent terminal event, most of which require the proportional hazards assumption. Over a long follow-up, this assumption may be violated. We propose a joint frailty model for two types of recurrent events and a dependent terminal event that accounts for potential dependencies between events and allows time-varying coefficients, modelled with regression splines. Baseline hazard functions are estimated with piecewise constant functions or with cubic M-spline functions. Parameters are estimated by maximum likelihood, and likelihood ratio tests are performed to test the time dependency and the statistical association of the covariates. The model was motivated by breast cancer data with a maximum follow-up close to 20 years.

3.
The accelerated hazard model in survival analysis assumes that the covariate effect acts on the time scale of the baseline hazard rate. In this paper, we study the stochastic properties of the mixed accelerated hazard model, in which the covariate is taken to be unobservable. We build a dependence structure between the population variable and the covariate, and present some preservation properties. Using well-known stochastic orders, we compare two mixed accelerated hazard models arising from different choices of distribution for the unobservable covariate or different baseline hazard rate functions.
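The time-scale action of the covariate can be written as λ(t | z) = λ0(t · e^(βz)). A minimal sketch with an assumed Weibull baseline (all parameter values are illustrative, not from the paper):

```python
import math

def weibull_baseline_hazard(t, shape=1.5, scale=10.0):
    # lambda0(t) = (shape / scale) * (t / scale) ** (shape - 1)
    return (shape / scale) * (t / scale) ** (shape - 1)

def accelerated_hazard(t, z, beta=0.3):
    # Accelerated hazards model: the covariate rescales the *time argument*
    # of the baseline hazard, lambda(t | z) = lambda0(t * exp(beta * z)),
    # unlike proportional hazards, which rescales the hazard's *value*.
    return weibull_baseline_hazard(t * math.exp(beta * z))
```

At z = 0 the two hazards coincide for every t, a distinguishing feature of the accelerated hazards family compared with proportional hazards.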

4.
In the semiparametric additive hazard regression model of McKeague and Sasieni (Biometrika 81: 501–514), the hazard contributions of some covariates are allowed to change over time, without parametric restrictions (Aalen model), while the contributions of other covariates are assumed to be constant. In this paper, we develop tests that help to decide which of the covariate contributions indeed change over time. The remaining covariates may be modelled with constant hazard coefficients, thus reducing the number of curves that have to be estimated nonparametrically. Several bootstrap tests are proposed. The behavior of the tests is investigated in a simulation study. In a practical example, the tests consistently identify covariates with constant and with changing hazard contributions.

5.
For survival data, mark variables are only observed at uncensored failure times, and it is of interest to investigate whether there is any relationship between the failure time and the mark variable. The additive hazards model, focusing on hazard differences rather than hazard ratios, has been widely used in practice. In this article, we propose a mark-specific additive hazards model in which both the regression coefficient functions and the baseline hazard function depend nonparametrically on a continuous mark. An estimating equation approach is developed to estimate the regression functions, and the asymptotic properties of the resulting estimators are established. In addition, some formal hypothesis tests are constructed for various hypotheses concerning the mark-specific treatment effects. The finite sample behavior of the proposed estimators is evaluated through simulation studies, and an application to a data set from the first HIV vaccine efficacy trial is provided.

6.
We consider a class of non-proportional hazards regression models whose hazard specification consists of a power form of cross-effects on the baseline hazard function. The primary goal of these models is to deal with settings in which heterogeneous distribution shapes of survival times may be present in populations characterized by some observable covariates. Although the effects of such heterogeneity can be seen explicitly through crossing cumulative hazards in k-sample problems, they are barely visible in a one-sample regression setting. Hence, heterogeneity of this kind may not be noticed and, more importantly, may result in severely misleading inference, because the partial likelihood approach cannot eliminate the unknown cumulative baseline hazard functions in this setting. For coherent statistical inference, a system of martingale processes is taken as a basis with which, together with the method of sieves, an overidentified estimating equation approach is proposed. A Pearson χ2-type goodness-of-fit test statistic is derived as a by-product. An example with data on gastric cancer patients' survival times is analysed.
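The crossing-cumulative-hazards phenomenon mentioned above is easy to reproduce numerically: two Weibull groups with different shape parameters have cumulative hazards that cross (the parameter values here are illustrative, not from the paper).

```python
def cum_hazard(t, shape, scale=1.0):
    # Weibull cumulative hazard: H(t) = (t / scale) ** shape
    return (t / scale) ** shape

def g0(t):
    return cum_hazard(t, shape=0.8)   # group 0: decreasing hazard rate

def g1(t):
    return cum_hazard(t, shape=1.6)   # group 1: increasing hazard rate

# g1 lies below g0 early and above it late; the curves cross at t = 1.
# This heterogeneity in distribution shape is exactly what the
# cross-effects specification is designed to capture.
gap_early = g1(0.5) - g0(0.5)   # negative
gap_late = g1(4.0) - g0(4.0)    # positive
```

No proportional hazards model can produce this pattern, since under proportionality one cumulative hazard is a constant multiple of the other and the curves never cross.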

7.
In this paper we propose a quantile survival model to analyze censored data. This approach provides an effective way to construct a proper model for the survival time conditional on covariates. Once a quantile survival model for the censored data is established, the density, survival, or hazard function of the survival time can be obtained easily. For illustration, we focus on a model based on the generalized lambda distribution (GLD). The GLD and many other quantile-function models are defined only through their quantile functions; no closed-form expressions are available for the equivalent density or distribution functions. We also develop a Bayesian Markov chain Monte Carlo (MCMC) method for parameter estimation. Extensive simulation studies have been conducted, and both the simulation and application results show that the proposed quantile survival models can be very useful in practice.
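Because the GLD is defined through its quantile function Q(u), survival times can be drawn directly by inverse-transform sampling, T = Q(U) with U ~ Uniform(0, 1). A sketch using the Ramberg–Schmeiser parameterisation with illustrative parameter values (not from the paper):

```python
import random

random.seed(0)

def gld_quantile(u, l1=5.0, l2=0.5, l3=0.2, l4=0.1):
    # Ramberg-Schmeiser GLD quantile function:
    # Q(u) = l1 + (u**l3 - (1 - u)**l4) / l2
    # With l2, l3, l4 > 0 this Q is strictly increasing, as required.
    return l1 + (u ** l3 - (1 - u) ** l4) / l2

# Inverse-transform sampling: T = Q(U), U ~ Uniform(0, 1)
times = [gld_quantile(random.random()) for _ in range(1000)]

# Since S(Q(u)) = 1 - u, about half the draws should exceed the median Q(0.5)
median = gld_quantile(0.5)
frac_above = sum(t > median for t in times) / len(times)
```

This is exactly why quantile-defined families are convenient for survival modelling: simulation and quantile-based summaries need no closed-form density.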

8.
The most widely used model for multidimensional survival analysis is the Cox model. This model is semi-parametric, since its hazard function is the product of an unspecified baseline hazard and a parametric functional form relating the hazard to the covariates. We consider a more flexible, fully nonparametric proportional hazards model in which the functional form of the covariate effect is left unspecified. In this model, estimation is based on the maximum likelihood method. Results from a Monte Carlo experiment and from real data are presented. Finally, the advantages and limitations of the approach are discussed.

9.
There has been extensive interest in inference methods for survival data when some covariates are subject to measurement error. It is known that standard inferential procedures produce biased estimates if measurement error is not taken into account. For the Cox proportional hazards model, a number of methods have been proposed to correct the bias induced by measurement error, with attention centering on the partial likelihood function. It is also of interest to understand the impact of mismeasured covariates on estimation of the baseline hazard function. In this paper we employ a weakly parametric form for the baseline hazard function and propose simple unbiased estimating functions for parameter estimation. The proposed method is easy to implement, and it reveals the connection between the naive method that ignores measurement error and the corrected method that accounts for it. Simulation studies evaluate the performance of the estimators as well as the impact of ignoring measurement error in covariates. As an illustration, we apply the proposed methods to a data set from the Busselton Health Study [Knuiman, M.W., Cullent, K.J., Bulsara, M.K., Welborn, T.A., Hobbs, M.S.T., 1994. Mortality trends, 1965 to 1989, in Busselton, the site of repeated health surveys and interventions. Austral. J. Public Health 18, 129–135].

10.
Plotting log-log survival functions against time for different categories, or combinations of categories, of covariates is perhaps the easiest and most commonly used graphical tool for checking the proportional hazards (PH) assumption. One problem with this technique is that the covariates must be categorical, or be made categorical by grouping continuous covariates. Other limitations are the subjectivity of decisions based on eye-judgment of the plots and the frequent inconclusiveness that arises as the number of categories and/or covariates grows. This paper proposes a non-graphical (numerical) test of the PH assumption that makes use of the log-log survival function. The test enables checking proportionality for categorical as well as continuous covariates and overcomes the other limitations of the graphical method. The observed power and size of the test are compared with other tests of its kind through simulation experiments, which demonstrate that the proposed test is more powerful than some of the most sensitive tests in the literature across a wide range of survival situations. The test is illustrated on the widely used gastric cancer data.
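The graphical check rests on the identity that under proportional hazards, S1(t) = S0(t)^r for hazard ratio r, so log(-log S1(t)) - log(-log S0(t)) = log r for all t: the log-log curves are parallel. A numerical check of this identity, with an exponential baseline and hazard ratio chosen purely for illustration:

```python
import math

def s0(t, lam=0.1):
    # Baseline survival function: exponential with rate lam
    return math.exp(-lam * t)

r = 2.5  # hazard ratio; under proportional hazards S1(t) = S0(t) ** r
gaps = [math.log(-math.log(s0(t) ** r)) - math.log(-math.log(s0(t)))
        for t in (1.0, 3.0, 7.0, 12.0)]
# Every gap equals log(r): the two plotted curves are vertically parallel
```

A numerical test of the assumption amounts to asking whether these gaps, estimated from data, are constant over time rather than judging parallelism by eye.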

11.
In the accelerated hazards regression model with censored data, estimating the covariance matrices of the regression parameters is difficult because they involve the unknown baseline hazard function and its derivative. This paper provides simple but reliable procedures that yield asymptotically normal estimators whose covariance matrices can be easily estimated. A class of weight functions is introduced that yields estimators whose asymptotic covariance matrices do not involve the derivative of the unknown hazard function. Based on the estimators obtained from different weight functions, goodness-of-fit tests are constructed to check the adequacy of the accelerated hazards regression model. Numerical simulations show that the estimators and tests perform well. The procedures are illustrated with a real leukemia example, where the issue of interest is a comparison of two groups of patients who received two different kinds of bone marrow transplant. The difference between the two groups is found to be well described by a time-scale change in the hazard functions, i.e., by the accelerated hazards model.

12.
In the prospective study of a finely stratified population, one individual from each stratum is chosen at random for the “treatment” group and one for the “non-treatment” group. For each individual the probability of failure is a logistic function of parameters designating the stratum, the treatment and a covariate. Uniformly most powerful unbiased tests for the treatment effect are given. These tests are generally cumbersome, but if the covariate is dichotomous, the tests and confidence intervals are simple. Readily usable (but non-optimal) tests are also proposed for polytomous covariates and factorial designs. These are then adapted to retrospective studies (in which one “success” and one “failure” per stratum are sampled). Tests for retrospective studies with a continuous “treatment” score are also proposed.

13.
Monte Carlo methods are used to examine the small-sample properties of 11 test statistics for comparing several treatments with respect to their mortality experience while adjusting for covariables. The test statistics arise from three distinct models: the parametric, the semiparametric and the rank analysis of covariance (Quade, 1967) models. Four tests (likelihood ratio, Wald, conditional and unconditional score tests) from each of the first two models and three tests (based on rank scores) from the last model are discussed. The empirical size and power of the tests are investigated under a proportional hazards model in three situations: (1) the baseline hazard is correctly assumed to be exponential, (2) the baseline hazard is incorrectly assumed to be exponential, and (3) a treatment-covariate interaction is omitted from the analysis.

14.
Including time-varying covariates is a popular extension to the Cox model and a suitable approach for dealing with non-proportional hazards. However, partial likelihood (PL) estimation of this model has three shortcomings: (i) estimated regression coefficients can be less accurate in small samples with heavy censoring; (ii) the baseline hazard is not directly estimated; and (iii) a covariance matrix for both the regression coefficients and the baseline hazard is not easily produced. We address these by developing a maximum likelihood (ML) approach that jointly estimates the regression coefficients and the baseline hazard using a constrained optimisation ensuring the latter's non-negativity. We establish the asymptotic properties of these estimates, show via simulation their increased accuracy compared with PL estimates in small samples, and show that our method produces smoother baseline hazard estimates than the Breslow estimator. Finally, we apply our method to two examples, including an important real-world financial example estimating time to default for retail home loans. Using our ML estimate of the baseline hazard gives much clearer corroborative evidence of the ‘humped hazard’, whereby the risk of loan default rises to a peak and then falls.
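Joint maximum likelihood over regression coefficients and a baseline hazard becomes tractable once the baseline hazard is given a finite parameterisation. The sketch below writes the log-likelihood for a piecewise-constant baseline hazard with no covariates (a simplification for illustration, not the paper's exact formulation); the non-negativity of the hazard levels `lams` is the constraint the optimisation must enforce.

```python
import math

def loglik_pch(lams, cuts, times, events):
    # Log-likelihood under a piecewise-constant hazard: level lams[j]
    # on the interval [cuts[j], cuts[j+1]).  For each subject,
    # loglik = d * log(lambda(t)) - H(t), with H the cumulative hazard.
    ll = 0.0
    for t, d in zip(times, events):
        cum = 0.0
        for j, lam in enumerate(lams):
            lo, hi = cuts[j], cuts[j + 1]
            cum += lam * max(0.0, min(t, hi) - lo)  # exposure in interval j
            if lo <= t < hi and d:
                ll += math.log(lam)                 # event in interval j
        ll -= cum
    return ll
```

With a single interval this reduces to the exponential log-likelihood, which gives a convenient correctness check; maximising over `lams` (subject to `lams >= 0`) and the regression coefficients together is the joint scheme the abstract describes.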

15.
For time-to-event outcomes, current methods for sample size determination are based on the proportional hazards model. However, if the proportionality assumption fails to capture the relationship between the hazard and the covariates, the proportional hazards model is not suitable for analyzing the survival data. The accelerated failure time (AFT) model is an alternative. In this paper, we design a multiregional trial under the assumption that the relationship between the hazard and the treatment effect follows the AFT model. The log-rank test is employed to deal with heterogeneous effect sizes among regions. The test statistic for the overall treatment effect is used to determine the total sample size for the multiregional trial, and the proposed criteria are used to rationalize the partition of the sample size across regions.

16.
We develop a likelihood ratio test for an abrupt change point in Weibull hazard functions with covariates, including the two-piece constant hazard as a special case. We first define the log-likelihood ratio test statistic as the supremum of the profile log-likelihood ratio process over the interval that may contain an unknown change point. Using local asymptotic normality (LAN) and empirical measure arguments, we show that the profile log-likelihood ratio process converges weakly to a quadratic form of Gaussian processes. We determine the critical values of the test and discuss how it can be used for model selection. We also illustrate the method on the Chronic Granulomatous Disease (CGD) data.

17.
We propose a new method for fitting proportional hazards models with error-prone covariates. Regression coefficients are estimated by solving an estimating equation that is the average of the partial likelihood scores based on imputed true covariates. For the purpose of imputation, a linear spline model is assumed on the baseline hazard. We discuss consistency and asymptotic normality of the resulting estimators, and propose a stochastic approximation scheme to obtain the estimates. The algorithm is easy to implement and reduces to the ordinary Cox partial likelihood approach when the measurement error has a degenerate distribution. Simulations indicate high efficiency and robustness. We consider the special case where error-prone replicates are available on the unobserved true covariates. As expected, increasing the number of replicates for the unobserved covariates increases efficiency and reduces bias. We illustrate the practical utility of the proposed method with an Eastern Cooperative Oncology Group clinical trial in which a genetic marker, c-myc expression level, is subject to measurement error.

18.
For time-to-event outcomes, current methods for sample size determination are based on the proportional hazards model. However, if the proportionality assumption fails to capture the relationship between the hazard and the covariates, the proportional hazards model is not suitable for analyzing the survival data. The accelerated failure time model is an alternative. In this article, we design a multi-regional phase III clinical trial under the assumption that the relationship between the hazard and the treatment effect follows the accelerated failure time model. The log-rank test is employed to deal with heterogeneous effect sizes among regions. The test statistic for the overall treatment effect is used to determine the total sample size for the multi-regional trial, and the consistent trend is used to rationalize the partition of the sample size across regions.

19.
Missing covariate values are a common problem in survival analysis. In this paper we propose a novel method for the Cox regression model that is close to maximum likelihood but avoids the use of the EM algorithm. It exploits the fact that the observed hazard function is multiplicative in the baseline hazard function, the idea being to profile out this function before estimating the parameter of interest. In this step a Breslow-type estimator is used for the cumulative baseline hazard function. We focus on the situation where the observed covariates are categorical, which allows us to calculate estimators without assuming anything about the distribution of the covariates. We show that the proposed estimator is consistent and asymptotically normal, and derive a consistent estimator of the variance-covariance matrix that does not involve any choice of a perturbation parameter. Moderate-sample-size performance of the estimators is investigated via simulation and through application to a real data example.
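The profiling step described above rests on a Breslow-type estimator of the cumulative baseline hazard: at each event time, the increment is the number of events divided by the sum of exp(β'x) over the risk set. A minimal sketch assuming no tied event times, with the risk scores exp(β'x) taken as given:

```python
def breslow(times, events, risk_scores):
    # Breslow estimator of the cumulative baseline hazard:
    # dH0(t_i) = 1 / sum_{j in risk set at t_i} risk_scores[j],
    # where risk_scores[j] = exp(beta' x_j).  Assumes no tied event times.
    order = sorted(range(len(times)), key=lambda i: times[i])
    H, out = 0.0, []
    for pos, i in enumerate(order):
        if events[i]:
            at_risk = sum(risk_scores[j] for j in order[pos:])
            H += 1.0 / at_risk
            out.append((times[i], H))  # (event time, cumulative hazard)
    return out
```

With all risk scores equal to 1 this reduces to the Nelson–Aalen estimator, which makes a quick correctness check easy.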

20.
The analysis of covariance is a technique used to improve the power of a k-sample test by adjusting for concomitant variables. If the endpoint is the time of survival, and some observations are right censored, the score statistic from the Cox proportional hazards model is the method most commonly used to test the equality of conditional hazard functions. In many situations, however, the proportional hazards model assumptions are not satisfied: the relative risk function is not time invariant, or it cannot be represented as a log-linear function of the covariates. We propose an asymptotically valid k-sample test statistic for comparing conditional hazard functions that requires neither the assumption of proportional hazards, nor a parametric specification of the relative risk function, nor randomization of group assignment. Simulation results indicate that the performance of this statistic is satisfactory. The methodology is demonstrated on a prostate cancer data set.
