Similar Articles
Found 20 similar articles.
1.
In this article, the proportional hazards model with Weibull frailty, which lies outside the exponential family, is used to analyse right-censored longitudinal survival data. Complex multidimensional integrals are avoided by using hierarchical likelihood to estimate the regression parameters and to predict the realizations of the random effects. The adjusted profile hierarchical likelihood is adopted to estimate the parameters of the frailty distribution, using both first- and second-order methods. Simulation studies indicate that the regression-parameter estimates in the Weibull frailty model are accurate, comparable to those from the gamma and lognormal frailty models. Two published data sets are used for illustration.
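The data-generating model behind this abstract can be sketched by simulation: under a proportional hazards model with a constant baseline hazard and a cluster-level Weibull frailty rescaled to unit mean, conditional event times are exponential with subject-specific rates. A minimal sketch; all parameter values are hypothetical:

```python
import numpy as np
from math import gamma

rng = np.random.default_rng(42)

n_clusters, cluster_size = 200, 4
beta = 0.5                         # hypothetical regression coefficient
lam = 0.1                          # hypothetical constant baseline hazard
shape = 1.5                        # hypothetical Weibull frailty shape

# Weibull frailty per cluster, rescaled to unit mean for identifiability
u = rng.weibull(shape, n_clusters) / gamma(1 + 1 / shape)

x = rng.normal(size=(n_clusters, cluster_size))
rate = lam * u[:, None] * np.exp(beta * x)   # subject-specific hazard
t = rng.exponential(1.0 / rate)              # latent event times
c = 10.0                                     # administrative censoring time
time = np.minimum(t, c)
event = (t <= c).astype(int)
```

An h-likelihood fit would then jointly maximize over the regression parameters and the unobserved frailties; data generated this way is a natural test bed for comparing Weibull, gamma, and lognormal frailty fits.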

2.
In this article, we develop a Bayesian approach for estimating two correlated cure frailty models that extend the cure frailty models introduced by Yin [34]. We use two different types of frailty with a bivariate log-normal distribution in place of the gamma distribution. The likelihood function is constructed from a piecewise exponential distribution, and the model parameters are estimated by the Markov chain Monte Carlo method. The models are compared against the Cox correlated frailty model with a log-normal distribution. A real data set on bilateral corneal graft rejection is used to compare the models; based on the deviance information criterion, the results show the advantage of the proposed models.

3.
Shared frailty models allow for unobserved heterogeneity or for statistical dependence between observed survival data. The most commonly used estimation procedure in frailty models is the EM algorithm, but this approach yields a discrete estimator of the distribution and consequently does not allow direct estimation of the hazard function. We show how maximum penalized likelihood estimation can be applied to nonparametric estimation of a continuous hazard function in a shared gamma-frailty model with right-censored and left-truncated data. We examine the problem of obtaining variance estimators for the regression coefficients, the frailty parameter and the baseline hazard function. Some simulations for the proposed estimation procedure are presented. A prospective cohort (Paquid) with grouped survival data illustrates the method, which was used to analyse the relationship between environmental factors and the risk of dementia.
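The dependence induced by a shared gamma frailty has a well-known closed form: for a frailty with unit mean and variance θ, pairs within a cluster follow a Clayton-type model with Kendall's tau equal to θ/(θ+2). A quick Monte Carlo sketch of this relationship, with a hypothetical frailty variance:

```python
import numpy as np
from scipy.stats import kendalltau

rng = np.random.default_rng(0)

theta = 1.0                          # hypothetical frailty variance
n = 20000
w = rng.gamma(1 / theta, theta, n)   # gamma frailty: mean 1, variance theta
t1 = rng.exponential(1 / w)          # paired event times, conditionally
t2 = rng.exponential(1 / w)          # independent given the frailty

tau, _ = kendalltau(t1, t2)
print(round(tau, 2))                 # close to theta / (theta + 2) = 1/3
```

This is why the frailty variance doubles as an interpretable association parameter in the shared gamma-frailty model.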

4.
This paper proposes a class of nonparametric estimators for the bivariate survival function estimation under both random truncation and random censoring. In practice, the pair of random variables under consideration may have certain parametric relationship. The proposed class of nonparametric estimators uses such parametric information via a data transformation approach and thus provides more accurate estimates than existing methods without using such information. The large sample properties of the new class of estimators and a general guidance of how to find a good data transformation are given. The proposed method is also justified via a simulation study and an application on an economic data set.

5.
The present work demonstrates an application of a random effects model for analyzing birth intervals that are clustered into geographical regions. Observations from the same cluster are assumed to be correlated because they usually share certain unobserved characteristics. Ignoring the correlations among the observations may lead to incorrect standard errors for the parameter estimates of interest. Besides comparing Cox's proportional hazards model with the random effects model for analysing geographically clustered time-to-event data, this paper also reports important demographic and socioeconomic factors that may affect the length of birth intervals of Bangladeshi women.

6.
The p-variate Burr distribution has been derived, developed, discussed and deployed by various authors. In this paper a score statistic for testing independence of the components, equivalent to testing p independent Weibull distributions against a p-variate Burr alternative, is obtained. Its null and non-null properties are investigated with and without nuisance parameters, including the possibility of censoring. Two applications to real data are described. The test is also discussed in the context of other Weibull mixture models.
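The connection the abstract exploits can be checked directly: mixing p independent Weibull components over a shared gamma frailty with shape k and unit scale yields the p-variate Burr survivor function S(t) = (1 + Σ_j λ_j t_j^{c_j})^{-k}. A Monte Carlo sketch with hypothetical parameters:

```python
import numpy as np

rng = np.random.default_rng(1)

k = 2.0                           # hypothetical gamma frailty shape
lam = np.array([1.0, 0.5, 2.0])   # hypothetical Weibull scale parameters
c = np.array([1.5, 1.0, 2.0])     # hypothetical Weibull shape parameters
n = 200000

w = rng.gamma(k, 1.0, n)          # shared gamma frailty
# Conditional on w, components are independent Weibull:
# S(t | w) = exp(-w * lam * t**c)  =>  t = (E / (w * lam))**(1/c), E ~ Exp(1)
e = rng.exponential(1.0, (n, 3))
t = (e / (w[:, None] * lam)) ** (1 / c)

t0 = np.array([0.5, 0.8, 0.3])    # arbitrary evaluation point
emp = np.mean(np.all(t > t0, axis=1))
theo = (1 + np.sum(lam * t0 ** c)) ** (-k)
print(round(emp, 3), round(theo, 3))   # empirical vs. closed-form survivor
```

Under independence the frailty is degenerate (k → ∞ with mean fixed), which is the null the score statistic targets.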

7.
The Weibull distribution is a natural starting point in the modelling of failure times in reliability, material strength data and many other applications that involve lifetime data. In recent years there has been a growing interest in modelling heterogeneity within this context. A natural approach is to consider a mixture, either discrete or continuous, of Weibull distributions. A judicious choice of mixing distribution yields a tractable and flexible generalization of the Weibull distribution. In this note a score test for detecting heterogeneity in this context is discussed and illustrated using some infant nutrition data.

8.
A Multivariate Model for Repeated Failure Time Measurements
A parametric multivariate failure time distribution is derived from a frailty-type model with a particular frailty distribution. It covers as special cases certain distributions which have been used for multivariate survival data in recent years. Some properties of the distribution are derived: its marginal and conditional distributions lie within the parametric family, and association between the component variates can be positive or, to a limited extent, negative. The simple closed form of the survivor function is useful for right-censored data, as occur commonly in survival analysis, and for calculating uniform residuals. Also featured is the distribution of ratios of paired failure times. The model is applied to data from the literature.

9.
We decompose the score statistic for testing for shared finite variance frailty in multivariate lifetime data into marginal and covariance-based terms. The null properties of the covariance-based statistic are derived in the context of parametric lifetime models. Its non-null properties are estimated using simulation and compared with those of the score test and two likelihood ratio tests when the underlying lifetime distribution is Weibull. Some examples are used to illustrate the covariance-based test. A case is made for using the covariance-based statistic as a simple diagnostic procedure for shared frailty in a parametric exploratory analysis of multivariate lifetime data and a link to the bivariate Clayton–Oakes copula model is shown.

10.
The authors consider regression analysis for binary data collected repeatedly over time on members of numerous small clusters of individuals sharing a common random effect that induces dependence among them. They propose a mixed model that can accommodate both these structural and longitudinal dependencies. They estimate the parameters of the model consistently and efficiently using generalized estimating equations. They show through simulations that their approach yields significant gains in mean squared error when estimating the random effects variance and the longitudinal correlations, while providing estimates of the fixed effects that are just as precise as under a generalized penalized quasi-likelihood approach. Their method is illustrated using smoking prevention data.

11.
Multivariate failure time data arise when data consist of clusters in which the failure times may be dependent. A popular approach to such data is the marginal proportional hazards model with estimation under the working independence assumption. In some contexts, however, it may be more reasonable to use the marginal additive hazards model. We derive asymptotic properties of the Lin and Ying estimators for the marginal additive hazards model for multivariate failure time data. Furthermore we suggest estimating equations for the regression parameters and association parameters in parametric shared frailty models with marginal additive hazards by using the Lin and Ying estimators. We give the large sample properties of the estimators arising from these estimating equations and investigate their small sample properties by Monte Carlo simulation. A real example is provided for illustration.

12.
The computation in the multinomial logit mixed effects model is costly, especially when the response variable has a large number of categories, since it involves high-dimensional integration and maximization. Tsodikov and Chefo (2008) developed a stable MLE approach for problems with independent observations, based on generalized self-consistency and the quasi-EM algorithm developed in Tsodikov (2003). In this paper, we apply the idea to clustered multinomial responses to simplify the maximization step. The method transforms the complex multinomial likelihood into a Poisson-type likelihood and hence allows the estimates to be obtained by iteratively solving a set of independent low-dimensional problems. The methodology is applied to real data and studied by simulations. While maximization is simplified, numerical integration remains the dominant challenge to computational efficiency.
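The "Poisson trick" behind this transformation rests on an identity: when the Poisson means are constrained to sum to the multinomial total, the multinomial and Poisson log-likelihoods differ only by a constant that does not involve the probabilities, so both are maximized at the same point. A numerical check with hypothetical counts (this illustrates the identity only, not the quasi-EM algorithm itself):

```python
import numpy as np
from math import lgamma

y = np.array([7, 2, 5, 6])           # hypothetical category counts
n = int(y.sum())
lg = sum(lgamma(v + 1) for v in y)   # sum of log(y_k!)

def multinomial_ll(p):
    return lgamma(n + 1) - lg + float(np.sum(y * np.log(p)))

def poisson_ll(p):
    mu = n * p                       # Poisson means constrained to sum to n
    return float(np.sum(y * np.log(mu) - mu)) - lg

p = np.array([0.4, 0.1, 0.3, 0.2])
q = np.array([0.25, 0.25, 0.25, 0.25])
d_multi = multinomial_ll(p) - multinomial_ll(q)
d_pois = poisson_ll(p) - poisson_ll(q)
print(abs(d_multi - d_pois) < 1e-9)  # True: the two differ by a constant
```

Because the log-likelihood differences agree for every pair of probability vectors, any maximizer of one is a maximizer of the other, which is what lets the Poisson factorization split the problem into independent low-dimensional pieces.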

13.
In clustered survival data, the dependence among individual survival times within a cluster has usually been described using copula models and frailty models. In this paper we propose a profile likelihood approach for semiparametric copula models with varying cluster sizes. We also propose a likelihood ratio method, based on the profile likelihood, for testing the absence of the association parameter (i.e. a test of independence) under the copula models, which leads to a boundary problem in the parameter space. We show via simulation that the proposed likelihood ratio method using an asymptotic chi-square mixture distribution performs well as the sample size increases. We compare the behaviour of the two models using the profile likelihood approach under a semiparametric setting. The proposed method is demonstrated on two well-known data sets.
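The boundary problem arises because the null value of the association parameter sits on the edge of the parameter space, so the likelihood ratio statistic follows a 50:50 mixture of a point mass at zero and a chi-square with one degree of freedom rather than a plain chi-square. A minimal sketch of the resulting p-value computation:

```python
from math import erfc, sqrt

def boundary_lrt_pvalue(lrt):
    """P-value for a likelihood ratio test of a parameter on the boundary,
    using the 0.5 * chi2(0) + 0.5 * chi2(1) mixture distribution."""
    if lrt <= 0:
        return 1.0
    # Survival function of chi-square with 1 df: P(X > x) = erfc(sqrt(x/2))
    return 0.5 * erfc(sqrt(lrt / 2))

print(boundary_lrt_pvalue(2.71))   # roughly half the naive chi2(1) p-value
```

Ignoring the mixture and referring the statistic to a plain chi2(1) would make the test of independence conservative, roughly doubling the p-value.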

14.
Statistical methods are proposed to analyze parallel time series of hospital-based health data and measurements of ambient air pollution. Specifically, associations between the number of daily health events (hospital admissions or emergency-room visits for respiratory illnesses) and daily levels of ambient air pollutants in the vicinity of several hospitals are examined. A relative-risk regression model is proposed in which the regression parameters are assumed to vary at random among hospitals. Adjustment for seasonal trends in admissions is also considered. Simple computational methods based on generalized estimating equations are explored as the basis for statistical inference. The proposed methods are illustrated on data obtained from 164 acute-care hospitals in Ontario over the May-to-August period for 1983 to 1988. These admission rates are related to ozone levels obtained from 22 monitoring stations maintained by the Ontario Ministry of the Environment.

15.
In pre-clinical oncology studies, tumor-bearing animals are treated and observed over a period of time in order to measure and compare the efficacy of one or more cancer-intervention therapies along with a placebo/standard of care group. A data analysis is typically carried out by modeling and comparing tumor volumes, functions of tumor volumes, or survival. Data analysis on tumor volumes is complicated because animals under observation may be euthanized prior to the end of the study for one or more reasons, such as when an animal's tumor volume exceeds an upper threshold. In such a case, the tumor volume is missing not-at-random for the time remaining in the study. To work around the non-random missingness issue, several statistical methods have been proposed in the literature, including the rate of change in log tumor volume and partial area under the curve. In this work, an examination and comparison of the test size and statistical power of these and other popular methods for the analysis of tumor volume data is performed through realistic Monte Carlo computer simulations. The performance, advantages, and drawbacks of popular statistical methods for animal oncology studies are reported. The recommended methods are applied to a real data set.
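The "rate of change in log tumor volume" endpoint mentioned above reduces to a per-animal OLS slope of log volume on time, which tolerates early euthanasia because it uses only that animal's observed measurements. A sketch with hypothetical growth rates and measurement noise:

```python
import numpy as np

rng = np.random.default_rng(7)

days = np.array([0.0, 3.0, 7.0, 10.0, 14.0])
# Hypothetical volumes (mm^3) for one control and one treated animal,
# with multiplicative lognormal measurement noise
control = 100 * np.exp(0.20 * days) * rng.lognormal(0, 0.05, days.size)
treated = 100 * np.exp(0.08 * days) * rng.lognormal(0, 0.05, days.size)

def log_growth_rate(volumes, t):
    """OLS slope of log volume on time: per-day exponential growth rate."""
    return np.polyfit(t, np.log(volumes), 1)[0]

r_ctrl = log_growth_rate(control, days)
r_trt = log_growth_rate(treated, days)
print(round(r_ctrl, 2), round(r_trt, 2))   # near the true 0.20 and 0.08
```

In a full analysis these per-animal slopes would be compared across treatment groups; the simulations in the paper assess how well such summaries preserve test size and power under threshold-triggered dropout.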

16.
Combining patient-level data from clinical trials can connect rare phenomena with clinical endpoints, but statistical techniques applied to a single trial may become problematic when trials are pooled. Estimating the hazard of a binary variable unevenly distributed across trials showcases a common pooled-database issue. We studied how an unevenly distributed binary variable can compromise the integrity of fixed effects and random effects Cox proportional hazards (CPH) models. We compared fixed effects and random effects CPH models on a set of simulated datasets inspired by a 17-trial pooled database of patients presenting with ST-segment elevation myocardial infarction (STEMI) and non-STEMI undergoing percutaneous coronary intervention. An unevenly distributed covariate can bias hazard ratio estimates, inflate standard errors, raise type I error, and reduce power. While unevenness causes problems for all CPH models, random effects models suffer least: compared to fixed effects models, they show lower bias and trade inflated type I errors for improved power. Contrasting hazard rates between trials prevent accurate estimates from both fixed and random effects models.

17.
We propose a joint-modeling, likelihood-based approach for studies with repeated measures and informative right censoring. Joint modeling of longitudinal and survival data is a common approach but can yield biased estimates if the proportionality of hazards is violated. To overcome this issue, and given that the exact time of dropout is typically unknown, we model the censoring time as the number of follow-up visits and extend it to depend on selected covariates. Longitudinal trajectories for each subject are modeled to provide insight into disease progression and are incorporated with the number of follow-up visits in a single likelihood function.

18.
Multivariate correlated failure time data arise in many medical and scientific settings. In the analysis of such data, it is important to use models where the parameters have simple interpretations. In this paper, we formulate a model for bivariate survival data based on the Plackett distribution. The model is an alternative to the Gamma frailty model proposed by Clayton and Oakes. The parameter in this distribution has a very appealing odds ratio interpretation for dependence between the two failure times; in addition, it allows for negative dependence. We develop novel semiparametric estimation and inference procedures for the model. The asymptotic results of the estimator are developed. The performance of the proposed techniques in finite samples is examined using simulation studies; in addition, the proposed methods are applied to data from an observational study in cancer.

19.
In this paper, we study the survival times of alternately occurring events. The dependence between the times to the two events is modelled through the Archimedean copula, while the dependence over the recurring cycles is modelled through a functional relationship of the distribution parameters. Taking account of appropriate censoring that may be present in the data, the model parameters are estimated using the maximum likelihood method. The standard errors of the estimators are then derived and confidence belts for the survival functions constructed. Methods for choosing the appropriate copula are also discussed. The results are illustrated through a clinical trial data on patients suffering from cystic fibrosis. A simulation study is also done to corroborate the results.
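For the Clayton member of the Archimedean family, dependent pairs can be drawn by inverting the conditional copula; the uniforms are then mapped to the desired margins, since a copula is unaffected by monotone marginal transforms. A sketch with a hypothetical copula parameter and exponential margins for the two alternating events:

```python
import numpy as np

rng = np.random.default_rng(11)

theta = 2.0                      # hypothetical Clayton copula parameter
n = 100000
u1 = rng.uniform(size=n)
v = rng.uniform(size=n)
# Conditional inverse of the Clayton copula: sample u2 given u1
u2 = (u1 ** -theta * (v ** (-theta / (1 + theta)) - 1) + 1) ** (-1 / theta)

# Map the uniforms to exponential event times for the two phases
rate1, rate2 = 0.5, 1.2          # hypothetical rates for the two events
t1 = -np.log(u1) / rate1
t2 = -np.log(u2) / rate2

# Dependence survives the marginal transforms (the copula is margin-free)
print(round(float(np.corrcoef(t1, t2)[0, 1]), 2))
```

Other Archimedean families (Frank, Gumbel) admit the same conditional-inversion scheme with their own generators, which is what makes copula choice a separable modelling decision.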

20.
Consider panel data modelled by a linear random intercept model that includes a time-varying covariate. Suppose that our aim is to construct a confidence interval for the slope parameter. Commonly, a Hausman pretest is used to decide whether this confidence interval is constructed using the random effects model or the fixed effects model. This post-model-selection confidence interval has the attractive features that it (a) is relatively short when the random effects model is correct and (b) reduces to the confidence interval based on the fixed effects model when the data and the random effects model are highly discordant. However, this confidence interval has the drawbacks that (i) its endpoints are discontinuous functions of the data and (ii) its minimum coverage can be far below its nominal coverage probability. We construct a new confidence interval that possesses these attractive features, but does not suffer from these drawbacks. This new confidence interval provides an intermediate between the post-model-selection confidence interval and the confidence interval obtained by always using the fixed effects model. The endpoints of the new confidence interval are smooth functions of the Hausman test statistic, whereas the endpoints of the post-model-selection confidence interval are discontinuous functions of this statistic.
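For a single slope parameter, the Hausman pretest reduces to a simple statistic: the squared difference between the fixed- and random-effects estimates, divided by the difference of their variances, referred to a chi-square distribution with one degree of freedom. A sketch with hypothetical estimates; the hard switch between models is the source of the discontinuity criticized above:

```python
from math import erfc, sqrt

def hausman_pretest(b_fe, se_fe, b_re, se_re, alpha=0.05):
    """One-parameter Hausman pretest. Assumes se_fe > se_re, as holds when
    the random-effects estimator is efficient under its null."""
    h = (b_fe - b_re) ** 2 / (se_fe ** 2 - se_re ** 2)
    p = erfc(sqrt(h / 2))        # chi-square(1) survival function
    return ("fixed" if p < alpha else "random"), h

# Hypothetical slope estimates and standard errors from the two fits
model, h = hausman_pretest(b_fe=0.52, se_fe=0.10, b_re=0.31, se_re=0.06)
print(model, round(h, 2))        # -> fixed 6.89
```

The smooth interval proposed in the abstract replaces this all-or-nothing decision with endpoints that vary continuously in h.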
