Similar Articles
20 similar articles were retrieved.
1.
Because of limitations of the univariate frailty model in the analysis of multivariate survival data, a bivariate frailty model is introduced for the analysis of bivariate survival data. This provides tremendous flexibility, especially in allowing negative associations between subjects within the same cluster. The approach incorporates into the model two possibly correlated frailties for each cluster, with the bivariate lognormal distribution used as the frailty distribution. The model is then generalized to multivariate survival data with two distinct groups and to alternating-process data. A modified EM algorithm is developed that does not require specification of the baseline hazards. The estimators are generalized maximum likelihood estimators with a subject-specific interpretation. The model is applied to a mental health study evaluating health policy effects for inpatient psychiatric care.
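As a rough illustration of the correlated-frailty idea (a simulation sketch only, not the authors' estimation procedure), the snippet below generates clusters of two survival times whose dependence is induced by a pair of correlated lognormal frailties; the Weibull baseline, covariate effect, and negative correlation are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(42)

def simulate_cluster(beta=0.5, rho=-0.4, sigma=0.7, shape=1.5, scale=10.0):
    """Simulate one cluster of two subjects whose hazards share a pair of
    correlated lognormal frailties; rho < 0 induces negative association."""
    cov = sigma ** 2 * np.array([[1.0, rho], [rho, 1.0]])
    b = rng.multivariate_normal(mean=[0.0, 0.0], cov=cov)  # correlated normal effects
    frailty = np.exp(b)                                     # bivariate lognormal frailties
    x = rng.binomial(1, 0.5, size=2)                        # a binary covariate per subject
    # Weibull baseline H0(t) = (t / scale) ** shape; invert S(t | frailty, x) = u
    u = rng.uniform(size=2)
    risk = frailty * np.exp(beta * x)
    return scale * (-np.log(u) / risk) ** (1.0 / shape)

print(np.round([simulate_cluster() for _ in range(5)], 2))
```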

2.
Unexplained heterogeneity in univariate survival data and association in multivariate survival data can both be modelled by the inclusion of frailty effects. This paper investigates the consequences of ignoring frailty in the analysis and fitting misspecified Cox proportional hazards models to the marginal distributions. Regression coefficients are biased towards zero by an amount whose magnitude depends on the variability of the frailty terms and on the form of the frailty distribution. The bias is reduced when censoring is present. Fitted marginal survival curves can also differ substantially from the true marginals.
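A minimal simulation of this attenuation effect (illustrative only, not the paper's analysis): generate survival times with a shared gamma frailty, fit an ordinary Cox model that ignores the frailty, and observe that the estimated coefficient is pulled towards zero. The frailty variance, true coefficient, and sample size are arbitrary, and the sketch assumes the `lifelines` package is available.

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(1)
n_clusters, cluster_size, beta_true, theta = 2000, 2, 1.0, 1.0

# Shared gamma frailty per cluster (mean 1, variance theta)
z = rng.gamma(shape=1 / theta, scale=theta, size=n_clusters).repeat(cluster_size)
x = rng.binomial(1, 0.5, size=n_clusters * cluster_size)
t = rng.exponential(1.0 / (z * np.exp(beta_true * x)))   # exponential baseline hazard
c = rng.exponential(2.0, size=t.size)                     # independent censoring

df = pd.DataFrame({"time": np.minimum(t, c), "event": (t <= c).astype(int), "x": x})
fit = CoxPHFitter().fit(df, duration_col="time", event_col="event")  # frailty ignored
print(f"true beta = {beta_true}, naive marginal Cox estimate = {fit.params_['x']:.3f}")
```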

3.
In this work we present a simple estimation procedure for a general frailty model for analysis of prospective correlated failure times. Earlier work showed this method to perform well in a simulation study. Here we provide rigorous large-sample theory for the proposed estimators of both the regression coefficient vector and the dependence parameter, including consistent variance estimators.

4.
Evolution of recurrent asthma event rate over time in frailty models
To model the time evolution of the event rate in recurrent event data, a crucial role is played by the timescale that is used. Depending on the timescale selected, the interpretation of the time evolution will be entirely different, in both parametric and semiparametric frailty models. The gap timescale is more appropriate when studying the recurrent event rate as a function of time since the last event, whereas the calendar timescale keeps track of actual time. We show both timescales in action on data from an asthma prevention trial in young children. The frailty model is further extended to include both timescales simultaneously, as this may be most relevant in practice.
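To make the two timescales concrete, the hypothetical snippet below takes per-subject calendar event times and derives the corresponding gap times (time since the previous event). The data-frame layout and column names are assumptions for illustration, not the trial's data format.

```python
import pandas as pd

# Hypothetical recurrent-event data: calendar time (days since study entry) of each event
events = pd.DataFrame({
    "id":       [1, 1, 1, 2, 2],
    "calendar": [30, 75, 160, 50, 90],   # calendar timescale: time since study entry
})

# Gap timescale: time since the previous event within the same subject
events = events.sort_values(["id", "calendar"])
events["gap"] = events.groupby("id")["calendar"].diff().fillna(events["calendar"])
print(events)
```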

5.
We propose a bivariate Weibull regression model with frailty in which dependence is generated by a gamma, positive stable, or power variance function distribution. We assume that the bivariate survival data follow the bivariate Weibull distribution of Hanagal (Econ Qual Control 19:83–90, 2004; Econ Qual Control 20:143–150, 2005a; Stat Pap 47:137–148, 2006a; Stat Methods, 2006b). There are interesting situations, such as survival times in genetic epidemiology, dental implants, and twin births (both monozygotic and dizygotic), in which the genetic behavior of patients (which is unknown and random) follows a known frailty distribution; these situations motivate the study of this particular model.
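For orientation, the sketch below evaluates the kind of joint survival function such models induce in the gamma-frailty special case: with a shared gamma frailty of variance θ and Weibull marginal cumulative hazards, S(t1, t2) = [1 + θ(H1(t1) + H2(t2))]^(−1/θ). The parameter values are illustrative, and this is only one of the three frailty families the paper considers.

```python
import numpy as np

def weibull_cumhaz(t, shape, scale):
    """Weibull cumulative hazard H(t) = (t / scale) ** shape."""
    return (np.asarray(t) / scale) ** shape

def joint_survival_gamma_frailty(t1, t2, theta, shape1, scale1, shape2, scale2):
    """Joint survivor function induced by a shared gamma frailty with variance theta."""
    h = weibull_cumhaz(t1, shape1, scale1) + weibull_cumhaz(t2, shape2, scale2)
    return (1.0 + theta * h) ** (-1.0 / theta)

# Illustrative parameter values only
print(joint_survival_gamma_frailty(2.0, 3.0, theta=0.5,
                                   shape1=1.2, scale1=5.0,
                                   shape2=0.9, scale2=4.0))
```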

6.
A longitudinal mixture model for classifying patients into responders and non-responders is established using both likelihood-based and Bayesian approaches. The model takes into consideration responders in the control group, and it is therefore especially useful in situations where the placebo response is strong, or in equivalence trials where the drug in development is compared with a standard treatment. Under our model, a treatment shows evidence of being effective if it increases the proportion of responders or increases the response rate among responders in the treated group relative to the control group, so the model has the flexibility to accommodate different situations. The proposed method is illustrated using simulation and a depression clinical trial dataset for the likelihood-based approach, and the same depression trial dataset for the Bayesian approach. The likelihood-based and Bayesian approaches gave consistent results for the depression trial data. In both the placebo group and the treated group, patients are classified into two components with distinct response rates. The proportion of responders is shown to be significantly higher in the treated group than in the control group, suggesting that the treatment, paroxetine, is effective.
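As a toy analogue of the classification idea (a cross-sectional Bernoulli mixture rather than the paper's longitudinal model), the sketch below uses the EM algorithm to separate a vector of 0/1 responses into a responder and a non-responder component; all numbers are simulated and the initial values are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
# Simulated arm: 40% responders with a high response rate, 60% non-responders
y = np.concatenate([rng.binomial(1, 0.8, 80), rng.binomial(1, 0.2, 120)])

pi, p_resp, p_non = 0.5, 0.7, 0.3          # initial guesses
for _ in range(200):
    # E-step: posterior probability that each subject is a responder
    num = pi * p_resp ** y * (1 - p_resp) ** (1 - y)
    den = num + (1 - pi) * p_non ** y * (1 - p_non) ** (1 - y)
    w = num / den
    # M-step: update mixing proportion and component response rates
    pi = w.mean()
    p_resp = (w * y).sum() / w.sum()
    p_non = ((1 - w) * y).sum() / (1 - w).sum()

print(f"estimated responder proportion {pi:.2f}, "
      f"response rates {p_resp:.2f} (responders) vs {p_non:.2f} (non-responders)")
```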

7.
Model-based phase I dose-finding designs rely on a single model throughout the study for estimating the maximum tolerated dose (MTD). A major concern is therefore the choice of the most suitable model, because the dose allocation process and the MTD estimate depend on whether the model is reliable and fits the toxicity data well. The aim of our work was to propose a method that removes the need to choose a model before trial onset and instead allows the choice to be made sequentially at each patient's inclusion. In this paper, we describe a model-checking approach based on the posterior predictive check and a model-comparison approach based on the deviance information criterion, in order to identify a more reliable or better-fitting model during the course of a trial and to support clinical decision making. We then present two model-switching designs for a phase I cancer trial based on these approaches and compare designs with and without model switching in a simulation study. The results show that the proposed designs have the advantage of decreasing certain risks, such as poor dose allocation and failure to find the MTD, which can occur if the model is misspecified.
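One of the two ingredients, the deviance information criterion, can be computed directly from posterior draws via DIC = D̄ + pD with pD = D̄ − D(θ̄). The sketch below shows that generic calculation for a binomial dose-toxicity likelihood; the one-parameter power model, the skeleton, and the posterior draws are placeholders for illustration, not the designs compared in the paper.

```python
import numpy as np

# Hypothetical trial state: skeleton probabilities, doses given, toxicities observed
skeleton = np.array([0.05, 0.10, 0.20, 0.35])
dose_idx = np.array([0, 0, 1, 1, 2, 2])       # dose assigned to each patient
tox      = np.array([0, 0, 0, 1, 0, 1])       # 1 = dose-limiting toxicity

def deviance(a):
    """-2 log-likelihood of a one-parameter power model p_i = skeleton_i ** exp(a)."""
    p = skeleton[dose_idx] ** np.exp(a)
    return -2.0 * np.sum(tox * np.log(p) + (1 - tox) * np.log(1 - p))

# Stand-in for draws from the posterior of the model parameter
a_draws = np.random.default_rng(3).normal(loc=0.1, scale=0.4, size=4000)

d_bar = np.mean([deviance(a) for a in a_draws])    # posterior mean deviance
d_hat = deviance(a_draws.mean())                   # deviance at the posterior mean
dic = d_bar + (d_bar - d_hat)                      # DIC = d_bar + pD
print(f"DIC = {dic:.2f}")
```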

8.
This paper considers a finite mixture model for longitudinal data, which can be used to study how the shape of the respective follow-up curves depends on treatments or other influential factors and to classify these curves. An EM algorithm for obtaining the maximum likelihood estimate of the model is given. The capabilities of the model are demonstrated using data from a clinical trial.

9.
Randomized clinical trials are designed to estimate the direct effect of a treatment by randomly assigning patients to receive either treatment or control. However, in some trials, patients who discontinue their initial randomized treatment are allowed to switch to another treatment, so the direct treatment effect of interest may be confounded by the subsequent treatment. Moreover, the decision on whether to initiate a second-line treatment is typically based on time-dependent factors that may be affected by prior treatment history. Because of these time-dependent confounders, traditional time-dependent Cox models may produce biased estimators of the direct treatment effect. Marginal structural models (MSMs) can estimate causal treatment effects even in the presence of time-dependent confounders, but the occurrence of extremely large weights can inflate the variance of the MSM estimators. In this article, we propose a new method for estimating weights in MSMs by adaptively truncating the longitudinal inverse probabilities. This method balances the bias-variance trade-off when large weights are inevitable, without the ad hoc removal of selected observations. We conducted simulation studies comparing bias, standard deviation, confidence interval coverage rates, and mean square error of the different methods under various scenarios. We also applied these methods to a randomized, open-label, phase III study of patients with nonsquamous non-small cell lung cancer.
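The core mechanic, capping inverse-probability weights rather than dropping observations, can be sketched in a few lines. The fixed percentile used here is a placeholder; the paper's contribution is to choose the truncation level adaptively from the data.

```python
import numpy as np

def truncate_weights(weights, upper_pct=99.0):
    """Cap stabilized inverse-probability weights at a chosen percentile.

    Trades a little bias for a large reduction in variance when a few weights
    are extreme.  The percentile is a placeholder, not the adaptive rule
    derived in the paper.
    """
    cap = np.percentile(weights, upper_pct)
    return np.minimum(weights, cap)

rng = np.random.default_rng(7)
w = rng.lognormal(mean=0.0, sigma=1.2, size=5000)    # heavy-tailed example weights
w_trunc = truncate_weights(w)
print(f"max weight {w.max():.1f} -> {w_trunc.max():.1f}, "
      f"variance {w.var():.2f} -> {w_trunc.var():.2f}")
```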

10.
The Consumer Price Index (CPI) approximates changes in the cost of household consumption under the assumption of constant utility (the Cost of Living Index, COLI). In practice, the Laspeyres price index is used to measure the CPI, even though many economists consider the superlative indices to be the best approximation of the COLI. The Fisher index is one of the superlative indices and, in addition, satisfies most of the tests from the axiomatic price index theory. Nevertheless, the Fisher price index requires current-period expenditure data, which limits its usefulness in CPI measurement. In this article, we examine the usefulness of the Lowe, Young, and AG Mean indices for approximating the Fisher price index. We confirm this usefulness in a simulation study and support it with empirical evidence.
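For reference, a small sketch of the standard bilateral formulas involved: Laspeyres uses base-period quantities, Paasche uses current-period quantities, Fisher is their geometric mean, and the Lowe index uses a fixed basket from an earlier reference period. The price and quantity vectors are made up for illustration.

```python
import numpy as np

# Made-up prices and quantities for a 3-item basket
p0, p1 = np.array([2.0, 5.0, 1.0]), np.array([2.2, 5.5, 0.9])   # base / current prices
q0, q1 = np.array([10., 4., 20.]), np.array([9., 5., 22.])      # base / current quantities
qb = np.array([11., 4., 19.])                                    # earlier reference basket (Lowe)

laspeyres = (p1 @ q0) / (p0 @ q0)         # base-period quantities
paasche   = (p1 @ q1) / (p0 @ q1)         # current-period quantities (expenditure data needed)
fisher    = np.sqrt(laspeyres * paasche)  # geometric mean of the two
lowe      = (p1 @ qb) / (p0 @ qb)         # fixed basket from an earlier period

print(f"Laspeyres {laspeyres:.4f}  Paasche {paasche:.4f}  "
      f"Fisher {fisher:.4f}  Lowe {lowe:.4f}")
```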

11.
In this paper, we consider the problem of estimating a linear combination of binomial probabilities from k > 2 independent populations. In particular, we create a new family of asymptotic confidence intervals, extending the approach taken by Beal [1987. Asymptotic confidence intervals for the difference between two binomial parameters for use with small samples. Biometrics 43, 941–950] in the two-sample case. One of our new intervals is shown to perform very well when compared with the best available intervals documented in Price and Bonett [2004. An improved confidence interval for a linear function of binomial proportions. Comput. Statist. Data Anal. 45, 449–456]. Furthermore, our interval estimation approach is quite general and could be extended to handle more complicated parametric functions and even other discrete probability models in stratified settings. We illustrate the new intervals using two real data examples, one from an ecology study and one from a multicenter clinical trial.
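A simple baseline against which such intervals are usually compared is the Wald-type interval for a linear combination of independent binomial proportions, optionally with a small add-c adjustment to the counts. The sketch below implements that baseline only; it is an assumed illustration, not the specific new interval proposed in the paper.

```python
import numpy as np
from scipy.stats import norm

def linear_combo_ci(x, n, coef, alpha=0.05, adj=0.5):
    """Wald-type CI for sum_i coef_i * p_i from independent binomials.

    `adj` pseudo-successes and pseudo-failures are added to each group as a
    simple small-sample adjustment; adj=0 gives the unadjusted Wald interval.
    """
    x, n, coef = map(np.asarray, (x, n, coef))
    p = (x + adj) / (n + 2 * adj)
    est = coef @ p
    se = np.sqrt(np.sum(coef ** 2 * p * (1 - p) / (n + 2 * adj)))
    z = norm.ppf(1 - alpha / 2)
    return est, (est - z * se, est + z * se)

# Made-up counts from k = 3 strata, estimating p1 - (p2 + p3) / 2
est, (lo, hi) = linear_combo_ci(x=[18, 10, 12], n=[40, 35, 38], coef=[1.0, -0.5, -0.5])
print(f"estimate {est:.3f}, 95% CI ({lo:.3f}, {hi:.3f})")
```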

12.
The estimation of random effects in frailty models is an important problem in survival analysis, and testing for the presence of random effects can be essential to improving model efficiency. Posterior consistency of the dispersion parameters and coefficients of the frailty model is demonstrated in theory and in simulations using the posterior induced by Cox's partial likelihood and simple priors. We also conducted simulation studies to test for the presence of random effects; the proposed method performed well in several settings. A real-data analysis is also presented. The proposed method is easily tractable and can be used to develop various methods of Bayesian inference in frailty models.

13.
The Cox proportional hazards model is widely used in clinical trials with time-to-event outcomes to compare an experimental treatment with the standard of care. At the design stage of a trial the number of events required to achieve a desired power must be determined, and this is frequently based on approximating the variance of the maximum partial likelihood estimate of the regression parameter by a function of the number of events. Underestimating the variance at the design stage leads to insufficiently powered studies, while overestimating it leads to unnecessarily large trials. We introduce a simple approach to estimating this variance and compare it with two approaches widely adopted in practice. Simulation results show that the proposed approach outperforms the standard ones and gives nearly unbiased estimates of the variance.
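One of the widely adopted approaches referred to here is Schoenfeld's approximation, in which the variance of the estimated log hazard ratio is taken to be 1/(d·p(1−p)) for d events and allocation proportion p, giving the familiar required-events formula. The sketch below computes it for illustrative design values (not the paper's proposed estimator).

```python
import numpy as np
from scipy.stats import norm

def schoenfeld_events(hazard_ratio, p_treat=0.5, alpha=0.05, power=0.80):
    """Required number of events under Schoenfeld's approximation,
    d = (z_{1-alpha/2} + z_{power})^2 / (p (1 - p) (log HR)^2)."""
    z_a = norm.ppf(1 - alpha / 2)
    z_b = norm.ppf(power)
    return (z_a + z_b) ** 2 / (p_treat * (1 - p_treat) * np.log(hazard_ratio) ** 2)

# Illustrative design: detect HR = 0.75 with 80% power and 1:1 allocation
print(f"required events ~ {schoenfeld_events(0.75):.0f}")
```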

14.
Multivariate failure time data are common in medical research; commonly used statistical models for such correlated failure-time data include frailty and marginal models. Both types of models most often assume proportional hazards (Cox, 1972), but the Cox model may not fit the data well. This article presents a class of linear transformation frailty models that includes, as a special case, the proportional hazards model with frailty. We then propose approximate procedures to derive the best linear unbiased estimates and predictors of the regression parameters and frailties. We apply the proposed methods to analyze results of a clinical trial of different dose levels of didanosine (ddI) among HIV-infected patients who were intolerant of zidovudine (ZDV). These methods yield estimates of treatment effects and of frailties corresponding to patient groups defined by clinical history prior to entry into the trial.

15.
In this article, we consider the empirical likelihood for autoregressive error-in-explanatory-variable models. With the help of validation data, we first develop an empirical likelihood ratio test statistic for the parameters of interest and prove that its asymptotic distribution is that of a weighted sum of independent χ²(1) random variables with unknown weights. We also propose an adjusted empirical likelihood and prove that its asymptotic distribution is a standard χ². Furthermore, an empirical likelihood-based confidence region is given. Simulation results indicate that the proposed method works well in practical situations.

16.
We investigate the effect of unobserved heterogeneity in the context of the linear transformation model for censored survival data in the clinical trials setting. The unobserved heterogeneity is represented by a frailty term, with unknown distribution, in the linear transformation model. The bias of the estimate under the assumption of no unobserved heterogeneity when it truly is present is obtained. We also derive the asymptotic relative efficiency of the estimate of treatment effect under the incorrect assumption of no unobserved heterogeneity. Additionally we investigate the loss of power for clinical trials that are designed assuming the model without frailty when, in fact, the model with frailty is true. Numerical studies under a proportional odds model show that the loss of efficiency and the loss of power can be substantial when the heterogeneity, as embodied by a frailty, is ignored.

17.
The performance of computationally inexpensive model selection criteria in the context of tree-structured prediction is discussed. A simulation study shows that no single model selection criterion exhibits uniformly superior performance over a wide range of scenarios; a two-stage approach to model selection is therefore suggested and shown to perform satisfactorily. A computationally efficient method of tree growing within the RECursive Partition and AMalgamation (RECPAM) framework is also proposed, and the efficient algorithm gives results identical to those of the original RECPAM tree-growing algorithm. An example of medical data analysis for developing a prognostic classification is presented.

18.
This paper describes a Fortran program for analyzing partially censored data from a first-order semi-Markov model. Nonparametric maximum likelihood estimates and estimates of their corresponding covariance matrices are computed based on the results of Lagakos, Sommer and Zelen (1978). The program can be applied to data arising from a wide array of multistate stochastic processes. The program's required input and output are discussed and illustrated using the data from a recent lung cancer clinical trial.

19.
This paper evaluates the ability of a Markov regime-switching log-normal (RSLN) model to capture the time-varying features of stock returns and volatility. The model is better able to depict a fat-tailed distribution than a log-normal model, which means that the RSLN model can describe observed market behavior better. Our major objective is to explore the capability of the model to capture stock market behavior over time. By analyzing the behavior of calibrated regime-switching parameters over different lengths of time interval, the change-point concept is introduced and an algorithm is proposed for identifying the change-points in the series, corresponding to the times when the parameter estimates change. This algorithm is tested on the Standard and Poor's 500 monthly index data from 1971 to 2008 and the Nikkei 225 monthly index data from 1984 to 2008. The change-points we identify match the big events observed in the US and Japanese stock markets (e.g., the October 1987 stock market crash), and the segmentations of the stock index series, defined as the periods between change-points, match the observed bear-bull market phases.
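To make the model concrete, the sketch below evaluates the log-likelihood of a two-regime RSLN model for a series of monthly log returns using the standard forward recursion. The regime parameters, transition matrix, and simulated returns are placeholders; calibration (e.g., maximizing this likelihood) and the change-point algorithm itself are not shown.

```python
import numpy as np
from scipy.stats import norm

def rsln_loglik(log_returns, mu, sigma, trans):
    """Log-likelihood of a 2-regime log-normal (RSLN) model via the forward algorithm.

    mu, sigma : per-regime mean and standard deviation of monthly log returns
    trans     : 2x2 transition matrix, trans[i, j] = P(next regime j | current regime i)
    """
    # Stationary distribution of the regime chain as the initial distribution
    evals, evecs = np.linalg.eig(trans.T)
    pi0 = np.real(evecs[:, np.argmax(np.real(evals))])
    pi0 = pi0 / pi0.sum()

    alpha = pi0 * norm.pdf(log_returns[0], mu, sigma)
    loglik = 0.0
    for r in log_returns[1:]:
        scale = alpha.sum()
        loglik += np.log(scale)
        alpha = (alpha / scale) @ trans * norm.pdf(r, mu, sigma)
    return loglik + np.log(alpha.sum())

rng = np.random.default_rng(11)
returns = rng.normal(0.007, 0.04, size=120)             # placeholder "monthly log returns"
mu, sigma = np.array([0.01, -0.02]), np.array([0.035, 0.08])
trans = np.array([[0.97, 0.03], [0.10, 0.90]])
print(f"log-likelihood = {rsln_loglik(returns, mu, sigma, trans):.2f}")
```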

20.
We prove identification of dependent competing risks models in which each risk has a mixed proportional hazard specification with regressors, and the risks are dependent by way of the unobserved heterogeneity, or frailty, components. We show that the conditions for identification given by Heckman and Honoré can be relaxed. We extend the results to the case in which multiple spells are observed for each subject.
