Similar Literature
20 similar documents found (search time: 31 ms).
1.
Sensitivity analysis for unmeasured confounding should be reported more often, especially in observational studies. In the standard Cox proportional hazards model, this requires substantial assumptions and can be computationally difficult. The marginal structural Cox proportional hazards model (Cox proportional hazards MSM) with inverse probability weighting has several advantages over the standard Cox model, including in situations with only one assessment of exposure (point exposure) and time-independent confounders. We describe how simple computations provide a sensitivity analysis for unmeasured confounding in a Cox proportional hazards MSM with point exposure. This is achieved by translating the general framework for sensitivity analysis for MSMs by Robins and colleagues to survival time data. Instead of bias-correcting the observations, we correct the hazard rate to adjust for a specified amount of unmeasured confounding. As an additional bonus, the Cox proportional hazards MSM is robust against bias from differential loss to follow-up. As an illustration, the Cox proportional hazards MSM was applied in a reanalysis of the association between smoking and depression in a population-based cohort of Norwegian adults. The association was moderately sensitive to unmeasured confounding.
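As a concrete companion to the point-exposure analysis described above, here is a minimal Python sketch of an IPTW Cox fit, assuming a pandas DataFrame with illustrative column names (smoke, age, sex, time, event); the paper's hazard-rate correction for a specified amount of unmeasured confounding is a further, paper-specific step not shown here.

```python
from sklearn.linear_model import LogisticRegression
from lifelines import CoxPHFitter

def iptw_cox(df):
    """Point-exposure marginal structural Cox fit (column names illustrative)."""
    # 1. propensity model: P(exposure | measured confounders)
    ps = (LogisticRegression().fit(df[["age", "sex"]], df["smoke"])
          .predict_proba(df[["age", "sex"]])[:, 1])

    # 2. stabilized inverse probability of treatment weights
    p_exp = df["smoke"].mean()
    w = df["smoke"] * p_exp / ps + (1 - df["smoke"]) * (1 - p_exp) / (1 - ps)

    # 3. weighted Cox model; robust variance accounts for the weighting
    cph = CoxPHFitter()
    cph.fit(df.assign(w=w)[["time", "event", "smoke", "w"]],
            duration_col="time", event_col="event",
            weights_col="w", robust=True)
    return cph
```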

2.
Assessing dose-response from flexible-dose clinical trials (e.g., titration or dose-escalation studies) is challenging and often problematic due to the selection bias caused by 'titration-to-response'. We investigate, via simulations, the performance of a dynamic linear mixed-effects (DLME) model and a marginal structural model (MSM) in evaluating dose-response from flexible-dose titration trials. The simulation results demonstrated that DLME models with previous exposure as a time-varying covariate may provide an unbiased and efficient estimator for recovering the exposure-response relationship from flexible-dose trials. Although MSMs with independent and exchangeable working correlations appeared able to recover the right direction of the dose-response relationship, they tended to over-correct the selection bias and overestimated the underlying true dose-response. The MSM estimators were also associated with large variability in the parameter estimates. Therefore, DLME may be an appropriate modeling option for identifying dose-response when data from fixed-dose studies are absent or a fixed-dose design would be unethical to implement.
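A minimal sketch of such a DLME fit with statsmodels, assuming a long-format DataFrame with illustrative column names; including the lagged response alongside the previously received dose is one common way to make the mixed model 'dynamic', and is an assumption here rather than the authors' exact specification.

```python
import statsmodels.formula.api as smf

def fit_dlme(df):
    """df (illustrative): 'id', 'visit', 'response', 'response_prev' (lagged
    outcome) and 'dose_prev' (dose received in the preceding interval)."""
    # random intercept per subject; the lagged terms absorb the
    # titration-to-response feedback that biases naive dose-response fits
    model = smf.mixedlm("response ~ response_prev + dose_prev + visit",
                        data=df, groups=df["id"])
    return model.fit(reml=True)
```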

3.
We evaluate the performance of various bootstrap methods for constructing confidence intervals for the mean and median of several common distributions. Using Monte Carlo simulation, we assessed performance by examining coverage percentages and average confidence interval lengths. Poor performance is characterized by coverage deviating from 0.95 and by large confidence interval lengths; undercoverage is of greater concern than overcoverage. We also assess the performance of bootstrap methods in estimating the parameters of the Cox proportional hazards model and the accelerated failure time model.
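A numpy sketch of the assessment loop for one cell of such a study (percentile bootstrap, median of an exponential distribution); the full study repeats this across distributions, statistics, and bootstrap variants.

```python
import numpy as np

rng = np.random.default_rng(1)

def percentile_ci(x, stat=np.median, B=2000, alpha=0.05):
    """Percentile bootstrap confidence interval for a one-sample statistic."""
    boots = np.array([stat(rng.choice(x, size=x.size, replace=True))
                      for _ in range(B)])
    return np.quantile(boots, [alpha / 2, 1 - alpha / 2])

true_median = np.log(2)                # median of Exp(1)
hits, lengths = 0, []
for _ in range(500):                   # Monte Carlo replications
    x = rng.exponential(1.0, size=30)
    lo, hi = percentile_ci(x)
    hits += lo <= true_median <= hi
    lengths.append(hi - lo)
print(f"coverage={hits / 500:.3f}, average length={np.mean(lengths):.3f}")
```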

4.
Assessing dose response from flexible-dose clinical trials is problematic. The true dose effect may be obscured, and even reversed, in observed data because dose is related to both previous and subsequent outcomes. To remove this selection bias, we propose a marginal structural model (MSM) with inverse probability of treatment weighting (IPTW). Potential clinical outcomes are compared across dose groups using an MSM based on a weighted pooled repeated-measures analysis (generalized estimating equations with robust standard errors), with the dose effect represented by current dose and recent dose history, and with weights estimated from the data (via logistic regression) as products of (i) the inverse probability of receiving the dose assignments actually received and (ii) the inverse probability of remaining on treatment by this time. In simulations, this method led to almost unbiased estimates of the true dose effect under various scenarios. Results were compared with those obtained by unweighted analyses and by weighted analyses under various model specifications. The simulations showed that the IPTW MSM methodology is highly sensitive to model misspecification even when the weights are known. Practitioners applying MSMs should be mindful of the challenges of implementing them with real clinical data. Clinical trial data are used to illustrate the methodology.
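The weight construction is the heart of the method; below is a minimal sketch, with illustrative column names, of the per-visit inverse probabilities accumulated over each subject's dosing history. Stabilizing numerator models are omitted for brevity, so these are unstabilized weights rather than the authors' exact estimator.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def ipt_weights(df):
    """df (illustrative): one row per subject-visit with 'id', 'visit',
    'dose_up' (1 = dose increased at this visit), 'stay' (1 = subject
    remained on treatment) and 'y_prev' (previous outcome)."""
    df = df.sort_values(["id", "visit"]).copy()
    X = df[["visit", "y_prev"]]

    # (i) probability of the dose assignment actually received
    p1 = LogisticRegression().fit(X, df["dose_up"]).predict_proba(X)[:, 1]
    p_trt = np.where(df["dose_up"] == 1, p1, 1 - p1)

    # (ii) probability of remaining on treatment through this visit
    p_stay = LogisticRegression().fit(X, df["stay"]).predict_proba(X)[:, 1]

    # accumulate inverse probabilities over each subject's visit history
    df["ip"] = 1.0 / (p_trt * p_stay)
    df["w"] = df.groupby("id")["ip"].cumprod()
    return df  # df['w'] then enters the weighted pooled (GEE) analysis
```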

5.
In this paper, we deal with the analysis of case series. The self-controlled case series (SCCS) method was developed to analyse the temporal association between a time-varying exposure and an outcome event. We apply the SCCS method to the vaccination data of the German Health Interview and Examination Survey for Children and Adolescents (KiGGS). We illustrate that the standard SCCS method cannot be applied to terminal events such as death; in this situation, an extension of SCCS adjusted for terminal events gives unbiased point estimators. The key question of this paper is whether the general Cox regression model with time-dependent covariates may be an alternative to the adjusted SCCS method for terminal events. In contrast to the SCCS method, Cox regression is included in most software packages (SPSS, SAS, Stata, R, …) and is easy to use. We show that Cox regression is applicable for testing the null hypothesis. In our KiGGS example, which has no censored data, Cox regression and the adjusted SCCS method yield point estimates almost identical to the standard SCCS method. We conducted several simulation studies to complete the comparison of the two methods. Cox regression shows a tendency to underestimate the true effect with prolonged risk periods and strong effects (relative incidence > 2); if the risk of the event is strongly affected by age, the adjusted SCCS method slightly overestimates the predefined exposure effect. In the simulations, Cox regression was as efficient as the adjusted SCCS method.
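The Cox alternative requires the exposure coded as a time-varying covariate in counting-process (start/stop) form; a minimal lifelines sketch with illustrative column names, assuming each subject's follow-up has been split at the boundaries of the post-vaccination risk period:

```python
from lifelines import CoxTimeVaryingFitter

def fit_td_cox(df):
    """df: one row per subject-interval with 'id', 'start', 'stop',
    'exposed' (1 inside the post-vaccination risk period) and 'event'
    (1 if the interval ends with the outcome)."""
    ctv = CoxTimeVaryingFitter()
    ctv.fit(df, id_col="id", event_col="event",
            start_col="start", stop_col="stop")
    return ctv  # exp(coef) for 'exposed' estimates the relative incidence
```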

6.
Nested case–control (NCC) sampling is widely used in large epidemiological cohort studies for its cost-effectiveness, but analysis of NCC data has relied primarily on the Cox proportional hazards model. In this paper, we consider a family of linear transformation models for analyzing NCC data and propose an inverse selection probability weighted estimating equation method for inference. Consistency and asymptotic normality of our estimators of the regression coefficients are established. We show that the asymptotic variance has a closed analytic form and can be easily estimated. Numerical studies are conducted to support the theory, and an application to the Wilms' Tumor Study is given to illustrate the methodology.
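For intuition, here is a sketch of Samuelsen-type inclusion probabilities, whose inverses are the natural selection weights for such weighted estimating equations; the paper's exact weights may differ, so treat this as an illustrative assumption.

```python
import numpy as np

def ncc_inclusion_probs(times, is_case, m):
    """Probability that each cohort member ever enters the NCC sample,
    with m controls drawn per case. Cases have probability 1; a control's
    probability is one minus the product, over case times at which it was
    at risk, of the chance of not being sampled at that time."""
    times = np.asarray(times, float)
    is_case = np.asarray(is_case, bool)
    p = np.ones(times.size)
    case_times = np.sort(times[is_case])
    for i in np.where(~is_case)[0]:
        s = case_times[case_times < times[i]]           # case times while at risk
        n_risk = np.array([(times >= t).sum() for t in s])
        p[i] = 1.0 - np.prod(1.0 - m / (n_risk - 1.0))  # 0 if never at risk
    return p  # weights 1/p enter the estimating equations (sampled subjects only)
```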

7.
Procedures are developed for estimating the parameters of the general class of semiparametric models for recurrent events proposed by Peña and Hollander [(2004). Models for recurrent events in reliability and survival analysis. In: Soyer, R., Mazzuchi, T., Singpurwalla, N. (Eds.), Mathematical Reliability: An Expository Perspective. Kluwer Academic Publishers, Dordrecht, pp. 105–123 (Chapter 6)]. This class of models incorporates an effective age function encoding the effect of changes after each event occurrence, such as the impact of an intervention; it models the impact of accumulating event occurrences on the unit; it admits a link function through which the effects of possibly time-dependent covariates are incorporated; and it allows unobservable frailty components that induce dependence among the inter-event times of each unit. The estimation procedures are semiparametric in that the baseline hazard function is nonparametrically specified. The sampling distribution properties of the estimators are examined through a simulation study, and the consequences of mis-specifying the model are analyzed. The results indicate that the flexibility of this general class of models provides a safeguard for analyzing recurrent event data, even data possibly arising from a frailty-less mechanism. The estimation procedures are applied to real data sets arising in biomedical and public health settings, as well as in reliability and engineering contexts. In particular, the procedures are applied to a data set on times to recurrence of bladder cancer, and the results are compared with those obtained using three other methods of analyzing recurrent event data.
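A small numpy sketch of the model's three multiplicative components; the perfect/minimal-repair effective-age functions and the geometric choice rho**k for the accumulation term are illustrative special cases of the general class, not the paper's only options.

```python
import numpy as np

def effective_age(t, event_times, kind="minimal"):
    """'minimal' repair: interventions do not renew the unit, eps(t) = t;
    'perfect' repair: each event renews the unit, eps(t) = time since last
    event. The general class allows arbitrary user-specified functions."""
    past = [s for s in event_times if s <= t]
    if kind == "perfect" and past:
        return t - max(past)
    return t

def intensity(t, event_times, baseline, rho, lin_pred, kind="minimal"):
    """lambda(t) = lambda0(eps(t)) * rho**N(t-) * exp(beta'x): effective age,
    accumulating event count, and covariate link, respectively."""
    n_prior = sum(s < t for s in event_times)
    return (baseline(effective_age(t, event_times, kind))
            * rho**n_prior * np.exp(lin_pred))
```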

8.

Time-to-event data often violate the proportional hazards assumption inherent in the popular Cox regression model. Such violations are especially common in biological and medical data, where latent heterogeneity due to unmeasured covariates or time-varying effects is frequent. A variety of parametric survival models have been proposed in the literature that make more appropriate assumptions about the hazard function, at least for certain applications. One such model is derived from the first hitting time (FHT) paradigm, which assumes that a subject's event time is determined by a latent stochastic process reaching a threshold value. Several random-effects specifications of the FHT model have also been proposed to allow better modeling of data with unmeasured covariates. While often appropriate, these methods can display limited flexibility due to their inability to model a wide range of heterogeneities. To address this issue, we propose a Bayesian model that loosens the assumptions on the mixing distribution inherent in the random-effects FHT models currently in use. We demonstrate via simulation study that the proposed model greatly improves both survival and parameter estimation in the presence of latent heterogeneity. We also apply the proposed methodology to data from a toxicology/carcinogenicity study that exhibits nonproportional hazards, and contrast the results with both the Cox model and two popular FHT models.
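In the basic Wiener-process FHT model the event time is inverse Gaussian, which makes the survival function easy to sketch with scipy; the Bayesian relaxation of the mixing distribution proposed in the paper sits on top of this building block.

```python
import numpy as np
from scipy.stats import invgauss

def fht_survival(t, y0, drift, sigma=1.0):
    """Survival function when a latent process starts at y0 > 0 and drifts
    toward an absorbing boundary at 0 at rate 'drift' > 0: the first hitting
    time is inverse Gaussian with mean y0/drift and shape (y0/sigma)**2."""
    mean, shape = y0 / drift, (y0 / sigma) ** 2
    # scipy's invgauss takes mu = mean/shape and scale = shape
    return invgauss.sf(t, mu=mean / shape, scale=shape)

# random-effects FHT variants put a distribution on y0 (or the drift),
# which is the mixing assumption the paper loosens
print(fht_survival(np.array([0.5, 1.0, 2.0]), y0=2.0, drift=1.0))
```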


9.
Over the past decades, various principles for causal effect estimation have been proposed, all differing in how they adjust for measured confounders: via traditional regression adjustment, by adjusting for the expected exposure given those confounders (e.g., the propensity score), or by inversely weighting each subject's data by the likelihood of the observed exposure given those confounders. When the exposure is measured with error, this raises the question of whether these estimation strategies are differently affected and whether one of them is to be preferred for that reason. In this article, we investigate this by comparing inverse probability of treatment weighted (IPTW) estimators and doubly robust estimators of the exposure effect in linear marginal structural mean models (MSMs) with G-estimators, propensity score (PS) adjusted estimators, and ordinary least squares (OLS) estimators of the exposure effect in linear regression models. We find analytically that these estimators are equally affected when exposure misclassification is independent of the confounders, but not otherwise. Simulation studies reveal similar results for time-varying exposures and when the model of interest includes a logistic link.
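A toy simulation of the confounder-independent case, comparing covariate-adjusted OLS with an IPTW contrast under 10% exposure flips; the effect size, flip rate, and model are illustrative, chosen to echo the article's analytic finding that the two are attenuated alike in this case.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(7)
n, beta = 100_000, 1.0                         # true exposure effect

L = rng.normal(size=n)                         # measured confounder
A = rng.binomial(1, 1 / (1 + np.exp(-L)))      # exposure depends on L
Y = beta * A + L + rng.normal(size=n)          # linear outcome model
A_star = np.where(rng.random(n) < 0.10, 1 - A, A)  # flips independent of L

# regression-adjusted OLS with the mismeasured exposure
X = np.column_stack([np.ones(n), A_star, L])
b_ols = np.linalg.lstsq(X, Y, rcond=None)[0][1]

# IPTW contrast, propensity fitted to the mismeasured exposure
ps = LogisticRegression().fit(L[:, None], A_star).predict_proba(L[:, None])[:, 1]
w = A_star / ps + (1 - A_star) / (1 - ps)
b_iptw = (np.average(Y, weights=w * A_star)
          - np.average(Y, weights=w * (1 - A_star)))

print(f"OLS: {b_ols:.3f}  IPTW: {b_iptw:.3f}  truth: {beta}")
```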

10.

Imputation methods for missing data on a time-dependent variable within time-dependent Cox models are investigated in a simulation study. Quality-of-life (QoL) assessments were removed from complete simulated datasets, in which QoL was positively related to disease-free survival (DFS) and delayed chemotherapy to DFS, under missing-at-random and missing-not-at-random (MNAR) mechanisms. Standard imputation methods were applied before analysis. Method performance was influenced by the missing-data mechanism, with simple imputation being the one exception. The greatest bias occurred under MNAR and large effect sizes. It is important to carefully investigate the missing-data mechanism.
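A sketch of one simple imputation scheme (last observation carried forward within subject) on counting-process data, with illustrative column names; the imputed data would then feed a time-dependent Cox fit such as the one sketched under item 5.

```python
def locf_impute(df):
    """df (illustrative): 'id', 'start', 'stop', 'event', and a time-dependent
    'qol' score that is missing at some assessments; carries each subject's
    last observed QoL value forward."""
    df = df.sort_values(["id", "start"]).copy()
    df["qol"] = df.groupby("id")["qol"].ffill()
    return df
```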

11.

This research examines the statistical methodology used to estimate the parameters of item response models. An integral part of an item response model is the normalization rule used to identify the distributional parameters. The main result shown here is that only Verhelst–Glas normalizations, which arbitrarily set one difficulty and one dispersion parameter to unity, are consistent with the basic assumptions underlying the two-parameter logistic model. Failure to employ this type of normalization will lead to scores that depend on the item composition of the test, and differential item difficulty (DIF) will compromise the validity of the estimated ability scores when different groups are compared. It is also shown that some tests for DIF fail when the data are generated by an IRT model with a random effect. Most of the results are based on simulations of a four-item model. Because the data-generation mechanism is known, it is possible to determine the effect on ability scores and parameter estimates when different normalizations or different distribution parameter values are used.
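A numpy sketch of why a normalization rule is needed at all: the two-parameter logistic likelihood is invariant to relocating and rescaling the latent ability scale, so some constraint must pin the scale down (all values illustrative).

```python
import numpy as np

def p_correct(theta, a, b):
    """2PL: probability of a correct response given ability theta,
    discrimination a, and difficulty b."""
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

theta, a, b = np.array([-1.0, 0.0, 1.5]), 1.3, 0.4

# theta' = c*theta + d reproduces identical probabilities with
# a' = a/c and b' = c*b + d, so (theta, a, b) is not identified
c, d = 2.0, -0.5
assert np.allclose(p_correct(theta, a, b),
                   p_correct(c * theta + d, a / c, c * b + d))
```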

12.
In many medical studies there are covariates that change value over time, and their analysis is most often handled with the Cox regression model. Many of these time-dependent covariates, however, can be expressed as an intermediate event, which can be modeled using a multi-state model. Exploiting this relationship between (discrete) time-dependent covariates and multi-state models, we compare, via simulation studies, the Cox model with time-dependent covariates against the most frequently used multi-state regression models. The article also details procedures for generating survival data under all of these approaches, including the Cox model with time-dependent covariates.
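A minimal sketch of one such generating procedure: a single binary covariate switches on at an intermediate event time, and the survival time is drawn by inverting the resulting piecewise cumulative hazard (all rates illustrative).

```python
import numpy as np

rng = np.random.default_rng(3)

def sim_survival_tdc(n, h0=0.1, beta=0.7, rate_int=0.15):
    """Hazard is h0 before the intermediate event at S ~ Exp(rate_int)
    and h0*exp(beta) afterwards; T inverts the cumulative hazard at a
    unit-exponential draw E."""
    S = rng.exponential(1 / rate_int, n)
    E = rng.exponential(1.0, n)
    T = np.where(E < h0 * S,                       # event before the switch?
                 E / h0,
                 S + (E - h0 * S) / (h0 * np.exp(beta)))
    return T, S

T, S = sim_survival_tdc(10_000)
print(f"{np.mean(S < T):.1%} of subjects reach the intermediate event first")
```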

13.

In general, survival data are time-to-event data, such as time to death, time to appearance of a tumor, or time to recurrence of a disease. Models for survival data have frequently been based on the proportional hazards model proposed by Cox, which has extensive application in the social, medical, behavioral, and public health sciences. In this paper we propose a more efficient sampling method for recruiting subjects for survival analysis: a moving extreme ranked set sampling (MERSS) scheme, with ranking based on an easy-to-evaluate baseline auxiliary variable known to be associated with survival time. We demonstrate that this approach provides a more powerful testing procedure, as well as a more efficient estimate of the hazard ratio, than procedures based on simple random sampling (SRS). Theoretical derivations and simulation studies are provided. The Iowa 65+ Rural study data are used to illustrate the methods developed in this paper.
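A numpy sketch of the recruitment scheme (maximum variant with set sizes 1..n, one common formulation of MERSS; the minimum variant and other set-size schedules work analogously).

```python
import numpy as np

rng = np.random.default_rng(11)

def merss_indices(aux, n):
    """For i = 1..n, draw a fresh candidate set of size i and retain the
    unit whose cheap auxiliary value is largest; only the n retained units
    have their expensive survival outcome measured."""
    chosen = []
    for i in range(1, n + 1):
        cand = rng.choice(aux.size, size=i, replace=False)
        chosen.append(cand[np.argmax(aux[cand])])
    return np.array(chosen)

aux = rng.normal(size=5_000)     # auxiliary variable linked to survival time
idx = merss_indices(aux, n=20)   # recruit 20 subjects for the survival study
```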

14.
Fitting cross-classified multilevel models with a binary response is challenging. In this setting a promising method is Bayesian inference through Integrated Nested Laplace Approximations (INLA), which performs well in several latent variable models. We devise a systematic simulation study to assess the performance of INLA with cross-classified binary data under different scenarios defined by the magnitude of the variances of the random effects, the number of observations, the number of clusters, and the degree of cross-classification. In the simulations, INLA is systematically compared with the popular method of Maximum Likelihood via Laplace Approximation. Through an application to the classical salamander mating data, we also compare INLA with the best-performing methods. Given its computational speed and generally good performance, INLA turns out to be a valuable method for fitting logistic cross-classified models.
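A data-generating sketch of the simulation scenarios: two crossed (non-nested) random effects on the logit scale, with the random-effect variances, sample size, cluster counts, and degree of crossing as the knobs the study varies (values illustrative).

```python
import numpy as np

rng = np.random.default_rng(5)

def sim_cross_classified(n_a=20, n_b=20, n_obs=800, sd_a=1.0, sd_b=1.0, beta=0.5):
    """Each observation belongs simultaneously to one level of factor A and
    one level of factor B - the structure of the salamander mating data."""
    u_a = rng.normal(0, sd_a, n_a)            # factor-A random effects
    u_b = rng.normal(0, sd_b, n_b)            # factor-B random effects
    a = rng.integers(0, n_a, n_obs)
    b = rng.integers(0, n_b, n_obs)
    x = rng.normal(size=n_obs)                # one fixed covariate
    eta = beta * x + u_a[a] + u_b[b]
    y = rng.binomial(1, 1 / (1 + np.exp(-eta)))
    return a, b, x, y
```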

15.
Failure time data represent a particular case of binary longitudinal data. The analysis of the effect on the failure rate of explanatory covariates repeatedly collected over time has been greatly facilitated by Cox's semi-parametric regression model. However, the interpretation of estimated parameters associated with time-dependent covariates is not straightforward, nor does this model fully account for the dynamics of a covariate's effect over time. Markovian regression models are complementary tools for addressing these issues from the predictive point of view. We illustrate these aspects using data from the WHO multicenter study designed to analyze the relation between the duration of postpartum lactational amenorrhea and the breastfeeding pattern. One of the main advantages of this approach in reproductive epidemiology was to provide a flexible tool, easily and directly understood by clinicians and fieldworkers, for simulating situations not yet observed and predicting their effect on the duration of amenorrhea.
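A toy discrete-time Markov sketch of the kind of simulation such a model enables: each month, the probability of leaving amenorrhea depends through a logistic link on the current breastfeeding pattern (all coefficients invented for illustration).

```python
import numpy as np

rng = np.random.default_rng(9)

def simulate_amenorrhea(feeds_per_day, b0=-3.0, b1=0.15, max_months=36):
    """Two-state chain: each month the woman exits amenorrhea with a
    logistic probability that rises as breastfeeding frequency falls."""
    for month in range(1, max_months + 1):
        p_exit = 1 / (1 + np.exp(-(b0 + b1 * (20 - feeds_per_day))))
        if rng.random() < p_exit:
            return month
    return max_months

durations = [simulate_amenorrhea(feeds_per_day=12) for _ in range(1000)]
print(f"median simulated duration: {np.median(durations):.0f} months")
```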

16.
Marginal structural models (MSMs) allow for causal analysis of longitudinal data. The standard MSM is based on discrete-time models, but the continuous-time MSM is a...

17.
Studying the effect of an exposure or intervention on a dichotomous outcome is very common in medical research. Logistic regression (LR) is often used to assess such associations, yielding an odds ratio (OR). The OR often overestimates the effect size when the outcome is prevalent, and in such situations use of the relative risk (RR) has been suggested. We propose modifications to the Zhang and Yu method and to the Diaz-Quijano method. These were compared with the stratified Mantel–Haenszel method, LR, log-binomial regression (LBR), the Zhang and Yu method, Poisson/Cox regression, modified Poisson/Cox regression, the marginal probability method, the COPY method, inverse probability of treatment weighted LBR, and the Diaz-Quijano method. Our modified Diaz-Quijano (MDQ) method provides an RR and confidence interval similar to those estimated by modified Poisson/Cox regression and LBR. The proposed modification of the Zhang and Yu method provides a better estimate of the RR and its standard error than the original method in a variety of situations with prevalent outcomes. The MDQ method can be used easily to estimate the RR and its confidence interval in studies that require reporting of RRs. Regression models that provide the RR estimate directly and without convergence problems, such as the MDQ method and modified Poisson/Cox regression, should be preferred.
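One of the comparators, modified Poisson regression (a Poisson working model with robust sandwich errors for a binary outcome), is easy to sketch with statsmodels; column names are illustrative.

```python
import numpy as np
import statsmodels.api as sm

def modified_poisson_rr(df):
    """df (illustrative): binary 'outcome', binary 'exposed', covariate 'age';
    returns the adjusted relative risk with its 95% confidence interval."""
    X = sm.add_constant(df[["exposed", "age"]])
    fit = sm.GLM(df["outcome"], X,
                 family=sm.families.Poisson()).fit(cov_type="HC0")
    rr = np.exp(fit.params["exposed"])
    lo, hi = np.exp(fit.conf_int().loc["exposed"])
    return rr, (lo, hi)
```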

18.
When investigating differences between two treatment groups in medical research, selecting a suitable model to capture the underlying survival function of each group, together with covariates, is an important issue. Many methods, such as the stratified and unstratified Cox models, have been proposed for this problem; however, different models perform differently under different circumstances, and none dominates the others. In this article, we focus on two-sample problems with right-censored data and propose a model selection criterion based on an approximately unbiased estimator of the Kullback-Leibler loss, which accounts for estimation uncertainty in the survival functions estimated by the various candidate models. The effectiveness of the proposed method is demonstrated in simulation studies, and it is also applied to an HIV+ data set for illustration.
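The two main candidate models are quick to set up with lifelines (illustrative column names); the paper's contribution is the Kullback-Leibler-loss criterion for choosing between such fits, for which the partial log-likelihoods returned below are only a naive stand-in.

```python
from lifelines import CoxPHFitter

def fit_candidates(df):
    """df (illustrative): 'time', 'event', treatment 'group' (0/1), 'age'."""
    # unstratified: 'group' enters as a covariate under proportional hazards
    unstrat = CoxPHFitter().fit(df, duration_col="time", event_col="event")
    # stratified: separate baseline hazard per group, shared covariate effects
    strat = CoxPHFitter().fit(df, duration_col="time", event_col="event",
                              strata=["group"])
    return unstrat.log_likelihood_, strat.log_likelihood_
```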

19.
Longitudinal and time-to-event data are often observed together. Finite mixture models are currently used to analyze nonlinear heterogeneous longitudinal data: by relaxing the homogeneity restriction of nonlinear mixed-effects (NLME) models, they can cluster individuals into one of several pre-specified classes with class-membership probabilities. This clustering may have clinical significance and may be associated with clinically important time-to-event outcomes. This article develops a joint modeling approach that combines a finite mixture of NLME models for the longitudinal data with a proportional hazards Cox model for the time-to-event data, linked by individual latent class indicators, under a Bayesian framework. The proposed joint models and method are applied to a real AIDS clinical trial data set, followed by simulation studies assessing the performance of the proposed joint model against a naive two-step approach in which the finite mixture model and the Cox model are fitted separately.
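A data-generating sketch of the structure being modeled: a single latent class indicator drives both a class-specific nonlinear trajectory and a class-specific event hazard (all parameter values invented for illustration).

```python
import numpy as np

rng = np.random.default_rng(13)

def sim_joint_mixture(n=500, p_class1=0.4):
    """Latent class c determines both the NLME-type longitudinal decay
    rate and the exponential event hazard - the link the joint model exploits."""
    c = rng.binomial(1, p_class1, n)             # latent class indicator
    visits = np.linspace(0, 2, 8)                # longitudinal visit times
    decay = np.where(c == 1, 2.0, 0.5)           # class-specific decay rate
    y = (4 * np.exp(-np.outer(decay, visits))    # nonlinear trajectories
         + rng.normal(0, 0.3, (n, visits.size)))
    hazard = np.where(c == 1, 0.8, 0.2)          # class-specific hazard
    event_time = rng.exponential(1 / hazard)
    return c, y, event_time
```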

20.
The Cox proportional hazards frailty model, which incorporates a random effect, has been proposed for the analysis of right-censored data consisting of a large number of small clusters of correlated failure time observations. For right-censored data, Cai et al. [3] proposed a class of semiparametric mixed-effects models that provides a useful alternative to the Cox model. We demonstrate that the approach of Cai et al. [3] can be used to analyze clustered doubly censored data when both the left- and right-censoring variables are always observed. The asymptotic properties of the proposed estimator are derived, and a simulation study is conducted to investigate its performance.
