Similar Documents
20 similar documents found (search time: 16 ms).
1.
Several survival regression models have been developed to assess the effects of covariates on failure times. In various settings, including surveys, clinical trials and epidemiological studies, missing data often occur due to incomplete covariate information. Most existing methods for lifetime data rest on the assumption that covariates are missing at random (MAR). In many substantive applications, however, it is important to assess the sensitivity of key model inferences to the MAR assumption. The index of sensitivity to non-ignorability (ISNI) is a local sensitivity tool that measures the potential sensitivity of key model parameters to small departures from the ignorability assumption, without requiring estimation of a complicated non-ignorable model. We extend this sensitivity index to evaluate the impact of a covariate that is potentially missing not at random in survival analysis, using parametric survival models. The approach is applied to investigate the impact of missing tumor grade on post-surgical mortality outcomes in individuals with pancreas-head cancer in the Surveillance, Epidemiology, and End Results data set. For patients suffering from cancer, tumor grade is an important risk factor, and many individuals with pancreas-head cancer in these data have missing tumor grade information. Our ISNI analysis shows that the estimated effects of most covariates with a significant effect on the survival time distribution, in particular surgery and tumor grade, depend strongly on the assumed missingness mechanism for tumor grade. A simulation study is also conducted to evaluate the performance of the proposed index in detecting sensitivity of key model parameters.
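For orientation, the index has a standard local form in this literature (the notation below is assumed, not taken from the article): writing $L(\theta,\gamma)$ for the joint log-likelihood of the data and the missingness model, with $\gamma=0$ corresponding to an ignorable (MAR) mechanism,

$$\mathrm{ISNI} \;=\; \left.\frac{\partial \hat\theta(\gamma)}{\partial \gamma}\right|_{\gamma=0} \;=\; -\Bigl[\nabla^2_{\theta\theta} L(\hat\theta, 0)\Bigr]^{-1} \nabla^2_{\theta\gamma} L(\hat\theta, 0),$$

so the index is computed entirely from quantities available at the ignorable fit, which is what makes the approach cheap relative to fitting a full non-ignorable model.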

2.
The use of the Cox proportional hazards regression model is widespread. A key assumption of the model is that of proportional hazards. Analysts frequently test the validity of this assumption using statistical significance testing, but the statistical power of such assessments is frequently unknown. We used Monte Carlo simulations to estimate the statistical power of two different methods for detecting violations of this assumption. When the covariate was binary, we found that a model-based method had greater power than a method based on cumulative sums of martingale residuals. Furthermore, the parametric form of the distribution of event times affected power when the covariate was binary. Statistical power to detect a strong violation of the proportional hazards assumption was low to moderate even when the number of observed events was high. In many data sets, power to detect a violation of this assumption is likely to be low to moderate.
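As a hedged illustration of this kind of power study (not the authors' simulation design), the sketch below estimates the power of the Schoenfeld-residual-based PH test in lifelines for a binary covariate whose two groups follow Weibull distributions with different shape parameters, a classic non-proportional setup; the sample size, shapes, censoring scheme and number of replications are arbitrary choices.

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter
from lifelines.statistics import proportional_hazard_test

rng = np.random.default_rng(1)

def simulate(n=500):
    """Binary covariate; Weibull shapes differ by group, so hazards are non-PH."""
    x = rng.integers(0, 2, n)
    shape = np.where(x == 1, 1.5, 1.0)
    t = 2.0 * rng.weibull(shape)              # event times
    c = rng.uniform(0, 4, n)                  # administrative-style censoring
    return pd.DataFrame({"time": np.minimum(t, c),
                         "event": (t <= c).astype(int),
                         "x": x})

n_sim, rejections = 200, 0
for _ in range(n_sim):
    df = simulate()
    cph = CoxPHFitter().fit(df, "time", "event")
    res = proportional_hazard_test(cph, df, time_transform="rank")
    if np.atleast_1d(res.p_value)[0] < 0.05:  # reject PH for covariate x?
        rejections += 1
print(f"estimated power: {rejections / n_sim:.2f}")
```

Varying the shapes, the censoring rate and n in `simulate` reproduces the qualitative finding above: even pronounced non-proportionality can go undetected at moderate event counts.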

3.
We propose methods for Bayesian inference for missing covariate data with a novel class of semi-parametric survival models with a cure fraction. We allow the missing covariates to be either categorical or continuous and specify a parametric distribution for the covariates that is written as a sequence of one-dimensional conditional distributions. We assume throughout that the missing covariates are missing at random (MAR). We propose an informative class of joint prior distributions for the regression coefficients and the parameters arising from the covariate distributions. The proposed class of priors is shown to be useful in recovering information on the missing covariates, especially in situations where the missing data fraction is large. Properties of the proposed prior and resulting posterior distributions are examined. Also, model checking techniques are proposed for sensitivity analyses and for checking the goodness of fit of a particular model. Specifically, we extend the Conditional Predictive Ordinate (CPO) statistic to assess goodness of fit in the presence of missing covariate data. Computational techniques using the Gibbs sampler are implemented. A real data set involving a melanoma cancer clinical trial is examined to demonstrate the methodology.

4.
The analysis of failure time data often involves two strong assumptions. The proportional hazards assumption postulates that hazard rates corresponding to different levels of explanatory variables are proportional. The additive effects assumption specifies that the effect associated with a particular explanatory variable does not depend on the levels of other explanatory variables. A hierarchical Bayes model is presented, under which both assumptions are relaxed. In particular, time-dependent covariate effects are explicitly modelled, and the additivity of effects is relaxed through the use of a modified neural network structure. The hierarchical nature of the model is useful in that it parsimoniously penalizes violations of the two assumptions, with the strength of the penalty being determined by the data.

5.
The conventional Cox proportional hazards regression model contains a loglinear relative risk function, linking the covariate information to the hazard ratio with a finite number of parameters. A generalization, termed the partly linear Cox model, allows for both finite dimensional parameters and an infinite dimensional parameter in the relative risk function, providing a more robust specification of the relative risk function. In this work, a likelihood-based inference procedure is developed for the finite dimensional parameters of the partly linear Cox model. To alleviate the problems associated with a likelihood approach in the presence of an infinite dimensional parameter, the relative risk is reparameterized such that the finite dimensional parameters of interest are orthogonal to the infinite dimensional parameter. Inference on the finite dimensional parameters is accomplished through maximization of the profile partial likelihood, profiling out the infinite dimensional nuisance parameter using a kernel function. The asymptotic distribution theory for the maximum profile partial likelihood estimate is established. This estimate is shown to be asymptotically efficient; the orthogonal reparameterization enables the use of profile likelihood inference procedures without adjustment for estimation of the nuisance parameter. An example from a retrospective analysis in cancer demonstrates the methodology.

6.
We consider failure time regression analysis with an auxiliary variable in the presence of a validation sample. We extend the nonparametric inference procedure of Zhou and Pepe to handle a continuous auxiliary or proxy covariate. We estimate the induced relative risk function with a kernel smoother and allow the selection probability of the validation set to depend on the observed covariates. We present some asymptotic properties of the kernel estimator and provide some simulation results. The proposed method is illustrated with a data set from an ongoing epidemiologic study.

7.
In two observational studies, one investigating the effects of minimum wage laws on employment and the other of the effects of exposures to lead, an estimated treatment effect's sensitivity to hidden bias is examined. The estimate uses the combined quantile averages that were introduced in 1981 by B. M. Brown as simple, efficient, robust estimates of location admitting both exact and approximate confidence intervals and significance tests. Closely related to Gastwirth's estimate and Tukey's trimean, the combined quantile average has asymptotic efficiency for normal data that is comparable with that of a 15% trimmed mean, and higher efficiency than the trimean, but it has resistance to extreme observations or breakdown comparable with that of the trimean and better than the 15% trimmed mean. Combined quantile averages provide consistent estimates of an additive treatment effect in a matched randomized experiment. Sensitivity analyses are discussed for combined quantile averages when used in a matched observational study in which treatments are not randomly assigned. In a sensitivity analysis in an observational study, subjects are assumed to differ with respect to an unobserved covariate that was not adequately controlled by the matching, so that treatments are assigned within pairs with probabilities that are unequal and unknown. The sensitivity analysis proposed here uses significance levels, point estimates and confidence intervals based on combined quantile averages and examines how these inferences change under a range of assumptions about biases due to an unobserved covariate. The procedures are applied in the studies of minimum wage laws and exposures to lead. The first example is also used to illustrate sensitivity analysis with an instrumental variable.
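A minimal Monte Carlo sketch of the kind of efficiency comparison mentioned above, at the normal model. Brown's combined quantile average uses specific quantile weights that we do not reproduce here; the closely related Gastwirth estimator and Tukey's trimean are shown instead, alongside the 15% trimmed mean, all under assumed simulation settings.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def trimean(x):                                # Tukey's trimean
    q1, med, q3 = np.percentile(x, [25, 50, 75])
    return (q1 + 2 * med + q3) / 4

def gastwirth(x):                              # Gastwirth's weighted quantiles
    q13, med, q23 = np.quantile(x, [1 / 3, 0.5, 2 / 3])
    return 0.3 * q13 + 0.4 * med + 0.3 * q23

n, n_sim = 100, 20000
est = {"mean": [], "trimean": [], "gastwirth": [], "trim15": []}
for _ in range(n_sim):
    x = rng.normal(size=n)
    est["mean"].append(x.mean())
    est["trimean"].append(trimean(x))
    est["gastwirth"].append(gastwirth(x))
    est["trim15"].append(stats.trim_mean(x, 0.15))  # 15% trimmed mean

v = np.var(est["mean"])                        # efficiency relative to the mean
for name in ("trimean", "gastwirth", "trim15"):
    print(f"{name}: relative efficiency {v / np.var(est[name]):.3f}")
```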

8.
In longitudinal data studies, the random errors of mixed-effects models are usually assumed to be normally distributed. Virologic data such as viral load and CD4 cell counts, however, typically exhibit skewness, so the normality assumption may distort model results and even lead to erroneous conclusions. In HIV dynamics studies, the viral response is often related to covariates, and the covariate measurements are usually subject to error. We therefore build a nonlinear mixed-effects joint model with skew-normal errors that models the covariate process jointly with the response, and estimate its parameters by Bayesian inference. Because the covariates explain part of the within-subject variation, the choice of model for the covariate process has an important influence on how well the viral load is fitted. We propose a first-order moving-average model as an improved model for the covariate process; a comparison shows that the viral load model fits better when the covariate process is modelled as a moving average. This result provides useful guidance for the study of covariate models.
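As a small sketch of the covariate-process component only (the full Bayesian skew-normal joint model is far more involved and is not reproduced), one can simulate and fit a first-order moving-average, MA(1), series standing in for a covariate trajectory such as standardized CD4 counts; the series length, mean and coefficient below are assumptions of the sketch.

```python
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(8)
n, theta = 200, 0.6
e = rng.normal(size=n + 1)
cd4 = 1.0 + e[1:] + theta * e[:-1]       # MA(1) series around a mean of 1

fit = ARIMA(cd4, order=(0, 0, 1)).fit()  # order (p, d, q) = (0, 0, 1): MA(1)
print(fit.params)                        # [mean, MA(1) coefficient, noise var]
```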

9.
We consider estimating the mode of a response given an error-prone covariate. It is shown that ignoring measurement error typically leads to inconsistent inference for the conditional mode of the response given the true covariate, as well as misleading inference for the regression coefficients in the conditional mode model. To account for measurement error, we first employ the Monte Carlo corrected score method (Novick & Stefanski, 2002) to obtain an unbiased score function, based on which the regression coefficients can be estimated consistently. To relax the normality assumption on the measurement error that this method requires, we propose a second method in which deconvoluting kernels are used to construct an objective function that is maximized to obtain consistent estimators of the regression coefficients. Besides a rigorous investigation of the asymptotic properties of the new estimators, we study their finite sample performance via extensive simulation experiments, and find that the proposed methods substantially outperform a naive inference method that ignores measurement error. The Canadian Journal of Statistics 47: 262–280; 2019.
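The Monte Carlo corrected score device is easiest to see in a simpler setting than mode regression. The sketch below (assumed data-generating values throughout, and a plain linear-model score rather than the article's mode-regression score) evaluates the naive score at the complex pseudo-covariate W + i·sigma·Z and averages the real part over Monte Carlo draws of Z, which removes the attenuation bias of the naive estimator.

```python
import numpy as np
from scipy.optimize import brentq

rng = np.random.default_rng(10)
n, beta, sigma_u = 5000, 1.5, 0.8
x = rng.normal(size=n)                       # true covariate (unobserved)
y = beta * x + rng.normal(0, 0.3, n)
w = x + rng.normal(0, sigma_u, n)            # observed error-prone covariate

z_draws = rng.normal(size=(50, n))           # Monte Carlo draws, fixed up front

def mc_corrected_score(b):
    """Average real part of the naive score evaluated at W + i*sigma*Z."""
    x_c = w + 1j * sigma_u * z_draws
    return np.real(((y - b * x_c) * x_c).sum(axis=1)).mean()

naive = (w @ y) / (w @ w)                    # attenuated toward zero
corrected = brentq(mc_corrected_score, 0.1, 5.0)
print(f"naive: {naive:.3f}  corrected: {corrected:.3f}  truth: {beta}")
```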

10.
When using the co-twin control design for the analysis of event times, one needs a model that addresses the possible within-pair association. One such model is the shared frailty model, in which a random frailty variable creates the desired within-pair association. Standard inference for this model requires independence between the random effect and the covariates. We study how violations of this assumption affect inference for the regression coefficients and conclude that substantial bias may occur. We propose an alternative way of making inference for the regression parameters by using a fixed-effects model for survival in matched pairs. Fitting this model to data generated from the frailty model provides consistent and asymptotically normal estimates of the regression coefficients, whether or not the independence assumption is met.
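A sketch of the fixed-effects idea (assumed data-generating process, not the article's): pairs share a frailty that is deliberately correlated with the covariate, violating the frailty model's independence assumption, and a Cox model stratified on pair id conditions the frailty out of the partial likelihood.

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(2)
n_pairs, beta = 1000, 0.5
frailty = rng.gamma(2.0, 0.5, n_pairs)       # shared within-pair frailty
rows = []
for i in range(n_pairs):
    for _ in range(2):
        x = rng.normal(loc=0.5 * np.log(frailty[i]))   # x depends on frailty:
        rate = frailty[i] * np.exp(beta * x)           # independence violated
        rows.append({"pair": i, "x": x,
                     "time": rng.exponential(1 / rate), "event": 1})
df = pd.DataFrame(rows)

# Stratifying on pair id makes the within-pair comparison free of the frailty.
cph = CoxPHFitter().fit(df, "time", "event", strata=["pair"])
print(cph.params_)                           # close to beta = 0.5
```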

11.
The authors propose methods for Bayesian inference for generalized linear models with missing covariate data. They specify a parametric distribution for the covariates that is written as a sequence of one-dimensional conditional distributions. They propose an informative class of joint prior distributions for the regression coefficients and the parameters arising from the covariate distributions. They examine the properties of the proposed prior and resulting posterior distributions. They also present a Bayesian criterion for comparing various models, and a calibration is derived for it. A detailed simulation is conducted and two real data sets are examined to demonstrate the methodology.

12.
In randomized clinical trials with time-to-event outcomes, the hazard ratio is commonly used to quantify the treatment effect relative to a control. The Cox regression model is commonly used to adjust for relevant covariates to obtain more accurate estimates of the hazard ratio between treatment groups. However, it is well known that the treatment hazard ratio based on a covariate-adjusted Cox regression model is conditional on the specific covariates and differs from the unconditional hazard ratio, which is an average across the population. Therefore, covariate-adjusted Cox models cannot be used when unconditional inference is desired. In addition, the covariate-adjusted Cox model requires the relatively strong assumption of proportional hazards for each covariate. To overcome these challenges, a nonparametric randomization-based analysis of covariance method was proposed to estimate covariate-adjusted hazard ratios for multivariate time-to-event outcomes. However, the performance (power and type I error rate) of the method has not been evaluated empirically. Although the method is derived for multivariate situations, for most registration trials the primary endpoint is a univariate outcome. Therefore, this approach is applied to univariate outcomes, and its performance is evaluated through a simulation study in this paper. Stratified analysis is also investigated. As an illustration of the method, we apply both covariate-adjusted and unadjusted analyses to an oncology trial.

13.
Abstract. This article deals with two problems concerning the probabilities of causation defined by Pearl (Causality: Models, Reasoning, and Inference, 2nd edn, 2009, Cambridge University Press, New York), namely the probability that one observed event was a necessary (or sufficient, or both) cause of another: one is to derive new bounds, and the other is to provide covariate selection criteria. Tian & Pearl (Ann. Math. Artif. Intell., 28, 2000, 287–313) showed how to bound the probabilities of causation using information from experimental and observational studies under minimal assumptions about the data-generating process, and gave conditions under which these probabilities are identifiable. In this article, we derive narrower bounds using covariate information that is available from those studies. In addition, we propose a conditional monotonicity assumption that narrows the bounds further. Moreover, we discuss the covariate selection problem from the viewpoint of estimation accuracy, and show that selecting a covariate with a direct effect on the outcome variable does not always improve estimation accuracy, contrary to the situation in linear regression models. These results provide more accurate information for public policy, legal determination of responsibility and personal decision making.
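For context, a sketch of the baseline Tian–Pearl bounds on the probability of necessity and sufficiency (PNS) that the article tightens, stated as we recall them for binary x and y; it combines experimental quantities P(y|do(x)), P(y|do(x')) with observational joint probabilities. The covariate-based narrowing proposed in the article is not reproduced here.

```python
def pns_bounds(py_do_x, py_do_xp, p_xy, p_xyp, p_xpy, p_xpyp):
    """Bounds on PNS. p_xy = P(x, y), p_xyp = P(x, y'), etc. (observational)."""
    py = p_xy + p_xpy                     # observational P(y)
    lower = max(0.0,
                py_do_x - py_do_xp,
                py - py_do_xp,
                py_do_x - py)
    upper = min(py_do_x,
                1.0 - py_do_xp,           # = P(y'|do(x'))
                p_xy + p_xpyp,
                py_do_x - py_do_xp + p_xyp + p_xpy)
    return lower, upper

# Example: strong experimental effect with consistent observational data.
print(pns_bounds(0.8, 0.2, 0.4, 0.1, 0.1, 0.4))   # -> (0.6, 0.8)
```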

14.
Summary. We attempt to clarify, and suggest how to avoid, several serious misunderstandings about and fallacies of causal inference. These issues concern some of the most fundamental advantages and disadvantages of each basic research design. Problems include improper use of hypothesis tests for covariate balance between the treated and control groups, and the consequences of using randomization, blocking before randomization and matching after assignment of treatment to achieve covariate balance. Applied researchers in a wide range of scientific disciplines seem to fall prey to one or more of these fallacies and as a result make suboptimal design or analysis choices. To clarify these points, we derive a new four-part decomposition of the key estimation errors in making causal inferences. We then show how this decomposition can help scholars from different experimental and observational research traditions to understand better each other's inferential problems and attempted solutions.
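One of the fallacies named above is judging covariate balance by hypothesis tests, whose p-values conflate balance with sample size. The sketch below (hypothetical normal data, assumed effect sizes) contrasts a t-test with the standardized mean difference, a sample-size-free balance diagnostic: the same small imbalance is "insignificant" at n = 50 and "significant" at n = 5000, while the SMD stays near 0.1.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)

def smd(x_t, x_c):
    """Absolute standardized mean difference with a pooled SD."""
    pooled_sd = np.sqrt((np.var(x_t, ddof=1) + np.var(x_c, ddof=1)) / 2)
    return abs(x_t.mean() - x_c.mean()) / pooled_sd

for n in (50, 5000):
    x_t = rng.normal(0.10, 1, n)    # same small imbalance at both sizes
    x_c = rng.normal(0.00, 1, n)
    t, p = stats.ttest_ind(x_t, x_c)
    print(f"n={n}: SMD={smd(x_t, x_c):.3f}, t-test p={p:.4f}")
```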

15.
Statistically matched files are created in an attempt to solve the practical problem that exists when no single file has the full set of variables needed for drawing important inferences. Previous methods of file matching are reviewed, and the method of file concatenation with adjusted weights and multiple imputations is described and illustrated on an artificial example. A major benefit of this approach is the ability to display sensitivity of inference to untestable assumptions being made when creating the matched file.

16.
吴浩, 彭非. 《统计研究》(Statistical Research), 2020, 37(4): 114-128.
Propensity scores are an important tool for estimating average treatment effects. In observational studies, however, imbalance between the covariate distributions of the treatment and control groups often produces extreme propensity scores, i.e. scores very close to 0 or 1. This brings the strong ignorability assumption of causal inference close to violation and leads to large bias and variance in the estimated average treatment effect. Li et al. (2018a) proposed covariate balancing weighting, which achieves a weighted balance of the covariate distributions under the unconfoundedness assumption and thereby removes the impact of extreme propensity scores. Building on this work, we propose a robust and efficient estimator based on covariate balancing weighting, and improve its robustness in empirical applications by incorporating the Super Learner algorithm. We further extend this to a robust and efficient covariate-balancing-weighted estimator that, in theory, does not rely on correct specification of either the outcome regression model or the propensity score model. Monte Carlo simulations show that both proposed methods retain very small bias and variance even when the outcome regression model and the propensity score model are both misspecified. In the empirical part, we apply the two methods to right heart catheterization data and find that right heart catheterization increases patient mortality by about 6.3%.
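A sketch of one member of the balancing-weight family of Li et al. (2018), the overlap weights: treated units are weighted by 1 - e(x) and controls by e(x), which automatically down-weights extreme propensity scores. Everything below is an assumed toy setup: logistic regression stands in for the article's Super Learner step, the outcome is continuous rather than mortality, and 0.063 is used only as the simulated effect size echoing the 6.3% figure.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(4)
n = 5000
x = rng.normal(size=(n, 2))
e_true = 1 / (1 + np.exp(-(1.5 * x[:, 0] - x[:, 1])))  # extreme scores occur
z = rng.binomial(1, e_true)                             # treatment indicator
y = 0.063 * z + x @ np.array([0.3, -0.2]) + rng.normal(0, 0.1, n)

e_hat = LogisticRegression().fit(x, z).predict_proba(x)[:, 1]
w = np.where(z == 1, 1 - e_hat, e_hat)                  # overlap weights
effect = (np.average(y[z == 1], weights=w[z == 1])
          - np.average(y[z == 0], weights=w[z == 0]))
print(f"overlap-weighted effect estimate: {effect:.3f}")  # ~ 0.063
```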

17.
With the emergence of novel therapies exhibiting distinct mechanisms of action compared with traditional treatments, departure from the proportional hazards (PH) assumption in clinical trials with a time-to-event endpoint is increasingly common. In these situations, the hazard ratio may not be a valid measure of the treatment effect, and the log-rank test may no longer be the most powerful statistical test. The restricted mean survival time (RMST) is an alternative robust and clinically interpretable summary measure that does not rely on the PH assumption. We conduct extensive simulations to evaluate the performance and operating characteristics of RMST-based inference against hazard ratio-based inference, under various scenarios and design parameter setups. The log-rank test is generally powerful when there is evident separation favoring one treatment arm at most time points across the Kaplan-Meier survival curves, but the performance of the RMST-based test is similar. Under non-PH scenarios where late separation of the survival curves is observed, the RMST-based test performs better than the log-rank test when the truncation time is reasonably close to the tail of the observed curves. Furthermore, when a flat survival tail (or low event rate) is expected in the experimental arm, selecting the minimum of the maximum observed event times as the truncation time point for the RMST is not recommended. In addition, we recommend including an analysis based on the RMST curve over the truncation time in clinical settings where substantial departure from the PH assumption is suspected.
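A minimal sketch of an RMST difference at a truncation time tau, under an assumed delayed-separation pattern of the kind where the log-rank test loses power; the distributions, censoring and tau below are arbitrary choices, not the article's simulation design.

```python
import numpy as np
from lifelines import KaplanMeierFitter
from lifelines.utils import restricted_mean_survival_time

rng = np.random.default_rng(5)
n = 300
t0 = rng.exponential(2.0, n)                 # control arm
t1_early = rng.exponential(2.0, n)           # treatment: no benefit before t=1,
t1 = np.where(t1_early < 1.0, t1_early,      # clear benefit afterwards
              1.0 + rng.exponential(4.0, n))
c = rng.uniform(2, 6, n)                     # censoring times

tau = 4.0                                    # truncation time for the RMST
rmsts = []
for t in (t0, t1):
    km = KaplanMeierFitter().fit(np.minimum(t, c), t <= c)
    rmsts.append(restricted_mean_survival_time(km, t=tau))
print(f"RMST difference at tau={tau}: {rmsts[1] - rmsts[0]:.3f}")
```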

18.
The Cox regression model is often used when analyzing survival data, as it provides a convenient way of summarizing covariate effects in terms of relative risks. The proportional hazards assumption may not hold, however; a typical violation is time-changing covariate effects. Under such scenarios one may use more flexible models, but their results may be complicated to communicate, and it is desirable to have simple measures of, say, a treatment effect. In this paper we focus on the odds-of-concordance measure that was recently studied by Schemper et al. (Stat Med 28:2473–2489, 2009), who suggested estimating this measure using weighted Cox regression (WCR). Although WCR may work in many scenarios, no formal proof can be established. We suggest an alternative estimator of the odds-of-concordance measure based on the Aalen additive hazards model. In contrast to WCR, the large sample properties of this estimator can be derived, making formal inference possible. The estimator also allows for additional covariate effects.
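A sketch of why the odds of concordance is a natural effect measure: under proportional hazards it coincides with the hazard ratio, as the uncensored simulation below illustrates with exponential arms (the Aalen-model estimator of the article, which handles censoring, is not reproduced here).

```python
import numpy as np

rng = np.random.default_rng(9)
hr = 2.0                                      # hazard(control) / hazard(treated)
t_treat = rng.exponential(1.0, 100_000)
t_ctrl = rng.exponential(1.0 / hr, 100_000)  # control hazard is hr times larger
p = np.mean(t_ctrl < t_treat)                 # concordance probability
print(f"odds of concordance: {p / (1 - p):.2f}  (hazard ratio = {hr})")
```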

19.
In an observational study in which each treated subject is matched to several untreated controls by using observed pretreatment covariates, a sensitivity analysis asks how hidden biases due to unobserved covariates might alter the conclusions. The bounds required for a sensitivity analysis are the solution to an optimization problem. In general, this optimization problem is not separable, in the sense that one cannot find the needed optimum by performing a separate optimization in each matched set and combining the results. We show, however, that this optimization problem is asymptotically separable, so that when there are many matched sets a separate optimization may be performed in each matched set and the results combined to yield the correct optimum with negligible error. This is true when the Wilcoxon rank sum test or the Hodges-Lehmann aligned rank test is applied in matching with multiple controls. Numerical calculations show that the asymptotic approximation performs well with as few as 10 matched sets. In the case of the rank sum test, a table is given containing the separable solution. With this table, only simple arithmetic is required to conduct the sensitivity analysis. The method also supplies estimates, such as the Hodges-Lehmann estimate, and confidence intervals associated with rank tests. The method is illustrated in a study of dropping out of US high schools and the effects on cognitive test scores.
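For intuition, a sketch of the matched-pairs special case, where no separability argument is needed: a Rosenbaum-style upper bound on the one-sided p-value of the Wilcoxon signed-rank test as the hidden-bias parameter Gamma grows. The data, the normal approximation, and the pairs-only setting are assumptions of the sketch; the article's separable solution for multiple controls is not reproduced.

```python
import numpy as np
from scipy import stats

def signed_rank_upper_pvalue(d, gamma):
    """Upper bound on the one-sided p-value under hidden bias of size gamma."""
    d = np.asarray(d, dtype=float)
    d = d[d != 0]                             # drop zero differences
    ranks = stats.rankdata(np.abs(d))
    t_obs = ranks[d > 0].sum()                # observed signed-rank statistic
    p_plus = gamma / (1 + gamma)              # worst-case P(pair favors treated)
    mu = p_plus * ranks.sum()
    sigma = np.sqrt(p_plus * (1 - p_plus) * (ranks ** 2).sum())
    return stats.norm.sf((t_obs - mu) / sigma)

d = np.random.default_rng(6).normal(0.5, 1, 100)   # pair differences, real effect
for gamma in (1.0, 1.5, 2.0, 3.0):
    print(f"Gamma={gamma}: p <= {signed_rank_upper_pvalue(d, gamma):.4f}")
```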

20.
Covariate adjusted regression (CAR) is a recently proposed adjustment method for regression analysis where neither the response nor the predictors are directly observed [Şentürk, D., Müller, H.G., 2005. Covariate adjusted regression. Biometrika 92, 75–89]. The available data have been distorted by unknown functions of an observable confounding covariate. CAR provides consistent estimators for the coefficients of the regression between the variables of interest, adjusted for the confounder. We develop a broader class of partial covariate adjusted regression (PCAR) models to accommodate both distorted and undistorted (adjusted/unadjusted) predictors. The PCAR model allows for unadjusted predictors, such as age, gender and demographic variables, which are common in the analysis of biomedical and epidemiological data. The available estimation and inference procedures for CAR are shown to be invalid for the proposed PCAR model. We propose new estimators and develop new inference tools for the more general PCAR setting. In particular, we establish the asymptotic normality of the proposed estimators and propose consistent estimators of their asymptotic variances. Finite sample properties of the proposed estimators are investigated using simulation studies, and the method is also illustrated with a Pima Indians diabetes data set.
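A rough sketch of the CAR idea in binned form, under assumed multiplicative distortions with unit mean and a confounder U independent of the predictor: within a narrow bin of U the distortions are nearly constant, so bin-wise slopes can be recombined into a consistent estimate of the undistorted slope. This is our reconstruction of the binning intuition, not the estimator of Şentürk and Müller, and the PCAR extension with some predictors left undistorted is not reproduced.

```python
import numpy as np

rng = np.random.default_rng(7)
n, gamma1 = 20000, 2.0
u = rng.uniform(0, 1, n)                  # observable confounder (e.g. BMI)
x = rng.normal(2, 1, n)
y = 1.0 + gamma1 * x + rng.normal(0, 0.5, n)
psi = 0.7 + 0.6 * u                       # E[psi(U)] = 1, distorts the response
phi = 1.3 - 0.6 * u                       # E[phi(U)] = 1, distorts the predictor
y_obs, x_obs = psi * y, phi * x           # only distorted versions are observed

naive = np.polyfit(x_obs, y_obs, 1)[0]    # biased slope
edges = np.quantile(u, np.linspace(0, 1, 51))
num = 0.0
for lo, hi in zip(edges[:-1], edges[1:]):
    m = (u >= lo) & (u < hi)
    slope = np.polyfit(x_obs[m], y_obs[m], 1)[0]   # ~ psi(u) * gamma1 / phi(u)
    num += m.sum() / n * slope * x_obs[m].mean()   # re-weight by bin mean of X~
car = num / x_obs.mean()
print(f"naive: {naive:.2f}  CAR-style: {car:.2f}  truth: {gamma1}")
```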

