Similar Literature
20 similar articles retrieved (search time: 46 ms)
1.
Propensity score methods are increasingly used in the medical literature to estimate treatment effects from observational data. Despite many papers on propensity score analysis, few have focused on the analysis of survival data. Even within the framework of the popular proportional hazards model, the choice among marginal, stratified or adjusted models remains unclear. A Monte Carlo simulation study was used to compare the performance of several survival models in estimating both marginal and conditional treatment effects. The impact of accounting, or not, for pairing when analysing propensity-score-matched survival data was assessed, and the influence of unmeasured confounders was investigated. After matching on the propensity score, both marginal and conditional treatment effects could be reliably estimated. Ignoring the paired structure of the data led to an inflated test size due to an overestimated variance of the treatment effect. Among the survival models considered, stratified models systematically showed poorer performance. Omitting a covariate from the propensity score model led to biased estimation of the treatment effect, but replacing the unmeasured confounder with a correlated one markedly decreased this bias. Our study shows that propensity scores applied to survival data can yield unbiased estimates of both marginal and conditional treatment effects when marginal and adjusted Cox models are used. In all cases, it is necessary to account for pairing when analysing propensity-score-matched data, using a robust estimator of the variance. Copyright © 2012 John Wiley & Sons, Ltd.
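The recommendation to account for pairing can be illustrated with a resampling sketch: when estimating uncertainty for a matched-pairs analysis, resample matched pairs rather than individual subjects, so the paired structure is preserved. This is a minimal, hypothetical illustration operating on pair-level effect summaries; the function name and the pair-level input are assumptions of this sketch, not the authors' method (they use a robust variance estimator for Cox models).

```python
import numpy as np

rng = np.random.default_rng(0)

def paired_bootstrap_se(effects_per_pair, n_boot=2000):
    """SE of a matched-pairs estimate obtained by resampling PAIRS
    (not individuals), so the matched structure is preserved."""
    effects = np.asarray(effects_per_pair, float)
    n = len(effects)
    stats = np.empty(n_boot)
    for b in range(n_boot):
        idx = rng.integers(0, n, n)  # sample pair indices with replacement
        stats[b] = effects[idx].mean()
    return float(stats.std(ddof=1))
```

Resampling individuals instead of pairs would, as the abstract notes, misstate the variance of the treatment effect.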

2.
In a study comparing the effects of two treatments, the propensity score is the probability of assignment to one treatment conditional on a subject's measured baseline covariates. Propensity-score matching is increasingly being used to estimate the effects of exposures using observational data. In the most common implementation, pairs of treated and untreated subjects are formed whose propensity scores differ by at most a pre-specified amount (the caliper width). There has been little research into the optimal caliper width. We conducted an extensive series of Monte Carlo simulations to determine the optimal caliper width for estimating differences in means (for continuous outcomes) and risk differences (for binary outcomes). When estimating differences in means or risk differences, we recommend that researchers match on the logit of the propensity score using calipers of width equal to 0.2 of the standard deviation of the logit of the propensity score. When at least some of the covariates were continuous, this value, or one close to it, minimized the mean squared error of the resulting treatment effect estimate. It also eliminated at least 98% of the bias in the crude estimator and produced confidence intervals with approximately the correct coverage rates; the empirical type I error rate was also approximately correct. When all of the covariates were binary, the choice of caliper width had a much smaller impact on the performance of estimators of risk differences and differences in means.
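The recommended rule — match on the logit of the propensity score with a caliper of 0.2 standard deviations of that logit — can be sketched as a greedy 1:1 nearest-neighbour matcher. This is an illustrative implementation under stated assumptions: the greedy matching order and the function name are choices of this sketch, not the paper's algorithm.

```python
import numpy as np

def caliper_match(ps_treated, ps_control, caliper_sd=0.2):
    """Greedy 1:1 nearest-neighbour matching on the logit of the
    propensity score, caliper = caliper_sd * SD(logit of all scores)."""
    logit = lambda p: np.log(p / (1 - p))
    lt = logit(np.asarray(ps_treated, float))
    lc = logit(np.asarray(ps_control, float))
    caliper = caliper_sd * np.std(np.concatenate([lt, lc]), ddof=1)
    available = np.ones(len(lc), dtype=bool)
    pairs = []
    for i in np.argsort(lt):               # greedy: one pass over treated
        d = np.abs(lc - lt[i])
        d[~available] = np.inf             # controls may be used only once
        j = int(np.argmin(d))
        if d[j] <= caliper:                # match only within the caliper
            pairs.append((int(i), j))
            available[j] = False
    return pairs, caliper
```

Treated subjects with no control inside the caliper simply remain unmatched, which is the mechanism by which the caliper trades bias against sample size.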

3.
In this study, we demonstrate how generalized propensity score estimators (Imbens' weighted estimator, the propensity score weighted estimator, and the generalized doubly robust estimator) can be used to calculate adjusted marginal probabilities for estimating the three common binomial parameters: the risk difference (RD), the relative risk (RR), and the odds ratio (OR). We further conduct a simulation study to compare the estimated RD, RR, and OR based on the adjusted and the unadjusted marginal probabilities in terms of bias and mean squared error (MSE). Although there is no clear winner in terms of MSE, the simulation results surprisingly show that the unadjusted marginal probabilities produce the smallest bias in most of the estimates. Hence, we recommend using the unadjusted marginal probabilities to estimate RD, RR, and OR in practice.
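The general idea of deriving RD, RR, and OR from adjusted marginal probabilities can be sketched with simple inverse-probability (Hájek-normalized) weights. This is a minimal illustration of the construction, not the specific estimators compared in the paper; the function name is hypothetical.

```python
import numpy as np

def adjusted_marginal_probs(y, t, ps):
    """IPW-adjusted marginal outcome probabilities per arm, then
    RD, RR and OR. y, t are binary arrays; ps = estimated P(T=1 | X)."""
    y, t, ps = map(np.asarray, (y, t, ps))
    # Hajek-normalized weighted means: adjusted P(Y=1) under each arm
    p1 = np.sum(t * y / ps) / np.sum(t / ps)
    p0 = np.sum((1 - t) * y / (1 - ps)) / np.sum((1 - t) / (1 - ps))
    rd = p1 - p0                                   # risk difference
    rr = p1 / p0                                   # relative risk
    orr = (p1 / (1 - p1)) / (p0 / (1 - p0))        # odds ratio
    return rd, rr, orr
```

When the propensity score is constant (as in a randomized design), the adjusted probabilities reduce to the raw arm-specific proportions, so RD = 0, RR = 1, OR = 1 for balanced outcomes.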

4.
In the medical literature, there has been increased interest in evaluating associations between exposures and outcomes using nonrandomized observational studies. However, because assignment to exposure is not random in observational studies, comparisons of outcomes between exposed and nonexposed subjects must account for the effect of confounders. Propensity score methods have been widely used to control for confounding when estimating exposure effects. Previous studies have shown that conditioning on the propensity score results in biased estimation of the conditional odds ratio and hazard ratio. However, research is lacking on the performance of propensity score methods for covariate adjustment when estimating the area under the ROC curve (AUC). In this paper, the AUC is proposed as a measure of effect when outcomes are continuous; it is interpreted as the probability that a randomly selected nonexposed subject has a better response than a randomly selected exposed subject. A series of simulations examines the performance of propensity score methods when the association between exposure and outcome is quantified by the AUC, including determining the optimal choice of variables for the propensity score model. Additionally, the propensity score approach is compared with the conventional regression approach to covariate adjustment for the AUC. The choice of the best estimator depends on bias, relative bias, and root mean squared error. Finally, an example examining the relationship between depression/anxiety and pain intensity in people with sickle cell disease illustrates estimation of the adjusted AUC using the proposed approaches.
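The AUC interpretation given here — the probability that a randomly selected nonexposed subject has a better response than a randomly selected exposed one — corresponds to the empirical rank (Mann–Whitney) estimator, sketched below with ties counted as one half. The function name is a choice of this sketch.

```python
import numpy as np

def auc_nonexposed_better(y_nonexposed, y_exposed):
    """Empirical P(Y_nonexposed > Y_exposed), ties counted as 1/2:
    average over all cross-group pairs of the indicator of a better
    response in the nonexposed subject."""
    y0 = np.asarray(y_nonexposed, float)[:, None]   # column: nonexposed
    y1 = np.asarray(y_exposed, float)[None, :]      # row: exposed
    return float(np.mean((y0 > y1) + 0.5 * (y0 == y1)))
```

A value of 0.5 indicates no exposure effect on this scale; the paper's contribution concerns how to adjust this quantity for confounders via the propensity score.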

5.
Treatment effect estimators that utilize the propensity score as a balancing score, e.g. matching and blocking estimators, are robust to misspecifications of the propensity score model when the misspecified model is itself a balancing score. Such misspecifications arise from using the balancing property of the propensity score in the specification procedure. Here, we study misspecifications of a parametric propensity score model written as a linear predictor inside a strictly monotonic function, e.g. a generalized linear model representation. Under mild assumptions we show that for misspecifications such as omitting higher-order terms or choosing the wrong link function, the true propensity score is a function of the misspecified model; hence, the latter does not bias the treatment effect estimator. It is also shown that a misspecification of the propensity score does not necessarily lead to less efficient estimation of the treatment effect. The results are highlighted in simulations where different misspecifications are studied.

6.
The generalized doubly robust estimator is proposed for estimating the average treatment effect (ATE) of multiple treatments based on the generalized propensity score (GPS). In medical research using observational studies, estimates of ATEs are usually biased because the covariate distributions can be unbalanced across treatments. To overcome this problem, Imbens [The role of the propensity score in estimating dose-response functions, Biometrika 87 (2000), pp. 706–710] and Feng et al. [Generalized propensity score for estimating the average treatment effect of multiple treatments, Stat. Med. (2011), in press. Available at: http://onlinelibrary.wiley.com/doi/10.1002/sim.4168/abstract] proposed weighted estimators, extensions of a GPS-based ratio estimator, for estimating ATEs with multiple treatments. However, the ratio estimator always produces a larger empirical sample variance than the doubly robust estimator, which estimates the ATE between two treatments based on the estimated propensity score (PS). We conduct a simulation study to compare our proposed estimator with Imbens' and Feng et al.'s estimators; the results show that our estimator outperforms theirs in terms of bias, empirical sample variance, and mean squared error of the estimated ATEs.

7.
Biao Zhang, Statistics, 2016, 50(5): 1173–1194
Missing covariate data occur often in regression analysis. We study methods for estimating the regression coefficients in an assumed conditional mean function when some covariates are completely observed but others are missing for some subjects. We adopt the semiparametric perspective of Robins et al. [Estimation of regression coefficients when some regressors are not always observed. J Amer Statist Assoc. 1994;89:846–866] on regression analyses with missing covariates, in which they pioneered the use of two working models: the working propensity score model and the working conditional score model. A recent approach to missing covariate data analysis is the empirical likelihood method of Qin et al. [Empirical likelihood in missing data problems. J Amer Statist Assoc. 2009;104:1492–1503], which effectively combines unbiased estimating equations. In this paper, we consider an alternative likelihood approach based on the full likelihood of the observed data. This full-likelihood-based method yields estimators of the vector of regression coefficients that are (a) asymptotically equivalent to those of Qin et al. when the working propensity score model is correctly specified, and (b) doubly robust, like the augmented inverse probability weighting (AIPW) estimators of Robins et al. Thus, the proposed full-likelihood-based estimators improve on the efficiency of the AIPW estimators when the working propensity score model is correct but the working conditional score model is possibly incorrect, and improve on the empirical likelihood estimators of Qin et al. when the reverse is true, that is, when the working conditional score model is correct but the working propensity score model is possibly incorrect. In addition, we consider a regression method for estimating the regression coefficients when the working conditional score model is correctly specified; the asymptotic variance of the resulting estimator is no greater than the semiparametric variance bound characterized by the theory of Robins et al. Finally, we compare the finite-sample performance of various estimators in a simulation study.

8.
This paper concerns a model averaging procedure for varying-coefficient partially linear models with missing responses. Profile least-squares estimation and the inverse probability weighted method are employed to estimate the regression coefficients of the partially restricted models, with the propensity score estimated by the covariate balancing propensity score method. The estimators of the linear parameters are shown to be asymptotically normal. We then develop the focused information criterion, formulate the frequentist model averaging estimators and construct the corresponding confidence intervals. Simulation studies examine the finite-sample performance of the proposed methods. We find that the covariate balancing propensity score improves the performance of the inverse probability weighted estimator, and we demonstrate the superiority of the proposed model averaging estimators over existing strategies in terms of mean squared error and coverage probability. Finally, our approach is applied to a real data example.

9.
In measurement error problems, two major consistent estimation methods are the conditional score and the corrected score. They are functional methods that require no parametric assumptions on the mismeasured covariates. The conditional score requires that a suitable sufficient statistic for the mismeasured covariate can be found, while the corrected score requires that the objective score function can be estimated without bias. These requirements limit their ranges of application. The extensively corrected score proposed here is an extension of the corrected score: it yields consistent estimators in many cases where neither the conditional score nor the corrected score is feasible. We demonstrate its construction in generalized linear models and the Cox proportional hazards model, assess its performance by simulation studies, and illustrate its implementation with two real examples.

10.
The propensity score (PS) method is widely used to estimate the average treatment effect (TE) in observational studies, but it is generally confined to binary treatment assignment. In an extension to settings with a multi-level treatment, Imbens proposed the generalized propensity score, the conditional probability of receiving a particular level of the treatment given pre-treatment variables. The average TE can then be estimated by conditioning solely on the generalized PS under the assumption of weak unconfoundedness. In the present work, we adopted this approach and conducted extensive simulations to evaluate the performance of several methods using the generalized PS, including subclassification, matching, inverse probability of treatment weighting (IPTW), and covariate adjustment. Compared with the other methods, IPTW had the best overall performance. We then applied these methods to a retrospective cohort study of 228,876 pregnant women, assessing the impact of exposure to different types of antidepressant medication (no exposure, selective serotonin reuptake inhibitor (SSRI) only, non-SSRI only, and both) during pregnancy on several important infant outcomes (birth weight, gestational age, preterm labor, and respiratory distress).
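The IPTW approach with a generalized propensity score can be sketched as follows: weight each subject by the inverse of the estimated probability of the treatment level actually received, then take weighted means of the outcome per level. The function name and the Hájek normalization are choices of this illustration, not the specific estimators evaluated in the paper.

```python
import numpy as np

def iptw_means(y, t, gps):
    """IPTW estimates of E[Y(k)] for each treatment level k, where
    gps[i, k] = estimated P(T_i = k | X_i), the generalized PS."""
    y, t = np.asarray(y, float), np.asarray(t)
    levels = np.unique(t)
    means = {}
    for k_idx, k in enumerate(levels):
        w = (t == k) / gps[:, k_idx]            # inverse-probability weights
        means[int(k)] = float(np.sum(w * y) / np.sum(w))  # Hajek-normalized
    return means
```

Contrasts between any two levels (e.g. SSRI-only versus no exposure) are then differences of the corresponding weighted means.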

11.
We consider estimating the mode of a response given an error‐prone covariate. It is shown that ignoring measurement error typically leads to inconsistent inference for the conditional mode of the response given the true covariate, as well as misleading inference for regression coefficients in the conditional mode model. To account for measurement error, we first employ the Monte Carlo corrected score method (Novick & Stefanski, 2002) to obtain an unbiased score function based on which the regression coefficients can be estimated consistently. To relax the normality assumption on measurement error this method requires, we propose another method where deconvoluting kernels are used to construct an objective function that is maximized to obtain consistent estimators of the regression coefficients. Besides rigorous investigation on asymptotic properties of the new estimators, we study their finite sample performance via extensive simulation experiments, and find that the proposed methods substantially outperform a naive inference method that ignores measurement error. The Canadian Journal of Statistics 47: 262–280; 2019 © 2019 Statistical Society of Canada

12.
When measurement error is present in covariates, it is well known that naïvely fitting a generalized linear model results in inconsistent inference. Several methods have been proposed to adjust for measurement error without making undue distributional assumptions about the unobserved true covariates. Stefanski and Carroll focused on an unbiased estimating function rather than a likelihood approach. Their estimating function, known as the conditional score, exists for logistic regression models but has two problems: a poorly behaved Wald test and multiple solutions. They suggested a heuristic procedure to identify the best solution that works well in practice but has little theoretical support compared with maximum likelihood estimation. To help resolve these problems, we propose a conditional quasi-likelihood to accompany the conditional score; it provides an alternative to the Wald test and successfully identifies the consistent solution in large samples.

13.
Wu Hao, Peng Fei. Statistical Research (《统计研究》), 2020, 37(4): 114–128
The propensity score is an important tool for estimating average treatment effects. In observational studies, however, imbalance in the covariate distributions between the treatment and control groups often produces extreme propensity scores, i.e. scores very close to 0 or 1. This brings the strong ignorability assumption of causal inference close to violation, leading to large bias and variance in the estimated average treatment effect. Li et al. (2018a) proposed the covariate balancing weighting method, which removes the influence of extreme propensity scores by achieving weighted balance of the covariate distributions under the unconfoundedness assumption. Building on this, we propose a robust and efficient estimation method based on covariate balancing weights, and improve its robustness in empirical applications by introducing the Super Learner algorithm. We further extend this to a robust and efficient covariate-balancing-weighted estimator that, in theory, does not depend on the assumptions of the outcome regression model or the propensity score model. Monte Carlo simulations show that both proposed methods retain very small bias and variance even when both the outcome regression model and the propensity score model are misspecified. In the empirical application, we apply the two methods to the right heart catheterization data and find that right heart catheterization increases patient mortality by about 6.3%.
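A standard diagnostic for whether covariate balancing weights have achieved their goal is the weighted standardized mean difference of each covariate between groups. The sketch below is a generic balance check under stated assumptions (the function name is hypothetical); it is not the estimator proposed in the paper.

```python
import numpy as np

def weighted_smd(x, t, w):
    """Weighted standardized mean difference of one covariate between
    treated (t==1) and control (t==0): a balance diagnostic after
    propensity-score or covariate-balancing weighting."""
    x, t, w = (np.asarray(a, float) for a in (x, t, w))
    def wmean(v, ww):
        return np.sum(ww * v) / np.sum(ww)
    def wvar(v, ww):
        m = wmean(v, ww)
        return np.sum(ww * (v - m) ** 2) / np.sum(ww)
    m1, m0 = wmean(x[t == 1], w[t == 1]), wmean(x[t == 0], w[t == 0])
    v1, v0 = wvar(x[t == 1], w[t == 1]), wvar(x[t == 0], w[t == 0])
    return (m1 - m0) / np.sqrt((v1 + v0) / 2)   # pooled-SD standardization
```

An absolute weighted SMD near zero (a common rule of thumb is below 0.1) for every covariate indicates the weights have balanced the groups; extreme propensity scores typically manifest as covariates that fail this check.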

14.
Estimating equations that are not necessarily likelihood-based score equations are becoming increasingly popular for estimating regression model parameters. This paper is concerned with estimation based on general estimating equations when true covariate data are missing for all study subjects, but surrogate or mismeasured covariates are available instead. The method is motivated by the covariate measurement error problem in marginal or partly conditional regression of longitudinal data. We propose to base estimation on the expectation of the complete-data estimating equation conditioned on the available data. The regression parameters and other nuisance parameters are estimated simultaneously by solving the resulting estimating equations. The expected estimating equation (EEE) estimator equals the maximum likelihood estimator if the complete-data scores are likelihood scores and conditioning is with respect to all the available data. A pseudo-EEE estimator, which requires less computation, is also investigated. Asymptotic distribution theory is derived, and small-sample simulations are conducted with a first-order autoregressive error process. Regression calibration is extended to this setting and compared with the EEE approach. We demonstrate the methods on data from a longitudinal study of the relationship between childhood growth and adult obesity.

15.
Abstract. The conditional score approach is proposed for the analysis of errors-in-variables current status data under the proportional odds model. Distinct from conditional scores in other applications, the proposed conditional score involves a high-dimensional nuisance parameter, posing challenges in both asymptotic theory and computation. We propose a composite algorithm combining the Newton–Raphson and self-consistency algorithms for computation, and develop an efficient conditional score, analogous to the efficient score from a typical semiparametric likelihood, for building an asymptotic linear expansion and hence the asymptotic distribution of the conditional-score estimator of the regression parameter. Our proposal performs well in simulation studies and is applied to a zebrafish basal cell carcinoma dataset involving measurement errors in gene expression levels.

16.
We discuss the covariate dimension reduction properties of conditional density ratios in the estimation of balanced contrasts of expectations. Conditional density ratios, as well as related sufficient summaries, can be used to replace the covariates with a smaller number of variables. For example, for comparisons among k populations the covariates can be replaced with k − 1 conditional density ratios. The dimension reduction properties of conditional density ratios are directly connected with sufficiency, the dimension reduction concepts considered in regression theory, and propensity theory. The theory presented here extends the ideas of propensity theory to situations in which propensities do not exist, and develops an approach to dimension reduction outside of the potential outcomes or counterfactual framework. Under general conditions, we show that a principal components transformation of the estimated conditional density ratios can be used to investigate whether a sufficient summary of dimension lower than k − 1 exists, and to identify such a lower-dimensional summary.

17.
Summary. The paper considers a rectangular array asymptotic embedding for multistratum data sets, in which both the number of strata and the number of within-stratum replications increase, and at the same rate. It is shown that under this embedding the maximum likelihood estimator is consistent but not efficient owing to a non-zero mean in its asymptotic normal distribution. By using a projection operator on the score function, an adjusted maximum likelihood estimator can be obtained that is asymptotically unbiased and has a variance that attains the Cramér–Rao lower bound. The adjusted maximum likelihood estimator can be viewed as an approximation to the conditional maximum likelihood estimator.

18.
Drug developers are required to demonstrate substantial evidence of effectiveness through adequate and well-controlled (A&WC) studies to obtain marketing approval for their medicine. What constitutes A&WC is usually interpreted as the conduct of randomized controlled trials (RCTs). However, such trials are sometimes unfeasible because of their size, duration, and cost. One way to reduce the sample size is to leverage information on the control arm through a prior. One consideration when forming a data-driven prior is the consistency of the external and current data; it is essential to make this process less susceptible to choosing only information that improves the chances of an effectiveness claim. For this purpose, propensity score methods are employed for two reasons: (1) they give the probability of a patient being in the trial, and (2) they minimize selection bias by pairing trial subjects with control subjects in the external data who are similar in their pretreatment characteristics. Two matching schemes based on propensity scores, estimated through generalized boosted methods, are applied to a real example in which external data are used to perform Bayesian augmented control in a trial with disproportionate allocation. The simulation results show that the data augmentation process prevents prior-data conflict and improves the precision of the estimator of the average treatment effect.

19.
Summary. The paper proposes an alternative approach to studying the effect of premarital cohabitation on subsequent duration of marriage, based on a strong ignorability assumption. The approach, propensity score matching, consists of computing survival functions conditional on a function of observed variables (the propensity score), thus eliminating any selection derived from these variables. In this way, it is possible to identify a time-varying effect of cohabitation without making any assumption about either its shape or the functional form of covariate effects. The output of the matching method is the difference between the survival functions of treated and untreated individuals at each time point. Results show that the cohabitation effect on duration of marriage is indeed time-varying: it is close to zero for the first 2–3 years and rises considerably in the following years.

20.
Observational studies are increasingly being used in medicine to estimate the effects of treatments or exposures on outcomes. To minimize the potential for confounding when estimating treatment effects, propensity score methods are frequently implemented. Often the outcome is a time to event. While it is common to report the treatment effect as a relative effect, such as the hazard ratio, reporting it using an absolute measure is also important. One commonly used absolute measure is the risk difference: the difference between treatment and comparison groups in the probability of an event occurring within a specified duration of follow-up. We first describe methods for point and variance estimation of the risk difference when using weighting or matching based on the propensity score with time-to-event outcomes. Next, we conducted Monte Carlo simulations to compare the relative performance of these methods with respect to bias of the point estimate, accuracy of variance estimates, and coverage of estimated confidence intervals. The simulation results generally support the use of weighting methods (untrimmed ATT weights and IPTW) or caliper matching for point estimation when the prevalence of treatment is low. For standard error estimation, the results support the use of weighted robust standard errors, bootstrap methods, or matching with a naïve standard error (i.e., the Greenwood method). The methods are illustrated using a real-world example estimating the effect of discharge prescribing of statins in patients hospitalized for acute myocardial infarction.
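The risk difference described here — the difference in the probability of an event within a follow-up horizon — can be sketched for weighted time-to-event data with an IPTW-weighted Kaplan–Meier estimator. This is a simplified illustration under stated assumptions (no variance estimation, ATE-style inverse-probability weights, hypothetical function names), not the full set of estimators the paper evaluates.

```python
import numpy as np

def weighted_km_risk(time, event, w, horizon):
    """Risk 1 - S(horizon) from a weighted Kaplan-Meier estimator:
    at each event time, multiply the survival by one minus the
    weighted event count over the weighted number at risk."""
    time, event, w = (np.asarray(a, float) for a in (time, event, w))
    s = 1.0
    for t in np.unique(time[(event == 1) & (time <= horizon)]):
        at_risk = w[time >= t].sum()
        d = w[(time == t) & (event == 1)].sum()
        s *= 1.0 - d / at_risk
    return 1.0 - s

def iptw_risk_difference(time, event, treat, ps, horizon):
    """Risk at `horizon` for treated minus control, with
    inverse-probability-of-treatment weights from the PS."""
    time, event, treat, ps = (np.asarray(a, float)
                              for a in (time, event, treat, ps))
    w = np.where(treat == 1, 1.0 / ps, 1.0 / (1.0 - ps))
    r1 = weighted_km_risk(time[treat == 1], event[treat == 1],
                          w[treat == 1], horizon)
    r0 = weighted_km_risk(time[treat == 0], event[treat == 0],
                          w[treat == 0], horizon)
    return r1 - r0
```

With no censoring before the horizon and constant weights, this reduces to the difference in raw event proportions, which is a useful sanity check for any implementation.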

