Similar Articles
20 similar articles found.
1.
In randomized trials, investigators are frequently interested in estimating the direct effect of a treatment on an outcome that is not mediated by intermediate variables, in addition to the usual intention-to-treat (ITT) effect. Even though randomization ensures that the ITT effect is unconfounded, the direct effect is not identified when unmeasured variables affect the intermediate and outcome variables. Although the unmeasured variables cannot be adjusted for in the models, it is still important to evaluate the potential bias due to these variables quantitatively. This article proposes a sensitivity analysis method for controlled direct effects using a marginal structural model, extending the sensitivity analysis method for unmeasured confounding introduced in the context of observational studies. The proposed method is illustrated using a randomized trial of depression.
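As a purely illustrative companion to this abstract, the sketch below shows the generic form such a sensitivity analysis can take: sweep a grid of assumed values for the effect of a hypothetical unmeasured confounder U on the outcome (gamma) and for the imbalance in the prevalence of U across treatment arms (delta), and report the bias-corrected direct effect. The variable names and the simple additive bias formula are assumptions for illustration, not the paper's MSM-based formulas.

    # Hypothetical sensitivity sweep for a controlled direct effect (CDE).
    # Assumes a binary unmeasured confounder U of the mediator-outcome relation:
    #   gamma = assumed effect of U on the outcome (same scale as the CDE estimate)
    #   delta = assumed difference in P(U = 1) between treatment arms
    # bias ~= gamma * delta, so the corrected estimate is estimate - gamma * delta.
    import numpy as np

    def sensitivity_table(cde_hat, gammas, deltas):
        """Return corrected CDE estimates over a grid of sensitivity parameters."""
        rows = []
        for g in gammas:
            for d in deltas:
                rows.append((g, d, cde_hat - g * d))
        return np.array(rows)

    if __name__ == "__main__":
        cde_hat = 1.8                      # illustrative CDE estimate from a fitted MSM
        table = sensitivity_table(cde_hat,
                                  gammas=np.linspace(0.0, 2.0, 5),
                                  deltas=np.linspace(-0.5, 0.5, 5))
        for g, d, corrected in table:
            print(f"gamma={g:4.1f}  delta={d:5.2f}  corrected CDE={corrected:5.2f}")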

2.
This paper considers the identification problem of direct and indirect effects of treatment on a response, when observed data are available but an important intermediate variable is unmeasured. To solve this problem, we propose identification conditions for direct and indirect effects based on (conditionally independent) proxy variables of the unmeasured intermediate variable. This paper establishes feasible approaches for evaluating direct and indirect effects in studies with a single unmeasured intermediate variable.

3.
This article investigates Monotone Instrumental Variables (MIV) and their ability to aid in identifying treatment effects when the treatment is binary in a nonparametric bounding framework. I show that an MIV can only aid in identification beyond that of a Monotone Treatment Selection assumption if, for some region of the instrument, the observed conditional-on-received-treatment outcomes exhibit monotonicity in the instrument in the opposite direction from that assumed by the MIV, in a Simpson's-paradox-like fashion. Furthermore, an MIV can only aid in identification beyond that of a Monotone Treatment Response assumption if, for some region of the instrument, either the above Simpson's-paradox-like relationship exists or the instrument's indirect effect on the outcome (through its influence on treatment selection) is the opposite of its direct effect as assumed by the MIV. The implications of the main findings for empirical work are discussed, and the results are highlighted with an application investigating the effect of criminal convictions on job match quality using data from the National Longitudinal Survey of Youth 1997. Though the main results are shown to hold only for the binary treatment case in general, they are shown to have important implications for the multi-valued treatment case as well.
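For orientation only, here is a minimal sketch of the nonparametric bounding framework this kind of analysis starts from: worst-case (no-assumption) bounds on the ATE of a binary treatment for an outcome with known bounds, plus the tighter upper bound implied by a positive monotone treatment selection (MTS) assumption. The MIV results of the article layer additional structure on top of these bounds, which this sketch does not attempt to reproduce; the data are simulated.

    # Worst-case (Manski-style) bounds on the ATE of binary D on Y in [y_lo, y_hi],
    # plus the tighter upper bound under positive MTS:
    #   E[Y(1)|D=0] <= E[Y|D=1] and E[Y(0)|D=1] >= E[Y|D=0],
    # which implies ATE <= E[Y|D=1] - E[Y|D=0].
    import numpy as np

    def ate_bounds(y, d, y_lo=0.0, y_hi=1.0):
        y, d = np.asarray(y, float), np.asarray(d, int)
        p = d.mean()                                   # P(D = 1)
        m1, m0 = y[d == 1].mean(), y[d == 0].mean()
        # Worst case: replace the unobserved potential outcomes by y_lo / y_hi.
        ey1_lo, ey1_hi = p * m1 + (1 - p) * y_lo, p * m1 + (1 - p) * y_hi
        ey0_lo, ey0_hi = (1 - p) * m0 + p * y_lo, (1 - p) * m0 + p * y_hi
        worst_case = (ey1_lo - ey0_hi, ey1_hi - ey0_lo)
        mts = (worst_case[0], min(worst_case[1], m1 - m0))
        return worst_case, mts

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        d = rng.integers(0, 2, 5000)
        y = rng.binomial(1, 0.4 + 0.2 * d)             # toy data, true effect 0.2
        print(ate_bounds(y, d))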

4.
Randomized and natural experiments are commonly used in economics and other social science fields to estimate the effect of programs and interventions. Even when employing experimental data, assessing the impact of a treatment is often complicated by the presence of sample selection (outcomes are only observed for a selected group) and noncompliance (some treatment group individuals do not receive the treatment while some control individuals do). We address both of these identification problems simultaneously and derive nonparametric bounds for average treatment effects within a principal stratification framework. We employ these bounds to empirically assess the wage effects of Job Corps (JC), the most comprehensive and largest federally funded job training program for disadvantaged youth in the United States. Our results strongly suggest positive average effects of JC on wages for individuals who comply with their treatment assignment and would be employed whether or not they enrolled in JC (the “always-employed compliers”). Under relatively weak monotonicity and mean dominance assumptions, we find that this average effect is between 5.7% and 13.9% 4 years after randomization, and between 7.7% and 17.5% for non-Hispanics. Our results are consistent with larger effects of JC on wages than those found without adjusting for noncompliance.

5.
We introduce a framework for estimating the effect that a binary treatment has on a binary outcome in the presence of unobserved confounding. The methodology is applied to a case study which uses data from the Medical Expenditure Panel Survey and whose aim is to estimate the effect of private health insurance on health care utilization. Unobserved confounding arises when variables which are associated with both treatment and outcome are not available (in economics this issue is known as endogeneity). In addition, treatment and outcome may exhibit a dependence which cannot be modeled using a linear measure of association, and observed confounders may have a non-linear impact on the treatment and outcome variables. The problem of unobserved confounding is addressed using a two-equation structural latent variable framework, where one equation essentially describes a binary outcome as a function of a binary treatment, whereas the other equation determines whether the treatment is received. Non-linear dependence between treatment and outcome is handled using copula functions, whereas covariate-response relationships are flexibly modeled using a spline approach. Related model fitting and inferential procedures are developed, and asymptotic arguments are presented.
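A minimal sketch of the simplest member of this model family, assuming a Gaussian copula with probit margins (i.e. a recursive bivariate probit), fit by maximum likelihood on simulated data. The paper's approach is more general (other copulas, spline covariate effects, dedicated inference), so treat this only as a toy illustration; all parameter values and variable names are made up.

    import numpy as np
    from scipy import optimize, stats

    def bvn_cdf(a, b, rho):
        """P(U <= a, E <= b) for a standard bivariate normal with correlation rho."""
        cov = [[1.0, rho], [rho, 1.0]]
        pts = np.column_stack([np.atleast_1d(a), np.atleast_1d(b)])
        return stats.multivariate_normal(mean=[0.0, 0.0], cov=cov).cdf(pts)

    def negloglik(theta, y, t, x, z):
        g0, g1, b0, b1, delta, arho = theta
        rho = np.tanh(arho)                      # keeps the copula correlation in (-1, 1)
        zt = g0 + g1 * z                         # treatment-equation index
        xy = b0 + b1 * x + delta * t             # outcome-equation index (uses received T)
        phi2 = bvn_cdf(zt, xy, rho)
        pt, py = stats.norm.cdf(zt), stats.norm.cdf(xy)
        lik = np.where(t == 1,
                       np.where(y == 1, phi2, pt - phi2),
                       np.where(y == 1, py - phi2, 1.0 - pt - py + phi2))
        return -np.sum(np.log(np.clip(lik, 1e-12, None)))

    rng = np.random.default_rng(1)
    n = 300
    z = rng.normal(size=n)                       # covariate in the treatment equation
    x = rng.normal(size=n)                       # covariate in the outcome equation
    u, e = rng.multivariate_normal([0, 0], [[1, 0.5], [0.5, 1]], n).T  # correlated errors
    t = (0.3 + 0.8 * z + u > 0).astype(int)
    y = (-0.2 + 0.5 * x + 0.7 * t + e > 0).astype(int)                 # true delta = 0.7

    # Brute-force maximum likelihood: slow but dependency-light, and noisy at this n.
    fit = optimize.minimize(negloglik, np.zeros(6), args=(y, t, x, z),
                            method="Nelder-Mead")
    print("estimated treatment coefficient:", round(fit.x[4], 2),
          "estimated rho:", round(np.tanh(fit.x[5]), 2))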

6.
We consider the problem of assessing the effects of a treatment on duration outcomes using data from a randomized evaluation with noncompliance. For such settings, we derive nonparametric sharp bounds for average and quantile treatment effects addressing three pervasive problems simultaneously: self-selection into the spell of interest, endogenous censoring of the duration outcome, and noncompliance with the assigned treatment. Ignoring any of these issues could yield biased estimates of the effects. Notably, the proposed bounds do not impose the independent censoring assumption (which is commonly used to address censoring but is likely to fail in important settings) or exclusion restrictions to address endogeneity of censoring and selection. Instead, they employ monotonicity and stochastic dominance assumptions. To illustrate the use of these bounds we assess the effects of the Job Corps (JC) training program on its participants' last complete employment spell duration. Our estimated bounds suggest that JC participation may increase the average duration of the last complete employment spell before week 208 after randomization by at least 5.6 log points (5.8%) for individuals who comply with their treatment assignment and experience a complete employment spell whether or not they enrolled in JC. The estimated quantile treatment effects suggest the impacts may be heterogeneous, and strengthen our conclusions based on the estimated average effects.

7.
We propose a difference-in-differences approach for disentangling a total treatment effect within specific subpopulations into a direct effect and an indirect effect operating through a binary mediating variable. Random treatment assignment, along with specific common trend and effect homogeneity assumptions, identifies the direct effects on the always-takers and never-takers, whose mediator is not affected by the treatment, as well as the direct and indirect effects on the compliers, whose mediator reacts to the treatment. In our empirical application, we analyze the impact of the Vietnam draft lottery on political preferences. The results suggest that a high draft risk due to the draft lottery outcome leads to an increase in mild preferences for the Republican Party, but has no effect on strong preferences for either party or on specific political attitudes. The increase in Republican support is mostly driven by the direct effect, that is, the component not operating through the mediator (military service).

8.
We present a simulation study and an application showing that including binary proxy variables related to binary unmeasured confounders improves the estimate of the treatment effect in binary logistic regression. The simulation study included 60,000 randomly generated parameter scenarios of sample size 10,000 across six different simulation structures. We assessed bias by comparing the probability of recovering the expected treatment effect, relative to the modeled treatment effect, with and without the proxy variable. Including a proxy variable in the logistic regression model significantly reduced the bias of the treatment or exposure effect compared with logistic regression without the proxy variable. Including proxy variables improves the estimation of the treatment effect at weak, moderate, and strong levels of association between the unmeasured confounders and the outcome, treatment, or proxy variables. These comparative advantages held in weakly and strongly collapsible situations, as the number of unmeasured confounders increased, and as the number of proxy variables adjusted for increased.
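A scaled-down sketch of this kind of simulation, with made-up parameter values: a binary unmeasured confounder U drives both treatment and outcome, a binary proxy agrees with U 80% of the time, and the treatment log-odds ratio is estimated with and without the proxy.

    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(42)
    n = 10_000
    u = rng.binomial(1, 0.4, n)                        # unmeasured confounder
    proxy = np.where(rng.random(n) < 0.8, u, 1 - u)    # noisy proxy for U
    t = rng.binomial(1, 1 / (1 + np.exp(-(-0.5 + 1.2 * u))))            # treatment depends on U
    y = rng.binomial(1, 1 / (1 + np.exp(-(-1.0 + 0.7 * t + 1.0 * u))))  # true log-OR of T is 0.7

    for label, X in [("without proxy", np.column_stack([np.ones(n), t])),
                     ("with proxy   ", np.column_stack([np.ones(n), t, proxy]))]:
        fit = sm.Logit(y, X).fit(disp=0)
        print(label, "estimated log-OR of treatment:", round(fit.params[1], 3))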

9.
Cluster-randomized trials are often conducted to assess vaccine effects. Defining estimands of interest before conducting a trial is integral to the alignment between a study's objectives and the data to be collected and analyzed. This paper considers estimands and estimators for overall, indirect, and total vaccine effects in trials where clusters of individuals are randomized to vaccine or control. The scenario is considered where individuals self-select whether to participate in the trial, and the outcome of interest is measured on all individuals in each cluster. Unlike the overall, indirect, and total effects, the direct effect of vaccination is shown in general not to be estimable without further assumptions, such as no unmeasured confounding. An illustrative example motivated by a cluster-randomized typhoid vaccine trial is provided.

10.
Results from classical linear regression concerning the effects of covariate adjustment on confounding, on the precision with which an exposure effect can be estimated, and on the efficiency of hypothesis tests for no treatment effect in randomized experiments are often assumed to apply more generally to other types of regression models. In this paper, results pertaining to several generalized linear models involving a dichotomous response variable are given, demonstrating that with respect to confounding and precision, the classical linear regression results do generally apply for models having a linear or log link function, whereas for other models, including those having a logit, probit, log-log, complementary log-log, or generalized logistic link function, the classical results do not always apply. It is also shown, however, that for any link function, covariate adjustment results in improved efficiency of hypothesis tests for no treatment effect in randomized experiments, and hence that the classical linear regression results regarding efficiency do apply for all models having a dichotomous response variable.
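A small numerical illustration (not taken from the paper) of the logit-link phenomenon referred to above: with a balanced, non-confounding binary covariate, the covariate-adjusted and marginal odds ratios for treatment differ (non-collapsibility), whereas with a log link the adjusted and marginal risk ratios coincide. All parameter values are arbitrary.

    import numpy as np

    expit = lambda v: 1 / (1 + np.exp(-v))
    beta_t, beta_x = 1.0, 1.5          # conditional log-OR of treatment, covariate effect
    p_x = 0.5                          # covariate X ~ Bernoulli(0.5), independent of treatment

    def marginal_risk(t):
        return p_x * expit(-1 + beta_t * t + beta_x) + (1 - p_x) * expit(-1 + beta_t * t)

    odds = lambda p: p / (1 - p)
    marg_or = odds(marginal_risk(1)) / odds(marginal_risk(0))
    print("conditional OR:", round(np.exp(beta_t), 3), " marginal OR:", round(marg_or, 3))

    # Same exercise with a log link (risk ratios): the covariate-specific and
    # marginal risk ratios are identical, i.e. the log link is collapsible here.
    rr_cond = np.exp(0.3)
    risk = lambda t: p_x * np.exp(-1.5 + 0.3 * t + 0.4) + (1 - p_x) * np.exp(-1.5 + 0.3 * t)
    print("conditional RR:", round(rr_cond, 3), " marginal RR:", round(risk(1) / risk(0), 3))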

11.
Hahn [Hahn, J. (1998). On the role of the propensity score in efficient semiparametric estimation of average treatment effects. Econometrica 66:315-331] derived the semiparametric efficiency bounds for estimating the average treatment effect (ATE) and the average treatment effect on the treated (ATET). The variance of ATET depends on whether the propensity score is known or unknown. Hahn attributes this to “dimension reduction.” In this paper, an alternative explanation is given: Knowledge of the propensity score improves upon the estimation of the distribution of the confounding variables.

12.
I consider the design of multistage sampling schemes for epidemiologic studies involving latent variable models, with surrogate measurements of the latent variables on a subset of subjects. Such models arise in various situations: when detailed exposure measurements are combined with variables that can be used to assign exposures to unmeasured subjects; when biomarkers are obtained to assess an unobserved pathophysiologic process; or when additional information is to be obtained on confounding or modifying variables. In such situations, it may be possible to stratify the subsample on data available for all subjects in the main study, such as outcomes, exposure predictors, or geographic locations. Three circumstances where analytic calculations of the optimal design are possible are considered: (i) when all variables are binary; (ii) when all are normally distributed; and (iii) when the latent variable and its measurement are normally distributed, but the outcome is binary. In each of these cases, it is often possible to considerably improve the cost efficiency of the design by appropriate selection of the sampling fractions. More complex situations arise when the data are spatially distributed: the spatial correlation can be exploited to improve exposure assignment for unmeasured locations using available measurements on neighboring locations; some approaches for informative selection of the measurement sample using location and/or exposure predictor data are considered.
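To give a flavor of this kind of design calculation in its simplest textbook form (not the paper's latent-variable derivations), the sketch below computes a Neyman-type optimum allocation: second-stage sample sizes per stratum that minimize the variance of a stratified mean under a budget when per-unit measurement costs differ across strata. The stratum weights, standard deviations, costs, and budget are invented.

    import numpy as np

    def optimum_allocation(weights, sds, unit_costs, budget):
        """Stratum sample sizes minimizing the variance of a stratified mean for a budget.
        Optimum allocation: n_h proportional to W_h * S_h / sqrt(c_h)."""
        weights, sds, unit_costs = (np.asarray(a, dtype=float)
                                    for a in (weights, sds, unit_costs))
        raw = weights * sds / np.sqrt(unit_costs)      # n_h up to a constant
        scale = budget / np.sum(raw * unit_costs)      # spend exactly the budget
        return raw * scale

    if __name__ == "__main__":
        # Three strata defined by first-stage data (e.g. outcome / exposure-predictor cells).
        n_h = optimum_allocation(weights=[0.5, 0.3, 0.2],
                                 sds=[1.0, 2.0, 4.0],
                                 unit_costs=[1.0, 1.0, 5.0],
                                 budget=1000.0)
        print(np.round(n_h, 1))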

13.
The Reed-Frost epidemic model is a simple stochastic process with parameter q that describes the spread of an infectious disease among a closed population. Given data on the final outcome of an epidemic, it is possible to perform Bayesian inference for q using a simple Gibbs sampler algorithm. In this paper it is illustrated that by choosing latent variables appropriately, certain monotonicity properties hold which facilitate the use of a perfect simulation algorithm. The methods are applied to real data.
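For readers unfamiliar with the model, the following sketch simulates the Reed-Frost chain-binomial mechanics forward in time: each susceptible independently escapes infection from each of the current infectives with probability q, so the next generation of infectives is Binomial(S_t, 1 - q**I_t). The Bayesian machinery of the paper (latent variables, Gibbs sampling, perfect simulation from final-size data) is not reproduced here, and all numbers are illustrative.

    import numpy as np

    def reed_frost(q, s0, i0, rng):
        """Simulate one Reed-Frost epidemic; return the chain of infective counts."""
        s, i, chain = s0, i0, [i0]
        while i > 0:
            p_inf = 1.0 - q ** i            # per-susceptible infection probability
            new_i = rng.binomial(s, p_inf)
            s, i = s - new_i, new_i
            chain.append(new_i)
        return chain

    if __name__ == "__main__":
        rng = np.random.default_rng(3)
        finals = [sum(reed_frost(q=0.96, s0=100, i0=1, rng=rng)) for _ in range(1000)]
        print("mean final size over 1000 epidemics:", np.mean(finals))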

14.
Weak identification is a well-known issue in the context of linear structural models. However, for probit models with endogenous explanatory variables, this problem has been little explored. In this paper, we use simulation to study the behavior of the usual z-test and the LR test in the presence of weak identification. We find that the usual asymptotic z-test exhibits large level distortions (over-rejections under the null hypothesis). The magnitude of the level distortions depends heavily on the parameter value tested. In contrast, asymptotic LR tests do not over-reject and appear to be robust to weak identification.

15.
This article deals with two problems concerning the probabilities of causation defined by Pearl (Causality: Models, Reasoning, and Inference, 2nd edn, 2009, Cambridge University Press, New York), namely, the probability that one observed event was a necessary (or sufficient, or both) cause of another: one problem is to derive new bounds, and the other is to provide covariate selection criteria. Tian & Pearl (Ann. Math. Artif. Intell., 28, 2000, 287-313) showed how to bound the probabilities of causation using information from experimental and observational studies, with minimal assumptions about the data-generating process, and gave conditions under which these probabilities are identifiable. In this article, we derive narrower bounds using covariate information that is available from those studies. In addition, we propose a conditional monotonicity assumption so as to further narrow the bounds. Moreover, we discuss the covariate selection problem from the viewpoint of estimation accuracy, and show that selecting a covariate that has a direct effect on the outcome variable cannot always improve the estimation accuracy, which is contrary to the situation in linear regression models. These results provide more accurate information for public policy, legal determination of responsibility, and personal decision making.
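For concreteness, the sketch below evaluates the experimental-data-only bounds of Tian & Pearl for the probability that the exposure is both necessary and sufficient for the outcome (PNS), which follow from Fréchet-type inequalities; the narrower covariate-based bounds proposed in the article are not reproduced. Input probabilities are hypothetical.

    # Experimental bounds on PNS using only P(y | do(x)) and P(y | do(x')):
    #   max{0, P(y_x) - P(y_x')}  <=  PNS  <=  min{P(y_x), 1 - P(y_x')}
    def pns_bounds(p_y_do_x, p_y_do_xprime):
        lower = max(0.0, p_y_do_x - p_y_do_xprime)
        upper = min(p_y_do_x, 1.0 - p_y_do_xprime)
        return lower, upper

    if __name__ == "__main__":
        print(pns_bounds(p_y_do_x=0.7, p_y_do_xprime=0.3))   # -> (0.4, 0.7)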

16.
This article provides a strategy to identify the existence and direction of a causal effect in a generalized nonparametric and nonseparable model identified by instrumental variables. The causal effect concerns how the outcome depends on the endogenous treatment variable. The outcome variable, treatment variable, other explanatory variables, and the instrumental variable can be essentially any combination of continuous, discrete, or “other” variables. In particular, it is not necessary to have any continuous variables, none of the variables need to have large support, and the instrument can be binary even if the corresponding endogenous treatment variable and/or outcome is continuous. The outcome can be mismeasured or interval-measured, and the endogenous treatment variable need not even be observed. The identification results are constructive, and can be empirically implemented using standard estimation results.

17.
Recently, Billard (1977) developed a partial sequential procedure for comparing a null hypothesis against a two-sided alternative hypothesis when the parameter under test is that of the binomial distribution. In that paper, approximations to the operating characteristic and average sample number function were derived. In this note, bounds on the average sample number function are derived. Using numerical results, a comparison of the approximations, bounds, and empirical values is made.

18.
Given a two-way contingency table in which the rows and columns both define ordinal variables, there are many ways in which the informal idea of positive association between those variables might be defined. This paper considers a variety of definitions expressed as inequality constraints on cross-product ratios. Logical relationships between the definitions are explored. Each definition can serve as a composite alternative against which the null hypothesis of no association may be tested. For a broad class of such alternatives a decomposition of the log-likelihood gives both an explicit likelihood ratio statistic and its asymptotic null hypothesis distribution. Results are derived for multinomial sampling and for fully conditional sampling with row and column totals fixed.

19.
Odds ratios are frequently used to describe the relationship between a binary treatment or exposure and a binary outcome. An odds ratio can be interpreted as a causal effect or a measure of association, depending on whether it involves potential outcomes or the actual outcome. An odds ratio can also be characterized as marginal versus conditional, depending on whether it involves conditioning on covariates. This article proposes a method for estimating a marginal causal odds ratio subject to confounding. The proposed method is based on a logistic regression model relating the outcome to the treatment indicator and potential confounders. Simulation results show that the proposed method performs reasonably well in moderate-sized samples and may even offer an efficiency gain over the direct method based on the sample odds ratio in the absence of confounding. The method is illustrated with a real example concerning coronary heart disease.
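A common implementation of this general idea (not necessarily the paper's exact estimator) is logistic-regression standardization, or g-computation: fit the outcome model with treatment and confounders, predict everyone's risk under treatment and under control, and form the odds ratio of the averaged risks. The simulated data and coefficients below are illustrative.

    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(7)
    n = 5000
    c = rng.normal(size=n)                                   # confounder
    t = rng.binomial(1, 1 / (1 + np.exp(-(0.8 * c))))        # treatment depends on C
    y = rng.binomial(1, 1 / (1 + np.exp(-(-1.0 + 0.6 * t + 0.9 * c))))

    X = np.column_stack([np.ones(n), t, c])
    fit = sm.Logit(y, X).fit(disp=0)

    X1 = X.copy(); X1[:, 1] = 1                              # everyone treated
    X0 = X.copy(); X0[:, 1] = 0                              # everyone untreated
    r1, r0 = fit.predict(X1).mean(), fit.predict(X0).mean()  # standardized risks
    odds = lambda p: p / (1 - p)
    print("conditional OR:", round(np.exp(fit.params[1]), 2),
          " marginal causal OR:", round(odds(r1) / odds(r0), 2))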

20.
The problem of testing a point null hypothesis involving an exponential mean is considered, and it is argued that the usual interpretation of P-values as evidence against precise hypotheses is faulty. As in Berger and Delampady (1986) and Berger and Sellke (1987), lower bounds on Bayesian measures of evidence over wide classes of priors are found, emphasizing the conflict between posterior probabilities and P-values. A hierarchical Bayes approach is also considered as an alternative to computing lower bounds and “automatic” Bayesian significance tests, which further illustrates the point that P-values are highly misleading measures of evidence for tests of point null hypotheses.
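As a generic illustration of the conflict described above (not the paper's exact bounds), the sketch below computes, for exponential data and a point null on the mean, the smallest possible Bayes factor in favor of H0 over every prior on the alternative (the likelihood at theta0 divided by the likelihood at the MLE) and the corresponding lower bound on the posterior probability of H0, alongside the asymptotic likelihood-ratio p-value. All inputs are made up.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(11)
    theta0 = 1.0                              # hypothesized mean under H0
    x = rng.exponential(scale=1.4, size=30)   # data whose true mean is 1.4

    def loglik(mean, x):
        return np.sum(stats.expon.logpdf(x, scale=mean))

    mle = x.mean()
    ll0, ll1 = loglik(theta0, x), loglik(mle, x)
    p_value = stats.chi2.sf(2 * (ll1 - ll0), df=1)     # asymptotic LRT p-value

    bayes_factor_lb = np.exp(ll0 - ll1)                # lower bound over all priors
    pi0 = 0.5                                          # prior probability of H0
    post_h0_lb = 1 / (1 + (1 - pi0) / pi0 / bayes_factor_lb)
    print(f"p-value = {p_value:.3f}, lower bound on P(H0 | data) = {post_h0_lb:.3f}")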
