Similar literature
20 similar records retrieved
1.
The paper addresses a formal definition of a confounder based on the qualitative definition that is commonly used in standard epidemiology textbooks. To derive the criterion for a factor to be a confounder given by Miettinen and Cook, and to clarify the inconsistency among various criteria for a confounder, we introduce the concepts of an irrelevant factor, an occasional confounder and a uniformly irrelevant factor. We discuss criteria for checking these and show that Miettinen and Cook's criterion can also be applied to occasional confounders. Moreover, we consider situations with multiple potential confounders and obtain two necessary conditions that are satisfied by each confounder set. None of the definitions and results presented in this paper requires the ignorability and sufficient-control-of-confounding assumptions that are commonly employed in observational and epidemiological studies.

2.
Odds ratios are frequently used to describe the relationship between a binary treatment or exposure and a binary outcome. An odds ratio can be interpreted as a causal effect or a measure of association, depending on whether it involves potential outcomes or the actual outcome. An odds ratio can also be characterized as marginal versus conditional, depending on whether it involves conditioning on covariates. This article proposes a method for estimating a marginal causal odds ratio subject to confounding. The proposed method is based on a logistic regression model relating the outcome to the treatment indicator and potential confounders. Simulation results show that the proposed method performs reasonably well in moderate-sized samples and may even offer an efficiency gain over the direct method based on the sample odds ratio in the absence of confounding. The method is illustrated with a real example concerning coronary heart disease.
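A minimal sketch of the general standardization idea for obtaining a marginal causal odds ratio from a logistic outcome model: fit the outcome regression on treatment and confounders, predict everyone's risk under treatment and under control, average, and take the ratio of odds. The data, variable names and use of statsmodels are illustrative assumptions, not the authors' exact estimator.

```python
# Sketch: marginal causal odds ratio via standardization from a logistic
# outcome model (illustrative only; not the paper's exact procedure).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 2000
x = rng.normal(size=n)                                  # confounder
a = rng.binomial(1, 1 / (1 + np.exp(-x)))               # treatment depends on x
y = rng.binomial(1, 1 / (1 + np.exp(-(-1 + a + x))))    # binary outcome
df = pd.DataFrame({"y": y, "a": a, "x": x})

# Outcome model: logistic regression of y on the treatment indicator and confounder.
fit = smf.logit("y ~ a + x", data=df).fit(disp=0)

# Standardize: predict risk for everyone under a=1 and under a=0, then average.
p1 = fit.predict(df.assign(a=1)).mean()
p0 = fit.predict(df.assign(a=0)).mean()
marginal_or = (p1 / (1 - p1)) / (p0 / (1 - p0))
print(f"marginal causal odds ratio (standardized): {marginal_or:.2f}")
```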

3.
Summary.  Formal rules governing signed edges on causal directed acyclic graphs are described, and it is shown how these rules can be useful in reasoning about causality. Specifically, the notions of a monotonic effect, a weak monotonic effect and a signed edge are introduced. Results are developed relating these monotonic effects and signed edges to the sign of the causal effect of an intervention in the presence of intermediate variables. The incorporation of signed edges in the directed acyclic graph causal framework furthermore allows for the development of rules governing the relationship between monotonic effects and the sign of the covariance between two variables. It is shown that, when certain assumptions about monotonic effects can be made, these results can be used to draw conclusions about the presence of causal effects even when data are missing on confounding variables.

4.
Inverse probability weighting (IPW) can deal with confounding in nonrandomized studies. The inverse weights are probabilities of treatment assignment (propensity scores), estimated by regressing assignment on predictors. Problems arise if predictors can be missing. Solutions previously proposed include assuming assignment depends only on observed predictors, and multiple imputation (MI) of missing predictors. For the MI approach, it was recommended that missingness indicators be used together with the other predictors. We determine when the two MI approaches (with and without missingness indicators) yield consistent estimators and compare their efficiencies. We find that, although including indicators can reduce bias when predictors are missing not at random, it can induce bias when they are missing at random. We propose a consistent variance estimator and investigate the performance of the simpler Rubin's Rules variance estimator. In simulations we find that both estimators perform well. IPW is also used to correct bias when an analysis model is fitted to incomplete data by restricting to complete cases. Here, the weights are inverse probabilities of being a complete case. We explain how the same MI methods can be used in this situation to deal with missing predictors in the weight model, and we illustrate this approach using data from the National Child Development Survey.
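A minimal sketch of the basic IPW step the abstract builds on: estimate propensity scores by logistic regression and weight each subject by the inverse probability of the treatment actually received. The handling of missing predictors via multiple imputation is the paper's contribution and is not shown; the data and variable names below are assumptions for illustration.

```python
# Sketch: inverse probability weighting with an estimated propensity score
# (complete-data case only; the paper's MI handling of missing predictors
# is not shown here).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 3000
x = rng.normal(size=n)                              # fully observed predictor
a = rng.binomial(1, 1 / (1 + np.exp(-0.8 * x)))     # treatment assignment
y = 2.0 * a + x + rng.normal(size=n)                # outcome
df = pd.DataFrame({"y": y, "a": a, "x": x})

# Propensity score model: regress treatment assignment on the predictors.
ps = smf.logit("a ~ x", data=df).fit(disp=0).predict(df)

# Weight = inverse probability of the treatment actually received.
w = np.where(df["a"] == 1, 1 / ps, 1 / (1 - ps))

# Weighted difference in means estimates the marginal treatment effect.
treated = (df["a"] == 1).to_numpy()
ate = (np.average(df.loc[treated, "y"], weights=w[treated])
       - np.average(df.loc[~treated, "y"], weights=w[~treated]))
print(f"IPW estimate of the average treatment effect: {ate:.2f}")
```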

5.
We developed methods for estimating the causal risk difference and causal risk ratio in randomized trials with noncompliance. The developed estimator is unbiased under the assumption that the biases due to noncompliance are identical in the two treatment arms. These biases are defined as the difference or ratio between the expectation of the potential outcome for the group that received the test treatment and that for the control group, within each randomly assigned arm. Whereas the instrumental variable estimator yields an unbiased estimate under a sharp null hypothesis but may yield a biased estimate under a non-null hypothesis, the bias of the developed estimator does not depend on whether this hypothesis holds. Consequently, the estimate of the causal effect from the developed estimator may have a smaller bias than that from the instrumental variable estimator when a treatment effect exists. There is not yet a standard method for coping with noncompliance, so it is important to evaluate estimates under different assumptions; the developed estimator can serve this purpose. An application to a field trial for coronary heart disease is provided.
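For reference, the standard instrumental-variable (Wald-type) estimator of the causal risk difference under noncompliance that the abstract contrasts with is shown below; here Z denotes random assignment, D the treatment actually received and Y the binary outcome (notation assumed here, not taken from the paper), and the authors' own estimator is not reproduced.

\[
\widehat{\mathrm{RD}}_{\mathrm{IV}}
  \;=\;
  \frac{\widehat{E}[Y \mid Z=1] \;-\; \widehat{E}[Y \mid Z=0]}
       {\widehat{E}[D \mid Z=1] \;-\; \widehat{E}[D \mid Z=0]}
\]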

6.
Typically, in the practice of causal inference from observational studies, a parametric model is assumed for the joint population density of potential outcomes and treatment assignments, possibly accompanied by the assumption of no hidden bias. However, both assumptions are questionable for real data: the accuracy of causal inference is compromised when the data violate either assumption, and the parametric assumption precludes capturing a more general range of density shapes (e.g., heavier tail behavior and possible multi-modalities). We introduce a flexible, Bayesian nonparametric causal model to provide more accurate causal inferences. The model makes use of a stick-breaking prior, which has the flexibility to capture any multi-modalities, skewness and heavier tail behavior in this joint population density, while accounting for hidden bias. We prove the asymptotic consistency of the posterior distribution of the model, and we illustrate our causal model through the analysis of small and large observational data sets.
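A minimal numpy sketch of the stick-breaking construction that underlies priors of this kind (a truncated, Dirichlet-process-style sequence of mixture weights). The concentration parameter and truncation level are illustrative assumptions; this is only the weight-generating building block, not the authors' full causal model.

```python
# Sketch: truncated stick-breaking construction of mixture weights,
# the building block of the nonparametric prior described above.
import numpy as np

rng = np.random.default_rng(2)
alpha, K = 1.0, 25          # concentration parameter and truncation level (assumed)

v = rng.beta(1.0, alpha, size=K)                       # stick-breaking proportions
remaining = np.concatenate(([1.0], np.cumprod(1 - v)[:-1]))
w = v * remaining                                      # w_k = v_k * prod_{j<k} (1 - v_j)

print(w[:5], w.sum())       # weights decay; their sum approaches 1 as K grows
```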

7.
Summary.  A controversial topic in obstetrics is the effect of walking on the probability of Caesarean section among women in labour. A major reason for the controversy is the presence of non-compliance that complicates the estimation of efficacy, the effect of treatment received on outcome. The intent-to-treat method does not estimate efficacy, and estimates of efficacy that are based directly on treatment received may be biased because they are not protected by randomization. However, when non-compliance occurs immediately after randomization, the use of a potential outcomes model with reasonable assumptions has made it possible to estimate efficacy and still to retain the benefits of randomization to avoid selection bias. In this obstetrics application, non-compliance occurs initially and later in one arm. Consequently some parameters cannot be uniquely estimated without making strong assumptions. This difficulty is circumvented by a new study design involving an additional randomization group and a novel potential outcomes model (principal stratification).

8.
Abstract

Although no universally accepted definition of causality exists, in practice one is often faced with the question of statistically assessing causal relationships in different settings. We present a uniform general approach to causality problems derived from the axiomatic foundations of the Bayesian statistical framework. In this approach, causality statements are viewed as hypotheses, or models, about the world, and the fundamental object to be computed is the posterior distribution of the causal hypotheses, given the data and the background knowledge. Computation of the posterior, illustrated here in simple examples, may involve complex probabilistic modeling, but this is no different from any other Bayesian modeling situation. The main advantage of the approach is its connection to the axiomatic foundations of the Bayesian framework, and the general uniformity with which it can be applied to a variety of causality settings, ranging from specific to general cases, or from causes of effects to effects of causes.

9.
10.
Summary. A dynamic treatment regime is a list of decision rules, one per time interval, for how the level of treatment will be tailored through time to an individual's changing status. The goal of this paper is to use experimental or observational data to estimate decision regimes that result in a maximal mean response. To explicate our objective and to state the assumptions, we use the potential outcomes model. The method proposed makes smooth parametric assumptions only on quantities that are directly relevant to the goal of estimating the optimal rules. We illustrate the proposed methodology via a small simulation.

11.
We analyze publicly available data to estimate the causal effects of military interventions on homicide rates in certain problematic regions in Mexico. We use the Rubin causal model to compare the post-intervention homicide rate in each intervened region with the hypothetical homicide rate for that same year had the military intervention not taken place. Because the effect of a military intervention is not confined to the municipality subject to the intervention, a nonstandard definition of units is necessary to estimate the causal effect of the intervention under the standard no-interference component of the stable unit treatment value assumption (SUTVA). Donor pools are created for each missing potential outcome under no intervention, thereby allowing for the estimation of unit-level causal effects. A multiple imputation approach accounts for uncertainty about the missing potential outcomes.

12.
Summary.  Consider a clinical trial in which participants are randomized to a single-dose treatment or a placebo control, and assume that the adherence level is accurately recorded. If the treatment is effective, then good adherers in the treatment group should do better than poor adherers because they received more drug; the treatment group data follow a dose–response curve. But good adherers to the placebo often do better than poor adherers, so the observed adherence–response in the treatment group cannot be completely attributed to the treatment. Efron and Feldman proposed an adjustment to the observed adherence–response in the treatment group by using the adherence–response in the control group. It relies on a percentile invariance assumption under which each participant's adherence percentile within their assigned treatment group does not depend on the assigned group (active drug or placebo). The Efron and Feldman approach is valid under percentile invariance, but not necessarily under departures from it. We propose an analysis based on a generalization of percentile invariance that allows adherence percentiles to be stochastically permuted across treatment groups, using a broad class of stochastic permutation models. We show that approximate maximum likelihood estimates of the underlying dose–response curve perform well when the stochastic permutation process is correctly specified and are quite robust to model misspecification.

13.
Consider a randomized trial in which time to the occurrence of a particular disease, say Pneumocystis pneumonia in an AIDS trial or breast cancer in a mammographic screening trial, is the failure time of primary interest. Suppose that time to disease is subject to informative censoring by the minimum of time to death, loss to follow-up and end of follow-up. In such a trial, the potential censoring time is observed for all study subjects, including failures. In the presence of informative censoring, it is not possible to consistently estimate the effect of treatment on time to disease without imposing additional non-identifiable assumptions. Robins (1995) specified two non-identifiable assumptions that allow one to test for and estimate an effect of treatment on time to disease in the presence of informative censoring. The goal of this paper is to provide a class of consistent and reasonably efficient semiparametric tests and estimators for the treatment effect under these assumptions. The tests in our class, like standard weighted log-rank tests, are asymptotically distribution-free α-level tests under the null hypothesis of no causal effect of treatment on time to disease whenever the censoring and failure distributions are conditionally independent given the treatment arm. However, our tests remain asymptotically distribution-free α-level tests in the presence of informative censoring provided either of our assumptions is true. In contrast, a weighted log-rank test will be an α-level test in the presence of informative censoring only if (1) one of our two non-identifiable assumptions holds, and (2) the distribution of time to censoring is the same in the two treatment arms. We also study the estimation, in the presence of informative censoring, of the effect of treatment on the evolution over time of the mean of a repeated measures outcome such as CD4 count.

14.
Even in randomized experiments, the identification of causal effects is often threatened by the presence of missing outcome values, with missingness possibly being nonignorable. We provide sufficient conditions under which the availability of a binary instrument for nonresponse allows us to nonparametrically point-identify average causal effects in some latent subgroups of units, named principal strata, defined by their nonresponse behavior under all possible combinations of treatment and instrument. Examples are provided as possible scenarios where our assumptions may be plausible.

15.
Summary.  The 'Methods for improving reproductive health in Africa' trial is a recently completed randomized trial that investigated the effect of diaphragm and lubricant gel use in reducing infection by the human immunodeficiency virus (HIV) among susceptible women. 5045 women were randomly assigned either to the active treatment arm or to the control arm. Additionally, all subjects in both arms received intensive condom counselling and provision, the 'gold standard' HIV prevention barrier method. There was much lower reported condom use in the intervention arm than in the control arm, making it difficult to answer important public health questions on the basis of the intention-to-treat analysis alone. We adapt an analysis technique from causal inference to estimate the 'direct effects' of assignment to the diaphragm arm, adjusting for condom use in an appropriate sense. The issues raised in this trial apply to other trials of HIV prevention methods, some of which are currently being conducted or designed.

16.
The potential outcomes approach to causal inference postulates that each individual has a number of possibly latent outcomes, each of which would be observed under a different treatment. For any individual, some of these outcomes will be unobservable or counterfactual. Information about post-treatment characteristics sometimes allows statements about what would have happened if an individual or group with these characteristics had received a different treatment. These are statements about the realized effects of the treatment. Determining the likely effect of an intervention before making a decision involves inference about effects in populations defined only by characteristics observed before decisions about treatment are made. Information on realized effects can tighten bounds on these prospectively defined measures of the intervention effect. We derive formulae for the bounds and their sampling variances and illustrate these points with data from a hypothetical study of the efficacy of screening mammography.
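As context for the bounding idea, the classic worst-case bounds for a bounded outcome illustrate the kind of prospectively defined bound that realized-effect information can tighten; the notation below is ours and the paper's own, sharper bounds are not reproduced. For an outcome with Y(1) ∈ [0, 1] and treatment indicator D,

\[
E[Y \mid D=1]\,P(D=1) \;\le\; E[Y(1)] \;\le\; E[Y \mid D=1]\,P(D=1) + P(D=0),
\]

since the unobserved mean of Y(1) among the untreated can range only from 0 to 1.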

17.
Suppose we are interested in estimating the average causal effect (ACE) on a population mean from an observational study. Because of its simplicity and ease of interpretation, stratification by a propensity score (PS) is widely used to adjust for the influence of confounding factors when estimating the ACE. The appropriateness of estimation by PS stratification relies on correct specification of the PS model. We propose an estimator based on stratification with multiple PS models using clustering techniques instead of model selection. If one of the models is correctly specified, the proposed estimator removes the bias and is thus more robust than standard PS stratification.
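A minimal sketch of standard single-model PS stratification, the baseline the abstract refers to: estimate a propensity score, cut it into quintiles, and average the within-stratum treated-versus-control mean differences with stratum weights. The authors' multiple-model clustering refinement is not shown, and the data and PS model are illustrative assumptions.

```python
# Sketch: standard propensity-score stratification with quintiles
# (single PS model only; the multiple-model clustering estimator proposed
# in the paper is not shown).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
n = 5000
x = rng.normal(size=n)
a = rng.binomial(1, 1 / (1 + np.exp(-x)))
y = 1.5 * a + x + rng.normal(size=n)
df = pd.DataFrame({"y": y, "a": a, "x": x})

# Estimate the propensity score and form five strata at its quintiles.
df["ps"] = smf.logit("a ~ x", data=df).fit(disp=0).predict(df)
df["stratum"] = pd.qcut(df["ps"], 5, labels=False)

# Average within-stratum treated-vs-control mean differences,
# weighting each stratum by its share of the sample.
ace = 0.0
for s, g in df.groupby("stratum"):
    diff = g.loc[g["a"] == 1, "y"].mean() - g.loc[g["a"] == 0, "y"].mean()
    ace += diff * len(g) / len(df)
print(f"PS-stratified estimate of the ACE: {ace:.2f}")
```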

18.
The underlying statistical concept that animates empirical strategies for extracting causal inferences from observational data is that observational data may be adjusted to resemble data that might have originated from a randomized experiment. This idea has driven the literature on matching methods. We explore an unmined idea for making causal inferences with observational data: any given observational study may contain a large number of indistinguishably balanced matched designs. We demonstrate how the absence of a unique best solution presents an opportunity for greater information retrieval in causal inference analysis, based on the principle that many solutions teach us more about a given scientific hypothesis than a single study does, and thereby improves our discernment with observational studies. The implementation can be achieved by integrating the statistical theories and models within a computational optimization framework that embodies the statistical foundations and reasoning.

19.
We develop point identification of the local average treatment effect when the binary treatment is measured with error. The standard instrumental variable estimator is inconsistent for this parameter because the measurement error is nonclassical by construction. We correct the problem by identifying the distribution of the measurement error using an exogenous variable, which can even be a binary covariate. The moment conditions derived from this identification lead to a generalized method of moments estimator with asymptotically valid inference. Monte Carlo simulations and an empirical illustration demonstrate the usefulness of the proposed procedure.

20.
As is well known, omission of nonconfounding covariates identified by the treatment assignment may lead to considerable bias in the estimated treatment effect, even in a simple randomized trial. In this article we identify confounding versus dispersing covariates by their confounding influence, which characterizes the change in variance and the risk of bias of the estimated treatment effect due to constraints on the effects of these covariates. Consequently, a consistent constraint on the effects of identified confounding covariates reduces the variance of the estimated treatment effect, whereas an inconsistent constraint on the effects of identified dispersing covariates, such as omitting them, leads to little bias in the estimated treatment effect.
