Similar Documents
20 similar documents retrieved.
1.
Logistic regression analysis is widely applicable to epidemiologic studies concerned with quantifying an association between a study factor (i.e., an exposure variable) and a health outcome (i.e., disease status). This paper reviews the general characteristics of the logistic model and illustrates its use in epidemiologic inquiry. Particular emphasis is given to the control of extraneous variables in the context of follow-up and case-control studies. Techniques for both unconditional and conditional maximum likelihood estimation of the parameters in the logistic model are described and illustrated. A general analysis strategy is also presented which incorporates the assessment of both interaction and confounding in quantifying an exposure-disease association of interest.
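To make the idea concrete, here is a minimal Python sketch of the kind of model the abstract describes: an unconditional logistic regression of a binary disease indicator on an exposure and a single confounder, with the adjusted odds ratio read off the exposure coefficient. The variable names and simulated data are illustrative assumptions, not taken from the paper.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 2000
confounder = rng.normal(size=n)                              # e.g., standardized age
exposure = rng.binomial(1, 1 / (1 + np.exp(-confounder)))    # exposure depends on the confounder
logit_p = -1.0 + 0.7 * exposure + 0.5 * confounder           # true log-odds of disease
disease = rng.binomial(1, 1 / (1 + np.exp(-logit_p)))

X = sm.add_constant(np.column_stack([exposure, confounder]))
fit = sm.Logit(disease, X).fit(disp=0)

# exp(beta_exposure) is the confounder-adjusted odds ratio for the exposure
print("adjusted OR:", np.exp(fit.params[1]))
```

Interaction could be assessed in the same framework by adding an exposure-by-confounder product term and comparing models.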

2.
The complementary roles fulfilled by observational studies and randomized controlled trials in the population science research agenda are illustrated using results from the Women’s Health Initiative (WHI). Comparative and joint analyses of clinical trial and observational study data can enhance observational study design and analysis choices, and can augment randomized trial implications. These concepts are described in the context of findings from the WHI randomized trials of postmenopausal hormone therapy and of a low-fat dietary pattern, especially in relation to coronary heart disease, stroke, and breast cancer. The role of biomarkers of exposure and outcome, including high-dimensional genomic and proteomic biomarkers, in the elucidation of disease associations will also be discussed in these same contexts.

3.
Confounding adjustment plays a key role in designing observational studies such as cross-sectional studies, case-control studies, and cohort studies. In this article, we propose a simple method for sample size calculation in observational research in the presence of confounding. The method is motivated by the notion of E-value, using a bounding factor to quantify the impact of confounders on the effect size. The method can be applied to calculate the needed sample size in observational research when the outcome variable is binary, continuous, or time-to-event. The method can be implemented straightforwardly using existing commercial software such as the PASS software. We demonstrate the performance of the proposed method through numerical examples, simulation studies, and a real application, which show that the proposed method is conservative, providing a slightly larger sample size than is needed to achieve a given power.
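A rough sketch of one way such a calculation could work, not the paper's exact algorithm: attenuate the design risk ratio by the Ding-VanderWeele bounding factor and plug the attenuated effect into a standard two-proportion sample-size formula. All inputs (p0, rr, rr_ud, rr_eu) are hypothetical.

```python
import numpy as np
from scipy.stats import norm

def bounding_factor(rr_ud, rr_eu):
    """Bounding factor from the confounder-outcome (rr_ud) and
    confounder-exposure (rr_eu) risk ratios."""
    return rr_ud * rr_eu / (rr_ud + rr_eu - 1.0)

def n_per_group(p0, rr, rr_ud, rr_eu, alpha=0.05, power=0.8):
    # conservatively attenuate the design risk ratio by the bounding factor
    rr_adj = max(rr / bounding_factor(rr_ud, rr_eu), 1.0 + 1e-6)
    p1 = min(p0 * rr_adj, 0.999)
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    return int(np.ceil(z**2 * (p0 * (1 - p0) + p1 * (1 - p1)) / (p1 - p0)**2))

# binary outcome: baseline risk 10%, design RR 1.8, assumed confounding strength 1.5/1.5
print(n_per_group(p0=0.10, rr=1.8, rr_ud=1.5, rr_eu=1.5))
```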

4.
In the medical literature, there has been increased interest in evaluating associations between exposures and outcomes using nonrandomized observational studies. However, because assignments to exposure are not random in observational studies, comparisons of outcomes between exposed and nonexposed subjects must account for the effect of confounders. Propensity score methods have been widely used to control for confounding when estimating exposure effects. Previous studies have shown that conditioning on the propensity score results in biased estimation of conditional odds ratios and hazard ratios. However, research is lacking on the performance of propensity score methods for covariate adjustment when estimating the area under the ROC curve (AUC). In this paper, AUC is proposed as a measure of effect when outcomes are continuous. The AUC is interpreted as the probability that a randomly selected nonexposed subject has a better response than a randomly selected exposed subject. A series of simulations has been conducted to examine the performance of propensity score methods when the association between exposure and outcomes is quantified by AUC; this includes determining the optimal choice of variables for the propensity score models. Additionally, the propensity score approach is compared with the conventional regression approach to covariate adjustment with the AUC. The best estimator is chosen on the basis of bias, relative bias, and root mean squared error. Finally, an example looking at the relationship between depression/anxiety and pain intensity in people with sickle cell disease is used to illustrate the estimation of the adjusted AUC using the proposed approaches.
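The sketch below illustrates one propensity-score strategy for this target quantity: inverse-probability weighting of the pairwise comparison that defines the AUC. IPW is only one of the approaches the paper compares; the simulated data, variable names, and weighting choice are assumptions, not the authors' simulation design.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 1000
x = rng.normal(size=(n, 2))                              # measured confounders
a = rng.binomial(1, 1 / (1 + np.exp(-(x[:, 0] + 0.5 * x[:, 1]))))   # exposure
y = 1.0 * x[:, 0] - 0.5 * a + rng.normal(size=n)         # continuous outcome (higher = better)

ps = LogisticRegression().fit(x, a).predict_proba(x)[:, 1]
w = np.where(a == 1, 1 / ps, 1 / (1 - ps))               # inverse-probability weights

y0, w0 = y[a == 0], w[a == 0]                            # nonexposed
y1, w1 = y[a == 1], w[a == 1]                            # exposed
better = (y0[:, None] > y1[None, :]).astype(float)
ties = 0.5 * (y0[:, None] == y1[None, :])
pair_w = w0[:, None] * w1[None, :]
auc_w = np.sum(pair_w * (better + ties)) / np.sum(pair_w)
print("IPW-adjusted AUC, P(nonexposed response > exposed response):", round(auc_w, 3))
```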

5.
Sensitivity analysis for unmeasured confounding should be reported more often, especially in observational studies. In the standard Cox proportional hazards model, this requires substantial assumptions and can be computationally difficult. The marginal structural Cox proportional hazards model (Cox proportional hazards MSM) with inverse probability weighting has several advantages compared to the standard Cox model, including situations with only one assessment of exposure (point exposure) and time-independent confounders. We describe how simple computations provide a sensitivity analysis for unmeasured confounding in a Cox proportional hazards MSM with point exposure. This is achieved by translating the general framework for sensitivity analysis for MSMs by Robins and colleagues to survival time data. Instead of bias-corrected observations, we correct the hazard rate to adjust for a specified amount of unmeasured confounding. As an additional bonus, the Cox proportional hazards MSM is robust against bias from differential loss to follow-up. As an illustration, the Cox proportional hazards MSM was applied in a reanalysis of the association between smoking and depression in a population-based cohort of Norwegian adults. The association was moderately sensitive to unmeasured confounding.
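A minimal sketch of the weighted-Cox part of such an analysis, assuming lifelines and scikit-learn: a propensity model yields stabilized weights, a weighted Cox model estimates the marginal hazard ratio, and a crude division by an assumed bias factor stands in for the paper's correction of the hazard rate. The data and the bias factor are illustrative assumptions.

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
n = 2000
z = rng.normal(size=n)                                   # measured confounder
a = rng.binomial(1, 1 / (1 + np.exp(-z)))                # point exposure
t = rng.exponential(scale=1 / np.exp(0.4 * a + 0.5 * z)) # event times
c = rng.exponential(scale=2.0, size=n)                   # censoring times
df = pd.DataFrame({"T": np.minimum(t, c), "E": (t <= c).astype(int), "A": a})

ps = LogisticRegression().fit(z.reshape(-1, 1), a).predict_proba(z.reshape(-1, 1))[:, 1]
df["w"] = np.where(a == 1, a.mean() / ps, (1 - a.mean()) / (1 - ps))   # stabilized weights

cph = CoxPHFitter()
cph.fit(df, duration_col="T", event_col="E", weights_col="w", robust=True)
hr = np.exp(cph.params_["A"])

# crude sensitivity correction: divide the weighted HR by an assumed bias factor
bias_factor = 1.2          # hypothetical strength of unmeasured confounding
print("IPW HR:", round(hr, 2), "corrected HR:", round(hr / bias_factor, 2))
```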

6.
Unmeasured confounding is a common problem in observational studies. This article presents simple formulae that bound the confounding risk ratio for three standard target populations: the exposed, the unexposed, and the total group. The bounds are derived by considering the confounding risk ratio as a function of the prevalence of a covariate, and can be constructed using only information about either the exposure–confounder or the disease–confounder relationship. The formulae can be extended to the confounding odds ratio in case–control studies, and the confounding risk difference is discussed. The application of these formulae is demonstrated using an example in which estimation may suffer from bias due to population stratification. The formulae can help to provide a realistic picture of the potential impact of bias due to confounding.
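A sketch of the kind of bound the abstract describes, assuming a single binary confounder: the confounding risk ratio is written as a function of the confounder prevalences in the exposed and unexposed groups, and scanning those prevalences over [0, 1] yields a bound that uses only the confounder-disease risk ratio. The formula below is the standard one for this simple setting; the paper's exact formulae may differ.

```python
import numpy as np

def confounding_rr(rr_ud, p1, p0):
    """Confounding risk ratio for a binary confounder U with disease risk ratio
    rr_ud and prevalences p1 = P(U=1 | exposed), p0 = P(U=1 | unexposed)."""
    return (p1 * (rr_ud - 1) + 1) / (p0 * (rr_ud - 1) + 1)

rr_ud = 2.0
grid = np.linspace(0, 1, 101)
values = confounding_rr(rr_ud, grid[:, None], grid[None, :])
print("max confounding RR:", values.max())   # equals rr_ud, attained at p1=1, p0=0
print("min confounding RR:", values.min())   # equals 1/rr_ud, attained at p1=0, p0=1
```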

7.
Determining the effectiveness of different treatments from observational data, which are characterized by imbalance between groups due to lack of randomization, is challenging. Propensity matching is often used to rectify imbalances among prognostic variables. However, there are no guidelines on how to appropriately analyze group-matched data when the outcome is a zero-inflated count. In addition, there is debate over whether to account for the correlation of responses induced by matching and/or whether to adjust for variables used in generating the propensity score in the final analysis. The aim of this research is to compare covariate-unadjusted and -adjusted zero-inflated Poisson models that do and do not account for the correlation. A simulation study is conducted, demonstrating that it is necessary to adjust for potential residual confounding, but that accounting for correlation is less important. The methods are applied to a biomedical research data set.
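A sketch of a covariate-adjusted zero-inflated Poisson fit, assuming statsmodels' ZeroInflatedPoisson class; the within-pair correlation the paper discusses is deliberately ignored here, and all data and variable names are simulated.

```python
import numpy as np
import pandas as pd
from statsmodels.discrete.count_model import ZeroInflatedPoisson

rng = np.random.default_rng(3)
n = 1500
group = rng.binomial(1, 0.5, n)                    # treatment indicator after matching
x = rng.normal(size=n)                             # covariate used in the propensity score
lam = np.exp(0.3 + 0.4 * group + 0.3 * x)          # Poisson mean for the count part
structural_zero = rng.binomial(1, 0.25, n)         # excess zeros
y = np.where(structural_zero == 1, 0, rng.poisson(lam))

exog = pd.DataFrame({"const": np.ones(n), "group": group, "x": x})   # adjusted count model
exog_infl = pd.DataFrame({"const": np.ones(n)})                      # intercept-only inflation part
res = ZeroInflatedPoisson(y, exog, exog_infl=exog_infl, inflation="logit").fit(maxiter=200, disp=0)
print(res.params)   # the `group` coefficient is the covariate-adjusted log rate ratio
```

Dropping `x` from `exog` gives the unadjusted variant that the paper's simulation compares against.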

8.
A primary focus of an increasing number of scientific studies is to determine whether two exposures interact in the effect that they produce on an outcome of interest. Interaction is commonly assessed by fitting regression models in which the linear predictor includes the product between those exposures. When the main interest lies in the interaction, this approach is not entirely satisfactory because it is prone to (possibly severe) bias when the main exposure effects or the association between outcome and extraneous factors are misspecified. In this article, we therefore consider conditional mean models with identity or log link which postulate the statistical interaction in terms of a finite-dimensional parameter, but which are otherwise unspecified. We show that estimation of the interaction parameter is often not feasible in this model because it would require nonparametric estimation of auxiliary conditional expectations given high-dimensional variables. We thus consider 'multiply robust estimation' under a union model that assumes at least one of several working submodels holds. Our approach is novel in that it makes use of information on the joint distribution of the exposures conditional on the extraneous factors in making inferences about the interaction parameter of interest. In the special case of a randomized trial or a family-based genetic study in which the joint exposure distribution is known by design or by Mendelian inheritance, the resulting multiply robust procedure leads to asymptotically distribution-free tests of the null hypothesis of no interaction on an additive scale. We illustrate the methods via simulation and the analysis of a randomized follow-up study.

9.
Post-marketing data offer rich information and cost-effective resources for physicians and policy-makers to address some critical scientific questions in clinical practice. However, the complex confounding structures (e.g., nonlinear and nonadditive interactions) embedded in these observational data often pose major analytical challenges for proper analysis to draw valid conclusions. Furthermore, often made available as electronic health records (EHRs), these data are usually massive, with hundreds of thousands of observational records, which introduce additional computational challenges. In this paper, for comparative effectiveness analysis, we propose a statistically robust yet computationally efficient propensity score (PS) approach to adjust for the complex confounding structures. Specifically, we propose a kernel-based machine learning method for flexible and robust PS modeling to obtain valid PS estimates from observational data with complex confounding structures. The estimated propensity score is then used in the second-stage analysis to obtain a consistent average treatment effect estimate. An empirical variance estimator based on the bootstrap is adopted. A split-and-merge algorithm is further developed to reduce the computational workload of the proposed method for big data, and to obtain a valid variance estimator of the average treatment effect estimate as a by-product. As shown by extensive numerical studies and an application to a comparative effectiveness analysis of postoperative pain EHR data, the proposed approach consistently outperforms other competing methods, demonstrating its practical utility.
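A generic stand-in for the two-stage idea, not the authors' kernel method or split-and-merge algorithm: an RBF-kernel classifier estimates the propensity score, which then feeds an IPW estimate of the average treatment effect. The data-generating model and variable names are simulated assumptions.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(4)
n = 2000
x = rng.normal(size=(n, 3))
# nonlinear, nonadditive confounding in the true treatment model
p = 1 / (1 + np.exp(-(np.sin(x[:, 0]) + x[:, 1] * x[:, 2])))
a = rng.binomial(1, p)
y = 2.0 * a + np.cos(x[:, 0]) + x[:, 1] * x[:, 2] + rng.normal(size=n)

# kernel-based (RBF) propensity model as a generic stand-in for the paper's method
ps = SVC(kernel="rbf", probability=True).fit(x, a).predict_proba(x)[:, 1]
ps = np.clip(ps, 0.01, 0.99)                       # guard against extreme weights

# second stage: IPW estimate of the average treatment effect
ate = np.mean(a * y / ps) - np.mean((1 - a) * y / (1 - ps))
print("IPW ATE estimate:", round(ate, 2))          # true effect is 2.0
```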

10.
Propensity score-based estimators are commonly used to estimate causal effects in evaluation research. To reduce bias in observational studies, researchers might be tempted to include many, perhaps correlated, covariates when estimating the propensity score model. Taking into account that the propensity score is estimated, this study investigates how the efficiency of matching, inverse probability weighting, and doubly robust estimators changes in the case of correlated covariates. Propositions regarding the large-sample variances under certain assumptions on the data-generating process are given. The propositions are supplemented by several numerical large-sample and finite-sample results from a wide range of models. The results show that the covariate correlations may increase or decrease the variances of the estimators. Several factors influence how correlation affects the variance of the estimators, including the choice of estimator, the strength of confounding with respect to the outcome and the treatment, and whether a constant or non-constant causal effect is present.
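A small Monte Carlo sketch of the question the paper answers analytically: how the sampling variability of an IPW estimator changes when two confounders are correlated. The data-generating model and correlation values are arbitrary assumptions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def mc_sd_of_ipw_ate(rho, n=1000, reps=300, seed=5):
    """Monte Carlo standard deviation of the IPW ATE estimate for covariate correlation rho."""
    rng = np.random.default_rng(seed)
    cov = np.array([[1.0, rho], [rho, 1.0]])
    est = []
    for _ in range(reps):
        x = rng.multivariate_normal([0, 0], cov, size=n)
        a = rng.binomial(1, 1 / (1 + np.exp(-(0.5 * x[:, 0] + 0.5 * x[:, 1]))))
        y = 1.0 * a + x[:, 0] + x[:, 1] + rng.normal(size=n)   # constant causal effect = 1
        ps = LogisticRegression().fit(x, a).predict_proba(x)[:, 1]
        est.append(np.mean(a * y / ps) - np.mean((1 - a) * y / (1 - ps)))
    return np.std(est)

print("MC std of IPW ATE, rho = 0.0:", round(mc_sd_of_ipw_ate(0.0), 3))
print("MC std of IPW ATE, rho = 0.8:", round(mc_sd_of_ipw_ate(0.8), 3))
```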

11.
The Cox proportional hazards (PH) regression model has been widely used to analyze survival data in clinical trials and observational studies. In addition to estimating the main treatment or exposure group effect, it is common to adjust for additional covariates using the Cox model. It is well known that violation of the PH assumption can lead to estimates that are biased and difficult to interpret, and model checking has become a routine procedure. However, such checking often focuses only on the primary group comparison, and the assumption can still be violated when adjusting for many potential covariates. We study the effect of violation of the PH assumption for the covariates on the estimation of the main group effect in the Cox model. The results are summarized in terms of the bias and the coverage properties of the confidence intervals. Overall, in randomized clinical trials, the bias caused by misspecifying the PH assumption for the covariates is no more than 15% in absolute value regardless of sample size. In observational studies where the covariates are likely correlated with the group variable, however, the bias can be very severe. The coverage properties largely depend on sample size, as expected, since bias becomes dominant with increasing sample size. These findings should serve as cautionary notes when adjusting for potential confounders in observational studies, as violation of the PH assumption for the confounders can lead to erroneous results.
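A sketch of the routine checking the abstract recommends, assuming lifelines: fit the adjusted Cox model, then test the PH assumption for every covariate rather than only the group term. The simulated data (log-normal survival times, so covariate effects are not proportional on the hazard scale) are an assumption for illustration, not the paper's design.

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(6)
n = 1500
z = rng.normal(size=n)                                  # adjustment covariate (confounder)
a = rng.binomial(1, 1 / (1 + np.exp(-z)))               # group membership depends on z
# log-normal (AFT) survival times: covariate effects are not PH on the hazard scale
t = np.exp(-0.5 * a - 1.0 * z + rng.standard_normal(n))
c = rng.exponential(scale=2.0, size=n)                  # independent censoring
df = pd.DataFrame({"T": np.minimum(t, c), "E": (t <= c).astype(int), "A": a, "Z": z})

cph = CoxPHFitter().fit(df, duration_col="T", event_col="E")
print(cph.summary[["coef", "p"]])
# check the PH assumption for the adjustment covariate, not only for the group term
cph.check_assumptions(df, p_value_threshold=0.05)
```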

12.
Recent research has extended standard methods for meta-analysis to more general forms of evidence synthesis, where the aim is to combine different data types or data summaries that contain information about functions of multiple parameters to make inferences about the parameters of interest. We consider one such scenario in which the goal is to make inferences about the association between a primary binary exposure and a continuously valued outcome in the context of several confounding exposures, and where the data are available in various different forms: individual participant data (IPD) with repeated measures, sample means that have been aggregated over strata, and binary data generated by thresholding the underlying continuously valued outcome measure. We show that an estimator of the population mean of a continuously valued outcome can be constructed using binary threshold data provided that a separate estimate of the outcome standard deviation is available. The results of a simulation study show that this estimator has negligible bias but is less efficient than the sample mean; the minimum variance ratio is derived from a Taylor series expansion. Combining this estimator with sample means and IPD from different sources (such as a series of published studies) using both linear and probit regression does, however, improve the precision of estimation considerably by incorporating data that would otherwise have been excluded for being in the wrong format. We apply these methods to investigate the association between the G277S mutation in the transferrin gene and serum ferritin (iron) levels separately in pre- and post-menopausal women based on data from three published studies.
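A worked sketch of the threshold-based mean estimator the abstract describes, assuming an approximately normal outcome and an external estimate of its standard deviation: if p = P(Y > c) and Y ~ N(mu, sigma^2), then mu = c - sigma * Phi^{-1}(1 - p). The numbers below are invented for illustration.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(7)
mu_true, sigma, cutoff = 45.0, 15.0, 30.0          # hypothetical outcome scale and threshold
y = rng.normal(mu_true, sigma, size=400)

# a study reports only the proportion above the threshold (binary data)
p_hat = np.mean(y > cutoff)

# recover the mean, given an external estimate of the outcome SD
mu_hat = cutoff - sigma * norm.ppf(1 - p_hat)
print("threshold-based mean estimate:", round(mu_hat, 1), "sample mean:", round(y.mean(), 1))
```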

13.
The outcome-dependent sampling scheme has been gaining attention in both the statistical literature and applied fields. Epidemiological and environmental researchers have been using it to select observations for more powerful and cost-effective studies. Motivated by a study of the effect of in utero exposure to polychlorinated biphenyls on children's IQ at age 7, in which the effect of an important confounding variable is nonlinear, we consider a semi-parametric regression model for data from an outcome-dependent sampling scheme where the relationship between the response and covariates is only partially parameterized. We propose a penalized spline maximum likelihood estimation (PSMLE) for inference on both the parametric and the nonparametric components and develop their asymptotic properties. Through simulation studies and an analysis of the IQ study, we compare the proposed estimator with several competing estimators. Practical considerations for implementing these estimators are discussed.

14.
15.
In randomized clinical trials, the log rank test is often used to test the null hypothesis of the equality of treatment-specific survival distributions. In observational studies, however, the ordinary log rank test is no longer guaranteed to be valid. In such studies we must be cautious about potential confounders; that is, the covariates that affect both the treatment assignment and the survival distribution. In this paper, two cases are considered: the first, in which all the potential confounders are believed to be captured in the primary database, and the second, in which a substudy is conducted to capture additional confounding covariates. We generalize the augmented inverse probability weighted complete case estimators for treatment-specific survival distributions proposed in Bai et al. (Biometrics 69:830–839, 2013) and develop log rank type tests in both cases. The consistency and double robustness of the proposed test statistics are demonstrated in simulation studies. These statistics are then applied to the data from the observational study that motivated this research.
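A sketch of the plain IPW analogue of a log-rank comparison (a weighted Cox model with treatment only, whose test of the treatment term plays the role of a weighted log-rank test); the paper's augmented IPW test is more involved. lifelines and scikit-learn are assumed, and the data are simulated.

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(8)
n = 2000
x = rng.normal(size=(n, 2))                              # measured confounders
a = rng.binomial(1, 1 / (1 + np.exp(-x[:, 0])))          # nonrandomized treatment
t = rng.exponential(1 / np.exp(0.3 * a + 0.6 * x[:, 0]))
c = rng.exponential(1.5, size=n)
df = pd.DataFrame({"T": np.minimum(t, c), "E": (t <= c).astype(int), "A": a})

ps = LogisticRegression().fit(x, a).predict_proba(x)[:, 1]
df["w"] = np.where(a == 1, 1 / ps, 1 / (1 - ps))

# weighted Cox model with treatment only; its test of A is an IPW analogue of the
# log-rank test (the ordinary log-rank test would not be valid here)
cph = CoxPHFitter().fit(df, duration_col="T", event_col="E", weights_col="w", robust=True)
print(cph.summary.loc["A", ["coef", "p"]])
```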

16.
Clinical studies with a small number of patients are conducted by pharmaceutical companies and research institutions. Examples of constraints that lead to a small clinical study include a single investigative site with highly specialized expertise or equipment, rare diseases, and limited time and budget. We consider the following topics, which we believe will be helpful for the investigator and statistician working together on the design and analysis of small clinical studies: definitions of various types of small studies (exploratory, pilot, proof of concept); bias and ways to mitigate it; commonly used study designs for randomized and nonrandomized studies, and some less commonly used designs; potential ethical issues associated with small underpowered clinical studies; sample size for small studies; and statistical analysis methods for different types of variables, along with multiplicity issues. We conclude the paper with recommendations made by an Institute of Medicine committee, which was asked to assess the current methodologies and appropriate situations for conducting small clinical studies.

17.
Although mean residual lifetime is often of interest in biomedical studies, restricted mean residual lifetime must be considered in order to accommodate censoring. Differences in the restricted mean residual lifetime can be used as an appropriate quantity for comparing different treatment groups with respect to their survival times. In observational studies where the factor of interest is not randomized, covariate adjustment is needed to take into account imbalances in confounding factors. In this article, we develop an estimator for the average causal treatment difference using the restricted mean residual lifetime as the target parameter. We account for confounding factors using the Aalen additive hazards model. Large-sample properties of the proposed estimator are established, and simulation studies are conducted to assess its small-sample performance. The method is also applied to an observational data set of patients after an acute myocardial infarction event.
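A simplified sketch using the closely related restricted mean survival time rather than the restricted mean residual lifetime, and inverse-probability weighting rather than the Aalen additive hazards adjustment used in the paper; lifelines' restricted_mean_survival_time utility and simulated data are assumed.

```python
import numpy as np
from lifelines import KaplanMeierFitter
from lifelines.utils import restricted_mean_survival_time
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(9)
n = 2000
z = rng.normal(size=n)                                  # confounder
a = rng.binomial(1, 1 / (1 + np.exp(-z)))               # nonrandomized treatment
t = rng.exponential(1 / np.exp(-0.4 * a + 0.5 * z))     # event times
c = rng.exponential(2.0, size=n)                        # censoring times
T, E = np.minimum(t, c), (t <= c).astype(int)

ps = LogisticRegression().fit(z.reshape(-1, 1), a).predict_proba(z.reshape(-1, 1))[:, 1]
w = np.where(a == 1, 1 / ps, 1 / (1 - ps))              # inverse-probability weights

tau = 2.0                                               # restriction time
rmst = {}
for grp in (0, 1):
    kmf = KaplanMeierFitter().fit(T[a == grp], E[a == grp], weights=w[a == grp])
    rmst[grp] = restricted_mean_survival_time(kmf, t=tau)
print("confounder-adjusted RMST difference (treated - control):", round(rmst[1] - rmst[0], 3))
```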

18.
Propensity score analysis (PSA) is a technique to correct for potential confounding in observational studies. Covariate adjustment, matching, stratification, and inverse weighting are the four most commonly used methods involving propensity scores. The main goal of this research is to determine which PSA method performs best in terms of protecting against spurious association detection, as measured by the Type I error rate, while maintaining sufficient power to detect a true association, if one exists. An examination of these PSA methods along with ordinary least squares regression was conducted under two cases: correct PSA model specification and incorrect PSA model specification. PSA covariate adjustment and PSA matching maintain the nominal Type I error rate when the PSA model is correctly specified, but only PSA covariate adjustment achieves adequate power levels. The other methods produced conservative Type I error rates in some scenarios and liberal Type I error rates in others.

19.
Joint damage in psoriatic arthritis can be measured by clinical and radiological methods, the former being performed more frequently during longitudinal follow-up of patients. Motivated by the need to compare findings based on the different methods with different observation patterns, we consider longitudinal data in which the outcome variable is a cumulative total of counts that may be unobserved at times when other, informative, explanatory variables are recorded. We demonstrate how to calculate the likelihood for such data when it is assumed that the increment in the cumulative total follows a discrete distribution with a location parameter that depends on a linear function of explanatory variables. An approach to incorporating informative observation is suggested. We present analyses based on an observational database from a psoriatic arthritis clinic. Although the use of the new statistical methodology has relatively little effect in this example, simulation studies indicate that the method can provide substantial improvements in bias and coverage in some situations where there is an important time-varying explanatory variable.

20.
Odds ratios are frequently used to describe the relationship between a binary treatment or exposure and a binary outcome. An odds ratio can be interpreted as a causal effect or a measure of association, depending on whether it involves potential outcomes or the actual outcome. An odds ratio can also be characterized as marginal versus conditional, depending on whether it involves conditioning on covariates. This article proposes a method for estimating a marginal causal odds ratio subject to confounding. The proposed method is based on a logistic regression model relating the outcome to the treatment indicator and potential confounders. Simulation results show that the proposed method performs reasonably well in moderate-sized samples and may even offer an efficiency gain over the direct method based on the sample odds ratio in the absence of confounding. The method is illustrated with a real example concerning coronary heart disease.
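A standardization (g-computation) sketch of estimating a marginal causal odds ratio from a logistic outcome model: fit the model with treatment and confounders, predict every subject's risk under treatment and under control, average, and form the odds ratio of the averaged risks. The paper's proposed estimator is likewise built on a logistic outcome model, but its exact construction may differ; the data and variable names below are simulated.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(10)
n = 5000
z = rng.normal(size=n)                                     # confounder
a = rng.binomial(1, 1 / (1 + np.exp(-z)))                  # treatment
y = rng.binomial(1, 1 / (1 + np.exp(-(-1.0 + 0.8 * a + 0.7 * z))))
df = pd.DataFrame({"y": y, "a": a, "z": z})

# outcome logistic model relating the outcome to treatment and confounder
fit = smf.logit("y ~ a + z", data=df).fit(disp=0)

# standardize: predict everyone's risk under a=1 and under a=0, then average
p1 = fit.predict(df.assign(a=1)).mean()
p0 = fit.predict(df.assign(a=0)).mean()
marginal_or = (p1 / (1 - p1)) / (p0 / (1 - p0))
print("marginal causal OR estimate:", round(marginal_or, 2))
```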
