Similar articles
20 similar articles found (search time: 78 ms)
1.
The measurable multiple bio-markers for a disease are used as indicators for studying the response variable of interest in order to monitor and model disease progression. However, it is common for subjects to drop out of studies prematurely, resulting in unbalanced data and hence complicating inferences involving such data. In this paper we consider a case where data are unbalanced among subjects and also within a subject, because for some reason only a subset of the multiple outcomes of the response variable is observed at any one occasion. We propose a nonlinear mixed-effects model for the multivariate response variable data and derive a joint likelihood function that takes into account the partial dropout of the outcomes of the response variable. We further show how the methodology can be used in the estimation of the parameters that characterise HIV disease dynamics. An approximate estimation technique for the parameters is also given and illustrated using a routine observational HIV dataset.

2.
Assessing dose-response from flexible-dose clinical trials (e.g., titration or dose escalation studies) is challenging and often problematic due to the selection bias caused by 'titration-to-response'. We investigate the performance of a dynamic linear mixed-effects (DLME) model and marginal structural model (MSM) in evaluating dose-response from flexible-dose titration clinical trials via simulations. The simulation results demonstrated that DLME models with previous exposure as a time-varying covariate may provide an unbiased and efficient estimator to recover the exposure-response relationship from flexible-dose clinical trials. Although MSM models with independent and exchangeable working correlations appeared able to recover the right direction of the dose-response relationship, they tended to over-correct the selection bias and overestimate the underlying true dose-response. The MSM estimators were also associated with large variability in the parameter estimates. Therefore, DLME may be an appropriate modeling option for identifying dose-response when data from fixed-dose studies are absent or a fixed-dose design would be unethical to implement.

3.
In the medical literature, there has been an increased interest in evaluating association between exposure and outcomes using nonrandomized observational studies. However, because assignments to exposure are not random in observational studies, comparisons of outcomes between exposed and nonexposed subjects must account for the effect of confounders. Propensity score methods have been widely used to control for confounding when estimating exposure effect. Previous studies have shown that conditioning on the propensity score results in biased estimation of the conditional odds ratio and hazard ratio. However, research is lacking on the performance of propensity score methods for covariate adjustment when estimating the area under the ROC curve (AUC). In this paper, AUC is proposed as a measure of effect when outcomes are continuous. The AUC is interpreted as the probability that a randomly selected nonexposed subject has a better response than a randomly selected exposed subject. A series of simulations has been conducted to examine the performance of propensity score methods when association between exposure and outcomes is quantified by AUC; this includes determining the optimal choice of variables for the propensity score models. Additionally, the propensity score approach is compared with that of the conventional regression approach to adjust for covariates with the AUC. The choice of the best estimator depends on bias, relative bias, and root mean squared error. Finally, an example looking at the relationship of depression/anxiety and pain intensity in people with sickle cell disease is used to illustrate the estimation of the adjusted AUC using the proposed approaches.
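The AUC interpretation given in this abstract, the probability that a randomly selected nonexposed subject has a better response than a randomly selected exposed subject, is exactly the (normalized) Mann-Whitney statistic. A minimal pure-Python sketch with simulated (hypothetical) continuous outcomes:

```python
import random

def auc(nonexposed, exposed):
    # Empirical AUC: fraction of (nonexposed, exposed) pairs in which the
    # nonexposed response is higher; ties count as 1/2 (Mann-Whitney).
    wins = sum(
        1.0 if y0 > y1 else 0.5 if y0 == y1 else 0.0
        for y0 in nonexposed for y1 in exposed
    )
    return wins / (len(nonexposed) * len(exposed))

random.seed(1)
# Hypothetical data: nonexposed subjects respond better on average.
y_nonexposed = [random.gauss(1.0, 1.0) for _ in range(300)]
y_exposed = [random.gauss(0.0, 1.0) for _ in range(300)]
estimate = auc(y_nonexposed, y_exposed)  # well above the null value 0.5
```

With identical distributions the estimate centers on 0.5 (no exposure effect); values near 1 indicate that nonexposed subjects systematically fare better.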

4.
Repeated categorical outcomes frequently occur in clinical trials. Muenz and Rubinstein (1985) presented Markov chain models to analyze binary repeated data in a breast cancer study. We extend their method to the setting when more than one repeated outcome variable is of interest. In a randomized clinical trial of breast cancer, we investigate the dependency of toxicities on predictor variables and the relationship among multiple toxic effects.
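The building block of the Markov chain approach referenced here is the transition probability P(next outcome = 1 | current outcome), estimated from each subject's sequence of binary responses. A minimal sketch, using made-up sequences rather than the paper's breast cancer data:

```python
from collections import Counter

def transition_probs(sequences):
    # Estimate first-order Markov transition probabilities
    # P(next = 1 | current) by pooling consecutive pairs across subjects.
    counts = Counter()
    for seq in sequences:
        for prev, nxt in zip(seq, seq[1:]):
            counts[(prev, nxt)] += 1
    probs = {}
    for prev in (0, 1):
        total = counts[(prev, 0)] + counts[(prev, 1)]
        probs[prev] = counts[(prev, 1)] / total if total else float("nan")
    return probs

# Hypothetical subjects: one binary toxicity recorded at four visits each.
seqs = [[0, 0, 1, 1], [0, 1, 1, 0], [1, 1, 1, 1], [0, 0, 0, 1]]
p = transition_probs(seqs)  # p[0] = 3/6, p[1] = 5/6
```

Covariate effects enter by letting these transition probabilities follow, e.g., a logistic model in the predictors; the multivariate extension models several such chains jointly.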

5.
Binary as well as polytomous logistic models have been found useful for estimating odds ratios when the exposure of prime interest assumes unordered multiple levels under a matched pair case-control design. In our earlier studies, we have shown the use of a polytomous logistic model for estimating cumulative odds ratios when the exposure of prime interest assumes multiple ordered levels under a matched pair case-control design. In this paper, using the above model, we estimate the covariate-adjusted cumulative odds ratios in the case of an ordinal multiple-level exposure variable under a pairwise matched case-control retrospective design. An approach, based on asymptotic distributional results, is also described to investigate whether or not the response categories are distinguishable with respect to the cumulative odds ratios after adjusting for the effect of covariates. An illustrative example is presented and discussed.

6.
For survival data, mark variables are only observed at uncensored failure times, and it is of interest to investigate whether there is any relationship between the failure time and the mark variable. The additive hazards model, focusing on hazard differences rather than hazard ratios, has been widely used in practice. In this article, we propose a mark-specific additive hazards model in which both the regression coefficient functions and the baseline hazard function depend nonparametrically on a continuous mark. An estimating equation approach is developed to estimate the regression functions, and the asymptotic properties of the resulting estimators are established. In addition, some formal hypothesis tests are constructed for various hypotheses concerning the mark-specific treatment effects. The finite sample behavior of the proposed estimators is evaluated through simulation studies, and an application to a data set from the first HIV vaccine efficacy trial is provided.

7.
Observational epidemiological studies are increasingly used in pharmaceutical research to evaluate the safety and effectiveness of medicines. Such studies can complement findings from randomized clinical trials by involving larger and more generalizable patient populations, by accruing greater durations of follow-up, and by representing what happens more typically in the clinical setting. However, the interpretation of exposure effects in observational studies is almost always complicated by non-random exposure allocation, which can result in confounding and potentially lead to misleading conclusions. Confounding occurs when an extraneous factor, related to both the exposure and the outcome of interest, partly or entirely explains the relationship observed between the study exposure and the outcome. Although randomization can eliminate confounding by distributing all such extraneous factors equally across the levels of a given exposure, methods for dealing with confounding in observational studies include a careful choice of study design and the possible use of advanced analytical methods. The aim of this paper is to introduce some of the approaches that can be used to help minimize the impact of confounding in observational research to the reader working in the pharmaceutical industry.
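The definition of confounding given in this abstract is easy to demonstrate with a small simulation. In this hypothetical data-generating model, age drives both treatment and outcome while the treatment itself does nothing; the crude comparison shows a spurious effect that stratifying on the confounder removes:

```python
import random
random.seed(0)

# Hypothetical model: older subjects are more likely to be treated
# AND more likely to have the outcome; the treatment has no effect.
n = 20000
crude_treated, crude_untreated = [], []
strata = {0: [[], []], 1: [[], []]}  # age -> [untreated outcomes, treated outcomes]
for _ in range(n):
    old = random.random() < 0.5
    treated = random.random() < (0.7 if old else 0.2)   # exposure depends on age
    outcome = random.random() < (0.5 if old else 0.1)   # outcome depends on age only
    (crude_treated if treated else crude_untreated).append(outcome)
    strata[old][treated].append(outcome)

mean = lambda xs: sum(xs) / len(xs)
crude_diff = mean(crude_treated) - mean(crude_untreated)  # spuriously positive
# Average of the within-stratum differences: close to the true null effect.
adj_diff = sum(mean(strata[a][1]) - mean(strata[a][0]) for a in (0, 1)) / 2
```

Stratification is only one of the design and analysis options the paper surveys, but the mechanism it corrects is the same one the other methods target.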

8.
Longitudinal clinical trials with long follow-up periods almost invariably suffer from loss to follow-up and non-compliance with the assigned therapy. An example is protocol 128 of the AIDS Clinical Trials Group, a 5-year equivalency trial comparing reduced dose zidovudine with the standard dose for treatment of paediatric acquired immune deficiency syndrome patients. This study compared responses to treatment by using both clinical and cognitive outcomes. The cognitive outcomes are of particular interest because the effects of human immunodeficiency virus infection of the central nervous system can be more acute in children than in adults. We formulate and apply a Bayesian hierarchical model to estimate both the intent-to-treat effect and the average causal effect of reducing the prescribed dose of zidovudine by 50%. The intent-to-treat effect quantifies the causal effect of assigning the lower dose, whereas the average causal effect represents the causal effect of actually taking the lower dose. We adopt a potential outcomes framework where, for each individual, we assume the existence of a different potential outcomes process at each level of time spent on treatment. The joint distribution of the potential outcomes and the time spent on assigned treatment is formulated using a hierarchical model: the potential outcomes distribution is given at the first level, and dependence between the outcomes and time on treatment is specified at the second level by linking the time on treatment to subject-specific effects that characterize the potential outcomes processes. Several distributional and structural assumptions are used to identify the model from observed data, and these are described in detail. A detailed analysis of AIDS Clinical Trials Group protocol 128 is given; inference about both the intent-to-treat effect and average causal effect indicate a high probability of dose equivalence with respect to cognitive functioning.

9.
The classic recursive bivariate probit model is of particular interest to researchers since it allows for the estimation of the treatment effect that a binary endogenous variable has on a binary outcome in the presence of unobservables. In this article, the authors consider the semiparametric version of this model and introduce a model fitting procedure which permits reliable estimation of the parameters of a system of two binary outcomes with a binary endogenous regressor and smooth functions of continuous covariates. They illustrate the empirical validity of the proposal through an extensive simulation study. The approach is applied to data from a survey, conducted in Botswana, on the impact of education on women's fertility. Some studies suggest that the estimated effect could have been biased by the possible endogeneity arising because unobservable confounders (e.g., ability and motivation) are associated with both fertility and education. The Canadian Journal of Statistics 39: 259–279; 2011 © 2011 Statistical Society of Canada

10.

Teratological experiments are controlled dose-response studies in which impregnated animals are randomly assigned to various exposure levels of a toxic substance. Subsequently, both continuous and discrete responses are recorded on the litters of fetuses that these animals produce. Discrete responses are usually binary in nature, such as the presence or absence of some fetal anomaly. These clustered binary data usually exhibit over-dispersion (or under-dispersion), which can be interpreted as either variation between litter response probabilities or intralitter correlation. To model the correlation and/or variation, the beta-binomial distribution has been assumed for the number of positive fetal responses within a litter. Although the mean of the beta-binomial model has been linked to dose-response functions, in terms of measuring over-dispersion it may be too restrictive for modeling data from teratological studies. Also, for certain toxins, a threshold effect has been observed in the dose-response pattern of the data. We propose to incorporate a random effect into a general threshold dose-response model to account for the variation in responses, while at the same time estimating the threshold effect. We fit this model to a well-known data set in the field of teratology. Simulation studies are performed to assess the validity of the random effects threshold model in these types of studies.
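The over-dispersion mechanism described here, litter-to-litter variation in response probability, is easy to see by simulation. A minimal sketch of the beta-binomial model with made-up shape parameters (not the paper's fitted values):

```python
import random
random.seed(2)

def beta_binomial_litter(n_fetuses, a, b):
    # One litter: draw a litter-specific response probability
    # p ~ Beta(a, b), then the affected count ~ Binomial(n_fetuses, p).
    p = random.betavariate(a, b)
    return sum(random.random() < p for _ in range(n_fetuses))

m, n = 4000, 10                      # 4000 litters of 10 fetuses each
counts = [beta_binomial_litter(n, 2.0, 6.0) for _ in range(m)]
xbar = sum(counts) / m
var = sum((c - xbar) ** 2 for c in counts) / m
# Variance a plain binomial with the same mean would imply:
binom_var = n * (xbar / n) * (1 - xbar / n)
```

For Beta(2, 6) the intralitter correlation is 1/(2 + 6 + 1) = 1/9, so the empirical variance runs roughly twice the binomial value; ignoring that inflation understates standard errors.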

11.
In dose-response studies, Wadley’s problem occurs when the number of organisms that survive exposure to varying doses of a treatment is observed but the number initially present is unknown. The unknown number of organisms initially treated has traditionally been modelled by a Poisson distribution, resulting in a Poisson distribution for the number of survivors with parameter proportional to the probability of survival. Data in this setting are often overdispersed. This study revisits the beta-Poisson distribution and considers its effectiveness in modelling overdispersed data from a Wadley’s problem setting.
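Wadley's setting can be sketched directly: an unobserved Poisson initial count thinned by a survival probability gives a Poisson survivor count, and letting that survival probability vary (here Beta-distributed, a stand-in for the heterogeneity the beta-Poisson captures) makes the variance exceed the mean. All parameter values below are illustrative, not from the paper:

```python
import math
import random
random.seed(3)

def survivors(lam, a, b):
    # Unobserved initial count N ~ Poisson(lam), drawn by Knuth's
    # product-of-uniforms inversion (stdlib has no Poisson sampler).
    limit, k, prod = math.exp(-lam), 0, 1.0
    while True:
        prod *= random.random()
        if prod <= limit:
            break
        k += 1
    # Heterogeneous survival probability p ~ Beta(a, b); observed
    # survivors are a Binomial(N, p) thinning of the initial count.
    p = random.betavariate(a, b)
    return sum(random.random() < p for _ in range(k))

m = 5000
xs = [survivors(30.0, 2.0, 2.0) for _ in range(m)]
xbar = sum(xs) / m
var = sum((x - xbar) ** 2 for x in xs) / m  # exceeds xbar: overdispersion
```

With a fixed p the survivor count would be exactly Poisson(lam * p) with variance equal to its mean; the Beta mixing inflates the variance well beyond it.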

12.
Over the past decades, various principles for causal effect estimation have been proposed, all differing in terms of how they adjust for measured confounders: either via traditional regression adjustment, by adjusting for the expected exposure given those confounders (e.g., the propensity score), or by inversely weighting each subject's data by the likelihood of the observed exposure, given those confounders. When the exposure is measured with error, this raises the question whether these different estimation strategies might be differently affected and whether one of them is to be preferred for that reason. In this article, we investigate this by comparing inverse probability of treatment weighted (IPTW) estimators and doubly robust estimators for the exposure effect in linear marginal structural mean models (MSM) with G-estimators, propensity score (PS) adjusted estimators and ordinary least squares (OLS) estimators for the exposure effect in linear regression models. We find analytically that these estimators are equally affected when exposure misclassification is independent of the confounders, but not otherwise. Simulation studies reveal similar results for time-varying exposures and when the model of interest includes a logistic link.
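The third adjustment principle listed in this abstract, inverse probability of treatment weighting, can be sketched in a few lines. This is a hypothetical simulation (error-free exposure, known propensity score), not the paper's misclassification study: the true effect is 2.0, the crude contrast is confounded, and the IPTW contrast recovers the truth:

```python
import random
random.seed(4)

# Hypothetical model: binary confounder C raises both the treatment
# probability and the outcome; the true exposure effect is 2.0.
n = 20000
w_sum, w_y = [0.0, 0.0], [0.0, 0.0]   # weighted totals per exposure arm
cnt, y_sum = [0, 0], [0.0, 0.0]       # unweighted totals per exposure arm
for _ in range(n):
    c = random.random() < 0.5
    e = 0.8 if c else 0.2             # propensity score P(A = 1 | C)
    a = random.random() < e
    y = 2.0 * a + 3.0 * c + random.gauss(0.0, 1.0)
    w = 1.0 / (e if a else 1.0 - e)   # inverse probability of treatment weight
    w_sum[a] += w
    w_y[a] += w * y
    cnt[a] += 1
    y_sum[a] += y

crude = y_sum[1] / cnt[1] - y_sum[0] / cnt[0]   # confounded, near 3.8
iptw = w_y[1] / w_sum[1] - w_y[0] / w_sum[0]    # close to the true 2.0
```

Weighting creates a pseudo-population in which C and A are independent; the paper's question is how this route, versus regression or PS adjustment, degrades once A is misclassified.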

13.
While most of epidemiology is observational, rather than experimental, the culture of epidemiology is still derived from agricultural experiments, rather than other observational fields, such as astronomy or economics. The mismatch is made greater as focus has turned to continuous risk factors, multifactorial outcomes, and outcomes with large variation unexplainable by available risk factors. The analysis of such data is often viewed as hypothesis testing with statistical control replacing randomization. However, such approaches often test restricted forms of the hypothesis being investigated, such as the hypothesis of a linear association, when there is no prior empirical or theoretical reason to believe that if an association exists, it is linear. In combination with the large nonstochastic sources of error in such observational studies, this suggests the more flexible alternative of exploring the association. Conclusions on the possible causal nature of any discovered association will rest on the coherence and consistency of multiple studies. Nonparametric smoothing in general, and generalized additive models in particular, represent an attractive approach to such problems. This is illustrated using data examining the relationship between particulate air pollution and daily mortality in Birmingham, Alabama; between particulate air pollution, ozone, and SO2 and daily hospital admissions for respiratory illness in Philadelphia; and between ozone and particulate air pollution and coughing episodes in children in six eastern U.S. cities. The results indicate that airborne particles and ozone are associated with adverse health outcomes at very low concentrations, and that there are likely no thresholds for these relationships.

14.
Case-cohort designs are commonly used in large epidemiological studies to reduce the cost associated with covariate measurement. In many such studies the number of covariates is very large. An efficient variable selection method is needed for case-cohort studies where the covariates are only observed in a subset of the sample. Current literature on this topic has focused on the proportional hazards model. However, in many studies the additive hazards model is preferred over the proportional hazards model, either because the proportional hazards assumption is violated or because the additive hazards model provides more relevant information to the research question. Motivated by one such study, the Atherosclerosis Risk in Communities (ARIC) study, we investigate the properties of a regularized variable selection procedure in a stratified case-cohort design under an additive hazards model with a diverging number of parameters. We establish the consistency and asymptotic normality of the penalized estimator and prove its oracle property. Simulation studies are conducted to assess the finite sample performance of the proposed method with a modified cross-validation tuning parameter selection method. We apply the variable selection procedure to the ARIC study to demonstrate its practical use.

15.

For large cohort studies with rare outcomes, the nested case-control design only requires data collection on small subsets of the individuals at risk. These are typically randomly sampled at the observed event times, and a weighted, stratified analysis takes over the role of the full cohort analysis. Motivated by observational studies on the impact of hospital-acquired infection on hospital stay outcome, we are interested in situations where not necessarily the outcome is rare, but a time-dependent exposure, such as the occurrence of an adverse event or disease progression, is. Using the counting process formulation of general nested case-control designs, we propose three sampling schemes in which not all commonly observed outcomes need to be included in the analysis. Rather, inclusion probabilities may be time-dependent and may even depend on the past sampling and exposure history. A bootstrap analysis of a full cohort data set from hospital epidemiology allows us to investigate the practical utility of the proposed sampling schemes in comparison to a full cohort analysis and to an overly simple application of the nested case-control design when the outcome is not rare.


16.
The elderly population in the USA is expected to double in size by the year 2025, making longitudinal health studies of this population of increasing importance. The degree of loss to follow-up in studies of the elderly, often occurring because elderly people cannot remain in the study, enter a nursing home, or die, makes longitudinal studies of this population problematic. We propose a latent class model for analysing multiple longitudinal binary health outcomes with multiple-cause non-response when the data are missing at random and a non-likelihood-based analysis is performed. We extend the estimating equations approach of Robins and co-workers to latent class models by reweighting the multiple binary longitudinal outcomes by the inverse probability of being observed. This results in consistent parameter estimates when the probability of non-response depends on observed outcomes and covariates (missing at random), assuming that the model for non-response is correctly specified. We extend the non-response model so that institutionalization, death and missingness due to failure to locate, refusal or incomplete data each have their own set of non-response probabilities. Robust variance estimates are derived which account for the use of a possibly misspecified covariance matrix, estimation of missing data weights and estimation of latent class measurement parameters. This approach is then applied to a study of lower body function among a subsample of the elderly participating in the 6-year Longitudinal Study of Aging.

17.
A method based on pseudo-observations has been proposed for direct regression modeling of functionals of interest with right-censored data, including the survival function, the restricted mean and the cumulative incidence function in competing risks. The models, once the pseudo-observations have been computed, can be fitted using standard generalized estimating equation software. Regression models can however yield problematic results if the number of covariates is large in relation to the number of events observed. Guidelines on events per variable are often used in practice. These rules of thumb for the number of events per variable have primarily been established based on simulation studies for the logistic regression model and Cox regression model. In this paper we conduct a simulation study to examine the small sample behavior of the pseudo-observation method to estimate risk differences and relative risks for right-censored data. We investigate how coverage probabilities and relative bias of the pseudo-observation estimator interact with sample size, number of variables and average number of events per variable.
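The pseudo-observation construction underlying this method is a jackknife identity: the i-th pseudo-observation is n times the full-sample estimate minus (n - 1) times the leave-one-out estimate. A minimal uncensored sketch (for the sample mean the pseudo-observations reduce exactly to the raw values, which is the sanity check below; with censoring the same recipe is applied to, e.g., the Kaplan-Meier estimate):

```python
def pseudo_observations(values, stat):
    # Jackknife pseudo-observations: n * stat(all) - (n-1) * stat(leave-one-out).
    n = len(values)
    full = stat(values)
    return [
        n * full - (n - 1) * stat(values[:i] + values[i + 1:])
        for i in range(n)
    ]

mean = lambda xs: sum(xs) / len(xs)
ys = [3.0, 1.0, 4.0, 1.0, 5.0]
po = pseudo_observations(ys, mean)  # equals ys when stat is the mean
```

Once computed, the pseudo-observations are treated as complete-data responses and fed to standard GEE software, which is what makes the approach attractive for direct regression on survival functionals.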

18.
Twenty-four-hour urinary excretion of nicotine equivalents, a biomarker for exposure to cigarette smoke, has been widely used in biomedical studies in recent years. Its accurate estimation is important for examining human exposure to tobacco smoke. The objective of this article is to compare the bootstrap confidence intervals of nicotine equivalents with the standard confidence intervals derived from a linear mixed model (LMM) and generalized estimating equations (GEE). We use the percentile bootstrap method because it has practical value for real-life application and it works well with nicotine data. To preserve the within-subject correlation of nicotine equivalents between repeated measures, we bootstrap the repeated measures of each subject as a vector. The results indicate that the bootstrapped estimates in most cases give better estimates than the LMM and GEE without bootstrap.
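The key device in this abstract, resampling each subject's repeated measures as a single vector so that within-subject correlation is preserved, is a few lines of code. A sketch with made-up data (two repeated measures per subject) rather than the nicotine measurements:

```python
import random
random.seed(5)

def percentile_bootstrap_ci(subjects, stat, n_boot=2000, alpha=0.05):
    # Percentile bootstrap that resamples whole subjects (vectors of
    # repeated measures), keeping each subject's measures together.
    reps = []
    for _ in range(n_boot):
        sample = [random.choice(subjects) for _ in subjects]
        reps.append(stat(sample))
    reps.sort()
    lo = reps[int(alpha / 2 * n_boot)]
    hi = reps[int((1 - alpha / 2) * n_boot) - 1]
    return lo, hi

# Hypothetical data: 60 subjects, two repeated measures each.
subjects = [[random.gauss(10, 2), random.gauss(10, 2)] for _ in range(60)]
overall_mean = lambda subs: sum(sum(s) for s in subs) / sum(len(s) for s in subs)
ci = percentile_bootstrap_ci(subjects, overall_mean)
```

Resampling individual measurements instead of subject vectors would break the within-subject correlation and typically understate the interval width.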

19.
In most parametric statistical analyses, knowledge of the distribution of the response variable, or of the errors, is important. As this distribution is not typically known with certainty, one might initially construct a histogram or estimate the density of the variable of interest to gain insight regarding the distribution and its characteristics. However, when the response variable is incomplete, a histogram will only provide a representation of the distribution of the observed data. In AIDS Clinical Trials Group protocol 175, interest lies in the difference in CD4 counts from baseline to final follow-up, but CD4 counts collected at final follow-up were incomplete. A method is therefore proposed for estimating the density of an incomplete response variable when auxiliary data are available. The proposed estimator is based on the Horvitz–Thompson estimator, and the propensity scores are estimated nonparametrically. Simulation studies indicate that the proposed estimator performs well.
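The Horvitz-Thompson reweighting idea behind this estimator is easiest to see on a simpler functional than the density: weight each observed response by the inverse of its observation probability. This sketch uses a hypothetical missingness model with a known propensity (the paper estimates it nonparametrically) and shows the weighted mean correcting the naive one:

```python
import random
random.seed(6)

# Hypothetical MAR setting: Y is observed with probability depending on
# an auxiliary X, so the observed cases over-represent large Y.
n = 40000
observed, weights = [], []
for _ in range(n):
    x = random.random()
    y = x + random.gauss(0.0, 0.1)   # true mean of Y is 0.5
    p_obs = 0.2 + 0.7 * x            # observation propensity (assumed known here)
    if random.random() < p_obs:
        observed.append(y)
        weights.append(1.0 / p_obs)  # Horvitz-Thompson weight

naive_mean = sum(observed) / len(observed)  # biased upward
ht_mean = sum(w * y for w, y in zip(weights, observed)) / sum(weights)
```

The same weights, applied per histogram bin or per kernel term instead of to the mean, yield the inverse-probability-weighted density estimate the paper studies.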

20.