Similar articles
20 similar articles found (search took 15 ms)
1.
In a randomized trial designed to study the effect of a treatment of interest on the evolution of the mean of a time-dependent outcome variable, subjects are assigned to a treatment regime, or, equivalently, a treatment protocol. Unfortunately, subjects often fail to comply with their assigned regime. From a public health point of view, the causal parameter of interest will often be a function of the treatment differences that would have been observed had, contrary to fact, all subjects remained on protocol. This paper considers the identification and estimation of these treatment differences based on a new class of structural models, the multivariate structural nested mean models, when reliable estimates of each subject's actual treatment are available. Estimates of “actual treatment” might, for example, be obtained by measuring the amount of “active drug” in the subject's blood or urine at each follow-up visit or by pill-counting techniques. In addition, we discuss a natural extension of our methods to observational studies.

2.
Assessing dose response from flexible-dose clinical trials is problematic. The true dose effect may be obscured and even reversed in observed data because dose is related to both previous and subsequent outcomes. To remove this selection bias, we propose a marginal structural model (MSM) estimated with inverse probability of treatment weighting (IPTW). Potential clinical outcomes are compared across dose groups using an MSM based on a weighted pooled repeated-measures analysis (generalized estimating equations with robust estimates of standard errors), with the dose effect represented by current dose and recent dose history, and with weights estimated from the data (via logistic regression) and computed as products of (i) the inverse probability of receiving the dose assignments actually received and (ii) the inverse probability of remaining on treatment by this time. In simulations, this method led to almost unbiased estimates of the true dose effect under various scenarios. Results were compared with those obtained by unweighted analyses and by weighted analyses under various model specifications. The simulation showed that the IPTW MSM methodology is highly sensitive to model misspecification even when the weights are known. Practitioners applying MSMs should be cautious about the challenges of implementing them with real clinical data. Clinical trial data are used to illustrate the methodology. Copyright © 2012 John Wiley & Sons, Ltd.
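The weighting scheme described above can be sketched in a few lines. This is a toy illustration, not the paper's implementation: the data-generating process, the single confounder (the previous outcome), and all coefficients are invented for the example; only the stabilized product-of-inverse-probabilities structure follows the abstract.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n, T = 500, 3                                   # patients, visits

# Simulated flexible-dose data: a worse previous outcome raises the
# chance of titrating to the high dose (the selection bias at issue).
y = np.zeros((n, T))
dose = np.zeros((n, T), dtype=int)
y[:, 0] = rng.normal(size=n)
dose[:, 0] = rng.integers(0, 2, n)              # randomized at baseline
for t in range(1, T):
    p_high = 1 / (1 + np.exp(-1.5 * y[:, t - 1]))
    dose[:, t] = rng.random(n) < p_high
    y[:, t] = 0.5 * y[:, t - 1] + 0.3 * dose[:, t] + rng.normal(size=n)

# Stabilized IPT weights: product over visits of
#   P(dose_t) / P(dose_t | previous outcome),
# with the denominator estimated by logistic regression.
w = np.ones(n)
for t in range(1, T):
    X = y[:, t - 1].reshape(-1, 1)
    p_cond = (LogisticRegression().fit(X, dose[:, t])
              .predict_proba(X)[np.arange(n), dose[:, t]])
    p_marg = np.where(dose[:, t] == 1,
                      dose[:, t].mean(), 1 - dose[:, t].mean())
    w *= p_marg / p_cond

# The weighted pseudo-population is then analysed with GEE / weighted GLM.
```

The stabilized form (marginal probability in the numerator) keeps the weights from becoming extreme; their mean should be close to 1.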

3.
A power transformation regression model is considered for exponentially distributed time to failure data with right censoring. Procedures for estimation of parameters by maximum likelihood and assessment of goodness of model fit are described and illustrated.
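For the exponential baseline case, the censored-data MLE of the rate has a closed form: the event count divided by total time at risk. A minimal sketch on simulated data (the true rate and censoring distribution are arbitrary; the power-transformation part of the model is not implemented here):

```python
import numpy as np

rng = np.random.default_rng(1)
lam_true = 0.5
n = 2000
t_fail = rng.exponential(1 / lam_true, n)   # latent failure times
t_cens = rng.exponential(2.0, n)            # independent censoring times
time = np.minimum(t_fail, t_cens)           # observed time on study
event = (t_fail <= t_cens).astype(int)      # 1 = failure observed

# MLE for the exponential rate under right censoring:
#   lambda_hat = (number of events) / (total time at risk)
lam_hat = event.sum() / time.sum()
```

Censored subjects contribute exposure time to the denominator but no event to the numerator, which is exactly why naively dropping them biases the estimate.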

4.
In this paper, we propose Bayes estimators of the parameter and reliability function of the inverted exponential distribution under the general entropy loss function for complete samples and for type-I and type-II censored samples. The proposed estimators have been compared with the corresponding maximum-likelihood estimators for their simulated risks (average loss over the sample space).

5.
Many late-onset diseases are caused by what appears to be a combination of a genetic predisposition to disease and environmental factors. The use of existing cohort studies provides an opportunity to infer genetic predisposition to disease on a representative sample of a study population, now that many such studies are gathering genetic information on the participants. One complication of using existing cohorts is that subjects may be censored due to death prior to genetic sampling, thereby adding a layer of complexity to the analysis. We develop a statistical framework to infer parameters of a latent variables model for disease onset. The latent variables model describes the role of genetic and modifiable risk factors in the onset ages of multiple diseases, and accounts for right-censoring of disease onset ages. The framework also allows for missing genetic information by inferring a subject's unknown genotype through appropriately incorporated covariate information. The model is applied to data gathered in the Framingham Heart Study to measure the effect of different Apo-E genotypes on the occurrence of various cardiovascular disease events.

6.
Existing statutes in the United States and Europe require manufacturers to demonstrate evidence of effectiveness through the conduct of adequate and well-controlled studies to obtain marketing approval of a therapeutic product. What constitutes adequate and well-controlled studies is usually interpreted as randomized controlled trials (RCTs). However, these trials are sometimes unfeasible because of their size, duration, cost, patient preference, or, in some cases, ethical concerns. For example, RCTs may not be fully powered in rare diseases or in infections caused by multidrug-resistant pathogens because of the low number of enrollable patients. In this case, data available from external controls (including historical controls and observational studies or data registries) can complement the information provided by the RCT. Propensity score matching methods can be used to select or “borrow” additional patients from the external controls, maintaining a one-to-one ratio between the treatment arm and the active control, by matching the new treatment and control units on a set of measured covariates, i.e., model-based pairing of treatment and control units that are similar in terms of their observable pretreatment characteristics. To this end, two matching schemes based on propensity scores are explored and applied to a real clinical data example with the objective of using historical or external observations to augment data in a trial where the randomization is disproportionate or asymmetric.
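One simple way such "borrowing" of external controls can work is greedy one-to-one nearest-neighbour matching on an estimated propensity score. The sample sizes, the single covariate, and the greedy matching rule below are illustrative assumptions, not the two schemes actually explored in the paper:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)

# 40 trial-treatment patients vs. 200 external-control candidates,
# with one measured covariate (e.g. baseline severity).
x_trt = rng.normal(0.3, 1.0, 40)
x_ext = rng.normal(0.0, 1.0, 200)
x = np.concatenate([x_trt, x_ext]).reshape(-1, 1)
z = np.concatenate([np.ones(40), np.zeros(200)])  # 1 = trial treatment

# Propensity score: estimated P(treatment | covariates).
ps = LogisticRegression().fit(x, z).predict_proba(x)[:, 1]

# Greedy 1:1 nearest-neighbour matching on the propensity score,
# without replacement: each treated unit borrows its closest control.
treated_idx = np.where(z == 1)[0]
control_pool = list(np.where(z == 0)[0])
pairs = []
for i in treated_idx:
    j = min(control_pool, key=lambda k: abs(ps[k] - ps[i]))
    pairs.append((i, j))
    control_pool.remove(j)          # no reuse of a matched control
```

In practice one would also impose a caliper (a maximum allowed score distance) and check covariate balance after matching; both refinements are omitted here for brevity.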

7.
The authors consider the estimation of regression parameters in the context of a class of generalized proportional hazards models, termed linear transformation models, in the presence of interval-censored data. They present an estimating equation approach, demonstrate its good performance through simulations, and illustrate it in a few concrete cases.

8.
In this paper, we propose a semiparametric shock model for two dependent failure times in which the risk indicator of one failure time plays the part of a time-varying covariate for the other. In the terminology of Hougaard (2000, Analysis of Multivariate Survival Data, Springer, New York), the dependence between the two failure times is therefore of the event-related type.

9.
In the context of clinical trials, there is interest in the treatment effect for subpopulations of patients defined by intercurrent events, namely disease-related events occurring after treatment initiation that affect either the interpretation or the existence of endpoints. With the principal stratum strategy, the ICH E9(R1) guideline introduces a formal framework in drug development for defining treatment effects in such subpopulations. Statistical estimation of the treatment effect can be performed based on the principal ignorability assumption using multiple imputation approaches. Principal ignorability is a conditional independence assumption that cannot be directly verified; therefore, it is crucial to evaluate the robustness of results to deviations from this assumption. As a sensitivity analysis, we propose a joint model that multiply imputes the principal stratum membership and the outcome variable while allowing different levels of violation of the principal ignorability assumption. We illustrate with a simulation study that the joint imputation model-based approaches are superior to naive subpopulation analyses. Motivated by an oncology clinical trial, we implement the sensitivity analysis on a time-to-event outcome to assess the treatment effect in the subpopulation of patients who discontinued due to adverse events using a synthetic dataset. Finally, we explore the potential usage and provide interpretation of such analyses in clinical settings, as well as possible extension of such models in more general cases.

10.
Consider a randomized trial in which time to the occurrence of a particular disease, say Pneumocystis pneumonia in an AIDS trial or breast cancer in a mammographic screening trial, is the failure time of primary interest. Suppose that time to disease is subject to informative censoring by the minimum of time to death, loss to follow-up, and end of follow-up. In such a trial, the censoring time is observed for all study subjects, including failures. In the presence of informative censoring, it is not possible to consistently estimate the effect of treatment on time to disease without imposing additional non-identifiable assumptions. The goals of this paper are to specify two non-identifiable assumptions that allow one to test for and estimate an effect of treatment on time to disease in the presence of informative censoring. In a companion paper (Robins, 1995), we provide consistent and reasonably efficient semiparametric estimators for the treatment effect under these assumptions. In this paper we largely restrict attention to testing. We propose tests that, like standard weighted log-rank tests, are asymptotically distribution-free α-level tests under the null hypothesis of no causal effect of treatment on time to disease whenever the censoring and failure distributions are conditionally independent given treatment arm. However, our tests remain asymptotically distribution-free α-level tests in the presence of informative censoring provided either of our assumptions is true. In contrast, a weighted log-rank test will be an α-level test in the presence of informative censoring only if (1) one of our two non-identifiable assumptions holds, and (2) the distribution of time to censoring is the same in the two treatment arms. We also extend our methods to studies of the effect of a treatment on the evolution over time of the mean of a repeated-measures outcome, such as CD4 count.
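The building block that the tests above generalize is the plain two-sample log-rank statistic, which can be computed directly. This sketch assumes conditionally independent censoring given treatment arm, i.e. exactly the setting in which the standard test is valid; the simulated rates are arbitrary:

```python
import numpy as np
from scipy.stats import chi2


def logrank(time, event, group):
    """Two-sample log-rank statistic and p-value (group coded 0/1)."""
    time, event, group = map(np.asarray, (time, event, group))
    o_minus_e, var = 0.0, 0.0
    for t in np.unique(time[event == 1]):          # distinct event times
        at_risk = time >= t
        n = at_risk.sum()
        n1 = (at_risk & (group == 1)).sum()        # at risk in group 1
        d = ((time == t) & (event == 1)).sum()     # events at t, both groups
        d1 = ((time == t) & (event == 1) & (group == 1)).sum()
        o_minus_e += d1 - d * n1 / n               # observed - expected
        if n > 1:                                  # hypergeometric variance
            var += d * (n1 / n) * (1 - n1 / n) * (n - d) / (n - 1)
    stat = o_minus_e**2 / var                      # ~ chi-square, 1 df
    return stat, chi2.sf(stat, df=1)


rng = np.random.default_rng(3)
t0 = rng.exponential(1.0, 150)                     # arm 0 failure times
t1 = rng.exponential(1.5, 150)                     # arm 1 lives longer
c = rng.exponential(3.0, 300)                      # independent censoring
t = np.concatenate([t0, t1])
g = np.repeat([0, 1], 150)
obs, ev = np.minimum(t, c), (t <= c).astype(int)
stat, p = logrank(obs, ev, g)
```

Under informative censoring this statistic is no longer guaranteed to be an α-level test, which is precisely the gap the abstract's methods address.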

11.
Cluster-randomized trials are often conducted to assess vaccine effects. Defining estimands of interest before conducting a trial is integral to the alignment between a study's objectives and the data to be collected and analyzed. This paper considers estimands and estimators for overall, indirect, and total vaccine effects in trials where clusters of individuals are randomized to vaccine or control. The scenario is considered where individuals self-select whether to participate in the trial, and the outcome of interest is measured on all individuals in each cluster. Unlike the overall, indirect, and total effects, the direct effect of vaccination is shown in general not to be estimable without further assumptions, such as no unmeasured confounding. An illustrative example motivated by a cluster-randomized typhoid vaccine trial is provided.

12.
We consider approximate Bayesian inference about scalar parameters of linear regression models with possible censoring. A second-order expansion of their Laplace posterior is seen to have a simple and intuitive form for log-concave error densities with nondecreasing hazard functions. The accuracy of the approximations is assessed for normal and Gumbel errors when the number of regressors increases with sample size. Perturbations of the prior and the likelihood are seen to be easily accommodated within our framework. Links with the work of DiCiccio et al. (1990) and Viveros and Sprott (1987) extend the applicability of our results to conditional frequentist inference based on likelihood-ratio statistics.

13.
The most widely used model for multidimensional survival analysis is the Cox model. This model is semi-parametric, since its hazard function is the product of an unspecified baseline hazard and a parametric functional form relating the hazard and the covariates. We consider a more flexible and fully nonparametric proportional hazards model, where the functional form of the covariate effect is left unspecified. In this model, estimation is based on the maximum likelihood method. Results obtained from a Monte Carlo experiment and from real data are presented. Finally, the advantages and the limitations of the approach are discussed.

14.
It is not uncommon to encounter a randomized clinical trial (RCT) in which, because of the nature of the treatment design for a disease, each patient is treated with several courses of therapy and his/her response is recorded after each course. On the basis of a simple multiplicative risk model proposed elsewhere for repeated binary measurements, we derive the maximum likelihood estimator (MLE) for the proportion ratio (PR) of responses between two treatments in closed form, without the need to model the complicated relationship between patient compliance and patient response. We further derive the asymptotic variance of the MLE and propose an asymptotic interval estimator for the PR using the logarithmic transformation. We also consider two other asymptotic interval estimators: one derived from the principle of Fieller's theorem and the other derived using the randomization-based approach suggested elsewhere. To evaluate and compare the finite-sample performance of these interval estimators, we use Monte Carlo simulation. We find that the interval estimator using the logarithmic transformation of the MLE consistently outperforms the other two estimators with respect to efficiency. This gain in efficiency can be substantial, especially when there are patients not complying with their assigned treatments. Finally, we employ data from a trial using macrophage colony stimulating factor (M-CSF) over three courses of intensive chemotherapy to reduce the incidence of febrile neutropenia in acute myeloid leukemia patients to illustrate the use of these estimators.
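For a single pair of independent binomial proportions, the logarithmic-transformation interval for a proportion ratio has a simple delta-method form. This is a simplified illustration of the log-scale construction, not the repeated-measures MLE derived in the paper, and the counts below are invented:

```python
import numpy as np
from scipy.stats import norm


def pr_ci_log(x1, n1, x2, n2, level=0.95):
    """Wald CI for the proportion ratio p1/p2 built on the log scale.

    Delta method: Var(log p_hat) ~= (1 - p) / (n * p) for a binomial
    proportion, and the variances add for independent samples.
    """
    p1, p2 = x1 / n1, x2 / n2
    pr = p1 / p2
    se_log = np.sqrt((1 - p1) / (n1 * p1) + (1 - p2) / (n2 * p2))
    z = norm.ppf(0.5 + level / 2)
    return pr, pr * np.exp(-z * se_log), pr * np.exp(z * se_log)


# Hypothetical counts: 45/100 responders vs 30/100 responders.
pr, lo, hi = pr_ci_log(45, 100, 30, 100)
```

Exponentiating a symmetric interval for log(PR) keeps the lower limit positive and typically gives better small-sample coverage than a Wald interval built directly on the ratio scale.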

15.
The need to use rigorous, transparent, clearly interpretable, and scientifically justified methodology for preventing and dealing with missing data in clinical trials has been a focus of much attention from regulators, practitioners, and academicians in recent years. New guidelines and recommendations emphasize the importance of minimizing the amount of missing data and carefully selecting primary analysis methods on the basis of assumptions regarding the missingness mechanism suitable for the study at hand, as well as the need to stress-test the results of the primary analysis under different sets of assumptions through a range of sensitivity analyses. Some methods that could be effectively used for dealing with missing data have not yet gained widespread usage, partly because of their underlying complexity and partly because of the lack of relatively easy approaches to their implementation. In this paper, we explore several strategies for missing data based on pattern mixture models that embody clear and realistic clinical assumptions. Pattern mixture models provide a statistically reasonable yet transparent framework for translating clinical assumptions into statistical analyses. Implementation details for some specific strategies are provided in an Appendix (available online as Supporting Information), whereas the general principles of the approach discussed in this paper can be used to implement various other analyses with different sets of assumptions regarding missing data. Copyright © 2013 John Wiley & Sons, Ltd.

16.
The cure fraction (the proportion of patients who are cured of disease) is of interest to both patients and clinicians and is a useful measure for monitoring trends in survival of curable disease. The paper extends the non-mixture and mixture cure fraction models to estimate the proportion cured of disease in population-based cancer studies by incorporating a finite mixture of two Weibull distributions to provide more flexibility in the shape of the estimated relative survival or excess mortality functions. The methods are illustrated using public-use data from England and Wales on survival following diagnosis of cancer of the colon, where interest lies in differences between age and deprivation groups. We show that the finite mixture approach leads to improved model fit and estimates of the cure fraction that are closer to the empirical estimates. This is particularly so in the oldest age group, where the cure fraction is notably lower. The cure fraction is broadly similar in each deprivation group, but the median survival of the 'uncured' is lower in the more deprived groups. The finite mixture approach overcomes some of the limitations of the more simplistic cure models and has the potential to model the complex excess hazard functions that are seen in real data.
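The survival function of such a model is easy to write down: a point mass of cure plus a two-component Weibull mixture for the uncured. All parameter values below are hypothetical, chosen only to show the plateau at the cure fraction:

```python
import numpy as np


def cure_survival(t, pi, p, lam1, k1, lam2, k2):
    """Mixture cure model: S(t) = pi + (1 - pi) * Su(t), where the
    survival of the 'uncured', Su, is itself a two-component Weibull
    mixture (mixing weight p, scales lam, shapes k)."""
    su = (p * np.exp(-(t / lam1) ** k1)
          + (1 - p) * np.exp(-(t / lam2) ** k2))
    return pi + (1 - pi) * su


# Hypothetical parameters: 45% cured; uncured survival mixes a
# fast-failing and a slower-failing Weibull component.
t = np.linspace(0, 30, 301)
s = cure_survival(t, pi=0.45, p=0.6, lam1=1.0, k1=0.9, lam2=6.0, k2=1.3)
# As t grows, s(t) plateaus at the cure fraction pi.
```

The defining feature visible in the curve is the asymptote: unlike an ordinary survival function, S(t) tends to pi rather than to zero, which is what makes the cure fraction identifiable from long follow-up.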

17.
The Fisher information matrix is used to quantify information loss in the randomly right-censored model. A real-valued approach, as an alternative to the matrix approach of Turrero (1988), is presented for obtaining scalar measures of the relative efficiency of the censored experiment. Properties of the proposed measures are examined. The connection between both approaches and the Bayesian approach to this problem is also studied. Results in the paper are exemplified by considering grouped survival data.

18.
This paper discusses a general strategy for reducing measurement-error-induced bias in statistical models. It is assumed that the measurement error is unbiased with a known variance, although no other distributional assumptions on the measurement error are employed.

Using a preliminary fit of the model to the observed data, a transformation of the variable measured with error is estimated. The transformation is constructed so that the estimates obtained by refitting the model to the ‘corrected’ data have smaller bias.

Whereas the general strategy can be applied in a number of settings, this paper focuses on the problem of covariate measurement error in generalized linear models. Two estimators are derived and their effectiveness at reducing bias is demonstrated in a Monte Carlo study.

19.
The issue of modelling the joint distribution of survival time and of prognostic variables measured periodically has recently become of interest in the AIDS literature but is of relevance in other applications. The focus of this paper is on clinical trials where follow-up measurements of potentially prognostic variables are often collected but not routinely used. These measurements can be used to study the biological evolution of the disease of interest; in particular, the effect of an active treatment can be examined by comparing the time profiles of patients in the active and placebo groups. It is proposed to use multilevel regression analysis to model the individual repeated observations as a function of time and, possibly, treatment. To address the problem of informative drop-out—which may arise if deaths (or any other censoring events) are related to the unobserved values of the prognostic variables—we analyse sequentially overlapping portions of the follow-up information. An example arising from a randomized clinical trial for the treatment of primary biliary cirrhosis is examined in detail.

20.
Random events such as a production machine breakdown in a manufacturing plant, an equipment failure within a transportation system, a security failure of an information system, or any number of different problems may cause supply chain disruption. Although several researchers have focused on supply chain disruptions and have discussed the measures that companies should use to design better supply chains, or have studied the different ways that could help firms mitigate the consequences of a supply chain disruption, there is a clear lack of appropriate methods for predicting the time to disruptive events. Based on this need, this paper introduces statistical flowgraph models (SFGMs) for survival analysis in supply chains. SFGMs provide an innovative approach to analyzing time-to-event data, which focuses on modeling waiting times until events of interest occur. SFGMs are useful for reducing multistate models to an equivalent binary-state model. Analysis from an SFGM gives an entire waiting-time distribution as well as the system reliability (survivor) and hazard functions for any total or partial waiting time. The end results from an SFGM help to identify the supply chain's strengths and, more importantly, its weaknesses. The results are therefore valuable decision support for supply chain managers seeking to predict supply chain behavior. Examples presented in this paper clearly demonstrate the applicability of SFGMs to survival analysis in supply chains.
