Similar Articles (20 results)
1.
Borrowing data from external controls has been an appealing strategy for evidence synthesis when conducting randomized controlled trials (RCTs). Often named hybrid control trials, they leverage existing control data from clinical trials or potentially real-world data (RWD), enable trial designs to allocate more patients to the novel intervention arm, and improve the efficiency or lower the cost of the primary RCT. Several methods have been established and developed to borrow external control data, among which propensity score methods and the Bayesian dynamic borrowing framework play essential roles. Noting the unique strengths of propensity score methods and Bayesian hierarchical models, we utilize both in a complementary manner to analyze hybrid control studies. In this article, we review methods including covariate adjustment, propensity score matching, and weighting in combination with dynamic borrowing, and compare the performance of these methods through comprehensive simulations. Different degrees of covariate imbalance and confounding are examined. Our findings suggest that conventional covariate adjustment combined with the Bayesian commensurate prior model provides the highest power with good type I error control under the investigated settings, and its performance remains desirable across different degrees of confounding. To estimate efficacy signals in the exploratory setting, the covariate adjustment method in combination with the Bayesian commensurate prior is recommended.

2.
Covariate adjustment for the estimation of treatment effects in randomized controlled trials (RCTs) is a simple approach with a long history, and hence its pros and cons have been well investigated and published in the literature. It is worthwhile to revisit this topic, since there has recently been significant investigation and development of model assumptions and robustness to model mis-specification, in particular regarding the Neyman-Rubin model and the average treatment effect estimand. This paper discusses key results of this work and their practical implications for pharmaceutical statistics. Accordingly, we recommend that appropriate covariate adjustment be more widely used in RCTs for both hypothesis testing and estimation.
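The adjustment the abstract describes can be illustrated with a minimal ANCOVA sketch on simulated data (the covariate, the true effect of 1.0, and the sample size are illustrative assumptions, not taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
x = rng.normal(size=n)                       # prognostic baseline covariate
t = rng.integers(0, 2, size=n)               # randomized treatment indicator
y = 1.0 * t + 2.0 * x + rng.normal(size=n)   # outcome; true treatment effect = 1.0

def ancova_effect(y, t, x):
    """OLS of outcome on treatment and baseline covariate; returns (estimate, SE)."""
    X = np.column_stack([np.ones_like(y), t, x])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    sigma2 = resid @ resid / (len(y) - X.shape[1])
    se = np.sqrt(sigma2 * np.linalg.inv(X.T @ X)[1, 1])
    return beta[1], se

unadj = y[t == 1].mean() - y[t == 0].mean()  # simple difference in means
adj, se = ancova_effect(y, t, x)
```

Because the covariate is strongly prognostic here, the adjusted estimate has a markedly smaller standard error than the unadjusted difference in means, which is the precision gain the abstract refers to.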

3.
Regulatory agencies typically evaluate the efficacy and safety of new interventions and grant commercial approval based on randomized controlled trials (RCTs). Other major healthcare stakeholders, such as insurance companies and health technology assessment agencies, while basing initial access and reimbursement decisions on RCT results, are also keenly interested in whether results observed in idealized trial settings will translate into comparable outcomes in real world settings—that is, into so-called "real world" effectiveness. Unfortunately, evidence of real world effectiveness for new interventions is not available at the time of initial approval. To bridge this gap, statistical methods are available to extend the estimated treatment effect observed in an RCT to a target population. The generalization is done by weighting the subjects who participated in the RCT so that the weighted trial population resembles the target population. We evaluate a variety of alternative estimation and weight construction procedures using both simulations and a real-world data example based on two clinical trials of an investigational intervention for Alzheimer's disease. Our results suggest that the optimal approach to estimation depends on the characteristics of the source and target populations, including the degree of selection bias and treatment effect heterogeneity.
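One common weight construction, sketched below under assumed data, fits a logistic model for trial membership and reweights trial subjects by the resulting odds so their covariate distribution matches the target; all distributions and sample sizes are hypothetical, and the IRLS fitting routine is a stand-in for any logistic regression implementation:

```python
import numpy as np

rng = np.random.default_rng(1)

def fit_logistic(X, s, iters=25):
    """Logistic regression by iteratively reweighted least squares."""
    beta = np.zeros(X.shape[1])
    for _ in range(iters):
        p = 1 / (1 + np.exp(-X @ beta))
        W = np.clip(p * (1 - p), 1e-8, None)
        z = X @ beta + (s - p) / W
        beta = np.linalg.solve((X * W[:, None]).T @ X, (X * W[:, None]).T @ z)
    return beta

# trial sample and target population differing in covariate distribution
n_target, n_trial = 1000, 500
x_target = rng.normal(1.0, 1.0, n_target)
x_trial = rng.normal(0.0, 1.0, n_trial)

# stack both samples; s = 1 marks trial membership
x = np.concatenate([x_trial, x_target])
s = np.concatenate([np.ones(n_trial), np.zeros(n_target)])
X = np.column_stack([np.ones_like(x), x])
p = 1 / (1 + np.exp(-X @ fit_logistic(X, s)))

w = (1 - p[:n_trial]) / p[:n_trial]   # odds weights for trial subjects

# the weighted trial covariate mean should move toward the target mean (1.0)
weighted_mean = np.sum(w * x_trial) / np.sum(w)
```

After weighting, the trial's covariate mean shifts from near 0 toward the target's mean of 1.0, so a treatment effect estimated with these weights is referred to the target population.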

4.
In a randomized controlled trial (RCT), it is possible to improve precision and power and reduce the sample size by appropriately adjusting for baseline covariates. There are multiple statistical methods for adjusting for prognostic baseline covariates, such as ANCOVA. In this paper, we propose a clustering-based stratification method for adjusting for prognostic baseline covariates. Clusters (strata) are formed based only on prognostic baseline covariates, not outcome data or treatment assignment; the clustering procedure can therefore be completed before outcome data are available. The treatment effect is estimated in each cluster, and the overall treatment effect is derived by combining all cluster-specific treatment effect estimates. The proposed implementation of the procedure is described, and simulation studies and an example are presented.
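A minimal sketch of the stratify-then-combine idea, using k-means as one possible clustering procedure (the paper's specific clustering algorithm, covariates, and effect size are not specified here; everything below is an illustrative assumption):

```python
import numpy as np

rng = np.random.default_rng(2)
n, k = 300, 3
x = rng.normal(size=(n, 2))                  # prognostic baseline covariates only
t = rng.integers(0, 2, size=n)               # treatment assignment (not used in clustering)
y = 1.0 * t + x[:, 0] + rng.normal(size=n)   # outcome; true effect = 1.0

def kmeans(x, k, iters=50, seed=0):
    """Plain k-means on baseline covariates (no outcomes, no treatment labels)."""
    r = np.random.default_rng(seed)
    centers = x[r.choice(len(x), k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(((x[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        centers = np.array([x[labels == j].mean(0) if np.any(labels == j)
                            else centers[j] for j in range(k)])
    return labels

labels = kmeans(x, k)

# estimate the effect within each cluster, then combine weighted by cluster size
effects, weights = [], []
for j in range(k):
    m = labels == j
    if t[m].min() == t[m].max():
        continue                              # skip clusters lacking both arms
    effects.append(y[m & (t == 1)].mean() - y[m & (t == 0)].mean())
    weights.append(m.sum())
overall = np.average(effects, weights=weights)
```

Because clustering uses only baseline covariates, it can indeed be fixed before unblinding, which is the operational advantage the abstract highlights.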

5.

Convergence problems often arise when complex linear mixed-effects models are fitted. Previous simulation studies (see, e.g. [Buyse M, Molenberghs G, Burzykowski T, Renard D, Geys H. The validation of surrogate endpoints in meta-analyses of randomized experiments. Biostatistics. 2000;1:49–67, Renard D, Geys H, Molenberghs G, Burzykowski T, Buyse M. Validation of surrogate endpoints in multiple randomized clinical trials with discrete outcomes. Biom J. 2002;44:921–935]) have shown that model convergence rates were higher (i) when the number of available clusters in the data increased, and (ii) when the size of the between-cluster variability increased (relative to the size of the residual variability). The aim of the present simulation study is to further extend these findings by examining the effect of an additional factor that is hypothesized to affect model convergence, i.e. imbalance in cluster size. The results showed that divergence rates were substantially higher for data sets with unbalanced cluster sizes – in particular when the model at hand had a complex hierarchical structure. Furthermore, the use of multiple imputation to restore ‘balance’ in unbalanced data sets reduces model convergence problems.

6.
Because of its simplicity, the Q statistic is frequently used to test the heterogeneity of the estimated intervention effect in meta-analyses of individually randomized trials. However, it is inappropriate to apply it directly to meta-analyses of cluster randomized trials without taking clustering effects into account. We consider the properties of the adjusted Q statistic for testing heterogeneity in meta-analyses of cluster randomized trials with binary outcomes. We also derive an analytic expression for the power of this statistic to detect heterogeneity, which can be useful when planning a meta-analysis. A simulation study is used to assess the performance of the adjusted Q statistic in terms of its Type I error rate and power, and the simulation results are compared to those obtained from the proposed formula. The adjusted Q statistic is found to have a Type I error rate close to the nominal level of 5%, whereas the unadjusted Q statistic commonly used for individually randomized trials has an inflated Type I error rate. Data from a meta-analysis of four cluster randomized trials are used to illustrate the procedures.
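The adjustment can be sketched by inflating each trial's variance by a design effect before computing Q; the summary data, average cluster sizes, and ICC below are hypothetical, and the design-effect form 1 + (m − 1)ρ is one standard choice rather than necessarily the exact adjustment derived in the paper:

```python
import numpy as np
from scipy.stats import chi2

# hypothetical summary data for 4 cluster randomized trials:
# log odds ratios, naive (individual-randomization) variances,
# average cluster sizes m, and an assumed intracluster correlation rho
log_or = np.array([0.20, 0.35, 0.10, 0.25])
var_naive = np.array([0.02, 0.03, 0.025, 0.04])
m = np.array([20, 30, 25, 15])
rho = 0.05

deff = 1 + (m - 1) * rho         # design effect inflating each trial's variance
var_adj = var_naive * deff

def q_statistic(theta, var):
    """Cochran's Q: inverse-variance weighted dispersion around the pooled mean."""
    w = 1 / var
    theta_bar = np.sum(w * theta) / np.sum(w)
    return np.sum(w * (theta - theta_bar) ** 2)

q_adj = q_statistic(log_or, var_adj)
p_value = chi2.sf(q_adj, df=len(log_or) - 1)
```

Inflating the variances shrinks Q relative to the naive statistic, which is how the adjustment removes the Type I error inflation the abstract describes.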

7.
Health technology assessment often requires the evaluation of interventions which are implemented at the level of the health service organization unit (e.g. GP practice) for clusters of individuals. In a cluster randomized controlled trial (cRCT), clusters of patients are randomized, rather than each patient individually.

The majority of statistical analyses in individually randomized trials assume that the outcomes of different patients are independent. In cRCTs this assumption is doubtful, as the outcomes of patients in the same cluster may be correlated. Hence, the analysis of data from cRCTs presents a number of difficulties. The aim of this paper is to describe statistical methods for adjusting for clustering in the context of cRCTs.

There are essentially four approaches to analysing cRCTs:

1. Cluster-level analysis using aggregate summary data.
2. Regression analysis with robust standard errors.
3. Random-effects/cluster-specific approach.
4. Marginal/population-averaged approach.

This paper will compare and contrast the four approaches, using example data with binary and continuous outcomes from a cRCT designed to evaluate the effectiveness of training Health Visitors in psychological approaches to identify post-natal depressive symptoms and support post-natal women, compared with usual care. The PoNDER Trial randomized 101 clusters (GP practices) and collected data on 2659 new mothers with an 18-month follow-up.
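The first of the four approaches, cluster-level analysis of aggregate summaries, is the simplest to sketch: compute one summary per cluster and compare arms with an ordinary two-sample test. The data below are simulated (cluster counts, effect size, and variance components are illustrative assumptions, not the PoNDER data):

```python
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(3)
n_clusters, m = 20, 30                        # clusters per arm, patients per cluster
cluster_effect = rng.normal(0, 0.5, size=(2, n_clusters))  # between-cluster variation

def cluster_means(arm, delta):
    # each cluster mean = treatment effect + cluster random effect + sampling noise
    return delta + cluster_effect[arm] + rng.normal(0, 1 / np.sqrt(m), n_clusters)

control = cluster_means(0, 0.0)
treated = cluster_means(1, 0.6)

# approach 1: two-sample t-test on the cluster-level summaries, which makes
# the cluster (the unit of randomization) the unit of analysis
t_stat, p = ttest_ind(treated, control)
```

Because the test operates on one number per cluster, intracluster correlation cannot inflate its Type I error, at the cost of ignoring within-cluster information.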

8.
Recent ‘marginal’ methods for the regression analysis of multivariate failure time data have mostly assumed Cox (1972) model hazard functions in which the members of the cluster have distinct baseline hazard functions. In some important applications, including sibling family studies in genetic epidemiology and group randomized intervention trials, a common baseline hazard assumption is more natural. Here we consider a weighted partial likelihood score equation for the estimation of regression parameters under a common baseline hazard model, and provide corresponding asymptotic distribution theory. An extensive series of simulation studies is used to examine the adequacy of the asymptotic distributional approximations, and especially the efficiency gain due to weighting, as a function of the strength of dependency within clusters and the cluster size. This revised version was published online in July 2006 with corrections to the Cover Date.

9.
This paper deals with the analysis of randomization effects in multi‐centre clinical trials. The two randomization schemes most often used in clinical trials are considered: unstratified and centre‐stratified block‐permuted randomization. The prediction of the number of patients randomized to different treatment arms in different regions during the recruitment period accounting for the stochastic nature of the recruitment and effects of multiple centres is investigated. A new analytic approach using a Poisson‐gamma patient recruitment model (patients arrive at different centres according to Poisson processes with rates sampled from a gamma distributed population) and its further extensions is proposed. Closed‐form expressions for corresponding distributions of the predicted number of the patients randomized in different regions are derived. In the case of two treatments, the properties of the total imbalance in the number of patients on treatment arms caused by using centre‐stratified randomization are investigated and for a large number of centres a normal approximation of imbalance is proved. The impact of imbalance on the power of the study is considered. It is shown that the loss of statistical power is practically negligible and can be compensated by a minor increase in sample size. The influence of patient dropout is also investigated. The impact of randomization on predicted drug supply overage is discussed. Copyright © 2010 John Wiley & Sons, Ltd.
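The Poisson-gamma recruitment model can be sketched in a few lines: each centre's rate is drawn from a gamma population, arrivals are Poisson given the rate, and the marginal count per centre is then negative binomial. The number of centres, recruitment window, and gamma parameters below are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(4)
n_centres, t_recruit = 50, 12.0   # centres and recruitment window (e.g. months)
alpha, beta = 2.0, 1.0            # gamma population for centre-specific rates

# centre rates lambda_i ~ Gamma(alpha, rate=beta); arrivals ~ Poisson(lambda_i * T)
rates = rng.gamma(alpha, 1 / beta, n_centres)
counts = rng.poisson(rates * t_recruit)

# marginally, each centre's count is negative binomial with
# mean alpha*T/beta and variance (alpha*T/beta) * (1 + T/beta) — overdispersed
mean_theory = alpha * t_recruit / beta
var_theory = mean_theory * (1 + t_recruit / beta)

total_predicted = counts.sum()    # predicted recruitment across all centres
```

The overdispersion relative to a plain Poisson model (variance well above the mean) is what makes the gamma mixing important for realistic recruitment prediction.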

10.
The randomized cluster design is typical in studies where the unit of randomization is a cluster of individuals rather than the individual. Evaluating various intervention strategies across medical care providers at either an institutional level or at a physician group practice level fits the randomized cluster model. Clearly, the analytical approach to such studies must take the unit of randomization and accompanying intraclass correlation into consideration. We review alternative methods to the typical Pearson's chi-square analysis and illustrate these alternatives. We have written and tested a Fortran program that produces the statistics outlined in this paper. The program, in an executable format, is available from the author on request.

11.
This article proposes and evaluates two new methods of reweighting preliminary data to obtain estimates more closely approximating those derived from the final data set. In our motivating example, the preliminary data are an early sample of tax returns, and the final data set is the sample after all tax returns have been processed. The new methods estimate a predicted propensity for late filing for each return in the advance sample and then poststratify based on these propensity scores. Using advance and complete sample data for 1982, we demonstrate that the new methods produce advance estimates generally much closer to the final estimates than those derived from the current advance estimation techniques. The results demonstrate the value of propensity modeling, a general-purpose methodology that can be applied to a wide range of problems, including adjustment for unit nonresponse and frame undercoverage as well as statistical matching.

12.
A. Galbete & J. A. Moler, Statistics, 2016, 50(2): 418-434
In a randomized clinical trial, response-adaptive randomization procedures use the information gathered, including the previous patients' responses, to allocate the next patient. In this setting, we consider randomization-based inference. We provide an algorithm to obtain exact p-values for statistical tests that compare two treatments with dichotomous responses. This algorithm can be applied to a family of response-adaptive randomization procedures which share the following property: the distribution of the allocation rule depends only on the imbalance between treatments and on the imbalance between successes for treatments 1 and 2 in the previous step. This family includes several prominent response-adaptive randomization procedures. We study a randomization test for the null hypothesis of equivalence of treatments and show that it performs similarly to its parametric counterpart. We also study the effect of a covariate on the inferential process. First, we obtain a parametric test, constructed assuming a logit model that relates responses to treatments and covariate levels, and give conditions that guarantee its asymptotic normality. Finally, we show that the randomization test, which is free of model specification, performs as well as the parametric test that takes the covariate into account.
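The paper computes exact p-values over the allocation-rule state space; a simpler Monte Carlo randomization test under complete randomization conveys the same logic and is sketched below (the outcome data are invented, and this re-randomization scheme is a simplification of the response-adaptive case the paper treats):

```python
import numpy as np

rng = np.random.default_rng(5)

# dichotomous outcomes for 20 patients; under the sharp null of treatment
# equivalence, outcomes are unchanged by relabeling the treatment assignments
y = np.array([1, 1, 1, 0, 1, 1, 0, 1, 0, 1,   # first 10: treatment 1
              0, 0, 1, 0, 1, 0, 0, 1, 0, 0])  # last 10: treatment 2
t = np.array([1] * 10 + [0] * 10)

def diff_in_rates(y, t):
    """Test statistic: difference in success proportions between arms."""
    return y[t == 1].mean() - y[t == 0].mean()

obs = diff_in_rates(y, t)

# reference distribution: the statistic recomputed under random relabelings
null = [diff_in_rates(y, rng.permutation(t)) for _ in range(10_000)]
p = np.mean(np.abs(null) >= abs(obs) - 1e-12)  # two-sided Monte Carlo p-value
```

With exhaustive enumeration instead of sampling, and with relabelings drawn from the actual adaptive allocation rule, this becomes the exact test the abstract describes.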

13.
The odds ratio (OR) has been recommended elsewhere to measure the relative treatment efficacy in a randomized clinical trial (RCT), because it possesses a few desirable statistical properties. In practice, it is not uncommon to come across an RCT in which there are patients who do not comply with their assigned treatments and patients whose outcomes are missing. Under the compound exclusion restriction, latent ignorable and monotonicity assumptions, we derive the maximum likelihood estimator (MLE) of the OR and apply Monte Carlo simulation to compare its performance with those of the other two commonly used estimators for missing completely at random (MCAR) and for the intention-to-treat (ITT) analysis based on patients with known outcomes, respectively. We note that both estimators for MCAR and the ITT analysis may produce a misleading inference of the OR even when the relative treatment effect is equal. We further derive three asymptotic interval estimators for the OR, including the interval estimator using Wald’s statistic, the interval estimator using the logarithmic transformation, and the interval estimator using an ad hoc procedure of combining the above two interval estimators. On the basis of a Monte Carlo simulation, we evaluate the finite-sample performance of these interval estimators in a variety of situations. Finally, we use the data taken from a randomized encouragement design studying the effect of flu shots on the flu-related hospitalization rate to illustrate the use of the MLE and the asymptotic interval estimators for the OR developed here.
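For orientation, the logarithmic-transformation interval the abstract mentions reduces, in the simple complete-data case with full compliance, to a Wald interval on the log-OR scale; the 2×2 counts below are hypothetical, and this sketch deliberately ignores the noncompliance and missingness the paper actually handles:

```python
import numpy as np

# hypothetical complete-data 2x2 table: (events, non-events) per arm
a, b = 40, 60   # treatment arm
c, d = 25, 75   # control arm

or_hat = (a * d) / (b * c)                    # sample odds ratio
se_log = np.sqrt(1/a + 1/b + 1/c + 1/d)       # SE of log(OR) (delta method)

# 95% interval on the log scale, then exponentiated back
lo = np.exp(np.log(or_hat) - 1.96 * se_log)
hi = np.exp(np.log(or_hat) + 1.96 * se_log)
```

Working on the log scale keeps the interval inside (0, ∞) and makes it asymmetric around the OR, which is why the log-transformed interval tends to outperform a direct Wald interval on the OR itself.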

14.
It is common practice to use a hierarchical Bayesian model to inform a pediatric randomized controlled trial (RCT) with adult data, using a prespecified borrowing fraction parameter (BFP). This implicitly assumes that the BFP is intuitive and corresponds to the degree of similarity between the populations. Generalizing this model to any K ≥ 1 historical studies naturally leads to empirical Bayes meta-analysis. In this paper we calculate the Bayesian BFPs and study the factors that drive them. We prove that simultaneous mean squared error reduction relative to an uninformed model is always achievable through application of this model. Power and sample size calculations for a future RCT, designed to be informed by multiple external RCTs, are also provided. Potential applications include inference on treatment efficacy from independent trials involving either heterogeneous patient populations or different therapies from a common class.
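The borrowing-fraction idea can be sketched with a normal-normal empirical Bayes model: pool the K historical estimates, estimate the between-study variance (DerSimonian-Laird method of moments is used here as one standard choice), and shrink the pediatric estimate toward the historical mean with a data-driven weight. All numbers are hypothetical, and this is an illustration of the general mechanism rather than the paper's exact derivation:

```python
import numpy as np

# K = 3 hypothetical historical (adult) effect estimates with standard errors
theta_ext = np.array([0.50, 0.42, 0.58])
se_ext = np.array([0.10, 0.12, 0.09])
theta_ped, se_ped = 0.30, 0.20          # current pediatric estimate

# precision-weighted external mean and DerSimonian-Laird between-study variance
w = 1 / se_ext**2
mu = np.sum(w * theta_ext) / np.sum(w)
q = np.sum(w * (theta_ext - mu) ** 2)
tau2 = max(0.0, (q - (len(theta_ext) - 1)) /
           (np.sum(w) - np.sum(w**2) / np.sum(w)))

# borrowing fraction: posterior weight on the external mean when the external
# data act as a N(mu, tau2 + se(mu)^2) prior for the pediatric effect
prec_prior = 1 / (tau2 + 1 / np.sum(w))
prec_ped = 1 / se_ped**2
bf = prec_prior / (prec_prior + prec_ped)
theta_shrunk = bf * mu + (1 - bf) * theta_ped
```

When the historical studies agree closely (small tau2), the borrowing fraction is large; heterogeneous histories inflate tau2 and automatically dial borrowing down, which is the behavior a prespecified BFP cannot provide.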

15.
In randomized clinical trials (RCTs), we may come across the situation in which some patients do not fully comply with their assigned treatment. For an experimental treatment with trichotomous levels, we derive the maximum likelihood estimator (MLE) of the risk ratio (RR) per level of dose increase in an RCT with noncompliance. We further develop three asymptotic interval estimators for the RR. To evaluate and compare the finite sample performance of these interval estimators, we employ Monte Carlo simulation. When the number of patients per treatment is large, we find that all interval estimators derived in this paper can perform well. When the number of patients is not large, we find that the interval estimator using Wald’s statistic can be liberal, while the interval estimator using the logarithmic transformation of the MLE can lose precision. We note that use of a bootstrap variance estimate in this case may alleviate these concerns. We further note that an interval estimator combining interval estimators using Wald’s statistic and the logarithmic transformation can generally perform well with respect to the coverage probability, and be generally more efficient than interval estimators using bootstrap variance estimates when RR>1. Finally, we use the data taken from a study of vitamin A supplementation to reduce mortality in preschool children to illustrate the use of these estimators.

16.
Optimality of equal versus unequal cluster sizes in the context of multilevel intervention studies is examined. A Monte Carlo study is done to examine to what degree asymptotic results on the optimality hold for realistic sample sizes and for different estimation methods. The relative D-criterion, comparing equal versus unequal cluster sizes, almost always exceeded 85%, implying that loss of information due to unequal cluster sizes can be compensated for by increasing the number of clusters by 18%. The simulation results are in line with asymptotic results, showing that, for realistic sample sizes and various estimation methods, the asymptotic results can be used in planning multilevel intervention studies.

17.
Existing statutes in the United States and Europe require manufacturers to demonstrate evidence of effectiveness through the conduct of adequate and well‐controlled studies to obtain marketing approval of a therapeutic product. What constitutes adequate and well‐controlled studies is usually interpreted as randomized controlled trials (RCTs). However, these trials are sometimes unfeasible because of their size, duration, cost, patient preference, or in some cases, ethical concerns. For example, RCTs may not be fully powered in rare diseases or in infections caused by multidrug resistant pathogens because of the low number of enrollable patients. In this case, data available from external controls (including historical controls and observational studies or data registries) can complement information provided by RCTs. Propensity score matching methods can be used to select or “borrow” additional patients from the external controls, maintaining a one‐to‐one randomization between the treatment arm and active control, by matching the new treatment and control units based on a set of measured covariates, i.e., model‐based pairing of treatment and control units that are similar in terms of their observable pretreatment characteristics. To this end, two matching schemes based on propensity scores are explored and applied to a real clinical data example, with the objective of using historical or external observations to augment data in a trial where the randomization is disproportionate or asymmetric.
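One common matching scheme, greedy 1:1 nearest-neighbour matching on the estimated propensity score, can be sketched as follows (the sample sizes, the single covariate, and the greedy ordering are illustrative assumptions; the paper's two specific schemes may differ):

```python
import numpy as np

rng = np.random.default_rng(6)

def fit_logistic(X, z, iters=25):
    """Logistic regression by iteratively reweighted least squares."""
    beta = np.zeros(X.shape[1])
    for _ in range(iters):
        p = 1 / (1 + np.exp(-X @ beta))
        w = np.clip(p * (1 - p), 1e-8, None)
        beta = np.linalg.solve((X * w[:, None]).T @ X,
                               (X * w[:, None]).T @ (X @ beta + (z - p) / w))
    return beta

# trial treatment arm (n=50) and a larger external control pool (n=300)
# whose covariate distribution differs from the trial's
x_trt = rng.normal(0.5, 1.0, 50)
x_ext = rng.normal(0.0, 1.0, 300)
x = np.concatenate([x_trt, x_ext])
z = np.concatenate([np.ones(50), np.zeros(300)])
X = np.column_stack([np.ones_like(x), x])
ps = 1 / (1 + np.exp(-X @ fit_logistic(X, z)))

# greedy 1:1 nearest-neighbour matching on the propensity score
ps_trt, ps_ext = ps[:50], ps[50:]
available = np.ones(300, dtype=bool)
matches = []
for i in np.argsort(-ps_trt):          # match hardest (highest-score) cases first
    j = np.argmin(np.where(available, np.abs(ps_ext - ps_trt[i]), np.inf))
    matches.append(j)
    available[j] = False               # matching without replacement

balance_before = abs(x_trt.mean() - x_ext.mean())
balance_after = abs(x_trt.mean() - x_ext[matches].mean())
```

The matched external controls have a covariate mean much closer to the treatment arm's, restoring the one-to-one comparison the abstract describes.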

18.
The purpose of this article is to compare efficiencies of several cluster randomized designs using the method of quantile dispersion graphs (QDGs). A cluster randomized design is considered whenever subjects are randomized at a group level but analyzed at the individual level. A prior knowledge of the correlation existing between subjects within the same cluster is necessary to design these cluster randomized trials. Using the QDG approach, we are able to compare several cluster randomized designs without requiring any information on the intracluster correlation. For a given design, several quantiles of the power function, which are directly related to the effect size, are obtained for several effect sizes. The quantiles depend on the intracluster correlation present in the model. The dispersion of these quantiles over the space of the unknown intracluster correlation is determined, and then depicted by the QDGs. Two applications of the proposed methodology are presented.

19.
Wu Hao & Peng Fei, 《统计研究》 (Statistical Research), 2020, 37(4): 114-128
The propensity score is an important tool for estimating average treatment effects. In observational studies, however, imbalance in the covariate distributions between the treatment and control groups often produces extreme propensity scores, that is, scores very close to 0 or 1. This brings the strong ignorability assumption of causal inference close to violation, which in turn leads to large bias and variance in the estimated average treatment effect. Li et al. (2018a) proposed the covariate balancing weighting method, which, under the unconfoundedness assumption, resolves the impact of extreme propensity scores by achieving weighted balance of the covariate distributions. Building on this work, we propose a robust and efficient estimation method based on covariate balancing weighting, and improve its robustness in empirical applications by introducing the Super Learner algorithm. We further extend this to a robust and efficient covariate-balancing-weighted estimator that, in theory, does not depend on the assumptions of either the outcome regression model or the propensity score model. Monte Carlo simulations show that both proposed methods retain very small bias and variance even when both the outcome regression model and the propensity score model are misspecified. In the empirical section, we apply both methods to right heart catheterization data and find that right heart catheterization increases patient mortality by approximately 6.3%.

20.
The use of parametric linear mixed models and generalized linear mixed models to analyze longitudinal data collected during randomized controlled trials (RCTs) is conventional. The application of these methods, however, is restricted by the various assumptions they require. When the number of observations per subject is sufficiently large, and individual trajectories are noisy, functional data analysis (FDA) methods serve as an alternative to parametric longitudinal data analysis techniques. However, the use of FDA in RCTs is rare. In this paper, the effectiveness of FDA and linear mixed models (LMMs) was compared by analyzing data from rural persons living with HIV and comorbid depression enrolled in a depression treatment randomized clinical trial. Interactive voice response systems were used for weekly administrations of the 10-item Self-Administered Depression Scale (SADS) over 41 weeks. Functional principal component analysis and functional regression analysis methods detected a statistically significant difference in SADS between telephone-administered interpersonal psychotherapy (tele-IPT) and controls, but linear mixed effects model results did not. Additional simulation studies were conducted to compare FDA and LMMs under a different nonlinear trajectory assumption. In this clinical trial, with sufficient per-subject measured outcomes and individual trajectories that are noisy and nonlinear, we found FDA methods to be a better alternative to LMMs.
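Functional principal component analysis, the core FDA tool mentioned above, can be sketched as an SVD of the centered subject-by-week data matrix; the simulated trajectories below (one smooth component plus noise, standing in for weekly SADS totals) are an assumption for illustration, not the trial data:

```python
import numpy as np

rng = np.random.default_rng(7)
n, weeks = 80, 41                 # subjects, weekly measurements (as in the trial)

# simulate noisy trajectories: a subject-specific score on one smooth declining
# component plus measurement noise
tgrid = np.linspace(0, 1, weeks)
phi = np.sqrt(2) * np.cos(np.pi * tgrid)     # a smooth basis function
scores = rng.normal(0, 2.0, n)               # subject-level scores
Y = 10 + scores[:, None] * phi[None, :] + rng.normal(0, 1.0, (n, weeks))

# FPCA via SVD of the centred data matrix: right singular vectors estimate the
# principal component functions, scaled left singular vectors the subject scores
Yc = Y - Y.mean(axis=0)
U, S, Vt = np.linalg.svd(Yc, full_matrices=False)
var_explained = S**2 / np.sum(S**2)
fpc1 = Vt[0]                      # first eigenfunction (identified up to sign)
subject_scores = U[:, 0] * S[0]   # subjects' first FPC scores
```

The low-dimensional subject scores can then enter a regression against treatment arm, which is essentially how functional regression detects group differences without the parametric trajectory assumptions of an LMM.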
