Similar Literature
20 similar documents found.
1.
Stratified randomization based on the baseline value of the primary analysis variable is common in clinical trial design. We illustrate from a theoretical viewpoint the advantage of such a stratified randomization for achieving balance on the baseline covariate. We also conclude that the estimator for the treatment effect is consistent when both the continuous baseline covariate and the stratification factor derived from it are included. In addition, the analysis of covariance model including both the continuous covariate and the stratification factor is asymptotically no less efficient than including either only the continuous baseline value or only the stratification factor. We recommend that the continuous baseline covariate generally be included in the analysis model. The corresponding stratification factor may also be included if one is not confident that the relationship between the baseline covariate and the response variable is linear. In spite of the above recommendation, one should always carefully examine relevant historical data to pre-specify the most appropriate analysis model for a prospective study.
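A minimal sketch of the recommended analysis on simulated data (all variable names are hypothetical), fitting an ANCOVA that includes both the continuous baseline covariate and the stratification factor derived from it:

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 400
baseline = rng.normal(50, 10, n)
# stratification factor derived from the baseline covariate
stratum = pd.cut(baseline, bins=[-np.inf, 45, 55, np.inf],
                 labels=["low", "mid", "high"])
treat = rng.integers(0, 2, n)                      # 1 = experimental arm
y = 5 + 2 * treat + 0.8 * baseline + rng.normal(0, 5, n)
df = pd.DataFrame({"y": y, "treat": treat,
                   "baseline": baseline, "stratum": stratum})

# ANCOVA with both the continuous covariate and the stratification factor;
# asymptotically no less efficient than using either term alone
fit = smf.ols("y ~ treat + baseline + C(stratum)", data=df).fit()
print(fit.params["treat"], fit.bse["treat"])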

2.
Response adaptive randomization (RAR) methods for clinical trials are susceptible to imbalance in the distribution of influential covariates across treatment arms. This can make the interpretation of trial results difficult, because observed differences between treatment groups may be a function of the covariates and not necessarily because of the treatments themselves. We propose a method for balancing the distribution of covariate strata across treatment arms within RAR. The method uses odds ratios to modify global RAR probabilities to obtain stratum-specific modified RAR probabilities. We provide illustrative examples and a simple simulation study to demonstrate the effectiveness of the strategy for maintaining covariate balance. The proposed method is straightforward to implement and applicable to any type of RAR method or outcome.
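The abstract does not give the paper's exact adjustment, so the sketch below only illustrates the general idea under an assumed form: a within-stratum balancing odds ratio multiplies the odds implied by the global RAR probability (the function name and the strength parameter are hypothetical):

import numpy as np

def stratum_rar_prob(p_global, n_trt, n_ctl, strength=1.0):
    # Odds implied by the global RAR probability for the experimental arm
    odds = p_global / (1.0 - p_global)
    # Hypothetical balancing odds ratio: > 1 when the experimental arm is
    # under-represented in this stratum, pushing assignment toward it
    balance_or = ((n_ctl + 1) / (n_trt + 1)) ** strength
    odds *= balance_or
    return odds / (1.0 + odds)

# Global RAR probability 0.7, but this stratum already has 12 vs 5 patients:
print(stratum_rar_prob(0.7, n_trt=12, n_ctl=5))    # pulled below 0.7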

3.
吴浩, 彭非 (Wu Hao, Peng Fei). 《统计研究》 (Statistical Research), 2020, 37(4): 114-128.
Propensity scores are an important tool for estimating average treatment effects. In observational studies, however, imbalance in the covariate distributions between the treatment and control groups often produces extreme propensity scores, i.e., scores very close to 0 or 1. This brings the strong ignorability assumption of causal inference close to violation, leading to large bias and variance in estimates of the average treatment effect. Li et al. (2018a) proposed covariate balancing weighting, which removes the impact of extreme propensity scores by achieving weighted balance of the covariate distributions under the unconfoundedness assumption. Building on this, we propose a robust and efficient estimator based on covariate balancing weights and improve its robustness in empirical applications by introducing the super learner algorithm; we further extend it to a robust and efficient covariate-balancing-weighted estimator that, in theory, does not depend on the assumptions of either the outcome regression model or the propensity score model. Monte Carlo simulations show that both proposed methods retain very small bias and variance even when both the outcome regression model and the propensity score model are misspecified. In the empirical study, we apply both methods to right heart catheterization data and find that right heart catheterization increases patient mortality by approximately 6.3%.
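A brief sketch of the covariate balancing (overlap) weighting of Li et al. (2018a) on which the paper builds, with a plain logistic propensity model standing in for the super learner that the authors introduce (function and variable names are illustrative):

import numpy as np
from sklearn.linear_model import LogisticRegression

def overlap_weighted_effect(X, z, y):
    # Propensity scores from a simple logistic model; the paper instead
    # uses a super learner ensemble for robustness
    e = LogisticRegression(max_iter=1000).fit(X, z).predict_proba(X)[:, 1]
    # Overlap weights: treated weighted by 1 - e(x), controls by e(x).
    # The weights vanish smoothly as e(x) -> 0 or 1, so extreme propensity
    # scores cannot dominate the estimate.
    w = np.where(z == 1, 1 - e, e)
    t = z == 1
    return np.average(y[t], weights=w[t]) - np.average(y[~t], weights=w[~t])

# Toy usage with simulated confounding; the true effect is 2.0
rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 3))
z = rng.binomial(1, 1 / (1 + np.exp(-X @ np.array([1.5, -1.0, 0.5]))), 2000)
y = 2.0 * z + X @ np.array([1.0, 1.0, 1.0]) + rng.normal(size=2000)
print(overlap_weighted_effect(X, z, y))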

4.
Re-randomization tests have been considered a robust alternative to traditional population-model-based methods for analyzing randomized clinical trials. This is especially so when the clinical trials are randomized according to minimization, a popular covariate-adaptive randomization method for ensuring balance among prognostic factors. Among various re-randomization tests, the fixed-entry-order re-randomization test is advocated as an effective strategy when a temporal trend is suspected. Yet when minimization is applied to trials with unequal allocation, the fixed-entry-order re-randomization test is biased and its power thus compromised. We find that the bias is due to non-uniform re-allocation probabilities incurred by the re-randomization in this case. We therefore propose a weighted fixed-entry-order re-randomization test to overcome the bias. The performance of the new test was investigated in simulation studies mimicking the settings of a real clinical trial. The weighted re-randomization test was found to work well in the scenarios investigated, including in the presence of a strong temporal trend.
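Below is an unweighted fixed-entry-order re-randomization test on simulated data; the weighting that corrects the non-uniform re-allocation probabilities is the paper's contribution and is not reproduced here. The 2:1 biased coin merely stands in for the trial's minimization rule, and all names are illustrative:

import numpy as np

rng = np.random.default_rng(7)

def mean_diff(y, arms):
    # Test statistic: absolute difference in mean response between arms
    return abs(y[arms == 1].mean() - y[arms == 0].mean())

def rerandomization_test(y, arms_obs, draw_sequence, n_rerand=5000):
    # Re-generate the allocation sequence in the original entry order and
    # compare the reference distribution with the observed statistic
    t_obs = mean_diff(y, arms_obs)
    t_ref = np.array([mean_diff(y, draw_sequence(len(y)))
                      for _ in range(n_rerand)])
    return float((t_ref >= t_obs).mean())

# Stand-in allocation rule: a 2:1 biased coin applied in fixed entry order;
# a real application would plug in the trial's minimization procedure
draw = lambda n: rng.binomial(1, 2 / 3, n)
y = rng.normal(size=90)
arms = draw(90)
print(rerandomization_test(y, arms, draw))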

5.
In this paper, optimal experimental designs for multilevel models with covariates and two levels of nesting are considered. Multilevel models are used to describe the relationship between an outcome variable and a treatment condition and covariate. It is assumed that the outcome variable is measured on a continuous scale. D-optimality and L-optimality are chosen as optimality criteria. It is shown that pre-stratification on the covariate leads to a more efficient design and that the person level is the optimal level of randomization. Furthermore, optimal sample sizes are given, and it is shown that these do not depend on the optimality criterion when randomization is done at the group level.

6.
A. Galbete & J.A. Moler. Statistics, 2016, 50(2): 418-434.
In a randomized clinical trial, response-adaptive randomization procedures use the information gathered, including previous patients' responses, to allocate the next patient. In this setting, we consider randomization-based inference. We provide an algorithm to obtain exact p-values for statistical tests that compare two treatments with dichotomous responses. This algorithm can be applied to a family of response-adaptive randomization procedures that share the following property: the distribution of the allocation rule depends only on the imbalance between treatments and on the imbalance between successes for treatments 1 and 2 in the previous step. This family includes some outstanding response-adaptive randomization procedures. We study a randomization test for the null hypothesis of equivalence of treatments and show that this test performs similarly to its parametric counterpart. We also study the effect of a covariate on the inferential process. First, we obtain a parametric test, constructed assuming a logit model that relates responses to treatments and covariate levels, and we give conditions that guarantee its asymptotic normality. Finally, we show that the randomization test, which is free of model specification, performs as well as the parametric test that takes the covariate into account.
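A Monte Carlo approximation (not the paper's exact algorithm) to such a randomization test is sketched below for a hypothetical rule in the stated family, where the allocation probability depends only on the treatment imbalance d and the success imbalance s; the dichotomous responses are held fixed under the null:

import numpy as np

rng = np.random.default_rng(3)

def rar_randomization_pvalue(y, arms_obs, alloc_prob, n_mc=2000):
    y = np.asarray(y)

    def stat(arms):
        if arms.sum() in (0, len(arms)):
            return 0.0
        return abs(y[arms == 1].mean() - y[arms == 0].mean())

    t_obs = stat(np.asarray(arms_obs))
    hits = 0
    for _ in range(n_mc):
        d = s = 0               # treatment imbalance, success imbalance
        arms = np.empty(len(y), dtype=int)
        for i in range(len(y)):
            a = int(rng.random() < alloc_prob(d, s))
            arms[i] = a
            d += 1 if a else -1
            s += y[i] if a else -y[i]
        hits += stat(arms) >= t_obs
    return hits / n_mc

# Illustrative member of the family: allocation depends only on (d, s)
rule = lambda d, s: 1.0 / (1.0 + np.exp(0.3 * d - 0.3 * s))
y = rng.integers(0, 2, 60)      # dichotomous responses
arms = rng.integers(0, 2, 60)   # observed allocation (toy)
print(rar_randomization_pvalue(y, arms, rule))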

7.
This paper proposes an approach for detecting multiple confounders that combines the advantages of two causal models, the potential outcome model and the causal diagram. The approach need not use a complete causal diagram as long as a known covariate set Z is known to contain the parent set of the exposure E. On the other hand, whether or not a covariate is a confounder may depend on its categorization. We introduce uniform non-confounding, which implies non-confounding in any subpopulation defined by an interval of a covariate (or any pooled level of a discrete covariate). We show that the conditions in Miettinen and Cook's criteria for non-confounding also imply uniform non-confounding. Further, we present an algorithm for deleting non-confounders from the potential confounder set Z, which extends the approach of Greenland et al. [1999a. Causal diagrams for epidemiologic research. Epidemiology 10, 37–48] by splitting Z into a series of potential confounder subsets. We also discuss conditions for the absence of confounding bias in the subpopulations of interest, where the subpopulations may be defined by non-confounders.

8.
Propensity score-based estimators are commonly used to estimate causal effects in evaluation research. To reduce bias in observational studies, researchers might be tempted to include many, perhaps correlated, covariates when estimating the propensity score model. Taking into account that the propensity score is estimated, this study investigates how the efficiency of matching, inverse probability weighting, and doubly robust estimators changes when covariates are correlated. Propositions regarding the large-sample variances under certain assumptions on the data-generating process are given. The propositions are supplemented by several numerical large-sample and finite-sample results from a wide range of models. The results show that covariate correlations may increase or decrease the variances of the estimators. Several factors influence how correlation affects the variance of the estimators, including the choice of estimator, the strength of the confounding toward outcome and treatment, and whether a constant or non-constant causal effect is present.

9.
Determining the effectiveness of different treatments from observational data, which are characterized by imbalance between groups due to lack of randomization, is challenging. Propensity matching is often used to rectify imbalances among prognostic variables. However, there are no guidelines on how to appropriately analyze group-matched data when the outcome is a zero-inflated count. In addition, there is debate over whether to account for the correlation of responses induced by matching and/or whether to adjust for variables used in generating the propensity score in the final analysis. The aim of this research is to compare covariate-unadjusted and -adjusted zero-inflated Poisson models that do and do not account for the correlation. A simulation study is conducted, demonstrating that it is necessary to adjust for potential residual confounding, but that accounting for correlation is less important. The methods are applied to a biomedical research data set.
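As an illustration of the kind of model compared in the paper, the sketch below fits a covariate-adjusted zero-inflated Poisson model to simulated data with statsmodels (data-generating values and variable names are invented):

import numpy as np
import statsmodels.api as sm
from statsmodels.discrete.count_model import ZeroInflatedPoisson

rng = np.random.default_rng(11)
n = 1000
x = rng.normal(size=n)                    # a prognostic covariate
treat = rng.integers(0, 2, n)
lam = np.exp(0.3 + 0.4 * treat + 0.5 * x)
zero = rng.random(n) < 0.3                # structural zeros
y = np.where(zero, 0, rng.poisson(lam))

# Covariate-adjusted ZIP: the paper's simulations suggest adjusting for
# residual confounding matters more than modelling the within-pair
# correlation induced by matching
X = sm.add_constant(np.column_stack([treat, x]))
fit = ZeroInflatedPoisson(y, X, exog_infl=np.ones((n, 1))).fit(maxiter=500,
                                                               disp=0)
print(fit.params)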

10.
Minimization is an alternative method to stratified permuted block randomization, which may be more effective at balancing treatments when there are many strata. However, its use in the regulatory setting for industry trials remains controversial, primarily due to the difficulty in interpreting conventional asymptotic statistical tests under restricted methods of treatment allocation. We argue that the use of minimization should be critically evaluated when designing the study for which it is proposed. We demonstrate by example how simulation can be used to investigate whether minimization improves treatment balance compared with stratified randomization, and how much randomness can be incorporated into the minimization before any balance advantage is no longer retained. We also illustrate by example how the performance of the traditional model-based analysis can be assessed, by comparing the nominal test size with the observed test size over a large number of simulations. We recommend that the assignment probability for the minimization be selected using such simulations.
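A simulation along the lines the authors recommend might look as follows: a Pocock-Simon-style minimization sketch whose assignment probability p_assign controls how much randomness is incorporated, tracking overall treatment imbalance as p_assign decreases toward complete randomization at 0.5 (all settings are illustrative):

import numpy as np

rng = np.random.default_rng(5)

def minimization_trial(covs, p_assign=0.8):
    # Assign the arm that reduces total marginal imbalance with probability
    # p_assign, otherwise the other arm; covs is an (n, k) array of factor levels
    n, k = covs.shape
    arms = np.empty(n, dtype=int)
    for i in range(n):
        imb = np.zeros(2)
        for arm in (0, 1):
            for j in range(k):
                same = covs[:i, j] == covs[i, j]
                cnt = np.bincount(arms[:i][same], minlength=2)
                cnt[arm] += 1
                imb[arm] += abs(cnt[0] - cnt[1])
        best = int(imb[1] < imb[0]) if imb[0] != imb[1] else int(rng.integers(2))
        arms[i] = best if rng.random() < p_assign else 1 - best
    return arms

covs = rng.integers(0, 3, size=(120, 4))   # 4 prognostic factors, 3 levels
for p in (1.0, 0.9, 0.8, 0.7, 0.5):
    diffs = [abs(2 * minimization_trial(covs, p).sum() - 120)
             for _ in range(100)]
    print(p, np.mean(diffs))               # mean |n1 - n0| over 100 trials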

11.
We present a new experimental design procedure that divides a set of experimental units into two groups in order to minimize error in estimating a treatment effect. One concern is the elimination of large covariate imbalance between the two groups before the experiment begins. Another concern is robustness of the design to misspecification in response models. We address both concerns in our proposed design: we first place subjects into pairs using optimal nonbipartite matching, making our estimator robust to complicated nonlinear response models. Our innovation is to keep the matched pairs extant, take differences of the covariate values within each matched pair, and then use the greedy switching heuristic of Krieger et al. (2019) or rerandomization on these differences. This latter step greatly reduces covariate imbalance. Furthermore, our resultant designs are shown to be nearly as random as matching, which is robust to unobserved covariates. When compared to previous designs, our approach exhibits significant improvement in the mean squared error of the treatment effect estimator when the response model is nonlinear and performs at least as well when the response model is linear. Our design procedure can be found as a method in the open-source R package GreedyExperimentalDesign, available on CRAN.
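The sketch below conveys the flavor of the design on simulated covariates: greedy nearest-pair matching stands in for optimal nonbipartite matching, and plain rerandomization (rather than the greedy switching heuristic) is applied to the within-pair differences; it is not the package's implementation:

import numpy as np

rng = np.random.default_rng(2)

def greedy_pairs(X):
    # Greedy stand-in for optimal nonbipartite matching (n must be even):
    # repeatedly pair the next unit with its closest remaining neighbour
    left = list(range(len(X)))
    pairs = []
    while left:
        i = left.pop(0)
        d = [np.linalg.norm(X[i] - X[j]) for j in left]
        pairs.append((i, left.pop(int(np.argmin(d)))))
    return pairs

def rerandomized_pair_design(X, n_draws=2000):
    # Keep the pairs extant, take within-pair covariate differences, then
    # rerandomize which member is treated, retaining the least imbalanced draw
    pairs = greedy_pairs(X)
    diffs = np.array([X[i] - X[j] for i, j in pairs])
    best_signs, best_imb = None, np.inf
    for _ in range(n_draws):
        signs = rng.choice([-1, 1], size=len(pairs))
        imb = np.linalg.norm((signs[:, None] * diffs).sum(axis=0))
        if imb < best_imb:
            best_signs, best_imb = signs, imb
    return pairs, best_signs, best_imb

X = rng.normal(size=(40, 3))
pairs, signs, imb = rerandomized_pair_design(X)
print(imb)    # covariate imbalance of the retained allocation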

12.
Motivated by a potential-outcomes perspective, the idea of principal stratification has been widely recognized for its relevance in settings susceptible to posttreatment selection bias, such as randomized clinical trials where treatment received can differ from treatment assigned. In one such setting, we address subtleties involved in inference for causal effects when using a key covariate to predict membership in latent principal strata. We show that when treatment received can differ from treatment assigned in both study arms, incorporating a stratum-predictive covariate can cause estimates of the "complier average causal effect" (CACE) to be derived from observations in the two treatment arms with different covariate distributions. Adopting a Bayesian perspective and using Markov chain Monte Carlo for computation, we develop posterior checks that characterize the extent to which incorporating the pretreatment covariate endangers estimation of the CACE. We apply the method to analyze a clinical trial comparing two treatments for jaw fractures, in which the study protocol allowed surgeons to overrule both possible randomized treatment assignments based on their clinical judgment and the data contained a key covariate (injury severity) predictive of treatment received.

13.
Model misspecification and noisy covariate measurements are two common sources of inference bias. There is considerable literature on the consequences of each problem in isolation. In this paper, however, the author investigates their combined effects. He shows that in the context of linear models, the large-sample error in estimating the regression function may be partitioned into two terms quantifying the impact of these sources of bias. This decomposition reveals trade-offs between the two biases in question in a number of scenarios. After presenting a finite-sample version of the decomposition, the author studies the relative impacts of model misspecification, covariate imprecision, and sampling variability, with reference to the detectability of the model misspecification via diagnostic plots.

14.
This paper reviews some of the key statistical ideas that are encountered when trying to find empirical support for causal interpretations and conclusions by applying statistical methods to experimental or observational longitudinal data. In such data, a collection of individuals is typically followed over time: each individual has a registered sequence of covariate measurements along with values of control variables that are to be interpreted as causes in the analysis, and finally the individual outcomes or responses are reported. Particular attention is given to the potentially important problem of confounding. We provide conditions under which, at least in principle, unconfounded estimation of the causal effects can be accomplished. Our approach for dealing with causal problems is entirely probabilistic, and we apply Bayesian ideas and techniques to deal with the corresponding statistical inference. In particular, we use the general framework of marked point processes for setting up the probability models, and consider posterior predictive distributions as providing the natural summary measures for assessing the causal effects. We also draw connections to relevant recent work in this area, notably to Judea Pearl's formulations based on graphical models and his calculus of so-called do-probabilities. Two examples illustrating different aspects of causal reasoning are discussed in detail.

15.
Published literature and regulatory agency guidance documents provide conflicting recommendations as to whether a pre-specified subgroup analysis also requires for its validity that the study employ randomization that is stratified on subgroup membership. This is an important issue, as subgroup analyses are often required to demonstrate efficacy in the development of drugs with a companion diagnostic. Here, it is shown, for typical randomization methods, that the fraction of patients in the subgroup given experimental treatment matches, on average, the target fraction in the entire study. Also, mean covariate values are balanced, on average, between treatment arms in the subgroup, and it is argued that the variance in covariate imbalance between treatment arms in the subgroup is at worst only slightly increased versus a subgroup-stratified randomization method. Finally, in an analysis of variance setting, a least-squares treatment effect estimator within the subgroup is shown to be unbiased whether or not the randomization is stratified on subgroup membership. Thus, a requirement that a study be stratified on subgroup membership would place an artificial roadblock to innovation and the goals of personalized healthcare.
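The first claim is easy to check by simulation: under 1:1 permuted-block randomization that ignores subgroup membership, the fraction treated within the subgroup still averages the study-wide target (the settings below are invented):

import numpy as np

rng = np.random.default_rng(4)

def permuted_blocks(n, block=4):
    # 1:1 permuted-block randomization, not stratified on the subgroup
    arms = []
    while len(arms) < n:
        b = [0, 1] * (block // 2)
        rng.shuffle(b)
        arms.extend(b)
    return np.array(arms[:n])

fracs = []
for _ in range(5000):
    sub = rng.random(200) < 0.3            # ~30% biomarker-positive subgroup
    arms = permuted_blocks(200)
    fracs.append(arms[sub].mean())
print(np.mean(fracs), np.std(fracs))       # mean ~0.5, the study-wide target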

16.
In randomized clinical trials with time-to-event outcomes, the hazard ratio is commonly used to quantify the treatment effect relative to a control. The Cox regression model is commonly used to adjust for relevant covariates to obtain more accurate estimates of the hazard ratio between treatment groups. However, it is well known that the treatment hazard ratio based on a covariate-adjusted Cox regression model is conditional on the specific covariates and differs from the unconditional hazard ratio that is an average across the population. Therefore, covariate-adjusted Cox models cannot be used when unconditional inference is desired. In addition, the covariate-adjusted Cox model requires the relatively strong assumption of proportional hazards for each covariate. To overcome these challenges, a nonparametric randomization-based analysis of covariance method was proposed to estimate covariate-adjusted hazard ratios for multivariate time-to-event outcomes. However, the performance (power and type I error rate) of this method has not been empirically evaluated. Although the method was derived for multivariate situations, for most registration trials the primary endpoint is a univariate outcome. We therefore apply the approach to univariate outcomes and evaluate its performance through a simulation study. Stratified analysis is also investigated. As an illustration of the method, we apply the covariate-adjusted and unadjusted analyses to an oncology trial.

17.
This paper presents a new class of designs (Big Stick Designs) for sequentially assigning experimental units to treatments when only the time covariate is considered. By prescribing the degree of imbalance that the experimenters can tolerate, complete randomization is used as long as the imbalance of the treatment allocation does not exceed the prescribed value. Once it reaches that value, a deterministic assignment is made to lower the imbalance. Such designs can be easily implemented with no programming and little personnel support. They compare favorably with the Biased Coin Designs, the Permuted Block Designs, and the Urn Designs as far as accidental bias and selection bias are concerned. Generalizations of these designs are considered to achieve various purposes, e.g., avoidance of deterministic assignments, early balance, etc.
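The allocation rule is fully specified by the abstract and is simple to sketch (the tolerance value and names are illustrative):

import numpy as np

def big_stick(n, max_imbalance=3, rng=None):
    # Completely randomize while |n1 - n0| is below the prescribed tolerance;
    # once the boundary is reached, assign deterministically to reduce it
    rng = rng or np.random.default_rng()
    arms, imb = [], 0                      # imb = n1 - n0
    for _ in range(n):
        if imb >= max_imbalance:
            a = 0
        elif imb <= -max_imbalance:
            a = 1
        else:
            a = int(rng.random() < 0.5)
        arms.append(a)
        imb += 1 if a else -1
    return np.array(arms)

arms = big_stick(50, max_imbalance=2)
print(arms.sum(), 50 - arms.sum())         # imbalance never exceeds 2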

18.
Computing the Cox Model for Case-Cohort Designs
Prentice (1986) proposed the case-cohort design as an efficient subsampling mechanism for survival studies. Several other authors have expanded on these ideas to create a family of related sampling plans, along with estimators for the covariate effects. We describe how to obtain the proposed parameter estimates and their variance estimates using standard software packages, with SAS and S-PLUS as particular examples.
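The abstract's software examples are SAS and S-PLUS; as a loose Python analogue, the sketch below fits a weighted Cox model to a simulated case-cohort sample using lifelines, with simple Barlow-style weights (cases weight 1, non-case subcohort members weight 1/sampling fraction) rather than Prentice's exact pseudolikelihood:

import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(9)

# Full cohort; we observe all cases plus a random subcohort
n, frac = 5000, 0.15
x = rng.normal(size=n)
t = rng.exponential(1 / np.exp(0.5 * x))   # true log-hazard ratio 0.5
c = rng.exponential(2.0, n)                # censoring times
time, event = np.minimum(t, c), (t <= c).astype(int)
subcohort = rng.random(n) < frac
keep = subcohort | (event == 1)

df = pd.DataFrame({"time": time, "event": event, "x": x})[keep].copy()
# Barlow-style weights -- a common case-cohort weighting, not Prentice's
# estimator: cases weight 1, non-case subcohort members weight 1/frac
df["w"] = np.where(df["event"] == 1, 1.0, 1.0 / frac)

cph = CoxPHFitter()
cph.fit(df, duration_col="time", event_col="event",
        weights_col="w", robust=True)      # robust (sandwich) variance
print(cph.params_)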

19.
In this contribution, we aim at improving ordinal variable selection in the context of causal models for credit risk estimation. To this end, we propose an approach that provides a formal inferential tool for comparing the explanatory power of each covariate and, therefore, for selecting an effective model for classification purposes. Our proposed model is Bayesian nonparametric and thus keeps the amount of model specification to a minimum. We consider the case in which information from the covariates is at the ordinal level. A notable instance of this is the situation in which ordinal variables result from rankings of companies that are to be evaluated according to different macro- and microeconomic aspects, leading to ordinal covariates that correspond to various ratings, which entail different magnitudes of the probability of default. For each given covariate, we suggest partitioning the statistical units into as many groups as the number of observed levels of the covariate. We then assume individual defaults to be homogeneous within each group and heterogeneous across groups. Our aim is to compare and, therefore, select the partition structures resulting from the consideration of different explanatory covariates. The metric we choose for variable comparison is the posterior probability of each partition. The application of our proposal to a European credit risk database shows that it performs well, leading to a coherent and clear method for variable averaging of the estimated default probabilities.

20.
This article is devoted to the construction and asymptotic study of adaptive, group-sequential, covariate-adjusted randomized clinical trials, analysed through the prism of the semiparametric methodology of targeted maximum likelihood estimation. We show how to build, as the data accrue group-sequentially, a sampling design that targets a user-supplied optimal covariate-adjusted design. We also show how to carry out sound statistical inference based on such an adaptive sampling scheme (extending results previously known only in the independent and identically distributed setting), and how group-sequential testing applies on top of it. The procedure is robust (i.e. consistent even if the working model is mis-specified). A simulation study confirms the theoretical results and validates the conjecture that the procedure may also be efficient.
