Similar Documents
A total of 20 similar documents were retrieved.
1.
Clinical trials are primarily conducted to understand the average effects treatments have on patients. However, patients are heterogeneous in the severity of their condition and in ways that affect the treatment effect they can expect. It is therefore important to understand and characterize how treatment effects vary. The design and analysis of clinical studies play critical roles in evaluating and characterizing heterogeneous treatment effects. This panel discussed considerations in design and analysis under the recognition that treatment effects are heterogeneous across subgroups of patients. Panel members discussed many questions, including: What is a good estimate of the treatment effect in me, a 65-year-old, bald, Caucasian-American, male patient? What magnitude of heterogeneity of treatment effects (HTE) is sufficiently large to merit attention? What role can prior evidence about HTE play in confirmatory trial design and analysis? Is there anything described in the 21st Century Cures Act that would benefit from greater attention to HTE? An example of a Bayesian approach addressing multiplicity when testing for treatment effects in subgroups is provided. We can do more, and better, to understand heterogeneous treatment effects and to provide the best possible information about them.

2.
Fractional factorial (FF) designs are the most widely used designs in experimental investigations because of their efficient use of experimental runs. One price we pay for using FF designs is our inability to obtain estimates of some important effects (main effects or second-order interactions) that are separate from estimates of other effects (usually higher-order interactions). When the estimate of an effect also includes the influence of one or more other effects, the effects are said to be aliased. Folding over an FF design is a method for breaking the links between aliased effects. The questions are: How do we define the foldover structure for asymmetric FF designs, whether regular or nonregular? How do we choose the optimal foldover plan? How do we use optimal foldover plans to construct combined designs with better capability of estimating lower-order effects? The main objective of the present paper is to answer these questions. Using the new results in this paper as benchmarks, we can implement a powerful and efficient algorithm for finding optimal foldover plans that break the links between aliased effects.
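A minimal sketch of the foldover idea, using the simplest regular case (a 2^(3-1) design with generator C = AB; the example is ours, not the paper's algorithm): in the half fraction the main effect C is fully aliased with the AB interaction, and appending the full foldover, with all signs reversed, breaks that alias link.

```python
# Illustrative sketch of a full foldover breaking an alias link in a
# regular two-level fractional factorial design (not the paper's algorithm).
import itertools
import numpy as np

# 2^(3-1) resolution III design: full factorial in A, B; generator C = AB.
base_ab = np.array(list(itertools.product([-1, 1], repeat=2)))
half = np.column_stack([base_ab, base_ab[:, 0] * base_ab[:, 1]])  # A, B, C=AB

# In the half fraction, the C column coincides with the AB interaction
# column, so their inner product equals the run count: full aliasing.
ab_half = half[:, 0] * half[:, 1]
print(half[:, 2] @ ab_half)        # 4 = number of runs: C aliased with AB

# Full foldover: reverse the signs of every factor and append the new runs.
combined = np.vstack([half, -half])

# In the combined design, C is orthogonal to AB: the alias link is broken.
ab = combined[:, 0] * combined[:, 1]
print(combined[:, 2] @ ab)         # 0: C and AB are de-aliased
```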

3.
Subgroup analyses are a routine part of clinical trials, used to investigate whether treatment effects are homogeneous across the study population. Graphical approaches play a key role in subgroup analyses: they visualise the effect sizes of subgroups, aid the identification of groups that respond differentially, and communicate the results to a wider audience. Many existing approaches do not capture the core information and can lead to misinterpretation of subgroup effects. In this work, we critically appraise existing visualisation techniques, propose useful extensions to increase their utility, and attempt to develop an effective visualisation approach. We focus on forest plots, UpSet plots, Galbraith plots, the subpopulation treatment effect pattern plot, and contour plots, and comment on other approaches whose utility is more limited. We illustrate the methods using data from a prostate cancer study.
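A minimal matplotlib sketch of the most common of these displays, the forest plot; the subgroup labels and numbers below are purely hypothetical, not from the prostate cancer study.

```python
# Forest plot sketch: one row per subgroup, point estimate with confidence
# interval, and a dashed reference line at "no effect" (HR = 1).
import matplotlib.pyplot as plt

subgroups = ["Overall", "Age < 65", "Age >= 65", "Male", "Female"]
estimates = [0.80, 0.75, 0.88, 0.78, 0.84]   # hypothetical hazard ratios
lower     = [0.68, 0.58, 0.70, 0.62, 0.66]
upper     = [0.94, 0.97, 1.10, 0.98, 1.07]

fig, ax = plt.subplots(figsize=(6, 3))
y = range(len(subgroups))
ax.errorbar(estimates, list(y),
            xerr=[[e - l for e, l in zip(estimates, lower)],
                  [u - e for u, e in zip(upper, estimates)]],
            fmt="s", color="black", capsize=3)
ax.axvline(1.0, linestyle="--", color="grey")  # HR = 1: no treatment effect
ax.set_yticks(list(y))
ax.set_yticklabels(subgroups)
ax.invert_yaxis()                              # overall estimate on top
ax.set_xlabel("Hazard ratio (95% CI)")
fig.tight_layout()
plt.show()
```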

4.
When one has information about a set of individuals on several variables, measured in different groups or contexts, and multivariate analysis is applied to each group, the following questions arise: Which groups show a similar response? How do the groups differ? How do the individuals differ in their responses across the groups? These issues lead to a question of great practical interest: the comparison and integration of the structures resulting from several multivariate analyses. Here we propose a method for the comparison and integration of the results of two Biplot analyses applied to the same variables in two different groups of individuals. By extension, we also develop the case of more than two Biplot analyses. Emphasis is placed on the underlying geometry and the interpretation of results, for which we offer indices that allow us to study the integrated structures and perform comparative analyses.

5.
In the analysis of survival times, the logrank test and the Cox model are established as key tools that do not require specific distributional assumptions. Under the assumption of proportional hazards, they are efficient and their results can be interpreted unambiguously. However, delayed treatment effects, disease progression, treatment switching, or the presence of subgroups with differential treatment effects may challenge the proportional hazards assumption. In practice, weighted logrank tests emphasizing early, intermediate, or late event times via an appropriate weighting function may be used to accommodate an expected pattern of non-proportionality. We model these sources of non-proportional hazards via a mixture of survival functions with piecewise constant hazards. The model is then applied to study the power of unweighted and weighted logrank tests, as well as maximum tests allowing different time-dependent weights. Simulation results suggest a robust performance of maximum tests across different scenarios, with little loss in power compared with the most powerful of the considered weighting schemes and large power gains compared with unfavorable weights. The actual sources of non-proportional hazards are not obvious from the resulting population-wise survival functions, highlighting the importance of detailed simulations in the planning phase of a trial when non-proportional hazards are assumed. We provide the required tools in a software package that allows users to model data-generating processes under complex non-proportional hazard scenarios, to simulate data from these models, and to perform the weighted logrank tests.
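The weighting idea can be sketched with the Fleming-Harrington G(rho, gamma) family, one standard choice of weighting function (the abstract does not name its package, so this minimal implementation is ours): rho > 0 emphasizes early differences and gamma > 0 late ones, as in the delayed-effect scenario simulated below using a piecewise constant hazard.

```python
# Minimal two-sample weighted logrank test with Fleming-Harrington weights,
# applied to simulated data with a delayed treatment effect. Assumes
# continuous event times (no ties); illustrative only.
import numpy as np

def weighted_logrank(time, event, group, rho=0.0, gamma=1.0):
    """Standardized G(rho, gamma) weighted logrank statistic."""
    order = np.argsort(time)
    time, event, group = time[order], event[order], group[order]
    n = len(time)
    # Left-continuous pooled Kaplan-Meier estimate S(t-) for the weights.
    s_left, s = np.ones(n), 1.0
    for i in range(n):
        s_left[i] = s
        if event[i]:
            s *= 1.0 - 1.0 / (n - i)
    num, var = 0.0, 0.0
    for i in range(n):
        if not event[i]:
            continue
        at_risk = time >= time[i]
        n_tot, n1 = at_risk.sum(), (at_risk & (group == 1)).sum()
        w = s_left[i] ** rho * (1.0 - s_left[i]) ** gamma
        num += w * ((group[i] == 1) - n1 / n_tot)     # observed - expected
        if n_tot > 1:
            var += w**2 * (n1 / n_tot) * (1 - n1 / n_tot)
    return num / np.sqrt(var)

# Delayed effect: control is Exp(1); treatment hazard drops to 0.5 after t=1.
rng = np.random.default_rng(1)
n = 200
t0 = rng.exponential(1.0, n)                        # control arm
u = rng.exponential(1.0, n)
t1 = np.where(u < 1.0, u, 1.0 + (u - 1.0) / 0.5)    # inverse cumulative hazard
time = np.concatenate([t0, t1])
event = np.ones(2 * n, dtype=bool)                  # no censoring, for brevity
group = np.concatenate([np.zeros(n, int), np.ones(n, int)])
print(weighted_logrank(time, event, group, rho=0, gamma=1))  # late emphasis
```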

6.
The treatment effect in a relatively large observational study can be described as a mixture of effects among subgroups. In particular, analysis estimating the treatment effect at the level of the entire sample potentially involves not only differential effects across subgroups of the study cohort, but also differential propensities, that is, probabilities of receiving treatment given the subjects' pretreatment history. Such complex heterogeneity is of great research interest because the analysis of treatment effects can depend substantially on the hidden data structure of effect sizes and propensities. To uncover this unseen structure, we propose a likelihood-based regression tree method that we call the marginal tree (MT). The MT method aims at a simultaneous assessment of differential effects and propensity scores, so that both become homogeneous within each terminal node of the resulting tree. We assess the simulation performance of the MT method by comparing it with existing tree methods and illustrate its use with a simulated data set in which the objective is to assess the effects of dieting behavior on subsequent emotional distress among adolescent girls.

7.
Crossover designs, or repeated measurements designs, are used for experiments in which t treatments are applied to each of n experimental units successively over p time periods. Such experiments are widely used in areas such as clinical trials, experimental psychology, and agricultural field trials. In addition to the direct effect of the treatment applied in a period, there may also be a residual, or carry-over, effect of a treatment from one or more previous periods. We use a model in which the residual effect of a treatment depends upon the treatment applied in the succeeding period; that is, a model that includes interactions between the direct and residual treatment effects. We assume that residual effects do not persist beyond one succeeding period. A particular class of strongly balanced repeated measurements designs with n = t² units, uniform on the periods, is examined. A lower bound for the A-efficiency of the designs for estimating the direct effects is derived, and it is shown that such designs are highly efficient for any number of periods p = 2, …, 2t.
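In symbols, the model just described can be sketched as follows (notation ours; the abstract does not fix notation):

```latex
% Response of unit i in period j, with treatment assignment d(i,j):
% direct effects \tau, first-order carryover effects \rho, and their
% interaction \gamma; carryover does not persist beyond one period.
\[
  y_{ij} = \mu + \pi_j + \tau_{d(i,j)} + \rho_{d(i,j-1)}
         + \gamma_{d(i,j-1),\,d(i,j)} + \varepsilon_{ij},
\]
% where \pi_j is the period effect and the \rho and \gamma terms
% vanish for j = 1 (no carryover into the first period).
```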

8.
Heterogeneity is an enormously complex problem because there are so many dimensions and variables that may influence an efficacy or safety outcome for an individual patient. This is difficult in randomized controlled trials and even more so in observational settings. An alternative approach is presented in which the individual patient becomes the "subgroup": similar patients are identified in the clinical trial database or electronic medical record and used to predict how that individual patient may respond to treatment.

9.
A Bayesian design criterion for selection experiments in plant breeding is derived using a utility function that minimizes the risk of an incorrect selection. A prior distribution on the heritability parameter completes the definition of the design optimality criterion. An example is given, with evaluations of the criterion for different prior distributions on the heritability. Although it arises from a genetic motivation, this criterion should prove useful for other types of experiments with random treatment effects.

10.
Mixed-effects models for repeated measures (MMRM) analyses using the Kenward-Roger method for adjusting standard errors and degrees of freedom under an "unstructured" (UN) covariance structure are increasingly common in primary analyses for group comparisons in longitudinal clinical trials. We evaluate the performance of an MMRM-UN analysis using the Kenward-Roger method when the outcome variance differs between treatment groups, and we provide alternative approaches for valid inference within the MMRM framework. Two simulations are conducted, covering (1) unequal variance but equal correlation between the treatment groups and (2) unequal variance and unequal correlation between the groups. Results of the first simulation indicate that MMRM-UN analysis using the Kenward-Roger method based on a common covariance matrix for the groups yields notably poor coverage probability (CP) for confidence intervals of the treatment effect when both the variance and the sample size differ between the groups. Even with 1:1 randomization, the CP falls seriously below the nominal confidence level if the treatment group with a large dropout proportion has the larger variance. MMRM analysis with the Mancl and DeRouen covariance estimator performs relatively better than the traditional MMRM-UN method. In the second simulation, the traditional MMRM-UN analysis produces a biased estimate of the treatment effect and notably poor CP, whereas MMRM analysis fitting separate UN covariance structures for each group provides an unbiased estimate and acceptable CP. We therefore do not recommend MMRM-UN analysis using the Kenward-Roger method based on a common covariance matrix for the treatment groups, although it is frequently seen in applications, when heteroscedasticity between the groups is apparent in incomplete longitudinal data.
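The essence of the coverage problem can be seen in a much simpler cross-sectional analogue (our sketch, not the authors' MMRM simulation): with unequal variances and unequal group sizes, a pooled-variance interval, the analogue of assuming a common covariance matrix, undercovers, while a Welch interval, the analogue of fitting separate covariance structures per group, holds its nominal level.

```python
# Coverage of pooled-variance vs. Welch confidence intervals for a mean
# difference when the smaller group has the larger variance (illustrative
# cross-sectional analogue of the MMRM heteroscedasticity problem).
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n1, n2, sd1, sd2, reps = 20, 60, 2.0, 1.0, 20000
cover_pooled = cover_welch = 0
for _ in range(reps):
    x1 = rng.normal(0, sd1, n1)
    x2 = rng.normal(0, sd2, n2)
    diff = x1.mean() - x2.mean()          # true difference is 0
    # Pooled-variance CI (common-covariance analogue).
    sp2 = ((n1 - 1) * x1.var(ddof=1) + (n2 - 1) * x2.var(ddof=1)) / (n1 + n2 - 2)
    se_p = np.sqrt(sp2 * (1 / n1 + 1 / n2))
    cover_pooled += abs(diff) <= stats.t.ppf(0.975, n1 + n2 - 2) * se_p
    # Welch CI (separate-covariance analogue), Satterthwaite df.
    v1, v2 = x1.var(ddof=1) / n1, x2.var(ddof=1) / n2
    se_w = np.sqrt(v1 + v2)
    df_w = (v1 + v2) ** 2 / (v1**2 / (n1 - 1) + v2**2 / (n2 - 1))
    cover_welch += abs(diff) <= stats.t.ppf(0.975, df_w) * se_w
print(f"pooled coverage: {cover_pooled / reps:.3f}")  # well below 0.95
print(f"Welch coverage:  {cover_welch / reps:.3f}")   # close to 0.95
```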

11.
Educating clinical research staff in statistical concepts is important for pharmaceutical companies: such understanding helps staff communicate with statisticians in a common language when designing clinical trials and interpreting their results. These staff have little time for a one-semester, or even one-week, continuing education course in statistics. Faced with this reality, we developed a three-module course, totalling 1.5 days and taught over a period of one month, that addresses the needs of this audience. We describe the format and content of the course and provide references that can serve as a resource for teaching such a course.

12.
Clinical studies aimed at identifying effective treatments to reduce the risk of disease or death often require long-term follow-up of participants in order to observe enough events to estimate the treatment effect precisely. In such studies, observing the outcome of interest during follow-up may be difficult, and high rates of censoring may be observed, which often reduces the power of straightforward statistical methods developed for time-to-event data. Alternative methods have been proposed that take advantage of auxiliary information to potentially improve efficiency when estimating marginal survival and improve power when testing for a treatment effect. Recently, Parast et al. (J Am Stat Assoc 109(505):384–394, 2014) proposed a landmark estimation procedure for estimating survival and treatment effects in a randomized clinical trial setting and demonstrated that significant gains in efficiency and power could be obtained by incorporating intermediate event information as well as baseline covariates. However, the procedure requires the assumption that each individual's potential outcomes under treatment and control are independent of treatment group assignment, which is unlikely to hold in an observational setting. In this paper we develop the landmark estimation procedure for use in an observational setting. In particular, we incorporate inverse probability of treatment weights (IPTW) into the landmark estimation procedure to account for selection bias on observed baseline (pretreatment) covariates. We demonstrate that consistent estimates of survival and treatment effects can be obtained using IPTW, with improved efficiency from the auxiliary intermediate event and baseline information. We compare our proposed estimates to those obtained using the Kaplan–Meier estimator, the original landmark estimation procedure, and the IPTW Kaplan–Meier estimator. We illustrate the resulting reduction in bias and gains in efficiency through a simulation study and apply our procedure to an AIDS dataset to examine the effect of previous antiretroviral therapy on survival.
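A minimal sketch of the IPTW ingredient of the procedure (the landmark and auxiliary-covariate machinery is omitted; the simulated data and variable names are illustrative): estimate propensity scores from baseline covariates, weight each subject by the inverse probability of the treatment actually received, and compute a weighted Kaplan-Meier curve per arm.

```python
# IPTW-weighted Kaplan-Meier sketch on simulated observational data.
import numpy as np
from sklearn.linear_model import LogisticRegression

def weighted_km(time, event, weights, grid):
    """Weighted Kaplan-Meier survival estimate evaluated on a time grid."""
    order = np.argsort(time)
    time, event, weights = time[order], event[order], weights[order]
    surv, s = [], 1.0
    for i in range(len(time)):
        if event[i]:
            s *= 1.0 - weights[i] / weights[i:].sum()  # weighted risk set
        surv.append(s)
    surv = np.array(surv)
    return np.array([surv[time <= t][-1] if (time <= t).any() else 1.0
                     for t in grid])

rng = np.random.default_rng(2)
n = 1000
x = rng.normal(size=n)                              # baseline covariate
a = rng.binomial(1, 1 / (1 + np.exp(-x)))           # confounded treatment
t = rng.exponential(np.exp(0.2 * x + 0.7 * a))      # treatment prolongs survival
c = rng.exponential(2.0, n)                         # censoring time
time, event = np.minimum(t, c), (t <= c)

ps = LogisticRegression().fit(x[:, None], a).predict_proba(x[:, None])[:, 1]
w = np.where(a == 1, 1 / ps, 1 / (1 - ps))          # IPT weights

grid = np.linspace(0.1, 2.0, 5)
print("treated:", weighted_km(time[a == 1], event[a == 1], w[a == 1], grid))
print("control:", weighted_km(time[a == 0], event[a == 0], w[a == 0], grid))
```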

13.
Crossover designs are used for a variety of applications. While these designs have a number of attractive features, they also induce special problems and concerns, one of which is the possible presence of carryover effects. Even with washout periods, which for many applications are widely accepted as indispensable, the effect of a treatment from a previous period may not be completely eliminated. A model that has recently received renewed attention in the literature assumes that first-order carryover effects are proportional to direct treatment effects. Under this model, assuming that the constant of proportionality is known, we identify optimal and efficient designs for the direct effects for different values of the constant. We also consider the implications of these results for the case in which the constant of proportionality is not known.
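In symbols, the proportional-carryover model can be sketched as follows (notation ours; the abstract does not fix notation):

```latex
% First-order carryover proportional to the direct treatment effect,
% with known constant of proportionality \lambda:
\[
  y_{ij} = \mu + \pi_j + \tau_{d(i,j)} + \lambda\,\tau_{d(i,j-1)}
         + \varepsilon_{ij},
\]
% i.e., the free carryover parameters \rho_t of the usual model are
% restricted to \rho_t = \lambda\tau_t; the carryover term vanishes
% for j = 1 (no carryover into the first period).
```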

14.
Higher-order crossover designs have drawn considerable attention in clinical trials because of their ability to test direct treatment effects in the presence of carry-over effects. The important question when applying higher-order crossover designs in practice is how to choose, from the various alternatives, a design with both statistical and cost efficiency. In this paper, we propose a general cost function and use it to compare five statistically optimal or near-optimal designs for a two-treatment study under different carry-over models. Based on our study, to achieve both statistical and cost efficiency, a four-period, four-sequence crossover design is generally recommended under the simple carry-over or no carry-over models, and a three-period, two-sequence crossover design is generally recommended under steady-state carry-over models.

15.
Adaptive designs are sometimes used in phase III clinical trials with the goal of allocating a larger number of patients to the better treatment. In the present paper we apply adaptive designs to a two-treatment, two-period crossover trial with binary treatment responses, in the presence of possible carry-over effects. We use simple designs to choose between the possible treatment combinations AA, AB, BA, and BB, the goal being to use the better treatment a larger proportion of the time. We calculate the allocation proportions to the possible treatment combinations and their standard deviations. We also investigate related inferential problems, for which the relevant asymptotics are derived. The proposed procedure is compared with a possible competitor, and real data sets illustrate the applicability of the proposed design.
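The flavor of such response-adaptive allocation can be sketched with a generic urn rule (a stand-in for illustration, not the authors' specific design; the success probabilities are hypothetical): a success with a sequence adds a ball for that sequence, so better sequences receive a growing share of patients.

```python
# Generic response-adaptive urn allocation over the four crossover
# sequences AA, AB, BA, BB with binary responses (illustrative only).
import numpy as np

rng = np.random.default_rng(3)
seqs = ["AA", "AB", "BA", "BB"]
p_success = {"AA": 0.40, "AB": 0.65, "BA": 0.55, "BB": 0.45}  # hypothetical
urn = {s: 1.0 for s in seqs}        # one ball per sequence to start
counts = {s: 0 for s in seqs}

for _ in range(500):                # patients enter sequentially
    total = sum(urn.values())
    s = rng.choice(seqs, p=[urn[q] / total for q in seqs])
    counts[s] += 1
    if rng.random() < p_success[s]: # observe the binary response
        urn[s] += 1.0               # reward a successful sequence

print({s: counts[s] / 500 for s in seqs})  # better sequences get larger shares
```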

16.
When the data from a long-term clinical trial are reviewed continually over time for evidence of adverse or beneficial treatment effects, the classical significance tests are not appropriate. A simulation procedure is described which provides the correct critical regions corresponding to specified frequencies of data reviews over the course of the trial. Several different types of critical boundaries (e.g., horizontal, sloping, stepped, and asymmetric) are compared with respect to statistical power and closeness to the classical critical values at the final data review.
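A minimal sketch of the simulation idea (our construction, for the simplest case of a horizontal boundary and a known-variance z-statistic): simulate many null trials monitored at K looks and take the appropriate quantile of the maximum absolute z-statistic as the critical value.

```python
# Simulating the critical value for K interim looks at a fixed overall level.
import numpy as np

rng = np.random.default_rng(4)
K, n_per_look, reps, alpha = 5, 50, 20000, 0.05
max_abs_z = np.empty(reps)
for r in range(reps):
    x = rng.normal(0.0, 1.0, K * n_per_look)     # null data, variance 1
    z = np.empty(K)
    for k in range(1, K + 1):
        m = k * n_per_look
        z[k - 1] = x[:m].sum() / np.sqrt(m)      # z-statistic at look k
    max_abs_z[r] = np.abs(z).max()

c = np.quantile(max_abs_z, 1 - alpha)
print(f"horizontal boundary for {K} looks: {c:.2f}")  # exceeds 1.96
print(f"overall size of naive 1.96 rule: {(max_abs_z > 1.96).mean():.3f}")
```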

17.
The Cox regression model is often used in the analysis of survival data because it provides a convenient summary of covariate effects in terms of relative risks. The proportional hazards assumption may not hold, however; a typical violation is time-varying covariate effects. In such scenarios one may use more flexible models, but their results can be complicated to communicate, and simple measures of, say, a treatment effect remain desirable. In this paper we focus on the odds-of-concordance measure recently studied by Schemper et al. (Stat Med 28:2473–2489, 2009), who suggested estimating it using weighted Cox regression (WCR). Although WCR may work in many scenarios, no formal proof can be established. We suggest an alternative estimator of the odds-of-concordance measure based on the Aalen additive hazards model. In contrast to WCR, the large-sample properties of this estimator can be derived, making formal inference possible. The estimator also allows for additional covariate effects.

18.
There is a large literature on methods of analysis for randomized trials with noncompliance, most of which focuses on the effect of treatment on the average outcome. This paper considers evaluating the effect of treatment on the entire outcome distribution and on general functions of this effect. For distributional treatment effects, fully non-parametric and fully parametric approaches have been proposed; the fully non-parametric approach can be inefficient, while the fully parametric approach is not robust to violations of the distributional assumptions. We develop a semiparametric instrumental variable method based on the empirical likelihood approach. Our method can be applied to general outcomes and general functions of outcome distributions, and it allows us to predict a subject's latent compliance class on the basis of an observed outcome value in the observed assignment and treatment-received groups. Asymptotic results for the estimators and the likelihood ratio statistic are derived. A simulation study shows that our estimators of various treatment effects are substantially more efficient than the currently used fully non-parametric estimators. The method is illustrated by an analysis of data from a randomized trial of an encouragement intervention to improve adherence to prescribed depression treatments among depressed elderly patients in primary care practices.

19.
For testing the effectiveness of a treatment on a binary outcome, a bewildering range of methods has been proposed. How similar are all these tests? What are their theoretical strengths and weaknesses? Which are to be recommended, and on what coherent basis? In this paper, we take seven standard but imperfect tests and apply three different methods of adjustment to ensure size control: maximization (M), restricted maximization (B), and bootstrap/estimation (E). Across a wide range of conditions, we compute the exact size and power of the 7 basic and 21 adjusted tests. We devise two new measures of size bias and intrinsic power, and employ novel graphical tools to summarise a huge set of results. Amongst the 7 basic tests, Liebermeister's test best controls size but can still be conservative. Amongst the adjusted tests, E-tests clearly have the best power, and results are very stable across different conditions.
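The core computation behind such adjustments can be sketched for the maximization (M) method (our illustration, using an unadjusted two-proportion z-test as the basic test): enumerate the rejection region exactly and maximize the null rejection probability over the nuisance parameter, the common success probability p.

```python
# Exact size of a two-proportion z-test, maximized over the nuisance p.
import numpy as np
from scipy.stats import binom

n1 = n2 = 15
crit = 1.96

# Rejection region of the basic test over all (x1, x2) outcome pairs.
x1, x2 = np.meshgrid(np.arange(n1 + 1), np.arange(n2 + 1), indexing="ij")
p1h, p2h = x1 / n1, x2 / n2
pbar = (x1 + x2) / (n1 + n2)
se = np.sqrt(pbar * (1 - pbar) * (1 / n1 + 1 / n2))
with np.errstate(divide="ignore", invalid="ignore"):
    z = np.where(se > 0, (p1h - p2h) / se, 0.0)
reject = np.abs(z) > crit

def size_at(p):
    """Exact null rejection probability at common success probability p."""
    f1 = binom.pmf(np.arange(n1 + 1), n1, p)
    f2 = binom.pmf(np.arange(n2 + 1), n2, p)
    return (np.outer(f1, f2) * reject).sum()

grid = np.linspace(0.001, 0.999, 999)
sizes = [size_at(p) for p in grid]
print(f"exact size = {max(sizes):.4f} at p = {grid[int(np.argmax(sizes))]:.3f}")
```

The M adjustment then inflates the critical value until this maximized size is at most the nominal level.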

20.
When we look around an imperfect world, we feel an understandable impulse to improve matters. We may therefore decide to intervene, by prescribing medical treatment or by introducing crime reduction measures. But how do we know that what we do is likely to work? In medicine the standard answer is to do a trial; not surprisingly, the same is true in crime reduction. But, says Paul Marchant, the lessons learned from medical trials have not been implemented in the latter field.
