Similar Articles (20 results found)
1.
Baseline adjustment is an important consideration in thorough QT studies for non‐antiarrhythmic drugs. For crossover studies with period‐specific pre‐dose baselines, we propose a by‐time‐point analysis of covariance model with change from pre‐dose baseline as response, treatment as a fixed effect, pre‐dose baseline for current treatment and pre‐dose baseline averaged across treatments as covariates, and subject as a random effect. Additional factors such as period and sex should be included in the model as appropriate. Multiple pre‐dose measurements can be averaged to obtain a pre‐dose‐averaged baseline and used in the model. We provide conditions under which the proposed model is more efficient than other models. We demonstrate the efficiency and robustness of the proposed model both analytically and through simulation studies. The advantage of the proposed model is also illustrated using the data from a real clinical trial. Copyright © 2014 John Wiley & Sons, Ltd.
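A minimal sketch of this kind of model, for a single time point, can be written with simulated data. The code below is an illustrative assumption, not the authors' actual specification: it fits change from pre-dose baseline on treatment, the period-specific pre-dose baseline, and the subject-averaged baseline, with a subject random intercept via statsmodels; all variable names and effect sizes are made up.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n_subj = 40                      # toy 2x2 crossover
subj = np.repeat(np.arange(n_subj), 2)
trt = np.tile([0, 1], n_subj)    # treatment indicator per period
subj_eff = rng.normal(0.0, 1.0, n_subj)[subj]
pre = subj_eff + rng.normal(0.0, 0.5, 2 * n_subj)            # period-specific pre-dose baseline
post = 0.6 * pre + 0.3 * trt + rng.normal(0.0, 0.5, 2 * n_subj)

df = pd.DataFrame({"subj": subj, "trt": trt, "pre": pre, "change": post - pre})
# covariates: pre-dose baseline for the current treatment, plus the
# subject's pre-dose baseline averaged across treatments
df["pre_avg"] = df.groupby("subj")["pre"].transform("mean")

fit = smf.mixedlm("change ~ trt + pre + pre_avg", df, groups=df["subj"]).fit()
print(fit.params["trt"])   # should land near the simulated effect of 0.3
```

Including both the period-specific baseline and its subject average is what separates within-subject from between-subject baseline information, which is the source of the efficiency gain the abstract describes.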

2.
Baseline adjustment is an important consideration in thorough QT studies for nonantiarrhythmic drugs. For crossover studies with period‐specific baseline days, we propose an analysis of covariance model with change from time‐matched baseline as response, time‐matched baseline for the current treatment, day‐averaged baseline for the current treatment, time‐matched baseline averaged across treatments, and day‐averaged baseline averaged across treatments as covariates. This model adjusts for within‐subject diurnal effects for each treatment and is more efficient than commonly used models for treatment comparisons. We illustrate the benefit using real clinical trial data. Copyright © 2013 John Wiley & Sons, Ltd.

3.
Multiple assessments of an efficacy variable are often conducted prior to the initiation of randomized treatments in clinical trials as baseline information. Two goals are investigated in this article: the first is the choice of these baselines in the analysis of covariance (ANCOVA) to increase statistical power, and the second is the magnitude of power loss when a continuous efficacy variable is dichotomized into a categorical variable, as is commonly reported in the biomedical literature. A statistical power analysis is developed with extensive simulations based on data from clinical trials in study participants with end-stage renal disease (ESRD). It is found that the baseline choices primarily depend on the correlations among the baselines and the efficacy variable, with substantial gains for correlations greater than 0.6 and negligible gains for correlations less than 0.2. Continuous efficacy variables always give higher statistical power in the ANCOVA modeling, and dichotomizing the efficacy variable generally decreases the statistical power by 25%, an important practical consideration when setting the sample size and budget of a clinical trial. These findings can be easily applied in and extended to other clinical trials with similar designs.
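The power loss from dichotomization can be demonstrated with a small simulation. This is a generic sketch (median split, t-test vs. chi-square), not the authors' ESRD-based simulation; sample size and effect size are assumptions.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n, delta, n_sim = 60, 0.5, 2000   # per-group n, standardized effect, replications
hits_cont = hits_bin = 0
for _ in range(n_sim):
    x = rng.normal(0.0, 1.0, n)       # control
    y = rng.normal(delta, 1.0, n)     # treatment
    # analysis on the continuous scale
    if stats.ttest_ind(x, y).pvalue < 0.05:
        hits_cont += 1
    # dichotomize at the pooled median (responder yes/no) and test the 2x2 table
    cut = np.median(np.r_[x, y])
    tab = [[(x > cut).sum(), (x <= cut).sum()],
           [(y > cut).sum(), (y <= cut).sum()]]
    if stats.chi2_contingency(tab)[1] < 0.05:
        hits_bin += 1
print(hits_cont / n_sim, hits_bin / n_sim)   # continuous power exceeds binary power
```

With these assumed settings the continuous analysis reaches roughly the power level a textbook calculation predicts, while the median-split analysis loses a noticeable fraction of it, consistent with the abstract's message.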

4.
The analysis of covariance (ANCOVA) is often used in analyzing clinical trials that make use of “baseline” response. Unlike Crager [1987. Analysis of covariance in parallel-group clinical trials with pretreatment baseline. Biometrics 43, 895–901.], we show that for a random baseline covariate, the ordinary least squares (OLS)-based ANCOVA method provides invalid unconditional inference for the test of treatment effect when heterogeneous regression exists for the baseline covariate across different treatments. To correctly address the random feature of baseline response, we propose to directly model the pre- and post-treatment measurements as repeated outcome values of a subject. This bivariate modeling method is evaluated and compared with the ANCOVA method by a simulation study under a wide variety of settings. We find that the bivariate modeling method, applying the Kenward–Roger approximation and assuming distinct general variance–covariance matrices for different treatments, performs the best in analyzing a clinical trial that makes use of random baseline measurements.

5.
In clinical trials with repeated measurements, the responses from each subject are measured multiple times during the study period. Two approaches have been widely used to assess the treatment effect: one compares the rate of change between two groups, and the other tests the time-averaged difference (TAD). While sample size calculations based on comparing the rate of change between two groups have been reported by many investigators, the literature has paid relatively little attention to sample size estimation for the TAD in the presence of heterogeneous correlation structure and missing data in repeated measurement studies. In this study, we investigate sample size calculation for the comparison of time-averaged responses between treatment groups in clinical trials with longitudinally observed binary outcomes. The generalized estimating equation (GEE) approach is used to derive a closed-form sample size formula, which is flexible enough to account for arbitrary missing patterns and correlation structures. In particular, we demonstrate that the proposed sample size can accommodate a mixture of missing patterns, which is frequently encountered by practitioners in clinical trials. To our knowledge, this is the first study that considers a mixture of missing patterns in sample size calculation. Our simulation shows that the nominal power and type I error are well preserved over a wide range of design parameters. Sample size calculation is illustrated through an example.
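A simplified simulation conveys what a time-averaged comparison of correlated binary outcomes looks like. This sketch is not the paper's GEE-based formula: it induces within-subject correlation through a latent subject effect and compares subject-level time averages with a t-test; all parameter values are assumptions.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)

def subject_means(p, n_subj, n_times=4, sd=0.8):
    """Correlated binary repeated measures via a latent subject effect
    on the logit scale; returns each subject's time-averaged response."""
    b = rng.normal(0.0, sd, n_subj)
    prob = 1.0 / (1.0 + np.exp(-(np.log(p / (1 - p)) + b)))
    y = rng.random((n_subj, n_times)) < prob[:, None]
    return y.mean(axis=1)

def tad_power(p_ctrl, p_trt, n_subj=50, n_sim=1000):
    hits = 0
    for _ in range(n_sim):
        m0 = subject_means(p_ctrl, n_subj)
        m1 = subject_means(p_trt, n_subj)
        hits += stats.ttest_ind(m0, m1).pvalue < 0.05
    return hits / n_sim

power = tad_power(0.3, 0.5)   # power for a real difference
alpha = tad_power(0.3, 0.3)   # empirical type I error under the null
print(power, alpha)
```

A closed-form GEE sample size formula, as in the paper, would replace the simulation loop; a simulation like this is still useful for checking that nominal power and type I error are preserved under a chosen missingness and correlation structure.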

6.
We consider a two-period crossover study in which each patient is measured on the response variable at the start as well as at the end of both periods. We examine models in which the carryover effect at the start of the second period may be different from the carryover effect at the end, and in which the correlations between observations decrease as a function of the time between them.

In trials with a relatively short washout period, we recommend that the second baseline measurement not be incorporated into the analysis and that the data be evaluated by analysis of covariance, with the difference between the post-treatment values as the response variable and the first period's baseline value as the covariate. The absence of carryover effects must be assumed.

When the washout period is moderately long (comparable in length to either treatment period), the preferred analysis for a difference between direct treatment effects will again generally be based on the differences between post-treatment values. An analysis based on changes from baseline would, under certain assumptions about the form of the variance-covariance matrix, be preferred only for quite long washout periods and large correlations between observations. Even then, the efficiency of the test for equality of direct effects is improved if the difference between the baseline values is used as the covariate.

7.
Mixed‐effects models for repeated measures (MMRM) analyses using the Kenward‐Roger method for adjusting standard errors and degrees of freedom in an “unstructured” (UN) covariance structure are increasingly common in primary analyses for group comparisons in longitudinal clinical trials. We evaluate the performance of an MMRM‐UN analysis using the Kenward‐Roger method when the variance of the outcome differs between treatment groups, and we provide alternative approaches for valid inference in the MMRM analysis framework. Two simulations are conducted for cases with (1) unequal variance but equal correlation between the treatment groups and (2) unequal variance and unequal correlation between the groups. Our results in the first simulation indicate that MMRM‐UN analysis using the Kenward‐Roger method based on a common covariance matrix for the groups yields notably poor coverage probability (CP) for confidence intervals of the treatment effect when both the variance and the sample size differ between the groups. In addition, even when the randomization ratio is 1:1, the CP falls seriously below the nominal confidence level if the treatment group with the larger dropout proportion also has the larger variance. MMRM analysis with the Mancl and DeRouen covariance estimator shows relatively better performance than the traditional MMRM‐UN analysis. In the second simulation, the traditional MMRM‐UN analysis leads to a biased estimate of the treatment effect and notably poor CP, whereas MMRM analysis fitting separate UN covariance structures for each group provides an unbiased estimate of the treatment effect and an acceptable CP. We therefore do not recommend MMRM‐UN analysis using the Kenward‐Roger method based on a common covariance matrix for treatment groups when heteroscedasticity between the groups is apparent in incomplete longitudinal data, although this approach is frequently seen in applications.
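The coverage problem described above has a simple cross-sectional analogue that can be checked directly: a confidence interval built from a common (pooled) variance versus one built from separate per-group variances (Welch), when the smaller group has the larger variance. This toy simulation is a stand-in for the full MMRM comparison, not the authors' study; all settings are assumptions.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
n1, n2 = 80, 20          # the small arm gets the large variance
s1, s2 = 1.0, 3.0
n_sim = 2000
cover_pooled = cover_welch = 0
for _ in range(n_sim):
    x = rng.normal(0.0, s1, n1)
    y = rng.normal(0.0, s2, n2)          # true difference is zero
    d = y.mean() - x.mean()
    v1, v2 = x.var(ddof=1), y.var(ddof=1)
    # common-variance CI (analogue of fitting one covariance matrix for both groups)
    sp2 = ((n1 - 1) * v1 + (n2 - 1) * v2) / (n1 + n2 - 2)
    half = stats.t.ppf(0.975, n1 + n2 - 2) * np.sqrt(sp2 * (1 / n1 + 1 / n2))
    cover_pooled += abs(d) <= half
    # separate-variance (Welch) CI (analogue of per-group covariance structures)
    se = np.sqrt(v1 / n1 + v2 / n2)
    dof = se**4 / ((v1 / n1) ** 2 / (n1 - 1) + (v2 / n2) ** 2 / (n2 - 1))
    cover_welch += abs(d) <= stats.t.ppf(0.975, dof) * se
print(cover_pooled / n_sim, cover_welch / n_sim)
```

The pooled interval undercovers badly in this configuration while the separate-variance interval stays near 95%, mirroring the MMRM finding that a common covariance matrix is unsafe under heteroscedasticity with unequal group sizes.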

8.
We investigate mixed analysis of covariance models for the 'one-step' assessment of conditional QT prolongation. Initially, we consider three different covariance structures for the data, where between-treatment covariance of repeated measures is modelled respectively through random effects, random coefficients, and through a combination of random effects and random coefficients. In all three of those models, an unstructured covariance pattern is used to model within-treatment covariance. In a fourth model, proposed earlier in the literature, between-treatment covariance is modelled through random coefficients but the residuals are assumed to be independent identically distributed (i.i.d.). Finally, we consider a mixed model with saturated covariance structure. We investigate the precision and robustness of those models by fitting them to a large group of real data sets from thorough QT studies. Our findings suggest: (i) Point estimates of treatment contrasts from all five models are similar. (ii) The random coefficients model with i.i.d. residuals is not robust; the model potentially leads to both under- and overestimation of standard errors of treatment contrasts and therefore cannot be recommended for the analysis of conditional QT prolongation. (iii) The combined random effects/random coefficients model does not always converge; in the cases where it converges, its precision is generally inferior to the other models considered. (iv) Both the random effects and the random coefficients model are robust. (v) The random effects, the random coefficients, and the saturated model have similar precision and all three models are suitable for the one-step assessment of conditional QT prolongation.

9.
In this paper, a simulation study is conducted to systematically investigate the impact of different types of missing data on six statistical analyses: four likelihood‐based linear mixed effects models and analysis of covariance (ANCOVA) using two different data sets, in non‐inferiority trial settings for the analysis of longitudinal continuous data. ANCOVA is valid when the missing data are missing completely at random. Likelihood‐based linear mixed effects model approaches are valid when the missing data are missing at random. The pattern‐mixture model (PMM) was developed to incorporate a non‐random missingness mechanism. Our simulations suggest that two linear mixed effects models, one using an unstructured covariance matrix for within‐subject correlation with no random effects and one using a first‐order autoregressive covariance matrix for within‐subject correlation with random coefficient effects, provide good control of the type I error (T1E) rate when the missing data are missing completely at random or missing at random. ANCOVA using a last‐observation‐carried‐forward imputed data set is the worst method in terms of bias and T1E rate. The PMM does not show much improvement in controlling the T1E rate compared with the other linear mixed effects models when the missing data are not missing at random, and is markedly inferior when the missing data are missing at random. Copyright © 2009 John Wiley & Sons, Ltd.
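For readers unfamiliar with the last-observation-carried-forward (LOCF) imputation criticized above, it amounts to a per-subject forward fill. A minimal pandas sketch with made-up toy data:

```python
import numpy as np
import pandas as pd

# long-format longitudinal data with monotone dropout (NaN after dropout)
df = pd.DataFrame({
    "subj":  [1, 1, 1, 2, 2, 2, 3, 3, 3],
    "visit": [1, 2, 3, 1, 2, 3, 1, 2, 3],
    "y":     [5.0, 6.0, 7.0, 4.0, np.nan, np.nan, 6.0, 5.5, np.nan],
})
# LOCF: carry each subject's last observed value forward within that subject
df["y_locf"] = df.groupby("subj")["y"].ffill()
print(df["y_locf"].tolist())
# → [5.0, 6.0, 7.0, 4.0, 4.0, 4.0, 6.0, 5.5, 5.5]
```

Freezing a dropout's trajectory at its last value is exactly what produces the bias and type I error inflation the simulation study reports, which is why likelihood-based mixed models on the observed data are preferred.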

10.
Confirmatory randomized clinical trials with a stratified design may have ordinal response outcomes, i.e., either ordered categories or continuous determinations that are not compatible with an interval scale. Also, multiple endpoints are often collected when a single endpoint does not represent the overall efficacy of the treatment. In addition, random baseline imbalances and missing values can add another layer of difficulty to the analysis plan. Therefore, an approach that provides a consolidated strategy for all of these issues collectively is essential. For a real case example from a clinical trial comparing a test treatment and a control for pain management in patients with osteoarthritis, which has all of the aforementioned issues, multivariate Mann‐Whitney estimators with stratification adjustment are applicable to the strictly ordinal responses under the stratified design. Randomization‐based nonparametric analysis of covariance is applied to account for possible baseline imbalances. Several approaches that handle missing values are provided. A global test followed by a closed testing procedure controls the familywise error rate in the strong sense for the analysis of multiple endpoints. Four outcomes indicating joint pain, stiffness, and functional status were analyzed collectively and also individually through these procedures. Treatment efficacy was observed in the combined endpoint as well as in the individual endpoints. The proposed approach is effective in addressing the aforementioned problems simultaneously and is straightforward to implement.

11.
Crossover designs have some advantages over standard clinical trial designs and they are often used in trials evaluating the efficacy of treatments for infertility. However, clinical trials of infertility treatments violate a fundamental condition of crossover designs, because women who become pregnant in the first treatment period are not treated in the second period. In previous research, to deal with this problem, some new designs, such as re‐randomization designs, and analysis methods including the logistic mixture model and the beta‐binomial mixture model were proposed. Although the performance of these designs and methods has previously been evaluated in large‐scale clinical trials with sample sizes of more than 1000 per group, the actual sample sizes of infertility treatment trials are usually around 100 per group. The most appropriate design and analysis for these moderate‐scale clinical trials are currently unclear. In this study, we conducted simulation studies to determine the appropriate design and analysis method of moderate‐scale clinical trials for irreversible endpoints by evaluating the statistical power and bias in the treatment effect estimates. The Mantel–Haenszel method had similar power and bias to the logistic mixture model. The crossover designs had the highest power and the smallest bias. We recommend using a combination of the crossover design and the Mantel–Haenszel method for two‐period, two‐treatment clinical trials with irreversible endpoints. Copyright © 2015 John Wiley & Sons, Ltd.
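The recommended Mantel-Haenszel analysis stratifies 2x2 tables by period and pools a common odds ratio. A sketch with invented counts (these numbers are illustrative assumptions, not data from the paper) using statsmodels:

```python
import numpy as np
from statsmodels.stats.contingency_tables import StratifiedTable

# one 2x2 table per period; rows = treatment A / B, columns = pregnant yes / no
tables = [
    np.array([[18, 32], [10, 40]]),   # period 1
    np.array([[12, 30], [7, 35]]),    # period 2
]
st = StratifiedTable(tables)
print(st.oddsratio_pooled)            # Mantel-Haenszel common odds ratio
print(st.test_null_odds().pvalue)     # Cochran-Mantel-Haenszel test of OR = 1
```

Stratifying by period is what lets the method respect the crossover structure even though pregnant women drop out of the second period.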

12.
In pharmaceutical research, clinical trial methods are used to identify valuable treatments and compare their efficacy with that of a standard control therapy. Although clinical trials are essential for ensuring the efficacy and postmarketing safety of a drug, conducting them is usually costly and time‐consuming. Moreover, allocating patients to treatments with little therapeutic effect is inappropriate for both ethical and cost reasons. Hence, several 2‐stage designs in the literature, aimed at reducing cost and shortening trial duration, use the conditional power obtained from interim analysis results to decide whether the less efficacious treatments should be continued in the next stage. However, the literature lacks discussion of the factors that influence the conditional power of a trial at the design stage. In this article, we calculate the optimal conditional power via the receiver operating characteristic curve method to assess the impacts on the quality of a 2‐stage design with multiple treatments, and we propose an optimal design using the minimum expected sample size for choosing the best or most promising treatment(s) among several treatments under an optimal conditional power constraint. We provide tables of the 2‐stage design subject to optimal conditional power for various combinations of design parameters and use an example to illustrate our methods.
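As background for the interim decision rule discussed above, a standard form of conditional power under the "current trend" assumption can be sketched as follows. This is the textbook group-sequential formula, not the ROC-based optimal conditional power the article develops; the inputs (interim z-statistic, information fraction, one-sided alpha) are generic assumptions.

```python
import numpy as np
from scipy.stats import norm

def conditional_power(z_interim, frac, alpha=0.025):
    """Conditional power under the current trend: the probability that the
    final one-sided z-test at level alpha rejects, given the interim
    z-statistic observed at information fraction `frac`."""
    z_a = norm.ppf(1 - alpha)
    return norm.cdf((z_interim / np.sqrt(frac) - z_a) / np.sqrt(1 - frac))

print(conditional_power(1.5, 0.5))   # promising interim result
print(conditional_power(0.2, 0.5))   # weak interim result, candidate for dropping
```

A design would compare such values against a threshold to drop under-performing treatment arms at the interim look.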

13.
In some randomized (drug versus placebo) clinical trials, the estimand of interest is the between‐treatment difference in population means of a clinical endpoint that is free from the confounding effects of “rescue” medication (e.g., HbA1c change from baseline at 24 weeks that would be observed without rescue medication regardless of whether or when the assigned treatment was discontinued). In such settings, a missing data problem arises if some patients prematurely discontinue from the trial or initiate rescue medication while in the trial, the latter necessitating the discarding of post‐rescue data. We caution that the commonly used mixed‐effects model repeated measures analysis with the embedded missing at random assumption can deliver an exaggerated estimate of the aforementioned estimand of interest. This happens, in part, due to implicit imputation of an overly optimistic mean for “dropouts” (i.e., patients with missing endpoint data of interest) in the drug arm. We propose an alternative approach in which the missing mean for the drug arm dropouts is explicitly replaced with either the estimated mean of the entire endpoint distribution under placebo (primary analysis) or a sequence of increasingly more conservative means within a tipping point framework (sensitivity analysis); patient‐level imputation is not required. A supplemental “dropout = failure” analysis is considered in which a common poor outcome is imputed for all dropouts followed by a between‐treatment comparison using quantile regression. All analyses address the same estimand and can adjust for baseline covariates. Three examples and simulation results are used to support our recommendations.
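The mean-replacement and tipping-point ideas can be sketched in a few lines. All numbers below are hypothetical toy data, and the sketch omits the standard errors and testing a real analysis would need; it only shows how the drug-arm dropout mean is swapped for the placebo mean (primary analysis, shift = 0) and then made progressively more conservative.

```python
import numpy as np

rng = np.random.default_rng(5)
# toy HbA1c changes from baseline (hypothetical, not trial data)
drug_obs = rng.normal(-1.0, 0.8, 80)   # drug-arm completers
placebo = rng.normal(-0.3, 0.8, 100)   # full placebo-arm endpoint distribution
n_drop = 20                            # drug-arm dropouts / post-rescue patients

placebo_mean = placebo.mean()
diffs = []
for shift in np.arange(0.0, 1.01, 0.25):
    # shift = 0: dropouts assigned the placebo mean (primary analysis);
    # larger shifts: increasingly conservative imputed means (tipping point grid)
    imputed_mean = placebo_mean + shift
    drug_mean = (drug_obs.sum() + n_drop * imputed_mean) / (len(drug_obs) + n_drop)
    diffs.append(drug_mean - placebo_mean)
    print(round(shift, 2), round(diffs[-1], 3))
```

Scanning the grid shows at what imputed dropout mean, if any, the estimated treatment difference "tips" from favoring the drug to not doing so.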

14.
The Multiple Comparison Procedures with Modeling Techniques (MCP-Mod) framework has recently been qualified by the U.S. Food and Drug Administration and the European Medicines Agency as fit-for-purpose for phase II studies. Nonetheless, this approach relies on the asymptotic properties of maximum likelihood (ML) estimators, which might not be reasonable for small sample sizes. In this paper, we derive improved ML estimators and corrections for their covariance matrices in the censored Weibull regression model, based on the corrective and preventive approaches. We performed two simulation studies to evaluate the ML and improved ML estimators, together with their covariance matrices, in (i) a regression framework and (ii) the MCP-Mod framework. We show that the improved ML estimators are less biased than the ML estimators, yielding Wald-type statistics that control the type I error without loss of power in both frameworks. Therefore, we recommend the use of improved ML estimators in the MCP-Mod approach to control the type I error at its nominal value for sample sizes ranging from 5 to 25 subjects per dose.

15.

We present a new estimator of the restricted mean survival time in randomized trials where there is right censoring that may depend on treatment and baseline variables. The proposed estimator leverages prognostic baseline variables to obtain equal or better asymptotic precision compared to traditional estimators. Under regularity conditions and random censoring within strata of treatment and baseline variables, the proposed estimator has the following features: (i) it is interpretable under violations of the proportional hazards assumption; (ii) it is consistent and at least as precise as the Kaplan–Meier and inverse probability weighted estimators, under identifiability conditions; (iii) it remains consistent under violations of independent censoring (unlike the Kaplan–Meier estimator) when either the censoring or survival distributions, conditional on covariates, are estimated consistently; and (iv) it achieves the nonparametric efficiency bound when both of these distributions are consistently estimated. We illustrate the performance of our method using simulations based on resampling data from a completed, phase 3 randomized clinical trial of a new surgical treatment for stroke; the proposed estimator achieves a 12% gain in relative efficiency compared to the Kaplan–Meier estimator. The proposed estimator has potential advantages over existing approaches for randomized trials with time-to-event outcomes, since existing methods either rely on model assumptions that are untenable in many applications, or lack some of the efficiency and consistency properties (i)–(iv). We focus on estimation of the restricted mean survival time, but our methods may be adapted to estimate any treatment effect measure defined as a smooth contrast between the survival curves for each study arm. We provide R code to implement the estimator.
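For orientation, the restricted mean survival time (RMST) is the area under the survival curve up to a horizon tau. The sketch below computes it from a plain Kaplan-Meier fit; it deliberately omits the covariate adjustment that gives the proposed estimator its efficiency and robustness gains, and handles ties only crudely.

```python
import numpy as np

def rmst_km(time, event, tau):
    """Restricted mean survival time up to tau: area under the
    Kaplan-Meier curve (simplified, unadjusted version)."""
    order = np.argsort(time)
    t = np.asarray(time, float)[order]
    e = np.asarray(event)[order]
    surv, prev_t, area = 1.0, 0.0, 0.0
    at_risk = len(t)
    for ti, ei in zip(t, e):
        if ti > tau:
            break
        if ei == 1:
            area += surv * (ti - prev_t)   # rectangle up to this event time
            surv *= 1.0 - 1.0 / at_risk    # Kaplan-Meier step at the event
            prev_t = ti
        at_risk -= 1                        # events and censorings leave the risk set
    return area + surv * (tau - prev_t)     # final rectangle out to tau

# all-events example: survival steps 1, .75, .5, .25 give area 2.5
print(round(rmst_km([1, 2, 3, 4], [1, 1, 1, 1], tau=4), 6))  # → 2.5
```

The estimator in the abstract improves on this baseline by modeling censoring and survival given baseline covariates, retaining interpretability when proportional hazards fails.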


16.
The authors study the estimation of domain totals and means under survey‐weighted regression imputation for missing items. They use two different approaches to inference: (i) design‐based with uniform response within classes; (ii) model‐assisted with ignorable response and an imputation model. They show that the imputed domain estimators are biased under (i) but approximately unbiased under (ii). They obtain a bias‐adjusted estimator that is approximately unbiased under (i) or (ii). They also derive linearization variance estimators. They report the results of a simulation study on the bias ratio and efficiency of alternative estimators, including a complete case estimator that requires the knowledge of response indicators.

17.
Data on the Likert scale are ubiquitous in medical research, including randomized trials. Statistical analysis of such data may be conducted using the means of raw scores or the rank information of the scores. In the context of parallel-group randomized trials, we quantify treatment effects by the probability that a subject in the treatment group has a better score than (or a win over) a subject in the control group. Asymptotic parametric and nonparametric confidence intervals for this win probability and associated sample size formulas are derived for studies with only follow-up scores, and for those with both baseline and follow-up measurements. We assessed the performance of both the parametric and nonparametric approaches using simulation studies based on real studies with Likert item and Likert scale data. The simulation results demonstrate that even without baseline adjustment, the parametric methods did not perform well in terms of bias, interval coverage percentage, balance of tail error, and assurance of achieving a pre-specified precision. In contrast, the nonparametric approach performed very well for both the unadjusted and adjusted win probability. We illustrate the methods with two examples: one using Likert item data and the other using Likert scale data. We conclude that nonparametric methods are preferable for two-group randomized trials with Likert data. Illustrative SAS code for the nonparametric approach using existing procedures is provided.
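The win probability described above can be estimated directly from the Mann-Whitney U statistic, since with midranks U counts wins plus half the ties. A Python sketch with invented Likert data (the abstract's own code is in SAS; the score distributions below are assumptions):

```python
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(7)
# toy 5-point Likert scores (higher = better); distributions are assumptions
ctrl = rng.integers(1, 6, 100)
trt = np.minimum(rng.integers(1, 6, 100) + rng.integers(0, 2, 100), 5)

res = mannwhitneyu(trt, ctrl, alternative="two-sided")
# win probability: P(treatment score beats control) + 0.5 * P(tie),
# recovered by rescaling the U statistic
win_prob = res.statistic / (len(trt) * len(ctrl))
print(round(win_prob, 3), round(res.pvalue, 4))
```

A value above 0.5 favors the treatment; the nonparametric confidence intervals in the paper are built around this same quantity.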

18.
A 3‐arm trial design that includes an experimental treatment, an active reference treatment, and a placebo is useful for assessing the noninferiority of an experimental treatment. The inclusion of a placebo arm enables the assessment of assay sensitivity and internal validation, in addition to the testing of the noninferiority of the experimental treatment compared with the reference treatment. In 3‐arm noninferiority trials, various statistical test procedures have been considered to evaluate the following 3 hypotheses: (i) superiority of the experimental treatment over the placebo, (ii) superiority of the reference treatment over the placebo, and (iii) noninferiority of the experimental treatment compared with the reference treatment. However, hypothesis (ii) can be insufficient and may not accurately assess the assay sensitivity for the noninferiority of the experimental treatment compared with the reference treatment. Thus, it can be necessary to demonstrate that the superiority of the reference treatment over the placebo exceeds the noninferiority margin (the nonsuperiority of the reference treatment compared with the placebo). Here, we propose log‐rank statistical procedures for evaluating data obtained from 3‐arm noninferiority trials to assess assay sensitivity with a prespecified margin Δ. In addition, we derive the approximate sample size and the optimal allocation that minimizes the total sample size and, hierarchically, the placebo arm sample size.

19.
The crossover trial design (AB/BA design) is often used to compare the effects of two treatments in medical science because it performs within‐subject comparisons, which increase the precision of a treatment effect (i.e., a between‐treatment difference). However, the AB/BA design cannot be applied in the presence of carryover effects and/or treatments‐by‐period interaction. In such cases, Balaam's design is a more suitable choice. Unlike the AB/BA design, Balaam's design inflates the variance of an estimate of the treatment effect, thereby reducing the statistical power of tests. This is a serious drawback of the design. Although the variance of parameter estimators in Balaam's design has been extensively studied, the estimators of the treatment effect to improve the inference have received little attention. If the estimate of the treatment effect is obtained by solving the mixed model equations, the AA and BB sequences are excluded from the estimation process. In this study, we develop a new estimator of the treatment effect and a new test statistic using the estimator. The aim is to improve the statistical inference in Balaam's design. Simulation studies indicate that the type I error of the proposed test is well controlled, and that the test is more powerful and has more suitable characteristics than other existing tests when interactions are substantial. The proposed test is also applied to analyze a real dataset. Copyright © 2015 John Wiley & Sons, Ltd.

20.
In randomized clinical trials with time‐to‐event outcomes, the hazard ratio is commonly used to quantify the treatment effect relative to a control. The Cox regression model is commonly used to adjust for relevant covariates to obtain more accurate estimates of the hazard ratio between treatment groups. However, it is well known that the treatment hazard ratio based on a covariate‐adjusted Cox regression model is conditional on the specific covariates and differs from the unconditional hazard ratio that is an average across the population. Therefore, covariate‐adjusted Cox models cannot be used when the unconditional inference is desired. In addition, the covariate‐adjusted Cox model requires the relatively strong assumption of proportional hazards for each covariate. To overcome these challenges, a nonparametric randomization‐based analysis of covariance method was proposed to estimate the covariate‐adjusted hazard ratios for multivariate time‐to‐event outcomes. However, empirical evaluations of the performance (power and type I error rate) of the method have not been studied. Although the method is derived for multivariate situations, for most registration trials, the primary endpoint is a univariate outcome. Therefore, this approach is applied to univariate outcomes, and performance is evaluated through a simulation study in this paper. Stratified analysis is also investigated. As an illustration of the method, we also apply the covariate‐adjusted and unadjusted analyses to an oncology trial. Copyright © 2015 John Wiley & Sons, Ltd.
