Similar Literature
20 similar documents found.
1.
In randomized clinical trials with time-to-event outcomes, the hazard ratio is commonly used to quantify the treatment effect relative to a control. The Cox regression model is commonly used to adjust for relevant covariates to obtain more accurate estimates of the hazard ratio between treatment groups. However, it is well known that the treatment hazard ratio based on a covariate-adjusted Cox regression model is conditional on the specific covariates and differs from the unconditional hazard ratio that is an average across the population. Therefore, covariate-adjusted Cox models cannot be used when unconditional inference is desired. In addition, the covariate-adjusted Cox model requires the relatively strong assumption of proportional hazards for each covariate. To overcome these challenges, a nonparametric randomization-based analysis of covariance method was proposed to estimate covariate-adjusted hazard ratios for multivariate time-to-event outcomes. However, the performance (power and type I error rate) of this method has not been empirically evaluated. Although the method is derived for multivariate situations, for most registration trials the primary endpoint is a univariate outcome. Therefore, in this paper the approach is applied to univariate outcomes, and its performance is evaluated through a simulation study. Stratified analysis is also investigated. As an illustration of the method, we also apply the covariate-adjusted and unadjusted analyses to an oncology trial. Copyright © 2015 John Wiley & Sons, Ltd.
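To make the conditional-versus-unconditional distinction concrete, here is the standard formulation (background, not taken from the paper itself). A covariate-adjusted Cox model with treatment indicator $Z$ and covariates $X$ specifies

$$\lambda(t \mid Z, X) = \lambda_0(t)\,\exp(\beta Z + \gamma^{\top} X),$$

so $e^{\beta}$ is the hazard ratio holding $X$ fixed. The unconditional (population-averaged) hazard ratio instead compares marginal hazards,

$$\lambda_{\mathrm{marg}}(t \mid z) = \frac{E_X\!\left[\lambda(t \mid z, X)\, S(t \mid z, X)\right]}{E_X\!\left[S(t \mid z, X)\right]}, \qquad \mathrm{HR}_{\mathrm{marg}}(t) = \frac{\lambda_{\mathrm{marg}}(t \mid 1)}{\lambda_{\mathrm{marg}}(t \mid 0)},$$

which generally differs from $e^{\beta}$ and, because of the survivor-weighted averaging, need not even be constant in $t$ when the conditional model is proportional.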

2.
Generalized linear mixed models (GLMMs) are commonly used to model the treatment effect over time while controlling for important clinical covariates. Standard software procedures often provide estimates of the outcome based on the mean of the covariates; however, these estimates will be biased for the true group means in the GLMM. Implementing GLMMs in the frequentist framework can also lead to convergence issues. A simulation study demonstrates the use of fully Bayesian GLMMs to provide unbiased estimates of group means. These models are straightforward to implement and can be used for a broad variety of outcomes (e.g., binary, categorical, and count data) that arise in clinical trials. We demonstrate the proposed method on a data set from a clinical trial in diabetes.
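As a minimal sketch of the idea (not the authors' code), the following PyMC model fits a Bayesian random-intercept logistic GLMM and estimates the group mean by averaging subject-level probabilities, rather than evaluating the inverse link at the mean covariate; the data, priors, and sample sizes are all assumptions for illustration.

```python
# Sketch: Bayesian random-intercept logistic GLMM; the group mean is the
# average of subject-level probabilities, not invlogit(mean linear predictor).
import numpy as np
import pymc as pm

rng = np.random.default_rng(1)
n_subj, n_obs = 40, 5
subj = np.repeat(np.arange(n_subj), n_obs)
x = rng.normal(size=n_subj * n_obs)           # baseline covariate (simulated)
b_true = rng.normal(0, 1.0, size=n_subj)      # subject random intercepts
p_true = 1 / (1 + np.exp(-(-0.5 + 0.8 * x + b_true[subj])))
y = rng.binomial(1, p_true)

with pm.Model() as glmm:
    beta0 = pm.Normal("beta0", 0, 5)
    beta1 = pm.Normal("beta1", 0, 5)
    sigma_b = pm.HalfNormal("sigma_b", 2)
    b = pm.Normal("b", 0, sigma_b, shape=n_subj)
    eta = beta0 + beta1 * x + b[subj]
    pm.Bernoulli("y", p=pm.math.invlogit(eta), observed=y)
    # group mean: average the subject-level probabilities over the sample
    pm.Deterministic("group_mean", pm.math.invlogit(eta).mean())
    idata = pm.sample(1000, tune=1000, chains=2, progressbar=False)

print(float(idata.posterior["group_mean"].mean()))
```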

3.
The stratified Cox model is commonly used for stratified clinical trials with time-to-event endpoints. The estimated log hazard ratio is approximately a weighted average of the corresponding stratum-specific Cox model estimates using inverse-variance weights; the latter are optimal only under the (often implausible) assumption of a constant hazard ratio across strata. Focusing on trials with limited sample sizes (50-200 subjects per treatment), we propose an alternative approach in which stratum-specific estimates are obtained using a refined generalized logrank (RGLR) approach and then combined using either sample size or minimum risk weights for overall inference. Our proposal extends the work of Mehrotra et al. to incorporate the RGLR statistic, which outperforms the Cox model in the setting of proportional hazards and small samples. This work also entails the development of a remarkably accurate plug-in formula for the variance of RGLR-based estimated log hazard ratios. We demonstrate by simulation that the proposed two-step RGLR analysis delivers notably better results than the stratified Cox model analysis, with smaller estimation bias, smaller mean squared error, and larger power, when there is a treatment-by-stratum interaction, and similar performance when there is no interaction. Additionally, our method controls the type I error rate in small samples, whereas the stratified Cox model does not. We illustrate the method using data from a clinical trial comparing two treatments for colon cancer.
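The two-step structure can be written generically as follows (the minimum risk weights of Mehrotra et al. are not reproduced here). With stratum-specific log hazard ratio estimates $\hat{\beta}_s$ and weights $w_s$,

$$\hat{\beta} = \frac{\sum_{s} w_s\, \hat{\beta}_s}{\sum_{s} w_s}, \qquad \widehat{\operatorname{Var}}(\hat{\beta}) = \frac{\sum_{s} w_s^2\, \widehat{\operatorname{Var}}(\hat{\beta}_s)}{\left(\sum_{s} w_s\right)^{2}},$$

where inverse-variance weighting takes $w_s = 1/\widehat{\operatorname{Var}}(\hat{\beta}_s)$ (optimal only under a common hazard ratio across strata) and sample size weighting takes $w_s = n_s$.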

4.
Mixed Treatment Comparisons (MTCs) enable the simultaneous meta-analysis (data pooling) of networks of clinical trials comparing two or more alternative treatments. Inconsistency models are critical in MTC to assess the overall consistency between evidence sources. Only in the absence of considerable inconsistency can the results of an MTC (consistency) model be trusted. However, inconsistency model specification is non-trivial when multi-arm trials are present in the evidence structure. In this paper, we define the parameterization problem for inconsistency models in mathematical terms and provide an algorithm for the generation of inconsistency models. We evaluate the running time of the algorithm by generating models for 15 published evidence structures.
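For orientation (the standard formulation, not the paper's algorithm): in a network with treatments $A$, $B$, $C$, the consistency assumption ties the relative-effect parameters together, e.g.

$$d_{BC} = d_{AC} - d_{AB},$$

and an inconsistency model relaxes selected evidence loops by adding inconsistency factors,

$$d_{BC} = d_{AC} - d_{AB} + \omega_{ABC}.$$

Multi-arm trials are what make the parameterization non-trivial: a three-arm $ABC$ trial is internally consistent by construction, so an $\omega$ term cannot be attached to a loop whose only evidence comes from such a trial.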

5.
Power analysis for multi-center randomized controlled trials is quite difficult to perform for non-continuous responses when site differences are modeled by random effects using the generalized linear mixed-effects model (GLMM). First, it is not possible to construct power functions analytically, because of the extreme complexity of the sampling distribution of parameter estimates. Second, Monte Carlo (MC) simulation, a popular option for estimating power for complex models, does not work within the current context because of a lack of methods and software packages that provide reliable estimates when fitting such GLMMs; at the time of writing, even statistical packages from software giants like SAS do not provide reliable estimates. Another major limitation of MC simulation is the lengthy running time for complex models such as GLMMs, particularly when estimating power for multiple scenarios of interest. We present a new approach to address these limitations. The proposed approach defines a marginal model to approximate the GLMM and estimates power without relying on MC simulation. The approach is illustrated with both real and simulated data, with the simulation study demonstrating good performance of the method.
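One classical way to connect a GLMM to a marginal model (whether the paper uses exactly this approximation is not stated in the abstract): for a random-intercept logistic GLMM with $\operatorname{logit} P(y_{ij} = 1 \mid b_i) = x_{ij}^{\top}\beta + b_i$ and $b_i \sim N(0, \sigma_b^2)$, the population-averaged coefficients are approximately attenuated,

$$\beta^{\mathrm{marg}} \approx \frac{\beta}{\sqrt{1 + c^{2} \sigma_b^{2}}}, \qquad c = \frac{16\sqrt{3}}{15\pi} \approx 0.588$$

(Zeger, Liang, and Albert, 1988), which yields closed-form marginal quantities on which standard power calculations can then be based.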

6.
Generalized linear models are commonly used to analyze categorical data such as binary, count, and ordinal outcomes. Adjusting for important prognostic factors or baseline covariates in generalized linear models may improve estimation efficiency. The model-based mean for a treatment group produced by most software packages estimates the response at the mean covariate, not the mean response for that treatment group in the studied population. Although this is not an issue for linear models, the model-based group mean estimates in generalized linear models can be seriously biased for the true group means. We propose a new method to estimate the group means consistently, with a corresponding variance estimator. Simulations showed that the proposed method produces an unbiased estimator of the group means and provides the correct coverage probability. The proposed method was applied to analyze hypoglycemia data from clinical trials in diabetes. Copyright © 2014 John Wiley & Sons, Ltd.
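A small numerical sketch of the bias being described (illustrative, with made-up coefficients and covariate distribution): in a logistic model the inverse link evaluated at the mean covariate is not the mean of the inverse link, by Jensen-type nonlinearity.

```python
# Sketch: response at the mean covariate vs. the mean response in a
# logistic model (all numbers are assumptions for illustration).
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(0.0, 2.0, size=100_000)   # covariate in one treatment group
beta0, beta1 = -1.0, 1.0                 # assumed true coefficients

def expit(z):
    return 1.0 / (1.0 + np.exp(-z))

at_mean_covariate = expit(beta0 + beta1 * x.mean())   # what software typically reports
mean_response = expit(beta0 + beta1 * x).mean()       # the group mean of interest

print(f"response at mean covariate: {at_mean_covariate:.3f}")  # ~0.27
print(f"mean response (target):     {mean_response:.3f}")      # ~0.34
```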

7.
With the development of molecular targeted drugs, predictive biomarkers have played an increasingly important role in identifying patients who are likely to receive clinically meaningful benefit from experimental drugs (i.e., the sensitive subpopulation), even in early clinical trials. For continuous biomarkers, such as mRNA levels, it is challenging to determine a cutoff value for the sensitive subpopulation, and widely accepted study designs and statistical approaches are not currently available. In this paper, we propose the Bayesian adaptive patient enrollment restriction (BAPER) approach to identify the sensitive subpopulation while restricting enrollment of patients from the insensitive subpopulation based on the results of interim analyses, in a randomized phase 2 trial with a time-to-event endpoint and a single biomarker. Applying a four-parameter change-point model to the relationship between the biomarker and the hazard ratio, we calculate the posterior distribution of the cutoff value that exhibits the target hazard ratio and use it both to restrict enrollment and to identify the sensitive subpopulation. We also consider interim monitoring rules for termination because of futility or efficacy. Extensive simulations demonstrated that our proposed approach reduced the number of enrolled patients from the insensitive subpopulation, relative to an approach with no enrollment restriction, without reducing the likelihood of a correct decision for the next trial (no-go, go with the entire population, or go with the sensitive subpopulation) or of correct identification of the sensitive subpopulation. Additionally, the four-parameter change-point model performed better over a wide range of simulation scenarios than a commonly used dichotomization approach. Copyright © 2016 John Wiley & Sons, Ltd.
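The abstract does not spell out the parameterization; one plausible four-parameter piecewise form, shown purely for orientation, lets the log hazard ratio be flat outside two change points $\tau_1 < \tau_2$ and linear between them:

$$\log \mathrm{HR}(x) = \begin{cases} \beta_1, & x \le \tau_1,\\ \beta_1 + \dfrac{\beta_2 - \beta_1}{\tau_2 - \tau_1}\,(x - \tau_1), & \tau_1 < x \le \tau_2,\\ \beta_2, & x > \tau_2, \end{cases}$$

with parameters $(\beta_1, \beta_2, \tau_1, \tau_2)$; the cutoff said to exhibit the target hazard ratio $\mathrm{HR}^{*}$ would then be the $x$ solving $\log \mathrm{HR}(x) = \log \mathrm{HR}^{*}$.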

8.
Many phase I drug combination designs have been proposed to find the maximum tolerated combination (MTC). Due to the two-dimensional nature of drug combination trials, these designs typically require complicated statistical modeling and estimation, which limits their use in practice. In this article, we propose an easy-to-implement Bayesian phase I combination design, called the Bayesian adaptive linearization method (BALM), to simplify dose finding for drug combination trials. BALM takes a dimension reduction approach. It selects a subset of combinations, through a procedure called linearization, to convert the two-dimensional dose matrix into a string of combinations that are fully ordered in toxicity. As a result, existing single-agent dose-finding methods can be used directly to find the MTC. In case the selected linear path does not contain the MTC, a dose-insertion procedure is performed to add new doses whose expected toxicity rate equals the target toxicity rate. Our simulation studies show that the proposed BALM design performs better than competing, more complicated combination designs.
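A hedged sketch of the linearization idea: extract a path through a dose matrix along which each step raises exactly one agent's dose, so that (under the usual within-agent monotonicity assumption) toxicity is non-decreasing along the path and a single-agent design can be run on it. This staircase rule is illustrative only, not necessarily the selection rule BALM actually uses.

```python
# Sketch: build a monotone "staircase" path through a J x K dose matrix.
from typing import List, Tuple

def staircase_path(n_rows: int, n_cols: int) -> List[Tuple[int, int]]:
    """Alternate column and row increments from (0, 0) to the top corner.

    Each step increases exactly one agent's dose level, so toxicity cannot
    decrease along the path, giving a fully ordered string of combinations
    (a subset of the full matrix, as in the linearization described above)."""
    path = [(0, 0)]
    r = c = 0
    while (r, c) != (n_rows - 1, n_cols - 1):
        if c < n_cols - 1:
            c += 1
        else:
            r += 1
        path.append((r, c))
        if r < n_rows - 1 and (r, c) != (n_rows - 1, n_cols - 1):
            r += 1
            path.append((r, c))
    return path

# e.g. a 3 x 4 matrix: [(0,0), (0,1), (1,1), (1,2), (2,2), (2,3)]
print(staircase_path(3, 4))
```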

9.
The longitudinal data from two published clinical trials in adult subjects with upper limb spasticity (a randomized placebo-controlled study [NCT01313299] and its long-term open-label extension [NCT01313312]) were combined. Their study designs involved repeat intramuscular injections of abobotulinumtoxinA (Dysport®), and efficacy endpoints were collected accordingly. With the objective of characterizing the pattern of response across cycles, Mixed Model Repeated Measures (MMRM) analyses and Non-Linear Random Coefficient (NLRC) analyses were performed and their results compared. The MMRM analyses, commonly used in the context of repeated measures with missing dependent data, did not impose any parametric shape on the curve of changes over time. Based on clinical expectations, the NLRC model included a negative exponential function of the number of treatment cycles, with its asymptote and rate included as random coefficients. Our analysis focused on two specific efficacy parameters reflecting complementary aspects of efficacy in the study population. A simulation study based on a similar study design was also performed to further assess the performance of each method under different patterns of response over time. This highlighted a gain of precision with the NLRC model and, most importantly, the need for its assumptions to be verified to avoid potentially biased estimates. These analyses describe a typical situation, and the conditions under which non-linear mixed modeling can provide additional insights into the behavior of efficacy parameters over time. Indeed, the resulting estimates from the negative exponential NLRC can help determine the expected maximal effect and the treatment duration required to reach it.
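A generic rendering of the negative exponential NLRC described (consistent with the abstract, though the exact parameterization is not given there): with $c_{ij}$ the treatment-cycle number for subject $i$,

$$E[y_{ij} \mid A_i, r_i] = A_i \left(1 - e^{-r_i\, c_{ij}}\right), \qquad \begin{pmatrix} A_i \\ r_i \end{pmatrix} \sim N\!\left(\begin{pmatrix} A \\ r \end{pmatrix}, \Sigma\right),$$

so the subject-specific asymptote $A_i$ is the maximal effect and the rate $r_i$ governs how quickly successive cycles approach it.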

10.
Non-compliance with the assigned dosing strategy is a common problem in randomized drug clinical trials. Recently, there has been much interest in methods for analysing treatment effects in randomized clinical trials that are subject to non-compliance. In this paper, we estimate and compare treatment effects based on the Grizzle model (GM), which assumes ignorable non-compliance, as the conventional model, and the generalized Grizzle model (GGM), which allows non-ignorable non-compliance, as the new model. A real data set on the treatment of knee osteoarthritis is used to compare these models. The results, based on likelihood ratio statistics and a simulation study, show the advantage of the proposed model (GGM) over the conventional model (GM).

11.
Nowadays, treatment regimens for cancer often involve a combination of drugs. The determination of the doses of each of the combined drugs in phase I dose escalation studies poses methodological challenges. The most common phase I design, the classic '3+3' design, has been criticized for poorly estimating the maximum tolerated dose (MTD) and for treating too many subjects at doses below the MTD. In addition, the classic '3+3' design is not able to address the challenges posed by combinations of drugs. Here, we assume that a control drug (commonly used and well-studied) is administered at a fixed dose in combination with a new agent (the experimental drug) whose appropriate dose has to be determined. We propose a randomized design in which subjects are assigned to the control or to the combination of the control and the experimental drug. The MTD is determined using a model-based Bayesian technique based on the difference in the probability of dose limiting toxicities (DLTs) between the control and the combination arm. We show, through a simulation study, that this approach provides better and more accurate estimates of the MTD. We argue that this approach can differentiate between an extremely high probability of DLT observed with the control alone and a high probability of DLT of the combination. We also report a fictive (simulated) analysis based on published data from a phase I trial of ifosfamide combined with sunitinib. Copyright © 2014 John Wiley & Sons, Ltd.
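A minimal caricature of basing decisions on the difference in DLT probability (a beta-binomial sketch; the paper's actual dose-toxicity model is not specified in the abstract): with $y_c/n_c$ and $y_e/n_e$ DLTs observed on the control and combination arms and $\mathrm{Beta}(a, b)$ priors,

$$p_c \mid D \sim \mathrm{Beta}(a + y_c,\, b + n_c - y_c), \qquad p_e \mid D \sim \mathrm{Beta}(a + y_e,\, b + n_e - y_e),$$

and escalation or de-escalation decisions can be guided by posterior quantities such as $P(p_e - p_c > \delta \mid D)$ for a tolerated excess $\delta$, which isolates the toxicity added by the experimental agent from that of the control alone.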

12.
In this study, we investigate the concept of the mean response for a treatment group, as well as its estimation and prediction, for generalized linear models with a subject-wise random effect. Generalized linear models are commonly used to analyze categorical data. The model-based mean for a treatment group usually estimates the response at the mean covariate. However, the mean response of the treatment group for the studied population is at least equally important in the context of clinical trials. New methods have been proposed to estimate such a mean response in generalized linear models; however, this has only been done when there are no random effects in the model. We suggest that, in a generalized linear mixed model (GLMM), there are at least two possible definitions of a treatment group mean response that can serve as estimation/prediction targets. The estimation of these treatment group means is important for healthcare professionals to be able to understand the absolute benefit versus risk. For both of these treatment group means, we propose a new set of methods for estimation/prediction in a GLMM with a univariate subject-wise random effect. Our methods also suggest an easy way of constructing corresponding confidence and prediction intervals for both possible treatment group means. Simulations show that the proposed confidence and prediction intervals provide correct empirical coverage probability under most circumstances. The proposed methods have also been applied to analyze hypoglycemia data from diabetes clinical trials.
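The abstract does not define the two targets; one plausible pair, shown for orientation, distinguishes whether the subject-wise random effect $b \sim N(0, \sigma_b^2)$ is set to its typical value of zero or averaged over:

$$\mu^{(1)} = E_{X}\!\left[ g^{-1}\!\left(X^{\top}\beta\right) \right] \qquad \text{vs.} \qquad \mu^{(2)} = E_{X,\, b}\!\left[ g^{-1}\!\left(X^{\top}\beta + b\right) \right],$$

a subject-typical mean versus a fully marginal mean. For a nonlinear inverse link $g^{-1}$ the two differ, which is why they can serve as distinct estimation/prediction targets.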

13.
In survey sampling, policymaking regarding the allocation of resources to subgroups (called small areas), or the determination of subgroups with specific properties in a population, should be based on reliable estimates. Information, however, is often collected at a different scale than that of these subgroups; hence, estimates can only be obtained from finer-scale data. Parametric mixed models are commonly used in small-area estimation. The relationship between predictors and response, however, may not be linear in some real situations. Recently, small-area estimation using a generalised linear mixed model (GLMM) with a penalised spline (P-spline) regression model for the fixed part has been proposed for analysing cross-sectional responses, both normal and non-normal. However, there are many situations in which the responses in small areas are serially dependent over time. Such a situation is exemplified by a data set on the annual number of visits to physicians by patients seeking treatment for asthma in different areas of Manitoba, Canada. In cases where covariates that may predict physician visits by asthma patients (e.g. age and genetic and environmental factors) do not have a linear relationship with the response, new models for analysing such data sets are required. In the current work, using both time-series and cross-sectional data methods, we propose P-spline regression models for small-area estimation under GLMMs. Our proposed model covers both normal and non-normal responses. In particular, the empirical best predictors of small-area parameters and their corresponding prediction intervals are studied, with maximum likelihood used to estimate the model parameters. The performance of the proposed approach is evaluated using simulations and by analysing two real data sets (precipitation and asthma).
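For orientation, the standard mixed-model representation of a P-spline within a GLMM (the paper's time-series extension is not reproduced here): with a truncated-polynomial basis of degree $p$ and knots $\kappa_1 < \dots < \kappa_L$,

$$g\!\left(E\!\left[y_{ij} \mid u, v_i\right]\right) = \sum_{k=0}^{p} \beta_k\, x_{ij}^{k} + \sum_{l=1}^{L} u_l\, \left(x_{ij} - \kappa_l\right)_{+}^{p} + v_i, \qquad u_l \sim N(0, \sigma_u^2), \quad v_i \sim N(0, \sigma_v^2),$$

where treating the spline coefficients $u_l$ as random effects imposes the smoothing penalty, so the nonparametric fixed part can be estimated with ordinary GLMM machinery alongside the area effects $v_i$.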

14.
Patients undergoing renal transplantation are prone to graft failure, which causes loss of follow-up measures on their blood urea nitrogen and serum creatinine levels. These two outcomes are measured repeatedly over time to assess renal function following transplantation. Loss of follow-up on these bivariate measures results in informative right censoring, a common problem in longitudinal data that should be adjusted for so that valid estimates are obtained. In this study, we propose a bivariate model that jointly models these two correlated longitudinal outcomes and generates population and individual slopes, adjusting for informative right censoring using a discrete survival approach. The proposed approach is applied to a clinical dataset of patients who had undergone renal transplantation. A simulation study validates the effectiveness of the approach.
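A generic shared-random-effects rendering of this kind of joint model (shown for orientation; the paper's exact specification is not given in the abstract): for outcomes $k = 1, 2$ measured at times $t_{ij}$,

$$y_{kij} = \beta_{0k} + \beta_{1k}\, t_{ij} + a_{ki} + b_{ki}\, t_{ij} + \varepsilon_{kij},$$

with correlated subject effects $(a_{1i}, b_{1i}, a_{2i}, b_{2i}) \sim N(0, \Sigma)$, linked to a discrete-time hazard of dropout at visit $j$,

$$\operatorname{logit} P(T_i = j \mid T_i \ge j) = \alpha_j + \gamma^{\top} (a_{1i}, b_{1i}, a_{2i}, b_{2i})^{\top},$$

so censoring is allowed to depend on the same latent intercepts and slopes being estimated, which is precisely what makes it informative.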

15.
The use of parametric linear mixed models and generalized linear mixed models to analyze longitudinal data collected during randomized controlled trials (RCTs) is conventional. The application of these methods, however, is restricted by the various assumptions the models require. When the number of observations per subject is sufficiently large and individual trajectories are noisy, functional data analysis (FDA) methods serve as an alternative to parametric longitudinal data analysis techniques. However, the use of FDA in RCTs is rare. In this paper, the effectiveness of FDA and linear mixed models (LMMs) was compared by analyzing data from rural persons living with HIV and comorbid depression enrolled in a depression treatment randomized clinical trial. Interactive voice response systems were used for weekly administration of the 10-item Self-Administered Depression Scale (SADS) over 41 weeks. Functional principal component analysis and functional regression analysis detected a statistically significant difference in SADS between telephone-administered interpersonal psychotherapy (tele-IPT) and controls, but the linear mixed effects model did not. Additional simulation studies were conducted to compare FDA and LMMs under a different nonlinear trajectory assumption. In this clinical trial, with sufficient per-subject measured outcomes and individual trajectories that are noisy and nonlinear, we found FDA methods to be a better alternative to LMMs.
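As background on the functional principal component analysis step (the standard Karhunen-Loève formulation, not specific to this paper): each subject's latent trajectory is expanded as

$$X_i(t) = \mu(t) + \sum_{k=1}^{\infty} \xi_{ik}\, \phi_k(t), \qquad \xi_{ik} = \int \left(X_i(t) - \mu(t)\right) \phi_k(t)\, dt,$$

where the $\phi_k$ are orthonormal eigenfunctions of the covariance operator and the uncorrelated scores $\xi_{ik}$ have decreasing variances $\lambda_1 \ge \lambda_2 \ge \cdots$. Truncating at a few components smooths the noisy weekly SADS trajectories, and the scores can then enter group comparisons or functional regressions.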

16.
Network meta-analysis can be implemented by using arm-based or contrast-based models. Here we focus on arm-based models and fit them using generalized linear mixed model procedures. Full maximum likelihood (ML) estimation leads to biased trial-by-treatment interaction variance estimates for heterogeneity. Thus, our objective is to investigate alternative approaches to variance estimation that reduce bias compared with full ML. Specifically, we use penalized quasi-likelihood/pseudo-likelihood and hierarchical (h-) likelihood approaches. In addition, we consider a novel model modification that yields estimators akin to the residual maximum likelihood estimator for linear mixed models. The proposed methods are compared by simulation, and two real datasets are used for illustration. Simulations show that penalized quasi-likelihood/pseudo-likelihood and h-likelihood reduce bias and yield satisfactory coverage rates. A sum-to-zero restriction and baseline contrasts for random trial-by-treatment interaction effects, as well as a residual ML-like adjustment, also reduce bias compared with an unconstrained model when ML is used, but coverage rates are not quite as good. Penalized quasi-likelihood/pseudo-likelihood and h-likelihood are therefore recommended.

17.
Assessing dose response from flexible-dose clinical trials is problematic. The true dose effect may be obscured, and even reversed, in observed data because dose is related to both previous and subsequent outcomes. To remove this selection bias, we propose marginal structural models with inverse probability of treatment weighting (IPTW). Potential clinical outcomes are compared across dose groups using a marginal structural model (MSM) based on a weighted pooled repeated measures analysis (generalized estimating equations with robust standard errors), with the dose effect represented by current dose and recent dose history, and with weights estimated from the data (via logistic regression) as products of (i) the inverse probability of receiving the dose assignments that were actually received and (ii) the inverse probability of remaining on treatment by this time. In simulations, this method led to almost unbiased estimates of the true dose effect under various scenarios. Results were compared with those obtained by unweighted analyses and by weighted analyses under various model specifications. The simulations showed that the IPTW MSM methodology is highly sensitive to model misspecification, even when the weights are known. Practitioners applying MSMs should be cautious about the challenges of implementing them with real clinical data. Clinical trial data are used to illustrate the methodology. Copyright © 2012 John Wiley & Sons, Ltd.
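A hedged sketch of the weight construction described above (not the authors' code; the column names and history covariates are hypothetical):

```python
# Sketch: per-visit IPTW weights as the cumulative product of
# 1 / P(observed dose decision | history) * 1 / P(remaining on treatment | history).
# Rows are assumed sorted by subject and visit.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

def iptw_weights(df: pd.DataFrame) -> pd.Series:
    """df: one row per subject-visit with columns 'id', 'dose_up'
    (1 = dose was increased), 'remain' (1 = still on treatment), and
    history covariates 'prev_outcome' and 'prev_dose' (hypothetical names)."""
    X = df[["prev_outcome", "prev_dose"]].to_numpy()

    # (i) probability of the dose decision actually received, given history
    m_dose = LogisticRegression().fit(X, df["dose_up"])
    proba = m_dose.predict_proba(X)
    p_dose = np.where(df["dose_up"] == 1, proba[:, 1], proba[:, 0])

    # (ii) probability of remaining on treatment, given history
    m_stay = LogisticRegression().fit(X, df["remain"])
    p_stay = m_stay.predict_proba(X)[:, 1]

    # cumulative product of inverse probabilities within each subject
    return df.assign(w=1.0 / (p_dose * p_stay)).groupby("id")["w"].cumprod()
```

In practice, stabilized weights (numerator models without the time-varying history) are often substituted to tame extreme weights, a refinement omitted here for brevity.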

18.
Consider a clinical trial in which participants are randomized to a single-dose treatment or a placebo control, and assume that the adherence level is accurately recorded. If the treatment is effective, then good adherers in the treatment group should do better than poor adherers because they received more drug; the treatment group data follow a dose–response curve. But good adherers to the placebo often do better than poor adherers too, so the observed adherence–response in the treatment group cannot be completely attributed to the treatment. Efron and Feldman proposed an adjustment to the observed adherence–response in the treatment group that uses the adherence–response in the control group. It relies on a percentile invariance assumption under which each participant's adherence percentile within their assigned treatment group does not depend on the assigned group (active drug or placebo). The Efron and Feldman approach is valid under percentile invariance, but not necessarily under departures from it. We propose an analysis based on a generalization of percentile invariance that allows adherence percentiles to be stochastically permuted across treatment groups, using a broad class of stochastic permutation models. We show that approximate maximum likelihood estimates of the underlying dose–response curve perform well when the stochastic permutation process is correctly specified and are quite robust to model misspecification.
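In symbols (a standard way to state the assumption, shown for orientation): let $F_T$ and $F_C$ be the adherence distributions in the treatment and control arms, and let $a_T(i)$ and $a_C(i)$ be participant $i$'s potential adherence under each assignment. Percentile invariance says

$$F_T\!\left(a_T(i)\right) = F_C\!\left(a_C(i)\right), \quad \text{equivalently} \quad a_C(i) = F_C^{-1}\!\left(F_T\!\left(a_T(i)\right)\right),$$

so responses can be compared between arms at matched adherence percentiles; the generalization described above replaces this deterministic quantile matching with a stochastic permutation of percentiles across arms.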

19.
Various statistical models have been proposed for two-dimensional dose finding in drug-combination trials. However, it is often a dilemma to decide which model to use when conducting a particular drug-combination trial. We make a comprehensive comparison of four dose-finding methods and, for fairness, apply the same dose-finding algorithm under the four model structures. Through extensive simulation studies, we compare the operating characteristics of these methods in various practical scenarios. The results show that different models may lead to different design properties and that no single model performs uniformly better in all scenarios. As a result, we propose using Bayesian model averaging to overcome the arbitrariness of the model specification and enhance the robustness of the design. We assign a discrete probability mass to each model as the prior model probability and then estimate the toxicity probabilities of combined doses in the Bayesian model averaging framework. During the trial, we adaptively allocate each new cohort of patients to the most appropriate dose combination by comparing the posterior estimates of the toxicity probabilities with the prespecified toxicity target. The simulation results demonstrate that the Bayesian model averaging approach is robust under various scenarios. Copyright © 2015 John Wiley & Sons, Ltd.
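The averaging step has the standard Bayesian model averaging form (the four candidate models themselves are not specified in the abstract): with prior model probabilities $P(M_k)$ and marginal likelihoods $L(D \mid M_k)$,

$$P(M_k \mid D) = \frac{L(D \mid M_k)\, P(M_k)}{\sum_{j} L(D \mid M_j)\, P(M_j)}, \qquad \hat{\pi}_{\mathrm{BMA}}(d) = \sum_{k} P(M_k \mid D)\, \hat{\pi}_k(d),$$

where $\hat{\pi}_k(d)$ is the posterior toxicity probability of dose combination $d$ under model $M_k$; the next cohort is assigned by comparing $\hat{\pi}_{\mathrm{BMA}}(d)$ with the prespecified target.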

20.
A version of the nonparametric bootstrap that resamples entire subjects from the original data, called the case bootstrap, has been increasingly used for estimating the uncertainty of parameters in mixed-effects models. It is usually applied to obtain more robust estimates of the parameters and more realistic confidence intervals (CIs). Alternative bootstrap methods, such as the residual bootstrap and the parametric bootstrap, which resample both random effects and residuals, have been proposed to better take into account the hierarchical structure of multi-level and longitudinal data. However, few studies have compared these different approaches. In this study, we used simulation to evaluate the bootstrap methods proposed for linear mixed-effects models. We also compared the results obtained by maximum likelihood (ML) and restricted maximum likelihood (REML). Our simulation studies demonstrated the good performance of the case bootstrap as well as the bootstraps of both random effects and residuals. On the other hand, the bootstrap methods that resample only the residuals, and the bootstraps combining case and residual resampling, performed poorly. REML and ML provided similar bootstrap estimates of uncertainty, but there was slightly more bias and poorer coverage for variance parameters with ML in the sparse design. We applied the proposed methods to a real dataset from a study investigating the natural evolution of Parkinson's disease and were able to confirm that the methods provide plausible estimates of uncertainty. Given that most real-life datasets tend to exhibit heterogeneity in sampling schedules, the residual bootstraps would be expected to perform better than the case bootstrap. Copyright © 2013 John Wiley & Sons, Ltd.
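A minimal sketch of the case bootstrap for a random-intercept linear mixed model (illustrative only, using statsmodels; the paper's simulation settings are not reproduced):

```python
# Sketch: case bootstrap CI for the fixed slope of a random-intercept LMM.
# Whole subjects are resampled with replacement and the model is refit on
# each replicate; in practice some replicates may fail to converge.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

def case_bootstrap_ci(df: pd.DataFrame, n_boot: int = 500, seed: int = 0):
    """df has columns 'id', 'time', 'y' (hypothetical layout)."""
    rng = np.random.default_rng(seed)
    ids = df["id"].unique()
    slopes = []
    for _ in range(n_boot):
        sampled = rng.choice(ids, size=len(ids), replace=True)
        # relabel so duplicated subjects are treated as distinct clusters
        rep = pd.concat(
            [df[df["id"] == i].assign(id=k) for k, i in enumerate(sampled)],
            ignore_index=True,
        )
        fit = smf.mixedlm("y ~ time", rep, groups=rep["id"]).fit(reml=True)
        slopes.append(fit.params["time"])
    return np.percentile(slopes, [2.5, 97.5])
```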
