Similar Articles
20 similar articles found (search time: 484 ms)
1.
Recent approaches to the statistical analysis of adverse event (AE) data in clinical trials have proposed the use of groupings of related AEs, such as by system organ class (SOC). These methods have opened up the possibility of scanning large numbers of AEs while controlling for multiple comparisons, making the comparative performance of the different methods, in terms of AE detection and error rates, of interest to investigators. We apply two Bayesian models and two procedures for controlling the false discovery rate (FDR), which use groupings of AEs, to real clinical trial safety data. We find that while the Bayesian models are appropriate for the full data set, the error-controlling methods only give similar results to the Bayesian methods when low-incidence AEs are removed. A simulation study is used to compare the relative performances of the methods. We investigate the differences between the methods over full trial data sets, and over data sets with low-incidence AEs and SOCs removed. We find that while the removal of low-incidence AEs increases the power of the error-controlling procedures, the estimated power of the Bayesian methods remains relatively constant over all data sizes. However, automatic removal of low-incidence AEs does affect the error rates of all the methods, and a clinically guided approach to their removal is needed. Overall, we found the Bayesian approaches particularly useful for scanning the large amounts of AE data gathered.
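A minimal illustration of multiplicity-controlled AE flagging, assuming hypothetical AE counts, an arm size of 200, two-sided Fisher exact tests, and the plain Benjamini-Hochberg step-up rather than the SOC-grouped Bayesian and FDR procedures studied in the paper:

```python
# Sketch: flag adverse events while controlling the false discovery rate.
# The AE table and arm size are hypothetical; p-values come from Fisher exact tests.
import numpy as np
from scipy.stats import fisher_exact

# (AE name, events in treated arm, events in control arm); n per arm assumed 200
ae_table = [("headache", 25, 12), ("nausea", 9, 8), ("rash", 15, 4)]
n_arm = 200

pvals = []
for name, x_t, x_c in ae_table:
    table = [[x_t, n_arm - x_t], [x_c, n_arm - x_c]]
    pvals.append(fisher_exact(table)[1])

def benjamini_hochberg(p, q=0.10):
    """Return a boolean 'flagged' array controlling the FDR at level q."""
    p = np.asarray(p)
    order = np.argsort(p)
    m = len(p)
    thresh = q * np.arange(1, m + 1) / m
    passed = p[order] <= thresh
    k = np.max(np.nonzero(passed)[0]) + 1 if passed.any() else 0
    flagged = np.zeros(m, dtype=bool)
    flagged[order[:k]] = True
    return flagged

for (name, *_), flag, p in zip(ae_table, benjamini_hochberg(pvals), pvals):
    print(f"{name}: p={p:.3f} flagged={flag}")
```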

2.
3.
In phase III clinical trials, some adverse events may not be rare or unexpected and can be considered as a primary measure for safety, particularly in trials of life-threatening conditions such as stroke or traumatic brain injury. In some clinical areas, efficacy endpoints may be highly correlated with safety endpoints, yet interim efficacy analyses under group sequential designs usually do not formally consider safety measures. Furthermore, safety is often statistically monitored more frequently than efficacy measures. Because early termination of a trial in this situation can be triggered by either efficacy or safety, the impact of safety monitoring on the error probabilities of efficacy analyses may be nontrivial if the original design does not take the multiplicity effect into account. We estimate the actual error probabilities for a bivariate binary efficacy-safety response in large confirmatory group sequential trials. The estimated probabilities are verified by Monte Carlo simulation. Our findings suggest that the type I error for efficacy analyses decreases as the efficacy-safety correlation or the between-group difference in the safety event rate increases. In addition, although power for efficacy is robust to misspecification of the efficacy-safety correlation, it decreases dramatically as the between-group difference in the safety event rate increases.
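A small Monte Carlo sketch of the phenomenon described above, under assumed values for the response rates, correlation, interim rule, and stopping boundary (none of which come from the paper): correlated binary efficacy/safety outcomes are generated through a Gaussian copula, an interim safety look can stop the trial for harm, and the realized type I error of the final efficacy test falls below the nominal 2.5% and decreases as the correlation grows.

```python
# Sketch under assumed design constants; not the paper's estimation method.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(4)

def corr_binary(n, p_eff, p_safe, rho):
    """Correlated efficacy/safety indicators via a Gaussian copula."""
    z = rng.multivariate_normal([0, 0], [[1, rho], [rho, 1]], size=n)
    return (z[:, 0] < norm.ppf(p_eff)).astype(int), (z[:, 1] < norm.ppf(p_safe)).astype(int)

def one_trial(n_arm, rho, p_safe_trt, p_safe_ctl=0.15, p_eff=0.30):
    # efficacy null: both arms share p_eff; only the safety rate may differ
    eff_t, safe_t = corr_binary(n_arm, p_eff, p_safe_trt, rho)
    eff_c, safe_c = corr_binary(n_arm, p_eff, p_safe_ctl, rho)
    half = n_arm // 2
    # interim safety look on the first half: stop for harm if the excess is large
    diff = safe_t[:half].mean() - safe_c[:half].mean()
    se = np.sqrt(2 * p_safe_ctl * (1 - p_safe_ctl) / half)
    if diff / se > 2.0:
        return False                      # stopped for safety: no efficacy claim
    z_eff = (eff_t.mean() - eff_c.mean()) / np.sqrt(2 * p_eff * (1 - p_eff) / n_arm)
    return z_eff > norm.ppf(0.975)        # nominal one-sided 2.5% efficacy test

for rho in (0.0, 0.4, 0.8):
    err = np.mean([one_trial(200, rho, p_safe_trt=0.25) for _ in range(2000)])
    print(f"rho={rho:.1f}: realized efficacy type I error = {err:.3f}")
```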

4.
Sponsors have a responsibility to minimise risk to participants in clinical studies through safety monitoring. The FDA Final Rule for IND Safety Reporting requires routine aggregate safety evaluation, including in ongoing blinded studies. We are interested in estimating the probability that the true adverse event rate in the experimental arm exceeds that in the control arm. We developed a Bayesian approach that specifies an informative meta-analytic predictive prior on the event probability in the control arm and an uninformative prior on that in the experimental arm. We combined these priors with a mixture likelihood that allows each patient in the ongoing blinded study to belong to either the experimental or the control arm. This allowed us to estimate the quantity of interest without unblinding. We evaluated our method by simulation, pairing scenarios that differed only in whether a safety signal was present or absent, and quantifying the ability of our model to discriminate using signal detection theory. Our approach shows benefit: it detects safety signals more reliably with greater sample sizes and for common rather than rare events. Performance does not deteriorate markedly when historical studies exhibit heterogeneous or non-constant hazards. Our method will allow us to monitor safety signals in ongoing blinded studies with the goal of earlier identification and risk mitigation. It could be adapted to use informative priors on both arms or predictive covariates where pertinent data exist. We stress that ongoing safety monitoring should involve a multi-disciplinary team in which statistical methods are paired with medical judgement.
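A grid-approximation sketch of the core computation, with an ordinary Beta prior standing in for the meta-analytic predictive prior and all counts hypothetical: the blinded event total is modeled with a mixture (binomial) likelihood over the unknown arm memberships, and the posterior probability that the experimental AE rate exceeds the control rate is read off the joint posterior.

```python
# Sketch: blinded safety signal probability via a 2-D grid posterior.
import numpy as np
from scipy.stats import beta, binom

a_c, b_c = 12.0, 88.0      # informative prior for the control AE probability (~12%)
a_e, b_e = 1.0, 1.0        # uninformative prior for the experimental arm
w = 0.5                    # randomization fraction assigned to the experimental arm
N, y = 300, 48             # blinded totals: patients enrolled and AE cases observed

grid = np.linspace(0.001, 0.999, 400)
p_c, p_e = np.meshgrid(grid, grid, indexing="ij")

# priors times the blinded (mixture) binomial likelihood
log_post = (beta.logpdf(p_c, a_c, b_c) + beta.logpdf(p_e, a_e, b_e)
            + binom.logpmf(y, N, w * p_e + (1 - w) * p_c))
post = np.exp(log_post - log_post.max())
post /= post.sum()

prob_signal = post[p_e > p_c].sum()
print(f"P(experimental AE rate > control AE rate | blinded data) = {prob_signal:.3f}")
```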

5.
In studies with recurrent event endpoints, misspecified assumptions of event rates or dispersion can lead to underpowered trials or overexposure of patients. Specification of overdispersion is often a particular problem as it is usually not reported in clinical trial publications. Changing event rates over the years have been described for some diseases, adding to the uncertainty in planning. To mitigate the risks of inadequate sample sizes, internal pilot study designs have been proposed, with a preference for blinded sample size reestimation procedures, as they generally do not affect the type I error rate and maintain trial integrity. Blinded sample size reestimation procedures are available for trials with recurrent events as endpoints. However, the variance in the reestimated sample size can be considerable, in particular with early sample size reviews. Motivated by a randomized controlled trial in paediatric multiple sclerosis, a rare neurological condition in children, we apply the concept of blinded continuous monitoring of information, which is known to reduce the variance in the resulting sample size. Assuming negative binomial distributions for the counts of recurrent relapses, we derive information criteria and propose blinded continuous monitoring procedures. The operating characteristics of these are assessed in Monte Carlo trial simulations, demonstrating favourable properties with regard to type I error rate, power, and stopping time, i.e., sample size.
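A hedged sketch of blinded information monitoring for a negative binomial endpoint, using a standard delta-method approximation for the information of the log rate ratio rather than the authors' exact criterion; the interim counts, follow-up, and design values are hypothetical.

```python
# Sketch: stop recruiting once the blinded estimate of information reaches the target.
import numpy as np
from scipy.stats import norm

alpha, power = 0.05, 0.80
effect = np.log(0.6)                  # planned rate ratio of 0.6
target_info = ((norm.ppf(1 - alpha / 2) + norm.ppf(power)) / effect) ** 2

counts = np.array([0, 2, 1, 0, 3, 1, 0, 0, 2, 4, 1, 0, 1, 2, 0, 5])  # blinded relapse counts
t = 1.0                               # common follow-up (years) at this interim look
n = len(counts)

mu = counts.mean()
lam = mu / t                          # blinded pooled rate estimate
# moment estimator of overdispersion k, where Var = mu + k * mu^2
k = max(0.0, (counts.var(ddof=1) - mu) / mu**2)

# delta-method approximation: Var(log rate ratio) ~ 2 * (1/(lam*t) + k) / (n/2)
per_arm = n / 2.0
current_info = 1.0 / (2.0 * (1.0 / (lam * t) + k) / per_arm)
print(f"current information {current_info:.2f} vs target {target_info:.2f}")
print("stop recruitment" if current_info >= target_info else "continue recruiting")
```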

6.
A new analytic statistical technique for predictive event modeling in ongoing multicenter clinical trials with waiting time to response is developed. It allows for the predictive mean and predictive bounds for the number of events to be constructed over time, accounting for the newly recruited patients and patients already at risk in the trial, and for different recruitment scenarios. For modeling patient recruitment, an advanced Poisson-gamma model is used, which accounts for the variation in recruitment over time, the variation in recruitment rates between different centers and the opening or closing of some centers in the future. A few models for event appearance allowing for 'recurrence', 'death' and 'lost-to-follow-up' events and using finite Markov chains in continuous time are considered. To predict the number of future events over time for an ongoing trial at some interim time, the parameters of the recruitment and event models are estimated using current data and then the predictive recruitment rates in each center are adjusted using individual data and Bayesian re-estimation. For a typical scenario (continue to recruit during some time interval, then stop recruitment and wait until a particular number of events happens), the closed-form expressions for the predictive mean and predictive bounds of the number of events at any future time point are derived under the assumptions of Markovian behavior of the event progression. The technique is efficiently applied to modeling different scenarios for some ongoing oncology trials. Case studies are considered.
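A simplified sketch of the predictive recruitment step, assuming a gamma prior on per-centre rates updated with each centre's interim count and Poisson recruitment over the prediction horizon; the prior parameters and interim data are hypothetical, and the event-progression Markov models of the paper are not reproduced.

```python
# Sketch: Poisson-gamma predictive distribution of future recruitment.
import numpy as np

rng = np.random.default_rng(1)
alpha0, beta0 = 2.0, 4.0                     # gamma prior on per-centre rate (per month)
interim_counts = np.array([5, 9, 2, 7, 4])   # patients recruited so far, per centre
interim_time = 6.0                           # months each centre has been open
horizon = 4.0                                # months of future recruitment to predict
n_sims = 10_000

totals = np.empty(n_sims)
for s in range(n_sims):
    # posterior for each centre's rate: Gamma(alpha0 + count, rate = beta0 + time)
    rates = rng.gamma(alpha0 + interim_counts, 1.0 / (beta0 + interim_time))
    totals[s] = rng.poisson(rates * horizon).sum()

lo, hi = np.percentile(totals, [2.5, 97.5])
print(f"predicted additional recruits over {horizon} months: "
      f"mean {totals.mean():.1f}, 95% predictive interval [{lo:.0f}, {hi:.0f}]")
```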

7.
With the rapid development of computing technology, Bayesian statistics have gained increasing attention in various areas of public health. However, the full potential of Bayesian sequential methods applied to vaccine safety surveillance has not yet been realized, despite the acknowledged practical benefits and philosophical advantages of Bayesian statistics. In this paper, we describe how sequential analysis can be performed in a Bayesian paradigm in the field of vaccine safety. We compared the performance of a frequentist sequential method, specifically the Maximized Sequential Probability Ratio Test (MaxSPRT), and a Bayesian sequential method using simulations and a real-world vaccine safety example. Performance is evaluated using three metrics: false positive rate, false negative rate, and average earliest time to signal. Depending on the background rate of adverse events, the Bayesian sequential method could significantly improve the false negative rate and decrease the earliest time to signal. We consider the proposed Bayesian sequential approach to be a promising alternative for vaccine safety surveillance.
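A hedged illustration of Bayesian sequential surveillance for a single adverse event, assuming Poisson counts against a known background expectation, a gamma prior on the relative risk, and a fixed posterior-probability signalling threshold; all numbers are hypothetical, and this is not the specific method compared with MaxSPRT in the paper.

```python
# Sketch: signal when the posterior probability that RR > 1 crosses a threshold.
from scipy.stats import gamma

a0, b0 = 1.0, 1.0          # Gamma(shape, rate) prior on the relative risk RR
threshold = 0.99           # signal when P(RR > 1 | data) exceeds this

# cumulative observed cases and cumulative expected cases at each sequential look
looks = [(2, 1.5), (5, 3.2), (9, 4.9), (15, 6.8)]

for i, (obs, expected) in enumerate(looks, start=1):
    # conjugate update: RR | data ~ Gamma(a0 + obs, rate = b0 + expected)
    post_prob = gamma.sf(1.0, a0 + obs, scale=1.0 / (b0 + expected))
    print(f"look {i}: P(RR>1)={post_prob:.3f} signal={post_prob > threshold}")
```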

8.
Traditionally, noninferiority hypotheses have been tested using a frequentist method with a fixed margin. Given that information for the control group is often available from previous studies, it is interesting to consider a Bayesian approach in which information is "borrowed" for the control group to improve efficiency. However, construction of an appropriate informative prior can be challenging. In this paper, we consider a hybrid Bayesian approach for testing noninferiority hypotheses in studies with a binary endpoint. To account for heterogeneity between the historical information and the current trial for the control group, a dynamic P-value-based power prior parameter is proposed to adjust the amount of information borrowed from the historical data. This approach extends the simple test-then-pool method to allow a continuous discounting power parameter. An adjusted α level is also proposed to better control the type I error. Simulations are conducted to investigate the performance of the proposed method and to make comparisons with other methods, including test-then-pool and hierarchical modeling. The methods are illustrated with data from vaccine clinical trials.
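A minimal power-prior sketch for a binary control endpoint, in which the historical control data are down-weighted by a power parameter before being combined with the current data and the posterior probability of noninferiority is estimated by simulation. The mapping from an agreement p-value to the power parameter is only schematic, and all data values, priors, and the margin are hypothetical.

```python
# Sketch: dynamic borrowing from historical controls for a noninferiority question.
import numpy as np
from scipy.stats import beta, chi2_contingency

rng = np.random.default_rng(7)
margin = 0.10                          # noninferiority margin on the risk difference
x_h, n_h = 170, 200                    # historical control: responders / patients
x_c, n_c = 80, 100                     # current control
x_t, n_t = 78, 100                     # current treatment

# one schematic choice of the power parameter: stronger agreement between the
# historical and current control data leads to more borrowing
p_agree = chi2_contingency([[x_h, n_h - x_h], [x_c, n_c - x_c]])[1]
delta = min(1.0, p_agree)              # placeholder mapping, not the paper's exact rule

# power prior: historical data enter the control posterior with weight delta
post_c = beta(1 + delta * x_h + x_c, 1 + delta * (n_h - x_h) + (n_c - x_c))
post_t = beta(1 + x_t, 1 + n_t - x_t)

draws_c = post_c.rvs(100_000, random_state=rng)
draws_t = post_t.rvs(100_000, random_state=rng)
prob_ni = np.mean(draws_t - draws_c > -margin)
print(f"delta={delta:.2f}  P(noninferior | data)={prob_ni:.3f}")
```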

9.
A longitudinal mixture model for classifying patients into responders and non-responders is established using both likelihood-based and Bayesian approaches. The model takes into consideration responders in the control group. Therefore, it is especially useful in situations where the placebo response is strong, or in equivalence trials where the drug in development is compared with a standard treatment. Under our model, a treatment shows evidence of being effective if it increases the proportion of responders or increases the response rate among responders in the treated group compared with the control group. Therefore, the model has the flexibility to accommodate different situations. The proposed method is illustrated using simulation and a depression clinical trial dataset for the likelihood-based approach, and the same depression clinical trial dataset for the Bayesian approach. The likelihood-based and Bayesian approaches generated consistent results for the depression trial data. In both the placebo group and the treated group, patients are classified into two components with distinct response rates. The proportion of responders is shown to be significantly higher in the treated group compared with the control group, suggesting the treatment paroxetine is effective.

10.
Prior information is often incorporated informally when planning a clinical trial. Here, we present an approach for incorporating prior information, such as data from historical clinical trials, into the nuisance parameter-based sample size re-estimation in a design with an internal pilot study. We focus on trials with continuous endpoints in which the outcome variance is the nuisance parameter. For planning and analyzing the trial, frequentist methods are considered. Moreover, the external information on the variance is summarized by the Bayesian meta-analytic-predictive approach. To incorporate external information into the sample size re-estimation, we propose to update the meta-analytic-predictive prior based on the results of the internal pilot study and to re-estimate the sample size using an estimator from the posterior. By means of a simulation study, we compare the operating characteristics, such as power and sample size distribution, of the proposed procedure with the traditional sample size re-estimation approach that uses the pooled variance estimator. The simulation study shows that, if no prior-data conflict is present, incorporating external information into the sample size re-estimation improves the operating characteristics compared to the traditional approach. In the case of a prior-data conflict, that is, when the variance of the ongoing clinical trial is unequal to the prior location, the performance of the traditional sample size re-estimation procedure is in general superior, even when the prior information is robustified. When considering whether to include prior information in sample size re-estimation, the potential gains should be balanced against the risks.
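A hedged sketch of the general idea, with a single inverse-gamma prior standing in for the meta-analytic-predictive prior on the variance: the prior is updated with the internal pilot data and a posterior summary of the variance is plugged into the usual two-sample sample-size formula. Prior parameters, pilot data, and design values are hypothetical.

```python
# Sketch: variance re-estimation from prior + internal pilot, then sample size.
import numpy as np
from scipy.stats import invgamma, norm

alpha0, beta0 = 20.0, 19.0 * 4.0   # inverse-gamma prior on sigma^2, roughly centred at 4.0
pilot = np.array([1.2, 3.4, -0.5, 2.2, 0.8, 1.9, 2.5, -1.1, 0.4, 1.6])

# approximate conjugate update using the pilot sum of squares about the pilot mean
ss = np.sum((pilot - pilot.mean()) ** 2)
alpha_post = alpha0 + len(pilot) / 2.0
beta_post = beta0 + ss / 2.0
sigma2_est = invgamma(alpha_post, scale=beta_post).median()

# two-sample sample size per arm for detecting mean difference 'delta'
delta, alpha, power = 1.0, 0.05, 0.80
z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
n_per_arm = int(np.ceil(2 * sigma2_est * (z / delta) ** 2))
print(f"posterior median variance {sigma2_est:.2f} -> {n_per_arm} patients per arm")
```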

11.
The choice between single-arm designs and randomized double-arm designs has been contentiously debated in the literature on phase II oncology trials. Recently, as a compromise, the single-to-double arm transition design was proposed, combining the two designs into one trial over two stages. Successful implementation of the two-stage transition design requires a suspension period at the end of the first stage to collect the response data of the already enrolled patients. When the evaluation of the primary efficacy endpoint is overly long, the between-stage suspension period may unfavorably prolong the trial duration and cause a delay in treating future eligible patients. To accelerate the trial, we propose a Bayesian single-to-double arm design with short-term endpoints (BSDS), where an intermediate short-term endpoint is used for making early termination decisions at the end of the single-arm stage, followed by an evaluation of the long-term endpoint at the end of the subsequent double-arm stage. Bayesian posterior probabilities are used as the primary decision-making tool at the end of the trial. Design calibration steps are proposed for this Bayesian monitoring process to control the frequentist operating characteristics and minimize the expected sample size. Extensive simulation studies have demonstrated that our design has comparable power and average sample size but a much shorter trial duration than the conventional single-to-double arm design. Applications of the design are illustrated using two phase II oncology trials with binary endpoints.
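A minimal sketch of the end-of-trial decision quantity under assumed counts and cutoff: independent Beta posteriors for the long-term response rates in the two arms give the posterior probability that the experimental arm is better, which is compared with a calibrated threshold. The short-term-endpoint interim rule and the calibration steps of the design are not reproduced.

```python
# Sketch: posterior probability that the experimental response rate exceeds control.
import numpy as np
from scipy.stats import beta

rng = np.random.default_rng(6)
x_exp, n_exp = 14, 35      # long-term responses, experimental arm (hypothetical)
x_ctl, n_ctl = 8, 35       # long-term responses, control arm (hypothetical)
cutoff = 0.95              # calibrated decision cutoff (hypothetical)

d_exp = beta(0.5 + x_exp, 0.5 + n_exp - x_exp).rvs(50_000, random_state=rng)
d_ctl = beta(0.5 + x_ctl, 0.5 + n_ctl - x_ctl).rvs(50_000, random_state=rng)
prob_better = np.mean(d_exp > d_ctl)
print(f"P(p_exp > p_ctl | data) = {prob_better:.3f} -> "
      f"{'promising' if prob_better > cutoff else 'not promising'}")
```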

12.
For clinical trials with multiple endpoints, the primary interest is usually to evaluate the relationship between these endpoints and treatment interventions. Studying the correlation of two clinical trial endpoints can also be of interest. For example, the association between a patient-reported outcome and a clinically assessed endpoint could answer important research questions and also generate interesting hypotheses for future research. However, it is not straightforward to quantify such an association. In this article, we propose a multiple-event approach to profile such an association with a temporal correlation function, visualized by a correlation function plot over time with a confidence band. We developed this approach by extending existing methodology in the recurrent event literature. The approach is shown to be generally unbiased and can be a useful tool for data visualization and inference. We demonstrate the use of this method with data from a real clinical trial. Although this approach was developed to evaluate the association between patient-reported outcomes and adverse events, it can also be used to evaluate the association of any two endpoints that can be translated to time-to-event endpoints.

13.
A model is presented to generate a distribution for the probability of an ACR response at six months for a new treatment for rheumatoid arthritis given evidence from a one- or three-month clinical trial. The model is based on published evidence from 11 randomized controlled trials on existing treatments. A hierarchical logistic regression model is used to find the relationship between the proportion of patients achieving ACR20 and ACR50 at one and three months and the proportion at six months. The model is assessed by Bayesian predictive P-values that demonstrate that the model fits the data well. The model can be used to predict the number of patients with an ACR response for proposed six-month clinical trials given data from clinical trials of one or three months duration.

14.
For cancer clinical trials of immunotherapies and molecularly targeted therapies, a time-to-event endpoint is often desired. In this paper, we present an event-driven approach for Bayesian one-stage and two-stage single-arm phase II trial designs. Two versions of the Bayesian one-stage design are proposed with executable algorithms, and we also develop theoretical relationships between the frequentist and Bayesian designs. These findings help investigators who want to design a trial using a Bayesian approach gain an explicit understanding of how the frequentist properties can be achieved. Moreover, the proposed Bayesian designs, which use exact posterior distributions, accommodate single-arm phase II trials with small sample sizes. We also propose an optimal two-stage approach, which can be regarded as an extension of Simon's two-stage design to the time-to-event endpoint. Comprehensive simulations were conducted to explore the frequentist properties of the proposed Bayesian designs, and an R package, BayesDesign, can be accessed via CRAN for convenient use of the proposed methods.
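A generic conjugate sketch related to the designs above, not their exact algorithms: exponential event times with a gamma prior on the hazard give an exact posterior, from which the probability that the median event time exceeds a clinically meaningful target is computed. The prior, target, cutoff, and interim data are hypothetical.

```python
# Sketch: event-driven Bayesian evaluation of a time-to-event endpoint.
import numpy as np
from scipy.stats import gamma

a0, b0 = 0.5, 2.0            # Gamma(shape, rate) prior on the hazard
target_median = 9.0          # months; success if median survival exceeds this
decision_cutoff = 0.90

n_events = 14                # observed events
total_follow_up = 210.0      # total patient follow-up time in months

# posterior for the hazard: Gamma(a0 + events, rate = b0 + follow-up)
post = gamma(a0 + n_events, scale=1.0 / (b0 + total_follow_up))
hazard_cut = np.log(2) / target_median        # median > target  <=>  hazard < cut
prob_success = post.cdf(hazard_cut)
print(f"P(median survival > {target_median} months | data) = {prob_success:.3f}")
print("declare promising" if prob_success > decision_cutoff else "do not declare promising")
```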

15.
Frequently in process monitoring, situations arise in which the order in which events occur cannot be distinguished, motivating the need to accommodate multiple observations occurring at the same time, or concurrent observations. The risk-adjusted Bernoulli cumulative sum (CUSUM) control chart can be used to monitor the rate of an adverse event by fitting a risk-adjustment model, followed by a likelihood ratio-based scoring method that produces a statistic that can be monitored. In our paper, we develop a risk-adjusted Bernoulli CUSUM control chart for concurrent observations. Furthermore, we adopt a novel approach that combines a mixture model with kernel density estimation in order to perform risk adjustment with respect to spatial location. Our proposed method allows for monitoring binary outcomes through time with multiple observations at each time point, where the chart is spatially adjusted for each Bernoulli observation's estimated probability of the adverse event. A simulation study is presented to assess the performance of the proposed monitoring scheme. We apply our method using data from Wayne County, Michigan between 2005 and 2014 to monitor the rate of foreclosure as a percentage of all housing transactions.
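A minimal sketch of the (non-spatial) risk-adjusted Bernoulli CUSUM score, using the standard likelihood-ratio form for detecting an odds-ratio shift relative to each observation's risk-adjusted probability; the concurrent-observation handling and the mixture-model/kernel-density spatial adjustment described above are not reproduced, and the probabilities, outcomes, and control limit are hypothetical.

```python
# Sketch: risk-adjusted Bernoulli CUSUM for a stream of binary outcomes.
import numpy as np

def ra_bernoulli_cusum(y, p, R_A=2.0, h=4.0):
    """Risk-adjusted Bernoulli CUSUM for outcomes y with predicted adverse-event
    probabilities p; signals when the statistic exceeds the control limit h."""
    stat, stats, signals = 0.0, [], []
    for yi, pi in zip(y, p):
        # log-likelihood ratio score for an odds-ratio shift of R_A versus no shift
        w = (yi * np.log(R_A / (1 - pi + R_A * pi))
             + (1 - yi) * np.log(1.0 / (1 - pi + R_A * pi)))
        stat = max(0.0, stat + w)
        stats.append(stat)
        signals.append(stat > h)
    return np.array(stats), np.array(signals)

rng = np.random.default_rng(3)
p = rng.uniform(0.05, 0.30, size=50)             # risk-adjusted probabilities from some model
y = rng.binomial(1, np.minimum(1.0, 1.8 * p))    # outcomes simulated with an elevated rate
stats, signals = ra_bernoulli_cusum(y, p)
print("first signal at observation:", int(np.argmax(signals)) if signals.any() else None)
```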

16.
The feasibility of a new clinical trial may be increased by incorporating historical data from previous trials. In the particular case where only data from a single historical trial are available, there exists no clear recommendation in the literature regarding the most favorable approach. A main problem of incorporating historical data is the possible inflation of the type I error rate. A way to control this type of error is the so-called power prior approach. This Bayesian method does not "borrow" the full historical information but uses a parameter 0 ≤ δ ≤ 1 to determine the amount of borrowed data. Based on the methodology of the power prior, we propose a frequentist framework that allows incorporation of historical data from both arms of two-armed trials with binary outcome, while simultaneously controlling the type I error rate. It is shown that for any specific trial scenario a value δ > 0 can be determined such that the type I error rate falls below the prespecified significance level. The magnitude of this value of δ depends on the characteristics of the data observed in the historical trial. Conditional on these characteristics, an increase in power as compared to a trial without borrowing may result. Similarly, we propose methods for reducing the required sample size. The results are discussed and compared to those obtained in a Bayesian framework. Application is illustrated by a clinical trial example.
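A hedged sketch of the calibration idea: for a fixed historical two-arm data set and a candidate power parameter δ, the type I error of a Bayesian decision rule that borrows from both arms is estimated by simulating current trials under the null, so that the largest δ keeping the error below the significance level could be searched for. The decision rule, priors, and all data values are simplified assumptions, not the paper's framework.

```python
# Sketch: simulated type I error of a borrowing analysis as a function of delta.
import numpy as np
from scipy.stats import beta

rng = np.random.default_rng(11)
cutoff = 0.975                           # posterior cutoff, roughly one-sided 2.5%
xh_t, xh_c, nh = 60, 45, 100             # historical responders (treatment, control), n per arm
n_cur, p_null = 80, 0.5                  # current trial size per arm; common null response rate
n_draws, n_sims = 2000, 1000

def rejects(x_t, x_c, delta):
    # power prior: historical data enter each arm's posterior with weight delta
    post_t = beta(1 + x_t + delta * xh_t, 1 + (n_cur - x_t) + delta * (nh - xh_t))
    post_c = beta(1 + x_c + delta * xh_c, 1 + (n_cur - x_c) + delta * (nh - xh_c))
    d_t = post_t.rvs(n_draws, random_state=rng)
    d_c = post_c.rvs(n_draws, random_state=rng)
    return np.mean(d_t > d_c) > cutoff   # "declare treatment better"

for delta in (0.0, 0.2, 0.5, 1.0):
    x_t = rng.binomial(n_cur, p_null, n_sims)
    x_c = rng.binomial(n_cur, p_null, n_sims)
    err = np.mean([rejects(a, b, delta) for a, b in zip(x_t, x_c)])
    print(f"delta={delta:.1f}  simulated type I error={err:.3f}")
```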

17.
Incorporating historical data has great potential to improve the efficiency of phase I clinical trials and to accelerate drug development. For model-based designs, such as the continual reassessment method (CRM), this can be conveniently carried out by specifying a "skeleton," that is, the prior estimate of the dose-limiting toxicity (DLT) probability at each dose. In contrast, little work has been done to incorporate historical data into model-assisted designs, such as the Bayesian optimal interval (BOIN), Keyboard, and modified toxicity probability interval (mTPI) designs. This has led to the misconception that model-assisted designs cannot incorporate prior information. In this paper, we propose a unified framework that allows historical data to be incorporated into model-assisted designs. The proposed approach uses the well-established "skeleton" approach combined with the concept of prior effective sample size, so it is easy to understand and use. More importantly, our approach maintains the hallmark of model-assisted designs, namely simplicity: the dose escalation/de-escalation rule can be tabulated before the trial is conducted. Extensive simulation studies show that the proposed method can effectively incorporate prior information to improve the operating characteristics of model-assisted designs, similarly to model-based designs.
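A simplified sketch of the pseudo-data idea: the skeleton and a chosen prior effective sample size are converted into Beta pseudo-counts, which are updated with the observed DLT data at the current dose. The interval-probability decision shown is a generic Keyboard/mTPI-style illustration, not the calibrated rules of those designs or of the proposed framework; skeleton, ESS, interval, and data are hypothetical.

```python
# Sketch: skeleton + prior effective sample size as Beta pseudo-counts per dose.
from scipy.stats import beta

skeleton = [0.05, 0.10, 0.20, 0.30, 0.45]   # prior DLT probability per dose
prior_ess = 3.0                              # prior effective sample size per dose
target, half_width = 0.30, 0.05              # target toxicity interval (0.25, 0.35)

def decision(dose_idx, n_dlt, n_treated):
    a0 = prior_ess * skeleton[dose_idx]
    b0 = prior_ess * (1 - skeleton[dose_idx])
    post = beta(a0 + n_dlt, b0 + n_treated - n_dlt)
    p_under = post.cdf(target - half_width)
    p_target = post.cdf(target + half_width) - p_under
    p_over = 1 - p_under - p_target
    # generic interval-probability rule (illustrative only)
    return {"escalate": p_under, "stay": p_target, "de-escalate": p_over}

probs = decision(dose_idx=2, n_dlt=1, n_treated=6)
print(max(probs, key=probs.get), probs)
```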

18.
A new method for performing meta-analysis of controlled clinical trials with a binary response variable is developed using a Bayesian approach. It consists of three parts: (1) for each trial, the risk difference (the proportion of successes in the treated group minus the proportion of successes in the control group) is estimated; (2) the homogeneity of the risk difference among the different trials is tested; and (3) the hypothesis that the effect of the treatment for the homogeneous pool of trials is greater than or equal to a given fixed constant is tested. The performance of the Bayesian procedure for testing the homogeneity of the risk difference among trials is compared with the chi-square test proposed by DerSimonian and Laird (Controlled Clinical Trials 7, 177-188, 1986) by means of pseudo-random simulation. The conclusion is that the Bayes test is more reliable, in either its exact or its asymptotic version, since it makes fewer decision errors than the chi-square test. As an illustration, the Bayesian method is applied to data on chemotherapeutic prophylaxis of superficial bladder cancer.
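An illustrative sketch related to step (1), under hypothetical trial counts: independent Beta posteriors for the success probabilities in each arm yield Monte Carlo posterior distributions of the per-trial risk differences, which can be inspected before pooling. The formal Bayesian homogeneity and effect tests of the paper are not reproduced.

```python
# Sketch: per-trial posterior risk differences from Beta posteriors.
import numpy as np
from scipy.stats import beta

rng = np.random.default_rng(5)
# (successes treated, n treated, successes control, n control) per trial
trials = [(18, 40, 10, 40), (25, 60, 15, 58), (9, 30, 7, 31)]

for i, (xt, nt, xc, nc) in enumerate(trials, start=1):
    d = (beta(1 + xt, 1 + nt - xt).rvs(20_000, random_state=rng)
         - beta(1 + xc, 1 + nc - xc).rvs(20_000, random_state=rng))
    lo, hi = np.percentile(d, [2.5, 97.5])
    print(f"trial {i}: posterior mean risk difference {d.mean():.3f}, "
          f"95% credible interval [{lo:.3f}, {hi:.3f}]")
```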

19.
A robust Bayesian design is presented for a single-arm phase II trial with an early stopping rule to monitor a time-to-event endpoint. The assumed model is a piecewise exponential distribution with non-informative gamma priors on the hazard parameters in subintervals of a fixed follow-up interval. As an additional comparator, we also define and evaluate a version of the design based on an assumed Weibull distribution. Except for the assumed models, the piecewise exponential and Weibull model-based designs are identical to an established design that assumes an exponential event time distribution with an inverse gamma prior on the mean event time. The three designs are compared by simulation under several log-logistic and Weibull distributions having different shape parameters, and for different monitoring schedules. The simulations show that, compared to the exponential inverse gamma model-based design, the piecewise exponential design has substantially better performance, with much higher probabilities of correctly stopping the trial early, and shorter and less variable trial duration, when the assumed median event time is unacceptably low. Compared to the Weibull model-based design, the piecewise exponential design does a much better job of maintaining small incorrect stopping probabilities in cases where the true median survival time is desirably large.
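A minimal sketch of the central computation under the piecewise exponential model, with vague gamma priors on the subinterval hazards updated by the events and exposure accrued in each subinterval, and an early-stopping check based on the posterior probability that survival at a landmark time is unacceptably low. The interval grid, cutoffs, and interim data are hypothetical, and this is not the exact monitoring rule of the design above.

```python
# Sketch: piecewise exponential monitoring of a time-to-event endpoint.
import numpy as np

rng = np.random.default_rng(2)
breaks = np.array([0.0, 3.0, 6.0, 12.0])   # months; three hazard subintervals
events = np.array([4, 6, 5])               # events observed per subinterval
exposure = np.array([90.0, 70.0, 40.0])    # patient-months at risk per subinterval
a0, b0 = 0.01, 0.01                        # vague Gamma(shape, rate) priors on the hazards
landmark, s_min, stop_cutoff = 12.0, 0.40, 0.80

# posterior draws of each subinterval hazard: Gamma(a0 + events, rate = b0 + exposure)
lam = rng.gamma(a0 + events, 1.0 / (b0 + exposure), size=(50_000, len(events)))
# survival at the landmark under the piecewise exponential model
widths = np.diff(breaks)
surv = np.exp(-(lam * widths).sum(axis=1))
prob_unacceptable = np.mean(surv < s_min)
print(f"P(S({landmark:.0f} months) < {s_min} | data) = {prob_unacceptable:.3f}")
print("stop early" if prob_unacceptable > stop_cutoff else "continue")
```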

20.
Basket trials evaluate a single drug targeting a single genetic variant in multiple cancer cohorts. Empirical findings suggest that treatment efficacy across baskets may be heterogeneous. Most modern basket trial designs use Bayesian methods, which require the prior specification of at least one parameter that permits information sharing across baskets. In this study, we provide recommendations for selecting a prior for scale parameters in adaptive basket trials that use Bayesian hierarchical modeling. Heterogeneity among baskets attracts much attention in basket trial research, and substantial heterogeneity challenges the basic assumption of exchangeability underlying the Bayesian hierarchical approach. Thus, we also allowed each stratum-specific parameter to be exchangeable or nonexchangeable with similar strata, using data observed at an interim analysis. Through a simulation study, we evaluated the overall performance of our design based on statistical power and type I error rates. Our research contributes to the understanding of the properties of Bayesian basket trial designs.
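A small numerical illustration, under hypothetical basket data, of why the prior on the between-basket scale parameter matters: with logit response rates treated as exchangeable around a common mean, the shrinkage toward that mean depends strongly on the between-basket standard deviation tau. Here tau is simply fixed at two values rather than given a prior, and the exchangeable/nonexchangeable extension is not reproduced.

```python
# Sketch: normal-normal shrinkage of basket-specific logit response rates.
import numpy as np

x = np.array([1, 5, 7, 2])            # responders per basket
n = np.array([15, 16, 18, 14])        # patients per basket
p_hat = (x + 0.5) / (n + 1.0)         # continuity-corrected response rates
y = np.log(p_hat / (1 - p_hat))       # observed logit response rates
s2 = 1.0 / (n * p_hat * (1 - p_hat))  # approximate sampling variances of the logits

for tau in (0.25, 1.0):
    w = 1.0 / (s2 + tau**2)
    mu = np.sum(w * y) / np.sum(w)                           # precision-weighted overall mean
    shrunk = (y / s2 + mu / tau**2) / (1 / s2 + 1 / tau**2)  # posterior means given mu, tau
    rates = 1 / (1 + np.exp(-shrunk))
    print(f"tau={tau}: shrunken response rates {np.round(rates, 2)}")
```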
