Similar Documents
 20 similar documents found (search time: 359 ms)
1.
Investigators who manage multicenter clinical trials need to pay careful attention to patterns of subject accrual, and predicting the activation times of pending centers is potentially crucial for predicting overall subject accrual. We propose a Bayesian hierarchical model to predict subject accrual for multicenter clinical trials in which center activation times vary. We define center activation time as the time at which a center can begin enrolling patients in the trial. The differences in activation times between centers are assumed to follow an exponential distribution, and the model of subject accrual integrates prior information for the study with actual enrollment progress. We apply our proposed Bayesian multicenter accrual model to two multicenter clinical studies. The first is the PAIN-CONTRoLS study, a multicenter clinical trial with a goal of activating 40 centers and enrolling 400 patients within 104 weeks. The second is the HOBIT trial, a multicenter clinical trial with a goal of activating 14 centers and enrolling 200 subjects within 36 months. In summary, the Bayesian multicenter accrual model provides a prediction of subject accrual while accounting for both center- and individual patient-level variation.
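As a rough illustration of the accrual structure this abstract describes, the sketch below simulates exponential gaps between center activations and Poisson enrollment at each active center; all rate values are invented for illustration and are not taken from the paper.

```r
# Hypothetical sketch of the accrual structure described above: center
# activation gaps ~ Exponential, per-center enrollment ~ Poisson.
# All rate values are illustrative placeholders.
set.seed(1)
n_centers   <- 40    # planned centers (PAIN-CONTRoLS-like goal)
horizon     <- 104   # weeks
gap_rate    <- 0.5   # mean gap between activations = 1/gap_rate weeks (assumed)
enroll_rate <- 0.25  # expected patients per active center per week (assumed)

simulate_accrual <- function() {
  activation <- cumsum(rexp(n_centers, rate = gap_rate))
  open_time  <- pmax(horizon - activation, 0)  # weeks each center is open
  sum(rpois(n_centers, lambda = enroll_rate * open_time))
}

totals <- replicate(5000, simulate_accrual())
quantile(totals, c(0.025, 0.5, 0.975))  # predictive interval for total accrual
```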

2.
To explore the operating characteristics of survival group sequential trials with a fixed follow-up period, we investigate the accrual time and total trial duration needed to satisfy power and type I error rate requirements for hazard ratios ranging from 1.3 to 3.0, with slow or fast accrual rates, and in the presence or absence of censoring. The impacts of the hazard rate, accrual rate, and competing censoring on accrual time, and consequently on total trial duration, are carefully illustrated. The calendar times of the interim analyses, the required numbers of events, and the numbers of subjects recruited at the time of each interim analysis are also tabulated.
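To make the accrual-time arithmetic concrete, here is a minimal sketch, assuming exponential survival in each arm, uniform 1:1 accrual at a constant rate, and a fixed per-subject follow-up period; every input value is an assumed placeholder.

```r
# Hypothetical sketch: accrual duration needed to reach a target number of
# events when every subject has a fixed follow-up period tau, survival is
# exponential in each arm, and accrual is uniform. All values are assumed.
d_req <- 200           # required events
rate  <- 10            # subjects accrued per month
tau   <- 12            # fixed per-subject follow-up, months
lam_c <- log(2) / 18   # control hazard: 18-month median survival
hr    <- 1.5           # hazard ratio
p_evt <- 0.5 * (1 - exp(-lam_c * tau)) +        # 1:1 randomisation
         0.5 * (1 - exp(-lam_c * hr * tau))     # event prob within follow-up
a_req <- d_req / (rate * p_evt)                 # accrual duration (months)
c(accrual = a_req, total_duration = a_req + tau)
```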

3.
This study considers the detection of treatment-by-subset interactions in a stratified, randomised clinical trial with a binary response variable. The focus lies on the detection of qualitative interactions. The presented method is also useful more generally, as it can assess the inconsistency of the treatment effects among strata by using an a priori defined inconsistency margin. The methodology is based on the construction of ratios of treatment effects. In addition to multiplicity-adjusted p-values, simultaneous confidence intervals are recommended for identifying the source and the amount of a potential qualitative interaction. The proposed method is demonstrated on a multi-regional trial using the open-source statistical software R.
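A minimal sketch of the basic building block, the ratio of stratum-specific treatment effects (here risk ratios) with a Wald interval on the log scale; the simultaneous intervals the paper recommends would additionally require a multiplicity adjustment, and the counts below are invented.

```r
# Hypothetical sketch: ratio of two stratum-specific risk ratios with a
# delta-method (Wald) confidence interval on the log scale. An interval
# excluding 1 in the "wrong" direction would flag a potential qualitative
# interaction. Counts are made up for illustration.
ratio_ci <- function(e1, n1, e0, n0, E1, N1, E0, N0, level = 0.95) {
  # stratum A: e1/n1 (treated) vs e0/n0 (control); stratum B: E1/N1 vs E0/N0
  log_r <- log((e1 / n1) / (e0 / n0)) - log((E1 / N1) / (E0 / N0))
  se <- sqrt(1/e1 - 1/n1 + 1/e0 - 1/n0 + 1/E1 - 1/N1 + 1/E0 - 1/N0)
  exp(log_r + c(-1, 1) * qnorm(1 - (1 - level) / 2) * se)
}
ratio_ci(30, 100, 20, 100, 18, 100, 25, 100)  # made-up stratum counts
```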

4.
Response-adaptive randomisation (RAR) can considerably improve the chances of a successful treatment outcome for patients in a clinical trial by skewing the allocation probability towards better performing treatments as data accumulate. There is considerable interest in using RAR designs in drug development for rare diseases, where traditional designs are either infeasible or ethically questionable. In this paper, we discuss and address a major criticism levelled at RAR: namely, type I error inflation due to an unknown time trend over the course of the trial. The most common cause of this phenomenon is changes in the characteristics of recruited patients, referred to as patient drift. This is a realistic concern for clinical trials in rare diseases because of their lengthy accrual periods. We compute the type I error inflation as a function of the time trend magnitude to determine in which contexts the problem is most exacerbated. We then assess the ability of different correction methods to preserve the type I error in these contexts and their performance in terms of other operating characteristics, including patient benefit and power. We make recommendations as to which correction methods are most suitable in the rare disease context for several RAR rules, differentiating between the two-armed and the multi-armed case. We further propose a RAR design for multi-armed clinical trials that is computationally efficient and robust to several of the time trends considered.
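The time-trend problem is easy to reproduce in a toy simulation. The sketch below (not the authors' code) runs a two-armed Thompson-sampling-style RAR under the null with a linear drift in the response probability and applies an unadjusted two-proportion test at the end; all settings are illustrative.

```r
# Hypothetical simulation of patient drift under RAR: two arms with the SAME
# true response rate, which drifts upward over the trial. Allocation follows
# Thompson-style draws from Beta posteriors. All settings are assumed.
set.seed(1)
one_trial <- function(n = 200, drift = 0.15) {
  p0 <- 0.3 + drift * (seq_len(n) - 1) / (n - 1)  # drifting null response rate
  s <- f <- c(0, 0)
  for (i in seq_len(n)) {
    draw <- rbeta(2, 1 + s, 1 + f)          # one posterior draw per arm
    a <- which.max(draw)                    # allocate to the larger draw
    y <- rbinom(1, 1, p0[i])                # same true rate in both arms
    s[a] <- s[a] + y
    f[a] <- f[a] + 1 - y
  }
  prop.test(s, s + f)$p.value < 0.05        # unadjusted end-of-trial test
}
mean(replicate(2000, one_trial()))  # empirical type I error, typically > 0.05
```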

5.
The goal of a phase I clinical trial in oncology is to find a dose with an acceptable dose-limiting toxicity rate. Often, when a cytostatic drug is investigated or when the maximum tolerated dose is defined using a toxicity score, the main endpoint in a phase I trial is continuous. We propose a new method for use in dose-finding trials with continuous endpoints. The new method selects the right dose on par with other methods and provides more flexibility in assigning patients to doses in the course of the trial when the rate of accrual is fast relative to the follow-up time.

6.
Interim analysis is important in a large clinical trial for ethical and cost considerations. Sometimes, an interim analysis needs to be performed at an earlier than planned time point. In that case, methods using stochastic curtailment are useful for examining the data for early stopping while controlling the inflation of type I and type II errors. We consider a three-arm randomized study of treatments to reduce perioperative blood loss following major surgery. Owing to slow accrual, an unplanned interim analysis was required by the study team to determine whether the study should be continued. We distinguish two different cases: when all treatments are under direct comparison and when one of the treatments is a control. We used simulations to study the operating characteristics of five different stochastic curtailment methods. We also considered the influence of the timing of the interim analyses on the type I error and power of the test. We found that the type I error and power can differ considerably between the methods. The analysis for the perioperative blood loss trial was carried out at approximately a quarter of the planned sample size. We found little evidence that the active treatments are better than placebo and recommended closure of the trial.
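For orientation, one standard stochastic-curtailment quantity is conditional power under the current trend, which for a normal endpoint has a closed form in the Brownian-motion formulation; the sketch below is generic, not the paper's code, and the interim values are invented.

```r
# Hypothetical sketch: conditional power under the current trend. At
# information fraction t, the interim z-statistic implies B(t) = z*sqrt(t)
# and an estimated drift theta = z/sqrt(t); the trial succeeds if the final
# statistic exceeds qnorm(1 - alpha).
cond_power <- function(z_t, t, alpha = 0.025) {
  b_t   <- z_t * sqrt(t)   # Brownian-motion value at information time t
  theta <- z_t / sqrt(t)   # drift estimated from the current trend
  1 - pnorm((qnorm(1 - alpha) - b_t - theta * (1 - t)) / sqrt(1 - t))
}
cond_power(z_t = 0.8, t = 0.25)  # e.g. an interim at a quarter of the data
```

A small conditional power at such an early interim is the kind of evidence that supports recommending closure, as in the blood loss trial above.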

7.
Determination of an adequate sample size is critical to the design of research ventures. For clustered right-censored data, Manatunga and Chen [Sample size estimation for survival outcomes in cluster-randomized studies with small cluster sizes. Biometrics. 2000;56(2):616-621] proposed a sample size calculation based on modelling the bivariate joint distribution with a Clayton copula. Beyond the Clayton copula, other important families of copulas, such as the Gumbel and Frank copulas, are also well established in multivariate survival analysis. However, sample size calculation under these assumptions has not been fully investigated. To broaden the scope of Manatunga and Chen's work and achieve a more flexible sample size calculation for clustered right-censored data, we extend their approach by modelling the bivariate dependence with Gumbel and Frank copulas. We evaluate the performance of the proposed method and investigate the impacts of the accrual times, follow-up times, and the within-cluster correlation of the study. The proposed method is applied to two real-world studies, and the R code is made available to users.
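A hedged sketch of one ingredient such calculations rest on: generating clustered survival data from a Clayton copula via its gamma-frailty (Marshall-Olkin) representation, with exponential margins and independent censoring; the parameter values are illustrative.

```r
# Hypothetical sketch: clustered right-censored survival data with Clayton
# dependence, via the gamma-frailty (Marshall-Olkin) construction.
# theta > 0 controls the within-cluster correlation; values are illustrative.
set.seed(1)
r_clayton_cluster <- function(n_clust, size = 2, theta = 1, lambda = 0.1,
                              cens_rate = 0.05) {
  w <- rgamma(n_clust, shape = 1 / theta, rate = 1)  # shared cluster frailty
  e <- matrix(rexp(n_clust * size), n_clust, size)
  u <- (1 + e / w)^(-1 / theta)                      # Clayton-dependent uniforms
  t_evt <- -log(u) / lambda                          # exponential margins
  c_tim <- matrix(rexp(n_clust * size, cens_rate), n_clust, size)
  data.frame(cluster = rep(seq_len(n_clust), size),
             time    = pmin(as.vector(t_evt), as.vector(c_tim)),
             status  = as.integer(t_evt <= c_tim))
}
head(r_clayton_cluster(100))
```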

8.
The design of a clinical trial is often complicated by the multi-systemic nature of the disease; a single endpoint often cannot capture the spectrum of potential therapeutic benefits. Multi-domain outcomes, which take into account patient heterogeneity of disease presentation through measurements of multiple symptom/functional domains, are an attractive alternative to a single endpoint. A multi-domain test with adaptive weights is proposed to synthesize the evidence of treatment efficacy over numerous disease domains. The test is a weighted sum of domain-specific test statistics, with the weights selected adaptively via a data-driven algorithm. The null distribution of the test statistic is constructed empirically through resampling and does not require estimation of the covariance structure of the domain-specific test statistics. Simulations show that the proposed test controls the type I error rate and has greater power than other methods, such as the O'Brien and Wei-Lachin tests, in scenarios reflective of clinical trial settings. Data from a clinical trial in a rare lysosomal storage disorder are used to illustrate the properties of the proposed test. As a strategy for combining marginal test statistics, the proposed test is flexible and readily applicable to a variety of clinical trial scenarios.
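The following sketch illustrates the general strategy, though not the authors' specific weighting algorithm: domain weights taken proportional to the positive part of each domain's z-statistic, with the null distribution built by permuting treatment labels and re-selecting the weights in each permutation.

```r
# Hypothetical sketch of an adaptively weighted multi-domain test. Weights
# are a simple stand-in (positive part of each domain's t-statistic); the
# null is built by permutation, so no covariance estimation is needed.
set.seed(1)
adaptive_stat <- function(y, trt) {
  z <- apply(y, 2, function(col) t.test(col[trt == 1], col[trt == 0])$statistic)
  w <- pmax(z, 0)
  if (sum(w) == 0) w <- rep(1, length(z))   # fall back to equal weights
  sum(w * z) / sqrt(sum(w^2))
}
n <- 60; k <- 4                             # 4 symptom/functional domains
trt <- rep(0:1, each = n / 2)
y   <- matrix(rnorm(n * k), n, k) + 0.4 * trt  # modest effect in all domains
obs  <- adaptive_stat(y, trt)
null <- replicate(2000, adaptive_stat(y, sample(trt)))  # re-select weights
mean(null >= obs)                           # resampling-based p-value
```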

9.
Group sequential trials with time to event end points can be complicated to design. Not only are there unlimited choices for the number of events required at each stage, but for each of these choices, there are unlimited combinations of accrual and follow-up at each stage that provide the required events. Methods are presented for determining optimal combinations of accrual and follow-up for two-stage clinical trials with time to event end points. Optimization is based on minimizing the expected total study length as a function of the expected accrual duration or sample size while providing an appropriate overall size and power. Optimal values of expected accrual duration and minimum expected total study length are given assuming an exponential proportional hazards model comparing two treatment groups. The expected total study length can be substantially decreased by including a follow-up period during which accrual is suspended. Conditions that warrant an interim follow-up period are considered, and the gain in efficiency achieved by including an interim follow-up period is quantified. The gain in efficiency should be weighed against the practical difficulties in implementing such designs. An example is given to illustrate the use of these techniques in designing a clinical trial to compare two chemotherapy regimens for lung cancer. Practical considerations of including an interim follow-up period are discussed.
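The trade-off being optimized can be seen in the basic bookkeeping: under exponential survival and uniform accrual over [0, a], the expected number of events at total study time T >= a has a closed form, so suspending accrual and extending follow-up adds events without adding patients. A minimal sketch, with invented inputs:

```r
# Hypothetical sketch: expected events under exponential survival, uniform
# accrual over [0, a], and analysis at total study time T >= a.
expected_events <- function(n, lambda, a, T) {
  # P(censored) for a subject entering at s ~ Uniform(0, a), followed to T
  p_cens <- (exp(-lambda * (T - a)) - exp(-lambda * T)) / (lambda * a)
  n * (1 - p_cens)
}
# 300 patients, 12-month median survival, 24 months of accrual: suspending
# accrual and adding 6 months of follow-up yields extra events "for free".
expected_events(300, log(2) / 12, a = 24, T = 24)
expected_events(300, log(2) / 12, a = 24, T = 30)
```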

10.
11.
In this paper, we derive sequential conditional probability ratio tests to compare diagnostic tests without distributional assumptions on the test results. The test statistics in our method are nonparametric weighted areas under the receiver-operating characteristic curves. With the new method, a decision to stop the diagnostic trial early is unlikely to be reversed should the trial continue to its planned end. This conservatism, reflected in more conservative stopping boundaries during the course of the trial, is especially appealing for diagnostic trials, since the endpoint is not death. In addition, the maximum sample size of our method is no greater than that of a fixed-sample test with a similar power function. Simulation studies are performed to evaluate the properties of the proposed sequential procedure. We illustrate the method using data from a thoracic aorta imaging study.
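For reference, the nonparametric AUC that underlies the test statistics here is the Mann-Whitney estimate, sketched below on simulated marker values (the weighting scheme of the paper is omitted):

```r
# Hypothetical sketch: nonparametric (Mann-Whitney) estimate of the area
# under the ROC curve, with the usual 1/2 convention for ties.
set.seed(1)
auc_hat <- function(x_dis, x_non) {
  # proportion of (diseased, non-diseased) pairs that are correctly ordered
  mean(outer(x_dis, x_non, ">") + 0.5 * outer(x_dis, x_non, "=="))
}
auc_hat(rnorm(50, mean = 1), rnorm(50))  # AUC for a shifted-normal marker
```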

12.
Conventional clinical trial design involves considerations of power, and the sample size is typically chosen to achieve a desired power conditional on a specified treatment effect. In practice, there is considerable uncertainty about what the true underlying treatment effect may be, and so power does not give a good indication of the probability that the trial will demonstrate a positive outcome. Assurance is the unconditional probability that the trial will yield a 'positive outcome'. A positive outcome usually means a statistically significant result, according to some standard frequentist significance test. The assurance is then the prior expectation of the power, averaged over the prior distribution for the unknown true treatment effect. We argue that assurance is an important measure of the practical utility of a proposed trial, and indeed that it will often be appropriate to choose the size of the sample (and perhaps other aspects of the design) to achieve a desired assurance, rather than a desired power conditional on an assumed treatment effect. We extend the theory of assurance to two-sided testing and equivalence trials. We also show that assurance is straightforward to compute in some simple problems of normal, binary, and gamma distributed data, and that the method is not restricted to simple conjugate prior distributions for parameters. Several illustrations are given.
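Assurance is straightforward to approximate by Monte Carlo: draw the true effect from its prior, evaluate the conditional power at each draw, and average. A minimal sketch for a two-arm normal endpoint, with an invented normal prior:

```r
# Hypothetical sketch: assurance = prior expectation of power, by Monte
# Carlo, for a two-arm trial with a normal endpoint of known SD. The prior
# on the true effect is an invented illustration.
set.seed(1)
n_per_arm <- 100; sigma <- 1; alpha <- 0.025
se    <- sigma * sqrt(2 / n_per_arm)
delta <- rnorm(1e5, mean = 0.25, sd = 0.15)    # prior draws of the true effect
power <- pnorm(delta / se - qnorm(1 - alpha))  # conditional power at each draw
mean(power)                                    # assurance
pnorm(0.25 / se - qnorm(1 - alpha))            # power at the prior mean
```

Comparing the last two lines shows the usual pattern: assurance is lower than the power computed at a single assumed effect, because the prior admits effects near zero.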

13.
In the traditional design of a single-arm phase II cancer clinical trial, the one-sample log-rank test has frequently been used. A common practice in sample size calculation is to assume that the event time under the new treatment follows an exponential distribution. Such a design may not be suitable for immunotherapy cancer trials in which both long-term survivors (or even patients cured of the disease) and a delayed treatment effect are present, because the exponential distribution is not appropriate for describing such data and its use could consequently lead to a severely underpowered trial. In this research, we propose a piecewise proportional hazards cure rate model with a random delayed treatment effect for designing single-arm phase II immunotherapy cancer trials. To improve test power, we propose a new weighted one-sample log-rank test and provide a sample size calculation formula for designing trials. Our simulation study shows that the proposed log-rank test performs well and is robust to misspecified weights, and that the sample size calculation formula also performs well.
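For context, the unweighted one-sample log-rank test that the proposal generalises compares the observed event count O with its null expectation E, the cumulative null hazard summed over follow-up times. A sketch with invented data and null hazard (the paper's weights and cure-rate model are omitted):

```r
# Hypothetical sketch: the classic one-sample log-rank test against a
# reference exponential hazard. Data and the null hazard are invented.
set.seed(1)
lambda0 <- log(2) / 12                  # null: 12-month median survival
t_evt   <- rexp(80, rate = log(2) / 18) # truth in this toy: 18-month median
t_cens  <- runif(80, 6, 36)             # administrative censoring
time    <- pmin(t_evt, t_cens)
status  <- as.integer(t_evt <= t_cens)
O <- sum(status)                        # observed events
E <- sum(lambda0 * time)                # cumulative null hazard at exit times
z <- (O - E) / sqrt(E)                  # one-sided: z < 0 favours the drug
c(O = O, E = E, z = z, p = pnorm(z))
```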

14.
Confounding adjustment plays a key role in designing observational studies such as cross-sectional studies, case-control studies, and cohort studies. In this article, we propose a simple method for sample size calculation in observational research in the presence of confounding. The method is motivated by the notion of the E-value, using a bounding factor to quantify the impact of confounders on the effect size. The method can be applied to calculate the needed sample size in observational research when the outcome variable is binary, continuous, or time-to-event, and can be implemented straightforwardly in existing commercial software such as PASS. We demonstrate the performance of the proposed method through numerical examples, simulation studies, and a real application, which show that the proposed method is conservative in that it provides a slightly larger sample size than is needed to achieve a given power.
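A hedged guess at the flavour of the calculation (not the paper's exact procedure): shrink the design effect toward the null by the E-value bounding factor before a standard sample size computation. The confounder-association values and event rates below are invented.

```r
# Hypothetical sketch: shrink the design risk ratio toward the null by the
# E-value bounding factor B = (RRud * RRuy) / (RRud + RRuy - 1) before a
# standard two-group sample size calculation. All inputs are illustrative,
# and this is a simplified stand-in for the paper's method.
rr_ud <- 1.5; rr_uy <- 1.5                    # assumed confounder associations
B     <- rr_ud * rr_uy / (rr_ud + rr_uy - 1)  # bounding factor (1.125 here)
p0    <- 0.10; rr <- 2.0                      # control risk, design risk ratio
rr_adj <- max(rr / B, 1)                      # confounding-robust (smaller) RR
power.prop.test(p1 = p0, p2 = p0 * rr_adj,
                sig.level = 0.05, power = 0.8)$n  # conservative n per group
```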

15.
Because of the complexity of cancer biology, the target pathway is often not well understood at the time that phase III trials are initiated. A two-stage trial design was previously proposed for identifying a subgroup of interest in a learn stage, on the basis of one or more baseline biomarkers, and then confirming it in a confirm stage. In this article, we discuss some practical aspects of this type of design and describe an enhancement to this approach that can be built into the study randomization to increase the robustness of the evaluation. Furthermore, we show via simulation studies how the proportion of patients allocated to the learn stage versus the confirm stage impacts the power, and we provide recommendations.

16.
This paper studies the notion of coherence in interval-based dose-finding methods. An incoherent decision is either (a) a recommendation to escalate the dose following an observed dose-limiting toxicity or (b) a recommendation to de-escalate the dose following a non-dose-limiting toxicity. In a simulated example, we illustrate that the Bayesian optimal interval (BOIN) method and the Keyboard method are not coherent. We generated dose-limiting toxicity outcomes under an assumed set of true probabilities for a trial of n=36 patients in cohorts of size 1, and we counted the number of incoherent dosing decisions made throughout this simulated trial. Each of the methods studied resulted in 13/36 (36%) incoherent decisions in the simulated trial. Additionally, for two different target dose-limiting toxicity rates, 20% and 30%, and a sample size of n=30 patients, we randomly generated 100 dose-toxicity curves and tabulated the number of incoherent decisions made by each method in 1000 simulated trials under each curve. For each method studied, the probability of incurring at least one incoherent decision during the conduct of a single trial is greater than 75%. Coherence is an important principle in the conduct of dose-finding trials. Interval-based methods violate this principle for cohorts of size 1 and require additional modifications to overcome this shortcoming. Researchers need to take a closer look at the dose assignment behavior of interval-based methods when using them to plan dose-finding studies.
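The incoherence is easy to exhibit with the published BOIN boundary formulas (defaults phi1 = 0.6*phi, phi2 = 1.4*phi); the scenario below is made up for illustration.

```r
# Sketch of interval-based decisions using the published BOIN default
# boundaries, and an example of the incoherence described above. The
# dosing scenario itself is invented.
boin_bounds <- function(phi, phi1 = 0.6 * phi, phi2 = 1.4 * phi) {
  le <- log((1 - phi1) / (1 - phi)) /
        log(phi * (1 - phi1) / (phi1 * (1 - phi)))   # escalation boundary
  ld <- log((1 - phi) / (1 - phi2)) /
        log(phi2 * (1 - phi) / (phi * (1 - phi2)))   # de-escalation boundary
  c(escalate_below = le, deescalate_above = ld)
}
b <- boin_bounds(phi = 0.30)  # approx. 0.236 and 0.359 for a 30% target
# 1 DLT in 5 patients at the current dose: phat = 0.20 < 0.236, so the rule
# says "escalate" even though the most recent patient had a DLT -- an
# incoherent escalation when cohorts have size 1.
phat <- 1 / 5
if (phat <= b["escalate_below"]) "escalate" else "stay or de-escalate"
```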

17.
Prior information is often incorporated informally when planning a clinical trial. Here, we present an approach for incorporating prior information, such as data from historical clinical trials, into the nuisance parameter-based sample size re-estimation in a design with an internal pilot study. We focus on trials with continuous endpoints in which the outcome variance is the nuisance parameter. For planning and analyzing the trial, frequentist methods are considered. Moreover, the external information on the variance is summarized by the Bayesian meta-analytic-predictive approach. To incorporate external information into the sample size re-estimation, we propose to update the meta-analytic-predictive prior based on the results of the internal pilot study and to re-estimate the sample size using an estimator from the posterior. By means of a simulation study, we compare the operating characteristics, such as power and the sample size distribution, of the proposed procedure with the traditional sample size re-estimation approach that uses the pooled variance estimator. The simulation study shows that, if no prior-data conflict is present, incorporating external information into the sample size re-estimation improves the operating characteristics compared with the traditional approach. In the case of a prior-data conflict, that is, when the variance of the ongoing clinical trial is unequal to the prior location, the performance of the traditional sample size re-estimation procedure is in general superior, even when the prior information is robustified. When considering whether to include prior information in sample size re-estimation, the potential gains should be balanced against the risks.
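As a deliberately simplified stand-in for the meta-analytic-predictive approach, the sketch below pools a single historical variance estimate with the internal-pilot variance in a conjugate fashion and re-estimates the sample size from the pooled value; all inputs are invented.

```r
# Hypothetical, simplified sketch (a single conjugate-style pooling rather
# than the meta-analytic-predictive mixture): combine a historical variance
# with the internal-pilot variance, then re-estimate n. Values are invented.
s2_hist  <- 4.0; n0 <- 50   # historical variance and its effective prior size
s2_pilot <- 5.5; n1 <- 40   # internal pilot variance estimate and pilot size
s2_post  <- (n0 * s2_hist + (n1 - 1) * s2_pilot) / (n0 + n1 - 1)  # pooled value
delta <- 1.0; alpha <- 0.025; beta <- 0.2
n_arm <- 2 * (qnorm(1 - alpha) + qnorm(1 - beta))^2 * s2_post / delta^2
ceiling(n_arm)              # re-estimated per-arm sample size
```

The prior-data conflict discussed above corresponds to s2_pilot sitting far from s2_hist, in which case this pooling drags the re-estimated sample size toward the wrong variance.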

18.
A multi-sample test for equality of mean directions is developed for populations having Langevin-von Mises-Fisher distributions with a common unknown concentration. The proposed test statistic is a monotone transformation of the likelihood ratio. The high-concentration asymptotic null distribution of the test statistic is derived. In contrast to previously suggested high-concentration tests, the high-concentration asymptotic approximation to the null distribution of the proposed test statistic is also valid for large sample sizes with any fixed nonzero concentration parameter. Simulations of size and power show that the proposed test outperforms competing tests. An example with three-dimensional data from an anthropological study illustrates the practical application of the testing procedure.

19.
A versatile procedure is described comprising an application of statistical techniques to the analysis of the large, multi-dimensional data arrays produced by electroencephalographic (EEG) measurements of human brain function. Previous analytical methods have been unable to identify objectively the precise times at which statistically significant experimental effects occur, owing to the large number of variables (electrodes) and small number of subjects, or have been restricted to two-treatment experimental designs. Many time-points are sampled in each experimental trial, making adjustment for multiple comparisons mandatory. Given the typically large number of comparisons and the clear dependence structure among time-points, simple Bonferroni-type adjustments are far too conservative. A three-step approach is proposed: (i) summing univariate statistics across variables; (ii) using permutation tests for treatment effects at each time-point; and (iii) adjusting for multiple comparisons using permutation distributions to control family-wise error across the whole set of time-points. Our approach provides an exact test of the individual hypotheses while asymptotically controlling family-wise error in the strong sense, and can provide tests of interaction and main effects in factorial designs. An application to two experimental data sets from EEG studies is described, but the approach has application to the analysis of spatio-temporal multivariate data gathered in many other contexts.
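Step (iii) is the classic permutation max-T adjustment: compare each time-point's observed statistic with the permutation distribution of the maximum statistic across all time-points. A minimal sketch, with one t-statistic per time-point standing in for the electrode-summed statistics of step (i):

```r
# Hypothetical sketch of permutation-based family-wise error control (max-T)
# across time-points. Two conditions, a simulated effect at time-points
# 40-60; everything is invented for illustration.
set.seed(1)
n <- 16; tp <- 100
cond <- rep(0:1, each = n / 2)
x <- matrix(rnorm(n * tp), n, tp)
x[cond == 1, 40:60] <- x[cond == 1, 40:60] + 1     # effect in a time window
t_vec <- function(g) apply(x, 2, function(col) abs(t.test(col ~ g)$statistic))
obs  <- t_vec(cond)
maxT <- replicate(1000, max(t_vec(sample(cond))))  # null of the max statistic
p_adj <- sapply(obs, function(t0) mean(maxT >= t0))  # FWE-adjusted p-values
which(p_adj < 0.05)                                # significant time-points
```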

20.
A placebo-controlled randomized clinical trial is required to demonstrate that an experimental treatment is superior to its corresponding placebo on multiple coprimary endpoints. This is particularly true in the field of neurology, where clinical trials need to show the superiority of an experimental treatment over a placebo on two coprimary endpoints. Unfortunately, these trials often fail to detect a true treatment effect owing to an unexpectedly high placebo response rate. The sequential parallel comparison design (SPCD) can be used to address this problem; however, the SPCD has not yet been discussed in relation to clinical trials with coprimary endpoints. In this article, our aim is to develop a hypothesis-testing method and a method for calculating the corresponding sample size for the SPCD with two coprimary endpoints. In a simulation, we show that the proposed hypothesis-testing method achieves the nominal type I error rate and power, and that the proposed sample size calculation method provides adequate accuracy in the achieved power. In addition, the usefulness of our methods is confirmed by revisiting an SPCD trial with a single primary endpoint of Alzheimer's disease-related agitation.
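The sketch below illustrates only the coprimary "win" criterion at the heart of the problem, requiring both endpoints to be significant at the full alpha, so that power depends on the endpoint correlation; the SPCD two-stage weighting itself is omitted, and all values are invented.

```r
# Hypothetical sketch: power for two coprimary endpoints, where the trial
# succeeds only if BOTH endpoints are significant at the full alpha level.
# Effect sizes and the endpoint correlation are invented; the SPCD stage
# weighting is omitted for brevity.
set.seed(1)
library(MASS)                      # for mvrnorm
n <- 150; delta <- c(0.30, 0.25); rho <- 0.5
Sigma <- matrix(c(1, rho, rho, 1), 2, 2)
win <- replicate(5000, {
  diff_means <- colMeans(mvrnorm(n, delta, Sigma) - mvrnorm(n, c(0, 0), Sigma))
  z <- diff_means / sqrt(2 / n)    # one z-statistic per endpoint
  all(z > qnorm(0.975))            # both coprimary endpoints must succeed
})
mean(win)                          # power for the coprimary pair
```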
