Similar Articles
20 similar articles found (search time: 31 ms)
1.
ABSTRACT

Just as Bayes extensions of the frequentist optimal allocation design have been developed for the two-group case, we provide a Bayes extension of optimal allocation in the three-group case. We use the optimal allocations derived by Jeon and Hu [Optimal adaptive designs for binary response trials with three treatments. Statist Biopharm Res. 2010;2(3):310–318] and estimate success probabilities for each treatment arm using a Bayes estimator. We also introduce a natural lead-in design that allows adaptation to begin as early in the trial as possible. Simulation studies show that the Bayesian adaptive designs simultaneously increase the power and the expected number of successfully treated patients compared to the balanced design. Compared to the standard adaptive design, the natural lead-in design introduced in this study produces a higher expected number of successes whilst preserving power.
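As a concrete illustration of the Bayes estimation step, the sketch below uses independent Beta(1, 1) priors and posterior-mean estimates of the three success probabilities; the allocation rule shown (proportions proportional to the square roots of the estimates) is only a stand-in for illustration, not the exact optimal target derived by Jeon and Hu:

```python
def bayes_allocation(successes, failures, a=1.0, b=1.0):
    """Posterior-mean success probabilities under Beta(a, b) priors for
    each of three arms, mapped to allocation proportions proportional to
    sqrt(p_hat). The sqrt rule is an illustrative placeholder, not the
    Jeon-Hu optimal allocation."""
    p_hat = [(a + s) / (a + b + s + f) for s, f in zip(successes, failures)]
    w = [p ** 0.5 for p in p_hat]
    total = sum(w)
    return p_hat, [x / total for x in w]

# 8/10, 5/10 and 2/10 observed successes on the three arms
p_hat, rho = bayes_allocation([8, 5, 2], [2, 5, 8])
```

With uniform priors the posterior mean shrinks the raw rates slightly toward 1/2, which stabilizes adaptation early in the trial when arm sample sizes are small.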

2.
Response-adaptive (RA) allocation designs can skew the allocation of incoming subjects toward the better performing treatment group based on the previously accrued responses. While unstable estimators and increased variability can adversely affect adaptation in early trial stages, Bayesian methods can be implemented with decreasingly informative priors (DIP) to overcome these difficulties. DIPs have previously been used for binary outcomes to constrain adaptation early in the trial, yet gradually increase adaptation as subjects accrue. We extend the DIP approach to RA designs for continuous outcomes, primarily in the normal conjugate family, by functionalizing the prior effective sample size to equal the unobserved sample size. We compare this effective-sample-size DIP approach to other DIP formulations. Further, we consider various allocation equations and assess their behavior under DIPs. Simulated clinical trials comparing the behavior of these approaches with traditional frequentist and Bayesian RA designs as well as balanced designs show that the natural lead-in approaches maintain improved treatment with lower variability and greater power.
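A minimal sketch of the effective-sample-size idea in the normal conjugate family, assuming a known-variance posterior-mean update and a prior effective sample size set to the unobserved sample size n0 = N - n (an illustrative formulation of the DIP device, not necessarily the paper's exact one):

```python
def dip_posterior_mean(xbar, n, N, mu0=0.0):
    """Normal-conjugate posterior mean where the prior's effective sample
    size is the unobserved sample size n0 = N - n (decreasingly
    informative prior). Early in the trial the skeptical prior mean mu0
    dominates, constraining adaptation; as n approaches the planned total
    N, the posterior mean converges to the sample mean xbar."""
    n0 = N - n  # prior weight shrinks as data accrue
    return (n0 * mu0 + n * xbar) / (n0 + n)
```

For example, with a planned total of N = 100, an observed mean of 2.0 is shrunk to 0.2 after 10 subjects but reported at full strength once all 100 have accrued.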

3.
4.
Instead of using traditional separate phase I and II trials, in this article, we propose using a parallel three-stage phase I/II design, incorporating a dose expansion approach to flexibly evaluate the safety and efficacy of dose levels, and to select the optimal dose. In the proposed design, both the toxicity and efficacy responses are binary endpoints. A 3+3-based procedure is used for the initial period of dose escalation at stage 1; at this level, the dose can be expanded to stage 2 for exploratory efficacy studies of phase IIa, while simultaneously, the safety testing can advance to a higher dose level. A beta-binomial model is used to model the efficacy responses. There are two placebo-controlled randomization interim monitoring analyses at stage 2 to select the promising doses to be recommended to stage 3 for further efficacy studies of phase IIb. An adaptive randomization approach is used to assign more patients to doses with higher efficacy levels at stage 3. We examine the properties of the proposed design through extensive simulation studies using the R programming language, and also compare the new design with the conventional design and a competing adaptive Bayesian design. The simulation results show that our design can efficiently assign more patients to doses with higher efficacy levels and is superior to the two competing designs in terms of total sample size reduction.
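The stage-1 escalation step follows standard 3+3 rules, which can be sketched as a simple decision function (cohort handling simplified to the textbook version):

```python
def three_plus_three(dlt_counts):
    """Textbook 3+3 escalation decision for the current dose, given
    (n, dlt) = (patients treated at this dose, dose-limiting toxicities
    observed among them): 0/3 -> escalate; 1/3 -> expand the cohort to 6;
    >=2/3 -> de-escalate; 1/6 or fewer -> escalate; >=2/6 -> de-escalate."""
    n, dlt = dlt_counts
    if n == 3:
        if dlt == 0:
            return "escalate"
        if dlt == 1:
            return "expand to 6"
        return "de-escalate"
    if n == 6:
        return "escalate" if dlt <= 1 else "de-escalate"
    raise ValueError("3+3 uses cohorts of 3 or 6 only")
```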

5.
Summary.  Prophylaxis of contacts of infectious cases such as household members and treatment of infectious cases are methods to prevent the spread of infectious diseases. We develop a method based on maximum likelihood to estimate the efficacy of such interventions and the transmission probabilities. We consider both the design with prospective follow-up of close contact groups and the design with ascertainment of close contact groups by an index case as well as randomization by groups and by individuals. We compare the designs by using simulations. We estimate the efficacy of the influenza antiviral agent oseltamivir in reducing susceptibility and infectiousness in two case-ascertained household trials.
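The simplest special case of this estimation problem, with one exposure per contact and no group structure, reduces to comparing secondary attack rates between randomized contact groups; the paper's full maximum likelihood method generalizes well beyond this sketch:

```python
def efficacy_estimate(inf_t, n_t, inf_c, n_c):
    """Moment-style sketch: secondary attack rates (SARs) among treated
    and control contacts, and prophylactic efficacy 1 - SAR_t / SAR_c.
    This is only the simplest special case (one exposure per contact);
    the paper's likelihood handles contact-group structure and separate
    susceptibility and infectiousness effects."""
    sar_t, sar_c = inf_t / n_t, inf_c / n_c
    return sar_t, sar_c, 1.0 - sar_t / sar_c
```

For example, 5 infections among 100 prophylaxed contacts versus 20 among 100 untreated contacts gives an estimated efficacy of 0.75.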

6.
Designs for early phase dose finding clinical trials typically are either phase I based on toxicity, or phase I-II based on toxicity and efficacy. These designs rely on the implicit assumption that the dose of an experimental agent chosen using these short-term outcomes will maximize the agent's long-term therapeutic success rate. In many clinical settings, this assumption is not true. A dose selected in an early phase oncology trial may give suboptimal progression-free survival or overall survival time, often due to a high rate of relapse following response. To address this problem, a new family of Bayesian generalized phase I-II designs is proposed. First, a conventional phase I-II design based on short-term outcomes is used to identify a set of candidate doses, rather than selecting one dose. Additional patients then are randomized among the candidates, patients are followed for a predefined longer time period, and a final dose is selected to maximize the long-term therapeutic success rate, defined in terms of duration of response. Dose-specific sample sizes in the randomization are determined adaptively to obtain a desired level of selection reliability. The design was motivated by a phase I-II trial to find an optimal dose of natural killer cells as targeted immunotherapy for recurrent or treatment-resistant B-cell hematologic malignancies. A simulation study shows that, under a range of scenarios in the context of this trial, the proposed design has much better performance than two conventional phase I-II designs.

7.
The prognosis for patients with high grade gliomas is poor, with a median survival of 1 year. Treatment efficacy assessment is typically unavailable until 5-6 months post diagnosis. Investigators hypothesize that quantitative magnetic resonance imaging can assess treatment efficacy 3 weeks after therapy starts, thereby allowing salvage treatments to begin earlier. The purpose of this work is to build a predictive model of treatment efficacy by using quantitative magnetic resonance imaging data and to assess its performance. The outcome is 1-year survival status. We propose a joint, two-stage Bayesian model. In stage I, we smooth the image data with a multivariate spatiotemporal pairwise difference prior. We propose four summary statistics that are functionals of posterior parameters from the first-stage model. In stage II, these statistics enter a generalized non-linear model as predictors of survival status. We use the probit link and a multivariate adaptive regression spline basis. Gibbs sampling and reversible jump Markov chain Monte Carlo methods are applied iteratively between the two stages to estimate the posterior distribution. Through both simulation studies and model performance comparisons we find that we can achieve higher overall correct classification rates by accounting for the spatiotemporal correlation in the images and by allowing for a more complex and flexible decision boundary provided by the generalized non-linear model.

8.
To evaluate the performance of randomization designs under various parameter settings and trial sample sizes, and to identify optimal designs with respect to both treatment imbalance and allocation randomness, we evaluate 260 design scenarios from 14 randomization designs under 15 sample sizes ranging from 10 to 300, using three measures for imbalance and three measures for randomness. The maximum absolute imbalance and the correct guess (CG) probability are selected to assess the trade-off performance of each randomization design. As measured by the maximum absolute imbalance and the CG probability, we find that the performances of the 14 randomization designs are located in a closed region with the upper boundary (worst case) given by Efron's biased coin design (BCD) and the lower boundary (best case) given by Soares and Wu's big stick design (BSD). Designs close to the lower boundary provide a smaller imbalance and a higher randomness than designs close to the upper boundary. Our research suggests that optimization of randomization designs is possible based on quantified evaluation of imbalance and randomness. Based on the maximum imbalance and CG probability, the BSD, Chen's biased coin design with imbalance tolerance, and Chen's Ehrenfest urn design perform better than the popular permuted block design, Efron's BCD, and Wei's urn design.
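The two boundary designs can be compared with a small Monte Carlo sketch; the sample size, number of replicates, Efron bias p = 2/3, and big-stick boundary b = 3 below are illustrative choices, not the paper's settings:

```python
import random

def simulate(design, n=50, trials=2000, p=2/3, b=3, seed=1):
    """Monte Carlo estimate of (mean maximum |imbalance|, correct-guess
    rate) for two boundary designs: Efron's BCD (bias p toward the
    lagging arm whenever imbalanced) and the big stick design (fair coin
    until the imbalance boundary b forces a balancing assignment)."""
    rng = random.Random(seed)
    tot_max_imb, guesses, correct = 0.0, 0, 0
    for _ in range(trials):
        d, max_imb = 0, 0   # d = (# assigned to A) - (# assigned to B)
        for _ in range(n):
            if design == "BSD":
                a = (d < 0) if abs(d) >= b else (rng.random() < 0.5)
            else:  # Efron's BCD
                a = (rng.random() < 0.5) if d == 0 else \
                    ((d < 0) if rng.random() < p else (d > 0))
            # the best guess is always the currently lagging arm
            guess = (d < 0) if d != 0 else (rng.random() < 0.5)
            correct += (guess == a)
            guesses += 1
            d += 1 if a else -1
            max_imb = max(max_imb, abs(d))
        tot_max_imb += max_imb
    return tot_max_imb / trials, correct / guesses

bsd_imb, bsd_cg = simulate("BSD")
bcd_imb, bcd_cg = simulate("BCD")
```

The big stick design caps the absolute imbalance at b by construction, while Efron's BCD trades a larger typical maximum imbalance for its fixed-bias coin.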

9.
THE DESIGN OF CLINICAL TRIALS
Clinical trials have grown in number and size since their introduction over 30 years ago. They embody Fisherian principles of experimentation, with emphasis on randomization rather than on complex structure. Randomization has recently come under attack, regrettably in the author's view. Much work has been done on methods of achieving balanced randomization, although the advantage of such schemes over post-trial adjustment is not clear.
Trials are often too small to achieve sensible objectives. Size is mainly determined by power considerations, decision-theoretic models being difficult to relate to practical needs. Sequential methods are available, and have become more flexible.
Problems of planning and execution, on which the statistician's view is important, include the homogeneity or otherwise of patients, uniformity of treatment schedules, procedure for dealing with protocol departures and the avoidance of bias in assessment of response.
Cross-over designs are useful for the study of short-term alleviation of chronic disease symptoms, but more attention should be paid to the detection of treatment × period interaction, which is based on between-subject comparisons. More use should be made of factorial designs, although they give rise to difficulties of interpretation when factors interact.

10.
This paper develops clinical trial designs that compare two treatments with a binary outcome. The imprecise beta class (IBC), a class of beta probability distributions, is used in a robust Bayesian framework to calculate posterior upper and lower expectations for treatment success rates using accumulating data. The posterior expectation for the difference in success rates can be used to decide when there is sufficient evidence for randomized treatment allocation to cease. This design is formally related to the randomized play-the-winner (RPW) design, an adaptive allocation scheme where randomization probabilities are updated sequentially to favour the treatment with the higher observed success rate. A connection is also made between the IBC and the sequential clinical trial design based on the triangular test. Theoretical and simulation results are presented to show that the expected sample sizes on the truly inferior arm are lower using the IBC compared with either the triangular test or the RPW design, and that the IBC performs well against established criteria involving error rates and the expected number of treatment failures.
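A minimal sketch of the IBC posterior bounds, assuming Walley-style priors Beta(s·t, s·(1-t)) with t ranging over (0, 1) and a fixed learning parameter s (an assumed parameterization for illustration):

```python
def posterior_bounds(x, n, s=2.0):
    """Imprecise beta class sketch: for priors Beta(s*t, s*(1-t)) with
    t in (0, 1) and fixed s, the posterior mean of the success rate after
    x successes in n trials ranges over [x/(s+n), (s+x)/(s+n)]. These
    endpoints are the lower and upper posterior expectations."""
    return x / (s + n), (s + x) / (s + n)

def sufficient_evidence(xa, na, xb, nb, s=2.0, delta=0.0):
    """Stop randomizing when even the most pessimistic posterior mean for
    arm A exceeds the most optimistic one for arm B by more than delta
    (an illustrative stopping criterion)."""
    lo_a, _ = posterior_bounds(xa, na, s)
    _, hi_b = posterior_bounds(xb, nb, s)
    return lo_a - hi_b > delta
```

As data accrue, the width of the interval s/(s+n) shrinks to zero, so the robust bounds converge on the observed success rate.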

11.
Random assignment of experimental units to treatment and control groups is a conventional device to create unbiased comparisons. However, when sample sizes are small and the units differ considerably, there is a significant risk that randomization will create seriously unbalanced partitions of the units into treatment and control groups. We develop and evaluate an alternative to complete randomization for small-sample comparisons involving ordinal data with partial information on ranks of units. For instance, we might know that, of eight units, Rank(A) < Rank(C), Rank(A) < Rank(E) and Rank(D) < Rank(H). We develop an efficient computational procedure to use such information as the basis for restricted randomization of units to the treatment group. We compare our methods to complete randomization in the context of the Mann-Whitney test. With sufficient ranking information, the restricted randomization results in more powerful comparisons.

12.
This article designs a sequential Monte Carlo (SMC) algorithm for estimation of a Bayesian semi-parametric stochastic volatility model for financial data. In particular, it makes use of one of the most recent particle filters, called Particle Learning (PL). SMC methods are especially well suited for state-space models and can be seen as a cost-efficient alternative to Markov chain Monte Carlo (MCMC), since they allow for online-type inference. The posterior distributions are updated as new data are observed, which is exceedingly costly using MCMC. Also, PL allows for consistent online model comparison using sequential predictive log Bayes factors. Simulated data are used to compare the posterior outputs of the PL and MCMC schemes, which are shown to be almost identical. Finally, a short real-data application is included.
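For intuition, a plain bootstrap particle filter for a basic parametric stochastic volatility model is sketched below; note this is not Particle Learning (which additionally tracks conditional sufficient statistics for the parameters) and the model and parameter values are illustrative:

```python
import math
import random

def bootstrap_filter(y, n_part=500, mu=-1.0, phi=0.95, sig=0.2, seed=0):
    """Bootstrap particle filter for the basic SV model
    h_t = mu + phi*(h_{t-1} - mu) + sig*eta_t,  y_t ~ N(0, exp(h_t)),
    with parameters held fixed (PL would also learn them online).
    Returns the filtered means E[h_t | y_{1:t}]."""
    rng = random.Random(seed)
    # initialize from the stationary distribution of the AR(1) state
    parts = [rng.gauss(mu, sig / math.sqrt(1 - phi * phi)) for _ in range(n_part)]
    means = []
    for yt in y:
        # propagate each particle through the state transition
        parts = [mu + phi * (h - mu) + sig * rng.gauss(0, 1) for h in parts]
        # weight by the observation likelihood N(y_t; 0, exp(h))
        w = [math.exp(-0.5 * (h + yt * yt * math.exp(-h))) for h in parts]
        tot = sum(w)
        w = [x / tot for x in w]
        means.append(sum(wi * h for wi, h in zip(w, parts)))
        # multinomial resampling to combat weight degeneracy
        parts = rng.choices(parts, weights=w, k=n_part)
    return means
```

Each new observation triggers one propagate-weight-resample sweep, which is what makes the online posterior updates cheap relative to re-running MCMC on the growing series.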

13.
We consider the situation where sample surveys are to be undertaken on sensitive or stigmatizing issues. For such surveys, direct questioning methods usually lead to non-compliance or incorrect responses and so, the randomized response technique, where the responses are collected through some randomization device, is found to be useful. A majority of the literature on these techniques focuses on dichotomous sensitive variables, while some techniques are also available for continuous sensitive variables. In this article, we focus on the extent of privacy protection available in sample surveys to respondents for continuous response variables. We also propose two measures of privacy protection. We demonstrate that the parameters of our randomization scheme can be so chosen as to achieve a pre-assigned level of privacy protection while at the same time yielding efficient estimates. We also show some numerical comparisons.
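One standard randomization device for a continuous sensitive variable is additive scrambling, sketched here with a normal scrambling distribution (an illustrative choice; the paper's privacy measures and scheme go beyond this):

```python
import random
import statistics

def scrambled_survey(xs, s_mean=10.0, s_sd=3.0, seed=0):
    """Additive randomized-response sketch: each respondent reports
    Z = X + S, where S is drawn from a known scrambling distribution, so
    the interviewer never observes X directly. The population mean of X
    is recovered unbiasedly as mean(Z) - E[S]; larger s_sd gives more
    privacy at the cost of estimator efficiency."""
    rng = random.Random(seed)
    zs = [x + rng.gauss(s_mean, s_sd) for x in xs]
    return statistics.mean(zs) - s_mean
```

The variance of the scrambling distribution is exactly the tuning knob the abstract describes: it trades privacy protection against the variance of the resulting estimate.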

14.
In multiple comparisons of fixed effect parameters in linear mixed models, treatment effects can be reported as relative changes or ratios. Simultaneous confidence intervals for such ratios had been previously proposed based on Bonferroni adjustments or multivariate normal quantiles accounting for the correlation among the multiple contrasts. We propose Fieller-type intervals using multivariate t quantiles and the application of Markov chain Monte Carlo techniques to sample from the joint posterior distribution and construct percentile-based simultaneous intervals. The methods are compared in a simulation study including bioassay problems with random intercepts and slopes, repeated measurements designs, and multicenter clinical trials.
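A single Fieller interval for one ratio, assuming independent estimates (zero covariance) and a fixed critical value, can be sketched as follows; the paper's simultaneous intervals replace the scalar critical value with multivariate t quantiles:

```python
import math

def fieller_ci(m1, s1, m2, s2, t=1.96):
    """Fieller interval for the ratio m1/m2 of two independent estimates
    with standard errors s1, s2 (zero covariance assumed for simplicity).
    Solves (m1 - r*m2)^2 = t^2 * (s1^2 + r^2 * s2^2) for r. A finite
    interval exists only when the denominator is significantly nonzero."""
    a = m2 * m2 - t * t * s2 * s2
    b = -2.0 * m1 * m2
    c = m1 * m1 - t * t * s1 * s1
    if a <= 0:
        raise ValueError("denominator not significantly nonzero; interval unbounded")
    disc = b * b - 4.0 * a * c   # positive whenever a > 0
    r1 = (-b - math.sqrt(disc)) / (2.0 * a)
    r2 = (-b + math.sqrt(disc)) / (2.0 * a)
    return min(r1, r2), max(r1, r2)
```

Unlike a naive delta-method interval, the Fieller interval is exact under normality and correctly blows up when the denominator estimate is too noisy.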

15.
Crossover designs have some advantages over standard clinical trial designs and they are often used in trials evaluating the efficacy of treatments for infertility. However, clinical trials of infertility treatments violate a fundamental condition of crossover designs, because women who become pregnant in the first treatment period are not treated in the second period. In previous research, to deal with this problem, some new designs, such as re-randomization designs, and analysis methods including the logistic mixture model and the beta-binomial mixture model were proposed. Although the performance of these designs and methods has previously been evaluated in large-scale clinical trials with sample sizes of more than 1000 per group, the actual sample sizes of infertility treatment trials are usually around 100 per group. The most appropriate design and analysis for these moderate-scale clinical trials are currently unclear. In this study, we conducted simulation studies to determine the appropriate design and analysis method of moderate-scale clinical trials for irreversible endpoints by evaluating the statistical power and bias in the treatment effect estimates. The Mantel–Haenszel method had similar power and bias to the logistic mixture model. The crossover designs had the highest power and the smallest bias. We recommend using a combination of the crossover design and the Mantel–Haenszel method for two-period, two-treatment clinical trials with irreversible endpoints. Copyright © 2015 John Wiley & Sons, Ltd.
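The Mantel–Haenszel estimator referenced above can be sketched for the common odds ratio across the two period strata (the 2x2 data layout here is an illustrative assumption):

```python
def mantel_haenszel_or(strata):
    """Mantel-Haenszel common odds ratio across 2x2 strata, each given as
    (a, b, c, d) = (treated success, treated failure,
                    control success, control failure).
    In the two-period crossover setting, each period contributes one
    stratum, so period effects are controlled by stratification."""
    num = sum(a * d / (a + b + c + d) for a, b, c, d in strata)
    den = sum(b * c / (a + b + c + d) for a, b, c, d in strata)
    return num / den
```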

16.
If a crossover design with more than two treatments is carryover balanced, then the usual randomization of experimental units and periods would destroy the neighbour structure of the design. As an alternative, Bailey [1985. Restricted randomization for neighbour-balanced designs. Statist. Decisions Suppl. 2, 237–248] considered randomization of experimental units and of treatment labels, which leaves the neighbour structure intact. She has shown that, if there are no carryover effects, this randomization validates the row–column model, provided the starting design is a generalized Latin square. We extend this result to generalized Youden designs where either the number of experimental units is a multiple of the number of treatments or the number of periods is equal to the number of treatments. For the situation when there are carryover effects we show for so-called totally balanced designs that the variance of the estimates of treatment differences does not change in the presence of carryover effects, while the estimated variance of this estimate becomes conservative.

17.

Bayesian monitoring strategies based on predictive probabilities are widely used in phase II clinical trials that involve a single binary efficacy variable. The essential idea is to control the predictive probability that the trial will show a conclusive result at the scheduled end of the study, given the information at the interim stage and the prior beliefs. In this paper, we present an extension of this approach to incorporate toxicity considerations in single-arm phase II trials. We consider two binary endpoints representing response and toxicity of the experimental treatment and define the result as successful at the conclusion of the study if the posterior probability of a high efficacy and that of a small toxicity are both sufficiently large. At any interim look, the Multinomial-Dirichlet distribution provides the predictive probability of each possible combination of future efficacy and toxicity outcomes. It is exploited to obtain the predictive probability that the trial will yield a positive outcome, if it continues to the planned end. Different possible interim situations are considered to investigate the behaviour of the proposed predictive rules and the differences with the monitoring strategies based on posterior probabilities are highlighted. Simulation studies are also performed to evaluate the frequentist operating characteristics of the proposed design and to calibrate the design parameters.

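The single-endpoint version of this monitoring rule can be computed exactly with a beta-binomial predictive sum; the thresholds p0 and theta below are illustrative design parameters, and the paper's joint efficacy/toxicity extension uses a Multinomial-Dirichlet model instead:

```python
from math import comb, exp, lgamma

def beta_tail(a, b, p0):
    """P(Beta(a, b) > p0) for integer a, b, via the identity
    P(Beta(a, b) > p0) = P(Binomial(a + b - 1, p0) <= a - 1)."""
    n = a + b - 1
    return sum(comb(n, k) * p0**k * (1 - p0)**(n - k) for k in range(a))

def predictive_prob(x, n, N, a=1, b=1, p0=0.3, theta=0.95):
    """Predictive probability that a single-endpoint trial succeeds at its
    planned end: with x responses in n patients and N planned, sum over
    future response counts fx the beta-binomial predictive weight times
    the indicator that the final posterior gives P(p > p0) > theta."""
    def lbeta(u, v):
        return lgamma(u) + lgamma(v) - lgamma(u + v)
    m = N - n
    total = 0.0
    for fx in range(m + 1):
        w = comb(m, fx) * exp(lbeta(a + x + fx, b + N - x - fx)
                              - lbeta(a + x, b + n - x))
        if beta_tail(a + x + fx, b + N - x - fx, p0) > theta:
            total += w
    return total
```

A low predictive probability at an interim look signals that the trial is unlikely to end conclusively even if continued, which is the basis for futility stopping.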

18.
Typically, in the brief discussion of Bayesian inferential methods presented at the beginning of calculus-based undergraduate or graduate mathematical statistics courses, little attention is paid to the process of choosing the parameter value(s) for the prior distribution. Even less attention is paid to the impact of these choices on the predictive distribution of the data. Reasons for this include that the posterior can be found by ignoring the predictive distribution thereby streamlining the derivation of the posterior and/or that computer software can be used to find the posterior distribution. In this paper, the binomial, negative-binomial and Poisson distributions along with their conjugate beta and gamma priors are utilized to obtain the resulting predictive distributions. It is then demonstrated that specific choices of the parameters of the priors can lead to predictive distributions with properties that might be surprising to a non-expert user of Bayesian methods.
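For the binomial case, the predictive distribution under a conjugate beta prior is the beta-binomial, which can be computed directly. As an example of a property that might surprise a non-expert, a Beta(0.1, 0.1) prior yields a U-shaped (bimodal) predictive, while Beta(1, 1) yields a uniform one:

```python
from math import comb, exp, lgamma

def log_beta(a, b):
    """log of the beta function B(a, b)."""
    return lgamma(a) + lgamma(b) - lgamma(a + b)

def beta_binomial_pmf(k, n, a, b):
    """Predictive probability of k successes in n binomial trials under a
    Beta(a, b) prior: C(n, k) * B(a + k, b + n - k) / B(a, b)."""
    return comb(n, k) * exp(log_beta(a + k, b + n - k) - log_beta(a, b))
```

Under Beta(1, 1) every count 0..n is equally likely a priori; under Beta(0.1, 0.1) the prior mass near 0 and 1 pushes the predictive mass to the extreme counts.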

19.
We study the optimality, efficiency, and robustness of crossover designs for comparing several test treatments to a control treatment. Since A-optimality is a natural criterion in this context, we establish lower bounds for the trace of the inverse of the information matrix for the test treatments versus control comparisons under various models. These bounds are then used to obtain lower bounds for efficiencies of a design under these models. Two algorithms, both guided by these efficiencies and results from optimal design theory, are proposed for obtaining efficient designs under the various models.

20.
In a clinical trial, response-adaptive randomization (RAR) uses accumulating data to weight the randomization of remaining patients in favour of the better performing treatment. The aim is to reduce the number of failures within the trial. However, many well-known RAR designs, in particular the randomized play-the-winner rule (RPWR), have a highly myopic structure which has sometimes led to unfortunate randomization sequences when used in practice. This paper introduces random permuted blocks into two RAR designs, the RPWR and sequential maximum likelihood estimation, for trials with a binary endpoint. Allocation ratios within each block are restricted to be one of 1:1, 2:1 or 3:1, preventing unfortunate randomization sequences. Exact calculations are performed to determine error rates and expected number of failures across a range of trial scenarios. The results presented show that when compared with equal allocation, block RAR designs give similar reductions in the expected number of failures to their unmodified counterparts. The reductions are typically modest under the alternative hypothesis but become more impressive if the treatment effect exceeds the clinically relevant difference.
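A sketch of the block idea: choose each block's allocation ratio from {1:1, 2:1, 3:1} in favour of the currently better arm, then randomly permute assignments within the block; the thresholds mapping the observed gap to a ratio are illustrative, not the paper's exact rule:

```python
import random

def next_block(succ, fail, seed=None):
    """Block response-adaptive randomization sketch for two arms. Based on
    smoothed observed success rates, pick an allocation ratio of 1:1, 2:1
    or 3:1 favouring the better arm (thresholds illustrative), and return
    a randomly permuted block of arm labels (0 or 1). Permuting within
    the block prevents long runs on a single arm."""
    rng = random.Random(seed)
    p = [(s + 1) / (s + f + 2) for s, f in zip(succ, fail)]  # smoothed rates
    best = 0 if p[0] >= p[1] else 1
    gap = abs(p[0] - p[1])
    ratio = 1 if gap < 0.1 else (2 if gap < 0.25 else 3)  # 1:1, 2:1 or 3:1
    block = [best] * ratio + [1 - best]
    rng.shuffle(block)
    return block
```

With a large observed gap the block assigns three patients to the better arm for every one on the other; with no evidence of a difference it reverts to 1:1.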


Copyright © Beijing Qinyun Technology Development Co., Ltd. (北京勤云科技发展有限公司). ICP licence: 京ICP备09084417号