Similar Articles
20 similar articles found.
1.
Simon's two-stage design is the most commonly applied multi-stage design in phase IIA clinical trials. It chooses the sample sizes of the two stages to minimize either the expected or the maximum sample size. When the uncertainty about pre-trial beliefs on the expected or desired response rate is high, a Bayesian alternative should be considered, since it handles the entire distribution of the parameter of interest in a more natural way. In this setting, a crucial issue is how to construct a distribution from the available summaries to use as a clinical prior in a Bayesian design. In this work, we explore Bayesian counterparts of Simon's two-stage design based on the predictive version of the single threshold design. This design requires specifying two prior distributions: the analysis prior, which is used to compute the posterior probabilities, and the design prior, which is employed to obtain the prior predictive distribution. While the usual approach is to build beta priors for carrying out a conjugate analysis, we derive both the analysis and the design distributions through linear combinations of B-splines. The motivating example is the planning of a phase IIA two-stage trial of an anti-HER2 DNA vaccine in breast cancer, where initial beliefs formed from elicited expert opinion and historical data showed a high level of uncertainty. In a sample size determination problem, the impact of different priors is evaluated.
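For reference, the frequentist operating characteristics of a Simon two-stage design (stage-1 size n1 with futility cutoff r1, total size n with cutoff r) follow directly from binomial probabilities. The sketch below is not the paper's Bayesian B-spline method, only the classical design it builds on; the design parameters in the example are purely illustrative.

    from scipy.stats import binom

    def simon_oc(p, n1, r1, n, r):
        """Operating characteristics of a Simon two-stage design:
        stop for futility if <= r1 responses among the first n1 patients;
        otherwise continue to n patients and declare the drug promising
        only if the total number of responses exceeds r."""
        pet = binom.cdf(r1, n1, p)                       # probability of early termination
        accept_h0 = pet + sum(
            binom.pmf(x, n1, p) * binom.cdf(r - x, n - n1, p)
            for x in range(r1 + 1, min(r, n1) + 1)
        )
        return pet, 1.0 - accept_h0                      # (PET, P(declare promising))

    # Hypothetical design parameters for p0 = 0.20 vs p1 = 0.40:
    print(simon_oc(0.20, 13, 3, 43, 12))   # second value = type I error under p0
    print(simon_oc(0.40, 13, 3, 43, 12))   # second value = power under p1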

2.
When prior information on model parameters is weak or lacking, Bayesian statistical analyses are typically performed with so-called “default” priors. We consider the problem of constructing default priors for the parameters of survival models in the presence of censoring, using Jeffreys’ rule. We compare these Jeffreys priors to the “uncensored” Jeffreys priors, obtained without considering censored observations, for the parameters of the exponential and log-normal models. The comparison is based on the frequentist coverage of the posterior Bayes intervals obtained from these prior distributions.
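As a point of reference for the "uncensored" case mentioned in the abstract, Jeffreys' rule sets the prior proportional to the square root of the Fisher information. For the exponential model with rate \(\lambda\) this gives

\[
I(\lambda) = \mathbb{E}\!\left[-\frac{\partial^2}{\partial\lambda^2}\log\bigl(\lambda e^{-\lambda T}\bigr)\right] = \frac{1}{\lambda^2},
\qquad
\pi_J(\lambda) \propto \sqrt{I(\lambda)} = \frac{1}{\lambda}.
\]

How this prior changes once the Fisher information accounts for the censoring mechanism is the subject of the paper.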

3.
Model selection procedures often depend explicitly on the sample size n of the experiment. One example is the Bayesian information criterion (BIC), and another is the use of Zellner–Siow priors in Bayesian model selection. Sample size is well defined for i.i.d. real observations, but not for vector observations or in non-i.i.d. settings; extending criteria such as BIC to such settings therefore requires a definition of effective sample size that applies in those cases as well. A definition of effective sample size that applies to fairly general linear models is proposed and illustrated in a variety of situations. The definition is also used to propose a suitable ‘scale’ for default proper priors in Bayesian model selection.
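For context, the standard BIC makes the dependence on n explicit, which is what forces the question of an effective sample size in non-i.i.d. settings:

\[
\mathrm{BIC} = -2\,\log L(\hat\theta) + k \log n,
\]

where \(k\) is the number of free parameters and \(n\) the sample size (replaced by an effective sample size when observations are dependent or vector-valued).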

4.
The retrieval of wind vectors from satellite scatterometer observations is a non-linear inverse problem. A common approach to solving inverse problems is to adopt a Bayesian framework and to infer the posterior distribution of the parameters of interest given the observations by using a likelihood model relating the observations to the parameters, and a prior distribution over the parameters. We show how Gaussian process priors can be used efficiently with a variety of likelihood models, using local forward (observation) models and direct inverse models for the scatterometer. We present an enhanced Markov chain Monte Carlo method to sample from the resulting multimodal posterior distribution. We go on to show how the computational complexity of the inference can be controlled by using a sparse, sequential Bayes algorithm for estimation with Gaussian processes. This helps to overcome the most serious barrier to the use of probabilistic, Gaussian process methods in remote sensing inverse problems, which is the prohibitively large size of the data sets. We contrast the sampling results with the approximations that are found by using the sparse, sequential Bayes algorithm.
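For readers unfamiliar with Gaussian process priors, the conjugate reference case (a Gaussian likelihood, unlike the non-linear scatterometer likelihood treated in the paper) gives closed-form posterior moments. With a zero-mean GP prior with covariance function k, i.i.d. Gaussian noise of variance \(\sigma^2\), training inputs X and test inputs \(X_*\),

\[
\mathbb{E}[f_* \mid y] = K_{*X}\bigl(K_{XX} + \sigma^2 I\bigr)^{-1} y,
\qquad
\mathrm{Cov}[f_* \mid y] = K_{**} - K_{*X}\bigl(K_{XX} + \sigma^2 I\bigr)^{-1} K_{X*}.
\]

The \(O(n^3)\) cost of the matrix inverse is the "prohibitively large data set" barrier that the sparse, sequential algorithm in the paper is designed to avoid.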

5.
When historical data are available, incorporating them in an optimal way into the current data analysis can improve the quality of statistical inference. In Bayesian analysis, one can achieve this by using the quality-adjusted priors of Zellner or the power priors of Ibrahim and coauthors. These rules are constructed by raising the prior and/or the sample likelihood to exponents that measure the quality of the prior or the proximity of the historical data to the current data. This paper presents a general, optimum procedure that unifies these rules and is derived by minimizing a Kullback–Leibler divergence under a divergence constraint. We show that the exponent values are directly related to the divergence constraint set by the user and investigate the effect of this choice theoretically and through sensitivity analysis. We show that this approach yields ‘100% efficient’ information processing rules in the sense of Zellner. Monte Carlo experiments are conducted to investigate the effect of historical and current sample sizes on the optimum rule. Finally, we illustrate these methods by applying them to real data sets.
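The power prior referred to here typically takes the form (with \(D_0\) the historical data, \(\pi_0\) an initial prior, and \(a_0\) the discounting exponent)

\[
\pi(\theta \mid D_0, a_0) \;\propto\; L(\theta \mid D_0)^{a_0}\,\pi_0(\theta),
\qquad 0 \le a_0 \le 1,
\]

so that \(a_0 = 0\) discards the historical data and \(a_0 = 1\) pools it fully with the current likelihood; the paper's contribution is an optimality criterion for choosing such exponents.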

6.
In this article we consider the sample size determination problem in the context of robust Bayesian parameter estimation of the Bernoulli model. Following a robust approach, we consider classes of conjugate Beta prior distributions for the unknown parameter. We regard inference as robust if posterior quantities of interest (such as point estimates and limits of credible intervals) do not change too much as the prior varies over the selected classes of priors. For the sample size problem, we consider criteria based on the predictive distributions of the lower bound, upper bound and range of the posterior quantity of interest. The sample size is selected so that, before observing the data, one is confident of observing a small value of the posterior range and, depending on design goals, a large (small) value of the lower (upper) bound of the quantity of interest. We also discuss relationships with, and comparisons to, non-robust and non-informative Bayesian methods.
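A minimal sketch of this kind of predictive evaluation, assuming a class of Beta(a, b) analysis priors, a single fixed Beta design prior for generating data, and the posterior mean as the quantity of interest (the prior class, design prior, and threshold values below are all illustrative, not the article's):

    import numpy as np

    rng = np.random.default_rng(0)
    prior_class = [(a, b) for a in (1, 2, 4) for b in (1, 2, 4)]   # illustrative Beta class

    def expected_range(n, design_a=2, design_b=2, sims=5000):
        """Simulate data from the design prior and, for each data set, compute the
        range of posterior means as the analysis prior varies over the class."""
        ranges = []
        for _ in range(sims):
            theta = rng.beta(design_a, design_b)      # draw from the design prior
            s = rng.binomial(n, theta)                # prior predictive number of successes
            means = [(a + s) / (a + b + n) for a, b in prior_class]
            ranges.append(max(means) - min(means))
        return np.mean(ranges)

    # pick the smallest n whose expected posterior-mean range falls below a tolerance
    for n in (20, 50, 100, 200):
        print(n, round(expected_range(n), 4))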

7.
In the Bayesian approach, the Behrens–Fisher problem has been posed as one of estimating the difference of two means. No Bayesian solution to the Behrens–Fisher testing problem has yet been given, perhaps because the conventional priors used are improper. While default Bayesian analysis can be carried out for estimation purposes, it poses difficulties for testing problems. This paper generates sensible intrinsic and fractional prior distributions for the Behrens–Fisher testing problem from the improper priors commonly used for estimation. This allows us to compute the Bayes factor comparing the null and alternative hypotheses. This default model selection procedure is compared with a frequentist test and with the Bayesian information criterion. We find a discrepancy: the frequentist test and the Bayesian information criterion reject the null hypothesis for data for which the Bayes factors under intrinsic or fractional priors do not.
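The Bayes factor being computed is the usual ratio of marginal likelihoods under the two hypotheses; with \(\pi_0\), \(\pi_1\) the (here intrinsic or fractional) priors under the null and the alternative,

\[
B_{01} = \frac{\displaystyle\int f(x \mid \theta_0)\,\pi_0(\theta_0)\,d\theta_0}
              {\displaystyle\int f(x \mid \theta_1)\,\pi_1(\theta_1)\,d\theta_1},
\]

and improper priors are problematic here because they leave the ratio defined only up to an arbitrary constant, which is what the intrinsic and fractional constructions resolve.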

8.
We study the problem of deciding which of two normal random samples, at least one of them of small size, has the greater expected value. Unlike the standard Bayesian approach, in which a single prior distribution and a single loss function are declared, we assume that a set of plausible priors and a set of plausible loss functions are elicited from the expert (the client or the sponsor of the analysis). The choice of the sample with the greater expected value is based on equilibrium priors, allowing for an impasse if, for some plausible priors and loss functions, one sample is associated with the smaller expected loss while for others it is the other sample.

9.
Bayesian methods are often used to reduce the sample sizes and/or increase the power of clinical trials. The right choice of the prior distribution is a critical step in Bayesian modeling. If the prior is not completely specified, historical data may be used to estimate it. In empirical Bayesian analysis, the resulting prior can be used to produce the posterior distribution. In this paper, we describe a Bayesian Poisson model with a conjugate gamma prior. The parameters of the gamma distribution are estimated in the empirical Bayesian framework under two estimation schemes. A straightforward numerical search for the maximum likelihood (ML) solution using the marginal negative binomial distribution is occasionally infeasible, so we propose a simplification of the maximization procedure. A Markov chain Monte Carlo method is used to create a set of Poisson parameters from the historical count data; these Poisson parameters are used to uniquely define the gamma likelihood function, and easily computable approximation formulae may then be used to find the ML estimates of the gamma parameters. For the sample size calculations, the ML solution is replaced by its upper confidence limit to reflect the incomplete exchangeability of historical trials with current studies, where exchangeability is measured by the confidence interval for the historical rate of the events. With this prior, the formula for the sample size calculation is completely defined.
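For orientation, a sketch of the direct marginal (negative binomial) maximum likelihood fit that the paper's simplified procedure replaces: historical counts \(y_i \sim \mathrm{Poisson}(\lambda_i)\) with \(\lambda_i \sim \mathrm{Gamma}(\alpha, \text{rate}=\beta)\) have a negative binomial marginal, whose likelihood can be maximized numerically. This is not the authors' simplification, and the counts below are illustrative.

    import numpy as np
    from scipy.special import gammaln
    from scipy.optimize import minimize

    def neg_marginal_loglik(log_params, counts, exposure=1.0):
        """Negative log marginal likelihood of the historical counts under the
        Poisson-gamma model; the marginal of each count is negative binomial."""
        a, b = np.exp(log_params)                    # alpha, beta on the log scale
        y = np.asarray(counts, dtype=float)
        ll = (gammaln(a + y) - gammaln(a) - gammaln(y + 1)
              + a * np.log(b / (b + exposure))
              + y * np.log(exposure / (b + exposure)))
        return -ll.sum()

    counts = [3, 5, 2, 4, 6, 3]                      # illustrative historical event counts
    fit = minimize(neg_marginal_loglik, x0=[0.0, 0.0], args=(counts,), method="Nelder-Mead")
    alpha_hat, beta_hat = np.exp(fit.x)
    print(alpha_hat, beta_hat)                       # empirical Bayes gamma hyperparameters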

10.
This paper investigates the statistical analysis of grouped accelerated temperature cycling test data when the product lifetime follows a Weibull distribution. A log-linear acceleration equation is derived from the Coffin–Manson model. The problem is transformed into a constant-stress accelerated life test with grouped data and multiple acceleration variables. The Jeffreys prior and reference priors are derived. Maximum likelihood estimates and Bayesian estimates with objective priors are obtained by applying the technique of data augmentation. A simulation study shows that both methods perform well when the sample size is large, and that the Bayesian method performs better for small sample sizes.
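In its basic form, the Coffin–Manson relation for temperature cycling expresses the number of cycles to failure as a power law in the temperature range \(\Delta T\), which becomes log-linear after taking logarithms (this is the simplest version; the model used in the paper may include additional acceleration variables):

\[
N_f = A\,(\Delta T)^{-\beta}
\quad\Longrightarrow\quad
\log N_f = \log A - \beta\,\log(\Delta T).
\]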

11.
Several researchers have proposed solutions to control the type I error rate in sequential designs. Bayesian sequential designs are becoming more common; however, they are subject to inflation of the type I error rate. We propose a Bayesian sequential design for a binary outcome that uses an alpha-spending function to control the overall type I error rate. Algorithms are presented for calculating critical values and power for the proposed designs. We also propose a new stopping rule for futility. A sensitivity analysis assesses the effects on the critical values of varying the parameters of the prior distribution and the maximum total sample size. Alpha-spending functions are compared using power and actual sample size through simulations. Further simulations show that, when the total sample size is fixed, the proposed design has greater power than the traditional Bayesian sequential design, which sets equal stopping bounds at all interim analyses. We also find that the proposed design with the new futility stopping rule yields greater power and can stop earlier with a smaller actual sample size than the traditional futility stopping rule when all other conditions are held constant. Finally, we apply the proposed method to a real data set and compare the results with traditional designs.
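Two alpha-spending functions commonly compared in this context are the Lan–DeMets approximations to the O'Brien–Fleming and Pocock boundaries; whether these are exactly the functions compared in the paper is not stated in the abstract. For information fraction \(t \in (0,1]\) and two-sided level \(\alpha\) they are usually written

\[
\alpha_{\mathrm{OBF}}(t) = 2 - 2\,\Phi\!\left(\frac{z_{\alpha/2}}{\sqrt{t}}\right),
\qquad
\alpha_{\mathrm{P}}(t) = \alpha\,\log\!\bigl(1 + (e - 1)\,t\bigr),
\]

where \(\Phi\) is the standard normal distribution function and \(z_{\alpha/2}\) its upper \(\alpha/2\) quantile; both spend the full \(\alpha\) at \(t = 1\).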

12.
Bayesian dynamic borrowing designs facilitate borrowing information from historical studies. Historical data, when perfectly commensurate with current data, have been shown to reduce the trial duration and the sample size, whereas inflation of the type I error and reduction of power have been reported when they are imperfectly commensurate. These results, however, were obtained without considering that Bayesian designs are calibrated in practice to meet regulatory requirements, and that even no-borrowing designs may use information from historical data in the calibration. This implicit borrowing of historical data suggests that imperfectly commensurate historical data may similarly affect no-borrowing designs negatively. We provide a fair appraisal of Bayesian dynamic borrowing and no-borrowing designs. We used a published selective adaptive randomization design and a real clinical trial setting and conducted simulation studies under varying degrees of imperfectly commensurate historical control scenarios. The type I error was inflated under the null scenario of no intervention effect, with larger inflation under borrowing. This larger inflation under the null can be offset by a greater probability of correctly stopping early under the alternative. Response rates were estimated more precisely and the average sample size was smaller with borrowing. The expected increase in bias with borrowing was noted but was negligible. Using Bayesian dynamic borrowing designs may improve trial efficiency by correctly stopping trials early and reducing trial length at the small cost of an inflated type I error.

13.
In this paper, we develop a matching prior for the product of means in several normal distributions with unrestricted means and unknown variances. Properly assigning priors for the product of normal means has been an open issue because of the presence of nuisance parameters. Matching priors, which are priors matching the posterior probabilities of certain regions with their frequentist coverage probabilities, are commonly used but are difficult to derive in this problem. We develop first-order probability matching priors for this problem; however, these matching priors are improper. We therefore apply an alternative method and derive a matching prior based on a modification of the profile likelihood. Simulation studies show that the derived matching prior performs better than the uniform prior and Jeffreys' prior in meeting the target coverage probabilities, and that it meets the target coverage probabilities well even for small sample sizes. In addition, to evaluate the validity of the proposed matching prior, the Bayesian credible interval for the product of normal means using the matching prior is compared with the Bayesian credible intervals using the uniform prior and Jeffreys' prior, and with the confidence interval obtained using the method of Yfantis and Flatman.
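The matching property in question is, to first order, that posterior quantiles of the interest parameter have the stated frequentist coverage: a prior \(\pi\) is first-order probability matching for an interest parameter \(\psi\) (here the product of the normal means) if

\[
P_{\theta}\!\left\{\psi \le \psi^{(1-\alpha)}\bigl(\pi, X^{(n)}\bigr)\right\}
= 1 - \alpha + o\!\left(n^{-1/2}\right),
\]

where \(\psi^{(1-\alpha)}(\pi, X^{(n)})\) denotes the \((1-\alpha)\) posterior quantile of \(\psi\) based on the sample \(X^{(n)}\).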

14.
Leveraging historical data in the design and analysis of phase 2 randomized controlled trials can improve the efficiency of drug development programs. Such approaches can reduce sample size without loss of power. Potential issues arise when the current control arm is inconsistent with the historical data, which may lead to biased estimates of treatment efficacy, loss of power, or inflated type I error. How to borrow historical information therefore requires careful consideration, and in particular adjustment for prognostic factors should be considered. This paper illustrates two motivating case studies of oncology Bayesian augmented control (BAC) trials. In the first example, a glioblastoma study, an informative prior was used for the control arm hazard rate; sample size savings of 15% to 20% were achieved by using a BAC design. In the second example, a pancreatic cancer study, a hierarchical model borrowing method was used, which allowed the extent of borrowing to be determined by the consistency of the observed study data with the historical studies. Supporting Bayesian analyses also adjusted for prognostic factors. Incorporating historical data via Bayesian trial design can provide sample size savings, reduce study duration, and enable a more scientific approach to the development of novel therapies by avoiding excess recruitment to a control arm. Various sensitivity analyses are necessary to interpret the results. Current industry efforts on data transparency have meaningful implications for access to patient-level historical data, which, while not critical, is helpful for adjusting for potential imbalances in prognostic factors.
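A generic form of the hierarchical borrowing model mentioned in the second case study, written for study-level control parameters \(\theta_1, \dots, \theta_H\) (historical) and \(\theta_c\) (current), is

\[
\theta_1, \dots, \theta_H, \ \theta_c \mid \mu, \tau \;\sim\; N(\mu, \tau^2),
\]

where the between-study standard deviation \(\tau\) governs how much the current control arm borrows from the historical studies: small \(\tau\) pools the studies almost completely, while large \(\tau\) discounts the historical data when it is inconsistent with the current data. The exact parameterization used in the pancreatic cancer study is not given in the abstract.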

15.
Response-adaptive (RA) allocation designs can skew the allocation of incoming subjects toward the better-performing treatment group based on the previously accrued responses. While unstable estimators and increased variability can adversely affect adaptation in early trial stages, Bayesian methods can be implemented with decreasingly informative priors (DIPs) to overcome these difficulties. DIPs have previously been used for binary outcomes to constrain adaptation early in the trial yet gradually increase adaptation as subjects accrue. We extend the DIP approach to RA designs for continuous outcomes, primarily in the normal conjugate family, by functionalizing the prior effective sample size to equal the unobserved sample size. We compare this effective-sample-size DIP approach to other DIP formulations. Further, we consider various allocation equations and assess their behavior under DIPs. Simulated clinical trials comparing these approaches with traditional frequentist and Bayesian RA designs as well as balanced designs show that the natural lead-in approaches maintain improved treatment allocation with lower variability and greater power.
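A minimal sketch of the effective-sample-size DIP idea for a normal outcome with known variance, in which the prior precision is tied to the number of still-unobserved subjects; the allocation rule and all numbers below are illustrative, not the authors' exact formulation.

    import numpy as np
    from scipy.stats import norm

    def dip_posterior(y, n_max, sigma, mu0=0.0):
        """Posterior mean and sd for one arm under a decreasingly informative prior
        whose effective sample size equals the unobserved sample size n0 = n_max - n."""
        y = np.asarray(y, dtype=float)
        n = len(y)
        n0 = n_max - n                               # prior ESS shrinks as data accrue
        post_mean = (n0 * mu0 + y.sum()) / (n0 + n)
        post_sd = sigma / np.sqrt(n0 + n)
        return post_mean, post_sd

    def allocation_prob(yA, yB, n_max, sigma):
        """P(arm A has the larger mean | data), used to skew the next allocation."""
        mA, sA = dip_posterior(yA, n_max, sigma)
        mB, sB = dip_posterior(yB, n_max, sigma)
        return norm.cdf((mA - mB) / np.sqrt(sA**2 + sB**2))

    print(allocation_prob([1.2, 0.8, 1.5], [0.3, 0.4], n_max=50, sigma=1.0))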

16.
To deal with high placebo response in clinical trials for psychiatric and other diseases, different enrichment designs, such as the sequential parallel design, the two-way enriched design, and the sequential enriched design, have recently been proposed and implemented. Depending on the historical trial information and the trial sponsor's resources, detailed design elements are needed to determine which design to adopt. To assist in making more suitable decisions, we evaluate the selection of the required design elements in terms of power optimization and sample size planning. We also discuss the implementation of the interim analysis and its applicability.

17.
To design a phase III study with a final endpoint and calculate the required sample size for the desired probability of success, we need a good estimate of the treatment effect on that endpoint. It is prudent to fully utilize all available information, including historical and phase II information on the treatment as well as external data on other treatments. It is not uncommon for a phase II study to use a surrogate endpoint as the primary endpoint and to have no or limited data on the final endpoint. On the other hand, external information from other studies of other treatments on the surrogate and final endpoints may be available to establish a relationship between the treatment effects on the two endpoints. Through this relationship, making full use of the surrogate information may improve the estimate of the treatment effect on the final endpoint. In this research, we propose a bivariate Bayesian analysis approach to deal with the problem comprehensively. A dynamic borrowing approach is considered to regulate the amount of historical data and surrogate information borrowed, based on the level of consistency. A much simpler frequentist method is also discussed. Simulations are conducted to compare the performance of the different approaches. An example illustrates the application of the methods.
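One simple way to encode the kind of relationship described, namely predicting the final-endpoint effect from the surrogate effect via external studies, is a study-level linear model (a common surrogacy formulation offered here only for orientation; the paper's bivariate Bayesian model may be richer):

\[
\theta_{F,i} = \alpha + \beta\,\theta_{S,i} + \varepsilon_i,
\qquad \varepsilon_i \sim N(0, \tau^2),
\]

where \(\theta_{S,i}\) and \(\theta_{F,i}\) are the treatment effects on the surrogate and final endpoints in external study \(i\); the fitted \((\alpha, \beta)\) then translate the phase II surrogate effect into a prior estimate of the phase III final-endpoint effect.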

18.
We propose Bayesian methods with five types of priors to estimate cell probabilities in an incomplete multi-way contingency table under nonignorable nonresponse. In this situation, the maximum likelihood (ML) estimates often fall on a boundary solution, which makes them unstable. To deal with such a multi-way table, we present an EM algorithm that generalizes the algorithm previously used for incomplete one-way tables. Three of the five types of priors were introduced previously, while the other two are newly proposed to reflect different response patterns between respondents and nonrespondents. Data analysis and simulation studies show that Bayesian estimates based on the three earlier priors can be worse than the ML estimates regardless of whether a boundary solution occurs, contrary to previous studies. The Bayesian estimates from the two new priors are most preferable when a boundary solution occurs. We provide an illustrative example using data from a study of the relationship between a mother's smoking and her newborn's weight.

19.
Prior information is often incorporated informally when planning a clinical trial. Here, we present an approach to incorporating prior information, such as data from historical clinical trials, into the nuisance-parameter-based sample size re-estimation in a design with an internal pilot study. We focus on trials with continuous endpoints in which the outcome variance is the nuisance parameter. Frequentist methods are used for planning and analyzing the trial, while the external information on the variance is summarized by the Bayesian meta-analytic-predictive approach. To incorporate external information into the sample size re-estimation, we propose to update the meta-analytic-predictive prior with the results of the internal pilot study and to re-estimate the sample size using an estimator derived from the posterior. By means of a simulation study, we compare operating characteristics such as power and the sample size distribution of the proposed procedure with those of the traditional sample size re-estimation approach based on the pooled variance estimator. The simulation study shows that, when no prior-data conflict is present, incorporating external information into the sample size re-estimation improves the operating characteristics relative to the traditional approach. In the case of a prior-data conflict, that is, when the variance of the ongoing clinical trial differs from the prior location, the traditional sample size re-estimation procedure generally performs better, even when the prior information is robustified. When considering whether to include prior information in sample size re-estimation, the potential gains should be balanced against the risks.
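For orientation, nuisance-parameter-based re-estimation in a two-arm trial with a continuous endpoint typically plugs a variance estimate into the standard per-group sample size formula; under the approach described here that estimate would come from the posterior for the variance after updating the meta-analytic-predictive prior with the internal pilot data (the formula below is the standard one, not a detail taken from the paper):

\[
n_{\text{per group}} \;=\; \frac{2\,\hat\sigma^2\,\bigl(z_{1-\alpha/2} + z_{1-\beta}\bigr)^2}{\delta^2},
\]

where \(\delta\) is the clinically relevant difference, and \(\hat\sigma^2\) is the pooled variance estimate in the traditional approach or a posterior-based estimate in the proposed one.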

20.
Bayesian sequential and adaptive randomization designs are gaining popularity in clinical trials thanks to their potential to reduce the number of required participants and save resources. We propose a Bayesian sequential design with adaptive randomization rates so as to assign newly recruited patients to the treatment arms more efficiently. In this paper, we consider two-arm clinical trials. Patients are allocated to the two arms with a randomization rate chosen to achieve the minimum variance of the test statistic. Algorithms are presented for calculating the optimal randomization rate, critical values, and power for the proposed design. A sensitivity analysis examines how the design is affected by changes in the prior distributions. Simulation studies compare the proposed method and traditional methods in terms of power and actual sample size. The simulations show that, when the total sample size is fixed, the proposed design can achieve greater power and/or a smaller actual sample size than the traditional Bayesian sequential design. Finally, we apply the proposed method to a real data set and compare the results with the Bayesian sequential design without adaptive randomization in terms of sample size; the proposed method can further reduce the required sample size.
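For two binomial arms with response rates \(p_1\) and \(p_2\), the allocation that minimizes the variance of the difference in estimated proportions for a fixed total sample size is the familiar Neyman allocation; whether this is exactly the randomization rate used in the paper is not stated in the abstract, but it illustrates the minimum-variance idea:

\[
\mathrm{Var}(\hat p_1 - \hat p_2) = \frac{p_1(1-p_1)}{n_1} + \frac{p_2(1-p_2)}{n_2}
\quad\Longrightarrow\quad
\frac{n_1}{n_2} = \sqrt{\frac{p_1(1-p_1)}{p_2(1-p_2)}},
\]

with the unknown rates replaced in practice by their current posterior or sample estimates at each interim update.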
