Similar Documents
20 similar documents found.
1.
Recently, molecularly targeted agents and immunotherapy have advanced the treatment of patients with relapsed or refractory cancer, where progression-free survival or event-free survival is often the primary endpoint in trial design. However, existing methods for evaluating two-stage single-arm phase II trials with a time-to-event endpoint assume an exponential distribution, which limits their applicability to real trial designs. In this paper, we develop an optimal two-stage design that applies to four commonly used parametric survival distributions. The proposed method has advantages over existing methods in that the choice of the underlying survival model is more flexible and the power of the study is more adequately addressed. The proposed two-stage design can therefore be used routinely for single-arm phase II trials with a time-to-event endpoint, as a complement to the commonly used Simon's two-stage design for binary outcomes.
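
To see why the choice of parametric family matters here, compare event-free probabilities at a landmark time for survival curves matched on the median; the sketch below is illustrative and not taken from the paper.

```python
import numpy as np

# With the same median survival, different Weibull shapes imply different
# event-free probabilities at the landmark time (shape k = 1 is the exponential).
median, t0 = 6.0, 12.0  # illustrative median and landmark time, in months
for k in (0.5, 1.0, 2.0):
    scale = median / np.log(2) ** (1 / k)   # chosen so that S(median) = 0.5
    s_t0 = np.exp(-((t0 / scale) ** k))     # Weibull survival function
    print(f"shape {k:.1f}: S({t0:.0f}) = {s_t0:.3f}")
```

All three curves share a median of 6 months, yet the 12-month event-free probability ranges from roughly 0.06 to 0.38; this is the kind of discrepancy that restricting the design to the exponential model cannot accommodate.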

2.
To quantify uncertainty in a formal manner, statisticians play a vital role in identifying a prior distribution for a Bayesian-designed clinical trial. However, when expert beliefs are to be used to form the prior, the literature is sparse on how feasible and how reliable it is to elicit beliefs from experts. For late-stage clinical trials, high importance is placed on reliability; however, feasibility may be equally important in early-stage trials. This article describes a case study to assess how feasible it is to conduct an elicitation session in a structured manner and to form a probability distribution that would be used in a hypothetical early-stage trial. The case study revealed that by using a structured approach to planning, training and conduct, it is feasible to elicit expert beliefs and form a probability distribution in a timely manner. We argue that by further increasing the published accounts of elicitation of expert beliefs in drug development, there will be increased confidence in the feasibility of conducting elicitation sessions. Furthermore, this will lead to wider dissemination of the pertinent issues on how to quantify uncertainty to both practicing statisticians and others involved with designing trials in a Bayesian manner.

3.
Bayesian predictive power, the expectation of the power function with respect to a prior distribution for the true underlying effect size, is routinely used in drug development to quantify the probability of success of a clinical trial. Choosing the prior is crucial for the properties and interpretability of Bayesian predictive power. We review recommendations on the choice of prior for Bayesian predictive power and explore its features as a function of the prior. The density of power values induced by a given prior is derived analytically and its shape characterized. We find that, for a typical clinical trial scenario, this density has a U-shape very similar, though not identical, to that of a beta distribution. Alternative priors are discussed, and practical recommendations for assessing the sensitivity of Bayesian predictive power to its input parameters are provided.
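
As a minimal sketch of the quantity under discussion (not the paper's code; the prior, test, and trial size are illustrative assumptions), Bayesian predictive power can be estimated by averaging the classical power curve over prior draws of the effect:

```python
import numpy as np
from scipy.stats import norm

def bayesian_predictive_power(prior_mean, prior_sd, n_per_arm,
                              sigma=1.0, alpha=0.025, n_draws=100_000, seed=0):
    """Average the power of a one-sided two-sample z-test over a
    normal prior on the true effect size."""
    rng = np.random.default_rng(seed)
    theta = rng.normal(prior_mean, prior_sd, n_draws)   # prior draws of the effect
    se = sigma * np.sqrt(2.0 / n_per_arm)               # SE of the mean difference
    power = norm.cdf(theta / se - norm.ppf(1 - alpha))  # power at each prior draw
    return power.mean(), power

bpp, power_draws = bayesian_predictive_power(prior_mean=0.3, prior_sd=0.2, n_per_arm=100)
print(f"Bayesian predictive power: {bpp:.3f}")
```

A histogram of `power_draws` then displays the induced density of power values whose U-shape the abstract characterizes.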

4.
Typically, in the brief discussion of Bayesian inferential methods presented at the beginning of calculus-based undergraduate or graduate mathematical statistics courses, little attention is paid to the process of choosing the parameter value(s) for the prior distribution. Even less attention is paid to the impact of these choices on the predictive distribution of the data. Reasons for this include that the posterior can be derived without reference to the predictive distribution, which streamlines the derivation, and that computer software can be used to find the posterior distribution. In this paper, the binomial, negative-binomial and Poisson distributions, along with their conjugate beta and gamma priors, are used to obtain the resulting predictive distributions. It is then demonstrated that specific choices of the prior parameters can lead to predictive distributions with properties that might surprise a non-expert user of Bayesian methods.
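
For the binomial case mentioned in the abstract, the prior predictive under a conjugate Beta(α, β) prior has the closed beta-binomial form (a standard result, stated here in our notation):

```latex
p(x) = \int_0^1 \binom{n}{x}\,\theta^{x}(1-\theta)^{n-x}\,
       \frac{\theta^{\alpha-1}(1-\theta)^{\beta-1}}{B(\alpha,\beta)}\,\mathrm{d}\theta
     = \binom{n}{x}\,\frac{B(x+\alpha,\; n-x+\beta)}{B(\alpha,\beta)},
\qquad x = 0, 1, \dots, n.
```

Taking α = β very small (say 0.1) piles most predictive mass onto x = 0 and x = n, one of the arguably surprising behaviours such prior choices can produce.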

5.
For cancer clinical trials of immunotherapy and molecularly targeted therapy, a time-to-event endpoint is often desired. In this paper, we present an event-driven approach for Bayesian one-stage and two-stage single-arm phase II trial designs. We propose two versions of Bayesian one-stage designs with executable algorithms and develop theoretical relationships between the frequentist and Bayesian designs. These findings give investigators who want to design a trial using a Bayesian approach an explicit understanding of how the frequentist properties can be achieved. Moreover, because the proposed Bayesian designs use exact posterior distributions, they accommodate single-arm phase II trials with small sample sizes. We also propose an optimal two-stage approach, which can be regarded as an extension of Simon's two-stage design to the time-to-event endpoint. Comprehensive simulations were conducted to explore the frequentist properties of the proposed Bayesian designs, and an R package BayesDesign can be accessed via CRAN for convenient use of the proposed methods.
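
For intuition about the event-driven, exact-posterior ingredient, here is a generic conjugate update for an exponential event rate (a sketch under our own assumptions; the hyperparameters, data, and decision threshold are illustrative and not the paper's):

```python
from scipy.stats import gamma

# Gamma(a0, rate b0) prior on the hazard rate of an exponential
# time-to-event model; the exact posterior given d events over a
# total follow-up time T is Gamma(a0 + d, rate b0 + T).
a0, b0 = 1.0, 10.0          # illustrative prior hyperparameters
d, total_time = 7, 80.0     # observed events and total follow-up (months)
posterior = gamma(a0 + d, scale=1.0 / (b0 + total_time))
# e.g. declare the treatment promising if P(rate < 0.10) is large
print(f"P(rate < 0.10 | data) = {posterior.cdf(0.10):.3f}")
```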

6.
The Bayesian design approach accounts for uncertainty of the parameter values on which optimal design depends, but Bayesian designs themselves depend on the choice of a prior distribution for the parameter values. This article investigates Bayesian D-optimal designs for two-parameter logistic models, using numerical search. We show three things: (1) a prior with large variance leads to a design that remains highly efficient under other priors, (2) uniform and normal priors lead to equally efficient designs, and (3) designs with four or five equidistant equally weighted design points are highly efficient relative to the Bayesian D-optimal designs.
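
For concreteness, the Bayesian D-criterion being optimized can be evaluated for any candidate design by Monte Carlo over the prior. The sketch below (ours, with illustrative design points, weights, and prior) scores the kind of five-point equidistant design mentioned in finding (3):

```python
import numpy as np

def bayes_d_criterion(x, w, theta_draws):
    """Expected log-determinant of the Fisher information for a
    two-parameter logistic model P(y=1) = expit(a + b*x), averaged
    over prior draws of (a, b): the Bayesian D-criterion."""
    vals = []
    for a, b in theta_draws:
        p = 1.0 / (1.0 + np.exp(-(a + b * x)))
        wts = w * p * (1 - p)                 # per-point information weights
        X = np.column_stack([np.ones_like(x), x])
        M = X.T @ (wts[:, None] * X)          # 2x2 information matrix
        vals.append(np.linalg.slogdet(M)[1])
    return np.mean(vals)

rng = np.random.default_rng(1)
draws = rng.normal([0.0, 1.0], [1.0, 0.3], size=(500, 2))  # illustrative prior on (a, b)
x = np.linspace(-3, 3, 5)                                  # five equidistant points
print(bayes_d_criterion(x, np.full(5, 0.2), draws))        # equal weights 1/5
```

Comparing this value across candidate designs (or feeding it to a numerical optimizer) is exactly the kind of search the article performs.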

7.
The author shows how geostatistical data that contain measurement errors can be analyzed objectively by a Bayesian approach using Gaussian random fields. He proposes a reference prior and two versions of Jeffreys' prior for the model parameters. He studies the propriety and the existence of moments for the resulting posteriors. He also establishes the existence of the mean and variance of the predictive distributions based on these default priors. His reference prior derives from a representation of the integrated likelihood that is particularly convenient for computation and analysis. He further shows that these default priors are not very sensitive to some aspects of the design and model, and that they have good frequentist properties. Finally, he uses a data set of carbon/nitrogen ratios from an agricultural field to illustrate his approach.

8.
The Jeffreys-rule prior and the marginal independence Jeffreys prior were recently proposed in Fonseca et al. [Objective Bayesian analysis for the Student-t regression model, Biometrika 95 (2008), pp. 325-333] as objective priors for the Student-t regression model. The authors showed that these priors yield proper posterior distributions and perform favourably in parameter estimation. Motivated by a practical financial risk management application, we compare the performance of the two Jeffreys priors with other priors proposed in the literature in a problem of estimating high quantiles for the Student-t model with unknown degrees of freedom. Through an asymptotic analysis and a simulation study, we show that both Jeffreys priors perform better when a specific quantile of the Bayesian predictive distribution is used to approximate the true quantile.

9.
David R. Bickel. Statistics, 2018, 52(3): 552-570
Learning from model diagnostics that a prior distribution must be replaced by one that conflicts less with the data raises the question of which prior should instead be used for inference and decision. The same problem arises when a decision maker learns that one or more reliable experts express unexpected beliefs. In both cases, coherence of the solution would be guaranteed by applying Bayes's theorem to a distribution of prior distributions that effectively assigns the initial prior distribution a probability arbitrarily close to 1. The new distribution for inference would then be the distribution of priors conditional on the insight that the prior distribution lies in a closed convex set that does not contain the initial prior. A readily available distribution of priors needed for such conditioning is the law of the empirical distribution of a sufficiently large number of independent parameter values drawn from the initial prior. According to the Gibbs conditioning principle from the theory of large deviations, the resulting new prior distribution minimizes the entropy relative to the initial prior. While minimizing relative entropy accommodates the necessity of going beyond the initial prior without departing from it any more than the insight demands, the large-deviation derivation also ensures the advantages of Bayesian coherence. This approach is generalized to uncertain insights by allowing the closed convex set of priors to be random.
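
In symbols (our notation, following the abstract): with π₀ the initial prior and C the closed convex set of priors implied by the insight, the updated prior is the information projection

```latex
\pi_{\mathrm{new}} = \operatorname*{arg\,min}_{\nu \in C} \; D(\nu \,\|\, \pi_0),
\qquad
D(\nu \,\|\, \pi_0) = \int \log\frac{\mathrm{d}\nu}{\mathrm{d}\pi_0}\, \mathrm{d}\nu ,
```

i.e. the member of C closest to π₀ in relative entropy, which is what the Gibbs conditioning argument singles out.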

10.
In this paper, we present an innovative method for constructing proper priors for the skewness (shape) parameter in the skew-symmetric family of distributions. The proposed method is based on assigning a prior distribution on the perturbation effect of the shape parameter, which is quantified in terms of the total variation distance. We discuss strategies to translate prior beliefs about the asymmetry of the data into an informative prior distribution of this class. We show via a Monte Carlo simulation study that our non-informative priors induce posterior distributions with good frequentist properties, similar to those of the Jeffreys prior. Our informative priors yield better results than their competitors from the literature. We also propose a scale-invariant and location-invariant prior structure for models with unknown location and scale parameters and provide sufficient conditions for the propriety of the corresponding posterior distribution. Illustrative examples are presented using simulated and real data.

11.
The posterior predictive p value (ppp) was invented as a Bayesian counterpart to classical p values. The methodology can be applied to discrepancy measures involving both data and parameters and can, hence, be targeted to check for various modeling assumptions. The interpretation can, however, be difficult, since the distribution of the ppp value under modeling assumptions varies substantially between cases. A calibration procedure has been suggested, treating the ppp value as a test statistic in a prior predictive test. In this paper, we suggest that a prior predictive test may instead be based on the expected posterior discrepancy, which is somewhat simpler, both conceptually and computationally. Since both these methods require the simulation of a large posterior parameter sample for each member of an equally large prior predictive data sample, we further suggest looking for ways to match the given discrepancy by a computation-saving conflict measure. This approach is also based on simulations but only requires sampling from two different distributions representing two contrasting information sources about a model parameter. The conflict measure methodology is also more flexible in that it handles non-informative priors without difficulty. We compare the different approaches theoretically in some simple models and in a more complex applied example.
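
The ppp value itself is straightforward to simulate once posterior draws are available. A generic sketch follows (our own; the callables and the toy normal-model usage are illustrative placeholders, not the paper's example):

```python
import numpy as np

def posterior_predictive_pvalue(y, discrepancy, draw_posterior, draw_replicate,
                                n_sims=2000, seed=0):
    """Generic ppp value: the posterior probability that a replicated
    data set is at least as discrepant as the observed one."""
    rng = np.random.default_rng(seed)
    hits = 0
    for _ in range(n_sims):
        theta = draw_posterior(rng)              # one posterior draw
        y_rep = draw_replicate(theta, rng)       # replicate data given theta
        hits += discrepancy(y_rep, theta) >= discrepancy(y, theta)
    return hits / n_sims

# toy usage: check a N(theta, 1) model with the sample variance as discrepancy
y_obs = np.array([0.3, -1.2, 0.5, 2.1, -0.4])
ppp = posterior_predictive_pvalue(
    y_obs,
    discrepancy=lambda y, th: np.var(y),
    draw_posterior=lambda rng: rng.normal(y_obs.mean(), 1 / np.sqrt(len(y_obs))),
    draw_replicate=lambda th, rng: rng.normal(th, 1.0, size=len(y_obs)),
)
print(ppp)
```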

12.
We study the problem of deciding which of two normal random samples, at least one of them of small size, has greater expected value. Unlike in the standard Bayesian approach, in which a single prior distribution and a single loss function are declared, we assume that a set of plausible priors and a set of plausible loss functions are elicited from the expert (the client or the sponsor of the analysis). The choice of the sample with greater expected value is based on equilibrium priors, allowing for an impasse when, for some plausible priors and loss functions, one sample is associated with smaller expected loss, while for others it is the other sample.

13.
In this article we consider the sample size determination problem in the context of robust Bayesian parameter estimation of the Bernoulli model. Following a robust approach, we consider classes of conjugate beta prior distributions for the unknown parameter. We regard inference as robust if posterior quantities of interest (such as point estimates and limits of credible intervals) do not change too much as the prior varies over the selected classes of priors. For the sample size problem, we consider criteria based on the predictive distributions of the lower bound, upper bound and range of the posterior quantity of interest. The sample size is selected so that, before observing the data, one can be confident of observing a small value for the posterior range and, depending on design goals, a large (small) value of the lower (upper) bound of the quantity of interest. We also discuss relationships with, and comparisons to, non-robust and non-informative Bayesian methods.
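
A minimal sketch of the robustness ingredient (ours, with an illustrative finite prior class rather than the paper's classes): the posterior mean of a Bernoulli parameter can be bounded as the conjugate beta prior varies.

```python
def posterior_mean_bounds(x, n, prior_class):
    """Bounds on the posterior mean of a Bernoulli parameter as the
    conjugate Beta(a, b) prior ranges over a finite class of priors:
    under Beta(a, b), the posterior mean is (x + a) / (n + a + b)."""
    means = [(x + a) / (n + a + b) for a, b in prior_class]
    return min(means), max(means)

prior_class = [(a, b) for a in (1, 2, 4) for b in (1, 2, 4)]  # illustrative class
lo, hi = posterior_mean_bounds(x=12, n=30, prior_class=prior_class)
print(f"posterior mean ranges over [{lo:.3f}, {hi:.3f}]")
```

The sample size criteria described above would then average such bounds over the predictive distribution of x and select the smallest n for which the expected range and bounds meet the design goals.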

14.
Kontkanen P., Myllymäki P., Silander T., Tirri H., Grünwald P. Statistics and Computing, 2000, 10(1): 39-54
In this paper we are interested in discrete prediction problems for a decision-theoretic setting, where the task is to compute the predictive distribution for a finite set of possible alternatives. This question is first addressed in a general Bayesian framework, where we consider a set of probability distributions defined by some parametric model class. Given a prior distribution on the model parameters and a set of sample data, one possible approach for determining a predictive distribution is to fix the parameters to the instantiation with the maximum a posteriori probability. A more accurate predictive distribution can be obtained by computing the evidence (marginal likelihood), i.e., the integral over all the individual parameter instantiations. As an alternative to these two approaches, we demonstrate how to use Rissanen's new definition of stochastic complexity for determining predictive distributions, and show how the evidence predictive distribution with the Jeffreys prior approaches the new stochastic complexity predictive distribution in the limit with increasing amount of sample data. To compare the alternative approaches in practice, each of the predictive distributions discussed is instantiated in the Bayesian network model family case. In particular, to determine the Jeffreys prior for this model family, we show how to compute the (expected) Fisher information matrix for a fixed but arbitrary Bayesian network structure. In the empirical part of the paper the predictive distributions are compared using the simple tree-structured Naive Bayes model, chosen for computational reasons. The experimentation with several public domain classification datasets suggests that the evidence approach produces the most accurate predictions in the log-score sense. The evidence-based methods are also quite robust in the sense that they predict surprisingly well even when only a small fraction of the full training set is used.
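
For the plain Bernoulli case, the contrast between the plug-in and evidence approaches is already visible in closed form; with a Jeffreys Beta(1/2, 1/2) prior the evidence-based predictive reduces to the Krichevsky-Trofimov rule, which is the connection to stochastic complexity the abstract alludes to (our minimal instantiation, not the paper's Bayesian-network setting):

```python
def next_prob(k, n, rule):
    """Predictive probability that the next Bernoulli outcome is 1,
    after observing k ones in n trials."""
    if rule == "map_plugin":   # MAP plug-in (uniform Beta(1, 1) prior, so MAP = k/n)
        return k / n
    if rule == "evidence":     # marginal likelihood with Jeffreys Beta(1/2, 1/2) prior
        return (k + 0.5) / (n + 1.0)   # the Krichevsky-Trofimov predictor
    raise ValueError(rule)

print(next_prob(1, 10, "map_plugin"), next_prob(1, 10, "evidence"))
```

Note how the evidence rule never assigns probability 0 to an as-yet-unseen outcome, which is one reason it scores better in the log-score sense.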

15.
Bayesian methods are increasingly used in proof-of-concept studies. An important benefit of these methods is the potential to use informative priors, thereby reducing sample size. This is particularly relevant for treatment arms where there is a substantial amount of historical information, such as placebo and active comparators. One issue with using an informative prior is the possibility of a mismatch between the informative prior and the observed data, referred to as prior-data conflict. We focus on two methods for dealing with this: a testing approach and a mixture prior approach. The testing approach assesses prior-data conflict by comparing the observed data to the prior predictive distribution and resorting to a non-informative prior if prior-data conflict is declared. The mixture prior approach uses a prior with a precise and a diffuse component. We assess these approaches for the normal case via simulation and show they have some attractive features as compared with the standard one-component informative prior. For example, when the discrepancy between the prior and the data is sufficiently marked, and intuitively one feels less certain about the results, both the testing and mixture approaches typically yield wider posterior-credible intervals than when there is no discrepancy. In contrast, when there is no discrepancy, the results of these approaches are typically similar to the standard approach. While for any specific study the operating characteristics of the selected approach should be assessed and agreed at the design stage, we believe these two approaches are each worthy of consideration.
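
A two-component mixture prior is easy to update in closed form for a normal mean with a known standard error; the sketch below (ours; all hyperparameters are illustrative) shows how conflicting data shift posterior weight from the precise to the diffuse component:

```python
import numpy as np
from scipy.stats import norm

def update_mixture_prior(ybar, se, means, sds, weights):
    """Posterior for a normal mean under a normal mixture prior, given an
    observed mean ybar with known standard error se.  The posterior is a
    mixture of conjugate component posteriors, with weights re-scaled by
    each component's marginal likelihood of ybar."""
    means, sds, weights = map(np.asarray, (means, sds, weights))
    marg = norm.pdf(ybar, loc=means, scale=np.sqrt(sds**2 + se**2))
    post_w = weights * marg
    post_w /= post_w.sum()
    prec = 1 / sds**2 + 1 / se**2          # component posterior precisions
    post_mean = (means / sds**2 + ybar / se**2) / prec
    return post_mean, np.sqrt(1 / prec), post_w

# precise component at 0.5 (80% weight), diffuse component at 0 (20% weight)
pm, ps, pw = update_mixture_prior(ybar=-0.2, se=0.15,
                                  means=[0.5, 0.0], sds=[0.1, 1.0],
                                  weights=[0.8, 0.2])
print(pw)  # data in conflict with the precise component: weight moves to the diffuse one
```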

16.
Using historical data for Bayesian sample size determination
We consider the sample size determination (SSD) problem, which is a basic yet extremely important aspect of experimental design. Specifically, we deal with the Bayesian approach to SSD, which gives researchers the possibility of taking into account pre-experimental information and uncertainty on unknown parameters. At the design stage, this offers the advantage of removing or mitigating typical drawbacks of classical methods, which might lead to serious miscalculation of the sample size. In this context, the leading idea is to choose the minimal sample size that guarantees a probabilistic control on the performance of quantities that are derived from the posterior distribution and used for inference on parameters of interest. We are concerned with the use of historical data, that is, observations from previous similar studies, for SSD. We illustrate how the class of power priors can be fruitfully employed to deal with lack of homogeneity between historical data and observations of the upcoming experiment. This lack of homogeneity makes it necessary to discount the prior information and to evaluate the effect of heterogeneity on the optimal sample size. Some of the most popular Bayesian SSD methods are reviewed and their use, in concert with power priors, is illustrated in several medical experimental contexts.
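
The power prior referred to here raises the historical likelihood to a discounting power a0 (the standard form, written in our notation):

```latex
\pi(\theta \mid D_0, a_0) \;\propto\; L(\theta \mid D_0)^{a_0}\,\pi_0(\theta),
\qquad 0 \le a_0 \le 1,
```

where D0 denotes the historical data and π0(θ) the initial prior; a0 = 0 discards the historical study, a0 = 1 pools it fully, and intermediate values discount it for heterogeneity between the historical and upcoming experiments.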

17.
In phase II single-arm studies, the response rate of the experimental treatment is typically compared with a fixed target value that should ideally represent the true response rate of the standard-of-care therapy. Generally, this target value is estimated from previous data, but the inherent variability in the historical response rate is not taken into account. In this paper, we present a Bayesian procedure to construct single-arm two-stage designs that allows uncertainty in the response rate of the standard treatment to be incorporated. In both stages, the sample size determination criterion is based on the concepts of conditional and predictive Bayesian power functions. Different kinds of prior distributions, which play different roles in the designs, are introduced, and some guidelines for their elicitation are described. Finally, some numerical results about the performance of the designs are provided and a real data example is illustrated.
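
As a minimal sketch of the key idea (ours, not the paper's design; all hyperparameters are illustrative), the fixed target value can be replaced by a beta prior on the standard-of-care response rate:

```python
import numpy as np
from scipy.stats import beta

def prob_exceeds_standard(x, n, a_e=1, b_e=1, a_s=20, b_s=30,
                          n_draws=100_000, seed=0):
    """Monte Carlo estimate of P(p_E > p_S | data): the posterior for the
    experimental response rate (uniform prior, x responses in n patients)
    is compared against a Beta(a_s, b_s) prior on the standard-of-care
    rate, rather than against a fixed target value."""
    rng = np.random.default_rng(seed)
    p_e = beta.rvs(a_e + x, b_e + n - x, size=n_draws, random_state=rng)
    p_s = beta.rvs(a_s, b_s, size=n_draws, random_state=rng)
    return np.mean(p_e > p_s)

print(prob_exceeds_standard(x=11, n=25))
```

Stage decisions and sample sizes would then be driven by conditional and predictive versions of such probabilities, as the abstract describes.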

18.
Utilizing the notion of matching predictives of Berger and Pericchi, we show that, for the conjugate family of prior distributions in the normal linear model, the symmetric Kullback-Leibler divergence between two particular predictive densities is minimized when the prior hyperparameters are taken to be those corresponding to the predictive priors proposed in Ibrahim and Laud and in Laud and Ibrahim. The main application of this result is to Bayesian variable selection.

19.
Prior information is often incorporated informally when planning a clinical trial. Here, we present an approach for incorporating prior information, such as data from historical clinical trials, into the nuisance-parameter-based sample size re-estimation in a design with an internal pilot study. We focus on trials with continuous endpoints in which the outcome variance is the nuisance parameter. Frequentist methods are used for planning and analyzing the trial, while the external information on the variance is summarized by the Bayesian meta-analytic-predictive approach. To incorporate external information into the sample size re-estimation, we propose to update the meta-analytic-predictive prior based on the results of the internal pilot study and to re-estimate the sample size using an estimator from the posterior. By means of a simulation study, we compare operating characteristics such as power and the sample size distribution of the proposed procedure with those of the traditional sample size re-estimation approach that uses the pooled variance estimator. The simulation study shows that, if no prior-data conflict is present, incorporating external information into the sample size re-estimation improves the operating characteristics compared to the traditional approach. In the case of a prior-data conflict, that is, when the variance of the ongoing clinical trial differs from the prior location, the traditional sample size re-estimation procedure generally performs better, even when the prior information is robustified. When considering whether to include prior information in sample size re-estimation, the potential gains should be balanced against the risks.
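
A heavily simplified sketch of the mechanism (ours): an inverse-gamma distribution stands in for the prior on the variance (the actual meta-analytic-predictive prior would be fitted by a hierarchical meta-analysis of the historical trials), is updated with the internal pilot data, and the posterior mean variance is plugged into the usual two-sample formula; all numbers are illustrative.

```python
import numpy as np
from scipy import stats

def reestimate_n(pilot_ss, pilot_df, prior_a, prior_b, delta,
                 alpha=0.05, power=0.8):
    """Prior-informed sample size re-estimation sketch: conjugate
    inverse-gamma update of the variance with the pilot sum of squares,
    then the standard two-sample normal approximation for n per arm."""
    post_a = prior_a + pilot_df / 2          # IG shape update
    post_b = prior_b + pilot_ss / 2          # IG scale update
    var_hat = post_b / (post_a - 1)          # posterior mean of the variance
    z = stats.norm.ppf(1 - alpha / 2) + stats.norm.ppf(power)
    return int(np.ceil(2 * var_hat * (z / delta) ** 2))

print(reestimate_n(pilot_ss=45.0, pilot_df=38, prior_a=5, prior_b=4, delta=0.5))
```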

20.
Construction methods for prior densities are investigated from a predictive viewpoint. Predictive densities for future observables are constructed from observed data. The joint distribution of future observables and observed data is assumed to belong to a parametric submodel of a multinomial model; future observables and data are possibly dependent. The discrepancy of a predictive density from the true conditional density of future observables given observed data is evaluated by the Kullback-Leibler divergence. It is proved that limits of Bayesian predictive densities form an essentially complete class. Latent information priors are defined as priors maximizing the conditional mutual information between the parameter and the future observables given the observed data. Minimax predictive densities are constructed as limits of Bayesian predictive densities based on prior sequences converging to the latent information priors.
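
In symbols (our notation): with x the observed data, y the future observables, and θ the parameter, a latent information prior is

```latex
\pi^{*} = \operatorname*{arg\,max}_{\pi}\; I_{\pi}\bigl(\theta;\, y \mid x\bigr),
```

the maximizer of the conditional mutual information, and minimax predictive densities arise as limits of Bayes predictive densities along prior sequences converging to such a maximizer.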
