Similar Literature
1.
In this article we consider the sample size determination problem in the context of robust Bayesian parameter estimation of the Bernoulli model. Following a robust approach, we consider classes of conjugate Beta prior distributions for the unknown parameter. We assume that inference is robust if posterior quantities of interest (such as point estimates and limits of credible intervals) do not change too much as the prior varies in the selected classes of priors. For the sample size problem, we consider criteria based on the predictive distributions of the lower bound, upper bound, and range of the posterior quantity of interest. The sample size is selected so that, before observing the data, one is confident of observing a small value for the posterior range and, depending on design goals, a large (small) value of the lower (upper) bound of the quantity of interest. We also discuss relationships with, and comparisons to, non-robust and non-informative Bayesian methods.
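As a concrete illustration of the predictive criterion described in this abstract, the sketch below selects the smallest sample size for which the predictive probability of a small range of a posterior quantity (here the upper limit of the 95% credible interval) exceeds a confidence level, as the prior varies over a class of Beta priors. The prior class, thresholds and design prior are illustrative assumptions, not values from the article.

```python
# Monte Carlo sketch of a robust-Bayes sample-size criterion for a Bernoulli
# parameter: choose the smallest n such that, predictively, the range of the
# posterior 97.5% credible limit across a class of Beta priors is small with
# high probability. All prior-class settings below are assumptions.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

prior_means = np.linspace(0.2, 0.4, 11)          # class of prior means
prior_strengths = np.array([5.0, 10.0, 20.0])    # class of prior sample sizes
A = (prior_means[:, None] * prior_strengths).ravel()
B = ((1 - prior_means[:, None]) * prior_strengths).ravel()

design_prior = stats.beta(3.0, 7.0)              # design prior generating data
eps, conf_level, n_sims = 0.05, 0.90, 1000       # range threshold, confidence

def upper_limit_range(y, n):
    """Range of the posterior 97.5% quantile of theta across the prior class."""
    uppers = stats.beta.ppf(0.975, A + y, B + n - y)
    return uppers.max() - uppers.min()

def predictive_prob(n):
    theta = design_prior.rvs(n_sims, random_state=rng)
    y = rng.binomial(n, theta)
    return np.mean([upper_limit_range(yi, n) < eps for yi in y])

for n in range(25, 501, 25):
    p = predictive_prob(n)
    print(f"n = {n:3d}: P(range < {eps}) = {p:.3f}")
    if p >= conf_level:
        print("smallest n on the grid meeting the criterion:", n)
        break
```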

2.
In recent years, significant progress has been made in developing statistically rigorous methods to implement clinically interpretable sensitivity analyses for assumptions about the missingness mechanism in clinical trials for continuous and (to a lesser extent) for binary or categorical endpoints. Studies with time-to-event outcomes have received much less attention. However, such studies can be similarly challenged with respect to the robustness and integrity of primary analysis conclusions when a substantial number of subjects withdraw from treatment prematurely, prior to experiencing an event of interest. We discuss how the methods that are widely used for primary analyses of time-to-event outcomes could be extended in a clinically meaningful and interpretable way to stress-test the assumption of ignorable censoring. We focus on a 'tipping point' approach, the objective of which is to postulate sensitivity parameters with a clear clinical interpretation and to identify a setting of these parameters unfavorable enough towards the experimental treatment to nullify a conclusion that was favorable to that treatment. Robustness of primary analysis results can then be assessed based on the clinical plausibility of the scenario represented by the tipping point. We study several approaches for conducting such analyses based on multiple imputation using parametric, semi-parametric, and non-parametric imputation models and evaluate their operating characteristics via simulation. We argue that these methods are valuable tools for sensitivity analyses of time-to-event data and conclude that the method based on a piecewise exponential imputation model of survival has some advantages over the other methods studied here. Copyright © 2016 John Wiley & Sons, Ltd.
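To make the tipping-point idea concrete, the following simplified sketch performs delta-adjusted multiple imputation for a time-to-event endpoint: subjects who withdraw without an event have their remaining time imputed from an exponential model whose hazard is inflated by a factor delta in the experimental arm, and the analysis reports the deltas at which the treatment effect loses significance. The simulated data, the single-exponential imputation model (the article favours a piecewise exponential model), the Rubin's-rules exponential analysis and the delta grid are all illustrative assumptions.

```python
# Sketch of a tipping-point sensitivity analysis for time-to-event data via
# multiple imputation with a delta-inflated exponential imputation model.
import numpy as np

rng = np.random.default_rng(1)
n_per_arm, tau, M = 200, 24.0, 20          # arm size, study end (months), imputations

def simulate_arm(rate, dropout_prob):
    """Exponential event times with random early withdrawals and end-of-study censoring."""
    t_event = rng.exponential(1.0 / rate, n_per_arm)
    t_drop = np.where(rng.random(n_per_arm) < dropout_prob,
                      rng.uniform(0, tau, n_per_arm), np.inf)
    time = np.minimum.reduce([t_event, t_drop, np.full(n_per_arm, tau)])
    event = t_event <= np.minimum(t_drop, tau)
    return time, event

def exp_rate(time, event):
    return max(event.sum(), 0.5) / time.sum()     # exponential MLE (0.5 guards d = 0)

def analyse(delta):
    """Rubin's-rules log hazard ratio when experimental-arm dropouts are imputed
    with their hazard inflated by delta."""
    est, var = [], []
    for _ in range(M):
        imp_time, imp_event = [], []
        for arm, (time, event) in enumerate(data):
            rate = exp_rate(time, event) * (delta if arm == 1 else 1.0)
            t, e = time.copy(), event.copy()
            cens = ~event & (time < tau)           # withdrew before an event
            extra = rng.exponential(1.0 / rate, cens.sum())
            t[cens] = np.minimum(time[cens] + extra, tau)
            e[cens] = time[cens] + extra <= tau
            imp_time.append(t); imp_event.append(e)
        d0, T0 = imp_event[0].sum(), imp_time[0].sum()
        d1, T1 = imp_event[1].sum(), imp_time[1].sum()
        est.append(np.log(d1 / T1) - np.log(d0 / T0))
        var.append(1.0 / d1 + 1.0 / d0)
    qbar, ubar, b = np.mean(est), np.mean(var), np.var(est, ddof=1)
    total = ubar + (1 + 1 / M) * b
    return qbar, qbar + 1.96 * np.sqrt(total)      # estimate and upper 95% limit

data = [simulate_arm(0.08, 0.15), simulate_arm(0.05, 0.15)]   # control, experimental
for delta in np.arange(1.0, 3.01, 0.25):
    est, upper = analyse(delta)
    print(f"delta={delta:4.2f}  logHR={est:6.3f}  upper95={upper:6.3f}  "
          f"significant={'yes' if upper < 0 else 'no'}")
```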

3.
Process personnel continually seek opportunities to improve their processes. One of the essential steps in process improvement is to quickly recognize the starting time, or change point, of a process disturbance. The proposed approach combines the X̄ control chart with a Bayesian estimation technique. We show that the control chart carries some information about the change point and that this information can be used to construct an informative prior. Two Bayes estimators, corresponding to the informative and a non-informative prior, are then considered along with the MLE, and their efficiencies are compared through a series of simulations. The results show that the Bayes estimator with the informative prior is more accurate and more precise when the process means before and after the change point are not too close. In addition, the efficiency of the Bayes estimator with the informative prior increases as the change point moves away from the origin.
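The sketch below illustrates the core idea for a normal process mean with a known shift: the posterior distribution of the change point is computed under both a flat prior and an informative prior concentrated shortly before the control-chart signal time. The in-control and out-of-control means, the signal time and the prior settings are illustrative assumptions, not the article's.

```python
# Sketch of Bayesian change-point estimation for a normal process mean, with an
# informative prior on the change point built around the control-chart signal.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
mu0, mu1, sigma, T, true_tau = 0.0, 1.0, 1.0, 60, 40
x = np.concatenate([rng.normal(mu0, sigma, true_tau),
                    rng.normal(mu1, sigma, T - true_tau)])

signal = 48   # time at which the X-bar chart signalled (assumed known here)

def log_lik(tau):
    """Log-likelihood of the data when the mean shifts right after time tau."""
    return (stats.norm.logpdf(x[:tau], mu0, sigma).sum()
            + stats.norm.logpdf(x[tau:], mu1, sigma).sum())

taus = np.arange(1, T)                         # change can occur after obs 1..T-1
ll = np.array([log_lik(t) for t in taus])

flat_prior = np.ones_like(taus, dtype=float)
info_prior = stats.norm.pdf(taus, loc=signal - 5, scale=5.0)  # mass shortly before signal

def posterior_mean(prior):
    logw = ll + np.log(prior)
    w = np.exp(logw - logw.max())
    w /= w.sum()
    return float(np.sum(taus * w))

print("MLE of change point:              ", taus[np.argmax(ll)])
print("posterior mean, flat prior:       ", round(posterior_mean(flat_prior), 1))
print("posterior mean, informative prior:", round(posterior_mean(info_prior), 1))
```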

4.
5.
In the context of vaccine efficacy trials, where the incidence rate is very low and a very large sample size is usually required, incorporating historical data into a new trial is extremely attractive as a way to reduce sample size and increase estimation precision. Nevertheless, for some infectious diseases, seasonal change in incidence rates poses a major challenge to borrowing historical data, and a critical question is how to take advantage of historical borrowing while maintaining acceptable tolerance to the between-trial heterogeneity that commonly arises from seasonal disease transmission. In this article, we extend a probability-based power prior, which determines the amount of information to be borrowed based on the agreement between the historical and current data, so that it applies whether a single historical trial or multiple historical trials are available, with a constraint on the amount of historical information borrowed. Simulations are conducted to compare the performance of the proposed method with other methods, including the modified power prior (MPP), the meta-analytic-predictive (MAP) prior, and commensurate prior methods. Furthermore, we illustrate the application of the proposed method to trial design in a practical setting.
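A minimal sketch of a probability-based power prior for a binary incidence endpoint with a single historical trial follows: the power applied to the historical likelihood is driven by the posterior probability that the historical and current incidence rates agree within a margin, subject to a cap on the amount borrowed. The agreement measure, margin, cap and counts are illustrative assumptions rather than the article's exact formulation.

```python
# Sketch of a probability-based power prior for a binary incidence endpoint:
# historical data are down-weighted by a power a0 driven by the posterior
# probability of agreement between historical and current rates, with a cap.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)

y_h, n_h = 18, 3000          # historical control: events / subjects (assumed)
y_c, n_c = 9, 1200           # current control
margin, a_max, n_mc = 0.003, 0.5, 100_000

# Posterior probability of agreement under independent vague Beta(1, 1) priors.
p_hist = stats.beta(1 + y_h, 1 + n_h - y_h).rvs(n_mc, random_state=rng)
p_curr = stats.beta(1 + y_c, 1 + n_c - y_c).rvs(n_mc, random_state=rng)
agreement = np.mean(np.abs(p_hist - p_curr) < margin)

a0 = min(agreement, a_max)   # constrained amount of borrowing
post = stats.beta(1 + y_c + a0 * y_h, 1 + (n_c - y_c) + a0 * (n_h - y_h))

print(f"agreement probability = {agreement:.3f}, power parameter a0 = {a0:.3f}")
print("posterior mean incidence:", round(post.mean(), 5))
print("95% credible interval:   ", np.round(post.ppf([0.025, 0.975]), 5))
```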

6.
Bayesian methods are increasingly used in proof-of-concept studies. An important benefit of these methods is the potential to use informative priors, thereby reducing sample size. This is particularly relevant for treatment arms where there is a substantial amount of historical information, such as placebo and active comparators. One issue with using an informative prior is the possibility of a mismatch between the informative prior and the observed data, referred to as prior-data conflict. We focus on two methods for dealing with this: a testing approach and a mixture prior approach. The testing approach assesses prior-data conflict by comparing the observed data to the prior predictive distribution and resorts to a non-informative prior if prior-data conflict is declared. The mixture prior approach uses a prior with a precise and a diffuse component. We assess these approaches for the normal case via simulation and show that they have some attractive features compared with the standard one-component informative prior. For example, when the discrepancy between the prior and the data is sufficiently marked, and intuitively one feels less certain about the results, both the testing and mixture approaches typically yield wider posterior credible intervals than when there is no discrepancy. In contrast, when there is no discrepancy, the results of these approaches are typically similar to those of the standard approach. Whilst for any specific study the operating characteristics of any selected approach should be assessed and agreed at the design stage, we believe these two approaches are each worthy of consideration. Copyright © 2015 John Wiley & Sons, Ltd.
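For the normal case discussed above, the following sketch contrasts the two approaches: a prior-predictive test that falls back to a non-informative prior when conflict is declared, and a two-component mixture prior whose weights are updated by the data. The prior settings, the 0.05 conflict threshold and the observed data are illustrative assumptions.

```python
# Sketch (normal mean, known sampling variance) of two ways to handle
# prior-data conflict: a prior-predictive test and a two-component mixture prior.
import numpy as np
from scipy import stats

m0, s0 = 0.0, 0.5            # informative prior for the treatment effect
s_vague = 10.0               # diffuse component / fallback prior sd
ybar, se = 1.4, 0.4          # observed mean and its standard error

def conjugate_update(m, s):
    """Posterior mean and sd for a N(m, s^2) prior and ybar ~ N(theta, se^2)."""
    prec = 1 / s**2 + 1 / se**2
    return (m / s**2 + ybar / se**2) / prec, np.sqrt(1 / prec)

# --- Testing approach: prior-predictive p-value ----------------------------
z = (ybar - m0) / np.sqrt(s0**2 + se**2)
p_conflict = 2 * stats.norm.sf(abs(z))
prior_used = (m0, s0) if p_conflict > 0.05 else (m0, s_vague)
print("prior-predictive p-value:", round(p_conflict, 4),
      "-> posterior (mean, sd):", np.round(conjugate_update(*prior_used), 3))

# --- Mixture prior: 0.8 informative + 0.2 vague -----------------------------
w = np.array([0.8, 0.2])
comps = [(m0, s0), (m0, s_vague)]
marg = np.array([stats.norm.pdf(ybar, m, np.sqrt(s**2 + se**2)) for m, s in comps])
w_post = w * marg / (w * marg).sum()             # updated mixture weights
post = [conjugate_update(m, s) for m, s in comps]
mix_mean = sum(wk * mk for wk, (mk, sk) in zip(w_post, post))
mix_var = sum(wk * (sk**2 + mk**2) for wk, (mk, sk) in zip(w_post, post)) - mix_mean**2
print("posterior mixture weights:", np.round(w_post, 3),
      "-> posterior (mean, sd):", (round(mix_mean, 3), round(np.sqrt(mix_var), 3)))
```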

7.
In the framework of cluster analysis based on Gaussian mixture models, it is usually assumed that all the variables provide information about the clustering of the sample units. Several variable selection procedures are available to detect the structure of interest for the clustering when this structure is contained in a variable sub-vector. Currently, in these procedures a variable is assumed to play one of (up to) three roles: (1) informative, (2) uninformative and correlated with some informative variables, (3) uninformative and uncorrelated with any informative variable. A more general approach for modelling the role of a variable is proposed by taking into account the possibility that the variable vector provides information about more than one structure of interest for the clustering. This approach is developed by assuming that such information is given by non-overlapping and possibly correlated sub-vectors of variables; it is also assumed that the model for the variable vector is equal to a product of conditionally independent Gaussian mixture models (one for each variable sub-vector). Details about model identifiability, parameter estimation and model selection are provided. The usefulness and effectiveness of the described methodology are illustrated using simulated and real datasets.
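The sketch below illustrates, with scikit-learn, the idea of a product of conditionally independent Gaussian mixture models over sub-vectors of variables: candidate partitions of the variables into blocks are scored by the sum of per-block BIC values. This is a simplification of the model selection described in the abstract; the data, candidate partitions and numbers of components are invented.

```python
# Sketch: compare partitions of the variables into independent sub-vectors,
# fitting a Gaussian mixture to each sub-vector and scoring a partition by the
# sum of per-block BICs.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(4)
n = 500
# Variables 0-1 cluster into two groups, variables 2-3 into three other groups.
g1, g2 = rng.integers(0, 2, n), rng.integers(0, 3, n)
X = np.column_stack([
    rng.normal(3 * g1, 1.0, n), rng.normal(-3 * g1, 1.0, n),
    rng.normal(4 * g2, 1.0, n), rng.normal(2 * g2, 1.0, n)])

def partition_bic(blocks, n_comp):
    """Total BIC of a product of independent per-block Gaussian mixtures."""
    total = 0.0
    for cols, k in zip(blocks, n_comp):
        gm = GaussianMixture(n_components=k, covariance_type="full",
                             n_init=3, random_state=0).fit(X[:, cols])
        total += gm.bic(X[:, cols])
    return total

candidates = {
    "single block, 6 components": ([[0, 1, 2, 3]], [6]),
    "blocks {0,1} and {2,3}":     ([[0, 1], [2, 3]], [2, 3]),
    "four univariate blocks":     ([[0], [1], [2], [3]], [2, 2, 3, 3]),
}
for name, (blocks, k) in candidates.items():
    print(f"{name:28s} total BIC = {partition_bic(blocks, k):10.1f}")
```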

8.
Borrowing data from an external control has been an appealing strategy for evidence synthesis when conducting randomized controlled trials (RCTs). Often named hybrid control trials, they leverage existing control data from clinical trials or potentially real-world data (RWD), enable trial designs to allocate more patients to the novel intervention arm, and improve the efficiency or lower the cost of the primary RCT. Several methods have been established and developed to borrow external control data, among which propensity score methods and the Bayesian dynamic borrowing framework play essential roles. Noting the unique strengths of propensity score methods and Bayesian hierarchical models, we utilize both in a complementary manner to analyze hybrid control studies. In this article, we review methods including covariate adjustment, propensity score matching, and weighting in combination with dynamic borrowing, and we compare the performance of these methods through comprehensive simulations. Different degrees of covariate imbalance and confounding are examined. Our findings suggest that conventional covariate adjustment in combination with the Bayesian commensurate prior model provides the highest power with good type I error control under the investigated settings, and it performs as desired especially under scenarios with different degrees of confounding. To estimate efficacy signals in the exploratory setting, the covariate adjustment method in combination with the Bayesian commensurate prior is recommended.
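A stylised sketch of dynamic borrowing with a commensurate prior for a normal endpoint follows: the current control mean is centred at the historical estimate with an unknown commensurability precision that is integrated out on a grid, and covariate adjustment is represented only by treating the arm summaries as already covariate-adjusted estimates. All numbers, the Gamma prior on the precision and the grid approximation are assumptions made for illustration.

```python
# Stylised sketch of a commensurate-prior analysis for a normal endpoint.
import numpy as np
from scipy import stats

hist_mean, hist_se = 1.20, 0.10        # historical control (adjusted) estimate
curr_mean, curr_se = 1.45, 0.20        # current control arm (adjusted) estimate
trt_mean,  trt_se  = 0.90, 0.18        # current treatment arm (adjusted) estimate

theta = np.linspace(0.0, 2.5, 1001)                  # grid for the control mean
tau = np.linspace(0.05, 100.0, 1000)                 # commensurability precision grid
tau_w = stats.gamma.pdf(tau, a=1.0, scale=10.0)      # Gamma prior on tau (assumed)
tau_w /= tau_w.sum()

# Prior for the current control mean given the historical data, mixing the
# N(hist_mean, hist_se^2 + 1/tau) commensurate prior over the tau prior.
prior_theta = (stats.norm.pdf(theta[:, None], hist_mean,
                              np.sqrt(hist_se**2 + 1.0 / tau)) * tau_w).sum(axis=1)

post = prior_theta * stats.norm.pdf(curr_mean, theta, curr_se)  # times current likelihood
post /= post.sum()

ctrl_mean = np.sum(theta * post)
ctrl_sd = np.sqrt(np.sum((theta - ctrl_mean) ** 2 * post))
effect = trt_mean - ctrl_mean
effect_se = np.sqrt(trt_se**2 + ctrl_sd**2)          # flat prior on the treatment arm
print(f"borrow-adjusted control mean: {ctrl_mean:.3f} (sd {ctrl_sd:.3f})")
print(f"treatment effect estimate: {effect:.3f} +/- {1.96 * effect_se:.3f}")
```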

9.
Inference in hybrid Bayesian networks using dynamic discretization
We consider approximate inference in hybrid Bayesian networks (BNs) and present a new iterative algorithm that efficiently combines dynamic discretization with robust propagation algorithms on junction trees. Our approach offers a significant extension to Bayesian network theory and practice by offering a flexible way of modelling continuous nodes in BNs conditioned on complex configurations of evidence and intermixed with discrete nodes as both parents and children of continuous nodes. Our algorithm is implemented in a commercial Bayesian network software package, AgenaRisk, which allows model construction and testing to be carried out easily. The results from the empirical trials clearly show how our software can deal effectively with different types of hybrid models containing elements of expert judgment as well as statistical inference. In particular, we show how the rapid convergence of the algorithm towards zones of high probability density makes robust inference analysis possible even in situations where, owing to the lack of information in both prior and data, robust sampling becomes infeasible.

10.
The National Institute of Mental Health (NIMH) Collaborative Study of Long-Term Maintenance Drug Therapy in Recurrent Affective Illness was a multicenter randomized controlled clinical trial designed to determine the efficacy of a pharmacotherapy for the prevention of the recurrence of unipolar affective disorders. The outcome of interest in this study was the time until the recurrence of a depressive episode. The data show much heterogeneity between centers for the placebo group. The aim of this paper is to use Bayesian hierarchical survival models to investigate the heterogeneity of placebo effects among centers in the NIMH study. This heterogeneity is explored in terms of the marginal posterior distributions of parameters of interest and predictive distributions of future observations. The Gibbs sampling algorithm is used to approximate posterior and predictive distributions. Sensitivity of the results to the assumption of a constant-hazard survival distribution at the first stage of the hierarchy is examined by comparing results derived from a two-component exponential mixture and a two-component exponential changepoint model with the results derived from an exponential model. The second component of the mixture and changepoint models is assumed to be a surviving fraction. For each of these first-stage parametric models, sensitivity of the results to second-stage prior distributions is also examined. This revised version was published online in July 2006 with corrections to the cover date.
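As a concrete illustration of the first-stage constant-hazard model, the sketch below runs a minimal Gibbs sampler for center-specific exponential recurrence rates with a Gamma prior and a Gamma hyperprior, for which both full conditionals are conjugate. The per-center event counts, follow-up times and hyperparameters are invented, and the mixture and changepoint extensions are not shown.

```python
# Minimal Gibbs sampler for a hierarchical exponential (constant-hazard) model:
# center-specific rates lambda_i ~ Gamma(alpha, beta) with a Gamma hyperprior on beta.
import numpy as np

rng = np.random.default_rng(5)

d = np.array([12, 7, 20, 15, 9])          # recurrences per center (assumed)
T = np.array([80., 95., 60., 120., 70.])  # total follow-up per center (months)
alpha, a0, b0 = 2.0, 1.0, 1.0             # shape of lambda prior; beta hyperprior

n_iter, burn = 5000, 1000
K = len(d)
beta = 1.0
draws = np.empty((n_iter, K))

for it in range(n_iter):
    # lambda_i | beta, data ~ Gamma(alpha + d_i, rate = beta + T_i)
    lam = rng.gamma(alpha + d, 1.0 / (beta + T))
    # beta | lambda ~ Gamma(a0 + K * alpha, rate = b0 + sum(lambda))
    beta = rng.gamma(a0 + K * alpha, 1.0 / (b0 + lam.sum()))
    draws[it] = lam

post = draws[burn:]
print("posterior mean rate per center:", np.round(post.mean(axis=0), 3))
print("posterior sd per center:       ", np.round(post.std(axis=0), 3))
# Median time to recurrence implied by the posterior mean rate in each center:
print("median time to recurrence:     ", np.round(np.log(2) / post.mean(axis=0), 1))
```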

11.
Traditionally, noninferiority hypotheses have been tested using a frequentist method with a fixed margin. Given that information for the control group is often available from previous studies, it is interesting to consider a Bayesian approach in which information is "borrowed" for the control group to improve efficiency. However, construction of an appropriate informative prior can be challenging. In this paper, we consider a hybrid Bayesian approach for testing noninferiority hypotheses in studies with a binary endpoint. To account for heterogeneity between the historical information and the current trial for the control group, a dynamic P-value-based power prior parameter is proposed to adjust the amount of information borrowed from the historical data. This approach extends the simple test-then-pool method to allow a continuous discounting power parameter. An adjusted α level is also proposed to better control the type I error. Simulations are conducted to investigate the performance of the proposed method and to make comparisons with other methods, including test-then-pool and hierarchical modeling. The methods are illustrated with data from vaccine clinical trials.
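The sketch below illustrates a P-value-driven power prior for the control response rate in a noninferiority comparison with a binary endpoint: the historical control data are discounted by a power that increases with the agreement (two-proportion test p-value) between the historical and current controls. The p-value-to-power mapping, the margin and all counts are illustrative assumptions, not the article's calibration.

```python
# Sketch of a dynamic, p-value-driven power prior for a binary noninferiority test.
import numpy as np
from scipy import stats

rng = np.random.default_rng(6)

y_h, n_h = 340, 400        # historical control responders / size (assumed)
y_c, n_c = 165, 200        # current control arm
y_t, n_t = 150, 200        # current treatment arm
margin = 0.10              # noninferiority margin on the risk-difference scale

# Agreement between historical and current controls: two-proportion z-test.
p_pool = (y_h + y_c) / (n_h + n_c)
z = ((y_c / n_c - y_h / n_h)
     / np.sqrt(p_pool * (1 - p_pool) * (1 / n_h + 1 / n_c)))
p_val = 2 * stats.norm.sf(abs(z))
a0 = min(1.0, 2 * p_val)   # simple monotone mapping from p-value to borrowing

post_c = stats.beta(1 + y_c + a0 * y_h, 1 + (n_c - y_c) + a0 * (n_h - y_h))
post_t = stats.beta(1 + y_t, 1 + n_t - y_t)

draws_c = post_c.rvs(200_000, random_state=rng)
draws_t = post_t.rvs(200_000, random_state=rng)
prob_ni = np.mean(draws_t - draws_c > -margin)
print(f"agreement p-value = {p_val:.3f}, power parameter a0 = {a0:.3f}")
print(f"posterior P(noninferiority) = {prob_ni:.4f}")
```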

12.
In some randomized (drug versus placebo) clinical trials, the estimand of interest is the between-treatment difference in population means of a clinical endpoint that is free from the confounding effects of "rescue" medication (e.g., the HbA1c change from baseline at 24 weeks that would be observed without rescue medication, regardless of whether or when the assigned treatment was discontinued). In such settings, a missing data problem arises if some patients prematurely discontinue from the trial or initiate rescue medication while in the trial, the latter necessitating the discarding of post-rescue data. We caution that the commonly used mixed-effects model repeated measures analysis with the embedded missing-at-random assumption can deliver an exaggerated estimate of the aforementioned estimand of interest. This happens, in part, due to implicit imputation of an overly optimistic mean for "dropouts" (i.e., patients with missing endpoint data of interest) in the drug arm. We propose an alternative approach in which the missing mean for the drug-arm dropouts is explicitly replaced with either the estimated mean of the entire endpoint distribution under placebo (primary analysis) or a sequence of increasingly more conservative means within a tipping-point framework (sensitivity analysis); patient-level imputation is not required. A supplemental "dropout = failure" analysis is considered in which a common poor outcome is imputed for all dropouts, followed by a between-treatment comparison using quantile regression. All analyses address the same estimand and can adjust for baseline covariates. Three examples and simulation results are used to support our recommendations.
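The following sketch implements the estimator described above for a continuous endpoint: drug-arm dropouts are assigned the estimated placebo mean in the primary analysis and a grid of increasingly unfavourable means in a tipping-point sensitivity analysis, with no patient-level imputation. The data are invented and the standard error is a simple plug-in approximation rather than the article's exact variance.

```python
# Sketch of the dropout-mean-replacement estimator and its tipping-point analysis.
import numpy as np

rng = np.random.default_rng(7)

# Observed (pre-rescue, on-treatment) endpoint data; dropouts have no endpoint.
placebo = rng.normal(-0.2, 1.0, 180)                # nearly complete placebo arm
drug_obs = rng.normal(-1.0, 1.0, 150)               # drug-arm completers
n_drug_drop = 50                                    # drug-arm dropouts

def treatment_difference(mu_dropout):
    """Drug-vs-placebo difference in means when every drug-arm dropout is
    assigned the value mu_dropout (no patient-level imputation)."""
    n_drug = len(drug_obs) + n_drug_drop
    drug_mean = (drug_obs.sum() + n_drug_drop * mu_dropout) / n_drug
    diff = drug_mean - placebo.mean()
    se = np.sqrt(np.var(drug_obs, ddof=1) / n_drug        # plug-in approximation
                 + np.var(placebo, ddof=1) / len(placebo))
    return diff, diff + 1.96 * se

# Primary analysis: dropouts get the estimated placebo mean.
diff, upper = treatment_difference(placebo.mean())
print(f"primary analysis: diff = {diff:.3f}, upper 95% limit = {upper:.3f}")

# Tipping-point analysis: make the dropout mean increasingly unfavourable.
for mu in np.arange(placebo.mean(), 2.01, 0.25):
    diff, upper = treatment_difference(mu)
    print(f"dropout mean = {mu:5.2f}: diff = {diff:6.3f}, upper = {upper:6.3f}  "
          f"significant={'yes' if upper < 0 else 'no'}")
```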

13.
When recruitment into a clinical trial is limited due to the rarity of the disease of interest, or when recruitment to the control arm is limited for ethical reasons (e.g., pediatric studies or important unmet medical need), exploiting historical controls to augment the prospectively collected database can be an attractive option. Statistical methods for combining historical data with randomized data, while accounting for the incompatibility between the two, have recently been proposed and remain an active field of research. The current literature lacks both a rigorous comparison between methods and guidelines about their use in practice. In this paper, we compare the existing methods based on a confirmatory phase III study design exercise done for a new antibacterial therapy with a binary endpoint and a single historical dataset. A procedure to assess the relative performance of the different methods for borrowing information from historical control data is proposed, and practical questions related to the selection and implementation of methods are discussed. Based on our examination, we found that the methods have comparable performance, but we recommend the robust mixture prior for its ease of implementation.
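Because the robust mixture prior is the method recommended above, a minimal sketch for a binary endpoint follows: an informative Beta component summarising the historical controls is mixed with a vague Beta component, and conjugacy gives a closed-form posterior whose component weights adapt to prior-data conflict. The historical summary, prior weight and current-trial counts are invented.

```python
# Sketch of a robust mixture prior for a binary endpoint: informative Beta
# component from historical controls plus a vague Beta(1, 1) component.
import numpy as np
from scipy.special import betaln
from scipy import stats

# Mixture prior: 0.8 * Beta(30, 70) (historical controls) + 0.2 * Beta(1, 1).
w = np.array([0.8, 0.2])
a = np.array([30.0, 1.0])
b = np.array([70.0, 1.0])

y, n = 52, 120              # current-trial control responders / size (assumed)

# Posterior mixture weights are proportional to w_k times the beta-binomial
# marginal likelihood under component k (the binomial constant cancels).
log_marg = betaln(a + y, b + n - y) - betaln(a, b)
w_post = w * np.exp(log_marg - log_marg.max())
w_post /= w_post.sum()

a_post, b_post = a + y, b + n - y
post_mean = np.sum(w_post * a_post / (a_post + b_post))
print("posterior component weights:", np.round(w_post, 3))
print("posterior mean response rate:", round(post_mean, 4))

# Credible interval by Monte Carlo from the mixture posterior.
rng = np.random.default_rng(8)
comp = rng.choice(2, size=100_000, p=w_post)
draws = stats.beta(a_post[comp], b_post[comp]).rvs(random_state=rng)
print("95% credible interval:", np.round(np.quantile(draws, [0.025, 0.975]), 4))
```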

14.
In drug development, it sometimes occurs that a new drug does not demonstrate effectiveness for the full study population but appears to be beneficial in a relevant subgroup. If the subgroup of interest was not part of a confirmatory testing strategy, the inflation of the overall type I error is substantial, and such a subgroup finding can only be seen as exploratory at best. To support such exploratory findings, an appropriate replication of the subgroup finding should be undertaken in a new trial. We should, however, be reasonably confident in the observed treatment effect size to be able to use this estimate in a replication trial in the subpopulation of interest. We were therefore interested in evaluating the bias of the estimate of the subgroup treatment effect after selection based on significance for the subgroup in an overall "failed" trial. Different scenarios, involving continuous as well as dichotomous outcomes, were investigated via simulation studies. It is shown that the bias associated with subgroup findings in overall nonsignificant clinical trials is on average large and varies substantially across plausible scenarios. This renders the subgroup treatment estimate from the original trial of limited value for designing the replication trial. An empirical Bayesian shrinkage method is suggested to minimize this overestimation. The proposed estimator appears to offer either a good or a conservative correction to the observed subgroup treatment effect and hence provides a more reliable subgroup treatment effect estimate for adequate planning of future studies.
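The sketch below illustrates empirical-Bayes shrinkage of a subgroup effect selected for significance in an overall nonsignificant trial: subgroup estimates are treated as exchangeable around the overall effect, the between-subgroup variance is estimated by the method of moments, and the selected estimate is shrunk toward the overall effect. This conveys the general idea of shrinkage after selection rather than the article's exact estimator; all numbers are invented.

```python
# Sketch of empirical-Bayes shrinkage for a subgroup treatment effect selected
# because it looked significant in an overall "failed" trial.
import numpy as np

# Observed subgroup treatment effects and their squared standard errors.
effects = np.array([0.02, 0.10, 0.45, -0.05, 0.12])   # subgroup 3 "won"
se2 = np.array([0.02, 0.03, 0.025, 0.02, 0.03])
overall = 0.08                                        # overall (nonsignificant) effect

# Method-of-moments estimate of the between-subgroup variance tau^2.
tau2 = max(np.mean((effects - overall) ** 2 - se2), 0.0)

# Shrinkage factor B_i = tau^2 / (tau^2 + se_i^2); the shrunk estimate pulls
# each subgroup toward the overall effect.
B = tau2 / (tau2 + se2)
shrunk = overall + B * (effects - overall)

sel = np.argmax(np.abs(effects) / np.sqrt(se2))       # subgroup with the largest z
print(f"estimated tau^2 = {tau2:.4f}")
print(f"selected subgroup: observed {effects[sel]:.3f} -> shrunk {shrunk[sel]:.3f}")
print("all shrunk estimates:", np.round(shrunk, 3))
```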

15.
A standard two-arm randomised controlled trial usually compares an intervention to a control treatment with equal numbers of patients randomised to each treatment arm, and only data from within the current trial are used to assess the treatment effect. Historical data are used when designing new trials and have recently been considered for use in the analysis when the required number of patients under a standard trial design cannot be achieved. Incorporating historical control data could lead to more efficient trials, reducing the number of controls required in the current study when the historical and current control data agree. However, when the data are inconsistent, there is potential for biased treatment effect estimates, inflated type I error and reduced power. We introduce two novel approaches for binary data which discount historical data based on their agreement with the current trial controls: an equivalence approach and an approach based on tail-area probabilities. An adaptive design is used in which the allocation ratio is adapted at the interim analysis, randomising fewer patients to control when there is agreement. The historical data are down-weighted in the analysis using the power prior approach with a fixed power. We compare the operating characteristics of the proposed design with those of historical data methods in the literature: the modified power prior, the commensurate prior, and the robust mixture prior. The equivalence probability weight approach is intuitive, and its operating characteristics can be calculated exactly. Furthermore, the equivalence bounds can be chosen to control the maximum possible inflation in type I error.
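A sketch of the equivalence probability weight follows: at an interim analysis, the weight is the posterior probability that the historical and current control response rates lie within an equivalence bound, computed from independent Beta posteriors; it is then used as the fixed power on the historical likelihood and, descriptively, to reduce the number of further control patients. The bound, counts and allocation rule are illustrative assumptions.

```python
# Sketch of an equivalence-probability weight for borrowing historical binary
# control data, with a simple adaptive reduction of the control allocation.
import numpy as np
from scipy import stats

rng = np.random.default_rng(9)

y_h, n_h = 120, 300          # historical control responders / size (assumed)
y_c, n_c = 28, 60            # current controls observed at the interim
delta = 0.10                 # equivalence bound on the response-rate difference
planned_controls = 150       # controls planned under a standard 1:1 design

# Equivalence probability from independent Beta(1, 1) posteriors.
p_h = stats.beta(1 + y_h, 1 + n_h - y_h).rvs(100_000, random_state=rng)
p_c = stats.beta(1 + y_c, 1 + n_c - y_c).rvs(100_000, random_state=rng)
w = np.mean(np.abs(p_h - p_c) < delta)

# Power prior with fixed power w; effective historical sample size = w * n_h.
ess_hist = w * n_h
post_control = stats.beta(1 + y_c + w * y_h, 1 + (n_c - y_c) + w * (n_h - y_h))
remaining_controls = max(planned_controls - n_c - round(ess_hist), 0)

print(f"equivalence probability weight w = {w:.3f}")
print(f"effective historical sample size ~ {ess_hist:.0f}")
print(f"controls still to be randomised: {remaining_controls}")
print("interim posterior mean control rate:", round(post_control.mean(), 3))
```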

16.
Instrumental variable (IV) regression presents a number of statistical challenges due to the shape of the likelihood. We review the main Bayesian literature on instrumental variables and highlight these pathologies. We discuss Jeffreys priors, the connection to errors-in-variables problems, and more general error distributions. We propose, as an alternative to the inverted Wishart prior, a new Cholesky-based prior for the covariance matrix of the errors in IV regressions. We argue that this prior is more flexible and more robust than the inverted Wishart prior since it is not based on only one tightness parameter and can therefore be more informative about certain components of the covariance matrix and less informative about others. We show how prior-posterior inference can be formulated in a Gibbs sampler and compare its performance in the weak-instruments case on synthetic data as well as in two illustrations based on well-known real datasets.
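The sketch below shows how a Cholesky parameterisation separates the components of the 2x2 error covariance matrix: independent priors are placed on the log-diagonal and the off-diagonal elements of the Cholesky factor L, and draws of Sigma = LL' reveal the implied priors on the error variances and their correlation. The particular priors chosen are illustrative, not those proposed in the article.

```python
# Sketch of a Cholesky-based prior for a 2x2 error covariance matrix, showing
# how different components of Sigma can receive different amounts of prior
# information, unlike a single inverted-Wishart tightness parameter.
import numpy as np

rng = np.random.default_rng(10)
n_draw = 50_000

# Priors: tight on the first error scale, diffuse on the second error scale
# and on the element controlling their dependence (all illustrative).
log_l11 = rng.normal(0.0, 0.1, n_draw)    # tight prior on log L[0, 0]
log_l22 = rng.normal(0.0, 1.0, n_draw)    # diffuse prior on log L[1, 1]
l21 = rng.normal(0.0, 1.0, n_draw)        # free off-diagonal element L[1, 0]

l11, l22 = np.exp(log_l11), np.exp(log_l22)
# Sigma = L L' for a lower-triangular L:
sigma11 = l11 ** 2
sigma21 = l21 * l11
sigma22 = l21 ** 2 + l22 ** 2
rho = sigma21 / np.sqrt(sigma11 * sigma22)   # implied endogeneity correlation

print("prior sd of error variance (eq. 1):", round(np.std(sigma11), 3))
print("prior sd of error variance (eq. 2):", round(np.std(sigma22), 3))
print("implied prior quartiles of the error correlation:",
      np.round(np.quantile(rho, [0.25, 0.5, 0.75]), 3))
```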

17.
Basket trials evaluate a single drug targeting a single genetic variant in multiple cancer cohorts. Empirical findings suggest that treatment efficacy across baskets may be heterogeneous. Most modern basket trial designs use Bayesian methods, which require the prior specification of at least one parameter that permits information sharing across baskets. In this study, we provide recommendations for selecting a prior for the scale parameter in adaptive basket trials based on Bayesian hierarchical modeling. Heterogeneity among baskets attracts much attention in basket trial research, and substantial heterogeneity challenges the basic exchangeability assumption of the Bayesian hierarchical approach. Thus, we also allow each stratum-specific parameter to be exchangeable or nonexchangeable with similar strata, using data observed in an interim analysis. Through a simulation study, we evaluate the overall performance of our design based on statistical power and type I error rates. Our research contributes to the understanding of the properties of Bayesian basket trial designs.
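The sketch below conveys how the scale parameter governs information sharing across baskets, using a normal approximation on the log-odds scale: basket-specific log-odds are exchangeable around a common mean with scale tau, a half-normal prior is placed on tau, and the hyperparameters are integrated out on a grid. The interim exchangeable/nonexchangeable classification described in the abstract is not reproduced, and all data and prior settings are invented.

```python
# Sketch of hierarchical borrowing across baskets on the log-odds scale with a
# normal approximation and a half-normal prior on the between-basket scale tau.
import numpy as np
from scipy import stats

y = np.array([6, 2, 9, 1, 5])      # responders per basket (assumed)
n = np.array([20, 19, 22, 18, 21]) # patients per basket

theta_hat = np.log((y + 0.5) / (n - y + 0.5))      # empirical log-odds
se2 = 1.0 / (y + 0.5) + 1.0 / (n - y + 0.5)        # approximate variances

mu_grid = np.linspace(-4, 2, 121)
tau_grid = np.linspace(0.01, 3, 120)
MU, TAU = np.meshgrid(mu_grid, tau_grid, indexing="ij")

# Log joint of (mu, tau): flat prior on mu, half-normal(1) prior on tau, and
# marginal likelihood theta_hat_i ~ N(mu, tau^2 + se2_i).
log_post = stats.halfnorm.logpdf(TAU, scale=1.0)
for th, s2 in zip(theta_hat, se2):
    log_post += stats.norm.logpdf(th, MU, np.sqrt(TAU**2 + s2))
w = np.exp(log_post - log_post.max())
w /= w.sum()

# Posterior-mean shrunken log-odds per basket, averaging over (mu, tau).
shrunk = []
for th, s2 in zip(theta_hat, se2):
    cond_mean = (th / s2 + MU / TAU**2) / (1.0 / s2 + 1.0 / TAU**2)
    shrunk.append(np.sum(w * cond_mean))
rates = 1.0 / (1.0 + np.exp(-np.array(shrunk)))

print("observed response rates:", np.round(y / n, 3))
print("shrunken response rates:", np.round(rates, 3))
```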

18.
Subgroup-by-treatment interaction assessments are routinely performed when analysing clinical trials and are particularly important for phase 3 trials, where the results may affect regulatory labelling. Interpretation of such interactions is particularly difficult: on the one hand, a subgroup finding can be due to chance, but equally such analyses are known to have a low chance of detecting differential treatment effects across subgroup levels and so may overlook important differences in therapeutic efficacy. The EMA has therefore issued draft guidance on the use of subgroup analyses in this setting. Although this guidance provides clear proposals on the importance of pre-specification of likely subgroup effects and how to use this when interpreting trial results, it is less clear which analysis methods would be reasonable and how to interpret apparent subgroup effects in terms of whether further evaluation or action is necessary. A PSI/EFSPI Working Group has therefore been investigating a focused set of analysis approaches for assessing treatment effect heterogeneity across subgroups in confirmatory clinical trials that take account of the number of subgroups explored, and has also been investigating the ability of each method to detect such subgroup heterogeneity. This evaluation has shown that the plotting of standardised effects, the bias-adjusted bootstrapping method and the SIDES method all perform more favourably than traditional approaches such as investigating all subgroup-by-treatment interactions individually or applying a global test of interaction. Therefore, these approaches should be considered to aid interpretation and provide context for observed results from subgroup analyses conducted for phase 3 clinical trials.
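As a simple illustration of the standardised-effects idea, the sketch below converts each factor's interaction estimate to a z-statistic and compares the observed maximum |z| with its simulated null distribution under homogeneity, which accounts for the number of subgroups explored. The interaction estimates and standard errors are invented; the bias-adjusted bootstrap and SIDES methods are not shown.

```python
# Sketch of screening subgroup-by-treatment interactions with standardised
# effects and a multiplicity-aware reference distribution for the maximum |z|.
import numpy as np
from scipy import stats

rng = np.random.default_rng(11)

# Estimated interaction contrasts (difference in treatment effect between
# subgroup levels) and their standard errors for 10 pre-specified factors.
interaction = np.array([0.05, -0.12, 0.31, 0.02, -0.04,
                        0.10, -0.02, 0.06, -0.15, 0.01])
se = np.full(10, 0.11)

z = interaction / se
print("standardised interaction effects:", np.round(z, 2))

# Null distribution of max |z| over 10 independent standard-normal statistics.
null_max = np.abs(rng.standard_normal((100_000, len(z)))).max(axis=1)
p_adj = np.mean(null_max >= np.abs(z).max())
print(f"max |z| = {np.abs(z).max():.2f}, multiplicity-adjusted p = {p_adj:.3f}")

# For comparison, the unadjusted two-sided p-value for the largest effect:
print("unadjusted p for the largest effect:",
      round(2 * stats.norm.sf(np.abs(z).max()), 3))
```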

19.
We propose a method for the analysis of a spatial point pattern, which is assumed to arise as a set of observations from a spatial nonhomogeneous Poisson process. The spatial point pattern is observed in a bounded region, which, for most applications, is taken to be a rectangle in the space where the process is defined. The method is based on modeling a density function, defined on this bounded region, that is directly related to the intensity function of the Poisson process. We develop a flexible nonparametric mixture model for this density using a bivariate Beta distribution for the mixture kernel and a Dirichlet process prior for the mixing distribution. Using posterior simulation methods, we obtain full inference for the intensity function and any other functional of the process that might be of interest. We discuss applications to problems where inference for clustering in the spatial point pattern is of interest. Moreover, we consider applications of the methodology to extreme value analysis problems. We illustrate the modeling approach with three previously published data sets. Two of the data sets are from forestry and consist of locations of trees. The third data set consists of extremes from the Dow Jones index over a period of 1303 days.

20.
An extended Gaussian max-stable process model for spatial extremes
The extremes of environmental processes are often of interest due to the damage that can be caused by extreme levels of the processes. These processes are often spatial in nature, and modelling the extremes jointly at many locations can be important. In this paper, an extension of the Gaussian max-stable process is developed, enabling data from a number of locations to be modelled under a more flexible framework than in previous applications. The model is applied to annual maximum rainfall data from five sites in South-West England. For estimation we employ a pairwise likelihood within a Bayesian analysis, incorporating informative prior information.
