Related Articles
1.
Various statistical models have been proposed for two‐dimensional dose finding in drug‐combination trials. However, it is often a dilemma to decide which model to use when conducting a particular drug‐combination trial. We make a comprehensive comparison of four dose‐finding methods, and for fairness, we apply the same dose‐finding algorithm under the four model structures. Through extensive simulation studies, we compare the operating characteristics of these methods in various practical scenarios. The results show that different models may lead to different design properties and that no single model performs uniformly better in all scenarios. As a result, we propose using Bayesian model averaging to overcome the arbitrariness of the model specification and enhance the robustness of the design. We assign a discrete probability mass to each model as the prior model probability and then estimate the toxicity probabilities of combined doses in the Bayesian model averaging framework. During the trial, we adaptively allocate each new cohort of patients to the most appropriate dose combination by comparing the posterior estimates of the toxicity probabilities with the prespecified toxicity target. The simulation results demonstrate that the Bayesian model averaging approach is robust under various scenarios. Copyright © 2015 John Wiley & Sons, Ltd.
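The model-averaging step can be sketched in a few lines. The example below is only an illustration: it collapses the problem to a single ordered sequence of dose levels rather than a two‐dimensional combination grid, and the two power-model skeletons, the normal prior on the model parameter, and the interim trial counts are hypothetical, not the four model structures compared in the paper.

```python
import numpy as np
from scipy import integrate, stats

# Two candidate dose-toxicity models (power models with different "skeletons").
skeletons = [np.array([0.05, 0.10, 0.20, 0.30, 0.45]),   # model 1 prior guesses
             np.array([0.10, 0.20, 0.30, 0.45, 0.60])]   # model 2 prior guesses
prior_model_prob = np.array([0.5, 0.5])                  # discrete prior mass per model

# Hypothetical interim data: patients treated / DLTs observed at each dose level.
n_treated = np.array([3, 3, 6, 3, 0])
n_dlt     = np.array([0, 0, 1, 2, 0])

def marginal_likelihood(skeleton, prior_sd=1.345):
    """Integrate the binomial likelihood of p_j = skeleton_j**exp(a)
    over a N(0, prior_sd^2) prior on a."""
    def integrand(a):
        p = skeleton ** np.exp(a)
        lik = np.prod(p**n_dlt * (1 - p)**(n_treated - n_dlt))
        return lik * stats.norm.pdf(a, 0.0, prior_sd)
    val, _ = integrate.quad(integrand, -10, 10)
    return val

def posterior_mean_tox(skeleton, prior_sd=1.345):
    """Posterior mean toxicity probability at each dose under one model."""
    ml = marginal_likelihood(skeleton, prior_sd)
    def num(a, j):
        p = skeleton ** np.exp(a)
        lik = np.prod(p**n_dlt * (1 - p)**(n_treated - n_dlt))
        return p[j] * lik * stats.norm.pdf(a, 0.0, prior_sd)
    return np.array([integrate.quad(num, -10, 10, args=(j,))[0] / ml
                     for j in range(len(skeleton))])

# Posterior model probabilities: prior mass times marginal likelihood, renormalised.
ml = np.array([marginal_likelihood(s) for s in skeletons])
post_model_prob = prior_model_prob * ml / np.sum(prior_model_prob * ml)

# Model-averaged toxicity estimates; the next cohort goes to the dose closest to the target.
bma_tox = sum(w * posterior_mean_tox(s) for w, s in zip(post_model_prob, skeletons))
target = 0.30
print("posterior model probabilities:", post_model_prob.round(3))
print("BMA toxicity estimates:", bma_tox.round(3))
print("recommended dose index:", int(np.argmin(np.abs(bma_tox - target))))
```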

2.
In this paper the issue of finding uncertainty intervals for queries in a Bayesian Network is reconsidered. The investigation focuses on Bayesian Nets with discrete nodes and finite populations. An earlier asymptotic approach is compared with a simulation‐based approach, together with further alternatives, one based on a single sample of the Bayesian Net of a particular finite population size, and another which uses expected population sizes together with exact probabilities. We conclude that a query of a Bayesian Net should be expressed as a probability embedded in an uncertainty interval. Based on an investigation of two Bayesian Net structures, the preferred method is the simulation method. However, both the single sample method and the expected sample size methods may be useful and are simpler to compute. Any method at all is more useful than none, when assessing a Bayesian Net under development, or when drawing conclusions from an ‘expert’ system.
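A minimal sketch of the simulation-based approach, assuming a hypothetical two-node network with known conditional probability tables: finite populations of a given size are repeatedly sampled, the query is re-estimated from each, and percentile limits give the uncertainty interval around the exact query probability.

```python
import numpy as np

rng = np.random.default_rng(1)

# A two-node discrete network A -> B with assumed conditional probabilities.
p_a = 0.3                        # P(A = 1)
p_b_given_a = {0: 0.2, 1: 0.8}   # P(B = 1 | A)

def query_p_a_given_b(p_a, p_b_a0, p_b_a1):
    """Exact query P(A=1 | B=1) by Bayes' rule."""
    num = p_a * p_b_a1
    return num / (num + (1 - p_a) * p_b_a0)

def simulate_population(n):
    """Sample a finite population of size n and re-estimate the query from it."""
    a = rng.random(n) < p_a
    b = rng.random(n) < np.where(a, p_b_given_a[1], p_b_given_a[0])
    # Re-estimate the probabilities from the simulated population, with a small
    # continuity correction to avoid division by zero in tiny populations.
    p_a_hat = (a.sum() + 0.5) / (n + 1)
    p_b_a1_hat = (np.sum(a & b) + 0.5) / (a.sum() + 1)
    p_b_a0_hat = (np.sum(~a & b) + 0.5) / ((~a).sum() + 1)
    return query_p_a_given_b(p_a_hat, p_b_a0_hat, p_b_a1_hat)

point = query_p_a_given_b(p_a, p_b_given_a[0], p_b_given_a[1])
reps = np.array([simulate_population(n=200) for _ in range(2000)])
lo, hi = np.percentile(reps, [2.5, 97.5])
print(f"query P(A=1|B=1) = {point:.3f}, 95% uncertainty interval ({lo:.3f}, {hi:.3f})")
```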

3.
Children represent a large underserved population of “therapeutic orphans,” as an estimated 80% of children are treated off‐label. However, pediatric drug development often faces substantial challenges, including economic, logistical, technical, and ethical barriers, among others. Among many efforts trying to remove these barriers, increased recent attention has been paid to extrapolation; that is, the leveraging of available data from adults or older age groups to draw conclusions for the pediatric population. The Bayesian statistical paradigm is natural in this setting, as it permits the combining (or “borrowing”) of information across disparate sources, such as the adult and pediatric data. In this paper, authored by the pediatric subteam of the Drug Information Association Bayesian Scientific Working Group and Adaptive Design Working Group, we develop, illustrate, and provide suggestions on Bayesian statistical methods that could be used to design improved pediatric development programs that use all available information in the most efficient manner. A variety of relevant Bayesian approaches are described, several of which are illustrated through 2 case studies: extrapolating adult efficacy data to expand the labeling for Remicade to include pediatric ulcerative colitis and extrapolating adult exposure‐response information for antiepileptic drugs to pediatrics.
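One hedged illustration of such borrowing is a power prior on a binomial response rate; the adult and pediatric counts and the discounting weight below are entirely hypothetical, and the paper discusses a broader menu of approaches.

```python
from scipy import stats

# Adult and pediatric data (hypothetical): responders / sample size.
x_adult, n_adult = 60, 100
x_ped,   n_ped   = 8, 20
a0 = 0.5   # discounting weight on the adult likelihood (0 = no borrowing, 1 = full pooling)

# Starting from a Beta(1, 1) prior, the power prior stays conjugate with a binomial
# likelihood, so the pediatric posterior for the response rate is Beta with:
alpha_post = 1 + a0 * x_adult + x_ped
beta_post  = 1 + a0 * (n_adult - x_adult) + (n_ped - x_ped)

post = stats.beta(alpha_post, beta_post)
print(f"posterior mean response rate: {post.mean():.3f}")
print(f"95% credible interval: ({post.ppf(0.025):.3f}, {post.ppf(0.975):.3f})")
print(f"P(response rate > 0.5) = {1 - post.cdf(0.5):.3f}")
```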

4.
Finite memory sources and variable‐length Markov chains have recently gained popularity in data compression and mining, in particular, for applications in bioinformatics and language modelling. Here, we consider denser data compression and prediction with a family of sparse Bayesian predictive models for Markov chains in finite state spaces. Our approach lumps transition probabilities into classes composed of invariant probabilities, such that the resulting models need not have a hierarchical structure as in context tree‐based approaches. This can lead to a substantially higher rate of data compression, and such non‐hierarchical sparse models can be motivated for instance by data dependence structures existing in the bioinformatics context. We describe a Bayesian inference algorithm for learning sparse Markov models through clustering of transition probabilities. Experiments with DNA sequence and protein data show that our approach is competitive in both prediction and classification when compared with several alternative methods on the basis of variable memory length.

5.
We analyse a combination of errant count data subject to under‐reported counts and inerrant count data to estimate multiple Poisson rates and reporting probabilities of cervical cancer for four European countries. Our analysis uses a Bayesian hierarchical model. Using a simulation study, we demonstrate the efficacy of our new simultaneous inference method and compare the utility of our method with an empirical Bayes approach developed by Fader and Hardie (J. Appl. Statist., 2000).
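A stripped-down sketch of the idea, not the paper's multi-country hierarchical model: with an accurately recorded ("inerrant") count and an under-reported ("errant") count for one population, the Poisson thinning relationship links the true rate and the reporting probability, and a simple grid approximation gives their joint posterior. All counts below are hypothetical.

```python
import numpy as np
from scipy import stats

# Hypothetical counts for one region:
#   z ~ Poisson(lambda)       accurately recorded (inerrant) count
#   y ~ Poisson(lambda * p)   under-reported (errant) count, p = reporting probability
z, y = 120, 85

lam_grid = np.linspace(60, 200, 400)    # rate of true cases
p_grid   = np.linspace(0.3, 1.0, 200)   # reporting probability
LAM, P = np.meshgrid(lam_grid, p_grid, indexing="ij")

# Flat priors on the grid; sum the log-likelihoods of the two count sources.
log_post = stats.poisson.logpmf(z, LAM) + stats.poisson.logpmf(y, LAM * P)
post = np.exp(log_post - log_post.max())
post /= post.sum()

print(f"posterior mean rate of true cases: {np.sum(post * LAM):.1f}")
print(f"posterior mean reporting probability: {np.sum(post * P):.3f}")
```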

6.
Proactive evaluation of drug safety with systematic screening and detection is critical to protect patients' safety and important in regulatory approval of new drug indications and postmarketing communications and label renewals. In recent years, quite a few statistical methodologies have been developed to better evaluate drug safety through the life cycle of the product development. The statistical methods for flagging safety signals have been developed in two major areas: one for data collected from spontaneous reporting systems, mostly in the postmarketing area, and the other for data from clinical trials. To our knowledge, the methods developed for one area have not been applied to the other one so far. In this article, we propose to utilize all such methods for flagging safety signals in both areas regardless of which specific area they were originally developed for. Therefore, we selected eight typical methods, that is, proportional reporting ratios, reporting odds ratios, the maximum likelihood ratio test, the Bayesian confidence propagation neural network method, the chi‐square test for rates comparison, the Benjamini and Hochberg procedure, the new double false discovery rate control procedure, and the Bayesian hierarchical mixture model, for systematic comparison through simulations. The Benjamini and Hochberg procedure and the new double false discovery rate control procedure perform best overall in terms of sensitivity and false discovery rate. The likelihood ratio test also performs well when the sample sizes are large. Copyright © 2014 John Wiley & Sons, Ltd.
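For reference, three of the listed methods are simple enough to sketch directly; the 2×2 report tables below are hypothetical, and the chi‐square p-values feed the Benjamini–Hochberg step-up rule.

```python
import numpy as np
from scipy import stats

# Per drug-AE pair: a = reports with the drug and the AE, b = drug without the AE,
# c = other drugs with the AE, d = other drugs without the AE.

def prr(a, b, c, d):
    """Proportional reporting ratio."""
    return (a / (a + b)) / (c / (c + d))

def ror(a, b, c, d):
    """Reporting odds ratio."""
    return (a * d) / (b * c)

def bh_flags(p_values, q=0.05):
    """Benjamini-Hochberg step-up procedure: boolean flag per test at FDR level q."""
    p = np.asarray(p_values)
    order = np.argsort(p)
    m = len(p)
    thresholds = q * np.arange(1, m + 1) / m
    passed = p[order] <= thresholds
    k = np.max(np.nonzero(passed)[0]) + 1 if passed.any() else 0
    flags = np.zeros(m, dtype=bool)
    flags[order[:k]] = True
    return flags

# Hypothetical tables for three drug-AE combinations.
tables = [(20, 480, 100, 9400), (5, 495, 120, 9380), (2, 498, 40, 9460)]
p_vals = [stats.chi2_contingency(np.array([[a, b], [c, d]]))[1] for a, b, c, d in tables]

for (a, b, c, d), p in zip(tables, p_vals):
    print(f"PRR={prr(a, b, c, d):.2f}  ROR={ror(a, b, c, d):.2f}  chi-square p={p:.2e}")
print("BH flags at q=0.05:", bh_flags(p_vals))
```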

7.
Decision making is a critical component of a new drug development process. Based on results from an early clinical trial such as a proof of concept trial, the sponsor can decide whether to continue, stop, or defer the development of the drug. To simplify and harmonize the decision‐making process, decision criteria have been proposed in the literature. One of them is to examine the location of a confidence bar relative to the target value and lower reference value of the treatment effect. In this research, we modify an existing approach by moving some of the “stop” decisions to the “consider” category so that the chance of directly terminating the development of a potentially valuable drug can be reduced. As Bayesian analysis has certain flexibilities and can borrow historical information through an inferential prior, we apply Bayesian analysis to the trial planning and decision making. Via a design prior, we can also calculate the probabilities of various decision outcomes in relation to the sample size and the other parameters to help design the study. An example and a series of computations are used to illustrate the applications, assess the operating characteristics, and compare the performances of different approaches.
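A hedged sketch of such a three-outcome rule, using a normal posterior for the treatment effect and illustrative probability thresholds rather than the paper's calibrated criteria:

```python
from scipy import stats

def decide(post_mean, post_sd, lrv, tv, go_conf=0.80, stop_conf=0.10):
    """Classify the outcome (go / consider / stop) from a normal posterior of the effect,
    judged against a lower reference value (LRV) and a target value (TV)."""
    post = stats.norm(post_mean, post_sd)
    p_exceed_lrv = 1 - post.cdf(lrv)   # posterior prob. the effect clears the LRV
    p_exceed_tv = 1 - post.cdf(tv)     # posterior prob. the effect reaches the TV
    if p_exceed_lrv >= go_conf and p_exceed_tv >= stop_conf:
        return "go"
    if p_exceed_lrv < go_conf and p_exceed_tv < stop_conf:
        return "stop"
    return "consider"                  # ambiguous evidence: defer the decision

# Example: effect estimated at 2.4 (posterior sd 1.0) against LRV = 1 and TV = 3.
print(decide(post_mean=2.4, post_sd=1.0, lrv=1.0, tv=3.0))
```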

8.
Testing of a composite null hypothesis versus a composite alternative is considered when both have a related invariance structure. The goal is to develop conditional frequentist tests that allow the reporting of data-dependent error probabilities, error probabilities that have a strict frequentist interpretation and that reflect the actual amount of evidence in the data. The resulting tests are also seen to be Bayesian tests, in the strong sense that the reported frequentist error probabilities are also the posterior probabilities of the hypotheses under default choices of the prior distribution. The new procedures are illustrated in a variety of applications to model selection and multivariate hypothesis testing.

9.
In forensic science, the rare type match problem arises when the matching characteristic from the suspect and the crime scene is not in the reference database; hence, it is difficult to evaluate the likelihood ratio that compares the defense and prosecution hypotheses. A recent solution consists of modeling the ordered population probabilities according to the two-parameter Poisson–Dirichlet distribution, which is a well-known Bayesian nonparametric prior, and plugging the maximum likelihood estimates of the parameters into the likelihood ratio. We demonstrate that this approximation produces a systematic bias that fully Bayesian inference avoids. Motivated by this forensic application, we consider learning the posterior distribution of the parameters that govern the two-parameter Poisson–Dirichlet distribution using two sampling methods: Markov Chain Monte Carlo and approximate Bayesian computation. These methods are evaluated in terms of accuracy and efficiency. Finally, we compare the likelihood ratio that is obtained by our proposal with the existing solution using a database of Y-chromosome haplotypes.

10.
Large databases of routinely collected data are a valuable source of information for detecting potential associations between drugs and adverse events (AE). A pharmacovigilance system starts with a scan of these databases for potential signals of drug-AE associations that will subsequently be examined by experts to aid in regulatory decision-making. The signal generation process faces some key challenges: (1) an enormous number of drug-AE combinations need to be tested (i.e. the problem of multiple testing); (2) the results are not in a format that allows the incorporation of accumulated experience and knowledge for future signal generation; and (3) the signal generation process ignores information captured from other processes in the pharmacovigilance system and does not allow feedback. Bayesian methods have been developed for signal generation in pharmacovigilance, although the full potential of these methods has not been realised. For instance, Bayesian hierarchical models will allow the incorporation of established medical and epidemiological knowledge into the priors for each drug-AE combination. Moreover, the outputs from this analysis can be incorporated into decision-making tools to help in signal validation and posterior actions to be taken by the regulators and companies. We discuss in this paper the apparent advantage of the Bayesian methods used in safety signal generation and the similarities and differences between the two widely used Bayesian methods. We will also propose the use of Bayesian hierarchical models to address the three key challenges and discuss the reasons why Bayesian methodology still has not been fully utilised in pharmacovigilance activities.
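As a loose illustration of the kind of prior-informed shrinkage a Bayesian hierarchical model can supply (a normal–normal approximation with hypothetical inputs, not the models compared in the paper): each drug-AE pair has an observed log relative reporting rate, and a common prior, whose centre and spread could encode established medical knowledge, pulls noisy estimates toward it.

```python
import numpy as np
from scipy import stats

# Hypothetical observed log relative reporting rates and their standard errors.
y = np.array([0.9, 0.2, 1.6, -0.1])
s = np.array([0.5, 0.3, 0.8, 0.4])
mu, tau = 0.0, 0.5    # prior centred on "no signal"; spread reflects prior knowledge

# Conjugate normal-normal posterior for each pair's true log rate.
post_prec = 1 / s**2 + 1 / tau**2
post_mean = (y / s**2 + mu / tau**2) / post_prec
post_sd = np.sqrt(1 / post_prec)

# Flag pairs whose posterior probability of a positive log rate is high.
p_signal = 1 - stats.norm.cdf(0, loc=post_mean, scale=post_sd)
for i, (m, p) in enumerate(zip(post_mean, p_signal)):
    print(f"pair {i}: shrunk estimate {m:+.2f}, P(signal) = {p:.3f}")
```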

11.
We adopt a Bayesian approach to forecast the penetration of a new product into a market. We incorporate prior information from an existing product and/or management judgments into the data analysis. The penetration curve is assumed to be a nondecreasing function of time and may be under shape constraints. Markov-chain Monte Carlo methods are proposed and used to compute the Bayesian forecasts. An example on forecasting the penetration of color television using the information from black-and-white television is provided. The models considered can also be used to address the general bioassay and reliability stress-testing problems.

12.
Array-based comparative genomic hybridization (aCGH) is a high-resolution high-throughput technique for studying the genetic basis of cancer. The resulting data consists of log fluorescence ratios as a function of the genomic DNA location and provides a cytogenetic representation of the relative DNA copy number variation. Analysis of such data typically involves estimation of the underlying copy number state at each location and segmenting regions of DNA with similar copy number states. Most current methods proceed by modeling a single sample/array at a time, and thus fail to borrow strength across multiple samples to infer shared regions of copy number aberrations. We propose a hierarchical Bayesian random segmentation approach for modeling aCGH data that utilizes information across arrays from a common population to yield segments of shared copy number changes. These changes characterize the underlying population and allow us to compare different population aCGH profiles to assess which regions of the genome have differential alterations. Our method, referred to as BDSAcgh (Bayesian Detection of Shared Aberrations in aCGH), is based on a unified Bayesian hierarchical model that allows us to obtain probabilities of alteration states as well as probabilities of differential alteration that correspond to local false discovery rates. We evaluate the operating characteristics of our method via simulations and an application using a lung cancer aCGH data set.

13.
Observation of adverse drug reactions during drug development can cause closure of the whole programme. However, if association between the genotype and the risk of an adverse event is discovered, then it might suffice to exclude patients of certain genotypes from future recruitment. Various sequential and non‐sequential procedures are available to identify an association between the whole genome, or at least a portion of it, and the incidence of adverse events. In this paper we start with a suspected association between the genotype and the risk of an adverse event and suppose that the genetic subgroups with elevated risk can be identified. Our focus is on determining whether the patients identified as being at risk should be excluded from further studies of the drug. We propose using a utility function to determine the appropriate action, taking into account the relative costs of suffering an adverse reaction and of failing to alleviate the patient's disease. Two illustrative examples are presented, one comparing patients who suffer from an adverse event with contemporary patients who do not, and the other making use of a reference control group. We also illustrate two classification methods, LASSO and CART, for identifying patients at risk, but we stress that any appropriate classification method could be used in conjunction with the proposed utility function. Our emphasis is on determining the action to take rather than on providing definitive evidence of an association. Copyright © 2008 John Wiley & Sons, Ltd.
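A minimal sketch of the expected-utility comparison, with placeholder utilities and risk probabilities rather than the paper's values:

```python
# Costs/benefits on an arbitrary utility scale (hypothetical).
u_benefit = 1.0       # utility of successfully alleviating the disease
u_adr     = -4.0      # (dis)utility of suffering the adverse drug reaction

# Assumed quantities for the genotype-defined subgroup (hypothetical).
prevalence    = 0.15  # fraction of patients carrying the risk genotype
p_efficacy    = 0.60  # probability the drug alleviates the disease
p_adr_at_risk = 0.25  # ADR probability among at-risk patients
p_adr_other   = 0.02  # ADR probability among the remaining patients

def expected_utility(include_at_risk: bool) -> float:
    """Average per-patient utility over the whole patient population."""
    eu_other = p_efficacy * u_benefit + p_adr_other * u_adr
    if include_at_risk:
        eu_risk = p_efficacy * u_benefit + p_adr_at_risk * u_adr
    else:
        eu_risk = 0.0  # excluded patients receive neither the benefit nor the ADR risk
    return prevalence * eu_risk + (1 - prevalence) * eu_other

for choice in (True, False):
    print(f"include at-risk subgroup = {choice}: E[utility] = {expected_utility(choice):+.3f}")
```

With these placeholder numbers, excluding the at-risk subgroup gives the higher expected utility; changing the relative cost of an adverse reaction can reverse the decision, which is exactly the trade-off the utility function is meant to make explicit.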

14.
Whilst innovative Bayesian approaches are increasingly used in clinical studies, in the preclinical area Bayesian methods appear to be rarely used in the reporting of pharmacology data. This is particularly surprising in the context of regularly repeated in vivo studies where there is a considerable amount of data from historical control groups, which has potential value. This paper describes our experience with introducing Bayesian analysis for such studies using a Bayesian meta‐analytic predictive approach. This leads naturally either to an informative prior for a control group as part of a full Bayesian analysis of the next study or to a predictive distribution that replaces a control group entirely. We use quality control charts to illustrate study‐to‐study variation to the scientists and describe informative priors in terms of their approximate effective numbers of animals. We describe two case studies of animal models: the lipopolysaccharide‐induced cytokine release model used in inflammation and the novel object recognition model used to screen cognitive enhancers, both of which show the advantage of a Bayesian approach over the standard frequentist analysis. We conclude that using Bayesian methods in stable repeated in vivo studies can result in a more effective use of animals, either by reducing the total number of animals used or by increasing the precision of key treatment differences. This will lead to clearer results and supports the “3Rs initiative” to Refine, Reduce and Replace animals in research. Copyright © 2016 John Wiley & Sons, Ltd.
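A crude sketch of the meta-analytic predictive idea, using simple moment matching instead of an MCMC fit of a hierarchical model and entirely hypothetical historical counts; it nevertheless shows how such a prior can be summarised by an approximate effective number of animals.

```python
import numpy as np

# Hypothetical historical control groups: responders and group sizes.
events  = np.array([4, 6, 5, 7, 3])
animals = np.array([10, 12, 10, 12, 10])

rates = events / animals
m, v = rates.mean(), rates.var(ddof=1)   # spread of historical control rates

# Moment-match a Beta(alpha, beta) prior to mean m and variance v.
common = m * (1 - m) / v - 1
alpha, beta = m * common, (1 - m) * common

print(f"approximate MAP-style prior: Beta({alpha:.1f}, {beta:.1f})")
print(f"effective number of animals: {alpha + beta:.1f}")
```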

15.
Screening programs for breast cancer are widely used to reduce the impact of breast cancer in populations. For example, the South Australian Breast X-ray Service, BreastScreen SA, established in 1989, is a participant in the National Program of Early Detection of Breast Cancer. BreastScreen SA has collected information on both screening-detected and interval or self-reported cases, which enables the estimation of various important attributes of the screening mechanism. In this paper, a tailored model is fitted to the BreastScreen SA data. The probabilities that the screening detects a tumour of a given size and that an individual reports a tumour by a specified size in the absence of screening are estimated. Estimates of the distribution of sizes detected in the absence of screening, and at the first two screenings, are also given.

16.
Many phase I drug combination designs have been proposed to find the maximum tolerated combination (MTC). Due to the two‐dimensional nature of drug combination trials, these designs typically require complicated statistical modeling and estimation, which limit their use in practice. In this article, we propose an easy‐to‐implement Bayesian phase I combination design, called Bayesian adaptive linearization method (BALM), to simplify the dose finding for drug combination trials. BALM takes the dimension reduction approach. It selects a subset of combinations, through a procedure called linearization, to convert the two‐dimensional dose matrix into a string of combinations that are fully ordered in toxicity. As a result, existing single‐agent dose‐finding methods can be directly used to find the MTC. If the selected linear path does not contain the MTC, a dose‐insertion procedure is performed to add new doses whose expected toxicity rate is equal to the target toxicity rate. Our simulation studies show that the proposed BALM design performs better than competing, more complicated combination designs.

17.
In many applications, a finite population contains a large proportion of zero values that make the population distribution severely skewed. An unequal‐probability sampling plan compounds the problem, and as a result the normal approximation to the distribution of various estimators has poor precision. The central‐limit‐theorem‐based confidence intervals for the population mean are hence unsatisfactory. Complex designs also make it hard to pin down useful likelihood functions, hence a direct likelihood approach is not an option. In this paper, we propose a pseudo‐likelihood approach. The proposed pseudo‐log‐likelihood function is an unbiased estimator of the log‐likelihood function when the entire population is sampled. Simulations have been carried out. When the inclusion probabilities are related to the unit values, the pseudo‐likelihood intervals are superior to existing methods in terms of the coverage probability, the balance of non‐coverage rates on the lower and upper sides, and the interval length. An application with a data set from the Canadian Labour Force Survey‐2000 also shows that the pseudo‐likelihood method performs more appropriately than other methods. The Canadian Journal of Statistics 38: 582–597; 2010 © 2010 Statistical Society of Canada
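A sketch of the weighting idea, under an assumed zero-inflated exponential working model rather than the paper's formulation: each sampled unit's log-density is weighted by the inverse of its inclusion probability, so the pseudo-log-likelihood is a design-unbiased estimator of the census log-likelihood.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

# --- synthetic finite population with many zeros (illustrative only) ---
N = 5000
is_positive = rng.random(N) < 0.25
y_pop = np.where(is_positive, rng.exponential(scale=10.0, size=N), 0.0)

# Unequal-probability sampling: inclusion probability loosely related to the value.
pi = np.clip(0.02 + 0.004 * y_pop, 0.02, 0.5)
sampled = rng.random(N) < pi
y, w = y_pop[sampled], 1.0 / pi[sampled]   # sampled values and design weights

def neg_pseudo_loglik(theta):
    """Negative pseudo-log-likelihood for (logit phi, log mu):
    zero with probability 1 - phi, Exponential(mean mu) otherwise."""
    phi = 1.0 / (1.0 + np.exp(-theta[0]))
    mu = np.exp(theta[1])
    ll = np.where(y > 0,
                  np.log(phi) - np.log(mu) - y / mu,   # exponential part
                  np.log(1.0 - phi))                   # point mass at zero
    return -np.sum(w * ll)                             # inverse-probability weighting

fit = minimize(neg_pseudo_loglik, x0=np.array([0.0, 0.0]), method="Nelder-Mead")
phi_hat = 1.0 / (1.0 + np.exp(-fit.x[0]))
mu_hat = np.exp(fit.x[1])
print(f"estimated zero proportion: {1 - phi_hat:.3f} (population: {1 - is_positive.mean():.3f})")
print(f"estimated population mean: {phi_hat * mu_hat:.2f} (population: {y_pop.mean():.2f})")
```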

18.
This paper describes a Bayesian approach to make inference for risk reserve processes with an unknown claim‐size distribution. A flexible model based on mixtures of Erlang distributions is proposed to approximate the special features frequently observed in insurance claim sizes, such as long tails and heterogeneity. A Bayesian density estimation approach for the claim sizes is implemented using reversible jump Markov chain Monte Carlo methods. An advantage of the considered mixture model is that it belongs to the class of phase‐type distributions, and thus explicit evaluations of the ruin probabilities are possible. Furthermore, from a statistical point of view, the parametric structure of the Erlang mixture offers some advantages compared with the whole over‐parametrized family of phase‐type distributions. Given the observed claim arrivals and claim sizes, we show how to estimate the ruin probabilities, as a function of the initial capital, and predictive intervals that give a measure of the uncertainty in the estimations.
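The phase-type structure is what makes explicit ruin probabilities possible; as a rough cross-check they can also be estimated by plain Monte Carlo, as in the sketch below with illustrative parameters (the simulation truncates each path at a fixed number of claims, which is a reasonable approximation when the safety loading is positive).

```python
import numpy as np

rng = np.random.default_rng(42)

# Compound-Poisson surplus process with claim sizes from a two-component Erlang mixture.
lam = 1.0                       # claim arrival rate
weights = np.array([0.7, 0.3])  # Erlang mixture weights
shapes  = np.array([2, 5])      # Erlang shape parameters
rate    = 1.0                   # common Erlang rate parameter
mean_claim = np.sum(weights * shapes / rate)
c = 1.2 * lam * mean_claim      # premium rate with 20% safety loading

def ruin_prob(u, n_paths=2000, n_claims=500):
    """Estimate P(ruin) by checking the surplus at the first n_claims claim epochs."""
    inter = rng.exponential(1.0 / lam, size=(n_paths, n_claims))
    comp = rng.choice(len(weights), p=weights, size=(n_paths, n_claims))
    claims = rng.gamma(shape=shapes[comp], scale=1.0 / rate)
    arrival_times = np.cumsum(inter, axis=1)
    surplus = u + c * arrival_times - np.cumsum(claims, axis=1)
    return np.mean(np.min(surplus, axis=1) < 0.0)   # ruin can only occur at claim epochs

for u in (0.0, 5.0, 20.0):
    print(f"initial capital u={u:5.1f}: estimated ruin probability ~ {ruin_prob(u):.3f}")
```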

19.
In the classical approach to qualitative reliability demonstration, system failure probabilities are estimated based on a binomial sample drawn from the running production. In this paper, we show how to take account of additional available sampling information for some or even all subsystems of a current system under test with serial reliability structure. In that connection, we present two approaches, a frequentist and a Bayesian one, for assessing an upper bound for the failure probability of serial systems under binomial subsystem data. In the frequentist approach, we introduce (i) a new way of deriving the probability distribution for the number of system failures, which might be randomly assembled from the failed subsystems and (ii) a more accurate estimator for the Clopper–Pearson upper bound using a beta mixture distribution. In the Bayesian approach, however, we infer the posterior distribution for the system failure probability on the basis of the system/subsystem testing results and a prior distribution for the subsystem failure probabilities. We propose three different prior distributions and compare their performances in the context of high reliability testing. Finally, we apply the proposed methods to reduce the efforts of semiconductor burn-in studies by considering synergies such as comparable chip layers, among different chip technologies.
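Two of the ingredients can be sketched with standard formulas and hypothetical subsystem data: the exact (Clopper–Pearson) upper confidence bound for a single binomial failure probability, and a simple Bayesian upper credible bound for the serial system obtained by sampling independent beta posteriors (the paper's beta-mixture refinement and prior comparisons are not reproduced here).

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)

def clopper_pearson_upper(failures, n, conf=0.95):
    """One-sided exact upper confidence bound for a binomial proportion."""
    if failures == n:
        return 1.0
    return stats.beta.ppf(conf, failures + 1, n - failures)

# Hypothetical subsystem test results: (failures, units tested).
subsystems = [(0, 800), (1, 1200), (0, 500)]

print("per-subsystem 95% Clopper-Pearson upper bounds:")
for f, n in subsystems:
    print(f"  {f}/{n}: {clopper_pearson_upper(f, n):.2e}")

# Bayesian sketch: Beta(0.5, 0.5) priors per subsystem; a serial system fails
# if any subsystem fails, so p_system = 1 - prod(1 - p_i). Sample the posteriors.
draws = np.column_stack([
    rng.beta(0.5 + f, 0.5 + n - f, size=100_000) for f, n in subsystems
])
p_system = 1.0 - np.prod(1.0 - draws, axis=1)
print(f"Bayesian 95% upper credible bound for the system: {np.quantile(p_system, 0.95):.2e}")
```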

20.
The first British National Survey of Sexual Attitudes and Lifestyles (NATSAL) was conducted in 1990–1991 and the second in 1999–2001. When surveys are repeated, the changes in population parameters are of interest and are generally estimated from a comparison of the data between surveys. However, since all surveys may be subject to bias, such comparisons may partly reflect a change in bias. Typically limited external data are available to estimate the change in bias directly. However, one approach, which is often possible, is to define in each survey a sample of participants who are eligible for both surveys, and then to compare the reporting of selected events that occurred before the earlier survey time point. A difference in reporting suggests a change in overall survey bias between time points, although other explanations are possible. In NATSAL, changes in bias are likely to be similar for groups of sexual experiences. The grouping of experiences allows the information that is derived from the selected events to be incorporated into inference concerning population changes in other sexual experiences. We use generalized estimating equations, which incorporate weighting for differential probabilities of sampling and non-response in a relatively straightforward manner. The results, combined with estimates of the change in reporting, are used to derive minimum established population changes, based on NATSAL data. For some key population parameters, the change in reporting is seen to be consistent with a change in bias alone. Recommendations are made for the design of future surveys.
