Similar Literature
20 similar documents found (search time: 31 ms)
1.
With the rapid development of computing technology, Bayesian statistics has gained increasing attention in various areas of public health. However, the full potential of Bayesian sequential methods applied to vaccine safety surveillance has not yet been realized, despite the acknowledged practical benefits and philosophical advantages of Bayesian statistics. In this paper, we describe how sequential analysis can be performed in a Bayesian paradigm in the field of vaccine safety. We compared the performance of a frequentist sequential method, specifically the Maximized Sequential Probability Ratio Test (MaxSPRT), and a Bayesian sequential method using simulations and a real-world vaccine safety example. Performance is evaluated using three metrics: false positive rate, false negative rate, and average earliest time to signal. Depending on the background rate of adverse events, the Bayesian sequential method can significantly improve the false negative rate and decrease the earliest time to signal. We consider the proposed Bayesian sequential approach to be a promising alternative for vaccine safety surveillance.
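As background for the comparison above, the Poisson MaxSPRT statistic has a simple closed form. The sketch below applies it in a surveillance loop; the threshold value is illustrative only, not a calibrated critical value (in practice it is chosen by exact or simulated methods to control the overall false positive rate).

```python
import math

def poisson_maxsprt_llr(observed, expected):
    """Log-likelihood ratio for the Poisson MaxSPRT:
    LLR = c*log(c/u) - (c - u) when observed c exceeds expected u, else 0."""
    if observed <= expected:
        return 0.0
    return observed * math.log(observed / expected) - (observed - expected)

def monitor(event_counts, expected_counts, threshold=3.0):
    """Return the first look at which the cumulative LLR crosses the
    (illustrative) threshold, or None if no signal occurs."""
    cum_obs = cum_exp = 0.0
    for t, (c, u) in enumerate(zip(event_counts, expected_counts), start=1):
        cum_obs += c
        cum_exp += u
        if poisson_maxsprt_llr(cum_obs, cum_exp) >= threshold:
            return t
    return None
```

A Bayesian sequential alternative would instead update a posterior on the rate ratio at each look and signal when a posterior tail probability crosses a bound.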

2.
Summary. A review of methods suggested in the literature for sequential detection of changes in public health surveillance data is presented. Many researchers have noted the need for prospective methods. In recent years there has been increased interest in both the statistical and the epidemiological literature concerning this type of problem. However, most of the vast literature in public health monitoring deals with retrospective methods, especially spatial methods. Evaluations with respect to the statistical properties of interest for prospective surveillance are rare. The special aspects of prospective statistical surveillance and different ways of evaluating such methods are described. Attention is given to methods that include only the time domain, as well as methods for detection where observations have a spatial structure. For surveillance of a change in a Poisson process, the likelihood ratio method and the Shiryaev–Roberts method are derived.
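For the Poisson change-detection setting mentioned at the end of this abstract, the Shiryaev–Roberts statistic has a standard one-line recursion. The sketch below assumes a known pre-change rate lam0 and post-change rate lam1; the threshold is illustrative, not calibrated to a false-alarm constraint.

```python
import math

def poisson_lr(x, lam0, lam1):
    # Likelihood ratio f1(x)/f0(x) for a single Poisson count x
    return math.exp(-(lam1 - lam0)) * (lam1 / lam0) ** x

def shiryaev_roberts(counts, lam0, lam1, threshold):
    """Shiryaev-Roberts recursion R_n = (1 + R_{n-1}) * LR_n, starting at R_0 = 0.
    Signals a change at the first n with R_n >= threshold."""
    r = 0.0
    for n, x in enumerate(counts, start=1):
        r = (1.0 + r) * poisson_lr(x, lam0, lam1)
        if r >= threshold:
            return n, r
    return None, r
```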

3.
The development of a new pneumococcal conjugate vaccine involves assessing the responses of the new serotypes included in the vaccine. The World Health Organization guidance states that the response for each new serotype in the new vaccine should be compared with the aggregate response from the existing vaccine to evaluate non-inferiority. However, no details are provided on how to define and estimate the aggregate response or which methods to use for non-inferiority comparisons. We investigate several methods to estimate the aggregate response based on binary data, including simple-average, model-based, and lowest-response methods. The response of each new serotype is then compared with the estimated aggregate response for non-inferiority. The non-inferiority test p-value and confidence interval are obtained from Miettinen and Nurminen's method, using an effective sample size. The methods are evaluated using simulations and demonstrated with a real clinical trial example.
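To make the comparison concrete, here is a deliberately simplified sketch: the aggregate is taken as the simple average of the existing serotype response rates, and a Wald-type z-statistic (treating the aggregate as fixed) tests non-inferiority with a margin. This is not the Miettinen–Nurminen score method the abstract uses, which requires iterative constrained estimation and an effective sample size; all numbers below are hypothetical.

```python
import math

def aggregate_simple_average(rates):
    # Simple-average aggregate response across existing serotypes
    return sum(rates) / len(rates)

def noninferiority_z(p_new, n_new, p_agg, margin):
    """Wald-type z for H0: p_new - p_agg <= -margin, treating the aggregate
    response p_agg as a fixed constant (a simplification for illustration)."""
    se = math.sqrt(p_new * (1 - p_new) / n_new)
    return (p_new - p_agg + margin) / se
```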

4.
The success of a seasonal influenza vaccine efficacy trial depends not only upon the design but also upon the annual epidemic characteristics. In this context, simulation methods are an essential tool for evaluating the performance of study designs under various circumstances. However, traditional methods for simulating time‐to‐event data are not suitable for simulating influenza vaccine efficacy trials because of the seasonality and heterogeneity of influenza epidemics. Instead, we propose a mathematical model parameterized with historical surveillance data, heterogeneous frailty among the subjects, survey‐based heterogeneous numbers of daily contacts, and a mixed vaccine protection mechanism. We illustrate our methodology by generating multiple-trial data similar to a large phase III trial that failed to show additional relative vaccine efficacy of an experimental adjuvanted vaccine compared with the reference vaccine. We show that small departures from the design assumptions, such as a smaller range of strain protection for the experimental vaccine or the chosen endpoint, could lead to smaller probabilities of success in showing significant relative vaccine efficacy. Copyright © 2015 John Wiley & Sons, Ltd.

5.
We consider the statistical evaluation and estimation of vaccine efficacy when the protective effect wanes with time. We reanalyse data from a 5-year trial of two oral cholera vaccines in Matlab, Bangladesh. In this field trial, one vaccine appears to confer better initial protection than the other, but neither appears to offer protection for longer than about 3 years. Time-dependent vaccine effects are estimated by obtaining smooth estimates of a time-varying relative risk RR(t) using survival analysis. We compare two approaches based on the Cox model in terms of their strategies for detecting time-varying vaccine effects and their techniques for estimating a time-dependent RR(t). These methods allow an exploration of time-varying vaccine effects while making minimal parametric assumptions about the functional form of RR(t) for vaccinated compared with unvaccinated subjects.
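A crude, assumption-light way to see waning efficacy of the kind this abstract estimates smoothly is a piecewise-constant RR(t): within each follow-up interval, divide the event rate per person-time in the vaccinated group by that in the unvaccinated group. This is only a sketch of the idea, not the paper's Cox-based smoothing.

```python
def piecewise_rr(events_v, persontime_v, events_c, persontime_c):
    """Piecewise-constant relative risk RR(t): per-interval incidence rate in
    the vaccinated arm divided by the rate in the control arm."""
    return [(ev / ptv) / (ec / ptc)
            for ev, ptv, ec, ptc in zip(events_v, persontime_v,
                                        events_c, persontime_c)]

def vaccine_efficacy(rr_by_interval):
    # VE(t) = 1 - RR(t) per interval; values near 0 indicate waned protection
    return [1.0 - rr for rr in rr_by_interval]
```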

6.
In this paper, we adapt recently developed simulation-based sequential algorithms to the Bayesian analysis of discretely observed diffusion processes. The estimation framework involves the introduction of m−1 latent data points between every pair of observations. Sequential MCMC methods are then used to sample the posterior distribution of the latent data and the model parameters on-line. The method is applied to the estimation of parameters in a simple stochastic volatility (SV) model of the U.S. short-term interest rate. We also provide a simulation study to validate our method, using synthetic data generated by the SV model with parameters calibrated to match weekly observations of the U.S. short-term interest rate.
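The latent points introduced between observations in this framework are typically generated from an Euler–Maruyama discretization of the diffusion. The sketch below shows only that discretization step for a generic dX = mu(X) dt + sigma(X) dW, not the sequential MCMC machinery built on top of it.

```python
import math
import random

def euler_maruyama(x0, drift, diffusion, dt, n, rng):
    """Euler-Maruyama discretization of dX = drift(X) dt + diffusion(X) dW:
    X_{k+1} = X_k + drift(X_k)*dt + diffusion(X_k)*sqrt(dt)*Z,  Z ~ N(0, 1).
    Returns the path [X_0, ..., X_n]."""
    path = [x0]
    for _ in range(n):
        x = path[-1]
        path.append(x + drift(x) * dt
                      + diffusion(x) * math.sqrt(dt) * rng.gauss(0.0, 1.0))
    return path
```

With m−1 such steps between each pair of observations, the discretized transition density becomes tractable for MCMC.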

7.
In recent years, immunological science has evolved, and cancer vaccines are now approved and available for treating existing cancers. Because cancer vaccines require time to elicit an immune response, a delayed treatment effect is expected and is actually observed in drug approval studies. Accordingly, we propose the evaluation of survival endpoints by weighted log‐rank tests with the Fleming–Harrington class of weights. We consider group sequential monitoring, which allows early efficacy stopping, and determine a semiparametric information fraction for the Fleming–Harrington family of weights, which is necessary for the error spending function. Moreover, we give a flexible survival model in cancer vaccine studies that considers not only the delayed treatment effect but also the long‐term survivors. In a Monte Carlo simulation study, we illustrate that when the primary analysis is a weighted log‐rank test emphasizing the late differences, the proposed information fraction can be a useful alternative to the surrogate information fraction, which is proportional to the number of events. Copyright © 2016 John Wiley & Sons, Ltd.
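The Fleming–Harrington G(rho, gamma) weighted log-rank test used here is standard; a minimal self-contained implementation is sketched below (assuming censoring after tied events at the same time). Weights S(t-)^rho * (1 - S(t-))^gamma with gamma > 0 down-weight early event times, which is why this class suits delayed treatment effects; (0, 0) recovers the ordinary log-rank test.

```python
import math

def fh_logrank(times1, events1, times2, events2, rho=0.0, gamma=0.0):
    """Fleming-Harrington G(rho, gamma) weighted log-rank z-statistic.
    Weights use the pooled Kaplan-Meier estimate S(t-) just before each
    distinct event time; events flagged 1, censorings 0."""
    data = sorted([(t, e, 0) for t, e in zip(times1, events1)] +
                  [(t, e, 1) for t, e in zip(times2, events2)])
    n1, n = len(times1), len(data)   # at-risk counts: group 1, total
    s = 1.0                          # pooled KM just before current time
    num = var = 0.0
    i = 0
    while i < n + 0 and i < len(data):
        t = data[i][0]
        j = i
        d = d1 = 0                   # events at t: total, group 1
        while j < len(data) and data[j][0] == t:
            if data[j][1]:
                d += 1
                if data[j][2] == 0:
                    d1 += 1
            j += 1
        if d > 0:
            w = (s ** rho) * ((1.0 - s) ** gamma)
            num += w * (d1 - d * n1 / n)
            if n > 1:
                var += w * w * d * (n1 / n) * (1 - n1 / n) * (n - d) / (n - 1)
            s *= 1.0 - d / n
        n1 -= sum(1 for k in range(i, j) if data[k][2] == 0)
        n -= j - i
        i = j
    return num / math.sqrt(var) if var > 0 else 0.0
```

With gamma = 1 the first event carries zero weight (S(t-) = 1), shifting all information to the late differences, which is exactly the behavior the information-fraction argument in this abstract must account for.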

8.
9.
Sepsis is one of the biggest risks to patient safety, with a natural mortality rate between 25% and 50%. It is difficult to diagnose, and no validated diagnostic standard currently exists. A commonly used scoring criterion is the quick sequential organ failure assessment (qSOFA); however, it demonstrates very low specificity in ICU populations. We develop a method to personalize the thresholds in qSOFA that incorporates easy-to-measure patient baseline characteristics. We compare the personalized threshold method with qSOFA, with five previously published methods that obtain an optimal constant threshold for a single biomarker, and with machine learning algorithms based on logistic regression and AdaBoost, using patient data from the MIMIC-III database. The personalized threshold method achieves higher accuracy than qSOFA and the five published methods and has performance comparable to the machine learning methods. Personalized thresholds, however, are much easier to adopt in real-life monitoring than machine learning methods, as they are computed once for a patient and used in the same way as qSOFA, whereas the machine learning methods are harder to implement and interpret.
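The standard qSOFA score awards one point each for respiratory rate >= 22/min, systolic blood pressure <= 100 mmHg, and altered mentation (GCS < 15), flagging patients at >= 2 points. The sketch below shows that rule and an illustrative personalization step anchored at the patient's baseline vitals; the shift amounts are hypothetical placeholders, not the thresholds estimated in the paper.

```python
def qsofa(resp_rate, sbp, gcs, rr_cut=22, sbp_cut=100):
    """qSOFA score: one point each for respiratory rate >= rr_cut,
    systolic BP <= sbp_cut, and altered mentation (GCS < 15)."""
    return int(resp_rate >= rr_cut) + int(sbp <= sbp_cut) + int(gcs < 15)

def personalized_cutoffs(baseline_rr, baseline_sbp, rr_shift=6, sbp_drop=20):
    """Illustrative personalized thresholds relative to a patient's baseline
    (the shift values here are hypothetical; the paper estimates them from
    baseline characteristics)."""
    return baseline_rr + rr_shift, baseline_sbp - sbp_drop
```

A patient with a high resting respiratory rate then needs a larger absolute rate to trigger a point, which is the mechanism by which personalization raises specificity.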

10.
Bayesian sequential and adaptive randomization designs are gaining popularity in clinical trials thanks to their potential to reduce the number of required participants and save resources. We propose a Bayesian sequential design with adaptive randomization rates to allocate newly recruited patients to treatment arms more efficiently. In this paper, we consider 2-arm clinical trials. Patients are allocated to the 2 arms with a randomization rate chosen to achieve minimum variance for the test statistic. Algorithms are presented to calculate the optimal randomization rate, critical values, and power for the proposed design. A sensitivity analysis examines the influence of the prior distributions on the design. Simulation studies compare the proposed method and traditional methods in terms of power and actual sample size. Simulations show that, when the total sample size is fixed, the proposed design can achieve greater power and/or a smaller actual sample size than the traditional Bayesian sequential design. Finally, we apply the proposed method to a real data set and compare the results with the Bayesian sequential design without adaptive randomization in terms of sample size; the proposed method further reduces the required sample size.
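The fixed-sample analogue of a minimum-variance randomization rate is Neyman allocation: assign to each arm in proportion to its outcome standard deviation. The sketch below shows that classical rule (the paper's Bayesian design updates the rate sequentially as the posterior for each arm's variance evolves).

```python
def neyman_allocation(sd1, sd2):
    """Fraction of patients allocated to arm 1 that minimizes
    Var(mean1 - mean2) for a fixed total n: n1/n = sd1 / (sd1 + sd2)."""
    return sd1 / (sd1 + sd2)

def diff_variance(sd1, sd2, n, frac1):
    """Variance of the difference in sample means when a fraction frac1
    of the n patients is allocated to arm 1."""
    n1 = frac1 * n
    return sd1 ** 2 / n1 + sd2 ** 2 / (n - n1)
```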

11.
Proactive evaluation of drug safety with systematic screening and detection is critical to protect patients' safety, and it is important in regulatory approval of new drug indications and in postmarketing communications and label renewals. In recent years, quite a few statistical methodologies have been developed to better evaluate drug safety through the life cycle of product development. Statistical methods for flagging safety signals have been developed in two major areas: one for data collected from spontaneous reporting systems, mostly in the postmarketing setting, and the other for data from clinical trials. To our knowledge, the methods developed for one area have not been applied to the other. In this article, we propose to utilize all such methods for flagging safety signals in both areas, regardless of the area for which they were originally developed. We selected eight representative methods for systematic comparison through simulations: proportional reporting ratios, reporting odds ratios, the maximum likelihood ratio test, the Bayesian confidence propagation neural network method, the chi-square test for rate comparison, the Benjamini and Hochberg procedure, the new double false discovery rate control procedure, and the Bayesian hierarchical mixture model. The Benjamini and Hochberg procedure and the new double false discovery rate control procedure perform best overall in terms of sensitivity and false discovery rate. The likelihood ratio test also performs well when the sample sizes are large. Copyright © 2014 John Wiley & Sons, Ltd.
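The first two methods in the list, proportional reporting ratios (PRR) and reporting odds ratios (ROR), are simple functions of the 2x2 drug-event table from a spontaneous reporting database; a minimal sketch:

```python
def prr(a, b, c, d):
    """Proportional reporting ratio from the 2x2 table:
    a = target drug & event of interest, b = target drug & other events,
    c = other drugs & event of interest, d = other drugs & other events."""
    return (a / (a + b)) / (c / (c + d))

def ror(a, b, c, d):
    # Reporting odds ratio: (a/b) / (c/d) = ad / (bc)
    return (a * d) / (b * c)
```

In practice a signal rule also requires a minimum report count and a lower confidence bound above 1, not just a large point estimate.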

12.
Bioequivalence (BE) studies are designed to show that two formulations of one drug are equivalent, and they play an important role in drug development. At the design stage, there may be a high degree of uncertainty about the variability of the formulations and the actual performance of the test versus the reference formulation. Therefore, an interim look may be desirable to stop the study if there is no chance of claiming BE at the end (futility), to claim BE if evidence is sufficient (efficacy), or to adjust the sample size. Sequential design approaches specifically for BE studies have been proposed in the literature. We modified the existing methods, focusing on simplified multiplicity adjustment and futility stopping, and name our method the modified sequential design for BE studies (MSDBE). Simulation results demonstrate comparable performance between MSDBE and the original published methods, while MSDBE offers more transparency and better applicability. The R package MSDBE is available at https://sites.google.com/site/modsdbe/. Copyright © 2015 John Wiley & Sons, Ltd.
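The BE criterion underlying such designs is the two one-sided tests (TOST) procedure on the log scale with the conventional 80%–125% limits. The sketch below uses a z critical value for simplicity; actual BE analyses use t critical values (alpha = 0.05 per side, i.e., a 90% CI), and a sequential version would adjust the critical value at each look.

```python
import math

def tost_be(log_ratio_est, se, lower=math.log(0.8), upper=math.log(1.25),
            crit=1.645):
    """Two one-sided tests for average bioequivalence on the log scale:
    conclude BE when both one-sided statistics exceed the critical value,
    i.e., the 90% CI for the log geometric-mean ratio lies in
    [log 0.8, log 1.25].  crit is a z approximation, not the exact t."""
    t_low = (log_ratio_est - lower) / se
    t_high = (upper - log_ratio_est) / se
    return t_low > crit and t_high > crit
```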

13.
In this article we introduce a new likelihood-based method, called the likelihood integrated method, which is distinct from the well-known integrated likelihood method. We use the likelihood integrated method to propose a simple exploratory graphical analysis for the change-point problem in the context of directional data. The method is applied to the analysis of two real-life data sets. In most cases, the results obtained by this simple method are quite similar to those obtained earlier by different formal methods. A small simulation study is conducted to assess the effectiveness of the procedure in indicating the presence of a change point.

14.
A biosimilar is a biological product that is highly similar to, and has no clinically meaningful difference from, a licensed product in terms of safety, purity, and potency. Biosimilar study design is essential to demonstrate equivalence between the biosimilar and the reference product. However, existing designs and assessment methods are primarily based on binary and continuous endpoints. We propose a Bayesian adaptive design for biosimilarity trials with a time-to-event endpoint. The proposed design has two features. First, we employ the calibrated power prior to precisely borrow relevant information from historical data on the reference drug. Second, we propose a two-stage procedure using the Bayesian biosimilarity index (BBI) to allow early stopping and improve efficiency. Extensive simulations demonstrate the operating characteristics of the proposed method in contrast with a naive method. A sensitivity analysis and extensions with respect to the assumptions are presented.
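The power prior idea in this design is easiest to see with a conjugate binary example (the paper itself uses a time-to-event endpoint and calibrates the discount weight to prior-data conflict; here the weight delta is fixed for illustration). Historical data enter the posterior with their likelihood raised to the power delta in [0, 1].

```python
def power_prior_beta(y, n, y_hist, n_hist, delta, a0=1.0, b0=1.0):
    """Beta posterior for a response rate under a fixed-weight power prior:
    posterior = Beta(a0 + delta*y_hist + y,
                     b0 + delta*(n_hist - y_hist) + (n - y)).
    delta = 0 ignores the historical data; delta = 1 pools it fully."""
    a = a0 + delta * y_hist + y
    b = b0 + delta * (n_hist - y_hist) + (n - y)
    return a, b

def beta_mean(a, b):
    # Posterior mean of a Beta(a, b) distribution
    return a / (a + b)
```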

15.
In this article, a sequential correction of two linear methods, linear discriminant analysis (LDA) and the perceptron, is proposed. The correction relies on sequentially appending additional features on which the classifier is trained. These new features are posterior probabilities determined by a basic classification method such as LDA or the perceptron. At each step, we add the probabilities obtained on a slightly different data set, because the vector of added probabilities varies from step to step. We therefore have many classifiers of the same type trained on slightly different data sets. Four different sequential correction methods are presented, based on different combining schemes (e.g. the mean rule and the product rule). Experimental results on different data sets demonstrate that the improvements are efficient and that this approach outperforms classical linear methods, providing a significant reduction in the mean classification error rate.
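One step of the scheme can be sketched as: train a base linear classifier, then append its output as a new feature and retrain. The toy version below uses a plain perceptron and appends its raw score rather than a calibrated posterior probability, which is a simplification of the paper's construction.

```python
def perceptron(X, y, epochs=20, lr=1.0):
    """Plain perceptron on labels in {-1, +1}; returns the weight vector,
    with the last entry acting as the bias term."""
    w = [0.0] * (len(X[0]) + 1)
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            z = sum(wj * xj for wj, xj in zip(w, xi + [1.0]))
            if yi * z <= 0:                      # misclassified (or on boundary)
                w = [wj + lr * yi * xj for wj, xj in zip(w, xi + [1.0])]
    return w

def augment_with_score(X, w):
    """Sequential-correction step: append the base classifier's score
    (a stand-in for its posterior probability) as an extra feature."""
    return [xi + [sum(wj * xj for wj, xj in zip(w, xi + [1.0]))] for xi in X]
```

Repeating this on slightly perturbed data sets yields the family of same-type classifiers that the mean and product combining rules then aggregate.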

16.
17.
We consider the problem of density estimation when the data arrive as a continuous stream with no fixed length. In this setting, implementations of the usual methods of density estimation, such as kernel density estimation, are problematic. We propose a method of density estimation for massive datasets based on taking the derivative of a smooth curve fit through a set of quantile estimates. To achieve this, a low-storage, single-pass, sequential method is proposed for the simultaneous estimation of multiple quantiles for massive datasets; these quantile estimates form the basis of the density estimator. For comparison, we also consider a sequential kernel density estimator. Simulation studies show that the proposed methods perform well and have several distinct advantages over existing methods.
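A classic low-storage, single-pass quantile tracker of the kind this abstract builds on is the Robbins–Monro stochastic approximation update, which keeps one number per tracked quantile. This is a generic sketch, not the specific estimator proposed in the paper.

```python
import random

def sa_quantile(stream, p, q0=0.0, c=1.0):
    """Single-pass Robbins-Monro tracker for the p-th quantile:
    q <- q + (c/n) * (p - 1{x <= q}).  Uses O(1) memory per quantile."""
    q = q0
    for n, x in enumerate(stream, start=1):
        q += (c / n) * (p - (1.0 if x <= q else 0.0))
    return q
```

Running several trackers (say at p = 0.1, 0.2, ..., 0.9) and differentiating a smooth curve through the resulting quantile estimates yields the density estimate described above.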

18.
We update a previous approach to the estimation of the size of an open population when there are multiple lists at each time point. Our motivation is 35 years of longitudinal data on the detection of drug users by the Central Registry of Drug Abuse in Hong Kong. We develop a two-stage smoothing spline approach, which gives a flexible and easily implemented alternative to the previous method based on kernel smoothing. The new method retains the property of reducing the variability of the individual estimates at each time point. We evaluate the new method by means of a simulation study that includes an examination of the effects of variable selection. The new method is then applied to the data collected by the Central Registry of Drug Abuse, and the parameter estimates are compared with the well-known Jolly–Seber estimates based on single-capture methods.

19.
Phase II trials evaluate whether a new drug or therapy is worth pursuing further, or whether certain treatments are feasible. A typical phase II trial is a single-arm (open-label) trial with a binary clinical endpoint (response to therapy). Although many oncology phase II clinical trials are designed with a two-stage procedure, multi-stage designs for phase II cancer clinical trials are now feasible owing to the increased capability of data capture. Such designs adjust for multiple analyses and variations in analysis time, and provide greater flexibility, such as minimizing the number of patients treated with an ineffective therapy and identifying the minimum number of patients needed to evaluate whether the trial warrants further development. In most NIH-sponsored studies, the early stopping rule is chosen so that the number of patients treated with an ineffective therapy is minimized. In pharmaceutical trials, it is also important to know as early as possible whether the trial is highly promising and how likely an early conclusion is to be sustained. Although various methods are available to address these issues, practitioners often use disparate methods for different issues and do not realize that a single unified method exists. This article shows how to use a unified approach, via a fully sequential procedure, the sequential conditional probability ratio test (SCPRT), to address the multiple needs of a phase II trial. We show that the fully sequential program can be used to derive an optimized, efficient multi-stage design for either low or high activity, to identify the minimum number of patients required to assess whether a new drug warrants further study, and to adjust for unplanned interim analyses. In addition, we calculate a probability of discordance: the probability that the statistical test would conclude otherwise should the trial continue to the planned end, usually at the sample size of a fixed-sample design. This probability can be used to aid decision making in a drug development program. All computations are based on the exact binomial distribution.
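To illustrate the exact binomial computations on which such designs rest, here is the simplest multi-look case: a Simon-style two-stage design, where the probability of declaring the drug inactive under a given response rate is an exact sum over binomial outcomes. The SCPRT in this article is fully sequential, so this is only the two-look special case of the same calculation.

```python
from math import comb

def binom_pmf(k, n, p):
    # Exact binomial probability mass P(X = k), X ~ Binomial(n, p)
    return comb(n, k) * p ** k * (1 - p) ** (n - k)

def two_stage_accept_prob(n1, r1, n2, r, p):
    """Exact probability of declaring the drug inactive in a two-stage
    design: stop early if <= r1 responses among n1 stage-1 patients, or
    observe <= r total responses among n1 + n2 patients overall."""
    early = sum(binom_pmf(k, n1, p) for k in range(r1 + 1))
    cont_fail = sum(
        binom_pmf(k, n1, p) * sum(binom_pmf(j, n2, p) for j in range(r - k + 1))
        for k in range(r1 + 1, min(n1, r) + 1)
    )
    return early + cont_fail
```

Evaluating this at the null and alternative response rates gives the design's type I error and power exactly, with no normal approximation.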

20.
Immuno‐oncology has emerged as an exciting new approach to cancer treatment. Common immunotherapy approaches include cancer vaccines, effector cell therapy, and T‐cell–stimulating antibodies. Checkpoint inhibitors such as cytotoxic T lymphocyte–associated antigen 4 and programmed death‐1/L1 antagonists have shown promising results in multiple indications in solid tumors and hematology. However, the mechanisms of action of these novel drugs pose unique statistical challenges for the accurate evaluation of clinical safety and efficacy, including late‐onset toxicity, dose optimization, evaluation of combination agents, pseudoprogression, and delayed and lasting clinical activity. Traditional statistical methods may not be the most accurate or efficient, so it is highly desirable to develop suitable statistical methodologies and tools to investigate cancer immunotherapies efficiently. In this paper, we summarize these issues and discuss alternative methods to meet the challenges in the clinical development of these novel agents. For safety evaluation and dose‐finding trials, we recommend the use of a time‐to‐event model‐based design to handle late toxicities, a simple 3‐step procedure for dose optimization, and flexible rule‐based or model‐based designs for combination agents. For efficacy evaluation, we discuss alternative endpoints/designs/tests including the time‐specific probability endpoint, the restricted mean survival time, the generalized pairwise comparison method, the immune‐related response criteria, and the weighted log‐rank or weighted Kaplan‐Meier test. The benefits and limitations of these methods are discussed, and some recommendations are provided for applied researchers implementing these methods in clinical practice.
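Among the alternative efficacy endpoints listed above, the restricted mean survival time (RMST) is simply the area under the Kaplan–Meier curve up to a truncation time tau. A minimal sketch (processing one subject per row, so heavy ties are handled only approximately):

```python
def km_survival(times, events):
    """Kaplan-Meier estimate: returns (event time, S(t)) steps.
    events: 1 = event, 0 = censored.  Processes one subject at a time,
    so tied event times are handled sequentially (an approximation)."""
    at_risk = len(times)
    s, steps = 1.0, []
    for t, d in sorted(zip(times, events)):
        if d:
            s *= 1.0 - 1.0 / at_risk
            steps.append((t, s))
        at_risk -= 1
    return steps

def rmst(km_steps, tau):
    """Restricted mean survival time: area under the KM step function
    from 0 to tau (survival starts at 1.0)."""
    area, prev_t, prev_s = 0.0, 0.0, 1.0
    for t, s in km_steps:
        if t > tau:
            break
        area += prev_s * (t - prev_t)
        prev_t, prev_s = t, s
    area += prev_s * (tau - prev_t)
    return area
```

Unlike the hazard ratio, the difference in RMST between arms remains interpretable when hazards are non-proportional, which is exactly the situation created by delayed immunotherapy effects.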


Copyright © Beijing Qinyun Technology Development Co., Ltd. 京ICP备09084417号