Similar Literature
20 similar records retrieved (search time: 41 ms).
1.
Bayesian alternatives to the sign test are proposed which incorporate the number of ties observed. These alternatives arise from different strategies for dealing with the number of ties; one strategy incorporates the true proportion of ties into the hypotheses of interest. The Bayesian methods are compared to each other and to the typical sign test in a simulation study. The new methods are also compared to another version of the sign test proposed by Coakley and Heise (1996), which was shown to perform especially well in situations where the probability of observing a tie is very high. Although one of the Bayesian methods appears to perform best overall in the simulation study, its performance is not dominating, and the easy-to-use typical sign test generally performs very well.
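As a minimal illustration of the first strategy described in this abstract, the sketch below places a Dirichlet prior on the multinomial probabilities of (below, tie, above) counts and reports the posterior probability that "above" outweighs "below". The prior, function name, and counts are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def bayes_sign_test_with_ties(n_minus, n_tie, n_plus, prior=(1.0, 1.0, 1.0),
                              n_draws=100_000, seed=0):
    """Posterior probability that P(X > median0) exceeds P(X < median0),
    treating (minus, tie, plus) counts as multinomial with a Dirichlet prior."""
    rng = np.random.default_rng(seed)
    alpha = np.array(prior) + np.array([n_minus, n_tie, n_plus])
    draws = rng.dirichlet(alpha, size=n_draws)   # columns: p_minus, p_tie, p_plus
    return np.mean(draws[:, 2] > draws[:, 0])

# Example: 4 observations below the hypothesised median, 9 ties, 12 above.
print(bayes_sign_test_with_ties(4, 9, 12))
```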

2.
Whilst innovative Bayesian approaches are increasingly used in clinical studies, in the preclinical area Bayesian methods appear to be rarely used in the reporting of pharmacology data. This is particularly surprising in the context of regularly repeated in vivo studies, where there is a considerable amount of potentially valuable data from historical control groups. This paper describes our experience with introducing Bayesian analysis for such studies using a Bayesian meta-analytic predictive approach. This leads naturally either to an informative prior for a control group as part of a full Bayesian analysis of the next study, or to a predictive distribution that replaces a control group entirely. We use quality control charts to illustrate study-to-study variation to the scientists and describe informative priors in terms of their approximate effective numbers of animals. We describe two case studies of animal models: the lipopolysaccharide-induced cytokine release model used in inflammation and the novel object recognition model used to screen cognitive enhancers, both of which show the advantage of a Bayesian approach over the standard frequentist analysis. We conclude that using Bayesian methods in stable repeated in vivo studies can result in a more effective use of animals, either by reducing the total number of animals used or by increasing the precision of key treatment differences. This will lead to clearer results and supports the "3Rs initiative" to Refine, Reduce and Replace animals in research. Copyright © 2016 John Wiley & Sons, Ltd.
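The meta-analytic predictive (MAP) idea can be sketched crudely in closed form for normal outcomes; the paper's actual analysis is a full Bayesian meta-analysis. The sketch below uses the DerSimonian-Laird moment estimator for the between-study variance and reports an approximate "effective number of animals" for the resulting prior. The function name and the moment-based shortcut are assumptions, not the paper's method.

```python
import numpy as np

def map_prior_normal(control_means, control_sds, n_per_group):
    """Crude meta-analytic predictive prior for the next study's control mean,
    using the DerSimonian-Laird moment estimator for between-study variance.
    A full MAP analysis would use MCMC; this is only a rough sketch."""
    y = np.asarray(control_means, dtype=float)
    se2 = np.asarray(control_sds, dtype=float) ** 2 / np.asarray(n_per_group)
    w = 1.0 / se2
    mu_fixed = np.sum(w * y) / np.sum(w)
    q = np.sum(w * (y - mu_fixed) ** 2)                 # Cochran's Q
    k = len(y)
    tau2 = max(0.0, (q - (k - 1)) / (np.sum(w) - np.sum(w**2) / np.sum(w)))
    w_star = 1.0 / (se2 + tau2)
    mu = np.sum(w_star * y) / np.sum(w_star)
    pred_var = tau2 + 1.0 / np.sum(w_star)              # predictive variance for a new control mean
    sigma2 = np.mean(np.asarray(control_sds) ** 2)      # typical within-study animal variance
    n_eff = sigma2 / pred_var                           # approximate effective number of animals
    return mu, pred_var, n_eff

print(map_prior_normal([10.2, 9.7, 10.9, 10.4], [2.1, 1.8, 2.4, 2.0], [8, 8, 8, 8]))
```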

3.
It is well known that a spontaneous reporting system suffers from significant under-reporting of adverse drug reactions from the source population. Existing methods do not adjust for such under-reporting when calculating measures of association between a drug and the adverse drug reaction under study, yet there is often direct and/or indirect information on the reporting probabilities. This work incorporates the reporting probabilities into existing methodologies, specifically the Bayesian confidence propagation neural network and DuMouchel's empirical Bayes method, and shows how the two methods lead to biased results in the presence of under-reporting. Assuming all cases are reported, the association measure for the source population can be estimated using only exposure information through a reference sample from the source population. Copyright © 2014 John Wiley & Sons, Ltd.
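A toy version of the under-reporting adjustment: if a reporting probability for the drug-event cell is available, the observed count can be scaled before computing a BCPNN-style information component. This is a deliberately crude point-estimate sketch; the paper's correction to the full Bayesian machinery is more involved, and the function and arguments here are illustrative assumptions.

```python
import numpy as np

def ic_adjusted(n11, n1dot, ndot1, n, r11=1.0):
    """BCPNN-style information component (point estimate) with a crude
    correction for under-reporting: the drug-event count is scaled up by
    the assumed reporting probability r11."""
    n11_adj = n11 / r11                     # implied true count given reporting probability
    expected = n1dot * ndot1 / n            # expected count under independence
    return np.log2(n11_adj / expected)

# With only 30% of cases reported, the naive IC understates the signal:
print(ic_adjusted(20, 500, 400, 100_000, r11=1.0))   # naive
print(ic_adjusted(20, 500, 400, 100_000, r11=0.3))   # adjusted
```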

4.
This article focuses on the definition and study of a binary Bayesian criterion which measures statistical agreement between a subjective prior and the data. The setting of this work is concrete Bayesian studies. It is an alternative and complementary tool to the method recently proposed by Evans and Moshonov [M. Evans and H. Moshonov, Checking for prior-data conflict, Bayesian Anal. 1 (2006), pp. 893–914]. Both methods aim to assist the Bayesian analyst from the preliminary stage up to the posterior computation. Our criterion is defined as a ratio of Kullback–Leibler divergences; two of its main features are that it makes checking a hierarchical prior easy and that it can be used as a default calibration tool to obtain flat but proper priors in applications. Discrete and continuous distributions exemplify the approach, and an industrial case study in reliability, involving the Weibull distribution, is highlighted.

5.
In this paper we present a simulation study comparing different methods for estimating the prediction error rate in a discrimination problem. We consider the cross-validation, bootstrap and Bayesian bootstrap methods for this problem, while also extending both the simple and Bayesian bootstrap methods with smoothing techniques. We observe that the smoothing procedure leads to improvements in the estimation of the true error rate of the discrimination rule, especially in the case of the smooth Bayesian bootstrap estimator, whose reduction in MSE results from the high positive correlation between the true error rate and its estimates based on this method.
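The contrast between the ordinary and Bayesian bootstrap is easy to show: the former resamples cases, the latter reweights them with Dirichlet(1, ..., 1) weights. The sketch below estimates the error rate of a simple nearest-class-mean rule both ways on toy data, as a crude sketch of the estimators' mechanics; the classifier, data, and omission of the paper's smoothing step are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

def nearest_mean_error(Xtr, ytr, Xte, yte, w=None):
    """Misclassification rate of a (possibly weighted) nearest-class-mean rule."""
    w = np.ones(len(ytr)) if w is None else w
    mu0 = np.average(Xtr[ytr == 0], axis=0, weights=w[ytr == 0])
    mu1 = np.average(Xtr[ytr == 1], axis=0, weights=w[ytr == 1])
    pred = (np.linalg.norm(Xte - mu1, axis=1) < np.linalg.norm(Xte - mu0, axis=1)).astype(int)
    return np.mean(pred != yte)

# Toy two-class data
n = 60
X = np.vstack([rng.normal(0, 1, (n, 2)), rng.normal(1.2, 1, (n, 2))])
y = np.repeat([0, 1], n)

# Ordinary bootstrap resamples cases; Bayesian bootstrap draws Dirichlet weights.
boot, bayes = [], []
for _ in range(200):
    idx = rng.integers(0, len(y), len(y))
    boot.append(nearest_mean_error(X[idx], y[idx], X, y))
    w = rng.dirichlet(np.ones(len(y)))
    bayes.append(nearest_mean_error(X, y, X, y, w=w * len(y)))
print(np.mean(boot), np.mean(bayes))
```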

6.
Twenty-five years ago the use of Bayesian methods in pharmaceutical R&D was non-existent. Today that is no longer true. In this paper I describe my own personal journey along the road of discovery of Bayesian methods to their routine use in the pharmaceutical industry.

7.
Interval-censored data arise when a failure time, say T, cannot be observed directly but can only be determined to lie in an interval obtained from a series of inspection times. The frequentist approach to analysing interval-censored data has been developed for some time now. Owing to the unavailability of software, it is very common in biological, medical and reliability studies to simplify the interval-censoring structure of the data to the more standard right-censoring situation by imputing the midpoints of the censoring intervals. In this paper, we apply the Bayesian approach by employing the numerical approximation procedures of Lindley (1980) and of Tierney and Kadane (1986) when the survival data under consideration are interval-censored. The Bayesian approach to interval-censored data has barely been discussed in the literature; the essence of this study is to explore and promote Bayesian methods when the survival data being analysed are interval-censored. We consider only a parametric approach, assuming that the survival data follow a log-logistic distribution. We illustrate the proposed methods with two real data sets, and a simulation study is carried out to compare the performances of the methods.
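To make the contrast with midpoint imputation concrete, the sketch below maximizes the interval-censored log-logistic likelihood directly, using the sum of log{F(R) - F(L)} over the censoring intervals; a Bayesian treatment would add priors and, as in the paper, Lindley or Tierney-Kadane approximations. The data and starting values are toy assumptions.

```python
import numpy as np
from scipy.optimize import minimize

def loglogistic_cdf(t, alpha, beta):
    """Log-logistic CDF: F(t) = 1 / (1 + (t/alpha)^(-beta))."""
    return 1.0 / (1.0 + (t / alpha) ** (-beta))

def interval_censored_nll(params, L, R):
    """Negative log-likelihood for intervals (L, R]; parameters are on the
    log scale to keep alpha and beta positive."""
    alpha, beta = np.exp(params)
    p = loglogistic_cdf(R, alpha, beta) - loglogistic_cdf(L, alpha, beta)
    return -np.sum(np.log(np.clip(p, 1e-300, None)))

# Toy intervals; midpoint imputation would instead treat (L + R) / 2 as exact.
L = np.array([1.0, 2.0, 0.5, 3.0, 1.5])
R = np.array([2.0, 4.0, 1.5, 5.0, 3.0])
fit = minimize(interval_censored_nll, x0=np.log([2.0, 1.5]), args=(L, R))
print(np.exp(fit.x))   # MLE of (alpha, beta); a Bayesian version would add priors
```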

8.
In statistical process control applications, profile functions are considered an efficient way of representing the quality of products or processes. Classical and Bayesian frameworks are the two chief approaches to defining control-charting structures for profile monitoring. This study introduces novel Bayesian CUSUM control structures for profile monitoring. A comprehensive comparative study identifies that the proposed Bayesian CUSUM control charts under conjugate priors have better expected performance than competing methods. The implementation of Bayesian structures requires detailed information about the process parameters, which brings considerable benefits. In addition, a simulated example and a case study further justify the superiority of the proposed techniques.
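For reference, a plain two-sided tabular CUSUM is sketched below; in a Bayesian profile-monitoring chart the monitored statistics would be posterior-based summaries of the profile coefficients rather than raw standardized values. The reference value k and decision interval h are chosen arbitrarily here.

```python
import numpy as np

def cusum(z, k=0.5, h=5.0):
    """Two-sided tabular CUSUM on standardized statistics z.
    Returns the first index at which either cumulative sum crosses h, or -1."""
    cp = cm = 0.0
    for t, zt in enumerate(z):
        cp = max(0.0, cp + zt - k)   # upper-side sum
        cm = max(0.0, cm - zt - k)   # lower-side sum
        if cp > h or cm > h:
            return t
    return -1

rng = np.random.default_rng(7)
z = np.concatenate([rng.normal(0, 1, 50), rng.normal(1.0, 1, 50)])  # mean shift at t = 50
print(cusum(z))
```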

9.
Recent approaches to the statistical analysis of adverse event (AE) data in clinical trials have proposed the use of groupings of related AEs, such as by system organ class (SOC). These methods have opened up the possibility of scanning large numbers of AEs while controlling for multiple comparisons, making the comparative performance of the different methods, in terms of AE detection and error rates, of interest to investigators. We apply two Bayesian models and two procedures for controlling the false discovery rate (FDR), all of which use groupings of AEs, to real clinical trial safety data. We find that while the Bayesian models are appropriate for the full data set, the error-controlling methods only give results similar to the Bayesian methods when low-incidence AEs are removed. A simulation study is used to compare the relative performances of the methods over full trial data sets and over data sets with low-incidence AEs and SOCs removed. We find that while the removal of low-incidence AEs increases the power of the error-controlling procedures, the estimated power of the Bayesian methods remains relatively constant over all data sizes. Automatic removal of low-incidence AEs does, however, affect the error rates of all the methods, and a clinically guided approach to their removal is needed. Overall, we found the Bayesian approaches particularly useful for scanning the large amounts of AE data gathered.
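One of the simplest FDR-controlling comparators is the Benjamini-Hochberg step-up procedure; the grouped procedures evaluated in the paper are more elaborate, but BH conveys the mechanics. The sketch below is generic and assumes per-AE p-values are already available.

```python
import numpy as np

def benjamini_hochberg(pvals, q=0.10):
    """Indices of AEs flagged by the Benjamini-Hochberg step-up procedure at level q.
    SOC-grouped variants apply this kind of rule within or across groups."""
    p = np.asarray(pvals)
    order = np.argsort(p)
    m = len(p)
    thresh = q * np.arange(1, m + 1) / m
    passed = p[order] <= thresh
    if not passed.any():
        return np.array([], dtype=int)
    k = np.nonzero(passed)[0].max()        # largest i with p_(i) <= q * i / m
    return order[:k + 1]

print(benjamini_hochberg([0.001, 0.02, 0.04, 0.30, 0.80], q=0.10))
```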

10.
Biomarkers have the potential to improve our understanding of disease diagnosis and prognosis. Biomarker levels that fall below the assay detection limits (DLs), however, compromise the application of biomarkers in research and practice. Most existing methods for handling non-detects focus on the scenario in which the response variable is subject to the DL; only a few methods consider explanatory variables subject to DLs. We propose a Bayesian approach for generalized linear models with explanatory variables subject to lower, upper, or interval DLs. In simulation studies, we compared the proposed Bayesian approach with four commonly used methods in a logistic regression model whose explanatory variable measurements are subject to the DL. We also applied the Bayesian approach and the four other methods in a real study, in which a panel of cytokine biomarkers was examined for association with acute lung injury (ALI). We found that IL8 was associated with a moderate increase in the risk of ALI in the model based on the proposed Bayesian approach.
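The most common naive comparator for explanatory-variable DLs is substitution, e.g. replacing non-detects by DL/sqrt(2) before fitting the logistic model; the paper's Bayesian approach instead treats the censored values as latent. The sketch below shows only the substitution comparator on simulated data; all numbers are toy assumptions.

```python
import numpy as np
from scipy.optimize import minimize

# Toy setup: biomarker x is left-censored at a detection limit DL.
rng = np.random.default_rng(5)
n, DL = 300, 0.5
x = rng.lognormal(0.0, 1.0, n)
y = rng.binomial(1, 1 / (1 + np.exp(-(-1.0 + 0.8 * np.log(x)))))
x_obs = np.where(x < DL, DL / np.sqrt(2), x)          # substitute non-detects

def nll(beta):
    """Logistic-regression negative log-likelihood on the substituted data."""
    eta = beta[0] + beta[1] * np.log(x_obs)
    return np.sum(np.logaddexp(0.0, eta) - y * eta)

print(minimize(nll, x0=np.zeros(2)).x)   # naive estimates of (intercept, slope)
```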

11.
The concordance correlation coefficient (CCC) is one of the most popular scaled indices used to evaluate agreement. Most commonly, it is used under the assumption that the data are normally distributed, an assumption that does not apply to skewed data sets. While methods for estimating the CCC of skewed data sets have been introduced and studied, the Bayesian approach and its comparison with the previous methods have been lacking. In this study, we propose a Bayesian method for estimating the CCC of skewed data sets and compare it with the best method previously investigated. The proposed method has certain advantages: it tends to outperform the best method studied before when the variation of the data comes mainly from the random subject effect rather than from error, and it allows greater flexibility in application by enabling the incorporation of missing data, confounding covariates, and replications, which was not considered previously. The superiority of this new approach is demonstrated using simulation as well as real-life biomarker data sets from an electroencephalography clinical study. The implementation of the Bayesian method is accessible through the Comprehensive R Archive Network. Copyright © 2015 John Wiley & Sons, Ltd.
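Lin's (1989) sample CCC is a one-liner, which makes the agreement-versus-correlation distinction easy to see; the Bayesian estimator for skewed data proposed in the paper replaces this plug-in estimate with a model-based posterior. The sketch below shows only the classical plug-in version.

```python
import numpy as np

def ccc(x, y):
    """Sample concordance correlation coefficient (Lin, 1989):
    2 * s_xy / (s_x^2 + s_y^2 + (mean_x - mean_y)^2)."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    sxy = np.mean((x - x.mean()) * (y - y.mean()))
    return 2 * sxy / (x.var() + y.var() + (x.mean() - y.mean()) ** 2)

# Perfect agreement gives 1; a constant shift lowers CCC even though correlation stays 1.
x = np.array([1.0, 2.0, 3.0, 4.0])
print(ccc(x, x), ccc(x, x + 1.0))
```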

12.
Dealing with incomplete data is a pervasive problem in statistical surveys, and Bayesian networks have recently been used in missing-data imputation. In this research, we propose a new methodology for the multivariate imputation of missing data using discrete Bayesian networks and conditional Gaussian Bayesian networks. Results from imputing missing values in a coronary artery disease data set and a milk composition data set, as well as a simulation study based on the Cancer network of Neapolitan, are presented to demonstrate and compare the performance of three Bayesian-network-based imputation methods with those of multivariate imputation by chained equations (MICE) and the classical hot-deck imputation method. To assess the effect of the structure-learning algorithm on the performance of the Bayesian-network-based methods, two methods, the Peter–Clark (PC) algorithm and greedy search-and-score, have been applied. The Bayesian-network-based methods are: first, the method introduced by Di Zio et al. [Bayesian networks for imputation, J. R. Stat. Soc. Ser. A 167 (2004), 309–322], in which each missing item of a variable is imputed using the information in the parents of that variable; second, the method of Di Zio et al. [Multivariate techniques for imputation based on Bayesian networks, Neural Netw. World 15 (2005), 303–310], which uses the information in the Markov blanket of the variable to be imputed; and finally, our newly proposed method, which applies all the available knowledge about the variables of interest, i.e. the Markov blanket together with the parent set, to impute a missing item. Results indicate the high quality of our proposed method, especially in the presence of high missingness percentages and more connected networks. The new method is also shown to be more efficient than MICE for small sample sizes with high missing rates.
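The parent-set imputation of Di Zio et al. (2004) can be illustrated with a two-node toy network: the missing item is filled in with the modal value of its conditional distribution given the observed parents. The network, CPT values, and variable names below are invented for illustration; Markov-blanket and whole-network variants condition on more of the graph.

```python
# A toy discrete Bayesian network: Smoking -> Cancer, with one CPT.
cpt_cancer = {                      # P(Cancer | Smoking)
    "yes": {"yes": 0.25, "no": 0.75},
    "no":  {"yes": 0.05, "no": 0.95},
}

def impute_from_parents(parent_value, cpt):
    """Return the modal value of the child given its parents' values."""
    dist = cpt[parent_value]
    return max(dist, key=dist.get)

record = {"Smoking": "yes", "Cancer": None}          # missing item
record["Cancer"] = impute_from_parents(record["Smoking"], cpt_cancer)
print(record)
```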

13.
14.
The use of Bayesian methods to support pharmaceutical product development has grown in recent years. In clinical statistics, the drive to provide patients with faster access to medical treatments has led to a heightened focus by industry and regulatory authorities on innovative clinical trial designs, including those that apply Bayesian methods. Bayesian applications have also made advances in nonclinical statistics, but they have been embraced far more slowly there than in the clinical area. In this article, we explore some of the reasons for this slower rate of adoption. We also present the results of a survey conducted to understand the current state of Bayesian application in nonclinical areas and to identify areas of priority for the DIA/ASA-BIOP Nonclinical Bayesian Working Group. The survey explored current usage, hurdles, perceptions, and training needs for Bayesian methods among nonclinical statisticians. Based on the survey results, a set of recommendations is provided to help guide the future advancement of Bayesian applications in nonclinical pharmaceutical statistics.

15.
Survival data obtained from prevalent cohort study designs are often subject to length-biased sampling. Frequentist methods, including estimating-equation approaches and full likelihood methods, are available for assessing covariate effects on survival from such data. Bayesian methods allow a probability interpretation of the parameters of interest and can easily provide the predictive distribution for future observations while incorporating weak prior knowledge on the baseline hazard function, but Bayesian methods for analyzing length-biased data have been lacking. In this paper, we propose Bayesian methods for analyzing length-biased data under a proportional hazards model. The prior distribution for the cumulative hazard function is specified semiparametrically using I-splines. Bayesian conditional and full likelihood approaches are developed and applied to simulated and real data.
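Length bias enters the likelihood through f_LB(t) = t f(t) / E[T]. For an exponential baseline this gives a closed-form Gamma(2, rate) density, which the sketch below maximizes numerically; the paper's semiparametric I-spline prior on the cumulative hazard is far more general. The data here are toy values.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def lb_exponential_nll(rate, t):
    """Negative log-likelihood of length-biased exponential observations:
    f_LB(t) = t * f(t) / E[T] = rate^2 * t * exp(-rate * t)."""
    t = np.asarray(t, float)
    return -np.sum(2 * np.log(rate) + np.log(t) - rate * t)

t = np.array([2.1, 3.4, 1.8, 5.0, 2.7])              # toy prevalent-cohort durations
fit = minimize_scalar(lambda r: lb_exponential_nll(r, t), bounds=(1e-6, 10), method="bounded")
print(fit.x)   # MLE of the rate (equals 2 / mean(t)); a Bayesian analysis would add a prior
```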

16.
This paper gives an exposition of the use of the posterior likelihood ratio for testing point null hypotheses in a fully Bayesian framework. Connections between the frequentist P-value and the posterior distribution of the likelihood ratio are used to interpret and calibrate P-values in a Bayesian context, and examples are given to show the use of simple posterior simulation methods to provide Bayesian tests of common hypotheses.
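For a normal mean with known variance, the posterior distribution of the likelihood ratio against a point null is available by direct simulation, which is the kind of simple posterior simulation the paper describes. The prior, sample, and null value below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)

# Setup: x_1..x_n ~ N(theta, 1), point null theta0 = 0, prior theta ~ N(0, 10^2).
n, theta0, tau = 25, 0.0, 10.0
x = rng.normal(0.4, 1.0, n)
xbar = x.mean()

# Conjugate posterior for theta
post_var = 1.0 / (n + 1.0 / tau**2)
post_mean = post_var * n * xbar

# Posterior draws of log LR = log L(theta0) - log L(theta)
theta = rng.normal(post_mean, np.sqrt(post_var), 50_000)
log_lr = -0.5 * n * ((xbar - theta0) ** 2 - (xbar - theta) ** 2)
print(np.mean(log_lr > 0))   # posterior probability that the LR favours the null
```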

17.
In this article, we consider some problems of estimation and prediction when progressive Type-I interval-censored competing-risks data come from the proportional hazards family. The maximum likelihood estimators of the unknown parameters are obtained. Based on gamma priors, Lindley's approximation and importance sampling methods are applied to obtain Bayesian estimators under squared-error and linear-exponential loss functions. Several classical and Bayesian point predictors of the censored units are provided. Acceptance sampling plans based on given producer's and consumer's risks are also considered. Finally, a Monte Carlo simulation study is carried out to evaluate the performances of the different methods.

18.
This paper develops Bayesian analysis in the context of progressively Type-II censored data from the compound Rayleigh distribution. The maximum likelihood and Bayes estimates, along with the associated posterior risks, are derived for reliability performances under balanced loss functions, assuming continuous priors for the parameters of the distribution. A practical example is used to illustrate the estimation methods, and a simulation study has been carried out to compare the performance of the estimates. The study indicates that Bayesian estimation should be preferred over maximum likelihood estimation; in Bayesian estimation, the balanced general entropy loss function can be effectively employed for optimal decision-making.

19.
Many methods used in spatial statistics are computationally demanding, so the development of more computationally efficient methods has received attention. An important development is the integrated nested Laplace approximation (INLA) method, which carries out Bayesian analysis more efficiently. For geostatistical data, this method relies on the SPDE approach, which requires the creation of a mesh overlying the study area, and all the obtained results depend on this mesh. The impact of the mesh on inference and prediction is investigated through simulations. As there is no formal procedure to specify the mesh, we investigate guidelines for creating an optimal one.

20.
Bayesian methods have proved effective for quantile estimation, including for financial Value-at-Risk forecasting. Expected shortfall (ES) is a competing tail risk measure, favoured by the Basel Committee, that can be semi-parametrically estimated via asymmetric least squares. An asymmetric Gaussian density is proposed, allowing a likelihood to be developed that facilitates both pseudo-maximum-likelihood and Bayesian semi-parametric estimation and leads to forecasts of quantiles, expectiles and ES. Further, the conditional autoregressive expectile class of models is generalised to two fully nonlinear families. Adaptive Markov chain Monte Carlo sampling schemes are developed for the Bayesian estimation. The proposed models are favoured in an empirical study forecasting eight financial return series: evidence of more accurate ES forecasting, compared with a range of competing methods, is found, and the Bayesian estimated models tend to be more accurate. During a financial crisis period, however, most models perform badly, while two existing models perform best.
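The asymmetric-least-squares building block can be sketched in a few lines: the tau-expectile solves a weighted-mean fixed point, and the empirical tail mean below the matched quantile gives a crude ES proxy. This ignores the paper's dynamic (CARE-type) modelling and Bayesian estimation entirely; the toy returns are simulated.

```python
import numpy as np

def expectile(y, tau, n_iter=100):
    """Unconditional tau-expectile via iteratively reweighted least squares:
    minimise sum_i |tau - 1{y_i < q}| * (y_i - q)^2."""
    y = np.asarray(y, float)
    q = y.mean()
    for _ in range(n_iter):
        w = np.where(y < q, 1.0 - tau, tau)   # asymmetric squared-error weights
        q = np.sum(w * y) / np.sum(w)         # weighted-mean fixed point
    return q

rng = np.random.default_rng(11)
r = rng.standard_t(df=5, size=10_000) * 0.01          # toy daily returns
q = expectile(r, tau=0.01)
print(q, np.mean(r[r <= q]))   # expectile and the crude empirical ES proxy below it
```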
