Similar Literature
1.
In recent years, there has been considerable interest in regression models based on zero-inflated distributions. These models are commonly encountered in many disciplines, such as medicine, public health, and environmental sciences, among others. The zero-inflated Poisson (ZIP) model has typically been considered for these types of problems. However, the ZIP model can fail if the non-zero counts are overdispersed in relation to the Poisson distribution, in which case the zero-inflated negative binomial (ZINB) model may be more appropriate. In this paper, we present a Bayesian approach for fitting the ZINB regression model. This model assumes that an observed zero may come from a point mass distribution at zero or from the negative binomial model. The likelihood function is used not only to compute Bayesian model selection measures, but also to develop Bayesian case-deletion influence diagnostics based on q-divergence measures. The approach can be easily implemented using standard Bayesian software, such as WinBUGS. The performance of the proposed method is evaluated with a simulation study. Further, a real data set is analyzed, where we show that ZINB regression models seem to fit the data better than their Poisson counterpart.
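As a sketch of the mixture structure described above (function names and parameter values are illustrative, not taken from the paper), the ZINB probability mass function mixes a point mass at zero with a negative binomial component:

```python
import math

def nb_pmf(y, mu, r):
    # Negative binomial pmf parameterized by mean mu and dispersion r
    return math.exp(
        math.lgamma(y + r) - math.lgamma(r) - math.lgamma(y + 1)
        + r * math.log(r / (r + mu)) + y * math.log(mu / (r + mu))
    )

def zinb_pmf(y, pi, mu, r):
    # An observed zero comes either from the point mass at zero
    # (probability pi) or from the negative binomial component
    nb_part = (1.0 - pi) * nb_pmf(y, mu, r)
    return pi + nb_part if y == 0 else nb_part
```

Overdispersion relative to the ZIP model corresponds to finite dispersion r; as r grows large the negative binomial component approaches the Poisson.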

2.
In this paper we propose a general cure rate aging model. Our approach allows for different underlying activation mechanisms that lead to the event of interest. The number of competing causes of the event of interest is assumed to follow a logarithmic distribution. The model is parameterized in terms of the cured fraction, which is then linked to covariates. We explore the use of Markov chain Monte Carlo methods to develop a Bayesian analysis for the proposed model. Moreover, we discuss model selection criteria for comparing the fitted models, and develop case-deletion influence diagnostics for the joint posterior distribution based on the ψ-divergence, which includes several divergence measures as particular cases, such as the Kullback–Leibler (K-L) divergence, J-distance, L1 norm, and χ²-divergence. Simulation studies are performed and experimental results are illustrated on a real malignant melanoma data set.

3.
As the treatments of cancer progress, a certain number of cancers are curable if diagnosed early. In population-based cancer survival studies, cure is said to occur when the mortality rate of the cancer patients returns to the same level as that expected for the general cancer-free population. Estimates of the cure fraction are of interest to both cancer patients and health policy makers. Mixture cure models have been widely used because the model is easy to interpret, separating the patients into two distinct groups. Usually, parametric models are assumed for the latent distribution of the uncured patients. The estimate of the cure fraction from the mixture cure model may be sensitive to misspecification of this latent distribution. We propose a Bayesian approach to the mixture cure model for population-based cancer survival data, which can be extended to county-level cancer survival data. Instead of modeling the latent distribution by a fixed parametric distribution, we use a finite mixture of the union of the lognormal, loglogistic, and Weibull distributions. The parameters are estimated using the Markov chain Monte Carlo method. A simulation study shows that the Bayesian method using a finite mixture latent distribution provides robust inference of parameter estimates. The proposed Bayesian method is applied to relative survival data for colon cancer patients from the Surveillance, Epidemiology, and End Results (SEER) Program to estimate the cure fractions. The Canadian Journal of Statistics 40: 40–54; 2012 © 2012 Statistical Society of Canada
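A minimal sketch of the mixture cure survival function, using a single Weibull latent distribution rather than the finite mixture proposed in the paper (all names and parameters are illustrative):

```python
import math

def weibull_surv(t, shape, scale):
    # Survival function of the latent failure time for uncured patients
    return math.exp(-((t / scale) ** shape))

def mixture_cure_surv(t, cure_frac, shape, scale):
    # Cured patients never experience the event, so the population
    # survival curve levels off at the cure fraction as t grows
    return cure_frac + (1.0 - cure_frac) * weibull_surv(t, shape, scale)
```

The long-time plateau of the population survival curve is exactly the cure fraction, which is what makes the mixture formulation easy to interpret.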

4.
Linear mixed models were developed to handle clustered data and have been a topic of increasing interest in statistics for the past 50 years. Generally, normality (or symmetry) of the random effects is a common assumption in linear mixed models, but it may sometimes be unrealistic, obscuring important features of among-subjects variation. In this article, we utilize skew-normal/independent distributions as a tool for robust modeling of linear mixed models under a Bayesian paradigm. The skew-normal/independent distributions are an attractive class of asymmetric heavy-tailed distributions that includes the skew-normal, skew-t, skew-slash, and skew-contaminated normal distributions as special cases, providing an appealing robust alternative to the routine use of symmetric distributions in these types of models. The methods developed are illustrated using a real data set from the Framingham cholesterol study.

5.
Whilst innovative Bayesian approaches are increasingly used in clinical studies, in the preclinical area Bayesian methods appear to be rarely used in the reporting of pharmacology data. This is particularly surprising in the context of regularly repeated in vivo studies where there is a considerable amount of data from historical control groups, which has potential value. This paper describes our experience with introducing Bayesian analysis for such studies using a Bayesian meta-analytic predictive approach. This leads naturally either to an informative prior for a control group as part of a full Bayesian analysis of the next study or using a predictive distribution to replace a control group entirely. We use quality control charts to illustrate study-to-study variation to the scientists and describe informative priors in terms of their approximate effective numbers of animals. We describe two case studies of animal models: the lipopolysaccharide-induced cytokine release model used in inflammation and the novel object recognition model used to screen cognitive enhancers, both of which show the advantage of a Bayesian approach over the standard frequentist analysis. We conclude that using Bayesian methods in stable repeated in vivo studies can result in a more effective use of animals, either by reducing the total number of animals used or by increasing the precision of key treatment differences. This will lead to clearer results and supports the "3Rs initiative" to Refine, Reduce and Replace animals in research. Copyright © 2016 John Wiley & Sons, Ltd.
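One way to communicate such a prior, as the abstract mentions, is through its approximate effective number of animals; a conjugate-normal sketch (the normal approximation and all names here are assumptions for illustration, not the paper's exact method):

```python
def effective_n(prior_sd, unit_sd):
    # An informative prior on a control-group mean is "worth" roughly as
    # many animals as would give the same standard error:
    # prior_sd = unit_sd / sqrt(n_eff)  =>  n_eff = (unit_sd / prior_sd)^2
    return (unit_sd / prior_sd) ** 2

def posterior_mean(prior_mean, prior_sd, data_mean, unit_sd, n):
    # Conjugate-normal update: precision-weighted average of the
    # historical (meta-analytic predictive) prior and the current
    # control group of n animals
    w_prior = 1.0 / prior_sd ** 2
    w_data = n / unit_sd ** 2
    return (w_prior * prior_mean + w_data * data_mean) / (w_prior + w_data)
```

When the prior is vague (large prior_sd) the update collapses to the observed control mean, and when it is as precise as the current group the two are weighted equally.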

6.
After initiation of treatment, HIV viral load undergoes multiphasic changes, which indicates that the viral decay rate is a time-varying process. Mixed-effects models with different time-varying decay rate functions have been proposed in the literature. However, there are two unresolved critical issues: (i) it is not clear which model is more appropriate for practical use, and (ii) the model random errors are commonly assumed to follow a normal distribution, which may be unrealistic and can obscure important features of within- and among-subject variation. Because asymmetry of HIV viral load data is still noticeable even after transformation, it is important to use a more general distribution family that allows the unrealistic normal assumption to be relaxed. We developed skew-elliptical (SE) Bayesian mixed-effects models by assuming that the model random errors have an SE distribution. We compared the performance of five SE models that have different time-varying decay rate functions. For each model, we also contrasted the performance under different model random error assumptions: normal, Student-t, skew-normal, or skew-t distribution. Two AIDS clinical trial datasets were used to illustrate the proposed models and methods. The results indicate that the model with a time-varying viral decay rate that has two exponential components is preferred. Among the four distribution assumptions, the skew-t and skew-normal models provided a better fit to the data than the normal or Student-t models, suggesting that it is important to assume a model with a skewed distribution in order to achieve reasonable results when the data exhibit skewness.
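The preferred decay model with two exponential components can be sketched as follows (parameter names are illustrative; the models in the paper are mixed-effects formulations on transformed viral load, not this bare curve):

```python
import math

def viral_load(t, p1, lam1, p2, lam2):
    # Biexponential decay: a fast first phase (rate lam1) plus a slower
    # second phase (rate lam2) after treatment initiation
    return math.exp(p1 - lam1 * t) + math.exp(p2 - lam2 * t)
```

With lam1 much larger than lam2, the first term dominates the initial rapid drop and the second term governs the slower long-run decline, producing the multiphasic shape the abstract describes.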

7.
In event history analysis, the problem of modeling two interdependent processes is still not completely solved. In a frequentist framework, there are two main approaches: the causal approach and the system approach. The recent growing interest in Bayesian statistics has produced some interesting work on survival models and event history analysis from a Bayesian perspective. In this work we present a possible solution for the analysis of dynamic interdependence from a Bayesian perspective in a graphical duration model framework, using marked point processes. The main results from the Bayesian approach, and a comparison with the frequentist one, are illustrated on a real example: the analysis of the dynamic relationship between fertility and female employment.

8.
In this paper we propose a new lifetime model for multivariate survival data with a surviving fraction. We develop this model assuming that there are m types of unobservable competing risks, where each risk is related to the time of occurrence of an event of interest. We explore the use of Markov chain Monte Carlo methods to develop a Bayesian analysis for the proposed model. We also perform a simulation study in order to analyse the frequentist coverage probabilities of credible intervals derived from the posterior distributions. Our modelling is illustrated through a real data set.

9.

Meta-analysis refers to quantitative methods for combining results from independent studies in order to draw overall conclusions. Hierarchical models, including selection models, are introduced and shown to be useful in such Bayesian meta-analysis. Semiparametric hierarchical models are proposed using the Dirichlet process prior. This rich class of models combines the information of independent studies, allowing investigation of variability both between and within studies, as well as of the weight function. Here we investigate the sensitivity of results to unobserved studies by considering a hierarchical selection model with an unknown weight function, and use Markov chain Monte Carlo methods to develop inference for the parameters of interest. This model is applied to a meta-analysis of twelve studies comparing the effectiveness of two different types of fluoride in preventing cavities. A clinically informative prior is assumed. Summaries and plots of model parameters are analyzed to address questions of interest.
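The Dirichlet process prior mentioned above is often represented through its stick-breaking construction; a truncated sketch (an assumed illustration of the prior, not the paper's sampler):

```python
import random

def stick_breaking_weights(alpha, n, rng):
    # Truncated stick-breaking construction of Dirichlet process weights:
    # v_k ~ Beta(1, alpha), w_k = v_k * prod_{j<k} (1 - v_j).
    # Smaller alpha concentrates mass on fewer components.
    weights, remaining = [], 1.0
    for _ in range(n):
        v = rng.betavariate(1.0, alpha)
        weights.append(v * remaining)
        remaining *= (1.0 - v)
    return weights
```

The leftover stick length after n breaks is the truncation error; for moderate alpha it vanishes quickly, which is why finite truncations are usable in MCMC.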

10.
In this article, Bayesian inference for the half-normal and half-t distributions using uninformative priors is considered. It is shown that exact Bayesian inference can be undertaken for the half-normal distribution without the need for Gibbs sampling. Simulation is then used to compare the sampling properties of Bayesian point and interval estimators with those of their maximum likelihood based counterparts. Inference for the half-t distribution based on the use of Gibbs sampling is outlined, and an approach to model comparison based on the use of Bayes factors is discussed. The fitting of the half-normal and half-t models is illustrated using real data on the body fat measurements of elite athletes.

11.
As is the case in many studies, the data collected are limited and an exact value is recorded only if it falls within an interval range. Hence, the responses can be left, interval, or right censored. Linear (and nonlinear) regression models are routinely used to analyze these types of data and are based on normality assumptions for the error terms. However, those analyses might not provide robust inference when the normality assumptions are questionable. In this article, we develop a Bayesian framework for censored linear regression models by replacing the Gaussian assumptions for the random errors with scale mixtures of normal (SMN) distributions. The SMN is an attractive class of symmetric heavy-tailed densities that includes the normal, Student-t, Pearson type VII, slash, and contaminated normal distributions as special cases. Using a Bayesian paradigm, an efficient Markov chain Monte Carlo algorithm is introduced to carry out posterior inference. A new hierarchical prior distribution is suggested for the degrees of freedom parameter in the Student-t distribution. The likelihood function is used not only to compute some Bayesian model selection measures but also to develop Bayesian case-deletion influence diagnostics based on the q-divergence measure. The proposed Bayesian methods are implemented in the R package BayesCR. The newly developed procedures are illustrated with applications using real and simulated data.
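The scale-mixture representation underlying this class can be checked numerically: integrating a normal density over a Gamma(ν/2, ν/2) mixing distribution on the precision recovers the Student-t density. This is a self-contained sketch of that identity (the quadrature grid is an arbitrary choice, not anything from the paper):

```python
import math

def normal_pdf(x, sd):
    return math.exp(-0.5 * (x / sd) ** 2) / (sd * math.sqrt(2.0 * math.pi))

def gamma_pdf(u, a, b):
    # Gamma density with shape a and rate b
    return math.exp(a * math.log(b) - math.lgamma(a)
                    + (a - 1.0) * math.log(u) - b * u)

def t_pdf_via_mixture(x, nu, n_grid=20000, u_max=50.0):
    # Student-t density recovered by midpoint-rule integration of the
    # normal over the Gamma(nu/2, nu/2) mixing distribution on the
    # precision u (so the conditional sd is 1/sqrt(u))
    h = u_max / n_grid
    total = 0.0
    for i in range(1, n_grid + 1):
        u = (i - 0.5) * h
        total += normal_pdf(x, 1.0 / math.sqrt(u)) * gamma_pdf(u, nu / 2.0, nu / 2.0) * h
    return total

def t_pdf_exact(x, nu):
    # Closed-form Student-t density for comparison
    c = math.exp(math.lgamma((nu + 1.0) / 2.0) - math.lgamma(nu / 2.0)) \
        / math.sqrt(nu * math.pi)
    return c * (1.0 + x * x / nu) ** (-(nu + 1.0) / 2.0)
```

The same augmentation (conditioning on the latent scale u) is what makes Gibbs-type MCMC straightforward for SMN error models.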

12.
This paper introduces a Bayesian robust errors-in-variables regression model in which the dependent variable is censored. We extend previous works by assuming a multivariate t distribution for jointly modelling the behaviour of the errors and the latent explanatory variable. Inference is done under the Bayesian paradigm. We use a data augmentation approach and develop a Markov chain Monte Carlo algorithm to sample from the posterior distributions. We run a Monte Carlo study to evaluate the efficiency of the posterior estimators in different settings. We compare the proposed model to three other models previously discussed in the literature. As a by-product we also provide a Bayesian analysis of the t-tobit model. We fit all four models to analyse the 2001 Medical Expenditure Panel Survey data.

13.
The multinomial logistic regression model (MLRM) can be interpreted as a natural extension of the binomial model with logit link function to situations where the response variable can have three or more possible outcomes. In addition, when the categories of the response variable are nominal, the MLRM can be expressed in terms of two or more logistic models and analyzed in both frequentist and Bayesian approaches. However, few discussions of post-modeling diagnostics for categorical data models are found in the literature, and those that exist mainly use Bayesian inference. The objective of this work is to present classical and Bayesian diagnostic measures for categorical data models. These measures are applied to a dataset (status) of patients undergoing kidney transplantation.
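The MLRM's representation in terms of two or more logistic models can be sketched as a baseline-category logit (the coefficient layout and names here are hypothetical):

```python
import math

def mlr_probs(x, betas):
    # Baseline-category multinomial logit: category 0 is the baseline
    # with score 0; each other category has its own coefficient vector.
    # The max-subtraction is a standard log-sum-exp stabilization.
    scores = [0.0] + [sum(b * xi for b, xi in zip(beta, x)) for beta in betas]
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    z = sum(exps)
    return [e / z for e in exps]
```

With K categories this is exactly K - 1 stacked logistic models, which is why binary-logistic diagnostics generalize to the multinomial case.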

14.
In recent years, risk-based allocation models, represented by risk parity, have become widely popular. A key feature of these models is that they discard return information. Return-based allocation models, represented by the mean-variance model, hold instead that returns matter and implicitly assume that return forecasts are accurate. Both practices are problematic. Given that return predictability is supported by a large body of empirical research, discarding returns entirely means that, for risk-based allocation models, useful information about returns goes unused. For return-based allocation models, ignoring parameter estimation error, together with sensitivity to input parameters, largely offsets the benefit of exploiting return information. Whether returns matter, and how they should be used, has therefore become a major question in asset allocation research. To address it, this paper proposes a Black-Litterman model that takes risk parity as the allocation benchmark and Bayesian VAR return forecasts as the subjective views (the Bayesian BL model). Using quarterly US stock and bond data from 1952-2016, we compare the Bayesian BL model with existing allocation models. The empirical results show that, relative to return-based allocation models, the Bayesian BL model reduces portfolio risk; relative to risk-based allocation models, it enhances portfolio returns. These properties arise because the model both exploits the useful information carried by return predictability and retains the risk-control advantage of risk-based allocation models. Combining return enhancement with risk control, it offers a promising new approach to asset allocation.
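For illustration only: in the special case of uncorrelated assets, the risk-parity benchmark reduces to inverse-volatility weighting, so that each asset contributes equal risk (this toy sketch is not the paper's Bayesian BL model, which blends such a benchmark with Bayesian VAR views):

```python
def inverse_vol_weights(vols):
    # Naive risk parity under zero correlation: weight each asset
    # inversely to its volatility so risk contributions are equal
    inv = [1.0 / v for v in vols]
    s = sum(inv)
    return [w / s for w in inv]
```

For example, a 20%-volatility stock index and a 5%-volatility bond index get weights of 0.2 and 0.8, which is why risk-parity portfolios are typically bond-heavy.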

15.
Approximate Bayesian Inference for Survival Models
Bayesian analysis of time-to-event data, usually called survival analysis, has received increasing attention in recent years. In Cox-type models it allows the use of information from the full likelihood instead of a partial likelihood, so that the baseline hazard function and the model parameters can be jointly estimated. In general, Bayesian methods permit full and exact posterior inference for any parameter or predictive quantity of interest. On the other hand, Bayesian inference often relies on Markov chain Monte Carlo (MCMC) techniques which, from the user's point of view, may appear slow at delivering answers. In this article, we show how a new inferential tool named integrated nested Laplace approximations can be adapted and applied to many survival models, making Bayesian analysis both fast and accurate without having to rely on MCMC-based inference.

16.
In the reliability analysis of mechanical repairable equipment subject to reliability deterioration with operating time, two forms of the non-homogeneous Poisson process, namely the Power-Law (PL) and the Log-Linear (LL) model, have found general acceptance in the literature. Inferential procedures conditioned on the assumption of the PL or LL model underestimate the overall uncertainty about a quantity of interest, because the PL and LL models can provide different estimates of that quantity even when both of them adequately fit the observed data. In this paper, a composite estimation procedure, which uses the PL and LL models as competing models, is proposed in the framework of Bayesian statistics, thus allowing the uncertainty involved in model selection to be accounted for. A model-free approach is then proposed for incorporating technical information on the failure mechanism into the inferential procedure. This approach, which is based on two model-free quantities defined irrespective of the functional form of the failure model, prevents the prior information on the failure mechanism from improperly introducing prior probabilities on the adequacy of each model to fit the observed data. Finally, numerical applications are provided to illustrate the proposed procedures.
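The mean cumulative number of failures under the two competing models can be sketched as follows (standard parameterizations assumed here; the paper's composite procedure averages over both models rather than picking one):

```python
import math

def pl_mean_failures(t, alpha, beta):
    # Power-Law NHPP: expected number of failures by time t is (t/alpha)^beta;
    # beta > 1 indicates deterioration, beta < 1 improvement
    return (t / alpha) ** beta

def ll_mean_failures(t, a, b):
    # Log-Linear NHPP: intensity exp(a + b*t) integrated over [0, t]
    return math.exp(a) * (math.exp(b * t) - 1.0) / b
```

Both mean functions can fit the same failure record closely yet extrapolate very differently, which is precisely the model-selection uncertainty the composite procedure is designed to capture.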

17.
This paper deals with the analysis of multivariate survival data from a Bayesian perspective using Markov chain Monte Carlo methods. The Metropolis algorithm, along with the Gibbs sampler, is used to calculate some of the marginal posterior distributions. A multivariate survival model is proposed, since survival times within the same group are correlated as a consequence of a frailty random block effect. The conditional proportional-hazards model of Clayton and Cuzick is used with a martingale structured prior process (Arjas and Gasbarra) for the discretized baseline hazard. Besides the calculation of the marginal posterior distributions of the parameters of interest, this paper presents some Bayesian EDA diagnostic techniques to assess model adequacy. The methodology is exemplified with kidney infection data, where the times to infection within the same patient are expected to be correlated.

18.
Short-term forecasts of air pollution levels in big cities are now reported in newspapers and other media outlets. Studies indicate that even short-term exposure to high levels of an air pollutant called atmospheric particulate matter can lead to long-term health effects. Data are typically observed at fixed monitoring stations throughout a study region of interest at different time points. Statistical spatiotemporal models are appropriate for modelling these data. We consider short-term forecasting of these spatiotemporal processes by using a Bayesian kriged Kalman filtering model. The spatial prediction surface of the model is built by using the well-known method of kriging for optimum spatial prediction, and the temporal effects are analysed by using the models underlying the Kalman filtering method. The full Bayesian model is implemented by using Markov chain Monte Carlo techniques, which enable us to obtain the optimal Bayesian forecasts in time and space. A new cross-validation method based on the Mahalanobis distance between the forecasts and observed data is also developed to assess the forecasting performance of the model implemented.

19.
We propose a mixture model that combines a discrete-time survival model for analyzing the correlated times between recurrent events, e.g. births, with a logistic regression model for the probability of never experiencing the event of interest, i.e., being a long-term survivor. The proposed survival model incorporates both observed and unobserved heterogeneity in the probability of experiencing the event of interest. We use Gibbs sampling for the fitting of such mixture models, which leads to a computationally intensive solution to the problem of fitting survival models for multiple event time data with long-term survivors. We illustrate our Bayesian approach through an analysis of Hutterite birth histories.
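The mixture's population survival function can be sketched with discrete-time hazards (names and values illustrative; the paper additionally links both parts to covariates and random effects):

```python
def surv_prob(hazards):
    # Discrete-time survival: probability of escaping the event in
    # every interval, given per-interval hazards
    p = 1.0
    for h in hazards:
        p *= (1.0 - h)
    return p

def mixture_surv(hazards, p_long_term):
    # Long-term survivors never experience the event; the rest face
    # the discrete-time hazards interval by interval
    return p_long_term + (1.0 - p_long_term) * surv_prob(hazards)
```

As the number of intervals grows, the population survival probability approaches p_long_term, the logistic-regression part of the mixture.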

20.
The problem of statistical calibration of a measuring instrument can be framed both in a statistical context and in an engineering context. In the first, the problem is dealt with by distinguishing between the 'classical' approach and the 'inverse' regression approach. Both of these are static models and are used to estimate exact measurements from measurements that are affected by error. In the engineering context, the variables of interest are treated as evolving over time, with each measurement tied to the time at which it is observed. The Bayesian time series analysis method of Dynamic Linear Models can be used to monitor the evolution of the measures, thus introducing a dynamic approach to statistical calibration. The research presented employs this new approach to performing statistical calibration. A simulation study in the context of microwave radiometry is conducted that compares the dynamic model to traditional static frequentist and Bayesian approaches. The focus of the study is to understand how well the dynamic statistical calibration method performs under various signal-to-noise ratios, r.
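As a static baseline for comparison, the 'classical' approach fits the regression of the error-prone measurement on the known reference value and then inverts the fitted line (a minimal sketch with hypothetical names; the dynamic linear model replaces this fixed line with time-evolving states):

```python
def fit_line(xs, ys):
    # Ordinary least squares fit of y = a + b*x
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return my - b * mx, b

def classical_estimate(xs, ys, y_new):
    # Classical calibration: regress measurement on reference value,
    # then invert the line to estimate the unknown reference for y_new
    a, b = fit_line(xs, ys)
    return (y_new - a) / b
```

The 'inverse' approach would instead regress x on y directly; the two estimators differ unless the fit is exact, which is one source of the debate the abstract refers to.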


Copyright©北京勤云科技发展有限公司  京ICP备09084417号