Similar Documents
20 similar documents found (search time: 281 ms)
1.
We extend the standard multivariate mixed model by incorporating a smooth time effect and relaxing distributional assumptions. We propose a semiparametric Bayesian approach to multivariate longitudinal data using a mixture of Polya trees prior distribution. Usually, the distribution of random effects in a longitudinal data model is assumed to be Gaussian. However, the normality assumption may be suspect, particularly if the estimated longitudinal trajectory parameters exhibit multimodality and skewness. In this paper we propose a mixture of Polya trees prior density to address the limitations of the parametric random effects distribution. We illustrate the methodology by analyzing data from a recent HIV-AIDS study.

2.
Summary. The paper focuses on a Bayesian treatment of measurement error problems and on the question of the specification of the prior distribution of the unknown covariates. It presents a flexible semiparametric model for this distribution based on a mixture of normal distributions with an unknown number of components. Implementation of this prior model as part of a full Bayesian analysis of measurement error problems is described in classical set-ups that are encountered in epidemiological studies: logistic regression between unknown covariates and outcome, with a normal or log-normal error model and a validation group. The feasibility of this combined model is tested and its performance is demonstrated in a simulation study that includes an assessment of the influence of misspecification of the prior distribution of the unknown covariates and a comparison with the semiparametric maximum likelihood method of Roeder, Carroll and Lindsay. Finally, the methodology is illustrated on a data set on coronary heart disease and cholesterol levels in blood.

3.
We propose a method for the analysis of a spatial point pattern, which is assumed to arise as a set of observations from a spatial nonhomogeneous Poisson process. The spatial point pattern is observed in a bounded region, which, for most applications, is taken to be a rectangle in the space where the process is defined. The method is based on modeling a density function, defined on this bounded region, that is directly related with the intensity function of the Poisson process. We develop a flexible nonparametric mixture model for this density using a bivariate Beta distribution for the mixture kernel and a Dirichlet process prior for the mixing distribution. Using posterior simulation methods, we obtain full inference for the intensity function and any other functional of the process that might be of interest. We discuss applications to problems where inference for clustering in the spatial point pattern is of interest. Moreover, we consider applications of the methodology to extreme value analysis problems. We illustrate the modeling approach with three previously published data sets. Two of the data sets are from forestry and consist of locations of trees. The third data set consists of extremes from the Dow Jones index over a period of 1303 days.

4.
Typical joint modeling of longitudinal measurements and time-to-event data assumes that the two models share a common set of random effects with a normal distribution assumption. However, sometimes the underlying population from which the sample is extracted is heterogeneous, and detecting homogeneous subsamples of it is an important scientific question. In this paper, a finite mixture of normal distributions for the shared random effects is proposed to account for heterogeneity in the population. To detect whether unobserved heterogeneity exists, we use a simple graphical exploratory diagnostic tool proposed by Verbeke and Molenberghs [34] to assess whether the traditional normality assumption for the random effects in the mixed model is adequate. In the joint modeling setting, in the case of evidence against normality (homogeneity), a finite mixture of normals is used for the shared random-effects distribution. A Bayesian MCMC procedure is developed for parameter estimation and inference. The methodology is illustrated using simulation studies. The proposed approach is also applied to a real HIV data set; under the heterogeneous joint model, the individuals are classified into two groups: a high-risk group and a moderate-risk group.

5.
In this article, we propose a denoising methodology in the wavelet domain based on a Bayesian hierarchical model using a double Weibull prior. We propose two estimators, one based on the posterior mean (Double Weibull Wavelet Shrinker, DWWS) and the other based on the larger posterior mode (DWWS-LPM), and show how to calculate them efficiently. Traditionally, mixture priors have been used for modeling sparse wavelet coefficients. The interesting feature of this article is the use of a non-mixture prior. We show that the methodology provides good denoising performance, comparable even to state-of-the-art methods that use mixture priors and empirical Bayes settings of hyperparameters, which is demonstrated by extensive simulations on standard test functions. An application to a real-world dataset is also considered.

6.
Conditional Prior Proposals in Dynamic Models
ABSTRACT. Dynamic models extend state space models to non-normal observations. This paper suggests a specific hybrid Metropolis–Hastings algorithm as a simple device for Bayesian inference via Markov chain Monte Carlo in dynamic models. Hastings proposals from the (conditional) prior distribution of the unknown, time-varying parameters are used to update the corresponding full conditional distributions. It is shown through simulated examples that the methodology has optimal performance in situations where the prior is relatively strong compared to the likelihood. Typical examples include smoothing priors for categorical data. A specific blocking strategy is proposed to ensure good mixing and convergence properties of the simulated Markov chain. It is also shown that the methodology is easily extended to robust transition models using mixtures of normals. The applicability is illustrated with an analysis of a binomial and a binary time series, known in the literature.
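The key simplification in this scheme is that when the Hastings proposal is drawn from the (conditional) prior, the prior and proposal densities cancel in the acceptance ratio, leaving a pure likelihood ratio. A minimal single-parameter sketch of that idea (assuming a N(0, 1) prior and a binomial likelihood; this is an illustration of the cancellation, not the paper's blocked dynamic-model sampler):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setting: one parameter theta with a N(0, 1) prior (a stand-in for
# the conditional prior in the dynamic model) and an observation
# y ~ Bin(n, logistic(theta)).
y, n = 7, 10

def loglik(theta):
    p = 1.0 / (1.0 + np.exp(-theta))
    return y * np.log(p) + (n - y) * np.log(1.0 - p)

theta = 0.0
draws = []
for _ in range(5000):
    # Proposal drawn from the prior: prior and proposal densities cancel
    # in the Metropolis-Hastings ratio, leaving a pure likelihood ratio.
    prop = rng.normal(0.0, 1.0)
    if np.log(rng.uniform()) < loglik(prop) - loglik(theta):
        theta = prop
    draws.append(theta)

print(round(float(np.mean(draws[1000:])), 2))
```

As the abstract notes, this proposal works well exactly when the prior is strong relative to the likelihood, so that prior draws land in regions of non-negligible posterior mass.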

7.
The problem of estimating a Poisson mean is considered using incomplete prior information. The user is only able to assess two fractiles of the prior distribution. A class of mixture distributions is constructed to model this prior information; variation within this class primarily occurs in the tail region where little prior information exists. The posterior analysis using the mixture class is attractive computationally and compares favorably with the conjugate posterior analysis.
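To illustrate the general idea of fitting a prior to two elicited fractiles (the mixture construction itself is specific to the paper), one can numerically match a conjugate gamma prior to a median and a 90th percentile; the fractile values below are purely illustrative:

```python
from scipy.stats import gamma
from scipy.optimize import brentq

# Hypothetical elicited fractiles: the user believes the Poisson mean
# lambda has median 4 and 90th percentile 8.
q50, q90 = 4.0, 8.0

# Find a gamma(shape=a, rate=b) prior matching both fractiles: for a
# fixed shape, the rate that hits the median is b = gamma.ppf(0.5, a) / q50,
# so solve for the shape that then hits the 90th percentile as well.
def gap(a):
    b = gamma.ppf(0.5, a) / q50
    return gamma.ppf(0.9, a) / b - q90

a = brentq(gap, 0.1, 50.0)
b = gamma.ppf(0.5, a) / q50

# Conjugate update with observed Poisson counts.
counts = [3, 5, 4, 6]
post_mean = (a + sum(counts)) / (b + len(counts))
print(round(post_mean, 2))
```

The paper's point is that a single conjugate prior pins down the tail behaviour arbitrarily; its mixture class instead lets the tail vary where no fractiles were assessed.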

8.
The authors propose methods for Bayesian inference for generalized linear models with missing covariate data. They specify a parametric distribution for the covariates that is written as a sequence of one‐dimensional conditional distributions. They propose an informative class of joint prior distributions for the regression coefficients and the parameters arising from the covariate distributions. They examine the properties of the proposed prior and resulting posterior distributions. They also present a Bayesian criterion for comparing various models, and a calibration is derived for it. A detailed simulation is conducted and two real data sets are examined to demonstrate the methodology.

9.
One of the fundamental issues in analyzing microarray data is to determine which genes are expressed and which ones are not for a given group of subjects. In datasets where many genes are expressed and many are not expressed (i.e., underexpressed), a bimodal distribution for the gene expression levels often results, where one mode of the distribution represents the expressed genes and the other mode represents the underexpressed genes. To model this bimodality, we propose a new class of mixture models that utilize a random threshold value for accommodating bimodality in the gene expression distribution. Theoretical properties of the proposed model are carefully examined. We use this new model to examine the problem of differential gene expression between two groups of subjects, develop prior distributions, and derive a new criterion for determining which genes are differentially expressed between the two groups. Prior elicitation is carried out using empirical Bayes methodology in order to estimate the threshold value as well as elicit the hyperparameters for the two component mixture model. The new gene selection criterion is demonstrated via several simulations to have excellent false positive rate and false negative rate properties. A gastric cancer dataset is used to motivate and illustrate the proposed methodology.

10.
Assessment of efficacy in important subgroups – such as those defined by sex, age, race and region – in confirmatory trials is typically performed using separate analysis of the specific subgroup. This ignores relevant information from the complementary subgroup. Bayesian dynamic borrowing uses an informative prior based on analysis of the complementary subgroup and a weak prior distribution centred on a mean of zero to construct a robust mixture prior. This combination of priors allows for dynamic borrowing of prior information; the analysis learns how much of the complementary subgroup prior information to borrow based on the consistency between the subgroup of interest and the complementary subgroup. A tipping point analysis can be carried out to identify how much prior weight needs to be placed on the complementary subgroup component of the robust mixture prior to establish efficacy in the subgroup of interest. An attractive feature of the tipping point analysis is that it enables the evidence from the source subgroup, the evidence from the target subgroup, and the combined evidence to be displayed alongside each other. This method is illustrated with an example trial in severe asthma where efficacy in the adolescent subgroup was assessed using a mixture prior combining an informative prior from the adult data in the same trial with a non-informative prior.
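A minimal normal-normal sketch of the "dynamic" part of the borrowing, using illustrative numbers rather than the asthma trial data: the posterior weight on the informative component is driven by the marginal likelihood of the subgroup estimate under each prior component, so consistent data borrow heavily and discrepant data do not.

```python
import math

# Illustrative inputs (not from the trial): target-subgroup estimate
# ybar with known standard error, plus the two mixture components.
ybar, se = 0.30, 0.25          # target subgroup (e.g. adolescents)
mu_inf, tau_inf = 0.50, 0.10   # informative prior from complementary subgroup
mu_vag, tau_vag = 0.00, 2.00   # weak prior centred at zero
w = 0.8                        # prior weight on the informative component

def norm_pdf(x, m, s):
    return math.exp(-0.5 * ((x - m) / s) ** 2) / (s * math.sqrt(2 * math.pi))

# Marginal likelihood of ybar under each component updates the mixture
# weight: this is where the analysis "learns" how much to borrow.
m_inf = norm_pdf(ybar, mu_inf, math.hypot(se, tau_inf))
m_vag = norm_pdf(ybar, mu_vag, math.hypot(se, tau_vag))
w_post = w * m_inf / (w * m_inf + (1 - w) * m_vag)

# Component-wise conjugate posterior means, combined with w_post.
def post_mean(mu0, tau0):
    prec = 1 / tau0**2 + 1 / se**2
    return (mu0 / tau0**2 + ybar / se**2) / prec

mix_mean = w_post * post_mean(mu_inf, tau_inf) + (1 - w_post) * post_mean(mu_vag, tau_vag)
print(round(w_post, 2), round(mix_mean, 2))
```

A tipping point analysis would rerun this over a grid of prior weights `w` to find the smallest weight at which the efficacy criterion is still met.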

11.
The purpose of this note is to derive the Bayes and empirical Bayes estimators of an unknown survival function F under progressively censored data, with respect to the squared error loss function and a Dirichlet process prior, using the fact that the posterior distribution of F given the data is a mixture of Dirichlet processes, under the assumption that the survival and censoring distributions are continuous.
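For orientation, in the uncensored special case the Dirichlet process posterior mean of the survival function has a simple closed form: a weighted average of the prior guess S0 and the empirical survival function, with weights α/(α+n) and n/(α+n). A sketch with an assumed exponential prior guess (the censored case treated in the note is more involved):

```python
import numpy as np

# Illustrative uncensored survival times and a DP prior with total mass
# alpha around an exponential prior guess S0 with mean 5.
alpha = 4.0
data = np.array([1.0, 2.0, 4.0, 6.0, 9.0])
n = len(data)

def post_surv(t):
    s0 = np.exp(-t / 5.0)        # prior guess S0(t)
    emp = float(np.mean(data > t))  # empirical survival function
    # Posterior mean shrinks the empirical estimate toward S0.
    return (alpha * s0 + n * emp) / (alpha + n)

print(round(post_surv(3.0), 3))
```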

12.
Abstract. We propose a Bayesian semiparametric methodology for quantile regression modelling. In particular, working with parametric quantile regression functions, we develop Dirichlet process mixture models for the error distribution in an additive quantile regression formulation. The proposed non‐parametric prior probability models allow the shape of the error density to adapt to the data and thus provide more reliable predictive inference than models based on parametric error distributions. We consider extensions to quantile regression for data sets that include censored observations. Moreover, we employ dependent Dirichlet processes to develop quantile regression models that allow the error distribution to change non‐parametrically with the covariates. Posterior inference is implemented using Markov chain Monte Carlo methods. We assess and compare the performance of our models using both simulated and real data sets.

13.
A parametric modelling for interval data is proposed, assuming a multivariate Normal or Skew-Normal distribution for the midpoints and log-ranges of the interval variables. The intrinsic nature of the interval variables leads to special structures of the variance–covariance matrix, which is represented by five different possible configurations. Maximum likelihood estimation for both models under all considered configurations is studied. The proposed modelling is then considered in the context of analysis of variance and multivariate analysis of variance testing. To assess the behaviour of the proposed methodology, a simulation study is performed. The results show that, for medium or large sample sizes, the tests have good power and their true significance level approaches the nominal level when the constraints assumed for the model are respected; for small samples, however, significance levels close to the nominal levels cannot be guaranteed. Applications to Chinese meteorological data in three different regions and to credit card usage variables for different card designations illustrate the proposed methodology.
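The representation step, turning each interval observation into a midpoint and a log-range before any distributional modelling, can be sketched as follows (illustrative data):

```python
import numpy as np

# Illustrative interval observations [lower, upper]; the modelling works
# on midpoints and log-ranges rather than on the bounds themselves,
# which keeps the range positive under a Normal/Skew-Normal model.
intervals = np.array([[10.0, 14.0], [8.0, 13.0], [11.0, 12.0]])

lower, upper = intervals[:, 0], intervals[:, 1]
midpoints = (lower + upper) / 2.0
log_ranges = np.log(upper - lower)

print(midpoints, np.round(log_ranges, 3))
```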

14.
The authors develop a methodology for predicting unobserved values in a conditionally lognormal random spatial field like those commonly encountered in environmental risk analysis. These unobserved values are of two types. The first come from spatial locations where the field has never been monitored, the second, from currently monitored sites which have been only recently installed. Thus the monitoring data exhibit a monotone pattern, resembling a staircase whose highest step comes from the oldest monitoring sites. The authors propose a hierarchical Bayesian approach using the lognormal sampling distribution, in conjunction with a conjugate generalized Wishart distribution. This prior distribution allows different degrees of freedom to be fitted for individual steps, taking into account the differential amounts of information available from sites at the different steps in the staircase. The resulting hierarchical model is a predictive distribution for the unobserved values of the field. The method is demonstrated by application to the ambient ozone field for the southwestern region of British Columbia.

15.
In recent years, Bayesian statistical methods in neuroscience have seen important advances. In particular, detection of brain signals for studying the complexity of the brain is an active area of research. Functional magnetic resonance imaging (fMRI) is an important tool to determine which parts of the brain are activated by different types of physical behavior. According to recent results, there is evidence that the values of the connectivity brain signal parameters are close to zero, and due to the nature of time series fMRI data with high-frequency behavior, Bayesian dynamic models for identifying sparsity are indeed far-reaching. We propose a multivariate Bayesian dynamic approach for model selection and shrinkage estimation of the connectivity parameters. We describe the coupling or lead-lag between any pair of regions by using mixture priors for the connectivity parameters and propose a new weakly informative default prior for the state variances. This framework produces one-step-ahead proper posterior predictive results and induces shrinkage and robustness suitable for fMRI data in the presence of sparsity. To explore the performance of the proposed methodology, we present simulation studies and an application to functional magnetic resonance imaging data.

16.
Bayesian methods are increasingly used in proof‐of‐concept studies. An important benefit of these methods is the potential to use informative priors, thereby reducing sample size. This is particularly relevant for treatment arms where there is a substantial amount of historical information such as placebo and active comparators. One issue with using an informative prior is the possibility of a mismatch between the informative prior and the observed data, referred to as prior‐data conflict. We focus on two methods for dealing with this: a testing approach and a mixture prior approach. The testing approach assesses prior‐data conflict by comparing the observed data to the prior predictive distribution and resorting to a non‐informative prior if prior‐data conflict is declared. The mixture prior approach uses a prior with a precise and diffuse component. We assess these approaches for the normal case via simulation and show they have some attractive features as compared with the standard one‐component informative prior. For example, when the discrepancy between the prior and the data is sufficiently marked, and intuitively, one feels less certain about the results, both the testing and mixture approaches typically yield wider posterior‐credible intervals than when there is no discrepancy. In contrast, when there is no discrepancy, the results of these approaches are typically similar to the standard approach. Whilst for any specific study the operating characteristics of any selected approach should be assessed and agreed at the design stage, we believe these two approaches are each worthy of consideration. Copyright © 2015 John Wiley & Sons, Ltd.
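The testing approach can be sketched in the normal case with known variances: under the prior predictive distribution the observed mean is normal with the prior mean and an inflated variance, so the conflict check reduces to a tail probability (all numbers below are illustrative, not from the paper's simulations):

```python
import math

# Informative prior N(mu0, tau^2); observed data mean ybar with known
# standard error se. Under the prior predictive distribution,
# ybar ~ N(mu0, tau^2 + se^2).
mu0, tau = 0.0, 0.3
ybar, se = 1.1, 0.4

z = (ybar - mu0) / math.sqrt(tau**2 + se**2)
# Two-sided prior predictive p-value via the standard normal CDF.
p = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

# Declare prior-data conflict and fall back to a non-informative prior
# if the observed mean is too far out in the prior predictive tails.
use_informative = p >= 0.05
print(round(p, 3), use_informative)
```

The 0.05 threshold is a hypothetical choice; in practice it would be set and justified at the design stage along with the rest of the operating characteristics.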

17.
This paper develops Bayesian inference of extreme value models with a flexible time-dependent latent structure. The generalized extreme value distribution is utilized to incorporate state variables that follow an autoregressive moving average (ARMA) process with Gumbel-distributed innovations. The time-dependent extreme value distribution is combined with heavy-tailed error terms. An efficient Markov chain Monte Carlo algorithm is proposed using a state-space representation with a finite mixture of normal distributions to approximate the Gumbel distribution. The methodology is illustrated by simulated data and two different sets of real data. Monthly minima of daily returns of a stock price index and monthly maxima of hourly electricity demand are fit to the proposed model and used for model comparison. Estimation results show the usefulness of the proposed model and methodology, and provide evidence that the latent autoregressive process and heavy-tailed errors play an important role in describing the monthly series of minimum stock returns and maximum electricity demand.

18.
Recently, mixture distributions have become increasingly popular in many scientific fields. Statistical computation and analysis of mixture models, however, are extremely complex due to the large number of parameters involved. Both EM algorithms for likelihood inference and MCMC procedures for Bayesian analysis have various difficulties in dealing with mixtures with an unknown number of components. In this paper, we propose a direct sampling approach to the computation of Bayesian finite mixture models with a varying number of components. This approach requires only knowledge of the density function up to a multiplicative constant. It is easy to implement, numerically efficient, and very practical in real applications. A simulation study shows that it performs quite satisfactorily on relatively high-dimensional distributions. A well-known genetic data set is used to demonstrate the simplicity of this method and its power for the computation of high-dimensional Bayesian mixture models.

19.
We review Bayesian analysis of hierarchical non-standard Poisson regression models with an emphasis on microlevel heterogeneity and macrolevel autocorrelation. For the former case, we confirm that negative binomial regression usually accounts for microlevel heterogeneity (overdispersion) satisfactorily; for the latter case, we apply the simple first-order Markov transition model to conveniently capture the macrolevel autocorrelation which often arises from temporal and/or spatial count data, rather than attaching complex random effects directly to the regression parameters. Specifically, we extend the hierarchical (multilevel) Poisson model into negative binomial models with macrolevel autocorrelation using a restricted gamma mixture with unit mean and a Markov transition covariate created from preceding residuals. We prove a mild sufficient condition for posterior propriety under a flat prior for the fixed effects of interest. Our methodology is implemented by analyzing the Baltic sea peracarids diurnal activity data published in the marine biology and ecology literature.
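The unit-mean gamma mixture mentioned above is exactly what turns a Poisson model into a negative binomial one while preserving the mean: the multiplier leaves E[y] at mu but inflates the variance to mu + mu^2/r. A quick simulation check with illustrative parameters:

```python
import numpy as np

rng = np.random.default_rng(1)

# Negative binomial counts generated as a Poisson-gamma mixture: the
# gamma "frailty" has unit mean (shape r, scale 1/r), so the marginal
# mean stays mu while the variance inflates to mu + mu^2/r.
mu, r = 5.0, 2.0
frailty = rng.gamma(shape=r, scale=1.0 / r, size=100_000)  # unit-mean gamma
y = rng.poisson(mu * frailty)

# Sample mean should sit near mu = 5; sample variance near
# mu + mu**2 / r = 17.5, well above the mean (overdispersion).
print(round(float(y.mean()), 2), round(float(y.var()), 2))
```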

20.
This paper presents a Bayesian non-parametric approach to survival analysis based on arbitrarily right-censored data. The analysis is based on posterior predictive probabilities using a Polya tree prior distribution on the space of probability measures on [0, ∞). In particular, we show that the estimate generalizes the classical Kaplan–Meier non-parametric estimator, which is obtained in the limiting case as the weight of prior information tends to zero.
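The Kaplan–Meier limit referred to here can be computed directly; a minimal sketch on illustrative right-censored data:

```python
import numpy as np

# Illustrative right-censored data: times with event indicators
# (1 = observed death, 0 = censored). The Polya tree posterior estimate
# reduces to this product-limit form as the prior weight tends to zero.
times = np.array([2.0, 3.0, 3.0, 5.0, 7.0, 8.0])
event = np.array([1,   1,   0,   1,   0,   1  ])

surv = 1.0
km = {}
for t in np.unique(times[event == 1]):
    d = np.sum((times == t) & (event == 1))   # deaths at time t
    n = np.sum(times >= t)                    # number at risk just before t
    surv *= 1.0 - d / n                       # product-limit update
    km[float(t)] = surv

print({t: round(s, 3) for t, s in km.items()})
```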


Copyright©北京勤云科技发展有限公司  京ICP备09084417号