Similar Articles (20 results)
1.
This paper presents a Bayesian analysis of partially linear additive models for quantile regression. We develop a semiparametric Bayesian approach to quantile regression models using a spectral representation of the nonparametric regression functions and the Dirichlet process (DP) mixture for the error distribution. We also consider Bayesian variable selection procedures for both parametric and nonparametric components in a partially linear additive model structure, based on Bayesian shrinkage priors via a stochastic search algorithm. Based on the proposed Bayesian semiparametric additive quantile regression model, referred to as BSAQ, Bayesian inference is carried out for estimation and model selection. For the posterior computation, we design a simple and efficient Gibbs sampler based on a location-scale mixture of exponential and normal distributions for the asymmetric Laplace distribution, which facilitates the commonly used collapsed Gibbs sampling algorithms for DP mixture models. Additionally, we discuss the asymptotic properties of the semiparametric quantile regression model in terms of consistency of the posterior distribution. Simulation studies and real-data examples illustrate the proposed method and compare it with Bayesian quantile regression methods in the literature.
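As a hedged numerical illustration of the location-scale mixture that underlies the Gibbs sampler described above, the following sketch (illustrative only, not the authors' code; the Kozumi–Kobayashi-style parametrization and all names are assumptions) draws errors from the exponential-normal mixture and checks that their p-th quantile is approximately zero, as required of an asymmetric Laplace error at quantile level p.

```python
# Minimal sketch (not the paper's implementation): the asymmetric Laplace
# error at quantile level p and scale sigma, written as a location-scale
# mixture of an exponential and a normal variable (Kozumi-Kobayashi form).
import numpy as np

def ald_mixture_draws(p, sigma, size, seed=0):
    """Draw ALD(0, sigma, p) errors via the exponential-normal mixture."""
    rng = np.random.default_rng(seed)
    theta = (1.0 - 2.0 * p) / (p * (1.0 - p))      # mixture location weight
    tau2 = 2.0 / (p * (1.0 - p))                   # mixture scale weight
    v = rng.exponential(scale=sigma, size=size)    # latent exponential variable
    u = rng.normal(size=size)                      # latent standard normal
    return theta * v + np.sqrt(tau2 * sigma * v) * u

if __name__ == "__main__":
    for p in (0.25, 0.5, 0.9):
        eps = ald_mixture_draws(p, sigma=1.0, size=200_000, seed=1)
        # For an ALD(0, sigma, p) error, the p-th quantile should be ~0.
        print(p, round(np.quantile(eps, p), 3))
```

Conditioning on the latent exponential variable leaves a Gaussian likelihood, which is what makes the conjugate Gibbs updates in the model above tractable.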

2.
Discrete data are collected in many application areas and are often characterised by highly skewed distributions. An example of this, which is considered in this paper, is the number of visits to a specialist, often taken as a measure of demand in healthcare. A discrete Weibull regression model was recently proposed for regression problems with a discrete response and was shown to possess desirable properties. In this paper, we propose the first Bayesian implementation of this model. We consider a general parametrization, where both parameters of the discrete Weibull distribution can be conditioned on the predictors, and show theoretically how, under a uniform non-informative prior, the posterior distribution is proper with finite moments. In addition, we consider closely the case of Laplace priors for parameter shrinkage and variable selection. Parameter estimates and their credible intervals can be readily calculated from their full posterior distribution. A simulation study and the analysis of four real datasets of medical records show promise for the wide applicability of this approach to the analysis of count data. The method is implemented in the R package BDWreg.
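The following sketch (an illustration under stated assumptions, not the BDWreg implementation) shows the type I discrete Weibull building block that the regression links to covariates: the probability mass function P(X = x) = q^(x^β) − q^((x+1)^β) and an inversion sampler, with q and β as placeholder values.

```python
# Minimal sketch of the type I discrete Weibull building block
# (illustrative only; not the BDWreg implementation).
import numpy as np

def dweibull_pmf(x, q, beta):
    """P(X = x) = q**(x**beta) - q**((x+1)**beta), for x = 0, 1, 2, ..."""
    x = np.asarray(x, dtype=float)
    return q ** (x ** beta) - q ** ((x + 1) ** beta)

def dweibull_rvs(q, beta, size, seed=0):
    """Inversion sampling, using P(X >= x) = q**(x**beta)."""
    rng = np.random.default_rng(seed)
    u = rng.uniform(size=size)
    x = np.ceil((np.log(u) / np.log(q)) ** (1.0 / beta)) - 1.0
    return np.maximum(x, 0.0).astype(int)

if __name__ == "__main__":
    q, beta = 0.8, 0.9                      # both can be linked to covariates
    draws = dweibull_rvs(q, beta, size=100_000, seed=2)
    for k in range(4):                      # empirical vs. theoretical pmf
        print(k, round((draws == k).mean(), 3), round(dweibull_pmf(k, q, beta), 3))
```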

3.
Categorical data frequently arise in applications in the social sciences. In such applications, log-linear models, based on either a Poisson or (product) multinomial response distribution, provide a flexible class for inference and prediction. In this paper we consider the Bayesian analysis of both Poisson and multinomial log-linear models. It is often convenient to model multinomial or product multinomial data as observations of independent Poisson variables. For multinomial data, Lindley (1964) [20] showed that this approach leads to valid Bayesian posterior inferences when the prior density for the Poisson cell means factorises in a particular way. We develop this result to provide a general framework for the analysis of multinomial or product multinomial data using a Poisson log-linear model. Valid finite population inferences are also available, which can be particularly important in modelling social data. We then focus particular attention on multivariate normal prior distributions for the log-linear model parameters. Here, an improper prior distribution for certain Poisson model parameters is required for valid multinomial analysis, and we derive conditions under which the resulting posterior distribution is proper. We also consider the construction of prior distributions across models, and for model parameters, when uncertainty exists about the appropriate form of the model. We present classes of Poisson and multinomial models that are invariant under certain natural groups of permutations of the cells. We demonstrate that, if prior belief concerning the model parameters is also invariant, as is the case in a ‘reference’ analysis, then the choice of prior distribution is considerably restricted. The analysis of multivariate categorical data in the form of a contingency table is considered in detail. We illustrate the methods with two examples.
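A short numerical check of the Poisson-multinomial factorisation behind this approach may help: the product of independent Poisson likelihoods equals a Poisson likelihood for the total count times a multinomial likelihood for the cell split. The counts and cell means below are arbitrary placeholders.

```python
# Numerical check (illustrative): a product of independent Poisson likelihoods
# factorises as Poisson(total) x Multinomial(cells), the identity behind
# treating multinomial data as independent Poisson counts.
import numpy as np
from scipy.stats import poisson, multinomial

y = np.array([12, 7, 3, 18])            # arbitrary cell counts
mu = np.array([10.0, 8.0, 2.5, 20.0])   # arbitrary Poisson cell means

lhs = poisson.logpmf(y, mu).sum()       # independent Poisson log-likelihood
rhs = (poisson.logpmf(y.sum(), mu.sum())                      # total count
       + multinomial.logpmf(y, n=y.sum(), p=mu / mu.sum()))   # cell split
print(lhs, rhs)                          # the two log-likelihoods agree
```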

4.
Previous approaches to establishing posterior consistency in Bayesian regression problems have used general theorems that involve verifying sufficient conditions for posterior consistency. In this article, we consider a direct approach by computing the posterior density explicitly and evaluating its asymptotic behavior. For this purpose, we work with a sample-size-dependent prior based on a truncated regression function whose truncation level increases with the sample size, and evaluate the asymptotic properties of the resulting posterior. Based on a concept called posterior density consistency, we attempt to understand posterior consistency. As an application, we illustrate that the posterior density of an orthogonal semiparametric regression model is consistent.

5.
Semiparametric Bayesian models are nowadays a popular tool in event history analysis. An important area of research concerns the investigation of frequentist properties of posterior inference. In this paper, we propose novel semiparametric Bayesian models for the analysis of competing risks data and investigate the Bernstein–von Mises theorem for differentiable functionals of model parameters. The model is specified by expressing the cause-specific hazard as the product of the conditional probability of a failure type and the overall hazard rate. We take the conditional probability as a smooth function of time and leave the cumulative overall hazard unspecified. A prior distribution is defined on the joint parameter space, which includes a beta process prior for the cumulative overall hazard. We first develop the large-sample properties of maximum likelihood estimators by giving simple sufficient conditions for them to hold. Then, we show that, under the chosen priors, the posterior distribution for any differentiable functional of interest is asymptotically equivalent to the sampling distribution derived from maximum likelihood estimation. A simulation study is provided to illustrate the coverage properties of credible intervals on cumulative incidence functions.

6.
A Bayesian discovery procedure
Summary.  We discuss a Bayesian discovery procedure for multiple-comparison problems. We show that, under a coherent decision theoretic framework, a loss function combining true positive and false positive counts leads to a decision rule that is based on a threshold of the posterior probability of the alternative. Under a semiparametric model for the data, we show that the Bayes rule can be approximated by the optimal discovery procedure, which was recently introduced by Storey. Improving the approximation leads us to a Bayesian discovery procedure, which exploits the multiple shrinkage in clusters that are implied by the assumed non-parametric model. We compare the Bayesian discovery procedure and the optimal discovery procedure estimates in a simple simulation study and in an assessment of differential gene expression based on microarray data from tumour samples. We extend the setting of the optimal discovery procedure by discussing modifications of the loss function that lead to different single-thresholding statistics. Finally, we provide an application of the previous arguments to dependent (spatial) data.
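As a generic sketch of the thresholding idea (the standard posterior expected FDR device, not necessarily the exact loss function analysed in the paper), the following flags cases whose posterior probability of the alternative exceeds a data-dependent cutoff; all numbers are simulated placeholders.

```python
# Generic sketch: flag cases by thresholding posterior probabilities of the
# alternative so that the posterior expected FDR stays below a target level.
# (Illustrative device only; the paper's loss function may differ.)
import numpy as np

def bayes_fdr_threshold(post_prob_alt, alpha=0.10):
    """Return indices flagged as discoveries at posterior expected FDR <= alpha."""
    v = np.asarray(post_prob_alt, dtype=float)
    order = np.argsort(-v)                      # most likely alternatives first
    local_fdr = 1.0 - v[order]                  # P(null | data) for each case
    running_fdr = np.cumsum(local_fdr) / np.arange(1, v.size + 1)
    k = np.flatnonzero(running_fdr <= alpha)
    return order[: k.max() + 1] if k.size else np.array([], dtype=int)

if __name__ == "__main__":
    rng = np.random.default_rng(3)
    post = np.concatenate([rng.uniform(0.9, 1.0, 50),    # likely signals
                           rng.uniform(0.0, 0.5, 950)])  # likely nulls
    hits = bayes_fdr_threshold(post, alpha=0.10)
    print(len(hits), "discoveries")
```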

7.
In this paper, we consider the problems of prediction and tests of hypotheses for directional data in a semiparametric Bayesian set-up. Observations are assumed to be independently drawn from the von Mises distribution and uncertainty in the location parameter is modelled by a Dirichlet process. For the prediction problem, we present a method to obtain the predictive density of a future observation, and, for the testing problem, we present a method of computing the Bayes factor by obtaining the posterior probabilities of the hypotheses under consideration. The semiparametric model is seen to be flexible and robust against prior misspecifications. While analytical expressions are intractable, the methods are easily implemented using the Gibbs sampler. We illustrate the methods with data from two real-life examples.

8.
In semiparametric inference we distinguish between the parameter of interest, which may be a location parameter, and a nuisance parameter that determines the remaining shape of the sampling distribution. As was pointed out by Diaconis and Freedman, the main problem in semiparametric Bayesian inference is to obtain a consistent posterior distribution for the parameter of interest. The present paper considers a semiparametric Bayesian method based on a pivotal likelihood function. It is shown that when the parameter of interest is the median, this method produces a consistent posterior distribution and is easily implemented. Numerical comparisons with classical methods and with Bayesian methods based on a Dirichlet prior are provided. It is also shown that, in the case of symmetric intervals, the classical confidence coefficients have a Bayesian interpretation as the limiting posterior probability of the interval based on the Dirichlet prior with a parameter that converges to zero.

9.
This paper describes Bayesian inference and prediction for the two-parameter Weibull distribution when the data are Type-II censored. The aim of this paper is twofold. First, we consider Bayesian inference of the unknown parameters under different loss functions. The Bayes estimates cannot be obtained in closed form, so we use a Gibbs sampling procedure to draw Markov chain Monte Carlo (MCMC) samples, which are used to compute the Bayes estimates and to construct symmetric credible intervals. Second, we consider Bayesian prediction of future order statistics based on the observed sample. We consider the posterior predictive density of the future observations and also construct a predictive interval with a given coverage probability. Monte Carlo simulations are performed to compare the different methods, and one data analysis is presented for illustration purposes.
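A rough sketch of the Type-II censored Weibull log-likelihood follows, paired with a plain random-walk Metropolis sampler as a generic stand-in for the paper's Gibbs procedure; the flat prior on the log-parameters, the proposal scale, and the simulated data are assumptions for illustration.

```python
# Sketch only: Type-II censored Weibull log-posterior and a random-walk
# Metropolis sampler (the paper uses a Gibbs scheme; this is a generic stand-in).
import numpy as np

def loglik_type2(log_pars, x_obs, n):
    """x_obs: the r smallest order statistics out of n; parameters on log scale."""
    shape, scale = np.exp(log_pars)
    r = x_obs.size
    z = x_obs / scale
    return (r * np.log(shape / scale)
            + (shape - 1.0) * np.log(z).sum()
            - (z ** shape).sum()
            - (n - r) * (x_obs.max() / scale) ** shape)  # censored tail at x_(r)

def metropolis(x_obs, n, n_iter=20_000, step=0.05, seed=4):
    rng = np.random.default_rng(seed)
    cur = np.zeros(2)                       # log(shape), log(scale) start at 0
    cur_ll = loglik_type2(cur, x_obs, n)    # flat prior on the log scale (assumed)
    draws = np.empty((n_iter, 2))
    for t in range(n_iter):
        prop = cur + step * rng.normal(size=2)
        prop_ll = loglik_type2(prop, x_obs, n)
        if np.log(rng.uniform()) < prop_ll - cur_ll:
            cur, cur_ll = prop, prop_ll
        draws[t] = np.exp(cur)
    return draws

if __name__ == "__main__":
    rng = np.random.default_rng(5)
    full = rng.weibull(1.5, size=50) * 2.0          # true shape 1.5, scale 2.0
    x_obs = np.sort(full)[:35]                      # Type-II censoring at r = 35
    post = metropolis(x_obs, n=50)[5000:]           # drop burn-in
    print(post.mean(axis=0))                        # rough posterior means
```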

10.
In this paper we consider the problems of estimation and prediction when observed data from a lognormal distribution are based on lower record values, and on lower record values with inter-record times. We compute maximum likelihood estimates and asymptotic confidence intervals for the model parameters. We also obtain Bayes estimates and the highest posterior density (HPD) intervals using noninformative and informative priors under squared error and LINEX loss functions. Furthermore, for the problem of Bayesian prediction under the one-sample and two-sample frameworks, we obtain predictive estimates and the associated predictive equal-tail and HPD intervals. Finally, for illustration purposes, a real data set is analyzed and a simulation study is conducted to compare the methods of estimation and prediction.

11.
This article deals with Bayesian analysis of quarter-plane moving average (MA) models observed on a rectangular part of a lattice. We present some properties of the autocorrelation function of MA models. These properties relate the correlation parameters to the original model parameters, providing a much more understandable interpretation of results concerning the model. A simulation experiment is developed to explore the sensitivity of the posterior distribution when the process is subject to innovation and additive contamination. We show by simulation that the correlation structure of the model is seriously affected when the process contains additive contamination. We then propose a more general class of MA models which automatically deals with the contamination phenomenon [the contaminated MA (CMA) model]. We also establish theoretical properties of the correlation function analogous to those of the previous model. Finally, we consider two applications of the CMA model. The results obtained in the numerical examples show the good performance of the CMA model under contaminated data.
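The following simulation sketch (a first-order quarter-plane MA with arbitrary coefficients and a simple 5% additive-outlier scheme, all assumed for illustration) shows how additive contamination distorts the lattice autocorrelations that the CMA model is designed to absorb.

```python
# Illustrative simulation (assumed first-order quarter-plane MA and a simple
# additive-outlier scheme): additive contamination distorts the lattice
# autocorrelations, the phenomenon the CMA model is designed to absorb.
import numpy as np

def qp_ma_field(n, theta10, theta01, theta11, seed=6):
    """Y(i,j) = e(i,j) + theta10*e(i-1,j) + theta01*e(i,j-1) + theta11*e(i-1,j-1)."""
    rng = np.random.default_rng(seed)
    e = rng.normal(size=(n + 1, n + 1))
    return (e[1:, 1:] + theta10 * e[:-1, 1:]
            + theta01 * e[1:, :-1] + theta11 * e[:-1, :-1])

def lag_corr(y, di, dj):
    """Sample correlation between Y(i,j) and Y(i+di, j+dj)."""
    a = y[: y.shape[0] - di, : y.shape[1] - dj].ravel()
    b = y[di:, dj:].ravel()
    return np.corrcoef(a, b)[0, 1]

if __name__ == "__main__":
    y = qp_ma_field(200, 0.6, 0.4, 0.2)
    rng = np.random.default_rng(7)
    contaminated = y.copy()
    mask = rng.uniform(size=y.shape) < 0.05          # 5% additive outliers
    contaminated[mask] += rng.normal(0.0, 10.0, size=mask.sum())
    for lag in [(1, 0), (0, 1), (1, 1)]:             # clean vs. contaminated ACF
        print(lag, round(lag_corr(y, *lag), 3),
              round(lag_corr(contaminated, *lag), 3))
```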

12.
The generalized lognormal distribution plays an important role in analysing data from different life testing experiments. In this paper, we consider Bayesian analysis of this distribution using various objective priors for the model parameters. Specifically, we derive expressions for the Jeffreys-type priors, the reference priors with different group orderings of the parameters, and the first-order matching priors. We also study the properties of the posterior distributions of the parameters under these improper priors. It is shown that only two of them result in proper posterior distributions. Numerical simulation studies are conducted to compare the performances of the Bayesian estimators under the considered priors and the maximum likelihood estimates. Finally, a real-data application is also provided for illustrative purposes.

13.
The mixed random-effects model is commonly used in longitudinal data analysis within either the frequentist or the Bayesian framework. Here we consider the case in which we have prior knowledge on some of the parameters but no such information on the rest. We therefore use a hybrid approach for the random-effects model: the parameters with prior information are estimated via a Bayesian procedure, while the remaining parameters are estimated by frequentist maximum likelihood estimation (MLE), simultaneously within the same model. In practice, partial prior information is often available, for example on covariates such as age and gender; using this information yields more accurate estimates in the mixed random-effects model. A series of simulation studies was performed to compare the results with those of the commonly used random-effects model with and without partial prior information. The estimates from the hybrid estimation (HYB) and from MLE were very close to each other. The estimated θ values under the model with partial prior information (HYB) were closer to the true θ values and showed smaller variances than those from MLE without partial prior information. Compared with the true θ values, the mean squared errors were much smaller for HYB than for MLE. This advantage of HYB is particularly pronounced for longitudinal data with a small sample size. The HYB and MLE methods are applied to a real longitudinal data set for illustration purposes.

14.
Summary.  Structured additive regression models are perhaps the most commonly used class of models in statistical applications. It includes, among others, (generalized) linear models, (generalized) additive models, smoothing spline models, state space models, semiparametric regression, spatial and spatiotemporal models, log-Gaussian Cox processes and geostatistical and geoadditive models. We consider approximate Bayesian inference in a popular subset of structured additive regression models, latent Gaussian models, where the latent field is Gaussian, controlled by a few hyperparameters and with non-Gaussian response variables. The posterior marginals are not available in closed form owing to the non-Gaussian response variables. For such models, Markov chain Monte Carlo methods can be implemented, but they are not without problems, in terms of both convergence and computational time. In some practical applications, the extent of these problems is such that Markov chain Monte Carlo sampling is simply not an appropriate tool for routine analysis. We show that, by using an integrated nested Laplace approximation and its simplified version, we can directly compute very accurate approximations to the posterior marginals. The main benefit of these approximations is computational: where Markov chain Monte Carlo algorithms need hours or days to run, our approximations provide more precise estimates in seconds or minutes. Another advantage with our approach is its generality, which makes it possible to perform Bayesian analysis in an automatic, streamlined way, and to compute model comparison criteria and various predictive measures so that models can be compared and the model under study can be challenged.
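A toy, single-latent-variable caricature of the nested Laplace idea (not the INLA algorithm for a full latent field): the marginal posterior of a hyperparameter is approximated by evaluating the joint density at the conditional mode of the latent variable with a Gaussian curvature correction, and then compared with brute-force quadrature. The Poisson-count model, priors, and grids are assumptions.

```python
# Toy sketch of the Laplace-approximation identity behind INLA-style inference:
# pi(tau | y) is proportional to pi(y, x, tau) / pi_G(x | y, tau) at the mode of x.
# Assumed model: y_i | x ~ Poisson(exp(x)), x | tau ~ N(0, 1/tau), tau ~ Gamma(1, 1).
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(8)
y = rng.poisson(lam=np.exp(0.7), size=30)           # simulated counts, true x = 0.7
n, s = y.size, y.sum()

def log_joint(x, tau):
    """log pi(y | x) + log pi(x | tau) + log pi(tau), up to constants in y only."""
    return (s * x - n * np.exp(x)                    # Poisson log-likelihood
            + 0.5 * np.log(tau) - 0.5 * tau * x**2   # Gaussian prior on latent x
            - tau)                                   # Gamma(1, 1) prior on tau

def log_post_laplace(tau):
    """Laplace approximation: joint at the conditional mode of x, plus curvature."""
    x_hat = minimize_scalar(lambda x: -log_joint(x, tau)).x
    curvature = n * np.exp(x_hat) + tau              # -(d^2/dx^2) log_joint at x_hat
    return log_joint(x_hat, tau) + 0.5 * np.log(2.0 * np.pi / curvature)

def log_post_quadrature(tau, x_grid):
    """Brute-force integration over x, for comparison."""
    v = log_joint(x_grid, tau)
    return v.max() + np.log(np.sum(np.exp(v - v.max())) * (x_grid[1] - x_grid[0]))

def normalise(logd, grid):
    step = grid[1] - grid[0]
    return logd - (logd.max() + np.log(np.sum(np.exp(logd - logd.max())) * step))

taus = np.linspace(0.05, 5.0, 60)
x_grid = np.linspace(-3.0, 4.0, 4000)
lap = normalise(np.array([log_post_laplace(t) for t in taus]), taus)
quad = normalise(np.array([log_post_quadrature(t, x_grid) for t in taus]), taus)
print(np.max(np.abs(np.exp(lap) - np.exp(quad))))    # the two posteriors nearly agree
```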

15.
We consider the context of probabilistic inference of model parameters given error bars or confidence intervals on model output values, when the data themselves are unavailable. We introduce a class of algorithms in a Bayesian framework, relying on maximum entropy arguments and approximate Bayesian computation methods, to generate data consistent with the given summary statistics. Once we obtain consistent data sets, we pool the respective posteriors to arrive at a single, averaged density on the parameters. This approach allows us to perform accurate forward uncertainty propagation consistent with the reported statistics.
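A minimal rejection-ABC caricature of the idea (it omits the paper's maximum entropy construction and uses a placeholder Gaussian model and reported interval): candidate parameters whose simulated summary statistic falls inside the reported interval are retained, and the retained draws approximate the pooled posterior.

```python
# Minimal caricature (not the paper's maximum-entropy construction): given only
# a reported mean and its interval for a Gaussian model's output, keep the
# candidate parameters whose simulated summary is consistent with that report.
import numpy as np

reported_mean, reported_lo, reported_hi = 4.2, 3.8, 4.6     # assumed summary
n_per_dataset, n_candidates = 25, 40_000
rng = np.random.default_rng(9)

# Draw candidate parameters from a broad prior, simulate data sets, and keep
# candidates whose sample mean lands inside the reported interval.
mu_prior = rng.normal(4.0, 2.0, size=n_candidates)          # prior on the mean
sims = rng.normal(mu_prior[:, None], 1.0, size=(n_candidates, n_per_dataset))
sample_means = sims.mean(axis=1)
keep = (sample_means >= reported_lo) & (sample_means <= reported_hi)

# The kept parameter draws approximate the posterior on mu given only the summary.
post = mu_prior[keep]
print(post.size, round(post.mean(), 2), np.round(np.percentile(post, [2.5, 97.5]), 2))
```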

16.
We consider an empirical Bayes approach to standard nonparametric regression estimation using a nonlinear wavelet methodology. Instead of specifying a single prior distribution on the parameter space of wavelet coefficients, which is usually the case in the existing literature, we elicit the ε-contamination class of prior distributions that is particularly attractive to work with when one seeks robust priors in Bayesian analysis. The type II maximum likelihood approach to prior selection is used by maximizing the predictive distribution for the data in the wavelet domain over a suitable subclass of the ε-contamination class of prior distributions. For the prior selected, the posterior mean yields a thresholding procedure which depends on one free prior parameter and is level- and amplitude-dependent, thus allowing better adaptation in function estimation. We consider an automatic choice of the free prior parameter, guided by considerations on an exact risk analysis and on the shape of the thresholding rule, enabling the resulting estimator to be fully automated in practice. We also compute pointwise Bayesian credible intervals for the resulting function estimate using a simulation-based approach. We use several simulated examples to illustrate the performance of the proposed empirical Bayes term-by-term wavelet scheme, and we make comparisons with other classical and empirical Bayes term-by-term wavelet schemes. As a practical illustration, we present an application to a real-life data set that was collected in an atomic force microscopy study.
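For orientation, here is a generic term-by-term wavelet shrinkage sketch using PyWavelets; it applies plain level-dependent soft thresholding rather than the paper's posterior-mean rule under the ε-contamination prior, and the test signal, wavelet, and thresholds are assumptions.

```python
# Generic term-by-term wavelet shrinkage sketch (PyWavelets); the paper's rule
# is a posterior mean under an epsilon-contamination prior, whereas this uses
# plain level-dependent soft thresholding just to illustrate the mechanics.
import numpy as np
import pywt

rng = np.random.default_rng(10)
t = np.linspace(0.0, 1.0, 1024)
signal = np.sin(6 * np.pi * t) + 0.4 * np.sign(np.sin(20 * np.pi * t))  # test signal
noisy = signal + 0.3 * rng.normal(size=t.size)

coeffs = pywt.wavedec(noisy, "db4", level=5)       # [cA5, cD5, cD4, ..., cD1]
sigma = np.median(np.abs(coeffs[-1])) / 0.6745     # noise scale from finest level
denoised_coeffs = [coeffs[0]] + [
    pywt.threshold(c, sigma * np.sqrt(2 * np.log(c.size)), mode="soft")
    for c in coeffs[1:]                            # level-dependent thresholds
]
denoised = pywt.waverec(denoised_coeffs, "db4")[: t.size]

print(round(np.mean((noisy - signal) ** 2), 4),    # noisy vs. true MSE
      round(np.mean((denoised - signal) ** 2), 4)) # denoised vs. true MSE
```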

17.
Abstract.  The traditional Cox proportional hazards regression model uses an exponential relative risk function. We argue that under various plausible scenarios, the relative risk part of the model should be bounded, suggesting also that the traditional model often might overdramatize the hazard rate assessment for individuals with unusual covariates. This motivates our working with proportional hazards models where the relative risk function takes a logistic form. We provide frequentist methods, based on the partial likelihood, and then go on to semiparametric Bayesian constructions. These involve a Beta process for the cumulative baseline hazard function and any prior with a density, for example that dictated by a Jeffreys-type argument, for the regression coefficients. The posterior is derived using machinery for Lévy processes, and a simulation recipe is devised for sampling from the posterior distribution of any quantity. Our methods are illustrated on real data. A Bernshteĭn–von Mises theorem is reached for our class of semiparametric priors, guaranteeing asymptotic normality of the posterior processes.

18.
Aoristic data can be described by a marked point process in time in which the points cannot be observed directly but are known to lie in observable intervals, the marks. We consider Bayesian state estimation for the latent points when the marks are modeled in terms of an alternating renewal process in equilibrium and the prior is a Markov point process. We derive the posterior distribution, estimate its parameters and present some examples that illustrate the influence of the prior distribution. The model is then used to estimate times of occurrence of interval censored crimes.

19.
Interval-censored survival data arise often in medical applications and clinical trials [Wang L, Sun J, Tong X. Regression analysis of case II interval-censored failure time data with the additive hazards model. Statistica Sinica. 2010;20:1709–1723]. However, most existing interval-censored survival analysis techniques suffer from challenges such as heavy computational cost or non-proportionality of hazard rates due to the complicated data structure [Wang L, Lin X. A Bayesian approach for analyzing case 2 interval-censored data under the semiparametric proportional odds model. Statistics & Probability Letters. 2011;81:876–883; Banerjee T, Chen M-H, Dey DK, et al. Bayesian analysis of generalized odds-rate hazards models for survival data. Lifetime Data Analysis. 2007;13:241–260]. To address these challenges, in this paper we introduce a flexible Bayesian non-parametric procedure for the estimation of the odds under interval censoring, case II. We use Bernstein polynomials to introduce a prior for modeling the odds and propose a novel and easy-to-implement sampling scheme based on Markov chain Monte Carlo algorithms to study the posterior distributions. We also give general results on asymptotic properties of the posterior distributions. Simulated examples show that the proposed approach is quite satisfactory in the cases considered. The use of the proposed method is further illustrated by analyzing the hemophilia study data [McMahan CS, Wang L. A package for semiparametric regression analysis of interval-censored data; 2015. http://CRAN.R-project.org/package=ICsurv].
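A small sketch of the Bernstein-polynomial building block used for the prior on the odds: nonnegative basis weights yield a nonnegative odds function on a bounded follow-up interval, from which a distribution function can be recovered. The degree, weights, and interval below are arbitrary placeholders, not posterior draws.

```python
# Sketch of the Bernstein-polynomial building block (weights and degree are
# arbitrary placeholders): nonnegative weights yield a nonnegative odds
# function on a bounded follow-up interval [0, T].
import numpy as np
from scipy.special import comb

def bernstein_basis(t, degree, T=1.0):
    """Matrix of b_{k,degree}(t/T) = C(degree, k) * u**k * (1-u)**(degree-k)."""
    u = np.asarray(t, dtype=float) / T
    k = np.arange(degree + 1)
    return comb(degree, k) * u[:, None] ** k * (1.0 - u[:, None]) ** (degree - k)

def odds(t, weights, T=1.0):
    """odds(t) = sum_k w_k * b_{k,degree}(t/T), with w_k >= 0."""
    return bernstein_basis(t, len(weights) - 1, T) @ np.asarray(weights)

if __name__ == "__main__":
    w = np.array([0.1, 0.4, 0.9, 1.8, 3.0])             # placeholder nonneg weights
    tt = np.linspace(0.0, 2.0, 5)
    o = odds(tt, w, T=2.0)
    print(np.round(o, 3), np.round(o / (1.0 + o), 3))   # odds and implied F(t)
```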

20.
The independent additive errors linear model consists of a structure for the mean and a separate structure for the error distribution. The error structure may be parametric or it may be semiparametric. Under alternative values of the mean structure, the best fitting additive errors model has an error distribution which can be represented as the convolution of the actual error distribution and the marginal distribution of a misspecification term. The model misspecification term results from the covariates' distribution. Conditions are developed to distinguish when the semiparametric model yields sharper inference than the parametric model and vice versa. The main conditions concern the actual error distribution and the covariates' distribution. The theoretical results explain a paradoxical finding in semiparametric Bayesian modelling, where the posterior distribution under a semiparametric model is found to be more concentrated than is the posterior distribution under a corresponding parametric model. The paradox is illustrated on a set of allometric data. The Canadian Journal of Statistics 39: 165–180; 2011.
