Similar Articles
20 similar articles found.
1.
In the presence of covariate information, the proportional hazards model is one of the most popular models. In this paper, working in a Bayesian nonparametric framework, we use a Markov (Lévy-driven) process to model the baseline hazard rate. Previous Bayesian nonparametric models have been based on neutral-to-the-right processes, which have a number of drawbacks, such as discreteness of the cumulative hazard function. We allow the covariates to be time-dependent functions and develop a full posterior analysis via substitution sampling. A detailed illustration is presented.
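The abstract above builds on the proportional hazards model. As background, a minimal sketch of the classical Cox log partial likelihood for a single covariate (not the paper's Lévy-driven baseline prior, and assuming no tied failure times; the function name is illustrative):

```python
import math

def cox_log_partial_likelihood(beta, times, events, covariates):
    """Cox log partial likelihood for a single covariate, assuming no
    tied failure times: sum over observed failures of
    beta * x_i - log( sum_{j in risk set at t_i} exp(beta * x_j) )."""
    order = sorted(range(len(times)), key=lambda i: times[i])
    loglik = 0.0
    for rank, i in enumerate(order):
        if events[i]:  # contribution only from observed failures
            risk = sum(math.exp(beta * covariates[j]) for j in order[rank:])
            loglik += beta * covariates[i] - math.log(risk)
    return loglik
```

With `beta = 0` the likelihood reduces to the negative log of the product of risk-set sizes, a handy sanity check.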

2.
Summary  This paper introduces a Bayesian nonparametric estimator for an unknown distribution function based on left-censored observations. Hjort (1990) and Lo (1993) introduced Bayesian nonparametric estimators derived from beta and beta-neutral processes, which allow for right censoring. These processes are taken as priors from the class of neutral-to-the-right processes (Doksum, 1974). The Kaplan-Meier nonparametric product-limit estimator can be obtained from these Bayesian nonparametric estimators in the limiting case of a vague prior. The present paper introduces the corresponding left beta/beta-neutral process prior, which allows for left censoring. The Bayesian nonparametric estimator is obtained analogously to the product-limit estimator based on left-censored data.
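The Kaplan-Meier product-limit estimator mentioned above, which these Bayesian estimators recover under a vague prior, can be sketched as follows (right-censoring version; the function name is illustrative):

```python
def kaplan_meier(times, events):
    """Product-limit estimate of the survival function S(t).

    times  : observed times (failure or censoring)
    events : 1 = observed failure, 0 = right-censored
    Returns {failure time: S(t)} at each distinct failure time.
    """
    pairs = sorted(zip(times, events))
    surv, curve = 1.0, {}
    for t in sorted({tt for tt, e in pairs if e == 1}):
        d = sum(e for tt, e in pairs if tt == t)   # failures at t
        n = sum(1 for tt, _ in pairs if tt >= t)   # number at risk at t
        surv *= 1.0 - d / n                        # product-limit update
        curve[t] = surv
    return curve
```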

3.
Markov Beta and Gamma Processes for Modelling Hazard Rates
This paper generalizes the discrete-time independent increment beta process of Hjort (1990), for modelling discrete failure times, and also generalizes the independent gamma process for modelling piecewise constant hazard rates (Walker and Mallick, 1997). The generalizations are from independent-increment to Markov-increment prior processes, allowing the modelling of smoothness. We derive posterior distributions and undertake a full Bayesian analysis.

4.
5.
We consider an efficient Bayesian approach to estimating integration-based posterior summaries from a separate Bayesian application. In Bayesian quadrature we model an intractable posterior density function f(·) as a Gaussian process, using an approximating function g(·), and find a posterior distribution for the integral of f(·), conditional on a few evaluations of f(·) at selected design points. Bayesian quadrature using normal g(·) is called Bayes-Hermite quadrature. We extend this theory by allowing g(·) to be chosen from two wider classes of functions. One is a family of skew densities and the other is the family of finite mixtures of normal densities. For the family of skew densities we describe an iterative updating procedure to select the most suitable approximation and apply the method to two simulated posterior density functions.
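The core of Bayesian quadrature can be illustrated in its basic form: place a Gaussian process prior on f, condition on a few evaluations, and read off the posterior mean of the integral in closed form. A minimal sketch with a squared-exponential kernel on an interval [a, b] (this is the standard construction, not the skew/mixture extension the paper proposes; function names and the length-scale default are illustrative):

```python
import math

def solve(A, b):
    """Gaussian elimination with partial pivoting (small systems only)."""
    n = len(b)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n + 1):
                M[r][k] -= f * M[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][k] * x[k] for k in range(r + 1, n))) / M[r][r]
    return x

def bayes_quadrature(f, design, a, b, ell=0.5, nugget=1e-9):
    """Posterior mean of the integral of f over [a, b] under a GP prior with
    kernel k(x, x') = exp(-(x - x')^2 / (2 ell^2)): estimate = z^T K^{-1} f,
    where z_i is the kernel integrated against Lebesgue measure on [a, b]."""
    fx = [f(x) for x in design]
    K = [[math.exp(-(xi - xj) ** 2 / (2.0 * ell ** 2)) + (nugget if i == j else 0.0)
          for j, xj in enumerate(design)] for i, xi in enumerate(design)]
    c = ell * math.sqrt(math.pi / 2.0)
    z = [c * (math.erf((b - xi) / (ell * math.sqrt(2.0)))
              - math.erf((a - xi) / (ell * math.sqrt(2.0)))) for xi in design]
    w = solve(K, z)  # quadrature weights K^{-1} z
    return sum(wi * fi for wi, fi in zip(w, fx))
```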

6.
The Dirichlet process can be regarded as a random probability measure for which the authors examine various sum representations. They consider in particular the gamma process construction of Ferguson (1973) and the "stick-breaking" construction of Sethuraman (1994). They propose a Dirichlet finite sum representation that strongly approximates the Dirichlet process. They assess the accuracy of this approximation and characterize the posterior that this new prior leads to in the context of Bayesian nonparametric hierarchical models.
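The Sethuraman stick-breaking construction referenced above is easy to sketch in truncated form: break a unit stick with Beta(1, α) fractions and attach an atom from the base measure to each piece (truncation level and function names are illustrative):

```python
import random

def stick_breaking(alpha, base_sampler, n_atoms, rng):
    """Truncated Sethuraman (1994) representation of a Dirichlet process:
    v_k ~ Beta(1, alpha), w_k = v_k * prod_{j<k} (1 - v_j)."""
    weights, atoms = [], []
    remaining = 1.0                      # unbroken length of the stick
    for _ in range(n_atoms):
        v = rng.betavariate(1.0, alpha)  # fraction of the remaining stick
        weights.append(remaining * v)
        remaining *= 1.0 - v
        atoms.append(base_sampler(rng))  # atom location from the base measure
    return weights, atoms
```

The truncated weights sum to slightly less than one; the deficit shrinks geometrically with the number of atoms, which is why finite-sum approximations of this kind work well in practice.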

7.
This paper gives an interpretation for the scale parameter of a Dirichlet process when the aim is to estimate a linear functional of an unknown probability distribution. We provide exact first and second posterior moments for such functionals under both informative and noninformative prior specifications. The noninformative case provides a normal approximation to the Bayesian bootstrap.
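The Bayesian bootstrap mentioned in the noninformative case can be sketched for the mean functional: draw Dirichlet(1, …, 1) weights (as normalized standard exponentials) and compute the weighted mean (function name is illustrative):

```python
import random

def bayesian_bootstrap_mean(data, n_draws, rng):
    """Bayesian bootstrap draws of the mean functional: each draw uses
    Dirichlet(1, ..., 1) weights, generated as normalized exponentials."""
    draws = []
    for _ in range(n_draws):
        g = [rng.expovariate(1.0) for _ in data]
        total = sum(g)
        draws.append(sum(gi / total * xi for gi, xi in zip(g, data)))
    return draws
```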

8.
Stochastic kinetic models are often used to describe complex biological processes. Typically these models are analytically intractable and have unknown parameters which need to be estimated from observed data. Ideally we would have measurements on all interacting chemical species in the process, observed continuously in time. However, in practice, measurements are taken at only a relatively small number of time points. In some situations, only very limited observation of the process is available, for example, settings in which experimenters can only observe noisy measurements of the proportion of cells that are alive. This makes the inference task even more problematic. We consider a range of data-poor scenarios and investigate the performance of various computationally intensive Bayesian algorithms in determining the posterior distribution using data on proportions from a simple birth-death process.
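The simple birth-death process used as the test bed above can be simulated exactly with the Gillespie algorithm; this is the forward-simulation step inside the inference schemes the abstract evaluates (function name and parameterization are illustrative):

```python
import random

def simulate_birth_death(n0, birth, death, t_max, rng):
    """Exact (Gillespie) simulation of a linear birth-death process:
    each individual gives birth at rate `birth` and dies at rate `death`.
    Returns the population size at time t_max."""
    t, n = 0.0, n0
    while n > 0:
        total_rate = n * (birth + death)
        t += rng.expovariate(total_rate)   # time to the next event
        if t > t_max:
            break
        if rng.random() < birth / (birth + death):
            n += 1                          # birth event
        else:
            n -= 1                          # death event
    return n
```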

9.
A random distribution function on the positive real line which belongs to the class of neutral to the right priors is defined. It corresponds to the superposition of independent beta processes at the cumulative hazard level. The definition is constructive and starts with a discrete time process with random probability masses obtained from suitably defined products of independent beta random variables. The continuous time version is derived as the corresponding infinitesimal weak limit and is described in terms of completely random measures. It takes the interpretation of the survival distribution resulting from independent competing failure times. We discuss prior specification and illustrate posterior inference on a real data example.

10.
This article deals with the issue of using a suitable pseudo-likelihood, instead of an integrated likelihood, when performing Bayesian inference about a scalar parameter of interest in the presence of nuisance parameters. The proposed approach has the advantages of avoiding the elicitation on the nuisance parameters and the computation of multidimensional integrals. Moreover, it is particularly useful when it is difficult, or even impractical, to write the full likelihood function.

We focus on Bayesian inference about a scalar regression coefficient in various regression models. First, in the context of non-normal regression-scale models, we give a theoretical result showing that there is no loss of information about the parameter of interest when using a posterior distribution derived from a pseudo-likelihood instead of the correct posterior distribution. Second, we present nontrivial applications with high-dimensional, or even infinite-dimensional, nuisance parameters in the context of nonlinear normal heteroscedastic regression models, and of models for binary outcomes and count data, accounting also for possible overdispersion. In all these situations, we show that non-Bayesian methods for eliminating nuisance parameters can be usefully incorporated into a one-parameter Bayesian analysis.

11.
Ordinary differential equations are arguably the most popular and useful mathematical tool for describing physical and biological processes in the real world. Often, these physical and biological processes are observed with errors, in which case the most natural way to model such data is via regression where the mean function is defined by an ordinary differential equation believed to provide an understanding of the underlying process. These regression-based dynamical models are called differential equation models. Parameter inference from differential equation models poses computational challenges mainly due to the fact that analytic solutions to most differential equations are not available. In this paper, we propose an approximation method for obtaining the posterior distribution of parameters in differential equation models. The approximation is done in two steps. In the first step, the solution of a differential equation is approximated by the general one-step method, a class of numerical methods for ordinary differential equations including the Euler and the Runge-Kutta procedures; in the second step, nuisance parameters are marginalized using Laplace approximation. The proposed Laplace approximated posterior gives a computationally fast alternative to the full Bayesian computational scheme (such as Markov chain Monte Carlo) and produces more accurate and stable estimators than the popular smoothing methods (called collocation methods) based on frequentist procedures. For a theoretical support of the proposed method, we prove that the Laplace approximated posterior converges to the actual posterior under certain conditions and analyze the relation between the order of numerical error and its Laplace approximation. The proposed method is tested on simulated data sets and compared with the other existing methods.
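The general one-step method referenced above includes the classical fourth-order Runge-Kutta scheme; a minimal sketch of RK4, which would supply the approximate ODE solution in the first step of the procedure (function name is illustrative):

```python
def rk4(f, y0, t0, t1, n_steps):
    """Classical fourth-order Runge-Kutta, one instance of the general
    one-step method y_{k+1} = y_k + h * phi(t_k, y_k; h)."""
    h = (t1 - t0) / n_steps
    t, y = t0, y0
    for _ in range(n_steps):
        k1 = f(t, y)
        k2 = f(t + h / 2.0, y + h / 2.0 * k1)
        k3 = f(t + h / 2.0, y + h / 2.0 * k2)
        k4 = f(t + h, y + h * k3)
        y += (h / 6.0) * (k1 + 2.0 * k2 + 2.0 * k3 + k4)
        t += h
    return y
```

For y' = y with y(0) = 1, 100 steps on [0, 1] reproduce e to well under single-precision error, illustrating the fourth-order accuracy the paper's numerical-error analysis trades against.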

12.
We establish weak and strong posterior consistency of Gaussian process priors studied by Lenk [1988. The logistic normal distribution for Bayesian, nonparametric, predictive densities. J. Amer. Statist. Assoc. 83 (402), 509–516] for density estimation. Weak consistency is related to the support of a Gaussian process in the sup-norm topology which is explicitly identified for many covariance kernels. In fact we show that this support is the space of all continuous functions when the usual covariance kernels are chosen and an appropriate prior is used on the smoothing parameters of the covariance kernel. We then show that a large class of Gaussian process priors achieve weak as well as strong posterior consistency (under some regularity conditions) at true densities that are either continuous or piecewise continuous.

13.
Quasi-life tables, in which the data arise from many concurrent, independent, discrete-time renewal processes, were defined by Baxter (1994, Biometrika 81:567–577), who outlined some methods for estimation. The processes are not observed individually; only the total numbers of renewals at each time point are observed. Crowder and Stephens (2003, Lifetime Data Anal 9:345–355) implemented a formal estimating-equation approach that invokes large-sample theory. However, these asymptotic methods fail to yield sensible estimates for smaller samples. In this paper, we implement a Bayesian analysis based on MCMC computation that works equally well for large and small sample sizes. We give three simulated examples, studying the Bayesian results, the impact of changing prior specification, and empirical properties of the Bayesian estimators of the lifetime distribution parameters. We also study the Baxter (1994, Biometrika 81:567–577) data, and uncover structure that has not been commented upon previously.

14.
The max-stable process is a natural approach for modelling extremal dependence in spatial data. However, the estimation is difficult due to the intractability of the full likelihoods. One approach that can be used to estimate the posterior distribution of the parameters of the max-stable process is to employ composite likelihoods in the Markov chain Monte Carlo (MCMC) samplers, possibly with adjustment of the credible intervals. In this paper, we investigate the performance of the composite likelihood-based MCMC samplers under various settings of the Gaussian extreme value process and the Brown–Resnick process. Based on our findings, some suggestions are made to facilitate the application of this estimator to real data.

15.
Partial specification of a prior distribution can be appealing to an analyst, but there is no conventional way to update a partial prior. In this paper, we show how a framework for Bayesian updating with data can be based on the Dirichlet(a) process. Within this framework, partial-information predictors generalize standard minimax predictors and have interesting multiple-point shrinkage properties. Approximations to partial-information estimators for squared error loss are defined straightforwardly, and the resulting estimate of the mean shrinks the sample mean. The proposed updating of the partial prior is a consequence of four natural requirements when the Dirichlet parameter a is continuous. Namely, the updated partial posterior should be calculable from knowledge of only the data and the partial prior; it should be faithful to the full posterior distribution; it should assign positive probability to every observed event; and it should not assign probability to unobserved events not included in the partial prior specification.

16.
Survival data obtained from prevalent cohort study designs are often subject to length-biased sampling. Frequentist methods including estimating equation approaches, as well as full likelihood methods, are available for assessing covariate effects on survival from such data. Bayesian methods allow a probability interpretation for the parameters of interest, and may easily provide the predictive distribution for future observations while incorporating weak prior knowledge on the baseline hazard function. There is a lack of Bayesian methods for analyzing length-biased data. In this paper, we propose Bayesian methods for analyzing length-biased data under a proportional hazards model. The prior distribution for the cumulative hazard function is specified semiparametrically using I-splines. Bayesian conditional and full likelihood approaches are developed for analyzing simulated and real data.

17.
In this paper, we adapt recently developed simulation-based sequential algorithms to the Bayesian analysis of discretely observed diffusion processes. The estimation framework involves the introduction of m−1 latent data points between every pair of observations. Sequential MCMC methods are then used to sample the posterior distribution of the latent data and the model parameters on-line. The method is applied to the estimation of parameters in a simple stochastic volatility (SV) model of the U.S. short-term interest rate. We also provide a simulation study to validate our method, using synthetic data generated by the SV model with parameters calibrated to match weekly observations of the U.S. short-term interest rate.
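The data-augmentation idea above, placing m−1 latent points between observations, rests on a fine-grained Euler-Maruyama discretization of the diffusion. A minimal forward-simulation sketch of that grid (this illustrates the discretization only, not the sequential MCMC sampler; names are illustrative):

```python
import math
import random

def euler_maruyama(drift, diffusion, x0, dt_obs, n_obs, m, rng):
    """Simulate dX = drift(X) dt + diffusion(X) dW on a grid that places
    m - 1 latent points between each pair of observation times."""
    dt = dt_obs / m
    path = [x0]
    x = x0
    for _ in range(n_obs * m):
        x += drift(x) * dt + diffusion(x) * math.sqrt(dt) * rng.gauss(0.0, 1.0)
        path.append(x)
    return path  # every m-th entry corresponds to an observation time
```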

18.
Linear models with a growing number of parameters have been widely used in modern statistics. An important problem for this kind of model is variable selection. Bayesian approaches, which provide a stochastic search of informative variables, have gained popularity. In this paper, we study the asymptotic properties related to Bayesian model selection when the model dimension p is growing with the sample size n. For this growing-dimension regime we provide sufficient conditions under which: (1) with large probability, the posterior probability of the true model (from which samples are drawn) uniformly dominates the posterior probability of any incorrect models; and (2) the posterior probability of the true model converges to one in probability. Both (1) and (2) guarantee that the true model will be selected under a Bayesian framework. We also demonstrate several situations when (1) holds but (2) fails, which illustrates the difference between these two properties. Finally, we generalize our results to include g-priors, and provide simulation examples to illustrate the main results.

19.
Bivariate count data arise in several different disciplines (epidemiology, marketing, and sports statistics, to name a few), and the bivariate Poisson distribution, a generalization of the Poisson distribution, plays an important role in modelling such data. In this paper we present a Bayesian estimation approach for the parameters of the bivariate Poisson model and provide the posterior distributions in closed form. It is shown that the joint posterior distributions are finite mixtures of conditionally independent gamma distributions, whose full form can be easily deduced by a recursive updating scheme. This removes the need for computationally demanding MCMC schemes for Bayesian inference in such models, since direct sampling from the posterior becomes available, even in cases where the posterior distribution of functions of the parameters is not available in closed form. In addition, we define a class of prior distributions that possess an interesting conjugacy property which extends the typical notion of conjugacy, in the sense that both priors and posteriors belong to the same family of finite mixture models but with a different number of components. Extensions to certain other models, including multivariate models or models with other marginal distributions, are discussed.
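The bivariate Poisson model above is usually constructed by trivariate reduction: X = X1 + X0 and Y = X2 + X0 with independent Poisson components, so the shared component induces Cov(X, Y) = λ0. A minimal sampling sketch (this shows the model's construction, not the paper's posterior mixture scheme; Knuth's sampler is used since the stdlib lacks one, and is only suitable for small rates):

```python
import math
import random

def poisson(lam, rng):
    """Knuth's method for sampling Poisson(lam); fine for small lam."""
    threshold = math.exp(-lam)
    k, p = 0, 1.0
    while p > threshold:
        k += 1
        p *= rng.random()
    return k - 1

def bivariate_poisson(lam1, lam2, lam0, rng):
    """Trivariate reduction: X = X1 + X0, Y = X2 + X0 with independent
    Poisson components, so E[X] = lam1 + lam0, E[Y] = lam2 + lam0,
    and Cov(X, Y) = lam0."""
    common = poisson(lam0, rng)
    return poisson(lam1, rng) + common, poisson(lam2, rng) + common
```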

20.
The posterior predictive p value (ppp) was invented as a Bayesian counterpart to classical p values. The methodology can be applied to discrepancy measures involving both data and parameters and can, hence, be targeted to check for various modeling assumptions. The interpretation can, however, be difficult since the distribution of the ppp value under modeling assumptions varies substantially between cases. A calibration procedure has been suggested, treating the ppp value as a test statistic in a prior predictive test. In this paper, we suggest that a prior predictive test may instead be based on the expected posterior discrepancy, which is somewhat simpler, both conceptually and computationally. Since both these methods require the simulation of a large posterior parameter sample for each of an equally large prior predictive data sample, we furthermore suggest to look for ways to match the given discrepancy by a computation-saving conflict measure. This approach is also based on simulations but only requires sampling from two different distributions representing two contrasting information sources about a model parameter. The conflict measure methodology is also more flexible in that it handles non-informative priors without difficulty. We compare the different approaches theoretically in some simple models and in a more complex applied example.
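The ppp value being discussed is a Monte Carlo tail probability over posterior draws; a minimal sketch of its standard computation (this is the ppp itself, not the paper's conflict-measure alternative; function names are illustrative):

```python
def posterior_predictive_pvalue(y_obs, posterior_draws, simulate, discrepancy, rng):
    """Monte Carlo estimate of the posterior predictive p value:
    P( D(y_rep, theta) >= D(y_obs, theta) ), theta drawn from the posterior.

    simulate(theta, rng)     -> replicated data set y_rep
    discrepancy(data, theta) -> scalar discrepancy measure
    """
    count = 0
    for theta in posterior_draws:
        y_rep = simulate(theta, rng)
        if discrepancy(y_rep, theta) >= discrepancy(y_obs, theta):
            count += 1
    return count / len(posterior_draws)
```

Because the discrepancy may depend on theta as well as the data, this covers the parameter-dependent checks the abstract mentions; the calibration problem it raises is precisely that this quantity is not uniformly distributed under the model.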
