Similar documents
20 similar documents retrieved.
1.
Bayesian Survival Analysis Using Bernstein Polynomials
Bayesian survival analysis of right-censored survival data is studied using priors based on Bernstein polynomials together with Markov chain Monte Carlo methods. These priors easily incorporate geometric information, such as convexity or an initial guess about the cumulative hazard function, select only smooth functions, can have sufficiently large support, and are easy to specify and generate. Certain frequentist asymptotic properties of the posterior distribution are established. Simulation studies indicate that these Bayes methods perform satisfactorily.
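For readers who want a concrete picture of this construction, the following sketch (not taken from the paper) draws one prior realization of a cumulative hazard built as a Bernstein polynomial with non-decreasing coefficients; the degree, time horizon and gamma increments are illustrative assumptions.

```python
import numpy as np
from math import comb

rng = np.random.default_rng(0)

def bernstein_cum_hazard(t, beta, T):
    """Evaluate H(t) = sum_k beta_k * b_{k,m}(t/T), a Bernstein polynomial
    of degree m = len(beta) - 1 on [0, T]."""
    m = len(beta) - 1
    x = np.clip(np.asarray(t, dtype=float) / T, 0.0, 1.0)
    basis = np.array([comb(m, k) * x**k * (1 - x)**(m - k) for k in range(m + 1)])
    return beta @ basis

# One prior realization: non-decreasing coefficients (positive gamma increments)
# give a non-decreasing cumulative hazard, as required.
m, T = 8, 10.0
increments = rng.gamma(shape=1.0, scale=0.3, size=m + 1)  # assumed hyperparameters
beta = np.cumsum(increments)

grid = np.linspace(0.0, T, 6)
print("H(t) on grid:", np.round(bernstein_cum_hazard(grid, beta, T), 3))
```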

2.
Hazard rate estimation is an alternative to density estimation for positive variables and is of particular interest when the variables are times to event. In particular, it is shown here that hazard rate estimation is useful for seismic hazard assessment. This paper proposes a simple but flexible Bayesian method for non-parametric hazard rate estimation, based on building the prior hazard rate as the convolution mixture of a Gaussian kernel with an exponential jump-size compound Poisson process. Conditions are given for a compound Poisson process prior to be well defined and to select smooth hazard rates, an elicitation procedure is devised to assign a constant prior expected hazard rate while controlling prior variability, and a Markov chain Monte Carlo approximation of the posterior distribution is obtained. Finally, the suggested method is validated in a simulation study, and some Italian seismic event data are analysed.
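As a rough illustration of the prior construction described above, the sketch below draws one hazard realization as a Gaussian kernel smoothed over an exponential jump-size compound Poisson process; the Poisson rate, jump mean and kernel bandwidth are made-up values, not the paper's elicited choices.

```python
import numpy as np

rng = np.random.default_rng(1)

def sample_prior_hazard(T, poisson_rate, jump_mean, sigma):
    """Draw one prior hazard realization
    h(t) = sum_j J_j * phi_sigma(t - tau_j),
    where {tau_j} are points of a Poisson process on an interval slightly
    larger than [0, T] (to reduce edge effects) and J_j are exponential jumps."""
    lo, hi = -3 * sigma, T + 3 * sigma
    n = rng.poisson(poisson_rate * (hi - lo))
    tau = rng.uniform(lo, hi, size=n)           # jump locations
    jumps = rng.exponential(jump_mean, size=n)  # exponential jump sizes

    def hazard(t):
        t = np.atleast_1d(np.asarray(t, dtype=float))
        kern = np.exp(-0.5 * ((t[:, None] - tau[None, :]) / sigma) ** 2)
        kern /= sigma * np.sqrt(2 * np.pi)
        return kern @ jumps

    return hazard

h = sample_prior_hazard(T=50.0, poisson_rate=0.2, jump_mean=0.1, sigma=2.0)
print(np.round(h([0.0, 10.0, 25.0, 50.0]), 4))
```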

3.
Bayesian analysis of system failure data under a competing-failure framework is considered when the failure causes have not been exactly identified but only narrowed down to a subset of all potential failure causes. The usual assumption of independence among failure causes is relaxed. We obtain the posterior distribution of the joint survival function, assuming a Dirichlet process prior, and derive the limiting posterior distribution. We show that the posterior estimate of the reliability of the series system, the case of practical interest, is consistent. A numerical example shows that our approach is feasible.

4.
In estimating individual choice behaviour from multivariate aggregate choice data, the method of data augmentation requires the imputation of individual choices given their partial sums. This article proposes and develops an efficient procedure for simulating multivariate individual choices given their aggregate sums, capitalizing on a sequence of auxiliary distributions. In this framework, the joint distribution of multiple binary vectors given their sums is approximated by a sequence of conditional Bernoulli distributions. The proposed approach is evaluated through a simulation study and applied to a political science study.
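A minimal sketch of the core idea, sampling one binary vector conditionally on its sum through a sequence of conditional Bernoulli draws, is given below; the Poisson-binomial recursion and the example probabilities are illustrative and not taken from the article.

```python
import numpy as np

rng = np.random.default_rng(2)

def poisson_binomial_pmf(probs):
    """pmf of the sum of independent Bernoulli(probs) variables (simple DP)."""
    pmf = np.array([1.0])
    for p in probs:
        new = np.zeros(len(pmf) + 1)
        new[:-1] += pmf * (1 - p)
        new[1:] += pmf * p
        pmf = new
    return pmf

def sample_given_sum(probs, total):
    """Draw x_1..x_n with x_i ~ Bernoulli(probs[i]) independently, conditioned
    on sum(x) == total, one coordinate at a time (conditional Bernoulli)."""
    probs = list(probs)
    x, remaining = [], total
    for i, p in enumerate(probs):
        rest = probs[i + 1:]
        pmf_rest = poisson_binomial_pmf(rest)
        # P(x_i = 1 | remaining) is proportional to p * P(sum(rest) = remaining - 1).
        w1 = p * (pmf_rest[remaining - 1] if 1 <= remaining <= len(rest) + 1 else 0.0)
        w0 = (1 - p) * (pmf_rest[remaining] if remaining <= len(rest) else 0.0)
        xi = int(rng.random() < w1 / (w0 + w1))
        x.append(xi)
        remaining -= xi
    return x

# Example: impute one individual-level binary vector whose entries must sum to 3.
probs = [0.2, 0.7, 0.4, 0.9, 0.3]
print(sample_given_sum(probs, total=3), "sums to 3")
```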

5.
Bayesian analysis for a simple but widely applied dynamic programming model is obtained. The setting is the prototypical job-search model. The general case of wage and duration data, with potential censoring, is studied. The optimality condition implied by the dynamic programming setup is fully imposed. The posterior distribution reveals a “ridge” reflecting the characteristically nonstandard nature of the inference problem. Marginal distributions and moments are obtained in a canonical parameterization after a suitable approximation, and the adequacy of the approximation is easily assessed. Simulation is applied to study alternative parameterizations and prior robustness, and to facilitate prior elicitation. Finally, we illustrate the applicability of our methods by giving posterior distributions for the elasticities of unemployment durations and re-employment wages with respect to unemployment income. Our analysis is easy to implement and all computations are simple to perform.

6.
We consider hierarchical Bayesian models for the change-point problem in a sequence of random variables drawn from either a normal or a skew-normal population. We further consider the problem of detecting observations that are influential for the change point, using Bayes factors. The proposed models are illustrated with a real data example, the annual flow volumes of the Nile River at Aswan from 1871 to 1970. The results identify the observation for the year 1888 as the most influential among the outliers. We show that it is useful to measure the influence of observations on Bayes factors, and we also consider the effect of omitting a single observation.

7.
Based on a Bayesian framework that places a Gaussian prior on the univariate nonparametric link function and an asymmetric Laplace distribution (ALD) on the residuals, we develop a Bayesian treatment of the Tobit quantile single-index regression model (TQSIM). Using the location-scale mixture representation of the ALD, posterior inference for the latent variables and the other parameters is carried out via Markov chain Monte Carlo. TQSIM broadens the scope of applicability of Tobit models by accommodating nonlinearity in the data. The proposed method is illustrated with two simulation examples and a labour supply dataset.
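The location-scale mixture representation referred to above is a standard device in Bayesian quantile regression; the toy check below (our own sketch, not the authors' code) simulates the mixture for one quantile level and verifies two known properties of the asymmetric Laplace distribution.

```python
import numpy as np

rng = np.random.default_rng(3)

def ald_mixture_draws(p, n):
    """Draws from ALD(0, 1, p) via its normal-exponential mixture:
    W = theta * Z + tau * sqrt(Z) * U,  Z ~ Exp(1),  U ~ N(0, 1)."""
    theta = (1 - 2 * p) / (p * (1 - p))
    tau = np.sqrt(2 / (p * (1 - p)))
    z = rng.exponential(1.0, size=n)
    u = rng.standard_normal(n)
    return theta * z + tau * np.sqrt(z) * u, theta

p = 0.25
w, theta = ald_mixture_draws(p, 200_000)
print("P(W <= 0) ~", round(float(np.mean(w <= 0)), 3), "(should be near", p, ")")
print("mean(W)   ~", round(float(w.mean()), 3), "(should be near", round(theta, 3), ")")
```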

8.
Motivated by a longitudinal oral health study, the Signal-Tandmobiel® study, a Bayesian approach is developed for modelling misclassified ordinal response data. Two regression models, probit and logit, are considered to incorporate misclassification in the categorical response. Computational difficulties are avoided by data augmentation, an idea that is exploited to derive efficient Markov chain Monte Carlo methods. Although the method is proposed for ordered categories, it can also be implemented for unordered ones in a straightforward way. Model performance is shown through a simulation-based example and the analysis of the motivating study.

9.
Non-parametric Tests for Recurrent Events under Competing Risks
We consider a data set on nosocomial infections of patients hospitalized in a French intensive care facility. Patients may suffer from recurrent infections of different types, and they also have a high risk of death. To deal with such situations, a model of recurrent events with competing risks and a terminal event is introduced. Our aim is to compare the occurrence rates of two types of events. For this purpose, we propose two tests: one to detect whether the occurrence rate of a given type of event increases with time, and a second to detect whether the instantaneous probability of experiencing an event of a given type is always greater than that of another type. The asymptotic properties of the test statistics are derived and Monte Carlo methods are used to study the power of the tests. Finally, the procedures developed are applied to the French nosocomial infections data set.

10.
We investigate marked non-homogeneous Poisson processes using finite mixtures of bivariate normal components to model the spatial intensity function. We employ a Bayesian hierarchical framework for estimation of the parameters in the model, and propose an approach for including covariate information in this context. The methodology is exemplified through an application involving modeling of and inference for tornado occurrences.
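To make the intensity specification concrete, the sketch below evaluates a two-component bivariate normal mixture intensity and simulates one realization of the resulting spatial Poisson process; all mixture weights, means and covariances are invented for illustration, not fitted values.

```python
import numpy as np
from scipy.stats import multivariate_normal

rng = np.random.default_rng(4)

# Illustrative two-component mixture for the spatial intensity surface:
# lambda(s) = total_rate * sum_k w_k * N(s; mu_k, Sigma_k).
total_rate = 300.0
weights = np.array([0.6, 0.4])
means = [np.array([0.0, 0.0]), np.array([3.0, 1.0])]
covs = [np.array([[1.0, 0.3], [0.3, 0.8]]), np.array([[0.5, 0.0], [0.0, 1.5]])]

def intensity(points):
    """Evaluate the mixture intensity at an (n, 2) array of locations."""
    dens = sum(w * multivariate_normal(m, c).pdf(points)
               for w, m, c in zip(weights, means, covs))
    return total_rate * dens

# Simulate one realization: draw the total count, then locations from the
# normalized mixture (component label first, then a bivariate normal draw).
n_points = rng.poisson(total_rate)
comp = rng.choice(len(weights), size=n_points, p=weights)
locs = np.array([rng.multivariate_normal(means[k], covs[k]) for k in comp])
print(n_points, "points; intensity at origin:",
      round(float(intensity(np.array([[0.0, 0.0]]))[0]), 2))
```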

11.
In this paper, we propose a new Bayesian inference approach to classification based on the hinge loss of classical support vector machines, which we call the Bayesian Additive Machine (BAM). Unlike existing approaches, the new model has a semiparametric discriminant function in which some feature effects are nonlinear and others are linear. This separation of features is achieved automatically during model fitting, without user pre-specification. Following the literature on sparse regression for high-dimensional models, the approach can also identify irrelevant features. By introducing spike-and-slab priors through two sets of indicator variables, these multiple goals are achieved simultaneously and automatically, without any parameter tuning such as cross-validation. An efficient partially collapsed Markov chain Monte Carlo algorithm is developed for posterior exploration, based on a data augmentation scheme for the hinge loss. Our simulations and three real data examples demonstrate that the new approach is a strong competitor to recently proposed methods for challenging high-dimensional classification problems.
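The following sketch illustrates how two sets of spike-and-slab indicator variables could encode a three-way split of features into irrelevant, linear and nonlinear effects; the inclusion probabilities and the labelling scheme are our assumptions for illustration, not the BAM prior specification.

```python
import numpy as np

rng = np.random.default_rng(5)

# Two layers of indicators: "include" plays the spike-and-slab role (feature
# relevant or not); "nonlinear" decides, for included features, whether the
# effect enters the linear or the nonparametric part of the discriminant.
n_features = 10
p_include = 0.5      # assumed prior probability a feature matters at all
p_nonlinear = 0.5    # assumed prior probability of a nonlinear effect, given inclusion

include = rng.random(n_features) < p_include
nonlinear = include & (rng.random(n_features) < p_nonlinear)

for j in range(n_features):
    if not include[j]:
        label = "irrelevant (spike)"
    elif nonlinear[j]:
        label = "nonlinear effect (slab, nonparametric part)"
    else:
        label = "linear effect (slab, linear part)"
    print(f"feature {j}: {label}")
```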

12.
In this paper, we consider the Bayesian inference of the unknown parameters of the randomly censored Weibull distribution. A joint conjugate prior on the model parameters does not exist; we assume that the parameters have independent gamma priors. Since closed-form expressions for the Bayes estimators cannot be obtained, we use Lindley's approximation, importance sampling and Gibbs sampling techniques to obtain the approximate Bayes estimates and the corresponding credible intervals. A simulation study is performed to observe the behaviour of the proposed estimators. A real data analysis is presented for illustrative purposes.
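As one concrete example of the approximation techniques mentioned, the sketch below uses self-normalized importance sampling, with the gamma priors as the proposal, to approximate posterior means for a right-censored Weibull model; the parameterization, hyperparameters and simulated data are illustrative assumptions, not the paper's settings.

```python
import numpy as np

rng = np.random.default_rng(6)

# Assumed parameterization: density f(t | a, lam) = a * lam * t**(a-1) * exp(-lam * t**a),
# survival S(t) = exp(-lam * t**a); independent gamma priors on shape a and rate lam.

def log_likelihood(a, lam, t, delta):
    """delta = 1 for observed failures, 0 for right-censored times."""
    log_f = np.log(a) + np.log(lam) + (a - 1) * np.log(t) - lam * t**a
    log_S = -lam * t**a
    return np.sum(delta * log_f + (1 - delta) * log_S)

# Simulated data: true shape 1.5, rate 0.2, with random right-censoring.
n = 80
true_a, true_lam = 1.5, 0.2
t_fail = rng.exponential(1.0 / true_lam, n) ** (1.0 / true_a)
t_cens = rng.exponential(6.0, n)
t = np.minimum(t_fail, t_cens)
delta = (t_fail <= t_cens).astype(float)

# Importance sampling with the priors as proposal: the weights reduce to the
# likelihood, so the posterior mean is a likelihood-weighted prior average.
M = 20_000
a_draws = rng.gamma(2.0, 1.0, M)     # assumed prior / proposal for the shape
lam_draws = rng.gamma(2.0, 0.5, M)   # assumed prior / proposal for the rate
log_w = np.array([log_likelihood(a, l, t, delta) for a, l in zip(a_draws, lam_draws)])
w = np.exp(log_w - log_w.max())
w /= w.sum()
print("posterior mean shape ~", round(float(np.sum(w * a_draws)), 3))
print("posterior mean rate  ~", round(float(np.sum(w * lam_draws)), 3))
```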

13.
A new procedure is proposed for deriving variable bandwidths in univariate kernel density estimation, based upon likelihood cross-validation and an analysis of a Bayesian graphical model. The procedure admits bandwidth selection that is flexible in terms of the amount of smoothing required. In addition, the basic model can be extended to incorporate local smoothing of the density estimate. The method is shown to perform well in both theoretical and practical situations, and we compare our method with those of Abramson (The Annals of Statistics 10: 1217–1223) and Sain and Scott (Journal of the American Statistical Association 91: 1525–1534). In particular, we note that in certain cases the Sain and Scott method performs poorly even with relatively large sample sizes. We compare various bandwidth selection methods using standard mean integrated square error criteria to assess the quality of the density estimates. We study situations where the underlying density is assumed both known and unknown, and note that in practice our method performs well when sample sizes are small. In addition, we apply the methods to real data, and again we believe our methods perform at least as well as existing methods.
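For orientation, the sketch below computes the leave-one-out likelihood cross-validation score for a single global bandwidth, the criterion from which variable-bandwidth procedures of this kind start; the Gaussian kernel, the bandwidth grid and the simulated data are illustrative choices, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(7)

def lcv_score(x, h):
    """Leave-one-out log-likelihood of a Gaussian kernel density estimate
    with a single global bandwidth h."""
    n = len(x)
    d = (x[:, None] - x[None, :]) / h
    k = np.exp(-0.5 * d**2) / np.sqrt(2 * np.pi)
    np.fill_diagonal(k, 0.0)                      # leave one out
    loo_dens = k.sum(axis=1) / ((n - 1) * h)
    return np.sum(np.log(loo_dens))

# Bimodal toy sample; pick the bandwidth that maximizes the LCV criterion.
x = np.concatenate([rng.normal(0, 1, 150), rng.normal(4, 0.5, 50)])
grid = np.linspace(0.1, 1.5, 60)
scores = [lcv_score(x, h) for h in grid]
print("LCV-selected bandwidth:", round(float(grid[int(np.argmax(scores))]), 3))
```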

14.
In the study of earthquakes, several aspects of the underlying physical process, such as the time non-stationarity of the process, are not yet well understood, because we lack clear indications about its evolution in time. Taking as our point of departure the theory that the seismic process evolves in phases with different activity patterns, we have attempted to identify these phases through the variations in the interevent time probability distribution within the framework of the multiple-changepoint problem. In a nonparametric Bayesian setting, the distribution under examination has been considered a random realization from a mixture of Dirichlet processes, the parameter of which is proportional to a generalized gamma distribution. In this way we could avoid making precise assumptions about the functional form of the distribution. The number and location in time of the phases are unknown and are estimated at the same time as the interevent time distributions. We have analysed the sequence of main shocks that occurred in Irpinia, a particularly active area in southern Italy: the method consistently identifies changepoints at times when strong stress releases were recorded. The estimation problem can be solved by stochastic simulation methods based on Markov chains, the implementation of which is improved, in this case, by the good analytical properties of the Dirichlet process.

15.
We consider a non-centered parameterization of the standard random-effects model, which is based on the Cholesky decomposition of the variance-covariance matrix. The regression-type structure of the non-centered parameterization allows us to use Bayesian variable selection methods for covariance selection. We search for a parsimonious variance-covariance matrix by identifying the non-zero elements of the Cholesky factors. With this method we are able to learn from the data, for each effect, whether it is random or not, and whether covariances among random effects are zero. An application in marketing shows a substantial reduction of the number of free elements in the variance-covariance matrix.
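A small sketch of the Cholesky-based parameterization is given below: writing the random effects as b_i = C z_i with standard normal z_i makes zeros in C directly interpretable, an all-zero row meaning the corresponding effect is not random; the particular zero pattern shown is invented for illustration.

```python
import numpy as np

# Random effects are written as b_i = C @ z_i with z_i ~ N(0, I), so the implied
# covariance is C C^T and zeros in C directly encode structure.
C = np.array([
    [0.8, 0.0, 0.0],
    [0.3, 0.5, 0.0],
    [0.0, 0.0, 0.0],   # an all-zero row: the third effect is not random
])

rng = np.random.default_rng(8)
z = rng.standard_normal((1000, 3))
b = z @ C.T                      # draws of the random effects

print("implied covariance C C^T:\n", np.round(C @ C.T, 3))
print("sample covariance of draws:\n", np.round(np.cov(b, rowvar=False), 3))
```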

16.
The evaluation of the performance of a continuous diagnostic measure is a commonly encountered task in medical research. We develop Bayesian non-parametric models that use Dirichlet process mixtures and mixtures of Polya trees for the analysis of continuous serologic data. The modelling approach differs from traditional approaches to the analysis of receiver operating characteristic curve data in that it incorporates a stochastic ordering constraint for the distributions of serologic values for the infected and non-infected populations. Biologically such a constraint is virtually always feasible because serologic values from infected individuals tend to be higher than those for non-infected individuals. The models proposed provide data-driven inferences for the infected and non-infected population distributions, and for the receiver operating characteristic curve and corresponding area under the curve. We illustrate and compare the predictive performance of the Dirichlet process mixture and mixture of Polya trees approaches by using serologic data for Johne's disease in dairy cattle.

17.
This paper describes Bayesian inference and prediction for the two-parameter Weibull distribution when the data are Type-II censored. The aim of the paper is twofold. First, we consider Bayesian inference of the unknown parameters under different loss functions. The Bayes estimates cannot be obtained in closed form, so we use a Gibbs sampling procedure to draw Markov chain Monte Carlo (MCMC) samples, which are used to compute the Bayes estimates and to construct symmetric credible intervals. Second, we consider Bayes prediction of the future order statistics based on the observed sample. We consider the posterior predictive density of the future observations and construct a predictive interval with a given coverage probability. Monte Carlo simulations are performed to compare the different methods, and one data analysis is presented for illustrative purposes.

18.
Hierarchical models enable the encoding of a variety of parametric structures. However, when presented with a large number of covariates upon which some component of a model hierarchy depends, the modeller may be unwilling or unable to specify a form for that dependence. Data-mining methods are designed to automatically discover relationships between many covariates and a response surface, easily accommodating non-linearities and higher-order interactions. We present a method of wrapping hierarchical models around data-mining methods, preserving the best qualities of the two paradigms. We fit the resulting semi-parametric models using an approximate Gibbs sampler called HEBBRU. Using a simulated dataset, we show that HEBBRU is useful for exploratory analysis and displays excellent predictive accuracy. Finally, we apply HEBBRU to an ornithological dataset drawn from the eBird database.

19.
Because of the high reliability and high testing cost of electro-explosive devices, one may observe very few failures, or even no failures at all, due to censoring, even when an accelerated test is performed. In this paper, we model the reliability of such devices with an exponential lifetime distribution in which the failure rate is assumed to be a function of covariates and the observed data are binary. The Bayesian approach, with three different prior settings, is used to develop inference on the failure rate, the lifetime and the reliability under several settings. A Monte Carlo simulation study is carried out to show that this approach is useful and well suited to analysing data of this form, especially when the failure rates are very small. Finally, illustrative data are analysed using this approach.
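The sketch below illustrates the kind of binary-data likelihood described here, with an exponential lifetime whose failure rate depends log-linearly on a covariate, fitted by a simple random-walk Metropolis sampler; the link function, the normal priors and the simulated data are our assumptions rather than the paper's three prior settings.

```python
import numpy as np

rng = np.random.default_rng(9)

# Each test records only whether the device failed by its test time t, so
# P(failure by t | x) = 1 - exp(-lam(x) * t) with lam(x) = exp(b0 + b1 * x).

def log_post(beta, x, t, y):
    lam = np.exp(beta[0] + beta[1] * x)
    p_fail = np.clip(1.0 - np.exp(-lam * t), 1e-12, 1 - 1e-12)
    loglik = np.sum(y * np.log(p_fail) + (1 - y) * np.log(1 - p_fail))
    logprior = -0.5 * np.sum(beta**2) / 10.0      # assumed N(0, 10) priors
    return loglik + logprior

# Simulated accelerated-test data with few failures, as in the motivating setting.
n = 60
x = rng.uniform(0.0, 2.0, n)                      # e.g. a stress covariate
t = np.full(n, 5.0)                               # common test duration
true_lam = np.exp(-4.0 + 1.2 * x)
y = (rng.random(n) < 1.0 - np.exp(-true_lam * t)).astype(float)

# Random-walk Metropolis over beta = (b0, b1).
beta, draws = np.zeros(2), []
for it in range(20_000):
    prop = beta + 0.15 * rng.standard_normal(2)
    if np.log(rng.random()) < log_post(prop, x, t, y) - log_post(beta, x, t, y):
        beta = prop
    draws.append(beta.copy())
post = np.array(draws[5_000:])
print("observed failures:", int(y.sum()), "of", n)
print("posterior means (b0, b1):", np.round(post.mean(axis=0), 2))
```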

20.
We propose an estimation procedure for time-series regression models under the Bayesian inference framework. With the exact method of Wise [Wise, J. (1955). The autocorrelation function and spectral density function. Biometrika, 42, 151–159], an exact likelihood function can be obtained instead of the likelihood conditional on the initial observations. The constraints on the parameter space arising from the stationarity conditions are handled by a reparametrization, which was not taken into consideration by Chib [Chib, S. (1993). Bayes regression with autoregressive errors: A Gibbs sampling approach. J. Econometrics, 58, 275–294] or Chib and Greenberg [Chib, S. and Greenberg, E. (1994). Bayes inference in regression model with ARMA(p, q) errors. J. Econometrics, 64, 183–206]. Simulation studies show that our method leads to better inferential results than theirs.
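One standard way to impose the stationarity constraint through a reparametrization, which may differ in detail from the transform used in this paper, is to work with partial autocorrelations in (-1, 1) and map them to AR coefficients by a Durbin-Levinson-type recursion, as sketched below.

```python
import numpy as np

rng = np.random.default_rng(10)

def pacf_to_ar(pacf):
    """Map partial autocorrelations r_k in (-1, 1) to the coefficients of a
    stationary AR(p) via the Durbin-Levinson-type recursion."""
    phi = np.array([pacf[0]])
    for k in range(1, len(pacf)):
        r = pacf[k]
        phi = np.concatenate([phi - r * phi[::-1], [r]])
    return phi

# Unconstrained parameters (e.g. sampled by MCMC) are squashed into (-1, 1)...
raw = rng.standard_normal(3)
pacf = np.tanh(raw)
phi = pacf_to_ar(pacf)

# ...and the resulting AR polynomial 1 - phi_1 z - ... - phi_p z^p always has
# its roots outside the unit circle, i.e. the process is stationary.
roots = np.roots(np.concatenate([-phi[::-1], [1.0]]))
print("AR coefficients:", np.round(phi, 3))
print("min |root| of AR polynomial:", round(float(np.min(np.abs(roots))), 3), "(> 1)")
```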
