Similar Articles
20 similar articles found.
1.
The Bayesian analysis based on the partial likelihood for Cox's proportional hazards model is frequently used because of its simplicity. The Bayesian partial likelihood approach is often justified by showing that it approximates the full Bayesian posterior of the regression coefficients under a diffuse prior on the baseline hazard function. This justification, however, may not hold when ties exist among the uncensored observations; in that case, the full Bayesian posterior and the Bayesian partial likelihood posterior can differ substantially. In this paper, we propose a new Bayesian partial likelihood approach for data with many tied observations and justify its use.
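
To make the quantity at issue concrete, here is a minimal numpy sketch of the Cox partial log-likelihood with Breslow's approximation for tied event times; it illustrates the partial likelihood being approximated in the presence of ties, not the paper's proposed posterior, and the toy data are illustrative assumptions.

```python
import numpy as np

def cox_partial_loglik_breslow(beta, times, events, X):
    """Cox partial log-likelihood with Breslow's approximation for ties.
    times: (n,) observed times; events: (n,) 1 = event, 0 = censored; X: (n, p)."""
    beta = np.asarray(beta, dtype=float)
    eta = X @ beta                            # linear predictors
    loglik = 0.0
    for t in np.unique(times[events == 1]):
        tied = (times == t) & (events == 1)   # events tied at time t
        at_risk = times >= t                  # risk set at time t
        d = tied.sum()
        # Breslow: each of the d tied events shares the full risk-set sum
        loglik += eta[tied].sum() - d * np.log(np.exp(eta[at_risk]).sum())
    return loglik

# toy example: rounding the times creates ties
rng = np.random.default_rng(0)
X = rng.normal(size=(50, 2))
beta_true = np.array([0.5, -0.3])
times = np.round(rng.exponential(scale=np.exp(-X @ beta_true)), 1)
events = rng.random(50) < 0.8
print(cox_partial_loglik_breslow(beta_true, times, events, X))
```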

2.
This article presents Bayesian inference for the parameters of the randomly censored Burr-type XII distribution with proportional hazards. Since a joint conjugate prior for the model parameters does not exist, we consider two different systems of priors for Bayesian estimation. Closed-form expressions for the Bayes estimators are not available, so we use Lindley's method to obtain the Bayes estimates. Because Lindley's method does not yield Bayesian credible intervals, we suggest a Gibbs sampling procedure for this purpose. Numerical experiments are performed to check the properties of the different estimators, and the proposed methodology is applied to real-life data for illustrative purposes. The Bayes estimators are compared with the maximum likelihood estimators via the numerical experiments and the real data analysis, and the model is validated using posterior predictive simulation to ascertain its appropriateness.
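
As a point of reference for the MLE comparison mentioned in the abstract, here is a minimal sketch of maximum likelihood estimation for randomly censored Burr XII data with S(t) = (1 + t^c)^(-k); the sample size, censoring distribution, and parameter values are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize

def burr12_negloglik(params, t, delta):
    """Negative log-likelihood for randomly censored Burr XII data.
    S(t) = (1 + t^c)^(-k); delta = 1 for an observed failure, 0 for censored."""
    c, k = np.exp(params)          # optimize on the log scale for positivity
    log_f = np.log(c) + np.log(k) + (c - 1) * np.log(t) - (k + 1) * np.log1p(t**c)
    log_S = -k * np.log1p(t**c)
    return -(delta * log_f + (1 - delta) * log_S).sum()

rng = np.random.default_rng(1)
c_true, k_true = 2.0, 1.5
u = rng.random(200)
x = ((1 - u) ** (-1 / k_true) - 1) ** (1 / c_true)   # inverse-CDF sample
cens = rng.exponential(2.0, size=200)                # random censoring times
t = np.minimum(x, cens)
delta = (x <= cens).astype(float)
res = minimize(burr12_negloglik, x0=np.zeros(2), args=(t, delta),
               method="Nelder-Mead")
print("MLE (c, k):", np.exp(res.x))
```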

3.
Bayesian modelling of spatial compositional data
Compositional data are vectors of proportions specifying fractions of a whole. Aitchison (1986) defines logistic normal distributions for compositional data by applying a logistic transformation and assuming the transformed data to be multivariate normally distributed. In this paper we generalize this idea to spatially varying compositional data and thereby define logistic Gaussian fields. We consider the model in a Bayesian framework and discuss appropriate prior distributions. We consider both complete observations and observations of subcompositions or individual proportions, and discuss the resulting posterior distributions. In general, the posterior cannot be handled analytically, but the Gaussian structure of the model allows us to define efficient Markov chain Monte Carlo algorithms. We use the model to analyse a data set of sediments in an Arctic lake. These data have been analysed previously, but without taking the spatial aspect into account.
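
The logistic-normal construction rests on a log-ratio transform between the simplex and Euclidean space; here is a minimal sketch of Aitchison's additive log-ratio (alr) map and its inverse, the transformed scale being where a Gaussian (or spatially correlated Gaussian) model would be placed. The reference-part choice and toy composition are illustrative.

```python
import numpy as np

def alr(x):
    """Additive log-ratio transform (Aitchison 1986): simplex -> R^(D-1),
    using the last component as the reference part."""
    x = np.asarray(x, dtype=float)
    return np.log(x[..., :-1] / x[..., -1:])

def alr_inv(y):
    """Inverse alr: R^(D-1) -> open simplex."""
    y = np.asarray(y, dtype=float)
    z = np.concatenate([np.exp(y), np.ones(y.shape[:-1] + (1,))], axis=-1)
    return z / z.sum(axis=-1, keepdims=True)

comp = np.array([0.2, 0.5, 0.3])       # a composition summing to 1
y = alr(comp)                          # model y as (spatially correlated) Gaussian
print(alr_inv(y))                      # recovers [0.2, 0.5, 0.3]
```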

4.
ABSTRACT

This paper considers posterior consistency in the context of high-dimensional variable selection using the Bayesian lasso. In a frequentist setting, consistency is perhaps the most basic property we expect any reasonable estimator to achieve; in a Bayesian setting, however, consistency is often ignored or taken for granted, especially in more complex hierarchical Bayesian models. In this paper, we derive sufficient conditions for posterior consistency in the Bayesian lasso model with an orthogonal design, where the number of parameters grows with the sample size.
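
For intuition about the model being analysed, here is a sketch of a standard Park–Casella-style Gibbs sampler for the Bayesian lasso with a fixed penalty lam; the hyperparameters and the fixed-penalty treatment are illustrative assumptions, and the paper's orthogonal-design setting corresponds to X'X being diagonal.

```python
import numpy as np
from scipy.stats import invgauss, invgamma

def bayesian_lasso_gibbs(X, y, lam=1.0, n_iter=2000, seed=0):
    """Gibbs sampler with Park-Casella-style conditionals; lam fixed here."""
    rng = np.random.default_rng(seed)
    n, p = X.shape
    beta, sigma2, tau2 = np.zeros(p), 1.0, np.ones(p)
    draws = np.empty((n_iter, p))
    XtX, Xty = X.T @ X, X.T @ y
    for it in range(n_iter):
        # beta | rest ~ N(A^{-1} X'y, sigma2 * A^{-1}),  A = X'X + diag(1/tau2)
        A = XtX + np.diag(1.0 / tau2)
        L = np.linalg.cholesky(np.linalg.inv(A))
        beta = np.linalg.solve(A, Xty) + np.sqrt(sigma2) * L @ rng.normal(size=p)
        # 1/tau2_j | rest ~ InverseGaussian(mean = lam*sqrt(sigma2)/|beta_j|, shape = lam^2)
        m = np.sqrt(lam**2 * sigma2 / np.maximum(beta**2, 1e-12))
        tau2 = 1.0 / invgauss.rvs(m / lam**2, scale=lam**2, random_state=rng)
        # sigma2 | rest ~ InvGamma((n-1+p)/2, (RSS + beta' D^{-1} beta)/2)
        resid = y - X @ beta
        sigma2 = invgamma.rvs((n - 1 + p) / 2.0,
                              scale=(resid @ resid + beta @ (beta / tau2)) / 2.0,
                              random_state=rng)
        draws[it] = beta
    return draws

rng = np.random.default_rng(1)
X = rng.normal(size=(100, 5))          # random Gaussian columns are nearly orthogonal
beta_true = np.array([2.0, 0.0, 0.0, -1.5, 0.0])
y = X @ beta_true + 0.5 * rng.normal(size=100)
print(bayesian_lasso_gibbs(X, y, lam=0.5)[1000:].mean(axis=0))
```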

5.
The mixed random-effects model is commonly used in longitudinal data analysis within either the frequentist or the Bayesian framework. Here we consider the case in which prior knowledge is available for some of the parameters but not for the rest. We therefore apply a hybrid approach to the random-effects model: the parameters with prior information are estimated via a Bayesian procedure, and the remaining parameters by frequentist maximum likelihood estimation (MLE), simultaneously on the same model. In practice, partial prior information is often available, for example on covariates such as age and gender; using this information yields more accurate estimates in the mixed random-effects model. A series of simulation studies was performed to compare the results with those of the commonly used random-effects model with and without partial prior information. The hybrid estimates (HYB) and the MLEs were very close to each other, but the estimated θ values under the partial-prior model (HYB) were closer to the true θ values and showed smaller variances than the MLEs obtained without prior information; relative to the true θ values, the mean squared errors were much smaller for HYB than for MLE. This advantage of HYB is most pronounced in longitudinal data with small sample sizes. The HYB and MLE methods are applied to real longitudinal data for illustration.

6.
We use a Bayesian multivariate time series model for the analysis of the dynamics of carbon monoxide atmospheric concentrations, observed at four sites. It is assumed that the logarithm of the observed process can be represented as the sum of unobservable components: a trend, a daily periodicity, a stationary autoregressive signal and an erratic term. Bayesian analysis is performed via Gibbs sampling. In particular, we consider the problem of joint temporal prediction when data are observed at only a few sites and it is not possible to fit a complex space–time model. A retrospective analysis of the trend component is also given; it is important because it explains the evolution of the variability in the observed process.
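
A simulation sketch of the assumed decomposition, with all variances, the AR coefficient, and the daily period chosen illustratively; the Gibbs sampler described in the abstract performs the inverse task of recovering such components from the observed sum.

```python
import numpy as np

rng = np.random.default_rng(7)
n, period = 24 * 14, 24                     # two weeks of hourly data, daily cycle

trend = np.cumsum(rng.normal(scale=0.01, size=n))            # slowly varying trend
season = 0.5 * np.sin(2 * np.pi * np.arange(n) / period)     # daily periodicity
ar = np.zeros(n)
for t in range(1, n):                                        # stationary AR(1) signal
    ar[t] = 0.7 * ar[t - 1] + rng.normal(scale=0.1)
noise = rng.normal(scale=0.05, size=n)                       # erratic term

log_co = trend + season + ar + noise    # log concentration = sum of the components
print(np.round(log_co[:5], 3))
```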

7.
In this paper, we consider Bayesian inference for the unknown parameters of the randomly censored Weibull distribution. A joint conjugate prior for the model parameters does not exist, so we assume that the parameters have independent gamma priors. Since closed-form expressions for the Bayes estimators cannot be obtained, we use Lindley's approximation, importance sampling and Gibbs sampling techniques to obtain approximate Bayes estimates and the corresponding credible intervals. A simulation study is performed to examine the behaviour of the proposed estimators, and a real data analysis is presented for illustrative purposes.
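
Of the three techniques named, importance sampling is the simplest to sketch: with the independent gamma priors as the proposal, posterior means are weighted averages over prior draws. The prior hyperparameters, true parameter values, and censoring rate below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

# randomly censored Weibull data: f(t) = a*lam*t^(a-1)*exp(-lam*t^a)
a_true, lam_true = 1.8, 0.5
x = rng.exponential(1 / lam_true, 300) ** (1 / a_true)   # inverse-transform sample
c = rng.exponential(2.5, 300)                            # random censoring times
t, delta = np.minimum(x, c), (x <= c)

def loglik(a, lam):
    """Censored Weibull log-likelihood; delta = 1 for failures, 0 for censored."""
    log_f = np.log(a) + np.log(lam) + (a - 1) * np.log(t) - lam * t**a
    log_S = -lam * t**a
    return np.where(delta, log_f, log_S).sum()

# importance sampling with the independent gamma priors as the proposal
M = 20000
a_s = rng.gamma(2.0, 1.0, M)       # illustrative prior/proposal for the shape a
lam_s = rng.gamma(2.0, 1.0, M)     # illustrative prior/proposal for the rate lam
logw = np.array([loglik(a, l) for a, l in zip(a_s, lam_s)])
w = np.exp(logw - logw.max()); w /= w.sum()
print("posterior means:", (w * a_s).sum(), (w * lam_s).sum())
```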

8.
Bayesian palaeoclimate reconstruction
We consider the problem of reconstructing prehistoric climates by using fossil data that have been extracted from lake sediment cores. Such reconstructions promise to provide one of the few ways to validate modern models of climate change. A hierarchical Bayesian modelling approach is presented, and its use in inverse inference is demonstrated in a relatively small but statistically challenging exercise: the reconstruction of prehistoric climate at Glendalough in Ireland from fossil pollen. This computationally intensive method extends current approaches by explicitly modelling uncertainty and reconstructing entire climate histories. The statistical issues raised relate to the use of compositional data (pollen) with covariates (climate), which are available at many modern sites but are missing for the fossil data. The compositional data arise as mixtures, and the missing covariates have a temporal structure. Novel aspects of the analysis include a spatial process model for compositional data, local modelling of lattice data, the use as a prior of a random walk with long-tailed increments, a two-stage implementation of the Markov chain Monte Carlo approach and a fast approximate procedure for cross-validation in inverse problems. We present some details, contrasting the method's reconstructions with those generated by a method in use in the palaeoclimatology literature. We suggest that the method provides a basis for resolving important challenges in palaeoclimate research, and we draw attention to several difficult statistical issues that remain to be overcome.

9.
Most existing reduced-form macroeconomic multivariate time series models employ elliptical disturbances, so the forecast densities they produce are symmetric. In this article, we use a copula model with asymmetric margins to produce forecast densities that allow for severe departures from symmetry. Empirical and skew t distributions are employed for the margins, and a high-dimensional Gaussian copula is used to capture jointly the cross-sectional and (multivariate) serial dependence. The copula parameter matrix is given by the correlation matrix of a latent stationary and Markov vector autoregression (VAR). We show that the likelihood can be evaluated efficiently using the unique partial correlations, and we estimate the copula using Bayesian methods. We examine the forecasting performance of the model for four U.S. macroeconomic variables between 1975:Q1 and 2011:Q2 using quarterly real-time data. The point and density forecasts from the copula model are competitive with those from a Bayesian VAR. During the recent recession the forecast densities exhibit substantial asymmetry, avoiding some of the pitfalls of the symmetric forecast densities from the Bayesian VAR. The asymmetries in the predictive distributions of GDP growth and inflation are similar to those found in the probabilistic forecasts from the Survey of Professional Forecasters. Finally, unlike the linear VAR model, the fitted Gaussian copula models exhibit nonlinear dependence between some macroeconomic variables.
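
The latent-VAR serial-dependence structure is beyond a short snippet, but the cross-sectional backbone, a Gaussian copula with empirical margins, is easy to sketch: map each series to normal scores through its empirical CDF and estimate the correlation of the scores. The toy data are illustrative.

```python
import numpy as np
from scipy.stats import norm, rankdata

def gaussian_copula_corr(data):
    """Fit a Gaussian copula with empirical margins: map each column to
    normal scores via its empirical CDF, then estimate the correlation."""
    n, k = data.shape
    u = np.column_stack([rankdata(data[:, j]) / (n + 1) for j in range(k)])
    z = norm.ppf(u)                      # normal scores
    return np.corrcoef(z, rowvar=False)  # copula correlation matrix

rng = np.random.default_rng(3)
x = rng.normal(size=500)
data = np.column_stack([x, np.exp(x) + rng.normal(scale=0.5, size=500)])
print(gaussian_copula_corr(data))       # captures the nonlinear dependence
```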

10.
Imprisonment levels vary widely across the United States, with some state imprisonment rates six times higher than others. Imposition of prison sentences also varies between counties within states, with previous research suggesting that covariates such as crime rate, unemployment level, racial composition, political conservatism, geographic region, and sentencing policies account for some of this variation. Other studies, using court data on individual felons, demonstrate how type of offense, demographics, criminal history, and case characteristics affect sentence severity. This article considers the effects of both county-level and individual-level covariates on whether a convicted felon receives a prison sentence rather than a jail or non-custodial sentence. We analyze felony court case processing data from May 1998 for 39 of the nation's most populous urban counties using a Bayesian hierarchical logistic regression model. By adopting a Bayesian approach, we are able to overcome a number of challenges. The model allows individual-level effects to vary by county, but relates these effects across counties using county-level covariates. We account for missing data using imputation via additional Gibbs sampling steps when estimating the model. Finally, we use posterior samples to construct novel predictor effect plots to aid communication of results to criminal justice policy-makers.

11.
ABSTRACT

In this paper, we consider an effective Bayesian inference procedure for the censored Student-t linear regression model, which is a robust alternative to the usual censored normal linear regression model. Based on the scale-mixture representation of the Student-t distribution, we propose a non-iterative Bayesian sampling procedure to obtain approximately independent and identically distributed samples from the observed-data posterior distribution, in contrast to iterative Markov chain Monte Carlo algorithms. We conduct model selection and influence analysis using the posterior samples to choose the best-fitting model and to detect latent outliers. We illustrate the performance of the procedure through simulation studies and apply it to two real data sets: insulation life data with right censoring and wage rate data with left censoring.
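
The key device is the scale-mixture representation of the Student-t distribution, which reduces t-error models to conditionally normal ones; a short numerical check, with illustrative values of nu and sigma:

```python
import numpy as np

rng = np.random.default_rng(4)

# If w ~ Gamma(nu/2, rate = nu/2) and e | w ~ N(0, sigma^2 / w),
# then marginally e ~ sigma * t_nu: the mixture representation that lets
# t-regression be handled with normal-theory conditional steps.
nu, sigma, n = 5.0, 2.0, 100000
w = rng.gamma(shape=nu / 2, scale=2 / nu, size=n)   # numpy scale = 1/rate
e = rng.normal(scale=sigma / np.sqrt(w))
print(e.var(), sigma**2 * nu / (nu - 2))            # both close to the t variance
```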

12.
We consider the problem of change-point detection in multivariate time series. The multivariate distribution of the observations is supposed to follow a graphical model whose graph and parameters are affected by abrupt changes over time. We demonstrate that exact Bayesian inference is possible whenever one considers a simple class of undirected graphs, spanning trees, as the possible structures. We are then able to integrate over the graph and segmentation spaces simultaneously by combining classical dynamic programming with algebraic results on spanning trees. In particular, we show that quantities such as the posterior distribution of the change-points or the posterior edge probabilities over time can be obtained efficiently. We illustrate our results on both synthetic and experimental data arising from biology and neuroscience.
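
The paper's exact integration over graphs and segmentations is involved, but the dynamic-programming skeleton for segmentation is compact; here is a sketch for the scalar Gaussian mean-change case, with the per-segment penalty as an illustrative stand-in for the model's marginal-likelihood segment scores.

```python
import numpy as np

def best_segmentation(y, penalty=10.0):
    """Exact O(n^2) dynamic program for change-points in the mean of a
    univariate Gaussian series."""
    n = len(y)
    csum, csum2 = np.cumsum(np.r_[0.0, y]), np.cumsum(np.r_[0.0, y**2])

    def cost(i, j):        # residual sum of squares of segment y[i:j]
        m = j - i
        return csum2[j] - csum2[i] - (csum[j] - csum[i]) ** 2 / m

    F = np.zeros(n + 1)
    last = np.zeros(n + 1, dtype=int)
    for j in range(1, n + 1):
        cands = [F[i] + cost(i, j) + penalty for i in range(j)]
        last[j] = int(np.argmin(cands))
        F[j] = cands[last[j]]
    cps, j = [], n          # backtrack the change-points
    while j > 0:
        if last[j] > 0:
            cps.append(last[j])
        j = last[j]
    return sorted(cps)

rng = np.random.default_rng(5)
y = np.r_[rng.normal(0, 1, 100), rng.normal(3, 1, 80), rng.normal(1, 1, 120)]
print(best_segmentation(y, penalty=15.0))   # roughly [100, 180]
```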

13.
ABSTRACT

Given a sample from a finite population, we provide a nonparametric Bayesian prediction interval for the finite population mean when a standard normality assumption may be tenuous. We do so using a Dirichlet process (DP), a nonparametric Bayesian procedure which is currently receiving much attention. An asymptotic Bayesian prediction interval is well known, but it does not incorporate all the features of the DP. We show how to compute the exact prediction interval under the full Bayesian DP model. However, under the DP, when the population size is much larger than the sample size, the computational task becomes expensive, so for simplicity one might still want useful and accurate approximations to the prediction interval. For this purpose, we provide a Bayesian procedure that approximates the distribution using the exchangeability property (correlation) of the DP together with normality. We compare the exact interval and our approximate interval with three standard intervals: the design-based interval under simple random sampling, an empirical Bayes interval, and a moment-based interval that uses the mean and variance under the DP. These latter three intervals do not fully utilize the posterior distribution of the finite population mean under the DP. Using several numerical examples and a simulation study, we show that our approximate Bayesian interval is a good competitor to the exact Bayesian interval for different combinations of sample and population sizes.
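
The exact interval's core computation can be sketched by Monte Carlo: under the DP posterior, the N - n unsampled units follow a Pólya urn started from the observed sample, so simulated population means give the interval directly. The concentration alpha = 1 and the data-dependent normal base measure below are illustrative assumptions.

```python
import numpy as np

def dp_population_mean_draws(y, N, alpha=1.0, base_rvs=None, n_sims=2000, seed=0):
    """Simulate the posterior of a finite-population mean under a Dirichlet
    process: the N - n unsampled units are generated by the Polya urn,
    either repeating an existing value or drawing fresh from the base G0."""
    rng = np.random.default_rng(seed)
    if base_rvs is None:
        base_rvs = lambda: rng.normal(np.mean(y), np.std(y))  # illustrative G0
    n = len(y)
    means = np.empty(n_sims)
    for s in range(n_sims):
        vals = list(y)
        for m in range(n, N):
            if rng.random() < alpha / (alpha + m):
                vals.append(base_rvs())              # new value from G0
            else:
                vals.append(vals[rng.integers(m)])   # repeat an old value
        means[s] = np.mean(vals)
    return means

y = np.array([3.1, 2.4, 4.0, 3.3, 2.8, 3.9, 3.5, 2.6])
draws = dp_population_mean_draws(y, N=200)
print(np.percentile(draws, [2.5, 97.5]))   # 95% prediction interval
```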

14.
In this paper, we consider the Bayesian analysis of binary time series under different priors, namely normal, Student's t, and Jeffreys priors, and compare the results with frequentist methods through simulation experiments and a real data set of daily rainfall in inches at Mount Washington, NH. Among the Bayesian methods, our results show that the Jeffreys prior performs better in most situations for both the simulated and the rainfall data. Furthermore, among the weakly informative priors considered, the Student's t prior with 7 degrees of freedom fits the data most adequately.

15.
We propose a latent semi-parametric model for ordinal data in which a single-index model is used to evaluate the effects of the latent covariates on the latent response. We develop a Bayesian sampling-based method with free-knot splines to analyse the proposed model. As the index may vary over the whole real line, the traditional spline defined on a finite interval cannot be applied directly to approximate the unknown link function. We address this problem with a modified version that first transforms the index into the unit interval via a continuous cumulative distribution function and then constructs the spline bases on the unit interval. To obtain a rapidly convergent algorithm, we make use of partial collapse and parameter expansion and reparameterization techniques, improve the movement step of Bayesian free-knot splines so that all the knots can be relocated in each iteration instead of only one, and design a generalized Gibbs step. We examine the performance of the proposed model and estimation method in a simulation study and apply them to a real data set.

16.
ABSTRACT

This paper proposes a hysteretic autoregressive model with a GARCH specification and a skew Student's t error distribution for financial time series. With an integrated hysteresis zone, this model allows switching in both the conditional mean and the conditional volatility to be delayed when the hysteresis variable lies in the hysteresis zone. We perform Bayesian estimation via an adaptive Markov chain Monte Carlo sampling scheme. The proposed Bayesian method allows simultaneous inference for all unknown parameters, including the threshold values and a delay parameter. To implement model selection, we propose a numerical approximation of the marginal likelihoods needed to form posterior odds. The proposed methodology is illustrated using simulation studies and two major Asian stock basis series. We conduct a model comparison for variant hysteretic and threshold GARCH models based on posterior odds ratios, finding strong evidence of a hysteretic effect and of some asymmetric heavy-tailedness. Compared with multi-regime threshold GARCH models, this new class of models is more suitable for describing real data sets. Finally, we employ Bayesian forecasting methods in a Value-at-Risk study of the return series.
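
The defining mechanism is the hysteresis zone; here is a toy sketch of the implied regime path (the within-regime mean and GARCH recursions are omitted, and the zone bounds and delay d are illustrative assumptions).

```python
import numpy as np

def hysteretic_regimes(z, lower, upper, d=1):
    """Regime path under a hysteresis zone: the regime switches only when the
    d-lagged hysteresis variable exits the zone [lower, upper]; inside the
    zone, the previous regime is retained."""
    R = np.zeros(len(z), dtype=int)
    for t in range(1, len(z)):
        zt = z[t - d] if t - d >= 0 else z[0]
        if zt > upper:
            R[t] = 1
        elif zt < lower:
            R[t] = 0
        else:
            R[t] = R[t - 1]       # inside the zone: stay in the old regime
    return R

rng = np.random.default_rng(6)
z = np.cumsum(rng.normal(size=300)) * 0.1     # toy hysteresis variable
print(hysteretic_regimes(z, lower=-0.5, upper=0.5)[:20])
```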

17.
For the exploratory analysis of three-way data, the Tucker3 model is one of the most widely applied models for studying three-way arrays when the data are quadrilinear. When the data consist of vectors of positive values summing to one, as in the case of compositional data, the model should account for the specific problems that compositional data analysis brings. The main purpose of this paper is to describe how to carry out a Tucker3 analysis of compositional data and to show the relationships between the loading matrices when different preprocessing procedures are used.
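
One common log-ratio preprocessing route for compositional arrays is the centred log-ratio (clr) transform applied fibre-wise before the trilinear decomposition; since the choice of preprocessing is precisely the paper's topic, the sketch below illustrates only one such option.

```python
import numpy as np

def clr(x):
    """Centred log-ratio transform applied to the last mode of a three-way
    array of compositions (each fibre sums to one)."""
    lx = np.log(x)
    return lx - lx.mean(axis=-1, keepdims=True)

# a 5 x 3 x 4 array whose fibres along the last mode are compositions
X = np.random.default_rng(8).dirichlet(np.ones(4), size=(5, 3))
print(clr(X).sum(axis=-1).round(12))   # clr coordinates sum to zero
```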

18.
The modelling of discrete time series such as binary series, unlike that of continuous time series, is not easy, because there is no unique way to model the correlation structure of repeated binary data. Some models may also impose a complicated correlation structure with narrow ranges for the correlations. In this paper, we consider a nonlinear dynamic binary time series model that provides a correlation structure which is easy to interpret and whose correlations span the full −1 to 1 range. For the estimation of the parameters of this nonlinear model, we use a conditional generalized quasi-likelihood (CGQL) approach, which provides the same estimates as the well-known maximum likelihood approach. Furthermore, we consider a competing linear dynamic binary time series model and examine the performance of the CGQL approach through a simulation study in estimating the parameters of this linear model. The effects of model misspecification on estimation as well as forecasting are also examined through simulations.
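
The paper's nonlinear model is not reproduced here, but a linear dynamic conditional-probability model of the kind used as the competitor is easy to simulate; its exact specification in the paper may differ, so the form below is an assumption for illustration.

```python
import numpy as np

def simulate_ldcp(n, p, rho, seed=0):
    """Simulate a linear dynamic conditional-probability binary series:
    P(y_t = 1 | y_{t-1}) = p + rho * (y_{t-1} - p),
    which yields corr(y_t, y_{t-s}) = rho^s for admissible rho."""
    rng = np.random.default_rng(seed)
    y = np.empty(n, dtype=int)
    y[0] = rng.random() < p
    for t in range(1, n):
        y[t] = rng.random() < p + rho * (y[t - 1] - p)
    return y

y = simulate_ldcp(5000, p=0.3, rho=0.5)
print(y.mean(), np.corrcoef(y[:-1], y[1:])[0, 1])   # approx 0.3 and 0.5
```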

19.
We compare Bayesian and sampling-theory model specification criteria. For the Bayesian criteria we use the deviance information criterion (DIC) and the cumulative density of the mean squared errors of forecast; for the sampling-theory criterion we use the conditional Kolmogorov test (CKT). We use Markov chain Monte Carlo methods to obtain the Bayesian criteria and bootstrap sampling to obtain the conditional Kolmogorov test. The two non-nested models we consider are the CIR and Vasicek models for spot asset prices. Monte Carlo experiments show that the DIC performs better than the cumulative density of the mean squared errors of forecast and the CKT. According to the DIC and the mean squared errors of forecast, the CIR model explains the daily data on the uncollateralized Japanese call rate from January 1, 1990 to April 18, 1996; but according to the CKT, neither the CIR nor the Vasicek model explains these data.
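
For reference, the two non-nested short-rate models can be simulated with a simple Euler scheme (parameter values illustrative), which is the kind of building block a bootstrap implementation of the CKT would rely on.

```python
import numpy as np

def simulate_short_rate(model, r0, kappa, theta, sigma, T=1.0, n=250, seed=0):
    """Euler discretization of the Vasicek and CIR short-rate models:
      Vasicek: dr = kappa*(theta - r) dt + sigma dW
      CIR:     dr = kappa*(theta - r) dt + sigma*sqrt(r) dW"""
    rng = np.random.default_rng(seed)
    dt = T / n
    r = np.empty(n + 1)
    r[0] = r0
    for i in range(n):
        diff = sigma * (np.sqrt(max(r[i], 0.0)) if model == "CIR" else 1.0)
        r[i + 1] = r[i] + kappa * (theta - r[i]) * dt + diff * np.sqrt(dt) * rng.normal()
    return r

print(simulate_short_rate("CIR", 0.03, 0.5, 0.04, 0.1)[-5:])
print(simulate_short_rate("Vasicek", 0.03, 0.5, 0.04, 0.02)[-5:])
```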

20.

In this paper we consider a Bayesian analysis for an autoregressive model with random normal coefficients (RCA). For the proposed procedure we use conjugate priors for some parameters and improper vague priors for the others. Inference for the parameters is carried out via the Gibbs sampler, and convergence is assessed with multiple chains and the Gelman–Rubin criterion. Forecasts are based on the predictive density of future observations. Some remarks are also made regarding order determination and stationarity. Applications to simulated and real series are given.
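
A simulation sketch of the RCA(1) data-generating process itself, with parameter values chosen illustratively to satisfy the second-order stationarity condition beta^2 + sigma_b^2 < 1:

```python
import numpy as np

def simulate_rca1(n, beta, sigma_b, sigma_e, seed=0):
    """Simulate an RCA(1) process: y_t = (beta + b_t) * y_{t-1} + e_t with
    b_t ~ N(0, sigma_b^2) and e_t ~ N(0, sigma_e^2).  Second-order
    stationarity requires beta^2 + sigma_b^2 < 1."""
    rng = np.random.default_rng(seed)
    y = np.zeros(n)
    for t in range(1, n):
        y[t] = (beta + sigma_b * rng.normal()) * y[t - 1] + sigma_e * rng.normal()
    return y

y = simulate_rca1(500, beta=0.6, sigma_b=0.3, sigma_e=1.0)  # 0.36 + 0.09 < 1
print(y[:5])
```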
