Similar Articles
1.
We develop a Markov chain Monte Carlo algorithm, based on ‘stochastic search variable selection’ (George and McCulloch, 1993), for identifying promising log-linear models. The method may be used in the analysis of multi-way contingency tables where the set of plausible models is very large.
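As a rough illustration of the spike-and-slab mixture prior underlying stochastic search variable selection, the sketch below computes the conditional inclusion probability of a single coefficient under a two-component normal mixture. The function name and all default values are hypothetical, not taken from the paper.

```python
import math

def inclusion_prob(beta_j, tau=0.1, c=10.0, p=0.5):
    """Posterior probability that the inclusion indicator gamma_j = 1 given a
    drawn coefficient beta_j, under the SSVS-style mixture prior
    beta_j ~ (1 - gamma_j) N(0, tau^2) + gamma_j N(0, (c*tau)^2),
    with prior inclusion probability p."""
    def normal_pdf(x, sd):
        return math.exp(-0.5 * (x / sd) ** 2) / (sd * math.sqrt(2 * math.pi))
    slab = p * normal_pdf(beta_j, c * tau)        # diffuse "slab" component
    spike = (1 - p) * normal_pdf(beta_j, tau)     # concentrated "spike" component
    return slab / (slab + spike)
```

Coefficients near zero are attributed to the spike (exclusion), while large coefficients are attributed to the slab (inclusion); a Gibbs sampler would draw each indicator from this conditional probability.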

2.
Shoukri and Consul (1989) and Scollnik (1995) have previously considered the Bayesian analysis of an overdispersed generalized Poisson model. Scollnik (1995) also considered the Bayesian analysis of a mixture of an ordinary Poisson and an overdispersed generalized Poisson model. In this paper, we discuss the Bayesian analysis of these models when they are utilised in a regression context. Markov chain Monte Carlo methods are utilised, and an illustrative analysis is provided.

3.
The authors present theoretical results that show how one can simulate a mixture distribution whose components live in subspaces of different dimension by reformulating the problem in such a way that observations may be drawn from an auxiliary continuous distribution on the largest subspace and then transformed in an appropriate fashion. Motivated by the importance of enlarging the set of available Markov chain Monte Carlo (MCMC) techniques, the authors show how their results can be fruitfully employed in problems such as model selection (or averaging) of nested models, or regeneration of Markov chains for evaluating standard deviations of estimated expectations derived from MCMC simulations.

4.
5.
The authors consider the problem of simultaneous transformation and variable selection for linear regression. They propose a fully Bayesian solution to the problem, which allows averaging over all models considered, including transformations of the response and predictors. The authors use the Box‐Cox family of transformations to transform the response and each predictor. To deal with the change of scale induced by the transformations, the authors propose to focus on new quantities rather than the estimated regression coefficients. These quantities, referred to as generalized regression coefficients, have a similar interpretation to the usual regression coefficients on the original scale of the data, but do not depend on the transformations. This allows probabilistic statements about the size of the effect associated with each variable, on the original scale of the data. In addition to variable and transformation selection, there is also uncertainty involved in the identification of outliers in regression. Thus, the authors also propose a more robust model to account for such outliers based on a t‐distribution with unknown degrees of freedom. Parameter estimation is carried out using an efficient Markov chain Monte Carlo algorithm, which permits moves around the space of all possible models. Using three real data sets and a simulation study, the authors show that there is considerable uncertainty about variable selection, choice of transformation, and outlier identification, and that there is an advantage in dealing with all three simultaneously. The Canadian Journal of Statistics 37: 361–380; 2009 © 2009 Statistical Society of Canada
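The Box-Cox family used here has the usual form; a minimal sketch (the function name is illustrative, not from the paper):

```python
import math

def box_cox(y, lam):
    """Box-Cox transform: (y**lam - 1)/lam for lam != 0, and log(y) in the
    limit lam -> 0. Defined for y > 0; lam = 1 is an affine shift of the
    identity, so the data stay effectively on their original scale."""
    if abs(lam) < 1e-12:          # treat very small lam as the log case
        return math.log(y)
    return (y ** lam - 1.0) / lam
```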

6.
The article considers a Gaussian model with the mean and the variance modeled flexibly as functions of the independent variables. The estimation is carried out using a Bayesian approach that allows the identification of significant variables in the variance function, as well as averaging over all possible models in both the mean and the variance functions. The computation is carried out by a simulation method that is carefully constructed to ensure that it converges quickly and produces iterates from the posterior distribution that have low correlation. Real and simulated examples demonstrate that the proposed method works well. The method in this paper is important because (a) it produces more realistic prediction intervals than nonparametric regression estimators that assume a constant variance; (b) variable selection identifies the variables in the variance function that are important; (c) variable selection and model averaging produce more efficient prediction intervals than those obtained by regular nonparametric regression.

7.
In many practical applications, the quality of count data is compromised by errors-in-variables (EIVs). In this paper, we apply a Bayesian approach to reduce bias in estimating the parameters of count data regression models that have mismeasured independent variables. Furthermore, the exposure model is specified with a flexible distribution, hence our approach remains robust against departures from normality in the true underlying exposure distribution. The proposed method is also useful in realistic situations because the variance of the EIVs is estimated rather than assumed known, in contrast with other bias-correction methods for count data EIV regression models. We conduct simulation studies on synthetic data sets using Markov chain Monte Carlo techniques to investigate the performance of our approach. Our findings show that the flexible Bayesian approach estimates the true regression parameters consistently and accurately.

8.
In this paper, we adopt a Bayesian approach to expectile regression, employing a likelihood function based on an asymmetric normal distribution. We demonstrate that improper uniform priors for the unknown model parameters yield a proper joint posterior. Three simulated data sets were generated to evaluate the proposed method; the results show that Bayesian expectile regression performs well and has characteristics different from those of Bayesian quantile regression. We also apply the approach to two real data analyses.
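For intuition, the tau-expectile is the minimizer of an asymmetrically weighted squared loss, which is what the asymmetric normal likelihood encodes. A small fixed-point sketch for a sample expectile (names and iteration scheme are illustrative, not the paper's sampler):

```python
def expectile(xs, tau=0.5, iters=100):
    """Fixed-point iteration for the sample tau-expectile: m solves
    tau * sum_{x > m} (x - m) = (1 - tau) * sum_{x <= m} (m - x),
    i.e. m minimizes sum_i |tau - 1{x_i <= m}| * (x_i - m)**2."""
    m = sum(xs) / len(xs)                       # start from the mean
    for _ in range(iters):
        w = [tau if x > m else (1 - tau) for x in xs]
        m = sum(wi * xi for wi, xi in zip(w, xs)) / sum(w)
    return m
```

At tau = 0.5 the weights are symmetric and the expectile is the ordinary mean, mirroring how the median arises in quantile regression.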

9.
This paper is concerned with selection of explanatory variables in generalized linear models (GLMs). The class of GLMs is quite large and contains, for example, ordinary linear regression, binary logistic regression, the probit model, and Poisson regression with a linear or log-linear parameter structure. We show that, through an approximation of the log-likelihood and a certain data transformation, the variable selection problem in a GLM can be converted into variable selection in an ordinary (unweighted) linear regression model. As a consequence, no specific computer software for variable selection in GLMs is needed; instead, any suitable variable selection program for linear regression can be used. We also present a simulation study which shows that the log-likelihood approximation is very good in many practical situations. Finally, we briefly mention possible extensions to regression models outside the class of GLMs.
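The kind of data transformation involved is closely related to the working-response construction of iteratively reweighted least squares; a sketch for the binary logistic case (a hypothetical helper in this spirit, not the authors' exact transformation):

```python
import math

def irls_working_response(X, y, beta):
    """One IRLS-style step for logistic regression: working response
    z_i = eta_i + (y_i - mu_i) / (mu_i * (1 - mu_i)) and weight
    w_i = mu_i * (1 - mu_i). Weighted linear regression of z on X then
    approximates the GLM log-likelihood, so linear-model variable
    selection tools become applicable."""
    z, w = [], []
    for xi, yi in zip(X, y):
        eta = sum(b * x for b, x in zip(beta, xi))  # linear predictor
        mu = 1.0 / (1.0 + math.exp(-eta))           # logistic mean
        v = mu * (1.0 - mu)                         # variance function
        z.append(eta + (yi - mu) / v)
        w.append(v)
    return z, w
```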

10.
In this article, we develop a Bayesian variable selection method for the Poisson change-point regression model with both discrete and continuous candidate covariates. Ranging from a null model with no selected covariates to a full model including all covariates, the method searches the entire model space, estimates posterior inclusion probabilities of the covariates, and obtains model-averaged estimates of the covariate coefficients, while simultaneously estimating a time-varying baseline rate driven by the change-points. For posterior computation, a Metropolis-Hastings within partially collapsed Gibbs sampler is developed to efficiently fit the Poisson change-point regression model with variable selection. We illustrate the proposed method using simulated and real datasets.

11.
We propose a simulation-based Bayesian approach to the analysis of long-memory stochastic volatility models, both stationary and nonstationary. The main tool used to reduce the likelihood function to a tractable form is an approximate state-space representation of the model. A data set of stock market returns is analyzed with the proposed method. The approach taken here allows a quantitative assessment of the empirical evidence in favor of the stationarity, or nonstationarity, of the instantaneous volatility of the data.

12.
Multivariate adaptive regression spline fitting, or MARS (Friedman 1991), provides a useful methodology for flexible adaptive regression with many predictors. The MARS methodology produces an estimate of the mean response that is a linear combination of adaptively chosen basis functions. Recently, a Bayesian version of MARS has been proposed (Denison, Mallick and Smith 1998a; Holmes and Denison 2002), combining the MARS methodology with the benefits of Bayesian methods for accounting for model uncertainty to achieve improvements in predictive performance. In implementations of the Bayesian MARS approach, Markov chain Monte Carlo methods are used for computation; at each iteration of the algorithm it is proposed to change the current model by either (a) adding a basis function (birth step), (b) deleting a basis function (death step), or (c) altering an existing basis function (change step). In the algorithm of Denison, Mallick and Smith (1998a), when a birth step is proposed, the type of basis function is determined by simulation from the prior. This works well in problems with a small number of predictors, is simple to program, and leads to a simple form for the Metropolis-Hastings acceptance probabilities. However, in problems with very large numbers of predictors, many of them useless, it may be difficult to find interesting interactions with such an approach. The original MARS algorithm of Friedman (1991) uses a heuristic of building up higher-order interactions from lower-order ones, which greatly reduces the complexity of the search for good basis functions to add to the model. While we do not exactly follow the intuition of the original MARS algorithm in this paper, we suggest a similar idea in which the Metropolis-Hastings proposals of Denison, Mallick and Smith (1998a) are altered to allow dependence on the current model.
Our modification allows more rapid identification and exploration of important interactions, especially in problems with very large numbers of predictor variables and many useless predictors. Performance of the algorithms is compared in simulation studies.
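MARS basis functions are products of one-dimensional hinge functions; a minimal sketch of evaluating such a basis (the representation is generic, not the authors' implementation):

```python
def hinge(x, t, s=1):
    """One-dimensional MARS hinge basis: max(0, s * (x - t)),
    with knot t and sign s in {+1, -1}."""
    return max(0.0, s * (x - t))

def mars_basis(point, factors):
    """Evaluate a product-of-hinges basis function at `point`.
    `factors` is a list of (variable index, knot, sign) triples, so
    len(factors) is the interaction order of the basis function."""
    value = 1.0
    for j, t, s in factors:
        value *= hinge(point[j], t, s)
    return value
```

A birth step adds such a basis function to the model; proposals that depend on the current model can favor building a new interaction from hinges already present, rather than drawing the factors blindly from the prior.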

13.
This article presents a new way of modeling time-varying volatility. We generalize the usual stochastic volatility models to encompass regime-switching properties. The unobserved state variables are governed by a first-order Markov process. Bayesian estimators are constructed by Gibbs sampling. High-, medium- and low-volatility states are identified for the Standard and Poor's 500 weekly return data. Persistence in volatility is explained by persistence in the low- and medium-volatility states. The high-volatility regime captures the 1987 crash and overlaps considerably with four U.S. economic recession periods.
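A toy data generator for this kind of regime-switching volatility model can be sketched as follows (three states with fixed volatilities driven by a first-order Markov chain; all numbers are hypothetical and the real model places the regime inside a stochastic volatility equation):

```python
import random

def simulate_regime_sv(P, sigmas, n, rng=random):
    """Simulate n returns r_t = sigmas[s_t] * eps_t, eps_t ~ N(0, 1), where
    the state s_t follows a first-order Markov chain with transition
    matrix P (rows sum to 1). Returns the state path and the returns."""
    s = 0
    states, returns = [], []
    for _ in range(n):
        u, cum = rng.random(), 0.0
        for j, p in enumerate(P[s]):            # draw next state from row P[s]
            cum += p
            if u < cum:
                s = j
                break
        states.append(s)
        returns.append(sigmas[s] * rng.gauss(0.0, 1.0))
    return states, returns
```

With diagonally dominant P, the chain stays in each regime for long stretches, which is how persistence in volatility arises in this class of models.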

14.
This paper provides a Bayesian estimation procedure for monotone regression models in which the monotone trend constraint is subject to uncertainty. For monotone regression modeling with stochastic restrictions, we propose a Bayesian Bernstein polynomial regression model using two-stage hierarchical prior distributions based on a family of rectangle-screened multivariate Gaussian distributions, extending the work of Curtis and Ghosh [S.M. Curtis and S.K. Ghosh, A variable selection approach to monotonic regression with Bernstein polynomials, J. Appl. Stat. 38 (2011), pp. 961–976]. This approach reflects the uncertainty about the prior constraint and thus yields a regression model subject to a monotone restriction with uncertainty. Based on the proposed model, we derive the posterior distributions for the unknown parameters and present numerical schemes to generate posterior samples. Through empirical analyses with synthetic data and real data applications, we show the performance of the proposed model and compare it to the Bernstein polynomial regression model of Curtis and Ghosh, which treats the shape restriction as certain, illustrating that our method incorporates the uncertainty of the monotone trend and automatically adapts the regression function to monotonicity.

15.
Simulation of truncated normal variables
We provide simulation algorithms for one-sided and two-sided truncated normal distributions. These algorithms are then used to simulate multivariate normal variables with convex restricted parameter space for any covariance structure.
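A classical one-sided algorithm of this kind is the translated-exponential rejection sampler; the sketch below is our paraphrase of that idea, not the paper's exact pseudocode:

```python
import math
import random

def trunc_norm_lower(a, rng=random):
    """Sample Z ~ N(0, 1) conditioned on Z >= a. For a <= 0, plain rejection
    from N(0, 1) is already efficient; for a > 0, propose from a shifted
    exponential with the optimal rate alpha and accept with probability
    exp(-(z - alpha)**2 / 2)."""
    if a <= 0:
        while True:
            z = rng.gauss(0.0, 1.0)
            if z >= a:
                return z
    alpha = (a + math.sqrt(a * a + 4.0)) / 2.0   # optimal exponential rate
    while True:
        z = a + rng.expovariate(alpha)            # proposal: a + Exp(alpha)
        if rng.random() <= math.exp(-0.5 * (z - alpha) ** 2):
            return z
```

Unlike naive rejection from N(0, 1), whose acceptance rate collapses as the truncation point a grows, the exponential proposal keeps the acceptance probability bounded away from zero for large a.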

16.
In this paper, we develop a variable selection framework with the spike-and-slab prior distribution via the hazard function of the Cox model. Specifically, we consider the transformation of the score and information functions for the partial likelihood function evaluated at the given data from the parameter space into the space generated by the logarithm of the hazard ratio. Thereby, we reduce the nonlinear complexity of the estimation equation for the Cox model and allow the utilization of a wider variety of stable variable selection methods. Then, we use a stochastic variable search Gibbs sampling approach via the spike-and-slab prior distribution to obtain the sparsity structure of the covariates associated with the survival outcome. Additionally, we conduct numerical simulations to evaluate the finite-sample performance of our proposed method. Finally, we apply this novel framework on lung adenocarcinoma data to find important genes associated with decreased survival in subjects with the disease.

17.
As a result of their good performance in practice and their desirable analytical properties, Gaussian process regression models are of increasing interest in statistics, engineering and other fields. However, two major problems arise when the model is applied to a large data-set with repeated measurements. One stems from the systematic heterogeneity among the different replications, and the other is the requirement to invert a covariance matrix involved in the implementation of the model, whose dimension equals the sample size of the training data-set. In this paper, a Gaussian process mixture model for regression is proposed to deal with these two problems, and a hybrid Markov chain Monte Carlo (MCMC) algorithm is used for its implementation. Application to a real data-set is reported.

18.
Based on the Bayesian framework of utilizing a Gaussian prior for the univariate nonparametric link function and an asymmetric Laplace distribution (ALD) for the residuals, we develop a Bayesian treatment for the Tobit quantile single-index regression model (TQSIM). With the location-scale mixture representation of the ALD, the posterior inferences of the latent variables and other parameters are achieved via the Markov chain Monte Carlo computation method. TQSIM broadens the scope of applicability of the Tobit models by accommodating nonlinearity in the data. The proposed method is illustrated by two simulation examples and a labour supply dataset.

19.
Random Bernstein Polynomials
Random Bernstein polynomials which are also probability distribution functions on the closed unit interval are studied. The probability law of a Bernstein polynomial so defined provides a novel prior on the space of distribution functions on [0, 1] which has full support and can easily select absolutely continuous distribution functions with a continuous and smooth derivative. In particular, the Bernstein polynomial which approximates a Dirichlet process is studied. This may be of interest in Bayesian non-parametric inference. In the second part of the paper, we study the posterior from a Bernstein–Dirichlet prior and suggest a hybrid Monte Carlo approximation of it. The proposed algorithm has some aspects of novelty since the problem under examination has a changing dimension parameter space.
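Concretely, the Bernstein polynomial of order k associated with a distribution function F on [0, 1] is B_k(x) = sum_j F(j/k) C(k, j) x^j (1-x)^(k-j); a small evaluator (helper names are hypothetical):

```python
from math import comb

def bernstein_cdf(F_vals, x):
    """Evaluate the Bernstein polynomial B_k(x) = sum_j F(j/k) C(k,j)
    x**j (1-x)**(k-j), where F_vals[j] = F(j/k) are the values of a
    distribution function F on the grid j/k, j = 0..k. If F_vals is
    nondecreasing with F_vals[0] >= 0 and F_vals[-1] = 1, then B_k is
    itself a smooth distribution function on [0, 1]."""
    k = len(F_vals) - 1
    return sum(F_vals[j] * comb(k, j) * x ** j * (1 - x) ** (k - j)
               for j in range(k + 1))
```

Randomizing the grid values F(j/k), for example with a Dirichlet distribution on their increments, turns B_k into a random distribution function, which is the construction this prior builds on.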

20.
We develop a Bayesian estimation method for non-parametric mixed-effect models under shape constraints. The approach uses a hierarchical Bayesian framework and characterizations of shape-constrained Bernstein polynomials (BPs). We employ Markov chain Monte Carlo methods for model fitting, using a truncated normal distribution as the prior for the coefficients of the BPs to ensure the desired shape constraints. The small-sample properties of the Bayesian shape-constrained estimators across a range of functions are assessed via simulation studies. Two real data analyses are given to illustrate the application of the proposed method.
