Similar Articles
20 similar articles retrieved.
1.
The article considers a Gaussian model with the mean and the variance modeled flexibly as functions of the independent variables. The estimation is carried out using a Bayesian approach that allows the identification of significant variables in the variance function, as well as averaging over all possible models in both the mean and the variance functions. The computation is carried out by a simulation method that is carefully constructed to ensure that it converges quickly and produces iterates from the posterior distribution that have low correlation. Real and simulated examples demonstrate that the proposed method works well. The method in this paper is important because (a) it produces more realistic prediction intervals than nonparametric regression estimators that assume a constant variance; (b) variable selection identifies the variables in the variance function that are important; (c) variable selection and model averaging produce more efficient prediction intervals than those obtained by regular nonparametric regression.
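As a hedged illustration of the model class (not the paper's MCMC scheme), the sketch below fits a Gaussian model whose mean and log-variance are both linear in a single covariate by maximum likelihood; the data-generating values are invented for the example. It shows why prediction intervals should widen where the variance does, the point made in (a).

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(0)
n = 500
x = rng.uniform(-2, 2, n)
# true model: mean 1 + 0.5x, log-variance -0.5 + 0.8x (illustrative values)
y = 1.0 + 0.5 * x + np.exp(0.5 * (-0.5 + 0.8 * x)) * rng.standard_normal(n)

def negloglik(theta):
    b0, b1, g0, g1 = theta                     # mean and log-variance coefficients
    return -np.sum(norm.logpdf(y, loc=b0 + b1 * x,
                               scale=np.exp(0.5 * (g0 + g1 * x))))

b0, b1, g0, g1 = minimize(negloglik, np.zeros(4)).x
for xg in np.linspace(-2, 2, 5):
    mu, sd = b0 + b1 * xg, np.exp(0.5 * (g0 + g1 * xg))
    print(f"x={xg:+.1f}: 95% PI = ({mu - 1.96 * sd:.2f}, {mu + 1.96 * sd:.2f})")
# the intervals widen with x, unlike constant-variance intervals
```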

2.
In this paper, we extend the structural probit measurement error model by assuming that the unobserved covariate follows a skew-normal distribution. The new model is termed the structural skew-normal probit model. As in the normal case, the likelihood function is obtained analytically and can be maximized using existing statistical software. A Bayesian approach using Markov chain Monte Carlo techniques to generate from the posterior distributions is also developed. A simulation study demonstrates that the approach avoids the attenuation that afflicts the naive procedure, and it appears more efficient than the structural probit model when the distribution of the covariate (predictor) is skewed.
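The attenuation mentioned above is easy to demonstrate. The following hedged sketch (not the paper's structural skew-normal model; all parameter values are invented) fits a naive probit on a mismeasured skew-normal covariate and compares the slope with the fit on the true covariate:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm, skewnorm

rng = np.random.default_rng(1)
n = 2000
xi = skewnorm.rvs(a=5, size=n, random_state=rng)   # true (skewed) covariate
w = xi + rng.normal(scale=0.7, size=n)             # observed with error
beta0, beta1 = -0.5, 1.0
y = (rng.uniform(size=n) < norm.cdf(beta0 + beta1 * xi)).astype(float)

def probit_nll(b, x):
    p = np.clip(norm.cdf(b[0] + b[1] * x), 1e-10, 1 - 1e-10)
    return -np.sum(y * np.log(p) + (1 - y) * np.log(1 - p))

slope_true = minimize(probit_nll, [0.0, 0.0], args=(xi,)).x[1]
slope_naive = minimize(probit_nll, [0.0, 0.0], args=(w,)).x[1]
print(f"slope with true covariate:  {slope_true:.2f}")   # near 1.0
print(f"slope with naive covariate: {slope_naive:.2f}")  # attenuated toward 0
```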

3.
This paper develops a new Bayesian approach to change-point modeling that allows the number of change-points in an observed autocorrelated time series to be unknown. The model we develop assumes that the number of change-points has a truncated Poisson distribution. A genetic algorithm is used to estimate the change-point model, which allows for structural changes with autocorrelated errors. We devote considerable attention to the construction of the autocorrelation structure within each regime and to the parameters that characterize each regime. Our techniques are found to work well in simulations with a few change-points. An empirical analysis of the annual flow of the Nile River and the monthly total energy production in South Korea yields good estimates of the structural change-points.
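A minimal sketch of the prior on the number of change-points, a Poisson(λ) distribution truncated to {0, ..., kmax}; λ and kmax are illustrative, and the paper's genetic-algorithm estimation is not reproduced:

```python
import numpy as np
from scipy.stats import poisson

def truncated_poisson_pmf(lam, kmax):
    k = np.arange(kmax + 1)
    p = poisson.pmf(k, lam)
    return p / p.sum()                 # renormalize over the truncated support

rng = np.random.default_rng(2)
pmf = truncated_poisson_pmf(lam=2.0, kmax=10)
n_change_points = rng.choice(11, p=pmf)   # one draw from the prior
print(pmf.round(3), n_change_points)
```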

4.
Quantile regression (QR) is a natural alternative for depicting the impact of covariates on the conditional distributions of an outcome variable rather than on its mean. In this paper, we investigate Bayesian regularized QR for linear models with autoregressive errors. LASSO-type penalty priors are placed on the regression coefficients and the autoregressive parameters of the model. A Gibbs sampling algorithm is employed to draw from the full posterior distributions of the unknown parameters. Finally, the proposed procedures are illustrated by simulation studies and applied to a real data analysis of electricity consumption.
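A hedged sketch of the building blocks: the asymmetric-Laplace working likelihood for quantile τ combined with a LASSO (Laplace) prior, here collapsed to a MAP estimate rather than the paper's Gibbs sampler, and without the autoregressive error structure; the design and penalty value are invented:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(3)
n, p, tau, lam = 300, 5, 0.5, 1.0
X = rng.standard_normal((n, p))
beta_true = np.array([1.5, 0.0, -2.0, 0.0, 0.5])
y = X @ beta_true + rng.standard_normal(n)

def check_loss(u, tau):
    return u * (tau - (u < 0))          # rho_tau(u), the quantile check function

def neg_log_posterior(beta):
    resid = y - X @ beta
    return np.sum(check_loss(resid, tau)) + lam * np.sum(np.abs(beta))

beta_map = minimize(neg_log_posterior, np.zeros(p), method="Powell").x
print(beta_map.round(2))   # the zero entries of beta_true are shrunk toward zero
```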

5.
Nonparametric regression using linear combinations of basis functions
This paper discusses a Bayesian approach to nonparametric regression initially proposed by Smith and Kohn (1996, Journal of Econometrics 75: 317–344). In this approach the regression function is represented as a linear combination of basis terms. The basis terms can be univariate or multivariate functions and can include polynomials, natural splines and radial basis functions. A Bayesian hierarchical model is used such that the coefficient of each basis term can be zero with positive prior probability. The presence of basis terms in the model is determined by latent indicator variables. The posterior mean is estimated by Markov chain Monte Carlo simulation because it is computationally intractable to compute it analytically unless a small number of basis terms is used. The present article updates the work of Smith and Kohn (1996) to take account of work by us and others over the last three years. A careful discussion is given of all aspects of model specification, function estimation and the use of sampling schemes. In particular, new sampling schemes are introduced to carry out the variable selection methodology.
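The latent-indicator machinery can be sketched as follows. Under a g-prior on the coefficients of the included terms, the marginal likelihood of an indicator vector is available in closed form, so a Gibbs sampler can flip one indicator at a time. This is a minimal reconstruction under assumed priors (g-prior with c = 100, uniform prior on indicators, a toy polynomial basis), not the refined schemes the article introduces:

```python
import numpy as np

rng = np.random.default_rng(4)
n, c = 200, 100.0                                 # c is the g-prior scale (assumed)
x = rng.uniform(-1, 1, n)
X = np.column_stack([x, x**2, x**3, np.abs(x)])   # toy candidate basis terms
y = 2 * x**2 + 0.3 * rng.standard_normal(n)       # only the x**2 term is real

def log_marginal(gamma):
    """log p(y | gamma) up to a constant, under the g-prior."""
    q = int(gamma.sum())
    if q == 0:
        return -0.5 * n * np.log(y @ y)
    Xg = X[:, gamma.astype(bool)]
    fit = y @ Xg @ np.linalg.solve(Xg.T @ Xg, Xg.T @ y)   # squared projection norm
    return -0.5 * q * np.log(1 + c) - 0.5 * n * np.log(y @ y - c / (1 + c) * fit)

gamma, draws = np.zeros(X.shape[1], dtype=int), []
for it in range(2000):                            # single-site Gibbs over indicators
    for j in range(X.shape[1]):
        g1, g0 = gamma.copy(), gamma.copy()
        g1[j], g0[j] = 1, 0
        logit = np.clip(log_marginal(g1) - log_marginal(g0), -500, 500)
        gamma[j] = rng.uniform() < 1 / (1 + np.exp(-logit))
    draws.append(gamma.copy())
print(np.mean(draws[500:], axis=0).round(2))      # posterior inclusion probabilities
```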

6.
In this article, we develop a Bayesian variable selection method for choosing covariates in the Poisson change-point regression model, with both discrete and continuous candidate covariates. Ranging from a null model with no selected covariates to a full model including all covariates, the method searches the entire model space, estimates posterior inclusion probabilities of the covariates, and obtains model-averaged estimates of the covariate coefficients, while simultaneously estimating a time-varying baseline rate due to change-points. For posterior computation, a Metropolis–Hastings step within a partially collapsed Gibbs sampler is developed to efficiently fit the Poisson change-point regression model with variable selection. We illustrate the proposed method using simulated and real datasets.

7.
This article presents a Bayesian analysis of a multinomial probit model by building on previous work that specified priors on identified parameters. The main contribution of our article is to propose a prior on the covariance matrix of the latent utilities that permits elements of the inverse of the covariance matrix to be identically zero. This allows a parsimonious representation of the covariance matrix when such parsimony exists. The methodology is applied to both simulated and real data, and its ability to obtain more efficient estimators of the covariance matrix and regression coefficients is assessed using simulated data.

8.
The study of proportions is a common topic in many fields. The standard beta distribution or the inflated beta distribution may be a reasonable choice for fitting a proportion in most situations. However, they do not fit well variables that assume no values in the open interval (0, c), 0 < c < 1. For such variables, the authors introduce the truncated inflated beta distribution (TBEINF), a mixture of a beta distribution bounded on the open interval (c, 1) and a trinomial distribution. The authors present the moments of the distribution, its score vector and Fisher information matrix, and discuss estimation of its parameters. The properties of the suggested estimators are studied using Monte Carlo simulation. In addition, the authors present an application of the TBEINF distribution to unemployment insurance data.
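One plausible reading of the TBEINF mixture (an assumption on our part: point masses at 0, c and 1 for the trinomial part, plus a beta distribution truncated to (c, 1)) can be sampled as below; the weights and beta parameters are illustrative only:

```python
import numpy as np
from scipy.stats import beta

rng = np.random.default_rng(5)
c, a, b = 0.3, 2.0, 3.0                    # truncation point and beta parameters
w = np.array([0.10, 0.05, 0.15, 0.70])     # P(0), P(c), P(1), P(continuous part)

def rtbeinf(size):
    comp = rng.choice(4, size=size, p=w)
    out = np.array([0.0, c, 1.0, 0.0])[comp]
    cont = comp == 3
    # truncated beta on (c, 1): invert the CDF restricted to (F(c), 1)
    u = rng.uniform(beta.cdf(c, a, b), 1.0, size=cont.sum())
    out[cont] = beta.ppf(u, a, b)
    return out

x = rtbeinf(100000)
print((x == 0).mean(), (x == c).mean(), (x == 1).mean())   # ~0.10, 0.05, 0.15
```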

9.
In many practical applications, the quality of count data is compromised by errors-in-variables (EIVs). In this paper, we apply a Bayesian approach to reduce bias in estimating the parameters of count data regression models whose independent variables are mismeasured. Furthermore, the exposure model is specified with a flexible distribution, so our approach remains robust against departures from normality in the true underlying exposure distribution. The proposed method is also useful in realistic situations because the variance of the EIVs is estimated rather than assumed known, in contrast with other bias-correction methods for count data EIV regression models. We conduct simulation studies on synthetic data sets using Markov chain Monte Carlo techniques to investigate the performance of our approach. Our findings show that the flexible Bayesian approach estimates the true regression parameters consistently and accurately.

10.
In this paper, we adopt a Bayesian approach to expectile regression, employing a likelihood function based on an asymmetric normal distribution. We demonstrate that improper uniform priors for the unknown model parameters yield a proper joint posterior. Three simulated data sets were generated to evaluate the proposed method; the results show that Bayesian expectile regression performs well and has characteristics different from those of Bayesian quantile regression. We also apply the approach to two real data analyses.
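A minimal sketch of the frequentist core of the asymmetric-normal likelihood: its negative log is, up to constants, the asymmetric squared-error loss, so the minimizer is the τ-th regression expectile. The simulated design is invented, and the paper's posterior sampling under improper priors is omitted:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(6)
n, tau = 500, 0.8
x = rng.uniform(0, 2, n)
y = 1.0 + 2.0 * x + (0.5 + 0.5 * x) * rng.standard_normal(n)  # heteroscedastic

def expectile_loss(b):
    u = y - (b[0] + b[1] * x)
    w = np.where(u >= 0, tau, 1 - tau)   # asymmetric squared-error weights
    return np.sum(w * u**2)

b = minimize(expectile_loss, [0.0, 0.0]).x
print(b.round(2))   # slope exceeds 2.0: the 0.8-expectile tilts upward
```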

11.
In this paper, we propose a new Bayesian inference approach to classification based on the traditional hinge loss used for classical support vector machines, which we call the Bayesian Additive Machine (BAM). Unlike existing approaches, the new model has a semiparametric discriminant function in which some feature effects are nonlinear and others are linear. This separation of features is achieved automatically during model fitting, without user pre-specification. Following the literature on sparse regression for high-dimensional models, we can also identify the irrelevant features. By introducing spike-and-slab priors through two sets of indicator variables, these multiple goals are achieved simultaneously and automatically, without parameter tuning such as cross-validation. An efficient partially collapsed Markov chain Monte Carlo algorithm is developed for posterior exploration based on a data augmentation scheme for the hinge loss. Our simulations and three real data examples demonstrate that the new approach is a strong competitor to recently proposed approaches for challenging high-dimensional classification problems.

12.
Hidden Markov models form an extension of mixture models which provides a flexible class of models exhibiting dependence and a possibly large degree of variability. We show how reversible jump Markov chain Monte Carlo techniques can be used to estimate the parameters as well as the number of components of a hidden Markov model in a Bayesian framework. We employ a mixture of zero-mean normal distributions as our main example and apply this model to three sets of data from finance, meteorology and geomagnetism.
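For a fixed number of components, the likelihood of such a model is computed by the forward algorithm; a minimal sketch for a two-state chain with zero-mean normal emissions follows (the reversible jump moves that let the number of states vary are beyond a short example, and all parameter values are invented):

```python
import numpy as np
from scipy.stats import norm

def hmm_loglik(y, P, sigmas, pi0):
    """Scaled forward recursion: log p(y) under zero-mean normal emissions."""
    alpha = pi0 * norm.pdf(y[0], scale=sigmas)
    csum = alpha.sum()
    alpha, loglik = alpha / csum, np.log(csum)
    for t in range(1, len(y)):
        alpha = (alpha @ P) * norm.pdf(y[t], scale=sigmas)
        csum = alpha.sum()                 # rescale to avoid underflow
        alpha, loglik = alpha / csum, loglik + np.log(csum)
    return loglik

rng = np.random.default_rng(7)
P = np.array([[0.95, 0.05], [0.10, 0.90]])
sigmas = np.array([0.5, 2.0])
states = [0]
for _ in range(999):
    states.append(rng.choice(2, p=P[states[-1]]))
y = sigmas[np.array(states)] * rng.standard_normal(1000)
print(hmm_loglik(y, P, sigmas, pi0=np.array([0.5, 0.5])))
```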

13.
Hierarchical models enable the encoding of a variety of parametric structures. However, when presented with a large number of covariates upon which some component of a model hierarchy depends, the modeller may be unwilling or unable to specify a form for that dependence. Data-mining methods are designed to automatically discover relationships between many covariates and a response surface, easily accommodating non-linearities and higher-order interactions. We present a method of wrapping hierarchical models around data-mining methods, preserving the best qualities of the two paradigms. We fit the resulting semi-parametric models using an approximate Gibbs sampler called HEBBRU. Using a simulated dataset, we show that HEBBRU is useful for exploratory analysis and displays excellent predictive accuracy. Finally, we apply HEBBRU to an ornithological dataset drawn from the eBird database.

14.
Based on a Bayesian framework utilizing a Gaussian prior for the univariate nonparametric link function and an asymmetric Laplace distribution (ALD) for the residuals, we develop a Bayesian treatment of the Tobit quantile single-index regression model (TQSIM). With the location-scale mixture representation of the ALD, posterior inferences for the latent variables and other parameters are obtained via Markov chain Monte Carlo computation. TQSIM broadens the scope of applicability of Tobit models by accommodating nonlinearity in the data. The proposed method is illustrated by two simulation examples and a labour supply dataset.
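A minimal sketch of the location-scale mixture representation the inference relies on, in the commonly used Kozumi–Kobayashi form: an asymmetric Laplace variate is a normal variate whose mean and variance are both driven by an exponential latent variable, which is what makes conditionally Gaussian Gibbs updates possible. The parameter values are illustrative:

```python
import numpy as np

rng = np.random.default_rng(8)
p, mu, sigma, n = 0.25, 1.0, 0.5, 200000
theta = (1 - 2 * p) / (p * (1 - p))
tau2 = 2 / (p * (1 - p))

z = rng.exponential(scale=1.0, size=n)            # latent mixing variable
w = rng.standard_normal(n)
y = mu + sigma * (theta * z + np.sqrt(tau2 * z) * w)

# the p-th quantile of ALD(mu, sigma, p) is mu, so this should be near 1.0
print(np.quantile(y, p).round(3))
```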

15.
We develop an efficient way to select the best subset autoregressive model with exogenous variables and generalized autoregressive conditional heteroscedasticity errors. One main feature of our method is that it selects important autoregressive and exogenous variables and at the same time estimates the unknown parameters. The proposed method uses the stochastic search idea. By adopting Markov chain Monte Carlo techniques, we can identify the best subset model from a large number of possible choices. A simulation experiment shows that the method is very effective. Misspecification in the mean equation can also be detected by our model selection method. In an application to stock-market data from seven countries, the lag-1 US return is found to have a strong influence on the other stock-market returns.

16.
A Poisson regression model with an offset assumes a constant baseline rate after accounting for measured covariates, which may lead to biased estimates of coefficients in an inhomogeneous Poisson process. To correctly estimate the effect of time-dependent covariates, we propose a Poisson change-point regression model with an offset that allows a time-varying baseline rate. When the non-constant pattern of the log baseline rate is modeled with a non-parametric step function, the resulting semi-parametric model involves a model component of varying dimension and thus requires varying-dimensional inference to obtain correct estimates of the model parameters of fixed dimension. To fit the proposed varying-dimensional model, we devise a Markov chain Monte Carlo algorithm based on partial collapse. The proposed model and methods are used to investigate the association between the daily homicide rates in Cali, Colombia, and policies that restrict the hours during which the legal sale of alcoholic beverages is permitted. While simultaneously identifying latent changes in the baseline homicide rate that correspond to sociopolitical events, we explore the effect of policies governing the sale of alcohol on homicide rates and seek a policy that balances the economic and cultural dependence on alcohol sales against public health.
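Given the change-point locations, the likelihood with a step-function log baseline rate is straightforward; the hedged sketch below computes it for invented data (the hard part the paper addresses, sampling the number and locations of the steps by partially collapsed MCMC, is not reproduced):

```python
import numpy as np
from scipy.stats import poisson

def loglik(y, X, offset, beta, cps, levels):
    """cps: sorted change-point indices; levels: log baseline rate per segment."""
    T = len(y)
    s = np.zeros(T)
    bounds = [0, *cps, T]
    for seg, (a, b) in enumerate(zip(bounds[:-1], bounds[1:])):
        s[a:b] = levels[seg]
    lam = np.exp(offset + s + X @ beta)
    return poisson.logpmf(y, lam).sum()

rng = np.random.default_rng(9)
T = 365
X = rng.standard_normal((T, 2))
offset = np.log(np.full(T, 2.0))                    # e.g. log exposure
beta = np.array([0.3, -0.2])
true_cps, true_levels = [150], np.array([0.0, 0.7])  # rate jumps at day 150
y = rng.poisson(np.exp(offset + np.where(np.arange(T) < 150, 0.0, 0.7) + X @ beta))
print(loglik(y, X, offset, beta, true_cps, true_levels))
```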

17.
This article presents a new way of modeling time-varying volatility. We generalize the usual stochastic volatility models to encompass regime-switching properties. The unobserved state variables are governed by a first-order Markov process. Bayesian estimators are constructed by Gibbs sampling. High-, medium- and low-volatility states are identified for the Standard & Poor's 500 weekly return data. Persistence in volatility is explained by persistence in the low- and medium-volatility states. The high-volatility regime captures the 1987 crash and overlaps considerably with four U.S. economic recession periods.
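A minimal sketch of the data-generating process described here: returns whose volatility is set by a three-state first-order Markov chain. The transition matrix and state volatilities are invented, and Gibbs estimation is not reproduced:

```python
import numpy as np

rng = np.random.default_rng(10)
P = np.array([[0.98, 0.02, 0.00],     # low-volatility state is persistent
              [0.02, 0.96, 0.02],
              [0.00, 0.10, 0.90]])    # high-volatility state is short-lived
vols = np.array([0.01, 0.02, 0.05])   # weekly return std dev per state

T, s = 1000, 0
states, returns = [], []
for _ in range(T):
    s = rng.choice(3, p=P[s])
    states.append(s)
    returns.append(vols[s] * rng.standard_normal())
print(np.bincount(states) / T)        # fraction of time in each volatility regime
```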

18.
Mixture models are flexible tools in density estimation and classification problems. Bayesian estimation of such models typically relies on sampling from the posterior distribution using Markov chain Monte Carlo. Label switching arises because the posterior is invariant to permutations of the component parameters. Methods for dealing with label switching have been studied fairly extensively in the literature, with the most popular approaches being those based on loss functions. However, many of these algorithms turn out to be too slow in practice, and can be infeasible as the size and/or dimension of the data grow. We propose a new, computationally efficient algorithm based on a loss-function interpretation, and show that it scales well to large data sets. We then review earlier solutions that also scale well, and compare their performance on simulated and real data sets. We conclude with discussion and recommendations covering all the methods studied.
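One generic loss-based relabeling strategy (not the specific algorithm proposed in the paper) permutes each MCMC draw's component parameters to best match a reference, solving the matching with the Hungarian algorithm:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def relabel(draws, reference):
    """draws: (niter, K, d) component parameters; reference: (K, d)."""
    out = np.empty_like(draws)
    for i, theta in enumerate(draws):
        # cost[j, k] = squared distance from component j to reference slot k
        cost = ((theta[:, None, :] - reference[None, :, :]) ** 2).sum(-1)
        row, col = linear_sum_assignment(cost)
        out[i][col] = theta[row]          # component row[j] is relabeled as slot col[j]
    return out

rng = np.random.default_rng(11)
ref = np.array([[-2.0], [0.0], [2.0]])                # reference component means
draw = ref[rng.permutation(3)] + 0.1 * rng.standard_normal((3, 1))
print(relabel(draw[None], ref)[0].ravel().round(2))   # restored to ref's order
```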

19.
We propose an estimation procedure for time-series regression models under the Bayesian inference framework. With the exact method of Wise [Wise, J. (1955). The autocorrelation function and spectral density function. Biometrika, 42, 151–159], an exact likelihood function can be obtained instead of the likelihood conditional on initial observations. The constraints on the parameter space arising from the stationarity conditions are handled by a reparametrization, which was not taken into consideration by Chib [Chib, S. (1993). Bayes regression with autoregressive errors: A Gibbs sampling approach. J. Econometrics, 58, 275–294] or Chib and Greenberg [Chib, S. and Greenberg, E. (1994). Bayes inference in regression model with ARMA(p, q) errors. J. Econometrics, 64, 183–206]. Simulation studies show that our method leads to better inferential results than theirs.
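One standard reparametrization of the kind referred to here (an assumption: the paper's exact transform may differ) maps unconstrained reals to partial autocorrelations in (−1, 1) by tanh and then to AR coefficients via the Durbin–Levinson recursion, so every unconstrained point yields a stationary AR(p):

```python
import numpy as np

def unconstrained_to_ar(r):
    pacf = np.tanh(np.asarray(r, dtype=float))   # partial autocorrelations in (-1, 1)
    phi = np.array([pacf[0]])
    for k in range(1, len(pacf)):
        # Durbin-Levinson update: phi_j <- phi_j - pacf_k * phi_{k-j}
        phi = np.append(phi - pacf[k] * phi[::-1], pacf[k])
    return phi

phi = unconstrained_to_ar([0.8, -0.3, 0.1])
roots = np.roots(np.concatenate([[1.0], -phi]))  # inverse roots of AR polynomial
print(phi.round(3), np.abs(roots).max() < 1)     # stationarity holds by construction
```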

20.
Studies of the behaviors of glaciers, ice sheets, and ice streams rely heavily on both observations and physical models. Data acquired via remote sensing provide critical information on geometry and movement of ice over large sections of Antarctica and Greenland. However, uncertainties are present in both the observations and the models. Hence, there is a need for combining these information sources in a fashion that incorporates uncertainty and quantifies its impact on conclusions. We present a hierarchical Bayesian approach to modeling ice-stream velocities incorporating physical models and observations regarding velocity, ice thickness, and surface elevation from the North East Ice Stream in Greenland. The Bayesian model leads to interesting issues in model assessment and computation.
