Similar Literature (20 results)
1.
We deal with a one-layer feed-forward neural network for the Bayesian analysis of nonlinear time series. The noise is modeled nonlinearly and nonnormally by means of ARCH models whose parameters all depend on a hidden Markov chain. Parameter estimation is performed by sampling from the posterior distribution via an Evolutionary Monte Carlo algorithm, into which two new crossover operators are introduced. The unknown parameters of the model also include any missing values occurring within the observed series; hence, by treating future values as missing, point and interval multi-step-ahead predictions can also be computed.

2.
A stationary bilinear (SB) model can be used to describe processes with a time-varying degree of persistence that depends on past shocks. This study develops methods for Bayesian inference, model comparison, and forecasting in the SB model. Using monthly U.K. inflation data, we find that the SB model outperforms the random walk, first-order autoregressive AR(1), and autoregressive moving average ARMA(1,1) models in terms of root mean squared forecast errors. In addition, the SB model is superior to these three models in terms of predictive likelihood for the majority of forecast observations.
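The shock-dependent persistence that defines a first-order bilinear process can be illustrated with a short simulation. This is a minimal sketch, not code from the study: the specific model form y_t = (a + b*e_{t-1}) y_{t-1} + e_t and the parameter values are illustrative assumptions.

```python
import numpy as np

def simulate_bilinear(a, b, n, sigma=1.0, seed=0):
    """Simulate y_t = (a + b*e_{t-1}) * y_{t-1} + e_t.

    The effective AR coefficient a + b*e_{t-1} varies with the previous
    shock, giving a time-varying degree of persistence.
    """
    rng = np.random.default_rng(seed)
    e = rng.normal(0.0, sigma, size=n)
    y = np.zeros(n)
    for t in range(1, n):
        y[t] = (a + b * e[t - 1]) * y[t - 1] + e[t]
    return y

# a^2 + b^2 * sigma^2 < 1 keeps the process second-order stationary.
y = simulate_bilinear(a=0.3, b=0.2, n=5000)
```

Large past shocks temporarily push the effective AR coefficient toward (or past) unity, which is the mechanism the SB model exploits for inflation persistence.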

3.
The main goal of this paper is to develop and apply stochastic simulation techniques for GARCH models with multivariate skewed distributions using the Bayesian approach. Neither parameter estimation nor model comparison is a trivial task, and several approximate and computationally intensive (Markov chain Monte Carlo) methods are used to this end. We consider a flexible class of multivariate distributions that can model both skewness and heavy tails. Moreover, when dealing with fat-tailed distributions we do not fix the tail behaviour but leave it subject to inference.

4.
In this paper, efficient importance sampling (EIS) is used to perform a classical and Bayesian analysis of univariate and multivariate stochastic volatility (SV) models for financial return series. EIS provides a highly generic and very accurate procedure for the Monte Carlo (MC) evaluation of high-dimensional interdependent integrals. It can be used to carry out ML-estimation of SV models as well as simulation smoothing where the latent volatilities are sampled at once. Based on this EIS simulation smoother, a Bayesian Markov chain Monte Carlo (MCMC) posterior analysis of the parameters of SV models can be performed.
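EIS constructs its importance sampler iteratively; the baseline it refines is plain self-normalized importance sampling for evaluating an expectation. A minimal sketch of that baseline, where the Gaussian target, wider Gaussian proposal, and test function are illustrative assumptions rather than anything taken from the paper:

```python
import numpy as np

def importance_estimate(h, log_target, sampler, log_proposal, n=100_000, seed=1):
    """Estimate E_target[h(x)] with self-normalized importance weights."""
    rng = np.random.default_rng(seed)
    x = sampler(rng, n)
    log_w = log_target(x) - log_proposal(x)
    w = np.exp(log_w - log_w.max())   # subtract max for numerical stability
    w /= w.sum()
    return np.sum(w * h(x))

# Target: standard normal (up to a constant); proposal: the wider N(0, 2^2).
log_target = lambda x: -0.5 * x**2
log_proposal = lambda x: -0.5 * (x / 2.0) ** 2
sampler = lambda rng, n: rng.normal(0.0, 2.0, size=n)

est = importance_estimate(lambda x: x**2, log_target, sampler, log_proposal)
# E[x^2] under N(0,1) is 1, so est should be close to 1
```

Self-normalization lets both densities be known only up to constants, which is the situation in the high-dimensional SV integrals the abstract refers to; EIS additionally optimizes the proposal itself.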

5.

6.
When the results of biological experiments are tested for a possible difference between treatment and control groups, the inference is only valid if it is based upon a model that fits the experimental results satisfactorily. In dominant-lethal testing, foetal death has previously been assumed to follow a variety of models, including Poisson, binomial, beta-binomial and various mixture models. However, discriminating between models has always been a particularly difficult problem. In this paper, we consider the data from six separate dominant-lethal assay experiments and discriminate between the competing models that could be used to describe them. We adopt a Bayesian approach and illustrate how a variety of different models may be considered, using Markov chain Monte Carlo (MCMC) simulation techniques and comparing the results with the corresponding maximum likelihood analyses. We present an auxiliary variable method for determining the probability that any particular data cell is assigned to a given component in a mixture, and we illustrate the value of this approach. Finally, we show how the Bayesian approach provides a natural and unique perspective on the model selection problem via reversible jump MCMC, and we illustrate how probabilities associated with each of the different models may be calculated for each data set. In terms of estimation, we show how, by averaging over the different models, we obtain reliable and robust inference for any statistic of interest.

7.
Studies of the behaviors of glaciers, ice sheets, and ice streams rely heavily on both observations and physical models. Data acquired via remote sensing provide critical information on geometry and movement of ice over large sections of Antarctica and Greenland. However, uncertainties are present in both the observations and the models. Hence, there is a need for combining these information sources in a fashion that incorporates uncertainty and quantifies its impact on conclusions. We present a hierarchical Bayesian approach to modeling ice-stream velocities incorporating physical models and observations regarding velocity, ice thickness, and surface elevation from the North East Ice Stream in Greenland. The Bayesian model leads to interesting issues in model assessment and computation.

8.
Bayesian model learning based on a parallel MCMC strategy
We introduce a novel Markov chain Monte Carlo algorithm for estimation of posterior probabilities over discrete model spaces. Our learning approach is applicable to families of models for which the marginal likelihood can be analytically calculated, either exactly or approximately, given any fixed structure. It is argued that for certain model neighborhood structures, the ordinary reversible Metropolis-Hastings algorithm does not yield an appropriate solution to the estimation problem. Therefore, we develop an alternative, non-reversible algorithm which can avoid the scaling effect of the neighborhood. To efficiently explore a model space, a finite number of interacting parallel stochastic processes is utilized. Our interaction scheme enables exploration of several local neighborhoods of a model space simultaneously, while it prevents the absorption of any particular process to a relatively inferior state. We illustrate the advantages of our method by an application to a classification model. In particular, we use an extensive bacterial database and compare our results with results obtained by different methods for the same data.
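For reference, the ordinary reversible Metropolis-Hastings baseline over a discrete model space, which the abstract argues against for certain neighborhood structures, can be sketched as follows. The Hastings ratio |N(current)|/|N(proposal)| is exactly the neighborhood scaling effect in question; the four-state toy space and its posterior are illustrative assumptions.

```python
import numpy as np

def metropolis_discrete(log_post, neighbors, start, n_iter=20_000, seed=2):
    """Random-walk Metropolis over a discrete space with uniform
    neighborhood proposals; the ratio of neighborhood sizes is the
    Hastings correction for asymmetric proposals."""
    rng = np.random.default_rng(seed)
    state = start
    counts = {}
    for _ in range(n_iter):
        nbrs = neighbors(state)
        prop = nbrs[rng.integers(len(nbrs))]
        log_alpha = (log_post(prop) - log_post(state)
                     + np.log(len(nbrs)) - np.log(len(neighbors(prop))))
        if np.log(rng.random()) < log_alpha:
            state = prop
        counts[state] = counts.get(state, 0) + 1
    return counts

# Toy model space {0,1,2,3} on a line, unnormalized posterior 2^k,
# so the stationary probabilities are (1, 2, 4, 8)/15.
log_post = lambda k: k * np.log(2.0)
neighbors = lambda k: [j for j in (k - 1, k + 1) if 0 <= j <= 3]
counts = metropolis_discrete(log_post, neighbors, start=0)
```

The non-reversible, interacting-chain scheme of the paper replaces this single chain precisely because such neighborhood corrections can slow exploration.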

9.
Bayesian analysis of mortality data
Congdon argued that the use of parametric modelling of mortality data is necessary in many practical demographical problems. In this paper, we focus on a form of model introduced by Heligman and Pollard in 1980, and we adopt a Bayesian analysis, using Markov chain Monte Carlo simulation, to produce the posterior summaries required. This opens the way to richer, more flexible inference summaries and avoids the numerical problems that are encountered with classical methods. Particular methodologies to cope with incomplete life-tables, together with derivations of joint lifetimes, median times to death and related quantities of interest, are also presented.
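The Heligman-Pollard (1980) law expresses the odds of death at age x as the sum of an infant term, an accident-hump term, and a senescent term. A direct sketch of the curve; the eight parameter values below are illustrative magnitudes only, not estimates fitted to any life-table.

```python
import numpy as np

def heligman_pollard_q(x, A, B, C, D, E, F, G, H):
    """Heligman-Pollard mortality law:

        q_x / (1 - q_x) = A^((x+B)^C)                       # infant mortality
                          + D * exp(-E * (ln x - ln F)^2)   # accident hump
                          + G * H^x                          # senescent rise
    """
    x = np.asarray(x, dtype=float)
    odds = (A ** ((x + B) ** C)
            + D * np.exp(-E * (np.log(x) - np.log(F)) ** 2)
            + G * H ** x)
    return odds / (1.0 + odds)

# Illustrative (not fitted) parameter values of plausible magnitude.
q = heligman_pollard_q(np.arange(1, 100), A=5e-4, B=0.02, C=0.1,
                       D=1e-3, E=10.0, F=20.0, G=5e-5, H=1.1)
```

In the Bayesian treatment described in the abstract, these eight parameters would receive priors and be sampled by MCMC rather than fixed as here.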

10.
A new procedure is proposed for deriving variable bandwidths in univariate kernel density estimation, based upon likelihood cross-validation and an analysis of a Bayesian graphical model. The procedure admits bandwidth selection which is flexible in terms of the amount of smoothing required. In addition, the basic model can be extended to incorporate local smoothing of the density estimate. The method is shown to perform well in both theoretical and practical situations, and we compare our method with those of Abramson (The Annals of Statistics 10: 1217–1223) and Sain and Scott (Journal of the American Statistical Association 91: 1525–1534). In particular, we note that in certain cases, the Sain and Scott method performs poorly even with relatively large sample sizes. We compare various bandwidth selection methods using standard mean integrated square error criteria to assess the quality of the density estimates. We study situations where the underlying density is assumed both known and unknown, and note that in practice, our method performs well when sample sizes are small. In addition, we also apply the methods to real data, and again we believe our methods perform at least as well as existing methods.
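A minimal variable-bandwidth sketch in the spirit of Abramson's square-root law, one of the comparison methods named in the abstract. The pilot bandwidth and the simulated data are illustrative assumptions, and the Bayesian graphical-model machinery of the paper is not reproduced here.

```python
import numpy as np

def variable_bandwidth_kde(data, grid, h0):
    """Adaptive KDE: a fixed-bandwidth pilot estimate sets local
    bandwidths h_i = h0 * (f_pilot(x_i) / g)^(-1/2), with g the
    geometric mean of the pilot values, so sparse regions get
    wider kernels (Abramson's square-root law)."""
    data = np.asarray(data, dtype=float)

    def gauss(u):
        return np.exp(-0.5 * u * u) / np.sqrt(2.0 * np.pi)

    # Fixed-bandwidth pilot density at the data points themselves.
    pilot = gauss((data[:, None] - data[None, :]) / h0).mean(axis=1) / h0
    g = np.exp(np.mean(np.log(pilot)))
    h_i = h0 * np.sqrt(g / pilot)
    # Final estimate on the grid with one bandwidth per data point.
    u = (grid[:, None] - data[None, :]) / h_i[None, :]
    return (gauss(u) / h_i[None, :]).mean(axis=1)

rng = np.random.default_rng(3)
data = rng.normal(0.0, 1.0, 500)
grid = np.linspace(-4.0, 4.0, 161)
f_hat = variable_bandwidth_kde(data, grid, h0=0.4)
```

The paper's contribution is to choose the global and local smoothing amounts via likelihood cross-validation within a Bayesian model rather than fixing h0 by hand as above.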

11.
The analysis of failure time data often involves two strong assumptions. The proportional hazards assumption postulates that hazard rates corresponding to different levels of explanatory variables are proportional. The additive effects assumption specifies that the effect associated with a particular explanatory variable does not depend on the levels of other explanatory variables. A hierarchical Bayes model is presented, under which both assumptions are relaxed. In particular, time-dependent covariate effects are explicitly modelled, and the additivity of effects is relaxed through the use of a modified neural network structure. The hierarchical nature of the model is useful in that it parsimoniously penalizes violations of the two assumptions, with the strength of the penalty being determined by the data.

12.
A model based on the skew Gaussian distribution is presented to handle skewed spatial data. It extends the results of popular Gaussian process models. Markov chain Monte Carlo techniques are used to generate samples from the posterior distributions of the parameters. Finally, this model is applied in the spatial prediction of weekly rainfall. Cross-validation shows that the predictive performance of our model compares favorably with several kriging variants.
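Sampling the univariate skew Gaussian building block is straightforward via Azzalini's stochastic representation; the shape parameter below is an illustrative assumption, and the spatial correlation structure of the paper's model is not sketched here.

```python
import numpy as np

def rskewnormal(alpha, size, rng):
    """Draw from Azzalini's skew normal SN(0, 1, alpha) using
    z = delta*|u0| + sqrt(1 - delta^2)*u1 with independent standard
    normals u0, u1 and delta = alpha / sqrt(1 + alpha^2)."""
    delta = alpha / np.sqrt(1.0 + alpha**2)
    u0 = rng.normal(size=size)
    u1 = rng.normal(size=size)
    return delta * np.abs(u0) + np.sqrt(1.0 - delta**2) * u1

rng = np.random.default_rng(4)
z = rskewnormal(alpha=5.0, size=200_000, rng=rng)
# The mean of SN(0, 1, alpha) is delta * sqrt(2/pi).
```

The half-normal component |u0| is what induces the skewness; alpha = 0 recovers the ordinary Gaussian case, so the model nests the standard Gaussian process.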

13.
14.
Abstract. The use of auxiliary variables for generating proposal variables within a Metropolis–Hastings setting has been suggested in many different contexts. This has in particular been of interest for simulation from complex distributions, such as multimodal distributions, or in transdimensional approaches. For many of these approaches, the acceptance probabilities that are used can appear somewhat magical, and different proofs of their validity have been given in each case. In this article, we present a general framework for the construction of acceptance probabilities in auxiliary variable proposal generation. In addition to showing the similarities between many of the algorithms proposed in the literature, the framework also demonstrates that there is great flexibility in how acceptance probabilities can be constructed. With this flexibility, alternative acceptance probabilities are suggested. Some numerical experiments are also reported.

15.
In estimating individual choice behaviour using multivariate aggregate choice data, the method of data augmentation requires the imputation of individual choices given their partial sums. This article proposes and develops an efficient procedure of simulating multivariate individual choices given their aggregate sums, capitalizing on a sequence of auxiliary distributions. In this framework, a joint distribution of multiple binary vectors given their sums is approximated as a sequence of conditional Bernoulli distributions. The proposed approach is evaluated through a simulation study and is applied to a political science study.

16.
Bandwidth plays an important role in determining the performance of nonparametric estimators, such as the local constant estimator. In this article, we propose a Bayesian approach to bandwidth estimation for local constant estimators of time-varying coefficients in time series models. We establish a large sample theory for the proposed bandwidth estimator and Bayesian estimators of the unknown parameters involved in the error density. A Monte Carlo simulation study shows that (i) the proposed Bayesian estimators for bandwidth and parameters in the error density have satisfactory finite sample performance; and (ii) our proposed Bayesian approach achieves better performance in estimating the bandwidths than the normal reference rule and cross-validation. Moreover, we apply our proposed Bayesian bandwidth estimation method for the time-varying coefficient models that explain Okun’s law and the relationship between consumption growth and income growth in the U.S. For each model, we also provide calibrated parametric forms of the time-varying coefficients. Supplementary materials for this article are available online.
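The local constant (Nadaraya-Watson) estimator at the core of the approach is, for a fixed bandwidth h, a kernel-weighted average of the responses. A minimal sketch with a Gaussian kernel; the sinusoidal data-generating process below is an illustrative assumption, and the paper's contribution is to estimate h itself in a Bayesian way rather than fix it as here.

```python
import numpy as np

def local_constant(x, y, grid, h):
    """Nadaraya-Watson estimator with a Gaussian kernel:
    m_hat(t) = sum_i K((t - x_i)/h) * y_i / sum_i K((t - x_i)/h)."""
    u = (grid[:, None] - x[None, :]) / h
    w = np.exp(-0.5 * u * u)
    return (w * y[None, :]).sum(axis=1) / w.sum(axis=1)

rng = np.random.default_rng(5)
x = rng.uniform(0.0, 2.0 * np.pi, 800)
y = np.sin(x) + rng.normal(0.0, 0.3, 800)
grid = np.linspace(0.5, 2.0 * np.pi - 0.5, 50)   # interior points only
m_hat = local_constant(x, y, grid, h=0.25)
```

Too small an h makes m_hat noisy, too large an h oversmooths; that sensitivity is exactly why the bandwidth is treated as a parameter to be estimated.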

17.
Summary.  The Sloan Digital Sky Survey is an extremely large astronomical survey that is conducted with the intention of mapping more than a quarter of the sky. Among the data that it is generating are spectroscopic and photometric measurements, both containing information about the red shift of galaxies. The former are precise and easy to interpret but expensive to gather; the latter are far cheaper but correspondingly more difficult to interpret. Recently, Csabai and co-workers have described various calibration techniques aiming to predict red shift from photometric measurements. We investigate what a structured Bayesian approach to the problem can add. In particular, we are interested in providing uncertainty bounds that are associated with the underlying red shifts and the classifications of the galaxies. We find that quite a generic statistical modelling approach, using for the most part standard model ingredients, can compete with much more specific custom-made and highly tuned techniques that are already available in the astronomical literature.

18.
This paper presents the Bayesian analysis of a semiparametric regression model that consists of parametric and nonparametric components. The nonparametric component is represented with a Fourier series where the Fourier coefficients are assumed a priori to have zero means and to decay to 0 in probability at either algebraic or geometric rates. The rate of decay controls the smoothness of the response function. The posterior analysis automatically selects the amount of smoothing that is coherent with the model and data. Posterior probabilities of the parametric and semiparametric models provide a method for testing the parametric model against a non-specific alternative. The Bayes estimator's mean integrated squared error compares favourably with the theoretically optimal estimator for kernel regression.
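Under Gaussian errors and zero-mean Gaussian priors on the Fourier coefficients, the posterior mean of the coefficients is a ridge-type estimator whose penalty grows with frequency, so faster prior decay means more smoothing. A sketch with geometric decay; the cosine basis, basis size, decay rate, and simulated data are all illustrative assumptions, not the paper's exact specification.

```python
import numpy as np

def fourier_ridge(x, y, J=20, rate=0.8, noise_var=1.0):
    """Posterior-mean fit of y = f(x) + e on [0, 1], with f expanded in a
    cosine basis and priors beta_j ~ N(0, tau2_j), tau2_j decaying
    geometrically so high-frequency terms are shrunk hardest."""
    Phi = np.column_stack(
        [np.ones_like(x)]
        + [np.sqrt(2.0) * np.cos(np.pi * j * x) for j in range(1, J + 1)])
    tau2 = np.array([10.0] + [rate ** j for j in range(1, J + 1)])
    # Posterior mean = (Phi'Phi/s2 + T^-1)^-1 Phi'y/s2  (Gaussian conjugacy).
    A = Phi.T @ Phi / noise_var + np.diag(1.0 / tau2)
    beta = np.linalg.solve(A, Phi.T @ y / noise_var)
    return Phi @ beta, beta

rng = np.random.default_rng(6)
x = np.sort(rng.uniform(0.0, 1.0, 400))
y = np.cos(2.0 * np.pi * x) + rng.normal(0.0, 0.2, 400)
fit, beta = fourier_ridge(x, y, noise_var=0.04)
# The true function is (1/sqrt(2)) times the j = 2 basis function.
```

In the full Bayesian analysis the decay rate (and hence the amount of smoothing) is itself inferred from the data rather than fixed at 0.8 as above.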

19.
In this article, we develop a Bayesian variable selection method for the choice of covariates in the Poisson change-point regression model with both discrete and continuous candidate covariates. Ranging from a null model with no selected covariates to a full model including all covariates, the Bayesian variable selection method searches the entire model space, estimates posterior inclusion probabilities of the covariates, and obtains model-averaged estimates of the covariate coefficients, while simultaneously estimating a time-varying baseline rate due to change-points. For posterior computation, a Metropolis-Hastings within partially collapsed Gibbs sampler is developed to fit the Poisson change-point regression model with variable selection efficiently. We illustrate the proposed method using simulated and real datasets.

20.
Summary.  An important problem in the management of water supplies is identifying the sources of sediment. The paper develops a Bayesian approach, utilizing an end member model, to estimate the proportion of various sources of sediments in samples taken from a dam. This approach not only allows for the incorporation of prior knowledge about the geochemical compositions of the sources (or end members) but also allows for correlation between spatially contiguous samples and the prediction of the sediment's composition at unsampled locations. Sediments that were sampled from the North Pine Dam in south-east Queensland, Australia, are analysed to illustrate the approach.
