Similar Documents
20 similar documents found (search time: 437 ms).
1.
We deal with a one-layer feed-forward neural network for the Bayesian analysis of nonlinear time series. The noise is modelled nonlinearly and non-normally by means of ARCH models whose parameters all depend on a hidden Markov chain. Parameter estimation is performed by sampling from the posterior distribution via an Evolutionary Monte Carlo algorithm, in which two new crossover operators have been introduced. The unknown parameters of the model also include any missing values occurring within the observed series; thus, by treating future values as missing, it is also possible to compute point and interval multi-step-ahead predictions.
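To make the population-based sampling concrete, here is a minimal Python sketch of an Evolutionary Monte Carlo sweep with a mutation move and a generic one-point crossover. The abstract does not specify the two new crossover operators, so the operator shown, the toy target `log_post`, and the temperature ladder are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def log_post(theta):
    # Toy bimodal target standing in for the network/ARCH posterior.
    return np.logaddexp(-0.5 * np.sum((theta - 2.0) ** 2),
                        -0.5 * np.sum((theta + 2.0) ** 2))

d, n_chains = 4, 5
temps = np.linspace(1.0, 5.0, n_chains)      # temperature ladder
pop = rng.normal(size=(n_chains, d))         # population of chains

def mutation(i):
    # Random-walk Metropolis within chain i at its own temperature.
    prop = pop[i] + 0.5 * rng.normal(size=d)
    if np.log(rng.uniform()) < (log_post(prop) - log_post(pop[i])) / temps[i]:
        pop[i] = prop

def crossover():
    # One-point crossover: swap coordinate tails of two random chains.
    # The move is symmetric, so the Metropolis ratio is a density ratio.
    i, j = rng.choice(n_chains, size=2, replace=False)
    k = rng.integers(1, d)
    child_i = np.concatenate([pop[i][:k], pop[j][k:]])
    child_j = np.concatenate([pop[j][:k], pop[i][k:]])
    log_r = ((log_post(child_i) - log_post(pop[i])) / temps[i]
             + (log_post(child_j) - log_post(pop[j])) / temps[j])
    if np.log(rng.uniform()) < log_r:
        pop[i], pop[j] = child_i, child_j

for sweep in range(1000):
    for i in range(n_chains):
        mutation(i)
    if rng.uniform() < 0.3:                  # occasional crossover move
        crossover()
```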
2.
Abstract. We investigate simulation methodology for Bayesian inference in Lévy-driven stochastic volatility (SV) models. Typically, Bayesian inference for such models is performed using Markov chain Monte Carlo (MCMC); this is often a challenging task. Sequential Monte Carlo (SMC) samplers are methods that can improve over MCMC; however, there are many user-set parameters to specify. We develop a fully automated SMC algorithm, which substantially improves over the standard MCMC methods in the literature. To illustrate our methodology, we consider a model comprising a Heston model with an independent, additive, variance gamma process in the returns equation. The driving gamma process can capture the stylized behaviour of many financial time series, and a discretized version, fitted in a Bayesian manner, has been found to be very useful for modelling equity data. We demonstrate that it is possible to draw exact inference, in the sense of no time-discretization error, from the Bayesian SV model.
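The abstract does not spell out the automated tuning, so the sketch below assumes one standard recipe for an adaptive tempered SMC sampler: choose each new tempering exponent by bisection so the effective sample size (ESS) stays near a target, then resample and apply a Metropolis move. The Gaussian toy likelihood stands in for the Lévy-driven SV model.

```python
import numpy as np

rng = np.random.default_rng(1)
y = rng.normal(1.5, 1.0, size=200)               # toy data

def log_prior(th):
    return -0.5 * th ** 2                        # N(0, 1) prior on the mean

def log_lik(th):
    return -0.5 * np.sum((y[None, :] - th[:, None]) ** 2, axis=1)

def ess(lw):
    w = np.exp(lw - lw.max()); w /= w.sum()
    return 1.0 / np.sum(w ** 2)

N = 1000
th = rng.normal(size=N)                          # particles from the prior
logw = np.zeros(N)
phi = 0.0                                        # current tempering exponent

while phi < 1.0:
    ll = log_lik(th)
    if ess(logw + (1.0 - phi) * ll) > N / 2:     # can we jump straight to 1?
        new_phi = 1.0
    else:                                        # bisect for the next exponent
        lo, hi = phi, 1.0
        for _ in range(50):
            mid = 0.5 * (lo + hi)
            if ess(logw + (mid - phi) * ll) > N / 2:
                lo = mid
            else:
                hi = mid
        new_phi = max(lo, phi + 1e-4)            # guard against stalling
    logw += (new_phi - phi) * ll
    phi = new_phi
    w = np.exp(logw - logw.max()); w /= w.sum()
    th = th[rng.choice(N, size=N, p=w)]          # multinomial resampling
    logw = np.zeros(N)
    step = 2.38 * th.std() + 1e-6                # adaptive MH move kernel
    prop = th + step * rng.normal(size=N)
    log_acc = (log_prior(prop) + phi * log_lik(prop)
               - log_prior(th) - phi * log_lik(th))
    th = np.where(np.log(rng.uniform(size=N)) < log_acc, prop, th)

print("posterior mean ~", th.mean())
```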
3.
Multivariate extreme events are typically modelled using multivariate extreme value distributions. Unfortunately, there exists no finite parametrization for the class of multivariate extreme value distributions. One common approach is to model extreme events using some flexible parametric subclass. This approach has been limited to only two or three dimensions, primarily because suitably flexible high-dimensional parametric models have prohibitively complex density functions. We present an approach that allows a number of popular flexible models to be used in arbitrarily high dimensions. The approach easily handles missing and censored data, and can be employed when modelling componentwise maxima and multivariate threshold exceedances. The approach is based on a representation using conditionally independent marginal components, conditioning on positive stable random variables. We use Bayesian inference, where the conditioning variables are treated as auxiliary variables within Markov chain Monte Carlo simulations. We demonstrate these methods with an application to sea-levels, using data collected at 10 sites on the east coast of England.
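The conditional-independence representation is easy to state in code: given a positive stable variable S, the components are independent, and marginalizing over S yields the symmetric logistic multivariate extreme value distribution with unit Fréchet margins. The sketch below uses Kanter's sampler for the positive stable law; the dimension (10 sites, as in the sea-level application) and the dependence parameter alpha are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)

def positive_stable(alpha, size, rng):
    # Kanter (1975): S with Laplace transform E[exp(-t S)] = exp(-t**alpha).
    u = rng.uniform(0.0, np.pi, size)
    w = rng.exponential(1.0, size)
    return (np.sin(alpha * u) / np.sin(u) ** (1.0 / alpha)
            * (np.sin((1.0 - alpha) * u) / w) ** ((1.0 - alpha) / alpha))

def rmvlogistic(n, dim, alpha, rng):
    # Given S, the X_j = (S / E_j)**alpha are conditionally independent with
    # P(X_j <= x | S) = exp(-S * x**(-1/alpha)); marginalizing S gives
    # P(X <= x) = exp(-(sum_j x_j**(-1/alpha))**alpha), unit Fréchet margins.
    s = positive_stable(alpha, (n, 1), rng)
    e = rng.exponential(1.0, (n, dim))
    return (s / e) ** alpha

x = rmvlogistic(10000, 10, alpha=0.5, rng=rng)   # e.g. 10 coastal sites
print(x.shape)
```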
4.
This article presents a new way of modeling time-varying volatility. We generalize the usual stochastic volatility models to encompass regime-switching properties. The unobserved state variables are governed by a first-order Markov process. Bayesian estimators are constructed by Gibbs sampling. High-, medium- and low-volatility states are identified for the Standard and Poor's 500 weekly return data. Persistence in volatility is explained by the persistence in the low- and the medium-volatility states. The high-volatility regime is able to capture the 1987 crash and overlap considerably with four U.S. economic recession periods.
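A core Gibbs step for such a model is drawing the hidden regime path given the other parameters by forward filtering and backward sampling (FFBS). The sketch below assumes a simplified emission model, zero-mean normal returns with a regime-specific variance, rather than the full stochastic volatility specification; the transition matrix and variances are illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)

def ffbs(y, P, sigmas, pi0, rng):
    T, K = len(y), len(sigmas)
    # Emission likelihoods: N(0, sigma_k^2) per regime.
    like = np.exp(-0.5 * (y[:, None] / sigmas) ** 2) / sigmas
    filt = np.zeros((T, K))
    f = pi0 * like[0]
    filt[0] = f / f.sum()
    for t in range(1, T):                        # forward filtering
        f = (filt[t - 1] @ P) * like[t]
        filt[t] = f / f.sum()
    s = np.zeros(T, dtype=int)                   # backward sampling
    s[T - 1] = rng.choice(K, p=filt[T - 1])
    for t in range(T - 2, -1, -1):
        b = filt[t] * P[:, s[t + 1]]
        s[t] = rng.choice(K, p=b / b.sum())
    return s

P = np.array([[0.95, 0.04, 0.01],
              [0.03, 0.94, 0.03],
              [0.02, 0.08, 0.90]])               # low/medium/high persistence
sigmas = np.array([0.5, 1.0, 3.0])
y = rng.normal(0.0, 1.0, size=500)               # toy return series
states = ffbs(y, P, sigmas, np.ones(3) / 3, rng)
```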
5.
Studies of the behaviors of glaciers, ice sheets, and ice streams rely heavily on both observations and physical models. Data acquired via remote sensing provide critical information on geometry and movement of ice over large sections of Antarctica and Greenland. However, uncertainties are present in both the observations and the models. Hence, there is a need for combining these information sources in a fashion that incorporates uncertainty and quantifies its impact on conclusions. We present a hierarchical Bayesian approach to modeling ice-stream velocities incorporating physical models and observations regarding velocity, ice thickness, and surface elevation from the North East Ice Stream in Greenland. The Bayesian model leads to interesting issues in model assessment and computation.
6.
Summary. The classical approach to statistical analysis is usually based upon finding values for model parameters that maximize the likelihood function. Model choice in this context is often also based on the likelihood function, but with the addition of a penalty term for the number of parameters. Though models may be compared pairwise by using likelihood ratio tests for example, various criteria such as the Akaike information criterion have been proposed as alternatives when multiple models need to be compared. In practical terms, the classical approach to model selection usually involves maximizing the likelihood function associated with each competing model and then calculating the corresponding criterion value(s). However, when large numbers of models are possible, this quickly becomes infeasible unless a method that simultaneously maximizes over both parameter and model space is available. We propose an extension to the traditional simulated annealing algorithm that allows for moves that not only change parameter values but also move between competing models. This transdimensional simulated annealing algorithm can therefore be used to locate models and parameters that minimize criteria such as the Akaike information criterion, but within a single algorithm, removing the need for large numbers of simulations to be run. We discuss the implementation of the transdimensional simulated annealing algorithm and use simulation studies to examine its performance in realistically complex modelling situations. We illustrate our ideas with a pedagogic example based on the analysis of an autoregressive time series and two more detailed examples: one on variable selection for logistic regression and the other on model selection for the analysis of integrated recapture–recovery data.
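A minimal sketch of the transdimensional simulated annealing idea, applied here to variable selection in linear regression: moves either perturb a coefficient (within-model) or toggle a variable in or out (between-model), and acceptance anneals on AIC. The data, proposal scales, and cooling schedule are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)
n, p = 200, 8
X = rng.normal(size=(n, p))
beta_true = np.array([2.0, 0.0, -1.5, 0.0, 0.0, 1.0, 0.0, 0.0])
y = X @ beta_true + rng.normal(size=n)

def aic(gamma, beta):
    # Gaussian likelihood with the error variance profiled out.
    rss = np.sum((y - X[:, gamma] @ beta[gamma]) ** 2)
    return n * np.log(rss / n) + 2 * gamma.sum()

gamma = np.ones(p, dtype=bool)                   # start from the full model
beta = np.zeros(p)
current = aic(gamma, beta)
T = 10.0
for it in range(20000):
    g, b = gamma.copy(), beta.copy()
    if rng.uniform() < 0.7 and g.any():          # within-model move
        j = rng.choice(np.flatnonzero(g))
        b[j] += 0.1 * rng.normal()
    else:                                        # between-model move
        j = rng.integers(p)
        g[j] = ~g[j]
        b[j] = 0.1 * rng.normal() if g[j] else 0.0
    cand = aic(g, b) if g.any() else np.inf
    if np.log(rng.uniform()) < -(cand - current) / T:
        gamma, beta, current = g, b, cand
    T *= 0.9995                                  # geometric cooling
print("selected variables:", np.flatnonzero(gamma), "AIC:", round(current, 2))
```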
7.
It is now possible to carry out Bayesian image segmentation from a continuum parametric model with an unknown number of regions. However, few suitable parametric models exist. We set out to model processes which have realizations that are naturally described by coloured planar triangulations. Triangulations are already used to represent image structure in machine vision and, in finite element analysis, for domain decomposition. However, no normalizable parametric model, with realizations that are coloured triangulations, has been specified to date. We show how this must be done, and in particular we prove that a normalizable measure on the space of triangulations in the interior of a fixed simple polygon derives from a Poisson point process of vertices. We show how such models may be analysed by using Markov chain Monte Carlo methods and we present two case-studies, including convergence analysis.
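As a rough illustration of the generative core that the normalizability result concerns, the sketch below draws a Poisson number of vertices in a fixed region, triangulates them, and colours the triangles. The unit square and the deterministic Delaunay triangulation are simplifications; the paper's model is a measure over the space of triangulations itself, not a fixed triangulation of the points.

```python
import numpy as np
from scipy.spatial import Delaunay

rng = np.random.default_rng(5)
lam = 40.0                                       # Poisson intensity on [0, 1]^2
n = rng.poisson(lam)                             # Poisson number of vertices
corners = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
pts = np.vstack([corners, rng.uniform(size=(n, 2))])
tri = Delaunay(pts)                              # triangulate the vertex set
colours = rng.integers(0, 3, size=len(tri.simplices))  # colour each triangle
print(len(tri.simplices), "triangles")
```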
8.
We propose a simulation-based Bayesian approach to the analysis of long memory stochastic volatility models, stationary and nonstationary. The main tool used to reduce the likelihood function to a tractable form is an approximate state-space representation of the model. A data set of stock market returns is analyzed with the proposed method. The approach taken here allows a quantitative assessment of the empirical evidence in favor of the stationarity, or nonstationarity, of the instantaneous volatility of the data.
9.
Data collected before the routine application of prenatal screening are of unique value in estimating the natural live-birth prevalence of Down syndrome. However, many of these data derive from births over 20 years ago and are of uncertain quality. In particular, they are subject to varying degrees of underascertainment. Published approaches have used ad hoc corrections to deal with this problem or have been restricted to data sets in which ascertainment is assumed to be complete. In this paper we adopt a Bayesian approach to modelling ascertainment and live-birth prevalence. We consider three prior specifications concerning ascertainment and compare predicted maternal-age-specific prevalence under these three different prior specifications. The computations are carried out by using Markov chain Monte Carlo methods in which model parameters and missing data are sampled.
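One way to see the ascertainment structure is as thinning: observed cases are a binomial sample of true cases, so the observed counts are marginally Poisson with mean scaled by the ascertainment probability, and prevalence and ascertainment are only jointly identified, hence the informative prior. The single prevalence parameter and all numbers below are illustrative assumptions (the paper models maternal-age-specific prevalence and compares three prior specifications).

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(6)
N = np.array([12000.0, 15000.0, 9000.0, 4000.0])  # births per register (toy)
y = np.array([11, 17, 12, 9])                     # ascertained cases (toy)

def log_post(lam, pi):
    # y_i ~ Binomial(n_i, pi), n_i ~ Poisson(lam * N_i)
    # => marginally y_i ~ Poisson(pi * lam * N_i).
    if lam <= 0 or not 0 < pi < 1:
        return -np.inf
    ll = stats.poisson.logpmf(y, pi * lam * N).sum()
    return (ll + stats.gamma.logpdf(lam, 2, scale=1e-3)   # prevalence prior
               + stats.beta.logpdf(pi, 8, 2))             # informative ascertainment prior

lam, pi = 1e-3, 0.8
draws = []
for it in range(20000):
    lam_p = lam * np.exp(0.1 * rng.normal())      # log-scale random walk
    pi_p = pi + 0.05 * rng.normal()
    # include the log-normal proposal Jacobian for lambda
    log_r = (log_post(lam_p, pi_p) - log_post(lam, pi)
             + np.log(lam_p) - np.log(lam))
    if np.log(rng.uniform()) < log_r:
        lam, pi = lam_p, pi_p
    draws.append((lam, pi))
draws = np.array(draws)[5000:]
print("prevalence per 1000 births:", 1000 * draws[:, 0].mean())
```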
10.
Estimation in mixed linear models is, in general, computationally demanding, since applied problems may involve extensive data sets and large numbers of random effects. Existing computer algorithms are slow and/or require large amounts of memory. These problems are compounded in generalized linear mixed models for categorical data, since even approximate methods involve fitting of a linear mixed model within steps of an iteratively reweighted least squares algorithm. Only in models in which the random effects are hierarchically nested can the computations for fitting these models to large data sets be carried out rapidly. We describe a data augmentation approach to these computational difficulties in which we repeatedly fit an overlapping series of submodels, incorporating the missing terms in each submodel as 'offsets'. The submodels are chosen so that they have a nested random-effect structure, thus allowing maximum exploitation of the computational efficiency which is available in this case. Examples of the use of the algorithm for both metric and discrete responses are discussed, all calculations being carried out using macros within the MLwiN program.
11.
Bayesian models for relative archaeological chronology building
For many years, archaeologists have postulated that the numbers of various artefact types found within excavated features should give insight about their relative dates of deposition even when stratigraphic information is not present. A typical data set used in such studies can be reported as a cross-classification table (often called an abundance matrix or, equivalently, a contingency table) of excavated features against artefact types. Each entry of the table represents the number of a particular artefact type found in a particular archaeological feature. Methodologies for attempting to identify temporal sequences on the basis of such data are commonly referred to as seriation techniques. Several different procedures for seriation including both parametric and non-parametric statistics have been used in an attempt to reconstruct relative chronological orders on the basis of such contingency tables. We develop some possible model-based approaches that might be used to aid in relative, archaeological chronology building. We use the recently developed Markov chain Monte Carlo method based on Langevin diffusions to fit some of the models proposed. Predictive Bayesian model choice techniques are then employed to ascertain which of the models that we develop are most plausible. We analyse two data sets taken from the literature on archaeological seriation.
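The Langevin-diffusion-based MCMC referred to here is, in its discretized and Metropolis-corrected form, the Metropolis-adjusted Langevin algorithm (MALA): proposals drift along the gradient of the log posterior and are accepted or rejected with a Hastings correction. A minimal sketch on a Gaussian toy target, standing in for the seriation model posterior whose gradient would be used the same way:

```python
import numpy as np

rng = np.random.default_rng(7)

def log_post(th):
    return -0.5 * np.sum(th ** 2)    # toy target

def grad_log_post(th):
    return -th

def mala(th0, eps, n_iter, rng):
    th = np.asarray(th0, dtype=float)
    out = np.empty((n_iter, th.size))
    for i in range(n_iter):
        # Langevin drift plus Gaussian noise as the proposal.
        mean_fwd = th + 0.5 * eps ** 2 * grad_log_post(th)
        prop = mean_fwd + eps * rng.normal(size=th.size)
        mean_bwd = prop + 0.5 * eps ** 2 * grad_log_post(prop)
        # Hastings correction for the asymmetric proposal.
        log_q_fwd = -np.sum((prop - mean_fwd) ** 2) / (2 * eps ** 2)
        log_q_bwd = -np.sum((th - mean_bwd) ** 2) / (2 * eps ** 2)
        if np.log(rng.uniform()) < (log_post(prop) - log_post(th)
                                    + log_q_bwd - log_q_fwd):
            th = prop
        out[i] = th
    return out

samples = mala(np.zeros(5), eps=0.9, n_iter=5000, rng=rng)
print(samples.mean(axis=0))
```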
12.
Stylized facts show that average growth rates of U.S. per capita consumption and income differ in recession and expansion periods. Because a linear combination of such series need not be a constant-mean process, standard cointegration analysis between the variables to examine the permanent income hypothesis may not be valid. To model the changing growth rates in both series, we introduce a multivariate Markov trend model that accounts for different growth rates in consumption and income during expansions and recessions and across variables within both regimes. The deviations from the multivariate Markov trend are modeled by a vector autoregression (VAR) model. Bayes estimates of this model are obtained using Markov chain Monte Carlo methods. The empirical results suggest the existence of a cointegration relation between U.S. per capita disposable income and consumption, after correction for a multivariate Markov trend. This result is also obtained when per capita investment is added to the VAR.
13.
Summary. The study of human immunodeficiency virus dynamics is one of the most important areas in research into acquired immune deficiency syndrome in recent years. Non-linear mixed effects models have been proposed for modelling viral dynamic processes. A challenging problem in the modelling is to identify repeatedly measured (time-dependent), but possibly missing, immunologic or virologic markers (covariates) for viral dynamic parameters. For missing time-dependent covariates in non-linear mixed effects models, the commonly used complete-case, mean imputation and last value carried forward methods may give misleading results. We propose a three-step hierarchical multiple-imputation method, implemented by Gibbs sampling, which imputes the missing data at the individual level but can pool information across individuals. We compare various methods by Monte Carlo simulations and find that the multiple-imputation method proposed performs the best in terms of bias and mean-squared errors in the estimates of covariate coefficients. By applying the favoured multiple-imputation method to clinical data, we conclude that there is a negative correlation between the viral decay rate (a virological response parameter) and CD4 or CD8 cell counts during the treatment; this is counter-intuitive, but biologically interpretable on the basis of findings from other clinical studies. These results may have an important influence on decisions about treatment for acquired immune deficiency syndrome patients.
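The final stage of any multiple-imputation scheme is pooling the per-imputation fits by Rubin's rules, combining within- and between-imputation variance. A minimal sketch of that pooling step; the estimates below are illustrative placeholders, not the clinical results.

```python
import numpy as np

def pool(estimates, variances):
    # estimates, variances: one entry per completed (imputed) data set.
    estimates = np.asarray(estimates, dtype=float)
    variances = np.asarray(variances, dtype=float)
    m = len(estimates)
    qbar = estimates.mean()                  # pooled point estimate
    ubar = variances.mean()                  # within-imputation variance
    b = estimates.var(ddof=1)                # between-imputation variance
    total_var = ubar + (1 + 1 / m) * b       # Rubin's total variance
    return qbar, total_var

# e.g. a covariate coefficient estimated on m = 5 completed data sets
est = [-0.41, -0.38, -0.44, -0.40, -0.39]
var = [0.012, 0.011, 0.013, 0.012, 0.012]
qbar, v = pool(est, var)
print(f"pooled estimate {qbar:.3f}, std. error {v ** 0.5:.3f}")
```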
14.
Summary. The paper is concerned with new methodology for statistical inference for final outcome infectious disease data using certain structured population stochastic epidemic models. A major obstacle to inference for such models is that the likelihood is both analytically and numerically intractable. The approach that is taken here is to impute missing information in the form of a random graph that describes the potential infectious contacts between individuals. This level of imputation overcomes various constraints of existing methodologies and yields more detailed information about the spread of disease. The methods are illustrated with both real and test data.
15.
Bandwidth plays an important role in determining the performance of nonparametric estimators, such as the local constant estimator. In this article, we propose a Bayesian approach to bandwidth estimation for local constant estimators of time-varying coefficients in time series models. We establish a large sample theory for the proposed bandwidth estimator and Bayesian estimators of the unknown parameters involved in the error density. A Monte Carlo simulation study shows that (i) the proposed Bayesian estimators for bandwidth and parameters in the error density have satisfactory finite sample performance; and (ii) our proposed Bayesian approach achieves better performance in estimating the bandwidths than the normal reference rule and cross-validation. Moreover, we apply our proposed Bayesian bandwidth estimation method for the time-varying coefficient models that explain Okun's law and the relationship between consumption growth and income growth in the U.S. For each model, we also provide calibrated parametric forms of the time-varying coefficients. Supplementary materials for this article are available online.
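A minimal sketch of the underlying idea: treat the bandwidth of a local constant (Nadaraya-Watson) estimator as a parameter, build a likelihood from leave-one-out fitted values with Gaussian errors, and sample the bandwidth and error scale by random-walk Metropolis. The paper's error-density treatment and time-varying-coefficient setting are richer; this toy regression is an illustrative assumption.

```python
import numpy as np

rng = np.random.default_rng(8)
n = 300
x = np.sort(rng.uniform(0, 1, n))
y = np.sin(2 * np.pi * x) + 0.3 * rng.normal(size=n)

def loo_fit(h):
    # Leave-one-out Nadaraya-Watson fit with a Gaussian kernel.
    k = np.exp(-0.5 * ((x[:, None] - x[None, :]) / h) ** 2)
    np.fill_diagonal(k, 0.0)
    return (k @ y) / k.sum(axis=1)

def log_post(log_h, log_s):
    h, s = np.exp(log_h), np.exp(log_s)
    resid = y - loo_fit(h)
    ll = -n * np.log(s) - 0.5 * np.sum(resid ** 2) / s ** 2
    return ll - 0.5 * (log_h ** 2 + log_s ** 2)  # vague normal priors on logs

log_h, log_s = np.log(0.1), np.log(0.3)
keep = []
for it in range(3000):
    prop = (log_h + 0.1 * rng.normal(), log_s + 0.1 * rng.normal())
    if np.log(rng.uniform()) < log_post(*prop) - log_post(log_h, log_s):
        log_h, log_s = prop
    keep.append(log_h)
print("posterior mean bandwidth:", np.exp(np.mean(keep[1000:])))
```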
16.
Mixture models are flexible tools in density estimation and classification problems. Bayesian estimation of such models typically relies on sampling from the posterior distribution using Markov chain Monte Carlo. Label switching arises because the posterior is invariant to permutations of the component parameters. Methods for dealing with label switching have been studied fairly extensively in the literature, with the most popular approaches being those based on loss functions. However, many of these algorithms turn out to be too slow in practice, and can be infeasible as the size and/or dimension of the data grow. We propose a new, computationally efficient algorithm based on a loss function interpretation, and show that it can scale up well in large data set scenarios. We then review earlier solutions that can scale up well for large data sets, and compare their performance on simulated and real data sets. We conclude with a discussion and recommendations concerning all the methods studied.
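For concreteness, here is a generic pivot-style relabelling of the loss-function family compared in the paper (not necessarily the authors' specific algorithm): permute each draw's component labels to minimize squared distance to a running reference, solved exactly with the Hungarian algorithm.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def relabel(draws):
    # draws: (n_iter, K, d) array of component parameters from the sampler.
    draws = np.array(draws, dtype=float)
    ref = draws[0].copy()                    # reference (pivot) ordering
    for t in range(1, len(draws)):
        # cost[i, j]: squared distance if draw-t component j gets label i.
        cost = ((ref[:, None, :] - draws[t][None, :, :]) ** 2).sum(axis=2)
        _, perm = linear_sum_assignment(cost)
        draws[t] = draws[t][perm]
        ref = draws[: t + 1].mean(axis=0)    # update the running reference
    return draws

rng = np.random.default_rng(9)
true = np.array([[-2.0], [0.0], [2.0]])      # K = 3 component means, d = 1
raw = np.stack([true[rng.permutation(3)] + 0.1 * rng.normal(size=(3, 1))
                for _ in range(500)])        # label-switched draws
fixed = relabel(raw)
print(fixed.mean(axis=0).ravel())            # means in a consistent labelling
```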
17.
Posterior distributions for mixture models often have multiple modes, particularly near the boundaries of the parameter space where the component variances are small. This multimodality results in predictive densities that are extremely rough. The authors propose an adjustment of the standard normal-inverse-gamma prior structure that directly controls the ratio of the largest component variance to the smallest component variance. The prior adjustment smooths out modes near the boundary of the parameter space, producing more reasonable estimates of the predictive density.
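The abstract does not give the exact form of the adjustment, so the sketch below illustrates the idea with the simplest possible variant: a hard bound on the ratio of the largest to the smallest component variance, imposed on inverse-gamma prior draws by rejection. The bound and hyperparameters are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(10)

def prior_variances(K, a, b, max_ratio, rng):
    # Draw K inverse-gamma(a, b) variances subject to max/min <= max_ratio.
    while True:
        v = 1.0 / rng.gamma(a, 1.0 / b, size=K)  # inverse-gamma draws
        if v.max() / v.min() <= max_ratio:
            return v

draws = np.array([prior_variances(3, 2.0, 1.0, 10.0, rng)
                  for _ in range(2000)])
print("largest observed ratio:",
      (draws.max(axis=1) / draws.min(axis=1)).max())
```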
18.
Caries on Permanent Teeth: A Non-parametric Bayesian Analysis
Most earlier epidemiological investigations of dental caries have been based on cross-sectional data. Subject-specific information on dental caries in the past, and the duration of exposure of each tooth to the oral environment, are obviously important factors also influencing the presence of dental caries in the future. This has led us to consider multivariate survival models in which the information about tooth eruption and failure times is combined to assess caries risk. A non-parametric Bayesian intensity model is presented, reflecting both the within-subject and between-subject sources of variability, and a corresponding split of variability when considering the 28 permanent teeth. We analyse a data set consisting of the dental histories of 240 boys, where the observations are based on predetermined dental examinations taking place approximately once every year. Markov chain Monte Carlo integration techniques are applied in the numerical work.
19.
Hidden Markov models form an extension of mixture models which provides a flexible class of models exhibiting dependence and a possibly large degree of variability. We show how reversible jump Markov chain Monte Carlo techniques can be used to estimate the parameters as well as the number of components of a hidden Markov model in a Bayesian framework. We employ a mixture of zero-mean normal distributions as our main example and apply this model to three sets of data from finance, meteorology and geomagnetism.
20.
In this article, we develop statistical models for the analysis of correlated mixed categorical (binary and ordinal) response data arising in medical and epidemiologic studies. There is evidence in the literature to suggest that models including a correlation structure can lead to substantial improvement in precision of estimation or are more appropriate (accurate). We use a very rich class of scale mixture of multivariate normal (SMMVN) link functions to accommodate heavy-tailed distributions. In order to incorporate available historical information, we propose a unified prior elicitation scheme based on SMMVN-link models. Further, simulation-based techniques are developed to assess model adequacy. Finally, a real data example from prostate cancer studies is used to illustrate the proposed methodologies.