Similar Articles
20 similar articles found.
1.
A family of threshold nonlinear generalised autoregressive conditionally heteroscedastic models is considered that allows smooth transitions between regimes, capturing size asymmetry via an exponential smooth transition function. A Bayesian approach is taken and an efficient adaptive sampling scheme is employed for inference, including a novel extension to a recently proposed prior for the smoothing parameter that solves a likelihood identification problem. A simulation study illustrates that the sampling scheme performs well: the chosen prior remains close to uninformative while successfully ensuring identification of the model parameters and accurate inference for the smoothing parameter. An empirical study confirms the potential suitability of the model, highlighting the presence of both mean and volatility (size) asymmetry; the model is favoured over modern, popular competitors, including those with sign asymmetry, via the deviance information criterion.
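For illustration only (a generic sketch, not necessarily the exact specification adopted in the paper), a smooth transition GARCH(1,1) variance equation with an exponential transition function can be written as

\[ \sigma_t^2 = \omega + \bigl[\alpha_1 + \alpha_2\, G(\varepsilon_{t-1};\gamma)\bigr]\varepsilon_{t-1}^2 + \beta\,\sigma_{t-1}^2, \qquad G(\varepsilon;\gamma) = 1 - \exp(-\gamma\,\varepsilon^2), \]

where the transition function G moves smoothly from 0 to 1 as the size of the shock grows, so the ARCH coefficient shifts between \alpha_1 and \alpha_1 + \alpha_2. The smoothing parameter \gamma governs how gradual that transition is, and it is this kind of parameter whose weak likelihood identification the prior extension mentioned above is designed to address.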

2.

Data on the hours that individuals spend on daily activities (living hours) are compositional and include many zeros, because individuals do not pursue every activity every day, so such data must be handled with care in empirical analyses. The Bayesian method offers several advantages for analysing compositional data. In this study, we analyse the time allocation of Japanese married couples using a Bayesian model. Based on Bayes factors, we compare models that do and do not account for the correlation between the spouses' time-use data; the model that accounts for the correlation performs better. We show that the Bayesian method can adequately take into account the correlation between wives' and husbands' living hours, facilitating the calculation of the partial effects that activity variables have on living hours. These partial effects are easily calculated from the posterior results of the model that allows for the correlation.

3.
Estimation and Properties of a Time-Varying EGARCH(1,1) in Mean Model
Time-varying GARCH-M models are commonly employed in econometrics and financial economics, yet the recursive nature of the conditional variance makes likelihood analysis of these models computationally infeasible. This article outlines the issues and suggests employing a Markov chain Monte Carlo algorithm that allows the calculation of a classical estimator via the simulated EM algorithm, or of a simulated Bayesian solution, in only O(T) computational operations, where T is the sample size. Furthermore, the theoretical dynamic properties of a time-varying-parameter EGARCH(1,1)-M are derived. We discuss these properties and apply the suggested Bayesian estimation to three major stock markets.
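As a point of reference, a fixed-parameter EGARCH(1,1)-M can be sketched as follows (a generic form with the in-mean term written on the variance; the time-varying model studied above lets the coefficients evolve over time, and the paper's exact parameterisation may differ):

\[ y_t = \mu + \delta\,\sigma_t^2 + \varepsilon_t, \qquad \varepsilon_t = \sigma_t z_t, \qquad \ln\sigma_t^2 = \omega + \beta\,\ln\sigma_{t-1}^2 + \alpha\bigl(|z_{t-1}| - \mathrm{E}|z_{t-1}|\bigr) + \gamma\, z_{t-1}. \]

The recursion in \ln\sigma_t^2 is what makes exact likelihood evaluation costly once the coefficients themselves are allowed to vary over time, which is the difficulty the simulated EM / MCMC approach above handles in O(T) operations.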

4.
This paper presents a comprehensive review and comparison of five computational methods for Bayesian model selection, based on MCMC simulations from posterior model parameter distributions. We apply these methods to a well-known and important class of models in financial time series analysis, namely GARCH and GARCH-t models for conditional return distributions (assuming normal and t-distributions). We compare their performance with the more common maximum likelihood-based model selection on simulated and real market data. All five MCMC methods proved reliable in the simulation study, although they differed in their computational demands. Results on simulated data also show that, for large degrees of freedom (where the t-distribution becomes more similar to a normal one), Bayesian model selection leads to better decisions in favor of the true model than maximum likelihood. Results on market data show the instability of the harmonic mean estimator and the reliability of the advanced model selection methods.
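To make the harmonic mean estimator mentioned in the final sentence concrete, here is a minimal Python sketch (an illustration of the standard estimator, not code from the paper), assuming the per-draw log-likelihoods log p(y | theta_i) have been stored for posterior MCMC draws theta_i:

import numpy as np

def log_marginal_likelihood_harmonic_mean(loglik_draws):
    # Harmonic mean estimator: p(y) ~= [ (1/N) * sum_i 1 / p(y | theta_i) ]^(-1),
    # with theta_i drawn from the posterior. Computed on the log scale using a
    # log-sum-exp trick to avoid numerical overflow.
    ll = np.asarray(loglik_draws, dtype=float)
    n = ll.size
    neg_ll = -ll
    m = neg_ll.max()
    log_mean_inverse_lik = m + np.log(np.exp(neg_ll - m).sum()) - np.log(n)
    return -log_mean_inverse_lik

# Dummy numbers standing in for stored per-draw log-likelihoods:
draws = [-1023.4, -1025.1, -1022.8, -1024.0]
print(log_marginal_likelihood_harmonic_mean(draws))

Because the estimate is dominated by the posterior draws with the smallest likelihood, the estimator can have very large (even infinite) variance, which is consistent with the instability on market data reported above.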

5.
Non-parametric Bayesian Estimation of a Spatial Poisson Intensity
A method introduced by Arjas & Gasbarra (1994) and later modified by Arjas & Heikkinen (1997) for the non-parametric Bayesian estimation of an intensity on the real line is generalized to cover spatial processes. The method is based on a model approximation where the approximating intensities have the structure of a piecewise constant function. Random step functions on the plane are generated using Voronoi tessellations of random point patterns. Smoothing between nearby intensity values is applied by means of a Markov random field prior in the spirit of Bayesian image analysis. The performance of the method is illustrated in examples with both real and simulated data.

6.
The article considers a Gaussian model with the mean and the variance modeled flexibly as functions of the independent variables. The estimation is carried out using a Bayesian approach that allows the identification of significant variables in the variance function, as well as averaging over all possible models in both the mean and the variance functions. The computation is carried out by a simulation method that is carefully constructed to ensure that it converges quickly and produces iterates from the posterior distribution that have low correlation. Real and simulated examples demonstrate that the proposed method works well. The method in this paper is important because (a) it produces more realistic prediction intervals than nonparametric regression estimators that assume a constant variance; (b) variable selection identifies the variables in the variance function that are important; (c) variable selection and model averaging produce more efficient prediction intervals than those obtained by regular nonparametric regression.

7.
Bayesian analysis of mortality data
Congdon argued that parametric modelling of mortality data is necessary in many practical demographic problems. In this paper, we focus on a form of model introduced by Heligman and Pollard in 1980, and we adopt a Bayesian analysis, using Markov chain Monte Carlo simulation, to produce the required posterior summaries. This opens the way to richer, more flexible inference summaries and avoids the numerical problems encountered with classical methods. Particular methodologies to cope with incomplete life-tables, and a derivation of joint lifetimes, median times to death and related quantities of interest, are also presented.
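For reference, the eight-parameter law proposed by Heligman and Pollard (1980) is commonly written as

\[ \frac{q_x}{1 - q_x} = A^{(x+B)^C} + D\,\exp\!\bigl[-E(\ln x - \ln F)^2\bigr] + G H^x, \]

where q_x is the probability of dying between ages x and x+1, the first term captures childhood mortality, the second the accident hump of early adulthood, and the third the roughly geometric rise of mortality at older ages. In a Bayesian treatment, priors are placed on the eight parameters (A, ..., H) and the posterior is explored by MCMC, as described above.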

8.
We incorporate a random effect into a multivariate discrete proportional hazards model and propose an efficient semiparametric Bayesian estimation method. By introducing a prior process for the parameters of the baseline hazards, we obtain a nonparametric estimate of the baseline hazard function. Using a state-space representation, we derive a dynamic model of the baseline hazard function and propose an efficient block sampler for the Markov chain Monte Carlo method. A numerical example using kidney patient data is given.

9.
This article presents a Bayesian analysis of a multinomial probit model by building on previous work that specified priors on identified parameters. The main contribution of our article is to propose a prior on the covariance matrix of the latent utilities that permits elements of the inverse of the covariance matrix to be identically zero. This allows a parsimonious representation of the covariance matrix when such parsimony exists. The methodology is applied to both simulated and real data, and its ability to obtain more efficient estimators of the covariance matrix and regression coefficients is assessed using simulated data.

10.
A stochastic epidemic model is defined in which each individual belongs to a household, a secondary grouping (typically school or workplace) and also the community as a whole. Moreover, infectious contacts take place in these three settings according to potentially different rates. For this model, we consider how different kinds of data can be used to estimate the infection rate parameters with a view to understanding what can and cannot be inferred. Among other things we find that temporal data can be of considerable inferential benefit compared with final size data, that the degree of heterogeneity in the data can have a considerable effect on inference for non-household transmission, and that inferences can be materially different from those obtained from a model with only two levels of mixing. We illustrate our findings by analysing a highly detailed dataset concerning a measles outbreak in Hagelloch, Germany.

11.
It is now possible to carry out Bayesian image segmentation from a continuum parametric model with an unknown number of regions. However, few suitable parametric models exist. We set out to model processes whose realizations are naturally described by coloured planar triangulations. Triangulations are already used to represent image structure in machine vision and, in finite element analysis, for domain decomposition. However, no normalizable parametric model with realizations that are coloured triangulations has been specified to date. We show how this must be done, and in particular we prove that a normalizable measure on the space of triangulations in the interior of a fixed simple polygon derives from a Poisson point process of vertices. We show how such models may be analysed using Markov chain Monte Carlo methods and we present two case studies, including convergence analysis.

12.
This paper focuses on estimating the number of species and the number of abundant species in a specific geographic region and, consequently, on drawing inferences about the number of rare species. The word 'species' is generic, referring to any objects in a population that can be categorized. In areas such as biology, ecology, and literature, species frequency distributions are usually severely skewed, in which case the population contains a few very abundant species and many rare ones. To model such a situation, we develop an asymmetric multinomial-Dirichlet probability model using species frequency data. Posterior distributions for the number of species and the number of abundant species are obtained, and posterior inferences are drawn using MCMC simulations. Simulations are used to demonstrate and evaluate the developed methodology. We apply the method to a DNA segment data set and a butterfly data set. Comparisons among different approaches to inferring the number of species are also discussed.

13.
It is very important to study the occurrence of high levels of particulate matter because of the potential harm to people's health and to the environment. In the present work we use a non-homogeneous Poisson model to analyse the rate of exceedances of particulate matter with diameter smaller than 2.5 microns (PM2.5). Models with and without change-points are considered and applied to data from Bogota, Colombia, and Mexico City, Mexico. Results show that, whereas in Bogota larger particles pose the more serious problem, in Mexico City PM2.5 caused serious problems in the recent past, even though levels are now better controlled.

14.
One critical issue in the Bayesian approach is choosing the priors when there is not enough prior information to specify hyperparameters. Several improper noninformative priors for capture-recapture models have been proposed in the literature. It is known that the Bayesian estimate can be sensitive to the choice of prior, especially when the sample size is small to moderate. Yet how to choose a noninformative prior for a given model remains an open question. In this paper, as a first step, we consider the problem of estimating the population size for the M_t model using noninformative priors. The M_t model has wide application in wildlife management, ecology, software reliability, epidemiological studies, census undercount, and other research areas. Four commonly used noninformative priors are considered. We find that the choice of noninformative prior depends only on the number of sampling occasions. Guidelines on the choice of noninformative priors are provided based on the simulation results, and the propriety of applying improper noninformative priors is discussed. Simulation studies inspect the frequentist performance of Bayesian point and interval estimates under the different noninformative priors for various population sizes, capture probabilities, and numbers of sampling occasions. The simulation results show that the Bayesian approach can provide more accurate estimates of the population size than the MLE for small samples. Two real-data examples are given to illustrate the method.
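As background (a standard form of the model, not a result specific to this paper), the full likelihood of the M_t model, in which the capture probability p_j varies only with the sampling occasion j = 1, ..., k, is

\[ L(N, p_1, \dots, p_k \mid \text{data}) \;\propto\; \frac{N!}{(N - M_{k+1})!}\, \prod_{j=1}^{k} p_j^{\,n_j} (1 - p_j)^{\,N - n_j}, \]

where n_j is the number of individuals captured on occasion j and M_{k+1} is the total number of distinct individuals captured over all occasions. Bayesian inference for the population size N then depends on the (possibly improper) noninformative prior placed on N and on the p_j, which is exactly the choice studied above.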

15.
It is well known that long-term exposure to high levels of pollution is hazardous to human health. Therefore, it is important to study and understand the behavior of pollutants in general. In this work, we study the occurrence of a pollutant concentration's surpassing a given threshold (an exceedance) as well as the length of time that the concentration stays above it. A general N(t)/D/1 queueing model is considered to analyze those problems jointly. A non-homogeneous Poisson process is used to model the arrivals of clusters of exceedances. Geometric and generalized negative binomial distributions are used to model the amount of time (cluster size) that the pollutant concentration stays above the threshold; a mixture model is also used for the cluster size distribution. The rate function of the non-homogeneous Poisson process is assumed to be of either the Weibull or the Musa–Okumoto type. The model that best fits the data is selected using the Bayes discrimination method, the sum of absolute differences, and a graphical criterion. Results are applied to the daily maximum ozone measurements provided by the monitoring network of the Metropolitan Area of Mexico City.
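For concreteness, in one common parameterisation (the paper may use a different one) the two rate functions named above are

\[ \lambda_{W}(t) = \frac{\beta}{\sigma}\left(\frac{t}{\sigma}\right)^{\beta - 1} \quad\text{(Weibull, or power-law, rate)}, \qquad \lambda_{MO}(t) = \frac{\lambda_0}{1 + \lambda_0 \theta t} \quad\text{(Musa–Okumoto rate)}, \]

so the expected number of cluster arrivals in [0, T] is m(T) = \int_0^T \lambda(t)\,dt, equal to (T/\sigma)^{\beta} for the Weibull rate and \theta^{-1}\ln(1 + \lambda_0 \theta T) for the Musa–Okumoto rate.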

16.
Hidden Markov models (HMMs) have been shown to be a flexible tool for modelling complex biological processes. However, choosing the number of hidden states remains an open question, and the inclusion of random effects also deserves more research, as it is a recent addition to fixed-effect HMMs in many application fields. We present a Bayesian mixed HMM with an unknown number of hidden states and fixed covariates. The model is fitted using reversible-jump Markov chain Monte Carlo, avoiding the need to select the number of hidden states. We show through simulations that the estimates produced are more precise than those from a fixed-effect HMM, and we illustrate its practical application to the analysis of DNA copy number data, a field where HMMs are widely used.

17.
Hazard rate estimation is an alternative to density estimation for positive variables that is of interest when variables are times to event. In particular, it is here shown that hazard rate estimation is useful for seismic hazard assessment. This paper suggests a simple, but flexible, Bayesian method for non-parametric hazard rate estimation, based on building the prior hazard rate as the convolution mixture of a Gaussian kernel with an exponential jump-size compound Poisson process. Conditions are given for a compound Poisson process prior to be well-defined and to select smooth hazard rates, an elicitation procedure is devised to assign a constant prior expected hazard rate while controlling prior variability, and a Markov chain Monte Carlo approximation of the posterior distribution is obtained. Finally, the suggested method is validated in a simulation study, and some Italian seismic event data are analysed.
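Read only as a sketch of the general idea (the paper's exact construction may differ), a prior hazard rate of this convolution type can be written as

\[ h(t) = \sum_{j} J_j\, \phi_\sigma(t - \tau_j), \]

where \phi_\sigma is a Gaussian kernel with bandwidth \sigma, the jump times \tau_j follow a homogeneous Poisson process with rate \mu, and the jump sizes J_j are i.i.d. exponential with mean 1/\eta. The kernel makes the realisations smooth, and away from the boundary the prior expected hazard is approximately the constant \mu/\eta, so an elicitation step of the kind mentioned above can fix that constant while using \mu and \eta to control prior variability.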

18.
A Bayesian intensity model is presented for studying a bioassay problem involving interval-censored tumour onset times, and without discretization of times of death. Both tumour lethality and base-line hazard rates are estimated in the absence of cause-of-death information. Markov chain Monte Carlo methods are used in the numerical estimation, and sophisticated group updating algorithms are applied to achieve reasonable convergence properties. This method was tried on the rat tumorigenicity data that have previously been analysed by Ahn, Moon and Kodell, and our results seem to be more realistic.

19.
Bayesian classification of Neolithic tools
The classification of Neolithic tools by using cluster analysis enables archaeologists to understand the function of the tools and the technological and cultural conditions of the societies that made them. In this paper, Bayesian classification is adopted to analyse data which raise the question of whether the observed variability, e.g. the shape and dimensions of the tools, is related to their use. The data present technical difficulties for the practitioner, such as the presence of mixed-mode data, missing data and errors in variables. These complications are overcome by employing a finite mixture model and Markov chain Monte Carlo methods. The analysis uses prior information which expresses the archaeologist's belief that there are two tool groups that are similar to contemporary adzes and axes. The resulting mixing densities provide evidence that the morphological dimensional variability among tools is related to the existence of these two tool groups.

20.
Pettitt, A. N., Weir, I. S. & Hart, A. G. (2002). Statistics and Computing, 12(4), 353-367.
A Gaussian conditional autoregressive (CAR) formulation is presented that permits the modelling of spatial dependence and the dependence between multivariate random variables at irregularly spaced sites, so capturing some of the modelling advantages of the geostatistical approach. The model benefits not only from the explicit availability of the full conditionals but also from the computational simplicity of the precision matrix determinant calculation, using a closed-form expression involving the eigenvalues of a precision matrix submatrix. The introduction of covariates into the model adds little computational complexity to the analysis, and thus the method can be straightforwardly extended to regression models. Because of its computational simplicity, the model is well suited to applications involving the fully Bayesian analysis of large data sets of multivariate measurements with a spatial ordering. An extension to spatio-temporal data is also considered. Here, we demonstrate use of the model in the analysis of bivariate binary data where the observed data are modelled as the sign of the hidden CAR process. A case study involving over 450 irregularly spaced sites and the presence or absence of each of two species of rain forest trees at each site is presented; Markov chain Monte Carlo (MCMC) methods are implemented to obtain posterior distributions of all unknowns. The MCMC method works well with simulated data and the tree biodiversity data set.
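As a generic sketch of this kind of formulation (not necessarily the exact specification adopted in the paper), a proper Gaussian CAR model for a vector x over n irregularly spaced sites can be written as

\[ x \sim \mathrm{N}\!\left(0,\; Q^{-1}\right), \qquad Q = \tau\,(D - \rho\, W), \]

where W is the neighbourhood (adjacency) matrix, D is diagonal with the numbers of neighbours, \rho controls the strength of spatial dependence and \tau the overall precision. The full conditionals x_i \mid x_{-i} are Gaussian and available in closed form, and the determinant needed in the likelihood factorises as \det Q = \tau^n \det(D) \prod_i (1 - \rho\,\lambda_i), where the \lambda_i are eigenvalues of D^{-1/2} W D^{-1/2} and need be computed only once, which is the kind of computational simplicity the abstract refers to.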

