Similar literature
Found 20 similar records (search time: 15 ms)
1.
Bayesian inference for fractionally integrated exponential generalized autoregressive conditional heteroscedastic (FIEGARCH) models using Markov chain Monte Carlo (MCMC) methods is described. A simulation study is presented to assess the performance of the procedure in the presence of long memory in the volatility. Samples from FIEGARCH processes are obtained by considering the generalized error distribution (GED) for the innovation process. Different values for the tail-thickness parameter ν are considered, covering both scenarios: innovation processes with lighter (ν > 2) and heavier (ν < 2) tails than the Gaussian distribution (ν = 2). A comparison between the performance of quasi-maximum likelihood (QML) and MCMC procedures is also discussed. An application of the MCMC procedure to estimate the parameters of a FIEGARCH model for the daily log-returns of the S&P500 U.S. stock market index is provided.
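The GED innovation mechanism above lends itself to a short sketch. The following minimal example (illustrative, not from the paper) draws unit-variance GED innovations via the standard gamma representation; for ν = 1 the GED reduces to the Laplace distribution, whose heavier-than-Gaussian tails show up as excess kurtosis:

```python
import math
import random

def ged_sample(nu, rng):
    """One unit-variance generalized error distribution (GED) draw.

    Gamma representation: if U ~ Gamma(1/nu, 1), then
    sign * lam * (2U)**(1/nu) has a GED(nu) law, with lam chosen
    so that the variance equals 1.
    """
    lam = math.sqrt(2.0 ** (-2.0 / nu) * math.gamma(1.0 / nu) / math.gamma(3.0 / nu))
    u = rng.gammavariate(1.0 / nu, 1.0)
    sign = 1.0 if rng.random() < 0.5 else -1.0
    return sign * lam * (2.0 * u) ** (1.0 / nu)

rng = random.Random(42)
heavy = [ged_sample(1.0, rng) for _ in range(20000)]   # nu < 2: heavier than Gaussian
var_heavy = sum(x * x for x in heavy) / len(heavy)
kurt_heavy = sum(x ** 4 for x in heavy) / len(heavy) / var_heavy ** 2
```

The Gaussian case ν = 2 has kurtosis 3; the Laplace case ν = 1 has kurtosis 6, matching the "heavier tails" scenario in the abstract.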

2.
Bayesian statistical inference relies on the posterior distribution. Depending on the model, the posterior can be more or less difficult to derive. In recent years, there has been a lot of interest in complex settings where the likelihood is analytically intractable. In such situations, approximate Bayesian computation (ABC) provides an attractive way of carrying out Bayesian inference. For obtaining reliable posterior estimates, however, it is important to keep the approximation errors small in ABC. The choice of an appropriate set of summary statistics plays a crucial role in this effort. Here, we report the development of a new algorithm that is based on least angle regression for choosing summary statistics. In two population genetic examples, the performance of the new algorithm is better than a previously proposed approach that uses partial least squares.
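Least angle regression itself involves more machinery, but the underlying idea — rank candidate summaries by how strongly they track the parameter across prior-predictive simulations — can be illustrated with a much simpler correlation-based stand-in (the toy model and all names below are invented for illustration, not the paper's algorithm):

```python
import math
import random
import statistics

rng = random.Random(0)

# Prior-predictive simulations: theta ~ U(0, 10), data = 50 draws from N(theta, 1).
# Candidate summaries: the mean is informative about theta; stdev and range are not.
sims = []
for _ in range(300):
    theta = rng.uniform(0, 10)
    x = [rng.gauss(theta, 1) for _ in range(50)]
    sims.append((theta, {
        "mean": statistics.fmean(x),
        "stdev": statistics.stdev(x),
        "range": max(x) - min(x),
    }))

def abs_corr(name):
    """Absolute Pearson correlation between theta and one candidate summary."""
    xs = [t for t, _ in sims]
    ys = [s[name] for _, s in sims]
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    cov = sum((a - mx) * (b - my) for a, b in zip(xs, ys))
    vx = sum((a - mx) ** 2 for a in xs)
    vy = sum((b - my) ** 2 for b in ys)
    return abs(cov / math.sqrt(vx * vy))

# rank the candidate summaries; the informative one should come first
ranked = sorted(sims[0][1], key=abs_corr, reverse=True)
```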

3.
Approximate Bayesian computation (ABC) is an approach to sampling from an approximate posterior distribution in the presence of a computationally intractable likelihood function. A common implementation is based on simulating model, parameter and dataset triples from the prior, and then accepting as samples from the approximate posterior those model and parameter pairs for which the corresponding dataset, or a summary of that dataset, is 'close' to the observed data. Closeness is typically determined through a distance measure and a kernel scale parameter. Appropriate choice of that parameter is important in producing a good quality approximation. This paper proposes diagnostic tools for the choice of the kernel scale parameter based on assessing the coverage property, which asserts that credible intervals have the correct coverage levels in appropriately designed simulation settings. We provide theoretical results on coverage for both model and parameter inference, and adapt these into diagnostics for the ABC context. We re-analyse a study on human demographic history to determine whether the adopted posterior approximation was appropriate. Code implementing the proposed methodology is freely available in the R package abctools.
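The accept/reject mechanism and the role of the kernel scale parameter can be made concrete with a toy rejection-ABC sketch (the Gaussian model, prior, and epsilon value below are illustrative assumptions, not from the paper):

```python
import random
import statistics

rng = random.Random(1)
observed = [rng.gauss(2.0, 1.0) for _ in range(100)]
s_obs = statistics.fmean(observed)            # summary statistic of the observed data

def abc_rejection(epsilon, n_sims=5000):
    """Keep theta whenever the simulated summary lands within epsilon of s_obs."""
    accepted = []
    for _ in range(n_sims):
        theta = rng.uniform(-10, 10)          # draw from the prior
        sim = [rng.gauss(theta, 1.0) for _ in range(100)]
        if abs(statistics.fmean(sim) - s_obs) <= epsilon:
            accepted.append(theta)
    return accepted

post = abc_rejection(epsilon=0.3)             # epsilon plays the kernel-scale role
post_mean = statistics.fmean(post)
```

Shrinking epsilon tightens the approximation but lowers the acceptance rate, which is exactly the trade-off the paper's coverage diagnostics are designed to assess.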

4.
Long-range-dependent time series are endemic in the statistical analysis of Internet traffic. The Hurst parameter provides a good summary of important self-similar scaling properties. We compare a number of different Hurst parameter estimation methods and some important variations. This is done in the context of a wide range of simulated, laboratory-generated, and real data sets. Important differences between the methods are highlighted. Deep insights are revealed into how well the laboratory data mimic the real data. Non-stationarities, which are local in time, are seen to be central issues and lead to both conceptual and practical recommendations.
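One of the classical estimators compared in this literature is easy to sketch. The aggregated-variance method exploits the scaling Var(block means) ∝ m^(2H−2) and fits that slope by least squares; the toy version below uses white noise, for which H ≈ 0.5:

```python
import math
import random
import statistics

def hurst_aggvar(x, scales):
    """Aggregated-variance Hurst estimate: Var(block means) ~ m**(2H - 2)."""
    pts = []
    for m in scales:
        # non-overlapping block means at aggregation level m
        means = [statistics.fmean(x[i:i + m]) for i in range(0, len(x) - m + 1, m)]
        pts.append((math.log(m), math.log(statistics.variance(means))))
    # least-squares slope of log-variance against log-scale
    mx = statistics.fmean(p[0] for p in pts)
    my = statistics.fmean(p[1] for p in pts)
    slope = (sum((p[0] - mx) * (p[1] - my) for p in pts)
             / sum((p[0] - mx) ** 2 for p in pts))
    return 1.0 + slope / 2.0

rng = random.Random(3)
white = [rng.gauss(0, 1) for _ in range(20000)]
h = hurst_aggvar(white, scales=[10, 20, 50, 100, 200])
```

For genuinely long-range-dependent traffic the estimate would exceed 0.5; the abstract's point is that local non-stationarities can bias exactly this kind of scaling fit.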

5.
It is well known that the approximate Bayesian computation algorithm based on Markov chain Monte Carlo methods suffers from the sensitivity to the choice of starting values, inefficiency and a low acceptance rate. To overcome these problems, this study proposes a generalization of the multiple-point Metropolis algorithm, which proceeds by generating multiple-dependent proposals and then by selecting a candidate among the set of proposals on the basis of weights that can be chosen arbitrarily. The performance of the proposed algorithm is illustrated by using both simulated and real data.
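For orientation, here is the standard multiple-try Metropolis step with independent symmetric proposals, of which the multiple-point scheme with dependent proposals described above is a generalization; the target and tuning constants are toy choices, not from the paper:

```python
import math
import random

def target(x):                        # standard normal density, up to a constant
    return math.exp(-0.5 * x * x)

def mtm_step(x, k, sigma, rng):
    """One multiple-try Metropolis step with k independent symmetric proposals."""
    ys = [rng.gauss(x, sigma) for _ in range(k)]
    wy = [target(y) for y in ys]
    y = rng.choices(ys, weights=wy)[0]            # select a candidate by weight
    # reference set: k - 1 fresh draws around the candidate, plus the current state
    refs = [rng.gauss(y, sigma) for _ in range(k - 1)] + [x]
    wx = [target(r) for r in refs]
    if rng.random() < min(1.0, sum(wy) / sum(wx)):
        return y
    return x

rng = random.Random(7)
x, chain = 0.0, []
for _ in range(20000):
    x = mtm_step(x, k=5, sigma=2.0, rng=rng)
    chain.append(x)
mean = sum(chain) / len(chain)
var = sum(c * c for c in chain) / len(chain)
```

Trying several proposals per iteration raises the acceptance rate relative to plain Metropolis at the same step size, which is the inefficiency the abstract targets.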

6.
In this paper we motivate solutions to the simultaneous estimation of multiple dynamic processes in situations where the correspondence between the set of measurements and the set of processes is uncertain, so that special modelling is required to accommodate the unclassified data. We derive the optimal Bayesian solution for nonlinear processes, which turns out to be computationally very demanding, and then suggest a quasi-Bayes approximation which removes the complication due to the uncertain measurement-process correspondence. Numerical illustrations are provided for the linear case.

7.
In recent years, dynamical modelling has been provided with a range of breakthrough methods to perform exact Bayesian inference. However, it is often computationally unfeasible to apply exact statistical methodologies in the context of large data sets and complex models. This paper considers a nonlinear stochastic differential equation model observed with correlated measurement errors, with an application to protein-folding modelling. An approximate Bayesian computation (ABC)-MCMC algorithm is suggested to allow inference for model parameters within reasonable time constraints. The ABC algorithm uses simulations of 'subsamples' from the assumed data-generating model as well as a so-called 'early-rejection' strategy to speed up computations in the ABC-MCMC sampler. Using a considerable degree of subsampling does not seem to degrade the quality of the inferential results for the applications considered. A simulation study is conducted to compare our strategy with exact Bayesian inference, the latter being two orders of magnitude slower than ABC-MCMC for the considered set-up. Finally, the ABC algorithm is applied to a large protein data set. The suggested methodology is fairly general and not limited to the exemplified model and data.
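The 'early-rejection' idea — test the cheap Metropolis ingredients before paying for the expensive model simulation — can be sketched in a toy ABC-MCMC sampler. The Gaussian model, the prior, and all tuning values below are illustrative assumptions, not the paper's SDE model:

```python
import math
import random
import statistics

rng = random.Random(11)
obs = [rng.gauss(1.5, 1.0) for _ in range(100)]
s_obs = statistics.fmean(obs)

def log_prior(theta):                         # N(0, 5^2) prior, up to a constant
    return -0.5 * (theta / 5.0) ** 2

def abc_mcmc(n_iter, eps, step=0.5):
    theta, chain, skipped = s_obs, [], 0
    for _ in range(n_iter):
        prop = theta + rng.gauss(0, step)     # symmetric random-walk proposal
        # early rejection: check the prior ratio BEFORE the costly simulation
        if rng.random() > math.exp(min(0.0, log_prior(prop) - log_prior(theta))):
            skipped += 1
            chain.append(theta)
            continue
        sim = [rng.gauss(prop, 1.0) for _ in range(100)]
        if abs(statistics.fmean(sim) - s_obs) <= eps:
            theta = prop                      # ABC acceptance: simulation was close
        chain.append(theta)
    return chain, skipped

chain, skipped = abc_mcmc(5000, eps=0.2)
post_mean = statistics.fmean(chain[1000:])
```

Every early rejection saves one full forward simulation, which is where the speed-up comes from when the data-generating model is expensive.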

8.
In this paper we introduce a broad family of loss functions based on the concept of Bregman divergence. We deal with both Bayesian estimation and prediction problems and show that all Bayes solutions associated with loss functions belonging to the introduced family of losses satisfy the same equation. We further concentrate on the concept of robust Bayesian analysis and provide one equation that explicitly leads to robust Bayes solutions. The results are model-free and include many existing results in Bayesian and robust Bayesian contexts in the literature.
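A well-known property of Bregman-divergence losses D_φ(θ, a) = φ(θ) − φ(a) − φ′(a)(θ − a) is that the Bayes action is the posterior mean regardless of the convex generator φ, which is one way a whole family of losses can share the same solution equation. The numerical check below (illustrative posterior draws, grid minimization; not the paper's derivation) verifies this for two different generators:

```python
import math
import random

rng = random.Random(5)
post = [rng.gammavariate(3.0, 1.0) for _ in range(1000)]   # stand-in posterior draws
post_mean = sum(post) / len(post)

def bregman(phi, dphi, theta, a):
    """Bregman divergence generated by convex phi with derivative dphi."""
    return phi(theta) - phi(a) - dphi(a) * (theta - a)

def bayes_action(phi, dphi):
    """Grid-minimize the empirical expected Bregman loss over actions a."""
    grid = [1.0 + 0.01 * i for i in range(401)]
    return min(grid, key=lambda a: sum(bregman(phi, dphi, t, a) for t in post))

a_sq = bayes_action(lambda x: x * x, lambda x: 2.0 * x)                       # squared error
a_ent = bayes_action(lambda x: x * math.log(x), lambda x: math.log(x) + 1.0)  # entropy-type
```

Both minimizers land on the grid point nearest the posterior mean, as the first-order condition −φ″(a)·E[θ − a] = 0 predicts.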

9.
This paper continues our earlier analysis of a data set on acute ear infections in small children, presented in Andreev and Arjas (1998). The main goal here is to provide a method, based on the use of predictive distributions, for assessing the possible causal influence which the type of day care will have on the incidence of ear infections. A closely related technique is used for the assessment of the nonparametric Bayesian intensity model applied in the paper. Two graphical methods, supported by formal tests, are suggested for this purpose.

10.
This paper presents a kernel estimation of the distribution of the scale parameter of the inverse Gaussian distribution under type II censoring together with the distribution of the remaining time. Estimation is carried out via the Gibbs sampling algorithm combined with a missing data approach. Estimates and confidence intervals for the parameters of interest are also presented.
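The Gibbs-plus-missing-data recipe alternates between imputing the censored lifetimes and updating the parameter given the completed data. For clarity, the sketch below uses an exponential lifetime model as a stand-in for the paper's inverse Gaussian (whose full conditionals are messier) under type II censoring:

```python
import random

rng = random.Random(9)
true_rate, n, r = 0.5, 40, 25
# type II censoring: only the r smallest of n lifetimes are observed
data = sorted(rng.expovariate(true_rate) for _ in range(n))[:r]
c = data[-1]                            # censoring time = r-th order statistic

rate, draws = 1.0, []
for _ in range(3000):
    # data augmentation: impute the n - r censored lifetimes; memorylessness
    # of the exponential makes the truncated draw a simple shift past c
    imputed = [c + rng.expovariate(rate) for _ in range(n - r)]
    total = sum(data) + sum(imputed)
    # conjugate update with a vague Gamma(1, 1) prior on the rate
    rate = rng.gammavariate(1.0 + n, 1.0 / (1.0 + total))
    draws.append(rate)
post_rate = sum(draws[500:]) / len(draws[500:])
```

The same alternation (impute, then a conditionally tractable parameter update) is what makes the missing-data approach attractive for censored inverse Gaussian data.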

11.
Bayesian Geostatistical Design
Abstract.  This paper describes the use of model-based geostatistics for choosing the set of sampling locations, collectively called the design, to be used in a geostatistical analysis. Two types of design situation are considered. These are retrospective design, which concerns the addition of sampling locations to, or deletion of locations from, an existing design, and prospective design, which consists of choosing positions for a new set of sampling locations. We propose a Bayesian design criterion which focuses on the goal of efficient spatial prediction whilst allowing for the fact that model parameter values are unknown. The results show that in this situation a wide range of inter-point distances should be included in the design, and the widely used regular design is often not the best choice.

12.
Abstract

Although no universally accepted definition of causality exists, in practice one is often faced with the question of statistically assessing causal relationships in different settings. We present a uniform general approach to causality problems derived from the axiomatic foundations of the Bayesian statistical framework. In this approach, causality statements are viewed as hypotheses, or models, about the world and the fundamental object to be computed is the posterior distribution of the causal hypotheses, given the data and the background knowledge. Computation of the posterior, illustrated here in simple examples, may involve complex probabilistic modeling but this is no different than in any other Bayesian modeling situation. The main advantage of the approach is its connection to the axiomatic foundations of the Bayesian framework, and the general uniformity with which it can be applied to a variety of causality settings, ranging from specific to general cases, or from causes of effects to effects of causes.

13.
A Bayesian estimator based on Franklin's randomized response procedure is proposed for proportion estimation in surveys dealing with a sensitive character. The method is simple to implement and avoids the usual drawbacks of Franklin's estimator, i.e., the occurrence of negative estimates when the population proportion is small. A simulation study is considered in order to assess the performance of the proposed estimator as well as the corresponding credible interval.
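The negative-estimate problem and its Bayesian fix are easy to demonstrate in a simpler forced-response design (a stand-in for Franklin's quantitative randomization, with all constants invented for illustration): the moment-type estimator can fall below zero when the true proportion is small, whereas the posterior mean under a uniform prior is confined to [0, 1] by construction:

```python
import math
import random

rng = random.Random(13)
p, pi_true, n = 0.7, 0.05, 400        # truth-telling prob., true proportion, sample size

# forced-response design: with probability p answer truthfully, else flip a fair coin
yes = 0
for _ in range(n):
    if rng.random() < p:
        yes += rng.random() < pi_true
    else:
        yes += rng.random() < 0.5

def log_lik(pi):
    q = p * pi + (1.0 - p) * 0.5      # P(reported "yes") under the design
    return yes * math.log(q) + (n - yes) * math.log(1.0 - q)

# grid posterior under a uniform prior: supported inside [0, 1] by construction
grid = [i / 1000 for i in range(1001)]
w = [math.exp(log_lik(g)) for g in grid]
post_mean = sum(g * wi for g, wi in zip(grid, w)) / sum(w)

moment_est = (yes / n - (1.0 - p) * 0.5) / p   # can go negative for small pi
```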

14.
Summary.  In this paper we introduce a class of prior distributions for contingency tables with given marginals. We are interested in the structure of concordance/discordance of such tables. There is actually a minor limitation in that the marginals are required to assume only rational values. We do argue, though, that this is not a serious drawback for practical purposes. The posterior and predictive distributions given an M-sample are computed. Examples of Bayesian estimates of some classical indices of concordance are also given. Moreover, we show how to use simulation in order to overcome some difficulties which arise in the computation of the posterior distribution.

15.
Typically, in the practice of causal inference from observational studies, a parametric model is assumed for the joint population density of potential outcomes and treatment assignments, and possibly this is accompanied by the assumption of no hidden bias. However, both assumptions are questionable for real data, the accuracy of causal inference is compromised when the data violates either assumption, and the parametric assumption precludes capturing a more general range of density shapes (e.g., heavier tail behavior and possible multi-modalities). We introduce a flexible, Bayesian nonparametric causal model to provide more accurate causal inferences. The model makes use of a stick-breaking prior, which has the flexibility to capture any multi-modalities, skewness and heavier tail behavior in this joint population density, while accounting for hidden bias. We prove the asymptotic consistency of the posterior distribution of the model, and illustrate our causal model through the analysis of small and large observational data sets.
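The stick-breaking construction mentioned above is short enough to write out: break a unit stick with independent Beta(1, α) fractions, so the leftover mass decays geometrically and a finite truncation captures essentially all the weight. A minimal sketch (the α and truncation values are illustrative):

```python
import random

def stick_breaking(alpha, k, rng):
    """First k weights of a stick-breaking (Dirichlet process) prior."""
    weights, remaining = [], 1.0
    for _ in range(k):
        v = rng.betavariate(1.0, alpha)   # fraction of the remaining stick to break off
        weights.append(remaining * v)
        remaining *= 1.0 - v
    return weights

rng = random.Random(17)
w = stick_breaking(alpha=2.0, k=100, rng=rng)
```

Mixtures built on these weights can place components wherever the data demand them, which is what gives the prior its ability to capture multi-modality, skewness and heavy tails.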

16.
Abstract

In this article, a new model is presented that is based on the Pareto distribution of the second kind, in which the location parameter depends on covariates as well as unobserved heterogeneity. Bayesian analysis of the model can be performed using Markov chain Monte Carlo techniques. The new procedures are illustrated in the context of artificial data as well as international output data.

17.
Let a group G act on the sample space. This paper gives another proof of a theorem of Stein relating a group invariant family of posterior Bayesian probability regions to classical confidence regions when an appropriate prior is used. The example of the central multivariate normal distribution is discussed.

18.
This article describes a Bayesian small-sample approach to making inferences for the operator (or filter) and squared gain of a pth-order Gaussian univariate autoregressive process. Simultaneous posterior probability bands are developed for the real and the imaginary parts of the frequency-response function of an autoregressive operator as well as for the squared gain of an autoregressive process.
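For reference, the squared gain itself is a one-line computation once the AR polynomial is evaluated on the unit circle; the bands described above quantify posterior uncertainty around exactly this curve. A sketch for an illustrative AR(2) (coefficients invented; at frequency 0 the gain equals 1/(1 − φ₁ − φ₂)²):

```python
import cmath
import math

def ar_squared_gain(phi, omega):
    """Squared gain of an AR process x_t = sum_k phi_k x_{t-k} + e_t at frequency omega."""
    a = 1 - sum(p * cmath.exp(-1j * omega * (k + 1)) for k, p in enumerate(phi))
    return 1.0 / abs(a) ** 2          # transfer function is 1 / A(e^{-i omega})

phi = [0.5, -0.3]                     # illustrative AR(2) coefficients
gains = [ar_squared_gain(phi, w * math.pi / 100) for w in range(101)]
```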

19.
We present a variant of the sequential Monte Carlo sampler by incorporating the partial rejection control mechanism of Liu (2001). We show that the resulting algorithm can be considered as a sequential Monte Carlo sampler with a modified mutation kernel. We prove that the new sampler can reduce the variance of the incremental importance weights when compared with standard sequential Monte Carlo samplers, and provide a central limit theorem. Finally, the sampler is adapted for application under the challenging approximate Bayesian computation modelling framework.
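As a baseline for what the variant modifies, here is a plain sequential Monte Carlo sampler — reweight, resample, then move with a Metropolis-Hastings mutation kernel — without partial rejection control; the tempering path and target are toy choices:

```python
import math
import random

rng = random.Random(19)

def log_tempered(x, beta):
    """Bridge from a N(0, 3^2) prior (beta = 0) to a N(0, 1) target (beta = 1)."""
    return (1.0 - beta) * (-0.5 * (x / 3.0) ** 2) + beta * (-0.5 * x * x)

n, betas = 1000, [0.0, 0.25, 0.5, 0.75, 1.0]
particles = [rng.gauss(0, 3) for _ in range(n)]
for b_prev, b in zip(betas, betas[1:]):
    # incremental importance weights for moving the target from b_prev to b
    logw = [log_tempered(x, b) - log_tempered(x, b_prev) for x in particles]
    m = max(logw)
    weights = [math.exp(l - m) for l in logw]
    particles = rng.choices(particles, weights=weights, k=n)  # multinomial resampling
    # mutation: one Metropolis-Hastings step targeting the current tempered density
    for i, x in enumerate(particles):
        prop = x + rng.gauss(0, 1)
        if rng.random() < math.exp(min(0.0, log_tempered(prop, b) - log_tempered(x, b))):
            particles[i] = prop
post_var = sum(x * x for x in particles) / n
```

Partial rejection control intervenes at the reweighting stage, discarding and regenerating particles whose incremental weights fall below a threshold, which is how the variant reduces weight variance relative to this baseline.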

20.
The approximate Bayesian computation (ABC) algorithm is used to estimate parameters of complicated phenomena for which the likelihood is intractable. Here, we report the development of an algorithm to choose the tolerance level for ABC. We illustrate the performance of the proposed method by simulating the estimation of scaled mutation and recombination rates. The results show that the proposed algorithm performs well.
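A common, simple baseline for tolerance choice — against which adaptive proposals like the one above are usually compared — is to run a pilot batch of simulations and set the tolerance to a small quantile of the realized distances. A toy sketch (the model and quantile are invented for illustration):

```python
import random
import statistics

rng = random.Random(23)
s_obs = 0.0                            # observed summary (toy value)

# pilot run: record parameter/distance pairs from prior-predictive simulations
params, dists = [], []
for _ in range(2000):
    theta = rng.uniform(-5, 5)
    sim_mean = statistics.fmean(rng.gauss(theta, 1) for _ in range(30))
    params.append(theta)
    dists.append(abs(sim_mean - s_obs))

q = 0.05                               # keep the closest 5% of simulations
eps = sorted(dists)[int(q * len(dists)) - 1]
accepted = [t for t, d in zip(params, dists) if d <= eps]
frac = len(accepted) / len(dists)
```

Fixing the acceptance fraction rather than the tolerance itself keeps the Monte Carlo sample size predictable across problems of different scales.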


Copyright©北京勤云科技发展有限公司  京ICP备09084417号