Similar Documents
1.
In this paper the issue of making inferences with misclassified data from a noisy multinomial process is addressed. A Bayesian model for making inferences about the proportions and the noise parameters is developed. The problem is reformulated in a more tractable form by introducing auxiliary or latent random vectors. This allows for an easy-to-implement Gibbs sampling-based algorithm to generate samples from the distributions of interest. An illustrative example related to elections is also presented.
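A minimal NumPy sketch of the data-augmentation Gibbs step described above, under the simplifying assumption that the confusion matrix `Q` is known (the paper also treats the noise parameters as unknown); the counts and the 10% misclassification rate are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

def gibbs_misclassified(obs_counts, Q, alpha, n_iter=500):
    """Gibbs sampler for multinomial proportions p when observations
    are misclassified through a known confusion matrix Q, with
    Q[j, k] = P(observe k | true category j)."""
    K = len(obs_counts)
    p = np.full(K, 1.0 / K)
    draws = []
    for _ in range(n_iter):
        # Data augmentation: for each observed category k, sample the
        # latent true categories from P(true=j | obs=k) ∝ p[j] * Q[j, k].
        true_counts = np.zeros(K)
        for k, n_k in enumerate(obs_counts):
            w = p * Q[:, k]
            true_counts += rng.multinomial(n_k, w / w.sum())
        # Conjugate Dirichlet update for the proportions.
        p = rng.dirichlet(alpha + true_counts)
        draws.append(p)
    return np.array(draws)

# Illustrative election-style example: three candidates,
# symmetric 10% misclassification.
Q = np.array([[0.90, 0.05, 0.05],
              [0.05, 0.90, 0.05],
              [0.05, 0.05, 0.90]])
draws = gibbs_misclassified(np.array([500, 300, 200]), Q,
                            alpha=np.ones(3))
post_mean = draws[100:].mean(axis=0)  # discard a short burn-in
```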

2.
We consider the estimation of a large number of GARCH models, of the order of several hundreds. Our interest lies in the identification of common structures in the volatility dynamics of the univariate time series. To do so, we classify the series in an unknown number of clusters. Within a cluster, the series share the same model and the same parameters. Each cluster therefore contains similar series. We do not know a priori which series belongs to which cluster. The model is a finite mixture of distributions, where the component weights are unknown parameters and each component distribution has its own conditional mean and variance. Inference is done by the Bayesian approach, using data augmentation techniques. Simulations and an illustration using data on U.S. stocks are provided.
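A rough sketch (not the authors' implementation) of the allocation step implied by the mixture setup above: each series is assigned to a cluster with probability proportional to the cluster weight times its GARCH(1,1) likelihood. The parameter values and the `h[0]` initialization are illustrative assumptions:

```python
import numpy as np

def garch11_loglik(y, omega, alpha, beta):
    """Gaussian GARCH(1,1) log-likelihood for one series:
    h_t = omega + alpha*y_{t-1}^2 + beta*h_{t-1}."""
    h = np.empty_like(y)
    h[0] = y.var()  # a common initialization choice
    for t in range(1, len(y)):
        h[t] = omega + alpha * y[t - 1] ** 2 + beta * h[t - 1]
    return -0.5 * np.sum(np.log(2 * np.pi * h) + y ** 2 / h)

def cluster_probs(y, params, weights):
    """Posterior allocation probabilities of one series over clusters,
    proportional to weight_j * likelihood_j (data-augmentation step)."""
    ll = np.array([garch11_loglik(y, *p) for p in params])
    w = weights * np.exp(ll - ll.max())  # stabilize before exponentiating
    return w / w.sum()

# Simulate one series from a high-persistence cluster.
rng = np.random.default_rng(1)
omega, alpha, beta = 0.1, 0.1, 0.8
h, ys = 1.0, []
for _ in range(500):
    prev = ys[-1] if ys else 0.0
    h = omega + alpha * prev ** 2 + beta * h
    ys.append(np.sqrt(h) * rng.standard_normal())
y = np.array(ys)

params = [(0.1, 0.1, 0.8), (1.0, 0.05, 0.1)]  # two candidate clusters
probs = cluster_probs(y, params, np.array([0.5, 0.5]))
```

With enough observations, the series is allocated to its true cluster with probability close to one.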

4.
To explore Bayesian inference for quantile regression with proportion data, we first formulate the quantile regression via a Tobit model, then obtain a hierarchical Bayesian model by choosing suitable prior distributions, and derive the posterior distributions of the parameters for use in Gibbs sampling. Numerical simulations confirm the effectiveness of the proposed Bayesian inference method for the analysis of proportion data. Finally, the Bayesian method is applied to heroin-use data from California in the United States, revealing the factors that influence the frequency of drug use at different quantile levels.
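A small sketch of the building block behind Bayesian quantile regression as used above: the check (pinball) loss, whose sum is minimized at the tau-th sample quantile, and which the Bayesian formulation turns into an asymmetric Laplace working likelihood. The data, tau, and grid values here are illustrative, and the Tobit censoring of the response into [0, 1] is omitted:

```python
import numpy as np

def check_loss(u, tau):
    """Pinball (check) loss rho_tau(u): minimizing its sum over a
    constant recovers the tau-th sample quantile, which is what makes
    it the working loss of quantile regression.  The Bayesian version
    uses the asymmetric Laplace log-density, -rho_tau(u/sigma) + const."""
    return np.where(u >= 0, tau * u, (tau - 1) * u)

rng = np.random.default_rng(0)
y = rng.normal(size=1000)

# Grid-search the constant that minimizes total check loss at tau=0.3.
grid = np.linspace(-2.0, 2.0, 401)
losses = np.array([check_loss(y - g, 0.3).sum() for g in grid])
fit = grid[losses.argmin()]
```

`fit` lands (up to grid resolution) on the empirical 0.3-quantile of `y`.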

5.
In the analysis of correlated ordered data, mixed-effect models are frequently used to control for subject heterogeneity. A common assumption in fitting these models is normality of the random effects, which is unrealistic in many cases and makes the estimation results unreliable. This paper considers several flexible models for the random effects and investigates their properties in model fitting. We adopt a proportional odds logistic regression model and incorporate skewed versions of the normal, Student's t, and slash distributions for the effects. Stochastic representations for the various flexible distributions are then proposed based on a mixing-strategy approach, which reduces the computational burden of the MCMC technique. Furthermore, the paper addresses identifiability restrictions and suggests a procedure to handle this issue. We analyze a real data set taken from an ophthalmic clinical trial. Model selection is performed by suitable Bayesian model selection criteria.

6.
Consider a class of autoregressive models with exogenous variables and power-transformed and threshold GARCH (ARX-PTTGARCH) errors, a natural generalization of the standard GARCH model and its special cases. We propose a Bayesian method that combines the Gibbs sampler with the Metropolis-Hastings algorithm, and show that it can successfully estimate the parameters of ARX-PTTGARCH models.
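A minimal sketch of the volatility recursion that gives the PTTGARCH error class its name; the parameter values and the initialization are illustrative assumptions, not the paper's:

```python
import numpy as np

def pttgarch_sigma(y, omega, a_pos, a_neg, beta, delta):
    """Power-transformed threshold GARCH volatility recursion:
    sigma_t^delta = omega + a_pos*max(y_{t-1},0)^delta
                          + a_neg*max(-y_{t-1},0)^delta
                          + beta*sigma_{t-1}^delta.
    Standard GARCH(1,1) is the special case delta=2, a_pos=a_neg."""
    s_d = np.empty(len(y))           # sigma_t^delta
    s_d[0] = omega / (1 - beta)      # a simple initialization choice
    for t in range(1, len(y)):
        s_d[t] = (omega
                  + a_pos * max(y[t - 1], 0.0) ** delta
                  + a_neg * max(-y[t - 1], 0.0) ** delta
                  + beta * s_d[t - 1])
    return s_d ** (1.0 / delta)

# With a_neg > a_pos, a negative shock raises volatility more than a
# positive shock of the same size (the threshold/leverage effect).
s_neg = pttgarch_sigma(np.array([-1.0, 0.0]), 0.1, 0.05, 0.2, 0.8, 2.0)
s_pos = pttgarch_sigma(np.array([1.0, 0.0]), 0.1, 0.05, 0.2, 0.8, 2.0)
```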

7.
We develop a hierarchical Bayesian approach for inference in random coefficient dynamic panel data models. Our approach allows for the initial values of each unit's process to be correlated with the unit-specific coefficients. We impose a stationarity assumption for each unit's process by assuming that the unit-specific autoregressive coefficient is drawn from a logitnormal distribution. Our method is shown to have favorable properties compared to the mean group estimator in a Monte Carlo study. We apply our approach to analyze energy and protein intakes among individuals from the Philippines.
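The stationarity device above can be sketched in a couple of lines: each unit-specific AR(1) coefficient is a normal draw pushed through the logistic function, which confines it to (0, 1). The hyperparameter values `mu` and `tau` here are illustrative:

```python
import numpy as np

rng = np.random.default_rng(3)
mu, tau = 0.5, 1.0  # illustrative hyperparameters of the logit-normal

# Logit-normal draws: logistic transform of normal draws, so every
# unit-specific autoregressive coefficient rho_i lies in (0, 1),
# which enforces stationarity of each unit's AR(1) process.
rho = 1.0 / (1.0 + np.exp(-rng.normal(mu, tau, 1000)))
```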

8.
This article deals with the problem of Bayesian inference concerning the common scale parameter of several Pareto distributions. Bayesian hypothesis testing of, and Bayesian interval estimation for, the common scale parameter are given. Numerical studies, including a comparison study, a simulation study, and a practical application, illustrate our procedures and demonstrate the performance and advantages of the Bayesian procedures over the classical and generalized variable procedures.

9.
When applicable, an assumed monotonicity property of the regression function w.r.t. covariates has a strong stabilizing effect on the estimates. Because of this, other parametric or structural assumptions may not be needed at all. Although monotonic regression in one dimension is well studied, the question remains whether one can find computationally feasible generalizations to multiple dimensions. Here, we propose a non-parametric monotonic regression model for one or more covariates and a Bayesian estimation procedure. The monotonic construction is based on marked point processes, where the random point locations and the associated marks (function levels) together form piecewise constant realizations of the regression surfaces. The actual inference is based on model-averaged results over the realizations. Monotonicity is enforced by partial-ordering constraints, which allows the construction, as the density of support points increases, to asymptotically approximate the family of all monotonic bounded continuous functions.
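One simple way to realize such a piecewise-constant monotone surface from marked points, as a rough sketch rather than the paper's algorithm: the value at a point is the largest mark among support points that it dominates componentwise, which is monotone in every covariate by construction. The function name, support points, and levels are all hypothetical:

```python
import numpy as np

def monotone_surface(x, support, levels):
    """Evaluate a piecewise-constant monotone surface defined by marked
    points: the value at x is the largest mark (level) among support
    points componentwise <= x.  Taking the max over the dominated set
    guarantees monotonicity in each coordinate.  Points below every
    support point fall back to the smallest level (a floor choice)."""
    dominated = np.all(support <= x, axis=1)
    return levels[dominated].max() if dominated.any() else levels.min()

# Three marked points in two covariate dimensions (illustrative).
support = np.array([[0.0, 0.0], [0.5, 0.2], [0.3, 0.7]])
levels = np.array([0.0, 1.0, 2.0])
```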

10.
This paper presents a kernel estimation of the distribution of the scale parameter of the inverse Gaussian distribution under type II censoring together with the distribution of the remaining time. Estimation is carried out via the Gibbs sampling algorithm combined with a missing data approach. Estimates and confidence intervals for the parameters of interest are also presented.

11.
This article presents an explicit and detailed theoretical and empirical Bayesian analysis of the well-known Poisson regression model for count data with unobserved individual effects based on the lognormal, rather than the popular negative binomial, distribution. Although the negative binomial distribution leads to analytical expressions for the likelihood function, a Poisson-lognormal model is closer to the concept of regression with normally distributed innovations, and accounts for excess zeros as well. Such models have been considered widely in the literature (Winkelmann, 2008). The article also provides the necessary theoretical results regarding the posterior distribution of the model. Given that the likelihood function involves integrals with respect to the latent variables, numerical methods organized around Gibbs sampling with data augmentation are proposed for likelihood analysis of the model. The methods are applied to the patent-R&D relationship of 70 US pharmaceutical and biomedical companies, and the model is found to perform better than Poisson or negative binomial regression models.
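A short simulation sketch of why the Poisson-lognormal is attractive for count data: the lognormal individual effect inflates the variance above the mean (overdispersion) and produces more zeros than a plain Poisson with the same mean. The intercept and variance values are illustrative:

```python
import numpy as np

rng = np.random.default_rng(42)

# Poisson-lognormal: y_i | eps_i ~ Poisson(exp(beta0 + eps_i)),
# eps_i ~ N(0, s2).  Marginally Var(y) > E(y), and P(y = 0) exceeds
# exp(-E(y)), the zero probability of a Poisson with the same mean.
n, beta0, s2 = 100_000, 1.0, 0.5
eps = rng.normal(0.0, np.sqrt(s2), n)
y = rng.poisson(np.exp(beta0 + eps))

# Theoretical marginal moments of the Poisson-lognormal:
mean_theory = np.exp(beta0 + s2 / 2)
var_theory = mean_theory + mean_theory ** 2 * (np.exp(s2) - 1)
```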

12.
Bayesian inference for the multinomial probit model, using the Gibbs sampler with data augmentation, has been recently considered by some authors. The present paper introduces a modification of the sampling technique, by defining a hybrid Markov chain in which, after each Gibbs sampling cycle, a Metropolis step is carried out along a direction of constant likelihood. Examples with simulated data sets motivate and illustrate the new technique. A proof of the ergodicity of the hybrid Markov chain is also given.

13.
In this study, estimation of the parameters of zero-inflated count regression models, and computation of the posterior model probabilities of the log-linear models defined for each zero-inflated count regression model, are investigated from the Bayesian point of view. In addition, determination of the most suitable log-linear and regression models is investigated. Zero-inflated count regression models cover zero-inflated Poisson, zero-inflated negative binomial, and zero-inflated generalized Poisson regression models. The classical approach has some problematic points that the Bayesian approach does not share; this work points out the reasons for preferring the Bayesian approach and lists the advantages and disadvantages of both. As an application, a zoological data set including structural and sampling zeros is used in the presence of extra zeros. Fitting a zero-inflated negative binomial regression model, known to be the most problematic procedure in the classical approach, creates no problems at all in the Bayesian setting. Additionally, the best-fitting model is found to be the log-linear model under the negative binomial regression model, which does not include three-way interactions of factors.
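The zero-inflated Poisson member of this model class can be sketched directly from its probability mass function: a point mass at zero for structural zeros, mixed with a Poisson that also produces sampling zeros. The parameter values are illustrative:

```python
import math

def zip_pmf(y, lam, pi0):
    """Zero-inflated Poisson pmf: with probability pi0 the count is a
    structural zero; otherwise it is Poisson(lam), which can itself
    produce sampling zeros.  Hence P(Y=0) = pi0 + (1-pi0)*exp(-lam)."""
    pois = math.exp(-lam) * lam ** y / math.factorial(y)
    return pi0 * (1 if y == 0 else 0) + (1 - pi0) * pois

# Example: lam = 2, 30% structural zeros (illustrative values).
p_zero = zip_pmf(0, 2.0, 0.3)
```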

14.
In this article, the Bayesian analysis of the regression model with error terms generated by a first-order autoregressive process is considered. Our aim is to study the effect of two kinds of contamination of this model via the posterior distribution of the regression parameter.

15.
We consider simulation-based methods for the design of multi-stress-factor accelerated life tests (ALTs) in a Bayesian decision-theoretic framework. Multi-stress-factor ALTs are challenging because the number of stress-factor-level combinations greatly increases the number of simulation runs required. We propose the use of Latin hypercube sampling to reduce the simulation cost without loss of statistical efficiency. Exploration and optimization of the expected utility function are carried out by an algorithm that utilizes Markov chain Monte Carlo methods and nonparametric smoothing techniques. A comparison of the proposed approach with a full grid simulation illustrates the reduction in computational cost.
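A basic Latin hypercube sampler, sketched to show the space-filling idea the abstract relies on: each dimension is split into n equal strata and each stratum is hit exactly once, so n runs cover the design space more evenly than n plain Monte Carlo draws. The sample size and dimension are illustrative:

```python
import numpy as np

def latin_hypercube(n, d, rng):
    """Basic Latin hypercube sample of n points in [0, 1]^d: for each
    dimension, a random permutation assigns one point to each of the
    n equal strata, and a uniform jitter places it within its stratum."""
    u = rng.random((n, d))
    samples = np.empty((n, d))
    for j in range(d):
        perm = rng.permutation(n)
        samples[:, j] = (perm + u[:, j]) / n
    return samples

rng = np.random.default_rng(0)
pts = latin_hypercube(10, 3, rng)
```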

16.
The generalized Pareto distribution is used to model the exceedances over a threshold in a number of fields, including the analysis of environmental extreme events and financial data analysis. We use this model in a default Bayesian framework where no prior information is available on the unknown model parameters. Using a large simulation study, we compare the performance of our posterior estimates of the parameters with other methods proposed in the literature. We show that our procedure also allows inference on other quantities of interest in extreme value analysis without asymptotic arguments. We apply the proposed methodology to a real data set.
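A sketch of the likelihood at the heart of the exceedance model above, written as a negative log-likelihood in the shape/scale parameterization; the support check and the exponential limit at shape zero are the two details that trip up naive implementations:

```python
import numpy as np

def gpd_neg_loglik(params, exceedances):
    """Negative log-likelihood of the generalized Pareto distribution
    for exceedances y = x - threshold > 0, with shape xi and scale
    sigma.  The case xi -> 0 recovers the exponential distribution."""
    xi, sigma = params
    if sigma <= 0:
        return np.inf
    z = exceedances / sigma
    if abs(xi) < 1e-9:                # exponential limit
        return len(z) * np.log(sigma) + z.sum()
    t = 1.0 + xi * z
    if np.any(t <= 0):                # outside the GPD support
        return np.inf
    return len(z) * np.log(sigma) + (1 + 1 / xi) * np.log(t).sum()

# Illustrative exceedances over some threshold.
y = np.array([0.5, 1.0, 2.0])
```

Minimizing this over `(xi, sigma)` gives maximum likelihood estimates; the Bayesian analysis in the abstract instead explores the corresponding posterior under a default prior.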

17.
To address the problem that traditional cointegration tests are not suited to ultra-high-frequency financial data with stochastic features, we construct a Bayesian cointegration model for ultra-high-frequency financial data, design a Gibbs sampling scheme based on the conditional posterior distributions of the parameters, propose a Bayesian cointegration test for such data, and carry out an empirical analysis using ultra-high-frequency data from the Chinese stock market. The results show that treating parameters as random variables, as the Bayesian method does, suits the stochastic nature of ultra-high-frequency data; the Bayesian cointegration method continuously updates parameter information, avoids the bias of OLS estimation, and yields conclusions that better match reality.

18.
We discuss a method for combining different but related longitudinal studies to improve predictive precision. The motivation is to borrow strength across clinical studies in which the same measurements are collected at different frequencies. Key features of the data are heterogeneous populations and an unbalanced design across three studies of interest. The first two studies are phase I studies with very detailed observations on a relatively small number of patients. The third study is a large phase III study with over 1500 enrolled patients, but with relatively few measurements on each patient. Patients receive different doses of several drugs in the studies, with the phase III study containing significantly less toxic treatments. Thus, the main challenges for the analysis are to accommodate heterogeneous population distributions and to formalize borrowing strength across the studies and across the various treatment levels. We describe a hierarchical extension over suitable semiparametric longitudinal data models to achieve the inferential goal. A nonparametric random-effects model accommodates the heterogeneity of the population of patients. A hierarchical extension allows borrowing strength across different studies and different levels of treatment by introducing dependence across these nonparametric random-effects distributions. Dependence is introduced by building an analysis of variance (ANOVA) like structure over the random-effects distributions for different studies and treatment combinations. Model structure and parameter interpretation are similar to standard ANOVA models. Instead of the unknown normal means as in standard ANOVA models, however, the basic objects of inference are random distributions, namely the unknown population distributions under each study. The analysis is based on a mixture of Dirichlet processes model as the underlying semiparametric model.
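The Dirichlet process underlying the mixture model above is most easily sketched through its stick-breaking representation: Beta draws break a unit stick into the random weights of a discrete random distribution. This truncated sketch uses an illustrative concentration parameter and truncation level:

```python
import numpy as np

def stick_breaking(alpha, n_atoms, rng):
    """Stick-breaking weights of a (truncated) Dirichlet process
    DP(alpha, G0): v_k ~ Beta(1, alpha) and
    w_k = v_k * prod_{j<k} (1 - v_j).  Pairing these weights with
    atoms drawn from G0 gives one realization of the random
    distribution; mixing a kernel over it yields a mixture of
    Dirichlet processes."""
    v = rng.beta(1.0, alpha, n_atoms)
    v[-1] = 1.0  # close the truncation so the weights sum to one
    w = v * np.cumprod(np.concatenate(([1.0], 1.0 - v[:-1])))
    return w

rng = np.random.default_rng(7)
w = stick_breaking(2.0, 50, rng)
```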

19.
We propose a new mixture model for Bayesian nonparametric inference. Rather than considering extensions of current approaches, such as the mixture of Dirichlet process model, we end up shrinking it, by making the weights less complex. We demonstrate the model and discuss its performance.

20.
To better understand the power shift and the U.S. role compared to China and other regional actors, the Chicago Council on Global Affairs and the East Asia Institute (EAI) surveyed people in six countries - China, Japan, South Korea, Vietnam, Indonesia, and the United States - in the first half of 2008 about regional security and economic integration in Asia and about how these nations perceive each other (Bouton et al., 2010). The survey exhibits latent variance that cannot be adequately explained by parametric models, in large part because of hidden structures that form in unexpected ways. Therefore, a new Gibbs sampler is developed here in order to reveal previously unseen structures and latent variances in the survey data set of Bouton et al. The sampler is based on semiparametric regression, a well-known tool for capturing functional dependence between variables by combining fixed-effect parametric and nonlinear regression components. This is extended to a generalized semiparametric regression for binary responses with logit and probit link functions, and then to a generalized linear mixed model with a nonparametric random effect, expressed as nonparametric regression with a multinomial-Dirichlet distribution for the number and positions of knots.
