Similar Literature
 20 similar documents found.
1.
We develop a hierarchical Bayesian approach for inference in random coefficient dynamic panel data models. Our approach allows for the initial values of each unit's process to be correlated with the unit-specific coefficients. We impose a stationarity assumption for each unit's process by assuming that the unit-specific autoregressive coefficient is drawn from a logitnormal distribution. Our method is shown to have favorable properties compared to the mean group estimator in a Monte Carlo study. We apply our approach to analyze energy and protein intakes among individuals from the Philippines.
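One way to read the stationarity construction (a sketch under the assumption that the logit-normal prior is placed directly on the autoregressive coefficient, as the abstract suggests): the logit-normal has support on (0, 1), so every unit-specific coefficient automatically lies inside the stationary region.

```latex
\rho_i = \frac{1}{1 + e^{-\eta_i}}, \qquad \eta_i \sim N(\mu_\rho, \sigma_\rho^2)
\;\;\Longrightarrow\;\; \rho_i \in (0,1),
```

so each unit-level AR(1) process with coefficient \(\rho_i\) is stationary for every draw from the prior.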

2.
Assume that we have a random sample of size n from a p-variate normal population and we wish to estimate the mean vector under quadratic loss with respect to the inverse of the unknown covariance matrix. A class of estimators superior to the James–Stein positive-part estimator is given when n > max{9p+10, 13p-7}, based on the argument of Shao and Strawderman (1994).
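For orientation, the dominated baseline is the positive-part James–Stein estimator; with sample mean X-bar and sample covariance S it takes a form along the lines of the sketch below. The exact constant c and scaling depend on the convention used, and the paper's improved class of estimators is not reproduced here.

```latex
\delta^{+}(\bar X, S) \;=\; \Bigl(1 - \frac{c}{\,n\,\bar X^{\top} S^{-1} \bar X\,}\Bigr)_{+}\,\bar X ,
\qquad (a)_{+} = \max(a, 0),\quad c > 0 .
```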

3.
In this paper, we study a new Bayesian approach for the analysis of linearly mixed structures. In particular, we consider the case of hyperspectral images, which have to be decomposed into a collection of distinct spectra, called endmembers, and a set of associated proportions for every pixel in the scene. This problem, often referred to as spectral unmixing, is usually considered on the basis of the linear mixing model (LMM). In unsupervised approaches, the endmember signatures have to be calculated by an endmember extraction algorithm, which generally relies on the assumption that there are pure (unmixed) pixels contained in the image. In practice, this assumption may not hold for highly mixed data, and consequently the extracted endmember spectra differ from the true ones. A way out of this dilemma is to consider the problem under the normal compositional model (NCM). Contrary to the LMM, the NCM treats the endmembers as random Gaussian vectors rather than as deterministic quantities. Existing Bayesian approaches for estimating the proportions under the NCM are restricted to the case where the covariance matrix of the Gaussian endmembers is a multiple of the identity matrix; such a model is therefore not suitable when the variance differs from one spectral channel to another, which is a common phenomenon in practice. In this paper, we first propose a Bayesian strategy for the estimation of the mixing proportions under the assumption of varying variances in the spectral bands. We then generalize this model to handle the case of a completely unknown covariance structure. For both algorithms, we present Gibbs sampling strategies and compare their performance with other, state-of-the-art unmixing routines on synthetic as well as real hyperspectral fluorescence spectroscopy data.
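The contrast between the two mixing models can be written schematically as below (one common formulation with R endmembers and the usual abundance constraints; the paper's covariance parameterizations, from band-dependent variances to a fully unknown structure, refine the NCM case).

```latex
\text{LMM:}\quad \mathbf{y} = \sum_{k=1}^{R} a_k \mathbf{m}_k + \mathbf{n},
\qquad \mathbf{n} \sim N(\mathbf{0}, \sigma^2 \mathbf{I}),
\qquad\qquad
\text{NCM:}\quad \mathbf{y} = \sum_{k=1}^{R} a_k \mathbf{e}_k,
\qquad \mathbf{e}_k \sim N(\mathbf{m}_k, \boldsymbol{\Sigma}_k),
```

with \(a_k \ge 0\) and \(\sum_{k=1}^{R} a_k = 1\) for every pixel.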

4.
In this paper, we utilize normal/independent (NI) distributions as a tool for robust modeling of linear mixed models (LMM) under a Bayesian paradigm. The purpose is to develop a non-iterative sampling method to obtain i.i.d. samples approximately from the observed posterior distribution by combining the inverse Bayes formulae, sampling/importance resampling and posterior mode estimates from the expectation–maximization algorithm applied to LMMs with NI distributions, as suggested by Tan et al. (2003). The proposed algorithm provides a novel alternative to perfect sampling and eliminates the convergence problems of Markov chain Monte Carlo methods. In order to examine the robust aspects of the NI class against outlying and influential observations, we present Bayesian case deletion influence diagnostics based on the Kullback–Leibler divergence. Further, some discussions on model selection criteria are given. The new methodologies are exemplified through a real data set, illustrating the usefulness of the proposed methodology.
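As a rough illustration of the sampling/importance-resampling ingredient (a generic sketch only, not the paper's full IBF-based algorithm; the target and proposal below are placeholders):

```python
# A minimal sampling/importance-resampling (SIR) sketch -- generic, not the
# paper's IBF-based method.  target_log_pdf and the proposal are placeholders.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def target_log_pdf(theta):
    # Placeholder posterior: a standard normal in one dimension.
    return stats.norm.logpdf(theta)

# 1. Draw from a proposal (a heavier-tailed Student-t).
n_draws = 10_000
proposal = stats.t(df=4, loc=0.0, scale=1.5)
draws = proposal.rvs(size=n_draws, random_state=rng)

# 2. Importance weights: target density over proposal density.
log_w = target_log_pdf(draws) - proposal.logpdf(draws)
w = np.exp(log_w - log_w.max())
w /= w.sum()

# 3. Resample proportionally to the weights to obtain (approximately)
#    i.i.d. draws from the target, without any Markov chain.
idx = rng.choice(n_draws, size=2_000, replace=True, p=w)
posterior_sample = draws[idx]
print(posterior_sample.mean(), posterior_sample.std())
```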

5.
We consider the problem of constructing a fixed-size confidence region for the difference of means of two multivariate normal populations. It is assumed that the variance–covariance matrices of the two populations differ only by unknown scalar multipliers. Two-stage procedures are presented to derive such a confidence region. We also discuss the asymptotic efficiency of the procedure.

6.
Very often, the likelihoods for circular data sets are of quite complicated forms, and the functional forms of the normalising constants, which depend upon the unknown parameters, are unknown. This latter problem generally precludes rigorous, exact inference (both classical and Bayesian) for circular data. Noting the paucity of literature on Bayesian circular data analysis, and also because realistic data analysis is naturally permitted by the Bayesian paradigm, we address the above problem from a Bayesian perspective. In particular, we propose a methodology that combines importance sampling and Markov chain Monte Carlo (MCMC) in a very effective manner to sample from the posterior distribution of the parameters, given the circular data. With a simulation study and real data analysis, we demonstrate the considerable reliability and flexibility of our proposed methodology in analysing circular data.
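The core difficulty can be made concrete as follows (a schematic sketch only; the paper's actual combination of importance sampling with MCMC is more elaborate). If the circular density is known only up to a parameter-dependent normalising constant, a standard importance-sampling estimate of that constant from draws x_1, ..., x_N from a proposal density g is

```latex
f(x \mid \theta) = \frac{h(x \mid \theta)}{C(\theta)},\qquad
C(\theta) = \int_0^{2\pi} h(x \mid \theta)\,dx,\qquad
\widehat{C}(\theta) = \frac{1}{N}\sum_{j=1}^{N} \frac{h(x_j \mid \theta)}{g(x_j)} .
```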

7.
This paper develops a Bayesian procedure for estimation and forecasting of the volatility of multivariate time series. The foundation of this work is the matrix-variate dynamic linear model, for the volatility of which we adopt a multiplicative stochastic evolution, using Wishart and singular multivariate beta distributions. A diagonal matrix of discount factors is employed in order to discount the variances element by element, therefore allowing a flexible and pragmatic variance modelling approach. Diagnostic tests and sequential model monitoring are discussed in some detail. The proposed estimation theory is applied to a four-dimensional time series comprising spot prices of aluminium, copper, lead and zinc from the London Metal Exchange. The empirical findings suggest that the proposed Bayesian procedure can be effectively applied to financial data, overcoming many of the disadvantages of existing volatility models.

8.
Testing homogeneity of multivariate normal mean vectors under an order restriction, when the covariance matrices are unknown, arbitrary positive definite and unequal, is considered. This testing problem has been studied to some extent, for example by Kulatunga and Sasabuchi (1984) when the covariance matrices are known, and by Sasabuchi et al. (2003) and Sasabuchi (2007) when the covariance matrices are unknown but common. In this paper, a test statistic is proposed. Because the main advantage of the bootstrap test is that it avoids deriving the complex null distribution analytically, a bootstrap version of the test is derived; since the proposed statistic is location invariant, the bootstrap p-value is well defined, and steps for estimating it are presented. Our numerical studies via Monte Carlo simulation show that the proposed bootstrap test correctly controls the type I error rates. The power of the test for some p-dimensional normal distributions is computed by Monte Carlo simulation, and the null distribution of the test statistic is estimated using kernel density estimation. Finally, the bootstrap test is illustrated using a real data set.
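As a rough sketch of the bootstrap p-value idea (generic only; the statistic below is a placeholder and is not the paper's order-restricted test, and the null resampling scheme merely illustrates how location invariance can be exploited):

```python
# Generic bootstrap p-value sketch for a location-invariant test statistic.
# test_stat and the resampling scheme are illustrative placeholders.
import numpy as np

rng = np.random.default_rng(1)

def test_stat(samples):
    # Placeholder statistic: spread of the group mean vectors.
    means = np.stack([x.mean(axis=0) for x in samples])
    return np.sum((means - means.mean(axis=0)) ** 2)

def bootstrap_p_value(samples, n_boot=2000):
    t_obs = test_stat(samples)
    exceed = 0
    for _ in range(n_boot):
        boot = []
        for x in samples:
            # Centre each group (location invariance lets us resample under a
            # common-mean null) and resample rows with replacement.
            centred = x - x.mean(axis=0)
            idx = rng.integers(0, len(centred), size=len(centred))
            boot.append(centred[idx])
        exceed += test_stat(boot) >= t_obs
    return exceed / n_boot

# Toy data: three groups of 4-dimensional observations.
groups = [rng.normal(size=(30, 4)) for _ in range(3)]
print(bootstrap_p_value(groups))
```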

9.
The estimation of the dispersion matrix of a multivariate normal distribution with zero mean on the basis of a random sample is discussed from a Bayesian view. An inverted-Wishart distribution is taken for the dispersion, with its defining matrix of intraclass form. Some consistency properties are described. The posterior distribution is found and its mode investigated as a possible estimate in preference to that of maximum likelihood.
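For reference, the conjugate update behind this setup, under one common parameterisation of the inverted-Wishart (the paper's intraclass structure on the defining matrix is an additional restriction not shown here): with x_1, ..., x_n i.i.d. N_p(0, Sigma) and an IW(Psi, nu) prior whose density is proportional to |Sigma|^{-(nu+p+1)/2} exp(-tr(Psi Sigma^{-1})/2),

```latex
\Sigma \mid x_1,\dots,x_n \;\sim\; \mathrm{IW}\bigl(\Psi + S,\; \nu + n\bigr),
\qquad S = \sum_{i=1}^{n} x_i x_i^{\top},
```

and under this parameterisation the posterior mode is (Psi + S)/(nu + n + p + 1).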

10.
A test for linear trend among a set of eigenvalues of k covariance matrices is developed. A special case of this test is Flury's (1986) test for the equality of eigenvalues. The linear trend hypothesis appears to be more relevant to data analysis than the equality hypothesis. Examples show how the linear trend hypothesis can be acceptable while the equality hypothesis is rejected.

11.
Normality and independence of error terms are typical assumptions for partial linear models. However, these assumptions may be unrealistic in many fields, such as economics, finance and biostatistics. In this paper, a Bayesian analysis for a partial linear model with first-order autoregressive errors belonging to the class of scale mixtures of normal distributions is studied in detail. The proposed model provides a useful generalization of the symmetrical linear regression model with independent errors, since the distribution of the error term covers both correlated and thick-tailed distributions, and has a convenient hierarchical representation allowing easy implementation of a Markov chain Monte Carlo scheme. In order to examine the robustness of the model against outlying and influential observations, a Bayesian case deletion influence diagnostic based on the Kullback–Leibler (K–L) divergence is presented. The proposed method is applied to monthly and daily returns of two Chilean companies.
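The hierarchical representation alluded to can be sketched as follows (a schematic form under assumed notation; the paper's exact placement of the scale mixing may differ). With regression part x_i'beta + f(t_i) and AR(1) errors,

```latex
y_i = x_i^{\top}\beta + f(t_i) + \varepsilon_i,\qquad
\varepsilon_i = \phi\,\varepsilon_{i-1} + u_i,\qquad
u_i \mid \lambda_i \sim N\bigl(0, \sigma^2/\lambda_i\bigr),\quad \lambda_i \sim H,
```

where, for example, a Gamma(nu/2, nu/2) mixing distribution H yields Student-t innovations with nu degrees of freedom.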

12.
The common principal components (CPC) model provides a way to model the population covariance matrices of several groups by assuming a common eigenvector structure. When appropriate, this model can provide covariance matrix estimators whose elements have smaller standard errors than either the pooled covariance matrix or the per-group unbiased sample covariance matrix estimators. In this article, a regularized CPC estimator under the assumption of a common (or partially common) eigenvector structure in the populations is proposed. After estimation of the common eigenvectors using the Flury–Gautschi (or another) algorithm, the off-diagonal elements of the nearly diagonalized covariance matrices are shrunk towards zero and multiplied by the orthogonal common eigenvector matrix to obtain the regularized CPC covariance matrix estimates. The optimal shrinkage intensity per group can be estimated using cross-validation. The efficiency of these estimators compared to the pooled and unbiased estimators is investigated in a Monte Carlo simulation study, and the regularized CPC estimator is applied to a real dataset to demonstrate the utility of the method.
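A minimal sketch of the shrink-and-rotate step for a single group, assuming a common eigenvector matrix B has already been estimated (e.g. with the Flury–Gautschi algorithm, which is not implemented here) and with the shrinkage intensity fixed rather than chosen by cross-validation:

```python
# Sketch of a regularised CPC covariance estimate for one group, assuming the
# orthogonal common eigenvector matrix B is given.  The shrinkage intensity
# lam is fixed here; in the article it would be chosen by cross-validation.
import numpy as np

def regularised_cpc_estimate(S, B, lam):
    """Shrink the off-diagonal of B'SB towards zero and rotate back."""
    F = B.T @ S @ B                                # nearly diagonal under CPC
    F_shrunk = (1.0 - lam) * F + lam * np.diag(np.diag(F))
    return B @ F_shrunk @ B.T

# Toy illustration with an arbitrary orthogonal B standing in for the
# estimated common eigenvectors.
rng = np.random.default_rng(2)
X = rng.normal(size=(50, 5))
S = np.cov(X, rowvar=False)
B, _ = np.linalg.qr(rng.normal(size=(5, 5)))
Sigma_hat = regularised_cpc_estimate(S, B, lam=0.5)
print(Sigma_hat)
```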

13.
In recent years, Bayesian statistical methods in neuroscience have been showing important advances. In particular, detection of brain signals for studying the complexity of the brain is an active area of research. Functional magnetic resonance imaging (fMRI) is an important tool to determine which parts of the brain are activated by different types of physical behavior. According to recent results, there is evidence that the values of the connectivity brain signal parameters are close to zero, and due to the nature of time series fMRI data with high-frequency behavior, Bayesian dynamic models for identifying sparsity are indeed far-reaching. We propose a multivariate Bayesian dynamic approach for model selection and shrinkage estimation of the connectivity parameters. We describe the coupling or lead-lag between any pair of regions by using mixture priors for the connectivity parameters and propose a new weakly informative default prior for the state variances. This framework produces one-step-ahead proper posterior predictive results and induces shrinkage and robustness suitable for fMRI data in the presence of sparsity. To explore the performance of the proposed methodology, we present simulation studies and an application to functional magnetic resonance imaging data.

14.
This paper provides Bartlett corrections to improve likelihood ratio tests for heteroskedastic normal linear models when the error covariance matrix is nonscalar and depends on a set of unknown parameters. The Bartlett corrections are simple enough to be used algebraically to obtain several closed-form expressions in special cases. The corrections also have advantages for numerical purposes because they involve only simple operations on matrices and vectors.
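For context, the generic form of a Bartlett correction (a standard textbook statement, not the paper's specific closed-form expressions): if the likelihood-ratio statistic LR is asymptotically chi-squared with q degrees of freedom and E(LR) = q{1 + b/n + O(n^{-2})}, then the corrected statistic

```latex
\mathrm{LR}^{*} \;=\; \frac{\mathrm{LR}}{1 + b/n}
```

follows the chi-squared distribution with q degrees of freedom up to an error of order O(n^{-2}) rather than O(n^{-1}).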

15.
Road safety has recently become a major concern in most modern societies. The identification of sites that are more dangerous than others (black spots) can help in better scheduling road safety policies. This paper proposes a methodology for ranking sites according to their level of hazard. The model is innovative in at least two respects. Firstly, it makes use of all relevant information per accident location, including the total number of accidents and the number of fatalities, as well as the number of slight and serious injuries. Secondly, the model includes the use of a cost function to rank the sites with respect to their total expected cost to society. Bayesian estimation for the model via a Markov chain Monte Carlo approach is proposed. Accident data from 519 intersections in Leuven (Belgium) are used to illustrate the methodology proposed. Furthermore, different cost functions are used to show the effect of the proposed method on the use of different costs per type of injury.

16.
We develop Bayesian procedures to make inference about the parameters of a statistical design with autocorrelated error terms. Modelling treatment effects can be complex in the presence of other factors such as time, for example in longitudinal data. In this paper, Markov chain Monte Carlo (MCMC) methods, namely the Metropolis–Hastings algorithm and the Gibbs sampler, are used to facilitate the Bayesian analysis of real-life data when the error structure can be expressed as an autoregressive model of order p. We illustrate our analysis with real data.
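A toy random-walk Metropolis–Hastings step of the kind that appears inside such a scheme (a generic sketch with an AR(1) example; the paper's design matrix, priors and full conditional distributions are not reproduced):

```python
# Generic random-walk Metropolis-Hastings sketch for an autoregressive
# coefficient.  The posterior below is a placeholder (unit innovation
# variance, flat prior on the stationary region).
import numpy as np

rng = np.random.default_rng(3)

# Toy AR(1) data: y_t = phi * y_{t-1} + e_t.
phi_true, n = 0.6, 300
y = np.zeros(n)
for t in range(1, n):
    y[t] = phi_true * y[t - 1] + rng.normal()

def log_post(phi):
    if abs(phi) >= 1.0:                    # flat prior on the stationary region
        return -np.inf
    resid = y[1:] - phi * y[:-1]
    return -0.5 * np.sum(resid ** 2)       # conditional Gaussian log-likelihood

phi, draws = 0.0, []
for _ in range(5000):
    prop = phi + 0.1 * rng.normal()        # random-walk proposal
    if np.log(rng.uniform()) < log_post(prop) - log_post(phi):
        phi = prop                         # accept; otherwise keep current value
    draws.append(phi)
print(np.mean(draws[1000:]))               # posterior mean after burn-in
```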

17.
We consider a Bayesian deterministically trending dynamic time series model with heteroscedastic error variance, in which there exist multiple structural changes in level, trend and error variance, but the number of change-points and the timings are unknown. For a Bayesian analysis, a truncated Poisson prior and conjugate priors are used for the number of change-points and the distributional parameters, respectively. To identify the best model and estimate the model parameters simultaneously, we propose a new method by sequentially making use of the Gibbs sampler in conjunction with stochastic approximation Monte Carlo simulations, as an adaptive Monte Carlo algorithm. The numerical results are in favor of our method in terms of the quality of estimates.

18.
This article deals with the problem of Bayesian inference concerning the common scale parameter of several Pareto distributions. Bayesian hypothesis testing of, and Bayesian interval estimation for, the common scale parameter is given. Numerical studies including a comparison study, a simulation study, and a practical application study are given in order to illustrate our procedures and to demonstrate the performance, advantages, and merits of the Bayesian procedures over the classical and generalized variable procedures.

19.
We extend Chebyshev's inequality to a random vector with a singular covariance matrix. Then we consider the case of a multivariate normal distribution for this generalization.
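One natural form of such an extension (a sketch; whether it coincides with the paper's exact statement is an assumption): for a random vector X with mean mu and singular covariance matrix Sigma of rank r, using the Moore–Penrose pseudoinverse,

```latex
P\bigl((X-\mu)^{\top}\Sigma^{+}(X-\mu) \ge \varepsilon\bigr) \;\le\; \frac{r}{\varepsilon},
\qquad \varepsilon > 0,
```

which follows from Markov's inequality because E[(X-mu)' Sigma^{+} (X-mu)] = tr(Sigma^{+} Sigma) = r.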

20.
This paper develops a novel and efficient algorithm for Bayesian inference in inverse Gamma stochastic volatility models. It is shown that, by conditioning on auxiliary variables, it is possible to sample all the volatilities jointly and directly from their posterior conditional density, using simple distributions that are easy to draw from. Furthermore, this paper develops a generalized inverse Gamma process with more flexible tails in the distribution of volatilities, which still allows for simple and efficient calculations. Using several macroeconomic and financial datasets, it is shown that the inverse Gamma and generalized inverse Gamma processes can greatly outperform the commonly used log-normal volatility processes with Student's t errors or jumps in the mean equation.
