Similar Articles (20 results)
1.
Abstract.  Functional magnetic resonance imaging (fMRI) is a technique for studying the active human brain. During an fMRI experiment, a sequence of MR images is obtained, in which the brain is represented as a set of voxels. The data obtained are a realization of a complex spatio-temporal process with many sources of variation, both biological and technical. We present a spatio-temporal point process model approach for fMRI data in which the temporal and spatial activation are modelled simultaneously. This makes it possible to analyse characteristics of the data other than just the locations of active brain regions, such as the interaction between the active regions. We discuss both classical statistical inference and Bayesian inference in the model. We analyse simulated data without repeated stimuli, both for the location of the activated regions and for the interactions between them. An example of an analysis of fMRI data using this approach is presented.

2.
Statistical inference in the wavelet domain remains a vibrant area of contemporary statistical research because of the desirable properties of wavelet representations and the need of the scientific community to process, explore, and summarize massive data sets. Prime examples are biomedical, geophysical, and internet-related data. We propose two new approaches to wavelet shrinkage/thresholding.

In the spirit of Efron and Tibshirani's recent work on the local false discovery rate, we propose the Bayesian Local False Discovery Rate (BLFDR), where the underlying model on wavelet coefficients does not assume known variances. This approach to wavelet shrinkage is shown to be connected with shrinkage based on Bayes factors. The second proposal, Bayesian False Discovery Rate (BaFDR), is based on the ordering of posterior probabilities of the hypotheses that the true wavelet coefficients are null, in Bayesian testing of multiple hypotheses.

We demonstrate that both approaches result in competitive shrinkage methods by contrasting them with some popular shrinkage techniques.
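For readers unfamiliar with wavelet shrinkage, the basic ingredient these Bayesian proposals refine is a thresholding rule applied coefficient by coefficient. A minimal sketch of classical soft thresholding with the Donoho–Johnstone universal threshold (a generic baseline, not the BLFDR or BaFDR procedure itself):

```python
import math

def soft_threshold(coeffs, lam):
    """Shrink each coefficient toward zero; anything within lam of zero becomes exactly zero."""
    return [math.copysign(max(abs(c) - lam, 0.0), c) for c in coeffs]

# Universal threshold: lam = sigma * sqrt(2 * log n)
coeffs = [4.1, -0.2, 0.05, -3.7, 0.6]
sigma = 0.5
lam = sigma * math.sqrt(2 * math.log(len(coeffs)))
shrunk = soft_threshold(coeffs, lam)  # small coefficients are zeroed, large ones shrunk by lam
```

The Bayesian rules discussed above replace this fixed threshold with data-driven posterior quantities, which is where the gains come from.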

3.
Bayesian analysis of dynamic magnetic resonance breast images
Summary.  We describe an integrated methodology for analysing dynamic magnetic resonance images of the breast. The problems that motivate this methodology arise from a collaborative study with a tumour institute. The methods are developed within the Bayesian framework and comprise image restoration and classification steps. Two different approaches are proposed for the restoration. Bayesian inference is performed by means of Markov chain Monte Carlo algorithms. We make use of a Metropolis algorithm with a specially chosen proposal distribution that performs better than more commonly used proposals. The classification step is based on a few attribute images yielded by the restoration step that describe the essential features of the contrast agent variation over time. Procedures for hyperparameter estimation are provided, thus making our method automatic. The results show the potential of the methodology to extract useful information from acquired dynamic magnetic resonance imaging data about tumour morphology and internal pathophysiological features.

4.
Existing literature on quantile regression for panel data models with individual effects advocates the application of penalization to reduce the dynamic panel bias and increase the efficiency of the estimators. In this paper, we consider penalized quantile regression for dynamic panel data with random effects from a Bayesian perspective, where the penalty involves an adaptive Lasso shrinkage of the random effects. We also address the role of initial conditions in dynamic panel data models, emphasizing joint modeling of start-up and subsequent responses. For posterior inference, an efficient Gibbs sampler is developed to simulate the parameters from the posterior distributions. Through simulation studies and analysis of a real data set, we assess the performance of the proposed Bayesian method.
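The criterion being regularized can be stated compactly. A toy sketch of the check-loss objective for a panel with individual effects and an l1 penalty on those effects (illustrative only: the paper's actual estimator is Bayesian, with the adaptive Lasso arising through a prior, and the function names here are our own):

```python
def check_loss(u, tau):
    """Quantile (check) loss: rho_tau(u) = u * (tau - 1{u < 0})."""
    return u * (tau - (1.0 if u < 0 else 0.0))

def penalized_objective(beta, alpha, y, x, tau, lam):
    """Check-loss fit over a toy panel (units i, periods t) plus an
    l1 penalty on the individual effects alpha[i]."""
    fit = sum(check_loss(y[i][t] - alpha[i] - beta * x[i][t], tau)
              for i in range(len(y)) for t in range(len(y[i])))
    return fit + lam * sum(abs(a) for a in alpha)
```

Larger lam shrinks the individual effects toward zero, which is the mechanism credited with reducing the dynamic panel bias.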

5.
We propose penalized-likelihood methods for parameter estimation of the high-dimensional t distribution. First, we show that a general class of commonly used shrinkage covariance matrix estimators for the multivariate normal can be obtained as penalized-likelihood estimators with a penalty proportional to the entropy loss between the estimate and an appropriately chosen shrinkage target. Motivated by this fact, we then apply this penalty to the multivariate t distribution. The penalized estimate can be computed efficiently using an EM algorithm for given tuning parameters. It can also be viewed as an empirical Bayes estimator. Taking advantage of its Bayesian interpretation, we propose a variant of the method of moments to effectively elicit the tuning parameters. Simulations and real data analysis demonstrate the competitive performance of the new methods.
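The class of shrinkage covariance estimators referenced in the normal case has the familiar linear form: a convex combination of the sample covariance and a target. A minimal sketch (with the weight w fixed for illustration; in the paper's framework it would be set by the tuning-parameter elicitation procedure):

```python
def shrink_cov(S, target, w):
    """Linear shrinkage: elementwise (1 - w) * S + w * target, with 0 <= w <= 1."""
    p = len(S)
    return [[(1 - w) * S[i][j] + w * target[i][j] for j in range(p)]
            for i in range(p)]

S = [[2.0, 0.8], [0.8, 1.0]]      # sample covariance matrix
T = [[1.5, 0.0], [0.0, 1.5]]      # scaled-identity shrinkage target
S_shrunk = shrink_cov(S, T, 0.4)  # off-diagonals pulled toward 0, diagonals toward 1.5
```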

6.
We develop a new methodology for determining the location and dynamics of brain activity from combined magnetoencephalography (MEG) and electroencephalography (EEG) data. The resulting inverse problem is ill-posed and is one of the most difficult problems in neuroimaging data analysis. In our development we propose a solution that combines the data from three different modalities, magnetic resonance imaging (MRI), MEG and EEG, together. We propose a new Bayesian spatial finite mixture model that builds on the mesostate-space model developed by Daunizeau & Friston [Daunizeau and Friston, NeuroImage 2007; 38, 67–81]. Our new model incorporates two major extensions: (i) we combine EEG and MEG data together and formulate a joint model for dealing with the two modalities simultaneously; (ii) we incorporate the Potts model to represent the spatial dependence in an allocation process that partitions the cortical surface into a small number of latent states termed mesostates. The cortical surface is obtained from MRI. We formulate the new spatiotemporal model and derive an efficient procedure for simultaneous point estimation and model selection based on the iterated conditional modes algorithm combined with local polynomial smoothing. The proposed method results in a novel estimator for the number of mixture components and is able to select active brain regions, which correspond to active variables in a high-dimensional dynamic linear model. The methodology is investigated using synthetic data and simulation studies and then demonstrated on an application examining the neural response to the perception of scrambled faces. R software implementing the methodology, along with several sample datasets, is available at the following GitHub repository: https://github.com/v2south/PottsMix . The Canadian Journal of Statistics 47: 688–711; 2019 © 2019 Statistical Society of Canada

7.
We can use wavelet shrinkage to estimate a possibly multivariate regression function g under the general regression setup y = g + ε. We propose an enhanced wavelet-based denoising methodology based on Bayesian adaptive multiresolution shrinkage: an effective Bayesian shrinkage rule combined with a semi-supervised learning mechanism. The Bayesian shrinkage rule is enhanced by the semi-supervised learning method, in which the neighboring structure of a wavelet coefficient is adopted and an appropriate decision function is derived. According to the decision function, wavelet coefficients follow one of two prespecified Bayesian rules obtained using varying related parameters. The decision for a wavelet coefficient depends not only on its magnitude, but also on the neighboring structure in which the coefficient is located. We discuss the theoretical properties of the suggested method and provide recommended parameter settings. We show through extensive experimentation that the proposed method is often superior to several existing wavelet denoising methods.

8.
This paper studies penalized quantile regression for dynamic panel data with fixed effects, where the penalty involves l1 shrinkage of the fixed effects. Using extensive Monte Carlo simulations, we present evidence that the penalty term reduces the dynamic panel bias and increases the efficiency of the estimators. The underlying intuition is that there is no need to use instrumental variables for the lagged dependent variable in the dynamic panel data model without fixed effects. This provides an additional use for the shrinkage models, other than model selection and efficiency gains. We propose a Bayesian information criterion based estimator for the parameter that controls the degree of shrinkage. We illustrate the usefulness of the novel econometric technique by estimating a “target leverage” model that includes a speed of capital structure adjustment. Using the proposed penalized quantile regression model, the estimates of the adjustment speeds lie between 3% and 44% across the quantiles, showing strong evidence that there is substantial heterogeneity in the speed of adjustment among firms.

9.
We discuss a general approach to dynamic sparsity modeling in multivariate time series analysis. Time-varying parameters are linked to latent processes that are thresholded to induce zero values adaptively, providing natural mechanisms for dynamic variable inclusion/selection. We discuss Bayesian model specification, analysis and prediction in dynamic regressions, time-varying vector autoregressions, and multivariate volatility models using latent thresholding. Application to a topical macroeconomic time series problem illustrates some of the benefits of the approach in terms of statistical and economic interpretations as well as improved predictions. Supplementary materials for this article are available online.
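The thresholding mechanism itself is simple to state. A static sketch of the latent-thresholding map (for illustration only; in the latent-thresholding framework both the latent values and their relevance evolve over time):

```python
def latent_threshold(beta, d):
    """Latent thresholding: a coefficient enters the model only when its
    latent value exceeds the threshold d in absolute size; otherwise it is
    set exactly to zero, i.e. the variable is dynamically excluded."""
    return [b if abs(b) >= d else 0.0 for b in beta]

active = latent_threshold([0.9, -0.1, 0.4, -1.3], 0.5)  # zeros out the two small latent values
```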

10.
We discuss the impact of tuning parameter selection uncertainty in the context of shrinkage estimation and propose a methodology to account for problems arising from this issue. Transferring established concepts from model averaging to shrinkage estimation yields the concept of shrinkage averaging estimation (SAE), which reflects the idea of using weighted combinations of shrinkage estimators with different tuning parameters to improve the overall stability, predictive performance and standard errors of shrinkage estimators. Two distinct approaches for an appropriate weight choice, both inspired by concepts from the recent model averaging literature, are presented. The first approach relates to an optimal weight choice with regard to the predictive performance of the final weighted estimator; its implementation can be realized via quadratic programming. The second approach has a rather different motivation and constructs the weights via a resampling experiment. Focusing on Ridge, Lasso and Random Lasso estimators, the properties of the proposed shrinkage averaging estimators resulting from these strategies are explored by means of Monte Carlo studies and are compared to traditional approaches where the tuning parameter is simply selected via cross-validation criteria. The results show that the proposed SAE methodology can improve an estimator's overall performance and can reveal and account for tuning-parameter uncertainty. As an illustration, selected methods are applied to some recent data from a study on leadership behavior in life science companies.
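The core SAE idea, a weighted combination of shrinkage estimates computed over a grid of tuning parameters, can be sketched in a univariate ridge setting (the weights below are fixed for illustration; the approaches above would choose them by quadratic programming or resampling):

```python
def ridge_1d(x, y, lam):
    """Univariate ridge estimate (no intercept): the least-squares slope
    is shrunk toward zero as the tuning parameter lam grows."""
    sxy = sum(a * b for a, b in zip(x, y))
    sxx = sum(a * a for a in x)
    return sxy / (sxx + lam)

def shrinkage_average(x, y, lams, weights):
    """SAE sketch: weighted combination of shrinkage estimates over a tuning grid."""
    return sum(w * ridge_1d(x, y, lam) for lam, w in zip(lams, weights))

x = [1.0, 2.0, 3.0]
y = [2.0, 4.1, 5.9]
est = shrinkage_average(x, y, lams=[0.0, 1.0, 10.0], weights=[0.5, 0.3, 0.2])
```

Rather than committing to the single cross-validated lam, the averaged estimator spreads the bet across the grid, which is what stabilizes it against tuning-parameter selection uncertainty.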

11.
The prognosis for patients with high grade gliomas is poor, with a median survival of 1 year. Treatment efficacy assessment is typically unavailable until 5-6 months post diagnosis. Investigators hypothesize that quantitative magnetic resonance imaging can assess treatment efficacy 3 weeks after therapy starts, thereby allowing salvage treatments to begin earlier. The purpose of this work is to build a predictive model of treatment efficacy by using quantitative magnetic resonance imaging data and to assess its performance. The outcome is 1-year survival status. We propose a joint, two-stage Bayesian model. In stage I, we smooth the image data with a multivariate spatiotemporal pairwise difference prior. We propose four summary statistics that are functionals of posterior parameters from the first-stage model. In stage II, these statistics enter a generalized non-linear model as predictors of survival status. We use the probit link and a multivariate adaptive regression spline basis. Gibbs sampling and reversible jump Markov chain Monte Carlo methods are applied iteratively between the two stages to estimate the posterior distribution. Through both simulation studies and model performance comparisons we find that we can achieve higher overall correct classification rates by accounting for the spatiotemporal correlation in the images and by allowing for a more complex and flexible decision boundary provided by the generalized non-linear model.

12.
In this paper, we propose a three-level hierarchical Bayesian model for variable selection and estimation in quantile regression problems. Specifically, at the first level we consider zero-mean normal priors for the coefficients with unknown variance parameters. At the second level, we specify two different priors for the unknown variance parameters, which introduce two different models producing different levels of sparsity. At the third level, we suggest joint improper priors for the unknown hyperparameters, assuming they are independent. Simulations and the Boston Housing data are used to compare the performance of our models with six existing models. The results indicate that our models perform well on both the simulations and the Boston Housing data.

13.
In this paper we define a hierarchical Bayesian model for microarray expression data collected from several studies and use it to identify genes that show differential expression between two conditions. Key features include shrinkage across both genes and studies, and flexible modeling that allows for interactions between platforms and the estimated effect, as well as concordant and discordant differential expression across studies. We evaluated the performance of our model in a comprehensive fashion, using both artificial data and a "split-study" validation approach that provides an agnostic assessment of the model's behavior not only under the null hypothesis, but also under a realistic alternative. The simulation results from the artificial data demonstrate the advantages of the Bayesian model. The 1 - AUC values for the Bayesian model are roughly half of the corresponding values for a direct combination of t- and SAM-statistics. Furthermore, the simulations provide guidelines for when the Bayesian model is most likely to be useful. Most notably, in small studies the Bayesian model generally outperforms other methods when evaluated by AUC, FDR, and MDR across a range of simulation parameters, and this difference diminishes for larger sample sizes in the individual studies. The split-study validation illustrates appropriate shrinkage of the Bayesian model in the absence of platform-, sample-, and annotation-differences that otherwise complicate experimental data analyses. Finally, we fit our model to four breast cancer studies employing different technologies (cDNA and Affymetrix) to estimate differential expression in estrogen receptor positive tumors versus negative ones. Software and data for reproducing our analysis are publicly available.

14.
Image segmentation plays an important role in image processing prior to image recognition or compression. Many segmentation methods follow information-theoretic criteria and often give excellent results; however, they are not robust to noise in contaminated image data. To achieve optimal segmentation in the presence of noise, a robust Bayesian information criterion that is less sensitive to noise is proposed for segmenting grayscale images. Its asymptotic properties are also studied. Monte Carlo numerical experiments, along with a brain magnetic resonance image, are used to evaluate the performance of the new method.

15.
We incorporate a random effect into a multivariate discrete proportional hazards model and propose an efficient semiparametric Bayesian estimation method. By introducing a prior process for the parameters of the baseline hazards, we consider a nonparametric estimation of the baseline hazard function. Using a state space representation, we derive a dynamic model of the baseline hazard function and propose an efficient block sampler for the Markov chain Monte Carlo method. A numerical example using kidney patient data is given.

16.
Huang J, Ma S, Li H, Zhang CH. Annals of Statistics 2011, 39(4): 2021–2046
We propose a new penalized method for variable selection and estimation that explicitly incorporates the correlation patterns among predictors. This method is based on a combination of the minimax concave penalty and the Laplacian quadratic associated with a graph as the penalty function. We call it the sparse Laplacian shrinkage (SLS) method. The SLS uses the minimax concave penalty to encourage sparsity and the Laplacian quadratic penalty to promote smoothness among the coefficients associated with correlated predictors. The SLS has a generalized grouping property with respect to the graph represented by the Laplacian quadratic. We show that the SLS possesses an oracle property in the sense that it is selection consistent and equal to the oracle Laplacian shrinkage estimator with high probability. This result holds in sparse, high-dimensional settings with p ≫ n under reasonable conditions. We derive a coordinate descent algorithm for computing the SLS estimates. Simulation studies are conducted to evaluate the performance of the SLS method and a real data example is used to illustrate its application.
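The building block of a coordinate descent algorithm for such penalties is the univariate minimax concave penalty (MCP) solution. A sketch of that thresholding operator in isolation (in the full SLS update the Laplacian quadratic term would also enter through the working residual, which is omitted here):

```python
def mcp_threshold(z, lam, gamma=3.0):
    """Univariate MCP solution: soft-thresholds small inputs but leaves
    large ones unshrunk, giving near-unbiased estimates of big coefficients."""
    if abs(z) <= gamma * lam:
        s = max(abs(z) - lam, 0.0) / (1.0 - 1.0 / gamma)
        return s if z >= 0 else -s
    return z  # beyond gamma * lam the penalty is flat, so no shrinkage

small = mcp_threshold(0.5, 1.0)   # below the threshold: exactly zero
mid = mcp_threshold(2.0, 1.0)     # shrunk, but less than the lasso would shrink
large = mcp_threshold(5.0, 1.0)   # beyond gamma * lam: returned unchanged
```

This contrast with the lasso (which shrinks every coefficient by lam) is what gives MCP-based methods their oracle-type behavior.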

17.
Summary.  Functional magnetic resonance imaging has become a standard technology in human brain mapping. Analyses of the massive spatiotemporal functional magnetic resonance imaging data sets often focus on parametric or non-parametric modelling of the temporal component, whereas spatial smoothing is based on Gaussian kernels or random fields. A weakness of Gaussian spatial smoothing is underestimation of activation peaks or blurring of high curvature transitions between activated and non-activated regions of the brain. To improve spatial adaptivity, we introduce a class of inhomogeneous Markov random fields with stochastic interaction weights in a space-varying coefficient model. For given weights, the random field is conditionally Gaussian, but marginally it is non-Gaussian. Fully Bayesian inference, including estimation of weights and variance parameters, can be carried out through efficient Markov chain Monte Carlo simulation. Although motivated by the analysis of functional magnetic resonance imaging data, the methodological development is general and can also be used for spatial smoothing and regression analysis of areal data on irregular lattices. An application to stylized artificial data and to real functional magnetic resonance imaging data from a visual stimulation experiment demonstrates the performance of our approach in comparison with Gaussian and robustified non-Gaussian Markov random-field models.

18.
In first-level analyses of functional magnetic resonance imaging data, adjustments for temporal correlation, such as a Satterthwaite approximation or a prewhitening method, are usually implemented in the univariate model to keep the nominal test level. In doing so, the temporal correlation structure of the data is estimated, assuming an autoregressive process of order one. We show that this is applicable in multivariate approaches too, more precisely in the so-called stabilized multivariate test statistics. Furthermore, we propose a block-wise permutation method including a random shift that renders an approximation of the temporal correlation structure unnecessary while still approximately keeping the nominal test level in spite of the dependence of sample elements. Although the intentions are different, a comparison of the multivariate methods with the multiple-testing ones shows that the global approach may achieve advantages if applied to suitable regions of interest. This is illustrated using an example from fMRI studies.
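The resampling scheme described above can be sketched directly. A toy implementation of block-wise permutation with a random circular shift, assuming `block_len` divides the series length (our own illustrative code, not the authors' implementation):

```python
import random

def blockwise_permute(series, block_len, rng):
    """Randomly circular-shift the series, cut it into contiguous blocks,
    then shuffle the blocks: short-range temporal correlation is preserved
    inside each block while the global temporal order is destroyed."""
    shift = rng.randrange(len(series))
    shifted = series[shift:] + series[:shift]
    blocks = [shifted[i:i + block_len]
              for i in range(0, len(shifted), block_len)]
    rng.shuffle(blocks)
    return [v for block in blocks for v in block]

rng = random.Random(0)           # seeded for reproducibility
perm = blockwise_permute(list(range(12)), 3, rng)
```

Because dependence within blocks is kept, the permutation distribution approximately respects the temporal correlation without ever estimating it, which is the point made above.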

19.
We consider hidden Markov models with an unknown number of regimes for the segmentation of the pixel intensities of digital images that consist of a small set of colours. New reversible jump Markov chain Monte Carlo algorithms to estimate both the dimension and the unknown parameters of the model are introduced. Parameters are updated by random walk Metropolis–Hastings moves, without updating the sequence of the hidden Markov chain. The segmentation (i.e. the estimation of the hidden regimes) is a further aim and is performed by means of a number of competing algorithms. We apply our Bayesian inference and segmentation tools to digital images, which are linearized through the Peano–Hilbert scan, and perform experiments and comparisons on both synthetic images and a real brain magnetic resonance image.
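The Peano–Hilbert scan turns the 2-D pixel grid into a 1-D sequence so that a hidden Markov chain can be run along it while keeping spatially close pixels close in the sequence. A sketch of the standard Hilbert-curve index-to-coordinate conversion (one common bit-manipulation convention; curve orientations vary between implementations):

```python
def hilbert_d2xy(order, d):
    """Map distance d along a Hilbert curve filling a 2**order by 2**order
    grid to (x, y) pixel coordinates."""
    x = y = 0
    t = d
    s = 1
    while s < (1 << order):
        rx = 1 & (t // 2)
        ry = 1 & (t ^ rx)
        if ry == 0:                      # rotate/reflect this quadrant
            if rx == 1:
                x, y = s - 1 - x, s - 1 - y
            x, y = y, x
        x += s * rx
        y += s * ry
        t //= 4
        s *= 2
    return x, y

scan = [hilbert_d2xy(2, d) for d in range(16)]  # visits every pixel of a 4x4 grid once
```

Consecutive scan positions are always adjacent pixels, which is exactly the locality property that makes the linearization useful for Markov modelling.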

20.
Classical bridge regression is known to possess many desirable statistical properties such as oracle, sparsity, and unbiasedness. One outstanding disadvantage of bridge regularization, however, is that it lacks a systematic approach to inference, reducing its flexibility in practical applications. In this study, we propose bridge regression from a Bayesian perspective. Unlike classical bridge regression that summarizes inference using a single point estimate, the proposed Bayesian method provides uncertainty estimates of the regression parameters, allowing coherent inference through the posterior distribution. Under a sparsity assumption on the high-dimensional parameter, we provide sufficient conditions for strong posterior consistency of the Bayesian bridge prior. On simulated datasets, we show that the proposed method performs well compared to several competing methods across a wide range of scenarios. Application to two real datasets further revealed that the proposed method performs as well as or better than published methods while offering the advantage of posterior inference.
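For reference, the classical bridge criterion that the Bayesian treatment above generalizes penalizes the residual sum of squares by sum_j |beta_j|^q. A sketch to fix notation (the paper's contribution is the Bayesian prior and posterior inference, not this criterion):

```python
def bridge_objective(beta, y, X, lam, q):
    """Bridge criterion: RSS + lam * sum_j |beta_j|**q.
    q = 1 recovers the lasso, q = 2 ridge; 0 < q < 1 yields sparser fits."""
    rss = sum((yi - sum(b * xij for b, xij in zip(beta, xi))) ** 2
              for yi, xi in zip(y, X))
    return rss + lam * sum(abs(b) ** q for b in beta)
```

In the Bayesian version, exponentiating the negative penalty gives the exponential-power prior on the coefficients, so the classical estimate corresponds to a posterior mode.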
