Similar literature
20 similar documents found (search time: 46 ms)
1.
Summary.  Traffic particle concentrations show considerable spatial variability within a metropolitan area. We consider latent variable semiparametric regression models for modelling the spatial and temporal variability of black carbon and elemental carbon concentrations in the greater Boston area. Measurements of these pollutants, which are markers of traffic particles, were obtained from several individual exposure studies that were conducted at specific household locations as well as 15 ambient monitoring sites in the area. The models allow for both flexible non-linear effects of covariates and for unexplained spatial and temporal variability in exposure. In addition, the different individual exposure studies recorded different surrogates of traffic particles, with some recording only outdoor concentrations of black or elemental carbon, some recording indoor concentrations of black carbon and others recording both indoor and outdoor concentrations of black carbon. A joint model for outdoor and indoor exposure that specifies a spatially varying latent variable provides greater spatial coverage in the area of interest. We propose a penalized spline formulation of the model that relates to generalized kriging of the latent traffic pollution variable and leads to a natural Bayesian Markov chain Monte Carlo algorithm for model fitting. We propose methods that allow us to control the degrees of freedom of the smoother in a Bayesian framework. Finally, we present results from an analysis that applies the model to data from summer and winter separately.
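
A rough illustration of the penalized-spline machinery this abstract refers to, not the paper's model: the sketch below uses invented data, a single covariate, and a simple truncated-line basis, and picks the smoothing parameter so that the trace of the smoother matrix matches a target degrees of freedom, the frequentist analogue of the degrees-of-freedom control described above.

    import numpy as np

    def pspline_fit(x, y, knots, target_df):
        """Penalized spline with a truncated-line basis; choose lambda so that
        the trace of the smoother ("hat") matrix is close to target_df."""
        X = np.column_stack([np.ones_like(x), x] +
                            [np.maximum(x - k, 0.0) for k in knots])
        D = np.diag([0.0, 0.0] + [1.0] * len(knots))   # penalize knot terms only
        best = None
        for lam in np.logspace(-4, 6, 201):
            S = X @ np.linalg.solve(X.T @ X + lam * D, X.T)
            df = np.trace(S)                           # effective degrees of freedom
            if best is None or abs(df - target_df) < abs(best[0] - target_df):
                best = (df, lam, S @ y)
        return best                                    # (df, lambda, fitted values)

    rng = np.random.default_rng(0)
    x = np.sort(rng.uniform(0, 1, 200))
    y = np.sin(2 * np.pi * x) + rng.normal(0, 0.3, 200)
    df, lam, fitted = pspline_fit(x, y, np.linspace(0.05, 0.95, 20), target_df=6)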

2.
3.
Summary.  Motivated by the problem of predicting chemical deposition in eastern USA at weekly, seasonal and annual scales, the paper develops a framework for joint modelling of point- and grid-referenced spatiotemporal data in this context. The hierarchical model proposed can provide accurate spatial interpolation and temporal aggregation by combining information from observed point-referenced monitoring data and gridded output from a numerical simulation model known as the 'community multi-scale air quality model'. The technique avoids the change-of-support problem which arises in other hierarchical models for data fusion settings to combine point- and grid-referenced data. The hierarchical space–time model is fitted to weekly wet sulphate and nitrate deposition data over eastern USA. The model is validated with set-aside data from a number of monitoring sites. Predictive Bayesian methods are developed and illustrated for inference on aggregated summaries such as quarterly and annual sulphate and nitrate deposition maps. The highest wet sulphate deposition occurs near major emissions sources such as fossil-fuelled power plants whereas lower values occur near background monitoring sites.
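
Not the paper's hierarchical space-time model, but a minimal sketch of the fusion idea: calibrate point-referenced monitor data against the numerical-model output of the grid cell each monitor falls in, via a conjugate Bayesian linear regression. All numbers are invented and the error variance is fixed at 1 for simplicity.

    import numpy as np

    rng = np.random.default_rng(1)
    Q = rng.gamma(5.0, 2.0, 60)                  # model output at 60 monitored cells
    y = 0.5 + 0.8 * Q + rng.normal(0, 1.0, 60)   # monitors: biased output + noise

    X = np.column_stack([np.ones_like(Q), Q])    # regression y = a + b*Q + e
    V0 = 100.0 * np.eye(2)                       # vague Gaussian prior on (a, b)
    Vn = np.linalg.inv(np.linalg.inv(V0) + X.T @ X)   # posterior covariance
    mn = Vn @ (X.T @ y)                          # posterior mean (zero prior mean)

    q_star = 12.0                                # model output at an unmonitored cell
    pred = mn[0] + mn[1] * q_star                # calibrated deposition prediction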

4.
Non-parametric Bayesian Estimation of a Spatial Poisson Intensity
A method introduced by Arjas & Gasbarra (1994) and later modified by Arjas & Heikkinen (1997) for the non-parametric Bayesian estimation of an intensity on the real line is generalized to cover spatial processes. The method is based on a model approximation where the approximating intensities have the structure of a piecewise constant function. Random step functions on the plane are generated using Voronoi tessellations of random point patterns. Smoothing between nearby intensity values is applied by means of a Markov random field prior in the spirit of Bayesian image analysis. The performance of the method is illustrated in examples with both real and simulated data.
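
A minimal sketch of the piecewise-constant construction, with invented data and without the Markov random field smoothing prior or any posterior sampling: a Voronoi cell is exactly the set of locations whose nearest generating point is that cell's generator, so a k-d tree query evaluates the random step function.

    import numpy as np
    from scipy.spatial import cKDTree

    rng = np.random.default_rng(2)
    gens = rng.uniform(0, 1, (15, 2))        # random generating points, unit square
    levels = rng.gamma(2.0, 50.0, 15)        # one constant intensity per Voronoi cell
    tree = cKDTree(gens)

    def intensity(points):
        """Evaluate the piecewise-constant intensity: each point inherits the
        level of the Voronoi cell (= nearest generator) containing it."""
        _, cell = tree.query(points)
        return levels[cell]

    grid = np.stack(np.meshgrid(np.linspace(0, 1, 50),
                                np.linspace(0, 1, 50)), axis=-1).reshape(-1, 2)
    lam = intensity(grid)                    # the random step function on a grid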

5.
6.
Summary.  Short-term forecasts of air pollution levels in big cities are now reported in newspapers and other media outlets. Studies indicate that even short-term exposure to high levels of an air pollutant called atmospheric particulate matter can lead to long-term health effects. Data are typically observed at fixed monitoring stations throughout a study region of interest at different time points. Statistical spatiotemporal models are appropriate for modelling these data. We consider short-term forecasting of these spatiotemporal processes by using a Bayesian kriged Kalman filtering model. The spatial prediction surface of the model is built by using the well-known method of kriging for optimum spatial prediction and the temporal effects are analysed by using the models underlying the Kalman filtering method. The full Bayesian model is implemented by using Markov chain Monte Carlo techniques which enable us to obtain the optimal Bayesian forecasts in time and space. A new cross-validation method based on the Mahalanobis distance between the forecasts and observed data is also developed to assess the forecasting performance of the model implemented.
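
The temporal half of a kriged Kalman filter is the ordinary Kalman recursion; the sketch below shows only that recursion on invented matrices, with a comment marking where the kriging basis would enter. It is not the paper's full Bayesian MCMC implementation.

    import numpy as np

    def kalman_filter(ys, F, H, Q, R, m0, P0):
        """Kalman recursions for x_t = F x_{t-1} + w_t, y_t = H x_t + v_t.
        In a kriged Kalman filter, H holds kriging basis functions evaluated at
        the monitoring sites and x_t the time-varying field coefficients."""
        m, P, out = m0, P0, []
        for y in ys:
            m, P = F @ m, F @ P @ F.T + Q                 # predict
            K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)  # gain
            m, P = m + K @ (y - H @ m), P - K @ H @ P     # update
            out.append(m.copy())
        return np.array(out)

    rng = np.random.default_rng(3)
    F, H = 0.9 * np.eye(2), rng.normal(size=(3, 2))       # 2 coefficients, 3 sites
    ys = rng.normal(size=(50, 3))
    paths = kalman_filter(ys, F, H, 0.1 * np.eye(2), 0.5 * np.eye(3),
                          np.zeros(2), np.eye(2))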

7.
Abstract.  We propose a Bayesian semiparametric model for survival data with a cure fraction. We explicitly consider a finite cure time in the model, which allows us to separate the cured and the uncured populations. We take a mixture prior of a Markov gamma process and a point mass at zero to model the baseline hazard rate function of the entire population. We focus on estimating the cure threshold after which subjects are considered cured. We can incorporate covariates through a structure similar to the proportional hazards model and allow the cure threshold also to depend on the covariates. For illustration, we undertake simulation studies and a full Bayesian analysis of a bone marrow transplant data set.
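
A toy forward simulation of the cure-threshold idea, with invented hazards and cure probability, no covariates, and no Markov gamma process prior: failure times for the uncured come from a piecewise-constant hazard, and uncured subjects who survive past the last cut are indistinguishable from the cured, which is what makes the threshold worth estimating.

    import numpy as np

    rng = np.random.default_rng(4)

    def simulate_cure_times(n, cure_prob, cuts, hazards):
        """cuts define intervals [cuts[i], cuts[i+1]) with constant hazard
        hazards[i]; the last cut plays the role of the cure threshold."""
        times = np.full(n, np.inf)                   # inf = never fails
        for i in np.where(rng.uniform(size=n) > cure_prob)[0]:
            for lo, hi, h in zip(cuts[:-1], cuts[1:], hazards):
                e = rng.exponential(1.0 / h)         # memoryless within a piece
                if lo + e < hi:
                    times[i] = lo + e
                    break
        return times

    t = simulate_cure_times(500, 0.3, np.array([0.0, 1.0, 2.0, 3.0]),
                            [0.2, 0.5, 0.8])
    print(np.mean(~np.isfinite(t)))                  # fraction surviving past 3.0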

8.
Modelling time-varying and frequency-specific relationships between two brain signals is becoming an essential methodological tool to answer theoretical questions in experimental neuroscience. In this article, we propose to estimate a frequency Granger causality statistic that may vary in time in order to evaluate the functional connections between two brain regions during a task. For this purpose we use an adaptive Kalman filter type of estimator of a linear Gaussian vector autoregressive model with coefficients evolving over time. The estimation procedure is achieved through variational Bayesian approximation and is extended to multiple trials. This Bayesian State Space (BSS) model provides a dynamical Granger-causality statistic that is quite natural. We propose to extend the BSS model to include the à trous Haar decomposition. This wavelet-based forecasting method is based on a multiscale resolution decomposition of the signal using the redundant à trous wavelet transform and allows us to capture short- and long-range dependencies between signals. Equally importantly, it allows us to derive the desired dynamical and frequency-specific Granger-causality statistic. The application of these models to intracranial local field potential data recorded during a psychological experimental task shows the complex frequency-based cross-talk between the amygdala and the medial orbito-frontal cortex.
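
The à trous Haar step is easy to show in isolation. The sketch below implements the causal redundant Haar decomposition (the smooth at scale j averages the current value with the value 2^j steps back) on invented data; the Kalman/variational machinery and the Granger statistic are not reproduced here.

    import numpy as np

    def atrous_haar(x, n_levels):
        """Causal redundant ('a trous') Haar transform: details per scale plus
        a final smooth whose sum reconstructs x exactly."""
        c, details = x.astype(float).copy(), []
        for j in range(n_levels):
            shift = 2 ** j
            c_prev = c.copy()
            c[shift:] = 0.5 * (c_prev[shift:] + c_prev[:-shift])
            # first `shift` samples keep their value (no past available)
            details.append(c_prev - c)
        return details, c

    rng = np.random.default_rng(5)
    x = np.cumsum(rng.normal(size=256))              # toy LFP-like signal
    details, smooth = atrous_haar(x, 4)
    assert np.allclose(sum(details) + smooth, x)     # exact reconstruction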

9.
In this paper, we propose a spatial model for the initiation of cracks in the bone cement of hip replacement specimens. The failure of hip replacements can be attributed mainly to damage accumulation, consisting of crack initiation and growth, occurring in the cement mantle that interlocks the hip prosthesis and the femur bone. Since crack initiation is an important factor in determining the lifetime of a replacement, the understanding of the reasons for crack initiation is vital in attempting to prolong the life of the hip replacement. The data consist of crack location coordinates from five laboratory experimental models, together with stress measurements. It is known that stress plays a major role in the initiation of cracks, and it is also known that other unmeasurable factors such as air bubbles (pores) in the cement mantle are also influential. We propose an identity-link spatial Poisson regression model for the counts of cracks in discrete regions of the cement, incorporating both the measured (stress), and through a latent process, any unmeasured factors (possibly pores) that may be influential. All analysis is carried out in a Bayesian framework, allowing for the inclusion of prior information obtained from engineers, and parameter estimation for the model is done via Markov chain Monte Carlo techniques.
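
A bare maximum-likelihood sketch of the identity link, with invented stress values and counts and with the latent spatial process, priors and MCMC omitted: the Poisson rate is modelled as a linear, not log-linear, function of stress, so positivity has to be enforced explicitly.

    import numpy as np
    from scipy.optimize import minimize

    rng = np.random.default_rng(6)
    stress = rng.uniform(0.5, 3.0, 100)          # per-region stress measurements
    counts = rng.poisson(0.4 + 1.2 * stress)     # per-region crack counts

    def neg_loglik(beta):
        lam = beta[0] + beta[1] * stress         # identity link: additive rate
        if np.any(lam <= 0):
            return np.inf                        # rates must stay positive
        return np.sum(lam - counts * np.log(lam))

    fit = minimize(neg_loglik, x0=np.array([0.1, 1.0]), method="Nelder-Mead")
    print(fit.x)                                 # (baseline rate, stress effect)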

10.
In practice, members of a committee often make different recommendations despite a common goal and shared sources of information. We study the nonparametric identification and estimation of a structural model, where such discrepancies are rationalized by the members’ unobserved types, which consist of ideological bias in weighing different sources of information and tastes for the multiple objectives announced in the policy target. We consider models with and without strategic incentives for members to make recommendations that conform to the final committee decision. We show that pure-strategy Bayesian Nash equilibria exist in both cases, and that the variation in common information recorded in the data helps us to recover the distribution of private types from the members’ choices. Building on the identification result, we estimate a structural model of interest rate decisions by the Monetary Policy Committee (MPC) at the Bank of England. We find some evidence that the external committee members are less affected by strategic incentives for conformity in their recommendations than the internal members. We also find that the difference in ideological bias between external and internal members is statistically insignificant. Supplementary materials for this article are available online.
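
Only a toy data-generating sketch of the non-strategic version of such a model (everything below is invented and far simpler than the paper's setup): members observe a common signal but apply private ideological biases, and variation in the common signal traces out each member's cutoff, which is the flavour of variation the identification argument exploits.

    import numpy as np

    rng = np.random.default_rng(7)
    n_meetings, n_members = 200, 9
    bias = rng.normal(0.0, 0.5, n_members)       # unobserved ideological types

    signal = rng.normal(0.0, 1.0, n_meetings)    # common information each meeting
    noise = rng.normal(0.0, 0.3, (n_meetings, n_members))

    # A member recommends a hike (1) when their biased read of the signal is positive
    rec = (signal[:, None] + bias[None, :] + noise > 0).astype(int)

    hike_rates = rec.mean(axis=0)                # crude summary: biased members
    print(hike_rates)                            # hike more (or less) often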

11.
Identifiability has long been an important concept in classical statistical estimation. Historically, Bayesians have been less interested in the concept since, strictly speaking, any parameter having a proper prior distribution also has a proper posterior, and is thus estimable. However, the larger statistical community's recent move toward more Bayesian thinking is largely fueled by an interest in Markov chain Monte Carlo-based analyses using vague or even improper priors. As such, Bayesians have been forced to think more carefully about what has been learned about the parameters of interest (given the data so far), or what could possibly be learned (given an infinite amount of data). In this paper, we propose measures of Bayesian learning based on differences in precision and Kullback–Leibler divergence. After investigating them in the context of some familiar Gaussian linear hierarchical models, we consider their use in a more challenging setting involving two sets of random effects (traditional and spatially arranged), only the sum of which is identified by the data. We illustrate this latter model with an example from periodontal data analysis, where the spatial aspect arises from the proximity of various measurements taken in the mouth. Our results suggest our measures behave sensibly and may be useful in even more complicated (e.g., non-Gaussian) model settings.
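
The two proposed kinds of learning measure are easy to illustrate for a single Gaussian parameter (all numbers below are invented): the gain in precision from prior to posterior, and the Kullback-Leibler divergence of the posterior from the prior.

    import numpy as np

    def kl_gaussian(m1, s1, m0, s0):
        """KL( N(m1, s1^2) || N(m0, s0^2) ): divergence of the posterior (1)
        from the prior (0); larger values indicate more Bayesian learning."""
        return np.log(s0 / s1) + (s1**2 + (m1 - m0)**2) / (2 * s0**2) - 0.5

    # Example: prior N(0, 10^2) updated to posterior N(1.3, 0.4^2)
    learning_kl = kl_gaussian(1.3, 0.4, 0.0, 10.0)
    learning_prec = 1 / 0.4**2 - 1 / 10.0**2     # difference in precisions
    print(learning_kl, learning_prec)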

12.
This work is motivated by a quantitative Magnetic Resonance Imaging study of the differential tumor/healthy tissue change in contrast uptake induced by radiation. The goal is to determine the time in which there is maximal contrast uptake (a surrogate for permeability) in the tumor relative to healthy tissue. A notable feature of the data is its spatial heterogeneity. Zhang, Johnson, Little, and Cao (2008a and 2008b) discuss two parallel approaches to "denoise" a single image of change in contrast uptake from baseline to one follow-up visit of interest. In this work we extend the image model to explore the longitudinal profile of the tumor/healthy tissue contrast uptake in multiple images over time. We fit a two-stage model. First, we propose a longitudinal image model for each subject. This model simultaneously accounts for the spatial and temporal correlation and denoises the observed images by borrowing strength both across neighboring pixels and over time. We propose to use the Mann-Whitney U statistic to summarize the tumor contrast uptake relative to healthy tissue. In the second stage, we fit a population model to the U statistic and estimate when it achieves its maximum. Our initial findings suggest that the maximal contrast uptake of the tumor core relative to healthy tissue peaks around three weeks after initiation of radiotherapy, though this warrants further investigation.
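
A compressed sketch of the two-stage summary (all pixel data invented, the spatiotemporal denoising stage skipped): per visit, a scaled Mann-Whitney U estimates the probability that a tumor pixel exceeds a healthy pixel in contrast-uptake change, and a quadratic fit then locates the week of maximal uptake.

    import numpy as np
    from scipy.stats import mannwhitneyu

    rng = np.random.default_rng(8)
    weeks, u_scaled = np.arange(1, 9), []
    for t in weeks:
        tumor = rng.normal(1.0 - 0.15 * (t - 3) ** 2, 1.0, 300)  # peaks near wk 3
        healthy = rng.normal(0.0, 1.0, 500)
        u, _ = mannwhitneyu(tumor, healthy, alternative="two-sided")
        u_scaled.append(u / (300 * 500))        # ~ P(tumor pixel > healthy pixel)

    a, b, c = np.polyfit(weeks, u_scaled, 2)    # stage-two population model,
    peak_week = -b / (2 * a)                    # reduced here to a quadratic
    print(peak_week)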

13.
Abstract.  This work presents advanced computational aspects of a new method for changepoint detection on spatio-temporal point process data. We summarize the methodology, based on building a Bayesian hierarchical model for the data and declaring prior conjectures on the number and positions of the changepoints, and show how to take decisions regarding the acceptance of potential changepoints. The focus of this work is on choosing an approach that detects the correct changepoint and delivers smooth, reliable estimates in a feasible computational time; we propose Bayesian P-splines as a suitable tool for managing spatial variation, from both a computational and a model-fitting performance perspective. The main computational challenges are outlined and a solution involving parallel computing in R is proposed and tested in a simulation study. An application is also presented on a data set of seismic events in Italy over the last 20 years.
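
The parallelization pattern (though not the Bayesian P-spline model, and in Python rather than the paper's R) can be sketched as scoring candidate changepoints in parallel; the data and the profile-likelihood criterion below are invented stand-ins.

    import numpy as np
    from multiprocessing import Pool

    def score(args):
        """-2 log-likelihood of counts with one changepoint at tau: constant
        Poisson rate before tau and after tau (rates profiled out)."""
        counts, tau = args
        ll = 0.0
        for seg in (counts[:tau], counts[tau:]):
            lam = max(seg.mean(), 1e-9)
            ll += np.sum(seg * np.log(lam) - lam)
        return -2 * ll, tau

    if __name__ == "__main__":
        rng = np.random.default_rng(9)
        counts = np.concatenate([rng.poisson(3.0, 120), rng.poisson(7.0, 80)])
        with Pool(4) as pool:                    # score candidates in parallel
            results = pool.map(score, [(counts, t) for t in range(10, 190)])
        print(min(results)[1])                   # tau with the best score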

14.
A Bayesian discovery procedure
Summary.  We discuss a Bayesian discovery procedure for multiple-comparison problems. We show that, under a coherent decision theoretic framework, a loss function combining true positive and false positive counts leads to a decision rule that is based on a threshold of the posterior probability of the alternative. Under a semiparametric model for the data, we show that the Bayes rule can be approximated by the optimal discovery procedure, which was recently introduced by Storey. Improving the approximation leads us to a Bayesian discovery procedure, which exploits the multiple shrinkage in clusters that are implied by the assumed non-parametric model. We compare the Bayesian discovery procedure and the optimal discovery procedure estimates in a simple simulation study and in an assessment of differential gene expression based on microarray data from tumour samples. We extend the setting of the optimal discovery procedure by discussing modifications of the loss function that lead to different single-thresholding statistics. Finally, we provide an application of the previous arguments to dependent (spatial) data.
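
The thresholding logic is simple to demonstrate in a two-groups model with known mixture components (the paper estimates them semiparametrically; everything below is invented): rank tests by the posterior probability of the alternative and expand the discovery set while the posterior expected FDR stays below the target.

    import numpy as np
    from scipy.stats import norm

    rng = np.random.default_rng(10)
    truth = rng.uniform(size=2000) < 0.2                   # 20% alternatives
    z = np.where(truth, rng.normal(2, 1, 2000), rng.normal(0, 1, 2000))

    p0, p1 = 0.8, 0.2                                      # assumed known here
    f0, f1 = norm.pdf(z, 0, 1), norm.pdf(z, 2, 1)
    post_alt = p1 * f1 / (p0 * f0 + p1 * f1)               # P(alternative | z)

    order = np.argsort(-post_alt)                          # most promising first
    fdr = np.cumsum(1 - post_alt[order]) / np.arange(1, len(z) + 1)
    n_disc = (np.max(np.where(fdr <= 0.05)[0]) + 1) if np.any(fdr <= 0.05) else 0
    discoveries = order[:n_disc]                           # Bayesian FDR <= 5%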

15.
We present a new statistical framework for landmark curve-based image registration and surface reconstruction. The proposed method first elastically aligns geometric features (continuous, parameterized curves) to compute local deformations, and then uses a Gaussian random field model to estimate the full deformation vector field as a spatial stochastic process on the entire surface or image domain. The statistical estimation is performed using two different methods: maximum likelihood and Bayesian inference via Markov Chain Monte Carlo sampling. The resulting deformations accurately match corresponding curve regions while also being sufficiently smooth over the entire domain. We present several qualitative and quantitative evaluations of the proposed method on both synthetic and real data. We apply our approach to two different tasks on real data: (1) multimodal medical image registration, and (2) anatomical and pottery surface reconstruction.
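
The second stage (estimating the full deformation field from matched landmarks) reduces, in its simplest form, to Gaussian-process interpolation of displacement vectors. The sketch below uses a squared-exponential kernel applied independently to each displacement component; all positions and displacements are invented, and the elastic curve-alignment stage is omitted.

    import numpy as np

    def gp_deformation(landmarks, disp, grid, length_scale=0.2, noise=1e-6):
        """Posterior-mean Gaussian-process interpolation of landmark
        displacements to a dense deformation field, componentwise."""
        def k(a, b):
            d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
            return np.exp(-0.5 * d2 / length_scale ** 2)
        Kxx = k(landmarks, landmarks) + noise * np.eye(len(landmarks))
        return k(grid, landmarks) @ np.linalg.solve(Kxx, disp)

    rng = np.random.default_rng(11)
    src = rng.uniform(0, 1, (12, 2))             # landmark positions
    disp = 0.05 * rng.normal(size=(12, 2))       # displacements from curve matching
    grid = np.stack(np.meshgrid(np.linspace(0, 1, 40),
                                np.linspace(0, 1, 40)), -1).reshape(-1, 2)
    field = gp_deformation(src, disp, grid)      # (1600, 2) deformation vectors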

16.
This paper proposes a functional connectivity approach, inspired by the brain imaging literature, to model cross-sectional dependence. Using a varying parameter framework, the model allows correlation patterns to arise from complex economic or social relations rather than being simply functions of economic or geographic distances between locations. It nests the conventional spatial and factor model approaches as special cases. A Bayesian Markov Chain Monte Carlo method implements this approach. A small-scale Monte Carlo study is conducted to evaluate the finite-sample performance of this approach, which outperforms both a spatial model and a factor model. We apply the functional connectivity approach to estimate a hedonic housing price model for Paris using housing transactions over the period 1990–2003. It allows us to extract more information about complex spatial connections and appears more suitable for capturing the cross-sectional dependence than the conventional methods.
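
The nesting claim can be made concrete by writing down the two special-case covariance structures (sizes and parameter values invented): a spatial autoregressive covariance tied to geographic distance, and a factor covariance of the form LL' + Psi. The functional-connectivity approach instead lets the cross-sectional correlation pattern be estimated freely.

    import numpy as np

    rng = np.random.default_rng(12)
    N = 20
    coords = rng.uniform(0, 1, (N, 2))
    dist = np.linalg.norm(coords[:, None] - coords[None, :], axis=-1)

    # Spatial special case: dependence is a fixed function of distance
    W = (dist < 0.25).astype(float)
    np.fill_diagonal(W, 0)
    W /= np.maximum(W.sum(1, keepdims=True), 1.0)      # row-normalized weights
    A = np.eye(N) - 0.5 * W                            # SAR with rho = 0.5
    Sigma_spatial = np.linalg.inv(A.T @ A)

    # Factor special case: dependence through a few common factors
    L = rng.normal(size=(N, 2))
    Sigma_factor = L @ L.T + np.eye(N)                 # loadings + idiosyncratic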

17.
Summary.  We develop a Bayesian method that allows us to compare weekly depression states recalled for a 3-month period to cross-sectionally assessed measurements of current depression assessed during randomly timed phone interviews. Using these data, we examine the accuracy of recalled depression by linking a spline model for recalled depression and a logistic model for current depression. The logistic model includes the model-based probability of depression based on recall as a covariate and covariates potentially related to the accuracy of recall. The model that we propose allows variability in both measures and can be modified to examine general relationships between longitudinal and cross-sectional measurements.
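
A stripped-down, non-Bayesian analogue of the linkage (all data invented): smooth the weekly recall series with a spline, evaluate the smoothed recall probability at the random interview times, and use it as a covariate in a logistic model for current depression.

    import numpy as np
    from scipy.interpolate import UnivariateSpline
    from scipy.optimize import minimize

    rng = np.random.default_rng(13)
    weeks = np.arange(12.0)                            # 3 months of weekly recall
    recall = (rng.uniform(size=12) < 0.3).astype(float)
    p_recall = UnivariateSpline(weeks, recall, k=3, s=2.0)

    t = rng.uniform(0, 11, 30)                         # random interview times
    x = np.clip(p_recall(t), 0.01, 0.99)               # model-based recall prob.
    y = (rng.uniform(size=30) < x).astype(float)       # toy current-depression data

    def nll(beta):                                     # logistic regression of y on
        eta = beta[0] + beta[1] * x                    # the recall probability
        return np.sum(np.log1p(np.exp(eta)) - y * eta)

    fit = minimize(nll, np.zeros(2), method="BFGS")    # beta[1]: recall accuracy
    print(fit.x)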

18.
Bayesian Geostatistical Design
Abstract.  This paper describes the use of model-based geostatistics for choosing the set of sampling locations, collectively called the design, to be used in a geostatistical analysis. Two types of design situation are considered. These are retrospective design, which concerns the addition of sampling locations to, or deletion of locations from, an existing design, and prospective design, which consists of choosing positions for a new set of sampling locations. We propose a Bayesian design criterion which focuses on the goal of efficient spatial prediction whilst allowing for the fact that model parameter values are unknown. The results show that in this situation a wide range of inter-point distances should be included in the design, and the widely used regular design is often not the best choice.
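
A greedy toy version of the retrospective design problem (an exponential covariance with unit variance and a three-point "prior" on the range parameter, all invented): each step adds the candidate location minimizing the kriging variance averaged over both the prediction targets and the prior draws, which is where the allowance for unknown parameters enters.

    import numpy as np

    def avg_kriging_var(design, targets, phi):
        """Mean simple-kriging variance over targets for a unit-variance
        exponential-covariance field observed at the design points."""
        def cov(a, b):
            return np.exp(-np.linalg.norm(a[:, None] - b[None, :], axis=-1) / phi)
        K = cov(design, design) + 1e-8 * np.eye(len(design))
        k = cov(targets, design)
        return np.mean(1.0 - np.sum(k * np.linalg.solve(K, k.T).T, axis=1))

    rng = np.random.default_rng(14)
    cands = rng.uniform(0, 1, (200, 2))
    targets = rng.uniform(0, 1, (400, 2))
    phis = [0.1, 0.3, 0.9]                       # "prior draws" for the range

    chosen = [0]
    for _ in range(9):                           # grow a 10-point design greedily
        scores = [(np.mean([avg_kriging_var(cands[chosen + [j]], targets, p)
                            for p in phis]), j)
                  for j in range(len(cands)) if j not in chosen]
        chosen.append(min(scores)[1])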

19.
We develop a new methodology for determining the location and dynamics of brain activity from combined magnetoencephalography (MEG) and electroencephalography (EEG) data. The resulting inverse problem is ill-posed and is one of the most difficult problems in neuroimaging data analysis. In our development we propose a solution that combines the data from three different modalities, magnetic resonance imaging (MRI), MEG and EEG, together. We propose a new Bayesian spatial finite mixture model that builds on the mesostate-space model developed by Daunizeau & Friston (NeuroImage, 2007; 38, 67-81). Our new model incorporates two major extensions: (i) we combine EEG and MEG data together and formulate a joint model for dealing with the two modalities simultaneously; (ii) we incorporate the Potts model to represent the spatial dependence in an allocation process that partitions the cortical surface into a small number of latent states termed mesostates. The cortical surface is obtained from MRI. We formulate the new spatiotemporal model and derive an efficient procedure for simultaneous point estimation and model selection based on the iterated conditional modes algorithm combined with local polynomial smoothing. The proposed method results in a novel estimator for the number of mixture components and is able to select active brain regions, which correspond to active variables in a high-dimensional dynamic linear model. The methodology is investigated using synthetic data and simulation studies and then demonstrated on an application examining the neural response to the perception of scrambled faces. R software implementing the methodology, along with several sample datasets, is available at https://github.com/v2south/PottsMix. The Canadian Journal of Statistics 47: 688-711; 2019.
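
The ICM-with-Potts step can be isolated on a toy pixel grid (the paper works on a cortical surface with local polynomial smoothing; the per-site log-likelihoods below are invented): each site repeatedly takes the state maximizing its data term plus a reward for agreeing with its neighbours.

    import numpy as np

    def icm_potts(data_ll, beta, n_iter=10):
        """Iterated conditional modes for a Potts allocation: data_ll[i, j, k]
        is the log-likelihood of state k at site (i, j); beta rewards
        agreement with the four neighbours."""
        H, W, K = data_ll.shape
        labels = data_ll.argmax(-1)                      # start from data term
        for _ in range(n_iter):
            for i in range(H):
                for j in range(W):
                    s = data_ll[i, j].copy()
                    for di, dj in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                        ni, nj = i + di, j + dj
                        if 0 <= ni < H and 0 <= nj < W:
                            s[labels[ni, nj]] += beta
                    labels[i, j] = s.argmax()
        return labels

    rng = np.random.default_rng(15)
    ll = rng.normal(size=(30, 30, 3))                    # toy log-likelihoods
    ll[:, :15, 0] += 2.0
    ll[:, 15:, 1] += 2.0                                 # two latent regions
    states = icm_potts(ll, beta=0.8)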

20.
A large volume of CCD X-ray spectra is being generated by the Chandra X-ray Observatory (Chandra) and XMM-Newton. Automated spectral analysis and classification methods can aid in sorting, characterizing, and classifying this large volume of CCD X-ray spectra in a non-parametric fashion, complementary to current parametric model fits. We have developed an algorithm that uses multivariate statistical techniques, including an ensemble clustering method, applied for the first time for X-ray spectral classification. The algorithm uses spectral data to group similar discrete sources of X-ray emission by placing the X-ray sources in a three-dimensional spectral sequence and then grouping the ordered sources into clusters based on their spectra. This new method can handle large quantities of data and operate independently of the requirement of spectral source models and a priori knowledge concerning the nature of the sources (e.g., young stars, interacting binaries, active galactic nuclei). We apply the method to Chandra imaging spectroscopy of the young stellar clusters in the Orion Nebula Cluster and the NGC 1333 star formation region.
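
A generic ensemble-clustering sketch in the same spirit (not the authors' exact algorithm; the "spectra" are invented three-dimensional points): many k-means runs with varying k vote into a co-association matrix, which is then cut hierarchically into consensus groups.

    import numpy as np
    from scipy.cluster.hierarchy import linkage, fcluster
    from sklearn.cluster import KMeans

    rng = np.random.default_rng(16)
    X = np.concatenate([rng.normal(m, 0.5, (40, 3)) for m in (0.0, 2.0, 4.0)])

    co = np.zeros((len(X), len(X)))              # co-association matrix
    n_runs = 30
    for run in range(n_runs):
        k = int(rng.integers(2, 8))
        lab = KMeans(n_clusters=k, n_init=5, random_state=run).fit_predict(X)
        co += lab[:, None] == lab[None, :]       # vote: clustered together?
    co /= n_runs

    d = 1.0 - co[np.triu_indices(len(X), 1)]     # condensed distance vector
    groups = fcluster(linkage(d, method="average"), t=3, criterion="maxclust")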
