Similar Documents
20 similar documents found.
1.
2.
We investigate a Bayesian method for the segmentation of muscle fibre images. The images are reasonably well approximated by a Dirichlet tessellation, and so we use a deformable template model based on Voronoi polygons to represent the segmented image. We consider various prior distributions for the parameters and suggest an appropriate likelihood. Following the Bayesian paradigm, the mathematical form of the posterior distribution is obtained (up to a normalizing constant). We introduce a Metropolis-Hastings algorithm and a reversible jump Markov chain Monte Carlo (RJMCMC) algorithm for simulation from the posterior when the number of polygons is fixed or unknown, respectively. The moves in the RJMCMC algorithm are birth, death and position/colour changes of the point process which determines the locations of the polygons. Segmentation of the true image was carried out using the estimated posterior mode and posterior mean. A simulation study is presented which is helpful for tuning the hyperparameters and assessing the accuracy. The algorithms work well on a real image of a muscle fibre cross-section, and an additional parameter, which models the boundaries of the muscle fibres, is included in the final model.

3.
Model-based clustering is a method that clusters data under the assumption of a statistical model structure. In this paper, we propose a novel model-based hierarchical clustering method for a finite statistical mixture model based on the Fisher distribution. The main foci of the proposed method are to: (a) provide an efficient solution for estimating the parameters of a Fisher mixture model (FMM); (b) generate a hierarchy of FMMs; and (c) select the optimal model. To this aim, we develop a Bregman soft clustering method for the FMM. Our model estimation strategy exploits Bregman divergence and hierarchical agglomerative clustering, whereas our model selection strategy comprises a parsimony-based approach and an evaluation graph-based approach. We empirically validate the proposed method by applying it to simulated data. Next, we apply the method to real data to perform depth image analysis. We demonstrate that the proposed clustering method can be used as a potential tool for unsupervised depth image analysis.

4.
5.

The additive AR-2D model has been successfully applied to the modeling of satellite images, both optical and synthetic aperture radar. Given the errors produced when an image is captured and quantized, robust estimation of the parameters of this model is an interesting problem. Robust methods for image models are also applied in important image-processing tasks such as texture-based segmentation and image restoration in the presence of outliers. This paper is concerned with the development and performance of the robust RA estimator proposed by Ojeda (1998) for estimating the parameters of contaminated AR-2D models. We implement this estimator and show by a simulation study that it performs better than the classical least squares estimator and the robust M and GM estimators in an image model contaminated with additive outliers.
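As a rough illustration of the setting described above, the sketch below simulates a first-order quarter-plane AR-2D image, contaminates a fraction of pixels with additive outliers, and shows how the classical least squares estimates of the three coefficients degrade. It is only a minimal illustration of the contamination problem, not an implementation of the RA estimator of Ojeda (1998); the coefficients, grid size and contamination level are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_ar2d(n, phi1, phi2, phi3, sigma=1.0):
    """First-order quarter-plane AR-2D process:
    X[i,j] = phi1*X[i-1,j] + phi2*X[i,j-1] + phi3*X[i-1,j-1] + e[i,j]."""
    x = np.zeros((n, n))
    for i in range(1, n):
        for j in range(1, n):
            x[i, j] = (phi1 * x[i - 1, j] + phi2 * x[i, j - 1]
                       + phi3 * x[i - 1, j - 1] + sigma * rng.standard_normal())
    return x

def ls_estimate(z):
    """Least squares fit of the three AR-2D coefficients from an image z."""
    y = z[1:, 1:].ravel()
    X = np.column_stack([z[:-1, 1:].ravel(),    # X[i-1, j]
                         z[1:, :-1].ravel(),    # X[i, j-1]
                         z[:-1, :-1].ravel()])  # X[i-1, j-1]
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coef

x = simulate_ar2d(128, 0.4, 0.3, -0.2)
print("clean image, LS estimates:", np.round(ls_estimate(x), 3))

# additive-outlier contamination: add a large spike to 5% of the pixels
z = x.copy()
mask = rng.random(x.shape) < 0.05
z[mask] += 10.0
print("contaminated image, LS estimates:", np.round(ls_estimate(z), 3))
```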

6.
This paper addresses the image modeling problem under the assumption that images can be represented by third-order hidden Markov mesh random field models. The range of applications of the techniques described hereafter comprises the restoration of binary images, the modeling and compression of image data, as well as the segmentation of gray-level or multi-spectral images, and image sequences under the short-range motion hypothesis. We outline coherent approaches to both the problem of image modeling (pixel labeling) and that of estimating the model parameters (learning). We derive a real-time labeling algorithm, based on a maximum marginal a posteriori probability criterion, for a hidden third-order Markov mesh random field model. Our algorithm achieves minimum time and space complexities simultaneously, and we describe what we believe to be the most appropriate data structures to implement it. Critical aspects of the computer simulation of a real-time implementation are discussed, down to the computer code level. We develop an (unsupervised) learning technique by which the model parameters can be estimated without ground truth information. We lay bare the conditions under which our approach can be made time-adaptive in order to cope with short-range motion in dynamic image sequences. We present extensive experimental results for both static and dynamic images from a wide variety of sources. They comprise standard, infra-red and aerial images, as well as a sequence of ultrasound images of a fetus and a series of frames from a motion picture sequence. These experiments demonstrate that the method is subjectively relevant to the problems of image restoration, segmentation and modeling.

7.
SiZer (SIgnificant ZERo crossing of the derivatives) is a graphical scale-space visualization tool that allows for statistical inferences. In this paper we develop a spatial SiZer for finding significant features and conducting goodness-of-fit tests for spatially dependent images. The spatial SiZer utilizes a family of kernel estimates of the image and provides not only exploratory data analysis but also statistical inference with spatial correlation taken into account. It is also capable of comparing the observed image with a specific null model being tested by adjusting the statistical inference using an assumed covariance structure. Pixel locations having statistically significant differences between the image and a given null model are highlighted by arrows. The spatial SiZer is compared with the existing independent SiZer via the analysis of simulated data with and without signal on both planar and spherical domains. We apply the spatial SiZer method to the decadal temperature change over some regions of the Earth.
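The following minimal sketch conveys the SiZer idea in one dimension with independent noise (the paper's contribution is the extension to spatially correlated images, which this sketch does not attempt): at each location and bandwidth, a local-linear slope estimate is compared with roughly twice its standard error, and the resulting sign map over bandwidths forms a SiZer-style plot. The data and bandwidths are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# toy data: a bump plus independent noise
n = 300
x = np.linspace(0, 1, n)
y = np.exp(-0.5 * ((x - 0.5) / 0.08) ** 2) + 0.2 * rng.standard_normal(n)

sigma2 = np.var(np.diff(y)) / 2.0          # crude i.i.d. noise-variance estimate

def sizer_cell(x, y, x0, h, sigma2):
    """Local-linear slope estimate and a rough standard error at x0."""
    w = np.exp(-0.5 * ((x - x0) / h) ** 2)   # Gaussian kernel weights
    X = np.column_stack([np.ones_like(x), x - x0])
    XtW = X.T * w
    A = np.linalg.inv(XtW @ X)
    beta = A @ (XtW @ y)
    cov = A @ (X.T * w**2) @ X @ A * sigma2  # sandwich variance, i.i.d. errors
    return beta[1], np.sqrt(cov[1, 1])

grid = np.linspace(0.05, 0.95, 60)
for h in (0.02, 0.05, 0.15):               # one SiZer row per bandwidth
    row = []
    for x0 in grid:
        slope, se = sizer_cell(x, y, x0, h, sigma2)
        row.append("+" if slope > 2 * se else "-" if slope < -2 * se else ".")
    print(f"h={h:4.2f} ", "".join(row))    # +: significant increase, -: decrease
```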

8.
Virtual observatories give us access to huge amounts of image data that are often redundant. Our goal is to take advantage of this redundancy by combining images of the same field of view into a single model. To achieve this goal, we propose to develop a multi-source data fusion method that relies on probability and band-limited signal theory. The target object is an image to be inferred from a number of blurred and noisy sources, possibly from different sensors under various conditions (e.g. resolution, shift, orientation, blur, noise...). We aim at the recovery of a compound model “image + uncertainties” that best relates to the observations and contains a maximum of useful information from the initial data set. Thus, in some cases, spatial super-resolution may be required in order to preserve the information. We propose to use a Bayesian inference scheme to invert a forward model, which describes the image formation process for each observation and takes into account some a priori knowledge (e.g. stars as point sources). This involves both automatic registration and spatial resampling, which are ill-posed inverse problems that are addressed within a rigorous Bayesian framework. The originality of the work is in devising a new technique of multi-image data fusion that provides us with super-resolution, self-calibration and possibly model selection capabilities. This approach should outperform existing methods such as resample-and-add or drizzling, since it can handle different instrument characteristics for each input image and compute uncertainty estimates as well. Moreover, it is designed to also work in a recursive way, so that the model can be updated when new data become available.

9.
National image is a composite reflection of a country's soft and hard power and has an important influence on how a country performs on the international stage. In practice, many factors affect national image. Starting from the concept and data of international competitiveness, this paper carries out an empirical analysis of the factors influencing the national image of the world's major countries and regions. Using data on national-image indicators from the international competitiveness database of the International Institute for Management Development (IMD) in Lausanne, we apply quantile regression to reveal the objective factors behind different levels of national image and compare the results with those of ordinary least squares regression, thereby obtaining more comprehensive information and providing an empirical basis for improving China's national image.
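As a hedged illustration of the methodology (quantile regression contrasted with ordinary least squares), the sketch below fits both to simulated heteroscedastic data using statsmodels; the variables are stand-ins and not the IMD national-image indicators used in the study.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(6)

# simulated stand-in data: one competitiveness indicator x and a response y
# whose dispersion grows with x, so quantile slopes differ across quantiles
n = 200
x = rng.uniform(0, 10, n)
y = 2.0 + 0.5 * x + (0.2 + 0.1 * x) * rng.standard_normal(n)
df = pd.DataFrame({"x": x, "y": y})

ols = smf.ols("y ~ x", df).fit()
print("OLS slope:", round(ols.params["x"], 3))

# quantile regression gives a separate slope at each conditional quantile,
# revealing how the effect differs across low, median and high responses
for q in (0.1, 0.5, 0.9):
    qr = smf.quantreg("y ~ x", df).fit(q=q)
    print(f"quantile {q}: slope = {qr.params['x']:.3f}")
```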

10.
It is now possible to carry out Bayesian image segmentation from a continuum parametric model with an unknown number of regions. However, few suitable parametric models exist. We set out to model processes whose realizations are naturally described by coloured planar triangulations. Triangulations are already used to represent image structure in machine vision and, in finite element analysis, for domain decomposition. However, no normalizable parametric model with realizations that are coloured triangulations has been specified to date. We show how this must be done, and in particular we prove that a normalizable measure on the space of triangulations of the interior of a fixed simple polygon derives from a Poisson point process of vertices. We show how such models may be analysed using Markov chain Monte Carlo methods and we present two case-studies, including convergence analysis.
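A minimal sketch of the generative ingredient highlighted above: a Poisson number of vertices placed uniformly in a fixed simple polygon (here simply the unit square), from which one triangulation of the interior is built and coloured at random. It uses SciPy's Delaunay triangulation for convenience; the paper's model is defined on the space of all triangulations, and the intensity and colour set below are assumed values.

```python
import numpy as np
from scipy.spatial import Delaunay

rng = np.random.default_rng(5)

lam = 40.0                                 # Poisson intensity (assumed value)
n = rng.poisson(lam)                       # Poisson number of vertices
pts = rng.random((n, 2))                   # uniform vertex locations in the square

tri = Delaunay(pts)                        # one triangulation of the interior
colours = rng.integers(0, 3, len(tri.simplices))  # random colouring of triangles
print(f"{n} vertices, {len(tri.simplices)} coloured triangles")
```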

11.
Markov Random Fields with Higher-order Interactions
Discrete-state Markov random fields on regular arrays have played a significant role in spatial statistics and image analysis. For example, they are used to represent objects against background in computer vision and pixel-based classification of a region into different crop types in remote sensing. Convenience has generally favoured formulations that involve only pairwise interactions. Such models are in themselves unrealistic and, although they often perform surprisingly well in tasks such as the restoration of degraded images, they are unsatisfactory for many other purposes. In this paper, we consider particular forms of Markov random fields that involve higher-order interactions and therefore are better able to represent the large-scale properties of typical spatial scenes. Interpretations of the parameters are given and realizations from a variety of models are produced via Markov chain Monte Carlo. Potential applications are illustrated in two examples. The first concerns Bayesian image analysis and confirms that pairwise-interaction priors may perform very poorly for image functionals such as number of objects, even when restoration apparently works well. The second example describes a model for a geological dataset and obtains maximum-likelihood parameter estimates using Markov chain Monte Carlo. Despite the complexity of the formulation, realizations of the estimated model suggest that the representation is quite realistic.
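For orientation, the sketch below runs a Gibbs sampler for the simplest pairwise-interaction binary Markov random field (the Ising model), which is the baseline the paper argues is often inadequate; higher-order interactions would enter the same full conditional as additional clique contributions. Grid size, interaction strength and number of sweeps are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(7)

# Gibbs sampler for a pairwise-interaction binary MRF (Ising model) on a torus
n, beta, sweeps = 64, 0.45, 100
x = rng.choice([-1, 1], size=(n, n))

for _ in range(sweeps):
    for i in range(n):
        for j in range(n):
            # sum of the four nearest-neighbour spins (periodic boundary)
            s = (x[(i - 1) % n, j] + x[(i + 1) % n, j]
                 + x[i, (j - 1) % n] + x[i, (j + 1) % n])
            # full conditional: P(x_ij = +1 | neighbours) for the Ising model
            p_plus = 1.0 / (1.0 + np.exp(-2.0 * beta * s))
            x[i, j] = 1 if rng.random() < p_plus else -1

print("fraction of +1 pixels after sampling:", (x == 1).mean())
```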

12.
This work is motivated by a quantitative Magnetic Resonance Imaging study of the differential tumor/healthy tissue change in contrast uptake induced by radiation. The goal is to determine the time in which there is maximal contrast uptake (a surrogate for permeability) in the tumor relative to healthy tissue. A notable feature of the data is its spatial heterogeneity. Zhang, Johnson, Little, and Cao (2008a and 2008b) discuss two parallel approaches to "denoise" a single image of change in contrast uptake from baseline to one follow-up visit of interest. In this work we extend the image model to explore the longitudinal profile of the tumor/healthy tissue contrast uptake in multiple images over time. We fit a two-stage model. First, we propose a longitudinal image model for each subject. This model simultaneously accounts for the spatial and temporal correlation and denoises the observed images by borrowing strength both across neighboring pixels and over time. We propose to use the Mann-Whitney U statistic to summarize the tumor contrast uptake relative to healthy tissue. In the second stage, we fit a population model to the U statistic and estimate when it achieves its maximum. Our initial findings suggest that the maximal contrast uptake of the tumor core relative to healthy tissue peaks around three weeks after initiation of radiotherapy, though this warrants further investigation.
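The per-visit summary mentioned above, the Mann-Whitney U statistic comparing tumor against healthy-tissue uptake, can be computed as in the minimal sketch below; the pixel values are simulated stand-ins, not the study's denoised images.

```python
import numpy as np

rng = np.random.default_rng(2)

# hypothetical denoised contrast-uptake values for one visit: tumor-core
# pixels versus healthy-tissue pixels
tumor = rng.normal(1.2, 0.5, size=400)
healthy = rng.normal(0.8, 0.5, size=900)

def mann_whitney_u(a, b):
    """U statistic of 'a versus b' and its normalized version in [0, 1]:
    P(A > B) + 0.5*P(A = B), estimated over all pixel pairs."""
    diff = a[:, None] - b[None, :]
    u = np.sum(diff > 0) + 0.5 * np.sum(diff == 0)
    return u, u / (len(a) * len(b))

u, u_norm = mann_whitney_u(tumor, healthy)
print(f"U = {u:.0f}, normalized U = {u_norm:.3f}")
# the normalized U is the per-visit summary; tracking it over visits and
# fitting a population curve gives the time of maximal relative uptake
```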

13.
A Bayesian method for segmenting weed and crop textures is described and implemented. The work forms part of a project to identify weeds and crops in images so that selective crop spraying can be carried out. An image is subdivided into blocks and each block is modelled as a single texture. The number of different textures in the image is assumed unknown. A hierarchical Bayesian procedure is used where the texture labels have a Potts model (colour Ising Markov random field) prior and the pixels within a block are distributed according to a Gaussian Markov random field, with the parameters dependent on the type of texture. We simulate from the posterior distribution by using a reversible jump Metropolis–Hastings algorithm, where the number of different texture components is allowed to vary. The methodology is applied to a simulated image and then we carry out texture segmentation on the weed and crop images that motivated the work.

14.
In assessing the area under the ROC curve for the accuracy of a diagnostic test, it is imperative to detect and locate multiple abnormalities per image. The approach described here takes this into account by adopting a statistical model that allows for correlation between the reader scores of several regions of interest (ROIs).

The ROI method of partitioning the image is adopted: the readers assign a score to each ROI in the image, and the statistical model accounts for the correlation between the scores of the ROIs of an image when estimating test accuracy. The test accuracy is given by Pr[Y > Z] + (1/2)Pr[Y = Z], where Y is an ordinal diagnostic measurement of an affected ROI and Z is the diagnostic measurement of an unaffected ROI. This way of measuring test accuracy is equivalent to the area under the ROC curve. The model parameters are those of a multinomial distribution, and a Bayesian method of inference based on this distribution is adopted for estimating the test accuracy (see the sketch following this abstract).

Using a multinomial model for the test results, a Bayesian method based on the predictive distribution of future diagnostic scores is employed to find the test accuracy. By resampling from the posterior distribution of the model parameters, samples from the posterior distribution of test accuracy are also generated. Using these samples, the posterior mean, standard deviation, and credible intervals are calculated in order to estimate the area under the ROC curve. This approach is illustrated by estimating the area under the ROC curve for a study of the diagnostic accuracy of magnetic resonance angiography for diagnosis of arterial atherosclerotic stenosis. A generalization to multiple readers and/or modalities is proposed.

A Bayesian way of estimating test accuracy is easy to carry out with standard software packages and has the advantage of allowing the efficient inclusion of information from prior related imaging studies.
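The sketch referenced in the abstract above illustrates the core calculation under a simplifying independence assumption (the paper additionally models correlation between ROI scores within an image): ordinal score counts for affected and unaffected ROIs, a Dirichlet posterior under a uniform prior, and posterior resampling of Pr[Y > Z] + (1/2)Pr[Y = Z]. The counts below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(3)

# hypothetical ordinal reader scores (1-5) for affected (Y) and unaffected (Z)
# ROIs, pooled across images
counts_y = np.array([5, 10, 20, 40, 60])   # affected ROIs scored 1..5
counts_z = np.array([50, 35, 25, 15, 10])  # unaffected ROIs scored 1..5

def accuracy(p_y, p_z):
    """Pr[Y > Z] + 0.5*Pr[Y = Z] for independent ordinal Y and Z."""
    joint = np.outer(p_y, p_z)              # joint[i, j] = Pr[Y=i+1, Z=j+1]
    return np.tril(joint, -1).sum() + 0.5 * np.trace(joint)

# Dirichlet(1,...,1) prior + multinomial likelihood -> Dirichlet posterior;
# resample the posterior of the accuracy by drawing the two probability vectors
draws = np.array([
    accuracy(rng.dirichlet(counts_y + 1), rng.dirichlet(counts_z + 1))
    for _ in range(5000)
])
lo, hi = np.percentile(draws, [2.5, 97.5])
print(f"posterior mean = {draws.mean():.3f}, sd = {draws.std():.3f}, "
      f"95% credible interval = ({lo:.3f}, {hi:.3f})")
```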

15.
We present a new statistical framework for landmark curve-based image registration and surface reconstruction. The proposed method first elastically aligns geometric features (continuous, parameterized curves) to compute local deformations, and then uses a Gaussian random field model to estimate the full deformation vector field as a spatial stochastic process on the entire surface or image domain. The statistical estimation is performed using two different methods: maximum likelihood and Bayesian inference via Markov chain Monte Carlo sampling. The resulting deformations accurately match corresponding curve regions while also being sufficiently smooth over the entire domain. We present several qualitative and quantitative evaluations of the proposed method on both synthetic and real data. We apply our approach to two different tasks on real data: (1) multimodal medical image registration, and (2) anatomical and pottery surface reconstruction.
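A minimal sketch of the second stage described above, under strong simplifications: given a few control points with known displacements (standing in for the output of the elastic curve alignment), a Gaussian-random-field (simple kriging) posterior mean with a fixed squared-exponential covariance interpolates the deformation field over the domain. The kernel parameters and control points are assumptions; the paper estimates such quantities by maximum likelihood or MCMC.

```python
import numpy as np

rng = np.random.default_rng(8)

# hypothetical landmark correspondences: control points with known 2-D displacements
ctrl = rng.uniform(0, 1, size=(8, 2))               # control-point locations
disp = 0.05 * rng.standard_normal((8, 2))           # their displacement vectors

def rbf_kernel(a, b, length=0.2, var=1.0):
    """Squared-exponential covariance between two sets of 2-D locations."""
    d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
    return var * np.exp(-0.5 * d2 / length**2)

# Gaussian-random-field interpolation: posterior mean K_* K^{-1} y, applied
# separately to each displacement coordinate over a regular grid
K = rbf_kernel(ctrl, ctrl) + 1e-6 * np.eye(len(ctrl))   # jitter for stability
gx, gy = np.meshgrid(np.linspace(0, 1, 20), np.linspace(0, 1, 20))
grid = np.column_stack([gx.ravel(), gy.ravel()])
weights = rbf_kernel(grid, ctrl) @ np.linalg.inv(K)      # 400 x 8
field = weights @ disp                                   # 400 x 2 deformation field
print("interpolated deformation field shape:", field.shape)
```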

16.
In this work we study the asymptotic behavior of a robust class of estimators of the coefficients of an AR-2D process. We establish precise conditions for the consistency and asymptotic normality of the RA estimator. The AR-2D model has many applications in image modeling and statistical image processing, hence the relevance of knowing such properties. The adequacy of the AR-2D model is analyzed with real images; we also show the impact of contamination and the ability of the RA estimator to produce useful results even in the presence of spurious data.

17.
With the rapid development of China's retail industry, developing private-label (own-brand) products has become an important way for retailers to gain competitive advantage. Building on a review of related research at home and abroad, this study examines the influence of seven dimensions (perceived value, perceived quality, perceived price, perceived risk, store image, brand image and brand awareness) on consumers' purchase intention toward, and loyalty to, private labels. The model is validated through an empirical analysis of questionnaire survey data from retail enterprises in Shandong Province, providing a marketing-strategy basis and reference for local retailers developing and promoting private labels.

18.
周巍 et al. 《统计研究》 2015, 32(7): 81-86
Remote-sensing imagery is a form of big data. Estimating crop sown area from remote sensing commonly uses regression or calibration estimators, which usually require combining ground sample data with remote-sensing classification information. For most regression estimators, however, crop-area estimates for a provincial population can only meet the precision requirements at the provincial level and cannot be disaggregated to smaller areas such as counties and townships. Using ground-survey sample data for Heilongjiang Province in 2011 combined with remote-sensing classification results, this paper builds a unit-level small-area model in the form of a multivariate regression with multiple response variables, with the small-area effects specified as fixed. On the basis of this regression estimation approach, the sown area of the main crops can be estimated by county, and the county estimates sum to the provincial total implied by the regression model. Precision assessment (coefficient of variation, C.V.) of the county-level small-area estimates for maize, rice and soybean in Heilongjiang Province shows that, on average, they meet the county-level precision requirements. The results indicate that small-area estimation is well suited to multi-level estimation of crop sown area for both the province as a whole and its counties.

19.
A penalized likelihood approach to the estimation of calibration factors in positron emission tomography (PET) is considered, in particular the problem of estimating the efficiency of PET detectors. Varying efficiencies among the detectors create a non-uniform performance and failure to account for the non-uniformities would lead to streaks in the image, so efficient estimation of the non-uniformities is desirable to reduce the propagation of noise to the final image. The relevant data set is provided by a blank scan, where a model may be derived that depends only on the sources affecting non-uniformities: inherent variation among the detector crystals and geometric effects. Physical considerations suggest a novel mixed inverse model with random crystal effects and smooth geometric effects. Using appropriate penalty terms, the penalized maximum likelihood estimates are derived and an efficient computational algorithm utilizing the fast Fourier transform is developed. Data-driven shrinkage and smoothing parameters are chosen to minimize an estimate of the predictive loss function. Various examples indicate that the approach proposed works well computationally and compares well with the standard method.

20.
This paper describes a technique for building compact models of the shape and appearance of flexible objects seen in two-dimensional images. The models are derived from the statistics of sets of images of example objects with ‘landmark’ points labelled on each object. Each model consists of a flexible shape template, describing how the landmark points can vary, and a statistical model of the expected grey levels in regions around each point. Such models have proved useful in a wide variety of applications. We describe how the models can be used in local image search and give examples of their application.
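The flexible shape template component described above can be sketched as a point distribution model: landmark configurations are crudely aligned, and principal component analysis of the landmark coordinates yields a mean shape plus modes of variation from which new plausible shapes are generated. The training shapes below are synthetic stand-ins, and the alignment omits the full Procrustes step used in practice.

```python
import numpy as np

rng = np.random.default_rng(4)

# hypothetical training set: 50 examples of 12 labelled landmark points,
# generated as noisy copies of a base shape
t = np.linspace(0, 2 * np.pi, 12, endpoint=False)
base = np.column_stack([np.cos(t), 0.6 * np.sin(t)])          # 12 x 2 landmarks
shapes = base + 0.05 * rng.standard_normal((50, 12, 2))

# crude alignment: remove translation and scale (a full method uses Procrustes)
shapes = shapes - shapes.mean(axis=1, keepdims=True)
shapes = shapes / np.linalg.norm(shapes, axis=(1, 2), keepdims=True)

# point distribution model: mean shape + principal modes of landmark variation
X = shapes.reshape(len(shapes), -1)          # each row: flattened (x, y) pairs
mean_shape = X.mean(axis=0)
U, s, Vt = np.linalg.svd(X - mean_shape, full_matrices=False)
var = s**2 / (len(X) - 1)
k = np.searchsorted(np.cumsum(var) / var.sum(), 0.95) + 1   # keep 95% variance
print(f"{k} modes explain 95% of landmark variability")

# new shapes are generated as mean_shape + P @ b with bounded coefficients b
P = Vt[:k].T
b = rng.uniform(-1, 1, k) * 3 * np.sqrt(var[:k])
new_shape = (mean_shape + P @ b).reshape(12, 2)
print("sampled shape, first landmarks:", np.round(new_shape[:3], 3))
```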
