Similar literature
1.
A warping is a function that deforms images by mapping between image domains. The choice of function is formulated statistically as maximum penalized likelihood, where the likelihood measures the similarity between images after warping and the penalty is a measure of distortion of a warping. The paper addresses two issues simultaneously: how to choose the warping function and how to assess the alignment. A new Fourier–von Mises image model is identified, with phase differences between Fourier-transformed images having von Mises distributions. Also, new, null set distortion criteria are proposed, with each criterion uniquely minimized by a particular set of polynomial functions. A conjugate gradient algorithm is used to estimate the warping function, which is numerically approximated by a piecewise bilinear function. The method is motivated by, and used to solve, three applied problems: to register a remotely sensed image with a map, to align microscope images obtained by using different optics and to discriminate between species of fish from photographic images.
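As a schematic summary of the criterion described above (the notation is generic rather than the paper's own: I_1 and I_2 are the two images, w the warping, lambda a penalty weight), the warping is chosen by maximum penalized likelihood, and the von Mises density used to model the Fourier phase differences is the standard one:

\hat{w} = \arg\max_{w} \Big\{ \ell\big(I_1,\, I_2 \circ w\big) - \lambda\, D(w) \Big\},
\qquad
f(\theta \mid \mu, \kappa) = \frac{\exp\{\kappa \cos(\theta - \mu)\}}{2\pi I_0(\kappa)},

where \ell is the similarity (likelihood) term, D the distortion penalty and I_0 the modified Bessel function of order zero.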

2.
In this paper we use a penalized likelihood approach to image warping in the context of discrimination and averaging. The choice of average image is formulated statistically by minimizing a penalized likelihood, where the likelihood measures the similarity between images after warping and the penalty is a measure of distortion of a warping. The measures of similarity are defined in terms of normalized image information and the measures of distortion are landmark based; thus we use a combination of landmark and normalized image information. The average defined in the paper is also extended by allowing random perturbation of the landmarks. This strategy improves averages for discrimination purposes. We give real applications from medical and biological areas.

3.
This paper addresses the image modeling problem under the assumption that images can be represented by third-order, hidden Markov mesh random field models. The range of applications of the techniques described hereafter comprises the restoration of binary images, the modeling and compression of image data, as well as the segmentation of gray-level or multi-spectral images, and image sequences under the short-range motion hypothesis. We outline coherent approaches to both the problems of image modeling (pixel labeling) and estimation of model parameters (learning). We derive a real-time labeling algorithm, based on a maximum marginal a posteriori probability criterion, for a hidden third-order Markov mesh random field model. Our algorithm achieves minimum time and space complexities simultaneously, and we describe what we believe to be the most appropriate data structures to implement it. Critical aspects of the computer simulation of a real-time implementation are discussed, down to the computer code level. We develop an (unsupervised) learning technique by which the model parameters can be estimated without ground truth information. We lay bare the conditions under which our approach can be made time-adaptive in order to be able to cope with short-range motion in dynamic image sequences. We present extensive experimental results for both static and dynamic images from a wide variety of sources. They comprise standard, infra-red and aerial images, as well as a sequence of ultrasound images of a fetus and a series of frames from a motion picture sequence. These experiments demonstrate that the method is subjectively relevant to the problems of image restoration, segmentation and modeling.

4.
We consider the problem of binary-image restoration. The image being restored is not random, and we make no assumption about the nature of its contents. The estimate of the colour at each site is a fixed (the same for all sites) function of the data available in a neighbourhood of that site. Under this restriction, the estimate minimizing the overall mean squared error of prediction is the conditional expectation of the true colour given the observations in the neighbourhood of a site. The computation of this conditional expectation leads to the formal definition of the local characteristics of an image, namely, the frequency with which each pattern appears in the true unobserved image. When the “true” distribution of the patterns is unknown, it can be estimated from the records. The conditional expectation described above can then be evaluated using the estimated distribution of the patterns, and this procedure leads to a very natural estimate of the colour at each site. We propose two unbiased and consistent estimates for the distribution of patterns when the noise is a Gaussian white noise. Since the size of realistic images is very large, the estimated pattern distribution is usually close to the true one. This suggests that the estimated conditional expectation can be expected to be nearly optimal. An interesting feature of the proposed restoration methods is that they do not require prior knowledge of the local or global properties of the true underlying image. Several examples based on synthetic images show that the new methods perform fairly well for a variety of images with different degrees of colour continuity or textures.
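A minimal sketch of the restoration rule described above, assuming Gaussian white noise and a 3x3 neighbourhood, and assuming the pattern distribution pi has already been estimated (the paper's unbiased estimators of pi are not reproduced; the function name restore_site is illustrative only):

import numpy as np
from itertools import product

def restore_site(y_patch, pi, sigma):
    """Conditional expectation of the centre colour of a 3x3 binary patch.

    y_patch : (3, 3) noisy observations (true binary image plus Gaussian noise).
    pi      : dict mapping each binary 3x3 pattern (a flat tuple of 0/1) to its
              estimated relative frequency in the true image.
    sigma   : standard deviation of the Gaussian white noise.
    """
    y = y_patch.ravel()
    num, den = 0.0, 0.0
    for pattern, freq in pi.items():
        x = np.asarray(pattern, dtype=float)
        # Gaussian likelihood of the observed patch given this binary pattern.
        lik = np.exp(-np.sum((y - x) ** 2) / (2.0 * sigma ** 2))
        w = freq * lik
        den += w
        num += w * x[4]          # x[4] is the centre pixel of the 3x3 patch
    return num / den if den > 0 else 0.5

# Toy usage with a uniform pattern distribution (in practice pi is estimated from the records).
pi = {p: 1.0 / 512 for p in product((0, 1), repeat=9)}
print(restore_site(np.random.rand(3, 3), pi, sigma=0.3))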

5.
Image warping is the process of deforming an image through a transformation of its domain, which is typically a subset of R^2. Given the destination of a collection of points, the problem becomes one of finding a suitable smooth interpolation for the destinations of the remaining points of the domain. A common solution is to use the thin plate spline (TPS). We find that the TPS often introduces unintended distortions of image structures. In this paper, we will analyze interpolation by TPS, experiment with other radial basis functions, and suggest two alternative functions that provide better results.
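A minimal TPS-style warping sketch using SciPy's RBFInterpolator; this is generic code rather than the authors' implementation, and other radial basis functions can be substituted through the kernel argument (scale-dependent kernels such as 'gaussian' additionally need an epsilon parameter):

import numpy as np
from scipy.interpolate import RBFInterpolator
from scipy.ndimage import map_coordinates

def warp_image(image, src_pts, dst_pts, kernel="thin_plate_spline"):
    """Warp `image` so that the pixels at dst_pts take their values from src_pts.

    src_pts, dst_pts : (P, 2) arrays of (row, col) landmark coordinates.
    """
    # Interpolate the inverse map: for each destination pixel, where to sample from.
    rbf = RBFInterpolator(dst_pts, src_pts, kernel=kernel)
    rows, cols = np.mgrid[0:image.shape[0], 0:image.shape[1]]
    grid = np.column_stack([rows.ravel(), cols.ravel()]).astype(float)
    sample_at = rbf(grid)                               # (H*W, 2) source coordinates
    warped = map_coordinates(image, sample_at.T, order=1, mode="nearest")
    return warped.reshape(image.shape)

# Toy usage with three landmark pairs on a random image.
img = np.random.rand(64, 64)
src = np.array([[10.0, 10.0], [30.0, 50.0], [55.0, 20.0]])
dst = src + np.array([[2.0, -3.0], [-4.0, 1.0], [3.0, 3.0]])
out = warp_image(img, src, dst)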

6.
7.
This article introduces a non-parametric warping model for functional data. When the outcome of an experiment is a sample of curves, data can be seen as realizations of a stochastic process, which takes into account the variations between the different observed curves. The aim of this work is to define a mean pattern which represents the main behaviour of the set of all the realizations. So, we define the structural expectation of the underlying stochastic function. Then, we provide empirical estimators of this structural expectation and of each individual warping function. Consistency and asymptotic normality for such estimators are proved.

8.
In a range of imaging problems, particularly those where the images are of man-made objects, edges join at points which comprise three or more distinct boundaries between textures. In such cases the set of edges in the plane forms what a mathematician would call a planar graph. Smooth edges in the graph meet one another at junctions, called 'vertices', the 'degrees' of which denote the respective numbers of edges that join there. Conventional image reconstruction methods do not always draw clear distinctions among different degrees of junction, however. In such cases the algorithm is, in a sense, too locally adaptive; it inserts junctions without checking more globally to determine whether another configuration might be more suitable. In this paper we suggest an alternative approach to edge reconstruction, which combines a junction classification step with an edge-tracking routine. The algorithm still makes its decisions locally, so that the method retains an adaptive character. However, the fact that it focuses specifically on estimating the degree of a junction means that it is relatively unlikely to insert multiple low-degree junctions when evidence in the data supports the existence of a single high-degree junction. Numerical and theoretical properties of the method are explored, and theoretical optimality is discussed. The technique is based on local least-squares, or local likelihood in the case of Gaussian data. This feature, and the fact that the algorithm takes a tracking approach which does not require analysis of the full spatial data set, mean that it is relatively simple to implement.

9.
We consider the detection of land cover changes using pairs of Landsat ETM+ satellite images. The images consist of eight spectral bands and, to simplify the multidimensional change detection task, the image pair is first transformed to a one-dimensional image. When the transformation is non-linear, the true change in the images may be masked by complex noise. For example, when changes in the Normalized Difference Vegetation Index are considered, the variance of noise may not be constant over the image and methods based on image thresholding can be ineffective. To facilitate detection of change in such situations, we propose an approach that uses Bayesian statistical modeling and simulation-based inference. In order to detect both large and small scale changes, our method uses a scale space approach that employs multi-level smoothing. We demonstrate the technique using artificial test images and two pairs of real Landsat ETM+ satellite images.
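The transformation to a one-dimensional image and the multi-level smoothing step can be sketched as follows (the Bayesian model and simulation-based inference are not reproduced; the band arrays nir and red and the choice of scales are illustrative):

import numpy as np
from scipy.ndimage import gaussian_filter

def ndvi(nir, red, eps=1e-9):
    """Normalized Difference Vegetation Index from the near-infrared and red bands."""
    return (nir - red) / (nir + red + eps)

def multiscale_ndvi_difference(nir1, red1, nir2, red2, sigmas=(1, 2, 4, 8)):
    """One-dimensional change image (NDVI difference) smoothed at several scales,
    so that both large- and small-scale changes can be examined."""
    diff = ndvi(nir2, red2) - ndvi(nir1, red1)
    return [gaussian_filter(diff, sigma=s) for s in sigmas]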

10.
Dynamic programming (DP) is a fast, elegant method for solving many one-dimensional optimisation problems but, unfortunately, most problems in image analysis, such as restoration and warping, are two-dimensional. We consider three generalisations of DP. The first is iterated dynamic programming (IDP), where DP is used to recursively solve each of a sequence of one-dimensional problems in turn, to find a local optimum. A second algorithm is an empirical, stochastic optimiser, which is implemented by adding progressively less noise to IDP. The final approach replaces DP by a more computationally intensive Forward-Backward Gibbs Sampler, and uses a simulated annealing cooling schedule. Results are compared with existing pixel-by-pixel methods of iterated conditional modes (ICM) and simulated annealing in two applications: to restore a synthetic aperture radar (SAR) image, and to warp a pulsed-field electrophoresis gel into alignment with a reference image. We find that IDP and its stochastic variant outperform the remaining algorithms.
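A toy sketch of the iterated-dynamic-programming idea for a simple Potts-style restoration model; it is a loose illustration rather than the authors' implementation and omits the stochastic and Gibbs-sampler variants:

import numpy as np

def dp_line(unary, beta):
    """Exact 1-D optimisation by dynamic programming (Viterbi).

    unary : (n, K) cost of assigning each of K labels to each of n sites;
    a Potts penalty beta is added for unequal neighbouring labels.
    """
    n, K = unary.shape
    trans = beta * (1.0 - np.eye(K))
    cost = unary[0].copy()
    back = np.zeros((n, K), dtype=int)
    for i in range(1, n):
        total = cost[:, None] + trans          # (previous label, current label)
        back[i] = np.argmin(total, axis=0)
        cost = total.min(axis=0) + unary[i]
    labels = np.zeros(n, dtype=int)
    labels[-1] = int(np.argmin(cost))
    for i in range(n - 1, 0, -1):
        labels[i - 1] = back[i, labels[i]]
    return labels

def iterated_dp(data, K=2, beta=1.0, sweeps=5):
    """Iterated DP: optimise each row exactly in turn, conditioning on the
    current labels of the rows above and below (coordinate-wise ascent)."""
    H, W = data.shape
    means = np.linspace(data.min(), data.max(), K)
    labels = np.abs(data[..., None] - means).argmin(-1)
    for _ in range(sweeps):
        for r in range(H):
            unary = (data[r, :, None] - means) ** 2                   # data fidelity
            if r > 0:
                unary += beta * (labels[r - 1, :, None] != np.arange(K))
            if r < H - 1:
                unary += beta * (labels[r + 1, :, None] != np.arange(K))
            labels[r] = dp_line(unary, beta)
    return labels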

11.
Statistical approaches in quantitative positron emission tomography
Positron emission tomography is a medical imaging modality for producing 3D images of the spatial distribution of biochemical tracers within the human body. The images are reconstructed from data formed through detection of radiation resulting from the emission of positrons from radioisotopes tagged onto the tracer of interest. These measurements are approximate line integrals from which the image can be reconstructed using analytical inversion formulae. However, these direct methods do not allow accurate modeling either of the detector system or of the inherent statistical fluctuations in the data. Here we review recent progress in developing statistical approaches to image estimation that can overcome these limitations. We describe the various components of the physical model and review different formulations of the inverse problem. The wide range of numerical procedures for solving these problems is then reviewed. Finally, we describe recent work aimed at quantifying the quality of the resulting images, both in terms of classical measures of estimator bias and variance, and also using measures that are of more direct clinical relevance.
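As one concrete example of the statistical formulation reviewed here, the standard ML-EM update for the Poisson emission model can be written as follows (y_i are measured counts, a_{ij} the probability that an emission from voxel j is recorded in detector bin i, and \lambda_j the voxel intensity; this is the textbook algorithm, given for orientation rather than as the paper's specific method):

\lambda_j^{(k+1)} \;=\; \frac{\lambda_j^{(k)}}{\sum_i a_{ij}} \sum_i a_{ij}\, \frac{y_i}{\sum_{j'} a_{ij'}\, \lambda_{j'}^{(k)}} .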

12.
13.
An image that is mapped into a bit stream suitable for communication over or storage in a digital medium is said to have been compressed. Using tree-structured vector quantizers (TSVQs) is an approach to image compression in which clustering algorithms are combined with ideas from tree-structured classification to provide code books that can be searched quickly and simply. The overall goal is to optimize the quality of the compressed image subject to a constraint on the communication or storage capacity, i.e. on the allowed bit rate. General goals of image compression and vector quantization are summarized in this paper. There is discussion of methods for code book design, particularly the generalized Lloyd algorithm for clustering, and methods for splitting and pruning that have been extended from the design of classification trees to TSVQs. The resulting codes, called pruned TSVQs, are of variable rate, and yield lower distortion than fixed-rate, full-search vector quantizers for a given average bit rate. They have simple encoders and a natural successive approximation (progressive) property. Applications of pruned TSVQs are discussed, particularly compressing computerized tomography images. In this work, the key issue is not merely the subjective attractiveness of the compressed image but rather whether the diagnostic accuracy is adversely affected by compression. In recent work, TSVQs have been combined with other types of image processing, including segmentation and enhancement. The relationship between vector quantizer performance and the size of the training sequence used to design the code and other asymptotic properties of the codes are discussed.
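The full tree-structured design with splitting and pruning is beyond a short sketch, but the core clustering step, the generalized Lloyd algorithm mentioned above, might look roughly like this (block and code-book sizes are illustrative):

import numpy as np

def generalized_lloyd(blocks, codebook_size, iters=50, seed=0):
    """Design a vector-quantization code book by the generalized Lloyd algorithm.

    blocks : (N, d) training vectors, e.g. flattened k x k image blocks.
    Returns a (codebook_size, d) code book at a local minimum of the mean
    squared distortion.
    """
    rng = np.random.default_rng(seed)
    codebook = blocks[rng.choice(len(blocks), codebook_size, replace=False)].astype(float)
    for _ in range(iters):
        # Minimum-distortion (nearest-neighbour) encoding of every training vector.
        d2 = ((blocks[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
        idx = d2.argmin(1)
        # Centroid (conditional-mean) update of each code word.
        for k in range(codebook_size):
            members = blocks[idx == k]
            if len(members):
                codebook[k] = members.mean(0)
    return codebook

# Toy usage: 4x4 blocks from a random "image", 64-entry code book.
img = np.random.rand(128, 128)
blocks = img.reshape(32, 4, 32, 4).swapaxes(1, 2).reshape(-1, 16)
cb = generalized_lloyd(blocks, codebook_size=64)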

15.
Common loss functions used for the restoration of grey scale images include the zero–one loss and the sum of squared errors. The corresponding estimators, the posterior mode and the posterior marginal mean, are optimal Bayes estimators with respect to their way of measuring the loss for different error configurations. However, both these loss functions have a fundamental weakness: the loss does not depend on the spatial structure of the errors. This is important because a systematic structure in the errors can lead to misinterpretation of the estimated image. We propose a new loss function that also penalizes strong local sample covariance in the error and we discuss how the optimal Bayes estimator can be estimated using a two-step Markov chain Monte Carlo and simulated annealing algorithm. We present simulation results for some artificial data which show improvement with respect to small structures in the image.

16.
In this paper, we propose a model for image segmentation based on a finite mixture of Gaussian distributions. For each pixel of the image, prior probabilities of class memberships are specified through a Gibbs distribution, where association between labels of adjacent pixels is modeled by a class-specific term allowing for different interaction strengths across classes. We show how model parameters can be estimated in a maximum likelihood framework using Mean Field theory. Experimental performance on perturbed phantom and on real benchmark images shows that the proposed method performs well in a wide variety of empirical situations.
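One common way of writing such a model, with class-specific interaction strengths beta_k (the paper's exact parameterization may differ): the label field z has a Gibbs prior and the pixel intensities are conditionally Gaussian,

p(z) \;\propto\; \exp\Big\{ \sum_{s \sim t} \beta_{z_s}\, \mathbb{1}(z_s = z_t) \Big\},
\qquad
y_s \mid z_s = k \;\sim\; \mathcal{N}(\mu_k, \sigma_k^2),

where s \sim t runs over pairs of adjacent pixels; the mean field approximation replaces the intractable neighbouring labels by their expected values when maximizing the likelihood.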

17.
Problems involving high-dimensional data, such as pattern recognition, image analysis, and gene clustering, often require a preliminary step of dimension reduction before or during statistical analysis. If one restricts to a linear technique for dimension reduction, the remaining issue is the choice of the projection. This choice can be dictated by a desire to maximize certain statistical criteria, including variance, kurtosis, sparseness, and entropy, of the projected data. Motivations for such criteria come from past empirical studies of statistics of natural and urban images. We present a geometric framework for finding projections that are optimal for obtaining certain desired statistical properties. Our approach is to define an objective function on spaces of orthogonal linear projections (Stiefel and Grassmann manifolds) and to use gradient techniques to optimize that function. This construction uses the geometries of these manifolds to perform the optimization. Experimental results are presented to demonstrate these ideas for natural and facial images.
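A minimal sketch of the gradient-plus-retraction mechanics on the Stiefel manifold; the objective here is the projected variance, which simply recovers a PCA subspace, and the other criteria mentioned above (kurtosis, sparseness, entropy) would replace the objective and its gradient:

import numpy as np

def optimise_projection(X, d, steps=200, lr=0.1, seed=0):
    """Gradient ascent over p x d matrices with orthonormal columns.

    Maximises trace(U' S U), the variance of the projected data, where S is
    the sample covariance of X; returns U of shape (p, d).
    """
    rng = np.random.default_rng(seed)
    Xc = X - X.mean(0)
    S = Xc.T @ Xc / len(X)                       # sample covariance
    U, _ = np.linalg.qr(rng.standard_normal((X.shape[1], d)))
    for _ in range(steps):
        G = 2.0 * S @ U                          # Euclidean gradient of trace(U' S U)
        # Project the gradient onto the tangent space of the Stiefel manifold ...
        G = G - U @ (U.T @ G + G.T @ U) / 2.0
        # ... take a step, then retract back to orthonormal columns via QR.
        U, _ = np.linalg.qr(U + lr * G)
    return U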

18.
In this paper, we describe a new statistical method for images which contain discontinuities. The method tries to improve the quality of a 'measured' image, which is degraded by the presence of random distortions. This is achieved by using knowledge about the degradation process and a priori information about the main characteristics of the underlying ideal image. Specifically, the method uses information about the discontinuity patterns in small areas of the 'true' image. Some auxiliary labels 'explicitly' describe the location of discontinuities in the true image. A Bayesian model for the image grey levels and the discontinuity labels is built. The maximum a posteriori estimator is considered. The iterated conditional modes algorithm is used to find a (local) maximum of the posterior distribution. The proposed method has been successfully applied to both artificial and real magnetic resonance images. A comparison of the results with those obtained from three other known methods has also been performed. Finally, the connection between Bayesian 'explicit' and 'implicit' models is studied. In implicit modelling, there is no use of any set of labels explicitly describing the location of discontinuities. For these models, we derive some constraints on the function by which the presence of the discontinuities is taken into account.

19.
Using the spatial dependence of observations from multivariate images, it is possible to construct methods for data reduction that perform better than the widely used principal components procedure. Switzer and Green introduced the min/max autocorrelation factors (MAF) process for transforming the data to a new set of vectors where the components are arranged according to the amount of autocorrelation. MAF performs well when the underlying image consists of large homogeneous regions. For images with many transitions between smaller homogeneous regions, however, MAF may not perform well. A modification of the MAF process, the restricted min/max autocorrelation factors (RMAF) process, which takes into account the transitions between homogeneous regions, is introduced. Simulation experiments show that large improvements can be achieved using RMAF rather than MAF.
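A plain MAF sketch, using the usual formulation as a generalized eigenproblem between the band covariance and the covariance of spatial differences (the RMAF modification introduced in the paper is not reproduced, and pooling horizontal and vertical differences is one common choice):

import numpy as np
from scipy.linalg import eigh

def maf(image):
    """Min/max autocorrelation factors of a multivariate image.

    image : (H, W, p) array with p spectral bands. Returns (factors, loadings);
    factors[..., 0] has the highest spatial autocorrelation, the last factor
    the lowest (mostly noise).
    """
    H, W, p = image.shape
    X = image.reshape(-1, p)
    X = X - X.mean(0)
    S = np.cov(X, rowvar=False)
    # Covariance of horizontal and vertical unit-shift differences, pooled.
    dh = (image[:, 1:, :] - image[:, :-1, :]).reshape(-1, p)
    dv = (image[1:, :, :] - image[:-1, :, :]).reshape(-1, p)
    Sd = np.cov(np.vstack([dh, dv]), rowvar=False)
    # Generalized eigenproblem Sd v = lambda S v; small lambda means high autocorrelation.
    vals, vecs = eigh(Sd, S)
    factors = (X @ vecs).reshape(H, W, p)
    return factors, vecs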

20.
Confronted with multivariate group-structured data, one is in fact always interested in describing differences between groups. In this paper, canonical correlation analysis (CCA) is used as an exploratory data analysis tool to detect and describe differences between groups of objects. CCA allows for the construction of Gabriel biplots, relating representations of objects and variables in the plane that best represents the distinction of the groups of object points. In the case of non-linear CCA, transformations of the original variables are suggested to achieve a better group separation compared with that obtained by linear CCA. One can detect which (transformed) variables are responsible for this separation. The separation itself might be due to several characteristics of the data (e.g. distances between the centres of gravity of the original or transformed groups of object points, or differences in the structure of the original groups). Four case studies give an overview of an exploration of the possibilities offered by linear and non-linear CCA.
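A minimal linear-CCA sketch using scikit-learn, with the groups coded by an indicator matrix (one redundant column dropped); the non-linear transformations and the Gabriel biplot construction discussed above are not reproduced:

import numpy as np
from sklearn.cross_decomposition import CCA

def group_cca(X, groups, n_components=2):
    """Canonical scores relating the variables in X to group membership.

    X : (n, p) data matrix; groups : length-n array of group labels.
    The returned scores can be plotted, together with the variable loadings
    in the fitted model, as a biplot separating the groups.
    """
    labels = np.unique(groups)
    indicator = (np.asarray(groups)[:, None] == labels[None, :]).astype(float)
    G = indicator[:, :-1]            # drop one column: the indicators sum to one
    cca = CCA(n_components=n_components)
    scores, _ = cca.fit_transform(X, G)
    return scores, cca

# Toy usage: three groups in five variables.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(m, 1.0, size=(30, 5)) for m in (0.0, 1.0, 2.0)])
g = np.repeat(["a", "b", "c"], 30)
scores, model = group_cca(X, g)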
