Similar Articles
1.
Statistical approaches in quantitative positron emission tomography
Positron emission tomography is a medical imaging modality for producing 3D images of the spatial distribution of biochemical tracers within the human body. The images are reconstructed from data formed through detection of radiation resulting from the emission of positrons from radioisotopes tagged onto the tracer of interest. These measurements are approximate line integrals from which the image can be reconstructed using analytical inversion formulae. However, these direct methods do not allow accurate modeling either of the detector system or of the inherent statistical fluctuations in the data. Here we review recent progress in developing statistical approaches to image estimation that can overcome these limitations. We describe the various components of the physical model and review different formulations of the inverse problem. The wide range of numerical procedures for solving these problems is then reviewed. Finally, we describe recent work aimed at quantifying the quality of the resulting images, both in terms of classical measures of estimator bias and variance, and also using measures that are of more direct clinical relevance.
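The maximum-likelihood expectation-maximization (ML-EM) update is the canonical statistical alternative to analytical inversion for Poisson-distributed emission data. The sketch below is a minimal illustration with a made-up system matrix and phantom, not the reconstruction pipeline of any particular scanner; the array sizes and iteration count are arbitrary assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n_pixels, n_bins = 64, 128
A = rng.random((n_bins, n_pixels))           # hypothetical system matrix (line-integral weights)
x_true = rng.gamma(2.0, 1.0, n_pixels)       # hypothetical tracer distribution
y = rng.poisson(A @ x_true)                  # Poisson-distributed projection counts

x = np.ones(n_pixels)                        # uniform initial image
sens = A.sum(axis=0)                         # sensitivity image: sum_i a_ij
for _ in range(50):                          # ML-EM iterations
    ratio = y / np.clip(A @ x, 1e-12, None)  # measured / predicted counts
    x *= (A.T @ ratio) / sens                # multiplicative EM update, preserves non-negativity
```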

2.
The criminal courts in England and Wales may request the probation service to submit pre-sentence reports which are considered by magistrates and judges before making their sentencing decision. Pre-sentence reports must include an assessment of the risk of reoffending and the risk of harm to the public which the convicted offender presents. The offender group reconviction scale is a statistical aid to such risk assessment. We describe the scale and the statistical analysis on which it is based, and we discuss some statistical aspects of its interpretation and use.  相似文献   

3.
Genetic algorithms (GAs) are adaptive search techniques designed to find near-optimal solutions of large-scale optimization problems with multiple local maxima. Standard versions of the GA are defined for objective functions which depend on a vector of binary variables. The problem of finding the maximum a posteriori (MAP) estimate of a binary image in Bayesian image analysis appears to be well suited to a GA, as images have a natural binary representation and the posterior image probability is a multi-modal objective function. We use the numerical optimization problem posed in MAP image estimation as a test-bed on which to compare GAs with simulated annealing (SA), another all-purpose global optimization method. Our conclusions are that the GAs we have applied perform poorly, even after adaptation to this problem. This is somewhat unexpected, given the widespread claims of GAs' effectiveness, but it is in keeping with work by Jennison and Sheehan (1995), which suggests that GAs are not adept at handling problems involving a great many variables of roughly equal influence. We reach more positive conclusions concerning the use of the GA's crossover operation in recombining near-optimal solutions obtained by other methods. We propose a hybrid algorithm in which crossover is used to combine subsections of image reconstructions obtained using SA, and we show that this algorithm is more effective and efficient than SA or a GA individually.
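As a point of reference for the comparison described above, the following is a minimal simulated-annealing sketch for MAP estimation of a binary image under an Ising-type prior and Gaussian noise; the scene, noise level, prior weight and cooling schedule are illustrative assumptions, not the settings used in the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 24
x_true = np.zeros((n, n), dtype=int)
x_true[6:18, 6:18] = 1                                   # hypothetical binary scene
y = x_true + rng.normal(0.0, 0.6, (n, n))                # noisy observation
beta, sigma2 = 1.0, 0.6 ** 2                             # assumed prior weight and noise variance

def delta_energy(x, i, j):
    """Posterior energy change from flipping pixel (i, j)."""
    new = 1 - x[i, j]
    d_lik = ((y[i, j] - new) ** 2 - (y[i, j] - x[i, j]) ** 2) / (2 * sigma2)
    nbrs = [x[(i + di) % n, (j + dj) % n] for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1))]
    d_pri = beta * (sum(nb != new for nb in nbrs) - sum(nb != x[i, j] for nb in nbrs))
    return d_lik + d_pri

x, T = (y > 0.5).astype(int), 2.0                        # thresholded start, initial temperature
for sweep in range(100):
    for i in range(n):
        for j in range(n):
            dE = delta_energy(x, i, j)
            if dE < 0 or rng.random() < np.exp(-dE / T): # Metropolis acceptance rule
                x[i, j] = 1 - x[i, j]
    T *= 0.95                                            # geometric cooling
```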

4.
This paper addresses the image modeling problem under the assumption that images can be represented by third-order, hidden Markov mesh random field models. The range of applications of the techniques described hereafter comprises the restoration of binary images, the modeling and compression of image data, as well as the segmentation of gray-level or multi-spectral images, and image sequences under the short-range motion hypothesis. We outline coherent approaches to both the problems of image modeling (pixel labeling) and estimation of model parameters (learning). We derive a real-time labeling algorithm, based on a maximum marginal a posteriori probability criterion, for a hidden third-order Markov mesh random field model. Our algorithm achieves minimum time and space complexities simultaneously, and we describe what we believe to be the most appropriate data structures to implement it. Critical aspects of the computer simulation of a real-time implementation are discussed, down to the computer code level. We develop an (unsupervised) learning technique by which the model parameters can be estimated without ground truth information. We lay bare the conditions under which our approach can be made time-adaptive in order to cope with short-range motion in dynamic image sequences. We present extensive experimental results for both static and dynamic images from a wide variety of sources. They comprise standard, infra-red and aerial images, as well as a sequence of ultrasound images of a fetus and a series of frames from a motion picture sequence. These experiments demonstrate that the method is subjectively relevant to the problems of image restoration, segmentation and modeling.

5.
Artificial neural networks have been successfully applied to a variety of machine learning tasks, including image recognition, semantic segmentation, and machine translation. However, few studies have fully investigated ensembles of artificial neural networks. In this work, we investigated multiple widely used ensemble methods, including unweighted averaging, majority voting, the Bayes Optimal Classifier, and the (discrete) Super Learner, for image recognition tasks, with deep neural networks as candidate algorithms. We designed several experiments, with the candidate algorithms being the same network structure with different model checkpoints within a single training process, networks with the same structure but trained multiple times stochastically, and networks with different structures. In addition, we further studied the overconfidence phenomenon of the neural networks, as well as its impact on the ensemble methods. Across all of our experiments, the Super Learner achieved the best performance among all the ensemble methods in this study.
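Of the ensemble methods listed, unweighted averaging and majority voting are straightforward to write down; the sketch below combines the (hypothetical) softmax outputs of several networks both ways. The discrete Super Learner would additionally learn which candidate to trust from cross-validated risk, which is not shown here.

```python
import numpy as np

rng = np.random.default_rng(2)
M, N, K = 5, 100, 10                                     # networks, images, classes (arbitrary sizes)
probs = rng.dirichlet(np.ones(K), size=(M, N))           # stand-in for per-network softmax outputs, shape (M, N, K)

avg_pred = probs.mean(axis=0).argmax(axis=1)             # unweighted averaging of class probabilities

votes = probs.argmax(axis=2)                             # each network's hard label, shape (M, N)
vote_pred = np.array([np.bincount(votes[:, i], minlength=K).argmax()
                      for i in range(N)])                # majority voting (ties broken by lowest label)
```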

6.
Face recognition has important applications in forensics (criminal identification) and security (biometric authentication). The problem of face recognition has been extensively studied in the computer vision community, from a variety of perspectives. A relatively new development is the use of facial asymmetry in face recognition, and we present here the results of a statistical investigation of this biometric. We first show how facial asymmetry information can be used to perform three different face recognition tasks—human identification (in the presence of expression variations), classification of faces by expression, and classification of individuals according to sex. Initially, we use a simple classification method, and conduct a feature analysis which shows the particular facial regions that play the dominant role in achieving these three entirely different classification goals. We then pursue human identification under expression changes in greater depth, since this is the most important task from a practical point of view. Two different ways of improving the performance of the simple classifier are then discussed: (i) feature combinations and (ii) the use of resampling techniques (bagging and random subspaces). With these modifications, we succeed in obtaining near perfect classification results on a database of 55 individuals, a statistically significant improvement over the initial results as seen by hypothesis tests of proportions.  相似文献   
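The two classifier modifications mentioned above, bagging and random subspaces, can both be expressed with scikit-learn's BaggingClassifier: bootstrap resampling of training examples in one case, random sampling of feature columns in the other. The data below are a synthetic stand-in for facial-asymmetry features, and the estimator counts and sampling fractions are arbitrary choices rather than the paper's settings.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import BaggingClassifier
from sklearn.model_selection import cross_val_score

# Synthetic stand-in for asymmetry features extracted from face images
X, y = make_classification(n_samples=300, n_features=60, n_informative=15, random_state=0)

bagging = BaggingClassifier(n_estimators=50, max_samples=0.8, random_state=0)   # bootstrap samples
subspace = BaggingClassifier(n_estimators=50, bootstrap=False,                  # all samples,
                             max_features=0.5, random_state=0)                  # random half of the features

print("bagging  :", cross_val_score(bagging, X, y, cv=5).mean())
print("subspaces:", cross_val_score(subspace, X, y, cv=5).mean())
```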

7.
The purpose of this work is to develop statistical methods that use degradation measures to estimate a survival function under a linear degradation model. In this paper, we review existing methods and then describe a parametric approach. We focus on estimating the survival function. A simulation study is conducted to evaluate the performance of the estimation method, and the method is illustrated using real data.
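One common way to connect a linear degradation model to a survival function is to fit a per-unit degradation slope, convert each slope into a pseudo failure time at a fixed threshold, and fit a parametric lifetime distribution to those times. The sketch below follows that generic two-stage recipe on simulated paths; the threshold, measurement schedule and log-normal lifetime assumption are illustrative choices, not necessarily the authors' parametric approach.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
n_units, times, thresh = 50, np.linspace(0.0, 10.0, 6), 5.0           # hypothetical test plan
b = rng.lognormal(mean=-0.7, sigma=0.3, size=n_units)                  # random degradation rates
D = b[:, None] * times + rng.normal(0.0, 0.1, (n_units, len(times)))   # noisy linear degradation paths

# Stage 1: per-unit least-squares slope (no intercept); pseudo failure time = threshold / slope
slopes = (D * times).sum(axis=1) / (times ** 2).sum()
pseudo_T = thresh / slopes

# Stage 2: parametric survival function from a log-normal fit to the pseudo failure times
shape, loc, scale = stats.lognorm.fit(pseudo_T, floc=0)
t_grid = np.linspace(1.0, 30.0, 100)
S_hat = stats.lognorm.sf(t_grid, shape, loc=loc, scale=scale)           # estimated S(t) = P(T > t)
```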

8.
In recent years, a number of statistical models have been proposed for the purposes of high-level image analysis tasks such as object recognition. However, in general, these models remain hard to use in practice, partly as a result of their complexity, partly through lack of software. In this paper we concentrate on a particular deformable template model which has proved potentially useful for locating and labelling cells in microscope slides (Rue and Hurn, 1999). This model requires the specification of a number of rather non-intuitive parameters which control the shape variability of the deformed templates. Our goal is to arrange the estimation of these parameters in such a way that the microscope user's expertise is exploited to provide the necessary training data graphically, by identifying a number of cells displayed on a computer screen, but that no additional statistical input is required. In this paper we use maximum likelihood estimation, incorporating the error structure in the generation of our training data.

9.
We investigate a Bayesian method for the segmentation of muscle fibre images. The images are reasonably well approximated by a Dirichlet tessellation, and so we use a deformable template model based on Voronoi polygons to represent the segmented image. We consider various prior distributions for the parameters and suggest an appropriate likelihood. Following the Bayesian paradigm, the mathematical form for the posterior distribution is obtained (up to an integrating constant). We introduce a Metropolis-Hastings algorithm and a reversible jump Markov chain Monte Carlo (RJMCMC) algorithm for simulation from the posterior when the number of polygons is fixed or unknown, respectively. The particular moves in the RJMCMC algorithm are birth, death and position/colour changes of the point process which determines the location of the polygons. Segmentation of the true image was carried out using the estimated posterior mode and posterior mean. A simulation study is presented which is helpful for tuning the hyperparameters and assessing the accuracy. The algorithms work well on a real image of a muscle fibre cross-section, and an additional parameter, which models the boundaries of the muscle fibres, is included in the final model.

10.
This paper presents a new robust, low computational cost technology for recognizing free-form objects in three-dimensional (3D) range data, or, in two dimensional (2D) curve data in the image plane. Objects are represented by implicit polynomials (i.e. 3D algebraic surfaces or 2D algebraic curves) of degree greater than two, and are recognized by computing and matching vectors of their algebraic invariants (which are functions of their coefficients that are invariant to translations, rotations and general linear transformations). Such polynomials of the fourth degree can represent objects considerably more complicated than quadrics and super-quadrics, and can realize object recognition at significantly lower computational cost. Unfortunately, the coefficients of high degree implicit polynomials are highly sensitive to small changes in the data to which the polynomials are fit, thus often making recognition based on these polynomial coefficients or their invariants unreliable. We take two approaches to the problem: one involves restricting the polynomials to those which represent bounded curves and surfaces, and the other approach is to use Bayesian recognizers. The Bayesian recognizers are amazingly stable and reliable, even when the polynomials have unbounded zero sets and very large coefficient variability. The Bayesian recognizers are a unique interplay of algebraic functions and statistical methods. In this paper, we present these recognizers and show that they work effectively, even when data are missing along a large portion of an object boundary due, for example, to partial occlusion.  相似文献   

11.
Image processing through multiscale analysis and measurement noise modeling
We describe a range of powerful multiscale analysis methods. We also focus on the pivotal issue of measurement noise in the physical sciences. From multiscale analysis and noise modeling, we develop a comprehensive methodology for data analysis of 2D images, 1D signals (or spectra), and point pattern data. Noise modeling is based on the following: (i) multiscale transforms, including wavelet transforms; (ii) a data structure termed the multiresolution support; and (iii) multiple scale significance testing. The latter two aspects serve to characterize signal with respect to noise. The data analysis objectives we deal with include noise filtering and scale decomposition for visualization or feature detection.  相似文献   
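A minimal version of the idea of pairing a multiscale transform with per-scale significance testing is sketched below: a simple smoothing-difference decomposition of a 1D signal, a robust noise estimate at each scale, and a 3-sigma "multiresolution support" used to keep only significant coefficients. The specific transform (box smoothing rather than a wavelet), the number of scales and the significance level are assumptions for illustration.

```python
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(4)
n = 512
signal = np.exp(-0.5 * ((np.arange(n) - 256) / 20.0) ** 2)            # hypothetical feature
data = signal + rng.normal(0.0, 0.1, n)                               # Gaussian measurement noise

details, support, smooth = [], [], data.copy()
for j in range(4):                                                    # 4 dyadic scales
    smoother = ndimage.uniform_filter1d(smooth, size=2 ** (j + 2))    # coarser smoothing
    w = smooth - smoother                                             # detail coefficients at scale j
    sigma_j = 1.4826 * np.median(np.abs(w - np.median(w)))            # robust (MAD) noise estimate
    support.append(np.abs(w) > 3.0 * sigma_j)                         # multiresolution support: 3-sigma test
    details.append(w)
    smooth = smoother

# Noise filtering: reconstruct from the coarse residual plus only the significant details
filtered = smooth + sum(np.where(s, w, 0.0) for s, w in zip(support, details))
```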

12.
The Bayesian information criterion (BIC) is widely used for variable selection. We focus on the regression setting for which variations of the BIC have been proposed. A version that includes the Fisher Information matrix of the predictor variables performed best in one published study. In this article, we extend the evaluation, introduce a performance measure involving how closely posterior probabilities are approximated, and conclude that the version that includes the Fisher Information often favors regression models having more predictors, depending on the scale and correlation structure of the predictor matrix. In the image analysis application that we describe, we therefore prefer the standard BIC approximation because of its relative simplicity and competitive performance at approximating the true posterior probabilities.  相似文献   
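For reference, the standard BIC approximation favoured in the article can be computed directly for Gaussian linear regression, and exp(-BIC/2), renormalized over the candidate models, gives the usual approximation to posterior model probabilities. The toy data and candidate models below are hypothetical.

```python
import numpy as np

def bic(y, X):
    """Standard BIC for a Gaussian linear model: n*log(RSS/n) + k*log(n)."""
    n, k = X.shape
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    rss = float(np.sum((y - X @ beta) ** 2))
    return n * np.log(rss / n) + k * np.log(n)

rng = np.random.default_rng(5)
n = 200
X = np.column_stack([np.ones(n), rng.normal(size=(n, 4))])             # intercept + 4 candidate predictors
y = X[:, :3] @ np.array([1.0, 2.0, -1.0]) + rng.normal(size=n)         # only the first two predictors matter

models = {"x1": [0, 1], "x1+x2": [0, 1, 2], "x1+x2+x3": [0, 1, 2, 3]}
bics = np.array([bic(y, X[:, cols]) for cols in models.values()])
weights = np.exp(-0.5 * (bics - bics.min()))
post_probs = dict(zip(models, weights / weights.sum()))                # approximate posterior model probabilities
```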

13.
In observational studies of the interaction between exposures on a dichotomous outcome in a given population, one parameter of a regression model is usually used to describe the interaction, leading to a single measure of the interaction. In this article we use the conditional risk of an outcome given exposures and covariates to describe the interaction and obtain five different measures of the interaction, that is, the difference between the marginal risk differences, the ratio of the marginal risk ratios, the ratio of the marginal odds ratios, the ratio of the conditional risk ratios, and the ratio of the conditional odds ratios. These measures reflect different aspects of the interaction. By using only one regression model for the conditional risk, we obtain the maximum-likelihood (ML)-based point and interval estimates of these measures, which are most efficient due to the nature of ML. We use the ML estimates of the model parameters to obtain the ML estimates of these measures. We use the approximate normal distribution of the ML estimates of the model parameters to obtain approximate non-normal distributions of the ML estimates of these measures and then confidence intervals for these measures. The method can be easily implemented and is presented via a medical example.
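All five measures can be read off a single logistic model for the conditional risk: the marginal ones by standardization (averaging the fitted risks over the covariate distribution), the conditional ones at a fixed covariate value. The coefficients and covariate distribution below are invented for illustration; in practice they would be the ML estimates from the fitted model, with intervals obtained as in the article.

```python
import numpy as np

def expit(t):
    return 1.0 / (1.0 + np.exp(-t))

# Hypothetical logistic model for the conditional risk P(Y=1 | A, B, Z):
#   logit p = b0 + b1*A + b2*B + b3*A*B + b4*Z
b0, b1, b2, b3, b4 = -2.0, 0.6, 0.4, 0.5, 0.8
rng = np.random.default_rng(6)
Z = rng.normal(size=10_000)                                    # covariate values in the study population

def marginal_risk(a, b):
    """Standardized (marginal) risk: average the conditional risk over the covariates."""
    return expit(b0 + b1 * a + b2 * b + b3 * a * b + b4 * Z).mean()

p = {(a, b): marginal_risk(a, b) for a in (0, 1) for b in (0, 1)}
odds = lambda q: q / (1.0 - q)

diff_of_marginal_RDs  = (p[1, 1] - p[0, 1]) - (p[1, 0] - p[0, 0])
ratio_of_marginal_RRs = (p[1, 1] / p[0, 1]) / (p[1, 0] / p[0, 0])
ratio_of_marginal_ORs = (odds(p[1, 1]) / odds(p[0, 1])) / (odds(p[1, 0]) / odds(p[0, 0]))

z0 = 0.0                                                       # conditional measures at a fixed covariate value
pc = {(a, b): expit(b0 + b1 * a + b2 * b + b3 * a * b + b4 * z0) for a in (0, 1) for b in (0, 1)}
ratio_of_conditional_RRs = (pc[1, 1] / pc[0, 1]) / (pc[1, 0] / pc[0, 0])
ratio_of_conditional_ORs = np.exp(b3)                          # interaction coefficient on the odds-ratio scale
```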

14.
Whilst innovative Bayesian approaches are increasingly used in clinical studies, in the preclinical area Bayesian methods appear to be rarely used in the reporting of pharmacology data. This is particularly surprising in the context of regularly repeated in vivo studies where there is a considerable amount of data from historical control groups, which has potential value. This paper describes our experience with introducing Bayesian analysis for such studies using a Bayesian meta‐analytic predictive approach. This leads naturally either to an informative prior for a control group as part of a full Bayesian analysis of the next study or using a predictive distribution to replace a control group entirely. We use quality control charts to illustrate study‐to‐study variation to the scientists and describe informative priors in terms of their approximate effective numbers of animals. We describe two case studies of animal models: the lipopolysaccharide‐induced cytokine release model used in inflammation and the novel object recognition model used to screen cognitive enhancers, both of which show the advantage of a Bayesian approach over the standard frequentist analysis. We conclude that using Bayesian methods in stable repeated in vivo studies can result in a more effective use of animals, either by reducing the total number of animals used or by increasing the precision of key treatment differences. This will lead to clearer results and supports the “3Rs initiative” to Refine, Reduce and Replace animals in research. Copyright © 2016 John Wiley & Sons, Ltd.  相似文献   
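A common way to build such a meta-analytic predictive prior is a normal-normal hierarchical model over the historical control groups: estimate the between-study heterogeneity, form the predictive distribution for a new control-group mean, and express its informativeness as an approximate effective number of animals. The sketch below uses a simple moment-based (DerSimonian-Laird) approximation with invented summary data rather than the full Bayesian fit the paper describes.

```python
import numpy as np

# Hypothetical control-group summaries from six historical in vivo studies
means = np.array([10.2, 9.6, 11.1, 10.4, 9.9, 10.8])
ses = np.array([0.5, 0.6, 0.4, 0.5, 0.7, 0.5])               # standard errors of the group means
sigma_within = 1.6                                            # assumed within-group SD per animal

# DerSimonian-Laird estimate of the between-study variance tau^2
w = 1.0 / ses ** 2
mu_fixed = np.sum(w * means) / np.sum(w)
Q = np.sum(w * (means - mu_fixed) ** 2)
tau2 = max(0.0, (Q - (len(means) - 1)) / (np.sum(w) - np.sum(w ** 2) / np.sum(w)))

# Random-effects pooled mean and the predictive distribution for a NEW control-group mean
w_re = 1.0 / (ses ** 2 + tau2)
mu_re = np.sum(w_re * means) / np.sum(w_re)
pred_var = 1.0 / np.sum(w_re) + tau2                          # uncertainty in the mean + heterogeneity

# Prior effective sample size: how many concurrent animals the prior is roughly "worth"
ess = sigma_within ** 2 / pred_var
print(f"predictive prior: N({mu_re:.2f}, {pred_var:.3f}); ~{ess:.1f} effective animals")
```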

15.
Fast and robust bootstrap
In this paper we review recent developments on a bootstrap method for robust estimators which is computationally faster and more resistant to outliers than the classical bootstrap. This fast and robust bootstrap method is, under reasonable regularity conditions, asymptotically consistent. We describe the method in general and then consider its application to perform inference based on robust estimators for the linear regression and multivariate location-scatter models. In particular, we study confidence and prediction intervals and tests of hypotheses for linear regression models, inference for location-scatter parameters and principal components, and classification error estimation for discriminant analysis.  相似文献   

16.
This work is motivated by a quantitative Magnetic Resonance Imaging study of the differential tumor/healthy tissue change in contrast uptake induced by radiation. The goal is to determine the time in which there is maximal contrast uptake (a surrogate for permeability) in the tumor relative to healthy tissue. A notable feature of the data is its spatial heterogeneity. Zhang, Johnson, Little, and Cao (2008a and 2008b) discuss two parallel approaches to "denoise" a single image of change in contrast uptake from baseline to one follow-up visit of interest. In this work we extend the image model to explore the longitudinal profile of the tumor/healthy tissue contrast uptake in multiple images over time. We fit a two-stage model. First, we propose a longitudinal image model for each subject. This model simultaneously accounts for the spatial and temporal correlation and denoises the observed images by borrowing strength both across neighboring pixels and over time. We propose to use the Mann-Whitney U statistic to summarize the tumor contrast uptake relative to healthy tissue. In the second stage, we fit a population model to the U statistic and estimate when it achieves its maximum. Our initial findings suggest that the maximal contrast uptake of the tumor core relative to healthy tissue peaks around three weeks after initiation of radiotherapy, though this warrants further investigation.  相似文献   
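The second-stage summary described here, a Mann-Whitney U statistic comparing tumor and healthy-tissue uptake at each visit followed by locating the time of its maximum, can be sketched directly; the pixel values, visit times and quadratic time profile below are simulated placeholders, not the study data.

```python
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(7)
weeks = np.array([0, 1, 2, 3, 4, 5], dtype=float)
u_scaled = []
for t in weeks:
    # Hypothetical denoised change-in-uptake values for tumor and healthy-tissue pixels at week t
    tumor = rng.normal(loc=0.2 + 0.30 * t - 0.05 * t ** 2, scale=0.4, size=400)
    healthy = rng.normal(loc=0.1, scale=0.4, size=400)
    res = mannwhitneyu(tumor, healthy, alternative="two-sided")
    u_scaled.append(res.statistic / (tumor.size * healthy.size))   # U/(n*m): estimate of P(tumor > healthy)

# Second stage (population model simplified to one subject): quadratic fit and its maximizer
c2, c1, c0 = np.polyfit(weeks, u_scaled, 2)
t_peak = -c1 / (2.0 * c2)                                           # time of maximal relative uptake
```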

17.
In this paper, we describe a new statistical method for images which contain discontinuities. The method tries to improve the quality of a 'measured' image, which is degraded by the presence of random distortions. This is achieved by using knowledge about the degradation process and a priori information about the main characteristics of the underlying ideal image. Specifically, the method uses information about the discontinuity patterns in small areas of the 'true' image. Some auxiliary labels 'explicitly' describe the location of discontinuities in the true image. A Bayesian model for the image grey levels and the discontinuity labels is built. The maximum a posteriori estimator is considered. The iterated conditional modes algorithm is used to find a (local) maximum of the posterior distribution. The proposed method has been successfully applied to both artificial and real magnetic resonance images. A comparison of the results with those obtained from three other known methods has also been performed. Finally, the connection between Bayesian 'explicit' and 'implicit' models is studied. In implicit modelling, there is no use of any set of labels explicitly describing the location of discontinuities. For these models, we derive some constraints on the function by which the presence of the discontinuities is taken into account.
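The iterated conditional modes (ICM) step itself is easy to illustrate: each pixel is set in turn to the grey level that maximizes its local conditional posterior given the data and the current neighbours. The sketch below uses a plain pairwise smoothness penalty rather than the paper's explicit discontinuity labels, and the scene, noise level and prior weight are arbitrary assumptions.

```python
import numpy as np

rng = np.random.default_rng(8)
n = 48
truth = np.zeros((n, n))
truth[12:36, 12:36] = 1.0                                            # hypothetical piecewise-constant scene
y = truth + rng.normal(0.0, 0.5, (n, n))                             # degraded 'measured' image
levels = np.linspace(0.0, 1.0, 5)                                    # candidate grey levels
beta, sigma2 = 2.0, 0.25                                             # smoothness weight, noise variance

x = levels[np.argmin(np.abs(y[..., None] - levels), axis=-1)]        # start from the ML (nearest-level) image
for _ in range(5):                                                   # ICM sweeps: greedy local posterior maximization
    for i in range(n):
        for j in range(n):
            nbrs = [x[(i + di) % n, (j + dj) % n]
                    for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1))]
            energy = [(y[i, j] - g) ** 2 / (2.0 * sigma2)            # data-fit term
                      + beta * sum(nb != g for nb in nbrs)           # Potts-style smoothness (no line labels here)
                      for g in levels]
            x[i, j] = levels[int(np.argmin(energy))]
```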

18.
Self-reported income information particularly suffers from an intentional coarsening of the data, which is called heaping or rounding. If it does not occur completely at random – which is usually the case – heaping and rounding have detrimental effects on the results of statistical analysis. Conventional statistical methods do not consider this kind of reporting bias, and thus might produce invalid inference. We describe a novel statistical modeling approach that allows us to deal with self-reported heaped income data in an adequate and flexible way. We suggest modeling heaping mechanisms and the true underlying model in combination. To describe the true net income distribution, we use the zero-inflated log-normal distribution. Heaping points are identified from the data by applying a heuristic procedure comparing a hypothetical income distribution and the empirical one. To determine heaping behavior, we employ two distinct models: either we assume piecewise constant heaping probabilities, or heaping probabilities are considered to increase steadily with proximity to a heaping point. We validate our approach by some examples. To illustrate the capacity of the proposed method, we conduct a case study using income data from the German National Educational Panel Study.  相似文献   
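To make the heaping mechanism concrete, the snippet below simulates zero-inflated log-normal incomes, lets respondents round to the nearest 500 or 1000 with some probability, and then flags heaping points heuristically as reported values whose empirical frequency greatly exceeds that implied by a smooth log-normal fit. All probabilities, rounding grids and the flagging threshold are invented for illustration and only mimic the spirit of the procedure described above.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(9)
n = 20_000
true_income = np.where(rng.random(n) < 0.05, 0.0,                      # zero-inflation: no income
                       rng.lognormal(mean=7.3, sigma=0.6, size=n))     # smooth "true" net incomes

reported = true_income.copy()                                          # assumed heaping mechanism:
r500 = rng.random(n) < 0.35                                            # some respondents round to 500s,
r1000 = rng.random(n) < 0.25                                           # some to 1000s
reported[r500] = np.round(reported[r500] / 500.0) * 500.0
reported[r1000] = np.round(reported[r1000] / 1000.0) * 1000.0

# Heuristic heaping-point detection: compare empirical frequencies with a smooth log-normal fit
pos = reported[reported > 0]
shape, loc, scale = stats.lognorm.fit(pos, floc=0)
vals, counts = np.unique(np.round(pos), return_counts=True)
expected = stats.lognorm.pdf(vals, shape, loc=loc, scale=scale) * pos.size   # expected count per unit bin
heaping_points = vals[counts > 20.0 * np.maximum(expected, 1e-9)]
```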

19.
Parametric nonlinear mixed effects models (NLMEs) are now widely used in biometrical studies, especially in pharmacokinetics research and HIV dynamics models, due, among other aspects, to the computational advances achieved in recent years. However, this kind of model may not be flexible enough for complex longitudinal data analysis. Semiparametric NLMEs (SNMMs) have been proposed as an extension of NLMEs. These models are a good compromise and retain nice features of both parametric and nonparametric models, resulting in more flexible models than standard parametric NLMEs. However, SNMMs are complex models for which estimation still remains a challenge. Previous estimation procedures are based on a combination of log-likelihood approximation methods for parametric estimation and smoothing spline techniques for nonparametric estimation. In this work, we propose new estimation strategies for SNMMs. On the one hand, we use the Stochastic Approximation version of the EM algorithm (SAEM) to obtain exact ML and REML estimates of the fixed effects and variance components. On the other hand, we propose a LASSO-type method to estimate the unknown nonlinear function. We derive oracle inequalities for this nonparametric estimator. We combine the two approaches in a general estimation procedure that we illustrate with simulations and through the analysis of a real data set of price evolution in on-line auctions.
