Similar Documents
20 similar documents found (search time: 15 ms)
1.
2.
3.
A penalized likelihood approach to the estimation of calibration factors in positron emission tomography (PET) is considered, in particular the problem of estimating the efficiency of PET detectors. Varying efficiencies among the detectors create non-uniform performance, and failure to account for the non-uniformities leads to streaks in the image, so efficient estimation of the non-uniformities is desirable to reduce the propagation of noise to the final image. The relevant data set is provided by a blank scan, for which a model may be derived that depends only on the sources affecting non-uniformities: inherent variation among the detector crystals and geometric effects. Physical considerations suggest a novel mixed inverse model with random crystal effects and smooth geometric effects. Using appropriate penalty terms, the penalized maximum likelihood estimates are derived and an efficient computational algorithm utilizing the fast Fourier transform is developed. Data-driven shrinkage and smoothing parameters are chosen to minimize an estimate of the predictive loss function. Various examples indicate that the proposed approach works well computationally and compares well with the standard method.

4.
Statistical approaches in quantitative positron emission tomography   (cited by 5: 0 self-citations, 5 by others)
Positron emission tomography is a medical imaging modality for producing 3D images of the spatial distribution of biochemical tracers within the human body. The images are reconstructed from data formed through detection of radiation resulting from the emission of positrons from radioisotopes tagged onto the tracer of interest. These measurements are approximate line integrals from which the image can be reconstructed using analytical inversion formulae. However, these direct methods do not allow accurate modeling either of the detector system or of the inherent statistical fluctuations in the data. Here we review recent progress in developing statistical approaches to image estimation that can overcome these limitations. We describe the various components of the physical model and review different formulations of the inverse problem. The wide range of numerical procedures for solving these problems is then reviewed. Finally, we describe recent work aimed at quantifying the quality of the resulting images, both in terms of classical measures of estimator bias and variance, and also using measures that are of more direct clinical relevance.
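The statistical approaches surveyed above replace analytic inversion with likelihood-based iteration for Poisson emission data; the best-known instance is the MLEM algorithm. A minimal sketch on a made-up two-pixel, two-bin toy system (the system matrix `A` and counts `y` are illustrative, not a real scanner model):

```python
# A minimal sketch of maximum-likelihood EM (MLEM) reconstruction for
# Poisson emission data.  The 2x2 system below is a toy example.

def mlem(A, y, n_iter=100):
    """A: rows a[i][j] = P(detect in bin i | emission in pixel j).
    y: observed counts per detector bin.  Returns pixel intensity estimates."""
    n_bins, n_pix = len(A), len(A[0])
    lam = [1.0] * n_pix                      # flat initial image
    sens = [sum(A[i][j] for i in range(n_bins)) for j in range(n_pix)]
    for _ in range(n_iter):
        proj = [sum(A[i][j] * lam[j] for j in range(n_pix)) for i in range(n_bins)]
        ratio = [y[i] / proj[i] if proj[i] > 0 else 0.0 for i in range(n_bins)]
        lam = [lam[j] / sens[j] * sum(A[i][j] * ratio[i] for i in range(n_bins))
               for j in range(n_pix)]
    return lam

# Toy example: 2 pixels seen by 2 line-integral bins.
A = [[1.0, 0.0], [0.5, 0.5]]
y = [10.0, 10.0]     # noiseless counts for true pixel intensities (10, 10)
est = mlem(A, y)
```

For noiseless data and an invertible toy system the iteration recovers the true intensities; with real noisy counts the update trades off likelihood fit against noise amplification, which is what motivates the penalized and Bayesian variants discussed elsewhere in this list.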

5.
A local orthogonal polynomial expansion (LOrPE) of the empirical density function is proposed as a novel method to estimate the underlying density. The estimate is constructed by matching localised expectation values of orthogonal polynomials to the values observed in the sample. LOrPE is related to several existing methods and generalises straightforwardly to multivariate settings. By construction, it is similar to local likelihood density estimation (LLDE). In the limit of small bandwidths, LOrPE functions as kernel density estimation (KDE) with high-order (effective) kernels inherently free of boundary bias, a natural consequence of kernel reshaping to accommodate endpoints; consistency and faster asymptotic convergence rates follow. In the limit of large bandwidths, LOrPE is equivalent to orthogonal series density estimation (OSDE) with Legendre polynomials, thereby inheriting its consistency. We compare the performance of LOrPE to KDE, LLDE, and OSDE in a number of simulation studies. In terms of mean integrated squared error, the results suggest that with a proper balance of the two tuning parameters, bandwidth and degree, LOrPE generally outperforms these competitors when estimating densities with sharply truncated supports.

6.
We propose an orthogonal series density estimator for complex surveys, where samples are neither independent nor identically distributed. The proposed estimator is proved to be design-unbiased and asymptotically design-consistent. Asymptotic normality is proved under both the design space and the combined space. Two data-driven estimators are proposed based on the proposed oracle estimator. We show the efficiency of the proposed estimators in simulation studies. A real survey data example is provided as an illustration.
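Design-unbiasedness in complex surveys rests on inverse-inclusion-probability (Horvitz–Thompson) weighting of the sampled units. A hedged sketch of that weighting idea, with a made-up 10-unit population and unequal Poisson-sampling inclusion probabilities (not the paper's estimator):

```python
# A minimal sketch of Horvitz-Thompson weighting: each sampled unit is
# weighted by the inverse of its inclusion probability, which makes the
# estimator unbiased over repeated draws of the sampling design.
import random

population = list(range(1, 11))                      # y-values 1..10, true mean 5.5
pi = [0.2 if y <= 5 else 0.8 for y in population]    # unequal inclusion probabilities

def ht_mean(sample):
    """sample: list of (y, pi) pairs; estimates the population mean of y."""
    N = len(population)
    return sum(y / p for y, p in sample) / N

random.seed(0)
reps = []
for _ in range(20000):                               # Poisson sampling design
    s = [(y, p) for y, p in zip(population, pi) if random.random() < p]
    reps.append(ht_mean(s))
avg = sum(reps) / len(reps)   # close to the true mean 5.5 by design-unbiasedness
```

Averaged over many simulated samples, the weighted estimator centres on the true population mean even though high-valued units are sampled four times as often.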

7.
Summary  Let g(x) and f(x) be continuous density functions on (a, b) and let {ϕj} be a complete orthonormal sequence of functions on L2(g), the set of square-integrable functions weighted by g on (a, b). Suppose that f admits an expansion in the {ϕj} with coefficients ϑj over (a, b). Given a grouped sample of size n from f(x), the paper investigates the asymptotic properties of the restricted maximum likelihood estimator of the density, obtained by setting all but the first m of the ϑj's equal to 0. Practical suggestions are given for performing estimation via Fourier and Legendre polynomial series. Research partially supported by CNR grant n. 93.00837.CT10.
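A sketch of the simpler, unrestricted flavour of this construction: plain orthogonal series density estimation with Legendre polynomials on (−1, 1), keeping only the first m+1 coefficients. This is moment matching rather than the paper's restricted MLE for grouped data, and the uniform test sample is made up:

```python
# A minimal sketch of orthogonal series density estimation (OSDE) with
# Legendre polynomials: truncate the expansion at degree m and estimate
# each coefficient by a sample average.
import random

def legendre(j, x):
    """P_j(x) via the three-term recurrence."""
    p0, p1 = 1.0, x
    if j == 0:
        return p0
    for k in range(2, j + 1):
        p0, p1 = p1, ((2 * k - 1) * x * p1 - (k - 1) * p0) / k
    return p1

def osde(sample, m):
    """f_hat(x) = sum_{j<=m} theta_j P_j(x), theta_j matched to the sample.
    The factor (2j+1)/2 normalises P_j, since int P_j^2 = 2/(2j+1)."""
    n = len(sample)
    theta = [(2 * j + 1) / 2.0 * sum(legendre(j, x) for x in sample) / n
             for j in range(m + 1)]
    return lambda x: sum(t * legendre(j, x) for j, t in enumerate(theta))

random.seed(1)
data = [random.uniform(-1, 1) for _ in range(5000)]   # true density 1/2 on (-1, 1)
f_hat = osde(data, m=4)
```

For uniform data the estimate hovers near the true density 1/2; the choice of m plays the role of the smoothing parameter.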

8.
Robust parameter design methodology was originally introduced by Taguchi [14] as an engineering methodology for quality improvement of products and processes. A robust design of a system is one in which two different types of factors are varied: control factors and noise factors. Control factors are variables whose levels are adjustable, whereas noise factors are variables whose levels are hard or impossible to control during normal conditions, such as environmental conditions and raw-material properties. Robust parameter design aims at reducing process variation by properly selecting the levels of the control factors so that the process becomes insensitive to changes in the noise factors. Taguchi [14,15] proposed the use of crossed arrays (inner–outer arrays) for robust parameter design. A crossed array is the cross-product of an orthogonal array (OA) involving the control factors (the inner array) and an OA involving the noise factors (the outer array). Objecting to the run size and the inflexibility of crossed arrays, several authors combined control and noise factors in a single design matrix, called a combined array, instead of crossed arrays. In this framework, we present the use of OAs in Taguchi's methodology as a useful tool for constructing robust parameter designs with economical run size.
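The crossed-array construction described above is literally a cross-product of two orthogonal arrays. A minimal sketch using the standard 4-run, 3-factor, 2-level inner array and a full 2² outer array (the factor counts are illustrative):

```python
# A minimal sketch of forming a crossed (inner-outer) array: every run of a
# control-factor orthogonal array is paired with every run of a noise-factor
# array, so the run size is the product of the two array sizes.
from itertools import product

# OA(4, 3, 2, 2): 4 runs, 3 two-level factors, every pair of columns balanced.
inner = [(0, 0, 0), (0, 1, 1), (1, 0, 1), (1, 1, 0)]    # control factors
outer = [(0, 0), (0, 1), (1, 0), (1, 1)]                # noise factors (full 2^2)

crossed = [c + n for c, n in product(inner, outer)]     # 4 x 4 = 16 runs
```

The multiplicative run size (here 4 × 4 = 16) is exactly the economy objection that motivates combined arrays, which place all five factors in one design matrix instead.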

9.
In this paper we consider long-memory processes obtained by aggregation of independent random-parameter AR(1) processes. We propose an estimator of the density of the underlying random parameter, based on the expansion of the density function in the basis of Gegenbauer polynomials. Rates of convergence to zero of the mean integrated squared error (MISE) and of the uniform error are obtained. The results are illustrated by Monte Carlo simulations.

10.
We develop strategies for Bayesian modelling as well as model comparison, averaging and selection for compartmental models with particular emphasis on those that occur in the analysis of positron emission tomography (PET) data. Both modelling and computational issues are considered. Biophysically inspired informative priors are developed for the problem at hand, and by comparison with default vague priors it is shown that the proposed modelling is not overly sensitive to prior specification. It is also shown that an additive normal error structure does not describe measured PET data well, despite being very widely used, and that within a simple Bayesian framework simultaneous parameter estimation and model comparison can be performed with a more general noise model. The proposed approach is compared with standard techniques using both simulated and real data. In addition to good, robust estimation performance, the proposed technique provides, automatically, a characterisation of the uncertainty in the resulting estimates which can be considerable in applications such as PET.

11.
A set of Fortran-77 subroutines is described that computes a nonparametric density estimator expressed as a Fourier series. In addition, a subroutine is given for estimating a cumulative distribution function. Performance measures are reported for samples from a Weibull distribution. Owing to their small size and modest memory demands, these subroutines are easily implemented on most small computers.
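A Python sketch of the kind of Fourier-series estimator such subroutines compute. The cosine basis on (a, b) and the triangular test sample are illustrative choices, not taken from the article:

```python
# A minimal sketch of a truncated Fourier (cosine) series density estimator:
# estimate the coefficients of the orthonormal cosine basis on (a, b) from
# the sample and keep the first m terms.
import math, random

def fourier_density(sample, a, b, m):
    n, L = len(sample), b - a
    # basis sqrt(2/L) * cos(k*pi*(x-a)/L), k >= 1, is orthonormal on (a, b);
    # coef[k-1] folds the two sqrt(2/L) factors into a single 2/L.
    coef = [sum(math.cos(k * math.pi * (x - a) / L) for x in sample) * 2.0 / (n * L)
            for k in range(1, m + 1)]
    def f_hat(x):
        return 1.0 / L + sum(c * math.cos(k * math.pi * (x - a) / L)
                             for k, c in enumerate(coef, start=1))
    return f_hat

random.seed(2)
data = [random.triangular(0, 1, 0.5) for _ in range(4000)]  # density peaks at 0.5
f_hat = fourier_density(data, 0.0, 1.0, m=3)
```

Even with only three terms the estimate rises at the mode and falls near the boundary, though a hard truncation like this trades bias against the series length m.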

12.
There is growing interest in fully MR-based radiotherapy, for which the most important development needed is improved bone tissue estimation; existing model-based methods perform poorly on bone tissues. This paper aims at obtaining improved bone tissue estimation. A skew-Gaussian mixture model and a Gaussian mixture model are proposed to investigate CT image estimation from MR images by partitioning the data into two major tissue types. The performance of the proposed models was evaluated using leave-one-out cross-validation on real data. In comparison with existing model-based approaches, the model-based partitioning approach performed better in bone tissue estimation, especially for dense bone tissue.
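As an illustration of the mixture-model partitioning idea, here is a plain two-component one-dimensional Gaussian mixture fitted by EM. This is a simplification with simulated intensities, not the paper's skew-Gaussian model for MR/CT data:

```python
# A minimal sketch of EM for a two-component 1-d Gaussian mixture: the kind
# of two-tissue-type partitioning the abstract describes, in its simplest form.
import math, random

def em_gmm2(data, n_iter=50):
    mu = [min(data), max(data)]          # crude but effective initialisation
    sd = [1.0, 1.0]
    w = [0.5, 0.5]
    for _ in range(n_iter):
        # E-step: responsibility of each component for each point
        resp = []
        for x in data:
            d = [w[k] / (sd[k] * math.sqrt(2 * math.pi))
                 * math.exp(-(x - mu[k]) ** 2 / (2 * sd[k] ** 2)) for k in (0, 1)]
            s = d[0] + d[1]
            resp.append((d[0] / s, d[1] / s))
        # M-step: responsibility-weighted means, sds, mixing proportions
        for k in (0, 1):
            nk = sum(r[k] for r in resp)
            mu[k] = sum(r[k] * x for r, x in zip(resp, data)) / nk
            sd[k] = math.sqrt(sum(r[k] * (x - mu[k]) ** 2
                                  for r, x in zip(resp, data)) / nk)
            w[k] = nk / len(data)
    return mu, sd, w

random.seed(4)
data = ([random.gauss(0.0, 1.0) for _ in range(300)]      # e.g. soft tissue
        + [random.gauss(6.0, 1.0) for _ in range(300)])   # e.g. bone
mu, sd, w = em_gmm2(data)
```

With components this well separated, EM recovers means near the simulated 0 and 6 and roughly equal mixing weights; the skew-Gaussian variant replaces the symmetric component densities to better match bone intensity histograms.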

13.
In this paper we discuss recursive (or online) estimation in (i) regression and (ii) autoregressive integrated moving average (ARIMA) time series models. The adopted approach uses Kalman filtering techniques to calculate the estimates recursively, and is used for the estimation of constant as well as time-varying parameters. In the first section of the paper we consider the linear regression model and discuss recursive estimation both for constant and for time-varying parameters. For constant parameters, Kalman filtering specializes to recursive least squares. In general, we allow the parameters to vary according to an autoregressive integrated moving average process and update the parameter estimates recursively. Since the stochastic model for the parameter changes will rarely be known, simplifying assumptions have to be made. In particular, we assume a random walk model for the time-varying parameters and show how to determine whether the parameters are changing over time. This is illustrated with an example.
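The constant-parameter special case mentioned above, where Kalman filtering reduces to recursive least squares, can be sketched for a scalar regression through the origin (the data values are made up):

```python
# A minimal sketch of recursive least squares as a scalar Kalman filter:
# each new observation (x_t, y_t) updates the coefficient estimate without
# refitting from scratch.
def rls_1d(xs, ys, p0=1e6):
    """Recursive least squares for the scalar model y = b*x + noise."""
    b, P = 0.0, p0                      # diffuse prior on b
    for x, y in zip(xs, ys):
        K = P * x / (1.0 + x * P * x)   # Kalman gain (unit measurement noise)
        b = b + K * (y - b * x)         # innovation update
        P = (1.0 - K * x) * P           # posterior variance update
    return b

xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.1, 3.9, 6.2, 7.8]               # roughly y = 2x
b_hat = rls_1d(xs, ys)
```

With a diffuse prior the recursion reproduces the batch least-squares slope sum(x*y)/sum(x*x); replacing the constant-parameter state equation with a random walk gives the time-varying version the paper develops.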

14.
The term ‘small area’ or ‘small domain’ is commonly used to denote a small geographical area or a small subpopulation of people within a larger area. Small area estimation is an important topic in survey sampling because of the growing demand for better statistical inference for small areas in public and private surveys. In small area estimation problems the focus is on how to borrow strength across areas in order to develop a reliable estimator that makes use of available auxiliary information. Some traditional methods for small area problems, such as empirical best linear unbiased prediction, borrow strength through linear models that provide links to related areas, which may not be appropriate for some survey data. In this article, we propose a stepwise Bayes approach which borrows strength through an objective posterior distribution. This approach results in a generalized constrained Dirichlet posterior estimator when auxiliary information is available for small areas. The objective posterior distribution is based only on the assumption of exchangeability across related areas and does not make any explicit model assumptions. The form of our posterior distribution allows us to assign a weight to each member of the sample; these weights can then be used in a straightforward fashion to make inferences about the small area means. Theoretically, the stepwise Bayes character of the posterior allows one to prove the admissibility of the point estimators, suggesting that inferential procedures based on this approach will tend to have good frequentist properties. Numerically, we demonstrate in simulations that the proposed stepwise Bayes approach can have substantial advantages over traditional methods.

15.
Extra information, along with the actual variable of interest, may be easily accessible in many practical situations, and sensible use of this additional source may improve the properties of statistical techniques. In this study, we focus on estimators for calibration and propose a setup in which we rely only on the first two moments instead of modelling the whole distributional shape. We propose an estimator for linear calibration problems and investigate it under normal and skewed environments. We partition its mean squared error into intrinsic and estimation components, and observe that the bias and mean squared error of the proposed estimator are functions of four dimensionless quantities. Notably, both the classical and the inverse estimators are special cases of the proposed estimator, and the mean squared error of the proposed estimator coincides with the exact mean squared error of the inverse estimator. We also observe that the proposed estimator performs quite well for skewed errors. Real data applications are included for practical consideration.
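The two special cases named above can be sketched for a simple linear calibration problem: the classical estimator inverts the fitted y-on-x line, while the inverse estimator regresses x on y directly. The calibration data and the new response `y0` are made up:

```python
# A minimal sketch contrasting the classical and inverse estimators in
# linear calibration: fit on calibration pairs (x, y), then estimate the
# unknown x0 behind a new observed response y0.
def ols(u, v):
    """Slope and intercept of the least-squares line v = a + b*u."""
    n = len(u)
    ub, vb = sum(u) / n, sum(v) / n
    b = (sum((x - ub) * (y - vb) for x, y in zip(u, v))
         / sum((x - ub) ** 2 for x in u))
    return vb - b * ub, b

x = [1.0, 2.0, 3.0, 4.0, 5.0]
y = [2.0, 4.1, 5.9, 8.2, 9.8]           # roughly y = 2x
y0 = 7.0                                 # new observed response

a, b = ols(x, y)
x_classical = (y0 - a) / b               # classical: invert y = a + b*x
c, d = ols(y, x)
x_inverse = c + d * y0                   # inverse: regress x on y directly
```

With a strong linear relationship the two estimates nearly coincide (both near 3.5 here); they diverge as the calibration noise grows, which is where the moment-based comparison in the abstract becomes relevant.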

16.
Density-based clustering methods hinge on the idea of associating groups with the connected components of the level sets of the density underlying the data, which is estimated by a nonparametric method. These methods enjoy some desirable properties and generally good performance, but they involve a non-trivial computational effort for the identification of the connected regions. In previous work, the use of a spatial tessellation such as the Delaunay triangulation has been proposed, because it suitably generalizes the univariate procedure for detecting the connected components. However, its computational complexity grows exponentially with the dimensionality of the data, making the triangulation unfeasible in high dimensions. Our aim is to overcome this limitation of the Delaunay triangulation. We discuss an alternative procedure for identifying the connected regions associated with the level sets of the density. By measuring the extent of possible valleys of the density along the segment connecting pairs of observations, the proposed procedure shifts the formulation from a space of arbitrary dimension to a univariate one, leading to benefits both in computation and in visualization.
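The segment-based valley check can be sketched as follows: two points are judged density-connected when a kernel density estimate evaluated along the segment joining them never drops below a fraction of the endpoint densities. The threshold rule, bandwidth, and toy data here are illustrative assumptions, not the authors' exact criterion:

```python
# A minimal sketch of detecting density valleys along the segment between
# two observations, reducing the connectivity question to a 1-d scan.
import math

def kde(data, x, h=0.4):
    """2-d Gaussian kernel density estimate at point x."""
    s = sum(math.exp(-((x[0] - p[0]) ** 2 + (x[1] - p[1]) ** 2) / (2 * h * h))
            for p in data)
    return s / (len(data) * 2 * math.pi * h * h)

def same_cluster(data, a, b, n_grid=20, frac=0.5):
    """True if the density along segment a-b never falls below
    frac * (smaller endpoint density), i.e. no deep valley between them."""
    floor = frac * min(kde(data, a), kde(data, b))
    for i in range(n_grid + 1):
        t = i / n_grid
        x = (a[0] + t * (b[0] - a[0]), a[1] + t * (b[1] - a[1]))
        if kde(data, x) < floor:
            return False
    return True

# Two well-separated blobs: points connect within a blob, not across blobs.
blob1 = [(0.0, 0.0), (0.2, 0.1), (-0.1, 0.2), (0.1, -0.2)]
blob2 = [(5.0, 5.0), (5.2, 5.1), (4.9, 5.2), (5.1, 4.8)]
data = blob1 + blob2
```

Because only a univariate grid along each segment is scanned, the cost per pair is independent of the ambient dimension, which is the computational point the abstract makes.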

17.
A method for nonparametric estimation of a density based on a randomly censored sample is presented. The density is expressed as a linear combination of cubic M-splines, and the coefficients are determined by pseudo-maximum-likelihood estimation (the likelihood is maximized conditionally on data-dependent knots). By using regression splines (a small number of knots) it is possible to reduce the estimation problem to a space of low dimension while preserving flexibility, thus striking a compromise between parametric approaches and ordinary nonparametric approaches based on spline smoothing. The number of knots is determined by the minimum AIC. Examples with simulated and real data are presented. Asymptotic theory and the bootstrap indicate that the precision and the accuracy of the estimates are satisfactory.

18.
In this article, a new model-free feature screening method based on a probability density (mass) function distance (PDFD) correlation is presented for ultrahigh-dimensional data analysis. We improve the fused-Kolmogorov filter (F-KOL) screening procedure through the probability density distribution. The proposed method is fully nonparametric and can be applied to general types of predictors and responses, including discrete and continuous random variables. Kernel density estimation and numerical integration are applied to obtain the proposed estimator. The results of simulation studies indicate that the fused-PDFD performs better than existing screening methods such as the F-KOL filter, sure independence screening (SIS), sure independent ranking and screening (SIRS), distance correlation sure independence screening (DCSIS) and robust ranking correlation screening (RRCS). Finally, we demonstrate the validity of the fused-PDFD with a real data example.

19.
We develop an alternative information-theoretic method of inference for problems in which all of the observed information is in terms of intervals. We focus on the unconditional case, in which the observed information consists of the minimal and maximal values at each period. Given interval data, we infer the joint and marginal distributions of the interval variable and its range. Our inferential procedure is based on entropy maximization subject to multidimensional moment conditions and normalization, in which the entropy is defined over discretized intervals. The discretization is based on theory or on empirically observed quantities. The number of estimated parameters is independent of the discretization, so the level of discretization does not change the fundamental complexity of our model. As an example, we apply our method to study the weather patterns of Los Angeles and New York City across the last century.
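Entropy maximization under moment conditions has a closed exponential-family form whose dual variable can be found numerically. A minimal single-constraint sketch over a discretized support (the four bins and the target mean are made up; the paper's procedure uses multidimensional moments):

```python
# A minimal sketch of maximum entropy subject to one mean constraint over a
# discretized support: the solution is p_i proportional to exp(lambda * x_i),
# and the dual variable lambda is found by bisection on the implied mean.
import math

def maxent(xs, target_mean, lo=-50.0, hi=50.0):
    """Max-entropy probabilities on support xs with the given mean."""
    def mean_for(lam):
        w = [math.exp(lam * x) for x in xs]
        z = sum(w)
        return sum(x * wi for x, wi in zip(xs, w)) / z
    for _ in range(200):                  # bisection: mean_for is increasing
        mid = (lo + hi) / 2
        if mean_for(mid) < target_mean:
            lo = mid
        else:
            hi = mid
    lam = (lo + hi) / 2
    w = [math.exp(lam * x) for x in xs]
    z = sum(w)
    return [wi / z for wi in w]

bins = [0.0, 1.0, 2.0, 3.0]               # discretized interval midpoints
p = maxent(bins, target_mean=1.0)
```

The resulting distribution is the flattest one consistent with the constraint; pulling the mean below the uniform value tilts mass toward the low bins, and the parameter count (one lambda per moment condition) is indeed independent of how finely the support is discretized.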

20.
We propose a semiparametric approach to estimating the existence and location of a statistical change-point in a nonlinear multivariate time series contaminated with an additive noise component. In particular, we consider a p-dimensional stochastic process of independent multivariate normal observations whose mean function varies smoothly except at a single change-point. Our approach involves conducting a Bayesian analysis on the empirical detail coefficients of the original time series after a wavelet transform. If the mean function of the time series can be expressed as a multivariate step function, our Bayesian-wavelet method performs comparably with classical parametric methods such as maximum likelihood estimation. The advantage of our multivariate change-point method is that it applies to a much larger class of mean functions, requiring only general smoothness conditions.
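The role of the wavelet detail coefficients can be sketched with a first-level Haar transform of a simulated noisy step function: the dominant coefficient flags the jump. This illustrates only the wavelet mechanics, not the paper's Bayesian analysis, and the signal is made up:

```python
# A minimal sketch of change-point detection via first-level Haar detail
# coefficients: for a noisy step function the coefficient straddling the
# jump dwarfs the noise-only coefficients.
import random

random.seed(3)
n = 64
signal = [(0.0 if t < 41 else 3.0) + random.gauss(0, 0.3) for t in range(n)]

# First-level Haar detail coefficients d_k = (x_{2k} - x_{2k+1}) / sqrt(2)
details = [(signal[2 * k] - signal[2 * k + 1]) / 2 ** 0.5 for k in range(n // 2)]
k_max = max(range(len(details)), key=lambda k: abs(details[k]))
change_point = 2 * k_max + 1     # first index of the new level within pair k_max
```

A Bayesian analysis of these coefficients, as in the abstract, replaces the simple arg-max by a posterior over the change-point location, which also handles smooth (non-step) mean functions.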

