Similar Documents
20 similar documents found.
1.
This paper, dedicated to the 80th birthday of Professor C. R. Rao, deals with asymptotic distributions of Fréchet sample means and the Fréchet total sample variance that are used in particular for data on projective shape spaces or on 3D shape spaces. One considers the intrinsic means associated with Riemannian metrics that are locally flat in a geodesically convex neighborhood around the support of a probability measure on a shape space or on a projective shape space. Such methods are needed to derive tests concerning the variability of planar projective shapes in natural images, or large-sample and bootstrap confidence intervals for 3D mean shape coordinates of an ordered set of landmarks from laser images.
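As an illustration of the intrinsic-mean idea (on the unit sphere rather than the paper's projective shape spaces), here is a minimal numpy sketch of a Fréchet sample mean computed by the standard fixed-point iteration with the sphere's log and exp maps. The iteration count and tolerance are arbitrary choices, and the data are simulated.

```python
import numpy as np

def sphere_log(p, x):
    """Log map at p on the unit sphere: tangent vector pointing toward x."""
    c = np.clip(np.dot(p, x), -1.0, 1.0)
    theta = np.arccos(c)
    if theta < 1e-12:
        return np.zeros_like(p)
    return (theta / np.sin(theta)) * (x - c * p)

def sphere_exp(p, v):
    """Exp map at p: follow the geodesic in direction v for length ||v||."""
    nv = np.linalg.norm(v)
    if nv < 1e-12:
        return p
    return np.cos(nv) * p + np.sin(nv) * (v / nv)

def frechet_mean(points, iters=100, tol=1e-10):
    """Minimise the sum of squared geodesic distances by the usual
    fixed-point iteration: step along the mean log-map."""
    p = points[0]
    for _ in range(iters):
        v = np.mean([sphere_log(p, x) for x in points], axis=0)
        if np.linalg.norm(v) < tol:
            break
        p = sphere_exp(p, v)
    return p

rng = np.random.default_rng(0)
pts = rng.normal([0.0, 0.0, 1.0], 0.1, size=(50, 3))   # cluster near the north pole
pts /= np.linalg.norm(pts, axis=1, keepdims=True)      # project onto the sphere
print(frechet_mean(pts))
```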

2.
In this paper we use a penalized likelihood approach to image warping in the context of discrimination and averaging. The choice of average image is formulated statistically by minimizing a penalized likelihood, where the likelihood measures the similarity between images after warping and the penalty measures the distortion of a warping. Similarity is measured in terms of normalized image information, and the measures of distortion are landmark based; thus we use a combination of landmark and normalized image information. The average defined in the paper is also extended by allowing random perturbation of the landmarks, a strategy that improves averages for discrimination purposes. Real applications from medical and biological areas are given.

3.
4.
Summary.  We consider the Bayesian analysis of human movement data, where the subjects perform various reaching tasks. A set of markers is placed on each subject and a system of cameras records the three-dimensional Cartesian co-ordinates of the markers during the reaching movement. It is of interest to describe the mean and variability of the curves that are traced by the markers during one reaching movement, and to identify any differences due to covariates. We propose a methodology based on a hierarchical Bayesian model for the curves. An important part of the method is to obtain identifiable features of the movement so that different curves can be compared after temporal warping. We consider four landmarks, with a set of equally spaced pseudo-landmarks located in between. We demonstrate that the algorithm works well in locating the landmarks, and shape analysis techniques are used to describe the posterior distribution of the mean curve. A feature of this type of data is that some parts of the movement data may be missing; the Bayesian methodology is easily adapted to cope with this situation.

5.
In the context of functional data analysis, we propose new two-sample tests for homogeneity. Based on some well-known depth measures, we construct four different statistics in order to measure the distance between the two samples. A simulation study is performed to check the efficiency of the tests when confronted with shape and magnitude perturbations. Finally, we apply these tools to measure the homogeneity of some samples of real data, obtaining good results with this new method.
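The paper's four statistics are not reproduced here; as a hedged sketch of the general recipe, the following numpy code builds one depth-based two-sample homogeneity test, using the Fraiman-Muniz integrated depth and a permutation null. The statistic (the first sample's mean depth with respect to the second) and all tuning values are illustrative assumptions.

```python
import numpy as np

def fm_depth(curves, sample):
    """Fraiman-Muniz integrated depth of each curve in `curves` with
    respect to `sample`; both are (n_curves, n_gridpoints) arrays."""
    F = (sample[None, :, :] <= curves[:, None, :]).mean(axis=1)  # pointwise ecdf
    return (1.0 - np.abs(0.5 - F)).mean(axis=1)                  # integrate over grid

def homogeneity_pvalue(x, y, n_perm=999, seed=0):
    """Permutation two-sample test: the first sample's mean depth with
    respect to the second is small when the samples differ."""
    rng = np.random.default_rng(seed)
    pooled, n = np.vstack([x, y]), len(x)
    stat = lambda z: fm_depth(z[:n], z[n:]).mean()
    t_obs = stat(pooled)
    hits = sum(stat(pooled[rng.permutation(len(pooled))]) <= t_obs
               for _ in range(n_perm))
    return (hits + 1) / (n_perm + 1)

grid = np.linspace(0, 1, 25)
rng = np.random.default_rng(1)
x = np.sin(2 * np.pi * grid) + 0.2 * rng.normal(size=(30, 25))
y = np.sin(2 * np.pi * grid) + 0.5 + 0.2 * rng.normal(size=(30, 25))  # magnitude shift
print("p-value:", homogeneity_pvalue(x, y))
```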

6.
7.
Summary.  We consider the analysis of extreme shapes rather than the more usual mean- and variance-based shape analysis. In particular, we consider extreme shape analysis in two applications: human muscle fibre images, where we compare healthy and diseased muscles, and temporal sequences of DNA shapes from molecular dynamics simulations. One feature of the shape space is that it is bounded, so we consider estimators which use prior knowledge of the upper bound when present. Peaks-over-threshold methods and maximum-likelihood-based inference are used. We introduce fixed-end-point and constrained maximum likelihood estimators, and we discuss their asymptotic properties for large samples. It is shown that in some cases the constrained estimators have half the mean-square error of the unconstrained maximum likelihood estimators. The new estimators are applied to the muscle and DNA data, and practical conclusions are given.
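To make the bounded-support idea concrete: in a peaks-over-threshold model with generalized Pareto excesses, a known upper bound B forces a negative shape xi and ties the scale to it via sigma = -xi * B, and the constrained MLE of xi then has a closed form. The sketch below is a generic illustration of that calculation under these assumptions, not the paper's estimators.

```python
import numpy as np

def gpd_fixed_endpoint_mle(excesses, bound):
    """MLE of the GPD shape xi when the upper endpoint of the excess
    distribution is known to equal `bound`, which ties the scale to the
    shape via sigma = -xi * bound.  Maximising the constrained
    log-likelihood gives xi_hat = mean(log(1 - y/bound))."""
    y = np.asarray(excesses)
    assert np.all((y >= 0) & (y < bound))
    xi_hat = np.mean(np.log1p(-y / bound))   # always negative
    sigma_hat = -xi_hat * bound
    return xi_hat, sigma_hat

# simulate GPD excesses with xi = -0.5, sigma = 1 (true endpoint = 2)
rng = np.random.default_rng(0)
u = rng.uniform(size=2000)
xi, sigma = -0.5, 1.0
y = sigma / xi * (u ** (-xi) - 1.0)          # inverse-cdf sampling
print(gpd_fixed_endpoint_mle(y, bound=sigma / -xi))
```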

8.
One method of expressing coarse information about the shape of an object is to describe the shape by its landmarks, which can be taken as meaningful points on the outline of an object. We consider a situation in which we want to classify shapes into known populations based on their landmarks, invariant to the location, scale and rotation of the shapes. A neural network method for transformation-invariant classification of landmark data is presented. The method is compared with the (non-transformation-invariant) complex Bingham rule; the two techniques are tested on two sets of simulated data, and on data arising from mouse vertebrae. Despite the complex Bingham rule's obvious advantage of exploiting rotation information, the neural network method compares favourably.
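Transformation invariance of the inputs can also be obtained by preprocessing. As a hedged illustration (not the paper's network architecture), this numpy sketch removes location, scale and rotation from a 2-D landmark configuration by full Procrustes fitting onto a reference shape, treating each landmark as a complex number x + iy.

```python
import numpy as np

def procrustes_align(landmarks, reference):
    """Remove location, scale and rotation from a 2-D landmark
    configuration by full Procrustes fitting onto a reference shape,
    so a downstream classifier sees transformation-invariant input.
    Landmarks are complex numbers, one per point."""
    z = landmarks - landmarks.mean()    # remove location
    r = reference - reference.mean()
    z = z / np.linalg.norm(z)           # remove scale
    r = r / np.linalg.norm(r)
    beta = np.vdot(z, r)                # conj(z).r: optimal rotation angle
    return z * beta / abs(beta)         # rotate onto the reference

ref = np.array([0 + 0j, 1 + 0j, 1 + 1j, 0 + 1j])       # unit square
shape = 3.0 * np.exp(1j * 0.7) * ref + (2 + 5j)         # scaled, rotated, shifted
print(np.allclose(procrustes_align(shape, ref),
                  procrustes_align(ref, ref)))           # True: invariant
```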

9.
聂斌, 杜梦莹, 廖丹. 《统计研究》 2012, 29(9): 88-94
In Phase I of statistical process control, accurately identifying the time point at which the process state shifts is the key to effective control. This paper takes the eccentricity of data in a multidimensional space as the criterion for its change-point rule: probability density profiles transform the sequence of individual observations into points in a multidimensional space, data depth techniques are used to construct a feature variable, and a change-point localization rule is built on it. Simulation results show that the new method locates the change point accurately without assuming that the process follows a normal distribution, and it also exhibits good overall performance in comparative studies.
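The probability-density-profile construction is not reproduced here. As a simplified stand-in, the sketch below embeds a univariate sequence in R^2 by sliding windows, scores each point with a Mahalanobis-type depth, and locates the change point where the gap between the segments' average depths is largest; the embedding, the depth function and the split criterion are all illustrative assumptions.

```python
import numpy as np

def mahalanobis_depth(X):
    """Depth of each row of X with respect to the whole sample:
    1 / (1 + squared Mahalanobis distance to the sample mean)."""
    d = X - X.mean(axis=0)
    S_inv = np.linalg.inv(np.cov(X.T))
    md2 = np.einsum('ij,jk,ik->i', d, S_inv, d)
    return 1.0 / (1.0 + md2)

def locate_change_point(X, min_seg=10):
    """Pick the split that maximises the gap between the average depths
    of the two segments (a crude eccentricity-based criterion)."""
    depth = mahalanobis_depth(X)
    n = len(depth)
    best_tau, best_gap = None, -np.inf
    for tau in range(min_seg, n - min_seg):
        gap = abs(depth[:tau].mean() - depth[tau:].mean())
        if gap > best_gap:
            best_tau, best_gap = tau, gap
    return best_tau

# univariate sequence with a mean shift after t = 120,
# embedded in R^2 via (x_t, x_{t+1}) sliding windows
rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(0, 1, 120), rng.normal(2, 1, 80)])
X = np.column_stack([x[:-1], x[1:]])
print("estimated change point:", locate_change_point(X))
```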

10.
We define the β-skeleton depth based on the probability that a point is contained within the β-skeleton influence region of two i.i.d. random vectors. The proposed family of depth functions satisfies the four desirable properties of a statistical depth function. We also define and examine the sample β-skeleton depth functions and show that they share well-behaved asymptotic properties, including uniform consistency and asymptotic normality. Finally, we explore the β-skeleton multidimensional medians as location estimators of the center of multivariate distributions, discuss their asymptotic properties, and study their breakdown point. A Monte Carlo study compares the β-skeleton medians with the random Tukey median and the sample mean.
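For β = 2 the influence region of a pair (X_i, X_j) is the lens formed by intersecting the two balls centred at X_i and X_j with radius ||X_i - X_j||, and the sample depth of a point is the fraction of pairs whose lens contains it. A minimal O(n^2) numpy sketch of that special case (lens depth), with simulated data:

```python
import numpy as np
from itertools import combinations

def lens_depth(q, X):
    """Sample beta-skeleton depth for beta = 2 (lens depth): the fraction
    of pairs (X_i, X_j) whose lens-shaped influence region contains q.
    The lens is the intersection of the two balls centred at X_i and X_j
    with radius ||X_i - X_j||."""
    n = len(X)
    count = 0
    for i, j in combinations(range(n), 2):
        r = np.linalg.norm(X[i] - X[j])
        if np.linalg.norm(q - X[i]) <= r and np.linalg.norm(q - X[j]) <= r:
            count += 1
    return count / (n * (n - 1) / 2)

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 2))
print("depth at centre:", lens_depth(np.zeros(2), X))
print("depth at edge:  ", lens_depth(np.array([3.0, 3.0]), X))
```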

11.
This paper describes a technique for building compact models of the shape and appearance of flexible objects seen in two-dimensional images. The models are derived from the statistics of sets of images of example objects with 'landmark' points labelled on each object. Each model consists of a flexible shape template, describing how the landmark points can vary, and a statistical model of the expected grey levels in regions around each point. Such models have proved useful in a wide variety of applications. We describe how the models can be used in local image search and give examples of their application.

13.
In extending univariate outlier detection methods to higher dimensions, various issues arise: limited visualization methods, inadequacy of marginal methods, lack of a natural order, limited parametric modeling, and, when using Mahalanobis distance, restriction to ellipsoidal contours. To address and overcome such limitations, we introduce nonparametric multivariate outlier identifiers based on multivariate depth functions, which can generate contours following the shape of the data set. Also, we study masking robustness, that is, robustness against misidentification of outliers as nonoutliers. In particular, we define a masking breakdown point (MBP), adapting to our setting certain ideas of Davies and Gather [1993. The identification of multiple outliers (with discussion). Journal of the American Statistical Association 88, 782–801] and Becker and Gather [1999. The masking breakdown point of multivariate outlier identification rules. Journal of the American Statistical Association 94, 947–955] based on the Mahalanobis distance outlyingness. We then compare four affine invariant outlier detection procedures, based on Mahalanobis distance, halfspace or Tukey depth, projection depth, and “Mahalanobis spatial” depth. For the goal of threshold-type outlier detection, it is found that the Mahalanobis distance and projection procedures are distinctly superior in performance, each with very high MBP, while the halfspace approach is quite inferior. When a moderate MBP suffices, the Mahalanobis spatial procedure is competitive in view of its contours not being constrained to be elliptical and its relatively mild computational burden. A small sampling experiment yields findings completely in accordance with the theoretical comparisons. While these four depth procedures are relatively comparable for the purpose of robust affine equivariant location estimation, the halfspace depth is not competitive with the others for the quite different goal of robustly setting an outlyingness threshold.
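A minimal sketch of the threshold-type rule in its simplest, non-robust form: score each point by its Mahalanobis distance from the sample mean and flag those exceeding a chi-square-based cutoff. A masking-robust identifier would substitute robust location and scatter estimates (e.g. MCD); the cutoff below uses the closed-form chi-square quantile for 2 degrees of freedom, so it applies to bivariate data only.

```python
import numpy as np

def mahalanobis_outlyingness(X):
    """Mahalanobis distance of each row from the sample mean; a
    masking-robust rule would use robust estimates (e.g. MCD) instead
    of the classical mean and covariance used here."""
    d = X - X.mean(axis=0)
    S_inv = np.linalg.inv(np.cov(X.T))
    return np.sqrt(np.einsum('ij,jk,ik->i', d, S_inv, d))

def flag_outliers_2d(X, alpha=0.01):
    """Threshold rule for bivariate data: under normality the squared
    distance is chi-square with 2 df, whose upper-alpha quantile has
    the closed form -2*log(alpha)."""
    cutoff = np.sqrt(-2.0 * np.log(alpha))
    return mahalanobis_outlyingness(X) > cutoff

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(size=(200, 2)), [[6.0, 6.0], [7.0, -5.0]]])
print(np.where(flag_outliers_2d(X))[0])   # flagged indices, incl. the two planted outliers
```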

14.
A notion of data depth is used to measure the centrality or outlyingness of a data point in a given data cloud. In this context, the point (or points) having maximum depth is called the deepest point (or points). In the present work, we propose three multi-sample tests for testing equality of location parameters of multivariate populations by using the deepest point (or points). These tests can be considered as extensions of two-sample tests based on the deepest point (or points). The proposed tests are implemented through the idea of Fisher's permutation test. The performance of the proposed and earlier tests is studied by simulation. An illustration with two real datasets is also provided.
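The paper's three statistics are not specified in this abstract; the sketch below shows the general shape of such a test under illustrative assumptions, using the spatial median (the deepest point under spatial depth, computed by Weiszfeld iteration) and a Fisher-style label-permutation null.

```python
import numpy as np

def spatial_median(X, iters=200, tol=1e-9):
    """Weiszfeld iteration for the spatial (L1) median, the deepest
    point under spatial depth."""
    m = X.mean(axis=0)
    for _ in range(iters):
        d = np.maximum(np.linalg.norm(X - m, axis=1), 1e-12)
        w = 1.0 / d
        m_new = (w[:, None] * X).sum(axis=0) / w.sum()
        if np.linalg.norm(m_new - m) < tol:
            return m_new
        m = m_new
    return m

def deepest_point_test(samples, n_perm=499, seed=0):
    """Multi-sample location test: the statistic is the total distance of
    the per-sample deepest points from the pooled deepest point; the
    null distribution comes from permuting sample labels."""
    rng = np.random.default_rng(seed)
    sizes = [len(s) for s in samples]
    pooled = np.vstack(samples)
    def stat(Z):
        centre = spatial_median(Z)
        t, start = 0.0, 0
        for n in sizes:
            t += np.linalg.norm(spatial_median(Z[start:start + n]) - centre)
            start += n
        return t
    t_obs = stat(pooled)
    hits = sum(stat(pooled[rng.permutation(len(pooled))]) >= t_obs
               for _ in range(n_perm))
    return (hits + 1) / (n_perm + 1)

rng = np.random.default_rng(1)
a = rng.normal(0.0, 1, size=(40, 2))
b = rng.normal(0.8, 1, size=(40, 2))   # shifted location
print("p-value:", deepest_point_test([a, b]))
```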

15.
The influence of individual points in an ordinal logistic model is considered when the aim is to determine their effects on the predictive probability in a Bayesian predictive approach. Our concern is to study the effects produced when the data are slightly perturbed, in particular by observing how these perturbations affect the predictive probabilities and consequently the classification of future cases. We measure the extent of the change in the predictive distribution when an individual point is omitted (deleted) from the sample, using a divergence measure suggested by Johnson (1985) to quantify the discrepancy between the full data and the data with the case deleted. The methodology is illustrated on data used in Titterington et al. (1981).

16.
陈辉, 陈建成. 《统计研究》 2008, 25(11): 64-71
This paper uses copula functions to study the statistical simulation of the multivariate financial data underlying insurance investment portfolios. Reflecting the particular features of insurance investment in China, four risky assets (the CSI 300 index, a fund index, an enterprise-bond index and a treasury-bond index) are chosen to simulate the stock, fund, enterprise-bond and treasury-bond returns in the portfolio. Based on the simulation results, the total portfolio risk is computed both with traditional approximations (Add-VaR, N-VaR and H-VaR) and with the copula method. Relative to Copula-VaR, Add-VaR significantly overestimates the risk and N-VaR significantly underestimates it; H-VaR approximates Copula-VaR fairly well but still overestimates it, so H-VaR is a relatively conservative alternative to Copula-VaR. The effects on total portfolio risk of changing the portfolio weights and of the choice of copula function are also analysed.

17.
We discuss the detection of a connected shape in a noisy image. Two types of image are considered: in the first a degraded outline of the shape is visible, while in the second the data are a corrupted version of the shape itself. In the first type the shape is defined by a thin outline of pixels whose records differ from those at pixels inside and outside the shape, while in the second type the shape is defined by its edge, and pixels inside and outside the shape have different records. Our motivation is the identification of cross-sectional head shapes in ultrasound images of human fetuses. For images of the first type we describe and discuss a new approach that uses a specially designed filter function to identify the outline pixels of the head iteratively. For images of the second type we then suggest a way, based on the cascade algorithm introduced by Jubb and Jennison (1991), of improving and considerably speeding up the edge-detection method proposed by Storvik (1994).

18.
The estimated test error of a learned classifier is the most commonly reported measure of classifier performance. However, constructing a high-quality point estimator of the test error has proved to be very difficult, and common interval estimators (e.g. confidence intervals) are based on the point estimator of the test error and thus inherit all the difficulties associated with the point estimation problem. As a result, these confidence intervals do not reliably deliver nominal coverage. In contrast, we construct the confidence interval directly from smooth, data-dependent upper and lower bounds on the test error. We prove that for linear classifiers the proposed confidence interval automatically adapts to the non-smoothness of the test error, is consistent under fixed and local alternatives, and does not require that the Bayes classifier be linear. Moreover, the method provides nominal coverage on a suite of test problems using a range of classification algorithms and sample sizes.

19.
The likelihood function from a large sample is commonly assumed to be approximately a normal density function. The literature supports, under mild conditions, an approximate normal shape about the maximum; but typically a stronger result is needed: that the normalized likelihood itself is approximately a normal density. In a transformation-parameter context, we consider the likelihood normalized relative to right-invariant measure, and in the location case under moderate conditions show that the standardized version converges almost surely to the standard normal. Also in a transformation-parameter context, we show that almost sure convergence of the normalized and standardized likelihood to a standard normal implies that the standardized distribution for conditional inference converges almost surely to a corresponding standard normal. This latter result is of immediate use for a range of estimating, testing, and confidence procedures on a conditional-inference basis.

20.
In this paper, it is demonstrated that the coefficient of determination of an ANOVA linear model provides a measure of polarization. Taking the link between polarization and dispersion as the starting point, we reformulate the polarization measure of Zhang and Kanbur using the decomposition of the variance instead of the decomposition of the Theil index. We show that the proposed measure is equivalent to the coefficient of determination of an ANOVA linear model that explains, for example, household income as a function of a population characteristic such as education, gender or occupation. This result provides an alternative way to analyse polarization by sub-population characteristics and at the same time allows us to compare sub-populations via the estimated coefficients of the ANOVA model.
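Concretely, the coefficient of determination of a one-way ANOVA is the between-group share of the total variance, so the measure can be computed directly from the variance decomposition. A small numpy sketch with made-up income and education data:

```python
import numpy as np

def polarization_r2(values, groups):
    """Coefficient of determination of the one-way ANOVA of `values` on
    `groups`: the between-group share of total variance, read here as a
    polarization measure."""
    values, groups = np.asarray(values, dtype=float), np.asarray(groups)
    grand = values.mean()
    ss_total = ((values - grand) ** 2).sum()
    ss_between = 0.0
    for g in np.unique(groups):
        v = values[groups == g]
        ss_between += len(v) * (v.mean() - grand) ** 2
    return ss_between / ss_total

income = [18, 22, 20, 35, 40, 38, 70, 75, 72]                      # made-up data
educ = ['low', 'low', 'low', 'mid', 'mid', 'mid', 'hi', 'hi', 'hi']
print("polarization by education:", round(polarization_r2(income, educ), 3))
```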
