Similar Articles
1.
This paper presents a new robust, low computational cost technology for recognizing free-form objects in three-dimensional (3D) range data or in two-dimensional (2D) curve data in the image plane. Objects are represented by implicit polynomials (i.e. 3D algebraic surfaces or 2D algebraic curves) of degree greater than two, and are recognized by computing and matching vectors of their algebraic invariants (which are functions of their coefficients that are invariant to translations, rotations and general linear transformations). Such polynomials of the fourth degree can represent objects considerably more complicated than quadrics and super-quadrics, and can realize object recognition at significantly lower computational cost. Unfortunately, the coefficients of high-degree implicit polynomials are highly sensitive to small changes in the data to which the polynomials are fit, thus often making recognition based on these polynomial coefficients or their invariants unreliable. We take two approaches to the problem: one involves restricting the polynomials to those which represent bounded curves and surfaces, and the other is to use Bayesian recognizers. The Bayesian recognizers are remarkably stable and reliable, even when the polynomials have unbounded zero sets and very large coefficient variability. The Bayesian recognizers are a unique interplay of algebraic functions and statistical methods. In this paper, we present these recognizers and show that they work effectively, even when data are missing along a large portion of an object boundary due, for example, to partial occlusion.
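The fitting step behind such recognizers can be illustrated with a minimal sketch: a degree-4 implicit polynomial is fit to 2D boundary points by least squares on the monomial basis. The bounded-polynomial restriction and the Bayesian recognizers described in the abstract are not reproduced here, and the helper names are illustrative.

```python
import numpy as np

def monomials(x, y, degree=4):
    """Monomial basis [x^i * y^j for i + j <= degree] evaluated at each point."""
    cols = [x**i * y**j for i in range(degree + 1)
                        for j in range(degree + 1 - i)]
    return np.column_stack(cols)

def fit_implicit_polynomial(points, degree=4):
    """Least-squares fit of f(x, y) = sum c_ij x^i y^j ~ 0 to boundary points.

    The coefficient vector is the right singular vector of the monomial matrix
    with the smallest singular value (||c|| = 1 avoids the trivial zero fit).
    """
    M = monomials(points[:, 0], points[:, 1], degree)
    _, _, vt = np.linalg.svd(M, full_matrices=False)
    return vt[-1]          # coefficients of the fitted algebraic curve

# toy example: noisy points on a unit circle, fit with a degree-4 curve
t = np.linspace(0, 2 * np.pi, 200)
pts = np.column_stack([np.cos(t), np.sin(t)]) + 0.01 * np.random.randn(200, 2)
coeffs = fit_implicit_polynomial(pts, degree=4)
```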

2.
Previous approaches to model-based object recognition from images have required information about the position and orientation (pose) of the object relative to the camera. Object recognition has been carried out in parallel with pose estimation, leading to algorithms which are error-prone and computationally expensive. Object recognition can be decoupled from pose estimation by using geometrical properties of the object that are unchanged, or invariant, under projection to the image. Thus, the computational cost of object recognition is drastically reduced. A number of invariants of current interest in computer vision are described. The simplest and most fundamental projective invariant, the cross ratio of four collinear points, is then investigated in detail. A simple system is defined for recognizing objects on the basis of the cross ratio alone. The system has a database of models; each model is a single cross ratio value. The performance of the system is characterized by the probability R of rejection, the probability P of misclassification and the probability F of a false alarm. Formulae for R, P and F are stated. The probability density function p(t) for the cross ratio t of four collinear points with independent, identical Gaussian distributions is stated. Experiments have been carried out to see how well the formulae for R, F and p(t) apply in practice. The results are extremely encouraging. The cumulative distribution function for p(t) is closely matched by the cumulative distribution function estimated from natural images. The experimental estimates of R agree well with the theoretical predictions. However, the experimental estimates of F are below the theoretical predictions. Two possible reasons for the discrepancy are suggested: (1) it is due to the finite resolution of the corner detector; and (2) it is due to deviations from the Gaussian distributions assumed in the theoretical calculations. The experimental investigation of R has led to a new, simple and theoretically well-founded way of estimating the accuracy of corner detectors.
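A minimal sketch of the invariant itself: the cross ratio of four collinear points, computed from inter-point distances, is unchanged by a projective transformation of the plane. The homography below is an arbitrary illustrative example.

```python
import numpy as np

def cross_ratio(p1, p2, p3, p4):
    """Cross ratio (|13||24|)/(|14||23|) of four collinear points (one common convention)."""
    d = lambda a, b: np.linalg.norm(np.asarray(b) - np.asarray(a))
    return (d(p1, p3) * d(p2, p4)) / (d(p1, p4) * d(p2, p3))

def apply_homography(H, p):
    """Map a 2D point through a 3x3 projective transformation."""
    q = H @ np.array([p[0], p[1], 1.0])
    return q[:2] / q[2]

# four collinear points on y = 2x + 1 and an arbitrary projective transformation
pts = [np.array([x, 2.0 * x + 1.0]) for x in (0.0, 1.0, 2.0, 5.0)]
H = np.array([[1.2, 0.1, 3.0],
              [-0.2, 0.9, 1.0],
              [0.001, 0.002, 1.0]])
mapped = [apply_homography(H, p) for p in pts]

print(cross_ratio(*pts))      # the two values agree (up to numerical error),
print(cross_ratio(*mapped))   # illustrating projective invariance
```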

3.
A Bayesian approach to object matching is presented. An object and a scene are each represented by features, such as critical points, line segments and surface patches, constrained by unary properties and contextual relations. The matching is posed as a labeling problem, where each feature in the scene is assigned (associated with) a feature of the known model objects. The prior distribution of a scene labeling is modeled as a Markov random field, which encodes the between-object constraints. The conditional distribution of the observed features, given the labeling, is assumed to be Gaussian, which encodes the within-object constraints. An optimal solution is defined as a maximum a posteriori (MAP) estimate. Relationships with previous work are discussed. Experimental results are shown.
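A toy sketch of the MAP-labeling idea, using iterated conditional modes (ICM) as a simple optimizer. This is not the paper's inference procedure; the Gaussian unary cost, the compatibility matrix and the toy data are illustrative assumptions.

```python
import numpy as np

def icm_map_labeling(scene_feats, model_feats, compat, neighbors, sigma=1.0, iters=10):
    """Greedy MAP labeling: Gaussian unary cost (scene feature vs. assigned model
    feature) plus a pairwise reward from a model-model compatibility matrix."""
    n, m = len(scene_feats), len(model_feats)
    # unary[i, k] = -log N(scene_i | model_k, sigma^2 I), up to a constant
    unary = np.array([[np.sum((s - f) ** 2) / (2 * sigma**2)
                       for f in model_feats] for s in scene_feats])
    labels = unary.argmin(axis=1)                 # start from the best unary match
    for _ in range(iters):
        for i in range(n):
            pair = sum(compat[:, labels[j]] for j in neighbors[i])  # shape (m,) or 0
            labels[i] = np.argmin(unary[i] - pair)
    return labels

# toy data: 3 scene features, 2 model features, scene features 0 and 1 are neighbors
scene = [np.array([0.1, 0.0]), np.array([0.9, 1.1]), np.array([0.0, 0.2])]
model = [np.array([0.0, 0.0]), np.array([1.0, 1.0])]
compat = np.array([[1.0, 0.0], [0.0, 1.0]])      # reward consistent neighbor labels
nbrs = {0: [1], 1: [0], 2: []}
print(icm_map_labeling(scene, model, compat, nbrs))
```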

4.
Predictive mean matching imputation is popular for handling item nonresponse in survey sampling. In this article, we study the asymptotic properties of the predictive mean matching estimator for finite-population inference using a superpopulation model framework. We also clarify conditions for its robustness. For variance estimation, the conventional bootstrap is invalid for matching estimators with a fixed number of matches, due to the nonsmooth nature of the matching estimator. We propose a new replication variance estimator, which is asymptotically valid. The key strategy is to construct replicates directly from the linear terms of the martingale representation of the matching estimator, instead of from individual records of variables. Simulation studies confirm that the proposed method provides valid inference.
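A minimal sketch of single-match predictive mean matching with a linear working model: predicted means are computed for everyone, and each nonrespondent receives the observed value of the donor whose predicted mean is closest. Variable names are illustrative, and the paper's replication variance estimator is not reproduced.

```python
import numpy as np

def pmm_impute(x, y, observed):
    """Predictive mean matching with a linear working model and one nearest donor.

    x: (n, p) covariates, y: (n,) outcome (arbitrary values where missing),
    observed: (n,) boolean response indicator.
    """
    X = np.column_stack([np.ones(len(x)), x])
    beta, *_ = np.linalg.lstsq(X[observed], y[observed], rcond=None)
    yhat = X @ beta                                   # predicted means for everyone
    donors = np.flatnonzero(observed)
    y_imp = y.copy()
    for i in np.flatnonzero(~observed):
        nearest = donors[np.argmin(np.abs(yhat[donors] - yhat[i]))]
        y_imp[i] = y[nearest]                         # impute an observed donor value
    return y_imp

rng = np.random.default_rng(0)
x = rng.normal(size=(200, 2))
y = x @ np.array([1.0, -0.5]) + rng.normal(size=200)
obs = rng.random(200) > 0.3                           # roughly 30% item nonresponse
print(pmm_impute(x, y, obs).mean())                   # mean of the completed data
```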

5.
This paper proposes a statistical procedure for the automatic classification and segmentation of volumetric primitives in 3D objects surveyed with high-density laser scanning range measurements. The procedure is carried out in three main phases: first, a nonparametric Taylor-expansion model is applied to study the differential local properties of the surface so as to classify and identify homogeneous point clusters. Classification is based on the surface Gaussian and mean curvature, computed for each point from the estimated differential parameters of the Taylor formula extended to second-order terms. The geometric primitives are classified into the following basic types: elliptic, hyperbolic, parabolic and planar. The last phase is a parametric regression applied to perform a robust segmentation of the various primitives. A Simultaneous AutoRegressive model is applied to define the trend surface for each geometric feature, and a Forward Search procedure highlights outliers or clusters of non-stationary data. An erratum to this article is available.
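A minimal sketch of the curvature-based point classification: fit a local quadratic (second-order Taylor) patch z ≈ f(x, y) by least squares, compute the Gaussian curvature K and mean curvature H from its derivatives, and classify by their signs. Neighborhood selection, the SAR trend surface and the Forward Search are not reproduced, and the toy data are illustrative.

```python
import numpy as np

def classify_point(neigh_xyz, tol=1e-6):
    """Classify a surface point from its neighborhood as elliptic, hyperbolic,
    parabolic or planar using the signs of Gaussian (K) and mean (H) curvature."""
    x, y, z = neigh_xyz[:, 0], neigh_xyz[:, 1], neigh_xyz[:, 2]
    # local quadratic patch z = a x^2 + b xy + c y^2 + d x + e y + f
    A = np.column_stack([x**2, x * y, y**2, x, y, np.ones_like(x)])
    a, b, c, d, e, _ = np.linalg.lstsq(A, z, rcond=None)[0]
    fx, fy = d, e                        # first derivatives at the patch origin
    fxx, fxy, fyy = 2 * a, b, 2 * c      # second derivatives
    g = 1 + fx**2 + fy**2
    K = (fxx * fyy - fxy**2) / g**2
    H = ((1 + fx**2) * fyy - 2 * fx * fy * fxy + (1 + fy**2) * fxx) / (2 * g**1.5)
    if abs(K) < tol:
        return "planar" if abs(H) < tol else "parabolic"
    return "elliptic" if K > 0 else "hyperbolic"

# toy neighborhood sampled from a sphere cap z = sqrt(1 - x^2 - y^2): elliptic
g = np.linspace(-0.1, 0.1, 7)
X, Y = np.meshgrid(g, g)
pts = np.column_stack([X.ravel(), Y.ravel(), np.sqrt(1 - X.ravel()**2 - Y.ravel()**2)])
print(classify_point(pts))               # -> "elliptic"
```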

6.
In this era of Big Data, large-scale data storage provides the motivation for statisticians to analyse new types of data. The proposed work concerns testing serial correlation in a sequence of sets of time series, here referred to as time series objects. An example is serial correlation of monthly stock returns when daily stock returns are observed. One could consider a representative or summarized value of each object to measure the serial correlation, but this approach would ignore information about the variation in the observed data. We develop Kolmogorov–Smirnov-type tests, with standard bootstrap and wild bootstrap Ljung–Box test statistics, for serial correlation in the mean and variance of time series objects; these tests take the variation within a time series object into account. We study the asymptotic properties of the proposed tests and present their finite-sample performance using simulated and real examples.
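For reference, a minimal sketch of a Ljung–Box statistic with an i.i.d.-bootstrap p-value (resampling the series with replacement destroys serial correlation, so it mimics the null). This is a plain single-series version, not the paper's object-level Kolmogorov–Smirnov-type test, and the simulated series is illustrative.

```python
import numpy as np

def ljung_box(x, h=10):
    """Ljung-Box Q statistic for the first h autocorrelations of a series."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    xc = x - x.mean()
    denom = np.sum(xc**2)
    acf = np.array([np.sum(xc[k:] * xc[:-k]) / denom for k in range(1, h + 1)])
    return n * (n + 2) * np.sum(acf**2 / (n - np.arange(1, h + 1)))

def bootstrap_pvalue(x, h=10, B=999, seed=0):
    """p-value from resampling the series i.i.d. (no serial correlation under the null)."""
    rng = np.random.default_rng(seed)
    q_obs = ljung_box(x, h)
    q_boot = np.array([ljung_box(rng.choice(x, size=len(x), replace=True), h)
                       for _ in range(B)])
    return (1 + np.sum(q_boot >= q_obs)) / (B + 1)

rng = np.random.default_rng(1)
ar1 = np.zeros(300)
for t in range(1, 300):                  # AR(1) series with clear serial correlation
    ar1[t] = 0.6 * ar1[t - 1] + rng.normal()
print(bootstrap_pvalue(ar1))             # a small p-value is expected
```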

7.
Practical Bayesian data analysis involves manipulating and summarizing simulations from the posterior distribution of the unknown parameters. By manipulation we mean computing posterior distributions of functions of the unknowns, and generating posterior predictive distributions. The results need to be summarized both numerically and graphically. We introduce, and implement in R, an object-oriented programming paradigm based on a random-variable object type that is implicitly represented by simulations. This makes it possible to define vector and array objects that may contain both random and deterministic quantities, and syntax rules that allow these objects to be treated like any numeric vectors or arrays, providing a solution to various problems encountered in Bayesian computing involving posterior simulations. We illustrate the use of this new programming environment with examples of Bayesian computing, demonstrating missing-value imputation, nonlinear summary of regression predictions, and posterior predictive checking.
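The paper's implementation is in R; a minimal Python analogue sketches the idea: a random-variable object backed by posterior draws, with arithmetic applied draw-wise so that transformed quantities keep their full posterior distribution. The class and names below are illustrative, not the paper's interface.

```python
import numpy as np

class RV:
    """A random variable represented implicitly by an array of simulation draws."""
    def __init__(self, sims):
        self.sims = np.asarray(sims, dtype=float)

    def _lift(self, other):
        return other.sims if isinstance(other, RV) else other

    def __add__(self, other):  return RV(self.sims + self._lift(other))
    def __mul__(self, other):  return RV(self.sims * self._lift(other))
    def __pow__(self, k):      return RV(self.sims ** k)

    def summary(self):
        q = np.percentile(self.sims, [2.5, 50, 97.5])
        return {"mean": self.sims.mean(), "2.5%": q[0], "median": q[1], "97.5%": q[2]}

# posterior draws for two parameters; a derived quantity keeps its full posterior
rng = np.random.default_rng(0)
alpha = RV(rng.normal(1.0, 0.2, size=4000))
beta = RV(rng.normal(0.5, 0.1, size=4000))
derived = alpha + beta * 3.0 + beta ** 2      # draw-wise arithmetic
print(derived.summary())
```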

8.
For estimation of population totals, dual system estimation (DSE) is often used. Such a procedure is known to suffer from bias under certain conditions. In the following, a simple model is proposed that combines three conditions under which bias of the DSE can result. The conditions relate to response correlation, classification error and matching error. The resulting bias is termed model bias. The effects of model bias and synthetic bias in a small-area estimation application are illustrated. The illustration uses simulated population data.
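For orientation, a minimal sketch of the basic dual-system (capture–recapture) estimator N̂ = n₁n₂/m, where n₁ and n₂ are the counts from the two systems and m the number of matched cases. The abstract's bias model is not reproduced; the numbers are illustrative, and matching error enters through an erroneous m.

```python
def dual_system_estimate(n1, n2, m):
    """Basic dual-system (Lincoln-Petersen) estimate of a population total."""
    if m == 0:
        raise ValueError("no matches between the two systems")
    return n1 * n2 / m

# e.g. 900 persons in the first list, 850 in the second, 800 matched
print(dual_system_estimate(900, 850, 800))     # ~956.25

# a matching error that misses some true matches inflates the estimate
print(dual_system_estimate(900, 850, 760))     # ~1006.6
```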

9.
We consider the problem of estimating the quantiles of a distribution function in a fixed-design regression model in which the observations are subject to random right censoring. The quantile estimator is defined via a conditional Kaplan–Meier type estimator of the distribution at a given design point. We establish an almost sure asymptotic representation for this quantile estimator, from which we obtain its asymptotic normality. Because a complicated estimation procedure would be needed to estimate the asymptotic bias and variance, we use a resampling procedure, which provides us, via an asymptotic representation for the bootstrapped estimator, with an alternative to the normal approximation.
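A minimal sketch of a conditional (Beran-type) Kaplan–Meier estimator at a design point x₀ with kernel weights, with the quantile read off as the generalized inverse of the estimated distribution function. Bandwidth choice and the bootstrap are not reproduced, and the helper names and simulated data are illustrative.

```python
import numpy as np

def conditional_km_quantile(x, t, delta, x0, p=0.5, h=0.5):
    """Quantile of the conditional survival-time distribution at design point x0.

    x: design points, t: observed (possibly censored) times,
    delta: 1 = event, 0 = censored, h: kernel bandwidth.
    """
    w = np.exp(-0.5 * ((x - x0) / h) ** 2)          # Gaussian kernel weights
    w = w / w.sum()
    order = np.argsort(t)
    t, delta, w = t[order], delta[order], w[order]
    at_risk = np.cumsum(w[::-1])[::-1]              # total weight with t_j >= t_i
    surv = 1.0
    for ti, di, wi, ri in zip(t, delta, w, at_risk):
        if di == 1:
            surv *= 1.0 - wi / ri                   # Kaplan-Meier-type factor
        if 1.0 - surv >= p:                         # generalized inverse of the cdf
            return ti
    return np.inf                                   # quantile not reached (heavy censoring)

rng = np.random.default_rng(0)
x = rng.uniform(0, 1, 400)
true_t = rng.exponential(scale=1.0 + x)             # survival time depends on x
cens = rng.exponential(scale=3.0, size=400)
t_obs = np.minimum(true_t, cens)
delta = (true_t <= cens).astype(int)
print(conditional_km_quantile(x, t_obs, delta, x0=0.8, p=0.5, h=0.1))
```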

10.
The paper considers the problem of estimating the entire temperature field for every location on the globe from scattered surface air temperatures observed by a network of weather stations. Classical methods such as spherical harmonics and spherical smoothing splines are not efficient in representing data that have inherent multiscale structures. The paper presents an estimation method that can adapt to the multiscale characteristics of the data. The method is based on a spherical wavelet approach that has recently been developed for multiscale representation and analysis of scattered data. Spatially adaptive estimators are obtained by coupling the spherical wavelets with different thresholding (selective reconstruction) techniques. These estimators are compared for their spatial adaptability and extrapolation performance by using the surface air temperature data.

11.
Mixture models are used in a large number of applications, yet there remain difficulties with maximum likelihood estimation. For instance, the likelihood surface for finite normal mixtures often has a large number of local maximizers, some of which do not give a good representation of the underlying features of the data. In this paper we present diagnostics that can be used to check the quality of an estimated mixture distribution. Particular attention is given to normal mixture models, since they frequently arise in practice. We use the diagnostic tools for finite normal mixture problems and in the nonparametric setting, where the difficult problem of determining a scale parameter for a normal mixture density estimate is considered. A large-sample justification for the proposed methodology is provided, and we illustrate its implementation through several examples.
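A small illustration of the multiple-local-maximizer issue that such diagnostics target: the same two-component normal mixture fitted from different random starts can converge to fits with different log-likelihoods and very different component means. This uses scikit-learn and synthetic data as assumptions; it is not the paper's diagnostic procedure.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# two well-separated components plus a few outliers that can attract a spurious fit
x = np.concatenate([rng.normal(-2, 1, 300), rng.normal(3, 1, 300),
                    rng.normal(15, 0.05, 5)]).reshape(-1, 1)

for seed in range(5):
    gm = GaussianMixture(n_components=2, n_init=1, init_params="random",
                         random_state=seed, max_iter=500).fit(x)
    print(f"seed {seed}: loglik/obs = {gm.score(x):.3f}, means = {gm.means_.ravel()}")
# runs that place a component on the outlier cluster reach a different local
# maximum but describe the bulk of the data poorly
```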

12.
Treatment effect estimators that utilize the propensity score as a balancing score, e.g., matching and blocking estimators, are robust to misspecifications of the propensity score model when the misspecified model is itself a balancing score. Such misspecifications arise from using the balancing property of the propensity score in the specification procedure. Here, we study misspecifications of a parametric propensity score model written as a linear predictor in a strictly monotonic function, e.g. a generalized linear model representation. Under mild assumptions we show that for misspecifications such as not adding enough higher-order terms or choosing the wrong link function, the true propensity score is a function of the misspecified model; hence, the latter does not introduce bias into the treatment effect estimator. It is also shown that a misspecification of the propensity score does not necessarily lead to less efficient estimation of the treatment effect. The results of the paper are highlighted in simulations where different misspecifications are studied.
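A minimal sketch of propensity score matching for the average treatment effect on the treated (ATT): a logistic propensity model and one nearest-neighbor control (with replacement) per treated unit on the estimated score. The specification checks discussed in the abstract are not reproduced; scikit-learn and the simulated data are assumptions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def att_propensity_matching(X, treat, y):
    """ATT by 1-nearest-neighbor matching (with replacement) on the estimated propensity score."""
    ps = LogisticRegression(max_iter=1000).fit(X, treat).predict_proba(X)[:, 1]
    treated, controls = np.flatnonzero(treat == 1), np.flatnonzero(treat == 0)
    matched = [controls[np.argmin(np.abs(ps[controls] - ps[i]))] for i in treated]
    return np.mean(y[treated] - y[matched])

rng = np.random.default_rng(0)
n = 2000
X = rng.normal(size=(n, 2))
p = 1 / (1 + np.exp(-(0.8 * X[:, 0] - 0.5 * X[:, 1])))   # true propensity
treat = rng.binomial(1, p)
y = 2.0 * treat + X[:, 0] + rng.normal(size=n)            # true effect = 2
print(att_propensity_matching(X, treat, y))               # roughly 2
```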

13.
A new procedure is proposed to estimate the jump location curve and surface in two-dimensional (2D) and three-dimensional (3D) nonparametric jump regression models, respectively. In each of the 2D and 3D cases, our estimation procedure is motivated by the fact that, under some regularity conditions, the ridge location of the rotational difference kernel estimate (RDKE; Qiu in Sankhyā Ser. A 59, 268–294, 1997, and J. Comput. Graph. Stat. 11, 799–822, 2002; Garlipp and Müller in Sankhyā Ser. A 69, 55–86, 2007) obtained from the noisy image is asymptotically close to the jump location of the true image. Accordingly, a computational procedure based on the kernel smoothing method is designed to find the ridge location of the RDKE, and the result is taken as the jump location estimate. The sequence relationship among the points comprising our jump location estimate is obtained. Our jump location estimate is produced without knowledge of the range or shape of the jump region. Simulation results demonstrate that the proposed estimation procedure can detect the jump location very well, and thus it is a useful alternative for estimating the jump location in each of the 2D and 3D cases.
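To convey the difference-kernel idea in its simplest form, here is a 1D analogue: the absolute difference between one-sided local means peaks at a jump, so its ridge (here, its maximizer) estimates the jump location. The paper's rotational 2D/3D construction is considerably more involved; this sketch and its data are purely illustrative.

```python
import numpy as np

def one_sided_difference(y, h=10):
    """|right-side local mean - left-side local mean| at each interior point.

    Peaks of this statistic indicate candidate jump locations (a 1D analogue of
    the difference-kernel idea; the paper's rotational 2D/3D version differs).
    """
    n = len(y)
    d = np.zeros(n)
    for i in range(h, n - h):
        d[i] = abs(y[i:i + h].mean() - y[i - h:i].mean())
    return d

rng = np.random.default_rng(0)
x = np.linspace(0, 1, 500)
y = np.where(x < 0.6, 1.0, 3.0) + 0.2 * rng.normal(size=500)   # jump at x = 0.6
d = one_sided_difference(y, h=15)
print(x[np.argmax(d)])                                          # close to 0.6
```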

14.
A measure is the formal representation of the non-negative additive functions that abound in science. We review and develop the art of assigning Bayesian priors to measures. Where necessary, spatial correlation is delegated to correlating kernels imposed on otherwise uncorrelated priors. The latter must be infinitely divisible (ID) and hence described by the Lévy–Khinchin representation. Thus the fundamental object is the Lévy measure, the choice of which corresponds to different ID process priors. The general case of a Lévy measure comprising a mixture of assigned base measures leads to a prior process comprising a convolution of corresponding processes. Examples involving a single base measure are the gamma process, the Dirichlet process (for the normalized case) and the Poisson process. We also discuss processes that we call the supergamma and super-Dirichlet processes, which are double base measure generalizations of the gamma and Dirichlet processes. Examples of multiple and continuum base measures are also discussed. We conclude with numerical examples of density estimation.
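As a concrete point of reference for one of the single-base-measure priors mentioned above, a minimal sketch of drawing from a Dirichlet process via the standard stick-breaking construction. This is not the construction used in the paper; the standard-normal base measure, truncation level and names are illustrative assumptions.

```python
import numpy as np

def dirichlet_process_draw(alpha=2.0, n_atoms=500, seed=0):
    """Truncated stick-breaking draw from DP(alpha, N(0, 1)).

    Returns atom locations and their (renormalized) stick-breaking weights.
    """
    rng = np.random.default_rng(seed)
    betas = rng.beta(1.0, alpha, size=n_atoms)
    remaining = np.concatenate([[1.0], np.cumprod(1.0 - betas[:-1])])
    weights = betas * remaining                       # stick-breaking weights
    atoms = rng.normal(size=n_atoms)                  # draws from the base measure
    return atoms, weights / weights.sum()             # renormalize after truncation

atoms, w = dirichlet_process_draw()
samples = np.random.default_rng(1).choice(atoms, size=1000, p=w)   # sample the random measure
print(len(np.unique(samples)))    # far fewer distinct values than 1000: the measure is discrete
```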

15.
This article compares the inverse-probability-of-selection-weighting estimation principle with the matching principle and derives conditions under which weighting and matching identify the same distribution and the true distribution, respectively. This comparison improves the understanding of the relation between these estimation principles and allows new estimators to be constructed.
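For contrast with the matching sketch given earlier in this list, a minimal inverse-probability-weighting estimator of a population mean under selection. The selection probabilities are assumed known here; in practice they would be estimated, and the simulated setting is illustrative.

```python
import numpy as np

def ipw_mean(y_selected, pi_selected):
    """Estimate of the population mean of y from a selected subsample,
    weighting each selected unit by 1 / selection probability (Hajek form)."""
    w = 1.0 / pi_selected
    return np.sum(w * y_selected) / np.sum(w)

rng = np.random.default_rng(0)
n = 50_000
y = rng.normal(loc=1.0, scale=1.0, size=n)
pi = 1 / (1 + np.exp(-(y - 1.0)))          # selection depends on y: the naive mean is biased
sel = rng.random(n) < pi
print(y[sel].mean())                       # biased upward
print(ipw_mean(y[sel], pi[sel]))           # close to the true mean of 1.0
```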

16.
Quality-adjusted survival has been increasingly advocated in clinical trials as a synthesis of survival and quality of life. We investigate nonparametric estimation of its expectation for a general multistate process with incomplete follow-up data. Upon establishing a representation of expected quality-adjusted survival through marginal distributions of a set of defined events, we propose two estimators for expected quality-adjusted survival. Expressed as functions of Nelson–Aalen estimators, the two estimators are strongly consistent and asymptotically normal. We derive their asymptotic variances and propose sample-based variance estimates, along with an evaluation of asymptotic relative efficiency. Monte Carlo studies show that these estimation procedures perform well for practical sample sizes. We illustrate the methods using data from a national, multicenter AIDS clinical trial.
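Since both proposed estimators are built from Nelson–Aalen estimators, a minimal sketch of the Nelson–Aalen cumulative hazard estimate from right-censored data may help fix ideas; the quality-adjusted-survival construction itself is not reproduced, and the simulated data are illustrative.

```python
import numpy as np

def nelson_aalen(times, events):
    """Nelson-Aalen estimate of the cumulative hazard from right-censored data.

    times: observed times, events: 1 = event observed, 0 = censored.
    Returns the event times and the cumulative hazard evaluated at them.
    """
    order = np.argsort(times)
    times, events = np.asarray(times)[order], np.asarray(events)[order]
    n = len(times)
    at_risk = n - np.arange(n)                # number still at risk just before each time
    event_idx = events == 1
    increments = 1.0 / at_risk[event_idx]     # one event per time assumed (no ties)
    return times[event_idx], np.cumsum(increments)

rng = np.random.default_rng(0)
t_true = rng.exponential(scale=2.0, size=1000)       # constant hazard 0.5
c = rng.exponential(scale=4.0, size=1000)
t_obs, d = np.minimum(t_true, c), (t_true <= c).astype(int)
et, H = nelson_aalen(t_obs, d)
print(H[np.searchsorted(et, 2.0)])                   # roughly 0.5 * 2.0 = 1.0
```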

17.
Chaos theory holds that most human behavior exhibits nonlinear characteristics. Accounting fraud falls within the scope of behavioral accounting research, yet fraud-detection models traditionally built on statistical theory are mostly restricted by linearity assumptions and may therefore suffer from model misspecification and insufficient information extraction. Using listed companies on the Shanghai and Shenzhen A-share markets that were penalized by regulators, together with matched control companies, as the sample, this paper draws on the nonlinear idea of the Taylor expansion and uses principal component analysis to remove multicollinearity among the variables, constructing a nonlinear principal-component logistic regression model for detecting accounting fraud. Compared with the linear regression model, the proposed model achieves a higher fraud-detection accuracy and a better model fit. Applying this model helps extract fraud-detection information more fully and improves the efficiency of fraud detection.
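A minimal sketch of the modeling pipeline described above, second-order (Taylor-style) polynomial terms, principal component analysis to remove multicollinearity, then logistic regression, applied to synthetic data. scikit-learn, the data and the settings are illustrative assumptions; this is not the paper's fitted model.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures, StandardScaler
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 1000
X = rng.normal(size=(n, 6))                               # financial-ratio style predictors
# a nonlinear (quadratic and interaction) signal drives the fraud label
logit = 1.5 * X[:, 0] ** 2 - 1.0 * X[:, 1] * X[:, 2] + 0.5 * X[:, 3] - 1.0
y = rng.binomial(1, 1 / (1 + np.exp(-logit)))

nonlinear_pc_logit = make_pipeline(
    PolynomialFeatures(degree=2, include_bias=False),     # second-order Taylor-style terms
    StandardScaler(),
    PCA(n_components=0.95),                               # keep 95% of variance, drop collinearity
    LogisticRegression(max_iter=1000),
)
linear_logit = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))

print(cross_val_score(nonlinear_pc_logit, X, y, cv=5).mean())   # typically higher accuracy
print(cross_val_score(linear_logit, X, y, cv=5).mean())
```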

18.
In recent years, a number of statistical models have been proposed for high-level image analysis tasks such as object recognition. However, in general, these models remain hard to use in practice, partly as a result of their complexity and partly through lack of software. In this paper we concentrate on a particular deformable template model which has proved potentially useful for locating and labelling cells in microscope slides (Rue and Hurn, 1999). This model requires the specification of a number of rather non-intuitive parameters which control the shape variability of the deformed templates. Our goal is to arrange the estimation of these parameters so that the microscope user's expertise is exploited to provide the necessary training data graphically, by identifying a number of cells displayed on a computer screen, while no additional statistical input is required. We use maximum likelihood estimation incorporating the error structure in the generation of the training data.

19.
A lever model for evaluating technological innovation capability is first constructed through three steps: characterization, visualization and abstraction. Based on the lever model, an evaluation index system for the technological innovation capability of China's high-tech industry is then established. On this basis, an evaluation model based on the particle swarm optimization algorithm is constructed and used to evaluate and analyze the technological innovation capability of 17 sub-sectors within the five major sectors of China's high-tech industry. The results show that the technological innovation capability of the 17 sub-sectors differs markedly; the top three sub-sectors are, in order, communication equipment manufacturing, household audio-visual equipment manufacturing, and aircraft manufacturing and repair. Furthermore, according to the scores and rankings of the three component capabilities of each sub-sector's technological innovation capability, the 17 sub-sectors are divided into three types, "basically matched", "weakly matched" and "mismatched"; only two sub-sectors, chemical raw pharmaceutical manufacturing and instrument and meter manufacturing, fall into the "basically matched" type.
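A minimal sketch of the particle swarm optimization (PSO) algorithm underlying the evaluation model, here minimizing a simple test function rather than the paper's evaluation objective. All settings, bounds and the objective are illustrative assumptions.

```python
import numpy as np

def pso_minimize(f, dim, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Basic particle swarm optimization of f over [-5, 5]^dim."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-5, 5, size=(n_particles, dim))       # positions
    v = np.zeros_like(x)                                   # velocities
    pbest, pbest_val = x.copy(), np.apply_along_axis(f, 1, x)
    gbest = pbest[np.argmin(pbest_val)]
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = x + v
        vals = np.apply_along_axis(f, 1, x)
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = x[improved], vals[improved]
        gbest = pbest[np.argmin(pbest_val)]
    return gbest, pbest_val.min()

sphere = lambda z: np.sum((z - 1.0) ** 2)                  # minimum at (1, ..., 1)
best_x, best_val = pso_minimize(sphere, dim=4)
print(best_x.round(3), best_val)
```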
