Similar articles
Found 20 similar articles (search time: 31 ms)
1.
We introduce a general class of complex elliptical distributions on a complex sphere that includes many of the most commonly used distributions, such as the complex Watson, Bingham, angular central Gaussian and several others. We study properties of this family of distributions and apply the distribution theory to modeling shapes in two dimensions. We develop maximum likelihood and Bayesian methods of estimation to describe shape, and obtain confidence bounds and credible regions for shapes. The methodology is illustrated with an example in which the shape of mouse vertebrae is estimated.

2.
This paper highlights distributional connections between directional statistics and shape analysis. In particular, we provide a test of uniformity for highly dispersed shapes, using the standard techniques of directional statistics. We exploit the isometric transformation from triangular shapes to a sphere in three dimensions, to provide a rich class of shape distributions. A link between the Fisher distribution and the complex Bingham distribution is re-examined. Some extensions to higher-dimensional shapes are outlined.
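The "standard techniques of directional statistics" the abstract invokes include the classical Rayleigh test of uniformity for unit vectors. As a minimal sketch (the paper's test for highly dispersed shapes is a refinement of this idea, and all names below are illustrative): for n unit vectors in R^p, the statistic p·n·||mean||² is asymptotically chi-squared with p degrees of freedom under uniformity.

```python
import numpy as np

def rayleigh_statistic(x):
    """Rayleigh test statistic for uniformity of unit vectors.

    x: (n, p) array of unit vectors. Under uniformity the statistic
    is asymptotically chi-squared with p degrees of freedom; large
    values indicate a concentrated (non-uniform) sample.
    """
    n, p = x.shape
    rbar = np.linalg.norm(x.mean(axis=0))  # mean resultant length
    return p * n * rbar ** 2

# A uniform sample on the sphere: normalize Gaussian draws.
rng = np.random.default_rng(0)
z = rng.standard_normal((500, 3))
u = z / np.linalg.norm(z, axis=1, keepdims=True)
s = rayleigh_statistic(u)  # small for a uniform sample
```

A concentrated sample (e.g. Gaussian draws shifted toward one direction before normalizing) yields a much larger statistic, which is how the test detects departure from uniformity.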

3.
4.
The complex Bingham distribution is relevant for the shape analysis of landmark data in two dimensions. In this paper it is shown that the problem of simulating from this distribution reduces to simulation from a truncated multivariate exponential distribution. Several simulation methods are described and their efficiencies are compared.
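The building block of that reduction is sampling an exponential variable truncated to (0, 1), which can be done exactly by inversion; a rejection step then confines the coordinates to the simplex. A hedged sketch (rates and function names are illustrative, and the paper's algorithms contain further refinements):

```python
import numpy as np

def truncated_exp(lam, size, rng):
    """Exponential(rate=lam) truncated to (0, 1), sampled by inversion.

    CDF on (0, 1): F(x) = (1 - exp(-lam*x)) / (1 - exp(-lam)),
    so x = -log(1 - u*(1 - exp(-lam))) / lam for u ~ Uniform(0, 1).
    """
    u = rng.uniform(size=size)
    return -np.log(1.0 - u * (1.0 - np.exp(-lam))) / lam

def simplex_rejection(lams, rng, max_tries=100_000):
    """Draw s_j ~ truncated exponential(lam_j) on (0, 1) independently
    and accept only when sum(s) < 1 -- the kind of rejection step the
    truncated-multivariate-exponential reduction relies on."""
    for _ in range(max_tries):
        s = np.array([truncated_exp(l, None, rng) for l in lams])
        if s.sum() < 1.0:
            return s
    raise RuntimeError("no acceptance within max_tries")

rng = np.random.default_rng(1)
x = truncated_exp(3.0, 10_000, rng)   # all draws lie in (0, 1)
```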

5.
This paper presents a novel Bayesian method, based on the complex Watson shape distribution, for detecting shape differences between the second thoracic vertebrae of two groups of mice, small and large, categorized according to body weight. Using the data provided in Johnson et al. (1988), we provide Bayesian methods of estimation as well as highest posterior density (HPD) estimates for the modal vertebra shape within each group. Finally, we present a classification procedure that can be used in any shape classification experiment, and apply it to categorize new vertebra shapes into the small or large group.

6.
This paper, dedicated to the 80th birthday of Professor C. R. Rao, deals with asymptotic distributions of Fréchet sample means and the Fréchet total sample variance, which are used in particular for data on projective shape spaces or on 3D shape spaces. We consider the intrinsic means associated with Riemannian metrics that are locally flat in a geodesically convex neighborhood of the support of a probability measure on a shape space or projective shape space. Such methods are needed to derive tests concerning the variability of planar projective shapes in natural images, and large-sample and bootstrap confidence intervals for 3D mean shape coordinates of an ordered set of landmarks from laser images.

7.
We examine the use of confocal laser tomographic images for detecting glaucoma. From the clinical standpoint, the optic nerve head (ONH) area contains all the relevant information on glaucoma. The shape of the ONH is approximately a skewed cup. We summarize its shape by three biological landmarks on the neural rim and a fourth landmark at the point of maximum depth, which is approximately the point where the optic nerve enters the eye cup. These four landmarks are extracted from images of Rhesus monkeys before and after inducing glaucoma. A previous analysis of the Bookstein shape coordinates of these four landmarks revealed only marginally significant findings. From clinical experience, it is believed that the ratio of the depth to the diameter of the eye cup provides a useful measure of shape change. We consider the bootstrap distribution of this normalized 'depth' (G) and give evidence that it provides an appropriate measure of shape change. The measure G is labelled the glaucoma index. Further experiments are in progress to validate its use for glaucoma in humans.
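The bootstrap distribution of a ratio statistic like G is straightforward to compute by resampling. A minimal sketch with synthetic depth and diameter values (the numbers below are purely illustrative; the paper's data come from landmarks on ONH images):

```python
import numpy as np

rng = np.random.default_rng(2)
# Illustrative synthetic measurements, NOT the paper's data.
depth = rng.normal(1.9, 0.2, size=12)
diameter = rng.normal(1.5, 0.15, size=12)

def bootstrap_ci(num, den, b=2000, alpha=0.05, rng=rng):
    """Percentile bootstrap confidence interval for the mean of the
    ratio G = depth / diameter, resampling subjects with replacement."""
    n = len(num)
    stats = np.empty(b)
    for i in range(b):
        idx = rng.integers(0, n, size=n)     # resampled subject indices
        stats[i] = np.mean(num[idx] / den[idx])
    lo, hi = np.quantile(stats, [alpha / 2, 1 - alpha / 2])
    return lo, hi

lo, hi = bootstrap_ci(depth, diameter)
```

Comparing such intervals before and after the induced glaucoma is one simple way to give evidence that G captures the shape change.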

8.
We describe methods to detect influential observations in a sample of pre-shapes when the underlying distribution is assumed to be complex Bingham. One of these methods is based on Cook's distance, derived from the likelihood of the complex Bingham distribution. The other is related to the tangent space and is based on local influence for the multivariate normal distribution. A method to detect outliers is also described. The application of the methods is illustrated on both a real dataset and a simulated sample.

9.
Overfitting occurs when one tries to train a large model on a small amount of data. Regularizing a neural network using prior knowledge remains a topic of research, as it is not settled how much prior information can be given to the network. In this paper, a novel algorithm is introduced that uses regularization to train a neural network without enlarging the dataset. Trivial prior information, the class label, is supplied to the model during training, and Laplace noise is injected into an intermediate layer for better generalization. The results show significant improvement in accuracy on standard datasets for a simple convolutional neural network (CNN). While the proposed method outperforms previous regularization techniques such as dropout and batch normalization, it can also be applied alongside them for further performance gains. On variants of MNIST, the proposed algorithm achieved an average 48% increase in test accuracy.
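The noise-injection idea can be sketched independently of any deep-learning framework: during training, add Laplace-distributed noise to the intermediate activations; at evaluation time, leave them untouched. This numpy toy network is only an illustration of that mechanism, not the paper's CNN, and the scale 0.1 is an arbitrary choice:

```python
import numpy as np

rng = np.random.default_rng(3)

def forward(x, w1, w2, noise_scale=0.1, train=True):
    """Tiny two-layer network. Laplace noise is injected into the
    intermediate (hidden) activations during training only, acting
    as a regularizer; evaluation is deterministic."""
    h = np.maximum(x @ w1, 0.0)                       # hidden layer, ReLU
    if train:
        h = h + rng.laplace(0.0, noise_scale, size=h.shape)
    return h @ w2

x = rng.standard_normal((4, 8))
w1 = rng.standard_normal((8, 16)) * 0.1
w2 = rng.standard_normal((16, 3)) * 0.1
y_train = forward(x, w1, w2, train=True)   # noisy forward pass
y_eval = forward(x, w1, w2, train=False)   # deterministic forward pass
```

The same pattern composes naturally with dropout or batch normalization, consistent with the abstract's remark that the techniques can be combined.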

10.
The odd Weibull distribution is a three-parameter generalization of the Weibull and inverse Weibull distributions with rich density and hazard shapes for modeling lifetime data. This paper explores the odd Weibull parameter regions having finite moments and examines relations to some well-known distributions based on skewness and kurtosis functions. The existence of maximum likelihood estimators is shown for complete data of any sample size; uniqueness of these estimators is proved only when the absolute value of the second shape parameter lies between zero and one. Furthermore, elements of the Fisher information matrix are obtained for complete data using a single-integral representation, which is shown to exist for any parameter values. The performance of the odd Weibull distribution over various density and hazard shapes is compared with the generalized gamma distribution using two different test statistics. Finally, two data sets are analysed for illustrative purposes.
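For concreteness, here is the distribution function in one common parameterization of the odd Weibull family, obtained by applying an "odds" power transformation to the Weibull survival function; the paper's own parameter names and sign conventions may differ, so treat alpha (scale) and beta, theta (shape parameters) below as assumptions:

```python
import math

def odd_weibull_cdf(x, alpha, beta, theta):
    """CDF of the odd Weibull distribution in one common parameterization:
    F(x) = 1 - [1 + (exp((x/alpha)^beta) - 1)^theta]^(-1), x > 0.
    With theta = 1 this reduces to the ordinary Weibull CDF."""
    if x <= 0:
        return 0.0
    t = math.expm1((x / alpha) ** beta)   # exp(.) - 1, computed stably
    return 1.0 - 1.0 / (1.0 + t ** theta)
```

The reduction at theta = 1 is a quick sanity check: there F(x) = 1 - exp(-(x/alpha)^beta), the Weibull CDF.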

11.
We propose novel parametric concentric multi-unimodal small-subsphere families of densities for (p − 1 ≥ 2)-dimensional spherical data. Their parameters describe a common axis for K small hypersubspheres, an array of K directional modes, one mode for each subsphere, and K pairs of concentration parameters, each pair governing horizontal (within the subsphere) and vertical (orthogonal to the subsphere) concentrations. We introduce two kinds of distributions. In its one-subsphere version, the first kind coincides with a special case of the Fisher–Bingham distribution; the second kind is a novel adaptation that models independent horizontal and vertical variations. In its multi-subsphere version, the second kind allows for correlation of horizontal variation over different subspheres. In medical imaging, the situation p − 1 = 2 occurs precisely in modeling the variation of a skeletally represented organ shape due to rotation, twisting, and bending. For both kinds, we provide new computationally feasible algorithms for simulation and estimation, and propose several tests. To the best of the authors' knowledge, the proposed models are the first to treat the variation of directional data along several concentric small hypersubspheres, concentrated near modes on each subsphere, let alone with horizontal dependence. Using several simulations, we show that our methods are more powerful than a recent nonparametric method and ad hoc methods. Using data from medical imaging, we demonstrate the advantage of our method and draw inference on the dominating axis of rotation of the human knee joint at different walking phases.

12.
A mathematical classification method is presented to show how numerical tests for abnormal anatomical shape change can be used to study geometric shape changes of the hippocampus in relation to the occurrence of schizophrenia. The method uses the well-known best Bayesian decision rule for two simple hypotheses. The technique is illustrated by applying the hypothesis test to some preliminary hippocampal data. The data pool available for the experiment consisted of 10 subjects, five of whom were diagnosed with schizophrenia and five of whom were not. Even though the information used in the experiment is limited and the number of subjects relatively small, we are confident that the classification method presented is of significance and, given proper data, can be used successfully as a diagnostic tool.
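The best Bayesian decision rule for two simple hypotheses is just "choose H1 when the posterior odds exceed one". A minimal sketch with one-dimensional Gaussian likelihoods standing in for the shape-change statistic (the Gaussian model, parameter names, and priors here are illustrative, not the paper's hippocampal model):

```python
import math

def classify(x, mu0, mu1, sigma, prior1=0.5):
    """Best Bayesian decision rule for two simple hypotheses with
    Gaussian likelihoods N(mu0, sigma^2) vs N(mu1, sigma^2):
    choose H1 exactly when prior1*f1(x) > prior0*f0(x), i.e. when
    the log posterior odds are positive."""
    log_odds = (math.log(prior1 / (1 - prior1))
                + (-(x - mu1) ** 2 + (x - mu0) ** 2) / (2 * sigma ** 2))
    return 1 if log_odds > 0 else 0
```

With equal priors this reduces to the familiar likelihood-ratio rule, which minimizes the probability of misclassification for simple-vs-simple hypotheses.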

13.
Wang, Haixu; Cao, Jiguo. Statistics and Computing (2020) 30(5): 1209–1220

Reconstructing the functional network of a neuron cluster is a fundamental step to reveal the complex interactions among neural systems of the brain. Current approaches to reconstruct a network of neurons or neural systems focus on establishing a static network by assuming the neural network structure does not change over time. To the best of our knowledge, this is the first attempt to build a time-varying directed network of neurons by using an ordinary differential equation model, which allows us to describe the underlying dynamical mechanism of network connections. The proposed method is demonstrated by estimating a network of wide dynamic range neurons located in the dorsal horn of the rats’ spinal cord in response to pain stimuli applied to the Zusanli acupoint on the right leg. The finite sample performance of the proposed method is also investigated with a simulation study.


14.

In analyzing two multivariate normal data sets, equality of the covariance matrices is usually assumed by default for subsequent inference. If this equality does not hold, later inferences become more complex and usually approximate. If one detects some identical components between the two decomposed, unequal covariance matrices and uses this extra information, subsequent inferences can be performed more accurately. For this purpose, we consider statistical tests about the equality of components of the decomposed covariance matrices of two multivariate normal populations, with emphasis on the spectral decomposition of these matrices. Hypotheses about the equality of sizes, shapes, and sets of directions, as components of the two covariance matrices, are tested by the likelihood ratio test (LRT). Simulation studies are carried out to investigate the accuracy and power of the LRT, and analyses of two real data sets are given as illustration.
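As a univariate analogue of testing one component ("size") of two decomposed covariance matrices, consider the LRT for equality of two normal variances with unrestricted means; the matrix-valued tests in the paper follow the same likelihood-ratio pattern. This is a sketch under those simplifying assumptions, not the paper's test:

```python
import numpy as np

def lrt_equal_variance(x, y):
    """-2 log likelihood ratio for H0: the two normal samples share a
    common variance (means unrestricted). Asymptotically chi-squared
    with 1 degree of freedom under H0; always nonnegative."""
    n1, n2 = len(x), len(y)
    v1 = np.var(x)                          # MLE variance (ddof=0)
    v2 = np.var(y)
    v = (n1 * v1 + n2 * v2) / (n1 + n2)     # pooled MLE under H0
    return n1 * np.log(v / v1) + n2 * np.log(v / v2)

rng = np.random.default_rng(4)
same = lrt_equal_variance(rng.normal(0, 1, 200), rng.normal(0, 1, 200))
diff = lrt_equal_variance(rng.normal(0, 1, 200), rng.normal(0, 3, 200))
```

Comparing the statistic to a chi-squared critical value gives the test; for equal variances the statistic stays small, while a threefold difference in standard deviation produces a very large value.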

15.
If interest lies in reporting absolute measures of risk from time-to-event data, then obtaining an appropriate approximation to the shape of the underlying hazard function is vital. It has previously been shown that restricted cubic splines can be used to approximate complex hazard functions in the context of time-to-event data. The degree of complexity of the spline functions is dictated by the number of knots. We highlight, through a motivating example, that complex hazard function shapes are often required when analysing time-to-event data. Through simulation, we show that, provided a sufficient number of knots is used, the hazard functions approximated by restricted cubic splines fit closely to the true function for a range of complex hazard shapes. The simulation results also highlight the insensitivity of the estimated relative effects (hazard ratios) to correct specification of the baseline hazard.
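A restricted cubic spline with k knots is a cubic spline constrained to be linear beyond the boundary knots, leaving k − 1 basis columns (x itself plus k − 2 nonlinear terms). A sketch in Harrell's widely used parameterization, which may differ in scaling from the basis the paper uses:

```python
import numpy as np

def rcs_basis(x, knots):
    """Restricted cubic spline basis (Harrell-style): cubic between
    knots, constrained to be linear beyond the boundary knots.
    k knots give k-1 columns: x itself plus k-2 truncated-cubic terms."""
    x = np.asarray(x, dtype=float)
    t = np.sort(np.asarray(knots, dtype=float))
    k = len(t)
    pos3 = lambda u: np.maximum(u, 0.0) ** 3   # (u)_+^3
    cols = [x]
    for j in range(k - 2):
        term = (pos3(x - t[j])
                - pos3(x - t[k - 2]) * (t[k - 1] - t[j]) / (t[k - 1] - t[k - 2])
                + pos3(x - t[k - 1]) * (t[k - 2] - t[j]) / (t[k - 1] - t[k - 2]))
        cols.append(term)
    return np.column_stack(cols)
```

Adding knots enlarges this basis, which is exactly how the "degree of complexity" of the approximated hazard is controlled; the linear tails keep extrapolation beyond the data stable.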

16.
We present a new method to describe shape change and shape differences in curves, by constructing a deformation function in terms of a wavelet decomposition. Wavelets form an orthonormal basis that allows representations at multiple resolutions. The deformation function is estimated, in a fully Bayesian framework, using a Markov chain Monte Carlo (MCMC) algorithm. This Bayesian formulation incorporates prior information about the wavelets and the deformation function. The flexibility of the MCMC approach allows estimation of complex but clinically important summary statistics, such as curvature in our case, as well as estimates of deformation functions with variance estimates, and allows thorough investigation of the posterior distribution. This work is motivated by multi-disciplinary research involving a large-scale longitudinal study of idiopathic scoliosis in UK children. The paper provides novel statistical tools to study this spinal deformity, from which 5% of UK children suffer. Using the data, we consider statistical inference for shape differences between normals, scoliotics and developers of scoliosis, in particular for spinal curvature, and look at longitudinal deformations to describe shape changes over time.
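The multiresolution property rests on repeatedly splitting a sampled curve into coarse approximation and detail coefficients with an orthonormal wavelet. The Haar wavelet is the simplest such basis and makes the mechanism transparent (the paper may well use a smoother wavelet family); one decomposition level can be sketched as:

```python
import numpy as np

def haar_step(signal):
    """One level of the orthonormal Haar wavelet transform: split a
    curve sampled at 2m points into m coarse approximation and m
    detail coefficients. Recursing on the approximation yields the
    multiresolution representation."""
    s = np.asarray(signal, dtype=float)
    even, odd = s[0::2], s[1::2]
    approx = (even + odd) / np.sqrt(2.0)
    detail = (even - odd) / np.sqrt(2.0)
    return approx, detail

def haar_inverse(approx, detail):
    """Invert one Haar step; orthonormality gives perfect reconstruction."""
    s = np.empty(2 * len(approx))
    s[0::2] = (approx + detail) / np.sqrt(2.0)
    s[1::2] = (approx - detail) / np.sqrt(2.0)
    return s
```

Because the transform is orthonormal, energy is preserved across levels and the curve is recovered exactly from its coefficients, which is what makes priors placed on wavelet coefficients translate cleanly into priors on the deformation function.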

17.
The complex Watson distribution is an important simple distribution on the complex sphere which is used in statistical shape analysis. We describe the density, obtain the integrating constant and provide large sample approximations. Maximum likelihood estimation and hypothesis testing procedures for one and two samples are described. The particular connection with shape analysis is discussed and we consider an application examining shape differences between normal and schizophrenic brains. We make some observations about Bayesian shape inference and finally we describe a more general rotationally symmetric family of distributions.

18.
Stereology typically concerns estimation of properties of a geometric structure from plane-section information. This paper provides a brief review of some statistical aspects of this rapidly developing field, with some reference to applications in the earth sciences. After an introductory discussion of the scope of stereology, section 2 briefly mentions results applicable when no assumptions can be made about the stochastic nature of the sampled matrix, statistical considerations then arising solely from the 'randomness' of the plane section. The next two sections postulate embedded particles of specific shapes, the particular case of spheres being discussed in some detail.

References are made to results for 'thin slices' and other probing mechanisms. Randomly located convex particles, of otherwise arbitrary shape, are discussed in section 5, and the review concludes with a specific application of stereological ideas to some data on neolithic mining.

19.
Although the effect of missing data on regression estimates has received considerable attention, the effect on predictive performance has been neglected. We studied the performance of three missing-data strategies—omission of records with missing values, replacement with a mean, and imputation based on regression—on the predictive performance of logistic regression (LR), classification tree (CT) and neural network (NN) models in the presence of data missing completely at random (MCAR). Models were constructed using datasets of size 500 simulated from a joint distribution of binary and continuous predictors including nonlinearities, collinearity and interactions between variables. Though omission produced models that fit better on the data from which the models were developed, imputation was superior on average to omission for all models when evaluated by receiver operating characteristic (ROC) curve area, mean squared error (MSE), pooled variance across outcome categories and calibration χ² on an independently generated test set. However, in about one-third of simulations, omission performed better. Performance was also more variable with omission, including quite a few instances of extremely poor performance. Replacement and imputation generally produced similar results, except with neural networks, for which replacement, the strategy typically used in neural network algorithms, was inferior to imputation. Missing data affected simpler models much less than more complex models, such as generalized additive models that focus on local structure. For moderate-sized datasets, logistic regressions that use simple nonlinear structures, such as quadratic terms and piecewise linear splines, appear to be at least as robust to randomly missing values as neural networks and classification trees.
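The two non-omission strategies compared above are easy to sketch side by side on synthetic MCAR data (the data-generating numbers here are illustrative, not the paper's simulation design): mean replacement fills every gap with one constant, while regression imputation predicts the missing predictor from an observed, correlated one.

```python
import numpy as np

rng = np.random.default_rng(5)
n = 300
x1 = rng.standard_normal(n)
x2 = 0.9 * x1 + 0.3 * rng.standard_normal(n)   # predictor correlated with x1
miss = rng.uniform(size=n) < 0.2               # ~20% missing completely at random
x2_obs = np.where(miss, np.nan, x2)

# Strategy 1: mean replacement -- one constant fills every gap.
x2_mean = np.where(miss, np.nanmean(x2_obs), x2_obs)

# Strategy 2: regression imputation -- predict x2 from x1,
# fitting on the complete cases only.
A = np.column_stack([np.ones(n), x1])[~miss]
beta, *_ = np.linalg.lstsq(A, x2[~miss], rcond=None)
pred = beta[0] + beta[1] * x1
x2_reg = np.where(miss, pred, x2_obs)
```

When the predictors are correlated, regression imputation reconstructs the missing values far more accurately than the single mean, which is one mechanism behind its superiority in the study's comparisons.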

20.
Continuous shape change is represented as curves in the shape space. A method for checking the closeness of these curves to a geodesic is presented. Three large databases of short human motions are considered and shown to be well approximated by geodesics. The motions are thus approximated by two shapes on the geodesic and the rate of progress along the path. An analysis of facial motion data taken from a study of subjects with cleft lip or cleft palate is presented that allows the motion to be considered independently from the static shape. Inferential methods for assessing the change in motion are presented. The construction of predicted animated motions is discussed.
