Similar Articles (20 results)
1.
This paper presents nonparametric two-sample bootstrap tests for means of random symmetric positive-definite (SPD) matrices according to two different metrics: the Frobenius (or Euclidean) metric, inherited from the embedding of the set of SPD matrices in the Euclidean space of symmetric matrices, and the canonical metric, which is defined without an embedding and suggests an intrinsic analysis. A fast algorithm is used to compute the bootstrap intrinsic means in the case of the latter. The methods are illustrated in a simulation study and applied to a two-group comparison of means of diffusion tensors (DTs) obtained from a single voxel of registered DT images of children in a dyslexia study.
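To make the two notions of mean concrete, here is a minimal numpy sketch (not the paper's code) contrasting the arithmetic Frobenius mean of SPD matrices with a log-Euclidean mean; the latter is only a simple stand-in for an intrinsic mean under the canonical metric, and all function names are ours.

```python
import numpy as np

def sym_logm(S):
    """Matrix logarithm of a symmetric positive-definite matrix via eigendecomposition."""
    w, V = np.linalg.eigh(S)
    return (V * np.log(w)) @ V.T

def sym_expm(S):
    """Matrix exponential of a symmetric matrix via eigendecomposition."""
    w, V = np.linalg.eigh(S)
    return (V * np.exp(w)) @ V.T

def frobenius_mean(mats):
    """Extrinsic mean under the Frobenius (Euclidean) metric: the arithmetic average."""
    return np.mean(mats, axis=0)

def log_euclidean_mean(mats):
    """Geometric-style mean: average in matrix-log space, then map back."""
    return sym_expm(np.mean([sym_logm(S) for S in mats], axis=0))
```

Averaging in log space guarantees the result stays positive definite, which the plain arithmetic average also does, but with a different (typically larger) set of eigenvalues.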

2.
Abstract. The Central Limit Theorem (CLT) for extrinsic and intrinsic means on manifolds is extended to a generalization of Fréchet means. Examples are the Procrustes mean for 3D Kendall shapes as well as a mean introduced by Ziezold. This allows for one-sample tests that were previously not possible, and for numerically assessing the 'inconsistency of the Procrustes mean' under a perturbation model and the 'inconsistency' within a model recently proposed for diffusion tensor imaging. It is also shown that the CLT can be extended to mildly rank-deficient diffusion tensors. An application to forestry gives the temporal evolution of Douglas fir tree stems, tending strongly towards cylinders at early ages and tending away with increased competition.

3.
We propose a new method of nonparametric estimation which is based on locally constant smoothing with an adaptive choice of weights for every pair of data points. Some theoretical properties of the procedure are investigated. Then we demonstrate the performance of the method on some simulated univariate and bivariate examples and compare it with other nonparametric methods. Finally we discuss applications of this procedure to magnetic resonance and satellite imaging.

4.
Summary. It is occasionally necessary to smooth data over domains in R^2 with complex irregular boundaries or interior holes. Traditional methods of smoothing which rely on the Euclidean metric or which measure smoothness over the entire real plane may then be inappropriate. This paper introduces a bivariate spline smoothing function defined as the minimizer of a penalized sum-of-squares functional. The roughness penalty is based on a partial differential operator and is integrated only over the problem domain by using finite element analysis. The method is motivated by and applied to two sample smoothing problems and is compared with the thin plate spline.

5.
In this paper, I explore the use of positive definite metric tensors derived from second derivative information in the context of the simplified manifold Metropolis adjusted Langevin algorithm. I propose a new adaptive step size procedure that resolves the shortcomings of such metric tensors in regions where the log-target has near-zero curvature in some direction. The adaptive step size selection also appears to alleviate the need for different tuning parameters in the transient and stationary regimes that is typical of the Metropolis adjusted Langevin algorithm. The combination of metric tensors derived from second derivative information and the adaptive step size selection constitutes a large step towards developing reliable manifold Markov chain Monte Carlo methods that can be implemented automatically for models with unknown or intractable Fisher information, and even for target distributions that do not admit factorization into prior and likelihood. Through examples of low to moderate dimension, I show that the proposed methodology performs very well relative to alternative Markov chain Monte Carlo methods.
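A single proposal step of the simplified manifold MALA that the abstract builds on can be sketched as follows (numpy; the Metropolis accept/reject correction and the paper's adaptive step size rule are omitted, and the function name is ours):

```python
import numpy as np

def smmala_proposal(x, grad_log_target, G, eps, rng):
    """One simplified-manifold MALA proposal: Langevin drift preconditioned
    by G^{-1}, plus Gaussian noise with covariance eps^2 * G^{-1}."""
    Ginv = np.linalg.inv(G)
    mean = x + 0.5 * eps ** 2 * Ginv @ grad_log_target(x)
    L = np.linalg.cholesky(Ginv)          # so that L @ z has covariance G^{-1}
    return mean + eps * L @ rng.standard_normal(x.shape)
```

With G equal to the negative Hessian of the log-target, the proposal automatically rescales each direction by the local curvature, which is exactly where near-zero curvature causes the trouble the adaptive step size is designed to fix.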

6.
In areas such as kernel smoothing and non-parametric regression, there is emphasis on smooth interpolation. We concentrate on pure interpolation and build smooth polynomial interpolators by first extending the monomial (polynomial) basis and then minimizing a measure of roughness with respect to the extra parameters in the extended basis. Algebraic methods can help in choosing the extended basis. We get arbitrarily close to optimal smoothing for any dimension over an arbitrary region, giving simple models close to splines. We show in examples that smooth interpolators perform much better than straight polynomial fits and, for small sample sizes, better than kriging-type methods used, for example, in computer experiments.

7.
This article provides alternative circular smoothing methods for the nonparametric estimation of periodic functions. By treating the data as 'circular', we resolve the 'boundary issue' that arises when the data are treated as 'linear'. By redefining the distance metric and the signed distance, we modify many estimators used in situations involving periodic patterns. From the perspective of nonparametric estimation of periodic functions, we present examples of nonparametric estimation of (1) a periodic function, (2) multiple periodic functions, (3) an evolving function, (4) a periodically varying-coefficient model and (5) a generalized linear model with a periodically varying coefficient. From the perspective of circular statistics, we provide alternative approaches to calculating the weighted average and evaluating 'linear/circular-linear/circular' association and regression. Simulation studies and an empirical study of an electricity price index illustrate and compare our methods with other methods in the literature.
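The redefined 'circular' distance idea can be sketched in a few lines: wrap the distance modulo the period and plug it into an ordinary kernel smoother, so that points near one boundary borrow strength from the other. This is an illustrative minimal example, not the article's estimators:

```python
import numpy as np

def circ_signed_dist(a, b, period=2 * np.pi):
    """Signed shortest (wrap-around) distance from b to a on a circle of given period."""
    return (a - b + period / 2) % period - period / 2

def circular_kernel_smooth(t0, t, y, h, period=2 * np.pi):
    """Nadaraya-Watson estimate at t0 using a Gaussian kernel on the
    circular distance, which removes the usual boundary effect."""
    d = circ_signed_dist(t0, t, period)
    w = np.exp(-0.5 * (d / h) ** 2)
    return np.sum(w * y) / np.sum(w)
```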

8.
New points can be superimposed on a Euclidean configuration obtained as a result of a metric multidimensional scaling at coordinates given by Gower's interpolation formula. The procedure amounts to discarding a, possibly nonnull, coordinate along an additional dimension. We derive an analytical formula for this projection error term and, for real data problems, we describe a statistical method for testing its significance, as a cautionary device prior to further distance-based predictions.

9.
This paper proposes a new factor rotation for the context of functional principal components analysis. This rotation seeks to re-express a functional subspace in terms of directions of decreasing smoothness as represented by a generalized smoothing metric. The rotation can be implemented simply and we show on two examples that this rotation can improve the interpretability of the leading components.

10.
A number of areas related to learning under supervision have not been fully investigated, particularly the possibility of incorporating classification methods into shape analysis. In this regard, practical ideas conducive to the improvement of form classification are the focus of interest. Our proposal is to employ a hybrid classifier built on Euclidean Distance Matrix Analysis (EDMA) and Procrustes distance, rather than generalised Procrustes analysis (GPA). In empirical terms, it has been demonstrated that there is a notable difference between the estimated form and the true form when EDMA is used as the basis for computation; this does not seem to be the case when GPA is employed. Under the assumption that no association exists between landmarks, EDMA and GPA are used to calculate the mean form and a diagonal weighting matrix to build superimposing classifiers. As our findings indicate, the superimposing classifiers we propose work extremely well with EDMA estimators, as opposed to GPA, on both simulated and real datasets.

11.
Cross-validation as a means of choosing the smoothing parameter in spline regression has achieved wide popularity. Its appeal is that of an automatic method based on an attractive criterion, and along with many other methods it has been shown to minimize predictive mean square error asymptotically. However, in practice there may be a substantial proportion of applications where a cross-validation style choice leads to drastic undersmoothing, often as far as interpolation. Furthermore, because the criterion is so appealing, the user may be misled by an inappropriate, automatically chosen value. In this paper we investigate the nature of cross-validatory methods in spline smoothing regression and suggest variants which provide small-sample protection against undersmoothing.
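A bare-bones version of the cross-validatory choice discussed above, written for a Nadaraya-Watson kernel smoother rather than a spline (an illustrative stand-in; the grid and kernel are our assumptions, and this naive form is exactly the kind of selector that can undersmooth):

```python
import numpy as np

def loocv_score(h, t, y):
    """Leave-one-out prediction error of a Nadaraya-Watson smoother
    with Gaussian kernel bandwidth h."""
    n = len(t)
    err = 0.0
    for i in range(n):
        w = np.exp(-0.5 * ((t - t[i]) / h) ** 2)
        w[i] = 0.0                      # drop observation i
        err += (y[i] - np.sum(w * y) / np.sum(w)) ** 2
    return err / n

def cv_bandwidth(t, y, grid):
    """Cross-validatory bandwidth: the grid value minimising the CV score."""
    return min(grid, key=lambda h: loocv_score(h, t, y))
```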

12.
A new procedure is proposed for deriving variable bandwidths in univariate kernel density estimation, based upon likelihood cross-validation and an analysis of a Bayesian graphical model. The procedure admits bandwidth selection which is flexible in terms of the amount of smoothing required. In addition, the basic model can be extended to incorporate local smoothing of the density estimate. The method is shown to perform well in both theoretical and practical situations, and we compare our method with those of Abramson (The Annals of Statistics 10: 1217–1223) and Sain and Scott (Journal of the American Statistical Association 91: 1525–1534). In particular, we note that in certain cases the Sain and Scott method performs poorly even with relatively large sample sizes. We compare various bandwidth selection methods using standard mean integrated square error criteria to assess the quality of the density estimates. We study situations where the underlying density is assumed both known and unknown, and note that in practice our method performs well when sample sizes are small. In addition, we apply the methods to real data, and again we believe our methods perform at least as well as existing methods.
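Plain global likelihood cross-validation, the starting point of the procedure above, can be sketched as follows; the paper's Bayesian graphical-model extension to variable (local) bandwidths is not shown, and the grid-based search is our simplification:

```python
import numpy as np

def lcv_score(h, x):
    """Leave-one-out log-likelihood of a fixed-bandwidth Gaussian KDE."""
    ll = 0.0
    for i in range(len(x)):
        d = np.delete(x, i) - x[i]
        f_i = np.mean(np.exp(-0.5 * (d / h) ** 2)) / (h * np.sqrt(2 * np.pi))
        ll += np.log(f_i)
    return ll

def lcv_bandwidth(x, grid):
    """Global bandwidth maximising the leave-one-out likelihood over a grid."""
    return max(grid, key=lambda h: lcv_score(h, x))
```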

13.
Kernel density estimation for multivariate, circular data has been formulated only when the sample space is the sphere, but theory for the torus would also be useful. For data lying on a d-dimensional torus (d ≥ 1), we discuss kernel estimation of a density, its mixed partial derivatives, and their squared functionals. We introduce a specific class of product kernels whose order is suitably defined in such a way as to obtain L2-risk formulas whose structure can be compared to their Euclidean counterparts. Our kernels are based on circular densities; however, we also discuss smaller-bias estimation involving negative kernels which are functions of circular densities. Practical rules for selecting the smoothing degree, based on cross-validation, bootstrap and plug-in ideas, are derived. Moreover, we provide specific results on the use of kernels based on the von Mises density. Finally, real-data examples and simulation studies illustrate the findings.
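The circle (d = 1) building block of such estimators can be sketched with a von Mises kernel; for a d-dimensional torus one would take a product of one such kernel per coordinate. This is an illustrative sketch, not the authors' implementation:

```python
import numpy as np

def vm_kde(theta0, data, kappa):
    """Circular kernel density estimate at angle theta0: the average of
    von Mises densities centred at the data, with concentration kappa
    playing the role of an inverse bandwidth (np.i0 is the modified
    Bessel function of order 0, the von Mises normalising constant)."""
    kern = np.exp(kappa * np.cos(theta0 - data)) / (2 * np.pi * np.i0(kappa))
    return np.mean(kern)
```

Because each von Mises kernel integrates to one over the circle, the estimate itself integrates to one, with no boundary correction needed.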

14.
The procedures for estimating prediction intervals for ARMA processes can be divided into model-based methods and empirical methods. Model-based methods require knowledge of the model and the underlying innovation distribution. Empirical methods are based on sample forecast errors. In this paper we apply nonparametric quantile regression to empirical forecast errors, using lead time as the regressor. With this method there is no need for a distributional assumption, but the special data pattern in this application requires a double kernel method which allows smoothing in two directions. An estimation algorithm is presented and applied to some simulation examples.

15.
Graphical methods are presented for the analysis of ranking data collected from g groups of rankers. The data provided by a single individual consist of the ranks of r objects. The sample space is the space of all permutations and has cardinality r!. In order to reduce the dimensionality of the data and to study the interrelationships among rankers and items, a two-stage approach is proposed. First, transformations motivated by various metrics on permutations are defined; in particular, the Kendall metric gives rise to pairwise comparisons. Then, the transformed data are analyzed using results connected with the generalized singular-value decomposition of a matrix. The methods are illustrated on two examples.
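The Kendall metric mentioned above counts the object pairs on which two rankings disagree; a minimal sketch (our own helper, not from the paper):

```python
from itertools import combinations

def kendall_distance(r1, r2):
    """Kendall metric between two rankings: the number of object pairs
    the two rankers order oppositely (discordant pairs)."""
    return sum(
        (r1[i] - r1[j]) * (r2[i] - r2[j]) < 0
        for i, j in combinations(range(len(r1)), 2)
    )
```

Each of the r(r-1)/2 pairwise comparisons behind this count is exactly the "pairwise comparison" transformation the Kendall metric gives rise to in the two-stage approach.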

16.
We present some theoretical results obtained by applying Procrustes methods to statistical analysis on two special manifolds: the Stiefel manifold Vk,m and the Grassmann manifold Gk,m−k or, equivalently, the manifold Pk,m−k of all m × m orthogonal projection matrices that are idempotent of rank k. Procrustes representations of Vk,m and Pk,m−k by means of equivalence classes of matrices are considered, and Procrustes statistics and means are defined via the ordinary, weighted and generalized Procrustes methods. We discuss perturbation theory in Procrustes analysis on Vk,m and Pk,m−k. Finally, we give a brief discussion of embeddings of the Stiefel and Grassmann manifolds.

17.
Studies on diffusion tensor imaging (DTI) quantify the diffusion of water molecules in a brain voxel using an estimated 3 × 3 symmetric positive definite (p.d.) diffusion tensor matrix. Due to the challenges associated with modelling matrix‐variate responses, the voxel‐level DTI data are usually summarized by univariate quantities, such as fractional anisotropy. This approach leads to evident loss of information. Furthermore, DTI analyses often ignore the spatial association among neighbouring voxels, leading to imprecise estimates. Although the spatial modelling literature is rich, modelling spatially dependent p.d. matrices is challenging. To mitigate these issues, we propose a matrix‐variate Bayesian semiparametric mixture model, where the p.d. matrices are distributed as a mixture of inverse Wishart distributions, with the spatial dependence captured by a Markov model for the mixture component labels. Related Bayesian computing is facilitated by conjugacy results and use of the double Metropolis–Hastings algorithm. Our simulation study shows that the proposed method is more powerful than competing non‐spatial methods. We also apply our method to investigate the effect of cocaine use on brain microstructure. By extending spatial statistics to matrix‐variate data, we contribute to providing a novel and computationally tractable inferential tool for DTI analysis.

18.
In this paper, we introduce Procrustes analysis in a Bayesian framework, by treating the classic Procrustes regression equation from a Bayesian perspective, while modeling shapes in two dimensions. The Bayesian approach allows us to compute point estimates and credible sets for the full Procrustes fit parameters. The methods are illustrated through an application to radar data from short-term weather forecasts (nowcasts), a very important problem in hydrology and meteorology.

19.
In this paper we suggest certain nonparametric estimators of random signals based on the wavelet transform. We consider stochastic signals embedded in white noise and extractions with wavelet denoising algorithms utilizing the non-decimated discrete wavelet transform and the idea of wavelet scaling. We evaluate the properties of these estimators via extensive computer simulations and, in part, analytically. Our wavelet estimators of random signals have clear advantages over parametric maximum likelihood methods as far as computational issues are concerned, while at the same time they can compete with these methods in terms of precision of estimation in small samples. An illustrative example concerning the smoothing of survey data is also provided.

20.
Wavelet shrinkage for unequally spaced data
Wavelet shrinkage (WaveShrink) is a relatively new technique for nonparametric function estimation that has been shown to have asymptotic near-optimality properties over a wide class of functions. As originally formulated by Donoho and Johnstone, WaveShrink assumes equally spaced data. Because so many statistical applications (e.g., scatterplot smoothing) naturally involve unequally spaced data, we investigate in this paper how WaveShrink can be adapted to handle such data. Focusing on the Haar wavelet, we propose four approaches that extend the Haar wavelet transform to the unequally spaced case. Each approach is formulated in terms of continuous wavelet basis functions applied to a piecewise constant interpolation of the observed data, and each approach leads to wavelet coefficients that can be computed via a matrix transform of the original data. For each approach, we propose a practical way of adapting WaveShrink. We compare the four approaches in a Monte Carlo study and find them to be quite comparable in performance. The computationally simplest approach (isometric wavelets) has an appealing justification in terms of a weighted mean square error criterion and readily generalizes to wavelets of higher order than the Haar.
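For reference, the equally spaced Haar WaveShrink baseline that the four proposed extensions adapt can be sketched as follows (soft thresholding, input length a power of two; an illustrative sketch, not the authors' code):

```python
import numpy as np

def haar_shrink(y, lam):
    """WaveShrink sketch for equally spaced data of length 2^J: forward
    Haar transform, soft-threshold the detail coefficients, invert."""
    approx, details = np.asarray(y, dtype=float), []
    while len(approx) > 1:                       # forward Haar transform
        a = (approx[0::2] + approx[1::2]) / np.sqrt(2)
        d = (approx[0::2] - approx[1::2]) / np.sqrt(2)
        details.append(d)
        approx = a
    # soft-threshold every detail coefficient at level lam
    details = [np.sign(d) * np.maximum(np.abs(d) - lam, 0.0) for d in details]
    for d in reversed(details):                  # inverse Haar transform
        out = np.empty(2 * len(approx))
        out[0::2] = (approx + d) / np.sqrt(2)
        out[1::2] = (approx - d) / np.sqrt(2)
        approx = out
    return approx
```

With the threshold set to zero this is a perfect reconstruction; with a large threshold all details vanish and the output collapses to the sample mean.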


Copyright © Beijing Qinyun Technology Development Co., Ltd. (京ICP备09084417号)