Similar Articles
20 similar articles found (search time: 15 ms)
1.
The smoothness of Tukey depth contours is a regularity condition encountered in asymptotic theory, among other places. This condition ensures that the Tukey depth fully characterizes the underlying multivariate probability distribution. In this paper we demonstrate that this regularity condition is rarely satisfied. It is shown that even well-behaved probability distributions with symmetric, smooth and (strictly) quasi-concave densities may have non-smooth Tukey depth contours, and that the smoothness behaviour of depth contours is fairly unpredictable.

2.
In univariate statistics, the trimmed mean has long been regarded as a robust and efficient alternative to the sample mean. A multivariate analogue calls for a notion of trimmed region around the center of the sample. Using Tukey's depth to achieve this goal, this paper investigates two types of multivariate trimmed means obtained by averaging over the trimmed region in two different ways. For both trimmed means, conditions ensuring asymptotic normality are obtained; in this respect, one of the main features of the paper is the systematic use of Hadamard derivatives and empirical-process methods to derive the central limit theorems. Asymptotic efficiency relative to the sample mean as well as the breakdown point are also studied. The results provide convincing evidence that these location estimators have nice asymptotic behaviour and possess highly desirable finite-sample robustness properties; furthermore, relative to the sample mean, both of them can in some situations be highly efficient for dimensions between 2 and 10.
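In one dimension, averaging over a Tukey-depth trimmed region reduces to the familiar trimmed mean, which makes the idea easy to illustrate. A minimal sketch (the function names and the depth cutoff alpha are illustrative, not the paper's notation):

```python
import numpy as np

def tukey_depth_1d(x, sample):
    """Univariate Tukey depth of x: the smaller of the two closed
    half-line counts, normalized by the sample size."""
    sample = np.asarray(sample)
    return min(np.sum(sample <= x), np.sum(sample >= x)) / len(sample)

def depth_trimmed_mean(sample, alpha):
    """Average over the alpha-trimmed region {x : depth(x) >= alpha}.
    In one dimension this coincides with the classical trimmed mean."""
    sample = np.asarray(sample)
    keep = [x for x in sample if tukey_depth_1d(x, sample) >= alpha]
    return float(np.mean(keep))

data = np.array([1.0, 2.0, 3.0, 4.0, 100.0])  # gross outlier at 100
print(depth_trimmed_mean(data, alpha=0.4))    # averages only {2, 3, 4}
```

With alpha = 0.4 both extremes (including the outlier) fall outside the trimmed region, so the estimate stays near the bulk of the data.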

3.
The Tukey depth (Proceedings of the International Congress of Mathematicians, vol. 2, pp. 523–531, 1975) of a point p with respect to a finite set S of points is the minimum number of elements of S contained in any closed halfspace that contains p. Algorithms for computing the Tukey depth of a point in various dimensions are considered. The running times of these algorithms depend on the value of the output, making them suited to situations, such as outlier removal, where the value of the output is typically small. This research was partly funded by NSERC Canada.
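The definition above lends itself to a direct sketch in the plane: scan closed halfplanes through p over a fine grid of directions and take the minimum count of points contained. This is an approximation of the exact depth (an exact O(n log n) rotating-halfplane sweep exists); the grid size is an illustrative choice:

```python
import numpy as np

def tukey_depth(p, S, n_dirs=3600):
    """Approximate Tukey depth of point p w.r.t. point set S (n x 2):
    the minimum, over a grid of directions u, of the number of points
    of S in the closed halfplane {x : u . (x - p) >= 0}."""
    S = np.asarray(S, dtype=float)
    p = np.asarray(p, dtype=float)
    thetas = np.linspace(0, 2 * np.pi, n_dirs, endpoint=False)
    dirs = np.column_stack([np.cos(thetas), np.sin(thetas)])
    proj = (S - p) @ dirs.T              # shape (n, n_dirs)
    counts = np.sum(proj >= -1e-12, axis=0)  # closed halfplane counts
    return int(counts.min())

square = [(0, 0), (0, 1), (1, 0), (1, 1)]
print(tukey_depth((0.5, 0.5), square))   # center of the square: depth 2
print(tukey_depth((0, 0), square))       # corner point: depth 1
```

As the abstract notes, the output-sensitive algorithms of the paper are much faster than this brute force when the depth is small, e.g. for outlier removal.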

4.
Tukey's control chart is generally used for monitoring processes in which the measurement process physically damages the product. It is based on single observations and is robust to outliers. In this paper, two optimal synthetic Tukey's control charts are proposed by integrating the conforming run length chart with the Tukey's control chart and its modification. The performance of the proposed charts is compared with that of the existing Tukey's control charts using the out-of-control average run length and extra quadratic loss as performance metrics. The proposed charts offer better protection against process shifts than the existing Tukey's control charts when the underlying process distribution is symmetric or asymmetric. Simulation studies also establish the superiority of the proposed control charts over the existing Tukey's control charts. Finally, an illustrative example based on a real data set from a combined cycle power plant is provided for practical implementation.
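The basic Tukey control chart that the synthetic charts build on sets its limits from the quartiles of Phase I data. A minimal sketch, assuming the usual Q1 − k·IQR and Q3 + k·IQR limits (k = 1.5 here is the boxplot convention; in practice k is tuned to hit a target in-control average run length):

```python
import numpy as np

def tukey_chart_limits(phase1, k=1.5):
    """Control limits of a basic Tukey control chart:
    LCL = Q1 - k*IQR, UCL = Q3 + k*IQR, with k a design constant."""
    q1, q3 = np.percentile(phase1, [25, 75])
    iqr = q3 - q1
    return q1 - k * iqr, q3 + k * iqr

rng = np.random.default_rng(0)
phase1 = rng.normal(10, 1, 200)        # in-control Phase I observations
lcl, ucl = tukey_chart_limits(phase1)
new_obs = [9.8, 10.2, 15.0]            # the last value simulates a large shift
signals = [x < lcl or x > ucl for x in new_obs]
print(signals)
```

A synthetic version as described in the abstract would additionally track the conforming run length between nonconforming points and signal only when that count falls below a design threshold, rather than on every single out-of-limit observation.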

5.
The concept of location depth was introduced as a way to extend the univariate notion of ranking to a bivariate configuration of data points. It has been used successfully for robust estimation, hypothesis testing, and graphical display. The depth contours form a collection of nested polygons, and the center of the deepest contour is called the Tukey median. The only available implemented algorithms for the depth contours and the Tukey median are slow, which limits their usefulness. In this paper we describe an optimal algorithm which computes all bivariate depth contours in O(n²) time and space, using topological sweep of the dual arrangement of lines. Once these contours are known, the location depth of any point can be computed in O(log² n) time with no additional preprocessing, or in O(log n) time after O(n²) preprocessing. We provide fast implementations of these algorithms to allow their use in everyday statistical practice.

6.
This paper investigates two estimators under non-parametric neighbourhoods of an exponential scale parametric family. Using the relative efficiency approach, it shows that tighter lower bounds on the relative efficiency of the upper trimmed mean to the mean can be obtained under a sufficient condition. This condition relates the possible positive lower bound to the degree of asymmetry of some related distributions. Similar arguments can be applied to the comparison of dispersion estimators under neighbourhoods of a normal distribution.

7.
8.
Summary: Data depth is a concept that measures the centrality of a point in a given data cloud x₁, x₂, …, xₙ or in a multivariate distribution P^X on ℝ^d. Every depth defines a family of so-called trimmed regions. The α-trimmed region is given by the set of points that have a depth of at least α. Data depth has been used to define multivariate measures of location and dispersion as well as multivariate dispersion orders. If the depth of a point can be represented as the minimum of the depths with respect to all unidimensional projections, we say that the depth satisfies the (weak) projection property. Many depths which have been proposed in the literature can be shown to satisfy the weak projection property. A depth is said to satisfy the strong projection property if for every α the unidimensional projection of the α-trimmed region equals the α-trimmed region of the projected distribution. After a short introduction to the general concept of data depth, we formally define the weak and the strong projection property and give necessary and sufficient criteria for the projection property to hold. We further show that the projection property facilitates the construction of depths from univariate trimmed regions. We discuss some of the depths proposed in the literature which possess the projection property and define a general class of projection depths, which are constructed from univariate trimmed regions by using the above method. Finally, algorithmic aspects of projection depths are discussed. We describe an algorithm which enables the approximate computation of depths that satisfy the projection property.

9.
As a measure of certainty, informational energy has been used in many statistical problems. In this article, we introduce some estimators of this quantity by modifying the basic estimator available in the literature. The new measures are then used to develop tests of uniformity. A Monte Carlo simulation study is performed to evaluate the power behaviour of the proposed tests. The results confirm that the new tests are preferable in some situations.
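As a rough illustration of the underlying idea (not the paper's modified estimators), Onicescu's informational energy Σ pᵢ² can be estimated by a plug-in from histogram cell frequencies; over k cells it attains its minimum 1/k exactly at the uniform distribution, which is what a test of uniformity exploits. The bin count and estimator form here are assumptions:

```python
import numpy as np

def informational_energy(sample, bins=10, support=(0.0, 1.0)):
    """Plug-in estimate of the informational energy sum p_i^2 from
    histogram cell frequencies (a basic discretized estimator)."""
    counts, _ = np.histogram(sample, bins=bins, range=support)
    p = counts / counts.sum()
    return float(np.sum(p ** 2))

rng = np.random.default_rng(1)
uniform = rng.uniform(0, 1, 5000)
peaked = rng.beta(8, 8, 5000)          # mass concentrated near 0.5
e_u = informational_energy(uniform)    # close to the minimum 1/10
e_p = informational_energy(peaked)     # noticeably larger: less "uniform"
print(e_u, e_p > e_u)
```

A test of uniformity rejects when the estimated energy exceeds what is plausible under the uniform null; the paper's contribution lies in better-behaved estimators of this quantity.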

10.
In this paper it is shown that data depth provides not only consistent and robust estimators but also consistent and robust tests. Here, consistency of a test means that the Type I (α) error and the Type II (β) error converge to zero with growing sample size in the interior of the null hypothesis and the alternative, respectively. Robustness is measured by the breakdown point, which here depends on a so-called concentration parameter. The consistency and robustness properties are shown for cases where the parameter of maximum depth is a biased estimator and has to be corrected. This bias is a disadvantage for estimation but an advantage for testing: it causes the corresponding simplicial depth not to be a degenerate U-statistic, so that tests can be derived easily. However, the straightforward tests have very poor power, although they are asymptotic α-level tests. To improve the power, a new method is presented to modify these tests so that even consistency of the modified tests is achieved. Examples of two-dimensional copulas and the Weibull distribution show the applicability of the new method.

11.
A formula for the inverse of the Freeman–Tukey double arcsine transformation is derived. This formula is useful when expressing means of double arcsines as retransformed proportions. When the mean is taken from original proportions involving different n's, it is suggested that the harmonic mean of the n's be used in the inversion formula.
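A sketch of the back-transformation pipeline. The inverse formula below is the closed form commonly used for this purpose; when t is a mean of double arcsines from studies with different nᵢ, the abstract's suggestion is to plug the harmonic mean of the nᵢ in for n:

```python
import numpy as np

def ft_double_arcsine(x, n):
    """Freeman-Tukey double arcsine of x successes out of n trials."""
    return 0.5 * (np.arcsin(np.sqrt(x / (n + 1))) +
                  np.arcsin(np.sqrt((x + 1) / (n + 1))))

def ft_inverse(t, n):
    """Closed-form inverse: back-transform t to a proportion.
    For a mean of transforms, n should be the harmonic mean of the n_i."""
    s = np.sin(2 * t)
    return 0.5 * (1 - np.sign(np.cos(2 * t)) *
                  np.sqrt(1 - (s + (s - 1 / s) / n) ** 2))

t = ft_double_arcsine(2, 20)
print(ft_inverse(t, 20))   # back near 2/20 = 0.1
```

Round-tripping a single proportion through the transform and its inverse recovers the observed proportion to good accuracy, which is the sanity check the formula is designed to pass.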

12.
We consider the properties of the trimmed mean as regards minimax-variance L-estimation of a location parameter in a Kolmogorov neighbourhood Kε(Φ) of the normal distribution. We first review some results on the search for an L-minimax estimator in this neighbourhood, i.e. a linear combination of order statistics whose maximum variance over Kε(Φ) is a minimum in the class of L-estimators. The natural candidate, the L-estimate which is efficient for that member of Kε(Φ) with minimum Fisher information, is known not to be a saddlepoint solution to the minimax problem. We show here that it is not a solution at all. We do this by showing that a smaller maximum variance is attained by an appropriately trimmed mean. We argue that this trimmed mean, besides being computationally simple (much simpler than the efficient L-estimate referred to above, and simpler than the minimax M- and R-estimators), is at least "nearly" minimax.

13.
Several estimators of squared prediction error have been suggested for use in model and bandwidth selection problems. Among these are cross-validation, generalized cross-validation and a number of related techniques based on the residual sum of squares. For many situations with squared error loss, e.g. nonparametric smoothing, these estimators have been shown to be asymptotically optimal in the sense that in large samples the estimator minimizing the selection criterion also minimizes squared error loss. However, cross-validation is known not to be asymptotically optimal for some 'easy' location problems. We consider selection criteria based on estimators of squared prediction risk for choosing between location estimators. We show that criteria based on adjusted residual sum of squares are not asymptotically optimal for choosing between asymptotically normal location estimators that converge at rate n^{1/2}, but are when the rate of convergence is slower. We also show that leave-one-out cross-validation is not asymptotically optimal for choosing between √n-differentiable statistics, but leave-d-out cross-validation is optimal when d → ∞ at the appropriate rate.
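The selection criterion itself is straightforward to compute. A minimal leave-one-out sketch of a squared-prediction-risk criterion for comparing two location estimators (the estimators and the sample here are illustrative, and the code only computes the criterion; the paper's point concerns its asymptotic behaviour):

```python
import numpy as np

def loo_cv_score(x, estimator):
    """Leave-one-out CV estimate of squared prediction risk for a
    location estimator: average of (x_i - estimate-without-i)^2."""
    x = np.asarray(x)
    errs = [(x[i] - estimator(np.delete(x, i))) ** 2 for i in range(len(x))]
    return float(np.mean(errs))

rng = np.random.default_rng(4)
x = rng.standard_normal(200)
# compare the criterion for two candidate location estimators
print(loo_cv_score(x, np.mean), loo_cv_score(x, np.median))
```

For n^{1/2}-consistent estimators such as these, the difference between the two scores is of smaller order than the noise in the criterion, which is the intuition behind the non-optimality result for leave-one-out selection.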

14.
For the lifetime (or negative) exponential distribution, the trimmed likelihood estimator has been shown to be explicit in the form of a β-trimmed mean, which is representable as an estimating functional that is both weakly continuous and Fréchet differentiable and hence qualitatively robust at the parametric model. It also has high efficiency at the model. The robustness is in contrast to the maximum likelihood estimator (MLE), involving the usual mean, which is not robust to contamination in the upper tail of the distribution. When there is known right censoring, it may be perceived that the MLE, the most asymptotically efficient estimator, is protected from the effects of 'outliers' due to censoring. We demonstrate that this is not the case generally and, based on the functional form of the estimators, suggest a hybrid estimator that incorporates the best features of both the MLE and the β-trimmed mean. Additionally, we study the pure trimmed likelihood estimator for censored data and show that it can be easily calculated and that the censored observations are not always trimmed. The different trimmed estimators are compared by a modest simulation study.
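A rough numerical sketch of why trimming helps for the exponential: the MLE (the sample mean) is dragged by upper-tail contamination, while an upper-trimmed mean, rescaled by a consistency factor, stays near the true scale. The trimming fraction and the correction factor below are illustrative assumptions derived for the exponential model, not necessarily the paper's exact β-trimmed estimator:

```python
import numpy as np

def exp_trimmed_estimate(x, beta=0.1):
    """Scale estimate for the exponential from the upper beta-trimmed
    mean, rescaled by c(beta) = (1 - beta + beta*ln(beta))/(1 - beta)
    so that it targets the true scale at the model (assumed correction)."""
    x = np.sort(np.asarray(x))
    h = int(np.floor(len(x) * (1 - beta)))   # keep the h smallest values
    tmean = x[:h].mean()
    c = (1 - beta + beta * np.log(beta)) / (1 - beta)
    return float(tmean / c)

rng = np.random.default_rng(2)
clean = rng.exponential(scale=2.0, size=2000)
contaminated = np.concatenate([clean, np.full(40, 100.0)])  # upper-tail outliers
print(contaminated.mean())                        # MLE: dragged well above 2
print(exp_trimmed_estimate(contaminated, 0.1))    # stays near the true scale 2
```

The contamination inflates the mean badly but barely moves the trimmed estimate, mirroring the robustness contrast the abstract describes.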

15.
16.
Forecasting of future snow depths is useful for many applications such as road safety, winter sport activities, avalanche risk assessment and hydrology. Motivated by the lack of statistical forecast models for snow depth, in this paper we present a set of models to fill this gap. First, we present a model for short-term forecasts, assuming that reliable weather forecasts of air temperature and precipitation are available. The covariates enter the model nonlinearly, following basic physical principles of snowfall, snow ageing and melting. Because of the large number of observations with snow depth equal to zero, we use a zero-inflated gamma regression model, as is common for similar variables such as precipitation. We also produce long-term forecasts of snow depth, extending much further into the future than traditional weather forecasts of temperature and precipitation. The long-term forecasts are based on fitting models to historical time series of precipitation, temperature and snow depth. We fit the models to data from six locations in Norway with different climatic and vegetation properties. Forecasting five days into the future, the results showed that, given reliable weather forecasts of temperature and precipitation, the forecast errors in absolute value were between 3 and 7 cm across locations. Forecasting three weeks into the future, the forecast errors were between 7 and 16 cm.

17.
Blest (2000) proposed a new nonparametric measure of correlation between two random variables. His coefficient, which is not symmetric in its arguments, emphasizes discrepancies observed among the first ranks in the orderings induced by the variables. The authors derive the limiting distribution of Blest's index and suggest symmetric variants whose merits as statistics for testing independence are explored using asymptotic relative efficiency calculations and Monte Carlo simulations.

18.
19.
Balanced Confidence Regions Based on Tukey's Depth and the Bootstrap
We propose and study bootstrap confidence regions for multivariate parameters based on Tukey's depth. The bootstrap is based on the normalized or Studentized statistic formed from an independent and identically distributed random sample obtained from some unknown distribution in ℝ^q. The bootstrap points are deleted on the basis of Tukey's depth until the desired confidence level is reached. The proposed confidence regions are shown to be second-order balanced in the sense discussed by Beran. We also study the asymptotic consistency of Tukey's depth-based bootstrap confidence regions. The applicability of the proposed method is demonstrated in a simulation study.
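The peeling idea can be sketched directly: bootstrap the statistic, rank the bootstrap points by Tukey depth within their own cloud, and delete the shallowest points until the desired coverage remains. For brevity this sketch uses raw bootstrap means rather than the normalized/Studentized statistic of the paper, and an approximate grid-based depth:

```python
import numpy as np

def tukey_depth(p, cloud, n_dirs=180):
    """Approximate (normalized) Tukey depth of p in a 2-D cloud,
    via a grid of closed-halfplane directions."""
    thetas = np.linspace(0, 2 * np.pi, n_dirs, endpoint=False)
    dirs = np.column_stack([np.cos(thetas), np.sin(thetas)])
    proj = (np.asarray(cloud) - np.asarray(p)) @ dirs.T
    return np.sum(proj >= -1e-12, axis=0).min() / len(cloud)

rng = np.random.default_rng(3)
sample = rng.normal(size=(100, 2))
# bootstrap the bivariate mean
boot = np.array([sample[rng.integers(0, 100, 100)].mean(axis=0)
                 for _ in range(300)])
depths = np.array([tukey_depth(b, boot) for b in boot])
# peel off the shallowest bootstrap points until ~95% remain;
# the retained cloud outlines the depth-based confidence region
keep = boot[depths >= np.quantile(depths, 0.05)]
inside = tukey_depth(sample.mean(axis=0), keep) > 0
print(len(keep), inside)
```

The convex hull of `keep` is then the approximate 95% confidence region; by construction the deepest points are retained, so the region is centrally located in the bootstrap distribution.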

20.
Pitman closeness of both the upper and lower k-record statistics to the population quantiles of a location–scale family of distributions is studied. For the population median, the Pitman-closest k-record is also determined. In the case of symmetric distributions, the Pitman closeness probabilities of k-record statistics are shown to be distribution-free, and explicit expressions are also derived for these probabilities. Exact expressions are derived for the required probabilities for uniform and exponential distributions. Numerical results are given for these families and also the Pitman-closest k-record is determined.


Copyright © Beijing Qinyun Technology Development Co., Ltd. (北京勤云科技发展有限公司)  京ICP备09084417号