Similar Articles (20 results)
1.
The authors give the exact coefficient of 1/N in a saddlepoint approximation to the Wilcoxon–Mann–Whitney null distribution. This saddlepoint approximation is obtained from an Edgeworth approximation to the exponentially tilted distribution. Moreover, the rate of convergence of the relative error is uniformly of order O(1/N) in a large-deviation interval as defined in Feller (1971). The proposed method for computing the coefficient of 1/N can be used to obtain the exact coefficients of 1/N^i for any i. The exact formulas for the cumulant generating function and the cumulants, needed for these results, are those of van Dantzig (1947–1950).
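For small samples, the null distribution being approximated here can be computed exactly with the standard counting recursion for the Mann–Whitney U statistic. The sketch below (the classical recursion plus the first-order normal approximation, not the authors' saddlepoint method) shows the kind of comparison the approximation targets:

```python
from functools import lru_cache
from math import comb, erf, sqrt

@lru_cache(maxsize=None)
def mw_count(m, n, u):
    """Number of orderings of m X's and n Y's with Mann-Whitney statistic u."""
    if u < 0:
        return 0
    if m == 0 or n == 0:
        return 1 if u == 0 else 0
    # Condition on whether the largest observation is an X or a Y.
    return mw_count(m - 1, n, u - n) + mw_count(m, n - 1, u)

def exact_cdf(m, n, u):
    """Exact null probability P(U <= u)."""
    return sum(mw_count(m, n, k) for k in range(u + 1)) / comb(m + n, m)

def normal_cdf_approx(m, n, u):
    """First-order normal approximation with continuity correction."""
    mean = m * n / 2
    sd = sqrt(m * n * (m + n + 1) / 12)
    z = (u + 0.5 - mean) / sd
    return 0.5 * (1 + erf(z / sqrt(2)))
```

For m = n = 3 the exact and normal-approximate CDFs agree at the median by symmetry; the saddlepoint correction matters in the tails, where the relative error of the normal approximation is largest.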

2.
Abstract. Let {Z_t}_{t≥0} be a Lévy process with Lévy measure ν and let τ(t) := ∫₀ᵗ g(Y_s) ds be a random clock, where g is a non-negative function and {Y_t} is an ergodic diffusion independent of Z. Time-changed Lévy models of the form X_t := Z_{τ(t)} are known to incorporate several important stylized features of asset prices, such as leptokurtic distributions and volatility clustering. In this article, we prove central limit theorems for a type of estimator of the integral parameter β(φ) := ∫φ(x)ν(dx), valid when both the sampling frequency and the observation time horizon of the process get larger. Our results combine the long-run ergodic properties of the diffusion process {Y_t} with the short-term ergodic properties of the Lévy process Z via central limit theorems for martingale differences. The performance of the estimators is illustrated numerically for a Normal Inverse Gaussian process Z and a Cox–Ingersoll–Ross process {Y_t}.

3.
The density power divergence, indexed by a single tuning parameter α, has proved to be a very useful tool in minimum distance inference. The family of density power divergences provides a generalized estimation scheme which includes likelihood-based procedures (represented by the choice α=0 for the tuning parameter) as a special case. However, under data contamination, this scheme provides several more stable choices for model fitting and analysis (provided by positive values of the tuning parameter α). As larger values of α necessarily lead to a drop in model efficiency, determining the optimal value of α that provides the best compromise between model efficiency and stability against data contamination in any real situation is a major challenge. In this paper, we provide a refinement of an existing technique with the aim of eliminating the dependence of the procedure on an initial pilot estimator. Numerical evidence is provided to demonstrate the very good performance of the method. Our technique has a general flavour, and we expect that similar tuning parameter selection algorithms will work well for other M-estimators, or any robust procedure that depends on the choice of a tuning parameter.
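The stabilizing effect of a positive α can be shown in a few lines for a normal location model with known unit scale. This is a minimal sketch of a minimum density power divergence estimate via grid search; the data, grid, and choice α = 0.5 are illustrative assumptions, not the paper's tuning-selection procedure:

```python
from math import exp, pi, sqrt

def normal_pdf(x, mu):
    return exp(-0.5 * (x - mu) ** 2) / sqrt(2 * pi)

def dpd_objective(mu, data, alpha):
    """Empirical density power divergence objective for N(mu, 1).

    Minimizing  integral f^(1+alpha) - (1 + 1/alpha) * mean(f(X_i)^alpha)
    over mu gives the minimum DPD estimate; alpha -> 0 recovers the MLE.
    """
    integral_term = (2 * pi) ** (-alpha / 2) / sqrt(1 + alpha)
    data_term = sum(normal_pdf(x, mu) ** alpha for x in data) / len(data)
    return integral_term - (1 + 1 / alpha) * data_term

# Symmetric clean data around 0 plus one gross outlier.
data = [-1.0, -0.5, 0.0, 0.5, 1.0, 50.0]
grid = [i / 100 for i in range(-300, 1001)]
mle = sum(data) / len(data)   # sample mean = the alpha -> 0 limit
dpd = min(grid, key=lambda m: dpd_objective(m, data, 0.5))
```

The outlier drags the MLE far from the bulk of the data, while the α = 0.5 estimate stays near zero because the outlier's density-power weight is essentially nil.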

4.
5.
Abstract. The paper ‘Modern statistics for spatial point processes’ by Jesper Møller and Rasmus P. Waagepetersen is based on a special invited lecture given by the authors at the 21st Nordic Conference on Mathematical Statistics, held at Rebild, Denmark, in June 2006. At the conference, Antti Penttinen and Eva B. Vedel Jensen were invited to discuss the paper. Here we present the comments from the two invited discussants and from a number of other scholars, as well as the authors’ responses to these comments. Below, Figure 1, Figure 2, etc., refer to figures in the paper under discussion, while Figure A, Figure B, etc., refer to figures in the current discussion. All numbered sections and formulas refer to the paper.
Figure A: The estimate of A(k) (solid curve) and pointwise maximum and minimum envelopes (dotted curves) from 99 simulations under independent marking, conditional on the point locations. Zero boundary correction has been applied.

6.
This paper considers record values of residuals or prediction errors in a one-parameter autoregressive process and the statistic Z_n = the number of ε-repetitions of this record. When the parameter of the autoregression is unknown, the prediction errors, and therefore Z_n, are unobservable. Here an observable analogue Ẑ_n of Z_n is considered. It is proved that under special conditions the difference Z_n − Ẑ_n converges to zero in probability, and therefore that Ẑ_n has the same asymptotic behaviour as Z_n.

7.
It is well established that bandwidths exist that can yield an unbiased non-parametric kernel density estimate at points in particular regions (e.g. convex regions) of the underlying density. These zero-bias bandwidths have superior theoretical properties, including a 1/n convergence rate of the mean squared error. However, the explicit functional form of the zero-bias bandwidth has remained elusive. It is difficult to estimate these bandwidths and virtually impossible to achieve the higher-order rate in practice. This paper addresses these issues by taking a fundamentally different approach to the asymptotics of the kernel density estimator to derive a functional approximation to the zero-bias bandwidth. It develops a simple approximation algorithm that focuses on estimating these zero-bias bandwidths in the tails of densities, where the convexity conditions favourable to the existence of zero-bias bandwidths are more natural. The estimated bandwidths yield density estimates with mean squared error of order O(n^{-4/5}), the same rate as the mean squared error of density estimates with other choices of local bandwidths. Simulation studies and an illustrative example with air pollution data show that these estimated zero-bias bandwidths outperform other global and local bandwidth estimators in estimating points in the tails of densities.
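For reference, the fixed-bandwidth estimator that local bandwidths generalize, together with a generic location-dependent ("balloon") variant. The bandwidth function below is a placeholder assumption for illustration, not the paper's zero-bias approximation:

```python
from math import exp, pi, sqrt

def gauss_kernel(u):
    return exp(-0.5 * u * u) / sqrt(2 * pi)

def kde(x, data, h):
    """Fixed-bandwidth Gaussian kernel density estimate at point x."""
    return sum(gauss_kernel((x - xi) / h) for xi in data) / (len(data) * h)

def kde_local(x, data, h_of_x):
    """Balloon-type estimator: the bandwidth depends on the evaluation point."""
    return kde(x, data, h_of_x(x))

data = [0.0, 0.4, 1.0, 1.6, 2.0]
# Hypothetical local bandwidth: wider in the tails, narrower near the centre.
h_local = lambda x: 0.3 + 0.2 * abs(x - 1.0)
```

A quick sanity check is that the fixed-bandwidth estimate integrates to one; the point of zero-bias local bandwidths is to remove the O(h²) bias term at favourable points without changing this normalization.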

8.
In this paper, we consider the problem of adaptive density or survival function estimation in an additive model defined by Z = X + Y with X independent of Y, when both random variables are non-negative. This model is relevant, for instance, in reliability, where we are interested in the failure time of a certain material that cannot be isolated from the system it belongs to. Our goal is to recover the distribution of X (density or survival function) from n observations of Z, assuming that the distribution of Y is known. This issue can be seen as the classical statistical problem of deconvolution, which has been tackled in many cases using Fourier-type approaches. Nonetheless, in the present case, the random variables have the particularity of being supported on the non-negative half-line. Exploiting this, we propose a new angle of attack by building a projection estimator on an appropriate Laguerre basis. We present upper bounds on the mean integrated squared risk of our density and survival function estimators. We then describe a non-parametric data-driven strategy for selecting a relevant projection space. The procedures are illustrated with simulated data and compared with the performance of a more classical deconvolution setting using a Fourier approach. Our procedure achieves faster convergence rates than Fourier methods for estimating these functions.
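The Laguerre basis behind such projection estimators is easy to generate with the standard three-term recurrence. The sketch below builds the orthonormal Laguerre functions φ_k(x) = √2 · L_k(2x) · e^{−x} on the positive half-line and checks orthonormality numerically; the estimator itself is not reproduced here:

```python
from math import exp, sqrt

def laguerre_poly(k, x):
    """Laguerre polynomial L_k(x) via the three-term recurrence."""
    if k == 0:
        return 1.0
    prev, cur = 1.0, 1.0 - x
    for j in range(1, k):
        prev, cur = cur, ((2 * j + 1 - x) * cur - j * prev) / (j + 1)
    return cur

def laguerre_fn(k, x):
    """Orthonormal Laguerre function on R+: phi_k(x) = sqrt(2) L_k(2x) e^{-x}."""
    return sqrt(2.0) * laguerre_poly(k, 2.0 * x) * exp(-x)

def inner(j, k, upper=40.0, step=1e-3):
    """Crude midpoint-rule approximation of the L2(R+) inner product."""
    n = int(upper / step)
    return step * sum(
        laguerre_fn(j, (i + 0.5) * step) * laguerre_fn(k, (i + 0.5) * step)
        for i in range(n))
```

A projection estimator then expands the target density on φ_0, …, φ_m and estimates each coefficient from the observations; the data-driven step in the paper selects the dimension m.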

9.
The pooled variance s_p^2 of p samples, presumed to have been obtained from p populations having common variance σ^2, has invariably been adopted as the default estimator for σ^2. In this paper, alternative estimators of the common population variance are developed. These estimators are biased but have lower mean-squared error than s_p^2. The comparative merit of these estimators over the unbiased estimator s_p^2 is explored using relative efficiency (a ratio of mean-squared error values).
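The paper's specific estimators are not reproduced here, but the classical normal-theory example illustrates the phenomenon: scaling the sum of squares S by 1/(n+1) instead of the unbiased 1/(n−1) lowers the MSE. With S/σ² distributed as χ² with n−1 degrees of freedom, the normalized MSE of c·S has a closed form:

```python
def mse_over_sigma4(c, n):
    """MSE / sigma^4 of the estimator c*S, where S/sigma^2 ~ chi-square(n-1).

    E[S] = (n-1) sigma^2 and Var[S] = 2(n-1) sigma^4, so
    MSE(c*S) = sigma^4 * (c^2 (2k + k^2) - 2 c k + 1),  with k = n - 1.
    """
    k = n - 1
    return c * c * (2 * k + k * k) - 2 * c * k + 1

n = 10
unbiased = mse_over_sigma4(1 / (n - 1), n)   # equals 2/(n-1)
shrunk = mse_over_sigma4(1 / (n + 1), n)     # equals 2/(n+1), the minimum in c
```

The divisor n+1 minimizes the quadratic in c, so the biased estimator S/(n+1) dominates the unbiased S/(n−1) in MSE for every n; relative efficiency here is (n−1)/(n+1).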

10.
Abstract. We consider the functional non-parametric regression model Y = r(χ) + ε, where the response Y is univariate, χ is a functional covariate (i.e. valued in some infinite-dimensional space), and the error ε satisfies E(ε | χ) = 0. For this model, the pointwise asymptotic normality of a kernel estimator r̂(·) of r(·) has been proved in the literature. To use this result for building pointwise confidence intervals for r(·), the asymptotic variance and bias of r̂(·) need to be estimated. However, the functional covariate setting makes this task very hard. To circumvent the estimation of these quantities, we propose to use a bootstrap procedure to approximate the distribution of r̂(·). Both a naive and a wild bootstrap procedure are studied, and their asymptotic validity is proved. The obtained consistency results are discussed from a practical point of view via a simulation study. Finally, the wild bootstrap procedure is applied to a food industry quality problem to compute pointwise confidence intervals.
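The wild bootstrap idea itself can be sketched in the much simpler scalar linear regression setting (the functional kernel estimator is beyond a few lines): responses are rebuilt from fitted values plus residuals multiplied by random sign (Rademacher) weights, which preserves heteroscedasticity in the resamples. The model and data below are illustrative assumptions:

```python
import random

def fit_line(x, y):
    """Ordinary least squares fit y ~ a + b*x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
         / sum((xi - mx) ** 2 for xi in x))
    return my - b * mx, b

def wild_bootstrap_sample(x, y, rng):
    """One wild-bootstrap resample: keep x fixed, flip residual signs at random."""
    a, b = fit_line(x, y)
    fitted = [a + b * xi for xi in x]
    resid = [yi - fi for yi, fi in zip(y, fitted)]
    v = [rng.choice((-1.0, 1.0)) for _ in resid]   # Rademacher multipliers
    return [fi + vi * ri for fi, ri, vi in zip(fitted, resid, v)]
```

Refitting the estimator on many such resamples and reading off empirical quantiles gives the pointwise confidence intervals, exactly the role the wild bootstrap plays for r̂(·) in the article.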

11.
We study estimation and feature selection problems in mixture-of-experts models. An l2-penalized maximum likelihood estimator is proposed as an alternative to the ordinary maximum likelihood estimator. The estimator is particularly advantageous when fitting a mixture-of-experts model to data with many correlated features. It is shown that the proposed estimator is root-n consistent, and simulations show its superior finite-sample behaviour compared to that of the maximum likelihood estimator. For feature selection, two extra penalty functions are applied to the l2-penalized log-likelihood function. The proposed feature selection method is computationally much more efficient than the popular all-subset selection methods. Theoretically it is shown that the method is consistent in feature selection, and simulations support our theoretical results. A real-data example is presented to demonstrate the method. The Canadian Journal of Statistics 38: 519–539; 2010 © 2010 Statistical Society of Canada

12.
The evolution of opinion as to how to analyse the AB/BA cross-over trials is described by examining the recommendations of three key papers. The impact of these papers on the medical literature is analysed by looking at citation rates as a function of various factors. It is concluded that amongst practitioners there is a highly imperfect appreciation of the issues raised by the possibility of carry-over. Copyright © 2004 John Wiley & Sons, Ltd.

13.
A finite mixture model using the Student's t distribution has been recognized as a robust extension of normal mixtures. Recently, a mixture of skew normal distributions has been found to be effective in the treatment of heterogeneous data involving asymmetric behaviors across subclasses. In this article, we propose a robust mixture framework based on the skew t distribution to efficiently deal with heavy-tailedness, extra skewness and multimodality in a wide range of settings. Statistical mixture modeling based on normal, Student's t and skew normal distributions can be viewed as special cases of the skew t mixture model. We present analytically simple EM-type algorithms for iteratively computing maximum likelihood estimates. The proposed methodology is illustrated by analyzing a real data example.
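The EM updates are easiest to see in the normal special case that the skew-t mixture generalizes. This is a sketch for a two-component univariate normal mixture with illustrative data; the skew-t E-step involves additional truncated-moment computations not shown here:

```python
from math import exp, pi, sqrt

def norm_pdf(x, mu, var):
    return exp(-(x - mu) ** 2 / (2 * var)) / sqrt(2 * pi * var)

def em_two_normals(data, mu, var, w, n_iter=200):
    """EM for a two-component univariate normal mixture."""
    for _ in range(n_iter):
        # E-step: responsibility of component 0 for each observation.
        r = [w * norm_pdf(x, mu[0], var[0])
             / (w * norm_pdf(x, mu[0], var[0])
                + (1 - w) * norm_pdf(x, mu[1], var[1]))
             for x in data]
        # M-step: weighted means, variances and mixing weight.
        s0 = sum(r)
        s1 = len(data) - s0
        mu = [sum(ri * x for ri, x in zip(r, data)) / s0,
              sum((1 - ri) * x for ri, x in zip(r, data)) / s1]
        var = [max(sum(ri * (x - mu[0]) ** 2 for ri, x in zip(r, data)) / s0, 1e-6),
               max(sum((1 - ri) * (x - mu[1]) ** 2 for ri, x in zip(r, data)) / s1, 1e-6)]
        w = s0 / len(data)
    return mu, var, w

# Two well-separated clusters around 0 and 5 (illustrative data).
data = [0.0, 0.1, -0.1, 0.05, -0.05, 5.0, 5.1, 4.9, 5.05, 4.95]
mu, var, w = em_two_normals(data, mu=[0.5, 4.5], var=[1.0, 1.0], w=0.5)
```

The skew-t version keeps this E-step/M-step skeleton but augments each observation with latent scale and skewing variables, which is what makes the updates "analytically simple" despite the heavier-tailed, asymmetric components.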

14.
In this article the author investigates empirical-likelihood-based inference for the parameters of the varying-coefficient single-index model (VCSIM). Unlike in the usual cases, without bias correction the asymptotic distribution of the empirical likelihood ratio cannot achieve the standard chi-squared distribution. To this end, a bias-corrected empirical likelihood method is employed to construct confidence regions (intervals) for the regression parameters, which have two advantages over those based on normal approximation: (1) they do not impose prior constraints on the shape of the regions; (2) they do not require the construction of a pivotal quantity, and the regions are range preserving and transformation respecting. A simulation study is undertaken to compare the empirical likelihood with the normal approximation in terms of coverage accuracy and average areas/lengths of confidence regions/intervals. A real data example is given to illustrate the proposed approach. The Canadian Journal of Statistics 38: 434–452; 2010 © 2010 Statistical Society of Canada

15.
Testing goodness-of-fit of commonly used genetic models is of critical importance in many applications, including association studies and testing for departure from Hardy–Weinberg equilibrium. The case–control design has become widely used in population genetics and genetic epidemiology, so it is of interest to develop powerful goodness-of-fit tests for genetic models using case–control data. This paper develops a likelihood ratio test (LRT) for testing recessive and dominant models in case–control studies. The LRT statistic has a closed-form formula with a simple χ²(1) null asymptotic distribution, so its implementation is easy even for genome-wide association studies. Moreover, it has the same power and optimality as when the disease prevalence is known in the population. The Canadian Journal of Statistics 41: 341–352; 2013 © 2013 Statistical Society of Canada

16.
In the presence of multicollinearity, the rk class estimator is proposed as an alternative to the ordinary least squares (OLS) estimator; it is a general estimator that includes the ordinary ridge regression (ORR), principal components regression (PCR) and OLS estimators as special cases. Comparing competing estimators of a parameter under the mean square error (MSE) criterion is of central interest. An alternative to the MSE criterion is Pitman's (1937) closeness (PC) criterion. In this paper, we compare the rk class estimator with the OLS estimator in terms of the PC criterion, thereby recovering both the comparison of the ORR estimator with the OLS estimator under the PC criterion by Mason et al. (1990) and the comparison of the PCR estimator with the OLS estimator under the PC criterion by Lin and Wei (2002).

17.
The test of the hypothesis of equal means of two normal populations without assumptions on the variances is usually referred to as the Behrens–Fisher problem. Exact similar tests are known not to exist. However, excellent approximately similar “solutions” are readily available. Of these available tests and corresponding critical regions, those due to Welch, Aspin, and Trickett in the 1940s and 1950s come closest to achieving similarity. This article examines numerically the Welch–Aspin asymptotic series and the related Trickett–Welch integral-equation formulations of this problem. Through examples, we illustrate that well-behaved tests can deviate from similarity by an almost incredibly small amount. Despite this, with much more extensive computation than was feasible a half-century ago, we can see irregularities that could be an empirical reflection of the known nonexistence of exact solutions.
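The leading term of Welch's approximate solution is easy to sketch: the test statistic together with the Welch–Satterthwaite approximate degrees of freedom. The higher-order Aspin–Welch series corrections studied in the article are omitted:

```python
from math import sqrt

def welch_t(x, y):
    """Welch's t statistic and Welch-Satterthwaite approximate df."""
    nx, ny = len(x), len(y)
    mx, my = sum(x) / nx, sum(y) / ny
    vx = sum((xi - mx) ** 2 for xi in x) / (nx - 1)
    vy = sum((yi - my) ** 2 for yi in y) / (ny - 1)
    se2 = vx / nx + vy / ny
    t = (mx - my) / sqrt(se2)
    df = se2 ** 2 / ((vx / nx) ** 2 / (nx - 1) + (vy / ny) ** 2 / (ny - 1))
    return t, df

t, df = welch_t([1.0, 2.0, 3.0, 4.0, 5.0], [1.0, 2.0, 3.0, 4.0, 5.0])
```

Referring t to a Student's t distribution with df degrees of freedom gives a test whose size is remarkably close to nominal across variance ratios, which is the "approximate similarity" the article quantifies.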

18.
19.
Bayesian inference for rank-order problems is frustrated by the absence of an explicit likelihood function. This hurdle can be overcome by assuming a latent normal representation that is consistent with the ordinal information in the data: the observed ranks are conceptualized as an impoverished reflection of an underlying continuous scale, and inference concerns the parameters that govern the latent representation. We apply this generic data-augmentation method to obtain Bayes factors for three popular rank-based tests: the rank sum test, the signed rank test, and Spearman's ρ_s.
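The observed rank statistics that the latent-normal representation augments are elementary to compute. For instance, Spearman's ρ_s from average (mid-)ranks; the Bayes factor machinery itself is not sketched here:

```python
from math import sqrt

def ranks(v):
    """Average ranks (1-based), assigning tied values their mid-rank."""
    order = sorted(range(len(v)), key=lambda i: v[i])
    r = [0.0] * len(v)
    i = 0
    while i < len(v):
        j = i
        while j + 1 < len(v) and v[order[j + 1]] == v[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1          # mid-rank for the tied block
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def spearman_rho(x, y):
    """Spearman's rank correlation: Pearson correlation of the rank vectors."""
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    m = (n + 1) / 2                    # mean rank, preserved under ties
    num = sum((a - m) * (b - m) for a, b in zip(rx, ry))
    den = sqrt(sum((a - m) ** 2 for a in rx) * sum((b - m) ** 2 for b in ry))
    return num / den
```

In the data-augmentation scheme, these ranks constrain the ordering of latent normal scores, and the posterior over the latent correlation yields the Bayes factor.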

20.
We prove a Berry–Esséen bound for general M-estimators under optimal regularity conditions on the score function and the underlying distribution. As an application, we obtain Berry–Esséen bounds for the sample median, the L_p-median for p > 1, and Huber's estimator of location.


Copyright © Beijing Qinyun Technology Development Co., Ltd. 京ICP备09084417号