Similar Documents
20 similar documents were retrieved.
1.
In this paper, we give sufficient conditions to establish central limit theorems for boundary estimates of Poisson point processes. The considered estimates are obtained by smoothing some bias-corrected extreme values of the point process. We show how the smoothing leads to Gaussian asymptotic distributions and therefore to pointwise confidence intervals. Some new unidimensional and multidimensional examples are provided.

2.
Inference for clusters of extreme values
Summary. Inference for clusters of extreme values of a time series typically requires the identification of independent clusters of exceedances over a high threshold. The choice of declustering scheme often has a significant effect on estimates of cluster characteristics. We propose an automatic declustering scheme that is justified by an asymptotic result for the times between threshold exceedances. The scheme relies on the extremal index, which we show may be estimated before declustering, and supports a bootstrap procedure for assessing the variability of estimates.
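
For illustration, the sketch below (Python) computes an intervals-type estimator of the extremal index from inter-exceedance times and notes how it can drive an automatic declustering step. The series x, the threshold u, and the exact estimator formula are assumptions for this sketch, not a verbatim transcription of the paper's procedure.

```python
import numpy as np

def intervals_extremal_index(x, u):
    """Intervals-type estimator of the extremal index (illustrative sketch).

    x : time series (array-like), u : high threshold (both hypothetical inputs).
    """
    x = np.asarray(x, dtype=float)
    t = np.flatnonzero(x > u)                  # exceedance times
    n = t.size
    if n < 2:
        raise ValueError("need at least two exceedances")
    s = np.diff(t)                             # inter-exceedance times
    if s.max() <= 2:
        theta = 2.0 * s.sum() ** 2 / ((n - 1) * np.sum(s ** 2))
    else:
        theta = (2.0 * np.sum(s - 1.0) ** 2
                 / ((n - 1) * np.sum((s - 1.0) * (s - 2.0))))
    return min(1.0, theta)

# Declustering sketch: with c = round(theta * n) clusters, the c - 1 largest
# inter-exceedance times are treated as gaps between independent clusters.
```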

3.
Stationarity tests exhibit extreme size distortions if the observable process is stationary yet highly persistent. In this paper we provide a theoretical explanation for the size distortion of the KPSS test for DGPs with first-order autocorrelation coefficients across a broad range. Considering a near-integrated, nearly stationary process, we show that the asymptotic distribution of the test contains an additional term, which can potentially explain the amount of size distortion documented in previous simulation studies.
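
A small simulation in the spirit of the studies cited above can reproduce the size distortion. The sketch below (Python, using statsmodels' KPSS implementation) counts rejections at the nominal 5% level for a stationary but highly persistent AR(1) process; the coefficient 0.95 and the sample size are illustrative choices, not the paper's DGP.

```python
import warnings
import numpy as np
from statsmodels.tsa.stattools import kpss

rng = np.random.default_rng(0)
n, reps, phi = 200, 500, 0.95          # stationary but highly persistent AR(1)
rejections = 0
warnings.simplefilter("ignore")        # kpss warns when the statistic leaves its lookup table
for _ in range(reps):
    e = rng.standard_normal(n)
    x = np.empty(n)
    x[0] = e[0]
    for t in range(1, n):
        x[t] = phi * x[t - 1] + e[t]   # simulate the AR(1) recursion
    stat, _, _, crit = kpss(x, regression="c", nlags="auto")
    rejections += stat > crit["5%"]
print("empirical size at nominal 5% level:", rejections / reps)
```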

4.
Under a Markovian structure on a sequence of random variables which can be partitioned into m (m ≥ 1) jointly dependent subsequences (where within each subsequence the random variables have a common marginal distribution which may vary between the subsequences), the asymptotic distribution theory of the sample extreme values is developed. The asymptotic independence of the subsequence extreme values is also studied.

5.
The problem of selecting the best of k populations is studied for data which are incomplete because some of the values have been deleted randomly. This situation arises in extreme value analysis, where only data exceeding a threshold are observable. For increasing sample size we study the case where the probability that a value is observed tends to zero but the sparse condition is satisfied, so that the mean number of observable values in each population is bounded away from zero and infinity as the sample size tends to infinity. The incomplete data are described by thinned point processes, which are approximated by Poisson point processes. Under weak assumptions and after suitable transformations these processes converge to a Poisson point process. Optimal selection rules for the limit model are used to construct asymptotically optimal selection rules for the original sequence of models. The results are applied to extreme value data exceeding high thresholds.

6.
In this paper we propose a family of tests for exponentiality against the IDMRL alternative. Here we assume that the turning point or the proportion before the turning point is unknown. We derive the asymptotic null distributions of the test statistics and obtain their asymptotic critical values based on Durbin's approximation method. A simulation study is conducted to evaluate the proposed tests.

7.
We consider the likelihood ratio test (LRT) process related to the test of the absence of a QTL (a quantitative trait locus, i.e. a gene with a quantitative effect on a trait) on the interval [0, T] representing a chromosome. The originality of this study is that we work under selective genotyping: only the individuals with extreme phenotypes are genotyped. We give the asymptotic distribution of this LRT process under the null hypothesis that there is no QTL on [0, T] and under local alternatives with a QTL at t on [0, T]. We show that the LRT process is asymptotically the square of a ‘non-linear interpolated and normalized Gaussian process’. We give a simple formula for computing the supremum of the square of this interpolated process. We prove that the genotyping must be symmetric and that the threshold is exactly the same as in the situation where all the individuals are genotyped.

8.
We consider the problem of parameter estimation for inhomogeneous space-time shot-noise Cox point processes. We explore the possibility of using a stepwise estimation method and dimensionality-reducing techniques to estimate different parts of the model separately. We discuss the estimation method using projection processes and propose a refined method that avoids projection to the temporal domain. This remedies the main flaw of the method using projection processes: possible overlapping, in the projection process, of clusters that are clearly separated in the original space-time process. This issue is more prominent in the temporal projection process, where the amount of information lost by projection is higher than in the spatial projection process. For the refined method, we derive consistency and asymptotic normality results under increasing-domain asymptotics and appropriate moment and mixing assumptions. We also present a simulation study suggesting that cluster overlapping is successfully overcome by the refined method.

9.
The extreme value distribution has been extensively used to model natural phenomena such as rainfall and floods, and also in modeling lifetimes and material strengths. Maximum likelihood estimation (MLE) for the parameters of the extreme value distribution leads to likelihood equations that have to be solved numerically, even when the complete sample is available. In this paper, we discuss point and interval estimation based on progressively Type-II censored samples. Through an approximation in the likelihood equations, we obtain explicit estimators which are approximations to the MLEs. Using these approximate estimators as starting values, we obtain the MLEs using an iterative method and examine numerically their bias and mean squared error. The approximate estimators compare quite favorably to the MLEs in terms of both bias and efficiency. Results of the simulation study, however, show that the probability coverages of the pivotal quantities (for location and scale parameters) based on asymptotic normality are unsatisfactory for both these estimators and particularly so when the effective sample size is small. We, therefore, suggest the use of unconditional simulated percentage points of these pivotal quantities for the construction of confidence intervals. The results are presented for a wide range of sample sizes and different progressive censoring schemes. We conclude with an illustrative example.
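
As a rough illustration of the likelihood being maximized, the sketch below (Python/SciPy) numerically maximizes the progressively Type-II censored log-likelihood for a Gumbel (extreme value) model. The data, the censoring scheme r, and the starting values are all hypothetical, and the paper's approximate explicit estimators are not reproduced here.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import gumbel_r

def neg_loglik(params, x, r):
    """Negative progressively Type-II censored Gumbel log-likelihood (sketch).

    x : ordered observed failure times, r : number of items withdrawn at each failure.
    """
    mu, log_sigma = params
    sigma = np.exp(log_sigma)                       # log-parameterized scale keeps sigma > 0
    z = (x - mu) / sigma
    logf = -np.log(sigma) - z - np.exp(-z)          # Gumbel (max) log-density
    logS = np.log1p(-np.exp(-np.exp(-z)))           # log survival function
    return -(logf.sum() + np.sum(r * logS))

# hypothetical ordered failure times and censoring scheme (m = 8, n = m + sum(r) = 15)
x = np.sort(gumbel_r.rvs(loc=2.0, scale=0.5, size=8, random_state=1))
r = np.array([1, 0, 2, 0, 0, 1, 0, 3])
res = minimize(neg_loglik, x0=[x.mean(), 0.0], args=(x, r), method="Nelder-Mead")
mu_hat, sigma_hat = res.x[0], np.exp(res.x[1])
```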

10.
Point process models are a natural approach for modelling data that arise as point events. In the case of Poisson counts, these may be fitted easily as a weighted Poisson regression. Point processes lack the notion of sample size. This is problematic for model selection, because various classical criteria such as the Bayesian information criterion (BIC) are a function of the sample size, n, and are derived in an asymptotic framework where n tends to infinity. In this paper, we develop an asymptotic result for Poisson point process models in which the observed number of point events, m, plays the role that sample size does in the classical regression context. Following from this result, we derive a version of BIC for point process models, and when fitted via penalised likelihood, conditions for the LASSO penalty that ensure consistency in estimation and the oracle property. We discuss challenges extending these results to the wider class of Gibbs models, of which the Poisson point process model is a special case.
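
The criterion itself is simple to state: with the observed point count m playing the role of sample size, a BIC-style penalty can be sketched as below (Python). The helper name and the numbers in the usage line are illustrative, not an API or results from the paper.

```python
import numpy as np

def point_process_bic(loglik, n_params, n_points):
    """BIC-type criterion for a fitted Poisson point process model, using the
    observed number of points m in place of the classical sample size n (sketch)."""
    return -2.0 * loglik + n_params * np.log(n_points)

# comparing two hypothetical fits (log-likelihoods and m are made-up values)
print(point_process_bic(-812.4, n_params=3, n_points=150),
      point_process_bic(-809.9, n_params=5, n_points=150))
```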

11.
Extreme Values and Haar Series Estimates of Point Process Boundaries
Abstract. We present a new method for estimating the edge of a two-dimensional bounded set, given a finite random set of points drawn from the interior. The estimator is based both on Haar series and on extreme values of the point process. We give conditions for various kinds of convergence and obtain remarkably different possible limit distributions. We propose a method of reducing the negative bias, illustrated by a simulation.
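
A minimal sketch of the idea, assuming points (x, y) drawn under an unknown boundary on [0, 1], is the piecewise-constant (Haar-type) estimate built from cell maxima shown below (Python); the bias-reduction step mentioned in the abstract is omitted, and the cell count is an arbitrary choice.

```python
import numpy as np

def haar_boundary_estimate(x, y, n_cells=16):
    """Piecewise-constant boundary estimate from cell maxima (illustrative sketch).

    Points (x, y) are assumed to lie under an unknown boundary over [0, 1];
    the estimate on each cell is the largest observed y in that cell.
    """
    x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
    edges = np.linspace(0.0, 1.0, n_cells + 1)
    idx = np.clip(np.digitize(x, edges) - 1, 0, n_cells - 1)   # cell index of each point
    est = np.zeros(n_cells)
    for j in range(n_cells):
        in_cell = idx == j
        est[j] = y[in_cell].max() if in_cell.any() else 0.0    # empty cells left at 0
    return edges, est
```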

12.
In the paper we compare several parameterized estimators for the positive extreme value index, which is a very important parameter in the estimation of the probability of rare events. First, an asymptotic comparison at optimal levels of the corresponding tail index estimators is performed. Second, the practical validity of the asymptotic results for moderate finite samples is examined by means of Monte-Carlo simulations. We demonstrate that the theoretical dominance of positive extreme value index estimators that are asymptotically normal with a null asymptotic bias is not reflected in the Monte-Carlo simulations. Moreover, estimators of this type do not demonstrate stability in the sense of the empirical mean-squared error.
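
A Monte-Carlo exercise of this kind is straightforward to set up. The sketch below (Python) evaluates the empirical bias and mean-squared error of the classical Hill estimator on Pareto samples; the Hill estimator, the tail index 0.5 and the chosen sample fraction are illustrative and not necessarily among the parameterized estimators compared in the paper.

```python
import numpy as np

def hill(x, k):
    """Hill estimator of the positive extreme value (tail) index
    based on the k largest order statistics."""
    xs = np.sort(np.asarray(x, dtype=float))
    return np.mean(np.log(xs[-k:] / xs[-k - 1]))

rng = np.random.default_rng(42)
gamma_true, n, k, reps = 0.5, 1000, 100, 2000
# numpy's pareto draws Lomax variates; adding 1 gives a classical Pareto with tail index gamma_true
est = np.array([hill(rng.pareto(1.0 / gamma_true, n) + 1.0, k) for _ in range(reps)])
print("empirical bias:", est.mean() - gamma_true,
      "empirical MSE:", np.mean((est - gamma_true) ** 2))
```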

13.
We consider regularizations by convolution of the empirical process and study the asymptotic behaviour of non-linear functionals of this process. Using a result for the same type of non-linear functionals of the Brownian bridge, shown in a previous paper [4], and a strong approximation theorem, we prove several results for the p-deviation in estimation of the derivatives of the density. We also study the asymptotic behaviour of the number of crossings of the smoothed empirical process defined by Yukich [17] and of a modified version of the Kullback deviation.

14.
The bivariate extreme value condition (see (1.1) below) includes the marginal extreme value conditions and the existence of the (extreme) dependence function. Two cases are of interest: asymptotic independence and asymptotic dependence. In this paper, we investigate testing the existence of the dependence function under the null hypothesis of asymptotic independence and present two suitable test statistics. A small simulation study is presented and an application to real data is shown. The other case, with the null hypothesis of asymptotic dependence, has already been investigated.

15.
In this article we suggest a new multivariate autoregressive process for modeling time-dependent extreme value distributed observations. The idea behind the approach is to transform the original observations into latent variables that are univariate normally distributed. Then the vector autoregressive DCC model is fitted to the multivariate latent process. The distributional properties of the suggested model are studied extensively. The process parameters are estimated by applying a two-stage estimation procedure. We derive a prediction interval for future values of the suggested process. The results are applied in an empirical study modeling the behavior of extreme daily stock prices.
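
The transformation step can be sketched as below (Python/SciPy), assuming a Gumbel margin with known parameters. In practice the marginal parameters would be estimated in the first stage, and the latent normal series for all assets would then be stacked and fed to the VAR/DCC model; the data and parameter values here are placeholders.

```python
import numpy as np
from scipy.stats import gumbel_r, norm

# hypothetical Gumbel-distributed daily maxima (location and scale assumed known)
x = gumbel_r.rvs(loc=0.0, scale=1.0, size=500, random_state=7)

# probability integral transform to a latent standard normal variable
z = norm.ppf(gumbel_r.cdf(x, loc=0.0, scale=1.0))

# predictions on the latent scale would be mapped back via
# gumbel_r.ppf(norm.cdf(z_hat), loc=0.0, scale=1.0)
```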

16.
We develop a likelihood ratio test for an abrupt change point in Weibull hazard functions with covariates, including the two-piece constant hazard as a special case. We first define the log-likelihood ratio test statistic as the supremum of the profile log-likelihood ratio process over the interval which may contain an unknown change point. Using local asymptotic normality (LAN) and empirical measure, we show that the profile log-likelihood ratio process converges weakly to a quadratic form of Gaussian processes. We determine the critical values of the test and discuss how the test can be used for model selection. We also illustrate the method using the Chronic Granulomatous Disease (CGD) data.
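
For the two-piece constant hazard special case mentioned above, the profile log-likelihood ratio is easy to write down. The sketch below (Python, with no covariates or censoring and hypothetical event times) profiles over a grid of candidate change points and returns the supremum of the log-likelihood ratio; it is a simplified stand-in for the paper's Weibull-with-covariates procedure.

```python
import numpy as np

def two_piece_loglik(times, tau):
    """Profile log-likelihood of a two-piece constant hazard with change point tau
    (sketch: uncensored event times, no covariates)."""
    t = np.asarray(times, dtype=float)
    d1 = np.sum(t <= tau)
    d2 = t.size - d1
    T1 = np.sum(np.minimum(t, tau))         # total exposure before tau
    T2 = np.sum(np.maximum(t - tau, 0.0))   # total exposure after tau
    ll = 0.0
    if d1 > 0:
        ll += d1 * np.log(d1 / T1) - d1     # hazard MLE d1 / T1 plugged in
    if d2 > 0:
        ll += d2 * np.log(d2 / T2) - d2
    return ll

def sup_log_lr(times, grid):
    """Supremum of the profile log-likelihood ratio over candidate change points."""
    t = np.asarray(times, dtype=float)
    ll0 = t.size * np.log(t.size / t.sum()) - t.size   # single constant hazard
    return max(two_piece_loglik(t, tau) for tau in grid) - ll0

# hypothetical piecewise-exponential data: hazard 0.5 before t = 1.5, hazard 2.0 after
rng = np.random.default_rng(5)
e = rng.exponential(1.0, size=200)
times = np.where(e / 0.5 < 1.5, e / 0.5, 1.5 + (e - 0.5 * 1.5) / 2.0)
print(sup_log_lr(times, grid=np.linspace(0.5, 3.0, 26)))
```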

17.
A conditional approach for multivariate extreme values (with discussion)
Summary. Multivariate extreme value theory and methods concern the characterization, estimation and extrapolation of the joint tail of the distribution of a d-dimensional random variable. Existing approaches are based on limiting arguments in which all components of the variable become large at the same rate. This limit approach is inappropriate when the extreme values of all the variables are unlikely to occur together or when interest is in regions of the support of the joint distribution where only a subset of components is extreme. In practice this restricts existing methods to applications where d is typically 2 or 3. Under an assumption about the asymptotic form of the joint distribution of a d-dimensional random variable conditional on its having an extreme component, we develop an entirely new semiparametric approach which overcomes these existing restrictions and can be applied to problems of any dimension. We demonstrate the performance of our approach and its advantages over existing methods by using theoretical examples and simulation studies. The approach is used to analyse air pollution data and reveals complex extremal dependence behaviour that is consistent with scientific understanding of the process. We find that the dependence structure exhibits marked seasonality, with extremal dependence between some pollutants being significantly greater than the dependence at non-extreme levels.

18.
The paper considers generalized maximum likelihood asymptotic power one tests which aim to detect a change point in logistic regression when the alternative specifies that a change occurred in the parameters of the model. A guaranteed non-asymptotic upper bound for the significance level of each of the tests is presented. For cases in which the test supports the conclusion that there was a change point, we propose a maximum likelihood estimator of that point and present results regarding the asymptotic properties of the estimator. An important field of application of this approach is occupational medicine, where for many chemical compounds and other agents so-called threshold limit values (TLVs) are specified. We demonstrate applications of the test and the maximum likelihood estimation of the change point using an actual problem that was encountered with real data.

19.
Summary. Many geophysical regression problems require the analysis of large (more than 10^4 values) data sets, and, because the data may represent mixtures of concurrent natural processes with widely varying statistical properties, contamination of both response and predictor variables is common. Existing bounded influence or high breakdown point estimators frequently lack the ability to eliminate extremely influential data and/or the computational efficiency to handle large data sets. A new bounded influence estimator is proposed that combines high asymptotic efficiency for normal data, high breakdown point behaviour with contaminated data and computational simplicity for large data sets. The algorithm combines a standard M-estimator to downweight data corresponding to extreme regression residuals and removal of overly influential predictor values (leverage points) on the basis of the statistics of the hat matrix diagonal elements. For this, the exact distribution of the hat matrix diagonal elements p_ii for complex multivariate Gaussian predictor data is shown to be β(p_ii, m, N − m), where N is the number of data and m is the number of parameters. Real geophysical data from an auroral zone magnetotelluric study which exhibit severe outlier and leverage point contamination are used to illustrate the estimator's performance. The examples also demonstrate the utility of looking at both the residual and the hat matrix distributions through quantile–quantile plots to diagnose robust regression problems.
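
The leverage-screening step can be sketched as follows (Python). Note that the Beta(m, N − m) reference distribution quoted above is derived in the paper for complex multivariate Gaussian predictors, so applying it to an arbitrary real design matrix, as done here, is only an illustration, and the cutoff probability is an arbitrary choice.

```python
import numpy as np
from scipy.stats import beta

def leverage_flags(X, alpha=0.01):
    """Flag high-leverage rows by comparing hat-matrix diagonals p_ii
    with an upper Beta(m, N - m) quantile (illustrative cutoff)."""
    N, m = X.shape
    Q, _ = np.linalg.qr(X)               # thin QR; hat matrix H = Q @ Q.T
    h = np.sum(Q * Q, axis=1)            # diagonal elements p_ii of H
    cutoff = beta.ppf(1.0 - alpha, m, N - m)
    return h > cutoff

# hypothetical design: 500 rows, 4 predictors, with one injected leverage point
rng = np.random.default_rng(1)
X = rng.standard_normal((500, 4))
X[0] *= 10.0                             # inflate one row to create a leverage point
print(np.flatnonzero(leverage_flags(X)))
```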

20.
Penalized likelihood inference in extreme value analyses
Models for extreme values are usually based on detailed asymptotic arguments, for which strong ergodic assumptions such as stationarity, or prescribed perturbations from stationarity, are required. In most applications of extreme value modelling such assumptions are not satisfied, but the type of departure from stationarity is either unknown or complex, making asymptotic calculations infeasible. This has led to various approaches in which standard extreme value models are used as building blocks for the conditional or local behaviour of processes, with more general statistical techniques being used at the modelling stage to handle the non-stationarity. This paper presents another approach in this direction based on penalized likelihood. This particular approach has some advantages: the method has a simple interpretation; computations for estimation are relatively straightforward using standard algorithms; and a simple reinterpretation of the model enables broader inferences, such as confidence intervals, to be obtained using MCMC methodology. Methodological details together with applications to both athletics and environmental data are given.
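
As a toy version of the penalized-likelihood idea (not the paper's model), the sketch below (Python/SciPy) fits a Gumbel model with a freely varying location per time point, penalizing second differences of the location path to enforce smoothness. The data, the choice of a Gumbel rather than a full GEV, and the smoothing weight are all placeholders.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import gumbel_r

def penalized_nll(params, x, lam):
    """Negative Gumbel log-likelihood with a second-difference roughness penalty
    on a time-varying location path (toy sketch of penalized likelihood)."""
    mu, log_sigma = params[:-1], params[-1]
    nll = -np.sum(gumbel_r.logpdf(x, loc=mu, scale=np.exp(log_sigma)))
    return nll + lam * np.sum(np.diff(mu, n=2) ** 2)

# hypothetical annual maxima with a slowly drifting location
loc_path = np.linspace(0.0, 2.0, 60)
x = gumbel_r.rvs(loc=loc_path, scale=1.0, size=60, random_state=3)
x0 = np.concatenate([np.full(60, x.mean()), [0.0]])    # flat start, log-scale 0
fit = minimize(penalized_nll, x0, args=(x, 10.0), method="L-BFGS-B")
mu_path, sigma_hat = fit.x[:-1], np.exp(fit.x[-1])
```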
