Similar Articles
20 similar articles found (search time: 578 ms)
1.
《Econometric Reviews》2008,27(1):268-297
Nonlinear functions of multivariate financial time series can exhibit long memory and fractional cointegration. However, tools for analysing these phenomena have principally been justified under assumptions that are invalid in this setting. Determination of asymptotic theory under more plausible assumptions can be complicated and lengthy. We discuss these issues and present a Monte Carlo study, showing that asymptotic theory should not necessarily be expected to provide a good approximation to finite-sample behavior.  相似文献   

2.
We give an overview of several aspects arising in the statistical analysis of extreme risks with actuarial applications in view. In particular it is demonstrated that empirical process theory is a very powerful tool, both for the asymptotic analysis of extreme value estimators and to devise tools for the validation of the underlying model assumptions. While the focus of the paper is on univariate tail risk analysis, the basic ideas of the analysis of the extremal dependence between different risks are also outlined. Here we emphasize some of the limitations of classical multivariate extreme value theory and sketch how a different model proposed by Ledford and Tawn can help to avoid pitfalls. Finally, these theoretical results are used to analyze a data set of large claim sizes from health insurance.
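The univariate tail-risk analysis this abstract refers to rests on estimators of the tail index. As a minimal illustration (not the paper's own procedure), here is the classical Hill estimator applied to a simulated Pareto sample; the sample, the choice k = 2000, and the seed are assumptions made for the sketch:

```python
import math
import random

def hill_estimator(sample, k):
    """Hill estimator of the tail index alpha from the k largest order statistics."""
    top = sorted(sample, reverse=True)[: k + 1]
    logs = [math.log(v) for v in top]
    # gamma is the extreme value index; alpha = 1 / gamma for a heavy right tail
    gamma = sum(logs[i] - logs[k] for i in range(k)) / k
    return 1.0 / gamma

random.seed(0)
# Pareto tail with index 2: P(X > x) = x**-2 for x >= 1
data = [(1.0 - random.random()) ** -0.5 for _ in range(100_000)]
print(hill_estimator(data, 2000))  # should be close to 2
```

In practice the estimate is stable only over a suitable range of k, so a Hill plot across many values of k is usually inspected before committing to one.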

3.
Most Markov chain Monte Carlo (MCMC) users address the convergence problem by applying diagnostic tools to the output produced by running their samplers. Potentially useful diagnostics can be borrowed from diverse areas such as time series. One such method is phase randomization. This paper describes this method in the context of MCMC, summarizes its characteristics, and contrasts its performance with those of the more common diagnostic tests for MCMC. It is observed that the new tool contributes information about third- and higher-order cumulant behaviour which is important in characterizing certain forms of nonlinearity and non-stationarity.
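The phase-randomization idea can be sketched in a few lines: surrogate series share the original periodogram (second-order structure) but have randomized Fourier phases, so statistics sensitive to higher-order cumulants can be compared between the chain and its surrogates. This is a toy sketch under stated assumptions (an AR(1) series standing in for sampler output, a naive O(n²) DFT), not the paper's implementation:

```python
import cmath
import math
import random

def dft(x):
    n = len(x)
    return [sum(x[t] * cmath.exp(-2j * math.pi * k * t / n) for t in range(n))
            for k in range(n)]

def idft(X):
    n = len(X)
    return [sum(X[k] * cmath.exp(2j * math.pi * k * t / n) for k in range(n)).real / n
            for t in range(n)]

def phase_randomize(x, rng):
    """Surrogate with the same periodogram as x but randomized Fourier phases."""
    n = len(x)
    X = dft(x)
    Y = X[:]
    for k in range(1, n // 2):
        phi = rng.uniform(0.0, 2.0 * math.pi)
        Y[k] = abs(X[k]) * cmath.exp(1j * phi)
        Y[n - k] = Y[k].conjugate()   # conjugate symmetry keeps the surrogate real
    return idft(Y)

rng = random.Random(1)
chain = [0.0]
for _ in range(255):                  # toy AR(1) series standing in for MCMC output
    chain.append(0.8 * chain[-1] + rng.gauss(0.0, 1.0))

surrogate = phase_randomize(chain, rng)

def var(z):
    m = sum(z) / len(z)
    return sum((v - m) ** 2 for v in z) / len(z)

# second-order structure (variance, autocovariance) is preserved exactly;
# third- and higher-order structure is scrambled
print(var(chain), var(surrogate))
```

By Parseval's identity the surrogate's variance matches the original's, which is what makes the comparison of higher-order statistics meaningful.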

4.
5.
The analysis of extreme values is often required from short series which are biasedly sampled or contain outliers. Data for sea-levels at two UK east coast sites and data on athletics records for women's 3000 m track races are shown to exhibit such characteristics. Univariate extreme value methods provide a poor quantification of the extreme values for these data. By using bivariate extreme value methods we analyse these data jointly with related observations, from neighbouring coastal sites and 1500 m races respectively. We show that using bivariate methods provides substantial benefits, both in these applications and more generally, with the amount of information gained being determined by the degree of dependence, the lengths and amount of overlap of the two series, the homogeneity of the marginal characteristics of the variables, and the presence and type of outliers.

6.
Summary.  A fundamental issue in applied multivariate extreme value analysis is modelling dependence within joint tail regions. The primary focus of this work is to extend the classical pseudopolar treatment of multivariate extremes to develop an asymptotically motivated representation of extremal dependence that also encompasses asymptotic independence. Starting with the usual mild bivariate regular variation assumptions that underpin the coefficient of tail dependence as a measure of extremal dependence, our main result is a characterization of the limiting structure of the joint survivor function in terms of an essentially arbitrary non-negative measure that must satisfy some mild constraints. We then construct parametric models from this new class and study in detail one example that accommodates asymptotic dependence, asymptotic independence and asymmetry within a straightforward parsimonious parameterization. We provide a fast simulation algorithm for this example and detail likelihood-based inference including tests for asymptotic dependence and symmetry which are useful for submodel selection. We illustrate this model by application to both simulated and real data. In contrast with the classical multivariate extreme value approach, which concentrates on the limiting distribution of normalized componentwise maxima, our framework focuses directly on the structure of the limiting joint survivor function and provides significant extensions of both the theoretical and the practical tools that are available for joint tail modelling.

7.
The generalized extreme value (GEV) distribution is widely used to model extreme events, but on its own it offers limited means of predicting the distribution at ungauged sites. This has motivated studies of spatial dependence within extreme events in continuous space using recorded observations. We model annual maximum daily rainfall data from 25 locations for the period 1982 to 2013. The spatial GEV model is fitted under the working assumption that observations at different stations are mutually independent. Furthermore, we divide the study area into two regions to improve the model fit and identify the best model for each region. We show that the regional spatial GEV model captures the spatial pattern better than a single spatial GEV model fitted over the entire region. The advantage of spatial extreme modeling is that more robust return levels and indices of extreme rainfall can be obtained both for observed stations and for locations without observed data. The model thus supports assessing the effects of, and vulnerability to, heavy rainfall in northeast Thailand.
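Return levels such as those mentioned above follow from the GEV quantile function: the T-year return level is the (1 − 1/T) quantile of the annual-maximum distribution. A minimal sketch with purely hypothetical parameter values (not the fitted Thai estimates):

```python
import math

def gev_return_level(mu, sigma, xi, T):
    """(1 - 1/T) quantile of GEV(mu, sigma, xi): the T-year return level."""
    y = -math.log(1.0 - 1.0 / T)
    if abs(xi) < 1e-9:                      # Gumbel (xi -> 0) limit
        return mu - sigma * math.log(y)
    return mu + sigma * (y ** -xi - 1.0) / xi

# hypothetical location/scale/shape for an annual-maximum daily rainfall series (mm)
for T in (10, 50, 100):
    print(T, round(gev_return_level(120.0, 35.0, 0.12, T), 1))
```

With a positive shape parameter the return level grows without bound in T, which is why the shape estimate dominates the uncertainty of long-horizon extrapolation.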

8.
Multivariate extreme events are typically modelled using multivariate extreme value distributions. Unfortunately, there exists no finite parametrization for the class of multivariate extreme value distributions. One common approach is to model extreme events using some flexible parametric subclass. This approach has been limited to only two or three dimensions, primarily because suitably flexible high-dimensional parametric models have prohibitively complex density functions. We present an approach that allows a number of popular flexible models to be used in arbitrarily high dimensions. The approach easily handles missing and censored data, and can be employed when modelling componentwise maxima and multivariate threshold exceedances. The approach is based on a representation using conditionally independent marginal components, conditioning on positive stable random variables. We use Bayesian inference, where the conditioning variables are treated as auxiliary variables within Markov chain Monte Carlo simulations. We demonstrate these methods with an application to sea-levels, using data collected at 10 sites on the east coast of England.

9.
Abstract

The generalized extreme value (GEV) distribution arises as the limiting distribution of block maxima of size n and is the standard tool for modeling extreme events. Extreme data can, however, contain an excessive number of zeros, which makes these events difficult to analyze and estimate with the usual GEV distribution. Zero-inflated distributions (ZID) are widely used in the literature to model data with excess zeros through an inflation parameter w. The present work develops a new approach for analyzing zero-inflated extreme values, applied to monthly maximum precipitation data in which months without any precipitation are recorded as zeros. Inference is carried out in the Bayesian paradigm, with parameters estimated by numerical approximation of the posterior distribution using Markov chain Monte Carlo (MCMC) methods. Time series from several cities in the northeastern region of Brazil, some dominated by non-rainy months, are analyzed. The applications show that this approach yields more accurate results and better goodness-of-fit measures than the standard extreme value analysis.
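The zero-inflated mechanism described above can be sketched by simulation: with probability w the observation is an exact zero, otherwise it is drawn from a GEV via inverse-transform sampling. All parameter values below are illustrative assumptions, not the fitted Brazilian estimates:

```python
import math
import random

def rzigev(n, w, mu, sigma, xi, rng):
    """Simulate a zero-inflated GEV: zero with probability w, else GEV(mu, sigma, xi)."""
    out = []
    for _ in range(n):
        if rng.random() < w:
            out.append(0.0)                     # a dry month recorded as zero
        else:
            t = -math.log(rng.random())         # inverse-transform GEV draw
            if xi:
                out.append(mu + sigma * (t ** -xi - 1.0) / xi)
            else:
                out.append(mu - sigma * math.log(t))
    return out

rng = random.Random(42)
# illustrative parameters only: w = 0.3, GEV(50, 20, 0.1) "monthly maximum rainfall"
sample = rzigev(10_000, 0.3, 50.0, 20.0, 0.1, rng)
zero_frac = sum(v == 0.0 for v in sample) / len(sample)
print(zero_frac)  # near the inflation parameter w = 0.3
```

The corresponding likelihood is a two-part mixture, with a point mass at zero weighted by w and the GEV density weighted by 1 − w, which is what the MCMC targets.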

10.
Generalized exponential, geometric extreme exponential and Weibull distributions are three non-negative skewed distributions that are suitable for analysing lifetime data. We present diagnostic tools based on the likelihood ratio test (LRT) and the minimum Kolmogorov distance (KD) method to discriminate between these models. Probability of correct selection has been calculated for each model and for several combinations of shape parameters and sample sizes using Monte Carlo simulation. Application of LRT and KD discrimination methods to some real data sets has also been studied.
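The minimum Kolmogorov distance criterion amounts to computing the sup-distance between the empirical CDF and each candidate CDF and selecting the smallest. A minimal sketch, with fully specified candidate CDFs for simplicity (the paper first estimates the parameters of each family):

```python
import math
import random

def kolmogorov_distance(sample, cdf):
    """Sup-distance between the empirical CDF of sample and a candidate CDF."""
    xs = sorted(sample)
    n = len(xs)
    d = 0.0
    for i, x in enumerate(xs):
        F = cdf(x)
        d = max(d, abs((i + 1) / n - F), abs(i / n - F))
    return d

rng = random.Random(7)
data = [rng.expovariate(1.0) for _ in range(2000)]   # true model: unit-rate exponential

candidates = {
    "exponential(1)": lambda x: 1.0 - math.exp(-x),
    "weibull(k=2)":   lambda x: 1.0 - math.exp(-x * x),
}
best = min(candidates, key=lambda name: kolmogorov_distance(data, candidates[name]))
print(best)  # the exponential wins on minimum Kolmogorov distance
```

Checking both (i+1)/n and i/n against F at each order statistic captures the supremum exactly, since the empirical CDF jumps at the data points.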

11.
A test for two events being mutually exclusive is presented for the case in which there are known rates of misclassification of the events. The test can be utilized in other situations, such as to test whether a set is a subset of another set. In the test, the null value of the probability of the intersection is replaced by the expected value of the number determined to be in the intersection by the imperfect diagnostic tools. The test statistic is the number in a sample that is judged to be in the intersection. Medical testing applications are emphasized.

12.
Summary.  The paper describes a method of estimating the false negative fraction of a multiple-screening test when individuals who test negatively on all K tests do not have their true disease status verified. The method proposed makes no explicit assumptions about the underlying heterogeneity of the population or about serial correlation of test results within an individual. Rather, it is based on estimating false negative fractions conditionally on observed diagnostic histories and extrapolating the observed patterns in these empirical frequencies by using logistic regression. The method is illustrated on, and motivated by, data on a multiple-screening test for bowel cancer.

13.
ABSTRACT

In fitting a power-law process, we show that constructing the empirical recurrence rate time series either simplifies the modeling task or liberates a point process restrained by a key parametric model assumption, such as the monotonicity requirement on the intensity function. The technique can be applied to seasonal events occurring in spurts or clusters, because the autoregressive integrated moving average (ARIMA) procedure provides a comprehensive set of tools with great flexibility. Essentially, we consolidate two of the most powerful modeling tools for stochastic processes and time series in the statistical literature to handle counts of events in a Poisson or Poisson-like process.
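The empirical recurrence rate at time t is simply the cumulative number of events up to t divided by t; it is this derived time series that the ARIMA machinery is then fitted to. A minimal sketch with made-up event times:

```python
def empirical_recurrence_rate(event_times, horizon):
    """Cumulative event count divided by elapsed time, at t = 1, ..., horizon."""
    events = sorted(event_times)
    rates, count, j = [], 0, 0
    for t in range(1, horizon + 1):
        while j < len(events) and events[j] <= t:
            count += 1
            j += 1
        rates.append(count / t)
    return rates

# hypothetical event times (say, months in which clustered events occurred) over 20 periods
rates = empirical_recurrence_rate([3, 4, 4, 9, 15, 16], 20)
print(rates)  # ends at 6 events / 20 periods = 0.3
```

Because the rate series is an average, it smooths the raw counts into a conventional time series to which standard ARIMA identification and forecasting tools apply.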

14.
Flood events can be caused by several different meteorological circumstances. For example, heavy rain events often lead to short flood events with high peaks, whereas snowmelt normally results in events of very long duration with a high volume. Both event types have to be considered in the design of flood protection systems. Unfortunately, all these different event types are often mixed together in annual maximum series (AMS), leading to inhomogeneous samples, and certain event types are underrepresented in the AMS. This is especially unsatisfactory when the most extreme events arise from such an underrepresented type. Therefore, monthly maximum data are used to enlarge the information available on the different event types. Of course, not every monthly maximum can be declared a flood, so not all events can be included in the flood statistics. To take this into account, a mixture peaks-over-threshold model is applied, with thresholds specifying the flood events of the several types that occur in a given season of the year. This model is then extended to account for the seasonal character of the data. The applicability is shown in a German case study, where the impact of the single event types in different parts of the year is evaluated.

15.
Ion Grama 《Statistics》2019,53(4):807-838
We propose an extension of the standard Cox proportional hazards model which allows the estimation of the probabilities of rare events. It is known that when the data are heavily censored, the estimation of the tail of the survival distribution is not reliable. To improve the estimate of the baseline survival function in the range of the largest observed data, and to extend it beyond, we adjust the tail of the baseline distribution above some threshold by an extreme value model under appropriate assumptions. The survival distributions conditional on the covariates are easily computed from the baseline. A procedure for the automatic choice of the threshold and an aggregated estimate of the survival probabilities are also proposed. The performance is studied by simulations, and an application to two data sets is given.

16.
The problem of selecting the best of k populations is studied for data which are incomplete, as some of the values have been deleted randomly. This situation arises in extreme value analysis, where only data exceeding a threshold are observable. For increasing sample size we study the case where the probability that a value is observed tends to zero, but the sparse condition is satisfied, so that the mean number of observable values in each population is bounded away from zero and infinity as the sample size tends to infinity. The incomplete data are described by thinned point processes, which are approximated by Poisson point processes. Under weak assumptions and after suitable transformations these processes converge to a Poisson point process. Optimal selection rules for the limit model are used to construct asymptotically optimal selection rules for the original sequence of models. The results are applied to extreme value data with high thresholds.

17.
Abstract

Measuring the accuracy of diagnostic tests is crucial in many application areas, including medicine, machine learning and credit scoring. The receiver operating characteristic (ROC) curve and surface are useful tools to assess the ability of diagnostic tests to discriminate between ordered classes or groups. To define these diagnostic tests, the optimal thresholds that maximize the accuracy of the tests must be selected. One procedure commonly used to find the optimal thresholds is maximizing what is known as Youden's index. This article presents nonparametric predictive inference (NPI) for selecting the optimal thresholds of a diagnostic test. NPI is a frequentist statistical method that is explicitly aimed at using few modeling assumptions, enabled through the use of lower and upper probabilities to quantify uncertainty. Based on multiple future observations, the NPI approach is presented for selecting the optimal thresholds for two-group and three-group scenarios. In addition, a pairwise approach is presented for the three-group scenario. The article ends with an example to illustrate the proposed methods and a simulation study of their predictive performance alongside classical methods such as the Youden index. The NPI-based methods show some interesting results that overcome some of the issues concerning the predictive performance of Youden's index.
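For reference, the classical Youden-index rule that the NPI methods are compared against picks the threshold maximizing J(c) = sensitivity(c) + specificity(c) − 1. A minimal empirical sketch with made-up scores (assuming larger scores indicate disease):

```python
def youden_threshold(diseased, healthy, thresholds):
    """Threshold maximizing Youden's J = sensitivity + specificity - 1."""
    def sens(c):
        return sum(x > c for x in diseased) / len(diseased)
    def spec(c):
        return sum(x <= c for x in healthy) / len(healthy)
    return max(thresholds, key=lambda c: sens(c) + spec(c) - 1.0)

healthy  = [0.1, 0.2, 0.3, 0.4, 0.8]   # made-up scores for the healthy group
diseased = [0.5, 0.7, 0.9, 1.1, 1.3]   # made-up scores for the diseased group
c_star = youden_threshold(diseased, healthy, sorted(healthy + diseased))
print(c_star)
```

Evaluating J only at the observed scores suffices, because the empirical sensitivity and specificity are step functions that can change only at data points.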

18.
Extreme Risk Measurement Based on a Markov Regime-Switching Model
We combine Markov regime-switching models with extreme value theory to study the measurement of financial risk. First, a SWARCH-t model is used to capture the sharp volatility and structural changes of the return series; the returns are then converted into a series of standardized residuals. On this basis, the tail distribution of the standardized residuals is fitted by combining the SWARCH-t model with extreme value theory, yielding a dynamic VaR model based on SWARCH-t-EVT, whose validity is then tested. The study shows that the SWARCH-t-EVT model effectively identifies the volatility-regime features of the Shanghai Composite Index and measures its return risk effectively and reasonably, performing especially well at high confidence levels.
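The final value-at-risk step of such an EVT approach typically uses the standard peaks-over-threshold formula, VaR_q = u + (β/ξ)[((n/N_u)(1 − q))^(−ξ) − 1], applied to the standardized residuals. A sketch with hypothetical fitted values (not the paper's SWARCH-t estimates):

```python
def evt_var(u, beta, xi, n, n_exceed, q):
    """POT/GPD value-at-risk at confidence level q above threshold u."""
    return u + (beta / xi) * (((n / n_exceed) * (1.0 - q)) ** -xi - 1.0)

# hypothetical figures: 2000 standardized residuals, 100 exceeding the threshold 1.8,
# with fitted GPD scale 0.6 and shape 0.15
for q in (0.95, 0.99, 0.995):
    print(q, evt_var(u=1.8, beta=0.6, xi=0.15, n=2000, n_exceed=100, q=q))
```

In a dynamic model this quantile of the standardized residuals is rescaled at each date by the conditional volatility forecast to produce the day-ahead VaR.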

19.
We investigate the extremal clustering behaviour of stationary time series that possess two regimes, where the switch is governed by a hidden two-state Markov chain. We also suppose that the process is conditionally Markovian in each latent regime. We prove under general assumptions that above high thresholds these models behave approximately as a random walk in one (called dominant) regime and as a stationary autoregression in the other (dominated) regime. Based on this observation, we propose an estimation and simulation scheme to analyse the extremal dependence structure of such models, taking into account only observations above high thresholds. The properties of the estimation method are also investigated. Finally, as an application, we fit a model to high-level exceedances of water discharge data, simulate extremal events from the fitted model, and show that the (model-based) flood peak, flood duration and flood volume distributions match their observed counterparts.

20.
In this paper, we propose a method to assess influence in skew-Birnbaum–Saunders regression models, an extension of the usual Birnbaum–Saunders (BS) regression model based on the skew-normal distribution. An interesting feature of the new regression model is its capacity to predict extreme percentiles, which is not possible with the BS model. In addition, since the observed likelihood function associated with the new regression model is more complex than that of the usual model, we facilitate parameter estimation using a type-EM algorithm, and we employ influence diagnostic tools that take this algorithm into account. Finally, a numerical illustration includes a brief simulation study and an analysis of real data to demonstrate the proposed methodology.

