Similar Articles
20 similar articles found.
1.
Statistics for Extreme Sea Currents
Estimates of various characteristics of extreme sea currents, such as speeds and their directions, are required when designing offshore structures. This paper extends standard statistical methods for extreme values to handle the directionality, temporal dependence and tidal non-stationarity that are present in sea current extremes. The methods are applied to a short period of data from the Inner Dowsing Light Tower in the North Sea. Substantial benefits over existing methods are obtained from our analysis by decomposing the sea current into tide and surge currents. In particular, we find that at the Inner Dowsing the strong directionality in extreme sea current speeds is completely explained by the tidal current and by directionality in the non-extreme surge currents. This finding aids model fitting and extrapolation.
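The "standard statistical methods for extreme values" that this entry extends start from fitting a generalized extreme value (GEV) distribution to block maxima and reading off a return level. A minimal sketch using simulated speeds rather than the Inner Dowsing data; the block size and 100-block return period are illustrative:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Hypothetical stand-in for observed current speeds: 60 blocks of 720
# hourly values each. Take the block maxima, the classical starting
# point that the paper's tide/surge decomposition refines.
speeds = rng.gumbel(loc=1.0, scale=0.3, size=(60, 720))
block_maxima = speeds.max(axis=1)

# Fit a GEV distribution to the block maxima by maximum likelihood.
shape, loc, scale = stats.genextreme.fit(block_maxima)

# 100-block return level: the speed exceeded once per 100 blocks on average.
return_level = stats.genextreme.ppf(1 - 1 / 100, shape, loc=loc, scale=scale)
```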

2.
The analysis of extreme values is often required from short series that are sampled with bias or contain outliers. Data on sea-levels at two UK east coast sites and on athletics records for women's 3000 m track races are shown to exhibit such characteristics. Univariate extreme value methods provide a poor quantification of the extreme values for these data. Using bivariate extreme value methods, we analyse these data jointly with related observations, from neighbouring coastal sites and from 1500 m races respectively. We show that bivariate methods provide substantial benefits, both in these applications and more generally, with the amount of information gained being determined by the degree of dependence, the lengths and the amount of overlap of the two series, the homogeneity of the marginal characteristics of the variables, and the presence and type of outliers.

3.
Multivariate extreme events are typically modelled using multivariate extreme value distributions. Unfortunately, there exists no finite parametrization for the class of multivariate extreme value distributions. One common approach is to model extreme events using some flexible parametric subclass. This approach has been limited to only two or three dimensions, primarily because suitably flexible high-dimensional parametric models have prohibitively complex density functions. We present an approach that allows a number of popular flexible models to be used in arbitrarily high dimensions. The approach easily handles missing and censored data, and can be employed when modelling componentwise maxima and multivariate threshold exceedances. The approach is based on a representation using conditionally independent marginal components, conditioning on positive stable random variables. We use Bayesian inference, where the conditioning variables are treated as auxiliary variables within Markov chain Monte Carlo simulations. We demonstrate these methods with an application to sea-levels, using data collected at 10 sites on the east coast of England.

4.
5.
6.
Anticipating catastrophes through extreme value modelling
Summary. When catastrophes strike it is easy to be wise after the event. It is also often argued that such catastrophic events are unforeseeable, or at least so implausible as to be negligible for planning purposes. We consider these issues in the context of daily rainfall measurements recorded in Venezuela. Before 1999 simple extreme value techniques were used to assess likely future levels of extreme rainfall, and these gave no particular cause for concern. In December 1999 a daily precipitation event of more than 410 mm, almost three times the magnitude of the previously recorded maximum, caused devastation and an estimated 30000 deaths. We look carefully at the previous history of the process and offer an extreme value analysis of the data—with some methodological novelty—that suggests that the 1999 event was much more plausible than the previous analyses had claimed. Deriving design parameters from the results of such an analysis may have had some mitigating effects on the consequences of the subsequent disaster. The themes of the new analysis are simple: the full exploitation of available data, proper accounting of uncertainty, careful interpretation of asymptotic limit laws and allowance for non-stationarity. The effect on the Venezuelan data analysis is dramatic. The broader implications are equally dramatic; that a naïve use of extreme value techniques is likely to lead to a false sense of security that might have devastating consequences in practice.
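One of the themes listed, allowance for non-stationarity, is commonly implemented by letting the GEV location parameter drift with time and maximizing the likelihood directly. A minimal sketch with simulated annual maxima, not the Venezuelan series; the trend, scale and shape values are illustrative (note SciPy's `genextreme` shape parameter is the negative of the usual ξ):

```python
import numpy as np
from scipy import stats, optimize

rng = np.random.default_rng(0)

# Illustrative annual maxima with a weak upward trend in the GEV
# location parameter (true trend 0.5 per year).
years = np.arange(50)
maxima = stats.genextreme.rvs(-0.1, loc=100 + 0.5 * years, scale=20,
                              size=50, random_state=rng)

def nll(params):
    """Negative log-likelihood of a GEV with location linear in time."""
    mu0, mu1, sigma, xi = params
    if sigma <= 0:
        return np.inf
    return -stats.genextreme.logpdf(maxima, -xi, loc=mu0 + mu1 * years,
                                    scale=sigma).sum()

res = optimize.minimize(nll, x0=[100.0, 0.0, 20.0, 0.1], method="Nelder-Mead")
mu0_hat, mu1_hat, sigma_hat, xi_hat = res.x
```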

7.
A conditional approach for multivariate extreme values (with discussion)
Summary. Multivariate extreme value theory and methods concern the characterization, estimation and extrapolation of the joint tail of the distribution of a d-dimensional random variable. Existing approaches are based on limiting arguments in which all components of the variable become large at the same rate. This limit approach is inappropriate when the extreme values of all the variables are unlikely to occur together or when interest is in regions of the support of the joint distribution where only a subset of components is extreme. In practice this restricts existing methods to applications where d is typically 2 or 3. Under an assumption about the asymptotic form of the joint distribution of a d-dimensional random variable conditional on its having an extreme component, we develop an entirely new semiparametric approach which overcomes these restrictions and can be applied to problems of any dimension. We demonstrate the performance of our approach and its advantages over existing methods by using theoretical examples and simulation studies. The approach is used to analyse air pollution data and reveals complex extremal dependence behaviour that is consistent with scientific understanding of the process. We find that the dependence structure exhibits marked seasonality, with extremal dependence between some pollutants being significantly greater than the dependence at non-extreme levels.

8.
This study demonstrates the decomposition of seasonality and long‐term trend in seismological data observed at irregular time intervals. The decomposition was applied to the estimation of earthquake detection capability using cubic B‐splines and a Bayesian approach, which is similar to the seasonal adjustment model frequently used to analyse economic time‐series data. We employed numerical simulation to verify the method and then applied it to real earthquake datasets obtained in and around the northern Honshu island, Japan. With this approach, we obtained the seasonality of the detection capability related to the annual variation of wind speed and the long‐term trend corresponding to the recent improvement of the seismic network in the studied region.

9.
Comparison of approaches for estimating the probability of coastal flooding
Coastal flooding is typically caused by combinations of extreme water-levels and large waves. Two extreme value methods, one univariate and the other multivariate, have been used for estimating the probability of coastal flooding at an existing flood defence structure and for aiding the design of a new structure. The properties of these two methods are compared in terms of extrapolation, sophistication and use of information for a range of extremal dependence structures. We find that, when applied to the assessment of the safety offered by an existing Dutch dike, the multivariate approach provides the more useful and accurate design information and has the substantial benefits of consistency and reduced statistical analysis when applied to several sites along a Dutch coastline.

10.
The theory of max-stable processes generalizes traditional univariate and multivariate extreme value theory by allowing for processes indexed by a time or space variable. We consider a particular class of max-stable processes, known as M4 processes, that are particularly well adapted to modeling the extreme behavior of multiple time series. We develop procedures for determining the order of an M4 process and for estimating the parameters. To illustrate the methods, some examples are given for modeling jumps in returns in multivariate financial time series. We introduce a new measure to quantify and predict the extreme co-movements in price returns.
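In the moving-maxima construction behind M4 processes, each observation is a weighted maximum of recent i.i.d. unit-Fréchet shocks, which is what produces the simultaneous extreme co-movements mentioned above. A univariate sketch with illustrative weights (the paper's setting is multivariate, and it concerns estimating the order and the weights):

```python
import numpy as np

rng = np.random.default_rng(7)

# Moving-maxima sketch of a univariate M4-type process:
#   Y_t = max_l a_l * Z_{t-l},  Z_t i.i.d. unit Frechet, sum_l a_l = 1.
a = np.array([0.5, 0.3, 0.2])   # illustrative signature weights
L = a.size
n = 5000
z = 1.0 / -np.log(rng.uniform(size=n + L - 1))   # unit-Frechet noise

y = np.array([(a * z[t:t + L]).max() for t in range(n)])

# Because the weights sum to one, each Y_t is again unit Frechet:
# P(Y_t <= u) = exp(-1/u), so roughly exp(-1) of the values fall below 1.
frac_below_one = (y <= 1.0).mean()
```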

11.
The problem of testing hypotheses of a unit root and a structural change in one-dimensional time series is considered. We propose a non-parametric two-step method based on a modified Kolmogorov-Smirnov statistic. At the first step, the hypothesis of stationarity of the observed sample is tested against a unified alternative of statistical non-stationarity (a unit root or a structural change). At the second step, if the stationarity hypothesis is rejected, the hypothesis of an unknown structural change is tested against the alternative of a unit root. We prove that the error probabilities (false classification of hypotheses) of the proposed method converge to zero as the sample size tends to infinity.

12.
Estimates of the largest wind gust that will occur at a given location over a specified period are required by civil engineers. Estimation is usually based on models which are derived from the limiting distributions of maxima of stationary time series and which are fitted to data on extreme gusts. In this paper we develop a model for maximum gusts which also incorporates data on hourly mean speeds through a distributional relationship between maxima and means. This joint model is closely linked to the physical processes which generate the most extreme values and thus provides a mechanism by which data on means can augment those on gusts. It is argued that this increases the credibility of extrapolation in estimates of long period return gusts. The model is shown to provide a good fit to data obtained at a location in northern England and is compared with a more traditional modelling approach, which also performs well for this site.

13.
Penalized likelihood inference in extreme value analyses
Models for extreme values are usually based on detailed asymptotic arguments, for which strong ergodic assumptions such as stationarity, or prescribed perturbations from stationarity, are required. In most applications of extreme value modelling such assumptions are not satisfied, but the type of departure from stationarity is either unknown or complex, making asymptotic calculations infeasible. This has led to various approaches in which standard extreme value models are used as building blocks for conditional or local behaviour of processes, with more general statistical techniques being used at the modelling stage to handle the non-stationarity. This paper presents another approach in this direction based on penalized likelihood. There are some advantages to this particular approach: the method has a simple interpretation; computations for estimation are relatively straightforward using standard algorithms; and a simple reinterpretation of the model enables broader inferences, such as confidence intervals, to be obtained using MCMC methodology. Methodological details together with applications to both athletics and environmental data are given.

14.
The aim of the article is to identify the intraday seasonality in a wind speed time series. Following the traditional approach, the marginal probability law is Weibull, and we therefore consider a seasonal Weibull law. A new estimation and decision procedure for the intraday scale parameter of the seasonal Weibull law is presented. We also give statistical decision-making tools to decide whether to discard the trend parameter and to validate the seasonal model.
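A crude version of estimating an intraday-seasonal Weibull scale is to fit a Weibull distribution separately to each hour of the day. The sketch below does this on simulated wind speeds with a known intraday scale cycle; the shape value, the cycle and the data are all illustrative, and the article's actual estimation and decision procedure is more refined than per-hour fits:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Simulate 200 days of hourly wind speeds whose Weibull scale varies
# by hour of day (shape k fixed, scale follows a sinusoidal cycle).
k = 2.0
hours = np.tile(np.arange(24), 200)
scale_true = 6.0 + 2.0 * np.sin(2 * np.pi * hours / 24)
speeds = scale_true * rng.weibull(k, size=hours.size)

# Per-hour Weibull fits (location fixed at 0), recovering the scale cycle.
est_scale = np.empty(24)
for h in range(24):
    shape_h, _, est_scale[h] = stats.weibull_min.fit(speeds[hours == h], floc=0)
```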

15.
Summary. The isolation of DNA markers that are linked to interesting genes helps plant breeders to select parent plants that transmit useful traits to future generations. Such 'marker-assisted breeding and selection' leans heavily on statistical testing of associations between markers and a well-chosen trait. Statistical association analysis is guided by classical p-values or the false discovery rate and thus relies predominantly on the null hypothesis. The main concern of plant breeders, however, is to avoid missing an important alternative. To judge evidence from this perspective, we complement the traditional p-value with a one-sided 'alternative p-value' which summarizes evidence against a target alternative in the direction of the null hypothesis. This p-value measures 'impotence' as opposed to significance: how likely is it to observe an outcome as extreme as or more extreme than the one that was observed when the data stem from the alternative? We show how a graphical inspection of both p-values can guide marker selection when the null and the alternative hypotheses have comparable importance. We derive formal decision tools with balanced properties yielding different rejection regions for different markers. We apply our approach to study rye-grass plants.
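For a one-sided normal mean test, the 'alternative p-value' idea can be illustrated directly: alongside the usual p-value under the null, compute the probability under a target alternative of an outcome at least as extreme in the direction of the null. All numbers below are illustrative, not from the rye-grass study:

```python
import numpy as np
from scipy import stats

# One-sided test of H0: mu = 0 against a target alternative mu = mu1,
# based on a sample mean xbar of n observations with known sigma.
mu1, sigma, n = 1.0, 1.0, 25
xbar = 0.6                      # illustrative observed sample mean
se = sigma / np.sqrt(n)

# Classical p-value: P(mean >= xbar | H0).
p_null = stats.norm.sf(xbar, loc=0.0, scale=se)

# 'Alternative p-value' (impotence): P(mean <= xbar | mu = mu1),
# i.e. how likely an outcome this close to the null is under the alternative.
p_alt = stats.norm.cdf(xbar, loc=mu1, scale=se)
```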

16.
An important aspect in the modelling of biological phenomena in living organisms, whether the measurements are of blood pressure, enzyme levels, biomechanical movements or heartbeats, is time variation in the data. Thus, the recovery of a 'smooth' regression or trend function from noisy time-varying sampled data becomes a problem of particular interest. Here we use non-linear wavelet thresholding to estimate a regression or trend function in the presence of additive noise which, in contrast to most existing models, does not need to be stationary. (Here, non-stationarity means that the spectral behaviour of the noise is allowed to change slowly over time.) We develop a procedure to adapt existing threshold rules to such situations, e.g. that of a time-varying variance in the errors. Moreover, in the model of curve estimation for functions belonging to a Besov class with locally stationary errors, we derive a near-optimal rate for the risk between the unknown function and our soft or hard threshold estimator, which holds in the general case of an error distribution with bounded cumulants. In the case of Gaussian errors, a lower bound on the asymptotic minimax rate in the wavelet coefficient domain is also obtained. It is also argued that a stronger adaptivity result is possible by the use of a particular location- and level-dependent threshold obtained by minimizing Stein's unbiased estimate of the risk. In this respect, our work generalizes previous results, which cover the situation of correlated but stationary errors. A natural application of our approach is the estimation of the trend function of non-stationary time series under the model of local stationarity. The method is illustrated on both a simulated example and a biostatistical data set, measurements of sheep luteinizing hormone, which exhibits a clear non-stationarity in its variance.
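The core operation of wavelet threshold estimators is soft thresholding of the detail coefficients. A one-level Haar sketch with a single universal threshold on simulated data (the paper's thresholds are level- and location-dependent and adapt to a time-varying noise variance, which this sketch does not attempt):

```python
import numpy as np

rng = np.random.default_rng(5)

# Smooth signal plus Gaussian noise of known standard deviation.
n = 512
t = np.linspace(0, 1, n)
signal = np.sin(2 * np.pi * t)
sigma = 0.3
noisy = signal + sigma * rng.normal(size=n)

# One-level orthonormal Haar transform.
approx = (noisy[0::2] + noisy[1::2]) / np.sqrt(2)
detail = (noisy[0::2] - noisy[1::2]) / np.sqrt(2)

# Soft-threshold the detail coefficients at the universal threshold.
lam = sigma * np.sqrt(2 * np.log(n))
detail_st = np.sign(detail) * np.maximum(np.abs(detail) - lam, 0.0)

# Inverse Haar transform gives the denoised estimate.
denoised = np.empty(n)
denoised[0::2] = (approx + detail_st) / np.sqrt(2)
denoised[1::2] = (approx - detail_st) / np.sqrt(2)
```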

17.
Stationary long memory processes have been extensively studied over the past decades. In financial, economic, or environmental data, seasonality and time-varying long-range dependence can often be observed, so some form of non-stationarity exists. To account for this phenomenon, we propose a new class of stochastic processes: the locally stationary k-factor Gegenbauer process. We present a procedure to estimate the time-varying parameters consistently by applying the discrete wavelet packet transform. The robustness of the algorithm is investigated through a simulation study, and we apply our method to the Nikkei Stock Average 225 (NSA 225) index series.

18.
Wu Hao, Peng Fei. 统计研究 (Statistical Research), 2020, 37(4): 114-128.
The propensity score is an important tool for estimating average treatment effects. In observational studies, however, imbalance between the covariate distributions of the treatment and control groups often produces extreme propensity scores, i.e. scores very close to 0 or 1. This brings the strong ignorability assumption of causal inference close to violation, leading to large bias and variance in the estimated average treatment effect. Li et al. (2018a) proposed covariate balancing weighting, which, under the unconfoundedness assumption, achieves a weighted balance of the covariate distributions and removes the influence of extreme propensity scores. Building on this, we propose a robust and efficient estimator based on covariate balancing weighting, and improve its robustness in empirical applications by incorporating a super learner algorithm; we further extend it to a robust and efficient covariate-balancing-weighted estimator that, in theory, does not depend on the assumptions of either the outcome regression model or the propensity score model. Monte Carlo simulations show that both proposed methods retain very small bias and variance even when the outcome regression model and the propensity score model are both misspecified. In the empirical study, we apply the two methods to right heart catheterization data and find that right heart catheterization increases patient mortality by about 6.3%.
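The covariate balancing (overlap) weights of Li et al. (2018) weight treated units by 1 − e(x) and controls by e(x), which automatically downweights observations with extreme propensity scores. A minimal sketch on simulated data with a homogeneous treatment effect of 2.0; the super learner and the efficiency augmentation of this article are not shown:

```python
import numpy as np
from scipy import optimize, special

rng = np.random.default_rng(21)

# Simulated observational data: treatment assignment depends on x,
# and so does the outcome (true treatment effect = 2.0).
n = 5000
x = rng.normal(size=(n, 2))
e_true = special.expit(1.5 * x[:, 0] - 1.0 * x[:, 1])
treat = rng.uniform(size=n) < e_true
y = 2.0 * treat + x[:, 0] + 0.5 * x[:, 1] + rng.normal(size=n)

# Fit a logistic propensity score model by maximum likelihood.
X = np.column_stack([np.ones(n), x])
def negloglik(beta):
    eta = X @ beta
    return np.sum(np.logaddexp(0.0, eta) - treat * eta)
beta = optimize.minimize(negloglik, np.zeros(3), method="BFGS").x
e_hat = special.expit(X @ beta)

# Overlap weights: 1 - e(x) for treated, e(x) for controls; the
# treatment effect is the weighted difference in outcome means.
w = np.where(treat, 1.0 - e_hat, e_hat)
ate_overlap = (np.sum(w * treat * y) / np.sum(w * treat)
               - np.sum(w * (1 - treat) * y) / np.sum(w * (1 - treat)))
```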

19.
In this article, we propose to evaluate and compare Markov chain Monte Carlo (MCMC) methods to estimate the parameters in a generalized extreme value model. We employed the Bayesian approach using traditional Metropolis-Hastings methods, Hamiltonian Monte Carlo (HMC), and Riemann manifold HMC (RMHMC) methods to obtain the approximations to the posterior marginal distributions of interest. Applications to real datasets and simulation studies provide evidence that the extra analytical work involved in Hamiltonian Monte Carlo algorithms is compensated by a more efficient exploration of the parameter space.
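The Metropolis-Hastings baseline that the article compares against can be sketched as a random-walk sampler on the GEV parameters with flat priors; the data, step sizes and chain length below are illustrative (SciPy's `genextreme` shape parameter is the negative of the usual ξ):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)

# Simulated GEV data (mu = 10, sigma = 2, xi = 0.1).
data = stats.genextreme.rvs(-0.1, loc=10.0, scale=2.0, size=200,
                            random_state=rng)

def log_post(theta):
    """Log-posterior with flat priors; sigma sampled on the log scale."""
    mu, log_sigma, xi = theta
    lp = stats.genextreme.logpdf(data, -xi, loc=mu,
                                 scale=np.exp(log_sigma)).sum()
    return lp if np.isfinite(lp) else -np.inf

# Random-walk Metropolis with componentwise Gaussian proposals.
theta = np.array([10.0, np.log(2.0), 0.0])
cur = log_post(theta)
samples = []
for _ in range(5000):
    prop = theta + rng.normal(scale=[0.2, 0.1, 0.05])
    lp = log_post(prop)
    if np.log(rng.uniform()) < lp - cur:
        theta, cur = prop, lp
    samples.append(theta.copy())
samples = np.asarray(samples)[1000:]   # discard burn-in
```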

20.
This paper proposes a linear mixed model (LMM) with spatial effects, trend, seasonality and outliers for spatio-temporal time series data. A linear trend, dummy variables for seasonality, a binary method for outliers and a multivariate conditional autoregressive (MCAR) model for spatial effects are adopted. A Bayesian method using Gibbs sampling in Markov Chain Monte Carlo is used for parameter estimation. The proposed model is applied to forecast rice and cassava yields, a spatio-temporal data type, in Thailand. The data have been extracted from the Office of Agricultural Economics, Ministry of Agriculture and Cooperatives of Thailand. The proposed model is compared with our previous model, an LMM with MCAR, and a log transformed LMM with MCAR. We found that the proposed model is the most appropriate, using the mean absolute error criterion. It fits the data very well in both the fitting part and the validation part for both rice and cassava. Therefore, it is recommended to be a primary model for forecasting these types of spatio-temporal time series data.
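The trend-plus-seasonal-dummies fixed effects of such a model can be illustrated with ordinary least squares on simulated data; the spatial MCAR effects, outlier indicators and the Gibbs sampler of the paper are beyond this sketch:

```python
import numpy as np

rng = np.random.default_rng(11)

# Simulated quarterly series: linear trend 0.3 per period plus seasonal
# effects [0, 5, -2, 1] (season 0 as baseline) and Gaussian noise.
n_years, n_seasons = 10, 4
t = np.arange(n_years * n_seasons)
season = t % n_seasons
y = (50 + 0.3 * t + np.array([0.0, 5.0, -2.0, 1.0])[season]
     + rng.normal(0, 1, t.size))

# Design matrix: intercept, trend, and seasonal dummies.
X = np.column_stack([np.ones_like(t, dtype=float), t]
                    + [(season == s).astype(float)
                       for s in range(1, n_seasons)])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
```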


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号