Similar Articles
20 similar articles found (search time: 265 ms)
1.
Simulation and extremal analysis of hurricane events (cited 3 times: 0 self-citations, 3 by others)
In regions affected by tropical storms the damage caused by hurricane winds can be catastrophic. Consequently, accurate estimates of hurricane activity in such regions are vital. Unfortunately, the severity of events means that wind speed data are scarce and unreliable, even by standards which are usual for extreme value analysis. In contrast, records of atmospheric pressures are more complete. This suggests a two-stage approach: the development of a model describing spatiotemporal patterns of wind field behaviour for hurricane events; then the simulation of such events, using meteorological climate models, to obtain a realization of associated wind speeds whose extremal characteristics are summarized. This is not a new idea, but we apply careful statistical modelling for each aspect of the model development and simulation, taking the Gulf and Atlantic coastlines of the USA as our study area. Moreover, we address for the first time the issue of spatial dependence in extremes of hurricane events, which we find to have substantial implications for regional risk assessments.

2.
Seasonality and Return Periods of Landfalling Atlantic Basin Hurricanes (cited 1 time: 0 self-citations, 1 by others)
This paper studies the annual arrival cycle and return period properties of landfalling Atlantic Basin hurricanes. A non-homogeneous Poisson process with a periodic intensity function is used to model the annual cycle of hurricane arrival times. Wind speed and central pressure return periods and non-encounter probabilities are estimated by combining the Poisson arrival model with extreme value peaks-over-threshold methods. The data used in this study contain all Atlantic Basin hurricanes that have made landfall in the contiguous United States during the years 1935–98 inclusive.
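A non-homogeneous Poisson process with a periodic intensity, as used in this paper for hurricane arrivals, can be simulated by Lewis–Shedler thinning. The sketch below is illustrative only: the sinusoidal annual intensity and its parameter values are assumptions, not the intensity function fitted in the paper.

```python
import math
import random

def simulate_nhpp(rate, rate_max, t_end, rng):
    """Simulate arrival times of a non-homogeneous Poisson process on
    [0, t_end) by Lewis-Shedler thinning: propose homogeneous arrivals
    at rate rate_max, accept each with probability rate(t) / rate_max."""
    times, t = [], 0.0
    while True:
        t += rng.expovariate(rate_max)          # next candidate arrival
        if t >= t_end:
            return times
        if rng.random() < rate(t) / rate_max:   # thinning step
            times.append(t)

# Hypothetical annual cycle (t in years), peaking once per year:
# mean rate 5 arrivals/year, so rate_max = 5 * (1 + 0.8) = 9 bounds it.
def intensity(t):
    return 5.0 * (1.0 + 0.8 * math.cos(2.0 * math.pi * t))

rng = random.Random(42)
arrivals = simulate_nhpp(intensity, rate_max=9.0, t_end=50.0, rng=rng)
print(len(arrivals))  # around 5 * 50 = 250 arrivals expected on average
```

Because the cosine integrates to zero over whole years, the expected count over 50 years is simply 5 × 50; the thinning step concentrates the accepted arrivals near the annual peak.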

3.
Actuarial science is the foundation of insurance development and the technical support of insurance operations. It has a four-hundred-year history abroad, but was introduced to China only twenty years ago. For actuarial techniques to develop and innovate in China and serve society's needs, one must understand the historical background in which actuarial thought arose, trace the threads along which actuarial theory developed, and truly grasp the essence of actuarial thinking. On this basis, this paper introduces the main representative figures of each period in the development of actuarial science and their academic ideas, explains the influence of actuarial techniques on insurance development in each period, and reviews the historical process by which actuarial science has intersected and merged with compound-interest theory, mathematics, statistics, computing technology and financial economics.

4.
In this paper the collective risk model with Poisson–Lindley and exponential distributions as the primary and secondary distributions, respectively, is developed in a detailed way. It is applied to determine the Bayes premium used in actuarial science and also to compute the regulatory capital in the analysis of operational risk. The results are illustrated with numerous examples and compared with other approaches proposed in the literature for these questions, with considerable differences being observed.
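The primary (claim-count) distribution here is the Poisson–Lindley, whose pmf has the standard closed form P(N = n) = θ²(θ + n + 2)/(θ + 1)^(n+3); with Exp(λ) secondaries, the aggregate mean is E[S] = E[N]/λ. A minimal numerical check of both facts (the parameter values are arbitrary, and this is not the paper's Bayes-premium calculation):

```python
def poisson_lindley_pmf(n, theta):
    """Closed-form pmf of the Poisson-Lindley distribution:
    P(N = n) = theta^2 (theta + n + 2) / (theta + 1)^(n + 3)."""
    return theta**2 * (theta + n + 2) / (theta + 1) ** (n + 3)

def compound_mean(theta, exp_rate):
    """Mean of the collective model S = X_1 + ... + X_N with
    Poisson-Lindley counts and Exp(exp_rate) severities:
    E[S] = E[N] * E[X], where E[N] = (theta + 2) / (theta (theta + 1))."""
    mean_n = (theta + 2) / (theta * (theta + 1))
    return mean_n / exp_rate

theta = 2.0
probs = [poisson_lindley_pmf(n, theta) for n in range(200)]
print(sum(probs))                      # pmf sums to 1 (up to truncation)
print(compound_mean(theta, exp_rate=0.5))
```

With θ = 2 the count mean is (θ + 2)/(θ(θ + 1)) = 2/3, so the aggregate mean with Exp(0.5) severities is (2/3)/0.5 = 4/3.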

5.
The problem of simultaneous estimation of variance components is considered for a balanced hierarchical mixed model under a sum of squared error loss. A new class of estimators is suggested which dominate the usual sensible estimators. These estimators shrink towards the geometric mean of the component mean squares that appear in the ANOVA table. Numerical results are tabled to exhibit the improvement in risk under a simple model.

6.
The Poisson-binomial distribution is useful in many applied problems in engineering, actuarial science and data mining. The Poisson-binomial distribution models the distribution of the sum of independent but non-identically distributed random indicators whose success probabilities vary. In this paper, we extend the Poisson-binomial distribution to a generalized Poisson-binomial (GPB) distribution. The GPB distribution corresponds to the case where the random indicators are replaced by two-point random variables, which can take two arbitrary values instead of 0 and 1 as in the case of random indicators. The GPB distribution has found applications in many areas such as voting theory, actuarial science, warranty prediction and probability theory. As the GPB distribution has not been studied in detail so far, we introduce this distribution first and then derive its theoretical properties. We develop an efficient algorithm for the computation of its distribution function, using the fast Fourier transform. We test the accuracy of the developed algorithm by comparing it with the enumeration-based exact method and the results from the binomial distribution. We also study the computational time of the algorithm under various parameter settings. Finally, we discuss the factors affecting the computational efficiency of the proposed algorithm and illustrate the use of the software package.
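The Fourier approach for the ordinary Poisson-binomial case rests on evaluating the characteristic function at the (n+1)-th roots of unity and inverting. A minimal pure-Python sketch of that idea (a direct O(n²) DFT rather than a true FFT, and for the 0/1 indicator case only, not the paper's GPB extension):

```python
import cmath

def poisson_binomial_pmf(probs):
    """PMF of a sum of independent Bernoulli(p_j) indicators, computed by
    evaluating the characteristic function at the (n+1)-th roots of unity
    and applying an inverse discrete Fourier transform."""
    m = len(probs) + 1
    omega = 2.0 * cmath.pi / m
    # Characteristic function at each root of unity.
    phi = [complex(1.0)] * m
    for k in range(m):
        for p in probs:
            phi[k] *= (1.0 - p) + p * cmath.exp(1j * omega * k)
    # Inverse DFT recovers the pmf (imaginary parts vanish up to rounding).
    return [
        (sum(phi[k] * cmath.exp(-1j * omega * k * s) for k in range(m)) / m).real
        for s in range(m)
    ]

pmf = poisson_binomial_pmf([0.2, 0.5, 0.7])
print([round(p, 6) for p in pmf])  # matches brute-force enumeration
```

For p = (0.2, 0.5, 0.7), enumerating the 2³ outcomes gives (0.12, 0.43, 0.38, 0.07), which the transform reproduces to floating-point accuracy; a production implementation replaces the double loop with an FFT.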

7.
A Methodological Study of Loss Distribution Modelling in Property Insurance (cited 4 times: 0 self-citations, 4 by others)
Wang Xinjun. 《统计研究》 (Statistical Research), 2002, 19(11): 40–43
1. Introduction. In property insurance, premium pricing and loss settlement are the core of the insurance business, and premium pricing first requires knowing the loss distribution of the line of insurance under consideration. Broadly, insurance divides into two categories: life insurance and non-life insurance. Property insurance belongs to the non-life category and differs from life insurance, whose premium pricing is relatively simple because life tables provide great help and can be consulted by every life insurer. In property insurance, by contrast, different insured objects and different peril factors follow different specific distributions; even when the loss distribution can be identified, determining its parameters is quite difficult, and sometimes, for the same line and the same peril, the loss distribution itself changes with time and environment…

8.
Finding optimal, or at least good, maintenance and repair policies is crucial in reliability engineering. Likewise, describing life phases of human mortality is important when determining social policy or insurance premiums. In these tasks, one searches for distributions to fit data and then makes inferences about the population(s). In the present paper, we focus on bathtub-type distributions and provide a view of certain problems, methods and solutions, and a few challenges, that can be encountered in reliability engineering, survival analysis, demography and actuarial science.

9.
Neural Network Models and Automobile Insurance Claim Frequency Prediction (cited 1 time: 0 self-citations, 1 by others)
Meng Shengwang. 《统计研究》 (Statistical Research), 2012, 29(3): 22–26
Automobile insurance attracts wide public attention and occupies a pivotal position in property insurance companies, so claim frequency prediction models for automobile insurance have long been a focus of non-life actuarial theory and applied research. The most popular claim frequency prediction models at present are generalized linear models, including Poisson regression, negative binomial regression and Poisson–inverse Gaussian regression. Based on a set of actual automobile insurance loss data, this paper compares various generalized linear models for claim frequency with neural network models and regression tree models, and draws some new conclusions: the neural network model fits better than the generalized linear models; among the generalized linear models, Poisson regression fits better than negative binomial and Poisson–inverse Gaussian regression; the linear regression model fits worst, and the regression tree model fits slightly better than the linear regression model.

10.
Modelling extreme wind speeds in regions prone to hurricanes (cited 1 time: 0 self-citations, 1 by others)
Extreme wind speeds can arise as the result of a simple pressure differential, or a complex dynamic system such as a tropical storm. When sets of record values comprise a mixture of two or more different types of event, the standard models for extremes based on a single limiting distribution are not applicable. We develop a mixture model for extreme winds arising from two distinct processes. Working with sequences of annual maximum speeds obtained at hurricane-prone locations in the USA, we take a Bayesian approach to inference, which allows the incorporation of prior information obtained from other sites. We model the extremal behaviour for the contrasting wind climates of Boston and Key West, and show that the standard models can give misleading results at such locations.

11.
Doubly periodic non-homogeneous Poisson models for hurricane data (cited 3 times: 1 self-citation, 2 by others)
Non-homogeneous Poisson processes with periodic claim intensity rate have been proposed as claim counts in risk theory. Here a doubly periodic Poisson model with short- and long-term trends is studied. Beta-type intensity functions are presented as illustrations. The likelihood function and the maximum likelihood estimates of the model parameters are derived. Doubly periodic Poisson models are appropriate when the seasonality does not repeat exactly the same short-term pattern every year, but has a peak intensity that varies over a longer period. This reflects periodic environments like those in which hurricanes form during alternating El Niño/La Niña years. An application of the model to the data set of Atlantic hurricanes affecting the United States (1899–2000) is discussed in detail.
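Maximum likelihood estimation for such a process uses the standard NHPP log-likelihood, Σᵢ log λ(tᵢ) − ∫₀ᵀ λ(t) dt. A minimal numerical sketch with a hypothetical doubly periodic intensity (a sinusoidal annual cycle whose peak is modulated over a 5-year period, not the beta-type intensity of the paper; the arrival times are made up):

```python
import math

def nhpp_loglik(times, intensity, t_end, n_grid=100000):
    """Log-likelihood of NHPP arrival times observed on [0, t_end]:
    sum_i log lambda(t_i) - integral_0^t_end lambda(t) dt,
    with the integral approximated by the midpoint rule."""
    h = t_end / n_grid
    integral = sum(intensity((i + 0.5) * h) for i in range(n_grid)) * h
    return sum(math.log(intensity(t)) for t in times) - integral

# Hypothetical doubly periodic intensity (t in years): a short-term
# annual cycle multiplied by a longer 5-year modulation of its peak.
def intensity(t, base=3.0, seasonal=0.9, long_amp=0.5, long_period=5.0):
    annual = 1.0 + seasonal * math.sin(2.0 * math.pi * t)
    longterm = 1.0 + long_amp * math.sin(2.0 * math.pi * t / long_period)
    return base * annual * longterm

ll = nhpp_loglik([0.25, 1.3, 2.25, 4.8], intensity, t_end=5.0)
print(ll)
```

In an actual fit, `base`, `seasonal`, `long_amp` and `long_period` would be chosen to maximize this quantity over the observed arrival times.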

12.
Simultaneous estimation of parameters with p (≥ 2) components, where each component has a generalized life distribution, is considered under a sum of squared error loss function. Improved estimators are obtained which dominate the maximum likelihood and the minimum mean square estimators. Robustness of the improved estimators is shown even when the component distributions are dependent. The result is extended to the estimation of the system reliability when the components are connected in series. Several numerical studies are performed to demonstrate the risk improvement and the Pitman closeness of the new estimators.

13.
The non-parametric maximum likelihood estimators (MLEs) are derived for survival functions associated with individual risks or system components in a reliability framework. Lifetimes are observed for systems that contain one or more of those components. Analogous to a competing risks model, the system is assumed to fail upon the first instance of any component failure; i.e. the system is configured in series. For any given risk or component type, the asymptotic distribution is shown to depend explicitly on the unknown survival function of the other risks, as well as the censoring distribution. Survival functions with increasing failure rate are investigated as a special case. The order restricted MLE is shown to be consistent under mild assumptions of the underlying component lifetime distributions.

14.
Contingent probabilities and means are ubiquitous in actuarial science. The correct interpretation of contingent probabilities and means, as well as the probability theory behind them, has been addressed by researchers. In this article, we explore their statistical aspect. We give non-parametric estimators of contingent probabilities and means. Then, we show that our estimators are strongly consistent. Moreover, we give the asymptotic distributions of our estimators. Finally, we provide several examples to demonstrate the applications of these estimators in actuarial science.

15.
After a brief historical survey of parametric survival models, from actuarial, biomedical, demographical and engineering sources, this paper discusses the persistent reasons why parametric models still play an important role in exploratory statistical research. The phase-type models are advanced as a flexible family of latent-class models with interpretable components. These models are now supported by computational statistical methods that make numerical calculation of likelihoods and statistical estimation of parameters feasible in theory for quite complicated settings. However, consideration of Fisher Information and likelihood-ratio type tests to discriminate between model families indicates that only the simplest phase-type model topologies can be stably estimated in practice, even on rather large datasets. An example of a parametric model with features of mixtures, multiple stages or ‘hits’, and a trapping-state is given to illustrate simple computational tools in R, both on simulated data and on a large SEER 1992–2002 breast-cancer dataset.

16.
Cross-validated likelihood is investigated as a tool for automatically determining the appropriate number of components (given the data) in finite mixture modeling, particularly in the context of model-based probabilistic clustering. The conceptual framework for the cross-validation approach to model selection is straightforward in the sense that models are judged directly on their estimated out-of-sample predictive performance. The cross-validation approach, as well as penalized likelihood and McLachlan's bootstrap method, are applied to two data sets and the results from all three methods are in close agreement. The second data set involves a well-known clustering problem from the atmospheric science literature using historical records of upper atmosphere geopotential height in the Northern hemisphere. Cross-validated likelihood provides an interpretable and objective solution to the atmospheric clustering problem. The clusters found are in agreement with prior analyses of the same data based on non-probabilistic clustering techniques.

17.
In an attempt to produce more realistic stress–strength models, this article considers the estimation of stress–strength reliability in a multi-component system with non-identical component strengths based on upper record values from the family of Kumaraswamy generalized distributions. The maximum likelihood estimator of the reliability, its asymptotic distribution and asymptotic confidence intervals are constructed. Bayes estimates under symmetric squared error loss function using conjugate prior distributions are computed and corresponding highest probability density credible intervals are also constructed. In Bayesian estimation, Lindley approximation and the Markov Chain Monte Carlo method are employed due to lack of explicit forms. For the first time using records, the uniformly minimum variance unbiased estimator and the closed form of Bayes estimator using conjugate and non-informative priors are derived for a common and known shape parameter of the distributions of the stress and strength variates. Comparisons of the performance of the estimators are carried out using Monte Carlo simulations, the mean squared error, bias and coverage probabilities. Finally, a demonstration is presented on how the proposed model may be utilized in materials science and engineering with the analysis of high-strength steel fatigue life data.

18.
We use a Bayesian multivariate time series model for the analysis of the dynamics of carbon monoxide atmospheric concentrations. The data are observed at four sites. It is assumed that the logarithm of the observed process can be represented as the sum of unobservable components: a trend, a daily periodicity, a stationary autoregressive signal and an erratic term. Bayesian analysis is performed via Gibbs sampling. In particular, we consider the problem of joint temporal prediction when data are observed at a few sites and it is not possible to fit a complex space–time model. A retrospective analysis of the trend component is also given, which is important in that it explains the evolution of the variability in the observed process.

19.
Partial moments are extensively used in actuarial science for the analysis of risks. Since the first-order partial moment provides the expected loss in a stop-loss treaty with infinite cover as a function of the priority, it is referred to as the stop-loss transform. In the present work, we discuss distributional and geometric properties of the first and second order partial moments defined in terms of the quantile function. Relationships of the scaled stop-loss transform curve with the Lorenz, Gini, Bonferroni and Leinkuhler curves are developed.
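The stop-loss transform at priority d can be written as π(d) = E[(X − d)₊] = ∫_d^∞ S(x) dx, where S is the survival function. A minimal numerical sketch of this identity, checked against the Exp(1) case where π(d) = e^(−d) in closed form (the truncation point and grid size are arbitrary choices):

```python
import math

def stop_loss(survival, d, upper, n=20000):
    """Stop-loss transform pi(d) = E[(X - d)_+] = integral_d^inf S(x) dx,
    truncated at `upper` and approximated by the midpoint rule."""
    h = (upper - d) / n
    return sum(survival(d + (i + 0.5) * h) for i in range(n)) * h

def exp_survival(x):
    return math.exp(-x)  # survival function of Exp(1)

print(stop_loss(exp_survival, d=1.0, upper=40.0))  # exact value is e**-1
```

At d = 0 the transform reduces to the mean, so `stop_loss(exp_survival, 0.0, 40.0)` should return approximately 1, the mean of Exp(1).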

20.
MODELS AND DESIGNS FOR EXPERIMENTS WITH MIXTURES (cited 2 times: 0 self-citations, 2 by others)
Properties such as the tensile strength of an alloy of different metals and the freezing point of a mixture of liquid chemicals depend on the proportions (by weight or volume) of the components present and not on the total amount of the mixture. In choosing a model to relate such a property to the proportions of the various components of the mixture, there arise intriguing difficulties due to the fact that proportions sum to unity. It is demonstrated how to construct models which allow for the possibility of inactive components (components that do not affect the property at all) or components with additive effects. The design of experiments to fit such models to data is then discussed with a view to determining whether a given component is inactive or has an additive effect. The optimal allocation of observations to simplex-lattice designs is considered for one of these models. The construction of D-optimal designs for these models is an open problem.
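The simplex-lattice designs mentioned above are standard objects: a {q, m} simplex-lattice takes every q-component mixture whose proportions are multiples of 1/m and sum to one, giving C(q + m − 1, m) design points. A minimal sketch that enumerates them (brute force over count vectors, fine for small q and m):

```python
from itertools import product
from math import comb

def simplex_lattice(q, m):
    """All q-component mixtures with proportions in {0, 1/m, ..., 1}
    summing to exactly 1 -- the {q, m} simplex-lattice design points."""
    points = []
    for counts in product(range(m + 1), repeat=q):
        if sum(counts) == m:  # counts/m are valid mixture proportions
            points.append(tuple(c / m for c in counts))
    return points

design = simplex_lattice(q=3, m=2)
print(len(design), comb(3 + 2 - 1, 2))  # design size matches C(q+m-1, m)
```

For q = 3, m = 2 this yields the six points: the three pure blends (1, 0, 0), (0, 1, 0), (0, 0, 1) and the three binary blends with proportions (1/2, 1/2, 0) and its permutations.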


Copyright © 北京勤云科技发展有限公司 (Beijing Qinyun Technology Development Co., Ltd.)  京ICP备09084417号