Similar Documents
20 similar documents found.
1.
In almost all credibility models considered previously, the claims are assumed to be independent across risks. In practice, however, this is not always the case, and in this article we therefore investigate credibility premiums when the risks are allowed to be generally dependent. First, we rebuild the credibility estimators for the Bühlmann and Bühlmann–Straub credibility models under a general dependence structure across risks. The methods are then extended to regression credibility models.
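For context, here is a minimal sketch of the classical independence-based Bühlmann estimator that the paper generalizes; the data and function name are illustrative, and the estimator shown is the standard nonparametric one, not the paper's dependent-risk version.

```python
import numpy as np

def buhlmann_premium(claims):
    """claims: (num_risks, num_years) array of observed claims per risk and year."""
    n = claims.shape[1]
    m = claims.mean(axis=1)                    # individual risk means
    mu = m.mean()                              # collective mean
    s2 = claims.var(axis=1, ddof=1).mean()     # expected process variance
    a = max(m.var(ddof=1) - s2 / n, 0.0)       # variance of hypothetical means
    z = n * a / (n * a + s2)                   # credibility factor
    return z * m + (1 - z) * mu                # credibility premium per risk

rng = np.random.default_rng(0)
print(buhlmann_premium(rng.gamma(2.0, 500.0, size=(10, 5))))
```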

2.
Credibility models are a primary method for experience ratemaking in non-life insurance. The traditional Bühlmann–Straub credibility model can be expressed as a random intercept model, which assumes that the random effects follow a normal distribution. In real insurance loss data, however, the losses of some individual risks may be far above the overall average, so that the risk differences across individuals exhibit right skewness. In that case, the Bühlmann–Straub model may underestimate the credibility premiums of high risks. This paper instead assumes in the random intercept model that the random effects follow a skew-normal distribution and derives the credibility premium under this assumption; the Bühlmann–Straub credibility premium can be shown to be a special case. Both simulation analysis and an empirical study show that the credibility model with skew-normal random effects better predicts the credibility premiums of high risks and thus improves the premium estimates of the traditional credibility model.
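A small illustration of the paper's premise (not its estimator): right-skewed random effects drawn from a skew-normal distribution have a heavier upper tail than a moment-matched normal fit, which is why a normality assumption can understate high risks. All numbers are synthetic.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# right-skewed risk effects (shape a > 0 gives right skew)
effects = stats.skewnorm.rvs(a=5.0, loc=0.0, scale=1.0, size=100_000, random_state=rng)
mu, sigma = effects.mean(), effects.std(ddof=1)     # moment-matched normal fit
q = 0.99
print(np.quantile(effects, q))                      # upper quantile of the skewed effects
print(stats.norm.ppf(q, loc=mu, scale=sigma))       # the normal fit sits below it
```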

3.
The generalized weighted premium principle includes many classical premium principles; most importantly, some of them carry positive safety loading. Although earlier work derived a credibility premium under this principle, it cannot be applied directly in practice because of computational difficulties and the need to estimate structure parameters. In this article, we consider a new form of credibility estimator under the generalized weighted premium principle. We establish the consistency of the estimator and compare it with previous results in simulations, which show that this new estimator outperforms existing estimators in the mean-squared-error sense. Finally, the structure parameters in the credibility factor are estimated in a model with a portfolio of contracts.
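As a concrete special case, here is a Monte Carlo sketch of a weighted premium principle P = E[X w(X)] / E[w(X)]; with w(x) = exp(hx) this is the Esscher premium, which carries positive safety loading. The parameter h and the loss distribution are illustrative.

```python
import numpy as np

def weighted_premium(x, h):
    """P = E[X w(X)] / E[w(X)] with w(x) = exp(h x), i.e. the Esscher premium."""
    w = np.exp(h * x)
    return (x * w).mean() / w.mean()

rng = np.random.default_rng(2)
losses = rng.gamma(2.0, 500.0, size=100_000)
print(losses.mean(), weighted_premium(losses, h=1e-3))  # premium exceeds the mean
```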

4.
Experience ratemaking plays a crucial role in general insurance: it determines future premiums for the individuals in a portfolio by assessing the observed claims of the whole portfolio. This paper investigates the problem when claims can be modeled by a certain parametric family of distributions. Dirichlet process mixtures are employed to model the distributions of the parameters, which offers two advantages: it produces exact Bayesian experience premiums for a class of premium principles generated from generic error functions, and at the same time it provides a robust and flexible way to avoid the bias that traditionally used priors, such as noninformative or conjugate priors, may introduce. Bayesian experience ratemaking under Dirichlet process mixture models is investigated and, because the conditional expectations of the quantities of interest lack analytical forms, Gibbs sampling schemes are designed to approximate them.
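A minimal sketch of the Dirichlet process prior via truncated stick-breaking, the building block of the DP mixtures used here; this is not the paper's Gibbs sampler, and the base measure is an arbitrary illustrative choice.

```python
import numpy as np

def stick_breaking(alpha, base_draw, k=100, rng=None):
    """Truncated stick-breaking draw from a DP(alpha, G0) prior."""
    rng = rng if rng is not None else np.random.default_rng()
    v = rng.beta(1.0, alpha, size=k)
    w = v * np.cumprod(np.concatenate(([1.0], 1.0 - v[:-1])))  # mixture weights
    atoms = base_draw(k, rng)                                  # atoms drawn from G0
    return w, atoms

# illustrative Gamma base measure for a positive risk parameter
w, atoms = stick_breaking(2.0, lambda k, r: r.gamma(2.0, 1.0, size=k))
print(w[:5], atoms[:5])
```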

5.
Every day we face all kinds of risks, and insurance is in the business of providing us a means to transfer or share these risks, usually to eliminate or reduce the resulting financial burden, in exchange for a predetermined price or tariff. Actuaries are considered professional experts in the economic assessment of uncertain events, and, equipped with many statistical tools for analytics, they help formulate a fair and reasonable tariff associated with these risks. An important part of the process of establishing fair insurance tariffs is risk classification, which involves grouping risks into classes that share a homogeneous set of characteristics, allowing the actuary to price discriminate reasonably. This article is a survey of the statistical tools for risk classification used in insurance. Because more complex data have recently become available in the industry, together with the technology to analyze them, we additionally discuss modern techniques that have recently emerged in the statistics discipline and can be used for risk classification. While several of the illustrations discussed in the paper focus on general, or non-life, insurance, several of the principles we examine can be similarly applied to life insurance. Furthermore, we distinguish between a priori and a posteriori ratemaking. The former forms the basis for ratemaking when a policyholder is new and little information may be available; the latter uses additional historical information about policyholder claims as it becomes available. In effect, the resulting a posteriori premium allows one to correct and adjust the previous a priori premium, making the price discrimination even fairer and more reasonable.

6.
We introduce extensions of stability selection, a method to stabilise variable selection methods introduced by Meinshausen and Bühlmann (J R Stat Soc 72:417–473, 2010). We propose to apply a base selection method repeatedly to random subsamples of the observations and subsets of the covariates under scrutiny, and to select covariates based on their selection frequency. We analyse the effects and benefits of these extensions. Our analysis generalises the theoretical results of Meinshausen and Bühlmann (J R Stat Soc 72:417–473, 2010) from the case of half-samples to subsamples of arbitrary size. We study, in a theoretical manner, the effect of taking random covariate subsets using a simplified score model. Finally, we validate these extensions in numerical experiments on both synthetic and real datasets, and compare the results in detail with the original stability selection method.
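A minimal sketch of the core loop, assuming the lasso as the base selection method (the framework allows any base method); the subsample fraction, penalty, and frequency threshold are illustrative.

```python
import numpy as np
from sklearn.linear_model import Lasso

def stability_selection(X, y, n_rounds=100, frac=0.5, alpha=0.1, seed=0):
    """Selection frequency of each covariate over random observation subsamples."""
    rng = np.random.default_rng(seed)
    n, p = X.shape
    counts = np.zeros(p)
    for _ in range(n_rounds):
        idx = rng.choice(n, size=int(frac * n), replace=False)  # subsample rows
        counts += Lasso(alpha=alpha).fit(X[idx], y[idx]).coef_ != 0
    return counts / n_rounds     # keep covariates above a frequency threshold

rng = np.random.default_rng(3)
X = rng.normal(size=(200, 20))
y = 2 * X[:, 0] + X[:, 1] - X[:, 2] + rng.normal(size=200)
print(stability_selection(X, y).round(2))
```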

7.
In this paper, we study Pareto-optimal reinsurance policies from the joint perspectives of an insurer and a reinsurer, assuming the reinsurance premium principle satisfies risk loading and preserves stop-loss order. Using a geometric approach, we determine the forms of the optimal policies within two classes of ceded loss functions: the class of increasing convex ceded loss functions, and the class in which the constraints on both the ceded and the retained loss functions are relaxed to being increasing. We then demonstrate the applicability of our results by deriving the parameters of the optimal ceded loss functions under the Dutch premium principle and Wang's premium principle.
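For intuition, a numeric sketch of the pure premium of one increasing convex ceded loss function, the stop-loss layer f(x) = min(max(x − d, 0), m); the loss distribution, retention, and limit are illustrative.

```python
import numpy as np

rng = np.random.default_rng(4)
x = rng.lognormal(mean=7.0, sigma=1.0, size=1_000_000)  # ground-up losses
d, m = 2000.0, 5000.0                                    # retention and layer limit
ceded = np.clip(x - d, 0.0, m)                           # f(x) = min(max(x - d, 0), m)
print(ceded.mean())   # pure premium of the ceded layer, before any loading
```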

8.
Although mixtures form a rich class of probability models, they often present difficulties for statistical inference. Likelihood functions are sometimes unbounded at certain parameter values, and densities often have no closed form. These features complicate both maximum-likelihood estimation and tests of fit based on the empirical distribution function. New inferential methods using sample characteristic functions (CFs) and moment generating functions (MGFs) seem well suited to mixtures, since these transforms often take a simple form. This paper reports a simulation study of the properties of estimators and tests of fit based on CFs, MGFs, and sample moments when applied to three specific families of thick-tailed mixture distributions.
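A toy sketch of CF-based estimation for a two-component normal mixture: match the empirical characteristic function to the model CF on a grid of arguments. The grid, starting values, and optimizer are illustrative choices, not the paper's.

```python
import numpy as np
from scipy.optimize import minimize

def ecf(t, x):
    """Empirical characteristic function phi_n(t) = mean(exp(i t X))."""
    return np.exp(1j * np.outer(t, x)).mean(axis=1)

def mix_cf(t, p, m1, s1, m2, s2):
    """CF of a two-component normal mixture."""
    c1 = np.exp(1j * t * m1 - 0.5 * (s1 * t) ** 2)
    c2 = np.exp(1j * t * m2 - 0.5 * (s2 * t) ** 2)
    return p * c1 + (1 - p) * c2

rng = np.random.default_rng(5)
x = np.concatenate([rng.normal(0, 1, 700), rng.normal(4, 2, 300)])
t = np.linspace(0.05, 2.0, 40)
target = ecf(t, x)

def loss(theta):
    p, m1, s1, m2, s2 = theta
    return np.abs(mix_cf(t, p, m1, s1, m2, s2) - target).sum()

fit = minimize(loss, x0=[0.5, 0.0, 1.0, 3.0, 1.5], method="Nelder-Mead")
print(fit.x)
```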

9.
In this article we develop a class of stochastic boosting (SB) algorithms, which build upon the work of Holmes and Pintore (Bayesian Stat. 8, Oxford University Press, Oxford, 2007). They introduce boosting algorithms which correspond to standard boosting (e.g. Bühlmann and Hothorn, Stat. Sci. 22:477–505, 2007) except that the optimization algorithms are randomized; this idea is placed within a Bayesian framework. We show that the inferential procedure in Holmes and Pintore (Bayesian Stat. 8, Oxford University Press, Oxford, 2007) is incorrect and further develop interpretational, computational and theoretical results which allow one to assess SB's potential for classification and regression problems. To use SB, sequential Monte Carlo (SMC) methods are applied. As a result, it is found that SB can provide better predictions for classification problems than the corresponding boosting algorithm. A theoretical result is also given, which shows that the predictions of SB are not significantly worse than boosting, when the latter provides the best prediction. We also investigate the method on a real case study from machine learning.

10.
This paper deals with the optimal window-width choice in nonparametric lag-window or spectral-window estimation of the spectral density of a stationary zero-mean process. Several approaches are reviewed: cross-validation-based methods as described by Hurvich (1985), Beltrão and Bloomfield (1987), and Hurvich and Beltrão (1990); an iterative procedure developed by Bühlmann (1996); and a bootstrap approach followed by Franke and Härdle (1992). These methods are compared in terms of the mean square error, the mean square percentage error, and a third measure of the distance between the true spectral density and its estimate. The comparison is based on a simulation study, the simulated processes being in the class of ARMA(5,5) processes. On the basis of the simulation evidence we suggest using a slightly modified version of Bühlmann's (1996) iterative method. This paper also makes a minor correction to the bootstrap criterion of Franke and Härdle (1992).
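A minimal sketch of the estimator being tuned: a lag-window spectral density estimate with a Bartlett window, where the window width M is the quantity the reviewed methods try to select. The data and the value of M are illustrative.

```python
import numpy as np

def lag_window_spectrum(x, M, freqs):
    """Bartlett lag-window estimate of the spectral density at the given frequencies."""
    x = np.asarray(x, dtype=float) - np.mean(x)
    n = len(x)
    acov = np.array([x[: n - k] @ x[k:] / n for k in range(M + 1)])
    w = 1.0 - np.arange(M + 1) / M                        # Bartlett lag window
    cos = np.cos(np.outer(freqs, np.arange(1, M + 1)))
    return (acov[0] + 2.0 * (cos * (w[1:] * acov[1:])).sum(axis=1)) / (2.0 * np.pi)

rng = np.random.default_rng(6)
x = np.zeros(1000)
for t in range(1, 1000):                                  # AR(1) test series
    x[t] = 0.6 * x[t - 1] + rng.normal()
print(lag_window_spectrum(x, M=20, freqs=np.linspace(0.0, np.pi, 5)))
```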

11.
Risk premium prediction is an important component of non-life insurance ratemaking. In the traditional quantile-regression approach to risk premiums, the quantile level is usually assumed to be given in advance, which lacks objectivity. We therefore propose a new method for determining risk premiums with quantile regression: the total risk premium of the policy portfolio is determined from a ruin probability, a quantile regression model is built for the individual policies and set equal to the total risk premium, and the quantile level is then solved for numerically, yielding risk premium predictions for individual policies. An analysis of a real data set shows that the method predicts well.
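A sketch of the numerical step under stated assumptions: given a portfolio-level total premium (a placeholder value below, not a ruin-probability calculation), solve for the quantile level at which the summed quantile-regression predictions match it. The data, bracket, and premium value are all illustrative.

```python
import numpy as np
from scipy.optimize import brentq
import statsmodels.api as sm

rng = np.random.default_rng(7)
X = sm.add_constant(rng.normal(size=(500, 2)))
y = np.exp(1.0 + X[:, 1] + 0.5 * rng.normal(size=500))  # right-skewed losses
total_premium = 1.1 * y.sum()   # placeholder for the ruin-probability-based total

def gap(tau):
    fit = sm.QuantReg(y, X).fit(q=tau)                   # individual-policy model
    return fit.predict(X).sum() - total_premium

tau_star = brentq(gap, 0.5, 0.95)   # quantile level matching the total premium
print(tau_star)
```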

12.
This paper shows how procedures for computing moments and cumulants may themselves be computed from a few elementary identities. Many parameters, such as variance, may be expressed or approximated as linear combinations of products of expectations. The estimates of such parameters may be expressed as the same linear combinations of products of averages. The moments and cumulants of such estimates may be computed in a straightforward way if the terms of the estimates, moments and cumulants are represented as lists and the expectation operation defined as a transformation of lists. Vector space considerations lead to a unique representation of terms and hence to a simplification of results. Basic identities relating variables and their expectations induce transformations of lists, which transformations may be computed from the identities. In this way procedures for complex calculations are computed from basic identities. The procedures permit the calculation of results which would otherwise involve complementary set partitions, k-statistics, and pattern functions. The examples include the calculation of unbiased estimates of cumulants, of cumulants of these, and of moments of bootstrap estimates.
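As a concrete instance of the kind of result such procedures produce, here is a sketch of the first three k-statistics (unbiased cumulant estimates) computed directly; the formulas are the standard ones.

```python
import numpy as np

def k_statistics(x):
    """First three k-statistics: unbiased estimates of cumulants kappa_1..kappa_3."""
    n = len(x)
    d = x - x.mean()
    m2, m3 = (d**2).mean(), (d**3).mean()
    k1 = x.mean()
    k2 = n / (n - 1) * m2                       # unbiased variance
    k3 = n**2 / ((n - 1) * (n - 2)) * m3        # unbiased third cumulant
    return k1, k2, k3

print(k_statistics(np.random.default_rng(8).gamma(2.0, 1.0, size=1000)))
```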

13.

In this article we measure the local, or infinitesimal, sensitivity of a class of Bayes estimates that appear in bonus–malus systems. Bonus–malus premiums can be viewed as functionals of the prior distribution. To measure when small changes in the prior cause large changes in the premium, we compute the norm of the Fréchet derivative and propose a simple procedure to decide whether a bonus–malus premium is robust. As an application, an example where the risk has a Poisson distribution and its parameter follows a Gamma prior distribution is presented under the net and variance premium principles.
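For reference, the closed form of the net premium in the Poisson–Gamma setting the example uses (a standard result, written out here for concreteness):

```latex
% With claim counts X_i \mid \lambda \sim \mathrm{Poisson}(\lambda) and prior
% \lambda \sim \mathrm{Gamma}(\alpha,\beta), the posterior is
% \mathrm{Gamma}(\alpha+\sum_i x_i,\ \beta+n), so the net premium is the functional
\[
  P_{\text{net}}(\pi) \;=\; \mathbb{E}\left[\lambda \mid x_1,\dots,x_n\right]
  \;=\; \frac{\alpha+\sum_{i=1}^{n}x_i}{\beta+n},
\]
% whose sensitivity to perturbations of the prior \pi the Fréchet derivative quantifies.
```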

14.
An Empirical Study on Determining Pure Premium Rates for Grain Yield Insurance in China
梁来存 《统计研究》2010, 27(5): 67-73
Grain insurance is a form of policy-oriented agricultural insurance, and its ratemaking should be based on risk zoning; accurate ratemaking helps the government formulate sound insurance policies in support of agriculture. Taking the new perspective of measuring the impact of natural risk on grain security through yield variation, this paper builds an indicator system for evaluating that impact. Hierarchical clustering, K-means clustering, and fuzzy clustering are used to partition China's grain production into provincial-level insurance risk zones, and the partition is validated by back-classification with Fisher, Bayes, and stepwise discriminant analysis. After testing the distribution of grain yield per unit area in each province (municipality, autonomous region), the empirical rate method is chosen to determine the pure premium rates of yield insurance, and the rates are then adjusted in light of the risk zoning results and policy orientation. Further research should carry out risk zoning and ratemaking at smaller basic units: the smaller the unit, the closer the rates are to reality, but the greater the workload.
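A minimal sketch of the zoning step under stated assumptions: cluster regions into risk zones from yield-risk indicators. The two indicators and all values below are synthetic stand-ins for the paper's indicator system.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(9)
# two synthetic yield-risk indicators for 31 provincial-level regions
indicators = np.column_stack([
    rng.uniform(0.05, 0.35, 31),   # coefficient of variation of yield
    rng.uniform(0.05, 0.50, 31),   # frequency of loss years
])
zones = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(indicators)
print(zones)                        # risk-zone label for each region
```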

15.
Using the historical per-unit-area yields of rice, corn, and soybean in Liaoning Province, Heilongjiang Province, and Dalian as examples, this paper fits the yield loss distribution with the nonparametric kernel smoothing method and, for comparison, fits the regional crop yield distribution with the traditional normal density. On the basis of the fitted loss distributions, pure premium rates of regional crop yield insurance are determined at different coverage levels. The calculations show that the pure premium rates determined under the normal density are all lower than those under the nonparametric kernel density: the normal method underestimates crop yield risk. At coverage levels between 70% and 80%, the pure premium rates from both the parametric and the nonparametric methods are below the current rates of policy-oriented agricultural insurance. In addition, where data permit, an appropriate rating region should be chosen so as to fully identify the risk and compute premiums more accurately.
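A sketch of the rate calculation under common assumptions: with coverage level c and expected yield mu, the pure premium rate is E[max(c·mu − Y, 0)]/(c·mu), evaluated here by resampling from a Gaussian kernel density fitted to a synthetic yield history.

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(10)
yields = rng.normal(6000.0, 500.0, 30) - rng.gamma(2.0, 200.0, 30)  # toy yield history
kde = gaussian_kde(yields)                      # kernel-smoothed yield density
mu, c = yields.mean(), 0.8                      # expected yield, 80% coverage level
sims = kde.resample(200_000, seed=0).ravel()
rate = np.clip(c * mu - sims, 0.0, None).mean() / (c * mu)
print(rate)                                     # pure premium rate at 80% coverage
```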

16.
In this paper we apply the sequential bootstrap method proposed by Collet et al. [Bootstrap central limit theorem for chains of infinite order via Markov approximations, Markov Processes and Related Fields 11(3) (2005), pp. 443–464] to estimate the variance of the empirical mean of a special class of chains of infinite order called sparse chains. For this process, we show that we are able to compute numerically the true value of the standard error to within any fixed error.

Our main goal is to present a comparison, for sparse chains, among the sequential bootstrap, the block bootstrap method proposed by Künsch [The jackknife and the bootstrap for general stationary observations, Ann. Statist. 17 (1989), pp. 1217–1241] and improved by Liu and Singh [Moving blocks jackknife and bootstrap capture weak dependence, in Exploring the Limits of the Bootstrap, R. LePage and L. Billard, eds., Wiley, New York, 1992, pp. 225–248], and the bootstrap method proposed by Bühlmann [Blockwise bootstrapped empirical process for stationary sequences, Ann. Statist. 22 (1994), pp. 995–1012].
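For orientation, a minimal sketch of one of the compared schemes, the moving-block bootstrap standard error of the mean; the block length, replication count, and AR(1) test series are illustrative.

```python
import numpy as np

def block_bootstrap_se(x, block_len=20, n_boot=1000, seed=0):
    """Moving-block bootstrap standard error of the sample mean."""
    rng = np.random.default_rng(seed)
    n = len(x)
    n_blocks = int(np.ceil(n / block_len))
    starts = rng.integers(0, n - block_len + 1, size=(n_boot, n_blocks))
    means = np.empty(n_boot)
    for b in range(n_boot):
        resample = np.concatenate([x[s : s + block_len] for s in starts[b]])[:n]
        means[b] = resample.mean()
    return means.std(ddof=1)

rng = np.random.default_rng(11)
x = np.zeros(2000)
for t in range(1, 2000):                        # AR(1) series with weak dependence
    x[t] = 0.5 * x[t - 1] + rng.normal()
print(block_bootstrap_se(x))
```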

17.
Medical cost prediction is the premise and basis of health insurance ratemaking. Multi-year medical cost data are usually fitted with linear mixed-effects models, but such models are limited when modeling longitudinal data with nonlinear relationships. This paper extends the linear mixed-effects model: based on the nonlinear relationships among the variables in medical cost data, a polynomial mixed-effects model is built and applied in an empirical study of a medical cost data set. The results show that the polynomial mixed-effects model fits inpatient medical costs significantly better than the commonly used linear mixed model, which is of practical value for medical cost management and health insurance ratemaking.
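A minimal sketch of such a polynomial mixed-effects model, assuming statsmodels: a quadratic fixed effect plus a random intercept per patient, fitted to synthetic data (all variable names are illustrative, not from the paper's data set).

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(12)
n_pat, n_years = 200, 5
patient = np.repeat(np.arange(n_pat), n_years)
age = np.tile(np.arange(n_years), n_pat) + rng.integers(40, 60, n_pat).repeat(n_years)
cost = (200.0 + 3.0 * age + 0.4 * (age - 50.0) ** 2          # quadratic trend
        + rng.normal(0.0, 50.0, n_pat)[patient]              # random intercept
        + rng.normal(0.0, 30.0, n_pat * n_years))            # noise
df = pd.DataFrame({"cost": cost, "age": age, "patient": patient})
fit = smf.mixedlm("cost ~ age + I(age**2)", df, groups=df["patient"]).fit()
print(fit.params)
```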

18.
A Bayesian analysis of the predictive values and related parameters of a diagnostic test is derived. In one case, the estimates are conditional on values of the prevalence of the disease; in the second case, the corresponding unconditional estimates are presented. Small-sample point estimates, posterior moments, and credibility intervals for all related parameters are obtained. Numerical methods of solution are also discussed.
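A sketch of the conditional case under standard assumptions: Beta posteriors for sensitivity and specificity from hypothetical validation counts, propagated by simulation to the positive predictive value at a fixed prevalence.

```python
import numpy as np

rng = np.random.default_rng(13)
# Beta(1,1) priors updated with hypothetical validation counts
sens = rng.beta(1 + 90, 1 + 10, size=100_000)   # 90 of 100 diseased test positive
spec = rng.beta(1 + 95, 1 + 5, size=100_000)    # 95 of 100 healthy test negative
prev = 0.02                                     # fixed (known) prevalence
ppv = sens * prev / (sens * prev + (1 - spec) * (1 - prev))
print(np.quantile(ppv, [0.025, 0.5, 0.975]))    # posterior median and 95% interval
```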

19.
This article introduces a fast cross-validation algorithm that performs wavelet shrinkage on data sets of arbitrary size and irregular design and also simultaneously selects good values of the primary resolution and number of vanishing moments. We demonstrate the utility of our method by suggesting alternative estimates of the conditional mean of the well-known Ethanol data set. Our alternative estimates outperform the Kovac-Silverman method with a global variance estimate by 25% because of the careful selection of number of vanishing moments and primary resolution. Our alternative estimates are simpler than, and competitive with, results based on the Kovac-Silverman algorithm equipped with a local variance estimate. We include a detailed simulation study that illustrates how our cross-validation method successfully picks good values of the primary resolution and number of vanishing moments for unknown functions based on Walsh functions (to test the response to changing primary resolution) and piecewise polynomials with zero or one derivative (to test the response to function smoothness).
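A minimal sketch of the underlying operation, wavelet shrinkage with PyWavelets: decompose, soft-threshold the detail coefficients, reconstruct. Here the wavelet ('db4', i.e. 4 vanishing moments), level, and threshold are fixed by hand; the article's contribution is cross-validating such choices.

```python
import numpy as np
import pywt

rng = np.random.default_rng(14)
t = np.linspace(0.0, 1.0, 512)
y = np.sin(8 * np.pi * t) + rng.normal(0.0, 0.3, size=512)    # noisy signal

coeffs = pywt.wavedec(y, "db4", level=5)            # 'db4' has 4 vanishing moments
thr = 0.3 * np.sqrt(2.0 * np.log(len(y)))           # a universal-type threshold
coeffs = [coeffs[0]] + [pywt.threshold(c, thr, mode="soft") for c in coeffs[1:]]
smooth = pywt.waverec(coeffs, "db4")                # shrunken reconstruction
```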

20.
王琳玉 et al. 《统计研究》2020, 37(12): 75-90
Higher-order moments are a systematic risk that cannot be ignored when characterizing the asymmetry and the peaked, heavy-tailed behavior of asset returns. Based on data for options on China's SSE 50 ETF, this paper estimates implied volatility, implied skewness, and implied kurtosis by a model-free method, extracts the innovations of the option-implied higher moments with ARMA models, and uses them as measures of higher-moment risk to examine their ability to predict stock returns. The study shows that: (1) after controlling for turnover, dividend yield, and other variables, implied volatility has a significantly negative predictive effect on the excess returns of the SSE 50 index and the market over the next four weeks; (2) the lower the implied-skewness innovation, the higher the excess returns of the SSE 50 index and the market, a predictive ability that is significant at both the one-week and four-week horizons but declines over time; (3) implied-skewness risk also has significant explanatory power for the cross-section of returns in China's stock market: the greater a portfolio's exposure to the implied-skewness risk factor, i.e., the larger its factor loading, the lower its future returns; (4) implied-kurtosis innovations are, on the whole, negatively correlated with stock returns.
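A sketch of one ingredient under stated assumptions: the model-free implied second moment as an integral of out-of-the-money option prices over strikes (in the spirit of Bakshi-Kapadia-Madan); the strike grid and price curve below are toy placeholders, not SSE 50 ETF data.

```python
import numpy as np

def implied_second_moment(strikes, otm_prices, forward, r, tau):
    # E^Q[R^2] ~ e^{r tau} * integral of 2*(1 - ln(K/F)) / K^2 * Q(K) dK,
    # where Q(K) is the out-of-the-money option price at strike K
    w = 2.0 * (1.0 - np.log(strikes / forward)) / strikes**2
    integrand = w * otm_prices
    # trapezoid rule over the strike grid
    return np.exp(r * tau) * np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(strikes))

strikes = np.linspace(2.0, 4.0, 41)                        # toy strike grid, forward = 3.0
otm = 0.05 * np.exp(-8.0 * np.abs(np.log(strikes / 3.0)))  # toy OTM option price curve
print(implied_second_moment(strikes, otm, forward=3.0, r=0.02, tau=30 / 365))
```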
