Similar Literature
 17 similar documents found (search time: 156 ms)
1.
To strengthen banks' ability to withstand risk, the New Basel Capital Accord requires that estimates of regulatory capital include a capital charge for operational risk. However, limitations of data and models make it far from easy to estimate this capital charge accurately. Using Monte Carlo simulation, and taking the incompleteness of operational-risk loss data into account, this paper applies the loss distribution approach to measure the operational risk of Chinese commercial banks. The method makes the estimation of operational-risk capital simple, tractable, and practical to implement.
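A minimal sketch of the loss distribution approach described above: annual aggregate loss is simulated as a Poisson number of lognormal severities, and the capital charge is read off as a high quantile of the simulated distribution. All parameter values here are hypothetical, chosen only for illustration.

```python
import math
import random

def poisson_draw(rng, lam):
    """Poisson(lam) sample via Knuth's multiplication method."""
    limit, prod, n = math.exp(-lam), 1.0, 0
    while True:
        prod *= rng.random()
        if prod <= limit:
            return n
        n += 1

def lda_capital(lam, mu, sigma, n_years=20000, q=0.999, seed=7):
    """Empirical q-quantile of simulated annual aggregate losses:
    Poisson(lam) loss counts with lognormal(mu, sigma) severities."""
    rng = random.Random(seed)
    annual = sorted(
        sum(rng.lognormvariate(mu, sigma) for _ in range(poisson_draw(rng, lam)))
        for _ in range(n_years)
    )
    return annual[int(q * n_years) - 1]
```

The 99.9% level matches the confidence level commonly used for Basel operational-risk capital; heavier severity tails (larger `sigma`) push the capital estimate up sharply.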

2.
Measuring and allocating the economic capital required for operational risk can greatly improve a financial institution's risk-control capability. When the POT (peaks-over-threshold) model is used to measure operational risk, threshold selection is the key step, since it determines how closely the fitted distribution approximates the operational-loss distribution. This paper uses change-point theory to locate the point at which the Hill-estimate curve enters its stable region, so that the threshold can be estimated precisely. To reduce error further and stabilize the results, the parameters of the POT model are estimated by the integrated squared error method. The results show that the improved method provides effective support for the measurement of economic capital.
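The Hill-plot step described above can be sketched as follows: compute the Hill tail-index estimate for an increasing number of upper order statistics and look for the region where the curve stabilizes. This is a generic illustration, not the paper's change-point procedure itself.

```python
import math

def hill_estimates(losses, k_max):
    """Hill estimator of the tail index for k = 2..k_max upper order statistics.
    A POT threshold is typically chosen where the curve k -> H_k becomes stable."""
    x = sorted(losses, reverse=True)
    return {
        k: sum(math.log(x[i] / x[k]) for i in range(k)) / k
        for k in range(2, min(k_max, len(x) - 1) + 1)
    }
```

For Pareto-tailed data with tail index alpha, the estimates should hover near 1/alpha once k is large enough to average out noise but small enough to stay in the tail.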

3.
As one of the seven operational-risk loss event types defined by the New Basel Capital Accord, internal fraud is a major source of risk for Chinese commercial banks. Using internal-fraud data from several domestic commercial banks as a sample, and given the low-frequency, high-severity nature of internal fraud, this paper builds a risk measurement model based on the generalized Pareto distribution (GPD) and the lognormal distribution. By testing when the tail of the distribution follows a GPD, a precise threshold is obtained. The fitted model is then used to estimate the value at risk, economic capital, and maximum probable loss of internal-fraud operational risk, underscoring the importance of guarding against internal fraud in Chinese commercial banks.
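Once a GPD has been fitted above a threshold, the tail risk measures mentioned above follow in closed form. A sketch with the standard POT formulas (all inputs hypothetical; `u` is the threshold, `beta` and `xi` the fitted GPD scale and shape, `n` the sample size and `n_u` the number of exceedances):

```python
def gpd_var(u, beta, xi, n, n_u, p):
    """POT value-at-risk at confidence level p from a GPD fitted above threshold u."""
    return u + (beta / xi) * (((n / n_u) * (1.0 - p)) ** (-xi) - 1.0)

def gpd_es(u, beta, xi, n, n_u, p):
    """Expected shortfall implied by the same GPD fit (requires xi < 1)."""
    var = gpd_var(u, beta, xi, n, n_u, p)
    return var / (1.0 - xi) + (beta - xi * u) / (1.0 - xi)
```

Expected shortfall always exceeds VaR at the same level, which is why it is often preferred for low-frequency, high-severity losses such as internal fraud.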

4.
At present, operational risk in commercial banks is mostly measured by assuming a distribution for operational-loss data and deriving the capital requirement (risk reserve) with the VaR approach; the foundation of this method is the assumed distribution. Yet a bank's operational-risk reserve is usually a fairly definite value or a required interval, which imposes rather strict requirements on the reserve; otherwise hidden risks arise in the bank's operations. This paper therefore measures operational risk with a partitioned multi-objective risk method and, on that basis, derives the optimal capital requirement (risk reserve) and its model from information-entropy theory. The method is flexible and simple, but requires that the extreme-value distribution of the initial density function converge to the Gumbel type. An empirical analysis illustrates the relationship between the two approaches, and the method can serve as a reference for regulatory authorities.

5.
杨旭  聂磊 《统计研究》2008,25(9):32-35
A reinsurer's overall risk-management capability, standards, and conduct directly affect both the reinsurer's risk-management performance and the stability of the entire insurance market. Using extreme value theory, this paper models the risk distribution of reinsurance business and compares the risk distributions of proportional (quota-share) and non-proportional reinsurance. The results indicate that reinsurance loss distributions are not normal but heavy-tailed: quota-share losses have a large mean and small variance, whereas non-proportional losses have a smaller mean but larger variance, and at high confidence levels the risk loss rate of non-proportional business far exceeds that of quota-share business. Reinsurers should therefore vigorously develop non-proportional reinsurance, strengthen their capital base, actively broaden their business scope, and diversify risk in international markets.

6.
This paper estimates the distribution of extreme operational-risk losses of commercial banks with the block maxima method (BMM) from extreme value theory. A VaR model is built on the generalized extreme value (GEV) distribution: block-maxima samples are constructed, the two parameters are estimated by maximum likelihood, and the operational-risk VaR is computed. An empirical study of 220 operational-risk loss records of Chinese commercial banks over 1994-2008 shows that the BMM model can extrapolate beyond the sample and yields fairly accurate results even when data are scarce, so it is a reasonable tool for measuring the operational-risk VaR of commercial banks and provides a quantitative basis for operational-risk measurement and management in Chinese commercial banks.
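The block-maxima pipeline above can be sketched compactly. For simplicity this sketch fits the Gumbel law (the xi = 0 member of the GEV family) by moment matching rather than the paper's two-parameter maximum likelihood fit; the data and block size are hypothetical.

```python
import math
import statistics

def bmm_var(losses, block_size, p):
    """Block-maxima VaR: split the loss series into blocks, fit a Gumbel
    distribution to the block maxima by moment matching, and return its
    p-quantile."""
    maxima = [max(losses[i:i + block_size])
              for i in range(0, len(losses) - block_size + 1, block_size)]
    sigma = statistics.stdev(maxima) * math.sqrt(6) / math.pi
    mu = statistics.mean(maxima) - 0.5772156649 * sigma  # Euler-Mascheroni constant
    return mu - sigma * math.log(-math.log(p))           # Gumbel inverse CDF
```

Because the quantile is taken from the fitted extreme-value law rather than the empirical sample, the estimate can exceed every observed loss, which is the "beyond the sample" extrapolation the abstract refers to.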

7.
A Study of Credit-Risk Loss Distributions Based on Bayesian Methods
The loss distribution functions used by modern commercial banks for economic capital allocation all suffer from serious misspecification. Applying Bayesian methods and making full use of the available information, this paper corrects a normally distributed credit-loss distribution and obtains an optimized model of the credit-risk loss distribution. The results show that the corrected distribution is considerably more accurate, providing commercial banks with a practical tool for economic capital management.
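The Bayesian correction of a normal loss distribution can be illustrated with the standard conjugate update of a normal mean with known variance. This is a textbook sketch of the general idea, not the paper's specific optimization model; all numbers are hypothetical.

```python
def posterior_normal_mean(prior_mu, prior_var, data_mean, data_var, n):
    """Conjugate Bayesian update of a normal loss-distribution mean,
    assuming the observation variance data_var is known."""
    precision = 1.0 / prior_var + n / data_var
    post_var = 1.0 / precision
    post_mu = post_var * (prior_mu / prior_var + n * data_mean / data_var)
    return post_mu, post_var
```

The posterior mean is a precision-weighted compromise between the prior belief and the observed losses, and the posterior variance shrinks as more loss data accumulate, which is how extra information "corrects" the assumed distribution.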

8.
I. The current state of operational risk in Chinese commercial banks. To analyze how operational-risk loss events in Chinese commercial banks are distributed across event types and across business lines, we collected 189 operational-risk loss events publicly reported by domestic media between 2005 and 2012. For each loss we recorded the event type, the business line, and the loss amount. Table 1 presents the classified counts of loss events by event type and business line.

9.
This paper studies the measurement of operational risk. The occurrence of operational-risk events is modeled as a Cox process, a counting process more general than the Poisson process; losses follow the generalized Pareto distribution (GPD); and the dependence among risk events is described by a copula. On this basis the measurement of a financial institution's aggregate operational risk is analyzed. The paper also discusses the specification and estimation of the model parameters and, so that the model can be applied in practice, considers Monte Carlo simulation.
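A single period of the Cox-process setup above can be simulated in two stages: draw a random intensity, then draw a Poisson count given that intensity, with GPD severities by inverse transform. The Gamma intensity and all parameter values are illustrative assumptions, not the paper's calibration.

```python
import math
import random

def poisson_draw(rng, lam):
    """Poisson(lam) sample via Knuth's multiplication method."""
    limit, prod, n = math.exp(-lam), 1.0, 0
    while True:
        prod *= rng.random()
        if prod <= limit:
            return n
        n += 1

def cox_year_loss(rng, shape=4.0, scale=2.5, xi=0.2, beta=1.0):
    """One period of a Cox (doubly stochastic Poisson) process: a random
    Gamma intensity drives the event count, severities are GPD(xi, beta)
    sampled by inverse transform."""
    lam = rng.gammavariate(shape, scale)   # random intensity for this period
    n = poisson_draw(rng, lam)
    return sum(beta / xi * ((1.0 - rng.random()) ** (-xi) - 1.0) for _ in range(n))
```

The random intensity makes the counts overdispersed relative to a plain Poisson process, which is precisely what makes the Cox process "more general."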

10.
This paper uses the t-copula to capture the complex nonlinear dependence among insurance lines and the heavy-tailed character of their risks. Given the low-probability, high-severity nature of insurance business, the marginal distribution of each line is modeled with the generalized Pareto distribution from extreme value theory (EVT). Through a numerical example, the paper compares insurance risk estimates obtained with the two measures value at risk (VaR) and expected shortfall (ES), and concludes that the economic capital for aggregate risk derived from EVT and copula functions better reflects the actual risk of insurance business.
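The VaR-versus-ES comparison in the abstract reduces to two empirical estimators applied to the same vector of simulated aggregate losses. A minimal sketch (the input vector would come from the copula simulation, which is omitted here):

```python
def var_es(losses, p):
    """Empirical VaR and ES at confidence level p from simulated losses.
    VaR is the p-quantile; ES is the mean of the tail at or beyond it."""
    x = sorted(losses)
    idx = int(p * len(x))
    var = x[idx - 1]
    tail = x[idx - 1:]
    return var, sum(tail) / len(tail)
```

ES averages over the tail beyond the quantile, so it is never smaller than VaR and reacts to tail thickness that VaR ignores, which is the basis of the paper's conclusion.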

11.
In this paper the collective risk model with Poisson–Lindley and exponential distributions as the primary and secondary distributions, respectively, is developed in a detailed way. It is applied to determine the Bayes premium used in actuarial science and also to compute the regulatory capital in the analysis of operational risk. The results are illustrated with numerous examples and compared with other approaches proposed in the literature for these questions, with considerable differences being observed.
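The collective model above can be simulated directly: the Poisson–Lindley count arises as a Poisson count whose rate is itself Lindley-distributed, and a Lindley(theta) variable is a mixture of an exponential and a Gamma(2) component. A simulation sketch under hypothetical parameters:

```python
import math
import random

def rlindley(rng, theta):
    """Lindley(theta) draw: Exp(theta) with prob theta/(1+theta),
    else Gamma(2, scale 1/theta)."""
    if rng.random() < theta / (1.0 + theta):
        return rng.expovariate(theta)
    return rng.gammavariate(2.0, 1.0 / theta)

def aggregate_loss(rng, theta, claim_rate):
    """Collective risk model: Poisson-Lindley primary (claim count),
    exponential secondary (claim sizes)."""
    lam = rlindley(rng, theta)
    limit, prod, n = math.exp(-lam), 1.0, 0
    while True:                      # Poisson(lam) by Knuth's method
        prod *= rng.random()
        if prod <= limit:
            break
        n += 1
    return sum(rng.expovariate(claim_rate) for _ in range(n))
```

The mean aggregate loss is E[N]·E[X] with E[N] = (theta+2)/(theta(theta+1)) for the Poisson–Lindley count, which gives a quick sanity check on the simulation.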

12.
This article compares four methods used to approximate value at risk (VaR) from the first four moments of a probability distribution: Cornish–Fisher, Edgeworth, Gram–Charlier, and Johnson distributions. Increasing rearrangements are applied to the first three methods. Simulation results suggest that for large sample situations, Johnson distributions yield the most accurate VaR approximation. For small sample situations with small tail probabilities, Johnson distributions yield the worst approximation. A particularly relevant case would be in banking applications for calculating the size of operational risk to cover certain loss types. For this case, the rearranged Gram–Charlier method is recommended.
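Of the four moment-based methods compared above, the Cornish–Fisher expansion is the simplest to state: it adjusts the standard normal quantile using skewness and excess kurtosis. A sketch of the standard third-order expansion (without the rearrangement step the article applies):

```python
from statistics import NormalDist

def cornish_fisher_var(mean, std, skew, exkurt, p):
    """Cornish-Fisher VaR from the first four moments: the normal
    quantile z is corrected for skewness and excess kurtosis."""
    z = NormalDist().inv_cdf(p)
    z_cf = (z
            + (z * z - 1) * skew / 6
            + (z ** 3 - 3 * z) * exkurt / 24
            - (2 * z ** 3 - 5 * z) * skew ** 2 / 36)
    return mean + std * z_cf
```

With zero skewness and excess kurtosis the formula collapses to the normal VaR; positive skewness or excess kurtosis inflates high-confidence quantiles, as expected for heavy-tailed operational losses.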

13.
According to the latest proposals by the Basel Committee, banks are allowed to use statistical approaches for the computation of their capital charge covering financial risks such as credit risk, market risk and operational risk.

It is widely recognized that internal loss data alone do not suffice to provide accurate capital charge in financial risk management, especially for high-severity and low-frequency events. Financial institutions typically use external loss data to augment the available evidence and, therefore, provide more accurate risk estimates. Rigorous statistical treatments are required to make internal and external data comparable and to ensure that merging the two databases leads to unbiased estimates.

The goal of this paper is to propose a correct statistical treatment to make the external and internal data comparable and, therefore, mergeable. Such methodology augments internal losses with relevant, rather than redundant, external loss data.


14.
李政宵  孟生旺 《统计研究》2018,35(1):91-103
One of the core problems of non-life insurance actuarial science is the accurate evaluation of outstanding claims reserves. Reserve evaluation typically uses run-off triangles of incremental or cumulative claims. In reserve evaluation, the run-off triangles of multiple lines of business are usually dependent, and this dependence has an important effect on the evaluation of the insurer's total reserve. In essence, the outstanding claims reserve is a random variable whose loss distribution can take diverse forms, so choosing an appropriate distribution is crucial. The GB2 distribution is a continuous four-parameter distribution with a flexible density function and highly adaptable shape; many common distributions arise as special cases, making it well suited to run-off triangle data with different characteristics. To account for the effect of dependence between lines of business on the reserve evaluation, this paper builds a dependent reserving model based on the GB2 distribution: the incremental claims of each line are assumed to follow a GB2 distribution, accident year and development year enter the mean of the distribution as explanatory variables, and a calendar-year random effect captures the dependence across lines. Parameters are estimated and outstanding claims reserves predicted with the Bayesian HMC method, and the predictive distribution and evaluation of the total reserve are given. The method is applied to the run-off triangle data of two lines of business and compared with other existing methods. The empirical results show that the GB2-based dependent reserving model accounts more fully for the tail risk and uncertainty of outstanding claims reserves and is better suited to reserve data with heavy-tailed or long-tailed features.
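The flexibility claimed for the GB2 distribution comes from its four-parameter density, which can be evaluated stably on the log scale via the log-beta function. A sketch of the density (parameter names follow the common a, b, p, q convention; the values in the check below are hypothetical):

```python
import math

def gb2_pdf(y, a, b, p, q):
    """Density of the four-parameter GB2 distribution:
    f(y) = |a| y^(ap-1) / (b^(ap) B(p,q) (1 + (y/b)^a)^(p+q)),  y > 0."""
    log_beta = math.lgamma(p) + math.lgamma(q) - math.lgamma(p + q)
    log_f = (math.log(abs(a)) + (a * p - 1) * math.log(y)
             - a * p * math.log(b) - log_beta
             - (p + q) * math.log1p((y / b) ** a))
    return math.exp(log_f)
```

Setting particular parameters recovers familiar special cases, e.g. p = q = 1 gives the log-logistic distribution, which is one reason GB2 can adapt to both heavy-tailed and short-tailed triangle data.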

15.
This paper describes a Bayesian approach to make inference for risk reserve processes with an unknown claim‐size distribution. A flexible model based on mixtures of Erlang distributions is proposed to approximate the special features frequently observed in insurance claim sizes, such as long tails and heterogeneity. A Bayesian density estimation approach for the claim sizes is implemented using reversible jump Markov chain Monte Carlo methods. An advantage of the considered mixture model is that it belongs to the class of phase‐type distributions, and thus explicit evaluations of the ruin probabilities are possible. Furthermore, from a statistical point of view, the parametric structure of the mixtures of the Erlang distribution offers some advantages compared with the whole over‐parametrized family of phase‐type distributions. Given the observed claim arrivals and claim sizes, we show how to estimate the ruin probabilities, as a function of the initial capital, and predictive intervals that give a measure of the uncertainty in the estimations.  
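The risk reserve process underlying the paper can be illustrated with a crude finite-horizon Monte Carlo estimate of the ruin probability for a compound-Poisson surplus process (exponential claims stand in for the paper's Erlang mixtures; all parameters are hypothetical):

```python
import random

def ruin_probability(u, c, lam, mean_claim, horizon, n_paths, seed=1):
    """Monte Carlo finite-horizon ruin probability for the reserve process
    R(t) = u + c*t - S(t), with Poisson(lam) claim arrivals and
    exponential claim sizes of mean mean_claim."""
    rng = random.Random(seed)
    ruins = 0
    for _ in range(n_paths):
        t, claims = 0.0, 0.0
        while True:
            t += rng.expovariate(lam)            # next claim arrival time
            if t > horizon:
                break                            # survived the horizon
            claims += rng.expovariate(1.0 / mean_claim)
            if u + c * t - claims < 0:
                ruins += 1                       # reserve went negative
                break
    return ruins / n_paths
```

Ruin probability falls monotonically in the initial capital u, which is the functional relationship the paper estimates with Bayesian predictive intervals; the explicit phase-type formulas replace this brute-force simulation in their approach.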

16.
We consider robust Bayesian prediction of a function of unobserved data based on observed data under an asymmetric loss function. Under a general linear-exponential posterior risk function, the posterior regret gamma-minimax (PRGM), conditional gamma-minimax (CGM), and most stable (MS) predictors are obtained when the prior distribution belongs to a general class of prior distributions. We use this general form to find the PRGM, CGM, and MS predictors of a general linear combination of the finite population values under LINEX loss function on the basis of two classes of priors in a normal model. Also, under the general ε-contamination class of prior distributions, the PRGM predictor of a general linear combination of the finite population values is obtained. Finally, we provide a real-life example to predict a finite population mean and compare the estimated risk and risk bias of the obtained predictors under the LINEX loss function by a simulation study.  
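The asymmetric LINEX loss used above, and the Bayes estimate it induces for a normal posterior, can be written down directly. A sketch of the standard textbook forms (not the paper's gamma-minimax predictors; the numbers below are hypothetical):

```python
import math

def linex_loss(theta, d, a, b=1.0):
    """LINEX loss b*(exp(a*(d-theta)) - a*(d-theta) - 1): for a > 0,
    overestimation (d > theta) is penalized exponentially, underestimation
    only linearly."""
    return b * (math.exp(a * (d - theta)) - a * (d - theta) - 1.0)

def linex_bayes_estimate(post_mu, post_var, a):
    """Bayes estimate under LINEX loss for a N(post_mu, post_var) posterior:
    d* = -(1/a) * log E[exp(-a*theta)] = post_mu - a*post_var/2."""
    return post_mu - a * post_var / 2.0
```

For a > 0 the estimate is pulled below the posterior mean, reflecting the heavier penalty on overestimation; the shift grows with both the asymmetry a and the posterior uncertainty.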

17.
Risk estimation is an important statistical question for the purposes of selecting a good estimator (i.e., model selection) and assessing its performance (i.e., estimating generalization error). This article introduces a general framework for cross-validation and derives distributional properties of cross-validated risk estimators in the context of estimator selection and performance assessment. Arbitrary classes of estimators are considered, including density estimators and predictors for both continuous and polychotomous outcomes. Results are provided for general full data loss functions (e.g., absolute and squared error, indicator, negative log density). A broad definition of cross-validation is used in order to cover leave-one-out cross-validation, V-fold cross-validation, Monte Carlo cross-validation, and bootstrap procedures. For estimator selection, finite sample risk bounds are derived and applied to establish the asymptotic optimality of cross-validation, in the sense that a selector based on a cross-validated risk estimator performs asymptotically as well as an optimal oracle selector based on the risk under the true, unknown data generating distribution. The asymptotic results are derived under the assumption that the size of the validation sets converges to infinity and hence do not cover leave-one-out cross-validation. For performance assessment, cross-validated risk estimators are shown to be consistent and asymptotically linear for the risk under the true data generating distribution and confidence intervals are derived for this unknown risk. Unlike previously published results, the theorems derived in this and our related articles apply to general data generating distributions, loss functions (i.e., parameters), estimators, and cross-validation procedures.  
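The V-fold cross-validated risk estimator at the core of the framework above can be sketched generically: fit on V-1 folds, evaluate the loss on the held-out fold, and average over all folds. The `fit` and `loss` callables here are placeholders for an arbitrary estimator and full-data loss function.

```python
import random

def vfold_risk(data, v, fit, loss, seed=0):
    """V-fold cross-validated risk: average held-out loss of the
    estimator `fit` over a random partition of the data into v folds."""
    idx = list(range(len(data)))
    random.Random(seed).shuffle(idx)
    folds = [idx[k::v] for k in range(v)]
    total = 0.0
    for k in range(v):
        hold = set(folds[k])
        train = [data[i] for i in idx if i not in hold]
        theta = fit(train)                         # fit on the other v-1 folds
        total += sum(loss(data[i], theta) for i in folds[k])
    return total / len(data)
```

For example, with `fit` the sample mean and `loss` the squared error, the cross-validated risk of i.i.d. unit-variance data should sit near 1, the true generalization error of the mean as a predictor.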


Copyright © Beijing Qinyun Technology Development Co., Ltd. (北京勤云科技发展有限公司)  京ICP备09084417号