Similar Articles (20 results)
1.
A pivotal characteristic of credit defaults that is ignored by most credit scoring models is the rarity of the event. The most widely used model to estimate the probability of default is the logistic regression model. Since the dependent variable represents a rare event, the logistic regression model shows significant drawbacks, for example, underestimation of the default probability, which could be very risky for banks. To overcome these drawbacks, we propose the generalized extreme value (GEV) regression model. In particular, in a generalized linear model (GLM) with a binary dependent variable we suggest the quantile function of the GEV distribution as the link function, so our attention is focused on the tail of the response curve for values close to one. Parameters are estimated by maximum likelihood. This model accommodates skewness and generalises the GLM with complementary log–log link function. We analyse its performance through simulation studies. Finally, we apply the proposed model to empirical data on Italian small and medium enterprises.
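
As a hedged illustration of the idea, the sketch below fits a binary regression with the GEV distribution function as the response curve, P(Y=1|x) = exp{-(1 + xi*x'beta)^(-1/xi)}, by direct maximum likelihood on synthetic rare-event data. The parameterisation is the standard GEV form, not code from the paper.

```python
import numpy as np
from scipy.optimize import minimize

def gev_response(eta, xi):
    # GEV CDF as response function: P(Y=1) = exp(-(1 + xi*eta)^(-1/xi))
    t = np.maximum(1.0 + xi * eta, 1e-10)   # enforce the support constraint
    return np.exp(-t ** (-1.0 / xi))

def nll(params, X, y):
    beta, xi = params[:-1], params[-1]
    if abs(xi) < 1e-6:                      # avoid division by zero at xi = 0
        return np.inf
    p = np.clip(gev_response(X @ beta, xi), 1e-10, 1 - 1e-10)
    return -np.sum(y * np.log(p) + (1 - y) * np.log(1 - p))

rng = np.random.default_rng(0)
n = 5000
X = np.column_stack([np.ones(n), rng.normal(size=n)])
# defaults generated from the model itself, at a ~3% rare-event rate
y = (rng.random(n) < gev_response(X @ np.array([-1.5, 0.4]), -0.25)).astype(float)

res = minimize(nll, x0=np.array([-1.0, 0.1, -0.2]), args=(X, y), method="Nelder-Mead")
print("beta_hat:", res.x[:-1], "xi_hat:", res.x[-1])
```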

2.
ABSTRACT

Traditional credit risk assessment models do not consider the time factor; they only consider whether a customer will default, not when. As a result, they cannot support profit-maximising decisions: even if a customer defaults, the financial institution may still profit under some conditions. Most recent research applies the Cox proportional hazards model to credit scoring, predicting the time at which a customer is most likely to default. However, fully exploiting the dynamic capability of the Cox proportional hazards model requires time-varying macroeconomic variables, which involve more advanced data collection. Since short-term default cases are the ones that inflict the greatest losses on a financial institution, a loan manager approving applications is more interested in identifying loans that may default within a short period than in predicting exactly when a loan will default. This paper proposes a decision tree-based short-term default credit risk assessment model. The goal is to use the decision tree to filter short-term defaults and produce a highly accurate model that can distinguish default lending. The approach integrates bootstrap aggregating (bagging) with the synthetic minority over-sampling technique (SMOTE) to improve the decision tree's stability and its performance on unbalanced data. Finally, a real case of small and medium enterprise loan data drawn from a local financial institution in Taiwan illustrates the proposed approach. Compared with logistic regression and Cox proportional hazards models, the proposed model achieves clearly superior recall and precision.
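
A minimal sketch of the core pipeline on synthetic stand-in data: SMOTE oversampling of the minority (short-term default) class followed by bagged decision trees. It assumes scikit-learn >= 1.2 (for the `estimator` keyword) and the imbalanced-learn package; the paper's Taiwanese loan data and tuning are not reproduced.

```python
from imblearn.over_sampling import SMOTE                 # imbalanced-learn package
from sklearn.datasets import make_classification
from sklearn.ensemble import BaggingClassifier
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# synthetic unbalanced data: ~5% positives stand in for short-term defaults
X, y = make_classification(n_samples=3000, weights=[0.95], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

X_bal, y_bal = SMOTE(random_state=0).fit_resample(X_tr, y_tr)  # oversample minority class
clf = BaggingClassifier(estimator=DecisionTreeClassifier(max_depth=5),
                        n_estimators=50, random_state=0).fit(X_bal, y_bal)
print(classification_report(y_te, clf.predict(X_te)))          # recall/precision per class
```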

3.
This paper builds credit rating and default probability models on a Bayesian foundation and shows how financial institutions can use existing rating information to improve the accuracy of obligor credit risk assessment. Building on single-obligor default probability measurement and Merton's theory, and allowing for the heterogeneous impact of macroeconomic shocks on different obligors, it measures portfolio default risk. An application of the Bayesian model to real data shows that the Bayesian approach offers a more flexible framework and better predictive ability.
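
As a toy illustration of the Bayesian updating at the heart of this approach (not the paper's full rating model), a Beta prior on a rating grade's default rate is updated with observed defaults; all numbers are invented.

```python
from scipy import stats

a0, b0 = 2, 98              # Beta prior: grade-level PD centred near 2%
defaults, n = 7, 200        # observed defaults among 200 obligors in the grade
post = stats.beta(a0 + defaults, b0 + n - defaults)   # conjugate posterior
print(f"posterior mean PD = {post.mean():.4f}")
print("95% credible interval:", post.interval(0.95))
```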

4.
Many credit risk models are based on the selection of a single logistic regression model on which to base parameter estimation. When many competing models are available, and without enough guidance from economic theory, model averaging represents an appealing alternative to the selection of single models. Although model averaging approaches have been present in statistics for many years, they have only recently begun to receive attention in economics and finance applications. This contribution shows how Bayesian model averaging can be applied to credit risk estimation, a research area that has received a great deal of attention recently, especially in the light of the global financial crisis of the last few years and the related attempts to regulate international finance. The paper considers the use of logistic regression models under the Bayesian model averaging paradigm. We argue that Bayesian model averaging is not only more correct from a theoretical viewpoint, but also slightly superior to single selected models in terms of predictive performance.
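
A common practical approximation (not necessarily the paper's exact computation) weights candidate logistic regressions by exp(-BIC/2), which approximates posterior model probabilities. The sketch below averages predicted default probabilities over all covariate subsets of a synthetic dataset.

```python
import itertools
import numpy as np
import statsmodels.api as sm
from sklearn.datasets import make_classification

X, y = make_classification(n_samples=500, n_features=4, random_state=1)
bics, preds = [], []
for k in range(1, 5):
    for cols in itertools.combinations(range(4), k):   # every candidate model
        Xi = sm.add_constant(X[:, list(cols)])
        fit = sm.Logit(y, Xi).fit(disp=0)
        bics.append(fit.bic)
        preds.append(fit.predict(Xi))

w = np.exp(-0.5 * (np.array(bics) - min(bics)))        # BIC-approximated weights
w /= w.sum()
p_bma = (w[:, None] * np.array(preds)).sum(axis=0)     # model-averaged default probabilities
print("largest model weights:", np.sort(w)[-3:])
```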

5.
Summary. Model selection for marginal regression analysis of longitudinal data is challenging owing to the presence of correlation and the difficulty of specifying the full likelihood, particularly for correlated categorical data. The paper introduces a novel Bayesian information criterion type model selection procedure based on the quadratic inference function, which does not require the full likelihood or quasi-likelihood. With probability approaching 1, the criterion selects the most parsimonious correct model. Although a working correlation matrix is assumed, there is no need to estimate the nuisance parameters in the working correlation matrix; moreover, the model selection procedure is robust against the misspecification of the working correlation matrix. The criterion proposed can also be used to construct a data-driven Neyman smooth test for checking the goodness of fit of a postulated model. This test is especially useful and often yields much higher power in situations where the classical directional test behaves poorly. The finite sample performance of the model selection and model checking procedures is demonstrated through Monte Carlo studies and analysis of a clinical trial data set.

6.
Parametric procedures for conditional variance modelling are usually associated with model risk, which may affect the estimation of volatility and conditional value-at-risk through either estimation risk or misspecification risk. Non-parametric artificial intelligence models are therefore an alternative, since they do not rely on an explicit form for the volatility. In this paper, we consider least-squares support vector regression (LS-SVR), weighted LS-SVR and fixed-size LS-SVR models to handle conditional risk estimation while taking model risk into account. A simulation study and a real application show the performance of the proposed volatility and VaR models.
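
LS-SVR with an RBF kernel coincides with kernel ridge regression up to the bias term, so scikit-learn's KernelRidge stands in for it in this loose sketch: squared returns are regressed on their lag to obtain a model-free conditional variance, from which a Gaussian one-step VaR is formed. This illustrates the flavour of the approach under stated assumptions, not the paper's weighted or fixed-size variants.

```python
import numpy as np
from sklearn.kernel_ridge import KernelRidge   # LS-SVR is kernel ridge up to a bias term

rng = np.random.default_rng(9)
n = 1000
r, sig2 = np.zeros(n), np.full(n, 0.1)
for t in range(1, n):                          # simulate GARCH(1,1)-type returns
    sig2[t] = 0.05 + 0.10 * r[t - 1] ** 2 + 0.85 * sig2[t - 1]
    r[t] = np.sqrt(sig2[t]) * rng.normal()

X = r[:-1, None] ** 2                          # lagged squared return as the predictor
y = r[1:] ** 2                                 # noisy proxy for conditional variance
var_hat = np.maximum(KernelRidge(kernel="rbf", alpha=1.0, gamma=0.5)
                     .fit(X, y).predict(X), 1e-8)
VaR_99 = 2.326 * np.sqrt(var_hat)              # one-step 99% VaR under conditional normality
print("mean estimated VaR:", VaR_99.mean())
```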

7.
This article proposes a new class of copula-based dynamic models for high-dimensional conditional distributions, facilitating the estimation of a wide variety of measures of systemic risk. Our proposed models draw on successful ideas from the literature on modeling high-dimensional covariance matrices and on recent work on models for general time-varying distributions. Our use of copula-based models enables the estimation of the joint model in stages, greatly reducing the computational burden. We use the proposed new models to study a collection of daily credit default swap (CDS) spreads on 100 U.S. firms over the period 2006 to 2012. We find that while the probability of distress for individual firms has fallen greatly since the financial crisis of 2008–2009, the joint probability of distress (a measure of systemic risk) is substantially higher now than in the precrisis period. Supplementary materials for this article are available online.
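
A toy sketch of why the joint distress probability matters separately from the marginals: with fixed 5% marginal distress probabilities, a Gaussian copula with correlation 0.5 makes joint distress orders of magnitude more likely than under independence. This is only the static intuition, not the paper's dynamic copula model.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(10)
d, rho, p_marg = 10, 0.5, 0.05
corr = rho * np.ones((d, d)) + (1 - rho) * np.eye(d)   # equicorrelation matrix
Z = rng.normal(size=(200_000, d)) @ np.linalg.cholesky(corr).T
distress = norm.cdf(Z) < p_marg                        # each firm's marginal PD stays 5%
print("joint distress probability:", distress.all(axis=1).mean())
print("independence benchmark:", p_marg ** d)
```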

8.
A power transformation regression model is considered for exponentially distributed time to failure data with right censoring. Procedures for estimation of parameters by maximum likelihood and assessment of goodness of model fit are described and illustrated.
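
One plausible reading of such a model (an assumption here, not the paper's definition) is that the power-transformed failure time T^lambda is exponential with a covariate-dependent rate; the sketch below fits it by maximum likelihood under right censoring.

```python
import numpy as np
from scipy.optimize import minimize

def nll(params, t, d, X):
    lam, beta = params[0], params[1:]
    if lam <= 0:
        return np.inf
    rate = np.exp(X @ beta)                 # exponential rate for T**lam
    s = t ** lam
    # d = 1: log density (with Jacobian of the transform); d = 0: log survival
    ll = d * (np.log(rate) + np.log(lam) + (lam - 1) * np.log(t) - rate * s) \
         - (1 - d) * rate * s
    return -ll.sum()

rng = np.random.default_rng(2)
n = 400
X = np.column_stack([np.ones(n), rng.normal(size=n)])
t_fail = rng.exponential(1 / np.exp(X @ np.array([0.2, 0.5]))) ** (1 / 0.8)
c = rng.exponential(2.0, n)                 # independent right censoring
t, d = np.minimum(t_fail, c), (t_fail <= c).astype(float)

res = minimize(nll, x0=[1.0, 0.0, 0.0], args=(t, d, X), method="Nelder-Mead")
print("lambda, beta:", res.x)
```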

9.
Liu Hong. 统计研究 (Statistical Research), 2008, 25(7): 61-65
With the gradual opening of the domestic financial market, Chinese commercial banks face increasingly fierce competition. Improving commercial banks' credit risk management and strengthening their competitiveness has become an urgent problem. Besides improving the credit management system, it is necessary to develop credit risk models to reduce credit risk. Given the current data situation, a realistic choice is to develop a loan default identification system. This paper presents an empirical study of neural networks for loan default identification, compares their performance with discriminant analysis and decision trees, and draws some meaningful conclusions.
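
A small synthetic stand-in for the comparison described (the paper's loan data are not public): a neural network, linear discriminant analysis and a decision tree compared by cross-validated AUC in scikit-learn.

```python
from sklearn.datasets import make_classification
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=2000, weights=[0.9], random_state=0)  # 10% defaults
models = [
    ("neural network", MLPClassifier(hidden_layer_sizes=(16,), max_iter=1000, random_state=0)),
    ("discriminant analysis", LinearDiscriminantAnalysis()),
    ("decision tree", DecisionTreeClassifier(max_depth=5, random_state=0)),
]
for name, clf in models:
    auc = cross_val_score(clf, X, y, cv=5, scoring="roc_auc").mean()
    print(f"{name}: mean AUC = {auc:.3f}")
```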

10.
The usefulness of logistic regression depends to a great extent on the correct specification of the relation between a binary response and characteristics of the unit on which the response is recorded. Currently used methods for testing for misspecification (lack of fit) of a proposed logistic regression model do not perform well when a data set contains almost as many distinct covariate vectors as experimental units, a condition referred to as sparsity. A new algorithm for grouping sparse data to create pseudo-replicates and using them to test for lack of fit is developed. A simulation study illustrates settings in which the new test is superior to existing ones. Analysis of a dataset consisting of the ages at menarche of Warsaw girls is also used to compare the new and existing lack-of-fit tests.
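
For orientation, the classical grouped lack-of-fit statistic that such work improves upon can be sketched as a decile-of-risk test in the Hosmer-Lemeshow style; the paper's grouping algorithm for sparse data is more refined than this plain binning.

```python
import numpy as np
import statsmodels.api as sm
from scipy.stats import chi2

rng = np.random.default_rng(8)
n = 800
x = rng.normal(size=n)
y = rng.binomial(1, 1 / (1 + np.exp(1 - 1.5 * x)))     # data from a true logistic model
phat = sm.Logit(y, sm.add_constant(x)).fit(disp=0).predict()

edges = np.quantile(phat, np.linspace(0, 1, 11))       # 10 groups of similar risk
g = np.digitize(phat, edges[1:-1])                     # group labels 0..9
stat = 0.0
for k in range(10):
    o, e, m = y[g == k].sum(), phat[g == k].sum(), (g == k).sum()
    stat += (o - e) ** 2 / (e * (1 - e / m))           # Pearson-type group contribution
print("statistic:", stat, "p-value:", chi2.sf(stat, df=10 - 2))
```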

11.
Summary. Standard goodness-of-fit tests for a parametric regression model against a series of nonparametric alternatives are based on residuals arising from a fitted model. When a parametric regression model is compared with a nonparametric model, goodness-of-fit testing can be naturally approached by evaluating the likelihood of the parametric model within a nonparametric framework. We employ the empirical likelihood for an α-mixing process to formulate a test statistic that measures the goodness of fit of a parametric regression model. The technique is based on a comparison with kernel smoothing estimators. The empirical likelihood formulation of the test has two attractive features. One is its automatic consideration of the variation that is associated with the nonparametric fit due to empirical likelihood's ability to Studentize internally. The other is that the asymptotic distribution of the test statistic is free of unknown parameters, avoiding plug-in estimation. We apply the test to a discretized diffusion model which has recently been considered in financial market analysis.

12.
The broken stick model is a model of the abundance of species in a habitat, and it has been widely extended. In this paper, we present results from exploratory data analysis of this model. To obtain some of the statistics, we express the broken stick model as a probability distribution function and provide an expression for the corresponding cumulative distribution function, which is needed to obtain the exploratory results. The inequalities we present are useful in ecological studies that apply broken stick models. These results can also be used, in several alternative and complementary ways, for testing the goodness of fit of the broken stick model as an alternative to the chi-square test, which has often been the main test used.
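
For reference, the broken stick expected abundances and the classical chi-square check that the paper's exploratory tools complement can be computed directly; the counts below are invented.

```python
import numpy as np
from scipy.stats import chisquare

def broken_stick_probs(S):
    # E[p_i] = (1/S) * sum_{k=i}^{S} 1/k for species ranked i = 1..S
    return np.array([np.sum(1.0 / np.arange(i, S + 1)) / S for i in range(1, S + 1)])

observed = np.array([120, 65, 40, 25, 15, 10, 5])   # ranked abundance counts (invented)
probs = broken_stick_probs(len(observed))
expected = probs / probs.sum() * observed.sum()     # match totals exactly
print(chisquare(observed, expected))
```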

13.
Relative risks are often considered preferable to odds ratios for quantifying the association between a predictor and a binary outcome. Relative risk regression is an alternative to logistic regression where the parameters are relative risks rather than odds ratios. It uses a log link binomial generalised linear model, or log-binomial model, which requires parameter constraints to prevent probabilities from exceeding 1. This leads to numerical problems with standard approaches for finding the maximum likelihood estimate (MLE), such as Fisher scoring, and has motivated various non-MLE approaches. In this paper we discuss the roles of the MLE and its main competitors for relative risk regression. It is argued that reliable alternatives to Fisher scoring mean that numerical issues are no longer a motivation for non-MLE methods. Nonetheless, non-MLE methods may be worthwhile for other reasons and we evaluate this possibility for alternatives within a class of quasi-likelihood methods. The MLE obtained using a reliable computational method is recommended, but this approach requires bootstrapping when estimates are on the parameter space boundary. If convenience is paramount, then quasi-likelihood estimation can be a good alternative, although parameter constraints may be violated. Sensitivity to model misspecification and outliers is also discussed along with recommendations and priorities for future research.
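
A minimal log-binomial sketch in statsmodels on simulated data with a true relative risk of 2; as the article discusses, standard fitting can fail when the MLE lies on the parameter-space boundary, which this easy example avoids. The `Log` link class name assumes a recent statsmodels release.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)
x = rng.binomial(1, 0.5, 1000)                      # binary exposure
y = rng.binomial(1, np.where(x == 1, 0.10, 0.05))   # true relative risk = 2
fit = sm.GLM(y, sm.add_constant(x),
             family=sm.families.Binomial(link=sm.families.links.Log())).fit()
print("estimated relative risk:", np.exp(fit.params[1]))
```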

14.
An overview is given of methodology for testing goodness of fit of parametric models using nonparametric function estimation techniques. The ideas are illustrated in two settings: the classical one-sample goodness-of-fit scenario and testing the goodness of fit of a polynomial regression model.

15.
The authors consider a semiparametric partially linear regression model with serially correlated errors. They propose a new way of estimating the error structure which has the advantage that it does not involve any nonparametric estimation. This allows them to develop an inference procedure consisting of a bandwidth selection method, an efficient semiparametric generalized least squares estimator of the parametric component, a goodness-of-fit test based on the bootstrap, and a technique for selecting significant covariates in the parametric component. They assess their approach through simulation studies and illustrate it with a concrete application.

16.
This article considers a probability generating function-based divergence statistic for parameter estimation. The performance and robustness of the proposed statistic in parameter estimation are studied for the negative binomial distribution by Monte Carlo simulation, especially in comparison with maximum likelihood and minimum Hellinger distance estimation. Numerical examples are given as an illustration of goodness of fit.
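
A hedged sketch of the general idea (the paper's specific divergence may differ): estimate negative binomial parameters by minimising the squared distance between the empirical and model probability generating functions on a grid in [0, 1].

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(4)
x = rng.negative_binomial(n=5, p=0.4, size=500)
t = np.linspace(0, 1, 51)
epgf = np.mean(t[:, None] ** x[None, :], axis=1)      # empirical PGF on the grid

def loss(params):
    r, p = params
    if r <= 0 or not 0 < p < 1:
        return np.inf
    mpgf = (p / (1 - (1 - p) * t)) ** r               # negative binomial PGF
    return np.sum((epgf - mpgf) ** 2)

res = minimize(loss, x0=[2.0, 0.5], method="Nelder-Mead")
print("r, p:", res.x)
```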

17.
In this study, an evaluation of Bayesian hierarchical models is made based on simulation scenarios to compare single-stage and multi-stage Bayesian estimation. Simulated datasets of lung cancer disease counts for men aged 65 and older across 44 wards in the London Health Authority were analysed using a range of spatially structured random effect components. The goals of this study are to determine which of these single-stage models performs best given a certain simulating model, how estimation methods (single- vs. multi-stage) compare in yielding posterior estimates of fixed effects in the presence of spatially structured random effects, and which of two spatial prior models, the Leroux or ICAR model, performs best in a multi-stage context under different assumptions concerning spatial correlation. Among the fitted single-stage models without covariates, we found that when there is a low amount of variability in the distribution of disease counts, the BYM model is relatively robust to misspecification in terms of DIC, while the Leroux model is the least robust to misspecification. When these models were fitted to data generated from models with covariates, we found that when there was one set of covariates, either spatially correlated or non-spatially correlated, changing the values of the fixed coefficients affected the ability of either the Leroux or ICAR model to fit the data well in terms of DIC. When there were multiple sets of spatially correlated covariates in the simulating model, however, we could not distinguish the goodness of fit to the data between these single-stage models. We found that the multi-stage modelling process via the Leroux and ICAR models generally reduced the variance of the posterior estimated fixed effects for data generated from models with covariates and a UH term compared to analogous single-stage models. Finally, we found that the multi-stage Leroux model compares favourably to the multi-stage ICAR model in terms of DIC. We conclude that the multi-stage Leroux model should be seriously considered in applications of Bayesian disease mapping when an investigator wishes to fit a model with both fixed effects and spatially structured random effects to Poisson count data.

18.
ABSTRACT

The estimation of the variance function plays an extremely important role in statistical inference for regression models. In this paper we propose a variance modelling method that constructs the variance structure by combining the exponential polynomial modelling method with the kernel smoothing technique. A simple estimation method for the parameters in heteroscedastic linear regression models is developed for the case where the covariance matrix is an unknown diagonal matrix and the variance is a positive function of the mean. The consistency and asymptotic normality of the resulting estimators are established under mild assumptions. In particular, a simple bootstrap test is adapted to test for misspecification of the variance function. Monte Carlo simulation studies examine the finite sample performance of the proposed methods. Finally, the methodologies are illustrated with an ozone concentration dataset.
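
A hedged sketch of the two-step flavour of such variance modelling (one plausible reading, not the paper's exact algorithm): fit the mean by OLS, model the log squared residuals by an exponential polynomial in the fitted mean, then refit by weighted least squares.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(5)
n = 500
x = rng.uniform(0, 2, n)
X = sm.add_constant(x)
mu = 1 + 2 * x
y = mu + rng.normal(size=n) * np.exp(0.1 + 0.4 * mu) ** 0.5   # variance grows with the mean

ols = sm.OLS(y, X).fit()                                   # step 1: mean by OLS
Z = np.column_stack([np.ones(n), ols.fittedvalues])        # degree-1 polynomial in the mean
gamma = sm.OLS(np.log(ols.resid ** 2 + 1e-12), Z).fit().params
w = 1.0 / np.exp(Z @ gamma)                                # inverse estimated variances
wls = sm.WLS(y, X, weights=w).fit()                        # step 2: refit by WLS
print("OLS:", ols.params, "WLS:", wls.params)
```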

19.
Corporate financial risk has long been a hot topic in risk management theory and practice. Using discriminant analysis and econometric methods, this paper empirically tests and predicts the default characteristics of 461 sample enterprises of a commercial bank in Chongqing over 2002-2005. The results show that the most important determinants are seven financial ratios, including the debt-to-asset ratio, the acid-test ratio and the return on assets, together with the enterprise's industry sector, and that a probit model accounting for heteroscedasticity has better predictive power.
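
A synthetic stand-in for the kind of probit default model described (variable names and coefficients are hypothetical; heteroscedasticity-robust standard errors are used here as a simple nod to the issue, not the paper's heteroscedastic probit specification):

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(6)
n = 461
debt_ratio = rng.uniform(0.2, 0.9, n)       # debt-to-asset ratio
acid_test = rng.uniform(0.3, 2.0, n)        # acid-test (quick) ratio
roa = rng.normal(0.05, 0.04, n)             # return on assets
X = sm.add_constant(np.column_stack([debt_ratio, acid_test, roa]))
y = ((X @ np.array([-2.0, 3.0, -0.8, -5.0]) + rng.normal(size=n)) > 0).astype(int)

fit = sm.Probit(y, X).fit(disp=0, cov_type="HC1")   # robust covariance
print(fit.params)
```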

20.
The prediction of the time of default in a credit risk setting via survival analysis needs to take a high censoring rate into account, since default never occurs for the majority of debtors. Mixture cure models allow the part of the loan population that is unsusceptible to default to be modeled separately from the time of default for the susceptible population. In this article, we extend the mixture cure model to include time-varying covariates. We illustrate the method via simulations and by incorporating macro-economic factors as predictors for an actual bank dataset.
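
A hedged sketch of a basic parametric mixture cure model (logistic cure fraction, exponential latency, fixed covariates only; the paper's contribution is the extension to time-varying covariates, which this toy omits), fitted by direct maximum likelihood.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import expit

def nll(params, t, d, X):
    k = X.shape[1]
    gamma, beta = params[:k], params[k:]
    pi = expit(X @ gamma)                    # P(unsusceptible, i.e. never defaults)
    rate = np.exp(X @ beta)                  # exponential hazard for susceptibles
    S = np.exp(-rate * t)                    # latency survival
    f = rate * S                             # latency density
    ll = d * np.log((1 - pi) * f + 1e-300) + (1 - d) * np.log(pi + (1 - pi) * S)
    return -ll.sum()

rng = np.random.default_rng(7)
n = 2000
X = np.column_stack([np.ones(n), rng.normal(size=n)])
cured = rng.random(n) < expit(X @ np.array([1.0, -0.5]))  # large cure fraction -> heavy censoring
t_event = rng.exponential(1 / np.exp(X @ np.array([-0.5, 0.3])))
c = rng.uniform(0, 5, n)                                  # administrative censoring times
t = np.where(cured, c, np.minimum(t_event, c))
d = (~cured & (t_event <= c)).astype(float)

res = minimize(nll, x0=np.zeros(4), args=(t, d, X), method="Nelder-Mead")
print("gamma (cure):", res.x[:2], "beta (latency):", res.x[2:])
```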
