Similar Documents
20 similar documents found (search time: 31 ms)
1.
Value at risk and expected shortfall are the two most popular measures of financial risk. Here, we tabulate expressions for both these measures for over 100 parametric distributions, including all commonly known distributions, and illustrate a data application. We expect that this collection of expressions could serve as a source of reference and encourage further research with respect to measures of financial risk.
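
As a small illustration of the kind of closed-form expression such a tabulation contains, here is the normal-distribution case (working with losses rather than returns; the function name is ours, not the paper's):

```python
from scipy.stats import norm

def var_es_normal(mu, sigma, alpha):
    """Value at risk and expected shortfall of an N(mu, sigma^2) loss
    at confidence level alpha."""
    z = norm.ppf(alpha)
    var = mu + sigma * z                           # the alpha-quantile of the loss
    es = mu + sigma * norm.pdf(z) / (1.0 - alpha)  # mean loss beyond the VaR
    return var, es

var95, es95 = var_es_normal(0.0, 1.0, 0.95)
```

For the standard normal at the 95% level this reproduces the familiar values VaR ≈ 1.645 and ES ≈ 2.063; analogous closed forms for the other distributions are what the tabulation collects.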

2.
Negative binomial (NB) regression is the most common full-likelihood method for analysing count data exhibiting overdispersion with respect to the Poisson distribution. Most practitioners are content to fit one of two NB variants; however, other important variants exist. It is demonstrated here that the VGAM R package can fit them all under a common statistical framework founded upon a generalised linear and additive model approach. Additionally, other modifications such as zero-altered (hurdle), zero-truncated and zero-inflated NB distributions are naturally handled. Rootograms are also available for graphically checking the goodness of fit. Two data sets and some recently added features of the VGAM package are used here for illustration.
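
For readers without VGAM at hand, the most common variant (NB2, with variance μ + μ²/k) can be fitted by direct maximum likelihood; the sketch below (our Python illustration, not the package itself) does so for an intercept-only model, where the MLE of μ coincides with the sample mean:

```python
import numpy as np
from scipy import stats, optimize

rng = np.random.default_rng(42)
k_true, mu_true = 2.0, 5.0
# NB2 parameterisation: mean mu, size k, variance mu + mu**2 / k
y = rng.negative_binomial(k_true, k_true / (k_true + mu_true), size=2000)

def negloglik(params):
    mu, k = np.exp(params)  # log-parameterised to keep both positive
    return -stats.nbinom.logpmf(y, k, k / (k + mu)).sum()

res = optimize.minimize(negloglik, x0=[0.0, 0.0], method="Nelder-Mead")
mu_hat, k_hat = np.exp(res.x)
```

Covariates enter by replacing the constant μ with exp(Xβ); the point of a framework like VGAM is that the NB1, NB2 and further variants differ only in how the variance function is parameterised.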

3.
The beanplot is a graphical method for visualizing univariate distributions, and density forecasts play an important role in many applications. Although graphical methods are widely used for illustrating distributions, suitable graphical methods for analysing and comparing density forecasts have been lacking. This article explains how density forecasts and the related observed densities can be visualized in parallel using beanplots across different groups of data. The visualization method is illustrated with industrial and simulated data. The functionality extends the plotting function of the R package beanplot, and the developed functions are made available for the R programming language.

4.
This R package implements three types of goodness-of-fit tests for some widely used probability distributions with unknown parameters, namely tests based on data transformations, tests based on the ratio of two estimators of a dispersion parameter, and correlation tests. Most of the considered tests have been proved to be powerful against a wide range of alternatives, and some new ones are proposed here. The package's functionality is illustrated with several examples using data sets from environmental studies, biology and finance, among other areas.
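
A correlation test of the kind described can be sketched as follows: correlate the ordered sample with the theoretical quantiles evaluated at plotting positions, with values near 1 indicating a good fit. This Python illustration (ours, not the package's code) uses the exponential distribution and Filliben-style plotting positions:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)

def ppcc_exponential(x):
    """Probability-plot correlation between the ordered sample and the
    exponential quantiles at Filliben plotting positions."""
    n = len(x)
    p = (np.arange(1, n + 1) - 0.3175) / (n + 0.365)
    return np.corrcoef(np.sort(x), stats.expon.ppf(p))[0, 1]

r_exp = ppcc_exponential(rng.exponential(scale=2.0, size=1000))  # well specified
r_unif = ppcc_exponential(rng.uniform(size=1000))                # mis-specified
```

A formal test compares the statistic to simulated critical values; here the exponential sample scores visibly closer to 1 than the uniform one.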

5.
This article proposes a new class of copula-based dynamic models for high-dimensional conditional distributions, facilitating the estimation of a wide variety of measures of systemic risk. Our proposed models draw on successful ideas from the literature on modeling high-dimensional covariance matrices and on recent work on models for general time-varying distributions. Our use of copula-based models enables the estimation of the joint model in stages, greatly reducing the computational burden. We use the proposed new models to study a collection of daily credit default swap (CDS) spreads on 100 U.S. firms over the period 2006 to 2012. We find that while the probability of distress for individual firms has fallen greatly since the financial crisis of 2008–2009, the joint probability of distress (a measure of systemic risk) is substantially higher now than in the precrisis period. Supplementary materials for this article are available online.

6.
In this paper, we introduce a new class of bivariate distributions whose marginals are beta-generated distributions. Copulas are employed to construct this bivariate extension of the beta-generated distributions. It is shown that when Archimedean copulas and convex beta generators are used in generating bivariate distributions, the copulas of the resulting distributions also belong to the Archimedean family. The dependence of the proposed bivariate distributions is examined. Simulation results for beta generators and an application to financial risk management are presented.
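
The construction can be sketched in two steps: draw a pair of uniforms from an Archimedean copula (here Clayton, sampled by the Marshall-Olkin method), then push each uniform through a beta quantile function to obtain beta marginals. A hypothetical Python illustration with arbitrary parameter values:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
theta, n = 2.0, 20000

# Marshall-Olkin sampling of the Clayton copula (an Archimedean member):
# U_i = (1 + E_i / V) ** (-1/theta), with V ~ Gamma(1/theta), E_i ~ Exp(1)
v = rng.gamma(shape=1.0 / theta, scale=1.0, size=n)
e = rng.exponential(size=(n, 2))
u = (1.0 + e / v[:, None]) ** (-1.0 / theta)

# beta marginals: push each uniform through a beta quantile function
x = stats.beta.ppf(u[:, 0], 2.0, 5.0)
y = stats.beta.ppf(u[:, 1], 3.0, 3.0)

# Kendall's tau is invariant to the monotone marginal transforms,
# so it should stay near the Clayton value theta / (theta + 2) = 0.5
tau, _ = stats.kendalltau(x, y)
```

That the rank dependence survives the marginal transforms untouched is exactly why the copula of the resulting bivariate distribution remains in the Archimedean family.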

7.
In this paper, we suggest a technique to quantify model risk, particularly model misspecification, for binary response regression problems found in financial risk management, such as credit risk modelling. We choose the probability of default model as one instance of the many credit risk models that may be misspecified in a financial institution. To illustrate model misspecification for the probability of default, we quantify two specific statistical predictive techniques for binary responses, namely binary logistic regression and the complementary log-log model. The maximum likelihood estimation technique is employed for parameter estimation, and statistical inference, specifically goodness of fit and model performance measures, is assessed. Using a simulated dataset and the Taiwan credit card default dataset, our findings reveal that with the same sample size and very few simulation iterations, the two techniques produce similar goodness-of-fit results but completely different performance measures. However, as the number of iterations increases, binary logistic regression on the balanced dataset shows better goodness of fit and performance measures than the complementary log-log technique for both the simulated and real datasets.
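
One structural difference behind the diverging results is visible in the links themselves: the logit link is symmetric about zero, while the complementary log-log link is not. A small self-contained illustration (our code, not the paper's):

```python
import numpy as np

def logit_inv(eta):
    """Inverse logit link: F(eta) = 1 / (1 + exp(-eta))."""
    return 1.0 / (1.0 + np.exp(-eta))

def cloglog_inv(eta):
    """Inverse complementary log-log link: F(eta) = 1 - exp(-exp(eta))."""
    return 1.0 - np.exp(-np.exp(eta))

eta = np.linspace(-2.0, 2.0, 9)
p_logit = logit_inv(eta)
p_cll = cloglog_inv(eta)

# symmetry check: F(eta) + F(-eta) = 1 holds for the logit link only
asym_logit = np.max(np.abs(p_logit + p_logit[::-1] - 1.0))
asym_cll = np.max(np.abs(p_cll + p_cll[::-1] - 1.0))
```

The cloglog curve approaches 1 much faster than it approaches 0, so a misspecified link distorts predicted default probabilities asymmetrically between the two classes.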

8.
In practice, a financial or actuarial data set may be skewed or heavy-tailed, which motivates the study of classes of distribution functions in risk management theory that capture these characteristics and so yield more accurate risk analysis. In this paper, we consider a multivariate tail conditional expectation (MTCE) for multivariate scale mixtures of skew-normal (SMSN) distributions. This class contains skewed distributions, and some of its members can be used to analyse heavy-tailed data sets. We also provide a closed form for the TCE in the univariate skew-normal framework. Numerical examples are provided for illustration.

9.
The inverse gamma-Pareto composite distribution is considered as a model for heavy-tailed data. The maximum likelihood (ML), smoothed empirical percentile (SM), and Bayes estimators (informative and non-informative) are derived for the parameter θ, the boundary point between the supports of the two component distributions. A Bayesian predictive density is derived via a gamma prior for θ, and this density is used to estimate risk measures. The accuracy of the estimators of θ and of the risk measures is assessed via simulation studies. It is shown that the informative Bayes estimator is consistently more accurate than the ML, SM, and non-informative Bayes estimators.

10.
Several two-component mixture models from the transformed gamma and transformed beta families are developed to assess risk performance. Their common statistical properties are given and applications to real insurance loss data are shown. A new data-trimming approach to parameter estimation is proposed using the maximum likelihood method. Assessments with respect to the Value-at-Risk and Conditional Tail Expectation risk measures are presented. Of all the models examined, the mixture of inverse transformed gamma and Burr distributions consistently provides good results in terms of goodness of fit and risk estimation for the Danish fire loss data.

11.
As in many studies, the data collected are limited and an exact value is recorded only if it falls within an interval range; hence, the responses can be left-, interval- or right-censored. Linear (and nonlinear) regression models are routinely used to analyze these types of data and are based on normality assumptions for the error terms. However, such analyses might not provide robust inference when the normality assumptions are questionable. In this article, we develop a Bayesian framework for censored linear regression models by replacing the Gaussian assumption for the random errors with scale mixtures of normal (SMN) distributions. The SMN is an attractive class of symmetric heavy-tailed densities that includes the normal, Student-t, Pearson type VII, slash and contaminated normal distributions as special cases. Using a Bayesian paradigm, an efficient Markov chain Monte Carlo algorithm is introduced to carry out posterior inference, and a new hierarchical prior distribution is suggested for the degrees-of-freedom parameter of the Student-t distribution. The likelihood function is used not only to compute Bayesian model selection measures but also to develop Bayesian case-deletion influence diagnostics based on the q-divergence measure. The proposed methods are implemented in the R package BayesCR, and the newly developed procedures are illustrated with applications to real and simulated data.
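
The scale-mixture idea can be checked with its best-known member: a Student-t variate arises as Z/√W with Z standard normal and W gamma-distributed. A short simulation sketch (ours, unrelated to the BayesCR implementation):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
nu, n = 5.0, 50000

# Student-t as a scale mixture of normals:
# X = Z / sqrt(W), with Z ~ N(0, 1) and W ~ Gamma(nu/2, rate = nu/2)
w = rng.gamma(shape=nu / 2.0, scale=2.0 / nu, size=n)
x = rng.standard_normal(n) / np.sqrt(w)

# the mixture should be indistinguishable from a direct t(nu) sample
ks_stat = stats.kstest(x, stats.t(df=nu).cdf).statistic
```

Augmenting the model with the latent weights W is what makes a Gibbs-type sampler tractable: conditional on W, the model is again Gaussian.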

12.
A considerable problem in statistics and risk management is finding distributions that capture the complex behaviour exhibited by financial data. The importance of higher-order moments in decision making is well recognized, and there is increasing interest in modelling with distributions able to account for these effects. The Pearson system can model a wide range of distributions with various combinations of skewness and kurtosis. This paper provides computational examples of a new, easily implemented method for selecting probability density functions from the Pearson family of distributions. We apply this method to daily, monthly, and annual series using a range of data from commodity markets to macroeconomic variables.
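
Selection within the Pearson system is classically driven by the κ criterion computed from the squared skewness β₁ and the kurtosis β₂. The sketch below states the textbook formula (the helper name and example values are ours, and this is not necessarily the selection method the paper proposes):

```python
def pearson_kappa(beta1, beta2):
    """Pearson's type-selection criterion from the squared skewness (beta1)
    and kurtosis (beta2); the sign and size of kappa indicate the type."""
    num = beta1 * (beta2 + 3.0) ** 2
    den = 4.0 * (4.0 * beta2 - 3.0 * beta1) * (2.0 * beta2 - 3.0 * beta1 - 6.0)
    return num / den if den != 0.0 else float("inf")

# beta1 = 0 with beta2 > 3 gives kappa = 0: the Type VII (Student-t-like) region
kappa_t = pearson_kappa(0.0, 4.0)

# a gamma distribution (shape 2: beta1 = 2, beta2 = 6) sits exactly on the
# Type III boundary, where kappa diverges
kappa_gamma = pearson_kappa(2.0, 6.0)
```

In practice β₁ and β₂ are replaced by their sample estimates, and the resulting κ locates the series in the (β₁, β₂) plane.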

13.
Quantile-quantile plots are most commonly used to compare the shapes of distributions, but they may also be used in conjunction with partial orders on distributions to compare the level and dispersion of distributions that have different shapes. We discuss several easily recognized patterns in quantile-quantile plots that suffice to demonstrate that one distribution is smaller than another in terms of each of several partial orders. We illustrate with financial applications, proposing a quantile plot for comparing the risks and returns of portfolios of investments. As competing portfolios have distributions that differ in level, dispersion, and shape, it is not sufficient to compare portfolios using measures of location and dispersion alone, such as expected returns and variances; quantile plots, with suitable scaling, do aid in such comparisons. In two plots, we compare specific portfolios to the stock market as a whole, finding these portfolios to have higher returns, greater risk or dispersion, and thicker tails than their greater dispersion alone would justify. Nonetheless, investors in these risky portfolios are more than adequately compensated for the risks undertaken.
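
The use of quantile-quantile points to compare level and dispersion can be sketched without any plotting: for samples of the same shape, matched quantiles fall on a line whose slope and intercept measure relative dispersion and level. A small Python illustration (ours):

```python
import numpy as np

rng = np.random.default_rng(3)
x = rng.normal(size=5000)
y = 1.0 + 2.0 * x  # same shape; higher level and greater dispersion

# matched quantiles form the points of a quantile-quantile plot
probs = np.linspace(0.05, 0.95, 19)
qx, qy = np.quantile(x, probs), np.quantile(y, probs)

# a straight pattern signals equal shapes; its slope and intercept
# measure relative dispersion and level
slope, intercept = np.polyfit(qx, qy, deg=1)
```

Curvature in the Q-Q pattern, by contrast, signals a shape difference such as the thicker tails the paper finds in the risky portfolios.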

14.
This paper is concerned with testing and dating structural breaks in the dependence structure of multivariate time series. We consider a cumulative sum (CUSUM) type test for the constancy of copula-based dependence measures, such as Spearman's rank correlation and quantile dependencies. The asymptotic null distribution is not known in closed form, so critical values are estimated by an i.i.d. bootstrap procedure. We analyze size and power properties in a simulation study under different dependence settings, including skewed and fat-tailed distributions. To date breakpoints and to decide whether two estimated break locations belong to the same break event, we propose a pivot confidence interval procedure. Finally, we apply the test to historical data on 10 large financial firms during the last financial crisis, from 2002 to mid-2013.
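
A simplified variant of such a statistic contrasts Spearman's rho before and after each candidate breakpoint, weighted CUSUM-style. The sketch below (our illustration, with an ad hoc weighting and no bootstrap critical values) dates a simulated sign change in the dependence:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(13)
n, k0 = 600, 300

def correlated_pairs(m, rho):
    z1 = rng.standard_normal(m)
    z2 = rho * z1 + np.sqrt(1.0 - rho ** 2) * rng.standard_normal(m)
    return z1, z2

x1, y1 = correlated_pairs(k0, 0.9)       # strong positive dependence ...
x2, y2 = correlated_pairs(n - k0, -0.9)  # ... switching sign at k0
x, y = np.concatenate([x1, x2]), np.concatenate([y1, y2])

def cusum_spearman(x, y, min_seg=30):
    """Weighted contrast of Spearman's rho before/after each candidate break."""
    m = len(x)
    best, khat = -np.inf, None
    for k in range(min_seg, m - min_seg):
        r1 = stats.spearmanr(x[:k], y[:k])[0]
        r2 = stats.spearmanr(x[k:], y[k:])[0]
        t = (k * (m - k) / m ** 1.5) * abs(r1 - r2)
        if t > best:
            best, khat = t, k
    return best, khat

stat, khat = cusum_spearman(x, y)
```

In the paper's setting the statistic is referred to bootstrap critical values rather than eyeballed, but the dating principle, maximising a weighted before/after contrast, is the same.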

15.
Two-sample comparisons, which belong to a basic class of statistical inference problems, are extensively applied in practice, and there is a rich statistical literature on parametric methods for addressing them. In this context, most of the powerful techniques assume normally distributed populations, whereas in practice the actual distributions of the compared samples are commonly unknown. One can then propose a combined test based on the following decision rules: (a) the likelihood-ratio test (LRT) for equality of two normal populations, and (b) the Shapiro–Wilk (S-W) test for normality. Rules (a) and (b) can be merged, e.g., using the Bonferroni correction, to offer a correct comparison of the sample distributions. Alternatively, we propose an exact density-based empirical likelihood (DBEL) ratio test. We develop the tsc package, the first R package available to perform two-sample comparisons using exact test procedures: the LRT, the LRT combined with the S-W test, and the newly developed DBEL ratio test. We present Monte Carlo (MC) results and a real data example to show the efficiency and applicability of the developed procedures.
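
The combined decision rule can be sketched directly: check each sample for normality, test equality of the populations, and run each rule at level α/2 under the Bonferroni correction. In this Python illustration (ours, with a two-sample t-test standing in for the exact LRT):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(11)
a = rng.normal(0.0, 1.0, size=100)
b = rng.normal(2.0, 1.0, size=100)  # same spread, clearly shifted mean

# rule (b): Shapiro-Wilk normality check for each sample
p_norm = min(stats.shapiro(a)[1], stats.shapiro(b)[1])

# rule (a): equality of the two normal populations
# (a two-sample t-test stands in here for the paper's exact LRT)
p_eq = stats.ttest_ind(a, b)[1]

# Bonferroni: run each rule at level alpha / 2
alpha = 0.05
reject_equality = p_eq < alpha / 2.0
```

The DBEL alternative avoids this two-stage construction, and with it the Bonferroni conservatism, by testing the distributions directly.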

16.
In this work, we propose Bayesian measures to quantify the influence of observations on the structural parameters of the simple measurement error model (MEM). Different influence measures, such as those based on the q-divergence between posterior distributions and on Bayes risk, are studied. A strategy based on a perturbation function and MCMC samples is used to compute these measures; the samples from the posterior distributions are obtained with the Metropolis-Hastings algorithm under specific proper prior distributions. The results are illustrated with an application to a real example previously modelled with the MEM in the literature.
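
The sampling step can be illustrated with a minimal random-walk Metropolis-Hastings chain for the posterior mean of a normal model (a generic sketch with our own prior choices, not the MEM of the paper):

```python
import numpy as np

rng = np.random.default_rng(5)
data = rng.normal(loc=3.0, scale=1.0, size=200)

def log_post(mu):
    # N(0, 10^2) prior on mu, unit-variance normal likelihood
    return -0.5 * (mu / 10.0) ** 2 - 0.5 * np.sum((data - mu) ** 2)

# random-walk Metropolis-Hastings
mu, lp, chain = 0.0, log_post(0.0), []
for _ in range(20000):
    prop = mu + 0.3 * rng.standard_normal()
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:  # accept with prob min(1, ratio)
        mu, lp = prop, lp_prop
    chain.append(mu)

post_mean = float(np.mean(chain[5000:]))  # discard burn-in
```

Influence measures of the kind studied then compare such posterior draws with and without a perturbed (or deleted) observation, e.g. via a q-divergence between the two sets of samples.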

17.
The generalized Rayleigh distribution has been introduced and studied extensively in the literature. The closeness of, and separation between, candidate distributions are extremely important when analyzing lifetime data, and both the generalized Rayleigh and Weibull distributions can be used for analyzing skewed datasets. In this article, we compare these two distributions on the basis of Fisher information measures and use them for discrimination purposes. It is evident that the Fisher information measures play an important role in separating the distributions. The total information measures and the variances of different percentile estimators are computed and presented. A real-life dataset is analyzed for illustration, and a numerical comparison study assesses our procedures for separating these two distributions.

18.
We propose a bootstrap-based test of the null hypothesis of equality of two firms' conditional risk measures (RMs) at a single point in time. The test can be applied to a wide class of conditional risk measures derived from parametric or semiparametric models. Our iterative testing procedure produces a grouped ranking of the RMs, which has direct application for systemic risk analysis: firms within a group are statistically indistinguishable from each other, but significantly more risky than the firms belonging to lower-ranked groups. A Monte Carlo simulation demonstrates that our test has good size and power properties. We apply the procedure to a sample of 94 U.S. financial institutions using ΔCoVaR, MES, and %SRISK, and find that for some periods and RMs, we cannot statistically distinguish the 40 most risky firms due to estimation uncertainty.
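
The flavour of such a test can be conveyed by a percentile-bootstrap comparison of two historical VaR estimates (a simplified stand-in for the paper's procedure; the firms, sample sizes and resampling scheme are our assumptions):

```python
import numpy as np

rng = np.random.default_rng(9)
ret_a = rng.normal(0.0, 1.0, size=1000)  # firm A returns
ret_b = rng.normal(0.0, 2.0, size=1000)  # firm B: twice the volatility

def hist_var(returns, alpha=0.95):
    """Historical Value-at-Risk: the alpha-quantile of the loss -returns."""
    return np.quantile(-returns, alpha)

obs_diff = hist_var(ret_b) - hist_var(ret_a)

# percentile bootstrap for the difference of the two VaRs
diffs = np.array([
    hist_var(rng.choice(ret_b, ret_b.size, replace=True))
    - hist_var(rng.choice(ret_a, ret_a.size, replace=True))
    for _ in range(500)
])
lo, hi = np.quantile(diffs, [0.025, 0.975])
significant = not (lo <= 0.0 <= hi)  # 0 outside the interval: distinct RMs
```

When the interval covers zero, the two firms would be placed in the same risk group; iterating such pairwise comparisons is what produces the grouped ranking.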

19.
In modelling financial return time series and time-varying volatility, the Gaussian and Student-t distributions are widely used in stochastic volatility (SV) models; other distributions, such as the Laplace distribution and the generalized error distribution (GED), are also common in SV modelling. This paper therefore proposes the use of the generalized t (GT) distribution, whose special cases include the Gaussian, Student-t, Laplace and GED distributions. Since the GT distribution is a member of the scale mixture of uniform (SMU) family of distributions, we handle it via its SMU representation. We show that this SMU form can substantially simplify the Gibbs sampler for Bayesian simulation-based computation and can provide a means of identifying outliers. In an empirical study, we adopt a GT-SV model to fit the daily returns of the exchange rate of the Australian dollar against three other currencies, using the exchange rate against the US dollar as a covariate. Model implementation relies on Bayesian Markov chain Monte Carlo algorithms using the WinBUGS package.

20.
Risk management of stock portfolios is a fundamental problem in financial analysis, since it indicates the potential losses of an investment at any given time. The objective of this study is to use bivariate static conditional copulas to quantify the dependence structure and to estimate the risk measure Value-at-Risk (VaR). Stocks that have been performing outstandingly on the Brazilian stock exchange (B3, Gerdau, Magazine Luiza, and Petrobras) were selected to compose pairs-trading portfolios. Owing to the flexibility this methodology offers in constructing multivariate distributions and aggregating risk in finance, we used the copula-APARCH approach with the Normal, Student-t, and Joe-Clayton copula functions. In most scenarios, the results showed a pattern of dependence at the extremes. Moreover, the copula family appears not to be decisive for VaR estimation, since in most portfolios the appropriate copulas lead to significant VaR estimates. The best-fitting models provided conservative risk measure estimates at the 5% and 1% levels, even in a more aggressive scenario.


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号