Similar Documents
1.
In canonical vector time series autoregressions, which permit dependence only on past values, the errors generally show contemporaneous correlation. By contrast, structural vector autoregressions allow contemporaneous dependence between series and assume errors with no contemporaneous correlation. Such models, when they have a recursive structure, can be described by a directed acyclic graph. We show, with the use of a real example, how the identification of these models may be assisted by examining the conditional independence graph of contemporaneous and lagged variables. In this example we identify the causal dependence of monthly Italian bank loan interest rates on government bond and repurchase agreement rates. When the number of series is larger, structural modelling of the canonical errors alone is a useful initial step, and we first present such an example to demonstrate the general approach to identifying a directed graphical model.
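As a rough illustration of the first step of such an analysis, the sketch below fits a canonical VAR to three simulated stand-in series and inspects the contemporaneous correlation of the residuals; the series names are placeholders, not the authors' data.

```python
# Minimal sketch (not the paper's code): fit a canonical VAR and inspect
# contemporaneous residual correlation, the starting point for identifying
# a recursive (DAG) structural form. Series are simulated placeholders.
import numpy as np
import pandas as pd
from statsmodels.tsa.api import VAR

rng = np.random.default_rng(0)
levels = pd.DataFrame(rng.standard_normal((200, 3)).cumsum(axis=0),
                      columns=["loan_rate", "bond_rate", "repo_rate"])

# Difference the simulated random walks so the VAR is fit on stationary data.
res = VAR(levels.diff().dropna()).fit(maxlags=2)
# Strong off-diagonal entries suggest contemporaneous links to model structurally.
print(np.corrcoef(res.resid, rowvar=False))
```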

2.
The average household in the UK is in debt to the tune of nearly £9000—and that is not including the mortgage. Consumer borrowing, on credit cards, car and retail finance deals, overdrafts and unsecured personal loans, was £4525 per average UK adult at the end of February 2007. Are we drowning in debt? Chris Tapp, Associate Director of Credit Action, reports.

3.
Urban development links the micro level of industrial development with the macro level of national strategy and is the meso-level bridge of economic growth. This paper applies the logarithmic mean Divisia index (LMDI) method to analyse the structural efficiency of bank lending in Weinan with respect to the urban economy, and connects it to industrial structure, building a bridge between financial support and real-economy development. On this basis, it investigates whether Weinan's financial resources, chiefly in the form of bank loans, have promoted the city's economic development, and through which channels they have done so. The results show that the activity effect of bank lending in Weinan contributes most to the total effect on GDP change, followed by the efficiency effect, while the structure effect has no clear impact.
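For readers unfamiliar with the LMDI method, the sketch below performs an additive LMDI decomposition of a change in GDP into activity, structure and efficiency effects. The three-factor identity and all numbers are illustrative assumptions, not the paper's data.

```python
# Hedged sketch of additive LMDI-I decomposition.
import numpy as np

def logmean(a, b):
    """Logarithmic mean L(a, b) = (a - b) / (ln a - ln b), with L(a, a) = a."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    with np.errstate(divide="ignore", invalid="ignore"):
        out = (a - b) / (np.log(a) - np.log(b))
    return np.where(np.isclose(a, b), a, out)

def lmdi_effects(factors0, factors1):
    """V_i = product of factors per sector i, V = sum over sectors.
    Returns each factor's additive contribution to V1 - V0 (exact for LMDI-I)."""
    v0, v1 = np.prod(factors0, axis=0), np.prod(factors1, axis=0)
    w = logmean(v1, v0)
    return [float(np.sum(w * np.log(f1 / f0)))
            for f0, f1 in zip(factors0, factors1)]

# Hypothetical identity: sector GDP_i = total loans x loan share_i x GDP per loan_i,
# giving activity, structure and efficiency effects respectively.
L = np.full(3, 100.0), np.full(3, 130.0)                 # total loans, t=0 and t=1
s = np.array([.5, .3, .2]), np.array([.45, .35, .20])    # sector loan shares
e = np.array([.8, 1.1, .9]), np.array([.9, 1.2, 1.0])    # GDP per unit loan

activity, structure, efficiency = lmdi_effects([L[0], s[0], e[0]],
                                               [L[1], s[1], e[1]])
print(activity, structure, efficiency)   # the three effects sum to the GDP change
```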

4.
Personal housing mortgage loans (hereafter, mortgages) are both an important instrument by which the state stimulates domestic demand and an important product for commercial banks. Only by genuinely combining the macro mission of stimulating demand with the managerial goal of profit maximization can mortgage lending sustain its momentum. Taking a Chinese bank as an example, this paper quantifies the influence of the main determinants of mortgage profit and models the mechanism by which that profit is formed, offering commercial banks in China a new approach to raising mortgage profitability, so that mortgages can fulfil their macro mission on the premise of being "profitable, and maximally profitable". 1. Determining the basic form of the model. By accounting principles, a bank's pre-tax mortgage profit (Profit) equals the difference between mortgage revenue (Return) and mortgage cost (Cost)…

5.
The Bonus–Malus model is applied to bank lending: by adjusting the bank's Bonus–Malus loan interest rate, loan fraud can be reduced. The core is a personal credit system for borrowers in which the repayment rate a borrower faces in the next period is determined by the rate and repayment behaviour in the previous period, which provides an incentive mechanism different from a full-audit mechanism. Under some simple assumptions it can be shown that the Bonus–Malus rate eliminates all fraudulent behaviour, rather than merely reducing it.
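A toy sketch of the mechanism follows, with rate classes and update rules assumed for illustration (the abstract does not publish the paper's specification):

```python
# Illustrative Bonus-Malus style update: next period's loan rate moves down a
# class after honest repayment and jumps up after detected misreporting.
# The rate ladder and penalty size are invented for this example.
def next_rate_class(current_class, fraud_detected, n_classes=5, penalty=2):
    """Return next period's rate class: 0 = best (lowest rate)."""
    if fraud_detected:
        return min(n_classes - 1, current_class + penalty)
    return max(0, current_class - 1)

rates = [0.04, 0.05, 0.06, 0.08, 0.11]   # hypothetical rate ladder
c = 2
for fraud in [False, False, True, False]:
    c = next_rate_class(c, fraud)
    print(f"fraud={fraud}, new rate={rates[c]:.2%}")
```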

6.
Bayesian model building techniques are developed for data with a strong time series structure and possibly exogenous explanatory variables that have strong explanatory and predictive power. The emphasis is on determining whether, when the data have a strong time series structure, any explanatory variables should also be included in the model. We use a time series model that is linear in past observations and that can capture both stochastic and deterministic trend, seasonality and serial correlation. We propose plotting absolute predictive error against predictive standard deviation. A series of such plots is used to determine which of several nested and non-nested models is optimal in terms of minimizing the dispersion of the predictive distribution and restricting predictive outliers. We apply the techniques to modelling monthly counts of fatal road crashes in Australia, where economic, consumption and weather variables are available, and we find that three such variables should be included in addition to the time series filter. The approach leads to graphical techniques for determining the strength of relationships between the dependent variable and covariates and for detecting model inadequacy, as well as providing useful numerical summaries.
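The proposed diagnostic plot is easy to sketch; the values below are simulated, not the Australian road-crash data.

```python
# Sketch of the diagnostic the abstract proposes: absolute one-step predictive
# error against predictive standard deviation. Points far above the 2-sd line
# flag predictive outliers. All data here are simulated.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(1)
pred_sd = rng.uniform(0.5, 2.0, 100)
abs_err = np.abs(rng.normal(0, pred_sd))   # errors consistent with the model

plt.scatter(pred_sd, abs_err, s=12)
xs = np.linspace(0.5, 2.0, 50)
plt.plot(xs, 2 * xs, "r--", label="2 x predictive sd")
plt.xlabel("predictive standard deviation")
plt.ylabel("|predictive error|")
plt.legend()
plt.show()
```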

7.
Two periodic-review models arising in insurance are considered in the framework of the cost approach. The first treats the insurance company's performance under the assumption of capital injections and reinsurance. The second deals with asset selling and bank loans. The aim of the article is to demonstrate a method for establishing the optimal control of such applied stochastic models and to study their stability. For this purpose, a sensitivity analysis is carried out. Numerical results are also provided.

8.
Large-scale Bayesian spatial modelling of air pollution for policy support
The potential effects of air pollution are a major concern both in terms of the environment and in relation to human health. In order to support environmental policy, there is a need for accurate measurements of the concentrations of pollutants at high geographical resolution over large regions. However, within such regions, there are likely to be areas where the monitoring information will be sparse and so methods are required to accurately predict concentrations. Set within a Bayesian framework, models are developed which exploit the relationships between pollution and geographical covariate information, such as land use, climate and transport variables together with spatial structure. Candidate models are compared based on their ability to predict a set of validation sites. The chosen model is used to perform large-scale prediction of nitrogen dioxide at a 1×1 km resolution for the entire EU. The models allow probabilistic statements to be made with regard to the levels of air pollution that might be experienced in each area. When combined with population data, such information can be invaluable in informing policy by indicating areas for which improvements may be given priority.

9.
Variable selection is an important task in regression analysis, and the performance of a statistical model depends heavily on the chosen subset of predictors. Several methods exist for selecting the most relevant variables to construct a good model. In practice, however, the dependent variable may take positive continuous values and not be normally distributed; in such situations the gamma distribution is more suitable than the normal for building a regression model. This paper introduces a heuristic approach that performs variable selection for gamma regression models using artificial bee colony optimization. We evaluated the proposed method against classical selection methods such as backward elimination and stepwise selection. Both simulation studies and real data examples demonstrate the accuracy of our selection procedure.
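A greatly simplified sketch of the idea follows: bee-colony-style search over predictor subsets scored by the AIC of a gamma GLM. The onlooker phase of the full ABC algorithm is omitted, and nothing here is the authors' implementation; data and parameters are invented.

```python
# Simplified bee-colony-style variable selection for a gamma GLM (illustration
# only): candidate subsets are "food sources", neighbours flip one variable,
# and subsets that stop improving are abandoned (scout phase).
import numpy as np
import statsmodels.api as sm

def fitness(X, y, mask):
    if not mask.any():
        return -np.inf
    model = sm.GLM(y, sm.add_constant(X[:, mask]),
                   family=sm.families.Gamma(link=sm.families.links.Log()))
    return -model.fit().aic          # higher fitness = lower AIC

def abc_select(X, y, n_bees=10, n_iter=50, limit=5, seed=0):
    rng = np.random.default_rng(seed)
    p = X.shape[1]
    sources = rng.random((n_bees, p)) < 0.5      # random initial subsets
    fits = np.array([fitness(X, y, s) for s in sources])
    trials = np.zeros(n_bees, int)
    for _ in range(n_iter):
        for i in range(n_bees):
            cand = sources[i].copy()
            cand[rng.integers(p)] ^= True        # neighbour: flip one variable
            f = fitness(X, y, cand)
            if f > fits[i]:
                sources[i], fits[i], trials[i] = cand, f, 0
            else:
                trials[i] += 1
            if trials[i] > limit:                # scout: restart stale source
                sources[i] = rng.random(p) < 0.5
                fits[i] = fitness(X, y, sources[i])
                trials[i] = 0
    best = fits.argmax()
    return sources[best], -fits[best]

# Hypothetical usage with simulated positive responses:
rng = np.random.default_rng(1)
X = rng.random((300, 8))
y = rng.gamma(shape=2.0, scale=np.exp(1 + 2 * X[:, 0] - X[:, 3]) / 2.0)
mask, aic = abc_select(X, y)
print(mask, aic)
```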

10.
To analyse the risk factors of coronary heart disease (CHD), we apply the Bayesian model averaging approach that formalizes the model selection process and deals with model uncertainty in a discrete-time survival model to the data from the Framingham Heart Study. We also use the Alternating Conditional Expectation algorithm to transform the risk factors, such that their relationships with CHD are best described, overcoming the problem of coding such variables subjectively. For the Framingham Study, the Bayesian model averaging approach, which makes inferences about the effects of covariates on CHD based on an average of the posterior distributions of the set of identified models, outperforms the stepwise method in predictive performance. We also show that age, cholesterol, and smoking are nonlinearly associated with the occurrence of CHD and that P-values from models selected from stepwise methods tend to overestimate the evidence for the predictive value of a risk factor and ignore model uncertainty.

11.
Abstract. We propose an extension of graphical log‐linear models to allow for symmetry constraints on some interaction parameters that represent homologous factors. The conditional independence structure of such quasi‐symmetric (QS) graphical models is described by an undirected graph with coloured edges, in which a particular colour corresponds to a set of equality constraints on a set of parameters. Unlike standard QS models, the proposed models apply to contingency tables for which only some variables or sets of the variables have the same categories. We study the graphical properties of such models, including conditions for decomposition of model parameters and of maximum likelihood estimates.

12.
Quantitative analysis of the determination of international interest rate levels and the measurement of their risk
From the severe debt crises that struck Mexico, Brazil and other Latin American developing countries in the 1980s to the currency and financial turmoil now shaking Europe, one wave has followed another. How to avoid international financial risk effectively has become a major problem that countries around the world urgently need to solve. This paper first gives a rigorous theoretical delimitation of the range over which international interest rates vary; it then derives two formulas for measuring international interest rate risk, providing recipient countries with a feasible way to calculate that risk accurately and thus to avoid or reduce it effectively.

13.
Predicting the time of default in a credit risk setting via survival analysis must take a high censoring rate into account: the rate is high because the majority of debtors never default. Mixture cure models allow the part of the loan population that is unsusceptible to default to be modelled separately from the time of default for the susceptible population. In this article, we extend the mixture cure model to include time-varying covariates. We illustrate the method via simulations and by incorporating macro-economic factors as predictors for an actual bank dataset.
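The high censoring the abstract mentions is easy to see in a small simulation; all parameter values below are invented for illustration.

```python
# Small simulation sketch of the mixture cure idea: a fraction 1 - pi of
# debtors is "cured" (never defaults), so their times are always censored.
import numpy as np

rng = np.random.default_rng(2)
n, pi_susceptible = 10_000, 0.3
susceptible = rng.random(n) < pi_susceptible
default_time = np.where(susceptible, rng.exponential(24, n), np.inf)  # months
censor_time = rng.uniform(12, 60, n)          # end of observation window

observed = np.minimum(default_time, censor_time)
event = default_time <= censor_time
print(f"censoring rate: {1 - event.mean():.1%}")   # high, as the abstract notes
```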

14.
Summary.  A statistical analysis of a bank's credit card database is presented. The database is a snapshot of accounts whose holders have missed a payment in a given month but who do not subsequently default. The variables on which there is information are observable measures on the account (such as profit and activity) and whether actions that are available to the bank (such as letters and telephone calls) have been taken. A primary objective for the bank is to gain insight into the effect that collections activity has on on-going account usage. A neglog transformation is introduced that highlights features hidden on the original scale and improves the joint distribution of the covariates. Quantile regression, a methodology new to the credit scoring industry, is used because it is relatively assumption free and because different relationships may be manifest in different parts of the response distribution. The large size of the database is handled by selecting relatively small subsamples for training and then building empirical distributions from repeated samples for validation. In the application to the clients who have missed a single payment, a substantive finding is that the predictor of the median of the target variable contains different variables from the predictor of the 30% quantile. This suggests that different mechanisms may be at play in different parts of the distribution.
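A sketch of the neglog transform and a two-quantile fit on synthetic data follows; the variable names are invented stand-ins for the account measures described above.

```python
# Hedged sketch: neglog transform plus quantile regression at two quantiles.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

def neglog(x):
    """sign(x) * log(1 + |x|): log-like compression that also handles
    negative values such as account losses."""
    return np.sign(x) * np.log1p(np.abs(x))

rng = np.random.default_rng(3)
df = pd.DataFrame({"profit": rng.normal(0, 500, 1000),
                   "activity": rng.gamma(2, 10, 1000)})
df["usage"] = (0.5 * neglog(df["profit"]) + 0.1 * df["activity"]
               + rng.standard_t(4, 1000))

for q in (0.3, 0.5):   # the abstract finds different predictors at 30% vs median
    fit = smf.quantreg("usage ~ neglog(profit) + activity", df).fit(q=q)
    print(q, fit.params.round(3).to_dict())
```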

15.
Summary.  We deal with contingency table data that are used to examine the relationships between a set of categorical variables or factors. We assume that such relationships can be adequately described by the conditional independence structure that is imposed by an undirected graphical model. If the contingency table is large, a desirable simplified interpretation can be achieved by combining some categories, or levels, of the factors. We introduce conditions under which such an operation does not alter the Markov properties of the graph. Implementation of these conditions leads to Bayesian model uncertainty procedures based on reversible jump Markov chain Monte Carlo methods. The methodology is illustrated on a 2×3×4 and up to a 4×5×5×2×2 contingency table.

16.
Model checking with discrete data regressions can be difficult because the usual methods such as residual plots have complicated reference distributions that depend on the parameters in the model. Posterior predictive checks have been proposed as a Bayesian way to average the results of goodness-of-fit tests in the presence of uncertainty in estimation of the parameters. We try this approach using a variety of discrepancy variables for generalized linear models fitted to a historical data set on behavioural learning. We then discuss the general applicability of our findings in the context of a recent applied example on which we have worked. We find that the following discrepancy variables work well, in the sense of being easy to interpret and sensitive to important model failures: structured displays of the entire data set, general discrepancy variables based on plots of binned or smoothed residuals versus predictors and specific discrepancy variables created on the basis of the particular concerns arising in an application. Plots of binned residuals are especially easy to use because their predictive distributions under the model are sufficiently simple that model checks can often be made implicitly. The following discrepancy variables did not work well: scatterplots of latent residuals defined from an underlying continuous model and quantile–quantile plots of these residuals.
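A minimal sketch of the binned residual display on simulated logistic-regression data follows; the bin count and the 2-standard-error bands are conventional choices, not prescriptions from the paper.

```python
# Sketch of a binned residual plot for a logistic regression, the discrepancy
# display the abstract found most usable. Data are simulated.
import numpy as np
import statsmodels.api as sm
import matplotlib.pyplot as plt

rng = np.random.default_rng(4)
x = rng.normal(size=2000)
y = (rng.random(2000) < 1 / (1 + np.exp(-(0.5 + 1.2 * x)))).astype(float)

fit = sm.GLM(y, sm.add_constant(x), family=sm.families.Binomial()).fit()
p = fit.fittedvalues            # predicted probabilities
resid = y - p

order = np.argsort(p)
bins = np.array_split(order, 40)               # ~50 observations per bin
bin_p = np.array([p[b].mean() for b in bins])
bin_r = np.array([resid[b].mean() for b in bins])
se = np.array([2 * np.sqrt(p[b].mean() * (1 - p[b].mean()) / len(b))
               for b in bins])                 # approximate 2-se bands

plt.scatter(bin_p, bin_r, s=12)
plt.plot(bin_p, se, "r--")
plt.plot(bin_p, -se, "r--")
plt.xlabel("mean predicted probability")
plt.ylabel("mean residual")
plt.show()
```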

17.
Latent variable models are widely used for the joint modelling of mixed data, including nominal, ordinal, count and continuous data. In this paper, we consider a latent variable model for jointly modelling the relationships between mixed binary, count and continuous variables and some observed covariates. We assume that, given a latent variable, the mixed variables of interest are independent, with the count and continuous variables having Poisson and normal distributions, respectively. As such data may be extracted from different subpopulations, unobserved heterogeneity has to be taken into account, so a mixture distribution is adopted for the latent variable to account for the heterogeneity. The generalized EM algorithm, which uses the Newton–Raphson algorithm inside the EM algorithm, is used to compute the maximum likelihood estimates of the parameters, and the standard errors of these estimates are computed using the supplemented EM algorithm. An analysis of the primary biliary cirrhosis data is presented as an application of the proposed model.

18.
ABSTRACT

Traditional credit risk assessment models do not consider the time factor: they consider only whether a customer will default, not when, so their results cannot support profit-maximizing decisions. In fact, even if a customer defaults, the financial institution can still profit under some conditions. Much recent research therefore incorporates the Cox proportional hazards model into credit scoring, predicting the time at which a customer is most likely to default. However, to exploit the fully dynamic capability of the Cox proportional hazards model, time-varying macroeconomic variables are required, which demands more elaborate data collection. Since short-term defaults are the ones that inflict the greatest losses on a financial institution, a loan manager approving applications is less interested in predicting when a loan will default than in identifying applications that may default within a short period of time. This paper proposes a decision tree-based short-term default credit risk assessment model. The goal is to use the decision tree to identify short-term defaults and produce a highly accurate model that distinguishes defaulting loans. The paper integrates bootstrap aggregating (bagging) with the synthetic minority over-sampling technique (SMOTE) to improve the decision tree's stability and its performance on unbalanced data. Finally, a real case of small and medium enterprise loan data drawn from a local financial institution in Taiwan is presented to further illustrate the proposed approach. Comparing the results with those of logistic regression and Cox proportional hazards models shows that the recall and precision of the proposed model are clearly superior.
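One plausible rendering of the bagging-plus-SMOTE combination with scikit-learn and imbalanced-learn follows; the synthetic data stand in for the Taiwanese SME loan data, and all hyperparameters are illustrative.

```python
# Hedged sketch of SMOTE + bagged decision trees on an imbalanced dataset.
from imblearn.over_sampling import SMOTE
from imblearn.pipeline import Pipeline
from sklearn.datasets import make_classification
from sklearn.ensemble import BaggingClassifier
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

# Synthetic stand-in: ~5% of loans default (class 1).
X, y = make_classification(n_samples=3000, weights=[0.95], random_state=0)

model = Pipeline([
    # SMOTE is applied only to training folds by the imblearn pipeline.
    ("smote", SMOTE(random_state=0)),
    ("bagging", BaggingClassifier(DecisionTreeClassifier(max_depth=5),
                                  n_estimators=50, random_state=0)),
])
# Recall on the minority (default) class, the metric the paper emphasizes.
print(cross_val_score(model, X, y, scoring="recall", cv=5).mean())
```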

19.
Shi, Wang, Murray-Smith and Titterington (Biometrics 63:714–723, 2007) proposed a Gaussian process functional regression (GPFR) model to model functional response curves with a set of functional covariates. Their method addresses two main problems: modelling a nonlinear and nonparametric regression relationship, and modelling the covariance structure and mean structure simultaneously. The method gives very good results for curve fitting and prediction but side-steps the problem of heterogeneity. In this paper we present a new method for modelling functional data with 'spatially' indexed data, i.e., where the heterogeneity depends on factors such as region and individual patient information. For data collected from different sources, we assume that the data corresponding to each curve (or batch) follow a Gaussian process functional regression model as a lower-level model, and we introduce an allocation model for the latent indicator variables as a higher-level model, which depends on the information related to each batch. This method takes advantage of both GPFR and mixture models and therefore improves the accuracy of predictions. The mixture model is also used for curve clustering, focusing on clustering the functional relationships between the response curve and the covariates, i.e. clustering based on the surface shape of the functional response against the set of functional covariates. The model is examined on simulated data and real data.

20.
Structural vector autoregressive analysis for cointegrated variables
Summary  Vector autoregressive (VAR) models are capable of capturing the dynamic structure of many time series variables. Impulse response functions are typically used to investigate the relationships between the variables included in such models. In this context, the relevant impulses, innovations or shocks to be traced out in an impulse response analysis have to be specified by imposing appropriate identifying restrictions, and taking into account the cointegration structure of the variables offers interesting possibilities for doing so. Therefore VAR models which explicitly take into account the cointegration structure of the variables, so-called vector error correction models, are considered. Specification, estimation and validation of reduced-form vector error correction models are briefly outlined, and imposing structural short- and long-run restrictions within these models is discussed. I thank an anonymous reader for comments on an earlier draft of this paper that helped me to improve the exposition.
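A minimal reduced-form VECM fit in statsmodels illustrates the starting point; the simulated pair of series shares one stochastic trend, and the structural identification step discussed in the paper would impose further restrictions on top of this.

```python
# Sketch of a reduced-form VECM fit on a simulated cointegrated pair.
import numpy as np
import pandas as pd
from statsmodels.tsa.vector_ar.vecm import VECM, select_coint_rank

rng = np.random.default_rng(5)
trend = rng.standard_normal(300).cumsum()          # shared stochastic trend
data = pd.DataFrame({"y1": trend + rng.standard_normal(300),
                     "y2": 0.8 * trend + rng.standard_normal(300)})

# Johansen trace test to choose the cointegration rank.
rank = select_coint_rank(data, det_order=0, k_ar_diff=1)
res = VECM(data, k_ar_diff=1, coint_rank=rank.rank, deterministic="co").fit()
print(res.beta)    # estimated cointegrating vector(s)
```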
