Similar Articles
20 similar articles were retrieved (search time: 31 ms).
1.
Communications in Statistics - Theory and Methods, 2012, 41(16-17): 3126-3137
This article proposes a permutation procedure for evaluating the performance of different classification methods. In particular, we focus on two of the most widespread classification methodologies: latent class analysis and k-means clustering. Classification performance is assessed by means of a permutation procedure that allows a direct comparison of the methods, supports the development of a statistical test, and points to potentially better solutions. Our proposal provides an innovative framework for validating a data partitioning and offers guidance on which classification procedure should be used.
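As context for the kind of procedure described, here is a minimal sketch of a permutation test comparing two partitions of the same data; the article's specific statistic and permutation scheme are not reproduced, and a Gaussian mixture is used only as a stand-in for a latent class model.

```python
# Generic permutation test of agreement between two partitions of the same data.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs
from sklearn.metrics import adjusted_rand_score
from sklearn.mixture import GaussianMixture  # stand-in for a latent class model

rng = np.random.default_rng(0)
X, _ = make_blobs(n_samples=300, centers=3, random_state=0)

labels_km = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)
labels_gm = GaussianMixture(n_components=3, random_state=0).fit_predict(X)

observed = adjusted_rand_score(labels_km, labels_gm)

# Null distribution: shuffle one partition across observations and recompute agreement.
null = np.array([adjusted_rand_score(labels_km, rng.permutation(labels_gm))
                 for _ in range(999)])
p_value = (1 + np.sum(null >= observed)) / (1 + len(null))
print(f"ARI = {observed:.3f}, permutation p-value = {p_value:.3f}")
```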

2.
Adaptive sample size adjustment (SSA) for clinical trials consists of examining early subsets of on-trial data to adjust estimates of sample size requirements. Blinded SSA is often preferred over unblinded SSA because it obviates many logistical complications of the latter and generally introduces less bias. On the other hand, current blinded SSA methods for binary data offer little to no new information about the treatment effect, ignore uncertainties associated with the population treatment proportions, and/or depend on enhanced randomization schemes that risk partial unblinding. I propose an innovative blinded SSA method for use when the primary analysis is a non-inferiority or superiority test regarding a risk difference. The method incorporates evidence about the treatment effect via the likelihood function of a mixture distribution. I compare the new method with an established one and with the fixed sample size study design, in terms of maximization of an expected utility function. The new method maximizes the expected utility better than do the comparators, under a range of assumptions. I illustrate the use of the proposed method with an example that incorporates a Bayesian hierarchical model. Lastly, I suggest topics for future study regarding the proposed methods. Copyright © 2015 John Wiley & Sons, Ltd.
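For orientation, the following is a rough sketch of conventional blinded sample size re-estimation for a risk-difference superiority test, i.e., the style of comparator method the abstract alludes to; the article's mixture-likelihood approach is not reproduced, and the interim numbers are invented.

```python
# Conventional blinded SSA: only the pooled (blinded) event proportion is observed,
# and the assumed treatment effect delta is kept fixed at its design value.
from scipy.stats import norm

def blinded_ssa_risk_difference(pooled_p, delta, alpha=0.025, power=0.9):
    """Per-arm sample size for a one-sided risk-difference test, 1:1 allocation."""
    p1 = pooled_p + delta / 2.0   # implied treatment-arm proportion
    p2 = pooled_p - delta / 2.0   # implied control-arm proportion
    z_a, z_b = norm.ppf(1 - alpha), norm.ppf(power)
    var_sum = p1 * (1 - p1) + p2 * (1 - p2)
    return (z_a + z_b) ** 2 * var_sum / delta ** 2

# Hypothetical interim blinded look: 38 events among 120 pooled subjects, assumed delta 0.15.
pooled_p = 38 / 120
print(round(blinded_ssa_risk_difference(pooled_p, delta=0.15)))
```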

3.
Randomness in financial markets has been recognized for over a century: Bachelier (1900), Cowles (1932), Kendall (1953), and Samuelson (1959). Risk thus enters into efficient portfolio design: Fisher (1906), Williams (1936), Working (1948), Markowitz (1952). Reward-versus-risk decisions then depend upon utility to the investor: Bernoulli (1738), Kelly (1956), Sharpe (1964), and Modigliani (1997). Returns of a portfolio adjusted for risk are measured by a number of ratios: Treynor, Sharpe, Sortino, M2, among others. I will propose a refinement of such ratios. This possibility was mentioned in my recent book, Antieigenvalue Analysis, World Scientific (2011). The result is a new set of growth-to-return, risk-based financial ratios of ratios.
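Two of the classical ratios listed above can be computed as follows; this is a generic illustration and does not reproduce the author's proposed refinement.

```python
# Classical risk-adjusted return ratios: Sharpe and Sortino.
import numpy as np

def sharpe_ratio(returns, risk_free=0.0):
    excess = np.asarray(returns) - risk_free
    return excess.mean() / excess.std(ddof=1)

def sortino_ratio(returns, risk_free=0.0):
    excess = np.asarray(returns) - risk_free
    downside = np.minimum(excess, 0.0)           # only penalize losses
    downside_dev = np.sqrt(np.mean(downside ** 2))
    return excess.mean() / downside_dev

rng = np.random.default_rng(1)
r = rng.normal(0.0005, 0.01, size=252)           # one year of simulated daily returns
print(sharpe_ratio(r), sortino_ratio(r))
```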

4.
Models incorporating "latent" variables have become commonplace in the financial, social, and behavioral sciences. The factor model, the most popular latent variable model, explains the continuous observed variables through a smaller set of latent variables (factors) via a linear relationship. However, complex data often simultaneously display asymmetric dependence, asymptotic dependence, and positive (negative) dependence between random variables, features that linearity, Gaussian distributions, and many other extant distributions cannot capture. This article proposes a nonlinear factor model that can model the above-mentioned dependence features while retaining a simple factor structure. The random variables, marginally distributed as unit Fréchet distributions, are decomposed into max-linear functions of underlying Fréchet idiosyncratic risks, transformed from a Gaussian copula, and independent shared external Fréchet risks. By allowing the random variables to share underlying (latent) pervasive risks with random impact parameters, various dependence structures are created. This provides a promising new technique for generating families of distributions with simple interpretations. We examine the multivariate extreme value properties of the proposed model and investigate maximum composite likelihood methods for the impact parameters of the latent risks. The estimates are shown to be consistent. The estimation schemes are illustrated on several sets of simulated data, where comparisons of performance are addressed. We employ a bootstrap method to obtain standard errors in real data analysis. A real application to financial data reveals inherent dependencies that previous work has not disclosed and demonstrates the model's interpretability for real data. Supplementary materials for this article are available online.
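The max-linear construction with unit Fréchet factors that underlies this kind of model can be sketched as follows; the full specification in the article (Gaussian-copula idiosyncratic risks, random impact parameters) is not reproduced here.

```python
# Max-linear combination of independent unit Frechet factors.
import numpy as np

rng = np.random.default_rng(2)
n, d, k = 5000, 4, 2                          # observations, observed variables, latent factors

A = rng.uniform(0.1, 1.0, size=(d, k))
A /= A.sum(axis=1, keepdims=True)             # rows sum to 1 so margins stay unit Frechet

Z = -1.0 / np.log(rng.uniform(size=(n, k)))   # unit Frechet factors: F(z) = exp(-1/z)
X = np.max(A[None, :, :] * Z[:, None, :], axis=2)   # X_ij = max_l A_jl * Z_il

# Shared extreme factors induce asymptotic dependence between the components of X.
print(np.corrcoef(np.log(X).T).round(2))
```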

5.
A data-driven approach for modeling volatility dynamics and co-movements in financial markets is introduced. Special emphasis is given to multivariate conditionally heteroscedastic factor models in which the volatilities of the latent factors depend on their past values, and the parameters are driven by regime switching in a latent state variable. We propose an innovative indirect estimation method based on the generalized EM algorithm principle combined with a structured variational approach that can handle models with large cross-sectional dimensions. Extensive Monte Carlo simulations and preliminary experiments with financial data show promising results.

6.
A new method of modeling coronary artery calcium (CAC) is needed in order to properly understand the probability of onset and growth of CAC. CAC remains a controversial indicator of cardiovascular disease (CVD) risk, but this may be due to ill-equipped methods of specifying CAC in the analysis phase of studies where CAC is the primary outcome. The modern method of two-part latent growth modeling may represent a strong alternative to the myriad existing methods for modeling CAC. We provide a brief overview of the existing methods of analysis used for CAC before introducing the general latent growth curve model, how it extends into a two-part (semicontinuous) growth model, and how the ubiquitous problem of missing data can be handled effectively. We then present an example of how to model CAC using this framework. We demonstrate that this type of modeling strategy can result in traditional predictors of CAC (e.g., age, gender, and high-density lipoprotein cholesterol) exerting a different impact on the two different, yet simultaneous, operationalizations of CAC. This method of analyzing CAC could inform future analyses of CAC and subsequent discussions about its potential to inform long-term CVD risk and heart events.

7.
This article first reviews the evolution of the classification of China's financial industry. Drawing on the International Standard Industrial Classification (ISIC Rev. 4), Singapore's financial industry classification, and Hong Kong's financial industry classification, it proposes improvements to the financial-industry categories in the Industrial Classification for National Economic Activities (GB/T 4754-2011) published in 2011. Second, under the improved classification, it analyzes the shortcomings of China's method for calculating quarterly value added of the financial industry and proposes a proportion model and a regression model to improve the calculation. Finally, an empirical study using financial-industry data for a region from the 2008 economic census year shows that both models yield fairly accurate results.

8.
Data analytic methods for latent partially ordered classification models (cited 1 time: 0 self-citations, 1 citation by others)
Summary. A general framework is presented for data analysis of latent finite partially ordered classification models. When the latent models are complex, data analytic validation of model fits and of the analysis of the statistical properties of the experiments is essential for obtaining reliable and accurate results. Empirical results are analysed from an application to cognitive modelling in educational testing. It is demonstrated that sequential analytic methods can dramatically reduce the amount of testing that is needed to make accurate classifications.

9.
In this paper, efficient importance sampling (EIS) is used to perform a classical and Bayesian analysis of univariate and multivariate stochastic volatility (SV) models for financial return series. EIS provides a highly generic and very accurate procedure for the Monte Carlo (MC) evaluation of high-dimensional interdependent integrals. It can be used to carry out ML-estimation of SV models as well as simulation smoothing where the latent volatilities are sampled at once. Based on this EIS simulation smoother, a Bayesian Markov chain Monte Carlo (MCMC) posterior analysis of the parameters of SV models can be performed.
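As background, here is a minimal sketch of the canonical univariate SV model that such methods target; the EIS algorithm itself is not implemented, and the parameter values are arbitrary.

```python
# Canonical univariate stochastic volatility model: AR(1) log-variance, Gaussian returns.
import numpy as np

def simulate_sv(T=1000, mu=-9.0, phi=0.97, sigma_eta=0.2, seed=3):
    rng = np.random.default_rng(seed)
    h = np.empty(T)
    h[0] = mu + sigma_eta / np.sqrt(1 - phi**2) * rng.standard_normal()
    for t in range(1, T):
        h[t] = mu + phi * (h[t - 1] - mu) + sigma_eta * rng.standard_normal()
    y = np.exp(h / 2) * rng.standard_normal(T)    # returns driven by latent log-variance h
    return y, h

y, h = simulate_sv()
print(y.std(), np.exp(h / 2).mean())
```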

11.
I explain why at-the-money implied volatility is a biased and inefficient forecast of future realized volatility, using insights from the empirical option-pricing literature. First, I explain how risk premia, which manifest themselves through the disparity between the objective and risk-neutral probability measures, lead to the disparity between realized and implied volatilities. Second, I show that this disparity is a function of the latent spot volatility, which I estimate using the historical volatility and the high-low range. An empirical exercise based on at-the-money implied volatility series of foreign currencies and stock market indexes is supportive of my risk-premia-based explanation of the bias.
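A high-low range volatility estimator of the kind mentioned (here the Parkinson estimator, as one possible choice) can be sketched alongside ordinary close-to-close historical volatility; the toy high/low series below is only a crude stand-in for real intraday data.

```python
# Range-based (Parkinson) vs. close-to-close annualized volatility estimators.
import numpy as np

def parkinson_vol(high, low, periods_per_year=252):
    hl = np.log(np.asarray(high) / np.asarray(low))
    return np.sqrt(periods_per_year * np.mean(hl ** 2) / (4.0 * np.log(2.0)))

def close_to_close_vol(close, periods_per_year=252):
    r = np.diff(np.log(np.asarray(close)))
    return np.sqrt(periods_per_year) * r.std(ddof=1)

rng = np.random.default_rng(9)
close = 100 * np.exp(np.cumsum(rng.normal(0, 0.01, size=252)))
spread = np.exp(np.abs(rng.normal(0, 0.005, size=252)))
high, low = close * spread, close / spread        # crude intraday range proxy
print(parkinson_vol(high, low), close_to_close_vol(close))
```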

12.
Abstract. The first goal of this article is to consider influence analysis of principal Hessian directions (pHd) and highlight how such an analysis can provide valuable insight into its behaviour. Such insight includes reasons why pHd can sometimes return informative results when it is not expected to do so, and why many prefer a residuals-based pHd method over its response-based counterpart. The second goal of this article is to introduce a new influence measure, applicable to many dimension reduction methods, based on average squared canonical correlations. A general form of this measure is also given, allowing for application to dimension reduction methods other than pHd. A sample version of the measure is considered, with respect to pHd, using two example data sets.

13.
In measuring financial risk, the choice of fitted distribution directly affects the accuracy of the risk measure. To track the dynamics of financial return series, a generalized hyperbolic skew Student-t distribution is introduced into the stochastic volatility model (SV-GHSKt) to accommodate the heavy tails, asymmetry, and leverage effects of financial returns. Markov chain Monte Carlo simulation is used to transform the return series into a standardized residual series, and the peaks-over-threshold (POT) model from extreme value theory is then fitted to the tail of the standardized residuals, yielding a new financial risk measurement model: a dynamic VaR model based on SV-GHSKt-POT. An empirical study of the Shanghai Composite Index shows that the SV-GHSKt-POT dynamic VaR model captures the heavy tails, volatility clustering, and leverage effect of financial returns well, and that it improves the accuracy of risk measurement reasonably and effectively, performing particularly well at high confidence levels.
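The peaks-over-threshold step described above can be sketched as follows; the SV-GHSKt filtering stage is replaced by synthetic Student-t residuals purely for illustration, so this is not the article's full model.

```python
# POT step: fit a generalized Pareto distribution to tail losses and read off VaR.
import numpy as np
from scipy.stats import genpareto, t

losses = -t.rvs(df=4, size=5000, random_state=42)      # negated toy returns = losses

u = np.quantile(losses, 0.95)                          # tail threshold
exceedances = losses[losses > u] - u
xi, _, sigma = genpareto.fit(exceedances, floc=0.0)    # GPD shape and scale, location fixed at 0

def pot_var(q, u, xi, sigma, n, n_u):
    """EVT tail estimator of VaR at confidence level q."""
    return u + sigma / xi * ((n * (1 - q) / n_u) ** (-xi) - 1)

print(pot_var(0.99, u, xi, sigma, len(losses), len(exceedances)))
```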

14.
叶五一, 张明, 缪柏其 (Ye Wuyi, Zhang Ming, Miao Baiqi). 《统计研究》 (Statistical Research), 2012, 29(11): 79-83
Value at Risk (VaR) is a very important measure of financial risk, and there has been much recent research on dynamic VaR and conditional VaR (CVaR). Given the well-documented heavy tails of financial asset returns, this article assumes that returns follow a heavy-tailed distribution and that the tail index of that distribution varies with the returns. A tail index regression model is used to estimate the tail index of the heavy-tailed distribution, from which the conditional distribution followed by the tail data of the returns is obtained; this approach is then used, for the first time, to estimate the conditional VaR. An empirical study of the CSI 300 index yields CVaR estimates, tests their predictive performance, and compares them with traditional VaR estimation methods; the empirical results show that the proposed method predicts better.
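For comparison with the article's tail index regression, here is a sketch of a static Hill-type tail-index estimate and the associated high-quantile (VaR) estimate; the covariate-dependent tail index of the article is not reproduced.

```python
# Hill estimator of the tail index plus the Weissman high-quantile (VaR) estimator.
import numpy as np

def hill_var(losses, k, q=0.99):
    """Tail-index estimate from the k largest losses, and the implied VaR at level q."""
    x = np.sort(np.asarray(losses))[::-1]              # descending order statistics
    gamma = np.mean(np.log(x[:k]) - np.log(x[k]))      # Hill estimate of 1/alpha
    n = len(x)
    var_q = x[k] * (k / (n * (1 - q))) ** gamma        # Weissman quantile estimator
    return 1.0 / gamma, var_q

rng = np.random.default_rng(5)
losses = np.abs(rng.standard_t(df=3, size=5000))       # heavy-tailed toy losses
print(hill_var(losses, k=250))
```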

15.
Model-based clustering methods for continuous data are well established and commonly used in a wide range of applications. However, model-based clustering methods for categorical data are less standard. Latent class analysis is a commonly used method for model-based clustering of binary and/or categorical data, but owing to an assumed local independence structure there may not be a correspondence between the estimated latent classes and groups in the population of interest. The mixture of latent trait analyzers model extends latent class analysis by assuming a model for the categorical response variables that depends on both a categorical latent class and a continuous latent trait variable; the discrete latent class accommodates group structure and the continuous latent trait accommodates dependence within these groups. Fitting the mixture of latent trait analyzers model is potentially difficult because the likelihood function involves an integral that cannot be evaluated analytically. We develop a variational approach for fitting the mixture of latent trait analyzers model, and this provides an efficient model fitting strategy. The model is demonstrated on data from the National Long Term Care Survey (NLTCS) and on voting records from the U.S. Congress. It is shown to yield intuitive clustering results, and it gives a much better fit than either latent class analysis or latent trait analysis alone.

16.
Probability distribution specification risk in VaR estimation and its improvement (cited 3 times: 1 self-citation, 2 citations by others)
李腊生, 孙春花 (Li Lasheng, Sun Chunhua). 《统计研究》 (Statistical Research), 2010, 27(10): 40-46
In financial risk management, forming an ex ante judgment of financial risk is extremely important, yet the shortcomings of the ex ante decision-support techniques used by financial institutions are often overlooked. As long as no method for estimating the probability distribution of financial investment returns has been established, incorporating the characteristics of the sample data into the risk measure is an effective way to improve risk judgment. This article uses the mainstream risk measurement technique, VaR, to discuss probability distribution specification risk, and explores improving and extending the VaR calculation according to the data characteristics. By comparing VaR estimates from the delta-normal method with those from a delta-gamma-Cornish-Fisher extension, the empirical analysis demonstrates the effectiveness and robustness of the extended method for VaR estimation.
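Below is a generic sketch of the two estimators being compared, delta-normal VaR and a Cornish-Fisher (delta-gamma style) adjustment for skewness and excess kurtosis, under assumptions of my own rather than the article's exact specification.

```python
# Delta-normal VaR vs. Cornish-Fisher adjusted VaR.
import numpy as np
from scipy.stats import norm, skew, kurtosis

def delta_normal_var(returns, alpha=0.01):
    r = np.asarray(returns)
    return -(r.mean() + norm.ppf(alpha) * r.std(ddof=1))

def cornish_fisher_var(returns, alpha=0.01):
    r = np.asarray(returns)
    s, k = skew(r), kurtosis(r)                  # k is excess kurtosis
    z = norm.ppf(alpha)
    z_cf = (z + (z**2 - 1) * s / 6
              + (z**3 - 3*z) * k / 24
              - (2*z**3 - 5*z) * s**2 / 36)      # Cornish-Fisher quantile correction
    return -(r.mean() + z_cf * r.std(ddof=1))

rng = np.random.default_rng(6)
r = rng.standard_t(df=4, size=2000) * 0.01       # heavy-tailed toy returns
print(delta_normal_var(r), cornish_fisher_var(r))
```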

17.
In this article, we introduce a new method for modelling curves with dynamic structures, using a non-parametric approach formulated as a state space model. The non-parametric approach is based on penalised splines, represented as a dynamic mixed model. This formulation can capture the dynamic evolution of curves using a limited number of latent factors, allowing an accurate fit with a small number of parameters. We also present a new method for determining the optimal smoothing parameter through an adaptive procedure, using a formulation analogous to a stochastic volatility (SV) model. The non-parametric state space model allows different methods applied to data with a functional structure in finance to be unified. We present the advantages and limitations of this method through simulation studies, and also by comparing its predictive performance with other parametric and non-parametric methods used in financial applications, using data on the term structure of interest rates.

18.
19.
ABSTRACT

We introduce a new methodology for estimating the parameters of a two-sided jump model, which aims at decomposing the daily stock return evolution into (unobservable) positive and negative jumps as well as Brownian noise. The parameters of interest are the jump beta coefficients, which measure the influence of the market jumps on the stock returns and are latent components. For this purpose, we first use the Variance Gamma (VG) distribution, which is frequently used in modeling financial time series and reveals the distributions of the hidden market jumps. Our method is then based on the central moments of the stock returns for estimating the parameters of the model. It is proved that the proposed method always provides a solution in terms of the jump beta coefficients. We thus achieve a semi-parametric fit to the empirical data. The methodology itself serves as a criterion to test the fit of any set of parameters to the empirical returns. The analysis is applied to NASDAQ and Google returns during the 2006-2008 period.
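A minimal sketch of simulating Variance Gamma returns via the usual gamma time change, together with the sample central moments on which a moment-based fit could operate; the jump-beta estimation itself is not reproduced, and the parameter values are arbitrary.

```python
# Variance Gamma increments: Brownian motion with drift evaluated on a gamma clock.
import numpy as np

def simulate_vg(n, theta=-0.1, sigma=0.2, nu=0.5, seed=7):
    rng = np.random.default_rng(seed)
    g = rng.gamma(shape=1.0 / nu, scale=nu, size=n)    # gamma time change, E[g] = 1
    return theta * g + sigma * np.sqrt(g) * rng.standard_normal(n)

x = simulate_vg(100_000)
# Sample central moments of orders 2-4, the ingredients of a moment-based fit.
central_moments = [np.mean((x - x.mean()) ** k) for k in (2, 3, 4)]
print(np.round(central_moments, 5))
```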

20.
Listed companies often have incentives to window-dress their financial data to embellish their operating condition, which reduces the predictive accuracy of financial risk early-warning models. This article uses two data-quality assessment methods, Benford's law and the Myer index, to construct Benford and Myer quality factors, introduces them into a BP neural network, and builds a BM-BP neural network financial risk early-warning model. Using data on China's A-share listed companies from 2000 to 2019, it then evaluates the effect of the data-quality factors on the predictive accuracy of the early-warning model and analyzes the stability of the new model's accuracy. The empirical results show that the Benford and Myer quality factors improve the predictive accuracy of the BP neural network early-warning model; among the different quality factors compared, the BP neural network model that includes the selected Benford and Myer quality factors achieves higher predictive accuracy, a lower Type II misclassification rate, and good stability; and using a decision-tree algorithm to screen indicators effectively improves the new model's predictive accuracy.
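One plausible way a Benford-law quality factor could be computed is sketched below; the exact construction of the article's factors, the Myer index factor, and the BP neural network are not reproduced, so treat this as an assumption-laden illustration.

```python
# Benford-law data-quality factor: deviation of observed leading-digit frequencies
# from Benford's expected frequencies log10(1 + 1/d), d = 1..9.
import numpy as np

def benford_quality_factor(values):
    """Mean absolute deviation between observed and Benford first-digit frequencies."""
    v = np.abs(np.asarray(values, dtype=float))
    v = v[v > 0]
    first_digits = np.array([int(f"{x:.6e}"[0]) for x in v])   # leading digit of each value
    observed = np.bincount(first_digits, minlength=10)[1:10] / len(first_digits)
    expected = np.log10(1 + 1 / np.arange(1, 10))
    return np.mean(np.abs(observed - expected))

rng = np.random.default_rng(8)
revenues = rng.lognormal(mean=10, sigma=2, size=5000)   # toy "reported" figures
print(benford_quality_factor(revenues))
```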

