Similar Documents
 20 similar documents found (search time: 203 ms)
1.
Building on neural network algorithms and principal component analysis (PCA) theory, and targeting the highly nonlinear character of the stock market, this paper combines PCA preprocessing to reduce the dimensionality of the raw trading data and shrink the data size, and proposes an improved RBF neural network model for stock market prediction. Comparative experiments show that the proposed model converges quickly and predicts accurately, and has good application prospects.
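As a minimal illustration of the PCA preprocessing step described above (my own sketch on made-up 2-D data, not the paper's code), the first principal component can be extracted by power iteration on the covariance matrix:

```python
# Illustrative sketch (not the paper's code): the PCA preprocessing step,
# extracting the first principal component of tiny made-up 2-D "trading"
# data by power iteration on the covariance matrix.

def first_pc(data, iters=200):
    n, d = len(data), len(data[0])
    means = [sum(row[j] for row in data) / n for j in range(d)]
    centered = [[row[j] - means[j] for j in range(d)] for row in data]
    # sample covariance matrix
    cov = [[sum(r[a] * r[b] for r in centered) / (n - 1)
            for b in range(d)] for a in range(d)]
    v = [1.0] * d
    for _ in range(iters):                 # power iteration
        w = [sum(cov[a][b] * v[b] for b in range(d)) for a in range(d)]
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    return v

data = [[1.0, 2.1], [2.0, 3.9], [3.0, 6.2], [4.0, 7.8]]
pc = first_pc(data)
# projecting each sample onto pc reduces it to a single coordinate
scores = [sum(x * c for x, c in zip(row, pc)) for row in data]
```

The projected scores would then serve as the low-dimensional inputs to a predictor such as the RBF network in the abstract.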

2.
Stock price prediction based on a SOM network, principal components, and a BP network   Total citations: 5 (self: 1, other: 4)
This paper proposes a SOM network, principal component, and BP network prediction model for real-time forecasting of stock market closing prices. A SOM neural network first partitions the heterogeneous samples into subclasses; principal component analysis then reduces the dimensionality of the many variables affecting the target data; on this basis, a BP neural network model for closing prices is constructed, substantially improving forecast accuracy and efficiency. Tests on collected stock market data demonstrate the effectiveness of the proposed method.

3.
Using principal component analysis and BP neural networks, this paper proposes a principal-component neural-network model for evaluating the quality of corporate internal-control information disclosure. PCA reduces the dimensionality of the indicator set for disclosure quality, selecting the key input indicators, and on this basis a BP neural network algorithm is used to build the evaluation model.

4.
Kernel principal component analysis (KPCA) is a highly effective dimensionality-reduction method proposed in recent years, but it does not guarantee that the extracted first principal components are best suited for classifying the reduced data. Rough set (RS) theory is an effective way to handle this problem. This paper proposes a support vector classifier (SVC) based on KPCA and RS theory: RS theory and the information-entropy principle are used to perform feature selection on training samples after KPCA feature extraction, retaining the important features so as to shrink the problem size and improve SVC performance. In numerical experiments building a 2006 financial-distress early-warning model for listed companies, the SVC with KPCA and RS theory as a front-end system performed well.

5.
This paper analyzes and forecasts air-quality data for Heilongjiang Province using an ARMA model, a principal component analysis model, and a neural network model. First, the ARMA model is used to forecast the province's future air-quality data, and its forecast accuracy is tested. Second, principal component analysis reduces the dimensionality of covariates such as air pollutants, selecting the effective principal components and characterizing the AQI. Finally, a neural network is used to mine the complex relationships among the variables, reasonably supplementing the preceding analyses.
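The simplest member of the ARMA family mentioned above can be sketched as follows (my own illustration on a synthetic series, not the paper's air-quality data): an AR(1) coefficient estimated from the lag-1 autocorrelation, with a one-step forecast.

```python
# Hedged sketch of the simplest ARMA family member: an AR(1) model fitted
# from the lag-1 autocorrelation (Yule-Walker), with a one-step forecast.
# The series is synthetic, not the Heilongjiang air-quality data.

def fit_ar1(x):
    n = len(x)
    mean = sum(x) / n
    c0 = sum((v - mean) ** 2 for v in x) / n       # lag-0 autocovariance
    c1 = sum((x[t] - mean) * (x[t + 1] - mean) for t in range(n - 1)) / n
    return mean, c1 / c0                           # (mean, AR coefficient)

def forecast_ar1(x, mean, phi):
    # one-step-ahead forecast: revert toward the mean at rate phi
    return mean + phi * (x[-1] - mean)

series = [10, 12, 11, 13, 12, 14, 13, 15, 14, 16]
mean, phi = fit_ar1(series)
next_val = forecast_ar1(series, mean, phi)   # 13 + 0.3 * (16 - 13) = 13.9
```

A full ARMA fit adds moving-average terms and order selection, but the mean-reversion mechanics are the same.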

6.
Taking breast-cancer survey data from Wisconsin, USA, as an example, this paper applies the SIS and TCS algorithms to reduce the dimensionality of high-dimensional data, and fits an improved logistic generalized linear model to the reduced variables. Compared with the traditional general linear model and the logistic generalized linear model, the results show that the logistic GLM after algorithmic dimensionality reduction has smaller prediction error, and the GLM after TCS reduction clearly outperforms the GLM after SIS reduction.

7.
To improve the prediction accuracy of the GM(1,1) model, this paper proposes a GM(1,1) model based on data transformation and background-value optimization. A weakening buffer operator is applied to the original data series to obtain a buffered series, a logarithmic transformation is applied to the buffered series, and the background values of the GM(1,1) model are then optimized. A worked example shows that the new GM(1,1) model reduces error and improves prediction accuracy.
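For readers unfamiliar with GM(1,1), the plain baseline model that the abstract improves on can be sketched as follows (my own illustration, without the buffer operator, logarithmic transform, or background-value optimization the paper proposes):

```python
# Sketch of the classic GM(1,1) grey model in pure Python; this is the
# baseline model only, without the weakening buffer operator, logarithmic
# transform, or background-value optimization described in the abstract.
import math

def gm11(x0):
    """Fit GM(1,1) to series x0; return (a, b, predict)."""
    n = len(x0)
    # 1-AGO: first-order accumulated generating series
    x1, s = [], 0.0
    for v in x0:
        s += v
        x1.append(s)
    # background values z(k) = 0.5 * (x1(k) + x1(k-1))
    z = [0.5 * (x1[k] + x1[k - 1]) for k in range(1, n)]
    y = x0[1:]
    # least squares for the grey equation x0(k) = -a*z(k) + b (closed form)
    m = len(z)
    sz, szz = sum(z), sum(v * v for v in z)
    sy = sum(y)
    szy = sum(zv * yv for zv, yv in zip(z, y))
    det = m * szz - sz * sz
    a = (sz * sy - m * szy) / det
    b = (szz * sy - sz * szy) / det
    def predict(k):
        # restored value at 0-based index k; k >= n extrapolates
        if k == 0:
            return x0[0]
        c = x0[0] - b / a
        return c * math.exp(-a * k) - c * math.exp(-a * (k - 1))
    return a, b, predict
```

On near-exponential data the baseline model already fits well; the paper's transformations target series where this assumption is weaker.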

8.
Issues worth noting in applications of principal component analysis and factor analysis   Total citations: 3 (self: 0, other: 3)
王学民 《统计与决策》2007,(11):142-143
Principal component analysis and factor analysis are powerful tools for the dimensionality reduction of multivariate data and have seen increasingly wide application in China, and …

9.
Existing indices of industrial-structure convergence apply to pairwise measurement between two regions and lack a method for measuring multiple regions as a whole. Using the statistical theory of compositional data, and with ternary diagrams to visualize the overall evolution of industrial structure, this paper proposes structural-center and structural-dispersion indices to measure industrial-structure convergence as a whole, and applies them empirically to China's 31 provinces and to the eastern, central, western, and northeastern regions. The results show that overall convergence across the 31 provinces is fairly stable; the eastern region has the lowest degree of convergence, while different regions display different convergence trends.

10.
Pitfalls in applying principal component methods to comprehensive evaluation in economics and management   Total citations: 4 (self: 0, other: 4)
Principal component analysis is a common multivariate statistical method. Because its dimensionality-reduction idea closely matches the ordering requirements of multi-indicator evaluation, and because it condenses information, reduces indicator dimensionality, simplifies indicator structure, and reflects the essence of the problem, it has in recent years been increasingly applied to evaluation in sociology, economics, and management, gradually becoming a distinctive multi-indicator evaluation technique. Some problems have arisen in its application, however, and this paper explores them.

11.
An important drawback of the standard logarithmic series distribution (LSD) in several practical applications is that it excludes the zero observation from its support. The LSD with non-negative support is not much studied in the literature. Recently Kumar and Riyaz [On the zero-inflated LSD and its modification. Statistica (accepted for publication). 2013] considered a distribution in this respect namely ‘zero-inflated logarithmic series distribution (ZILSD)’. Through this paper we propose an alternative form of the ZILSD and study some of its properties. We obtain expressions for its probability-generating function, mean and variance, and develop certain recurrence relations for its probabilities, raw moments and factorial moments. The parameters of the model are estimated by the method of moments and the method of maximum likelihood, and certain test procedures are considered for testing the significance of the additional parameter of the distribution. The distribution has been fitted to certain real-life data sets for illustrating its usefulness compared with certain existing models available in the literature. Further, a simulation study is conducted for assessing the performance of the estimators.
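A generic zero-inflated LSD of the mixture form P(0) = π, P(k) = (1 − π)·LSD(k) can be sketched as below. Note the paper proposes an *alternative* form of the ZILSD, so this mixture parameterization and the names `pi` and `theta` are my illustrative assumptions, not the authors' model:

```python
# Hedged sketch: a generic zero-inflated logarithmic series distribution
# in the mixture form P(0) = pi, P(k) = (1 - pi) * LSD(k). The paper
# proposes an alternative parameterization; this form is illustrative.
import math

def lsd_pmf(k, theta):
    # standard LSD on k = 1, 2, ...: P(k) = -theta^k / (k * log(1 - theta))
    return -theta ** k / (k * math.log(1.0 - theta))

def zilsd_pmf(k, pi, theta):
    if k == 0:
        return pi
    return (1.0 - pi) * lsd_pmf(k, theta)

def zilsd_mean(pi, theta):
    # LSD mean is -theta / ((1 - theta) * log(1 - theta)), scaled by (1 - pi)
    return (1.0 - pi) * (-theta / ((1.0 - theta) * math.log(1.0 - theta)))
```

The probability-generating function and higher moments follow the same pattern: the LSD quantity weighted by (1 − π), plus the point mass at zero.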

12.
Improving the quality of industrial water-withdrawal monitoring data is an important part of current national water-resource monitoring capacity building, and outliers have become the key weakness affecting monitoring data quality. After analyzing the main types of outliers in current industrial water-withdrawal monitoring data, this paper takes the monitoring data in the national water-resource management system database as a sample, uses a wavelet-transform modulus-maxima model to extract the time-frequency features of the series, corrects the residual series with a Fourier function, and then mines outliers in the monitoring data with a relative-error control method. On this basis, a least-squares support vector machine (LS-SVM) optimized by chaotic particle swarm optimization is used to reconstruct and impute the outlier data. The results show that the wavelet modulus-maxima model extracts the time-frequency features of the monitoring series well but tends to lose information, and Fourier residual correction of the wavelet transform further improves feature extraction; building on the modulus-maxima feature series, relative-error control mines outliers efficiently; and for reconstructing and imputing the detected outliers, the chaotic-PSO-optimized LS-SVM achieves better reconstruction accuracy than traditional statistical methods such as polynomial curve fitting and than an ordinary LS-SVM. This outlier mining-and-reconstruction strategy for industrial water-withdrawal monitoring data provides important technical support for advancing national water-resource monitoring capacity building.

13.
In recent years, with the availability of high-frequency financial market data modeling realized volatility has become a new and innovative research direction. The construction of “observable” or realized volatility series from intra-day transaction data and the use of standard time-series techniques has led to promising strategies for modeling and predicting (daily) volatility. In this article, we show that the residuals of commonly used time-series models for realized volatility and logarithmic realized variance exhibit non-Gaussianity and volatility clustering. We propose extensions to explicitly account for these properties and assess their relevance for modeling and forecasting realized volatility. In an empirical application for S&P 500 index futures we show that allowing for time-varying volatility of realized volatility and logarithmic realized variance substantially improves the fit as well as predictive performance. Furthermore, the distributional assumption for residuals plays a crucial role in density forecasting.

14.
The Volatility of Realized Volatility   Total citations: 4 (self: 1, other: 3)
In recent years, with the availability of high-frequency financial market data modeling realized volatility has become a new and innovative research direction. The construction of “observable” or realized volatility series from intra-day transaction data and the use of standard time-series techniques has led to promising strategies for modeling and predicting (daily) volatility. In this article, we show that the residuals of commonly used time-series models for realized volatility and logarithmic realized variance exhibit non-Gaussianity and volatility clustering. We propose extensions to explicitly account for these properties and assess their relevance for modeling and forecasting realized volatility. In an empirical application for S&P 500 index futures we show that allowing for time-varying volatility of realized volatility and logarithmic realized variance substantially improves the fit as well as predictive performance. Furthermore, the distributional assumption for residuals plays a crucial role in density forecasting.

15.
The bimodal mixture Weibull distribution, a special case of the mixture Weibull distribution, has recently been used as a suitable model for heterogeneous data sets in many practical applications; the term denotes a mixture of two Weibull distributions. Although many estimation methods have been proposed for the bimodal mixture Weibull distribution, there has been no comprehensive comparison. This paper presents a detailed simulation-based comparison of five numerical methods (maximum likelihood estimation, the least-squares method, the method of moments, the method of logarithmic moments, and the percentile method) in terms of several criteria. The estimation methods are also applied to real data.

16.
Based on theoretical research at home and abroad, and from qualitative, quantitative, causal, and aggregate-versus-local perspectives, this paper builds double-logarithmic and correlation models and uses Granger causality tests to analyze the relationship between China's industrial-structure change and energy consumption. The study confirms a close relationship between the two: within the industrial structure, if energy-intensive industries account for a large share, the energy consumption of the whole economy is high; and energy consumption is related not only to shifts among the three broad industries but also to the internal structure of each industry.

17.
This paper demonstrates the utilization of wavelet-based tools for the analysis and prediction of financial time series exhibiting strong long-range dependence (LRD). Emerging markets' stock returns are commonly characterized by LRD. Therefore, we track the LRD evolvement for the return series of six Southeast European stock indices through the application of a wavelet-based semi-parametric method. We further engage the à trous wavelet transform in order to extract deeper knowledge on the returns' term structure and utilize it for prediction purposes. In particular, a multiscale autoregressive (MAR) model is fitted and its out-of-sample forecast performance is benchmarked to that of ARMA. Additionally, a data-driven MAR feature-selection procedure is outlined. We find that the wavelet-based method captures LRD dynamics adequately both in calm as well as in turmoil periods, detecting the presence of transitional changes. At the same time, the MAR model handles the complicated autocorrelation structure implied by the LRD in a parsimonious way, achieving better performance.

18.
Weibull distributions have wide-ranging applications in many areas, including reliability, hydrology, and communication systems. Many estimation methods have been proposed for Weibull distributions, but there has not been a comprehensive comparison of these methods: most studies have focused on comparing maximum likelihood estimation (MLE) with one of the other approaches. In this paper, we first propose an L-moment estimator for the Weibull distribution. Then, a comprehensive comparison is made of the following methods: maximum likelihood estimation (MLE), the method of logarithmic moments, the percentile method, the method of moments, and the method of L-moments.
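As one concrete instance of the estimation methods being compared, a method-of-moments fit for the two-parameter Weibull can be sketched as follows (my own illustration, not the paper's code): the shape parameter is solved from the sample coefficient of variation by bisection.

```python
# Hedged sketch (not the paper's code): a method-of-moments fit for the
# two-parameter Weibull(shape k, scale lam), solving for the shape from
# the sample coefficient of variation by bisection.
import math

def weibull_mom(sample):
    n = len(sample)
    mean = sum(sample) / n
    var = sum((x - mean) ** 2 for x in sample) / n
    cv2 = var / mean ** 2  # squared coefficient of variation
    # CV^2 of Weibull(shape k) is Gamma(1+2/k)/Gamma(1+1/k)^2 - 1,
    # which decreases in k; solve f(k) = 0 by bisection.
    def f(k):
        return math.gamma(1 + 2 / k) / math.gamma(1 + 1 / k) ** 2 - 1 - cv2
    lo, hi = 0.1, 50.0
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if f(mid) > 0:   # implied CV too large -> need larger shape
            lo = mid
        else:
            hi = mid
    k = 0.5 * (lo + hi)
    lam = mean / math.gamma(1 + 1 / k)
    return k, lam

# sanity check on a deterministic quantile sample from Weibull(shape=2, scale=3)
n = 10000
sample = [3.0 * (-math.log(1 - (i + 0.5) / n)) ** 0.5 for i in range(n)]
k_hat, lam_hat = weibull_mom(sample)
```

The other methods in the comparison (MLE, percentile, logarithmic moments, L-moments) differ only in which sample summaries they match against the model.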

19.
Online auctions have become increasingly popular in recent years, and as a consequence there is a growing body of empirical research on this topic. Most of that research treats data from online auctions as cross-sectional, and consequently ignores the changing dynamics that occur during an auction. In this article we take a different look at online auctions and propose to study an auction's price evolution and associated price dynamics. Specifically, we develop a dynamic forecasting system to predict the price of an ongoing auction. By dynamic, we mean that the model can predict the price of an auction “in progress” and can update its prediction based on newly arriving information. Forecasting price in online auctions is challenging because traditional forecasting methods cannot adequately account for two features of online auction data: (1) the unequal spacing of bids and (2) the changing dynamics of price and bidding throughout the auction. Our dynamic forecasting model accounts for these special features by using modern functional data analysis techniques. Specifically, we estimate an auction's price velocity and acceleration and use these dynamics, together with other auction-related information, to develop a dynamic functional forecasting model. We also use the functional context to systematically describe the empirical regularities of auction dynamics. We apply our method to a novel set of Harry Potter and Microsoft Xbox data and show that our forecasting model outperforms traditional methods.
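The unequal bid spacing mentioned above can be handled crudely with divided differences; this sketch only caricatures the smoothing-based functional data analysis the article actually uses, on invented bid data:

```python
# Hedged sketch: estimating price "velocity" and "acceleration" on
# unequally spaced bid times with divided differences. The article uses
# smoothing-spline functional data analysis; this is only a caricature
# on invented bid data.

def velocity(times, prices):
    # first divided differences between consecutive bids
    return [(prices[i + 1] - prices[i]) / (times[i + 1] - times[i])
            for i in range(len(times) - 1)]

def acceleration(times, prices):
    v = velocity(times, prices)
    # place each velocity estimate at the midpoint of its bid interval
    mids = [(times[i] + times[i + 1]) / 2 for i in range(len(times) - 1)]
    return [(v[i + 1] - v[i]) / (mids[i + 1] - mids[i])
            for i in range(len(v) - 1)]

# unequally spaced bid times (hours) and prices
t = [0.0, 5.0, 30.0, 50.0, 60.0, 62.0]
p = [1.0, 2.0, 4.0, 8.0, 15.0, 20.0]
v = velocity(t, p)
a = acceleration(t, p)
```

A functional approach would instead smooth the price curve first and differentiate the smooth, which is far less noisy than raw divided differences.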

20.
In this paper we propose a general cure-rate aging model. Our approach allows different underlying activation mechanisms which lead to the event of interest. The number of competing causes of the event of interest is assumed to follow a logarithmic distribution. The model is parameterized in terms of the cured fraction, which is then linked to covariates. We explore the use of Markov chain Monte Carlo methods to develop a Bayesian analysis for the proposed model. Moreover, we discuss model selection for comparing the fitted models, and we develop case-deletion influence diagnostics for the joint posterior distribution based on the ψ-divergence, which has several divergence measures as particular cases, such as the Kullback-Leibler divergence, J-distance, L1 norm, and χ² divergence. Simulation studies are performed, and experimental results are illustrated on real malignant melanoma data.

