Similar Documents
19 similar documents found.
1.
Functional Principal Component Analysis of Volatility in the Chinese Stock Market
Applying functional principal component analysis to the monthly returns of 89 stocks on the Shanghai Stock Exchange, we find that the functional principal component method captures the temporal volatility features of monthly returns quite accurately, in particular the direction and form of their variation over time, providing a scientific basis for modeling and forecasting stock returns.
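On a fine common time grid, functional PCA reduces to an eigendecomposition of the sample covariance of the discretized curves. A minimal sketch with simulated monthly-return curves (all names and data below are illustrative, not the paper's):

```python
import numpy as np

rng = np.random.default_rng(0)
n_stocks, n_months = 89, 24
t = np.linspace(0, 1, n_months)

# simulated smooth return curves: two latent modes plus noise
scores = rng.normal(size=(n_stocks, 2))
curves = (scores[:, [0]] * np.sin(2 * np.pi * t)
          + scores[:, [1]] * np.cos(2 * np.pi * t)
          + 0.1 * rng.normal(size=(n_stocks, n_months)))

# on a grid, FPCA is PCA of the centred data matrix
centred = curves - curves.mean(axis=0)
cov = centred.T @ centred / n_stocks
eigvals, eigvecs = np.linalg.eigh(cov)        # ascending order
eigvals, eigvecs = eigvals[::-1], eigvecs[:, ::-1]

explained = eigvals / eigvals.sum()
print(f"first two components explain {explained[:2].sum():.0%} of variation")
```

The leading eigenvectors are the discretized functional principal components; their shapes describe the dominant directions of variation over time.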

2.
Common Principal Component Analysis of Functional Data: Review and Prospects
Functional principal component analysis (FPCA) has been applied successfully in many fields, but it mainly addresses single-sample problems. This paper discusses in detail a recently developed theory in functional data analysis, common functional principal component (CFPC) analysis, which is mainly used to test whether two sets of functional random samples share the same distribution. The theoretical basis of the CFPC method is the Karhunen-Loève (KL) expansion of the two functional samples, combined with bootstrap tests for the equality of the mean functions, eigenvalues, and eigenfunctions of the two groups. Finally, we discuss the prospects for theoretical research on and applications of CFPC.

3.
田茂再  梅波 《统计研究》2019,36(8):114-128
Taking account of the structural features of functional data, this paper constructs new functional tilted quantile regression models, based on the definition of the functional tilted quantile curve, for two classes of functional-variable quantile regression models (functional response on a scalar covariate, and functional response on a functional covariate). For the second class, the model coefficients are expanded in a spline basis and the functional covariate in a functional principal component basis, which yields the basic form of the tilted quantile regression model. Parameters are estimated by a component-wise gradient boosting algorithm that minimizes a weighted asymmetric loss function, improving computational efficiency. Theoretically, the coefficient estimators of the tilted quantile regression model are shown to be asymptotically normal. Simulation and empirical results show that the tilted quantile regression model fits better than existing pointwise quantile regression models, and under the integrated mean squared prediction error criterion the proposed models predict consistently well.

4.
Recent Advances in Mortality Forecasting Models
Over the past half century, mortality rates worldwide have shown an overall downward trend, and forecasts from traditional mortality models are often higher than the levels actually realized, which has serious adverse consequences for pension funding arrangements and the costing of annuities. Following the development of mortality modeling, this paper reviews and summarizes the various classes of mortality forecasting models, comments on the latest advances, and offers suggestions on the choice of mortality forecasting models for China.

5.
Trading in financial markets is continuous and prices are updated at high frequency, so functional data arise frequently in financial research. This paper builds a partial functional linear regression model to analyze the application of functional data to forecasting the Shanghai Composite Index; based on the principles of functional data analysis and the method for computing functional principal components, the index is forecast using Matlab.

6.
This paper studies the clustering of functional data via basis-function expansions within a general framework, under which a large number of traditional clustering methods can be applied directly in functional data analysis. In addition, we introduce the Pearson similarity coefficient into functional data clustering, which resolves the inability of Euclidean distance to capture differences in the shapes of curves.
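The point about Euclidean distance versus Pearson similarity can be seen with three toy curves (a hypothetical illustration, not data from the paper): a vertical shift leaves a curve's shape unchanged, so its Pearson distance to the original is zero, while its Euclidean distance is large.

```python
import numpy as np

t = np.linspace(0, 1, 50)
a = np.sin(2 * np.pi * t)           # a reference curve
b = np.sin(2 * np.pi * t) + 5.0     # same shape, shifted upward
c = -np.sin(2 * np.pi * t)          # opposite shape, same level as a

def pearson_dist(x, y):
    """1 - Pearson correlation: 0 for identical shapes, 2 for opposite shapes."""
    return 1.0 - np.corrcoef(x, y)[0, 1]

print(f"Euclidean a-b: {np.linalg.norm(a - b):.2f}, a-c: {np.linalg.norm(a - c):.2f}")
print(f"Pearson   a-b: {pearson_dist(a, b):.2f}, a-c: {pearson_dist(a, c):.2f}")
```

Euclidean distance would group `a` with the opposite-shaped `c` rather than with the shape-identical `b`; the Pearson-based distance makes the opposite (shape-aware) grouping.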

7.
Starting from the functional character of economic data, this paper introduces functional data analysis and finds that panel data can be regarded as a special case of functional data; functional data analysis has unique advantages in handling special data types such as high-dimensional data, missing data, and irregularly spaced observation points. The paper focuses on and extends principal differential analysis, which combines the strengths of principal component analysis while exploring data features through the solutions of differential equations. A principal differential analysis of China's national interbank offered rates shows that the method can reveal features of the data that other methods cannot.

8.
Forecasting Chinese Mortality Rates Based on an Equilibrium Relationship
Mortality rates reflect the degree of mortality in a population; forecasting them accurately is a key topic in demography and population economics, and an important data foundation for measuring longevity risk. Building on the Lee-Carter model, this paper explores the correlation between mortality in mainland China and Taiwan, uses cointegration analysis to account for the long-run equilibrium relationship between the two mortality series, and builds a correlation-based vector error correction model (VECM), overcoming the limitation of the traditional autoregressive moving average (ARIMA) model when forecasting from limited data. With mean squared prediction error as the criterion, the results show that the VECM forecasts outperform the traditional ones. The long-run equilibrium relationship between mainland and Taiwan mortality can also provide an important reference for pricing joint longevity bonds for the two regions.

9.
This paper combines nonparametric regression by B-spline least squares with time series methods to build a forecasting model for time series. The method achieves high forecasting accuracy and can describe complex models; a real example is analyzed.
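Least-squares fitting with a B-spline basis can be sketched with a hand-rolled Cox-de Boor recursion (illustrative code, not the paper's implementation; the knot placement and test function are assumptions):

```python
import numpy as np

def bspline_design(x, knots, k):
    """Design matrix of all degree-k B-spline basis functions at points x."""
    x = np.asarray(x, float)
    b, m = knots[-1], len(knots) - 1
    # degree 0: span indicators; the right endpoint belongs to the last real span
    B = [np.where((knots[i] <= x) &
                  ((x < knots[i + 1]) |
                   ((x == b) & (knots[i] < knots[i + 1]) & (knots[i + 1] == b))),
                  1.0, 0.0) for i in range(m)]
    for d in range(1, k + 1):                    # Cox-de Boor recursion
        Bnew = []
        for i in range(m - d):
            left = right = 0.0
            if knots[i + d] > knots[i]:
                left = (x - knots[i]) / (knots[i + d] - knots[i]) * B[i]
            if knots[i + d + 1] > knots[i + 1]:
                right = (knots[i + d + 1] - x) / (knots[i + d + 1] - knots[i + 1]) * B[i + 1]
            Bnew.append(left + right)
        B = Bnew
    return np.column_stack(B)

# clamped cubic spline on [0, 1] with three interior knots
knots = np.r_[[0.0] * 4, [0.25, 0.5, 0.75], [1.0] * 4]
x = np.linspace(0, 1, 50)
y = np.sin(2 * np.pi * x)                        # a smooth "series" to fit
D = bspline_design(x, knots, 3)
coef, *_ = np.linalg.lstsq(D, y, rcond=None)     # least-squares coefficients
fit = D @ coef
print(f"max abs fitting error: {np.abs(fit - y).max():.4f}")
```

For forecasting, the fitted smooth component would be combined with a time series model for the residuals, as the abstract describes.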

10.
This paper discusses the case-deletion model under the ridge-type principal component estimator and derives the equivalence between the diagnostic statistics of that model and those of the model under least squares estimation. On this basis, two diagnostic measures are proposed following the idea of the W-K statistic, and their feasibility is demonstrated with a real example.

11.
Recently, several methodologies to perform geostatistical analysis of functional data have been proposed. All of them assume that the spatial functional process considered is stationary. However, in practice we often have nonstationary functional data because there is an explicit spatial trend in the mean. Here, we propose a methodology that extends kriging predictors for functional data to the case where the mean function is not constant over the region of interest. We consider an approach based on the classical residual kriging method of univariate geostatistics and propose a three-step procedure. First, a functional regression model is used to detrend the mean. Then we apply kriging methods for functional data to the regression residuals to predict a residual curve at a non-data location. Finally, the prediction curve is obtained as the sum of the trend and the residual prediction. We apply the methodology to 21 salinity curves recorded at the Ciénaga Grande de Santa Marta estuary on the Caribbean coast of Colombia. A cross-validation analysis was carried out to assess the performance of the proposed methodology.
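The three-step residual kriging scheme can be sketched in a scalar, one-dimensional analogue (a toy stand-in for the functional version: locations, trend, and the exponential covariance model are all assumed, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(3)
s = np.sort(rng.uniform(0, 10, 20))     # data locations
s0 = 5.3                                # non-data prediction location

def cov(h, sill=1.0, scale=2.0):
    """Exponential covariance model (taken as known for the sketch)."""
    return sill * np.exp(-np.abs(h) / scale)

C = cov(s[:, None] - s[None, :])
# one simulated realisation: linear mean trend + spatially correlated residual
z = 1.0 + 0.4 * s + np.linalg.cholesky(C + 1e-10 * np.eye(len(s))) @ rng.normal(size=len(s))

# step 1: detrend the mean by regression on the coordinate
X = np.column_stack([np.ones_like(s), s])
beta, *_ = np.linalg.lstsq(X, z, rcond=None)
resid = z - X @ beta

# step 2: krige the residual at the non-data location
weights = np.linalg.solve(C, cov(s - s0))
resid_pred = weights @ resid

# step 3: prediction = estimated trend + kriged residual
pred = beta[0] + beta[1] * s0 + resid_pred
print(f"prediction at s0={s0}: {pred:.3f}")
```

In the functional setting each `z` value becomes a whole curve, but the detrend / krige residuals / recombine structure is the same.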

12.
We consider prediction in regression models under squared loss in the random-x case with many explanatory variables. Model reduction is done by conditioning on only a small number of linear combinations of the original variables. The corresponding reduced model is then essentially the population model for the chemometricians' partial least squares algorithm. Estimation of the selection matrix under this model is briefly discussed, and analogous results for the case of multivariate response are formulated. Finally, it is shown that the assumption of multinormality may be weakened to an elliptically symmetric distribution, and that some of the results hold without any distributional assumption at all.

13.
One main challenge for statistical prediction with data from multiple sources is that not all of the associated covariate data are available for many sampled subjects. Consequently, new statistical methodology is needed to handle this type of "fragmentary data", which has become increasingly common in recent years. In this article, we propose a novel method based on frequentist model averaging that fits candidate models using all available covariate data. The weights in the model average are selected by delete-one cross-validation on the data from the complete cases. The optimality of the selected weights is rigorously proved under some conditions. The finite-sample performance of the proposed method is confirmed by simulation studies. An example of personal income prediction, based on real data from a leading e-community for wealth management in China, is also presented for illustration.
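The weight-selection step can be sketched on a toy fragmentary-data problem (the setup, covariate names, and weight grid below are illustrative assumptions, not the paper's design): one candidate model uses a covariate observed for everyone, another additionally uses a covariate observed only for the complete cases, and the averaging weight is chosen by delete-one cross-validation over those complete cases.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 200
x1, x2 = rng.normal(size=n), rng.normal(size=n)
y = 1.0 + 2.0 * x1 - 1.0 * x2 + rng.normal(scale=0.5, size=n)
complete = np.arange(n) < n // 2          # x2 is missing for the other half

def fit_predict(X, y, Xnew):
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return Xnew @ beta

ones = np.ones(n)
X_a = np.column_stack([ones, x1])         # usable for every subject
X_b = np.column_stack([ones, x1, x2])     # usable for complete cases only

# delete-one CV on the complete cases to choose the averaging weight w
cc = np.where(complete)[0]
w_grid = np.linspace(0, 1, 101)
cv = np.zeros_like(w_grid)
for i in cc:
    tr = cc[cc != i]
    pa = fit_predict(X_a[tr], y[tr], X_a[i])
    pb = fit_predict(X_b[tr], y[tr], X_b[i])
    cv += (y[i] - (w_grid * pa + (1 - w_grid) * pb)) ** 2
w = w_grid[np.argmin(cv)]
print(f"CV weight on the reduced model: {w:.2f}")
```

Because the larger model is correctly specified here, the cross-validation pushes most of the weight onto it.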

14.
The beta-binomial distribution, which is generated by a simple mixture model, has been widely applied in the social, physical, and health sciences. Problems of estimation, inference, and prediction have been addressed in the past, but not in a Bayesian framework. This article develops Bayesian procedures for the beta-binomial model and, using a suitable reparameterization, establishes a conjugate-type property for a beta family of priors. The transformed parameters have interesting interpretations, especially in marketing applications, and are likely to be more stable: one of them is the market share and the other is a measure of the heterogeneity of the customer population. Analytical results are developed for the posterior and predictive quantities, although their numerical evaluation is not trivial. Since the posterior moments are more easily calculated, we also propose posterior approximation via the Pearson system. A particular case (two trials), which occurs in taste testing, brand choice, media exposure, and some epidemiological applications, is analyzed in detail. Simulated and real data are used to demonstrate the feasibility of the calculations. The simulation results demonstrate the superiority of the Bayesian estimators, particularly in small samples, even with uniform ("non-informed") priors; "informed" priors can give even better results. Real data on television viewing behavior are used to illustrate the prediction results, and several problems with the maximum likelihood estimators are encountered in this analysis. The superior properties and performance of the Bayesian estimators and the excellent approximation results are strong indications that our results will be of high value in small-sample applications of the beta-binomial and in cases where significant prior information exists.

15.
Functional data analysis has become an important area of research because of its ability to handle high-dimensional and complex data structures. However, developments are limited in the context of linear mixed effect models, and in particular for small area estimation, even though linear mixed effect models are the backbone of small area estimation. In this article, we consider area-level data and fit a varying coefficient linear mixed effect model in which the varying coefficients are modelled semiparametrically via B-splines. We propose a method for estimating the fixed effect parameters and consider prediction of the random effects that can be implemented using standard software. For measuring prediction uncertainty, we derive an analytical expression for the mean squared error and propose a method for estimating it. The procedure is illustrated with a real data example, and the operating characteristics of the method are assessed through finite-sample simulation studies.

16.
Tahar Mourid 《Statistics》2013,47(2):125-138
We present a generalization of some previous works (Bosq, Mourid, Pumo) on the functional forecasting of Banach-valued autoregressive processes. We are mainly concerned with autoregressive processes of order p, p > 1, which appear to be a natural extension of the well-known R^d-valued autoregressive processes to a functional framework. This modelling provides a new approach for estimating and predicting a continuous-time stochastic process over an entire time interval. Using results from [12], we prove asymptotic properties of the parameter estimators and of predictors based on a principal component decomposition of a Hilbert-Schmidt operator with unknown eigenvectors.

17.
王芝皓等 《统计研究》2021,38(7):127-139
In practical data analysis one often encounters zero-inflated count data as the response, associated with a functional random variable and a random vector as predictors. This paper considers the functional partial varying-coefficient zero-inflated model (FPVCZIM), in which the infinite-dimensional slope function is approximated in a functional principal component basis and the coefficient functions are fitted with B-splines. Estimators are obtained via the EM algorithm and their theoretical properties are discussed; under some regularity conditions, convergence rates of the estimators of the slope function and the coefficient functions are derived. Finite-sample Monte Carlo simulations and a real data analysis illustrate the proposed method.

18.
The generalized Poisson (GP) regression is an increasingly popular approach for modeling overdispersed as well as underdispersed count data. Several parameterizations of GP regression have been proposed, and two well-known models, the GP-1 and the GP-2, are widely applied. The recently proposed GP-P regression has the advantage of nesting the GP-1 and the GP-2 parametrically, besides allowing statistical tests of the GP-1 and the GP-2 against a more general alternative. In many cases, count data have a larger number of zero outcomes than the Poisson expects. This zero-inflation phenomenon is a specific cause of overdispersion, and the zero-inflated Poisson (ZIP) regression model has been proposed for it. If the data still suggest additional overdispersion, the zero-inflated negative binomial (ZINB-1 and ZINB-2) and the zero-inflated generalized Poisson (ZIGP-1 and ZIGP-2) regression models have been considered as alternatives. This article proposes a functional form of the ZIGP that mixes a distribution degenerate at zero with a GP-P distribution. The suggested model has the advantage of nesting the ZIP and the two well-known ZIGP models (ZIGP-1 and ZIGP-2), besides allowing statistical tests of the ZIGP-1 and the ZIGP-2 against a more general alternative. The ZIP and the functional form of the ZIGP regression models are fitted, compared, and tested on two sets of count data: Malaysian insurance claim data and German healthcare data.
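As a minimal illustration of the zero-inflation mechanism the article builds on, a ZIP model without covariates can be fitted with a short EM loop (a sketch, not the paper's GP-P machinery; the data are simulated with assumed true parameters):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5000
true_pi, true_lam = 0.3, 2.0
# ZIP data: with prob. pi a structural zero, otherwise a Poisson(lam) draw
structural_zero = rng.random(n) < true_pi
y = np.where(structural_zero, 0, rng.poisson(true_lam, n))

pi, lam = 0.5, 1.0
for _ in range(200):
    # E-step: probability that an observed zero is a structural zero
    z = np.where(y == 0, pi / (pi + (1 - pi) * np.exp(-lam)), 0.0)
    # M-step: closed-form updates for the mixing weight and Poisson mean
    pi = z.mean()
    lam = y.sum() / (n - z.sum())

print(f"estimated pi={pi:.2f}, lambda={lam:.2f}")
```

The ZIGP variants replace the Poisson component with a generalized Poisson, which adds a dispersion parameter but keeps the same two-part mixture structure.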

19.
In functional magnetic resonance imaging, spatial activation patterns are commonly estimated using a nonparametric smoothing approach. Significant peaks or clusters in the smoothed image are then identified by testing the null hypothesis of no activation in every volume element of the scans. A weakness of this approach is the lack of a model for the activation pattern; this makes it difficult to determine the variance of estimates, to test specific neuroscientific hypotheses, or to incorporate prior information about the brain area under study. These issues may be addressed by formulating explicit spatial models for the activation and using simulation methods for inference. We present one such approach, based on a marked point process prior: informally, the points are centres of activation, and the marks are parameters describing the shape and area of the surrounding cluster. We present an MCMC algorithm for inference in the model and compare the approach with a traditional nonparametric method, using both simulated and visual stimulation data. Finally, we discuss extensions of the model and the inferential framework to account for nonstationary responses and spatio-temporal correlation.


Copyright © Beijing Qinyun Technology Development Co., Ltd. (京ICP备09084417号)