Similar Documents
20 similar documents found (search time: 218 ms)
1.
In journal evaluation, whether principal component analysis (PCA) or factor analysis is used, the evaluation scores support only a ranking, not a direct judgement of quality; this weakens the interpretability of PCA- and factor-analysis-based evaluation, and is in essence the problem of an absolute benchmark for evaluation scores. Drawing on the characteristics of academic evaluation and the principle of the growth curve, this paper proposes standardizing PCA or factor-analysis evaluation scores with the Sigmoid function. Using JCR 2017 data on economics journals, the journals are evaluated by PCA and factor analysis, the scores are then standardized, and the statistical features of the standardized results are analysed. The results show that standardization improves the interpretability of PCA and factor-analysis evaluation, makes it possible to judge the development stage of the evaluated objects, smooths the scores while damping the top-scoring objects, and brings the evaluation results closer to a normal distribution.
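The Sigmoid standardization described in this abstract can be sketched as follows. The toy data, the variable names, and the use of only the leading principal component are illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np

def sigmoid_standardize(scores):
    """Map raw evaluation scores into (0, 1) via the logistic (Sigmoid) function."""
    z = (scores - scores.mean()) / scores.std()  # centre and scale first
    return 1.0 / (1.0 + np.exp(-z))

# Toy data: 6 journals evaluated on 3 indicators (hypothetical values).
rng = np.random.default_rng(0)
X = rng.normal(size=(6, 3))

# Evaluation score on the leading principal component, obtained from an
# eigendecomposition of the correlation matrix of the standardized indicators.
Z = (X - X.mean(axis=0)) / X.std(axis=0)
eigvals, eigvecs = np.linalg.eigh(np.corrcoef(Z, rowvar=False))
pc1 = Z @ eigvecs[:, -1]          # eigh returns eigenvalues ascending

standardized = sigmoid_standardize(pc1)
print(standardized)               # every value now lies strictly in (0, 1)
```

Because the logistic function is strictly increasing, the ranking of the evaluated objects is preserved while the scores gain a bounded, interpretable scale.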

2.
周世军 《统计教育》2008,(10):60-62
Comprehensive evaluation by principal components is usually based on cross-sectional data from the current period, ignoring the influence of the preceding period, so it is a static method. To address this shortcoming, this paper introduces a reward-penalty factor that converts the composite principal-component scores into a principal-component ranking index, yielding a dynamic principal-component evaluation method that uses the ranking index as the evaluation criterion.

3.
To handle the uncertainty in selecting the optimal industrial structure, this paper proposes an evaluation method involving a group of experts. It analyses the main factors affecting the evaluation, surveys evaluators from different backgrounds by questionnaire, and obtains normally distributed evaluation values through statistical analysis. Because the normal distribution function is computationally cumbersome, confidence levels are used to convert it into interval numbers, yielding an interval-valued group-expert method for choosing the optimal industrial structure. An empirical analysis of the choice among industrial-structure upgrading modes in a region verifies the rationality and feasibility of the method.

4.
Traditional process capability indices assume that the quality characteristic follows a normal distribution, but in production the characteristic is often confined to an interval of the real line, outside which no data can occur. Against this background, this paper derives computation formulas and parameter-estimation methods for process capability indices under a doubly truncated normal distribution, covering the cases where the distribution centre coincides with, and deviates from, the tolerance centre, and illustrates them with a numerical example.

5.
Many economic variables (e.g., GDP) have level series with a monotone trend over time, while cross-sectional observations (e.g., regional GDP) differ from one another. To study the average development level of, and the relationships among, economic variables over a period, this paper takes the perspective of interval-valued symbolic data and proposes a quantile-based Bayesian regression method for interval data whose interior scatter is asymmetrically distributed; the method can estimate the interval itself as well as predict the skewness and dispersion of the data within it. In a simulation study, hypothesis tests on the evaluation metrics compare the model with the lower/upper-bound and centre-radius models, and, reflecting the outliers found in real data, outliers are injected into the simulated data to further verify the advantage and robustness of the quantile method. In the empirical study, the proposed quantile method, the lower/upper-bound method, and the centre-radius method are applied to interval regression of annual regional GDP and gross industrial output data for China; the evaluation metrics show that the Bayesian quantile model fits and predicts better, and that the contribution of industrial growth differs across regions at different levels of GDP development.

6.
A comprehensive evaluation method based on nonlinear principal components and cluster analysis
To address the shortcomings of traditional principal components for nonlinear problems, this paper discusses the drawbacks of "centred standardization" as a dimensionless transformation and the deficiencies of the traditional method on "linear" data, gives improved procedures for the dimensionless transformation and for handling "nonlinear" data, and builds a new comprehensive evaluation method combining "log-centred" nonlinear principal component analysis with cluster analysis. Experiments show that the method handles nonlinear data effectively.

7.
A method for selecting virtual-enterprise partners based on uncertain information
马蕾 《统计与决策》2012,(11):182-185
This paper applies principal component analysis to construct a scientifically sound index system for evaluating virtual-enterprise partners. Interval numbers are then used as the scale for the evaluation information, and a point-accumulation method is adopted to analyse partner selection when multiple evaluators take part, yielding a method for selecting virtual-enterprise partners under uncertain information. The partner-selection process of a science-and-technology information company is analysed as an example, verifying the operability and rationality of the method.

8.
Dozens of comprehensive evaluation methods have been proposed at home and abroad, such as the composite index method, the analytic hierarchy process, the entropy method, and principal component analysis. These may collectively be called single evaluation methods, which fall into two classes: subjectively weighted methods (e.g., the Delphi and composite index methods) and objectively weighted methods (e.g., the entropy method and principal component analysis). The methods differ in the evaluator's point of departure; to make the results convincing, some experts have suggested evaluating with several methods and then combining their results, which is the so-called combined evaluation method.

9.
Building on existing weighting methods, and given the advantages of combined subjective-objective weighting in practical problems, this paper proposes a combined weighting method based on expert scoring and the interval mid-point distance method (a geometric-mean combination weighting method), shows the one-sidedness of using subjective or objective weighting alone, and demonstrates the rationality of the combined method through a worked example.

10.
A statistical data reliability assessment method based on robust principal component regression
Robust principal component regression (RPCR) combines robust principal component analysis with robust regression. This paper is the first to apply RPCR together with outlier diagnostics to assess the reliability of 2008 cross-sectional data on regional economic growth in China. The assessment shows that RPCR better overcomes the influence of outliers, makes the estimates more reliable, and effectively avoids the masking of multiple outliers to which classical principal component regression (CPCR) is prone. The 2008 regional growth data are broadly consistent with the related indicators, though the growth data of some regions may have reliability problems.

11.
Semi-parametric modelling of interval-valued data is of great practical importance, as exemplified by applications in economic and financial data analysis. We propose a flexible semi-parametric model for interval-valued data by integrating the partial linear regression model based on the Center & Range method, and investigate its estimation procedure. Furthermore, we introduce a test statistic that allows one to decide between a parametric linear model and a semi-parametric model, and approximate its null asymptotic distribution with the wild bootstrap method to obtain critical values. Extensive simulation studies evaluate the performance of the proposed methodology and the new test, and several empirical data sets are analysed to document its practical applications.

12.
In this paper, a new estimation procedure based on composite quantile regression and the functional principal component analysis (PCA) method is proposed for partially functional linear regression models (PFLRMs). The proposed method simultaneously estimates both the parametric regression coefficients and the functional coefficient components without specifying the error distribution. It is shown empirically to be more efficient for non-normal random errors, especially Cauchy errors, and almost as efficient for normal random errors. Furthermore, building on this estimation procedure, we use penalized composite quantile regression to study variable selection for the parametric part of the PFLRMs. Under certain regularity conditions, consistency, asymptotic normality, and the oracle property of the resulting estimators are derived. Simulation studies and a real data analysis assess the finite-sample performance of the proposed methods.
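The composite quantile regression idea underlying this abstract (a common slope vector shared across quantile levels, with one intercept per level) can be sketched by direct minimisation of the summed check losses. This is a generic sketch, not the paper's functional estimation procedure: the optimiser, data, and all names are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize

def composite_qr(X, y, taus):
    """Composite quantile regression: one shared slope vector, one intercept
    per quantile level, fitted by minimising the summed check losses."""
    n, p = X.shape
    K = len(taus)

    def loss(theta):
        b, beta = theta[:K], theta[K:]
        total = 0.0
        for k, tau in enumerate(taus):
            u = y - b[k] - X @ beta
            total += np.sum(u * (tau - (u < 0)))  # check loss rho_tau(u)
        return total

    # Warm-start the slope at the least-squares fit; intercepts at zero.
    theta0 = np.concatenate([np.zeros(K), np.linalg.lstsq(X, y, rcond=None)[0]])
    res = minimize(loss, theta0, method="Nelder-Mead",
                   options={"maxiter": 20000, "xatol": 1e-6, "fatol": 1e-6})
    return res.x[K:]  # the common slope estimate

rng = np.random.default_rng(2)
X = rng.normal(size=(300, 1))
y = 2.0 * X[:, 0] + rng.standard_t(df=2, size=300)   # heavy-tailed errors
beta_hat = composite_qr(X, y, taus=[0.25, 0.5, 0.75])
print(beta_hat)   # close to the true slope 2.0
```

Averaging check losses over several quantile levels is what gives composite quantile regression its efficiency under heavy-tailed errors, where least squares can perform poorly.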

13.
This paper introduces regularized functional principal component analysis for multidimensional functional data sets, using Gaussian basis functions. An essential point in a functional approach via basis expansions is evaluating the matrix of integrals of products of pairs of bases (the cross-product matrix). The advantages of Gaussian basis functions are that this cross-product matrix is easy to calculate and that they provide a much more flexible instrument for transforming each individual's observations into functional form. The proposed method is applied to three-dimensional (3D) protein structural data, an example of unbalanced data, and the application shows that it extracts useful information from such data. Numerical experiments compare the effectiveness of our Gaussian-basis method with a B-spline-based method, for which we also derive the exact form of the cross-product matrix in regularized functional principal component analysis. The numerical results show that our methodology is superior to the B-spline-based method for unbalanced data.

14.
Mixtures of linear regression models are a popular treatment for modeling nonlinear regression relationships. Their traditional estimation rests on a Gaussian error assumption, which is well known to be sensitive to outliers and extreme values. To overcome this issue, a new class of finite mixtures of quantile regressions (FMQR) is proposed in this article. Compared with existing Gaussian mixture regression models, the FMQR model provides a complete specification of the conditional distribution of the response variable for each component. From the likelihood point of view, the FMQR model is equivalent to a finite mixture of regression models with errors following the asymmetric Laplace distribution (ALD), and can be regarded as an extension of the traditional mixture of regression models with normal error terms. An EM algorithm that exploits a hierarchical representation of the ALD is proposed to obtain the parameter estimates, and the iteratively weighted least squares estimation for each mixture component of the FMQR model is derived. Simulation studies illustrate the finite-sample performance of the estimation procedure, and an analysis of an aphid data set illustrates the methodology.

15.
When functional data are not homogeneous, for example when the dataset contains multiple classes of functional curves, traditional estimation methods may fail. In this article, we propose a new estimation procedure for the mixture of Gaussian processes that incorporates both the functional and the inhomogeneous properties of the data. Our method can be viewed as a natural extension of high-dimensional normal mixtures; the key difference is that smoothed structures are imposed on both the mean and covariance functions. The model is shown to be identifiable and can be estimated efficiently by combining ideas from the expectation-maximization (EM) algorithm, kernel regression, and functional principal component analysis. The methodology is justified empirically by Monte Carlo simulations and illustrated by an analysis of a supermarket dataset.

16.
聂斌  杜梦莹  廖丹 《统计研究》2012,29(9):88-94
In Phase I of statistical process control, accurately identifying the time point at which the process state shifts is key to control performance. This paper takes the degree of eccentricity of data in multidimensional space as the criterion for the change-point rule: probability density profiles convert the sequence of individual observations into points in a multidimensional space, data-depth techniques construct the characteristic variable, and a change-point location rule is then established. Simulation results show that the new method locates change points accurately without assuming that the process follows a normal distribution, and it also shows good overall performance in comparative studies.

17.
Parameters of a finite mixture model are often estimated by the expectation-maximization (EM) algorithm, which maximizes the observed-data log-likelihood. This paper proposes an alternative iterative fitting procedure, called iterative Monte Carlo classification (IMCC). Within each iteration, it first estimates the membership probabilities for each data point, namely the conditional probability that the point belongs to a particular mixing component given its value; it then classifies each data point into a component distribution using the estimated conditional probabilities and the Monte Carlo method; finally, it updates the parameters of each component distribution from the classified data. Simulation studies compare IMCC with other algorithms for fitting mixture normal and mixture t densities.

18.
Univariate time series often take the form of a collection of curves observed sequentially over time, such as hourly ground-level ozone concentration curves; these can be viewed as a time series of functions observed at equally spaced intervals over a dense grid. Since functional time series may contain various types of outliers, we introduce a robust functional time series forecasting method that down-weights their influence. Through a robust principal component analysis based on projection pursuit, a time series of functions is decomposed into a set of robust dynamic functional principal components and their associated scores. Conditioning on the estimated functional principal components, the crux of the curve-forecasting problem lies in modelling and forecasting the principal component scores, here through a robust vector autoregressive forecasting method. A simulation study and an empirical study on forecasting ground-level ozone concentration demonstrate the superior forecast accuracy of the robust dynamic functional principal component regression, as well as superior estimation accuracy for the parameters of the vector autoregressive models used to model and forecast the scores, and hence improved curve forecasts.

19.
The Gaussian rank correlation equals the usual correlation coefficient computed from the normal scores of the data. Although its influence function is unbounded, it still has attractive robustness properties; in particular, its breakdown point is above 12%. Moreover, the estimator is consistent and asymptotically efficient at the normal distribution. The correlation matrix obtained from pairwise Gaussian rank correlations is always positive semidefinite and very easy to compute, even in high dimensions. We compare the properties of the Gaussian rank correlation with the popular Kendall and Spearman correlation measures. A simulation study confirms its good efficiency and robustness properties, and in an empirical application we show how it can be used for multivariate outlier detection based on robust principal component analysis.
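The normal-scores construction in this abstract is simple enough to sketch directly: replace each variable by the standard-normal quantiles of its ranks and take the ordinary Pearson correlation. The helper name and the toy data below are assumptions for illustration.

```python
import numpy as np
from scipy.stats import norm, rankdata

def gaussian_rank_corr(x, y):
    """Pearson correlation of the normal scores Phi^{-1}(rank / (n + 1))."""
    n = len(x)
    zx = norm.ppf(rankdata(x) / (n + 1))
    zy = norm.ppf(rankdata(y) / (n + 1))
    return np.corrcoef(zx, zy)[0, 1]

rng = np.random.default_rng(1)
x = rng.normal(size=200)
y = 0.8 * x + 0.6 * rng.normal(size=200)
print(gaussian_rank_corr(x, y))   # close to the true correlation 0.8

# One extreme outlier barely moves the rank-based estimate,
# while the ordinary sample correlation would be badly distorted.
x[0], y[0] = 100.0, -100.0
print(gaussian_rank_corr(x, y))
```

Because only the ranks enter the computation, a single gross outlier changes at most the rank positions, not the magnitude fed into the correlation, which is what gives the estimator its robustness.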

20.
This article mainly considers interval estimation of the scale and shape parameters of the generalized exponential (GE) distribution. We adopt the generalized fiducial method to construct a new kind of confidence interval for the parameters of interest and compare it with frequentist and Bayesian intervals; we also compare point estimation under the frequentist, generalized fiducial, and Bayesian methods. Simulation results show that the new procedure based on generalized fiducial inference is more suitable than the non-fiducial methods for point and interval estimation of the GE distribution. Finally, two lifetime data sets illustrate the application of the new procedure.

