Similar documents
20 similar documents found (search time: 218 ms)
1.
From the perspective of practical applications in economic statistics, this paper compares the background and reasons for the emergence of the concept of degrees of freedom, and argues that the concept should be defined starting from the relationship between the sample and the statistical population, rather than from the standpoint of statistical method. The application of degrees of freedom in economic statistics is also analysed through case studies.

2.
For two normal populations N(μ1, σ1²) and N(μ2, σ2²), this paper considers the hypothesis test H0: μ1 = μ2 and σ1² = σ2² versus H1: μ1 ≠ μ2 or σ1² ≠ σ2². Based on the Hellinger distance and the maximum likelihood estimates of the parameters, a test statistic is constructed and, under certain conditions, shown to be asymptotically chi-square distributed with 2 degrees of freedom. The robustness of the statistic is studied by random numerical simulation and compared with the likelihood ratio test.
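The abstract does not specify the Hellinger-distance statistic itself, but the classical likelihood ratio benchmark it is compared against can be sketched. The statistic below is the ordinary -2 log likelihood ratio for H0: μ1 = μ2, σ1² = σ2², which is likewise asymptotically chi-square with 2 degrees of freedom; sample sizes and replication counts are illustrative, not from the paper.

```python
import numpy as np

def lr_statistic(x, y):
    """-2 log likelihood ratio for H0: two normal samples share the same
    mean and variance; asymptotically chi-square with 2 degrees of freedom."""
    n1, n2 = len(x), len(y)
    pooled = np.concatenate([x, y])
    s0 = pooled.var()            # MLE variance under H0 (common mean)
    s1, s2 = x.var(), y.var()    # per-group MLE variances under H1
    return (n1 + n2) * np.log(s0) - n1 * np.log(s1) - n2 * np.log(s2)

rng = np.random.default_rng(0)
stats_h0 = [lr_statistic(rng.normal(0, 1, 100), rng.normal(0, 1, 100))
            for _ in range(2000)]
# chi2(2) is Exp(mean 2), so its 95% quantile is -2*ln(0.05) = 5.9915
rate = np.mean(np.array(stats_h0) > 5.9915)
print(rate)  # empirical size: close to the nominal 0.05 under H0
```

Under the null the empirical rejection rate at the chi-square cutoff should be close to the nominal 5%, which is the kind of calibration the paper's simulations check for its Hellinger-based statistic.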

3.
Econometric analysis generally requires testing the model for autocorrelation, but the commonly used autocorrelation tests all suffer from problems to varying degrees, so further study is worthwhile. For first-order autocorrelation the DW (Durbin-Watson) test is the most common method, but it has two inconclusive regions. For a given set of explanatory variables, simulation can be used to obtain exact critical values for the DW test, eliminating the inconclusive regions. The regression-based test has no ready-made critical values, but these too can be computed by simulation; moreover, except when power is already close to 1, the power of the regression-based test is significantly greater than that of the DW test, so it can replace the DW test. When the sample is not very large, the critical values of the LM test statistic differ considerably from those of the chi-square distribution, so the standard chi-square critical values should not be used. In the LM test the order of autocorrelation is usually determined by a t-test on the coefficient of the highest-order lag, but that t-statistic also differs considerably from the standard t distribution, so t-distribution critical values should not be used either.
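The simulated DW critical values described above can be sketched as follows: under the null of i.i.d. errors, OLS residuals depend only on the annihilator matrix of the fixed regressor matrix X, so the exact null distribution of the Durbin-Watson statistic for that X can be tabulated by Monte Carlo. This is a minimal sketch; the regressor matrix, sample size, and replication count are illustrative.

```python
import numpy as np

def dw_stat(resid):
    """Durbin-Watson statistic: sum of squared first differences of the
    residuals over their sum of squares; near 2 under no autocorrelation."""
    return np.sum(np.diff(resid) ** 2) / np.sum(resid ** 2)

def simulated_dw_critical(X, alpha=0.05, reps=5000, seed=0):
    """Monte Carlo lower-tail critical value of the DW statistic for a
    given regressor matrix X under i.i.d. N(0,1) errors, removing the
    inconclusive region of the tabulated dL/dU bounds."""
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    M = np.eye(n) - X @ np.linalg.pinv(X)   # residual maker: resid = M e
    dws = [dw_stat(M @ rng.standard_normal(n)) for _ in range(reps)]
    return np.quantile(dws, alpha)

n = 50
X = np.column_stack([np.ones(n), np.arange(n, dtype=float)])
crit = simulated_dw_critical(X)
print(crit)  # reject positive autocorrelation when observed DW < crit
```

Because the critical value is computed for the actual X in use, there is a single cutoff rather than the dL/dU pair with an inconclusive band between them.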

4.
The repeated measurement error model with equation error resolves the imperfect match between the true values of the covariates and of the response variable. To make hypothesis test results more accurate for small and moderate sample sizes, this paper derives an improved Skovgaard-type likelihood ratio test statistic based on the multivariate normal distribution, which converges to its chi-square limit faster under the null hypothesis, and applies it to test the significance of the regression parameters in the repeated measurement error model. Simulation studies show the superiority of the improved likelihood ratio statistic in finite samples; an empirical example testing the significance of the regression parameter between temperature and air pressure illustrates the practicality of the method.

5.
Li Youping, 《统计与决策》 (Statistics & Decision), 2007, (12): 143-144
This paper discusses the concept and computation of "degrees of freedom" in statistical hypothesis testing and sampling distributions in socio-economic research, and its influence on statistical distributions.

6.
The Box-Pierce Q test uses an approximate chi-square distribution to analyse the stationarity of a time series, and the choice of parameters of its test statistic affects the outcome. This paper extracts stationarity features from multiple Q values and builds a new stationarity criterion on that basis, which is a sufficient condition for the convergence of the autocorrelation function sequence; using the Euclidean distance as the similarity measure for stationarity features, a stationarity classification method is built with k-means clustering. The method fully accounts for the relationships between samples during stationarity analysis and avoids the traditional Box-Pierce Q test's heavy dependence on the statistical distribution and on critical-value tables. Experimental results show that the new method handles massive time-series data effectively, with accuracy higher than both the Q test and the ADF test.

7.
Because they account for spatial correlation between economic variables, spatial panel data models have increasingly evident advantages and have become an active research area in econometrics. The spatial dynamic panel model, which extends the panel model with both spatial correlation and dynamics, accounts not only for spatial correlation between economic variables but also for time lags; it develops the spatial panel model further and strengthens its explanatory power. This paper considers a spatial dynamic panel regression model with fixed individual effects, a time lag of the dependent variable, and spatial autocorrelation in both the dependent variable and the random error term, and proposes LM and LR tests for the existence of the time-lag effect when both the number of individuals n and the number of time periods T are large and T is relatively larger than n. The tests include joint tests as well as one- and two-dimensional marginal and conditional tests; the limiting distributions of these tests under the null hypothesis are derived, and all are chi-square. Simulation experiments on the small-sample properties of the test statistics show that they have good statistical properties.

8.
To measure the degree of correlation between two interval-scaled variables, Pearson's product-moment correlation coefficient is usually used, with formula r = Σ(xi − x̄)(yi − ȳ) / √(Σ(xi − x̄)² · Σ(yi − ȳ)²), where x̄ and ȳ are the sample means of variables x and y, and n is the sample size. Although everyone applies it, no one has given a systematic scientific interpretation of the Pearson product-moment correlation coefficient; as a result, people's understanding of the concept of correlation has become rigid on the one hand, and limitations have arisen in the practical application of correlation analysis on the other. This paper therefore tries to interpret the correlation coefficient of two interval-scaled variables from different angles and, on that basis, explores its application. 1. Understanding the correlation coefficient r through the co-variation trend: in a sample scatter plot in the x-y variable space, the distribution of the variables forms a linear band, i.e., exhibits a co-variation trend. Once sample data on variables x and y have been obtained…
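The product-moment formula above can be computed directly; a minimal sketch (the sample values are illustrative, not from the paper):

```python
import numpy as np

def pearson_r(x, y):
    """Pearson product-moment correlation:
    r = sum((x - xbar)*(y - ybar)) / sqrt(sum((x - xbar)^2) * sum((y - ybar)^2))"""
    x, y = np.asarray(x, float), np.asarray(y, float)
    dx, dy = x - x.mean(), y - y.mean()
    return np.sum(dx * dy) / np.sqrt(np.sum(dx ** 2) * np.sum(dy ** 2))

print(pearson_r([1, 2, 3, 4, 5], [2, 4, 5, 4, 5]))  # about 0.775
```

The numerator captures the co-variation trend the abstract describes: it is positive when x and y tend to deviate from their means in the same direction, and the denominator normalizes r into [-1, 1].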

9.
Using the empirical likelihood method, this paper discusses confidence regions for the parameters of generalized linear models with missing data, showing that the log empirical likelihood ratio statistic is asymptotically standard chi-square; some estimators of the parameters and their asymptotic distributions are given, and the proposed method is illustrated by data simulation.

10.
Addressing the lack of a quantitative measure of the error or credibility of results in statistical point estimation, this paper studies, under the assumption of an i.i.d. population sample, the relationship between the sampling window size (i.e., the sample size) and the statistical error of the sample mean and standard deviation; it proposes the concept of a confidence level for a point estimate, and gives a computer-simulation method for determining the relationship between that confidence level and the sample size for a particular statistical result. Simulation and application examples show that the proposed point-estimate confidence concept is fairly general and can further refine and supplement methods for evaluating the quality of point estimates.
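The kind of simulation the abstract describes, relating sample size to the statistical error of the sample mean, can be sketched as follows (a minimal sketch under an assumed N(0,1) population; the function name and parameter values are illustrative, not the paper's):

```python
import numpy as np

def mean_error_quantile(n, reps=4000, q=0.95, seed=0):
    """Monte Carlo q-quantile of |sample mean - true mean| for i.i.d.
    N(0,1) data, as a function of the sample size n: the error bound
    one can attach to a point estimate at confidence level q."""
    rng = np.random.default_rng(seed)
    errs = np.abs(rng.standard_normal((reps, n)).mean(axis=1))
    return np.quantile(errs, q)

for n in (25, 100, 400):
    print(n, mean_error_quantile(n))  # error shrinks roughly as 1/sqrt(n)
```

For normal data the 95% bound is close to 1.96/√n, so quadrupling the sampling window roughly halves the error, which is the sample-size/confidence trade-off the paper quantifies by simulation.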

11.
One of the most famous controversies in the history of statistics concerns the number of degrees of freedom of a chi-square test. In 1900, Pearson introduced the chi-square test for goodness of fit without recognizing that the degrees of freedom depend on the number of parameters estimated under the null hypothesis. Yule tried an 'experimental' approach to check the results by a short series of 'experiments'. Nowadays, an open-source language such as R gives the opportunity to check the adequacy of Pearson's arguments empirically. Pearson paid crucial attention to the relative error, which he stated 'will, as a rule, be small'. However, this point is fallacious, as is made evident by the simulations carried out with R. The simulations concentrate on 2×2 tables, where the fallacy of the argument is most evident; moreover, this is one of the most frequently employed cases in research practice.
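The abstract's R simulations are not reproduced here, but the same check can be sketched in Python: generate 2×2 tables under independence, compute Pearson's statistic with expected counts estimated from the observed marginals, and note that its mean is ≈ 1 (one degree of freedom, as Fisher later argued) rather than the 3 implied by Pearson's original count. The sample size and replication count below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
n, reps = 500, 5000
stats_sim = []
for _ in range(reps):
    # n observations from two independent binary factors
    rows = rng.random(n) < 0.4
    cols = rng.random(n) < 0.6
    obs = np.array([[np.sum(rows & cols), np.sum(rows & ~cols)],
                    [np.sum(~rows & cols), np.sum(~rows & ~cols)]], float)
    # expected counts use marginals estimated from the data itself
    exp = np.outer(obs.sum(axis=1), obs.sum(axis=0)) / n
    stats_sim.append(np.sum((obs - exp) ** 2 / exp))
mean_stat = np.mean(stats_sim)
print(mean_stat)  # close to 1 (the df), not 3
```

Since a chi-square distribution with k degrees of freedom has mean k, the simulated mean near 1 shows that estimating the two marginal proportions costs two degrees of freedom in the 2×2 case.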

12.
The Cornish-Fisher expansion of the Pearson type VI distribution is known to be reasonably accurate when both degrees of freedom are relatively large (say, greater than or equal to 5). However, when either or both degrees of freedom are less than 5, the accuracy of the computed percentage point begins to suffer, in some cases severely. To correct for this, the error surface in the degrees-of-freedom plane is modeled by least-squares curve fitting for selected levels of tail probability (.025, .05, and .10), which can be used to adjust the percentage point obtained from the usual Cornish-Fisher expansion. This adjustment procedure produces a computing algorithm that computes percentage points of the Pearson type VI distribution at the above probability levels, accurate to at least ±1 in 3 digits, in approximately 11 milliseconds per subroutine call on an IBM 370/145. The adjusted routine is valid for both degrees of freedom greater than or equal to 1.

13.
The debate over the degree of marketization of China's economy remains heated. This paper reviews the "Economic Freedom of the World" (EFW) index of Canada's Fraser Institute and its assessment of China's marketization; after comparing the main evaluation criteria, it argues that the EFW index perspective offers a fairly appropriate standard. The natural logarithm of China's per-capita GDP fits China's EFW-based economic freedom very closely, with regression equation: China's economic freedom = 1.23 + 0.686·Ln(per-capita GDP). On the basis of the regression, the paper predicts in which years before 2030 China will catch up with the 2005 (assessment-year) economic freedom levels of selected countries.

14.
The methodic use of Shannon's entropy as a basic concept, complementing probability, leads to a new class of statistics which provides, inter alia, a measure of mutual dissimilarity y between several frequency distributions. Application to contingency tables with any number of dimensions yields a dimensionless, standardised contingency coefficient which depends on the direction of inference and combines multiplicatively with the number of observed events. This class of statistics further includes a continuous modification W of the number of degrees of freedom in a table, and a measure Q of its overall information content. Numerical illustrations and comparisons with earlier results are worked out. Direct applications include the optimal partition of a quasicontinuum into cells by maximizing Q, the ordering of unordered tables by minimizing local values of y, and a tentative absolute weighting of inductive inference based on the minimal necessary shift, required by a hypothesis, between the actually observed data and a set of assumed future events.

15.
An explicit decomposition of the Pearson–Fisher and Dzhaparidze–Nikulin tests into asymptotically independent components, each distributed as chi-squared with one degree of freedom, is presented. The decomposition is formally the same for both tests and is valid for any partitioning of the sample space. Vector-valued tests are considered, whose components can be not only different scalar tests based on the same sample, but also scalar tests based on components or groups of components of the same statistic. Numerical examples illustrating the idea are presented.

16.
The distribution(s) of future response(s), given a set of data from an informative experiment, is known as the prediction distribution. The paper derives the prediction distribution(s) from a linear regression model with a multivariate Student-t error distribution using the structural relations of the model. We observe that the prediction distribution(s) are multivariate t-variate(s) with degrees of freedom that do not depend on the degrees of freedom of the error distribution.

17.
Approximate t-tests of single-degree-of-freedom hypotheses in generalized least squares (GLS) analyses of mixed linear models using restricted maximum likelihood (REML) estimates of variance components have previously been developed by Giesbrecht and Burns (GB) and by Jeske and Harville (JH), using method-of-moments approximations for the degrees of freedom (df) of the t-statistics. This paper proposes approximate F-statistics for tests of multiple-df hypotheses using one-moment and two-moment approximations, which may be viewed as extensions of the GB and JH methods. The paper focuses specifically on tests of hypotheses concerning the main-plot treatment factor in split-plot experiments with missing data. Simulation results indicate usually satisfactory control of Type I error rates.

18.
In mathematical statistics, the Fisher z transformation is the widely employed explicit elementary function used to approximate the cumulative distribution function of the standard normal distribution, and it is used to estimate confidence intervals for the Pearson product-moment correlation coefficient. A new sigmoid-like function is suggested to replace the Fisher z transformation; the new explicit elementary function is no more complicated than the Fisher z transformation, yet can be 4.677 times more accurate.
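The standard Fisher z construction that the proposed sigmoid-like function is meant to replace can be sketched as follows (the replacement function itself is not specified in the abstract, so only the baseline is shown; the numeric inputs are illustrative):

```python
import math

def fisher_z_ci(r, n, z_crit=1.96):
    """95% confidence interval for a correlation coefficient via the
    Fisher z transformation: z = atanh(r) is approximately normal with
    standard error 1/sqrt(n - 3); transform back with tanh."""
    z = math.atanh(r)
    se = 1.0 / math.sqrt(n - 3)
    lo, hi = z - z_crit * se, z + z_crit * se
    return math.tanh(lo), math.tanh(hi)

print(fisher_z_ci(0.5, 50))  # asymmetric interval around r = 0.5
```

The interval is asymmetric around r because tanh compresses the scale near ±1, which is precisely the variance-stabilizing behaviour any replacement function must reproduce.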

19.
This paper makes a tentative exploration of the origin of inferential statistics from both the logical and the historical angle; by analysing the concept of inferential statistics and tracing the historical facts, it argues that inferential statistics historically originated in the 1750s.

20.
Some subtle difficulties in optimal design are highlighted by the example of unreplicated field trials laid out on plots with spatial errors defined by uniformity trials. There is a dual problem of the arrangement of control plots and maximizing the number of test‐line entries. A simulation study is conducted by randomizing the allocation of genotypes to the plots of four uniformity trials in accordance with the rules defining a number of competing designs. Results are summarized in terms of the 'SE ratio', which reflects the improvement in precision of a given design relative to a completely random design on the same plots. The definition of the SE ratio overcomes problems induced by differential shrinkage and the consequent precision of test and control lines. A general result applying to all designs shows a curvilinear improvement in SE ratio with increasing error degrees of freedom of the design. The actual arrangement of check plots is of less importance than their increasing number, which contributes to increasing error degrees of freedom. Overall measures, including expected genetic gain, are used to illustrate the choice of a balance between the total number of test‐line entries and the error degrees of freedom.


Copyright © Beijing Qinyun Technology Development Co., Ltd. (北京勤云科技发展有限公司)  京ICP备09084417号