Similar Articles
20 similar articles found (search time: 953 ms)
1.
Judging from the history of statistical methods applied in medicine, nonparametric statistics are suitable both for quantitative data and for ordinal data (severity grades, data ordered by magnitude, rankings, and the like), and they are easy to learn, easy to understand, and convenient to use. The main nonparametric methods include: comparison of paired data, comparison of two independent groups, comparison of data from completely randomized designs, nonparametric methods for randomized block designs, rank correlation, the Ridit method, comparison of data grouped by grade, the runs test, the z test for randomness, pairwise comparisons among multiple groups, and the D test for normality.
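Two of the methods in this list, paired-data comparison (the Wilcoxon signed-rank test) and two-group comparison (the Mann-Whitney U test), can be sketched with scipy. The data below are invented illustration values, not taken from the article:

```python
import numpy as np
from scipy import stats

# Hypothetical paired measurements (e.g., before/after a treatment).
before = np.array([4.2, 5.1, 3.8, 6.0, 4.9, 5.5, 4.4, 5.8, 4.1, 5.3])
after = np.array([5.0, 5.9, 4.6, 6.8, 5.5, 6.1, 5.2, 6.5, 4.9, 6.0])

# Paired-data comparison: Wilcoxon signed-rank test on the differences.
w_stat, w_p = stats.wilcoxon(after, before)

# Two independent groups: Mann-Whitney U test (no normality assumed).
group_a = np.array([12, 15, 11, 14, 13, 16, 12, 15])
group_b = np.array([18, 21, 17, 20, 19, 22, 18, 21])
u_stat, u_p = stats.mannwhitneyu(group_a, group_b, alternative="two-sided")

print(f"Wilcoxon signed-rank p = {w_p:.4f}")
print(f"Mann-Whitney U p = {u_p:.4f}")
```

Both tests rank the observations rather than model their distribution, which is why they apply equally to quantitative and ordinal data.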

2.
What Is Statistics?   Cited: 1 (self-citations: 0, others: 1)
Li Chengrui. China Statistics, 2003, (10): 11-12
What is statistics? Posing such a question may strike many statisticians as odd. After all, most of us have a basic knowledge of statistics, plus practical experience of varying length; surely we understand the concept? True, through study and practice we all have some grasp of the concept of statistics, yet no concept is fixed forever.

3.
What is statistics? Posing such a question may strike many statisticians as odd. After all, most of us have a basic knowledge of statistics, plus practical experience of varying length; surely we understand the concept? True, through study and practice we all have some grasp of the concept of statistics, yet no concept is fixed forever, for a concept is a form of thought that reflects the distinctive attributes of its object, and its formation is the process by which people's knowledge of objective reality moves from the perceptual to the…

4.
"Are the wage statistics accurate?" This question lingers in many people's minds. Having worked in labor statistics for many years, my view is: "basically accurate, and becoming more accurate." Quite a few people are unconvinced by this view, …

5.
Building on the theory of stochastic dominance and the rapidly developing permutation-test methodology, this article constructs a statistical test for ranking small samples. The method takes the definition of second-order stochastic dominance as its test statistic, formulates hypotheses tailored to small-sample problems, and gives the corresponding decision rules. Applied to the "issuance announcement" question widely followed in the convertible-bond market, the test shows that after a convertible-bond issuance announcement, most companies' stock prices react negatively.

6.
In forecasting models that contain near-integrated time series, endogeneity makes the size of the parameter test statistic in the Scheffe test overly conservative, which in turn lowers its power. This paper applies a dynamic correction that adds leads and lags of the differenced explanatory variables, and uses simulation to compare the finite-sample properties of the statistic before and after the correction. The results show that the correction effectively reduces the impact of endogeneity on the Scheffe test: in small samples, the corrected Scheffe test not only has higher power but also markedly less size distortion.

7.
Letters to the Editor
From a reader: Dear Director Yan: Hello! As someone with nearly ten years of grassroots statistical work behind me, I want to say a few words from the heart. Isn't truthfulness the life of statistics? Then speaking a little truth should be a statistician's second life! Statistics is said to matter more and more, yet the state of grassroots statistics is truly worrying. In the institutional reforms, the first thing reformed was the township statistics stations: the stations that had been built with such difficulty were almost all abolished or merged. Township statisticians are well qualified, mostly graduates of colleges and technical schools, but they are replaced very frequently: after one or two years, three or four years, sometimes less than half a year. Is the professional competence of these new statisticians up to the job? Judging from my experience over these years, absolutely not. Why, then, has this situation arisen? To put it in more understandable terms: …

8.
Ouyang Minhua, Zhang Guijun. Statistical Research, 2016, 33(12): 101-109
Within the framework of STAR models, and allowing the time series a linear deterministic trend component, this paper constructs a recursively detrended unit-root test statistic and derives its asymptotic distribution. Taking initial conditions into account, it then makes a detailed finite-sample comparison of the recursively detrended, OLS-detrended, and GLS-detrended unit-root test statistics. If the initial condition is ignored, both the GLS-detrended and the recursively detrended statistics have significantly higher power than the OLS-detrended one. As the initial condition grows, the power of the GLS-detrended statistic falls off sharply, while that of the recursively detrended statistic remains comparatively stable and becomes the better choice when the sample is large.

9.
Liang Xiaojun. Shanghai Statistics, 2000, (10): 22-25
The normal distribution is the most important distribution in nature and describes many random phenomena. Statistical methods that presuppose a normally distributed population have been mastered by more and more statistical practitioners. In a practical problem, however, is the population necessarily normal? Applying the formulas blindly, without regard to whether this premise holds, can undermine the effectiveness of the method. Normality testing is therefore an important issue in the application of statistical methods. For a long time, Chinese textbooks have followed the Soviet model: when discussing normality tests they introduce only the chi-square goodness-of-fit test and the Kolmogorov…

10.
Bounds Testing Theory and Some Discussion   Cited: 4 (self-citations: 0, others: 4)
The cointegration techniques used to test long-run relationships among economic variables require the variables to be integrated of the same order, which unavoidably involves a degree of pre-testing, and pre-testing adds uncertainty to the analysis of long-run relationships. When the order of integration cannot be determined, bounds testing theory offers a new method for directly testing the long-run relationship between one variable and a set of explanatory variables. After introducing the basic VAR model and assumptions of the bounds testing approach, together with its key statistics, the Wald statistic and the t statistic, and their asymptotic distributions, this article notes several issues that arise in theory and in practice, and closes with an empirical example illustrating the use of bounds tests.

11.
Two types of decision errors can be made when using a quality control chart for non-conforming units (p-chart). A Type I error occurs when the process is not out of control but a search for an assignable cause is performed unnecessarily. A Type II error occurs when the process is out of control but a search for an assignable cause is not performed. The probability of a Type I error is under direct control of the decision-maker while the probability of a Type II error depends, in part, on the sample size. A simple sample size formula is presented for determining the required sample size for a p-chart with specified probabilities of Type I and Type II errors.
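The abstract does not reproduce the formula itself. The sketch below implements a standard normal-approximation sample-size formula of the kind the abstract describes (not necessarily the exact formula in the paper): choose n so that control limits set at the in-control fraction p0 give false-alarm probability α while a shift to p1 is detected with probability at least 1 − β.

```python
from math import ceil, sqrt
from statistics import NormalDist

def p_chart_sample_size(p0: float, p1: float, alpha: float, beta: float) -> int:
    """Smallest n (normal approximation) so that a p-chart centered at p0
    signals a shift to p1 with probability >= 1 - beta, while the two-sided
    false-alarm probability at p0 is alpha."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # sets the limit width
    z_beta = NormalDist().inv_cdf(1 - beta)        # detection requirement
    numerator = z_alpha * sqrt(p0 * (1 - p0)) + z_beta * sqrt(p1 * (1 - p1))
    return ceil((numerator / (p1 - p0)) ** 2)

# 3-sigma-style limits (alpha ~ 0.0027); detect a shift from 5% to 10%
# non-conforming with 90% probability.
n = p_chart_sample_size(0.05, 0.10, 0.0027, 0.10)
print(n)
```

As expected, a larger shift p1 − p0 needs a smaller sample, while tighter α or β pushes n up.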

12.
The impact of ignoring the stratification effect on the probability of a Type I error is investigated. The evaluation is in a clinical setting where the treatments may have different response rates among the strata. Deviation from the nominal probability of a Type I error, α, depends on the stratification imbalance and the heterogeneity in the response rates; it appears that the latter has a larger impact. The probability of a Type I error is depicted for cases in which the heterogeneity in the response rate is present but there is no stratification imbalance. Three-dimensional graphs are used to demonstrate the simultaneous impact of heterogeneity in response rates and of stratification imbalance.

13.
In this paper, Anbar's (1983) approach for estimating a difference between two binomial proportions is discussed with respect to a hypothesis testing problem. Such an approach results in two possible testing strategies. While the results of the tests are expected to agree for a large sample size when two proportions are equal, the tests are shown to perform quite differently in terms of their probabilities of a Type I error for selected sample sizes. Moreover, the tests can lead to different conclusions, which is illustrated via a simple example; and the probability of such cases can be relatively large. In an attempt to improve the tests while preserving their relative simplicity feature, a modified test is proposed. The performance of this test and a conventional test based on normal approximation is assessed. It is shown that the modified Anbar's test better controls the probability of a Type I error for moderate sample sizes.

14.
The independence assumption in statistical significance testing becomes increasingly crucial and unforgiving as sample size increases. Seemingly inconsequential violations of this assumption can substantially increase the probability of a Type I error if sample sizes are large. In the case of Student's t test, it is found that correlations within samples in the range 0.01 to 0.05 can lead to rejection of a true null hypothesis with high probability if N is 50, 100 or larger.
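The phenomenon is easy to reproduce by simulation. The sketch below uses an equicorrelated-normal setup of our own choosing (not the authors' code): samples of N = 100 observations sharing pairwise correlation ρ, tested against a true null hypothesis at the 5% level.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
N, reps, alpha = 100, 2000, 0.05

def reject_rate(rho: float) -> float:
    """Monte Carlo Type I error rate of the one-sample t test (H0: mu = 0)
    when all N observations share pairwise correlation rho."""
    rejections = 0
    for _ in range(reps):
        common = rng.standard_normal()  # component shared by the whole sample
        x = np.sqrt(rho) * common + np.sqrt(1.0 - rho) * rng.standard_normal(N)
        if stats.ttest_1samp(x, 0.0).pvalue < alpha:
            rejections += 1
    return rejections / reps

r_indep = reject_rate(0.0)   # sits near the nominal 0.05
r_corr = reject_rate(0.05)   # inflated far above 0.05
print(f"rho = 0.00: {r_indep:.3f}")
print(f"rho = 0.05: {r_corr:.3f}")
```

The mechanism: with equicorrelation ρ, the variance of the sample mean is inflated by a factor of roughly 1 + (N − 1)ρ, while the t statistic's denominator does not grow to match, so the test rejects far too often as N increases.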

15.
The ANOVA-F test is the most popular and commonly used procedure for comparing J independent groups. However, it is well known that this method is very sensitive to non-normality, which has led to the derivation of alternative techniques based on robust estimators. In this work, the ANOVA-F test, the trimmed-mean Welch test, the bootstrap-t trimmed-mean Welch test, the Schrader and Hettmansperger method with trimmed means, a percentile bootstrap method with trimmed means, and a newly proposed method were compared in terms of both the Type I error probability and power. The proposed method compares well with ANOVA-F and other alternatives under various situations.
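For the two-group case, the trimmed-mean Welch test (Yuen's test) is exposed in scipy through the `trim` argument of `stats.ttest_ind`. The simulation below, an illustrative contaminated-normal setup of ours rather than the paper's design, shows the kind of robustness payoff at stake under heavy tails:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n, shift, reps, alpha = 30, 1.0, 1000, 0.05

def contaminated(size: int, loc: float) -> np.ndarray:
    """90% N(loc, 1) + 10% N(loc, 10^2): symmetric but heavy-tailed."""
    scale = np.where(rng.random(size) < 0.1, 10.0, 1.0)
    return loc + scale * rng.standard_normal(size)

hits_t = hits_yuen = 0
for _ in range(reps):
    a = contaminated(n, 0.0)
    b = contaminated(n, shift)  # true location difference of `shift`
    # Classic Student t (pooled variance) vs. 20% trimmed Welch (Yuen).
    if stats.ttest_ind(a, b).pvalue < alpha:
        hits_t += 1
    if stats.ttest_ind(a, b, equal_var=False, trim=0.2).pvalue < alpha:
        hits_yuen += 1

print(f"power, Student t:     {hits_t / reps:.2f}")
print(f"power, trimmed Welch: {hits_yuen / reps:.2f}")
```

Trimming discards the tail observations that dominate the variance estimate, so the trimmed Welch test retains far more power than the classic test when outliers are present.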

16.
Testing for the equality of regression coefficients across two regressions is a problem considered by analysts in a variety of fields. If the variances of the errors of the two regressions are not equal, then it is known that the standard large sample F-test used to test the equality of the coefficients is compromised by the fact that its actual size can differ substantially from the stated level of significance in small samples. This article addresses this problem and borrows from the literature on the Behrens-Fisher problem to provide some simple modifications of the large sample test which allows one to better control the probability of committing a Type I error. Empirical evidence is presented which indicates that the suggested modifications provide tests which are superior to well-known alternative tests over a wide range of the parameter space.

17.
In this paper, the gamma(5, 2) distribution is considered as a failure model for the economic statistical design of x̄ control charts. The study shows that the statistical performance of control charts can be improved significantly, with only a slight increase in the cost, by adding constraints to the optimization problem. The use of an economic statistical design instead of an economic design results in control charts that may be less expensive to implement, that have lower false alarm rates, and that have a higher probability of detecting process shifts. Numerical examples are presented to support this proposition. The results of economic statistical design are compared with those of a pure economic design. The effects of adding constraints for statistical performance measures, such as the Type I error rate and the power of the chart, are extensively investigated.

18.
Because the usual F test for equal means is not robust to unequal variances, Brown and Forsythe (1974a) suggest replacing F with the statistics F* or W, which are based on the Satterthwaite and Welch adjusted degrees of freedom procedures. This paper reports practical situations where both F* and W give unsatisfactory results. In particular, both F* and W may not provide adequate control over Type I errors. Moreover, for equal variances but unequal sample sizes, W should be avoided in favor of F (or F*), but for equal sample sizes and possibly unequal variances, W was the only satisfactory statistic. New results on power are included as well. The paper also considers the effect of using F* or W only after a significant test for equal variances has been obtained, and new results on the robustness of the F test are described. It is found that even for equal sample sizes as large as 50 per treatment group, there are practical situations where the F test does not provide adequate control over the probability of a Type I error.

19.
20.
For binary endpoints, the required sample size depends not only on the known values of significance level, power and clinically relevant difference but also on the overall event rate. However, the overall event rate may vary considerably between studies and, as a consequence, the assumptions made in the planning phase on this nuisance parameter are to a great extent uncertain. The internal pilot study design is an appealing strategy to deal with this problem. Here, the overall event probability is estimated during the ongoing trial based on the pooled data of both treatment groups and, if necessary, the sample size is adjusted accordingly. From a regulatory viewpoint, besides preserving blindness it is required that eventual consequences for the Type I error rate should be explained. We present analytical computations of the actual Type I error rate for the internal pilot study design with binary endpoints and compare them with the actual level of the chi‐square test for the fixed sample size design. A method is given that permits control of the specified significance level for the chi‐square test under blinded sample size recalculation. Furthermore, the properties of the procedure with respect to power and expected sample size are assessed. Throughout the paper, both the situation of equal sample size per group and unequal allocation ratio are considered. The method is illustrated with application to a clinical trial in depression. Copyright © 2004 John Wiley & Sons Ltd.
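A minimal sketch of the blinded recalculation step, using assumed names and a plain normal-approximation formula (it does not implement the paper's level-controlling adjustment for the chi-square test): at the interim look only the pooled event rate is estimated, and the per-group sample size is recomputed for the originally planned risk difference.

```python
from math import ceil
from statistics import NormalDist

def blinded_reestimated_n(p_pooled: float, delta: float,
                          alpha: float = 0.05, power: float = 0.80) -> int:
    """Per-group sample size for comparing two proportions, recomputed from
    the blinded (pooled) interim event rate p_pooled while keeping the
    originally planned risk difference delta. Normal-approximation formula."""
    z_a = NormalDist().inv_cdf(1 - alpha / 2)
    z_b = NormalDist().inv_cdf(power)
    # Blinded guess: split the pooled rate symmetrically around delta.
    p1, p2 = p_pooled + delta / 2, p_pooled - delta / 2
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return ceil((z_a + z_b) ** 2 * variance / delta ** 2)

# Planned with an overall event rate of 0.30; an interim pooled estimate of
# 0.40 raises the binomial variance and hence the required n per group.
print(blinded_reestimated_n(0.30, 0.10))
print(blinded_reestimated_n(0.40, 0.10))
```

Because only the pooled rate enters the calculation, treatment allocation stays blinded, which is the regulatory attraction of the internal pilot design.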
