Found 20 similar documents (search time: 953 ms)
1.
Viewed from the history of statistical methods in medicine, nonparametric statistics apply both to quantitative data and to ordinal data (severity grades, rank orders, placements, and the like), and they are easy to learn, easy to understand, and convenient to use. Nonparametric statistics mainly include: comparison of paired data, comparison of independent groups, comparison of data from completely randomized designs, nonparametric methods for randomized block data, rank correlation, Ridit analysis, comparison of data grouped by grade, the runs test, the z test for randomness, pairwise comparisons among multiple groups, and the D test for normality.
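As a concrete illustration of one method from the list above, the following sketch applies the Wilcoxon rank-sum (Mann-Whitney U) test to two independent groups of ordinal severity grades. The data values are invented for illustration only.

```python
# Hypothetical example: comparing ordinal severity grades of two
# independent groups with the Wilcoxon rank-sum (Mann-Whitney U) test,
# one of the nonparametric methods listed above. Data are made up.
from scipy.stats import mannwhitneyu

group_a = [1, 2, 2, 3, 3, 3, 4, 4]  # severity grades, group A
group_b = [2, 3, 3, 4, 4, 4, 5, 5]  # severity grades, group B

stat, p_value = mannwhitneyu(group_a, group_b, alternative="two-sided")
print(f"U = {stat}, p = {p_value:.4f}")
```

Because the test uses only ranks, it is valid for graded outcomes where means and variances are not meaningful.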
2.
3.
What is statistics? Raising such a question may strike many statistical workers as odd. After all, most of us possess the basic knowledge of statistics and practical experience of varying length, so surely we understand what statistics means? True, through study and practice we all have some grasp of the concept, yet no concept is fixed forever. A concept is a form of thought that reflects the distinctive attributes of its object, and its formation is the process by which people move from perceptual to rational understanding of objective reality…
4.
5.
6.
For forecasting models containing near-integrated time series, endogeneity makes the size of the parameter statistic in the Scheffe test overly conservative, which in turn lowers the power of the test. This is corrected with a dynamic method that augments the regression with differenced leads and lags of the explanatory variables, and the finite-sample properties of the statistic before and after correction are compared by simulation. The results show that the correction effectively reduces the influence of endogeneity on the Scheffe test. In small samples, the corrected Scheffe test not only improves power but also markedly reduces size distortion.
7.
8.
Within the STAR model framework, allowing the time series to contain a linear deterministic trend component, this paper constructs a recursively detrended unit root test statistic and derives its asymptotic distribution. Taking initial conditions into account, the finite-sample properties of the recursively detrended, OLS-detrended, and GLS-detrended unit root test statistics are then compared in detail. When the influence of the initial condition is ignored, the GLS-detrended and recursively detrended statistics both have significantly higher power than the OLS-detrended statistic. As the initial condition grows, the power of the GLS-detrended statistic deteriorates sharply, while that of the recursively detrended statistic remains relatively stable and gains a clear advantage in larger samples.
9.
10.
Bounds testing: theory and some points of discussion   Total citations: 4 (self-citations: 0, citations by others: 4)
Cointegration techniques for testing long-run relationships among economic variables require that the variables be integrated of the same order, which unavoidably introduces a degree of pre-testing, and pre-testing adds uncertainty to the analysis of long-run relationships. When the order of integration of the variables cannot be determined, bounds testing offers a new method for directly testing the long-run relationship between one variable and a set of explanatory variables. After introducing the basic VAR model and hypotheses of the bounds testing approach and its key statistics, the Wald and t statistics together with their respective asymptotic distributions, the paper discusses several issues that deserve attention in both the theory and the practical application of bounds testing, and concludes with an empirical example illustrating its use.
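The Wald statistic mentioned above tests joint restrictions on regression coefficients. The sketch below computes a generic Wald F statistic for the joint hypothesis that two coefficients are zero in an OLS regression; it illustrates only the statistic's construction, not the full bounds procedure, whose critical values are nonstandard. All data here are simulated under the null.

```python
# Generic Wald test of joint zero restrictions in OLS, the building
# block of the Wald statistic used in bounds testing. Simulated data;
# this is NOT the full bounds-testing procedure.
import numpy as np

rng = np.random.default_rng(0)
n = 200
X = np.column_stack([np.ones(n), rng.normal(size=(n, 2))])
beta = np.array([1.0, 0.0, 0.0])          # last two coefficients truly zero
y = X @ beta + rng.normal(size=n)

b = np.linalg.lstsq(X, y, rcond=None)[0]  # OLS estimate
resid = y - X @ b
s2 = resid @ resid / (n - X.shape[1])     # error variance estimate
XtX_inv = np.linalg.inv(X.T @ X)

R = np.array([[0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0]])           # H0: beta1 = beta2 = 0
Rb = R @ b
W = Rb @ np.linalg.inv(R @ XtX_inv @ R.T * s2) @ Rb  # Wald statistic
F = W / R.shape[0]                        # F form, df1 = number of restrictions
print(f"Wald = {W:.3f}, F = {F:.3f}")
```

In bounds testing this statistic is compared against lower and upper critical bounds rather than a single F critical value.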
11.
Douglas G. Bonett, Journal of Applied Statistics, 1993, 20(3): 375-379
Two types of decision errors can be made when using a quality control chart for non-conforming units (p-chart). A Type I error occurs when the process is not out of control but a search for an assignable cause is performed unnecessarily. A Type II error occurs when the process is out of control but a search for an assignable cause is not performed. The probability of a Type I error is under direct control of the decision-maker while the probability of a Type II error depends, in part, on the sample size. A simple sample size formula is presented for determining the required sample size for a p-chart with specified probabilities of Type I and Type II errors.
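The idea of the abstract above can be sketched with the standard normal-approximation sample-size formula for detecting a shift in the nonconforming proportion from p0 to p1: this is a common textbook approximation, not necessarily the exact formula derived in the article.

```python
# Normal-approximation sample size for a p-chart: choose n so the chart
# signals a shift p0 -> p1 with Type I error alpha and Type II error
# beta. A standard textbook approximation; the article's own formula
# may differ in detail.
from math import ceil, sqrt
from scipy.stats import norm

def p_chart_sample_size(p0, p1, alpha=0.0027, beta=0.10):
    """n so that a shift from p0 to p1 is detected with power 1 - beta."""
    z_a = norm.ppf(1 - alpha / 2)   # ~3 for the usual 3-sigma limits
    z_b = norm.ppf(1 - beta)
    num = z_a * sqrt(p0 * (1 - p0)) + z_b * sqrt(p1 * (1 - p1))
    return ceil((num / (p1 - p0)) ** 2)

# Example: in-control rate 5%, detect a shift to 10% with 90% power
print(p_chart_sample_size(0.05, 0.10))
```

Larger shifts or lower required power reduce the sample size quadratically, since the shift p1 - p0 enters the denominator squared.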
12.
Kazem Kazempour, The American Statistician, 2013, 67(2): 170-174
The impact of ignoring the stratification effect on the probability of a Type I error is investigated. The evaluation is in a clinical setting where the treatments may have different response rates among the strata. Deviation from the nominal probability of a Type I error, α, depends on the stratification imbalance and the heterogeneity in the response rates; it appears that the latter has a larger impact. The probability of a Type I error is depicted for cases in which the heterogeneity in the response rate is present but there is no stratification imbalance. Three-dimensional graphs are used to demonstrate the simultaneous impact of heterogeneity in response rates and of stratification imbalance.
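A small Monte Carlo sketch of the phenomenon described above: when response rates differ sharply between strata and treatment allocation is imbalanced across strata, an unstratified analysis can reject a true null far more often than the nominal 5%. The rates, sample sizes, and imbalance below are illustrative assumptions, not the paper's settings.

```python
# Monte Carlo: both treatments are identical within each stratum (null
# is true), but stratum response rates differ (0.1 vs 0.6) and the
# allocation is imbalanced across strata. An unstratified chi-square
# test then rejects far more often than 5%. Illustrative parameters.
import numpy as np
from scipy.stats import chi2_contingency

rng = np.random.default_rng(42)
n_sims, rejections = 300, 0
for _ in range(n_sims):
    # Stratum 1 (rate 0.1): treatment 80 patients, control 20
    # Stratum 2 (rate 0.6): treatment 20 patients, control 80
    trt = np.concatenate([rng.binomial(1, 0.1, 80), rng.binomial(1, 0.6, 20)])
    ctl = np.concatenate([rng.binomial(1, 0.1, 20), rng.binomial(1, 0.6, 80)])
    table = [[trt.sum(), len(trt) - trt.sum()],
             [ctl.sum(), len(ctl) - ctl.sum()]]
    chi2, p, dof, expected = chi2_contingency(table)
    if p < 0.05:
        rejections += 1   # false rejection: the treatments are identical
print(rejections / n_sims)
```

With this degree of imbalance the pooled response rates of the two arms differ by construction (about 0.2 vs 0.5), so the unadjusted test is badly biased even though no treatment effect exists.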
13.
In this paper, Anbar's (1983) approach for estimating a difference between two binomial proportions is discussed with respect to a hypothesis testing problem. Such an approach results in two possible testing strategies. While the results of the tests are expected to agree for a large sample size when two proportions are equal, the tests are shown to perform quite differently in terms of their probabilities of a Type I error for selected sample sizes. Moreover, the tests can lead to different conclusions, which is illustrated via a simple example; and the probability of such cases can be relatively large. In an attempt to improve the tests while preserving their relative simplicity feature, a modified test is proposed. The performance of this test and a conventional test based on normal approximation is assessed. It is shown that the modified Anbar's test better controls the probability of a Type I error for moderate sample sizes.
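For reference, the "conventional test based on normal approximation" against which the modified test is assessed is the familiar pooled two-proportion z test, sketched below with made-up counts. This is the baseline comparator only, not Anbar's test or the paper's modification.

```python
# Pooled two-sided z test for H0: p1 == p2, the conventional
# normal-approximation test mentioned in the abstract. Counts are
# hypothetical.
from math import sqrt
from scipy.stats import norm

def two_prop_z(x1, n1, x2, n2):
    """Return (z, two-sided p-value) for the pooled two-proportion z test."""
    p1_hat, p2_hat = x1 / n1, x2 / n2
    p_pool = (x1 + x2) / (n1 + n2)              # pooled rate under H0
    se = sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))
    z = (p1_hat - p2_hat) / se
    return z, 2 * norm.sf(abs(z))

z, p = two_prop_z(18, 50, 10, 50)   # e.g. 18/50 vs 10/50 events
print(f"z = {z:.3f}, p = {p:.4f}")
```

Its actual Type I error rate can drift from the nominal level in small and moderate samples, which is precisely the deficiency the paper's modified test targets.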
14.
Inflated statistical significance of Student's t test associated with small intersubject correlation
Journal of Statistical Computation and Simulation, 2012, 82(9): 691-696
The independence assumption in statistical significance testing becomes increasingly crucial and unforgiving as sample size increases. Seemingly inconsequential violations of this assumption can substantially increase the probability of a Type I error if sample sizes are large. In the case of Student's t test, it is found that correlations within samples in a range from 0.01 to 0.05 can lead to rejection of a true null hypothesis with high probability, if N is 50, 100 or larger.
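The effect is easy to reproduce by simulation. The sketch below applies a one-sample t test to N = 100 equi-correlated observations with intraclass correlation 0.05; the parameters mirror the abstract's range, though the exact setup studied there may differ.

```python
# Simulation: a one-sample t test on N equi-correlated observations
# (intraclass rho = 0.05, induced by a shared random component) rejects
# a true null far more often than the nominal 5%. Parameters chosen to
# mirror the abstract; the paper's exact design may differ.
import numpy as np
from scipy.stats import ttest_1samp

rng = np.random.default_rng(1)
rho, n, n_sims = 0.05, 100, 2000
rejections = 0
for _ in range(n_sims):
    common = rng.normal()   # shared component creates within-sample correlation
    x = np.sqrt(rho) * common + np.sqrt(1 - rho) * rng.normal(size=n)
    if ttest_1samp(x, 0.0).pvalue < 0.05:
        rejections += 1
print(rejections / n_sims)   # far above the nominal 0.05
```

The inflation arises because the variance of the sample mean is (1 + (n-1)ρ)/n rather than 1/n, roughly six times larger here, while the t statistic still assumes independence.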
15.
The ANOVA-F test is the most popular and commonly used procedure for comparing J independent groups. However, it is well known that this method is very sensitive to non-normality, which has led to the derivation of alternative techniques based on robust estimators. In this work, the ANOVA-F test, trimmed mean Welch test, bootstrap-t trimmed mean Welch test, Schrader and Hettmansperger method with trimmed means, a percentile bootstrap method with trimmed means and a newly proposed method were compared in terms of both the Type I error probability and power. The proposed method compares well with ANOVA-F and other alternatives under various situations.
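The trimmed-mean Welch test above builds on Welch's heteroscedastic ANOVA. As a reference point, here is a sketch of the plain (untrimmed) Welch test for J independent groups, following the standard Welch (1951) formulas; the data are illustrative.

```python
# Welch's heteroscedastic ANOVA for J independent groups, the untrimmed
# ancestor of the trimmed-mean Welch tests compared in the abstract.
# Data values are illustrative.
import numpy as np
from scipy.stats import f as f_dist

def welch_anova(*groups):
    """Return (F, p-value) for Welch's J-group test of equal means."""
    J = len(groups)
    n = np.array([len(g) for g in groups], dtype=float)
    m = np.array([np.mean(g) for g in groups])
    v = np.array([np.var(g, ddof=1) for g in groups])
    w = n / v                                  # precision weights
    W = w.sum()
    mw = (w * m).sum() / W                     # weighted grand mean
    tmp = ((1 - w / W) ** 2 / (n - 1)).sum()
    A = (w * (m - mw) ** 2).sum() / (J - 1)
    B = 1 + 2 * (J - 2) / (J ** 2 - 1) * tmp
    F = A / B
    df1, df2 = J - 1, (J ** 2 - 1) / (3 * tmp)
    return F, f_dist.sf(F, df1, df2)

g1 = [10.1, 9.8, 10.5, 10.0, 9.9]
g2 = [10.6, 10.9, 10.4, 11.0, 10.7]
g3 = [9.2, 9.5, 9.1, 9.4, 9.0]
F, p = welch_anova(g1, g2, g3)
print(f"F = {F:.2f}, p = {p:.4f}")
```

The robust variants in the paper replace the group means and variances with trimmed means and Winsorized variances, keeping the same weighting idea.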
16.
Dennis Oberhelman, Communications in Statistics - Simulation and Computation, 2013, 42(1): 99-121
Testing for the equality of regression coefficients across two regressions is a problem considered by analysts in a variety of fields. If the variances of the errors of the two regressions are not equal, then it is known that the standard large sample F-test used to test the equality of the coefficients is compromised by the fact that its actual size can differ substantially from the stated level of significance in small samples. This article addresses this problem and borrows from the literature on the Behrens-Fisher problem to provide some simple modifications of the large sample test which allows one to better control the probability of committing a Type I error. Empirical evidence is presented which indicates that the suggested modifications provide tests which are superior to well-known alternative tests over a wide range of the parameter space.
17.
In this paper, a gamma(5, 2) distribution is considered as a failure model for the economic statistical design of x̄ control charts. The study shows that the statistical performance of control charts can be improved significantly, with only a slight increase in the cost, by adding constraints to the optimization problem. The use of an economic statistical design instead of an economic design results in control charts that may be less expensive to implement, that have lower false alarm rates, and that have a higher probability of detecting process shifts. Numerical examples are presented to support this proposition. The results of economic statistical design are compared with those of a pure economic design. The effects of adding constraints for statistical performance measures, such as Type I error rate and the power of the chart, are extensively investigated.
18.
Because the usual F test for equal means is not robust to unequal variances, Brown and Forsythe (1974a) suggest replacing F with the statistics F* or W, which are based on the Satterthwaite and Welch adjusted degrees of freedom procedures. This paper reports practical situations where both F* and W give unsatisfactory results. In particular, both F* and W may not provide adequate control over Type I errors. Moreover, for equal variances but unequal sample sizes, W should be avoided in favor of F (or F*), but for equal sample sizes, and possibly unequal variances, W was the only satisfactory statistic. New results on power are included as well. The paper also considers the effect of using F* or W only after a significant test for equal variances has been obtained, and new results on the robustness of the F test are described. It is found that even for equal sample sizes as large as 50 per treatment group, there are practical situations where the F test does not provide adequate control over the probability of a Type I error.
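The F* statistic discussed above replaces the usual ANOVA denominator with a variance-weighted term so that unequal group variances are accommodated, with Satterthwaite degrees of freedom for the denominator. The sketch below follows the standard Brown-Forsythe (1974) construction; the data are illustrative.

```python
# Brown-Forsythe F* statistic for comparing group means under unequal
# variances: usual between-group numerator, variance-weighted
# denominator, Satterthwaite denominator df. Data are illustrative.
import numpy as np
from scipy.stats import f as f_dist

def brown_forsythe_fstar(*groups):
    """Return (F*, p-value) for the Brown-Forsythe test of equal means."""
    J = len(groups)
    n = np.array([len(g) for g in groups], dtype=float)
    N = n.sum()
    m = np.array([np.mean(g) for g in groups])
    v = np.array([np.var(g, ddof=1) for g in groups])
    grand = np.concatenate(groups).mean()
    num = (n * (m - grand) ** 2).sum()          # between-group sum of squares
    c = (1 - n / N) * v
    denom = c.sum()
    F_star = num / denom
    df2 = denom ** 2 / ((c ** 2) / (n - 1)).sum()   # Satterthwaite df
    return F_star, f_dist.sf(F_star, J - 1, df2)

a = [4.2, 4.8, 4.5, 4.1]
b = [5.9, 6.3, 6.1, 6.5, 6.0, 6.2]
c_grp = [3.1, 3.4, 2.9, 3.3, 3.0]
F, p = brown_forsythe_fstar(a, b, c_grp)
print(f"F* = {F:.2f}, p = {p:.4f}")
```

Unlike W, the F* numerator is unchanged from classical ANOVA, which is why the two statistics behave differently under unequal sample sizes, as the abstract notes.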
19.
20.
For binary endpoints, the required sample size depends not only on the known values of significance level, power and clinically relevant difference but also on the overall event rate. However, the overall event rate may vary considerably between studies and, as a consequence, the assumptions made in the planning phase on this nuisance parameter are to a great extent uncertain. The internal pilot study design is an appealing strategy to deal with this problem. Here, the overall event probability is estimated during the ongoing trial based on the pooled data of both treatment groups and, if necessary, the sample size is adjusted accordingly. From a regulatory viewpoint, besides preserving blindness it is required that eventual consequences for the Type I error rate should be explained. We present analytical computations of the actual Type I error rate for the internal pilot study design with binary endpoints and compare them with the actual level of the chi‐square test for the fixed sample size design. A method is given that permits control of the specified significance level for the chi‐square test under blinded sample size recalculation. Furthermore, the properties of the procedure with respect to power and expected sample size are assessed. Throughout the paper, both the situation of equal sample size per group and unequal allocation ratio are considered. The method is illustrated with application to a clinical trial in depression. Copyright © 2004 John Wiley & Sons Ltd.
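A minimal sketch of the blinded recalculation idea described above: midway through the trial, the overall event rate is re-estimated from the pooled data (no treatment labels needed) and the sample size is recomputed from the standard two-proportion approximation. The formula and all numbers below are common textbook choices, not the paper's own settings.

```python
# Blinded internal-pilot sample-size recalculation for a binary
# endpoint: re-estimate the overall event rate from pooled interim data
# and recompute n per group. Standard normal-approximation formula;
# rates and interim counts are hypothetical.
from math import ceil, sqrt
from scipy.stats import norm

def n_per_group(p_bar, delta, alpha=0.05, power=0.80):
    """Approximate n per group to detect a difference delta around rate p_bar."""
    z_a, z_b = norm.ppf(1 - alpha / 2), norm.ppf(power)
    return ceil(2 * p_bar * (1 - p_bar) * ((z_a + z_b) / delta) ** 2)

# Planning stage: assumed overall event rate 0.30, relevant difference 0.15
print(n_per_group(0.30, 0.15))

# Internal pilot: pooled interim data show 52 events in 120 patients,
# i.e. a higher event rate than planned, so the sample size goes up
p_hat = 52 / 120
print(n_per_group(p_hat, 0.15))
```

Because only the pooled rate enters the recalculation, blindness is preserved; the paper's contribution is quantifying, and then controlling, the effect of this adjustment on the chi-square test's Type I error rate.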