Similar Literature
19 similar documents found.
1.
This study generates normal, polytomous, and dichotomous data under three generalizability-theory designs, p×i, p×i×h, and p×(i:h), estimates the variance components, standard errors, and confidence intervals with the Jackknife method and the Traditional method, and compares the performance of the two. The results show that: (1) the Jackknife method is quite accurate in estimating both variance components and standard errors; (2) compared with the Traditional method, the Jackknife method is slightly weaker at estimating confidence intervals for the variance components; (3) unlike the Traditional method, the accuracy of the Jackknife method does not fluctuate with data type, research design, or variance component, so it is more robust.
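The delete-one jackknife used above can be sketched generically: drop one observation at a time, recompute the statistic, and combine the replicates into a standard error. A minimal Python sketch, applied for brevity to the sample variance rather than to a full p×i ANOVA variance component:

```python
import random
import statistics

def jackknife_se(data, stat):
    """Delete-one jackknife standard error of stat(data)."""
    n = len(data)
    # Recompute the statistic with each observation left out in turn.
    reps = [stat(data[:i] + data[i + 1:]) for i in range(n)]
    mean_rep = sum(reps) / n
    return ((n - 1) / n * sum((r - mean_rep) ** 2 for r in reps)) ** 0.5

random.seed(0)
sample = [random.gauss(0, 1) for _ in range(50)]
se = jackknife_se(sample, statistics.variance)  # SE of the sample variance
```

For the mean, this formula reduces exactly to s / sqrt(n), which gives a quick sanity check on the implementation.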

2.
MCMC is a dynamic Monte Carlo method that can be used to estimate variance components in generalizability theory. Variance components estimated by MCMC depend on the sample drawn: different samples may yield different estimates, so their variability needs to be examined. Using Monte Carlo simulation under a normal distribution, this study examines how the presence or absence of prior information affects the variability of MCMC estimates of generalizability-theory variance components. The results show that MCMC with prior information estimates variance component standard errors more precisely than MCMC without prior information, but this advantage shrinks as the sample size of facet i increases; for confidence intervals of the variance components, the two approaches become comparable in precision as the sample size of i grows.
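The with/without-prior contrast can be illustrated in a deliberately simplified conjugate setting (a single normal variance with known mean and an inverse-gamma prior), where posterior draws are available without running a Metropolis chain; the hyperparameters a0 and b0 below are illustrative assumptions, not values from the study:

```python
import random

def posterior_variance_samples(data, a0=2.0, b0=1.0, n_draws=2000, rng=None):
    """Monte Carlo draws from the posterior of a normal variance with known
    mean 0 under an inverse-gamma(a0, b0) prior. The model is conjugate:
    posterior = inverse-gamma(a0 + n/2, b0 + sum(x^2)/2)."""
    rng = rng or random.Random(0)
    a = a0 + len(data) / 2
    b = b0 + sum(x * x for x in data) / 2
    # If 1/s2 ~ Gamma(a, rate=b), then s2 ~ inverse-gamma(a, b).
    return [b / rng.gammavariate(a, 1.0) for _ in range(n_draws)]

random.seed(4)
data = [random.gauss(0, 1) for _ in range(100)]
draws = posterior_variance_samples(data)
post_mean = sum(draws) / len(draws)  # should sit near the true variance, 1.0
```

Setting a0 and b0 near zero approximates the no-prior-information case, while larger values pull the draws toward the prior, mirroring the comparison in the abstract.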

3.
Generalizability theory, also known as the variance component model, produces variance component estimates that depend on the sample drawn: different samples may yield different estimates. To reduce estimation error, the variability of the variance components (e.g., confidence intervals) deserves attention. The Bootstrap is a resampling-with-replacement method that can be used to estimate confidence intervals for variance components in generalizability theory. Using Monte Carlo simulation, this study compares the performance of the Bootstrap PC and BCa methods for this purpose. The results show that: (1) compared with the uncorrected versions, the corrected Bootstrap PC and BCa methods estimate the confidence intervals more reliably; (2) the corrected Bootstrap BCa method outperforms the corrected Bootstrap PC method.
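The uncorrected percentile (PC) interval that serves as the baseline here is simple to state: resample with replacement, recompute the statistic, and read off empirical quantiles of the replicates. A minimal sketch, applied to the mean (a G-theory application would substitute an ANOVA variance component estimator for the statistic):

```python
import random

def bootstrap_pc_ci(data, stat, n_boot=2000, alpha=0.05, rng=None):
    """Uncorrected percentile (PC) bootstrap confidence interval."""
    rng = rng or random.Random(0)
    n = len(data)
    # Statistic recomputed on n_boot with-replacement resamples.
    reps = sorted(stat([data[rng.randrange(n)] for _ in range(n)])
                  for _ in range(n_boot))
    return (reps[int(alpha / 2 * n_boot)],
            reps[int((1 - alpha / 2) * n_boot) - 1])

random.seed(1)
sample = [random.gauss(10, 2) for _ in range(100)]
lo, hi = bootstrap_pc_ci(sample, lambda d: sum(d) / len(d))
```

The BCa interval adjusts the two quantile positions with a bias-correction term and a jackknife-estimated acceleration term, which is where the improved coverage reported above comes from.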

4.
Monte Carlo simulation was used to generate polytomous data and compare four Bootstrap methods for estimating confidence intervals of generalizability-theory variance components: Bootstrap-PC, Bootstrap-t, Bootstrap-BCa, and Bootstrap-ABC. The results show that: (1) overall, in terms of coverage of the confidence intervals, the corrected Bootstrap methods outperform the uncorrected ones; (2) the corrected Bootstrap-PC and Bootstrap-t methods perform comparably, as do the corrected Bootstrap-BCa and Bootstrap-ABC methods, and the latter pair outperforms the former.
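Bootstrap-t differs from the percentile method by bootstrapping a studentized quantity rather than the statistic itself. A sketch for the mean, where the standard error of each resample is available in closed form (for a variance component it would itself have to be estimated, e.g. by a nested bootstrap):

```python
import random
import statistics

def bootstrap_t_ci(data, n_boot=2000, alpha=0.05, rng=None):
    """Bootstrap-t confidence interval for the mean."""
    rng = rng or random.Random(0)
    n = len(data)
    mean = statistics.fmean(data)
    se = statistics.stdev(data) / n ** 0.5
    ts = []
    for _ in range(n_boot):
        rs = [data[rng.randrange(n)] for _ in range(n)]
        se_b = statistics.stdev(rs) / n ** 0.5
        ts.append((statistics.fmean(rs) - mean) / se_b)
    ts.sort()
    t_hi = ts[int((1 - alpha / 2) * n_boot) - 1]
    t_lo = ts[int(alpha / 2 * n_boot)]
    # Note the reversal: the upper t-quantile sets the lower endpoint.
    return mean - t_hi * se, mean - t_lo * se

random.seed(2)
sample = [random.gauss(0, 1) for _ in range(60)]
lo, hi = bootstrap_t_ci(sample)
```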

5.
For the general linear mixed model with two variance components, the combined spectral decomposition estimator of the random-effect variance components is improved. Under the normality assumption, an invariant class of estimators is considered; it is shown that, in the mean-squared-error sense, no uniformly optimal estimator exists in this class, but in an important subclass a uniformly optimal estimator is found, and a truncation method yields a positive estimator superior to the positive part of the combined spectral decomposition estimator.

6.
The application domains of psychological and educational measurement have changed considerably: traits such as the knowledge and ability of the measured population no longer necessarily follow a distribution with zero skewness. Using properties of the generalized hyperbolic distribution, this study simulates skewed data with specified skewness and examines how skewness affects variance component estimation in generalizability theory. The results show that the generalized hyperbolic distribution can effectively generate the skewed data needed for generalizability theory, and that such skewed data affect the variance component estimates obtained by the various estimation methods.

7.
When standard complete-data statistical methods are applied to a "completed" data set produced by imputation, the variance of the imputation estimator is often underestimated. The Bootstrap is an important method in nonparametric statistics: it repeatedly resamples the original observations, makes full use of the available data without any distributional assumptions about the unknown population or additional sample information, and then uses existing statistical models to draw inferences about the population's distributional properties. This paper first applies multiple imputation to the missing data and then uses the Bootstrap to estimate the variance of the imputation statistic; the results show that the Bootstrap yields a more sound and accurate variance estimate for the imputation statistic.
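The key point, resampling the incomplete data and re-imputing inside each bootstrap replicate so that imputation uncertainty enters the variance estimate, can be sketched as follows; for brevity the sketch uses single mean imputation in place of the paper's multiple imputation:

```python
import random
import statistics

def impute_mean(data):
    """Fill missing values (None) with the mean of the observed values."""
    m = statistics.fmean([x for x in data if x is not None])
    return [m if x is None else x for x in data]

def bootstrap_se_after_imputation(data, n_boot=1000, rng=None):
    """Bootstrap SE of the mean that re-imputes each resample, so the
    extra uncertainty introduced by imputation is not ignored."""
    rng = rng or random.Random(0)
    n = len(data)
    means = [statistics.fmean(impute_mean([data[rng.randrange(n)]
                                           for _ in range(n)]))
             for _ in range(n_boot)]
    return statistics.stdev(means)

incomplete = [1.0, 2.0, None, 4.0, 5.0, None, 3.0, 2.5, 3.5, None] * 3
se = bootstrap_se_after_imputation(incomplete)
```

Imputing once and then bootstrapping the completed data would treat the imputed values as real observations and understate the variance, which is exactly the underestimation the abstract warns about.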

8.
This paper systematically studies pivotalization and approximate pivotalization of Bootstrap statistics, compares the strengths and weaknesses of the BCa method, the studentized Bootstrap, and the nested Bootstrap, and describes their differences and connections, concluding with a worked example. In particular: a variance-stabilizing transformation is the basis of good pivotalization, and the BCa method shares the transformation-invariance property of the simple percentile method; the coverage of studentized Bootstrap confidence intervals, however, is sensitive to outliers in the data, and the method lacks the percentile method's transformation invariance, so its practical performance is not necessarily good.

9.
The Bootstrap is a resampling-with-replacement method that can be used in hypothesis tests about means. Using Monte Carlo simulation of normally distributed data, this study designs a research procedure to examine, across different sample sizes and numbers of resamples, the sample sizes for which the Bootstrap is suitable in mean hypothesis testing, with the Type I error rate as the comparison criterion. The results show that, across the three comparison conditions, an acceptable Type I error rate is obtained only when the sample size is at least 5 and the number of resamples is at least 1000; only then does the Bootstrap perform well.
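A common way to run such a bootstrap test is to shift the sample so the null hypothesis holds, resample, and count how often the resampled discrepancy exceeds the observed one. A minimal sketch under these assumptions:

```python
import random
import statistics

def bootstrap_mean_test(data, mu0, n_boot=1000, rng=None):
    """Two-sided bootstrap p-value for H0: population mean == mu0."""
    rng = rng or random.Random(0)
    n = len(data)
    xbar = statistics.fmean(data)
    obs = abs(xbar - mu0)
    shifted = [x - xbar + mu0 for x in data]  # force H0 to hold exactly
    hits = sum(1 for _ in range(n_boot)
               if abs(statistics.fmean([shifted[rng.randrange(n)]
                                        for _ in range(n)]) - mu0) >= obs)
    return hits / n_boot

random.seed(3)
sample = [random.gauss(5, 1) for _ in range(30)]
p = bootstrap_mean_test(sample, 5.0)  # H0 is true for this sample
```

Repeating this over many simulated samples and recording the fraction of p-values below 0.05 reproduces the Type I error criterion used in the study.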

10.
Grey estimation of the normal variance from random information
Parameter estimation from random information is a basic topic of mathematical statistics, but classical statistical theory and methods are built on the assumption that parameters are precisely known quantities, whereas parameters in real social and economic life involve considerable uncertainty or grey fuzziness of cognition. Building on Neyman's confidence interval theory and the methods of grey systems, this paper studies grey estimation of the normal variance from random sample information, derives the grey-number estimate of the normal variance and its whitening weight function, and illustrates the application with examples.

11.
The quality of estimation of variance components depends on the design used as well as on the unknown values of the variance components. In this article, three designs are compared, namely, the balanced, staggered, and inverted nested designs for the three-fold nested random model. The comparison is based on the so-called quantile dispersion graphs using analysis of variance (ANOVA) and maximum likelihood (ML) estimates of the variance components. It is demonstrated that the staggered nested design gives more stable estimates of the variance component for the highest nesting factor than the balanced design. The reverse, however, is true in case of lower nested factors. A comparison between ANOVA and ML estimation of the variance components is also made using each of the aforementioned designs.

12.
Random effects models are considered for count data obtained in a cross or nested classification. The main feature of the proposed models is the use of the additive effects on the original scale in contrast to the commonly used log scale. The rationale behind this approach is given. The estimation of variance components is based on the usual mean square approach. Directly analogous results to those from the analysis of variance models for continuous data are obtained. The usual Poisson dispersion test procedure can be used not only to test for no overall random effects but also to assess the adequacy of the model. Individual variance components can be tested using the usual F-test. To get a reliable estimate, a large number of factor levels seems to be required.

13.
In this paper, maximum likelihood and Bayes estimators of the parameters, reliability, and hazard functions are obtained for the two-parameter bathtub-shaped lifetime distribution when a sample is available under a progressive Type-II censoring scheme. The Markov chain Monte Carlo (MCMC) method is used to compute the Bayes estimates of the model parameters, assuming the parameters have independent gamma priors. A Gibbs-within-Metropolis–Hastings algorithm is applied to generate MCMC samples from the posterior density function. Based on the generated samples, the Bayes estimates and highest posterior density credible intervals of the unknown parameters, as well as of the reliability and hazard functions, are computed. Bayes estimators are obtained under both the balanced squared-error loss and the balanced linear-exponential (BLINEX) loss. Moreover, approximate confidence intervals (CIs) are obtained from the asymptotic normality of the maximum likelihood estimators. Constructing asymptotic CIs for the reliability and hazard functions requires their variances, which are approximated by the delta and Bootstrap methods. Two real data sets are analyzed to demonstrate how the proposed methods can be used in practice.
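The delta-method step can be made concrete with a deliberately simpler lifetime model: for an exponential distribution with rate lam, R(t) = exp(-lam*t), so Var(R_hat) ≈ (dR/dlam)^2 * Var(lam_hat). The numbers below are illustrative assumptions, not estimates from the paper's bathtub-shaped model:

```python
import math

def delta_var_reliability(lam_hat, var_lam, t):
    """Delta-method variance of R(t) = exp(-lam * t) for an exponential
    lifetime (a simple stand-in for the bathtub-shaped distribution):
    Var(R_hat) ~= (dR/dlam)^2 * Var(lam_hat), dR/dlam = -t * exp(-lam*t)."""
    dR = -t * math.exp(-lam_hat * t)
    return dR ** 2 * var_lam

# Approximate 95% CI for R(t) from the asymptotic normality of lam_hat.
lam_hat, var_lam, t = 1.0, 0.04, 1.0
r_hat = math.exp(-lam_hat * t)
se = math.sqrt(delta_var_reliability(lam_hat, var_lam, t))
ci = (r_hat - 1.96 * se, r_hat + 1.96 * se)
```

The bootstrap alternative mentioned in the abstract would instead resample, re-fit lam, recompute R(t) on each replicate, and take the spread of those replicates.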

14.
Geometric aspects of linear model theory are surveyed as they bear on mean estimation, or variance–covariance component estimation. It is outlined that notions associated with linear subspaces suffice for those of the customary procedures which are solely based on linear, or multilinear, algebra. While conceptually simple, these methods do not always respect convexity constraints which naturally arise in variance component estimation.

Previous work on negative estimates of variance is reviewed, followed by a more detailed study of the non-negative definite analogue of the MINQUE procedure. Some characterizations are proposed which are based on convex duality theory. Optimal estimators now correspond to (non-linear) projections onto closed convex cones; they are easy to visualise but hard to compute. No ultimate solution can be recommended; instead the paper concludes with a list of open problems.

15.
This article studies the hypothesis testing and interval estimation for the among-group variance component in unbalanced heteroscedastic one-fold nested design. Based on the concepts of generalized p-value and generalized confidence interval, tests and confidence intervals for the among-group variance component are developed. Furthermore, some simulation results are presented to compare the performance of the proposed approach with those of existing approaches. It is found that the proposed approach and one of the existing approaches can maintain the nominal confidence level across a wide array of scenarios, and therefore are recommended to use in practical problems. Finally, a real example is illustrated.

16.
In this paper, a new small domain estimator for area-level data is proposed. The proposed estimator is driven by a real problem of estimating the mean price of habitation transaction at a regional level in a European country, using data collected from a longitudinal survey conducted by a national statistical office. At the desired level of inference, it is not possible to provide accurate direct estimates because the sample sizes in these domains are very small. An area-level model with a heterogeneous covariance structure of random effects assists the proposed combined estimator. This model is an extension of a model due to Fay and Herriot [5], but it integrates information across domains and over several periods of time. In addition, a modified method of estimation of variance components for time-series and cross-sectional area-level models is proposed by including the design weights. A Monte Carlo simulation, based on real data, is conducted to investigate the performance of the proposed estimators in comparison with other estimators frequently used in small area estimation problems. In particular, we compare the performance of these estimators with the estimator based on the Rao–Yu model [23]. The simulation study also assesses the performance of the modified variance component estimators in comparison with the traditional ANOVA method. Simulation results show that the estimators proposed perform better than the other estimators in terms of both precision and bias.

17.
Estimation of the single-index model with a discontinuous unknown link function is considered in this paper. The existing refined minimum average variance estimation (rMAVE) method can estimate the single-index parameter and the unknown link function simultaneously by minimising the average pointwise conditional variance, where the conditional variance is estimated by a local linear fit with a centred kernel function. When there are jumps in the link function, large biases can appear around the jumps. For this reason, we embed a jump-preserving technique in the rMAVE method and propose an adaptive jump-preserving estimation procedure for the single-index model. Concretely, the conditional variance is taken from whichever of the local linear fits with centred, left-sided, and right-sided kernel functions has the minimum weighted residual mean squares. The resulting estimators preserve the jumps well and also give smooth estimates of the continuous parts. Asymptotic properties are established under some mild conditions. Simulations and real data analysis show the proposed method works well.

18.
Bootstrap methods are proposed for estimating sampling distributions and associated statistics for regression parameters in multivariate survival data. We use an Independence Working Model (IWM) approach, fitting margins independently, to obtain consistent estimates of the parameters in the marginal models. Resampling procedures, however, are applied to an appropriate joint distribution to estimate covariance matrices, make bias corrections, and construct confidence intervals. The proposed methods allow for fixed or random explanatory variables, the latter case using extensions of existing resampling schemes (Loughin, 1995), and they permit the possibility of random censoring. An application is shown for the viral positivity time data previously analyzed by Wei, Lin, and Weissfeld (1989). A simulation study of small-sample properties shows that the proposed bootstrap procedures provide substantial improvements in variance estimation over the robust variance estimator commonly used with the IWM. This revised version was published online in July 2006 with corrections to the Cover Date.

19.
A systematic procedure for the derivation of linearized variables for the estimation of sampling errors of complex nonlinear statistics involved in the analysis of poverty and income inequality is developed. The linearized variable extends the use of standard variance estimation formulae, developed for linear statistics such as sample aggregates, to nonlinear statistics. The context is that of cross-sectional samples of complex design and reasonably large size, as typically used in population-based surveys. Results of application of the procedure to a wide range of poverty and inequality measures are presented. A standardized software for the purpose has been developed and can be provided to interested users on request. Procedures are provided for the estimation of the design effect and its decomposition into the contribution of unequal sample weights and of other design complexities such as clustering and stratification. The consequence of treating a complex statistic as a simple ratio in estimating its sampling error is also quantified. The second theme of the paper is to compare the linearization approach with an alternative approach based on the concept of replication, namely the Jackknife repeated replication (JRR) method. The basis and application of the JRR method is described, the exposition paralleling that of the linearization method but in somewhat less detail. Based on data from an actual national survey, estimates of standard errors and design effects from the two methods are analysed and compared. The numerical results confirm that the two alternative approaches generally give very similar results, though notable differences can exist for certain statistics. Relative advantages and limitations of the approaches are identified.
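For a ratio estimator, the two approaches compared above reduce to a few lines each, and on well-behaved data they produce nearly identical variance estimates. This sketch assumes simple random sampling with no weights, clustering, or stratification:

```python
import random

def ratio_linearized_var(y, x):
    """Variance of the ratio R = sum(y)/sum(x) via Taylor linearization:
    z_i = (y_i - R * x_i) / mean(x), then the SRS variance of mean(z)."""
    n = len(y)
    X, Y = sum(x), sum(y)
    R = Y / X
    xbar = X / n
    z = [(yi - R * xi) / xbar for yi, xi in zip(y, x)]
    zbar = sum(z) / n
    s2 = sum((zi - zbar) ** 2 for zi in z) / (n - 1)
    return s2 / n

def ratio_jrr_var(y, x):
    """Delete-one jackknife (JRR with n groups) variance of the same ratio."""
    n = len(y)
    Y, X = sum(y), sum(x)
    reps = [(Y - y[i]) / (X - x[i]) for i in range(n)]
    rbar = sum(reps) / n
    return (n - 1) / n * sum((r - rbar) ** 2 for r in reps)

rng = random.Random(5)
x = [rng.uniform(1, 3) for _ in range(50)]
y = [2 * xi + rng.gauss(0, 0.5) for xi in x]
v_lin = ratio_linearized_var(y, x)
v_jrr = ratio_jrr_var(y, x)
```

The close agreement of v_lin and v_jrr on data like this mirrors the paper's finding that the two approaches generally give very similar results.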
