Similar Documents
Found 20 similar documents (search time: 78 ms)
1.
V. Sample Size Estimation. Estimating the sample size for a two-stage sampling survey involves the following components: 1. First-stage sample size. The number of first-stage sample units depends on the ratio of survey costs between first- and second-stage units and on the required precision. In general, with the number of second-stage sample units already fixed, more first-stage sample units means higher precision but also higher cost. If all first-stage units are taken as first-stage sample units, two-stage sampling reduces to one-stage stratified sampling; this improves precision, but both cost and operational difficulty increase.
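The cost-precision tradeoff described above can be sketched numerically. The sketch below uses the simplified textbook variance expression for two-stage sampling with equal-size first-stage units; the variance components and per-unit costs are hypothetical.

```python
# Cost/precision tradeoff in two-stage sampling (equal-size PSUs,
# simplified variance formula). All numbers are hypothetical.

def variance(n, m, s1_sq, s2_sq):
    """Approximate variance of the sample mean: between-PSU term + within-PSU term."""
    return s1_sq / n + s2_sq / (n * m)

def cost(n, m, c1, c2):
    """Total survey cost: c1 per first-stage unit, c2 per second-stage unit."""
    return n * c1 + n * m * c2

s1_sq, s2_sq = 4.0, 9.0   # hypothetical between/within variance components
c1, c2 = 100.0, 5.0       # hypothetical per-unit costs

# With the second-stage size m fixed, more first-stage units n means
# lower variance but higher cost -- exactly the tradeoff in the text.
m = 10
v_small, v_large = variance(5, m, s1_sq, s2_sq), variance(20, m, s1_sq, s2_sq)
cost_small, cost_large = cost(5, m, c1, c2), cost(20, m, c1, c2)
```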

2.
王国明 《统计研究》1989,6(6):19-25
I. The sampling procedure of the National Bureau of Statistics' national crop-yield sample survey. Step 1: taking each province-level administrative region as the domain, all counties within the province form a sampling population, from which about 35% of the counties are selected. Step 2: within each selected county, all administrative villages form a sampling subpopulation, from which 15–20 villages are selected. Step 3: within each selected village, crop plots are selected. Each of these steps uses systematic sampling after ordering the units by a relevant auxiliary characteristic. Taking the selection of counties within a province as an example, the concrete steps are as follows: 1. Let X_i and S_i denote county i's average yield per mu over the previous three years and its sown area, respectively. Sort all the counties by the size of X_i, so that after sorting X_i ≤ X_j (i > j). …
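The "rank by auxiliary variable, then select at equal intervals" step above can be sketched as follows. The county list, the yield values, and the random start are all illustrative; only the 35% sampling fraction comes from the text.

```python
import random

# Illustrative systematic selection after ordering by an auxiliary variable:
# counties are ranked by their previous 3-year average yield per mu, then
# selected at a fixed interval from a random start. Data are hypothetical.
random.seed(0)
counties = [{"id": i, "avg_yield": random.uniform(200, 600)} for i in range(100)]

frac = 0.35                       # roughly 35% of counties, as in the text
n = round(len(counties) * frac)
ordered = sorted(counties, key=lambda c: c["avg_yield"])

k = len(ordered) / n              # sampling interval
start = random.uniform(0, k)      # random start within the first interval
sample = [ordered[int(start + j * k)] for j in range(n)]
```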

3.
李腊生 《统计研究》2002,19(10):37-40
I. Introduction. In economic decision-making, the time value of money is something decision makers must take into account. Existing research on time value shows that identical monetary gains or costs have different values in different periods; the value difference arising from this difference in timing constitutes the time-value component of a decision. To compare and aggregate monetary gains or costs across periods, the usual approach is to discount future amounts so that amounts from all periods are expressed in present-value terms. Theoretically, the present value of an amount in a future period is:

V_0 = k_t / [(1 + R_1)(1 + R_2) … (1 + R_t)]   (1)

where R_i is the discount rate in period i and k_t is the monetary amount in period t. Since in real economic activity R_i (i = 1, 2, …, t) …
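Formula (1) is straightforward to evaluate. The cash amount and per-period discount rates below are hypothetical.

```python
# Discounting a period-t amount back to present value, as in formula (1):
# V0 = k_t / ((1+R_1)(1+R_2)...(1+R_t)). Numbers are hypothetical.

def present_value(k_t, rates):
    v = k_t
    for r in rates:
        v /= (1.0 + r)
    return v

v0 = present_value(1000.0, [0.05, 0.05])   # 1000 due in 2 periods at 5% per period
```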

4.
I. Introduction to ratio estimation in sampling. In sample-based estimation, the quantity to be estimated from a simple random sample is often the ratio of two variables, each of which varies from unit to unit. Estimating the ratio of two variables arises frequently when the sampling units are groups or clusters of individuals and we want the mean per individual for the population. Suppose the population consists of N units (individuals) …
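A minimal sketch of the ratio estimate from a simple random sample; the data and the known auxiliary total are hypothetical.

```python
# Ratio estimate R = ybar/xbar from a simple random sample; data are
# hypothetical (e.g., y = cluster expenditure, x = cluster size).

y = [12.0, 9.5, 15.2, 8.8, 11.4]
x = [4, 3, 5, 3, 4]

r_hat = sum(y) / sum(x)          # estimated ratio of the two variables

# If the population total X of the auxiliary variable is known, the
# ratio estimator of the population total of y is r_hat * X.
X_total = 3800                    # hypothetical known population total of x
y_total_hat = r_hat * X_total
```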

5.
For the problem of balanced two-level sample rotation in monthly surveys under two-stage sampling, this paper constructs a new balanced two-level rotation pattern. For monthly two-stage surveys in which the first-stage units are large, the pattern rotates first-stage units on an 8-4-8(16) scheme and second-stage units on a 2-10-2(4) scheme.
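One way to read an a-b-a(c) pattern is sketched below: a unit is surveyed for a consecutive months, rests for b months, then returns for another a months, spending c = 2a months in sample overall. The helper function and the start month are illustrative, not the paper's notation.

```python
# Hedged sketch of an "in-out-in" rotation pattern written a-b-a(c).

def rotation_months(start, a, b):
    """Calendar months (relative to `start`) in which a unit is surveyed
    under an a-b-a pattern: a months in, b months out, a months in."""
    first = list(range(start, start + a))
    second = list(range(start + a + b, start + a + b + a))
    return first + second

psu_months = rotation_months(1, 8, 4)    # first-stage units: 8-4-8(16)
ssu_months = rotation_months(1, 2, 10)   # second-stage units: 2-10-2(4)
```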

6.
A sampling survey selects part of the units from the population under study according to the principle of randomness and uses the indicators obtained from the survey to infer the corresponding population indicators. Because sampling surveys save time and labor while still allowing reliable estimates of the population, they have been widely applied in socio-economic statistical surveys in recent years. In general, a sampling survey goes through three stages: sampling design, surveying the sample units, and sampling inference. This installment gives a brief introduction to sampling design and sampling inference. I. Sampling design. Sampling design is the most important part of a sampling survey, since it determines how well the entire sampling exercise performs. It can generally proceed in the following steps: 1. Compiling the sampling frame. To compile the sampling frame, one should first determine the scope of the survey population …

7.
Two-stage sampling first divides the population of N units into a number of groups, with group i containing m_i units. The first step randomly selects r of these groups; the second step randomly selects units from each of the r selected groups, and together these units constitute the sample.
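The two-step draw can be sketched directly. Group counts and sizes are hypothetical, and for simplicity every selected group contributes the same number of second-stage units.

```python
import random

# Minimal two-stage draw: divide the population into groups, select r
# groups at random, then subsample units within each selected group.

random.seed(1)
groups = {g: [f"unit-{g}-{u}" for u in range(20)] for g in range(10)}

r = 3                                     # first stage: 3 of the 10 groups
chosen_groups = random.sample(sorted(groups), r)

m = 5                                     # second stage: 5 units per chosen group
sample = [u for g in chosen_groups for u in random.sample(groups[g], m)]
```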

8.
In sampling surveys one must sometimes estimate, at fixed points in time, population parameters that change over time. If the value of a population unit at time h is correlated with the value of the same unit at the next time h+1, the information contained in earlier samples can be used to improve the estimates of the current population parameters. Partial-replacement (rotation) sampling drops part of the previous sample's units at each round while retaining the rest, replacing the dropped units with new ones. Retaining part of the units at each round preserves continuity and reduces cost, while the newly added units reflect changes in the population. The method is especially suitable for regularly published statistics. This paper studies the principle of rotation sampling and its parameter estimation. I. Sampling at successive time points. Let the population mean at time h be Ȳ_h and its variance be …

9.
VI. Concrete selection methods for each stage of the sample. 1. Selecting first-stage sample units. Step 1: rank all first-stage units in the sampling population, other than provincial capitals, autonomous-region capitals, and cities with independent planning status, in descending order of the previous year's total sales, and number them. Step 2: compute the selection interval, i.e., the class width K = N′/n′ (K being the integer closest to N′/n′), where N′ is the total number of first-stage units in the population excluding provincial capitals, autonomous-region capitals, and cities with independent planning status, and n′ likewise is …

10.
An Extension of the Hansen–Hurwitz Estimator under MPPS Sampling
I. The problem. Generally speaking, in sampling surveys we try to make full use of known auxiliary information to improve the precision of estimates. Consider a finite population of N units; let y denote the study variable and x a known, positive auxiliary variable that is highly correlated with y, with y_i and x_i denoting the study and auxiliary values of unit i (i = 1, …, N). The auxiliary variable x can then be exploited already at the design stage. For example, in with-replacement sampling we take p_i as the probability that population unit i is selected (i = 1, …, N), where p_i is determined by the known auxiliary value x_i and satisfies p_i > 0 and ∑_{i=1}^{N} p_i = 1. In such with-replacement unequal-probability sampling, if each …
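A sketch of with-replacement unequal-probability (PPS) sampling with the Hansen-Hurwitz estimator t̂ = (1/n) Σ y_i/p_i, where p_i ∝ x_i; the population values are hypothetical.

```python
import random

# PPS-with-replacement sampling and the Hansen-Hurwitz estimator of the
# population total of y. Population data are hypothetical.

random.seed(2)
x = [10, 20, 30, 40]                 # known, positive auxiliary values
y = [12.0, 19.0, 33.0, 41.0]         # study variable (correlated with x)

p = [xi / sum(x) for xi in x]        # selection probabilities, sum to 1

n = 500
draws = random.choices(range(len(x)), weights=p, k=n)   # with replacement
t_hh = sum(y[i] / p[i] for i in draws) / n              # unbiased for sum(y) = 105
```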

11.
The balanced half-sample and jackknife variance estimation techniques are used to estimate the variance of the combined ratio estimate. An empirical sampling study is conducted using computer-generated populations to investigate the variance, bias and mean square error of these variance estimators and results are compared to theoretical results derived elsewhere for the linear case. Results indicate that either the balanced half-sample or jackknife method may be used effectively for estimating the variance of the combined ratio estimate.
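The delete-one jackknife for a ratio estimate can be sketched as follows. For brevity this is the unstratified case (the combined ratio estimate jackknifes PSUs within strata in the same fashion), and the data are hypothetical.

```python
# Delete-one jackknife variance estimate for a ratio estimate r = ybar/xbar.
# Data are hypothetical.

y = [3.1, 2.9, 3.6, 2.7, 3.3, 3.0, 3.4, 2.8]
x = [1.0, 0.9, 1.2, 0.8, 1.1, 1.0, 1.1, 0.9]
n = len(y)

r_full = sum(y) / sum(x)

# Leave-one-out replicates of the ratio
r_jack = [(sum(y) - y[i]) / (sum(x) - x[i]) for i in range(n)]
r_bar = sum(r_jack) / n
var_jack = (n - 1) / n * sum((ri - r_bar) ** 2 for ri in r_jack)
```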

12.
The properties of a method of estimating the ratio of parameters for ordered categorical response regression models are discussed. If the link function relating the response variable to the linear combination of covariates is unknown then it is only possible to estimate the ratio of regression parameters. This ratio of parameters has a substitutability or relative importance interpretation.

The maximum likelihood estimate of the ratio of parameters, assuming a logistic function (McCullagh, 1980), is found to have very small bias for a wide variety of true link functions. Further, it is shown using Monte Carlo simulations that this maximum likelihood estimate has good coverage properties, even if the link function is incorrectly specified. It is demonstrated that combining adjacent categories to make the response binary can result in an analysis which is appreciably less efficient. The size of the efficiency loss depends on, among other factors, the marginal distribution in the ordered categories.

13.
Under ranked set sampling, an improved ratio estimation model for the population mean is constructed using the median of the auxiliary variable. The bias and mean squared error of this ratio estimator are analyzed and compared with the ratio estimate under simple random sampling, and the improved ratio estimator is shown to have smaller mean squared error. An empirical analysis taking crop sown area and yield as the study variables shows that the ratio estimation method based on ranked set samples and the median of the auxiliary variable can effectively improve estimation precision, verifying the feasibility of the construction.

14.
The performance of the balanced half-sample, jackknife and linearization methods for estimating the variance of the combined ratio estimate is studied by means of a computer simulation using artificially generated non-normally distributed populations.

The results of this investigation demonstrate that the variance estimates for the combined ratio estimate may be highly biased and unstable when the underlying distributions are non-normal. This is particularly true when the number of observations available from each stratum is small. The jackknife …

15.
Bayesian marginal inference via candidate's formula
Computing marginal probabilities is an important and fundamental issue in Bayesian inference. We present a simple method which arises from a likelihood identity for computation. The likelihood identity, called Candidate's formula, sets the marginal probability as a ratio of the prior likelihood to the posterior density. Based on Markov chain Monte Carlo output simulated from the posterior distribution, a nonparametric kernel estimate is used to estimate the posterior density contained in that ratio. This derived nonparametric Candidate's estimate requires only one evaluation of the posterior density estimate at a point. The optimal point for such evaluation can be chosen to minimize the expected mean square relative error. The results show that the best point is not necessarily the posterior mode, but rather a point compromising between high density and low Hessian. For high dimensional problems, we introduce a variance reduction approach to ease the tension caused by data sparseness. A simulation study is presented.
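Candidate's formula m(y) = p(θ)·p(y|θ) / p(θ|y) holds at any point θ. The sketch below uses a conjugate normal model so the posterior density is available in closed form (in MCMC practice, a kernel density estimate of the posterior would stand in for it); all parameter values are hypothetical.

```python
import math

# Candidate's formula for the marginal likelihood, illustrated with a
# conjugate normal model: prior theta ~ N(mu0, tau^2), data y | theta ~
# N(theta, sigma^2), so the posterior is N(mu_n, sd_n^2) in closed form.

def norm_pdf(z, mu, sd):
    return math.exp(-0.5 * ((z - mu) / sd) ** 2) / (sd * math.sqrt(2 * math.pi))

mu0, tau = 0.0, 2.0
sigma = 1.0
y_obs = 1.5

prec = 1 / tau**2 + 1 / sigma**2
mu_n = (mu0 / tau**2 + y_obs / sigma**2) / prec
sd_n = math.sqrt(1 / prec)

theta = 0.7        # any evaluation point works; the identity holds at all theta
marginal = (norm_pdf(theta, mu0, tau) * norm_pdf(y_obs, theta, sigma)
            / norm_pdf(theta, mu_n, sd_n))

# Analytic check: the marginal of y here is N(mu0, sigma^2 + tau^2)
marginal_exact = norm_pdf(y_obs, mu0, math.sqrt(sigma**2 + tau**2))
```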

16.
One of the important theoretical developments in successive sampling has been to provide an optimum estimate by combining two independent estimates (i) a double-sampling regression estimate from the matched portion of the sample using one auxiliary variable with (ii) a mean per unit estimate based on the unmatched portion of the sample. Theory has been generalized in the present paper to provide the optimum estimate by combining a double-sampling multivariate ratio or regression estimate using p auxiliary variables (p≥1) from the matched portion of the sample with a mean per unit estimate from the unmatched portion of the sample. Results have been presented for the more general and practical case when the samples on the two occasions are of unequal size.
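Combining two independent estimates with inverse-variance weights, as in the optimum estimate described above, can be sketched as follows; the estimates and their variances are hypothetical.

```python
# Optimal combination of two independent estimates of the current-occasion
# mean: t1 from the matched portion (regression estimate), t2 from the
# unmatched portion (mean per unit). Values are hypothetical.

t1, v1 = 52.3, 1.8     # regression estimate and its variance
t2, v2 = 54.1, 4.0     # mean-per-unit estimate and its variance

w = v2 / (v1 + v2)                 # optimal weight on t1 (independence assumed)
t_opt = w * t1 + (1 - w) * t2
v_opt = v1 * v2 / (v1 + v2)        # variance of the combined estimate
```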

17.
Summary. The Cox proportional hazards model, which is widely used for the analysis of treatment and prognostic effects with censored survival data, makes the assumption that the hazard ratio is constant over time. Nonparametric estimators have been developed for an extended model in which the hazard ratio is allowed to change over time. Estimators based on residuals are appealing as they are easy to use and relate in a simple way to the more restricted Cox model estimator. After fitting a Cox model and calculating the residuals, one can obtain a crude estimate of the time-varying coefficients by adding a smooth of the residuals to the initial (constant) estimate. Treating the crude estimate as the fit, one can re-estimate the residuals. Iteration leads to consistent estimation of the nonparametric time-varying coefficients. This approach leads to clear guidelines for residual analysis in applications. The results are illustrated by an analysis of the Medical Research Council's myeloma trials, and by simulation.

18.
The authors examine the robustness of empirical likelihood ratio (ELR) confidence intervals for the mean and M‐estimate of location. They show that the ELR interval for the mean has an asymptotic breakdown point of zero. They also give a formula for computing the breakdown point of the ELR interval for M‐estimate. Through a numerical study, they further examine the relative advantages of the ELR interval to the commonly used confidence intervals based on the asymptotic distribution of the M‐estimate.

19.
We developed methods for estimating the causal risk difference and causal risk ratio in randomized trials with noncompliance. The developed estimator is unbiased under the assumption that biases due to noncompliance are identical between both treatment arms. The biases are defined as the difference or ratio between the expectations of potential outcomes for a group that received the test treatment and that for the control group in each randomly assigned group. The instrumental variable estimator yields an unbiased estimate under a sharp null hypothesis but may yield a biased estimate under a non-null hypothesis; the bias of the developed estimator, by contrast, does not depend on whether this hypothesis holds. Thus the estimate of the causal effect from the developed estimator may have a smaller bias than that from the instrumental variable estimator when a treatment effect exists. There is not yet a standard method for coping with noncompliance, so it is important to evaluate estimates under different assumptions; the developed estimator can serve this purpose. An application to a field trial for coronary heart disease is provided.

20.
We have observations for a t distribution with unknown mean, variance, and degrees of freedom, each of which we wish to estimate. The major problem lies in the estimate of the degrees of freedom. We show that a relatively efficient yet very simple estimator is a given function of the ratio of percentile estimates. We derive the appropriate estimator, provide equations for transformation and standard errors, contrast this with other estimators, and give examples.


Copyright©北京勤云科技发展有限公司  京ICP备09084417号