19 similar documents found (search time: 203 ms)
1.
Abstract: The concept of "property income" put forward in the report to the 17th National Congress has attracted wide attention. The National Bureau of Statistics has defined the concept and has classified and measured household income through its Urban Household Survey. However, China's notion of "property income" conflicts substantially with its treatment in the United Nations System of National Accounts (SNA), so the resulting classification and measurement of household income also differ markedly from practice in developed countries such as the United States and Canada. Drawing on the SNA's account structure and economic logic, this paper analyzes why rental income is not in fact property income and, on that basis, proposes adjustments to China's classification of household income types.
2.
3.
4.
Under market-economy conditions, socio-economic statistical data can be viewed as a special kind of product, one with both commodity and non-commodity attributes. The national statistical system is a natural monopoly producing this knowledge- and technology-intensive product. Evaluating the quality of this special product should proceed on three levels. The first is the "raw material" from which it is produced, that is, the raw data in professional terms; the second is the design concept behind the product, namely how well it fits user needs and how advanced it is; the third is the production "process flow". A problem at any link in the production process will affect the product's quality and reliability. From this perspective, I offer some views on how to evaluate the quality and reliability of statistical data.
5.
The economic value of forest carbon sinks is a key indicator for forest carbon-sink production and trading. Based on survey data from carbon-regulated enterprises in Wenzhou, Zhejiang Province, this study applies the contingent valuation method (CVM) and the theory of planned behavior to examine the economic value of forest carbon sinks and its determinants from the perspective of these enterprises' willingness to pay. The results show that an enterprise's willingness to pay for forest carbon sinks unifies two decisions: whether to pay at all, and how much to pay. The individual characteristics of surveyed enterprise managers, enterprise characteristics, climate-change awareness, behavioral attitude, and subjective norms significantly and positively affect whether enterprises are willing to pay; the managers' individual characteristics, enterprise characteristics, awareness of forest carbon sinks, behavioral attitude, perceived behavioral control, and behavioral experience significantly and positively affect the amount enterprises are willing to pay, while contrary behavioral intentions have a significant negative effect. Using the PCE model, the mean willingness to pay, i.e. the economic value of forest carbon sinks, is calculated to be 47.36 yuan per tonne of CO2.
6.
The core task of the "New Mechanism" reform is to establish a long-term fiscal funding mechanism for rural compulsory education, so evaluating its effectiveness should focus on its fiscal-input effects. An empirical study of the 2006 "New Mechanism" reform using a fuzzy regression discontinuity design (FRDD) shows that, in quantity terms, per-capita fiscal spending on compulsory education in rural western regions did rise significantly; in the allocation of fiscal resources, poorer rural areas in the west received more transfer payments while spending on compulsory education in middle-income areas remained low, so the principle of fiscal neutrality was not fully realized; and in the management system, the reform failed to raise county governments' own fiscal effort for compulsory education, producing a significant crowding-out effect and lowering the efficiency of fiscal resource use.
7.
Within the frameworks of the linear Engle-Granger cointegration model and the nonlinear exponential smooth transition autoregressive error correction model (ESTAR-ECM), this paper tests for a long-run equilibrium relationship between China's nominal interest rate and inflation rate. The linear cointegration model fails to capture such a relationship, whereas the ESTAR-ECM model confirms a stable long-run equilibrium between the nominal interest rate and inflation whether the one-year commercial bank lending rate or the 7-day interbank offered rate serves as the proxy for the nominal rate, indicating that the "Fisher effect" holds in China. Since the Fisher coefficient is less than 1, however, only a weak Fisher effect exists between the nominal interest rate and inflation. The implication is that China's interest rate policy has some positive effect on stabilizing inflation expectations and curbing inflation, but because interest rates underreact to inflation, relying solely on interest rate policy to control the currently high inflation is difficult.
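The linear first step of the Engle-Granger approach mentioned above can be sketched with simulated data. This is a minimal illustration, not the paper's ESTAR-ECM model: a random-walk "inflation" series, a "nominal rate" that tracks it with a Fisher coefficient below 1, an OLS regression, and a crude stationarity check on the residuals via their first-order autocorrelation (a full analysis would use an ADF test).

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated data: inflation follows a random walk; the nominal rate tracks it
# with a Fisher coefficient of 0.6 plus stationary noise (a "weak" effect).
n = 500
inflation = np.cumsum(rng.normal(0, 0.1, n))
rate = 2.0 + 0.6 * inflation + rng.normal(0, 0.2, n)

# Step 1: OLS of the nominal rate on inflation.
X = np.column_stack([np.ones(n), inflation])
beta, *_ = np.linalg.lstsq(X, rate, rcond=None)
resid = rate - X @ beta

# Step 2: if the series are cointegrated, the residuals are stationary;
# as a rough check, their first-order autocorrelation stays well below 1.
rho = np.corrcoef(resid[:-1], resid[1:])[0, 1]
print(f"Fisher coefficient estimate: {beta[1]:.2f}, residual AR(1): {rho:.2f}")
```

A Fisher coefficient estimate near the true 0.6, with nearly uncorrelated residuals, reproduces the "weak Fisher effect" pattern the abstract describes.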
8.
By sorting out the divergent definitions of left-behind and migrant children and analyzing their defining features, this paper clarifies the concepts, connotations, and scope of "left-behind" and "migrant" children. It designs registration systems built on household registration (hukou) records, led by the education authorities for left-behind children and by the public security authorities for migrant children, so as to establish a long-term registration mechanism for both groups. For these two special groups, it designs a survey questionnaire consisting of basic and comprehensive information, and constructs a statistical survey system led by the statistical authorities and based on the registration forms and hukou records. Implementing these registration and survey systems would make it possible to grasp the situation and identify problems accurately, providing basic data for academic research on left-behind and migrant children and a necessary evidence base for policy-making by the relevant authorities.
9.
Building on a logical synthesis of poverty-vulnerability and poverty-dynamics theory, and using survey data from key poverty counties in Liaoning Province, this study measures the distribution and degree of poverty vulnerability in those counties with cluster analysis, vulnerability analysis, and sensitivity analysis; from a poverty-dynamics perspective, it also analyzes the frequency, depth, and duration of poverty among poor households between 1997 and 2006. The results show that poverty vulnerability differs across the counties and, shaped by multiple causes of poverty, displays distinct type patterns; the dynamic analysis broadly supports the vulnerability results. The duration analysis differs somewhat from earlier findings: "transient poverty" and "chronic poverty" carry roughly equal weight rather than following a single skewed distribution. A truly efficient poverty-relief system should therefore target persistently poor households precisely, with policies that are correspondingly differentiated and flexible, so that poverty in its most stubborn phase can be tackled quickly and in good time.
10.
In recent years, with the deepening of national economic system reform, many enterprises have adopted new operating forms such as contracting and leasing, and the regulation of employee wages has gradually shifted from plan control toward a more flexible, integrated approach. This paper briefly analyzes and discusses labor statistics in contract and leasing enterprises under the new economic situation. 1. Total employee wages as defined by the national labor statistics system. The total wage bill, an important labor-force statistic, is the basis for measuring workers' living standards and for calculating pensions and related expenses. Under national regulations, total employee wages are the total labor remuneration paid directly by a unit to all of its employees within a given period, covering wages and other remuneration paid to employees under the relevant regulations, regardless of the source of funds…
11.
In this paper, a generalized definition of the well-known economic price index is used to derive some similarities and dissimilarities between statistical and economic price index theory. The relationships between statistical and economic index numbers are analyzed by means of two tests taken from axiomatic (statistical) approaches which can also be formulated in an economic context.
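The axiomatic tests mentioned above can be illustrated on a concrete index formula. The following sketch (illustrative only; the paper does not specify which tests it uses) applies the proportionality test to the Laspeyres price index: scaling all current-period prices by a factor must scale the index by exactly that factor.

```python
import numpy as np

def laspeyres(p0, p1, q0):
    """Laspeyres price index: current prices weighted by base-period quantities."""
    return np.dot(p1, q0) / np.dot(p0, q0)

p0 = np.array([2.0, 5.0, 1.0])   # base-period prices
p1 = np.array([2.2, 5.5, 1.3])   # current-period prices
q0 = np.array([10.0, 4.0, 25.0]) # base-period quantities

index = laspeyres(p0, p1, q0)

# Proportionality test: multiplying all current prices by lam
# must multiply the index by exactly lam.
lam = 1.5
assert np.isclose(laspeyres(p0, lam * p1, q0), lam * index)
print(f"Laspeyres index: {index:.4f}")
```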
12.
The geometric distribution is one of the most important discrete lifetime distributions; the lifetimes of many products (such as switches) can be described by it. Because of its memoryless property, it holds a central place in reliability theory and applied probability models. Parameter estimation for the geometric distribution under complete samples, censored samples, and accelerated life tests has already been studied extensively and has important theoretical and applied value. This paper therefore transforms geometric-distribution problems with incomplete data into exponential-distribution problems and, using known results for the exponential distribution, derives for the first time approximate point estimates of the geometric parameter under missing data and under grouped data. Monte Carlo simulation examples give satisfactory results, showing that the method is feasible.
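The geometric-to-exponential connection underlying the abstract's transformation can be sketched as follows (a minimal illustration, not the paper's estimator): if Y is exponential with rate lam, then floor(Y) is geometric on {0, 1, 2, ...} with ratio q = exp(-lam), so a geometric parameter can be recovered from exponential machinery.

```python
import numpy as np

rng = np.random.default_rng(42)

# If Y ~ Exponential(lam), then X = floor(Y) is geometric on {0, 1, 2, ...}
# with P(X = k) = (1 - q) * q**k, where q = exp(-lam).
lam = 0.5
y = rng.exponential(scale=1.0 / lam, size=100_000)
x = np.floor(y)

# Method-of-moments estimate: E[X] = q / (1 - q)  =>  q = mean / (1 + mean).
m = x.mean()
q_hat = m / (1.0 + m)
print(f"true q = {np.exp(-lam):.4f}, estimated q = {q_hat:.4f}")
```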
13.
This paper applies systems engineering theory and stepwise regression from multivariate statistics to process and screen survey data on infantile rickets, analyzing the main factors behind its incidence and providing an important reference for preventing and treating the disease and for improving diagnostic accuracy.
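The variable-screening step described above can be sketched with forward stepwise selection on synthetic data (illustrative only; the survey variables and the paper's exact entry/exit criteria are not given here): greedily add the candidate factor that most reduces the residual sum of squares, and stop when the relative improvement becomes small.

```python
import numpy as np

rng = np.random.default_rng(5)

# Synthetic data: 6 candidate factors, only x0 and x3 truly matter.
n, p = 200, 6
X = rng.normal(size=(n, p))
y = 2.0 * X[:, 0] - 1.5 * X[:, 3] + rng.normal(0, 0.5, n)

def rss(cols):
    """Residual sum of squares of OLS on the selected columns (plus intercept)."""
    A = np.column_stack([np.ones(n)] + [X[:, j] for j in cols])
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    return np.sum((y - A @ beta) ** 2)

# Forward stepwise selection: add the variable that most reduces RSS,
# stopping when the relative improvement falls below a threshold.
selected, current = [], rss([])
while len(selected) < p:
    best_j = min((j for j in range(p) if j not in selected),
                 key=lambda j: rss(selected + [j]))
    new = rss(selected + [best_j])
    if (current - new) / current < 0.05:   # crude stopping rule
        break
    selected.append(best_j)
    current = new

print("selected factors:", sorted(selected))
```

With strong true effects, the procedure keeps the two informative factors and discards the noise variables.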
14.
15.
We give an overview of several aspects arising in the statistical analysis of extreme risks with actuarial applications in view. In particular it is demonstrated that empirical process theory is a very powerful tool, both for the asymptotic analysis of extreme value estimators and to devise tools for the validation of the underlying model assumptions. While the focus of the paper is on univariate tail risk analysis, the basic ideas of the analysis of the extremal dependence between different risks are also outlined. Here we emphasize some of the limitations of classical multivariate extreme value theory and sketch how a different model proposed by Ledford and Tawn can help to avoid pitfalls. Finally, these theoretical results are used to analyze a data set of large claim sizes from health insurance.
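One standard estimator in the univariate tail risk analysis the abstract refers to is the Hill estimator of the extreme value index. The following sketch (a generic illustration, not the paper's specific procedure) recovers the index of a simulated Pareto tail from the largest order statistics.

```python
import numpy as np

rng = np.random.default_rng(1)

def hill_estimator(data, k):
    """Hill estimator of the extreme value index from the k largest observations."""
    x = np.sort(data)[::-1]            # descending order statistics
    logs = np.log(x[:k]) - np.log(x[k])
    return logs.mean()                 # estimates 1/alpha for a Pareto-type tail

# Pareto sample with tail P(X > x) = x**(-alpha), alpha = 2,
# so the true extreme value index is 1/alpha = 0.5.
alpha = 2.0
u = rng.uniform(size=50_000)
sample = u ** (-1.0 / alpha)           # inverse-CDF sampling

gamma_hat = hill_estimator(sample, k=2_000)
print(f"Hill estimate of 1/alpha: {gamma_hat:.3f} (true value 0.5)")
```

In practice the choice of k involves a bias-variance trade-off, which is one reason validation tools of the kind the paper develops are needed.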
16.
Recursive and en-bloc approaches to signal extraction
Peter Young, Journal of Applied Statistics, 1999, 26(1): 103-128
In the literature on unobservable component models, three main statistical instruments have been used for signal extraction: fixed interval smoothing (FIS), which derives from Kalman's seminal work on optimal state-space filter theory in the time domain; Wiener-Kolmogorov-Whittle optimal signal extraction (OSE) theory, which is normally set in the frequency domain and dominates the field of classical statistics; and regularization, which was developed mainly by numerical analysts but is referred to as 'smoothing' in the statistical literature (such as smoothing splines, kernel smoothers and local regression). Although some minor recognition of the interrelationship between these methods can be discerned from the literature, no clear discussion of their equivalence has appeared. This paper exposes clearly the interrelationships between the three methods; highlights important properties of the smoothing filters used in signal extraction; and stresses the advantages of the FIS algorithms as a practical solution to signal extraction and smoothing problems. It also emphasizes the importance of the classical OSE theory as an analytical tool for obtaining a better understanding of the problem of signal extraction.
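The regularization route among the three instruments listed above can be sketched in a few lines (a generic illustration, not the paper's FIS algorithm): penalized least squares with a second-difference roughness penalty, minimizing ||y - s||^2 + lam * ||D2 s||^2, which has a closed-form solution.

```python
import numpy as np

rng = np.random.default_rng(7)

def smooth(y, lam):
    """Penalized least squares: minimize ||y - s||^2 + lam * ||D2 s||^2,
    where D2 is the second-difference operator. Closed-form solution."""
    n = len(y)
    D = np.diff(np.eye(n), n=2, axis=0)   # (n-2) x n second-difference matrix
    return np.linalg.solve(np.eye(n) + lam * D.T @ D, y)

# Noisy observations of a smooth signal.
t = np.linspace(0, 1, 200)
signal = np.sin(2 * np.pi * t)
y = signal + rng.normal(0, 0.3, t.size)

s = smooth(y, lam=100.0)
err_raw = np.mean((y - signal) ** 2)
err_smooth = np.mean((s - signal) ** 2)
print(f"MSE raw: {err_raw:.4f}, MSE smoothed: {err_smooth:.4f}")
```

The equivalence the paper discusses is that this same smoothed estimate can also be produced recursively by a fixed interval smoother, or characterized in the frequency domain via OSE theory.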
17.
The variance of the sampling distribution of the sample mean is derived for two sampling designs in which a single cluster is randomly drawn from an autocorrelated population. The derivations are motivated by potential applications to statistical quality control, where a "one-cluster" sampling design may often be used because of ease of implementation, and where it is likely that process output is autocorrelated. Scenarios in statistical process control for which either non-overlapping or overlapping clusters are appropriate are described. The sampling design variance under non-overlapping clusters is related to the sampling design variance under overlapping clusters through the use of a circular population.
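The effect of autocorrelation on the variance of a one-cluster mean can be checked numerically. The sketch below (illustrative; the paper's derivations cover general autocorrelated populations, not only AR(1)) compares the textbook formula Var(xbar) = (sigma^2/n) * [1 + 2 * sum_{k=1}^{n-1} (1 - k/n) * rho^k] against simulation for a stationary AR(1) process.

```python
import numpy as np

rng = np.random.default_rng(3)

rho, n, n_reps = 0.6, 10, 20_000
sigma2 = 1.0 / (1.0 - rho ** 2)      # stationary variance of AR(1) with unit shocks

# Theoretical variance of the mean of one cluster of n consecutive observations.
k = np.arange(1, n)
var_theory = sigma2 / n * (1 + 2 * np.sum((1 - k / n) * rho ** k))

# Simulation: draw many independent clusters from the stationary AR(1).
means = np.empty(n_reps)
for r in range(n_reps):
    x = np.empty(n)
    x[0] = rng.normal(0, np.sqrt(sigma2))   # start in stationarity
    for t in range(1, n):
        x[t] = rho * x[t - 1] + rng.normal()
    means[r] = x.mean()

print(f"theory: {var_theory:.4f}, simulation: {means.var():.4f}")
```

With rho = 0.6 the variance of the cluster mean is roughly three times what the i.i.d. formula sigma^2/n would give, which is why ignoring autocorrelation in SPC charts understates sampling variability.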
18.
Journal of Statistical Planning and Inference, 2005, 134(1): 180-193
Likelihood is widely used in statistical applications, both for the full parameter by obvious direct calculation and for component interest parameters by recent asymptotic theory. Often, however, we want more detailed information concerning an inference procedure, information such as say the distribution function of a measure of departure, which would then permit power calculations or a detailed display of p-values for a range of parameter values. We investigate how such distribution function approximations can be obtained from minimal information concerning the likelihood function, a minimum that is often available in many applications. The resulting expressions clearly indicate the source of the various ingredients from likelihood, and they also provide a basis for understanding how nonnormality of the likelihood function affects related p-values. Moreover, they provide the basis for removing a computational singularity that arises near the maximum likelihood value when recently developed significance function formulas are used.
19.
Statistics in epidemiology: the case-control study
Breslow NE, Journal of the American Statistical Association, 1996, 91(433): 14-28
This article presents a general review of the major trends in the conceptualization, development, and success of case-control methods for the study of disease causation and prevention. "Recent work on nested case-control, case-cohort, and two-stage case-control designs demonstrates the continuing impact of statistical thinking on epidemiology. The influence of R. A. Fisher's work on these developments is mentioned wherever possible. His objections to the drawing of causal conclusions from observational data on cigarette smoking and lung cancer are used to introduce the problems of measurement error and confounding bias."